Bitcoin Forum
Pages: « 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 [21] 22 23 24 25 26 »
Author Topic: New demonstration CPU miner available  (Read 385227 times)
ancow
June 09, 2011, 06:16:47 PM
 #401

Quote
I agree with your changes.  I think the default should be the maximum number of CPUs on board.  The default wasn't the SSE2_64?  Hmm, good thing I set that then.
But I don't think I've ever seen VIA PadLock not crash the program on start.  Does it actually have a use?  If not, it could be edited out until then.

I only changed the usage text, no "proper" code. SSE2_64 has been the default on 64-bit Linux since the last commit, so the usage text was wrong; I wrote a patch to amend that.

BTC: 1GAHTMdBN4Yw3PU66sAmUBKSXy2qaq2SF4
xf2_org
June 09, 2011, 06:46:44 PM
 #402

Actually, Con's patch is a bit oversimplified -- you want the number of cores, not the total number of processors (which might include HyperThread siblings).

If you use all cores + HT, your hash performance is slower than with cores alone.

-ck
June 09, 2011, 09:00:01 PM
 #403

Quote from: xf2_org on June 09, 2011, 06:46:44 PM
Actually Con's patch is rather simplified -- you want the number of cores, not the total number of processors (which might include HyperThread siblings).

If you use all cores + HT, then your hash performance is slower than cores alone.



I tested total threads vs total cores and got slightly more with total threads on i7. Plus there is no particularly easy and reliable way to detect cores versus threads. So total "processors" actually generated more in my testing.

Primary developer/maintainer for cgminer and ckpool/ckproxy.
ZERO FEE Pooled mining at ckpool.org 1% Fee Solo mining at solo.ckpool.org
-ck
-ck
June 09, 2011, 09:21:46 PM
 #404

Actually that's not quite true. There was one more complex setup that produced higher throughput. If I set the number of mining threads to the total number of cores only, and then bound each worker thread to all the logical CPUs that shared a cache, the throughput was a bit better again. However, which cores share threads is even harder to detect reliably without digging around in /sys, and the format has changed between kernels. Furthermore, on my i7, the shared caches aren't even sequentially numbered, so binding threads to sequential logical CPUs was worse (e.g. CPUs 0 and 2 share a cache, as do 1 and 3, and so on).

d3m0n1q_733rz
June 10, 2011, 12:40:01 PM
 #405

I believe there is one i7 processor out there that doesn't have a shared cache; each core on it has its own dedicated 2M cache.  However, if you could detect the shared cache, you could also detect the unshared cache and end up with what I suggested above, which I had no idea how to start coding for a similar reason.  Anyhow, if you're worried about kernels putting the /sys information in different locations, it's a simple if-else-else statement that you'll use to find it or not.
But, as I suggested, using two cores that share the same cache, you could have each core perform a different portion of the same work.  When one has completed half of the equations, it can send the work off to the next core for completion and fetch the next work.  With each core doing a specific task, you can simplify the code for unrolled loops and, I believe, pass fewer instructions to the processor.  Fewer instructions generally means less overhead.
So part of the problem is the first getwork: half as many threads will be running until half of the work is completed and passed to the next core.  I don't know a way around it, since I would think the SHA equation can only be done according to the order of operations (parentheses, exponents, multiply/divide...).  Actually, now that I think about it, if a matrix calculation comes into play, that would have its own unique optimizations... but I keep digressing!  Anyway, how to split the work between cores to keep them from doing redundant work is the biggest issue here.

Funroll_Loops, the theoretically quicker breakfast cereal!
Check out http://www.facebook.com/JupiterICT for all of your computing needs.  If you need it, we can get it.  We have solutions for your computing conundrums.  BTC accepted!  12HWUSguWXRCQKfkPeJygVR1ex5wbg3hAq
-ck
June 10, 2011, 12:45:13 PM
 #406

Bouncing work from one CPU to another will decrease throughput a fair amount. The cost of that should not be discounted. Anyway feel free to try...

rocksalt
June 10, 2011, 04:28:05 PM
 #407

Is there a working flag for outputting to a log file when using the Windows binaries?

I've tried
--f
-f
>

with a full path, with just a file name, with and without an extension, and with and without quotes.

I think I've worn my fingers out doing this.

TIPS/Donations: mwahahaha.. not that desperate, just a thank you or a flame please but if you must... 1NTZcWQGfdGang9piBKUv9Z1VZ7x6cTXjV
ancow
June 10, 2011, 05:22:46 PM
 #408

You need to redirect stderr. According to http://www.techtalkz.com/windows-xp/27452-redirect-stdout-stderr-windows-shell.html, "2>" will do that in the Windows shell too, e.g. append "2> miner.log" to your command line.

rocksalt
June 11, 2011, 06:24:28 AM
 #409

I'm now discovering a different issue.

minerd.exe --algo cryptopp_asm32 --s 2 --url http://btcguild.com/ --userpass xxxx:xxx

This runs fine when I try it on deepbit, a local miner and a few others. However, on btcguild I get the following error:

[2011-06-12 10:02:16] 1 miner threads started, using SHA256 'cryptopp_asm32' algorithm.
[2011-06-12 10:02:20] JSON decode failed(1): '[' or '{' expected near '<'
[2011-06-12 10:02:20] json_rpc_call failed, retry after 30 seconds


It's only happening with btcguild, not with any of the other mining pools I tested.

Has anyone come across this before?

Win7
Intel Dual Core
Nvidia GTX470OC

-ck
June 11, 2011, 03:41:35 PM
 #410

Hi Jeff, et al.

I've made some modifications to the output: it now has a total throughput counter (there was confusion over the multiple-threads issue), the output is cleaned up a little, and there's a new solution counter. I also dropped a lot of output when only one thread is in use. Please pull the changes into your tree if you agree with them.

It now looks like this:

[2011-06-12 01:37:26] [Total: 8.40 Mhash/sec] [thread 3: 109989796 hashes, 3075 khash/sec] [Solved: 0]
[2011-06-12 01:37:26] PROOF OF WORK RESULT: true (yay!!!)
[2011-06-12 01:37:47] [Total: 8.45 Mhash/sec] [thread 0: 183024176 hashes, 3090 khash/sec] [Solved: 1]
[2011-06-12 01:37:48] [Total: 9.89 Mhash/sec] [thread 1: 183024176 hashes, 3085 khash/sec] [Solved: 1]
[2011-06-12 01:37:48] [Total: 11.31 Mhash/sec] [thread 2: 183024176 hashes, 3082 khash/sec] [Solved: 1]
[2011-06-12 01:38:27] [Total: 9.72 Mhash/sec] [thread 3: 183316328 hashes, 3019 khash/sec] [Solved: 1]
[2011-06-12 01:38:50] [Total: 9.54 Mhash/sec] [thread 0: 186126280 hashes, 2969 khash/sec] [Solved: 1]
[2011-06-12 01:38:50] [Total: 10.52 Mhash/sec] [thread 1: 186126280 hashes, 2989 khash/sec] [Solved: 1]
[2011-06-12 01:38:51] [Total: 11.50 Mhash/sec] [thread 2: 186126280 hashes, 3007 khash/sec] [Solved: 1]

Thanks.

-ck
June 11, 2011, 04:06:32 PM
 #411

Hmm, perhaps "solved" isn't quite the right word there for accepted blocks.

dserrano5
June 11, 2011, 04:20:36 PM
 #412

As a user I would expect that total Mhash/s were the sum of the khash/s of all threads—in your example it would be always around 12 Mhash/s (since each thread works at a consistent pace of nearly 3000 khash/s). That's not the case, so I guess the algorithm is different.

-ck
June 11, 2011, 04:22:25 PM
 #413

Quote from: dserrano5 on June 11, 2011, 04:20:36 PM
As a user I would expect that total Mhash/s were the sum of the khash/s of all threads—in your example it would be always around 12 Mhash/s (since each thread works at a consistent pace of nearly 3000 khash/s). That's not the case, so I guess the algorithm is different.

That was the miner just starting up. After a while it converges more and more.

-ck
June 12, 2011, 11:42:39 PM
 #414

Now that I've got a meaningful total throughput counter, I can confirm that running number of threads == number of logical processors on the i7 is actually faster than even a carefully bound number of threads == number of physical cores. This means that the default behaviour of minerd with my modifications, which chooses how many threads to start, will give you the highest throughput.

My cumulative changes are here until jgarzik pulls them, if anyone's interested:
https://github.com/ckolivas/cpuminer

hugolp
June 13, 2011, 02:58:04 PM
 #415

I'm getting this error on Ubuntu 10.04 LTS when running autogen.sh:

~/cpuminer$ sh autogen.sh
configure.ac:15: installing `./compile'
configure.ac:4: installing `./config.guess'
configure.ac:4: installing `./config.sub'
configure.ac:6: installing `./install-sh'
configure.ac:6: installing `./missing'
compat/jansson/Makefile.am: installing `./depcomp'
Makefile.am: installing `./INSTALL'
configure.ac:96: error: possibly undefined macro: AC_MSG_ERROR
      If this token and others are legitimate, please use m4_pattern_allow.
      See the Autoconf documentation.

Any suggestion?
RaTTuS
June 13, 2011, 03:19:47 PM
 #416

try :-
./configure

In the Beginning there was CPU , then GPU , then FPGA then ASIC, what next I hear to ask ....

1RaTTuSEN7jJUDiW1EGogHwtek7g9BiEn
hugolp
June 13, 2011, 03:31:25 PM
 #417

Quote from: RaTTuS on June 13, 2011, 03:19:47 PM
try :-
./configure

I actually tried ./configure just for the sake of it and, as expected, it does nothing. autogen.sh is failing.
jgarzik
June 13, 2011, 05:07:09 PM
 #418

Quote from: hugolp on June 13, 2011, 02:58:04 PM
Im getting this error in Ubuntu 10.04 LTS when running autogen.sh:

~/cpuminer$ sh autogen.sh
configure.ac:15: installing `./compile'
configure.ac:4: installing `./config.guess'
configure.ac:4: installing `./config.sub'
configure.ac:6: installing `./install-sh'
configure.ac:6: installing `./missing'
compat/jansson/Makefile.am: installing `./depcomp'
Makefile.am: installing `./INSTALL'
configure.ac:96: error: possibly undefined macro: AC_MSG_ERROR
      If this token and others are legitimate, please use m4_pattern_allow.
      See the Autoconf documentation.

Any suggestion?

Standard advice -- your autotools installation is old or broken.  Use the release tarball.


Jeff Garzik, Bloq CEO, former bitcoin core dev team; opinions are my own.
Visit bloq.com / metronome.io
Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
hugolp
June 13, 2011, 06:30:17 PM
 #419

Quote from: jgarzik on June 13, 2011, 05:07:09 PM
Standard advice -- your autotools installation is old or broken.  Use release tarball.



It's a server with Ubuntu 10.04 LTS. I guess it's "old". I'll try the tarball, thanks.

EDIT: The tarball worked.
c_k
June 14, 2011, 03:03:00 AM
 #420

I can get the 1.0.1 tarball to compile OK; however, the source from GitHub gives the following problem when compiling on Debian 6 (amd64):

Quote
me@machine:~$ git clone git://github.com/jgarzik/cpuminer.git
Cloning into cpuminer...
remote: Counting objects: 633, done.
remote: Compressing objects: 100% (273/273), done.
remote: Total 633 (delta 404), reused 575 (delta 356)
Receiving objects: 100% (633/633), 138.30 KiB | 196 KiB/s, done.
Resolving deltas: 100% (404/404), done.
me@machine:~$ cd cpuminer
me@machine:~/cpuminer$ ./autogen.sh
configure.ac:15: installing `./compile'
configure.ac:4: installing `./config.guess'
configure.ac:4: installing `./config.sub'
configure.ac:6: installing `./install-sh'
configure.ac:6: installing `./missing'
compat/jansson/Makefile.am: installing `./depcomp'
Makefile.am: installing `./INSTALL'
me@machine:~/cpuminer$ CFLAGS="-O3 -Wall -msse2" ./configure
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... no
checking for mawk... mawk
checking whether make sets $(MAKE)... yes
checking whether to enable maintainer-specific portions of Makefiles... no
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for style of include used by make... GNU
checking dependency style of gcc... gcc3
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking whether gcc needs -traditional... no
checking whether gcc and cc understand -c and -o together... yes
checking for ranlib... ranlib
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking syslog.h usability... yes
checking syslog.h presence... yes
checking for syslog.h... yes
checking for working alloca.h... yes
checking for alloca... yes
checking for json_loads in -ljansson... no
checking for pthread_create in -lpthread... yes
checking for yasm... /usr/bin/yasm
checking if yasm version is greater than 1.0.1... no
configure: yasm is required for the sse2_64 algorithm. It will be skipped.
checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for gawk... (cached) mawk
checking for curl-config... /usr/bin/curl-config
checking for the version of libcurl... 7.21.0
checking for libcurl >= version 7.10.1... yes
checking whether libcurl is usable... yes
checking for curl_free... yes
configure: creating ./config.status
config.status: creating Makefile
config.status: creating compat/Makefile
config.status: creating compat/jansson/Makefile
config.status: creating x86_64/Makefile
config.status: creating cpuminer-config.h
config.status: executing depfiles commands
me@machine:~/cpuminer$ make
make  all-recursive
make[1]: Entering directory `/home/me/cpuminer'
Making all in compat
make[2]: Entering directory `/home/me/cpuminer/compat'
Making all in jansson
make[3]: Entering directory `/home/me/cpuminer/compat/jansson'
gcc -DHAVE_CONFIG_H -I. -I../..     -O3 -Wall -msse2 -MT dump.o -MD -MP -MF .deps/dump.Tpo -c -o dump.o dump.c
mv -f .deps/dump.Tpo .deps/dump.Po
gcc -DHAVE_CONFIG_H -I. -I../..     -O3 -Wall -msse2 -MT hashtable.o -MD -MP -MF .deps/hashtable.Tpo -c -o hashtable.o hashtable.c
mv -f .deps/hashtable.Tpo .deps/hashtable.Po
gcc -DHAVE_CONFIG_H -I. -I../..     -O3 -Wall -msse2 -MT load.o -MD -MP -MF .deps/load.Tpo -c -o load.o load.c
mv -f .deps/load.Tpo .deps/load.Po
gcc -DHAVE_CONFIG_H -I. -I../..     -O3 -Wall -msse2 -MT strbuffer.o -MD -MP -MF .deps/strbuffer.Tpo -c -o strbuffer.o strbuffer.c
mv -f .deps/strbuffer.Tpo .deps/strbuffer.Po
gcc -DHAVE_CONFIG_H -I. -I../..     -O3 -Wall -msse2 -MT utf.o -MD -MP -MF .deps/utf.Tpo -c -o utf.o utf.c
mv -f .deps/utf.Tpo .deps/utf.Po
gcc -DHAVE_CONFIG_H -I. -I../..     -O3 -Wall -msse2 -MT value.o -MD -MP -MF .deps/value.Tpo -c -o value.o value.c
mv -f .deps/value.Tpo .deps/value.Po
rm -f libjansson.a
ar cru libjansson.a dump.o hashtable.o load.o strbuffer.o utf.o value.o
ranlib libjansson.a
make[3]: Leaving directory `/home/me/cpuminer/compat/jansson'
make[3]: Entering directory `/home/me/cpuminer/compat'
make[3]: Nothing to be done for `all-am'.
make[3]: Leaving directory `/home/me/cpuminer/compat'
make[2]: Leaving directory `/home/me/cpuminer/compat'
make[2]: Entering directory `/home/me/cpuminer'
gcc -DHAVE_CONFIG_H -I. -pthread -fno-strict-aliasing -I./compat/jansson    -O3 -Wall -msse2 -MT minerd-cpu-miner.o -MD -MP -MF .deps/minerd-cpu-miner.Tpo -c -o minerd-cpu-miner.o `test -f 'cpu-miner.c' || echo './'`cpu-miner.c
cpu-miner.c: In function ‘drop_policy’:
cpu-miner.c:43: error: ‘SCHED_IDLE’ undeclared (first use in this function)
cpu-miner.c:43: error: (Each undeclared identifier is reported only once
cpu-miner.c:43: error: for each function it appears in.)
make[2]: *** [minerd-cpu-miner.o] Error 1
make[2]: Leaving directory `/home/me/cpuminer'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/me/cpuminer'
make: *** [all] Error 2

I get the same error with the code from ckolivas' repo.
