Author Topic: New demonstration CPU miner available  (Read 386311 times)
geekmug (Newbie)
February 07, 2011, 06:37:49 AM  #161

- lfm's byte swap optimization (faster! improves via, cryptopp)

This optimization (__builtin_bswap32) is only available on gcc 4.3 or later, and it's a bit silly because you can get the same optimization from <byteswap.h> via bswap_32() (which is smart enough to use the bswap opcode) on gcc 2.0 or later. AFAIK, as long as you have optimizations turned on, the generated code is identical. I appreciate your work here; I'm using your miner on my servers to mine with their idle cpu-time -- the power is already bought, after all. I've attached a patch if you don't want to trust bswap_32() to be equivalent:

Code:
diff --git a/miner.h b/miner.h
index 539b5d6..eef4ee0 100644
--- a/miner.h
+++ b/miner.h
@@ -1,6 +1,7 @@
 #ifndef __MINER_H__
 #define __MINER_H__
 
+#include <byteswap.h>
 #include <stdbool.h>
 #include <stdint.h>
 #include <sys/time.h>
@@ -24,7 +25,11 @@
 
 static inline uint32_t swab32(uint32_t v)
 {
+#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)
        return __builtin_bswap32(v);
+#else
+       return bswap_32(v);
+#endif
 }
 
 static inline void swap256(void *dest_p, const void *src_p)

roldansu (Newbie)
February 08, 2011, 01:53:35 PM  #162

Hi,

I'm trying to run the miner on my Ubuntu 8.04 (Hardy) system, with an Intel Pentium Dual CPU E2140.

The CFLAGS="-O3 -Wall -msse2" ./configure command seems to work fine, but then after the "make" command I get the following:

make  all-recursive
make[1]: Entering directory `/home/roldansu/Desktop/cpuminer-0.6.1'
Making all in compat
make[2]: Entering directory `/home/roldansu/Desktop/cpuminer-0.6.1/compat'
Making all in jansson
make[3]: Entering directory `/home/roldansu/Desktop/cpuminer-0.6.1/compat/jansson'
make[3]: Nothing to be done for `all'.
make[3]: Leaving directory `/home/roldansu/Desktop/cpuminer-0.6.1/compat/jansson'
make[3]: Entering directory `/home/roldansu/Desktop/cpuminer-0.6.1/compat'
make[3]: Nothing to be done for `all-am'.
make[3]: Leaving directory `/home/roldansu/Desktop/cpuminer-0.6.1/compat'
make[2]: Leaving directory `/home/roldansu/Desktop/cpuminer-0.6.1/compat'
make[2]: Entering directory `/home/roldansu/Desktop/cpuminer-0.6.1'
gcc  -O3 -Wall -msse2 -pthread  -o minerd cpu-miner.o sha256_generic.o sha256_4way.o sha256_via.o sha256_cryptopp.o util.o -lcurl -Wl,-Bsymbolic-functions -lgssapi_krb5 compat/jansson/libjansson.a -lpthread
sha256_via.o: In function `scanhash_via':
sha256_via.c:(.text+0x4c): undefined reference to `__builtin_bswap32'
sha256_via.c:(.text+0x70): undefined reference to `__builtin_bswap32'
sha256_via.c:(.text+0x120): undefined reference to `__builtin_bswap32'
sha256_via.c:(.text+0x12e): undefined reference to `__builtin_bswap32'
sha256_via.c:(.text+0x13c): undefined reference to `__builtin_bswap32'
sha256_via.o:sha256_via.c:(.text+0x14a): more undefined references to `__builtin_bswap32' follow
collect2: ld returned 1 exit status
make[2]: *** [minerd] Error 1
make[2]: Leaving directory `/home/roldansu/Desktop/cpuminer-0.6.1'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/roldansu/Desktop/cpuminer-0.6.1'
make: *** [all] Error 2

My programming knowledge is almost zero, so I'd appreciate some advice on this.

Thanks
Cerebrum (Newbie)
February 09, 2011, 06:52:16 PM  #163

So, I was looking at the processor meter in my machine while the miner was running, and noticed something strange: Every time a work-unit is completed, that processor sits idle for a few seconds.

I looked through the code to see what was causing this, and I found that the I/O for the getwork() calls is done inside the mining threads while the main thread sits idle and waits for the miners to exit.

To fix this, I implemented a blocking bounded queue and had the main thread constantly do getwork() commands while the mining threads crunch the resulting work. This seems to have removed the potentially long wait for a response from the RPC server, especially when doing pooled mining from a remote server.
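
The queue itself is just a mutex and two condition variables. Here's a minimal sketch of the idea (illustrative names, not my exact code):

Code:
#include <pthread.h>

#define QUEUE_CAP 4

struct work;                            /* work unit from getwork() */

struct work_queue {
        struct work *items[QUEUE_CAP];
        int head, tail, count;
        pthread_mutex_t lock;           /* PTHREAD_MUTEX_INITIALIZER */
        pthread_cond_t not_empty;       /* PTHREAD_COND_INITIALIZER */
        pthread_cond_t not_full;        /* PTHREAD_COND_INITIALIZER */
};

/* Producer (main thread): blocks while the queue is full. */
static void queue_put(struct work_queue *q, struct work *w)
{
        pthread_mutex_lock(&q->lock);
        while (q->count == QUEUE_CAP)
                pthread_cond_wait(&q->not_full, &q->lock);
        q->items[q->tail] = w;
        q->tail = (q->tail + 1) % QUEUE_CAP;
        q->count++;
        pthread_cond_signal(&q->not_empty);
        pthread_mutex_unlock(&q->lock);
}

/* Consumer (miner thread): blocks while the queue is empty. */
static struct work *queue_get(struct work_queue *q)
{
        struct work *w;

        pthread_mutex_lock(&q->lock);
        while (q->count == 0)
                pthread_cond_wait(&q->not_empty, &q->lock);
        w = q->items[q->head];
        q->head = (q->head + 1) % QUEUE_CAP;
        q->count--;
        pthread_cond_signal(&q->not_full);
        pthread_mutex_unlock(&q->lock);
        return w;
}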

However, something strange is occurring. When I set the queue up such that each mining thread has a small work queue of length 1, there is no problem and the system works as intended. However, when the queue length goes above one for each queue, all the results get a response to the effect that they are invalid. I have verified that the correct data is getting into the work structures, so I'm asking in here because I don't know much about how this RPC system works.

Is there some reason that the same thread can't do this queueing? For instance, some kind of an ID for the block that gets overwritten if a new getwork() request is made? I'm doing this while mining in slush's pool, so is that causing a problem?
slush (Legendary)
February 10, 2011, 01:40:07 AM  #164

However, something strange is occurring. When I set the queue up such that each mining thread has a small work queue of length 1, there is no problem and the system works as intended. However, when the queue length goes above one for each queue, all the results get a response to the effect that they are invalid.

How old are these calls? Miners need jobs that are as fresh as possible; crunching stale jobs leads to rejection by the pool server. The usual ask rate is 5 seconds, which keeps the overhead modest.
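
In code terms, the usual guard is to stamp each work unit when it is fetched and discard it once it is older than the pool's ask window. A minimal sketch (field names are illustrative):

Code:
#include <time.h>

#define MAX_WORK_AGE 60                 /* seconds; tune to the pool */

struct work {
        unsigned char data[128];
        time_t fetched_at;              /* set to time(NULL) right after getwork() */
};

/* Returns nonzero if the job is too old to be worth hashing. */
static int work_is_stale(const struct work *w)
{
        return time(NULL) - w->fetched_at > MAX_WORK_AGE;
}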

Cerebrum (Newbie)
February 10, 2011, 01:56:15 AM  #165

Well, the queue length I was testing was three. My processor gets through a getwork() in about 5 seconds per core, so I guess I was falling outside that window for your server, and this can't be solved in the way I was thinking. I'll have to think through how to do it without a queue and without taking a long time. Maybe having the miner threads ask the main thread to get the next work unit ready just before they crunch an existing one would be a good idea. I'll look into doing that.

Also, it looks like an entire new curl object is created every time you want to get more work. It might be a good idea to re-use the existing object, since I'm sure that wastes some resources and time; all requests for more work have the same form anyway. I'll see if I can get that working as well, and then I'll submit my changes here. I'm not so sure how to use git, though, so if anyone could link me a good tutorial on making those patch files that seem so popular, I'd be grateful.
jgarzik (OP, Legendary)
February 10, 2011, 05:55:24 AM  #166

Just committed two changes to git, based on suggestions here:
  • Re-use the CURL object.  This means persistent HTTP/1.1 connections, if your pool server supports it!  (A minimal sketch of the pattern follows below.)
  • Use bswap_32() if the compiler does not provide the intrinsic.  Useful for older OSes.
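
The pattern is simply to create the handle once per thread and call curl_easy_perform() on it repeatedly; a hand-wavy sketch of the idea, not the exact commit (error handling elided):

Code:
#include <curl/curl.h>

/* Create one handle per miner thread, once. */
static CURL *rpc_handle_init(const char *url)
{
        CURL *curl = curl_easy_init();

        if (curl) {
                curl_easy_setopt(curl, CURLOPT_URL, url);
                curl_easy_setopt(curl, CURLOPT_POST, 1L);
        }
        return curl;
}

/* Every request re-uses the same handle: libcurl keeps the HTTP/1.1
 * connection and its DNS cache alive between calls. */
static int rpc_call(CURL *curl, const char *json_req)
{
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, json_req);
        return curl_easy_perform(curl) == CURLE_OK ? 0 : -1;
}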

jgarzik (OP, Legendary)
February 10, 2011, 06:01:12 AM  #167

Well, the queue length I was testing was three. My processor gets through a getwork() in about 5 seconds per core, so I guess I was falling outside that window for your server, and this can't be solved in the way I was thinking. I'll have to think through how to do it without a queue and without taking a long time. Maybe having the miner threads ask the main thread to get the next work unit ready just before they crunch an existing one would be a good idea. I'll look into doing that.

Someone implemented a work queue a long time ago.  The reason that change was never merged upstream: it basically guaranteed that you were always working on "old work," lagging a second or three.  Ideally, a background thread polls for work, timing things so that it issues a 'getwork' JSON-RPC call and has the JSON response downloaded and parsed just in time for a miner thread to request new work from the queue.
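
One way to approximate that shape is a single prefetch slot that a background thread keeps filled, so a miner never works on anything more than one unit old. A rough sketch of the idea (illustrative names; fetch_work_via_getwork() is a stand-in, not a function in the tree):

Code:
#include <pthread.h>

struct work;                            /* as filled in by getwork() */
struct work *fetch_work_via_getwork(void);  /* stand-in, not in the tree */

static struct work *next_work;          /* single prefetch slot */
static pthread_mutex_t slot_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t slot_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t slot_full = PTHREAD_COND_INITIALIZER;

/* Background thread: refill the slot the moment it is taken, so a
 * fresh job is always ready and never sits around going stale. */
static void *workio_thread(void *arg)
{
        for (;;) {
                struct work *w = fetch_work_via_getwork();

                pthread_mutex_lock(&slot_lock);
                while (next_work)
                        pthread_cond_wait(&slot_empty, &slot_lock);
                next_work = w;
                pthread_cond_signal(&slot_full);
                pthread_mutex_unlock(&slot_lock);
        }
        return NULL;
}

/* Miner thread: take the prefetched job; lag is at most one unit. */
static struct work *take_work(void)
{
        struct work *w;

        pthread_mutex_lock(&slot_lock);
        while (!next_work)
                pthread_cond_wait(&slot_full, &slot_lock);
        w = next_work;
        next_work = NULL;
        pthread_cond_signal(&slot_empty);
        pthread_mutex_unlock(&slot_lock);
        return w;
}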

tashlan (Newbie)
February 10, 2011, 07:45:51 AM  #168

I'll throw my suggestion in too: any chance the miner could optionally time-stamp the successful results?

Maybe something like: "PROOF OF WORK RESULT: true (yay!!!) 18:24:32 02.10.2011"
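
Something like this would do it with strftime(), matching the format above — a minimal sketch, not actual miner code:

Code:
#include <stdio.h>
#include <time.h>

/* Print a result line stamped like "18:24:32 02.10.2011". */
static void log_result(int valid)
{
        char stamp[32];
        time_t now = time(NULL);

        strftime(stamp, sizeof(stamp), "%H:%M:%S %m.%d.%Y", localtime(&now));
        printf("PROOF OF WORK RESULT: %s %s\n",
               valid ? "true (yay!!!)" : "false (booo)", stamp);
}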
Cerebrum (Newbie)
February 10, 2011, 01:08:10 PM  #169

Someone implemented a work queue a long time ago.  The reason that change was never merged upstream: it basically guaranteed that you were always working on "old work," lagging a second or three.  Ideally, a background thread polls for work, timing things so that it issues a 'getwork' JSON-RPC call and has the JSON response downloaded and parsed just in time for a miner thread to request new work from the queue.

OK, that sounds like a good plan. I'll see what I can do to get that working. Maybe I can measure the average time for a mining thread to crunch one work unit, and the average time for a getwork() call to resolve. I'll look into how the threads might be scheduled more effectively.

Also, it may be a good idea to set CURLOPT_DNS_CACHE_TIMEOUT to -1, or to suggest that people use the IP address of the server they wish to connect to instead of the hostname. I tried both, and the performance of the networking code improved significantly in both cases. However, this might cause a problem if the IP address of the server changes while crunching is happening; maybe resetting the cache whenever a connection fails would be a good idea. Again, I'll look into doing these things.
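
The DNS knob is a single call on the re-used handle — a minimal sketch (assuming the handle lives as long as the thread):

Code:
#include <curl/curl.h>

/* By default libcurl re-resolves a hostname after 60 seconds.
 * CURLOPT_DNS_CACHE_TIMEOUT takes seconds; -1 means never expire. */
static void dns_cache_forever(CURL *curl)
{
        curl_easy_setopt(curl, CURLOPT_DNS_CACHE_TIMEOUT, -1L);
}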
slush (Legendary)
February 10, 2011, 02:30:53 PM  #170

suggest that people use the IP address of the server

No please! Not for my pool! A hardcoded IP makes system migration almost impossible, or at least means a significant drop in hashrate until all 400 workers are fixed. I already migrated the server to a stronger machine, and I'll do it again soon.

Cerebrum (Newbie)
February 10, 2011, 05:42:27 PM  #171

No please! Not for my pool! A hardcoded IP makes system migration almost impossible, or at least means a significant drop in hashrate until all 400 workers are fixed. I already migrated the server to a stronger machine, and I'll do it again soon.

So instead, perhaps we make the DNS cache not expire, except in cases where the connection fails. When a failure occurs, we assume that the IP address of the server might have changed, so we will clear the cache and do the DNS look-up again. I'll look into how to do this with CURL.
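
As far as I can tell, libcurl exposes no direct call to flush a live handle's DNS cache, so the simplest route is probably to destroy and recreate the handle after a failed request, which forces a fresh look-up. A sketch under that assumption (options other than the URL elided):

Code:
#include <curl/curl.h>

/* On a failed request, destroy and recreate the handle: the fresh
 * handle has an empty DNS cache, so the next request re-resolves
 * the hostname (libcurl has no direct "flush DNS" call to use). */
static CURL *rpc_perform_or_reset(CURL *curl, const char *url)
{
        if (curl_easy_perform(curl) != CURLE_OK) {
                curl_easy_cleanup(curl);
                curl = curl_easy_init();
                if (curl)
                        curl_easy_setopt(curl, CURLOPT_URL, url);
        }
        return curl;
}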
slush (Legendary)
February 11, 2011, 10:19:11 AM  #172

So instead, perhaps we make the DNS cache not expire, except in cases where the connection fails. When a failure occurs, we assume that the IP address of the server might have changed, so we will clear the cache and do the DNS look-up again. I'll look into how to do this with CURL.

Yes, this is better. But is it necessary? DNS records typically expire once every 8 hours, so if your DNS cache works properly, you're only saving the round trips of about 3 requests a day. Worthless.

Cerebrum (Newbie)
February 11, 2011, 01:35:50 PM (last edit: February 11, 2011, 02:19:11 PM)  #173

Yes, this is better. But is it necessary? DNS records typically expire once every 8 hours, so if your DNS cache works properly, you're only saving the round trips of about 3 requests a day. Worthless.

Yes; however, libcurl's internal DNS cache is set to expire every 60 seconds by default. So I'd be saving 1440 DNS requests per day per thread even after the changes that jgarzik recently made to the networking code are released. That's a performance increase of about 1.6% over the current code in the git repository (assuming one second per DNS look-up, which is what I see on my machine: roughly 1 second saved out of every 61).

Also, the current release of the program does one DNS request every time it fetches work, since it does not re-use curl objects. My local DNS server blows and takes almost a full second to do a look-up, so you can imagine that this gets very costly, since the network code is synchronous with the crunching code. I was spending 10-20% of my time doing DNS requests instead of crunching hashes before making changes to the code to prevent this. That's why I started looking at the code in the first place: I wanted to know why my processors were sitting idle for so much time instead of making hashes.

EDIT:

I think I've just committed some changes to the git repository:
- DNS caching now lasts indefinitely, until a connection attempt fails, at which point the cache is cleared and the look-up happens again.
- All threads use a single shared CURL object which is initialized only once.
- Yay and Boo results are now timestamped.

I've been testing this in slush's pool for the last few minutes and it's working fine so far; it's found about 5 shares.
jgarzik (OP, Legendary)
February 11, 2011, 06:13:20 PM  #174

The current solution in git is sufficient: we re-use the CURL object in each miner thread, which means DNS entries are fully cached according to their cache timeouts.

tashlan (Newbie)
February 13, 2011, 12:33:17 AM  #175

Thank you, Cerebrum, for your code contributions. I think this sort of code review is great, and I eagerly await its packaging into the next version of the client (thanks jgarzik).
jgarzik (OP, Legendary)
February 13, 2011, 12:48:28 AM  #176

Quote
I'm sorry but it's me again :)

After I added CFLAGS
./configure CFLAGS="-I/usr/local/include"
and then ran "make",
I get another error :)

Quote
vitsum# make
make  all-recursive
Making all in compat
Making all in jansson
gcc  -I/usr/local/include -pthread   -o minerd cpu-miner.o sha256_generic.o  sha256_4way.o sha256_via.o  sha256_cryptopp.o util.o -L/usr/local/lib -lcurl -rpath=/usr/lib:/usr/local/lib -lssl -lcrypto -lz compat/jansson/libjansson.a -lpthread
sha256_via.o(.text+0x49): In function `scanhash_via':
/root/cpuminer-0.6.1/miner.h:27: undefined reference to `__builtin_bswap32'
sha256_via.o(.text+0xd5):/root/cpuminer-0.6.1/miner.h:27: undefined reference to `__builtin_bswap32'
sha256_via.o(.text+0x189):/root/cpuminer-0.6.1/miner.h:27: undefined reference to `__builtin_bswap32'
util.o(.text+0xb44): In function `swab32':
: undefined reference to `__builtin_bswap32'
*** Error code 1

Stop in /root/cpuminer-0.6.1.
*** Error code 1

Stop in /root/cpuminer-0.6.1.
*** Error code 1

Stop in /root/cpuminer-0.6.1.

What am I doing wrong?

This is a bug in the current version that is fixed in git.  Scroll up for the bswap fix, check out the fix from the git repository, or wait for the next release.

jgarzik (OP, Legendary)
February 13, 2011, 01:04:36 AM (last edit: February 14, 2011, 04:42:11 AM)  #177

cpuminer version 0.7 is released (see top of thread for URLs).

Changes:
- Re-use CURL object, thereby reusing the DNS cache and HTTP connections.  Pool users are strongly encouraged to upgrade to this version of cpuminer.
- Use bswap_32() if the compiler intrinsic is not available.  Fixes "__builtin_bswap32" compile problems.
- Disable full target validation (as opposed to simply checking H==0) for now

SHA1: 9fb370d019e475d9a01a34c42fbcbcae823d971b  cpuminer-installer-0.7.zip
MD5: 3546f606c99bdba9ad2796ecfa86f2cc  cpuminer-installer-0.7.zip


Cryptoman (Hero Member)
February 13, 2011, 01:12:03 AM  #178

Changes:
- Re-use CURL object, thereby reusing the DNS cache and HTTP connections.  Pool users are strongly encouraged to upgrade to this version of cpuminer.

This is a very welcome update.  I was using IPs rather than hostnames to partially work around this problem.  Thanks!

"A small body of determined spirits fired by an unquenchable faith in their mission can alter the course of history." --Gandhi
Big Bear (Newbie)
February 13, 2011, 09:34:47 AM  #179

So what command line do I write to execute the new program?

I've tried...

minerd -url=mining.bitcoin.cz:8332 -user=XXXX.XXXX -password=XXXXXXX

but all I get is the help output.

Also, is there a way to change or save the program's settings so I don't have to type all of that in every time I start it?

Thanks!
BitLex (Hero Member)
February 13, 2011, 10:24:48 AM  #180

try --userpass=worker.ID:pass
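
For example (assuming the 0.7 flag names; substitute your own worker credentials):

Code:
minerd --url=http://mining.bitcoin.cz:8332 --userpass=worker.ID:pass

Putting that line in a small shell script or batch file saves retyping it at every start.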
