ancow
|
|
July 26, 2011, 11:10:38 PM |
|
*cough* the deepbit thread is a better place to ask this. Anyway, you can and should set individual passwords for the workers and it's been like that for months.
|
BTC: 1GAHTMdBN4Yw3PU66sAmUBKSXy2qaq2SF4
|
|
|
|
|
|
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4088
Merit: 1631
Ruu \o/
|
|
July 26, 2011, 11:45:52 PM |
|
Current git: I'm having issues with the curses display. Normal running is okay, but attempting to access any of the interactive options fails to draw them properly on screen. It worked in 1.4.1, but not in 1.5.0.
I can still press the appropriate key to access the appropriate function, but I never see the options. Example: I select [P]ool, the whole screen is redrawn, and the scroll portion goes blank. If I type "i" for info, it prompts me for which pool I want info on; but then the screen redraws completely and I'm given no info.
I'm aware of this, thanks. It was a half-arsed attempt to fix the crash that old libncurses versions have when the display is refreshed twice in a row. I've just pushed a change which should fix that.
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
ah42
Newbie
Offline
Activity: 14
Merit: 0
|
|
July 27, 2011, 01:11:42 AM |
|
I'm aware of this, thanks. It was a half-arsed attempt to fix the crash that old libncurses versions have when the display is refreshed twice in a row. I've just pushed a change which should fix that.
Yup, that fixed it, thanks. (I figured you probably knew about it, but it could just as easily have been local to me, too...)
|
|
|
|
d3m0n1q_733rz
|
|
July 27, 2011, 07:18:51 AM |
|
1.5.0 works well. I see you're using an approach similar to what I had suggested for the CPUminer. The numbers are looking far more stable now and it's completing the work far more efficiently as predicted. Kudos on getting the darn thing working.
|
Funroll_Loops, the theoretically quicker breakfast cereal! Check out http://www.facebook.com/JupiterICT for all of your computing needs. If you need it, we can get it. We have solutions for your computing conundrums. BTC accepted! 12HWUSguWXRCQKfkPeJygVR1ex5wbg3hAq
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4088
Merit: 1631
Ruu \o/
|
|
July 27, 2011, 07:22:04 AM |
|
1.5.0 works well. I see you're using an approach similar to what I had suggested for the CPUminer. The numbers are looking far more stable now and it's completing the work far more efficiently as predicted. Kudos on getting the darn thing working.
Thanks. It's not entirely working as planned, but hopefully 1.5.1 will be working even better.
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
dikidera
|
|
July 27, 2011, 07:22:29 AM |
|
1.5.0 works well. I see you're using an approach similar to what I had suggested for the CPUminer. The numbers are looking far more stable now and it's completing the work far more efficiently as predicted. Kudos on getting the darn thing working.
Hehe, if you are talking about the split nonces, that was me who suggested it.
|
|
|
|
d3m0n1q_733rz
|
|
July 27, 2011, 08:00:28 AM Last edit: July 27, 2011, 08:18:47 AM by d3m0n1q_733rz |
|
One thing I've noticed, though, and can't seem to fix effectively, is that the SSE4.1 change appears to be nullified. While I was previously seeing higher numbers using non-temporal moves/copies from memory to cache, it now appears to perform exactly the same as the SSE2 code. I've tried disassembling and modifying the compiled C code for sse4_64, but yasm is being a little temperamental about the code that objconv puts together: at each movntd it says "error: instruction expected after label". I don't know why GCC didn't automatically use non-temporal moves with -msse4.1 specified in the CFLAGS, but I'm not much good beyond simply modifying asm source code to utilize CPU optimizations. Would you mind taking a look at it and figuring out why it's not allowing the optimizations to be used? However, don't revert to the asm versions if you're not using them; the C-based ones are working with a higher degree of stability and slightly higher speed. I just can't figure out how to get GCC to enable these optimizations.
Thanks!
PS: I've been playing around with using uint_fast32_t in the C code, since the hashes can be different sizes. I'm not a C coder, but it's something to think about that might add up over time. The problem I'm having is that I can't find the right place to use it, even though it does slightly increase code speed. In short, it's faster, but the results are useless/rejected due to how the data is handled. Not fun.
|
Funroll_Loops, the theoretically quicker breakfast cereal! Check out http://www.facebook.com/JupiterICT for all of your computing needs. If you need it, we can get it. We have solutions for your computing conundrums. BTC accepted! 12HWUSguWXRCQKfkPeJygVR1ex5wbg3hAq
|
|
|
zaytsev
Newbie
Offline
Activity: 59
Merit: 0
|
|
July 27, 2011, 08:20:53 AM |
|
That part is an error secondary to the first one, which is the sockopt function. What does that part of ./configure show for you?

checking for curl-config... /usr/bin/curl-config
checking for the version of libcurl... 7.15.5
checking for libcurl >= version 7.10.1... yes
checking whether libcurl is usable... yes
checking for curl_free... yes
checking for gawk... (cached) gawk
checking for curl-config... (cached) /usr/bin/curl-config
checking for the version of libcurl... (cached) 7.15.5
checking for libcurl >= version 7.15.6... (cached) yes
checking whether libcurl is usable... (cached) yes
checking for curl_free... (cached) yes
|
|
|
|
d3m0n1q_733rz
|
|
July 27, 2011, 08:26:01 AM |
|
1.5.0 works well. I see you're using an approach similar to what I had suggested for the CPUminer. The numbers are looking far more stable now and it's completing the work far more efficiently as predicted. Kudos on getting the darn thing working.
Hehe, if you are talking about the split nonces, that was me who suggested it. http://forum.bitcoin.org/index.php?topic=1925.380 It's not the exact implementation I suggested, but similar idea. Though I like yours better. My idea was to dynamically assign a core with a smaller work unit to assist a core with a larger work unit that was falling behind until they were about equal and then resume its own work. To keep them all leveled out so to speak. Similarly, since this is now CGminer, you can extend the ability to do this to the graphics core as well so as to give the CPU smaller work loads that it can handle effectively based upon its performance and then give the rest to the GPU to finish off. This does, however, assume that the GPU is capable of handling the loads at a faster rate than the CPU which isn't always the case for older GPUs of the NVidia variety.
|
Funroll_Loops, the theoretically quicker breakfast cereal! Check out http://www.facebook.com/JupiterICT for all of your computing needs. If you need it, we can get it. We have solutions for your computing conundrums. BTC accepted! 12HWUSguWXRCQKfkPeJygVR1ex5wbg3hAq
|
|
|
d3m0n1q_733rz
|
|
July 27, 2011, 09:01:27 AM |
|
Pool Management of 1.5.0 is a no-go as far as I can tell. Goes to black then resumes work.
|
Funroll_Loops, the theoretically quicker breakfast cereal! Check out http://www.facebook.com/JupiterICT for all of your computing needs. If you need it, we can get it. We have solutions for your computing conundrums. BTC accepted! 12HWUSguWXRCQKfkPeJygVR1ex5wbg3hAq
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4088
Merit: 1631
Ruu \o/
|
|
July 27, 2011, 10:54:44 AM |
|
That part is an error secondary to the first one, which is the sockopt function. What does that part of ./configure show for you?

checking for curl-config... /usr/bin/curl-config
checking for the version of libcurl... 7.15.5
checking for libcurl >= version 7.10.1... yes
checking whether libcurl is usable... yes
checking for curl_free... yes
checking for gawk... (cached) gawk
checking for curl-config... (cached) /usr/bin/curl-config
checking for the version of libcurl... (cached) 7.15.5
checking for libcurl >= version 7.15.6... (cached) yes
checking whether libcurl is usable... (cached) yes
checking for curl_free... (cached) yes

Well that's just ridiculous :\ It tests for version >= 7.15.6, finds version 7.15.5 ... and is happy???
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4088
Merit: 1631
Ruu \o/
|
|
July 27, 2011, 10:55:14 AM |
|
Pool Management of 1.5.0 is a no-go as far as I can tell. Goes to black then resumes work.
Yes, known bug. It's there, it's just invisible. Wait till 1.5.1.
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4088
Merit: 1631
Ruu \o/
|
|
July 27, 2011, 01:25:27 PM |
|
New release: 1.5.1

Source: http://ck.kolivas.org/apps/cgminer/cgminer-1.5.1.tar.bz2
Linux x86_64 dynamic binary: http://ck.kolivas.org/apps/cgminer/cgminer-1.5.1-x86_64-built.tar.bz2
Win32 binary: http://ck.kolivas.org/apps/cgminer/cgminer-1.5.1-win32.zip

This release is almost entirely bugfixes, some minor, some major. I'm trying now to settle down to a solid stable release because I'll be leaving for a couple of weeks shortly and don't want to leave with code in a state of flux. See my kernel blog for a little more detail: http://ck-hack.blogspot.com

The changes in this one were to dramatically improve the CPU mining splitting of work between threads, to minimise the risk of generating stale or repeated work and to get as much work as possible out of it. This is done by using the original work item as a kind of "master DNA" from which the extra work is copied, while the original work is flagged and pushed back to the top of the list of work to be done (so it doesn't sit around too long before being picked up). I've also fixed the longstanding bug where it would crash on decreasing the size of the window on older operating systems (this was actually a libncurses bug and I've worked around it). There were other bugs where a hung GPU could take out the whole of cgminer because the thread restart code wouldn't restart properly, and there was a very slight chance it would simply stop taking work after a longpoll. I also try to detect the version of libcurl being compiled against to prevent build failure (but as you can see in this forum, some older versions of libcurl still fail??). I improved the display options menu, fixed the invisible menu bug, and did lots of other things... short changelog below.

- Two redraws in a row cause a crash in old libncurses so just do one redraw using the main window.
- Don't adjust hash_div only up for GPUs. Disable hash_div adjustment for GPUs.
- Only free the thread structures if the thread still exists.
- Update both windows separately, but not at the same time, to prevent the double refresh crash that old libncurses has. Do the window resize check only when about to redraw the log window to minimise ncurses cpu usage.
- Abstract out the decay time function and use it to make hash_div a rolling average so it doesn't change too abruptly, and divide work in chunks large enough to guarantee they won't overlap.
- Sanity check to prove locking.
- Don't take more than one lock at a time.
- Make threads report out when they're queueing a request and report if they've failed.
- Make cpu mining work submission asynchronous as well.
- Properly detect stale work based on time from staging and discard instead of handing on, but be more lax about how long work can be divided for, up to the scantime.
- Do away with queueing work separately at the start and let each thread grab its own work as soon as it's ready.
- Don't put an extra work item in the queue as each new device thread will do so itself.
- Make sure to decrease queued count if we discard the work.
- Attribute split work as local work generation.
- If work has been cloned it is already at the head of the list, and when being reinserted into the queue it should be placed back at the head of the list.
- Dividing work is like the work is never removed at all, so treat it as such. However the queued bool needs to be reset to ensure we *can* request more work even if we didn't initially.
- Make the display options clearer.
- Add debugging output to tq_push calls.
- Add debugging output to all tq_pop calls.
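The "master DNA" scheme described above could be sketched roughly like this (a hypothetical illustration of the idea, not cgminer's actual C code):

```python
from collections import deque
from dataclasses import dataclass, replace

@dataclass
class Work:
    data: bytes          # the original work item's payload (the "master DNA")
    next_nonce: int = 0  # where the next cloned slice of the nonce space starts
    cloned: bool = False # flagged once the item has been split

def get_work(queue, slice_size=2**28):
    """Pop a work item, hand the caller a clone covering one nonce slice,
    and push the flagged original back to the HEAD of the queue so it gets
    picked up again quickly instead of going stale at the back."""
    work = queue.popleft()
    clone = replace(work, cloned=True)   # thread mines this slice
    start = work.next_nonce
    work.next_nonce += slice_size
    work.cloned = True
    if work.next_nonce < 2**32:          # slices remain: back to the head
        queue.appendleft(work)
    return clone, start

q = deque([Work(b"block header")])
w1, n1 = get_work(q)   # first thread gets the slice starting at nonce 0
w2, n2 = get_work(q)   # next thread gets the following slice of the same item
```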
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
Diapolo
|
|
July 27, 2011, 02:07:25 PM |
|
Great work! I have to add a suggestion: let us specify different arguments for different GPUs. I use a worksize of 256 for the 5830 and 128 for the 5870, so I have to run 2 instances of CGMINER. It would be neat to only use one instead (which seems to be a great idea with the 1.5.X releases). Edit: Shoot me if this is standard and I never tried it ^^. Edit 2: While thinking about it, how are the credentials for the pools passed if using more than 1 GPU per instance? I set up a miner in the pool for each GPU, so what if I use more than one GPU? I'm tired, please enlighten me! By the way, if I input anything into the Enable GPU mask, GPU 0 gets activated (G - E - x for example). Same for disabling GPUs: any input disables GPU 0. Regards, Dia
|
|
|
|
Viceroy
|
|
July 27, 2011, 02:33:49 PM |
|
Edit 2: While thinking about it, how are the credentials for the pools passed if using more than 1 GPU per instance? I set up a miner in the pool for each GPU, so what if I use more than one GPU? I'm tired, please enlighten me!
You don't need multiple miners at the pool to use CGMiner. It takes the job and manages handing out threads for you, so to the pool you only need to be a single user. With earlier miners you needed to set up a worker for each card, or in my case 2 workers per card. With CGMiner you only need one worker... the software does all the hard work for you.
|
|
|
|
Diapolo
|
|
July 27, 2011, 02:36:50 PM |
|
Edit 2: While thinking about it, how are the credentials for the pools passed if using more than 1 GPU per instance? I set up a miner in the pool for each GPU, so what if I use more than one GPU? I'm tired, please enlighten me!
You don't need multiple miners at the pool to use CGMiner. It takes the job and manages handing out threads for you, so to the pool you only need to be a single user. With earlier miners you needed to set up a worker for each card, or in my case 2 workers per card. With CGMiner you only need one worker... the software does all the hard work for you.
Cool stuff, it was not clear to me. What about different worksizes for different GPUs, is this already possible too?

-I 8 -d 0 -v 2 -w 128 -d 1 -v 2 -w 256

Dia
|
|
|
|
d3m0n1q_733rz
|
|
July 27, 2011, 10:43:18 PM |
|
I found a very tiny part of the problem with the SSE4 miner. You're including the xmmintrin.h header, which only supports up to SSE and doesn't really even allow for SSE2. emmintrin.h would be for SSE2, and smmintrin.h (or nmmintrin.h) would be for SSE4.1 (or 4.2). This could also allow for AMD optimizations via the ammintrin.h header, so AMD chips can take advantage of the optimizations too. However, this still hasn't completely solved the problem, as I'm still only seeing SSE2 instructions used in the SSE4 miner after compiling the C code with the -msse4.1 flag enabled.
|
Funroll_Loops, the theoretically quicker breakfast cereal! Check out http://www.facebook.com/JupiterICT for all of your computing needs. If you need it, we can get it. We have solutions for your computing conundrums. BTC accepted! 12HWUSguWXRCQKfkPeJygVR1ex5wbg3hAq
|
|
|
xcooling
Member
Offline
Activity: 145
Merit: 10
|
|
July 27, 2011, 10:53:47 PM Last edit: July 27, 2011, 11:06:06 PM by xcooling |
|
For an AMD X6 (for an X4, set -ftree-parallelize-loops=4):

#//** AMD Family 10h, x86_64 (Phenom X6)
#//** -ffast-math: might cause issues
CFLAGS="-O3 -Wall -march=amdfam10 -msse4a -mtune=amdfam10 -mabm -combine -funroll-all-loops -ffast-math -fprefetch-loop-arrays -ftree-parallelize-loops=6 -I/extras/AMDAPP/include" LDFLAGS="-L/extras/AMDAPP/lib/ -g" ./configure
http://developer.amd.com/assets/AMDGCCQuickRef.pdf
|
|
|
|
DBordello
|
|
July 27, 2011, 11:06:42 PM |
|
I am giving cgminer a shot (coming from phoenix + smartcoin). My initial impression is positive. However, I am hesitant because I had modified phoenix to record every submitted share in an SQL database.
So now I am wondering how I can do this with cgminer. Coding a database connection in C is over my head, so I am going with a simpler solution.
My first thought was to use the -T option and pipe the output to Python for processing. This seems like it would work great. However, I am greedy and would also like the curses interface. Do you think you could add an option to direct status updates ([2011-07-27 18:05:05 Accepted xxxxxx GPU 1 thread 1 pool 0) to a separate stream such as stderr? That way I could direct share submissions to SQL processing while still keeping the curses interface on stdout.
That is, unless you have a more elegant solution.
Excellent work so far.
Dan
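The pipe-to-Python idea could look something like this sketch; the log format (including the closing bracket) is assumed from the example line above, so the regex may need adjusting to cgminer's real output:

```python
# Hypothetical sketch: pipe cgminer's output into this script to record
# accepted shares in SQLite, e.g.  ./cgminer ... 2>&1 | python log_shares.py
import re
import sqlite3
import sys

# assumed line shape: [2011-07-27 18:05:05] Accepted <hash> GPU 1 thread 1 pool 0
LINE = re.compile(
    r"\[(?P<ts>[\d\- :]+)\]\s+Accepted\s+(?P<hash>\S+)\s+"
    r"GPU\s+(?P<gpu>\d+)\s+thread\s+(?P<thread>\d+)\s+pool\s+(?P<pool>\d+)"
)

def record(stream, db_path="shares.db"):
    """Read log lines from `stream`, insert each accepted share into
    SQLite, and return how many shares were recorded."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS shares"
               " (ts TEXT, hash TEXT, gpu INT, thread INT, pool INT)")
    count = 0
    for line in stream:
        m = LINE.search(line)
        if m:
            db.execute("INSERT INTO shares VALUES (?, ?, ?, ?, ?)",
                       (m["ts"], m["hash"], int(m["gpu"]),
                        int(m["thread"]), int(m["pool"])))
            count += 1
    db.commit()
    return count

if __name__ == "__main__":
    record(sys.stdin)
```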
|
www.BTCPak.com - Exchange your bitcoins for MP: Secure, Anonymous and Easy!
|
|
|
RudeDude
Newbie
Offline
Activity: 11
Merit: 0
|
|
July 27, 2011, 11:24:53 PM |
|
I'm loving cgminer, in part because the binary just plain runs on Ubuntu 10.04, and the ncurses display rocks! (I need to fight with the CUDA drivers to get OpenCL working if I want GPU mining.) Has anybody noticed the hashrate dying off after running for several hours? I've also noticed something that _might_ be related: right after starting, the debug display outputs a line or two every second. I usually go back to the normal display if things are working, and after a few new blocks are detected on the network I go look at the debug display again and now it's flying by super fast. It's almost as if something is looping an extra time (and therefore hitting the debug output more often) after running for a while. Anyway, thanks for coding this up. I'm going to scrape together a few bitpennies to donate. --RD Edit: P.S. I wanted to ask if anyone has tried to compile a 64-bit Windows version? I'd like to run cgminer instead of guiminer but the sse2_64 is much faster.
|
|
|
|
|