I care about my efficiency because I want to be a good pool citizen. Doing getworks only once a minute reduces load on the pool. Less load on the pool means higher pool reliability and more scalability with less hardware (which may translate to lower fees). I can think of no reason to waste CPU and bandwidth with getwork requests every 5-10 seconds instead of once a minute for pools that support it. There is benefit to the pool and no downside to the miner.
I never said cgminer was a low efficiency miner! It is usually >90% even without rolling the time. It asks for work whenever it needs work, and how often it requests more work depends on how fast your GPU is. I could certainly make it roll time by default at intermittent work intervals, but *not all pools support it*, so that would bring unexpected results, with rejected shares rising even though the efficiency rises.
|
|
|
Would be nice if the Windows version actually worked. It won't even connect to slush's pool.
"error can't rewind" ... say what?!
I registered with slush just to try it and it works fine for me? cgminer doesn't give meaningful enough errors, or fail, if you give the wrong login credentials, and I need to do something about that. Perhaps that was the issue here?
|
|
|
Wow. Cgminer has been coming along nicely!
Any plans for pool load balancing / redundancy? Diablominer has load balancing, poclbm has failover. Both are nice for different reasons.
As far as I can see, Diablominer supports two pools at once, not really load balancing. If you run two instances of cgminer, one connecting to each pool, you get exactly the same effect so I see no advantage of doing this in the one process? Failover I have certainly considered adding.
|
|
|
Does cgminer support X-Roll-NTime? Since my efficiency was below 100%, I'm guessing not.
Yes it does, but it only needs to resort to using it when network connectivity is down. Since cgminer preemptively and asynchronously gathers work before it's needed, it copes with poor network connectivity very well, so it doesn't often need to use it.

Also note that super high efficiency benefits the pool server by decreasing the number of getworks it asks for, but it does not benefit the person mining if they are already at 100% utilisation, since you get paid according to how much work you do. Since cgminer basically uses rolltime only when it *might* have dropped utilisation, but it keeps utilisation high at all times, there is no advantage to using it more.

You can tell when cgminer has started doing it with this message:

Server not providing work fast enough, generating work locally

and when it stops doing it, this message follows:

Resumed retrieving work from server
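For anyone curious what "rolling the time" means mechanically, here is a minimal sketch. This is not code from the source: the offset follows the standard 80-byte Bitcoin block header layout, the function name is illustrative, and a little-endian host is assumed.

```c
#include <stdint.h>
#include <string.h>

/* Bitcoin block header layout (80 bytes):
 * version(4) | prev_hash(32) | merkle_root(32) | ntime(4) | nbits(4) | nonce(4)
 * so the timestamp lives at byte offset 68. */
#define NTIME_OFFSET 68

/* Illustrative helper: derive a "new" piece of work from an existing one
 * by bumping the timestamp, which is what X-Roll-NTime permits. Assumes a
 * little-endian host so the in-memory add matches the wire byte order. */
void roll_ntime(uint8_t header[80], uint32_t increment)
{
    uint32_t ntime;

    memcpy(&ntime, header + NTIME_OFFSET, sizeof(ntime));
    ntime += increment;
    memcpy(header + NTIME_OFFSET, &ntime, sizeof(ntime));
}
```

Each rolled header hashes to an entirely different search space, which is why a single getwork can keep a miner busy through a network outage.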
|
|
|
That would be nice and probably save some watts.
Though shouldn't -q do that? It doesn't update anything on the screen at all...
Good point. Updated tree:
Fixed the debug output by using locking around any screen updates.
Made quiet mode not enable curses at all.
Changed longpoll calls to not use POST.
|
|
|
Would you like a no-curses version as an option? It's using hardly any CPU here but clearly it will use more CPU with curses than a basic printf.
|
|
|
Hello, it appears that you use POST instead of GET when doing long polling. That is contradictory to the unofficial spec and really annoying when trying to debug software LP failures. See https://deepbit.net/longpolling.php-C00w

That's good to know, thanks. I didn't implement any of the original communications parts that cgminer grew out of, so I had no idea. I'll look into correcting that.
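For reference, a rough sketch of how a client might derive the longpoll URL, assuming the pool advertises it via the X-Long-Polling header as the unofficial spec describes. The helper name and example URLs are mine, not from the source; the request itself should then be a plain GET held open until new work arrives.

```c
#include <stdio.h>
#include <string.h>

/* Sketch: a prior getwork response carries an "X-Long-Polling" header.
 * Some pools send a full URL in it, others just a path relative to the
 * pool's base URL, so the client must handle both. The resulting URL is
 * then fetched with a long-lived GET (not a POST). */
int build_lp_url(char *out, size_t outlen,
                 const char *base_url,   /* e.g. "http://pool.example:8332" */
                 const char *lp_header)  /* e.g. "/LP" or a full URL */
{
    if (strncmp(lp_header, "http://", 7) == 0 ||
        strncmp(lp_header, "https://", 8) == 0)
        return snprintf(out, outlen, "%s", lp_header);

    return snprintf(out, outlen, "%s%s", base_url, lp_header);
}
```

Using GET here also makes failures easy to reproduce with nothing more than a browser or wget pointed at the built URL.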
|
|
|
Updated tree:
Fixed lots of little bugs here and there in the curses code, some of which would lead to screen corruption and some of which would lead to crashes.
|
|
|
Ok, a suggestion for the CPU miner threads. Instead of fetching new work for each thread, every thread should work on the same work unit; i.e., the nonces to be tried are split by the number of threads, so thread 1 scans one slice of the nonce range, thread 2 the next slice, and so on and so forth.
Very interesting idea considering how inefficient cpu mining is. I'll consider it after sorting out all the new bugs from the new interface.
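A minimal sketch of the suggested split, assuming the full 32-bit nonce space is divided evenly among the threads (the helper name is illustrative, not from the source):

```c
#include <stdint.h>

/* Give thread `thread_id` of `n_threads` a disjoint, inclusive slice
 * [start, end] of the 2^32 nonce space, so all threads can hash the same
 * work item without overlapping. */
void nonce_range(int thread_id, int n_threads,
                 uint32_t *start, uint32_t *end)
{
    uint64_t span = ((uint64_t)1 << 32) / n_threads;

    *start = (uint32_t)(span * thread_id);
    /* The last thread absorbs any remainder so the whole space is covered. */
    *end = (thread_id == n_threads - 1)
               ? UINT32_MAX
               : (uint32_t)(span * (thread_id + 1) - 1);
}
```

The upside is one getwork serves every CPU thread at once; the downside is all threads go idle together when that single work item expires.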
|
|
|
Oh and kripz, hardware overheating/failing will not cause low "efficiency"; it will cause hardware errors. Low efficiency is generally when you queue much more work than you return (like CPU mining does). You can get that effect if you decrease the scantime or increase the GPU threads per card (not the overall number of GPUs, but the number of threads per card).
|
|
|
First of all: wow, I think it's awesome to see ckolivas here! Loved your BFS work. Just a short question about the statistics:

GPU 0: [360.6 Mh/s] [Q:22 A:23 R:0 HW:3841 E:105% U:5.28/m]

What are the HW and E parameters? Thanks

It's in the readme. HW is hardware errors and E is efficiency. There is an issue where cgminer can show a run of apparent HW errors when you first start it, and it's harmless; I'm looking into that. If the hardware errors continue to rise after that, though, be concerned about your hardware.
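For those reading along, the stats line quantities appear to relate as follows; a sketch assuming E is accepted shares as a percentage of queued getworks and U is accepted shares per minute (formulas inferred from the numbers in the thread, not taken from the source):

```c
/* E: accepted shares as a percentage of getworks queued. Values over 100%
 * are possible when rolled work yields extra shares per getwork. */
double efficiency_pct(unsigned accepted, unsigned queued)
{
    return queued ? 100.0 * accepted / queued : 0.0;
}

/* U: accepted shares per minute of runtime. */
double utility_per_min(unsigned accepted, double elapsed_secs)
{
    return elapsed_secs > 0.0 ? accepted * 60.0 / elapsed_secs : 0.0;
}
```

Plugging in the line above: 23 accepted over 22 queued gives ~105%, matching the E:105% shown.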
|
|
|
Wow kripz, you got all the luck. I guess I never really tested debug output very well. All in good time.

As for the efficiency, I did notice a slight trend towards lower efficiency with more Mhash, but I have one machine with a 5770 (~200MH) and another machine with 4x6970 (~1700MH), and they're both running to within 3% of the same efficiency after 10,000 shares on the big box. I've considered a number of reasons why the shares may be lower in your case, but nothing really seems convincing. I wonder if bitcoind actually hands out the same work and you might well be doing overlapping work between the multiple cards. I've found that even if the efficiency drops off, the total accepted is higher, and that's what ultimately matters? Check what your pool is reporting, since they report back the hash rate according to how many accepted shares you return (but you'll need to get a few samples from the pool, since they average over too small a period and it will fluctuate a lot).

As for your crash, I found a small bug that I pushed a couple of changes to fix, and hopefully it's one and the same thing.
|
|
|
This is the same speed as poclbm for me, around 141 Mhash/s, but every time a share is accepted it overestimates the average. Even though every 5 seconds it's obvious the average is, say, 141.3 Mhash/s, when a share is accepted it will say something like 143.2 Mhash/s, which is probably not true.
The 5 second average is quite a coarse measure and not meant to have great weight placed on it. I update it infrequently in order to minimise the overhead as much as possible, and depending on where on the time boundary you land, it can be higher or lower than the real figure. The overall average is the only reliable number.

As for overall speed, most mining software should give pretty much the same performance when running flat out, since they all use the same opencl kernels. The difference between them is how they manage the rest (like intermittent network connectivity, ddosed servers, maximising accepted shares, minimising cpu and ram overhead and so on...).
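One common way to implement a short rolling average like the 5s figure is an interval-weighted decay, where each new sample's influence depends on how much time it covers; a sketch under that assumption (names and the time constant are illustrative, not the actual implementation):

```c
/* Fold a new hashrate sample into a decaying average. The weight given to
 * the old average shrinks as the sampling interval grows relative to the
 * time constant `tau`, so infrequent updates swing the short average more,
 * which is exactly why it reads jumpy next to the lifetime average. */
void decay_avg(double *avg, double sample, double interval, double tau)
{
    double f = tau / (tau + interval);   /* weight kept by the old average */

    *avg = *avg * f + sample * (1.0 - f);
}
```

With a zero-length interval the average is untouched, and with a very long interval it snaps to the latest sample, matching the behaviour described above.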
|
|
|
@Tartarus: I can't really see anything obvious in that debug output. I'll have to code in some way of dumping all sorts of information when you tell it to, and hopefully I can figure it out from there.
@Nobu: Detecting new blocks faster than the server sends out longpolls almost every single time is fine. However, I'm not sure that's exactly what's going on in your case. I'll review the code further.
|
|
|
Updated tree: Much improved interface using ncurses. It now looks like this:
cgminer version 1.2.0
--------------------------------------------------------------------------------
 Totals: [(5s):166.9 (avg):194.3 Mh/s] [Q:43 A:14 R:0 HW:2 E:33% U:2.53/m]
--------------------------------------------------------------------------------
 GPU 0: [183.5 Mh/s] [Q:15 A:14 R:0 HW:2 E:93% U:2.57/m]
 CPU 0: [3.2 Mh/s] [Q:1 A:0 R:0 HW:0 E:0% U:0.00/m]
 CPU 1: [3.2 Mh/s] [Q:1 A:0 R:0 HW:0 E:0% U:0.00/m]
 CPU 2: [3.2 Mh/s] [Q:1 A:0 R:0 HW:0 E:0% U:0.00/m]
 CPU 3: [3.2 Mh/s] [Q:5 A:0 R:0 HW:0 E:0% U:0.00/m]
--------------------------------------------------------------------------------
[2011-07-11 13:35:41] Share accepted from GPU 0
[2011-07-11 13:36:00] Share accepted from GPU 0
[2011-07-11 13:36:37] Share accepted from GPU 0
[2011-07-11 13:36:57] Share accepted from GPU 0
[2011-07-11 13:37:06] Server not providing work fast enough, generating work locally
[2011-07-11 13:37:07] Resumed retrieving work from server
[2011-07-11 13:37:23] LONGPOLL detected new block, flushing work queue
[2011-07-11 13:37:41] Share accepted from GPU 0
[2011-07-11 13:37:43] LONGPOLL detected new block, flushing work queue
[2011-07-11 13:38:10] Share accepted from GPU 0
[2011-07-11 13:39:08] Share accepted from GPU 0
The text at the top stays there while the log messages below scroll. It now needs libcurses-dev to build. pdcurses on windows should achieve the same thing, and the windows version is currently being worked on.
|
|
|
Updated tree:
You can now choose which device(s) to start cgminer on:
--device|-d <arg> Select device to use, (Use repeat -d for multiple devices, default: all)
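A rough sketch of how repeated -d arguments could be collected into a set of enabled devices; this mirrors the documented flag but is not the actual option parser from the source:

```c
#include <stdlib.h>
#include <string.h>

#define MAX_DEVICES 16  /* illustrative cap, not from the source */

/* Scan argv for every "-d N" / "--device N" pair and mark device N as
 * enabled. Returns how many devices were selected; zero means no -d was
 * given, i.e. default to all devices. */
int parse_devices(int argc, char **argv, int enabled[MAX_DEVICES])
{
    int i, count = 0;

    memset(enabled, 0, sizeof(int) * MAX_DEVICES);
    for (i = 1; i < argc - 1; i++) {
        if (strcmp(argv[i], "-d") == 0 || strcmp(argv[i], "--device") == 0) {
            int dev = atoi(argv[i + 1]);

            if (dev >= 0 && dev < MAX_DEVICES && !enabled[dev]) {
                enabled[dev] = 1;
                count++;
            }
        }
    }
    return count;
}
```

So `cgminer -d 0 -d 2` would enable devices 0 and 2 and leave the rest untouched.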
|
|
|
Updated tree:
Now I've fixed it for real, and incremented the version number and tag to v1.1.1 so people know which is the current good version. Everyone should upgrade.
|
|
|
|