ckolivas, do you have an NMC wallet address ?
Yeah, I need a way to dump mine too since I'm getting them whether I want them or not. - Sam

Hmm, good idea. Mind if I send you some as thanks?

Uh, I don't have one... Exchange them for BTC and keep them instead?
|
|
|
If you're using the prebuilt binary, those are built for the latest ubuntu, which is now 11.10. Possibly something is incompatible between them, but that library comes from libncurses. Presumably linuxcoin is based on an older ubuntu.
|
|
|
Thanks a lot. Sorry I completely forgot about that issue because I mostly forget that anyone is still CPU mining.
|
|
|
Version 2.0.8 - November 11, 2011
- Make longpoll do a mandatory flushing of all work even if the block hasn't changed, thus supporting longpoll initiated work change of any sort and merged mining.
Donated 5BTC as I promised. Thanks for continuing to improve THE BEST miner out there. I was hoping you could clarify the flushing "timeline". Does it work like this:
1) LP arrives.
2) cgminer flushes any work in the queue (this work hasn't been worked on, so not worried about this).
3) cgminer loads the queue with new work.
4) The GPU continues to work on any data in its cycle (can't be interrupted; length based on intensity).
5) If the GPU finds a share, that share is considered stale and not submitted (SS increments by 1).
6) The GPU begins work on new data.
If so, does enabling the --submit-stale option cause it to work like this:
1) LP arrives.
2) cgminer flushes any work in the queue (this work hasn't been worked on, so not worried about this).
3) cgminer loads the queue with new work.
4) The GPU continues to work on any data in its cycle (can't be interrupted; length based on intensity).
5) If the GPU finds a share, that share will still be submitted (if actually stale, R increments by 1; otherwise A increments by 1).
6) The GPU begins work on new data.
The reason I ask is that the LP may be from new transactions or an NMC header change, in which case the share is still valid for BTC. I understand using the --submit-stale option will potentially increase rejects, but if I understand this right, SS + R should be equal regardless of whether --submit-stale is used. Right? It makes me think I should use --submit-stale. The only downside is that it increases the potential workload for the server. The upside is that I may gain a few more accepted shares.

Thank you very much for your donation. Your interpretation is spot on. --submit-stale is fine since pools don't usually penalise you for submitting shares they reject; it just increases your reject count.
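The two timelines above differ only in what happens to a share found after the flush. A minimal sketch of that decision in C (the function and variable names here are illustrative, not cgminer's actual internals):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the stale-share decision described above.
 * opt_submit_stale stands in for the --submit-stale flag; without it,
 * a share found against flushed work is counted as SS and dropped,
 * with it the share is sent and the pool decides (A or R). */
static bool should_submit(bool work_is_stale, bool opt_submit_stale)
{
    if (!work_is_stale)
        return true;          /* fresh share: always submit */
    return opt_submit_stale;  /* stale share: submit only if --submit-stale */
}
```

Under this model the total SS + R is the same either way; the flag only moves shares from the SS column into the submitted (A or R) columns.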
|
|
|
1 BTC for you ckolivas, you deserve it.
And another to kano, thanks for your hints.
Thanks, much appreciated =)
|
|
|
New version: 2.0.8, links in top post as always.
Version 2.0.8 - November 11, 2011
- Make longpoll do a mandatory flushing of all work even if the block hasn't changed, thus supporting longpoll initiated work change of any sort and merged mining.
- Byteswap computed hash in hashtest so it can be correctly checked. This fixes the very rare possibility that a block solve on solo mining was missed.
- Add x86_64 w64 mingw32 target.
- Allow a fixed speed difference between memory and GPU clock speed with --gpu-memdiff that will change memory speed when GPU speed is changed in autotune mode.
- Don't load the default config if a config file is specified on the command line.
- Don't build VIA on apple since -a auto bombs instead of gracefully ignoring VIA failing.
- Build fix for dlopen/dlclose errors in glibc.
|
|
|
In preparation for a new version, I finally stopped my main miner that was running 2.0.7 for the longest period I've run cgminer for. I figured I should post what my miner ran like for that time period. Bear in mind I was load balancing so the reject rate would be slightly higher.
[2011-11-11 19:47:02] Started at [2011-10-26 07:04:04]
[2011-11-11 19:47:02] Runtime: 396 hrs : 42 mins : 58 secs
[2011-11-11 19:47:02] Average hashrate: 1749.3 Megahash/s
[2011-11-11 19:47:02] Solved blocks: 2
[2011-11-11 19:47:02] Queued work requests: 612950
[2011-11-11 19:47:02] Share submissions: 569593
[2011-11-11 19:47:02] Accepted shares: 567274
[2011-11-11 19:47:02] Rejected shares: 2319
[2011-11-11 19:47:02] Reject ratio: 0.4
[2011-11-11 19:47:02] Hardware errors: 0
[2011-11-11 19:47:02] Efficiency (accepted / queued): 93%
[2011-11-11 19:47:02] Utility (accepted shares / min): 23.83/min
[2011-11-11 19:47:02] Discarded work due to new blocks: 29680
[2011-11-11 19:47:02] Stale submissions discarded due to new blocks: 274
[2011-11-11 19:47:02] Unable to get work from server occasions: 1220
[2011-11-11 19:47:02] Work items generated locally: 31636
[2011-11-11 19:47:02] Submitting work remotely delay occasions: 956
[2011-11-11 19:47:02] New blocks detected on network: 2191
[2011-11-11 19:47:02] Summary of per device statistics:
[2011-11-11 19:47:02] GPU0 74.5C 5321RPM | (5s):450.0 (avg):442.6 Mh/s | A:143414 R:593 HW:0 U:6.03/m I:9
[2011-11-11 19:47:02] GPU1 73.5C 5266RPM | (5s):428.8 (avg):426.4 Mh/s | A:138718 R:547 HW:0 U:5.83/m I:9
[2011-11-11 19:47:02] GPU2 74.0C 5558RPM | (5s):430.1 (avg):430.9 Mh/s | A:139513 R:560 HW:0 U:5.86/m I:9
[2011-11-11 19:47:02] GPU3 73.5C 4479RPM | (5s):443.9 (avg):449.4 Mh/s | A:145629 R:619 HW:0 U:6.12/m I:9
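The derived statistics in the summary can be recomputed from the raw counts as a sanity check. A small sketch using the definitions cgminer itself prints (efficiency = accepted / queued, utility = accepted shares per minute); the function names are mine, not cgminer's:

```c
#include <assert.h>

/* Recompute the derived statistics from the raw counts in the summary
 * above. Runtime 396h 42m 58s is 1428178 seconds; 567274 accepted,
 * 612950 queued. */
static double efficiency_pct(long accepted, long queued)
{
    return 100.0 * (double)accepted / (double)queued;
}

static double utility_per_min(long accepted, double runtime_secs)
{
    return (double)accepted / (runtime_secs / 60.0);
}
```

Plugging in the numbers above gives an efficiency of about 92.5% (printed rounded as 93%) and a utility of about 23.83 shares/min, matching the log.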
|
|
|
The CPU usage of cgminer is extremely low. Most of the CPU usage you see is the driver interacting with the GPU. Even in debug mode there would be no demonstrable performance hit. As for hashing, it will hash if there's something to hash and something to communicate with. It won't keep hashing randomly on any old shit just to pass the time. So if it was hashing, then it was still communicating with something, or the driver hung.
|
|
|
You can add solo mining as a backup in the current version as well, it just won't show any shares because there's no such thing, only block solves.
How do we know for sure that it is actually working? I'm still using 2.0.5 part of the time because it gives feedback that solo mining is working. - Sam

Faith... Or debug mode if you're really concerned.
|
|
|
That won't help if it's the first time you're running it...
If you have a previous working version, you can copy all the .bin files from there into the directory of the new version and it should work. It remains a mysterious windows only bug.
|
|
|
The work item stores a reference to which pool it came from and that reference is never changed. If the pool itself is having communication trouble of some sort, it's entirely possible that the pool doesn't recognise its own work afterwards? Unless there's a bug I haven't tracked down, cgminer never changes reference to where the work came from. Now if you're piping it through some other proxy of some sort or hopper or something, then perhaps they mess with the headers as they come through.
static bool submit_upstream_work(const struct work *work)
{
	struct pool *pool = work->pool;
	...
	val = json_rpc_call(curl, pool->rpc_url, pool->rpc_userpass, s, false, false, &rolltime, pool);
	...
}
So it sends the work upstream each time with the pool information directly stored in the work structure. As I said, there may be a bug I haven't tracked down, but that field is never changed once the work is grabbed.
|
|
|
I believe people found that 11.9 fixed the CPU bug with one GPU on windows only when the intensity was low. If the intensity is cranked up, the CPU usage goes up. Me, I'm still using 11.6 on linux, the last good one for CPU usage.
|
|
|
They drove a dumptruck full of money and unloaded it on my front lawn. I'm not made of stone. It's not quite true, but some people are still donating (thanks!!), and the alleged other uses for longpoll on the same block were valid.
I'm using --donate 5 right now. Please don't confuse that with supporting Merged Mining, because I don't. Thanks for your great work. - Sam

For that I thank you muchly! I was thinking more about cgminer simply working as well as possible with the largest BTC mining pools by default.
|
|
|
They drove a dumptruck full of money and unloaded it on my front lawn. I'm not made of stone. It's not quite true, but some people are still donating (thanks!!), and the alleged other uses for longpoll on the same block were valid.
Actually I agree with you Kano, NMC only eventually takes value from BTC. It has to come from somewhere. It was not the argument for merged mining that made me -indirectly- add support for it.
Plus, basically, there is only so much I can influence how people mine bitcoin from within my software.
|
|
|
2^20 iterations is 1048576 hashes, right? If we consider the lower and upper bounds of a modern GPU to be 100MH/s and 500MH/s, that is 0.01s down to 0.002s per kernel run.
Not sure how a GPU can take full seconds to finish. Wouldn't that cause system instabilities? I mean the GPU is unusable for other tasks while the OpenCL kernel is running.
2^20/2^32 = 0.024% Thus each interrupted "cycle" reduces EV (expected value) by 0.00024 shares.* *Granted each individual iteration will either be 1 share lost or 0 shares lost but the EV is still a fractional share.
Goddamnit, I was still thinking of when cgminer supported intensities up to 14, which would tie up a 5770 for 5 seconds at a time. Of course it only supports up to intensity 10 now, because 14 proved to be a waste of time: it didn't improve throughput and it did cause nice stalls - plus people tend to just set something to the top value thinking it will definitely be better. So yes, you're right: at intensity 10 it's not much time/lost work. I had forgotten exactly how much it was, but intensity 10 is 2^25, just for the record.
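The arithmetic in the exchange above can be reproduced directly. A sketch assuming 2^20 hashes per kernel invocation (the figure quoted in the question; as the reply notes, the real count depends on intensity, worksize and vectors):

```c
#include <assert.h>

/* Reproduce the figures from the exchange above: time per kernel
 * invocation at a given hashrate, and the expected shares lost when
 * one invocation's results are discarded. Assumes 2^20 hashes per
 * invocation, as in the question. */
#define HASHES_PER_KERNEL (1u << 20)  /* 1048576 */

static double kernel_seconds(double hashrate)  /* hashrate in hashes/s */
{
    return (double)HASHES_PER_KERNEL / hashrate;
}

static double expected_shares_lost(void)
{
    /* a (difficulty-1) share needs a hash below target:
     * probability 2^-32 per hash attempted */
    return (double)HASHES_PER_KERNEL / 4294967296.0;  /* / 2^32 */
}
```

This gives roughly 0.0105s at 100MH/s and 0.0021s at 500MH/s, and an expected loss of 2^-12, about 0.00024 shares, per discarded invocation, matching the numbers in the post.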
|
|
|
At high intensity levels, the time spent stuck in GPU code is on the order of SECONDS, not microseconds. The faster the GPU, the shorter it is, but at intensity 9 or 10 a 200Mhash card could actually be working for more than 5 seconds on each iteration into the GPU. It takes less time when there is only one thread per GPU, but then the hash rate drops off slightly.

And NO, there is NOT a way to interrupt a GPU once it has started working on the OpenCL code. While the worksize is somewhere between 64 and 256, the actual requested work every time the GPU is loaded is up to 2^20 iterations (at intensity 10). There is no way to interrupt it; it is not like doing something on a CPU. Faster cards won't take long to return even at high intensity levels, but basically, any shares discovered during this time on the GPU do NOT get returned until the GPU has finished its 2^20 iterations. That's just the way OpenCL kernel code works: the GPU takes its work, runs off and does it independently of anything else going on in your PC, and only returns answers once it's done.

So there is work "wasted" here if the GPU starts just before a longpoll, goes out for say 5 seconds and finds a share in that time. It is then obliged to discard the share, since cgminer now says that work is no longer valid for the current block, unless you enable the --submit-stale option.
|
|
|
Waiting for work DOES hurt the miner. I can't think of a scenario where a longpoll should make a miner wait for work. Are you telling me a pool can universally throw out enough work for every single miner working for it in the microsecond after sending out a longpoll at a rate fast enough to guarantee their GPUs don't go idle? Let's close the discussion before I get more annoyed.
|
|
|
Waiting for work DOES hurt the miner. That's why cgminer tries to tell if some of the work shouldn't actually be discarded by determining if it's from the valid next block and should only throw out work that's from the previous block. However you dress it up for me to support your pool better, there is likely to be more downtime waiting for work by blindly supporting LP for this shit. Admittedly since every pool has taken a hashrate hit lately, none are close to capacity any more, but one day they will be again.
|
|
|
There is no benefit to ignoring longpoll results.
That's not true. cgminer can detect block changes before it receives a longpoll, especially when mining on multiple pools at once. Thus it will have already thrown out the work. Getting a longpoll again after that will make cgminer throw out yet more work. Why do you think I've been resisting dealing with this fucking issue?
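The early detection being described, noticing a new block from other work before the longpoll arrives, amounts to comparing the previous-block hash embedded in incoming work against the last one seen. A hedged sketch of that idea (names and sizes are simplified; this is not cgminer's actual code):

```c
#include <assert.h>
#include <string.h>

/* Illustrative sketch of detecting a block change before a longpoll
 * arrives: compare the prev-block hash in newly fetched work against
 * the last one recorded. */
#define PREVHASH_LEN 32

static unsigned char current_prevhash[PREVHASH_LEN];

/* Returns 1 if this work reveals a new block (and records it), else 0.
 * A real miner would flush its queued work when this returns 1; a
 * longpoll arriving afterwards for the same block adds nothing new. */
static int block_changed(const unsigned char *prevhash)
{
    if (memcmp(prevhash, current_prevhash, PREVHASH_LEN) == 0)
        return 0;
    memcpy(current_prevhash, prevhash, PREVHASH_LEN);
    return 1;
}

/* Demonstrate: first sight of a hash is a change, repeats are not. */
static int demo_detection(void)
{
    unsigned char h1[PREVHASH_LEN] = { 0xaa };
    unsigned char h2[PREVHASH_LEN] = { 0xbb };

    return block_changed(h1) == 1 &&
           block_changed(h1) == 0 &&  /* same block again: no change */
           block_changed(h2) == 1;    /* new block detected */
}
```

With multiple pools feeding work in, whichever pool's work first carries the new prev-hash triggers the flush, which is why a longpoll arriving later would only throw out yet more work.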
|
|
|
|