sp_ (OP)
Legendary
Offline
Activity: 2912
Merit: 1087
Team Black developer
|
|
September 03, 2015, 02:17:08 PM |
|
I'm seeing regression in pool-side hash rates as well. ypool.ca, X11 algo, git at 562f049 with 7a47e66 reverted. Some fun new (to me) log messages like this:

[2015-09-03 07:11:42] accepted: 4950/4995 (99.10%), 5218 kH/s yes!
[2015-09-03 07:11:46] GPU #0: GeForce GTX 960, 5224 Temp= 60C Fan= 10%
[2015-09-03 07:11:54] accepted: 4951/4996 (99.10%), 5224 kH/s yes!
[2015-09-03 07:12:00] GPU #0: GeForce GTX 960, 5224 Temp= 60C Fan= 8%
[2015-09-03 07:12:08] nonce ee4b0e00 was already sent 115 seconds ago
[2015-09-03 07:12:10] nonce b6f40e00 was already sent 115 seconds ago
[2015-09-03 07:12:13] nonce 672b0d00 was already sent 115 seconds ago
[2015-09-03 07:12:15] GPU #0: GeForce GTX 960, 5224 Temp= 60C Fan= 6%
[2015-09-03 07:12:20] nonce d34d0700 was already sent 115 seconds ago
[2015-09-03 07:12:23] nonce 9fae0c00 was already sent 115 seconds ago
[2015-09-03 07:12:29] GPU #0: GeForce GTX 960, 5222 Temp= 60C Fan= 5%
[2015-09-03 07:12:29] nonce 223a0300 was already sent 115 seconds ago
[2015-09-03 07:12:33] stratum connection reset
[2015-09-03 07:12:33] mine.xpool.ca:2000 x11 block 330186
[2015-09-03 07:12:34] accepted: 4952/4997 (99.10%), 5223 kH/s yes!
[2015-09-03 07:12:36] GPU #0: GeForce GTX 960, 5224 Temp= 64C Fan= 5%
The drop off is pretty apparent. I think I'll look into a component-izing refactor as my next contribution. There are too many globals being tickled all over the show to have confidence that any changes (outside the kernels) aren't going to have side effects.

my 50 cents: the miner may find several nonces per loop, and I would bet that it restarts mining at the nonce of the first one even when it finds a second one, which means it finds that one again in the next iteration. There has never been any real duplicate problem (maybe ZRC, but it's a bit special in itself), just some screw-up in the miner...

This is a bug I have been tracking; it is not present in release 63. After a while the miner will start to work on data it worked on 2 minutes ago, causing duplicate shares. You should get the latest version of the source code though; I disabled the duplicate checking because it slows down the mining. Better to let the pool find out and reset the connection. In release 64, Lyra2v2 seems to be affected. Quark is fine. edit: It doesn't happen on pools/coins which switch blocks or difficulty rapidly.
|
|
|
|
t-nelson
Member
Offline
Activity: 70
Merit: 10
|
|
September 03, 2015, 02:39:11 PM |
|
I'm seeing regression in pool-side hash rates as well.
ypool.ca, X11 algo, git at 562f049 with 7a47e66 reverted.
Some fun new (to me) log messages like this:
-- SNIP --
The drop off is pretty apparent.
-- SNIP --
I think I'll look into a component-izing refactor as my next contribution. There are too many globals being tickled all over the show to have confidence that any changes (outside the kernels) aren't going to have side effects.
my 50 cents: the miner may find several nonces per loop, and I would bet that it restarts mining at the nonce of the first one even when it finds a second one, which means it finds that one again in the next iteration. There has never been any real duplicate problem (maybe ZRC, but it's a bit special in itself), just some screw-up in the miner...

This is a bug I have been tracking; it is not present in release 63. After a while the miner will start to work on data it worked on 2 minutes ago, causing duplicate shares. You should get the latest version of the source code though; I disabled the duplicate checking because it slows down the mining. Better to let the pool find out and reset the connection. In release 64, Lyra2v2 seems to be affected. Quark is fine. edit: It doesn't happen on pools/coins which switch blocks or difficulty rapidly.

Yup. I'm seeing a bunch of dupe shares now at HEAD. It seems to start ~2 min after the last difficulty adjustment, like you suggest. https://github.com/sp-hash/ccminer/blob/windows/ccminer.cpp#L1974 is starting to look suspect...
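The restart-at-the-first-nonce theory above can be sketched in a few lines. Everything here is a hypothetical illustration with made-up names, not ccminer's actual code: the point is only that resuming the scan just past the first hit re-scans (and re-submits) any later hit from the same range.

```cpp
#include <cassert>
#include <cstdint>

// Suspected buggy behaviour: a scan range yields two valid nonces, and the
// miner resumes from just past the FIRST hit, so the second hit is still
// inside the next range and gets found -- and submitted -- a second time.
uint32_t resume_after_first(uint32_t first_hit, uint32_t /*range_end*/) {
    return first_hit + 1;              // duplicate-prone restart point
}

// Fix under this theory: resume past everything already scanned, so no
// nonce from the finished range can be found again.
uint32_t resume_after_range(uint32_t /*first_hit*/, uint32_t range_end) {
    return range_end + 1;
}
```

With hits at nonces 100 and 200 in a range ending at 1000, the first variant restarts at 101 and re-finds 200; the second restarts at 1001 and cannot.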
|
BTC: 1K4yxRwZB8DpFfCgeJnFinSqeU23dQFEMu DASH: XcRSCstQpLn8rgEyS6yH4Kcma4PfcGSJxe
|
|
|
sp_ (OP)
Legendary
Offline
Activity: 2912
Merit: 1087
Team Black developer
|
|
September 03, 2015, 03:36:55 PM |
|
Found the bug, I think. I put the reset flag in the change-difficulty method in utils.cpp. I was testing something again and it got committed. There is a problem when the pool is changing the difficulty rapidly: ccminer will submit shares with the old difficulty, and you get rejects. Try lyra2v2 on p2pool.pl: 5% reject rate.
|
|
|
|
kama
|
|
September 03, 2015, 03:46:03 PM |
|
sp_, I want to ask you something, buddy:
why don't you compile it with mingw64 for Windows? It is better than Visual Studio's compiler, because a program compiled with mingw64 gives a higher hashrate. Or it could be mingw32 too.
|
|
|
|
NiceHashSupport
|
|
September 03, 2015, 03:48:57 PM |
|
Found the bug, I think. I put the reset flag in the change-difficulty method in utils.cpp. I was testing something again and it got committed. There is a problem when the pool is changing the difficulty rapidly: ccminer will submit shares with the old difficulty, and you get rejects. Try lyra2v2 on p2pool.pl: 5% reject rate.
I hope you know that by the stratum specification, when a miner receives a new diff, that diff shall be used for every next job, and NOT for current jobs, as so many miners (even official cgminer) and many pools get wrong. When coding a miner, it is simply best to have another variable in the job/work structure called diff. Compare each share to this diff. When the pool provides a new diff, you just save it and apply it to every job that arrives afterwards. If you make it like this (per the official stratum specification), you will also reach maximum efficiency on NiceHash. If you use the old/wrong principle where a new diff is immediately applied, your miners will be losing a few percent of hashrate on NiceHash and causing slight rejects with 'share above target'.
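The per-job difficulty rule described above can be sketched as a tiny state machine. This is a hypothetical illustration (the `Stratum`/`Job` names and shape are mine, not ccminer's): the pool-announced diff goes into a pending slot and is copied into each job as it is issued, so jobs already in flight keep the diff they were issued with.

```cpp
#include <cassert>

struct Job {
    double diff;                        // difficulty this job was issued at
};

struct Stratum {
    double pending_diff = 1.0;          // last mining.set_difficulty value

    void on_set_difficulty(double d) {  // pool sent a new difficulty
        pending_diff = d;               // do NOT touch jobs already in flight
    }
    Job on_notify() {                   // pool sent a new job
        return Job{pending_diff};       // new diff applies from this job on
    }
    static bool share_ok(const Job& j, double share_diff) {
        return share_diff >= j.diff;    // compare against the job's own diff
    }
};
```

A diff change arriving mid-job then cannot invalidate shares for that job; only jobs issued afterwards use the new value.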
|
|
|
|
sp_ (OP)
Legendary
Offline
Activity: 2912
Merit: 1087
Team Black developer
|
|
September 03, 2015, 03:52:32 PM |
|
sp_, I want to ask you something, buddy: why don't you compile it with mingw64 for Windows? It is better than Visual Studio's compiler, because a program compiled with mingw64 gives a higher hashrate. Or it could be mingw32 too.
x86 (32-bit) builds are always faster on Windows. The latest CUDA compiler (7.5 RC1) shows a drop in hashrate of 10% in 64-bit and 30% in 32-bit (x11).
|
|
|
|
djm34
Legendary
Offline
Activity: 1400
Merit: 1050
|
|
September 03, 2015, 03:53:06 PM |
|
sp_, I want to ask you something, buddy:
why don't you compile it with mingw64 for Windows? It is better than Visual Studio's compiler, because a program compiled with mingw64 gives a higher hashrate. Or maybe it could be mingw32 too.
Actually, the compilation is done with nvcc, not the Visual Studio compiler, and that would still be the case with mingw32 or 64 (unless you recompile CUDA entirely...)
|
djm34 facebook page
BTC: 1NENYmxwZGHsKFmyjTc5WferTn5VTFb7Ze
Pledge for neoscrypt ccminer to that address: 16UoC4DmTz2pvhFvcfTQrzkPTrXkWijzXw
|
|
|
Schleicher
|
|
September 03, 2015, 03:56:10 PM |
|
sp, could you please put the http protocol back in the source code? I can't wallet-mine. And djm, lyra2v2 is the same. thx
Well, I still can't wallet-mine with the latest versions; the http protocol failed. Does it work in the 1.6.6 fork by tvpruvot? I don't know, sp.

Do you mean 1.5.64? My version works with solo mining (and was tested with it), so it is probably on your end (wrong RPC port, username or password).

I know someone who is using your older version because the new ones won't work for solo mining. Maybe the cuda build app files have changed, because at about the same time sp's releases stopped working too. I'm going to try an older version of your lyra2v2, djm.

Older? There are 4 releases which are mostly bug corrections (and compatibility issues), and none of these changes are related to solo mining. All are there, and all are working: https://github.com/djm34/ccminer-lyra/releases Regarding solo mining, I don't know what the default vtc port is; I use a custom config file to define which ports are used and some other settings. (Make sure as well that you are using the latest wallet...) Also, I am using --api-bind 0 (as it becomes problematic when there are several instances), and obviously don't use the same port for api-bind and the wallet. edit: there was a change pushed by pallas to bmw256; however, I don't think it would break solo mining. You can probably test that with testnet.

Ok, thank you djm. The other person had to use your version before ver4 to make it work. But still, something happened (related?) to solo mining about the time your oldest version was released, I think. Will try your oldest tomorrow. But to clarify: sp r43 solo mining a coin works... now, with exactly the same bat file, using r55 or r60 or r62 it won't work. Will try again tomorrow. thx

I have tested solo mining Quarkcoin with release 64 now on my system. Command line:

ccminer -O name:password -o http://localhost:8372 -a quark --no-gbt

This happened: ccminer actually crashed: Unhandled exception at 0x00C99824 in ccminer.exe: Stack cookie instrumentation code detected a stack-based buffer overrun.
|
|
|
|
sp_ (OP)
Legendary
Offline
Activity: 2912
Merit: 1087
Team Black developer
|
|
September 03, 2015, 04:08:30 PM |
|
If you use the old/wrong principle where a new diff is immediately applied, your miners will be losing a few percent of hashrate on NiceHash and causing slight rejects with 'share above target'.

Yes, and this is a problem, since the threads continue to mine for a while and send results. T Nelson and Pallas have submitted some changes to make the miner exit quicker. Hopefully the next ccminer will be better.
|
|
|
|
sp_ (OP)
Legendary
Offline
Activity: 2912
Merit: 1087
Team Black developer
|
|
September 03, 2015, 04:09:46 PM |
|
ccminer actually crashed: Unhandled exception at 0x00C99824 in ccminer.exe: Stack cookie instrumentation code detected a stack-based buffer overrun.

Same crash if you run ccminer with a parameter that is not recognized, e.g. ccminer --not-implemented. Crash. It doesn't exit properly on Windows.
|
|
|
|
NiceHashSupport
|
|
September 03, 2015, 04:13:26 PM |
|
If you use the old/wrong principle where a new diff is immediately applied, your miners will be losing a few percent of hashrate on NiceHash and causing slight rejects with 'share above target'.

Yes, and this is a problem, since the threads continue to mine for a while and send results. T Nelson and Pallas have submitted some changes to make the miner exit quicker. Hopefully the next ccminer will be better.

I do not think you understood what I wrote; please read it again. Here is more about this: https://bitcointalk.org/index.php?topic=28402.msg12165885#msg12165885 (with an example of what is wrong). We submitted this even to official cgminer; at first they included the patch, but later revoked it, as there are many pools that don't work correctly, even ckpool. And since the creators of ckpool = the creators of cgminer, the patch will probably never be in cgminer. But hopefully you will not make the same mistake.
|
|
|
|
Schleicher
|
|
September 03, 2015, 04:31:19 PM |
|
ccminer actually crashed: Unhandled exception at 0x00C99824 in ccminer.exe: Stack cookie instrumentation code detected a stack-based buffer overrun. Same crash if you run ccminer with a parameter that is not recognized, e.g. ccminer --not-implemented. Crash. It doesn't exit properly on Windows.

Without --no-gbt it crashes too:

[2015-09-03 18:18:28] 1 miner thread started, using 'quark' algorithm.
[2015-09-03 18:18:28] Binding thread 0 to cpu 0 (mask 1)
[2015-09-03 18:19:37] GPU #0: GeForce GTX 970, 14004
[2015-09-03 18:20:49] GPU #0: GeForce GTX 970, 13897
[2015-09-03 18:22:16] GPU #0: GeForce GTX 970, 13907
[2015-09-03 18:23:37] GPU #0: GeForce GTX 970, 13879
[2015-09-03 18:24:52] GPU #0: GeForce GTX 970, 13712
[2015-09-03 18:26:02] GPU #0: GeForce GTX 970, 13762
[2015-09-03 18:26:57] JSON-RPC call failed: Invalid parameter
[2015-09-03 18:26:57] submit_upstream_work json_rpc_call failed

I think it happens when ccminer is trying to send a result.
|
|
|
|
joblo
Legendary
Offline
Activity: 1470
Merit: 1114
|
|
September 03, 2015, 04:51:47 PM Last edit: September 03, 2015, 05:08:36 PM by joblo |
|
linux guys, check out the kopiemtu thread on litecoin.org. i solved the overclocking multiple cards issue around page 60-65 or so. i run driver 346.59 and cuda 6.5
I finally got OC and fan control on a second card. I initially tried a dummy plug, but that didn't work. However, with the dummy plug still connected to the second card, I was able to get Coolbits to work by doing the following:

nvidia-xconfig -a

then editing xorg.conf to add the following to each Screen section:

Option "AllowEmptyInitialConfiguration" "True"
Option "Coolbits" "12"

This is essentially the same procedure referenced by hashbrown, without the detailed explanation. He states a monitor needs to be connected for the initial config, so I think the dummy plug was useful. It still works after removing the plug. Thanks.
|
|
|
|
sp_ (OP)
Legendary
Offline
Activity: 2912
Merit: 1087
Team Black developer
|
|
September 03, 2015, 05:22:30 PM Last edit: September 03, 2015, 05:36:59 PM by sp_ |
|
On p2pool.pl I get this when a block is changed and the diff is low. As you can see, ccminer uses 0.8 seconds to shut down the threads, and by that time it has found a lot of solutions. The hashrate is also lower when the default diff is as low as this. You can reproduce with this bat:

ccminer.exe -a lyra2v2 -u Vpir9UQWjDmajnC81eNS6nXVH2qAGos6Ma -p d=x -o http://eu.p2pool.pl:9171

I will try to solve it.
|
|
|
|
sp_ (OP)
Legendary
Offline
Activity: 2912
Merit: 1087
Team Black developer
|
|
September 03, 2015, 05:30:22 PM |
|
I do not think you understood what I wrote; please read it again. Here is more about this: https://bitcointalk.org/index.php?topic=28402.msg12165885#msg12165885 (with an example of what is wrong). We submitted this even to official cgminer; at first they included the patch, but later revoked it, as there are many pools that don't work correctly, even ckpool. And since the creators of ckpool = the creators of cgminer, the patch will probably never be in cgminer. But hopefully you will not make the same mistake.

As you can see in the picture above, ccminer has a problem when the pool changes the difficulty, but on NiceHash we get few rejects, since the default difficulty is much higher than in the p2pool.pl example.
|
|
|
|
pallas
Legendary
Offline
Activity: 2716
Merit: 1094
Black Belt Developer
|
|
September 03, 2015, 05:45:13 PM |
|
Found the bug, I think. I put the reset flag in the change-difficulty method in utils.cpp. I was testing something again and it got committed.

That pretty much fixed the low hashrate on pool AND the hashrate flooding! :-) You can revert the "prevent hashrate flooding" fix; now it's kinda reporting dry.
|
|
|
|
NiceHashSupport
|
|
September 03, 2015, 05:57:15 PM |
|
Found the bug, I think. I put the reset flag in the change-difficulty method in utils.cpp. I was testing something again and it got committed. There is a problem when the pool is changing the difficulty rapidly: ccminer will submit shares with the old difficulty, and you get rejects. Try lyra2v2 on p2pool.pl: 5% reject rate.

I hope you know that by the stratum specification, when a miner receives a new diff, that diff shall be used for every next job, and NOT for current jobs, as so many miners (even official cgminer) and many pools get wrong. When coding a miner, it is simply best to have another variable in the job/work structure called diff. Compare each share to this diff. When the pool provides a new diff, you just save it and apply it to every job that arrives afterwards. If you make it like this (per the official stratum specification), you will also reach maximum efficiency on NiceHash. If you use the old/wrong principle where a new diff is immediately applied, your miners will be losing a few percent of hashrate on NiceHash and causing slight rejects with 'share above target'.

I'm quite familiar with them, and I'm also rather sure this is incorrect. You do NOT apply it to every next job that arrives; it only takes effect after a new job with cleanjobs = true has arrived.

The cleanjobs flag set to true only signals miners to drop all current work and start the new job. It doesn't have anything to do with diff. I don't exactly understand what you are trying to say here with "NOT apply"? I do not think you understood what I wrote; please read it again. Here is more about this: https://bitcointalk.org/index.php?topic=28402.msg12165885#msg12165885 (with an example of what is wrong). We submitted this even to official cgminer; at first they included the patch, but later revoked it, as there are many pools that don't work correctly, even ckpool. And since the creators of ckpool = the creators of cgminer, the patch will probably never be in cgminer. But hopefully you will not make the same mistake.

As you can see in the picture above, ccminer has a problem when the pool changes the difficulty, but on NiceHash we get few rejects, since the default difficulty is much higher than in the p2pool.pl example.

This looks more like a stale share, since the shares were sent after the pool informed you about the new block.
|
|
|
|
sp_ (OP)
Legendary
Offline
Activity: 2912
Merit: 1087
Team Black developer
|
|
September 03, 2015, 06:04:50 PM |
|
This looks more like a stale share, since the shares were sent after the pool informed you about the new block.
Yes, it is. I am working on code that will exit the miner threads faster after a new block/diff change. ccminer with -i 19 will mine 2048 hashes every round. In the current version it has to wait until all the 2048 hashes are done, and there is no test to exit before the CPU test.
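The early exit described above amounts to splitting a round into slices and testing an abort flag between slices. This is a minimal sketch with hypothetical names (`g_work_restart`, `scan_round`), not ccminer's actual code:

```cpp
#include <algorithm>
#include <atomic>
#include <cstdint>

std::atomic<bool> g_work_restart{false};   // set by the stratum thread on new block/diff

// Runs one mining round of `total` hashes in chunks of `slice`, checking the
// restart flag between chunks. Returns how many hashes were done before aborting.
uint32_t scan_round(uint32_t total, uint32_t slice) {
    uint32_t done = 0;
    while (done < total) {
        if (g_work_restart.load()) break;          // bail out between slices
        uint32_t n = std::min(slice, total - done);
        // ... launch the GPU kernel for n hashes here ...
        done += n;
    }
    return done;
}
```

The trade-off is slice size: smaller slices react faster to a block change but add kernel-launch overhead, which is presumably why the round wasn't split this way originally.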
|
|
|
|
sp_ (OP)
Legendary
Offline
Activity: 2912
Merit: 1087
Team Black developer
|
|
September 03, 2015, 07:53:23 PM |
|
I improved it a bit with the last commit. The fix is a stupid fix; I inserted a lot of tests... There must be a better way to do this. Kill the thread? I will build another release, since 64 had bugs.
|
|
|
|
sp_ (OP)
Legendary
Offline
Activity: 2912
Merit: 1087
Team Black developer
|
|
September 03, 2015, 08:08:09 PM |
|
I pointed my rig at NiceHash and I have 26.5 MH/s for the rig, verified on the pool, with no rejects. p2pool takes some time to get stable (when the difficulty reaches a good value).
|
|
|
|
|