cpuminer-opt v3.2 is released. This is a restructuring release with no new algos or optimizations.
- algo_gate is now used to select the RPC version in many instances
- Significant restructuring of algo_gate with realignment and renaming of many functions to be more descriptive and logical.
- Some gate functions were removed or replaced with variables.
- Code cleanup, fixed some compile warnings.
https://drive.google.com/file/d/0B0lVSGQYLJIZTzR2WU1NRGpYRnM/view?usp=sharing

Nice! I think that this version has a serious bug - on ZR5 it submits one share and after that everything is rejected as a duplicate share.

********** cpuminer-opt 3.2 ***********
A CPU miner with multi algo support and optimized for CPUs with AES_NI extension.
BTC donation address: 12tdvfF7KmAsihBXQXynT6E6th2c2pByTT
Forked from TPruvot's cpuminer-multi with credits to Lucas Jones, elmad, palmd, djm34, pooler, ig0tik3d, Wolf0 and Jeff Garzik.
Checking CPU capatibility...
Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
CPU arch supports AES_NI...YES.
SW built for AES_NI........YES.
Algo supports AES_NI.......YES.
Start mining with AES_NI optimizations...
[2016-05-10 09:47:54] Starting Stratum on stratum+tcp://ziftrpool.io:3032
[2016-05-10 09:47:54] 8 miner threads started, using 'zr5' algorithm.
[2016-05-10 09:47:57] Stratum difficulty set to 0.001
[2016-05-10 09:47:57] ziftrpool.io:3032 zr5 block 613501
[2016-05-10 09:47:58] CPU #0: 62.55 kH, 84.38 kH/s
[2016-05-10 09:47:59] accepted: 1/1 (100%), 62.55 kH, 84.38 kH/s yes!
[2016-05-10 09:47:59] CPU #0: 62.55 kH, 106.76 kH/s
[2016-05-10 09:47:59] accepted: 1/2 (50%), 62.55 kH, 106.76 kH/s nooooo
[2016-05-10 09:47:59] reject reason: duplicate share
[2016-05-10 09:48:00] CPU #0: 62.55 kH, 107.13 kH/s
[2016-05-10 09:48:00] accepted: 1/3 (33%), 62.55 kH, 107.13 kH/s nooooo
[2016-05-10 09:48:00] reject reason: duplicate share
[2016-05-10 09:48:00] CPU #0: 62.55 kH, 107.00 kH/s
[2016-05-10 09:48:00] accepted: 1/4 (25%), 62.55 kH, 107.00 kH/s nooooo
[2016-05-10 09:48:00] reject reason: duplicate share
[2016-05-10 09:48:01] CPU #0: 62.55 kH, 107.15 kH/s
[2016-05-10 09:48:01] accepted: 1/5 (20%), 62.55 kH, 107.15 kH/s nooooo
[2016-05-10 09:48:01] reject reason: duplicate share
[2016-05-10 09:48:01] CPU #0: 62.55 kH, 106.72 kH/s
[2016-05-10 09:48:01] accepted: 1/6 (17%), 62.55 kH, 106.72 kH/s nooooo
[2016-05-10 09:48:01] reject reason: duplicate share
[2016-05-10 09:48:02] CPU #0: 62.55 kH, 107.09 kH/s
[2016-05-10 09:48:02] accepted: 1/7 (14%), 62.55 kH, 107.09 kH/s nooooo
[2016-05-10 09:48:02] reject reason: duplicate share
[2016-05-10 09:48:03] CPU #0: 62.55 kH, 106.65 kH/s
[2016-05-10 09:48:03] accepted: 1/8 (12%), 62.55 kH, 106.65 kH/s nooooo
[2016-05-10 09:48:03] reject reason: duplicate share
[2016-05-10 09:48:03] CPU #0: 62.55 kH, 107.12 kH/s
[2016-05-10 09:48:03] accepted: 1/9 (11%), 62.55 kH, 107.12 kH/s nooooo
[2016-05-10 09:48:03] reject reason: duplicate share
[2016-05-10 09:48:04] CPU #0: 62.55 kH, 107.12 kH/s
^C[2016-05-10 09:48:04] SIGINT received, exiting

Bug confirmed, under investigation.
Edit: I had broken ZR5, fixed it, and apparently broke it again. I missed the second break because I can get a few accepts (I've seen up to 5) before the duplicate shares start. I never found the first break with code analysis; I simply backed out until it worked and recoded from there. I have found an intermediate build that worked before the second break and will recode again with more extensive testing, so I can identify experimentally what exactly broke it. It's not the preferred approach but it is effective. I've got 42 accepts @ 100% with the current test build.

Edit2: I've reimplemented most of the changes between the last stable build and v3.2 without breaking anything, but I saved the best for last. I want to clean up the code a bit first and do more testing before tackling the likely culprit.

Edit3: I have almost everything reimplemented and zr5 still works. That means I have no idea what the bug was, but I fixed it. I'm getting tired, so I'll wait till tomorrow and do another review and more testing before release.
|
|
|
|
|
|
Took the 970 out... I'm done with it. A bookmark for now.
I have a couple of cards that don't like to switch algos much, but they will run for days on one algo. One example is a 970 that will mine quark at default intensity fresh from a reboot, but over time, while profit switching, it will fail to start. Sometimes it will start at lower intensity, but it continues to degrade and eventually won't start at all until a system reboot. Then it will run at default intensity again. The symptoms smell like a memory leak, but if that were the case it should eventually affect all cards, and it doesn't. I'm stumped.
|
|
|
I think 50-60. Temps can rise to 70 without problems; GPUs are built for higher temps, so 50° is way too low as a margin I think. My fans are at 34% with an additional external fan at 70%, and the temp there is 65°.
But isn't a lower temperature better? I think, and it would sound pretty logical, that if it runs cooler it will also run longer. No No No No ^^For clear effect.^^ Running too low by maxing the fans will kill the fans. 51 C is 10 C too low; get a stable number at 60 C to 65 C. A little dramatic and misleading. Cooler HW is better, period. It is better to wear out the fans than burn out the GPU; they're cheaper to replace. You make it seem like a cooler GPU is bad for the GPU, which is just plain wrong. Overdriving the fans unnecessarily will shorten their life but won't hurt the GPU.
|
|
|
I know - intelligent people have better things to do with their time than reply to bounties. But still - I'm grateful & like to stick to my word. Not sure how I missed your sig.
Received, many thanks.
|
|
|
I'm going to ask this question again, looking for an answer from someone who knows what they're talking about (it shouldn't take you more than 5 or 10 min). Per this thread - https://bitcointalk.org/index.php?topic=1438066.msg14704298#msg14704298 - "The bottleneck is going to be sorting - so it will be transfer speed between the ram and the processor and the processor speed." Not the amount of RAM that will be the bottleneck? And on the official ZCash forums there's this - "Note that Equihash is not intended to be "GPU-resistant"; only to limit the relative advantage of a GPU over a CPU. There is some discussion of the relative efficiency of CPUs and GPUs for parallel sorting, which is designed to be the main performance bottleneck, on page 10 (section VI part b) of the Equihash paper." Now my question is - what is "sorting" going to look like hardware wise? What's going to be the most efficient / give the most hash from the following: memory size (8GB > 4GB), memory speed (Nano > 390X), or card processing power? Any ideas on the power usage / hash ratio? Which of these combinations is likely to perform better, and by how much?
6 x R9 280X 3GB
3.5 x R9 390
1.5 x R9 Nano
I'm looking for an answer from Wolf0, Claymore or Tromp.

Sorting is the first real programming assignment for any comp-sci student. What it looks like HW wise is shuffling data around in memory until it is sorted. The amount of memory is not really a factor, but the cache size is. Sorting involves random accesses to memory; the bigger the cache, the more likely an access will hit the cache. The worst case scenario is that each access misses the cache and has to fetch the data from memory. That's where the second bottleneck comes into play, memory bandwidth: how fast you can get the data from memory to the cache. This seems to be a similar approach to HOdl, except that HOdl doesn't sort but search. Searching can be done sequentially to take advantage of the cache.
As previously mentioned, sorting is mostly random. As long as there is enough RAM to contain the entire sort set, the amount of RAM doesn't matter. Cache size and memory bandwidth are the main limiting factors.

Give me your BTC address - & thank you for the explanation. http://www.anandtech.com/show/9390/the-amd-radeon-r9-fury-x-review/4 - essentially, if cache winds up being the primary bottleneck, then Fiji based cards (2 MB L2 cache) could be up to 2x as fast as the Hawaii architecture (1 MB cache), and Polaris may thump all of them with 6 MB of cache. http://techreport.com/news/29616/amd-will-introduce-two-polaris-gpus-this-year

I was reading a bit of the white paper and there may be techniques to optimize the sort based on cache size. The list could be split into smaller chunks that fit in the cache; once all the chunks have been sorted, they can be merged into one big sorted list. There may already exist such an algorithm, but I haven't looked at sorting algorithms since that first assignment in school. The only trick would be tuning it for the cache size of the CPU being used. I didn't really respond for the purpose of claiming your bounty, but my BTC addr is in my sig.
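The chunk-and-merge idea above can be sketched with standard Unix tools - it is the same scheme `sort(1)` itself uses for data larger than memory: sort pieces that fit, then merge the sorted pieces. File names and the chunk size here are only illustrative; a real implementation would pick the chunk size from the CPU's cache size.

```shell
# Sort in "cache-sized" chunks, then merge - a stand-in for a cache-blocked sort.
seq 1000 | shuf > unsorted.txt        # some unsorted data
split -l 100 unsorted.txt chunk_      # chunks small enough to "fit in cache"
for f in chunk_*; do
    sort -n -o "$f" "$f"              # sort each chunk independently
done
sort -n -m chunk_* > sorted.txt       # -m: merge already-sorted inputs
```

Merging sorted runs is sequential, so only the chunk-sorting phase does the cache-hostile random accesses, and each of those now stays inside one chunk.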
|
|
|
|
|
|
Apparently, the pool has not been paying anyone, from the conversations here. --scryptr
Sure looks that way. I have MUE that never matured because they dropped it. Fortunately my balance is less than 1 mBTC, so it's no big deal to write it off. Cue Queen: "Another one bites the dust".
|
|
|
OK, I've found how to reset to the P2 state without rebooting your computer and without using Inspector, just this: nvidia-smi -rac
Very useful; it was a pain in the ass to reset each time I need to use another algo, which would otherwise result in a driver crash in the P0 state, like quark... probably due to the high overclock...
Quark is the one with the most problems, then neoscrypt and lyra2v2... but quark is the worst, even with no OC'ing. There was a command in cudaminer - -C 0, -C 1, -C 2 - but they're not in ccminer, and I don't know if that would help with the OC'ing or not.

Yeah, when I shift from ethereum to quark with the same OC (+150 core/mem) I always get a crash, and qubit is the same. Is there a bug in the miner, maybe? I don't remember such a thing with the 750 Ti, and I had 10 of those babies... well, there was no ethereum back then...

Do you use different intensities for different algos? The default -i may be too high when combined with OC on some algos.

All at default, but I set the P0 state for ethereum; maybe that is the problem? Also the TDP is limited to 60%.

I don't know if the P state has anything to do with the problem, but I would recommend testing the algos that crash with different intensities to find what is stable. Intensity can be set on a per algo basis because it is a ccminer parameter; that way you can optimize each algo individually for each rig. P state, TDP, OC etc. are per GPU, so changing them affects all algos. Another point is that different cards have different intensity limits, so if you have different cards in the same rig you will have to set a value that works on all cards, which may not be optimum on some of them.
|
|
|
|
|
|
Finally it works! (Almost... but it's very close to working ![Wink](https://bitcointalk.org/Smileys/default/wink.gif)) Many thanks! Last question: how do I keep it running after closing the ssh session?

Not running this myself on Linux, but it should be one of these options:
- Run the daemon, something like "./hodlermined".
- Put the program in the background, something like "./hodlermine &". The & sign detaches the program from the console so it doesn't stop when you log out.
- Use the program screen to get a virtual console that will not close but keeps running in the background. You can detach from this, log out, and then log back in again later. Good for programs whose data output you need to follow.
Good luck.

It works with '&', many thanks again! But one little problem remains: how do I bring the hodlminer process back to the foreground after reconnecting?

fg
http://linuxcommand.org/lts0080.php
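For reference, the options above look like this in practice. The miner path and pool arguments are only placeholders taken from elsewhere in the thread, and nohup is added here because a bare '&' alone may still let the shell's hangup signal kill the job on some setups.

```shell
# Option 1: nohup - detaches from the terminal and ignores the hangup
# signal sent at logout. (Miner path and arguments are placeholders.)
nohup ./hodlminer -o stratum+tcp://hodl.suprnova.cc:4693 -u user.worker > miner.log 2>&1 &
echo $! > miner.pid          # remember the PID for later

# After reconnecting:
#   tail -f miner.log        follow the miner's output
#   kill "$(cat miner.pid)"  stop it when done

# Option 2: screen - a detachable virtual console:
#   screen -S miner          start a named session, run the miner in it
#   Ctrl-a d                 detach, then log out freely
#   screen -r miner          reattach after logging back in
```

With plain '&' and fg, the job only survives as long as the original shell; nohup or screen survive the logout itself.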
|
|
|
tanx tbear... I'm going to test this in the coming week with a more powerful CPU on the board... I don't think this applies to mining, else we wouldn't be able to mine from x1 slots.
|
|
|
Is there a memory variable for Wolf's miner? Is there any benefit to using more memory?
Also on suprnova when using the -t variable, it starts, but no CPU usage happens and the program seems to do nothing, with no output.
I'll let the principals give the authoritative answer, but from my knowledge it won't benefit from more memory than it needs and it won't run with less. So the answer is no. How long did you wait with no output? I have noticed on occasion that the miner takes a while to start up on Suprnova. Maybe you just need to wait a little longer.
|
|
|
Hello guys, how can I mine this on ubuntu via ssh? I get such errors:

In file included from hodl.cpp:1:0:
miner.h:10:21: fatal error: jansson.h: No such file or directory
 #include <jansson.h>
                     ^
compilation terminated.
Makefile:704: recipe for target 'hodlminer-hodl.o' failed
make[2]: *** [hodlminer-hodl.o] Error 1
make[2]: Leaving directory '/testhodl/hodlminer'
Makefile:767: recipe for target 'all-recursive' failed
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory '/testhodl/hodlminer'
Makefile:413: recipe for target 'all' failed
make: *** [all] Error 2
root@miner03:/testhodl/hodlminer# Makefile:704: recipe for target 'hodlminer-hodl.o' failed
-bash: Makefile:704:: command not found
root@miner03:/testhodl/hodlminer# ^C
root@miner03:/testhodl/hodlminer#
First you need to get it to compile by installing the missing dependencies, looks like jansson is the missing one.
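For a Debian/Ubuntu system, the usual dependency install looks something like this. The package names are an assumption for those distros; CentOS/Fedora name the first one jansson-devel, and your build may need fewer or more of these.

```shell
# Assumed Debian/Ubuntu package names for common cpuminer/hodlminer deps
sudo apt-get install libjansson-dev libcurl4-openssl-dev libssl-dev
# then rebuild from a clean tree
make clean
./autogen.sh && ./configure && make
```

The fatal error above names the missing header (jansson.h), which is the quickest way to map an error to the -dev package to install.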
|
|
|
Reject share rate is incorrect. One reject and the reject rate jumps to around 34%, and the share rate drops by a similar amount. It also takes several minutes for the reject rate to return to 0. I have seen this on at least 2 algos in the past few days.
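For comparison, the expected arithmetic is plainly cumulative - rejected over submitted - so a single reject in nine submissions should read about 11%, not 34%, with no multi-minute decay back to 0. A tiny sketch of that bookkeeping (not the miner's actual code):

```shell
# Cumulative reject rate: rejected / submitted.
accepted=0; rejected=0
for result in A A A A R A A A A; do      # A = accepted share, R = rejected
    case "$result" in
        A) accepted=$((accepted + 1)) ;;
        R) rejected=$((rejected + 1)) ;;
    esac
done
submitted=$((accepted + rejected))
echo "rejected: $rejected/$submitted ($((100 * rejected / submitted))%)"
```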
|
|
|
Hi, I bought a Gigabyte GA-970A-DS3P, which I know MANY miners use; it should run 5 GPUs without problems. When we run 1 to 3 GPUs it works; when we run 4 GPUs, immediately after the hashing stats appear the PC just powers off. It is not a PSU issue, since with the same setup, same cards, same everything, another motherboard works. We also tried all BIOS versions for this board. We double tested all HW, and the same setup is fine on 2 different boards; on this one, simply when hashing starts with 4 GPUs -> power off in 2-3 seconds.
I am using the same unpowered risers, and yeah, I checked that the risers are not faulty (they are not), as I run them on other setups. The same setup works with other mobo(s), and the cards have plenty of power from the PSU (tested on another mobo). Powered risers just give about 70 watts, if I am not wrong. Feel free to correct me if I am mistaken, I am always happy to learn!
ANY ideas?
THANK YOU TONS!
Use powered risers in the x1 slots. GPUs can draw up to 75W through the PCIe slot, and some MBs only provide 25W to x1 slots.
|
|
|
I have a problem with CentOS 6.5 (final). I try to install with the build.sh command but I have problems (I'm a windows user... I don't know more about it ![Cheesy](https://bitcointalk.org/Smileys/default/cheesy.gif) ). Does someone know how to install on CentOS 6.5? Specific parameters on the configure command? Thank you. Great job.

There should be no special procedure for CentOS. If this is your first time, it is likely you are missing some dependencies; some new ones were added recently to support new algos. Post your errors if you still have problems.

I went with Ubuntu... after some dependency installs and fixes I compiled it! But I think I have a problem:

********** cpuminer-opt 3.1.18 ***********
A CPU miner with multi algo support and optimized for CPUs with AES_NI extension.
BTC donation address: 12tdvfF7KmAsihBXQXynT6E6th2c2pByTT
Forked from TPruvot's cpuminer-multi with credits to Lucas Jones, elmad, palmd, djm34, pooler, ig0tik3d, Wolf0 and Jeff Garzik.
Checking CPU capatibility...
Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
CPU arch supports AES_NI...YES.
SW built for AES_NI........YES.
Algo supports AES_NI.......YES.
Start mining with AES_NI optimizations...
[2016-05-02 15:03:08] 2 miner threads started, using 'hodl' algorithm.
[2016-05-02 15:03:08] Starting Stratum on stratum+tcp://hodl.suprnova.cc:4693
[2016-05-02 15:03:17] Stratum difficulty set to 1
[2016-05-02 15:04:46] hodl.suprnova.cc:4693 hodl block 48575
[2016-05-02 15:04:46] CPU #0: 206 H, 2.31 H/s
[2016-05-02 15:04:46] CPU #1: 193 H, 2.17 H/s
[2016-05-02 15:05:09] CPU #1: 21 H, 0.90 H/s
[2016-05-02 15:05:10] accepted: 1/1 (100%), 227 H, 3.21 H/s yes!
[2016-05-02 15:05:14] CPU #0: 30 H, 1.09 H/s
[2016-05-02 15:05:14] accepted: 2/2 (100%), 51
That's a pretty low hashrate, you should be getting about 10 times that rate. You're only running 2 threads, is that intentional? Does the hashrate remain that low or does it increase and stabilize over time? The only thing I can think of that would cause it is RAM. How much do you have and are you swapping?
|
|
|
Hey OCMiner, What happened to the HOdl dashboard? IT SUCKS NOW! You removed every useful piece of information, it's now useless. I can't even find my payouts anymore.
|
|
|
I think the focus should be on why the precompiled versions aren't working. There are 2 versions, the wolf version for CPUs with AES_NI and the regular version for older CPUs. Make sure you are using the correct version. And as previously asked, what errors are you seeing?
|
|
|
|
|
|
|