joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
February 24, 2018, 03:55:26 PM
On a sidenote, the feature I would love to see would be continuous hash rate reporting, not only when a share is found (which can take a while on some algos at the current difficulty), but reporting it continuously
Wouldn't we all. The problem is that the locally calculated hashrate is completely artificial. To put this in perspective: while testing blakecoin with a very high stratum diff, the miner was reporting 150 MH/s, but a single share submit resulted in the pool displaying 469 MH/s. It remained at that rate until the share fell out of the sample window, so the pool-reported hashrate went from 0 to 469 MH/s and back to zero every time a share was submitted.

So I have to ask: what exactly do you expect from a continuous hashrate display that has no real connection to what the pool is seeing? I would like to see a measure that uses share submission rate and share difficulty so the miner can make calculations based on real data.
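A minimal sketch of the share-based estimate described above, assuming the classic Bitcoin diff-1 convention where one share at difficulty D represents D * 2^32 expected hashes (some algos scale the share target differently):

Code:
/* Effective hashrate from share difficulty and submission rate,
 * the way pools typically estimate it. Numbers are illustrative. */
#include <stdio.h>

double effective_hashrate( double share_diff, double secs_since_last_share )
{
   double hashes_per_share = share_diff * 4294967296.0;  /* diff * 2^32 */
   return hashes_per_share / secs_since_last_share;
}

int main()
{
   /* e.g. one share at stratum diff 32 after 300 seconds */
   printf( "%.1f MH/s\n", effective_hashrate( 32.0, 300.0 ) / 1e6 );
   return 0;
}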
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
February 24, 2018, 05:17:56 PM
I hope this explains it
Thanks, it's becoming clear. Considering the API output isn't intended to be human readable, dynamic rate scaling is inappropriate. What is most appropriate is just raw H/s, and the monitoring app would be responsible for making it human readable. The hitch is that there is a legacy of using kH/s, which doesn't work for very low hashrate algos. So I will go back to the initial request and just add H/s alongside kH/s. Although kH/s would be deprecated, it would be maintained indefinitely. Since the dual reporting of hashrate is not visible to the user, there is no harm. That resolves that issue.

I am also considering adding a solution count to the API and console output. A bug fix release shouldn't normally have a new feature, but in this case it makes sense to coordinate multiple API changes rather than spreading them over multiple releases.

I will reverse all the GBT changes introduced in 3.8.3 and analyze the data provided by Nokedli in more detail before making any changes to try to support GBT.

The final issue is the role of the get_new_work changes in the problems seen. The key question is whether they contributed to the solo mining problems reported. The role of job_id is critical, as I discovered using benchmark: I have to be sure a job_id exists before I test it, else I get a NULL pointer dereference. There may be 3 tracks through this function:
1. benchmark, where there is no job_id to test for new work.
2. stratum, where job_id is tested and in some instances the work may also need regeneration.
3. getwork/gbt, where there may (or may not?) be a job_id to compare, but work should never be regenerated.
I'm still not sure if testing the job_id is safe for getwork and gbt; I'm hoping for more clarification, which will determine if additional checking is required.
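A hypothetical, simplified sketch of the guard being described; the names and structure here are illustrative, not the actual cpuminer-opt get_new_work code:

Code:
#include <string.h>
#include <stdbool.h>

struct work { char *job_id; /* ... */ };

/* Returns true if g_work should be treated as new work. */
static bool work_is_new( const struct work *cur, const struct work *g_work,
                         bool benchmark )
{
   if ( benchmark )                        /* track 1: no job_id exists */
      return false;
   if ( !cur->job_id || !g_work->job_id )  /* getwork/gbt may have none */
      return true;   /* assume new work rather than deref a NULL pointer */
   /* track 2 (stratum): compare ids; caller may also regenerate work */
   return strcmp( cur->job_id, g_work->job_id ) != 0;
}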
Nokedli
Newbie
Offline
Activity: 15
Merit: 0
February 24, 2018, 05:25:58 PM
Excellent data, it will take some time to analyze it. Edit: I have an initial question about this data that will affect the bug fix release. You initially reported that yescryptr16 crashed, but this shows that it was hashing and submitting rejects. Were both of these tests with the same code? I need to know if v3.8.3.1 can hash or if it crashes before starting to hash.

1700x or E5-2640 + Manjaro + 3.8.3.1: segmentation fault, no hashing at all. http://prntscr.com/ij6v61 <- original git, compiled without any change.
8700K + Win10 + 3.8.3.1: starts working, crashes after 1-2 minutes, normal hashrate 1 kH/s +/- 5%. The Windows build is also from git, not self-compiled.
I'm using -march=znver1 -DUSE_SPH_SHA tweaks for AMD and sandybridge for the E5, that's all.
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
February 24, 2018, 05:58:27 PM
Excellent data, it will take some time to analyze it. Edit: I have an initial question about this data that will affect the bug fix release. You initially reported that yescryptr16 crashed, but this shows that it was hashing and submitting rejects. Were both of these tests with the same code? I need to know if v3.8.3.1 can hash or if it crashes before starting to hash.

1700x or E5-2640 + Manjaro + 3.8.3.1: segmentation fault, no hashing at all. http://prntscr.com/ij6v61 <- original git, compiled without any change.
8700K + Win10 + 3.8.3.1: starts working, crashes after 1-2 minutes, normal hashrate 1 kH/s +/- 5%. The Windows build is also from git, not self-compiled.
I'm using -march=znver1 -DUSE_SPH_SHA tweaks for AMD and sandybridge for the E5, that's all.

Thanks, I'll play it safe for now and skip the job_id check for getwork/gbt.
Amstellodamois
Newbie
Offline
Activity: 182
Merit: 0
February 24, 2018, 05:59:20 PM
Wouldn't we all. The problem is that the locally calculated hashrate is completely artificial. To put this in perspective: while testing blakecoin with a very high stratum diff, the miner was reporting 150 MH/s, but a single share submit resulted in the pool displaying 469 MH/s. It remained at that rate until the share fell out of the sample window, so the pool-reported hashrate went from 0 to 469 MH/s and back to zero every time a share was submitted. So I have to ask: what exactly do you expect from a continuous hashrate display that has no real connection to what the pool is seeing? I would like to see a measure that uses share submission rate and share difficulty so the miner can make calculations based on real data.

It gives you an idea of your CPU's performance. If you want to tweak it, you'll see in real time how overclock settings (for instance) affect the hashrate.
AlecMe
February 24, 2018, 06:06:43 PM
(excuse my ignorance)
What can you effectively mine on a CPU these days? Would that be any CPU, or only mid-to-high-end ones?
4ward
Member
Offline
Activity: 473
Merit: 18
February 24, 2018, 07:19:59 PM
On a sidenote, the feature I would love to see would be continuous hash rate reporting, not only when a share is found (which can take a while on some algos at the current difficulty), but reporting it continuously
Wouldn't we all. The problem is that the locally calculated hashrate is completely artificial. To put this in perspective: while testing blakecoin with a very high stratum diff, the miner was reporting 150 MH/s, but a single share submit resulted in the pool displaying 469 MH/s. It remained at that rate until the share fell out of the sample window, so the pool-reported hashrate went from 0 to 469 MH/s and back to zero every time a share was submitted. So I have to ask: what exactly do you expect from a continuous hashrate display that has no real connection to what the pool is seeing? I would like to see a measure that uses share submission rate and share difficulty so the miner can make calculations based on real data.

Pools don't report a "more real" hashrate; they calculate an estimate based on the number of shares submitted and the current difficulty. If you submit one share in 5 minutes, they don't have enough information to calculate the real rate. When difficulty is low enough to supply the pool with enough shares, the pool will show a hashrate close to the one calculated in the miner. The lower the number of shares submitted, the bigger the role "luck" plays in the hashrate calculated by the pool. But the bottom line is that hashrate is a function of how many hashes a processor can calculate.
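To put a quick number on the luck factor, under the same diff-1 * 2^32 convention as the earlier sketch, the expected time between shares grows linearly with share difficulty, so a high-diff miner gives the pool very few samples per window:

Code:
#include <stdio.h>

/* Expected seconds per share for a given hashrate and share diff. */
double expected_share_time( double hashrate, double share_diff )
{
   return share_diff * 4294967296.0 / hashrate;
}

int main()
{
   /* a 150 MH/s miner on a very high stratum diff of 32 */
   printf( "expected seconds per share: %.0f\n",
           expected_share_time( 150e6, 32.0 ) );   /* ~916 s */
   return 0;
}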
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
February 24, 2018, 07:32:55 PM
But the bottom line is that hashrate is a function of how many hashes a processor can calculate.
Agreed with your comments about pool stats; that is what I am aiming to replicate, because it represents actual earnings. But CPU performance is static; once it's benchmarked, why would you need or want continuous monitoring?
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
February 24, 2018, 07:42:06 PM
cpuminer-opt-3.8.3.2
https://github.com/JayDDee/cpuminer-opt/releases/tag/v3.8.3.2
Reverted gbt changes from v3.8.0 that broke getwork.
Reverted scaled hash rate for API, added HS term in addition to KHS.
Added blocks solved to console display and API.
4ward
Member
Offline
Activity: 473
Merit: 18
February 24, 2018, 08:07:39 PM
But the bottom line is that hashrate is a function of how many hashes a processor can calculate.
Agreed with your comments about pool stats; that is what I am aiming to replicate, because it represents actual earnings. But CPU performance is static; once it's benchmarked, why would you need or want continuous monitoring?

I'm using an automated script (a personal fork of Megaminer) for all benchmarking and mining. For benchmarking, I prefer to mine to a real pool, while also earning a little, without having to use "--benchmark". For some algos, the time until the initial share can be very high (as in minutes), which makes for an unnecessarily long benchmark. Sometimes the first share is submitted very fast (often happens for me with Lyra2z), and the reported rate, until the next share is submitted, is that of only 1 or 2 threads, which is very low and thus skews the benchmark. It is also used in a watchdog to verify that the miner really works as it should.
Nokedli
Newbie
Offline
Activity: 15
Merit: 0
February 24, 2018, 08:28:47 PM
All algos are back to normal, no problems at all.
Thanks!
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
February 24, 2018, 08:29:09 PM
But the bottom line is that hashrate is a function of how many hashes a processor can calculate.
Agreed with your comments about pool stats; that is what I am aiming to replicate, because it represents actual earnings. But CPU performance is static; once it's benchmarked, why would you need or want continuous monitoring?
I'm using an automated script (a personal fork of Megaminer) for all benchmarking and mining. For benchmarking, I prefer to mine to a real pool, while also earning a little, without having to use "--benchmark". For some algos, the time until the initial share can be very high (as in minutes), which makes for an unnecessarily long benchmark. Sometimes the first share is submitted very fast (often happens for me with Lyra2z), and the reported rate, until the next share is submitted, is that of only 1 or 2 threads, which is very low and thus skews the benchmark. It is also used in a watchdog to verify that the miner really works as it should.

I'll consider that after I come up with a better way to report earnings. I'll have to figure out how pools convert a share to a hashrate, then do the same calculation locally if I have all the variables. That would be a new data term, probably measured as a hash rate, and would be normalized based on share submission rate and share difficulty. The existing HS or KHS could then be used as you request and represent CPU performance.
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
February 24, 2018, 08:30:22 PM
All algos are back to normal, no problems at all.
Thanks!
Thanks for your help, and sorry for the mess.
Amstellodamois
Newbie
Offline
Activity: 182
Merit: 0
February 24, 2018, 08:41:33 PM
But CPU performance is static; once it's benchmarked, why would you need or want continuous monitoring?

See #3515.
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
February 24, 2018, 09:35:54 PM
Nist5 looks pretty good in 3.8.2.1 using GBT. Did you ever find a block? This would indicate that GBT basically works as is.
Nokedli
Newbie
Offline
Activity: 15
Merit: 0
February 24, 2018, 10:22:24 PM
Nist5 looks pretty good in 3.8.2.1 using GBT. Did you ever find a block? This would indicate that GBT basically works as is.

nist5 was for the algo test. The others: http://prntscr.com/ijay5w
Working fine.
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
February 25, 2018, 06:03:52 AM
I'm planning another quick release to add support for hodl GBT. If there are any issues with the API changes or anything else in v3.8.3.2, now is the time to raise them so they can be addressed quickly.
I'll continue to work on any issues with hodl GBT or anything else in the v3.8.3 stream, then work on more optimizations for v3.8.4.
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
February 25, 2018, 06:29:08 PM
I'm using -march=znver1 -DUSE_SPH_SHA tweaks for AMD and sandybridge for the E5, that's all.
A quick note about USE_SPH_SHA: I have written 4-way (AVX) and 8-way (AVX2) implementations of sha256, and they are used in all 4-way or 8-way instances, even on Ryzen, regardless of the availability of SHA or the compile flag USE_SPH_SHA. USE_SPH_SHA is effectively deprecated and should not be used unless your testing has proven it to be faster for a specific algo, CPU architecture, and openssl version. As a reminder, it has only been shown to help if the CPU doesn't have SHA and the OS has openssl-1.0.1 or older. As with all options or deviations from defaults, it should only be used when a need or benefit has been proven for the particular set of circumstances.
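For context, a USE_SPH_SHA switch of this kind typically selects between the portable sph reference implementation and openssl's SHA256, roughly like the sketch below; the names mirror the common sph/openssl APIs, but the actual dispatch in cpuminer-opt may differ:

Code:
#include <stddef.h>

#ifdef USE_SPH_SHA
  #include "sph_sha2.h"
  static void sha256_once( void *out, const void *in, size_t len )
  {
     sph_sha256_context ctx;
     sph_sha256_init( &ctx );
     sph_sha256( &ctx, in, len );
     sph_sha256_close( &ctx, out );
  }
#else
  #include <openssl/sha.h>
  static void sha256_once( void *out, const void *in, size_t len )
  {
     SHA256( in, len, out );          /* openssl one-shot SHA256 */
  }
#endif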
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
February 25, 2018, 06:40:03 PM
cpuminer-sse2 & cpuminer-sse42 & cpuminer-aes-sse42 all give ILLEGAL INSTRUCTION
From the RELEASE_NOTES, does anything seem familiar? The following tips may be useful for older AMD CPUs.
AMD CPUs older than Steamroller, including Athlon x2 and Phenom II x4, are not supported by cpuminer-opt due to an incompatible implementation of SSE2 on these CPUs. Some algos may crash the miner with an invalid instruction. Users are recommended to use an unoptimized miner such as cpuminer-multi.
Some users with AMD CPUs without AES_NI have reported problems compiling with build.sh or "-march=native". Problems have included compile errors and poor performance. These users are recommended to compile manually, specifying "-march=btver1" on the configure command line.
Support for even older x86_64 CPUs without AES_NI or SSE2 is not available.
Also, this is an optimized CPU miner; don't you think it would be a good idea to mention what CPU you're using when you ask for help?