Nokedli
Newbie
Offline
Activity: 15
Merit: 0
|
|
February 23, 2018, 10:14:10 PM |
|
still Segmentation fault same error with other wallet
|
|
|
|
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
|
|
February 23, 2018, 10:23:13 PM |
|
still Segmentation fault same error with other wallet
If it crashes with BLOCK_VERSION_CURRENT 3 and std_longpoll_rpc_call from 3.8.2 I'm stumped.
|
|
|
|
felixbrucker
|
|
February 23, 2018, 10:31:40 PM |
|
The API changes were a request, but it seems the old way was preferred. I'll revert the change if the majority want.
Edit: there are more changes to the API coming, adding solved block count.
Actually the requester on GitHub was talking about what I mentioned above: use H/s in addition to KH/s.
"Could you modify api.c to also include hashes/sec?"
159    "ALGO=%s;CPUS=%d;URL=%s;KHS=%.2f;HS=%.2f;ACC=%d;REJ=%d;"
163    algo, opt_n_threads, short_url, (double)global_hashrate / 1000.0, (double)global_hashrate,
|
|
|
|
Nokedli
Newbie
Offline
Activity: 15
Merit: 0
|
|
February 23, 2018, 10:34:52 PM |
|
still Segmentation fault same error with other wallet
If it crashes with BLOCK_VERSION_CURRENT 3 and std_longpoll_rpc_call from 3.8.2 I'm stumped.
git + 2 patches, no build modification: http://prntscr.com/iixh31
|
|
|
|
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
|
|
February 23, 2018, 10:59:24 PM |
|
still Segmentation fault same error with other wallet
If it crashes with BLOCK_VERSION_CURRENT 3 and std_longpoll_rpc_call from 3.8.2 I'm stumped.
git + 2 patches, no build modification: http://prntscr.com/iixh31
Ok, undo those changes, start fresh and make the following change to std_get_new_work:

if ( ( memcmp( work->data, g_work->data, algo_gate.work_cmp_size )
       && clean_job )
   || ( *nonceptr >= *end_nonce_ptr )
del:    || ( !opt_benchmark && strcmp( work->job_id, g_work->job_id ) ) )
add:    || ( have_stratum && strcmp( work->job_id, g_work->job_id ) ) )
{
|
|
|
|
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
|
|
February 23, 2018, 11:02:10 PM |
|
The API changes were a request, but it seems the old way was preferred. I'll revert the change if the majority want.
Edit: there are more changes to the API coming, adding solved block count.
Actually the requester on GitHub was talking about what I mentioned above: use H/s in addition to KH/s.
"Could you modify api.c to also include hashes/sec?"
159    "ALGO=%s;CPUS=%d;URL=%s;KHS=%.2f;HS=%.2f;ACC=%d;REJ=%d;"
163    algo, opt_n_threads, short_url, (double)global_hashrate / 1000.0, (double)global_hashrate,
It doesn't make sense to me to put both, but if that's what people want I'll do it. I'd like some opinions from other users.
|
|
|
|
Nokedli
Newbie
Offline
Activity: 15
Merit: 0
|
|
February 23, 2018, 11:05:05 PM |
|
still Segmentation fault same error with other wallet
If it crashes with BLOCK_VERSION_CURRENT 3 and std_longpoll_rpc_call from 3.8.2 I'm stumped.
git + 2 patches, no build modification: http://prntscr.com/iixh31
Ok, undo those changes, start fresh and make the following change to std_get_new_work:

if ( ( memcmp( work->data, g_work->data, algo_gate.work_cmp_size )
       && clean_job )
   || ( *nonceptr >= *end_nonce_ptr )
del:    || ( !opt_benchmark && strcmp( work->job_id, g_work->job_id ) ) )
add:    || ( have_stratum && strcmp( work->job_id, g_work->job_id ) ) )
{
http://prntscr.com/iixtyv
|
|
|
|
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
|
|
February 23, 2018, 11:15:38 PM Last edit: February 24, 2018, 02:07:11 AM by joblo |
|
still Segmentation fault same error with other wallet
If it crashes with BLOCK_VERSION_CURRENT 3 and std_longpoll_rpc_call from 3.8.2 I'm stumped.
git + 2 patches, no build modification: http://prntscr.com/iixh31
Ok, undo those changes, start fresh and make the following change to std_get_new_work:

if ( ( memcmp( work->data, g_work->data, algo_gate.work_cmp_size )
       && clean_job )
   || ( *nonceptr >= *end_nonce_ptr )
del:    || ( !opt_benchmark && strcmp( work->job_id, g_work->job_id ) ) )
add:    || ( have_stratum && strcmp( work->job_id, g_work->job_id ) ) )
{
http://prntscr.com/iixtyv
One more shot in the dark: replace std_get_new_work with the old version. If that doesn't work, apply all the patches above: replace std_get_new_work and std_longpoll_rpc_call with the old versions, and #define BLOCK_VERSION_CURRENT 3 as per the old version. After that I'm really stuck.
Edit: This is really strange. I need you to confirm the previous version still works. I've reviewed the changes I made. There were none to yescrypt, but many other algos were changed. I made a few changes to common code:
- Increasing the block version count; reverting did not help.
- Removing some getwork code from longpoll. This was my first suspect, given my assumption that getwork doesn't use longpoll, but reversing that change did not help either.
- A change to how new work is detected, to fix an issue with super-fast algos. Reversing that didn't fix it either.
- A change to how shares are detected, but that only applies when a solution is found.
- The last change was to the API, which also doesn't apply.
I'm at a loss to explain it.
|
|
|
|
felixbrucker
|
|
February 23, 2018, 11:23:18 PM |
|
It doesn't make sense to me to put both but if that's what people want I'll do it. I'd like some opinions from other users.
keep KH/s for backwards compatibility, that is
|
|
|
|
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
|
|
February 23, 2018, 11:49:50 PM |
|
It doesn't make sense to me to put both but if that's what people want I'll do it. I'd like some opinions from other users.
keep KH/s for backwards compatibility, that is
Given that another API change is coming, is it worth it to reintroduce kH/s, or just take the compatibility hit for both and get it over with?
|
|
|
|
felixbrucker
|
|
February 24, 2018, 12:01:47 AM |
|
My personal choice would be to keep the API backwards compatible and just add stuff to it, as cpuminer-opt/cpuminer-multi is widely used.
If you feel stuff is not needed anymore, just mark it deprecated and remove it at a later stage/version; however, I don't see the need to remove the KH/s, it's not like it's a million extra bytes, just a few chars.
|
|
|
|
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
|
|
February 24, 2018, 02:26:11 AM |
|
my personal choice would be to keep the api backwards compatible and just add stuff to it as cpuminer-opt/cpuminer-multi is widely used
if you feel stuff is not needed anymore just mark it deprecated and remove it at a later stage/version, however i don't see the need to remove the KH/s, it's not like it's a million extra bytes, just some few chars
That's ok with me. If there are no objections I'll include it in another bug fix release, if the yescrypt/getwork issue can be solved. I can also include the solution count in the API; it's already coded, so it's more work to remove it.
|
|
|
|
jperser
Jr. Member
Offline
Activity: 51
Merit: 5
|
|
February 24, 2018, 05:22:10 AM |
|
The API changes were a request, but it seems the old way was preferred. I'll revert the change if the majority want.
Edit: there are more changes to the API coming, adding solved block count.
I need hashes per second for some of my miners. I have had to modify cpuminer-opt every time a new version comes out. Here is what the output of my modified API looks like:
NAME=cpuminer-opt;VER=3.8.2.1;API=1.0;ALGO=scrypt;CPUS=4;URL=pool-us.bloxstor.com:3002;KHS=0.01;HS=14.02;ACC=2;REJ=0;ACCMN=0.239;DIFF=0.019401;TEMP=39.9;FAN=0;FREQ=0;UPTIME=503;TS=1519448273|
The current KHS tag doesn't give me enough resolution, so I added HS. Auto-scaling will break poor coding. Adding this extra tag will also break poor coding (e.g. if Awesome Miner reports the temperature as 0.019401). I assume that the programmers at Awesome Miner know to read the tags, not count the ";". My watchdog code reads all the tags and understands THS, GHS, MHS, KHS, and HS. My code tracks the lowest non-zero value, then displays it in engineering notation. Either dynamic scaling or multiple tags will work for me. I have seen multiple tags on another miner; that is why I adopted it. I stopped using Awesome Miner because of its limitations (but that is a subject for a different thread).
|
HORIZEN ►►► Bringing Privacy To Life | https://horizen.global/
|
|
|
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
|
|
February 24, 2018, 06:25:21 AM |
|
The API changes were a request, but it seems the old way was preferred. I'll revert the change if the majority want.
Edit: there are more changes to the API coming, adding solved block count.
I need hashes per second for some of my miners. I have had to modify cpuminer-opt every time a new version comes out. Here is what the output of my modified API looks like:
NAME=cpuminer-opt;VER=3.8.2.1;API=1.0;ALGO=scrypt;CPUS=4;URL=pool-us.bloxstor.com:3002;KHS=0.01;HS=14.02;ACC=2;REJ=0;ACCMN=0.239;DIFF=0.019401;TEMP=39.9;FAN=0;FREQ=0;UPTIME=503;TS=1519448273|
The current KHS tag doesn't give me enough resolution, so I added HS. Auto-scaling will break poor coding. Adding this extra tag will also break poor coding (e.g. if Awesome Miner reports the temperature as 0.019401). I assume that the programmers at Awesome Miner know to read the tags, not count the ";". My watchdog code reads all the tags and understands THS, GHS, MHS, KHS, and HS. My code tracks the lowest non-zero value, then displays it in engineering notation. Either dynamic scaling or multiple tags will work for me. I have seen multiple tags on another miner; that is why I adopted it. I stopped using Awesome Miner because of its limitations (but that is a subject for a different thread).
I don't like being bound to the legacy of others, but I have a compromise proposal. I can restore the kH/s term alongside the new scaled term for a conversion period, to allow mining managers to update their code to support scaled hash rates, then remove the legacy kH/s permanently. This will ensure H/s is used when appropriate, which satisfies the feature request. Any downside to this?
|
|
|
|
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
|
|
February 24, 2018, 06:32:16 AM |
|
still Segmentation fault same error with other wallet
If it crashes with BLOCK_VERSION_CURRENT 3 and std_longpoll_rpc_call from 3.8.2 I'm stumped.
git + 2 patches, no build modification: http://prntscr.com/iixh31
Ok, undo those changes, start fresh and make the following change to std_get_new_work:

if ( ( memcmp( work->data, g_work->data, algo_gate.work_cmp_size )
       && clean_job )
   || ( *nonceptr >= *end_nonce_ptr )
del:    || ( !opt_benchmark && strcmp( work->job_id, g_work->job_id ) ) )
add:    || ( have_stratum && strcmp( work->job_id, g_work->job_id ) ) )
{
http://prntscr.com/iixtyv
One more shot in the dark: replace std_get_new_work with the old version. If that doesn't work, apply all the patches above: replace std_get_new_work and std_longpoll_rpc_call with the old versions, and #define BLOCK_VERSION_CURRENT 3 as per the old version. After that I'm really stuck.
Edit: This is really strange. I need you to confirm the previous version still works. I've reviewed the changes I made. There were none to yescrypt, but many other algos were changed. I made a few changes to common code: increasing the block version count (reverting did not help); removing some getwork code from longpoll (my first suspect, given my assumption that getwork doesn't use longpoll, but reversing that change did not help either); a change to how new work is detected, to fix an issue with super-fast algos (reversing that didn't fix it either); a change to how shares are detected, but that only applies when a solution is found. The last change was to the API, which also doesn't apply. I'm at a loss to explain it.
This problem is bugging me; it defies logic. I'm beginning to suspect it may be an isolated issue. If anyone else is solo mining with v3.8.3.1 using getwork or gbt, please post your results, success or failure. Please include the algo, your CPU, OS, any deviation from defaults, and any relevant console output.
|
|
|
|
4ward
Member
Offline
Activity: 473
Merit: 18
|
|
February 24, 2018, 10:13:36 AM |
|
It doesn't make sense to me to put both but if that's what people want I'll do it. I'd like some opinions from other users.
keep KH/s for backwards compatibility, that is
Given that another API change is coming, is it worth it to reintroduce kH/s, or just take the compatibility hit for both and get it over with?
The current API is compatible with Ccminer, so I think changing it would break compatibility with many tools that people may use. If you are reworking the API, then perhaps keeping the existing one and activating the new one with a switch would be the "safe" way, but it would introduce additional code which I'm not sure you want to maintain. Perhaps set a cutoff date for when the old API will be discontinued? I personally mine on my own fork of MegaMiner (https://github.com/yuzi-co/Megaminer) and changing the API is easy for me.
On a side note, the feature I would love to see is continuous hash rate reporting, not only when a share is found (which can take a while on some algos).
|
|
|
|
felixbrucker
|
|
February 24, 2018, 11:16:21 AM Last edit: February 24, 2018, 11:46:37 AM by felixbrucker |
|
Any downside to this?
I use H/s as often as possible and just format it into the appropriate unit on display. A change to not always use H/s and to remove KH/s (after some time) will require some conversion to bring it down to H/s first, push it through the app and finally convert it back to whatever is the appropriate format. It won't be a large overhead in programming, but it will be unnecessary. I do not know of any app using the API-returned hashrate just as is; it wouldn't make sense.
This is how I currently extract data from the API:

const result = {
  accepted: parseFloat(obj.ACC),
  acceptedPerMinute: parseFloat(obj.ACCMN),
  algorithm: obj.ALGO,
  difficulty: parseFloat(obj.DIFF),
  hashrate: parseFloat(obj.KHS) * 1000,
  miner: `${obj.NAME} ${obj.VER}`,
  rejected: parseFloat(obj.REJ),
  uptime: obj.UPTIME,
  cpus: parseFloat(obj.CPUS),
  temperature: parseFloat(obj.TEMP),
};

This would need to change to this:

const units = [
  {key: 'PH/s', factor: 5},
  {key: 'TH/s', factor: 4},
  {key: 'GH/s', factor: 3},
  {key: 'MH/s', factor: 2},
  {key: 'KH/s', factor: 1},
  {key: 'H/s', factor: 0},
];
const unit = units.find(currUnit => obj[currUnit.key]);
let hashrate = 0;
if (unit) {
  hashrate = parseFloat(obj[unit.key]) * (Math.pow(1000, unit.factor));
}
const result = {
  accepted: parseFloat(obj.ACC),
  acceptedPerMinute: parseFloat(obj.ACCMN),
  algorithm: obj.ALGO,
  difficulty: parseFloat(obj.DIFF),
  hashrate,
  miner: `${obj.NAME} ${obj.VER}`,
  rejected: parseFloat(obj.REJ),
  uptime: obj.UPTIME,
  cpus: parseFloat(obj.CPUS),
  temperature: parseFloat(obj.TEMP),
};

With H/s always present it would be just like the first example, except I would not need to do * 1000. I hope this explains it.
|
|
|
|
Nokedli
Newbie
Offline
Activity: 15
Merit: 0
|
|
February 24, 2018, 11:28:54 AM Last edit: February 24, 2018, 12:47:08 PM by Nokedli |
|
still Segmentation fault same error with other wallet
If it crashes with BLOCK_VERSION_CURRENT 3 and std_longpoll_rpc_call from 3.8.2 I'm stumped.
git + 2 patches, no build modification: http://prntscr.com/iixh31
Ok, undo those changes, start fresh and make the following change to std_get_new_work:

if ( ( memcmp( work->data, g_work->data, algo_gate.work_cmp_size )
       && clean_job )
   || ( *nonceptr >= *end_nonce_ptr )
del:    || ( !opt_benchmark && strcmp( work->job_id, g_work->job_id ) ) )
add:    || ( have_stratum && strcmp( work->job_id, g_work->job_id ) ) )
{
http://prntscr.com/iixtyv
One more shot in the dark: replace std_get_new_work with the old version. If that doesn't work, apply all the patches above: replace std_get_new_work and std_longpoll_rpc_call with the old versions, and #define BLOCK_VERSION_CURRENT 3 as per the old version. After that I'm really stuck.
Edit: This is really strange. I need you to confirm the previous version still works. I've reviewed the changes I made. There were none to yescrypt, but many other algos were changed. I made a few changes to common code: increasing the block version count (reverting did not help); removing some getwork code from longpoll (my first suspect, given my assumption that getwork doesn't use longpoll, but reversing that change did not help either); a change to how new work is detected, to fix an issue with super-fast algos (reversing that didn't fix it either); a change to how shares are detected, but that only applies when a solution is found. The last change was to the API, which also doesn't apply. I'm at a loss to explain it.
This problem is bugging me; it defies logic. I'm beginning to suspect it may be an isolated issue. If anyone else is solo mining with v3.8.3.1 using getwork or gbt, please post your results, success or failure. Please include the algo, your CPU, OS, any deviation from defaults, and any relevant console output.
http://prntscr.com/ij3bkh
http://prntscr.com/ij3flq
windows 10, local wallet
3.8.2.1 -> http://prntscr.com/ij3qkp
interzone/c11 simply crashed -> 3.8.2.1 http://prntscr.com/ij3pma
BWK/nist5 also crashed -> 3.8.2.1 http://prntscr.com/ij3ou2
Solaris/xevan http://prntscr.com/ij3lnl -> 3.8.2.1 http://prntscr.com/ij3nmc
UTC/scryptjane:14 http://prntscr.com/ij3zoo -> 3.8.2.1 http://prntscr.com/ij40du
on localhost / win 10
|
|
|
|
Amstellodamois
Newbie
Offline
Activity: 182
Merit: 0
|
|
February 24, 2018, 11:42:10 AM |
|
Is there a page of the thread presenting benchmarks? (Wondering what a really shitty CPU would do - like a G4400)
|
|
|
|
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
|
|
February 24, 2018, 03:38:41 PM Last edit: February 24, 2018, 04:38:07 PM by joblo |
|
Excellent data, it will take some time to analyze. Edit: I have an initial question about this data that will affect the bug fix release. You initially reported that yescryptr16 crashed, but this shows that it was hashing and submitting rejects. Were both of these tests with the same code? I need to know if v3.8.3.1 can hash or if it crashes before starting to hash.
|
|
|
|
|