vs3
|
|
June 02, 2013, 12:07:20 PM |
|
To be quite honest - I'm not sure why there needs to be a "reset" in the first place. I mean - why set it to zero? Why not just set it to half of what it is? That way the proportional part would stay the same for everyone (i.e. each miner's contribution % would not change).
p.s. Someone suggested earlier that it is a double integer - that's probably wrong, as it seems to be a floating-point type (most likely whatever the default Python/PHP one is).
|
|
|
|
diskodasa
|
|
June 02, 2013, 12:15:19 PM |
|
Now what!!!???
18329 2013-06-02 01:07:40 0:20:23 3210872 334 0.00273237 239177 25.14480000 invalid
|
|
|
|
organofcorti
Donator
Legendary
Offline
Activity: 2058
Merit: 1007
Poor impulse control.
|
|
June 02, 2013, 12:15:26 PM |
|
To be quite honest - I'm not sure why there needs to be a "reset" in the first place. I mean - why set it to zero? Why not just set it to half of what it is? That way the proportional part would stay the same for everyone (i.e. each miner's contribution % would not change).
p.s. Someone suggested earlier that it is a double integer - that's probably wrong, as it seems to be a floating-point type (most likely whatever the default Python/PHP one is).
The "reset" just normalises all the scores. No information is lost. If no "reset" happened, you'd receive the same score and the same reward.
THERE IS NO "RESET TO ZERO".
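The "no information is lost" point is easy to demonstrate: dividing every miner's score by the same factor leaves each miner's percentage of the total unchanged. A toy Python sketch (illustrative only - the miner names, scores and factor are invented; this is not the pool's actual code):

```python
# Toy demonstration: a "reset" that rescales all scores by a common
# factor does not change anyone's share of the reward.
scores = {"alice": 120.0, "bob": 60.0, "carol": 20.0}  # invented values

def reward_shares(scores):
    """Each miner's fraction of the total score."""
    total = sum(scores.values())
    return {m: s / total for m, s in scores.items()}

before = reward_shares(scores)
rescaled = {m: s / 1000.0 for m, s in scores.items()}  # the "reset"
after = reward_shares(rescaled)

# Proportions are identical before and after the rescale.
assert all(abs(before[m] - after[m]) < 1e-12 for m in scores)
```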
|
|
|
|
TiborB
Member
Offline
Activity: 83
Merit: 10
|
|
June 02, 2013, 12:21:24 PM |
|
I see, but you should understand: when you reset all scores, information is lost. That means when all the scores are reset at 1:00:00, everything starts fresh, and whatever I shared before would be lost, because my score would then be 0. The users' scores should NOT be reset!

Perhaps you missed this: The "reset" just normalises all the scores. No information is lost. If no "reset" happened, you'd receive the same score and the same reward.

Or maybe you didn't understand? Then I think I don't understand. To me, "reset" means resetting the scores to zero.

OK, then reset your idea about "reset": it just normalises all the scores. No information is lost. If no "reset" happened, you'd receive the same score and the same reward. Let's hear no more about it, eh?

OrganOfCorti, I have read many of your writings and comments here and I am convinced you know very well what you are talking about. However, with all respect, I am not sure you are 100% right in this case. Yes, what you write is how it should work, but I wonder if you have seen and analysed the actual code performing the renormalisation. What we know from experience is that a round that ends very soon after a renormalisation yields rewards that are off by orders of magnitude, which affects some miners positively and others negatively. It seems the closer to a renormalisation the round ends, the larger the deviation from the expected average reward.

My guess (which is only a guess, based on observations) is that the constant C in the formula is not changed after the renormalisation to match the scale-down factor applied to existing scores. If it were changed correctly, then a share submitted right after a renormalisation would have the same impact/importance as a share submitted right before it. This does not seem to be the case.

In really extreme cases, where the deviation is significant (e.g. only a few satoshis for continuous miners even at around 1 Gh/s), Slush recalculates the blocks manually with PPS. You can see that after such a "magical fix", in some cases your score is equal to the number of shares you submitted. (I have been collecting and storing block stats for two months and am not just making this up.)

Please let me know if you think I made a logical mistake in my conclusions; I am always open to learn, and part of learning is periodically revisiting and challenging previous theories and statements. No offence is intended whatsoever.

Cheers, T
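TiborB's weighting argument can be made concrete with a toy sketch. Assuming a slush-style per-share value of exp(t/C) with C = 300 seconds (both the formula and the constant are assumptions here, not audited pool code), a renormalisation that divides accumulated scores by X while leaving subsequent shares uncompensated makes two shares submitted one second apart differ in weight by roughly the factor X:

```python
import math

C = 300.0  # assumed time constant of the score formula, in seconds

def share_value(t):
    # Assumed slush-style score increment for a share at t seconds into the round
    return math.exp(t / C)

# Two shares half a second either side of a renormalisation at t = 3600
# that divides accumulated scores by X but does not rescale new shares:
X = 1000.0
before = share_value(3599.5) / X   # old share, rescaled with the pool
after = share_value(3600.5)        # new share, not rescaled

# Their weights should be nearly equal; instead they differ by ~X.
assert after / before > X / 2
```

If the deviation TiborB observed is real, this is the kind of asymmetry it would produce: the closer a block lands to the renormalisation, the more the mis-weighted shares dominate the payout.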
|
|
|
|
vs3
|
|
June 02, 2013, 12:22:25 PM |
|
Now what!!!???
18329 2013-06-02 01:07:40 0:20:23 3210872 334 0.00273237 239177 25.14480000 invalid
This was already a known fact: 239177 was orphaned.
Grr etc. Oh well - at least it isn't because of some flawed logic in an equation.
|
|
|
|
DryMartini
Newbie
Offline
Activity: 37
Merit: 0
|
|
June 02, 2013, 12:27:02 PM |
|
Honestly. The reset thingy is just another bug caused by arithmetic overflow. This pool is so full of bugs I'm not sure it can be called a pool any longer. It's something between Satoshi Dice, a daycare center and a mining pool.
Is it a bingo hall?
|
|
|
|
organofcorti
Donator
Legendary
Offline
Activity: 2058
Merit: 1007
Poor impulse control.
|
|
June 02, 2013, 12:35:00 PM |
|
OrganOfCorti,
I have read many of your writings and comments here and I am convinced you know very well what you are talking about. However, with all respect, I am not sure you are 100% right in this case. Yes, what you write is how it should work, but I wonder if you have seen and analysed the actual code performing the renormalisation. What we know from experience is that a round that ends very soon after a renormalisation yields rewards that are off by orders of magnitude, which affects some miners positively and others negatively. It seems the closer to a renormalisation the round ends, the larger the deviation from the expected average reward.
No, no-one I know has audited slush's code, so any claims that the renormalisation is broken need to be proven - anecdotal evidence is insufficient. You'll need to prove your assertion beyond a doubt, not just show examples of when you think it misfires. You could be correct - why not actually do the groundwork and prove it?

My guess (which is only a guess based on observations) is that constant C in the formula is not changed after the renormalisation to match the scale-down factor of existing scores.
If it was correctly changed, then a share submitted right after a renormalization would have the same impact/importance, than the share submitted right before the renormalization. This does not seem to be the case.
'c' should never be changed. If it were, the score method wouldn't work.

In really extreme cases, where the deviation is significant (e.g. only a few satoshis for continuous miners even at around 1 Gh/s), Slush recalculates the blocks manually with PPS. You can see that after such a "magical fix", in some cases your score is equal to the number of shares you submitted. (I have been collecting and storing block stats for two months and am not just making this up.)
Please let me know if you think I made a logical mistake in my conclusions; I am always open to learn, and part of learning is periodically revisiting and challenging previous theories and statements. No offence is intended whatsoever.
Cheers, T
You think something's wrong - fair enough. But now you need to prove it. You'll need lots of data, and you'll have to scrape slush's site every few minutes. But it's doable. Good luck!
|
|
|
|
KNK
|
|
June 02, 2013, 12:55:41 PM |
|
You'll need lots of data, and you'll have to scrape slush's site every few minutes. But it's doable.
Good luck!
You just don't read, do you? I have been collecting and storing block stats for two months and am not just making this up. You are so sure of what you think that everyone else is just wrong, and your theory is better than others' observations based on facts. That's why I didn't want to argue with you a few pages back, and I won't even now.
|
|
|
|
organofcorti
Donator
Legendary
Offline
Activity: 2058
Merit: 1007
Poor impulse control.
|
|
June 02, 2013, 12:58:30 PM |
|
You'll need lots of data, and you'll have to scrape slush's site every few minutes. But it's doable.
Good luck!
You just don't read, do you? I have been collecting and storing block stats for two months and am not just making this up. You are so sure of what you think that everyone else is just wrong, and your theory is better than others' observations based on facts.

TiborB hasn't produced a proof, just anecdotal evidence. He needs to show exactly what should have happened for his dataset if a renormalisation rather than a "reset" occurred.

That's why I didn't want to argue with you a few pages back, and I won't even now.
Good-oh!
|
|
|
|
TiborB
Member
Offline
Activity: 83
Merit: 10
|
|
June 02, 2013, 01:27:15 PM |
|
OrganOfCorti,
I have read many of your writings and comments here and I am convinced you know very well what you are talking about. However, with all respect, I am not sure you are 100% right in this case. Yes, what you write is how it should work, but I wonder if you have seen and analysed the actual code performing the renormalisation. What we know from experience is that a round that ends very soon after a renormalisation yields rewards that are off by orders of magnitude, which affects some miners positively and others negatively. It seems the closer to a renormalisation the round ends, the larger the deviation from the expected average reward.
No, no-one I know has audited slush's code, so any claims that the renormalisation is broken need to be proven - anecdotal evidence is insufficient. You'll need to prove your assertion beyond a doubt, not just show examples of when you think it misfires. You could be correct - why not actually do the groundwork and prove it?

My guess (which is only a guess based on observations) is that constant C in the formula is not changed after the renormalisation to match the scale-down factor of existing scores.
If it was correctly changed, then a share submitted right after a renormalization would have the same impact/importance, than the share submitted right before the renormalization. This does not seem to be the case.
'c' should never be changed. If it were, the score method wouldn't work.

In really extreme cases, where the deviation is significant (e.g. only a few satoshis for continuous miners even at around 1 Gh/s), Slush recalculates the blocks manually with PPS. You can see that after such a "magical fix", in some cases your score is equal to the number of shares you submitted. (I have been collecting and storing block stats for two months and am not just making this up.)
Please let me know if you think I made a logical mistake in my conclusions; I am always open to learn, and part of learning is periodically revisiting and challenging previous theories and statements. No offence is intended whatsoever.
Cheers, T
You think something's wrong - fair enough. But now you need to prove it. You'll need lots of data, and you'll have to scrape slush's site every few minutes. But it's doable. Good luck!

Thanks for the response - you are absolutely right. I pull the statistics via the JSON API every 10 minutes and analyse & plot them automatically every 6 hours. (I do not want to pull more frequently for practical reasons.)

At this point in time, based on stats from the last 2 months, what I see confirmed is the "magical fix" manual recalculation via PPS, which is easy to spot; slush was honest about it when I contacted him (he is very open and honest, and I understand him not wanting to post here actively recently).

I do not think that based on my current amount of data I could back my assertion beyond a doubt, which is why I very explicitly called it a guess. I could collect data and perform "black box analysis" for any amount of time and still not reach 100%, as it is theoretically impossible to reach 100% via passive black-box analysis, but the confidence level grows with the amount of data collected. Checking the code (white-box analysis) would take much less effort and would allow a statement beyond doubt. And honestly, I have never even seen a precise description of what exactly is done on a renormalisation; only the fact that a renormalisation is periodically performed is mentioned. In contrast to this, Meni Rosenfeld is very explicit about how rescaling should work when using DGM (https://bitcointalk.org/index.php?topic=39497.0).

Regarding changes to C: what I mean is that on rescaling/renormalisation, when the score is changed but C is not, you very aggressively change the weight of all the work performed before the renormalisation versus the weight of the new per-share increment. However, if the score is divided by X and C multiplied by log(X), then the value of previous scores relative to the incremental score for the new share is kept the same (maintaining the exponential semantics used by slush), and you end up with the same exponential curve, just rescaled (zoomed out).

Cheers, T
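Under the assumed per-share value exp(t/C) (the formula and C = 300 s are assumptions here, not audited pool code), there is also a way to rescale consistently without ever touching C: dividing every score by X is exactly equivalent to shifting the time origin by C·ln(X), so later shares can be kept on the same footing by applying that offset. A minimal sketch:

```python
import math

C = 300.0  # assumed time constant of the score formula, in seconds

def share_value(t):
    # Assumed slush-style score increment for a share at t seconds into the round
    return math.exp(t / C)

# Dividing a share's value by X equals shifting its timestamp back by C*ln(X):
# exp(t/C) / X == exp((t - C*ln(X)) / C)
X = 1e6
t = 5000.0
lhs = share_value(t) / X
rhs = share_value(t - C * math.log(X))
assert math.isclose(lhs, rhs, rel_tol=1e-9)
```

So, under these assumptions, a correct renormalisation could divide all accumulated scores by X and offset the round clock by C·ln(X) for subsequent shares, preserving every relative weight with C held constant.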
|
|
|
|
KNK
|
|
June 02, 2013, 01:42:06 PM |
|
Proper normalisation is time (CPU) consuming and very difficult to do on several nodes simultaneously; that's why I also think it is a 'reset' instead of a normalisation. My guess is that:
On reset, a signal is sent to all nodes (but some do not see the message, and that's where things go wrong).
Each node recalculates the per-miner score based on his last share.
The central node / database sums all the scores sent from the nodes.
Now if one node does not reset its data, all miners are affected except those on that node. In such cases the total score remains high after the reset, and this can probably be used as a trigger (more of a flag, actually) for recalculation - provided the round is not long enough to cause another reset and thus clear the 'recalculation needed' flag if successful.
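This multi-node guess is easy to mimic in a toy simulation (purely illustrative - the node count, scores and reset factor are invented, and this is not the pool's code): if one node misses the reset signal, the summed total stays orders of magnitude too high, which could indeed serve as a 'recalculation needed' flag:

```python
# Toy simulation of the guess above: three nodes each hold a running
# score total; a "reset" divides each node's total by a large factor.
node_scores = [1e9, 1e9, 1e9]  # invented per-node totals
RESET_FACTOR = 1e6

def reset(nodes, missed_index=None):
    """Apply the reset, except on the node that missed the signal."""
    return [s if i == missed_index else s / RESET_FACTOR
            for i, s in enumerate(nodes)]

ok = sum(reset(node_scores))                    # every node saw the signal
bad = sum(reset(node_scores, missed_index=2))   # node 2 missed it

# The missed reset leaves the pool-wide total anomalously high,
# which a central node could detect and use to flag a recalculation.
assert bad > 1000 * ok
```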
|
|
|
|
organofcorti
Donator
Legendary
Offline
Activity: 2058
Merit: 1007
Poor impulse control.
|
|
June 02, 2013, 01:49:55 PM |
|
Thanks for the response, you are absolutely right. I do pull the statistics via the JSON API every 10 minutes and analyse & plot it automatically every 6 hours. (I do not want to pull more frequently for practical reasons.)
At this point in time, based on stats from the last 2 months, what I see confirmed is the "magical fix" manual recalculation via PPS which is easy to spot and slush has been honest about it when I contacted him (he is very open and honest, and I do understand him not wanting to actively post here recently).
Can you explain this? PM me if you don't want to post publicly.

I do not think that based on my current amount of data I could back my assertion beyond a doubt, which is why I was very explicitly calling it a guess. I could collect data and perform "black box analysis" for any amount of time and still not reach 100%, as it is theoretically impossible to reach 100% via passive black-box analysis, but the confidence level grows with the amount of data collected.

Great! Have you been able to calculate 'c' yet? Having it will allow you to back-calculate shares, which will make figuring out the renormalisation easier.

Checking the code (aka white-box analysis) could take much less effort and would allow a statement beyond doubt. And honestly, I have never even seen a precise description of what exactly is done on a renormalisation; only the fact that a renormalisation is periodically performed is mentioned. In contrast to this, Meni Rosenfeld is very explicit about how rescaling should work when using DGM (https://bitcointalk.org/index.php?topic=39497.0).

I thought there was one somewhere. Maybe not. I'll try to find it, if it exists.

Regarding changes to C: what I mean is that on rescaling/renormalisation, when the score is changed but C is not, you very aggressively change the weight of all the work performed before the renormalisation versus the weight of the new per-share increment. However, if the score is divided by X and C multiplied by log(X), then the value of previous scores relative to the incremental score for the new share is kept the same (maintaining the exponential semantics used by slush), and you end up with the same exponential curve, just rescaled (zoomed out).

Since 'c' can't be changed without completely changing the 'hoppability' of the pool, are you in effect saying that, given this restriction, proper renormalisation or rescaling can't occur? Hmmm. I'll have to think about that.

You've obviously done a lot of work - you should post your results somewhere on the forum as a work in progress. I'm keen to see what you have.
|
|
|
|
iFA88
|
|
June 02, 2013, 02:34:25 PM |
|
Compare this block info:
18339 2013-06-02 14:00:37 2:02:30 19604110 328 0.00000000 239277 25.12461000 95 confirmations left
18331 2013-06-02 04:22:48 2:08:01 19975253 314 0.00046448 239200 25.06660003 18 confirmations left
On 18339 I stopped mining halfway through, and my previous shares' value is 0?! On 18331 I mined the whole block, but with only one worker.
Is that fair?
|
|
|
|
TiborB
Member
Offline
Activity: 83
Merit: 10
|
|
June 02, 2013, 02:38:14 PM |
|
Can you explain this? PM me if you don't want to post publicly.
(...)
Great! Have you been able to calculate 'c' yet? Having this will allow you to back calculate shares which will make figuring out the renormalisation easier.
(...)
I thought there was one somewhere. Maybe not. I'll try to find it, if it exists.
(...)
Since 'c' can't be changed without completely changing the 'hoppability' of the pool, you are in effect saying that given this restriction proper renormalisation or rescaling can't occur? Hmmm. Have to think about that.
You've obviously done a lot of work - you should post your results somewhere on the forum, as a work in progress. I'm keen to see what you have.
I will send you a PM. I tried to find out how exactly the rescaling works, but have not found detailed docs; I would really appreciate a link in case you have one handy.

Regarding reverse-engineering the current value of C: I have read your blog post on the topic (http://organofcorti.blogspot.hu/2012/09/43-slushs-score-method-and-miner.html) - kudos to you, I enjoyed the read. My idea was changing C on each rescaling, then resetting it to the original value when a new round starts. This would indeed affect the hop point. Honestly, I have not yet dived deeply into the hopping aspect of changing C intra-round; my first point of interest was checking how variance correlates with the time elapsed since the last renormalisation, with and without changing C.

I would avoid going public prematurely; the community might be a bit harsh, and I do not want to start a flame war before I can properly back my points. I'll PM you - you are far more experienced with this type of statistical analysis than me, so any idea/comment is welcome.

Cheers, T
|
|
|
|
stephengillon
Newbie
Offline
Activity: 56
Merit: 0
|
|
June 02, 2013, 02:50:44 PM |
|
no registration on block 239281
|
|
|
|
iFA88
|
|
June 02, 2013, 02:52:16 PM |
|
|
|
|
|
valladex
Member
Offline
Activity: 61
Merit: 10
|
|
June 02, 2013, 08:19:16 PM |
|
18347 2013-06-02 19:34:47 0:47:47 7511826 318 0.00096651 239329 25.17718286
18346 2013-06-02 18:47:00 0:51:59 8189628 329 0.00052418 239319 25.04053933
NOT complaining, just wondering if anyone else is seeing a lower reward this round, as it doesn't seem to be getting corrected?
|
|
|
|
desired_username
|
|
June 02, 2013, 08:28:52 PM |
|
18347 2013-06-02 19:34:47 0:47:47 7511826 318 0.00096651 239329 25.17718286
18346 2013-06-02 18:47:00 0:51:59 8189628 329 0.00052418 239319 25.04053933
NOT complaining, just wondering if anyone else is seeing a lesser reward on this round as it doesn't seem to be getting corrected?
It's alright for me. I had a bit less than usual on:
18343 2013-06-02 16:33:13 0:00:23 50946 150 0.07358613
Otherwise, good times.
|
|
|
|
minerapia
|
|
June 02, 2013, 09:18:48 PM |
|
Compare this block info:
18339 2013-06-02 14:00:37 2:02:30 19604110 328 0.00000000 239277 25.12461000 95 confirmations left
18331 2013-06-02 04:22:48 2:08:01 19975253 314 0.00046448 239200 25.06660003 18 confirmations left
On 18339 I stopped mining halfway through, and my previous shares' value is 0?! On 18331 I mined the whole block, but with only one worker.
Is that fair?
Yes, it's fair. You completely stopped looking for the block solution; your shares would have been worthless anyway if someone else hadn't finished the job you left halfway.
|
donations -> btc: 1M6yf45NskQxWXknkMTzQ8o6wShQcSY4EC ltc: LeTpCd6cQL26Q1vjc9kJrTjjFMrPhrpv6j
|
|
|
Trongersoll
|
|
June 02, 2013, 09:38:00 PM |
|
Compare this block info:
18339 2013-06-02 14:00:37 2:02:30 19604110 328 0.00000000 239277 25.12461000 95 confirmations left
18331 2013-06-02 04:22:48 2:08:01 19975253 314 0.00046448 239200 25.06660003 18 confirmations left
On 18339 I stopped mining halfway through, and my previous shares' value is 0?! On 18331 I mined the whole block, but with only one worker.
Is that fair?
Yes, it's fair. You completely stopped looking for the block solution; your shares would have been worthless anyway if someone else hadn't finished the job you left halfway.
The purpose, and a feature, of this pool is to discourage people from casually coming and going in the middle of a block. If you feel this is unfair, you are in the wrong pool.
|
|
|
|
|