roy7
April 28, 2013, 02:56:29 PM
Quote from: Meni Rosenfeld
If you want to assume he keeps mining, there are two factors to consider - his current score and his future proportion of the pool's hashrate. Taking them both into account requires some calculations. It's easier to just show what his reward will be if a block is found right now.
Yup. I just wanted to have an accurate way to show a miner their future remaining balance to offset the capacitor charging up originally. I have to imagine there's some learning curve for most miners who have never used a pool with DGM before.
doublec
Legendary
Offline
Activity: 1078
Merit: 1005
April 28, 2013, 04:30:57 PM
Quote from: roy7
Yup. I just wanted to have an accurate way to show a miner their future remaining balance to offset the capacitor charging up originally. I have to imagine there's some learning curve for most miners who have never used a pool with DGM before.

I've just added DGM to my pool, and yes, there's a learning curve. Since introducing it, the pool hasn't solved a block in less than 1x difficulty worth of shares for the last 6 or so blocks. The new DGM users are struggling with the concept of lower earnings when they'd be earning more on PPS. It's probably easier for a pool that starts with DGM than for one that switches.
roy7
May 02, 2013, 11:49:37 PM
Quote from: Meni Rosenfeld
The method is as follows: 3. When a share is found, increase by p*s*B the score of the participant who found it.

Quote from: Meni Rosenfeld
b. Logarithmic scale: 3. When a share is found, let lS = ls + log(exp(lS-ls) + pB) for the participant who found it.

For readability, the first formula matches up to lS = log( exp(lS) + exp(ls) * pB ), but I assume you are doing it your way to have one less exp() call per calculation and/or to avoid huge/tiny numbers? My math skills are failing me to be sure both expressions are identical.

Edit: I understand setting lS = -Inf for new miners if you are setting them up in your score table in advance. But if you aren't going to add them until they request their first block to work on, it is equivalent to change ls + log(exp(lS-ls) + pB) to ls + log(pB), right? Since exp(lS-ls) will be 0 for an unknown miner, with lS = -Inf.
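For reference, the two forms are indeed algebraically identical: ls + log(exp(lS-ls) + pB) = log(exp(ls) * (exp(lS-ls) + pB)) = log(exp(lS) + exp(ls) * pB). A quick Python sketch (the scores and the pB value are made up for illustration) shows they agree, and why the second form breaks down numerically:

Code:
import math

def update_stable(lS, ls, pB):
    # The form from the method: only the difference lS - ls is
    # exponentiated, so it stays small even when both scores are huge.
    return ls + math.log(math.exp(lS - ls) + pB)

def update_naive(lS, ls, pB):
    # Algebraically identical, but exp(lS) and exp(ls) overflow
    # once the scores grow past roughly 709.
    return math.log(math.exp(lS) + math.exp(ls) * pB)

pB = 1e-6                                   # made-up p*B for one share
print(update_stable(2.0, 5.0, pB))          # approx. 2.00002
print(update_naive(2.0, 5.0, pB))           # same value
print(update_stable(1000.0, 1002.0, pB))    # still fine
# update_naive(1000.0, 1002.0, pB)          # raises OverflowError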
Meni Rosenfeld (OP)
Donator
Legendary
Offline
Activity: 2058
Merit: 1054
May 03, 2013, 05:47:02 AM
Quote from: roy7
For readability, the first formula matches up to lS = log( exp(lS) + exp(ls) * pB ), but I assume you are doing it your way to have one less exp() call per calculation and/or to avoid huge/tiny numbers? My math skills are failing me to be sure both expressions are identical.

It's to make it numerically robust and avoid overflow. Both lS and ls eventually get so large that exp(lS) and exp(ls) cause overflow, but their difference lS-ls always stays manageable. That's why we're using the logarithmic scale in the first place; if you could handle exp(lS) directly, you could simply work directly with S as in the original formulation.

Quote from: roy7
Edit: I understand setting lS = -Inf for new miners if you are setting them up in your score table in advance. But if you aren't going to add them until they request their first block to work on, it is equivalent to change ls + log(exp(lS-ls) + pB) to ls + log(pB), right? Since exp(lS-ls) will be 0 for an unknown miner, with lS = -Inf.

Right.
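A minimal sketch of that special case (the ls and pB values are made up): a miner with no score row yet is treated as lS = -Inf, so the update collapses to ls + log(pB).

Code:
import math

def first_share_score(ls, pB):
    # New miner: lS = -inf, so exp(lS - ls) = 0 and the update
    # collapses to lS = ls + log(pB).
    return ls + math.log(pB)

# Identical to running the full update with lS = -inf:
ls, pB = 5.0, 1e-6
full = ls + math.log(math.exp(float("-inf") - ls) + pB)
assert full == first_share_score(ls, pB)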
roy7
May 03, 2013, 02:29:10 PM
Quote from: Meni Rosenfeld
It's to make it numerically robust and avoid overflow. Both lS and ls eventually get so large that exp(lS) and exp(ls) cause overflow, but their difference lS-ls always stays manageable. That's why we're using the logarithmic scale in the first place; if you could handle exp(lS) directly, you could simply work directly with S as in the original formulation.

Yeah, I just wanted to be sure I'm handling the switch to using logs correctly. I'm trying to be very careful as I code it so that there are no mistakes. Any sort of error could be too subtle for me to spot in production just by looking at score numbers alone. I implemented it the way you wrote it; I was just trying to make sure I understood the math and didn't plug things in blindly. Thanks.
flound1129
May 07, 2013, 09:03:23 PM
Has anyone implemented DGM in PHP or a similar language? If so, I would love to take a look.
roy7
May 19, 2013, 02:03:45 AM
Hey Meni, with f=0.0 and c=0.03 I'm assuming that in long rounds the pool makes little or nothing and in short rounds it makes more than 3% to offset that. However, it seems so far (now that the first few blocks are out of the way and the miners have plenty of score) that in a really long round I'm making a hair over 0% (which is OK), but in a really short round I made a hair under 3%. The short round making only about 3% is what had me confused. I was figuring that in short rounds (the CDF was 2.2%) the pool would make well over the value of c.

Any thoughts? It's possible I have something wrong, although my Java and PHP implementations resulted in the same values and it seems to be functioning correctly for actual miner payments.
Meni Rosenfeld (OP)
Donator
Legendary
Offline
Activity: 2058
Merit: 1054
May 19, 2013, 08:59:21 AM Last edit: May 19, 2013, 02:28:17 PM by Meni Rosenfeld
Quote from: roy7
Hey Meni, with f=0.0 and c=0.03 I'm assuming that in long rounds the pool makes little or nothing and in short rounds it makes more than 3% to offset that. However, it seems so far that in a really long round I'm making a hair over 0% (which is OK), but in a really short round I made a hair under 3%. I was figuring that in short rounds (the CDF was 2.2%) the pool would make well over the value of c. Any thoughts?

There's cross-round leakage, so the pool's proceeds depend not only on the length of the last round but on the previous rounds as well. If the short round you looked at followed a succession of long rounds, it's normal to have low revenue.
roy7
May 19, 2013, 01:59:48 PM
Quote from: Meni Rosenfeld
There's cross-round leakage, so the pool's proceeds depend not only on the length of the last round but on the previous rounds as well. If the short round you looked at followed a succession of long rounds, it's normal to have low revenue.

Ah, OK. I'll wait to worry until a few dozen more rounds have happened to give me more data to look at.
roy7
May 24, 2013, 04:05:15 PM
I've hit so many short blocks in a row that the fee is up to about 10%. I can't see miners appreciating that, even if in the long run it will all even out. (My fee on a recent long block was 0%.) So I might retool a bit and move from c=3% to f=3%.

If I have c=0, though, what should I use for r, since I can't divide by zero?
Meni Rosenfeld (OP)
Donator
Legendary
Offline
Activity: 2058
Merit: 1054
May 26, 2013, 08:21:47 AM
Quote from: roy7
I've hit so many short blocks in a row that the fee is up to about 10%. I can't see miners appreciating that, even if in the long run it will all even out. (My fee on a recent long block was 0%.)

This sentence doesn't make sense. You're reducing the miners' variance and taking it on yourself. In PPS with similar luck your fee would have been 300%, and miners would still appreciate you giving them variance-free payouts.

Quote from: roy7
So I might retool a bit and move from c=3% to f=3%. If I have c=0, though, what should I use for r, since I can't divide by zero?

If c=0 then o must be 1, and then you can choose anything for r. r = 1+p is a good choice; more generally you can use r = 1 + ap, where larger a gives more variance and less maturity time.
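A tiny illustration of that knob (the difficulty value here is made up; p = 1/D for difficulty-1 shares):

Code:
# Choosing r when c = 0 (so o = 1), using Meni's suggestion r = 1 + a*p.
D = 390_928.0               # hypothetical block difficulty
p = 1.0 / D                 # p for a difficulty-1 share

for a in (1.0, 5.0, 10.0):  # larger a: more variance, less maturity time
    r = 1.0 + a * p
    print(f"a = {a:g}: r = {r:.12f}")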
roy7
May 26, 2013, 02:56:57 PM
Quote from: Meni Rosenfeld
This sentence doesn't make sense. You're reducing the miners' variance and taking it on yourself. In PPS with similar luck your fee would have been 300%, and miners would still appreciate you giving them variance-free payouts.

Oh, I understand, and it's why I chose DGM. I'm just not sure the average miner will understand. I still see people in other pools' threads complain about DGM now and then for totally incorrect reasons (penalizes hoppers, penalizes people who don't mine 24 hours a day, etc.). I'm sticking with it after all. Those who understand DGM and why I'm doing it this way will appreciate it, and those who don't will just mine elsewhere.
Ascholten
May 29, 2013, 08:41:47 PM
I've been sticking with Bitparking through their problems.

Right now my mining has been pretty stable, yet the 'reward' seems to have dropped through the floor. Is this normal for this kind of reward system, or is there a flaw somewhere? Did the difficulty just jump an order of magnitude?

Aaron
redtwitz
May 29, 2013, 08:56:32 PM
Quote from: Ascholten
I've been sticking with Bitparking through their problems. Right now my mining has been pretty stable, yet the 'reward' seems to have dropped through the floor. Is this normal for this kind of reward system, or is there a flaw somewhere? Did the difficulty just jump an order of magnitude?

If you mean the payout per round, take into account that Bitparking's hash rate has almost doubled since yesterday.
Meni Rosenfeld (OP)
Donator
Legendary
Offline
Activity: 2058
Merit: 1054
May 29, 2013, 09:51:26 PM
Quote from: Ascholten
I've been sticking with Bitparking through their problems. Right now my mining has been pretty stable, yet the 'reward' seems to have dropped through the floor. Is this normal for this kind of reward system, or is there a flaw somewhere? Did the difficulty just jump an order of magnitude?

Quote from: redtwitz
If you mean the payout per round, take into account that Bitparking's hash rate has almost doubled since yesterday.

Indeed. If the pool's hashrate doubles, you have twice as many rounds with half the reward each. But if that's not the issue, I'd need more details to know what is.
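A back-of-the-envelope sketch of that arithmetic (all numbers are made up for illustration):

Code:
# Expected daily earnings don't change when the pool's hashrate grows;
# only the per-round granularity does.
B = 50.0                              # block reward
your_rate, pool_rate = 2.0, 100.0     # hypothetical hashrates, GH/s
rounds_per_day = 4.0                  # blocks the pool finds per day

for scale in (1, 2):                  # scale = 2: pool hashrate doubles
    share = your_rate / (pool_rate * scale)
    per_round = share * B
    daily = per_round * rounds_per_day * scale
    print(f"pool x{scale}: {per_round:.3f} per round, {daily:.3f} per day")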
doublec
Legendary
Offline
Activity: 1078
Merit: 1005
May 30, 2013, 02:04:24 AM
Quote from: Ascholten
Right now my mining has been pretty stable, yet the 'reward' seems to have dropped through the floor. Is this normal for this kind of reward system, or is there a flaw somewhere? Did the difficulty just jump an order of magnitude?

Pool luck has been pretty reasonable lately. See the blockstats here. When exactly did your rewards drop? And are you comparing to PPS, to other pools, or to the historical DGM payout on Bitparking?
grich
Newbie
Offline
Activity: 29
Merit: 0
June 17, 2013, 12:56:20 PM
Did I understand correctly: we need to recalculate the global variable "s" after each submitted share?

Suppose there is a large pool handling a few TH/s (hundreds of shares per second). What if the operator recalculates 's' only on every 100th share? How would that affect miners?

And what if the pool supports adaptive share difficulty (x2, x4, ...)? Can it just count each x16-difficulty share as 16 x1-difficulty shares in a row and push them into the DGM algorithm? Adaptive difficulty increases a miner's variance, but is that the only problem?
|
roy7
June 17, 2013, 01:07:48 PM
Quote from: grich
Did I understand correctly: we need to recalculate the global variable "s" after each submitted share? Suppose there is a large pool handling a few TH/s (hundreds of shares per second). What if the operator recalculates 's' only on every 100th share? How would that affect miners?

The calculations are lightning fast; updating s hundreds of times per second shouldn't be a problem. Well, do it locally in your front-end system and store/update it in the database every so often. I wouldn't want to do hundreds of SQL updates per second just to update the value of s. Update it on every share and store it every 100 (or some number of) shares in your database. If you have 100 shares/sec and store it every 500-1000 shares, then if the front end crashes you only lose 5-10 seconds of data, which may not be all that bad. Or do it every 100 shares; 1 SQL query per second is no issue.

Quote from: grich
And what if the pool supports adaptive share difficulty (x2, x4, ...)? Can it just count each x16-difficulty share as 16 x1-difficulty shares in a row and push them into the DGM algorithm? Adaptive difficulty increases a miner's variance, but is that the only problem?

https://bitcointalk.org/index.php?topic=39497.msg1928641#msg1928641

p = d/D, where d is the difficulty of the work sent to the miner and D is the difficulty of the block being worked on. So it'd just be 1/D or 2/D or 4/D, etc. You don't need to update it 4 times in a row with 1/D; just use 4/D. One error I had early on was to use the difficulty of the block being worked on now, rather than the difficulty of the block when the work was sent to the miner. So keep an eye on that. (On Bitcoin it'd only matter at each difficulty change, but for my TRC pool the difficulty changes every block.)
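A minimal Python sketch of the pattern roy7 describes: scores updated in memory on every share, persisted to the database in batches, with p = d/D. The class and method names, the batch size, and the persist stub are all hypothetical, and the per-share update of s itself (step 2 of the method, not quoted in this stretch of the thread) is an assumption noted in the comments:

Code:
import math

class DgmState:
    PERSIST_EVERY = 500                # roy7 suggests every 100-1000 shares

    def __init__(self, log_r):
        self.log_r = log_r             # log(r) for the pool's chosen r
        self.ls = 0.0                  # log-scale global score variable s
        self.lS = {}                   # per-miner log scores; missing = -inf
        self.pending = 0

    def on_share(self, miner, d, D, B):
        # d = difficulty of the work *when it was sent*, not the current
        # block difficulty; D = difficulty of the block being worked on.
        p = d / D
        pB = p * B
        # Step 3 (quoted above): lS = ls + log(exp(lS - ls) + pB).
        prev = self.lS.get(miner)
        if prev is None:               # new miner: exp(-inf - ls) == 0
            self.lS[miner] = self.ls + math.log(pB)
        else:
            self.lS[miner] = self.ls + math.log(math.exp(prev - self.ls) + pB)
        # Assumed, since step 2 isn't quoted here: s grows by a factor of
        # r per difficulty-1 share, i.e. r^d for a d-difficulty share.
        self.ls += d * self.log_r
        # Persist in batches rather than one SQL write per share.
        self.pending += 1
        if self.pending >= self.PERSIST_EVERY:
            self._persist()
            self.pending = 0

    def _persist(self):
        pass                           # hypothetical: one batched SQL UPDATE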
grich
Newbie
Offline
Activity: 29
Merit: 0
June 18, 2013, 10:51:36 AM
Quote from: roy7
The calculations are lightning fast; updating s hundreds of times per second shouldn't be a problem. Well, do it locally in your front-end system and store/update it in the database every so often. I wouldn't want to do hundreds of SQL updates per second just to update the value of s.

Thanks, but you answered a different question =) I agree that 100-1000 multiplications per second is not a problem. But I'd like to know whether recalculating 's' only on every i-th share is a reasonable optimization for large pools.
roy7
June 18, 2013, 01:17:14 PM
Quote from: grich
Thanks, but you answered a different question =) I agree that 100-1000 multiplications per second is not a problem. But I'd like to know whether recalculating 's' only on every i-th share is a reasonable optimization for large pools.

That I'll have to leave to Meni, but surely it'd break the math. To what extent, I can't say.