-ck (OP)
Legendary
Offline
Activity: 4284
Merit: 1645
Ruu \o/
|
|
June 22, 2012, 10:23:19 AM |
|
With the upcoming HUGE hashing hardware starting to hit, now would be a good time to consider supporting higher than 1 difficulty shares for bigger miners, which would allow pools to scale without as much increase in bandwidth and server resource requirements as the increase in hashrates would otherwise demand. I'd suggest initially making an optional difficulty multiplier switch for workers on the website, which would scale with the miner's hashrate. Enabling it by default would surprise and confuse many miners, and some mining software may not support it, so they'd just get high rejects unexpectedly. As a rough guess, I'd recommend increasing difficulty by 1 for every 1GH of hashing power. This would not dramatically change getwork rates, but it would change the share submission rate and the processing of those shares, which is bandwidth and CPU intensive. There would be issues with fluctuating hashrates and difficulty targets for miners sitting precisely on the 1GH boundaries; this could be worked around by the user setting their own target, or by using some hysteresis for the change up and down of targets to avoid frequently flicking between difficulties.
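A minimal sketch of the kind of pool-side adjuster described above (Python; the 1-diff-per-1GH rule, the hysteresis band, and all names are illustrative, not any existing pool's code):
Code:
# Hypothetical per-worker difficulty adjuster: roughly 1 difficulty per
# 1 GH/s, with a hysteresis band so workers on a boundary don't flap.
GH = 1e9
HYSTERESIS = 0.25  # fraction of one difficulty step

def adjust_difficulty(current_diff, hashrate_hps, user_override=None):
    """Return the new share difficulty for a worker.

    current_diff  -- the worker's present share difficulty (>= 1)
    hashrate_hps  -- measured hashrate in hashes per second
    user_override -- fixed target chosen by the user, if any
    """
    if user_override is not None:
        return max(1, user_override)
    # Rough rule from the post: one unit of difficulty per 1 GH/s.
    target = max(1.0, hashrate_hps / GH)
    # Hysteresis: step up only once well past the next boundary,
    # and step down only once well below the current one.
    if target > current_diff + 1 + HYSTERESIS:
        return int(target)          # step up
    if target < current_diff - HYSTERESIS:
        return max(1, int(target))  # step down
    return current_diff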
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
Graet
VIP
Legendary
Offline
Activity: 980
Merit: 1001
|
|
June 22, 2012, 10:39:15 AM |
|
We have been discussing higher-diff shares going forward. Initially we will offer miners a choice of two difficulties. Over time we will code up some way to do dynamic difficulty as suggested, and ensure payouts work correctly with dynamic share difficulties.
|
|
|
|
kano
Legendary
Offline
Activity: 4620
Merit: 1851
Linux since 1997 RedHat 4
|
|
June 22, 2012, 10:57:46 AM |
|
Will be interesting to see the effect on stale shares - since each stale is worth n times what it was before. E.g. if your difficulty goes up 10 times but your stale rate doesn't go down 10 times, you lose by having a higher difficulty.
|
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4284
Merit: 1645
Ruu \o/
|
|
June 22, 2012, 11:02:26 AM |
|
Quote from: kano
Will be interesting to see the effect on stale shares - since each stale is worth n times what it was before. E.g. if your difficulty goes up 10 times but your stale rate doesn't go down 10 times, you lose by having a higher difficulty.
Indeed, back to that chance debate I've seen many times before in different forms: whether the random nature of shares, scattered about and falling where they may relative to block changes, will even out compared to many many small shares. Mathematically it would seem to me to be exactly the same as current shares based on chance: it will fluctuate visibly more but even out long term to be identical.
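The long-run claim is easy to sanity-check by simulation. A rough sketch (Python; the probabilities and run counts are toy figures chosen only to make it quick): pay each accepted share proportionally to its difficulty and compare a diff-1 miner against a diff-10 miner at the same hashrate. The means match; the higher difficulty just spreads wider.
Code:
import numpy as np

rng = np.random.default_rng(42)
HASHES = 10_000_000  # simulated hashes per run
P1 = 1e-5            # toy per-hash probability of a difficulty-1 share

def earnings(diff, runs=1000):
    # A share at difficulty `diff` turns up with probability P1/diff per
    # hash and is credited `diff` points, so expectation is independent
    # of the difficulty chosen.
    shares = rng.binomial(HASHES, P1 / diff, size=runs)
    return shares * diff

for d in (1, 10):
    e = earnings(d)
    print(f"diff {d:2}: mean {e.mean():7.1f}  std {e.std():5.1f}")
# Both means come out ~100; the diff-10 std is ~sqrt(10) times larger.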
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4284
Merit: 1645
Ruu \o/
|
|
June 22, 2012, 11:32:47 AM |
|
It's also worth mentioning this would make things very interesting for the proxy pools out there if they start passing work to pools set up for higher difficulty shares.
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
Meni Rosenfeld
Donator
Legendary
Offline
Activity: 2058
Merit: 1054
|
|
June 22, 2012, 12:01:08 PM |
|
Perhaps this is a good opportunity to review sections 7.5 and 7.6 of AoBPMRS. Having different difficulty shares for different miners is fairly straightforward from the reward method perspective.
Quote from: kano
Will be interesting to see the effect on stale shares - since each stale is worth n times what it was before. E.g. if your difficulty goes up 10 times but your stale rate doesn't go down 10 times, you lose by having a higher difficulty.
Stale rate is measured as a percentage, and as long as the percentage remains the same it doesn't matter what difficulty the shares are. And the share difficulty should have absolutely no effect on the stale rate.
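A quick back-of-the-envelope check of this point (Python; the shares-per-hour figure is a toy number, roughly a 1 GH/s miner at difficulty 1): at the same stale percentage you find a tenth as many stales at ten times the value, so the expected loss rate is unchanged.
Code:
# Same hashrate, same 2% stale percentage, two share difficulties.
# Work lost per hour = (shares found) * (stale fraction) * (share value).
shares_per_hour = 840  # toy figure for 1 GH/s at difficulty 1
stale_fraction = 0.02

for diff in (1, 10):
    found = shares_per_hour / diff        # higher diff => fewer shares
    lost = found * stale_fraction * diff  # each stale worth `diff`
    print(f"diff {diff:2}: {found:5.1f} shares/h, loss {lost:.2f}/h")
# Both difficulties lose the same 16.80 difficulty-1 equivalents/hour.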
|
|
|
|
organofcorti
Donator
Legendary
Offline
Activity: 2058
Merit: 1007
Poor impulse control.
|
|
June 22, 2012, 12:07:54 PM |
|
Increasing D for pooled share submissions would increase variance for miners.
The variance in time between share submissions at a constant hashrate will increase by the square of the ratio of the greater difficulty to the lesser one. Increase D from 1 to 10 and the variance of the time in between share submissions increases one hundred fold.
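The variance claim can be checked directly from the geometric distribution; a sketch (Python, using the usual ~2^32 expected hashes per difficulty-1 share):
Code:
# Time (in hashes) to find one share at difficulty D is geometric with
# p = 1 / (D * 2**32), so Var = (1 - p) / p**2 ~ (D * 2**32)**2.
def share_time_variance(D):
    p = 1.0 / (D * 2**32)
    return (1 - p) / p**2

v1, v10 = share_time_variance(1), share_time_variance(10)
print(f"D=1 : {v1:.6e}")        # ~1.844674e+19
print(f"D=10: {v10:.6e}")       # ~1.844674e+21
print(f"ratio: {v10 / v1:.1f}") # ~100, the square of the D ratio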
|
|
|
|
kano
Legendary
Offline
Activity: 4620
Merit: 1851
Linux since 1997 RedHat 4
|
|
June 22, 2012, 12:12:06 PM |
|
Quote from: Meni Rosenfeld
Perhaps this is a good opportunity to review sections 7.5 and 7.6 of AoBPMRS. Having different difficulty shares for different miners is fairly straightforward from the reward method perspective.
Quote from: kano
Will be interesting to see the effect on stale shares - since each stale is worth n times what it was before. E.g. if your difficulty goes up 10 times but your stale rate doesn't go down 10 times, you lose by having a higher difficulty.
Quote from: Meni Rosenfeld
Stale rate is measured as a percentage, and as long as the percentage remains the same it doesn't matter what difficulty the shares are. And the share difficulty should have absolutely no effect on the stale rate.
Except that with higher difficulty it takes longer to find a share, thus the amount of work lost due to a stale increases ...... So no.
|
|
|
|
racerguy
|
|
June 22, 2012, 12:17:08 PM |
|
Quote from: Meni Rosenfeld
Perhaps this is a good opportunity to review sections 7.5 and 7.6 of AoBPMRS. Having different difficulty shares for different miners is fairly straightforward from the reward method perspective.
Quote from: kano
Will be interesting to see the effect on stale shares - since each stale is worth n times what it was before. E.g. if your difficulty goes up 10 times but your stale rate doesn't go down 10 times, you lose by having a higher difficulty.
Quote from: Meni Rosenfeld
Stale rate is measured as a percentage, and as long as the percentage remains the same it doesn't matter what difficulty the shares are. And the share difficulty should have absolutely no effect on the stale rate.
Quote from: kano
Except that with higher difficulty it takes longer to find a share, thus the amount of work lost due to a stale increases ...... So no.
mathfail
|
|
|
|
kano
Legendary
Offline
Activity: 4620
Merit: 1851
Linux since 1997 RedHat 4
|
|
June 22, 2012, 12:18:48 PM Last edit: June 22, 2012, 12:34:09 PM by kano |
|
Quote from: Meni Rosenfeld
Perhaps this is a good opportunity to review sections 7.5 and 7.6 of AoBPMRS. Having different difficulty shares for different miners is fairly straightforward from the reward method perspective.
Quote from: kano
Will be interesting to see the effect on stale shares - since each stale is worth n times what it was before. E.g. if your difficulty goes up 10 times but your stale rate doesn't go down 10 times, you lose by having a higher difficulty.
Quote from: Meni Rosenfeld
Stale rate is measured as a percentage, and as long as the percentage remains the same it doesn't matter what difficulty the shares are. And the share difficulty should have absolutely no effect on the stale rate.
Quote from: kano
Except that with higher difficulty it takes longer to find a share, thus the amount of work lost due to a stale increases ...... So no.
Quote from: racerguy
mathfail
Yep - OK, he said %, I was thinking number of shares. I guess my wording was really bad.
Edit: as long as the number of stales drops by the same ratio as the difficulty increases, then it will be OK. My reason for bringing this up is simply that different devices have different characteristics, so it will be interesting to see if the differences come into play with LPs and higher difficulty.
Edit2: cgminer reports numbers, not % (and I don't look at the pool %), so I guess that's why I messed up.
|
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4284
Merit: 1645
Ruu \o/
|
|
June 22, 2012, 12:20:00 PM |
|
Quote from: organofcorti
Increasing D for pooled share submissions would increase variance for miners.
The variance in time between share submissions at a constant hashrate will increase by the square of the ratio of the greater difficulty to the lesser one. Increase D from 1 to 10 and the variance of the time in between share submissions increases one hundred fold.
If true then the automatic scaling should be logarithmic rather than linear.
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
Meni Rosenfeld
Donator
Legendary
Offline
Activity: 2058
Merit: 1054
|
|
June 22, 2012, 12:34:12 PM |
|
Quote from: organofcorti
Increasing D for pooled share submissions would increase variance for miners.
The variance in time between share submissions at a constant hashrate will increase by the square of the ratio of the greater difficulty to the lesser one. Increase D from 1 to 10 and the variance of the time in between share submissions increases one hundred fold.
Quote from: -ck
If true then the automatic scaling should be logarithmic rather than linear.
If anything it should be square-root, definitely not logarithmic. If we assume the miner has a target function with linear factors for expectation and variance, his optimal difficulty doesn't depend on his hashrate, only on his net worth. If we assume his net worth is proportional to his hashrate, then the optimal difficulty grows with the square root of the hashrate.
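Under these assumptions the rule of thumb is simple; a hypothetical sketch (Python; the 1 GH/s anchor point is arbitrary, chosen only to pin down the curve):
Code:
import math

# If optimal difficulty scales with the square root of hashrate
# (anchored, arbitrarily, at difficulty 1 for 1 GH/s), even very
# large miners end up with modest targets.
def suggested_difficulty(hashrate_ghps, anchor_ghps=1.0):
    return max(1, round(math.sqrt(hashrate_ghps / anchor_ghps)))

for gh in (1, 4, 25, 100, 1000):
    print(f"{gh:5} GH/s -> difficulty {suggested_difficulty(gh)}")
# 1 -> 1, 4 -> 2, 25 -> 5, 100 -> 10, 1000 -> 32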
|
|
|
|
organofcorti
Donator
Legendary
Offline
Activity: 2058
Merit: 1007
Poor impulse control.
|
|
June 22, 2012, 12:41:28 PM Last edit: December 20, 2012, 07:16:42 AM by organofcorti |
|
Quote from: organofcorti
Increasing D for pooled share submissions would increase variance for miners.
The variance in time between share submissions at a constant hashrate will increase by the square of the ratio of the greater difficulty to the lesser one. Increase D from 1 to 10 and the variance of the time in between share submissions increases one hundred fold.
Quote from: -ck
If true then the automatic scaling should be logarithmic rather than linear.
The expected number of hashes to solve a D=1 block is 2^48 / as.numeric(0xffff), or approximately 2^32. Each hash has the same probability of solving a D=1 block and creating a share, so the probability distribution is geometric. If p = 1/D, the variance of the geometric distribution is (1 - p)/p^2 = D^2 - D, which approaches D^2 as D increases. In the case of D=1 the variance (in hashes) is 1.844731e+19; for D=10 it's 1.844731e+21. The ratio of D=10 to D=1 is ~100.
... and then Meni got there first. What he said about square roots rather than logs.
Edit: this bit was all wrong. The CDF of a share passing a particular difficulty is derived from 1/(uniform distribution CDF).
|
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4284
Merit: 1645
Ruu \o/
|
|
June 22, 2012, 12:58:16 PM |
|
Gentlemen, I salute thee o\
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
eleuthria
Legendary
Offline
Activity: 1750
Merit: 1007
|
|
June 23, 2012, 10:08:29 AM |
|
Just to repeat what was stated in the last thread about this:
Changing the difficulty does not change the frequency your miner will request work. It will only reduce the frequency you send work back. For pools, the difference in load here is minimal; it's much harder to send you work than it is to verify work that was sent back.
If the ASICs are real (I still highly doubt BFL will produce anything remotely close to their claims), the entire mining protocol will need an overhaul. Clients will either have to generate work locally and send it to the pool (similar to p2pool), or use a getwork method where a miner can get a packet of many getworks at once in a condensed format.
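For context on the load asymmetry above, a sketch of what checking one submitted share involves, assuming the getwork-style 80-byte block header (Python; the function name is illustrative, not any pool's actual code):
Code:
import hashlib

DIFF1_TARGET = 0xffff * 2**208  # standard difficulty-1 target

def check_share(header80: bytes, difficulty: int) -> bool:
    """Verify one submitted share: two SHA256 passes and a comparison.

    This is the cheap side of pool load; building and dispatching fresh
    work for every getwork request is the expensive side.
    """
    assert len(header80) == 80
    h = hashlib.sha256(hashlib.sha256(header80).digest()).digest()
    # Bitcoin compares the hash as a little-endian 256-bit integer.
    return int.from_bytes(h, "little") <= DIFF1_TARGET // difficulty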
|
RIP BTC Guild, April 2011 - June 2015
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4284
Merit: 1645
Ruu \o/
|
|
June 23, 2012, 10:12:41 AM |
|
Indeed. There's nothing like a change to drive development, is there? Edit: I updated the entries for cgminer and bfgminer since they support it.
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4284
Merit: 1645
Ruu \o/
|
|
June 23, 2012, 10:13:42 AM |
|
Quote from: eleuthria
Just to repeat what was stated in the last thread about this:
Changing the difficulty does not change the frequency your miner will request work. It will only reduce the frequency you send work back. For pools, the difference in load here is minimal; it's much harder to send you work than it is to verify work that was sent back.
If the ASICs are real (I still highly doubt BFL will produce anything remotely close to their claims), the entire mining protocol will need an overhaul. Clients will either have to generate work locally and send it to the pool (similar to p2pool), or use a getwork method where a miner can get a packet of many getworks at once in a condensed format.
Does proof of work on all those shares coming in require that little CPU? I've never run a pool so maybe I'm barking up the wrong tree entirely.
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
DrHaribo
Legendary
Offline
Activity: 2730
Merit: 1034
Needs more jiggawatts
|
|
June 23, 2012, 10:23:19 AM |
|
Quote from: eleuthria
If the ASICs are real (I still highly doubt BFL will produce anything remotely close to their claims), the entire mining protocol will need an overhaul. Clients will either have to generate work locally and send it to the pool (similar to p2pool), or use a getwork method where a miner can get a packet of many getworks at once in a condensed format.
That already exists. It's called rollntime. Sadly, miner support isn't very good. To make good use of rollntime the miner should 1) make the best use of the roll range it is given, and 2) never roll further than the server allows ("expire" support). In my pool I whitelist miners with proper rollntime support; others don't get rollable work. Currently only DiabloMiner and my own miner are whitelisted (although I'm only now working on actual support in the miner). I will have a look at MPBM and possibly add that. I hope other miners will improve support, and I'll whitelist them as they come out. I think ASIC mining without this could be a very bad idea.
Quote from: -ck
Does proof of work on all those shares coming in require that little CPU? I've never run a pool so maybe I'm barking up the wrong tree entirely.
If someone gets (through rollntime) 100 work units (nonce ranges) from my pool in one request and then sends in 100 proofs of work, then processing the proofs of work is what's causing server load. Many seem to believe that processing proofs of work is free, but I think we'll see this proven wrong in October.
Quote from: -ck
Indeed. There's nothing like a change to drive development, is there?
Yeah. The bitcoin world is in constant change, and it's always do or die for developers.
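To illustrate what proper rollntime support amounts to on the miner side, a hedged sketch (Python): the field offset is the standard 80-byte header layout, and max_rolls stands in for whatever limit the server advertises, so treat the names as illustrative rather than any miner's actual code.
Code:
import struct

def roll_work(header80: bytes, max_rolls: int):
    """Yield up to max_rolls+1 work units from one getwork response by
    incrementing the 32-bit ntime field (bytes 68..72 of the header).

    Stopping at max_rolls is the "expire" support described above:
    never roll further than the server said it will accept.
    """
    base = bytearray(header80)
    ntime = struct.unpack_from("<I", base, 68)[0]
    for i in range(max_rolls + 1):
        struct.pack_into("<I", base, 68, (ntime + i) & 0xFFFFFFFF)
        yield bytes(base)  # each yielded header is a fresh nonce range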
|
|
|
|
|