Bitcoin Forum
Author Topic: Dynamic Difficulty Adjustments  (Read 2081 times)
InterArmaEnimSil (Member, Activity: 77)
July 18, 2010, 10:15:36 PM  #1

I have read that the difficulty of BTC generation varies every two weeks, based upon statistics gathered over the course of the two weeks.  This seems to introduce an undue amount of lag into the system.  We've all read about the guy who supposedly has 1,000 cores running the client - if he leaves, suddenly BTC generation takes far longer.  Also, if the difficulty is low and we suddenly have a mass influx of power, then coins are generated too quickly.

What was the justification behind the two-week interval between difficulty adjustments?  Why not do something like the following?  (In this example, the magnitudes of the changes are fictitious, and the target block generation time is taken to be ten minutes.)

after each block:

difficulty+=.001*600-block.genTimeInSeconds

Thus, the difficulty would adjust dynamically up or down every block, with the magnitude of the adjustments being in proportion to the influx or exodus of computing power during that last block.  Yes, you would get situations where someone would randomly solve a block in ten seconds and thus the next difficulty would ramp up exceedingly high, but the high (or low, in the converse situation) difficulty would only last for one block, and these random noise variations would even out in the long run.

12aro27eH2SbM1N1XT4kgfsx89VkDf2rYK
ByteCoin (Sr. Member, expert, Activity: 416)
July 19, 2010, 12:29:00 PM  #2

If the aim is to make block generation more regular by adjusting the difficulty more frequently, why not mandate that blocks are generated exactly every 10 minutes using the scheme in

http://bitcointalk.org/index.php?topic=425.0

The main advantage of my scheme is that when someone asks you "How long until my transaction is 'confirmed'?", instead of saying "Uhh... dunno, depends on the difficulty, total computing power, and a load of completely unpredictable random factors", you can say "It will be confirmed in 2 hours to 2 hours 10 minutes." The unpredictability of the current approach is not a feature of a commercially suitable financial system.

Transactions in block 68477 took under 3 minutes to get confirmed, but transactions in block 68780 took over two hours!

ByteCoin
dete (Newbie, Activity: 22)
July 19, 2010, 05:19:36 PM  #3

The noise in the time it takes to generate blocks is very, very high.  There was a link posted somewhere in the forums that showed the time taken for the swarm to solve the previous 100 blocks (or so), and the range in solution time was between 3 seconds and 20 minutes.
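The spread dete describes is what you'd expect if block discovery is a memoryless (Poisson) process: solve times are exponentially distributed around the target, so very short and very long blocks are both routine. A small sketch (not from the thread; the function name is invented here) computing tail probabilities from the exponential CDF with a 600-second mean:

```python
import math

MEAN_BLOCK_TIME = 600.0  # target: one block every ten minutes

def prob_block_faster_than(t_seconds, mean=MEAN_BLOCK_TIME):
    """P(solve time < t) under an exponential distribution: 1 - exp(-t/mean)."""
    return 1.0 - math.exp(-t_seconds / mean)

# Even when the difficulty is tuned perfectly, extremes are routine:
p_under_3min = prob_block_faster_than(180)         # about 26% of blocks
p_over_30min = 1.0 - prob_block_faster_than(1800)  # about 5% of blocks
p_over_2h = 1.0 - prob_block_faster_than(7200)     # rare, but it happens
```

On this model a two-hour block like the one ByteCoin cites is noise, not a difficulty problem.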

Adjusting SLIGHTLY after each block makes sense to me though.  Maybe the difficulty could go up or down by a fraction of one percent each block, and then be recomputed wholesale at the two-week boundaries?

Regardless of how it's done though, we need to be wary of wild swings...

Send me a tip!
1Benh27wZoszDjvGTSWTWD2CWPr4rWUoGk
wizeman (Newbie, Activity: 7)
July 19, 2010, 06:02:31 PM  #4

after each block:

difficulty+=.001*600-block.genTimeInSeconds

I assume you meant "difficulty += .001 * (600 - block.genTimeInSeconds)" instead?

Thus, the difficulty would adjust dynamically up or down every block, with the magnitude of the adjustments being in proportion to the influx or exodus of computing power during that last block.

Actually, in the current algorithm the magnitude of the adjustments is also proportional to the influx or exodus of computing power, it's just that it happens at a period of 2016 blocks instead of being adjusted whenever a block is received.

I think that what you're proposing is for the algorithm to adjust the difficulty continuously rather than periodically.

If so, I think there are better ways.
For one, we shouldn't use floating point to compute the target hash, because rounding differences across architectures could cause some computers to accept a hash that others reject.

Second, I don't understand where the ".001" constant comes from. I'm not convinced that your algorithm would accurately reflect the difficulty of hash generation. It looks like it would converge eventually, but I'm not sure how quickly.

I think that to achieve your goal, it would be much better to simply change the current algorithm to adjust the difficulty at the end of each block rather than at the end of each sequence of 2016 blocks.

In other words, every time we accept a new block, we'd look at the elapsed time of the latest 2016 blocks and calculate what the new target hash should be.

This seems feasible and easy enough; the only problem is that it's not a backwards-compatible change, so it can only take effect once everyone has upgraded to a version that knows how to do the new calculations.
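wizeman's per-block variant of the existing rule can be sketched directly: keep the timestamps of the last 2016 blocks and, after every block, rescale the hash target by the ratio of actual to expected timespan. This is an illustrative sketch only; the names `retarget_every_block`, `WINDOW`, and `TARGET_SPACING` are invented here:

```python
WINDOW = 2016         # blocks per adjustment window, as in the current rule
TARGET_SPACING = 600  # desired seconds per block

def retarget_every_block(timestamps, old_target):
    """Recompute the hash target from a sliding window of block timestamps.

    A larger target means an easier hash, so scaling the target by
    actual/expected timespan raises the difficulty when blocks arrived
    too quickly and lowers it when they arrived too slowly.
    """
    if len(timestamps) < WINDOW:
        return old_target  # not enough history yet
    actual = timestamps[-1] - timestamps[-WINDOW]
    expected = (WINDOW - 1) * TARGET_SPACING
    return old_target * actual // expected  # integer math, no floating point
```

Using integer arithmetic for the target also sidesteps the floating-point consensus hazard raised earlier in the thread.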
wizeman (Newbie, Activity: 7)
July 19, 2010, 06:20:54 PM  #5

Regardless of how it's done though, we need to be wary of wild swings...

Indeed, this is also very important, especially to protect against DDoS attacks.
The current algorithm already tries to prevent this by not allowing more than a 4x adjustment every 2016 blocks.

I think we'd need a way to achieve the same end result when doing continuous adjustments. Obviously not a 4x limit on every single block; instead, it would need to prevent the difficulty over the latest 2016 blocks from differing by more than a 4x factor from the calculated difficulty of the 2016 blocks before those.
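The guard wizeman describes amounts to a clamp on whatever the window calculation produces; a minimal sketch, with invented names:

```python
MAX_ADJUST = 4  # the same bound the current 2016-block rule uses

def clamp_difficulty(proposed, reference):
    """Keep a proposed difficulty within a 4x factor of a reference value
    (e.g. the difficulty computed over the previous 2016-block window)."""
    lower = reference / MAX_ADJUST
    upper = reference * MAX_ADJUST
    return max(lower, min(upper, proposed))
```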
InterArmaEnimSil (Member, Activity: 77)
July 19, 2010, 09:58:45 PM  #6


I assume you meant "difficulty += .001 * (600 - block.genTimeInSeconds)" instead?
Yes.

Second, I don't understand where the ".001" constant comes from. I'm not convinced that your algorithm would accurately reflect the difficulty of hash generation. It looks like it would converge eventually, but I'm not sure how quickly.
randomFractionFactor = .001.  It isn't necessarily .001; it's just a factor to scale the change to a certain percentage of the current difficulty.
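Whether such a rule converges, and how fast (wizeman's question above), can be explored with a toy deterministic model: assume the expected block time grows linearly with difficulty for a fixed hashrate, and apply the scaled per-block update after each block. Everything here (the linear model, the function name) is an assumption for illustration, not the thread's code:

```python
TARGET = 600.0  # target block time in seconds

def simulate(d0, equilibrium, k=0.001, steps=50):
    """Iterate the proposed update in a noiseless model.

    Assumes the expected block time is TARGET * d / equilibrium (i.e. it
    grows linearly with difficulty, for a fixed hashrate whose 'correct'
    difficulty is `equilibrium`), and scales each change to a fraction of
    the current difficulty, as described above.
    """
    d = d0
    for _ in range(steps):
        block_time = TARGET * d / equilibrium
        d += k * d * (TARGET - block_time)
    return d
```

In this noiseless toy the difficulty closes most of the gap to equilibrium within a few dozen blocks; real solve times are exponentially noisy, so every block would also inject a large random kick through the same update.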

wizeman (Newbie, Activity: 7)
July 19, 2010, 10:12:01 PM  #7


I assume you meant "difficulty += .001 * (600 - block.genTimeInSeconds)" instead?

(...)

randomFractionFactor = .001.  It isn't necessarily .001; it's just a factor to scale the change to a certain percentage of the current difficulty.

The difficulty factor should be a function of the hash target, not the other way around. We shouldn't try to reach consensus (i.e., on the hash target) based on floating-point numbers, because the rounding depends on the compiler flags and/or architecture used, and you don't want different nodes to have different views of what the hash target is.

Also, you can't trust block.genTimeInSeconds: the timestamp inside a block can be forged by the node that generated it, and nodes can't use their own clocks to measure how long a block took to generate, because then every node would have a different view of what the hash target should be.

IMO, we should do the calculation based on the timestamps of the latest set of 2016 blocks, similarly to how it is done today, taking into account that not all block timestamps may be accurate.

So the bottom line is that I disagree with the actual implementation you are proposing, but I agree with the general idea.
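One concrete guard against forged timestamps, which Bitcoin in fact uses, is to require each new block's timestamp to be later than the median of the previous 11 block timestamps, so a single miner can't drag the apparent elapsed time backwards. A sketch (the function name is invented):

```python
def timestamp_acceptable(new_timestamp, prev_timestamps):
    """Accept a block only if its timestamp is later than the median of
    the last 11 block timestamps (Bitcoin's median-time-past rule)."""
    recent = sorted(prev_timestamps[-11:])
    return new_timestamp > recent[len(recent) // 2]
```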
John Tobey (Hero Member, Activity: 481)
October 20, 2011, 04:33:30 AM  #8

A footnote to this old topic: I've proposed an improved(?) version here in the context of Ixcoin, but applicable to any new or experimental chain.  Not for BTC unless the hashrate falls to 2010 levels.  ;)

Can a change to the best-chain criteria protect against 51% to 90+% attacks without a hard fork?