kano
Legendary
Offline
Activity: 2002
Linux since 1997 RedHat 4


October 02, 2011, 01:52:57 AM 

The last and next difficulty adjustments (and the problems anyone watching the other crypto copies has seen, along with the patches they have attempted) suggest that the 2-week difficulty adjustment should have a second test to adjust it early if needed.
Basically I'd propose that, starting 144 blocks (~24 hrs) after a difficulty adjustment, and tested then and every 12 blocks (~2 hrs) after that, if the adjustment calculated from all blocks since the last adjustment is higher or lower than the current difficulty by 50%, then an early difficulty adjustment should kick in. Of course those approximate hour values would be way off if the problem actually occurred; however, requiring 144 blocks first helps ensure the test doesn't fire completely unnecessarily due to likely random variance (as opposed to unlikely random variance).
This may never be needed; however, if it is needed and isn't implemented, Bitcoin's $ value will obviously drop badly. A simple example: a short-time-frame drop of 50% in network capacity would double the number of days remaining of the 2 weeks, extending the slow-down of transaction confirmation in the network, with all the roll-on effects that would lead to, including the possible self-perpetuating downward spiral it could cause.
It would (as I said) be an up-or-down adjustment test, not just a downward one, though the problems caused by a late upward adjustment would not be as severe as a late downward adjustment, so the up test could be removed.
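As a rough illustration, the proposed trigger could be sketched like this (hypothetical Python, not client code; the constant names and the simple whole-window average are my assumptions, not an actual patch):

```python
# Hypothetical sketch of the proposed early-retarget check (names and
# structure are illustrative assumptions, not actual Bitcoin code).
# Starting 144 blocks after the last retarget, and every 12 blocks after
# that, recompute the adjustment from all blocks since the last retarget;
# fire early if it differs from the current difficulty by 50% either way.
TARGET_SPACING = 600      # seconds per block
CHECK_START = 144         # ~24 hrs of blocks before the first check
CHECK_INTERVAL = 12       # ~2 hrs between subsequent checks
TRIGGER_RATIO = 1.5       # 50% deviation, up or down

def early_retarget(blocks_since_adjust, elapsed_seconds):
    """Return the implied difficulty multiplier if an early adjustment
    should fire at this block, else None."""
    if blocks_since_adjust < CHECK_START:
        return None
    if (blocks_since_adjust - CHECK_START) % CHECK_INTERVAL != 0:
        return None
    expected = blocks_since_adjust * TARGET_SPACING
    ratio = expected / elapsed_seconds   # >1: too fast, <1: too slow
    if ratio >= TRIGGER_RATIO or ratio <= 1 / TRIGGER_RATIO:
        return ratio
    return None
```

In normal conditions the ratio stays near 1 and the function always returns None, so the regular 2016-block retarget is untouched.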
Comments? Suggestions?

Pool: https://kano.is Here on Bitcointalk: Forum BTC: 1KanoPb8cKYqNrswjaA8cRDk4FAS9eDMLU FreeNode IRC: irc.freenode.net channel #kano.is Help keep Bitcoin secure by mining on pools with full block verification on all blocks










kano
Legendary
Offline
Activity: 2002
Linux since 1997 RedHat 4


October 02, 2011, 04:03:13 AM 

Thus, this extra adjustment wouldn't take effect. However, if the problem ever did occur, it would take effect. Are you implying that could never happen? At the moment it's at 7% due to perceived value and a game, BF3 (it was 10% earlier today). 7% seems a lot ...




Stephen Gornick
Legendary
Offline
Activity: 2072


October 02, 2011, 04:15:36 AM 

7% seems a lot ...
So making no changes, each block takes 10 minutes and 36 seconds, on average, for the 15 days, instead of 10 minutes. Problem?
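The arithmetic behind this reply can be checked in a couple of lines (a sketch; the 6% slowdown figure is an assumption chosen to reproduce the 10 min 36 s, close to the quoted ~7% estimate):

```python
# If blocks arrive ~6% slower than the 10-minute target, the average
# spacing until the normal retarget stretches only mildly.
target_spacing = 600                # seconds per block at nominal hashrate
slowdown = 1.06                     # assumed ~6% behind schedule
avg = target_spacing * slowdown     # 636 seconds
minutes, seconds = divmod(round(avg), 60)
print(minutes, seconds)             # 10 36
```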




kano
Legendary
Offline
Activity: 2002
Linux since 1997 RedHat 4


October 02, 2011, 04:30:15 AM 

7% seems a lot ... So making no changes, each block takes 10 minutes and 36 seconds, on average, for the 15 days, instead of 10 minutes. Problem?
No problem ... and that is not what I was suggesting either ... Is that why you ignored my question? You didn't read the 1st post?




2112
Legendary
Offline
Activity: 1778


October 02, 2011, 04:49:28 AM 

Basically I'd propose that from 144 blocks (~24hrs) after a difficulty adjustment, and tested then and each 12 blocks (~2hrs) after that, if the actual calculated adjustment based on all blocks since the last adjustment is higher or lower by 50% than the current difficulty, then an early difficulty adjustment should kick in. [...] Comments? Suggestions?
I have a semi-constructive suggestion: anyone proposing a change in the feedback controller for difficulty should be required to show the stability region of his proposal. Pretty much everyone who tries to "improve" the PI controller implemented by Satoshi comes up with some hackneyed version of a PID controller and is then surprised that it can be made to oscillate and is not even asymptotically stable in the Lyapunov sense if any nonlinearity is included.
http://en.wikipedia.org/wiki/Lyapunov_stability
http://en.wikipedia.org/wiki/PID_controller
I may have some software engineering disagreements with some of Satoshi's choices, but the choice of a PI regulator to adjust the difficulty is an example of excellent engineering: PIs may be slow, but they are absolutely stable for every causal process and for every error signal.




genjix


October 02, 2011, 05:31:49 AM 

I have a semi-constructive suggestion: [...] the choice of a PI regulator to adjust the difficulty is an example of excellent engineering: PIs may be slow, but they are absolutely stable for every causal process and for every error signal.
PID controllers are completely standard, and the reason to use them (from my experience) is that they can be very easily fine-tuned to optimal cybernetic states. Can you explain your reasoning, though, for arguing that the difficulty adjustment is a PI controller? P = nActualTimespan / nTargetTimespan? Then where do you get the integral component from in the difficulty recalculation? To me it looks like a simple proportional scaling algorithm.




2112
Legendary
Offline
Activity: 1778


October 02, 2011, 06:19:27 AM 

Then where do you get the integral component from the difficulty recalculation? To me it looks like a simple proportional scaling algorithm.
If I remember correctly it integrates the error over 2016 time intervals. This is some approximation of PI; a more accurate approximation would be if the calculation of the expected block time was carried since block 0 (time = infinity). Due to the esoteric block timekeeping, the typical discrete-time transformation cannot be applied. If I remember correctly the block chain can be extended by a block up to two hours in the past. Because of that, the process under control is marginally acausal. This esoteric timekeeping precludes the use of any standard mathematical tool from the theory of control systems. So if you wanted to implement a discrete-time approximation of a P controller, you would measure the error over a single inter-block interval. In the case of non-monotonic block time you would have to set difficulty to some negative number. PID, PI and P are terms from the theory of Linear Time-Invariant systems, and pretty much nobody deals with acausal systems. I agree that PID controllers are not that difficult to fine-tune. But any tool (theoretic or practical) to do such tuning assumes causality. Moreover, PID controller designs with discrete but variable time steps are quite complex mathematically. If the block time was changed from the current esoteric one to something which is monotonically increasing (like NTP time), then a change in the difficulty feedback loop could be considered and implemented with safety.
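The per-block P controller mentioned here, and why non-monotonic timestamps break it, can be sketched as follows (hypothetical code, my construction rather than anything proposed in the thread):

```python
# A pure per-block P controller would scale difficulty by the ratio of
# the target spacing to the single observed inter-block interval. A block
# timestamped before its parent makes that interval negative, yielding
# a nonsensical negative difficulty -- the acausality problem above.
def per_block_p(difficulty, prev_time, this_time, target=600):
    interval = this_time - prev_time      # may be <= 0 on this chain
    if interval == 0:
        raise ZeroDivisionError("identical timestamps")
    return difficulty * target / interval
```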




maaku


October 02, 2011, 06:41:40 AM 

I did some simulation of block-time variance (assuming honest nodes, and after calibration for network growth) for various difficulty adjustment intervals, posted on the freicoin forums. The nifty chart is copied below. The difference between one-week and two-week intervals was negligible, and one week is what I would recommend if a shorter interval was desired. A 24-hour interval (144 blocks) would have a variance/clock skew of 8.4%, meaning that one would *expect* the parameters governing difficulty adjustment to be in error by as much as 8.4% (vs bitcoin's current 2.2%). That's a significant difference. A one-week retarget would have 3.8% variance. Twice-weekly would have 4.4% variance. I certainly wouldn't let it go any smaller than that.
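For honest timestamps and constant hashrate, inter-block times are roughly exponential, so the relative spread of an n-block retarget window is about 1/sqrt(n), which lines up with the figures above (2016 blocks → ~2.2%, 144 blocks → ~8.3%). A minimal Monte Carlo check (my reconstruction of the idea, not the actual simulation posted on the freicoin forums):

```python
import math
import random

# Relative spread (std/mean) of the duration of an n-block retarget
# window, estimated by simulating exponential inter-block times at a
# constant 600-second mean spacing.
def window_spread(n_blocks, trials=3000, seed=42):
    rng = random.Random(seed)
    durations = [sum(rng.expovariate(1 / 600) for _ in range(n_blocks))
                 for _ in range(trials)]
    mean = sum(durations) / trials
    var = sum((d - mean) ** 2 for d in durations) / trials
    return math.sqrt(var) / mean

# Expect roughly 1/sqrt(144) = 8.3% for a one-day (144-block) window.
```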

I'm an independent developer working on bitcoin-core, making my living off community donations. If you like my work, please consider donating yourself: 13snZ4ZyCzaL7358SmgvHGC9AxskqumNxP



Meni Rosenfeld


October 02, 2011, 07:46:40 AM 

Bitcoin does not have the problem with miners coming and going at anywhere near the level seen with the alternates.
Scenario 1: Something really bad happens and the Bitcoin exchange rate quickly drops to a tenth of its previous value. This happens when mining was already close to break-even, so for most miners it is no longer profitable to mine and they quit. Hashrate drops to a tenth of its previous value, blocks are found every 100 minutes, and retargeting is 5 months away.
Scenario 2: Someone breaks the hashing function or builds a huge mining cluster, and uses it to attack Bitcoin. He drives the difficulty way up and then quits.
Scenario 3: It is decided to change the hashing function. Miners want to protect their investment so they play hardball bargaining and (threaten to) quit.
These can all be solved by hardcoding a new value for the difficulty, but wouldn't it be better to have an adjustment algorithm robust against this? Especially considering most Bitcoin activity will freeze until a solution is decided, implemented and distributed.
Anyone proposing a change in the feedback controller for difficulty should be required to show the stability region of his proposal. [...] If I remember correctly it integrates the error over 2016 time intervals. This is some approximation of PI; a more accurate approximation would be if the calculation of the expected block time was carried since block 0 (time = infinity).
I know enough to understand the ideas of what you're saying, but not the specifics. And I think you're wrong about the last part. Integrating the error over 2016 time intervals is not an approximation of I, the integral from infinity. It is an approximation of P, used rather than a direct measurement (equivalent to 1 time interval) because the quantity measured is stochastic. P is what exists currently. I would fix problems with long-term trends; currently, halving takes less than 4 years because of the rising difficulty trend. D would rapidly adapt to abrupt changes, basically what the OP is suggesting. It's possible that the existing linear control theory doesn't apply directly to the block-finding system and that P, I and D are only used metaphorically. People who understand this stuff should gather and design a proper difficulty adjustment algorithm with all these components.
I did some simulation of block-time variance (assuming honest nodes, and after calibration for network growth) for various difficulty adjustment intervals [...] I certainly wouldn't let it go any smaller than that.
We're not talking about using the same primitive algorithm with a shorter timespan. We're talking about either a new intelligent algorithm, or a specific exception to deal with emergencies.
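The Scenario 1 timeline follows from simple arithmetic (a sketch of the numbers quoted above):

```python
# Hashrate falls to a tenth: blocks take ~100 minutes each, so the
# remaining (up to) 2016 blocks of the window take ~140 days (~5 months).
blocks_remaining = 2016          # worst case: drop right after a retarget
minutes_per_block = 100          # 10x the normal 10-minute spacing
days = blocks_remaining * minutes_per_block / (60 * 24)
print(days)                      # 140.0
```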




2112
Legendary
Offline
Activity: 1778


October 02, 2011, 09:19:55 AM 

I did some simulation of block-time variance (assuming honest nodes,
It is my understanding that at least the Eligius pool isn't an "honest node" and intentionally produces acausal blocks (or at least blocks as close to acausal as they deem practical).




2112
Legendary
Offline
Activity: 1778


October 02, 2011, 10:13:09 AM 

I know enough to understand the ideas of what you're saying, but not the specifics. And I think you're wrong about the last part. Integrating the error over 2016 time intervals is not an approximation for I, the integral from infinity. It is an approximation for P, used rather than direct measurements (equivalent to 1 time interval) because the quantity measured is stochastic.
Let me say what I said using different terminology. The current regulator approximates a PI using a 2016-tap low-pass FIR filter with a sample-and-hold (0th-order extrapolator) doing 2016-times subsampling of the output from the FIR.
D would rapidly adapt to abrupt changes, basically what the OP is suggesting.
Those are famous last words. There were days when the average student of engineering would be splashed with ink from the paper-tape analog data logger after cranking up the D term during his lab work. Nowadays I don't know how people learn the basics of system stability. ArtForz had shown on one of the alternate chains both persistent oscillation and lack of asymptotic stability when people implement the diff operator or some nonlinear and/or time-varying approximations of a differentiator. I know of only one thing that could be safely done without fixing the problem of acausality or non-monotonicity of the time stamps.
$ factor 2016
2016: 2 2 2 2 2 3 3 7
From this I can see that it is possible to synthesize an absolutely stable 5-octave multi-rate filter bank that would work in front of a 63-times subsampler. There are two problems for which I don't know the answer: 1) difficulty uses some custom floating-point format with very low precision that may cause limit-cycle oscillations in a theoretically stable filter. If the format uses correct rounding then this wouldn't matter. 2) making changes to the difficulty 32 times more often would increase the probability of the network splitting into disjoint blockchains, but I don't know how to estimate that.




kano
Legendary
Offline
Activity: 2002
Linux since 1997 RedHat 4


October 02, 2011, 12:21:28 PM 

Hmm, firstly I guess most noticed, but I did not say to do away with the 2016 calculation.
What I am suggesting is specifically a way to handle large swings in the mining community and only that. That's the reason for firstly waiting a full 144 blocks (normally ~1 day) before allowing it even to be checked, and also for ignoring any change less than 50%.
I'm not talking about a general algorithm that would affect the normal working of the difficulty adjustment, but one that would only actually have an effect if something drastic happened (i.e. the network hash rate changed by at least 50%).
Namecoin has even had this specific problem happen already (and some of the scamcoins, I mean, alternate coins, have also seen this happen).
It's not something that would normally affect the difficulty adjustment calculation, i.e. it would give 'false' in any situation but some drastic network change.
That's the reasoning behind the 50% check (and of course the 144-block delay before even checking also helps somewhat with network statistical variance).
At the moment the 2-week estimate is 9% (well, I think that's right from the IRC bot in the channel I visit) and that's still less than 1/5 of the value necessary for any early change to occur.
The point is to have something that cuts in if something does go wrong but normally has no effect.
Waiting for it to happen first and then making a quick hack in bitcoin is IMO a bad idea; better to come up with something beforehand and, yes, discuss it as well. The point of the original idea is to come up with a reasonable intervention, and of course code that doesn't intervene except when needed.




maaku


October 02, 2011, 05:48:10 PM 

If the block time was changed from the current esoteric one to something which is monotonically increasing (like NTP time), then a change in the difficulty feedback loop could be considered and implemented with safety.
What about ordering blocks with respect to time (just for the purposes of this calculation)?
From this I can see that it is possible to synthesize an absolutely stable 5-octave multi-rate filter bank that would work in front of a 63-times subsampler. [...] 2) making changes to the difficulty 32 times more often would increase the probability of the network splitting into disjoint blockchains, but I don't know how to estimate that.
If you can provide code, pseudocode, or even just MATLAB code for this, I can bring it into an altchain for testing. I didn't pay attention enough in controls class to follow the discussion here :\




2112
Legendary
Offline
Activity: 1778


October 02, 2011, 07:26:09 PM 

What about ordering blocks with respect to time (just for the purposes of this calculation)?
I think it is still risky. The risk would not be in out-of-order blocks but in blocks with the same timestamps and a subsequent zero-divide, or blocks with almost the same timestamps and a total numerical loss of accuracy.
If you can provide code, pseudocode, or even just MATLAB code for this, I can bring it into an altchain for testing. I didn't pay attention enough in controls class to follow the discussion here :\
Yeah, I was thinking about that too. My copy of MATLAB is on an SGI O2 that is in storage. I would not be able to deal with this yet. If I find some backups on a non-IRIX disk I'll post.




kjj
Legendary
Offline
Activity: 1302


October 02, 2011, 10:25:17 PM 

Allowing asymmetric adjustments can lead to security problems. ArtForz described a few when this came up on alternate chains.




maaku


October 02, 2011, 10:29:33 PM 

Unless I'm seriously misunderstanding, no one is discussing asymmetric adjustments here.




d.james


October 02, 2011, 11:25:51 PM 

How about real-time difficulty adjustments? Is that even possible?

You can not roll a BitCoin, but you can rollback some. Roll me back: 1NxMkvbYn8o7kKCWPsnWR4FDvH7L9TJqGG



kano
Legendary
Offline
Activity: 2002
Linux since 1997 RedHat 4


October 02, 2011, 11:28:52 PM 

How about real-time difficulty adjustments? Is that even possible?
No, because all reasonable calculations of the network hash rate are based on the block-finding rate, which is a random statistical value ...
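To illustrate why (a sketch under the assumption of exponential inter-block times; not code from the thread): the hashrate can only be inferred from block arrivals, and over short windows that inference is dominated by Poisson noise.

```python
import random

# Ratio of the hashrate implied by timing n consecutive blocks to the
# true (constant) hashrate. Over a handful of blocks the estimate
# routinely misses by tens of percent with no real change at all.
def implied_hashrate_ratio(n_blocks, rng):
    elapsed = sum(rng.expovariate(1 / 600) for _ in range(n_blocks))
    return (n_blocks * 600) / elapsed

rng = random.Random(7)
samples = [implied_hashrate_ratio(6, rng) for _ in range(1000)]
off_by_20pct = sum(1 for s in samples if abs(s - 1) > 0.2) / len(samples)
print(off_by_20pct)   # typically well over a third of the 1-hour estimates
```

Over a full 2016-block window the same estimator lands within a few percent of the truth, which is exactly the trade-off the retarget interval makes.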




kjj
Legendary
Offline
Activity: 1302


October 03, 2011, 12:23:50 AM 

Unless I'm seriously misunderstanding, no one is discussing asymmetric adjustments here.
You are. This is exactly what is under discussion.




