Author Topic: Regtest Consensus Forking Behavior Introduced in Bitcoin Core in May 2014  (Read 1582 times)
davec (OP)
Newbie
Offline
Activity: 39
Merit: 0
May 19, 2015, 03:57:52 PM
Last edit: May 20, 2015, 01:42:29 AM by davec
#1

First, I want to point out that I'm only reporting this here because it can't be used to fork mainnet or testnet.  However, it is a problem for the regression test network whenever a test generates more than 2016 blocks and thus exercises the retarget code.

While doing some testing using our simulation test network in btcd, Josh Rickmar and I noticed that Bitcoin Core improperly calculates the difficulty readjustment due to an overflow that occurs on the regtest network.  This behavior differs from all versions of Bitcoin Core prior to commit https://github.com/bitcoin/bitcoin/commit/df9eb5e14fa8072bc8a82b59e712c2ba36f13f4c.

The issue is that the calculations, which previously used the CBigNum class (in turn backed by arbitrary-precision OpenSSL bignums), are now limited to 256-bit integers.  For reference, here is the difficulty readjustment calculation in pow.cpp:

Code:
 const arith_uint256 bnPowLimit = UintToArith256(params.powLimit);
 arith_uint256 bnNew;
 arith_uint256 bnOld;
 bnNew.SetCompact(pindexLast->nBits);
 bnOld = bnNew;
 bnNew *= nActualTimespan;
 bnNew /= params.nPowTargetTimespan;

 if (bnNew > bnPowLimit)
     bnNew = bnPowLimit;

Notice that the readjustment is calculated by multiplying the old target (converted from its compact representation to a uint256) by the adjusted timespan (which is clamped to a maximum of the target timespan * 4 and a minimum of the target timespan / 4).  The result is then divided by the target timespan and converted back to compact form.  Unfortunately, this means that with the uint256s introduced by the aforementioned commit, the result of the `bnNew *= nActualTimespan;` line can overflow, whereas previously the arithmetic was arbitrary precision and could not.
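
To make the wraparound concrete, here is a minimal stand-alone illustration of the same failure mode at 64 bits instead of 256.  The values are for illustration only, and unsigned __int128 (a GCC/Clang extension) stands in for the wider intermediate type that the old arbitrary-precision code effectively provided:

Code:
#include <cstdint>
#include <cstdio>

int main() {
    // Analogy at 64 bits: unsigned multiplication silently wraps modulo 2^64,
    // just as arith_uint256 multiplication wraps modulo 2^256.
    uint64_t target   = 0x7fffff0000000000ULL; // large target near the type's limit
    uint64_t timespan = 0x49d40;               // the minimum clamped nActualTimespan
    uint64_t wrapped  = target * timespan;     // overflows: only the low 64 bits survive
    // Widening the intermediate (here to 128 bits) preserves the exact product.
    unsigned __int128 exact = (unsigned __int128)target * timespan;
    printf("wrapped: 0x%016llx\n", (unsigned long long)wrapped);
    printf("exact:   0x%llx%016llx\n",
           (unsigned long long)(exact >> 64), (unsigned long long)exact);
    return 0;
}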


Using real numbers for regtest, the forking condition becomes clear.

First, let's do the math using the old arbitrary-precision arithmetic.  For the sake of simplicity, and to illustrate the issue, we'll assume the first 2016 blocks are found fast enough that the minimum possible multiplication case gets hit.

Code:
Compact Bits for regtest genesis block: 0x207fffff
Converted to a uint256: 0x7fffff0000000000000000000000000000000000000000000000000000000000
nTargetTimespan: 14 * 24 * 60 * 60 = 1209600 = 0x127500
nActualTimespan = nTargetTimespan/4 = 302400 = 0x49d40 (minimum possible multiplier for sake of simplicity)

Code:
bnNew.SetCompact(pindexLast->nBits) --> 0x7fffff0000000000000000000000000000000000000000000000000000000000
bnNew *= nActualTimespan            --> 0x7fffff0000000000000000000000000000000000000000000000000000000000 * 0x49d40 = 0x24e9ffb62c00000000000000000000000000000000000000000000000000000000000
*** Notice how the previous line exceeds the max uint256, but is fine because of OpenSSL's arbitrary-precision arithmetic ***
bnNew /= Params().TargetTimespan()  --> 0x24e9ffb62c00000000000000000000000000000000000000000000000000000000000 / 0x127500 = 0x1fffffc000000000000000000000000000000000000000000000000000000000

Old Expected difficulty bits: 0x201fffff


Now, let's repeat the math using the current code in Bitcoin Core:

Code:
bnNew.SetCompact(pindexLast->nBits) --> 0x7fffff0000000000000000000000000000000000000000000000000000000000
bnNew *= nActualTimespan            --> 0x7fffff0000000000000000000000000000000000000000000000000000000000 * 0x49d40 = 0xfb62c00000000000000000000000000000000000000000000000000000000000
*** Notice how the previous line overflows the max uint256, but is now truncated modulo 2^256 and thus is not the same as above ***
bnNew /= Params().TargetTimespan()  --> 0xfb62c00000000000000000000000000000000000000000000000000000000000 / 0x127500 = 0xd9ebbc99a778556334111eefccdaab889667445223000ddebbc99a77855

New Expected difficulty bits: 0x1e0d9ebb


Notice how the calculated target is much lower than it should be compared to the previous behavior, and it therefore leads to the required proof of work being ~154,000 times more difficult than it should be.
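
For anyone who wants to double-check the arithmetic above, here is a small stand-alone program (using boost::multiprecision rather than Bitcoin Core's own types) that reproduces both the old arbitrary-precision result and the new mod-2^256 wrapped result:

Code:
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

using boost::multiprecision::cpp_int;

int main() {
    const cpp_int target("0x7fffff0000000000000000000000000000000000000000000000000000000000");
    const cpp_int nActualTimespan = 302400;    // nTargetTimespan / 4 (the minimum)
    const cpp_int nTargetTimespan = 1209600;   // 14 * 24 * 60 * 60
    const cpp_int modulus = cpp_int(1) << 256; // 2^256

    // Old behavior: arbitrary precision, the intermediate product cannot overflow.
    cpp_int oldTarget = (target * nActualTimespan) / nTargetTimespan;
    // New behavior: the product wraps modulo 2^256 before the division.
    cpp_int newTarget = ((target * nActualTimespan) % modulus) / nTargetTimespan;

    std::cout << std::hex << "old: 0x" << oldTarget << "\n"
              << "new: 0x" << newTarget << std::endl;
    return 0;
}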

A little more math shows that, since the maximum multiplier is nTargetTimespan*4 = 4838400 = 0x49d400, this overflow condition can occur so long as the target is >= 0x377aef2669de1558cd0447bbf336aae22599d11488c00377aef2669de15 (compact bits: 0x1e0377ae).
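
That threshold is simply 2^256 / 0x49d400.  A quick stand-alone check of the constant, again with boost::multiprecision:

Code:
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

using boost::multiprecision::cpp_int;

int main() {
    // Any target at or above roughly 2^256 / (nTargetTimespan * 4) overflows
    // when multiplied by the maximum clamped timespan of 4838400 (0x49d400).
    std::cout << std::hex << "0x" << ((cpp_int(1) << 256) / 4838400) << std::endl;
    return 0;
}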

A simple fix is using higher precision arithmetic for the intermediate results.  Josh Rickmar is working on a patch which does this.
"I'm sure that in 20 years there will either be very large transaction volume or no volume." -- Satoshi
voisine
Member
Offline
Activity: 115
Merit: 19
May 19, 2015, 04:59:03 PM
#2

So, the switch to 256-bit ints was presumably considered safe because 0x1e0377ae is higher than MAX_PROOF_OF_WORK (0x1d00ffff), but that's not a safe assumption for regtest.
gmaxwell
Moderator
Legendary
expert
Offline
Activity: 4172
Merit: 8419
May 19, 2015, 05:36:14 PM
Last edit: May 19, 2015, 05:58:54 PM by gmaxwell
#3

This was actually pointed out at the time the change was made; but it seemed silly to change regtest's minimum just to make it fit (or worse, to add a lot of additional complexity and another number type just to handle values which _cannot_ occur in Bitcoin).  Somewhat similar to how the testnet 20-minute rule exposes weird behavior where, if the penultimate block in the retargeting window is diff-1, the difficulty will jump from whatever it was back to ~1.  The tests intentionally break the system in order to make it easier to test, and sometimes that has collateral damage; needless generality exposes its own risks -- e.g. the OpenSSL bignum code was wrong in a platform-dependent way until fairly recently (and irritatingly, we spent months with that discovery embargoed) -- any use of it carries its own risks.
davec (OP)
Newbie
Offline
Activity: 39
Merit: 0
May 19, 2015, 08:07:55 PM
#4

... but it seemed silly to change regtest's minimum just to make it fit (or worse, to add a lot of additional complexity and another number type just to handle values which _cannot_ occur in Bitcoin).

Thanks for answering.  Unfortunately, the regtest network is currently essentially unusable in the intended way once you've generated enough blocks to reach the first retarget interval.

I suppose this is a bigger deal for us in btcd because we have tests that exercise these types of things and do a lot of simulation testing using simnet (which is effectively like the regtest network without some of the things specific to the block tester tool, plus some additional logic to prevent address propagation and discovery).  That is a big reason we discovered this issue: we were comparing behavior during some simulations and noticed that Bitcoin Core was erroneously rejecting blocks for having too low a difficulty when, in fact, the difficulty was correct.

I have to admit I'm disappointed that the answer is basically that Bitcoin Core doesn't feel like regression and simulation testing is important enough to warrant proper retarget behavior for it, but I appreciate the response nonetheless.
gmaxwell
Moderator
Legendary
expert
Offline
Activity: 4172
Merit: 8419
May 19, 2015, 10:45:34 PM
#5

I have to admit I'm disappointed that the answer is basically that Bitcoin Core doesn't feel like regression and simulation testing is important enough to warrant proper retarget behavior for it, but I appreciate the response nonetheless.
Testing is very important, but adding additional code to accommodate it makes the simulation even _less_ faithful, and more likely to miss real issues (or potentially even introduce real issues).  I do regression testing with an ASIC miner and the main network code (with checkpoints=0, of course).  Testing with the actual production-time behavior is the gold standard and cannot be replaced with a shortcutted version without compromise.  (FWIW, testnet, which, in spite of its own stupid shortcuts, is somewhat closer to Bitcoin, has test cases in the chain for adjustment extremes.)

The only reason you were able to make this comment at all is because regtest exists, and the only reason regtest mode exists is because it was specifically created for the block tester harness that runs on an externally hosted CI setup, i.e. its intended use.  For a long time the harness applied a patch to change the behavior to make it computationally cheaper to test -- at the expense of making the tests less accurate and faithful -- but maintaining the patch externally took work.

Use of it has expanded since then -- I think today a somewhat different approach would make more sense for the regtest shortcutting and would result in a smaller divergence from the normal network behavior (e.g. when in testing mode, mask out the highest bits of the block hashes before the target check).  So quite the opposite: testing is important enough that one should actually be testing the actual Bitcoin network code and not an altcoinified mockup that makes testing easier, or, to the extent a modified version for testability is used, great care should be taken to minimize the number of differences (and to not add risk to production code).  How you could extract "testing is not important" from my comments about a whole alternative network mode created specifically for testing is beyond me -- specifically given the amount of effort I put in previously in convincing you that agreement testing was essential.



jl2012
Legendary
Offline
Activity: 1792
Merit: 1097
May 20, 2015, 07:20:53 AM
#6

Sorry if this is a stupid question. Why would you need to test with a low difficulty like this, when you could have a difficulty of 46 with the now-worthless 330MH/s USB Block Erupter?

gmaxwell
Moderator
Legendary
expert
Offline
Activity: 4172
Merit: 8419
May 20, 2015, 08:19:17 AM
#7

Sorry if this is a stupid question. Why would you need to test with a low difficulty like this, when you could have a difficulty of 46 with the now-worthless 330MH/s USB Block Erupter?
To test on some 'cloud' server that doesn't have the worthless usb block erupter. :)  Also because you want to crank through thousands of simulated blocks in a few minutes.

I personally think it's of fairly marginal value (thus the mention of testing with mainnet), but not worthless.

Perhaps I should start a collection effort for old ASIC miners for bitcoin software developers? :)  There are actually USB miners with a lot more than 330MH/s which should be worthless-ish now.
jl2012
Legendary
Offline
Activity: 1792
Merit: 1097
May 20, 2015, 08:52:58 AM
#8

Sorry if this is a stupid question. Why would you need to test with a low difficulty like this, when you could have a difficulty of 46 with the now-worthless 330MH/s USB Block Erupter?
To test on some 'cloud' server that doesn't have the worthless usb block erupter. :)  Also because you want to crank through thousands of simulated blocks in a few minutes.


I think it's relevant only if the test involves a difficulty change? Otherwise, using a fixed difficulty (increasing the retarget interval from 2016 to an astronomical number) should be ok.

TierNolan
Legendary
Offline
Activity: 1232
Merit: 1083
May 20, 2015, 08:57:37 AM
Last edit: May 20, 2015, 11:58:32 AM by TierNolan
#9

Thanks for answering.  Unfortunately, the regtest network is currently essentially unusable in the intended way once you've generated enough blocks to reach the first retarget interval.

The problem is that "nActualTimespan" is too large, right?

The statement:

Code:
bnNew *= nActualTimespan;

means multiply the old target by nActualTimespan and it overflows.

The powLimit is 0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff, which means 2 hashes per block.
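
As a quick sanity check on that figure, the expected number of hashes per block is 2^256 / (target + 1).  A stand-alone sketch with boost::multiprecision:

Code:
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

using boost::multiprecision::cpp_int;

int main() {
    // Expected hashes per block for a given target is 2^256 / (target + 1).
    cpp_int powLimit("0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff");
    std::cout << ((cpp_int(1) << 256) / (powLimit + 1)) << std::endl; // prints 2
    return 0;
}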

The safe condition is that nActualTimespan * powLimit < 2^256.

If you can control the timestamps on the blocks, then you can force nActualTimespan to anything.

A compromise solution would be to allow nPowTargetTimespan and powLimit to be modified for testing.  This keeps the code the same for both, while not increasing the block difficulty for all the other tests.

The timestamp must increase every 5 blocks due to the median rule.  This means that 2016 blocks must take at least 2016 / 5 ≈ 403 seconds.

There could be a RetargetRegtest chain with powLimit set to 0x000fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff (so 4096 hashes per block) and nPowTargetTimespan set to 1024; then you would get one block every 1024 seconds.

This would be slower to build the chain, but at least it would work, assuming < 4096 seconds between retargets.

This could be enabled with a "-regtest2" flag, or maybe the 2 parameters could be independently set: "-regtest -powlimit=4096 -targettimespan=1024".

It also means that, for the main chain, when the timestamp wraps, the code will have to have a rule of at most 2^32 seconds per block.

[Edit]
Quote
I think today a somewhat different approach would make more sense for the regtest shortcutting and would result in a smaller divergence from the normal network behavior (e.g. when in testing mode, mask out the highest bits of the block hashes before the target check).

That's a better idea.  It emulates running lots of hashes by only running one.

[Edit 2]

It could even be a permanent thing, just change the CheckProofOfWork() method and use a default mask of all ones.
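
A rough sketch of what that masked check might look like (this is not actual Bitcoin Core code; the function name and the mask parameter are hypothetical, and it assumes Bitcoin Core's arith_uint256 type):

Code:
// Hypothetical variant of CheckProofOfWork() -- not actual Bitcoin Core code.
// Zero out the high bits of the block hash before the target comparison, so
// one real hash "counts as" many. A mask of all ones keeps the normal check.
bool CheckProofOfWorkMasked(arith_uint256 hash, unsigned int nBits,
                            const arith_uint256& hashMask)
{
    bool fNegative, fOverflow;
    arith_uint256 bnTarget;
    bnTarget.SetCompact(nBits, &fNegative, &fOverflow);
    if (fNegative || fOverflow || bnTarget == 0)
        return false;
    hash &= hashMask;        // testing mode: clear high bits to emulate work
    return hash <= bnTarget;
}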

davec (OP)
Newbie
Offline
Activity: 39
Merit: 0
May 20, 2015, 03:16:14 PM
Last edit: May 20, 2015, 04:03:56 PM by davec
#10

Testing is very important, but adding additional code to accommodate it makes the simulation even _less_ faithful, and more likely to miss real issues (or potentially even introduce real issues).  I do regression testing with an ASIC miner and the main network code (with checkpoints=0, of course).  Testing with the actual production-time behavior is the gold standard and cannot be replaced with a shortcutted version without compromise.  (FWIW, testnet, which, in spite of its own stupid shortcuts, is somewhat closer to Bitcoin, has test cases in the chain for adjustment extremes.)

You're making a supposition here that the code contains special casing for the regtest network.  I would agree with you if the retarget code had an extra branch condition that was only executed on the regtest network as then you would only be testing that branch, which is not the same branch as the one that mainnet/testnet would use.  However, that is not the case here since it is parameterized.  I personally think testnet's approach of having special cased branches regarding difficulty is more dangerous from a testing standpoint than the parameterized code in question.

The only reason you were able to make this comment at all is because regtest exists, and the only reason regtest mode exists is because it was specifically created for the block tester harness that runs on an externally hosted CI setup, i.e. its intended use.
...
Use of it has expanded since then -- I think today a somewhat different approach would make more sense for the regtest shortcutting and would result in a smaller divergence from the normal network behavior (e.g. when in testing mode, mask out the highest bits of the block hashes before the target check).  So quite the opposite: testing is important enough that one should actually be testing the actual Bitcoin network code and not an altcoinified mockup that makes testing easier, or, to the extent a modified version for testability is used, great care should be taken to minimize the number of differences (and to not add risk to production code).

I agree that regtest specifically was created for the block tester and then later expanded.  This is why we created a separate simnet in btcd for actual simulation testing: the regtest mode has behavior specific to the block tester harness and we didn't want to conflate them.  Unfortunately, when we've discussed simnet in #bitcoin-dev, more than one Bitcoin Core dev suggested just using regtest for that purpose.  So, based upon your comments, I think there is some confusion in the Bitcoin Core community in general about how regtest is supposed to be used.

We can certainly discuss other approaches for the future, but the entire current RPC test harness is based on it, so, for better or worse, this is the way things currently work.

How you could extract "testing is not important" from my comments about a whole alternative network mode created specifically for testing is beyond me-- specifically given the amount of effort I put in previously in convincing you that agreement testing was essential.

I'm not sure how a sharp guy such as yourself would expect any other conclusion.  You stated that the behavior was known at the time of the patch and it was merged anyway.  Therefore, a conscious decision was made to commit a change that knowingly broke retargeting on the regtest network.  The only logical conclusion from that information is that proper retargeting on the regtest network was not deemed important enough to avoid merging the change.  Obviously, there were reasons for that decision (some of which you elucidated), but whatever the circumstances leading to the decision, the conclusion is the same.
davec (OP)
Newbie
Offline
Activity: 39
Merit: 0
May 20, 2015, 03:50:37 PM
Last edit: May 20, 2015, 05:24:45 PM by davec
#11

The problem is that "nActualTimespan" is too large, right?

It's really the result of the multiplication, as opposed to just "nActualTimespan" being too large.  There is a check to ensure the minimum possible value for "nActualTimespan" is nTargetTimespan/4 = 302,400 = 0x49d40.  Thus, when you take the regtest powLimit of 0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff and multiply it by the minimum possible "nActualTimespan", the result is > (2^256) - 1.

If you can control the timestamps on the blocks, then you can force nActualTimespan to anything.

Not quite.  It is limited to a minimum and maximum value.  The minimum value is, as previously stated, 0x49d40.  The maximum value is nTargetTimespan*4 = 4,838,400 = 0x49d400.  This effectively limits how large a single difficulty readjustment step can be.

So, while you do have some measure of control, there is no way to make it lower than the minimum allowed value and the regtest powLimit multiplied by that minimum allowed value overflows.
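
For reference, the clamping in pow.cpp looks roughly like this (paraphrased):

Code:
// Paraphrased from pow.cpp: nActualTimespan is clamped to
// [nPowTargetTimespan/4, nPowTargetTimespan*4] before the multiplication.
int64_t nActualTimespan = pindexLast->GetBlockTime() - nFirstBlockTime;
if (nActualTimespan < params.nPowTargetTimespan/4)
    nActualTimespan = params.nPowTargetTimespan/4;
if (nActualTimespan > params.nPowTargetTimespan*4)
    nActualTimespan = params.nPowTargetTimespan*4;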

The simple fix, as provided by jrick in PR 6162, is to use 512-bit arithmetic so it can't overflow.
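
I won't reproduce the PR here, but a sketch of the general widening approach, using boost::multiprecision in place of a dedicated 512-bit type:

Code:
#include <boost/multiprecision/cpp_int.hpp>
#include <cstdint>

using boost::multiprecision::cpp_int;

// Sketch of the widening idea only (not the actual PR code): do the
// multiply/divide in a type wide enough that the intermediate product
// cannot wrap; the caller still clamps the result to powLimit afterward,
// exactly as pow.cpp already does.
cpp_int RetargetWide(const cpp_int& oldTarget,
                     int64_t nActualTimespan, int64_t nTargetTimespan)
{
    cpp_int bnNew = oldTarget; // a 256-bit value held in a wider type
    bnNew *= nActualTimespan;  // intermediate may exceed 256 bits
    bnNew /= nTargetTimespan;  // quotient is at most 4 * oldTarget
    return bnNew;
}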


Quote
I think today a somewhat different approach would make more sense for the regtest shortcutting and would result in a smaller divergence from the normal network behavior.  (e.g. when in testing mode, mask out the highest bits of the block hashes before the target check).  

That's a better idea.  It emulates running lots of hashes by only running one.

It could even be a permanent thing, just change the CheckProofOfWork() method and use a default mask of all ones.

I agree this is not a bad idea; however, it does mean that whatever is creating the block also has to be modified, since it is most likely mining blocks using either the built-in CPU miner or details from getblocktemplate (which includes the target bits).  It would therefore, by default, be aiming to create blocks at the higher difficulty.
TierNolan
Legendary
Offline
Activity: 1232
Merit: 1083
May 20, 2015, 04:28:47 PM
#12

I agree this is not a bad idea; however, it does mean that whatever is creating the block also has to be modified, since it is most likely mining blocks using either the built-in CPU miner or details from getblocktemplate (which includes the target bits).  It would therefore, by default, be aiming to create blocks at the higher difficulty.

The CPU miner in core mines in 2 steps.  It scans until it finds a block with the top 8 bits equal to zero (ScanHash) and then does a more complex check.

It doesn't use CheckProofOfWork directly though.  If it did, then the internal miner could be used, since the mask would be taken into account.  This should probably be changed.

If the target gives less than 256 hashes per block, then the first check will ignore some block headers that meet the full POW.  Block hashes with fewer than 8 leading zero bits would be rejected by the pre-filter.

I think external test programs should be able to handle the non-standard behaviour?

jtimon
Legendary
Offline
Activity: 1372
Merit: 1002
May 20, 2015, 08:28:53 PM
#13

Unfortunately, the regtest network is currently essentially unusable in the intended way once you've generated enough blocks to reach the first retarget interval.

If this is really a problem, we can just change nTargetTimespan or nTargetSpacing for regtest.

The CPU miner in core mines in 2 steps.  It scans until it finds a block with the top 8 bits equal to zero (ScanHash) and then does a more complex check.

It doesn't use CheckProofOfWork directly though.  If it did, then the internal miner could be used, since the mask would be taken into account.  This should probably be changed.

See https://github.com/bitcoin/bitcoin/pull/4793
