Bitcoin Forum
Author Topic: Why do blocks get harder to verify the further into the scan I get?  (Read 610 times)
Automatic (OP)
Full Member
Activity: 238 | Merit: 105
February 10, 2014, 12:21:22 AM
 #1

When I rerun -reindex, I notice it does the first 100k blocks in about five minutes, then the next 100k in about an hour, and by the time it gets to about 250k it's doing a few tens of blocks per minute. Why do blocks get harder to verify the further through it gets? Isn't it simply hashing them, verifying the hash, and going to the next one? I understand why generating the blocks would get harder, but not verifying them. Isn't that the whole point of a non-deterministic polynomial problem, that verifying is supposed to be easy?

Can someone explain to me what verifying a block actually entails, and what the difference is between the 'checklevel' settings in bitcoin-qt/d?

DeathAndTaxes (Gerald Davis)
Donator, Legendary
Activity: 1218 | Merit: 1079
February 10, 2014, 12:22:34 AM
Last edit: February 10, 2014, 12:38:09 AM by DeathAndTaxes
 #2

It is verifying all the txs in the block.  Number of tx per block has increased over time (which is a good thing BTW).

There is more to verifying that a block is valid than just hashing it.

First a block can't be valid if any tx is invalid.

For a tx to be valid the client needs to check that:
a) it has valid form & structure.
b) that the inputs are all valid and have not been spent in a prior tx (i.e. the inputs are in the UTXO set at the time of the tx)
c) the signature is valid
d) that the pubkey when hashed produces the pubkey hash in the prior output
e) (for coinbase tx only) the amount of the coinbase output is <= subsidy + tx fees.

If any of these checks fail, the tx is invalid, and as a result the block is invalid.
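
As a rough illustration only, here is what that per-tx loop looks like for the simplest pay-to-pubkey-hash case. The structs and helpers (IsUnspent, VerifySignature, Hash160, PrevOutputPubKeyHash) are hypothetical stand-ins for this sketch, not Bitcoin Core's actual classes:

Code:
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical minimal types -- not Bitcoin Core's real classes.
struct TxInput  { std::string prevTxId; uint32_t prevIndex = 0;
                  std::vector<uint8_t> signature, pubKey; };
struct TxOutput { int64_t value = 0; std::vector<uint8_t> pubKeyHash; };
struct Transaction { std::vector<TxInput> inputs; std::vector<TxOutput> outputs;
                     bool isCoinbase = false; };

// Hypothetical helpers, assumed to exist elsewhere.
bool HasValidStructure(const Transaction& tx);                       // check (a)
bool IsUnspent(const TxInput& in);                                   // check (b): input still in the UTXO set
std::vector<uint8_t> PrevOutputPubKeyHash(const TxInput& in);        // hash in the output being spent
bool VerifySignature(const TxInput& in, const Transaction& tx);      // check (c): ECDSA verify
std::vector<uint8_t> Hash160(const std::vector<uint8_t>& pubKey);    // RIPEMD160(SHA256(pubKey))

bool CheckTransaction(const Transaction& tx, int64_t subsidy, int64_t feesInBlock)
{
    if (!HasValidStructure(tx)) return false;                        // (a)

    if (tx.isCoinbase) {                                             // (e)
        int64_t totalOut = 0;
        for (const TxOutput& o : tx.outputs) totalOut += o.value;
        return totalOut <= subsidy + feesInBlock;
    }

    for (const TxInput& in : tx.inputs) {
        if (!IsUnspent(in)) return false;                            // (b)
        if (!VerifySignature(in, tx)) return false;                  // (c)
        if (Hash160(in.pubKey) != PrevOutputPubKeyHash(in)) return false;  // (d)
    }
    return true;  // a single failing tx makes the whole block invalid
}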

Second a block can't be valid if any part of the header is invalid.
Once all the tx are verified as valid the block header is verified.

The merkle tree is constructed and the merkle tree root hash computed.  If the merkle root hash in the block doesn't match the computed one, then the block is invalid.
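
As a sketch of how that root is computed (the DoubleSHA256 helper is assumed here, and the duplicate-last-hash rule for odd levels is included; this is illustrative, not the Core implementation):

Code:
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

using Hash = std::vector<uint8_t>;           // 32-byte double-SHA256 digest

// Assumed helper (not shown): SHA256(SHA256(data)).
Hash DoubleSHA256(const Hash& data);

// Build the merkle tree bottom-up from the block's txids and return the root.
// Assumes the block has at least one tx (the coinbase).
Hash ComputeMerkleRoot(std::vector<Hash> level)
{
    while (level.size() > 1) {
        if (level.size() % 2 == 1)
            level.push_back(level.back());   // odd number of nodes: duplicate the last one
        std::vector<Hash> next;
        for (std::size_t i = 0; i < level.size(); i += 2) {
            Hash pair = level[i];
            pair.insert(pair.end(), level[i + 1].begin(), level[i + 1].end());
            next.push_back(DoubleSHA256(pair));   // parent = H(left || right)
        }
        level = std::move(next);
    }
    return level.front();   // compare this against hashMerkleRoot in the block header
}
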
The client also needs to verify the other information in the header is valid (prior block hash, version, etc.). Other than the nonce and timestamp these are deterministic, and if they don't match what the client expects then the block is invalid.
The client needs to verify the timestamp falls within the range allowed by the protocol.  Bitcoin allows a "loose" timestamp so the client needs to construct the valid range and ensure the block's timestamp falls within it.  If the timestamp is outside the expected range then the block is invalid.
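
Concretely, that "loose" window is bounded below by the median timestamp of the previous 11 blocks and above by the node's network-adjusted clock plus two hours. A minimal sketch, with the two helpers assumed rather than taken from Core:

Code:
#include <cstdint>

// Assumed helpers (not shown): the median of the previous 11 block
// timestamps, and the node's network-adjusted clock.
int64_t MedianTimePastOfLast11Blocks();
int64_t GetNetworkAdjustedTime();

// A block's timestamp must be after the median of the last 11 blocks and
// no more than two hours ahead of the node's adjusted clock.
bool CheckBlockTimestamp(int64_t nBlockTime)
{
    const int64_t TWO_HOURS = 2 * 60 * 60;
    if (nBlockTime <= MedianTimePastOfLast11Blocks()) return false;        // too old
    if (nBlockTime > GetNetworkAdjustedTime() + TWO_HOURS) return false;   // too far in the future
    return true;
}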

Once the header is verified the client will hash the header and verify that the resulting hash does not exceed the target given by the block difficulty.
If the block hash is above the difficulty target, the block is invalid.
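
A sketch of that last step: expand the compact nBits field into a full 256-bit target and compare the header hash against it byte for byte. The hashing helper is assumed (and is presumed to hand back the digest in big-endian numeric order, since Bitcoin treats the raw digest as a little-endian number); Core does this with its own uint256/arith_uint256 types:

Code:
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstring>

using Hash256 = std::array<uint8_t, 32>;    // 256-bit value, big-endian byte order

// Assumed helper (not shown): double SHA-256 of the 80-byte serialized
// header, with the digest bytes reversed into big-endian numeric order.
Hash256 HashHeaderBigEndian(const uint8_t* header, std::size_t len);

// Expand the compact "nBits" field into the full 256-bit target.
// Simplified: ignores the sign bit and the overflow cases Core rejects.
Hash256 ExpandCompactTarget(uint32_t nBits)
{
    Hash256 target{};                                   // zero-initialised
    const int exponent = nBits >> 24;                   // number of significant bytes
    const uint32_t mantissa = nBits & 0x007fffff;
    for (int i = 0; i < 3; ++i) {
        const int idx = 32 - exponent + i;              // byte position of mantissa byte i
        if (idx >= 0 && idx < 32)
            target[idx] = (mantissa >> (8 * (2 - i))) & 0xff;
    }
    return target;
}

// Proof of work is valid when the header hash does not exceed the target.
bool CheckProofOfWork(const uint8_t* header80, uint32_t nBits)
{
    const Hash256 hash = HashHeaderBigEndian(header80, 80);
    const Hash256 target = ExpandCompactTarget(nBits);
    return std::memcmp(hash.data(), target.data(), 32) <= 0;   // big-endian compare
}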

Only if all those checks are successful is the block considered valid and added to the local blockchain. The client then moves on to the next block. The most time-consuming step is the validation of the transactions (namely the verification of the ECDSA signature on each input), which is why blocks packed with txs take so much longer than the early, nearly empty ones.
Automatic (OP)
Full Member
Activity: 238 | Merit: 105
February 10, 2014, 12:24:17 AM
 #3

It is verifying all the txs in the block.  Number of tx per block has increased over time (which is a good thing BTW).

Is that really the only reason? I see how more transactions per block would slow it down, but not to the speed it's currently going. Isn't it a simple hash of the block, a compare, and 'done'?

EDIT:- Oh, it's verifying all the transactions in the block, as in verifying the transactions themselves. Yeah, fair enough, I do see how that would slow it down.


Can I then ask what the difference is between the check levels (0, 1, 2, 3 and 4)?

Code:
-checklevel=<n>        How thorough the block verification is (0-4, default: 3)

https://en.bitcoin.it/wiki/Running_Bitcoin#Bitcoin.conf_Configuration_File

EDIT2:- I can answer my own question:-
https://github.com/bitcoin/bitcoin/blob/95e66247ebaac88dadd081f850ebf86c71831e61/src/main.cpp#L2767-L2807

Code:
// check level 1: verify block validity

(Can I ask what 'undo validity' is?)
Code:
// check level 2: verify undo validity

(I also have no idea what this is)
Code:
// check level 3: check for inconsistencies during memory-only disconnect of tip blocks

Code:
// check level 4: try reconnecting blocks

EDIT3:-
It is verifying all the txs in the block.  Number of tx per block has increased over time (which is a good thing BTW).

There is more to verify a block is valid than just hashing it.

First a block can't be valid if any tx is invalid.

For a tx to be valid the client needs to check that:
a) it has valid form & structure.
b) that the inputs are all valid and have not been spent in a prior tx (i.e. the inputs are in the UTXO set at the time of the tx)
c) the signature is valid
d) that the pubkey when hashed produces the pubkey hash in the prior output

The coinbase tx needs to be verified to have the correct amount (coinbase <= subsidy + tx fees).

Once all the tx are verified as valid the block header is verified.

The merkle tree (which represents an entire block of tx with a single hash) and merkle tree root need to be constructed.
The client needs to verify that the merkle tree root produced matches the one in the header of the block.
The client also needs to verify the other information in the header is valid (prior block hash, version, etc).
The client needs to verify the timestamp falls within the range allowed by the protocol.
Once the header is verified the client will hash the header and verify that the resulting hash does not exceed the target based on block difficulty.

Only THEN is the block considered valid, and the client moves on to the next block.

The most time consuming step is the validating of the transactions (namely the validation of the ECDSA signature on each input).


Thanks for the more fleshed-out reply, fair enough.

DeathAndTaxes (Gerald Davis)
Donator, Legendary
Activity: 1218 | Merit: 1079
February 10, 2014, 12:40:27 AM
 #4

IIRC the checklevel doesn't relate to enforcement of protocol rules; it is a set of checks done on the local copy of the database to detect and possibly correct corruption and errors in the underlying database. You should not normally need to change the checklevel. I do recall that at one time a bug in the client caused a failure, and running at a lower check level (lower = more extensive checking) was used as a workaround.
Automatic (OP)
Full Member
Activity: 238 | Merit: 105
February 10, 2014, 12:45:37 AM
 #5

IIRC the checklevel doesn't relate to enforcement of protocol rules; it is a set of checks done on the local copy of the database to detect and possibly correct corruption and errors in the underlying database. You should not normally need to change the checklevel. I do recall that at one time a bug in the client caused a failure, and running at a lower check level (lower = more extensive checking) was used as a workaround.

Are you sure lower = more checking? It'd seem the opposite from the source:-
Code:
if (nCheckLevel >= 1) { ... }   // and likewise >= 2, >= 3, >= 4 for the later checks

which means that if I was running 4, all of them would trigger, and if I was running 0, none would trigger.
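
As a simplified, illustrative paraphrase of the VerifyDB() structure linked above (not the actual Core code), the levels are cumulative, with each stage gated on nCheckLevel:

Code:
#include <cstdio>

// Illustrative only: mirrors how the checks are gated on nCheckLevel,
// so higher levels run everything the lower levels do plus more.
void VerifyChain(int nCheckLevel)
{
    std::puts("level 0: read each block back from disk and deserialize it");

    if (nCheckLevel >= 1) std::puts("level 1: re-run the basic block validity checks");
    if (nCheckLevel >= 2) std::puts("level 2: verify the block's undo data");
    if (nCheckLevel >= 3) std::puts("level 3: disconnect tip blocks in memory and check for inconsistencies");
    if (nCheckLevel >= 4) std::puts("level 4: try reconnecting the disconnected blocks");
}

int main()
{
    VerifyChain(4);   // 4 triggers every check; 0 would only do the disk read
}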

DeathAndTaxes (Gerald Davis)
Donator, Legendary
Activity: 1218 | Merit: 1079
February 10, 2014, 12:50:52 AM
 #6

Yes, sorry, you're right: higher = more extensive checking.

The larger point is that these are low-level database validations. In a perfect world no checking would ever be needed: the block was valid when written to the disk, nothing should change that, and it should forever be valid when read back from the disk. However, in the real world HDDs aren't perfect, and abnormal program termination can result in corruption at the database level. In normal usage you should never need to use anything other than the default.