Author Topic: Headers-first client implementation  (Read 1604 times)
keeshux (OP) | Newbie | Activity: 26 | Merit: 6
December 04, 2014, 12:19:03 AM  #1

I wonder if anybody out there has already moved, or tried to move, to the recent headers-first sync process as described by sipa.

What's the status? Is the change ready for inclusion in wallet software? Is there a wiki to learn more from?

Thanks!
gmaxwell | Moderator | Legendary | Activity: 4172 | Merit: 8421
December 04, 2014, 12:27:00 AM  #2

There is no change in behaviour for SPV lite wallets, as they are _headers-only_ for the most part.

This is just a change in Bitcoin Core that improves performance. It has been our recommended process for full-node synchronization for years; it just took time to fully prepare and test it for Bitcoin Core.
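
To illustrate the core idea, here is a toy sketch (hypothetical types, nothing like the actual Bitcoin Core code): headers are cheap to fetch and check as a connected chain, so a node can verify the chain's shape first and only then spend bandwidth downloading full blocks whose headers are known to connect.

Code:
#include <iostream>
#include <string>
#include <vector>

// Hypothetical simplified header; a real Bitcoin header is 80 bytes and
// its hash is the double-SHA256 of the serialized header.
struct Header {
    std::string hash;      // hash of this header
    std::string prevHash;  // hash of the parent header
};

// Headers-first: confirm that a batch of headers forms a connected chain
// extending the current tip before requesting any full blocks.
bool ConnectsToTip(const std::string& tipHash, const std::vector<Header>& headers) {
    std::string expected = tipHash;
    for (const Header& h : headers) {
        if (h.prevHash != expected) return false;  // gap or fork: reject the batch
        expected = h.hash;                         // chain extends by one
    }
    return true;
}

int main() {
    std::vector<Header> batch = {{"h1", "genesis"}, {"h2", "h1"}, {"h3", "h2"}};
    std::cout << (ConnectsToTip("genesis", batch) ? "chain connects" : "broken chain") << "\n";
}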
keeshux (OP) | Newbie | Activity: 26 | Merit: 6
December 04, 2014, 01:49:38 AM  #3

Oh, nice. Can we also expect the getblocks message to be deprecated some day along with this improvement? I don't see any use for it other than legacy support from now on.

One more thing: I currently sync both headers and blocks from a single node, and I'd really like to take advantage of parallel download, but I don't understand the 'moving window' approach in detail without diving into the Bitcoin Core code. My very naive guess is that you request one or a few different blocks at a time from each connected peer to fill the 'window', and you only move it forward (toward higher heights) as the lower end of the window is filled (i.e. as the older blocks finish downloading). Another requirement for downloading from multiple peers would be handling blocks that arrive out of order. Is this optimal for SPV, or even correct at all? Perhaps a much wider window would be appropriate, given the trivial size of filtered blocks?
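
Something along these lines, as a toy sketch (made-up structure, not Bitcoin Core's actual logic): only heights inside a fixed-size window past the last contiguous block are handed out to peers, and the window slides as its low end fills in.

Code:
#include <cstddef>
#include <iostream>
#include <map>
#include <set>
#include <vector>

// Toy moving-window download scheduler. WINDOW_SIZE caps how far ahead
// of the last contiguous block we are willing to request.
constexpr std::size_t WINDOW_SIZE = 16;

struct Scheduler {
    std::size_t nextContiguous = 0;    // lowest height not yet downloaded
    std::set<std::size_t> inFlight;    // heights already requested from a peer
    std::map<std::size_t, bool> done;  // heights fully downloaded

    // Pick up to n unassigned heights inside the window for one peer.
    std::vector<std::size_t> Assign(std::size_t n) {
        std::vector<std::size_t> picks;
        for (std::size_t h = nextContiguous;
             h < nextContiguous + WINDOW_SIZE && picks.size() < n; ++h) {
            if (!done.count(h) && !inFlight.count(h)) {
                inFlight.insert(h);
                picks.push_back(h);
            }
        }
        return picks;
    }

    // A block arrived; slide the window while its low end is filled.
    void OnBlock(std::size_t h) {
        inFlight.erase(h);
        done[h] = true;
        while (done.count(nextContiguous)) ++nextContiguous;
    }
};

int main() {
    Scheduler s;
    auto peerA = s.Assign(4), peerB = s.Assign(4);  // disjoint height ranges
    for (std::size_t h : peerB) s.OnBlock(h);       // out-of-order arrival
    for (std::size_t h : peerA) s.OnBlock(h);       // low end fills, window slides
    std::cout << "contiguous up to height " << s.nextContiguous << "\n";
}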
grau | Hero Member | Activity: 836 | Merit: 1021
December 04, 2014, 02:34:13 AM  #4

I also do headers-first, then download blocks from different peers in random order. The headers already give you the right order and the trunk to fit them into. You have to postpone validation of the blocks (if you validate) until the sequence is without gaps.
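
In sketch form (simplified, hypothetical types), the buffering looks like this: blocks that arrive early are parked by height, and validation only advances over the gap-free prefix.

Code:
#include <cstddef>
#include <iostream>
#include <map>
#include <string>

struct Block { std::string data; };

// Out-of-order blocks are buffered by height; validation proceeds in
// strict height order and stalls at the first gap.
class OrderedValidator {
    std::map<std::size_t, Block> pending;  // early arrivals, keyed by height
    std::size_t nextToValidate = 0;        // first height not yet validated
public:
    void Receive(std::size_t height, Block b) {
        pending.emplace(height, std::move(b));
        // Validate as far as the sequence has no gaps.
        while (true) {
            auto it = pending.find(nextToValidate);
            if (it == pending.end()) break;  // gap: wait for more blocks
            Validate(it->second);
            pending.erase(it);
            ++nextToValidate;
        }
    }
    void Validate(const Block& b) {  // placeholder for script/consensus checks
        std::cout << "validated " << b.data << "\n";
    }
};

int main() {
    OrderedValidator v;
    v.Receive(2, {"block 2"});  // arrives early, buffered
    v.Receive(0, {"block 0"});  // validates 0 only
    v.Receive(1, {"block 1"});  // unblocks 1 and then 2
}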
hhanh00 | Sr. Member | Activity: 467 | Merit: 266
December 04, 2014, 06:39:17 AM  #5

The bottleneck in my app is the creation of the UTXO db. Even with a RAM disk and LevelDB, building it takes about 1 hour for the current blockchain. In comparison, script verification takes only ~15 min because it is CPU-bound and easily parallelizable. LevelDB spends a large portion of the time deleting and recompacting its sorted tables because tx hashes are essentially random, so the writes have no locality.
I haven't found a way to improve that part. It's not so bad since it's only done once, but still.

grau | Hero Member | Activity: 836 | Merit: 1021
December 04, 2014, 10:19:58 AM  #6

Computing the UTXO set could run in parallel with the download, up to the highest height that has been downloaded without a gap.
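
A rough sketch of such a pipeline (toy code, not taken from any real node): one thread hands over the gap-free prefix of downloaded blocks, and another applies them to the UTXO set concurrently, so db work overlaps with network latency.

Code:
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

std::queue<int> ready;  // heights ready for UTXO application, in order
std::mutex m;
std::condition_variable cv;
bool downloadDone = false;

void Downloader() {
    for (int h = 0; h < 10; ++h) {  // pretend heights 0..9 arrive gap-free
        { std::lock_guard<std::mutex> lk(m); ready.push(h); }
        cv.notify_one();
    }
    { std::lock_guard<std::mutex> lk(m); downloadDone = true; }
    cv.notify_one();
}

void UtxoBuilder() {
    while (true) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [] { return !ready.empty() || downloadDone; });
        if (ready.empty()) return;  // download finished and queue drained
        int h = ready.front(); ready.pop();
        lk.unlock();
        // Placeholder for applying block h's spends/creates to the UTXO db.
        std::cout << "applied block " << h << " to UTXO set\n";
    }
}

int main() {
    std::thread d(Downloader), u(UtxoBuilder);
    d.join(); u.join();
}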
hexafraction | Sr. Member | Activity: 392 | Merit: 259
December 08, 2014, 11:33:57 AM  #7

Quote from: hhanh00 on December 04, 2014, 06:39:17 AM
The bottleneck in my app is the creation of the UTXO db. Even with a RAM disk and LevelDB, building it takes about 1 hour for the current blockchain. In comparison, script verification takes only ~15 min because it is CPU-bound and easily parallelizable. LevelDB spends a large portion of the time deleting and recompacting its sorted tables because tx hashes are essentially random, so the writes have no locality.
I haven't found a way to improve that part. It's not so bad since it's only done once, but still.

Why don't you write the UTXO set as one huge LevelDB batch write (or as a series of batches, sized by how much work you can afford to redo after a failure)?

Quote from: the LevelDB documentation
Apart from its atomicity benefits, WriteBatch may also be used to speed up bulk updates by placing lots of individual mutations into the same batch.
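
A sketch of what that could look like against the real LevelDB C++ API (the UTXO key/value encoding and the db path here are invented for illustration):

Code:
#include <cassert>
#include <string>
#include "leveldb/db.h"
#include "leveldb/write_batch.h"

int main() {
    leveldb::DB* db = nullptr;
    leveldb::Options options;
    options.create_if_missing = true;
    leveldb::Status s = leveldb::DB::Open(options, "/tmp/utxo-demo", &db);
    assert(s.ok());

    // Accumulate one block's worth of UTXO mutations in a single batch:
    // spent outputs become deletes, new outputs become puts. (Toy keys;
    // a real UTXO db would key on txid:vout and store serialized outputs.)
    leveldb::WriteBatch batch;
    batch.Delete("spent-txid:0");
    batch.Put("new-txid:0", "serialized-output-0");
    batch.Put("new-txid:1", "serialized-output-1");

    // One synchronous write applies the whole batch atomically.
    leveldb::WriteOptions wo;
    wo.sync = true;
    s = db->Write(wo, &batch);
    assert(s.ok());
    delete db;
}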
