Topic: Chunked download & scalability
pmlyon (OP)
July 01, 2014, 02:22:25 PM  #1

Hi, I've recently been kicking around the idea of being able to download blocks in chunks after verifying the header and choosing the longest chain.

It seems to me that it should be possible to download a block in chunks of, say, 512 transactions, with each chunk accompanied by a proof against the merkle root that it really is a piece of the block. This would allow blocks to be streamed out across the network without having to wait for the entire block to be uploaded and validated first.

If I'm correct that this could work, the merkle root collision (CVE-2012-2459) is a bit of a nuisance to deal with. Has there been any talk of addressing that bug when we eventually hard fork to allow for a larger block size? I think using a hash of 0 on the right side of the pairing would work, as opposed to duplicating the current hash to get a pair.
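For context, here is a rough sketch (Python, not from any actual implementation) of how the merkle root is computed today, with the duplicate-last-hash rule that makes CVE-2012-2459 possible. Passing odd_pair=lambda h: bytes(32) models the zero-hash pairing suggested above, which would remove that particular duplicate-transaction ambiguity.

Code:
import hashlib

def dhash(b: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids, odd_pair=lambda h: h):
    """Build the merkle root from a list of txids (bytes).
    The default odd_pair duplicates the last hash of an odd-length
    level (current behaviour, the source of CVE-2012-2459);
    odd_pair=lambda h: bytes(32) models the zero-hash pairing."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(odd_pair(level[-1]))
        level = [dhash(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]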

Author of BitSharp, a C# Bitcoin node
https://github.com/pmlyon/BitSharp/wiki
TimS
July 01, 2014, 04:13:02 PM  #2

This would make DoS attacks much easier: instead of flooding the network with expensive transactions, you can flood the network with worthless blocks (ones that don't really meet the target difficulty). Since nobody can tell it's invalid before they've received the whole thing, honest nodes will be tricked into relaying the majority of the junk block.
pmlyon (OP)
July 01, 2014, 04:28:49 PM  #3

The headers actually take care of that, since they have to be mined either way. You'd have to have a ton of mining power and waste it all to carry out such an attack.

gmaxwell
July 01, 2014, 06:53:52 PM  #4

I'm not sure why you think this would be all that advantageous.

You still cannot accept & connect the block without the whole thing, and you cannot verify the later parts of the block without the earlier parts (as blocks very often spend txouts created in the same block)— if you want instead to forward blocks without verifying them, you can do that without any chunking. This was implemented some time back for the reference software in a pull request by luke, but it turned out not to really improve performance much (especially after the ECDSA caching went in).
pmlyon (OP)
July 01, 2014, 06:57:30 PM  #5

Today? Not at all. What I've been thinking about though is if we have something like 1GB blocks. A scheme like this could allow peers to start sharing the block out across the network without having to first wait to download the entire block. I'm trying to think of how we could reduce the latency involved with getting huge blocks out across the entire network.

Oops, I only replied to half your comment. :) I wasn't thinking of trying to verify that later chunks of the block are valid before you have the earlier parts, just that each chunk does in fact make up a piece of the block and isn't just random garbage. You'd still have to validate the block itself in order.

pmlyon (OP)
July 03, 2014, 01:28:21 PM  #6

I should expand a bit on what I have in mind.

The idea is that I could send a request to a peer to download a block in a streaming fashion. Every 512 transactions, the peer would include a proof, via the merkle tree, that the previous 512 transactions do in fact make up a piece of the block I requested.

This allows me to then stream the block through validation, without having to wait for it to be fully downloaded first. As I commit each chunk of the block the validation can kick in and process that chunk.

If another peer asks me for a streaming copy of that same block, I can also start streaming the block out to them before I've finished receiving it myself.

On the receiving end, you wouldn't be doing any more work than you would normally. If a block is invalid, you could potentially find that out sooner than you could currently, before you've had to download the entire thing.

If you start sharing out the block before you've finished it, that would lead to more upstream bandwidth usage if the block does turn out to be invalid. I think mined headers are enough to mitigate that risk.
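There's no defined wire format for a chunk proof like this, but the building block would presumably be the ordinary merkle branch check (the same one SPV clients use), amortized over 512 adjacent transactions instead of one branch per transaction. A minimal sketch of that check (Python; the function name and argument layout are just for illustration):

Code:
import hashlib

def dhash(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_merkle_branch(txid: bytes, index: int, branch: list, root: bytes) -> bool:
    """Fold one leaf up the tree using the supplied sibling hashes and
    compare the result against the merkle root from the block header."""
    h = txid
    for sibling in branch:
        if index % 2 == 0:     # our node is the left child at this level
            h = dhash(h + sibling)
        else:                  # our node is the right child
            h = dhash(sibling + h)
        index //= 2
    return h == root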

DeathAndTaxes (Gerald Davis)
July 03, 2014, 04:42:24 PM (last edit: July 03, 2014, 05:08:49 PM)  #7

The reality is that, optimally, nodes should already have all (or at least most) of the txns that make up a new block. Txns propagate across the network independently of blocks. The current block message is the simplest approach, but it is also the least efficient: most of the time, when your node receives a new block, it is 99%+ information you already have. The good news is that the protocol can be made more efficient. Blocks stored on disk consist of the header and an array of txns. Today the block message is essentially the same format, but it doesn't need to be. Block messages for new blocks could consist of just the header, the coinbase txn, and an array of TxIds. The average txn is ~600 bytes and a TxId is 32 bytes, so that would reduce the bandwidth requirements of the block message by up to 95%.
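As a rough illustration of the saving (the message layout below is hypothetical, not an existing protocol message):

Code:
from dataclasses import dataclass
from typing import List

@dataclass
class ThinBlockMsg:
    """Hypothetical 'TxIds only' block message: the 80-byte header,
    the full coinbase txn, and one 32-byte TxId per remaining txn."""
    header: bytes          # 80 bytes
    coinbase_txn: bytes
    txids: List[bytes]     # 32 bytes each

# Back-of-the-envelope saving for a block of 2,000 txns averaging ~600 bytes:
full_size = 80 + 2_000 * 600          # ordinary block message, ~1.2 MB
thin_size = 80 + 600 + 2_000 * 32     # header + coinbase + TxIds
print(1 - thin_size / full_size)      # ~0.946, i.e. roughly a 95% reduction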

Nodes would process new blocks in a slightly different manner:
1) Receive the new block message.
2) Validate header information
3) Validate the block hash meets difficulty target.
4) Construct and validate the merkle tree (only hashes are needed).
5) Scan the local memory pool for the TxIds in the merkle tree.
  a) If it is missing txns, it requests those by TxId from other nodes (at a minimum, any peer the node received the block header from should have the missing txn(s)).
  b) Validate the new txns.
  c) Relay the new txns to peers.
6) The node constructs the block, writes it to the local blockchain, and relays the block message to its peers.

At any point, if validation fails, the node stops. This prevents malicious nodes from making up bogus blocks without completing the PoW (which would be pointless). Optimally the protocol would allow miners to probabilistically include the full txn data for txns that are unlikely to be in the memory pools of peers, so a block message could consist of all txns, all TxIds, or any combination in between. All this would require changes to the protocol, but they could be done in a backwards-compatible manner.
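A sketch of that flow in code (every helper name here is hypothetical pseudocode, not an existing node API):

Code:
def process_thin_block(msg, mempool, peers):
    """Steps 1-6 above; bail out as soon as any validation fails."""
    if not validate_header(msg.header):               # step 2
        return
    if not meets_difficulty(block_hash(msg.header)):  # step 3
        return
    all_txids = [txid_of(msg.coinbase_txn)] + msg.txids
    if compute_merkle_root(all_txids) != header_merkle_root(msg.header):
        return                                        # step 4

    txns = []
    for id_ in msg.txids:                             # step 5: assemble from mempool
        txn = mempool.get(id_)
        if txn is None:
            txn = request_txn(peers, id_)             # 5a: fetch any missing txn
            if txn is None or not validate_txn(txn):  # 5b: validate it
                return
            relay_txn(peers, txn)                     # 5c: pass it along
        txns.append(txn)

    block = assemble_block(msg.header, msg.coinbase_txn, txns)
    write_to_chain(block)                             # step 6: store locally...
    relay_block(peers, block)                         # ...and relay to peers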
pmlyon (OP)
July 03, 2014, 05:30:28 PM  #8

Thanks, I had also wondered about doing something like that.

In the design that I have, it would look something like this, after your step 3:

1) Open a stream to validate the new block
2) Open a stream to write the block to disk
3) Open a stream to download the list of block tx hashes
3a) This could optionally use the idea I have above, except you would be proving that chunks of tx hashes make up the block as you grab them. Might be overkill, and you'd want to do more than 512 at a time since the tx hashes alone are a lot smaller.
4) Start reading from the tx hash stream and spin off requests for any missing txes, forwarding the txes to the block-writing stream in order as they come in.
5) As txes get streamed into the block, that then gets streamed into the block validator.
6) The double-spend check is the only serial part of the validator as the block streams through it. Looking up previous tx outputs and validating scripts is done completely in parallel, and can be spanned across multiple disks & CPUs. If you've already validated the tx outside of the block, you can just skip it here.
7) Once everything has finished streaming through, and no errors have occurred, commit.

If you use 3a, you can stream into 4 before you've grabbed all the tx hashes. If you don't, you need to get them all and verify the merkle root first to make sure you have the right hash list.
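Purely as an illustration of steps 5-7 (this is not BitSharp's actual code; the tx object shape and the verify_script helper are assumptions), the serial UTXO/double-spend pass could feed script checks to a worker pool and join them before committing:

Code:
from concurrent.futures import ThreadPoolExecutor

def stream_validate(tx_stream, utxo_set, already_verified, workers=8):
    """Serial double-spend/UTXO pass; script checks run in parallel."""
    pending = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for tx in tx_stream:                      # txns arrive as they stream in
            for outpoint in tx.inputs:
                if outpoint not in utxo_set:      # serial double-spend check
                    raise ValueError("missing or already-spent output")
                prev_out = utxo_set.pop(outpoint)
                if tx.txid not in already_verified:
                    # script/signature validation can run fully in parallel
                    pending.append(pool.submit(verify_script, tx, prev_out))
            for i, out in enumerate(tx.outputs):  # add this txn's new outputs
                utxo_set[(tx.txid, i)] = out
        if not all(f.result() for f in pending):  # join before committing
            raise ValueError("script validation failed")
    # the caller commits the block only once the whole stream has passed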

gmaxwell
July 03, 2014, 05:37:45 PM  #9

Quote from: DeathAndTaxes
Block messages for new blocks could consist of just the header, the coinbase txn, and an array of TxIds.
P2Pool already does this for its own blocks.  The problem more generally is that you don't know what other transactions the far end knows (at least not completely) and if you need an extra RTT to fetch the transactions the end result is that the block takes longer to relay than just sending the data in the first place. So in terms of reducing block orphaning and such, it wouldn't be a win.  (It works better in the p2pool case, since the nodes themselves are mining and can be sure to have pre-forwarded any transactions they'll mine)

Perhaps as a general optimization for nodes that don't care about getting the block as soon as possible it might be interesting, though it is possible to optimize latency and bandwidth at the same time through more complicated protocols (see link in gavin's post).
pmlyon (OP)
July 03, 2014, 06:46:10 PM  #10

Quote from: gmaxwell
Perhaps as a general optimization for nodes that don't care about getting the block as soon as possible it might be interesting, though it is possible to optimize latency and bandwidth at the same time through more complicated protocols (see link in gavin's post).

Is this the link you're referring to? https://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding

I confess that went over my head. :) In a scheme such as that, can you write out the block at all while it's coming down from the network, or do you need to finish downloading it completely before you can start writing it out? If you can download the block fast enough it doesn't really matter whether you can start feeding it through validation before it's finished, but I'm just wondering if that would still be possible with the network coding you described.

pmlyon (OP)
July 03, 2014, 07:45:00 PM  #11

What I'm aiming for with the streaming is to try and help reduce latency. So if it takes 10 seconds to download a block and 20 to validate, it should take you 20 seconds total to do both. If it takes you 30 seconds to download and 10 to validate, then it should take you 30 seconds to do both.
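In other words, with the two phases overlapped, the slower one dominates instead of their sum; a trivial way to put it:

Code:
def total_time(download_secs, validate_secs, streaming=True):
    """Streaming overlaps download and validation, so the slower phase dominates."""
    if streaming:
        return max(download_secs, validate_secs)
    return download_secs + validate_secs

print(total_time(10, 20))  # 20 s, matching the first example above
print(total_time(30, 10))  # 30 s, matching the second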

gmaxwell
July 04, 2014, 02:12:53 AM  #12

Quote from: pmlyon
What I'm aiming for with the streaming is to try and help reduce latency. So if it takes 10 seconds to download a block and 20 to validate, it should take you 20 seconds total to do both. If it takes you 30 seconds to download and 10 to validate, then it should take you 30 seconds to do both.

Unless something weird has happened it takes ~no time anymore to validate a new block at the tip of the network— it's almost all already validated when you receive it.
pmlyon (OP)
July 04, 2014, 02:43:19 AM  #13

Quote from: gmaxwell
Unless something weird has happened it takes ~no time anymore to validate a new block at the tip of the network— it's almost all already validated when you receive it.

Yeah, it definitely doesn't take that long at today's block size; I'm just thinking ahead to when blocks are much larger. I'm trying to come up with a core design that can elegantly handle large blocks, if the network gets to a point where they are being used.

gmaxwell
July 04, 2014, 03:34:35 AM  #14

Quote from: pmlyon
Yeah, it definitely doesn't take that long at today's block size; I'm just thinking ahead to when blocks are much larger. I'm trying to come up with a core design that can elegantly handle large blocks, if the network gets to a point where they are being used.

Sorry if I wasn't clear— it shouldn't take much time regardless of the size, because most of the transactions in the block are already validated and waiting in the mempool. Before the software cached that validation it was taking a fair amount of time, but not anymore. Getting most of the validation off the latency-sensitive critical path is a big improvement.
pmlyon (OP)
July 04, 2014, 12:36:23 PM  #15

Quote from: gmaxwell
Sorry if I wasn't clear— it shouldn't take much time regardless of the size, because most of the transactions in the block are already validated and waiting in the mempool. Before the software cached that validation it was taking a fair amount of time, but not anymore. Getting most of the validation off the latency-sensitive critical path is a big improvement.

Ahh, I see what you're saying. I'll definitely be able to take advantage of that as well, which will allow everything to run at the same speed as the core spent/unspent UTXO update, since all the work it queues up to verify will have already been done. That core piece I can run at over 10,000 tps (SSD), and all the pre-verification can be done using multiple, separate disks over which the block data is spanned. I'm trying to extract as much parallelism from the processing as I can, and allow for adding disks to increase your I/O speed, if that becomes an issue.
