Bitcoin Forum
Author Topic: O(1) block propagation  (Read 5566 times)
jonny1000 (OP), Member
August 11, 2014, 03:11:18 PM  #1

Dear all

I have read Gavin’s proposal on O(1) block propagation, available here:

https://gist.github.com/gavinandresen/e20c3b5a1d4b97f79ac2

If there is another thread on this I apologise; I couldn't find one.  I think the proposal is interesting and could be brilliant.  However, there are various basic aspects of it which I do not understand. If possible, could some of you help explain these:

1. Is this a proposal for initial block propagation only, with the old-style blocks still being propagated with a delay, or will the blockchain only contain the new, smaller style of blocks?

2. What is the incentive to propagate the old block format, with all the transactions, as this will presumably no longer be required for mining?  Couldn't this make the problem of a lack of full nodes worse?

3. If the blockchain only contains the new type of block, how does the client get the history of all bitcoin transactions?  Will clients need to connect to the memory pool of other nodes, and how will all historic transactions be reconstructed?

4. How will miners be incentivised once the block reward drops?  If there is no longer as much scarcity in the blockchain, the fees may be too low to incentivise miners.

Many thanks
azeteki, Member
August 11, 2014, 03:24:02 PM  #2

Initial block propagation only. At the point immediately after a successful nonce has been found.
The block itself still contains all transactions in full form. It is simply transferred in a more efficient manner.
There is no 'new smaller style of block'.
Nodes will still relay full blocks. There is no (direct) financial incentive for anyone to run a full node and there never has been.

CJYP, Member
August 11, 2014, 03:27:52 PM  #3

Quote from: jonny1000 on August 11, 2014, 03:11:18 PM
[the opening post, quoted in full]

This proposal does not change anything about the blockchain. To answer question 1, sort of. Since finding a block really just means finding a low enough hash to make the block valid, all the data except for the hash can be known to the entire network before the block is found.
Currently, when a miner finds a block, they have to send the entire block (all transactions in the block + header), to everyone else, which is harder the more transactions there are in the block.
The proposal is (oversimplified) to ensure each miner has all the transactions that will be included in the block beforehand. Then when a miner finds a block, they will only have to send the header and coinbase transaction (telling everyone who gets the block reward), and other miners will be able to reconstruct the block from this.
Since they will be able to reconstruct the block, the only difference is that the entire block isn't sent over the network.
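The reconstruction step described above can be sketched in a few lines of Python. This is an illustration only: `expected_txids` stands in for however the scheme tells the receiver which transactions the miner selected (in the actual IBLT proposal that information is recovered from the IBLT itself), and all names here are hypothetical.

```python
from typing import Optional

def reconstruct_block(header: bytes, coinbase_tx: bytes,
                      mempool: dict, expected_txids: list) -> Optional[list]:
    """Rebuild the full transaction list from the local mempool.

    Only `header` and `coinbase_tx` cross the wire; everything else is
    assumed to already sit in the receiver's mempool. Returns None if
    anything is missing, signalling a fallback to fetching the full block.
    """
    txs = [coinbase_tx]  # the coinbase is always sent, never inferred
    for txid in expected_txids:
        if txid not in mempool:
            return None  # mempool out of sync: request the full block instead
        txs.append(mempool[txid])
    return txs
```

The point is that the bandwidth-heavy part (the transaction data) never needs to be resent, only the small amount of information needed to pick it back out of the mempool.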
jonny1000 (OP), Member
August 11, 2014, 06:14:46 PM  #4

Thanks for clearing this up for me, sorry about my misconceptions about the idea.  I still need some time to think about this, however I think this sounds brilliant and could solve what I thought was one of Bitcoin’s biggest problems.

Please let me know if I have got it right now:
When a miner finds a block, they propagate both the actual block and this new “mini block” across the network.  Then if another miner receives the mini block but can't reconstruct the full block in time, they just wait until they receive the full block.  This way miners are incentivised to send out both types of block.

What happens if miners start to get too far ahead, e.g. they reconstruct a block and find many new valid blocks before even one full block has propagated?  Could this be an issue?  There could be two types of nodes: one type that reconstructs blocks and has a more up-to-date view of the network, and a slower traditional node that only recognises full blocks.
solex, Legendary
August 11, 2014, 09:16:36 PM (last edit: August 12, 2014, 02:23:46 AM)  #5

Quote from: jonny1000
When a miner finds a block, they propagate both the actual block and this new “mini block” across the network.

No. That was my initial thought too when I saw Gavin's tweet, but the reality is far more amazing. The new IBLT block concept is absolute and total genius and deserves a lot of constructive input from this forum. This change can launch Bitcoin from being a serious competitor to Western Union (today) to a serious competitor to VISA (later next year).

The best way to think about this idea is to consider IBLT blocks as containing the instructions for how to construct the next block yourself, together with the differences: the handful of needed transactions which you don't already know about.

If you look at the unconfirmed transactions on blockchain.info, you see their mempool displayed. Your copy of Bitcoin Core running at home will have a very similar set of unconfirmed transactions, maybe slightly fewer, as BC.i has many more peer-to-peer connections.

By way of example, assuming the IBLT upgrade is in and running and a miner produces an IBLT block, you receive it and so does BC.i (and other nodes).  The IBLT contains all the transactions to make up a block - but the transactions can't be read because (basically) they are overlaying each other. You, BC.i, others, take the IBLT and subtract out all the transactions you have that are likely to be in the new block and are left with the differences. Different differences for each! Then the subtracted ones and those extra transactions are used to build the same new block, and then regular validation begins.

IBLT blocks are all the same size, but the block which goes into the blockchain could be smaller, or much larger, depending on the number of transactions as usual.
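The subtract-and-peel mechanics described above can be shown with a toy IBLT in Python. This is a deliberately tiny sketch, not the proposal's actual encoding: a real table would use thousands of fixed-width cells, and the canonical ordering and offset scheme solex mentions later are omitted. The count/keySum/valueSum cell layout and XOR peeling follow the Goodrich–Mitzenmacher IBLT construction that Gavin's gist builds on.

```python
import hashlib

NUM_CELLS = 32   # toy size; a block-scale IBLT needs far more cells
NUM_HASHES = 3

def _positions(key: bytes) -> list:
    """Map a key to NUM_HASHES distinct cell indices."""
    idx, i = [], 0
    while len(idx) < NUM_HASHES:
        h = int.from_bytes(hashlib.sha256(bytes([i]) + key).digest()[:4], 'big') % NUM_CELLS
        if h not in idx:
            idx.append(h)
        i += 1
    return idx

def _xor(a: bytes, b: bytes) -> bytes:
    n = max(len(a), len(b))
    return bytes(x ^ y for x, y in zip(a.ljust(n, b'\0'), b.ljust(n, b'\0')))

class IBLT:
    def __init__(self):
        self.count = [0] * NUM_CELLS
        self.keysum = [b''] * NUM_CELLS
        self.valsum = [b''] * NUM_CELLS

    def insert(self, key: bytes, value: bytes) -> None:
        for i in _positions(key):
            self.count[i] += 1
            self.keysum[i] = _xor(self.keysum[i], key)
            self.valsum[i] = _xor(self.valsum[i], value)

    def subtract(self, other: "IBLT") -> "IBLT":
        """Cell-wise difference: entries present on both sides cancel out."""
        d = IBLT()
        for i in range(NUM_CELLS):
            d.count[i] = self.count[i] - other.count[i]
            d.keysum[i] = _xor(self.keysum[i], other.keysum[i])
            d.valsum[i] = _xor(self.valsum[i], other.valsum[i])
        return d

    def peel(self) -> set:
        """Recover (key, value, side) triples from a difference IBLT.

        side is +1 for entries only we had, -1 for entries only the peer
        had. Peeling a pure cell (count == +/-1) can turn other cells
        pure, so we loop to a fixed point.
        """
        out, progress = set(), True
        while progress:
            progress = False
            for i in range(NUM_CELLS):
                if self.count[i] not in (1, -1):
                    continue
                side = self.count[i]
                key = self.keysum[i].rstrip(b'\0')  # assumes keys lack trailing zero bytes
                if i not in _positions(key):
                    continue  # cell holds several overlaid entries, not truly pure
                val = self.valsum[i].rstrip(b'\0')
                out.add((key, val, side))
                for j in _positions(key):  # remove the recovered entry everywhere
                    self.count[j] -= side
                    self.keysum[j] = _xor(self.keysum[j], key)
                    self.valsum[j] = _xor(self.valsum[j], val)
                progress = True
        return out
```

Two nodes each insert their view of the candidate block; subtracting the received IBLT from a locally built one and peeling yields exactly the transactions each side is missing, no matter how many transactions they share. That cancellation of the shared bulk is why the structure's size tracks the differences rather than the block.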

gmaxwell, Moderator, Legendary
August 12, 2014, 03:20:39 AM  #6

Don't also miss https://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding

See also the PGP SKS keyserver, which uses efficient set reconciliation.

Quote
This change can launch Bitcoin from being a serious competitor to Western Union (today) to a serious competitor to VISA (later next year).

I'm a fan of fancy schemes to increase efficiency (see link) but that is a bit over the top.

The impact is much more modest than you're thinking, especially compared to the kind of forwarding that p2pool and bluematt's relay network already do (send blocks without repeating transactions that the far end already knows you have). The transactions still must be sent and validated. It does avoid the 2x bandwidth overhead of sending it again with the block, and the marginal latency that implies— but so do simpler tools e.g. matt's relay for me has a hitrate of "In total, skipped 140004 of 142773 (0.9806055766846673%)" since last restart.

Unlike some other technologies like fraud proofs, this doesn't change the decentralization/scale tradeoff or the bandwidth usage (it's still O(n) in the number of transactions; they're just happening earlier). It's all awesome sauce and should be a great thing to have, but the fact that I love this stuff makes it hurt all the more to see it exaggerated. Smiley
jl2012, Legendary
August 12, 2014, 07:27:44 AM  #7

Quote from: gmaxwell on August 12, 2014, 03:20:39 AM
[the post above, quoted in full]

I think you somewhat understated the implication of Gavin's proposal. Although bandwidth and CPU usage are still O(n), block propagation becomes O(1). Since the risk of block orphaning is positively correlated with propagation time, this proposal will reduce the cost of orphaning and encourage miners to include more transactions in a block.
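The orphan-risk point can be made concrete with a standard back-of-the-envelope model (an assumption of this sketch, not something stated in the proposal): if block discovery is a Poisson process with a 600-second mean, the chance that a competing block appears while yours is still propagating for t seconds is 1 - e^(-t/600).

```python
import math

BLOCK_INTERVAL = 600.0  # seconds, Bitcoin's target average

def orphan_probability(propagation_delay_s: float) -> float:
    """P(a competing block is found while ours is still propagating),
    modelling block discovery as a memoryless Poisson process."""
    return 1.0 - math.exp(-propagation_delay_s / BLOCK_INTERVAL)

# Cutting propagation from 30 s to 1 s cuts the orphan risk from
# roughly 4.9% to roughly 0.17% per block found.
```

Under this model, every extra second of propagation delay costs a miner expected revenue, which is exactly the incentive pushing miners toward small blocks that O(1) propagation would remove.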

Donation address: 374iXxS4BuqFHsEwwxUuH3nvJ69Y7Hqur3 (Bitcoin ONLY)
LRDGENPLYrcTRssGoZrsCT1hngaH3BVkM4 (LTC)
PGP: D3CC 1772 8600 5BB8 FF67 3294 C524 2A1A B393 6517
jonny1000 (OP), Member
August 12, 2014, 10:28:51 AM  #8

Will there be a transitional period in which some users and miners do not attempt to reconstruct blocks and rely on the old block propagation method?  During this period, won't it make sense for nodes to propagate both the new “mini IBLT block” and the old-style block?  Miners who find a block will want to maximise their chance of being recognised as first; won't they therefore be incentivised to distribute both the IBLT block and the old full block?  Going forward, won't there always be a minority of nodes that only download new blocks in the traditional way and do not attempt to reconstruct them?

Also, what happens to new nodes?  New nodes won't have the memory pool data from, say, 12 months ago; how will they reconstruct those old blocks?  Will the download of the historic blockchain work in the same way as it does now?
dserrano5, Legendary
August 12, 2014, 10:38:10 AM  #9

Quote from: jonny1000
Will there be a transitional period, in which some users and miners do not attempt to reconstruct blocks and rely on the old block propagation method?

Yes. Read Gavin's gist.



Quote from: jonny1000
Also, what happens to new nodes?  New nodes won't have the memory pool data from, say, 12 months ago; how will they reconstruct those old blocks?  Will the download of the historic blockchain work in the same way as it does now?

Older blocks are downloaded as they are now; newly mined blocks are propagated following this new mechanism. If almost all transactions in the memory pool end up in a block, the pool starts (almost) empty every 10 minutes for everybody, so "getting up to speed" isn't a problem.
Gavin Andresen, Legendary (Chief Scientist)
August 12, 2014, 02:21:03 PM  #10

Block re-orgs need some thought.

If I have chain A-B-C, and get IBLT's for an alternative chain A-B'-C'-D'...

... then the current memory pool won't work to try to reconstruct B' C' D'.

Using B and C to reconstruct B' and C' should work pretty well. Then the remaining memory pool transactions can be used to reconstruct D'.

If any of the reconstructions fail, just fall back to fetching all of B' C' D'.

Then again, re-orgs are rare enough that always falling back to fetching full blocks would be OK.
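The fallback logic sketched in this post can be written down directly. `try_decode` and `fetch_full_block` below are hypothetical callables standing in for the IBLT decoder and the legacy full-block download; the point is only the control flow: seed reconstruction with the transactions from our own side of the fork, and bail out to full blocks on any failure.

```python
def reconstruct_reorg_chain(new_iblts, our_old_blocks, mempool,
                            try_decode, fetch_full_block):
    """Reconstruct an alternative chain B'-C'-D' after a re-org.

    Seeds the candidate set with the mempool plus the transactions of
    our replaced blocks (B and C), since most of B'/C' will overlap
    them. Any decode failure falls back to fetching every full block.
    """
    candidates = set(mempool)
    for block_txs in our_old_blocks:        # B and C from our side of the fork
        candidates |= set(block_txs)
    chain = []
    for iblt in new_iblts:                  # B', C', D' in order
        decoded = try_decode(iblt, candidates)
        if decoded is None:
            return [fetch_full_block(b) for b in new_iblts]
        chain.append(decoded)
        candidates -= set(decoded)          # a tx confirmed in B' can't recur in C'
    return chain
```

Given how rare re-orgs are, the unconditional fallback path alone would also be acceptable, as the post notes.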

How often do you get the chance to work on a potentially world-changing project?
jonny1000 (OP), Member
August 12, 2014, 07:55:11 PM  #11

Thanks Gavin

There could therefore be some cases of an old-fashioned block propagation race in the event that a re-org takes place and miners are unable to reconstruct the blocks.  I can see how this could be rare enough not to incentivise mining centralisation or the creation of smaller blocks.

How exactly does the memory pool work?  Could one keep a record of what the memory pool was like in say the last n blocks, such that if the situation you describe above occurs, one can reconstruct the blocks more effectively?
jonny1000 (OP), Member
August 21, 2014, 09:13:08 AM  #12

As many have said, of course this doesn’t completely solve the blocksize problem or feature, whichever way you look at it. Prior to this proposal I thought that block propagation times incentivising mining centralisation was potentially one of the network's biggest security issues. Therefore this proposal could help significantly with this. Maybe this does tip the balance slightly in favour of a moderate increase in the blocksize limit, however there are still many reasons for a limit overall.

These reasons include:
1. Bandwidth limits, as the transactions still need to be relayed and downloaded at least once, although this could become a less urgent network security problem.
2. Issues surrounding new nodes catching up with the network.
3. The need for scarcity in the blockchain to create a market price for transaction fees, which will prevent spam and eventually be required to incentivise miners.
4. Storage limits.
5. Fast propagation times in the event IBLT block reconstruction fails, due to a deliberate attack or pure accident.

What kind of blocksize limit could there be if this becomes implemented?
hayek, Sr. Member
October 03, 2014, 03:38:45 PM  #13

Not sure if this helps anyone but CCN has a writeup out on IBLTs

https://www.cryptocoinsnews.com/bitcoin-in-bloom-how-iblts-allow-bitcoin-scale/
solex, Legendary
October 20, 2014, 10:55:08 PM  #14

Someone has picked up the torch  Smiley

Kalle Rosenbaum has written an IBLT test package in java, utilizing bitcoinj.

http://www.reddit.com/r/Bitcoin/comments/2jszdl/encodingdecoding_blocks_in_iblt_experimets_on_o1/

Quote

I've been working on an IBLT written in Java, as well as a project to encode and decode Bitcoin blocks using this IBLT. The main inspiration comes from Gavin Andresen's (/u/gavinandresen) excellent writeup on O(1) block propagation, https://gist.github.com/gavinandresen/e20c3b5a1d4b97f79ac2.
The projects are called ibltj (https://github.com/kallerosenbaum/ibltj) and bitcoin-iblt (https://github.com/kallerosenbaum/bitcoin-iblt). In bitcoin-iblt I've run some experiments to find a good value size and a good number of hash functions to use. Have a look at the results at https://github.com/kallerosenbaum/bitcoin-iblt/wiki/BlockStatsTest
I'm very interested in discussing this and listening to your comments. I also need some help specifying other tests to perform. I'm thinking it would be nice to have some kind of statement like "Given that there are no more than 100 differing transactions, I need 867 cells of size 270 B to have a <0.1% chance that decoding fails." Any thoughts on this?

The test bench is pretty capable. I can perform tests on arbitrarily large fake blocks constructed from real world transactions. I can modify the following parameters:

Number of transactions in the block
Number of differences between the sender and the receiver, both extra and absent transactions.
Number of hash functions
keySize
valueSize
keyHashSize
Number of cells


TierNolan, Legendary
October 21, 2014, 11:25:55 AM  #15

Does this mean that the block selection method gets locked in?

You tell the receiver how the txs in the block were selected.  However, it assumes selection by priority and fee per kB.

A block that doesn't use the standard transaction selection algorithm is at a disadvantage. 

This could be a good thing, since it would mean more consistency between miners.

I guess if block size is greater than demand for space, then it isn't a big deal, since the default rule is "all known transactions".

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
solex, Legendary
October 21, 2014, 10:34:45 PM (last edit: October 22, 2014, 05:12:31 AM)  #16

Quote from: TierNolan on October 21, 2014, 11:25:55 AM
[…] I guess if block size is greater than demand for space, then it isn't a big deal, since the default rule is "all known transactions".

Pretty much locked-in by consensus. And the conclusion in bold is important.

IBLT makes the Bitcoin protocol function more closely to how all its users expect it to work. Right now there are several thousand users who expect their transaction(s) to get confirmed in the next block, and thousands of nodes who have a very similar view of those transactions. Usually, miners are good citizens and include many of the pending transactions.

However, a block could be mined where all this consensus is ignored and the new block is full of transactions which the miner has "pulled out of his ass" (real-world business or not) with fees payable to himself, a scenario which makes the existing 1MB limit look like a safety-blanket. Apart from providing PoW security to older blocks, the new block is unhelpful, and many of them together is an attack.

Transactions from the consensus pool are in an IBLT, but they are canonically ordered and XOR'd in an offset manner such that a small percentage do not have to be known in advance; the rest do, because that is the only way to peel them off. This goes way beyond normal data compression because the receivers know most of the contents in advance; hence O(1) block propagation occurs, or at worst O(log n).

The block propagation delay cost of including all transactions will be low, so the incentive improves for miners to rake in as many fees as possible, getting a high percentage of transactions confirmed within the 10 minute average.

A rogue miner gaming IBLT by withholding a new one for a few seconds, and broadcasting his secret/spam transactions first, could be frustrated by requiring 20% of IBLT transactions to be earlier than the mean age of the new block and the previous one.
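That anti-gaming heuristic can be written down directly. Both the 20% fraction and the use of the mean transaction age of the last two blocks are a forum suggestion rather than any specified rule, and the names below are hypothetical:

```python
def passes_age_rule(tx_ages_s, mean_age_last_two_blocks_s, fraction=0.20):
    """Reject an IBLT block unless at least `fraction` of its
    transactions are older than the mean transaction age of the new
    block and the previous one, frustrating a miner who withholds the
    IBLT while pre-broadcasting secret/spam transactions.

    tx_ages_s: age in seconds of each transaction in the candidate block.
    """
    older = sum(1 for age in tx_ages_s if age > mean_age_last_two_blocks_s)
    return older >= fraction * len(tx_ages_s)
```

A block stuffed entirely with freshly created transactions would fail the check, while any honestly assembled block drawing on the shared mempool would pass comfortably.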

TierNolan, Legendary
October 24, 2014, 05:34:17 PM  #17

Why is it O(1) for block propagation?

If 1% of the transactions were different between 2 blocks, then the size of the IBLT would have to be 10 times larger to handle blocks that are 10 times larger.
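This scaling argument in numbers: an IBLT must be sized for the number of expected *differences*, and a cells-per-difference overhead of roughly 1.5 (for 3-4 hash functions) is a figure from the IBLT literature, used here as an assumption, as is the per-cell byte size.

```python
CELLS_PER_DIFF = 1.5   # decode overhead for 3-4 hash functions (assumed)
CELL_SIZE_BYTES = 64   # count + keySum + valueSum + checksum (illustrative)

def iblt_size_bytes(expected_diffs: int) -> int:
    """Approximate IBLT size needed to decode `expected_diffs` differences."""
    return int(CELLS_PER_DIFF * expected_diffs) * CELL_SIZE_BYTES

# At a fixed 1% divergence rate, a 10x larger block implies 10x more
# expected differences, so the IBLT itself must grow 10x as well:
#   iblt_size_bytes(100) * 10 == iblt_size_bytes(1000)
```

So the propagation cost is constant only if the *absolute* number of differences stays constant as blocks grow, which is exactly the assumption being questioned here.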

solex, Legendary
October 26, 2014, 10:24:18 PM (last edit: October 26, 2014, 11:12:34 PM)  #18

Quote from: TierNolan on October 24, 2014, 05:34:17 PM
Why is it O(1) for block propagation?

If 1% of the transactions were different between 2 blocks, then the size of the IBLT would have to be 10 times larger to handle blocks that are 10 times larger.

No one else is chiming in so this is what I think: it depends.

It depends upon how synchronized the unconfirmed-transaction mempools are. If all nodes are 100% synchronized then, yes, O(1) block propagation can occur, even when clearing the whole mempool. In reality mempools vary; by how much is unclear, and it would be nice if there were metrics on it.

A 1% difference is a working number based upon allowing 6 seconds for a new transaction to fully propagate and 600 seconds between blocks. So, perhaps tx mempools are mostly non-synchronized at the margins (new tx incoming, and old tx being discarded without confirmation), and the middle 98% of the tx mempools have only a 0.1% difference. Maybe an IBLT would encounter a 0.01% difference by always selecting the "best" 50% of the unconfirmed transactions.

Also, the starting point is not zero, it is 1MB blocks which can hold 1000 differences.
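The 1% working number above is just the ratio of the two time constants, under the stated assumptions (6 seconds for full transaction propagation, uniform transaction arrival over the block interval):

```python
TX_PROPAGATION_S = 6.0     # assumed time for a tx to reach (nearly) every node
BLOCK_INTERVAL_S = 600.0   # average time between blocks

# Transactions younger than TX_PROPAGATION_S when a block is found may be
# missing from some mempools; with uniform arrival that fraction is:
expected_divergence = TX_PROPAGATION_S / BLOCK_INTERVAL_S
print(expected_divergence)  # 0.01, i.e. the 1% working number
```

Any real measurement would also need to account for transactions that never propagate at all (non-standard or low-fee transactions some nodes drop), which is why actual mempool metrics would be so useful here.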

TierNolan, Legendary
October 26, 2014, 11:28:27 PM  #19

Quote from: solex
No one else is chiming in so this is what I think: it depends.

Yeah.  The whole system assumes that all miners use similar rules.

Quote
If all nodes are 100% synchronized, then, yes, O(1) block propagation can occur, even when clearing the whole mempool.

With 100% syncing, you can just send the header.

O(1) is a claim that as blocks get larger, the number of differences between two nodes' views does not increase.

Quote
A 1% difference is a working number based upon allowing 6 seconds for a new transaction to fully propagate and 600 seconds between blocks. So, perhaps tx mempools are mostly non-synchronized at the margins (new tx incoming, and old tx being discarded without confirmation), and the middle 98% of the tx mempools have only a 0.1% difference. Maybe an IBLT would encounter a 0.01% difference by always selecting the "best" 50% of the unconfirmed transactions.

I think miners could include the time their block was created as part of the info they give.  If each tx has a timestamp linked with it, then this would remove network latency as a problem.

If a block had only transactions that are at least 20 seconds old, then 5 seconds of latency wouldn't matter.

Getting this to work means that there needs to be a way to agree on what the timestamp is for each transaction though.
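The inclusion rule itself is simple to state in code; agreeing on each transaction's timestamp is the unsolved part flagged above, so `tx_timestamp` here is assumed to be already agreed upon, and the constants are the figures from this post:

```python
MIN_TX_AGE_S = 20.0    # suggested cutoff: only include txs at least this old
MAX_LATENCY_S = 5.0    # assumed worst-case relay latency, for context

def eligible_for_block(tx_timestamp: float, now: float) -> bool:
    """Include only transactions old enough that every peer must have
    seen them, so ordinary network latency can no longer cause mempool
    divergence over the block's contents (20 s cutoff vs 5 s latency)."""
    return now - tx_timestamp >= MIN_TX_AGE_S
```

With this filter, a transaction that arrived within the latency window simply waits for the next block rather than causing an IBLT decode failure.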

Quote
Also, the starting point is not zero, it is 1MB blocks which can hold 1000 differences.

I don't understand what you mean.

solex, Legendary
October 27, 2014, 04:02:42 AM (last edit: October 27, 2014, 05:44:00 AM)  #20

Fair points.

Quote from: TierNolan
Quote from: solex
Also, the starting point is not zero, it is 1MB blocks which can hold 1000 differences.

I don't understand what you mean.

What I mean is that because the differences increase relatively slowly, in practice the O(1) can hold for a while: fixed-size 1MB IBLT blocks remain the norm while disk block volumes grow through 5MB, 10MB, 15MB, which may take quite a few years. So it is an approximation, a reasonable functional description of what is happening in that time period. Of course, mathematically, the differences mount until, at some point, 2MB IBLT blocks are needed. Who knows, maybe sidechains are doing the heavy lifting by then.

It's still a hell of a lot better than O(n)  Smiley
 
