Bitcoin Forum
December 15, 2017, 11:17:29 AM
Author Topic: O(1) Block Propagation, IBLT  (Read 2202 times)
shorena
Legendary, Activity: 1400
ALL escrow is signed! https://keybase.io/verify
November 10, 2015, 12:07:48 PM
#1

It looks like there is no thread about IBLT yet. Since many others out there know more about it than I do, I will mainly collect and link information here.

AFAIK the initial suggestion was made by Gavin.

Quote
O(1) Block Propagation

The problem

Bitcoin miners want their newly-found blocks to propagate across the network as quickly as possible, because every millisecond of delay increases the chances that another block, found at about the same time, wins the "block race."

With today's p2p protocol, this gives miners an incentive to limit the number of transactions included in their blocks. A transaction must pay more fees to the miner than they are statistically likely to lose due to the increased chance of losing a block race, since new block announcements include all of the data for all of the transactions in the block. This is inefficient (transaction data is transmitted across the network twice, using twice as much bandwidth) and artificially increases transaction fees to be much higher than they need to be.

Full text here: https://gist.github.com/gavinandresen/e20c3b5a1d4b97f79ac2

Recently Kalle Rosenbaum did some work in the area, which includes a very nice infographic.

Quote
Solution

What if we could exploit the fact that all transactions in the block have probably already propagated across the network? Why do we have to send all the transactions in the block again? We don’t. We have a few things to consider:

  • 1. Order of transactions within the block matters
  • 2. There are probably differences between mempools on different nodes
  • 3. We don’t know the differences

Let’s ignore 1 for now. (We can fix that either by canonical ordering of transactions within blocks, or by attaching ordering information with the message.)

This is how we deal with 2 and 3. Buckle up!



Full text (multi part): http://popeller.io/index.php/2015/10/09/bitcoin-block-propagation-with-iblt-infographic/
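For readers new to the data structure, here is a minimal toy IBLT in Python. The parameters and the use of small integers as txid stand-ins are illustrative assumptions, not any deployed scheme: the sender inserts its block's txids, the receiver deletes its own mempool's txids, and "peeling" pure cells recovers the symmetric difference.

```python
import hashlib

class IBLT:
    """Toy invertible Bloom lookup table over integer keys (txid stand-ins).

    Each cell holds [count, key_sum, hash_sum]. Inserting XORs a key and its
    checksum into k distinct cells; a cell with count == +/-1 whose checksum
    matches its key_sum is "pure" and can be peeled out to recover a key.
    """

    def __init__(self, m=50, k=3):
        self.m, self.k = m, k
        self.cells = [[0, 0, 0] for _ in range(m)]

    def _h(self, tag, key):
        return int(hashlib.sha256(f"{tag}:{key}".encode()).hexdigest(), 16)

    def _indices(self, key):
        idx, i = [], 0
        while len(idx) < self.k:          # k distinct cells per key
            j = self._h(i, key) % self.m
            if j not in idx:
                idx.append(j)
            i += 1
        return idx

    def _update(self, key, sign):
        for j in self._indices(key):
            cell = self.cells[j]
            cell[0] += sign
            cell[1] ^= key
            cell[2] ^= self._h("chk", key)

    def insert(self, key):
        self._update(key, +1)

    def delete(self, key):
        self._update(key, -1)

    def decode(self):
        """Peel pure cells; return (keys only inserted, keys only deleted)."""
        added, removed = set(), set()
        progress = True
        while progress:
            progress = False
            for cell in self.cells:
                if cell[0] in (1, -1) and cell[2] == self._h("chk", cell[1]):
                    key, sign = cell[1], cell[0]
                    (added if sign == 1 else removed).add(key)
                    self._update(key, -sign)  # cancel out the peeled key
                    progress = True
        return added, removed

# Sender encodes its block; receiver subtracts its mempool. Only the
# difference must be recovered, regardless of how big the block is.
table = IBLT()
for txid in {101, 202, 303, 404}:   # sender's block
    table.insert(txid)
for txid in {101, 202, 505}:        # receiver's mempool
    table.delete(txid)
missing, extra = table.decode()
print(missing, extra)               # txs the receiver lacks / has extra
```

The shared transactions (101, 202 here) cancel out cell by cell, which is why the table can be sized to the expected difference rather than the whole block.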

monsterer
Legendary, Activity: 1008
November 11, 2015, 11:33:45 PM
#2

How big is the constant in the constant time propagation, compared to say, regular propagation models?
TierNolan
Legendary, Activity: 1148
November 12, 2015, 02:19:31 PM
#3

There is a limit to how small the table can be. For decoding to work, at least one cell must hold exactly one transaction piece (a count of +1 or -1) so that peeling can start.

This means that the table size is proportional to the difference between the memory pool at the receiver and the memory pool for the miner who created the block.

If all nodes would create almost the same block, but with a 1% difference, then the table size is still proportional to the block size.

For IBLTs to operate with O(1) efficiency, there needs to be some method for keeping all miners' memory pools closely aligned.

If 1MB blocks tended to have 50kB differences, is it reasonable to assume that 100MB blocks would also be aligned to within 50kB?
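The proportionality argument can be put into rough numbers. Every constant below is an illustrative assumption (cell layout, peeling overhead, average transaction size), not a figure from the thread:

```python
# Back-of-the-envelope: an IBLT scales with the mempool *difference*,
# not with the block size. All constants here are assumptions.
CELL_BYTES = 17        # e.g. 1-byte count + 8-byte keySum + 8-byte hashSum
CELLS_PER_ITEM = 1.5   # cells per differing tx for reliable peeling (k = 3)
AVG_TX_BYTES = 500     # assumed average transaction size

def iblt_wire_bytes(diff_bytes):
    """IBLT bytes needed to reconcile `diff_bytes` of differing txs."""
    diff_txs = diff_bytes / AVG_TX_BYTES
    return int(diff_txs * CELLS_PER_ITEM * CELL_BYTES)

# A 1 MB block with a 50 kB mempool difference needs only ~2.5 kB of IBLT,
# and a 100 MB block with the same 50 kB difference needs the same ~2.5 kB.
print(iblt_wire_bytes(50_000))
```

So the open question above is exactly the right one: O(1) holds only if the absolute difference stays flat as blocks grow.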

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
2112
Legendary, Activity: 1988
November 12, 2015, 03:12:52 PM
#4

How big is the constant in the constant time propagation, compared to say, regular propagation models?
I think your question isn't posed properly. The better way would be to observe that IBLT is O(1) in space and somewhere between O(n) and O(n²) (or maybe even O(n³)) in time. Then the proper way of asking the question would be: under what circumstances does IBLT produce a lower orphan probability than just sending the straight linear list of transactions (O(n) in space and O(n) in time)?

Please comment, critique, criticize or ridicule BIP 2112: https://bitcointalk.org/index.php?topic=54382.0
Long-term mining prognosis: https://bitcointalk.org/index.php?topic=91101.0
solex
Legendary, Activity: 1078
100 satoshis -> ISO code
November 16, 2015, 10:49:17 AM
#5

For IBLTs to operate with O(1) efficiency, there needs to be some method for keeping all miners' memory pools closely aligned.

The incentive is economic, as Gavin mentioned earlier.

RE: O(1) versus O(some-function-of-total-number-of-transactions):

Yes, it will depend on whether or not the number of differences goes up as the number of transactions goes up.

The incentives align so it is in everybody's best interest to make the differences as small as possible. I wouldn't be surprised if that causes innovations to drive the actual size to O(1) minus an increasing constant, as code gets better at predicting which transactions our peers do or don't have.


The gain actually achievable towards O(1) is not really important. What is important is seeing the implementation of some level of block propagation efficiency that is native to all nodes, i.e. not just elite miners on a sub-network.

gmaxwell
Moderator, Legendary, Activity: 2366
November 16, 2015, 10:25:08 PM
#6

What is important is seeing the implementation of some level of block propagation efficiency that is native to all nodes, i.e. not just elite miners on a sub-network.

I take it that is an attempted barb against the block relay network protocol, but I don't really get it--  Block relay network protocol is completely open, anyone can run it. Anyone can run their own servers, anyone can connect to the existing public servers.  And lots of people do, including even non-miners (not to mention small miners).

Bringing more efficient relay to the ordinary Bitcoin protocol is a reasonable goal-- absolutely, but it takes time to develop a good protocol for it. There have been something like 4 major revisions of the relay network protocol since the last major release of Bitcoin Core; supporting the protocol as an external gateway allows rapid development and much better safety and security, since bugs in the relay network client cannot easily break your node. We've learned a lot too: that a simple protocol gets almost all of the benefit, that TCP behavior matters a lot more than smaller optimizations, that even with moderate compression hashing becomes the bottleneck, etc. Meanwhile, there hasn't been much development of other schemes. (And much of what has been developed turned out to be much less efficient than the block relay network protocol.)

More protocols for transmitting the bitcoin consensus also increases the robustness of the system: if someone finds an attack to jam block broadcast via the classical protocol, blocks can still get through via the alternative protocol(s).

Also, relay efficiency itself is just one of MANY sources of delay in consensus, and we've been busily at work on the others in Bitcoin Core. Dropping or diminishing that work just to re-implement the still-in-flux protocol of the relay network client wouldn't be a good use of resources. Beyond that: if you disagree and think that particular piece is more urgently important, then why aren't you doing it? :)

Bitcoin will not be compromised
monsterer
Legendary, Activity: 1008
November 17, 2015, 09:55:04 AM
#7

I think your question isn't posed properly. The better way would be to observe that IBLT is O(1) in space and somewhere between O(n) and O(n²) (or maybe even O(n³)) in time. Then the proper way of asking the question would be: under what circumstances does IBLT produce a lower orphan probability than just sending the straight linear list of transactions (O(n) in space and O(n) in time)?

In bitcoin itself this is a rather moot point, with roughly one orphan produced per day (hardly inefficient). I can see how this might matter in a chain with a much lower block time, though.
shorena
Legendary, Activity: 1400
November 17, 2015, 10:36:14 AM
#8

I think your question isn't posed properly. The better way would be to observe that IBLT is O(1) in space and somewhere between O(n) and O(n²) (or maybe even O(n³)) in time. Then the proper way of asking the question would be: under what circumstances does IBLT produce a lower orphan probability than just sending the straight linear list of transactions (O(n) in space and O(n) in time)?

In bitcoin itself this is a rather moot point, with roughly one orphan produced per day (hardly inefficient). I can see how this might matter in a chain with a much lower block time, though.

It also matters with larger blocks. If miners see an increasing orphan rate because big blocks propagate more slowly through the network, they have an incentive to keep blocks small no matter the limit. This is a big topic especially for Chinese miners, who hold the majority of the network's hashpower.

gmaxwell
Moderator, Legendary, Activity: 2366
November 17, 2015, 10:49:07 PM
#9

This is a big topic especially for [...] who hold the majority of the network's hashpower.
Orphaning is a net benefit for those who hold more hashpower, as miners do not orphan themselves.

skang
Sr. Member, Activity: 452
from democracy to self-rule.
November 24, 2015, 11:27:39 PM
#10

But the 'O(1) block propagation time' proposal essentially merges the consensus and networking layers.

Notable problems:

1. Engineering practice keeps the networking layer and the consensus layer separate for architectural reasons; e.g. it is otherwise difficult to push networking upgrades.

2. The IBLT compression is just a way to check whether a datum (a transaction) is part of a dataset (the transactions in a mempool) when the collision rate is low. For this to work, the mempools must be almost the same. Syncing unordered data in a trustless distributed system is a hard problem. Some propose ordering them, but that promotes censorship and centralization.


"India is the guru of the nations, the physician of the human soul in its profounder maladies; she is destined once more to remould the life of the world and restore the peace of the human spirit.
But Swaraj is the necessary condition of her work and before she can do the work, she must fulfil the condition."
2112
Legendary, Activity: 1988
November 25, 2015, 02:56:46 AM
#11

But the 'O(1) block propagation time' proposal essentially merges the consensus and networking layers.

Notable problems:

1. Engineering practice keeps the networking layer and the consensus layer separate for architectural reasons; e.g. it is otherwise difficult to push networking upgrades.

2. The IBLT compression is just a way to check whether a datum (a transaction) is part of a dataset (the transactions in a mempool) when the collision rate is low. For this to work, the mempools must be almost the same. Syncing unordered data in a trustless distributed system is a hard problem. Some propose ordering them, but that promotes censorship and centralization.
That would be true only for very lame implementations.

A non-lame implementation will have a smarter choice of the constant size(s) of the bitmaps. The optimal (or near-optimal) size of the bitmaps will be a function of two parameters, either
S = F(number of public txns, number of private txns)
or
S = F(number of public txns, cumulative size of private txns)
where "public txns" are the pre-propagated transactions and "private txns" are the ones that were miner-private and weren't seen before on the P2P network.

There can be doubt about whether a transaction is public or private for two reasons:

a) the miner isn't sure whether the transaction was really well propagated over the P2P network

b) the miner isn't sure whether the transaction could hit a corner case of the consensus code

In either case it is always safe to locally reclassify it from 'public' to 'private' before compiling the particular IBLT bitmap.

It is not worth spending too much time and energy on maintaining a global shared idea of the best mempool. That is only a convoluted way to minimize the size of the IBLT bitmaps.
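The size function described above might be sketched as follows; the miss rate and peeling overhead are hypothetical constants chosen for the example, not values from this thread.

```python
import math

# Sketch of a size function S = F(public txns, private txns), per the
# post above. EXPECTED_MISS_RATE and CELLS_PER_DIFF are assumed values.
CELLS_PER_DIFF = 1.5       # IBLT cells per unresolved item (k = 3 hashing)
EXPECTED_MISS_RATE = 0.02  # assumed share of "public" txns a peer lacks

def iblt_cells(num_public, num_private):
    """Size the table for the private txns (always unknown to the receiver)
    plus the expected shortfall in the peer's view of the public txns."""
    expected_diff = num_private + num_public * EXPECTED_MISS_RATE
    return math.ceil(CELLS_PER_DIFF * expected_diff)

print(iblt_cells(2000, 5))   # 2000 pre-propagated txs, 5 miner-private txs
```

Reclassifying a doubtful transaction from 'public' to 'private' simply moves it into the term that is always budgeted for, which is why doing so is safe.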

gmaxwell
Moderator, Legendary, Activity: 2366
November 25, 2015, 07:08:28 AM
#12

Some propose ordering them, but that promotes censorship and centralization.

Ordering is a free parameter, and can be a deterministic function. Other than the topological constraint that a spend must follow its parent, the order of transactions in a block is completely arbitrary and without effect. Nothing about _ordering_ promotes censorship and centralization.

Synchronization might well punish those with inconsistent policies; but the mining centralization brought about by slow propagation prevents their existence completely.

There are ways to mostly eliminate inconsistency costs using more advanced (pre-forwarding) schemes.

If a block is mostly ordered as expected, but not quite, then the amount of information needed to transmit the difference is small in any case; there is no need to strongly bind the consensus rules to this.
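A deterministic ordering of this kind is easy to sketch. The scheme below (lexicographic by txid, subject to the spend-dependency constraint) is a hypothetical example for illustration, not any deployed rule:

```python
import heapq

def canonical_order(txs, parents):
    """Order txids lexicographically, subject to the topological constraint
    that a tx spending an in-block parent comes after that parent.
    `txs` is a set of txids; `parents[t]` lists t's in-block parents.
    (Hypothetical scheme for illustration, not a deployed rule.)"""
    indeg = {t: 0 for t in txs}
    children = {t: [] for t in txs}
    for t in txs:
        for p in parents.get(t, []):
            if p in txs:
                indeg[t] += 1
                children[p].append(t)
    ready = [t for t in txs if indeg[t] == 0]
    heapq.heapify(ready)
    out = []
    while ready:
        t = heapq.heappop(ready)  # smallest txid among spendable txs
        out.append(t)
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                heapq.heappush(ready, c)
    return out

# "bb" spends an output of "dd", so "dd" must precede it; everything else
# falls into plain lexicographic order that any node can reproduce.
print(canonical_order({"aa", "bb", "cc", "dd"}, {"bb": ["dd"]}))
```

Because every node can recompute this order from the transaction set alone, no ordering information needs to be transmitted at all.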

watashi-kokoto
Sr. Member, Activity: 266
December 02, 2015, 02:57:22 PM
#13

thank god someone is working on this