Bitcoin Forum
Author Topic: O(1) block propagation  (Read 5566 times)

Gavin Andresen (Legendary, Chief Scientist)
October 27, 2014, 01:27:22 PM  #21

RE: O(1) versus O(some-function-of-total-number-of-transactions):

Yes, it will depend on whether or not the number of differences goes up as the number of transactions goes up.

The incentives align so it is in everybody's best interest to make the differences as small as possible. I wouldn't be surprised if that causes innovations to drive the actual size to O(1) minus an increasing constant, as code gets better at predicting which transactions our peers do or don't have.

TierNolan (Legendary)
October 27, 2014, 04:03:52 PM  #22

Quote
Fixed size 1MB IBLT blocks being normal while disk block volumes increase through 5MB, 10MB, 15MB, which may take quite a few years.

Ahh, good point. 

As time passes, the odds of a decode failure increase exponentially, and then the fixed IBLT size has to be stepped up.

Quote
It's still a hell of a lot better than O(n) :)

It would still increase with the block size, but in steps.

It just increases network efficiency and acts as an incentive to keep memory pool rules identical between miners.

kallerosenbaum (Newbie)
November 05, 2014, 08:49:04 PM  #23

Quote
Fair points.

Also, the starting point is not zero, it is 1MB blocks which can hold 1000 differences.

Quote
I don't understand what you mean.

What I mean there is that because the differences increase relatively slowly, in practice the O(1) can hold for a while. Fixed size 1MB IBLT blocks being normal while disk block volumes increase through 5MB, 10MB, 15MB, which may take quite a few years. So it is an approximation, a reasonable functional description of what is happening in that time period. Of course, mathematically, the differences mount until, at some point, 2MB blocks are needed. Who knows, maybe sidechains are doing heavy lifting by then.

It's still a hell of a lot better than O(n) :)
 

I think that how the differences between mempools change over time is irrelevant. What is relevant is that the size of the current IBLT being encoded is not affected by how many transactions I put into it, given that the receiver has enough information to decode it. When the new block is found, we'll start over and make new guesstimates on the differences between mempools. Basically, it's O(1) with respect to the transactions within the current block being worked on, regardless of what the next or previous block might look like.

Also on a side note, it's up to the sender to create a large enough IBLT. She might want to gamble and make the IBLT really small. That would make the propagation faster, but the risk of a decoding failure becomes higher.
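
To make that trade-off concrete, here is a minimal sketch (illustrative C++, not code from this thread; the overhead factors are assumptions) of a sender sizing the IBLT from its guess of the mempool difference rather than from the block's transaction count:

Code:
// Minimal sketch: the sender sizes the IBLT from a *guess* of the mempool
// difference, independent of how many transactions the block itself holds.
// The overhead factors below are assumptions for illustration only.
#include <cstddef>
#include <iostream>

// Rough rule of thumb: with 3 hash functions an IBLT decodes reliably when
// the cell count is about 1.5x the number of differing entries.
std::size_t cellsFor(std::size_t expectedDiffs, double overhead) {
    return static_cast<std::size_t>(expectedDiffs * overhead) + 1;
}

int main() {
    std::size_t guessedDiffs = 200;                        // sender's estimate of the difference

    std::size_t safe       = cellsFor(guessedDiffs, 1.5);  // likely to decode
    std::size_t aggressive = cellsFor(guessedDiffs, 1.1);  // smaller and faster to send,
                                                           // but higher decode-failure risk
    std::cout << "safe: " << safe << " cells, aggressive: "
              << aggressive << " cells\n";
    return 0;
}
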
TierNolan (Legendary)
November 05, 2014, 09:01:53 PM  #24

Quote
I think that how the differences between mempools change over time is irrelevant. What is relevant is that the size of the current IBLT being encoded is not affected by how many transactions I put into it, given that the receiver has enough information to decode it.

The size of the IBLT is proportional to the difference between the two memory pools.  If each pool holds 1% of transactions that the other lacks, the symmetric difference is about 2% of the pool, so the IBLT size is at least 2% of the memory pool size.

This means as memory pools get bigger (as blocks increase), the IBLT increases.

What Gavin is saying is that as blocks get bigger, more effort will be spent syncing the memory pools.

This could mean that the IBLT will grow at a slower rate than the blocks (but still grow).
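
A back-of-the-envelope version of that arithmetic, with illustrative transaction counts and an assumed ~1.5x decode overhead:

Code:
// Back-of-the-envelope sketch: a fixed 1% mempool mismatch still means the
// IBLT grows as the mempool grows, because the symmetric difference grows.
// Transaction counts and the 1.5x overhead are illustrative assumptions.
#include <cstddef>
#include <iostream>

int main() {
    for (std::size_t poolTxs : {3000u, 15000u, 45000u}) {   // very roughly 1MB, 5MB, 15MB of txs
        std::size_t missingEachSide = poolTxs / 100;         // 1% unknown to the peer
        std::size_t symmetricDiff   = 2 * missingEachSide;   // entries the IBLT must recover
        std::size_t cells           = symmetricDiff * 3 / 2; // ~1.5x overhead for decoding
        std::cout << poolTxs << " pool txs -> " << symmetricDiff
                  << " differing entries -> ~" << cells << " IBLT cells\n";
    }
    return 0;
}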

solex (Legendary)
November 19, 2014, 09:37:56 PM (last edit: November 20, 2014, 12:27:11 AM)  #25

I can't get unexcited about IBLT, so I just want to jot down my recent thoughts here. Feedback always welcome :-)

Since only miners create IBLT blocks, but all nodes would need to process them, it seems the best implementation method is to split the IBLT software into two parts:

a) IBLT encode
b) IBLT decode (set reconciliation)

The idea is to get the decode software widely adopted first, by giving it a head start. It gets included with version 0.n, while the encode logic is delayed until version 0.n+x, whatever later version is deemed appropriate, i.e. when an arbitrarily large majority of nodes have the decode software.

A mining consensus would also be needed, probably requiring a new block version to indicate that decoding is supported, and encoded blocks only being acceptable when a super-majority exists for that block version.

If the decoding is as generic as possible, such that the basic element dimensions (below) are carried as parameters at the start of each IBLT, then the optimum values do not need to be agreed in advance, and may be in flux for a long time (a rough sketch of such a header follows the list):
keySize
valueSize
keyHashSize
Number of cells
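
A minimal sketch of what such a self-describing layout could look like (struct names and field widths are only illustrative assumptions, not an agreed format):

Code:
// Minimal sketch of a self-describing IBLT: the dimensions travel in a small
// header in front of the cells, so a generic decoder needs no values agreed
// in advance.  Field names and widths are illustrative assumptions.
#include <cstdint>
#include <iostream>
#include <vector>

struct IbltHeader {
    uint8_t  keySize;      // bytes of transaction-id key per entry
    uint8_t  valueSize;    // bytes of transaction data per cell
    uint8_t  keyHashSize;  // bytes of the per-cell key checksum
    uint32_t cellCount;    // number of cells that follow
};

struct IbltCell {
    int32_t              count;      // signed so that IBLT subtraction works
    std::vector<uint8_t> keySum;     // XOR of all keys hashed into this cell
    std::vector<uint8_t> keyHashSum; // XOR of their key checksums
    std::vector<uint8_t> valueSum;   // XOR of their (padded) values
};

int main() {
    IbltHeader hdr{8, 64, 4, 3000};  // values can keep changing per block
    std::cout << "cells: " << hdr.cellCount
              << ", value bytes per cell: " << int(hdr.valueSize) << '\n';
    return 0;
}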

The nice thing is that no miner is forced by the super-majority to change its own block propagation method. Hypothetically, if, for example, only 10% of the hashing power ever wanted to propagate new blocks via IBLT, that would work fine.

instagibbs (Member)
November 20, 2014, 02:31:18 PM  #26


Quote
Since only miners create IBLT blocks, but all nodes would need to process them, it seems the best implementation method is to split the IBLT software into two parts:

I might be wrong, but in the short term it's not as big a deal for regular nodes to get this IBLT stuff. Full nodes care much less about ~15 seconds of latency.

Long term it might make sense.

Gavin Andresen (Legendary, Chief Scientist)
November 20, 2014, 08:45:23 PM  #27

If you want to work on IBLT stuff...

... start with Matt's fast-relay code: https://github.com/TheBlueMatt/RelayNode

That is an "I know what I've already told my peers, so I won't tell them again" optimization for transaction data. I haven't tried to figure out how far that already-written-and-running code lets us scale, but I think that would be the first step.

Once you understand what Matt is doing, figure out how an IBLT can optimize further and eliminate sending even the lists of transaction IDs. The first step there is to figure out what software miners are using to build their blocks, and how hard it would be to get that software to do the IBLT thing (have similar policies for selecting transactions, and identical policies for ordering transactions inside the block).
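
A minimal sketch of the "identical ordering policy" part; sorting by txid is only an illustration here, since a real rule would also have to keep transactions after any in-block parents they spend:

Code:
// Minimal sketch of a canonical in-block ordering rule: if every miner sorts
// a block's transactions the same way (here, simply by txid), a peer that
// recovers the transaction set from an IBLT can rebuild the exact block
// layout.  A real rule would also respect in-block dependency order.
#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

using TxId = std::array<uint8_t, 32>;

struct Tx {
    TxId                 txid;
    std::vector<uint8_t> raw;  // serialized transaction bytes
};

void canonicalOrder(std::vector<Tx>& blockTxs) {
    std::sort(blockTxs.begin(), blockTxs.end(),
              [](const Tx& a, const Tx& b) { return a.txid < b.txid; });
}

int main() {
    std::vector<Tx> txs(3);
    txs[0].txid.fill(0xCC);
    txs[1].txid.fill(0xAA);
    txs[2].txid.fill(0xBB);
    canonicalOrder(txs);   // now ordered 0xAA.., 0xBB.., 0xCC..
    return 0;
}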


solex (Legendary)
November 21, 2014, 12:56:35 AM  #28

Thanks for the info, Gavin. I will learn about how the relay nodes work and see whether I can add value. There are, of course, a bunch of reasons why this is the best approach.

kallerosenbaum (Newbie)
November 24, 2014, 06:22:35 PM  #29

I have done some statistical tests on encoding/decoding blocks in IBLT. Apart from my previous tests that try to find reasonable values for valueSize and hashFunctionCount, I have also done tests that plot cellCount vs failure probability and diffCount vs failure probability (a toy version of that kind of simulation is sketched after the conclusions below). Please have a look at https://github.com/kallerosenbaum/bitcoin-iblt/wiki.

Conclusions:

1. 64 bytes looks like a good valueSize
2. Space savings seem to increase as diffCount increases. This is of course based on the assumptions explained in the wiki.
3. k=3 seems to be the best hashFunctionCount no matter how you look at it.
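
Not the actual test harness, but a minimal sketch of the kind of Monte Carlo run behind a cellCount-vs-failure plot, assuming k=3 and ignoring valueSize and checksum details:

Code:
// Minimal sketch of a cellCount-vs-failure simulation: insert diffCount items
// into k=3 of cellCount cells, then "peel" cells containing a single item and
// count how often peeling gets stuck.  Pure counting model; valueSize and the
// key checksums are ignored, so the numbers are only indicative.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <random>
#include <vector>

bool decodes(std::size_t diffCount, std::size_t cellCount, std::mt19937& rng) {
    const int k = 3;
    std::uniform_int_distribution<std::size_t> pick(0, cellCount - 1);
    std::vector<std::vector<std::size_t>> itemCells(diffCount);  // cells used by each item
    std::vector<std::vector<std::size_t>> cellItems(cellCount);  // items present in each cell
    for (std::size_t i = 0; i < diffCount; ++i)
        for (int j = 0; j < k; ++j) {
            std::size_t c = pick(rng);
            itemCells[i].push_back(c);
            cellItems[c].push_back(i);
        }
    std::size_t left = diffCount;
    bool progress = true;
    while (progress && left > 0) {
        progress = false;
        for (std::size_t c = 0; c < cellCount; ++c) {
            if (cellItems[c].size() != 1) continue;       // only singleton cells peel
            std::size_t i = cellItems[c].front();
            for (std::size_t cc : itemCells[i]) {         // remove the item everywhere
                auto& items = cellItems[cc];
                items.erase(std::find(items.begin(), items.end(), i));
            }
            --left;
            progress = true;
        }
    }
    return left == 0;   // full peel == successful decode
}

int main() {
    std::mt19937 rng(42);
    const std::size_t diffCount = 100, trials = 1000;
    for (std::size_t cells : {120u, 140u, 170u, 200u}) {
        std::size_t fails = 0;
        for (std::size_t t = 0; t < trials; ++t)
            if (!decodes(diffCount, cells, rng)) ++fails;
        std::cout << cells << " cells: failure rate "
                  << double(fails) / trials << '\n';
    }
    return 0;
}
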
solex (Legendary)
November 26, 2014, 07:16:16 PM  #30

Indeed. Your results look very good.

However, the first real-world IBLT implementation that Gavin wants to consider leverages Matt Corallo's relay node software, which already achieves nearly an order of magnitude reduction in propagated block sizes for nodes which use this service.

So the data is no longer raw transactions, but smaller transaction hashes.
https://github.com/TheBlueMatt/RelayNode/blob/master/c%2B%2B/server.cpp
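
Roughly the arithmetic behind that "order of magnitude", with assumed averages rather than measured figures:

Code:
// Back-of-the-envelope for hashes-vs-raw-transactions, with assumed averages:
// sending 32-byte txids instead of full transactions cuts the propagated data
// by roughly the average-transaction-size / hash-size ratio.
#include <cstddef>
#include <iostream>

int main() {
    const std::size_t avgTxBytes  = 300;   // assumed average transaction size
    const std::size_t hashBytes   = 32;    // a full txid; relay code could truncate further
    const std::size_t txsPerBlock = 3000;  // roughly a full 1MB block (assumption)

    std::cout << "raw transactions: " << avgTxBytes * txsPerBlock << " bytes\n"
              << "txids only:       " << hashBytes  * txsPerBlock << " bytes\n"
              << "reduction:        " << double(avgTxBytes) / hashBytes << "x\n";
    return 0;
}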

