Author Topic: Blocksonly mode BW savings, the limits of efficient block xfer, and better relay  (Read 7108 times)
gmaxwell (OP)
February 26, 2016, 02:48:54 AM
Last edit: August 02, 2017, 10:32:42 PM by gmaxwell
#1

Bitcoin Core 0.12 introduced a new blocksonly setting.  When set to blocksonly a node behaves normally but sends and receives no loose transactions; instead it handles only complete blocks. There are many applications for nodes where only confirmed transactions are interesting, and a node which still verifies and forwards blocks still contributes to network health-- less, perhaps, than one that relays transactions, but it also consumes fewer resources to begin with. An additional downside is that such nodes don't get the latency advantage of signature caching, since every transaction they see is totally new to them-- so this isn't something miners should use.

How much less bandwidth does blocksonly use in practice?  I recently measured this using two techniques: once by instrumenting a node to measure bandwidth used for blocks vs all other traffic, and again by repeatedly running in both modes for a day and monitoring the host's total network usage. Both techniques gave effectively the same result.

How much is the savings?  Blocksonly reduced the node's bandwidth usage by 88%.

A significant implication of this is that any scheme for bandwidth reduction which works by using already-relayed transactions to reduce the size of transmitted blocks can _AT MOST_ reduce the overall bandwidth usage by 12%-- even assuming that the differential compression achieved an "infinity-fold" improvement. (An example of Amdahl's law.)
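To make the bound concrete, here's a quick Python sketch of that arithmetic (the 12% block share is the measurement above; the function name and structure are just for illustration):

Code:
# Blocks are ~12% of a normal node's traffic (measured above); relay is the rest.
BLOCK_SHARE = 0.12

def overall_savings(block_compression_factor):
    """Fraction of total bandwidth saved if block transfer alone is
    compressed by the given factor and relay traffic is left untouched."""
    return BLOCK_SHARE * (1.0 - 1.0 / block_compression_factor)

print(overall_savings(2.0))           # 0.06 -- a 2x better block transfer saves just 6% overall
print(overall_savings(float('inf')))  # 0.12 -- even 'infinity-fold' compression caps out at 12%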

Why does relay use so much bandwidth?  The fundamental reason is that relay in the Bitcoin protocol is linear in the number of transactions times the number of peers (i.e. bandwidth is a function of Transactions * Peers).  The protocol uses an 'efficient' INV scheme --- but this scheme only reduces the bandwidth by a constant factor: for every transaction a 36-byte INV is sent to (or received from) _every_ peer, and the full transaction is only sent once. Since the INV is a significant fraction of the transaction size (e.g. 36 bytes vs 300), every 10 or so peers use just as much bandwidth as sending the full transaction every time would.
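As a sketch of that linearity, using the rough figures above (36-byte INVs, a 300-byte transaction; the peer counts are illustrative):

Code:
INV_BYTES = 36
TX_BYTES = 300   # a typical transaction

def relay_bytes_per_tx(n_peers):
    # one INV exchanged with every peer, plus the full transaction sent once
    return INV_BYTES * n_peers + TX_BYTES

for peers in (8, 40, 120):
    print(peers, relay_bytes_per_tx(peers))
# 8 588    -- INV overhead already comparable to the transaction itself
# 40 1740  -- overhead is now almost 5x the transaction
# 120 4620 -- and it keeps growing linearly with connectivity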

For the subset of nodes where blocksonly meets their needs it's available now and gives pretty much the greatest possible bandwidth savings. Full stop. For blocks the INV process is much more efficient because an INV is tiny relative to a megabyte block. But what can we do for all the other cases?

In the literature there are schemes for quite efficient gossip, but there seems to be a trade-off between efficiency and attack robustness (for example, you could arrange nodes in a minimum spanning tree rooted at some super-node(s), which would be optimal for repeated bandwidth but insanely unreliable and vulnerable to attack). Even the simple INV scheme is vulnerable to attack: peers can delay transactions by INVing but then sending the data only after a long delay.  With ephemeral anonymous peers, attack resistance is very important to us-- but fortunately we don't need to achieve optimal efficiency.

One possible scheme I've been discussing (and working on) for a while is mempool reconciliation. My work here was originally motivated by the desire to avoid relay costs for transactions which are just going to be quickly replaced or fall below the relay threshold, but this can also be applied to drastically reduce relay costs.

The idea is this: at random intervals a node asks one of its peers for a sketch of the top X MBytes of its mempool. In its request it could also signal that it is interested only in transactions whose ancestor feerate is over some minimum threshold, along with information about the size and feerate of its own mempool.  The returned sketch is an IBLT (see this reddit post for an ELI5 explanation that I created for IBLT) of some size estimated by the sender based on the information sent by the requester.  Unlike the efficient block transmission IBLT proposals, this IBLT only transmits transaction IDs-- which avoids the fragmentation overhead needed when sending whole transactions.

The requester then attempts to reconstruct the IDs using the contents of its own mempool.  If it is unable to reconstruct all of the IDs, it requests another sketch from another random peer, with a different seed.

When the node has multiple IBLTs it can use the partial solutions from each of them to help solve the other ones.

As it learns new txids that it was previously unaware of, it fetches them via getdata. When it completes a reconstruction it can INV any missing transactions towards the peers that didn't have them.

(To avoid IBLT batching delays slowing transaction propagation too much, two peers can be elected to receive INVs for every transaction immediately, similar to the existing trickle-for-privacy scheme.)
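To make the reconciliation step concrete, here is a toy Python sketch of a txid-only IBLT. The cell count, the 3 buckets per entry, and the parity-based counter are all illustrative simplifications, not proposed parameters:

Code:
import hashlib

CELLS = 64   # illustrative; real sizing would come from the requester's hints
K = 3        # buckets per txid

def _h(data, tag):
    return int.from_bytes(hashlib.sha256(bytes([tag]) + data).digest()[:8], 'big')

class TxidIBLT:
    def __init__(self, seed=0):
        self.seed = seed
        self.parity = [0] * CELLS   # parity of entries per cell (toy simplification)
        self.idsum = [0] * CELLS    # XOR of txids (as integers) per cell
        self.chksum = [0] * CELLS   # XOR of per-txid checksums, marks 'pure' cells

    def _buckets(self, txid):
        return [_h(bytes([self.seed]) + txid, i) % CELLS for i in range(K)]

    def toggle(self, txid):
        """XOR a txid in -- or out; the same operation does both."""
        c = _h(txid, 255)
        for b in self._buckets(txid):
            self.parity[b] ^= 1
            self.idsum[b] ^= int.from_bytes(txid, 'big')
            self.chksum[b] ^= c

    def peel(self):
        """Pull txids out of 'pure' cells until stuck. Partial recovery is
        fine here: stragglers are retried against another peer's sketch."""
        recovered = []
        progress = True
        while progress:
            progress = False
            for b in range(CELLS):
                if self.parity[b] == 1:
                    txid = self.idsum[b].to_bytes(32, 'big')
                    if self.chksum[b] == _h(txid, 255):   # truly a single entry
                        recovered.append(txid)
                        self.toggle(txid)  # removing it may purify other cells
                        progress = True
        return recovered

The requester would toggle every txid from its own mempool into the sketch it received; whatever peels out afterwards is, with good probability, the symmetric difference between the two mempools.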

Most of the academic literature on IBLTs is concerned with parameter selections that minimize the asymptotic overhead (size of IBLT vs set difference) for perfect reconstruction of the set.  These parameters, however, result in quite high probabilities of complete failure.  In our case, being able to recover only some of the set is fine, both because mempools don't need to be completely consistent and because requests to other peers will help recover the missing entries.

A similar pattern happened (for the same reason) in the invention of fountain codes (one can approximately think of an IBLT as the dual of a fountain code). To achieve better odds of reconstruction, the degree of nodes in the graph (isomorphic to the number of hash buckets in the IBLT) should be randomly selected from a special distribution, instead of being a constant. The effect is that the mixture of node degrees reduces the probability that a decode gets stuck, because it is more likely that there will be some entries that have the optimal density.  For an IBLT this can be done by using the hash of the set member to pick the number of buckets for that member, just as it is used to pick which buckets are used.
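Mechanically, deriving a per-member bucket count from the member's own hash could look like the sketch below; the degrees and weights here are invented for illustration (mostly low degree with an occasional high one, loosely soliton-shaped), since picking the right distribution is exactly the open question that follows:

Code:
import hashlib

DEGREES = [2, 3, 4, 7]              # candidate bucket counts per set member
WEIGHTS = [0.50, 0.30, 0.15, 0.05]  # illustrative probabilities; sum to 1

def degree_for(txid):
    """Derive the member's bucket count from its own hash, so sender and
    receiver agree on it without any extra communication."""
    u = int.from_bytes(hashlib.sha256(b'degree' + txid).digest()[:4], 'big')
    x = u / 2.0**32   # uniform in [0, 1)
    cumulative = 0.0
    for d, w in zip(DEGREES, WEIGHTS):
        cumulative += w
        if x < cumulative:
            return d
    return DEGREES[-1]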

But right now I'm not aware of any research on what these optimal degree distributions would look like for IBLTs given assumptions around capacity ratios... so I think there is a fair amount of basic science that needs to be worked out in order to really nail down a good design.  I've been trying to make more progress here for a couple of months, but the toxic environment makes the grind a little difficult. I thought I would checkpoint my thinking here in the hope that someone will have more thoughts to contribute.

In any case, this scheme would avoid the quadratic-like behavior of relay bandwidth. It would let nodes trade off latency against relay overhead, and it would allow for a graceful way of handling the highest-priority transactions first-- transactions a dozen or more blocks deep in the mempool are not going to get mined any time soon, so if they see higher relay latency from being reconciled less frequently, that is no great harm. I believe it could do all this without substantial increases in vulnerability to attack. The same software infrastructure could also be used for bandwidth-minimized block transfer for nodes that do have mempools (otherwise blocksonly gives optimal bandwidth gains), though latency minimization is much better accomplished via techniques like Matt's efficient block relay protocol-- since the most important thing for minimizing latency is to minimize roundtrips.
Peter Todd
February 26, 2016, 04:09:30 AM
#2

Would these IBLT tx relay techniques also help defeat traffic analysis? Seems to me that they might, which would improve privacy against attackers like the NSA.

gmaxwell (OP)
February 26, 2016, 04:14:37 AM
#3

Would these IBLT tx relay techniques also help defeat traffic analysis? Seems to me that they might, which would improve privacy against attackers like the NSA.
Yes. They would frustrate traffic analysis, at least if 'constant size getdata' were also implemented-- assuming other mempool leaks were closed (like the mempool p2p message), and that the traffic were encrypted (e.g. over Tor or another encrypted transport) or the content were otherwise not visible to the attacker (e.g. for pretextual compliance with the law).


jl777
February 26, 2016, 04:17:04 AM
#4

Blocksonly, very cool and practical.

A way to reduce the size of the INV would be to just send the lowest 64 bits. Yes I know, the hash collisions, the hash collisions. But the odds of this are quite low; just have a combined hash of all the txids as well. Then a node can find out with pretty good probability whether another node knows about a tx it doesn't, and if all the tx are in common the combined hash can be used to verify this. In the rare event that all the 64-bit truncations match but the combined hash doesn't, the full txids could be exchanged.

A way to avoid the quadratic behavior is for each node to create a network topology map and have a mechanism to route the message to the likely nodes along the likely paths. Before you say, the privacy, the privacy: all this information is already known to the attacker, so what harm is there in honest nodes taking advantage of the local topology?

With a combined push protocol on top of a streamlined pull to fill in the gaps, I think the behavior would reduce from quadratic to something a lot more scalable.

In the extreme case of a single node that is connected to all 6000 nodes, it could broadcast a tx to all nodes at once. The problem with this is the redundant packets going to nodes that already have it. A naive approach would be to have a 6000-bit bitmap indicating which nodes have seen a packet and use that to optimize the broadcasting. Clearly this can be abused, but it feels like a good starting point for getting a tx broadcast with minimal redundancy.

Then with a protocol similar to the current INV process, the nodes can still verify they have all the tx their peers have, so the broadcast just seeds that many more peers with the tx and avoids local retransmission between nodes.

James

P.S. Broadcasting would of course point directly at the node that broadcast it, so if privacy of that is a concern, a privacy-centric node would need to have a different node broadcast the tx, and make sure the tx gets to that node in a secure way.

gmaxwell (OP)
February 26, 2016, 04:29:56 AM
#5

We care a lot about performance in the adversarial setting, and it is very cheap to compute 64-bit collisions, so a troublemaker could cheaply break that scheme, causing it to always fall back. There are ways around that... but an even better improvement is possible: I can tell you how to reduce the "ID" overhead to _zero_ for at least the first time that ID is sent (and if the set reconciliation were perfectly efficient, there really would only be a 'first' time):

Instead of using transaction hashes as identifiers, pass the transaction through a cryptographic permutation (e.g. a large-block blockcipher) and then use the initial N bits as the "ID"; when sending the txn, send it in encoded form and don't send that part.  Now the bandwidth used sending IDs at least counts towards sending the transactions. (I describe this in the network block coding article from a while back.)
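A toy sketch of those mechanics: the 4-round Feistel below merely stands in for a real large-block cipher (it is not a vetted construction), and the ID width and sample bytes are arbitrary:

Code:
import hashlib

ID_BYTES = 8   # illustrative ID width

def _F(half, r):
    # round function: XOF output sized to the half-block
    return hashlib.shake_256(bytes([r]) + half).digest(len(half))

def permute(data):
    """Toy unkeyed wide-block permutation (4-round Feistel); even-length input."""
    h = len(data) // 2
    L, R = data[:h], data[h:]
    for r in range(4):
        L, R = R, bytes(a ^ b for a, b in zip(L, _F(R, r)))
    return L + R

def unpermute(data):
    h = len(data) // 2
    L, R = data[:h], data[h:]
    for r in range(3, -1, -1):
        L, R = bytes(a ^ b for a, b in zip(R, _F(L, r))), L
    return L + R

# The first ID_BYTES of the permuted tx are its relay ID. A peer fetching
# the tx receives only the remainder, prepends the ID it already knows,
# and inverts -- so the ID costs no extra bandwidth the first time.
tx = b'some raw transaction bytes..'   # stand-in payload (even length)
enc = permute(tx)
relay_id, body = enc[:ID_BYTES], enc[ID_BYTES:]
assert unpermute(relay_id + body) == tx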

If one really must use IDs smaller than 160 bits, the ID could be seeded by a random value agreed between peers (best for security) to prevent collisions, but that would break multiparty reconstruction, so a recent block hash could be used instead (boo, still has collisions, but they're harder to compute).

Personally I don't think the small constant-factor savings from smaller IDs is worth the software complexity, especially the introduction of infrequently tested error conditions... The big gain here comes from getting rid of the quadratic behavior.  If that is successful, the bandwidth should be dominated by the size of the actual transactions themselves, and so even big improvements in the rest wouldn't matter much.

A way to avoid the quadratic behavior is for each node to create a network topology map and have a mechanism to route the message to the likely nodes along the likely paths.
But then you can place misbehaving nodes at the minimum cut of those graphs (or DoS the nodes in those positions)-- that is part of what I meant by efficient gossip being in some conflict with attack resistance.

Quote
With a combined push protocol on top of a streamlined pull to fill in the gaps, I think the behavior would reduce from quadratic to something a lot more scalable.

Random walks in random graphs tend to fill pretty efficiently, even with low fan-out.  I believe that simple trickling behavior combined with set reconciliation to escape cycles kills the quadratic behavior and can do so without introducing any new vulnerabilities or major slowdowns.
jl777
February 26, 2016, 04:42:32 AM
#6

Random walks in random graphs tend to fill pretty efficiently, even with low fan-out.  I believe that simple trickling behavior combined with set reconciliation to escape cycles kills the quadratic behavior and can do so without introducing any new vulnerabilities or major slowdowns.
I agree. Set reconciliation appears to be the key and nothing seems to beat a random graph when it comes to propagation and resilience.

I look forward to it, and if any specs already exist I can have a go at coding it.

James

jtoomim
February 26, 2016, 05:28:57 AM
#7

Blocks-only mode is a great new feature, and will be useful for many people; thanks to everyone who worked on it.

The protocol uses an 'efficient' INV scheme --- but this scheme only reduces the bandwidth by a constant factor: for every transaction a 36-byte INV is sent to (or received from) _every_ peer, and the full transaction is only sent once. Since the INV is a significant fraction of the transaction size (e.g. 36 bytes vs 300), every 10 or so peers use just as much bandwidth as sending the full transaction every time would.

Actually, the INV scheme is not very efficient at all. Once you take into account Bitcoin, TCP (including ACKs), IP, and ethernet overheads, each INV takes 193 bytes. That's 127 bytes for the INV message and 66 bytes for the ACK. All of this is for 32 bytes of payload, or an "efficiency" of 16.5% (i.e. 83.5% overhead). For a 400 byte transaction with 20 peers, this can result in 3860 bytes sent in INVs for only 400 bytes of actual data.

An improvement that I've been thinking about implementing (after Blocktorrent) is an option for batched INVs. Including the hashes for two txes per IP packet instead of one would increase the INV size to 229 bytes for 64 bytes of payload -- that is, you add 36 bytes to the packet for every 32 bytes of actual payload. This is a marginal efficiency of 88.8% for each hash after the first. This is *much* better.
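Checking that arithmetic in Python (byte counts exactly as above):

Code:
BASE = 193       # bytes on the wire for a one-hash INV (127 message + 66 ACK)
PER_EXTRA = 36   # each additional hash in the same packet

def efficiency(n_hashes):
    payload = 32 * n_hashes
    wire = BASE + PER_EXTRA * (n_hashes - 1)
    return payload / wire

print(efficiency(1))    # ~0.166: 16.5% for a lone INV
print(efficiency(2))    # ~0.279: a two-hash INV (229 bytes on the wire)
print(32 / PER_EXTRA)   # ~0.889: marginal efficiency of each hash after the first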

Waiting a short period of time to accumulate several hashes together and send them as a batched INV could easily reduce the traffic of running bitcoin nodes by a factor of 2, and possibly even more than that. However, such a technique would slow down the propagation of transactions across the bitcoin network, which might make some people unhappy. The ill effects could likely be mitigated by choosing a different batch size for each peer based on each peer's preferences. Each node could choose one or two peers to which they send INVs in batches of one or two, four more peers to which they send batches of two to four, and the rest in batches of four to eight, for example.

Anyway, I don't actually read Bitcointalk very much, so I probably won't notice if any of you reply. I'm going to post this message on bitcoin-dev as well. Please reply there if you have comments.

gmaxwell (OP)
February 26, 2016, 05:39:52 AM
Last edit: February 26, 2016, 05:52:30 AM by gmaxwell
#8

An improvement that I've been thinking about implementing (after Blocktorrent) is an option for batched INVs. Including the hashes for two txes per IP packet instead of one would increase the INV size to 229 bytes for 64 bytes of payload -- that is, you add 36 bytes to the packet for every 32 bytes of actual payload. This is a marginal efficiency of 88.8% for each hash after the first. This is *much* better.

Waiting a short period of time to accumulate several hashes together and send them as a batched INV could easily reduce the traffic of running bitcoin nodes by a factor of 2, and
Uh. Bitcoin has done this since the very early days. The batching was temporarily somewhat hobbled between 0.10 and 0.12 (especially when you had abusive, frequently pinging peers attached), but it is now fully functional again and manages to batch many transactions per INV pretty effectively. Turn on net message debugging and you'll see many INVs that are much larger than the minimum. The average batching interval (ignoring the trickle cut-through) is about 5 seconds-- and usually gets about 10 transactions per INV. My measurements were made with these fixes in effect; I expect the blocksonly savings would have been higher otherwise.


2016-02-26 05:47:08 sending: inv (1261 bytes) peer=33900
2016-02-26 05:47:08 sending: inv (109 bytes) peer=32460
2016-02-26 05:47:08 sending: inv (37 bytes) peer=34501
2016-02-26 05:47:08 sending: inv (217 bytes) peer=33897
2016-02-26 05:47:08 sending: inv (145 bytes) peer=41863
2016-02-26 05:47:08 sending: inv (37 bytes) peer=35725
2016-02-26 05:47:08 sending: inv (73 bytes) peer=20567
2016-02-26 05:47:08 sending: inv (289 bytes) peer=44703
2016-02-26 05:47:08 sending: inv (73 bytes) peer=13408
2016-02-26 05:47:09 sending: inv (649 bytes) peer=41279
2016-02-26 05:47:09 sending: inv (145 bytes) peer=42612
2016-02-26 05:47:09 sending: inv (325 bytes) peer=34525
2016-02-26 05:47:09 sending: inv (181 bytes) peer=41174
2016-02-26 05:47:09 sending: inv (469 bytes) peer=41460
2016-02-26 05:47:10 sending: inv (973 bytes) peer=133
2016-02-26 05:47:10 sending: inv (361 bytes) peer=20541
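You can read the batch sizes straight off those log lines: an inv payload is a compact-size count (one byte for fewer than 253 entries) plus 36 bytes per entry (4-byte type + 32-byte hash):

Code:
# sizes from the log above; (size - 1) / 36 = number of tx hashes in the INV
sizes = [1261, 109, 37, 217, 145, 37, 73, 289, 73, 649, 145, 325, 181, 469, 973, 361]
for s in sizes:
    assert (s - 1) % 36 == 0
    print(s, '->', (s - 1) // 36, 'hashes')
# e.g. 37 -> 1, 361 -> 10, 1261 -> 35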


Twiddling here doesn't change the asymptotic efficiency, though-- and that is what my post is about.

[I'm also somewhat surprised that you were unaware of this; one of the patches "classic" was talking about patching out was the one restoring the batching... due to a transaction deanonymization service (or troll) claiming it interfered with their operations.]
jtoomim
February 26, 2016, 07:29:53 AM
Last edit: February 26, 2016, 07:42:46 AM by jtoomim
#9

Uh. Bitcoin has done this since the very early days. The batching was temporarily somewhat hobbled between 0.10 and 0.12 (especially when you had any abusive frequently pinging peers attached)

I've been mostly using and working on 0.11-series versions, which very rarely send out INV batches greater than 2 or 3. In my examination, about 85% of the packets had a single hash in them. Nice to know this is one of the other improvements in 0.12.

sickpig
February 26, 2016, 08:51:32 AM
Last edit: February 26, 2016, 09:28:44 AM by sickpig
#10

How much less bandwidth does blocksonly use in practice?  I recently measured this using two techniques: once by instrumenting a node to measure bandwidth used for blocks vs all other traffic, and again by repeatedly running in both modes for a day and monitoring the host's total network usage. Both techniques gave effectively the same result.

How much is the savings?  Blocksonly reduced the node's bandwidth usage by 88%.

Do you care to share the raw data? I would like to independently verify your claim.

edit: the patch that introduced the code changes to measure the different types of bandwidth would be even better.

TierNolan
February 26, 2016, 11:42:24 AM
#11

Do you care to share the raw data? I would like to independently verify your claim.

It should be possible to do a rough back-of-the-envelope calculation.

If a node has 24 peers, then it will send/receive 24 INVs per transaction.  That is 36 * 24 = 864 bytes.  It also has to receive the transaction itself at, say, 400 bytes.

It will also receive the transaction in the full block for another 400 bytes (block INVs being negligible).

With blocks only, it only receives the 400 bytes.  That gives an efficiency of 400 / (400 + 864) = 0.31.

If INV messages were only 33% efficient (due to headers etc.), then that gives 400 / (400 + 864 * 3) = 0.13.

That is pretty close to the stated number.
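The same envelope in Python, reproducing those figures exactly as written (a rough model, not a measurement):

Code:
PEERS = 24
INV = 36
TX = 400

inv_bytes = INV * PEERS              # 864 bytes of INV traffic per transaction
print(TX / (TX + inv_bytes))         # ~0.31: blocksonly share of normal usage

# if each INV really costs ~3x its nominal size in headers and ACKs:
print(TX / (TX + inv_bytes * 3))     # ~0.13, close to the measured 88% saving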

I wonder if simple Bloom filters would be the way to go (rather than IBLT).  The sender sends a Bloom filter (sender_filter) containing all transactions that it added to its memory pool since the last (5 minute) broadcast.  The receiver creates an empty Bloom filter (receiver_filter) with the same parameters.  It finds the transactions in its memory pool that match sender_filter and adds them to receiver_filter.  If both filters end up matching, then it assumes that it has all the transactions.

If not, it XORs the two filters (=rle_filter) and sends the result as a reply.  Run length encoding could be used, since the XOR should be mostly strings of zeros.

The sending node then reconstructs receiver_filter using (sender_filter XOR rle_filter).  It sends on any transactions that were in sender_filter but aren't in receiver_filter.
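Here is a runnable toy of that exchange; the filter size, hash count, and sample txids are all illustrative:

Code:
import hashlib

M = 4096   # filter bits
K = 6      # hash functions

def _bits(item):
    for i in range(K):
        h = hashlib.sha256(bytes([i]) + item).digest()
        yield int.from_bytes(h[:4], 'big') % M

def make_filter(items):
    f = bytearray(M // 8)
    for it in items:
        for b in _bits(it):
            f[b // 8] |= 1 << (b % 8)
    return f

def matches(f, item):
    return all(f[b // 8] & (1 << (b % 8)) for b in _bits(item))

# Toy data: the receiver is missing two of the sender's new transactions.
sender_new = [bytes([i]) * 32 for i in range(5)]
receiver_pool = sender_new[:3] + [b'\xaa' * 32]

sender_filter = make_filter(sender_new)
receiver_filter = make_filter([t for t in receiver_pool
                               if matches(sender_filter, t)])

if receiver_filter != sender_filter:
    # the XOR is mostly zero bytes, so it run-length encodes well
    rle_filter = bytes(a ^ b for a, b in zip(sender_filter, receiver_filter))
    # the sender rebuilds receiver_filter and pushes whatever doesn't match it
    rebuilt = bytes(a ^ b for a, b in zip(sender_filter, rle_filter))
    missing = [t for t in sender_new if not matches(rebuilt, t)]
    print(len(missing))   # 2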

TooDumbForBitcoin
February 26, 2016, 02:47:53 PM
#12

 
...nice technical stuff....

[I'm also somewhat surprised that you were unaware of this; one of the patches "classic" was talking about patching out was the one restoring the batching... due to a transaction deanonymization service (or troll) claiming it interfered with their operations.]

Nice technical post, but why the drama at the end?



belcher
February 26, 2016, 03:30:50 PM
#13

 
...nice technical stuff....

[I'm also somewhat surprised that you were unaware of this; one of the patches "classic" was talking about patching out was the one restoring the batching... due to a transaction deanonymization service (or troll) claiming it interfered with their operations.]

Nice technical post, but why the drama at the end?

I'd say that an end to the drama will come from solving the underlying disagreements, not from one side just keeping quiet.

Efficiency improvements like blocksonly can help more of the bitcoin economy adopt full nodes. That can help stop any move by miners to force the real economy to follow a previously-invalid blockchain. The only reason Bitcoin Classic is even a danger is because mining is relatively centralized and much of the bitcoin economy uses SPV-security wallets that completely trust the miners and would simply go along with almost any hardfork.

With this in mind, you can see it's not possible to separate the technical discussion from the political fight over how bitcoin should scale.

Aside: I just went to search for that PR to bitcoin classic but couldn't find it. Has it been deleted?

TierNolan
February 26, 2016, 04:47:10 PM
#14

Having a look at the code, the effects of the flag are as follows:

  • Version message sets the relay flag to zero
  • Prints a warning if an inv message is received (and ignores the inv)
  • Prints a warning if a tx message is received (and ignores the tx)

The warnings are disabled for whitelisted peers if the whitelistrelay flag is set.

It doesn't count as misbehavior though.  A peer could send lots of transactions and inv messages and not get banned.

Transactions that are received from whitelisted peers are still relayed fully.

trout
February 26, 2016, 05:30:31 PM
Last edit: February 26, 2016, 05:44:48 PM by trout
#15

Since every peer has to receive every transaction eventually, it seems that Transactions * Peers is the absolute minimum number of transaction sends you'd need. Am I missing something trivial here?

As far as I understand, a peer will send every transaction to everyone he hasn't received it from. So this results in an additional multiplicative factor of something like k/2, where k is the average number of neighbours of a node. (Or is it much more than that?)

Is it this factor that you'd like to get rid of, gmaxwell?

gmaxwell (OP)
February 26, 2016, 05:47:29 PM
#16

The warnings are disabled for whitelisted peers if the whitelistrelay flag is set.
Right. Unfortunately, Bitcoinj sends unsolicited transactions. The developers of BTCD have been complaining about that for a while, and I'd like to stop supporting unsolicited txns... but while the behavior is common we didn't want to penalize it as misbehavior.

Do you care to share the raw data? I would like to independently verify your claim.

I have data from the confirmation test run:

        RX packets 169821144  bytes 83997523200 (78.2 GiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 269163479  bytes 293982867649 (273.7 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        RX packets 171751847  bytes 84478658144 (78.6 GiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 271892507  bytes 296796582415 (276.4 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
and
        RX packets 175622574  bytes 85220192328 (79.3 GiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 277631356  bytes 303320529705 (282.4 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        RX packets 175824690  bytes 85283702303 (79.4 GiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 277920978  bytes 303653216190 (282.7 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

These were 6-hour runs, with the node pre-primed for an hour before the run began.

As TierNolan points out, the results aren't surprising-- if you go look in my reddit comments you'll see I computed similar figures from first principles before measuring.

Nice technical post, but why the drama at the end?
[...]
Aside: I just went to search for that PR to bitcoin classic but couldn't find it. Has it been deleted?

Took me a bit to find it: https://github.com/bitcoinclassic/bitcoinclassic/pull/16 -- I'm not trying to stir drama; I was earnestly surprised, as jtoomim commented extensively there.

Since every peer has to receive every transaction eventually, it seems that Transactions * Peers is the absolute minimum number of transaction sends you'd need. Am I missing something trivial here?
You are!

Imagine if the network were wired such that a message relayed along connection 0 on each peer would form a Hamiltonian cycle and reach every node in the network. Thus, each peer would receive and send each transaction exactly once. This shows the bound cannot be Transactions * Peers.

Now, this is obviously pretty fragile-- and our network doesn't form a nice complete cycle.  So if you want to check whether your peers actually have all the transactions you have, does that reintroduce Transactions * Peers (* the ratio of txid size to tx size) traffic?  No.  Set reconciliation lets you communicate a set difference with size proportional to the difference; see my ELI5 description of a computationally efficient set reconciliation scheme:  https://www.reddit.com/r/btc/comments/43iup7/mike_hearn_implemented_a_test_version_of_thin/czirz2p

The difference is, hopefully, of almost zero size-- so the communication is small even if the number of transactions is large.

Quote
As far as I understand, a peer will send every transaction to everyone he hasn't received it from. So this results in an additional multiplicative factor of something like k/2, where k is the average number of neighbours of a node.

Is it this factor that you'd like to get rid of, gmaxwell?
Yes, it is.  Technically it sends an INV (a message with txids), not the whole transaction, though this is only a constant-factor improvement.
trout
February 26, 2016, 07:14:26 PM
#17


Since every peer has to receive every transaction eventually, it seems that Transactions * Peers is the absolute minimum number of transaction sends you'd need. Am I missing something trivial here?
You are!

Imagine if the network were wired such that a message relayed along connection 0 on each peer would form a Hamiltonian cycle and reach every node in the network. Thus, each peer would receive and send each transaction exactly once. This shows the bound cannot be Transactions * Peers.
OK, I see-- I was just talking about a different quantity. In your ideal example each transaction is sent Peers times from someone to someone (which is what I was talking about), though indeed each node only sends it once.

Anyway, with this settled: are you saying that currently (without set reconciliation) each node sends each transaction roughly Peers times (where Peers is the size of the whole network, not just the number of neighbours)? That's... pretty bad indeed.
TierNolan
February 26, 2016, 08:27:04 PM
#18

Since every peer has to receive every transaction eventually, it seems that Transactions * Peers is the absolute minimum number of transaction sends you'd need. Am I missing something trivial here?

On average each node should receive each transaction once and send it onward once.  Better connected nodes would tend to have more peers who haven't seen the transaction and so request it.

That is where the suggestion to pick 2 nodes to forward it to comes from.  If each node immediately sends it to 2 others, pretty soon the whole network knows about the transaction.  It doesn't matter how many you are connected to, you only really need to forward it to 2.

The more efficient method could be used later to detect peers who didn't get it.

jl777
February 27, 2016, 12:17:33 AM
#19

Since every peer has to receive every transaction eventually, it seems that Transactions * Peers is the absolute minimum number of transaction sends you'd need. Am I missing something trivial here?

On average each node should receive each transaction once and send it onward once.  Better connected nodes would tend to have more peers who haven't seen the transaction and so request it.

That is where the suggestion to pick 2 nodes to forward it to comes from.  If each node immediately sends it to 2 others, pretty soon the whole network knows about the transaction.  It doesn't matter how many you are connected to, you only really need to forward it to 2.

The more efficient method could be used later to detect peers who didn't get it.
With a fanout of 2 and 6000 nodes, wouldn't it be 10 to 12 hops on average? Assuming an average ping time of 300 millis, plus latency of ?, I guess it is ~5 seconds. Pretty good.

If a bit of privacy is sacrificed and it is started by sending to 16 peers, I think that would reduce things by 3 hops or so. Probably not worth doing, but it would dramatically reduce the chances of a propagation "fizzling" out. That could also be avoided by retransmitting to two other peers if the tx is not seen in the network after 15 seconds.
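A rough doubling model agrees (illustrative arithmetic only):

Code:
import math

NODES = 6000

# fanout 2: each hop roughly doubles the number of nodes reached
print(math.log2(NODES))            # ~12.6 hops to cover the network

# seeding the first hop with 16 peers instead of 2 skips ahead ~3 hops
print(1 + math.log2(NODES / 16))   # ~9.6 hops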

James

Richy_T
March 14, 2016, 10:37:02 PM
#20

With a fanout of 2 and 6000 nodes, wouldn't it be 10 to 12 hops on average? Assuming an average ping time of 300 millis, plus latency of ?, I guess it is ~5 seconds. Pretty good.

If a bit of privacy is sacrificed and it is started by sending to 16 peers, I think that would reduce things by 3 hops or so. Probably not worth doing, but it would dramatically reduce the chances of a propagation "fizzling" out. That could also be avoided by retransmitting to two other peers if the tx is not seen in the network after 15 seconds.

James

Hmm, yeah. Well-connected nodes, or nodes close to (or actually being) the originating node, could make more connections.

I'm put in mind of a neural structure where connections that are well used get stronger. If node B always has the transaction when I try to send it to it, maybe I back off and allow it to try to send me transactions first.
