Bitcoin Forum

Bitcoin => Development & Technical Discussion => Topic started by: Mike Hearn on March 03, 2011, 06:38:32 PM



Title: Problems with send buffer limits in 0.3.20
Post by: Mike Hearn on March 03, 2011, 06:38:32 PM
0.3.20 has a new feature that disconnects nodes if vSend gets larger than 256kb, controllable via a flag.

This has the unfortunate bug that it's no longer possible to download the production block chain. I suggest people do not upgrade to 0.3.20 for now, unless you run with -maxsendbuffer=2000 as a flag.

It dies when sending the block chain between blocks 51501 and 52000 on the production network. The problem is this block:

http://blockexplorer.com/block/00000000186a147b91a88d37360cf3a525ec5f61c1101cc42da3b67fcdd5b5f8

It's 200kb. During block chain download, this block plus the others in that run of 500 blocks pushes vSend over 256kb, resulting in a disconnect.

I feel like send limiting is perhaps not that essential. If we really see BitCoin nodes OOMing because they tried to send data too fast, that implies there's a bug elsewhere. For instance, getdata requests have a size limit for exactly this kind of reason (it might be too large, but we can tweak that).
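
(For context, a getdata message is just a list of inventory entries, and the limit Mike mentions is a bound on how many entries one message may carry. A minimal illustrative sketch follows; the struct and the constant are assumptions for illustration, not the actual 0.3.20 source:)

Code:
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified inventory entry as carried in a getdata message.
struct CInv {
    int type;
    uint8_t hash[32];
};

// Illustrative cap; the actual 0.3.20 constant is not quoted in this thread.
const size_t MAX_GETDATA_ENTRIES = 50000;

// Refuse oversized getdata requests up front, so a single message cannot
// force an unbounded amount of reply data to be queued for sending.
bool AcceptGetData(const std::vector<CInv>& vInv) {
    return vInv.size() <= MAX_GETDATA_ENTRIES;
}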


Title: Re: Problems with send buffer limits in 0.3.20
Post by: jgarzik on March 03, 2011, 06:58:35 PM
I feel like send limiting is perhaps not that essential. If we really see BitCoin nodes OOMing because they tried to send data too fast, that implies there's a bug elsewhere. For instance, getdata requests have a size limit for exactly this kind of reason (it might be too large, but we can tweak that).

Ultimately, the goal is flow control.  Your OS has a buffer for outgoing data.  When that gets full, we need to stop sending more data, and wait for empty buffer space.

The worst-case buffer size of a hacker is zero.  The worst-case "normal" buffer size is 8k.

Since bitcoin needs to send more data than that in a single message, an implementation must choose:  (a) store a pointer to the middle of the object you were sending, for later resumption of the transfer, or (b) provide an application buffer that stores a copy of all outgoing data until it is transmitted.  Satoshi chose (b) but placed no limits on the size of that outgoing data buffer.
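
(As an illustration of option (b), here is a minimal sketch of how such a capped application buffer behaves. The names echo the client's vSend, but this is a simplification under assumptions, not the actual implementation:)

Code:
#include <sys/socket.h>
#include <cstddef>
#include <string>

// Per-peer state, in the spirit of the client's vSend (simplified sketch).
struct Peer {
    int socket;                 // non-blocking TCP socket
    std::string vSend;          // application-level outgoing buffer, option (b)
    bool fDisconnect = false;
};

// The 0.3.20-era cap discussed in this thread (exact constant approximate).
const size_t MAX_SEND_BUFFER = 256 * 1000;

// Queue a serialized message, enforcing the 0.3.20-style cap.
void PushMessage(Peer& peer, const std::string& msg) {
    peer.vSend += msg;
    if (peer.vSend.size() > MAX_SEND_BUFFER)
        peer.fDisconnect = true;            // the disconnect Mike is hitting
}

// Called from the socket loop: hand the OS as much as it will accept now.
void FlushSend(Peer& peer) {
    if (peer.vSend.empty())
        return;
    ssize_t n = send(peer.socket, peer.vSend.data(), peer.vSend.size(), MSG_DONTWAIT);
    if (n > 0)
        peer.vSend.erase(0, n);             // keep the unsent tail for later
}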

It does sound like the limits are tighter than they should be.



Title: Re: Problems with send buffer limits in 0.3.20
Post by: theymos on March 03, 2011, 07:04:08 PM
That's pretty bad. Good thing you caught this before everyone upgraded and new nodes were no longer able to connect.


Title: Re: Problems with send buffer limits in 0.3.20
Post by: Cusipzzz on March 03, 2011, 07:05:44 PM
so much spam in that block, sigh.


Title: Re: Problems with send buffer limits in 0.3.20
Post by: Gavin Andresen on March 03, 2011, 07:57:55 PM
Oops.

My fault-- I DID test downloading the entire production block chain with a 0.3.20 client, but I wasn't careful to make sure I downloaded it from another 0.3.20 client.

Workaround:  if you are running 0.3.20, run with -maxsendbuffer=10000


Title: Re: Problems with send buffer limits in 0.3.20
Post by: theymos on March 03, 2011, 08:43:47 PM
Couldn't peers theoretically need to send 500 MB in response to a getblocks request? The limit should perhaps be MAX_BLOCK_SIZE*500.


Title: Re: Problems with send buffer limits in 0.3.20
Post by: Gavin Andresen on March 03, 2011, 09:03:30 PM
Couldn't peers theoretically need to send 500 MB in response to a getblocks request? The limit should perhaps be MAX_BLOCK_SIZE*500.

500MB per connection times 100 connections would be 50 GB.  That re-opens the door to a memory exhaustion denial-of-service attack, which is the problem -maxsendbuffer fixes.

As transaction volume grows I think there will be lots of things that need optimization/fixing.  One simple fix would be to request fewer blocks as they get bigger, to stay inside the sendbuffer limit...
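
(A hedged sketch of that last idea, using a hypothetical helper that is not part of any patch in this thread: size each block-request batch so the expected reply fits within the send-buffer limit.)

Code:
#include <algorithm>
#include <cstddef>

// Hypothetical helper: shrink the block-request batch as blocks grow, so the
// expected reply stays inside the peer's send-buffer limit. 500 is the usual
// per-request block count mentioned in this thread.
size_t BlocksPerRequest(size_t maxSendBuffer, size_t avgRecentBlockSize) {
    size_t n = maxSendBuffer / std::max<size_t>(avgRecentBlockSize, 1);
    return std::min<size_t>(std::max<size_t>(n, 1), 500);
}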

(ps: I've been re-downloading the current block chain connected to a -maxsendbuffer=10000 0.3.20 node, and the workaround works)


Title: Re: Problems with send buffer limits in 0.3.20
Post by: Gavin Andresen on March 03, 2011, 09:30:02 PM
Please help test:  https://github.com/bitcoin/bitcoin/pull/95

Sets the -maxsendbuffer and -maxreceivebuffer limits to 10MB each (so a possible max of 2GB of memory if you had 100 connections).

I tested by running a 0.3.20 node to act as server, then ran a client with:
  -connect={server_ip} -noirc -nolisten
... to make sure I was downloading the block chain from that 0.3.20 node.


Title: Re: Problems with send buffer limits in 0.3.20
Post by: theymos on March 03, 2011, 09:43:18 PM
With a 10MB limit, someone can create 10 full blocks within a 500-block span to disable getblocks uploading for almost the entire network. This is probably an even more effective attack than whatever the limit is designed to protect against.


Title: Re: Problems with send buffer limits in 0.3.20
Post by: jgarzik on March 03, 2011, 09:50:47 PM
With a 10MB limit, someone can create 10 full blocks within a 500-block span to disable getblocks uploading for almost the entire network. This is probably an even more effective attack than whatever the limit is designed to protect against.

That won't disable getblocks uploading.

But even so, the ideal would be to simply stop reading until the write buffer clears...
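
(A minimal sketch of what "stop reading until the write buffer clears" could look like in a select()-style socket loop; an illustration under assumptions, not code from any patch:)

Code:
#include <sys/select.h>
#include <cstddef>

// When building the fd_sets for select(), only watch a peer's read side while
// its outgoing buffer is below a threshold. A peer that refuses to drain our
// writes then stops having new requests serviced, and TCP backpressure
// propagates the stall instead of the buffer growing without bound.
void AddPeerToFdSets(int sock, size_t sendBufSize, size_t sendThreshold,
                     fd_set& readfds, fd_set& writefds) {
    if (sendBufSize > 0)
        FD_SET(sock, &writefds);   // still have queued data to flush
    if (sendBufSize < sendThreshold)
        FD_SET(sock, &readfds);    // safe to accept more requests
}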