Bitcoin Forum
December 02, 2016, 10:32:14 PM
News: Latest stable version of Bitcoin Core: 0.13.1  [Torrent].
 
Pages: [1]
Author Topic: Problems with send buffer limits in 0.3.20  (Read 1464 times)
Mike Hearn
Legendary
Activity: 1526
March 03, 2011, 06:38:32 PM
 #1

0.3.20 has a new feature that disconnects nodes if vSend grows larger than 256 KB, controllable via a flag.

This has the unfortunate bug that it's no longer possible to download the production block chain. I suggest people do not upgrade to 0.3.20 for now, unless you run with -maxsendbuffer=2000 as a flag.

It dies when sending the block chain between block 51501 and 52000 on the production network. The problem is this block:

http://blockexplorer.com/block/00000000186a147b91a88d37360cf3a525ec5f61c1101cc42da3b67fcdd5b5f8

It's about 200 KB. During block chain download, this block plus the others in that run of 500 blocks pushes vSend over 256 KB and results in a disconnect.

I feel like send limiting is perhaps not that essential. If we really see Bitcoin nodes OOMing because they tried to send data too fast, that implies there's a bug elsewhere. For instance, getdata requests have a size limit for exactly this kind of reason (the limit might be too large, but we can tweak that).
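The failure mode can be modelled in a few lines (a hedged sketch; the real check lives in the client's C++ networking code, and `should_disconnect` and `DEFAULT_MAX_SEND_BUFFER` are illustrative names, not actual identifiers):

```python
# Simplified model of the 0.3.20 send-buffer cap (illustrative names, not
# the client's actual C++ identifiers).
DEFAULT_MAX_SEND_BUFFER = 256 * 1000  # bytes; -maxsendbuffer is given in KB

def should_disconnect(vsend_bytes: int,
                      max_send_buffer: int = DEFAULT_MAX_SEND_BUFFER) -> bool:
    """0.3.20 drops a peer whose queued outgoing data exceeds the cap."""
    return vsend_bytes > max_send_buffer

# A 500-block run containing the ~200 KB block plus its neighbours can push
# the queue past 256 KB during initial download:
queued = 200_000 + 60_000
print(should_disconnect(queued))             # True: peer gets disconnected
print(should_disconnect(queued, 2_000_000))  # False: -maxsendbuffer=2000 workaround
```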
jgarzik
Legendary
Activity: 1470
March 03, 2011, 06:58:35 PM
 #2

Quote from: Mike Hearn on March 03, 2011, 06:38:32 PM
I feel like send limiting is perhaps not that essential. If we really see Bitcoin nodes OOMing because they tried to send data too fast, that implies there's a bug elsewhere. For instance, getdata requests have a size limit for exactly this kind of reason (the limit might be too large, but we can tweak that).

Ultimately, the goal is flow control.  Your OS has a buffer for outgoing data.  When that gets full, we need to stop sending more data, and wait for empty buffer space.

The worst-case buffer size with a hostile peer is zero. The worst-case "normal" buffer size is 8 KB.

Since bitcoin needs to send more data than that in a single message, an implementation must choose: (a) store a pointer into the middle of the object being sent, for later resumption of the transfer, or (b) provide an application buffer that stores a copy of all outgoing data until it is transmitted. Satoshi chose (b) but placed no limit on the size of that outgoing data buffer.
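Option (b) can be sketched as a capped application-side buffer (a hypothetical illustration; `SendBuffer` and its methods are made-up names, not the client's API):

```python
class SendBuffer:
    """Option (b): copy all outgoing data until the OS socket accepts it.
    cap=None reproduces the original unbounded behaviour; 0.3.20 added a cap."""

    def __init__(self, cap=None):
        self.buf = bytearray()
        self.cap = cap

    def queue(self, data: bytes) -> bool:
        """Copy data in; return False if the cap is now exceeded
        (which in 0.3.20 meant disconnecting the peer)."""
        self.buf += data
        return self.cap is None or len(self.buf) <= self.cap

    def drain(self, n: int) -> None:
        """Simulate the OS accepting n bytes off the front of the buffer."""
        del self.buf[:n]
```

With cap=256_000, queueing a ~200 KB block on top of 60 KB of smaller blocks returns False, which is exactly the disconnect described above.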

It does sound like the limits are tighter than they should be.


Jeff Garzik, bitcoin core dev team and BitPay engineer; opinions are my own, not my employer.
Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
theymos
Administrator
Legendary
Activity: 2492
March 03, 2011, 07:04:08 PM
 #3

That's pretty bad. Good thing you caught this before everyone upgraded and new nodes were no longer able to connect.

1NXYoJ5xU91Jp83XfVMHwwTUyZFK64BoAD
Cusipzzz
Sr. Member
Activity: 300
March 03, 2011, 07:05:44 PM
 #4

so much spam in that block, sigh.

Gavin Andresen
Legendary
Activity: 1652
Chief Scientist
March 03, 2011, 07:57:55 PM
 #5

Oops.

My fault-- I DID test downloading the entire production block chain with a 0.3.20 client, but I wasn't careful to make sure I downloaded it from another 0.3.20 client.

Workaround:  if you are running 0.3.20, run with -maxsendbuffer=10000

How often do you get the chance to work on a potentially world-changing project?
theymos
Administrator
Legendary
Activity: 2492
March 03, 2011, 08:43:47 PM
 #6

Couldn't peers theoretically need to send 500 MB in response to a getblocks request? The limit should perhaps be MAX_BLOCK_SIZE*500.
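The 500 MB figure follows directly from the protocol constants (assuming the 1 MB block size limit and 500-block getblocks batches of the era):

```python
MAX_BLOCK_SIZE = 1_000_000  # bytes: the 1 MB block size limit
BLOCKS_PER_GETBLOCKS = 500  # blocks returned per getblocks round

worst_case_per_peer = MAX_BLOCK_SIZE * BLOCKS_PER_GETBLOCKS
print(worst_case_per_peer)        # 500_000_000 bytes, i.e. 500 MB
print(worst_case_per_peer * 100)  # 50 GB if 100 peers all hit the worst case
```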

Gavin Andresen
Legendary
Activity: 1652
Chief Scientist
March 03, 2011, 09:03:30 PM
 #7

Quote from: theymos on March 03, 2011, 08:43:47 PM
Couldn't peers theoretically need to send 500 MB in response to a getblocks request? The limit should perhaps be MAX_BLOCK_SIZE*500.

500MB per connection times 100 connections would be 50 GB.  That re-opens the door to a memory exhaustion denial-of-service attack, which is the problem -maxsendbuffer fixes.

As transaction volume grows I think there will be lots of things that need optimization/fixing.  One simple fix would be to request fewer blocks as they get bigger, to stay inside the sendbuffer limit...

(ps: I've been re-downloading the current block chain connected to a -maxsendbuffer=10000 0.3.20 node, and the workaround works)
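Gavin's "request fewer blocks as they get bigger" idea might look roughly like this (a sketch under assumed names; nothing here is actual client code):

```python
def blocks_to_request(avg_block_bytes: int, send_buffer_limit: int,
                      max_batch: int = 500) -> int:
    """Shrink the getblocks batch so the expected reply fits the send buffer."""
    if avg_block_bytes <= 0:
        return max_batch
    return max(1, min(max_batch, send_buffer_limit // avg_block_bytes))

print(blocks_to_request(1_000, 10_000_000))      # small blocks: full 500-block batch
print(blocks_to_request(1_000_000, 10_000_000))  # 1 MB blocks: only 10 at a time
```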

Gavin Andresen
Legendary
Activity: 1652
Chief Scientist
March 03, 2011, 09:30:02 PM
 #8

Please help test:  https://github.com/bitcoin/bitcoin/pull/95

Sets the -maxsendbuffer and -maxreceivebuffer limits to 10MB each (so possible max of 2GB of memory if you had 100 connections).

I tested by running a 0.3.20 node to act as server, then ran a client with:
  -connect={server_ip} -noirc -nolisten
... to make sure I was downloading the block chain from that 0.3.20 node.

theymos
Administrator
Legendary
Activity: 2492
March 03, 2011, 09:43:18 PM
 #9

With a 10MB limit, someone can create 10 full blocks within a 500-block span to disable getblocks uploading for almost the entire network. This is probably an even more effective attack than whatever the limit is designed to protect against.

jgarzik
Legendary
Activity: 1470
March 03, 2011, 09:50:47 PM
 #10

Quote from: theymos on March 03, 2011, 09:43:18 PM
With a 10MB limit, someone can create 10 full blocks within a 500-block span to disable getblocks uploading for almost the entire network. This is probably an even more effective attack than whatever the limit is designed to protect against.

That won't disable getblocks uploading.

But even so, the ideal would be to simply stop reading until the write buffer clears...
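That back-pressure idea can be sketched like this (hypothetical structure; a real client would do this inside its select() loop, and the peer representation here is illustrative):

```python
# Back-pressure sketch: skip a peer's socket for reads while its outgoing
# queue is over a high-water mark, so new requests stop arriving until the
# queued replies drain.
def peers_to_poll_for_read(peers, high_water=256_000):
    """Return only peers whose pending outgoing data is below high_water."""
    return [p for p in peers if len(p["send_buf"]) < high_water]

peers = [{"name": "a", "send_buf": bytearray(10_000)},
         {"name": "b", "send_buf": bytearray(300_000)}]
print([p["name"] for p in peers_to_poll_for_read(peers)])  # ['a']
```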

