Bitcoin Forum
Topic: Problems with send buffer limits in 0.3.20
Mike Hearn (OP)
March 03, 2011, 06:38:32 PM
 #1

0.3.20 has a new feature that disconnects nodes if vSend gets larger than 256kb, controllable via a flag.

This has the unfortunate bug that it's no longer possible to download the production block chain. I suggest people do not upgrade to 0.3.20 for now, unless they run with -maxsendbuffer=2000 as a flag.

It dies when sending the block chain between blocks 51501 and 52000 on the production network. The problem is this block:

http://blockexplorer.com/block/00000000186a147b91a88d37360cf3a525ec5f61c1101cc42da3b67fcdd5b5f8

It's 200kb. During block chain download, this block plus the others in that run of 500 blocks pushes vSend over 256kb, resulting in a disconnect.

I feel like send limiting is perhaps not that essential. If we really see BitCoin nodes OOMing because they tried to send data too fast, that implies there's a bug elsewhere. For instance, getdata requests have a size limit for exactly this kind of reason (it might be too large, but we can tweak that).
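
For illustration, the disconnect rule described above might look roughly like this. This is a minimal sketch, not the actual 0.3.20 source: CheckSendFloodControl and SendBufferLimit are made-up names, while vSend, the 256kb default, and the KB interpretation of -maxsendbuffer are taken from this thread.

Code:
#include <cstddef>
#include <vector>

// Minimal stand-in for the peer structure; only what this sketch needs.
struct CNode {
    std::vector<unsigned char> vSend;  // queued outgoing bytes for this peer
    bool fDisconnect = false;
};

// -maxsendbuffer appears to be in KB (hence -maxsendbuffer=2000 for ~2MB).
size_t SendBufferLimit(size_t maxsendbuffer_kb) {
    return maxsendbuffer_kb * 1000;
}

// The 0.3.20-style rule as described: once vSend exceeds the limit, the peer
// is flagged for disconnect, which is what kills block chain download mid-run.
void CheckSendFloodControl(CNode& node, size_t maxsendbuffer_kb = 256) {
    if (node.vSend.size() > SendBufferLimit(maxsendbuffer_kb))
        node.fDisconnect = true;
}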
jgarzik
March 03, 2011, 06:58:35 PM
 #2

Quote from: Mike Hearn on March 03, 2011, 06:38:32 PM
I feel like send limiting is perhaps not that essential. If we really see BitCoin nodes OOMing because they tried to send data too fast, that implies there's a bug elsewhere. For instance, getdata requests have a size limit for exactly this kind of reason (it might be too large, but we can tweak that).

Ultimately, the goal is flow control.  Your OS has a buffer for outgoing data.  When that gets full, we need to stop sending more data, and wait for empty buffer space.

The worst-case buffer size against a hacker is zero (a hostile peer can simply never read). The worst-case "normal" buffer size is 8k.

Since bitcoin needs to send more data than that in a single message, an implementation must choose: (a) store a pointer into the middle of the object being sent, for later resumption of the transfer, or (b) provide an application buffer that stores a copy of all outgoing data until it is transmitted. Satoshi chose (b) but placed no limits on the size of that outgoing data buffer.

It does sound like the limits are tighter than they should be.
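
As a sketch of option (b) on a POSIX non-blocking socket (names here are illustrative, not from the actual codebase): everything outgoing is copied into an application buffer, and each pass hands the OS as much as it will take.

Code:
#include <sys/socket.h>
#include <sys/types.h>
#include <cerrno>
#include <vector>

// Option (b): keep a copy of all outgoing data in an application buffer
// and drain as much as the OS socket buffer will accept on each pass.
struct SendBuffer {
    std::vector<char> pending;  // bytes queued but not yet handed to the OS
};

// Returns false on a fatal socket error, true otherwise.
bool FlushSend(int sockfd, SendBuffer& buf) {
    while (!buf.pending.empty()) {
        ssize_t n = send(sockfd, buf.pending.data(), buf.pending.size(),
                         MSG_DONTWAIT);
        if (n < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return true;   // OS buffer full: stop, retry when writable
            return false;      // real socket error
        }
        // Drop the bytes the OS accepted; whatever is left stays queued.
        buf.pending.erase(buf.pending.begin(), buf.pending.begin() + n);
    }
    return true;
}

The dispute in this thread is only over how large that pending buffer may grow before the peer is cut off.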


theymos
March 03, 2011, 07:04:08 PM
 #3

That's pretty bad. Good thing you caught this before everyone upgraded and new nodes were no longer able to connect.

Cusipzzz
March 03, 2011, 07:05:44 PM
 #4

so much spam in that block, sigh.
Gavin Andresen
March 03, 2011, 07:57:55 PM
 #5

Oops.

My fault-- I DID test downloading the entire production block chain with a 0.3.20 client, but I wasn't careful to make sure I downloaded it from another 0.3.20 client.

Workaround:  if you are running 0.3.20, run with -maxsendbuffer=10000

theymos
March 03, 2011, 08:43:47 PM
 #6

Couldn't peers theoretically need to send 500 MB in response to a getblocks request? The limit should perhaps be MAX_BLOCK_SIZE*500.

Gavin Andresen
March 03, 2011, 09:03:30 PM
 #7

Quote from: theymos on March 03, 2011, 08:43:47 PM
Couldn't peers theoretically need to send 500 MB in response to a getblocks request? The limit should perhaps be MAX_BLOCK_SIZE*500.

500MB per connection times 100 connections would be 50 GB.  That re-opens the door to a memory exhaustion denial-of-service attack, which is the problem -maxsendbuffer fixes.

As transaction volume grows, I think there will be lots of things that need optimization/fixing. One simple fix would be to request fewer blocks as they get bigger, to stay inside the sendbuffer limit...

(ps: I've been re-downloading the current block chain connected to a -maxsendbuffer=10000 0.3.20 node, and the workaround works)
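
A minimal sketch of that batch-shrinking idea; nothing like this exists in 0.3.20, and the names and the headroom factor are assumptions.

Code:
#include <algorithm>
#include <cstddef>

// Ask for fewer blocks per getblocks round as average block size grows,
// so the serving peer's vSend stays inside its send buffer limit.
size_t BlocksToRequest(size_t avg_block_bytes, size_t send_buffer_bytes) {
    const size_t kMaxBatch = 500;  // the usual getblocks run length
    if (avg_block_bytes == 0)
        return kMaxBatch;
    // Leave half the buffer as headroom so one oversized block (like the
    // 200kb one above) cannot blow the limit on its own.
    size_t fit = (send_buffer_bytes / 2) / avg_block_bytes;
    return std::max<size_t>(1, std::min(kMaxBatch, fit));
}

With the 256kb default and 200kb blocks this asks for one block at a time; with the 10MB limit from the pull request it stays at the full 500 for typical early-2011 blocks.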

Gavin Andresen
March 03, 2011, 09:30:02 PM
 #8

Please help test:  https://github.com/bitcoin/bitcoin/pull/95

Sets the -maxsendbuffer and -maxreceivebuffer limits to 10MB each (so a possible max of 2GB of memory if you had 100 connections: 100 × 20MB per connection).

I tested by running a 0.3.20 node to act as server, then ran a client with:
  -connect={server_ip} -noirc -nolisten
... to make sure I was downloading the block chain from that 0.3.20 node.

theymos
March 03, 2011, 09:43:18 PM
 #9

With a 10MB limit, someone can create 10 full blocks within a 500-block span to disable getblocks uploading for almost the entire network. This is probably an even more effective attack than whatever the limit is designed to protect against.

jgarzik
March 03, 2011, 09:50:47 PM
 #10

Quote from: theymos on March 03, 2011, 09:43:18 PM
With a 10MB limit, someone can create 10 full blocks within a 500-block span to disable getblocks uploading for almost the entire network. This is probably an even more effective attack than whatever the limit is designed to protect against.

That won't disable getblocks uploading.

But even so, the ideal would be to simply stop reading until the write buffer clears...
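
That idea, sketched with classic select() bookkeeping (the Peer fields and the helper are assumptions, not actual code):

Code:
#include <sys/select.h>
#include <cstddef>

struct Peer {
    int sockfd;
    size_t send_buffered;  // bytes queued in the application send buffer
};

// Stop reading from a peer while its write side is backed up, and resume
// once the buffer drains. Reads are what drive getdata/getblocks work, so
// pausing them throttles outgoing data: flow control without disconnects.
void BuildFdSets(const Peer* peers, int npeers, size_t send_limit,
                 fd_set* readfds, fd_set* writefds) {
    FD_ZERO(readfds);
    FD_ZERO(writefds);
    for (int i = 0; i < npeers; i++) {
        if (peers[i].send_buffered > 0)
            FD_SET(peers[i].sockfd, writefds);  // always try to drain
        if (peers[i].send_buffered < send_limit)
            FD_SET(peers[i].sockfd, readfds);   // only read while there is room
    }
}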

