Author Topic: Peer Isolation for DoS prevention  (Read 1232 times)
Sergio_Demian_Lerner (OP)
Hero Member
Activity: 552
Merit: 622
September 18, 2012, 02:38:35 PM
 #1

I was thinking about what could be done to proactively protect from future DoS attacks.

It's not only my opinion that this may be a weak point in the source code.
From https://en.bitcoin.it/wiki/Weaknesses:
Quote
Bitcoin has some denial-of-service prevention built-in (..), but is likely still vulnerable to more sophisticated denial-of-service attacks.

Why not isolate clients from one another? Suppose we add a "sliding window" of resources (CPU/RAM) for each node.

So, for example, when a client sends a transaction Tx1, it receives the message:

("avail-resources", 100 KB, 75 sig)

which means that it can send an additional 100 KB of data containing no more than 75 signature verifications.

Every minute (if no avail-resources message was sent in the previous minute), a node broadcasts a new avail-resources message to each peer, updating that peer's available resources.

If a peer exceeds its resource limit, it is banned.

Also, counters should be maintained per peer (for example, dFreeCount should be local rather than global). A sketch of this idea follows.
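As an illustrative C++ sketch only (ResourceWindow, PeerLimiter, and the quota values are hypothetical names and numbers, not from the actual client):

Code:
#include <cstdint>
#include <iostream>
#include <map>
#include <string>

// Hypothetical per-peer resource window; names and quota values are
// illustrative, not taken from the Bitcoin source.
struct ResourceWindow {
    int64_t bytesLeft  = 100 * 1024;  // quota advertised in avail-resources
    int64_t sigOpsLeft = 75;          // signature verifications allowed
};

class PeerLimiter {
    std::map<std::string, ResourceWindow> windows;  // keyed by peer address
public:
    // Reset every peer's quota; each reset would be announced to that
    // peer with a fresh avail-resources message (once per minute, as above).
    void RefreshQuotas() {
        for (auto& [addr, w] : windows) w = ResourceWindow{};
    }
    // Charge a message against the sender's window; false means "ban".
    bool Consume(const std::string& addr, int64_t bytes, int64_t sigOps) {
        ResourceWindow& w = windows[addr];
        w.bytesLeft  -= bytes;
        w.sigOpsLeft -= sigOps;
        return w.bytesLeft >= 0 && w.sigOpsLeft >= 0;
    }
};

int main() {
    PeerLimiter limiter;
    // 60 KB with 40 signature checks: within the window, accepted.
    std::cout << limiter.Consume("198.51.100.7", 60 * 1024, 40) << '\n';  // 1
    // Another 60 KB from the same peer: byte quota exceeded, ban.
    std::cout << limiter.Consume("198.51.100.7", 60 * 1024, 10) << '\n';  // 0
}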

As an additional benefit, client isolation lets the user specify how much CPU/RAM he is willing to give to the Bitcoin application.

eDonkey2000 and other similar P2P networks have bandwidth controls to limit resource use. Why not Bitcoin?

Best regards,
Sergio.






gmaxwell
Moderator
Legendary
Activity: 4158
Merit: 8382
September 18, 2012, 02:54:15 PM
 #2

Quote
Why not isolate clients from one another? Suppose we add a "sliding window" of resources (CPU/RAM) for each node.

So, for example, when a client sends a transaction Tx1, it receives the message:

("avail-resources", 100 KB, 75 sig)

Why not just drop messages you're too burdened to handle? ... Bitcoin doesn't need reliable delivery. That would avoid adding tracking code and complexity, and it wouldn't change the protocol. And message acceptance could be fairly distributed among peers/netgroups.

Doesn't stop someone from outright flooding you, but you can't stop that even if you ban them.
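A minimal sketch of one way that fair distribution could look, assuming a fixed per-netgroup message budget per time slice (NetgroupBudget and the /16 string-prefix shortcut are hypothetical, for illustration only):

Code:
#include <cstdint>
#include <iostream>
#include <map>
#include <string>

// Hypothetical fair-drop policy: when the node is overloaded, each
// /16 netgroup gets an equal share of the message budget and anything
// beyond that share is silently dropped (no ban, no protocol change).
class NetgroupBudget {
    std::map<std::string, int> used;   // messages accepted per netgroup
    int perGroupBudget;
public:
    explicit NetgroupBudget(int budget) : perGroupBudget(budget) {}

    // Extract "a.b" from "a.b.c.d" as a stand-in for the /16 group.
    static std::string Netgroup(const std::string& ip) {
        size_t first = ip.find('.');
        size_t second = ip.find('.', first + 1);
        return ip.substr(0, second);
    }

    // True if the message should be processed, false if dropped.
    bool Accept(const std::string& ip) {
        return used[Netgroup(ip)]++ < perGroupBudget;
    }
};

int main() {
    NetgroupBudget budget(2);  // toy budget: 2 messages per /16 per tick
    std::cout << budget.Accept("203.0.113.5")  << '\n';  // 1
    std::cout << budget.Accept("203.0.113.9")  << '\n';  // 1 (same /16)
    std::cout << budget.Accept("203.0.113.12") << '\n';  // 0: dropped
    std::cout << budget.Accept("198.51.100.4") << '\n';  // 1: other group
}

Since overflow is dropped rather than punished, a burst from one /16 can never crowd out other netgroups, and no per-peer state needs to survive past the current time slice.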

Quote
eDonkey2000 and other similar P2P nets have a bandwidth control to limit use. Why not Bitcoin?

Right now? Two reasons: it just hasn't been implemented yet, and we don't have load-balancing in the initial block download or in peer rotation, so getting a highly rate-limited peer could totally kill your performance.
Mike Hearn
Legendary
Activity: 1526
Merit: 1129
September 18, 2012, 03:54:15 PM
 #3

Yeah, you really want to see anti-DoS as a way to prioritize traffic and drop whatever overflows. Banning peers can work, but it can also lead to chaos if network behavior changes in a way that triggers the attack heuristics.
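A toy example of the prioritize-and-drop approach, assuming fee rate as the priority key (OverflowQueue is a hypothetical name, not a description of any real client):

Code:
#include <cstdint>
#include <iostream>
#include <set>

// Hypothetical bounded queue that keeps the highest-fee-rate
// transactions and drops the overflow, instead of banning senders.
struct PendingTx {
    double feeRate;   // satoshis per byte (illustrative priority key)
    int64_t id;
    bool operator<(const PendingTx& o) const { return feeRate < o.feeRate; }
};

class OverflowQueue {
    std::multiset<PendingTx> queue;  // ordered by fee rate, lowest first
    size_t capacity;
public:
    explicit OverflowQueue(size_t cap) : capacity(cap) {}
    void Add(PendingTx tx) {
        queue.insert(tx);
        if (queue.size() > capacity)
            queue.erase(queue.begin());  // silently drop the cheapest
    }
    size_t Size() const { return queue.size(); }
    double LowestFeeRate() const { return queue.begin()->feeRate; }
};

int main() {
    OverflowQueue q(2);
    q.Add({1.0, 1});
    q.Add({5.0, 2});
    q.Add({3.0, 3});              // queue full: the 1.0 tx is dropped
    std::cout << q.Size() << ' ' << q.LowestFeeRate() << '\n';  // 2 3
}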
Sergio_Demian_Lerner (OP)
Hero Member
Activity: 552
Merit: 622
September 18, 2012, 04:09:46 PM
 #4

Quote
Yeah, you really want to see anti-DoS as a way to prioritize traffic and drop whatever overflows. Banning peers can work, but it can also lead to chaos if network behavior changes in a way that triggers the attack heuristics.

An honest node can never trigger the attack heuristic: if you receive authorization to send 500 KB and you send 1 MB, it's your fault.
It's like limiting the TCP window at the session level.

What could happen is that transaction load becomes higher than the average bandwidth limitation. But I doubt this would be a problem, since we already know the maximum transactions per second or bytes per second: it's limited by the 1 MB/10 min block size.

Also, a special authorization to send a big chunk of block data could be sent to a peer if it announces the block by sending a block header that contains the expected proof-of-work, as sketched below.
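For illustration, the quota check might verify the announced header's proof-of-work like this hypothetical sketch (ExpandTarget and CheckProofOfWork are made-up names; a real node would first double-SHA256 the 80-byte header, elided here to keep the example self-contained):

Code:
#include <array>
#include <cstdint>
#include <cstdio>

// Hypothetical pre-check: expand the header's compact nBits field into
// a 256-bit target and test whether the (already computed) header hash
// meets it. If it does, the announcing peer earns extra block-data quota.
using Hash256 = std::array<uint8_t, 32>;  // big-endian byte order

Hash256 ExpandTarget(uint32_t nBits) {
    Hash256 target{};                      // all zero bytes
    int exponent = nBits >> 24;            // size of the target in bytes
    uint32_t mantissa = nBits & 0x007fffff;
    // Place the 3 mantissa bytes so the value is mantissa * 256^(exp-3).
    for (int i = 0; i < 3; ++i) {
        int pos = 32 - exponent + i;
        if (pos >= 0 && pos < 32)
            target[pos] = (mantissa >> (8 * (2 - i))) & 0xff;
    }
    return target;
}

bool CheckProofOfWork(const Hash256& hash, uint32_t nBits) {
    return hash <= ExpandTarget(nBits);    // lexicographic == numeric here
}

int main() {
    // Toy header hash with many leading zero bytes, and a loose target.
    Hash256 hash{};
    hash[31] = 0x01;                       // trivially low hash value
    std::printf("%d\n", CheckProofOfWork(hash, 0x1d00ffffu));  // 1: grant quota
}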





Mike Hearn
Legendary
Activity: 1526
Merit: 1129
September 18, 2012, 05:52:58 PM
 #5

Old clients won't know about your extension, so by your definition they will be "dishonest".

I don't see any reason to introduce arbitrary limits here. With smart ordering/prioritization, a node should be able to handle a lot of overflow traffic.
Syke
Legendary
Activity: 3878
Merit: 1193
September 19, 2012, 04:05:06 AM
 #6

Quote
What could happen is that transaction load becomes higher than the average bandwidth limitation. But I doubt this would be a problem, since we already know the maximum transactions per second or bytes per second: it's limited by the 1 MB/10 min block size.

There's no 10-minute limit. There is only a 10-minute average target over 2016 blocks. That's a very large amount of potential variance.
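For a rough sense of that variance: block arrivals are approximately Poisson, so inter-block times are roughly exponential with a 10-minute mean. The chance that the next block arrives within one minute is 1 - e^(-1/10) ≈ 9.5%, so several blocks inside a single 10-minute window are routine, and a quota sized to the long-run average will regularly be exceeded.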

Buy & Hold
Sergio_Demian_Lerner (OP)
Hero Member
Activity: 552
Merit: 622
September 19, 2012, 03:12:56 PM
 #7

Quote
What could happen is that transaction load becomes higher than the average bandwidth limitation. But I doubt this would be a problem, since we already know the maximum transactions per second or bytes per second: it's limited by the 1 MB/10 min block size.
Quote
There's no 10-minute limit. There is only a 10-minute average target over 2016 blocks. That's a very large amount of potential variance.

I know, but setting a maximum of 5x the average would be just fine.
Also, as I said, block transmission could be prioritized, giving peers extra quota, since the proof-of-work can be checked in advance by inspecting the block header.
