Author Topic: Analysis of hashrate-based double-spending  (Read 14746 times)
Meni Rosenfeld (OP)
December 11, 2012, 02:07:46 PM
Last edit: December 11, 2012, 07:21:07 PM by Meni Rosenfeld
 #1

In Satoshi's original whitepaper, he concludes by discussing double-spending attacks and their probabilities of success. Unfortunately, this section does not get the attention it deserves, leading to several widespread myths:

Myth. Double-spending requires having more than half of the network hashrate.

Myth. Waiting for confirmations offers meaningful protection against an attacker with majority hashrate.

Myth. There is something special about 6 confirmations, and/or it grants absolute protection.

Myth. The important factor is the amount of time spent waiting for confirmations, rather than the number of blocks. (Still a myth if instead of actual time you look at #blocks * average time per block).

In Analysis of hashrate-based double-spending, I elucidate and build on Satoshi's work in this area. I explain the basics of double-spending and derive the probability for its success. I provide various graphs and charts for this probability. I also give a simplified model for the economics of double-spending.

Some of the key conclusions are:

  • Hashrate-based double-spending boils down to a race between an attacker and the honest network. In every turn, one of them moves a discrete step forward, each with a probability determined by their relative hashrate. A successful attempt means managing to catch up with the honest network; a failure means falling so far behind that there is no longer a chance of catching up.
  • Successful double-spending is possible with any attacker hashrate. There is no need for a majority to carry out this attack.
  • Waiting for more confirmations exponentially decreases the probability of double-spending success. The decay rate depends on the attacker's relative hashrate (see the sketch after this list).
  • The probability of success depends on the number of confirmations and not on the amount of time waited. An alternative network with a shorter time between blocks can thus obtain more security with a given amount of wait time.
  • The time constant might be relevant, if we assume that the attacker cannot sustain his hashrate for a prolonged period of time. In this case, even majority hashrate does not guarantee success, as the attack could take more time than available. However, it does not seem likely that an attacker would go through the trouble of obtaining enormous computational resources and only maintain them for a time short enough to make this consideration relevant.
  • No amount of confirmations will reduce the success rate to 0.
  • If the attacker controls more hashrate than the honest network, no amount of confirmations will reduce the success rate below 100%.
  • There is nothing special about the default, often-cited figure of 6 confirmations. It was chosen based on the assumption that an attacker is unlikely to amass more than 10% of the hashrate, and that a negligible risk of less than 0.1% is acceptable. Both these figures are arbitrary, however; 6 confirmations are overkill for casual attackers, and at the same time powerless against more dedicated attackers with much more than 10% hashrate.
  • In practice, the probabilities of success are less relevant directly than as a major factor in determining the economics of an attack.
  • Economic modeling is complicated, and some additional factors come into play. The time between blocks can affect the economics, and even with majority hashrate an attack might not be economical despite the guaranteed success. However, by the time the latter factor becomes relevant, the security is anyway weak beyond redemption.
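
To make the first few points concrete, here is a minimal Python sketch (my illustration, not code from the paper) of the success probability, using the Poisson approximation from Satoshi's whitepaper. The linked paper treats this more carefully (including the attacker's pre-mined block), so its numbers differ somewhat, but the qualitative behaviour is the same: exponential decay in the number of confirmations, no decay to zero, and certainty once the attacker has a majority.

Code:
from math import exp, factorial

def attacker_success(q, z):
    """Probability that an attacker with hashrate fraction q ever catches up
    after the merchant waits for z confirmations (Satoshi's Poisson approximation)."""
    p = 1.0 - q
    if q >= p:
        return 1.0                       # a majority attacker always succeeds eventually
    lam = z * q / p                      # expected attacker blocks while the honest chain finds z
    prob = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

for q in (0.1, 0.3, 0.5):
    print(q, [round(attacker_success(q, z), 6) for z in range(7)])

For q = 0.1 this reproduces the familiar whitepaper figures (roughly 0.02% at 6 confirmations); for q = 0.5 and above it returns 1 for every number of confirmations.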

CIYAM
December 11, 2012, 02:12:58 PM
 #2

  • No amount of confirmations will reduce the success rate to 0.

Does a "checkpoint" not make it impossible to do a block re-org for all blocks before and including itself?

Meni Rosenfeld (OP)
December 11, 2012, 02:15:35 PM
 #3

  • No amount of confirmations will reduce the success rate to 0.

Does a "checkpoint" not make it impossible to do a block re-org for all blocks before and including itself?
Yes, it does. I am explicitly discussing hashrate-based attacks with the vanilla protocol, disregarding additional layers of protection such as hardcoded checkpoints and proof of stake.

With the current schedule of checkpoints, it is not practical to consider waiting for a checkpoint to accept payment.

CIYAM
December 11, 2012, 02:25:13 PM
 #4

With the current schedule of checkpoints, it is not practical to consider waiting for a checkpoint to accept payment.

Understood - actually, regarding the "rewrite the whole history since the last checkpoint" attack, I've been wondering: wouldn't the easiest solution be to simply refuse to accept blocks that try to replace older blocks if they are simply too old (or does the protocol already do that)?

Meni Rosenfeld (OP)
December 11, 2012, 02:30:24 PM
 #5

With the current schedule of checkpoints, it is not practical to consider waiting for a checkpoint to accept payment.
Understood - actually, regarding the "rewrite the whole history since the last checkpoint" attack, I've been wondering: wouldn't the easiest solution be to simply refuse to accept blocks that try to replace older blocks if they are simply too old (or does the protocol already do that)?
I call this practice "cementing" and it has some problems, similar to what we would have if there was no blockchain at all. The basic idea is that a node could be stuck on "the wrong version" after being isolated and fed bad data from an attacker, and would have no way to recover even after being exposed to the honest network.

There are some ways to work with that but it's far from simple.
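
Purely to illustrate the idea under discussion (this is not the vanilla protocol, and the max-depth parameter is hypothetical), a "cementing" rule might look like the following sketch. The failure mode described above is visible directly in it: once a node has cemented the wrong branch, it rejects the honest chain forever, no matter how much work that chain carries.

Code:
MAX_REORG_DEPTH = 100   # hypothetical cementing horizon, in blocks

def accept_reorg(active_tip_height, fork_height, challenger_work, active_work):
    """Decide whether to switch to a competing branch that forks off at fork_height.
    A plain node compares only total work; a cementing node additionally refuses
    to replace blocks buried deeper than the horizon."""
    depth = active_tip_height - fork_height
    if depth > MAX_REORG_DEPTH:
        # Cemented: rejected even if the competing branch has more total work.
        # An isolated node that was fed an attacker's branch can therefore
        # never recover once that branch has been cemented.
        return False
    return challenger_work > active_work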

CIYAM
December 11, 2012, 02:42:20 PM
 #6

I call this practice "cementing" and it has some problems, similar to what we would have if there was no blockchain at all. The basic idea is that a node could be stuck on "the wrong version" after being isolated and fed bad data from an attacker, and would have no way to recover even after being exposed to the honest network.

There are some ways to work with that but it's far from simple.

Very interesting - I do hope some more ideas about this could be hashed out as I think it would actually be a much better solution than using "proof of x" (where x is anything other than "work") to stop such attacks.

BTW - nice paper. Smiley

Transisto
December 11, 2012, 05:24:18 PM
 #7

This was a much-needed paper; I've seen these myths propagated a lot.

Thanks,
Steve
December 11, 2012, 06:31:54 PM
 #8

It's great to see this topic given such formal treatment.  Very nice!

Regarding this myth:  "The probability of success depends on the number of confirmations and not on the amount of time waited. An alternative network with a shorter time between blocks can thus obtain more security with a given amount of wait time."

If you have two networks, A & B and both have the same hash rate, but one has a 1 minute block interval while the other has a 10 minute interval, is it not the case that for a given percentage of the network hashing power, an attacker would have an equivalent chance of executing a successful double spend on a time frame of say 1 hour or greater?

Blocks are a reflection of the amount of work that has been done to find them...I don't think people argue that waiting a certain amount of time buys you any more security (you could wait an hour, and if there still isn't a confirmation block, you're no more safe than waiting only 1 second).  However, I think what most people argue is that on two networks, with an equivalent amount of hashing power, 6 blocks emitted from one that calibrates to produce 1 block every 10 minutes is equivalent (in terms of security) to 60 blocks emitted from the network that produces one block every minute.

I think they may be correct, but I need to study your paper more closely (I do agree with the first part of your assertion though, which is that the probability of success does indeed depend on the number of confirmations).

Steve
December 11, 2012, 06:45:54 PM
 #9

I call this practice "cementing" and it has some problems, similar to what we would have if there was no blockchain at all. The basic idea is that a node could be stuck on "the wrong version" after being isolated and fed bad data from an attacker, and would have no way to recover even after being exposed to the honest network.

There are some ways to work with that but it's far from simple.
I like this term "cementing."  And I like how you frame it in comparison with having no blockchain at all.  I debate whether it's a good idea or not...if it is, then it should be a part of the protocol and not just something cooked into the source code.  One positive side effect of cementing is that it would force a would-be attacker to reveal their blocks before cementing occurs.  I wonder if there is a way that could be accomplished without using cementing (because you'd really want to force an attacker to reveal themselves as soon as possible).

Meni Rosenfeld (OP)
December 11, 2012, 07:01:50 PM
 #10

BTW - nice paper. Smiley
This was a much-needed paper; I've seen these myths propagated a lot.

Thanks,
It's great to see this topic given such formal treatment.  Very nice!
Thank you.

However, I think what most people argue is that on two networks, with an equivalent amount of hashing power, 6 blocks emitted from one that calibrates to produce 1 block every 10 minutes is equivalent (in terms of security) to 60 blocks emitted from the network that produces one block every minute.
This is indeed what they argue, and this is wrong.

I don't think people argue that waiting a certain amount of time buys you any more security (you could wait an hour, and if there still isn't a confirmation block, you're no more safe than waiting only 1 second)
I understand the distinction and am not trying to set up a strawman; it's just too wordy. Their actual argument as stated above really is a myth.

I think they may be correct, but I need to study your paper more closely
The key observation is that the process is entirely non-temporal. It's a series of discrete turns where each party has a chance to move a step ahead. The probability of success relies on the properties of this Markov chain, and there is no place where the time constant can have an effect. Of course, if we were talking about the time it takes to complete the attack, things would be different.
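
As a rough illustration (my sketch, not from the paper, and ignoring the pre-mined block refinement discussed later in the thread), the race can be simulated directly. Note that no block interval appears anywhere; only the number of confirmations and the hashrate split enter.

Code:
import random

def simulate_attack(q, confirmations, give_up_deficit=30, trials=100_000):
    """Monte Carlo of the discrete race: each turn is a block, found by the
    attacker with probability q or by the honest network with probability 1 - q."""
    wins = 0
    for _ in range(trials):
        attacker, honest = 0, 0
        # The merchant waits for the required confirmations while the attacker
        # mines a secret branch in parallel.
        while honest < confirmations:
            if random.random() < q:
                attacker += 1
            else:
                honest += 1
        # The attacker keeps racing until he catches up (success) or falls
        # hopelessly far behind (give up).
        while honest - attacker < give_up_deficit:
            if attacker >= honest:
                wins += 1
                break
            if random.random() < q:
                attacker += 1
            else:
                honest += 1
    return wins / trials

print(simulate_attack(0.2, 4))   # e.g. a 20% attacker against 4 confirmations

Changing the block interval would only change how long the same race takes in wall-clock time, not the distribution of its outcome.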

Blocks are a reflection of the amount of work that has been done to find them...
Yes, and hence they reflect the average amount of work an attacker would need to do to overturn them. But they do not reflect the probability of success when the actual number of blocks (and the hashrate) is held fixed.

if it is, then it should be a part of the protocol and not just something cooked into the source code.
I've been meaning for a few months now to write a post discussing a framework for various levels of synchronization, where each level can be cemented but overturned by the next; I haven't gotten around to it yet.

cunicula
December 11, 2012, 07:12:16 PM
Last edit: December 11, 2012, 07:24:03 PM by cunicula
 #11

However, I think what most people argue is that on two networks, with an equivalent amount of hashing power, 6 blocks emitted from one that calibrates to produce 1 block every 10 minutes is equivalent (in terms of security) to 60 blocks emitted from the network that produces one block every minute.
This is indeed what they argue, and this is wrong.

Where did this erroneous argument come from anyways? It is a very basic mistake.
Anyways, thanks for helping to provide some rigorous documentation for people. It helps greatly in the fight against BS.
notme
December 11, 2012, 07:19:31 PM
 #12

So an attack on the quicker blockchain would succeed or fail more quickly, but the probability of success for both chains would be equal, right?

Meni Rosenfeld (OP)
December 11, 2012, 07:22:07 PM
Last edit: December 11, 2012, 07:46:03 PM by Meni Rosenfeld
 #13

So an attack on the quicker blockchain would succeed or fail more quickly, but the probability of success for both chains would be equal, right?
Right. (When the number of confirmations is equal.)


Another way to look at this: If the attacker does not have a majority, he will find on average fewer blocks than the honest network. If the average outcome always happened, he would always fail. Variance in the number of blocks works to his advantage, because he has a chance to be lucky and get more than the average number of blocks, enough to beat the network. Assuming a fixed (#blocks * avg. time), the smaller the avg. time, the more individual items are being tracked, and so (as is the case with these Poisson processes) the lower the relative variance, and the lower the chance to luck out. This is similar to how mining pools reduce variance by basing the reward on the number of abundant shares rather than the number of rare blocks.
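
A quick numerical illustration of the variance point (my numbers, using the Poisson approximation): splitting the same expected amount of work into more, faster blocks shrinks the relative fluctuation in the attacker's block count, which is exactly what removes his chance to luck out.

Code:
from math import sqrt

q, p = 0.1, 0.9
# The same wall-clock wait, split into n confirmations of decreasing duration.
for n in (6, 60, 600):
    mean = n * q / p           # expected attacker blocks while the honest chain finds n
    rel_sd = 1 / sqrt(mean)    # relative standard deviation of a Poisson count
    print(f"{n:4d} confirmations: mean {mean:7.2f}, relative sd {rel_sd:.2f}")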

However, I think what most people argue is that on two networks, with an equivalent amount of hashing power, 6 blocks emitted from one that calibrates to produce 1 block every 10 minutes is equivalent (in terms of security) to 60 blocks emitted from the network that produces one block every minute.
This is indeed what they argue, and this is wrong.

Where did this erroneous argument come from anyways? It is a very basic mistake.
Anyways, thanks for helping to provide some rigorous documentation for people. It helps greatly in the fight against BS.
I don't know. It's pretty old.

2_Thumbs_Up
December 11, 2012, 10:06:18 PM
 #14

So an attack on the quicker blockchain would succeed or fail more quickly, but the probability of success for both chains would be equal, right?
Right. (When the number of confirmations is equal.)


Another way to look at this: If the attacker does not have a majority, he will find on average fewer blocks than the honest network. If the average outcome always happened, he would always fail. Variance in the number of blocks works to his advantage, because he has a chance to be lucky and get more than the average number of blocks, enough to beat the network. Assuming a fixed (#blocks * avg. time), the smaller the avg. time, the more individual items are being tracked, and so (as is the case with these Poisson processes) the lower the relative variance, and the lower the chance to luck out. This is similar to how mining pools reduce variance by basing the reward on the number of abundant shares rather than the number of rare blocks.
But assuming two networks with the same hash rate, a smaller avg. time causes reduced difficulty. This increases variance and counters this effect, no?

It also seems more economical to try an attack on the chain with the lower difficulty, since the expected cost for one attempt is lower, but the expected return is realistically the same.
FreeMoney
December 11, 2012, 10:25:18 PM
 #15

So an attack on the quicker blockchain would succeed or fail more quickly, but the probability of success for both chains would be equal, right?
Right. (When the number of confirmations is equal.)


Another way to look at this: If the attacker does not have a majority, he will find on average fewer blocks than the honest network. If the average outcome always happened, he would always fail. Variance in the number of blocks works to his advantage, because he has a chance to be lucky and get more than the average number of blocks, enough to beat the network. Assuming a fixed (#blocks * avg. time), the smaller the avg. time, the more individual items are being tracked, and so (as is the case with these Poisson processes) the lower the relative variance, and the lower the chance to luck out. This is similar to how mining pools reduce variance by basing the reward on the number of abundant shares rather than the number of rare blocks.
But assuming two networks with the same hash rate, a smaller avg. time causes reduced difficulty. This increases variance and counters this effect, no?

It also seems more economical to try an attack on the chain with the lower difficulty, since the expected cost for one attempt is lower, but the expected return is realistically the same.

Difficulty doesn't matter for the success of an attack. The attacker is racing the rest of the miners.

Imagine a magic network with no latency and difficulty so low that every hash made a valid block. You could comfortably accept 1000 blocks or so with no risk of a (lucky) double spend, and it would take a microsecond or less.

misterbigg
December 11, 2012, 11:46:28 PM
 #16

Meni, thanks for the paper; it was well written and informative.
Maged
December 12, 2012, 03:28:53 AM
Last edit: December 12, 2012, 03:49:22 AM by Maged
 #17

In the .pdf, things start to look bad for the honest side once you assume the attacker has pre-mined one block. How can he do that?
An attacker could choose to hold off the transaction(s) they are sending to the merchant(s) until they find a block that double-spends those transactions. Then they send off those transactions with a good fee and hope they are included in the very next block; otherwise it is as if the merchant got the benefit of waiting an extra block. Unless you only start mining after the transactions have made it into a block (which would be foolish, since you would start a block behind), it is better to pre-mine and hope the transactions make it into the next block than to start mining when the transactions are sent: either way the transactions could make it into the next block, but in the latter case you might not have a block of your own mined yet.

And why not pre-mine more than just one block?
Because then you risk wastefully reverting more blocks than you need to if your transaction(s) don't make it into the first block on the honest side of the fork, which decreases your chance of success.

Meni Rosenfeld (OP)
December 12, 2012, 06:16:05 AM
 #18

But assuming two networks with the same hash rate, a smaller avg. time causes reduced difficulty. This increases variance and counters this effect, no?
Assuming the same number of confirmations, there is the same variance in the number of blocks found by the attacker.

Assuming the same wait time, a reduced difficulty means more blocks found on average. For Poisson-distributed variables, the variance is equal to the mean, so the standard deviation is equal to the square root of the mean. With a larger mean, the relative standard deviation is lower.

It also seems more economical to try an attack on the chain with the lower difficulty, since the expected cost for one attempt is lower, but the expected return is realistically the same.
Yes. But this is dwarfed by the fact that with a lower difficulty, you can increase the number of confirmations with the same wait time and greatly reduce the probability of success (which is the main factor in the economics of the attack).
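
As a rough worked example of that trade-off (my numbers, using Satoshi's Poisson approximation rather than the exact treatment in the paper): against a 10% attacker, 6 confirmations of 10-minute blocks and 60 confirmations of 1-minute blocks take the same hour of wall-clock time, but the success probabilities differ by dozens of orders of magnitude.

Code:
from math import exp

def attacker_success(q, z):
    """Satoshi's Poisson approximation, as in the earlier sketch, but summed in a
    form that stays accurate for very small probabilities."""
    p = 1.0 - q
    if q >= p:
        return 1.0
    lam = z * q / p
    pk = exp(-lam)                       # Poisson weight for k = 0 attacker blocks
    total = 0.0
    for k in range(4 * z + 20):          # enough terms for the tail to be negligible
        catch_up = (q / p) ** (z - k) if k <= z else 1.0
        total += pk * catch_up
        pk *= lam / (k + 1)              # next Poisson weight
    return total

q = 0.1
print(attacker_success(q, 6))    # ~1 hour of 10-minute blocks
print(attacker_success(q, 60))   # the same ~1 hour of 1-minute blocks: vastly smaller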

In the .pdf, things start to look bad for the honest side once you assume the attacker has pre-mined one block. How can he do that?
The attacker can mine away and only start the attack when he finds a block. This is easy to do even with a low hashrate. If you are familiar with the Finney attack, it is a special case of this where the number of confirmations is 0, and hence the probability of success is 100%, with any hashrate.

And why not pre-mine more than just one block?
He could do that, and a fuller treatment of this falls within the scope of the economic discussion. However, there is a big gap in effectiveness between pre-mining one block and pre-mining two. Pre-mining 1 block is a "sure thing", as the attacker counts on finding a block anyway, so he may as well postpone the attack until he does. But after he finds the first block, waiting to get more blocks just puts him at risk that the honest network will regain the lead; if this happens, he loses the reward of the block he found. He is at a hashrate disadvantage, so he's better off pressing his head start.

If the only attack cost is the lost block reward, then he should pre-mine just one block, because the risk of losing the reward is the same and this gives him a better chance to benefit from it. If, however, the main cost is the illiquidity of the goods purchased, he should indeed make multiple attempts to pre-mine several blocks, calling the attempt off after every failure.

Steve
December 12, 2012, 07:00:30 AM
 #19

However, I think what most people argue is that on two networks, with an equivalent amount of hashing power, 6 blocks emitted from one that calibrates to produce 1 block every 10 minutes is equivalent (in terms of security) to 60 blocks emitted from the network that produces one block every minute.
This is indeed what they argue, and this is wrong.
Is it?  Note, there is no time component being discussed here.  It is simply saying that 6 confirmations on the one network are equivalent to 60 confirmations on the other (given equivalent overall hash rate).  It doesn't matter if the 6 or 60 blocks take 10 hours or 10 seconds to arrive.

CIYAM
December 12, 2012, 07:05:54 AM
 #20

Just out of curiosity, is a "time stamp" included in the block header?
