Author Topic: How would 51% double spending work in the long term? Thought experiment (Read 2845 times)
ShadowOfHarbringer (OP)
December 26, 2012, 05:13:12 PM
#21

How exactly?
By running multiple nodes spread over various IP addresses, of course.

Yes, but in no way will this be easy to do.

It would simultaneously require VERY significant resources in multiple countries around the world AND >51% hashing power AND a few million $ to buy Bitcoins to perform an attack.
Which makes it infeasible for almost every possible adversary on the planet.

An attack like this would be almost impossible to pull off, even for the government of a very rich country (we all know which country I am talking about).

Also, if we implement cementing after Y confirmations as a quick fix and set Y to something relatively high (like 20), shouldn't everybody be safe this way?
Maybe. Let's put it this way: cementing is simple and obvious; if it were that great, Satoshi would have implemented it from the start. So we shouldn't do any "quick fix" without a solid understanding of the reasons for the decision and a thorough analysis of the dynamics of the new system.

I mean, were there ever block reorgs longer than 10 blocks? Do they even occur naturally?
AFAIK no, and it depends on the definition of "naturally", but essentially no.

So the only time a block reorg longer than 10 blocks will ever happen is because of an advanced double-spending attack.
Logically, implementing cementing after Y blocks (where Y is a relatively big number, like 15, 20 or 50) should be an absolutely viable solution to our problems.

Unless there are other drawbacks, not mentioned in this discussion.
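As a minimal sketch (Python, not actual client code, and the constant and function names are made up for illustration), such a cementing rule in a node's chain-selection logic could look something like this:

Code:
# Hypothetical sketch of "cementing after Y confirmations" -- not actual Bitcoin client code.
CEMENT_DEPTH = 20  # the "Y" discussed above; the value is only an example

def should_accept_reorg(active_tip_height, fork_height, candidate_work, active_work):
    """Decide whether a competing branch may replace the currently active chain."""
    reorg_depth = active_tip_height - fork_height
    if reorg_depth > CEMENT_DEPTH:
        # Blocks buried deeper than CEMENT_DEPTH are "cemented" and never unwound,
        # no matter how much work the competing branch carries.
        return False
    # Otherwise fall back to the usual most-work rule.
    return candidate_work > active_work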

I surely could use @Gavin's opinion here.

DannyHamilton
December 26, 2012, 05:15:35 PM
#22

. . . So the only time a block reorg longer than 10 blocks will ever happen is because of an advanced double-spending attack.
Logically, implementing cementing after Y blocks (where Y is a relatively big number, like 15, 20 or 50) should be an absolutely viable solution to our problems . . .
Doesn't this already occur on an ad-hoc basis when new versions of the client are released through the process of creating a "checkpoint"?
ShadowOfHarbringer (OP)
December 26, 2012, 05:28:24 PM
#23

. . . So the only time a block reorg longer than 10 blocks will ever happen is because of an advanced double-spending attack.
Logically, implementing cementing after Y blocks (where Y is a relatively big number, like 15, 20 or 50) should be an absolutely viable solution to our problems . . .
Doesn't this already occur on an ad-hoc basis when new versions of the client are released through the process of creating a "checkpoint"?

Exactly. This already happens, but only once every few months.
So why not do it more often, if there are no known drawbacks?

Simple solutions are not always bad.

2112
December 26, 2012, 05:32:40 PM
#24

So the only time a block reorg longer than 10 blocks will ever happen is because of an advanced double-spending attack.
Or when there is an exploit, the community detects it, patches are produced, executables are distributed and all the legitimate transactions are redone.

https://en.bitcoin.it/wiki/Incidents#CVE-2010-5139


Meni Rosenfeld
December 26, 2012, 05:35:08 PM
#25

. . . So the only time a block reorg longer than 10 blocks will ever happen is because of an advanced double-spending attack.
Logically, implementing cementing after Y blocks (where Y is a relatively big number, like 15, 20 or 50) should be an absolutely viable solution to our problems . . .
Doesn't this already occur on an ad-hoc basis when new versions of the client are released through the process of creating a "checkpoint"?
Yes, but checkpoints are much more infrequent, and require upgrading the client.

How exactly?
By running multiple nodes spread over various IP addresses, of course.

Yes, but in no way will this be easy to do.

It would simultaneously require VERY significant resources in multiple countries around the world AND >51% hashing power AND a few million $ to buy Bitcoins to perform an attack.
Which makes it infeasible for almost every possible adversary on the planet.

An attack like this would be almost impossible to pull off, even for the government of a very rich country (we all know which country I am talking about).
If the IP vote overrides the longer chain, it is not clear that you still need majority hashrate. You can use your IPs to vote on your minority branch. How this works exactly will depend on the specific implementation and attack.

Also, the cost is additive. You'll need the cost of multiple IPs + the cost of hashpower + the cost of bitcoins. Tripling the resource cost isn't so impressive if the attacker might have a thousand-to-one resource advantage.

So the only time a block reorg longer than 10 blocks will ever happen is because of an advanced double-spending attack.
Logically, implementing cementing after Y blocks (where Y is a relatively big number, like 15, 20 or 50) should be an absolutely viable solution to our problems.

Unless there are other drawbacks, not mentioned in this discussion.
Suppose you're a new node joining the network. Of course you are not cemented on anything. You see some nodes broadcasting a short chain and some an incompatible, longer chain. Who do you believe? If you default to the longer chain, you could fall right into the majority attacker's net. If you use something else you make it easier to carry out an attack without majority hashrate.

In this way the attacker can prevent new nodes from joining reliably. It's not just "some poor victim somewhere will be attacked": nobody can join the network. Unless, of course, we spend some time thinking about a framework for higher-level overriding.

And I will repeat myself and say there are two primary attacks with majority hashrate: double-spending and rejection. Cementing helps with one. What about the other?

I surely could use @Gavin's opinion here.
I agree; when we do start seriously thinking about this, we will need more knowledgeable people to take part.

ShadowOfHarbringer (OP)
December 26, 2012, 05:51:27 PM
#26

I more or less agree with the rest of your post; this is the part I have doubts about:

Suppose you're a new node joining the network. Of course you are not cemented on anything. You see some nodes broadcasting a short chain and some an incompatible, longer chain. Who do you believe? If you default to the longer chain, you could fall right into the majority attacker's net. If you use something else you make it easier to carry out an attack without majority hashrate.

How is this different when using cementing?
I mean clients already take the longest chain when offered multiple chains of different length.

No change here.

(Unless you were talking about the majority rule, not cementing itself)

Meni Rosenfeld
December 26, 2012, 06:02:34 PM
#27

I more or less agree with the rest of your post; this is the part I have doubts about:

Suppose you're a new node joining the network. Of course you are not cemented on anything. You see some nodes broadcasting a short chain and some an incompatible, longer chain. Who do you believe? If you default to the longer chain, you could fall right into the majority attacker's net. If you use something else you make it easier to carry out an attack without majority hashrate.

How is this different when using cementing?
I mean clients already take the longest chain when offered multiple chains of different length.

No change here.

(Unless you were talking about the majority rule, not cementing itself)
Without cementing, new nodes use the longest branch, as do all other nodes. If there's a majority attacker, he can carry out attacks, but at least the nodes are consistent with each other.

With cementing, if a certain attack is executed, new nodes will be detached from the rest of the network (e.g., the veteran network is cemented on a shorter branch, but the new node will join the attacker's longer branch). At the very least this shows that cementing doesn't make the network (where "the network" includes the ability of nodes to join it) immune to such attacks.
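A toy illustration of that scenario (hypothetical Python, simplified to branch lengths instead of accumulated work): a veteran node that has cemented the honest branch refuses to reorg, while a freshly bootstrapping node with no history simply follows the longer attacker branch, so the two end up on different chains.

Code:
# Toy model of the scenario above -- illustrative only, not client code.
CEMENT_DEPTH = 20

def pick_branch(history, honest_len, attacker_len, fork_depth):
    """history is 'honest' for a veteran node already on the honest branch,
    or None for a node that is just bootstrapping."""
    if history is None:
        # A new node has nothing cemented, so it simply takes the longest chain.
        return 'attacker' if attacker_len > honest_len else 'honest'
    if fork_depth > CEMENT_DEPTH:
        # A veteran node refuses to unwind past its cement depth and stays put.
        return history
    return 'attacker' if attacker_len > honest_len else 'honest'

veteran = pick_branch('honest', honest_len=100, attacker_len=105, fork_depth=30)
newcomer = pick_branch(None, honest_len=100, attacker_len=105, fork_depth=30)
print(veteran, newcomer)  # honest attacker -> the new node is detached from the veterans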

ShadowOfHarbringer (OP)
December 26, 2012, 06:08:12 PM
#28

Suppose you're a new node joining the network. Of course you are not cemented on anything. You see some nodes broadcasting a short chain and some an incompatible, longer chain. Who do you believe? If you default to the longer chain, you could fall right into the majority attacker's net. If you use something else you make it easier to carry out an attack without majority hashrate.

How is this different when using cementing?
I mean clients already take the longest chain when offered multiple chains of different length.

No change here.

(Unless you were talking about the majority rule, not cementing itself)
Without cementing, new nodes use the longest branch, as do all other nodes. If there's a majority attacker, he can carry out attacks, but at least the nodes are consistent with each other.

With cementing, if a certain attack is executed, new nodes will be detached from the rest of the network (e.g., the veteran network is cemented on a shorter branch, but the new node will join the attacker's longer branch). At the very least this shows that cementing doesn't make the network (where "the network" includes the ability of nodes to join it) immune to such attacks.

Wow, I didn't think about that. Thanks for clearing this up, @Meni.

This is a very good discussion we are having; I wish I did this more often.

kjj
December 26, 2012, 06:12:37 PM
#29

So the question we must ask ourselves is: what is more important, (1) protecting the entire network against powerful adversaries, or (2) protecting single (<1%) stranded clients gone wild because of either an attack or a local network malfunction?

For me, the answer will always be (1).
I can see the marketing copy: "With Bitcoin, you are in full control of your money, and payments cannot be reversed! Unless you're one of the 1% whom we don't care about, and then payments to you could be completely bogus."

Erm, don't take this the wrong way, but this seems like a strawman to me.  If an attacker has you isolated, they can already feed you bogus blocks, and there isn't a damn thing you can do about it.  In a pure proof-of-work system, when the isolation is broken, your node will resync with the rest of the world.  But the damage (to you) has already been done; it was done when you accepted blocks that you thought were current, but really weren't.  Throwing them out and loading the correct blocks doesn't fix anything, and might actually make it harder for you to realize what happened.

This is the trade in reality.
You get:  your node can fix itself when reconnected to the global network.
You pay:  The entire global network is vulnerable to hidden chains, whatever damage you suffered while disconnected is unchanged, you don't necessarily know that you were attacked.

Seems like a terrible trade to me.

Seriously though, I don't disagree, but moving away from pure PoW is a fundamental design change and we need to consider it carefully to keep disruption and unwanted side-effects at an absolute minimum. If nodes aren't safe when participating in the network then the network is meaningless.

And as mentioned, if we consider a majority attack to be plausible, we need to protect against rejection attacks, not just double-spending.

The best part of my proposal is that it doesn't change the network rules at all.  It is still pure proof of work.  The only change is that the burden of proof for a reorganization is higher than it is now, and it scales with the amount of danger that such a reorganization represents.

Gavin Andresen
December 26, 2012, 06:13:48 PM
#30

I sent ShadowOfHarbringer some of my thoughts on this in this private message:
Quote
Take a look at Table 2 in Meni's paper: "The maximal safe transaction value, in BTC, as a function of the attacker's hashrate q and the number of confirmations n."

Let's go with a 33% hashpower attacker: at 6 confirmations you can assume any transaction up to about 300 bitcoins is safe.

(disclaimer: I haven't taken the time to digest Meni's analysis; I'm going to assume his numbers are correct).

If you're worried about that, then don't make multi-thousand-dollar bitcoin transactions with people you think might try to double-spend and rip you off OR wait for more confirmations.

Also: don't forget that "33% hashpower" means you have half as many (asics/fpgas) as the rest of the network combined:

Before attack: let's say the network has 100 Thash
You add 50 Thash, so during the attack you have 50 of 150 Thash (== 33%)
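To make the arithmetic concrete, here is a small Python sketch; the second function uses the catch-up approximation from the Bitcoin whitepaper rather than the exact model behind Meni's Table 2, so its numbers are only indicative and the function names are made up for this example.

Code:
from math import exp, factorial

def attacker_share(honest_hashrate, attacker_hashrate):
    """Fraction of total hashrate the attacker controls once their hardware is switched on."""
    return attacker_hashrate / (honest_hashrate + attacker_hashrate)

def double_spend_probability(q, z):
    """Chance a q-fraction attacker ever catches up after z confirmations
    (whitepaper-style approximation, not Meni's Table 2 model)."""
    p = 1.0 - q
    if q >= p:
        return 1.0
    lam = z * q / p
    prob = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

print(attacker_share(100e12, 50e12))      # 0.333... as in the 100 + 50 Thash example
print(double_spend_probability(1/3, 6))   # roughly 0.22 for a one-third attacker at 6 confirmations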

I don't worry much right now about economically irrational, "I'm going to spend millions of dollars to disrupt the bitcoin network" attacks because I don't think anybody is going to spend millions of dollars to disrupt our tiny payment network.

I have no idea what bitcoin payments will look like in 5-10 years; I expect all sorts of trust mechanisms and relationships to develop that are independent of the bitcoin network, and I suspect some of those will make 51% attacks irrelevant.

And I have no idea what the mining landscape will look like in 5-10 years; if thousands of companies around the world are installing bitcoin mining hardware for free in every house built in cold climates (generate bitcoins and get a discount on your heating bill) then it may be completely inconceivable for even a government to amass enough hashing power to mount a 51% attack.

So while I encourage y'all to keep thinking about it as an interesting theoretical problem, it is only slightly higher on my personal priority list than worrying about quantum computers breaking ECDSA.

ShadowOfHarbringer (OP)
December 26, 2012, 06:23:44 PM
#31

Erm, don't take this the wrong way, but this seems like a strawman to me.  If an attacker has you isolated, they can already feed you bogus blocks, and there isn't a damn thing you can do about it.  In a pure proof-of-work system, when the isolation is broken, your node will resync with the rest of the world.  But the damage (to you) has already been done; it was done when you accepted blocks that you thought were current, but really weren't.  Throwing them out and loading the correct blocks doesn't fix anything, and might actually make it harder for you to realize what happened.

This is the trade in reality.
You get:  your node can fix itself when reconnected to the global network.
You pay:  The entire global network is vulnerable to hidden chains, whatever damage you suffered while disconnected is unchanged, you don't necessarily know that you were attacked.

Seems like a terrible trade to me.

Seriously though, I don't disagree, but moving away from pure PoW is a fundamental design change and we need to consider it carefully to keep disruption and unwanted side-effects at an absolute minimum. If nodes aren't safe when participating in the network then the network is meaningless.

And as mentioned, if we consider a majority attack to be plausible, we need to protect against rejection attacks, not just double-spending.

The best part of my proposal is that it doesn't change the network rules at all.  It is still pure proof of work.  The only change is that the burden of proof for a reorganization is higher than it is now, and it scales with the amount of danger that such a reorganization represents.

This is not how things work. Let me tell you how you should do it, if you ever want something done (as with my fork, I have already learned the hard way how things work in the world).

Nobody will do anything for you (for free). If you ever want to get stuff done, you must work hard, do it and code it yourself, and test it thoroughly; only then, when you are 100% sure that it works correctly, can you try to convince people to merge it into the client.

Meni Rosenfeld
December 26, 2012, 06:28:06 PM
#32

Erm, don't take this the wrong way, but this seems like a strawman to me.  If an attacker has you isolated, they can already feed you bogus blocks, and there isn't a damn thing you can do about it.  In a pure proof-of-work system, when the isolation is broken, your node will resync with the rest of the world.  But the damage (to you) has already been done; it was done when you accepted blocks that you thought were current, but really weren't.  Throwing them out and loading the correct blocks doesn't fix anything, and might actually make it harder for you to realize what happened.

This is the trade in reality.
You get:  your node can fix itself when reconnected to the global network.
You pay:  The entire global network is vulnerable to hidden chains, whatever damage you suffered while disconnected is unchanged, you don't necessarily know that you were attacked.

Seems like a terrible trade to me.
Maybe. I think the ability to recover is important, both practically and for the theoretical "correctness" of the system.

The best part of my proposal is that it doesn't change the network rules at all.  It is still pure proof of work.  The only change is that the burden of proof for a reorganization is higher than it is now, and it scales with the amount of danger that such a reorganization represents.
I used "Pure PoW" in the narrower sense of sticking to the longest branch (which depends only on the current observed data and not the history of receiving it).

Take a look at Table 2 in Meni's paper: "The maximal safe transaction value, in BTC, as a function of the attacker's hashrate q and the number of confirmations n."

Let's go with a 33% hashpower attacker: at 6 confirmations you can assume any transaction up to about 300 bitcoins is safe.

(disclaimer: I haven't taken the time to digest Meni's analysis; I'm going to assume his numbers are correct).
Thank you for this. I would however like to emphasize that Table 2 is probably the least reliable item in the paper; it assumes a specific model for an attack which is likely to diverge significantly from reality. (And doesn't even follow the math of that model to the end, because of the complexity of the calculations.)

So while I encourage y'all to keep thinking about it as an interesting theoretical problem, it is only slightly higher on my personal priority list than worrying about quantum computers breaking ECDSA.
I see what you're saying. When you say it's only slightly higher than breaking ECDSA, you mean that THIS PROBLEM IS SO EXTREME AND URGENT THAT WE NEED TO FREAK OUT ABOUT IT RIGHT NOW!  No? Ok.

paraipan
December 26, 2012, 06:34:24 PM
#33

...
So while I encourage y'all to keep thinking about it as an interesting theoretical problem, it is only slightly higher on my personal priority list than worrying about quantum computers breaking ECDSA.


Nice to know that, Gavin, thank you. It will be interesting to see how solutions are developed, given enough time.

kjj
December 26, 2012, 07:27:08 PM
#34

Maybe. I think the ability to recover is important, both practically and for the theoretical "correctness" of the system.

You don't lose the ability to recover.  You only lose the silent, automatic recovery that obfuscates the damage.  You, the human, are going to have a mess that you are personally going to have to deal with.  The difference is that in one scenario, you have no idea what happened or why, and in the other scenario, you have to clear out some invalid blocks and restart your node, but you know exactly why.

Take a look at Table 2 in Meni's paper: "The maximal safe transaction value, in BTC, as a function of the attacker's hashrate q and the number of confirmations n."

Let's go with a 33% hashpower attacker: at 6 confirmations you can assume any transaction up to about 300 bitcoins is safe.

(disclaimer: I haven't taken the time to digest Meni's analysis; I'm going to assume his numbers are correct).
Thank you for this. I would however like to emphasize that Table 2 is probably the least reliable item in the paper; it assumes a specific model for an attack which is likely to diverge significantly from reality. (And doesn't even follow the math of that model to the end, because of the complexity of the calculations.)

Also, I should point out that section 6 of the paper doesn't really apply here, because section 6 is written with the assumption that q < p, while the assumption in this thread is that q > p.  In that case, we are assuming that the attacker will eventually overtake the honest network.  Your only protection then is to have done all your transactions and converted all of your bitcoins to wealth before the attacker publishes the chain that destroys the entire system.
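The q < p versus q > p distinction shows up directly in the classic gambler's-ruin catch-up result used in both analyses; a short Python sketch (illustrative only, made-up function name):

Code:
def catch_up_probability(q, z):
    """Chance a q-fraction attacker ever erases a z-block deficit."""
    p = 1.0 - q
    if q >= p:
        return 1.0        # q > p: overtaking the honest chain is only a matter of time
    return (q / p) ** z   # q < p: the chance shrinks exponentially with the deficit

print(catch_up_probability(0.33, 6))  # about 0.014
print(catch_up_probability(0.51, 6))  # 1.0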

phelix
December 27, 2012, 11:27:32 AM
#35

So the only time a block reorg longer than 10 blocks will ever happen is because of an advanced double-spending attack.
Or when there is an exploit, the community detects it, patches are produced, executables are distributed and all the legitimate transactions are redone.

https://en.bitcoin.it/wiki/Incidents#CVE-2010-5139

this

khal
December 30, 2012, 12:14:34 PM
#36

So the only time a block reorg longer than 10 blocks will ever happen is because of an advanced double-spending attack.
Another case is when a country or an island gets isolated from the rest of the world.
I guess that's why Satoshi did not put in an automatic cementing feature. But couldn't such a case be detected from a suddenly much slower network speed? At least the user might know about it :p
paraipan
December 30, 2012, 02:18:09 PM
#37

So the only time a block reorg longer than 10 blocks will ever happen is because of an advanced double-spending attack.
Another case is when a country or an island gets isolated from the rest of the world.
I guess that's why Satoshi did not put in an automatic cementing feature. But couldn't such a case be detected from a suddenly much slower network speed? At least the user might know about it :p

In such a case the "cementing feature" would be more than useful, to lock clients in on the "island blockchain".

Just imagine how you would react after trading for a whole year with your neighbors, and then a new neighbor comes in with a satellite link to the Internet and the whole island blockchain is dumped the moment the first wallet updates (assuming the main chain is longer). I know I would be pissed, and would at least like to have the option to keep doing business on the "island blockchain". The moment you have a group of bitcoin users who work and trade over a blockchain fork for longer than a week, you can confidently say they are trading in their own digital currency, and there isn't an easy way to deal with that at the blockchain level.

This is similar to bitcoin tech ported to other planets, which, in the absence of some low-latency telecommunication channel, would naturally create their own local currency.

ShadowOfHarbringer (OP)
December 30, 2012, 02:47:35 PM
#38

I may be wrong, but I think that the best solution would be to implement @kjj's idea, with difficulty automatically increasing exponentially if a blockchain fork is longer than Y blocks.

This is still pure Proof of Work, not really cementing. It would simply require much more than 51% of the hashing power to fork the blockchain after Y blocks.
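A rough sketch of how such a rule might look (hypothetical Python; this reads kjj's "Y^(B-X) more difficulty", as quoted later by gmaxwell, as a multiplicative work requirement, and the parameter values are only examples):

Code:
# Hypothetical sketch of an exponential reorg penalty -- parameter values are examples only.
X = 6     # reorgs of up to X blocks are handled by the normal most-work rule
Y = 2.0   # penalty base for deeper reorgs

def reorg_accepted(new_chain_work, old_chain_work, blocks_thrown_out):
    """Accept a reorg only if the competing branch carries enough extra work."""
    B = blocks_thrown_out
    if B <= X:
        return new_chain_work > old_chain_work
    # Beyond X blocks, demand Y**(B - X) times the work of the chain being replaced.
    return new_chain_work > old_chain_work * Y ** (B - X)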


Meni Rosenfeld
December 30, 2012, 03:06:31 PM
#39

I may be wrong, but I think that the best solution would be to implement @kjj's idea, with difficulty automatically increasing exponentially if a blockchain fork is longer than Y blocks.

This is still pure Proof of Work, not really cementing. It would simply require much more than 51% of the hashing power to fork the blockchain after Y blocks.
It's soft cementing (wet cement?). It probably has some advantages over hard cementing, but essentially it has the same tradeoffs.

gmaxwell
January 02, 2013, 07:00:35 AM (last edit: January 02, 2013, 07:25:12 AM)
#40

Quote from: kjj
to counter this, I proposed an exponential difficulty ramp for reorganizations.  Reorgs of up to X blocks (6 might be a good value for X) happen normally, with the longer chain winning.  Reorgs beyond X blocks require Y^(B-X) more difficulty in the new chain than the old chain, where B is the number of blocks to be thrown out.
::sigh:: I'm pretty sure I've gone through this before, but perhaps I now know how to explain this in a way that will get through.

Self-consistency and global convergence are mutually incompatible objectives.  Large-scale failure of convergence is the worst possible failure mode for Bitcoin, worse than any amount of transaction reversal.

If participants prefer their past state it means that two participants with different pasts can fail to agree on the current state even when presented with identical information. Small amounts of self-consistency can be tolerated, but only at the direct expense of convergence (which is why bitcoin is often not converged at all after just a single block, though it becomes converged, with an infinitesimal chance of convergence failure, after enough blocks relative to the communication delays).

The problem of non-convergence is made worse for distributed systems in a universe with a speed of light, as an attacker can pick which nodes have which past by selecting the 'locations' they use to announce messages, and so they can choose the shape of the non-convergence to be most damaging.

To specifically address your particular suggestion: a byzantine attacker obtains a large amount of hashpower for long enough to mine a couple of blocks (say twelve total, for your figures) faster than the network, and he maps out the locations of large mining power concentrations with some accuracy.  Then he mines two forks off the current head, starting with blocks A and A'.  He feeds these forks in parallel to all nodes, giving A to half the estimated hash power and A' to the other half. Congrats: Bitcoin is over; from one currency you have two. (Well, at least until people manually intervene, turn off your 'tweak' or force nodes onto the state they're rejecting, and the system causes a reversal of all transactions on one side or the other, belonging to and trusted by roughly half the participants.)

The idea of 'solving' high-hashpower attackers by imposing some kind of self-consistency constraint appears to be one of the perennial (bad) proposals around here. They all damage convergence (which is a more important objective), they all fail to prevent reversals with certainty (e.g. huge amounts would be reversed in my above attack example when consistency is finally recovered, transactions with many more confirms than the 6 the attacker produced), and to the extent that they might do anything at all, they all present a new practical attack surface where none existed before. It's possible to propose forms which are so weak (e.g. only barring >= 210000-block reorgs) that they do nothing at all and thus also don't create a practical problem, but if you propose one that would stop an actual attack, then that same attack effort could instead be put into creating an optimized convergence failure, by careful propagation, to produce a maximum-entropy partitioning.
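To see the convergence failure described above in miniature, here is a toy Python sketch (made-up numbers, not client code) of two halves of the hashpower that each received a different attacker fork and, under a max-reorg-depth rule, will never switch to the other:

Code:
# Toy illustration of the partitioning attack against a self-consistency rule.
CEMENT_DEPTH = 6

def will_switch(own_len, other_len, fork_depth):
    """A node with a max-reorg-depth rule only switches branches if the fork is shallow enough."""
    return fork_depth <= CEMENT_DEPTH and other_len > own_len

# The attacker extends two incompatible forks A and A' past the cement depth,
# then shows A to one half of the hashpower and A' to the other half.
fork_depth = 12
half_on_A = will_switch(own_len=12, other_len=13, fork_depth=fork_depth)
half_on_A_prime = will_switch(own_len=13, other_len=12, fork_depth=fork_depth)
print(half_on_A, half_on_A_prime)  # False False -> neither half ever converges onto the other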