Sergio_Demian_Lerner (OP)
|
|
September 06, 2012, 09:51:42 PM |
|
In the following months Butterfly Labs (http://www.butterflylabs.com/) will be introducing a new ASIC miner product. This will increase MHash/s/$ approximately 30 times. Other vendors such as http://www.btcfpga.com are building competing products. Let's take the "BitForce Single SC" (BF) as a reference: 40 GH/s for $1,299.

Although at first glance this looks like a huge benefit for the network, there are new vulnerabilities we must face:

1. There will be a window of time during which a vulnerability is exposed to a government, or to anyone willing to invest 1M USD, to temporarily (1 week?) disrupt Bitcoin and trigger a rush out of the coin (a big price fall). An attacker can exhaust the bandwidth of all the connections in the Bitcoin network.

The attacker needs 820 BF units (1M USD) to achieve 32,800 GH/s (about 2^45 hash/s). He chooses as root the block at index 193000, which has a PoW of 2^53 hashes (53 zero bits). From checkpoints.cpp:

(193000, uint256("0x000000000000059f452a5f7340de6682a977387c17010ff6e6c3bd83ca8b1317"))

Since block 193000 was issued on 2012-08-09, the attacker waits 4 months so that ComputeMinWork() allows the acceptance of PoW with 4 fewer bits. (This lowers the money required 16 times.) He can then produce each 2^(53-4) PoW in 2^(53-4-45) = 16 seconds.

He starts creating a branch from block 193000: each block is 1 megabyte long, carries the current (not past) block time, contains a single coinbase transaction, and extends the chain of the previously created block. He sends one block every 16 seconds. All nodes start spreading these past blocks, possibly filling the entire network bandwidth and crowding out normal blocks for as long as it takes most of the nodes to upgrade. The attacker will also be filling 5.4 GB of hard disk every day, and the blockchain on disk will need to be manually pruned to cut the offending branch so it is compacted back to its normal size.
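As a sanity check on the arithmetic above, here is a quick back-of-the-envelope script (the unit count, price, and the 4-bit ComputeMinWork relaxation are taken from the post; everything else is plain arithmetic):

```python
import math

units = 820                      # BitForce Single SC units, ~$1,299 each
hashrate = units * 40e9          # 40 GH/s each -> 3.28e13 H/s, roughly 2^45
assert 44 < math.log2(hashrate) < 45

required_bits = 53 - 4           # block 193000 PoW, minus 4 bits after ~4 months
seconds_per_block = 2 ** required_bits / 2 ** 45
assert seconds_per_block == 16

blocks_per_day = 86400 / seconds_per_block      # 5400 spam blocks per day
disk_per_day_gb = blocks_per_day * 1e6 / 1e9    # at 1 MB each -> 5.4 GB/day
assert disk_per_day_gb == 5.4
```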
The only way to recover from these attacks is by downloading a new version of the client with a new checkpoint at a much higher block difficulty. I can't think of any other possible patch. Maybe the interval between new releases could be decreased during the transition from GPUs to ASICs.

2. What would happen if ALL miners switch to this cheap 30x ASIC solution and the vendor has built a backdoor into the chip to:
- stop working after block height N, or
- hide some private information (e.g. part of a private key) in the nonce (as a side channel attack)?

In the first case, the network suddenly stops, and because of the higher difficulty already reached, there will be one block every 5 hours for a period of 14*30 = 420 days!! This would cripple Bitcoin for a long while and would require a manual adjustment of the difficulty. In the second case, an attacker may compromise the wallets of all miners!

People should use open source mining solutions....

Best regards,
Sergio.
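The 420-day figure follows directly from the retarget rules; a tiny sketch of the arithmetic, assuming all hashpower vanishes at once:

```python
block_target_min = 10     # normal block interval target, minutes
retarget_blocks = 2016    # difficulty only adjusts after this many blocks
slowdown = 30             # hashpower lost if all the 30x ASICs halt together

interval_hours = block_target_min * slowdown / 60   # 5 hours per block
assert interval_hours == 5.0

days_to_retarget = retarget_blocks * interval_hours / 24
assert days_to_retarget == 420.0  # the "14*30 days" in the post
```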
|
|
|
|
Revalin
|
|
September 07, 2012, 01:47:12 AM |
|
> People should use open source mining solutions....

I want people to use open source solutions, but for-profit miners are going to use whatever works best for them. This is one of many reasons that it's important to keep developing Litecoin and other alternative chains.
|
War is God's way of teaching Americans geography. --Ambrose Bierce Bitcoin is the Devil's way of teaching geeks economics. --Revalin 165YUuQUWhBz3d27iXKxRiazQnjEtJNG9g
|
|
|
Gavin Andresen
Legendary
Offline
Activity: 1652
Merit: 2301
Chief Scientist
|
|
September 07, 2012, 02:20:19 AM |
|
First: I think it is extremely unlikely that somebody would spend a million dollars on an attack that takes months to pull off, doesn't benefit the attacker at all, is easy to fix, and would be easy for the network to recover from.

> The only way to recover from these attacks is by downloading a new version of the client with a new checkpoint with a much higher block difficulty. I can't think of any other possible patch. Maybe the interval between new releases during the transition from GPUs to ASICs could be decreased.
Good idea, and easy to do. I've got a half-finished "user-defined checkpoint" patch in my personal git tree, so users, merchants, and big mining pools can decide for themselves to add checkpoints on-the-fly (via an 'addcheckpoint' RPC command) to protect against this type of attack.
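A hypothetical sketch of what user-defined checkpoints might look like (illustrative Python only, not Gavin's actual patch; the function names are invented):

```python
# Pinned heights -> block hashes; seeded from the hard-coded checkpoint
# quoted earlier in the thread.
checkpoints = {193000: "000000000000059f452a5f7340de6682a977387c17010ff6e6c3bd83ca8b1317"}

def addcheckpoint(height, block_hash):
    """What an 'addcheckpoint' RPC might do: pin a height to a hash."""
    checkpoints[height] = block_hash

def chain_acceptable(chain):
    """Reject any candidate chain that contradicts a pinned checkpoint.
    `chain` is modeled as a dict of height -> block hash."""
    return all(chain[h] == want for h, want in checkpoints.items() if h in chain)

addcheckpoint(200000, "deadbeef" * 8)   # made-up hash, for illustration
honest = {193000: checkpoints[193000], 200000: "deadbeef" * 8}
attack = {193000: checkpoints[193000], 200000: "f00dbabe" * 8}
assert chain_acceptable(honest) and not chain_acceptable(attack)
```

This also makes gmaxwell's later objection concrete: two nodes calling `addcheckpoint` with different hashes for the same height can never agree on a chain.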
|
How often do you get the chance to work on a potentially world-changing project?
|
|
|
Etlase2
|
|
September 07, 2012, 02:37:36 AM |
|
> doesn't benefit the attacker at all,

We have to stop having this mentality when the potential exists for someone who just wants to ruin the network. Otherwise it's head-in-sand.

> is easy to fix, and that would be easy for the network to recover from.

Anything that requires developer intervention and community consensus is not an easy fix, and it is very bad for the reputation of the network.

> I've got a half-finished "user-defined checkpoint" patch in my personal git tree, so users, merchants, and big mining pools can decide for themselves to add checkpoints on-the-fly (via an 'addcheckpoint' RPC command) to protect against this type of attack.
So *some* nodes *might* be protected by an option that isn't required or part of the protocol. Who just lost 250k because of an unencrypted wallet? Instead, have a second difficulty determined by bitcoin days destroyed over the last 2016 blocks, and work some kind of formula around that so it is easy for a legitimate block chain to overtake an attacking chain with significantly less hashing power. Only a client update is needed: no options, no breaking changes, and security against a sustained 51% attack. Unless the attacker controls more bitcoin days destroyed than the entire rest of the network's activity during the time frame of a history rewrite, his chain will be ignored. If anyone is worried about this possibility, there could be a further addition to the formula, similar to checkpoints on the fly.
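Etlase2 doesn't specify the formula, but the core quantity can be sketched (a hypothetical reading: bitcoin days destroyed summed over a window of blocks, where each spent input contributes its value times its age):

```python
def days_destroyed(blocks):
    """Bitcoin days destroyed over a window of blocks. Each block is a
    list of transactions; each transaction is a list of (value_btc,
    age_days) pairs for its inputs. Purely illustrative data model."""
    return sum(value * age
               for block in blocks
               for tx in block
               for value, age in tx)

# An attacker re-mining in secret moves almost no aged coins...
attack_window = [[[(50.0, 0.1)]] for _ in range(2016)]
# ...while normal activity keeps spending coins with real history.
honest_window = [[[(10.0, 30.0), (2.5, 90.0)]] for _ in range(2016)]
assert days_destroyed(honest_window) > days_destroyed(attack_window)
```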
|
|
|
|
kjj
Legendary
Offline
Activity: 1302
Merit: 1026
|
|
September 07, 2012, 03:37:14 AM |
|
I just don't see any part of this working.

First, a million dollars won't do it, because a million dollars' worth of available ASICs doesn't exist. I guess the million could be spent stealing (or developing) a clone, but that is just an argument in favor of getting ASIC miners into people's hands ASAP.

Second, the attack chain would be laughably invalid. I think you might be right about the DOS potential here, but only from flooding the network with blocks that can never be connected. See * below for mitigation.

Third, what gets passed to the device is the midstate. The device has no idea what the current block height is, nor does it have access to any sort of keys. (See here for an example of what exactly gets sent to the device.)

* This suggests a potentially useful patch. I haven't checked; maybe something like this is already implemented. If you get a block that could potentially replace block N, but the new block's timestamp is more than X hours after the timestamp in block N, refuse to relay it. X=3 fits with the currently allowed amount of clock skew in the network, but X=6, X=12 or X=24 would be more conservative, and any of them would work.
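kjj's footnote rule can be sketched in a few lines (hypothetical helper name; X=3 matches the clock-skew argument above):

```python
def should_relay(fork_block_ts, replaced_block_ts, x_hours=3):
    """kjj's suggested relay rule: don't relay a block that would replace
    block N if its timestamp is more than X hours after block N's."""
    return fork_block_ts <= replaced_block_ts + x_hours * 3600

now = 1_347_000_000                 # a Unix time around September 2012
month_ago = now - 30 * 86400
assert should_relay(month_ago + 3600, month_ago)   # honest short reorg: relayed
assert not should_relay(now, month_ago)            # month-old spam fork: dropped
```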
|
17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8 I routinely ignore posters with paid advertising in their sigs. You should too.
|
|
|
Etlase2
|
|
September 07, 2012, 03:44:33 AM |
|
> * This suggests a potentially useful patch. I haven't checked, maybe something like this is already implemented. If you get a block that could potentially replace block N, but the new block's timestamp is more than X hours after the timestamp in block N, refuse to relay it.

Besides the fact that timestamps are added by the miner creating the block: if you mean the time when the blocks are received, this breaks the "unified vision" of one block chain. Forks could exist permanently without users or miners having done anything wrong. But this is also along the same lines as what Gavin suggested. This is mostly for the case of a network split, though, which I think is pretty unlikely and shouldn't be guarded against when doing so leaves the network open to certain more important attacks. If there is a permanent fork, then leave it up to the users and the community to decide which chain is the correct one. In the case of a network split where one country's internet is cut off or something, the answer is obvious.
|
|
|
|
Foxpup
Legendary
Offline
Activity: 4532
Merit: 3183
Vile Vixen and Miss Bitcointalk 2021-2023
|
|
September 07, 2012, 03:58:48 AM |
|
> 2. What would happen if ALL miners switch to this cheap 30x ASIC solution and the vendor has built a backdoor into the chip to:
> - stop working after block height N
> - hide some private information (e.g. part of a private key) in the nonce (as a side channel attack)
> ...
> In the second case, an attacker may compromise the wallets of all miners!
The second scenario is impossible. Mining software (that isn't a wallet-stealing trojan) does not have access to your private keys, and the hardware has no access to any data except what the software sends to it. There is no way for an ASIC (or any other kind of mining hardware) to know about your private keys. It isn't even necessary (or useful) to run a miner on the same system that holds your private keys in the first place.
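A schematic sketch of Foxpup's point (illustrative only: real miners send the SHA-256 midstate of the first 64 header bytes, which Python's hashlib doesn't expose, so an ordinary digest stands in). Only header material ever reaches the device, never wallet data:

```python
import hashlib

def work_unit(block_header):
    """Schematic view of what mining software hands to the device: a
    compact summary of the first 64 header bytes plus the 16-byte tail
    that contains the nonce field. (Real miners send the SHA-256
    *midstate* of the first chunk; a plain digest stands in here.)
    Note nothing wallet-related is even in scope."""
    assert len(block_header) == 80     # Bitcoin block headers are 80 bytes
    head, tail = block_header[:64], block_header[64:]
    return hashlib.sha256(head).digest(), tail

summary, tail = work_unit(bytes(80))
assert len(summary) == 32 and len(tail) == 16
```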
|
Will pretend to do unspeakable things (while actually eating a taco) for bitcoins: 1K6d1EviQKX3SVKjPYmJGyWBb1avbmCFM4
I am not on the scammers' paradise known as Telegram! Do not believe anyone claiming to be me off-forum without a signed message from the above address! Accept no excuses and make no exceptions!
|
|
|
kjj
Legendary
Offline
Activity: 1302
Merit: 1026
|
|
September 07, 2012, 04:00:46 AM |
|
> * This suggests a potentially useful patch. I haven't checked, maybe something like this is already implemented. If you get a block that could potentially replace block N, but the new block's timestamp is more than X hours after the timestamp in block N, refuse to relay it.

> Besides the fact that timestamps are added by the miner creating the block, if you mean the time when the blocks are received this breaks the "unified vision" of one block chain. Forks can exist permanently without users or miners having done anything wrong.

The miner timestamp. We already enforce rules on miner-provided timestamps; this is just one more. It shouldn't cause any problems for honest forks, even when X is pretty low.

Read his attack again: it depends on timestamp manipulation to multiply the amount of DOS blocks generated by a factor of 16 (per month). There is absolutely no reason why the network should consider a block mined today, with a timestamp from today, as a candidate to create a month-old fork. The only legitimate reason to allow this is for cases like the infamous overflow bugfix. I doubt that such a fix would work the same way today as it did back then, but if that is a concern, setting X to something high, like 168, should provide plenty of time.
|
17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8 I routinely ignore posters with paid advertising in their sigs. You should too.
|
|
|
Etlase2
|
|
September 07, 2012, 04:25:03 AM |
|
> The miner timestamp. We already enforce rules on the miner-provided timestamps,

No, the rule is enforced between blocks strung together, not on the blocks on their own. Block 2 cannot be more than 2 hours before block 1, or whatever the rule is. If another whole chain of blocks comes along from a common point, then as long as it follows that one rule about block X+1 being less than 2 hours before block X, it is valid.

> Read his attack again, it depends on timestamp manipulation to multiply the amount of DOS-blocks generated by a factor of 16 (per month).

It does not depend on timestamp manipulation; it depends on creating a chain with a very low difficulty with the intent to spam the network. I don't think Sergio's specific example works, though, because of the 2016-block requirement for changing difficulty.

> There is absolutely no reason why the network should consider a block mined today, with a timestamp from today, as a candidate to create a month-old fork.

Well, perhaps you should check the code then, because it is perfectly valid. The only thing preventing this is hard-coded checkpoints.
|
|
|
|
gmaxwell
Moderator
Legendary
Offline
Activity: 4270
Merit: 8805
|
|
September 07, 2012, 04:48:47 AM Last edit: September 07, 2012, 05:26:30 AM by gmaxwell |
|
> The only way to recover from these attacks is by downloading a new version of the client with a new checkpoint with a much higher block difficulty. I can't think of any other possible patch. Maybe the interval between new releases during the transition from GPUs to ASICs could be decreased.

> Good idea, and easy to do. I've got a half-finished "user-defined checkpoint" patch in my personal git tree, so users, merchants, and big mining pools can decide for themselves to add checkpoints on-the-fly (via an 'addcheckpoint' RPC command) to protect against this type of attack.

I feel fairly leery about this. In terms of general badness, having nodes on mutually inconsistent forks, _regardless of the details_, is actually much, much worse than just about anything that can happen short of some kind of long sustained attack (which can't be fixed by adding checkpoints if it really is sustained). The ability to add a checkpoint is basically a big footgun, because it sounds pretty attractive in the short term or under simplistic analysis (ignoring what happens when everyone else doesn't do the same thing as you).

Basically, getting a transaction Finney-attacked out from under you _sucks_, and there surely would be a temptation to go around and convince people to set some checkpoint to undo it. Perhaps you might even get some friends to join a co-conspiring network to coordinate it a bit and allow you to pay the participants, a la GPUMAX (introducing an ugly bit of central control if it grew to a relevant size). But actually getting enough of a majority of hashpower onto it would be very hard... and as bad as your Finney attack is, the currency being split in two for potentially days or weeks while this is resolved would be much, much worse for everyone collectively (though this is mostly an externalized cost that you don't care about when you start the snowball). And if it _isn't_ hard to get the hashpower onto it, then it really is a highly vulnerable central point of control itself. We think and work so hard to make sure that any BIP rule change we'd introduce doesn't carry the risk of triggering a hardfork... An addcheckpoint RPC could just as easily be called addhardfork.
And the PPcoin results convince me that there is a fairly substantial part of the community that doesn't really grok decentralized systems, and that they would use a checkpoint RPC foolishly if given the chance, especially if guided by leaders who don't understand the technology themselves (e.g. people who run justly loved services, but understand bitcoin poorly enough, or are indifferent enough to it, to pick the worst transaction styles for scalability), since with PPcoin people are willing to pay a premium for coins which are checkpointed block by block by some anonymous authority (I mined a bit and even had one of my blocks orphaned by one of their centrally controlled checkpoints!). Perhaps, because of this reality, bitcoin is already doomed to become a failed experiment: a modest money maker for the earliest participants, but something that eventually becomes undifferentiated from all the rules-of-convenience based currencies. I hope not.

As far as "any other possible patch" goes, I believe the correct (and really boring) solution to any and all orphan/weak-chain flooding concerns, one which doesn't depend on any checkpoints or other potentially risky compromises, is this: select the best chain first based on headers only (very small! 10 years' worth is under 50 MB), then only switch to the second best if the best fails validation. I wrote about this some time back, and originally suggested it somewhat earlier when roconnor went a bit fatalistic, thinking that there was no way to produce a DOS-resistant node without checkpoints. IIRC, roconnor was satisfied that header-based chain selection was sufficient. I haven't bothered writing any code for it, though, as I don't think it's actually important; for me it's enough to know that it's possible, without any incompatible changes or especially difficult implementation.
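The headers-first selection described here can be sketched roughly like this (hypothetical data structures; real chainwork is derived from each header's compact difficulty target rather than stored as a field):

```python
def total_work(headers):
    # cumulative proof-of-work claimed by a header chain: modeled here as
    # the sum of per-header difficulty (a stand-in for real chainwork)
    return sum(h["difficulty"] for h in headers)

def select_chain(candidate_chains, fully_validates):
    """Rank candidates by header work alone (cheap to compute), and only
    fall back to the next-best chain if full validation fails."""
    for chain in sorted(candidate_chains, key=total_work, reverse=True):
        if fully_validates(chain):
            return chain
    return None

weak_spam = [{"difficulty": 1} for _ in range(5400)]        # low-work flood
honest = [{"difficulty": 2_000_000} for _ in range(144)]    # a day of real blocks
best = select_chain([weak_spam, honest], fully_validates=lambda c: True)
assert best is honest   # the flood's full blocks are never even fetched
```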
This remedy would be greatly improved by either reducing the timewarp attack surface by making the timestamp rules slightly stricter, or by increasing the minimum difficulty after some height to, say, 10000 (~irrelevant hardfork risk: only a risk if bitcoin fails), or both... but it generally solves the problem even without those tweaks.
|
|
|
|
kjj
Legendary
Offline
Activity: 1302
Merit: 1026
|
|
September 07, 2012, 04:52:46 AM |
|
> No, the rule is enforced between blocks strung together, not the blocks on their own.

In main.cpp, the two rules are: CBlock::CheckBlock ensures that a block's timestamp is no more than 2 hours into the future at the time it is first seen by a node, and CBlock::AcceptBlock ensures that a block's timestamp is greater than the median of the timestamps of the 11 blocks before it. AcceptBlock is called before a block is written to disk; CheckBlock is called earlier. In practice, this gives you about 3 hours of wiggle room.

> It does not depend on timestamp manipulation, it depends on creating a chain with a very low difficulty with the intent to spam the network.

By starting from the latest checkpoint and waiting a month, he is able to generate 16 times as many blocks as he otherwise could. The amount of blockspam is directly proportional to the interval between the block chosen as his starting point (typically the latest checkpoint) and today.

> Well perhaps you should check the code then, because it is perfectly valid. The only thing preventing this is hard-coded checkpoints.

Yes, thank you; my entire point was that the code as written today allows this case. I'm suggesting that it might be a good idea to change that.
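The two timestamp rules cited here can be sketched as follows (a simplified model of the main.cpp checks, with invented helper names):

```python
import statistics

MAX_FUTURE_DRIFT = 2 * 3600  # CheckBlock: at most 2 hours ahead of local time

def check_block_time(block_ts, node_now):
    """CBlock::CheckBlock's timestamp rule, roughly."""
    return block_ts <= node_now + MAX_FUTURE_DRIFT

def accept_block_time(block_ts, prev_11_timestamps):
    """CBlock::AcceptBlock's rule, roughly: must exceed the median of
    the previous 11 block timestamps."""
    return block_ts > statistics.median(prev_11_timestamps)

now = 1_347_000_000
prev = [now - 600 * i for i in range(11, 0, -1)]  # 11 prior blocks, 10 min apart
assert check_block_time(now + 3600, now)          # 1h ahead: fine
assert not check_block_time(now + 3 * 3600, now)  # 3h ahead: rejected
assert accept_block_time(now, prev)               # above the median: fine
assert not accept_block_time(prev[0] - 600, prev) # too far in the past: rejected
```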
Did you even read my posts? Right now, an attacker doesn't even need to create much of a chain. He can just generate enough blocks to trigger the difficulty adjustment a few times, and then keep generating the lowest difficulty block over and over again.
|
17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8 I routinely ignore posters with paid advertising in their sigs. You should too.
|
|
|
gmaxwell
Moderator
Legendary
Offline
Activity: 4270
Merit: 8805
|
|
September 07, 2012, 04:59:01 AM |
|
> Right now, an attacker doesn't even need to create much of a chain. He can just generate enough blocks to trigger the difficulty adjustment a few times, and then keep generating the lowest difficulty block over and over again.

Right. Because of the timewarp attack, an attacker who cuts back more than 2 weeks (more is better, of course) can make a chain with interleaved past timestamps whose difficulty keeps dropping no matter how much hashpower he's throwing at it (if anyone is interested in this attack, I performed it on testnet3). Of course, if he doesn't have a majority of the hashpower, his chain won't be the longest; so it's moot except as a flooding DOS against nodes with the full block-sync behavior. It's possible to do a little more aggressive timestamp sanity checking to largely close off that behavior... but it's hardly an attack if nodes first check header difficulty before pulling a chain.
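The difficulty-drop mechanism can be illustrated with a toy retarget loop (hedged: this is a simplification; the real timewarp also exploits the off-by-one retarget window and interleaved stamps, which this sketch does not model):

```python
TARGET_TIMESPAN = 14 * 24 * 3600  # two weeks, as in Bitcoin's retarget

def retarget(difficulty, first_ts, last_ts):
    """Simplified Bitcoin retarget: scale difficulty by target/actual
    timespan, with the adjustment clamped to a factor of 4 either way."""
    actual = last_ts - first_ts
    actual = max(TARGET_TIMESPAN // 4, min(actual, TARGET_TIMESPAN * 4))
    return difficulty * TARGET_TIMESPAN / actual

# By stamping each period's last block far in the future, the apparent
# timespan is huge, so difficulty falls by the maximum 4x every period,
# regardless of how fast the attacker actually mines.
difficulty = 1_000_000.0
ts = 0
for _ in range(3):
    difficulty = retarget(difficulty, ts, ts + 10 * TARGET_TIMESPAN)
    ts += 10 * TARGET_TIMESPAN
assert difficulty == 1_000_000.0 / 64  # three capped 4x drops
```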
|
|
|
|
kjj
Legendary
Offline
Activity: 1302
Merit: 1026
|
|
September 07, 2012, 05:05:01 AM |
|
> Right. Because of the timewarp attack an attacker who cuts >2 weeks back (more is better, of course) can make a chain with interleaved past timestamps which reduces in difficulty no matter how much hashpower he's throwing at it... but it's hardly an attack if nodes first check header difficulty before pulling a chain.

The problem is that when you look at a block by itself, you don't know whether that block is eventually going to be part of a chain with more difficulty than what you already have. The timewarp won't give the attacker a longer (more difficult) chain, but it can allow him to create a ton of blocks that look valid enough by themselves that we have to keep them around anyway. This is a potential DOS vector, not necessarily a Finney attack. I know that a ton of work is currently underway on the block storing and indexing parts of the client, so presumably a node will be able to purge BS blocks like this sooner or later, but why even assist the attacker by letting the network relay them?
|
17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8 I routinely ignore posters with paid advertising in their sigs. You should too.
|
|
|
Etlase2
|
|
September 07, 2012, 05:07:02 AM |
|
> Yes, thank you, my entire point was that the code as written today allows this case. I'm suggesting that it might maybe be a good idea to change that. Did you even read my posts?

I was trying to draw attention to the fact that all the suggested fixes here involve allowing permanent forks. But yeah, I misread some tenses.
|
|
|
|
gmaxwell
Moderator
Legendary
Offline
Activity: 4270
Merit: 8805
|
|
September 07, 2012, 05:09:59 AM |
|
> The problem is that when you look at a block by itself, you don't know if that block is eventually going to be part of a chain with more difficulty than what you already have or not. The timewarp won't give the attacker a longer (more difficult) chain, but it can allow him to create a ton of blocks that look valid enough by themselves that we have to keep them around anyway.

Please do me the respect of reading my message above where (in the last paragraph) I explain how to solve this, in a way which isn't a fork or a rule change at all, just a minor difference in the order of operations when fetching and checking a chain. While I didn't include an actual implementation, I provided pseudocode of the algorithm, detailed enough to propose attacks against. It isn't a new idea: this whole set of issues has been discussed many times before, and so far no one has pointed out why what I suggest wouldn't make it mostly a non-issue.
|
|
|
|
kjj
Legendary
Offline
Activity: 1302
Merit: 1026
|
|
September 07, 2012, 05:47:19 AM |
|
> Please do me the respect of reading my message above where (in the last paragraph) I explain how to solve this, in a way which isn't a fork or a rule change at all... just a minor difference in the order of operations when fetching and checking a chain.

Heh, I did read it. It seems like a good system for usability/performance. I wouldn't classify it as a "minor" change in client behavior, exactly, but it does have the advantage of not changing the timestamp rules. Something about it bothers me, but I'm not sure what. I think it might be that it only protects clients that use that algorithm for fetching blocks, leaving the rest of the network open to the attack. The same could probably be said about changes to the timestamp rules, at least right now while the network is fairly homogeneous. I will ponder it some more.

Unrelated to your algorithm: say the attacker did have 51% of the network power, which I think is silly, but try it anyway. The current rules allow him to rewrite history while blatantly telling everyone that he is doing it (by using correct timestamps). Why not force him to make fake timestamps back to his chosen fork point, and then accept the difficulty adjustment consequences of doing so? The amount of extra work for the attacker would in some cases be non-trivial.

And my philosophical objection still stands. Why should the network accept a block today, with a timestamp of today, as a candidate to start a fork days or weeks or months in the past? Inertia doesn't seem to be a good answer to that question.
|
17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8 I routinely ignore posters with paid advertising in their sigs. You should too.
|
|
|
kjj
Legendary
Offline
Activity: 1302
Merit: 1026
|
|
September 07, 2012, 05:50:46 AM |
|
> I was trying to draw attention to the fact that all the suggested fixes here involve allowing permanent forks. But yeah I misread some tenses.

I'm not sure that my suggestion actually allows for permanent forks, at least not honest ones. In an honest fork, the timestamps in both branches will follow the rules, allowing an isolated network to rejoin.
|
17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8 I routinely ignore posters with paid advertising in their sigs. You should too.
|
|
|
gmaxwell
Moderator
Legendary
Offline
Activity: 4270
Merit: 8805
|
|
September 07, 2012, 06:39:42 AM |
|
> Unrelated to your algorithm, say that the attacker did have 51% of the network power, which I think is silly, but try it anyway. The current rules allow him to rewrite history, and blatantly tell everyone that he is doing it (by using correct timestamps). Why not force him to make fake timestamps back to his chosen fork point, and then accept the difficulty adjustment consequences of doing so? The amount of extra work for the attacker would in some cases be non-trivial.

> And my philosophical objection still stands. Why should the network accept a block today, with a timestamp of today, as a candidate to start a fork days or weeks or months in the past? Inertia doesn't seem to be a good answer to that question.

You're imagining an honest shorter chain and a dishonest longer fork that has a big timestamp gap. Let's reverse that. Imagine the network is following your rules. There is an honest longest chain. Now I construct a dishonest fork timestamped such that the true longest chain looks like it jumped forward in time relative to my fork. Either the whole network now rejects the honest chain on seeing my fork (bad), or nodes apply your rule only one way in the reorg decision (e.g. only demanding it when switching from a 'better timestamped' shorter fork to a longer fork), which would mean that a newly bootstrapped node's chain decision depends on which chain it heard first (because the dishonest fork may have been the longest from its perspective until it heard the longer one), and as a result the network can't reliably converge (bad).

I'm skeptical about the extra work comment... The amount of work needed to overtake the longest chain from a given cut point is _constant_: it's the amount of work in the longest chain after that cut. Difficulty doesn't come into play. Ignoring the timewarp issue, there isn't much advantage to be gained by lying about the timestamps; at most you could get 4x per 8 weeks you cut. Go too far back and you need a really significant supermajority to get ahead in a reasonable time... and the advantage is just the inflation you could create, as a factor of log4(your rate/network rate), from undercorrection with your correct timestamps while your chain is 'catching up'.

When I initially read your message I misread it as asserting that sufficiently old-stamped blocks should not be considered. I realize now I misread it, but since someone else might have: unless you will accept old timestamps, any partition would result in a perpetually unresolvable hardfork. You start with a worldwide Bitcoin, a cable gets cut, and a little bit later you have North American bitcoin vs. everyone else, and everyone's bitcoin is now double-spendable (once in each partition).

Worse, an attacker could intentionally produce these kinds of splits by creating a slightly longer fork and then announcing it to half the world right at the edge of whatever criteria you impose for 'too old a rewrite', so that half would accept it and the other half would hear about it too late.
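The constant-work point can be sanity-checked with a trivial model (work per block taken as proportional to its difficulty):

```python
def chain_work(block_difficulties):
    # expected hashes are proportional to difficulty, so total work is
    # just the sum over blocks -- how the blocks are sliced up between
    # few hard blocks or many easy ones doesn't matter
    return sum(block_difficulties)

honest_after_cut = [100] * 50    # the work the attacker must overtake
cheap_fork = [25] * 200          # quarter difficulty, four times the blocks
assert chain_work(cheap_fork) == chain_work(honest_after_cut)
```

Gaming the difficulty schedule changes how many blocks the attacker mines, not how many hashes he must perform.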
|
|
|
|
HostFat
Staff
Legendary
Offline
Activity: 4270
Merit: 1209
I support freedom of choice
|
|
September 07, 2012, 06:50:26 AM |
|
@gmaxwell Is there an open issue about your proposal on bitcoin git?
|
|
|
|
kjj
Legendary
Offline
Activity: 1302
Merit: 1026
|
|
September 07, 2012, 07:56:04 AM |
|
> You're imagining an honest shorter chain and a dishonest longer fork that has a big timestamp gap. Let's reverse that... Either the whole network now rejects the honest chain on seeing my fork (bad), or nodes apply your rule only one way in the reorg decision, which would mean that a newly bootstrapped node's chain decision depends on which chain it heard first, and as a result the network can't reliably converge (bad).

How would an attacker rewrite the timestamps in the blocks that everyone already has? The original chain has a sequence of blocks with (more or less) evenly spaced timestamps, and there is no possible way for an attacker to make that look like it has a jump in it. The best the attacker could do would be to pile up the timestamps, one after another, in his attack chain; he can't go backwards to make a jump. Essentially, if we are looking at a possible fork from, say, a month ago, the first block in the newly presented fork really should have a timestamp from a month ago too.

> I'm skeptical about the extra work comment... The amount of work needed to overtake the longest chain from a given cut point is _constant_: it's the amount of work in the longest chain after that cut. Difficulty doesn't come into play.

Good point on the constant work amount. Whatever he gains by messing with the old timestamps, he'll lose when his fork is putting out blocks more often than usual, and he'll end up in the same place.

> When I initially read your message I misread it as asserting that sufficiently old-stamped blocks should not be considered... unless you will accept old timestamps, any partition would result in a perpetually unresolvable hardfork... Worse, an attacker could intentionally produce these kinds of splits by creating a slightly longer fork and then announcing it to half the world right at the edge of whatever criteria you impose for 'too old a rewrite'.

Yup, that idea would have some issues. I occasionally suggest using an exponential difficulty difference for triggering deep reorgs (or rather for avoiding them), and people make similar objections to that proposal too. Also, it doesn't help that we are wandering around two different issues: a 51% attack and a BS blockspam annoyance. Your algorithm would kill the blockspam problem, but only if every client uses it, which would be a de facto protocol change.
|
17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8 I routinely ignore posters with paid advertising in their sigs. You should too.
|
|
|
|