ghostlander
Legendary
Offline
Activity: 1241
Merit: 1020
No surrender, no retreat, no regret.
April 05, 2014, 03:05:06 PM
The other fix (preventing PastRateActualSeconds from going to 0) takes care of another attack vector. Here is a short explanation of the attack: 1. Generate a block timestamped 2 weeks into the future. You cannot publish it; it is outside the current time window. 2. Keep generating blocks with that same timestamp (i.e. the moment 2 weeks in the future).
See what would happen: once there are PastBlocksMax blocks in the private chain, *the diff would not change* at all!
That would mean you have 2 weeks to generate blocks at near-zero difficulty. With decent hashrate, you easily get 1 block per second. In 2 weeks you get 1209600 blocks.
When those 2 weeks have passed, what happens to the blockchain if you suddenly publish 1209600 perfectly valid blocks? The whole network would do nothing but check those 1209600 blocks... and find nothing wrong with them. That would be the end of the coin.
First, an attacker still needs to exceed the cumulative difficulty score of the original chain. Second, there must not be any checkpoints on the original chain for those 2 weeks, neither hard-coded nor synchronised. Third, if the second holds, this is a huge reorganisation which won't pass unnoticed, and a smart developer would secure his chain with a checkpoint immediately, release an updated client and ask the community to upgrade.

EDIT: Actually, it *is* prevented somewhere else. One can generate only 5 blocks with the same timestamp.

Median of 11 is 6 blocks. Although AUR has changed this to median of 3, which is a bad idea actually.
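For reference, a minimal sketch of that median-of-11 rule (assuming the usual Bitcoin-style convention; the helper names here are illustrative, not quoted from any client):

#include <algorithm>
#include <cstdint>
#include <vector>

// Median timestamp of the last (up to) 11 blocks; blockTimes is oldest-first
// and assumed non-empty.
int64_t GetMedianTimePast(const std::vector<int64_t>& blockTimes) {
    size_t n = std::min<size_t>(blockTimes.size(), 11);
    std::vector<int64_t> window(blockTimes.end() - n, blockTimes.end());
    std::sort(window.begin(), window.end());
    return window[window.size() / 2]; // the 6th of 11 sorted timestamps
}

// A new block must be stamped strictly later than the median. Once enough of
// the 11-block window shares one timestamp, the median catches up to it and
// further blocks with that same timestamp are rejected.
bool TimestampAcceptable(const std::vector<int64_t>& blockTimes, int64_t newTime) {
    return newTime > GetMedianTimePast(blockTimes);
}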
Cryddit
Legendary
Offline
Activity: 924
Merit: 1132
April 05, 2014, 03:25:39 PM
The problem that makes the time warp possible at all is that difficulty is being measured wrong.
Having a lower difficulty threshold and more blocks, generated by a small fraction of the main chain's hashing power, should NEVER result in the difficulty calculation thinking that you have more total work than the main chain.
Consider two forks, one with a difficulty of, say, 20 and one with a difficulty of, say, 11. If there is actually more work on the chain with difficulty 11, it will have more blocks meeting the 20 difficulty than the other chain has blocks in total. A chain shouldn't get any work credit at all for blocks that don't meet the hardest branch's difficulty.
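To illustrate the point with rough numbers (a simplified model added here for illustration; it assumes each hash meets difficulty d with probability 1/(d * 2^32)):

#include <cstdio>

int main() {
    double hashes = 40e12;    // total work spent on a fork, in hashes (example)
    double threshold = 20.0;  // the harder branch's difficulty
    double mined_at = 11.0;   // difficulty the weaker branch actually mined at

    // Blocks the weaker branch produces in total, and how many of those
    // hashes also happen to meet the 20-difficulty threshold:
    double total_blocks = hashes / (mined_at * 4294967296.0);
    double meeting_20   = hashes / (threshold * 4294967296.0);

    printf("blocks mined at diff 11:       %.1f\n", total_blocks);
    printf("of those also meeting diff 20: %.1f\n", meeting_20);
    return 0;
}

Whether the chain mined at difficulty 11 or 20, the count of blocks meeting the 20 threshold comes out the same for the same total hashes, which is why counting them compares real work.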
YarkoL
Legendary
Offline
Activity: 996
Merit: 1013
April 05, 2014, 04:03:07 PM
Having a lower difficulty threshold and more blocks, generated by a small fraction of the main chain's hashing power, should NEVER result in the difficulty calculation thinking that you have more total work than the main chain.
Your opinion, then, is that one really can carry out the TW exploit with less than a majority of the hashing power. I would be very grateful to know the reasoning behind that statement.
“God does not play dice”
Nite69 (OP)
April 05, 2014, 04:08:27 PM
Median of 11 is 6 blocks. Although AUR has changed this to median of 3, which is a bad idea actually.
Saying it is a bad idea does not yet make it a bad idea. Can you give some reasoning behind this? If there is good reasoning, why didn't you say so when it was only being planned?
Sync: ShiSKnx4W6zrp69YEFQyWk5TkpnfKLA8wx Bitcoin: 17gNvfoD2FDqTfESUxNEmTukGbGVAiJhXp Litecoin: LhbDew4s9wbV8xeNkrdFcLK5u78APSGLrR AuroraCoin: AXVoGgYtSVkPv96JLL7CiwcyVvPxXHXRK9
ghostlander
Legendary
Offline
Activity: 1241
Merit: 1020
No surrender, no retreat, no regret.
April 05, 2014, 05:04:48 PM
Median of 11 is 6 blocks. Although AUR has changed this to median of 3, which is a bad idea actually.
Saying it is a bad idea does not yet make it a bad idea. Can you give some reasoning behind this? If there is good reasoning, why didn't you say so when it was only being planned?

Why have you not asked? I'm not a part of your community, and I'm not supposed to monitor what you do or give advice on every matter.
memecoin
Member
Offline
Activity: 308
Merit: 10
★YoBit.Net★ 1400+ Coins Exchange
April 05, 2014, 09:41:05 PM
Yes, the blockchain work has nothing to do with time. I still don't think this is an exploit and it's annoying because many exchanges have bought into it and are asking everyone to upgrade needlessly.
Would you like to back that statement up and offer your coin as a sacrificial lamb? ~BCX~

Where is the evidence that you even attacked AUR? It seems most of the people here don't agree with you, so I think most of our coins are probably safe.

I see you hide behind an account registered today with a single post while making that "bold" statement. Shows real confidence, doesn't it? All you and your fellow "devs" are whining about is that you paid someone to create a basic shitcoin clone and you lack the technical ability to update it. LOL. Maybe I really do need to kill a few to convince the masses. Since you're so sure it's not possible, don't be a wanker, volunteer your coin! ~BCX~

Please do. I've found a comfortable seat, and I have plenty of snacks.
Am I spamming? Report me!
Cannacoin
April 05, 2014, 11:52:41 PM
Cannacoin has updated its protocol to patch the potential KGW/time-warp exploit. Thanks to everyone involved in the discussion and the fix; your time and efforts are appreciated!
CCN - Cannacoin - Cannapay - NWGT.tv - TokeTalk.Net - Cannacoin Community Network - Cannashares - A Cryptocurrency & Cannabis Development Team ONLY DOWNLOAD CANNACOIN WALLET SOFTWARE FROM ORIGINAL POST: https://bitcointalk.org/index.php?topic=740903.0
vilgem
Member
Offline
Activity: 98
Merit: 10
April 06, 2014, 08:43:59 AM (last edit: April 06, 2014, 10:40:40 AM by vilgem)
I looked at the diff file you proposed very thoroughly, and I must conclude that your fix is nothing more than bullshit. The only thing your fix does is protect the PastRateActualSeconds variable so that it is always >= 1 (second). The old code assumed it was always >= 0. You probably cared about the PastRateAdjustmentRatio variable, which is set to 1 if PastRateActualSeconds happens to be 0. But the point is that this never happens. KGW takes at least PastBlocksMin and at most PastBlocksMax blocks into the calculation. You will never have PastRateActualSeconds == 0 except in the case where your blockchain has only one block.
You have misunderstood the fix. There are 2 fixes, and the one you are referring to fixes another attack vector. The fix to the usual TW attack (which BCX was planning to use) was to use LatestBlockTime instead of BlockLastSolved->GetBlockTime() to count the timespan. Without this, one can time-travel back without a diff rise; with this, the benefit the attacker gets by travelling into the past is lost. The other fix (preventing PastRateActualSeconds from going to 0) takes care of another attack vector. Here is a short explanation of the attack: 1. Generate a block timestamped 2 weeks into the future. You cannot publish it; it is outside the current time window. 2. Keep generating blocks with that same timestamp (i.e. the moment 2 weeks in the future). See what would happen: once there are PastBlocksMax blocks in the private chain, *the diff would not change* at all! That would mean you have 2 weeks to generate blocks at near-zero difficulty. With decent hashrate, you easily get 1 block per second. In 2 weeks you get 1209600 blocks. When those 2 weeks have passed, what happens to the blockchain if you suddenly publish 1209600 perfectly valid blocks? The whole network would do nothing but check those 1209600 blocks... and find nothing wrong with them. That would be the end of the coin.

You will never have PastRateActualSeconds == 0 except in the case where your blockchain has only one block.

That's not true. You can generate blocks with the same timestamp. Or is there something that would prevent it (I have not read all the code, it might be prevented somewhere)? If there is, then this attack vector was already closed and this part was not necessary. EDIT: Actually, it *is* prevented somewhere else. One can generate only 5 blocks with the same timestamp. So this #2 fix is not necessary to prevent that attack vector; it is already closed elsewhere. However, that means the whole if clauses are never true, i.e. they are themselves worthless. But leaving them as they were would keep an unnecessary dependency between the code blocks, so it is nevertheless better to change them. Also, the main fix is #1, which has been confirmed to work.

// Check timestamp against prev
if (GetBlockTime() <= pindexPrev->GetMedianTimePast())
    return error("AcceptBlock() : block's timestamp is too early");

Ok, it turns out that you can't publish a block with a timestamp less than or equal to the median time of several prior blocks. So fix #2 is not necessary. Even if you could, the attack you mentioned would be very unlikely. One would have to start working on a branch chain of PastBlocksMax blocks with the same timestamp. With every new block found, the actual difficulty would RISE (the timespan would diminish with every new block found). It would be exponential difficulty growth over several thousand iterations (the base difficulty being equal to the network difficulty at the fork time). So one would have to possess really good hashing power. I think that mathematically this attack is even less feasible than the well-known 51% attack. More precisely, this attack would have a probability of 100%, but the computation time would be HUGE.

Now... fix #1. I'm sorry, but it is not a fix. Logically, both flows (without the fix and with it) are absolutely THE SAME. Just check it line by line very carefully. Both flows protect the PastRateActualSeconds variable so that it is >= 0. That is it. Just CHECK.
★★★ VERTCOIN ★★★ ALL GENIOUS IS SIMPLE ★★★
Nite69 (OP)
April 06, 2014, 07:35:31 PM
Now... fix #1. I'm sorry, but it is not a fix. Logically, both flows (without the fix and with it) are absolutely THE SAME. Just check it line by line very carefully. Both flows protect the PastRateActualSeconds variable so that it is >= 0. That is it. Just CHECK.
The fix actually changes the way the algorithm deals with blocks that are timestamped before the latest block's timestamp (i.e. time-warped blocks). It processes them as if they were timestamped at the latest block's time. So whatever benefit an attacker gets by timestamping blocks in the past, that benefit is gone; the attacker might as well timestamp them with the latest block's time. There is no longer any extra gain from the time warp, whatever it may have been. This change actually lowers the difficulty that the time-warped block would get. However, the attacker would get the same lower diff by timestamping that block at the latest block's time, i.e. without using the time warp, so the fix does not give any advantage that had not already been there.
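Roughly, the change described here does the following to the measured timespan (a sketch reconstructed from this description, not quoted from any particular repository; the timestamps are made-up examples):

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    // Timestamps from the tip backwards; the tip (index 0) is backdated,
    // while an earlier attacker block (index 1) was stamped in the future.
    std::vector<int64_t> times = {1000, 1300, 1240, 1180, 1120, 1060};

    int64_t tipTime = times[0];
    int64_t latestBlockTime = times[0];

    for (size_t i = 1; i < times.size(); i++) {
        // Old code: span anchored on the (backdated) tip. It can go negative,
        // and the old code then clamps it to 0, which is what lets the warp
        // shrink the measured span.
        int64_t oldSpan = tipTime - times[i];

        // Fix #1: anchor on the highest timestamp seen while walking back, so
        // a backdated tip is treated as if stamped at the latest block time.
        latestBlockTime = std::max(latestBlockTime, times[i]);
        int64_t newSpan = latestBlockTime - times[i];

        printf("depth %zu: old span %5lld, fixed span %5lld\n",
               i, (long long)oldSpan, (long long)newSpan);
    }
    return 0;
}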
Sync: ShiSKnx4W6zrp69YEFQyWk5TkpnfKLA8wx Bitcoin: 17gNvfoD2FDqTfESUxNEmTukGbGVAiJhXp Litecoin: LhbDew4s9wbV8xeNkrdFcLK5u78APSGLrR AuroraCoin: AXVoGgYtSVkPv96JLL7CiwcyVvPxXHXRK9
memecoin
Member
Offline
Activity: 308
Merit: 10
★YoBit.Net★ 1400+ Coins Exchange
April 07, 2014, 06:07:11 PM
It would be exponential difficulty growth over several thousand iterations (the base difficulty being equal to the network difficulty at the fork time). So one would have to possess really good hashing power. I think that mathematically this attack is even less feasible than the well-known 51% attack. More precisely, this attack would have a probability of 100%, but the computation time would be HUGE.
You should sandbox it before making those statements. This is exactly the vector that was opened up. ~BCX~

It appears some draino/coin hybrid has been hit with a difficulty of 55 million and just so happens to use KGW. Would you happen to know anything about this? Please tell me I didn't look away and miss something!
Am I spamming? Report me!
ghur
April 07, 2014, 06:17:00 PM
It would be exponential difficulty growth over several thousand iterations (the base difficulty being equal to the network difficulty at the fork time). So one would have to possess really good hashing power. I think that mathematically this attack is even less feasible than the well-known 51% attack. More precisely, this attack would have a probability of 100%, but the computation time would be HUGE.
You should sandbox it before making those statements. This is exactly the vector that was opened up. ~BCX~

It appears some draino/coin hybrid has been hit with a difficulty of 55 million and just so happens to use KGW. Would you happen to know anything about this? Please tell me I didn't look away and miss something!

Link?
doge: D8q8dR6tEAcaJ7U65jP6AAkiiL2CFJaHah Automated faucet, pays daily: Qoinpro
memecoin
Member
Offline
Activity: 308
Merit: 10
★YoBit.Net★ 1400+ Coins Exchange
April 07, 2014, 06:31:13 PM
It would be exponential difficulty growth over several thousand iterations (the base difficulty being equal to the network difficulty at the fork time). So one would have to possess really good hashing power. I think that mathematically this attack is even less feasible than the well-known 51% attack. More precisely, this attack would have a probability of 100%, but the computation time would be HUGE.
You should sandbox it before making those statements. This is exactly the vector that was opened up. ~BCX~

It appears some draino/coin hybrid has been hit with a difficulty of 55 million and just so happens to use KGW. Would you happen to know anything about this? Please tell me I didn't look away and miss something!

Link?

The announcement section is a jungle: https://bitcointalk.org/index.php?topic=419873.msg6113496#msg6113496
Am I spamming? Report me!
ghur
April 07, 2014, 06:38:43 PM
It would be exponential difficulty growth over several thousand iterations (the base difficulty being equal to the network difficulty at the fork time). So one would have to possess really good hashing power. I think that mathematically this attack is even less feasible than the well-known 51% attack. More precisely, this attack would have a probability of 100%, but the computation time would be HUGE.
You should sandbox it before making those statements. This is exactly the vector that was opened up. ~BCX~

It appears some draino/coin hybrid has been hit with a difficulty of 55 million and just so happens to use KGW. Would you happen to know anything about this? Please tell me I didn't look away and miss something!

Link?

The announcement section is a jungle: https://bitcointalk.org/index.php?topic=419873.msg6113496#msg6113496

Thank you. It is a jungle indeed.
doge: D8q8dR6tEAcaJ7U65jP6AAkiiL2CFJaHah Automated faucet, pays daily: Qoinpro
simondlr
April 13, 2014, 03:05:08 PM
So here's my understanding of the TW exploit & the fix.
The problem with the well is that any block in the past (or any block with the same timestamp) gets regarded as "PastRateActualSeconds = 0":
if (PastRateActualSeconds < 0) { PastRateActualSeconds = 0; }
What happens now is that IF PastRateActualSeconds is 0, the diff is not adjusted:
if (PastRateActualSeconds != 0 && PastRateTargetSeconds != 0)
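For context, those two lines sit together roughly like this in the widely copied KGW code (a condensed sketch from memory, wrapped in a standalone helper, so treat the exact shape as an assumption):

#include <cstdint>

// When every block in the window carries the same timestamp, the measured
// span is 0, the ratio stays at 1, and the difficulty never moves.
double AdjustmentRatio(int64_t PastRateActualSeconds,
                       int64_t PastRateTargetSeconds) {
    double PastRateAdjustmentRatio = 1.0;
    if (PastRateActualSeconds < 0) { PastRateActualSeconds = 0; }
    if (PastRateActualSeconds != 0 && PastRateTargetSeconds != 0) {
        PastRateAdjustmentRatio =
            double(PastRateTargetSeconds) / double(PastRateActualSeconds);
    }
    return PastRateAdjustmentRatio; // 1.0 means "leave difficulty alone"
}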
So, the attacker does the following: they work on their own chain. They put the timestamp as far as possible into the future to get the maximum downward adjustment, and continue until the diff is the lowest possible diff for the chain. At that point, the attacker generates blocks with the same timestamp until they hit the "block's timestamp is too early" (GetMedianTimePast) check in AcceptBlock. Basically, the attacker is pumping loads of low-diff blocks "at the same time" until they can't anymore. Putting blocks in the past doesn't help, as they get regarded as "0" anyway in terms of diff adjustment. But putting blocks in the past allows the attacker to get back in line with the main chain.

So now they do it again, until the GetMedianTimePast check doesn't fail, and again. It's a trade-off between being too far ahead and having enough work. But if the chains match based on time, it's entirely possible that the attacker has the same work, with more blocks, and the chains reorganise.

So if I understand it correctly (correct me if I'm wrong), the attacker still needs 51%. But now that they HAVE that 51%, they can do the same amount of work but generate more blocks.
While the fix protects against this attack by making it unfeasible, it actually causes other problems [sky-high diff adjustments, as was seen with Aiden & Coin-o].

What's SUPPOSED to happen is that an attacker cannot generate blocks with the same timestamp and keep the same diff. In most scenarios, a run of same-timestamp blocks would mean there is A LOT of hashing power, and the diff should adjust accordingly.
That's what this fix does here:
if (PastRateActualSeconds < 1) { PastRateActualSeconds = 1; }
If a block is in the past, or has the same timestamp, it's regarded as having taken 1 second. The problem with this is that if there are legitimate blocks in the past [due to network lag, or other reasons], they are treated as extremely fast blocks rather than as legitimate blocks [within range]. This shoots up the difficulty (as we've seen with some coins). It happens more easily with coins with low block times.
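To get a feel for the scale, consider the extreme case where a whole KGW window collapses to the one-second floor (the spacing and window size below are made-up illustrative numbers):

#include <cstdio>

int main() {
    long long PastRateTargetSeconds = 60LL * 144; // 60 s target, 144-block window
    long long PastRateActualSeconds = 1;          // clamped by the fix

    // KGW steers the target by actual/target, so difficulty is pushed up by
    // target/actual; here the window reads as having arrived ~8640x too fast.
    double ratio = double(PastRateTargetSeconds) / double(PastRateActualSeconds);
    printf("adjustment ratio: %.0f\n", ratio); // 8640
    return 0;
}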
These diff adjustments didn't happen with the original KGW because such blocks weren't taken into account for the diff adjustment.

With diff adjustments happening per block, it's a difficult problem to solve. Any 1-block algo is vulnerable, as blocks in the past [a legitimate anomaly] are an "odd" happenstance in 1-block diff algos.
So the trade-off is:
1) When an attacker gains 51%, they have the capability to generate more blocks with the same work. While this is not ideal, the reason it is bad is that it incentivises 51% attacks (as they become more monetarily profitable). 2) Remove that incentive with this fix, but screw up legitimate blocks from the past [which causes absurd diff adjustments].
Personally, I think there needs to be a small "free" period where blocks from the past are okay, which of course makes the KGW not a 1-block algo...
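One possible reading of that "free period" idea, purely as a sketch (the slack constant and the helper are hypothetical, invented here for illustration):

#include <cstdint>

const int64_t PAST_SLACK_SECONDS = 120; // hypothetical grace window

// Treat a block that lags the latest seen timestamp by less than the slack as
// an on-schedule block instead of a 1-second block; only larger lags keep the
// punitive clamp.
int64_t EffectiveSeconds(int64_t measuredSeconds, int64_t targetSpacing) {
    if (measuredSeconds >= 1)
        return measuredSeconds;       // normal, in-order block
    if (-measuredSeconds <= PAST_SLACK_SECONDS)
        return targetSpacing;         // small lag: count it as on-time
    return 1;                         // big lag: clamp as the fix does
}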
This is, of course, all assuming I'm correct... BCX? Nite69? Others? Care to comment?
Thoughts?
ghostlander
Legendary
Offline
Activity: 1241
Merit: 1020
No surrender, no retreat, no regret.
April 13, 2014, 09:06:40 PM
if (PastRateActualSeconds < 1) { PastRateActualSeconds = 1; }
If a block is in the past, or has the same timestamp, it's regarded as having taken 1 second. The problem with this is that if there are legitimate blocks in the past [due to network lag, or other reasons], they are treated as extremely fast blocks rather than as legitimate blocks [within range]. This shoots up the difficulty (as we've seen with some coins). It happens more easily with coins with low block times.
1 second isn't much better than 0. There should either be a higher value, as a fraction of the block target, or these blocks could be skipped from the difficulty calculation, or their timestamps recalculated as the average of the neighbouring blocks. Another way is to limit every difficulty adjustment, like many non-KGW algorithms do.

With diff adjustments happening per block, it's a difficult problem to solve. Any 1-block algo is vulnerable, as blocks in the past [a legitimate anomaly] are an "odd" happenstance in 1-block diff algos.
Not every one. Those operating with large traditional averaging windows are fine, though their time-warp vulnerability level is higher than usual. PPC even allows a negative actual timespan, so many faster forks got burned by this "feature". It samples only 2 timestamps per retarget (the last block and the previous one), applies extreme damping and gets the job done. Very slow, as the damping suggests, but it also makes difficulty manipulation through time warps very easy. Those PPC forks brave enough to run without ACP enabled can be torn apart by 51% attacks quite easily.
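For concreteness, the PPC-style retarget described here looks roughly like this (a simplified sketch of the well-known formula; the constants are examples, not any particular coin's):

#include <cstdint>
#include <cstdio>

int main() {
    int64_t nTargetSpacing  = 600;              // 10-minute blocks (example)
    int64_t nTargetTimespan = 7 * 24 * 60 * 60; // one week of damping (example)
    int64_t nInterval = nTargetTimespan / nTargetSpacing; // 1008

    // Only the last two block timestamps enter the calculation:
    int64_t prevTime = 1000600, prevPrevTime = 1001200;   // out of order!
    int64_t nActualSpacing = prevTime - prevPrevTime;     // -600: negative span

    // new_target = old_target * ((n-1)*T + 2*actual) / ((n+1)*T)
    double factor = double((nInterval - 1) * nTargetSpacing + 2 * nActualSpacing)
                  / double((nInterval + 1) * nTargetSpacing);
    printf("per-block target multiplier: %f\n", factor); // < 1 raises difficulty
    return 0;
}

With these numbers the multiplier is about 0.996, i.e. under half a percent of movement per block even for a negative spacing: the "extreme damping" in action.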
Crestington
Legendary
Offline
Activity: 882
Merit: 1024
April 13, 2014, 10:00:06 PM
Altcoins check underneath their bed at night for BCX
simondlr
April 14, 2014, 02:45:14 PM
1 second isn't much better than 0. There should either be a higher value, as a fraction of the block target, or these blocks could be skipped from the difficulty calculation, or their timestamps recalculated as the average of the neighbouring blocks. Another way is to limit every difficulty adjustment, like many non-KGW algorithms do.
Right. Agreed on both counts. I was specifically referring to the fix. The fix treats blocks from the past as very, very fast blocks, which shoots up the diff [which is not what you want].

Not every one. Those operating with large traditional averaging windows are fine, though their time-warp vulnerability level is higher than usual. PPC even allows a negative actual timespan, so many faster forks got burned by this "feature". It samples only 2 timestamps per retarget (the last block and the previous one), applies extreme damping and gets the job done. Very slow, as the damping suggests, but it also makes difficulty manipulation through time warps very easy. Those PPC forks brave enough to run without ACP enabled can be torn apart by 51% attacks quite easily.
Didn't know about that. Interesting! So what do you suggest? Other retargeting algos that seem promising, but which I haven't taken a look at, are DigiShield and Dark Gravity Wave 2. How do they perform? And do you agree with the trade-off? Implement the fix and push in erroneous diff adjustments that could be detrimental to the coin, or make it so that a 51%-er can mint more blocks with the same work? Currently, I think it's not ideal to implement the current TW fix that everyone is implementing, since if someone is maliciously doing a 51% attack, you are screwed anyway... Thoughts?
Cryddit
Legendary
Offline
Activity: 924
Merit: 1132
April 14, 2014, 05:34:58 PM
The issue is a bug in branch difficulty evaluation, IMO.
BCX, can you check the logic here? Isn't the real vulnerability to time warp that it is possible to create a branch, using less than 51% of the hashing power, which the current code believes has more hashing work in it than a branch created using more than 51%? And doesn't that depend on the existence of a bug in the code that estimates how much hashing has gone into a branch?
Time warp exploits really oughtn't be possible if you can accurately pick the highest-work branch.
If you pick a 'threshold' difficulty just high enough to ensure that any block meeting it could appear in either branch regardless of when it was mined, and then count the blocks meeting *that* difficulty, you get a very good comparison of the real hashing power used. The branch with the most such blocks is the one with the most hashing work. Award no partial credit for blocks mined at lower difficulty (the current code, IMO erroneously, does award it), and I think you wind up with no time-warp vulnerability.
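A sketch of that comparison rule (hypothetical helpers, not existing client code):

#include <algorithm>
#include <cstddef>
#include <vector>

struct BlockInfo {
    double hashDifficulty; // difficulty actually achieved by the block's hash
};

// Count only blocks whose hashes meet the threshold; lower-difficulty blocks
// earn no partial credit.
size_t BlocksMeetingThreshold(const std::vector<BlockInfo>& chain, double threshold) {
    size_t count = 0;
    for (const BlockInfo& b : chain)
        if (b.hashDifficulty >= threshold) count++;
    return count;
}

// True if branch A shows more hashing work than branch B under this rule,
// with the threshold set to the harder branch's difficulty.
bool HasMoreWork(const std::vector<BlockInfo>& a, double diffA,
                 const std::vector<BlockInfo>& b, double diffB) {
    double threshold = std::max(diffA, diffB);
    return BlocksMeetingThreshold(a, threshold) > BlocksMeetingThreshold(b, threshold);
}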
ghostlander
Legendary
Offline
Activity: 1241
Merit: 1020
No surrender, no retreat, no regret.
April 16, 2014, 11:43:44 PM
Not every one. Those operating with large traditional averaging windows are fine, though their time-warp vulnerability level is higher than usual. PPC even allows a negative actual timespan, so many faster forks got burned by this "feature". It samples only 2 timestamps per retarget (the last block and the previous one), applies extreme damping and gets the job done. Very slow, as the damping suggests, but it also makes difficulty manipulation through time warps very easy. Those PPC forks brave enough to run without ACP enabled can be torn apart by 51% attacks quite easily.
Didn't know about that. Interesting! So what do you suggest? Other retargeting algos that seem promising, but which I haven't taken a look at, are DigiShield and Dark Gravity Wave 2. How do they perform? And do you agree with the trade-off? Implement the fix and push in erroneous diff adjustments that could be detrimental to the coin, or make it so that a 51%-er can mint more blocks with the same work? Currently, I think it's not ideal to implement the current TW fix that everyone is implementing, since if someone is maliciously doing a 51% attack, you are screwed anyway... Thoughts?

I've seen Orbitcoin time-warped today, with its difficulty falling to near zero in minutes, literally. The coin is a fork of NVC with some bells and whistles, using a 1-hour PPC-style retarget window and very fast blocks. Such a small window, combined with its low network hash rate and no limits in the code, made it possible. Their previous retarget fix substituted a negative timespan with 0, but it didn't help much; 1 wouldn't help either. The attacker instamined a couple of thousand blocks in a few hours and got away. The conclusion is: the more weird/complicated an algorithm you employ, the more likely you are to run into trouble with it some day.
simondlr
April 18, 2014, 10:13:55 AM
The DigiShield algorithm is just asking to be exploited. The DGW2 algorithm is vulnerable to a time warp as well. Almost every single one-block difficulty adjustment algorithm contains a flaw with regard to timestamps, and if miners are not mining to secure the network, the coin will be forked.
Absolutely, it seems as if finally somebody understands this. ~BCX~

It seems so. Cheers for the discussion.