...gmaxwell stated that the existing strategy has the same problem with derivative unwinds as my strategy-- that is not the same as saying my idea won't work...
The quote from gmaxwell:

    Variants in measuring the blockchain, such as following the heaviest subtree, not the longest chain, have been proposed, and are the best hope at improving that basic piece of the protocol.

The cited paper:

https://eprint.iacr.org/2013/881.pdf

    Perhaps the most important question that will affect Bitcoin's success, is whether or not it will be able to scale to support the high volume of transactions required from a global currency system. We investigate the restrictions on the rate of transaction processing in Bitcoin as a function of both the bandwidth available to nodes and the network delay, both of which lower the efficiency of Bitcoin's transaction processing.
The following passage summarizes gmaxwell's point as I reinterpreted it above, and also my orthogonal point that the number of confirmations needed to protect against an attacker who can sustain > 50% of the network hashrate grows without bound:
https://eprint.iacr.org/2013/881.pdf#page=7

    The replacement of the current world-view with an alternative one has far reaching consequences: some transactions may be removed from the current ledger. This fact can be used by an attacker to reverse transactions. The attacker may pay some merchant and then secretly create a blockchain that is longer than that of the network that does not include his payment. By releasing this chain he can trigger a switch that effectively erases the transaction, or redirects the payment elsewhere. This is a difficult undertaking, since the honest nodes usually have a great deal of computational power, and the attacker must get very lucky if he is to replace long chains. The longer the chain, the more difficult it becomes to generate the proof-of-work required to replace it. Satoshi's original security analysis defines a policy for receivers of payments: a transaction is only considered sufficiently irreversible after it was included in a block and some n additional blocks were built on top of it. With this policy, Satoshi shows that the probability of a successful attack can be made arbitrarily low. As a receiver of funds waits for more blocks (larger n), this probability goes down exponentially.

    However, if an attacker has more computational power than the rest of the network combined (i.e., it holds at least 50% of the computing power), it is always able to generate blocks faster than the rest of the network and thus to reverse transactions at will (given enough time). This stronger form of attack is known as the 50% attack.
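For intuition about why this probability falls exponentially in n, and why it pins at 1 once the attacker holds a majority, here is a minimal Python sketch of the catch-up calculation from Satoshi's whitepaper (the function name and structure are mine, not the paper's):

    from math import exp, factorial

    def attacker_success_probability(q, n):
        # q: attacker's fraction of the total hashrate
        # n: number of confirmations the receiver waits for
        if q >= 0.5:
            return 1.0  # a majority attacker eventually overtakes with certainty
        p = 1.0 - q
        lam = n * (q / p)  # expected attacker progress while n honest blocks appear
        prob = 1.0
        for k in range(n + 1):
            poisson = exp(-lam) * lam ** k / factorial(k)
            # an attacker z = n - k blocks behind ever catches up with
            # probability (q/p)^z
            prob -= poisson * (1.0 - (q / p) ** (n - k))
        return prob

For example, attacker_success_probability(0.10, 6) is roughly 0.0002, while any q >= 0.5 yields 1.0 no matter how large n is-- which is exactly the unbounded-confirmations problem noted above.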
The paper even points out that, with network propagation advantages, the attacker may be able to sustain the longest chain indefinitely with < 50% of the network hashrate:
    In fact, the assumption that at least 50% of the computational power is required for such an attack to succeed with high probability is inaccurate. If we assume the attacker is centralized and does not suffer from delays, he can beat a network that does suffer from delays using fewer resources. We formulate the exact conditions for safety from this attack, and amend Satoshi's analysis below. We return to the analysis of the weaker double spend attack in Sections 6 and 7.
The following calculation applies to a block creation rate of 0.29 blocks per second, i.e. a block period of roughly 3.5 (1/0.29) seconds, and 17 (59/3.5) confirmations:
https://eprint.iacr.org/2013/881.pdf#page=17

    in some network configurations that match the assumptions above, an attacker with just over 24% of the hash-rate can successfully execute a so-called 50% attack, i.e., to replace the main chain at will
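Spelling out the arithmetic behind those parameters (assuming, as the division suggests, that the 59 is a waiting time in seconds):

    block_rate = 0.29                  # blocks per second (the paper's scenario)
    block_period = 1 / block_rate      # ~3.45 seconds per block
    confirmations = 59 / block_period  # ~17 blocks created during a ~59 second wait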
The paper arrives at some of the same insights that led me to my idea:
https://eprint.iacr.org/2013/881.pdf#page=18

    The basic observation behind the protocol modification that we suggest, is that blocks that are off the main chain can still contribute to a chain's irreversibility. Consider for example a block B, and two blocks that were created on top of it C1 and C2, i.e., parent(C1) = parent(C2) = B. The Bitcoin protocol, at its current form, will eventually adopt only one of the sub-chains rooted at C1 and C2, and will discard the other. Note however, that both blocks were created by nodes that have accepted block B and its entire history as correct. The heaviest sub-tree protocol we suggest makes use of this fact, and adds additional weight to block B, helping to ensure that it will be part of the main chain.
However, it is not trying to address the ephemeral > 50% attack that my idea does. Instead, the paper's GHOST protocol mitigates the fact that network propagation delay topologies can otherwise give the attacker an advantage, allowing attacks to succeed with the same probability at less than 50% of the hashrate (in fact, below any of the probability curves calculated in Meni Rosenfeld's paper as cited).

GHOST aggregates the proof-of-work over n confirmations across all forks in the subtree above (i.e. after) B; in other words, it is a smoothing function:
    (Footnote 10) We are in fact interested in the sub-tree with the hardest combined proof-of-work, but for the sake of conciseness, we write the size of the subtree instead.

https://eprint.iacr.org/2013/881.pdf#page=19

    Thus, if we wait long enough, the honest subtree above B will be larger than the one constructed by the attacker, with sufficiently high probability.
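To make the fork choice concrete, here is a minimal Python sketch of heaviest-subtree selection as I read the paper (the Block structure and names are illustrative, not the paper's):

    from dataclasses import dataclass, field

    @dataclass
    class Block:
        work: float = 1.0  # proof-of-work this block contributes
        children: list = field(default_factory=list)

    def subtree_work(b):
        # combined proof-of-work of b and all its descendants, on or off
        # the eventual main chain (the notion in footnote 10 above)
        return b.work + sum(subtree_work(c) for c in b.children)

    def ghost_main_chain(genesis):
        # walk down from genesis; at each fork follow the child whose whole
        # subtree carries the most work, rather than the longest chain
        chain, b = [genesis], genesis
        while b.children:
            b = max(b.children, key=subtree_work)
            chain.append(b)
        return chain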
In my idea, the nodes of the network utilize an additional piece of information: the observation that a fork above B was orphaned by a fork which double-spends transactions in B. Each node weights the forks of the subtree above B according to these observations. Thus I believe my idea is more powerful, addressing the ephemeral > 50% attack (as well as < 50% attacks, with greater probability of success), because it utilizes more information.
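As a purely speculative sketch of that weighting (this is my proposal, not the paper's; the flagging mechanism and the penalty parameter are hypothetical), it could hang off the same subtree walk as above:

    def observed_subtree_work(b, flagged, penalty=0.0):
        # like subtree_work above, but discount (by default: zero out) the work
        # of any block this node itself observed participating in a fork that
        # double-spends transactions in an ancestor; flagged is a list of such
        # Block objects recorded from the node's own observations
        w = b.work * (penalty if any(b is f for f in flagged) else 1.0)
        return w + sum(observed_subtree_work(c, flagged, penalty)
                       for c in b.children)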
That paper and my idea both apply smoothing filters which incorporate more information, so that aliasing error is mitigated. There is a general concept in sampling theory-- don't discard information, filter it instead.

The paper also says we shouldn't discard information when retargeting the difficulty:
https://eprint.iacr.org/2013/881.pdf#page=21

    Retargeting (difficulty adjustment). Given potentially complex relations between the growth rate of the main chain and the rate of created blocks, and the fact that GHOST depends more on the total rate of block creation, we suggest a change in the way difficulty adjustments to the proof-of-work are done. Instead of targeting a certain rate of growth for the longest chain, i.e., β (which is Bitcoin's current strategy), we suggest that the total rate of block creation be kept constant (λ). As our protocol requires knowledge of off chain blocks by all nodes, we propose that information about off chain blocks be embedded inside each block (blocks can simply hold hashes of other blocks they consider off-chain). This can be used to measure and re-target the difficulty level so as to keep the total block creation rate constant.
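A minimal sketch of that retargeting rule, assuming nodes tally off-chain blocks from the hashes embedded in the window's blocks (the function and parameter names are illustrative):

    def retarget(difficulty, main_chain_blocks, off_chain_blocks,
                 window_seconds, target_lambda):
        # adjust so the TOTAL block creation rate (lambda), not just the
        # main-chain growth rate (beta, Bitcoin's current target), stays constant
        observed_lambda = (main_chain_blocks + off_chain_blocks) / window_seconds
        return difficulty * (observed_lambda / target_lambda)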