I understand @dinofelis wasn't able to assimilate this information, so I think organizing it all concisely in one post will help him and other readers understand. And hopefully he will stop lying.
I don't lie, and I'm only here to acquire rational arguments from which I can learn. I will try to limit my comments to purely technical and rational points. Because you really need Satoshi to be an evil genius in order for your work to have sufficient sensible meaning, I know that I have no way to convince you of anything else, nor do I have any incentive to do so (apart from the fact that it would, of course, perturb the logic of your arguments).
@dinofelis claims that since the true security of Bitcoin's 256-bit ECDSA (elliptic curve digital signature algorithm, a form of ECC, i.e. elliptic curve cryptography) is known to be only 128 bits, then if we hash the ECDSA public key, we only need a 128-bit hash. Thus he claims that Satoshi was wasteful and not a genius. Although Satoshi's long-term priorities were not focused on minimizing block size consumption, given that 1 MB was deemed more than sufficient for Bitcoin's planned future as a block chain for the $billionaires only, Satoshi did minimize the length of the hash by choosing 160-bit RIPEMD-160 instead of SHA-256 for the final hash of Bitcoin addresses (as they appear on the blockchain; note that publicly distributed addresses also have a checksum for catching user typos, but afaik this checksum is or could be discarded from what is stored on the blockchain). He did this minimization because it is good design sense, it is sufficient security and collision resistance, it provides an extra layer of protection against any unknown cryptanalytic interaction between SHA-256 (or RIPEMD-160) alone and ECDSA, and it helps to market the product to the n00bs as scalable (even though Satoshi was being deceptive in this regard) in Bitcoin's nascent stage. Also, applying SHA-256 before RIPEMD-160 provides an extra layer of protection against any unknown cryptanalytic breakage of collision resistance in RIPEMD-160 alone. For example, SHA-256 has a Merkle-Damgård length-extension weakness when not doubled with itself or another hash, which tangentially, btw, would have provided someone with a strong hint as to where to look when inventing AsicBoost to make SHA-256 mining 30% more efficient.
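As an aside, the doubling defence mentioned above is easy to sketch (a minimal Python sketch using only the standard library; the example key bytes are made up, and it shows only the double-SHA-256 step rather than the full RIPEMD-160 address pipeline, since RIPEMD-160 support depends on the local OpenSSL build):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Hash twice with SHA-256, as Bitcoin does for txids and address
    checksums. The outer hash hides the inner chaining state, which is
    what a Merkle-Damgard length-extension attack would need."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Hypothetical 33-byte compressed public key, for illustration only.
pubkey = bytes.fromhex("02" + "11" * 32)
once = hashlib.sha256(pubkey).digest()
twice = double_sha256(pubkey)
print(len(once), len(twice))  # both are 32-byte digests
print(once != twice)          # True: the outer hash changes the result
```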
All this falls apart of course, because there are contradictory elements. This is what I wanted to outline. I'm not saying that bitcoin's crypto design is erroneous in the sense of dysfunctional. It is simply incoherently designed. It is like removing two of the four screws that hold the bicycle bell in order to save some weight, and then adding a wheel made of lead.
You are perfectly right that hashing the public key protects the signature scheme in the case that the ECDSA scheme becomes broken. The problem is that at a certain point you will need to expose that broken system. Now the question is: HOW broken will it be, and can the broken system still hold up for the functionality that is needed?
This is why, in crypto, you never "protect a broken system", because it implies that either 1) you didn't need that broken system, in which case there's no reason to use it, or 2) you need it, and as it is broken, this is where your overall design will fail. And this is exactly what I was pointing out: if ECC is considered broken (partially or totally), then no matter how well we protect it with superior hash protection, when we have to use that broken ECC, our crypto system breaks down. Now, maybe it still "survives long enough"? Does it?
This is where we come to the "time scales of resistance" of both systems. The address, on chain, will need to survive for centuries. The signature will need to survive AT LEAST 10 minutes, and most probably on the order of days. The ratio between the two is about 12 bits of security, not more. So anything that has *SUFFICIENT* security at the 128-bit level (ECC) will have enough security at the 140-bit level long term. Or, vice versa, if you NEED 160 bits of security in the long term, you NEED at least 148 bits of security in the short term, and 128 bits is TOO SMALL. But given, on top of that, that the chances are higher that ECC will be broken SOMEWHAT before hash functions get broken, you would need somewhat more, vastly more, or infinitely more security for the ECC (so more than 148 bits).
In other words, if you could crack the 160-bit address in a century, then you can crack the signature in less than a second (classically). You cannot deny this; it is simple math.
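The "simple math" can be checked directly (a minimal sketch; the cracking rate is whatever hypothetical rate exhausts a 2^160 search space in one century):

```python
# If some machine exhausts a 2^160 search space in one century,
# how long does the same machine need for a 2^128 space?
century_seconds = 100 * 365.25 * 24 * 3600   # ~3.16e9 seconds
rate = 2**160 / century_seconds              # hypothetical trials per second
time_128 = 2**128 / rate                     # seconds for the smaller space
print(time_128)           # ~0.73 s: under a second, as claimed
print(2**160 // 2**128)   # the ratio is exactly 2^32
```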
==> again, my obvious conclusion is that the address is overprotected, or the signature scheme is very very vulnerable.
But bitcoin's design is even worse. Bitcoin's design allows you to re-use addresses in several UTXOs. This re-use not only makes a mess of indicating WHICH output you mean when you specify an address, it also renders the above protection scheme totally moot. Why? Because if you spend ONE of the UTXOs with a given address A, you have exposed the public key of that address LONG TERM for all the other UTXOs that have the same address. So by allowing the re-use of addresses, bitcoin's design totally destroyed, IN ANY CASE, the long-term "protection by the hash function" of the public key.
On top of that, it is true that a single application of a hash in a Merkle-Damgård construction is open to the extension attack, and that a good idea is to do a double hash IF YOU USE THE MERKLE-DAMGÅRD chaining, that is, if you need to hash more than one input block. However, in this particular case this is ridiculous, because the key doesn't NEED the Merkle-Damgård chaining: SHA-256 takes data blocks of 512 bits, so a (compressed, 33-byte) public key fits in a single SHA-256 compression call. As such, the double-hash construction is again protecting against a non-existent threat.
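That block arithmetic is easy to check (a minimal sketch; the 33- and 65-byte sizes are the standard compressed and uncompressed public-key encodings):

```python
def sha256_blocks(n_bytes: int) -> int:
    """Number of 64-byte compression calls SHA-256 needs for an n-byte
    message: padding adds 1 byte (0x80) plus an 8-byte length field,
    rounded up to a multiple of 64 bytes."""
    return (n_bytes + 1 + 8 + 63) // 64

print(sha256_blocks(33))  # compressed public key   -> 1 block
print(sha256_blocks(65))  # uncompressed public key -> 2 blocks
print(sha256_blocks(55))  # largest single-block message -> 1 block
```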
Now, the succession of a SHA-256 hash and a RIPEMD-160 hash can be funny, but it doesn't serve any purpose. You could just take the first 160 bits of the SHA-256 hash. If you think that SHA-256 is broken, first of all it doesn't matter much, because with address re-usage the hash only "protects" the supposedly-not-broken ECC public key until the first use of an address, so the importance of this hash is not so large. But on top of that, one can always reduce a hash length by just taking the first n bits. Transforming a 256-bit hash into a 160-, 128- or whatever-bit hash is easy: take only the first n bits.
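The "just truncate" option really is that trivial (a sketch; the example key bytes are made up):

```python
import hashlib

def truncated_hash(data: bytes, bits: int) -> bytes:
    """Keep only the first `bits` bits of SHA-256 -- truncating to the
    first n bits is a standard way to derive a shorter hash from a
    longer one."""
    assert bits % 8 == 0 and bits <= 256
    return hashlib.sha256(data).digest()[: bits // 8]

# Hypothetical compressed public key, for illustration only.
pubkey = bytes.fromhex("02" + "ab" * 32)
addr160 = truncated_hash(pubkey, 160)
print(len(addr160))  # 20 bytes = 160 bits, no RIPEMD-160 needed
```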
The super-over-protected public key, behind its SHA-256 plus RIPEMD-160 hashing, is given out the moment the first transaction spending from an address is signed.
Yet @dinofelis is incorrect to claim that 128 bits would have been sufficient for the hash function, for at least two reasons:
Reducing 160 bits to 128 bits only saves 32 bits (20%), and for that minuscule size reduction you are not factoring in the exponential loss in randomized collision resistance.
Then, reducing from 256 bits which was possible, to 160 bits, sounds even sillier, doesn't it ?
Insufficient collision resistance of 128 bits. Even if we assume that all attacks on the collision resistance of a hypothetical 128-bit "SHA128" are intractable, the equation for random chance says that if we generate more than a trillion addresses then we have a near certainty of producing one random collision.
==> this looks like a smart and correct argument, but it isn't, for several reasons.
The first reason is that the security level is
the security level of a single address. If there are many addresses, of course the probability of breaking one of them increases proportionally with their number, but that doesn't change the individual security level of an address. In other words, if your probability of dying in a car accident is 1/10000, that is not influenced by how many people run that risk. If there are 1 million people, then there will be 100 deaths due to car accidents, but your individual risk of dying in a car accident remains 1/10000.
So the fact that there are trillions of addresses, and that there is hence a trillion times higher probability to find ONE of those addresses than if there were only one,
doesn't alter the security of a single address being found. Another way of saying this: the security of using, say, AES-256 isn't diminished by the fact that MANY PEOPLE use AES-256, even though the chance of finding ONE of the secret keys of ANY ONE of the users of AES-256 is thereby increased.
-->
the security level of an individual address is not influenced by the fact that there are many addresses, and hence, that the probability to find one is increased.
But let us assume that Satoshi wanted an OVERALL security level of 128 bits, and not just the single address security. What does it mean, to have a security level of 128 bits ? It means that it takes 2^128 trials to find it. Now, suppose that there are N possible addresses on the block chain (trillions = 42 bits, say). What is the probability to find ONE of them ? That is, an attacker wants to find AN address, no matter which one, on the block chain.
It is simply one in 2^160 / N = 2^160 / 2^42 = 2^118 per trial. That is still a very small probability! The "near certainty" is in fact a very rare event! In the case of a 128-bit hash, it would be one in 2^86.
To what kind of security risk does this correspond ? It corresponds to an attacker wanting to find JUST AN ADDRESS on the chain.
==> note, however, that the 128 bit level security is not reached with "trillions of addresses" on the chain, with a 160 bit hash.
He would need at least a 170-bit hash if that were the case. But is that all? The confusion here is between the collision resistance of a hash (bits / 2) and the probability of finding one of the hashes in a GIVEN SET. When you have a 160-bit hash, you need on average about 2^80 hashes to have a reasonable probability that two of them are equal (birthday paradox). However, if you have a 160-bit hash and you are GIVEN 2^42 hashes, then the probability that the next one will be equal to one of them is only 1/2^118. That said, the probability that ANY two of them in the set are equal is rather 1/2^76, which is still very small. In the case of 128 bits, this brings us down to 1/2^44.
To what kind of security risk does this correspond ? It corresponds to ALL users being attackers, and trying to find a collision with a previously existing address. Then ALL of them need to try about 2^76 addresses in the 160 bit hash case, and 2^44 addresses in the 128 bit case, in order for ONE OF THEM to succeed in finding ONE other address.
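Those numbers can be reproduced with straight log2 arithmetic (a minimal Python sketch; it uses the same approximation as the text, N^2 / 2^L, for the any-pair probability, whereas the exact birthday bound carries an extra factor of about 1/2):

```python
from math import log2

N = 2**42  # "trillions" of addresses, as in the text

def next_hits_set(L: int) -> float:
    # log2 probability that one fresh L-bit hash lands on one of N given hashes
    return log2(N) - L

def some_pair_collides(L: int) -> float:
    # log2 probability that ANY two of the N hashes coincide
    # (text's approximation N^2 / 2^L; exact bound ~ N^2 / 2^(L+1))
    return 2 * log2(N) - L

print(next_hits_set(160))       # -118.0 -> the 1/2^118 above
print(some_pair_collides(160))  # -76.0  -> the 1/2^76 above
print(some_pair_collides(128))  # -44.0  -> the 1/2^44 above
```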
If we consider this a security risk, then we see that 160 bits is BY FAR not good enough.
==> so in as much as this "overall security level" was required to be 128 bits, the 160 bit hash is WAY WAY TOO SMALL.
In fact, if you want a non-collision probability of 1/2^128 for "trillions of hashes", then you need to add 42 × 2 = 84 bits. You'd need a hash length of 128 + 84 = 212 bits! Note that in no way, even with a 128-bit hash, is there "almost certainty" that a collision occurs. The probability of this happening without an "attack" is still of the order of picking one particular molecule out of a glass of water. So the claim that 128-bit address hashes are going to screw up the block chain through address collision with near certainty is wrong. It remains a very tiny probability.
==> in any case, 160 bits doesn't make sense. If we look at the security of individual addresses, this is not influenced by the amount of other addresses in the chain (and this is the true "security level" concept). Then 160 bits is too large.
If we consider an attacker wanting to find a single collision with just any address, the security level should be higher than 170 bits. And if we want to require a probability of non-collision less than 1/2^128, the hash length should be 212 bits.
Note that this problem goes away if we publish directly the full 256-bit public key (which, with address re-usage, is in any case published when the first UTXO of an address is spent).
If we have trillions of 256 bit public keys, the collision probability within a set of trillions of keys, is less than 1/2^172.
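The same style of arithmetic gives the required hash length for a target non-collision probability, and the margin for full 256-bit public keys (a sketch; `required_hash_bits` is a made-up helper name):

```python
def required_hash_bits(target_security: int, n_items_bits: int) -> int:
    """Bits needed so that N = 2^n_items_bits hashes collide with
    probability at most 2^-target_security, using p ~ N^2 / 2^L."""
    return target_security + 2 * n_items_bits

print(required_hash_bits(128, 42))  # 212 bits needed, not 160
print(2 * 42 - 256)                 # log2 any-pair collision probability
                                    # for 2^42 full 256-bit keys: -172
```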
Note that all this is when we erroneously consider "overall" security levels, and not individual address security levels. The overall security of bitcoin is, however, way, way smaller than this. In fact, you can render an address unspendable by simply reversing the transaction that had this address as an output, redoing the block chain from that point on.
The actual PoW security of the entire block chain, and hence of every single address, is much, much smaller than any of these numbers. So all of this is theory in any case.
Satoshi was prescient in his prudence, because since Bitcoin's launch in 2009 a collision attack against "SHA128" has been discovered which reduces the collision security to 60 bits, which is approaching the realm of tractability.
Collision attacks are of no importance in bitcoin. It is pre-image attacks that are of importance. Being able to generate two well-designed inputs that hash to the same hash doesn't help anything. You need to find a hash that is in a GIVEN TABLE (the existing addresses). That's pre-image, not collision.
Moreover, the "protection" of the public key is silly if bitcoin addresses can occur in several outputs: once the first one is spent, the public key, so well protected by a hash, is published!
It is also this address re-usage that obliges one to specify the transaction, to know WHICH of the different UTXOs with the same address one means. If address re-usage were prohibited, there could only be a single UTXO with a given address, and that would be unique: no more transaction hashes, positions within a transaction and so forth, and the transaction malleability problem would go away with them.
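A toy sketch of why re-use forces outpoints (the names and structures below are made up for illustration and are not Bitcoin's actual serialization): with re-use, one address can map to several unspent outputs, so a spend must name a (txid, output index) pair; with single-use addresses the address alone would identify the UTXO.

```python
# Toy UTXO set with address re-use: one address "A", two unspent outputs.
utxos = {
    ("txid_aaaa", 0): {"address": "A", "value": 50},
    ("txid_bbbb", 1): {"address": "A", "value": 30},
}

# Looking up by address alone is ambiguous...
matches = [op for op, out in utxos.items() if out["address"] == "A"]
print(len(matches))  # 2 -> the spender must disambiguate

# ...so a spend has to reference the full outpoint (txid, index).
spend = ("txid_aaaa", 0)
assert spend in utxos
utxos.pop(spend)

# With single-use addresses there would be at most one match, and the
# address alone would suffice as the reference.
```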
Additionally, since the attacker can control the message being signed
This has no influence on the public key itself or its hash; it only influences the signature.
@dinofelis claims that quantum-computing resistance via the hash is futile: if ECDSA is broken via Shor's algorithm, he claims, the attacker can crack the transaction signature and double-spend it when it is published, before the bona fide signature becomes final in the blockchain. I already refuted this argument for two reasons.
If you argue that it doesn't matter whether we have the hashes when ECC is broken by quantum computing, because the transactions can be intercepted and cracked by the attacker before they are confirmed in the network, you would not be thinking clearly. Quantum computing at its inception (nascent stage) would likely only be able to break long-term security, not short-term. So there would be a period in which to transition, as I already stated in the above quote from my prior post.
That's a faulty rebuttal. A full quantum computer (that is, a general-purpose quantum computer with sufficient qubits) cracks ECC IMMEDIATELY (only a few thousand clock cycles needed). So my "not thinking clearly" is met with a not-so-very-clear argument that itself has no ground.
Any partial quantum computer is equivalent to a seriously sped-up classical machine. My point was that if you can crack a 160-bit security level in 50 years, you can crack a 128-bit security level in 0.3 seconds with the same machine. Your rebuttal doesn't address this. In any case, if you NEED 160 bits of security to be centuries-safe, then a 128-bit security level is cracked in a matter of seconds, classically. Quantum is even worse: even though quantum computers still face a 2^80 security level for the hash, the ECC is TOTALLY GONE. In the blink of an eye, a quantum computer solves your ECC problem, no matter its initial security level, if it has enough qubits.
But I already said that. I hope you see that your "rebuttal" is totally empty of arguments.
But your analogy does not apply, because Shor's algorithm (a form of cryptanalysis) is already known! It is not a future unknown.
Also (and this is a minor point which isn't really necessary for my slam dunk) you are conflating the breakage of number-theoretic discrete-logarithm security with the security of the permutation algorithms of hash functions. I repeat the distinction between the two, which you have failed to assimilate:
You are arguing against your own position! My claim is that if Satoshi feared that ECC was going to be broken, there's no reason to use it and try to protect it, because if that happens, the moment you USE the ECC part of the system, your system breaks down. And now you argue in the same sense: that the hash protects the broken ECC, because we already know how to break it if ever we can build a quantum computer. Yes, I agree, and that's exactly why it was a bad design! And that's why you don't protect systems which you think are broken.
I'm not going to explain this another time.
--> if we assume that ECC will be broken one day, bitcoin's crypto scheme is IN ANY CASE not usable.
Not only are you failing to assimilate the fact that Shor's breakage is already known (not a future unknowable, as you are arguing), which is a sufficient slam dunk on why you are incorrect, but you are also claiming that hash functions can typically be entirely broken in one swoop, which afaik is not the case (and I studied the cryptanalysis history of past SHA submissions from round 1 to the final rounds).
You are totally missing my argument, and making your point worse. Shor's algorithm is known, but the computer on which it can run is not. The whole question is whether one day one will be able to build such a machine.
The day that we have that computer, an ECC signature doesn't survive 100 milliseconds. So your hash protected public key is worthless. That's my point. You can't do bloody shit with your protected address, because the day that you want to sign a transaction, one can IMMEDIATELY fake another one.
And again, the inconsistency is to allow multiple usage of the same address. Once the key of THE address is given out to spend one single UTXO, all other UTXOs with the same address are entirely compromised, if this hash protection was needed at all.
This address re-use is also what makes a lot of other things difficult, like having to refer to a transaction as a whole, needing transaction hashes and so on.
I repeat:
1) If the hash was to protect the ECC, have collision resistance etc....
then
1a) this was silly, don't protect a broken system
1b) if you insist, use maximal protection, that is, a 256 bit hash of the key (maximal key entropy preserved).
1c) don't allow, OBVIOUSLY, address re-usage, which makes the protection of the public key entirely moot.
2) if the hash was to reduce block chain bloat, then, there are 2 possibilities:
A: hash the public key to 128 bits, to win room
B: publish the public key directly (and avoid all collision discussions)
but also:
2b) don't allow address re-usage. As such, the signature will reveal the public key, which directly indicates the single corresponding UTXO --> no need for a silly 256-bit transaction hash with 32 extra bits of transaction output number: there's only one corresponding UTXO possible.
2c) don't publish a second time the public key, it can be derived from the signature !
These two measures reduce the room a transaction takes on the block chain by about a factor of two.
Hashing to 160 bits "to win room on the chain" (rather than hashing to 256 bits, and hence losing out on collision security, if that is an argument), while using 256-bit transaction hashes and publishing a public key that is not needed (it can be derived from the signature if some precaution is taken), is totally contradictory. Claiming that a 160-bit hash is needed to counter whatever attack on the public key, while allowing address re-usage and hence exposing all those second, third, ... UTXOs with the same address, is incoherent.