Bitcoin Forum
Author Topic: NSA and ECC  (Read 48704 times)
hashman
Legendary
*
Offline Offline

Activity: 1264
Merit: 1008


View Profile
July 14, 2014, 12:52:07 AM
 #201


Yes, but G is security irrelevant for our normal usage in Bitcoin (and generally, except for some contrived examples— e.g. where I need to convince you that I don't know the discrete log of some nothing-up-my-sleeve point X (X != G), and I picked X long in advance and selected G so that I knew the discrete log of X; but this is contrived and isn't something that I can think of any reason we'd do in Bitcoin).


For normal usage I am considering a signature operation which just for fun is:

sig = ((HashOfThingToSign + EccMultiply(Gx,Gy,RandNum)%N*privKey)*(modinv(RandNum,N))) % N

If I stare at this long enough I can convince myself you are right..  and the more I deal with really big numbers like these
the more I think it is bulletproof.   
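
For what it's worth, that one-liner is the s computation of ECDSA signing (with r being the x-coordinate of k*G reduced mod N). Here is a self-contained toy sketch of it in Python, using the curve constants quoted below and made-up demo values for the key and nonce; it is only an illustration, not a reference implementation:

Code:
# Toy sketch of the ECDSA signing equation above (plain Python, not
# constant-time, not for real use), with the secp256k1 constants spelled out.
import hashlib

p  = 2**256 - 2**32 - 977                     # field prime (the 977 discussed earlier)
N  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def modinv(a, n):
    return pow(a, -1, n)                      # modular inverse (Python 3.8+)

def ecc_add(P, Q):                            # affine add/double on y^2 = x^3 + 7
    if P == Q:
        lam = (3 * P[0] * P[0]) * modinv(2 * P[1], p) % p
    else:
        lam = (Q[1] - P[1]) * modinv(Q[0] - P[0], p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def EccMultiply(gx, gy, k):                   # double-and-add; assumes 0 < k < N
    R = (gx, gy)
    for bit in bin(k)[3:]:                    # skip '0b' and the leading 1 bit
        R = ecc_add(R, R)
        if bit == "1":
            R = ecc_add(R, (gx, gy))
    return R

def sign(HashOfThingToSign, privKey, RandNum):
    r = EccMultiply(Gx, Gy, RandNum)[0] % N   # r: x-coordinate of k*G, mod N
    s = ((HashOfThingToSign + r * privKey) * modinv(RandNum, N)) % N
    return r, s

z = int.from_bytes(hashlib.sha256(b"thing to sign").digest(), "big")
print(sign(z, privKey=0x12345DEADBEEF, RandNum=0x1337C0FFEE))  # demo values only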


 but I'd still love to hear some story about where these lovely numbers Gx and Gy come from,

0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

or

55066263022277343669578718895168534326250603453777594175500187360389116729240,
32670510020758816978083085130507043184471273380659243275938904335757337482424

I've heard:  "value of G should be generated canonically (verifiably random)."

AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 14, 2014, 01:14:17 AM
 #202

I can't help but notice that not one of the safecurves.cr.yp.to families are implemented by openssl:
All of the curves most preferred on there are ones that DJB made. Some of them have no implementation in software; only the curve25519/ed25519 stuff has any non-trivial deployment at all.

While the analysis there is generally good and thoughtful, I believe that the page crosses the line into being a bit... uh... marketing motivated. This has been discussed before several times— e.g. it cites several points which are irrelevant to us (e.g. the Elligator encoding), points which are 'one time costs' (completeness issues that make a correct implementation take somewhat more work), and other points which are compensated for by increased security of secp256k1 in other respects. (E.g. the efficient endomorphism for secp256k1 that yields very fast verification for our curve reduces security by a couple of bits, which is pretty comparable to the cofactor of 8 in ed25519 that reduces their security by a couple of bits, actually slightly more.)

What's implemented in openssl has more or less nothing to do with what bitcoin uses, and getting rid of openssl wouldn't change it.

If I am not mistaken, aren't you moving the goal posts, in that you appear to be talking about bits of security, which is orthogonal to what he was talking about: verifiable randomness of the parameters to eliminate any suspicion of a backdoor?

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse | THIS FORUM ACCOUNT IS NO LONGER ACTIVE
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4158
Merit: 8382



View Profile WWW
July 14, 2014, 05:19:57 AM
 #203

If I am not mistaken, aren't you moving the goal posts, in that you appear to be talking about bits of security, which is orthogonal to what he was talking about: verifiable randomness of the parameters to eliminate any suspicion of a backdoor?
Unlike the NIST curves, the Bitcoin curve is not "random": there is no parameter question there. The comments on the SafeCurves page address a whole host of analysis points, so with the randomness off the table I was speaking to some of the other things where there are meaningful engineering differences between Bitcoin and curve25519.
AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 14, 2014, 05:35:53 AM
Last edit: July 14, 2014, 09:55:37 AM by AnonyMint
 #204

Okay, but my point remains that I think the person you were replying to appeared to be asking why not use the bada55 curves, which claim verifiable randomness of the parameters, and the post from Peter R preceding it asked why 977 was arbitrarily chosen, and it appears (?) that no sufficient justification has been provided.

I hope we agree there is a distinct difference between "arbitrarily or even justifiably chosen" and "verifiably random"?

Edit: note I did not dig into the bada55 curves to make sure I understand what they mean by verifiable random. So I am not sure if it is distinct. I am just speaking conceptually above.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse | THIS FORUM ACCOUNT IS NO LONGER ACTIVE
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4158
Merit: 8382



View Profile WWW
July 14, 2014, 11:12:32 AM
 #205

Edit: note I did not dig into the bada55 curves to make sure I understand what they mean by verifiable random. So I am not sure if it is distinct. I am just speaking conceptually above.
The bada55 curves were created to show the worthlessness of "verifiably random" (at least as done by NIST): they are "random"— but the authors (ab)used the process to make sure the parameters had "bada55" in them. You are supposed to imagine an attacker armed with a mathematical breakthrough that compromises 1 in 2^24 random curves doing the same kind of seed grinding.

I didn't believe the post I was responding to was talking about the bada55 curves at all; the subject had drifted and the post was talking about the curves recommended by the site. Presumably if I failed to answer the author's question he could respond.
hashman
Legendary
*
Offline Offline

Activity: 1264
Merit: 1008


View Profile
July 14, 2014, 02:21:30 PM
 #206

Edit: note I did not dig into the bada55 curves to make sure I understand what they mean by verifiable random. So I am not sure if it is distinct. I am just speaking conceptually above.
The bada55 curves were created to show the worthlessness of "verifiably random" (at least as done by NIST): they are "random"— but the authors (ab)used the process to make sure the parameters had "bada55" in them. You are supposed to imagine an attacker armed with a mathematical breakthrough that compromises 1 in 2^24 random curves doing the same kind of seed grinding.

I didn't believe the post I was responding to was talking about the bada55 curves at all; the subject had drifted and the post was talking about the curves recommended by the site. Presumably if I failed to answer the author's question he could respond.

Thanks, if you mean me: yes, I was not talking specifically about the bada55 curves, though thanks for explaining how they are badass Smiley  I was looking for some insight into whether any of the vulnerabilities mentioned by Daniel J. Bernstein and Tanja Lange are serious, and why nobody has added more curves to the openssl default list.

I'm not too concerned with the 977 parameter, but to the untrained eye of, for example, an investor, the presence of unexplained numbers in the generator could still be troubling.  Earlier in the thread we heard:
 
SECG chair Dan Brown: 

3. The base point G is something I cannot explain, [snip]...   I strongly doubt that G is malicious, [snip]

OK, so I took him a little bit out of context, but you can see how someone researching the security of the DSA of bitcoin might not feel 100% satisfied.  It's a pity we can't do better and determine where they really came from, but I guess whoever picked those points probably had no idea they would one day secure millions of people's finances.

Shall we offer a bounty? 
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4158
Merit: 8382



View Profile WWW
July 15, 2014, 12:37:45 PM
Last edit: July 15, 2014, 01:00:36 PM by gmaxwell
 #207

how someone researching the security of the DSA of bitcoin might not feel 100% satisfied
Please read the thread. We explain in detail how the choice of G is not relevant for our own applications. If you don't believe that, DJB also argues why the choice of G is usually irrelevant on SafeCurves ("SafeCurves does not place restrictions on the choice of this base point. If there is a 'weak' base point W allowing easy computations of discrete logarithms, then ECDLP is weak for every base point")— an argument he added after I emailed him and suggested he add an explanation of his own choice of G, which he did not previously provide... it took me inventing a novel attack against a contrived hypothetical protocol (not Bitcoin) where the choice of G actually mattered to convince him to say anything at all.
cypherdoc
Legendary
*
Offline Offline

Activity: 1764
Merit: 1002



View Profile
July 15, 2014, 04:50:58 PM
 #208

In making its recommendations, the VCAT specifically addressed NIST's interactions with the National Security Agency (NSA). The report states, "NIST may seek the advice of the NSA on cryptographic matters but it must be in a position to assess it and reject it when warranted."

"The report also recommends that NIST review the current requirement for interaction with the NSA and recommends changes in instances where it "hinders [NIST's] ability to independently develop the best cryptographic standards."

"We will continue to work with the best cryptography experts in the world, both inside and outside of government," said May. "At the same time, we recognize and agree with the VCAT that NIST must strengthen its in-house cryptography capabilities to ensure we can reach independent conclusions about the merits of specific algorithms or standards."


http://www.nist.gov/public_affairs/releases/crypto-review-071414.cfm
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
July 15, 2014, 06:07:34 PM
Last edit: July 15, 2014, 06:20:06 PM by DeathAndTaxes
 #209

Edit: note I did not dig into the bada55 curves to make sure I understand what they mean by verifiable random. So I am not sure if it is distinct. I am just speaking conceptually above.
The bada55 curves were created to show the worthlessness of "verifiably random" (at least as done by NIST): they are "random"— but the authors (ab)used the process to make sure the parameters had "bada55" in them. You are supposed to imagine an attacker armed with a mathematical breakthrough that compromises 1 in 2^24 random curves doing the same kind of seed grinding.

That is the rub, isn't it?  Any "verifiably random" curve which is just a deterministic process starting from an arbitrary, claimed (but completely unprovable) random seed value is worthless.  I mean, provably fair bitcoin casinos are better constructed in their proof and transparency.

This is the logic one is supposed to accept from NIST/NSA:
Generate "random value" -> curve parameters = insecure, because one could just pick "random values" until they find one which produces a weak curve.
Generate one* "random value" -> seed -> deterministic process -> the first valid curve produced = "secure", because, look, it is a deterministic process.

Yes, NIST claims the seed for secp256r1 was the first and only one they generated.  It might actually be, but it can't be proven, and they know it can't be proven, so "provably random" is just "random".  It is no different than NIST just randomly generating curves directly.  In both cases you have to trust that what they published as the "first valid curve" is actually the first valid curve and not the "first valid but weak curve" after they attempted and discarded quadrillions of other, more secure curves.  Trusting that the seed is actually random is no different than trusting that the parameters are random.

Now, doing something like the following, the term "provably random" might actually mean something:
Code:
# note: this is just the pseudocode made concrete as Python; validate_curve()
# stands in for the full battery of curve-acceptance tests.
import hashlib

def validate_curve(curve_bytes: bytes) -> bool:
    raise NotImplementedError("derive curve parameters from curve_bytes and test them")

nonce = 0
while nonce < 2**64:
    curve_bytes = hashlib.sha3_256(nonce.to_bytes(8, "big")).digest()
    if validate_curve(curve_bytes):
        break   # the first nonce that yields an acceptable curve wins
    nonce += 1

The process is completely deterministic, and each rejected curve can be documented as to the reason for rejection.  The criteria for "valid" can be closely analyzed.  Some may choose to use a lower-nonce curve that, while less efficient, is purer in the sense that the window for manipulation via the acceptance criteria is smaller.  Others may select a higher-nonce curve which is more efficient than a lower-nonce one.  The probability that the hashes of an incrementing nonce would, before producing a strong curve, first produce a curve that looks secure but has a flaw NIST/NSA is aware of is remote.  There would be real value in random curves coming from a generator that is open and transparent.  Now, this isn't a very hard concept, and NIST/NSA have lots of very smart people, so I am sure someone there thought of something functionally similar to this.  It wasn't used; instead they created a dog and pony show about deterministic curve generation from a random seed which does nothing but make the process more opaque.  So the obvious question becomes: why?
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4158
Merit: 8382



View Profile WWW
July 15, 2014, 06:35:24 PM
 #210

It is no different than NIST just randomly generating curves directly.
Well, it is different— say there is a class of weak curves known to the NSA, but the weakness requires you to select from a one-in-2^128 set of parameters... without a pre-image attack on the hash function you couldn't use one of these 'random' procedures to pick such a curve.

If the weakness was a 2^32 one— you could, thus the bada55 curves.
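
As a toy illustration of that kind of seed grinding (not the actual bada55 procedure; just hashing incrementing counters until a chosen short hex pattern shows up in the digest):

Code:
import hashlib

# Grind "seeds" until the digest contains a chosen hex pattern.  Hitting a
# 24-bit pattern at a fixed position takes ~2^24 attempts on average (letting
# it appear anywhere in the digest is a few dozen times cheaper, still
# trivial); hitting a 128-bit pattern this way is computationally infeasible.
target = "bada55"
seed = 0
while target not in hashlib.sha256(seed.to_bytes(8, "big")).hexdigest():
    seed += 1
print(seed, hashlib.sha256(seed.to_bytes(8, "big")).hexdigest())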

The scheme you suggest has a weakness in that there are potentially too many plausible schemes.  For example, an attacker might use their ability to choose a hash function and value serializations to choose insecure parameters.

What I would prefer is a scheme that worked like this:

(1) Take a scheme like yours, but give it an initialization seed as an input. The scheme should be designed such that all acceptable parameters are equally likely to be selected (assuming good properties of some cryptographic hash).
(2) Publish the scheme, and ask various mutually distrusting parties to compute random values R_0, R_1, ..., each of them publishing X_n = H(R_n).
(3) Take H(scheme) and the X_n, commit to them in a bitcoin transaction, and publicly announce this commitment.
(4) Some 100 blocks after the commitment, take the block hash 100 blocks after the commitment and have the parties release their R_n values; the seed is H(scheme||R_n...||block hash).

The idea here is that up to the preimage resistance of the hash function, to influence the selection you must both compromise all the distrusting parties AND do a huge amount of sha256 computation.
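
A rough sketch of how steps (2)-(4) could chain together, assuming plain SHA-256 for H and using placeholder values for the party secrets and the block hash (the actual on-chain commitment is omitted):

Code:
import hashlib

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return h.digest()

# (2) Each mutually distrusting party i picks R_i and publishes X_i = H(R_i).
R = [b"party-0-secret", b"party-1-secret", b"party-2-secret"]   # placeholders
X = [H(r) for r in R]

# (3) Commit to H(scheme) and the X_i, e.g. inside a Bitcoin transaction.
scheme = b"full description of the deterministic curve-generation procedure"
commitment = H(H(scheme), *X)

# (4) ~100 blocks later the parties reveal their R_i; everyone checks the
# commitments and derives the seed from the scheme, the R_i, and a block hash
# that no single party could have predicted at commitment time.
block_hash = bytes.fromhex("00" * 32)          # placeholder block hash
assert all(H(r) == x for r, x in zip(R, X))
seed = H(scheme, *R, block_hash)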

But that's all for cases where a random selection can't be avoided... the "fixed by security/performance considerations" approach is probably best.  An interesting question is how reasonable it would be to formalize all those considerations in order to combine the approaches... since, DJB's claims notwithstanding, there really are many degrees of freedom within that (otherwise secp256k1 _and_ ed25519 wouldn't both exist).

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
July 15, 2014, 07:23:58 PM
Last edit: July 15, 2014, 07:38:06 PM by DeathAndTaxes
 #211

If the weakness was a 2^32 one— you could, thus the bada55 curves.
...
The scheme you suggest has a weakness in that there are potentially too many plausible schemes.  For example, an attacker might use their ability to choose a hash function and value serializations to choose insecure parameters.

That is true; however, it is about reducing the window as much as possible.  The number of permutations of possible hash functions and serializations is relatively small compared to the number of "random" seeds that can be produced at "reasonable" cost by an intelligence agency.  Thanks for pointing out that the system used by Certicom/NIST does have some merit in preventing the exploitation of weak parameters which occur rarely.  Flawed parameters which occur less frequently than 1 in 2^96 are probably not exploitable.  The problem is that computing 2^80 curves (or at least 2^64) would be possible and would allow the exploitation of even weaknesses which occur rarely.  If (and it may not be possible) one found a weakness which occurs in at least one in 2^80 random parameter sets, an agency could produce a "verifiably random" curve that they have pre-compromised.  It may not even be a scenario where they know the curve can be compromised, but rather one of generating a huge number of curves and then having their top minds pick the curves they believe show the most promise for cryptanalysis in the future.

So while it would be possible for one to try more than one permutation of an incrementing-nonce curve, the window would be significantly smaller, right?  How many possible serializations and hash functions could be used?  2^20?  2^10?  Fewer?  Of those, how many would be greeted with suspicion?  There are only a handful of hashing algorithms which would be seen as logical choices.  Likewise with serialization: simple is better and more transparent.  Using some esoteric serialization would be seen with suspicion ("why did you do x, y, z when you could just do x?").

Still, I do like the idea of using the blockchain to act as both a timestamp mechanism and a very-difficult-to-manipulate entropy source.  Instead of a single block hash, the computational cost can be easily increased by using a sequence of n block hashes starting from the block containing the commitment txn (block_x).  Since miners are in fact distrusting parties, you could optionally avoid the need for step #2.

H(scheme||blockhash_x || blockhash_x+1 ... || blockhash_x+n-1)
or
H(scheme||R_n...||blockhash_x || blockhash_x+1 ... || blockhash_x+n-1)
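
As a concrete sketch of this variant (placeholder hashes standing in for blockhash_x .. blockhash_x+n-1, and SHA-256 assumed for H):

Code:
import hashlib

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return h.digest()

scheme = b"full description of the deterministic curve-generation procedure"
R = [b"party-0-secret", b"party-1-secret"]            # optional party secrets
block_hashes = [bytes([i]) * 32 for i in range(6)]    # placeholder block hashes

seed_without_parties = H(scheme, *block_hashes)       # miners as the distrusting parties
seed_with_parties = H(scheme, *R, *block_hashes)      # belt-and-suspenders version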

I also like the idea that you can in essence do a trial run.  Pretest the system to be sure it works as expected without knowing what the official seed is going to be.  Get peer-reviewed feedback, then commit it to the blockchain and wait for the entropy source to be generated.  Now everyone can use the generator to produce curves in an open and transparent manner.  I know I would trust it more than any other so-called "provably random" curves.
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4158
Merit: 8382



View Profile WWW
July 15, 2014, 07:46:42 PM
Last edit: July 15, 2014, 08:25:08 PM by gmaxwell
 #212

Instead of a single block hash the computational cost can be easily increased by using a sequence of n block hashes starting from the the block containing the commitment txn (block_x).  Since miners are in fact distrusting parties you could optionally avoid the need for step #2.
I think since each block already hashes the prev block we already have that property without doing it explicitly! Smiley

There is an argument that it should be using Toeplitz hashing with as much data as possible as an input for the entropy extraction, but there is a tradeoff that more "unusual crypto" in the scheme makes it harder to trust.

Quote
I also like the idea that you can in essence do a trial run.  Pretest the system to be sure it works as expected without knowing what the official seed is going to be.  Get peer reviewed feedback and then commit it to the blockchain and wait for the entropy source to be generated.  I know I would trust it more than any other so called "provably random" curves.
Yep. The idea is that you lay out all the "influence" parts up front, then use the blockchain to 'cut the deck' in a hard to rig way.

Even if someone didn't personally witness the process they still have the blockchain POW as evidence for a minimum computational cost in retrying. E.g. doing 2^70 blockchain work multiple times to rig it might be conceivable, but not the 2^128 work required to do 2^58 retries.

The next annoying property is that you want to have curve-acceptability tests, but with too many tests perhaps you are actually testing for an included weakness. E.g. DJB would perhaps have you test that there exists a specific mapping to nearly uniform scalars (e.g. Elligator), but perhaps that guarantees a vulnerability. Or in our case, secp256k1 is selected with the efficient endomorphism to permit the GLV method for faster operations.
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4158
Merit: 8382



View Profile WWW
July 15, 2014, 08:00:52 PM
 #213

A neat variation is that for the parties that compute H(R_n) you make H() really be g*R_n on bitcoin, and require that the transaction committing to the scheme actually be a coinjoin with the R_n parties, putting up some non-trivial amount of bitcoin to be held under each of the private keys for the duration of the mining time.  This means that if any of the parties leaked their R_n value to a miner (or to other R_n holders) during the mining interval, so that the miner could attempt to grind a solution with a particular output, then someone could instead steal their coins... so the integrity of the process would be bonded.  The bonds could potentially be very large.

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
July 15, 2014, 08:11:34 PM
 #214

I think since each block already hashes the prev block we already have that property without doing it explicitly! Smiley

On second thought I believe you are right.  My thinking was that it is unlikely but possible that someone (maybe the NSA with their super-secret hashing farm) could attempt multiple blocks and discard them until they found one which produced a weak curve, in essence stacking the deck.  I failed to realize that adding more blocks wouldn't help, as they are indirectly included via the prev_block, and to change the seed the attacker would still only need to change the last block.  So control over block n is the only thing which matters.  Honestly it is probably a non-issue: even if the attacker rented a majority of the hashrate and discarded blocks, he is paying a cost per block and eventually he will fall too far behind, as you indicate below.  Also there is the human element: while it doesn't disprove manipulation, any manipulation would be very easy to spot.  If there suddenly was a 238-block reorg which replaced the magic block, it might be prudent not to use the results. Smiley

Quote
Yep. The idea is that you lay out all the "influence" parts up front, then use the blockchain to 'cut the deck' in a hard to rig way.

That is a good analogy.

Quote
Even if someone didn't personally witness the process they still have the blockchain POW as evidence for a minimum computational cost in retrying. E.g. doing 2^70 blockchain work multiple times to rig it might be conceivable, but not the 2^128 work required to do 2^58 retries.

Agreed.  That should be ~2^66, not 2^70, right?  I assume it represents the average amount of work needed to solve one block (expected work per block being roughly difficulty × 2^32 hashes) and thus create and test one set of curve parameters for vulnerabilities.

Quote
The next annoying property is that you want to have curve-acceptability tests, but with too many tests perhaps you are actually testing for an included weakness. E.g. DJB would perhaps have you test that there exists a specific mapping to nearly uniform scalars (e.g. Elligator), but perhaps that guarantees a vulnerability. Or in our case, secp256k1 is selected with the efficient endomorphism to permit the GLV method for faster operations.

Agreed, and I would lean towards minimal validity testing.  Still, if some people decide to be more restrictive, the use of a standard, hard-to-cheat generator is still beneficial.  For example, let's say curve 283 is minimally valid but it doesn't have some properties that DJB et al. see as important for security.  If they can convince people not to use curve 283 (the first valid nonce) and instead use curve 13,288 (the first nonce which passes all their tests), then more power to them.  If nothing else it puts them both out there on an equal playing field.  Are the additional criteria worth the possibility of manipulating the selection by bypassing other currently valid curves?
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
July 15, 2014, 08:13:50 PM
 #215

A neat variation is that for the parties that compute H(R_n) you make H() really be g*R_n on bitcoin, and require that the transaction committing to the scheme actually be a coinjoin with the R_n parties, putting up some non-trivial amount of bitcoin to be held under each of the private keys for the duration of the mining time.  This means that if any of the parties leaked their R_n value to a miner (or to other R_n holders) during the mining interval, so that the miner could attempt to grind a solution with a particular output, then someone could instead steal their coins... so the integrity of the process would be bonded.  The bonds could potentially be very large.

Awesome.  A provable "secrecy bond".
hashman
Legendary
*
Offline Offline

Activity: 1264
Merit: 1008


View Profile
July 19, 2014, 09:14:44 PM
 #216

how someone researching the security of the DSA of bitcoin might not feel 100% satisfied
Please read the thread. We explain in detail how the choice of G is not relevant for our own applications. If you don't believe that, DJB also argues why the choice of G is usually irrelevant on SafeCurves ("SafeCurves does not place restrictions on the choice of this base point. If there is a 'weak' base point W allowing easy computations of discrete logarithms, then ECDLP is weak for every base point")— an argument he added after I emailed him and suggested he add an explanation of his own choice of G, which he did not previously provide... it took me inventing a novel attack against a contrived hypothetical protocol (not Bitcoin) where the choice of G actually mattered to convince him to say anything at all.

I understand how the choice of G can be irrelevant.  However, this fact is irrelevant to my question: why were they chosen as such?  My argument here is for better marketing.  I'm sure you understand the "nothing up your sleeve" motivation... sure, having something in your sleeve is not going to have relevance to the ECC, but the idea is to still show that there is nothing in your sleeve.

My question stands:  where did those numbers come from?  The probable answer is that they came from a random number generator that was lying around at the time, probably initialized by the date.  It's too bad this wasn't documented.     

 
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4158
Merit: 8382



View Profile WWW
July 20, 2014, 04:09:46 AM
 #217

My question stands:  where did those numbers come from?  The probable answer is that they came from a random number generator that was lying around at the time, probably initialized by the date.  It's too bad this wasn't documented.     
Seems no one knows, but likewise— who created the paper the printed version of the spec was printed on?  What software was used to spell-check the document?  Who came up with the short name for the curve?  Maybe they were a secret NSA plant! Tongue  If you want to go down the rat hole of _provably_ irrelevant things there is no end to it.  Ultimately people who are not technically sophisticated are at risk of being FUDed by people who are dishonest or themselves confused, but no amount of good process can prevent that.
hashman
Legendary
*
Offline Offline

Activity: 1264
Merit: 1008


View Profile
July 24, 2014, 01:10:06 AM
Merited by vapourminer (1)
 #218

Seems no one knows, but likewise— who created the paper the printed version of the spec was printed on?  What software was used to spell-check the document?  Who came up with the short name for the curve?  Maybe they were a secret NSA plant! Tongue  If you want to go down the rat hole of _provably_ irrelevant things there is no end to it.

I'd like to clarify this terminology a bit. 

When I think of your three examples, yes, the word irrelevant comes to mind.  The issuer of the paper, the spell checker, and the short name of the curve are all things which I don't even need to know to use bitcoin or to write my own client.  One might even say "provably" irrelevant for that reason, in that the relevance in question is the one implied by the context in which we are operating.  However, I personally would avoid using that terminology, perhaps because of my mathematics background.

However, the G points are hardly irrelevant in this context; please correct me if I am wrong.  If I don't know them I can't validate TXs, can't sign TXs, and so I can't use bitcoin at all.  Every ECC operation in bitcoin requires every bit of G to be exactly right.  When you say G is provably irrelevant, I can only assume (and I'd rather not, hence this reply) that you mean a choice of G cannot affect the ability of an attacker to brute-force a private key.  While there are convincing arguments for that in this thread, I wouldn't call any of them a proof.  AFAIK the difficulty of discrete logs on elliptic curves, or even the difficulty of factoring large numbers, has not been proven, and probably a proof that these things cannot be done in polynomial time would be equivalent to a proof that P != NP.

   

   
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4158
Merit: 8382



View Profile WWW
July 24, 2014, 01:58:24 AM
Last edit: July 24, 2014, 02:29:28 AM by gmaxwell
 #219

When you say G is provably irrelevant, I can only assume (and I'd rather not, hence this reply) that you mean a choice of G cannot affect the ability of an attacker to brute-force a private key.  While there are convincing arguments for that in this thread, I wouldn't call any of them a proof.
You can transform any pubkey on any G to a pubkey on another generator by means of addition.  In particular, if there is some bad generator O where you can easily compute the log of Ox for arbitrary x, one can find the discrete log of Gx as log_O(Gx)/log_O(G) mod order.  One doesn't need to prove anything about the hardness of the discrete log to show the arithmetic relation that if discrete log on a curve is insecure with respect to one generator, then it is insecure with respect to all generators of that group.
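
A toy numeric check of that relation, using the additive group Z_n as a stand-in for the curve group (both are cyclic of order n, so "scalar multiplication" is multiplication mod n and a discrete-log oracle is division mod n); the generators and key below are made-up values:

Code:
# Model the curve group as the additive group Z_n: k*P is k*P mod n, and a
# discrete-log oracle with respect to base O is multiplication by O^-1 mod n.
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order

def log_base(P, O):                       # "oracle": discrete log of P w.r.t. O
    return (P * pow(O, -1, n)) % n

G, O = 0xDEADBEEF, 0xBADA55               # two arbitrary generators (toy values)
x = 123456789                             # secret scalar
P = (x * G) % n                           # "public key" with respect to G

# Recover x using only logs taken with respect to the other base O:
x_recovered = (log_base(P, O) * pow(log_base(G, O), -1, n)) % n
assert x_recovered == x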

A better example that I could have given is how the byte order is chosen (big-endian or little-endian).  You surely can't create an implementation without knowing how to deserialize the bytes, but byte order isn't relevant to security.
AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
August 19, 2014, 10:44:44 AM
Last edit: August 19, 2014, 03:47:39 PM by AnonyMint
 #220

Schneier is justified in his recommendation, IMHO. But there is one bright spot: even if the standard ECDSA curves were broken in this way, if you do not reuse addresses it would not concern you as no public key or ciphertext is available until the coins are spent. So don't re-use bitcoin addresses (you shouldn't anyway).

I understand that in Bitcoin the public address is a cryptographic hash of the public key, so the public key is usually obscured until the coins are spent.
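
For concreteness, a minimal sketch of that hashing: a legacy pay-to-pubkey-hash address commits only to HASH160 of the serialized public key (note that RIPEMD-160 support in hashlib depends on the local OpenSSL build):

Code:
import hashlib

def hash160(pubkey_bytes: bytes) -> bytes:
    # Bitcoin's HASH160: RIPEMD-160 of SHA-256 of the serialized public key.
    sha = hashlib.sha256(pubkey_bytes).digest()
    return hashlib.new("ripemd160", sha).digest()

# A pay-to-pubkey-hash output reveals only hash160(pubkey); the actual curve
# point appears on-chain only when the output is spent (or a message is signed).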

In addition to reusing addresses, doesn't the ability to sign a message to prove you own some coins also reveal the public key?  The adversary then has a lot of time to attempt to crack your public key (though no publicly known feasible attacks exist at this time).

Also, I made the point elsewhere recently that since one Bitcoin pool sometimes controls near to or more than 50% of the hashrate, intercepting a transaction before it is on the blockchain is feasible.  In this case, I understand ECDSA would still need to be more than just a little bit broken, because the time to crack would be measured in minutes; thus a quantum computer would need much more than just a few qubits, or the number-theoretic breakage would need to be severe.



unheresy.com - Prodigiously Elucidating the Profoundly Obtuse | THIS FORUM ACCOUNT IS NO LONGER ACTIVE