Bitcoin Forum
May 13, 2024, 06:03:50 AM
Show Posts
Pages: « 1 2 3 [4] 5 6 7 8 9 »
61  Bitcoin / Development & Technical Discussion / Re: A(nother) downside to Proof-of-Stake? on: October 31, 2014, 01:52:50 AM
I am curious to hear others' opinions on Vitalik's PoS proposals that attempt to address these severe security weaknesses:

https://blog.ethereum.org/2014/10/03/slasher-ghost-developments-proof-stake/

These proposals do not address the fundamental concerns in the document that gmaxwell posted. They do add a fair bit of complexity, making them hard to analyze (and making a concrete attack too intricate to describe). IIRC Vitalik has backed away from these proposals because they do not provide the security benefits he originally thought they did.

It's worth noting that by writing down a well-defined security model and working toward it, it is possible to create a "working" PoS which is only broken when the assumptions of the security model are violated. If one were to do this, it would then be easy to point out how the security model is not applicable to the real world. But neither Vitalik's posts nor any PoS writeup that I'm aware of actually does this.
62  Bitcoin / Development & Technical Discussion / Re: AttackerSuccessProbability(0.3,1) = 0.627749 on: October 31, 2014, 01:40:10 AM
Either this interpretation is incorrect or something in the Satoshi formulae is wrong: an attacker having less than 50% of the hashing power cannot have higher than 50% probability of catching up! That's counterintuitive.

If an attacker has 50% of the hashpower, his probability of catching up is 1 -- this decreases continuously to zero as the attacker's hashpower goes to zero. There is no magic number related to "50% probability of success" that I'm aware of.

So without checking your math or Satoshi's, the 63% number sounds about right to me.
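For reference, the whitepaper's calculation (section 11) is easy to reproduce. Here is a direct Python port of Satoshi's C snippet:

```python
import math

def attacker_success_probability(q: float, z: int) -> float:
    """Probability an attacker with fraction q of the hashpower ever
    catches up from z blocks behind (whitepaper, section 11)."""
    p = 1.0 - q
    lam = z * (q / p)  # expected attacker progress while honest miners find z blocks
    total = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam)
        for i in range(1, k + 1):
            poisson *= lam / i
        total -= poisson * (1 - (q / p) ** (z - k))
    return total

print(attacker_success_probability(0.3, 1))  # ≈ 0.627749
```

Note that the result goes to 1 as q approaches 0.5, with no jump at any particular probability of success.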
63  Bitcoin / Development & Technical Discussion / Re: multisig using Curve25519 on: October 14, 2014, 01:01:49 AM
Oops! || is concatenation -- you hash the message, then hash R with the same sha2 state. (The existing ed25519 code should do this somewhere..) (I say Oops because I meant to write this and evidently forgot..)

Using xor is not secure.
64  Bitcoin / Development & Technical Discussion / Re: multisig using Curve25519 on: October 14, 2014, 12:32:26 AM
Hi James,

m is the message to be signed Smiley. The signature is the pair (r, s).

Andrew
65  Bitcoin / Development & Technical Discussion / Re: multisig using Curve25519 on: October 13, 2014, 07:24:07 PM
Hi James,

It seems like there are a few misunderstandings here. Let me start with some background. It is easier to explain what's going on if we abstract a bit: let's start by defining a group. A group is simply a set with an associative operation (called +), some set element 0 such that 0 + x = x + 0 = x for all x in the group, and a unary operation - such that x + (-x) = 0 for all x in the group. So, the integers with addition are a group (as are the reals, rationals, etc), the nonzero reals with multiplication are a group, the set of invertible nxn matrices is a group under matrix multiplication, etc.

An important group in cryptography is an elliptic curve group, which is a group whose elements are pairs (x, y) of numbers which satisfy an elliptic curve equation. Addition is defined in a bit of a funny way, but there is a Wikipedia page on it that is interesting.

What's important about these groups for cryptography is the following: given a group element G, you can "multiply" by an integer n by adding G to itself n times. (If n is negative, add -G to itself -n times.) It turns out that if you fix an element G in some group, then take all the elements {nG} for integers n, this set is itself a group such that (nG) + (mG) = (n + m)G. It's a true statement that given any nonzero elements of this group, say P and Q, there is always some number n such that P = nQ. But for elliptic curve groups, there is the so-called "discrete logarithm assumption" which says that actually calculating n is very hard. There is also the "Diffie-Hellman" assumption which says that given nG and mG in the group, it is hard to calculate nmG. (Of course, if you know n or m, it's easy to calculate. This is why the "shared secret" scheme you described is efficiently computable by the participants, and the Diffie-Hellman assumption is why it is a secret.)
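To make this concrete, here is a toy sketch of such a group: point addition and double-and-add scalar multiplication on the tiny curve y^2 = x^3 + 7 over GF(17). The parameters are mine, chosen purely for illustration -- nothing here is cryptographically sized:

```python
# Toy elliptic curve y^2 = x^3 + A*x + B over GF(17). Illustrative only.
P_MOD = 17
A, B = 0, 7
O = None  # the point at infinity: the group's "0" element

def neg(pt):
    if pt is O:
        return O
    x, y = pt
    return (x, (-y) % P_MOD)

def add(p1, p2):
    """Group addition, using the standard chord-and-tangent formulas."""
    if p1 is O:
        return p2
    if p2 is O:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O  # p2 == -p1, so the sum is the identity
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def mul(n, pt):
    """Compute nG by double-and-add, i.e. adding G to itself n times."""
    if n < 0:
        return mul(-n, neg(pt))
    acc = O
    while n:
        if n & 1:
            acc = add(acc, pt)
        pt = add(pt, pt)
        n >>= 1
    return acc

G = (1, 5)  # on the curve: 5^2 = 1^3 + 7 (mod 17)
assert add(mul(2, G), mul(3, G)) == mul(5, G)  # (nG) + (mG) = (n + m)G
```

With parameters this small, the "discrete logarithm" n can of course be found by brute force; the assumption is only meaningful for curves with enormous group order.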

So...everywhere you say Curve25519(n, G), you really mean nG, where G is some element in the elliptic curve group defined by djb's curve25519 curve. Where you say Curve25519(A, B) where both A and B are group elements .... I don't know what you mean. Can you clarify what you are actually computing here? It is possible you are interpreting curve points as numbers without realizing it, in which case you should use a better programming language that has a type system. (In particular, C has a very weak type system and djb's code uses char* for everything, which is an asinine thing to do in public code with subtle operations. But possibly it compiles to faster code than it would if he'd used wrapper structs..) It is possible to convert a group element to a number, e.g. by choosing a binary encoding, hashing this, and interpreting the hash as a number.

Now, shared secrets are not really useful for building multisignature schemes. They let you build 1-of-N signatures (every party who has the secret is able to sign) but nothing stronger. The reason is that any party who knows the shared secret is able to use it to sign arbitrary messages. As an example of an actual multisig scheme, I will show how to construct an N-of-N multisig Schnorr signature. The semantics will be that as long as all parties agree to sign a specific message, they are able to form a signature on that message. However, no individual will see enough secret material to form signatures on her own, or even with the cooperation of (fewer than N) other members.

djb's ed25519 signature scheme is based on the standard Schnorr signature, but has some changes to allow efficient batch validation. I don't recall what these changes are (and don't have the paper with me on the bus that I'm typing this from), so I'm not sure if this is directly applicable .... but it is illustrative in any case. So here it is:

Suppose we have a public key P, and let G be a generator of some elliptic curve group, and let H be a hash function. A standard Schnorr signature is a pair (s, e) of numbers which satisfy the following relation: if R = eP + sG then e = H(m || R). It is possible to create such a signature if you know x such that xG = P (so x is the secret key here, and the "discrete logarithm assumption" above is why it is secret even though P is public), by the following mechanism: choose random secret k and compute R = kG; then set e = H(m||R) and s = k - xe. The signature is (s, e). Given some very strong assumptions on the hash function (that it is an opaque mystical source of uniformly random numbers, except that it returns the same output when given the same input), we can prove that any algorithm which forges Schnorr signatures can be extended to extract the secret key. So given such a hash function, forging Schnorr signatures is as hard as the discrete log problem.
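To illustrate the relation, here is a minimal sketch in Schnorr's original setting -- a prime-order subgroup of the integers mod p, written multiplicatively, so g^k plays the role of kG. The parameters are toy-sized and totally insecure, and all names are mine:

```python
import hashlib
import secrets

# Toy Schnorr group: g = 2 has prime order q = 11 in the integers mod p = 23.
P, Q, G_GEN = 23, 11, 2

def h(m: bytes, r: int) -> int:
    """e = H(m || R), reduced into Z_q."""
    return int.from_bytes(hashlib.sha256(m + str(r).encode()).digest(), "big") % Q

def sign(x: int, m: bytes):
    k = secrets.randbelow(Q - 1) + 1  # random secret nonce; reusing it leaks x!
    r = pow(G_GEN, k, P)              # R = kG (multiplicatively: g^k)
    e = h(m, r)
    s = (k - x * e) % Q
    return s, e

def verify(pub: int, m: bytes, sig) -> bool:
    s, e = sig
    r = (pow(G_GEN, s, P) * pow(pub, e, P)) % P  # R = sG + eP
    return e == h(m, r)

x = 7                    # secret key
pub = pow(G_GEN, x, P)   # public key P = xG
sig = sign(x, b"hello")
assert verify(pub, b"hello", sig)
```

The verification works because sG + eP = (k - xe)G + e(xG) = kG = R, so the verifier recomputes the same R that the signer hashed.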

Now, the question is: how can we make a 2-of-2 signature? One idea, as you suggested, is to use a shared secret: users have secret values x and y and form a private key from (a hash of) xyG. The problem with this is as mentioned: both parties know the whole secret, so they can both form signatures on their own. So this is really a 1-of-2 multisig scheme. To get 2-of-2, we need to be a bit more clever. Here is a protocol:
0. Suppose we have two keypairs (x, xG) and (y, yG) belonging to parties A and B. We will show how to construct a signature with the "2-of-2" key (x + y, (x + y)G).
1. Both parties choose secret random numbers. Call A's secret α and B's secret β. A sends αG to B, and B sends βG to A. Now both parties can compute R = αG + βG = (α + β)G, and they compute e = H(m||R).
2. A computes s' = α - xe and B computes s'' = β - ye, and they send these to each other. Then both compute s = s' + s''.

Now (s, e) is a signature of m on public key (x + y)G, which required both parties' cooperation to form, but neither party gained enough information to produce such a signature alone.
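The 2-of-2 protocol above, sketched in the same toy multiplicative group (again insecure toy parameters, names mine):

```python
import hashlib
import secrets

# Toy Schnorr group: g = 2 has prime order q = 11 in the integers mod p = 23.
P, Q, G_GEN = 23, 11, 2

def h(m: bytes, r: int) -> int:
    return int.from_bytes(hashlib.sha256(m + str(r).encode()).digest(), "big") % Q

# 0. Keypairs (x, xG) and (y, yG); the joint "2-of-2" key is (x + y)G.
x = secrets.randbelow(Q - 1) + 1
y = secrets.randbelow(Q - 1) + 1
joint_pub = (pow(G_GEN, x, P) * pow(G_GEN, y, P)) % P  # (x + y)G

m = b"message both parties agree to sign"

# 1. Each party picks a secret nonce and shares only its group element.
alpha = secrets.randbelow(Q - 1) + 1
beta = secrets.randbelow(Q - 1) + 1
r = (pow(G_GEN, alpha, P) * pow(G_GEN, beta, P)) % P   # R = (alpha + beta)G
e = h(m, r)

# 2. Partial signatures, exchanged and combined by addition.
s_a = (alpha - x * e) % Q
s_b = (beta - y * e) % Q
s = (s_a + s_b) % Q

# Standard Schnorr verification against the joint key succeeds.
r_check = (pow(G_GEN, s, P) * pow(joint_pub, e, P)) % P
assert e == h(m, r_check)
```

(A production scheme needs additional protections, e.g. against rogue-key attacks when the parties choose their keys adversarially; this only illustrates the arithmetic.)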


It's a worthwhile exercise to (a) verify that the standard Schnorr signature k - xe actually satisfies the relation (e = H(m||R) where R = sG + eP) that I claimed it did, and (b) so does the 2-of-2 version.


This isn't really an answer to your question, but I hope it sheds some light on things.

Andrew
66  Bitcoin / Development & Technical Discussion / Re: A -failed- solution to a collusion flaw in Deterministic Wallets on: October 11, 2014, 04:23:55 PM
You can mitigate this somewhat by using hardened keys, which cannot be derived except by the holder of the private key. These cannot be derived publicly, but still have the simplified key management of HD wallets. So if you only derive hardened keys from your master keys, and provide each of the derived keys to auditors, you can limit the damage caused by a single collusion attack (at the expense of needing to give more info to auditors).

dabura667, I'm not sure exactly what you're proposing, but adding extra nonces (e.g. these Rn's) will only get you so far. If you have N secret values which are combined linearly to produce keys, then N linearly independent equations (i.e. collusion by N parties) will be able to solve for them all. So your collusion resistance is only linear in your system complexity :/.
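A tiny sketch of that last point, with an entirely hypothetical scheme: two master secrets combined linearly means any two colluding key-holders can solve for both.

```python
# Hypothetical scheme: the derived key for index i is k_i = a + i*b (mod q),
# where a, b are the two master secrets. Modulus chosen arbitrarily.
Q = 2**31 - 1
a, b = 123456789, 987654321  # the master secrets

k1 = (a + 1 * b) % Q  # key handed to party 1
k2 = (a + 2 * b) % Q  # key handed to party 2

# Two colluders hold two linearly independent equations in two unknowns:
b_recovered = (k2 - k1) % Q
a_recovered = (k1 - b_recovered) % Q
assert (a_recovered, b_recovered) == (a, b)
```

With N secrets and N colluders the same thing happens via solving an NxN linear system, which is why adding more nonces only buys linear collusion resistance.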

Edit: Ah, ffe beat me to the punch on that last point.
67  Bitcoin / Development & Technical Discussion / Re: checkpoints.cpp and timestamps on: October 11, 2014, 04:16:12 PM
Checkpoints are really not necessary. They prevent a DoS attack where on first bootstrap somebody feeds you a bad chain and you waste a lot of time validating it before finding the good one. There is ongoing work right now on "headers first" (largely sipa's doing if you want somebody to thank) which will allow Bitcoin Core to check the proof-of-work before doing any validation, and therefore eliminate bad chains quickly.

They also make an isolation attack harder by forcing an isolation attacker to build fake blocks off the latest checkpoint, which has a high difficulty and will be costly. But this can be fixed without checkpoints by simply encoding a "minimum total work" as a weak sort of checkpoint, which doesn't favour any particular chain, just insists that enough work be done to create it.

We really want checkpoints to be dropped because they cause confusion (some other currencies use them as a stealth consensus centralization mechanism). In Bitcoin they have nothing to do with consensus, they are just a DoS mitigation, and there are certainly better solutions.

Edit: Also, please don't bump your post after only two hours, (and while I'm being a grump, please remove your garish signature).
68  Bitcoin / Bitcoin Technical Support / Re: [Q] How/where TXID is created on: October 10, 2014, 12:54:44 PM
He could generate a raw transaction and sign it multiple times (as this yields a different, yet valid, signature) until the hash of the signed transaction matched the required constraints.

I'm not sure if this is what you're saying, but with programs that use a randomized signing algorithm (Bitcoin Core is one), if you sign a tx multiple times it'll have a new txid each time. There is no need to tweak the transaction.

And as spin observes, even without any private keys you can tweak the transaction in a couple of ways, which is unfortunate but a fact of life when designing Bitcoin protocols.
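For what it's worth, the txid is just the double-SHA256 of the serialized transaction (conventionally displayed in reversed byte order), so any change to the signature bytes changes it. A sketch with dummy byte strings standing in for real serializations:

```python
import hashlib

def txid(raw_tx: bytes) -> str:
    """Double-SHA256 of the serialized tx, hex-encoded in reversed byte order."""
    digest = hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()
    return digest[::-1].hex()

# Two serializations differing only in the signature bytes (dummy data,
# not real transactions) get completely different txids.
tx_with_sig_a = b"rawtx|sig=a"
tx_with_sig_b = b"rawtx|sig=b"
assert txid(tx_with_sig_a) != txid(tx_with_sig_b)
```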
69  Bitcoin / Development & Technical Discussion / Re: Looking into forking the core wallet to use parallel computing to verify blocks on: October 09, 2014, 02:10:16 AM
Interested in this also. Maybe begin looking at how parallelisable sipa's libsecp256k1 is? ...which is basically the crypto verifying engine.

https://github.com/bitcoin/secp256k1

You should be using libsecp256k1 if you are doing verification. It is trivially parallelizable in the sense that you can verify many signatures at once...however, it does not do batch verification (it is not obvious that this is even possible for ECDSA since there is a sign ambiguity in the k value). There are some coming improvements but even today it is something like 6x faster than openssl.
70  Bitcoin / Development & Technical Discussion / Re: A vulnerability in olalonde's implementation of gmaxwell's proof-of-solvency on: October 08, 2014, 07:12:05 PM
You don't need to verify the integrity of nodeC.left.hash or nodeC.right.hash.
71  Bitcoin / Development & Technical Discussion / Re: A vulnerability in olalonde's implementation of gmaxwell's proof-of-solvency on: October 08, 2014, 05:33:06 PM
Yes.
72  Bitcoin / Development & Technical Discussion / Re: A Scalability Roadmap on: October 08, 2014, 05:08:34 PM
Ordinary people aren't supposed to run full nodes anyway  Tongue

This is absurd and false. Bitcoin is deliberately a publicly verifiable system.
73  Bitcoin / Development & Technical Discussion / Re: A vulnerability in olalonde's implementation of gmaxwell's proof-of-solvency on: October 08, 2014, 05:07:54 PM
The prover provides nodeC's child hashes. There is no need to verify that they are correct, that is the job of verifiers whose merkle paths go through nodeC.
74  Bitcoin / Development & Technical Discussion / Re: Looking into forking the core wallet to use parallel computing to verify blocks on: October 08, 2014, 05:07:03 PM
It's not so much that they are far away from each other (though they are), it's the fact that they can be far apart that hurts parallelizability, because you need access to a lot of data. The way to handle this is to maintain an "output set" of all unspent outputs: you add every output that each transaction creates and delete every output that each transaction references. Maintaining this data structure (which requires random insert/delete access in roughly equal and enormous proportions, which is a nasty data structure problem ... not sure you can do better than a hashmap) is why you have to go through the song and dance I described.
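A minimal sketch of such an output set as a hashmap, keyed by (txid, output index) -- the transaction model and names are mine, and a real set would store amounts and scripts, not just existence:

```python
# Simplified model: a tx is (txid, inputs, n_outputs), where each input
# references an earlier output as (prev_txid, prev_index).
utxo = {}  # (txid, index) -> True

def apply_tx(txid, inputs, n_outputs):
    for prev in inputs:
        if prev not in utxo:
            raise ValueError(f"missing or already-spent output {prev}")
        del utxo[prev]          # delete every output the tx references
    for i in range(n_outputs):
        utxo[(txid, i)] = True  # add every output the tx creates

apply_tx("coinbase", [], 2)
apply_tx("spend1", [("coinbase", 0)], 1)
assert ("coinbase", 0) not in utxo and ("spend1", 0) in utxo
```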

To see what's involved with verifying the blockchain, you should check out my high level overview and also check out the Bitcoin Wiki pages on transactions and Script. It's a bit involved.
75  Bitcoin / Development & Technical Discussion / Re: Looking into forking the core wallet to use parallel computing to verify blocks on: October 08, 2014, 03:49:15 PM
Hi Skeeter,


I've played around with parallelizing verification (though not as part of Bitcoin Core), and once past 100k blocks I can pin all eight cores of my CPU constantly. So there is definitely benefit for parallel verification using a fairly large number of cores. There are a few complications though. Here are my thoughts:

Transactions refer to earlier transactions' outputs, so it is difficult to verify out-of-order unless you can do fast lookups (so memory bandwidth is gonna wreck your performance). Further, within a block, transactions are allowed to refer to earlier transactions' outputs but not vice-versa. So to be consensus-correct you need to verify them in order. You can solve this by doing two passes: one to make sure that all transaction references are valid (this can be done in about 30 minutes on my CPU with my crappy Rust code), and the other to actually do the signature verifications. However, this is actually what Bitcoin Core is already doing for the majority of the chain (full signature verification could take a couple of days), so you wouldn't be gaining anything vs the Bitcoin Core behaviour.

One thing you could do (and what I do in my own code) is to break consensus-compatibility and verify all transactions in each block in parallel. For blocks with hundreds or thousands of transactions there is a big benefit here. The way to do it is:
1. Scan the block and blindly add all transaction outputs to your output set in a linear pass. This makes sure that backreferences will work, but will also allow forward references. (It occurred to me while typing this that it's easy to label the outputs with an index into the block, so you can still easily detect forward refs. Oops Smiley I will fix my code to do this.)
2. Split the transactions in the block into batches of N_TRANSACTIONS/N_CORES transactions, and pass each one to a new thread for verification. Each thread will need to be able to look up old outputs, so you will need to make sure your output set is immutable during this phase.
3. Scan the block and delete all spent outputs from your output set in a linear pass.

I expect that by using a parallel data structure for your output set you can merge some or all of these passes. There is room for experimentation and measurement here.
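The three passes might look like this with a thread pool (toy transaction model and all names mine; real validation checks scripts and amounts, not mere existence of the referenced outputs):

```python
from concurrent.futures import ThreadPoolExecutor

# Toy model: tx = (txid, inputs, n_outputs); inputs reference (txid, index).
# The utxo map may be pre-populated with prior blocks' outputs labelled -1.
def verify_block_parallel(txs, utxo, n_cores=4):
    # Pass 1: blindly add all outputs, labelled with the creating tx's
    # position in the block so forward references can still be detected.
    for pos, (txid, _inputs, n_outputs) in enumerate(txs):
        for i in range(n_outputs):
            utxo[(txid, i)] = pos

    # Pass 2: verify all transactions in parallel; the set is immutable here.
    def check(args):
        pos, (txid, inputs, _n) = args
        for prev in inputs:
            if prev not in utxo:
                return f"{txid}: missing output {prev}"
            if utxo[prev] >= pos:  # reference to a later tx in the same block
                return f"{txid}: forward reference to {prev}"
        return None

    with ThreadPoolExecutor(max_workers=n_cores) as pool:
        errors = [e for e in pool.map(check, enumerate(txs)) if e]

    # Pass 3: delete all spent outputs in a linear pass.
    for _txid, inputs, _n in txs:
        for prev in inputs:
            utxo.pop(prev, None)
    return errors

txs = [("a", [], 1), ("b", [("a", 0)], 1)]
assert verify_block_parallel(txs, {}) == []
```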

Andrew
76  Bitcoin / Development & Technical Discussion / Re: A vulnerability in olalonde's implementation of gmaxwell's proof-of-solvency on: October 08, 2014, 03:35:09 PM
The value of each node is part of the input to the hash for that node. The verifier can compute this himself, the prover does not need to make any claims about it.
77  Bitcoin / Development & Technical Discussion / Re: A vulnerability in olalonde's implementation of gmaxwell's proof-of-solvency on: October 08, 2014, 03:26:44 PM
The prover cannot, because nodeC.hash = H(4 || nodeC.left_child_hash || nodeC.right_child_hash) and the user is provided all three pieces of information in the hash (as part of nodeC).

Agreed, this is what I was saying before. Node E needs to require more than just node C from the prover; node E needs nodeC.left.hash, nodeC.left.value, nodeC.right.hash, and nodeC.right.value to verify node C.hash. But then you run into the same problem with nodeC.left and nodeC.right -- you need their hash preimages as well to verify their values and hashes.

You don't need their hash preimages. If these hashes are incorrect it will be detected by the users whose merkle paths go through these nodes.
78  Bitcoin / Development & Technical Discussion / Re: A vulnerability in olalonde's implementation of gmaxwell's proof-of-solvency on: October 08, 2014, 02:25:01 PM
The prover could falsify nodeC.val without changing nodeC.hash

The prover cannot, because nodeC.hash = H(4 || nodeC.left_child_hash || nodeC.right_child_hash) and the user is provided all three pieces of information in the hash (as part of nodeC). To change the value of nodeC the prover would need to find strings X, Y such that nodeC.hash = H(2 || X || Y), which is impossible if H is a collision-resistant hash function.
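Concretely, the commitment being discussed is a sum tree where each node hashes its value together with its children's hashes (encoding details below are my own; the structure follows the scheme under discussion):

```python
import hashlib

def node_hash(value: int, left_hash: bytes, right_hash: bytes) -> bytes:
    # nodeC.hash = H(value || left_child_hash || right_child_hash)
    return hashlib.sha256(
        value.to_bytes(8, "big") + left_hash + right_hash
    ).digest()

def leaf_hash(value: int, user_id: bytes) -> bytes:
    return hashlib.sha256(value.to_bytes(8, "big") + user_id).digest()

# Two users with balances 1 and 3; the root commits to the sum, 4.
h_a = leaf_hash(1, b"alice")
h_b = leaf_hash(3, b"bob")
root = node_hash(4, h_a, h_b)

# A prover claiming the root's value is 2 would need child hashes X, Y
# with H(2 || X || Y) == root -- i.e. a hash collision.
assert node_hash(2, h_a, h_b) != root
```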
79  Bitcoin / Development & Technical Discussion / Re: A vulnerability in olalonde's implementation of gmaxwell's proof-of-solvency on: October 08, 2014, 02:00:37 PM
It's still logarithmic. You only reveal the preimages of the neighboring hashes (which include amounts for the immediate children of everything in your merkle path), there is no need to recurse further.
80  Bitcoin / Development & Technical Discussion / Re: A vulnerability in olalonde's implementation of gmaxwell's proof-of-solvency on: October 08, 2014, 12:48:10 PM
You are making sense. What I'm saying is that in the iwilcox writeup, all the information is provided to the user to verify the hash preimages. Which does include user data for one other user, apparently, though I recall some discussion at the time gmaxwell proposed this about cleaning that up..