Bitcoin Forum
  Show Posts
Pages: « 1 [2] 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 »
21  Bitcoin / Development & Technical Discussion / Re: Dual SNARK: How to make ZKP with trusted initialization trustless in some cases. on: March 16, 2014, 08:56:08 AM
For example, say I want to pay you conditional on you publishing a solution to a— say— Sudoku puzzle. A script running under an efficient proof of knowledge system could verify an encrypted solution, and because it's zero-knowledge miners couldn't come along and replace the outputs like in a plain hash-locked transaction. You could use a CRS snark here where the payer computes the trusted initialization and then the payee cannot cheat— but the fact that the payer initialized it means the payer could double-spend race the redemption and get both the solution and their coin back.

Why do you say "because it's zero-knowledge miners couldn't come along and replace the outputs like in a plain hash-locked transaction" ? If I understand this paragraph (and your last comment in this post) correctly, the payer has to specify a specific payee that can redeem the coins (rather than allowing anyone who knows a solution to the Sudoku puzzle to redeem the coins) ? If so, this can be done outside the snark, as we normally do in Bitcoin by requiring the ScriptSig input-script to provide a signature for a specified address/pubkey of the payee, in order to prevent miners from replacing the output. Is there any beneficial reason to enforce the payee's identity inside the snark, as you seem to imply here?
Also, if I understand correctly then the protocol here isn't non-interactive, i.e. the payee sends to the payer via a private channel an encrypted solution s0 encrypted under symmetric key k0, and then the payer broadcasts the snark transaction that should reveal (in the clear) k0, so that only the payer will know the solution?

In a two-party trade, just have both parties compute their own trusted initializations, then require the script to provide proofs under each of them. Then so long as one side isn't cheating, the transaction will behave faithfully.

If my understanding above wasn't way off, then isn't it enough here to require the payee to provide his digital signature (in addition to the proof that passes the snark verification)? Because the payer cannot produce a signature for the corresponding pubkey of the payee, he cannot double-spend by providing a false proof (which he can produce because he knows the snark initialization), at least until some nLockTime expires, at which point he'd spend the output via a transaction that the payee already signed for him.


It seems to me that for non-interactive proofs (as in CoinWitness), the need to avoid a trusted initialization cannot be overcome. And on the other hand, trusted initialization isn't really an issue in interactive ZK proof scenarios? But please elaborate on any observations that I could be missing here...
22  Bitcoin / Development & Technical Discussion / Re: Paper discusses solution to transaction malleability on: March 06, 2014, 04:42:16 PM
I'll reply to myself, because I discussed it with Bitcoin devs at #bitcoin-wizards (freenode IRC), so they shouldn't waste their time replying here too.

The issue with that simple suggestion indeed appears to be along the lines of what I suspected it to be. From the point of view of programmers who are responsible for deploying the protocol, it may still make sense to simplify and use only NTXID, i.e. to compromise and lose some of the expressive power that Bitcoin scripts allow. There could also be other options, such as attaching the signature (for the implicit message of the entire transaction) outside of the ScriptSig input-script.

Fortunately, we can indeed get rid of transaction malleability with a softfork that keeps the current scripts infrastructure in place. The thing that I missed about NOOP operations w.r.t. malleability is that the ScriptSig input-script (which doesn't get signed) doesn't really need to contain executable opcodes, only data. In other words, if the user can redeem an unspent output by providing a witness w that satisfies a circuit (script) phi(w)==true, then it is unnecessarily general to allow w to be an executable script in itself. If I understand correctly, this sensible restriction is specified as new rule #4 at https://gist.github.com/sipa/8907691
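That push-only restriction is easy to state in code. Here's a minimal sketch (my own toy parser over raw script bytes, not Bitcoin Core's actual `IsPushOnly` implementation):

```python
def is_push_only(script: bytes) -> bool:
    """Return True if the script consists solely of data pushes
    (the restriction that makes a scriptSig non-executable)."""
    i = 0
    while i < len(script):
        op = script[i]
        i += 1
        if op <= 0x4b:            # direct push of `op` bytes (0x00 = empty push)
            i += op
        elif op == 0x4c:          # OP_PUSHDATA1: 1-byte length follows
            if i >= len(script):
                return False
            i += 1 + script[i]
        elif op == 0x4d:          # OP_PUSHDATA2: 2-byte length follows
            if i + 1 >= len(script):
                return False
            i += 2 + int.from_bytes(script[i:i + 2], "little")
        elif op == 0x4e:          # OP_PUSHDATA4: 4-byte length follows
            if i + 3 >= len(script):
                return False
            i += 4 + int.from_bytes(script[i:i + 4], "little")
        elif 0x4f <= op <= 0x60 and op != 0x50:   # OP_1NEGATE, OP_1..OP_16
            pass
        else:                     # any executable opcode, e.g. OP_DUP (0x76)
            return False
    return i == len(script)       # also rejects pushes that overrun the script
```

Under this rule, whatever a third party does to the witness data, it cannot smuggle in executable NOOP-style opcodes that leave the script's outcome unchanged while mutating the TXID.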
23  Bitcoin / Development & Technical Discussion / Re: Paper discusses solution to transaction malleability on: March 04, 2014, 04:20:08 PM
Given the recent interest in transaction malleability, I'd like to bump this old thread.

The suggestion in this paper (section 3.2) is to get rid completely of TXID in the Bitcoin protocol itself, and use only NTXID to reference a transaction (i.e. hash of the simplified transaction without the input scripts).
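To make the NTXID idea concrete, here's a toy sketch (invented serialization, nothing like Bitcoin's actual wire format) showing that two transactions differing only in their input scripts share an NTXID but not a TXID:

```python
import hashlib

def dhash(b: bytes) -> bytes:
    """Double SHA-256, as Bitcoin uses for transaction hashes."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def serialize(tx, strip_input_scripts: bool) -> bytes:
    # toy serialization: tx = {"inputs": [(prev_ref, script_sig)], "outputs": [...]}
    parts = []
    for prev_ref, script_sig in tx["inputs"]:
        parts.append(prev_ref)
        parts.append("" if strip_input_scripts else script_sig)
    parts.extend(tx["outputs"])
    return "|".join(parts).encode()

def txid(tx):   # hash over the full transaction, input scripts included
    return dhash(serialize(tx, strip_input_scripts=False)).hex()

def ntxid(tx):  # hash over the transaction with input scripts blanked out
    return dhash(serialize(tx, strip_input_scripts=True)).hex()
```

A malleated copy of a transaction (same inputs/outputs, mutated signature encoding) would get a different `txid` but the same `ntxid`, which is exactly why referencing by NTXID kills this class of malleability.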

I'm trying to understand what are the problems with this simple suggestion.
What I can see is that we would lose some of the expressive power that Bitcoin scripts currently allow. For example, suppose that Alice can redeem an unspent output whose script is of the form (if w1 then true, else if w2 then true, else false), meaning that there can be two different witnesses w1 and w2 (for example preimages of hashed values H(w1),H(w2)) that Alice can provide to redeem the coins. So when Alice broadcasts a transaction that redeems that unspent output, and her transaction is added to the blockchain, the blockchain NTXID hash wouldn't express whether Alice revealed w1 or w2. Therefore, if Bob and Carol have some contract that says "if Alice knows w1 then Bob gets 5 BTC from Carol, and if Alice knows w2 then Carol gets 5 BTC from Bob", then Bob and Carol wouldn't be able to rely on the NTXID in the blockchain in order to settle their contract (there would be plausible deniability that Alice broadcasted the other witness).

Is that the only kind of problem, or are there much more immediate/crucial problems that I failed to see? (other than making sure that generation transactions have a unique NTXID, which is mentioned in that paper). If this was discussed before, please just give me a link to the relevant info (I searched but didn't find much).


Also, a different question: would it be possible to eliminate all the mutations (like subtracting the S value of the ECDSA signature from secp256k1_order) in nonstandard scripts, or only in standard scripts? It seems to me that nonstandard scripts can always simulate NOOP operations, therefore malleability is inherent? If that's the case, maybe there should be some sort of updated standard-v2 scripts that are still restricted but more general than the current standard scripts, where these standard-v2 scripts cannot be mutated?
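For reference, the S-value mutation mentioned above works because both s and n-s verify against the same (r, message, pubkey); requiring the canonical "low-S" form removes that particular vector. A sketch (the constant is the secp256k1 group order):

```python
# secp256k1 group order n
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFB9FF0C01475B4B

def normalize_s(s: int) -> int:
    """Map an ECDSA S value to its canonical low-S form. Since s and
    N - s are both valid for the same signature, requiring s <= N/2
    leaves exactly one acceptable encoding per signature."""
    return N - s if s > N // 2 else s
```

A consensus or standardness rule that rejects any signature whose S exceeds N/2 means a third party can no longer flip S to change the TXID without invalidating the signature.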
24  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Trasaction Processing on: January 06, 2014, 08:57:21 AM
Most mining now is done in certain hot spots, such as the big mine of Iceland [1]. Well, now you have the effect that any other mining is more profitable in Iceland, and miners should move to Iceland because delays are shorter and the heaviest subtree is located there. Simply by being located there you are nearest to the heaviest subtree (assuming that this miner is the biggest). That would introduce the effect that mining would quickly be concentrated in one or two places, when in fact we want the exact opposite.

Yes, if some miner nodes have better network connectivity to large portions of the total hashpower, then a shorter blocktime will benefit them more. The analysis in the GHOST paper doesn't take into account scenarios in which network connectivity among the miner nodes is unequal.
25  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Trasaction Processing on: January 05, 2014, 11:42:30 PM
In practice, this isn't really a problem, because the 2 rules almost always give the same answer.

If this was true then using GHOST with all its added complexities would be pointless.
26  Bitcoin / Development & Technical Discussion / Re: My proposal for GHOST protocol on: January 05, 2014, 11:29:17 PM
If chain selection would depend on other nodes' proof of work while they are still working on the current block, you get an infinite loop. You can't transmit information instantly.

Huh? GHOST isn't different from Bitcoin in this regard. Each miner node that follows the Bitcoin protocol tries to extend the heaviest continuation of the block history that it is aware of (by using PoW to generate a valid continuation block), and if another node transmits to it an even heavier continuation of the history, then this miner node switches and tries to extend this heavier continuation. With GHOST it's exactly the same, except that it uses another rule to determine which continuation is the heaviest. Intuitively you can think of GHOST as exploiting the PoW of orphans that other (hopefully honest) miners generated, instead of giving that orphaned PoW zero weight, i.e. discarding it; but it's true that GHOST nodes need to maintain more complex data structures than Bitcoin nodes. Your objections are incoherent; could you please be clearer?
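A toy sketch of the difference between the two selection rules, using block count as a stand-in for accumulated PoW (i.e. assuming constant difficulty; the paper's rule weighs actual work, and a real node would memoize subtree weights rather than recompute them):

```python
def subtree_size(children, b):
    """Total number of blocks in b's subtree (stand-in for subtree PoW)."""
    return 1 + sum(subtree_size(children, c) for c in children.get(b, []))

def ghost_best_leaf(children, root):
    """GHOST: at each fork, descend into the child with the heaviest subtree."""
    node = root
    while children.get(node):
        node = max(children[node], key=lambda c: subtree_size(children, c))
    return node

def chain_height(children, b):
    """Height of the tallest chain below b (stand-in for most-work chain)."""
    return 1 + max((chain_height(children, c) for c in children.get(b, [])), default=0)

def longest_chain_leaf(children, root):
    """Bitcoin's rule (constant difficulty): descend toward the longest chain."""
    node = root
    while children.get(node):
        node = max(children[node], key=lambda c: chain_height(children, c))
    return node
```

With a tree where one branch is a long thin chain and the other is a short bushy one, the two rules pick different tips, which is exactly the case where orphaned PoW changes the outcome.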

Edit:
I see that you received answers in the other thread too: https://bitcointalk.org/index.php?topic=359582.msg4333993#msg4333993
Maybe your objection is that orphan solved blocks wouldn't be broadcasted? Honest nodes who follow the GHOST protocol will broadcast the orphan solved blocks to their peers, and rational self-interested miners will also broadcast orphan solved blocks in the heaviest subtree that they're working on, to increase the probability that their PoW will not go to waste.

Everything in the GHOST paper just doesn't make any sense if you think about it. I mean it just claims it "proves" it solves the 50% attack,

That isn't what it claims, it claims that GHOST can support a higher transaction volume and remain secure against attackers that have less than 50% of the hashpower.
27  Bitcoin / Development & Technical Discussion / Re: Deterministic wallets on: January 04, 2014, 02:33:17 AM
Because of the (mod n) operation, it looks to me like any possible value of IL would be valid.

The issue isn't validity, it's uniformity, i.e. we wouldn't want some privkeys to be more likely than others.
Please see posts #220 and #226 of this thread.
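For illustration, here's a sketch of the two derivation approaches on bare scalars (toy only, not the actual BIP32 HMAC construction): reducing IL mod n silently makes a small range of values twice as likely, while rejecting out-of-range IL keeps every accepted key exactly uniform:

```python
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFB9FF0C01475B4B  # secp256k1 order

def derive_scalar_mod(il: int, parent: int) -> int:
    """Naive approach: always reduce mod N. IL values in [0, 2**256 - N)
    map onto the same residues twice, so the child key distribution is
    (very slightly) non-uniform."""
    return (il + parent) % N

def derive_scalar_reject(il: int, parent: int):
    """Rejection approach: IL outside [1, N-1] (or a zero child key) is
    rejected instead of reduced, so every accepted key is exactly uniform;
    the caller simply retries with the next child index."""
    if il == 0 or il >= N:
        return None
    child = (il + parent) % N
    return child if child != 0 else None
```

Since N is extremely close to 2**256, the bias of the mod approach is astronomically small; the rejection rule just removes it entirely at the cost of a negligibly rare retry.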
28  Alternate cryptocurrencies / Altcoin Discussion / Re: Anon136's NXT Giveaway Thread (weighted by forum notoriety) on: December 27, 2013, 01:33:14 AM
9750296245072558888
thanks
29  Alternate cryptocurrencies / Altcoin Discussion / Re: [LTC] Changing the litecoin Proof of Work function to avoid ASIC mining? on: December 18, 2013, 09:47:35 AM
No, because I want LTC to strengthen (higher hashrate leads to stronger network).

This statement isn't necessarily true; the correct statement is that higher decentralized hashrate leads to a stronger network.
Here's a comment by coblee on that: https://bitcointalk.org/index.php?topic=165015.msg1723744#msg1723744
You can see some more info at https://litecoin.info/Comparison_between_Litecoin_and_Bitcoin#SHA256_mining_vs_scrypt_mining
30  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Trasaction Processing on: December 18, 2013, 09:40:32 AM
F(prev_utxo_tree_hash, transaction set for current block) = utxo_tree_hash

I meant that it'd probably have less complexity to do: F(prev_utxo_tree_hash, transaction set for current block, utxo_tree_hash) == 1
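A toy sketch of that boolean-output form (invented hash commitment and transaction format; the point is that the claimed post-state hash is passed in as an input and merely checked, rather than produced as a long output):

```python
import hashlib

def utxo_hash(utxo_set) -> str:
    """Toy commitment to a UTXO set (a real system would use a Merkle tree)."""
    return hashlib.sha256(repr(sorted(utxo_set)).encode()).hexdigest()

def apply_block(prev_utxo, txs):
    """Toy state transition: each tx spends one output and creates one."""
    utxo = set(prev_utxo)
    for spent, created in txs:
        assert spent in utxo          # reject double-spends
        utxo.remove(spent)
        utxo.add(created)
    return utxo

def F(prev_hash, txs, claimed_hash, prev_utxo_witness) -> int:
    """Boolean-output relation: the full UTXO set is a witness, only the
    two hashes and the tx set are public; the circuit checks rather than
    computes the post-state commitment."""
    if utxo_hash(prev_utxo_witness) != prev_hash:
        return 0
    return 1 if utxo_hash(apply_block(prev_utxo_witness, txs)) == claimed_hash else 0
```

Keeping the circuit's output to a single bit and committing to the state via a hash input is what keeps the proof size and verification cost independent of the size of the UTXO set.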

It's a great idea, but what does it have to do with zero-knowledge?

I suppose it's partially zero-knowledge as you wouldn't need all of the inputs to the program. I did read somewhere that the SNARK programs can have public inputs as well as arbitrary inputs which are unknown to verifiers. Is that right? Some details would be hidden to the verifier, such as the transactions.

Your attitude is too cavalier... Just because you weren't supposed to know the non-public inputs doesn't mean that those inputs don't leak when you examine the SNARK proof; you need a zero-knowledge proof to guarantee that nothing leaks besides the output.
But for the last time, why do you care whether the proof is zero-knowledge or not in this context?
31  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Trasaction Processing on: December 10, 2013, 08:41:58 PM
The output of the validation could be the unspent outputs

A long output will blow up the complexity; the output can be boolean, where the hash of the UTXO set is an input.

Why is this not a good idea?

It's a great idea, but what does it have to do with zero-knowledge?

How would anyone actually start developing a crypto-currency that worked in such a way? It all seems very theoretical to me.

When an efficient SNARK implementation is available, there's a good chance that the Bitcoin devs will integrate this option.
32  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Trasaction Processing on: December 10, 2013, 12:44:16 AM
Maybe. The proof needs to be verifiable without any blockchain data. Regular SNARKS can do this?

I was thinking about this myself. Could you use a zero-knowledge proof to validate the block-chain without needing the actual block-chain? I don't know very much about it at all.

The answer is yes, but why do you guys keep resurrecting zero-knowledge back into this discussion?

Here's a talk from the 2013 Bitcoin conference by my PhD supervisor about it: http://www.youtube.com/watch?v=YRcPReUpkcU

You can think of it as some specified C or assembler source code having a similar role to that of an ECDSA pubkey, and a succinct form of an execution of this source code having a similar role to that of an ECDSA signature. Given this "pubkey", everyone can verify that the "signature" indeed corresponds to a valid execution of that C/assembler program and that this execution terminated with the required output. The probability that an invalid "signature" will pass this verification algorithm is negligible. So in order to "compress" the blockchain, we specify the C/assembler code that checks the blocks from genesis until a certain checkpoint block, and compute a "signature" for an execution of this computation, so that nodes can verify with zero-trust that this checkpoint is valid without downloading all the blocks since genesis, and without the need to fetch any blockchain data (to answer maaku's question). Saying that this "signature"/proof is zero-knowledge just means that it doesn't reveal any additional information besides the single bit that says whether in this execution all the blockchain blocks indeed passed validation (an example of additional info is, say, that this "signature"/proof leaks that Alice sent Bob 5 coins at some block). Why do you guys care about zero-knowledge in this regard?
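The interface of that pubkey/signature analogy might look like the following toy stand-in. To be clear, this has no cryptographic soundness whatsoever (the "proof" is just a hash the verifier recomputes); it only illustrates that verification takes the program, the checkpoint, and the succinct proof, and never touches the blocks themselves:

```python
import hashlib

def toy_prove(program, blocks):
    """Prover: runs the full validation program over all blocks (the
    expensive part) and emits a checkpoint plus a short 'signature'."""
    assert all(program(b) for b in blocks)
    checkpoint = hashlib.sha256(repr(blocks[-1]).encode()).hexdigest()
    proof = hashlib.sha256((program.__name__ + checkpoint).encode()).hexdigest()
    return checkpoint, proof

def toy_verify(program, checkpoint, proof) -> bool:
    """Verifier: cost is constant, independent of how many blocks the
    prover validated; the blocks never appear here."""
    expected = hashlib.sha256((program.__name__ + checkpoint).encode()).hexdigest()
    return proof == expected
```

In a real SNARK the proof additionally cannot be forged without actually executing the program, which is the entire point; the toy above only mirrors the shape of the API.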
33  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Trasaction Processing on: December 09, 2013, 07:49:08 PM
I think the point of being zero knowledge is that you don't have to download the blockchain, no?

Maybe you're confusing zero-trust with zero-knowledge?
34  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Trasaction Processing on: December 09, 2013, 04:40:16 PM
Guys, what you're talking about doesn't benefit from zero-knowledge, please google "succinct non-interactive argument of knowledge" or "probabilistically checkable proof", i.e. you'd like to have SNARK and not necessarily zk-SNARK here.
35  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Trasaction Processing on: December 09, 2013, 03:06:30 PM
jtimon, why zero-knowledge? And relying on a single validator is bad, because that validator could deny txns.
36  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Trasaction Processing on: December 07, 2013, 11:29:45 PM
There is no possibility for non-convergence. The most-work tree eventually wins out. Always.

There could be an extreme scenario where two fractions of the network work on different trees, and each fraction uses an anti-DoS rule that rejects blocks from the tree of the other fraction unless they arrive in a batch that proves that the PoW of the competing tree wins; but because of propagation lag, each fraction solves more PoW blocks in its own tree, so by the time the batch arrives from the other tree, that batch is already losing and is therefore rejected. No?

Iddo, a collapse has to occur. This is exactly the same reasoning as in Theorem 8.6 in the paper: block construction is stochastic, so the two forks are not synchronized perfectly even if they have exactly the same amount of hash power supporting them.  The differences in block creation times drift apart until they grow larger than the time it takes to send the message about the total weight in the branch. Thus, a collapse eventually occurs.

Right, apologies, I agree that this will eventually re-converge. But there still could be an interesting question here regarding whether it takes a long time for the two fractions to re-converge, depending on the anti-DoS rules that they deploy.

With anti-DoS it's highly unlikely that you get into this sort of fork in the first place: you do accept all blocks created, say, in the past month or so (difficulty is high enough that they are not really a DoS attack). A fork that breaks DoS rules will only form if 1) an attacker manages to keep up with the network's rate for a whole month (has to be a 50% attack) 2) the network is disconnected for over a month. Even in these cases, the forks resolve rather quickly.

Besides, another heuristic can also help here: accept alternative branches into the tree even if they have 80% of the total weight of your current branch. This is still enough to stop a DoS, and certainly enough to make DoS forks collapse instantly.

Yes, those can be sensible anti-DoS rules. But if the node accepts all blocks in the past month, we may also need a rule to delete the old blocks in the losing subtrees and thereby reclaim disk space. As Andrew Miller said in #bitcoin-wizards, in terms of incentives there's a tradeoff here since keeping some orphans around will potentially save on future bandwidth, but at the cost of storage now.
37  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Trasaction Processing on: December 07, 2013, 10:09:29 PM
2) This is a local-only change that does not require consensus. It's okay for light nodes to still follow the most-work chain. What this change does is provide miners specifically with a higher chance of ending up on the final chain, by better estimating which fork has the most hash power behind it.

Not sure why you call it "local-only change", because as we are discussing here, it appears that there's a risk for netsplits that don't re-converge, especially if different nodes will follow different rules.
There is no possibility for non-convergence. The most-work tree eventually wins out. Always.

There could be an extreme scenario where two fractions of the network work on different trees, and each fraction uses an anti-DoS rule that rejects blocks from the tree of the other fraction unless they arrive in a batch that proves that the PoW of the competing tree wins; but because of propagation lag, each fraction solves more PoW blocks in its own tree, so by the time the batch arrives from the other tree, that batch is already losing and is therefore rejected. No?
38  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Trasaction Processing on: December 07, 2013, 09:21:48 PM
iddo, I think there are two things to point out with respect to that wizards' conversation (where we didn't really reach consensus):

This is true, so far no one claimed that this chain selection rule cannot withstand DoS (when checkpoints are removed), only that it's unclear whether it can withstand DoS.

1) The DoS danger can be avoided with some trivial heuristics, for example: drop or ignore orphaned blocks with less than factor X of the difficulty of the currently accepted best block, and drop blocks furthest from the current best block when orphan storage exceeds Y megabytes/gigabytes. In reality there is a spectrum of possibilities between nodes having omniscient knowledge about all forks (pure GHOST) and nodes blindly following the most-work chain (current Bitcoin). Even a heuristic-limited, DoS-safe version somewhere in the middle of that spectrum would be an improvement over today.

Those heuristics are trivial; whether they work is less trivial :) It seems that when you apply such heuristics to the GHOST chain selection rule, it's more sensitive (compared to Bitcoin) to netsplits/convergence issues, and/or to communication blowup, as nodes might need to negotiate with their peers regarding which missing solved blocks should be re-transmitted. This will of course be easier to discuss with a specific proposal for anti-DoS rules, but it's probably difficult to design the best rules on paper and have a theoretical proof that they work; therefore (as mentioned in #bitcoin-wizards) the thing that we really need is to do simulations.
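For concreteness, the quoted heuristics could look something like this sketch (the field names, X, and Y are illustrative choices, not a tested proposal):

```python
def prune_orphans(orphans, best_difficulty, min_fraction=0.5, max_bytes=100 * 2**20):
    """orphans: list of dicts {"difficulty": ..., "distance": ..., "size": ...}.
    Heuristic 1: drop orphans below min_fraction (factor X) of the best
    block's difficulty. Heuristic 2: once storage exceeds max_bytes (Y),
    drop the orphans furthest from the current best block first."""
    kept = [o for o in orphans if o["difficulty"] >= min_fraction * best_difficulty]
    kept.sort(key=lambda o: o["distance"])   # nearest to the best block first
    total, out = 0, []
    for o in kept:
        if total + o["size"] > max_bytes:
            break                            # everything further away is dropped
        total += o["size"]
        out.append(o)
    return out
```

The incentive tradeoff mentioned below applies directly to the Y parameter: a larger quota spends disk now to potentially save re-transmission bandwidth later.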

2) This is a local-only change that does not require consensus. It's okay for light nodes to still follow the most-work chain. What this change does is provide miners specifically with a higher chance of ending up on the final chain, by better estimating which fork has the most hash power behind it.

Not sure why you call it "local-only change", because as we are discussing here, it appears that there's a risk for netsplits that don't re-converge, especially if different nodes will follow different rules.

3) What this patch actually accomplishes is a little obscured by the hyperbole of the OP. There are alt coins which had a constant difficulty, and those which had incredibly low block intervals. I'm speaking in the past tense: they basically imploded when their network size increased and the propagation time approached the block interval. This patch wouldn't fix that problem (because eventually you run out of bandwidth), but it does let you get a lot closer to the network's block propagation time before witnessing catastrophic consequences. This is important because when we increase the maximum transaction processing rate of bitcoin, it will be by either dropping the block interval or increasing the maximum block size (both of which are hard forks), and how far we can safely do that is bounded by this limit, among others.

Agreed.
39  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Trasaction Processing on: December 07, 2013, 05:10:02 PM
After some discussion at #bitcoin-wizards (freenode IRC) with Aviv Zohar and several Bitcoin devs, it seems to me that there is an advantage to the current Bitcoin rule for selecting the best chain, over the proposed rule in this new paper.

First note that there is ongoing research by Bitcoin devs to enable lite nodes to verify whether txns are valid (link1, link2, link3), which may also allow us to remove the checkpoints in the client because new nodes can be bootstrapped without having to verify the signatures in the entire history, hence the Bitcoin system will be more trust-free.

Consider an attacker who creates many difficulty-1 orphans that extend the genesis block (which requires relatively low PoW effort) and thereby DoS-attacks the other nodes by bloating their local copy of the blockchain with these many orphans. Note that headers-first sync will mitigate this attack somewhat, because the nodes will only fetch and keep the headers of the orphans, but it wouldn't fully mitigate this attack. Right now, Bitcoin is protected from this attack because of the checkpoints, as clients will reject orphans at genesis because of the later checkpoints.

If the checkpoints are removed, I think that Bitcoin can still have anti-DoS mechanisms against this kind of attack. For example, the most naive anti-DoS protection would be for the node to have some quota and not accept more than a certain number of forks that extend an (old) block, with a small risk that it may need to request those rejected blocks later from peers, and waste communication.

So, under the assumption that eliminating checkpoints to have a zero-trust system is a worthy goal, the question is whether we can have anti-DoS protection with the proposed rule for best chain selection of this new paper. Depending on how exactly the nodes will reject blocks that are suspected to be attack-driven, I think that there is a danger that it could cause netsplits and that the network won't re-converge. Even if the network will indeed be convergent, the communication complexity could be significantly greater, as nodes will frequently need to prove to other nodes (who rejected or deleted blocks due to anti-DoS protection) that they have a subtree with a bigger weight?

I think that there are pros/cons to this new proposed rule versus the Bitcoin rule (the complexity needed for lite nodes is probably another disadvantage, and of course there are also significant advantages as discussed in the paper). At this particular moment in time I'd feel more secure to exchange my fiat money for bitcoins than for a cryptocoin that implements the newly proposed chain selection rule :) But I'm of course open to the possibility that the new rule will lead to a system that's more secure overall.

40  Alternate cryptocurrencies / Altcoin Discussion / Re: [LTC] Changing the litecoin Proof of Work function to avoid ASIC mining? on: December 06, 2013, 05:48:56 PM
I suspect that gmaxwell started this discussion in order to let LTC fail. Changing the hash algo is very dangerous and is the easiest way to cause a hard fork. Enough altcoins have died because of hard forks.

It indeed can be dangerous if the network topology is truly decentralized, but not so much when there are no more than several dozen mining pools. We already have a precedent: the transition can be done safely in the same way that the BIP16 transition was done, i.e. first a majority of the miners vote that they are willing to accept the change in the future, and only after there's a significant majority do they switch to a client that actually implements this change.
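The BIP16-style rollout described above could be sketched as a simple signalling threshold (the threshold and window values here are illustrative, not the historical BIP16 parameters):

```python
def upgrade_activates(recent_blocks, threshold=0.55, window=1000):
    """recent_blocks: iterable of booleans, True if the block signalled
    support for the new rule (e.g. via a marker in its coinbase).
    The change activates only once the signalling fraction within the
    most recent `window` blocks reaches `threshold`."""
    window_blocks = list(recent_blocks)[-window:]
    return sum(window_blocks) / len(window_blocks) >= threshold
```

Counting votes over a trailing window of blocks means the measurement is weighted by hashpower, which is exactly why only a few dozen pools need to coordinate for the switch to be safe.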