Bitcoin Forum

Bitcoin => Development & Technical Discussion => Topic started by: gmaxwell on August 19, 2013, 05:53:55 AM



Title: Really Really ultimate blockchain compression: CoinWitness
Post by: gmaxwell on August 19, 2013, 05:53:55 AM
In this message I offer a brief start of a proposal for improving the scalability, flexibility, privacy, and fungibility of Bitcoin.  The idea is based on bleeding-edge cryptography and would require a soft-fork to deploy—so it is not something which could be implemented immediately, but I believe it would be a useful area for further research.

In SNARKs for C: Verifying Program Executions Succinctly and in Zero Knowledge (http://www.scipr-lab.org/) (referred to as SCIP below), Eli Ben-Sasson et al. describe their work on highly efficient non-interactive proofs with zero-knowledge for the faithful execution of programs written in C. Eli also presented (http://www.youtube.com/watch?v=YRcPReUpkcU) at the Bitcoin conference.

The short layman's explanation of their work is that they've constructed a system where someone can run an arbitrary program inside a special environment and then publish a very compact and quickly-checkable proof which will convince anyone that 1) they ran the program faithfully (e.g., without modification or tampering) and 2) that the program "accepted" (e.g., exited with return true;) with a given set of public inputs and (optionally) additional non-public inputs. Because their system provides computational zero knowledge, the program's execution can also depend on any non-public inputs and the validator learns nothing about them beyond the fact that the program accepted.

The mathematics behind this are highly dense—starting with the surprising result (http://en.wikipedia.org/wiki/IP_%28complexity%29#PSPACE_is_a_subset_of_IP) from over twenty years ago that all programs can be converted into interactive proofs, they apply many layers of techniques to build a concrete system with performance that could actually be used, rather than just a proof that proofs of programs are possible.

I've proposed some interesting ideas on how to use this kind of technology with Bitcoin which don't require any changes to Bitcoin itself elsewhere (https://en.bitcoin.it/wiki/User:Gmaxwell/why_hash_locked). But if you add the technology directly to Bitcoin some more interesting things become possible.

One of the obvious applications of this idea is that it could replace script (https://en.bitcoin.it/wiki/Script) in Bitcoin. Instead of embedding the rules that govern an output inside the blockchain, you'd instead embed a proof that the rules were followed. Instead of everyone checking that a transaction was permitted to be spent, they'd instead check that you checked.

This is itself interesting, as it would make much more powerful scripts possible without burdening the network with their execution, but improvements to script aren't terribly exciting when the existing system is so far mostly unused. Additionally, the verification of SCIP proofs is somewhat slow compared to ECDSA (on the order of, say, several hundred times slower), and although it scales excellently most transactions are very simple with only a single hash and ECDSA operation.

Here I suggest one very interesting way this functionality could be used.

Let's imagine that my friends and I agree on a set of rules for a verifiable off-chain transaction system. By "verifiable", I mean that if its users record a transcript of the activity for a single coin, they could show the transcript to a third party, and the third party would be convinced that the system is behaving correctly and would know who the ultimate owner of the coin was.

Here are two example systems which would meet that criterion:

Anti-replay oracle based
Alice has a digital coin: coin={Value, Alice's pubkey, transcript of the coin's history}, and wants to assign it to Bob.
Bob tells Alice Hash(bob's pubkey)=HB.
Alice computes Hash(coin)==HC, and also Sign(HB)==SHB.
Alice contacts a trusted anti-replay oracle,  telling it {HC,SHB}.
The oracle returns Sign({HC,SHB}) if and only if it has never signed something which began with HC before.
Alice passes this information on to Bob, along with the coin. Bob checks the coin's history and is satisfied that he has a valid coin which has not been double-spent.

This can be continued on and on, passing a coin from party to party, producing an ever-growing transcript.  It can be trivially extended by using M of N anti-replay oracles, or other similar improvements. The anti-replay oracles this uses are trusted to not replay, but since they only deal with hashes they learn nothing about the transactions (or even that they're being used for a currency-like use—it might just be timestamping for all the oracle can tell). Such an oracle could be on trusted hardware, fidelity bonded, secured by a bounty txout that you can redeem if and only if you can publish a proof showing a replay, etc.
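The transfer steps above can be sketched as follows. This is a toy illustration with hypothetical names; hashes stand in for real signatures, purely to show the message flow and the oracle's no-replay rule:

```python
# Toy sketch of the anti-replay-oracle transfer protocol (hypothetical names;
# hash stand-ins for real signatures).
import hashlib

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p.encode())
    return h.hexdigest()

class Oracle:
    """Trusted anti-replay oracle: signs each prefix (HC) at most once."""
    def __init__(self, key):
        self.key = key
        self.seen = set()              # HC values already signed

    def sign_once(self, hc, shb):
        if hc in self.seen:            # refuse to replay
            return None
        self.seen.add(hc)
        return H(self.key, hc, shb)    # stand-in for Sign({HC, SHB})

# Alice -> Bob transfer:
oracle = Oracle("oracle-secret")
coin = "value=1;owner=alice_pub;history=..."
hb  = H("bob_pub")                     # Bob sends HB = Hash(bob's pubkey)
hc  = H(coin)                          # Alice computes HC = Hash(coin)
shb = H("alice_priv", hb)              # stand-in for Alice's Sign(HB)
sig = oracle.sign_once(hc, shb)
assert sig is not None                 # first spend of this coin succeeds

# A second spend of the same coin state (e.g., to Carol) is refused:
assert oracle.sign_once(hc, H("alice_priv", H("carol_pub"))) is None
```

Bob would then verify the oracle's signature and append {HB, SHB, sig} to the coin's transcript before accepting it.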

Blockchain based
Alice has a bitcoin which is represented by a colored coin in some other specific blockchain, perhaps a geographic-specific blockchain.
Alice passes it to Bob by transacting in the normal way.
Bob can form the verifiable transcript by grabbing every transaction along the colored coin's path along with the SPV fragments connecting them to the chain and some additional blockchain headers to show the transactions are adequately buried.

(I provided two examples here to make the point that the idea is completely general. Any other system with the verifiable property could be used: e.g., OpenTransactions, complicated ones involving voting by all participants, or even trivial ones that don't have double-spending prevention. They'd work best, though, if constructed for very efficient validation.)

The idea I'm calling CoinWitness is this:

You write down a small program which verifies the faithfulness of one of these transcripts for your chosen verifiable off-chain system. The program requires that the last transaction in the transcript is special in that it pays to a Bitcoin scriptPubKey/P2SH. The same address must also be provided as a public input to the program. We call this program a "witness" because it will witness the transcript and accept if and only if the transcript is valid.

You then use the SCIP proof system to convert the program into a verifying key.  When someone wants to create a Bitcoin in an off-chain system, they pay that coin to the hash of that verifying key. People then transact in the off-chain system as they wish. To be confident that the system works faithfully they could repeat the computationally-expensive verifying key generation process to confirm that it corresponds to the transaction rules they are expecting.

When a user of one of these coins wants to exit the system (to compact its history, to move to another system, to spend plain Bitcoins, or for any other reason), they form a final transaction paying to a Bitcoin address, run the witness on their transcript under SCIP, and produce a proof. They create a Bitcoin transaction redeeming the coin, providing the proof in their script (but not the transcript, which is kept private), and the Bitcoin network validates the proof and the transaction output. The public learns nothing about the intermediate transactions, improving fungibility. But unlike other ideas which improve fungibility, this one has the potential to improve Bitcoin's scalability, securely integrate new and innovative alternative transaction methods, and expand Bitcoin's zero-trust nature to more types of transactions.
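A toy sketch of what such a witness might look like for the anti-replay-oracle system. The transcript format and names are hypothetical, and hash comparisons stand in for the real signature checks that would run under the SCIP VM:

```python
# Hypothetical witness: replays the transcript, checks each hop was endorsed
# by the oracle exactly once, and accepts only if the final hop pays the
# Bitcoin scriptPubKey supplied as a public input. Signature checks are
# faked with hashes for illustration.
import hashlib

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p.encode())
    return h.hexdigest()

ORACLE_KEY = "oracle-secret"   # the oracle's key, baked into the witness

def witness(public_scriptpubkey, transcript):
    """Accept (True) iff the transcript is a valid chain of single-spend
    transfers ending in a payment to public_scriptpubkey."""
    seen = set()
    for hop in transcript:
        hc = H(hop["coin"])
        if hc in seen:                             # double spend in transcript
            return False
        seen.add(hc)
        expected = H(ORACLE_KEY, hc, hop["shb"])   # oracle's endorsement
        if hop["oracle_sig"] != expected:
            return False
    # The last hop must pay the Bitcoin destination given as a public input.
    return transcript[-1]["dest"] == public_scriptpubkey

# Toy transcript with one hop paying out to Bitcoin:
hop = {"coin": "value=1;owner=alice", "shb": H("sig-of-dest"),
       "dest": "OP_HASH160 <scripthash> OP_EQUAL"}
hop["oracle_sig"] = H(ORACLE_KEY, H(hop["coin"]), hop["shb"])
assert witness(hop["dest"], [hop]) is True
assert witness("some-other-script", [hop]) is False
```

The Bitcoin network would never see this program run; it would only check the compact SCIP proof that it ran and accepted.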

Open challenges:
This depends on new cryptographic techniques which may have security weaknesses and may have much room for performance improvement. Use of it requires adding the validation software to the Bitcoin network, which then fixes a particular proof format in stone.

The SCIP proof validation is slow by our standards. In the paper they give an example from their implementation on a 2.4 GHz host where validation of a proof (which is only 2576 bits long for 80-bit security) on a small public input (which is the case for CoinWitness) takes 100 ms. This is fast enough that it may actually be viable without further optimization, considering the potential for compression of multiple transactions into one.

SCIP proof construction is enormously slow by our standards. In an example given in their paper for a 100-instruction program which executes for 11001 cycles, proof generation takes 155 minutes. Proof time is basically linear with the number of operations, so bringing a coin with a long transcript back into the chain may be computationally prohibitive. However, since this is done off-chain, it at least puts the work with an interested party instead of externalizing it on the whole Bitcoin userbase. (And there likely is a lot of room to improve the performance with software engineering, in particular, the problem is very parallelizable)

The SCIP keys must have an upper bound on the number of operations performed by the program baked into them. This results in a maximum transcript length which must be committed to in advance.

I believe all these issues are surmountable, in time, barring any really exciting cryptographic breaks in the approach.

This CoinWitness idea is something of a less ambitious example of what Eli suggested in his talk—effectively running blockchain validation under SCIP to produce UTXO checkpoints that have cryptographic proof of their correctness. But instead of validating all transactions from a public blockchain, here I suggest validating small sequences of private transactions to avoid ever making them public at all and, in doing so, improve the scalability, flexibility, privacy, and fungibility of Bitcoin. (In fact, this idea may be trivially obvious to those working on the SCIP tools, but I expect it will blow some minds here.)



Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: gmaxwell on August 19, 2013, 05:58:57 AM
One cute note on the above is that in the blockchain-based off-chain example system, the "off-chain" blockchain could be colored _Bitcoins_. Using Bitcoin as the consensus system eliminates the scaling advantages and the flexibility in transaction rules, but it means you don't even have to assume an off-chain system if you just want the privacy benefits. The advantage is that by using Bitcoin itself, and making one of the public inputs to the witness be a hash of a block which is required to be in the chain, you can increase the security of the "off-chain" coin from SPV+headers-fragment security to full security.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: Mike Hearn on August 19, 2013, 09:36:23 AM
I wonder how this system compares to Pinocchio, which has similar capabilities.

   http://research.microsoft.com/apps/pubs/default.aspx?id=180286

However, my understanding is that Pinocchio's performance is significantly better.

One area where I'm interested in using this is digesting bearer tokens like e-Passports into less sensitive, app-specific proofs that are combined with key material. For instance, I'd like to be able to sign payment requests as "myself" so purchasers can prove they really bought something from Mike Hearn the legal entity. It provides certainty and avoids some kinds of fraud. But my e-Passport doesn't support signing things itself (it could, but it's an optional feature and the British government chose not to provide it, presumably for cost reasons). It boils down to a signed piece of data that is, by itself, quite useless.

But with provable computation, I could generate a private key, give the public key+signed e-Passport data as an input, the program checks the signatures on the e-Passport data to prove it's valid and then spits out its own kind of "certificate" containing some selected fields like name/date of birth/photo-hash plus my public key. The certificate is not a normal PKIX style certificate but rather is a proof that such a certificate was verified, along with the other inputs that were provided.
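A toy sketch of such a program. The names and field layout are hypothetical, and the issuer signature check is faked with a hash purely to show the shape of the computation: sensitive inputs go in, only the selected fields bound to a fresh key come out:

```python
# Hypothetical shape of the program run inside the provable computation: it
# takes the signed e-Passport data and a fresh public key as inputs, checks
# the issuer signature, and emits only selected fields bound to the new key.
# The signature check is faked with a hash for illustration.
import hashlib, json

def H(s):
    return hashlib.sha256(s.encode()).hexdigest()

GOV_KEY = "gov-signing-key"   # issuer key the program is compiled against

def derive_certificate(passport, gov_sig, user_pubkey):
    """Accept iff the passport data carries a valid issuer signature;
    emit a minimal certificate binding selected fields to user_pubkey."""
    if gov_sig != H(GOV_KEY + json.dumps(passport, sort_keys=True)):
        return None                      # program does not accept
    return {"name": passport["name"],
            "dob": passport["dob"],
            "photo_hash": passport["photo_hash"],
            "pubkey": user_pubkey}

passport = {"name": "Mike Hearn", "dob": "1984-01-01",
            "photo_hash": H("photo"), "mrz": "private-machine-readable-zone"}
sig = H(GOV_KEY + json.dumps(passport, sort_keys=True))
cert = derive_certificate(passport, sig, "user-pub")
assert cert is not None
assert "mrz" not in cert   # sensitive field never leaves the computation
```

The zero-knowledge proof that this program accepted, plus its output, would serve as the "certificate"; verifiers never see the raw passport data.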

At the moment this is too inefficient, as you'd have to implement an entire crypto library inside the provable computation. I contacted one of the Pinocchio researchers and they said they were planning to explore ways to do crypto-inside-crypto more efficiently.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: marcus_of_augustus on August 19, 2013, 01:30:32 PM
Quote
starting with the surprising result from over twenty years ago that all programs can be converted into interactive proofs,

Interesting ... are there constraints on the set of programs or is it really 'all programs'?

Quote
The idea I'm calling CoinWitness is this:

You write down a small program which verifies the faithfulness of one of these transcripts for your chosen verifiable off-chain system. The program requires that the last transaction in the transcript is special in that it pays to a Bitcoin scriptPubKey/P2SH. The same address must also be provided as a public input to the program. We call this program a "witness" because it will witness the transcript and accept if and only if the transcript is valid.

You then use the SCIP proof system to convert the program into a verifying key.  When someone wants to create a Bitcoin in an off-chain system, they pay that coin to the hash of that verifying key. People then transact in the off-chain system as they wish. To be confident that the system works faithfully they could repeat the computationally-expensive verifying key generation process to confirm that it corresponds to the transaction rules they are expecting.

When a user of one of these coins wants to exit the system (to compact its history, to move to another system, to spend plain Bitcoins, or for any other reason), they form a final transaction paying to a Bitcoin address, run the witness on their transcript under SCIP, and produce a proof. They create a Bitcoin transaction redeeming the coin, providing the proof in their script (but not the transcript, which is kept private), and the Bitcoin network validates the proof and the transaction output. The public learns nothing about the intermediate transactions, improving fungibility. But unlike other ideas which improve fungibility, this one has the potential to improve Bitcoin's scalability, securely integrate new and innovative alternative transaction methods, and expand Bitcoin's zero-trust nature to more types of transactions.

Wow, sounds good ... better than good.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: EmperorBob on August 19, 2013, 03:49:45 PM
Correct me if I'm wrong, but it's still possible to double spend the transcripted coin?

1. Alice sends Bob the coin using the off-chain system.
2. She also redeems it onto the chain. She can do this anytime before Bob does (or whoever he passes it on to).

Or is the idea that the redemption step still requires the participation of the off-chain system?
If so it's still superior to existing off-chain systems in that funds can only be frozen, not confiscated.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: iddo on August 19, 2013, 04:16:31 PM
When a user of one of these coins wants to exit the system (to compact its history, to move to another system, to spend plain Bitcoins, or for any other reason), they form a final transaction paying to a Bitcoin address, run the witness on their transcript under SCIP, and produce a proof. They create a Bitcoin transaction redeeming the coin, providing the proof in their script (but not the transcript, which is kept private), and the Bitcoin network validates the proof and the transaction output. The public learns nothing about the intermediate transactions, improving fungibility. But unlike other ideas which improve fungibility, this one has the potential to improve Bitcoin's scalability, securely integrate new and innovative alternative transaction methods, and expand Bitcoin's zero-trust nature to more types of transactions.

The user who can create the Bitcoin transaction that redeems the coin back into the Bitcoin network is the one who holds the privkey that corresponds to the pubkey at the top of the coin's transcript in the off-chain system? If I understand correctly then the Bitcoin nodes cannot be completely oblivious to the off-chain system. So can you please elaborate on what exactly do the Bitcoin nodes verify when a user exits the off-chain system? You've mentioned in the first paragraph that this can be achieved as a soft-fork to Bitcoin, are you sure about that?


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: Peter Todd on August 19, 2013, 04:32:03 PM
The user who can create the Bitcoin transaction that redeems the coin back into the Bitcoin network is the one who holds the privkey that corresponds to the pubkey at the top of the coin's transcript in the off-chain system? If I understand correctly then the Bitcoin nodes cannot be completely oblivious to the off-chain system. So can you please elaborate on what exactly do the Bitcoin nodes verify when a user exits the off-chain system? You've mentioned in the first paragraph that this can be achieved as a soft-fork to Bitcoin, are you sure about that?

Basically the idea is that in this case the funds are not spendable only by a privkey. Essentially the way it works is kinda like this:

  • Make a transaction that tells the whole world that some funds are now only spendable if someone proves a certain computer program was run.
  • Magic!
  • Make a proof that computer program was run. (also magic)
  • Show the rest of the Bitcoin world that proof, which shows why the funds are now allowed to move.

This is basically how Bitcoin already works... except normally to verify the computer program was run, currently all Bitcoin nodes actually run that program. With SCIP, they don't need to actually do that - just the proof that someone ran it is enough. Yes I know that sounds kinda crazy, but amazingly math actually lets you verify that someone ran a particular computer program honestly without actually running it yourself or seeing all the data it operated on.

Of course that program can be as simple as "Bob signed a message saying Alice deserves the funds now" or as complex as some multi-stage off-chain transaction thing where double-spends are prevented by a signing oracle that sees nothing more than some nonce values it timestamps.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: iddo on August 19, 2013, 04:35:06 PM
Correct me if I'm wrong, but it's still possible to double spend the transcripted coin?

1. Alice sends Bob the coin using the off-chain system.
2. She also redeems it onto the chain. She can do this anytime before Bob does (or whoever he passes it on to).

If I understand correctly, the transaction that redeems the coin back into the Bitcoin chain embeds inside it a required SCIP proof of a transcript specifying that the coin's latest history is that it is now spent back to the Bitcoin network. Therefore, Alice cannot also send this coin to Bob in the off-chain system, under the assumption that the double-spending prevention in the off-chain system is functional. But we'll wait for more info from gmaxwell...


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: iddo on August 19, 2013, 04:57:14 PM
The user who can create the Bitcoin transaction that redeems the coin back into the Bitcoin network is the one who holds the privkey that corresponds to the pubkey at the top of the coin's transcript in the off-chain system? If I understand correctly then the Bitcoin nodes cannot be completely oblivious to the off-chain system. So can you please elaborate on what exactly do the Bitcoin nodes verify when a user exits the off-chain system? You've mentioned in the first paragraph that this can be achieved as a soft-fork to Bitcoin, are you sure about that?

Basically the idea is that in this case the funds are not spendable only by a privkey. Essentially the way it works is kinda like this:

  • Make a transaction that tells the whole world that some funds are now only spendable if someone proves a certain computer program was run.
  • Magic!
  • Make a proof that computer program was run. (also magic)
  • Show the rest of the Bitcoin world that proof, which shows why the funds are now allowed to move.

This is basically how Bitcoin already works... except normally to verify the computer program was run, currently all Bitcoin nodes actually run that program. With SCIP, they don't need to actually do that - just the proof that someone ran it is enough. Yes I know that sounds kinda crazy, but amazingly math actually lets you verify that someone ran a particular computer program honestly without actually running it yourself or seeing all the data it operated on.

Of course that program can be as simple as "Bob signed a message saying Alice deserves the funds now" or as complex as some multi-stage off-chain transaction thing where double-spends are prevented by a signing oracle that sees nothing more than some nonce values it timestamps.

OK, but I was looking for more specific details. What do the Bitcoin nodes verify exactly when a user exits the off-chain system? If a Bitcoin user transfers a bitcoin to an off-chain system, then later only another user who now controls that coin in the off-chain system can transfer it back to the Bitcoin network, right? So if a previous owner of that coin in the off-chain system could have produced (at an earlier point in time) the transaction that redeems that coin back into the Bitcoin network, what prevents this previous owner from redeeming that coin now that he no longer owns it in the off-chain system?


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: gmaxwell on August 19, 2013, 05:36:34 PM
What do the Bitcoin nodes verify exactly when a user exits the off-chain system?
Exactly what they need to, no less, no more. Or to be more specific, what you want to know is what the witness verifies. The Bitcoin nodes only ever verify that the witness was satisfied.

If that sounds a little orbicular, it's because it depends on the details of the system in question. What I'm suggesting is, effectively, an application of a replacement script system; the users provide the script. In general you'd provably destroy the coins in the off-chain system, in a way which binds the destruction to the Bitcoin address you want to pay, or otherwise unbind them in some way visible to the off-chain users. The witness's job is to observe the transcript and be convinced by it.

For the anti-replay-oracle example, you'd make a final off-chain transaction with something like H('hello bitcoin!'||nonce||bitcoinaddress) as the destination, instead of a public key. This makes the coin forever unspendable in the off-chain system.

For the colored-coin example, you might do something similar: you could pay the coin to an unspendable address that was really some hash of the intended Bitcoin destination and a nonce, or you could just have the funds pass through a transaction that has a zero-value OP_RETURN {hash like above}, not discarding the coin at all, but signaling to all the other users of your colored coin (by convention established in the coding of your witness) that it has lost its color in order to move it back to Bitcoin. (This latter case means you would need to be using special colored-coin software for your intermediate transactions that watched for this happening, in order to avoid getting cheated.)

In either case the nonce would never be made public, for privacy's sake. It would be placed in the transcript, however, so that the witness could verify that you will be paying the correct Bitcoin destination, which you include as a public input so it can be checked against what you actually provided. (I suppose technically it would be a hash of the masked redemption transaction and sighash flags rather than an address, but... implementation details.)
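A toy sketch of this commitment scheme. The names are hypothetical, and a plain hash stands in for whatever the witness would actually verify; the point is that the off-chain system sees only an opaque destination, while the witness, given the private nonce, can confirm which Bitcoin destination was committed to:

```python
# Sketch of the redemption binding described above: the final off-chain
# "destination" is a hash commitment to the Bitcoin payout. The nonce is a
# private input to the witness; the Bitcoin address is a public input.
import hashlib, os

def H(b):
    return hashlib.sha256(b).digest()

TAG = b"hello bitcoin!"

def exit_destination(bitcoin_address, nonce):
    """What the owner pays the off-chain coin to when leaving the system.
    Unspendable off-chain, and opaque to the oracle and other users."""
    return H(TAG + nonce + bitcoin_address)

def witness_checks_exit(dest, nonce, public_bitcoin_address):
    """Inside the witness: confirm the transcript's final destination
    commits to the Bitcoin address being paid on-chain."""
    return dest == H(TAG + nonce + public_bitcoin_address)

nonce = os.urandom(16)             # kept private; never published
addr = b"1ExampleBitcoinAddr"
dest = exit_destination(addr, nonce)
assert witness_checks_exit(dest, nonce, addr)
assert not witness_checks_exit(dest, nonce, b"1AttackerAddr")
```

Because only the hash appears in the off-chain transcript, nobody watching the off-chain system learns which Bitcoin address the coin exited to.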

But whatever you're doing for double-spending prevention off-chain, you'd effectively be using the same thing here to convince the witness that you're not double-spending your redemption. Or not: none of this actually requires you to be secure against double-spending if you don't want to adopt rules which are... but, uh, no one else may want your double-spendable coins. :)

If so it's still superior to existing off-chain systems in that funds can only be frozen, not confiscated.
And even there, part of the reason I gave an example using a hash-blinded oracle is that it is one where the trusted party is so deprived of information that it could only prevent spending by shutting down completely (and not even then if you use M of N of them) or if the penultimate off-chain holder cooperates with it.

I could easily imagine witness designs that reduce the frozen-funds risk further in exchange for some double-spend risk. For example, there could be an alternative way to redeem the coin by providing an incomplete transcript (e.g., one where you're the final owner, but there is no transfer into Bitcoin at the end) plus Bitcoin headers proving that the current time is sufficiently far in the future. There is a risk that a prior owner of the coin will steal it at this point (by truncating their transcript), but that risk may be better than allowing the coin to be frozen forever if the off-chain system fails, and it would only exist if the holder of the coin failed to exit the off-chain system before their previously agreed-upon timeout ran out.

Interesting ... are there constraints on the set of programs or is it really 'all programs'?
It's Turing-complete, and for the purposes here it really is all programs. The system emulates a specialized Harvard-architecture virtual machine and cannot, as currently defined, run self-modifying code. The execution environment can only do input by reading from two one-way input tapes (one for public inputs, one for private ones), and can only output by "accepting" or failing to accept by the time the user-specified time limit is up. None of this is problematic for the use we'd want here; e.g., any output you make an input, and have the program accept only if it matches. The primary limitation that would confront users doing what I've proposed is that longer execution time results in longer proof-computation time and larger proof-construction memory (and also larger programs, but that growth is well controlled).


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: Mike Hearn on August 19, 2013, 06:21:37 PM
OK, from re-reading the Pinocchio paper and a quick look at the zk-SNARKs paper, it seems like Pinocchio is much faster but supports a much more heavily restricted variant of C. For instance, qcc doesn't support mutable state or iteration, so it unrolls all loops.

Ben-Sasson et al.'s work is therefore much more general, but also has way higher proving time.

edit: actually, I just noticed they have an entire section devoted to discussing Pinocchio. I didn't see it because they never refer to it by name, only by citing the paper (the title of which also does not mention the name).

Quote
All previous implementation work (except the work of Canetti et al. [CRR11] on competing provers) requires
an instance to be represented as a circuit, or other similar “low-level” representations.
While [SBW11, SMBW12, CMT12, TRMP12] do not address the problem of converting arbitrary programs to a circuit representation, [SVP+12, SBV+13] do consider the problem of programming arithmetic circuits. Specifically, they present a solution, based on the Fairplay compiler [MNPS04, BDNP08], for compiling programs written in a special language called SFDL. Next, both [SVP+12, SBV+13] convert the output of the FairPlay compiler to the constraints required by their respective protocols. Compared with high-level programming languages like C, SFDL is quite limited in the sense that it does not support important primitives and has inefficient support for others. For example, SFDL does not support loops with a non-constant number of iterations; also, it does not support recursion. Furthermore, each array access of the form A[i], where A is an array and i is an index, is implemented via a multiplexer circuit over the entire array. In particular, relying on a compiler such as FairPlay has a severe drawback: the circuits it generates have size that is quadratic in the time bound T in the worst case, due to inefficient support of memory accesses. So, e.g., the prover in all previous works runs in time that is Ω(T²) in the worst case.

Parno et al. [PGHR13] do not rely on the Fairplay compiler, but also rely on an approach with a blowup that is at least quadratic in the worst case. Indeed, they provide a compiler for a basic subset of C that, like SFDL, is very restrictive: memory accesses are static constants, loops are unrolled up to a static bound, both branches of a conditional are executed, and so on. In particular, accesses to memory are inefficiently supported. The quadratic blowup in previous work is not accidental but is due to a fundamental difficulty: how is consistency of random-access memory achieved? As discussed (see Section 2.3.2), the naive solution of multiplexing from memory at each time step is inefficient. Instead, in this work (see Section 2.3) we implement an efficient circuit generator: by leveraging nondeterminism and routing [BCGT13a], we generate an arithmetic circuit whose size is only O(T log T). The bound holds even when a program makes use of data-dependent loops, control flow, recursion, and memory accesses. (Indeed, the bound holds for all TinyRAM programs.) Because most prior works support circuit satisfiability, all these prior works directly benefit from our circuit generator in the sense that their circuit generators can be replaced with our more efficient one.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: iddo on August 19, 2013, 06:46:21 PM
In general you'd provably destroy the coins in the off-chain system, in a way which binds the destruction to the Bitcoin address you want to pay, or otherwise unbind them in some way visible to the off-chain users.

Right, I see now. The thing that I missed is that you'd provably destroy the coins in the off-chain system. So, for example, in the anti-replay oracle system, the proof of destruction will be of a computation that includes a signature by the oracle indicating that it considers the coin to be destroyed, so when a user initially sends a coin into this off-chain system he sends it to a SCIP program that references the pubkey of the oracle.

But why do you say that it's a soft-fork? Isn't it true the current Bitcoin nodes cannot verify such proofs-of-computation, and therefore they will disagree with the newer nodes regarding whether the transaction of the user who exited the off-chain system is valid, so the network would split unless the older nodes upgrade?

This CoinWitness idea is something of a less ambitious example of what Eli suggested in his talk—effectively running blockchain validation under SCIP to produce UTXO checkpoints that have cryptographic proof of their correctness. But instead of validating all transactions from a public blockchain, here I suggest validating small sequences of private transactions to avoid ever making them public at all and, in doing so, improve the scalability, flexibility, privacy, and fungibility of Bitcoin. (In fact, this idea may be trivially obvious to those working on the SCIP tools, but I expect it will blow some minds here.)

Yes, this new idea has much more beneficial properties (compared to just "compressing" the blockchain so that nodes can obtain UTXO checkpoints in a trustless manner, instead of getting the entire blockchain history from archival nodes). From what I know, I doubt that this idea was trivially obvious to anyone before now. If I remember correctly, you've said in the past that the properties this kind of system can accomplish are what Ripple should have tried to accomplish.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: gmaxwell on August 19, 2013, 07:31:20 PM
Right, I see now. The thing that I missed is that you'd provably destroy the coins in the off-chain system. So for example in the anti-replay oracle system, the proof of destruction will be of a computation that includes a signature by the oracle indicating that he considers the coin to be destroyed, so when a user initially sends a coin into this off-chain system, he sends it to a SCIP-compiled program that references the pubkey of the oracle.
The oracle doesn't have to know the coin is destroyed, though it could if you wanted it to. In my example the coin is just paid to some hashed destination as far as the oracle can tell (well, technically the oracle doesn't even know there is a coin: it's just timestamping some data with a prefix that it's never seen before).  Other users of the off-chain system know that destination isn't them, and that's sufficient to prevent double-spending. The witness knows the destination is Bitcoin because the redeemer provides the nonce.  The distinction isn't super important, but it does inhibit certain kinds of DoS attacks (like the oracle refusing to let you take coins out of the system and put them back into Bitcoin).
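
The hiding trick described here can be sketched in a few lines (a toy model with hypothetical names and encodings, not the actual protocol): the oracle only ever timestamps an opaque hash, and the redeemer later opens the commitment by revealing the nonce to the witness program.

```python
import hashlib
import os

def commit(destination: bytes, nonce: bytes) -> bytes:
    """Hash commitment to a destination; the oracle only ever sees this."""
    return hashlib.sha256(nonce + destination).digest()

# The sender picks a fresh nonce and commits to the Bitcoin exit destination.
nonce = os.urandom(32)
destination = b"bitcoin:scriptpubkey-of-redeemer"   # hypothetical encoding
c = commit(destination, nonce)

# The oracle timestamps `c` blindly; it cannot even tell a coin is involved.
# Later, the redeemer reveals (destination, nonce) to the SCIP witness,
# which checks the opening against the timestamped commitment:
assert commit(destination, nonce) == c
```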

Quote
But why do you say that it's a soft-fork? Isn't it true the current Bitcoin nodes cannot verify such proofs-of-computation, and therefore they will disagree with the newer nodes regarding whether the transaction of the user who exited the off-chain system is valid, so the network would split unless the older nodes upgrade?
Correct. A soft-fork change is one which strictly decreases the set of valid transactions.  Old nodes accept transactions that are valid under the new rules, but they also accept transactions which are invalid under the new rules, so the network risks diverging unless at least a significant super-majority of mining hash-power enforces the new rules. We've made soft-forking changes a number of times, and I think we can make them pretty safely... or at least that will continue to be true so long as there remains a single codebase used by the full nodes behind almost all mining. But they are still inherently risky and imply a long-term commitment to the change and a public consensus to make it, so they can't be made lightly or experimentally.

Yes, this new idea has much more beneficial properties (compared to just "compressing" the blockchain so that nodes can get UTXO checkpoints in a trustless manner, instead of getting the entire blockchain history from archival nodes). From what I know, I doubt that this idea was trivially obvious to anyone before now. If I remember correctly, you've said in the past that the properties that this kind of system can accomplish are what Ripple should have tried to accomplish.
There are many neat off-chain systems possible, including ones with really good scaling properties (e.g. no global consensus; indeed, what Ripple could have been before the change in ownership), but even when they are themselves zero-trust or nearly zero-trust, getting value in and out of Bitcoin without trust is a hard problem.  I hope here that I've shown a potential tool we could employ to address that challenge.

New cryptographic tools make many things possible which were unthinkable before. Someday people will look back at the things we invent between now and then and exclaim with a surprised tone, "What? Wasn't that obvious?", as some do even now about Bitcoin itself. I thought the idea was interesting, or else I wouldn't have posted about it, but I can't deny that at least half of the idea comes from simply having a basic understanding of these new tools. The potential to make easy discoveries like this is why tools like SCIP and Bitcoin are the new frontier, and that's one reason I enjoy thinking about them and working with them.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: iddo on August 19, 2013, 08:52:55 PM
Right, I see now. The thing that I missed is that you'd provably destroy the coins in the off-chain system. So for example in the anti-replay oracle system, the proof of destruction will be of a computation that includes a signature by the oracle indicating that he considers the coin to be destroyed, so when a user initially sends a coin into this off-chain system, he sends it to a SCIP-compiled program that references the pubkey of the oracle.
The oracle doesn't have to know the coin is destroyed, though it could if you wanted it to. In my example the coin is just paid to some hashed destination as far as the oracle can tell (well, technically the oracle doesn't even know there is a coin: it's just timestamping some data with a prefix that it's never seen before).  Other users of the off-chain system know that destination isn't them, and that's sufficient to prevent double-spending. The witness knows the destination is Bitcoin because the redeemer provides the nonce.  The distinction isn't super important, but it does inhibit certain kinds of DoS attacks (like the oracle refusing to let you take coins out of the system and put them back into Bitcoin).

Right, I wrote it less clearly than you. So the oracle only prevents double-spending in a generic way and is oblivious to the fact that he assisted in destroying the coin, but the signature that the oracle provides for the particular off-chain transaction that destroys the coin is required for the verification (by Bitcoin nodes) to pass when trying to redeem that coin back into the Bitcoin network.

Correct. A soft-fork change is one which strictly decreases the set of valid transactions.  Old nodes accept transactions that are valid under the new rules, but they also accept transactions which are invalid under the new rules, so the network risks diverging unless at least a significant super-majority of mining hash-power enforces the new rules. We've made soft-forking changes a number of times, and I think we can make them pretty safely... or at least that will continue to be true so long as there remains a single codebase used by the full nodes behind almost all mining. But they are still inherently risky and imply a long-term commitment to the change and a public consensus to make it, so they can't be made lightly or experimentally.

Maybe the distinction between soft-fork and hard-fork is less important than I thought, but I'm still a bit confused on why this is a soft-fork. With regard to a valid transaction that passes the SCIP verification (from the off-chain system back into the Bitcoin network), can you please explain why the older nodes would consider it to be a valid transaction? If that's the case, it means that the older nodes would allow anyone to spend such outputs, without checking anything at all?


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: Peter Todd on August 19, 2013, 09:01:15 PM
Maybe the distinction between soft-fork and hard-fork is less important than I thought, but I'm still a bit confused on why this is a soft-fork. With regard to a valid transaction that passes the SCIP verification (from the off-chain system back into the Bitcoin network), can you please explain why the older nodes would consider it to be a valid transaction? If that's the case, it means that the older nodes would allow anyone to spend such outputs, without checking anything at all?

That is exactly the case; see BIP 16 (https://en.bitcoin.it/wiki/BIP_0016) for an example where a soft-fork was used in that way. CoinWitness could be implemented similarly.

Basically it isn't safe to use CoinWitness until >50% of the hashing power rejects invalid CoinWitness transactions, but unless you are a miner you don't actually need to upgrade your node.
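
A toy model of why this counts as a soft fork (illustrative only; `scip_proof_ok` is a made-up stand-in for real SCIP verification): the new rules only ever reject transactions the old rules would have accepted, never the reverse, so new-rule-valid blocks are always acceptable to old nodes.

```python
def old_node_valid(tx: dict) -> bool:
    """Pre-upgrade rule: the CoinWitness output template looks like an
    anyone-can-spend output to old nodes, so any spend is accepted."""
    return True

def new_node_valid(tx: dict) -> bool:
    """Post-upgrade rule: additionally require the SCIP proof to verify
    (modeled here as a boolean field on the transaction)."""
    return old_node_valid(tx) and tx.get("scip_proof_ok", False)

good = {"scip_proof_ok": True}
bad = {"scip_proof_ok": False}

# New rules are a strict subset of the old ones: that is what makes
# this a soft fork rather than a hard fork.
assert old_node_valid(good) and new_node_valid(good)
assert old_node_valid(bad) and not new_node_valid(bad)
```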


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: iddo on August 19, 2013, 11:36:44 PM
An observation regarding privacy/fungibility: if a user sends a tainted coin even to a dummy off-chain system (i.e., a supposedly off-chain system that he runs by himself and whose existence only he is aware of), and after a while sends the coin back into the Bitcoin network, then it seems to me that that's already enough to un-taint the coin, because the rest of the Bitcoin network cannot distinguish between this scenario and a scenario in which multiple users sent coins to the off-chain system and mixed their coins among themselves, so that the user who sent the tainted coin received a different coin in the transaction that's sent back into the Bitcoin network. In other words, the mere possibility that off-chain mixing systems exist is already enough to un-taint coins, without really doing any mixing.

Note that the proof is short, i.e., not linear in the size of the computation or anything like that (it's the Merkle root hash of the PCP-style proof plus pseudorandom queries, derived from the root hash, to entries in the proof), so there isn't even a need to bloat the coin's transcript in the off-chain system so that nodes who try to impose policies on tainted coins couldn't see that the proof is too short; but in any case a dummy transcript that does one extra step of sending/receiving a coin is enough.
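
As a rough illustration of why the proof stays short, here is a sketch of the Fiat-Shamir-style derivation being alluded to, where the verifier's query positions are computed pseudorandomly from the Merkle root itself (function names are made up; the real PCP query schedule is considerably more involved):

```python
import hashlib

def derive_queries(root: bytes, proof_len: int, num_queries: int) -> list:
    """Derive pseudorandom query positions from the Merkle root, so the
    proof is non-interactive and its size is root + a few openings,
    not linear in the size of the computation."""
    queries = []
    counter = 0
    while len(queries) < num_queries:
        h = hashlib.sha256(root + counter.to_bytes(4, "big")).digest()
        queries.append(int.from_bytes(h, "big") % proof_len)
        counter += 1
    return queries

root = hashlib.sha256(b"PCP-proof-commitment").digest()
positions = derive_queries(root, proof_len=1 << 20, num_queries=5)
# Anyone re-deriving from the same root gets the same positions.
assert positions == derive_queries(root, 1 << 20, 5)
assert all(0 <= p < (1 << 20) for p in positions)
```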

It seems to me that this general framework that Gregory Maxwell describes here supersedes other mixing proposals like the ideas of Adam Back, Zerocoin, or non-protocol extensions that can be used on top of Bitcoin. And that's even before we start talking about the other important benefits in terms of scalability and extensibility of the off-chain systems.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: gmaxwell on August 20, 2013, 01:00:49 AM
It seems to me that this general framework that Gregory Maxwell describes here supersedes other mixing proposals like the ideas of Adam Back, Zerocoin, or non-protocol extensions that can be used on top of Bitcoin. And that's even before we start talking about the other important benefits in terms of scalability and extensibility of the off-chain systems.
Perhaps, but there are deployment challenges; this technology will not be easy to mature and get adopted. I have alternatives which are more pragmatic, if not as impressive in their mechanisms. You are going to like my next post (https://bitcointalk.org/index.php?topic=279249.0) too.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: iddo on August 20, 2013, 03:02:08 PM
It seems to me that this general framework that Gregory Maxwell describes here supersedes other mixing proposals like the ideas of Adam Back, Zerocoin, or non-protocol extensions that can be used on top of Bitcoin. And that's even before we start talking about the other important benefits in terms of scalability and extensibility of the off-chain systems.
Perhaps, but there are deployment challenges; this technology will not be easy to mature and get adopted.

Very true, but I spoke with Eli today, and maybe particular opcodes can be optimized especially for Bitcoin. For example, TinyRAM could include a special opcode to verify ECDSA signatures, which would be optimized by avoiding the need to translate a C-style ECDSA algorithm into a circuit and instead directly using an algebraic circuit over F_p that implements the ECDSA verification algorithm. There's a lot of room for optimizations.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: Mike Hearn on August 20, 2013, 03:05:22 PM
Yeah, I was wondering yesterday how much scope there is for custom opcodes. I guess the issue is that each special opcode complicates the transition function and thus the circuit so it results in extra blowup per time step. Perhaps it's possible to have transition circuits that vary depending on what instructions are possible at each given point.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: ShadowOfHarbringer on August 20, 2013, 03:33:43 PM
This is basically how Bitcoin already works... except normally, to verify the computer program was run, all Bitcoin nodes actually run that program. With SCIP, they don't need to actually do that - just the proof that someone ran it is enough. Yes I know that sounds kinda crazy, but amazingly math actually lets you verify that someone ran a particular computer program honestly without actually running it yourself or seeing all the data it operated on.
This may be the most fucking brilliant idea I have seen since the dawn of proof-of-work (or sliced bread), however I am obviously too stupid to verify whether it will work or is even possible.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: gmaxwell on August 20, 2013, 03:43:54 PM
[Warning: non-expert blabber ahead:]
I believe that some of the fundamental improvements they made to prevent circuit blowup generally should also make adding opcodes easier. One of the big reasons you get blowup in conversion to circuits is that you need a MUX hierarchy to get the data from all the places it could have come from to all the places it may need to go, and that makes adding more opcodes (or memory) costly.  They address this through the use of non-deterministic advice.

The idea is pretty neat: you construct the circuit with additional inputs which are effectively outside of the proof, but which do not break soundness because there is no value they can take which would cause false acceptance.  The prover solves for these values and fills them in, and that allows the system to do things like use rearrangeably non-blocking switch networks instead of huge MUX cascades (which then requires the prover to solve a routing problem, but replaces something with quadratic scaling with something that has log scaling).  An example they give in their paper is the divider: implementing a divide circuit would take a lot of gates (as is the case in digital logic too); instead they do something along the lines of implementing a multiply circuit which receives non-deterministic advice (the solution of the division), multiplies it with one of the inputs, and checks it against the other.
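
The divider example can be made concrete with a toy sketch (ordinary Python, not their actual circuit): the in-circuit check only needs a multiplication and two comparisons, because the prover supplies the quotient and remainder as non-deterministic advice.

```python
def check_division_with_advice(a: int, b: int, q: int, r: int) -> bool:
    """Instead of a costly divide circuit, the prover supplies the
    quotient q and remainder r as non-deterministic advice; the circuit
    only needs a multiplier and comparisons to verify them."""
    return b != 0 and 0 <= r < b and q * b + r == a

# The prover solves the division outside the circuit...
a, b = 1234567, 321
q, r = divmod(a, b)

# ...and the cheap in-circuit check accepts exactly the honest advice:
# no dishonest (q, r) can pass, so soundness is preserved.
assert check_division_with_advice(a, b, q, r)
assert not check_division_with_advice(a, b, q + 1, r)
```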

The obvious opcodes which would be most useful for us are the EC field operations over our curve.  I might say parts of SHA-2, but I'm not sure how much more efficient a direct implementation of the SHA-2 round function would be compared to one compiled from C. At least in digital logic adders use a lot of gates, but they're the same adders TinyRAM would use with a 32-bit word size, so...


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: ShadowOfHarbringer on August 20, 2013, 06:43:41 PM
[Warning: non-expert blabber ahead:]
I believe that some of the fundamental improvements they made to prevent circuit blowup generally should also make adding opcodes easier. One of the big reasons you get blowup in conversion to circuits is that you need a MUX hierarchy to get the data from all the places it could have come from to all the places it may need to go, and that makes adding more opcodes (or memory) costly.  They address this through the use of non-deterministic advice.

The idea is pretty neat: you construct the circuit with additional inputs which are effectively outside of the proof, but which do not break soundness because there is no value they can take which would cause false acceptance.  The prover solves for these values and fills them in, and that allows the system to do things like use rearrangeably non-blocking switch networks instead of huge MUX cascades (which then requires the prover to solve a routing problem, but replaces something with quadratic scaling with something that has log scaling).  An example they give in their paper is the divider: implementing a divide circuit would take a lot of gates (as is the case in digital logic too); instead they do something along the lines of implementing a multiply circuit which receives non-deterministic advice (the solution of the division), multiplies it with one of the inputs, and checks it against the other.

The obvious opcodes which would be most useful for us are the EC field operations over our curve.  I might say parts of SHA-2, but I'm not sure how much more efficient a direct implementation of the SHA-2 round function would be compared to one compiled from C. At least in digital logic adders use a lot of gates, but they're the same adders TinyRAM would use with a 32-bit word size, so...
http://www.reactiongifs.com/wp-content/uploads/2011/09/mind_blown.gif


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: Mike Hearn on August 20, 2013, 07:37:35 PM
The obvious opcodes which would be most useful for us are the EC field operations over our curve.  I might say parts of SHA-2, but I'm not sure how much more efficient a direct implementation of the SHA-2 round function would be compared to one compiled from C. At least in digital logic adders use a lot of gates, but they're the same adders TinyRAM would use with a 32-bit word size, so...

They say that adders require 2W gates. So pretty cheap. But then they also say that they can do bitwise operations directly on the binary form as in regular boolean circuits. Although they don't mention shifts, I guess it's not so different. So I'd intuitively guess that a SHA256 implementation isn't going to be too expensive.

I guess for RSA the fact that their "native" arithmetic works in large prime fields might be useful - but I'm really out of my depth at that point.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: 2112 on August 23, 2013, 04:45:45 PM
In SNARKs for C: Verifying Program Executions Succinctly and in Zero Knowledge (http://www.scipr-lab.org/) (referred to as SCIP below), Eli Ben-Sasson et al. describe their work on highly efficient non-interactive proofs with zero-knowledge for the faithful execution of programs written in C. Eli also presented (http://www.youtube.com/watch?v=YRcPReUpkcU) at the Bitcoin conference.
Thank you very much. I've spent some time reading and comprehending this paper. I have the following questions about the associated area of research:

1) TinyRAM has a requirement that the memory is W bits wide and has exactly 2^W cells. Is this a convenience for the sake of simplifying the presentation, or is it a necessary condition for the validity of the proof?

2) TinyRAM has exactly one deterministic input tape that can only be read once and in the forward direction only. I would love to hear from somebody about some research that would make TinyRAM Turing-equivalent in some way: a writable and rewindable tape, a standard-Turing rewritable tape, a stack (one-ended tape that is writable forward and readable backward), etc. Again my question is: was that done for the purpose of compactness of the derivation or do writable tapes break the derivation in some way?

Without knowing the answers to the above questions, my thinking is that the CoinWitness C program will have to contain code of the form:
Code:
if ((p = malloc(n)) != NULL) {
   /* real computation goes here */
} else {
   reject; /* or accept false; */
}
While the above code can be kept constant, the verification proofs for it will have to be published for multiple values of W. In other words, either:

A) coins can be made to disappear when the blockchain becomes long enough;
B) proofs will need to be periodically recompiled and recomputed for a wider W.

Therefore I think this breaks the assumptions about the computational complexity of using SCIP for Bitcoin.

Is there anyone here who lives close to Haifa, Massachusetts, or Tel Aviv and could pose those questions to the authors and receive answers other than "I'll get back to you on that"?

Thanks in advance.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: gmaxwell on August 23, 2013, 06:14:57 PM
1) TinyRAM has a requirement that the memory is W bits wide and has exactly 2^W cells. Is this a convenience for the sake of simplifying the presentation, or is it a necessary condition for the validity of the proof?
I thought this was a simplification just to avoid having to have additional logic for out-of-bounds addresses. Though if so, I don't see why it couldn't have a parameter m such that m < W and memory = 2^m, where addresses with the W-m high bits set alias other low memory, as has been the case for many real architectures. I can try to pass the question on to the authors.

Quote
2) TinyRAM has exactly one deterministic input tape that can only be read once and in the forward direction only. I would love to hear from somebody about some research that would make TinyRAM Turing-equivalent in some way: a writable and rewindable tape, a standard-Turing rewritable tape, a stack (one-ended tape that is writable forward and readable backward), etc. Again my question is: was that done for the purpose of compactness of the derivation or do writable tapes break the derivation in some way?
The way it's described, the tape is forced to be read into memory at startup; you might as well understand it as not existing. TinyRAM is already Turing-complete (subject to memory and operation bounds) without any tapes at all, because it is a random-access machine. Making it more Turing-machine-like wouldn't serve the purpose of making general-purpose software run better on it.

I guess what you're really looking for here is a way to achieve large input without large memory state under the consistency check, for problems where the input size is much, much larger than the ongoing random-access state. One way to do that may be to operate in a hierarchical manner.  This is the same kind of challenge we'd have for validating the blockchain using SCIP: we have 12 gigabytes of data to verify now, but only a final state space of about 250 megabytes.
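
A toy sketch of the hierarchical idea (the transition function here is hypothetical and nothing like the real SCIP machinery): only a small per-chunk state ever needs to sit under the consistency check, each chunk could be covered by its own sub-proof, and a top-level proof only needs to check that the small intermediate states chain together.

```python
import hashlib

def step(state: int, chunk: bytes) -> int:
    """Hypothetical per-chunk transition: carries only a small state."""
    return (state + sum(chunk)) % (1 << 32)

def verify_in_pieces(chunks, claimed_final_state):
    """Each chunk could be verified by its own sub-proof; a top-level
    proof then only checks that the small intermediate states chain
    together, so memory under the consistency check stays small even
    though the total input (e.g. a 12 GB blockchain) is huge."""
    state = 0
    checkpoints = []
    for chunk in chunks:
        state = step(state, chunk)
        checkpoints.append((state, hashlib.sha256(chunk).hexdigest()))
    return state == claimed_final_state, checkpoints

data = [b"block-1", b"block-2", b"block-3"]
final = 0
for c in data:
    final = step(final, c)
ok, _ = verify_in_pieces(data, final)
assert ok
```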

Quote
Without knowing the answers to the above questions, my thinking is that the CoinWitness C program will have to contain code of the form:
if (p = malloc(n)) {
[...]
A) coins can be made to disappear when the blockchain becomes long enough;
B) proofs will need to be periodically recompiled and recomputed for a wider W.
It's somewhat worse than that.  When computing the verification key (the thing that would get stuffed into the scriptpubkey) you must have an operations bound set in advance. This means that you must specify the upper size of the transcript you are validating... and the coins must be returned to the bitcoin chain before you reach that limit, or they become unrecoverable.  I don't actually think this is too grave a limit, so long as the bound can be made reasonably high in practice, and perhaps it even solves some systemic risks as a side effect (if transactions all move off the blockchain for unbounded time, what will provide transaction fees to pay for security?).


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: 2112 on August 24, 2013, 12:15:36 AM
The way it's described, the tape is forced to be read into memory at startup; you might as well understand it as not existing. TinyRAM is already Turing-complete (subject to memory and operation bounds) without any tapes at all, because it is a random-access machine. Making it more Turing-machine-like wouldn't serve the purpose of making general-purpose software run better on it.

I guess what you're really looking for here is a way to achieve large input without large memory state under the consistency check, for problems where the input size is much, much larger than the ongoing random-access state. One way to do that may be to operate in a hierarchical manner.  This is the same kind of challenge we'd have for validating the blockchain using SCIP: we have 12 gigabytes of data to verify now, but only a final state space of about 250 megabytes.
Thanks for your reply. The way the paper is written, this is a very unclear point. Check out the "Looking ahead" section (page 11 in my copy):

Quote
The two main challenges are implementing those functions that must be written directly in the underlying machine language, and supporting (or reasonably approximating) functionality that extends into the program runtime environment, such as file I/O, process management, inter-process communication (IPC), and other system services.

It isn't clear whether they look at that as a major research area or as implementation-detail drudgery. The answers could be:

X) sorry, editing error, no file I/O for you.
Y) we are describing that in the textbook form of our paper that has 544 pages.

Perhaps my use of Turing-equivalence wasn't the best choice of words. But it stayed within the spirit of the paper, which somewhat freely mixes asymptotic complexity (big-O notation: O(something)) and measurable complexity (const*F(|something|), where |·| denotes some form of measure). Yes, I was looking for a way of writing the TinyRAM program that doesn't start with slurping the entire deterministic input tape.

I fully agree with you that TinyRAM is Turing-complete and will not help in moving any problem between, say, P and NP. But this paper is focused on "succinct", "efficient" implementations, and then various results from the areas of http://en.wikipedia.org/wiki/Multitape_Turing_machine and http://en.wikipedia.org/wiki/Turing_machine_equivalents are really helpful when establishing the most accurate complexity bounds for particular problems.

Thanks again for any insight you may already have or could obtain from the authors.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: socrates1024 on August 26, 2013, 02:32:28 AM
The idea is pretty neat: you construct the circuit with additional inputs which are effectively outside of the proof, but which do not break soundness because there is no value they can take which would cause false acceptance.  The prover solves for these values and fills them in, and that allows the system to do things like use rearrangeably non-blocking switch networks instead of huge MUX cascades (which then requires the prover to solve a routing problem, but replaces something with quadratic scaling with something that has log scaling).  An example they give in their paper is the divider: implementing a divide circuit would take a lot of gates (as is the case in digital logic too); instead they do something along the lines of implementing a multiply circuit which receives non-deterministic advice (the solution of the division), multiplies it with one of the inputs, and checks it against the other.

My favorite kind of non-deterministic advice to use here is a Merkle tree branch. The circuit then is just a Merkle tree branch checker. This is basically the standard way in crypto to get access to an array of memory without having to have a switching gate for every element in the array. This would of course work with any of our existing merkleized-UTXO proposals too.
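
A minimal sketch of such a branch checker (ordinary Python rather than a circuit, purely to illustrate the shape of the computation the circuit would perform; the prover supplies the branch as advice):

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def check_branch(leaf: bytes, branch, index: int, root: bytes) -> bool:
    """The whole 'memory access' check: hash up a Merkle branch
    (supplied as non-deterministic advice) and compare to the root."""
    node = H(leaf)
    for sibling in branch:
        if index & 1:
            node = H(sibling + node)   # we are the right child
        else:
            node = H(node + sibling)   # we are the left child
        index >>= 1
    return node == root

# Build a 4-leaf tree to exercise the checker.
leaves = [b"utxo-0", b"utxo-1", b"utxo-2", b"utxo-3"]
lvl0 = [H(x) for x in leaves]
lvl1 = [H(lvl0[0] + lvl0[1]), H(lvl0[2] + lvl0[3])]
root = H(lvl1[0] + lvl1[1])

# Branch for leaf 2 (index 2): sibling leaf 3, then the other subtree.
assert check_branch(b"utxo-2", [lvl0[3], lvl1[0]], 2, root)
```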

Also, this would simplify things like P2Ptradex involving validation of other chains... BitcoinA produces proofs that its chain is correct. If a user in BitcoinB makes a transaction that defines the parameters for BitcoinA, then the miners/validators in BitcoinB can check it in a single step.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: jl2012 on August 26, 2013, 03:16:42 AM
Is it possible to run the SCIP calculation on GPU? Would it be more efficient than CPU? We may even have SCIP-ASIC in the future.

However, I think the blockchain-based system does not really improve scalability a lot, because it just shifts the burden to the alt-chain. To make it more scalable, the alt-chain must be less secure than Bitcoin, e.g., using smaller key sizes and hash lengths (therefore smaller signatures), or limited coin life (ancient UTXOs are pruned). This might be good enough for small-amount transactions.

For the oracle-based system, there is a risk of the oracle shutting down, which would lock all BTC in limbo. We may use M-of-N oracles, but the risk is still there. Instead of signing {HC,SHB}, I wonder if the oracle may sign {HC,SHB,SHBx}, where SHBx specifies a time-locked transaction sending the bitcoin to Bob's address ("the emergency transaction", "ET"). When passing the off-blockchain bitcoin to Bob, Alice has to provide the details of all ETs in the chain, and Bob will check whether his ET has an earlier locktime than all the previous ETs. Bob has to spend or redeem the off-chain coin before the locktime of Alice's ET, or Alice will be able to redeem the coin on the blockchain. If this is possible, we could significantly reduce the risk of cheating by requiring a bigger N in an M-of-N oracle system (or simply use M-of-M).
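
The locktime rule being proposed can be sketched as a one-line check (illustrative only; `et_chain_valid` is a made-up name): Bob accepts the off-chain coin only if his emergency transaction becomes spendable strictly before any earlier holder's ET.

```python
def et_chain_valid(previous_et_locktimes, my_et_locktime) -> bool:
    """Bob accepts the off-chain coin only if his emergency transaction
    can be broadcast before any earlier holder's ET becomes valid."""
    return all(my_et_locktime < t for t in previous_et_locktimes)

# Suppose the chain of earlier ETs unlocks at blocks 500000 and 499000;
# Bob's ET must unlock strictly earlier than both.
assert et_chain_valid([500000, 499000], 498000)
assert not et_chain_valid([500000, 499000], 499500)
```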


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: gmaxwell on August 26, 2013, 03:55:13 AM
will lock all BTC in limbo. We may use M-of-N oracles, but the risk is still there.
It would lock the BTC using that (set of) oracles in limbo, but not all BTC or even all CoinWitness BTC.

In general: the best is having no points of failure,  but failing that it's better to have many things required to fail (M of N), ecosystem diversity (so one set of M of N failing only hurts a fraction of the users), and good fallbacks (like timeouts, indeed).  One tricky part of things like timeouts is that they can create incentives for producing failures where none existed before.

E.g. if after a timeout anyone with a valid transcript showing them as the last owner can redeem the coin then there would be an incentive for someone who previously held a coin to attack the oracles and cause a timeout, allowing them to attempt to race to obtain the coin.

One possibility, borrowing from my whimsical CoinCovenants (https://bitcointalk.org/index.php?topic=278122.0) thread,  would be to have the CoinWitness program designed to enforce that a timeout recovery pays the output to another scriptpubkey which can either be redeemed by you after a second timeout OR can be redeemed by another timeout recovery which presents a _longer_ transcript.  In this case a timeout unwind would be very likely to go to the person with the longest transcript, and that might mostly remove the attack incentive the timeout creates. E.g., if you try to cheat and present a truncated transcript in order to steal some coins, before the coins are finally released to you someone with a longer transcript comes along and takes them back.
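
A toy model of that longest-transcript rule (made-up names, ignoring the actual timeout windows): among competing timeout claims, a strictly longer transcript always displaces the pending one, so the honest last owner wins.

```python
def resolve_timeout_claims(claims):
    """claims: list of (claimant, transcript_length) in arrival order.
    The pending claim is replaced whenever a strictly longer transcript
    appears before the second timeout expires."""
    winner = None
    for claimant, tlen in claims:
        if winner is None or tlen > winner[1]:
            winner = (claimant, tlen)
    return winner

# A cheater presents a truncated transcript of length 7, but the honest
# last owner overrides it with the full transcript of length 9.
assert resolve_timeout_claims([("cheater", 7), ("honest", 9)]) == ("honest", 9)
```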

So there is some risk balancing required...  My thinking here is that no single solution will be best for everyone and all uses, but we don't have to commit to a single usage, there can be diversity.

Quote
To make it more scalable, the alt-chain must be less secure than bitcoin, e.g. using smaller key size and hash length (therefore smaller signature), limited coin life (ancient UTXOs are pruned).
You miss one major way in which the "alt" can be less secure:  it can just be used by fewer users. Bitcoin's main lack of scalability comes from its global scope. Fewer users is a reduction in security, but many transactions— low value ones, for example— don't need all the security Bitcoin provides... Bitcoin's security needs to be good enough for the entire Bitcoin economy.

In any case, I just gave two distinct examples to make the point that I wasn't proposing a specific off-chain scheme, but a mechanism under which almost any cryptographically provable one could be bound to Bitcoin in a way that didn't create counterparty risk at the interface. You can think of an enormous number of possible alternatives with different security/scaling tradeoffs.

Quote
Is it possible to run the SCIP calculation on GPU? Would it be more efficient than CPU? We may even have SCIP-ASIC in the future.
Not yet, just from an implementation perspective, AFAIK, but the work is very parallel so it may be pretty GPU agreeable. Certainly special hardware could be designed which accelerated the field operations and made the work much faster.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: jl2012 on August 26, 2013, 01:02:16 PM
Such an oracle could be .... secured by a bounty txout that you can redeem if and only if you can publish a proof showing a replay, etc.

I don't think this works. Before doing any evil, the oracle operator could generate a replay proof and redeem the bounty for himself.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: gmaxwell on August 26, 2013, 02:16:55 PM
Such an oracle could be .... secured by a bounty txout that you can redeem if and only if you can publish a proof showing a replay, etc.
I don't think this works. Before doing any evil, the oracle operator could generate a replay proof and redeem the bounty for himself.
The purpose of the replay bounty is to make sure that a replaying oracle gets publicly disclosed. I would normally expect that to be coupled with a fidelity bond as well.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: jl2012 on August 27, 2013, 04:01:26 PM
Quote
Blockchain based
Alice has a bitcoin which is represented by a colored coin in some other specific blockchain, perhaps a geographic-specific blockchain.
Alice passes it to Bob by transacting in the normal way.
Bob can form the verifiable transcript by grabbing every transaction along the colored coin's path along with the SPV fragments connecting them to the chain and some additional blockchain headers to show the transactions are adequately buried.

What prevents Alice from forking the alt-chain and double-spending after she has sent the coin to Bob? When talking about SPV, one needs to know the height of the longest chain. Since the SCIP program won't be able to learn this, the attacker doesn't even need to outpace the honest chain.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: jtimon on August 28, 2013, 05:26:14 PM
Quote
Blockchain based
Alice has a bitcoin which is represented by a colored coin in some other specific blockchain, perhaps a geographic-specific blockchain.
Alice passes it to Bob by transacting in the normal way.
Bob can form the verifiable transcript by grabbing every transaction along the colored coin's path along with the SPV fragments connecting them to the chain and some additional blockchain headers to show the transactions are adequately buried.

What prevents Alice from forking the alt-chain and double-spending after she has sent the coin to Bob? When talking about SPV, one needs to know the height of the longest chain. Since the SCIP program won't be able to learn this, the attacker doesn't even need to outpace the honest chain.

That's why whatever is executed and validated by the chain must be reproducible by other miners. With SCIP they don't need to repeat the process because they can trust the computation's signature. But the source to be executed must be public to all miners and all its inputs must come from the public chain as well.

But most of the difficulties encountered come from the unnecessary requirement that regular bitcoins must be 100% interchangeable with an off-chain asset which obviously is not bitcoin.

Things are much easier if we focus SCIP's potential on the off-chain side of the scheme without requiring alterations to the public chain (which can also improve with SCIP, just in different ways).

Assuming we have private chains in which people can issue off-chain assets that can be atomically traded for public/in-chain assets (like @maaku and I propose with Freimarkets), an off-chain asset backed with bitcoin seems the best solution to me.

The accountant (the operator of the private chain) can issue accBTC on its private chain. Because the public chain is public, we can make a periodical audit process take data from it, and the accountant can use his bitcoin wallet to prove which unspent outputs he owns. If this audit process uses SCIP, privacy doesn't get compromised and we can guarantee that total issued accBTC <= BTC reserves at any time.

The accountant could publish chain blocks (signed by him as a substitute for proof of work) to prove to his users that he's not allowing double-spends of the public assets. Or... he could use SCIP to prove exactly the same thing without publishing all transactions!

I am really excited about SCIP and what it means for bitcoin's scalability, but it may not be as magical as we desire. It's magical enough for me, though. (And as said, we've not talked about SCIP-based productive proof-of-work yet...)


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: AnonyMint on September 01, 2013, 01:16:05 AM
How does this compare to CoinWitness?

CoinWitness is even more rocket-sciency than Zerocoin, and it shares many of the same weaknesses as a privacy-improver: novel crypto, computational cost, and the huge point of requiring a soft fork and not being available today. It may have some scaling advantages if it is used as more than just a privacy tool. But it really is overkill for this problem, and won't be available anytime soon.

After further thought, off-chain transactions with CoinWitness are insecure, i.e. bringing an off-chain transaction back onto the blockchain at par is insecure. How can we be sure there wasn't a double-spend?

This is analogous to trying to operate multiple blockchains with a fixed value between them.

If coins are allowed to be moved between blockchains at par (no market exchange variance), and the blockchains don't exchange coins at par with any blockchains that don't adhere, the problem remains that a 50% attack on the blockchain with the lowest PoW difficulty will infect the blockchains with higher PoW difficulty with its ill effects.

After further thought, off-chain transactions with CoinWitness are insecure, i.e. bringing an off-chain transaction back onto the blockchain at par is insecure. How can we be sure there wasn't a double-spend?
This is explained in the CoinWitness post, it's as secure as you wish to make it. It's also offtopic. Please stay on-topic and if you want to talk about that please post in that thread.

To make my point more obvious, note that it is impossible to determine if the fork being merged back into the dominant blockchain is the only or correct fork. Sorry but I think this is unarguable.

Edit: I see bytemaster agrees with me (https://bitcointalk.org/index.php?topic=279771.msg3053522#msg3053522).


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: gmaxwell on September 01, 2013, 05:51:33 AM
Let me see if I can make it more clear for you:

I proposed two examples to demonstrate that this is a _generic_ concept which can bind arbitrary machine-verifiable off-chain systems.  The security of any particular implementation depends on its specifics and is up to the users who choose to use a particular system.

Or as I said,
Quote
I provided two examples here to make the point that the idea is completely general. Any other system with the verifiable property, e.g. OpenTransactions, complicated ones involving voting by all participants, or even trivial ones that don't have double-spending prevention, could be used.

You're speaking specifically about the blockchain consensus example, which I specifically noted only has SPV security:

By using Bitcoin itself and making one of the public inputs to the witness a hash of a block which is required to be in the chain, you can increase the security of the "off-chain" coin from SPV+headers-fragment to full security.

And I'd appreciate it if you'd go edit your post which claims that I was arguing otherwise there.

Also, as I noted, in the case where you do only have SPV security, the users can choose the security parameters, presumably according to the coin values in question, their risk tolerance, and the consensus stability of the other chain:
Quote
along with the SPV fragments connecting them to the chain and some additional blockchain headers to show the transactions are adequately buried.

The trade-off here is specific to independent blockchain consensus systems, and not relevant to off-chain systems which have stronger consistency models.

If you wanted to go further, you could probably improve things in the blockchain case with a one-way consensus dependency: you merge-mine the second chain but, instead of the namecoin ships-in-the-night arrangement, require that the alternative system's nodes also validate the Bitcoin chain and that membership in the Bitcoin chain dominate its consensus: there is a longest chain of all alt-chain block headers which were also successful Bitcoin blocks on the current best bitcoin chain (there may additionally have been alt blocks between them), and the consensus chain for the altchain must contain at least these blocks. By doing this, a consensus proof of the altchain can be constructed relative to the Bitcoin consensus, at least for older portions of that altchain.

But I freely admit I haven't thought about slaved consensus much: I think using a blockchain as the alternative system is by far the least interesting option, because blockchains would inherently share Bitcoin's limitations and I think complementary approaches are more interesting... and because this is really more in the domain of altchain design than anything else: the notion of slaved consensus is interesting and potentially useful independent of the generic binding idea presented in this thread.

(Though perhaps that's a question that deserves more consideration than I've granted it: if you are to have one-way consensus slaving with many secondary systems, it's important that the parent chain be very cheap to verify, since all must verify it.)
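The "membership in the Bitcoin chain dominates the alt consensus" rule above can be sketched as a simple filter; the data layout (header tuples, a set of Bitcoin block hashes) is purely illustrative:

```python
def required_alt_blocks(alt_headers, bitcoin_best_chain):
    """Return the alt-chain headers that MUST appear in any valid alt
    consensus chain under the slaved-consensus rule: those merge-mined
    alt headers that were also successful Bitcoin blocks on the current
    best Bitcoin chain.

    alt_headers: ordered list of (alt_hash, btc_block_hash_or_None),
    where btc_block_hash is set when the merge-mined work also met
    Bitcoin's target. bitcoin_best_chain: set of Bitcoin block hashes.
    """
    return [alt for alt, btc in alt_headers
            if btc is not None and btc in bitcoin_best_chain]
```

Any candidate alt consensus chain that omits one of these required blocks would be invalid, which is what lets an alt-chain consensus proof be anchored to the Bitcoin consensus.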

If you really want to go nuts you could force those redeems to go into a temporarily timelocked output which can be reset by someone providing a longer chain proof before the time runs out (see the coin covenant thread).


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: jl2012 on September 01, 2013, 07:20:34 AM
Quote
You're speaking specifically about the blockchain consensus example, which I specifically noted only has SPV security:

As I pointed out earlier, it is worse than SPV security. SPV requires a user to monitor the network, look for the longest PoW chain, and verify whether a transaction is adequately buried in the longest chain. In your blockchain-based SCIP example, however, bitcoin miners will not monitor the alt-chain and won't know its current height. Bitcoin miners will accept the SCIP proof as long as the transaction is adequately buried, but not necessarily in the longest chain. Therefore it is (much) worse than the SPV model.

On the other hand, if all bitcoin nodes are required to validate the alt-chain, it is basically equivalent to my aux-block proposal: https://bitcointalk.org/index.php?topic=283746.0


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: gmaxwell on September 01, 2013, 07:25:46 AM
SPV requires a user to monitor the network, look for the longest PoW chain, and verify whether a transaction is adequately buried in the longest chain.
Fair enough. There is ambiguity in what SPV means— e.g. go see what electrum (http://electrum.org/), which does not monitor the network, calls itself or the BitcoinJ clients which get their peers exclusively via DNS.   Can you suggest a better name for "SPV with high risk of network isolation"? :P  It would probably be a good name to have.



Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: jl2012 on September 01, 2013, 11:16:46 AM
SPV requires a user to monitor the network, look for the longest PoW chain, and verify whether a transaction is adequately buried in the longest chain.
Fair enough. There is ambiguity in what SPV means— e.g. go see what electrum (http://electrum.org/), which does not monitor the network, calls itself or the BitcoinJ clients which get their peers exclusively via DNS.   Can you suggest a better name for "SPV with high risk of network isolation"? :P  It would probably be a good name to have.



SPV, defined by Satoshi, "needs to keep a copy of the block headers of the longest proof-of-work chain". There is no ambiguity on this point.

"Network isolation" is exactly the term I want to use. I just found a possible solution to mitigate this problem. For a low-value transaction (say 0.1 BTC), one may believe it is economically prohibitive to do a certain amount of PoW (say 4000 PHash, which would yield a 250 BTC reward if mined honestly) just to steal it. So the SCIP script will require the tx to be buried by at least 4000 PHash of work.

Thought?
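The "buried by at least N hashes of work" check above can be sketched from block headers alone, using Bitcoin's compact difficulty encoding (the function names are illustrative; the work formula is the standard 2^256/(target+1) used by Bitcoin Core):

```python
def nbits_to_target(nbits):
    """Decode Bitcoin's compact (nBits) difficulty encoding into a full target."""
    exponent = nbits >> 24
    mantissa = nbits & 0x00FFFFFF
    if exponent <= 3:
        return mantissa >> (8 * (3 - exponent))
    return mantissa << (8 * (exponent - 3))

def block_work(nbits):
    """Expected number of hashes to find one block at this target."""
    return 2**256 // (nbits_to_target(nbits) + 1)

def buried_by_enough_work(header_nbits_list, required_hashes):
    """True if the cumulative expected work of the burying headers meets
    the script's threshold (e.g. 4000 PHash = 4000 * 10**15 hashes)."""
    return sum(block_work(nb) for nb in header_nbits_list) >= required_hashes
```

A SCIP program could apply exactly this kind of check to the header fragment supplied in the proof, e.g. `buried_by_enough_work(headers, 4000 * 10**15)`.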


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: gmaxwell on September 01, 2013, 11:27:22 AM
"Network isolation" is exactly the term I want to use. I just found a possible solution to mitigate this problem. For a low-value transaction (say 0.1 BTC), one may believe it is economically prohibitive to do a certain amount of PoW (say 4000 PHash, which would yield a 250 BTC reward if mined honestly) just to steal it. So the SCIP script will require the tx to be buried by at least 4000 PHash of work.
Thought?
It's less obvious that it works once the subsidy is small. Proofs of transaction fees in our current protocol are also enormous (every txn in the block, then every txn referenced as an input, plus tree fragments to connect to potentially thousands of transactions in distinct blocks), but without them I can make fake blocks that appear to have a kajillion in transaction fees.

But in general, sure, there is some level of POW which probably provides adequate security for some transaction value. 

Did you see what I was suggesting with respect to making spends provisional? E.g. you provide a header fragment to redeem, the txout moves to a new txout which is released to you after some span of time— or, alternatively, can be updated by someone else providing a longer conflicting chain of headers.  The idea is that rather than you going out to seek the chain's information, some interested party (e.g. real owner of the coin) brings it to the blockchain.
 


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: jl2012 on September 01, 2013, 02:49:19 PM
Did you see what I was suggesting with respect to making spends provisional? E.g. you provide a header fragment to redeem, the txout moves to a new txout which is released to you after some span of time— or, alternatively, can be updated by someone else providing a longer conflicting chain of headers.  The idea is that rather than you going out to seek the chain's information, some interested party (e.g. real owner of the coin) brings it to the blockchain.

Sounds reasonable. Coupling with PoW requirement makes it economically prohibitive to steal or DoS


One possibility, borrowing from my whimsical CoinCovenants (https://bitcointalk.org/index.php?topic=278122.0) thread,  would be to have the CoinWitness program designed to enforce that a timeout recovery pays the output to another a scriptpubkey which can either be redeemed by you after a second timeout OR can be redeemed by another timeout recovery which presents a _longer_ transcript.  In this case a timeout unwind would be very likely to go to the person with the longest transcript and that might mostly remove the attack incentive the timeout creates. E.g. if you try to cheat and present a truncated transcript in order to steal some coins, before the coins are finally released to you someone with a longer transcript comes along and takes them back.

This will allow any previous owners to DoS the latest owner by forcing him to redeem the coin on the blockchain.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: AnonyMint on September 01, 2013, 05:31:41 PM
And I'd appreciate it if you'd go edit your post which claims that I was arguing otherwise there.

Done. Thanks for elaborating. Good to see that we are not in disagreement.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: gmaxwell on September 01, 2013, 09:33:53 PM
This will allow any previous owners to DoS the latest owner by forcing him to redeem the coin on the blockchain.
Hm? You'd still have to provide a transcript that shows the coin being directed back to bitcoin via sending it to a special destination.

You provide such a transcript to redeem the coin.

Someone else can provide a conflicting transcript, but only if it's of a longer conflicting chain.

So you can DoS, but only if you're also reorganizing the secondary chain (or at least producing more PoW than that chain).


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: jl2012 on September 02, 2013, 02:01:53 AM
This will allow any previous owners to DoS the latest owner by forcing him to redeem the coin on the blockchain.
Hm? You'd still have to provide a transcript that shows the coin being directed back to bitcoin via sending it to a special destination.

You provide such a transcript to redeem the coin.

Someone else can provide a conflicting transcript, but only if it's of a longer conflicting chain.

So you can DoS, but only if you're also reorganizing the secondary chain (or at least producing more PoW than that chain).

Sorry for the confusion. This comment is referring to the timeout mechanism for oracle-based system.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: gmaxwell on September 02, 2013, 02:11:06 AM
Sorry for the confusion. This comment is referring to the timeout mechanism for oracle-based system.
Oh, gotcha. Indeed, an oracle-based timeout means that prior parties to the transcript can make you waste your time.  Though if its horizon is set further out than the likely lifetime of the coin, it's moot— and it mostly serves to reduce the motivation someone might have for shutting down the oracles.

"You can kill these things, but if you do, all you'll have successfully done is make their users wait a bit and then waste some effort."


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: jl2012 on September 13, 2013, 05:46:53 AM
Anti-replay oracle based
Alice has a digital coin: coin={Value, Alice's pubkey, transcript of the coin's history}, and wants to assign it to Bob.
Bob tells Alice Hash(bob's pubkey)=HB.
Alice computes Hash(coin)==HC, and also Sign(HB)==SHB.
Alice contacts a trusted anti-replay oracle,  telling it {HC,SHB}.
The oracle returns Sign({HC,SHB}) if and only if it has never signed something which began with HC before.
Alice passes this information on to Bob, along with the coin. Bob checks the coin's history and is satisfied that he has a valid coin which has not been double-spent.

This can be continued on and on, passing a coin from party to party, producing an ever-growing transcript.  It can be trivially extended by using M of N anti-replay oracles, or other similar improvements. The anti-replay oracles this uses are trusted to not replay, but since they only deal with hashes they learn nothing about the transactions (or even that they're being used for a currency-like use—it might just be timestamping for all the oracle can tell). Such an oracle could be on trusted hardware, fidelity bonded, secured by a bounty txout that you can redeem if and only if you can publish a proof showing a replay, etc.


I think this is a better idea than the "trusted-computing bank" proposed by retep, because the oracle won't directly hold bitcoins, and it is also much more anonymous. I'm quite interested in building one, just for fun and as an experiment. However, I have the following practical issues:

1. Do I need to buy very expensive TPM hardware to run this? As far as I know, current Intel CPUs have trusted-computing support. Could that be used for such a purpose?

2. It seems I have to hold an ever-growing list of hashes to avoid replay. That won't scale in the long term. Is there any solution? One way I can think of is to revoke the signing key regularly and drop the related hash table.
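The quoted anti-replay oracle, including the key-rotation idea from question 2, can be sketched in a few lines. This is purely illustrative: the class and method names are made up, and the keyed hash is a stand-in for a real signature scheme.

```python
import hashlib

class AntiReplayOracle:
    """Toy sketch of the anti-replay oracle from the OP: it signs {HC, SHB}
    iff it has never signed anything beginning with HC before."""

    def __init__(self, secret: bytes):
        self.secret = secret
        self.seen = set()  # every HC this signing key has ever endorsed

    def _sign(self, msg: bytes) -> str:
        # Stand-in for Sign(...); a real oracle would use a proper signature.
        return hashlib.sha256(self.secret + msg).hexdigest()

    def request(self, hc: bytes, shb: bytes):
        """Return Sign({HC, SHB}) iff HC has never been seen before."""
        if hc in self.seen:
            return None  # replay attempt: this coin state was already spent once
        self.seen.add(hc)
        return self._sign(hc + shb)

    def rotate_key(self, new_secret: bytes):
        """Retire the old signing key and drop its hash table, so the
        replay set does not grow without bound (question 2 above)."""
        self.secret = new_secret
        self.seen.clear()
```

Note that rotation only bounds storage; coins whose transcripts reference a retired key would need some migration step, which this sketch does not address.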


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: gmaxwell on September 13, 2013, 06:10:03 AM
I think this is a better idea than the "trusted-computing bank" proposed by retep, because the oracle won't directly hold bitcoins, and it is also much more anonymous. I'm quite interested in building one, just for fun and as an experiment. However, I have the following practical issues:

1. Do I need to buy very expensive TPM hardware to run this? As far as I know, current Intel CPUs have trusted-computing support. Could that be used for such a purpose?

2. It seems I have to hold an ever-growing list of hashes to avoid replay. That won't scale in the long term. Is there any solution? One way I can think of is to revoke the signing key regularly and drop the related hash table.

I wrote up this little spec a while ago for such an oracle, which would be immediately useful even without this coinwitness stuff, e.g. for anti-double-spending bitcoin transactions:

http://0bin.net/paste/JCtxYmKrRXfGE6jw#M2b+70sG971rHdEmDKIDgz2PT/zlgSDa8zCTLHE1xbM=

I think it would be useful to implement one— in a very "embedded system" style, e.g. straight C with minimal/no dependencies— so that it could easily be ported to some strong trusted environment, even an IBM cryptocard ($$)... but to start it could run normally.   I make some suggestions there on how it could avoid ever-growing hash lists, same as you're suggesting.

In theory a modern Intel CPU with the right security bits and a TPM will get you working remote attestation. I have no clue what state the software is in right now, but I'd suggest starting here (http://trousers.sourceforge.net/).

My thinking is that such a system would have layered security— part from reputation, part from things like performance bonding, part from a trusted hardware environment, part from simply having little incentive to cheat.  Not all these things need to exist at once in order to get something started and begin creating applications.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: jl2012 on September 14, 2013, 05:57:47 PM
I have an idea to combine the blockchain-based and anti-replay oracle schemes for a stronger model.

1. The oracle will create an UTXO on the bitcoin blockchain and announce it to the world. (the "initial UTXO")

2. The oracle will collect timestamping requests from users. The requests are in the form of {HC,SHB} (as defined in OP). The oracle will accept a timestamping request only if it has never seen something beginning with the same HC before

3. With all timestamping requests collected, a Hash Merkle Root ("HMR") is calculated.

4. Using the initial UTXO as input, the oracle will create a transaction with only one output (the "second UTXO"), with a script like:
Code:
OP_DUP OP_HASH160 <pubkey hash> OP_EQUALVERIFY OP_CHECKSIG <HMR> OP_DROP

5. Before the transaction is mined, the oracle will update the transaction if it receives more timestamping requests. It will pay a higher fee to encourage miners to accept the updated transaction. (It is no big deal if an old version gets mined; some people just need to wait for the next block.)

6. When the transaction is mined, the oracle will publish all timestamped requests.

7. The oracle will use the second UTXO to create a new timestamping transaction. The procedure is repeated, making it a chain of time stamps.

8. Usually there should be only one timestamping transaction in a block. After a blockchain re-org there could be more than one such tx in a block. But this is okay since the sequence is always clear.
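Step 3's Merkle root over the collected requests could be computed as follows (a sketch assuming Bitcoin-style double-SHA256 and last-entry duplication at odd levels; the function names are illustrative):

```python
import hashlib

def h(x: bytes) -> bytes:
    """Bitcoin-style double SHA256."""
    return hashlib.sha256(hashlib.sha256(x).digest()).digest()

def merkle_root(leaves):
    """Merkle root over serialized {HC, SHB} timestamping requests.
    Odd-length levels duplicate their last entry, as in Bitcoin blocks."""
    if not leaves:
        return h(b"")
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last hash on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Committing only this 32-byte root (the "HMR") in the OP_DROP output keeps the on-chain footprint constant regardless of how many requests the oracle batches, while the published leaf list lets anyone verify membership.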

On the user side:

1. When a merchant receives an off-chain coin (as described in the anti-replay oracle in the OP), he will not deliver the product until he finds that the off-chain coin is timestamped in the blockchain with at least one confirmation. He will also check the hash lists of the current and ALL previous timestamping transactions to make sure there is no replay. He will deliver when he is satisfied.

2. The merchant won't need to keep monitoring the future timestamping transactions. Instead, he will just monitor the bitcoin blockchain to see if someone is trying to redeem the SCIP-locked bitcoin. (Such redemption should not be possible unless the oracle replays)

3. We may have a separate P2P network to distribute the hash lists, and use something like a bloom filter to allow light nodes to verify newly accepted off-chain coins.

The SCIP-locked bitcoin:

1. The SCIP-locked bitcoin is redeemable with a proof of the full transcript (as the OP suggests), PLUS the SPV links showing that the transcript is buried in the blockchain.

2. Using CoinCovenant, the SCIP-locked bitcoin is sent to another SCIP address. If it is not challenged by other people, the bitcoin could be sent to a normal bitcoin address after a timeout.

3. If the oracle replays, the first owner of the off-chain coin (the merchant in the last section) will find the redemption attempts on the blockchain. Now he will publish his transcript, with SPV links showing he is an earlier owner, and sends the SCIP-locked bitcoin to another SCIP address. Again, if it is not further challenged by other people, the bitcoin could be sent to the merchant's normal bitcoin address after the second timeout.

--------------------

This system is better than the original anti-replay oracle scheme because the bitcoin blockchain provides an undeniable proof of timestamp sequence, so the SCIP program can judge who the original owner is.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: jtimon on September 15, 2013, 03:23:42 AM
Since the absence of replay transactions is not validated by miners when these timestamp blocks get into the chain, you still have to trust the oracle not to replay transactions. So I don't see what exactly the merchant gains from waiting for block confirmations instead of just querying the oracle directly and instantly.

I don't even understand how the redemption is secure. If the off-chain coin goes from Alice to Bob, and Bob wants to redeem it for the in-chain coin, couldn't Alice still try to redeem it as well?

Before sending the off-chain coin to Bob, Alice could have exercised redemption with the proofs she already had. What exactly invalidates those proofs when she sends the off-chain coin to Bob? I understand how Alice's signature can give Bob the ability to redeem, but I don't understand how she sacrifices her own ability to redeem at the same time.

It seems to me that you will always need the signature of an external accountant to guarantee who the current owner is. At that point you're basically depositing the bitcoins with the accountant, so you're essentially sending BTC from one mtgox account to another, which doesn't remove much trust.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: gmaxwell on September 15, 2013, 03:32:52 AM
So I don't see what exactly the merchant gains from waiting for block confirmations instead of just quering the oracle directly and instantly.
The purpose of block confirmations is moving coins between different external systems, paying someone who doesn't want to use those systems (because they do have a different security model than Bitcoin), etc.

Quote
I don't even understand how the redemption is secure.
If the off-chain coin goes from Alice to Bob, and Bob wants to redeem it for the in-chain coin, couldn't Alice still try to redeem it as well?
Because to redeem, Bob must first pay to a special "address" which is really just a blinded copy of the Bitcoin address he's paying it to. This would be the mandatory last step in the transcript. Alice cannot redeem because she used her one and only spend to pay Bob.
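A toy illustration of why this works: because the oracle endorses at most one successor per coin state, a transcript verifier only has to check that each hop carries the oracle's signature over that state. All names and the keyed-hash "signature" below are stand-ins, not the real SCIP machinery.

```python
import hashlib

def sha(x: bytes) -> str:
    return hashlib.sha256(x).hexdigest()

def verify_transcript(transcript, oracle_verify):
    """Walk a coin transcript, confirming each hop is the oracle-endorsed
    successor of its coin state.

    Each hop is (coin_state, next_owner_hash, oracle_sig);
    oracle_verify(hc, shb, sig) checks the oracle really signed {HC, SHB}.
    Since the oracle signs at most one successor per HC, a prior owner
    (Alice) cannot produce a second valid hop from a state she already
    spent: her one endorsed spend already points at Bob.
    """
    for coin_state, next_owner_hash, sig in transcript:
        hc = sha(coin_state)  # HC = Hash(coin), as in the OP
        if not oracle_verify(hc, next_owner_hash, sig):
            return False
    return True
```

The mandatory last hop would name a blinded redemption destination, so only the final owner's transcript ends in a valid redeem step.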

Quote
It seems to me that you will always need the signature of an external accountant to guarantee who is the current owner. At that point you're basically depositing the Bitcoins with the accountant, so basically you're sending BTC from one mtgox account to another, which doesn't remove much trust.
Even if the oracle misbehaves it cannot steal coins without the cooperation of the current or past owners of that coin. It is also completely blinded to the identity of the transactions it is assisting in (it can't tell their value, who they're coming from or where they're going, and can't see any linking). And with the longest-transcript-timelocked-refunds discussed in some of the above posts, the coins are all still completely recoverable if the oracle vanishes too.  Every possibility has its tradeoffs and this isn't perfect, but it's certainly not the same security model as "deposit funds with an opaque bank".


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: jl2012 on September 15, 2013, 05:47:17 AM
So I don't see what exactly the merchant gains from waiting for block confirmations instead of just quering the oracle directly and instantly.
The purpose of block confirmations is moving coins between different external systems, paying some who doesn't want to use those systems (because they do have a different security model than Bitcoin), etc.

I think he's referring to my recently proposed system.

Although there is only one "timestamp miner", i.e. the oracle, in my proposed system, the timestamp list is made public and everyone can verify it. Although the oracle won't do any PoW, it can't trivially rewrite historical timestamps because they are buried in the bitcoin blockchain. So basically, it has the same security level as bitcoin: everyone can verify, and it is protected by the SAME amount of PoW. Since the timestamp list is merely a list of hashes, there is no privacy concern.

The merchant will verify that there is no replay in the current and all previous timestamps. The future ones are irrelevant because the system will always favor the first owner of the off-chain coin.

For the rest of jtimon's questions, I think gmaxwell has answered.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: jtimon on September 15, 2013, 02:12:39 PM
I think he's referring to my recently proposed system.

...the oracle [...] can't trivially rewrite historical timestamps because they are buried in the bitcoin blockchain.

Yes, I was referring to that, and I got my answer, thanks. Making it harder for the oracle to cheat: fair enough.

Quote
I don't even understand how the redemption is secure.
If the off-chain coin goes from Alice to Bob, and Bob wants to redeem it for the in-chain coin, couldn't Alice still try to redeem it as well?
Because to redeem, Bob must first pay to a special "address" which is really just a blinded copy of the Bitcoin address he's paying it to. This would be the mandatory last step in the transcript. Alice cannot redeem because she used her one and only spend to pay Bob.

But I still don't understand this part. The only thing that changed was on the oracle, not the chain. I don't get how Alice "spends her one and only chance of redemption".

Anti-replay oracle based
Alice has a digital coin: coin={Value, Alice's pubkey, transcript of the coin's history}, and wants to assign it to Bob.
Bob tells Alice Hash(bob's pubkey)=HB.
Alice computes Hash(coin)==HC, and also Sign(HB)==SHB.
Alice contacts a trusted anti-replay oracle,  telling it {HC,SHB}.
The oracle returns Sign({HC,SHB}) if and only if it has never signed something which began with HC before.
Alice passes this information on to Bob, along with the coin. Bob checks the coin's history and is satisfied that he has a valid coin which has not been double-spent.

So coinA = {value, HA, previousHistory.append({previous owner pubkey, SHA})}
And coinB = {value, HB, previousHistory.append({A pubkey, SHB})}

I understand how the SCIP script can accept coinB for redemption now, but I don't understand why coinA cannot be used for redemption anymore. Alice has only signed additional data, but coinA as input to the SCIP script should produce the same result.

With the external-chain approach I see the same problem. Unless the bitcoin chain directly observes the current utxo set of that chain, how can the script tell whether the transcript used for redemption contains the full history of the colored coin and no transactions have been validated in the external chain after that?

This double-redemption prevention mechanism that doesn't rely on proof of work seems magical to me.

What am I missing?

By the way, the external chain approach reminds me a lot of private
chains in freimarkets. The blocks are signed by trusted accountants
who are responsible for preventing double-spending instead of relying
on proof of work, but the accountant can still publish the chain or
use SCIP-based audit tools to prove his accounting has been correct
so far.
Instead of this magical redemption, someone issues off-chain coins
that can be traded directly and atomically for bitcoins or other
in-chain assets. Users have to trust the issuer of the off-chain
asset too (who may not be the same person as the accountant of the
private chain), but again SCIP audits can make continued cheating
impossible.

Still, the in-chain coin and the off-chain coin are
different assets (one backed by the other, but not directly fungible)
and they need to be traded. If I get to understand your
double-redemption prevention mechanism, we could improve that
situation by incorporating it into our proposal.

That solution could also be used with other public chains. For
example, bitcoins could be moved in and out of the freicoin chain to
be traded there. But again, I just don't see how this is possible without
bitcoin and freicoin chains completely observing each other.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: jl2012 on September 15, 2013, 02:59:30 PM
I have one more question for gmaxwell: how do you deal with the divisibility of the off-chain coin? It seems the off-chain coins are transferred like smart properties, which means they are indivisible. If one tries to divide a coin in the off-chain system, there is no guarantee that the fractions will reunite in the future, leaving them permanently locked in the off-chain system. Therefore, people would need to mint off-chain coins in different denominations for daily use. However, redeeming a low-value coin may not be economically viable due to the relatively high Bitcoin miner fee.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: hashman on October 29, 2013, 02:21:19 PM

This CoinWitness idea is something of a less ambitious example of what Eli suggested in his talk—effectively running blockchain validation under SCIP to produce UTXO checkpoints that have cryptographic proof of their correctness. But instead of validating all transactions from a public blockchain, here I suggest validating small sequences of private transactions to avoid ever making them public at all and, in doing so, improve the scalability, flexibility, privacy, and fungibility of Bitcoin.  (And, in fact, this idea may be trivially obvious to those working on the SCIP tools, but I expect it will blow some minds here).


Thanks for bringing this to our attention :) 

I don't see how the blockchain validation scheme would work, seeing as somebody wishing to verify the blockchain would need to (according to the SNARK paper) perform preprocessing work of order greater than that of running P, where P is the program they could have just run to verify the blockchain. 

However, something like your CoinWitness seems feasible to me, though I don't see all the details.  I am also excited that this type of scheme could be used as the basis of a proof-of-work chain in which the work consists of programs submitted with a fee by parties who want the programs run for whatever purpose.  I think this was proposed elsewhere on this forum back in the day, but I can't seem to find it.  Of course it would be even nicer if the computation were fully homomorphically encrypted, but this is not necessary (there are programs people need run but don't care too much about the privacy of the contents and output). 




Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: killerstorm on December 27, 2013, 12:18:13 PM
Here's my idea for making something conceptually similar to CoinWitness for colored coins, without protocol changes (SCIP is optional):

Quote
I thought that my understanding of colored coins is more or less complete, but today I found that I had more to learn...

I've been looking through a very naive (and broken) proposal to have Mastercoin in multiple blockchains at once, which prompted me to think:
Is it even possible to do things like that? What is possible?

It's worth noting that I've been thinking about multi-blockchain interactions since 2011, and all potential solutions I'm aware of are quite cumbersome.
(I'm not talking about cross-blockchain trade here, but about representing the same asset in multiple systems at once.)

I'll skip some details for now; the basic idea is that if it is possible to embed a compact cryptographic proof of an off-chain transfer into a colored coin transaction (which colored coin agents will be able to verify when necessary), then it is possible to integrate colored coins with off-chain systems in a consistent, trustless way.

The problem with off-chain transfers is that situations with ambiguity/lack of consensus are possible. But cryptocurrency transactions must be unambiguous. The current solution is to have a trusted party (potentially, some distributed entity, "d.a.c.") to work as an interface between an off-chain transfer system and a blockchain-based cryptocurrency (incl. colored coins).

But as I found today, it's also possible to do it the other way around: embed a reference to a proof into the colored coin transaction and let participants verify the proof. There is no longer a need for a trusted party: when a reference to a proof is embedded into a transaction, all potential ambiguities are eliminated.

To make it even better, we should re-think how we see colored coins. I think the majority of people (at least, non-cryptographers) understand a cryptocurrency as ledger-based consensus: that is, there is a ledger which shows how much money each one has.

But a more general approach is to see it as a system based on proofs...
Say, "Alice has 50 coins" means that Alice is able to prove that she has 50 coins.
"Alice transfers 50 coins to Bob" means that Alice performed an action which now allows Bob to prove that he has 50 coins.

The difference is that we don't need to make sure that we know the balance of each account; Alice and Bob are concerned only about their own balances.

Switching to this model would make the off-chain transfers I've outlined above feasible in practice. It also mitigates problems with consensus/hard forks: if we assume that an interactive proof protocol is used when a payment is made, it's possible to make sure that hard forks can't result in permanent monetary loss.

Now the only problem which remains is the compactness of proofs, i.e. a performance problem. Ultimately it can be addressed through SCIP or another compact proof system, I think.
But at least this framework lets users choose the level of assurance they are comfortable with.

It's worth noting that it is somewhat similar to Gregory Maxwell's CoinWitness idea:
https://bitcointalk.org/index.php?topic=277389.0

The difference is that in my case neither SCIP nor changes to Bitcoin protocol are required.
So it's feasible to implement it right now, and it's not terribly complex... In terms of complexity, it is what a bunch of undergrad CS students could do.

EDIT: Users who run thin clients and thus are unable to verify proofs themselves can outsource verification to a network of validators à la Ripple consensus (https://ripple.com/wiki/Consensus#Not_colluding). Of course, it is possible that these validators would collude and try to fool users. Such collusion can be detected by any entity which runs verification independently, and a cryptographic proof of wrongdoing can be produced. Due to the nature of colored coins, damage from this collusion will be limited to users who: 1) trusted the validators; 2) accepted payments while the collusion was in effect. Others will be unaffected. Thus the robustness of colored coins w.r.t. erroneous transactions might allow us to replace a complex approach like SCIP with a simpler (but less reliable) one.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: AnonyMint on February 24, 2014, 02:30:39 AM
I wonder how this system compares to Pinocchio, which has similar capabilities.

   http://research.microsoft.com/apps/pubs/default.aspx?id=180286

However, my understanding is that the performance is significantly better.

In attempting to understand span programs and not having ready access to the citation [1], I found the following explanation available online at books.google.com.

Extremal Combinatorics: With Applications in Computer Science By Stasys Jukna

Quote
Chapter 16 Span Programs

A span program for a function f(x1,...,xn) is presented as a matrix over some field, with the rows labeled by variables xi or their negations !xi (one variable can label many rows). The span program accepts an input assignment if and only if the all-1 vector can be obtained as a linear combination of the rows whose labels are satisfied by the input. The size of the span program is the number of rows in the matrix. The span program is monotone if only positive literals are used as labels of the rows, i.e. negated variables are not allowed.

The model appears to be quite strong: classical models for computing boolean functions — like switching networks or DeMorgan formulas — can be simulated by span programs without any increase in size.

16.1 The model

We describe the model more precisely.
Let F be a field. A span program over F is given by a matrix M over F with its rows labeled by literals x1,...,xn,!x1,...,!xn. For an input a = (a1,...,an) ∈ {0,1}n, let Ma denote the submatrix of M obtained by keeping those rows whose labels are satisfied by a. That is, Ma contains rows labeled by those xi for which ai = 1 and by those !xi for which ai = 0. The program M accepts the input a if the all-ones vector 1 (or any other, fixed in advance, vector) belongs to the span of the rows of Ma. A span program computes a boolean function f if it accepts exactly those inputs a where f(a) = 1. The size of the span program is the number of rows in it.

The number of columns is not counted as part of the size. It is always possible to restrict the matrix of a span program to a set of linearly independent columns without changing the function computed by the program, so it is never necessary to use more columns than rows. However, it is usually easier to design a span program with a large number of columns, many of which may be linearly dependent.
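As a concrete illustration of the model just quoted (my own sketch, not from the book), here is a toy span-program evaluator over GF(2), together with the standard two-row monotone span program computing AND(x1, x2):

```python
def in_span_gf2(rows, target):
    """Check whether `target` is a GF(2) linear combination of `rows`."""
    n = len(target)
    pivots = {}  # leading column -> stored basis row
    for row in rows:
        r = list(row)
        # Forward elimination: reduce against existing pivots, then
        # store the row under its leading column if anything is left.
        for c in range(n):
            if r[c]:
                if c in pivots:
                    r = [x ^ y for x, y in zip(r, pivots[c])]
                else:
                    pivots[c] = r
                    break
    # Reduce the target vector by the basis; in span iff it reduces to 0.
    t = list(target)
    for c in range(n):
        if t[c] and c in pivots:
            t = [x ^ y for x, y in zip(t, pivots[c])]
    return not any(t)

def span_program_accepts(labeled_rows, assignment, target):
    """labeled_rows: list of ((var_index, positive), row).  Keep the rows
    whose literal is satisfied by the 0/1 assignment, then test the span."""
    live = [row for (i, positive), row in labeled_rows
            if assignment[i] == (1 if positive else 0)]
    return in_span_gf2(live, target)

# Monotone span program computing AND(x1, x2): target vector 1 = (1, 1),
# row (1, 0) labeled by x1 and row (0, 1) labeled by x2.
msp_and = [((0, True), (1, 0)), ((1, True), (0, 1))]
accept_11 = span_program_accepts(msp_and, (1, 1), (1, 1))
accept_10 = span_program_accepts(msp_and, (1, 0), (1, 1))
```

With both variables set, the live rows (1,0) and (0,1) sum to the all-ones target, so the program accepts; with only x1 set, the single live row cannot reach (1,1) and the program rejects.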

Also.

Extremal Bipartite Graphs and Superpolynomial Lower Bounds for Monotone Span Programs, 1.3 Monotone span programs

[1] Mauricio Karchmer and Avi Wigderson. On span programs. In Structure in Complexity Theory Conference, pages 102–111, 1993.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: eldentyrell on March 22, 2014, 06:11:51 PM
Fair enough. There is ambiguity in what SPV means— e.g. go see what electrum (http://electrum.org/), which does not monitor the network, calls itself or the BitcoinJ clients which get their peers exclusively via DNS.   Can you suggest a better name for "SPV with high risk of network isolation"? :P  It would probably be a good name to have.

Block depth check (https://en.bitcoin.it/wiki/Thin_Client_Security#Block_Depth_as_a_Transaction_Validity_Check) (as opposed to block height check (https://en.bitcoin.it/wiki/Thin_Client_Security#Block_Height_as_a_Transaction_Validity_Check)).

  • Electrum is (last time I checked) just block depth check.
  • SPV is block depth check on the longest-header-chain (no block validation, header-hash-chaining and difficulty check only).
  • Full clients are block height check on the longest-valid-block-chain (transactions validated).

At the moment there's not an enormous difference between the practical security level, but (as gmaxwell mentioned) the distinctions will start to become more pronounced as the block incentive drops.
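To make the list above concrete, here is a toy sketch of what a pure header-chain check amounts to: hash-chaining plus a difficulty test, and nothing about transaction validity. The header layout and the `mine`/`spv_check` helpers are simplified inventions for illustration, not real Bitcoin structures.

```python
# Toy "SPV-style" header-chain check: verify only that headers link by
# hash and that each header hash meets the difficulty target.
import hashlib

def header_hash(prev_hash, payload, nonce):
    """Double-SHA256 of a simplified (prev_hash, payload, nonce) header."""
    data = prev_hash + payload + nonce.to_bytes(8, "big")
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def spv_check(headers, target):
    """Return the chain depth if every header links to the previous one
    and meets `target`; return 0 on any failure."""
    prev = b"\x00" * 32  # toy genesis hash
    depth = 0
    for prev_hash, payload, nonce in headers:
        if prev_hash != prev:
            return 0                       # broken hash chain
        h = header_hash(prev_hash, payload, nonce)
        if int.from_bytes(h, "big") > target:
            return 0                       # insufficient proof of work
        prev = h
        depth += 1
    return depth

def mine(prev, payload, target):
    """Grind nonces until the header hash meets the target."""
    nonce = 0
    while int.from_bytes(header_hash(prev, payload, nonce), "big") > target:
        nonce += 1
    return (prev, payload, nonce)

TARGET = 1 << 248                           # very easy toy difficulty
h1 = mine(b"\x00" * 32, b"block-1", TARGET)
h2 = mine(header_hash(*h1), b"block-2", TARGET)
depth = spv_check([h1, h2], TARGET)
```

Note that nothing here looks inside a block: a chain of valid-looking headers over invalid transactions would pass `spv_check`, which is exactly the gap between the second and third bullets above.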


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: eldentyrell on March 22, 2014, 06:14:00 PM

This is really way cool stuff!


Interesting ... are there constraints on the set of programs or is it really 'all programs'?
It's Turing complete, and for the purposes here it really is all programs.

Er, not quite.  You have to commit to an upper bound on the running time in advance; Turing machines have unbounded tapes -- and the unboundedness of the tape is really critical to most reasoning and reductions involving TMs.

This doesn't change the fact that this is a fantastic idea, or the fact that SNARKs are cool.


cannot, as currently defined, run self-modifying code

This is one place where unboundedness matters.  An unbounded machine can emulate a machine that executes self-modifying code.  Compilers and proof checkers (as in logical proofs (http://en.wikipedia.org/wiki/Calculus_of_constructions), not cryptographic proofs) are another.


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: gmaxwell on March 23, 2014, 12:58:06 PM
This is one place where unboundedness matters.  An unbounded machine can emulate a machine that executes self-modifying code.  Compilers and proof checkers (as in logical proofs (http://en.wikipedia.org/wiki/Calculus_of_constructions), not cryptographic proofs) are another.
Here is self-modifying code: http://eprint.iacr.org/2013/879

The class of techniques is universal up to the parametrized runtime limit; as you note, that technically makes it non-universal, but well, in a finite universe everything has a finite tape :).
  • Electrum is (last time I checked) just block depth check.
  • SPV is block depth check on the longest-header-chain (no block validation, header-hash-chaining and difficulty check only).
  • Full clients are block height check on the longest-valid-block-chain (transactions validated).
I'm told Electrum now connects to multiple servers, which helps. Some of what I was discussing here is SPV, but it counts on other people to go out and find, and get mined, evidence of a longer chain, which is arguably somewhat better.  SPV could— in theory— be boosted to full security compactly if you were able to have SNARKs for block validity, as the block headers could carry proofs that they were the result of validating a faithful history, though the engineering gap to verifying such large computations is pretty big.

Somewhat relevant on the subject of the SPV tying of external systems is the abstract I submitted to an upcoming bitcoin workshop (and the citations it includes): http://0bin.net/paste/vkc4NWr3BeXwgO6O#TzYCxardJ3U4lkQAoP8mv+/nDJWZZk6TZeWKrMQ1Gcc=



Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: jl2012 on August 29, 2014, 03:06:16 PM
As people are talking about scalability again, is there any new development in SCIP?


Title: Re: Really Really ultimate blockchain compression: CoinWitness
Post by: gmaxwell on August 29, 2014, 07:32:42 PM
As people are talking about scalability again, is there any new development in SCIP?
Yes, check out the recent paper on  "Scalable Zero Knowledge via Cycles of Elliptic Curves": http://eprint.iacr.org/2014/595

Which is a pretty wild technique.  Basically they managed (through an enormous amount of computation) to find a pair of pairing-compatible elliptic curves such that the number of points on one is the size of the finite field the other is defined over, and vice versa.

What this means is that in a ZKP written using curve A it's cheap to run the verifier for a ZKP written in curve B, and for a ZKP in curve B it's cheap to verify proofs for curve A.

They take this structure and write proofs of the form "Verify a ZKP in the other curve of the machine state;  Execute one more instruction on top of that state.". Then they alternate these constructions, allowing for completely linear scaling.
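The point-count condition that defines such a cycle can be seen at toy scale. The brute-force search below is my own illustration and checks only the #E1(F_p) = q and #E2(F_q) = p property; the paper's construction additionally needs both curves to be pairing-friendly, which is the hard part.

```python
# Brute-force search for toy "elliptic curve cycles": prime pairs (p, q)
# such that some curve over F_p has exactly q points and some curve over
# F_q has exactly p points.  Illustration only -- no pairing-friendliness.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def curve_order(p, a, b):
    """Count points on y^2 = x^3 + a*x + b over F_p, plus infinity."""
    sols = {}  # quadratic residue -> number of square roots mod p
    for y in range(p):
        sols[y * y % p] = sols.get(y * y % p, 0) + 1
    count = 1  # point at infinity
    for x in range(p):
        count += sols.get((x ** 3 + a * x + b) % p, 0)
    return count

def prime_orders(p):
    """All prime orders achievable by non-singular curves over F_p."""
    out = set()
    for a in range(p):
        for b in range(p):
            if (4 * a ** 3 + 27 * b ** 2) % p == 0:
                continue  # singular curve, skip
            n = curve_order(p, a, b)
            if is_prime(n):
                out.add(n)
    return out

def find_cycles(limit):
    # p > 3 so the short Weierstrass discriminant formula applies.
    primes = [p for p in range(5, limit) if is_prime(p)]
    reachable = {p: prime_orders(p) for p in primes}
    return [(p, q) for p in primes for q in reachable[p]
            if p != q and q in reachable and p in reachable[q]]

cycles = find_cycles(20)
```

Tiny two-cycles such as (5, 7) show up immediately (y^2 = x^3 + 2x + 1 has 7 points over F_5, and y^2 = x^3 + x + 1 has 5 points over F_7); finding a pairing-friendly cycle at cryptographic size required the enormous search the paper describes.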

The downside is that this magical stunt requires they use curves where the ultimate verifier (not inside a proof but on a real computer) is a fair bit slower. It also only allows for 80-bit security (the size ratios make achieving 128-bit security much harder). And it only helps for problems that work by repeated application of a universal circuit, like running TinyRAM, rather than a hard-wired application-specific circuit, which many applications would have preferred for performance.