Bitcoin Forum
Author Topic: Really Really ultimate blockchain compression: CoinWitness  (Read 34143 times)
gmaxwell (OP)
August 20, 2013, 03:43:54 PM  #21

[Warning: non-expert blabber ahead:]
I believe that some of the fundamental improvements they made to prevent circuit blow-up should also generally make adding opcodes easier. One of the big reasons you get blow-up in conversion to circuits is that you need a MUX hierarchy to get the data from all the places it could have come from to all the places it may need to go, and that makes adding more opcodes (or memory) costly.  They address this through the use of non-deterministic advice.

The idea is pretty neat: you construct the circuit with additional inputs which are effectively outside of the proof, but which do not break soundness because there is no value they can take which would cause false acceptance.  The prover solves for these values and fills them in, and that allows the system to do things like use rearrangeably non-blocking switch networks instead of huge mux cascades (which requires the prover to solve a routing problem, but replaces something with quadratic scaling with something that has log scaling).  An example they give in their paper is the divider: implementing a divide circuit would take a lot of gates (as is the case in digital logic too); instead they do something along the lines of implementing a multiply circuit which receives non-deterministic advice (the solution of the division), multiplies it with one of the inputs, and checks it against the other.
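In C terms, the check the circuit performs amounts to something like this (an illustrative sketch of the advice idea, not the paper's actual circuit):
Code:
#include <stdint.h>

/* Instead of a division circuit, the prover supplies the quotient q and
   remainder r as non-deterministic advice; the circuit only multiplies
   and compares. No choice of q and r can make it accept a wrong result. */
int check_div_advice(uint32_t a, uint32_t b, uint32_t q, uint32_t r)
{
    /* a == q*b + r with 0 <= r < b uniquely determines q = a/b */
    return b != 0 && r < b && (uint64_t)q * b + r == (uint64_t)a;
}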

The obvious opcodes which would be most useful for us are the EC field operations over our curve.  I might say parts of SHA-2, but I'm not sure how much more efficient a direct implementation of the SHA-2 round function would be compared to one compiled from C. At least in digital logic adders use a lot of gates, but they're the same adders TinyRAM would use with a 32-bit word size, so...

Mike Hearn
August 20, 2013, 07:37:35 PM  #23

Quote
The obvious opcodes which would be most useful for us are the EC field operations over our curve.  I might say parts of SHA-2, but I'm not sure how much more efficient a direct implementation of the SHA-2 round function would be compared to one compiled from C. At least in digital logic adders use a lot of gates, but they're the same adders TinyRAM would use with a 32-bit word size, so...

They say that adders require 2W gates, so pretty cheap. But then they also say that they can do bitwise operations directly on the binary form, as in regular boolean circuits. Although they don't mention shifts, I guess those aren't so different. So I'd intuitively guess that a SHA-256 implementation isn't going to be too expensive.
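For reference, a single SHA-256 compression step is nothing but 32-bit adds, rotates, and bitwise ops (a sketch; message schedule and round constants omitted):
Code:
#include <stdint.h>

static uint32_t rotr(uint32_t x, int n) { return (x >> n) | (x << (32 - n)); }

/* One SHA-256 compression step on state s[0..7], with round constant k
   and schedule word w: only adders, rotates, and bitwise ops. */
static void sha256_step(uint32_t s[8], uint32_t k, uint32_t w)
{
    uint32_t S1  = rotr(s[4], 6) ^ rotr(s[4], 11) ^ rotr(s[4], 25);
    uint32_t ch  = (s[4] & s[5]) ^ (~s[4] & s[6]);
    uint32_t t1  = s[7] + S1 + ch + k + w;
    uint32_t S0  = rotr(s[0], 2) ^ rotr(s[0], 13) ^ rotr(s[0], 22);
    uint32_t maj = (s[0] & s[1]) ^ (s[0] & s[2]) ^ (s[1] & s[2]);
    s[7] = s[6]; s[6] = s[5]; s[5] = s[4]; s[4] = s[3] + t1;
    s[3] = s[2]; s[2] = s[1]; s[1] = s[0]; s[0] = t1 + S0 + maj;
}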

I guess for RSA the fact that their "native" arithmetic works in large prime fields might be useful - but I'm really out of my depth at that point.
2112
August 23, 2013, 04:45:45 PM  #24

Quote
In SNARKs for C: Verifying Program Executions Succinctly and in Zero Knowledge (referred to as SCIP below), Eli Ben-Sasson et al. describe their work on highly efficient non-interactive proofs with zero-knowledge for the faithful execution of programs written in C. Eli also presented at the Bitcoin conference.
Thank you very much. I've spent some time reading and comprehending this paper. I have the following questions about the associated area of research:

1) TinyRAM has a requirement that the memory is W bits wide and has exactly 2^W cells. Is this a convenience for the sake of simplifying the presentation or is this a necessary condition for the validity of the proof?

2) TinyRAM has exactly one deterministic input tape that can only be read once and in the forward direction only. I would love to hear from somebody about some research that would make TinyRAM Turing-equivalent in some way: a writable and rewindable tape, a standard-Turing rewritable tape, a stack (one-ended tape that is writable forward and readable backward), etc. Again my question is: was that done for the purpose of compactness of the derivation or do writable tapes break the derivation in some way?

Without knowing answers to the above questions, my thinking is: the CoinWitness C program will have to contain code of the form:
Code:
if ((p = malloc(n)) != NULL) {
   /* real computation goes here */
} else {
   reject; /* or: accept false; */
}
While the above code can stay constant, the verification proofs for it will have to be published for multiple values of W. In other words, either:

A) coins can be made to disappear when the blockchain becomes long enough;
B) proofs will need to be periodically recompiled and recomputed for a wider W.

Therefore I think this breaks the assumptions about the computational complexity of using SCIP for Bitcoin.

Is there anyone here who lives close to Haifa, Massachusetts or Tel Aviv and could pose those questions to the authors and receive answers different than "I'll get back to you on that."?

Thanks in advance.

gmaxwell (OP)
August 23, 2013, 06:14:57 PM  #25

Quote
1) TinyRAM has a requirement that the memory is W bits wide and has exactly 2^W cells. Is this a convenience for the sake of simplifying the presentation or is this a necessary condition for the validity of the proof?
I thought this was a simplification just to avoid having additional logic for out-of-bounds addresses. Though if so, I don't see why it couldn't have a parameter m such that m < W and memory = 2^m, where addresses with the W-m high bits set alias other low memory, as has been the case for many real architectures. I can try to pass the question on to the authors.
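Something like this, in C terms (a rough sketch with W = 32; the parameter m is my hypothetical, not anything from the paper):
Code:
#include <stdint.h>

/* 2^m cells of memory on a W-bit machine: ignore the W-m high address
   bits, so out-of-range addresses simply alias low memory. */
uint32_t mem_load(const uint32_t *mem, uint32_t addr, unsigned m)
{
    uint32_t mask = (m < 32) ? ((1u << m) - 1u) : UINT32_MAX;
    return mem[addr & mask];
}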

Quote
2) TinyRAM has exactly one deterministic input tape that can only be read once and in the forward direction only. I would love to hear from somebody about some research that would make TinyRAM Turing-equivalent in some way: a writable and rewindable tape, a standard-Turing rewritable tape, a stack (one-ended tape that is writable forward and readable backward), etc. Again my question is: was that done for the purpose of compactness of the derivation or do writable tapes break the derivation in some way?
The way it's described, the tape is forced to be read into memory at startup, so you might as well understand it as not existing. TinyRAM is already Turing-complete (subject to memory and operation bounds) without any tapes at all because it is a random-access machine. Making it more Turing-machine-like wouldn't serve the purpose of making general-purpose software run better on it.

I guess what you're really looking for here is a way to achieve large input without large memory state under the consistency check, for problems where the input size is much, much larger than the ongoing random-access state. One way to do that may be to operate in a hierarchical manner.  This is the same kind of challenge we'd have for validating the blockchain using SCIP— we have 12 gigabytes of data to verify now, but only a final state space of about 250 MB.

Quote
Without knowing answers to the above questions, my thinking is: the CoinWitness C program will have to contain code of the form:
if ((p = malloc(n)) != NULL) {
[...]
A) coins can be made to disappear when the blockchain becomes long enough;
B) proofs will need to be periodically recompiled and recomputed for a wider W.
It's somewhat worse than that.  When computing the verification key (the thing that would get stuffed into the scriptpubkey) you must have an operations bound set in advance. This means that you must specify the upper size of the transcript you are validating... and the coins must be returned to the bitcoin chain before you reach that limit, or they become unrecoverable.  I don't actually think this is too grave a limit, so long as the bound can be made reasonably high in practice, and perhaps it even solves some systemic risks as a side effect (if transactions all move off the blockchain for unbounded time, what will provide transaction fees to pay for security?).
2112
August 24, 2013, 12:15:36 AM  #26

Quote
The way it's described, the tape is forced to be read into memory at startup, so you might as well understand it as not existing. TinyRAM is already Turing-complete (subject to memory and operation bounds) without any tapes at all because it is a random-access machine. Making it more Turing-machine-like wouldn't serve the purpose of making general-purpose software run better on it.

I guess what you're really looking for here is a way to achieve large input without large memory state under the consistency check, for problems where the input size is much, much larger than the ongoing random-access state. One way to do that may be to operate in a hierarchical manner.  This is the same kind of challenge we'd have for validating the blockchain using SCIP— we have 12 gigabytes of data to verify now, but only a final state space of about 250 MB.
Thanks for your reply. The way the paper is written, this is a very unclear point. Check out the "Looking ahead" section (page 11 in my copy):

Quote
The two main challenges are implementing those functions that must be written directly in the underlying machine language, and supporting (or reasonably approximating) functionality that extends into the program runtime environment, such as file I/O, process management, inter-process communication (IPC), and other system services.

It isn't clear whether they view that as a major research area or as implementation-detail drudgery. The answers could be:

X) sorry, editing error, no file I/O for you.
Y) we are describing that in the textbook form of our paper, which has 544 pages.

Perhaps my use of Turing-equivalency wasn't the best choice of words. But it stayed within the spirit of the paper, which somewhat freely mixes asymptotic complexity (big-O notation: O(something)) and measurable complexity (const*F(|something|), where |·| denotes some form of measure). Yes, I was looking for a way of writing the TinyRAM program that doesn't start with slurping in the entire deterministic input tape.

I fully agree with you that TinyRAM is Turing-complete and will not help in moving any problem between, say, P and NP. But this paper is focused on "succinct", "efficient" implementations, and then various results from the areas of http://en.wikipedia.org/wiki/Multitape_Turing_machine and http://en.wikipedia.org/wiki/Turing_machine_equivalents are really helpful when establishing the most accurate complexity bounds for particular problems.

Thanks again for any insight you may already have or could obtain from the authors.

socrates1024 (Andrew Miller)
August 26, 2013, 02:32:28 AM  #27

Quote
The idea is pretty neat: you construct the circuit with additional inputs which are effectively outside of the proof, but which do not break soundness because there is no value they can take which would cause false acceptance.  The prover solves for these values and fills them in, and that allows the system to do things like use rearrangeably non-blocking switch networks instead of huge mux cascades (which requires the prover to solve a routing problem, but replaces something with quadratic scaling with something that has log scaling).  An example they give in their paper is the divider: implementing a divide circuit would take a lot of gates (as is the case in digital logic too); instead they do something along the lines of implementing a multiply circuit which receives non-deterministic advice (the solution of the division), multiplies it with one of the inputs, and checks it against the other.

My favorite kind of non-deterministic advice to use here is a Merkle tree branch. The circuit then is just a Merkle tree branch checker. This is basically the standard way in crypto to get access to an array of memory without having a switching gate for every element in the array. This would of course work with any of our existing merkleized-UTXO proposals too.
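Roughly, the whole circuit reduces to a branch checker like this (a sketch; hash2 stands in for whatever 2-to-1 hash the tree uses):
Code:
#include <stdint.h>
#include <string.h>

/* Stand-in for the tree's 2-to-1 compression function, e.g. SHA-256. */
void hash2(const uint8_t left[32], const uint8_t right[32], uint8_t out[32]);

/* The branch (sibling hashes plus left/right directions) is the
   non-deterministic advice; the circuit hashes its way up and compares
   against the committed root instead of muxing over the whole array. */
int merkle_check(const uint8_t leaf[32],
                 const uint8_t (*sibling)[32], const uint8_t *sibling_is_left,
                 size_t depth, const uint8_t root[32])
{
    uint8_t cur[32], next[32];
    memcpy(cur, leaf, 32);
    for (size_t i = 0; i < depth; i++) {
        if (sibling_is_left[i]) hash2(sibling[i], cur, next);
        else                    hash2(cur, sibling[i], next);
        memcpy(cur, next, 32);
    }
    return memcmp(cur, root, 32) == 0;
}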

Also, this would simplify things like P2Ptradex involving validation of other chains... BitcoinA produces proofs that its chain is correct. If a user in BitcoinB makes a transaction that defines the parameters for BitcoinA, then the miners/validators in BitcoinB can check it in a single step.

jl2012
August 26, 2013, 03:16:42 AM  #28

Is it possible to run the SCIP calculation on a GPU? Would it be more efficient than a CPU? We may even have SCIP ASICs in the future.

However, I think the blockchain-based system does not really improve scalability a lot, because it just shifts the burden to the alt-chain. To make it more scalable, the alt-chain must be less secure than bitcoin, e.g. using smaller key sizes and hash lengths (and therefore smaller signatures), or limited coin life (ancient UTXOs are pruned). This might be good enough for small-value transactions.

For the oracle-based system, there is a risk of shutdown of the oracle, which would lock all BTC in limbo. Although we may use M-of-N oracles, the risk is still there. Instead of signing {HC,SHB}, I wonder if the oracle might sign {HC,SHB,SHBx}, where SHBx specifies a time-locked transaction sending the bitcoin to Bob's address (the "emergency transaction", or "ET"). When passing the off-blockchain bitcoin to Bob, Alice has to provide the details of all ETs in the chain, and Bob will check whether his ET has an earlier locktime than all the previous ETs. Bob has to spend or redeem the off-chain coin before the locktime of Alice's ET, or Alice will be able to redeem the coin on the blockchain. If this is possible, we could significantly reduce the risk of cheating by requiring a bigger N in an M-of-N oracle system (or simply use M-of-M).
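Bob's check would be simple (a sketch with a made-up ET structure, just to illustrate the rule):
Code:
#include <stddef.h>
#include <stdint.h>

typedef struct { uint32_t locktime; } EmergencyTx;  /* one ET per past owner */

/* Bob accepts the coin only if his ET unlocks strictly before every
   previous owner's ET, so he can always unwind first. */
int et_chain_ok(const EmergencyTx *prev, size_t n, uint32_t my_locktime)
{
    for (size_t i = 0; i < n; i++)
        if (my_locktime >= prev[i].locktime)
            return 0;  /* some earlier owner could redeem before Bob */
    return 1;
}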

gmaxwell (OP)
August 26, 2013, 03:55:13 AM  #29

Quote
will lock all BTC in limbo. Although we may use M-of-N oracles, the risk is still there.
It would lock the BTC using that (set of) oracles in limbo, but not all BTC or even all CoinWitness BTC.

In general: the best is having no points of failure,  but failing that it's better to have many things required to fail (M of N), ecosystem diversity (so one set of M of N failing only hurts a fraction of the users), and good fallbacks (like timeouts, indeed).  One tricky part of things like timeouts is that they can create incentives for producing failures where none existed before.

E.g. if after a timeout anyone with a valid transcript showing them as the last owner can redeem the coin then there would be an incentive for someone who previously held a coin to attack the oracles and cause a timeout, allowing them to attempt to race to obtain the coin.

One possibility, borrowing from my whimsical CoinCovenants thread, would be to have the CoinWitness program designed to enforce that a timeout recovery pays the output to another scriptpubkey which can either be redeemed by you after a second timeout OR can be redeemed by another timeout recovery which presents a _longer_ transcript.  In this case a timeout unwind would be very likely to go to the person with the longest transcript, and that might mostly remove the attack incentive the timeout creates. E.g. if you try to cheat and present a truncated transcript in order to steal some coins, before the coins are finally released to you someone with a longer transcript comes along and takes them back.
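In pseudo-C, the unwind rule amounts to something like this (a sketch of the intent, not a concrete script design):
Code:
#include <stddef.h>

typedef struct {
    size_t   transcript_len;  /* length of the verified ownership transcript */
    unsigned final_height;    /* height at which the pending claim matures   */
} PendingClaim;

/* During the second timeout window a pending redemption can be displaced
   only by a strictly longer valid transcript, so presenting a truncated
   transcript just invites the true last owner to take the coins back. */
int try_displace(PendingClaim *p, size_t challenger_len,
                 unsigned height, unsigned delay)
{
    if (height >= p->final_height) return 0;       /* claim already final    */
    if (challenger_len <= p->transcript_len) return 0;
    p->transcript_len = challenger_len;            /* longer transcript wins */
    p->final_height   = height + delay;            /* restart the timeout    */
    return 1;
}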

So there is some risk balancing required...  My thinking here is that no single solution will be best for everyone and all uses, but we don't have to commit to a single usage, there can be diversity.

Quote
To make it more scalable, the alt-chain must be less secure than bitcoin, e.g. using smaller key size and hash length (therefore smaller signature), limited coin life (ancient UTXOs are pruned).
You miss one major way in which the "alt" can be less secure:  it can just be used by fewer users. Bitcoin's main lack of scalability comes from its global scope. Fewer users is a reduction in security, but many transactions— low value ones, for example— don't need all the security Bitcoin provides... Bitcoin's security needs to be good enough for the entire Bitcoin economy.

In any case, I just gave two distinct examples to make the point that I wasn't proposing a specific off-chain scheme, but a mechanism under which almost any cryptographically provable one could be bound to Bitcoin in a way that didn't create counterparty risk at the interface. You can think of an enormous number of possible alternatives with different security/scaling tradeoffs.

Quote
Is it possible to run the SCIP calculation on GPU? Would it be more efficient than CPU? We may even have SCIP-ASIC in the future.
Not yet, just from an implementation perspective, AFAIK, but the work is very parallel so it may be quite GPU-friendly. Certainly special hardware could be designed to accelerate the field operations and make the work much faster.
jl2012
August 26, 2013, 01:02:16 PM  #30

Quote
Such an oracle could be .... secured by a bounty txout that you can redeem if and only if you can publish a proof showing a replay, etc.

I don't think this works. Before doing evil things, the oracle operator could generate a replay proof and redeem the bounty for himself.

gmaxwell (OP)
August 26, 2013, 02:16:55 PM  #31

Quote
Such an oracle could be .... secured by a bounty txout that you can redeem if and only if you can publish a proof showing a replay, etc.
Quote
I don't think this works. Before doing evil things, the oracle operator could generate a replay proof and redeem the bounty for himself.
The purpose of the replay bounty is to make sure that a replaying oracle gets publicly disclosed. I would normally expect that to be coupled with a fidelity bond as well.
jl2012
August 27, 2013, 04:01:26 PM  #32

Quote
Blockchain based
Alice has a bitcoin which is represented by a colored coin in some other specific blockchain, perhaps a geographic-specific blockchain.
Alice passes it to Bob by transacting in the normal way.
Bob can form the verifiable transcript by grabbing every transaction along the colored coin's path along with the SPV fragments connecting them to the chain and some additional blockchain headers to show the transactions are adequately buried.

What prevents Alice from forking the alt-chain and double-spending after she has sent the coin to Bob? When talking about SPV, one needs to know the height of the longest chain. Since the SCIP program won't be able to learn this, the attacker doesn't even need to outpace the honest chain.

jtimon
August 28, 2013, 05:26:14 PM  #33

Quote
Blockchain based
Alice has a bitcoin which is represented by a colored coin in some other specific blockchain, perhaps a geographic-specific blockchain.
Alice passes it to Bob by transacting in the normal way.
Bob can form the verifiable transcript by grabbing every transaction along the colored coin's path along with the SPV fragments connecting them to the chain and some additional blockchain headers to show the transactions are adequately buried.

What prevents Alice from forking the alt-chain and double-spending after she has sent the coin to Bob? When talking about SPV, one needs to know the height of the longest chain. Since the SCIP program won't be able to learn this, the attacker doesn't even need to outpace the honest chain.

That's why whatever is executed and validated by the chain must be reproducible by other miners. With SCIP they don't need to repeat the computation because they can trust the computation's SCIP proof. But the source to be executed must be public to all miners, and all its inputs must come from the public chain as well.

But most of the difficulties encountered come from the unnecessary requirement that regular bitcoins be 100% interchangeable with an off-chain asset which obviously is not bitcoin.

Things are much easier if we focus SCIP's potential on the off-chain side of the scheme without requiring alterations to the public chain (which can also improve with SCIP, just in different ways).

Assuming we have private chains in which people can issue off-chain assets that can be atomically traded for public/in-chain assets (like @maaku and I propose with Freimarkets), an off-chain asset backed with bitcoin seems the best solution to me. The accountant (the operator of the private chain) can issue accBTC on his private chain. Because the public chain is public, we can make a periodical audit process take data from it, and the accountant can use his bitcoin wallet to prove the unspent outputs he owns. If this audit process uses SCIP, privacy doesn't get compromised and we can guarantee that total issued accBTC <= BTC reserves at all times. The accountant could publish chain blocks (signed by him as a substitute for proof of work) to prove to his users that he's not allowing double-spends of the public assets. Or... he could use SCIP to prove exactly the same thing without publishing all transactions!!
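The audited invariant itself is trivial; what SCIP would buy is proving it without revealing the ledger (a sketch with a made-up data layout):
Code:
#include <stdint.h>
#include <stddef.h>

/* The statement the SCIP proof would attest to: total accBTC issued on
   the private chain never exceeds the BTC the accountant proves he holds
   as unspent outputs on the public chain. */
int audit_ok(const uint64_t *issued, size_t n_accounts,
             uint64_t proven_btc_reserves)
{
    uint64_t total = 0;
    for (size_t i = 0; i < n_accounts; i++)
        total += issued[i];                /* accBTC in circulation */
    return total <= proven_btc_reserves;
}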

I am really excited about SCIP and what it means for bitcoin's scalability, but it may not be as magical as we desire. It's magical enough for me, though. (And as said, we've not talked about SCIP-based productive proof-of-work yet...)

AnonyMint
September 01, 2013, 01:16:05 AM (last edit: 01:27:04 AM)  #34

Quote
How does this compare to CoinWitness?
Quote
CoinWitness is even more rocket-sciency than Zerocoin; it also shares many of the weaknesses as a privacy-improver: novel crypto, computational cost, and the huge point of requiring a soft fork and not being available today. It may have some scaling advantages if it is used as more than just a privacy tool. But it really is overkill for this problem, and won't be available anytime real soon.

After further thought, off-chain transactions with CoinWitness are insecure, i.e. bringing off-chain transactions back onto the blockchain at par is insecure. How can we be sure there wasn't a double-spend?

This is analogous to trying to operate multiple blockchains with a fixed value between them.

If coins are allowed to be moved between blockchains at par (no market exchange variance), and the blockchains don't exchange coins at par with any blockchains that don't adhere, the problem remains that a 50% attack on the blockchain with the lowest PoW difficulty will infect the blockchains with higher PoW difficulty with its ill effects.

Quote
After further thought, off-chain transactions with CoinWitness are insecure, i.e. bringing off-chain transactions back onto the blockchain at par is insecure. How can we be sure there wasn't a double-spend?
Quote
This is explained in the CoinWitness post; it's as secure as you wish to make it. It's also offtopic. Please stay on-topic, and if you want to talk about that please post in that thread.

To make my point more obvious, note that it is impossible to determine whether the fork being merged back into the dominant blockchain is the only or correct fork. Sorry, but I think this is unarguable.

Edit: I see bytemaster agrees with me.

gmaxwell (OP)
September 01, 2013, 05:51:33 AM (last edit: 09:43:58 AM)  #35

Let me see if I can make it more clear for you:

I proposed two examples to demonstrate that this is a _generic_ concept which can bind arbitrary machine-verifiable off-chain systems.  The security of any particular implementation depends on its specifics and is up to the users who choose to use a particular system.

Or as I said,
Quote
I provided two examples here to make the point that the idea is completely general. Any other system with the verifiable property, e.g. OpenTransactions, complicated ones involving voting by all participants, or even trivial ones that don't have double-spending prevention, could be used.

You're speaking specifically about the blockchain consensus example, which I specifically noted only has SPV security:

Quote
The advantage from using Bitcoin itself: by making one of the public inputs to the witness be a hash of a block which is required to be in the chain, you can increase the security of the "off-chain" coin from SPV+headers fragment to full security.

And I'd appreciate it if you'd go edit your post which claims that I was arguing otherwise there.

Also, as I noted, in the case where you do only have SPV security, the users can choose the security parameters, presumably according to the coin values in question, their risk tolerance, and the consensus stability of the other chain:
Quote
along with the SPV fragments connecting them to the chain and some additional blockchain headers to show the transactions are adequately buried.

The trade-off here is specific to independent blockchain consensus systems, and not relevant to off-chain systems which have stronger consistency models.

If you wanted to go further, you could probably improve things in the blockchain case with a one-way consensus dependency: you merge-mine the second chain but, instead of the namecoin-style ships-in-the-night, require that the alternative system's nodes also validate the Bitcoin chain and that membership in the Bitcoin chain dominate its consensus. That is, there is a longest chain of alt-chain block headers which were also successful Bitcoin blocks on the current best bitcoin chain (there may additionally have been alt blocks between them), and the consensus chain for the altchain must contain at least these blocks. By doing this, a consensus proof of the altchain can be constructed relative to the Bitcoin consensus, at least for older portions of that altchain.

But I freely admit I haven't thought about slaved consensus much: I think using a blockchain as the alternative system is by far the least interesting case, because blockchains would inherently share Bitcoin's limitations, and I think complementary approaches are more interesting... and because this is really more in the domain of altchain design than anything else: the notion of slaved consensus is interesting and potentially useful independent of the generic binding idea presented in this thread.

(Though perhaps that's a question that deserves more consideration than I've granted it: if you are to have one-way consensus slaving with many secondary systems, it's important that the parent chain be very cheap to verify, since all must verify it.)
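A very rough sketch of the containment rule, with stand-in 32-byte block ids (illustrative, not a spec):
Code:
#include <stdint.h>
#include <string.h>

/* A candidate alt chain is acceptable only if it contains, in order, every
   alt header that was also a successful Bitcoin block on the current best
   Bitcoin chain (the "anchored" sequence). */
int contains_anchored(const uint8_t (*chain)[32], size_t chain_len,
                      const uint8_t (*anchored)[32], size_t anchored_len)
{
    size_t j = 0;
    for (size_t i = 0; i < chain_len && j < anchored_len; i++)
        if (memcmp(chain[i], anchored[j], 32) == 0)
            j++;                    /* matched the next required anchor */
    return j == anchored_len;       /* all anchored blocks present, in order */
}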

If you really want to go nuts you could force those redeems to go into a temporarily timelocked output which can be reset by someone providing a longer chain proof before the time runs out (see the coin covenant thread).
jl2012
September 01, 2013, 07:20:34 AM  #36

Quote
You're speaking specifically about the blockchain consensus example, which I specifically noted only has SPV security:

As I pointed out earlier, it is worse than SPV security. SPV requires a user to monitor the network, look for the longest PoW chain, and verify whether a transaction is adequately buried in the longest chain. In your blockchain-based SCIP example, however, bitcoin miners will not monitor the alt-chain and won't know its current height. Bitcoin miners will accept the SCIP proof as long as the transaction is adequately buried, but not necessarily in the longest chain. Therefore it is (much) worse than the SPV model.

On the other hand, if all bitcoin nodes are required to validate the alt-chain, it is basically equivalent to my aux-block proposal: https://bitcointalk.org/index.php?topic=283746.0

gmaxwell (OP)
September 01, 2013, 07:25:46 AM (last edit: 09:42:08 AM)  #37

Quote
SPV requires a user to monitor the network, look for the longest PoW chain, and verify whether a transaction is adequately buried in the longest chain.
Fair enough. There is ambiguity in what SPV means— e.g. go see what Electrum, which does not monitor the network, calls itself, or the BitcoinJ clients which get their peers exclusively via DNS.  Can you suggest a better name for "SPV with high risk of network isolation"? :P  It would probably be a good name to have.

jl2012
September 01, 2013, 11:16:46 AM  #38

Quote
SPV requires a user to monitor the network, look for the longest PoW chain, and verify whether a transaction is adequately buried in the longest chain.
Quote
Fair enough. There is ambiguity in what SPV means— e.g. go see what Electrum, which does not monitor the network, calls itself, or the BitcoinJ clients which get their peers exclusively via DNS.  Can you suggest a better name for "SPV with high risk of network isolation"? :P  It would probably be a good name to have.



SPV, as defined by Satoshi, "needs to keep a copy of the block headers of the longest proof-of-work chain". There is no ambiguity on this point.

"network isolation" is exactly the term I want to use. I just find a possible solution to mitigate this problem. For a low value transaction (say 0.1BTC), one may believe it is economically prohibitive to do a certain amount of PoW (say 4000PHash, which should give 250BTC reward if one mines honestly) just to steal it. So the SCIP script will require the tx to be buried by at least 4000PHash of work.

Thought?

gmaxwell (OP)
September 01, 2013, 11:27:22 AM  #39

"network isolation" is exactly the term I want to use. I just find a possible solution to mitigate this problem. For a low value transaction (say 0.1BTC), one may believe it is economically prohibitive to do a certain amount of PoW (say 4000PHash, which should give 250BTC reward if one mines honestly) just to steal it. So the SCIP script will require the tx to be buried by at least 4000PHash of work.
Thought?
It's less obvious that it works once the subsidy is small. Proofs of transaction fees in our current protocol are also enormous (every txn in the block, then every txn referenced as an input, and tree fragments connecting to potentially thousands of transactions in distinct blocks), but without them I could make fake blocks that appear to have a kajillion in transaction fees.

But in general, sure, there is some level of PoW which probably provides adequate security for some transaction value.

Did you see what I was suggesting with respect to making spends provisional? E.g. you provide a header fragment to redeem; the txout moves to a new txout which is released to you after some span of time— or, alternatively, can be updated by someone else providing a longer conflicting chain of headers.  The idea is that rather than you going out to seek the chain's information, some interested party (e.g. the real owner of the coin) brings it to the blockchain.
jl2012
September 01, 2013, 02:49:19 PM  #40

Quote
Did you see what I was suggesting with respect to making spends provisional? E.g. you provide a header fragment to redeem; the txout moves to a new txout which is released to you after some span of time— or, alternatively, can be updated by someone else providing a longer conflicting chain of headers.  The idea is that rather than you going out to seek the chain's information, some interested party (e.g. the real owner of the coin) brings it to the blockchain.

Sounds reasonable. Coupling it with a PoW requirement makes it economically prohibitive to steal or DoS.


Quote
One possibility, borrowing from my whimsical CoinCovenants thread, would be to have the CoinWitness program designed to enforce that a timeout recovery pays the output to another scriptpubkey which can either be redeemed by you after a second timeout OR can be redeemed by another timeout recovery which presents a _longer_ transcript.  In this case a timeout unwind would be very likely to go to the person with the longest transcript, and that might mostly remove the attack incentive the timeout creates. E.g. if you try to cheat and present a truncated transcript in order to steal some coins, before the coins are finally released to you someone with a longer transcript comes along and takes them back.

This will allow any previous owners to DoS the latest owner by forcing him to redeem the coin on the blockchain.
