gmaxwell
Moderator
Legendary
Offline
Activity: 4298
Merit: 8818
|
|
September 03, 2014, 10:02:54 PM |
|
Set a maximum total memory for the stack and a script that exceeds that value automatically fails.
Sure, but this requires a consistent way of measuring it and enforcing it, and being sure that no operation has unbounded intermediate state. As Bitcoin was originally written it was thought that it had precisely that: there was a limit on the number of pushes, and a limit on the number of operations. This very clearly makes the stack size "limited", but because of operations that allowed exponential growth, the limit was useless. Making the limits effective isn't hard for any fundamental reason, as I keep pointing out; "just have a limit" is easy to say, but being _sure_ that the limit does what you expect is much harder than it seems.
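The exponential-growth point can be sketched with a toy interpreter (illustrative Python, not real consensus code; the limit values are made up): each DUP/CAT pair doubles one stack element, so a script that stays comfortably inside a push limit and an opcode limit still produces a megabyte from a single byte.

```python
# Toy model of a stack machine with a push limit and an opcode limit,
# in the spirit of early Bitcoin Script. OP_DUP + OP_CAT lets total
# stack memory grow exponentially in the opcode count, so both limits
# hold while memory explodes. Limit values here are invented.

def run(ops, push_limit=10, op_limit=40):
    stack = [b"A"]          # one small initial push
    pushes, executed = 1, 0
    for op in ops:
        executed += 1
        assert executed <= op_limit and pushes <= push_limit
        if op == "DUP":
            stack.append(stack[-1])
        elif op == "CAT":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return sum(len(e) for e in stack)   # total stack bytes

# 20 DUP/CAT pairs (40 opcodes, 1 push) -> one element of 2**20 bytes.
size = run(["DUP", "CAT"] * 20)
print(size)  # 1048576
```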
|
|
|
|
Taras
Legendary
Offline
Activity: 1386
Merit: 1053
Please do not PM me loan requests!
|
|
September 03, 2014, 10:53:28 PM |
|
Well, there can only be one OP_CHECKSIG... Why not make that kind of limit for OP_CAT? All the string functions, in fact, should be enabled (even if they are "expensive words" like checksig). What if there was a minimum base transaction fee (rendering a tx with an insufficient base fee invalid) that would be incremented by a certain amount for every OP_CAT in the transaction?
|
|
|
|
gmaxwell
Moderator
Legendary
Offline
Activity: 4298
Merit: 8818
|
|
September 03, 2014, 11:11:18 PM Last edit: September 03, 2014, 11:44:53 PM by gmaxwell |
|
Well, there can only be one OP_CHECKSIG...
That's not true. Why not make that kind of limit for OP_CAT? All the string functions, in fact, should be enabled (even if they are "expensive words" like checksig). What if there was a minimum base transaction fee (rendering a tx with an insufficient base fee invalid) that would be incremented by a certain amount for every OP_CAT in the transaction?
No one is saying that things like OP_CAT cannot be done, or that they're bad or whatever, but making them not a danger requires careful work. Case in point: what you're suggesting is obviously broken. I write a transaction which pays 100x that (presumably nominal) fee and I crash _EVERY BITCOIN SYSTEM ON THE NETWORK_, and I don't really have to pay the fee at all, because a transaction needing a zillion yottabytes of RAM to verify will never be mined, so I'll be free to spend the coins later. Congrats, you added a severe whole-network-crashing vulnerability to hypothetical-bitcoin. You should also remove "enabled" from your dictionary: that those opcodes were "disabled" doesn't mean they can just be re-enabled. They're completely gone; adding them back is precisely equivalent to adding something totally novel in terms of the required deployment procedure.
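The mismatch can be put in numbers (illustrative arithmetic only; the fee figure is invented and there is no such real policy): a per-OP_CAT fee grows linearly in the opcode count while DUP/CAT memory grows exponentially, so the fee per byte of attacker-imposed memory collapses toward zero.

```python
# A flat per-OP_CAT fee cannot price memory: fee is linear in the
# opcode count, memory is exponential. Fee value is hypothetical.
fee_per_cat = 1000                 # made-up satoshis per OP_CAT
for n_cats in (10, 20, 30):
    fee = n_cats * fee_per_cat     # linear in opcode count
    mem = 2 ** n_cats              # bytes after n_cats DUP/CAT pairs
    print(n_cats, fee, mem, fee / mem)
```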
|
|
|
|
andytoshi
Full Member
Offline
Activity: 179
Merit: 151
-
|
|
September 03, 2014, 11:17:16 PM |
|
Well, there can only be one OP_CHECKSIG...
That is false. You could even do threshold signatures with multiple OP_CHECKSIGs if you wanted to be a goof. This requires setting a size for each data type. I think it is basically integer, byte array and boolean (which is an int). Script is not typed; there is only one type, "raw byte data", which is interpreted in various ways by the various opcodes. (This actually makes accounting quite easy.) And today you are required to match the byte representation of all stack objects exactly, since OP_EQUAL requires it, so arguably a total stack size limit would be an easy thing to describe precisely. My biggest meta-wish for a script 2.0 would be ease of analysis; in particular I would like separate types (uint, bool, bytedata) and explicit casts between them. I spent quite a bit of time working on script satisfiability analysis recently, and it seems the best way to describe abstract stack elements is as a bundle of complementary bounds on numeric values, boolean values, length, etc. Bitcoin-ruby uses a typed script and has each opcode do casts; the idea makes me smile, but for consensus code it is really not appropriate, sadly. They plan to one day replace it with a more-or-less direct port of bitcoind's script parser.
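The "total stack size limit" idea is simple to state precisely because stack elements are raw bytes. A sketch (not consensus code; the cap value is made up): after every opcode, sum the byte lengths of all stack elements and fail the script past the cap.

```python
# Sketch of a precise total-stack-bytes limit: run after every opcode.
# The cap is a hypothetical value, not any real consensus rule.

MAX_STACK_BYTES = 520_000

def check_stack(stack):
    if sum(len(e) for e in stack) > MAX_STACK_BYTES:
        raise ValueError("script failed: stack memory limit exceeded")

stack = [b"\x00" * 520 for _ in range(1000)]   # exactly at the cap
check_stack(stack)                             # passes
stack.append(b"\x00")                          # one byte over
try:
    check_stack(stack)
except ValueError:
    print("script rejected")
```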
|
|
|
|
gmaxwell
Moderator
Legendary
Offline
Activity: 4298
Merit: 8818
|
|
September 04, 2014, 12:44:19 AM |
|
Typed data on the stack makes writing correct code much harder; I can't say that I've ever wished for that. In general I prefer the stack to be "bytes" and everything "converts" them to the right type. Yes, additional constraints would make things like your provably unspendable code easier, but they do so by adding more corner cases that an implementation must get right. I'm also a fan of analyzability, but that always has to take second seat to consensus safety.
|
|
|
|
Taras
Legendary
Offline
Activity: 1386
Merit: 1053
Please do not PM me loan requests!
|
|
September 04, 2014, 12:48:35 AM |
|
Well guys, I broke theoretical bitcoin. My lack of relevant knowledge has theoretically doomed us all. In all seriousness (not that breaking theoretical bitcoin isn't serious), the whole take-down-the-network-in-one-transaction thing is scary as shit. I'd love to be able to use string functions, but I'd rather not advocate risking the network for some silly scriptsigs.
|
|
|
|
gmaxwell
Moderator
Legendary
Offline
Activity: 4298
Merit: 8818
|
|
September 04, 2014, 01:28:13 AM |
|
Well guys, I broke theoretical bitcoin. My lack of relevant knowledge has theoretically doomed us all. In all seriousness (not that breaking theoretical bitcoin isn't) the whole take-down-the-network-in-one-transaction is scary as shit. I'd love to be able to use string functions, but I'd rather not advocate risking the network for some silly scriptsigs I send you my theoretical condolences. No worries, everyone breaks theoretical Bitcoin.
|
|
|
|
jl2012
Legendary
Offline
Activity: 1792
Merit: 1111
|
|
September 04, 2014, 04:08:37 AM |
|
I'm not a programmer so this may sound very stupid: [...] Max OP_CAT output size = 520 bytes: why risky? I mean, is there any fundamental difference between these cases?
All the limits are risks; all complexity is. Practically every one of them has been implemented incorrectly by one alternative full node implementation or another (or by Bitcoin Core itself) at some point: they miss them completely, or count wrong for them, or respond incorrectly when they're violated. E.g. here, what happens if you OP_CAT 520 and 10 bytes? Should the verify fail? Should the result be truncated? But even that wasn't the point here. The point here was that realizing you _needed_ a limit, and where you needed it, was a risk. The reasonable and pedantically correct claim was made that OP_CAT didn't increase memory usage, that it just took two elements and replaced them with one which was just as large as the two... and yet having (unfiltered) OP_CAT in the instruction set bypassed the existing limits and allowed exponential memory usage. None of it is insurmountable, but I was answering the question as to why it's not just something super trivial.

Assuming OP_CAT is still available, we can do everything with existing OP codes:

<A> <B> SIZE ROT SIZE ROT 2DUP <520> LESSTHANOREQUAL VERIFY <520> LESSTHANOREQUAL VERIFY ADD <520> LESSTHANOREQUAL VERIFY SWAP CAT

If the size of A, the size of B, and the sum of the sizes of A and B are all less than or equal to 520, it will return <AB>. Otherwise, the script fails. So we can create an alias for this part:

SIZE ROT SIZE ROT 2DUP <520> LESSTHANOREQUAL VERIFY <520> LESSTHANOREQUAL VERIFY ADD <520> LESSTHANOREQUAL VERIFY SWAP CAT

Call it OP_LIMITCAT, and disallow the use of a bare OP_CAT. Unless there are bugs in the existing OP codes, or in my script, that should be fine.
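The invariant that OP_LIMITCAT sequence enforces condenses to a few lines. A Python sketch (illustration only; the three separate VERIFYs of the script are collapsed into one combined check, which is equivalent for the result): neither input nor the output may exceed the 520-byte element limit.

```python
# Emulation of the OP_LIMITCAT idea: check both input sizes and the
# output size against the 520-byte stack element limit, then concat.
# Any violated check corresponds to a VERIFY failing in the script.

def limitcat(a: bytes, b: bytes) -> bytes:
    if len(a) > 520 or len(b) > 520 or len(a) + len(b) > 520:
        raise ValueError("script failed")   # a VERIFY fired
    return a + b

print(limitcat(b"foo", b"bar"))             # b'foobar'
try:
    limitcat(b"\x00" * 300, b"\x00" * 300)  # 600 > 520: fails
except ValueError:
    print("rejected")
```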
|
Donation address: 374iXxS4BuqFHsEwwxUuH3nvJ69Y7Hqur3 (Bitcoin ONLY) LRDGENPLYrcTRssGoZrsCT1hngaH3BVkM4 (LTC) PGP: D3CC 1772 8600 5BB8 FF67 3294 C524 2A1A B393 6517
|
|
|
sukhi (OP)
Newbie
Offline
Activity: 12
Merit: 0
|
|
September 04, 2014, 04:24:09 AM |
|
gmaxwell, you remind me of that part of me that says "not so fast..." to "of course..." statements. You are thoughtful; thanks for your help and insight! I'm not seeing how OP_CAT (at least by itself) facilitates any of the high level examples there. Can you give me a specific protocol and set of scripts to show me how it would work?
I am simultaneously trying to prove that I need OP_CAT and trying to find a way to do without it. I guess I can't have it both ways. In very, very short, it involved verifying that hash(salt+data) indeed equals [suchhash]. Providing salt and data, the script can confirm hash(salt), hash(data) and hash(salt+data), and validates the transaction based on whether the hashes match what was claimed. However, I think I found a way around that, which will involve more never-broadcast transactions, more multisig addresses, etc.; i.e. the process is more complex and exhaustive, but I think the OPs available allow an alternative implementation. As I feel like I'm finding my way toward "a solution without OP_CAT", the next biggest wall seems to be: even if I can make scripts that do exactly what I want, will the network accept and broadcast them? I am about to start experimenting on testnet, but even if it works on testnet, that doesn't tell me whether it is going to work on mainnet. OP_CAT would indeed simplify the process of what I am thinking of, but it seems that the main scenarios would be resolved, i.e. I wouldn't "need" OP_CAT, even though it would have made things easier. One of the protocols is about two parties escrowing money for future instant payments, i.e. off-net renegotiation of the balances. An extended version could allow decentralised banking for Bitcoin. (isn't that what Bitcoin IS?) Yes, but I am talking about instant proof/guarantee of receiving a minimum of n confirmations, after escrow/deposits are placed. It seems that it would only require one non-standard script, which would look like this: inputs: pubkey signature secret OP_DUP <pubkeyA> OP_EQUAL OP_IF <hashB> OP_ELSE <pubkeyB> OP_EQUALVERIFY <hashA> OP_ENDIF OP_ROT OP_CHECKSIGVERIFY OP_SWAP OP_HASH160 OP_EQUAL
As it is, this would allow either of two parties to claim the output if they can provide the other party's secret, a secret which hashes to the hardcoded hashA or hashB, depending on who is signing. (How that is useful is part of a bigger picture that I will talk about later in another thread.) This script is untested and incomplete; I also need to allow both signatures to validate the transaction. So the question is now: should I bother testing that on testnet, or is it doomed because the network wouldn't like it, e.g. too many nodes not broadcasting unknown, strange and/or non-standard transactions?
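The hash(salt+data) commitment sukhi describes can be sketched outside Script (Python stand-in using SHA-256; the real script might use a different hash, and all names here are illustrative): the verification step needs the concatenation of salt and data, which is exactly where OP_CAT would come in.

```python
# Sketch of a salted-hash commitment: the committer publishes
# H(salt || data); a redeemer later reveals salt and data, and the
# verifier recomputes the concatenated hash. In Script this check
# would read roughly: <salt> <data> CAT SHA256 <commitment> EQUAL.
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

salt, data = b"random-salt", b"the-data"   # illustrative values
commitment = h(salt + data)                # published in advance

# Verification on reveal: only the concatenation matches.
assert h(salt + data) == commitment
print("commitment verified")
```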
|
|
|
|
sukhi (OP)
Newbie
Offline
Activity: 12
Merit: 0
|
|
September 04, 2014, 05:08:42 AM |
|
Assuming OP_CAT is still available,
Right there, I started feeling skeptical about your post. we can do everything with existing OP codes: <A><B> SIZE ROT SIZE ROT 2DUP <520> LESSTHANOREQUAL VERIFY <520> LESSTHANOREQUAL VERIFY ADD <520> LESSTHANOREQUAL VERIFY SWAP CAT How is that useful? The problem is not about being able to do nice scripts and use them properly, but about avoiding the possibility of making any script that could potentially... get naughty. Call it OP_LIMITCAT, and disallow the use of a bare OP_CAT.
How is that better than implementing OP_CAT correctly in the first place? Wouldn't that "alias" require the nasty bare OP_CAT to be "present"? It sounds like you are saying "instead of doing the checks in OP_CAT itself, let's make an improper OP_CAT that we cannot use directly, and make an alias that does the checks before calling the improper OP_CAT that doesn't do them". Am I totally misunderstanding you?
|
|
|
|
2112
Legendary
Offline
Activity: 2128
Merit: 1073
|
|
September 04, 2014, 05:35:23 AM |
|
Typed data on the stack makes writing correct code much harder; I can't say that I've ever wished for that. In general I prefer the stack to be "bytes" and everything "converts" them to the right type. Yes, additional constraints would make things like your provably unspendable code easier, but they do so by adding more corner cases that an implementation must get right. I'm also a fan of analyzability, but that always has to take second seat to consensus safety.

This claim about "typed data" and "provability" is false. There are actual proofs of that coming from the people involved in designing/implementing Algol 68. I don't have any references handy, but in broad terms the progression "classic Von Neumann" -> "type-tagged Von Neumann" -> "static-typed Von Neumann/Harvard modification" strictly increases the set of programs that have provable results. I also remember that in the USA IBM paid for some academic research about "PL/I without implicit type coercion" that had similar results. As an aside to the theoretical results: in school I had a side income helping debug/fix/extend several RPN-style / Forth-style language interpreters, including then-popular commercial implementations by Tektronix & HP in their IEEE-488 lab-control equipment. For that application type-tagging was (and is) a godsend, both for human programming and for automated program analysis/translation.
|
|
|
|
gmaxwell
Moderator
Legendary
Offline
Activity: 4298
Merit: 8818
|
|
September 04, 2014, 06:22:40 AM |
|
I absolutely agree that additional type data makes for software which is easier to analyze. The question isn't whether the result of the program is provable; the question is whether the implementation of the interpreter is simple enough to have even a small chance of multiple absolutely identically behaving implementations, since we are performing this inside of a consensus system.
You continue to miss the point completely.
|
|
|
|
gmaxwell
Moderator
Legendary
Offline
Activity: 4298
Merit: 8818
|
inputs: pubkey signature secret OP_DUP <pubkeyA> OP_EQUAL OP_IF <hashB> OP_ELSE <pubkeyB> OP_EQUALVERIF <hashA> OP_ENDIF OP_ROT OP_CHECKSIGVERIFY OP_SWAP OP_HASH160 OP_EQUAL
That script is perfectly standard as a P2SH in current code, though I suspect you've confused the operation of the machine somewhat. I am simultaneously trying to prove that I need OP_CAT and trying to find a way to do without it. I guess I can't have it both ways. I'm not asking you to prove that OP_CAT is necessary; I'm asking you to describe a specific, complete protocol for which it is sufficient: something that starts with Alice and Bob and Charlie who want to accomplish a task, a series of specific messages they send, and a series of guaranteed outcomes. Then I could try to help you reimagine a functionally equivalent protocol without it. As it is, this would allow either of two parties to claim the output if they can provide the other party's secret, a secret which hashes to the hardcoded hashA or hashB, depending on who is signing. It sounds like you're describing an atomic swap or a related transaction; often they don't need two hashes. If you really just want something conditionally redeemable by one person or another, I would recommend the transaction type I recommend for Reality Keys:

Reality Keys will reveal private key A if a true/false fact is true, and private key B if it's false. Alice and Bob want to make a contract to hedge the outcome of the fact because they each have opposing short positions relative to it: Alice will be paid if the fact is true, Bob will be paid if it is false. Reality Keys publishes the pubkey pair a := A·G, b := B·G. Alice has private key X and corresponding pubkey x; Bob has private key Y and corresponding pubkey y. Alice and Bob compute new pubkeys q := x+a and r := y+b, and they send their coins to a 1-of-2 multisig of those new pubkeys q, r. The values q, r are zero-knowledge indistinguishable from a and b unless you know x and/or y, so no one except Alice and Bob, not even Reality Keys, can tell which transaction on the network is mediated by the release of A vs B. Later, Reality Keys releases A or B; let's say Alice wins.
She computes the new private key X+A and uses it to redeem the multisig. Bob cannot redeem it because he knows neither X nor B. This looks like a perfectly boring transaction to everyone else. Alice and Bob collectively cannot be robbed by a third party, though they could be held up, or there could be cheating if Reality Keys conspires with Alice or Bob. This risk could be reduced by using a threshold of multiple observers, to which this scheme naturally extends.
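The key arithmetic in this scheme can be sketched in a toy discrete-log group (a multiplicative group written as g^k rather than secp256k1's additive x+a notation; the parameters are made up and completely insecure, this illustrates only the algebra): combining public keys corresponds to adding private keys, so whoever learns A can sign for q.

```python
# Toy discrete-log group: pub(k) = g^k mod p. "Adding" private keys
# multiplies public keys, so q can be built from pubkeys alone, while
# its private key X+A only becomes known once A is released.
# p, g, X, A are all invented demo values, NOT real curve parameters.

p, g = 2**127 - 1, 3

def pub(k: int) -> int:
    return pow(g, k, p)          # public key for private key k

X, A = 123456789, 987654321      # Alice's privkey, oracle's "true" privkey
q = pub(X) * pub(A) % p          # combined pubkey, from pubkeys only

# Once the oracle releases A, Alice holds the private key X + A for q:
assert pub(X + A) == q
print("X+A is the private key for q")
```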
|
|
|
|
andytoshi
Full Member
Offline
Activity: 179
Merit: 151
-
|
|
September 04, 2014, 12:58:38 PM |
|
This claim about "typed data" and "provability" is false. There are actual proofs of that coming from the people involved in designing/implementing Algol 68. I don't have any references handy, but in broad terms the progression "classic Von Neumann" -> "type-tagged Von Neumann" -> "static-typed Von Neumann/Harvard modification" strictly increases the set of programs that have provable results.

We are not talking about the von Neumann architecture. We are talking about a small non-Turing-complete stack machine without mutability and with a fixed opcode limit. In this case the set of allowable programs absolutely does shrink, and more importantly, the space of accepting inputs for (most) given scripts shrinks. This is easy to see: consider the program OP_VERIFY. There would be one permissible top stack element in a typed script; in untyped script, every legal stack element is permissible except the falsy encodings (the empty element and anything of the form 0x00* or 0x00*0x80). That said, nobody actually said anything about the space of provable programs. What I said is that script would be easier to analyze. This is obviously true because of the tighter restrictions on stack elements, as already illustrated. As another example, consider the sequence OP_CHECKSIG OP_CHECKSIG, which always returns zero. One reason this is true today is that the output of OP_CHECKSIG always has length one, while the top element of its accepting input always has length greater than one. To analyze script today you need to carry around these sorts of length restrictions; with typing you would only need to carry around the fact that CHECKSIG's output is a boolean and its input is a bytestring.
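The "many encodings, one boolean" point can be made concrete with Bitcoin's truthiness rule as commonly described for the reference interpreter (Python sketch): an element is false iff all of its bytes are zero, allowing a 0x80 sign bit on the final byte, so an analyzer has to track encodings rather than just abstract values.

```python
# Sketch of the CastToBool rule: a stack element is false iff every
# byte is 0x00, except that the final byte may be 0x80 (negative
# zero). Everything else, of any length, is true.

def cast_to_bool(v: bytes) -> bool:
    for i, byte in enumerate(v):
        if byte != 0:
            # negative zero: sign bit set on the last byte, rest zero
            return not (i == len(v) - 1 and byte == 0x80)
    return False

falsy = [b"", b"\x00", b"\x00\x00", b"\x00\x80"]
truthy = [b"\x01", b"\x80\x00"]
print([cast_to_bool(v) for v in falsy + truthy])
# [False, False, False, False, True, True]
```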
|
|
|
|
2112
Legendary
Offline
Activity: 2128
Merit: 1073
|
|
September 04, 2014, 10:04:26 PM |
|
I absolutely agree that additional type data makes for software which is easier to analyze. The question isn't the result of the program being provable, the question is of the implementations of the interpreter being simple enough to have even a small chance of having multiple absolutely identically behaving implementations, since we are performing this inside of a consensus system.
You continue to miss the point completely.
I apologize for writing too ambiguously the first time. I'm going to try to linearize my thoughts better now:

1) Given the current Bitcoin script language with the following problems (amongst others):
1a) implicit conversions between integers and bit strings, with semantics depending on precise details of the OpenSSL implementation (word size, word order in a large integer, byte order in a word)
1b) ostensibly allowing emulated iteration by mutual recursion of P2SH invocations

2) a non-binary-compatible but morally compatible scripting language featuring:
2a) explicit type conversion operators and type tagging of the stack storage, in particular clean conversions between integers and bit strings
2b) a somehow type-safe or type-checking implementation of P2SH invocation that verifies both arguments and return values

3) will allow writing a completely new scripting interpreter:
3a) in a theoretically strong programming language like a provable Lisp subset (Lisp because I'm most familiar with it, but there are many other candidates; I have not kept up with recent developments in theoretical computer science)
3b) that can be mechanically/automatically verified and proven to obey certain theorems and conditions

4) said interpreter can then be translated:
4a) to C/C++/Java/etc., via completely mechanical translation or manual pattern-based transliteration of a very restricted Lisp subset, to be incorporated in a software-only implementation
4b) to SystemC/Verilog/VHDL/etc., to be synthesized into a logic circuit (with stack memory) for hardware-assisted implementations and for additional verification

The 4a) output in a restricted C++ subset could then replace the current, completely improvised, implementation in Bitcoin Core. Because it uses a C++ subset it would most likely be longer in terms of lines of code, but it would also be much simpler to analyze.
The 3b) step has an additional problem: all the existing Lisp provers use only conventional ring-of-integers arithmetic. Since Bitcoin depends on an elliptic curve over a finite field, the proving software would have to be extended to handle that efficiently. From my school-days algebra I remember that the stratification group -> ring -> field significantly influences the complexity of proofs. Sliding back from the "ring of integers" to the "abelian group of elliptic curve points" could greatly reduce the set of theorems that can be mechanically proven. I realize that points 1-4 still read like a complex sentence in a patent application. I'm not good at writing easy-to-read essays. But from a purely technical point of view the two-level process is the way to maximize correctness (1st language for proving/verification, 2nd language for implementation/integration).
|
|
|
|
2112
Legendary
Offline
Activity: 2128
Merit: 1073
|
|
September 04, 2014, 10:09:34 PM |
|
We are not talking about the von Neumann architecture. We are talking about a small non-Turing-complete stack machine without mutability and with a fixed opcode limit. In this case the set of allowable programs absolutely does shrink, and more importantly, the space of accepting inputs for (most) given scripts shrinks. This is easy to see: consider the program OP_VERIFY. There would be one permissible top stack element in a typed script; in untyped script, every legal stack element is permissible except the falsy encodings (the empty element and anything of the form 0x00* or 0x00*0x80).
That said, nobody actually said anything about the space of provable programs. What I said is that script would be easier to analyze. This is obviously true because of the tighter restrictions on stack elements, as already illustrated. As another example, consider the sequence OP_CHECKSIG OP_CHECKSIG, which always returns zero. One reason this is true today is that the output of OP_CHECKSIG always has length one, while the top element of its accepting input always has length greater than one. To analyze script today you need to carry around these sorts of length restrictions; with typing you would only need to carry around the fact that CHECKSIG's output is a boolean and its input is a bytestring.
I'm sorry I haven't kept up with the advances in theoretical computer science. But I believe we have already discussed the "non-TC" chestnut here, and the consensus was that one can abuse P2SH to escape the "no-loops" restriction. Let me try to dig up the thread and I will edit this message later. Edit: https://bitcointalk.org/index.php?topic=431513.msg6533466#msg6533466 The operative words were "opcode limit" in the "Turing complete language vs non-Turing complete (Ethereum vs Bitcoin)" thread.
|
|
|
|
andytoshi
Full Member
Offline
Activity: 179
Merit: 151
-
|
|
September 04, 2014, 10:15:25 PM |
|
But I believe we have already discussed the "non-TC" chestnut here and the consensus was that one can abuse P2SH to escape the "no-loops" restriction.
You can't use P2SH to create loops and nobody said anything about loops anyway.
|
|
|
|
gmaxwell
Moderator
Legendary
Offline
Activity: 4298
Merit: 8818
|
|
September 04, 2014, 10:19:28 PM |
|
1a) implicit conversions between integers and bit strings with semantics depending on precise detail of OpenSSL implementation (word size, word order in a large integer, byte order in a word)
Not so. We proved there were no semantic leaks from OpenSSL in the numbers on the stack via exhaustive testing some time ago, and removed all use of OpenSSL from the script code (except the calls out for signature verification, of course; only signature verification and the accompanying signature serialization are handled by it). 1b) ostensibly allowing emulated iteration by mutual recursion of P2SH invocations
Also not so, very intentionally not.
|
|
|
|
2112
Legendary
Offline
Activity: 2128
Merit: 1073
|
|
September 04, 2014, 10:33:50 PM |
|
Not so. We proved there were no semantic leaks from OpenSSL in the numbers on the stack via exhaustive testing some time ago and removed all use of OpenSSL from the script code (except the calls out for signature verification, of course— so only signature verification and the accompanying signature serialization are handled by it).
So this "semantic leak" is now only apparent in the block layout on the wire and on disk, but the "abstract virtual machine" of Bitcoin script cannot discover its internal bit ordering? Do I understand you right? 1b) ostensibly allowing emulated iteration by mutual recursion of P2SH invocations
Also not so, very intentionally not. Then can you state again what possible attack the "opcode limit" is protecting against? Thanks.
|
|
|
|
sukhi (OP)
Newbie
Offline
Activity: 12
Merit: 0
|
|
September 05, 2014, 02:43:55 AM Last edit: September 05, 2014, 07:02:17 PM by sukhi |
|
inputs: pubkey signature secret OP_DUP <pubkeyA> OP_EQUAL OP_IF <hashB> OP_ELSE <pubkeyB> OP_EQUALVERIF <hashA> OP_ENDIF OP_ROT OP_CHECKSIGVERIFY OP_SWAP OP_HASH160 OP_EQUAL
That script is perfectly standard as a P2SH in current code. Though I suspect you've confused the operation of the machine somewhat. The script may be perfectly fine, but would bitcoin mainnet nodes broadcast transactions that contain that script? I need a definition of what a standard script is and what a non-standard script is; bitcoind gives me: "scriptPubKey" : { "asm" : "OP_DUP aaaaaaaaaa OP_IF bbbbbbbbbb OP_ELSE cccccccccc OP_EQUALVERIFY dddddddddd OP_ENDIF OP_ROT OP_CHECKSIGVERIFY OP_SWAP OP_HASH160 OP_EQUAL", "hex" : "7605aaaaaaaaaa6305bbbbbbbbbb6705cccccccccc8805dddddddddd687bad7ca987", "type" : "nonstandard" }
And I think I missed a few OP_DROPs. What operation do you suspect I am confusing? The stack goes like this:

inputs            | pubkey signature password
OP_DUP            | pubkey pubkey signature password
<pubkeyA>         | pubkeyA pubkey pubkey signature password
OP_EQUAL          | isAlice pubkey signature password
OP_IF             | pubkey signature password
<hashB>           | BobsHash pubkey signature password
OP_ELSE           |
<pubkeyB>         | pubkeyB pubkey signature password
OP_EQUALVERIFY    | isBob pubkey signature password
OP_DROP           | pubkey signature password
<hashA>           | AlicesHash pubkey signature password
OP_ENDIF          | HashY pubkeyX signature passwordY
OP_ROT            | pubkeyX signature HashY passwordY
OP_CHECKSIGVERIFY | true HashY passwordY
OP_DROP           | HashY passwordY
OP_SWAP           | passwordY HashY
OP_HASH160        | PasswordY_hash HashY
OP_EQUAL          | Signature-Password match
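The intent of the branch logic in this script can be sketched outside Script (signature checking is stubbed out; real HASH160 is RIPEMD160-over-SHA256, but double-SHA256 stands in here since ripemd160 isn't available in every hashlib build; all keys and secrets are illustrative placeholders): pick hashA or hashB depending on which pubkey is signing, then check the revealed secret against it.

```python
# Sketch of the two-branch hashlock: Alice redeems by revealing Bob's
# secret, Bob by revealing Alice's. OP_CHECKSIGVERIFY is omitted.
import hashlib

def hash160(x: bytes) -> bytes:
    # stand-in for RIPEMD160(SHA256(x)); uses double-SHA256 instead
    return hashlib.sha256(hashlib.sha256(x).digest()).digest()

pubkeyA, pubkeyB = b"alice-pub", b"bob-pub"        # placeholder keys
secretA, secretB = b"alice-secret", b"bob-secret"  # placeholder secrets
hashA, hashB = hash160(secretA), hash160(secretB)  # hardcoded in script

def redeem(pubkey: bytes, secret: bytes) -> bool:
    # OP_DUP <pubkeyA> OP_EQUAL OP_IF <hashB> OP_ELSE ... OP_ENDIF
    if pubkey == pubkeyA:
        expected = hashB       # Alice must reveal Bob's secret
    elif pubkey == pubkeyB:
        expected = hashA       # Bob must reveal Alice's secret
    else:
        return False
    return hash160(secret) == expected   # OP_HASH160 ... OP_EQUAL

print(redeem(pubkeyA, secretB))  # True
print(redeem(pubkeyA, secretA))  # False
```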
I have seen OP_EVAL in some places, as well as other OPs that I don't find at https://en.bitcoin.it/wiki/Script. What and where do I find documentation about those? I am simultaneously trying to prove that I need OP_CAT and trying to find a way to do without it. I guess I can't have it both ways. I'm not asking you to prove that OP_CAT is necessary; I'm asking you to describe a specific, complete protocol for which it is sufficient: something that starts with Alice and Bob and Charlie who want to accomplish a task, a series of specific messages they send, and a series of guaranteed outcomes. Then I could try to help you reimagine a functionally equivalent protocol without it. Starting from the fact that I don't have OP_CAT and don't count on having it, I focus on finding alternative ways rather than developing protocols that require something I don't have. I'll give an example if (and when) I get there; what I have is still too incomplete and abstract. As it is, this would allow either of two parties to claim the output if they can provide the other party's secret, a secret which hashes to the hardcoded hashA or hashB, depending on who is signing. It sounds like you're describing an atomic swap or a related transaction. Often they don't need two hashes. If you really just want something conditionally redeemable by one person or another, I would recommend the transaction type I recommend for Reality Keys: Yes, an atomic transaction, for a balance update or for winning/losing a dice bet. A third party is unacceptable, which rules out Reality Keys.
|
|
|
|
|