Alright.
There's one other thing to address: in the secp256k1_fe_mul (or something like that) function, all but the last leg are multiplied by the constant R. This causes a result different from what I got when I calculated an example in Python. So inside the fe_mul function, I need to modify it to avoid multiplying the values in the result (stack) by R, and send that multiplication to a temporary instead.
That makes no sense; fe_mul just multiplies two field elements modulo p. That R constant is an implementation detail that even differs between 32-bit and 64-bit platforms. It's not actually multiplying the result by this value. Note that field elements are internally stored in a denormalized representation where the limbs can overflow. If you want to convert one to a portable format, use fe_get_b32.
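For intuition about where such a constant comes from (a Python sketch of the general reduction trick, not the library's actual limb code): secp256k1's field prime has the special form p = 2^256 - 0x1000003D1, so the high half of a wide product can be folded down by multiplying it with that small constant, without changing the result modulo p.

```python
# secp256k1's field prime: p = 2^256 - 2^32 - 977
P = 2**256 - 0x1000003D1

def reduce_wide(x: int) -> int:
    # Since 2^256 is congruent to 0x1000003D1 (mod p), the high 256
    # bits of a 512-bit product can be folded into the low half by a
    # small multiplication -- the kind of trick behind such constants.
    lo, hi = x & (2**256 - 1), x >> 256
    return (lo + hi * 0x1000003D1) % P
```

The multiplication by the constant is part of the modular reduction; the reduced value is the same as a plain `% P`.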
|
|
|
But since I use the secp256k1 curve only for testing and research, I do not care much about any of the possible vulnerabilities and attacks.
The safest (not necessarily the fastest) secp256k1 is the one used in Bitcoin Core. But I don't use it because I keep getting wrong answers when I do arithmetic. Maybe the privkey bytes are not being filled correctly or something.
Perhaps you should open an issue or discussion topic (https://github.com/bitcoin-core/secp256k1/issues or https://github.com/bitcoin-core/secp256k1/discussions/categories/q-a) on the library, because that's definitely not supposed to happen.
|
|
|
It is just an invention, and a bad one: programmable digital signatures. There was no such thing in the world before you guys made it up, right? No problem, as super devs you are entitled to make inventions, but as an observer I have every right to review and to resist.
It's okay if you don't feel this terminology is warranted. But do you disagree that it is a useful abstraction? If so, how would it be possible to have both template-based and interpreter-based implementations coexist in one world?
A template-matching based verifier will only be compatible with a subset of potential BIP322 signatures, reporting "inconclusive" for the others. That's the price for not having a full script interpreter. But even if *all* implementations use just template-matching based verification, this approach still has the advantage of defining a single format that is compatible with *all* potential future extensions that correspond to script features. Because addresses are encodings of scripts, and what we're signing for is the ability to spend outputs sent to a certain address, using script for the message signing too is just the obvious match in my view.
2- Ripping this sign-by-script concept off from BIP322, let it focus on true signing with support for references to standard txns (with well-formed scripts).
I believe it is entirely uninteresting to work on any kind of message signing system that is restricted to a subset of what script can do. That is postponing another inevitable future problem again, when that subset no longer suffices.
|
|
|
The Bitcoin script system *is* a (programmable) digital signature scheme on its own, which achow101 is referring to as "script signatures". This digital signature scheme is distinct from the ECDSA/Schnorr signature algorithms available in Bitcoin script (but builds upon them). - The "public keys" of this digital signature scheme are the scriptPubKeys in transaction outputs.
- The "messages" of this scheme are the spending transactions, excluding witness data.
- The "signatures" of this scheme are the scriptSigs and witnesses in transaction inputs.
It's a programmable signature scheme in that it supports more complex assertions than "a single party with key X agrees"; e.g. it can express agreement of multiple parties (using e.g. a P2SH scriptPubKey with a redeemscript that requires signatures with multiple keys). The actual script semantics are more-or-less irrelevant for this; it just suffices to express the kinds of assertions we care about.

BIP322 is taking this script signature system and transposing it to a different context: messages that aren't transactions. Everything else remains the same: the scriptPubKeys remain the "public keys" verified against (= addresses), and the "signatures" remain scriptSigs/witnesses (but now embedded in a BIP322 signature encoding, rather than being placed in a transaction). However, the "messages" are replaced with a virtual transaction derived from the message being signed, rather than any real transaction. This permits all the script logic a signer and/or verifier may have for transactions (however complex or simplistic that may be) to be immediately reused for messages. No new opcodes are needed or involved; the only thing that changes is that in a BIP322 context, OP_CHECKSIG (and friends) don't compute a sighash from a spending transaction, but from a virtual transaction that is derived from the message being signed.

If you want to BIP322-sign for "I have the ability to spend funds sent to address X" with message M, all you need to do is demonstrate your ability to "spend" funds sent to X, using a virtual transaction that commits to M. If you have the capacity (in terms of access to private keys, and in terms of having signing algorithms for it) to do that for the script corresponding to X (whether that's a simple single-key construction, or something far more complicated) for real transactions, you also have the ability to do that for this BIP322 virtual transaction.

I don't understand any of the concerns brought up here.
AFAICT, the only thing necessary to move BIP322 forward is finalizing the last details of the specification, and implementing it (which may mean bringing the Bitcoin Core implementation up to date if BIP changes are made, and/or implementations for other software, which may be template-based).
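To make the "virtual transaction commits to M" step concrete: if I recall the BIP322 draft correctly, the message is hashed with a BIP340-style tagged hash under the tag "BIP0322-signed-message", and that 32-byte digest is what the virtual "to_spend" transaction carries. A minimal sketch (treat the exact tag and placement as an assumption to check against the BIP text):

```python
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    # BIP340-style tagged hash: sha256(sha256(tag) || sha256(tag) || msg)
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + msg).digest()

def bip322_message_hash(message: bytes) -> bytes:
    # The 32-byte commitment embedded in the virtual transaction,
    # per my reading of the BIP322 draft.
    return tagged_hash("BIP0322-signed-message", message)
```

Because the virtual transaction commits to this digest, any OP_CHECKSIG executed during verification indirectly signs the message itself.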
|
|
|
I'm not sure about papers. Some of the low-level field arithmetic in libsecp256k1 was inspired by techniques used in certain curve25519/ed25519 implementations, but it has certainly evolved from there, with many optimizations by several contributors.
I came up with the bracket notation, and it isn't an optimization; it's just a concise way of writing down the data flow to allow (humans) to reason about the correctness of the algorithm.
|
|
|
Oh, I had missed part of the question. I don't recall why the documentation limits it to 0x7FFF; either it was just to be conservative and not "leak" a constraint from either of the field implementations into the interface, or it was so that an int could be used even on platforms with a 16-bit int.
Regarding the ranges of permitted magnitudes: indeed, the point is to avoid carries in additions. By having even just a few slack bits in every limb, it's possible to have field elements with a temporarily "denormalized" representation (where the individual limb values exceed 2^26 or 2^52). The restrictions on how far they may exceed 2^26 or 2^52 depend mostly on the multiplication code, which is optimized to take advantage of these limits.
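As an illustration of that slack (a toy Python model, not the actual limb code): with 26-bit limbs stored in wider words, two field elements can be added limb-by-limb with no carry propagation at all, and the represented value is still exact even though individual limbs temporarily exceed 2^26.

```python
LIMBS, BITS = 10, 26           # the 32-bit backend's layout
M = (1 << BITS) - 1            # 26-bit limb mask

def to_limbs(x: int) -> list:
    return [(x >> (BITS * i)) & M for i in range(LIMBS)]

def from_limbs(l: list) -> int:
    # Works even when limbs overflow their nominal 26 bits,
    # because the positional weights absorb the excess.
    return sum(v << (BITS * i) for i, v in enumerate(l))

def add(a: list, b: list) -> list:
    # Limb-wise addition with no carries: the result is a
    # temporarily "denormalized" representation.
    return [x + y for x, y in zip(a, b)]
```

The multiplication code is what eventually has to cope with these enlarged limbs, hence the magnitude limits.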
|
|
|
Interesting question!
The reason for the restriction is simply to keep secp256k1_fe_set_int simple. Field elements are represented as 10 26-bit or 5 52-bit limbs internally, so restricting the function to only accept inputs that can be represented by a single limb means the function can just set all limbs to 0 and set the bottom one to the provided value.
Now in retrospect, it does seem that this function is rather pointless. These days it's only used for setting values to 0 or 1. That used to be different; early on we didn't have a mechanism for constructing compile-time constants, and e.g. a constant for B in the curve equation (y^2 = x^3 + B, with B = 7 for secp256k1 proper) didn't exist until fairly recently.
I'm now considering adding a constant secp256k1_fe for 0 (a constant for 1 already exists), and removing the function. Thanks for the observation!
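A minimal sketch of the mechanics described above (illustrative Python, not the library's C; the 0x7FFF bound matches the documented input range):

```python
LIMBS = 5  # 5 x 52-bit limbs on 64-bit platforms (10 x 26-bit on 32-bit)

def fe_set_int(a: int) -> list:
    # The value fits entirely in the bottom limb, so the function can
    # simply zero everything else -- which is the whole reason for the
    # restricted input range.
    assert 0 <= a <= 0x7FFF
    return [a] + [0] * (LIMBS - 1)
```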
|
|
|
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
Hello all,
this is to announce that we're awarding:
* 10 BTC to JoinMarket, for the first practical CoinJoin solution, and continued research into progressing this domain.
* 10 BTC to Wasabi, for building a more end-user accessible solution and larger adoption.
The remainder of the funds is left for future solutions with more ubiquitous impact on the ecosystem.
For those watching, 822f559df14894bd57bdd1ef0ab983228b7816a69d035cc1c5d18fb569ee5e94 is the payout transaction, crediting several individual contributors directly as requested by the winning projects, and aggregating the remaining bounty funds into a single UTXO. It is (obviously) a joined transaction, mixed with other transfers.
Congratulations!
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEErGYmFy4AqCz/rolypjbpdjH3Z+AFAlzvMfkACgkQpjbpdjH3
Z+D2Pw/+IXvHLrQ1IuTicPXQgauzgIGV9F9tOZln6cIgduVW3A2nlenw9uSixh/4
rhV+kPUOjLsswEocKA1zt0pxq1QbSbhKaDRq2Rms3CtYfyvnlBTDvd/bdNDWBr2g
9Koc0QRq0ETKnJ7dXlOtwhUcOaWrW2qJMf8TekOPb6b70BSUSZzJe5YfItdLu8YM
KmmJ02GIr+urcTJDT3O6kEUpRBEUEaKcWPCAm+CVMGiKAisuhBnhKb163TKOKuH5
zoBdCHKd9giHB5obrqeaCKtw5Rg1Q7Q7hDRcFgvk5YN11EzqFyJrrHq6cyDxso8T
DsO5sa38+0aEi1ijAElwhX+7Wh5/AyadIaAq57V+9Y4TQCbDd0jhwjSclSMuiTkb
r1S0Zc6HuU0ztJyddguDKIZdUpvuRLCQXH0dUW27eYkt3NMrJTiUzN39fSNaRLDM
ZS8mHga1aUxv3IhNVf1pnDOlSE9kHrPMfaaWhrEFLROi/zz5idb6xeZ8bLIAo76D
YbY2zJ7BN0AMTI/EPX/ArkAU8qITfSwy0C9MDfZfmqeA9iy5eTj1EUSPcvoFPksg
wY+HvBptA+qaekNqmqZPZnGRx34e8QWTOP8r3NQxib2Nep8ycHB9TQXiIMGcxLvg
V3SXBCz7MCfJsKieBtZUaIOfWFzHvKGwPdjn/KoMCcwoE5rnQco=
=6vff
-----END PGP SIGNATURE-----
|
|
|
Yes @Pietre, I do agree, several problems will be fixed but one problem will be created: the signatures that you think 'in particular are only required to validate the blockchain state' will lose their immutability for the simple fact that they are not being hashed. Right?
The statement above is false. The signatures are still hashed and committed to by blocks. No immutability is lost. If you don't believe me, please read BIP 141 carefully:
A new wtxid is defined: the double SHA256 of the new serialization with witness data:
...
A new block rule is added which requires a commitment to the wtxid.
...
A witness root hash is calculated with all those wtxid as leaves, in a way similar to the hashMerkleRoot in the block header.
By removing this data from the transaction structure committed to the transaction merkle tree, several problems are fixed:
The data is removed from the transaction merkle tree, and instead committed to in the witness merkle tree. That witness merkle tree is committed to in the coinbase.
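To make the commitment chain concrete, here's a toy Python sketch (Bitcoin-style double SHA-256 and Merkle folding; illustrative, not consensus code): any change to witness data changes a wtxid, which changes the witness Merkle root that the coinbase commits to.

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    # Bitcoin's double SHA-256
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(hashes: list) -> bytes:
    # Bitcoin-style Merkle tree: duplicate the last entry on odd levels
    hashes = list(hashes)
    while len(hashes) > 1:
        if len(hashes) % 2:
            hashes.append(hashes[-1])
        hashes = [dsha256(hashes[i] + hashes[i + 1])
                  for i in range(0, len(hashes), 2)]
    return hashes[0]
```

Since each wtxid is the double SHA-256 of the full serialization including witness data, tampering with any witness changes the root and thereby invalidates the block.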
|
|
|
Let's take a moment and chew on it: current light clients can't check the UTXO set and prevent double spends; post-SegWit 'semi-light' clients can.
The only thing the 'SegWit semi-light' client you describe can do is verify that no historical inflation occurred. It cannot be used to verify incoming SegWit payments in any meaningful way beyond what a light client today can do, so it's useless for any economically relevant entity to rely upon.
It is how SegWit encourages this kind of node and leaves no incentive to remain a prehistoric fat and resource-consuming dinosaur. Clear?
Perfectly clear, but I disagree. Nodes can already prune history, and I think in the future nearly every node will. This does not impact security, as they still download and verify everything. If you feel like running a 'semi-light' node which is basically useless, still needs to maintain the full UTXO set (likely the largest resource cost of running a node in the near future), and in return only gets a small constant-factor bandwidth reduction, be my guest.
That 'only' difference is a huge one
No, it's a technicality. The same data is still committed to in the wtxid. It's just moved.
When something is discarded from the SHA2 process and the txid, it is no longer necessary for validating the integrity of the hashing process and the block header
This is wrong. The witnesses are committed to by the wtxid, which is included in the witness merkle tree, which is committed to by the coinbase, which contributes to the block header. You cannot change a witness and expect a block to still be valid, except to your pointless 'semi-light' node.
and we just need UTXO to check double spend and become a full node ... Oops! we got a validator node (in your terminology) without any obligation to store the signatures for future use.
Again, nodes already don't have any obligation to store signatures (or anything) for future use.
|
|
|
A full node is capable of handling temporary chain splits and choosing the right sequence by checking for double spends, not the signatures.
That's a nonsensical security model. You're allowing an attacker to take anyone's money with invalid signatures, but not spend it twice? He'll just pay you and then steal the money back; no need for a double spend. Without the ability to validate signatures, you effectively cannot validate any useful property of the system. If that's the model you want, it already exists, and it's called a light client. All SegWit does in this regard is reduce the bandwidth for such nodes. It doesn't change the requirements for nodes that do need to validate.
5- March 2028: The court issues a verdict: "As there is no electronic signature proof 'attached' to the transaction
There is just as much proof in SegWit transactions as in others. The only difference is that it does not contribute to the txid. It's still included in transactions and blocks, it's still required for validation, and it's still committed to. There is indeed a new storage model possible where someone chooses to delete old signatures but not delete the rest. However (1) there is already no guarantee that nodes keep around everything for you, and if you want proof in the future that a transaction took place you'll need to keep it yourself (which every wallet software does) (2) that new model is only useful for serving light clients that already don't care about the signatures anyway.
|
|
|
Validating nodes need to download and verify the signatures. This is true now, and will be true after SegWit.
Validating nodes do not need to keep signatures around after validation. This is true now (pruning), and will be true after SegWit.
Anyone who has a transaction included in the chain can construct a proof that this happened (SPV proof), and this proof includes the signature that was used. This is true now, and will be true after SegWit (using the wtxid merkle tree instead of the txid merkle tree).
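The SPV-proof mechanics mentioned above boil down to folding a Merkle branch up to a committed root. A small sketch (Bitcoin-style double SHA-256; the function name is my own, not a library API):

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def root_from_branch(leaf: bytes, branch: list, index: int) -> bytes:
    # Fold a Merkle branch up to the root; at each level, the lowest
    # bit of `index` says whether our node is the right (1) or
    # left (0) child.
    h = leaf
    for sibling in branch:
        h = dsha256(sibling + h) if index & 1 else dsha256(h + sibling)
        index >>= 1
    return h
```

A verifier recomputes the root from the leaf (txid or wtxid) and the supplied branch, and compares it against the commitment in the block.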
|
|
|
In order to get a collision with a 40% chance on a set of 2^80 addresses, you need a memory size of 2.4 * 10^25 bytes. And even applying algorithms that use a trade-off between memory and calculation time, it will still be a huge size.
That's not correct. Floyd's cycle-finding algorithm can be used to find colliding script hashes with O(1) memory, for just a factor 3 slowdown.
|
|
|
From the point of view of old clients, segwit adds one coinbase output that contains the root hash of the Merkle tree that commits to the witness transaction ids. It uses 47 extra bytes per block, so technically, yes, it "wastes precious blockchain space". The blockchain on-disk can be pruned though (implemented as an experimental feature in Bitcoin Core 0.11, and with nearly full functionality in 0.12), so calling it "permanently" is not very accurate.

If you're talking about storage space used by segwit-compatible full nodes, well, obviously it will use more space, because it increases block capacity; that capacity has to go somewhere. However:
- The extra space used by witnesses is more prunable than normal block space, as it's not needed by non-validating clients.
- Has less effect on bandwidth, as light clients don't need the witness data.
- Has no effect on the UTXO set, so does not contribute to database growth and/or churn.
- Enables script versions, which will make the introduction of Schnorr signatures much easier later on, which are more space efficient than what we have now (even for simple single-key outputs/inputs).
|
|
|
Running with -testnet is the default. You can run with -notestnet, but this will result in an error, as the "main chain" code is disabled in all but the unit tests.
For the features, look here: https://github.com/ElementsProject/elementsproject.github.io/blob/master/README.md
The peg is two-way, but relies on a federation of functionaries that hold the coins in multisig and verify transfers. On the sidechain side, a 21M UTXO entry is awarded to the functionaries, who transfer coins out of it whenever transfers on the Bitcoin side into the sidechain happen.
Indeed. Signed blocks are blocks where the proof-of-work mechanism is replaced with a traditional signature. Block creation, for test purposes, is done by a federation as well, rather than by mining.
|
|
|
Hard to say without seeing the script...
|
|
|
#include <stdio.h>

/* Print x with 3 significant digits: count how many decimal places
   are needed to bring |x| up to at least 100, then print with that
   precision. Note: x == 0.0 would loop forever here. */
void print_number(double x) {
    double y = x;
    int c = 0;
    if (y < 0.0) y = -y;
    while (y < 100.0) {
        y *= 10.0;
        c++;
    }
    printf("%.*f", c, x);
}
Why 100? I guess some comments here would help.
It's just guaranteeing a print of 3 significant digits: it counts how many places to shift the decimal point by to bring the number between 100 and 999.
|
|
|
However, libsecp256k1 takes its nonce as input to its API, and from that point on signing and verification are deterministic functions. Any nonce skew would need to occur in the Bitcoin code which calls into libsecp256k1; however, since November nonce generation has been deterministic as well (using RFC6979). This code has been audited and replicated by myself and others; it is also unit tested.
This is not technically true anymore. Since recently, there is a full RFC6979 implementation inside libsecp256k1, with test vectors that were generated by another implementation (feel free to review it; it's too recent to make it into Bitcoin Core v0.10.0, though). The reason for this change was making sure that the easiest way of using the library is always safe; the old API allowed you to shoot yourself in the foot if you passed in a bad nonce.
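For reference, the deterministic-nonce idea is just an HMAC-SHA256 DRBG seeded with the private key and message hash. A simplified Python sketch of the RFC 6979 core (assuming qlen = hlen = 256 bits and omitting the bits2octets reduction and out-of-range retry loop that a spec-compliant implementation needs):

```python
import hashlib
import hmac

def rfc6979_nonce(privkey32: bytes, msg_hash32: bytes) -> bytes:
    # HMAC-SHA256-based nonce derivation (RFC 6979, section 3.2),
    # simplified as stated above.
    V = b"\x01" * 32
    K = b"\x00" * 32
    K = hmac.new(K, V + b"\x00" + privkey32 + msg_hash32, hashlib.sha256).digest()
    V = hmac.new(K, V, hashlib.sha256).digest()
    K = hmac.new(K, V + b"\x01" + privkey32 + msg_hash32, hashlib.sha256).digest()
    V = hmac.new(K, V, hashlib.sha256).digest()
    # First candidate nonce.
    return hmac.new(K, V, hashlib.sha256).digest()
```

The point is that the nonce is a pure function of the key and message, so the same inputs always give the same signature and no RNG failure can leak the key.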
|
|
|
|