Bitcoin Forum
Author Topic: Compressed signature  (Read 462 times)
ertil (OP)
Jr. Member
Activity: 31
Merit: 77
January 27, 2020, 10:24:37 PM
Merited by bones261 (5), suchmoon (4), LoyceV (2), fillippone (2), vapourminer (1), ABCbits (1), o_e_l_e_o (1), Heisenberg_Hunter (1)
#1

Right now we have a separate signature for every input: for each input a signature has to be calculated and included in the transaction. I am thinking about ways to compress this and include only one signature for all of those inputs. A valid transaction requires publishing all public keys, so including all of them is necessary (except for P2PK transactions). We cannot compress the keys, but we can still try to compress all of those signatures.

Any public key is simply privateKey*basePoint. Thus, having two public keys, we can add these two points and get a "shared public key": (privKeyA*basePoint)+(privKeyB*basePoint)=(privKeyA+privKeyB)*basePoint. In this way we can join as many public keys as we want (without revealing private keys), creating one public key for all of those inputs.

Then we need a valid signature matching this key, and it will be enough to validate it. Assuming the common-input-ownership heuristic, it may be a good way of merging all inputs into a single output owned by the same entity, but I am still thinking about how to do it securely without knowing all the private keys.

All participants can safely choose their own "K" values for their signatures and calculate a "signature public key" from them. They can safely share these points with each other and sum them into one point, because extracting "K" from such a point is as difficult as extracting any other private key from any other public key. They can also calculate "r" by taking the "x" coordinate of this summed point modulo "n".
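
A minimal sketch in plain Python (toy keys, no external libraries; an illustration of the idea rather than anything production-grade) of the two constructions described above on secp256k1: adding public keys yields the key of the summed private keys, and summing the shared nonce points gives the point whose "x" coordinate taken modulo "n" would serve as "r".
Code:
P = 2**256 - 2**32 - 977                     # secp256k1 field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order n
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)  # base point

def point_add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0:
        return None                          # point at infinity
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def point_mul(k, pt=G):                      # double-and-add scalar multiplication
    out = None
    while k:
        if k & 1:
            out = point_add(out, pt)
        pt = point_add(pt, pt)
        k >>= 1
    return out

privA, privB = 0xA11CE, 0xB0B                # toy private keys
pubA, pubB = point_mul(privA), point_mul(privB)

# "Shared public key": pubA + pubB == (privA + privB) * basePoint
assert point_add(pubA, pubB) == point_mul((privA + privB) % N)

# Each signer keeps K secret and shares only K*basePoint; r comes from the sum.
kA, kB = 0x1234, 0x5678
R = point_add(point_mul(kA), point_mul(kB))
r = R[0] % N                                 # "x" coordinate of the joint point, mod n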

Is calculating "s" possible without knowing all "K" values and all private keys? Or do you have other ideas how to compress signatures?
achow101
Moderator
Legendary
Activity: 3374
Merit: 6511
Just writing some code
January 27, 2020, 11:47:30 PM
Merited by bones261 (5), Foxpup (4), LoyceV (4), dbshck (4), vapourminer (3), ABCbits (3), fillippone (3), Husna QA (1), Heisenberg_Hunter (1)
#2

Quote from: ertil
Any public key is simply privateKey*basePoint. Thus, having two public keys, we can add these two points and get a "shared public key": (privKeyA*basePoint)+(privKeyB*basePoint)=(privKeyA+privKeyB)*basePoint. In this way we can join as many public keys as we want (without revealing private keys), creating one public key for all of those inputs.
This is not secure. I can generate my own priv and pubkey pair, subtract everyone else's pubkeys from that to get a different pubkey, then claim that calculated pubkey is mine. When all those pubkeys are added up, you get the pubkey I generated, which I have the private key for. So now I can use my private key to produce a signature and steal the coins. I would be able to do this with any UTXO for which I know the pubkeys, without needing anyone else to be involved.
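
For concreteness, here is a toy sketch of the rogue-key attack described above. It uses the multiplicative group modulo a small prime as a stand-in for the curve group (so a "pubkey" is g^priv instead of priv*basePoint); all numbers are illustrative only.
Code:
p, q, g = 2039, 1019, 4           # toy group: g has prime order q modulo p

# Honest participants' pubkeys (the attacker never learns b or c).
b, c = 321, 654
B, C = pow(g, b, p), pow(g, c, p)

# The attacker picks a key x they control and publishes a "rogue" pubkey
# chosen so that the naive aggregate collapses to g^x.
x = 777
A_rogue = pow(g, x, p) * pow(B * C, -1, p) % p

aggregate = A_rogue * B * C % p
assert aggregate == pow(g, x, p)  # the attacker alone can sign for the aggregate
This is why real aggregation schemes (e.g. MuSig) multiply each key by a hash-derived coefficient, or require a proof of knowledge of the secret key, before combining them.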

Quote from: ertil
Then we need a valid signature matching this key, and it will be enough to validate it. Assuming the common-input-ownership heuristic, it may be a good way of merging all inputs into a single output owned by the same entity, but I am still thinking about how to do it securely without knowing all the private keys.
I don't think there is a secure way to do this with ECDSA, or if there is, it will be fairly complicated. ECDSA does not allow signatures to be combined linearly, which makes this difficult.

What you are describing is a technique known as Cross Input Signature Aggregation. This is an idea that has been around for a while and, using Schnorr signatures, would be possible. Schnorr signatures can be combined linearly, and secure pubkey aggregation and signature aggregation schemes have now been developed using them. The proposed Taproot and Schnorr signature BIPs do not include Cross Input Signature Aggregation, as it requires more complicated changes to transaction formats, but they do introduce Schnorr signatures themselves. So in the future, we could deploy Cross Input Signature Aggregation into Bitcoin using Schnorr signatures.
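
As a rough illustration of the linearity mentioned above, here is a sketch in a toy multiplicative group standing in for the curve: with a shared challenge, partial Schnorr signatures simply add up and verify against the product of the public keys. Real MuSig adds key-aggregation coefficients and nonce commitments that are omitted here, precisely to block the rogue-key trick shown earlier; the group, keys, and message are all illustrative.
Code:
import hashlib

p, q, g = 2039, 1019, 4                      # toy group: g has prime order q modulo p

def H(*parts):                               # challenge hash, reduced into [0, q)
    data = "|".join(str(x) for x in parts).encode()
    return int(hashlib.sha256(data).hexdigest(), 16) % q

msg = "toy transaction"
xA, xB = 123, 456                            # private keys
PA, PB = pow(g, xA, p), pow(g, xB, p)        # public keys
kA, kB = 77, 88                              # secret nonces
RA, RB = pow(g, kA, p), pow(g, kB, p)        # public nonce commitments

R = RA * RB % p                              # aggregate nonce
P_agg = PA * PB % p                          # aggregate pubkey
e = H(R, P_agg, msg)                         # shared challenge

sA = (kA + e * xA) % q                       # partial signatures...
sB = (kB + e * xB) % q
s = (sA + sB) % q                            # ...sum to a single signature

# One verification against the aggregate pubkey covers both signers.
assert pow(g, s, p) == R * pow(P_agg, e, p) % p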

gmaxwell
Moderator
Legendary
Activity: 4158
Merit: 8382
January 28, 2020, 05:32:17 AM
#3

https://bitcointalk.org/index.php?topic=1377298.0
aliashraf
Legendary
Activity: 1456
Merit: 1174
Always remember the cause!
February 01, 2020, 06:43:20 PM
#4

Actually, there is a very strong and promising scheme for signature aggregation named BLS, after the cryptographers who proposed and developed it: Boneh, Lynn, and Shacham. It is a pairing-based cryptosystem and yields huge advantages compared to both the traditional ECDSA scheme used in Bitcoin (and almost everywhere else) and the rising Schnorr signature scheme, which is partially supported in the Taproot proposal.

I'll start a new topic about BLS asap, discussing trade-offs involved; meanwhile, it would be very productive if people here did their own research on the subject.


P.S.
Yes, I'm back  Grin
gmaxwell
Moderator
Legendary
Activity: 4158
Merit: 8382
February 01, 2020, 07:44:13 PM
Last edit: February 02, 2020, 12:06:08 AM by gmaxwell
Merited by bones261 (5), fillippone (2)
#5

Quote from: aliashraf
I'll start a new topic about BLS asap, discussing trade-offs involved,
Maybe instead just go look at the prior threads on the subject?

You could at least pretend to have read the links already in this thread. The first five paragraphs of my above link from 2016 are about BLS signatures and their limitations. The whole point of the post was "we knew we could do this with BLS but the tradeoffs made it less exciting, but it turns out we can also do it with schnorr signatures."

They allow non-interactive aggregation, which is really cool, but interactive aggregation can already give most of the benefit. Their cost is new strong security assumptions (in fact, had they been deployed back in 2016 they'd have to be deprecated now, because new attacks lowered the security of the groups that would have been used; most things doing pairing now use a 384-bit group as a result, and there are arguments for 460+ bits), and that they are quite slow... even worse than that 2016 thread suggests, because of the necessity of increasing the size to keep ~128-bit security (at practical sizes for cryptography, ECC computation time is roughly cubic or worse in the size of the group). Cross-transaction aggregation also has complications with validation caching and relay that are probably surmountable but present their own trade-offs.
aliashraf
Legendary
Activity: 1456
Merit: 1174
Always remember the cause!
February 01, 2020, 08:42:34 PM
#6

Quote from: aliashraf
I'll start a new topic about BLS asap, discussing trade-offs involved,
Quote from: gmaxwell
Maybe instead just go look at the prior threads on the subject?

You could at least pretend to have read the links already in this thread. The first five paragraphs of my above link from 2016 are about BLS signatures and their limitations. The whole point of the post was "we knew we could do this with BLS but the tradeoffs made it less exciting, but it turns out we can also do it with schnorr signatures."

You are right, I just overlooked your link; I will check it out asap. From my own research, for now I'm a bit more optimistic about BLS, because I suppose the verification overhead of BLS could be significantly addressed by hardware development, leaving plenty of room for the compression and non-interactive multi-signature setup features of BLS to shine, especially when long-term, strategic decisions are to be made.

gmaxwell
Moderator
Legendary
Activity: 4158
Merit: 8382
February 01, 2020, 10:42:59 PM
#7

Quote from: aliashraf
addressed by hardware development
Unfortunately, advances in technology for consumer devices are mostly pouring into reducing power usage and size and improving portability... and to the extent there is surplus, it is in competition for other uses in bitcoin like increased privacy or increased transaction capacity.

The Gini coefficient of computing performance is increasing.

Not to say that I think it's a non-starter, but it's not so obviously a huge win, particularly when, due to decentralization, we shouldn't care about the fastest computers but about the speed of the most widely deployed or inexpensive ones.

aliashraf
Legendary
Activity: 1456
Merit: 1174
Always remember the cause!
February 02, 2020, 07:05:54 AM
Last edit: February 02, 2020, 07:28:42 AM by aliashraf
#8

Quote from: aliashraf
addressed by hardware development
Quote from: gmaxwell
The Gini coefficient of computing performance is increasing.
Once BLS becomes a dominant electronic signature scheme, it could be supported by specialized circuits just like ECDSA. Meanwhile, a hierarchical sharding scheme can take care of the Gini coefficient problem.

Generally speaking, it doesn't look like best practice to me to treat CPU as the bottleneck rather than size in a cryptocurrency; here, communication is the worst nightmare. BLS can compress the witness data of blocks to unbelievable levels, and it deserves a lot more attention, I suppose.
gmaxwell
Moderator
Legendary
Activity: 4158
Merit: 8382
February 03, 2020, 01:32:41 AM
#9

Quote from: aliashraf
it could be supported by specialized circuits just like ECDSA.
Seems unlikely; ECDSA is effectively not supported by specialized circuits today (okay, on RF-powered smart cards or whatever, but not by anything useful for or in use in Bitcoin)... and the parameter selection for pairing keeps changing because of improving attacks.

Quote
Meanwhile, a hierarchical sharding scheme can take care of the Gini coefficient problem.
That sounds like a meaningless handwave to me.

Quote
Generally speaking, it doesn't look like best practice to me to treat CPU as the bottleneck rather than size in a cryptocurrency; here, communication is the worst nightmare. BLS can compress the witness data of blocks to unbelievable levels, and it deserves a lot more attention, I suppose.

The best way to make CPU the bottleneck would be to introduce operations which are 40x as expensive per byte... which BLS does. For almost all hardware sold as a full node in use in the developed world, sync speed is already CPU-bottlenecked. For faster hardware, or for nodes outside of places with fast bandwidth, that isn't the case.

You also run into problems with sticking slow operations in the block propagation critical path as you cannot _completely_ cache the validation, as is done today.

Most of the savings already result from schnorr aggregation. So the marginal increase isn't that tremendous compared to the performance.

There are other cool advantages, but they seem difficult to weigh against the weaker security and considerable CPU time increase.
aliashraf
Legendary
Activity: 1456
Merit: 1174
Always remember the cause!
February 04, 2020, 04:21:21 PM
Last edit: February 04, 2020, 05:42:00 PM by aliashraf
#10

Quote from: gmaxwell
That sounds like a meaningless handwave to me.
Hierarchical sharding is a must-go direction, IMO; it is the ultimate solution to your "Gini coefficient" argument and an alternative to your own proposed strategy of keeping things super cheap. Additionally, I brought it up to make a point: it is all about the design and the architecture when discussing bottlenecks.

Quote
operations which are 40x as expensive per byte.
What if the size is reduced comparatively? As I understand it, thanks to the latest developments presented here, a Bitcoin block with a 1000-tx load could be verified with a pair of BLS-aggregated keys in the same time as legacy blocks, on the same hardware.

Quote from: gmaxwell
You also run into problems with sticking slow operations in the block propagation critical path as you cannot _completely_ cache the validation, as is done today.
A very good point; it is a valid and strong objection to the Schnorr public key aggregation technique you originally proposed and Pieter Wuille has improved using the magic of Merkle trees to resist rogue public key attacks, BUT it is not valid for BLS!

Actually, the most important aspect of BLS key aggregation (due to one of its cool advantages: being non-interactive) is its compatibility with cached validation of transactions. Let's quote Boneh:
Quote from: Boneh (https://crypto.stanford.edu/~dabo/pubs/papers/BLSmultisig.html)
Compatibility with transaction validation caching. Recall that transaction validation caching is a process whereby a node validates transactions as they are added to the node's mempool and marks them as validated. When a previously validated transaction appears in a new block, the node need not re-validate the transaction. Transaction validation caching is compatible with signature aggregation across multiple transactions in a block. To see why, consider a node that receives a new block containing an aggregate signature σ=σ1⋯σn, aggregated over n transactions in the block. The node can identify all the transactions in this new block that are already marked as validated in its mempool, and divide σ by the signatures associated with these pre-validated transactions. Effectively, the pre-validated signatures are removed from the aggregate signature σ. Let σ′ be the resulting aggregate signature. Now the node needs only check that σ′ is a valid aggregate signature over the remaining transactions in the block, namely those transactions that are not already in its mempool.
Please recall that a BLS aggregate signature is simply the product of its individual signatures.
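
To make the bookkeeping in the quoted paragraph concrete, here is a toy sketch. The multiplicative group modulo a small prime stands in for the pairing-friendly curve group in which real BLS signatures live, and the final pairing check on σ′ is omitted; only the "divide out the cached signatures" step is shown.
Code:
p = 2039                                     # toy modulus standing in for the group

# Pretend per-transaction signatures (group elements) for a 4-tx block.
sigs = {"tx1": 100, "tx2": 200, "tx3": 300, "tx4": 400}

# The block itself would carry only their product, the aggregate signature.
sigma = 1
for s in sigs.values():
    sigma = sigma * s % p

# Transactions this node already validated from its mempool (it kept their
# individual signatures around for exactly this purpose).
cached = {"tx1", "tx3"}

# "Divide" the cached signatures out: one group inversion/multiplication
# per cached transaction.
sigma_prime = sigma
for tx in cached:
    sigma_prime = sigma_prime * pow(sigs[tx], -1, p) % p

# sigma_prime is the aggregate over just the unseen transactions; the node
# would now verify it (with a pairing) over {"tx2", "tx4"} only.
assert sigma_prime == sigs["tx2"] * sigs["tx4"] % p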

Quote from: gmaxwell
Most of the savings already result from schnorr aggregation. So the marginal increase isn't that tremendous compared to the performance. There are other cool advantages, but they seem difficult to weigh against the weaker security and considerable CPU time increase.
While brilliant, Schnorr public key aggregation is not mathematically proven to be secure AFAIK, and it is not included in the Taproot proposal and never will be, because of its above-mentioned inherent incompatibility with cached transactions.
gmaxwell
Moderator
Legendary
Activity: 4158
Merit: 8382
February 05, 2020, 12:57:33 AM
#11

Quote from: aliashraf
What if the size is reduced comparatively?
The computation w/ BLS aggregation isn't proportional to the size. It's proportional to the number of signed messages and the number of pubkeys.

Quote
A very good point; it is a valid and strong objection to the Schnorr public key aggregation technique you originally proposed
You've got that totally backwards. Schnorr aggregation is not slower than non-aggregated validation and has absolutely zero negative effect on caching. Validation caching simply continues to work without any modification because the unit of aggregation is also the same as the unit of inclusion/non-inclusion: a whole transaction.


Quote
Actually, the most important aspect of BLS key aggregation (due to one of its cool advantages: being non-interactive) is its compatibility with cached validation of transactions. Let's quote Boneh:
You misunderstand the text; it is saying that it doesn't destroy caching completely.

In Bitcoin today, and with Schnorr aggregation, script validation consists of a single hash table lookup for each transaction in the block.  "Is this transaction in my already validated cache? Yes? Good. Its scripts are valid. Done."
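
A tiny sketch of the contrast being drawn here, with illustrative names rather than Bitcoin Core's actual cache structures: when the unit of aggregation is a whole transaction, the per-transaction cache check is a plain set lookup, with no group arithmetic at all.
Code:
validated_cache = {"txid_a", "txid_b"}       # script-validated mempool txs

def scripts_left_to_check(block_txids):
    # One O(1) hash lookup per transaction in the new block.
    return [txid for txid in block_txids if txid not in validated_cache]

print(scripts_left_to_check(["txid_a", "txid_b", "txid_c"]))   # ['txid_c']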

Quote from: Boneh (https://crypto.stanford.edu/~dabo/pubs/papers/BLSmultisig.html)
and divide σ by the signatures associated with these pre-validated transactions.
This requires a cryptographic operation per already-included signature; the cost of that is more similar to a schnorr verification than it is to a hash table lookup, sadly. Keeping the data around to be divided out is also an additional overhead. This is why I said earlier that it 'also has complications with validation caching' and that the caching interaction has its own tradeoffs, particularly around worst-case performance. As I said, not insurmountable, but just another way that it isn't a totally free lunch.

Quote
While brilliant, Schnorr public key aggregation is not mathematically proven to be secure AFAIK,
It has been, it's the subject of the musig paper (which is cited by the one you linked...).

Quote
and it is not included in the Taproot proposal and never will be, because of its above-mentioned inherent incompatibility with cached transactions.

As mentioned you are simply incorrect there. I can't fathom how you ended up incorrect. Can you help me understand how you came to believe there was any negative caching interaction with cross input aggregation at all?
aliashraf
Legendary
Activity: 1456
Merit: 1174
Always remember the cause!
February 07, 2020, 12:05:44 PM
Last edit: February 07, 2020, 04:59:32 PM by aliashraf
#12

Quote from: aliashraf
A very good point; it is a valid and strong objection to the Schnorr public key aggregation technique you originally proposed
Quote from: gmaxwell
You've got that totally backwards. Schnorr aggregation is not slower than non-aggregated validation and has absolutely zero negative effect on caching. Validation caching simply continues to work without any modification because the unit of aggregation is also the same as the unit of inclusion/non-inclusion: a whole transaction.
Greg, with all due respect, I think you are basically wrong comparing simple cross-input aggregation and multisig in Schnorr with bullish cross-transaction aggregation in BLS. You have no such thing in Schnorr; you just can't have a couple of keys as the witness data for all the transactions of a block with, say, 2000 txns. What I'm saying is that such radical aggregation with BLS is still cache-friendly.

Quote from: gmaxwell
You misunderstand the text; it is saying that it doesn't destroy caching completely.
I'm confused by this _complete_ thing you are mentioning again. What is it supposed to mean in the context of this discussion? You said that key/signature aggregation puts us in trouble with caching, and I'm quoting Boneh directly addressing your issue; what am I missing?

Quote from: gmaxwell
In Bitcoin today, and with Schnorr aggregation, script validation consists of a single hash table lookup for each transaction in the block.  "Is this transaction in my already validated cache? Yes? Good. Its scripts are valid. Done."
Quote from: Boneh link=https://crypto.stanford.edu/~dabo/pubs/papers/BLSmultisig.html
and divide σ by the signatures associated with these pre-validated transactions.
This requires a cryptographic operation per already-included signature; the cost of that is more similar to a schnorr verification than it is to a hash table lookup, sadly. Keeping the data around to be divided out is also an additional overhead. This is why I said earlier that it 'also has complications with validation caching' and that the caching interaction has its own tradeoffs, particularly around worst-case performance. As I said, not insurmountable, but just another way that it isn't a totally free lunch.
Of course it is not a free lunch; there is no such thing in the universe. But your point is not valid because for every cached transaction it takes just a simple division operation to be excluded from the verification process and it is a pretty affordable price IMO.

Quote
While brilliant, Schnorr public key aggregation is not mathematically proven to be secure AFAIK,
It has been, it's the subject of the musig paper (which is cited by the one you linked...).
AFAIK, the OMDL assumption (hardness of OMDL problem), which is behind the proof presented in the musig paper, is still an open problem in cryptography.
The Drijvers et al. 2018 paper has discussed this, as you already know.

Quote
and it is not included in the Taproot proposal and never will be, because of its above-mentioned inherent incompatibility with cached transactions.
As mentioned you are simply incorrect there. I can't fathom how you ended up incorrect. Can you help me understand how you came to believe there was any negative caching interaction with cross input aggregation at all?
No, I can't. That is because I'm not talking about cross-input aggregation (where the transaction is the unit of aggregation), and I don't have any idea how such a problem could ever be proposed there; I'm talking about cross-transaction aggregation, where the block is the unit of aggregation.
gmaxwell
Moderator
Legendary
Activity: 4158
Merit: 8382
February 08, 2020, 01:31:18 PM
Last edit: February 08, 2020, 01:51:50 PM by gmaxwell
#13

Quote from: aliashraf
I think you are basically wrong comparing simple cross-input aggregation and multisig in Schnorr with bullish cross-transaction aggregation in BLS. You
Of course they can be compared. Indeed, aggregating across the entire block gets more gains, but the majority of the savings are achieved by aggregating within transactions, so the returns are diminishing, and aggregating more widely has costs, including adding radically different new strong assumptions... particularly so because you can get arbitrarily large aggregation in one transaction by coinjoining interactively.

Quote
I'm confused by this _complete_ thing you are mentioning again. What is it supposed to mean in the context of this discussion? You said that key/signature aggregation puts us in trouble with caching, and I'm quoting Boneh directly addressing your issue; what am I missing?
You misunderstand the material. I've implemented BLS aggregation (see the earliest thread on the subject), so I'm pretty confident that I do understand it. Smiley

Quote
it takes just a simple division operation to be excluded from the verification process and it is a pretty affordable price IMO.
Ah, no. If that were the case, it would be much easier. The 'division' in this case is just the algebraic notation. That's like saying that generating a public key is trivial because multiplying a 32-bit integer is a single-cycle instruction and pubkey generation is just multiplication. In fact pubkey generation is _algebraically_ represented as a multiplication, but the implementation is much more complex and slow than "just multiplication" would imply.

It's not cosmically expensive, but it's a lot more expensive than a hash table lookup.

Quote
AFAIK, the OMDL assumption (hardness of OMDL problem)
All hardness assumptions are inherently open problems in cryptography; that's why a hardness assumption is called an assumption.

OMDL is a pretty unassuming elaboration on DL: it says that if a challenger gives an attacker N random group elements and the attacker is permitted to make N-1 queries to a magic black box that will give the discrete log of an arbitrary point they provide, they cannot use this to produce the discrete logs of all N elements (only N-1, at most). If anything, it's more surprising that OMDL is not simply equivalent to DL.
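
For readers unfamiliar with the assumption, here is a rough sketch of the shape of the OMDL game as described above, again in the toy multiplicative group modulo a small prime (a stand-in; the real assumption is stated for the curve group, and the adversary is left abstract): N challenges, at most N-1 discrete-log oracle queries, and the adversary wins by returning all N discrete logs.
Code:
import secrets

p, q, g = 2039, 1019, 4                      # toy group: g has prime order q modulo p

def omdl_game(adversary, n):
    xs = [secrets.randbelow(q) for _ in range(n)]
    challenges = [pow(g, x, p) for x in xs]  # N random group elements
    queries = 0

    def dl_oracle(point):                    # the "magic black box"
        nonlocal queries
        queries += 1
        if queries > n - 1:
            raise RuntimeError("only N-1 oracle queries are allowed")
        # Brute force is fine in a toy group of ~1000 elements.
        return next(x for x in range(q) if pow(g, x, p) == point)

    answers = adversary(challenges, dl_oracle)
    return answers == xs                     # win: discrete logs of all N challenges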

But musig doesn't rely on OMDL: there was an earlier version of the paper that proved a two-round version under OMDL, but the proof was flawed. The current version of the paper only has the three-round version, which has a security proof that uses a plain DL hardness assumption. (The 'rounds' only come into play when you're aggregating with third parties.)

Cheers,
aliashraf
Legendary
Activity: 1456
Merit: 1174
Always remember the cause!
February 09, 2020, 05:54:11 PM
Last edit: February 09, 2020, 06:27:01 PM by aliashraf
#14

Quote from: gmaxwell
The 'division' in this case is just the algebraic notation. That's like saying that generating a public key is trivial because multiplying a 32-bit integer is a single-cycle instruction and pubkey generation is just multiplication. In fact pubkey generation is _algebraically_ represented as a multiplication, but the implementation is much more complex and slow than "just multiplication" would imply.

It's not cosmically expensive, but it's a lot more expensive than a hash table lookup.
We have the divide-and-conquer class of algorithms with a typical complexity of O(n log n), n being the number of computer words; not a big deal, as you've already mentioned.


Now, let me open up about my concern:
In 2008 the Schnorr patent expired and the only notable reason for not choosing it over ECDSA became void; still, Satoshi stayed with the latter because it was more widely adopted and there was a ready-to-use open-source library available. From a pragmatic point of view, it could be regarded as a wise choice, but it wasn't: unproven security, transaction malleability, bloated signature data, infeasibility of aggregation, ... were not the least of the cons of ECDSA compared to Schnorr.

Now I'm afraid that history is somehow repeating itself: before Boneh's latest works (especially the 2018 paper), BLS was not in the best shape to be chosen over Schnorr, and bitcoiners went after the latter almost rightfully. Soon it became the ultimate choice for a long-awaited transition to a better and more promising signature scheme, but now we have a serious competitor with promising features and superiorities, again with a poor installed base and a lack of libraries.

What are we gonna do? Stick with the mature one or ride the rising wave? Again, we can decide either from a pragmatic perspective or from a strategic one, and I'm not totally convinced that they both lead to the same result.
oryhp
Jr. Member
Activity: 59
Merit: 87
May 28, 2022, 04:29:18 PM
#15

Sorry for reviving the thread; I wasn't able to find a thread about non-interactive half-aggregation schemes for Schnorr, so this topic felt the closest to that.
I think non-interactive half-aggregation of signatures also makes the group of transactions atomic, at least for observers that have not seen the transactions prior to half-aggregation. This "atomic group" feature has some neat properties that I tried to capture here: https://gist.github.com/phyro/209dde6bdf756f1a5bbafde7f9ba92db.

This isn't necessarily a good or welcome change for Bitcoin, but it may be worth thinking about what it would enable. Having non-interactive transaction merging through half-aggregation (or perhaps some other method?) might enable some new options, including not being able to distinguish coinjoin transactions if they're represented as groups. If coinjoins were done through multiple transactions, then non-interactive aggregation could act as a non-interactive coinjoin. There are some potential downsides, e.g. adaptor signatures, joining a tx with higher fees to prioritize yours faster, etc.

I'm not a cryptographer, nor do I know the details of how Bitcoin works. Hope someone finds this worth reading.