Bitcoin Forum
Author Topic: [Schnorr] Should batched verification result in reduced weight per sig?  (Read 161 times)
Carlton Banks
Legendary
*
Offline Offline

Activity: 2380
Merit: 1673



View Profile
February 17, 2019, 07:26:25 PM
Last edit: February 18, 2019, 05:46:16 PM by Carlton Banks
Merited by dbshck (4), bones261 (3), ETFbitcoin (1), HeRetiK (1), MagicByt3 (1)
 #1

So the rationale for introducing transaction weight is to put a separate price on signature operations, to reflect the resources sigops use when running a fully validating node (i.e. a price component for block space and a price component for sigops when determining tx fee).

Should this be reflected in the weight value assigned to transactions using schnorr sigs in the future?



Using schnorr sigs already reduces the proportion of a tx comprising the signature:

  • BIP-schnorr defines a standardised 64-byte signature size, smaller than the typical ECDSA sig size (71-72 bytes)
  • Schnorr permits signature aggregation, which treats the sum of >1 signature as a single valid signature covering more than 1 tx input
  • Taproot will allow conditional branches in more spending scripts to be collapsed into a Merkle root hash for all branches, so only the condition that is met is ever recorded on the blockchain

All the above reduce the space that signatures use on chain, and sig-agg can reduce the number of sigops used drastically for transactions with multiple inputs.

But batch verification works across an entire block of transactions, which would improve verification performance ~2x according to BIP-schnorr. That would make far more difference to validation performance than any of the points above, as it functions whether sig-agg/taproot are used or not (and the 64-byte size reduces space on chain, not sigops).
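As a sketch of what batch verification buys: instead of checking each signature's equation separately, the verifier checks one random linear combination of all of them, so the expensive group exponentiations can be shared. The toy code below does this in a small multiplicative group mod p rather than on secp256k1, and the hash and encoding details are simplified placeholders, not BIP-schnorr's:

```python
# Toy Schnorr batch verification in a multiplicative group mod p.
# NOT real BIP-schnorr: the group, hash, and encodings are stand-ins.
import hashlib
import random

# Small group: p = 2q + 1 with p, q prime; g generates the order-q subgroup.
q = 1019
p = 2 * q + 1          # 2039
g = 4                  # 4 = 2^2 is a quadratic residue, so it has order q

def H(*parts):
    # Placeholder challenge hash, reduced mod the group order.
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)          # secret key
    return x, pow(g, x, p)              # (x, P = g^x)

def sign(x, P, msg):
    k = random.randrange(1, q)          # nonce
    R = pow(g, k, p)
    e = H(R, P, msg)
    s = (k + e * x) % q
    return R, s

def verify(P, msg, R, s):
    # Single-signature check: g^s == R * P^e
    e = H(R, P, msg)
    return pow(g, s, p) == (R * pow(P, e, p)) % p

def batch_verify(items):
    # items: list of (P, msg, R, s). Weight each equation by a random
    # coefficient a_i and check one combined equation:
    #   g^(sum a_i*s_i) == prod R_i^a_i * P_i^(a_i*e_i)
    lhs_exp, rhs = 0, 1
    for P, msg, R, s in items:
        a = random.randrange(1, q)
        e = H(R, P, msg)
        lhs_exp = (lhs_exp + a * s) % q
        rhs = (rhs * pow(R, a, p) * pow(P, (a * e) % q, p)) % p
    return pow(g, lhs_exp, p) == rhs
```

The random coefficients stop an attacker from sneaking in two invalid signatures whose errors cancel; if any single signature is bad, the combined equation fails (except with negligible probability in a real-sized group).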


My question is: to incentivise the gains for the network, should schnorr sigs be assigned a lower weight than ECDSA sigs? It seems to make sense, given how much validation performance can be realised.

Vires in numeris
gmaxwell
Moderator
Legendary
*
Online Online

Activity: 2744
Merit: 2266



View Profile
February 18, 2019, 02:04:39 AM
Last edit: February 18, 2019, 02:45:45 AM by gmaxwell
Merited by Foxpup (6), Carlton Banks (6), dbshck (4), bones261 (4), ETFbitcoin (2), HeRetiK (1), Coding Enthusiast (1)
 #2

  • BIP-schnorr defines a standardised 64-byte size, smaller than the typical ECDSA sig size (71-72 bytes)
NIT: 64 bytes instead of 72 bytes.

Quote
  • Schnorr permits signature aggregation, that treats the sum of >1 signature as a single valid signature for more than 1 transaction

Multiple concepts are getting confused here, so I can't tell exactly what you're talking about.

There is signature aggregation, which combines signatures from multiple inputs (but probably just one transaction) into one, or efficient threshold signatures, which allow many signers to produce a single signature for a single input.

Both make signatures in transactions much smaller, so they don't justify any change in how weight is computed.

Quote
  • Taproot will allow conditional branches in more spending scripts to be collapsed into a Merkle root hash for all branches, so only the condition that is met is ever recorded on the blockchain
This directly makes transactions much smaller, so again, no need to change how weight works.

Quote
But batch verification works across an entire block of transactions, which would improve verification performance ~2x according to BIP-schnorr.
Yes, it makes the cold cache catchup case spend half the time in signature validation. (non-catchup doesn't do validation in the critical path due to caching! ---  the small batching you can do as txn come in doesn't get much speedup)

Quote
My question is: to incentivise the gains for the network, should schnorr sigs be assigned a lower weight than ECDSA sigs? It seems to make sense, given how much validation performance can be realised.
The eventual speedup from batching (and the speedup we achieved from caching in the non-catchup case) was part of the justification for having witness data have lower weight to begin with.

With the exception of batching, the other advantages you cite already result in lower weight (in the cross-input case, much lower weight). So they're naturally already rewarded.

Different users experience different pain points, some are cpu limited, some are bandwidth limited, some are power limited, some are storage limited. Many are some mixture of multiple of these.  Because of this no single weight formula can be optimal.   What really matters is that it sets the incentives in the right general direction, in order to break ties in the favour of public interest.

Generally we can assume that in the long run most users are going to do whatever is most cost effective for them. If foobar signatures were a LOT better for the network it would still be sufficient that they be only slightly better for the end user, even if making them much better would be justifiable under some cost model... even a little better will get them made a default.  Some users will have different motivations and make different choices, but a small number of exceptions is mostly irrelevant for the overall network health.  This is important, because a perfect balance isn't possible.  E.g. with weight, you could easily argue that an 8:1 ratio or a 16:1 ratio would have been better-- but a higher ratio means a LOT worse worst-case bandwidth, and so wouldn't be a good trade-off for those users who are bandwidth limited.   The fact that only the "direction of incentive" needs to be right, not so much the magnitude, means it's possible to make compromises that give good results for everyone without screwing over some cost models.
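To make the ratio trade-off concrete: under BIP141, non-witness bytes count 4x and witness bytes 1x toward a 4,000,000-weight block limit, so raising the discount ratio directly raises the worst-case raw block size. A small illustration (the byte counts in the example transaction are made up, not taken from a real transaction):

```python
# Sketch of the BIP141 weight formula and why a bigger witness
# discount ratio worsens worst-case block size in raw bytes.

MAX_WEIGHT = 4_000_000        # BIP141 block weight limit

def tx_weight(non_witness_bytes: int, witness_bytes: int, ratio: int = 4) -> int:
    # Non-witness data is counted `ratio` times, witness data once.
    return non_witness_bytes * ratio + witness_bytes

def worst_case_block_bytes(ratio: int, base_limit: int = 1_000_000) -> int:
    # Keeping the same effective base-size limit, the weight cap becomes
    # base_limit * ratio; a block of almost pure witness data (priced at
    # 1 weight unit per byte) can then approach that many raw bytes.
    return base_limit * ratio

print(tx_weight(200, 107))             # 4*200 + 107 = 907 weight units
print(worst_case_block_bytes(4))       # 4,000,000 bytes worst case at 4:1
print(worst_case_block_bytes(16))      # 16,000,000 bytes at 16:1
```

This is the bandwidth argument above in miniature: the 8:1 or 16:1 ratios that look better under a pure-CPU cost model quadruple or more the worst-case bytes a bandwidth-limited node must download.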
Coding Enthusiast
Hero Member
*****
Offline Offline

Activity: 626
Merit: 853


Novice C♯ Coder


View Profile WWW
February 18, 2019, 04:16:24 AM
Merited by aliashraf (2), ETFbitcoin (1)
 #3

    BIP-schnorr defines a standardised 64-byte size, smaller than the typical ECDSA sig size (71-72 bytes)

    To be fair, that has nothing to do with Schnorr; the size is reduced by simply dropping the (in the case of Bitcoin) useless DER encoding. You can already drop the extra 6 to 8 bytes from every single signature that has been created in the past 10 years, since they all tell you the same thing:
    - 1x DER sequence tag: 0x30 (we already know it is 2x 32-byte integers)
    - 3x DER lengths: telling us what we already know about the lengths (32 bytes)
    - 2x DER int tags: 0x02, which we already know are integers (r and s)
    - possibly up to 2x 0x00 bytes prepended to tell us these numbers are positive, which again we already know
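For illustration, here is roughly where those bytes go; a hypothetical encoder comparing DER against a fixed 64-byte r||s encoding (a sketch of the byte accounting, not a consensus-rule parser):

```python
# Sketch: DER-encode an ECDSA (r, s) pair and compare its size with a
# fixed 64-byte "compact" r||s encoding of the same two integers.

def der_int(n: int) -> bytes:
    body = n.to_bytes(max(1, (n.bit_length() + 7) // 8), "big")
    if body[0] & 0x80:               # high bit set: prepend 0x00 so the
        body = b"\x00" + body        # value isn't read as negative
    return b"\x02" + bytes([len(body)]) + body   # 0x02 = INTEGER tag

def der_sig(r: int, s: int) -> bytes:
    seq = der_int(r) + der_int(s)
    return b"\x30" + bytes([len(seq)]) + seq     # 0x30 = SEQUENCE tag

def compact_sig(r: int, s: int) -> bytes:
    return r.to_bytes(32, "big") + s.to_bytes(32, "big")  # always 64 bytes

# Worst case: both r and s have their top bit set, forcing two pad bytes.
r = s = (1 << 255) | 1
print(len(der_sig(r, s)), len(compact_sig(r, s)))   # 72 64
```

The tags, lengths, and padding are exactly the 6-8 redundant bytes listed above: for 32-byte r and s they carry no information a fixed-width encoding doesn't already imply.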


    Carlton Banks
    Legendary
    *
    Offline Offline

    Activity: 2380
    Merit: 1673



    View Profile
    February 18, 2019, 10:39:30 AM
     #4

    • Schnorr permits signature aggregation, that treats the sum of >1 signature as a single valid signature for more than 1 transaction

    So multiple concepts get confused here, so I can't tell exactly what you're talking about.    

    There is signature aggregation which combines signatures from multiple inputs (but probably just one transaction) in to one

    s/transaction/input/


    or efficient threshold signatures which allows many signers to produce a single signature for a single input.

    Both make signatures in transactions much smaller, so don't justify any change in how weight is computed.

    Multisig Schnorr allows sig aggregation too; I forgot about that.


    • Taproot will allow conditional branches in more spending scripts to be collapsed into a Merkle root hash for all branches, so only the condition that is met is ever recorded on the blockchain
    This directly makes transactions much smaller, so again, no need to change how weight works.

    But batch verification works across an entire block of transactions, which would improve verification performance ~2x according to BIP-schnorr.
    Yes, it makes the cold cache catchup case spend half the time in signature validation. (non-catchup doesn't do validation in the critical path due to caching! ---  the small batching you can do as txn come in doesn't get much speedup)

    Ahhhh, I didn't retain that from BIP-schnorr either: batched validation depends on a certain number of cached signatures to work. I assumed that individual blocks were the unit of resolution at which batching would happen, as the batching performance graph shows a 2-2.5x improvement at ~2500 transactions, which is roughly the maximum number of transactions per block.


    The eventual speedup from batching (and the speedup we achieved from caching in the non-catchup case) was part of the justification for having witness data have lower weight to begin with.

    With the exception of batching the other advantages you cite already result  in lower weight (in the cross input case, much lower weight).  So they're naturally already awarded.

    Sure, but my point is that although batching doesn't affect the total number of signatures to validate, it does incentivise the same thing that weight differentiation does (validation performance). Really, the other changes I cited don't alter the weight per sigop, simply the aggregate weight of each transaction (of course this can make a huge difference to the total number of sigops per block).


    Different users experience different pain points, some are cpu limited, some are bandwidth limited, some are power limited, some are storage limited. Many are some mixture of multiple of these.  Because of this no single weight formula can be optimal.   What really matters is that it sets the incentives in the right general direction, in order to break ties in the favour of public interest.

    Generally we can assume that in the long run most users are going to do whatever is most cost effective for them. If foobar signatures were a LOT better for the network it would still be sufficient that they be only slightly better for the end user, even if making them much better would be justifiable under some cost model... even a little better will get them made a default.  Some users will have different motivations and make different choices, but a small number of exceptions is mostly irrelevant for the overall network health.  This is important, because a perfect balance isn't possible.  E.g. with weight, you could easily argue that an 8:1 ratio or a 16:1 ratio would have been better-- but a higher ratio means a LOT worse worst-case bandwidth, and so wouldn't be a good trade-off for those users who are bandwidth limited.   The fact that only the "direction of incentive" needs to be right, not so much the magnitude, means it's possible to make compromises that give good results for everyone without screwing over some cost models.

    That makes sense. I'm still interested in an argument for why improving (future) IBD by up to 2x shouldn't lessen the weight assigned per Schnorr signature (regardless of how many signatures or script hashes are aggregated together). You're essentially saying that the witness discount was formulated to price any signature scheme, no matter its validation performance?

    Carlton Banks
    Legendary
    *
    Offline Offline

    Activity: 2380
    Merit: 1673



    View Profile
    March 26, 2019, 04:20:36 PM
     #5

    Different users experience different pain points, some are cpu limited, some are bandwidth limited, some are power limited, some are storage limited. Many are some mixture of multiple of these.  Because of this no single weight formula can be optimal.   What really matters is that it sets the incentives in the right general direction, in order to break ties in the favour of public interest.

    Generally we can assume that in the long run most users are going to do whatever is most cost effective for them. If foobar signatures were a LOT better for the network it would still be sufficient that they be only slightly better for the end user, even if making them much better would be justifiable under some cost model... even a little better will get them made a default.  Some users will have different motivations and make different choices, but a small number of exceptions is mostly irrelevant for the overall network health.  This is important, because a perfect balance isn't possible.  E.g. with weight, you could easily argue that an 8:1 ratio or a 16:1 ratio would have been better-- but a higher ratio means a LOT worse worst-case bandwidth, and so wouldn't be a good trade-off for those users who are bandwidth limited.   The fact that only the "direction of incentive" needs to be right, not so much the magnitude, means it's possible to make compromises that give good results for everyone without screwing over some cost models.


    So I read this too quickly the first time, and I think I now see your point: one ratio for sig weight doesn't suit all possible users, given the differing constraints on their nodes. Signature aggregation improves the CPU constraint, but that isn't the only consideration.
