Author Topic: What are the 'NECESSARY' things in the segwit fork?  (Read 1350 times)
kano (OP)
Legendary
Offline

Activity: 4494
Merit: 1808

Linux since 1997 RedHat 4
March 28, 2017, 02:16:03 AM
Merited by ABCbits (2)
 #1

I'm wondering what is actually 'NECESSARY' in the segwit fork?

Two of the things that I hear being repeated by the segwit evangelists seem to be malleability and the so-called quadratic issue.

--

Malleability is an accounting software issue, not a bitcoin issue.
If you track the transactions you own and the addresses you own, malleability doesn't matter.
Thus it's not a bitcoin fork issue, it's a good or poorly written accounting software issue.

--

The quadratic issue - I've yet to understand why it even matters.
Anyone want to chime in here and explain why it 'actually' matters in the real bitcoin world?

I'll give an explanation of why I guess it doesn't matter:

Once bitcoin gets a transaction, it processes it.
Is there some bad design issue in bitcoin where the necessary results of this processing are thrown away, so the processing has to be repeated at later times?
If this bad design doesn't exist, then once a transaction is processed, the time it takes to process no longer matters.
(I brought this up with Gavin in 2011/2012 so I imagine it shouldn't still be an issue ...)
Worst case scenario, I guess you could keep the processed details only for the "slow" to process transactions, if there was some unexpected reason why this isn't possible for all transactions.

Now, people don't like to throw away blocks, so if someone puts a "slow" hidden transaction (i.e. one unknown elsewhere on the bitcoin network) into their block mining work and then distributes the transaction normally with a block they find, that will increase the chance of that block being orphaned if it is "slow".
Sounds like a bad choice by anyone wanting to do that.

From a mining point of view, if these transactions are able to be identified, then it might even be worth considering not mining them?

Anyone have any actual real statistics about these '"slow" transactions'?
All transactions that ever existed are there in the blockchain, so it's possible to actually give real information about them also - how many, how often, processing time statistics etc, rather than making up theoretical stats about how often they occur.

This issue is also given as an excuse by the segwit evangelists to have in the current core code, awaiting activation, that the cost of using a 1xxxx address should be four times the cost of using a 3xxxx address - i.e. kill off the 1xxxx addresses.
I imagine those stats I mentioned could give some insight into this also ...

Pool: https://kano.is - low 0.5% fee PPLNS 3 Days - Most reliable Solo with ONLY 0.5% fee   Bitcointalk thread: Forum
Discord support invite at https://kano.is/ Majority developer of the ckpool code - k for kano
The ONLY active original developer of cgminer. Original master git: https://github.com/kanoi/cgminer
achow101
Moderator
Legendary
expert
Offline

Activity: 3388
Merit: 6637

Just writing some code
March 28, 2017, 02:40:46 AM
Merited by ABCbits (11)
 #2

Malleability is an accounting software issue, not a bitcoin issue.
If you track the transactions you own and the addresses you own, malleability doesn't matter.
Thus it's not a bitcoin fork issue, it's a good or poorly written accounting software issue.
While it is an issue of wallet software, malleability has also been known to wreak havoc on many popular services and exchanges. It does not usually cause a loss of funds, but it can be extremely confusing and frustrating when transactions are malleated. I agree that those services and exchanges should get their act together and fix their own systems to handle malleability, but they don't always do so, and users cannot easily check whether a service handles malleability since services typically do not publish their source code.

Furthermore, malleability causes issues with unconfirmed transaction chains. Nearly all major wallets allow you to spend from unconfirmed change outputs. When these are spent from, they create unconfirmed transaction chains which can be broken by transaction malleability. Here, this can cause some loss of funds as people are likely to still send goods once they see the unconfirmed transaction, even though they really should wait for a confirmation.

Lastly, malleability makes implementing further layer 2 scaling solutions such as payment channels much more difficult. Dealing with transaction malleability for such solutions, while not impossible, makes the task much harder. In order for Bitcoin to successfully scale, we will need layer 2 solutions and thus will need some way to make transactions non-malleable.
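
To make the mechanics concrete: a legacy txid is simply the double-SHA256 of the fully serialized transaction, signatures included, so a third party who re-encodes a signature without invalidating it changes the txid and strands any unconfirmed child spending it. A minimal Python sketch of that point (the byte strings are placeholders, not a real transaction serializer):

Code:
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def legacy_txid(serialized_tx: bytes) -> str:
    # Legacy txid = double-SHA256 of the whole serialized tx, signatures included.
    return double_sha256(serialized_tx)[::-1].hex()

# Two stand-in serializations of the "same" payment: identical inputs and outputs,
# but the scriptSig carries a differently-encoded (still valid) signature.
tx_original  = b"\x01\x00\x00\x00" + b"<inputs + signature encoding A>" + b"<outputs>"
tx_malleated = b"\x01\x00\x00\x00" + b"<inputs + signature encoding B>" + b"<outputs>"

print(legacy_txid(tx_original))
print(legacy_txid(tx_malleated))  # different txid: an unconfirmed child that spends
                                  # an output of tx_original now points at a txid
                                  # that will never confirm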

The quadratic issue - I've yet to understand why it even matters.
Anyone want to chime in here and explain why it 'actually' matters in the real bitcoin world?
A miner can create a 1 MB transaction inside a block which, due to quadratic sighashing, can take a long time to validate. This has been done before. The block and the transaction are still valid; they just take a long time to validate.

Is there some bad design issue in bitcoin where the necessary results of this processing are thrown away, so the processing has to be repeated at later times?
Yes.

Currently sighashing requires a unique preimage that is hashed for every single input of a transaction. This preimage consists of the entire unsigned transaction, the output script being spent from, and the sighash type. As you add more inputs, the preimage for every single input that must be hashed becomes larger and larger. This makes it grow quadratically. With a massive 1 MB transaction, this means that you could end up hashing more than 1 GB of data just to verify a transaction. Since the preimage for every single input is unique, none of the sighashing can be cached for later use as it is all useless for anything else.

What segwit does is make the sighash preimage for every input essentially a fixed size. This means the total hashing grows linearly: the preimages do not change in size, only the number of preimages that must be hashed does.
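
As a rough illustration of that difference, here is a back-of-the-envelope model in Python (not consensus code; the input, output and preimage sizes are assumed typical P2PKH/P2WPKH values):

Code:
# Rough model of how many bytes get run through SHA256 when verifying all the
# signatures in one transaction.
SIG_INPUT = 148      # assumed: a full input including its signature (scriptSig), bytes
BARE_INPUT = 41      # assumed: the same input with its scriptSig blanked, as in a sighash preimage
OUTPUT = 34          # assumed: typical P2PKH output, bytes
OVERHEAD = 14        # version, in/out counts, locktime, sighash type field

def legacy_bytes_hashed(n_inputs: int, n_outputs: int = 2) -> int:
    # Legacy sighash: each input hashes a preimage containing ALL inputs (the other
    # scriptSigs blanked) plus all outputs, so the total work grows ~quadratically.
    preimage = OVERHEAD + n_inputs * BARE_INPUT + n_outputs * OUTPUT
    return n_inputs * preimage

def segwit_bytes_hashed(n_inputs: int, preimage: int = 157) -> int:
    # BIP143: the per-input preimage is essentially constant size (the shared hashes
    # of prevouts/sequences/outputs are computed once), so the work grows linearly.
    return n_inputs * preimage

for n in (100, 1_000, 6_700):   # ~6,700 such inputs roughly fills a 1 MB transaction
    print(f"{n:>5} inputs: legacy ~{legacy_bytes_hashed(n) / 1e6:,.1f} MB hashed, "
          f"segwit ~{segwit_bytes_hashed(n) / 1e3:,.0f} kB hashed")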

Now, people don't like to throw away blocks, so if someone puts a "slow" hidden transaction (i.e. one unknown elsewhere on the bitcoin network) into their block mining work and then distributes the transaction normally with a block they find, that will increase the chance of that block being orphaned if it is "slow".
Sounds like a bad choice by anyone wanting to do that.
Suppose a miner has connections to a number of other miners through something like FIBRE. He can create a block that takes a long time to validate by including a large transaction that requires a lot of sighashing. This block is then broadcast to those miners, who then begin validating it. While they are validating it, they will likely still be working on the previous block, but the miner who made the block has a head start and thus is more likely to find the next block on top of the slow block before the other miners do, while they are bogged down with validating the thing. This gives that miner an advantage over the other miners.
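
To get a feel for why even a modest validation delay matters, here is a toy calculation (assuming Poisson block arrivals with a 10 minute average; the delays are made-up examples):

Code:
import math

# While the rest of the network is stuck validating the slow block for T seconds,
# its creator is already mining on top of it. The chance that some block would
# otherwise have been found on the network during that window is 1 - exp(-T/600).
for delay in (10, 30, 60, 120):   # assumed validation delays, in seconds
    p = 1 - math.exp(-delay / 600)
    print(f"{delay:>3}s head start -> {p:.1%} chance a competing block would have appeared")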

From a mining point of view, if these transactions are able to be identified, then it might even be worth considering not mining them?
Yes. In fact, such large transactions are considered non-standard, so miners using Core would not choose to mine those transactions at all. However, that does not stop a miner from making and including such a transaction in their own blocks.

kano (OP)
Legendary
Offline

Activity: 4494
Merit: 1808

Linux since 1997 RedHat 4
March 28, 2017, 03:17:22 AM
 #3

OK, so I guess firstly ... 'NECESSARY' seems to be missing Smiley

Anyway, about one thing:

...
Now, people don't like to throw away blocks, so if someone puts a "slow" hidden transaction (i.e. one unknown elsewhere on the bitcoin network) into their block mining work and then distributes the transaction normally with a block they find, that will increase the chance of that block being orphaned if it is "slow".
Sounds like a bad choice by anyone wanting to do that.
Suppose a miner has connections to a number of other miners through something like FIBRE. He can create a block that takes a long time to validate by including a large transaction that requires a lot of sighashing. This block is then broadcast to those miners, who then begin validating it. While they are validating it, they will likely still be working on the previous block, but the miner who made the block has a head start and thus is more likely to find the next block on top of the slow block before the other miners do, while they are bogged down with validating the thing. This gives that miner an advantage over the other miners.
...
Well, I'm not sure if it's quite so cut and dried as that?

If 99% of the network is working on the earlier block, while waiting to try and validate the "slow" block, then they could produce a competing block to the "slow" block.
Then most of that 99% could end up working on the competing block and thus end up orphaning the "slow" block.

I'm of course not sure how core accepts multiple blocks, since if it is serialised - and core's record on multi-threading is poor at best so that may be the case - then indeed being serialised would help the "slow" block.

I've chosen 99% since anyone with anything but a small % of the network would be a fool to do this - it would be obvious immediately that they were the ones that did it ...

Foxpup
Legendary
Offline

Activity: 4354
Merit: 3044

Vile Vixen and Miss Bitcointalk 2021-2023
March 28, 2017, 10:29:05 AM
 #4

This issue is also given as an excuse by the segwit evangelists to have in the current core code, awaiting activation, that the cost of using a 1xxxx address should be four times the cost of using a 3xxxx address - i.e. kill off the 1xxxx addresses.
If you're talking about the block size increase to 4MB for SegWit transactions, this actually isn't a "necessary thing". SegWit could just as easily leave the limit at 1MB, but everyone was screaming for a block size increase and SegWit allowed it to be done safely, so it did. Anyone complaining about this feature obviously doesn't want big blocks as badly as they said they did. But don't worry - if SegWit fails to activate due to the block size controversy, it can always be proposed again without the block size increase.

Will pretend to do unspeakable things (while actually eating a taco) for bitcoins: 1K6d1EviQKX3SVKjPYmJGyWBb1avbmCFM4
I am not on the scammers' paradise known as Telegram! Do not believe anyone claiming to be me off-forum without a signed message from the above address! Accept no excuses and make no exceptions!
kano (OP)
Legendary
Offline

Activity: 4494
Merit: 1808

Linux since 1997 RedHat 4
March 28, 2017, 10:37:05 AM
 #5

This issue is also given as an excuse by the segwit evangelists to have in the current core code, awaiting activation, that the cost of using a 1xxxx address should be four times the cost of using a 3xxxx address - i.e. kill off the 1xxxx addresses.
If you're talking about the block size increase to 4MB for SegWit transactions, this actually isn't a "necessary thing". SegWit could just as easily leave the limit at 1MB, but everyone was screaming for a block size increase and SegWit allowed it to be done safely, so it did. Anyone complaining about this feature obviously doesn't want big blocks as badly as they said they did. But don't worry - if SegWit fails to activate due to the block size controversy, it can always be proposed again without the block size increase.
No, the code specifically costs 1xxx address transactions at 4 times the cost of 3xxx address transactions.

The excuse given is the quadratic issue.

The result will be that if you use core and generate transactions, it will expect 4 times the fee per byte for 1xxx transactions vs 3xxx transactions.

Foxpup
Legendary
Offline

Activity: 4354
Merit: 3044

Vile Vixen and Miss Bitcointalk 2021-2023
March 28, 2017, 12:06:56 PM
 #6

No, the code specifically costs 1xxx address transactions at 4 times the cost of 3xxx address transactions.

The excuse given is the quadratic issue.
That issue is at best only indirectly related. Non-SegWit transactions are weighted 4x heavier solely so that you can't fit more of them into a 4MB SegWit block than will fit into a 1MB non-SegWit block. That way old nodes won't reject the blocks for being too big, making the block size increase backwards-compatible. The quadratic validation issue has nothing to do with it except for SegWit's solution to it making the increase to 4MB blocks safe in the first place.
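
For reference, the weighting Foxpup describes falls out of the BIP141 weight formula: non-witness bytes count 4 weight units, witness bytes count 1, and a block may carry at most 4,000,000 weight units. A small sketch (the per-transaction sizes below are assumed typical values, not exact serializations):

Code:
MAX_BLOCK_WEIGHT = 4_000_000

def weight(non_witness_bytes: int, witness_bytes: int = 0) -> int:
    # Non-witness bytes are what old nodes see, so they can never see more than ~1 MB.
    return 4 * non_witness_bytes + witness_bytes

def vsize(non_witness_bytes: int, witness_bytes: int = 0) -> int:
    # "virtual size", the byte figure that fee-per-byte estimates are quoted against
    return (weight(non_witness_bytes, witness_bytes) + 3) // 4

# Assumed typical sizes: a 1-input/2-output legacy P2PKH tx (~226 bytes, all
# non-witness) vs a comparable P2WPKH tx (~110 non-witness + ~108 witness bytes).
legacy_w = weight(226)
segwit_w = weight(110, 108)
print(legacy_w, segwit_w)                                          # 904 vs 548 weight units
print(MAX_BLOCK_WEIGHT // legacy_w, MAX_BLOCK_WEIGHT // segwit_w)  # rough txs per block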

The result will be that if you use core and generate transactions, it will expect 4 times the fee per byte for 1xxx transactions vs 3xxx transactions.
Unavoidable. Filling a block with non-SegWit transactions means that block is limited to 1MB, so it can't contain as many transactions, and miners gotta make up that fee revenue somehow. The freeloaders have always been complaining about fees; there's little point in trying to please them.

kano (OP)
Legendary
Offline

Activity: 4494
Merit: 1808

Linux since 1997 RedHat 4
March 29, 2017, 02:18:42 AM
 #7

...
Is there some bad design issue in bitcoin where the necessary results of this processing are thrown away, so the processing has to be repeated at later times?
Yes.

Currently sighashing requires a unique preimage that is hashed for every single input of a transaction. This preimage consists of the entire unsigned transaction, the output script being spent from, and the sighash type. As you add more inputs, the preimage for every single input that must be hashed becomes larger and larger. This makes it grow quadratically. With a massive 1 MB transaction, this means that you could end up hashing more than 1 GB of data just to verify a transaction. Since the preimage for every single input is unique, none of the sighashing can be cached for later use as it is all useless for anything else.
...
So I had a think about this (for 5 minutes Smiley ) and I see that even I may have been overstating the quadratic issue?

Are you basically implying that the quadratic issue is caused by the number of inputs/outputs? or is it also how the transaction is created and the number doesn't really matter too much?
Would it be correct to say that a transaction size limit would solve the quadratic issue?
i.e. pick a number Smiley 50K? 25K? limit per transaction and then it doesn't really matter?

Clearly if the block size ever did get to 32MB, it would be foolhardy to allow ~32MB transactions, so there should be some limit in there somewhere anyway.

praxeology
Newbie
Offline

Activity: 18
Merit: 2
March 29, 2017, 03:45:50 AM
Last edit: March 29, 2017, 03:56:44 AM by praxeology
Merited by ABCbits (1)
 #8

Hi Kano.  Maybe what you are missing is that in SegWit blocks, the SegWit transaction signature data is moved into a data structure that older versions of the software don't see nor consider to be part of the block.  And signature data takes up something like 75% of a pre-segwit transaction's size.  Given that we need to keep pre-segwit transactions in full, forever, to be able to recalculate their txids, whereas for segwit transactions we can eventually throw away the signature data and still calculate the txid... this is the other reason why old transactions are weighted more, besides the fact that they also take up more space from the perspective of non-segwit nodes.
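
A minimal sketch of that txid point (the byte strings are placeholders, not a full serializer): under BIP141 a segwit txid is computed over the transaction without its witness data, so the signatures can be discarded later and the txid is still recomputable, whereas a legacy txid commits to the signatures.

Code:
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Placeholder byte strings standing in for the serialized pieces of one transaction.
non_witness_part = b"<version + inputs + outputs + locktime>"
witness_part     = b"<signatures and pubkeys>"

# Legacy: the txid covers everything, signatures included.
legacy_txid = double_sha256(non_witness_part + witness_part)

# SegWit (BIP141): the txid covers only the non-witness serialization; the witness
# is committed to separately via the wtxid in the coinbase witness commitment.
segwit_txid  = double_sha256(non_witness_part)
segwit_wtxid = double_sha256(non_witness_part + witness_part)

# Drop the witness data entirely and the segwit txid is still recomputable,
# which is why that data can eventually be pruned.
del witness_part
print(segwit_txid == double_sha256(non_witness_part))   # True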

Edit: and about the "slow" stuff you are talking about... I think that has more to do with schnorr signatures... "https://en.wikipedia.org/wiki/Schnorr_signature" which are not a part of SegWit.  I don't think CPU time has anything to do with weighting differences.  Correct me if I'm wrong.
kano (OP)
Legendary
Offline

Activity: 4494
Merit: 1808

Linux since 1997 RedHat 4
March 29, 2017, 01:02:39 PM
 #9

...
Is there some bad design issue in bitcoin where the necessary results of this processing are thrown away, so the processing has to be repeated at later times?
Yes.

Currently sighashing requires a unique preimage that is hashed for every single input of a transaction. This preimage consists of the entire unsigned transaction, the output script being spent from, and the sighash type. As you add more inputs, the preimage for every single input that must be hashed becomes larger and larger. This makes it grow quadratically. With a massive 1 MB transaction, this means that you could end up hashing more than 1 GB of data just to verify a transaction. Since the preimage for every single input is unique, none of the sighashing can be cached for later use as it is all useless for anything else.
...
So I had a think about this (for 5 minutes Smiley ) and I see that even I may have been overstating the quadratic issue?

Are you basically implying that the quadratic issue is caused by the number of inputs/outputs? or is it also how the transaction is created and the number doesn't really matter too much?
Would it be correct to say that a transaction size limit would solve the quadratic issue?
i.e. pick a number Smiley 50K? 25K? limit per transaction and then it doesn't really matter?

Clearly if the block size ever did get to 32MB, it would be foolhardy to allow ~32MB transactions, so there should be some limit in there somewhere anyway.
... and spent another 5 minutes and I now think achow101 is leading me astray Tongue

My point about the hash is that when bitcoin receives a transaction, the inputs and outputs and the transaction hash are fixed.
... ignoring malleability, since that really doesn't matter in this case - it can be treated as another transaction, which just happens to have the same inputs and outputs ...
so anyway, as a miner, if it takes a long time to process a block, it's far from ideal, but if bitcoin has stored that information about all the transactions it has in its mempool (or some or most or ... whatever) then when a block comes in, it will ONLY need to verify transactions in the block that have a txn hash it doesn't already know (or hasn't stored the processed information for).
Thus in this case, the quadratic issue is a complete non-event, unless we've never seen the transaction hash before.

achow101
Moderator
Legendary
expert
Offline

Activity: 3388
Merit: 6637

Just writing some code
March 29, 2017, 01:15:47 PM
 #10

So I had a think about this (for 5 minutes Smiley ) and I see that even I may have been overstating the quadratic issue?

Are you basically implying that the quadratic issue is caused by the number of inputs/outputs? or is it also how the transaction is created and the number doesn't really matter too much?
Would it be correct to say that a transaction size limit would solve the quadratic issue?
i.e. pick a number Smiley 50K? 25K? limit per transaction and then it doesn't really matter?

Clearly if the block size ever did get to 32MB, it would be foolhardy to allow ~32MB transactions, so there should be some limit in there somewhere anyway.
Putting a limit on the transaction size doesn't really solve the issue. Sighashing will still be quadratic. The only way to actually fix that is to make it not quadratic anymore. Even with a limit on transaction size, the largest allowed transaction can still potentially take a long time to validate, especially on lower end hardware.

But then, what is a good limit to put on transactions? Are we going to be bickering over the transaction size limit like we are over the block size limit? (I think it is very likely that is going to happen). How would you measure what a good limit is?

FWIW, I think Bitcoin XT and Classic put a limit on the transaction size as their fix for the quadratic sighashing issue.
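
Continuing the rough model from earlier (same assumed P2PKH sizes, still not consensus-exact), a size cap only bounds the damage rather than removing the quadratic behaviour:

Code:
SIG_INPUT, BARE_INPUT, OUTPUT, OVERHEAD = 148, 41, 34, 14   # assumed typical P2PKH sizes

def worst_case_bytes_hashed(cap_bytes: int, n_outputs: int = 2) -> int:
    # Fill the whole cap with inputs: the worst case allowed under the cap.
    n_inputs = (cap_bytes - OVERHEAD - n_outputs * OUTPUT) // SIG_INPUT
    preimage = OVERHEAD + n_inputs * BARE_INPUT + n_outputs * OUTPUT
    return n_inputs * preimage

for cap in (25_000, 100_000, 1_000_000):
    print(f"{cap // 1000:>5} kB cap -> worst case ~{worst_case_bytes_hashed(cap) / 1e6:,.0f} MB hashed")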

... and spent another 5 minutes and I now think achow101 is leading me astray Tongue

My point about the hash is that when bitcoin receives a transaction, the inputs and outputs and the transaction hash are fixed.
... ignoring malleability, since that really doesn't matter in this case - it can be treated as another transaction, which just happens to have the same inputs and outputs ...
so anyway, as a miner, if it takes a long time to process a block, it's far from ideal, but if bitcoin has stored that information about all the transactions it has in its mempool (or some or most or ... whatever) then when a block comes in, it will ONLY need to verify transactions in the block that have a txn hash it doesn't already know (or hasn't stored the processed information for).
Yes. And that is what stuff like compact blocks and xthin does (among many other things).
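
A toy sketch of that caching idea (this is not Core's actual implementation - Core caches script and signature checks with more care than a plain txid set - but it shows why a block made of already-seen transactions is cheap to accept):

Code:
import hashlib

verified_txids = set()   # transaction hashes we have already fully verified

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def txid(raw_tx: bytes) -> str:
    return double_sha256(raw_tx)[::-1].hex()

def expensive_verify(raw_tx: bytes) -> bool:
    return True   # stand-in for full script / sighash verification

def accept_to_mempool(raw_tx: bytes) -> None:
    if expensive_verify(raw_tx):
        verified_txids.add(txid(raw_tx))

def connect_block(block_txs: list) -> None:
    for raw_tx in block_txs:
        if txid(raw_tx) in verified_txids:
            continue                      # already verified when it hit the mempool
        if not expensive_verify(raw_tx):  # only never-seen transactions pay the full cost
            raise ValueError("invalid transaction in block")
        verified_txids.add(txid(raw_tx))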

Thus in this case, the quadratic issue is a complete non-event, unless we've never seen the transaction hash before.
Yes, and that is the main concern. The concern is not that people are going to make big transactions and broadcast them to the network. Rather it is that some miner is going to make a big transaction (like f2pool did a while ago) and include it in their own block without broadcasting it to everyone else beforehand.

kano (OP)
Legendary
Offline

Activity: 4494
Merit: 1808

Linux since 1997 RedHat 4
March 29, 2017, 01:57:56 PM
 #11

So I had a think about this (for 5 minutes Smiley ) and I see that even I may have been overstating the quadratic issue?

Are you basically implying that the quadratic issue is caused by the number of inputs/outputs? or is it also how the transaction is created and the number doesn't really matter too much?
Would it be correct to say that a transaction size limit would solve the quadratic issue?
i.e. pick a number Smiley 50K? 25K? limit per transaction and then it doesn't really matter?

Clearly if the block size ever did get to 32MB, it would be foolhardy to allow ~32MB transactions, so there should be some limit in there somewhere anyway.
Putting a limit on the transaction size doesn't really solve the issue. Sighashing will still be quadratic. The only way to actually fix that is to make it not quadratic anymore. Even with a limit on transaction size, the largest allowed transaction can still potentially take a long time to validate, especially on lower end hardware.

But then, what is a good limit to put on transactions? Are we going to be bickering over the transaction size limit like we are over the block size limit? (I think it is very likely that is going to happen). How would you measure what a good limit is?

...
Saying quadratic is an issue by definition is sidestepping reality.

Most current CPUs will not have an issue with anything but very large transactions, and trying to claim a need for bitcoin to work on pissy small slow hardware is a false argument since it won't work there already anyway, ignoring the 'quadratic issue'.
I run pruned bitcoinds on 4vCPU 4GB RAM VPSes and that is clearly not very far above the lower limit - go much lower end than that and you will start to have performance problems.
(yes I have more than a dozen full and pruned nodes running on the net for my pool)
A full node needs a lot more than this.

Anyway, well there seems to have been no issue with having a block size limit for a very long time ... until recently ...
But a transaction size limit is minor, if not irrelevant, since you can usually create 2 (or more) transactions and put them out at the same time to overcome a transaction size limit; you can't create 2 or more blocks at the same time to overcome the current block size limit Tongue
So comparing a transaction size limit to a block size limit is not a valid comparison.

It only becomes an issue if you are using large complex P2SH transactions - and, well, P2SH clearly has design issues already.

Meanwhile, if there was no limit, linear hashing will of course hit a similar wall at some block size that 1MB hits with quadratic hashing, and then your little slow hardware CPUs won't handle that either - but I guess the answer would be that you can ignore them by then? i.e. you can ignore them now anyway.

stdset
Hero Member
Offline

Activity: 572
Merit: 506
March 30, 2017, 09:04:17 AM
 #12

Putting a limit on the transaction size doesn't really solve the issue. Sighashing will still be quadratic. The only way to actually fix that is to make it not quadratic anymore. Even with a limit on transaction size, the largest allowed transaction can still potentially take a long time to validate, especially on lower end hardware.
Time complexity of an algo isn't always a problem. Users don't care about time complexity, they don't even know what it is; they care about real world time. A constant time algo may take longer to run than a quadratic time algo (imagine a hashtable with a cryptographic hash function).
Another thing is that we want to be able to use big transactions (for instance coinjoin transactions), so in our case quadratic complexity is an issue.
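
A toy comparison of the two cost models (both constants are made-up illustrative numbers):

Code:
# A "constant time" operation with a big fixed cost vs a "quadratic" operation with
# a tiny per-step cost: the quadratic one is faster until n gets large, which is
# why the crossover point, not the big-O class, is what users actually feel.
CONSTANT_COST_US = 2_000      # assumed: one heavyweight constant-time operation, 2 ms
QUADRATIC_STEP_US = 0.01      # assumed: 10 ns per elementary step

for n in (100, 1_000, 10_000, 100_000):
    quadratic_cost = QUADRATIC_STEP_US * n * n
    winner = "quadratic wins" if quadratic_cost < CONSTANT_COST_US else "constant wins"
    print(f"n={n:>6}: {winner}  ({quadratic_cost:,.0f} us vs {CONSTANT_COST_US:,} us)")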
