Bitcoin Forum

Bitcoin => Development & Technical Discussion => Topic started by: jl777 on March 15, 2016, 11:20:53 AM



Title: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: jl777 on March 15, 2016, 11:20:53 AM
I can't find the changes that need to be made to support segwit.

It must change the protocol and blockchain format, so I would imagine there is some obvious place I overlooked where to find it.

James


Title: Re: Segwit details?
Post by: achow101 on March 15, 2016, 11:43:21 AM
Read the BIPs: https://github.com/bitcoin/bips. They are appropriately named; their numbers are 14x.


Title: Re: Segwit details?
Post by: jl777 on March 15, 2016, 12:08:21 PM
Read the BIPs: https://github.com/bitcoin/bips. They are appropriately named; their numbers are 14x.
wow, that's a LOT of changes...

Practically speaking, will a segwit tx work for sending to an old wallet, or do both sides need to run it for it to be spendable? It seems that would be the case. If so, doesn't that create a lot of problems along the lines of "I sent you this txid, but you need this wtxid to be able to spend it, oh and the new updated wallet that supports segwit isn't available yet from your vendor"?



Title: Re: Segwit details?
Post by: achow101 on March 15, 2016, 12:24:23 PM
Read the BIPs: https://github.com/bitcoin/bips. They are appropriately named; their numbers are 14x.
wow, that's a LOT of changes...

practically speaking, will segwit tx work for sending to an old wallet or do both sides need to run it for it to be spendable. it seems that would be the case. if so, doesnt that create a lot of problems along the lines of "i sent you this txid, but you need this wtxid to be able to spend it, oh and the new updated wallet that supports segwit that isnt available yet from your vendor"


The txid of a segwit transaction will be the same whether or not it is segwit, since the signatures are not part of the transaction ID. Unupgraded users will be able to receive from, but not fully validate, segwit transactions. They can spend from segwit transactions because the output will still have to be a P2PKH output for non-upgraded wallets.

Edit: Fix error where I said that segwit txs could not be spent from.
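A minimal sketch of why the witness cannot affect the txid, assuming the two serialization layouts defined in BIP 141 (the field contents below are placeholders, not a real transaction):

  import hashlib

  def dsha256(b):
      # Bitcoin's double SHA-256
      return hashlib.sha256(hashlib.sha256(b).digest()).digest()

  # Placeholder field values -- illustrative only, not a real transaction.
  nversion  = bytes(4)
  txins     = b'<serialized inputs, with empty scriptSigs for segwit inputs>'
  txouts    = b'<serialized outputs>'
  nlocktime = bytes(4)
  witness   = b'<signatures and pubkeys, segregated from the inputs>'

  # txid: hash of the traditional serialization (no witness anywhere)
  txid = dsha256(nversion + txins + txouts + nlocktime)[::-1]

  # wtxid: hash of the extended serialization with marker, flag and witness
  marker, flag = b'\x00', b'\x01'
  wtxid = dsha256(nversion + marker + flag + txins + txouts + witness + nlocktime)[::-1]

  # Changing the witness changes the wtxid but can never change the txid,
  # which is why a third party can no longer malleate the txid.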


Title: Re: Segwit details?
Post by: jl777 on March 15, 2016, 12:41:41 PM
Read the BIPs: https://github.com/bitcoin/bips. They are appropriately named; their numbers are 14x.
wow, that's a LOT of changes...

practically speaking, will segwit tx work for sending to an old wallet or do both sides need to run it for it to be spendable. it seems that would be the case. if so, doesnt that create a lot of problems along the lines of "i sent you this txid, but you need this wtxid to be able to spend it, oh and the new updated wallet that supports segwit that isnt available yet from your vendor"


The txid of a segwit transaction will be the same segwit or not since the signatures are not part of the transaction. Unupgraded users will be able to receive but not spend from segwit transactions.
When you say "receive" but not spend, it is received but unverifiable and unspendable, right?

Is it just me, or does it seem like calling this a softfork might be technically accurate, while the market confusion and incompatibility it will cause is pretty much like a hardfork?


Title: Re: Segwit details?
Post by: achow101 on March 15, 2016, 01:41:27 PM
when you say "receive" but not spend, it is received and unverifiable and unspendable, right?
In essence, yes.

is it just me, or does it seem like calling this a softfork might be technically accurate, the market confusion and incompatibility it will cause is pretty much like a hardfork
Yeah, pretty much. Although it still allows the old type of transactions, so old wallets will still work.

IIRC segwit was originally proposed as a hard fork.


Title: Re: Segwit details?
Post by: fbueller on March 15, 2016, 02:40:03 PM
Sipa's implementation is available here: https://github.com/sipa/bitcoin/pull/8


Title: Re: Segwit details?
Post by: rizzlarolla on March 15, 2016, 07:45:34 PM
Read the BIPs: https://github.com/bitcoin/bips. They are appropriately named; their numbers are 14x.
wow, that's a LOT of changes...

practically speaking, will segwit tx work for sending to an old wallet or do both sides need to run it for it to be spendable. it seems that would be the case. if so, doesnt that create a lot of problems along the lines of "i sent you this txid, but you need this wtxid to be able to spend it, oh and the new updated wallet that supports segwit that isnt available yet from your vendor"


The txid of a segwit transaction will be the same segwit or not since the signatures are not part of the transaction. Unupgraded users will be able to receive but not spend from segwit transactions.
when you say "receive" but not spend, it is received and unverifiable and unspendable, right?

is it just me, or does it seem like calling this a softfork might be technically accurate, the market confusion and incompatibility it will cause is pretty much like a hardfork

It's not just you, it's me too.
Wow, that is a lot of changes.



Title: Re: Segwit details?
Post by: jl777 on March 16, 2016, 01:53:26 AM
Considering the obstacles it faces, I will not implement segwit support in the initial iguana versions. If a bunch of users end up with unspendable funds, then I guess I will be forced to massively complicate the blockchain handling.

Does anybody know the BIP that defines the new network message(s)? I assume each block would need a getsegwitdata equivalent, so existing nodes just do the existing getdata and the segwit softforks (ha ha) do the additional call, process the new data format, update the internal dataset, validate the signatures, and enable spending. Seems like a LOT of work to get a 60% capacity increase.

Since this is basically a hardfork (let us not kid ourselves that creating INCOMPATIBLE bitcoins is in any way backward compatible!!), and it requires the additional data, then just hardfork to 2MB.

That would not require wtxids wasting precious space in the blockchain.

The logic used to justify wtxids permanently using up space that otherwise wouldn't be needed is that it allows a softfork. But this is a fake softfork, as existing nodes won't be able to validate or spend any segwit payments they get. HOW ON EARTH IS THAT BACKWARD COMPATIBLE?? That is the industry's understanding of softfork behavior (I know technically that is not what a softfork is; I speak of what users are thinking).

So, if segwit gets 75%, then it forces all the nodes to update. How is that not a hardfork?

The logic is that segwit allows a 60% increase in capacity without a hardfork, but this premise is wrong. It is effectively a hardfork; actually I think it is worse. If it were a hardfork, then users who know about such things would understand that they have to update. Saying it is a softfork will make users think they don't have to upgrade. Then they find out that they can't spend the bitcoins they got. So we end up with two incompatible bitcoins. This is not a good plan at all.

Let us call a hardfork a hardfork. Segwit is a hardfork in the most important respect: it is NOT BACKWARD COMPATIBLE with existing wallets and independent cores.

OK, so that then means that segwit is WASTING precious blockchain space and not avoiding a hardfork. It makes no sense to me: a de facto hardfork that breaks ALL EXISTING wallets and also requires all independent cores to make significant changes, testing, field updates, and customer support, all to permanently waste space with redundant wtxids that would not be needed if we just hardforked to 2MB.

Please tell me there is some sanity here. There is no logical justification for segwit, and there are plenty of risks of creating the impression that bitcoin is broken; and if you don't consider the existing installed base being unable to validate or spend bitcoins to be broken, then something about how you evaluate brokenness is broken.

James

P.S. Technically segwit is very clever and I can see plenty of use cases it enables. However, positioning it as a softfork is a disaster. But I guess bitcoin has nothing to worry about; after all, there are no altcoins out there that have any chance at all against bitcoin. So we can just do whatever we want and it won't matter, since there are no alternatives out there for users. Who is afraid of LTC anyway, right?


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: funkenstein on March 16, 2016, 02:21:03 AM
ACK

I dun get it either. 


Title: Re: Segwit details?
Post by: achow101 on March 16, 2016, 02:39:58 AM
considering the obstacles it faces, I will not implement segwit support into initial iguana versions. If a bunch of users end up with unspendable funds, then I guess I will  be forced to massively complicate the blockchain handling.

does anybody know the BIP that defines the new network message(s)? I assume each block would need a getsegwitdata equivalent so existing nodes just do the existing getdata and the segwit softforks (ha ha) do the additional call, process the new data format, update the internal dataset, validate the signatures and enable spending. Seems like a LOT of work to get a 60% increased capacity.
I would advise that you actually read all of the BIPs.

The one you are looking for is BIP 144: https://github.com/bitcoin/bips/blob/master/bip-0144.mediawiki.
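Roughly, BIP 144 handles this not with a new getsegwitdata message, but with a service bit and witness-flagged variants of the existing inv/getdata types; a sketch based on one reading of the draft (treat the exact constant values as assumptions):

  # Constants as described in the BIP 144 draft; the mechanism is the point here,
  # the exact values may differ in the final spec.
  NODE_WITNESS      = 1 << 3        # service bit advertised by segwit-capable peers
  MSG_TX            = 1
  MSG_BLOCK         = 2
  MSG_WITNESS_FLAG  = 1 << 30       # OR'ed into getdata types to request witness data
  MSG_WITNESS_TX    = MSG_TX | MSG_WITNESS_FLAG
  MSG_WITNESS_BLOCK = MSG_BLOCK | MSG_WITNESS_FLAG

  def getdata_type(want_witness):
      # Old nodes keep sending MSG_BLOCK and get the stripped, pre-segwit
      # serialization; upgraded nodes ask for the extended serialization instead.
      return MSG_WITNESS_BLOCK if want_witness else MSG_BLOCK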

Since this is basically a hardfork (let us not kid ourselves that creating INCOMPATIBLE bitcoins is in any way backward compatible!!), and it requires the additional data, then just hardfork 2MB.

That would not require wtxid wasting precious space in the blockchain.
How does the wtxid waste space? The only place it ends up in the blockchain is in the coinbase transaction as the witness root hash where the wtxids are hashed together.
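A rough sketch of that commitment, assuming the construction in BIP 141 (wtxids hashed pairwise into a Merkle tree, the root combined with a witness nonce, and the result placed in one coinbase output; the wtxids below are made up):

  import hashlib

  def dsha256(b):
      return hashlib.sha256(hashlib.sha256(b).digest()).digest()

  def merkle_root(hashes):
      # Same pair-and-hash construction as the ordinary transaction Merkle tree.
      hashes = list(hashes)
      while len(hashes) > 1:
          if len(hashes) % 2:
              hashes.append(hashes[-1])   # duplicate the last entry on odd levels
          hashes = [dsha256(hashes[i] + hashes[i + 1]) for i in range(0, len(hashes), 2)]
      return hashes[0]

  # Made-up wtxids; the coinbase's own wtxid is defined as 32 zero bytes.
  wtxids = [bytes(32)] + [dsha256(bytes([i]) * 32) for i in range(1, 4)]
  witness_nonce = bytes(32)               # carried in the coinbase's witness

  commitment = dsha256(merkle_root(wtxids) + witness_nonce)
  # Only this 32-byte commitment (inside one coinbase output) lands in the block
  # that old nodes see; the individual wtxids are never stored in the blockchain.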

The logic used to justify wtxid permanently using up space that otherwise wouldnt be needed is that it allows a softfork. But this is a fake softfork, as existing nodes wont be able to validate or spend any segwit payments it gets. HOW ON EARTH IS THAT BACKWARD COMPATIBLE?? which is the industry's understanding of softfork behavior (I know technically that is not what softfork is, I speak of what users are thinking)
It is a soft fork because after the upgrade old nodes and wallets still function perfectly fine with the old system. They can still receive segwit transactions; they just can't spend from them. They can still receive and spend from traditional transactions, which will still be valid under the new rules.

So, if segwit gets 75%, then it forces all the nodes to update. How is that not a hardfork?
No. It simply means that the new rules can go into effect and are now considered valid. No one is being forced to upgrade and you can still function fine without upgrading for a while.

The logic is segwit allows 60% increase in capacity without a hardfork, but this premise is wrong. It is effectively a hardfork, actually I think it is worse. If it was a hardfork, then users who know about such things will understand that they have to update. Saying it is a softfork will make users think they dont have to upgrade. Then they find out that they cant spend the bitcoins they got. So we end up with two incompatible bitcoins. this is not a good plan at all
No. It will not fork the blockchain. All of the blocks produced after the fork are still valid under the old rules. This is part of what makes it a soft fork. It doesn't fork the blockchain like a hard fork does.

Let us call a hardfork a hardfork. segwit is a hardfork from the most important aspect that is it NOT BACKWARD COMPATIBLE with existing wallets and independent cores.

OK, so that then means that segwit is WASTING precious blockchain space and not avoid a hardfork. It makes no sense to me that a defacto hardfork that breaks ALL EXISTING wallets and also requires all independent cores to make significant changes, testing, field updates, customer support, and it is all to permanently waste space with the redundant wtxids that would not be needed if we just hardforked 2MB

Please tell me there is some sanity here. There is no logical justification for segwit and plenty of risk factors of creating the impression that bitcoin is broken, and if you dont consider the existing installed base not being able to validate or spend bitcoins as not broken, then something about how you evaluate brokenness is broken.
Segwit is needed for its solution to the transaction malleability problem. It makes transaction malleability impossible, since the txid now commits only to data that is signed. If everyone were to upgrade to segwit, it would indeed be a very good thing for Bitcoin. It also solves the O(n^2) hashing problem.

Additionally, you can still use Bitcoin as it is now after the fork. A lot of people seem to forget that.

Lastly, I will say that marketing segwit as a scalability solution was probably a bad idea. Its original intent was to solve the transaction malleability problem and a side effect was that the block space was effectively doubled. People use the 60% - 80% figure because the assumption is that people won't upgrade to segwit and make use of its advantages. Otherwise it would be the same as a block size doubling.
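For what it's worth, a back-of-the-envelope sketch of where figures like that come from, assuming the cost formula in BIP 141 (base size * 3 + total size, capped at 4,000,000) and a made-up witness share per transaction:

  def capacity_multiplier(witness_fraction):
      # witness_fraction: share of each transaction's bytes that are witness data.
      base_fraction = 1.0 - witness_fraction
      # BIP 141 cost per byte: base bytes count 4x, witness bytes count 1x.
      cost_per_byte = 4 * base_fraction + 1 * witness_fraction
      # Old limit: 1,000,000 bytes. New limit: 4,000,000 cost units.
      return 4.0 / cost_per_byte

  print(capacity_multiplier(0.0))   # 1.0x -- nobody uses segwit, no gain at all
  print(capacity_multiplier(0.6))   # ~1.8x -- signature-heavy usage, the quoted 60%-80% range
  print(capacity_multiplier(1.0))   # 4.0x -- theoretical ceiling, not realistic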


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: jl777 on March 16, 2016, 02:42:59 AM
ACK

I dun get it either. 
It is a hardfork pretending to be a softfork that increases tx capacity without doing a hardfork, but it actually wastes space permanently.

Of course it only wastes space on nodes running the segwit "softfork".

But if you received any wtxid and want to spend it, you need to run an updated wallet, which I am sure will be available within 3 days of when you get such a wtxid. After all, the changes required are quite simple: https://github.com/bitcoin/bips/blob/master/bip-0141.mediawiki

Anybody that has written a bitcoin core from scratch (like me) should be able to implement the dozen or so changes in a month or two, and we can ignore any issues about customer support, as users can just do manual rawtx manipulations with all the great tools available for that, if they really want to and don't want to update to the only wallet that supports segwit.

So it breaks the installed base
Creates customer support and field update issues
And permanently wastes 30%+ of the new space as opposed to a simple 2MB hardfork (or 4MB)
But it will single-source wallets for the months it takes for everybody to "fix" their software

Don't get me wrong, I think the segwit tech is pretty cool, but instead of pushing it into BTC mainnet under the innocent-sounding "softfork" label and exposing the entire installed base to pain and suffering, it seems a change of this magnitude should start in a sidechain, get field tested, and then, if it makes sense, do a HARDFORK for it. DO NOT BREAK THE INSTALLED BASE

James


Title: Re: Segwit details?
Post by: jl777 on March 16, 2016, 02:51:38 AM
considering the obstacles it faces, I will not implement segwit support into initial iguana versions. If a bunch of users end up with unspendable funds, then I guess I will  be forced to massively complicate the blockchain handling.

does anybody know the BIP that defines the new network message(s)? I assume each block would need a getsegwitdata equivalent so existing nodes just do the existing getdata and the segwit softforks (ha ha) do the additional call, process the new data format, update the internal dataset, validate the signatures and enable spending. Seems like a LOT of work to get a 60% increased capacity.
I would advise that you actually read all of the BIPs.

The one you are looking for is BIP 144: https://github.com/bitcoin/bips/blob/master/bip-0144.mediawiki.

Since this is basically a hardfork (let us not kid ourselves that creating INCOMPATIBLE bitcoins is in any way backward compatible!!), and it requires the additional data, then just hardfork 2MB.

That would not require wtxid wasting precious space in the blockchain.
How does the wtxid waste space? The only place it ends up in the blockchain is in the coinbase transaction as the witness root hash where the wtxids are hashed together.

The logic used to justify wtxid permanently using up space that otherwise wouldnt be needed is that it allows a softfork. But this is a fake softfork, as existing nodes wont be able to validate or spend any segwit payments it gets. HOW ON EARTH IS THAT BACKWARD COMPATIBLE?? which is the industry's understanding of softfork behavior (I know technically that is not what softfork is, I speak of what users are thinking)
It is a soft fork because after the upgrade old nodes and wallets still function perfectly fine with the old system. They can still receive segwit transactions, they just can't spend from them. They can still receive and spend from traditional transactions which will still be valid under the new rules

So, if segwit gets 75%, then it forces all the nodes to update. How is that not a hardfork?
No. It simply means that the new rules can go into effect and are now considered valid. No one is being forced to upgrade and you can still function fine without upgrading for a while.

The logic is segwit allows 60% increase in capacity without a hardfork, but this premise is wrong. It is effectively a hardfork, actually I think it is worse. If it was a hardfork, then users who know about such things will understand that they have to update. Saying it is a softfork will make users think they dont have to upgrade. Then they find out that they cant spend the bitcoins they got. So we end up with two incompatible bitcoins. this is not a good plan at all
No. It will not fork the blockchain. All of the blocks produced after the fork are still valid under the old rules. This is part of what makes it a soft fork. It doesn't fork the blockchain like a hard fork does.

Let us call a hardfork a hardfork. segwit is a hardfork from the most important aspect that is it NOT BACKWARD COMPATIBLE with existing wallets and independent cores.

OK, so that then means that segwit is WASTING precious blockchain space and not avoid a hardfork. It makes no sense to me that a defacto hardfork that breaks ALL EXISTING wallets and also requires all independent cores to make significant changes, testing, field updates, customer support, and it is all to permanently waste space with the redundant wtxids that would not be needed if we just hardforked 2MB

Please tell me there is some sanity here. There is no logical justification for segwit and plenty of risk factors of creating the impression that bitcoin is broken, and if you dont consider the existing installed base not being able to validate or spend bitcoins as not broken, then something about how you evaluate brokenness is broken.
Segwit is needed for its solution to the transaction malleability problem. It makes transaction malleability impossible to occur now since the txids now contain only data that is already signed. If everyone were to upgrade to segwit, it would indeed be a very good thing for Bitcoin. It also solves the O(n^2) hashing problem.

Additionally, you can still use Bitcoin as it is now after the fork. A lot of people seem to forget that.

Lastly, I will say that marketing segwit as a scalability solution was probably a bad idea. Its original intent was to solve the transaction malleability problem and a side effect was that the block space was effectively doubled. People use the 60% - 80% figure because the assumption is that people won't upgrade to segwit and make use of its advantages. Otherwise it would be the same as a block size doubling.
Are you trying to claim that having the entire existing installed base unable to validate the wtxids they get is acceptable? And that not being able to spend them is acceptable? The customer support issues it is guaranteed to cause are acceptable? The lost reputation for backward compatibility and reliability is acceptable?

Fixing malleability, great!
But to be able to spend the wtxids, don't you need to get the extra witness data? Is that data part of the segwit blockchain? If it is part of the segwit blockchain, isn't there a wtxid for each txid that wouldn't otherwise be needed?

What am I missing?

To spend any received wtxid, you need to update to the segwit chain, which is increased in size and includes the wtxids for EVERY segwit tx, not just the merkle root. And this wtxid wouldn't be needed if we just hardforked to 2MB. So segwit as a space saver actually loses space. Segwit as a softfork might be technically true, but it forces everyone to update to a sole-sourced wallet or not be able to spend the coins received. And when they update, the wtxids are sitting there in their blockchain when they wouldn't have been needed otherwise.

So, for fixing malleability and other things, great, no problems with that. But to claim it is increasing tx capacity without a hardfork is disingenuous at best. Most people would like to be able to spend the bitcoins they get. If you agree with that, then you must agree that they will need to load the segwit blockchain, which is bloated with wtxids that would not be needed in a simple 2MB hardfork.

I know you must have some sort of marching orders to follow the party line, but please, let us not make silly claims like "wallets still function perfectly fine with the old system. They can still receive segwit transactions, they just can't spend from them". I don't want you to lose your credibility.

James


Title: Re: Segwit details?
Post by: achow101 on March 16, 2016, 03:05:33 AM
Are you trying to claim that having the entire existing installed base not being able to validate the wtxids they get is acceptable? And that not being able to spend it, is acceptable? The customer support issues it is guaranteed to cost, is acceptable? That the lost reputation for backward compatibility and reliability, is acceptable?

fixing malleability, great!
But to be able to spend the wtxids dont you need to get the extra witness data? Is that data part of the segwit blockchain? If it is part of the segwit blockchain, isnt there the wtxid for each txid that wouldnt otherwise be needed?

What am I missing?
Segwit defines a new address type, but I don't think that will actually be implemented. Rather, it will use the witness program nested in P2SH addresses instead. A non-upgraded node will not have such addresses; it will still be using P2PKH addresses like we do now. If a segwit transaction were made which spent from a segwit output to an old P2PKH output, an old node would still be able to spend from it. The transaction would validate because the node sees the segwit input as an anyone-can-spend input, and the output is just like any P2PKH output in use right now. There is no need for the witness data to spend; the old node just cannot know whether the transaction it spends from was legitimate or not. It can then spend the P2PKH output normally, as it does now.
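A rough sketch of what that P2SH nesting looks like, assuming the P2WPKH-in-P2SH layout from BIP 141 (the key hash below is a placeholder, and hashlib's ripemd160 needs an OpenSSL build that still provides it):

  import hashlib

  def hash160(b):
      # RIPEMD-160 of SHA-256, the usual Bitcoin script hash
      return hashlib.new('ripemd160', hashlib.sha256(b).digest()).digest()

  pubkey_hash = bytes(20)                       # placeholder 20-byte key hash

  # The witness program the receiver actually controls: OP_0 <20-byte key hash>
  redeem_script = b'\x00\x14' + pubkey_hash

  # What the sender sees and pays to -- an ordinary-looking P2SH output:
  # OP_HASH160 <hash160(redeem_script)> OP_EQUAL
  script_pubkey = b'\xa9\x14' + hash160(redeem_script) + b'\x87'

  # An old node validating a spend of this output only checks that the revealed
  # redeem script hashes correctly; it does not recognize OP_0 <hash> as a
  # witness program, so the rest looks anyone-can-spend to it. Only upgraded
  # nodes also verify the signature carried in the witness.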

To spend any received wtxid, you need to update to segwit chain, which is increased in size and includes the wtxid's for EVERY segwit tx, not just the merkle root. And this wtxid wouldnt be needed if we just hardforked to 2MB. So segwit as a space saver, actually loses space. Segwit as a softfork might be technically true, but it forces everyone to update to a sole sourced wallet or not be able to spend the coins received. And when they update, the wtxids are sitting there in their blockchain that wouldnt have been needed otherwise.
Well, they aren't actually in the blockchain as we know it. It would essentially be like a secondary blockchain of all of the witness data. Either way, yes, upgraded nodes would have to download 2 MB of data.

So, for fixing malleability and other things, great. no problems with that. but to claim it is increasing tx capacity without a hardfork is disingenuous at best. Most people would like to be able to spend the bitcoins they get. If you can agree with that, then you must agree that they will need to load the segwit blockchain, which is bloated with wtxids that would not be needed in a simple 2MB hardfork

I know you must have some sort of marching orders to follow the party line, but please, let us not make silly claims like "wallets still function perfectly fine with the old system. They can still receive segwit transactions, they just can't spend from them" I dont want you to lose your credibility
Sorry, I actually spoke incorrectly there. I forgot about the whole address thing.


Title: Re: Segwit details?
Post by: jl777 on March 16, 2016, 03:19:09 AM
Are you trying to claim that having the entire existing installed base not being able to validate the wtxids they get is acceptable? And that not being able to spend it, is acceptable? The customer support issues it is guaranteed to cost, is acceptable? That the lost reputation for backward compatibility and reliability, is acceptable?

fixing malleability, great!
But to be able to spend the wtxids dont you need to get the extra witness data? Is that data part of the segwit blockchain? If it is part of the segwit blockchain, isnt there the wtxid for each txid that wouldnt otherwise be needed?

What am I missing?
Segwit defines a new address type, but I don't think that will actually be implemented. Rather it will be using the witness program in p2sh addresses instead. A non-upgraded node will not have such addresses, they will still be using p2pkh addresses like we do now. If a segwit transaction were to be made which spent from a segwit output to an old p2pkh output, an old node would still be able to spend from it. The transaction would validate because the node sees it as an anyonecanspend input and the output is just like any p2pkh output in use right now. There is no need for the witness data to spend, it just cannot know whether the transaction it spends from was legitimate or not. Then it can still spend the p2pkh output normally as it does now.

To spend any received wtxid, you need to update to segwit chain, which is increased in size and includes the wtxid's for EVERY segwit tx, not just the merkle root. And this wtxid wouldnt be needed if we just hardforked to 2MB. So segwit as a space saver, actually loses space. Segwit as a softfork might be technically true, but it forces everyone to update to a sole sourced wallet or not be able to spend the coins received. And when they update, the wtxids are sitting there in their blockchain that wouldnt have been needed otherwise.
Well they aren't actually in the blockchain as we know it. It would essentially be like a secondary blockchain of all of the witness data. Either way, yes upgraded nodes would have to download 2 Mb of data.

So, for fixing malleability and other things, great. no problems with that. but to claim it is increasing tx capacity without a hardfork is disingenuous at best. Most people would like to be able to spend the bitcoins they get. If you can agree with that, then you must agree that they will need to load the segwit blockchain, which is bloated with wtxids that would not be needed in a simple 2MB hardfork

I know you must have some sort of marching orders to follow the party line, but please, let us not make silly claims like "wallets still function perfectly fine with the old system. They can still receive segwit transactions, they just can't spend from them" I dont want you to lose your credibility
Sorry, I actually spoke incorrectly there. I forgot about the whole address thing.
OK, so you agree that if segwit achieves the activation level, all nodes will have to update and download the 2MB of data, which contains 300KB+ of wtxids that otherwise wouldn't be needed in a straight 2MB hardfork.

So it is a total fail from an "increasing tx capacity without requiring a hardfork" point of view. Let us not quibble over whether it technically is a softfork or hardfork; the reality is users will have to update or not be able to spend. It looks like a hardfork, walks like a hardfork, quacks like a hardfork.

It sounds like it is possible to make it less of a problem, but it will be possible for segwit to be used to make unspendable payments to old nodes, so this creates an attack vector where the attacker simply sends thousands of users some small amount of unspendable segwit wtxids. Once users get the bitcoins, they won't care whether it is a softfork or whatever; they will want to spend the bitcoins.

So, in the case where segwit is adopted, all the nodes must update and get the full 2MB blocks that are bloated with needless wtxids. Now, I just briefly looked at segwit details for the first time today, so maybe there is some super magic negative-knowledge antimatter spacetime-warping data compression that allows segwit to actually save blockchain space. But calling the witness data not the blockchain since it is separate, again, is the type of stuff politicians do and not what technical guys should be doing. So if you are a politico, then fine, but I had always seen your posts as from an objective technical guy and was totally shocked at what you wrote. I am assuming the witness data is treated the same as the normal blockchain data, so it is in the same category, and thus the statement that segwit is a total fail for increasing tx capacity without a hardfork is fully justified.

Since segwit was started to fix malleability, maybe it should stick to that and not try to solve a problem it cannot solve. Officially claiming that it solves this is damaging to bitcoin's technical credibility and the other coins will take FULL advantage of this.

You cannot claim to be intelligent while advocating idiotic things, right?

James


Title: Re: Segwit details?
Post by: achow101 on March 16, 2016, 03:51:36 AM
OK, so you agree that if segwit achieves the activation level, all nodes will have to update and download the 2MB of data, which contains 300kb+ of wtxids that otherwise wouldnt be needed in a straight 2MB hardfork.
I don't know, so I can't agree or disagree. I couldn't find anything about how that would be serialized. I do agree that upgraded nodes would have to download all of the witness data, though; whether they include the wtxids, I cannot say.

so it is a total fail from a "increasing tx capacity without requiring a hardfork" point of view. Let us not quibble if it technically is a softfork or hardfork, the reality is users will have to update or not be  able to spend. it looks like a hardfork, walks like a hardfork, quacks like a hardfork.
I still say that it is a soft fork, albeit not entirely a soft fork but not a hard fork either. Let's agree to disagree.

It sounds like it is possible to make it less of a problem, but it will be possible for segwit to be used to make unspendable payments to old nodes, so this creates an attack vector where the attacker simply sends to thousands of users some small amount of unspendable segwit wtxids. Once users get the bitcoins, they wont care about whether it is softfork or whatever, they will want to spend the bitcoins.
No, that is not possible. An old node would not know about the new witness program and how that works in a P2SH output. It would only know of P2PKH outputs that are meant for it, even if the witness program is just a P2PKH-style script that is only spendable via segwit. The old node wouldn't know that it received such a payment, so this attack wouldn't work.

So, in the case where segwit is adopted, then all the nodes must update and get the full 2MB blocks that are bloated with needless wtxids. Now I just briefly looked at segwit details for the first time today, so maybe there is some super magic negative knowlege antimatter spacetime warping data compression that allows the segwit to actually save blockchain space. but calling the witness data not the blockchain since it is separate, again it becomes the type of stuff politicians do and not what technical guys should be doing. So if you are a politico, then fine, but I had always seen your posts as from an objective technical guy and was totally shocked at what you wrote. I am assuming the witness data is treated the same as the normal blockchain data, so it is in the same category and thus the statement that segwit is a total fail for increasing tx capacity without hardfork is fully justifed
Now that I think about it, I don't think the wtxids are included; rather, the witness data is included in a separate structure. If the witness data is requested, then I think (don't quote me on this) that it is just appended to its respective transaction in the block when it is sent. The wtxids are probably generated on the fly, just like regular txids are generated on the fly. So in fact it would be a 2 MB increase, but the "official" size of the block cuts out all of that witness data. If the block were requested normally as it is now, the witness data would not be included and the size of the block would be at most 1 MB.

Since segwit was started to fix malleability, maybe it should stick to that and not try to solve a problem it cannot solve. Officially claiming that it solves this is damaging to bitcoin's technical credibility and the other coins will take FULL advantage of this.
I do agree that it should not have been marketed as a scalability solution but rather that it is for transaction malleability with a side effect of some scalability.


Title: Re: Segwit details?
Post by: AliceGored on March 16, 2016, 03:56:44 AM
I do agree that it should not have been marketed as a scalability solution but rather that it is for transaction malleability with a side effect of some scalability.

"should not have been marketed as a scalability solution but rather that it is for transaction malleability with a side effect of some scalability a 75% fee discount for signature heavy settlement transactions."



Title: Re: Segwit details?
Post by: jl777 on March 16, 2016, 04:13:53 AM
OK, so you agree that if segwit achieves the activation level, all nodes will have to update and download the 2MB of data, which contains 300kb+ of wtxids that otherwise wouldnt be needed in a straight 2MB hardfork.
I don't know, so I can't agree or disagree. I couldn't find anything about how that would be serialized. I do agree that upgraded nodes would have to download all of the witness data though, whether they include the wtxids, I cannot say.

so it is a total fail from a "increasing tx capacity without requiring a hardfork" point of view. Let us not quibble if it technically is a softfork or hardfork, the reality is users will have to update or not be  able to spend. it looks like a hardfork, walks like a hardfork, quacks like a hardfork.
I still say that it is a soft fork, albeit not entirely a soft fork but not a hard fork either. Let's agree to disagree.

It sounds like it is possible to make it less of a problem, but it will be possible for segwit to be used to make unspendable payments to old nodes, so this creates an attack vector where the attacker simply sends to thousands of users some small amount of unspendable segwit wtxids. Once users get the bitcoins, they wont care about whether it is softfork or whatever, they will want to spend the bitcoins.
No, that is not possible. An old node would not know about the new witness program and how that works in an p2sh output. It would only know of p2pkh outputs that are meant for it, even if the witness program is just a p2pkh but segwit spendable only. The old node wouldn't know that it received such a payment so this attack wouldn't work.

So, in the case where segwit is adopted, then all the nodes must update and get the full 2MB blocks that are bloated with needless wtxids. Now I just briefly looked at segwit details for the first time today, so maybe there is some super magic negative knowlege antimatter spacetime warping data compression that allows the segwit to actually save blockchain space. but calling the witness data not the blockchain since it is separate, again it becomes the type of stuff politicians do and not what technical guys should be doing. So if you are a politico, then fine, but I had always seen your posts as from an objective technical guy and was totally shocked at what you wrote. I am assuming the witness data is treated the same as the normal blockchain data, so it is in the same category and thus the statement that segwit is a total fail for increasing tx capacity without hardfork is fully justifed
Now that I think about it, I don't think the wtxids are included and that the witness data is included in a separate structure. If the witness data is requested, then I think (don't quote me on this) that it is just appended to its respective transaction in the block when it is sent. The wtxids are probably generated on the fly just like regular txids are generated on the fly as well. So in fact, it would be a 2 Mb increase but the "official" size of the block cuts out all of that witness data. If the block were to be requested normally as it is now, the witness data would not be included and the size of the block would be at most 1 Mb.

Since segwit was started to fix malleability, maybe it should stick to that and not try to solve a problem it cannot solve. Officially claiming that it solves this is damaging to bitcoin's technical credibility and the other coins will take FULL advantage of this.
I do agree that it should not have been marketed as a scalability solution but rather that it is for transaction malleability with a side effect of some scalability.
OK, it seems the wtxid is not included in the witness data; however, I cannot imagine how it can be encoded such that the space taken in the original blockspace plus the space in the witness blockspace is smaller than just using 2MB of ordinary blockspace.

And if it doesn't REDUCE the total space, then it has no net gain and is a failure for increasing tx capacity. So where is the proof that it will reduce the total space used? We still trust in math around here, don't we?

And if the details about the total space used are not known by you, then the question arises of who has peer reviewed this. Using this for scalability has a negative effect unless the combined space is reduced, and in almost all cases when you have a single reference to something else, you can't save space, as the something else needs to exist along with the reference to it. The best that would be possible is to have the position in the witness space be the implicit reference, and that is probably how it is done.

however, there is still the issue of:

Transaction ID

A new data structure, witness, is defined. Each transaction will have 2 IDs.

Definition of txid remains unchanged: the double SHA256 of the traditional serialization format:

  [nVersion][txins][txouts][nLockTime]
  
A new wtxid is defined: the double SHA256 of the new serialization with witness data:

  [nVersion][marker][flag][txins][txouts][witness][nLockTime]

from the BIP...

the wtxid is based on all of the original, plus marker (1 byte?) flag (1 byte) and witness, which appears to be:

 1-byte - OP_RETURN (0x6a)
   1-byte - Push the following 36 bytes (0x24)
   4-byte - Commitment header (0xaa21a9ed)
  32-byte - Commitment hash: Double-SHA256(witness root hash|witness nonce)

All this seems to be above and beyond what would be needed for the normal serialization, plus the nVersion (4 bytes) and nLockTime (4 bytes) are duplicated. To a simple C programmer like me, it sure looks like instead of reducing the net amount, as required by anything claiming to save space, it is increasing the size by approximately 50 bytes.

So even if we say the cost of all the work in all the projects across the bitcoin world is ZERO, it still reduces the overall tx capacity of bitcoin permanently. The fact that such an anti-space-saving mechanism is marketed at all, let alone as a space-saving "softfork", well, you see my concerns about the technical reputations of anybody that supports segwit for increased tx capacity.

I don't want to take away from the brilliance of the tech in solving malleability and increasing the potential use cases. The problem is that it is being backdoored through the softfork mechanism and marketed without objective peer review.

James

P.S. So my understanding is that you need a special segwit address (that is somehow determined to be a segwit address using what mechanism?), so both sender and receiver need to already have the segwit version. I guess just ignoring all the existing nodes is at least some level of backward compatibility. But are you sure all users will quickly get used to having to deal with two types of addresses for every transaction, and that they will make sure they know what version the other party is running? Doesn't this bifurcate the bitcoin universe? Maybe the name should be "bifurcating softfork".


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: achow101 on March 16, 2016, 04:36:00 AM
So, I'm on mobile right now, so I can't give as detailed a response. I will edit this with my full response in half a day or so, when I get back to my computer. This is just the gist of what I want to say.

I'm not sure about the space-reducing part. Was that actually mentioned anywhere? I don't think I said that it would reduce the space used. It is essentially a 2MB block size limit increase, but with the added benefit of making transaction malleability impossible.

About the txid thing, you are incorrect. Reread all the BIPs carefully and multiple times. It can take a couple of reads to fully comprehend and understand what is happening. Also keep in mind that there will probably be changes when segwit is actually released. The changes will only be omissions of what was specified, e.g. I don't think they will include the new address type.

With the addresses, the witness program is nested in a P2SH address, so it will be 3xxxxx. These should all be able to be spent to under the current system. You as the sender don't need to know whether the address is segwit or not, but the receiver will need to in order to properly spend from it.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: jl777 on March 16, 2016, 04:50:23 AM
So, I'm on mobile right now so I can't give as detailed of a response. I will edit this with my full response later in half a day when I get back to my computer to give you a full response. This is just the gist of what I want to day.

I'm not sure about the space reducing part. Was that actually mentioned anywhere? I don't think I said that it would reduce the space used. It is essentially a 2Mb block size limit increase but with the added benefit of making transaction malleability impossible.

About the txid thing, you are incorrect. Reread all the BIPs carefully and multiple times. It can take a couple reads to fully comprehend and understand what is happening. Also keep in mind that there will probably be changes when segwit is actually released. The changes will only be omissions of what was specified e.g.I don't think they will include the new address type.

With the addresses, the witness program is nested in a p2sh address so it will be 3xxxxx. These should all be able to be spent to under the current system. You as the sender don't need to know the whether the address is segwit or not, but the receiver will need to in order to properly spend from it.
You know that segwit is being marketed as the magic solution to allow a blocksize increase without a hardfork. That is the issue here: disingenuous marketing.

It might be that the "only" thing that is lost is the ability to verify the tx, but hey, what's wrong with needing to trust things, right? And if the receiving node for the P2SH spend also doesn't run segwit, it is able to validate the tx because it is an anyone-can-spend script? If anyone can spend it, what prevents somebody else from spending it? I am really confused as to how older nodes without the witness signature data can spend it to another node that doesn't have the witness data, without also allowing any random person to spend it. What magic am I missing? Is that what the zero-knowledge commitment stuff is?

If you can only spend a wtxid to a segwit node, that is considered good enough? It seems segwit demotes all older nodes, even if they are fully validating and relaying nodes, into SPV nodes. This is acceptable?

I am not convinced at all that there are any space savings, and I'm pretty sure that it will take more space per tx if you count the witness data, and we really need to count that data too.

This needs to stop being marketed as the softfork that increases tx capacity, as that is not proven at all and is likely the exact opposite when you make the reasonable assumption that people will have to update to segwit and thus get all the witness data.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: jl777 on March 16, 2016, 05:15:24 AM
I'm not sure about the space reducing part. Was that actually mentioned anywhere? I don't think I said that it would reduce the space used. It is essentially a 2Mb block size limit increase but with the added benefit of making transaction malleability impossible.
Maybe you didn't say it, but the official claims are here: https://bitcoincore.org/en/2016/01/26/segwit-benefits/#block-capacitysize-increase

Maybe it's just me misinterpreting the English, as it is my second language, and the above isn't claiming that it will increase the block capacity. I avoid political stuff, so maybe I am just not understanding the nuances of the English. Recently I found out that "sick" meant "cool", but cool wasn't about the temperature, but something else. So I guess it just matters what the meaning of the words "size increase" means.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: funkenstein on March 16, 2016, 05:56:51 AM

it is a hardfork pretending to be a softfork that increases tx capacity without doing a hardfork, but it actually wastes space permanently.

Of course it only wastes space on nodes running the segwit "softfork"

but if you received any wtxid and want to spend it, you need to run an updated wallet, which I am sure will be available within 3 days from when you get such a wtxid. After all the changes required are quite simple: https://github.com/bitcoin/bips/blob/master/bip-0141.mediawiki

Anybody that has written a bitcoin core from scratch (like me) should be able to implement the dozen or so changes in a month or two, and we can ignore any issues about the customer support as they can just do manual rawtx manipulations with all the great tools available for that if they really want to and dont want to update to the only wallet that supports segwit

So it breaks the installed base
Creates customer support and field update issues
And permanently wastes 30%+ of the new space as opposed to a simple 2MB hardfork (or 4MB)
But it will single source wallets for the months it takes for everybody to "fix" their software

Dont get me wrong, I think the segwit tech is pretty cool, but instead of pushing it into BTC mainnet under the innocent sounding "softfork" and exposing the entire installed base to pain and suffering, it seems this magnitude change should start in a side chain, get field tested and then if it makes sense to do a HARDFORK for it. DO NOT BREAK THE INSTALLED BASE

James

Funny :)

Is this thing just a hack to fool miners into storing more data?  As you point out it's not just more TX but also more bytes per TX.  I don't see it happening.  Who is going to send that first transaction that might be spendable by anyone?  Probably I am naive here and missing some political or alchemical posturing nuance.  If so that's fine, in the end nobody is going to break the installed base as you put it.  It's not worth the risk.     

There is also the claimed advantage that it removes malleability.  But again, is malleability really a problem?  If you are concerned about a TXID changing, for some scheme that relies on using unpublished TXs, then submit your transactions directly to a mining pool.  This is the way we need to go eventually anyway, as TX relaying by random nodes is purely altruistic and in a long term perspective looks like a temporary condition.   

Anyway thanks for your analysis.


Title: Re: Segwit details?
Post by: l8orre on March 16, 2016, 05:57:27 AM

I still say that it is a soft fork, albeit not entirely a soft fork but not a hard fork either. Let's agree to disagree.
 

So according to you it is not a soft fork, but it is not a hard fork either... so how many other forks are there? I think you need to explain this. Pitch forks maybe?


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: molecular on March 16, 2016, 08:18:10 AM
I really don't understand why we need to force our beloved wallet devs through this complicated mess. New address format? How to explain to users? All infrastructure needs to be upgraded... What a gargantuan task...

Why do we need segwit again?


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: TierNolan on March 16, 2016, 09:58:36 AM
it is a hardfork pretending to be a softfork that increases tx capacity without doing a hardfork, but it actually wastes space permanently.

What does "wastes space permanently" mean?

Quote
but if you received any wtxid and want to spend it, you need to run an updated wallet, which I am sure will be available within 3 days from when you get such a wtxid. After all the changes required are quite simple: https://github.com/bitcoin/bips/blob/master/bip-0141.mediawiki

You don't receive transactions, you receive transaction outputs.  If the address you provide is a standard address, then it can be processed by legacy clients. 

Old clients would have problems with zero confirms though.  SW outputs look like they can be spent by anyone.  You could send someone a transaction that spends a SW output and they will think the transaction is valid.

Even with P2SH protection, there is a window where you could "double spend" the output.  Once you get the pre-image of the P2SH output, you can create an unlimited number of double spends for that transaction that will be accepted by old clients.
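A small sketch of why such a spend looks valid to an old client, assuming a native witness scriptPubKey and pre-segwit evaluation (heavily simplified; a real interpreter does far more):

  # Native P2WPKH output: OP_0 followed by a 20-byte push of the key hash.
  witness_program = bytes(range(1, 21))     # placeholder key hash (never all zeros in practice)
  script_pubkey = b'\x00\x14' + witness_program
  script_sig = b''                          # a segwit spend leaves the scriptSig empty

  def old_rules_accept(script_sig, script_pubkey):
      # Pre-segwit evaluation: the empty scriptSig pushes nothing, OP_0 pushes an
      # empty element, then the 20-byte program is pushed; the spend is valid if
      # the top stack element is non-zero.
      stack = [b'']                         # OP_0
      stack.append(script_pubkey[2:])       # the 20-byte witness program
      return any(stack[-1])                 # roughly Bitcoin's CastToBool

  # No signature is ever checked, so to an old client the spend is valid no matter
  # who created it -- which is the "spendable by anyone" appearance described above.
  print(old_rules_accept(script_sig, script_pubkey))   # True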

Quote
So it breaks the installed base

The space improvements do assume that people actually use SW.  If nobody uses it, then there is no benefit.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: JorgeStolfi on March 16, 2016, 10:24:08 AM
I asked some of these questions 3 months ago (https://www.reddit.com/r/bitcoinxt/comments/3w34o0/would_segregated_witnesses_really_help_anyone/). Never got a decent answer.

Blockstream wants soft-forked SegWit to fix the malleability problems (that would be needed for the LN, if they ever get it to work), and to force ordinary p2p bitcoin users to subsidize the costs of complicated multisig transactions (ditto).  But these reasons do not seem to explain the urgency and energy that they are putting into the SegWit soft fork.  Maybe they have other undeclared reasons?  Perhaps they intend to stuff more data into the extension records, which they would not have to justify or explain since, being in the extension part, "ordinary users can ignore it anyway"?

As for SegWit being a soft fork, that is technically true; but a soft fork can make some quite radical changes, like imposing a negative interest (demurrage) tax, or raising the 21 million limit (https://www.reddit.com/r/bitcoin_uncensored/comments/43w24e/raising_the_21_million_btc_limit_with_a_soft_fork/).  One could also raise the block size limit (https://www.reddit.com/r/btc/comments/43w4rx/how_core_can_increase_the_21_million_btc_issuance/czlsk2q) that way.  These tricks would all let old clients work for a while, but eventually everybody will be forced to upgrade to use coins sent by the new version.


Title: Re: Segwit details?
Post by: cypherblock on March 16, 2016, 11:42:28 AM
No. It will not fork the blockchain. All of the blocks produced after the fork are still valid under the old rules. This is part of what makes it a soft fork. It doesn't fork the blockchain like a hard fork does.

Technically this is incorrect. Blocks produced by non-updated miners will be forked off if they come after 95% of blocks are updated to the new version. New nodes will not accept these blocks; old nodes will.

From the BIP:
Quote
Furthermore, when 950 out of the 1000 blocks preceding a block do have nVersion >= 5, nVersion < 5 blocks become invalid, and all further blocks enforce the new rules.

So if 5% of the miners don't upgrade and produce a block (which will have nVersion < 5), non-updated nodes will accept that block, but updated nodes will not. Once this happens, non-updated nodes will only accept new blocks from the 5%, since the 95% will not be building off that nVersion < 5 block. Thus the chain is forked into a 5% hash power chain and a 95% hash power chain.

EDIT:
amaclin corrected me below: no hardfork because, duh, old nodes will still see blocks from the 95% as valid, so even if they receive a 5% block first, the 95% will quickly build a longer valid chain, thus 'orphaning' this 5% block. So no fork really, just orphaned/stale blocks. The 5% miners will lose money, though, in wasted electricity and no block rewards.
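A tiny sketch of the activation rule as quoted from the BIP above (the 950-of-1000 threshold comes straight from that text; the block versions below are made up):

  def segwit_rules_enforced(last_1000_versions):
      # Once 950 of the preceding 1000 blocks signal nVersion >= 5, upgraded
      # nodes enforce the new rules and reject nVersion < 5 blocks.
      return sum(1 for v in last_1000_versions if v >= 5) >= 950

  versions = [5] * 960 + [4] * 40          # made-up example: 96% signalling
  print(segwit_rules_enforced(versions))   # True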


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: BlindMayorBitcorn on March 16, 2016, 01:00:34 PM
I asked some of these questions 3 months ago (https://www.reddit.com/r/bitcoinxt/comments/3w34o0/would_segregated_witnesses_really_help_anyone/). Never got a decent answer.

Blockstream wants soft-forked SegWit to fix the malleability problems (that would be needed for the LN, if they ever get it to work), and to force ordinary p2p bitcoin users subsidize the costs of complicated multisig transactions (ditto).  But these reasons do not seem explain the urgency and energy that they are putting on the SegWit soft fork.  Maybe they have other undeclared reasons?  Perhaps they intend to stuff more data into the extension records, which they would not have to justify or explain since, being in the extension part, "ordinary users can ignore it anyway"?

As for SegWit being a soft fork, that is technically true; but a soft fork can do some quite radical changes, like imposing a negative interest (demurrage) tax, or raising the 21 million limit (https://www.reddit.com/r/bitcoin_uncensored/comments/43w24e/raising_the_21_million_btc_limit_with_a_soft_fork/).  One could also raise the block size limit (https://www.reddit.com/r/btc/comments/43w4rx/how_core_can_increase_the_21_million_btc_issuance/czlsk2q) that way.  These tricks would all let old clients work for a while, but eventually everybody will be forced to upgrade to use coins sent by the new verson.

You've come to the right place for answers, professor. Openness is our middle name!


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: 600watt on March 16, 2016, 01:09:00 PM
sry for offtopic noob question:

is this relevant?

http://seclists.org/oss-sec/2016/q1/645


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: achow101 on March 16, 2016, 01:11:17 PM
sry for offtopic noob question:

is this relevant?

http://seclists.org/oss-sec/2016/q1/645
No. That is with git, a VCS, and is not specific to bitcoin. We are talking about segwit here, which is a bitcoin consensus-specific thing.


Title: Re: Segwit details?
Post by: amaclin on March 16, 2016, 01:12:43 PM
So if 5% of the miners don't upgrade, and produce a block (which will have nVersion<5), non-updated nodes will accept that block, but updated nodes will not. Once this happens, non-updated nodes will only accept new blocks from the 5% since the 95% will not be building off that <version 5 block. Thus the chain is forked into a 5% hash power chain and a 95% hash power chain.
No fork, but an increased number (~5%) of orphaned blocks.
Non-updated nodes will accept version 5 blocks, because this is a soft fork.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: Pieter Wuille on March 16, 2016, 01:27:36 PM
From the point of view of old clients, segwit adds one coinbase output that contains the root hash of the Merkle tree that commits to the witness transaction ids. It uses 47 extra bytes per block, so technically, yes, it "wastes precious blockchain space". The blockchain on-disk can be pruned though (implemented as an experimental feature in Bitcoin Core 0.11, and with nearly full functionality in 0.12), so calling it "permanently" is not very accurate.
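For the curious, those 47 bytes break down roughly as value + script length + the commitment script quoted earlier in the thread:

  value      = 8    # zero-valued coinbase output amount
  script_len = 1    # length prefix of the scriptPubKey below
  op_return  = 1    # 0x6a
  push_36    = 1    # 0x24
  header     = 4    # 0xaa21a9ed commitment header
  root_hash  = 32   # Double-SHA256(witness root hash | witness nonce)

  print(value + script_len + op_return + push_36 + header + root_hash)   # 47 bytes per block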

If you're talking about storage space used by segwit-compatible full nodes, well, obviously it will use more space, because it increases block capacity - that capacity has to go somewhere. However:
  • The extra space used by witnesses is more prunable than normal block space, as it's not needed by non-validating clients.
  • Has less effect on bandwidth, as light clients don't need the witness data.
  • Has no effect on the UTXO set, so does not contribute to database growth and/or churn.
  • Enables script versions, which will make the introduction of Schnorr signatures much easier later on, which are more space efficient than what we have now (even for simple single-key outputs/inputs).
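
For anyone who wants to check the arithmetic behind the "47 extra bytes per block" figure above, here is a rough sketch. The field-by-field breakdown assumes the BIP141 coinbase commitment layout quoted later in this thread; it is an illustration, not part of the original post:

  # Size of the extra coinbase output added by segwit (all values in bytes).
  value      = 8   # output value field (the amount is zero, but it is still serialized)
  script_len = 1   # compact-size length prefix of the script below
  op_return  = 1   # OP_RETURN (0x6a)
  push_36    = 1   # push of the following 36 bytes (0x24)
  header     = 4   # commitment header (0xaa21a9ed)
  commit     = 32  # Double-SHA256(witness root hash | witness nonce)
  print(value + script_len + op_return + push_36 + header + commit)  # 47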


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: watashi-kokoto on March 16, 2016, 01:44:40 PM
From the point of view of old clients, segwit adds one coinbase output that contains the root hash of the Merkle tree that commits to the witness transaction ids. It uses 47 extra bytes per block, so technically, yes, it "wastes precious blockchain space". The blockchain on-disk can be pruned though (implemented as an experimental feature in Bitcoin Core 0.11, and with nearly full functionality in 0.12), so calling it "permanently" is not very accurate.

pieter keep up the great work! we're on your side.

don't worry about the lies, manipulation and misinformation, we're on it. we got it covered.

jl777 joined the dark side, and for all intents and purposes should be considered a troll with agenda.

don't feed the trolls


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: achow101 on March 16, 2016, 02:11:52 PM
From the point of view of old clients, segwit adds one coinbase output that contains the root hash of the Merkle tree that commits to the witness transaction ids. It uses 47 extra bytes per block, so technically, yes, it "wastes precious blockchain space". The blockchain on-disk can be pruned though (implemented as an experimental feature in Bitcoin Core 0.11, and with nearly full functionality in 0.12), so calling it "permanently" is not very accurate.

If you're talking about storage space used by segwit-compatible full nodes, well, obviously it will use more space, because it increases block capacity - that capacity has to go somewhere. However:
  • The extra space used by witnesses is more prunable than normal block space, as it's not needed by non-validating clients.
  • Has less effect on bandwidth, as light clients don't need the witness data.
  • Has no effect on the UTXO set, so does not contribute to database growth and/or churn.
  • Enables script versions, which will make the introduction of Schnorr signatures much easier later on, which are more space efficient than what we have now (even for simple single-key outputs/inputs).

How are the witnesses serialized and sent with blocks?

Also, is there (or will there be) a full technical write-up of the changes in segwit, so that wallet developers can make changes accordingly? Preferably before the official release of segwit?


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: jl777 on March 16, 2016, 03:29:36 PM
From the point of view of old clients, segwit adds one coinbase output that contains the root hash of the Merkle tree that commits to the witness transaction ids. It uses 47 extra bytes per block, so technically, yes, it "wastes precious blockchain space". The blockchain on-disk can be pruned though (implemented as an experimental feature in Bitcoin Core 0.11, and with nearly full functionality in 0.12), so calling it "permanently" is not very accurate.

If you're talking about storage space used by segwit-compatible full nodes, well, obviously it will use more space, because it increases block capacity - that capacity has to go somewhere. However:
  • The extra space used by witnesses is more prunable than normal block space, as it's not needed by non-validating clients.
  • Has less effect on bandwidth, as light clients don't need the witness data.
  • Has no effect on the UTXO set, so does not contribute to database growth and/or churn.
  • Enables script versions, which will make the introduction of Schnorr signatures much easier later on, which are more space efficient than what we have now (even for simple single-key outputs/inputs).

Let us ignore pruning nodes or anything about SPV nodes. My concern is about full relaying nodes. My assumption is that for a segwit compatible full relaying node to be able to relay the full blockchain it would need to have ALL the data, original blockchain and witness data.

How can ANY of that data be pruned while still being able to act as a full relaying node?

If all such data is needed, I want to call the combined size the size of the blockchain. Regardless of whether it is one or 2 different datasets.

So unless there is magic involved that allows relaying data that has been pruned, pruning is IRRELEVANT.

This then gets us to my question that is not being answered. On average, how many bytes in the blockchain will be needed for a standard payment sent via segwit?

Is this ever less than it would be now?
Is this ever the same as it is now?
Is this usually about 50 bytes more per tx?

Unless there are actual savings of blockchain space, it would be a failure as far as reducing blockchain usage goes. What am I missing?

James


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: jl777 on March 16, 2016, 03:39:41 PM
From the point of view of old clients, segwit adds one coinbase output that contains the root hash of the Merkle tree that commits to the witness transaction ids. It uses 47 extra bytes per block, so technically, yes, it "wastes precious blockchain space". The blockchain on-disk can be pruned though (implemented as an experimental feature in Bitcoin Core 0.11, and with nearly full functionality in 0.12), so calling it "permanently" is not very accurate.

pieter keep up the great work! we're on your side.

don't worry about the lies, manipulation and misinformation, we're on it. we got it covered.

jl777 joined the dark side, and for all intents and purposes should be considered a troll with agenda.

don't feed the trolls
dark side? I am asking questions I need answered to be able to implement things. The more I find out about segwit, the more it appears to require a LOT of work that could be avoided just with a 2MB hardfork. And compared to a 2MB hardfork, segwit wastes space as far as I know at this moment, and I am waiting to be corrected.

FYI I am on no side other than the side of truth. The other side has politicized the RBF and claims it is the devil's spawn. I keep saying the reason RBF breaks zeroconf is that zeroconf is totally broken when the blocks are full. So adding RBF still leaves zeroconf broken when the blocks are full.

Now, one difference is that the other side doesnt start calling me a troll when I point out that their technical analysis is incorrect.

my agenda is to implement a scalable onchain bitcoin. If I see something that doesnt make sense, I will ask about it. If I get nonsense answers, I will point that out.

If using logic and asking pointed questions makes me a troll, then I guess I am a troll. Convince me with the math, then I will be segwit's strongest advocate. If the math doesnt add up, I will call it what it is.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: amaclin on March 16, 2016, 03:51:49 PM
Now, one difference is that the other side doesnt start calling me a troll when I point out that their technical analysis is incorrect.
You are fucking lucky!  ;D Both sides call me a troll when I ask questions!


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: watashi-kokoto on March 16, 2016, 04:14:56 PM
If using logic and asking pointed questions makes me a troll, then I guess I am a troll. Convince me with the math, then I will be segwit's strongest advocate. It the math doesnt add up, I will call it what it is

the paper's out there
read it and then come back
FYI 1.375 to 1.75 MB per block


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: 2112 on March 16, 2016, 04:22:11 PM
maybe its just me misinterpreting the english as my second language and the above isnt claiming that it will increase the block capacity. I avoid political stuff so maybe I am just not understanding the nuances of the english. recently I found out that "sick" meant "cool", but cool wasnt about the temperature, but something else. So I guess it just matters what the meaning of the words "size increase" means.
Your English is very fine, in fact it is already better than the written word of many native English speakers.

The mistake you are making is your avoidance of the "political stuff". There's no way to avoid learning about https://en.wikipedia.org/wiki/Political_economy .

"segregated witness" is a neat tool that cleaves not only "big blockists" but also "small blockists" associated around Mircea Popescu's  http://thebitcoin.foundation/ which considers 0.5.3 as the "original codebase".

The key contribution of Adam Back is his update of https://en.wikipedia.org/wiki/Democratic_centralism vocabulary to the situation in 21st century.

If you really try to understand what's going on reread the https://en.wikipedia.org/wiki/Twenty-one_Conditions , but replace "communist" with "bitcoinist" and "proletariat" with "bitcoinariat", "counter-revolutionary element" with "troll", etc.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: watashi-kokoto on March 16, 2016, 04:24:32 PM
Someone's stressed that  ˵Bitcoin is out of their control˶


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: ChronosCrypto on March 16, 2016, 04:36:25 PM
Whoa, whoa, wait...

From the point of view of old clients, segwit adds one coinbase output that contains the root hash of the Merkle tree that commits to the witness transaction ids. It uses 47 extra bytes per block, so technically, yes, it "wastes precious blockchain space".

So, 47 bytes per block. That's not too unreasonable. But...

This then gets us to my question that is not being answered. On average, how many bytes in the blockchain will be needed for a standard payment sent via segwit?

Is this ever less than it would be now?
Is this ever the same as it is now?
Is this usually about 50 bytes more per tx?

50 bytes per transaction for fully-validating nodes? This needs to be answered.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: jl777 on March 16, 2016, 04:49:39 PM
If using logic and asking pointed questions makes me a troll, then I guess I am a troll. Convince me with the math, then I will be segwit's strongest advocate. It the math doesnt add up, I will call it what it is

the paper's out there
read it and then come back
FYI 1.375 to 1.75 MB per block
how many bytes total does a segwit tx permanently occupy
just pick any standard normal tx.

is that number bigger or smaller than the size of the normal tx.

if the number is bigger, is this the case where "more" means "less"?
am I a troll because I am confused why something that is 10x more complicated, has potential attack vectors and requires adding trust to bitcoin is marketed as helping with scaling by using MORE bytes?

I am just a simple C programmer and I dont understand complicated things like using more bytes to help scaling when it seems you end up with fewer tx that fit in the same amount of space.

please help me understand. Is there a new quantum zero knowledge phased space bit multiplexer now? do I need to find my flux capacitor from the attic?


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: Sigals on March 16, 2016, 04:51:36 PM
Whoa, whoa, wait...

From the point of view of old clients, segwit adds one coinbase output that contains the root hash of the Merkle tree that commits to the witness transaction ids. It uses 47 extra bytes per block, so technically, yes, it "wastes precious blockchain space".

So, 47 bytes per block. That's not too unreasonable. But...

This then gets us to my question that is not being answered. On average, how many bytes in the blockchain will be needed for a standard payment sent via segwit?

Is this ever less than it would be now?
Is this ever the same as it is now?
Is this usually about 50 bytes more per tx?

50 bytes per transaction for fully-validating nodes? This needs to be answered.

I would like to see an answer to this too, it seems everyone is avoiding this question. Fully validating nodes are very important for those of us that want to verify the blockchain ourselves and are required for bootstrapping new nodes.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: jl777 on March 16, 2016, 04:58:58 PM
Whoa, whoa, wait...

From the point of view of old clients, segwit adds one coinbase output that contains the root hash of the Merkle tree that commits to the witness transaction ids. It uses 47 extra bytes per block, so technically, yes, it "wastes precious blockchain space".

So, 47 bytes per block. That's not too unreasonable. But...

This then gets us to my question that is not being answered. On average, how many bytes in the blockchain will be needed for a standard payment sent via segwit?

Is this ever less than it would be now?
Is this ever the same as it is now?
Is this usually about 50 bytes more per tx?

50 bytes per transaction for fully-validating nodes? This needs to be answered.
It is answered in the BIP:

Quote
Transaction ID

A new data structure, witness, is defined. Each transaction will have 2 IDs.

Definition of txid remains unchanged: the double SHA256 of the traditional serialization format:

  [nVersion][txins][txouts][nLockTime]
  
A new wtxid is defined: the double SHA256 of the new serialization with witness data:

  [nVersion][marker][flag][txins][txouts][witness][nLockTime]

from the BIP...

the wtxid is based on all of the original, plus marker (1 byte?) flag (1 byte) and witness, which appears to be:

 1-byte - OP_RETURN (0x6a)
   1-byte - Push the following 36 bytes (0x24)
   4-byte - Commitment header (0xaa21a9ed)
  32-byte - Commitment hash: Double-SHA256(witness root hash|witness nonce)

all this seems to be above and beyond what would be needed for the normal, plus the nVersion (4 bytes) and nLockTime (4 bytes) are duplicated. To a simple C programmer like me it sure looks like instead of reducing the net amount as required by anything claiming to save space, it is increasing the size by approx 50 bytes.

Maybe its 32 + 4 + 1 + 1 + 4, so 42 bytes?

I am trying to understand enough to implement this, but unless the original tx is reduced by more than the witness data uses, it will cost more per tx.

But dont worry, I was told that it is likely that 100% of nodes will be pruning nodes in the future and all that matters is the size of the utxo. I still await an explanation of how any new node can bootstrap if all nodes are pruning nodes...

James


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: jl777 on March 16, 2016, 05:09:39 PM
Whoa, whoa, wait...

From the point of view of old clients, segwit adds one coinbase output that contains the root hash of the Merkle tree that commits to the witness transaction ids. It uses 47 extra bytes per block, so technically, yes, it "wastes precious blockchain space".

So, 47 bytes per block. That's not too unreasonable. But...

This then gets us to my question that is not being answered. On average, how many bytes in the blockchain will be needed for a standard payment sent via segwit?

Is this ever less than it would be now?
Is this ever the same as it is now?
Is this usually about 50 bytes more per tx?

50 bytes per transaction for fully-validating nodes? This needs to be answered.

I would like to see an answer to this too, it seems everyone is avoiding this question. Fully validating nodes are very important for those of us that want to verify the blockchain ourselves and are required for bootstrapping new nodes.


01000000000102fff7f7881a8099afa6940d42d1e7f6362bec38171ea3edf433541db4e4ad969f0 0000000494830450221008b9d1dc26ba6a9cb62127b02742fa9d754cd3bebf337f7a55d114c8e5c dd30be022040529b194ba3f9281a99f2b1c0a19c0489bc22ede944ccf4ecbab4cc618ef3ed01eef fffffef51e1b804cc89d182d279655c3aa89e815b1b309fe287d9b2b55d57b90ec68a0100000000 ffffffff02202cb206000000001976a9148280b37df378db99f66f85c95a783a76ac7a6d5988ac9 093510d000000001976a9143bde42dbee7e4dbe6a21b2d50ce2f0167faa815988ac000247304402 203609e17b84f6a7d30c80bfa610b5b4542f32a8a0d5447a12fb1366d7f01cc44a0220573a954c4 518331561406f90300e8f3358f51928d43c212a8caed02de67eebee0121025476c2e83188368da1 ff3e292e7acafcdb3566bb0ad253f62fc70f07aeee635711000000

the above is from https://github.com/bitcoin/bips/blob/master/bip-0143.mediawiki
it is a 2 input 2 output tx in the witness space.

In addition to the above, the much smaller anyone-can-spend tx is needed too. I think it will be about 100 bytes?

so we have a combined space of around 800 bytes against the 1000 bytes the usual 2 input/2 output tx occupies. Or was that 400 bytes that the 2input/2output tx takes?

I was told that all nodes are expected to be pruning nodes anyway, so you dont have to worry about any full node requirements. They will make sure all the archive copies will forever be kept safe and not tampered with. you can trust them. it is better for bitcoin to require trust

Isnt it nice to have all the hard choices made for you. We can trust in the math done by the central planners. Dont worry, be happy.

James


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: xyzzy099 on March 16, 2016, 05:20:41 PM
It is possible that there is a fundamental misunderstanding here.

I don't think anyone ever claimed that segwit was a way to expand capacity in a more (or even equally) efficient way than simply increasing the block size.

The advantage of segwit is that it elegantly fixes a couple of other hard problems (malleability, O(n^2) sigops issue) while ALSO allowing more transactions per block without requiring a hard fork for the block size.  The amount of data in the blockchain for fully-validating nodes will definitely increase, just as it would if there were a 2MB block-size hard-fork.

Am I misunderstanding the concern here?
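
As a rough illustration of the O(n^2) sigops point: under the legacy sighash rules each signature hashes (roughly) the whole transaction, while the segwit sighash (BIP143) hashes a fixed-size preimage per input. The byte counts below are illustrative assumptions, not figures from this thread:

  # Back-of-the-envelope comparison of bytes hashed for signature checks.
  def legacy_bytes_hashed(n_inputs, n_outputs=2, in_size=150, out_size=34, overhead=10):
      tx_size = n_inputs * in_size + n_outputs * out_size + overhead
      return n_inputs * tx_size          # each input re-hashes roughly the whole tx: O(n^2)

  def bip143_bytes_hashed(n_inputs, preimage_size=156):
      return n_inputs * preimage_size    # fixed-size preimage per input: O(n)

  for n in (2, 100, 1000):
      print(n, legacy_bytes_hashed(n), bip143_bytes_hashed(n))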


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: ChronosCrypto on March 16, 2016, 05:25:45 PM
Hold up. I'd like to hear from Wuille (one of the creators of segwit) about the size difference between a standard 2-input, 2-output transaction and its equivalent using segwit, for a fully-validating node. No need to attack with sarcasm.

BTW, I am also curious if the O(n^2) sigops issue can be solved in a much simpler way.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: amaclin on March 16, 2016, 05:27:50 PM
Am I misunderstanding the concern here?
The problems are
1) SegWit does not exist
2) Nobody knows how it works
3) Nobody needs it

There is only one goal for everyone: to double their fiat money with cryptocurrency.
SegWit does not solve this problem. But the developers are trying to convince you that it does.
 


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: xyzzy099 on March 16, 2016, 05:27:55 PM
Hold up. I'd like to hear from Wuille (one of the creators of segwit) about the size difference between a standard 2-input, 2-output transaction and its equivalent using segwit, for a fully-validating node. No need to attack with sarcasm.

BTW, I am also curious if the O(n^2) sigops issue can be solved in a much more simple way.

If you are addressing that to me, I assure you my reply was not meant to be sarcastic at all.  Not sure how anyone could take it that way.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: 2112 on March 16, 2016, 05:35:49 PM
The advantage of segwit is that it elegantly fixes a couple of other hard problems (malleability, O(n^2) sigops issue)
What about fixing those "other problems" (I don't want to say "hard", because IMO they aren't "hard" by themselves) without the segregation? Impossible or just not worth it?


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: xyzzy099 on March 16, 2016, 05:39:47 PM
The advantage of segwit is that it elegantly fixes a couple of other hard problems (malleability, O(n^2) sigops issue)
What about fixing those "other problems" (I don't want to say "hard", because IMO they aren't "hard" by themselves) without the segregation? Impossible or just not worth it?

I don't claim to know the answer to that question, but your reply raises the question:  Have you submitted a pull request with code that fixes these problems that you see as 'not "hard" by themselves'?



Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: ChronosCrypto on March 16, 2016, 05:40:24 PM
No need to attack with sarcasm.

If you are addressing that to me, I assure you my reply was not meant to be sarcastic at all.  Not sure how anyone could take it that way.
Oh, my bad, I should have been more clear. It's directed at statements like this:
it is better for bitcoin to require trust

Isnt it nice to have all the hard choices made for you. We can trust in the math done by the central planners. Dont worry, be happy.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: xyzzy099 on March 16, 2016, 05:41:54 PM
No need to attack with sarcasm.

If you are addressing that to me, I assure you my reply was not meant to be sarcastic at all.  Not sure how anyone could take it that way.
Oh, my bad, I should have been more clear. It's directed at statements like this:
it is better for bitcoin to require trust

Isnt it nice to have all the hard choices made for you. We can trust in the math done by the central planners. Dont worry, be happy.

Yeah, I agree with that.  I was really interested in reading this thread 'til that comment made it political.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: 2112 on March 16, 2016, 05:48:03 PM
I don't claim to know the answer to that question, but your reply begs the question:  Have you submitted a pull request with code that fixes these problems that you see as 'not "hard" by themselves'?
Submitting a pull request without first discussing the viability of the proposed "pull" is only for the terminally naïve.

Normal programmers do design first then code later, especially on a large financial project.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: xyzzy099 on March 16, 2016, 05:54:11 PM
I don't claim to know the answer to that question, but your reply begs the question:  Have you submitted a pull request with code that fixes these problems that you see as 'not "hard" by themselves'?
Submitting pull request without first discussing the viability of proposed "pull" is only for terminally naïve.

Normal programmers do design first then code later, especially on a large financial project.


My point, perhaps poorly expressed, was that if you think these problems are 'not hard', you must have solutions in mind, no?  I'd be interested in hearing your ideas.  I am genuinely interested, not being sarcastic here.



Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: jl777 on March 16, 2016, 05:55:12 PM
No need to attack with sarcasm.

If you are addressing that to me, I assure you my reply was not meant to be sarcastic at all.  Not sure how anyone could take it that way.
Oh, my bad, I should have been more clear. It's directed at statements like this:
it is better for bitcoin to require trust

Isnt it nice to have all the hard choices made for you. We can trust in the math done by the central planners. Dont worry, be happy.

Yeah, I agree with that.  I was really interested in reading this thread 'til that comment made it political.
I did not make this a political thing.
segwit is marketed as a way to enable scaling, when it is no such thing.

my analysis so far is that it creates a much more complicated, error-prone system with potential attack vectors, that is not peer reviewed, and that reduces the ability to scale. Maybe my problem is that I am just not smart enough to understand it well enough to appreciate it?

but in some weeks it will be softforked, so its ok, there is no need to worry about it.

so if the bitcoin supply is increased to 1 billion with a softfork, that's ok?

All I see is that segwit tx require more work, more space, more confusion, and we end up with tx in the blockchain that need to be trusted. bitcoin becomes partly a trusted ledger, but ripple is doing fine, so why not



Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: xyzzy099 on March 16, 2016, 05:57:10 PM

I did not make this a political thing.

segwit is marketed as a way to enable scaling, when it is no such thing.


I didn't think you did.  One of us replying messed up the quoting.  I know you are not the one who 'went there'.

[EDIT]How does it not help scaling, if it increases the number of transactions that can be included in each block?


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: jl777 on March 16, 2016, 05:58:36 PM

I did not make this a political thing.


I didn't think you did.  One of us replying messed up the quoting.  I know you are not the one who 'went there'.

ah, the crosspost.

I am just so confused how being a softfork makes fundamentally changing (breaking) things ok


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: 2112 on March 16, 2016, 06:01:58 PM
My point, perhaps poorly expressed, was that if you think these problems are 'not hard', you must have solutions in mind, no?  I'd be interested in hearing your ideas.  I am genuinely interested, not being sarcastic here.
It wasn't only me that had those solutions in mind. In fact they are already included in the "segregated witness" proposal, but without the "segregation" part. The "segregation" just splits the transaction in two parts. In fact one could come up with a deficient "segregated witness" proposal that wouldn't fix the discussed problems. They are orthogonal concepts.
 


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: ChronosCrypto on March 16, 2016, 06:04:06 PM
[EDIT]How does it not help scaling, if it increases the number of transactions that can be included in each block?
Block size is easy to change. There's an arguably-popular client (Bitcoin Classic) that solves that problem today. To help scaling you need to invent tech to make running a full node easier, such as thin-blocks or IBLT. Shameless plug: I recently produced a video on Xtreme Thin Blocks (https://youtu.be/KYvWTZ3p9k0).


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: xyzzy099 on March 16, 2016, 06:09:09 PM
[EDIT]How does it not help scaling, if it increases the number of transactions that can be included in each block?
Block size is easy to change. There's an arguably-popular client (Bitcoin Classic) that solves that problem today. To help scaling you need to invent tech to make running a full node easier, such as thin-blocks or IBLT. Shameless plug: I recently produced a video on Xtreme Thin Blocks (https://youtu.be/KYvWTZ3p9k0).

That may be true, but you didn't answer the question I asked (See above).  I don't think segwit is being proposed as the solution to scaling...  I don't think it was meant to be a scaling solution at all, really - the increased transaction capacity is just a side-effect, right?


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: xyzzy099 on March 16, 2016, 06:10:45 PM
My point, perhaps poorly expressed, was that if you think these problems are 'not hard', you must have solutions in mind, no?  I'd be interested in hearing your ideas.  I am genuinely interested, not being sarcastic here.
It wasn't only me that had those solutions in mind. In fact they are already included in the "segregated witness" proposal, but without the "segregation" part. The "segregation" just splits the transaction in two parts. In fact one could come up with a deficient "segregated witness" proposal that wouldn't fix the discussed problems. They are orthogonal concepts.
 

Which solutions are you referring to here?


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: sgbett on March 16, 2016, 06:40:10 PM
I asked some of these questions 3 months ago (https://www.reddit.com/r/bitcoinxt/comments/3w34o0/would_segregated_witnesses_really_help_anyone/).  Never got a decent answer.

Blockstream wants soft-forked SegWit to fix the malleability problems (that would be needed for the LN, if they ever get it to work), and to force ordinary p2p bitcoin users to subsidize the costs of complicated multisig transactions (ditto).  But these reasons do not seem to explain the urgency and energy that they are putting into the SegWit soft fork.  Maybe they have other undeclared reasons?  Perhaps they intend to stuff more data into the extension records, which they would not have to justify or explain since, being in the extension part, "ordinary users can ignore it anyway"?

As for SegWit being a soft fork, that is technically true; but a soft fork can do some quite radical changes, like imposing a negative interest (demurrage) tax, or raising the 21 million limit (https://www.reddit.com/r/bitcoin_uncensored/comments/43w24e/raising_the_21_million_btc_limit_with_a_soft_fork/).  One could also raise the block size limit (https://www.reddit.com/r/btc/comments/43w4rx/how_core_can_increase_the_21_million_btc_issuance/czlsk2q) that way.  These tricks would all let old clients work for a while, but eventually everybody will be forced to upgrade to use coins sent by the new version.

A hard fork based consensus mechanism, far from being dangerous, is actually the solution to centralised control over consensus.

Script versioning is essentially about changing this consensus mechanism so that any change can be made without any consensus. Giving this control to anyone, even satoshi himself, entirely undermines the whole idea of bitcoin. *Decentralised* something something.

Script versioning
Changes to Bitcoin’s script allow for both improved security and improved functionality. However, the design of script only allows backwards-compatible (soft-forking) changes to be implemented by replacing one of the ten extra OP_NOP opcodes with a new opcode that can conditionally fail the script, but which otherwise does nothing. This is sufficient for many changes – such as introducing a new signature method or a feature like OP_CLTV, but it is both slightly hacky (for example, OP_CLTV usually has to be accompanied by an OP_DROP) and cannot be used to enable even features as simple as joining two strings.

Segwit resolves this by including a version number for scripts, so that additional opcodes that would have required a hard-fork to be used in non-segwit transactions can instead be supported by simply increasing the script version.
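
As a minimal sketch of what a "script version" looks like on the wire under BIP141 (the bytes below are illustrative, and the version 1 program is purely hypothetical):

  key_hash   = bytes(20)                        # placeholder 20-byte hash
  v0_program = bytes([0x00, 0x14]) + key_hash   # version 0 witness program (P2WPKH): OP_0 <20 bytes>
  v1_program = bytes([0x51, 0x14]) + key_hash   # hypothetical version 1 (OP_1) with the same payload
  # Nodes treat undefined witness versions as anyone-can-spend and pass them through,
  # which is what allows new script semantics to be added later via soft fork.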

It doesn't matter where you stand on the blocksize debate, which dev team you support, or any of the myriad disagreements. As Gregory Maxwell himself states (https://www.reddit.com/r/btc/comments/43lxgn/21_months_ago_gavin_andresen_published_a/czjmh7n?context=3):

"Anyone who /understood/ it would [shut down bitcoin], if somehow control of it were turned over to them."


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: rizzlarolla on March 16, 2016, 06:50:35 PM


my analysis so far is that it creates a much more complicated error prone system with potential attack vectors that is not peer reviewed that reduces the ability to scale. Maybe my problem is that I am just not smart enough to understand it well enough to appreciate it?

but in some weeks it will be softforked, so its ok, there is no need to worry about it.

All I see is that segwit tx requires more work, more space, more confusion, but we do end up where there are tx in the blockchain that need to be trusted. bitcoin becomes partly a trusted ledger, but ripple is doing fine, so why not

[snipped]


Great posts. I think you're smart enough to understand segwit. If anyone can.

In "a few weeks" Bitcoin will be fundamentally changed by segwit. (soft forked by core, Bitcoin guardians)

I agree with your earlier comment. Segwit must be postponed.

2mb blocks first, soon, then reassess segwit. At least hard fork.
(core could do this, 2mb blocks are road mapped in core?)

Segwit is not my bitcoin. Not at this point in time at least.
"a much more complicated error prone system with potential attack vectors that is not peer reviewed"



Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: gmaxwell on March 16, 2016, 07:15:16 PM
Wow. The deceptive misinformation in this thread is really astonishing.

Contrary to the claims here, segwit doesn't increase transaction sizes (as was noted, it adds a single coinbase commitment per block).

all this seems to be above and beyond what would be needed for the normal, plus the nVersion (4 bytes) and nLockTime (4 bytes) are duplicated. To a simple C programmer like me it sure looks like instead of reducing the net amount as required by anything claiming to save space, it is increasing the size by approx 50 bytes.

Maybe its 32 + 4 + 1 + 1 + 4, so 42 bytes?

jl777, to be blunt, and offer some unsolicited advice: You have almost no chance of actually writing that bitcoin full node you say you want to be working on when you are so unwilling to spend more than a second reading or take any time at all to understand how existing Bitcoin software works.  Virtually every post of yours contains one or another fundamental misunderstanding of the existing system/software-- and your abrasive and accusatory approach leaves other people uninterested in spending their time educating you. Even here, I am not responding for your benefit-- as I would otherwise-- but because other people are repeating the misinformation you've unintentionally generated due to your ignorance. Please take a step back: Bitcoin is not "bitcoin dark", "nxt", or the other altcoins you've worked on in the past, where an abusive/armwaving style that leans heavily on native intelligence while eschewing study will itself establish you as an expert. Bitcoin is full of really remarkably intelligent people, so simply being smarter than average doesn't make you a shining star as it may in some places.

The text you are quoting is instructions on computing a hash. None of the data involved in it is stored, any more than the sighash data for a large transaction (which can be many times the transaction's size) is stored.

If the carefully constructed, peer reviewed specifications are not to your liking; you could also spend some time studying the public segnet testnet (https://github.com/sipa/bitcoin/tree/segwit). Given that there are both specifications and a running public network, the continued inquisitory "needs to be answered" conspiracy theory nonsense-- even after being given a _direct_ and specific answer ("segwit adds one coinbase output that contains the root hash of the Merkle tree that commits to the witness transaction ids. It uses 47 extra bytes per block")-- is highly inappropriate. Please do not subject other contributors to this forum to that kind of hostility.  

Quote
My assumption is that for a segwit compatible full relaying node to be able to relay the full blockchain it would need to have ALL the data, original blockchain and witness data.
Your lack of understanding of how Bitcoin is structured and exists today works against you. A full node does not need to store "ALL the data", and in Bitcoin Core today you can set an option and run a full node with only about 2GB storage. Configured in this pruning manner, the node relays transactions and blocks, fully validates everything, etc.  This is the state _today_.
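
For reference, the pruning mode described here is enabled with Bitcoin Core's prune option; the value is a disk target in MiB (2000 below is just an example, and the exact minimum, around 550 MiB, may vary by version):

  # In bitcoin.conf, or as -prune=2000 on the command line.
  prune=2000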

Segwit improves scaling in several ways as was already explained in this thread:
Quote
  • The extra space used by witnesses is more prunable than normal block space, as it's not needed by non-validating clients.
  • Has less effect on bandwidth, as light clients don't need the witness data.
  • Has no effect on the UTXO set, so does not contribute to database growth and/or churn.
  • Enables script versions, which will make the introduction of Schnorr signatures much easier later on, which are more space efficient than what we have now (even for simple single-key outputs/inputs).

For example, all existing full node software that I'm aware of in wide use on the current network does not validate signatures in the far past chain. They just download them, and if pruning is enabled, throw them out. They can't verify the transaction hashes, make sure no inflation or other non-signature validation rule violations happened, and build their UTXO set without downloading them... but the download is pure waste. Segwit makes it possible for a node which isn't going to verify all the signatures in the far past to skip downloading them.  Segwit greatly reduces the bandwidth required to service lite nodes for a given amount of transactions, and segwit increases the capacity (in terms of transactions per block) without increasing the amount of UTXO growth per block... and all this on top of the non-scaling related improvements it brings.

This is why the technical space around Bitcoin is overwhelmingly in favor of it.

Script versioning is essentially about changing this consensus mechanism so that any change can be made without any consensus. Giving this control to anyone, even satoshi himself, entirely undermines the whole idea of bitcoin. *Decentralised* something something.
The content of your scriptpubkey, beyond the resource costs to the network, is a private contract between the sender of the funds and the receiver of the funds. It is only the business of these parties, no one else. Ideally, it would not be subject to "consensus", in any way/shape/form-- it is a _private matter_. It is not any of your business how I spend my Bitcoins; but unfortunately, script enhancing softforks do require consensus of at least the network hashpower.

Bitcoin Script was specifically designed because how the users contract with it isn't the network's business-- though it has limitations. And, fundamentally, even with those limitations it is already, at least theoretically, impossible to prevent users from contracting however they want. For example, Bitcoin has no Sudoku implementation in Script, and yet I can pay someone conditionally on them solving one (https://bitcoincore.org/en/2016/02/26/zero-knowledge-contingent-payments-announcement/) (or any other arbitrary program).

Bitcoin originally had an OP_VER to enable versioned script upgrades. Unfortunately, the design of this opcode was deeply flawed-- it allowed any user of the network, at their unannounced whim, to hardfork the network between different released versions of Bitcoin.  Bitcoin's creator removed it and in its place put in facilities for softforks. Softforks have been used many times to compatibly extend the system-- first by Bitcoin's creator, and later by the community. The segwit script versioning brings back OP_VER but with a design that isn't broken-- it makes it faster and safer to design and deploy smart contracting/script improvements (for example, a recently proposed one will reduce transaction sizes by ~30% (https://bitcointalk.org/index.php?topic=1377298.0) with effectively no costs once deployed); but it doesn't change the level of network consensus required to deploy softforks, only perhaps the ease of achieving the required consensus, because the resulting improvements are safer.

If you're going to argue that you don't want a system where hashpower consensus allows new script rules for users to use to voluntarily contract with themselves, you should have left Bitcoin in 2010 or 2011 (though it's unclear how any blockchain cryptocurrency could _prevent_ this from happening).  Your views, if not just based on simple misunderstandings, are totally disjoint with how Bitcoin works. I don't begrudge you the freedom to want weird or even harmful things-- and I would call denying users the ability to choose whatever contract terms they want out of principle rather than considerations like resource usage both weird and harmful--, but Bitcoin isn't the place for them, and the restrictions you're asking for appear to be deeply disjoint with Bitcoin's day-one and every-day-since design, which has a huge amount of complexity in the original design for user (not consensus) determined smart contracting and where softforks (hashpower consensus) have been frequently used to extend the system.


Title: Re: Segwit details?
Post by: achow101 on March 16, 2016, 07:24:22 PM
Oh, the amount of misinformation in this thread!!

jl777 you have misunderstood the segwit BIPs.

Transaction ID

A new data structure, witness, is defined. Each transaction will have 2 IDs.

Definition of txid remains unchanged: the double SHA256 of the traditional serialization format:

  [nVersion][txins][txouts][nLockTime]
  
A new wtxid is defined: the double SHA256 of the new serialization with witness data:

  [nVersion][marker][flag][txins][txouts][witness][nLockTime]

from the BIP...

the wtxid is based on all of the original, plus marker (1 byte?) flag (1 byte) and witness, which appears to be:

 1-byte - OP_RETURN (0x6a)
   1-byte - Push the following 36 bytes (0x24)
   4-byte - Commitment header (0xaa21a9ed)
  32-byte - Commitment hash: Double-SHA256(witness root hash|witness nonce)

all this seems to be above and beyond what would be needed for the normal, plus the nVersion (4 bytes) and nLockTime (4 bytes) are duplicated. To a simple C programmer like me it sure looks like instead of reducing the net amount as required by anything claiming to save space, it is increasing the size by approx 50 bytes.
NO NO NO!!!!. AND PLEASE STOP SPREADING THIS!! IT IS WRONG.

The above with the OP_RETURN is how the witness root hash (the hash of the wtxids, where the wtxids are leaves of a hash tree) is committed. It is added to the coinbase transaction of the block. The point of doing this is to commit the witnesses to the blockchain without adding all of the witnesses to the size calculation of the blockchain, except for these 38 bytes.

The wtxid (also called witness hash) is the hash of the transaction with all of the witness data and the transaction data. The transaction format if witness serialization is specified is 4 bytes for the transaction version (currently used), 1 byte for a marker (new, is always 0), 1 byte for a flag (new), the input count (currently used), the inputs (currently used), the output count (currently used), the outputs (currently used), the witnesses (new), and the locktime (currently used). All of this is what is hashed to get the wtxid. However, the regular txid is just the hash of everything that is currently used (so it contains none of the new stuff) so as to maintain backwards compatibility. If witness serialization is not specified then only the currently used stuff is sent in the tx message.

Likewise, with blocks, from what I understand, the block sent will contain the new transaction format if witness serialization is specified. Otherwise it will just include the transaction data that is currently used so that it maintains backwards compatibility. The only "bloat" caused by segwit is the 38 bytes in a second output of the Coinbase to commit the wtxids in a similar manner to the way that regular txids are committed and 2 extra bytes per transaction. Only upgraded nodes would get the witness data and those 2 extra bytes.
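
A minimal sketch of the two serializations described above, in Python, using placeholder bytes for the inputs, outputs and witnesses (the marker/flag values follow BIP141; nothing here is a real transaction):

  import hashlib

  def dsha256(b):
      return hashlib.sha256(hashlib.sha256(b).digest()).digest()

  nversion  = bytes.fromhex("01000000")
  marker    = b"\x00"    # new, always 0
  flag      = b"\x01"    # new, nonzero
  txins     = b"<input count + serialized inputs>"        # placeholders
  txouts    = b"<output count + serialized outputs>"
  witness   = b"<witness stacks, one per input>"
  nlocktime = bytes.fromhex("00000000")

  txid  = dsha256(nversion + txins + txouts + nlocktime)                             # unchanged
  wtxid = dsha256(nversion + marker + flag + txins + txouts + witness + nlocktime)   # new

  print(txid[::-1].hex(), wtxid[::-1].hex())   # txids are conventionally shown byte-reversed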

P.S. So my understanding is that you need a special segwit address (that is somehow determined to be a segwit address using what mechanism?) so both sender and receiver need to already have the segwit version. I guess just ignoring all the existing nodes is at least some level of backward compatibility. But are you sure all users will quickly get used to having to deal with two types of addresses for every transaction and they will make sure they know what version the other party is running. Doesnt this bifurcate the bitcoin universe? maybe the name should be "bifurcating softfork"
The addresses are different. An upgraded node will use p2sh addresses, which we currently use. Those p2sh addresses can be spent to using the current methods so non-upgraded users can still spend to those addresses. To spend from those addresses requires segwit, and the way that the scriptsig is set up, non-upgraded nodes will always validate those transactions as valid even if the witness (which those nodes cannot see) is invalid. Those segwit transactions still create outputs the regular way, so they can still send to p2pkh or p2pk outputs which non-upgraded users can still receive and spend from. Segwit transactions are considered by old nodes as transactions which spent an anyonecanspend output and thus are treated with a grain of salt. The best course of action is to of course wait for confirmations as we already should still be doing now.
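
To illustrate the address point, here is a rough sketch of a P2SH-wrapped version-0 witness program. The public key is a placeholder, and whether hashlib exposes ripemd160 depends on the local OpenSSL build:

  import hashlib

  def hash160(b):
      return hashlib.new("ripemd160", hashlib.sha256(b).digest()).digest()

  pubkey = bytes.fromhex("02" + "11" * 32)           # placeholder compressed public key
  witness_program = b"\x00\x14" + hash160(pubkey)    # v0 P2WPKH program: OP_0 <20-byte key hash>

  # The sender pays to an ordinary P2SH output, so old wallets can send to it:
  script_pubkey = b"\xa9\x14" + hash160(witness_program) + b"\x87"  # OP_HASH160 <20 bytes> OP_EQUAL

  # Spending only reveals the witness program in the scriptSig; the signature and
  # public key go in the witness, which old nodes never download or validate:
  script_sig = bytes([len(witness_program)]) + witness_program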

Whoa, whoa, wait...

From the point of view of old clients, segwit adds one coinbase output that contains the root hash of the Merkle tree that commits to the witness transaction ids. It uses 47 extra bytes per block, so technically, yes, it "wastes precious blockchain space".

So, 47 bytes per block. That's not too unreasonable. But...

This then gets us to my question that is not being answered. On average, how many bytes in the blockchain will be needed for a standard payment sent via segwit?

Is this ever less than it would be now?
Is this ever the same as it is now?
Is this usually about 50 bytes more per tx?

50 bytes per transaction for fully-validating nodes? This needs to be answered.

I would like to see an answer to this too, it seems everyone is avoiding this question. Fully validating nodes are very important for those of us that want to verify the blockchain ourselves and are required for bootstrapping new nodes.


01000000000102fff7f7881a8099afa6940d42d1e7f6362bec38171ea3edf433541db4e4ad969f0 0000000494830450221008b9d1dc26ba6a9cb62127b02742fa9d754cd3bebf337f7a55d114c8e5c dd30be022040529b194ba3f9281a99f2b1c0a19c0489bc22ede944ccf4ecbab4cc618ef3ed01eef fffffef51e1b804cc89d182d279655c3aa89e815b1b309fe287d9b2b55d57b90ec68a0100000000 ffffffff02202cb206000000001976a9148280b37df378db99f66f85c95a783a76ac7a6d5988ac9 093510d000000001976a9143bde42dbee7e4dbe6a21b2d50ce2f0167faa815988ac000247304402 203609e17b84f6a7d30c80bfa610b5b4542f32a8a0d5447a12fb1366d7f01cc44a0220573a954c4 518331561406f90300e8f3358f51928d43c212a8caed02de67eebee0121025476c2e83188368da1 ff3e292e7acafcdb3566bb0ad253f62fc70f07aeee635711000000

the above is from https://github.com/bitcoin/bips/blob/master/bip-0143.mediawiki
it is a 2 input 2 output tx in the witness space.

In addition to the above, the much smaller anyonecan spend tx is needed too. I think it will be about 100 bytes?

so we have a combined space of around 800 bytes against the 1000 bytes the usual 2 input/2 output tx occupies. Or was that 400 bytes that the 2input/2output tx takes?
Again, NO. See above.




Wow. The deceptive misinformation in this thread is really astonishing.
Ah *sigh of relief*; here comes somebody who actually knows what they are talking about.

Could you also let me know if I presented any misinformation? I have been trying my best not to, while also trying to make jl777 understand why he is wrong, but I may have accidentally (either due to misunderstanding the BIPs or just really bad typing) given him false information.

If the carefully constructed, peer reviewed specifications are not to your liking; you could also spend some time studying the public segnet testnet (https://github.com/sipa/bitcoin/tree/segwit).
Since neither Pieter nor anyone on IRC responded, I will ask this again. Will there be a full write-up (preferably before segwit's release) of all of the changes that segwit entails, so that wallet developers can get working on implementing segwit? AFAIK the segwit implementation contains omissions and changes from what was specified in the BIPs.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: jl777 on March 16, 2016, 07:28:09 PM
I apologize for my technical ineptitudes.

I am trying to understand how segwit saves blockchain space, as that is what it is being marketed as, with a softfork coming in the upcoming weeks.

So I look at the BIP and find that each segwit tx needs a commitment hash, plus more. This is 32 bytes per tx that, to my ignorant simple-minded thinking, is in addition to what would be needed if it was a normal tx.

Now I am being told that we dont need to count the space used by the witness data. This confuses me. I like to count all permanently required data as the permanent space cost.

I apologize for asking questions like this, but I am just a simple C programmer trying to allocate space for the segwit and noticing it seems to take more space for every tx. I await to be properly learned about how the commitment hash is not actually needed to be stored anywhere and how that would still allow a full node to properly validate all the transactions.

Or did I misunderstand? Does segwit mean that it is no business of full nodes to verify all the transactions?

James

****
OK, it was made clear that the commitment hash is just for the wtxid calculation, so it doesnt really exist anywhere. The cost is 2 bytes per tx and 1 byte per vin. Still, it is increasing HDD space used permanently, which is what confused me, as segwit was marketed as saving HDD space and helping with scaling.
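
A rough sketch of that overhead arithmetic, assuming the witness stack-item count for each input fits in a single compact-size byte:

  def segwit_serialization_overhead(num_inputs):
      marker_and_flag = 2               # 0x00 marker + 0x01 flag
      witness_counts  = num_inputs * 1  # one compact-size stack-item count per input
      return marker_and_flag + witness_counts

  print(segwit_serialization_overhead(2))  # 4 extra serialized bytes for a 2-input tx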


Title: Re: Segwit details?
Post by: jl777 on March 16, 2016, 07:33:29 PM
Oh, the amount of misinformation in this thread!!

jl777 you have misunderstood the segwit BIPs.

Transaction ID

A new data structure, witness, is defined. Each transaction will have 2 IDs.

Definition of txid remains unchanged: the double SHA256 of the traditional serialization format:

  [nVersion][txins][txouts][nLockTime]
  
A new wtxid is defined: the double SHA256 of the new serialization with witness data:

  [nVersion][marker][flag][txins][txouts][witness][nLockTime]

from the BIP...

the wtxid is based on all of the original, plus marker (1 byte?) flag (1 byte) and witness, which appears to be:

 1-byte - OP_RETURN (0x6a)
   1-byte - Push the following 36 bytes (0x24)
   4-byte - Commitment header (0xaa21a9ed)
  32-byte - Commitment hash: Double-SHA256(witness root hash|witness nonce)

all this seems to be above and beyond what would be needed for the normal, plus the nVersion (4 bytes) and nLockTime (4 bytes) are duplicated. To a simple C programmer like me it sure looks like instead of reducing the net amount as required by anything claiming to save space, it is increasing the size by approx 50 bytes.
NO NO NO!!!!. AND PLEASE STOP SPREADING THIS!! IT IS WRONG.

The above with the OP_RETURN is how the witness root hash (the hash of the wtxids, where the wtxids are leaves of a hash tree) is committed. It is added to the coinbase transaction of the block. The point of doing this is to commit the witnesses to the blockchain without adding all of the witnesses to the size calculation of the blockchain, except for these 38 bytes.

The wtxid (also called witness hash) is the hash of the transaction with all of the witness data and the transaction data. The transaction format if witness serialization is specified is 4 bytes for the transaction version (currently used), 1 byte for a marker (new, is always 0), 1 byte for a flag (new), the input count (currently used), the inputs (currently used), the output count (currently used), the outputs (currently used), the witnesses (new), and the locktime (currently used). All of this is what is hashed to get the wtxid. However, the regular txid is just the hash of everything that is currently used (so it contains none of the new stuff) so as to maintain backwards compatibility. If witness serialization is not specified then only the currently used stuff is sent in the tx message.

Likewise, with blocks, from what I understand, the block sent will contain the new transaction format if witness serialization is specified. Otherwise it will just include the transaction data that is currently used so that it maintains backwards compatibility. The only "bloat" caused by segwit is the 38 bytes in a second output of the Coinbase to commit the wtxids in a similar manner to the way that regular txids are committed and 2 extra bytes per transaction. Only upgraded nodes would get the witness data and those 2 extra bytes.

P.S. So my understanding is that you need a special segwit address (that is somehow determined to be a segwit address using what mechanism?) so both sender and receiver need to already have the segwit version. I guess just ignoring all the existing nodes is at least some level of backward compatibility. But are you sure all users will quickly get used to having to deal with two types of addresses for every transaction and they will make sure they know what version the other party is running. Doesnt this bifurcate the bitcoin universe? maybe the name should be "bifurcating softfork"
The addresses are different. An upgraded node will use p2sh addresses, which we currently use. Those p2sh addresses can be spent to using the current methods so non-upgraded users can still spend to those addresses. To spend from those addresses requires segwit, and the way that the scriptsig is set up, non-upgraded nodes will always validate those transactions as valid even if the witness (which those nodes cannot see) is invalid. Those segwit transactions still create outputs the regular way, so they can still send to p2pkh or p2pk outputs which non-upgraded users can still receive and spend from. Segwit transactions are considered by old nodes as transactions which spent an anyonecanspend output and thus are treated with a grain of salt. The best course of action is to of course wait for confirmations as we already should still be doing now.

Whoa, whoa, wait...

From the point of view of old clients, segwit adds one coinbase output that contains the root hash of the Merkle tree that commits to the witness transaction ids. It uses 47 extra bytes per block, so technically, yes, it "wastes precious blockchain space".

So, 47 bytes per block. That's not too unreasonable. But...

This then gets us to my question that is not being answered. On average, how many bytes in the blockchain will be needed for a standard payment sent via segwit?

Is this ever less than it would be now?
Is this ever the same as it is now?
Is this usually about 50 bytes more per tx?

50 bytes per transaction for fully-validating nodes? This needs to be answered.

I would like to see an answer to this too, it seems everyone is avoiding this question. Fully validating nodes are very important for those of us that want to verify the blockchain ourselves and are required for bootstrapping new nodes.


01000000000102fff7f7881a8099afa6940d42d1e7f6362bec38171ea3edf433541db4e4ad969f0 0000000494830450221008b9d1dc26ba6a9cb62127b02742fa9d754cd3bebf337f7a55d114c8e5c dd30be022040529b194ba3f9281a99f2b1c0a19c0489bc22ede944ccf4ecbab4cc618ef3ed01eef fffffef51e1b804cc89d182d279655c3aa89e815b1b309fe287d9b2b55d57b90ec68a0100000000 ffffffff02202cb206000000001976a9148280b37df378db99f66f85c95a783a76ac7a6d5988ac9 093510d000000001976a9143bde42dbee7e4dbe6a21b2d50ce2f0167faa815988ac000247304402 203609e17b84f6a7d30c80bfa610b5b4542f32a8a0d5447a12fb1366d7f01cc44a0220573a954c4 518331561406f90300e8f3358f51928d43c212a8caed02de67eebee0121025476c2e83188368da1 ff3e292e7acafcdb3566bb0ad253f62fc70f07aeee635711000000

the above is from https://github.com/bitcoin/bips/blob/master/bip-0143.mediawiki
it is a 2 input 2 output tx in the witness space.

In addition to the above, the much smaller anyonecan spend tx is needed too. I think it will be about 100 bytes?

so we have a combined space of around 800 bytes against the 1000 bytes the usual 2 input/2 output tx occupies. Or was that 400 bytes that the 2input/2output tx takes?
Again, NO. See above.
OK, I apologize again for misunderstanding the BIPs
that is why I am posting questions.

can you show me a specific rawtxbytes for a simple tx as it is now and what the required bytes are if it was a segwit? and for the segwit case it would need to be two sets of bytes, I think I understand that part well enough.

If the combined overhead is as small as 3 bytes, per tx, then it is amazing. but I dont understand how it is done.

But still, even if it is only 3 bytes more, it is more data required permanently, so it is reducing the total tx capacity per MB and is anti-scaling from that standpoint. I cannot speak to the scaling impact of optimized signature handling, etc.

Also, it seems a 2MB hardfork is much, much simpler and provides the same or more tx capacity. Why isnt that done before segwit?



Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: achow101 on March 16, 2016, 08:04:51 PM
I apologize for my technical ineptitudes.

I am trying to understand how segwit saves blockchain space as that is what it is being marketed as and with a softfork in the upcoming weeks.
It saves space in that it does not require that the signatures be downloaded. Since clients that are syncing do not need to verify the signatures of old transactions, the witness data is then not needed. This means that they do not need to download the witnesses and can thus save bandwidth and space. This only applies several years in the future, after segwit has been used for a while.

So i look at the BIP, I find each segwit tx needs a commitment hash, plus more. this is 32 bytes per tx that to my ignorant simple minded thinking is in addition to what would be needed if it was a normal tx.
Not every transaction needs that hash, just like how not every transaction now has its txid stored in the blockchain. Just like it is done today, the merkle root of the wtxids is stored, and that is it.

Now I am being told that we dont need to count the space used by the witness data. This confuses me. I like to count all permanently required data as the permanent space cost.
The witnesses are not permanent and can be deleted. After a few years, they won't even need to be downloaded.

I apologize for asking questions like this, but I am just a simple C programmer trying to allocate space for the segwit and noticing it seems to take more space for every tx. I await to be properly learned about how the commitment hash is not actually needed to be stored anywhere and how that would still allow a full node to properly validate all the transactions.
Checking the transaction hash is not part of transaction validation. Rather, the signature is validated against the witness data, which checks that the signature is valid for that transaction. The block contains a merkle root, which is calculated by hashing all of the transaction hashes together. The same is done for the witness root hash, which is just the root over all of the wtxids.
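If it helps, the root calculation is the same pairwise double-SHA256 tree used for the txids today. A simplified Python sketch (ignoring the rule that the coinbase's wtxid is treated as all zeroes):
Code:
import hashlib

def dsha256(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(hashes):
    """Bitcoin-style merkle root over a list of 32-byte hashes
    (txids for the normal root, wtxids for the witness root)."""
    if not hashes:
        raise ValueError("a block always has at least the coinbase")
    level = list(hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])                  # duplicate the last hash when odd
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Toy usage with made-up leaves:
leaves = [dsha256(bytes([i])) for i in range(3)]
print(merkle_root(leaves).hex())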

Or did I misunderstand? Does segwit mean that it is no business of full nodes to verify all the transactions?
See above.

OK, I apologize again for misunderstanding the BIPs
that is why I am posting questions.
It is okay to ask questions, but when you spread around false information (like the title of this thread) then it is not okay. I admit, sometimes I do say false things, but I usually do say things with stuff like "I think" or "from what I understand" to indicate that what I say may not be the truth.

can you show me a specific rawtxbytes for a simple tx as it is now and what the required bytes are if it was a segwit? and for the segwit case it would need to be two sets of bytes, I think I understand that part well enough.
Kind of, I can't build one right now and I don't have a segnet node set up yet so I can't pull one from there. I will give you a similar example, pulled from the BIP. In this example there are two inputs, one from p2pk (currently used) and one from a p2wpkh (segwit but this format will probably not be used).

Take for example this transaction:
Code:
01000000000102fff7f7881a8099afa6940d42d1e7f6362bec38171ea3edf433541db4e4ad969f00000000494830450221008b9d1dc26ba6a9cb62127b02742fa9d754cd3bebf337f7a55d114c8e5cdd30be022040529b194ba3f9281a99f2b1c0a19c0489bc22ede944ccf4ecbab4cc618ef3ed01eeffffffef51e1b804cc89d182d279655c3aa89e815b1b309fe287d9b2b55d57b90ec68a0100000000ffffffff02202cb206000000001976a9148280b37df378db99f66f85c95a783a76ac7a6d5988ac9093510d000000001976a9143bde42dbee7e4dbe6a21b2d50ce2f0167faa815988ac000247304402203609e17b84f6a7d30c80bfa610b5b4542f32a8a0d5447a12fb1366d7f01cc44a0220573a954c4518331561406f90300e8f3358f51928d43c212a8caed02de67eebee0121025476c2e83188368da1ff3e292e7acafcdb3566bb0ad253f62fc70f07aeee635711000000

Here it is broken down:
Code:
nVersion:  01000000
    marker:    00
    flag:      01
    txin:      02 fff7f7881a8099afa6940d42d1e7f6362bec38171ea3edf433541db4e4ad969f 00000000 494830450221008b9d1dc26ba6a9cb62127b02742fa9d754cd3bebf337f7a55d114c8e5cdd30be022040529b194ba3f9281a99f2b1c0a19c0489bc22ede944ccf4ecbab4cc618ef3ed01 eeffffff
                  ef51e1b804cc89d182d279655c3aa89e815b1b309fe287d9b2b55d57b90ec68a 01000000 00 ffffffff
    txout:     02 202cb20600000000 1976a9148280b37df378db99f66f85c95a783a76ac7a6d5988ac
                  9093510d00000000 1976a9143bde42dbee7e4dbe6a21b2d50ce2f0167faa815988ac
    witness    00
               02 47304402203609e17b84f6a7d30c80bfa610b5b4542f32a8a0d5447a12fb1366d7f01cc44a0220573a954c4518331561406f90300e8f3358f51928d43c212a8caed02de67eebee01 21025476c2e83188368da1ff3e292e7acafcdb3566bb0ad253f62fc70f07aeee6357
    nLockTime: 11000000
An upgraded node would request and receive this transaction like this. It would contain all of the witness data.

But a non-upgraded node (and a node that doesn't want witness data) would only receive
Code:
0100000002fff7f7881a8099afa6940d42d1e7f6362bec38171ea3edf433541db4e4ad969f00000000494830450221008b9d1dc26ba6a9cb62127b02742fa9d754cd3bebf337f7a55d114c8e5cdd30be022040529b194ba3f9281a99f2b1c0a19c0489bc22ede944ccf4ecbab4cc618ef3ed01eeffffffef51e1b804cc89d182d279655c3aa89e815b1b309fe287d9b2b55d57b90ec68a0100000000ffffffff02202cb206000000001976a9148280b37df378db99f66f85c95a783a76ac7a6d5988ac9093510d000000001976a9143bde42dbee7e4dbe6a21b2d50ce2f0167faa815988ac11000000
which is considerably smaller than the witness serialization because it doesn't include the witness.
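To make the two serializations concrete, here is a rough Python sketch (mine, based on my reading of BIP 141/144, not reference code) that strips the marker, flag and witness section from the witness serialization and recovers the smaller legacy form, which is also what the txid is computed over:
Code:
import hashlib

def dsha256(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def read_varint(buf, pos):
    n = buf[pos]; pos += 1
    if n < 0xfd:
        return n, pos
    if n == 0xfd:
        return int.from_bytes(buf[pos:pos + 2], "little"), pos + 2
    if n == 0xfe:
        return int.from_bytes(buf[pos:pos + 4], "little"), pos + 4
    return int.from_bytes(buf[pos:pos + 8], "little"), pos + 8

def strip_witness(raw):
    """Return the legacy serialization (what a non-upgraded node receives) of a
    transaction given in witness serialization. Sketch only, no error handling."""
    pos = 4                                        # nVersion
    has_witness = raw[pos] == 0x00 and raw[pos + 1] == 0x01
    if has_witness:
        pos += 2                                   # skip marker and flag bytes
    body_start = pos                               # start of the txin count
    n_in, pos = read_varint(raw, pos)
    for _ in range(n_in):
        pos += 36                                  # prevout txid + index
        slen, pos = read_varint(raw, pos)
        pos += slen + 4                            # scriptSig + nSequence
    n_out, pos = read_varint(raw, pos)
    for _ in range(n_out):
        pos += 8                                   # value
        slen, pos = read_varint(raw, pos)
        pos += slen                                # scriptPubKey
    body_end = pos                                 # end of the outputs
    if has_witness:
        for _ in range(n_in):                      # skip each input's witness stack
            n_items, pos = read_varint(raw, pos)
            for _ in range(n_items):
                ilen, pos = read_varint(raw, pos)
                pos += ilen
    locktime = raw[pos:pos + 4]
    return raw[:4] + raw[body_start:body_end] + locktime

# The BIP 143 example transaction quoted above:
witness_form = bytes.fromhex("01000000000102fff7f7881a8099afa6940d42d1e7f6362bec38171ea3edf433541db4e4ad969f00000000494830450221008b9d1dc26ba6a9cb62127b02742fa9d754cd3bebf337f7a55d114c8e5cdd30be022040529b194ba3f9281a99f2b1c0a19c0489bc22ede944ccf4ecbab4cc618ef3ed01eeffffffef51e1b804cc89d182d279655c3aa89e815b1b309fe287d9b2b55d57b90ec68a0100000000ffffffff02202cb206000000001976a9148280b37df378db99f66f85c95a783a76ac7a6d5988ac9093510d000000001976a9143bde42dbee7e4dbe6a21b2d50ce2f0167faa815988ac000247304402203609e17b84f6a7d30c80bfa610b5b4542f32a8a0d5447a12fb1366d7f01cc44a0220573a954c4518331561406f90300e8f3358f51928d43c212a8caed02de67eebee0121025476c2e83188368da1ff3e292e7acafcdb3566bb0ad253f62fc70f07aeee635711000000")
legacy_form = strip_witness(witness_form)
print(len(witness_form), len(legacy_form))         # size with and without the witness
print("txid :", dsha256(legacy_form)[::-1].hex())  # txid covers the legacy form only
print("wtxid:", dsha256(witness_form)[::-1].hex()) # wtxid covers the whole thing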

If the combined overhead is as small as 3 bytes, per tx, then it is amazing. but I dont understand how it is done.

But still even if it is only 3 bytes more, it is more data required permanently, so it is reducing the total tx capacity per MB and it anti-scaling from that standpoint. I cannot speak to the scaling impact of optimizd signature handling, etc.
Those three bytes (and whatever other overhead there is in the witness data, and I am fairly sure there is some) are not part of the transaction serialization that counts toward the size of a block.

Also, it seems a 2MB hardfork is much, much simpler and provides the same or more tx capacity. Why isnt that done before segwit?
Because Segwit adds more useful stuff and a 2 Mb hard fork doesn't solve the O(n^2) hashing problem. The Classic 2 Mb hard fork had some weird hackish workaround for the O(n^2) hashing problem. In terms of lines of code, I believe that there was an analysis done on segwit and the Classic hard fork that found that the number of code lines added was roughly the same. This is because the hashing problem needed to be fixed with that hard fork.

I suppose you could also say that segwit is more "efficient". Segwit, with roughly the same amount of code as the classic hard fork, brings much more functionality to Bitcoin than a 2 Mb hard fork does.
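On the O(n^2) point above, a crude back-of-envelope model of why legacy signature hashing blows up with input count while the BIP 143 scheme does not (the byte figures are illustrative assumptions, not measurements):
Code:
def legacy_sighash_bytes(num_inputs, tx_size):
    # Legacy sighash: each input's signature hashes (roughly) the whole
    # transaction, so total bytes hashed grow ~ inputs * size -- quadratic,
    # since the transaction itself grows with the number of inputs.
    return num_inputs * tx_size

def segwit_sighash_bytes(num_inputs, tx_size, per_input=200):
    # BIP 143 sighash: prevouts, sequences and outputs are hashed once and
    # reused, so per-input work is roughly a fixed-size preimage.
    # per_input is an illustrative assumption, not a measured figure.
    shared = tx_size + 3 * 32          # one pass over the tx + the three cached hashes
    return shared + num_inputs * per_input

for n in (2, 100, 1000):
    size = 180 * n                     # assume ~180 bytes of tx data per input (rough)
    print(n, legacy_sighash_bytes(n, size), segwit_sighash_bytes(n, size))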


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: rizzlarolla on March 16, 2016, 08:20:37 PM
Wow. The deceptive misinformation in this thread is really astonishing.

snip

If you're going to argue that you don't want a system where hashpower consensus allows new script rules for users to use to voluntarily contract with themselves, you should have left Bitcoin in 2010 or 2011 (though it's unclear how any blockchain cryptocurrency could _prevent_ this from happening).  Your views, if not just based on simple misunderstandings, are totally disjoint with how Bitcoin works. I don't begrudge you the freedom to want weird or even harmful things-- and I would call denying users the ability to choose whatever contract terms they want out of principle rather than considerations like resource usage both weird and harmful--, but Bitcoin isn't the place for them,

That is so wrong.

"If you're going to argue that you don't want a system where hashpower consensus allows new script rules for users to use to voluntarily contract with themselves, you should have left Bitcoin in 2010 or 2011"

I don't see jl777 sgbett arguing that. You want hashpower, consensus or not, to blind existing nodes. Introduce trust or obscurity. A hard fork in disguise.


"-- and I would call denying users the ability to choose whatever contract terms they want out of principle rather than considerations like resource usage both weird and harmful--"

What has jl777 sgbett done to harm Bitcoin?
Segwit however could destroy it. (unproven obviously. just opinion)

Something better will come in due course.
Segwit needs more thought.

Segwit needs to be hard forked.


edited - gmax does appear to be responding to sgbett. apologies for wrong accreditation.  :-[



Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: jl777 on March 16, 2016, 08:24:00 PM
I apologize for my technical ineptitudes.

I am trying to understand how segwit saves blockchain space as that is what it is being marketed as and with a softfork in the upcoming weeks.
It saves space in that it does not require that the signatures be downloaded. Since clients that are syncing do not need to verify the signatures of old transactions, the witness data is then not needed. This means that they do not need to download the witnesses and can thus save bandwidth and space. This only applies several years in the future, after segwit has been used for a while.

So i look at the BIP, I find each segwit tx needs a commitment hash, plus more. this is 32 bytes per tx that to my ignorant simple minded thinking is in addition to what would be needed if it was a normal tx.
Not every transaction needs that hash, just like how not every transaction now has its txid stored in the blockchain. Just like it is done today, the merkle root of the wtxids is stored, and that is it.

Now I am being told that we dont need to count the space used by the witness data. This confuses me. I like to count all permanently required data as the permanent space cost.
The witnesses are not permanent and can be deleted. After a few years, they won't even need to be downloaded.

I apologize for asking questions like this, but I am just a simple C programmer trying to allocate space for the segwit and noticing it seems to take more space for every tx. I await to be properly learned about how the commitment hash is not actually needed to be stored anywhere and how that would still allow a full node to properly validate all the transactions.
Checking the transaction hash is not part of transaction validation. Rather, the signature is validated against the witness data, which checks that the signature is valid for that transaction. The block contains a merkle root, which is calculated by hashing all of the transaction hashes together. The same is done for the witness root hash, which is just the root over all of the wtxids.

Or did I misunderstand? Does segwit mean that it is no business of full nodes to verify all the transactions?
See above.

OK, I apologize again for misunderstanding the BIPs
that is why I am posting questions.
It is okay to ask questions, but when you spread around false information (like the title of this thread) then it is not okay. I admit, sometimes I do say false things, but I usually do say things with stuff like "I think" or "from what I understand" to indicate that what I say may not be the truth.

can you show me a specific rawtxbytes for a simple tx as it is now and what the required bytes are if it was a segwit? and for the segwit case it would need to be two sets of bytes, I think I understand that part well enough.
Kind of, I can't build one right now and I don't have a segnet node set up yet so I can't pull one from there. I will give you a similar example, pulled from the BIP. In this example there are two inputs, one from p2pk (currently used) and one from a p2wpkh (segwit but this format will probably not be used).

Take for example this transaction:
Code:
01000000000102fff7f7881a8099afa6940d42d1e7f6362bec38171ea3edf433541db4e4ad969f00000000494830450221008b9d1dc26ba6a9cb62127b02742fa9d754cd3bebf337f7a55d114c8e5cdd30be022040529b194ba3f9281a99f2b1c0a19c0489bc22ede944ccf4ecbab4cc618ef3ed01eeffffffef51e1b804cc89d182d279655c3aa89e815b1b309fe287d9b2b55d57b90ec68a0100000000ffffffff02202cb206000000001976a9148280b37df378db99f66f85c95a783a76ac7a6d5988ac9093510d000000001976a9143bde42dbee7e4dbe6a21b2d50ce2f0167faa815988ac000247304402203609e17b84f6a7d30c80bfa610b5b4542f32a8a0d5447a12fb1366d7f01cc44a0220573a954c4518331561406f90300e8f3358f51928d43c212a8caed02de67eebee0121025476c2e83188368da1ff3e292e7acafcdb3566bb0ad253f62fc70f07aeee635711000000

Here it is broken down:
Code:
nVersion:  01000000
    marker:    00
    flag:      01
    txin:      02 fff7f7881a8099afa6940d42d1e7f6362bec38171ea3edf433541db4e4ad969f 00000000 494830450221008b9d1dc26ba6a9cb62127b02742fa9d754cd3bebf337f7a55d114c8e5cdd30be022040529b194ba3f9281a99f2b1c0a19c0489bc22ede944ccf4ecbab4cc618ef3ed01 eeffffff
                  ef51e1b804cc89d182d279655c3aa89e815b1b309fe287d9b2b55d57b90ec68a 01000000 00 ffffffff
    txout:     02 202cb20600000000 1976a9148280b37df378db99f66f85c95a783a76ac7a6d5988ac
                  9093510d00000000 1976a9143bde42dbee7e4dbe6a21b2d50ce2f0167faa815988ac
    witness    00
               02 47304402203609e17b84f6a7d30c80bfa610b5b4542f32a8a0d5447a12fb1366d7f01cc44a0220573a954c4518331561406f90300e8f3358f51928d43c212a8caed02de67eebee01 21025476c2e83188368da1ff3e292e7acafcdb3566bb0ad253f62fc70f07aeee6357
    nLockTime: 11000000
An upgraded node would request and receive this transaction like this. It would contain all of the witness data.

But a non-upgraded node (and a node that doesn't want witness data) would only receive
Code:
0100000002fff7f7881a8099afa6940d42d1e7f6362bec38171ea3edf433541db4e4ad969f00000000494830450221008b9d1dc26ba6a9cb62127b02742fa9d754cd3bebf337f7a55d114c8e5cdd30be022040529b194ba3f9281a99f2b1c0a19c0489bc22ede944ccf4ecbab4cc618ef3ed01eeffffffef51e1b804cc89d182d279655c3aa89e815b1b309fe287d9b2b55d57b90ec68a0100000000ffffffff02202cb206000000001976a9148280b37df378db99f66f85c95a783a76ac7a6d5988ac9093510d000000001976a9143bde42dbee7e4dbe6a21b2d50ce2f0167faa815988ac11000000
which is considerably smaller than the witness serialization because it doesn't include the witness.

If the combined overhead is as small as 3 bytes, per tx, then it is amazing. but I dont understand how it is done.

But still even if it is only 3 bytes more, it is more data required permanently, so it is reducing the total tx capacity per MB and it anti-scaling from that standpoint. I cannot speak to the scaling impact of optimizd signature handling, etc.
Those three bytes (and whatever other overhead because I am fairly sure there is other overhead in the witness data) are not part of the transaction that goes into the count of the size of a block.

Also, it seems a 2MB hardfork is much, much simpler and provides the same or more tx capacity. Why isnt that done before segwit?
Because Segwit adds more useful stuff and 2 Mb hard fork doesn't solve the O(n^2) hashing problem. The Classic 2 Mb hard fork had some weird hackish workaround for the O(n^2) hashing problem. In terms of lines of code, I believe that there was an analysis done on segwit and the classic hard fork that found that the amount of code lines added was roughly the same. This is because the hashing problem needed to be fixed with that hard fork.

I suppose you could also say that segwit is more "efficient". Segwit, with roughly the same amount of code as the classic hard fork, brings much more functionality to Bitcoin than a 2 Mb hard fork does.
my reaction was based on the answers I was getting and clearly it is a complex issue. segwit is arguably more changes to bitcoin than all prior BIPs combined. I don't think anybody would say otherwise.

Now please ignore the space savings for nodes that are not full nodes. I am assuming that to bootstrap a node it will need to get the witness data from somewhere, right? so it is needed permanently and thus part of the permanent HDD requirement.

I still don't fully understand how the size of the truncated tx + witness data is as small as 2 bytes per tx + 1 byte per vin. But even if that is the case, my OP title is accurate, as N + 2*numtx + numvins is more than N

that is the math I see.

Also I made the mistake of making sure the transaction hash matches for a transaction. I had assumed that if the transaction hash doesn't match, it is invalid rawbytes. Are you saying that we don't need to verify that the transaction hashes match? As you know, verifying signatures is very time consuming compared to verifying a txid. So if verifying the txid is not available anymore, that would dramatically increase the CPU load for any validating node.

Before I go making new threads about that, let us wait for some clarity on this issue.

I think if the witness data is assumed to be there permanently, then we don't increase the CPU load 10x or more by having to validate sigs vs validating the txid, so it would be a moot point.

but it negates your point that the witness data can just go away

James


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: achow101 on March 16, 2016, 08:37:31 PM
my reaction was based on the answers I was getting and clearly it is a complex issue. segwit is arguably more changes to bitcoin than all prior BIP's combined. I dont think anybody would say otherwise.
I agree with that, after all, there are 6 separate BIPs for it of which 4 (or 5?) are being implemented in one go.

Now please ignore the space savings for nodes that are not full nodes. I am assuming that to bootstrap a node it will need to get the witness data from somewhere, right? so it is needed permanently and thus part of the permanent HDD requirement.
Again, no. Full nodes don't have to validate the signatures of transactions in very old blocks; skipping those checks is something that Bitcoin Core currently does and will continue to do. Since it doesn't check the signatures of transactions in historical blocks, those witnesses don't need to be downloaded, although I suppose they could be. There will of course be people who run nodes that store all of that data (I will probably be one of those people) in case someone wants it.

I still dont fully understand how the size of the truncated tx+ witness data is as small as 2 bytes per tx + 1 byte per vin. But even if that is the case, my OP title is accurate. as N+2*numtx+numvins is more than N

that is the math I see.
From the viewpoint of a new node some years in the future, that data isn't needed and most certainly won't be downloaded. But for now, yes, you will be storing that data, but it can be pruned away just like the majority of the blockchain can be now. Keep in mind that a pruned node is still a full node. It still validates and relays every transaction and block it receives. Full nodes do not have to store the entire blockchain, they just need to download it. That extra data is also not counted as part of what is officially defined as the blockchain.

Also I made the mistake of making sure the transaction hash matches for a transaction. I had assumed that if the transaction hash doesnt match, it is invalid rawbytes. Are you saying that we dont need to verify that the transaction hashes match? As you know verifying signatures is very time consuming compared to verifying txid. So if verifying txid is not available anymore, that would dramatically increase the CPU load for any validating node.
Anymore? It was never done in the first place. Verifying a transaction has always meant checking the signatures, because creating and verifying signatures involves the hash of the transaction.

Before I go making new threads about that, let us wait for some clarity on this issue.

I think if the witness data is assumed to be there permanently, then we dont increase the CPU load 10x or more to have to validate sigs vs validate txid, so it would be a moot point.
It won't increase the CPU load because that is what is currently being done and has always been done. In fact, signature validation in Bitcoin Core 0.12+ is significantly faster than in previous versions due to the use of libsecp256k1 which dramatically increased the performance of the validation.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: molecular on March 16, 2016, 08:53:21 PM
Hold up. I'd like to hear from Wuille (one of the creators of segwit) about the size difference between a standard 2-input, 2-output transaction and its equivalent using segwit, for a fully-validating node. No need to attack with sarcasm.

BTW, I am also curious if the O(n^2) sigops issue can be solved in a much more simple way.

That leads me to another question I've been having that hasn't been answered as far as I know. If segregating the signatures out of the tx leads to a stable txid (malleability fixed), then why can't we simply fix malleability independently by ignoring the signatures when hashing the txid?


Title: Re: Segwit details?
Post by: gmaxwell on March 16, 2016, 09:02:30 PM
Segwit transactions are considered by old nodes as transactions which spent an anyonecanspend output and thus are treated with a grain of salt. The best course of action is to of course wait for confirmations as we already should still be doing now.
The segwit transactions are non-standard to old nodes. This means that old nodes/wallets ignore them until they are confirmed-- they don't show them in the wallet, they don't relay them, they don't mine them, so even confusion about unconfirmed transactions is avoided.

Quote
Ah *sigh of relief*; here comes somebody who actually knows what they are talking about.

Could you also let me know if I presented any misinformation? I have been trying my best to not and to make jl777 understand why he is wrong but I may have accidentally (either due to misunderstanding the BIPs or just really bad typing) given him false information.
At least the above was the only minor correction I've seen so far.

Quote
Since neither Pieter nor anyone on IRC responded, I will ask this again. Will there be a full write up (preferably before segwit's release) of all of the changes that segwit entails so that wallet developers can get working on implementing segwit? AFAIK the segwit implementation contains omissions and changes from what was specified in the BIPs.
If that was you asking in #bitcoin-dev earlier, you need to wait around a bit for an answer on IRC-- I went to answer but the person who asked was gone.  BIPs are living documents and will be periodically updated as the functionality evolves. I thought they were currently up to date but haven't checked recently; make sure to look for pull reqs against them that haven't been merged yet.


my reaction was based on the answers I was getting and clearly it is a complex issue. segwit is arguably more changes to bitcoin than all prior BIP's combined. I dont think anybody would say otherwise.
I'll happily say otherwise.  It's a change of somewhat more complexity than P2SH; certainly less than all combined. The implementation, however, is smaller than the BIP101 implementation (comparing with tests removed). The Bitcoin community is getting better at documenting changes, so there is more documentation written about this than many prior ones.  Conceptually segwit's changes are very simple; based on signaling in the scriptPubkey, scriptsigs can be moved to the ends of transactions, where they are not included in the txid. An additional hash tree is added to the coinbase transaction to commit to the signatures. The new scriptsigs begin with a version byte that describes how the scripts are interpreted; two kinds are defined now, the rest are treated as "return true".
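A toy sketch of that version-byte behaviour -- a node that only knows version 0 programs treats every other version as automatically valid, which is what makes future script upgrades soft-forkable (this is a simplification, not the real validation code):
Code:
def verify_witness_program(version, program, witness_stack):
    """Toy model of BIP 141 script versioning -- not the real validation code."""
    if version == 0:
        # Version 0 has defined rules: a 20-byte program is P2WPKH, a 32-byte
        # program is P2WSH. A real node would check the signature / script here;
        # this sketch only checks the program length.
        return len(program) in (20, 32)
    # Any other version is undefined today and treated as "return true", so a
    # future soft fork can attach rules to it without breaking current nodes.
    return True

print(verify_witness_program(0, bytes(20), []))    # defined: actually validated
print(verify_witness_program(1, bytes(33), []))    # undefined: accepted for now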

Quote
Now please ignore the space savings for nodes that are not full nodes. I am assuming that to bootstrap a node it will need to get the witness data from somewhere, right? so it is needed permanently and thus part of the permanent HDD requirement.
You can't "please ignore" major parts of the system scalability and hope to pose a discussion worth reading; if one is willing to ignore all the facts that disagree with them, they can prove anything.  Nonetheless, no-- right now existing full nodes do not verify signatures in the far past, but currently have to download them. Under segwit they could skip downloading them.  If you're not going to check it, there is no reason to download it-- but the legacy transaction hashing structure forces you to do so anyways; segwit fixes that.

Quote
I still dont fully understand how the size of the truncated tx+ witness data is as small as 2 bytes per tx + 1 byte per vin. But even if that is the case, my OP title is accurate. as N+2*numtx+numvins is more than N
There is no such thing as "size"-- size is always a product of how you serialize it.  An idiotic implementation could store non-segwit transactions by prepending them with a megabyte of zeros-- would I argue that segwit saves a megabyte per transaction? No.  

It's likely that implementations will end up using an extra byte per scriptsig to code the script version, though they could do that more efficiently some other way... but who cares about a byte per input? It certainly doesn't deserve an ALL CAPS forum post title-- you can make some strained argument that you're pedantically correct; that doesn't make you any less responsible for deceiving people, quite the opposite because now it's intentional. And even that byte per input exists only for implementations that don't want to do extra work to compress it (and end up with ~1 bit per transaction).

Meanwhile, that version byte makes it easy to safely deploy upgrades that reduce transaction sizes by ~30%.  What a joke that you attack this. God forbid that 'inefficient' implementations might store a byte for functionality that makes the system much more flexible and will allow saving hundreds of bytes.

Quote
Also I made the mistake of making sure the transaction hash matches for a transaction. I had assumed that if the transaction hash doesnt match, it is invalid rawbytes. Are you saying that we dont need to verify that the transaction hashes match? As you know verifying signatures is very time consuming compared to verifying txid. So if verifying txid is not available anymore, that would dramatically increase the CPU load for any validating node.
Before I go making new threads about that, let us wait for some clarity on this issue.

I think if the witness data is assumed to be there permanently, then we dont increase the CPU load 10x or more to have to validate sigs vs validate txid, so it would be a moot point.
You are still deeply confused. With segwit the witnesses-- the part containing the signature-- are not part of the transaction ID. They _must_ not be for malleability to be strongly fixed, and they really shouldn't be to optimal scalability.  This is no way increases the amount of signature validation anyone does.

(Nor does it decrease the amount of signature validation anyone does, though while you've been ranting here-- the people you're continually insulting went and shipped code that makes signature validation more than 5x faster.)

That leads me to another question I've been having that hasn't been answered as far as I know. If segregating the signatures out of the tx leads to a stable txid (malleability fixed), then why can't we simply fix malleability independently by ignoring the signatures when hashing the txid?
This is what segwit effectively does, among other improvements. The first version of segwit that was created for elements alpha does _EXACTLY_ that, but there was no way to deploy that design in bitcoin because it would deeply break every piece of Bitcoin software ever written-- all nodes, all lite wallets, all thin clients, all hardware wallets, all web front ends, all block explorers, all pre-signed nlocked timed transactions, even many pieces of mining hardware; we learned about how impactful doing that was with elements alpha when it was very difficult getting existing software working with it... and for a while we didn't see any realistic way to deploy it short of rebooting the whole blockchain in a great big flag day (which would inevitably end up unintentionally confiscating some peoples' coins)-- not just a hard fork but an effective _rewrite_.  The clever part of segwit was reorganizing things a bit-- the signature field is still part of the txid but we don't use it for signatures anymore, we use a separate set of fields stapled onto the end to achieve exactly the same effect; but without blowing everything up.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: achow101 on March 16, 2016, 09:06:04 PM
Hold up. I'd like to hear from Wuille (one of the creators of segwit) about the size difference between a standard 2-input, 2-output transaction and its equivalent using segwit, for a fully-validating node. No need to attack with sarcasm.

BTW, I am also curious if the O(n^2) sigops issue can be solved in a much more simple way.

That leads me to another question I've been having that hasn't been answered as far as I know. If segregating the signatures out of the tx leads to a stable txid (malleability fixed), then why can't we simply fix malleability independently by ignoring the signatures when hashing the txid?

I am pretty sure that the idea of segwit grew from this idea to ignore signatures in the txid calculation. This was proposed in BIP 140: https://github.com/bitcoin/bips/blob/master/bip-0140.mediawiki. It basically proposed a normalized txid which was the txid but ignoring the signatures. The other stuff in that BIP was because the author wanted it deployed as a soft fork rather than a hard fork. Otherwise a hard fork would be needed.
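The core of that idea in a few lines of Python -- serialize with the scriptSigs blanked before hashing, so third parties can't change the id by mutating a signature (a sketch of the concept only, not the exact BIP 140 rules):
Code:
import hashlib

def dsha256(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def varint(n):
    assert n < 0xfd                    # enough for this toy example
    return bytes([n])

def serialize(version, inputs, outputs, locktime, blank_scriptsigs=False):
    """inputs: list of (prevout36, scriptsig, sequence4); outputs: list of (value8, spk)."""
    out = version + varint(len(inputs))
    for prevout, scriptsig, sequence in inputs:
        sig = b"" if blank_scriptsigs else scriptsig
        out += prevout + varint(len(sig)) + sig + sequence
    out += varint(len(outputs))
    for value, spk in outputs:
        out += value + varint(len(spk)) + spk
    return out + locktime

def txid(*tx):
    return dsha256(serialize(*tx))[::-1]                        # changes if a sig is re-encoded

def normalized_txid(*tx):
    return dsha256(serialize(*tx, blank_scriptsigs=True))[::-1] # stable under sig malleation

ins = [(bytes(36), b"\x01\x02", b"\xff\xff\xff\xff")]           # dummy input
outs = [((50000).to_bytes(8, "little"), b"\x51")]               # dummy output
print(txid(b"\x01\x00\x00\x00", ins, outs, bytes(4)).hex())
print(normalized_txid(b"\x01\x00\x00\x00", ins, outs, bytes(4)).hex())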


Title: Re: Segwit details?
Post by: molecular on March 16, 2016, 09:08:34 PM
The above with the OP_RETURN is how the witness root hash (the root of a hash tree whose leaves are the wtxids) is committed. It is added to the coinbase transaction of the block. The point of doing this is to commit the witnesses to the blockchain without adding all of the witnesses to the size calculation of the blockchain, except for these 38 bytes.

So this is kind-of like a merge-mined witness chain?


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: jl777 on March 16, 2016, 09:11:41 PM
my reaction was based on the answers I was getting and clearly it is a complex issue. segwit is arguably more changes to bitcoin than all prior BIP's combined. I dont think anybody would say otherwise.
I agree with that, after all, there are 6 separate BIPs for it of which 4 (or 5?) are being implemented in one go.

Now please ignore the space savings for nodes that are not full nodes. I am assuming that to bootstrap a node it will need to get the witness data from somewhere, right? so it is needed permanently and thus part of the permanent HDD requirement.
Again, no. Full nodes don't have to validate the signatures of transactions in very old blocks; skipping those checks is something that Bitcoin Core currently does and will continue to do. Since it doesn't check the signatures of transactions in historical blocks, those witnesses don't need to be downloaded, although I suppose they could be. There will of course be people who run nodes that store all of that data (I will probably be one of those people) in case someone wants it.

I still dont fully understand how the size of the truncated tx+ witness data is as small as 2 bytes per tx + 1 byte per vin. But even if that is the case, my OP title is accurate. as N+2*numtx+numvins is more than N

that is the math I see.
From the view point of a new node some years in the future, that data isn't needed and it most certainly won't be downloaded or needed. But for now, yes, you will be storing that data but it can be pruned away like the majority of the blockchain can be now. Keep in mind that a pruned node is still a full node. It still validates and relays every transaction and block it receives. Full Nodes do not have to store the entire blockchain, they just need to download it. That extra data is also not counted as what is officially defined as the blockchain.

Also I made the mistake of making sure the transaction hash matches for a transaction. I had assumed that if the transaction hash doesnt match, it is invalid rawbytes. Are you saying that we dont need to verify that the transaction hashes match? As you know verifying signatures is very time consuming compared to verifying txid. So if verifying txid is not available anymore, that would dramatically increase the CPU load for any validating node.
Anymore? It was never done in the first place. Verifying a transaction has always meant checking the signatures, because creating and verifying signatures involves the hash of the transaction.

Before I go making new threads about that, let us wait for some clarity on this issue.

I think if the witness data is assumed to be there permanently, then we dont increase the CPU load 10x or more to have to validate sigs vs validate txid, so it would be a moot point.
It won't increase the CPU load because that is what is currently being done and has always been done. In fact, signature validation in Bitcoin Core 0.12+ is significantly faster than in previous versions due to the use of libsecp256k1 which dramatically increased the performance of the validation.
I was told by gmax himself that a node that doesn't validate all signatures shouldn't call itself a fully validating node.

Also, I am making an optimized bitcoin core and one of these optimizations is rejecting a tx whose contents don't match the txid. The thinking being that if the hashes don't match, there is no point in wasting time checking the signature

not sure what libsecp256k1's speed has to do with the fact that it is still much slower to calculate than SHA256.

So my point again is that all witness data needs to be stored permanently for a full node that RELAYS historical blocks to a bootstrapping node. If we are to lose this, then we might as well make bitcoin PoS, as that is the one weakness of PoS vs PoW. So if you are saying that we need to view bitcoin as fully SPV all the time, with PoS-level security for bootstrapping nodes, ok, with those assumptions lots and lots of space is saved.

However, with such drastic assumptions I can (and have) already saved lots more space without adding a giant amount of new protocol and processing.

So this controversy has at least clarified that segwit INCREASES the size of the permanently needed data for fully validating and relaying node. Of course for SPV nodes things are much improved, but my discussion is not about SPV nodes.

So the powers that be can call me whatever names they want. I still claim that:

N + 2*numtx + numvins > N

And as such, segwit as a way to save permanent blockchain space is an invalid claim. Now, the cost of 2*numtx + numvins is not that big, so maybe it is worth the cost for all the benefits we get.

However, on the benefits claims, one of them is that the utxo dataset becomes a lot more manageable. This is irrelevant, as that is a local inefficiency that can be optimized without any external effects. I have it down to 4 bytes of RAM per utxo, but I could make it smaller if needed

It just seems a lot of unsupported (or plain wrong) claims are made to justify the segwit softfork. And the most massive change by far is being slipped in as a minor softfork update?



Title: Re: Segwit details? segwit wastes precious blockchain space permanently
Post by: rizzlarolla on March 16, 2016, 09:17:24 PM
... and for a while we didn't see any realistic way to deploy it short of rebooting the whole blockchain in a great big flag day (which would inevitably end up unintentionally confiscating some peoples' coins)-- not just a hard fork but an effective _rewrite_.  The clever part of segwit was reorganizing things a bit-- the signature field is still part of the txid but we don't use it for signatures anymore, we use a separate set of fields stapled onto the end to achieve exactly the same effect; but without blowing everything up...

Yet.
Coming soon via core soft fork.


Title: Re: Segwit details?
Post by: jl777 on March 16, 2016, 09:22:46 PM
There is no such thing as "size"-- size is always a product of how you serialize it.  An idiotic implementation could store non-segwit transactions by prepending them with a megabyte of zeros-- would I argue that segwit saves a megabyte per transaction? No.  

It's likely that implementations will end up using an extra byte per scriptsig to code the script version, though they could do that more efficiently some other way... but who cares about a byte per input? It certainly doesn't deserve an ALL CAPS forum post title-- you can make some strained argument that you're pedantically correct; that doesn't make you any less responsible for deceiving people, quite the opposite because now it's intentional. And even that byte per input exists only for implementations that don't want to do extra work to compress it (and end up with ~1 bit per transaction).
Since you feel I am deceiving people by using uppercase, I changed it to lower case. The first responses I got did not make it clear the overhead was 2 bytes per tx and 1 byte per vin.

I was under the misunderstanding that segwit saved blockchain space, you know cuz I am idiotic and believed stuff on the internet.

I am glad that now we all know that a 2MB hardfork would be more space efficient than segwit as far as permanent blockchain space.

What I still don't understand is how things will work when a segwit tx is sent to a non-segwit node and that is spent to another non-segwit node. How will the existing wallets deal with that? What happens if an attacker created segwit rawtransactions and sent them to non-segwit nodes? Are there no attack vectors? What about in zeroconf environments? How does a full relaying node mine a block with segwit inputs? Or do existing full nodes cease to be able to mine blocks after the segwit softfork?

And even a simpleton like me can understand how to increase blocksizes with a hardfork, so why not do that before adding massive new changes like segwit? especially since it is more space efficient and not prone to misunderstandings


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: sgbett on March 16, 2016, 09:32:05 PM

Script versioning is essentially about changing this consensus mechanism so that any change can be made without any consensus. Giving this control to anyone, even satoshi himself, entirely undermines the whole idea of bitcoin. *Decentralised* something something.
The content of your scriptpubkey, beyond the resource costs to the network, is a private contract between the sender of the funds and the receiver of the funds. It is only the business of these parties, no one else. Ideally, it would not be subject to "consensus", in any way/shape/form-- it is a _private matter_. It is not any of your business how I spend my Bitcoins; but unfortunately, script enhancing softforks do require consensus of at least the network hashpower.

Bitcoin Script was specifically designed because how the users contract with it isn't the network's business-- though it has limitations. And, fundamentally, even with those limitations it is already, at least theoretically, impossible to prevent users from contracting however they want. For example, Bitcoin has no Sudoku implementation in Script, and yet I can pay someone conditionally on them solving one (https://bitcoincore.org/en/2016/02/26/zero-knowledge-contingent-payments-announcement/) (or any other arbitrary program).

Bitcoin originally had an OP_VER to enable versioned script upgrades. Unfortunately, the design of this opcode was deeply flawed-- it allowed any user of the network, at their unannounced whim, to hardfork the network between different released versions of Bitcoin.  Bitcoin's creator removed it and in its place put in facilities for softforks. Softforks have been used many times to compatibly extend the system-- first by Bitcoin's creator, and later by the community. The segwit script versioning brings back OP_VER but with a design that isn't broken-- it makes it faster and safer to design and deploy smart contracting/script improvements (for example, a recently proposed one will reduce transaction sizes by ~30% (https://bitcointalk.org/index.php?topic=1377298.0) with effectively no costs once deployed); but doesn't change the level of network consensus required to deploy softforks; only perhaps the ease of achieving the required consensus because the resulting improvements are safer.

This is a really good explanation, thanks for taking the time to write it up. My understanding of Bitcoin doesn't come direct from the code (yet!) I have to rely on second hand information. The information you just provided has really deepened my understanding of the purpose of the scripting system over and above "it exists, and it makes the transactions work herp" which probably helps address your final paragraph...

If you're going to argue that you don't want a system where hashpower consensus allows new script rules for users to use to voluntarily contract with themselves, you should have left Bitcoin in 2010 or 2011 (though it's unclear how any blockchain cryptocurrency could _prevent_ this from happening).  Your views, if not just based on simple misunderstandings, are totally disjoint with how Bitcoin works. I don't begrudge you the freedom to want weird or even harmful things-- and I would call denying users the ability to choose whatever contract terms they want out of principle rather than considerations like resource usage both weird and harmful--, but Bitcoin isn't the place for them, and the restrictions you're asking for appear to be deeply disjoint with Bitcoin's day-one and every-day-since design, which has a huge amount of complexity in the original design for user (not consensus) determined smart contracting and where softforks (hashpower consensus) have been frequently used to extend the system.

As we have established, my understanding was, let's say, limited ;), so I don't think it's fair to say I am arguing against what it is for. I was arguing against what I thought it meant. Quite the opposite of wanting weird or harmful things, I was very much arguing that we shouldn't be allowing a harmful thing! If, as may be the case, that harmful thing is not an issue, then I have nothing to worry about!

I'm trying not to get (too) sucked into the conspiracy theories on either side, I'm only human though so sometimes I do end up with five when adding together two and two.

A question that still niggles me is segwit as a soft fork. I know that just dredges up the same old discussion about pros and cons of soft vs hard but for a simpleton such as me it seems that if the benefits of segwit are so clear, then compromising on the elegance of implementation in order to make it a soft fork seems a strange decision.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: iCEBREAKER on March 16, 2016, 09:48:08 PM
https://i.imgur.com/h0OTOUf.jpg
I really don't understand why we need to force our beloved wallet devs through this complicated mess.   :'(
New address format? How to explain to users? All infrastructure needs to be upgraded... What a gargantuan task...  :'(
Why do we need segwit again?  :'(

Have you been in a cave for the last 6 months?  Did you miss https://bitcoincore.org/en/2016/01/26/segwit-benefits/ ?

Segwit has been explained in many ways, from technical BIPS to colorful info-graphics.

Most wallet, etc. providers had little to no trouble adding segwit, because they like the idea: https://bitcoincore.org/en/segwit_adoption/

jl777 is only whining about his difficulties because he doesn't like anything Core supports and because he's a terrible dev who never finishes a single project he starts (eg SuperNET).

I find the shadowy linkages between alt-coin scammer jl777 and Classic fascinating.  The bitco.in alliance between him, the DashHoles, and the Frap.doc crew make for an interesting demographic.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: achow101 on March 16, 2016, 09:54:00 PM
I was told by gmax himself that a node that doesn't validate all signatures shouldn't call itself a fully validating node.
As long as it fully validates all of the NEW blocks and transactions that it receives. HISTORICAL blocks and the transactions within them are not validated because they are HISTORICAL and are tens of thousands of blocks deep.

Also, I am making an optimized bitcoin core and one of these optimizations is rejecting a tx whose contents doesnt match the txid. The thinking being that if the hashes dont match, there is no point in wasting time calculating the signature

not sure what libsecp256k1's speed has anything to do with the fact that it is still much slower to calculate than SHA256.
And how are you checking the txids if they are not provided? A tx message can be sent unsolicited with a new transaction and it does not contain the txid. In fact, there is no network message that I could find that sends a transaction together with its txid. Of course, I think it is safe to assume that if a node requested a specific transaction, it would check the hash of the data it received so that it knows whether that data is correct. But for unsolicited transactions, the only way to verify them is to check the signature.
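In other words, the receiving node derives the txid from the bytes it was handed -- there is nothing separate to "match" against. Roughly, in Python:
Code:
import hashlib

def txid_of(raw_tx_bytes):
    # The txid is just the double-SHA256 of the (non-witness) serialization,
    # displayed byte-reversed. A node recomputes it from whatever bytes arrive,
    # so there is no separately transmitted hash to "match" against.
    h = hashlib.sha256(hashlib.sha256(raw_tx_bytes).digest()).digest()
    return h[::-1].hex()

# usage: txid_of(bytes.fromhex(raw_hex_received_from_a_peer))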

So my point again, is that all witness data needs to be stored permanently for a full node that RELAYS historical blocks to a bootstrapping node. If we are to lose this, then we might as well make bitcoin PoS as that is the one weakness for PoS vs PoW. So if you are saying that we need to view bitcoin as fully SPV all the time with PoS level security for bootstrapping nodes, ok, with those assumptions lots and lots of space is saved.
No, when bootstrapping historical blocks the witness data is not required because it doesn't need to validate historical blocks. See above.

However, with such drastic assumptions I can (and have) already saved lots more space without adding a giant amount of new protocol and processing.

So this controversy has at least clarified that segwit INCREASES the size of the permanently needed data for fully validating and relaying node. Of course for SPV nodes things are much improved, but my discussion is not about SPV nodes.

So the powers that be can call me whatever names they want. I still claim that:

N + 2*numtx + numvins > N

And as such segwit as way to save permanent blockchain space is an invalid claim.Now the cost of 2*numtx+numvins is not that big, so maybe it is worth the cost for all the benefits we get.

However on the benefits claims, one of them is the utxo dataset is becoming a lot more manageable. this is irrelevant as that is a local inefficiency that can be optimized without any external effects. I have it down to 4 bytes of RAM per utxo, but I could make it smaller if needed

It just seems a lot of unsupported (or plain wrong) claims are made to justify the segwit softfork. And the most massive change by far is being slipped in as a minor softfork update?
If you are going to run your node from now until the end of time continuously and save all of the data relevant to the blocks and transactions that it receives, and call all of that data "permanent blockchain data", then yes, I think it does require more storage than a simple 2 Mb fork.

Since when has anyone ever claimed that segwit is "a way to save permanent blockchain space"?

What I still dont understand is how things will work when a segwit tx is sent to a non-segwit node and that is spent to another non-segwit node. How will the existing wallets deal with that?
Since you keep saying stuff about sending transactions between nodes, I don't think you understand how Bitcoin transactions work. It isn't sending between things but creating outputs from inputs after proving that the transaction creator can spend from those inputs. The inputs of a transaction don't affect the outputs of a transaction except for the amounts.

A transaction that spends a segwit input can still create a p2pkh and p2pk output which current nodes and wallets understand. p2pkh and p2pk are two output types that wallets currently understand. Those p2pkh and p2pk outputs can be spent from just like every other p2pkh and p2pk output is now. That will not change. The inputs and the scriptsigs of spending from those outputs will be the exact same as they are today. Segwit doesn't change that.

Rather, segwit spends to a special script called a witness program. This script is wrapped up as a p2sh address, another output type which current wallets know about and can spend to.

Segwit wallets would instead always create p2sh addresses because that is the only way that segwit can implement witness programs to be backwards compatible. Those p2sh addresses are distributed normally but can only be spent from with a witness program.

What happens if an attacker created segwit rawtransactions and sent them to non-segwit nodes? there are no attack vectors?
Then the attacker is just sending the owner of an address a bunch of Bitcoin. If it is a bunch of spam outputs, then it can be annoying, but that is something that people can already do today.

what about in zeroconf environments? how does a full relaying node mine a block with segwit inputs? or do existing full nodes cease to be able to mine blocks after segwit softfork?
Well firstly, full nodes don't mine blocks.

The data that composes the block is the data that currently makes up a block. The header is the same. The coinbase transaction just has the OP_RETURN output added to commit the witness root to the blockchain. The transactions are the transactions in the current format. If a block is requested by another node that wants the witness data, then the block is sent with the transactions serialized in the witness serialization format.

And even a simpleton like me can understand how to increase blocksizes with a hardfork, so why not do that before adding massive new changes like segwit? especially since it is more space efficient and not prone to misunderstandings
And in the future, what is to say that simpletons will be able to understand segwit? In the future, someone would still be saying that segwit is too complicated and that we should not use it. In the future it will still be large changes and it will still be prone to misunderstandings. Nothing will change in the future except instead of increasing the block size limit from 1 Mb to 2 Mb, they will be clamoring to increase the block size limit from 2 Mb to 4 Mb. The situation would literally be the same.



If that was you asking in #bitcoin-dev earlier, you need to wait around a bit for an answer on IRC-- I went to answer but the person who asked was gone.  BIPs are living documents and will be periodically updated as the functionality evolves. I thought they were currently up to date but haven't checked recently; make sure to look for pull reqs against them that haven't been merged yet.
Yeah, I asked on #bitcoin-core-dev as achow101 (I go by achow101 pretty much everywhere else except here, although I am also achow101 here). I logged off of IRC because I went to sleep, probably should have asked it earlier.

I will look at the BIP pulls and see if there is anything there.



A question that still niggles me is segwit as a soft fork. I know that just dredges up the same old discussion about pros and cons of soft vs hard but for a simpleton such as me it seems that if the benefits of segwit are so clear, then compromising on the elegance of implementation in order to make it a soft fork seems a strange decision.
It was originally proposed as a hard fork, but someone (luke-jr I think) pointed out that it could be done as a soft fork. Soft forks are preferred because they are backwards compatible. In this case, the backwards compatibility is that if you run non-upgraded software, you can continue as you were and have no ill effect. You just won't be able to take advantage of the new functionality provided by segwit.

Alternatively, if this were done as a hard fork, then everyone would be required to upgrade in order to deploy segwit and then that would essentially force everyone to use segwit.


Title: Re: Segwit details? segwit wastes precious blockchain space permanently
Post by: BlindMayorBitcorn on March 16, 2016, 10:11:40 PM
I asked some of these questions 3 months ago (https://www.reddit.com/r/bitcoinxt/comments/3w34o0/would_segregated_witnesses_really_help_anyone/).  Never got a decent answer.

Blockstream wants soft-forked SegWit to fix the malleability problems (that would be needed for the LN, if they ever get it to work), and to force ordinary p2p bitcoin users to subsidize the costs of complicated multisig transactions (ditto).  But these reasons do not seem to explain the urgency and energy that they are putting into the SegWit soft fork.  Maybe they have other undeclared reasons?  Perhaps they intend to stuff more data into the extension records, which they would not have to justify or explain since, being in the extension part, "ordinary users can ignore it anyway"?

As for SegWit being a soft fork, that is technically true; but a soft fork can make some quite radical changes, like imposing a negative interest (demurrage) tax, or raising the 21 million limit (https://www.reddit.com/r/bitcoin_uncensored/comments/43w24e/raising_the_21_million_btc_limit_with_a_soft_fork/).  One could also raise the block size limit (https://www.reddit.com/r/btc/comments/43w4rx/how_core_can_increase_the_21_million_btc_issuance/czlsk2q) that way.  These tricks would all let old clients work for a while, but eventually everybody will be forced to upgrade to use coins sent by the new version.

You've come to the right place for answers, professor. Openness is our middle name!

Now that that's all settled: What's Stolfi on about here? The 75% discount?


Title: Re: Segwit details? segwit wastes precious blockchain space permanently
Post by: gmaxwell on March 16, 2016, 10:22:57 PM
I was told by gmax himself that a node that doesn't validate all signatures shouldn't call itself a fully validating node.
A node not verifying signatures in blocks during the initial block download with years of POW on them is not at all equivalent to not verifying signatures _at all_.

I agree it is preferable to verify more-- but we live in the real world, not black and white land; and offering multiple trade-offs is essential to decentralized scalability.   If there are only two choices: run a thin client and verify _nothing_, or run a maximally costly node and verify EVERYTHING, then large amounts of decentralization will be lost because everyone who cannot justify or afford the full cost will have no option but to not participate in running a full node.  This makes it essential to support half steps-- it's better to allow people to choose to save resources and not verify months-old data-- which is very likely correct unless the system has failed-- since the alternative is them verifying nothing at all.

Quote
Also, I am making an optimized bitcoin core and one of these optimizations is rejecting a tx whose contents doesnt match the txid. The thinking being that if the hashes dont match, there is no point in wasting time calculating the signature                                        
Every piece of Bitcoin software does this.  It is a little obnoxious that you spend so much time talking about these optimizations you're "adding" which are basic behaviors that _every_ piece of Bitcoin software ever written has always done, as if you're the only person to have thought of them or how they distinguish this hypothetical node software you claim to be writing.                                    
                                                                                                                                            
Quote
However, with such drastic assumptions I can (and have) already saved lots more space without adding a giant amount of new protocol and processing.
Your claims of saved space (10GB) earlier on the list were already five times larger than what Bitcoin Core already does... another case of failing to understand the state of the art while thinking that some optimization you just came up with is vastly better while it's actually inferior.
                                                                                                                                            
Segwit is not about saving space for plain full nodes, the space is already saved in Core (if the user chooses to save it). As you note, local space savings can be done purely locally.  Segwit increases flexibility; fixes design flaws; saves space for nodes acting as SPV servers; and saves _bandwidth_; and none of these can be done as purely local changes.

Quote
I still claim that:
N + 2*numtx + numvins > N
As I pointed out, that is purely a product of whatever serialization an implementation chooses to store the data.

Quote
However on the benefits claims, one of them is the utxo dataset is becoming a lot more manageable. this is irrelevant as that is a local inefficiency that can be optimized without any external effects. I have it down to 4 bytes of RAM per utxo, but I could make it smaller if needed
Taking a hint from your earlier pedantry... It sounds like you have a long way to go... Bitcoin Core uses 0 bytes of RAM per UTXO. By comparison, the unreleased implementation you are describing is embarrassingly inefficient-- Bitcoin core is infinity fold better. :)

What I still dont understand is how things will work when a segwit tx is sent to a non-segwit node and that is spent to another non-segwit node. How will the existing wallets deal with that? What happens if an attacker created segwit rawtransactions and sent them to non-segwit nodes? there are no attack vectors? what about in zeroconf environments? how does a full relaying node mine a block with segwit inputs? or do existing full nodes cease to be able to mine blocks after segwit softfork?
jl777, I already responded to pretty much this question directly just above. It seems like you are failing to put in any effort to read these things, disrespecting me and everyone else in this thread; it makes it seem like responding to you further is a waste of time. :(

The segwit transactions are non-standard to old nodes. This means that old nodes/wallets ignore them until they are confirmed-- they don't show them in the wallet, they don't relay them, they don't mine them, so even confusion about unconfirmed transactions is avoided.
If you don't understand the concept of transaction standardness, you can learn about it from a few minutes of reading the Bitcoin developer guide: https://bitcoin.org/en/developer-guide#non-standard-transactions and by searching around a bit.

This is a really good explanation, thanks for taking the time to write it up. My understanding of Bitcoin doesn't come direct from the code (yet!) I have to rely on second hand information. The information you just provided has really deepened my understanding of the purpose of the scripting system over and above "it exists, and it makes the transactions work herp" which probably helps address your final paragraph...
[...]

Indeed it does. I am sincerely sorry for being a bit abrasive there: I've suffered too much exposure to people who aren't willing to reconsider positions-- and I was reading a stronger argument into your post than you intended--, and this isn't your fault.

Quote
I'm trying not to get (too) sucked into the conspiracy theories on either side, I'm only human though so sometimes I do end up with five when adding together two and two.

A question that still niggles me is segwit as a soft fork. I know that just dredges up the same old discussion about pros and cons of soft vs hard but for a simpleton such as me it seems that if the benefits of segwit are so clear, then compromising on the elegance of implementation in order to make it a soft fork seems a strange decision.
It would be a perfectly reasonable question, if it were the case that there was indeed a compromise here.

If segwit were to be a hardfork, what would it be?

Would it change how transaction IDs were computed, like elements alpha did? Doing so is conceptually simpler and might save 20 lines of code in the implementation... But it's undeployable, even as a hardfork-- it would break all software: web wallets, thin wallets, lite wallets, hardware wallets, block explorers-- it would break them completely, along with all presigned nlocktime transactions and all transactions in flight. It would add more than 20 lines of code in having to handle the flag day.  So while that design might be 'cleaner' conceptually, the deployment would be so unclean as to be basically inconceivable. Functionally it would be no better; in flexibility it would be no better.  No one has proposed doing this.

Would it instead do the same as it does now, but put the commitment someplace else in the block rather than in a coinbase transaction OP_RETURN-- at the top of the hashtree?  This is what Gavin Andresen responded to segwit proposing.  This would be deployable as a lite-client compatible semi-hardfork, like the blocksize increase. Would this be more elegant?

In that case... All that changes is the position of the commitment from one location to another: writing the 32+small extra bytes of data in one place in the block rather than another place. It would not change the implementation except some constants about where it reads from. It would not change storage, it would not change performance. It wouldn't be the most logical and natural way to deploy it (the above undeployable method would be).  Because it would be a hard fork, all nodes would have to upgrade for it at the same time.  So if you're currently on 0.10.2 because you have business related patches against that version which are costly to rebase-- or just because you are prohibited from upgrading without a security audit-- you'll be kicked off the network under the hard fork model when you don't upgrade by the flag day. Under the proposed deployment mechanism you can simply ignore it with no cost to you (beyond the general costs of being on an older version) and upgrade whenever it makes sense to do so-- maybe against 0.14 when there finally are some new features that you feel justify your upgrade, rather than paying the upgrade costs multiple times.  One place vs the other doesn't make a meaningful difference in the functionality, though I agree the top 'feels' a little more orderly. But again, it doesn't change the functionality, efficiency or performance, and it wouldn't make the implementation simpler at all. And there is other data that would make more sense to move to the top (e.g. stxo/utxo commitments) which hasn't been designed yet, so if segwit were moved to the top now, that commitment at the top would later need to be redesigned for these other things in any case.  It's not clear, even greenfield, that this would be more elegant than the proposal, and the deployment-- while not impossible for this one-- would be much less elegant and more costly.

So in summary:  the elegance of a feature must be considered holistically. We must think about the feature itself, how it interacts with the future, and-- critically-- the effect of deploying it.  Considered together, the segwit deployment proposed is clearly the most elegant approach.  If deployment were ignored, the elements alpha approach would be slightly preferable, but only slightly-- it makes no practical difference-- but it is so unrealistic to deploy that in Bitcoin today that no one has proposed it. One person did propose changing the commitment location, but the different location would only be possible in a hardfork, and it makes no functional difference for the feature while adding significant amounts of deployment cost and risk.


Title: Re: Segwit details? segwit wastes precious blockchain space permanently
Post by: jl777 on March 16, 2016, 10:35:28 PM
Segwit is not about saving space for plain full nodes, the space is already saved in Core (if the user chooses to save it). As you note, local space savings can be done purely locally.  Segwit increases flexibility; fixes design flaws; saves space for nodes acting as SPV servers; and saves _bandwidth_; and none of these can be done as purely local changes.
Again I apologize for not being smart enough to instantly understand all the changes segwit does and I was misled by errant internet posts that segwit saved HDD space for the blockchain.

thank you for clarifying that it wont save space for full nodes.

Also, my understanding now is that iguana can just treat the segwit tx as standard p2sh and with the caveat that until it fully processes the witness data, it would just need to trust that any such tx that are mined are valid.

I would debate with you on many claims you make that I dont agree with, but I see no point to debate with words. I will make an iguana release that will demonstrate my claims. Fair enough?

James


Title: Re: Segwit details? segwit wastes precious blockchain space permanently
Post by: achow101 on March 16, 2016, 10:41:39 PM
Also, my understanding now is that iguana can just treat the segwit tx as standard p2sh and with the caveat that until it fully processes the witness data, it would just need to trust that any such tx that are mined are valid.
Yes. If you are following the standardness and validation rules that Bitcoin Core uses, then it should be a non-issue.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: 2112 on March 17, 2016, 12:14:33 AM
My point, perhaps poorly expressed, was that if you think these problems are 'not hard', you must have solutions in mind, no?  I'd be interested in hearing your ideas.  I am genuinely interested, not being sarcastic here.
It wasn't only me that had those solutions in mind. In fact they are already included in the "segregated witness" proposal, but without the "segregation" part. The "segregation" just splits the transaction in two parts. In fact one could come up with a deficient "segregated witness" proposal that wouldn't fix the discussed problems. They are orthogonal concepts.
 

Which solutions are you referring to here?

The same ones we discussed less than an hour ago; 9:20am vs. 10:10am.
The advantage of segwit is that it elegantly fixes a couple of other hard problems (malleability, O(n^2) sigops issue)
What about fixing those "other problems" (I don't want to say "hard", because IMO they aren't "hard" by themselves) without the segregation? Impossible or just not worth it?


Title: Re: Segwit details? segwit wastes precious blockchain space permanently
Post by: johnyj on March 17, 2016, 12:30:17 AM
https://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/segregated-witness-and-its-impact-on-scalability/

Quote
There are still malleability problems that remain, like Bitcoin selecting which part of the transaction is being signed, like the sighash flags. This remains possible, obviously. That's something that you opt-in to, though. This directly has an effect on scalability for various network payment transaction channels and systems like lightning and others

IMO, segwit is a cleanup of the transaction format, but in order to do that without a hard fork, it uses a strange twin-block structure, which causes unnecessary complexity. A raised level of complexity typically opens many new attack vectors, and so far this has not been fully analyzed

And the 75% discount on witness data also changes the economics of blockchain space, so that it is specially designed to benefit the lightning network and other things

In fact, if you do it in a hard fork, you can redesign the whole transaction format at will, no need to do so many different hacks everywhere to make old nodes unaware of the change (these nodes can work against upgraded nodes in certain cases, especially when some of the upgraded hashing power do a roll back)


Title: Re: Segwit details? segwit wastes precious blockchain space permanently
Post by: RHA on March 17, 2016, 12:39:38 AM
Also I made the mistake of making sure the transaction hash matches for a transaction. I had assumed that if the transaction hash doesnt match, it is invalid rawbytes. Are you saying that we dont need to verify that the transaction hashes match? As you know verifying signatures is very time consuming compared to verifying txid. So if verifying txid is not available anymore, that would dramatically increase the CPU load for any validating node.
Anymore? It was never done in the first place. Verifying the transaction has always meant checking the signatures, because creating and verifying signatures involve the hash of the transaction.
Also, I am making an optimized bitcoin core and one of these optimizations is rejecting a tx whose contents doesnt match the txid. The thinking being that if the hashes dont match, there is no point in wasting time calculating the signature

Also, I am making an optimized bitcoin core and one of these optimizations is rejecting a tx whose contents doesnt match the txid. The thinking being that if the hashes dont match, there is no point in wasting time calculating the signature
Every piece of Bitcoin software does this.  It is a little obnoxious that you spend so much time talking about these optimizations you're "adding" which are basic behaviors that _every_ piece of Bitcoin software ever written has always done, as if you're the only person to have thought of them or how they distinguish this hypothetical node software you claim to be writing.                                    

Can't you, gmaxwell and knightdk, settle on verifying txid at last?
It's really hard to get info on SegWits here if even such an obvious thing (one would think) gets contradictory answers. ;)


Title: Re: Segwit details? segwit wastes precious blockchain space permanently
Post by: hhanh00 on March 17, 2016, 12:42:53 AM
Segwit is not about saving space for plain full nodes, the space is already saved in Core (if the user chooses to save it). As you note, local space savings can be done purely locally.  Segwit increases flexibility; fixes design flaws; saves space for nodes acting as SPV servers; and saves _bandwidth_; and none of these can be done as purely local changes.
Again I apologize for not being smart enough to instantly understand all the changes segwit does and I was misled by errant internet posts that segwit saved HDD space for the blockchain.

thank you for clarifying that it wont save space for full nodes.

Also, my understanding now is that iguana can just treat the segwit tx as standard p2sh and with the caveat that until it fully processes the witness data, it would just need to trust that any such tx that are mined are valid.

I would debate with you on many claims you make that I dont agree with, but I see no point to debate with words. I will make an iguana release that will demonstrate my claims. Fair enough?

James

The problem is that you lost a lot of credibility by making your claims earlier, and now it'll be hard to take your software seriously. Basically, you are asking us to check out your rocket after you argued against the laws of gravity.


Title: Re: Segwit details? segwit wastes precious blockchain space permanently
Post by: AliceGored on March 17, 2016, 12:44:29 AM
I asked some of these questions 3 months ago (https://www.reddit.com/r/bitcoinxt/comments/3w34o0/would_segregated_witnesses_really_help_anyone/).  Never got a decent answer.

Blockstream wants soft-forked SegWit to fix the malleability problems (that would be needed for the LN, if they ever get it to work), and to force ordinary p2p bitcoin users to subsidize the costs of complicated multisig transactions (ditto).  But these reasons do not seem to explain the urgency and energy that they are putting into the SegWit soft fork.  Maybe they have other undeclared reasons?  Perhaps they intend to stuff more data into the extension records, which they would not have to justify or explain since, being in the extension part, "ordinary users can ignore it anyway"?

As for SegWit being a soft fork, that is technically true; but a soft fork can do some quite radical changes, like imposing a negative interest (demurrage) tax, or raising the 21 million limit (https://www.reddit.com/r/bitcoin_uncensored/comments/43w24e/raising_the_21_million_btc_limit_with_a_soft_fork/).  One could also raise the block size limit (https://www.reddit.com/r/btc/comments/43w4rx/how_core_can_increase_the_21_million_btc_issuance/czlsk2q) that way.  These tricks would all let old clients work for a while, but eventually everybody will be forced to upgrade to use coins sent by the new version.

You've come to the right place for answers, professor. Openness is our middle name!

Now that that's all settled: What's Stolfi on about here? The 75% discount?

The discount is the question you won't get a good answer for. Fundamental economics of Bitcoin, price per byte, changed drastically, with a soft fork.


Title: Re: Segwit details? segwit wastes precious blockchain space permanently
Post by: jl777 on March 17, 2016, 12:45:35 AM
Segwit is not about saving space for plain full nodes, the space is already saved in Core (if the user chooses to save it). As you note, local space savings can be done purely locally.  Segwit increases flexibility; fixes design flaws; saves space for nodes acting as SPV servers; and saves _bandwidth_; and none of these can be done as purely local changes.
Again I apologize for not being smart enough to instantly understand all the changes segwit does and I was misled by errant internet posts that segwit saved HDD space for the blockchain.

thank you for clarifying that it wont save space for full nodes.

Also, my understanding now is that iguana can just treat the segwit tx as standard p2sh and with the caveat that until it fully processes the witness data, it would just need to trust that any such tx that are mined are valid.

I would debate with you on many claims you make that I dont agree with, but I see no point to debate with words. I will make an iguana release that will demonstrate my claims. Fair enough?

James

The problem is that you lost a lot of credibility by making your claims earlier, and now it'll be hard to take your software seriously. Basically, you are asking us to check out your rocket after you argued against the laws of gravity.

N + 2*numtxids + numvins > N

I still claim that is true, not sure how that loses me any credibility



Title: Re: Segwit details? segwit wastes precious blockchain space permanently
Post by: achow101 on March 17, 2016, 01:10:19 AM
N + 2*numtxids + numvins > N

I still claim that is true, not sure how that loses me any credibility
I believe I have forgotten to address this. Can you please explain how you are getting this?

AFAIK the txids aren't in any structure used by Bitcoin except in the inventories. Those might be stored, depends on the implementation. However, when it comes to the wtxids, there is absolutely no reason to store them. Their sole purpose is to simply have a hash of all of the data in a segwit transaction and have that be applied to the witness root hash in the coinbase transaction. There is no need to store the wtxids since nothing ever references them.
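To make the txid/wtxid distinction concrete: under BIP141 the txid is computed over the serialization without witness data, while the wtxid is computed over the serialization including it, and the wtxids only feed the witness commitment in the coinbase. A minimal Python sketch, assuming both serializations are already available as hex strings:

Code:
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def txid(serialized_without_witness_hex: str) -> str:
    # hash of the serialization *without* witness data, so witnesses cannot malleate it
    return dsha256(bytes.fromhex(serialized_without_witness_hex))[::-1].hex()

def wtxid(serialized_with_witness_hex: str) -> str:
    # hash of the serialization *with* witness data; these feed the witness root
    # hash committed to in the coinbase transaction
    return dsha256(bytes.fromhex(serialized_with_witness_hex))[::-1].hex()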

Where are you getting numvins from?

Anyways, your formula is wrong if you assume that the regular txid is currently being stored. Rather it should be

N + wtxid + numvins > N

and that is only if you are going to store wtxids which are not necessary to store anyways.


Title: Re: Segwit details? segwit wastes precious blockchain space permanently
Post by: jl777 on March 17, 2016, 01:24:14 AM
N + 2*numtxids + numvins > N

I still claim that is true, not sure how that loses me any credibility
I believe I have forgotten to address this. Can you please explain how you are getting this?

AFAIK the txids aren't in any structure used by Bitcoin except in the inventories. Those might be stored, depends on the implementation. However, when it comes to the wtxids, there is absolutely no reason to store them. Their sole purpose is to simply have a hash of all of the data in a segwit transaction and have that be applied to the witness root hash in the coinbase transaction. There is no need to store the wtxids since nothing ever references them.

Where are you getting numvins from?

Anyways, your formula is wrong if you assume that the regular txid is currently being stored. Rather it should be

N + wtxid + numvins > N

and that is only if you are going to store wtxids which are not necessary to store anyways.
I was told the extra space needed was 2 bytes per segwit tx plus 1 byte per vin, though maybe the 1 byte per vin can be reduced to 1 bit. Not sure how that is possible without new script opcodes, so maybe that is a possibility in the fullness of time sort of thing.

Regardless, the total space needed is more for segwit tx than normal tx, this is confirmed by wuille, lukejr and gmaxwell.

now I never said segwit wasnt impressive tech as that is quite a small overhead. my point is that segwit does not reduce the permanent space needed and if you feel that the HDD space needed to store the blockchain (or the data that needs to be shared between full nodes) is a factor that is important to scalability, then segwit does not help scalability regarding those two factors.

I do not speak about any other factors, only the permanent space used. Originally I was told that segwit did everything, including allow improved scalability and what confused me was that it was presented in a way that led me (and many others) to believe that segwit reduced the permanent storage needed.

now that it is clarified that segwit does not reduce the space needed, and that the segwit softfork will force any node that wants to be able to validate segwit tx to also upgrade to segwit, I think the rest is about implementation details.

And maybe someone can clarify the text on the bitcoincore.org site that presents segwit as curing cancer and world hunger?

James


Title: Re: Segwit details? segwit wastes precious blockchain space permanently
Post by: BlindMayorBitcorn on March 17, 2016, 01:28:13 AM
I asked some of these questions 3 months ago (https://www.reddit.com/r/bitcoinxt/comments/3w34o0/would_segregated_witnesses_really_help_anyone/).  Never got a decent answer.

Blockstream wants soft-forked SegWit to fix the malleability problems (that would be needed for the LN, if they ever get it to work), and to force ordinary p2p bitcoin users to subsidize the costs of complicated multisig transactions (ditto).  But these reasons do not seem to explain the urgency and energy that they are putting into the SegWit soft fork.  Maybe they have other undeclared reasons?  Perhaps they intend to stuff more data into the extension records, which they would not have to justify or explain since, being in the extension part, "ordinary users can ignore it anyway"?

As for SegWit being a soft fork, that is technically true; but a soft fork can do some quite radical changes, like imposing a negative interest (demurrage) tax, or raising the 21 million limit (https://www.reddit.com/r/bitcoin_uncensored/comments/43w24e/raising_the_21_million_btc_limit_with_a_soft_fork/).  One could also raise the block size limit (https://www.reddit.com/r/btc/comments/43w4rx/how_core_can_increase_the_21_million_btc_issuance/czlsk2q) that way.  These tricks would all let old clients work for a while, but eventually everybody will be forced to upgrade to use coins sent by the new version.

You've come to the right place for answers, professor. Openness is our middle name!

Now that that's all settled: What's Stolfi on about here? The 75% discount?

The discount is the question you won't get a good answer for. Fundamental economics of Bitcoin, price per byte, changed drastically, with a soft fork.

How come? ???


Title: Re: Segwit details? segwit wastes precious blockchain space permanently
Post by: achow101 on March 17, 2016, 01:39:42 AM
I was told the extra space needed was 2 bytes per segwit tx plus 1 byte per vin, though maybe the 1 byte per vin can be reduced to 1 bit. Not sure how that is possible without new script opcodes, so maybe that is a possibility in the fullness of time sort of thing.
I think it might actually be 33 bytes per vin because of the implementation being used, which does not introduce the new address type. This is so that the p2sh script will still verify true to old nodes. It is a 0 byte followed by a 32-byte hash of the witness script.
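Roughly, for the P2SH-wrapped form being described, the redeem script is the version byte 0 followed by a push of the 32-byte SHA-256 of the witness script, and the output old nodes evaluate is an ordinary P2SH output. A minimal sketch (illustrative only, not the exact byte accounting being debated):

Code:
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def hash160(b: bytes) -> bytes:
    return hashlib.new('ripemd160', sha256(b)).digest()

def p2sh_p2wsh_scripts(witness_script: bytes):
    # redeem script: version byte 0, then a push of the 32-byte SHA-256 of the witness script
    redeem_script = bytes([0x00, 0x20]) + sha256(witness_script)
    # outer scriptPubKey is plain P2SH (OP_HASH160 <hash160(redeemScript)> OP_EQUAL),
    # which is why unupgraded nodes still see a script that verifies true
    script_pubkey = bytes([0xa9, 0x14]) + hash160(redeem_script) + bytes([0x87])
    return redeem_script, script_pubkey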

Regardless, the total space needed is more for segwit tx than normal tx, this is confirmed by wuille, lukejr and gmaxwell.

now I never said segwit wasnt impressive tech as that is quite a small overhead. my point is that segwit does not reduce the permanent space needed and if you feel that the HDD space needed to store the blockchain (or the data that needs to be shared between full nodes) is a factor that is important to scalability, then segwit does not help scalability regarding those two factors.
And I don't think that anybody has ever said that it would reduce the space needed to store it. If you are believing everything you read on the internet, you need a reality check. When you read these things, make sure that they are actually backed up by reputable sources e.g. the technical papers.

I do not speak about any other factors, only the permanent space used. Originally I was told that segwit did everything, including allow improved scalability and what confused me was that it was presented in a way that led me (and many others) to believe that segwit reduced the permanent storage needed.
Could you cite the article(s) which did that? If it was something on bitcoin.org or bitcoincore.org then that could be fixed.

now that it is clarified that segwit does not reduce the space needed, and that the segwit softfork will force any node that wants to be able to validate segwit tx to also upgrade to segwit, I think the rest is about implementation details.
Sure. Any other questions about implementation?

And maybe someone can clarify the text on the bitcoincore.org site that presents segwit as curing cancer and world hunger?
Does it portray segwit that positively? I read it and it didn't seem that way to me.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: gmaxwell on March 17, 2016, 01:44:50 AM
What about fixing those "other problems" (I don't want to say "hard", because IMO they aren't "hard" by themselves) without the segregation? Impossible or just not worth it?
A strong malleability fix _requires_ segregation of signatures.

A less strong fix could be achieved without it if generality is abandoned (e.g. only works for a subset of script types, rather than all without question) and a new cryptographic signature system (something that provides unique signatures, not ECC signatures) was deployed.

And even with giving up on fixing malleability for most smart contracts, it's very challenging to be absolutely sure that a specific instance is actually non-malleable. This can be seen in the history of BIP62-- where at several points it was believed that it addressed all forms of malleability for the subset of transactions it attempted to fix, only to  later discover that there were additional forms.  If a design is inherently subject to malleability but you hope to fix it by disallowing all but one possible representation there is a near endless source of ways to get it wrong.

Segregation removes that problem. Scripts using segwit achieve a strong base level of non-malleability without doubt or risk of getting it wrong, both in design and by script authors. And only segregation applies to all scripts, not just a careful subset of "inherently non-malleable rules".

Getting signatures out from under TXIDs is the natural design to prevent problems from malleability and engineers were lamenting that Bitcoin didn't work that way as far back as 2011/late-2012.

Can't you, gmaxwell and knightdk, settle on verifying txid at last?
It's really hard to get info on SegWits here if even such an obvious thing (one would think) gets contradictory answers. ;)
Knightdk will tell you to defer to me if there is a conflict on such things.

But here there isn't really, I think-- we're answering different statements. I was answering "The thinking being that if the hashes dont match, there is no point in wasting time calculating the signature".

Knightdk is responding about verifying loose transactions; there is no "verify the transaction ID", because no ID is even sent. You have nothing to verify against. All you can do is compute the ID.

I was referring to processing blocks. Generally first step of validating a block, after connecting it to a chain, is checking the proof of work. The second step is hashing the transactions in the block to verify that the block hash is consistent with the data you received. If it is not, the information is discarded before performing further processing. Unlike a loose transaction, you have a block header, and can actually validate against something.
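A minimal sketch of that second step -- recompute the merkle root from the block's txids and compare it against the header -- assuming the txids are given in the usual byte-reversed display order:

Code:
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids_hex):
    # pairwise double SHA-256 up the tree; Bitcoin duplicates the last hash on odd levels
    level = [bytes.fromhex(t)[::-1] for t in txids_hex]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0][::-1].hex()

def block_txs_consistent(header_merkle_root_hex: str, txids_hex) -> bool:
    # cheap consistency check against the header before any signature validation
    return merkle_root(txids_hex) == header_merkle_root_hex.lower()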

In fact, if you do it in a hard fork, you can redesign the whole transaction format at will, no need to do so many different hacks everywhere to make old nodes unaware of the change (these nodes can work against upgraded nodes in certain cases, especially when some of the upgraded hashing power do a roll back)
No, you can't-- not if you live in a world with other people in it.  The spherical cow "hardforks can change anything" ignores that a hardfork that requires all users shutting down the Bitcoin network, destroying all in flight transactions, and invalidating presigned transactions (thus confiscating some amount of coins) will just not be deployed.

Last year I tried proposing an utterly technically simple hard fork to fix the time-warp vulnerability and provide extranonce in the block header using the prev-hash bits that are currently always forced to zero (often requested by miners and ASIC makers-- and important for avoiding hardcoding block logic in asics) and it was _vigorously_ opposed by Mike Hearn and Gavin Andresen-- because it would have required that smartphone wallets upgrade to fix their header checks and difficulty calculation.  ... and that was for something that would be just a well contained four or five lines of code changed.

I hope that that change eventually happens; but given that it was attacked so aggressively by the two biggest advocates of "hard forks are no big deal", I can't imagine a radical backwards incompatible change to the transaction format happening; especially when the alternative is so easy and good that I'd prefer to use it for increased similarity even in an explicitly incompatible system.

The discount is the question you won't get a good answer for. Fundamental economics of Bitcoin, price per byte, changed drastically, with a soft fork.
What? It's an explicit goal. Transaction "size" in a particular serialization (which isn't necessarily used for transmission or storage) does not well reflect the costs of a transaction to the system. This has created a misalignment of incentives which has been previously misused (e.g. a miner creating blocks which expand the UTXO set size by almost a megabyte twiddling around with dust-spam (known private keys)).  

At the end of the day signatures are transmitted at most once to a node and can be pruned. But data in the UTXO set must be in perpetual online storage. Its size sets a hard lower bound on the amount of resources to run a node. The fact that the size limit doesn't reflect the true cost has been a long term concern, and it's one of the biggest issues raised with respect to blocksize limits (even acknowledged by strong proponents of blocksize increase: e.g.  http://gavinandresen.ninja/utxo-uhoh (ignore anything in it about storing the UTXO set in ram, no version of Bitcoin Core has ever done that; that was just some confusion on the part of the author)). Prior problems with UTXO bloating attacks forced the introduction of the "dust limit" standardness rule, which is an ugly hack to reduce the bleeding from this misalignment of incentives.

In Montreal scaling Bitcoin fixing this costing imbalance was _the_ ray of light that got lots of people thinking that some agreement to a capacity bump could be had: if capacity could be increased while _derisking_ UTXO impact, or at least making it no worse-- then many of the concerns related to capacity increases would be satisfied.  So I guess it's no shock to see avowed long time Bitcoin attackers like jstolfi particularly picking on this aspect of a fix as a measure to try to undermine the ecosystem.

One of the challenges coming out of Montreal was that it wasn't clear how to decide on how the corrected costing should work. The "perfect" figures depend on the relative costs of storage, bandwidth, cpu, initial sync delays, etc.. which differ from party to party and over time-- though the current size counting is clearly poor across the board. Segwit addressed that open parameter, because optimizing its capacity required a discount which achieved a dual effect of also fixing the misaligned costing.
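For concreteness, the recosting eventually settled on is usually expressed as the segwit weight formula, in which witness bytes count once and non-witness bytes count four times (the "75% discount"). A minimal sketch of that accounting (the example numbers are made up):

Code:
import math

def tx_weight(base_size: int, total_size: int) -> int:
    # base_size: serialized size without witness data; total_size: with it.
    # Non-witness bytes count 4x, witness bytes count 1x.
    return base_size * 3 + total_size

def virtual_size(base_size: int, total_size: int) -> int:
    return math.ceil(tx_weight(base_size, total_size) / 4)

# Example: a 250-byte transaction of which 100 bytes are witness data:
# weight = 150*3 + 250 = 700, vsize = 175 -- the witness bytes end up discounted 75%.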

The claims that the discounts have something to do with lightning and blockstream have no substance at all.
(1) Lightning predates Segwit significantly.
(2) Lightning HTLC transactions have tiny signatures, and benefit less than many transaction styles (in other words the recosting should slightly increase their relative costs), though no one should care because channel closures are relatively rare. Transactions that do large multisigs would benefit more, because the current size model radically over-costs them relative to their total cost to Bitcoin nodes.
(3) Blockstream has no plans to make any money from running Lightning in Bitcoin in any case;  we started funding some work on Lightning because we believed it was long term important for Bitcoin and Mike Hearn criticized us for not funding it if we thought it important, because one of our engineers _really_ wanted to work on it himself, and because we were able to work out a business case for using it to make sidechains scalable too.

N + 2*numtxids + numvins > N
I still claim that is true, not sure how that loses me any credibility
In one post you were claiming 42 bytes per one in / one out transaction, in the other you appeared to be claiming 800 bytes.  In any case, even your formula depends on what serialization is used; one could choose one where it was smaller and not bigger. The actual amount of true entropy added is on the order of a couple bits per transaction (are segwit coins being spent or not, and what script versions).

To characterize that as "SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY", when the same signaling will allow the use of new signature schemes that reduce the size of transactions on average by about _30%_, seems really deceptive, and it makes me sad that you're continuing with this argument even after having your misunderstandings corrected.

I thought you said you were actually going to write the software you keep talking about and speak through results, rather than the continued factually incorrect criticisms you keep making of software and designs which you don't care to spend a minute to learn the first thing about? We're waiting.

In the mean time: Shame on you, and shame on you for having no shame.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: jl777 on March 17, 2016, 02:02:35 AM
N + 2*numtxids + numvins > N
I still claim that is true, not sure how that loses me any credibility
In one post you were claiming 42 bytes per one in / one out transaction, in the other you appeared to be claiming 800 bytes.  In any case, even your formula depends on what serialization is used; one could choose one where it was smaller and not bigger. The actual amount of true entropy added is on the order of a couple bits per transaction (are segwit coins being spent or not, and what script versions).

To characterize that as "SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY", when the same signaling will allow the use of new signature schemes that reduce the size of transactions on average by about _30%_, seems really deceptive, and it makes me sad that you're continuing with this argument even after having your misunderstandings corrected.

I thought you said you were actually going to write the software you keep talking about and speak through results, rather than the continued factually incorrect criticisms you keep making of software and designs which you don't care to spend a minute to learn the first thing about? We're waiting.

In the mean time: Shame on you, and shame on you for having no shame.
I corrected my mistaken estimates and I made it clear I didnt know the exact overheads. I did after all just start looking into segwit yesterday. Unlike you, I do make mistakes, but when I understand my mistake, I admit it. Maybe you can understand the limitations of mortals who are prone to make errors.

Last I was told, the vinscript that would otherwise be in the normal 1MB blockchain needs to go into the witness area. Is that not correct? If it goes from the 1MB space to the witness space, how is that 30% smaller? (I am talking about permanent storage for full relaying/verifying nodes)

I only responded to knightdk's questions, should I have ignored his direct question?

luke-jr told me it takes 2 bytes per tx and 1 byte per vin extra using segwit as opposed to a 2MB hardfork. I thought you also confirmed this. Now you are saying that using segwit reduces the total permanent space used by 30%, if that is really the case then I will change my view.

please explain to me how lukejr is wrong when he says it takes 2 bytes per tx and 1 byte per vin. i will update the title to match my understanding, without shame when I see my mistake. Imagine I am like rainman. I just care about the numbers

James


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: TooDumbForBitcoin on March 17, 2016, 02:04:25 AM
Classic cargo cult now in bed with jl777, treating him like Jim Jones?  Amazing.  

jl777 views Classic as a headless steamroller with an empty driver's seat he can fill - accruing all glory and power.

Techno-eunuchs in Classic (Peter R, HashFastDefendantDoc, many others) are hypmotized by jl777's tecnobabble, and are now forced to type with one hand while they rub themselves up a genie with the other.

Classic/jl777 or GOP/Trump - which is more entertaining/sad/doomed?







Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: achow101 on March 17, 2016, 02:10:07 AM
luke-jr told me it takes 2 bytes per tx and 1 byte per vin extra using segwit as opposed to a 2MB hardfork. I thought you also confirmed this. Now you are saying that using segwit reduces the total permanent space used by 30%, if that is really the case then I will change my view.

please explain to me how lukejr is wrong when he says it takes 2 bytes per tx and 1 byte per vin. i will update the title to match my understanding, without shame when I see my mistake. Imagine I am like rainman. I just care about the numbers
Where did luke-jr tell you this? Did he explain why? I don't understand the 1 byte per vin part and would like to see the explanation for it.

What gmaxwell is saying is that segwit allows for future upgrades. One of those future upgrades could be an upgrade to a different signature scheme which does have the 30% reduction.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: BlindMayorBitcorn on March 17, 2016, 02:18:04 AM
Classic cargo cult now in bed with jl777, treating him like Jim Jones?  Amazing.  

jl777 views Classic as a headless steamroller with an empty driver's seat he can fill - accruing all glory and power.

Techno-eunuchs in Classic (Peter R, HashFastDefendantDoc, many others) are hypmotized by jl777's tecnobabble, and are now forced to type with one hand while they rub themselves up a genie with the other.

Classic/jl777 or GOP/Trump - which is more entertaining/sad/doomed?

I'm so glad you're here! Can you put this in words the r/btc crowd could relate to? Me included.  :-\

Quote from: Gmax
What? It's an explicit goal. Transaction "size" in a particular serialization (which isn't necessarily used for transmission or storage) does not well reflect the costs of a transaction to the system. This has created a misalignment of incentives which has been previously misused (e.g. a miner creating blocks which expand the UTXO set size by almost a megabyte twiddling around with dust-spam (known private keys)).  

At the end of the day signatures are transmitted at most once to a node and can be pruned. But data in the UTXO set must be in perpetual online storage. Its size sets a hard lower bound on the amount of resources to run a node. The fact that the size limit doesn't reflect the true cost has been a long term concern, and it's one of the biggest issues raised with respect to blocksize limits (even acknowledged by strong proponents of blocksize increase: e.g.  http://gavinandresen.ninja/utxo-uhoh (ignore anything in it about storing the UTXO set in ram, no version of Bitcoin Core has ever done that; that was just some confusion on the part of the author)). Prior problems with UTXO bloating attacks forced the introduction of the "dust limit" standardness rule, which is an ugly hack to reduce the bleeding from this misalignment of incentives.

In Montreal scaling Bitcoin fixing this costing imbalance was _the_ ray of light that got lots of people thinking that some agreement to a capacity bump could be had: if capacity could be increased while _derisking_ UTXO impact, or at least making it no worse-- then many of the concerns related to capacity increases would be satisfied.  So I guess it's no shock to see avowed long time Bitcoin attackers like jstolfi particularly picking on this aspect of a fix as a measure to try to undermine the ecosystem.

One of the challenges coming out of Montreal was that it wasn't clear how to decide on how the corrected costing should work. The "perfect" figures depend on the relative costs of storage, bandwidth, cpu, initial sync delays, etc.. which differ from party to party and over time-- though the current size counting is clearly poor across the board. Segwit addressed that open parameter, because optimizing its capacity required a discount which achieved a dual effect of also fixing the misaligned costing.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: jl777 on March 17, 2016, 02:22:32 AM
luke-jr told me it takes 2 bytes per tx and 1 byte per vin extra using segwit as opposed to a 2MB hardfork. I thought you also confirmed this. Now you are saying that using segwit reduces the total permanent space used by 30%, if that is really the case then I will change my view.

please explain to me how lukejr is wrong when he says it takes 2 bytes per tx and 1 byte per vin. i will update the title to match my understanding, without shame when I see my mistake. Imagine I am like rainman. I just care about the numbers
Where did luke-jr tell you this? Did he explain why? I don't understand the 1 byte per vin part and would like to see the explanation for it.

What gmaxwell is saying is that segwit allows for future upgrades. One of those future upgrades could be an upgrade to a different signature scheme which does have the 30% reduction.
luke-jr: https://www.reddit.com/r/Bitcoin/comments/4amg1f/iguana_bitcoin_full_node_developer_jl777_argues/d12bcz7

plz note i didnt start any of the reddit threads, they seem to spontaneously start

OK, so maybe the fact that I am trying to analyze what the segwit softfork will do in the upcoming weeks explains my not understanding that future upgrades with a new signature scheme are part of the analysis... Would these changes require a hardfork, or can the usual softfork change the signature scheme? It is kind of hard to analyze something based on unspecified future upgrades with a different signature scheme.

maybe there can be just a single aggregated signature for all the tx in a block? I have no idea if that is possible, but if it is, then that could be added to the coinbase and then we wont need any witness data at all. Did I get that right?


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: jl777 on March 17, 2016, 02:31:12 AM
Classic cargo cult now in bed with jl777, treating him like Jim Jones?  Amazing.  

jl777 views Classic as a headless steamroller with an empty driver's seat he can fill - accruing all glory and power.

Techno-eunuchs in Classic (Peter R, HashFastDefendantDoc, many others) are hypmotized by jl777's tecnobabble, and are now forced to type with one hand while they rub themselves up a genie with the other.

Classic/jl777 or GOP/Trump - which is more entertaining/sad/doomed?
I did ask for an iguana childboard here, but was totally ignored on that, not even a rejection. Maybe if that wasnt just ignored, I wouldnt be so active elsewhere. bitco.in gave me a child board the next day, so I am more active there. it is as simple as that.

I do not agree with classic's position against RBF, that does not make sense to me. I still have not heard any rational explanation about how RBF breaks zeroconf. zeroconf cant work when the blocks are full, as when the blocks are full you cant know when a tx in the mempool is likely to confirm. If anything, defining the RBF behavior allows a much better statistical model to predict when an unconfirmed tx will confirm.

Convince me with the math, or you can call me names and I stay unconvinced.

With RBF, I came here, asked some questions, got reasonable answers and made my analysis. I dont like the changing of sequenceid into the RBF field, but it isnt the horrible devils' spawn that it is made out to be. However, the people there are much better behaved and nobody trolled me or insinuated that I dont understand bitcoin at all.

James


Title: Re: Segwit details?
Post by: achow101 on March 17, 2016, 03:03:36 AM
OK, so maybe the fact that I am trying to analyze what the segwit softfork will do in the upcoming weeks explains my not understanding that future upgrades with a new signature scheme are part of the analysis... Would these changes require a hardfork, or can the usual softfork change the signature scheme? It is kind of hard to analyze something based on unspecified future upgrades with a different signature scheme.
Well I think those changes could be soft forked in because it changes the script version number, which I think would only affect the address type. I could be wrong though.

maybe there can be just a single aggregated signature for all the tx in a block? I have no idea if that is possible, but if it is, then that could be added to the coinbase and then we wont need any witness data at all. Did I get that right?
I am fairly certain that this isn't possible since it would require the private keys that can spend the inputs of all of the transactions to sign it. However, I could be wrong as I am not well versed in many parts of cryptography. There maybe is an algorithm which could combine all of the signatures, I don't know. You'll have to ask gmaxwell, he is the "chief cryptographer".


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: AliceGored on March 17, 2016, 03:20:32 AM
The discount is the question you won't get a good answer for. Fundamental economics of Bitcoin, price per byte, changed drastically, with a soft fork.

What? It's an explicit goal. Transaction "size" in a particular serialization (which isn't necessarily used for transmission or storage) does not well reflect the costs of a transaction to the system. This has created a misalignment of incentives which has been previously misused (e.g. a miner creating blocks which expand the UTXO set size by almost a megabyte twiddling around with dust-spam (known private keys)).  

“What?” Yes, it is an explicit goal, an under-publicized one. Glad to hear you acknowledge that you are realigning, in your view, the misaligned incentives of the current system, via a soft fork without a full node referendum.

At the end of the day signatures are transmitted at most once to a node and can be pruned. But data in the UTXO set must be in perpetual online storage. Its size sets a hard lower bound on the amount of resources to run a node. The fact that the size limit doesn't reflect the true cost has been a long term concern, and it's one of the biggest issues raised with respect to blocksize limits (even acknowledged by strong proponents of blocksize increase: e.g.  http://gavinandresen.ninja/utxo-uhoh (ignore anything in it about storing the UTXO set in ram, no version of Bitcoin Core has ever done that; that was just some confusion on the part of the author)). Prior problems with UTXO bloating attacks forced the introduction of the "dust limit" standardness rule, which is an ugly hack to reduce the bleeding from this misalignment of incentives.

I’m aware that Core is focused on encouraging a gradation of nodes on the network. To me, a full node means a full, archival, fully validating node, and that’s what I’m concerned with. You are applying economic favoritism in order to achieve benefits for these new partial full nodes, which is ok, as long as everyone is aware of it. With a handful of miners activating it, I’m not sure you have the full consent of the network to pursue this goal. With a soft fork, full consent is not required or even relevant.

In Montreal scaling Bitcoin fixing this costing imbalance was _the_ ray of light that got lots of people thinking that some agreement to a capacity bump could be had: if capacity could be increased while _derisking_ UTXO impact, or at least making it no worse-- then many of the concerns related to capacity increases would be satisfied.  So I guess it's no shock to see avowed long time Bitcoin attackers like jstolfi particularly picking on this aspect of a fix as a measure to try to undermine the ecosystem.

So… changing these incentives was _the_ ray of light that allowed “lots of people” (assuming blockstream here) to think that a capacity increase could be had, fascinating. Before your email became the core roadmap, and before the conclusion of the HK conference, almost everyone thought that we would be hard forking at least some block size increase. Interesting to hear that perspective was wrong all along.


One of the challenges coming out of Montreal was that it wasn't clear how to decide on how the corrected costing should work. The "perfect" figures depend on the relative costs of storage, bandwidth, cpu, initial sync delays, etc.. which differ from party to party and over time-- though the current size counting is clearly poor across the board. Segwit addressed that open parameter, because optimizing its capacity required a discount which achieved a dual effect of also fixing the misaligned costing.

This is all just you playing economic central planner, and the 1MB anti DOS limit from 2010 has become your most valued control lever, kudos.

The claims that the discounts have something to do with lightning and blockstream have no substance at all.
(1) Lightning predates Segwit significantly.

Not surprising, segwit was designed with the "side" benefit of making sig heavy settlement tx cheaper, and a main benefit of fixing malleability which LN requires.

(2) Lightning HTLC transactions have tiny signatures, and benefit less than many transaction styles (in other words the recosting should slightly increase their relative costs), though no one should care because channel closures are relatively rare. Transactions that do large multisigs would benefit more, because the current size model radically over-costs them relative to their total cost to Bitcoin nodes.

Waves hands.

(3) Blockstream has no plans to make any money from running Lightning in Bitcoin in any case;  we started funding some work on Lightning because we believed it was long term important for Bitcoin and Mike Hearn criticized us for not funding it if we thought it important, because one of our engineers _really_ wanted to work on it himself, and because we were able to work out a business case for using it to make sidechains scalable too.

I will be paying attention as to whether this statement remains true. You got your jabs in at both Gavin and Mike, so, kudos again.


Title: Re: Segwit details?
Post by: jl777 on March 17, 2016, 03:23:11 AM
OK, so maybe the fact that I am trying to analyze what the segwit softfork will do in the upcoming weeks explains my not understanding that future upgrades with a new signature scheme are part of the analysis... Would these changes require a hardfork, or can the usual softfork change the signature scheme? It is kind of hard to analyze something based on unspecified future upgrades with a different signature scheme.
Well I think those changes could be soft forked in because it changes the script version number, which I think would only affect the address type. I could be wrong though.

maybe there can be just a single aggregated signature for all the tx in a block? I have no idea if that is possible, but if it is, then that could be added to the coinbase and then we wont need any witness data at all. Did I get that right?
I am fairly certain that this isn't possible since it would require the private keys that can spend the inputs of all of the transactions to sign it. However, I could be wrong as I am not well versed in many parts of cryptography. There maybe is an algorithm which could combine all of the signatures, I don't know. You'll have to ask gmaxwell, he is the "chief cryptographer".
I would think that to implement a blockwide aggregated signature would at the least require a multi-step process:

1. block is mined to determine the tx that are in it
2. the txids of this protoblock would need to be broadcast
3. nodes that are running and whose tx are part of the protoblock would need to sign and return to the miner(s)?
4. miner prunes out all the signatures that are aggregated and publishes optimized block

Not sure if the libsecp256k1-zkp lib's schnorr routines are sufficient for this and clearly it cant be done with all sigs, and of course details about timing and protocol for the above have plenty left to be defined, like when the mining reward is earned, etc. so this is just a fantasy protocol for now

I am not saying the above is possible, just that the above is the minimum back and forth that would be needed, and it has some privacy issues, so some privacy enhancements are probably needed too. A bitmap of the aggregate signers would probably be needed, but that can be run length encoded to take up a relatively small amount of space
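Purely to illustrate that last point (this is not part of any actual proposal), a run-length encoding of a signer bitmap stays very small when participation is mostly uniform:

Code:
def rle_encode(bitmap):
    # encode a signer bitmap (sequence of 0/1) as (bit, run_length) pairs
    runs = []
    for bit in bitmap:
        if runs and runs[-1][0] == bit:
            runs[-1] = (bit, runs[-1][1] + 1)
        else:
            runs.append((bit, 1))
    return runs

def rle_decode(runs):
    return [bit for bit, length in runs for _ in range(length)]

# A block where nearly every signer participated collapses to a handful of runs:
# rle_encode([1]*2000 + [0] + [1]*500) == [(1, 2000), (0, 1), (1, 500)]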


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: iCEBREAKER on March 17, 2016, 03:28:12 AM
Classic cargo cult now in bed with jl777, treating him like Jim Jones?  Amazing.  

jl777 views Classic as a headless steamroller with an empty driver's seat he can fill - accruing all glory and power.

I did ask for an iguana childboard here, but was totally ignored on that, not even a rejection. Maybe if that wasnt just ignored, I wouldnt be so active elsewhere. bitco.in gave me a child board the next day, so I am more active there. it is as simple as that.

I do not agree with classic's position against RBF, that does not make sense to me. I still have not heard any rational explanation about how RBF breaks zeroconf. zeroconf cant work when the blocks are full, as when the blocks are full you cant know when a tx in the mempool is likely to confirm. If anything, defining the RBF behavior allows a much better statistical model to predict when an unconfirmed tx will confirm.

Convince me with the math, or you can call me names and I stay unconvinced.

With RBF, I came here, asked some questions, got reasonable answers and made my analysis. I dont like the changing of sequenceid into the RBF field, but it isnt the horrible devils' spawn that it is made out to be. However, the people there are much better behaved and nobody trolled me or insinuated that I dont understand bitcoin at all.

James

We can't make a new childboard for every project you think of.  You start like 4 new things every month, and none of them are ever completed.  If you demonstrate the capacity to follow through on things you begin, perhaps your latest weekly brain fart vaporware might be taken seriously.

Glad you saw through Classic's FUD about RBF.  It doesn't make Classic look good to reject such an obviously beneficial feature, especially for the sake of preserving their false hope about zero-conf tx viability.

The problem in this thread is you contradicting yourself:
Quote
jl777: 'I'm just a simple C programmer' (everybody take a shot, per alt sub rules!  :D)
jl777: 'I'm just here to ask questions'
jl777: 'I just heard about SEGWIT and I'm here to fix it!'
jl777: 'I can't be bothered to read the freaking SEGWIT manual (BIP docs) but will still post my melodramatic FUDDY conclusions about "wasting precious blockchain space"'

Do you see the problem ^there?

IMO, it looks like Gavin helped hype Iguana with the name-drop in order to introduce you as Classic's latest FUDster-In-Chief, following in the ignoble footsteps of Hearn's and Toomin's failures.

Your usual 'baffle them with techno-babble BS' strategy may work in the altcoin space, but there are much higher standards in Bitcoin.   ;)


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: johnyj on March 17, 2016, 03:42:25 AM

In fact, if you do it in a hard fork, you can redesign the whole transaction format at will, no need to do so many different hacks everywhere to make old nodes unaware of the change (these nodes can work against upgraded nodes in certain cases, especially when some of the upgraded hashing power do a roll back)
No, you can't-- not if you live in a world with other people in it.  The spherical cow "hardforks can change anything" ignores that a hardfork that requires all users shutting down the Bitcoin network, destroying all in flight transactions, and invalidating presigned transactions (thus confiscating some amount of coins) will just not be deployed.


This is a networked society; I don't think a hard fork is as difficult as you said. Ethereum just had one and no one complained

Just like a soft fork, you have a long period to inform all the users to upgrade, those who don't care, their software will just not be able to talk to the network and the transactions will be dropped. Anyone can make a hard fork right away, but if major exchanges, major service providers/merchants are not accepting his coins, there is no point of that minority coin

When a large bank upgrades its system, all the users of that bank cannot access banking services for at least hours or a whole night/weekend, and no one complains. And sometimes when they have an incident, it can happen in the middle of the day and suddenly no payments can be made in the whole country; still no one cares, only a piece of news appears in the newspaper

Of course banks can always reverse transactions, so it's a bit different from bitcoin. However, bitcoin is use-at-your-own-risk; no one will compensate anyone's bitcoin loss due to incompetent devs or forks, so it is the user's responsibility to keep himself updated with the latest changes in bitcoin



Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: gmaxwell on March 17, 2016, 05:53:37 AM
This is a networked society, I don't think a hard fork is that difficult as you said. Ethereum just had one and no one complains
You're getting caught up on terms, thinking that all hard forks are the same. They aren't.  Replacing the entire Bitcoin system with Ethereum, complete with the infinite inflation schedule of ethereum, would just be a hardfork. ... but uhhh.. it's not the same thing as, say, increasing the Bitcoin Blocksize, which is not the same as allowing coinbase txn to spend coinbase outputs...

Quote
Just like a soft fork, you have a long period to inform all the users to upgrade, those who don't care, their software will just not be able to talk to the network and the transactions will be dropped.
That isn't like a soft fork; soft forks don't kick anyone out of the network. And you seem to have missed what I said: because of nLocktimed transactions, changing the transaction format would effectively confiscate some people's Bitcoins.

Quote
When a large bank upgrading their system, all the users of that bank can not access the banking service for at least hours or whole night/weekend, no one complains.
Yes, banks are centralized systems-- ones which usually only serve certain geographies and aren't operational 24/7. Upgrading them is a radically different proposition from upgrading a decentralized system.  A Bitcoin hard fork is a lot more like switching from the English to the metric system, except worse, because no one values measurement systems based on how immune to political influence they are.

Quote
I’m aware that Core is focused on encouraging a gradation of nodes on the network. To me, a full node means a full, archival, fully validating node, and that’s what I’m
Your usage of the term full node is inconsistent with the Bitcoin community's usage since something like 2010 at least. A pruned node is a full node. You can invent new words if you like, but keep in mind the purpose of words is to communicate, and so when you make up new meanings just to argue that you're right, you are just wasting time.

You claim to be concerned with validating, but I do not see you complaining that classic has functionality so that miners will skip validation: https://www.reddit.com/r/Bitcoin/comments/4apl97/gavins_head_first_mining_thoughts/

Quote
So… changing these incentives was _the_ ray of light that allowed “lots of people” (assuming blockstream here) that a capacity increase could be had, fascinating. Before your email became the core roadmap, and before the conclusion of the HK conference, almost everyone thought that we would be hard forking at least some block size increase. Interesting to hear that perspective was wrong all along.
No, not blockstream people (go look for proposals from Blockstream people-- there are several blocksize hardforks suggested). Because of the constant toxic abuse, most of us have backed away from Bitcoin Core involvement in any case.

Quote
Not surprising, segwit was designed with the "side" benefit of making sig heavy settlement tx cheaper, and a main benefit of fixing malleability which LN requires.
Fixing this is a low enough priority that we canceled work on BIP62 before soft-fork segwit was invented. In spite of this considerable factual evidence, you're going to believe what you want, please don't waste my time like this again:

Quote
(2) Lightning HTLC transactions have tiny signatures, and benefit less than many transaction styles (in other words the recosting should slightly increase their relative costs), though no one should care because channel closures are relatively rare. Transactions that do large multisigs would benefit more, because the current size model radically over-costs them relative to their total cost to Bitcoin nodes.

Waves hands.

luke-jr told me it takes 2 bytes per tx and 1 byte per vin extra using segwit as opposed to a 2MB hardfork. I thought you also confirmed this. Now you are saying that using segwit reduces the total permanent space used by 30%, if that is really the case then I will change my view.

please explain to me how luke-jr is wrong when he says it takes 2 bytes per tx and 1 byte per vin. I will update the title to match my understanding, without shame, when I see my mistake. Imagine I am like Rain Man: I just care about the numbers
Luke told you what the Bitcoin Core segwitness implementation stores. For ease of implementation it stores the flags that way. Any implementation could do something more efficient to save the tiny amount of additional space there, Core probably won't bother-- not worth the engineering effort because it's a tiny amount.
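To put that in perspective, a quick back-of-the-envelope calculation using the thread-title formula N + 2*numtxids + numvins and a made-up block shape (Python; the tx and input counts are illustrative assumptions, not measurements):

Code:
# Extra bytes per block if segwit stores 2 bytes per tx and 1 byte per input,
# compared against a plain 2 MB block. The block shape below is hypothetical.

num_txids = 4000        # assumed transactions in a ~2 MB block
num_vins  = 10000       # assumed total inputs
base_size = 2_000_000   # bytes

extra = 2 * num_txids + num_vins
print(extra, "extra bytes")                     # 18000
print(round(100 * extra / base_size, 2), "%")   # 0.9 %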

Part of what segwitness does is facilitate signature system upgrades. One of the proposed upgrades now saves an average of 30% on current usage patterns-- I linked it in an earlier response. It would save more if users did whole block coinjoins. The required infrastructure to do that is exactly the same as coinjoin (because it is a coinjoin), with a two round trip signature-- but the asymptotic gain is only a bit over 41%.  It'll be nice for coinjoins to have lower marginal fees than non-coinjoins; but given the modest improvement possible over current usage, it isn't particularly important to have whole block joins with that scheme; existing usage gets most of the gains.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: johnyj on March 17, 2016, 08:10:19 AM

Quote
Just like a soft fork, you have a long period to inform all the users to upgrade, those who don't care, their software will just not be able to talk to the network and the transactions will be dropped.

That isn't like a soft fork; soft forks don't kick anyone out of the network. And you seem to have missed what I said: because of nLocktimed transactions, changing the transaction format would effectively confiscate some people's Bitcoins.


The world will not collapse because of a bitcoin hard fork, and since it has been advertised as an experiment, everyone knows it can have many disruptions; they all play with risk capital and will tighten their security belts if well informed. By successfully doing a hard fork, you clear the way for many difficult changes in the future. You can't conjure a new soft fork trick every time you want a backward-incompatible change. If you have to do a hard fork anyway in the future, the earlier the better

If you are aiming for million dollars per bitcoin, it is still very early stage of the development


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: BlindMayorBitcorn on March 17, 2016, 08:14:47 AM

Quote
Just like a soft fork, you have a long period to inform all the users to upgrade, those who don't care, their software will just not be able to talk to the network and the transactions will be dropped.

That isn't like a soft fork; soft forks don't kick anyone out of the network. And you seem to have missed what I said: because of nLocktimed transactions, changing the transaction format would effectively confiscate some people's Bitcoins.


The world will not collapse because of a bitcoin hard fork, and since it has been advertised as an experiment, everyone knows it can have many disruptions, they all play with risk capitals and will tighten their security belt if well-informed. By successfully doing a hard fork, you cleared the way to many difficult changes in future. You can't spell a new soft fork trick every time when you want a backward-incompatible change. If you have to do a hard fork anyway in future, the earlier the better

If you are aiming for million dollars per bitcoin, it is still very early stage of the development

I didn't know anything about anybody losing time-locked coins in a hard fork. That's not cool. Powerfully not cool!  >:(


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: JorgeStolfi on March 17, 2016, 08:57:16 AM
A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.
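As an illustration of the first point above, a minimal sketch of a "normalized" txid that simply skips the scriptSigs when hashing, so a malleated signature no longer changes the id (Python; the serialization below is deliberately simplified -- input/output counts and proper varints are omitted -- and is not the real Bitcoin wire format):

Code:
import hashlib

def dsha256(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def serialize(tx, strip_sigs=False):
    out = tx["version"].to_bytes(4, "little")
    for vin in tx["vin"]:
        out += bytes.fromhex(vin["txid"])[::-1]
        out += vin["vout"].to_bytes(4, "little")
        script = b"" if strip_sigs else vin["scriptSig"]
        out += len(script).to_bytes(1, "little") + script
        out += vin["sequence"].to_bytes(4, "little")
    for vout in tx["vout"]:
        out += vout["value"].to_bytes(8, "little")
        out += len(vout["scriptPubKey"]).to_bytes(1, "little") + vout["scriptPubKey"]
    out += tx["locktime"].to_bytes(4, "little")
    return out

def txid(tx):
    return dsha256(serialize(tx))[::-1].hex()

def normalized_txid(tx):
    return dsha256(serialize(tx, strip_sigs=True))[::-1].hex()

tx = {
    "version": 1,
    "vin": [{"txid": "aa" * 32, "vout": 0,
             "scriptSig": b"\x01\x02\x03", "sequence": 0xffffffff}],
    "vout": [{"value": 50000, "scriptPubKey": b"\x51"}],
    "locktime": 0,
}

print("txid :", txid(tx))
print("ntxid:", normalized_txid(tx))
tx["vin"][0]["scriptSig"] = b"\x01\x02\x04"   # a third party malleates the sig
print("txid :", txid(tx))              # changes
print("ntxid:", normalized_txid(tx))   # unchanged by the malleation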

Quote
Its size sets a hard lower bound on the amount of resources to run a node. The fact that the size limit doesn't reflect the true cost has been a long term concern, and it's one of the biggest issues raised with respect to blocksize limits

Biggest issue of this week, perhaps?  

Surely you know that the non-mining relay nodes invalidate the few security guarantees that the protocol can offer. Simple clients should not connect to them, but to miners (or relay nodes that are known to be run by miners). It makes no sense to twist the protocol inside out in order to meet CONJECTURAL needs of those nodes.

The only cost that really matters is the marginal cost for a miner to add another transaction to his candidate block.  That is the cost that the transaction fees have to cover.  The magnitude of that cost is one of the great mysteries of bitcoin, extensively discussed but never estimated. But it seems to be very small (at least for competent miners) and is probably dependent only on the total size of the transaction.  But anyway the developers have no business worrying about that cost: the fees are payment for the miners, it should be the miners who decide how much to charge, and for what.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: ChronosCrypto on March 17, 2016, 02:59:10 PM
Thanks for your answers, gmax. I understand segwit much better now, in areas like the backward-compatibility in the soft-fork scenario, and the changes to the "base" block.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: BlindMayorBitcorn on March 17, 2016, 03:22:48 PM
Are Core developers against a hard fork because it will somehow confiscate time-locked coins? How many people aside from Blockstream employees have time-locked coins now? (I know this is off-topic, might need a new thread.)


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: achow101 on March 17, 2016, 03:45:07 PM
Are Core developers against a hard fork because it will somehow confiscate time-locked coins? How many people aside from Blockstream employees have time-locked coins now? (I know this is off-topic, might need a new thread.)
No, they were against hard forking for changing the way that a txid was calculated.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: sickpig on March 17, 2016, 05:12:19 PM
A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

Quote
It's size sets a hard lower bound on the amount of resources to run a node. The fact that the size limit doesn't reflect the true cost has been a long term concern, and it's one of the biggest issues raised with respect to blocksize limits

Biggest issue of this week, perhaps?  

Surely you know that the non-mining relay nodes invalidate the few security guarantees that the protocol can offer. Simple clients should not connect to them, but to miners (or relay nodes that are know to be run by miners). It makes no sense to twist the protocol inside out in order to to meet CONJECTURAL needs of those nodes.

The only cost that really matters is the marginal cost for a miner to add another transaction to his candidate block.  That is the cost that the transaction fees have to cover.  The magnitude of that cost is one of the great mysteries of bitcoin, extensively discussed but never estimated. But it seems to be very small (at least for competent miners) and is probably dependent only on the total size of the transaction.  But anyway the developers have no business worrying about that cost: the fees are payment for the miners, it should be the miners who decide how much to charge, and for what.

according to Adam Back, the SegWit discount applied to signature data will fix an incentive bug in bitcoin; see:

https://www.reddit.com/r/btc/comments/4aka3f/over_3000_classic_nodes/d11atxc


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: molecular on March 17, 2016, 05:35:16 PM
A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

(Thanks for answering this one question about malleability fix I had. So it can simply be done by omitting sigs from the txid hash input, cool. If not, please let me know)

It seems to me many people have a problem with segwit because of the "hackish" softfork and/or because of the change of the economic model (2 classes of blockspace).

If we did the points listed by JorgeStolfi above as a hardfork, would that be an option for the proponents of segwit? Seems to me such a hardfork could gain wide consensus, maybe wide enough to be considered safe by everyone? It would certainly appeal to the people who just want a simple blocksize increase and it should (I don't know, though) also satisfy the people who want segwit now.

What would be missing compared to segwit? fraud proofs? change of economic model?



Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: achow101 on March 17, 2016, 05:48:34 PM
A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

(Thanks for answering this one question about malleability fix I had. So it can simply be done by omitting sigs from the txid hash input, cool. If not, please let me know)

It seems to me many people have a problem with segwit because of the "hackish" softfork and/or because of the change of the economic model (2 classes of blockspace).

If we did the points listed by JorgeStolfi above as a hardfork, would that be an option for the proponents of segwit? Seems to me such a hardfork could gain wide consensus, maybe wide enough to be considered safe by everyone? It would certainly appeal to the people who just want a simple blocksize increase and it should (I don't know, though) also satisfy the people who want segwit now.

What would be missing compared to segwit? fraud proofs? change of economic model?


Maybe you should read gmaxwell's posts about doing a hard fork to change that calculation. They are a few posts above this in this thread.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: Bergmann_Christoph on March 17, 2016, 06:23:11 PM
A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

Quote
It's size sets a hard lower bound on the amount of resources to run a node. The fact that the size limit doesn't reflect the true cost has been a long term concern, and it's one of the biggest issues raised with respect to blocksize limits

Biggest issue of this week, perhaps?  

Surely you know that the non-mining relay nodes invalidate the few security guarantees that the protocol can offer. Simple clients should not connect to them, but to miners (or relay nodes that are know to be run by miners). It makes no sense to twist the protocol inside out in order to to meet CONJECTURAL needs of those nodes.

The only cost that really matters is the marginal cost for a miner to add another transaction to his candidate block.  That is the cost that the transaction fees have to cover.  The magnitude of that cost is one of the great mysteries of bitcoin, extensively discussed but never estimated. But it seems to be very small (at least for competent miners) and is probably dependent only on the total size of the transaction.  But anyway the developers have no business worrying about that cost: the fees are payment for the miners, it should be the miners who decide how much to charge, and for what.

according to Adam Back SegWit discount applied to signature data will fix an incentive bug in bitcoin, see:

https://www.reddit.com/r/btc/comments/4aka3f/over_3000_classic_nodes/d11atxc

Funny, r/btc gave him a symbol as president of blockstream.

Not funny how he plays with words.

He is asked

Quote
Next you'll claim "Classic isn't doing anything to combat UTXO bloat but Blockstream is!"

and he answers

Quote
well Bitcoin developers are yes, via the mechanism I described. Classic isnt doing that ...

Just a notice, slightly offtopic --


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: rizzlarolla on March 17, 2016, 08:49:02 PM
A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

(Thanks for answering this one question about malleability fix I had. So it can simply be done by omitting sigs from the txid hash input, cool. If not, please let me know)

It seems to me many people have a problem with segwit because of the "hackish" softfork and/or because of the change of the economic model (2 classes of blockspace).

If we did the points listed by JorgeStolfi above as a hardfork, would that be an option for the proponents of segwit? Seems to me such a hardfork could gain wide consensus, maybe wide enough to be considered safe by everyone? It would certainly appeal to the people who just want a simple blocksize increase and it should (I don't know, though) also satisfy the people who want segwit now.

What would be missing compared to segwit? fraud proofs? change of economic model?



Yeah, both hackish (although possibly beautiful code) and the economic model, if I understand that correctly.

I don't think segwit could ever achieve HF consensus, my opinion. However if a winning hard fork was achieved, I would respect that.
A soft fork is not right here, and could well be considered an attack.

Why not 2mb first, which is on every partisan roadmap. Then segwit maybe. maybe not.
(I am assuming 2mb is more easily coded than segwit, and not as complicated as segwit as was stated earlier. Although the ease of coding is only a small part of the reason segwit should not be introduced yet. certainly not introduced by core. a SF attack on nodes.)


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: ChronosCrypto on March 17, 2016, 08:58:53 PM
(I am assuming 2mb is more easily coded than segwit
You're right about that. It's so much easier, that it's already been finished for some time now, on the second-most-popular bitcoin client. See http://bitcoinclassic.com


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: kn_b_y on March 17, 2016, 09:13:29 PM
A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

(Thanks for answering this one question about malleability fix I had. So it can simply be done by omitting sigs from the txid hash input, cool. If not, please let me know)

It seems to me many people have a problem with segwit because of the "hackish" softfork and/or because of the change of the economic model (2 classes of blockspace).

If we did the points listed by JorgeStolfi above as a hardfork, would that be an option for the proponents of segwit? Seems to me such a hardfork could gain wide consensus, maybe wide enough to be considered safe by everyone? It would certainly appeal to the people who just want a simple blocksize increase and it should (I don't know, though) also satisfy the people who want segwit now.

What would be missing compared to segwit? fraud proofs? change of economic model?


Maybe you should read gmaxwell's posts about doing a hard fork to change that calculation. They are a few posts above this in this thread.
Yes. Well said.

It's unsettling how often information that has already been provided is left ignored or unreferenced.

More excusable for the one-off contributors, I realise, but not for those whose apparent strong interest in the issues leads them to post again and again.

Sometimes it's been like wading through treacle, but I'm glad Gregory Maxwell contributed (I don't know anything about bitcoin developers, and had never heard of him until today), and what he wrote about the different methods of fixing transaction malleability, and their implications, particularly made an impression.

I'd recommend reading his posts in full. Advisory notice - he does lose his patience occasionally! And for balance, read those he references and those who reference him. And if that prevents just one unnecessary post, I'll have done my…


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: achow101 on March 17, 2016, 09:14:18 PM
jl777, this link: https://bitcoincore.org/en/segwit_wallet_dev/ might be useful to you for help with implementing segwit.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: molecular on March 17, 2016, 09:58:56 PM
A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

(Thanks for answering this one question about malleability fix I had. So it can simply be done by omitting sigs from the txid hash input, cool. If not, please let me know)

It seems to me many people have a problem with segwit because of the "hackish" softfork and/or because of the change of the economic model (2 classes of blockspace).

If we did the points listed by JorgeStolfi above as a hardfork, would that be an option for the proponents of segwit? Seems to me such a hardfork could gain wide consensus, maybe wide enough to be considered safe by everyone? It would certainly appeal to the people who just want a simple blocksize increase and it should (I don't know, though) also satisfy the people who want segwit now.

What would be missing compared to segwit? fraud proofs? change of economic model?



Yeah, both hackish (although possibly beautiful code) and the economic model, if I understand that correctly.

I don't think segwit could ever achieve HF consensus, my opinion. However if a winning hard fork was achieved, I would respect that.
A soft fork is not right here, and could well be considered an attack.

Why not 2mb first, which is on every partisan roadmap. Then segwit maybe. maybe not.
(I am assuming 2mb is more easily coded than segwit, and not as complicated as segwit as was stated earlier. Although the ease of coding is only a small part of the reason segwit should not be introduced yet. certainly not introduced by core. a SF attack on nodes.)

I didn't mean "do segwit as a hardfork", I meant do a hf that achieves the same things (more capacity, malleability fix, bandwidth savings, prune signatures from storage,...) just more -- let's say -- directly. A package with something for everybody but nothing too bad for anybody to swallow. A compromise.

That's why I was asking whether the "change of economic model" (which would be missing from that package) was something core devs couldn't live without. So far I haven't seen this desirability in itself argued; it seemed to me this was understood by everyone as just a side-effect of soft-forking higher capacity.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: gmaxwell on March 18, 2016, 09:21:30 AM
So far I haven't seen this desirability in itself argued,
Please read the fine thread here.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: gmaxwell on March 18, 2016, 09:25:16 AM
* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  
That _is_ segregation of the signatures up to completely non-normative ordering of data transferred. Segwit could just as well order the data into the same place in the serialized transactions when sending them, but it's cleaner not to do so.

Quote
* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  
This would be no greater and it would have _no_ security at all. The clients would be _utterly_ beholden to third-party, randomly selected servers to tell them correct information, and they would have no way to verify it.

I normally don't expect people advocating Bitcoin Classic to put security first, but completely tossing it out is a new turn. I guess it's consistent with the latest validation removal changes in classic.

Quote
* Pruning signature data from old transactions can be done the same way.
Has been for years.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: JorgeStolfi on March 18, 2016, 10:32:23 AM
* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  
That _is_ segregation of the signatures up to completely non-normative ordering of data transferred. Segwit could just as well order the data into the same place in the serialized transactions when sending them, but its cleaner to not do so.

On the contrary, rearranging the data in transactions and blocks is an unnecessary and ugly hack to get that effect.  It means hundreds of lines of new code scattered all over the place, in the Core source and wallets, rather than a few lines in one library routine that everybody else can copy.

Quote
Quote
* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  
This would be no greater and it would have _no_ security at all. The clients would be _utterly_ beholden to the third party randomly selected servers to tell them correct information

If a client fetches a block without signatures, with SegWit or not, he cannot check whether the transactions contained in it were properly signed.  With SegWit, he can check the hash of the non-signature data; but if he is an old client, he will not even be aware that he is not checking the signatures.  

With the special call solution, if the client wants to validate a particular block, he asks for it in full, and then he can validate everything (except the parent link), as now.  The extra call can be implemented with no fork, so clients who do not upgrade, or do not wish to use that special call, will still be able to verify everything as they do now.

In other words, soft-forked SegWit *forces* old clients to fetch only part of the data, and limits them to verify only that part, *without them being aware of it*.  The special call solution lets clients decide case by case whether they want to verify a block or trust the node (that they are already trusting to some extent); and it does not change the behavior or security of existing client software.

The savings would be greater because clients who choose to use this call for old blocks would fetch less data, whereas with soft-forked SegWit everybody would have to fetch old blocks in full, signatures included.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: kn_b_y on March 18, 2016, 11:24:56 AM
* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  
That _is_ segregation of the signatures up to completely non-normative ordering of data transferred. Segwit could just as well order the data into the same place in the serialized transactions when sending them, but its cleaner to not do so.

On the contrary, rearranging the data in transactions and blocks is an unnecessary and ugly hack to get that effect.  It means hundreds of lines of new code scattered all over the place, in the Core source and wallets, rather than a few lines in one library routine that everybody else can copy.


I think I can see what you are arguing against, but not what for.

Are you suggesting fixing malleability by storing transactions as they are now and omitting signatures from the txid calculation? In effect, a hard fork.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: JorgeStolfi on March 18, 2016, 11:36:24 AM
Are you suggesting fixing malleability by storing transactions as they are now and omitting signatures from the txid calculation? In effect, a hard fork.

Yes.  That fix should be done by a hard fork: because the code will be much cleaner, and because hard forks are safer than soft forks. (More precisely: ensuring that old versions are inoperable after 3-4 releases is safer than deploying changes to the protocol without alerting users, and letting them discover later that they must upgrade to understand why their transactions are not confirming anymore.)


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: kn_b_y on March 18, 2016, 12:28:11 PM
Are you suggesting fixing malleability by storing transactions as they are now and omitting signatures from the txid calculation? In effect, a hard fork.

Yes.  That fix should be done by a hard fork: because the code will be much cleaner, and because hard forks are safer than soft forks. (More precisely: ensuring that old versions are inoperable after 3-4 releases is safer than deploying changes to the protcol without alerting users, and letting them discover later that they must upgrade to understand why their transactions are not confirming anymore.)

I've seen enough in this thread to convince me that that approach would make deployment a disaster for bitcoin. People would lose funds.

One example gmaxwell gives: all presigned nlocktime transactions would be broken. For users keeping these in storage they may well represent a lot of security. Gone... the moment a new version of the software no longer sees the transaction as being valid.

As far as I see it, if malleability can be fixed in such a way that older versions of the software still see immalleable transactions as valid transactions then, well…  do it.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: TierNolan on March 18, 2016, 01:16:54 PM
One example gmaxwell gives: all presigned nlocktime transactions would be broken. For users keeping these in storage they may well represent a lot of security. Gone... the moment a new version of the software no longer sees the transaction as being valid.

You could have a rule that you can refer to inputs using either txid or normalized-txid.  That maintains backwards compatibility.  The problem is that you need twice the lookup table size.  You need to store both the txid to transaction lookup and the n-txid to transaction lookup.
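A minimal sketch of that dual lookup (Python; the data structures are invented for illustration and say nothing about how any real node stores its index):

Code:
# Illustrative dual index: outputs can be found by either the legacy txid
# or the normalized txid (n-txid), so old pre-signed transactions keep
# working while new ones refer to the malleability-proof id. The cost is
# the second lookup table.

class UtxoIndex:
    def __init__(self):
        self.by_txid = {}    # txid   -> tx
        self.by_ntxid = {}   # n-txid -> tx

    def add(self, tx, txid, ntxid):
        self.by_txid[txid] = tx
        self.by_ntxid[ntxid] = tx

    def lookup(self, ref):
        # Accept either identifier when resolving an input reference.
        return self.by_txid.get(ref) or self.by_ntxid.get(ref)

index = UtxoIndex()
index.add({"outputs": ["..."]}, txid="id-with-sigs", ntxid="id-without-sigs")
print(index.lookup("id-with-sigs") is index.lookup("id-without-sigs"))  # True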

The rule could be changed so that transactions starting at version 2 use the n-txid and version 1 transactions use the txid.  This means that each transaction only needs 1 lookup entry, depending on its version number.  If version 1 transactions cannot spend outputs from version 2 transactions, then the network will eventually update over time.  It is still a hard fork though.

Segregated witness has additional benefits with regards to data organization.  The non-signed transactions are committed separately from the signatures.  Script versioning means that it is easier to change the script language. 

It looks like they have added improvements to how transaction signing works.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: JorgeStolfi on March 18, 2016, 03:29:36 PM
One example gmaxwell gives: all presigned nlocktime transactions would be broken. For users keeping these in storage they may well represent a lot of security. Gone... the moment a new version of the software no longer sees the transaction as being valid.

As far as I see it, if malleability can be fixed in such a way that older versions of the software still see immalleable transactions as valid transactions then, well…  do it.

In a soft fork, by definition, the new version of the software can reject transactions that the previous version considered OK.

For example, IIUC the soft-forked SegWit proposal implies redefining an op code that previously meant "no-op" to mean "check the signatures in the extension record" or something like that.  Thus, a transaction that used that opcode (for some bizarre reason of its own, possibly fraudulent) could be valid before SegWit was enabled, but become invalid after it.

That may be a good argument to phase out nLocktime in favor of CLTV.  Once a transaction is in the blockchain, its position in it defines the rules by which it should be validated, which allows proper handling of old time locks.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: JorgeStolfi on March 18, 2016, 03:33:00 PM
The non-signed transactions are committed separately from the signatures. 

What do you mean? They are committed in the same block, at the same time, right?

Quote
Script versioning means that it is easier to change the script language. 

The position of a transaction in the blockchain should define which version of the rules is applicable to it (in particular,
which version of the scripting language it uses).


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: CIYAM on March 18, 2016, 03:49:24 PM
That may be a good argument to phase out nLocktime in favor of CLTV.

Huh?

You do realise that CLTV actually checks the nLocktime (hence its name) so if you got rid of nLocktime then it wouldn't do anything at all?

Also, scripts can exist "outside the blockchain" (signed but not broadcast, which was the very point being made about nLocktime), so you can't rely upon the block at which they appear to determine the rules at all.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: TierNolan on March 18, 2016, 05:17:54 PM
What do you mean? They are committed in the same block, at the same time, right?

Yes, but with a separate merkle tree with the root in the coinbase.
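A minimal sketch of that separate tree: a Bitcoin-style merkle root over the wtxids, with the coinbase's slot zeroed. (Python, illustrative; BIP 141 additionally hashes this root together with a witness reserved value before committing it in a coinbase output, a step omitted here.)

Code:
import hashlib

def dsha256(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(hashes):
    layer = list(hashes)
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])          # duplicate last, as Bitcoin does
        layer = [dsha256(layer[i] + layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]

wtxids = [b"\x00" * 32,                      # coinbase wtxid is defined as all zeros
          dsha256(b"tx1 with witness"),      # stand-ins for real wtxids
          dsha256(b"tx2 with witness")]
print(merkle_root(wtxids).hex())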


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: rizzlarolla on March 18, 2016, 05:45:30 PM
A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

(Thanks for answering this one question about malleability fix I had. So it can simply be done by omitting sigs from the txid hash input, cool. If not, please let me know)

It seems to me many people have a problem with segwit because of the "hackish" softfork and/or because of the change of the economic model (2 classes of blockspace).

If we did the points listed by JorgeStolfi above as a hardfork, would that be an option for the proponents of segwit? Seems to me such a hardfork could gain wide consensus, maybe wide enough to be considered safe by everyone? It would certainly appeal to the people who just want a simple blocksize increase and it should (I don't know, though) also satisfy the people who want segwit now.

What would be missing compared to segwit? fraud proofs? change of economic model?



Yeah, both hackish (although possibly beautiful code) and the economic model, if I understand that correctly.

I don't think segwit could ever achieve HF consensus, my opinion. However if a winning hard fork was achieved, I would respect that.
A soft fork is not right here, and could well be considered an attack.

Why not 2mb first, which is on every partisan roadmap. Then segwit maybe. maybe not.
(I am assuming 2mb is more easily coded than segwit, and not as complicated as segwit as was stated earlier. Although the ease of coding is only a small part of the reason segwit should not be introduced yet. certainly not introduced by core. a SF attack on nodes.)

I didn't mean "do segwit as a hardfork", I meant do a hf that achieves the same things (more capacity, malleability fix, bandwidth savings, prune signatures from storage,...) just more -- let's say -- directly. A package with something for everybody but nothing too bad for anybody to swallow. A compromise.

That's why I was asking wether the "change of economic model" (which would be missing from that package) was something core devs couldn't live without. So far I haven't seen this desirability in itself argued, seemed to me this was understood by everyone as just a side-effect of soft-forking higher capacity.


Ok a compromise,

(but would that mean effectively starting coding from scratch? Timewise, could that happen now, sensible as it may sound, when expectations of some sort of block size increase soon have been stoked?)


Either way, why should segwit SF be abandoned?
Indeed, because of the "change of economic model". But particularly through a SF.
segwit SF is an attack on bitcoin. More so than a segwit HF.

knightdk says "It was originally proposed as a hard fork, but someone (luke-jr I think) pointed out that it could be done as a soft fork."

luke jr found a technical fix to enable the possibility of a segwit SF.

Also "Soft forks are preferred because they are backwards compatible. In this case, the backwards compatibility is that if you run non-upgraded software, you can continue as you were and have no ill effect. You just won't be able to take advantage of the new functionality provided by segwit."

(functionality, i knew i'd seen a desire argued somewhere)
If he didn't upgrade he won't be able to verify segwit tx's.
Trust is now introduced to his blockchain.
That is an ill effect, and would "normally" require HF.
90% could be against segwit, yet they are the ones excluded from a fully verified blockchain.
And segwit will always be on the blockchain.
That goes against all the principles of bitcoin I thought I knew.


And "if this were done as a hard fork, then everyone would be required to upgrade in order to deploy segwit and then that would essentially force everyone to use segwit."

He cannot force everyone to use segwit through HF
(any more than XT could force anyone to upgrade and adopt)
Everyone would be required to upgrade, or not if they didn't want to.
If segwit was not wanted, he would lose the fork and segwit would be gone.
He cannot force the majority to do anything.


If/(when) segwit SF is abandoned/(postponed), a 2mb increase, in its simplest safe form, should be implemented. (no partisan additions)
Is that compromise?

That will buy time for a proper re-assessment for all in the Bitcoin space.





Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: molecular on March 18, 2016, 07:41:15 PM
If/(when) segwit SF is abandoned/(postponed), a 2mb increase, in its simplest safe form, should be implemented. (no partisan additions)
Is that compromise?

That will buy time for a proper re-assessment for all in the Bitcoin space.

Abandon/postpone segwit sf? (How) do you think that could happen?


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: rizzlarolla on March 18, 2016, 09:06:09 PM
If/(when) segwit SF is abandoned/(postponed), a 2mb increase, in its simplest safe form, should be implemented. (no partisan additions)
Is that compromise?

That will buy time for a proper re-assessment for all in the Bitcoin space.

Abandon/postpone segwit sf? (How) do you think that could happen?


It could happen. (I particularly speak of a soft fork; segwit HF has different connotations)

How? oh, erm..

Core could pro-actively declare segwit a work in progress undergoing rigorous testing.
Due to this extended development time, Core could propose a 2mb hard limit be implemented first, as 2mb is also on their to do list? (nothing else. no other general tweeks)

Or maybe the test net will crash the night before launch, forcing segwit release to be abandoned at the last minute.

Or possibly users will become more aware of the implications of segwit SF, or fear that segwit is not fully tested yet or needed, and rise up.

Could even be that segwit coding naturally runs into bugs, leading to delays, and again users lose patience and fork to classic.


So, I think it could happen in many ways.
How will possibly depend on core, ball's in their court atm. Or on users if they get restless, or the interaction between the two, or a tech failure.

However it should be abandoned as a SF.
"90% could be against segwit, yet THEY are the ones excluded from a fully verified blockchain." That is an attack.
(if 90% want segwit, then prove it. HF)
Why is core attacking bitcoin?





Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: iCEBREAKER on March 18, 2016, 09:36:42 PM
Core could propose a 2mb hard limit be implemented first, as 2mb is also on their to do list? (nothing else. no other general tweeks)

Absent the Schnorr sigs enabled by segwit, 2mb blocks would require "other general tweeks" in the form of restricting old style quadratic scaling sigs to some magic number maximum.

Even Gavin concluded that was a Bad Idea, because otherwise we get obnoxiously constructed troll blocks that take a minute or longer to process.  And that means more empty blocks, because miners aren't going to stop mining while the troll blocks complete validation sigops.

Please stop suggesting and advocating nonsense and misinformation about "The One Simple Trick To Scale Bitcoin That Core Doesn't Want You To Know."


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: JorgeStolfi on March 18, 2016, 11:20:54 PM
Also "Soft forks are preferred because they are backwards compatible. In this case, the backwards compatibility is that if you run non-upgraded software, you can continue as you were and have no ill effect. You just won't be able to take advantage of the new functionality provided by segwit."

That is not quite true.  After a soft fork, old clients may issue transactions that are invalid by the new rules, and not understand why they are never confirmed.  A soft fork can also introduce new ways of storing transactions in the blockchain, implicitly or explicitly, that are invisible to old clients, as in this example (https://www.reddit.com/r/bitcoin_uncensored/comments/43w24e/raising_the_21_million_btc_limit_with_a_soft_fork/).   In this case, the old clients will not see coins that new clients send them. 

Quote
90% could be against segwit, yet they are the ones excluded from a fully verified blockchain.

Yes. 

More precisely, if 51% of the miners decide to do a soft fork, the soft fork happens -- even if no one was told about it in advance -- and all other miners and clients have to accept it. 

Quote
"if this were done as a hard fork, then everyone would be required to upgrade in order to deploy segwit and then that would essentially force everyone to use segwit."

Not exactly.

With a hard fork, all users would have to be warned in advance, with explanation of what the change is and why it is a good idea. 

If there is not enough support from the miners, the hard fork does not happen and nothing changes.

If there is enough support from the miners to execute the hard fork, the users would have to be warned again to upgrade to a version that is at most K releases old by date D.  Hopefully, most everybody will convert in time, and then the few laggards will be unable to use their coins until they upgrade too. 

However, if a substantial minority *of the miners* remains absolutely opposed to the changes, the coin will split into new-rule and old-rule coins.  Each user will see his own coins replicated in both branches, and will be able to use both  independently.  Is freedom of choice such a bad thing?


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: JorgeStolfi on March 18, 2016, 11:45:15 PM
Absent the Schnorr sigs enabled by segwit, 2mb blocks would require "other general tweeks" in the form of restricting old style quadratic scaling sigs to some magic number maximum.

The initial deployment of SegWit will not enable Schnorr signatures, will it? Won't they require a hard fork anyway?

Even with Schnorr signatures, the miners would still have to accept old-style multisigs produced by old clients, right?  Then an attacker could still generate those hard-to-validate blocks, no?

As a temporary fix, a soft fork can be deployed limiting the max number of signatures.  Even a low limit like 100 is no real restriction, only a small annoyance for the few users who would want to use more.  It would be a good use of an "arbitrary numerical limit", like the 1 MB limit was when it was introduced. 
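A minimal sketch of that kind of limit (Python; the opcode counting here is deliberately naive -- real sigop accounting weights multisig, P2SH, and push data differently -- it only shows the shape of such a rule):

Code:
# Toy check: reject a transaction whose scripts contain more than MAX_SIGS
# signature-checking opcodes.

OP_CHECKSIG = 0xac
OP_CHECKSIGVERIFY = 0xad
OP_CHECKMULTISIG = 0xae
OP_CHECKMULTISIGVERIFY = 0xaf
SIG_OPCODES = {OP_CHECKSIG, OP_CHECKSIGVERIFY,
               OP_CHECKMULTISIG, OP_CHECKMULTISIGVERIFY}

MAX_SIGS = 100

def count_sig_ops(script_bytes):
    # Naive count: treats every matching byte as an opcode (real parsing
    # would skip over push data).
    return sum(1 for b in script_bytes if b in SIG_OPCODES)

def acceptable(tx_scripts, limit=MAX_SIGS):
    return sum(count_sig_ops(s) for s in tx_scripts) <= limit

print(acceptable([bytes([OP_CHECKSIG]) * 3]))      # True
print(acceptable([bytes([OP_CHECKSIG]) * 101]))    # False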

But there is no logical reason why signature validation should take quadratic time.  That is a bug in the protocol, that should be fixed by changing the algorithm -- with a hard fork if need be.

(By the way,  [for a couple of hours today](https://statoshi.info/dashboard/db/transactions?from=1458258715516&to=1458259562505) there was an apparent "stress test" where each transaction was 10 kB long (rather than the usual 0.5 kB).  Was the "tester" trying to generate such troll blocks?)


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: iCEBREAKER on March 18, 2016, 11:54:10 PM
Absent the Schnorr sigs enabled by segwit, 2mb blocks would require "other general tweeks" in the form of restricting old style quadratic scaling sigs to some magic number maximum.

The initial depoyment of SegWit will not enable Schnorr signatures, will it? Won't they require a hard fork anyway?

Even with Schnorr signatures, the miners would still have to accept old-style multisigs produced by old clients, right?  Then an attacker could still generate those hard-to-validate blocks, no?

As a temporary fix, a soft fork can be deployed limiting the max number of signatures.  Even a low limit like 100 is no restriction, only a small annoyance for the few users who would want to use more   It woudl be a good use of an "arbitrary numerical limit", like the 1 MB limit was when it was introduced.  

But there is no logical reason why signature validation should take quadratic time.  That is a bug in the protocol, that should be fixed by changing the algorithm -- with a hard fork if need be.

(By the way,  [for a couple of hours today](https://statoshi.info/dashboard/db/transactions?from=1458258715516&to=1458259562505) there was an apparent "stress test" where each transaction was 10 kB long (rather than the usual 0.5 kB).  Was the "tester" trying to generate such troll blocks?)

Good questions.  Let's try reading the fantastic manual:

https://bitcoincore.org/en/2016/01/26/segwit-benefits/#linear-scaling-of-sighash-operations

Quote
Linear scaling of sighash operations

A major problem with simple approaches to increasing the Bitcoin blocksize is that for certain transactions, signature-hashing scales quadratically rather than linearly.

Linear versus quadratic

In essence, doubling the size of a transaction can double both the number of signature operations, and the amount of data that has to be hashed for each of those signatures to be verified. This has been seen in the wild, where an individual block required 25 seconds to validate, and maliciously designed transactions could take over 3 minutes.

Segwit resolves this by changing the calculation of the transaction hash for signatures so that each byte of a transaction only needs to be hashed at most twice. This provides the same functionality more efficiently, so that large transactions can still be generated without running into problems due to signature hashing, even if they are generated maliciously or much larger blocks (and therefore larger transactions) are supported.
Who benefits?

Removing the quadratic scaling of hashed data for verifying signatures makes increasing the block size safer. Doing that without also limiting transaction sizes allows Bitcoin to continue to support payments that go to or come from large groups, such as payments of mining rewards or crowdfunding services.

The modified hash only applies to signature operations initiated from witness data, so signature operations from the base block will continue to require lower limits.
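
(To make the linear-versus-quadratic point above concrete, here is a toy Python sketch -- not Bitcoin's real serialization or the exact BIP143 preimage, just illustrative assumptions -- showing why legacy signature hashing grows roughly as N^2 while a segwit-style scheme grows roughly as N:)

Code:
def legacy_sighash_bytes(num_inputs, bytes_per_input=180):
    # Toy model of pre-segwit hashing: each input's signature hash
    # covers (roughly) the whole serialized transaction, so total
    # hashed data ~ num_inputs * tx_size, i.e. quadratic in tx size.
    # The 180-byte figure is an illustrative guess, not an exact size.
    tx_size = num_inputs * bytes_per_input
    return num_inputs * tx_size

def segwit_sighash_bytes(num_inputs, bytes_per_input=180):
    # Toy model of the BIP143-style scheme: shared transaction data is
    # hashed once (via precomputed/reusable hashes), plus a small fixed
    # per-input chunk, so total hashed data grows roughly linearly.
    tx_size = num_inputs * bytes_per_input
    return tx_size + num_inputs * 64

for n in (100, 1000, 5000):
    print(n, legacy_sighash_bytes(n), segwit_sighash_bytes(n))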


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: jl777 on March 19, 2016, 12:08:27 AM
The modified hash only applies to signature operations initiated from witness data, so signature operations from the base block will continue to require lower limits.

The way it is worded makes it sound fantastic...

However, I couldn't find info about the witness data's immunity to these attacks. Are you saying that signature attacks are not possible inside the witness data?

Clearly, if signatures are moved from location A to location B, then it follows that signature attacks are not possible in location A. OK, that is good, but what about location B?

Are sigs in the witness data immune from malicious tx with lots of sigs? It is strange this isn't specifically addressed. Maybe it's just me and my low reading comprehension, but all the text on that segwit marketing page seems quite one-sided and of the form:

###
things are removed from the base block so now there are no problems with the base block, without addressing whether the problems that used to be in the base block are actually solved, or just moved into the witness data.
###

We could easily say SPV solves all signature attack problems: just make it so your node doesn't do much at all and it avoids all these pesky problems. But the important issue to many people is the effect on full nodes. And by full, I mean a node that doesn't prune, relays, validates signatures, and enables other nodes to bootstrap.

Without that, doesn't bitcoin's security model change to PoS level? I know how much you hate PoS.

James


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: trashman43 on March 19, 2016, 12:52:50 AM
We could easily say SPV solves all signature attack problems: just make it so your node doesn't do much at all and it avoids all these pesky problems. But the important issue to many people is the effect on full nodes. And by full, I mean a node that doesn't prune, relays, validates signatures, and enables other nodes to bootstrap.

Without that, doesn't bitcoin's security model change to PoS level? I know how much you hate PoS.

James

you paint the situation as if the binary options are 1. fully validating nodes (verify everything) and 2. thin clients (verify nothing). under that framing, if we increase bandwidth pressure on nodes by increasing throughput capacity, fully validating nodes can only switch to verifying nothing.

a much better solution is one that allows fully validating nodes that would otherwise be forced off the network to partially validate -- whether by relaying blocks only, validating non-segwit tx, or pruning data that is already under significant proof of work and therefore very likely secure. just because a pruned node can't bootstrap a new node doesn't mean it doesn't provide great value to the network.

are you suggesting that it would be better to simply force all these nodes off the network and into using trust-based protocols? because when you double bandwidth requirements and leave full nodes no other options, that's what happens.

there is a new term for this: "tradeoff denialism" :P

one could claim that increasing throughput doesn't mean pressuring nodes to shut down, but you'd be living in denial, as throughput is directly related to bandwidth requirements.


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: jl777 on March 19, 2016, 01:52:06 AM
We could easily say SPV solves all signature attack problems: just make it so your node doesn't do much at all and it avoids all these pesky problems. But the important issue to many people is the effect on full nodes. And by full, I mean a node that doesn't prune, relays, validates signatures, and enables other nodes to bootstrap.

Without that, doesn't bitcoin's security model change to PoS level? I know how much you hate PoS.

James

you paint the situation as if the binary options are 1. fully validating nodes (verify everything) and 2. thin clients (verify nothing). under that framing, if we increase bandwidth pressure on nodes by increasing throughput capacity, fully validating nodes can only switch to verifying nothing.

a much better solution is one that allows fully validating nodes that would otherwise be forced off the network to partially validate -- whether by relaying blocks only, validating non-segwit tx, or pruning data that is already under significant proof of work and therefore very likely secure. just because a pruned node can't bootstrap a new node doesn't mean it doesn't provide great value to the network.

are you suggesting that it would be better to simply force all these nodes off the network and into using trust-based protocols? because when you double bandwidth requirements and leave full nodes no other options, that's what happens.

there is a new term for this: "tradeoff denialism" :P

one could claim that increasing throughput doesn't mean pressuring nodes to shut down, but you'd be living in denial, as throughput is directly related to bandwidth requirements.
I am not saying that at all, but the question is whether it is worth doubling the complexity of the code that processes the blockchain in order to have the intermediate "validates non-segwit tx" nodes. That appears to be the use case that is created in this context.

So I ask you, is it worth doubling the amount of code dealing with signing and wtxid calculations to be able to have nodes that can't see a new class of segwit tx? In fact, what good is that if they can't see these tx? That is my point.

pruning nodes don't have a problem with HDD space now, so that is not an issue.
validating nodes are still going to have to validate the witness data, unless they don't upgrade and can't even see it.

it just seems like a lot of work to get a small benefit. Now if segwit came with a new signature scheme that reduced the space required by 30%, then it starts becoming a tradeoff decision that can be made.

Right now the tradeoff is a lot of extra complexity for slightly less tx capacity than a 2MB HF, and I think a bit more CPU load too, as the wtxid needs to be calculated.

I guess a non-malleable txid is a good thing, but not including the signature in the txid calculation would achieve that too. The same segwit softfork trick can probably be used to surgically implement non-malleable signatures.
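
(For context on what "not including the signature in the txid calculation" looks like under segwit: per BIP141, the txid stays the hash of the transaction without witness data, and a separate wtxid covers the witness. A minimal Python sketch with stand-in byte blobs, not the real BIP144 serialization:)

Code:
import hashlib

def double_sha256(data):
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Stand-in transaction: 'core' is the non-witness serialization,
# 'witness' is the segregated signature data (hypothetical blobs).
tx = {"core": b"version|vins|vouts|locktime", "witness": b"signatures"}

def txid(tx):
    # Signatures are excluded, so mutating a signature's encoding
    # can no longer change the txid (the malleability fix).
    return double_sha256(tx["core"])

def wtxid(tx):
    # Includes the witness; segwit commits to these via a separate
    # witness merkle root in the coinbase transaction.
    return double_sha256(tx["core"] + tx["witness"])

print(txid(tx).hex(), wtxid(tx).hex())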

I just sense a much bigger attack surface for minimal immediate functionality gains, especially as compared to the alternative possibilities.

The following is test data from a test run I just did with iguana parallel sync:

Code:
  Time           eth0       
HH:MM:SS   KB/s in  KB/s out
02:16:09  33845.10    529.20
02:16:10  22049.13    451.58
02:16:11  11677.73    228.16
02:16:12   9593.46    455.37
02:16:13   6336.21    343.45
02:16:14   5547.12    253.51
02:16:15   5571.61    443.59
02:16:16   8923.98    284.75
02:16:17   4965.57    329.09
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:16:18   2707.07    308.64
02:16:19   4556.55    531.37
02:16:20  23731.02    404.43
02:16:21  21888.67    578.04
02:16:22  34865.94    287.81
02:16:23   6858.57     84.74
02:16:24   7388.59    204.51
02:16:25  25366.26    358.41
02:16:26  14404.62    369.48
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:16:27   4309.20    210.68
02:16:28   2171.04    131.02
02:16:29   6415.35    541.98
02:16:30   5755.03    229.80
02:16:31   2871.57    104.28
02:16:32  31940.83    336.98
02:16:33   9254.67    296.59
02:16:34   3870.30    127.04
02:16:35   2311.22    151.33
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:16:36  40519.47    794.40
02:16:37  41520.63    599.23
02:16:38  20989.28    177.32
02:16:39   7380.14    119.51
02:16:40   3840.29     93.45
02:16:41   5518.21    273.35
02:16:42  21878.96    389.22
02:16:43  18944.35    205.19
02:16:44   8115.07    172.49
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:16:45   5995.48    247.25
02:16:46   3898.50     89.55
02:16:47   8779.15    342.28
02:16:48  17804.29    220.64
02:16:49  17875.56    150.98
02:16:50   6362.67     97.60
02:16:51  12898.52    280.99
02:16:52   4688.76    118.78
02:16:53  30455.30    429.20
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:16:54  22671.03    368.31
02:16:55  31944.43    453.90
02:16:56  15339.38    210.14
02:16:57  26194.08    392.04
02:16:58  32547.23    383.35
02:16:59  43963.34    403.81
02:17:00  56543.47    451.39
02:17:01  44521.50    393.46
02:17:02  15638.87    121.88
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:17:03   4431.99    105.13
02:17:04  36061.83    437.45
02:17:05  21794.80    185.65
02:17:06   4929.13     92.09
02:17:07  48649.40    458.98
02:17:08  51054.86    405.49
02:17:09  46497.26    364.73
02:17:10  52669.18    430.47
02:17:11  57609.61    454.50
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:17:12  53891.66    438.20
02:17:13  75635.86    893.06
02:17:14  30123.17    163.68
02:17:15  49683.16    444.59
02:17:16  47817.53    377.48
02:17:17  55363.33    461.86
02:17:18  43078.26    313.46
02:17:19  26149.84    278.79
02:17:20   6218.16    128.42
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:17:21  18398.26    250.10
02:17:22  33918.31    325.32
02:17:23  11346.06     92.86
02:17:24   3202.43     19.64
02:17:25   2052.03     46.51
02:17:26   2337.96     71.44
02:17:27   2207.81     65.98
02:17:28  23519.14    304.05
02:17:29  55862.76    415.71
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:17:30  47336.18    390.28
02:17:31  54468.87    494.74
02:17:32  56162.71    446.20
02:17:33  53209.51    359.89
02:17:34  49673.05    390.43
02:17:35  55885.92    390.03
02:17:36  53509.14    343.98
02:17:37  51986.69    342.46
02:17:38  45596.10    295.73
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:17:39  14180.28     92.42
02:17:40  39211.49    352.47
02:17:41  63358.33    454.38
02:17:42  56712.67    422.91
02:17:43  28156.47    264.18
02:17:44  30555.58    259.53
02:17:45  56936.93    455.36
02:17:46  59813.32    418.80
02:17:47  59308.36    441.71
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:17:48  59827.94    385.09
02:17:49  63606.64    430.43
02:17:50  59492.03    364.88
02:17:51  59679.47    427.00
02:17:52  51657.23    341.52
02:17:53  31356.31    240.92
02:17:54  55589.56    415.41
02:17:55  48241.90    359.84
02:17:56  52045.72    406.88
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:17:57  16252.14    123.89
02:17:58   3553.59     26.32
02:17:59   5630.81     43.69
02:18:00  51815.96    464.66
02:18:01  47686.79    314.43
02:18:02  60419.80    424.08
02:18:03  46624.46    330.34
02:18:04  47545.23    424.62
02:18:05  47507.25    385.20
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:18:06  47868.21    439.81
02:18:07  39462.08    344.86
02:18:08  47640.44    420.90
02:18:09  14381.75    143.29
02:18:10   5073.33     79.79
02:18:11  39820.47    392.17
02:18:12  16655.82    115.73
02:18:13   5446.54    171.00
02:18:14  34158.03    224.28
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:18:15   7642.00    100.28
02:18:16  63972.90    510.19
02:18:17  54716.79    418.07
02:18:18  55642.69    412.52
02:18:19  57620.22    421.78
02:18:20  51703.89    357.54
02:18:21  55794.66    395.24
02:18:22  74435.85    493.97
02:18:23  69177.78    413.13
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:18:24  35185.93    194.08
02:18:25  50689.11    370.72
02:18:26  54193.62    311.03
02:18:27  48325.34    334.62
02:18:28  40097.72    301.69
02:18:29  42524.21    348.53
02:18:30  29990.71    174.90
02:18:31  46417.99    393.07
02:18:32  49354.48    365.07
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:18:33  49785.06    354.96
02:18:34  58241.59    397.50
02:18:35  40331.71    208.12
02:18:36  38532.94    306.23
02:18:37  59926.13    462.11
02:18:38  55388.72    457.29
02:18:39  51891.44    362.08
02:18:40  58160.40    407.42
02:18:41  56494.46    375.54
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:18:42  58764.23    421.94
02:18:43  39135.91    299.46
02:18:44  54445.69    495.31
02:18:45  40178.11    275.20
02:18:46   9888.95     88.57
02:18:47   3974.48     60.40
02:18:48   5706.35     70.53
02:18:49   4687.59     50.77
02:18:50   2559.35     27.27
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:18:51   1300.63     20.38
02:18:52  53046.86    440.35
02:18:53  57797.57    408.33
02:18:54  53286.55    358.68
02:18:55  47007.64    307.33
02:18:56  44760.59    392.78
02:18:57  44529.40    328.03
02:18:58  55051.48    427.69
02:18:59  16109.61    124.49
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:19:00  64336.61    502.38
02:19:01  52468.14    306.32
02:19:02  55338.03    378.65
02:19:03  58055.49    387.75
02:19:04  33642.29    176.69
02:19:05  76283.29    658.32
02:19:06  26809.79    158.43
02:19:07  44293.91    285.87
02:19:08  16992.35     92.36
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:19:09   7930.64     52.08
02:19:10  18896.54    172.51
02:19:11  65831.62    458.27
02:19:12  59365.14    385.31
02:19:13  55428.89    368.86
02:19:14  66314.21    423.40
02:19:15  61998.23    378.79
02:19:16  41052.71    218.75
02:19:17  64654.85    481.15
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:19:18  48836.84    304.99
02:19:19  40473.96    294.56
02:19:20  78438.34    616.25
02:19:21  61219.32    220.61
02:19:22  68857.04    418.10
02:19:23  51494.45    328.57
02:19:24  61066.10    440.70
02:19:25  63359.72    403.87
02:19:26  61503.87    376.38
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:19:27  52609.02    341.24
02:19:28  62605.62    344.72
02:19:29  33506.52    213.18
02:19:30  61961.18    425.11
02:19:31  58548.41    419.69
02:19:32  67196.68    459.66
02:19:33  58272.33    325.00
02:19:34  36245.20    161.49
02:19:35  49304.49    321.84
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:19:36  69566.06    483.68
02:19:37  57570.63    257.62
02:19:38  30481.10    190.85
02:19:39  20689.72    120.54
02:19:40   9237.98    121.76
02:19:41  39236.86    279.72
02:19:42  86731.88    288.46
02:19:43  48629.80    156.77
02:19:44  26932.19    108.87
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:19:45  21396.59    127.82
02:19:46  14506.07     95.55
02:19:47  34846.09    372.36
02:19:48  67259.36    376.53
02:19:49  50631.76    295.99
02:19:50  58821.97    373.39
02:19:51  35396.34    180.00
02:19:52  17401.16    146.28
02:19:53  15857.72    120.87
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:19:54  55611.05    273.55
02:19:55  37599.18    151.61
02:19:56  48564.53    324.09
02:19:57  57451.85    290.47
02:19:58  54583.66    336.44
02:19:59  37948.65    169.34
02:20:00  48550.33    312.58
02:20:01  65724.29    423.13
02:20:02  60209.88    332.45
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:20:03  74052.36    500.69
02:20:04  64638.80    359.71
02:20:05  29832.36    195.49
02:20:06  17762.60    215.73
02:20:07  16180.92    199.96
02:20:08  13248.35    110.94
02:20:09   8491.46    142.89
02:20:10  57840.12    429.11
02:20:11  41637.34    164.31
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:20:12  22256.24    170.11
02:20:13  48529.94    397.18
02:20:14  62792.30    405.66
02:20:15  66254.80    484.94
02:20:16  67550.37    164.20
02:20:17  30034.56    109.21
02:20:18  23392.60    118.91
02:20:19  12935.04    152.88
02:20:20  72649.63    410.91
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:20:21  50598.86    248.77
02:20:22  38862.75    258.64
02:20:23  57587.91    363.20
02:20:24  65281.96    305.90
02:20:25  34910.63    182.79
02:20:26  37640.38    208.30
02:20:27  40726.89    221.85
02:20:28  51446.10    304.25
02:20:29  57708.71    296.97
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:20:30  56701.46    272.89
02:20:31  40277.95    242.45
02:20:32  60091.48    318.82
02:20:33  50029.19    340.54
02:20:34  51111.51    300.14
02:20:35  45111.85    261.23
02:20:36  64856.58    391.74
02:20:37  48861.61    217.04
02:20:38  43913.26    288.61
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:20:39  61526.10    300.26
02:20:40  47306.20    217.89
02:20:41  39147.65    276.59
02:20:42  74420.89    731.66
02:20:43  39885.88    214.77
02:20:44  19364.66    157.79
02:20:45  45577.97    270.80
02:20:46  51020.70    335.66
02:20:47  70866.59    360.11
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:20:48  62171.44    309.96
02:20:49  62204.88    344.18
02:20:50  61137.40    339.22
02:20:51  70663.35    376.55
02:20:52  55582.67    367.74
02:20:53  76263.89    400.27
02:20:54  63452.74    336.72
02:20:55  51701.00    225.72
02:20:56  44965.56    272.85
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:20:57  62732.16    328.52
02:20:58  73721.37    631.83
02:20:59  51871.17    241.76
02:21:00  46303.93    198.11
02:21:01  32508.94    213.94
02:21:02  73284.92    433.73
02:21:03  49834.10    252.62
02:21:04  63456.48    325.10
02:21:05  58625.35    260.59
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:21:06  35097.09    231.37
02:21:07  73310.38    379.32
02:21:08  61125.11    313.39
02:21:09  74764.18    536.69
02:21:10  58698.85    280.84
02:21:11  46448.25    176.45
02:21:12  59788.81    342.76
02:21:13  49127.29    286.97
02:21:14  61682.25    329.70
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:21:15  50247.67    292.26
02:21:16  45406.79    258.51
02:21:17  67076.83    337.77
02:21:18  60259.81    314.62
02:21:19  56777.60    299.90
02:21:20  42174.36    233.22
02:21:21  53835.24    338.31
02:21:22  68306.79    394.66
02:21:23  53365.89    269.94
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:21:24  57407.31    353.79
02:21:25  51447.38    240.82
02:21:26  46836.79    258.98
02:21:27  72469.32    692.14
02:21:28  37474.04    103.96
02:21:29  23530.95    108.24
02:21:30  16598.97     86.37
02:21:31  36923.23    290.40
02:21:32  67147.53    382.28
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:21:33  61126.80    252.27
02:21:34  47133.20    220.84
02:21:35  65734.86    348.91
02:21:36  43444.24    192.81
02:21:37  55281.62    260.41
02:21:38  70902.61    345.20
02:21:39  61351.06    298.33
02:21:40  56973.40    255.57
02:21:41  63535.54    314.64
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:21:42  62351.92    312.44
02:21:43  80183.78    482.16
02:21:44  35935.50    135.09
02:21:45  18757.51    121.89
02:21:46  10865.56     88.57
02:21:47   6301.87    116.73
02:21:48  59690.39    636.39
02:21:49  63399.55    348.95
02:21:50  48542.85    222.30
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:21:51  36881.65    212.34
02:21:52  62972.36    373.64
02:21:53  54096.12    264.91
02:21:54  58251.50    322.55
02:21:55  60595.40    275.94
02:21:56  22283.33    179.93
02:21:57  10404.02    148.78
02:21:58   5238.00    123.95
02:21:59   4034.98     80.90
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:22:00   3522.30     72.64
02:22:01   5469.38     50.20
02:22:02   7297.45     39.79
02:22:03   3099.11     54.25
02:22:04   3248.17     70.89
02:22:05   2913.00     71.76
02:22:06   2987.06     78.72
02:22:07  54645.86    255.73
02:22:08  61528.18    193.00
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:22:09  35396.66    202.57
02:22:10  19741.68    120.61
02:22:11  15428.63    115.48
02:22:12   7972.73     89.96
02:22:13   9824.29    126.01
02:22:14   4874.94    138.29
02:22:15   3796.94     96.80
02:22:16   3575.66     71.28
02:22:17   6495.05    138.44
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:22:18   9402.45    109.48
02:22:19   3968.94     58.29
02:22:20   3743.62     48.55
02:22:21   3694.58     91.52
02:22:22  39369.09    309.25
02:22:23  52195.56    270.83
02:22:24  65553.51    841.36
02:22:25  60596.64    308.53
02:22:26  49923.62    310.89
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:22:27  38015.93    272.27
02:22:28  77433.58    397.76
02:22:29  43295.96    177.89
02:22:30  39624.80    352.38
02:22:31  76940.18    313.48
02:22:32  48802.54    239.23
02:22:33  42220.54    210.28
02:22:34  30216.32    209.80
02:22:35  19857.96    168.15
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:22:36  19749.08    179.32
02:22:37  69007.84    383.16
02:22:38  62657.22    237.67
02:22:39  40278.10    182.12
02:22:40  29505.32     73.24
02:22:41  16779.58     80.06
02:22:42  13546.22     82.75
02:22:43  11332.97    115.06
02:22:44   9341.68    148.25
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:22:45   7892.18     99.29
02:22:46  63716.31    436.78
02:22:47  65213.64    228.96
02:22:48  37110.78    239.34
02:22:49  24108.09    138.60
02:22:50  20563.37    188.07
02:22:51  85426.18    519.25
02:22:52  60595.63    166.00
02:22:53  34772.03    147.21
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:22:54  20921.52    139.12
02:22:55  18048.99     91.69
02:22:56  14149.85    161.82
02:22:57  10498.95    165.25
02:22:58  45172.00    401.83
02:22:59  57405.60    189.23
02:23:00  23521.44    149.76
02:23:01  21669.55    189.22
02:23:02  16121.44    179.46
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:23:03  77328.84    420.89
02:23:04  41386.30    227.66
02:23:05  25465.56    172.06
02:23:06  17135.39    170.74
02:23:07   7251.74     99.70
02:23:08   7156.27    135.40
02:23:09   6012.16    100.57
02:23:10   7197.49     79.37
02:23:11  31121.92    308.29
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:23:12  49174.40    241.25
02:23:13  18126.57    193.59
02:23:14   8291.02     97.42
02:23:15   5704.04    130.39
02:23:16   5572.79     99.40
02:23:17   4238.67     51.94
02:23:18  23084.83    236.83
02:23:19  66062.94    301.79
02:23:20  69801.78    217.66
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:23:21  42275.56    190.32
02:23:22  33421.54    175.13
02:23:23  23242.02    237.03
02:23:24  74739.70    589.49
02:23:25  48575.43     89.82
02:23:26  22961.87    110.47
02:23:27  16669.12    150.61
02:23:28  19389.75    190.22
02:23:29  12441.76    148.14
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:23:30   9394.74     90.56
02:23:31   8814.92    107.57
02:23:32  10041.65    140.45
02:23:33   7615.92     81.05
02:23:34   4394.78     90.10
02:23:35   5058.33     97.70
02:23:36  17210.13    251.39
02:23:37  71479.65    599.75
02:23:38  36538.63    226.54
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:23:39  23137.74    204.80
02:23:40  13405.63    106.80
02:23:41  11146.87    141.70
02:23:42   8407.14    132.03
02:23:43   6475.33     99.52
02:23:44  10320.05     84.66
02:23:45   8336.83    130.87
02:23:46  48163.20    298.63
02:23:47  28930.53    163.72
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:23:48  13932.03    114.17
02:23:49  10571.42    109.56
02:23:50   9142.69    152.98
02:23:51   8345.02    104.82
02:23:52   5981.88     95.52
02:23:53   7646.74    144.23
02:23:54  63774.75    431.31
02:23:55  33982.19    128.44
02:23:56  11022.05     28.06
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:23:57  11703.95     80.05
02:23:58  22302.66    159.12
02:23:59  10086.63     89.38
02:24:00  14350.75     97.90
02:24:01  34534.08    415.80
02:24:02  57772.03    299.37
02:24:03  36504.87    174.28
02:24:04  21607.46    159.68
02:24:05  63727.13    305.43
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:24:06  41723.63    182.41
02:24:07  22508.25    120.39
02:24:08  36023.64    290.90
02:24:09  80853.44    276.29
02:24:10  68707.29    186.05
02:24:11  44681.10    103.14
02:24:12  30686.46    166.93
02:24:13  24367.32    166.80
02:24:14  23025.15    123.80
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:24:15  16374.40    161.83
02:24:16  13540.23    181.54
02:24:17  55362.17    535.15
02:24:18  71284.66    273.90
02:24:19  39006.55    190.18
02:24:20  29352.29    161.27
02:24:21  19205.08    126.50
02:24:22  13631.40     96.25
02:24:23  13984.01    146.13
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:24:24  11885.65    139.05
02:24:25  10765.22     89.47
02:24:26   9837.67    107.86
02:24:27   8067.06     99.33
02:24:28   7435.46    120.90
02:24:29   9676.53    164.67
02:24:30  11567.90    154.95
02:24:31   7071.66    139.11
02:24:32   7238.58    114.08
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:24:33   7380.01    135.64
02:24:34   9878.01    115.30
02:24:35   6907.94     89.99
02:24:36   5678.94    101.34
02:24:37   5108.99     78.75
02:24:38   5603.92     93.55
02:24:39  10047.05    168.08
02:24:40   6253.95     87.87
02:24:41   9258.91     99.51
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:24:42   5163.81     77.68
02:24:43   4742.32     76.61
02:24:44   4796.64     64.30
02:24:45   9213.41    152.15
02:24:46   5437.21     93.17
02:24:47   4815.19     53.38
02:24:48   4825.59     76.67
02:24:49   4778.73     98.66
02:24:50   8459.87    104.20
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:24:51   5823.31    112.63
02:24:52   6333.90     63.41
02:24:53   6218.70     73.32
02:24:54   4303.65     77.32
02:24:55   4034.35     63.10
02:24:56   8541.50    105.18
02:24:57  66760.80    496.00
02:24:58  72990.92    226.50
02:24:59  70738.14    226.90
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:25:00  48705.98    182.72
02:25:01  42352.30    195.52
02:25:02  35287.80    231.86
02:25:03  26577.77    196.54
02:25:04  26851.50    133.52
02:25:05  25732.53    211.39
02:25:06  50782.19    392.96
02:25:07  68746.20    183.96
02:25:08  40287.56    123.09
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:25:09  29495.92    148.00
02:25:10  17260.72    114.26
02:25:11  30780.41    242.87
02:25:12  88400.50    204.73
02:25:13  53483.17    137.77
02:25:14  28357.99     88.91
02:25:15  20691.74    120.55
02:25:16  19874.62    155.95
02:25:17  27575.89    273.08
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:25:18  87978.59    295.60
02:25:19  35152.23    112.75
02:25:20  22804.24    141.63
02:25:21  17659.70    131.40
02:25:22  17517.00    157.05
02:25:23  28606.48    219.29
02:25:24  12942.29    158.40
02:25:25  10937.27    111.85
02:25:26  10177.11    103.19
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:25:27  12103.97     79.12
02:25:28   9957.51     95.22
02:25:29   8232.01     98.03
02:25:30   9091.32     88.95
02:25:31   7223.65     83.58
02:25:32  12675.62    140.81
02:25:33  12580.18    147.82
02:25:34   6441.42     57.97
02:25:35   6371.36     79.69
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:25:36   4348.66     65.75
02:25:37   4983.48    125.04
02:25:38  92406.68    335.42
02:25:39  46378.26    138.67
02:25:40  29588.51    148.12
02:25:41  25718.35    164.80
02:25:42  19422.97    156.98
02:25:43  15440.77    214.03
02:25:44  73045.72    388.06
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:25:45  45430.09    159.06
02:25:46  30190.29     65.52
02:25:47  21028.20     56.76
02:25:48  14713.18    106.88
02:25:49  15327.04     93.09
02:25:50  14247.42     89.77
02:25:51  14259.28    155.16
02:25:52   9251.41    131.32
02:25:53  55407.45    346.64
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:25:54  65855.13    231.28
02:25:55  53515.27    286.13
02:25:56  74206.59    253.90
02:25:57  52763.92    175.76
02:25:58  41574.83    147.08
02:25:59  41699.50    158.14
02:26:00  31735.70    166.23
02:26:01  31175.74    206.41
02:26:02  42723.29    407.44
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:26:03  84512.22    337.51
02:26:04  54827.23    114.76
02:26:05  45921.46    164.45
02:26:06  31283.68    145.32
02:26:07  32760.08    165.74
02:26:08  31279.24    214.17
02:26:09  32475.31    239.85
02:26:10  89632.01    511.55
02:26:11  65139.50    197.75
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:26:12  45620.33    219.24
02:26:13  35992.47    181.68
02:26:14  20648.30     99.27
02:26:15  15937.25    110.60
02:26:16  88124.57    435.29
02:26:17  63627.90    170.19
02:26:18  42710.77    171.93
02:26:19  28587.40    153.24
02:26:20  19010.68    133.95
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:26:21  15239.48    108.86
02:26:22  22261.42    240.48
02:26:23  90092.43    536.38
02:26:24  49668.56    119.64
02:26:25  29174.26     52.31
02:26:26  25523.14    120.71
02:26:27  21444.93    113.81
02:26:28  19039.51    105.05
02:26:29  20322.35    149.35
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:26:30  16424.09    137.49
02:26:31  18055.73    194.58
02:26:32  93763.10    388.41
02:26:33  57006.67    199.86
02:26:34  38879.62    172.51
02:26:35  36929.73    148.98
02:26:36  20115.76    102.73
02:26:37  58781.17    467.04
02:26:38  81480.73    307.01
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:26:39  66074.77    185.19
02:26:40  59093.21    183.04
02:26:41  47310.03    121.97
02:26:42  51759.69    138.47
02:26:43  41423.16    113.82
02:26:44  37384.89    163.40
02:26:45  37392.45    159.13
02:26:46  26169.71    108.32
02:26:47  28784.15    195.75
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:26:48  17795.62    167.67
02:26:49  54385.83    358.89
02:26:50  71105.62    276.69
02:26:51  37402.86    153.60
02:26:52  31592.48    196.47
02:26:53  17890.61    140.24
02:26:54  16136.34    179.56
02:26:55  11446.82    120.61
02:26:56  14718.49    171.99
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:26:57  42355.41    249.34
02:26:58  86458.75    236.69
02:26:59  56783.69    155.03
02:27:00  40061.02    163.69
02:27:01  24649.42     91.67
02:27:02  23326.01    131.48
02:27:03  17044.87    133.29
02:27:04  14990.12    112.46
02:27:05  18407.80    184.32
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:27:06  51270.37    603.11
02:27:07  65746.89    276.98
02:27:08  48080.10    194.53
02:27:09  38869.17    170.04
02:27:10  27341.18    145.61
02:27:11  18631.65     92.26
02:27:12  37025.32    269.93
02:27:13  60984.46    214.34
02:27:14  44035.86    208.91
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:27:15  24763.50    160.09
02:27:16  13202.80    102.59
02:27:17  19576.65    204.94
02:27:19  81949.20    642.30
02:27:20  72806.61    240.26
02:27:21  64690.11    207.83
02:27:22  50213.85    109.43
02:27:23  41497.78    140.43
02:27:24  42515.89    209.17
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:27:25  42461.82    174.10
02:27:26  33700.94    168.96
02:27:27  32869.52    107.41
02:27:28  32631.54    178.13
02:27:29  32576.31    174.30
02:27:30  26762.37    183.74
02:27:31  57654.24    511.36
02:27:32  69653.15    204.22
02:27:33  61301.35    205.36
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:27:34  45061.74    151.23
02:27:35  35339.93    192.41
02:27:36  22439.62    146.83
02:27:37  15462.09     54.75
02:27:38  12528.15     35.61
02:27:39  12627.45     50.99
02:27:40  11996.36     69.41
02:27:41  13510.10    104.68
02:27:42  23204.21    198.36
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:27:43  15943.04    114.33
02:27:44  15076.80    118.37
02:27:45  10156.39    103.00
02:27:46  10422.15    135.71
02:27:47  12266.09    160.72
02:27:48  10922.82    145.37
02:27:49  12585.87    138.00
02:27:50  68289.63    352.15
02:27:51  60807.79    316.21
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:27:52  27762.34    188.39
02:27:53  20117.07    121.86
02:27:54  16655.50    115.44
02:27:55  13659.23     92.88
02:27:56  19868.94    160.38
02:27:57  11453.97     70.56
02:27:58  13390.40    140.28
02:27:59  13240.90    113.21
02:28:00  12676.01    130.32
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:28:01  46915.90    450.36
02:28:02  79553.08    654.74
02:28:03  60601.54    201.73
02:28:04  38376.10    187.34
02:28:05  30453.17    121.40
02:28:06  24634.53    168.62
02:28:07  20460.96    147.03
02:28:08  17133.40    130.35
02:28:09  14220.38    124.35
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:28:10  10983.89    102.29
02:28:11  27313.08    280.75
02:28:12  18886.11    178.06
02:28:13  11207.46     83.74
02:28:14  11177.55    117.12
02:28:15   5971.50     26.64
02:28:16   7536.95     87.90
02:28:17   7659.23    116.48
02:28:18  13167.34    157.62
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:28:19   8856.75    101.46
02:28:20  21839.90    249.04
02:28:21  21634.76    162.16
02:28:22   8565.96    103.46
02:28:23   7898.29     90.00
02:28:24   8704.44    101.95
02:28:25   7062.73     76.78
02:28:26   6465.20    124.62
02:28:27  17367.24    171.61
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:28:28  25245.66    196.62
02:28:29   8433.70     95.78
02:28:30  10175.92     52.76
02:28:31   8664.96    105.04
02:28:32   9006.16     95.31
02:28:33   5879.33     61.66
02:28:34   6918.05     65.44
02:28:35  10965.66    142.49
02:28:36  72947.28    739.04
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:28:37  73678.01    251.98
02:28:38  58886.80    166.31
02:28:39  32432.21    107.33
02:28:40  78547.68    381.96
02:28:41  67963.51    176.46
02:28:42  62552.06    148.66
02:28:43  55239.48    108.44
02:28:44  37857.05    112.51
02:28:45  34437.72    105.14
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:28:46  25011.91    104.95
02:28:47  17379.54     83.65
02:28:48  15233.39     91.98
02:28:49  13980.90     72.06
02:28:50  13755.29    109.45
02:28:51  12014.07     99.72
02:28:52  16094.17    115.84
02:28:53  13651.72     73.71
02:28:54  10630.23     75.37
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:28:55   7998.11     89.25
02:28:56   8803.32     56.79
02:28:57  57358.63    527.10
02:28:58  70942.49    256.42
02:28:59  63205.96    249.50
02:29:00  72486.67    262.17
02:29:01  63630.54    191.47
02:29:02  59942.47    146.56
02:29:03  44194.05    125.04
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:29:04  40216.38    132.22
02:29:05  33787.09    144.08
02:29:06  28827.74    132.45
02:29:07  60555.25    504.04
02:29:08  77544.44    414.35
02:29:09  66796.80    138.28
02:29:10  53373.66    142.60
02:29:11  42884.18    115.80
02:29:12  37071.29    137.86
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:29:13  28247.28    128.57
02:29:14  25520.22    132.26
02:29:15  24315.78    158.61
02:29:16  19461.36     83.62
02:29:17  19127.17    115.54
02:29:18  17749.41     95.44
02:29:19  19925.15    126.07
02:29:20  18659.55    104.82
02:29:21  14946.88     95.89
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:29:22  16163.44    132.98
02:29:23  16052.21     92.71
02:29:24  13242.81     92.66
02:29:25  14759.39    146.26
02:29:26  16566.62    192.35
02:29:27  15025.01    136.54
02:29:28  10528.31    104.79
02:29:29  10886.05    123.11
02:29:30   8513.08    122.78
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:29:31   8631.74     80.31
02:29:32   9103.51    144.35
02:29:33  14163.01    175.80
02:29:34  40962.72    493.84
02:29:35  93048.32    665.11
02:29:36  66424.20    261.16
02:29:37  59000.29    199.75
02:29:38  37862.25    117.94
02:29:39  36531.98    150.34
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:29:40  40262.95    147.31
02:29:41  30986.46    173.52
02:29:42  24124.62    163.99
02:29:43  18995.60    149.74
02:29:44  16674.01    154.58
02:29:45  19638.81    180.11
02:29:46  13889.63    129.72
02:29:47  12060.11    120.22
02:29:48  12496.54    148.26
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:29:49  14630.61    136.88
02:29:50  44890.28    339.14
02:29:51  80303.43    774.34
02:29:52  74545.47    283.52
02:29:53  65714.89    220.00
02:29:54  62666.76    173.67
02:29:55  62347.10    148.69
02:29:56  56790.12    140.29
02:29:57  61189.43    161.03
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:29:58  61268.59    133.26
02:29:59  52569.67    104.14
02:30:00  48180.98    117.89
02:30:01  45463.79    133.40
02:30:02  42118.19    175.73
02:30:03  33361.92    122.04
02:30:04  30977.12    153.76
02:30:05  26376.39    193.59
02:30:06  24151.89    140.94
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:30:07  27859.65    217.03
02:30:08  17074.16    141.67
02:30:09  17341.53     88.21
02:30:10  13711.81     66.33
02:30:11  14031.88    120.92
02:30:12  18071.87    127.80
02:30:13  10958.72     94.88
02:30:14  50175.89    403.47
02:30:15  22650.31    140.84
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:30:16  12285.94    103.64
02:30:17  15057.09    141.00
02:30:18  13864.30    124.51
02:30:19  14038.04    113.00
02:30:20   9788.40     85.55
02:30:21  14167.05     77.27
02:30:22   9607.41     84.15
02:30:23  11463.11     79.88
02:30:24   7735.19    101.06
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:30:25   9898.37    118.44
02:30:26   8772.24     95.22
02:30:27  13057.54    144.44
02:30:28   9840.82    108.84
02:30:29  15310.92    140.55
02:30:30  56621.52    481.95
02:30:31  19301.80    144.23
02:30:32  47218.27    584.58
02:30:33  66724.63    273.51
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:30:34  57402.13    180.79
02:30:35  53781.58    177.28
02:30:36  42859.96    148.14
02:30:37  25829.27    115.08
02:30:38  23305.98    164.23
02:30:39  23254.84    139.58
02:30:40  25676.02    176.34
02:30:41  14714.19    106.36
02:30:42  18272.70    147.78
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:30:43  10789.26     91.34
02:30:44  12355.84     83.97
02:30:45   8252.45     80.12
02:30:46  13603.73    157.97
02:30:47   8984.21    124.46
02:30:48  10610.11    118.70
02:30:49  49534.84    659.56
02:30:50  69002.60    406.49
02:30:51  55594.60    232.67
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:30:52  58367.57    263.09
02:30:53  51598.79    197.92
02:30:54  52928.29    257.74
02:30:55  66857.27    357.13
02:30:56  80847.42    463.30
02:30:57  49054.93    212.40
02:30:58  32558.85    167.15
02:30:59  19084.98     99.73
02:31:00  15713.44    101.56
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:31:01  17505.31    146.00
02:31:02  18007.33    183.57
02:31:03  21844.09    224.65
02:31:04  21458.43    202.42
02:31:05  20377.77    205.40
02:31:06  16818.59    138.02
02:31:07  17066.45    157.10
02:31:08  18720.30    202.31
02:31:09  51725.16    484.95
  Time           eth0      
HH:MM:SS   KB/s in  KB/s out
02:31:10  18991.26    150.26
02:31:11  10827.27    131.27
02:31:12   8537.31    106.34
02:31:13   8285.83    103.56
02:31:14  11360.60    162.70
02:31:15  74303.56    673.62
02:31:16  72552.23    288.89
02:31:17  65030.57    192.44
02:31:18  47718.31    153.97


It created a read-only data set that is compressible to less than 20GB, with 35GB of sig data in a separate directory. So the sig data can be deleted after it is verified, or with a bit more work, you can just skip it if you are willing to rely on a checkpoint. Then you get about 20GB of bandwidth used for a full sync (without sigs).

Still not fully optimized, but mostly processing at close to full resource utilization. I am not saying others haven't done this too; all I am saying is that I have, and maybe my experience is useful to some people who want to hear a different point of view than the party line.

v.129/129 (2000 1st.129) to 201 N[202] Q.70 h.402000 r.258000 c.0.000kb s.377583 d.129 E.129:452552 M.403286 L.403287 est.119 0.000kb 0:32:42 2.905 peers.83/256 Q.(0 0)

downloaded 377583 blocks, fully processed 258000 in  0:32:42
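
(In case anyone wants to summarize a dump like the one above: a small Python sketch, assuming the same "HH:MM:SS  KB/s in  KB/s out" layout, that skips the repeated header lines and reports the average and peak inbound rate. Feed it the log on stdin:)

Code:
import re, sys

row = re.compile(r"^(\d\d:\d\d:\d\d)\s+([\d.]+)\s+([\d.]+)$")

rates_in = []
for line in sys.stdin:
    m = row.match(line.strip())
    if m:   # header lines ("Time eth0", "HH:MM:SS ...") won't match
        rates_in.append(float(m.group(2)))

if rates_in:
    print("samples:      %d" % len(rates_in))
    print("avg  KB/s in: %.1f" % (sum(rates_in) / len(rates_in)))
    print("peak KB/s in: %.1f" % max(rates_in))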

James


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: achow101 on March 19, 2016, 02:50:20 AM
Are sigs in the witness data immune from malicious tx via lots of sigs?
I think they are, since the transaction is hashed differently if it uses witnesses. The different hashing method allows for faster hashing by using midstates, which can be reused for every signature verification.
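
(Loosely, the midstate idea looks like the sketch below. It uses Python's hashlib copy() as a stand-in for a reusable midstate; the actual BIP143 scheme precomputes hashes of the shared fields (prevouts, sequences, outputs) once per transaction rather than literal SHA-256 midstates, so treat this as an illustration only:)

Code:
import hashlib

# Data shared by every input's signature hash; in the legacy scheme
# this effectively gets re-hashed once per input.
shared = b"prevouts|sequences|outputs" * 1000

# Hash the shared part once and keep the intermediate state...
midstate = hashlib.sha256()
midstate.update(shared)

digests = []
for i in range(5):
    h = midstate.copy()        # ...then reuse it for each input
    h.update(("input %d specifics" % i).encode())
    digests.append(h.hexdigest())

print(digests)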


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: jl777 on March 19, 2016, 02:51:45 AM
Are sigs in the witness data immune from malicious tx via lots of sigs?
I think they are, since the transaction is hashed differently if it uses witnesses. The different hashing method allows for faster hashing by using midstates, which can be reused for every signature verification.
well that's good, but it would be nice to see it stated explicitly instead of having to assume.

oh, thanks for the URL about segwit implementation, that helped me understand it a lot better

James


Title: Segwit details? IGNORANT GAVINTOOL JL777 BELIEVES SEGWIT WASTES BLOCKCHAIN SPACE
Post by: iCEBREAKER on March 19, 2016, 03:05:52 AM
Are sigs in the witness data immune from malicious tx via lots of sigs?
I think they are, since the transaction is hashed differently if it uses witnesses. The different hashing method allows for faster hashing by using midstates, which can be reused for every signature verification.
well that's good, but it would be nice to see it stated explicitly instead of having to assume.

oh, thanks for the URL about segwit implementation, that helped me understand it a lot better

James

Explicitly stated here:

SF Bitcoin Devs Seminar: Key Tree Signatures
https://www.youtube.com/watch?v=gcQLWeFmpYg


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: molecular on March 19, 2016, 08:09:37 AM
The following was quoted on and linked from reddit (https://www.reddit.com/r/Bitcoin/comments/4aysz6/with_bitpay_preparing_to_release_its_own_fork_of/d152ggt):

If segwit were to be a hardfork, what would it be?

Would it change how transaction IDs were computed, like elements alpha did? Doing so is conceptually simpler and might save 20 lines of code in the implementation... But it's undeployable: even as a hardfork-- it would break all software, web wallets, thin wallets, lite wallets, hardware wallets, block explorers-- it would break them completely, along with all presigned nlocktime transactions, all transactions in flight. It would add more than 20 lines of code in having to handle the flag day.  So while that design might be 'cleaner' conceptually, the deployment would be so unclean as to be basically inconceivable. Functionally it would be no better; flexibility-wise it would be no better.  No one has proposed doing this.

Is the following suggestion a solution to that?

The position of a transaction in the blockchain should define which version of the rules is applicable to it

What keeps us from using the old way of calculating a txid for transactions in pre-fork blocks and the new way after the fork?

------

Also, Tyler Nolan has a similar suggestion:

One example gmaxwell gives: all presigned nlocktime transactions would be broken. For users keeping these in storage they may well represent a lot of security. Gone... the moment a new version of the software no longer sees the transaction as being valid.
You could have a rule that you can refer to inputs using either txid or normalized-txid.  That maintains backwards compatibility.  The problem is that you need twice the lookup table size.  You need to store both the txid to transaction lookup and the n-txid to transaction lookup.

The rule could be changed so that transactions starting at version 2 use n-txid and version 1 transactions use txid.  This means that each transaction only needs 1 lookup entry, depending on its version number.  If version 1 transactions cannot spend outputs from version 2 transactions, then the network will eventually update over time.  It is still a hard-fork though.

Is that applicable / workable?


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: achow101 on March 19, 2016, 11:20:40 AM
Is the following suggestion a solution to that?

The position of a transaction in the blockchain should define which version of the rules is applicable to it

What keeps us from using the old way of calculating a txid for transactions in pre-fork blocks and the new way after the fork?
Unconfirmed transactions are the issue. What do we do about transactions that were created just before the fork block? How do you distinguish between an unconfirmed transaction created prior to the fork and an unconfirmed transaction created after the fork block?


Also, Tyler Nolan has a similar suggestion:

One example gmaxwell gives: all presigned nlocktime transactions would be broken. For users keeping these in storage they may well represent a lot of security. Gone... the moment a new version of the software no longer sees the transaction as being valid.
The rule could be changed so that transactions starting at version 2 use n-txid and version 1 transactions use txid.
You could have a rule that you can refer to inputs using either txid or normalized-txid.  That maintains backwards compatibility.  The problem is that you need twice the lookup table size.  You need to store both the txid to transaction lookup and the n-txid to transaction lookup.

The rule could be changed so that transactions starting at version 2 use n-txid and version 1 transactions use txid.  This means that each transaction only needs 1 lookup entry, depending on its version number.  If version 1 transactions cannot spend outputs from version 2 transactions, then the network will eventually update over time.  It is still a hard-fork though.

Is that applicable / workable?

The first is possible but it is not optimal because it requires twice the lookup table size.

The second is also possible, but the issue is the hard fork. The problem is that hard forks shouldn't be done often, and not for small things like this. It would be better if it were packaged with other desired changes that also require a hard fork. It also has less functionality than segwit.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: molecular on March 19, 2016, 12:01:40 PM
One example gmaxwell gives: all presigned nlocktime transactions would be broken. For users keeping these in storage they may well represent a lot of security. Gone... the moment a new version of the software no longer sees the transaction as being valid.
You could have a rule that you can refer to inputs using either txid or normalized-txid.  That maintains backwards compatibility.  The problem is that you need twice the lookup table size.  You need to store both the txid to transaction lookup and the n-txid to transaction lookup.

The rule could be changed so that transactions starting at version 2 use n-txid and version 1 transactions use txid.  This means that each transaction only needs 1 lookup entry, depending on its version number.  If version 1 transactions cannot spend outputs from version 2 transactions, then the network will eventually update over time.  It is still a hard-fork though.

Is that applicable / workable?

The first is possible but it is not optimal because it requires twice the lookup table size.

The second is also possible, but the issue is the hard fork. The problem is that hard forks shouldn't be done often, and not for small things like this. It would be better if it were packaged with other desired changes that also require a hard fork. It also has less functionality than segwit.

Thanks for the info and those clarifications.

I understand you take issue with hardforking in general and I don't want to downplay the inherent risks.

I'm not suggesting doing a hard fork just for this. I'm investigating the feasibility of assembling a compromise package of changes. As you said:

It would be better if it was packaged with other stuff that is desired that also requires a hard fork.

Mainly I'm reacting here to Maxwell saying a txid change couldn't be deployed as a hardfork at all, because that quote is very publicly being used on reddit to defend "segwit as a softfork".

Would it change how transaction IDs were computed, like elements alpha did? Doing so is conceptually simpler and might save 20 lines of code in the implementation... But it's undeployable: even as a hardfork-- it would break all software, web wallets, thin wallets, lite wallets, hardware wallets, block explorers-- it would break them completely, along with all presigned nlocktime transactions, all transactions in flight.

So that's basically FUD?


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: achow101 on March 19, 2016, 12:47:03 PM
Mainly I'm reacting here to Maxwell saying a txid change couldn't be deployed as a hardfork at all, because that quote is very publicly being used on reddit to defend "segwit as a softfork".

Would it change how transaction IDs were computed, like elements alpha did? Doing so is conceptually simpler and might save 20 lines of code in the implementation... But it's undeployable: even as a hardfork-- it would break all software, web wallets, thin wallets, lite wallets, hardware wallets, block explorers-- it would break them completely, along with all presigned nlocktime transactions, all transactions in flight.

So that's basically FUD?
No. What he was responding to was the original idea of changing the txid calculation entirely to something new. This idea (the second option) instead introduces a new txid calculation method which works alongside the original txid calculation algorithm.

Additionally, on further thought, it will still require two lookup tables. There needs to be one for the transactions that version 1 txs can spend from and one for the version 2 txs. Version 2 txs still need to be able to reference the txid of a version 1 tx to spend from it, or the ntxid for the version 1 tx also needs to be stored somewhere, so it will increase the lookup table sizes.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: TierNolan on March 19, 2016, 07:33:53 PM
Additionally, on further thought, it will still require two lookup tables. There needs to be one for the transactions that version 1 txs can spend from and one for the version 2 txs. Version 2 txs still need to be able to reference the txid of a version 1 tx to spend from it, or the ntxid for the version 1 tx also needs to be stored somewhere, so it will increase the lookup table sizes.

I meant if you want to refer to a version 2 transaction, you use the n-txid.  The reason that this is ok is that legacy/timelocked transactions are automatically version 1, so it doesn't make locked transactions suddenly unspendable.

Version 1 transactions would only refer to version 1 inputs (so no change)
Version 2 transactions would use txid when referring to version 1 inputs and n-txid when referring to version 2 inputs.

The nice feature of this is that n-txids don't need to be recomputed back to the genesis block.
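
(A minimal Python sketch of the lookup rule described above, with made-up table names, just to show that each confirmed transaction needs only one index entry, keyed by its own version:)

Code:
# Hypothetical indexes: each transaction is stored under exactly one
# key, chosen by *its own* version.
index_by_txid  = {}   # version 1 transactions, keyed by legacy txid
index_by_ntxid = {}   # version 2 transactions, keyed by normalized txid

def store(tx):
    if tx["version"] == 1:
        index_by_txid[tx["txid"]] = tx
    else:
        index_by_ntxid[tx["ntxid"]] = tx

def lookup_input(spender_version, ref):
    # Version 1 spenders may only reference version 1 outputs (no change).
    if spender_version == 1:
        return index_by_txid[ref]
    # Version 2 spenders use the legacy txid for v1 outputs and the
    # normalized txid for v2 outputs.
    return index_by_ntxid[ref] if ref in index_by_ntxid else index_by_txid[ref]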

It is a hard-fork though.  I should have been clearer.  Given that SW can achieve the same with a soft fork, I think SW wins here.

Maybe SW should have happened in stages.  The first stage could have been purely adding SW and no other changes (other than script versioning).  Later script versions could add the new hashing rules.

In fairness, they have been trying to keep feature creep to a minimum.

With regards to the O(N^2) hashing operation: transactions could simply have been limited to 1MB.  This would have meant no changes at all.  The O(N^2) performance assumes that the block contains a single transaction.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: jl777 on March 19, 2016, 08:11:46 PM
Additionally, on further thought, it will still require two lookup tables. There needs to be one for the transactions that version 1 txs can spend from and one for the version 2 txs. Version 2 txs still need to be able to reference the txid of a version 1 tx to spend from it, or the ntxid for the version 1 tx also needs to be stored somewhere, so it will increase the lookup table sizes.

I meant if you want to refer to a version 2 transaction, you use the n-txid.  The reason that this is ok is that legacy/timelocked transactions are automatically version 1, so it doesn't make locked transactions suddenly unspendable.

Version 1 transactions would only refer to version 1 inputs (so no change)
Version 2 transactions would use txid when referring to version 1 inputs and n-txid when referring to version 2 inputs.

The nice feature of this is that n-txids don't need to be recomputed back to the genesis block.

It is a hard-fork though.  I should have been clearer.  Given that SW can achieve the same with a soft fork, I think SW wins here.

Maybe SW should have happened in stages.  The first stage could have been purely adding SW and no other changes (other than script versioning).  Later script versions could add the new hashing rules.

In fairness, they have been trying to keep feature creep to a minimum.

With regards to the O(N^2) hashing operation: transactions could simply have been limited to 1MB.  This would have meant no changes at all.  The O(N^2) performance assumes that the block contains a single transaction.
If N squared performance assumes a single giant transaction, why not a rule to limit the size of a transaction?


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: molecular on March 19, 2016, 09:16:06 PM
With regards to the O(N^2) hashing operation: transactions could simply have been limited to 1MB.  This would have meant no changes at all.  The O(N^2) performance assumes that the block contains a single transaction.
If N squared performance assumes a single giant transaction, why not a rule to limit the size of a transaction?

Seems like a no-brainer. I've been looking through the commits, but couldn't find it. I think classic has a defense along those lines.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: jl777 on March 19, 2016, 09:21:19 PM
With regards to the O(N^2) hashing operation: transactions could simply have been limited to 1MB.  This would have meant no changes at all.  The O(N^2) performance assumes that the block contains a single transaction.
If N squared performance assumes a single giant transaction, why not a rule to limit the size of a transaction?

Seems like a no-brainer. I've been looking through the commits, but couldn't find it. I think classic has a defense along those lines.

Since I am supposed to not have a brain, that explains it.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: achow101 on March 19, 2016, 09:34:41 PM
If N squared performance assumes a single giant transaction, why not a rule to limit the size of a transaction?
Seems like a no-brainer. I've been looking through the commits, but couldn't find it. I think classic has a defense along those lines.
Even so, a 1 MB transaction can still take a while to validate. A hypothetical scenario described here: https://bitcointalk.org/?topic=140078 states that a transaction of 1 MB could take up to 3 minutes to verify. In reality, there was a roughly 1 MB transaction that took about 25 seconds to verify, described here: http://rusty.ozlabs.org/?p=522. Anything over a few seconds is quite a long time by computer standards. Both of those scenarios are less likely to happen now since libsecp256k1 introduced significantly faster signature validation, but the network is still vulnerable to such attacks. A maliciously crafted 1 MB transaction could, in theory, still take 25 seconds or longer to verify.
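
(Back-of-the-envelope, with assumed round numbers rather than the exact figures from those links: a ~1 MB transaction packed with a few thousand signature checks, each hashing close to the full megabyte, means gigabytes of hashing for a single transaction:)

Code:
# Rough, illustrative numbers only (not taken from the linked posts).
tx_size_bytes = 1000000                        # ~1 MB transaction
num_sig_checks = 5000                          # a few thousand inputs/sigops
bytes_hashed = num_sig_checks * tx_size_bytes  # each sighash covers ~the whole tx

print(bytes_hashed / 1e9, "GB hashed for one transaction")   # ~5 GB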


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: TierNolan on March 19, 2016, 09:42:53 PM
Seems like a no-brainer. I've been looking through the commits, but couldn't find it. I think classic has a defense along those lines.

The rule for classic is to directly limit the amount of hashing required.  If your block does more than 1.3GB of hashing, it is invalid.  I assume that the 1.3GB limit is sufficient for a 1MB transaction.  Ideally, any valid version 1 transaction (so less than 1MB inherently) should be valid when rules are changed.
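
(A hedged sketch of what such a block-level hashing cap could look like; sighash_bytes_of is a hypothetical accounting helper, not Classic's actual code.)

Code:
# Block-level cap on signature-hashing work, in the spirit of the rule above.
MAX_BLOCK_SIGHASH_BYTES = 1_300_000_000   # ~1.3 GB

def block_within_hashing_limit(txs, sighash_bytes_of) -> bool:
    # sighash_bytes_of(tx) would return the bytes hashed to verify tx
    return sum(sighash_bytes_of(tx) for tx in txs) <= MAX_BLOCK_SIGHASH_BYTES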


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: jl777 on March 19, 2016, 09:44:21 PM
If N squared performance assumes a single giant transaction, why not a rule to limit the size of a transaction?
Seems like a no-brainer. I've been looking through the commits, but couldn't find it. I think classic has a defense along those lines.
Even so, a 1 Mb transaction can still take a while to validate. A hypothetical scenario described here: https://bitcointalk.org/?topic=140078 states that a transaction of 1 Mb could take up to 3 minutes to verify. In reality, there was a roughly 1 Mb transaction that took about 25 seconds to verify described here: http://rusty.ozlabs.org/?p=522. Anything that is over a few seconds is quite a long time in computer standards. Now, both of those scenarios are less likely to happen now since libsecp256k1 introduced significantly faster signature validation, but it is still vulnerable to such attacks. A maliciously crafted 1 Mb transaction could, in theory, still take 25 seconds or longer to verify.
The point is that I don't see any huge outcry if a tx is limited to, say, 1024 vins/vouts or some such number. If that avoids the N*N behavior, it seems a simple way to do it.

On the non-malleable txid question, I can't find issues with T. Nolan's approach, and any need for internal lookup tables is a local implementation matter, right? And should limitations of existing implementations constrain improving the protocol?

Just a question about tradeoffs.

If the position is that anything that requires retooling the codebase to handle the increased load in the future is not acceptable, OK, just say so. After all, I don't want to be shamed again for suggesting that the current code isn't absolutely perfect in all ways possible. I just want to know what the ground rules are. If the current codebase is sacred, then it changes the analysis of what is and isn't possible. Who am I to suggest making any code changes to the people that are 100x smarter than me.

James


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: achow101 on March 19, 2016, 10:27:17 PM
The point is that I dont see any huge outcry if a tx is limited to say 1024 vins/vouts or some such number. If that avoids the N*N behavior it seems a simple way.
Well, it could be done, but I don't think it would be liked; that probably depends on who proposed it. At that point it becomes political and not technical. Some would say that we shouldn't reduce what can currently be done. Others may not. It becomes a political question whether to put a limit in place or not.

On the non-malleable txid basis, I cant find issues with T. Nolan's approach and any need for internal lookup tables, is a local implementation matter, right? And should limitations of existing implementations constrain improving the protocol?
Well, local implementation does kind of matter. If it is something that is extremely difficult to implement, it probably isn't optimal. If it places additional system requirements on users, it might not be something that we want to do. Of course, it can be implementation-specific, but if implementing it can only be done in a few specific ways, then I don't think it should be done.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 19, 2016, 10:48:52 PM
Pre-signed but unbroadcast or unconfirmed transactions seem to be a tough problem. 

If the protocol is to support such transactions, then soft forks must be forbidden, since (by definition)  transactions that were valid before a soft fork may be invalid after it.  BIP66 invalidated any unbroadcast transactions that used the signature variants that it excluded. Even the deployment of SegWit as a soft fork could invalidate some valid transactions.

IMHO, the safest way to introduce changes is by a clean fork:  making sure that *every* transaction or block that is valid under the old rules is invalid under the new ones, and vice-versa.  The code for the change should be introduced in some release N of the software, but the change itself should be programmed to become active at some block number X that is ~6 months in the future, after the expected date of release N+3.   Then users can be warned of the impending fork at the time of release N, and in particular that any transaction created with older releases that is not confirmed by block X will never be confirmed. 

This does not completely solve the problem of transactions that are created specifically for delayed broadcast, but it reduces the severity.  Clients who need that feature can tell their new wallet software, after upgrading to release N, whether they intend to broadcast them before or after block X.  (Or they can create both versions, just in case.)

The problem is that soft forks cannot be prevented.  If a simple majority of the miners wants to impose a soft-fork type of change, all they have to do is to start rejecting all blocks and transactions that are invalid under their chosen new rules.  They don't even have to warn other miners, users, or relay nodes; and even if they do, there is nothing that these players can do to prevent the fork.

TL;DR: Holding on to pre-signed transactions, without broadcasting them, seems to be a bad idea.  There is no way to guarantee that a transaction will be confirmed until it is confirmed.  The older the transaction, the greater the risk of it becoming invalid.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: jl777 on March 19, 2016, 11:14:28 PM
Pre-signed but unbroadcast or unconfirmed transactions seem to be a tough problem. 
...
TL;DR: Holding on to pre-signed transactions,without broadcasting them, seem to be a bad idea.  There is no way to guarantee that a transaction will be confirmed, until it is confirmed.   The older the transaction, the greater the risk of it becoming invalid.
Maybe I am being a bit simplistic about this, but "unconfirmed" to me means that it hasn't been confirmed. So requiring that all unconfirmed transactions must be confirmed contradicts the fundamental meaning of unconfirmed. What is the meaning of the word 'unconfirmed'?

If all mempool tx that are unconfirmed must be confirmed, then doesn't the confirmation point move to being accepted into the mempool? We would then need to say that all zeroconf tx in the mempool are actually confirmed?

But if that is the case, then how can there be consensus about which tx are confirmed or not? If being in the mempool means it's confirmed, we would need to enforce mempool consensus. Is that currently the case? Why is this a requirement? Isn't the whole point of blocks to have something to reach consensus on?

I think things are difficult enough without requiring any solution to also treat unconfirmed tx as confirmed.



Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: 2112 on March 19, 2016, 11:39:45 PM
Pre-signed but unbroadcast or unconfirmed transactions seem to be a tough problem. 
I disagree on the "tough" part. In my opinion this is less difficult than DOSbox/Wine on Linux or the DOS subsystem in Windows 32 (and Itanium editions of Windows 64). It is more a problem of how much energy to spend on scoping the required area of backward compatibility and preparing/verifying test cases.

The initial step is already done in the form of libconsensus. It is a matter of slightly broadening libconsensus' interface to allow for full processing of compatibility-mode transactions off the wire and old-style blocks out of the disk archive.

Then it is just a matter of keeping track of the versions of libconsensus.

To my nose this whole "segregated witness as a soft fork" has a strong whiff of the "This program cannot be run in DOS mode" from Redmond, WA. Initially there were paeans written about how great it was that one could start Aldus PageMaker both by typing PAGEMKR at the C> prompt (to start Windows) and by clicking the PageMaker icon in the Program Manager (if you already had Windows started). Only years later did the designers admit this to be one of the worst choices in the history of backward compatibility.



Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: jl777 on March 19, 2016, 11:43:28 PM
Pre-signed but unbroadcast or unconfirmed transactions seem to be a tough problem. 
I disagree on the "tough" part. In my opinion this is less difficult than DOSbox/Wine on Linux or DOS subsystem in Windows 32 (and Itanium editions of Windows 64). It is more of the problem how much energy to spend on scoping the required area of backward compatibility and preparing/verifying test cases.

The initial step is already done in form of libconsensus. It is a matter of slightly broadening the libconsensus' interface to allow for full processing of compatibility-mode transactions off the wire and old-style blocks out of the disk archive.

Then it is just a matter of keeping track of the versions of libconsensus.

To my nose this whole "segregated witness as a soft fork" has a strong whiff of the "This program cannot be run in DOS mode" from Redmond, WA. Initially there were paeans written about how great it is that one could start Aldus Pagemaker both by typing PAGEMKR on the C> prompt (to start Windows) and by clicking PageMaker icon in the Program Manager (if you already had Windows started). Only years later the designers admitted this to be one of the worst choices in the history of backward compatibility.


I agree


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: TierNolan on March 20, 2016, 01:07:53 AM
maybe I am being a bit simplistic about this, but "unconfirmed" to me means that it hasnt been confirmed. So to require that all unconfirmed transactions must be confirmed contradicts the fundamental meaning of unconfirmed. What is the meaning of the word 'unconfirmed'?

Consider the standard refund transaction setup.  A transaction with a 2-of-2 output is committed to the block chain, and that output is spent by a refund transaction.

If the refund transaction has a locktime 2 years into the future, then it cannot be spent for at least two years.

On the one hand, the refund transaction is unconfirmed.  But on the other hand, there is no risk of its input being double spent.  Both parties are safe to assume that the transaction will eventually be included.
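
(For reference, a simplified sketch of the standard locktime finality rule that keeps such a refund transaction out of blocks until its nLockTime is reached; field names are illustrative, not Bitcoin Core's.)

Code:
# Simplified locktime finality check: a transaction with a far-future
# nLockTime cannot be mined until the chain reaches that height or time.
LOCKTIME_THRESHOLD = 500_000_000   # below: block height, above: unix time

def is_final(tx_locktime: int, tx_sequences, block_height: int, block_time: int) -> bool:
    if tx_locktime == 0:
        return True
    cutoff = block_height if tx_locktime < LOCKTIME_THRESHOLD else block_time
    if tx_locktime < cutoff:
        return True
    # locktime is ignored when every input uses the maximum sequence number
    return all(seq == 0xFFFFFFFF for seq in tx_sequences)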

A hard fork which makes the refund transaction invalid effectively steals that output.  At the absolute minimum, there should be a notice period, but it is better to just not have that problem in the first place.

There was at least one thread that asked about leaving money to someone for their 18th birthday.  A payment like that could very easily be locked for 10+ years.  I think the conclusion in the thread was that leaving a letter with a lawyer was probably safer.

If someone has a 1MB transaction that spends a 2 of 2 output but is locked for 5 years, is it fair to say to them that it is no longer spendable?

There is probably a reasonable compromise, but it should err on the side of not invalidating locked transactions.

That is why increasing the version number helps.  If someone has a locked transaction that uses a non-defined transaction version number, then I think it is fair enough that their locked transaction ends up not working.  For the time being, only version 1 transactions are safe to use with locktime.

I made a post on the dev list (http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-November/011643.html) at the end of last year with some suggestions for rules. 

  • Transaction version numbers will be increased, if possible
  • Transactions with unknown/large version numbers are unsafe to use with locktime
  • Reasonable notice is given that the change is being contemplated
  • Non-opt-in changes will only be to protect the integrity of the network

I think if a particular format of transaction has mass use, then it is probably safer for locking than an obscure or very unusual transaction.  A transaction that uses one of the IsStandard forms would be safer than one that is 500kB and has lots of OP_CHECKSIG calls.

The guidelines could say that transactions which put an 'excessive' load on the network are riskier.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 20, 2016, 01:24:31 AM
Pre-signed but unbroadcast or unconfirmed transactions seem to be a tough problem. 
I disagree on the "tough" part. In my opinion this is less difficult than DOSbox/Wine on Linux or DOS subsystem in Windows 32 (and Itanium editions of Windows 64). It is more of the problem how much energy to spend on scoping the required area of backward compatibility and preparing/verifying test cases.

I don't think it is the same thing at all. 

Support for legacy files and programs in newer releases of an OS is similar to the "clean fork" approach that I described: namely, the new software is aware of the old semantics and can use it when required.  Any hard fork must have such backwards compatibility, because it must recognize as valid all blocks and transactions that were confirmed before the fork.

Backwards compatibility in general is feasible as long as there is a feasible mapping of old semantics to the new infrastructure, and there is no technical or other reason to deny the conversion.    However, that sometimes is impossible; e.g. if an old program tries to access hardware functions that are not accessible in newer hardware, or if the mapping would require decrypting and re-encrypting data without access to the keys.

Similar difficulties exist in handling an old transaction that was created before a soft fork but was broadcast only after it, and became invalid under new rules.  The rules must have changed for a reason, so the transaction cannot simply be included in the blockchain as such.   For example, suppose that the change consisted in imposing a strict limit to the complexity of signatures, to prevent "costly transaction" attacks.  The miners cannot continue to accept old transactions according to old rules, because that would frustrate the goal of the fork. 

(Note that there is no way for a miner to determine when a transaction T1 was signed.  Even if it spends a UTXO in a transaction T2 that was confirmed only yesterday, it is possible that both T1 and T2 were signed a long time ago.)

maybe I am being a bit simplistic about this, but "unconfirmed" to me means that it hasnt been confirmed. So to require that all unconfirmed transactions must be confirmed contradicts the fundamental meaning of unconfirmed. What is the meaning of the word 'unconfirmed'?

I don't think that anyone is proposing to change the definition.  Transactions that have not been broadcast yet and transactions that are in the queue (mempool) of some nodes or miners, but are not safely buried into the blockchain, are equally unconfirmed. 

I meant to point out that there is no way that a client can make sure that an unconfirmed transaction will ever be confirmed, even if it seems to be valid by current rules.  Everybody agrees on that?

In fact, there is no way to put a probability value on that event, even with the usual assumptions of well-distributed mining etc.  Everybody still agrees?

But then it follows that clients who hold signed transactions for broadcast at a later date cannot trust that they will be confirmed, even if they seem to be valid at the time of signing.  Everybody OK with this?

Thus, there is no weight in the argument "we cannot do X because it would invalidate all pre-signed transactions that people are holding". 


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: TierNolan on March 20, 2016, 01:33:46 AM
Support for legacy files and programs in newer relases of an OS is similar to the "clean fork" approach that I described.  namely, the new software is aware of the old sematics and can use it when required.  Any hard fork must have such backwards compatibilty, because it must recognize as valid all blocks and transactions that were confirmed before the fork.

You could just checkpoint the block where the rule change happened and then include only code for the new rules.  The client would still need to be able to read old blocks, but wouldn't need to be able to validate them.

Checkpoints aren't very popular though, and they take away from claims that everything is p2p.

Quote
I meant to point out that there is no way that a client can make sure that an unconfirmed transaction will ever be confirmed, even if it seems to be valid by current rules.  Everybody agrees on that.?

In fact, there is no way to put a probability value on that event, even with the usual assumptions of well-distributed mining etc.  Everybody still agrees?

I disagree.

If you create a transaction that spends your own outputs, then it is possible to be sure that that transaction will be included in the blockchain.  You might have to pay extra fees though (assuming some miners implement child-pays-for-parent).

A rule change can make the transaction invalid and that is a reason for not making those rule changes.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 20, 2016, 02:30:02 AM
I meant to point out that there is no way that a client can make sure that an unconfirmed transaction will ever be confirmed, even if it seems to be valid by current rules.  Everybody agrees on that.?

In fact, there is no way to put a probability value on that event, even with the usual assumptions of well-distributed mining etc.  Everybody still agrees?

I disagree.

If you create a transaction that spends your own outputs, then it is possible to be sure that that transaction will be included in the blockchain.  You might have to pay extra fees though (assuming some miners have child pays for parent).

A rule change can make the transaction invalid and that is a reason for not making those rule changes.


I insist: you cannot be sure, because a fee hike is not the only change that might prevent confirmation. Especially if the transaction is held for months before being broadcast.

Rule changes are inevitable.  They are likely to be needed to fix bugs and to meet new demands and constraints.  Many rule changes have happened already, and many more are in the pipeline.

As I pointed out, if Antpool, F2Pool, and any third miner decide to impose a soft-fork change, they can do it, and no one can stop them.
 
Curiously, it is soft-fork changes that can prevent confirmation of signed and validated but unconfirmed transactions.  Hard-fork changes (that only make rules more permissive) will not affect them.

CPFP is a mempool management rule only.  If a min fee hike is implemented as a mempool management rule only, or is an individual option of each miner, then one can hope that some miner may also implement CPFP, and then the low-fee transaction will be pulled through.  But there is no way for the client to know whether some miner is doing that, so he cannot put a probability on that.

On the other hand, if the min fee is implemented as a rule change (meaning that miners are prohibited from accepting low-paying transactions) then it seems unlikely that CPFP will be implemented too.  The validity rules must be verifiable "on line", meaning that the validity of a block in the blockchain can only depend on the contents of the blockchain up to and including that block.  In particular, the rules cannot say "a transaction with a low fee is valid if there is a transaction further ahead in the blockchain that pays for it".
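
(A toy illustration of the child-pays-for-parent idea as a mempool policy: the miner evaluates parent and child together, so the child's fee can pull in a low-fee parent. The numbers are made up.)

Code:
# Child-pays-for-parent as a mempool policy: value the package as a whole.
def package_feerate(parent_fee, parent_size, child_fee, child_size):
    return (parent_fee + child_fee) / (parent_size + child_size)

# parent pays 1 sat/byte, child pays 200 sat/byte; the package is ~89 sat/byte
print(package_feerate(250, 250, 40_000, 200))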

Anyway, other possible soft-fork changes that could prevent confirmation of a currently valid transaction include reduction of the block size limit (as Luke has been demanding), imposing a minimum output value (an antispam measure proposed by Charlie Lee), limiting the number of inputs and outputs, extending the wait period for spending coinbase UTXOs, and many more.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: jl777 on March 20, 2016, 03:40:28 AM
Consider the standard refund transaction setup.  A transaction with a 2 of 2 output is committed to the block-chain that is spent by a refund transaction.

If the refund transaction has a locktime 2 years into the future, then it cannot be spent for at least two years.

On the one hand, the refund transaction is unconfirmed.  But on the other hand, there is no risk of its input being double spent.  Both parties are safe to assume that the transaction will eventually be included.

A hard fork which makes the refund transaction invalid effectively steals that output.  At the absolute minimum, there should be a notice period, but it is better to just not have that problem in the first place.

If someone has a 1MB transaction that spends a 2 of 2 output but is locked for 5 years, is it fair to say to them that it is no longer spendable?
I am confused. In your example, it is in the blockchain, and since you have the ability to spend it, why would any fork make it so you can't spend it?

If you are saying there are some 1MB txs that have timelocks in the future and are already confirmed, I am not sure why that is relevant. Clearly all existing tx that are already confirmed would be grandfathered in.

So the limit on tx size (however it is done) would apply to post fork tx.

Sorry to be slow on this, but I don't see what type of unconfirmed tx we need to make sure stays valid post-fork. If it just requires creating a new spend that is less than 1MB in size, that doesn't lose funds, so I don't see the issue.



Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: jl777 on March 20, 2016, 03:45:36 AM
Anyway, other possible soft-fork changes that could prevent confirmation of a currently valid transaction include reduction of the block size limit (as Luke has been demanding), imposing a minimum output value (an antispam measure proposed by Charlie Lee), limtiing the number of inputs and outputs, extending the wait period for spending coinbase UTXOs, and many more
I remember seeing someone post a softfork that allowed issuing more than 21 million bitcoins, so clearly any sort of thing is possible via softfork/hardfork.

Since a hardfork attack (or softfork) can always be attempted, it seems the only defense against something that is wrong is for there to be an outcry about it.

James

P.S. We can avoid the extreme N*N sig tx attack without breaking any existing tx by keeping the per-transaction limit at 1MB; that still avoids the problem getting worse with larger blocks.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 20, 2016, 05:42:44 AM
A hard fork which makes the refund transaction invalid effectively steals that output. 

You mean a soft fork.

A hard fork should not cause that.  It should only make invalid transactions valid, not the other way around.

However, a hard fork could enable a new type of "lock breaking" transaction that allows the locked coins to be spent before the expiration date.  That would invalidate the refund transaction, which would be rejected as a double spend.

I don't know whether such a change would still qualify as a hard fork, though. 


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 20, 2016, 05:50:48 AM
I remember seeing someone post a softfork that allowed to issue more than 21 million bitcoins, so clearly any sort of thing is possible via softfork/hardfork.

I believe the idea was posted first on reddit by /u/seweso . Here is my version of it. (https://np.reddit.com/r/bitcoin_uncensored/comments/43w24e/raising_the_21_million_btc_limit_with_a_soft_fork/)

Quote
Since a hardfork attack (or softfork) can always be attempted, it seems the only defense against something that is wrong is for there to be an outcry about it.

But what would the outcry achieve?

Quote
P.S. We can avoid the extreme N*N sig tx attack without breaking any existing tx by setting the limit to allow 1MB tx, but that still avoids problems from larger blocks

1 MB transactions can already take a long time to validate.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: inca on March 20, 2016, 12:54:38 PM
Quote
No, you can't-- not if you live in a world with other people in it.  The spherical cow "hardforks can change anything" ignores that a hardfork that requires all users shutting down the Bitcoin network, destroying all in flight transactions, and invalidating presigned transactions (thus confiscating some amount of coins) will just not be deployed.

What a load of FUD. How you expect people to take you seriously when you make ridiculous statements like this I will never know..


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 20, 2016, 01:01:38 PM
Quote
No, you can't-- not if you live in a world with other people in it.  The spherical cow "hardforks can change anything" ignores that a hardfork that requires all users shutting down the Bitcoin network, destroying all in flight transactions, and invalidating presigned transactions (thus confiscating some amount of coins) will just not be deployed.

What a load of FUD. How you expect people to take you seriously when you make ridiculous statements like this I will never know..

A hard fork cannot "change anything" that easily, because the proponents must explain the change and convince most miners and most users to upgrade, before the change is activated.

A soft fork, on the other hand, can "change anything" much more easily, because it only needs the agreement of a simple mining majority, who need not inform or convince anyone else beforehand.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: 2112 on March 20, 2016, 03:43:42 PM
Similar difficulties exist in handling an old transaction that was created before a soft fork but was broadcast only after it, and became invalid under new rules.  The rules must have changed for a reason, so the transaction cannot simply be included in the blockchain as such.   For example, suppose that the change consisted in imposing a strict limit to the complexity of signatures, to prevent "costly transaction" attacks.  The miners cannot continue to accept old transactions according to old rules, because that would frustrate the goal of the fork. 
(Note that there is no way for a miner to determine when a transaction T1 was signed.  Even if it spends an UTXO in a transaction T2 that was confirmed only yesterday, it is possible that both T1 and T2 were signed a long time ago.)
Your argument is technically specious. Transactions in Bitcoin have a 4-byte version field, which gives us the potential for billions of rule-sets to apply to old transactions. The correct question to ask is: why wasn't and isn't this changed as the rules get changed?
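
(A minimal sketch of the version-dispatch idea being argued here: pick the rule set by the transaction's 4-byte version field. The rule functions are placeholders, not real consensus code.)

Code:
# Dispatch validation by transaction version so old-format transactions
# keep their old rules. Placeholder rule sets for illustration only.
def validate_v1(tx) -> bool:
    return True   # placeholder: the pre-fork rule set

def validate_v2(tx) -> bool:
    return True   # placeholder: the post-fork rule set

RULESETS = {1: validate_v1, 2: validate_v2}

def validate(tx: dict) -> bool:
    rules = RULESETS.get(tx["version"])
    if rules is None:
        raise ValueError("unknown transaction version")
    return rules(tx)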



Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: amaclin on March 20, 2016, 03:51:48 PM
...because it only needs the agreement of a [...] majority, who need not inform or convince anyone else
Please let me know if you find a globe with different law.  ;D


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: TierNolan on March 20, 2016, 07:23:08 PM
I am confused. In your example, it is in the blockchain and since you have the ability to spend it, then why would any fork make it so you cant spend it?

The spending transaction isn't in the block chain.

You create transaction A and then create the refund transaction B.  B is signed by both parties.  A is submitted to the blockchain.  B has a locktime of 2 years in the future.

A soft fork happens that makes B unspendable for some reason.  Perhaps, it requires signatures signed with the original private keys.  In that case, it is impossible for either party to create the new spending transaction.

This has already happened with the P2SH fork.  If you had happened to create a P2SH-style output before the fork, it could have become unspendable.  On the plus side, I assume they actually checked that there were no such outputs when the fork was proposed.

The key point is that a (chain of) timelocked transactions that are spendable now, should also be spendable in the future.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: amaclin on March 20, 2016, 07:35:13 PM
On the plus side, I assume they actually checked that there were no such outputs when the fork was proposed. 
Sorry, my English is too poor. Who checked what?
Are you sure that these addresses are spendable today?
https://blockchain.info/address/3Dnnf49MfH6yUntqY6SxPactLGP16mhTUq
https://blockchain.info/address/3NukJ6fYZJ5Kk8bPjycAnruZkE5Q7UW7i8


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: jl777 on March 20, 2016, 07:57:59 PM
On the plus side, I assume they actually checked that there were no such outputs when the fork was proposed. 
Sorry, my English is too poor. Who checked what?
Are you sure that these addresses are spendable today?
https://blockchain.info/address/3Dnnf49MfH6yUntqY6SxPactLGP16mhTUq
https://blockchain.info/address/3NukJ6fYZJ5Kk8bPjycAnruZkE5Q7UW7i8
From a practical point of view, if the amounts that are lost are small, then it could be solved via compensation. Practically speaking, it doesn't make sense to me to spend 1000 BTC in costs to make sure 0.001 BTC is preserved, assuming there are good justifications.

But that's just me


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: jl777 on March 20, 2016, 08:01:55 PM
I am confused. In your example, it is in the blockchain and since you have the ability to spend it, then why would any fork make it so you cant spend it?

The spending transaction isn't in the block chain.

You create transaction A and then create the refund transaction B.  B is signed by both parties.  A is submitted to the blockchain.  B has a locktime of 2 years in the future.

A soft fork happens that makes B unspendable for some reason.  Perhaps, it requires signatures signed with the original private keys.  In that case, it is impossible for either party to create the new spending transaction.

This has already happened with the P2SH fork.  If you happened to create a P2SH output, then it would be unspendable.  On the plus side, I assume they actually checked that there were no such outputs when the fork was proposed. 

The key point is that a (chain of) timelocked transactions that are spendable now, should also be spendable in the future.
I see, this was before CLTV, when future-locktime tx couldn't be confirmed.

Theoretically any unspent multisig output could be in this state, and any p2sh output could also have this issue.

But why does the fork have to make the existing outputs unspendable? I know it is possible to make any sort of fork, but who is proposing anything that would make these locktime tx unspendable?


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: TierNolan on March 20, 2016, 08:56:11 PM
But why does the fork have to make the existing outputs unspendable? I know it is possible to make any sort of fork, but who is proposing anything that would make these locktime tx unspendable?

The actual fork under discussion has this property.  Restricting all transactions to 1MB would prevent the O(N²) part of the hashing problem.

Even better would be to restrict transactions to 100kB.  As I understand it, core already considers transactions above 100kB as non-standard.

The benefit of restricting transactions to 100kB should improve things by a factor of 100 (assuming O(N²)).  The problem with doing that is locked transactions.  There could be a locked 200kB transaction that spends some outputs and where an alternative transaction can no longer be created (private keys lost and/or multisig outputs).

A soft fork which restricted transactions to 100kB unless the height is evenly divisible by 100 would be a reasonable compromise here.  Locked transactions can still be spent, but only in every 100th block.  Most likely nobody has 100kB+ locked transactions anyway.
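
(A sketch of that compromise rule, purely illustrative: oversized transactions are valid only in every 100th block.)

Code:
# Oversized transactions are allowed only in every 100th block, so a rare
# large locked transaction can still confirm eventually.
MAX_STANDARD_TX_SIZE = 100_000   # 100 kB

def tx_size_allowed(tx_size: int, block_height: int) -> bool:
    if tx_size <= MAX_STANDARD_TX_SIZE:
        return True
    return block_height % 100 == 0   # escape hatch for locked oversized txs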


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: jl777 on March 20, 2016, 09:04:08 PM
But why does the fork have to make the existing outputs unspendable? I know it is possible to make any sort of fork, but who is proposing anything that would make these locktime tx unspendable?

The actual fork under discussion has this property.  Restricting all transactions to 1MB would prevent the O(N²) part of the hashing problem.

Even better would be to restrict transactions to 100kB.  As I understand it, core already considers transactions above 100kB as non-standard.

The benefit of restricting transactions to 100kB should improve things by a factor of 100 (assuming O(N²)).  The problem with doing that is locked transactions.  There could be a locked 200kB transaction that spends some outputs and where an alternative transaction can no longer be created (private keys lost and/or multisig outputs).

A soft fork which restricted transactions to 100kB unless the height is evenly divisible by 100 would be a reasonable compromise here.  Locked transactions can still be spent, but only in every 100th block.  Mostly likely nobody has 100kB+ locked transactions anyway.
If >100kB is nonstandard, then the odds are very, very high that there are no such pending tx,
and moving forward, CLTV can be used.

Cool idea to have an anything-goes block every 100. It probably isn't an issue, but since it is impossible to know for sure, it's probably a good idea to have something like that. For something that probably doesn't exist, though, 1 in 1000 should be good enough, or just make it nonstandard, and as long as any single miner is mining them it will eventually get confirmed.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: ChronosCrypto on March 20, 2016, 09:06:19 PM
But why does the fork have to make the existing outputs unspendable? I know it is possible to make any sort of fork, but who is proposing anything that would make these locktime tx unspendable?
There could be a locked 200kB transaction that spends some outputs and where an alternative transaction can no longer be created (private keys lost and/or multisig outputs).
There isn't. 100kB is a huge transaction (100 times bigger than a "normal" large transaction). IMO, that's a perfectly acceptable threshold. If larger is needed, you can always create a second transaction.

The "one every 100 blocks" exception really isn't needed here. It's more cool than useful.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: ChronosCrypto on March 20, 2016, 09:37:35 PM
But why does the fork have to make the existing outputs unspendable? I know it is possible to make any sort of fork, but who is proposing anything that would make these locktime tx unspendable?
There could be a locked 200kB transaction that spends some outputs and where an alternative transaction can no longer be created (private keys lost and/or multisig outputs).
There isn't. 100kb is a huge transaction (100 times bigger than a "normal" large transaction). IMO, that's a perfectly acceptable threshold. If larger is needed, you can always create a second transaction.

The "one every 100 blocks" exception really isn't needed here. It's more cool than useful.

So would a hard fork to Classic result in the loss of time-locked coins?
You mean coins that are time-locked in transactions larger than 100kB? That's enormous. Of course there aren't any such coins.

But no, I think Classic has a 1MB transaction-size upper bound, which is a reasonable solution.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: BlindMayorBitcorn on March 20, 2016, 09:45:07 PM
But why does the fork have to make the existing outputs unspendable? I know it is possible to make any sort of fork, but who is proposing anything that would make these locktime tx unspendable?
There could be a locked 200kB transaction that spends some outputs and where an alternative transaction can no longer be created (private keys lost and/or multisig outputs).
There isn't. 100kb is a huge transaction (100 times bigger than a "normal" large transaction). IMO, that's a perfectly acceptable threshold. If larger is needed, you can always create a second transaction.

The "one every 100 blocks" exception really isn't needed here. It's more cool than useful.

So would a hard frok to Classic result in the loss of time-locked coins?
You mean coins that are time-locked in transactions larger than 100kb? That's enormous. Of course there aren't any such coins.

But no, I think Classic has a 1mb transaction-size upper bound, which is a reasonable solution.

Just checking.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 21, 2016, 04:55:24 AM
(Note that there is no way for a miner to determine when a transaction T1 was signed.  Even if it spends an UTXO in a transaction T2 that was confirmed only yesterday, it is possible that both T1 and T2 were signed a long time ago.)
Your argument is technically specious. Transactions in Bitcoin have 4 byte version field, that gives us potential for billions of rule-sets to apply to the old transactions. The correct question to ask: why this wasn't and isn't changed as the rules gets changed?

I am not sure if I understood your comment.  Miners cannot apply old semantics when the transaction has an old version field, because that field can be faked by the clients to sabotage the change.  E.g., suppose that the change imposed a minimum output amount of 0.0001 BTC as a way to reduce spam attacks on the UTXO database.  An attacker could frustrate that measure by issuing transactions with the pre-fork version tag.   Does that answer your comment?


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: 2112 on March 21, 2016, 05:42:54 AM
(Note that there is no way for a miner to determine when a transaction T1 was signed.  Even if it spends an UTXO in a transaction T2 that was confirmed only yesterday, it is possible that both T1 and T2 were signed a long time ago.)
Your argument is technically specious. Transactions in Bitcoin have 4 byte version field, that gives us potential for billions of rule-sets to apply to the old transactions. The correct question to ask: why this wasn't and isn't changed as the rules gets changed?

I am not sure if I understood your comment.  Miners cannot apply old semantics when the transaction has an old version field, because that field can be faked by the clients to sabotage the change.  E.g., suppose that the change imposed a mininum output amount of 0.0001 BTC as a way to reduce spam attacks on the UTXO database.  An attacker could frustrate that measure by issuing transactions with the pre-fork version tag.   Does that answer your comment?
You started writing really weird conflated stuff. What do fees have to do with transaction syntax?

The version field should be used to clearly describe syntax rules governing the transaction format.

The amount of fees doesn't change the syntax, so doesn't require change of the version.

The existing client already has a "misbehavior" score to disconnect itself from other peers that try to abuse it in various ways. There's no point in inventing new mechanisms to do it. All that could possibly be required is to tune the specific values for various misbehavior demerits.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 21, 2016, 06:12:56 AM
You started writing really weird conflated stuff. What do fees have to do with transaction syntax? ... The amount of fees doesn't change the syntax, so doesn't require change of the version.

Sorry, I don't understand your objections.  

There are no "meta-rules" that specify what the validity rules can be.  They are not limited to "syntax", whatever that means.   Any computable predicate on bit strings could in principle be a validity rule, as long as it does not completely break the system.

Right now there are no validity rules that refer to fees.  The minimum fee, like the Pirate Code, "is more what you'd call 'guideline' than actual rule"; each miner decides whether to require it (or even to require more than it).  But the minimum could be made into a validity rule.  The difference would be that each miner would not only impose it on his blocks, but also reject blocks solved by other miners that contain transactions that pay less than that fee.

Quote
The version field should be used to clearly describe syntax rules governing the transaction format.

As I wrote, this cannot be guaranteed.  If a fork (rule change) was executed to fix a bug or prevent an attack, the miners cannot continue to use the old rules for transactions that have the old version tag; that would negate the purpose of the fork.  They must reject such transactions.  

So, it is not safe to retain signed but unconfirmed transactions without broadcasting them.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: jl777 on March 21, 2016, 04:34:45 PM
So, it is not safe to retain signed but unconfirmed transactions without broadcasting them.
What do you mean by safe?

Hypothetically (not suggesting anybody has suggested this), wouldn't a softfork (or hardfork) be able to freeze a specific set of addresses? So KYC could be added to bitcoin via softfork, and only a majority of hashpower needs to be bought/convinced to conduct this softfork attack.

Since a hardfork is much more visible and requires buy-in from the community at large, the softfork attack appears to be much more of a threat than a hardfork attack, but if all the miners switched to a KYC version, along with all the big companies, then this seems a pretty viable attack vector, even as a hardfork.

James


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 21, 2016, 09:37:59 PM
So, it is not safe to retain signed but unconfirmed transactions without broadcasting them.
What do you mean by safe?

I mean that, even if your wallet is bug-free and up-to-date, you cannot be sure that your transaction can be confirmed, until it is; and that risk increases with time -- because soft-fork changes to the protocol can render the transaction invalid.

Since those mothballed transactions are not publicly accessible, there is no way for soft-fork proponents to make sure that they will not be invalidated.  In some cases (such as security or bug fixes), they must be invalidated.  Conversely, those who hold such transactions may not have the private keys or other conditions needed to create valid versions of them.

This may be bad news for the Lightning Network.  The latest attempt at the LN design, IIUC, uses long-lived bidirectional channels, and unconfirmed and unbroadcast transactions ("cheques") that may have to be held by the participants for months or years.  It was already pointed out that fee hikes could cause problems, forcing the receiver of a cheque to pay (via CPFP) the fees that the sender was supposed to pay.  But soft-forks could make the cheque completely unspendable.  Then the receiver would lose all the payments that he received through the channel.  If the channels have 100 year timeouts, maybe both parties would effectively lose all the coins that they put into the channel.

Even if the risk of one cheque being invalidated is low -- say, 1 chance in 1'000'000 -- it may be unacceptable when there are 100'000 people doing 100 transactions per month in the LN.  Moreover, a single change can precipitate many such incidents in a short time.
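
(Back-of-envelope version of that estimate, using the stated hypothetical numbers.)

Code:
# Expected number of invalidated cheques per month under the assumed figures.
users = 100_000
payments_per_month = 100
p_invalidated = 1e-6   # 1 chance in 1'000'000 per held transaction

print(users * payments_per_month * p_invalidated)   # ~10 incidents per month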

Quote
Hypothetically (not suggesting anybody has suggested this), but wouldnt a softfork (or hardfork) be able to freeze a specific set of addresses? so KYC can be added to bitcoin via softfork and only the majority of hashpower needs to be bought/convinced to conduct this softfork attack.

Of course.  A cooperating mining majority can do anything.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: TierNolan on March 21, 2016, 11:06:19 PM
Since those mothballed transactions are not publicly accessible, there is no way for soft-fork proponents to make sure that they will not be invalidated.

Barring emergency fixes, you can make it so that the change depends on the transaction version number.  Any soft fork should be backwards compatible, unless there is a good reason not to.

Quote
In some cases (such as security or bug fixes), they must be invalidated.

Even for security and bug fixes, the objective should be to not make any transactions invalid.  If that isn't possible, then keep the number to a minimum.

Transactions which use an undefined version number are fair game though.

Quote
This may be bad news for the Lightning Network.  The latest attempt at the LN design, IIUC, uses long-lived bidirectional channels, and unconfirmed and unbroadcast transactions ("cheques") that may have to be held by the participants for months or years.

A soft fork which breaks the Lightning Network would have significant opposition.  You are likely much safer if you use transactions of a type that are very popular.  Breaking unusual edge cases is one thing, breaking extremely popular transaction formats is another.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 22, 2016, 03:24:55 AM
Any soft fork should be backwards compatible, unless there is a good reason not to.  ... Even for security and bug fixes, the objective should be to not make any transactions invalid.

That is mathematically impossible.  A soft fork, by definition, is a change that only makes the rules more restrictive: that is, some transactions that were valid by the old rules are invalid by the new ones, whereas all transactions that are valid by the new rules are also valid by the old ones.  

Quote
Barring emergency fixes, you can make it so that the change depends on the transaction version number.

As I explained already, that is often not an option.  Soft forks are often issued precisely because it is necessary or desirable to outlaw certain types of transactions.  Note that miners cannot distinguish a genuine mothballed transaction from a new transaction that is using the old version number just to frustrate the fork.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: TierNolan on March 22, 2016, 11:45:44 AM
That is mathematically impossible.  A soft fork, by definition, is a change that only makes the rules more restrictive: that is, some transactions that were valid by the old rules are invalid by the new ones, whereas all transactions that are valid by the new rules are also valid by the old ones.  

That is why I mentioned using the version field.  People who use undefined versions for their transactions need to accept that there is a risk.

The P2SH soft fork could easily have applied only to the outputs of version 2 (and above) transactions.  The way it was actually done meant that certain outputs could have been made unspendable.  If someone happened to have a locked transaction with a P2SH output, then it would have ended up unspendable.

Similarly, it could have used one of the NOPs as a trigger.  Using undefined NOPs in locked transactions is also a risky thing to do.

Code:
<20 byte hash> OP_P2SH_VERIFY

That would even have used fewer bytes.

It would be worth making a statement of what are reasonable things to do with locked transactions.  Using undefined versions and undefined NOPs would be risky.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: gmaxwell on March 24, 2016, 05:24:27 PM
People who use undefined versions for their transactions need to accept that there is a risk.
The primary established mechanism for safe softforks is the reserved script NOPs (which will not be relayed or mined by unmodified software today); the secondary one is transaction versions. They are reserved explicitly for this purpose. The reason for this ranking is that version is global to all inputs and outputs in a transaction, which creates unwelcome tying-- one should be able to spend and create mixtures of coins under different rule sets. For changes that happen outside script, however, version is still available for use.

For segwit the primary mechanism will be segwit witness script versions... which are more clear and flexible than the reserved NOPs.



Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: watashi-kokoto on March 24, 2016, 07:50:19 PM
Let's talk address format. If I remember correctly, segwit will use P2WPKH (20 bytes) and P2WSH (32 bytes).

The reasoning is that the pay-to-script variant needs to defend against a certain security issue that would otherwise leave it with only 80 bits of security.

But can we improve the alphabet itself? I mean, move from base58 to base56, removing a couple of letters? Perhaps wide ones like W, w, m.

Or even completely drop lowercase. This would provide 32 symbols:

 {ABCDEFGHJKLMNPQRSTUVXYZ123456789}

* 32 is a nice round number
* O, 0, and I are removed because of ambiguity
* W is removed because it is too wide for very narrow low-resolution fonts.

Opinions?
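
(A toy encoder for the proposed 32-symbol alphabet, just to show the mechanics; no version byte, no checksum, and leading zero bytes are not preserved.)

Code:
# Encode bytes into the 32-symbol alphabet proposed above.
ALPHABET = "ABCDEFGHJKLMNPQRSTUVXYZ123456789"
assert len(ALPHABET) == 32

def encode32(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, rem = divmod(n, 32)
        out = ALPHABET[rem] + out
    return out or ALPHABET[0]

print(encode32(bytes.fromhex("89abcdef" * 5)))   # a made-up 20-byte hash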


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 25, 2016, 06:29:30 AM
The primary established mechanism for safe softforks is the reserved script NOPs (which will not be relayed or mined by unmodified software today)

If a fork makes a previously illegal opcode legal, how can it be a soft fork?


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: l8orre on March 25, 2016, 08:00:29 AM
The primary established mechanism for safe softforks is the reserved script NOPs (which will not be relayed or mined by unmodified software today)

If a fork makes a previously illegal opcode legal, how can it be a soft fork?


Good Question - maybe like the issue with being pregnant or not, and trying to skirt the issue by saying it could be possible to be sort of a 'bit' pregnant...
But I am not technically qualified enough to make authoritative statements about details of bitcoin protocol.  ::)


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: CIYAM on March 25, 2016, 08:00:29 AM
The primary established mechanism for safe softforks is the reserved script NOPs (which will not be relayed or mined by unmodified software today)

If a fork makes a previously illegal opcode legal, how can it be a soft fork?

It was not previously illegal, and it will be interpreted as doing nothing by unmodified software (in scripts that might appear in later blocks). So although the unmodified software doesn't know what that op-code does, it won't worry about it as far as validating the script goes (important assuming that the soft-fork succeeds).

Because the unmodified software doesn't know what the NOP is intended to do, however, it won't relay such a script (nor would an unmodified miner mine it). This is because the unmodified software knows enough to know it can't be sure if the script is valid or not.

Got it?

(there is a clear difference between relaying, mining and validating)


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: TierNolan on March 25, 2016, 02:11:05 PM
The primary established mechanism for safe softforks is the reserved script NOPs (which will not be relayed or mined by unmodified software today), the secondary one is transaction versions.

Neither of which is being used for segregated witness.

According to the BIP, it works like P2SH and uses a template.

Code:
OP_1 <0x{32-byte-hash-value}>

If an output is of that format, then it counts as a witness output (the OP_1 can be replaced by other values to give the SW version).

An alternative would be to use

Code:
OP_1 <0x{32-byte-hash-value}> OP_SW_VERIFY

OP_SW_VERIFY would be one of the NOPs.  This would ensure that an output that matches the template would not end up unspendable.

Outputs that don't include a checksig of some kind are already inherently unsafe to spend.  At least the P2SH and SW templates don't include OP_CHECKSIG calls.
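
(A sketch of recognizing the template described above: one small version opcode followed by a single 32-byte push. Opcode values follow standard Script encoding; treat this as illustrative, not consensus code.)

Code:
# Recognize a witness-program-style output: version opcode + one 32-byte push.
def is_witness_program(script: bytes) -> bool:
    if len(script) != 34:
        return False
    version_op, push_len = script[0], script[1]
    # OP_0 is 0x00, OP_1..OP_16 are 0x51..0x60
    if version_op != 0x00 and not (0x51 <= version_op <= 0x60):
        return False
    return push_len == 32   # a direct push of the 32-byte hash

print(is_witness_program(bytes([0x51, 0x20]) + bytes(32)))   # OP_1 <32-byte-hash>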


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 25, 2016, 07:16:48 PM
Because the unmodified software doesn't know what the NOP is intended to do, however, it won't relay such a script (nor would an unmodified miner mine it). This is because the unmodified software knows enough to know it can't be sure if the script is valid or not.

If no miner will mine a transaction that has a NOP code, then the NOP is effectively illegal.  I.e., those lines in the miner's software that say to reject such transactions are effectively part of the validity rules.  

Which means that making those opcodes legal is a relaxation of the existing rules, and therefore not a soft-fork type of change.

Quote
(there is a clear difference between relaying, mining and validating)

Each player can validate as much as he wants, by any rules that he wants.  However, if he wants to use "the" bitcoin that "everybody" uses, he had better use rules that are compatible with them, in the sense that he must trust the same blockchain that they trust.  As long as "everybody" prefers to trust the chain with the 1500 PH/s, "everybody" had better accept as valid whatever chain is created by the miners with the majority of that hashpower.

Likewise, each miner in theory can adopt any validity criteria that he likes.  He can change them at any time, apply them if and when he wants, and build his blocks any way he wants.  But, as long as he wants to earn bitcoins that he can sell, he must make blocks that end up included in some blockchain that enough potential buyers will trust.  There is no algorithm for that: he must watch the "market" and try to guess how the humans will behave.  

My point is that external observers cannot tell which validity rules a miner is using, nor when or whether he applies them.  All they can see are the blocks that he broadcasts.  In particular, there is no way to tell whether a miner is not accepting transactions with NOPs because NOPs are invalid in his version of the validity rules, or because he is afraid that someone else may consider them invalid, or because he thinks that they bring bad luck.

As for the non-mining relay nodes, they are aberrations that have no place in the protocol and break all its (already weak) security guarantees.  They should not exist, and clients should not use them.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: 2112 on March 25, 2016, 07:56:33 PM
As for the non-mining relay nodes, they are aberrations that have no place in the protocol and break all its (already weak) security guarantees.  They should not exist, and clients should not use them.
Non-mining relay nodes have several useful purposes: probably the most important one is as a first line of defense against denial-of-service attacks. Especially if such nodes are run at a cloud service provider that charges $0/GB for incoming traffic (like Amazon EC2): that nearly completely defangs the most common DDoS, the UDP flood.

I have to observe that for somebody with an actual scientific degree you are making questionable statements too fast and too often.



Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: CIYAM on March 26, 2016, 05:42:45 AM
If no miner will mine a transaction that has a NOP code, then the NOP is effectively illegal.  I.e., those lines in the miner's software that say to reject such transactions are effectively part of the validity rules.  

Which means that making those opcodes legal is a relaxation of the existing rules, and therefore not a soft-fork type of change.

Your logic seems to be completely confused - so let's take a practical example to try and help you to understand (although I get the feeling you're not interested in actually understanding this at all).

Let's look at CLTV and see how that works. The NOP code becomes OP_CHECKLOCKTIMEVERIFY once support for the soft fork reaches the required level (determined by a super-majority of miners producing the new block version).

At this point a script that uses this former NOP code now enforces a check that the transaction's nLockTime is greater than or equal to the value on the stack (if not, the script fails).

This is a restriction, not a relaxation (the script can now fail this test). When a node that hasn't upgraded sees this op-code in a block that has been mined, it will still accept the script as valid even though it didn't do the check, and this prevents a fork (the op-code is treated as a NOP by such nodes, and the value that was pushed onto the stack remains as the result, which is non-zero).
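A minimal sketch of that restriction (Python, deliberately simplified; real Script has more failure modes and locktime-type rules than shown here):

Code:
# Toy sketch of the CLTV soft fork: the new rule can only turn "pass"
# into "fail", never the reverse, so old nodes keep accepting.

def run(script, locktime, upgraded):
    stack = []
    for op in script:
        if isinstance(op, int):
            stack.append(op)                      # push a number
        elif op == "OP_CHECKLOCKTIMEVERIFY":      # formerly a reserved NOP
            if upgraded:
                if not stack or locktime < stack[-1]:
                    return False                  # new rule: may now fail
            # old nodes: treat it as a NOP, leave the stack alone
        elif op == "OP_DROP":
            stack.pop()
        elif op == "OP_TRUE":
            stack.append(1)
    return bool(stack) and bool(stack[-1])

script = [500000, "OP_CHECKLOCKTIMEVERIFY", "OP_DROP", "OP_TRUE"]
print(run(script, locktime=600000, upgraded=True))   # True  (lock satisfied)
print(run(script, locktime=400000, upgraded=True))   # False (new nodes reject)
print(run(script, locktime=400000, upgraded=False))  # True  (old nodes accept)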

If it was a hard-fork then existing (non-upgraded) nodes would not find blocks that included txs with such scripts to be valid - but as stated *this is not the case* (and again you need to understand the difference between relaying, mining and validating but again I'm sure you'll just refuse to admit that there is any difference and insist that it is all just validation).

If you want to keep on arguing about definitions without actually bothering to understand how the system works then I don't think that anyone here is going to keep on wasting their time trying to explain it to you.

Bitcoin isn't some sort of theoretical model but instead is a practical piece of software (so it doesn't actually care about what you think the behaviour of things should be according to what you think the terms should refer to).


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: fbueller on March 26, 2016, 06:07:54 AM
As for the non-mining relay nodes, they are aberrations that have no place in the protocol and break all its (already weak) security guarantees.   They should not exist, and clients should not use them.

They actually provide a great deal of value to the network. They keep miners honest by checking blocks for validity, and *not* relaying those which are totally invalid, or make coins up out of the air. If you make a block paying yourself with invalid proof of work, it won't get further than the nodes you announce it to.

Care to explain your position on why they are not to be trusted?


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: gmaxwell on March 26, 2016, 07:35:19 AM
The primary established mechanism for safe softforks is the reserved script NOPs (which will not be relayed or mined by unmodified software today)
If a fork makes a previously illegal opcode legal, how can it be a soft fork?
I would encourage you to read what I actually write. I said nothing of illegal opcodes, and for good reason.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 26, 2016, 11:47:09 AM
As for the non-mining relay nodes, they are aberrations that have no place in the protocol and break all its (already weak) security guarantees.   They should not exist, and clients should not use them.
Non-mining relay nodes have several useful purposes: probably the most important one is as a first line of defense against denial of service attacks. Especially if such nodes are run at a cloud service provider that charges $0/GB for incoming traffic (like Amazon EC2): this nearly completely defangs the most common DDoS, the UDP flood.

Why should those non-mining relays (NMRs) be assumed to be "good guys"?

The original bitcoin protocol (without NMRs) provided some guarantee for simple clients, under the hypothesis that there was a majority of selfish greedy miners, and that the miners contacted by a client included at least one selfish greedy one. With NMRs, in order to give the same guarantee one must also assume that there is at least one path of honest NMRs between the client and such a miner.  The basic premise of bitcoin is that selfish greed is prevalent, but honesty cannot be assumed.
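As a toy illustration of the kind of guarantee being invoked (made-up numbers, and assuming the client picks peers independently at random, with a fraction p of reachable peers being such selfish-greedy miners):

Code:
# Toy calculation: if a fraction p of the peers a client might contact
# are "selfish greedy" miners, and the client contacts n peers
# independently at random, the chance of reaching at least one such
# miner is 1 - (1 - p)**n.  p = 0.30 is an illustration value only.

def p_at_least_one(p, n):
    return 1 - (1 - p) ** n

for n in (1, 4, 8, 16):
    print(n, round(p_at_least_one(0.30, n), 4))
# 0.3, 0.7599, 0.9424, 0.9967 -- and the guarantee weakens further if the
# client must additionally reach that miner through honest relays.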

 


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 26, 2016, 06:34:20 PM
Quote
At this point a script that uses this previous NOP code now enforces a rule check to make sure that the nLocktime is greater than or equal to the value on the stack (if not then the result is zero).

This is a restriction not a relaxation (as it can now fail this test) - when any node that hasn't upgraded sees this in a block that has been mined then it will also accept the script as valid even though it didn't do the check and this prevents a fork (as it is indeed being treated as a NOP for such nodes and the value that was pushed onto the stack is the result which is non-zero).

I understand that perfectly, thank you.  But, IIUC, Greg just claimed that, in fact, the miners running the current version of the software reject any transactions and blocks with such NOPs.  Whereas the miners running the new version will accept them, if the new semantics given to them is satisfied.

If that is true, then old miners, even if they are a minority, should reject the blocks created by the new miners, as soon as they start including the redefined opcode.  Isn't that correct?

Quote
and again you need to understand the difference between relaying, mining and validating

The policies of independent non-mining relay nodes are irrelevant, since they can reject transactions by any criteria they like, and have no incentives to behave in any particular way; and also because clients can (should) always bypass them and talk to miners directly (or to relays run by miners).  The functioning of the network cannot depend on them.

The policies that matter are those of the miners.  It does not make any difference for the network where in the mining software a certain rule is enforced:  if the miners reject transactions that have some property, then that property is effectively part of their validity rules.

If you want to keep on arguing about definitions without actually bothering to understand how the system works then I don't think that anyone here is going to keep on wasting their time trying to explain it to you.

Bitcoin isn't some sort of theoretical model but instead is a practical piece of software (so it doesn't actually care about what you think the behaviour of things should be according to what you think the terms should refer to).

Wait, you are telling me that bitcoin is not "guaranteed by math"?  ;D


Edit: created by the miners --> created by the new miners


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: amaclin on March 26, 2016, 06:50:41 PM
Wait, you are telling me that bitcoin is not "guaranteed by math"?  ;D
It is not guaranteed.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: CIYAM on March 27, 2016, 03:18:24 AM
If that is true, then old miners, even if they are a minority, should reject the blocks created by the new miners, as soon as they start including the redefined opcode.  Isn't that correct?

It was explained to you that the op code is *valid* in the context of validating a block (so of course the block would not be rejected by older software). Why do you think it is called a NOP?

If you still cannot grasp such a very basic concept as how this stuff is working then I suggest you simply stop with your "analysis" of Bitcoin because you will never get it.

(and btw - much of software deals with many practical things such as byte size limits that don't apply to math in general so you're not really making any sense with that statement either - if we were talking about pure maths then the block reward would never end for a start)


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 27, 2016, 04:42:31 AM
It was explained to you that the op code is *valid* in the context of validating a block (so of course the block would not be rejected by older software). Why do you think it is called a NOP?

Sigh.  That is what I always understood.  But then:

The primary established mechanism for safe softforks is the reserved script NOPs (which will not be relayed or mined by unmodified software today)

What does this mean?  That the current version of the mining software will reject transactions that have those NOPs?

Quote
much of software deals with many practical things such as byte size limits that don't apply to math in general so you're not really making any sense with that statement either - if we were talking about pure maths then the block reward would never end for a start

I haven't looked at the code itself, but I do understand a few things about programming.  For example, a few months ago I found a rounding error in the table of block rewards on the bitcoin wiki.  (And integers are math too, you know.)

On the other hand, I wonder if you really understand how the protocol is supposed to work.  Can you see why the original design did not have non-mining relay nodes?


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: CIYAM on March 27, 2016, 04:45:31 AM
It was explained to you that the op code is *valid* in the context of validating a block (so of course the block would not be rejected by older software). Why do you think it is called a NOP?

Sigh.  That is what I always understood.  But what does this mean:

The primary established mechanism for safe softforks is the reserved script NOPs (which will not be relayed or mined by unmodified software today)

We have tried again and again to explain but you simply refuse to even read what we post (and just keep on quoting one or another thing to keep on repeating your non-point) - so I'm done with trying to explain things to you (this is also off-topic anyway).

I would advise that you don't try to make suggestions about how Bitcoin should work when you don't even understand how it currently works (or how it used to work for that matter as relaying has been there since day one).

(I have read the code and have been a software engineer since the 1980's)


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 27, 2016, 05:00:16 AM
We have tried again and again to explain but you simply refuse to even read what we post (and just keep on quoting one or another thing to keep on repeating your non-point) - so I'm done with trying to explain things to you (this is also off-topic anyway).

It would have been enough to say "your previous understanding was correct, the current mining software will accept and mine transactions containing those NOPs, and that sentence by Greg meant something else"  or "your previous understanding was wrong, the current mining software will not accept or mine transactions containing those NOPs". 

Quote
(I have read the code and have been a software engineer since the 1980's)

I have been programming continuously since 1969, and worked for a few years at software research labs in the US...


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: CIYAM on March 27, 2016, 05:04:50 AM
One last time and then I really do give up.

It is not an "either this or that" and that is where you are just getting it wrong (repeatedly).

As stated (on three separate posts already) validation isn't just a simple and single concept.

There are different validation rules depending upon whether you are mining, relaying or verifying a block (i.e. context).

So the rule about a NOP is not the same rule in all three situations (something that you just don't seem to be able to grok).

(also - I am assuming that you know that both txs and blocks are relayed and that the rules are also dependent upon whether you are relaying a tx or a block)


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: 2112 on March 27, 2016, 05:22:25 AM
I haven't looked at the code itself, but I do understand a few things about programming.  For example, a few months ago I found a rounding error in the table of block rewards on the bitcoin wiki.  (And integers are math too, you know.)

On the other hand, I wonder if you really understand how the protocol is supposed to work.  Can you see why the original design did not have non-mining relay nodes?
OK, now you've shifted your position to open crackpottery.

The original implementation certainly had non-mining relay nodes. In the original implementation mining (then CPU-only) was explicitly optional. The shift between the original and the current situation is just that nowadays the probability that a randomly connected relay node also mines is much lower.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 27, 2016, 05:35:05 AM
As stated (on three separate posts already) validation isn't just a simple and single concept.
There are different validation rules depending upon whether you are mining, relaying or verifying a block (i.e. context).
So the rule about a NOP is not the same rule in all three situations (something that you just don't seem to be able to grok).

Thanks. Well, indeed, the fact that the validity rules are different for different players is new to me.  Is it explained in some place that I should have read?  (The descriptions of soft/hard forks that I have read, for example, always say "the rules", "the old rules", "the new rules" -- never "the miner rules" or "the client rules", etc.)

So, does the current version of the Core mining software accept transactions with those NOPs, for inclusion into the candidate block? 

Would that software accept as parent a block mined by some other miner that contains them?

Quote
(also - I am assuming that you know that [ ... ] the rules [ for relay nodes ] are also dependent upon whether you are relaying a tx or a block)

No, I did not know that either.  But, again, one cannot assume anything about the behavior of non-mining relay nodes, so it seems pointless to worry about that.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: molecular on March 27, 2016, 05:37:11 AM
It was explained to you that the op code is *valid* in the context of validating a block (so of course the block would not be rejected by older software). Why do you think it is called a NOP?

It's called a "No OPeration" because it doesn't do anything (except maybe wait a clock cycle or so) in the context of script execution.. So it will not influence the result of the script execution. However there could still be a consensus rule that says: scripts shall not contain NOP instructions.

What exactly is the "context of validating" a block? Validating it for whether it should be included in finding the longest chain? So are you saying a block containing a tx with a NOP will be accepted as part of the longest valid chain by core 0.11, 0.12?

There are different validation rules depending upon whether you are mining, relaying or verifying a block (i.e. context).

So the rule about a NOP is not the same rule in all three situations (something that you just don't seem to be able to grok).

I understand that. For example BitcoinUnlimited will not relay some blocks if they are too big for the values given in configuration, yet it will still accept them as valid blocks for mining.

The primary established mechanism for safe softforks is the reserved script NOPs (which will not be relayed or mined by unmodified software today)

so gmaxwell here seems to be saying that in the context of relaying a tx, relaying a block and mining, a tx containing a NOP instruction (is that any different from "a reserved script NOP", btw?), or a block containing a tx containing a NOP instruction, will not be relayed (in the case of a block or tx) and will not be considered when finding the 'longest' chain to mine on? That seems hard to believe. So I'm putting a NOP into a script and that means the tx won't be relayed by standard core nodes and it won't be mined by a miner who runs a core node? Is that what he's saying? Is it true?



Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: CIYAM on March 27, 2016, 05:39:25 AM
So, does the current version of the Core mining software accept transactions with those NOPs, for inclusion into the candidate block?  

No - it would not (as it could not know whether or not they are valid).

Would that software accept as parent a block mined by some other miner that contains them?

Yes it would as it assumes that whoever has mined them did know that they were valid.

Are you starting to get it now?
(fingers crossed that the penny has dropped)

Also without relaying you actually wouldn't have a P2P network at all (so it is an essential part of the system not some optional thing) and there are expectations that a node will not relay invalid txs or blocks (and any node that does will be banned by your client).


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: molecular on March 27, 2016, 05:46:43 AM
Are you starting to get it now?
(fingers crossed that the penny has dropped)

I know you're not talking to me, but I think I understand now: the NOP really is something that invalidates a tx in the context of relaying and mining it, because the assumption is that this NOP isn't there for fun; it really is not a NOP for the author of the tx or the miner of it. I just don't know the new meaning of the NOP so I don't touch the tx with a ten-foot pole... who knows whether that script returns true or false?

On the other hand a NOP in a tx in a block already mined by someone else is not offensive: the assumption is that it was assigned some meaning I don't yet know about, and the block was mined by a miner that *does* know about that new meaning and has verified the tx's validity.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: CIYAM on March 27, 2016, 05:50:18 AM
Yes @molecular that is precisely what I've been trying to explain.

In the case of CLTV we can't know whether or not a tx that uses this new (reserved NOP) is valid if we are running older software but once a block appears that has this op-code in it then we just treat it as NOP (assuming that whoever mined it knew that it was valid).

Whether or not we relay such txs isn't actually overly important - the key thing is whether or not such txs are included in a block that you mine.

This allows someone to run the older software without being forced to upgrade (whereas a hard-fork requires you to upgrade).


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: molecular on March 27, 2016, 11:12:53 AM
Yes @molecular that is precisely what I've been trying to explain.

A clear way for me to explain the seeming contradiction ("NOP is ok" vs. "NOP invalidates tx") is not so much by looking at the procedural context we're in (mining, relaying, validating) but at the source of that tx: if it came from the p2p network, a NOP represents an unknown and we can't trust the tx to be valid; if the source is a block in the chain we trust, we can trust the script with that NOP (by extension).


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: achow101 on March 27, 2016, 02:41:36 PM
Some things need to be clarified here.

What a node relays is a transaction that is both valid and standard. There are standardness rules, and those are node-based, not network-wide (in practice most nodes share them because they ship with the software), and they are not consensus rules -- solely local node policy. If a transaction is standard, it is also valid.

A valid transaction is any transaction which, when the scripts are run, returns true and nothing in the stack triggers a failure. A valid transaction does not have to be standard.

AFAIK a transaction with a NOP is considered non-standard but it is still valid. A NOP is a No Operation, so nothing is done when it is being validated. It can still validate true and be a valid transaction; it's just that, because of the NOP, a node will call it non-standard and thus reject the transaction. If the transaction makes it into a block, then the miner thought it was standard. The transaction is still valid, so when validating the block a node will find that the block is valid, just that it contains some non-standard but still valid transactions.
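A minimal sketch of those two layers (Python; the transaction representation and the exact policy checks are illustrative, not Core's actual code):

Code:
# Minimal sketch of consensus validity vs. local standardness policy.
# The dict fields and the specific checks are made up for illustration.

def is_valid(tx):
    # Consensus: the scripts execute successfully.  Reserved NOPs do
    # nothing during execution, so they cannot make a script fail here.
    return tx["scripts_evaluate_true"]

def is_standard(tx):
    # Local policy: stricter and node-configurable, not network-wide.
    # A node refuses reserved NOPs because it cannot know what a future
    # soft fork will make them mean.
    return tx["scripts_evaluate_true"] and not tx["uses_reserved_nop"]

tx_with_nop = {"scripts_evaluate_true": True, "uses_reserved_nop": True}

print(is_valid(tx_with_nop) and is_standard(tx_with_nop))  # relay/mine it? False
print(is_valid(tx_with_nop))                                # accept it in a block? True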


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: CIYAM on March 27, 2016, 02:49:35 PM
A valid transaction is any transaction which, when the scripts are run, returns true and nothing in the stack triggers a failure. A valid transaction does not have to be standard.

I think this is the crux of the misunderstanding that others have been having. Valid and standard are not the same thing (but something that is standard is also valid).

Because Bitcoin has introduced an entirely new paradigm (that of P2P consensus) it has had to introduce some new software concepts and I think at this stage this is not being well understood (perhaps not surprisingly).


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 27, 2016, 05:46:27 PM
Would that software accept as parent a block mined by some other miner that contains them?

Yes it would as it assumes that whoever has mined them did know that they were valid.

That sounds bizarre if you say it that way: it would mean that a miner *cannot* verify the validity of the parent block, and has to trust the previous miner.

But perhaps what you should have said is: "Yes, it would validate that block fully -- with his (old) validity rules, in which those opcodes are NOPs, hence it would accept the block as valid."  Is that so?  

That is what I have always understood...

Quote
Does the current version of the Core mining software accept transactions with those NOPs, for inclusion into the candidate block?  

No - it would not (as it could not know whether or not they are valid).

This is not wrong, since a miner can exclude any transactions, by any criterion.  However, given that old miners will accept blocks with those opcodes as valid, it seems rather pointless.  There is no way to ensure that all (or any) miners will abide by this rule.  Any miner could create and mine blocks that use those opcodes, and other old miners would accept those blocks.  Correct?

By the way, are those NOPs really NOPs, or do they mean "terminate the verification with success"?  

Quote
Also without relaying you actually wouldn't have a P2P network at all (so it is an essential part of the system not some optional thing)

In the original design, the propagation was supposed to occur among the miners, who have incentives to keep the network running and to keep clients happy.  The original design depended on those incentives to argue that even simple clients would have adequate security.

The non-mining relay nodes were added later, without any justification.  They break that security argument.

A miner may be offering his services for two possible reasons: 1. to get the reward and fees, or 2. some other motive.  Bitcoin is based on the assumption that motive 1 is fairly likely, so if you contact N random miners it is quite likely that you will get at least one such "selfish greedy" guy.  

A non-mining relay node cannot have motivation 1, so his motivation can be only 2, "some other motive".  What security can you derive from that?

Quote
and there are expectations that a node will not relay invalid txs or blocks (and any node that does will be banned by your client).

If the party that receives a tx or block checks some rule, then nothing is gained by having the relay check that rule too.  If the receiver does not check some rule, then he will not notice when the relay sends data that does not satisfy it, and will not ban the relay.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: CIYAM on March 27, 2016, 05:55:47 PM
The non-mining relay nodes were added later, without any justification.  They break that security argument.

Rubbish.

I'm sorry but you have not tried to understand one thing but instead tried to just post rubbish again and again and again.

I would recommend that you retire as you have zero understanding of this technology and we have zero tolerance for your ignorance.

(and I 100% expected that you would post such a response as I know that you are just a paid shill - seriously you should be ashamed of yourself - if you have any self-respect at all then just stop posting)

(even your grandchildren would be saying "don't post more crap grandpa")


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 27, 2016, 06:07:46 PM
The non-mining relay nodes were added later, without any justification.  They break that security argument.

Rubbish.

I'm sorry but you have not tried to understand one thing but instead tried to just post rubbish again and again and again.

I suppose that you will not tell me where in the original design the non-mining relays were introduced as guardians of the Sared Protocol.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: CIYAM on March 27, 2016, 06:09:09 PM
I suppose that you will not tell me where in the original design the non-mining relays were introduced as guardians of the Sared Protocol.

The what?

You have just made yourself look even stupider than I could have possibly made you out to be with that (enjoy being an old fool - and at least I have saved others from wasting their time with you which I had realised needed to be addressed some time before which is why I bothered to do this).

If you want to leave this forum with any respect left at all then you'd best not reply. :D


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 27, 2016, 06:38:01 PM
I suppose that you will not tell me where in the original design the non-mining relays were introduced as guardians of the Sared Protocol.

The what?

"Sacred Protocol".  Sorry for the typo.

Quote
You have just made yourself look even stupider than I could have possibly made you out to be with that (enjoy being an old fool - and at least I have saved others from wasting their time with you which I had realised needed to be addressed some time before which is why I bothered to do this).
If you want to leave this forum with any respect left at all then you'd best not reply. :D

Right when the tone of the responses makes me think that I am getting close to something?  ;D

By the way, you have not told me where I can find a description of the multiple rule sets for different players, nor where in the original design the non-mining relays were described and given their special function.

(Satoshi always used "node" to mean "miner", if that is what you were thinking.)
 


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: achow101 on March 27, 2016, 06:39:48 PM
But perhaps what you should have said is: "Yes, it would validate that block fully -- with his (old) validity rules, in which those opcodes are NOPs, hence it would accept the block as valid."  Is that so?
 
Yes, that is so.

By the way, are those NOPs really NOPs, or do they mean "terminate the verification with success"?  
They are actually NOPs. They mean "do nothing" not "terminate with success".


In the original design, the propagation was supposed to occur among the miners, who have incentives to keep the network running and to keep clients happy.  The original design depended on those incentives to argue that even simple clients would have adequate security.

The non-mining relay nodes were added later, without any justification.  They break that security argument.
Nope, not true. The original Bitcoin Client (v0.1.0) did not have mining enabled by default. When you downloaded 0.1.0 it defaulted to being a non-mining relay node, with an option in the GUI to enable mining if you so desired. In fact Satoshi even said so himself in the original 0.1.0 announcement email: http://www.metzdowd.com/pipermail/cryptography/2009-January/014994.html.

You can examine the source code of 0.1.5 (the earliest available on github) at https://github.com/bitcoin/bitcoin/tree/v0.1.5 and you can get the original 0.1.0 client with the source code from http://satoshi.nakamotoinstitute.org/code/.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 27, 2016, 07:19:42 PM
They are actually NOPs. They mean "do nothing" not "terminate with success".

Thanks.  I don't know the script language, but can't you build a script with those opcodes that fails to validate if the opcode is interpreted as a NOP, but succeeds if it is redefined to something else?  I suppose that the only redefinitions that would allow a soft fork are of the kind "if (condition) then FAIL else NOP", correct?

Quote
Nope, not true. The original Bitcoin Client (v0.1.0) did not have mining enabled by default. When you downloaded 0.1.0 it defaulted to being a non-mining relay node, with an option in the GUI to enable mining if you so desired. In fact Satoshi even said so himself in the original 0.1.0 announcement email: http://www.metzdowd.com/pipermail/cryptography/2009-January/014994.html.

You can examine the source code of 0.1.5 (the earliest available on github) at https://github.com/bitcoin/bitcoin/tree/v0.1.5 and you can get the original 0.1.0 client with the source code from http://satoshi.nakamotoinstitute.org/code/.

OK, I see what you mean.

Back in 2009 there was only one kind of node, the miner-user-relayer.  Every player could mine; not just in theory, but in practice.  Satoshi apparently assumed that mining would be widespread, even if occasional and anonymous.  And he assumed that players would join primarily for the benefits of being users.

Later a second kind of node (already predicted in the whitepaper) was introduced, the simple client.  Still, the simple client was supposed to use the same rules as the miners, but only check a subset of them.  

Today there is a third kind, the dedicated "full but non-mining" relay, aka "node", which apparently has a very distinct role and is supposed to be an essential defense against misbehaving miners and other menaces.  And, TIL, needs special validity rules.  

The relaying function of miners in the original design was already a bit fuzzy, but since relaying was cheap, and all nodes were supposed to also be users and (potential) miners, they could be assumed to share the mining incentives and to have good intentions.  That does not carry over to the present-day nodes.  We have examples of relay nodes that filter transactions based on arbitrary ideological criteria, nodes that are committed to specific "parties" in the block size war...  Where is the analysis that they are helping rather than harming the security of the network?

PS. For general joy of mankind, I am traveling in a few hours and will be offline for 2 days.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: achow101 on March 27, 2016, 07:43:50 PM
Thanks.  I don't know the script language, but can't you build a script with those opcodes that fails to validate if the opcode is interpreted as a NOP, but succeeds if it is redefined to something else?  I suppose that the only redefinitions that would allow a soft fork are of the kind "if (condition) then FAIL else NOP", correct?
Yes. You can see this in OP_CLTV: https://github.com/bitcoin/bips/blob/master/bip-0065.mediawiki#summary and OP_CSV (upcoming): https://github.com/bitcoin/bips/blob/master/bip-0112.mediawiki#summary

Today there is a third kind, the dedicated "full but non-mining" relay, aka "node", which apparently has a very distinct role and is supposed to be an essential defense against misbehaving miners and other menaces.  And, TIL, needs special validity rules.  
No, a full node (non-mining relay) does the exact same thing as a miner when it comes to validation but it simply doesn't produce blocks. The "special validity rules" are not consensus rules unlike the validation rules which are consensus rules. Those rules are called standardness rules and both miners and full nodes have them. The standardness rules are local node policy so they tend to change more often than consensus rules because if something is non-standard it can still be valid.

Full nodes are even more important nowadays due to the prevalence of SPV mining. Many miners now aren't running full nodes, meaning they are not fully validating every single block and transaction they receive. The only nodes that do this now are the full nodes and they are what are enforcing the consensus rules because most miners aren't doing it and SPV wallets cannot. These full nodes protect against both major mining screw-ups like the July 4th fork and malicious miners.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: amaclin on March 27, 2016, 07:51:08 PM
Many miners now aren't running full nodes, meaning they are not fully validating every single block and transaction they receive.
;D Sure?
Edit: What do we mean by 'miners' - asic owners or pool admins?


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: CIYAM on March 28, 2016, 06:41:00 AM
"Sacred Protocol"

Bitcoin is not a religion and has no such thing - as has been pointed out many times to you already the answers are *in the code* which you can find here: https://github.com/bitcoin/bitcoin


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: TooDumbForBitcoin on March 29, 2016, 11:32:24 PM
Hard to believe jstolfi can be made into a sympathetic character, but this clown SEE, I AM A CLOWN is doing a pretty good job.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: amaclin on March 30, 2016, 05:14:43 AM
Bitcoin is not a religion and...
Sure?  ;D
All attributes exist.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: BitUsher on March 30, 2016, 10:49:54 AM
Segnet 4 is out. Time to spin up a node and help test more-

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-March/012595.html



Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 30, 2016, 02:16:46 PM
Thanks for the reply.  You write:

The "special validity rules" [ of relay nodes ] are not consensus rules unlike the validation rules which are consensus rules. Those rules are called standardness rules and both miners and full nodes have them. The standardness rules are local node policy so they tend to change more often than consensus rules because if something is non-standard it can still be valid.

The word "standardness" implies that someone is setting a standard that should be followed by all nodes; but it is known that different relay nodes are using different arbitrary criteria to censor transactions and blocks.

Quote
Full nodes are even more important nowadays due to the prevalence of SPV mining. Many miners now aren't running full nodes, meaning they are not fully validating every single block and transaction they receive. The only nodes that do should do this now are the full nodes and they are should be what are enforcing the consensus rules because most miners aren't doing it and SPV wallets cannot.

Fixed that for you.  They should validate the blocks that they receive, but (unlike the miners) they have no motivation to do so, and there is no way for a client to check that they are.

The miners have a strong incentive to use only valid blocks as parents; when they gamble, as in the (badly named) "SPV mining" of empty blocks, it is because they have high confidence that the parent block is valid.  In the case of classic "SPV mining", they steal the hash of the parent block from another pool via Stratum.  Therefore, if that block was invalid, that other pool would be screwing all its members and itself.  In other words, when miners do "SPV mining" they gamble that the same incentives that spur them to validate their non-empty blocks will spur the miner who assembled the previous non-empty block to validate it too.

Non-mining relay nodes have no incentives to validate the blocks that they relay.  On the contrary, they have a significant incentive to skip validation altogether. 

Quote
These full nodes should protect against [ ... ] malicious miners.

How exactly are they going to do that?  A relay node can only withhold a mined block that it considers invalid.  But if the malicious miners have a minority of the hashpower, that block will not get many confirmations, and will soon be orphaned and rejected even by SPV clients.  On the other hand, if it comes from a majority branch, then sooner or later the SPV clients will receive it and accept it as valid, rejecting the "good" minority block that some relays offered instead.  And, in that case, bitcoin is done for anyway: nothing will save it if malicious miners have a majority of the hashpower.

On the other hand, a relay can withhold blocks from the majority branch and serve instead blocks from a minority branch, created by old, buggy, or malicious miners.  If all the relay nodes that are contacted by an SPV client do that, the SPV client will be screwed.  Given the way that clients get the addresses of relay nodes, and that such malicious relays are very easy to set up (much easier than setting up an effective malicious miner), this risk is far from negligible.

Perhaps the faith that some people have in relay nodes comes from experience with SMTP, NNTP, BitTorrent and other decentralized p2p networks, where nodes can (must) be assumed to be well-intentioned by default? Bitcoin is quite unlike those networks...

Quote
These full nodes protect should protect against major mining screw ups like the July 4th fork

The blame for that incident should go to the Core devs, who chose to deploy BIP66 when 5% of the miners were still not ready, without a grace period and without alerts.  The miners should be blamed only for trusting that the devs knew what they were doing.

It seems that the miners did learn their lesson, because at the next soft fork (BIP65, IIRC) AntPool held back its vote until most of the tiny miners had upgraded, and prodded them to do so.  I wonder if the Core devs have learned theirs?


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: TooDumbForBitcoin on March 30, 2016, 02:21:04 PM

disinterested skeptical academic discussion


Where is our promised vacation?  


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: CIYAM on March 30, 2016, 02:32:04 PM
The word "standardness" implies that someone is setting a standard that should be followed by all nodes; but it is known that different relay nodes are using different arbitrary criteria to censor transactions and blocks.

Again - a total misunderstanding of the terminology and how Bitcoin works.

If you are seriously at all interested in what Standard and Valid transactions are then you should read the code rather than post nonsensical things like you just did.

It is rather clear to me (and others) that you are not interested to understand anything but instead to try and push the agenda of those that are attacking the Bitcoin core devs.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: spartacusrex on March 30, 2016, 04:37:06 PM
"Sacred Protocol"

Bitcoin is not a religion and has no such thing - as has been pointed out many times to you already the answers are *in the code* which you can find here: https://github.com/bitcoin/bitcoin


LOL..

'.. Look - I've told you already.. The needle you're looking for is in that haystack!.. Now GGOOO!. FFOOOOLL!!!'

..

Why you always so angry CIYAM.. ? You weren't always this way.. way back when..  ???

You really should learn to stop posting in threads that annoy you. Live longer.

It's obvious to anyone with half a brain jstolfi has a deep understanding of Bitcoin, and he is making some valid points. (Plus I like his clear un-emotional posts.. )  ;D


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: exstasie on March 30, 2016, 06:22:07 PM
Why you always so angry CIYAM.. ? You weren't always this way.. way back when..  ???

I imagine like many others around here, he's getting frustrated at the never-ending campaigns of disinformation used to attack Core devs.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: spartacusrex on March 30, 2016, 06:41:56 PM
Why you always so angry CIYAM.. ? You weren't always this way.. way back when..  ???

I imagine like many others around here, he's getting frustrated at the never-ending campaigns of disinformation used to attack Core devs.

Core devs rock. Period.

But there is a difference between attacking and disagreeing with.

That distinction seems to have been lost somewhere.. and it's an important one.



Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: exstasie on March 30, 2016, 07:21:43 PM
Why you always so angry CIYAM.. ? You weren't always this way.. way back when..  ???

I imagine like many others around here, he's getting frustrated at the never-ending campaigns of disinformation used to attack Core devs.

Core devs rock. Period.

But there is a difference between attacking and disagreeing with.

That distinction seems to have been lost somewhere.. and it's an important one.

There is a difference between "honest debate" and "campaigns of disinformation." And it's an important distinction to make. Notice that I said disinformation to suggest the intentional and deliberate nature of the false information being spread. No matter how many times the unproven fear mongering/misinformation [about Segwit/soft fork capabilities, Blockstream control of bitcoin development, the catastrophic implications of "full blocks," the immediate threat of altcoins overtaking bitcoin, etc.] is refuted, there is a distinct group of shills that simply repeat it. Ad nauseam. All over bitcointalk, twitter, reddit. That includes Gavin Andresen and Brian Armstrong (Coinbase employees/executives pushing for shareholder interests).

Perhaps more importantly: I firmly believe that releasing an incompatible/adversarial client that intends to fork the network without consensus--and still call itself "bitcoin"--is an attack on the network. I think deliberately spreading disinformation--and it's become pretty clear to any intelligent person that this is what's happening--is an attack on bitcoin and the developers who maintain it. That's without taking into account the incessant personal attacks and accusations of "blocking development."

We can agree to disagree.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: 2112 on March 30, 2016, 08:15:19 PM
It's obvious to anyone with half a brain jstolfi has a deep understanding of Bitcoin, and he is making some valid points. (Plus I like his clear un-emotional posts.. )  ;D
Deep understanding of Bitcoin? It was disproved a few pages ago by knightdk. JorgeStolfi clearly has no elementary comprehension of the source code of Bitcoin.

Yet he continues to post trivialities like:
And, in that case, bitcoin is done for anyway: nothing will save it if malicious miners have a majority of the hashpower.
which was already stated in Satoshi's original whitepaper. This isn't deep. This is as shallow as it gets.

CIYAM tends to think that JorgeStolfi is some sort of paid disinformation operative. From my personal experiences with peer-review I would venture to guess that he may be heavily medicated or have some sort of amnesia. Such people tend to have good recall of the recent facts and recently acquired knowledge but start to have problems with recall and application of facts and knowledge acquired years ago.



Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: JorgeStolfi on March 31, 2016, 10:35:34 AM
The word "standardness" implies that someone is setting a standard that should be followed by all nodes; but it is known that different relay nodes are using different arbitrary criteria to censor transactions and blocks.
Again - a total misunderstanding of the terminology and how Bitcoin works.

If you are seriously at all interested in what Standard and Valid transactions are then you should read the code

I don't plan to read the code, and I should not have to. The payment system of the world cannot be defined by a (messy) program.   Bitcoin should be an implementation-independent protocol, like SMTP and HTML. Anyone with sufficient knowledge of algorithms and networking should be able to understand how it works without reading the code.

And there should not be "the" code, since the maintainers of that code would be a central authority.

Whatever "standard" means, you cannot assume that miners will not create non-standard blocks that are accepted as valid by other miners and clients.  You cannot assume that every miner will run unmodified Core software; and you cannot assume that non-mining relay nodes will do anything specific.

Indeed I have little love for the current Core devs, for a dozen separate reasons.  To stay on the technical ones, they include (1) the claim that soft forks are safer than hard forks, (2) the "fee market" and its paraphernalia, (3) SegWit, (4) reliance on and encouragement of non-mining relays, (5) modifying bitcoin as if the Lightning Network were a certain thing.  Not me, but some very competent people (who also have read the code) have objectively pointed out the faults in those items; but their criticisms have never been answered.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: CIYAM on March 31, 2016, 10:59:27 AM
I don't plan to read the code, and I should not have to. The payment system of the world cannot be defined by a (messy) program.   Bitcoin should be an implementation-independent protocol, like SMTP and HTML. Anyone with sufficient knowledge of algorithms and networking should be able to understand how it works without reading the code.

That is something that even Mike Hearn knew is *simply impossible* for Bitcoin (I remember a topic about this from around 3 years ago in which I had actually argued for the creation of such a protocol specification and he convinced me otherwise).

It is fine for something like SMTP to have implementation faults as about the worst thing that might happen is you miss an email - but if you make any implementation fault with Bitcoin then people will lose money.

The C++ code IS the specification and if you refuse to read it then I'm sorry but you are NEVER going to understand Bitcoin (and it isn't up to us to educate you).

There are non-node implementations written in other languages so maybe you might be able to read the code of one of those (am guessing C++ is perhaps too difficult for you) but you need to understand that only "libconsensus" (written in C++) holds the key to Bitcoin (other language implementations are not used for mining but only for wallets).

The very language C++ itself would need to form a part of any such specification (as every single little C++ nuance is relevant) which would make any such document ridiculously large (which is why you are never going to see that).

You could of course take a look at Mastering Bitcoin (which I believe can even be found online).


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: amaclin on March 31, 2016, 12:06:09 PM
Anyone with sufficient knowledge of algorithms and networking should be able to understand how it works without reading the code.
And there should not be "the" code, since the maintainers of that code would be a central authority.

Is there a reason to explain C++ code in any other [human or computer] language?


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: Jet Cash on March 31, 2016, 02:43:33 PM
Have any conclusions about SegWit come out of this thread?


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: 2112 on March 31, 2016, 03:31:47 PM
Anyone with sufficient knowledge of algorithms and networking should be able to understand how it works without reading the code.
And there should not be "the" code, since the maintainers of that code would be a central authority.

Is there a reason to explain C++ code in any other [human or computer] language?

Of course. Machine verification. It could be even a subset of C++ like SystemC. But the proper definition of consensus-critical portion should be machine-verifiable.

I'm pretty sure that at least gmaxwell understands the importance of that. He's an advocate of zk-SNARKs and those have a synthesis of logic circuit equivalent to the given program as one of the intermediate steps.

The zk-SNARK people in Israel designed and implemented some subset of C (not C++) to facilitate logic synthesis. It is a key step to machine verification of the code.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: amaclin on March 31, 2016, 03:46:52 PM
Of course. Machine verification. It could be even a subset of C++ like SystemC.
But the proper definition of consensus-critical portion should be machine-verifiable.
It is not possible to create an algorithm for verifying turing-complete structures.

In other words.
Imagine that you have a tool which produces '1' if the realization matches consensus and '0' if the realization is wrong and can produce hard-forks and earthquakes.
Who is responsible for bugs in this tool?  ;D How would you check it? With another tool?

And the second objection.
We do not need to set the consensus code in stone. The majority always has the right to change anything in the consensus.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: 2112 on March 31, 2016, 04:09:54 PM
It is not possible to create an algorithm for verifying turing-complete structures.

In other words.
Imagine that you have a tool which produces '1' if the realization matches consensus and '0' if the realization is wrong and can produce hard-forks and earthquakes.
Who is responsible for bugs in this tool?  ;D How would you check it? With another tool?

And the second objection.
We do not need to set the consensus code in stone. The majority always has the right to change anything in the consensus.
From the past experience talking with you I think you are just pretending to be a dumbass. But it is also possible that you don't understand the difference between the old stopping problem and the automated logical equivalency verification like the one used by ARM to verify the implementations of their namesake architectures.

Every CAD/EDA tool vendor has tools to do automated verification and automated test vector generation. The obvious problems are:

1) those tools are closed source
2) those tools are very expensive
3) the input languages are hardware oriented: Verilog & VHDL mostly, with only recent additions of SystemC or similar high-level-synthesis tools.

That isn't even anything new like zk-SNARKs. Those tools have been in use and on the market for more than 10 years. I used some early prototypes of those tools (based on LISP) in school back in the 20th century.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: amaclin on March 31, 2016, 04:15:02 PM
But it is also possible that you don't understand the difference between the old stopping problem
and the automated logical equivalency verification like the one used by ARM to verify the
implementations of their namesake architectures.
Of course, I understand the difference between turing-complete and non-turing-complete structures.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: 2112 on March 31, 2016, 05:16:38 PM
Of course, I understand the difference between turing-complete and non-turing-complete structures.
Good. For those unfamiliar with that area of science here are good links to start their research:

https://en.wikipedia.org/wiki/High-level_synthesis
https://en.wikipedia.org/wiki/High-level_verification



Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: jl777 on March 31, 2016, 05:46:57 PM
But it is also possible that you don't understand the difference between the old stopping problem
and the automated logical equivalency verification like the one used by ARM to verify the
implementations of their namesake architectures.
Of course, I understand the difference between turing-complete and non-turing-complete structures.
is bitcoin turing complete?


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: CIYAM on March 31, 2016, 05:48:14 PM
is bitcoin turing complete?

If you supposedly know anything about Bitcoin then of course you would know the answer to that - wouldn't you?

(btw - are you still using binary floating point for monetary calculations?)


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: amaclin on March 31, 2016, 06:29:53 PM
is bitcoin turing complete?
What do you mean? Bitcoin script language is not turing complete today.
But we are not discussing the scripts themselves; we are discussing a program which can validate a script processor and say whether it is consensus-compatible (equal to the current C++ reference code) or not

(btw - are you still using binary floating point for monetary calculations?)
Have you stopped drinking cognac in the mornings?


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: jl777 on April 01, 2016, 05:11:47 AM
is uᴉoɔʇᴉq turing complete?

If you supposedly know anything about uᴉoɔʇᴉq then of course you would know the answer to that - wouldn't you?

(btw - are you still using binary floating point for monetary calculations?)

my question was to point out that since bitcoin is not turing complete, amaclin's claim that it is impossible to verify bitcoin due to the halting problem did not make sense.

I only use floating point where it is not needed to obtain consensus and where using floating point makes sense. I don't dogmatically avoid something all the time just because there are times when it isn't correct to use it.

James


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: CIYAM on April 01, 2016, 05:20:34 AM
my question was to point out that since bitcoin is not turing complete, amaclin's claim that it is impossible to verify bitcoin due to the halting problem did not make sense.

Apples and oranges - what @amaclin was referring to was the Bitcoin source code itself (not Bitcoin's scripting language).


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: pri3oner on April 01, 2016, 07:47:54 AM
I can't understand it. Can anyone explain it to me? :)


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: amaclin on April 01, 2016, 07:54:05 AM
I can't understand it. Can anyone explain it to me? :)
How much are you able to pay for this study?  ;D


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: TooDumbForBitcoin on April 01, 2016, 12:59:13 PM
I can't understand it. Can anyone explain it to me? :)
How much are you able to pay for this study?  ;D

Keep in mind that Amaclin values BTC at $10 each.  So if you settle on a $500 fee, he'll want 50 BTC.


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: 2112 on April 04, 2016, 02:45:34 AM
I am not sure if I understood your comment.  Miners cannot apply old semantics when the transaction has an old version field, because that field can be faked by the clients to sabotage the change.  E.g., suppose that the change imposed a minimum output amount of 0.0001 BTC as a way to reduce spam attacks on the UTXO database.  An attacker could frustrate that measure by issuing transactions with the pre-fork version tag.   Does that answer your comment?
I don't buy the argument about "frustrating that measure". It is very easy to verify that the "old style" transactions use only "old coins", i.e. coins that were confirmed no later than the effective time of the new transaction format.

Theoretically someone could try to launch the attack using only the "old coins", pretending to have a pre-signed transaction with some rather large n-lock-time. I think that type of attack would be self-extinguishing: it could be launched only once for each "old" UTxO entry.
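
To make the "old coins only" idea concrete, here is a minimal sketch of such a check. The Coin record, its confirm_height field, and the fork_height parameter are all illustrative assumptions for the example, not actual Bitcoin Core data structures.

#include <vector>

// Sketch only: an old-format transaction is acceptable only if every coin it
// spends was confirmed before the fork took effect.
struct Coin {
    int confirm_height;   // block height at which this output was created
};

bool OldFormatSpendsOnlyOldCoins(const std::vector<Coin>& spent_coins,
                                 int fork_height)
{
    for (const Coin& c : spent_coins) {
        if (c.confirm_height >= fork_height)
            return false;  // tries to spend a coin created at or after the fork
    }
    return true;
}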


Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: 2112 on April 04, 2016, 02:55:50 AM
You started writing really weird conflated stuff. What do fees have to do with transaction syntax? ... The amount of fees doesn't change the syntax, so doesn't require change of the version.

Sorry, I don't understand your objections.  

There are no "meta-rules" that specify what the validity rules can be.  They are not limited to "syntax", whatever that means.   Any computable predicate on bit strings could in principle be a validity rule, as long as it does not completely break the system.

Right now there are no validity rules that refer to fees.  The minimum fee, like the Pirate Code, "is more what you'd call 'guideline' than actual rule"; each miner decides whether to require it (or even to require more than it).  But the minimum could be made into a validity rule.  The difference would be that each miner would not only impose it on his blocks, but also reject blocks solved by other miners that contain transactions that pay less than that fee.

Quote
The version field should be used to clearly describe syntax rules governing the transaction format.

As I wrote, this cannot be guaranteed.  If a fork (rule change) was executed to fix a bug or prevent an attack, the miners cannot continue to use the old rules for transactions that have the old version tag; that would negate the purpose of the fork.  They must reject such transactions.  

So, it is not safe to retain signed but unconfirmed transactions without broadcasting them.
I'm still unsure why we started talking about fees in this thread. Fees enter the consensus validity rules only in the check that they aren't negative: the fee has to be positive or zero. The value of the fee is only used when priority-sorting already-verified transactions.

Also, I don't believe in the existence of non-fixable bugs in the old rules of the kind where a "fork (rule change) was executed to fix a bug or prevent an attack, [and] the miners cannot continue to use the old rules for transactions that have the old version tag".

Edit: Getting back to the original argument:
Pre-signed but unbroadcast or unconfirmed transactions seem to be a tough problem. 
I disagree on the "tough" part. In my opinion this is less difficult than DOSbox/Wine on Linux or the DOS subsystem in 32-bit Windows (and the Itanium editions of 64-bit Windows). It is more a question of how much energy to spend on scoping the required area of backward compatibility and on preparing/verifying test cases.
Perhaps the DOS/Windows argument wasn't the best. The better, but less well known, example would be mainframe disk device drivers. They easily cover old-style devices with interfaces designed in the late 1960s. The hardware implementations are "frozen" in the sense that nobody changes the relevant hardware logic anymore. It is just a small sub-area of a modern VLSI chip that implements exactly the same logic as the old TTL-style disk interface controller.

Nobody designs or writes an interface that is sprinkled with conditional logic to handle the old protocols (if () then {} else {}). There's a one-time inquiry to determine the protocol version in use, and then all operations are handled through indirection (e.g. (*handle_read[versn])(...)).

The same idea could be applied to Bitcoin if the version field were changed appropriately in both blocks and transactions.
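
As a rough illustration of that indirection, here is a minimal sketch. The RawTx struct, the two handler names, and the two-version table are made up for the example, not taken from the reference code.

#include <cstdio>

// Pick a handler once, based on the declared version field, instead of
// sprinkling if/else version checks through every code path.
struct RawTx { int version; /* serialized payload would follow */ };

static bool ReadV1(const RawTx&) { std::puts("apply old-format rules"); return true; }
static bool ReadV2(const RawTx&) { std::puts("apply new-format rules"); return true; }

// Dispatch table indexed by version (index 0 unused).
static bool (*const handle_read[])(const RawTx&) = { nullptr, ReadV1, ReadV2 };

bool HandleTransaction(const RawTx& tx)
{
    if (tx.version < 1 || tx.version > 2)
        return false;                       // unknown version: reject
    return (*handle_read[tx.version])(tx);  // single indirection thereafter
}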


Title: Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY
Post by: 2112 on April 04, 2016, 03:17:19 AM
* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  
That _is_ segregation of the signatures up to completely non-normative ordering of data transferred. Segwit could just as well order the data into the same place in the serialized transactions when sending them, but it's cleaner to not do so.
The "cleaner" part is true only to subset of people: those that were actually considering the original Satoshi's design as "ideal" or "perfect".

I personally think that the original design where "transaction hash" is both a "transaction identifier" and "transaction checksum" as a sort of a "neat hack".
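
For illustration, a minimal sketch of the "skip the signatures when hashing" idea. The two-field SimpleTx layout and the stand-in hash are assumptions for the example, not the real serialization or double-SHA256.

#include <cstddef>
#include <functional>
#include <string>

// Stand-in hash (std::hash) instead of double-SHA256, just to keep the
// sketch self-contained.
using Hash = std::size_t;
inline Hash H(const std::string& bytes) { return std::hash<std::string>{}(bytes); }

struct SimpleTx {
    std::string core;        // version, inputs (outpoints), outputs, locktime
    std::string signatures;  // scriptSigs / witness data
};

// "Transaction identifier": hash only the non-signature part, so re-encoding
// a signature cannot change the id.
inline Hash Txid(const SimpleTx& tx)  { return H(tx.core); }

// "Transaction checksum": hash everything that was actually transmitted.
inline Hash Wtxid(const SimpleTx& tx) { return H(tx.core + tx.signatures); }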

Edit:
What about fixing those "other problems" (I don't want to say "hard", because IMO they aren't "hard" by themselves) without the segregation? Impossible or just not worth it?
A strong malleability fix _requires_ segregation of signatures.

A less strong fix could be achieved without it if generality is abandoned (e.g. only works for a subset of script types, rather than all without question) and a new cryptographic signature system (something that provides unique signatures, not ECC signatures) was deployed.

And even with giving up on fixing malleability for most smart contracts, it's very challenging to be absolutely sure that a specific instance is actually non-malleable. This can be seen in the history of BIP62-- where at several points it was believed that it addressed all forms of malleability for the subset of transactions it attempted to fix, only to  later discover that there were additional forms.  If a design is inherently subject to malleability but you hope to fix it by disallowing all but one possible representation there is a near endless source of ways to get it wrong.

Segregation removes that problem. Segwitness using scripts achieve a strong base level of non-malleability without doubt or risk of getting it wrong, both in design and by script authors. And only segregation applies to all scripts, not just a careful subset of "inherently non-malleable rules".

Getting signatures out from under TXIDs is the natural design to prevent problems from malleability and engineers were lamenting that Bitcoin didn't work that way as far back as 2011/late-2012.
The requirement for segregation is really only for "logical" segregation, not "physical" segregation.

My opinion is that the main point of contention is this: more programmers agree that "logical" (or algebraic) segregation is OK, but only a much smaller subset of programmers agree that "physical" segregation (being far away in the serialized byte stream on the wire or on disk) is the correct way to implement the algebraic segregation.

Edit2:

In addition to the above, there is the question of the optimal lengths of the "transaction id" and the "witness id". Transaction identifiers have to be globally unique, whereas "witness identifiers" only have to be unique within the block they refer to, so the optimal length of the witness id could be much less than 256 bits.
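
A back-of-the-envelope check of that claim (accidental collisions only, ignoring adversarial grinding): assuming on the order of a few thousand transactions per block, the birthday bound gives the chance of any collision among n random k-bit identifiers as roughly n^2 / 2^(k+1). The figure of 4000 and the bit widths below are illustrative assumptions.

#include <cmath>
#include <cstdio>

int main()
{
    const double n = 4000.0;                 // assumed transactions per block
    const int widths[] = {32, 48, 64, 128};  // candidate witness-id lengths
    for (int k : widths) {
        double p = n * n / std::pow(2.0, k + 1);  // birthday bound
        std::printf("%3d-bit id -> accidental collision probability ~ %.3g\n", k, p);
    }
    return 0;
}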



Title: Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF
Post by: 2112 on April 04, 2016, 05:16:22 PM
OK, so somebody posted and then quickly deleted a follow-up to my messages above. I only took a glance before I was interrupted, but the main takeaway was that indeed I should clarify what I meant.

So let's roll back to Satoshi's original transaction design.

There are basically 3 main goals that the transaction format has to fulfill:

1) reference source and destination of funds, as well as amounts
2) cryptographically sign the source references to prove that one has control over them
3) detect (and possibly correct) errors in the transmitted transaction: both intentional (tampering) and unintentional (channel errors)

Satoshi's original design used a single SHA256 hash to cover all three goals. It was a neat idea to kill 3 birds with one stone. But then it turned out that only 2 birds got killed; the middle one was only injured, and it has at least two lives: low-S and high-S.

So then we start trying to address those 3 main goals using separate fields in the new transaction format. I'm not really prepared to discuss all the possibilities.

Let's just discuss a possible encoding for a single UTxO reference. The current design is an ordered pair (256-bit transaction id, short integer index of the output within that transaction). Let's also assume that for some reason it becomes extremely important to shorten that reference (e.g. transferring transactions via a QR code or some other ultra-low-power-and-bandwidth radio technology).

It may turn out that a better globally unique encoding is an ordered pair (short integer block number in the blockchain, short integer index into the preorder traversal of the Merkle tree of transactions and their outputs). It may be acceptable that only confirmed transactions can be referenced in this format.

I'm not trying to advocate changing the current UTxO reference format. All I'm trying to convey is that there are various ways to achieve the required goals, with various trade-offs in their implementation.
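
For a rough sense of the size difference, here is a sketch comparing the two reference encodings. The field widths in the compact variant are illustrative choices for this example, not a proposal, and the size checks assume typical struct alignment.

#include <cstdint>

// Today's reference: (256-bit txid, output index) -- 36 bytes.
struct OutPointCurrent {
    uint8_t  txid[32];          // 256-bit transaction id
    uint32_t vout;              // output index within that transaction
};

// Hypothetical "confirmed transactions only" reference -- 8 bytes.
struct OutPointCompact {
    uint32_t block_height;      // which block in the chain
    uint32_t traversal_index;   // position in a preorder walk of that block's
                                // Merkle tree of transactions and their outputs
};

static_assert(sizeof(OutPointCurrent) == 36, "36-byte reference");
static_assert(sizeof(OutPointCompact) == 8,  "8-byte reference");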

Both Satoshi's original design and the current SegWit design suffer from "just-in-time design" syndrome. The choices were made quickly, without properly discussing and comparing the alternatives. The presumed target environment is only modern high-power, high-speed, high-temperature 32-bit and 64-bit processors and high-bandwidth communication channels.

Around the turn of the century there was a cryptographic protocol called SET: https://en.wikipedia.org/wiki/Secure_Electronic_Transaction . It was deservedly an unmitigated failure. But they did one thing right in their design. The original SET "Theory of operations" document did a thorough analysis of design variants:

1) exact bit counts of various representations and encodings
2) estimated clock counts of the operations on the then-current mainstream 32-bit CPUs
3) estimated clock counts of the operations on the then-current 8-bit micro CPUs like GSM SIM cards
4) estimated line and byte counts of the implementation source and object codes
5) the range of gains achievable by implementing special-purpose hardware cryptographic instructions with various target gate counts.

Again, I'm definitely not advocating anything like SET and its dual signatures. I'm just suggesting spending more time on balancing the various trade-offs against the possible goals of the completed application.