Bitcoin Forum
April 19, 2024, 08:40:05 AM *
News: Latest Bitcoin Core release: 26.0 [Torrent]
 
Pages: [1] 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 »  All
Author Topic: Segregated witness - The solution to Scalability (short term)?  (Read 23093 times)
Lauda (OP)
Legendary
Activity: 2674
Merit: 2965
December 07, 2015, 10:02:28 AM
Last edit: December 09, 2015, 07:33:13 PM by Lauda
Merited by ABCbits (3)
 #1

I don't think there's a thread about this yet (after conference), so here it is.


Here is a transcript of the presentation. A hard fork is possibly not even required.

Gavin's explanation:

Quote
Pieter Wuille gave a fantastic presentation on “Segregated Witness” in Hong Kong. It’s a great idea, and should be rolled into Bitcoin as soon as safely possible. It is the kind of fundamental idea that will have huge benefits in the future. It also needs a better name (“segregated” has all sorts of negative connotations…).

You should watch Pieter’s presentation, but I’ll give a different spin on explaining what it is (I know I often need something explained to me a couple different ways before I really understand it).
So… sending bitcoin into a segregated witness-locked output will look like a weird little beastie in today’s blockchain explorers– it will look like an “anyone can spend” transaction, with a scriptPubKey of:
PUSHDATA [version_byte + validation_script]

Spends of segregated witness-locked outputs will have a trivial one-byte scriptSig of OP_NULL (or maybe OP_NOP – There Will Be Bikeshedding over the details).
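That scriptPubKey layout can be sketched in a few lines of Python; the 0x00 version byte and 20-byte hash below are illustrative assumptions, not the proposal's final encoding:

```python
# Sketch of the "anyone can spend"-looking output described above:
# a single data push of [version_byte + validation_script].
# The 0x00 version and 20-byte dummy hash are assumptions for illustration.

def segwit_style_script_pubkey(version_byte: int, validation_script: bytes) -> bytes:
    payload = bytes([version_byte]) + validation_script
    assert len(payload) <= 75        # fits a direct push opcode (0x01-0x4b)
    return bytes([len(payload)]) + payload  # push-length opcode, then the data

spk = segwit_style_script_pubkey(0x00, bytes(20))  # dummy 20-byte script hash
print(spk.hex())
```

To an old node this is just a pushed blob with no conditions attached, which is exactly why it looks "anyone can spend" yet remains soft-fork compatible.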

The reason that is not insane is because the REAL scriptSig for the transaction will be put in a separate, new data structure, and wallets and miners that are doing validation will use that new data structure to make sure the signatures for the transaction are valid, etc.

That data structure will be a merkle tree that mirrors the transaction merkle tree that is put into the block header of every block. Every transaction with a segregated witness input will have an entry in that second merkle tree with the signature data in it (plus 10 or so extra bytes per input to enable fraud proofs).
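The mirrored tree works like the normal transaction merkle tree, just over witness data. A minimal sketch of a Bitcoin-style merkle root, with placeholder bytes standing in for each transaction's witness:

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    """Bitcoin-style merkle root: hash the leaves, then pair up hashes,
    duplicating the last element of an odd-length level, until one remains."""
    level = [dsha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# One leaf per transaction's witness data (placeholder bytes here).
witness_root = merkle_root([b"wit-tx-0", b"wit-tx-1", b"wit-tx-2"])
print(witness_root.hex())
```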

The best design is to combine the transaction and segregated witness merkle trees into one tree, with the left side of the tree being the transaction data and the right side the segregated witness data. The merkle root in the block header would just be that combined tree. That could (and should, in my opinion) be done as a hard fork; Pieter proposes doing it as a soft fork, by stuffing the segregated witness merkle root into the first (coinbase) transaction in each block, which is more complicated and less elegant but means it can be rolled out as a soft fork.
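The soft-fork variant stuffs that witness root into the coinbase transaction. One plausible shape for such a commitment is an unspendable OP_RETURN output; the layout and 4-byte tag below are assumptions for illustration, not a settled spec at the time of this post:

```python
OP_RETURN = 0x6a  # marks an output as provably unspendable data

def coinbase_commitment_output(witness_root: bytes,
                               tag: bytes = b"\xaa\x21\xa9\xed") -> bytes:
    """Build an unspendable coinbase output script committing to the
    witness merkle root. The 4-byte tag distinguishes the commitment
    from other data pushes; its exact value is an assumption here."""
    assert len(witness_root) == 32
    payload = tag + witness_root          # 4 + 32 = 36 bytes
    return bytes([OP_RETURN, len(payload)]) + payload

script = coinbase_commitment_output(bytes(32))  # dummy all-zero root
print(script.hex())
```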

Regardless of how it is rolled out, it would be a smooth transition for wallets and most end-users– if you don’t want to use newfangled segregated witness transactions, you don’t have to. Paying to somebody who is using the newfangled transactions looks just like paying to somebody using a newfangled multisig wallet (a ‘3something’ BIP13 bitcoin address).

There is no requirement that wallets upgrade, but anybody generating a lot of transactions will have a strong incentive to produce segregated witness transactions– Pieter proposes to give segregated witness transactions a discount on transaction fees, by not completely counting the segregated witness data when figuring out the fee-per-kilobyte transaction charge. So… how does all of this help with the one megabyte block size limit?

Well, once all the details are worked out, and the soft or hard fork is past, and a significant fraction of transactions are spending segregated witness-locked outputs… more transactions will fit into the 1 megabyte hard limit. For example, the simplest possible one-input, one-output segregated witness transaction would be about 90 bytes of transaction data plus 80 or so bytes of signature– only those 90 bytes need to squeeze into the one megabyte block, instead of 170 bytes. More complicated multi-signature transactions save even more. So once everybody has moved their coins to segregated witness-locked outputs and all transactions are using segregated witness, two or three times as many transactions would squeeze into the one megabyte block limit.
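Those back-of-the-envelope numbers can be checked directly, using Gavin's rough byte counts (not exact serializations):

```python
BLOCK_LIMIT = 1_000_000  # bytes, the 1 MB hard limit
legacy_tx = 170          # full 1-in/1-out transaction, Gavin's rough figure
segwit_base = 90         # the same transaction minus ~80 bytes of signature

legacy_count = BLOCK_LIMIT // legacy_tx    # transactions per block today
segwit_count = BLOCK_LIMIT // segwit_base  # if only base data must fit

print(legacy_count, segwit_count, segwit_count / legacy_count)
```

For the simplest transactions this is just under a 2x improvement; multisig, being signature-heavy, gains more, which is where the "two or three times" figure comes from.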

Segregated witness transactions won’t help with the current scaling bottleneck, which is how long it takes a one-megabyte 'block’ message to propagate across the network– they will take just as much bandwidth as before. There are several projects in progress to try to fix that problem (IBLTs, weak blocks, thin blocks, a “blocktorrent” protocol) and one that is already deployed and making one megabyte block propagation much faster than it would otherwise be (Matt Corallo’s fast relay network).

I think it is wise to design for success. Segregated witness is cool, but it isn’t a short-term solution to the problems we’re already seeing as we run into the one-megabyte block size limit.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
It is a common myth that Bitcoin is ruled by a majority of miners. This is not true. Bitcoin miners "vote" on the ordering of transactions, but that's all they do. They can't vote to change the network rules.
unamis76
Legendary
Activity: 1512
Merit: 1005
December 07, 2015, 10:52:54 AM
 #2

If this is correct and witness information is prunable, how is it a solution to scaling? It would still require a block size increase. Maybe I only thought I knew what scaling is, but I'm not quite grasping the concept...
Denker
Legendary
Activity: 1442
Merit: 1014
December 07, 2015, 12:09:57 PM
 #3

So, for a lay person like me, this is basically a simple, efficient way to patch the malleability exploit while simultaneously increasing block size? And to do that we only have to make a soft fork?
Is this correct?

Lauda (OP)
Legendary
Activity: 2674
Merit: 2965
December 07, 2015, 01:29:11 PM
Last edit: December 07, 2015, 01:40:59 PM by Lauda
Merited by ABCbits (2)
 #4

If this is correct and witness information is prunable, how is it a solution to scaling? It would still require a block size increase. Maybe I only thought I knew what scaling is, but I'm not quite grasping the concept...
Scaling is not only about the block size, which is what the majority understand it to be. Scaling could be a better way of storing data, a different-layer approach like LN, or something else.
Quote
What we do is discount the witness data by 75% for block size. So this enables us to say we allow 4x as many signatures in the chain. What this normally corresponds to, with a difficult transaction load, this is around 75% capacity increase for transactions that choose to use it. Another way of looking at it, is that we raise the block size to 4 MB for the witness part, but the non-witness has same size.
From what I understood, they could discount the witness data by 75% right now, which means that 1 MB blocks could theoretically carry as much transaction volume as 4 MB blocks. Or they could increase the block size for the witness part to 4 MB (while the non-witness part stays at 1 MB). This is how I understand it so far. This is still a fairly new concept, so I'm also still learning (Bitcoin is a constant learning process, though).
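The 75% discount in the quote can be written down as a "virtual size" formula; a minimal sketch using the constants from the talk (the function name is mine):

```python
def virtual_size(base_size: int, witness_size: int) -> float:
    """Witness bytes counted at a 75% discount: weight = 4*base + witness,
    with a 4,000,000-weight cap corresponding to the old 1 MB of base data."""
    weight = 4 * base_size + witness_size
    return weight / 4  # "virtual" bytes counted against the 1 MB-equivalent cap

# A simple 1-in/1-out transaction: ~90 bytes base, ~80 bytes witness.
print(virtual_size(90, 80))  # 110.0 virtual bytes vs 170 real bytes
```

A pure-witness byte therefore costs a quarter as much as a base byte, which is why typical transaction mixes land around a 1.75 MB effective capacity rather than the full 4 MB.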

So, for a lay person like me, this is basically a simple, efficient way to patch the malleability exploit while simultaneously increasing block size? And to do that we only have to make a soft fork?
Is this correct?
Simple? Not exactly. This adds complexity, which BIP101 and XT supporters are probably going to use as an argument. But if society had wanted simplicity instead of harnessing the benefits that come with complexity, we would have stayed in the Stone Age. This kills all cases of unintentional malleability and can be implemented with a soft fork. This is correct.

Zarathustra
Legendary
Activity: 1162
Merit: 1004
December 07, 2015, 01:39:03 PM
 #5


From what I understood, they could discount the witness data by 75% right now, which means that 1 MB blocks could theoretically carry as much transaction volume as 4 MB blocks. Or they could increase the block size for the witness part to 4 MB (while the non-witness part stays at 1 MB). This is how I understand it so far. This is still a fairly new concept, so I'm also still learning.



Still learning, yet making a post titled "Segregated witness - The solution to scaling".
QuestionAuthority
Legendary
Activity: 2156
Merit: 1393
December 07, 2015, 01:42:54 PM
 #6

Lauda, explain Segregated Witness to me like I'm five.

mexxer-2
Hero Member
Activity: 924
Merit: 1003
December 07, 2015, 01:44:52 PM
 #7

Lauda, explain Segregated Witness to me like I'm five.
And to me as if I'm just born
Lauda (OP)
Legendary
Activity: 2674
Merit: 2965
December 07, 2015, 01:51:04 PM
 #8

Lauda, explain Segregated Witness to me like I'm five.
It's a bit hard to correctly explain something so complex without leaving out important information. Let me try this: normally the transaction ID is the hash of the signature plus the transaction; with segregated witness, the signatures are excluded (they currently consume about 60% of the data on the blockchain). In other words, they are going to re-work how this data is stored (a simplistic explanation that leaves out the merkle tree) by excluding it from the block.

The positive outcome of this is an effective block size of 4 MB with a soft fork. By "effective" I mean that they don't have to change the actual block size (the one most people know of today).


And to me as if I'm just born
Read the bolded part. Addition: By changing how the data is stored, they are saving a lot of space (hence the effective block-size of 4 MB).
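The bolded idea, that the transaction ID no longer covers the signature, can be illustrated with a toy hash; the serialization below is a stand-in, not Bitcoin's real format:

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# Toy transaction: the segwit-style txid covers only the non-witness part,
# so a malleated (re-encoded but equally valid) signature no longer changes it.
tx_core = b"version|inputs|outputs|locktime"
sig_a = b"signature-encoding-A"
sig_b = b"signature-encoding-B"   # a malleated variant of the same signature

txid_a = dsha256(tx_core)             # signature excluded from the id
txid_b = dsha256(tx_core)
legacy_a = dsha256(tx_core + sig_a)   # old scheme: sig is part of the hash
legacy_b = dsha256(tx_core + sig_b)

print(txid_a == txid_b)       # stable id under segwit-style hashing
print(legacy_a == legacy_b)   # legacy id changes when the sig changes
```

This is exactly why segregating the witness kills unintentional malleability: nothing a third party can tweak in the signature encoding touches the txid anymore.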

QuestionAuthority
Legendary
Activity: 2156
Merit: 1393
December 07, 2015, 01:58:24 PM
 #9

But doesn't the signature verify the transaction was created by the real owner of the address? What about multisig? Is that gone with this new system? Sounds very flaky to me.

Lauda (OP)
Legendary
Activity: 2674
Merit: 2965
December 07, 2015, 02:05:06 PM
 #10

But doesn't the signature verify the transaction was created by the real owner of the address? What about multisig? Is that gone with this new system? Sounds very flaky to me.
You are correct. However, I'm not talking about removing the signature data; I said excluding it from the blocks.
Quote
Wouldn't it be nice to just drop the signatures? The reason why we can't do this is because the signature is part of the transaction hash. If we would just drop the sig from the transaction, the block wouldn't validate, you wouldn't be able to prove an output spend came from that transaction, so that's not something we could do.
Quote
You get a size increase because you no longer store the signatures in the block, you just have all your signatures empty and reference an output like [hash] OP_TRUE, where [hash] is the script hash to execute. Then you can sign for the transaction with an empty script sig. Data for the signature is held outside of the block, and is referenced by a hash in the block (probably in the sigScript of the coinbase transaction). Because the signature data isn't part of the real block, you can make the block+extra sig data be more than 1 MB.
It does not eliminate multisig, it actually solves malleability as I've previously stated and as seen on the slide.

QuestionAuthority
Legendary
Activity: 2156
Merit: 1393
December 07, 2015, 02:40:11 PM
 #11

But doesn't the signature verify the transaction was created by the real owner of the address? What about multisig? Is that gone with this new system? Sounds very flaky to me.
You are correct. However, I'm not talking about removing the signature data; I said excluding from the blocks.
Quote
Wouldn't it be nice to just drop the signatures? The reason why we can't do this is because the signature is part of the transaction hash. If we would just drop the sig from the transaction, the block wouldn't validate, you wouldn't be able to prove an output spend came from that transaction, so that's not something we could do.
Quote
You get a size increase because you no longer store the signatures in the block, you just have all your signatures empty and reference an output like [hash] OP_TRUE, where [hash] is the script hash to execute. Then you can sign for the transaction with an empty script sig. Data for the signature is held outside of the block, and is referenced by a hash in the block (probably in the sigScript of the coinbase transaction). Because the signature data isn't part of the real block, you can make the block+extra sig data be more than 1 MB.
It does not eliminate multisig, it actually solves malleability as I've previously stated and as seen on the slide.

Ok, so how are the transactions signed, and does it increase the possibility of address collision? Hal Finney proposed batch signature verification long ago, where it was believed the shortcut for secp256k1 would bring as much as a 20% speed increase to signature verification. By the time it was modified and implemented, in order to protect security, there was almost no speed advantage. Removing the sig verification from the mined blocks will most likely have some kind of security leak issue. I'm just not knowledgeable enough to tell you what it will be. I'll be eagerly watching the development.

Lauda (OP)
Legendary
Activity: 2674
Merit: 2965
December 07, 2015, 03:21:23 PM
 #12

Ok, so how are the transactions signed, and does it increase the possibility of address collision? Hal Finney proposed batch signature verification long ago, where it was believed the shortcut for secp256k1 would bring as much as a 20% speed increase to signature verification. By the time it was modified and implemented, in order to protect security, there was almost no speed advantage. Removing the sig verification from the mined blocks will most likely have some kind of security leak issue. I'm just not knowledgeable enough to tell you what it will be. I'll be eagerly watching the development.
The data comes after the block and is connected via a hash, IIRC. I don't think it increases the possibility of address collision; why would it? Apparently it has been in testing for 6 months now, and I'm pretty sure they would not miss a significant security leak just like that. Besides, they won't be rushing this out either way. For the exact specifics I'll have to get back to this thread, as I'm very busy now and will head out (possibly stay disconnected).

BitUsher
Legendary
Activity: 994
Merit: 1034
December 07, 2015, 03:53:38 PM
 #13

Video of great work done-

https://www.youtube.com/watch?v=fst1IK_mrng#t=36m
BitUsher
Legendary
Activity: 994
Merit: 1034
December 07, 2015, 04:06:45 PM
 #14

Lauda, explain Segregated Witness to me like I'm five.
And to me as if I'm just born

The size of the block chain can be cut down considerably by moving all signatures (which don't need to be stored anyway) into a separate data structure, and only keeping the transactions, without signatures, in the block chain.

Side benefits as well:
    Much simpler future opcode additions/upgrades
    Solves malleability problems
    Fraud proofs for every single consensus rule, making SPV much more secure, plus lazy validation

It can also be deployed with a soft fork; without this upgrade, these changes would have been extremely difficult to implement.

Basically, this is an example of a scaling solution with absolutely no tradeoffs; the consequences are all positive. This is just one piece of the puzzle that needs to be rolled out to scale, but objecting to this improvement is nonsensical.

Slides: https://prezi.com/lyghixkrguao/segregated-witness-and-deploying-it-for-bitcoin/

Best yet, the code is already written and has been tested for over 6 months --

https://github.com/ElementsProject/elements/commit/663e9bd32965008a43a08d1d26ea09cbb14e83aa
https://github.com/sipa/bitcoin/commits/segwit


Read the bolded part. Addition: By changing how the data is stored, they are saving a lot of space (hence the effective block-size of 4 MB).

"Size = 4*BaseSize + WitnessSize <= 4MB. For normal transaction load, it means 1.75 MB, but more for multisig."
https://twitter.com/pwuille/status/673710939678445571

franky1
Legendary
Activity: 4200
Merit: 4412
December 07, 2015, 04:16:48 PM
 #15

Lauda, explain Segregated Witness to me like I'm five.
And to me as if I'm just born

The size of the block chain can be cut in half by moving all signatures(not needed to be stored anyways) into a separate data structure and only keeping the transactions without signatures in the block chain.

Side benefits as well
    Much simpler future opcode additions/upgrades
    Solves malleability problems
    Fraud proofs for every single consensus rule, making SPV much much more secure and lazy validation
   
Also can be deployed with softfork which without this upgrade would have been extremely difficult to implement.



Read the bolded part. Addition: By changing how the data is stored, they are saving a lot of space (hence the effective block-size of 4 MB).

"Size = 4*BaseSize + WitnessSize <= 4MB. For normal transaction load, it means 1.75 MB, but more for multisig."
https://twitter.com/pwuille/status/673710939678445571

to me this translates to creating a new chain, called the witness (pruned) chain, where all the old blocks are pruned of the signatures.
this is not a soft fork.. this is remaking a new chain.

also, because there still needs to be a chain containing the signatures.. that would be the real bitcoin, which will still bloat..
and if anyone still wants to be a full node, then they need to have 2 chains.. meaning more data, as some tx data is duplicated by holding both

the witness chain would supposedly be used for lite clients. but it's much easier to just let lite clients only download the tx data of addresses they control, and then find a real solution to bitcoin's data bloat, without creating a new chain or risking security bugs related to not having signature checks

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
BitUsher
Legendary
Activity: 994
Merit: 1034
December 07, 2015, 04:41:05 PM
 #16


to me this translates to creating a new chain. called the witness (pruned) chain. where all the old blocks are pruned of the signatures.
this is not a soft fork.. this is remaking a new chain.


Yes, Soft forks are new chains in all cases.



also because there still needs to be a chain containing the signatures.. that would be the real bitcoin which will still bloat..
and if anyone still wants to be a full node. then they need to now have 2 chains.. meaning more data as some tx data is duplicated by holding both

witness chain would supposedly be used for liteclients. but its much easier to just let lite clients only download the tx data of addresses they control. and then find a real solution to bitcoins data bloat, without creating a new chain or risking security bugs related to not having signature checks


There is only one chain. The soft fork would allow old clients and implementations to keep bloating the blockchain, while newer clients would prune off the unneeded signatures. You are insinuating that the security model would change, potentially introducing new bugs. This is false, as the security model remains exactly the same.

The block data and the signatures are segregated into two structures: one has the transactions minus signatures, and the other has just the signatures. Full nodes will still download and verify both, so security isn't reduced or compromised.

This is basically creating a better SPV/lite client that has the ability to use fraud proofs for authentication. It is not meant to replace full nodes!




BitUsher
Legendary
Activity: 994
Merit: 1034
December 07, 2015, 05:00:51 PM
Last edit: December 07, 2015, 10:27:11 PM by BitUsher
 #17

Lauda, explain Segregated Witness to me like I'm five.
And to me as if I'm just born

An airline only allows you to bring a limited amount of weight and bags onboard a plane as your luggage. They have to be fair and have a consistent policy for their passengers, while at the same time recognizing real-life physical limitations and risks. There is a careful balance to be struck: they want to make their clients happy by allowing them more luggage, but at the same time keep the weight reasonable for speed, fuel costs, and safety.

They previously had a policy of allowing 2 carry-on bags weighing 10 kg in total, but over time, with a growing economy, their clients grew wealthier and needed to travel with more bags, because they were flying to Aruba for 2-week vacations instead of 1 week. The clients were demanding more bags and up to 40 kg of luggage weight. The airline could have increased the size of the plane, but the accountants and engineers were concerned that fuel costs would increase and the safety of the planes might be compromised with so much weight. The sales department also objected that larger planes would cut down on their flying routes, leading to fewer possible destinations clients could visit.

It had been known since 2011 that a safe and effective solution would be to separate the less critical luggage onto a train or boat while the important luggage went with the client, but the airline kept delaying these changes because of distribution and contract problems with the railroad/shipyard companies. Redirecting every railroad or shipping route would be a logistical nightmare. Another concern was that passengers didn't want their luggage being mixed up or misplaced on the trains/boats, as they were keen on some of the new quick luggage check-in and fraud-protection processes the airlines were developing, and there was no way they could get any of that by dumping their luggage on a train or boat.

One day a bright young engineer recommended a novel approach to solve these concerns. He indicated: "Look, we already have shipping agreements with FedEx/DHL and lease out a certain amount of space on their cargo planes that fly out from the same airports. We would have to increase capacity on them, but they are efficient, and we won't have to reroute all the trains or deal with the nasty railroad companies, and our passengers could still bring their important luggage through our new fraud-protection and quick check-in processes. Our passengers won't have to pay more and will get everything they want. We will have to secure larger contracts with the air-freight companies, but ultimately they will benefit from more business and we will benefit from more flight routes and clients." ... followed by applause from the board of directors and some quick recommendations to roll out the long-awaited changes immediately.



***edited to better explain the complexities and tradeoffs in the analogy.***
AtheistAKASaneBrain
Hero Member
Activity: 770
Merit: 509
December 07, 2015, 05:05:34 PM
 #18

But what amount of transactions per second would this deliver? Can this really compete with LN in terms of reaching the same level as VISA and the like? This looks good on the surface, but I would wait for someone like gmaxwell to comment on it to see potential flaws. Did any of the core devs speak on this?
mexxer-2
Hero Member
Activity: 924
Merit: 1003
December 07, 2015, 05:07:34 PM
 #19

Lauda, explain Segregated Witness to me like I'm five.
And to me as if I'm just born

An airline only allows you to bring a limited amount of weight onboard a plane in your luggage. This rule is in place because if they allow one person to bring 20 kg of baggage, then they have to allow everyone else to as well, thus slowing the plane down, costing more fuel, and possibly causing it to crash from being overweight. 10 kg per person and 2 carry-on bags is reasonable, they claim. The problem is your 2 bags only fit 8 kg of items and you really want to bring more, but the airline says that 2 bags are the limit due to space considerations, which must be respected as well. You come up with a clever idea and run to the bathroom, take all your clothes out of the 3 bags, and begin to wear multiple layers of pants, socks, and shirts; now you have room to bring all 10 kg of luggage and eliminate the third bag.

The CPU, bandwidth, latency, and security costs are all the same with segregated witness. You just figured out a trick to effectively allow more transactions for the same resource costs.
Whew, now that was a good explanation
LiteCoinGuy
Legendary
Activity: 1148
Merit: 1010
December 07, 2015, 05:09:54 PM
 #20

But what amount of transaction per second would this deliver? can this really compete against LN in terms of being able to be at the same level of VISA and the like? This looks good on the surface but I would wait for someone like gmaxwell to comment on it to see potential flaws. Did any of the core devs speak on this?

Bitcoin can and will surpass VISA by a lot. Just be patient and wait some years, my friend.
