Bitcoin Forum
Author Topic: Segregated witness - The solution to Scalability (short term)?  (Read 23093 times)
Cconvert2G36
Sr. Member
****
Offline

Activity: 392
Merit: 250


View Profile
December 10, 2015, 05:24:32 AM
 #101

It sounds like a way to efficiently compress the weight of blocks by removing something that's not needed, when possible.

As merely one question, can we really consider the signature as something that's not needed?

I get that we're not _eliminating_ the sig, merely putting it in a separate (segregated) container, apart from the rest of the transaction. But any entity that wants to operate bitcoin in a trustless manner is going to need to be able to fully validate each transaction. Such entities will need the signature, right? Accordingly, such entities will need both components, so no data reduction for them, right?

Currently, relay nodes verify each transaction before forwarding it, do they not? If they are denied the signature, they can no longer perform this verification. This seems to me to be a drastically altered division of responsibilities. Sure, this may still work, but how do we know whether this is a good repartitioning of the problem?

Further, does this open a new attack vector? If 'nodes' are going to stop validating transactions before forwarding them, then there is nothing to stop them from forwarding invalid transactions. What if an attacker were to inject many invalid transactions into the network? Being invalid, they would be essentially free to create in virtually unbounded quantities. If nodes are no longer validating before forwarding, this would result in 'invalid transaction storms', which could consume many times the bandwidth of the relatively small amount of actual valid traffic. If this is indeed a valid concern, then it would work exactly contrary to the stated goal of increasing scalability.

Note I am not making any claims here; I am asking questions, prompted by my incomplete understanding of this feature.

Some of us are suffering from a sort of whiplash... we've been told (by some factions and their hangers-on) for months that raising the max block size even to 2MB is highly dangerous for decentralization. But now, completely reorganizing some of the basic functions of the protocol, with a (somewhat unnecessary) requirement that there be no hard fork... has led us to the point where the same group with those concerns... is offering a fairly drastic solution that effectively raises the requirements for fully validating nodes to a 4MB (or 2?) max equivalent.

SegWit is widely agreed to be a net positive to incorporate into Bitcoin (especially if it can kill malleability problems), but the burden of vetting and testing should be much more involved than for a one-line patch like BIP102. My fear is that we will be into 2017 before anything is deployed, and we will continue to be without the base data that garzik's 102 would provide. And the precedent that "hard forks r bad n scary" would still be firmly in place, ready to be rolled out to stifle any possibility of future main-chain capacity growth.
RoadTrain
Legendary
*
Offline

Activity: 1386
Merit: 1009


View Profile
December 10, 2015, 10:37:49 AM
 #102

Exactly. If a solution is not understandable for users with average IT expertise, then it will never be understandable for anyone with even less IT knowledge. And typically the owners of large mining farms and exchanges do not have time to do that learning, so they tend to select the solution that they can understand, or listen to people they like. This turns decision making into politics, and whoever is good at lobbying and PR will push their changes through. And that is not what people would like to see in bitcoin. So the knowledge gap between participants means that you really can't reach a wide consensus on a radical or complex solution; XT's failure already proved that
Understanding can be of different levels: conceptual, algorithmic, implementational... I bet most people don't quite grasp how Bitcoin's Script stack machine is implemented, though that doesn't prevent them from using it, as long as they understand it conceptually. What's enough for most people is that a particular component has been thoroughly peer-reviewed to prove it's safe to use.

I still don't really understand how this can be implemented as a soft fork. A soft fork means backward compatibility: when the upgraded SW clients broadcast new blocks throughout the network, how can the original core client accept such a strange block, one that does not contain signature data?
There are two modifications that make it soft-fork compatible:
1) SW outputs are made anyone-can-spend, so older clients don't care how they are spent; to them the scriptSig is simply empty.
2) The merkle root of the SW data hashes is stored in the coinbase.
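To make 2) concrete, here is a minimal Python sketch of committing witness-data hashes to a merkle root that upgraded nodes can verify while old nodes simply ignore the extra blob in the coinbase (the hashing rule is Bitcoin's; the commitment format here is illustrative, not the proposal's exact encoding):

Code:
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves: list) -> bytes:
    """Fold leaf hashes into a root, duplicating the last hash on
    odd-length levels (Bitcoin's merkle rule)."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

# hypothetical witness blobs for three transactions in a block
witnesses = [b"sig-data-0", b"sig-data-1", b"sig-data-2"]
commitment = merkle_root([dsha256(w) for w in witnesses])

# old clients treat the commitment as arbitrary coinbase bytes and ignore it;
# upgraded clients recompute the root and reject blocks where it mismatches
print(commitment.hex())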
gmaxwell
Moderator
Legendary
*
expert
Offline

Activity: 4158
Merit: 8382



View Profile WWW
December 10, 2015, 10:47:36 AM
 #103

My fear is that we will be into 2017 before anything is deployed,
I don't think you have to worry about that.

Quote
and we will continue to be without the base data that garzik's 102 would provide
And also without the 1+ hour long block validations that a simple "just increase the constant to 2MB" enables. :)
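(For anyone wondering where "hours" comes from: legacy signature hashing rehashes nearly the whole transaction once per input, so cost grows quadratically with transaction size. A back-of-envelope sketch, with assumed per-input sizes:)

Code:
# one pathological transaction filling a 2 MB block (sizes are rough assumptions)
TX_SIZE = 2_000_000            # bytes in the transaction
INPUT_SIZE = 150               # approximate bytes per input (assumption)
n_inputs = TX_SIZE // INPUT_SIZE
bytes_hashed = n_inputs * TX_SIZE     # each input's sighash covers ~the whole tx
print(f"{bytes_hashed / 1e9:.0f} GB hashed to validate one block")  # ~27 GB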
Zarathustra
Legendary
*
Offline

Activity: 1162
Merit: 1004



View Profile
December 10, 2015, 11:05:11 AM
 #104

It sounds like a way to efficiently compress the weight of blocks by removing something that's not needed, when possible.

As merely one question, can we really consider the signature as something that's not needed?

I get that we're not _eliminating_ the sig, merely putting it in a separate (segregated) container, apart from the rest of the transaction. But any entity that wants to operate bitcoin in a trustless manner is going to need to be able to fully validate each transaction. Such entities will need the signature, right? Accordingly, such entities will need both components, so no data reduction for them, right?

Currently, relay nodes verify each transaction before forwarding it, do they not? If they are denied the signature, they can no longer perform this verification. This seems to me to be a drastically altered division of responsibilities. Sure, this may still work, but how do we know whether this is a good repartitioning of the problem?

Further, does this open a new attack vector? If 'nodes' are going to stop validating transactions before forwarding them, then there is nothing to stop them from forwarding invalid transactions. What if an attacker were to inject many invalid transactions into the network? Being invalid, they would be essentially free to create in virtually unbounded quantities. If nodes are no longer validating before forwarding, this would result in 'invalid transaction storms', which could consume many times the bandwidth of the relatively small amount of actual valid traffic. If this is indeed a valid concern, then it would work exactly contrary to the stated goal of increasing scalability.

Note I am not making any claims here; I am asking questions, prompted by my incomplete understanding of this feature.

Some of us are suffering from a sort of whiplash... we've been told (by some factions and their hangers-on) for months that raising the max block size even to 2MB is highly dangerous for decentralization. But now, completely reorganizing some of the basic functions of the protocol, with a (somewhat unnecessary) requirement that there be no hard fork... has led us to the point where the same group with those concerns... is offering a fairly drastic solution that effectively raises the requirements for fully validating nodes to a 4MB (or 2?) max equivalent.


Yes, this is a very interesting scaling strategy. Quadrupling the cap to get double the throughput is okay. Quadrupling the cap to get quadruple the throughput is not okay.
DarkHyudrA
Legendary
*
Offline

Activity: 1386
Merit: 1000


English <-> Portuguese translations


View Profile
December 10, 2015, 11:27:25 AM
 #105

Quote
and we will continue to be without the base data that garzik's 102 would provide
And also without the 1+ hour long block validations that a simple "just increase the constant to 2MB" enables. :)


But BIP102 is still a hard fork, with all the stress of needing everybody to upgrade their Bitcoin servers ASAP, no?

And sorry, but why would blocks need more than 1 hour to validate? Is this segregated witness proposal that bad?

English <-> Brazilian Portuguese translations
Amph
Legendary
*
Offline

Activity: 3206
Merit: 1069



View Profile
December 10, 2015, 11:39:29 AM
 #106

Lauda, explain Segregated Witness to me like I'm five.
And to me as if I'd just been born

At the moment everything goes in the block.

With segwit, only the important stuff goes in the block. The other stuff goes into an 'attachment'.

This way more transactions can be put into a full block without increasing the blocksize limit.


but this will not solve the problem completely; when we need to increase the block size again in the future, it will only have delayed that

in this case it seems we have a margin of 3 more megabytes: it will effectively be like having a 4MB block, but when we need 5MB we will be forced to increase the block size anyway

therefore this is only a temporary solution... one problem at a time, i understand...
HostFat
Staff
Legendary
*
Offline

Activity: 4214
Merit: 1203


I support freedom of choice


View Profile WWW
December 10, 2015, 11:53:43 AM
 #107

but this will not solve the problem completely, when we need to increase again the block in the future, it will only delay it
https://www.reddit.com/r/bitcoinxt/comments/3w2w17/segregated_witness_is_cool_gavin_andresen/#cxt01bu

I DO NOT PROVIDE PRIVATE SUPPORT - http://hostfatmind.com
Lauda (OP)
Legendary
*
Offline

Activity: 2674
Merit: 2965


Terminated.


View Profile WWW
December 10, 2015, 12:12:26 PM
 #108

No solution to this can be final. There is always going to be a need for more upgrades. I don't see how that could possibly be an argument.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
Mickeyb
Hero Member
*****
Offline

Activity: 798
Merit: 1000

Move On !!!!!!


View Profile
December 10, 2015, 01:16:51 PM
 #109

No solution can be final in this. There is always going to be a need for more upgrades. I don't see how that could possibly be an argument.

And that's OK! What we need now is to buy some time, get something done, and observe how this solution impacts the whole network.

Also, a good message needs to be sent to the whole community that something is being done towards a long-term solution to this problem!
franky1
Legendary
*
Offline

Activity: 4200
Merit: 4441



View Profile
December 10, 2015, 01:26:30 PM
 #110

ok
full nodes (the real bitcoin-core) that mining pool operators and true bitcoin fanboys run will keep needing to store both tx data and signatures..
thus to them changing block=1mb into blockA=0.25mb blockB=0.75mb makes no difference. it's still 1mb of bloat per blocktime..
and changing block=4mb into blockA=1mb blockB=3mb makes no difference. it's still 4mb of bloat per blocktime..
you can paint certain data any colour.. it doesn't make it invisible to full nodes
you can put certain data into different drawers.. it doesn't make the cabinet any lighter

secondly, miners (not pool operators) don't need the full blockchain.. unscrew a mining rig and you will see no 60gb hard drive.. so yea, miners do not care; they know how to grab what they need to do the job, and how the data is saved means nothing to them..

thirdly, lite users. anyone can code a lite client right now (without protocol changes) that reads the blockchain and simply doesn't save the signature part of the json data to file, so nothing new is needed to do this.. and in actual fact anyone not wanting to download bitcoin core definitely ain't going to want 20gb of lite segwit blockchain either... it's an "all or nothing" game.. not something in the middle.
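(For what it's worth, a rough Python sketch of the kind of stripping described here, with field names in the style of a decoded-transaction json; treat the exact schema as an assumption of this sketch:)

Code:
import json

def strip_for_lite_storage(tx_json: str) -> dict:
    """Keep only txid, input outpoints, and output values; drop the
    scriptSig/signature material before saving to file."""
    tx = json.loads(tx_json)
    return {
        "txid": tx["txid"],
        "vin":  [{"txid": i["txid"], "vout": i["vout"]} for i in tx["vin"]],
        "vout": [{"n": o["n"], "value": o["value"]} for o in tx["vout"]],
    }

sample = json.dumps({
    "txid": "aa" * 32,
    "vin":  [{"txid": "bb" * 32, "vout": 0, "scriptSig": {"hex": "4730..."}}],
    "vout": [{"n": 0, "value": 0.5, "scriptPubKey": {"hex": "76a9..."}}],
})
print(strip_for_lite_storage(sample))   # no signature bytes retained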

all i can see here is like talking to a 5 year old:
kid (lite): "mom, there's peas (sig) on my plate, i just want the meat (tx), i don't want the peas (sig)"
mom (full node): "ok, here is a bigger plate. let me put everything on it.. and now move the peas to the side. now shut up and grab your meat in your lite hands and ignore the peas"
kid: "mom, there are still peas on the plate. every day you are still going to cook (store) both meat and peas, and all you are doing is putting them on a bigger plate and telling me i can just take the meat. you're not helping yourself, because you're still making peas. yea, i know i will never eat (store) peas, but you know you can't take the peas off the plate, because all the other moms will tell you it's not a healthy (verified) meal. yes, i can just grab the meat and eat it from my lite hands separately, but i could have done that anyway... just putting it on a bigger plate means nothing.. if you think it means you can now cook 10x more meat, you have to realise you still end up cooking more peas as well.. if there is more meat, there's more peas, simple fact.. you haven't removed moms' need to cook peas, nor have you solved my need to grab the meat off the main plate, since i could always do that. even if you tell me it's on 2 plates and i only see the plate with the meat on it, you have still cooked meat and peas"


so segwit WILL NOT resolve scaling.. because upping the limit is just the standard thing to do, not a special feature segwit is offering. the meat-and-peas ratio will still be there: mining will still produce meat-and-peas databloat for true nodes. you are just increasing the meat and the peas, which is no different from just setting a larger limit..

using gavins example
Quote
Well, once all the details are worked out, and the soft or hard fork is past, and a significant fraction of transactions are spending segregated witness-locked outputs… more transactions will fit into the 1 megabyte hard limit. For example, the simplest possible one-input, one-output segregated witness transaction would be about 90 bytes of transaction data plus 80 or so bytes of signature– only those 90 bytes need to squeeze into the one megabyte block, instead of 170 bytes. More complicated multi-signature transactions save even more. So once everybody has moved their coins to segregated witness-locked outputs and all transactions are using segregated witness, two or three times as many transactions would squeeze into the one megabyte block limit.
wrong

bitcoin-core users will still have 170 bytes per tx.. whether you colour 90 bytes green and 80 bytes red, it's still 170 bytes saved to full nodes' hard drives
trying to con people into thinking that making the plate 4 times bigger lets you fit 8x more green bytes.. is just wrong.. a full node's tx will still be the same 170 bytes in total; all that is happening is splitting the chain in two and branding the green chain as "bitcoin" and the red chain as "please don't look"
but full nodes will still be holding both chains, and thus the total data a full node stores is still 170 bytes for a basic tx...

so take a 2014 simple tx of 170 bytes: that's 5882 tx per 1mb block
just up the block limit to 4mb: 23529 tx per block

now segwit:
a simple tx of A=90 B=80; full node storage is still 170 bytes = 23529 tx per 4mb block, but a segwit lite client's storage is 2.117mb for that 23529-tx segwit block
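(the arithmetic checks out; a quick sketch using the post's own numbers:)

Code:
TX = 170                    # bytes for a simple 2014-era tx
BASE, SIG = 90, 80          # the segwit split of that same tx
print(1_000_000 // TX)      # 5882 tx in a 1mb block
print(4_000_000 // TX)      # 23529 tx in a 4mb block
print(23529 * BASE / 1e6)   # ~2.118 mb of tx-only data for a lite client
print(23529 * SIG / 1e6)    # ~1.882 mb of signature data still produced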

lite clients could hold 90 bytes per tx, but their chain is not the real chain; it won't help network security, nor will it help lite users who don't want any bloat
lite clients won't be part of the network security, so this is not a solution that helps the real network-supporting users (bitcoin core), and it's not helping lite users either

lite clients can already get down to 90 bytes just by looking at a full tx and ignoring the json strings they don't need when saving to file.
i've been doing it for years now: my lite client only grabs tx data for the addresses the client holds, and just saves the txids, vins, vouts and values.. lite clients won't want to store 20gb of useless history that doesn't help the network.. they either want the full history, which they can verify, to protect the network, or just the data that applies to them specifically to sign transactions, which is far, far less than 20gb

having 20gb of non-secure tx data is not a lite client. it's a medium-weight client. and, to be honest, i'll say it again: anyone can make their own medium-weight client right now, saving only part of the json data to file, without doing anything special to bitcoin's protocol.

so now onto malleability..
once a tx is confirmed.. it's locked into history.. so when segwit grabs just a portion of the block data.. of course it's malleability-proof.. BECAUSE IT'S ALREADY CONFIRMED!
which is the same malleability-proofness anyone gets by grabbing tx data of confirmed transactions..
now onto bandwidth:
segwit lite clients will not just relay 90 bytes of unconfirmed txs; mining pools need the whole thing and each relay needs to check it.. so segwit will still transmit the full 170 bytes. full nodes will still store/transmit 170 bytes too, and thus it's not helping the network's bandwidth.

anyone can create a client right now that only grabs the txid, vins, vouts and values of a user's relevant addresses.. without any soft or hard forks..
i still can't see why people think segwit is so special..

summary
i still cannot rationalise why bitcoin-core needs to split the blockchain just for useless lite clients.. who are not going to help the network, nor want any bloat
lite clients can more effectively grab the json data, put the json strings into individual variables.. and then just not save the signature variable to file..
this to me seems like a dysfunctional attempt at a solution
far easier to keep the chain as 1 chain, put in code to raise the limit to 4mb, and solve malleability with code that ignores a relayed tx variant if the same vin has already been relayed by another tx saved in the mempool, thus stopping people using the same vin until it's confirmed (goodbye doublespend)
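(A minimal sketch of that mempool rule, assuming a simple in-memory mempool: reject any relayed transaction variant that reuses an outpoint already spent by a pending transaction. This is essentially first-seen conflict rejection; whether it suffices as a malleability fix is the claim being made here, not an established fact.)

Code:
# a toy mempool that rejects any tx reusing a pending vin (outpoint)
spent_outpoints = set()          # (prev_txid, prev_vout) pairs seen so far

def accept_to_mempool(txid: str, vins: list) -> bool:
    """Accept a relayed tx only if none of its inputs conflict with
    a transaction already waiting in the mempool."""
    if any(vin in spent_outpoints for vin in vins):
        return False             # a variant/double-spend of a pending input
    spent_outpoints.update(vins)
    return True

print(accept_to_mempool("tx-a", [("prev", 0)]))   # True: first spend wins
print(accept_to_mempool("tx-a2", [("prev", 0)]))  # False: same vin rejected until confirmed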

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
BitUsher
Legendary
*
Offline

Activity: 994
Merit: 1034


View Profile
December 10, 2015, 02:06:40 PM
 #111

so segwit WILL NOT resolve scaling.. because upping the limit is just the standard thing to do, not a special feature segwit is offering.

Which developer is claiming it will "resolve" scaling? segwit does indeed provide more capacity and scalability, and is thus part of the puzzle of scaling bitcoin.

You are ignoring the nuanced benefits SW provides (vs simply increasing the block limit) that allow for better scalability in the future:

- one benefit of SW is that full nodes could skip transferring old signatures, which is an unnecessary task. (Existing full nodes already do not validate signatures from the far past, but still bear the burden of transferring them.)

- it resolves tx malleability, an important step that needs to be accomplished to roll out LN. Yes, there are numerous ways to fix tx malleability, but this is a simple and elegant one.

- it allows lite nodes to have fraud proofs, adding an extra layer of security to potentially compensate for the further centralization of full nodes caused by partially increasing the block limit.

You appear to be insinuating that we should just take the simpler approach and increase the block limit... which is something the core devs are suggesting we do in addition to segwit, when needed. Why don't they simply increase the block limit now? Because of the benefits cited above, which increasing the block limit doesn't deliver.
johnyj
Legendary
*
Offline

Activity: 1988
Merit: 1012


Beyond Imagination


View Profile
December 10, 2015, 05:15:58 PM
 #112

Exactly. If a solution is not understandable for users with average IT expertise, then it will never be understandable for anyone with even less IT knowledge. And typically the owners of large mining farms and exchanges do not have time to do that learning, so they tend to select the solution that they can understand, or listen to people they like. This turns decision making into politics, and whoever is good at lobbying and PR will push their changes through. And that is not what people would like to see in bitcoin. So the knowledge gap between participants means that you really can't reach a wide consensus on a radical or complex solution; XT's failure already proved that
Understanding can be of different levels: conceptual, algorithmic, implementational... I bet most people don't quite grasp how Bitcoin's Script stack machine is implemented, though that doesn't prevent them from using it, as long as they understand it conceptually. What's enough for most people is that a particular component has been thoroughly peer-reviewed to prove it's safe to use.

Indeed, during the early days of bitcoin, developers had much more freedom to do whatever they wanted, partly because no one cared about it, and partly because there were no major interested stakeholders, given its low value

But now the situation is different: the network has attracted so much venture capital and so many investors, and these guys all have their own agendas, so the political landscape has changed. A good example is kncminer: they took the crowdfunding money, realized their projects, and secretly started to run their own mining operation

At this stage, posting on a forum or reddit, or checking some code into git, does not make a lot of sense, because the decision-making power is not in the hands of developers but in the hands of large mining pools, exchanges and payment processors. If devs present a complex solution which those large players do not understand thoroughly, they will just ignore it (they have to protect their million-dollar investments as best they can). They could simply keep running the old client and build their own clearing and settlement channels to avoid the scaling problem altogether

Imagine that when blocks are full and each transaction costs a lot to clear, only large service providers will be able to use the blockchain to clear with their business partners. Users will find that using web wallet services still costs just a few cents and clears instantly, while using the core client costs $100 and maybe confirms after a day, so they will definitely move to blockchain.info or similar web wallets instead

You see, this is also a solution: since the risk on an individual service provider is much smaller than the risk to the whole network, it can be accepted. And this solution is much easier for every investor to understand than the Segregated Witness complication. In fact, most people are still very used to centralized service providers, so they would easily accept a locally centralized solution

The best-case scenario is that all the large players out there have deep IT expertise and can easily grasp the pros and cons of these new changes, but in my experience that is not the case. Rich people have a totally different set of criteria for decision making

jbreher
Legendary
*
Offline

Activity: 3038
Merit: 1660


lose: unfind ... loose: untight


View Profile
December 10, 2015, 08:24:02 PM
Last edit: December 10, 2015, 08:37:14 PM by jbreher
 #113

Some of us are suffering from a sort of whiplash... we've been told (by some factions and their hangers-on) for months that raising the max block size even to 2MB is highly dangerous for decentralization. But now, completely reorganizing some of the basic functions of the protocol, with a (somewhat unnecessary) requirement that there be no hard fork... has led us to the point where the same group with those concerns... is offering a fairly drastic solution that effectively raises the requirements for fully validating nodes to a 4MB (or 2?) max equivalent.

It's weirder than that. The 'drastic change' (i.e. moving the signatures to a separate data structure) does absolutely nothing to address scalability for fully validating nodes. To fully validate, such nodes need all the block data and all the signature data. No reduction there. It merely reduces demands on _non-validating_ nodes, by a factor of 1.8x or so.

What the entire SegWit proposal does to address scalability at fully validating nodes is not the segregation, but rather a simple _increase_in_the_block_size_. In Wuille-speak, this is represented as "Discount witness data by 75% for block size. Or: block limit to 4MB, but only for witness".
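If I read the slide right, the accounting works out like the following sketch (my interpretation of the 75% discount, not the proposal's exact code):

Code:
def discounted_block_size(base_bytes: int, witness_bytes: int) -> float:
    """Witness bytes count one quarter toward the 1 MB limit."""
    return base_bytes + witness_bytes / 4

print(discounted_block_size(1_000_000, 0))        # 1000000.0: all-base block caps at 1 MB raw
print(discounted_block_size(0, 4_000_000))        # 1000000.0: all-witness block carries 4 MB raw
print(discounted_block_size(600_000, 1_600_000))  # 1000000.0: a mixed block, 2.2 MB raw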

At least as far as I can tell.

Bait & switch?

standard disclaimer: I have an incomplete view of SegWit at this time.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
jbreher
Legendary
*
Offline

Activity: 3038
Merit: 1660


lose: unfind ... loose: untight


View Profile
December 10, 2015, 08:36:28 PM
 #114

segwit is indeed providing more capacity and scalability and thus is part of the puzzle in scaling bitcoin.  

As far as I can tell, the only component of the omnibus SegWit proposal that does anything about capacity or scalability is a simple increase of the block size to 4MB (I presume he means 4MiB). You can doublespeak this as "Discount the signature by 75% for block size" if you want, but that's really all it is.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
hdbuck
Legendary
*
Offline

Activity: 1260
Merit: 1002



View Profile
December 10, 2015, 09:12:59 PM
 #115

so, core devs are now being racists? :P


(i don't get the sig being "segregated" via some soft (alt?!) fork.. aren't sigs a very basic and important 'feature' of Bitcoin?)
BitUsher
Legendary
*
Offline

Activity: 994
Merit: 1034


View Profile
December 10, 2015, 09:16:47 PM
 #116

segwit is indeed providing more capacity and scalability and thus is part of the puzzle in scaling bitcoin.  

As far as I can tell, the only component of the omnibus SegWit proposal that does anything about capacity or scalability is a simple increase of the block size to 4MB (I presume he means 4MiB). You can doublespeak this as "Discount the signature by 75% for block size" if you want, but that's really all it is.

As far as capacity is concerned, it doesn't even increase it to 4 MiB outright; rather, heavy multisig can extend the limit up to 4 MiB. It is better to assume it is equivalent to a 1.8-2.5 MiB limit increase.

The one direct scalability benefit of segwit that isn't found with simply raising the block limit is that full nodes could skip transferring old signatures, which is an unnecessary task. (Existing full nodes already do not validate signatures from the far past, but still bear the burden of transferring them.)

All the other segwit benefits are only indirectly related to capacity increases.  

Segwit isn't being promoted by developers as the solution to capacity problems. It is an elegant change that solves many problems and only slightly increases capacity. The core developers are being very conservative and want to complete all the other optimizations, like segwit and the relay network, before drastically increasing the limit.

Gavin does have a fair point about getting these capacity increases completed immediately, because it will take a long time to deploy them and complete the hard fork. I would like the core devs, miners, and wallet developers to have the code ready and tested, and a plan in place to increase the block size as an emergency measure if the fee market produces unfavorable results and there is a huge backlog.
BitcoinNewsMagazine
Legendary
*
Offline

Activity: 1806
Merit: 1164



View Profile WWW
December 10, 2015, 09:31:43 PM
 #117

Exactly. If a solution is not understandable for users with average IT expertise, then it will never be understandable for anyone with even less IT knowledge. And typically the owners of large mining farms and exchanges do not have time to do that learning, so they tend to select the solution that they can understand, or listen to people they like. This turns decision making into politics, and whoever is good at lobbying and PR will push their changes through. And that is not what people would like to see in bitcoin. So the knowledge gap between participants means that you really can't reach a wide consensus on a radical or complex solution; XT's failure already proved that
Understanding can be of different levels: conceptual, algorithmic, implementational... I bet most people don't quite grasp how Bitcoin's Script stack machine is implemented, though that doesn't prevent them from using it, as long as they understand it conceptually. What's enough for most people is that a particular component has been thoroughly peer-reviewed to prove it's safe to use.

Indeed, during the early days of bitcoin, developers had much more freedom to do whatever they wanted, partly because no one cared about it, and partly because there were no major interested stakeholders, given its low value

But now the situation is different: the network has attracted so much venture capital and so many investors, and these guys all have their own agendas, so the political landscape has changed. A good example is kncminer: they took the crowdfunding money, realized their projects, and secretly started to run their own mining operation

At this stage, posting on a forum or reddit, or checking some code into git, does not make a lot of sense, because the decision-making power is not in the hands of developers but in the hands of large mining pools, exchanges and payment processors. If devs present a complex solution which those large players do not understand thoroughly, they will just ignore it (they have to protect their million-dollar investments as best they can). They could simply keep running the old client and build their own clearing and settlement channels to avoid the scaling problem altogether

Imagine that when blocks are full and each transaction costs a lot to clear, only large service providers will be able to use the blockchain to clear with their business partners. Users will find that using web wallet services still costs just a few cents and clears instantly, while using the core client costs $100 and maybe confirms after a day, so they will definitely move to blockchain.info or similar web wallets instead

You see, this is also a solution: since the risk on an individual service provider is much smaller than the risk to the whole network, it can be accepted. And this solution is much easier for every investor to understand than the Segregated Witness complication. In fact, most people are still very used to centralized service providers, so they would easily accept a locally centralized solution

The best-case scenario is that all the large players out there have deep IT expertise and can easily grasp the pros and cons of these new changes, but in my experience that is not the case. Rich people have a totally different set of criteria for decision making

My understanding is that decision-making power in this case very much rests with the developers. The consensus so far seems to be that Segregated Witness will be proposed as a soft fork when the BIP is published. Mining pools, exchanges and processors have no direct say. All that is needed for the BIP to be merged is agreement among the lead developers. That would be Van der Laan, Gavin Andresen, Jeff Garzik, Gregory Maxwell and Pieter Wuille. Correct me if I missed a name or am in error.

jbreher
Legendary
*
Offline

Activity: 3038
Merit: 1660


lose: unfind ... loose: untight


View Profile
December 10, 2015, 09:37:10 PM
 #118

As far as capacity is concerned, it doesn't even increase it to 4 MiB outright; rather, heavy multisig can extend the limit up to 4 MiB. It is better to assume it is equivalent to a 1.8-2.5 MiB limit increase.

That's not what I derived from Wuille's talk.


http://imgur.com/HdnFO7x

I may have missed it when he said something more clearly about the actual new block size limit. Shall we look at the code to check?


http://imgur.com/am6m5PT

Oh. Dear.


http://imgur.com/BEXIhpH

What to do?

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
BitUsher
Legendary
*
Offline

Activity: 994
Merit: 1034


View Profile
December 10, 2015, 10:00:38 PM
 #119

As far as capacity is concerned, it doesn't even increase it to 4 MiB outright; rather, heavy multisig can extend the limit up to 4 MiB. It is better to assume it is equivalent to a 1.8-2.5 MiB limit increase.

That's not what I derived from Wuille's talk.


http://imgur.com/HdnFO7x

I may have missed it when he said something more clearly about the actual new block size limit. Shall we look at the code to check?


http://imgur.com/am6m5PT

Oh. Dear.


http://imgur.com/BEXIhpH

What to do?

Here is the code -

https://github.com/sipa/bitcoin/commits/segwit

Transcript -
http://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/segregated-witness-and-its-impact-on-scalability/

Wuille does discuss 4x (sipa should have stressed *up to* 4x), but it is more complicated than that: the 4 MiB limit increase applies only to the witness merkle tree (the parallel structure), with 1 MiB still on the main chain, so only heavy multisig will use the full 4 MiB.

Here is some quick math to show you an example-

Quote from: nullc
Yea, the exact impact depend on usage patterns.

If your case is a counting one input, one output, pay to hash transactions the sizes work out to

4 (version) + 1 (vin count) + 32 (input id) + 4 (input index) + 4 (sequence no) + 1 (sig len) + 0 (sig) + 1 (output count) + 1 (output len) + 36 (32-byte witness program hash, push overhead, OP_SEGWIT) + 8 (value) + 4 (nlocktime) = 96 non-witness bytes

1 (witness program length) + 1 (witness program type) + 33 (pubkey) + 1 (checksig) + 1 (witness length) + 73 (signature) = 110.

96x + 0.25*110x = 1000000; x = 8097 or 13.5 TPS for 600 second blocks; (this is without the code in front of me, so I may well have slightly miscounted an overhead; but it's roughly that)... which is around double if you were assuming 7 tps as your baseline. Which is why I said double the capacity in my post... but YMMV.
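The quoted figures replay cleanly (a two-line check of nullc's arithmetic):

Code:
# replaying nullc's numbers with witness bytes discounted to 25%
non_witness, witness = 96, 110
x = 1_000_000 / (non_witness + 0.25 * witness)  # txs that fit the 1 MB base limit
print(round(x))           # 8097 transactions per block
print(round(x / 600, 1))  # 13.5 TPS for 600-second blocks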


BitUsher
Legendary
*
Offline

Activity: 994
Merit: 1034


View Profile
December 10, 2015, 10:08:49 PM
 #120

Mining pools, exchanges and processors have no direct say. All that is needed for the BIP to be merged is agreement among the lead developers. That would be Van der Laan, Gavin Andresen, Jeff Garzik, Gregory Maxwell and Pieter Wuille. Correct me if I missed a name or am in error.

Miners have most of the power. They can immediately protest and scare the developers into changing their proposal or adopting another. They are also the ones that ultimately have to accept the code, so whatever the developers implement can be ignored or rejected.
The only power the developers ultimately have over the miners is that, as volunteers, they can walk away and refuse to contribute if the miners don't accept their updates. This isn't of much consequence, because other (likely less talented) developers would fill the void. The miners, node operators, merchants and exchanges will likely accept the soft fork because they agree with it and trust the judgment of the developers.