Carlton Banks
Legendary
Offline
Activity: 3430
Merit: 3080
|
|
December 09, 2015, 11:27:16 PM |
|
So if I'm understanding this correctly...
Gavin likes SW but still thinks XT is the way to go, since SW takes too long to implement?
Same response as ever, in other words: "Great idea guys. Love it. Can I interest you in XT?"
|
Vires in numeris
|
|
|
marcus_of_augustus
Legendary
Offline
Activity: 3920
Merit: 2349
Eadem mutata resurgo
|
|
December 09, 2015, 11:31:33 PM |
|
Finally cleans up issues with signature TX malleability, makes fraud proofs viable for real SPV security, and incidentally frees up some capacity breathing room for the near term (2-3 MB per block). Who could argue against it? Unless some truly objectionable security risk is discovered, it should be soft-forked in ASAP. A few niggles remain about the 'cleanest' way to do that, but hopefully that won't turn into too much slide-rule swinging.
One issue is that if the "effective max block size" with SW is 4 MB, then the maximum bandwidth that a full node will have to deal with is the same as if we had a hardfork to 4 MB blocks. With the current way that the network functions and is laid out, this might be too much bandwidth. Maybe this could be somewhat addressed with IBLT, weak blocks, and other tech, but that stuff doesn't exist yet. I think that there's basically agreement that 2 MB would be safe, though.
So reduce the actual block limit to 500 KByte? (effective max 2 MB). 4 MB effective is probably a tad too large for current bandwidth tech, but I'm skeptical how often it would be hit in the near term. It is a worst case assuming 1 MB of TX data and the maximum amount of associated signature data (a high number of multisig, etc.) in a single block; what effect such a nasty block would have on the system would of course need to be tested for its security implications.
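As a rough back-of-the-envelope sketch of that trade-off (the 4x worst-case multiplier is the assumption discussed above, not a figure from sipa's actual proposal):
Code:
# Hypothetical sketch: if the worst-case "effective max block size" under SW
# is roughly 4x the base block limit, what do different base limits imply
# for the most data a full node might have to handle per block?

def worst_case_effective_mb(base_limit_mb, multiplier=4):
    """Worst-case total (base + witness) data per block, in MB (assumed 4x)."""
    return base_limit_mb * multiplier

for base_mb in (0.5, 1.0):
    print(f"base limit {base_mb} MB -> worst case ~{worst_case_effective_mb(base_mb)} MB")
# base limit 0.5 MB -> worst case ~2.0 MB   (the "500 KByte" option above)
# base limit 1.0 MB -> worst case ~4.0 MB   (theymos's bandwidth concern)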
|
|
|
|
theymos
Administrator
Legendary
Offline
Activity: 5404
Merit: 13498
|
|
December 09, 2015, 11:41:54 PM |
|
So reduce the actual block limit to 500 KByte? (effective max 2 MB).
I was thinking 1 MB normal blocks + 1 MB witness. Apparently most transactions nowadays are about 50% witness data, so we'd be able to pretty much fill up both the normal blocks and the "witness blocks". You're right that typical blocks would only fill 1-2 MB of witness data (2-3 MB total) with sipa's proposal, so maybe it's OK. But I'm not 100% sure yet.
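The arithmetic behind that, as a quick sketch (the 50% witness share is the estimate from the post above, not a measured constant):
Code:
# If a typical transaction is about half witness data, filling a 1 MB base
# block with such transactions drags along roughly another 1 MB of witness.

witness_share = 0.50   # assumed typical witness fraction, per the post above
base_block_mb = 1.0    # the "normal" (non-witness) block

total_mb = base_block_mb / (1 - witness_share)  # base plus its witness data
witness_mb = total_mb - base_block_mb
print(f"witness: {witness_mb:.1f} MB, total: {total_mb:.1f} MB")  # 1.0 MB, 2.0 MB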
|
1NXYoJ5xU91Jp83XfVMHwwTUyZFK64BoAD
|
|
|
marcus_of_augustus
Legendary
Offline
Activity: 3920
Merit: 2349
Eadem mutata resurgo
|
|
December 09, 2015, 11:52:40 PM |
|
So reduce the actual block limit to 500 KByte? (effective max 2 MB).
I was thinking 1 MB normal blocks + 1 MB witness. Apparently most transactions nowadays are about 50% witness data, so we'd be able to pretty much fill up both the normal blocks and the "witness blocks". You're right that typical blocks would only fill 1-2 MB of witness data (2-3 MB total) with sipa's proposal, so maybe it's OK. But I'm not 100% sure yet.
Yeah, better wait until we see how it pans out. No need to give any titchety XTers aneurysms by mentioning things like block size limit reductions.
|
|
|
|
BitUsher
Legendary
Offline
Activity: 994
Merit: 1035
|
|
December 10, 2015, 12:00:42 AM |
|
Yeah, better wait until we see how it pans out. No need to give any titchety XTers aneurysms by mentioning things like block size limit reductions.
Pieter Wuille was skeptical at first as well, but the many optimizations coming out, such as improvements to the relay network, gave him the confidence to recommend SW. With SW coming to the testnet this month and a softfork taking up to a year to roll out, I am concerned about the timing of all these matters. The community should have plenty of contingency plans tested and coded in preparation for what could be a wild year. I almost hope there isn't a disinflationary bubble, because we do need a bit more time to flesh out more solutions.
|
|
|
|
VeritasSapere
|
|
December 10, 2015, 12:48:06 AM Last edit: December 10, 2015, 04:50:33 PM by VeritasSapere |
|
I think SW should be implemented as a hard fork. This gives people more freedom of choice; however, I cannot imagine many people opposing SW. It seems to be good, and I cannot see many downsides to this new breakthrough. I think it is wise to design for success.
Segregated witness is cool, but it isn't a short-term (within the next six months to a year) solution to the problems we're already seeing as we run into the one-megabyte block size limit. It is not a short-term solution to the blocksize limit, and for that matter not a long-term solution either.
There is also disagreement regarding the different theories surrounding the fee market. Some people believe blocks should become consistently full; this seems to be the predominant position among the Core developers. There is a significant number of people like myself, however, who fundamentally disagree with the economic theory underpinning this assumption. Disagreements appear rooted more in differing opinions on economics, a specialized field entirely distinct from engineering, programming, and network design.
The block size limit has for the most part never been, and should not now be, used to determine the actual size of average blocks under normal network operating conditions. Real average block size ought to emerge from the supply of and demand for what I will term "transaction-inclusion services." Beginning to use the protocol block size limit to restrict the provision of transaction-inclusion services would be a radical change to Bitcoin. The burden of proof is therefore on the persons advocating using the protocol limit in this novel way.
Transaction-fee levels are not in any general need of being artificially pushed upward. A 130-year transition phase was planned into Bitcoin, during which the full transition from block-reward revenue to transaction-fee revenue was to take place. The protocol block size limit was added as a temporary anti-spam measure, not a technocratic market-control measure.
|
|
|
|
sickpig
Legendary
Offline
Activity: 1260
Merit: 1008
|
|
December 10, 2015, 07:25:43 AM |
|
Finally cleans up issues with signature TX malleability, makes fraud proofs viable for real SPV security, and incidentally frees up some capacity breathing room for the near term (2-3 MB per block). Who could argue against it? Unless some truly objectionable security risk is discovered, it should be soft-forked in ASAP. A few niggles remain about the 'cleanest' way to do that, but hopefully that won't turn into too much slide-rule swinging.
One issue is that if the "effective max block size" with SW is 4 MB, then the maximum bandwidth that a full node will have to deal with is the same as if we had a hardfork to 4 MB blocks. With the current way that the network functions and is laid out, this might be too much bandwidth. Maybe this could be somewhat addressed with IBLT, weak blocks, and other tech, but that stuff doesn't exist yet. I think that there's basically agreement that 2 MB would be safe, though.
So reduce the actual block limit to 500 KByte? (effective max 2 MB). 4 MB effective is probably a tad too large for current bandwidth tech, but I'm skeptical how often it would be hit in the near term. It is a worst case assuming 1 MB of TX data and the maximum amount of associated signature data (a high number of multisig, etc.) in a single block; what effect such a nasty block would have on the system would of course need to be tested for its security implications.
No need to guess or estimate; actual data are available. Take a look at Pieter's tweet: https://twitter.com/pwuille/status/673710939678445571?s=09 (1.75x for normal txs, more for others, e.g. multisig). Take into account that normal txs represent more than 80% of the total.
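To see what those figures imply on average, a quick weighted sketch (the 80% share is from the post above; the gain for the remaining txs is a guessed, purely illustrative number):
Code:
# Weighted-average capacity gain implied by Pieter's figures: 1.75x for
# normal txs (>80% of traffic), with an assumed higher gain for the rest.

normal_share = 0.80   # "normal txs represent more than 80% of the total"
normal_gain = 1.75    # from Pieter's tweet
other_gain = 2.0      # assumed value for multisig-heavy txs (illustrative)

avg_gain = normal_share * normal_gain + (1 - normal_share) * other_gain
print(f"approximate average capacity gain: {avg_gain:.2f}x")  # ~1.80x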
|
Bitcoin is a participatory system which ought to respect the right of self determinism of all of its users - Gregory Maxwell.
|
|
|
valiz
Sr. Member
Offline
Activity: 471
Merit: 250
BTC trader
|
|
December 10, 2015, 08:38:53 AM |
|
There is such a thing as Bitcoin governance; decisions do still need to be made, after all. You are simply arguing another huge straw man here; you are misrepresenting my views.
Bitcoin is not a state, a corporation, a community, a tribe, or a family. It is a p2p network implementing a currency. If somebody governs it, then it has failed. Control and direction are the last things it needs. I don't care how many "straw man", "ad hominem", and other intellectual BS accusations you throw around. For one with a hammer, everything looks like a nail. For a political philosopher, everything looks like sheep needing herding.
|
12c3DnfNrfgnnJ3RovFpaCDGDeS6LMkfTN "who lives by QE dies by QE"
|
|
|
MbccompanyX
Full Member
Offline
Activity: 182
Merit: 100
★YoBit.Net★ 350+ Coins Exchange & Dice
|
|
December 10, 2015, 08:42:43 AM |
|
There is such a thing as Bitcoin governance; decisions do still need to be made, after all. You are simply arguing another huge straw man here; you are misrepresenting my views.
Bitcoin is not a state, a corporation, a community, a tribe, or a family. It is a p2p network implementing a currency. If somebody governs it, then it has failed. Control and direction are the last things it needs. I don't care how many "straw man", "ad hominem", and other intellectual BS accusations you throw around. For one with a hammer, everything looks like a nail. For a political philosopher, everything looks like sheep needing herding.
The main problem is that he has been shooting out that kind of BS for weeks now, and he ignores it if you try to point out the other problems connected to it, and that kills Bitcoin. Anyway, this thread was supposed to be a celebration of the failure of XT and BIP 101; instead we keep talking about them as if somebody still cared.
|
|
|
|
Cconvert2G36
|
|
December 10, 2015, 08:49:04 AM |
|
The main problem is that he has been shooting out that kind of BS for weeks now, and he ignores it if you try to point out the other problems connected to it, and that kills Bitcoin. Anyway, this thread was supposed to be a celebration of the failure of XT and BIP 101; instead we keep talking about them as if somebody still cared.
Solid analysis.
|
|
|
|
valiz
Sr. Member
Offline
Activity: 471
Merit: 250
BTC trader
|
|
December 10, 2015, 08:53:19 AM |
|
There is such a thing as Bitcoin governance; decisions do still need to be made, after all. You are simply arguing another huge straw man here; you are misrepresenting my views.
Bitcoin is not a state, a corporation, a community, a tribe, or a family. It is a p2p network implementing a currency. If somebody governs it, then it has failed. Control and direction are the last things it needs. I don't care how many "straw man", "ad hominem", and other intellectual BS accusations you throw around. For one with a hammer, everything looks like a nail. For a political philosopher, everything looks like sheep needing herding.
The main problem is that he has been shooting out that kind of BS for weeks now, and he ignores it if you try to point out the other problems connected to it, and that kills Bitcoin. Anyway, this thread was supposed to be a celebration of the failure of XT and BIP 101; instead we keep talking about them as if somebody still cared.
Indeed, I will drink tonight to celebrate the demise of XT and BIP 101. Yay! I was so concerned that at one point I considered abandoning Bitcoin. Now I am relieved. Yay! Thank you!
|
12c3DnfNrfgnnJ3RovFpaCDGDeS6LMkfTN "who lives by QE dies by QE"
|
|
|
Zarathustra
Legendary
Offline
Activity: 1162
Merit: 1004
|
|
December 10, 2015, 10:16:18 AM |
|
Finally cleans up issues with signature TX malleability, makes fraud proofs viable for real SPV security, and incidentally frees up some capacity breathing room for the near term (2-3 MB per block). Who could argue against it? Unless some truly objectionable security risk is discovered, it should be soft-forked in ASAP. A few niggles remain about the 'cleanest' way to do that, but hopefully that won't turn into too much slide-rule swinging.
One issue is that if the "effective max block size" with SW is 4 MB, then the maximum bandwidth that a full node will have to deal with is the same as if we had a hardfork to 4 MB blocks. With the current way that the network functions and is laid out, this might be too much bandwidth. Maybe this could be somewhat addressed with IBLT, weak blocks, and other tech, but that stuff doesn't exist yet. I think that there's basically agreement that 2 MB would be safe, though.
So reduce the actual block limit to 500 KByte? (effective max 2 MB). 4 MB effective is probably a tad too large for current bandwidth tech, but I'm skeptical how often it would be hit in the near term. It is a worst case assuming 1 MB of TX data and the maximum amount of associated signature data (a high number of multisig, etc.) in a single block; what effect such a nasty block would have on the system would of course need to be tested for its security implications.
No need to guess or estimate; actual data are available. Take a look at Pieter's tweet: https://twitter.com/pwuille/status/673710939678445571?s=09 (1.75x for normal txs, more for others, e.g. multisig). Take into account that normal txs represent more than 80% of the total.
SW = quadruple the cap to get double throughput. Is this formula correct?
|
|
|
|
sickpig
Legendary
Offline
Activity: 1260
Merit: 1008
|
|
December 10, 2015, 01:50:28 PM |
|
Finally cleans up issues with signature TX malleability, makes fraud proofs viable for real SPV security, and incidentally frees up some capacity breathing room for the near term (2-3 MB per block). Who could argue against it? Unless some truly objectionable security risk is discovered, it should be soft-forked in ASAP. A few niggles remain about the 'cleanest' way to do that, but hopefully that won't turn into too much slide-rule swinging.
One issue is that if the "effective max block size" with SW is 4 MB, then the maximum bandwidth that a full node will have to deal with is the same as if we had a hardfork to 4 MB blocks. With the current way that the network functions and is laid out, this might be too much bandwidth. Maybe this could be somewhat addressed with IBLT, weak blocks, and other tech, but that stuff doesn't exist yet. I think that there's basically agreement that 2 MB would be safe, though.
So reduce the actual block limit to 500 KByte? (effective max 2 MB). 4 MB effective is probably a tad too large for current bandwidth tech, but I'm skeptical how often it would be hit in the near term. It is a worst case assuming 1 MB of TX data and the maximum amount of associated signature data (a high number of multisig, etc.) in a single block; what effect such a nasty block would have on the system would of course need to be tested for its security implications.
No need to guess or estimate; actual data are available. Take a look at Pieter's tweet: https://twitter.com/pwuille/status/673710939678445571?s=09 (1.75x for normal txs, more for others, e.g. multisig). Take into account that normal txs represent more than 80% of the total.
SW = quadruple the cap to get double throughput. Is this formula correct?
I don't think so. It seems to me that to fully validate a block you still have to download the txs + witness. Maybe you could do some fancy things parallelizing download streams, but you still have to download all the data. Maybe I am missing something obvious, though. IMHO SegWit will lower a full node's storage requirement, because you could prune the witness part once you have validated the block (the exact timing of witness pruning will depend on how the feature is implemented). So yes, it will somewhat alleviate the burden on full node operators, but only along one dimension, leaving bandwidth untouched. I still don't have a clear idea of how SegWit will impact CPU and RAM usage. That said, @pwuille's formula just gives you an idea of how much room we can make in the txs part of the block as a result of moving the witness into a separate data structure. AFAIU the size of a block under SegWit will be ~ base_size (where you store the txs) plus witness_size. Nonetheless, witness_size depends on the transaction type; hence the actual block size depends on the kind of txs that will be included. Just to recap: @pwuille's formula is base_size * 4 + witness_size < 4 MB. @aj (Anthony Towns) on the bitcoin-dev ML suggests that a more correct formulation is a combination of two constraints: (base_size + witness_size/4 <= 1 MB) and (base_size < 1 MB).
Quoting the relevant part of @aj's email will hopefully give you an idea:
So if you have a 500B transaction and move 250B into the witness, you're still using up 250B + 250B/4 of the 1MB limit, rather than just 250B of the 1MB limit.
In particular, if you use as many p2pkh transactions as possible, you'd have 800kB of base data plus 800kB of witness data, and for a block filled with 2-of-2 multisig p2sh transactions, you'd hit the limit at 670kB of base data and 1.33MB of witness data.
That would be 1.6MB and 2MB of total actual data if you hit the limits with real transactions, so it's more like a 1.8x increase for real transactions afaics, even with substantial use of multisig addresses.
The 4MB consensus limit could only be hit by having a single trivial transaction using as little base data as possible, then a single huge 4MB witness. So people trying to abuse the system have 4x the blocksize for 1 block's worth of fees, while people using it as intended only get 1.6x or 2x the blocksize... That seems kinda backwards.
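A small sketch of @aj's two-constraint rule with the numbers from his email (byte figures rounded; this only illustrates how the 'virtual' limit plays out, it is not an implementation):
Code:
# @aj's rule, as quoted above: base_size + witness_size/4 must stay within
# 1 MB, and base_size itself is also capped at 1 MB.

MB = 1_000_000

def virtual_cost(base_size, witness_size):
    """Block 'cost' that must stay within 1 MB under @aj's formulation."""
    return base_size + witness_size / 4

cases = {
    "p2pkh-heavy (800 kB base + 800 kB witness)": (800_000, 800_000),
    "2-of-2 multisig (~670 kB base + ~1.33 MB witness)": (670_000, 1_330_000),
    "abuse case (tiny base + ~4 MB witness)": (1_000, 3_990_000),
}
for name, (base, wit) in cases.items():
    print(f"{name}: cost {virtual_cost(base, wit)/MB:.2f} MB, "
          f"actual data {(base + wit)/MB:.2f} MB")
# p2pkh:    cost 1.00 MB, actual 1.60 MB  (the ~1.6x case)
# multisig: cost 1.00 MB, actual 2.00 MB  (the ~2x case)
# abuse:    cost 1.00 MB, actual 3.99 MB  (4x data for one block's fees)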
|
Bitcoin is a participatory system which ought to respect the right of self determinism of all of its users - Gregory Maxwell.
|
|
|
forevernoob
|
|
December 10, 2015, 11:17:55 PM |
|
So if I'm understanding this correctly...
Gavin likes SW but still thinks XT is the way to go, since SW takes too long to implement?
Same response as ever, in other words: "Great idea guys. Love it. Can I interest you in XT?"
It just blows my mind why he still wants to force a fork. What's the rush? We haven't even reached the point where transactions cost more than a few cents. Surely we can wait 12+ months for SW.
|
|
|
|
VeritasSapere
|
|
December 11, 2015, 03:45:40 AM |
|
So if I'm understanding this correctly...
Gavin likes SW but still thinks XT is the way to go, since SW takes too long to implement?
Same response as ever, in other words: "Great idea guys. Love it. Can I interest you in XT?"
It just blows my mind why he still wants to force a fork. What's the rush? We haven't even reached the point where transactions cost more than a few cents. Surely we can wait 12+ months for SW.
If the blocks became consistently full, then transactions on the main Bitcoin blockchain would be rendered increasingly unreliable as well as more expensive. This would hamper adoption; doing a hard fork at short notice once the blocks do fill up would also be ill advised.
|
|
|
|
iCEBREAKER
Legendary
Offline
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
|
|
December 11, 2015, 04:06:51 AM |
|
In which the Gavinista Liberation Front reaches fresh new heights of self-clowning: https://www.reddit.com/r/btc/comments/3w5e7k/interesting_change_in_devs_detective_attitude/
Craig Wright happens to talk of testing 340 GB blocks supporting 568,000 transactions and testing huge Bitcoin scaling solutions [Clip 2, Part C] (so that wouldn't exactly put him on Blockstream's side for the Lightning Network). Maxwell is trying so hard to discredit Dr. Wright as Satoshi now, because Dr. Wright's views contradict his.
See how eager hellobitcoinworld and Huelco are to scarf down the bullshit peddled by CW, Gawker's Gizmodo, and Condé Nast's Wired? "Mmm, yummy big block bullshit," they say. "Feed us more steaming turds," they clamor. "We will, without the benefit of even token skepticism, vociferously consume any bullshit that strokes our confirmation biases" is apparently the Gavinista motto. Too bad Toomin (like everyone else) has abandoned XT. So fekkin' rekt.
|
Monero
|
| "The difference between bad and well-developed digital cash will determine whether we have a dictatorship or a real democracy." David Chaum 1996 "Fungibility provides privacy as a side effect." Adam Back 2014
|
|
|
|
|
Zarathustra
Legendary
Offline
Activity: 1162
Merit: 1004
|
|
December 11, 2015, 09:28:22 AM |
|
|
|
|
|
hdbuck
Legendary
Offline
Activity: 1260
Merit: 1002
|
|
December 11, 2015, 03:28:52 PM |
|
exit bigblocks, enter segregation.
WTF is wrong with bitcoiners?
|
|
|
|
forevernoob
|
|
December 11, 2015, 06:02:26 PM |
|
If the blocks became consistently full, then transactions on the main Bitcoin blockchain would be rendered increasingly unreliable as well as more expensive. This would hamper adoption; doing a hard fork at short notice once the blocks do fill up would also be ill advised.
Then you would agree that SW is the best solution, since it doesn't require a hard fork? Also, in what way would the blockchain be unreliable if it were full?
|
|
|
|
VeritasSapere
|
|
December 11, 2015, 06:50:06 PM Last edit: December 11, 2015, 07:30:19 PM by VeritasSapere |
|
If the blocks became consistently full, then transactions on the main Bitcoin blockchain would be rendered increasingly unreliable as well as more expensive. This would hamper adoption; doing a hard fork at short notice once the blocks do fill up would also be ill advised.
Then you would agree that SW is the best solution, since it doesn't require a hard fork? Also, in what way would the blockchain be unreliable if it were full?
I actually think that a hard fork is preferable: it is politically superior because of its implications for governance, and it would require a higher degree of consensus in order to avoid a split. SW is also not a solution that directly increases the throughput of the Bitcoin blockchain; it can be part of the solution, however. Therefore an increase in the blocksize is still necessary; SW has not changed this. The Bitcoin blockchain would become more unreliable as the blocks fill up, since there is only capacity for so many transactions per block, regardless of the transaction fee. Part of the problem is that there is no way to know how much of a fee is even required under such a scenario, which could lead to transactions becoming stuck for days or possibly never being confirmed at all. Soft forks quash the minority voice. Hard forks allow it to persist.
|
|
|
|
|