Bitcoin Forum
Poll
Question: Would you approve the compromise "Segwit + 2MB"?
Yes - 78 (62.4%)
No - 35 (28%)
Don't know - 12 (9.6%)
Total Voters: 125

Author Topic: [POLL] Possible scaling compromise: BIP 141 + BIP 102 (Segwit + 2MB)  (Read 14254 times)
jbreher (Legendary, Activity: 2912, Merit: 1515)
lose: unfind ... loose: untight
March 11, 2017, 09:45:37 PM  #141

So does this mean a soft fork bypasses consensus?
No. Soft forks are backwards compatible

Note that this requires believing that making nodes that currently operate in a trustless manner suddenly dependent upon others for security fits the definition of 'backwards compatible'. I think that definition of 'backwards compatible' is ludicrous. YMMV.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
Lauda (Legendary, Activity: 2674, Merit: 2903)
Terminated.
March 11, 2017, 10:15:09 PM  #142

The 'DoS' doesn't even require a protocol change to nullify. Indeed, there is a natural incentive already in the protocol that ensures it will never become a systemic problem. If large-time-to-verify-blocks ever became A Thing, miners will employ parallel validation. This will ensure that such large-time-to-verify-blocks will be orphaned by faster-to-verify-blocks.

Miners who gravitate to parallel validation will earn more income, and miners who do not employ parallel validation will become bankrupted over time. As will miners who create such DoS blocks.

This is already part of the protocol. No change is needed.
I've asked for a refresher on 'parallel validation':
Quote
<harding> many miners currently mine empty blocks on top of unvalidated (but PoW-correct) new blocks.  There's no reason to expect them to behave differently under BTU, so most miners would probably extend the chain with the high-validation-work block rather than create an alternative block at the same height.
<harding> Thus parallel validation doesn't get you anything unless a low-validation-work block is coincidentally produced at the same time as a high-validation-work block.
<harding> parallel validation only helps you in the rare case that there are two or more blockchains with the same PoW.  Miners are disincentivized to create such chains since one of them is certain to lose, so the incentives probably favor them extending a high-validation-work block rather than creating a competing low-validation-work block.
<harding> Imagine block A is at the tip of the chain.  Some miner then extends that chain with block B, which looks like it'll take a long time to verify.  As a miner, you can either attempt to mine block C on top of block B, mining without validation but creating chain ABC that certainly has the most PoW.  Or you can mine block B' that is part of chain AB' that will have less PoW than someone who creates chain ABC.
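harding's point is about cumulative proof-of-work. A minimal sketch of the choice he describes, assuming (hypothetically) equal work per block: chain ABC, built on the slow-to-validate block B, accumulates more work than the competing chain AB', so work-following miners prefer it.

```python
def chain_work(blocks):
    """Total proof-of-work of a chain (toy units)."""
    return sum(b["work"] for b in blocks)

A  = {"name": "A",  "work": 1}
B  = {"name": "B",  "work": 1}   # slow-to-validate block
Bp = {"name": "B'", "work": 1}   # competing fast-to-validate block
C  = {"name": "C",  "work": 1}   # empty block mined on top of unvalidated B

ABC = [A, B, C]    # extend the slow block without validating it
ABp = [A, Bp]      # mine a competing block at B's height instead

# Nodes follow the most-work chain, so ABC wins despite B being slow:
print(chain_work(ABC) > chain_work(ABp))  # True
```

This is why harding argues parallel validation only matters in the rare case of two candidate chains with equal PoW.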

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1 (Legendary, Activity: 3318, Merit: 2207)
March 11, 2017, 10:18:52 PM  #143

So does this mean a soft fork bypasses consensus?
No. Soft forks are backwards compatible

Note that this requires believing that making nodes that currently operate in a trustless manner suddenly dependent upon others for security fits the definition of 'backwards compatible'. I think that definition of 'backwards compatible' is ludicrous. YMMV.

something we can agree on.. needing segwit nodes as the 'upstream filters' (gmaxwell's own buzzword) is bad for security. plus it's not "backward compatible"

i prefer the term backward trimmed (trimmable), or backwards 'filtered' (using gmaxwell's word), to make it clearer that old nodes are not getting fully validatable block data.
not a perfect term, but at least it's slightly clearer about what segwit is "offering", compared to the half-truths, half-promises and word-twisting used to avoid giving a real answer.

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
d5000 (Legendary, Activity: 3010, Merit: 2925)
Decentralization Maximalist
March 11, 2017, 10:19:48 PM  #144

Let's say 5 years, 10 years maybe is too far away.
We also need to determine whether we are talking about a block size in the traditional sense or a post-Segwit 'base + weight' size (as the "new" block size). Which is it?
The 20 MB I mentioned before were calculated by the straightforward traditional [non-segwit] way. So to compare to my previous calculation, and because unfortunately Segwit is still not active, I would be more interested in the "traditionally-calculated" value. But you can obviously add an estimation for a post-Segwit size.

Lauda
March 11, 2017, 10:25:23 PM  #145

sigop attack
v0.12 had a 4000 sigop per tx limit (read the code)
v0.14 had a 16000 sigop per tx limit (read the code)

so now check the code.
https://github.com/bitcoin/bitcoin/tree/0.14/src
core 0.14: MAX_BLOCK_SIGOPS_COST = 80000;
core 0.14: MAX_STANDARD_TX_SIGOPS_COST = MAX_BLOCK_SIGOPS_COST/5;
meaning
core 0.14: MAX_STANDARD_TX_SIGOPS_COST = 16000
You almost made me fall for this... I was too tired to check right away whether your numbers were true. That '80000' number is the Segwit number, i.e. it is scaled for the 4 MB weight: 80,000/4 = 20,000. Now if you apply 'MAX_BLOCK_SIGOPS_COST/5' to this number, you get... 4000.  Roll Eyes
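The arithmetic being argued over can be checked in a few lines. A minimal sketch, using the Core 0.14 constant names quoted above and the segwit witness scale factor of 4; this is illustrative arithmetic, not Bitcoin Core's actual validation code:

```python
# Constants as quoted from Bitcoin Core 0.14 (names per the posts above).
WITNESS_SCALE_FACTOR = 4                # segwit weight scaling
MAX_BLOCK_SIGOPS_COST = 80_000          # consensus limit, in "cost" units
MAX_STANDARD_TX_SIGOPS_COST = MAX_BLOCK_SIGOPS_COST // 5  # policy limit = 16,000

# Converted back to pre-segwit ("legacy") sigops by dividing out the
# witness scale factor:
legacy_block_sigops = MAX_BLOCK_SIGOPS_COST // WITNESS_SCALE_FACTOR            # 20,000
legacy_standard_tx_sigops = MAX_STANDARD_TX_SIGOPS_COST // WITNESS_SCALE_FACTOR  # 4,000

print(legacy_block_sigops, legacy_standard_tx_sigops)  # 20000 4000
```

In other words, the 16,000 figure is in weight-scaled cost units; expressed in legacy sigops it is the same 4,000-per-standard-tx limit, which is the point of contention here.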

The 20 MB I mentioned before were calculated by the straightforward traditional [non-segwit] way. So to compare to my previous calculation, and because unfortunately Segwit is still not active, I would be more interested in the "traditionally-calculated" value. But you can obviously add an estimation for a post-Segwit size.
I'm not exactly sure how to mitigate the DOS vector in that case. If that were mitigated in some way, I'd say a 10 MB upper limit for the next 5 years. I doubt anyone could expect that we'd need more than 30 TPS, plus all the second-layer solutions, so quickly.

franky1
March 11, 2017, 10:28:23 PM  #146

sigop attack
v0.12 had a 4000 sigop per tx limit (read the code)
v0.14 had a 16000 sigop per tx limit (read the code)

so now check the code.
https://github.com/bitcoin/bitcoin/tree/0.14/src
core 0.14: MAX_BLOCK_SIGOPS_COST = 80000;
core 0.14: MAX_STANDARD_TX_SIGOPS_COST = MAX_BLOCK_SIGOPS_COST/5;
meaning
core 0.14: MAX_STANDARD_TX_SIGOPS_COST = 16000
You almost made me fall for this... I was too tired to check right away whether your numbers were true. That '80000' number is the Segwit number, i.e. it is scaled for the 4 MB weight: 80,000/4 = 20,000. Now if you apply 'MAX_BLOCK_SIGOPS_COST/5' to this number, you get... 4000.  Roll Eyes

i used 0.12 as an example of how many quadratics were permissible prior to segwit
and
i used 0.14 as an example of how many quadratics were permissible post segwit
prior: 4000
post: 16000

but in actual fact v0.14 is not 4,000 pre-segwit; it's actually still 16,000 pre-segwit (for pools using these up-to-date versions, e.g. 0.14, today)
check the code

AngryDwarf (Sr. Member, Activity: 476, Merit: 500)
March 11, 2017, 10:33:48 PM  #147

So does this mean a soft fork bypasses consensus?
No. Soft forks are backwards compatible

Note that this requires believing that making nodes that currently operate in a trustless manner suddenly dependent upon others for security fits the definition of 'backwards compatible'. I think that definition of 'backwards compatible' is ludicrous. YMMV.

This is my biggest concern about segwit being implemented as a soft fork. All nodes are equal until they are not. If it was implemented as a hard fork, we wouldn't have this two tier network system, if I understand correctly.

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
Lauda
March 11, 2017, 10:35:53 PM  #148

This is my biggest concern about segwit being implemented as a soft fork. All nodes are equal until they are not. If it was implemented as a hard fork, we wouldn't have this two tier network system, if I understand correctly.
Segwit is like any other soft fork before it. Nodes that do not update do not validate the new rules. In a hard fork, by contrast, nodes that do not update are cut off from the network.

i used 0.12 as an example of how many quadratics were permissible prior to segwit
and
i used 0.14 as an example of how many quadratics were permissible post segwit
prior: 4000
post: 16000

but in actual fact it is not v0.14 =4000 prior segwit its actually still 16,000 prior segwit
check the code
No, you misunderstand this completely. This is absurd. You can create a TX with 20k sigops maximum, you just can't do this with Core. You're confusing policy and consensus rules, and you've misread the code. See this for example: https://github.com/bitcoin/bitcoin/pull/8438

franky1
March 11, 2017, 10:37:42 PM  #149

No, you misunderstand this completely. This is absurd. You can create a TX with 20k sigops maximum, you just can't do this with Core. You're confusing policy and consensus rules, and you've misread the code.

admit there is a 2-tiered system. not the word-twisting

Lauda
March 11, 2017, 10:39:25 PM  #150

No, you misunderstand this completely. This is absurd. You can create a TX with 20k sigops maximum, you just can't do this with Core. You're confusing policy and consensus rules, and you've misread the code.
admit there is a 2-tiered system. not the word-twisting
As soon as you admit to being wrong with your "numbers". We all know that day won't come. Roll Eyes

AngryDwarf
March 11, 2017, 11:02:17 PM  #151

This is my biggest concern about segwit being implemented as a soft fork. All nodes are equal until they are not. If it was implemented as a hard fork, we wouldn't have this two tier network system, if I understand correctly.
Segwit is like any other soft fork before it. Nodes that do not update do not validate the new rules. In a hard fork, by contrast, nodes that do not update are cut off from the network.

Did any soft fork that came before it create a two-tier network system? At least with a hard fork, miners will not create segwit blocks until the vast majority of nodes have upgraded. Those who find their nodes unable to sync will upgrade their nodes. With the two-tier network system introduced by the SWSF, nodes that have not been upgraded are served filtered data, so they are no longer full nodes. This appears to be a mechanism to bypass full-node consensus, if the miners agree to start creating segwit blocks. Miners that do not wish to upgrade find they have to, or risk having their blocks orphaned, so are basically forced to upgrade. Please someone correct my misunderstanding; otherwise I have a right to feel rather uncomfortable about this.

franky1
March 11, 2017, 11:14:06 PM  #152

No, you misunderstand this completely. This is absurd. You can create a TX with 20k sigops maximum, you just can't do this with Core. You're confusing policy and consensus rules, and you've misread the code. See this for example: https://github.com/bitcoin/bitcoin/pull/8438

this is your misunderstanding
the 20k limit (old v0.12) is the BLOCK LIMIT for sigops
the 4,000 is the TRANSACTION limit
meaning a malicious spammer can make 5 TXs of 4,000 sigops each in v0.12 and FILL THE BLOCK'S sigop limit (no more txs allowed)

the 80k limit (v0.14) is the BLOCK LIMIT for sigops
the 16,000 is the TRANSACTION limit
meaning a malicious spammer can make 5 TXs of 16,000 sigops each in v0.14 and FILL THE BLOCK'S sigop limit (no more txs allowed)

as for your link - https://github.com/bitcoin/bitcoin/pull/8438
Quote
Treat high-sigop transactions as larger rather than rejecting them

meaning they acknowledge they are allowing transactions with more sigops to be used quadratically in an attack.

they simply think that it's not a problem. but let's say in the future things move forward, and they then made it 32,000 sigops per tx and 160,000 per block. that's still 5 txs per block, and because a malicious user with native keys will do it, the TIME to process 5 txs of 32,000 compared to last year's 5 txs of 4,000 will have an impact...

the solution is: yes, increase BLOCK sigop limits, but don't increase TX sigop limits. keep it low, 16,000 maybe, but preferably 4,000, as a constant barrier against malicious native-key quadratic creators.
meaning if it was 80,000, a malicious user has to make 20 txs to fill the block's 80,000 limit, instead of just 5.
and because it's only 4,000×20 instead of 16,000×5, the validation time is improved
but they haven't
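The validation-time claim in this post can be illustrated with a toy model. Assuming, as the quadratic-hashing discussions do, that verifying a legacy transaction costs on the order of the square of its sigop count, filling an 80,000-sigop block with a few large transactions costs far more than filling it with many small ones. This is a hypothetical cost model for illustration, not actual Core code:

```python
def relative_validation_cost(tx_sigops: int, block_sigops: int = 80_000) -> int:
    """Toy model: per-tx hashing work grows quadratically with the
    number of sigops in the tx, so a block filled with n_txs equal
    transactions costs n_txs * tx_sigops**2 (arbitrary units)."""
    n_txs = block_sigops // tx_sigops
    return n_txs * tx_sigops ** 2

few_big    = relative_validation_cost(16_000)  # 5 txs of 16,000 sigops each
many_small = relative_validation_cost(4_000)   # 20 txs of 4,000 sigops each

print(few_big // many_small)  # 4: the few-big-txs block costs 4x as much
```

Under this model the two blocks contain the same 80,000 total sigops, but capping the per-tx limit at 4,000 cuts worst-case validation work fourfold, which is the argument being made above.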

jbreher
March 11, 2017, 11:20:02 PM  #153

So does this mean a soft fork bypasses consensus?
No. Soft forks are backwards compatible

Note that this requires believing that making nodes that currently operate in a trustless manner suddenly dependent upon others for security fits the definition of 'backwards compatible'. I think that definition of 'backwards compatible' is ludicrous. YMMV.

This is my biggest concern about segwit being implemented as a soft fork. All nodes are equal until they are not. If it was implemented as a hard fork, we wouldn't have this two tier network system, if I understand correctly.

Well, there is yet another effect which seems rarely discussed. Under The SegWit Omnibus Changeset, there are essentially two classes of bitcoins: those created by legacy transactions, and those created by SegWit transactions. This is by definition a destruction of fungibility.

How important fungibility is to you is something only you can decide.

d5000
March 11, 2017, 11:30:33 PM  #154

I'm not exactly sure how to mitigate the DOS vector in that case. If that were mitigated in some way, I'd say a 10 MB upper limit for the next 5 years. I doubt anyone could expect that we'd need more than 30 TPS, plus all the second-layer solutions, so quickly.

OK, 10 MB looks good to me (it would be possible to handle at least 50 million users with that) - and it's also close to Franky's 8 MB. With Segwit, if I understand it correctly, that transaction capacity (30 tps) would be equivalent to approximately a 2-4 MB limit.
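The capacity figures being traded here can be reproduced with back-of-the-envelope arithmetic. The average transaction size (assumed here to be ~550 bytes) is a hypothetical input chosen for illustration; only the 10-minute block interval is a protocol constant:

```python
AVG_TX_BYTES = 550       # assumed average transaction size (illustrative)
BLOCK_INTERVAL_S = 600   # Bitcoin's target block time in seconds
block_size_mb = 10       # the limit under discussion

txs_per_block = block_size_mb * 1_000_000 // AVG_TX_BYTES  # ~18,000 txs
tps = txs_per_block / BLOCK_INTERVAL_S
print(round(tps))  # ~30 tx/s
```

With a smaller assumed average transaction, the same 10 MB yields proportionally more throughput, which is why these estimates vary across the thread.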

jbreher
March 11, 2017, 11:44:17 PM  #155

The 'DoS' doesn't even require a protocol change to nullify. Indeed, there is a natural incentive already in the protocol that ensures it will never become a systemic problem. If large-time-to-verify-blocks ever became A Thing, miners will employ parallel validation. This will ensure that such large-time-to-verify-blocks will be orphaned by faster-to-verify-blocks.

Miners who gravitate to parallel validation will earn more income, and miners who do not employ parallel validation will become bankrupted over time. As will miners who create such DoS blocks.

This is already part of the protocol. No change is needed.
I've asked for a refresher on 'parallel validation':
Quote
<harding> many miners currently mine empty blocks on top of unvalidated (but PoW-correct) new blocks.  There's no reason to expect them to behave differently under BTU, so most miners would probably extend the chain with the high-validation-work block rather than create an alternative block at the same height.
<harding> Thus parallel validation doesn't get you anything unless a low-validation-work block is coincidentally produced at the same time as a high-validation-work block.
<harding> parallel validation only helps you in the rare case that there are two or more blockchains with the same PoW.  Miners are disincentivized to create such chains since one of them is certain to lose, so the incentives probably favor them extending a high-validation-work block rather than creating a competing low-validation-work block.
<harding> Imagine block A is at the tip of the chain.  Some miner then extends that chain with block B, which looks like it'll take a long time to verify.  As a miner, you can either attempt to mine block C on top of block B, mining without validation but creating chain ABC that certainly has the most PoW.  Or you can mine block B' that is part of chain AB' that will have less PoW than someone who creates chain ABC.

Harding's concern would be founded, but only to the extent that all miners would suddenly start performing only zero-transaction block mining. Which of course is ludicrous.

What is not said is that miners who perform zero-transaction mining do so only until they are able to validate the block that they are mining atop. Once they have validated that block, they modify the block they are mining to include a load of transactions. They cannot include the load of transactions before validation, because until the parent is validated they have no idea which transactions they need to exclude from the block they are mining: if they mined a block that included a transaction already confirmed in a previous block, their block would be orphaned as invalid.

So what would happen with parallel validation under such a scenario?

Miner A is mining at height N. As he is doing so, Miner B solves a block at height N that contains an aberrant quadratic-hash-time transaction (call this the 'ADoS block', for attempted denial of service) and propagates it to the network.
Miner A, who implements parallel validation and zero-transaction mining, stops mining his height-N block. He spawns a thread to start validating the ADoS block at height N, and starts mining a zero-transaction block at height N+1 atop ADoS.
Miner C solves a normal-validation-time block C at height N and propagates it to the network.
When Miner A receives block C, he spawns another thread to validate block C. He is still mining the zero-transaction block atop ADoS.
A short time thereafter, Miner A finishes validation of block C. ADoS is still not validated. So Miner A builds a new block at height N+1 atop block C, full of transactions, and switches to mining that.
From the perspective of Miner A, he has orphaned Miner B's ADoS block.
Miner A may or may not win round N+1. But statistically, he has a much greater chance to win round N+1 than any other miner that does not perform parallel validation. Indeed, until the ADoS block is fully validated, it is at risk of being orphaned.
The net result is that miners have a natural incentive to operate in this manner, as it assures them a statistical advantage in the case of ADoS blocks. So if Miner A does not win round N+1, another miner that implements parallel validation assuredly will. End result: ADoS is orphaned.

End result: Harding's concern is irrelevant. The quadratic hash time problem solves itself. No change to the protocol needed.
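The scenario above can be reduced to a toy decision rule. The block names and validation timings here are hypothetical, purely to show why the slow-to-validate block loses when a miner validates candidate tips concurrently and extends whichever finishes first:

```python
def parallel_validation_winner(validation_times):
    """Toy model of parallel validation: Miner A validates all candidate
    blocks at the same height concurrently, and builds his full,
    transaction-laden block on whichever tip finishes validating first.
    validation_times maps block name -> seconds to validate."""
    return min(validation_times, key=validation_times.get)

# ADoS = attempted-denial-of-service block with a quadratic-hash-time tx;
# C = a normal block solved at the same height shortly afterwards.
candidates = {"ADoS": 300.0,   # pathological: minutes to validate
              "C":     2.0}    # normal: seconds to validate
print(parallel_validation_winner(candidates))  # C
```

Under this model the ADoS block is orphaned whenever a normal competitor appears before its validation completes, which is the incentive argument the post makes; harding's earlier objection is that miners may instead extend the unvalidated ADoS tip because it already has the most PoW.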

LazyTownSt (Newbie, Activity: 17, Merit: 0)
March 11, 2017, 11:45:17 PM  #156

This is a massive issue. I'm surprised at the lack of votes so far.
jbreher
March 11, 2017, 11:59:29 PM  #157

This is a massive issue. I'm surprised at the lack of votes so far.

'Voting' is pointless. The only 'votes' that matter are tendered by people choosing which code they are running.

I'm 'voting' BU.

Lauda
March 12, 2017, 08:14:23 AM  #158

this is your misunderstanding
the 20k limit (old v0.12) is the BLOCK LIMIT for sigops
the 4,000 is the TRANSACTION limit
meaning a malicious spammer can make 5 TXs of 4,000 sigops each in v0.12 and FILL THE BLOCK'S sigop limit (no more txs allowed)

the 80k limit (v0.14) is the BLOCK LIMIT for sigops
the 16,000 is the TRANSACTION limit
meaning a malicious spammer can make 5 TXs of 16,000 sigops each in v0.14 and FILL THE BLOCK'S sigop limit (no more txs allowed)
Nope. Wrong. You are confusing policy rules, consensus rules and Segwit. The 80k number is Segwit-only. A non-Core client can create a TX with a maximum of 20k sigops, which is the maximum that the consensus rules allow (neither of the numbers you're writing about, i.e. 4k or 16k).

Well, there is yet another effect which seems rarely discussed. Under The SegWit Omnibus Changeset, there are essentially two classes of bitcoins: those created by legacy transactions, and those created by SegWit transactions. This is by definition a destruction of fungibility.
No. It does not destroy fungibility.

End result: Harding's concern is irrelevant. The quadratic hash time problem solves itself. No change to the protocol needed.
Definitely; everyone is an honest actor in this network and we are all living on a rainbow. Roll Eyes

jbreher
March 12, 2017, 09:08:33 AM  #159

Well, there is yet another effect which seems rarely discussed. Under The SegWit Omnibus Changeset, there are essentially two classes of bitcoins: those created by legacy transactions, and those created by SegWit transactions. This is by definition a destruction of fungibility.
No. It does not destroy fungibility.

Do you understand that 'fungibility' is the property that no units of a thing have differing characteristics from other units?

Quote
End result: Harding's concern is irrelevant. The quadratic hash time problem solves itself. No change to the protocol needed.
Definitely; everyone is an honest actor in this network and we are all living on a rainbow.

Way to make a technical rebuttal, Lauda. You're certainly on your game tonight.

Lauda
March 12, 2017, 09:19:03 AM  #160

Do you understand that 'fungibility' is the property that no units of a thing have differing characteristics from other units?
So for you, being part of the legacy UTXO set vs the Segwit UTXO set is an adequate characteristic to destroy fungibility? What happens when *all* (in theory) keys are Segwit UTXOs? Is fungibility suddenly restored?

Way to make a technical rebuttal, Lauda. You're certainly on your game tonight.
I've come to realize that it is pointless to even attempt that, since you only perceive what you want to. You are going to come to the same conclusion each time, regardless of whether you're wrong or not.
