Author Topic: SegWit + Variable and Adaptive (but highly conservative) Blockweight Proposal  (Read 2054 times)
This is a self-moderated topic. If you do not want to be moderated by the person who started this topic, create a new topic.
DooMAD (OP)
Legendary
Activity: 3766 | Merit: 3100
May 10, 2017, 09:00:29 PM
Last edit: December 15, 2017, 06:23:13 PM by DooMAD
#1

:Moderation note:  I am open to economic and technical arguments to tweak any aspect of this compromise proposal, but outright dismissal over concerns of game theory will be deleted on sight unless said concerns are accompanied by a suggestion to improve the proposal and overcome those concerns.  Constructive feedback, not torpedoes, please.


A static limit is, in my considered opinion, shortsighted.  It fails to address the concerns many still have about on-chain transactions becoming economically unviable.  Many argue that it's simply a matter of conspiracy "holding SegWit back"; that argument is not without reason, but I'm not entirely convinced by it, and I feel that locking in guarantees about the blocksize would hasten the activation of SegWit.
[//EDIT December 2017: Almost prescient given BIP91's initial success (despite failing on the second hurdle), heh.   Grin]  

I maintain that an algorithmic process based on real-time network traffic is far better in every way than the established "clumsy hack" mentality: picking an arbitrary whole number out of thin air, kicking the can down the road, waiting for "permission" from the developers, and descending into the same stupid war each time the issue of throughput arises in future.  Plus, a one-time hardfork is far better than multiple hardforks every time we start nearing another new and arbitrary static limit.  And there are no violent swings in fee pressure this way.  Everything is smooth, consistent and, for the most part, predictable, which is what we should all want Bitcoin to be.

The proposal gauges fee pressure in conjunction with traffic to determine whether an increase, a decrease or no change at all is required.  Strong consideration has been given to limiting increases (and allowing decreases) so as not to reach a level where nodes would struggle with bandwidth usage.  The fee condition also helps prevent gaming the system.  This latest iteration of the proposal, largely based on BIP106, includes an adjustment to the witness space to maintain the 1:3 ratio between base and witness.  SegWit is a prerequisite for this to be activated:


Code:
IF more than 50% of the block sizes in the last difficulty period (2016 blocks) are more than 90% of MaxBlockSize
    AND  (TotalTxFeeInLastDifficulty > average(TotalTxFee in last 8 difficulty periods))
    THEN BaseMaxBlockSize = BaseMaxBlockSize +0.01MB
         WitnessMaxBlockSize = WitnessMaxBlockSize +0.03MB

ELSE IF more than 90% of the block sizes in the last difficulty period are less than 50% of MaxBlockSize
    THEN BaseMaxBlockSize = BaseMaxBlockSize -0.01MB
         WitnessMaxBlockSize = WitnessMaxBlockSize -0.03MB

ELSE
    Keep the same BaseMaxBlockSize and WitnessMaxBlockSize
(credit to Upal Chakraborty for their original concept in BIP106)

  //EDIT:  Cheers to d5000 for their proposed average fee adjustment

So, in plain English: a tiny 0.01 MB adjustment to the base blockweight can occur each difficulty period, along with a proportionate 0.03 MB adjustment to the witness space to maintain the 1:3 ratio, but only if (a rough sketch of the logic follows the list below):

  • SegWit is implemented
  • Either there are sufficiently full blocks to justify an increase to the blockweight, or sufficiently empty to reduce it
  • More fees were generated in the latest difficulty period than the average over the last 8 periods (this condition applies to increases only; the blockweight can be reduced regardless of fees, which deters gaming the system)
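
For illustration only, here's roughly how that per-period check might look in code.  This is not a reference implementation; all of the names and the MB-denominated constants are placeholders for readability (a real client would work in weight units inside consensus code):

Code:
# Illustrative sketch only, not a reference implementation.  Function and
# variable names, and the MB-denominated constants, are placeholders.

def adjust_limits(block_sizes, base_max, witness_max,
                  fees_last_period, avg_fees_last_8):
    """Return (base_max, witness_max) for the next difficulty period.

    block_sizes      -- total sizes of the 2016 blocks in the last period (MB)
    base_max         -- current base limit (MB)
    witness_max      -- current witness limit (MB), kept at 3x the base
    fees_last_period -- total tx fees collected in the last period
    avg_fees_last_8  -- average of TotalTxFee over the last 8 periods
    """
    max_block_size = base_max + witness_max
    n = len(block_sizes)
    nearly_full = sum(1 for s in block_sizes if s > 0.90 * max_block_size)
    mostly_empty = sum(1 for s in block_sizes if s < 0.50 * max_block_size)

    # Increase only if >50% of blocks were >90% full AND fees beat the 8-period average.
    if nearly_full > 0.5 * n and fees_last_period > avg_fees_last_8:
        return base_max + 0.01, witness_max + 0.03
    # Decrease if >90% of blocks were <50% full; no fee condition applies on the way down.
    if mostly_empty > 0.9 * n:
        return base_max - 0.01, witness_max - 0.03
    # Otherwise leave both limits unchanged.
    return base_max, witness_max

Note the asymmetry: fee pressure is required for an increase but not for a decrease, which is what makes sustained spam expensive.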

Mathematically, assuming an average block time of ~10 minutes, there are a maximum of ~104 difficulty adjustments over a 4 year period, so even if there were a 0.01 MB increase at every difficulty re-target (the chances of which are negligible), the base blockweight would still only be ~2.04 MB after 4 years.

Is this a compromise most of us could get behind?

jonald_fyookball
Legendary
Activity: 1302 | Merit: 1004
May 10, 2017, 11:10:27 PM
#2

I like the idea of adaptive blocksizes. 

I also favor the idea that capacity should always outpace demand, so I think the increases have to be substantially greater than this.  I think Ethereum does it in a smart way.  If I am correct, it is something like 20% more than a moving average of actual capacity used.  This ensures blocks keep getting bigger as needed. 

BillyBobZorton
Legendary
Activity: 1204 | Merit: 1028
May 10, 2017, 11:34:27 PM
#3

Quote from: jonald_fyookball
I like the idea of adaptive blocksizes.

I also favor the idea that capacity should always outpace demand, so I think the increases have to be substantially greater than this.  I think Ethereum does it in a smart way.  If I am correct, it is something like 20% more than a moving average of actual capacity used.  This ensures blocks keep getting bigger as needed. 

How do you differentiate real demand from spam demand?

If someone like Ver decides to dump millions of dollars worth of spam transactions in order to make the blockchain huge, how do you stop this?  Since, if it's automated, the blockchain will just adapt to this demand (even if it's fake), centralizing the nodes as a result.

I just don't see how flexible blocksize schemes aren't exploitable.
jonald_fyookball
Legendary
Activity: 1302 | Merit: 1004
May 11, 2017, 12:03:16 AM
#4

I like the idea of adaptive blocksizes. 

I also favor the idea that capacity should always outpace demand, so I think the increases have to be substantially greater than this.  I think Ethereum does it in a smart way.  If I am correct, it is something like 20% more than a moving average of actual capacity used.  This ensures blocks keep getting bigger as needed. 

How do you differentiate real demand from spam demand?

If someone like Ver decides to dump millions of dollars worth of spam transactions in order to make the blockchain huge, how do you stop this? since if it's automated, the blockchain will just adapt to this demand (even if its fake) centralizing the nodes as a result.

I just don't see how flexible blocksize schemes aren't exploitable.

Well, I think the flexibility goes both ways -- blocks can get smaller again if the space isn't being used.  That's the beauty of flexible blocks.

If someone wants to use the blockchain, and pay for it (spend millions), whether or not the transactions were useful to anyone else is kind of irrelevant.  One man's spam is another man's legit use.  But if they pay the fees and the miners agree to it, I think it's OK and we shouldn't be trying to constrict the blocksizes because of it.  Better to get a little bit of bloat and make the spammers pay for all that volume than to constantly punish everyone else with high fees and restricted capacity.


BillyBobZorton
Legendary
Activity: 1204 | Merit: 1028
May 11, 2017, 12:24:59 AM
#5

I like the idea of adaptive blocksizes. 

I also favor the idea that capacity should always outpace demand, so I think the increases have to be substantially greater than this.  I think Ethereum does it in a smart way.  If I am correct, it is something like 20% more than a moving average of actual capacity used.  This ensures blocks keep getting bigger as needed. 

How do you differentiate real demand from spam demand?

If someone like Ver decides to dump millions of dollars worth of spam transactions in order to make the blockchain huge, how do you stop this? since if it's automated, the blockchain will just adapt to this demand (even if its fake) centralizing the nodes as a result.

I just don't see how flexible blocksize schemes aren't exploitable.

Well, i think the flexibility goes both ways -- blocks can get smaller again if the space isn't being used.  That's the beauty of flexible blocks.

If someone wants to use the blockchain, and pay for it (spend millions), whether or not the transactions were useful to anyone else is kind of irrelevant.  One man's spam is another man's legit use.  But if they pay the fees and the miners agree to it, I think its ok and we shouldn't be trying to constrict the blocksizes because of it.  Better to get a little bit of bloat and make the spammers pay for all that volume than to constantly punish everyone else with high fees and restricted capacity.



" One man's spam is another man's legit use.  "

But who is benefiting, other than the guy spamming the network, at the expense of everyone else getting their nodes bloated with an insane amount of data?

The network would grow so much that nodes would end up a nightmare to run.

For example, downloading the Ethereum blockchain from scratch is hell because there are some blocks filled with spam due to an attack. You have to bypass those points or it's pretty much impossible to download.

With this, you are only damaging the network long term, because people will not bother with downloading the blockchain.
d5000
Legendary
Activity: 3892 | Merit: 6089
May 11, 2017, 12:32:19 AM
#6

I fully support this proposal and hope we can move forward with a real BIP and an actual implementation based on it.

Quote from: BillyBobZorton
How do you differentiate real demand from spam demand?

If someone like Ver decides to dump millions of dollars worth of spam transactions in order to make the blockchain huge, how do you stop this? since if it's automated, the blockchain will just adapt to this demand (even if its fake) centralizing the nodes as a result.

In this proposal, it would take years of continuous spamming to reach a dangerous block size. Not just mempool spamming (like now): the transactions would actually have to be included in the blocks, so it would be a very costly operation.

You can do the math: the increase of the base block size would be a maximum of about 250 kB per year (that is the important number with regard to the quadratic hashing problem), while the total block size could only change by about 1 MB per year.
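
For anyone who wants to check, the rough arithmetic (assuming ~26 difficulty retargets per year) is:

Code:
# Back-of-the-envelope check of the per-year figures above.
periods_per_year = 365.25 * 144 / 2016      # ~26.1 difficulty periods per year
print(periods_per_year * 0.01)              # base growth:  ~0.26 MB/year (~250 kB) max
print(periods_per_year * (0.01 + 0.03))     # total growth: ~1.04 MB/year (~1 MB) max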

What I like, compared with the last proposal (5% per period), is the non-exponential approach to block size increases. So even if the block size becomes bigger, it remains equally difficult to "spam the blocksize to the moon".

franky1
Legendary
Activity: 4200 | Merit: 4442
May 11, 2017, 12:32:50 AM
Last edit: May 11, 2017, 12:47:39 AM by franky1
#7

If there is going to be a hard consensus to move to adaptive/dynamic blocks, then there would be no need for the 1:3 ratio of a block inside a block.

Instead, at the point where all the nodes are ready to accept adaptive blocks, validating segwit can be part of standard consensus, meaning it can be just a single block format that allows
(TX input)(sig)(TX output) - native/legacy tx
and
(TX input)(TX output)(sig) - segwit
all in the same block area,

thus allowing a CLEAN single merkle block where people who want to use segwit txs can, and those who want to remain with native txs can, which unites the community because there is no kludge of tiered networks, of stripping blocks apart, and compatibility issues between different nodes.


also
Code:
    THEN BaseMaxBlockSize = BaseMaxBlockSize -0.01MB
      WitnessMaxBlockSize = WitnessMaxBlockSize -0.03MB

this would cause orphan risks when nodes rescan the blockchain

if one period of 2016 blocks were produced under limit X, and the next period the limit becomes Y (= X - 0.01) with Y as the active rule, all the X-sized blocks would get orphaned, because those 2016 blocks are above the current Y rule.

jonald_fyookball
Legendary
Activity: 1302 | Merit: 1004
May 11, 2017, 12:57:06 AM
#8

I like the idea of adaptive blocksizes. 

I also favor the idea that capacity should always outpace demand, so I think the increases have to be substantially greater than this.  I think Ethereum does it in a smart way.  If I am correct, it is something like 20% more than a moving average of actual capacity used.  This ensures blocks keep getting bigger as needed. 

How do you differentiate real demand from spam demand?

If someone like Ver decides to dump millions of dollars worth of spam transactions in order to make the blockchain huge, how do you stop this? since if it's automated, the blockchain will just adapt to this demand (even if its fake) centralizing the nodes as a result.

I just don't see how flexible blocksize schemes aren't exploitable.

Well, i think the flexibility goes both ways -- blocks can get smaller again if the space isn't being used.  That's the beauty of flexible blocks.

If someone wants to use the blockchain, and pay for it (spend millions), whether or not the transactions were useful to anyone else is kind of irrelevant.  One man's spam is another man's legit use.  But if they pay the fees and the miners agree to it, I think its ok and we shouldn't be trying to constrict the blocksizes because of it.  Better to get a little bit of bloat and make the spammers pay for all that volume than to constantly punish everyone else with high fees and restricted capacity.



" One man's spam is another man's legit use.  "

But who is benefiting other than the guy spamming the network at the expense of everyone else getting their nodes bloated with insane amount of data?


The users are benefitting by getting fast confirmations and low fees since the blocks aren't full. 

Quote from: BillyBobZorton

The network would grow so much and nodes would end up a nightmare to run.

For example downloading the Ethereum blockchain from scratch is hell because there are some blocks filled with spam due an attack. You have to bypass those points or its pretty much impossible to download.

With this, you are only damaging the network long term because people will not bother with downloading the blockchain.

You have a point that blockchain bloat can be an issue.  That was originally the reason why Satoshi put the 1 MB limit in place.  However, when he did so, blocks were about 1 kilobyte.  I agree it might make sense to have some limit so the bloat can't go ballistic, but I don't see what's wrong with flexible limits.  If you use something like, let's say, a 30-day moving average, well then the blocks would have to be stuffed for a whole month.  This would become incredibly expensive...and the money would go right into the hands of the miners who have to deal with it anyway...so they are being compensated for that.

Non-mining nodes can use some form of SPV and skip over the bloat.









BillyBobZorton
Legendary
Activity: 1204 | Merit: 1028
May 11, 2017, 02:21:40 PM
#9

I fully support this proposal and hope we can move forward with a real BIP and an actual implementation based on it.

How do you differentiate real demand from spam demand?

If someone like Ver decides to dump millions of dollars worth of spam transactions in order to make the blockchain huge, how do you stop this? since if it's automated, the blockchain will just adapt to this demand (even if its fake) centralizing the nodes as a result.

In this proposal, it would take years of continuous spamming to achieve a dangerous block size. Not only mempool spamming (like now), the transactions would have to be actually included in the blocks, so it would be a very costly operation.

You can do the math: the increase of the base block size would be a maximum of 250 kB per year (that is the important number regarding to the quadratic hashing problem), while the total block size only could change 1 MB/year.

What I like regarding the last proposal (5% per period) is the non-exponential approach to block size increases. So if the block size becomes bigger it still is equally difficult to "spam the blocksize to the moon".

It would need to be researched and studied deeply, considering all the exploitable angles, because you are forgetting there are people out there with literal money machines.

An attacker like the PBOC itself has trillions of dollars to dump in order to bloat the network and keep it at the maximum, so even if you set a limit, you would end up with a blocksize that's too big for most people, and then the nodes start dropping like soldiers on D-Day.
Carlton Banks
Legendary
Activity: 3430 | Merit: 3071
May 11, 2017, 02:33:52 PM
#10

Quote from: DooMAD
Mathematically, assuming an average block time of ~10 minutes, there are a maximum of ~104 difficulty adjustments over a 4 year period, so even if there was a .01 MB increase at every difficulty re-target (the chances of which are negligible), the base blocksize would still only be ~2.04 MB after 4 years.

Is this a compromise most of us could get behind?

For me, maybe

But bear in mind, saying 2.04 MB after 4 years conceals the fact that the real total blocksize would be 8.16 MB when you include the signatures in the witness space, since SegWit is a part of this deal you're proposing.

franky1
Legendary
Activity: 4200 | Merit: 4442
May 11, 2017, 02:46:11 PM
Last edit: May 11, 2017, 02:57:54 PM by franky1
#11

Mathematically, assuming an average block time of ~10 minutes, there are a maximum of ~104 difficulty adjustments over a 4 year period, so even if there was a .01 MB increase at every difficulty re-target (the chances of which are negligible), the base blocksize would still only be ~2.04 MB after 4 years.

Is this a compromise most of us could get behind?

For me, maybe

But bear in mind, saying 2.04 MB after 4 years conceals the fact that the real total blocksize would be 8.16MB when you include the signatures in the witness blocks, Segwit is a part of this deal you're proposing.

if there was to be a hard consensus to move to dynamics, then it's much better to use that opportunity to unite segwit AND native keypairs in the same area.
e.g. 4 MB for both native and segwit to sit in, in a single merkle block, and then have it increase by x% a fortnight.

people running speed tests know that current modern baseline systems (raspberry pi 3), average internet speeds of 2017, and all the efficiencies since 2009 (libsecp256k1) have shown that 8 MB is safe for the average raspberry pi home user..

so starting with a 4 MB single merkle block would be deemed more than safe.

and remember:
even a 4 MB rule DOES NOT mean pools will make 4 MB blocks instantly..
just like they didn't make 1 MB blocks in 2009-2014 even with a 1 MB allowable buffer..
pools did their own risk analysis and made their own preferential increments below consensus.

Qartada
Hero Member
Activity: 546 | Merit: 500
May 11, 2017, 02:49:53 PM
#12

Well one of the key arguments against an increasing blocksize is that it can cause centralisation of nodes.  However, the amount of data it takes to run a node should be measured relative to how much data people are willing to use or how much data people often have on their devices.  

A proposal with a varied blocksize which is limited properly, like this one, probably won't actually cause a decrease in node users, simply because the speed at which people's hard drive sizes typically increase is likely to match or exceed the speed at which the blockchain grows.

Therefore, this is a good short-term solution until Moore's law stops applying - when it does, there will be more than enough space in the blocks for LN to be implemented and for the majority of Bitcoin users to use it, with on-chain transactions for sending larger amounts and private channels often being used as well.

DooMAD (OP)
Legendary
Activity: 3766 | Merit: 3100
May 11, 2017, 05:53:10 PM
#13

Quote from: BillyBobZorton
How do you differentiate real demand from spam demand?

If someone like Ver decides to dump millions of dollars worth of spam transactions in order to make the blockchain huge, how do you stop this? since if it's automated, the blockchain will just adapt to this demand (even if its fake) centralizing the nodes as a result.

I just don't see how flexible blocksize schemes aren't exploitable.

As d5000 mentioned, it only accounts for transactions that made it into previous blocks, so it would have to be spam with a hefty fee to be counted.  Plus, the conditions are set so that the blocksize can only increase if the total tx fees collected in the latest diff period are higher than in the previous diff period, so they'll have to pay more and more over time if they want to keep spamming; otherwise it remains at the same level and doesn't increase.  And even if they did manage to achieve that level of continuous spam for several consecutive fortnights, they get an extra 0.01 MB each time for their efforts, and as soon as they can't maintain their attack, it will drop again.  Spam can never be prevented, but it can be heavily disincentivised, which this proposal does well.


Mathematically, assuming an average block time of ~10 minutes, there are a maximum of ~104 difficulty adjustments over a 4 year period, so even if there was a .01 MB increase at every difficulty re-target (the chances of which are negligible), the base blocksize would still only be ~2.04 MB after 4 years.

Is this a compromise most of us could get behind?

For me, maybe

But bear in mind, saying 2.04 MB after 4 years conceals the fact that the real total blocksize would be 8.16MB when you include the signatures in the witness blocks, Segwit is a part of this deal you're proposing.

Indeed, but that's only if an increase is actually achieved every diff period.  The threshold is set rather high, given that at least half of the blocks in that period need to be at least 90% full to qualify for an increase (but again, if people want to query those numbers we can look at that collectively and debate pros and cons).  In reality it could take far longer than 4 years to reach that sort of size.  I'm sure many pools will still be mining largely empty blocks just to grab the reward, so because of that and other factors there probably won't be an increase every time.  I'm reasonably certain that the total blockchain size won't experience any dramatic growth as a result.  Lastly, and obviously, even if the maximum size increases, miners don't necessarily have to fill it.  They're still quite welcome to release <1MB blocks after an increase if that's their preference.

Carlton Banks
Legendary
Activity: 3430 | Merit: 3071
May 11, 2017, 06:32:13 PM
#14

Mathematically, assuming an average block time of ~10 minutes, there are a maximum of ~104 difficulty adjustments over a 4 year period, so even if there was a .01 MB increase at every difficulty re-target (the chances of which are negligible), the base blocksize would still only be ~2.04 MB after 4 years.

saying 2.04 MB after 4 years conceals the fact that the real total blocksize would be 8.16MB when you include the signatures in the witness blocks, Segwit is a part of this deal you're proposing.

In reality it could take far longer than 4 years to reach that sort of size.  I'm sure many pools will still be mining largely empty blocks just to grab the reward, so because of that and other factors there probably won't be an increase every time.

That's not convincing at all. Remember that the difficulty rarely drops, and we could see an explosion in mining growth when the real corporate money begins to flow into the mining industry.

All growth - BTC exchange rate, transaction rate, blocksize and hashrate - acts synergistically; it all aggregates into a wider positive feedback loop. I realise that even this will have a marginal effect on the possible blocksize growth we end up with (say 1.04 MB per year + factoring in 0.2 MB max for the < 2 weeks per difficulty period), but that doesn't mean your conservative expectations were any more correct, just that they don't affect the overall outcome too much, which brings me to my next point....


Quote from: DooMAD
I'm reasonably certain that the total blockchain size won't experience any dramatic growth as a result.

You're probably right. So why do it? We can be straight about the fact that 1.04 MB per year won't annoy the small blockers too much, but it won't satisfy the demands of the big blockers at all. Or is that why it's a compromise? Everyone's equally unhappy Cheesy

Quote from: DooMAD
Lastly, and obviously, even if the maximum size increases, miners don't necessarily have to fill it.  They're still quite welcome to release <1MB blocks after an increase if that's their preference.

Again, we part ways here. If there is a breakdown in the growth paradigms of the technological factors that determine the upper limits of the Bitcoin network's capacity (i.e. median internet bandwidth, average storage costs and average overall computing capability), then we'll be back here on the bitcointalk.org du jour circa the year 2027, arguing about where and how the blocksize should stop, instead of how it should grow.

Infinite growth needs infinite resources, and I'm not noticing the solution to the infinite resource issue in this proposal. Maybe time will solve that, but that's one big maybe, and not an engineering approach at all. Wishing things away is bad engineering; there should be a qualified upper limit to round this proposal off, otherwise you can't expect it to be considered a complete proposal.

DooMAD (OP)
Legendary
Activity: 3766 | Merit: 3100
May 11, 2017, 07:30:15 PM
#15

I'm reasonably certain that the total blockchain size won't experience any dramatic growth as a result.  

You're probably right. So why do it? We can be straight about the fact that 1.04 MB per year won't annoy the small blockers too much, but it won't satisfy the demands of the big blockers at all. Or is that why it's a compromise? Everyone's equally unhappy Cheesy

Pretty much, heh.  My hope is that all participants at least feel their voice is heard this way.  With the two polar opposite camps screaming at each other, it didn't really seem like either side was actually acknowledging what the other was saying.
 

Lastly, and obviously, even if the maximum size increases, miner's don't necessarily have to fill it.  They're still quite welcome to release <1MB blocks after an increase if that's their preference.

Again, we part ways here. If there is a breakdown in the growth paradigms of the technological factors that determine the upper limits of the Bitcoin's network capacity (i.e. median internet bandwidth, average storage space costs and average overall computing capabilities), then we'll be back here on bitcointalk.org du jour circa the year 2027, arguing about where and how the blocksize should stop, instead of how it should grow.

Infinite growth needs infinite resources, and I'm not noticing the solution to the infinite resource issue in this proposal. Maybe time will solve that, but that's one big maybe, and not an engineering approach at all. Wishing things away is bad engineering, there should be a qualified upper limit to round this proposal off, otherwise you can't expect it to be considered a complete proposal.

I wasn't aware that particular precedent had been set.  BIP 100, 101 and 106 never specified an upper limit.  While it's true the community didn't deem them viable proposals, they certainly didn't deem them incomplete either.  This sounds like more of a personal preference on your part rather than a community standard of what constitutes "complete".  If we do have to look at capping it off in future, then we can look at that.  Any attempt at preempting a limit now is guesswork at best and also opens the door to future hardforks.  Capping off would be a soft fork.  I won't dismiss a cap outright, but I'd need more than just one person telling me it needs one.  As per the OP, I do have a fairly strong aversion to arbitrary static limits, but then that's just my personal preference.

Carlton Banks
Legendary
Activity: 3430 | Merit: 3071
May 11, 2017, 08:11:45 PM
Last edit: May 11, 2017, 09:30:23 PM by Carlton Banks
#16

It's not about precedents. Or about how many people say it (seriously?)


It's about design. It's about logic. Don't talk to us about what everyone already thinks or has said; talk about what makes sense. Satoshi wouldn't have made Bitcoin if he'd listened to all the preceding people who said that decentralised cryptocurrency was an unsolvable problem. You don't solve design problems by pretending the problem doesn't exist.

arklan
Legendary
Activity: 1778 | Merit: 1008
May 11, 2017, 08:14:10 PM
#17

I've always favored a dynamic blocksize myself. I support this proposal too.

d5000
Legendary
Activity: 3892 | Merit: 6089
May 11, 2017, 10:56:42 PM
Merited by DooMAD (2)
#18

Quote from: BillyBobZorton
An attacker like the PBOC itself has trillions of dollars to dump in order to bloat the network and keep it at the maximum, so even if you set a limit, you would end up with a blocksize that's too big for most people, and then the nodes start dropping like soldiers on D-Day.

If these "money making machines" want to destroy Bitcoin, they can do so now. An 51% attack costs about 600-700 million dollars at this moment. The PBOC would have no problem with that. So why go the hard way and spam the Bitcoin network during 10 years to achieve ~3,5 MB base size if you can destroy it instantly?

As the blocksize growth - as already said - is linear, I think the importance of this attack vector is negligible.

@DooMAD: I have, however, a slight update proposal:

Change
Code:
(TotalTxFeeInLastDifficulty > TotalTxFeeInLastButOneDifficulty) 

to

Code:
(TotalTxFeeInLastDifficulty > average(TotalTxFee in last X difficulty periods)) 

with X = 4 or more, I would propose X = 8.

The reason is that totaltxfee can have fluctuations. So a malicious person/group that wanted to increase the block size could produce a "full block spam attack" in a difficulty period just after a period with relatively low TotalTxFee.
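
A minimal sketch of how that averaged condition could be expressed (illustrative names only; it assumes a list of per-period fee totals is available, and reads "last X periods" as the X periods preceding the latest one):

Code:
# Illustrative only: fee condition using an X-period moving average (X = 8).
# fee_totals holds TotalTxFee per difficulty period, oldest first, newest last.

def fees_justify_increase(fee_totals, x=8):
    """True if the latest period's fees exceed the average of the previous x periods."""
    if len(fee_totals) < x + 1:
        return False                               # not enough history yet
    latest = fee_totals[-1]
    avg_previous = sum(fee_totals[-(x + 1):-1]) / x
    return latest > avg_previous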

Regarding an upper blocksize cap (@Carlton Banks): in the previous proposals we discussed, it made sense to me because the proposed block size growth was exponential (+5% is the last one I remember) and that could have led to an uncontrollable situation in the future. I think this proposal is conservative enough that such an upper cap isn't important. I also wouldn't totally oppose it.

DooMAD (OP)
Legendary
Activity: 3766 | Merit: 3100
May 12, 2017, 02:01:46 PM
Last edit: May 12, 2017, 02:14:39 PM by DooMAD
#19

Quote from: d5000
@DooMAD: I have, however, a slight update proposal:

Change
Code:
(TotalTxFeeInLastDifficulty > TotalTxFeeInLastButOneDifficulty) 

to

Code:
(TotalTxFeeInLastDifficulty > average(TotalTxFee in last X difficulty periods)) 

with X = 4 or more, I would propose X = 8.

The reason is that totaltxfee can have fluctuations. So a malicious person/group that wanted to increase the block size could produce a "full block spam attack" in a difficulty period just after a period with relatively low TotalTxFee.

Exceptional reasoning; I'm totally on board with that.  I was hoping we could find improvements that help raise the disincentives to spam, and this absolutely qualifies.  OP updated.  Thanks.   Smiley


Quote from: Carlton Banks
It's not about precedents. Or about how many people say it (seriously?)


It's about design. It's about logic. Don't talk to us about what everyone already thinks or has said, talk about what makes sense. Satoshi wouldn't have made Bitcoin if he'd listened to all the preceding people who said that decentralised cryptocurrency was an unsolvable problem, you don't solve design problems by pretending the problem doesn't exist.

Maybe it's just me, but I honestly don't see what's logical about taking a stab in the dark now with no way to accurately forecast future requirements.  Particularly if that stab in the dark could easily result in another contentious debate later.  If someone can convince me why a potential hard fork later is somehow better than an equally potential soft fork later, I'll reconsider my stance.


Quote from: franky1
also
Code:
    THEN BaseMaxBlockSize = BaseMaxBlockSize -0.01MB
      WitnessMaxBlockSize = WitnessMaxBlockSize -0.03MB

this would cause orphan risks when nodes rescan the blockchain

if one period of 2016 blocks were produced under limit X, and the next period the limit becomes Y (= X - 0.01) with Y as the active rule, all the X-sized blocks would get orphaned, because those 2016 blocks are above the current Y rule.

Is there some kind of workaround or fix that would still enable us to reduce dynamically while limiting the potential for orphans?  I have doubts a sufficient supermajority could be reached for the proposal if max sizes could only increase.  It needs to be possible to reduce if there's a lack of demand.

arklan
Legendary
Activity: 1778 | Merit: 1008
May 12, 2017, 04:39:18 PM
#20

Could track what the max block size was for a given difficulty period, perhaps? Either a database of "this difficulty = this blocksize", or some sort of code that, while determining orphans, checks whether the blocksize was valid at the time.
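
Something like that could even be derived rather than stored, since the adjustment rule only depends on past blocks. A rough sketch (hypothetical names and starting values, reusing the adjust_limits sketch from the opening post) of validating against the limit that applied at a block's height:

Code:
# Rough sketch: rebuild the MaxBlockSize history per difficulty period, so a
# rescanning node checks each old block against the limit in force at its
# height rather than against today's (possibly smaller) limit.

def limit_schedule(per_period_stats, initial_base=1.00, initial_witness=3.00):
    """Return a list of (base_max, witness_max), one entry per difficulty period.

    per_period_stats -- (block_sizes, fees, avg_fees_last_8) per completed
                        period, oldest first.
    """
    base, witness = initial_base, initial_witness
    schedule = [(base, witness)]
    for block_sizes, fees, avg_fees in per_period_stats:
        base, witness = adjust_limits(block_sizes, base, witness, fees, avg_fees)
        schedule.append((base, witness))
    return schedule

def max_size_at_height(schedule, height):
    """Total limit (base + witness, MB) that applied to the block at this height."""
    period = min(height // 2016, len(schedule) - 1)
    base, witness = schedule[period]
    return base + witness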
