Bitcoin Forum
Author Topic: BIP 106: Dynamically Controlled Bitcoin Block Size Max Cap  (Read 9400 times)
upal (OP)
Full Member
Activity: 165 | Merit: 102
August 17, 2015, 01:26:59 AM
Last edit: September 05, 2015, 09:31:25 PM by upal
Merited by ABCbits (2)
 #1

I have tried to solve the maximum block size debate with two different proposals:

i. Depending only on the previous block size calculation.

ii. Depending on the previous block size calculation and the previous Tx fees collected by miners.


BIP 106: https://github.com/bitcoin/bips/blob/master/bip-0106.mediawiki

Proposal in bitcoin-dev mailing list - http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/010285.html


Proposal 1: Depending only on the previous block size calculation

The basic idea in algorithmic format is as follows...

Code:
If more than 50% of the blocks found in the first 2000 blocks of the last difficulty period are larger than 90% of MaxBlockSize
    Double MaxBlockSize
Else if more than 90% of the blocks found in the first 2000 blocks of the last difficulty period are smaller than 50% of MaxBlockSize
    Halve MaxBlockSize
Else
    Keep the same MaxBlockSize
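
For concreteness, here is the same rule as a small Python sketch (illustrative only; the function and variable names are mine, not the BIP's):

Code:
# Proposal 1 as a pure function: `block_sizes` holds the sizes (in bytes)
# of the first 2000 blocks of the last difficulty period; `max_block_size`
# is the current cap in bytes.
def next_max_block_size(block_sizes, max_block_size):
    n = len(block_sizes)  # expected to be 2000
    nearly_full = sum(1 for s in block_sizes if s > 0.9 * max_block_size)
    mostly_empty = sum(1 for s in block_sizes if s < 0.5 * max_block_size)
    if nearly_full > 0.5 * n:        # >50% of blocks above 90% of the cap
        return max_block_size * 2    # double
    if mostly_empty > 0.9 * n:       # >90% of blocks below 50% of the cap
        return max_block_size // 2   # halve
    return max_block_size            # otherwise keep the cap unchanged

# Example: with a 1 MB cap, 1200 of 2000 blocks above 900 kB doubles the cap.
print(next_max_block_size([950_000] * 1200 + [100_000] * 800, 1_000_000))  # 2000000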


Proposal 2: Depending on the previous block size calculation and the previous Tx fees collected by miners

The basic idea in algorithmic format is as follows...

Code:
TotalBlockSizeInLastButOneDifficulty = Sum of the block sizes of the first 2008 blocks in the last 2 difficulty periods
TotalBlockSizeInLastDifficulty = Sum of the block sizes of the second 2008 blocks in the last 2 difficulty periods (this actually includes 8 blocks from the last-but-one difficulty period)

TotalTxFeeInLastButOneDifficulty = Sum of the Tx fees of the first 2008 blocks in the last 2 difficulty periods
TotalTxFeeInLastDifficulty = Sum of the Tx fees of the second 2008 blocks in the last 2 difficulty periods (this actually includes 8 blocks from the last-but-one difficulty period)

If ( ((Sum of the sizes of the 4016 blocks in the last 2 difficulty periods) / 4016 > 50% of MaxBlockSize) AND (TotalTxFeeInLastDifficulty > TotalTxFeeInLastButOneDifficulty) AND (TotalBlockSizeInLastDifficulty > TotalBlockSizeInLastButOneDifficulty) )
    MaxBlockSize = TotalBlockSizeInLastDifficulty * MaxBlockSize / TotalBlockSizeInLastButOneDifficulty
Else If ( ((Sum of the sizes of the 4016 blocks in the last 2 difficulty periods) / 4016 < 50% of MaxBlockSize) AND (TotalTxFeeInLastDifficulty < TotalTxFeeInLastButOneDifficulty) AND (TotalBlockSizeInLastDifficulty < TotalBlockSizeInLastButOneDifficulty) )
    MaxBlockSize = TotalBlockSizeInLastDifficulty * MaxBlockSize / TotalBlockSizeInLastButOneDifficulty
Else
    Keep the same MaxBlockSize
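
Again as an illustrative Python sketch (the naming is mine; per-block sizes and fees for the last 4016 blocks are assumed to be available, oldest first):

Code:
# Proposal 2: rescale the cap only when average usage, total fees and total
# block size all move in the same direction across the two half-windows.
def next_max_block_size_v2(sizes, fees, max_block_size):
    size_prev, size_last = sum(sizes[:2008]), sum(sizes[2008:])
    fee_prev, fee_last = sum(fees[:2008]), sum(fees[2008:])
    avg_size = (size_prev + size_last) / 4016
    growing = (avg_size > 0.5 * max_block_size
               and fee_last > fee_prev and size_last > size_prev)
    shrinking = (avg_size < 0.5 * max_block_size
                 and fee_last < fee_prev and size_last < size_prev)
    if growing or shrinking:
        # Rescale the cap by the ratio of block space actually used.
        return size_last * max_block_size // size_prev
    return max_block_size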


Details: http://upalc.com/maxblocksize.php

Requesting comments.
RocketSingh
Legendary
Activity: 1662 | Merit: 1050
August 17, 2015, 05:16:12 PM
 #2

This is one of the best proposals I have seen in recent times to solve the max block size problem. I hope it does not get overlooked by the Core and XT devs, because it could potentially stop the divide between Bitcoin Core and XT.

Carlton Banks
Legendary
Activity: 3430 | Merit: 3080
August 17, 2015, 05:41:20 PM
Merited by ABCbits (1)
 #3

Dynamic resizing is the obvious compromise between the camps. Everyone can get what they claim to want from it, without having to compromise either.

If the market chooses bigger blocks, then the market can test whether or not that works out in practice. If yes, then Gavin's design solution actually was the best idea after all. If not, then the market retreating will cause the blocksize to retreat also (which wouldn't be possible under BIP100).

The market could even try out bigger blocks, decide it doesn't work, try the alternative, dislike that more than bigger blocks, and then revert to some compromise blocksize. Y'know, it's almost as if the free market works better than central planning...

Vires in numeris
DooMAD
Legendary
Activity: 3948 | Merit: 3191
Leave no FUD unchallenged
August 17, 2015, 06:30:03 PM
 #4

So roughly every two weeks the blocksize halves, doubles or stays the same depending on the traffic. It's certainly an idea I could get behind as a second preference or fallback if people are absolutely determined to torpedo BIP101. It makes a fair trade-off. And to be clear, I could support such a proposal whether it was introduced in Core or in an independent client.

I still don't understand this fixation the community has with "trusted" developers. If the effects of the code are obvious and neutral, I don't particularly care who coded it or what their personal views are, or whether other developers disagree because of their own personal views. I want an open network that supports the masses if or when they come. I hope this silences the critics who think people who support larger blocks aren't willing to compromise; they will, if presented with a coherent and well-presented alternative like this one.

However, I'm sure that if this particular proposal did become the prevailing favourite, the same usual suspects trying to discredit BIP101 would do the same to this one: calling it an "altcoin", saying upal wants to be a "dictator" and seize control, pretending it's only consensus when they personally agree with it, and all the other cheap shots they're taking at BIP101. It'll be interesting to see how this plays out.

RustyNomad
Sr. Member
Activity: 336 | Merit: 251
August 17, 2015, 06:59:10 PM
 #5

Quote from: upal
I have tried to solve the maximum block size debate, depending on the previous block size calculation.

Requesting comments - http://upalc.com/maxblocksize.php

Proposal in bitcoin-dev mailing list - http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/010285.html

I like your proposal. I've never been against an increase in the block size, but I have always believed, and still do, that it should be more dynamic in nature and follow actual bitcoin usage, instead of being based on how we think bitcoin will be used in future.

If it's not dynamic in nature, we are bound to run into problems again in future if we, say, double the block size only to find out that mass adoption is not happening at the rate we expected. With your proposed solution both sides are covered, as the block size will dynamically increase or decrease based on actual usage. If properly implemented, it could mean we can lay the arguments around block sizes to rest and never need to worry about them becoming an issue again in a couple of years' time.
RocketSingh
Legendary
Activity: 1662 | Merit: 1050
August 18, 2015, 11:09:48 AM
 #6

Quote from: Carlton Banks
Dynamic resizing is the obvious compromise between the camps. Everyone can get what they claim to want from it, without having to compromise either.

If the market chooses bigger blocks, then the market can test whether or not that works out in practice. If yes, then Gavin's design solution actually was the best idea after all. If not, then the market retreating will cause the blocksize to retreat also (which wouldn't be possible under BIP100).

The market could even try out bigger blocks, decide it doesn't work, try the alternative, dislike that more than bigger blocks, and then revert to some compromise blocksize. Y'know, it's almost as if the free market works better than central planning...

Not sure if you have gone through OP's proposal. BIP 101 has no provision to decrease the block size; instead it flatly increases it without considering the network status. BIP 100 employs a miner voting system, which requires separate action on the miners' end. The reason I feel OP's proposal is beautiful is that it requires users to fill up nodes with high Tx volumes and then miners to fill up blocks from the mempool. So it is not only the miners, but also the end users, who have a say in increasing or decreasing the block size.

Carlton Banks
Legendary
Activity: 3430 | Merit: 3080
August 18, 2015, 11:30:06 AM
 #7

Quote from: Carlton Banks
Dynamic resizing is the obvious compromise between the camps. Everyone can get what they claim to want from it, without having to compromise either.

If the market chooses bigger blocks, then the market can test whether or not that works out in practice. If yes, then Gavin's design solution actually was the best idea after all. If not, then the market retreating will cause the blocksize to retreat also (which wouldn't be possible under BIP100).

The market could even try out bigger blocks, decide it doesn't work, try the alternative, dislike that more than bigger blocks, and then revert to some compromise blocksize. Y'know, it's almost as if the free market works better than central planning...
Quote from: RocketSingh
Not sure if you have gone through OP's proposal. BIP 101 has no provision to decrease the block size; instead it flatly increases it without considering the network status. BIP 100 employs a miner voting system, which requires separate action on the miners' end. The reason I feel OP's proposal is beautiful is that it requires users to fill up nodes with high Tx volumes and then miners to fill up blocks from the mempool. So it is not only the miners, but also the end users, who have a say in increasing or decreasing the block size.

Ah, I did actually mean BIP 101 and not 100. Thanks for pointing it out. And I agree that this proposal sounds good, but I was making the more general point that some form of dynamic resizing scheme is best.

Vires in numeris
upal (OP)
Full Member
Activity: 165 | Merit: 102
August 18, 2015, 05:29:18 PM
 #8

I have got some very good arguments on the bitcoin-dev list and have updated the main article accordingly. If you have any counter-argument to this proposal, feel free to post it here or in the comment section of the article - http://upalc.com/maxblocksize.php
quakefiend420
Legendary
Activity: 784 | Merit: 1000
August 18, 2015, 05:34:56 PM
 #9

This is similar to what I was thinking, but perhaps better.

My idea was, once a week or every two weeks:

avg(last week's blocksize)*2 = new maxBlocksize
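
In Python, assuming last_week_sizes holds the sizes of last week's blocks, that rule would be something like:

Code:
def new_max_blocksize(last_week_sizes):
    # twice the average block size of the previous week
    return 2 * sum(last_week_sizes) // len(last_week_sizes)
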
tl121
Sr. Member
Activity: 278 | Merit: 254
August 18, 2015, 06:01:01 PM
 #10

There has to be a maximum block size limit for bitcoin nodes to work. The limit is not just a program variable needed for block chain consensus; it has real-world implications in terms of storage, processing and bandwidth resources. If a node doesn't have sufficient resources, it will not be able to work as a properly functioning node. These resources have to be provisioned and managed by node operators, who have to plan in advance to acquire them. That is the reason BIP 101 has a schedule for changes to the limits. A dynamic algorithm cannot magically instantiate the needed resources.

The schedule in BIP 101 is based on technology forecasting. Like all forecasting, technology forecasting is inaccurate. If this schedule proves to be grossly in error, then a new BIP can always be generated some years downstream, allowing for any needed "mid-course" corrections.

RocketSingh
Legendary
Activity: 1662 | Merit: 1050
August 18, 2015, 06:58:12 PM
 #11

Quote from: tl121
There has to be a maximum block size limit for bitcoin nodes to work. The limit is not just a program variable needed for block chain consensus; it has real-world implications in terms of storage, processing and bandwidth resources. If a node doesn't have sufficient resources, it will not be able to work as a properly functioning node. These resources have to be provisioned and managed by node operators, who have to plan in advance to acquire them. That is the reason BIP 101 has a schedule for changes to the limits. A dynamic algorithm cannot magically instantiate the needed resources.

As I see it, the advantage of the algo proposed by OP is that it learns from the chain itself: it dynamically determines the next max cap from how full the current blocks are. Only if more than 50% of blocks are more than 90% full will the max cap double. That means more than 50% of the blocks stored by the nodes in the last difficulty period are already 90% filled and the market is pushing for more. In this situation, a node has two options: either increase its resources and stay in the network, or close down. Keeping a node in the network is not the network's responsibility. The network took no responsibility for keeping CPU mining alive either; miners who wanted to stay in the network upgraded to GPU, FPGA and ASIC for their own benefit. Similarly, nodes will be run by interested parties who benefit from running them, e.g. miners, online wallet providers, exchanges, and individuals with large bitcoin holdings who therefore need to secure the network. All of them will have to upgrade their resources to stay in the game, because the push comes from free-market demand.

Quote from: tl121
The schedule in BIP 101 is based on technology forecasting. Like all forecasting, technology forecasting is inaccurate. If this schedule proves to be grossly in error, then a new BIP can always be generated some years downstream, allowing for any needed "mid-course" corrections.

BIP 101 is a scheduled-increase proposal whose laid-out path is not derived from market demand. It has no way to decrease the block size, and there is no solid basis for its technology forecast over the long run. And another hard fork is next to impossible after wide-spread adoption. Neither BIP 101 (Gavin Andresen) nor BIP 103 (Pieter Wuille) takes the actual network conditions into account; both are speculative technology forecasts.

Ducky1
Hero Member
Activity: 966 | Merit: 500
August 18, 2015, 08:11:35 PM
 #12

Very good suggestion!

I would additionally suggest back-testing the algorithm on the current blockchain, from day 1, starting with the smallest possible max size. Then see how it evolves, and fine-tune the parameters if anything bad happens or obvious possibilities for improvement are spotted. It could even be possible to auto-tune the parameters for the smallest possible max size by setting up a proper experiment. If it works well there, there is a good chance it will continue to work well for the next 100 years.
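
Such a back-test could be as simple as replaying historical per-period block sizes through the adjustment rule; a hypothetical harness (reusing the next_max_block_size sketch from the first post) might look like:

Code:
# Replay historical difficulty periods through Proposal 1's rule, starting
# from a 1 MB cap. `periods` (one list of block sizes per difficulty
# period) would come from a blockchain parser; it is a placeholder here.
def backtest(periods, start_cap=1_000_000):
    cap, history = start_cap, []
    for block_sizes in periods:
        cap = next_max_block_size(block_sizes, cap)
        history.append(cap)
    return history  # the cap after each period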



RocketSingh
Legendary
Activity: 1662 | Merit: 1050
August 19, 2015, 04:31:50 PM
 #13

Quote from: Ducky1
Very good suggestion!

I would additionally suggest back-testing the algorithm on the current blockchain, from day 1, starting with the smallest possible max size. Then see how it evolves, and fine-tune the parameters if anything bad happens or obvious possibilities for improvement are spotted. It could even be possible to auto-tune the parameters for the smallest possible max size by setting up a proper experiment. If it works well there, there is a good chance it will continue to work well for the next 100 years.



True. It would be great if someone did this back-testing and shared the results. I think the max cap at the genesis block can be taken as 1 MB. As the proposal has a decreasing max cap feature, the outcome might drop below 1 MB as well.

DumbFruit
Sr. Member
Activity: 433 | Merit: 267
August 19, 2015, 05:26:42 PM
 #14

I hate to rain on the parade, but full blocks are an essential feature going into the future. Any proposal that tries to avoid ever having full blocks must also address how transaction fees are going to replace inflation as it diminishes.
If not, then there will be no funding for the highly redundant network that exists now, and it will necessarily atrophy to a handful of nodes, being hardly less subject to coercion, malpractice, and discrimination than our financial system today.

By their (dumb) fruits shall ye know them indeed...
CounterEntropy
Full Member
Activity: 214 | Merit: 278
August 19, 2015, 06:38:28 PM
 #15

Quote from: DumbFruit
I hate to rain on the parade, but full blocks are an essential feature going into the future. Any proposal that tries to avoid ever having full blocks must also address how transaction fees are going to replace inflation as it diminishes.
If not, then there will be no funding for the highly redundant network that exists now, and it will necessarily atrophy to a handful of nodes, being hardly less subject to coercion, malpractice, and discrimination than our financial system today.

This proposal does not negate full blocks. It takes a demand-driven approach: it raises the max cap only when more than 50% of blocks are 90% full, and it decreases the max cap if more than 90% of blocks are less than 50% full. Hence the opportunity for each miner to collect Tx fees is always there. When the max cap increases because of full blocks, it means there are enough Tx in the mempool to be cleared. When that demand is not there, we will see small blocks and the max cap will automatically come down. Hence miners will never be starved of Tx fees.
DumbFruit
Sr. Member
Activity: 433 | Merit: 267
August 19, 2015, 08:27:31 PM
Last edit: August 20, 2015, 12:24:02 PM by DumbFruit
 #16

Quote from: DumbFruit
I hate to rain on the parade, but full blocks are an essential feature going into the future. Any proposal that tries to avoid ever having full blocks must also address how transaction fees are going to replace inflation as it diminishes.
If not, then there will be no funding for the highly redundant network that exists now, and it will necessarily atrophy to a handful of nodes, being hardly less subject to coercion, malpractice, and discrimination than our financial system today.
Quote from: CounterEntropy
This proposal does not negate full blocks. It takes a demand-driven approach: it raises the max cap only when more than 50% of blocks are 90% full, and it decreases the max cap if more than 90% of blocks are less than 50% full. Hence the opportunity for each miner to collect Tx fees is always there. When the max cap increases because of full blocks, it means there are enough Tx in the mempool to be cleared. When that demand is not there, we will see small blocks and the max cap will automatically come down. Hence miners will never be starved of Tx fees.

The absolute best-case scenario in this algorithm, from the perspective of fees, is that slightly less than 50% of the blocks are 100% full, and people are so impatient to get their transactions into those blocks that they bid the transaction fees up to about 50 BTC in total. That way the network would be funded at about the same rate as today when inflation (the subsidy) stops, ceteris paribus.

By their (dumb) fruits shall ye know them indeed...
CounterEntropy
Full Member
Activity: 214 | Merit: 278
August 20, 2015, 02:44:27 PM
 #17

Quote from: DumbFruit
I hate to rain on the parade, but full blocks are an essential feature going into the future. Any proposal that tries to avoid ever having full blocks must also address how transaction fees are going to replace inflation as it diminishes.
If not, then there will be no funding for the highly redundant network that exists now, and it will necessarily atrophy to a handful of nodes, being hardly less subject to coercion, malpractice, and discrimination than our financial system today.
Quote from: CounterEntropy
This proposal does not negate full blocks. It takes a demand-driven approach: it raises the max cap only when more than 50% of blocks are 90% full, and it decreases the max cap if more than 90% of blocks are less than 50% full. Hence the opportunity for each miner to collect Tx fees is always there. When the max cap increases because of full blocks, it means there are enough Tx in the mempool to be cleared. When that demand is not there, we will see small blocks and the max cap will automatically come down. Hence miners will never be starved of Tx fees.
Quote from: DumbFruit
The absolute best-case scenario in this algorithm, from the perspective of fees, is that slightly less than 50% of the blocks are 100% full, and people are so impatient to get their transactions into those blocks that they bid the transaction fees up to about 50 BTC in total. That way the network would be funded at about the same rate as today when inflation (the subsidy) stops, ceteris paribus.

There is no prerequisite that coinbase + mining fees need to equal 50 BTC. I understand that you are trying not to disturb the miners' subsidy, but you are wrong in assuming ceteris paribus. Other things will not remain the same. When the subsidy stops, the transaction volume will be far higher than it is today. So, with an increased block size, a miner will be able to fill a block with many more Tx than now, and thereby collect much more in Tx fees. Moreover, you are also assuming the value of BTC will remain the same. With increased adoption, that is going to change towards the higher side as well. Hence, even if the total collection of Tx fees is the same as or lower than today (which will most likely not be the case), the increased price of BTC will compensate the miners.

So, forcing end users into a bidding war to save miners is most likely not a solution we need to adopt.
goatpig
Legendary
Activity: 3752 | Merit: 1364
Armory Developer
August 20, 2015, 05:02:54 PM
Last edit: August 26, 2015, 09:01:26 PM by goatpig
 #18

I like this initiative; it is by far the best I've seen, for the following reasons: it allows for both increase and reduction of the block size (this is critical), it doesn't require complicated context, and mainly, it doesn't rely on a hardcoded magic number to rule it all. However, I'm not comfortable with the doubling or the thresholds, and I would propose to refine them as follows:

1) Roughly translating your metrics gives something like (correct me if I misinterpreted):

- If the network is operating above half capacity, double the ceiling.
- If the network is operating below half capacity, halve the ceiling.
- If the network is operating around half capacity, leave it as is.

While the last 2 make sense, the first one is out of proportion imo. The increment step could be debated over and over, but I think a more straightforward solution is to peg it to difficulty, i.e. if an increase is triggered, the block size limit should be readjusted in the same proportion that the difficulty changed:

- If the difficulty increased 20% and a block size limit increase is triggered, the limit would be increased by 20%.
- If the difficulty only increased by 5%, so would the block size limit.
- If the difficulty increased but the block limit increase was not triggered, stay as is.
- If the difficulty was reduced, in every case reduce the block limit by that same proportion.
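
A minimal sketch of this difficulty-pegged rule (my reading of the bullets above; names are illustrative):

Code:
# `old_diff`/`new_diff` are consecutive difficulty values; `increase_ok`
# says whether the separate block-fill condition for an increase was met.
def pegged_cap(cap, old_diff, new_diff, increase_ok):
    ratio = new_diff / old_diff
    if ratio < 1.0:          # difficulty fell: always shrink in proportion
        return cap * ratio
    if increase_ok:          # difficulty rose and an increase was triggered
        return cap * ratio   # grow by the same proportion
    return cap               # difficulty rose, no trigger: stay as is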

As for the increase threshold, I don't think your condition covers the most common use case. A situation where 100% of blocks are filled at 85% would not trigger an increase, but a network where 50% of blocks are filled at 10% and the other 50% are full would trigger one, which is behavior more representative of a spam attack than of organic growth in transaction demand.

I would suggest evaluating the total size used by the last 2000 blocks as a whole: if it exceeds 2/3 or 3/4 (or whatever value is most sensible) of the maximum capacity, then trigger an increase.

Maybe that is your intended condition, but from the wording, I can't help thinking that your condition evaluates size consumption per block, rather than as a whole over the difficulty period.

2) The current situation with the Bitcoin network is that it is trivial and relatively cheap to spam transactions, and thus trigger a block ceiling increase. At the same time, the conditions for a block size decrease are rather hard to sustain: an attacker needs to fill half the blocks over a difficulty period to trigger an increase, but only needs to keep 11% of blocks half full to prevent a decrease.

Quoting from your proposal:

Quote
Those who want to stop decrease, need to have more than 10% hash power, but must mine more than 50% of MaxBlockSize in all blocks.

I don't see how that would stop anyone with that much hashing power from preventing a block size decrease. As you said, there is an economic incentive for a miner to include fee-paying transactions, which reduces the possibility that a large pool could prevent a block size increase by mining empty blocks, as it would bleed hash power pretty quickly.

However, this also implies there is no incentive to mine empty blocks. While a large miner can attempt to prevent a block size increase (at his own cost), a large group of large miners would struggle to trigger a block size reduction, as a single large pool could send transactions to itself, paying fees to its own miners, to keep 11% of blocks half full.

I would advocate that the block size decrease should also be triggered by used block space vs. maximum available space as a whole over the difficulty period. I would also advocate for a second condition on any block size change: the total fee paid over the difficulty period:

- If blocks are filling and the total sum of paid fees has increased by at least some portion of the difficulty change (say 1/10th, again up for discussion) over a single period, then a block size increase is triggered.
- The same goes for the decrease mechanism: if block size and fees have both decreased accordingly, trigger a block size decrease.

One condition without the other is not enough. Simply filling blocks without an increase in fees paid is not a sufficient condition to increase the network's capacity; as blocks keep filling, fees go up and eventually both conditions are met. On the other hand, if block space usage goes down but fees remain high, or fees go down but block space usage goes up (say, after a block size increase), there is no reason to reduce the block size either.

3) Lastly, I believe that in case of a stalemate a decay function should take over. Something simple, say a 0.5~1% decay every difficulty period that didn't trigger an increase or a decrease. A block size increase is not hard to achieve, as it relies on difficulty increasing, blocks filling up and fees climbing, which all take place concurrently during organic growth. If the block limit naturally decays in a stable market, it will in turn put pressure on fees and naturally increase the block fill rate. The increase in fees will in turn increase miner profitability, creating opportunities. Fees are high, blocks are filling up, difficulty is going up, and the ceiling will be bumped up once more, to slowly decay again until organic growth resumes.
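
A sketch of that decay step, assuming a 1% rate per difficulty period (the exact rate is up for discussion):

Code:
DECAY = 0.01  # the 0.5~1% per difficulty period suggested above

def apply_stalemate_decay(cap, increased, decreased):
    if not increased and not decreased:   # stalemate period
        return cap * (1 - DECAY)          # gentle downward pressure
    return cap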

In case of a spam attack, however, it forces the attacker to keep up with the climbing cost of triggering the next increase, rather than simply maintaining, at low cost, the size increase he triggered.

I believe that with these changes to your proposal, it would become exponentially more expensive for an attacker to push the ceiling up, while still allowing an organic fee market to form and preventing fees from climbing sky-high, as higher fees would eventually bump the size cap up.


DumbFruit
Sr. Member
Activity: 433 | Merit: 267
August 20, 2015, 06:33:38 PM
Last edit: August 20, 2015, 09:03:23 PM by DumbFruit
 #19

Quote from: DumbFruit
I hate to rain on the parade, but full blocks are an essential feature going into the future. Any proposal that tries to avoid ever having full blocks must also address how transaction fees are going to replace inflation as it diminishes.
If not, then there will be no funding for the highly redundant network that exists now, and it will necessarily atrophy to a handful of nodes, being hardly less subject to coercion, malpractice, and discrimination than our financial system today.
Quote from: CounterEntropy
This proposal does not negate full blocks. It takes a demand-driven approach: it raises the max cap only when more than 50% of blocks are 90% full, and it decreases the max cap if more than 90% of blocks are less than 50% full. Hence the opportunity for each miner to collect Tx fees is always there. When the max cap increases because of full blocks, it means there are enough Tx in the mempool to be cleared. When that demand is not there, we will see small blocks and the max cap will automatically come down. Hence miners will never be starved of Tx fees.
Quote from: DumbFruit
The absolute best-case scenario in this algorithm, from the perspective of fees, is that slightly less than 50% of the blocks are 100% full, and people are so impatient to get their transactions into those blocks that they bid the transaction fees up to about 50 BTC in total. That way the network would be funded at about the same rate as today when inflation (the subsidy) stops, ceteris paribus.
Quote from: CounterEntropy
There is no prerequisite that coinbase + mining fees need to equal 50 BTC. I understand that you are trying not to disturb the miners' subsidy, but you are wrong in assuming ceteris paribus. Other things will not remain the same. When the subsidy stops, the transaction volume will be far higher than it is today. So, with an increased block size, a miner will be able to fill a block with many more Tx than now, and thereby collect much more in Tx fees. Moreover, you are also assuming the value of BTC will remain the same. With increased adoption, that is going to change towards the higher side as well. Hence, even if the total collection of Tx fees is the same as or lower than today (which will most likely not be the case), the increased price of BTC will compensate the miners.

So, forcing end users into a bidding war to save miners is most likely not a solution we need to adopt.

The reason philosophers use "ceteris paribus" is not that they literally know with absolute certainty all of the variables they want to hold static; it's that they are trying to get at a specific subset of the problem. It's especially useful where testing is impossible, as here, where we're trying to design a product that will be robust going into the future. Otherwise we get into a Gish gallop.

So! The problem I'm pointing out is that we know, in the best-case scenario, that just less than half of the blocks will have any bidding pressure keeping transaction fees above the equilibrium price of running roughly one node, because by design we know the remaining half are less than 90% full. There is no reason to believe that this second half of transactions will be bid so high as to fund the entire network at the same or a better rate than today. How does the protocol keep the network funded as inflation diminishes?

One could get close to this problem by suggesting that there is a time between checks (2000 blocks) which would allow more than half of the blocks to remain full, but if one is seriously suggesting that this should fund the network, then one is simultaneously proposing that the block size limit should be doubled every 2000 blocks in perpetuity, since otherwise this funding mechanism doesn't exist, and so one hasn't adequately addressed the problem. If that is a reasonable assumption to you, then the protocol can be simplified to read "double the block size every 2000 blocks".

You state that there are larger blocks and therefore more transaction fees with this protocol. There is a greater quantity of transaction fees, but not necessarily a higher value of transaction fees. So again: how does this protocol keep the network funded as inflation diminishes? There is no reason to believe, even being optimistic, that those fees would be anything but marginally higher than enough to fund roughly one node at equilibrium.

That is not the only problem with this protocol, but it is the one I'm focusing on at the moment.

By their (dumb) fruits shall ye know them indeed...
CounterEntropy
Full Member
Activity: 214 | Merit: 278
August 20, 2015, 11:33:55 PM
 #20

Quote from: goatpig
While the last 2 make sense, the first one is out of proportion imo. The increment step could be debated over and over, but I think a more straightforward solution is to peg it to difficulty, i.e. if an increase is triggered, the block size limit should be readjusted in the same proportion that the difficulty changed:

- If the difficulty increased 20% and a block size limit increase is triggered, the limit would be increased by 20%.
- If the difficulty only increased by 5%, so would the block size limit.
- If the difficulty increased but the block limit increase was not triggered, stay as is.
- If the difficulty was reduced, in every case reduce the block limit by that same proportion.

How does a difficulty change affect the size of blocks found today? Is there any correlation between difficulty and block size? If not, then IMO it won't be wise to make difficulty a parameter for changing the max block size cap.