Bitcoin Forum
Author Topic: BIP 106: Dynamically Controlled Bitcoin Block Size Max Cap  (Read 9323 times)
upal (OP)
Full Member
***
Offline Offline

Activity: 165
Merit: 102


View Profile
August 17, 2015, 01:26:59 AM
Last edit: September 05, 2015, 09:31:25 PM by upal
Merited by ABCbits (2)
 #1

I have tried to solve the maximum block size debate with two different proposals:

i. Depending only on the previous block size calculation.

ii. Depending on the previous block size calculation and the previous Tx fees collected by miners.


BIP 106: https://github.com/bitcoin/bips/blob/master/bip-0106.mediawiki

Proposal in bitcoin-dev mailing list - http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/010285.html


Proposal 1: Depending only on previous block size calculation

The basic idea in algorithmic format is as follows...

Code:
If more than 50% of the block sizes, among the first 2000 blocks of the last difficulty period, are more than 90% of MaxBlockSize
    Double MaxBlockSize
Else if more than 90% of the block sizes, among the first 2000 blocks of the last difficulty period, are less than 50% of MaxBlockSize
    Halve MaxBlockSize
Else
    Keep the same MaxBlockSize
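For illustration, the rule above could be sketched as a runnable Python function (the function and variable names are mine, purely illustrative, not from any client; sizes are in bytes):

```python
def next_max_block_size(block_sizes, max_block_size):
    """Recompute the max cap from the first 2000 blocks of the last
    difficulty period, per Proposal 1."""
    n = len(block_sizes)
    nearly_full = sum(1 for s in block_sizes if s > 0.9 * max_block_size)
    mostly_empty = sum(1 for s in block_sizes if s < 0.5 * max_block_size)

    if nearly_full > 0.5 * n:        # >50% of blocks are >90% full
        return max_block_size * 2
    elif mostly_empty > 0.9 * n:     # >90% of blocks are <50% full
        return max_block_size // 2
    return max_block_size
```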


Proposal 2: Depending on previous block size calculation and previous Tx fee collected by miners

The basic idea in algorithmic format is as follows...

Code:
TotalBlockSizeInLastButOneDifficulty = Sum of all block sizes of the first 2008 blocks in the last 2 difficulty periods
TotalBlockSizeInLastDifficulty = Sum of all block sizes of the second 2008 blocks in the last 2 difficulty periods (this actually includes 8 blocks from the last-but-one difficulty)

TotalTxFeeInLastButOneDifficulty = Sum of all Tx fees of the first 2008 blocks in the last 2 difficulty periods
TotalTxFeeInLastDifficulty = Sum of all Tx fees of the second 2008 blocks in the last 2 difficulty periods (this actually includes 8 blocks from the last-but-one difficulty)

If ( ( (Sum of the first 4016 block sizes in the last 2 difficulty periods)/4016 > 50% of MaxBlockSize) AND (TotalTxFeeInLastDifficulty > TotalTxFeeInLastButOneDifficulty) AND (TotalBlockSizeInLastDifficulty > TotalBlockSizeInLastButOneDifficulty) )
    MaxBlockSize = TotalBlockSizeInLastDifficulty * MaxBlockSize / TotalBlockSizeInLastButOneDifficulty
Else If ( ( (Sum of the first 4016 block sizes in the last 2 difficulty periods)/4016 < 50% of MaxBlockSize) AND (TotalTxFeeInLastDifficulty < TotalTxFeeInLastButOneDifficulty) AND (TotalBlockSizeInLastDifficulty < TotalBlockSizeInLastButOneDifficulty) )
    MaxBlockSize = TotalBlockSizeInLastDifficulty * MaxBlockSize / TotalBlockSizeInLastButOneDifficulty
Else
    Keep the same MaxBlockSize
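A minimal Python sketch of this rule, under the same assumptions (all names are illustrative; sizes in bytes, fees in satoshis; integer division keeps the rescaling exact):

```python
def next_max_block_size_v2(sizes_prev, sizes_last, fees_prev, fees_last,
                           max_block_size):
    """Proposal 2: sizes_prev/fees_prev cover the first 2008 blocks of the
    last 2 difficulty periods, sizes_last/fees_last the second 2008 (which,
    per the proposal, overlap the earlier period by 8 blocks)."""
    total_size_prev = sum(sizes_prev)
    total_size_last = sum(sizes_last)
    avg_size = (total_size_prev + total_size_last) / 4016

    growing = (avg_size > 0.5 * max_block_size
               and sum(fees_last) > sum(fees_prev)
               and total_size_last > total_size_prev)
    shrinking = (avg_size < 0.5 * max_block_size
                 and sum(fees_last) < sum(fees_prev)
                 and total_size_last < total_size_prev)

    if growing or shrinking:
        # Rescale the cap by the same ratio the aggregate block size moved.
        return total_size_last * max_block_size // total_size_prev
    return max_block_size
```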


Details: http://upalc.com/maxblocksize.php

Requesting comments.
RocketSingh
Legendary
*
Offline Offline

Activity: 1662
Merit: 1050


View Profile
August 17, 2015, 05:16:12 PM
 #2

This is one of the best proposals I have seen in recent times to solve the max block size problem. I hope it does not get overlooked by the Core & XT devs, because it could potentially stop the divide between Bitcoin Core & XT.

Carlton Banks
Legendary
*
Offline Offline

Activity: 3430
Merit: 3074



View Profile
August 17, 2015, 05:41:20 PM
Merited by ABCbits (1)
 #3

Dynamic resizing is the obvious compromise between the camps. Everyone can get what they claim to want from it, without having to compromise either.

If the market chooses bigger blocks, then the market can test whether or not that works out in practice. If yes, then Gavin's design solution actually was the best idea after all. If not, then the market retreating will cause the blocksize to retreat also (which wouldn't be possible under BIP100).

The market could even try out bigger blocks, decide it doesn't work, try the alternative, dislike that more than bigger blocks, and then revert to some compromise blocksize. Y'know, it's almost as if the free market works better than central planning...

Vires in numeris
DooMAD
Legendary
*
Offline Offline

Activity: 3808
Merit: 3160


Leave no FUD unchallenged


View Profile
August 17, 2015, 06:30:03 PM
 #4

So roughly every two weeks the blocksize halves, doubles or stays the same depending on the traffic that's going on.  It's certainly an idea I could get behind as a second preference or fallback if people are absolutely determined to torpedo BIP101.  It makes a fair trade-off.  And to be clear, I could support such a proposal whether it was introduced in core, or an independent client.  I still don't understand this fixation the community has with "trusted" developers.  If the effects of the code are obvious and neutral, I don't particularly care who coded it or what their personal views are, or if any other developers disagree for whatever their personal views are.  I want an open network that supports the masses if or when they come.  I hope this silences all the critics who think people who support larger blocks aren't willing to compromise, because they will if they're presented with a coherent and well-presented alternative like this one.

However, I'm sure if this particular proposal did become the prevailing favourite, the same usual suspects trying to discredit BIP101 would be doing the same for this, calling it an "altcoin", saying upal wants to be a "dictator" and seize control, pretending it's only consensus when they personally agree with it and all the other cheap shots they're taking at BIP101.  It'll be interesting to see how this plays out.
RustyNomad
Sr. Member
****
Offline Offline

Activity: 336
Merit: 250



View Profile WWW
August 17, 2015, 06:59:10 PM
 #5

I have tried to solve the maximum block size debate, depending on the previous block size calculation.

Requesting for comment - http://upalc.com/maxblocksize.php

Proposal in bitcoin-dev mailing list - http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/010285.html

I like your proposal. I've never been against increasing the block size, but I have always believed, and still do, that it should be more dynamic in nature and follow actual bitcoin usage instead of being based on how we think/perceive bitcoin will be used in future.

If it's not dynamic in nature, we are bound to run into problems again in future if, say, we double the block size only to find out that mass adoption is not happening at the rate we expected. With your proposed solution both sides are covered, as the block size will dynamically increase or decrease based on actual usage. If properly implemented, it could mean we can lay the arguments around block sizes to rest and never need to worry about them becoming an issue again in a couple of years' time.
RocketSingh
Legendary
*
Offline Offline

Activity: 1662
Merit: 1050


View Profile
August 18, 2015, 11:09:48 AM
 #6

Dynamic resizing is the obvious compromise between the camps. Everyone can get what they claim to want from it, without having to compromise either.

If the market chooses bigger blocks, then the market can test whether or not that works out in practice. If yes, then Gavin's design solution actually was the best idea after all. If not, then the market retreating will cause the blocksize to retreat also (which wouldn't be possible under BIP100).

The market could even try out bigger blocks, decide it doesn't work, try the alternative, dislike that more than bigger blocks, and then revert to some compromise blocksize. Y'know, it's almost as if the free market works better than central planning...

Not sure if you have gone through the OP's proposal. BIP 101 has no provision to decrease the block size; instead it flatly increases it without considering the network's status. BIP 100 employs a miners' voting system, which requires a separate activity on the miners' end. The reason I find the OP's proposal beautiful is that it requires users to fill up nodes with high Tx volumes and then miners to fill up blocks from the mempool. So it is not only the miners, but also the end users, who have a say in increasing or decreasing the block size.

Carlton Banks
Legendary
*
Offline Offline

Activity: 3430
Merit: 3074



View Profile
August 18, 2015, 11:30:06 AM
 #7

Dynamic resizing is the obvious compromise between the camps. Everyone can get what they claim to want from it, without having to compromise either.

If the market chooses bigger blocks, then the market can test whether or not that works out in practice. If yes, then Gavin's design solution actually was the best idea after all. If not, then the market retreating will cause the blocksize to retreat also (which wouldn't be possible under BIP100).

The market could even try out bigger blocks, decide it doesn't work, try the alternative, dislike that more than bigger blocks, and then revert to some compromise blocksize. Y'know, it's almost as if the free market works better than central planning...

Not sure if you have gone through the OP's proposal. BIP 101 has no provision to decrease the block size; instead it flatly increases it without considering the network's status. BIP 100 employs a miners' voting system, which requires a separate activity on the miners' end. The reason I find the OP's proposal beautiful is that it requires users to fill up nodes with high Tx volumes and then miners to fill up blocks from the mempool. So it is not only the miners, but also the end users, who have a say in increasing or decreasing the block size.

Ah, I did actually mean BIP 101 and not 100. Thanks for pointing it out. And I agree that this proposal sounds good, but I was making the more general point that some form of dynamic resizing scheme is best.

Vires in numeris
upal (OP)
Full Member
***
Offline Offline

Activity: 165
Merit: 102


View Profile
August 18, 2015, 05:29:18 PM
 #8

I have got some very good arguments on the bitcoin-dev list and have hence updated the main article. If you have any counter-argument to this proposal, feel free to put it here or in the comment section of the article - http://upalc.com/maxblocksize.php
quakefiend420
Legendary
*
Offline Offline

Activity: 784
Merit: 1000


View Profile
August 18, 2015, 05:34:56 PM
 #9

This is similar to what I was thinking, but perhaps better.

My idea was once a week or every two weeks:

avg(last week's blocksize)*2 = new maxBlocksize
tl121
Sr. Member
****
Offline Offline

Activity: 278
Merit: 252


View Profile
August 18, 2015, 06:01:01 PM
 #10

There has to be a maximum block size limit for bitcoin nodes to work.  The limit is not just a program variable needed for block chain consensus, it has real world implications in terms of storage, processing and bandwidth resources.  If a node doesn't have sufficient resources it will not be able to work as a properly functioning node. These resources have to be provisioned and managed by node operators who have to plan in advance to acquire the needed resources.  That is the reason for BIP 101 having a schedule for changes to the limits. A dynamic algorithm can not magically instantiate the needed resources.

The schedule in BIP 101 is based on technology forecasting.  Like all forecasting, technology forecasting is inaccurate.  If this schedule proves to be grossly in error then a new BIP can always be generated some years downstream,  allowing for any needed "mid-course" corrections.

RocketSingh
Legendary
*
Offline Offline

Activity: 1662
Merit: 1050


View Profile
August 18, 2015, 06:58:12 PM
 #11

There has to be a maximum block size limit for bitcoin nodes to work.  The limit is not just a program variable needed for block chain consensus, it has real world implications in terms of storage, processing and bandwidth resources.  If a node doesn't have sufficient resources it will not be able to work as a properly functioning node. These resources have to be provisioned and managed by node operators who have to plan in advance to acquire the needed resources.  That is the reason for BIP 101 having a schedule for changes to the limits. A dynamic algorithm can not magically instantiate the needed resources.
As I see it, the advantage of the algo proposed by the OP is that it is adaptive: it dynamically determines the next max cap depending on how full current blocks are. Only if more than 50% of blocks are more than 90% full will the max cap double. This means more than 50% of the blocks stored by the nodes in the last difficulty period are already 90% filled and the market is pushing for more. In this situation, a node has two options: either increase its resources and stay in the network, or close down. Keeping a node in the network is not the network's responsibility. The network did not show any responsibility to keep CPU mining either; miners who wanted to stay in the network upgraded to GPU, FPGA and ASIC for their own benefit. Similarly, nodes will be run by interested parties who need them, e.g. miners, online wallet providers, exchanges and individuals with big bitcoin holdings and thereby a need to secure the network. All of them will have to upgrade resources to stay in the game, because the push is coming from free market demand.

The schedule in BIP 101 is based on technology forecasting.  Like all forecasting, technology forecasting is inaccurate.  If this schedule proves to be grossly in error then a new BIP can always be generated some years downstream,  allowing for any needed "mid-course" corrections.
BIP 101 is a linear increment proposal whose laid-out path has not been derived from market demand. It has no way to decrease the block size, and there is no basis for its technology forecasting in the long run. And another hard fork will be next to impossible after widespread adoption. Neither BIP 101 (Gavin Andresen) nor BIP 103 (Pieter Wuille) takes the actual network condition into account; both are speculative technology forecasts.

Ducky1
Hero Member
*****
Offline Offline

Activity: 966
Merit: 500




View Profile
August 18, 2015, 08:11:35 PM
 #12

Very good suggestion!

I would additionally suggest back-testing the algorithm on the current blockchain, from day 1, starting with the smallest possible max size. Then see how it evolves, and fine-tune the parameters if anything bad happens or obvious possibilities for improvement are spotted. It could even be possible to auto-tune the parameters for the smallest possible max size by setting up a proper experiment. If it works well there, there is a good chance it will continue to work well for the next 100 years.
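A toy harness for such a back-test might look like this, replaying Proposal 1's rule over per-period size data (all names are mine and the input here is synthetic; real input would come from a blockchain dump):

```python
def step(block_sizes, cap):
    """One difficulty-period application of the Proposal-1 rule."""
    n = len(block_sizes)
    if sum(s > 0.9 * cap for s in block_sizes) > 0.5 * n:
        return cap * 2                     # >50% of blocks >90% full
    if sum(s < 0.5 * cap for s in block_sizes) > 0.9 * n:
        return max(cap // 2, 1)            # >90% of blocks <50% full
    return cap

def backtest(periods, cap=1_000_000):
    """periods: one list of block sizes per difficulty period."""
    history = [cap]
    for period in periods:
        cap = step(period, cap)
        history.append(cap)
    return history

# Demand ramps up for two periods, then collapses:
ramp = [[950_000] * 2000, [1_900_000] * 2000, [100_000] * 2000]
```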



RocketSingh
Legendary
*
Offline Offline

Activity: 1662
Merit: 1050


View Profile
August 19, 2015, 04:31:50 PM
 #13

Very good suggestion!

I would additionally suggest back-testing the algorithm on the current blockchain, from day 1, starting with the smallest possible max size. Then see how it evolves, and fine-tune the parameters if anything bad happens or obvious possibilities for improvement are spotted. It could even be possible to auto-tune the parameters for the smallest possible max size by setting up a proper experiment. If it works well there, there is a good chance it will continue to work well for the next 100 years.



True. It would be great if someone did this back-testing and shared the results. I think, at the Genesis block, the max cap can be taken as 1 MB. As the proposal has a decreasing max cap feature, the outcome might go below 1 MB as well.

DumbFruit
Sr. Member
****
Offline Offline

Activity: 433
Merit: 263


View Profile
August 19, 2015, 05:26:42 PM
 #14

I hate to rain on the parade, but full blocks are an essential feature going into the future. Any proposal that tries to avoid ever having full blocks also must address how transaction fees are going to replace inflation as it diminishes.
If not, then there will be no funding for the highly redundant network that exists now, and it will necessarily atrophy to a handful of nodes; Being hardly less subject to coercion, malpractice, and discrimination than our financial system today.

By their (dumb) fruits shall ye know them indeed...
CounterEntropy
Full Member
***
Offline Offline

Activity: 214
Merit: 278


View Profile
August 19, 2015, 06:38:28 PM
 #15

I hate to rain on the parade, but full blocks are an essential feature going into the future. Any proposal that tries to avoid ever having full blocks also must address how transaction fees are going to replace inflation as it diminishes.
If not, then there will be no funding for the highly redundant network that exists now, and it will necessarily atrophy to a handful of nodes; Being hardly less subject to coercion, malpractice, and discrimination than our financial system today.
This proposal does not negate full blocks. It takes a demand-driven approach: it raises the max cap only when more than 50% of blocks are more than 90% full, and it decreases the max cap if more than 90% of blocks are less than 50% full. Hence the provision for miners to collect Tx fees is always there. When the max cap increases because of full blocks, it means there are enough Tx in the mempool to be cleared. When there are not, we will see small blocks and the max cap will automatically come down. Hence miners will never be starved of Tx fees.
DumbFruit
Sr. Member
****
Offline Offline

Activity: 433
Merit: 263


View Profile
August 19, 2015, 08:27:31 PM
Last edit: August 20, 2015, 12:24:02 PM by DumbFruit
 #16

I hate to rain on the parade, but full blocks are an essential feature going into the future. Any proposal that tries to avoid ever having full blocks also must address how transaction fees are going to replace inflation as it diminishes.
If not, then there will be no funding for the highly redundant network that exists now, and it will necessarily atrophy to a handful of nodes; Being hardly less subject to coercion, malpractice, and discrimination than our financial system today.
This proposal does not negate full blocks. It takes a demand-driven approach: it raises the max cap only when more than 50% of blocks are more than 90% full, and it decreases the max cap if more than 90% of blocks are less than 50% full. Hence the provision for miners to collect Tx fees is always there. When the max cap increases because of full blocks, it means there are enough Tx in the mempool to be cleared. When there are not, we will see small blocks and the max cap will automatically come down. Hence miners will never be starved of Tx fees.
The absolute best-case scenario in this algorithm, from the perspective of fees, is that slightly less than 50% of the blocks are 100% full, and people are so impatient to get their transactions into those blocks that they bid total transaction fees up to about 50 BTC. That way the network would be funded at about the same rate it is today when inflation (the subsidy) stops, ceteris paribus.

By their (dumb) fruits shall ye know them indeed...
CounterEntropy
Full Member
***
Offline Offline

Activity: 214
Merit: 278


View Profile
August 20, 2015, 02:44:27 PM
 #17

I hate to rain on the parade, but full blocks are an essential feature going into the future. Any proposal that tries to avoid ever having full blocks also must address how transaction fees are going to replace inflation as it diminishes.
If not, then there will be no funding for the highly redundant network that exists now, and it will necessarily atrophy to a handful of nodes; Being hardly less subject to coercion, malpractice, and discrimination than our financial system today.
This proposal does not negate full blocks. It takes a demand-driven approach: it raises the max cap only when more than 50% of blocks are more than 90% full, and it decreases the max cap if more than 90% of blocks are less than 50% full. Hence the provision for miners to collect Tx fees is always there. When the max cap increases because of full blocks, it means there are enough Tx in the mempool to be cleared. When there are not, we will see small blocks and the max cap will automatically come down. Hence miners will never be starved of Tx fees.
The absolute best-case scenario in this algorithm, from the perspective of fees, is that slightly less than 50% of the blocks are 100% full, and people are so impatient to get their transactions into those blocks that they bid total transaction fees up to about 50 BTC. That way the network would be funded at about the same rate it is today when inflation (the subsidy) stops, ceteris paribus.
There is no prerequisite that coinbase + mining fees need to equal 50 BTC. I understand that you are trying not to disturb the miners' subsidy, but you are wrong in assuming ceteris paribus. Other things will not remain the same. When the subsidy stops, the transaction volume will be far higher than it is today. So, with an increased block size, a miner will be able to fill a block with many more Tx than now and thereby collect much more in Tx fees. Moreover, you are also assuming the value of BTC will remain the same. With increased adoption, that is going to change towards the higher side as well. Hence, even if the total collection of Tx fees is the same as or lower than today (which most likely won't be the case), the increased price of BTC will compensate the miners.

So, forcing end users into a bidding war to save miners is most likely not a solution we need to adopt.
goatpig
Legendary
*
Offline Offline

Activity: 3682
Merit: 1347

Armory Developer


View Profile
August 20, 2015, 05:02:54 PM
Last edit: August 26, 2015, 09:01:26 PM by goatpig
 #18

I like this initiative; it is by far the best I've seen, for the following reasons: it allows for both increase and reduction of the block size (this is critical), it doesn't require complicated context, and mainly, it doesn't rely on a hardcoded magic number to rule it all. However, I'm not comfortable with the doubling or the thresholds, and I would propose refining them as follows:

1) Roughly translating your metrics gives something like (correct me if I misinterpreted):

- If the network is operating above half capacity, double the ceiling.
- If the network is operating below half capacity, halve the ceiling.
- If the network is operating around half capacity, leave it as is.

While the last 2 make sense, the first one is out of proportion imo. The increment step could be debated over and over, but I think a more straightforward solution is to peg it to difficulty, i.e. if an increase is triggered, the block size limit should be readjusted in the same proportion that the difficulty changed:

- If the difficulty increased 20% and a block size limit increase is triggered, the limit would be increased by 20%.
- If the difficulty only increased by 5%, so would the block size limit.
- If the difficulty increased but the block limit increase was not triggered, stay as is.
- If the difficulty was reduced, in every case reduce the block limit by that same proportion.
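These four cases could be sketched in Python as follows (a hypothetical reading of the post, with names of my own invention; integer arithmetic avoids rounding surprises):

```python
def adjust_cap(cap, old_difficulty, new_difficulty, increase_triggered):
    """Peg block size limit changes to the difficulty change, per the
    variant proposed above. The usage threshold that sets
    increase_triggered is decided elsewhere."""
    # Difficulty fell: always shrink the cap in the same proportion.
    if new_difficulty < old_difficulty:
        return cap * new_difficulty // old_difficulty
    # Difficulty rose and the block-fill threshold was hit: grow the cap
    # by the same proportion.
    if increase_triggered:
        return cap * new_difficulty // old_difficulty
    # Difficulty rose but no increase was triggered: stay as is.
    return cap
```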

As for the increase threshold, I don't think your condition covers the most common use case. A situation where 100% of blocks are filled at 85% would not trigger an increase, but a network where 50% of blocks are filled at 10% and the other 50% are full would trigger the increase, which is a behavior more representative of a spam attack than of organic growth in transaction demand.

I would suggest evaluating the total size used by the last 2000 blocks as a whole: if it exceeds 2/3 or 3/4 (or whatever value is most sensible) of the maximum capacity, then trigger an increase.

Maybe that is your intended condition, but from the wording, I can't help thinking that your condition evaluates size consumption per block, rather than as a whole over the difficulty period.

2) The current situation on the Bitcoin network is that it is trivial and relatively cheap to spam transactions, and thus to trigger a block ceiling increase. At the same time, the conditions for a block size decrease are rather hard to sustain: an attacker needs to fill half the blocks for a difficulty period to trigger an increase, but only needs to keep 11% of blocks half full to prevent a decrease.

Quoting from your proposal:

Quote
Those who want to stop decrease, need to have more than 10% hash power, but must mine more than 50% of MaxBlockSize in all blocks.

I don't see how that would prevent anyone with that much hashing power from preventing a block size decrease. As you said, there is an economic incentive for a miner to include fee paying transactions, which reduces the possibility a large pool could prevent a block size increase by mining empty blocks, as it would bleed hash power pretty quickly.

However, this also implies there is no incentive to mine empty blocks. While a large miner can attempt to prevent a block size increase (at his own cost), a group of miners would be hard pressed to trigger a block size reduction, as a single large pool could send transactions to itself, paying fees to its own miners, to keep 11% of blocks half filled.

I would advocate that the block size decrease should also be triggered by used block space vs max available space as a whole over the difficulty period. I would also advocate for a second condition to trigger any block size change: total fee paid over the difficulty period:

- If both blocks are filling and the total sum of paid fees has increased at least as much as a portion of the difficulty (say 1/10th, again up for discussion) over a single period, then an increase in block size is triggered.
- Same goes with the decrease mechanism. If block size and fees have both decreased accordingly, trigger a block size decrease.

One or the other condition is not enough. Simply filling blocks without an increase in fees paid is not a sufficient condition to increase the network's capacity. As blocks keep on filling, fees go up and eventually the conditions are met. On the other hand, if block size usage goes down but fees remain high, or fees go down but block size usage goes up (say after a block size increase), there is no reason to reduce the block size either.

3) Lastly, I believe that in case of a stalemate, a decay function should take over. Something simple, say 0.5~1% decay every difficulty period that didn't trigger an increase or a decrease. A block size increase is not hard to achieve, as it relies on difficulty increasing, blocks filling up and fees climbing, all of which take place concurrently during organic growth. If the block limit naturally decays in a stable market, it will in return put pressure on fees and naturally increase the block fill rate. The increase in fees will in return increase miner profitability, creating opportunities. Fees are high, blocks are filling up, difficulty is going up, and the ceiling will be bumped up once more, to slowly decay again until organic growth resumes.
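A minimal sketch of such a decay, assuming a 0.5% rate and a 1 MB floor (both of which, as the post says, are up for discussion; names are mine):

```python
def decay_cap(cap, stale_periods, floor=1_000_000):
    """Apply a 0.5% decay for each difficulty period that triggered
    neither an increase nor a decrease. Integer math (995/1000) keeps
    the result exact; the floor keeps the cap from decaying below 1 MB."""
    for _ in range(stale_periods):
        cap = max(cap * 995 // 1000, floor)
    return cap
```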

However in case of a spam attack, it forces the attacker to keep up with the climbing cost of triggering the next increase rather than simply maintaining the size increase he triggered at a low cost.

I believe with these changes to your proposal, it would turn exponentially expensive for an attacker to push the ceiling up, while allowing for an organic fee market to form and preventing fees from climbing sky high, as higher fees would eventually bump up the size cap.


DumbFruit
Sr. Member
****
Offline Offline

Activity: 433
Merit: 263


View Profile
August 20, 2015, 06:33:38 PM
Last edit: August 20, 2015, 09:03:23 PM by DumbFruit
 #19

I hate to rain on the parade, but full blocks are an essential feature going into the future. Any proposal that tries to avoid ever having full blocks also must address how transaction fees are going to replace inflation as it diminishes.
If not, then there will be no funding for the highly redundant network that exists now, and it will necessarily atrophy to a handful of nodes; Being hardly less subject to coercion, malpractice, and discrimination than our financial system today.
This proposal does not negate full blocks. It takes a demand-driven approach: it raises the max cap only when more than 50% of blocks are more than 90% full, and it decreases the max cap if more than 90% of blocks are less than 50% full. Hence the provision for miners to collect Tx fees is always there. When the max cap increases because of full blocks, it means there are enough Tx in the mempool to be cleared. When there are not, we will see small blocks and the max cap will automatically come down. Hence miners will never be starved of Tx fees.
The absolute best-case scenario in this algorithm, from the perspective of fees, is that slightly less than 50% of the blocks are 100% full, and people are so impatient to get their transactions into those blocks that they bid total transaction fees up to about 50 BTC. That way the network would be funded at about the same rate it is today when inflation (the subsidy) stops, ceteris paribus.
There is no prerequisite that coinbase + mining fees need to equal 50 BTC. I understand that you are trying not to disturb the miners' subsidy, but you are wrong in assuming ceteris paribus. Other things will not remain the same. When the subsidy stops, the transaction volume will be far higher than it is today. So, with an increased block size, a miner will be able to fill a block with many more Tx than now and thereby collect much more in Tx fees. Moreover, you are also assuming the value of BTC will remain the same. With increased adoption, that is going to change towards the higher side as well. Hence, even if the total collection of Tx fees is the same as or lower than today (which most likely won't be the case), the increased price of BTC will compensate the miners.

So, forcing end users into a bidding war to save miners is most likely not a solution we need to adopt.
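For reference, Proposal 1's adjustment rule can be put into a rough Python sketch (function and variable names are mine, purely illustrative, not actual client code; sizes in bytes):

```python
def next_max_cap(block_sizes, max_cap):
    """Sketch of Proposal 1: look at the first 2000 blocks of the last
    difficulty period and double, halve, or keep the max cap."""
    n = len(block_sizes)
    nearly_full = sum(size > 0.9 * max_cap for size in block_sizes) / n
    half_empty = sum(size < 0.5 * max_cap for size in block_sizes) / n
    if nearly_full > 0.5:       # demand is pressing the cap: double it
        return max_cap * 2
    if half_empty > 0.9:        # blocks are mostly empty: halve it
        return max_cap // 2
    return max_cap
```

Note the dead band between the two triggers: blocks hovering between 50% and 90% full leave the cap unchanged, which is what keeps some fee pressure alive.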

The reason philosophers use "ceteris paribus" is not because they literally know with absolute certainty all of the variables that they want to hold static; it's because they are trying to get at a specific subset of the problem. It's especially useful where testing is impossible, like here, where we're trying to design a product that will be robust going into the future. Otherwise we'll get into a Gish gallop.

So! The problem I'm pointing out is that we know, in the best case scenario, that just less than half of the blocks will have any bidding pressure keeping transaction fees above the equilibrium price of running roughly one node, because by design we know the remaining half are less than 90% full. There is no reason to believe that the transactions in this second half will be bid so high as to fund the entire network at the same, or a better, rate than today. How does the protocol keep the network funded as inflation diminishes?

One could respond to this problem by suggesting that there is a time between checks (2000 blocks) which would allow greater than half of blocks to remain full, but if one seriously suggests that this should fund the network, then one is simultaneously proposing that the block size limit should double every 2000 blocks in perpetuity, otherwise this funding mechanism doesn't exist, and so one hasn't adequately addressed the problem. If that is a reasonable assumption to you, then the protocol can be simplified to read, "double the block size every 2000 blocks".

You state that there are larger blocks and therefore more transaction fees with this protocol. There is a greater quantity of transaction fees, but not necessarily a higher value of transaction fees. So again: how does this protocol keep the network funded as inflation diminishes? There is no reason to believe, even being optimistic, that those fees would be anything but marginally higher than enough to fund roughly one node at equilibrium.

That is not the only problem with this protocol, but it is the one I'm focusing on at the moment.

By their (dumb) fruits shall ye know them indeed...
CounterEntropy
Full Member
***
Offline Offline

Activity: 214
Merit: 278


View Profile
August 20, 2015, 11:33:55 PM
 #20

While the last 2 make sense, the first one is out of proportion imo. The increment step could be debated over and over, but I think a more straightforward solution is to peg it to difficulty, i.e. if an increase is triggered, the block size limit should be readjusted in the same proportion that the difficulty changed:

- If the difficulty increased 20% and a block size limit increase is triggered, the limit would be increased by 20%.
- If the difficulty only increased by 5%, so would the block size limit.
- If the difficulty increased but the block limit increase was not triggered, stay as is.
- If the difficulty was reduced, in every case reduce the block limit by that same proportion.

How does a difficulty change affect the size of blocks found today? Is there any correlation between difficulty and block size? If not, then IMO it won't be wise to make difficulty a parameter for changing the max block size cap.
Carlton Banks
Legendary
*
Offline Offline

Activity: 3430
Merit: 3074



View Profile
August 20, 2015, 11:47:47 PM
 #21

While the last 2 make sense, the first one is out of proportion imo. The increment step could be debated over and over, but I think a more straightforward solution is to peg it to difficulty, i.e. if an increase is triggered, the block size limit should be readjusted in the same proportion that the difficulty changed:

- If the difficulty increased 20% and a block size limit increase is triggered, the limit would be increased by 20%.
- If the difficulty only increased by 5%, so would the block size limit.
- If the difficulty increased but the block limit increase was not triggered, stay as is.
- If the difficulty was reduced, in every case reduce the block limit by that same proportion.

How does a difficulty change affect the size of blocks found today? Is there any correlation between difficulty and block size? If not, then IMO it won't be wise to make difficulty a parameter for changing the max block size cap.

Why would the fact that difficulty and block size are not related today preclude that relationship from helping to solve a network problem? Explain why.
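For concreteness, the difficulty-pegged rule quoted above could be sketched roughly like this (Python, illustrative names; a sketch of the suggestion, not part of any proposal's actual code):

```python
def repeg_block_limit(limit, old_difficulty, new_difficulty, increase_triggered):
    """Sketch of the difficulty-pegged idea: move the block size limit by
    the same proportion the difficulty moved."""
    ratio = new_difficulty / old_difficulty
    if ratio < 1.0:
        return limit * ratio   # difficulty fell: always shrink the limit
    if increase_triggered:
        return limit * ratio   # difficulty rose and an increase was triggered
    return limit               # difficulty rose but no trigger: stay as is
```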

Vires in numeris
skang
Sr. Member
****
Offline Offline

Activity: 452
Merit: 252


from democracy to self-rule.


View Profile
August 20, 2015, 11:57:59 PM
 #22

I don't know why people are calling this a good proposal. Either they don't understand the problem at hand, or it's me. I'd thankfully accept that it's me if you can explain, please.

Let us try to simulate this proposal.
Let us say the number of transactions is rising and blocks are regularly around 1 MB. This algo will increase the cap accordingly.
Now, with global adoption, let us say the number of transactions rises further. This algo raises the cap further.
Let us say there are 25 MB worth of transactions now. This algo raises the cap to 25 MB, but does that work?

Due to practical limits, a big block will take a lot of time to propagate through the network. During this time maybe another miner also successfully solves a block, only to realize after a while that he isn't the first, thus producing orphans. The second effect, and a very important one, is that the winner gets a headstart: he starts working on the next block while the rest of the world is still working on the previous one while waiting to download the successful solution. As mining has now gone big, the effect of this headstart is huge and increases with the mining power a miner or a pool of miners has.

tl121 made this exact point.

There has to be a maximum block size limit for bitcoin nodes to work.  The limit is not just a program variable needed for block chain consensus, it has real world implications in terms of storage, processing and bandwidth resources.  If a node doesn't have sufficient resources it will not be able to work as a properly functioning node. These resources have to be provisioned and managed by node operators who have to plan in advance to acquire the needed resources.  That is the reason for BIP 101 having a schedule for changes to the limits. A dynamic algorithm can not magically instantiate the needed resources.

The counter to this position, given by Gavin, is that his simulations show that this headstart does not have any effect.
But the counter's counter from the other side is that his simulations are not taking internet latency into account.

People, the problem is not 'what' the limit should be & 'how' to reach it. The problem is that large blocks will kill bitcoin, so large blocks are not an option, what to do then is the question? How to make bitcoin scalable?

"India is the guru of the nations, the physician of the human soul in its profounder maladies; she is destined once more to remould the life of the world and restore the peace of the human spirit.
But Swaraj is the necessary condition of her work and before she can do the work, she must fulfil the condition."
Trolololo
Sr. Member
****
Offline Offline

Activity: 263
Merit: 280



View Profile
August 21, 2015, 12:08:33 AM
 #23

I had this same idea today. I created a new thread proposing it, and then I found that you had already created a thread and developed the idea some days ago.

I give my 100% support to it, because the max block size should be dynamically calculated (based on the previous 2016 block sizes), just as difficulty is dynamically recalculated every 2016 blocks.

Go on with it!!!
goatpig
Legendary
*
Offline Offline

Activity: 3682
Merit: 1347

Armory Developer


View Profile
August 21, 2015, 12:19:47 PM
 #24

How does a difficulty change affect the size of blocks found today? Is there any correlation between difficulty and block size? If not, then IMO it won't be wise to make difficulty a parameter for changing the max block size cap.

It does not, but I would return this question to you: how is a doubling/halving of the block cap representative of the actual market growth/contraction that triggered the change? Difficulty variations are built into the blockchain and provide a very realistic perspective on the economic progression of the network, as they are a marker of profitability.

Keep in mind that my proposal evaluates total fee progression as well over difficulty periods, so in the case a new chip is released that largely outperforms previous generations and the market quickly invests into it, that event on its own would not be enough to trigger a block size increase, as there is no indication fees would also climb in the same fashion.

The idea is to keep the block size limit high enough to support organic market growth, while progressing in small enough increments that each increment won't undermine the fee market. I think difficulty progression is an appropriate metric to achieve that goal.

I've always thought fees should be somehow inversely pegged to difficulty to define the baseline of a healthy fee market. This is a way to achieve it.

Quote
People, the problem is not 'what' the limit should be & 'how' to reach it. The problem is that large blocks will kill bitcoin, so large blocks are not an option, what to do then is the question? How to make bitcoin scalable?

The problem is that blocks larger than the network's baseline resources will magnify centralization. This is a bad thing, but the question is not "how to make Bitcoin scalable". My understanding of scalability (please share yours if it differs from mine) is a piece of software that attempts to consume as many resources as are made available. An example of a scalable system is Amazon's EC2: the more physical machines support it, the more powerful it gets. Another is BitTorrent, where the more leechers show up, the more bandwidth the torrent totals (i.e. bandwidth is not defined by seed boxes alone).

I would say the current issue with Bitcoin and big blocks isn't scalability but rather efficiency. We don't want to use more resources; we want to use the same amount of resources in a more efficient manner. Block size is like a barrier to entry: the bigger the blocks, the higher the barrier. Increasing efficiency in block propagation and verification would reduce that barrier in return, allowing for an increase in size while keeping the network healthy. I am not familiar with the Core source, but I believe there is some low-hanging fruit we can go after when it comes to block propagation.

Also, I believe the issue isn't truly efficiency, but rather centralization. Reducing the barrier to entry increases participants and thus decentralization, but the real issue is that there are no true incentives to run nodes or to spread mining to smaller clusters. I understand these are non-trivial problems, but that's what the September workshop should be about, rather than scalability.

If there is an incentive to run full nodes and an incentive to spread mining, then block size will no longer be a metric that affects centralization on its own. Keep in mind that it currently is, partly because it is one of the last few metrics set to a magic number. If it were controlled by a dynamic algorithm keeping track of economic factors, we wouldn't be wasting sweat and blood on this issue today and would instead be looking at how to make the system more robust and decentralized.

KNK
Hero Member
*****
Offline Offline

Activity: 692
Merit: 502


View Profile
August 21, 2015, 12:40:59 PM
Last edit: August 21, 2015, 02:15:04 PM by KNK
 #25

Sorry for the long post ...
TLDR: +1 for dynamic block size. I hope it is not too late for the right change

A dynamic algorithm can not magically instantiate the needed resources.
It doesn't need to! If properly implemented, it will be the other way around (see {1} below)

The reason I feel OP's proposal is beautiful is because it requires users to fill up nodes with high Tx volumes and then miners to fill up blocks from mempool.
Exactly, what should be used here:
 {1}
  • Hard limit size - calculated by some algorithm for the entire network (see below {2})
  • Client limit size - configured by the client (miner full node) based on its hardware and bandwidth limitations or other preferences

Each node may set its own limit on the size of blocks it will send to the network, but should accept blocks up to the Hard limit

I hate to rain on the parade, but full blocks are an essential feature going into the future. Any proposal that tries to avoid ever having full blocks must also address how transaction fees are going to replace inflation as it diminishes.
If not, then there will be no funding for the highly redundant network that exists now, and it will necessarily atrophy to a handful of nodes, hardly less subject to coercion, malpractice, and discrimination than our financial system today.
This is probably where consensus will be hard to achieve if it is hard-coded and not dynamic - cheaper transactions or bigger fees? Some want the first, others the second, and the truth is in the middle after both sides make some compromise, so this should also be kept in mind when planning the dynamic algorithm.

What I may suggest for the calculation of the Hard limit is:
{2}
 When calculating the new target difficulty, do the same for the block size.
 
  • Get the average size of the last 4000 nonempty blocks = AvgSize
  • Set the new block size to 150% of AvgSize, but no more than double and no less than half the previous block size

    How it is expected to work:
     The Hard limit is kept at 66% fullness with a 1-month moving average on each diff change.
     BUT it depends on the Soft limit chosen by the miners, so:
     
    • If bandwidth is an issue (as it is for most private pools and those in China) - they will send smaller blocks and thus vote for their preferred size with their work
    • If there is a need for much bigger blocks, but the current state of the hardware (CPU or HDD) does not allow it - no increase will take place, because clients won't send bigger blocks than configured
    • If there are not enough transactions to make bigger blocks - the size will be reduced

    EDIT: An option in the mining software to ignore blocks above the Soft limit puts a control switch in each miner's hands, in addition to the pools'
EDIT 2: If you take a look at the average block size chart, you will see that the current average size is far from the 1MB limit if you ignore the stupid stress tests during the last month or two - and even then the average is around 80% - so a 2/3 (66% full) block size is a good target IMHO
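As a rough Python sketch of the Hard-limit calculation above (names are illustrative, not actual client code; sizes in bytes):

```python
def next_hard_limit(block_sizes, current_limit):
    """Sketch of the Hard-limit rule: 150% of the average size of the last
    4000 nonempty blocks, clamped between half and double the current limit."""
    nonempty = [s for s in block_sizes if s > 0][-4000:]
    target = 1.5 * sum(nonempty) / len(nonempty)   # aim for ~66% full blocks
    return max(current_limit / 2, min(current_limit * 2, target))
```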

Mega Crypto Polis - www.MegaCryptoPolis.com
BTC tips: 1KNK1akhpethhtcyhKTF2d3PWTQDUWUzHE
upal (OP)
Full Member
***
Offline Offline

Activity: 165
Merit: 102


View Profile
August 21, 2015, 08:05:24 PM
 #26

Thanks to everyone for providing good arguments for improvement of the proposal. I have derived a second proposal and updated the OP accordingly. If you have any counter-argument to this proposal, feel free to put it here or in the comment section of the article - http://upalc.com/maxblocksize.php
skang
Sr. Member
****
Offline Offline

Activity: 452
Merit: 252


from democracy to self-rule.


View Profile
August 22, 2015, 04:38:27 AM
 #27

The problem is that blocks larger than the network's baseline resources will magnify centralization. This is a bad thing, but the question is not "how to make Bitcoin scalable". My understanding of scalability (please share yours if it differs from mine) is a piece of software that attempts to consume as many resources as are made available. An example of a scalable system is Amazon's EC2: the more physical machines support it, the more powerful it gets. Another is BitTorrent, where the more leechers show up, the more bandwidth the torrent totals (i.e. bandwidth is not defined by seed boxes alone).

Your understanding is correct, but Bitcoin is unlike anything in history. In the traditional sense, like the examples you state, if a resource is getting fully utilized you add more of it, and the key resources are the ones that make that technology possible.

Although disk space is a resource, it is not a key resource in enabling torrenting technology, in the sense that disk space existed before the invention of the internet, yet disk space alone does not make torrents possible.
Inter-networking is the key resource that allows torrenting to exist. Now, what do you do if the network is fully occupied? You add more of it; problem solved.

With Bitcoin, the network is a resource, but it is not the key resource, in the sense that networks existed before Bitcoin.
The blockchain is the key resource that allows Bitcoin to exist. Now, what do you do if blocks are full? You add more blocks. Ding! Not allowed, mate!

Blocks are essentially a list of transactions per unit time. So when we say we need to increase blocks, we mean we need to increase the rate of transaction throughput.

There are only 3 things in this equation that we can tweak:
1. Increase the number of blocks. Not allowed, by definition of Bitcoin: 1 per 10 minutes.
2. Decrease the time. Not allowed, by definition of Bitcoin: each block comes out in 10 minutes.
3. Increase the block size. Allowed, but the practical limits of technology come in. With each kB added, the download time for a block increases by milliseconds, and the miner who found that block now has a headstart of that many milliseconds. The bigger the miner, the more headstart he gets, so the smaller miner leaves, and this circle continues until only big miners are left. Complete centralization! Not an option.

To people looking at it in traditional way, miners might look like resources, so that more miners ought to mean more transaction throughput. But it does not for the same reason as more hard disk does not mean better torrent speed.
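The headstart argument above can be put in rough numbers. A back-of-envelope sketch, with purely illustrative figures (not measurements):

```python
def headstart_advantage(block_size_mb, peer_bandwidth_mbps, block_interval_s=600):
    """Rough sketch of the headstart effect: the winning miner works on the
    next block while peers are still downloading the last one. Returns the
    fraction of the block interval gained."""
    transfer_s = block_size_mb * 8 / peer_bandwidth_mbps  # MB -> megabits
    return transfer_s / block_interval_s
```

For example, a 25 MB block over a 10 Mbps link takes about 20 seconds to transfer, a ~3% headstart on a 10-minute interval, and the effect grows linearly with block size.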

I would say the current issue with Bitcoin and big blocks isn't scalability but rather efficiency. We don't want to use more resources; we want to use the same amount of resources in a more efficient manner. Block size is like a barrier to entry: the bigger the blocks, the higher the barrier. Increasing efficiency in block propagation and verification would reduce that barrier in return, allowing for an increase in size while keeping the network healthy. I am not familiar with the Core source, but I believe there is some low-hanging fruit we can go after when it comes to block propagation.

Also, I believe the issue isn't truly efficiency, but rather centralization. Reducing the barrier to entry increases participants and thus decentralization, but the real issue is that there are no true incentives to run nodes or to spread mining to smaller clusters. I understand these are non-trivial problems, but that's what the September workshop should be about, rather than scalability.

If there is an incentive to run full nodes and an incentive to spread mining, then block size will no longer be a metric that affects centralization on its own. Keep in mind that it currently is, partly because it is one of the last few metrics set to a magic number. If it were controlled by a dynamic algorithm keeping track of economic factors, we wouldn't be wasting sweat and blood on this issue today and would instead be looking at how to make the system more robust and decentralized.

Bitcoin got to where it is today, I mean so much publicity and usage, because everything was taken care of, even the incentive of running full nodes.
What is that incentive, you ask? That incentive is bitcoin's survival.
The way to get something done is not always to reward, but sometimes punishment.
Here the punishment is bitcoin's death.

The reason for running nodes is the same as the reason for feeding a goose that lays golden eggs.
But the problem here is that the people feeding the goose (people running nodes) are not the same as the people collecting the eggs (the miners).
People have difficulty understanding indirect influences, but they need to realize that it is they who are consuming the gold, not the collector.
The miners don't even necessarily use Bitcoin, but might only be doing it for fiat money.
So the people feeding the goose must realize they need to keep doing so, because although directly it looks like the collector is getting rich, indirectly it is the feeders who get the gold.

I would go a step further and say anybody not running a full node is not a Bitcoin user in the true sense. Why?
Because the fact that your coins got transferred is only guaranteed by the history of those coins. And you don't have a copy of that history!
You are depending on someone else to supply a copy of the history.
If it's your brother in the family who runs the full node for you to access, then it's fine. But for everything else, you are better off with banks.

There are counterpoints to this. And these counterpoints are only validly made by people who are happy using banks and trusting them but find other benefits in Bitcoin, namely three:
1. Bitcoin is pseudonymous.
2. Bitcoin has no geographical limit. Bitcoin has no monetary limit.
3. Bitcoin is 24x7, which is more than 3 times bank opening hours.

Now these use cases are huge and bring with them a lot of these people who trust others with their money, because the fire in the jungle hasn't reached their home yet.
They will keep running light wallets and enjoy these benefits, until the banks simply tidy up and make themselves 24x7 and without limits.
Then all of these users will leave happily, coz banks have always kept free candy on the counter.
So I don't care about people who don't run full nodes, and neither should anyone who cares about Bitcoin.

vane91
Member
**
Offline Offline

Activity: 133
Merit: 26


View Profile
August 22, 2015, 06:36:07 AM
 #28

IMHO, the first proposal is good if we target, for example, x% of average block capacity.


For example, if on average blocks are 50% full and we target 66%, then reduce the block size;
if blocks are 70% full, then increase block capacity. Let it run and see how it affects the fee market.

The best thing about this is that now we can target an average fee per block!:

if we are targeting 1 BTC per block in fees and fees rise too much, lower the % full target; if fees decline, raise the target.

There you go! Now people can vote for a block size increase simply by including higher fees!
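A rough sketch of this fee-targeting feedback loop (all numbers and names are illustrative, not part of any proposal's code):

```python
def adjust_fullness_target(target, avg_fee_btc, fee_goal_btc=1.0, step=0.05):
    """Sketch of the fee-targeting idea: steer the %-full trigger so that
    average fees per block track a goal."""
    if avg_fee_btc > fee_goal_btc:
        return max(0.10, target - step)   # fees too high: trigger growth sooner
    if avg_fee_btc < fee_goal_btc:
        return min(0.95, target + step)   # fees too low: let blocks fill up more
    return target
```

Lowering the fullness target makes block size increases trigger sooner, adding capacity and pushing fees back down; raising it does the opposite.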
Carlton Banks
Legendary
*
Offline Offline

Activity: 3430
Merit: 3074



View Profile
August 22, 2015, 09:41:24 AM
 #29

IMHO, the first proposal is good if we target, for example, x% of average block capacity.


For example, if on average blocks are 50% full and we target 66%, then reduce the block size;
if blocks are 70% full, then increase block capacity. Let it run and see how it affects the fee market.

The best thing about this is that now we can target an average fee per block!:

if we are targeting 1 BTC per block in fees and fees rise too much, lower the % full target; if fees decline, raise the target.

There you go! Now people can vote for a block size increase simply by including higher fees!

I liked the points about handling the kind of inertia that will manifest itself in a dynamic re-sizing scheme, particularly when the limit is sidling around in a narrow band. Any scheme should be able to respond to those circumstances in a way that promotes a healthy fee market, yet simultaneously disincentivises spammers. A decay function sounds like a good idea on that basis.

KNK
Hero Member
*****
Offline Offline

Activity: 692
Merit: 502


View Profile
August 22, 2015, 06:24:53 PM
 #30

Thanks to everyone for providing good arguments for improvement of the proposal. I have derived a second proposal and updated the OP accordingly. If you have any counter-argument to this proposal, feel free to put it here or in the comment section of the article - http://upalc.com/maxblocksize.php
I don't think it is a good idea to include TX fees in the calculation.

See these charts: {1}, {2} and {3}

Now consider an old miner consolidating his coins and a spammer attacking the network - both will cause an increased volume of transactions (more or less as in {1}) for a short period. But to succeed in the attack, the spammer (see 10 July and after) will include larger fees {3} and have fewer days destroyed {2}, while the old miner may 'donate' larger fees (as on 27 April) or use the fact that he can transfer without fees (end of November), because his coins are old enough.

With your proposal of including fees in the calculation, the block size after 10 July will increase, thus helping the attacker even more, as it will keep increasing the block size (even now), just because others add more fees to mitigate the attack and prioritise their own transactions.

upal (OP)
Full Member
***
Offline Offline

Activity: 165
Merit: 102


View Profile
August 23, 2015, 02:56:23 PM
 #31

Thanks to everyone for providing good arguments for improvement of the proposal. I have derived a second proposal and updated the OP accordingly. If you have any counter-argument to this proposal, feel free to put it here or in the comment section of the article - http://upalc.com/maxblocksize.php
I don't think it is a good idea to include TX fees in the calculation.

See these charts: {1}, {2} and {3}

Now consider an old miner consolidating his coins and a spammer attacking the network - both will cause an increased volume of transactions (more or less as in {1}) for a short period. But to succeed in the attack, the spammer (see 10 July and after) will include larger fees {3} and have fewer days destroyed {2}, while the old miner may 'donate' larger fees (as on 27 April) or use the fact that he can transfer without fees (end of November), because his coins are old enough.

With your proposal of including fees in the calculation, the block size after 10 July will increase, thus helping the attacker even more, as it will keep increasing the block size (even now), just because others add more fees to mitigate the attack and prioritise their own transactions.

Thanks for your input. I was thinking the same and hence modified Proposal 2. This time the max cap increase is dependent on block size increase, but Tx fees are still taken care of, so that miners can be compensated for the decreasing block reward. Please check both Proposals 1 & 2 and share your opinion. It would be good if someone could run a simulation of Proposals 1 & 2 from Block 1 and share the results for both against the last difficulty change.
KNK
Hero Member
*****
Offline Offline

Activity: 692
Merit: 502


View Profile
August 23, 2015, 03:42:31 PM
 #32

Have you considered my suggestion here about a 66%-full-block target over a 1-month moving average, with a soft limit configured by the client?

I don't like the idea of forcing some fixed compensation for the fee - the network should choose that, and it is enough to give it the (right) triggers to do so.
I will use tl121's sentence here
A dynamic algorithm can not magically instantiate the needed resources.
just change that to 'increased fees cannot ...'

Having an (easy to set) Soft limit allows the miners and pools to hold back block size growth in case of technical limitations. Yes, usage will be limited (and more expensive) too, but until fees cover the expenses for the bandwidth, space and CPU power required, it is better to limit the network than to crash it completely with overwhelming requirements

CounterEntropy
Full Member
***
Offline Offline

Activity: 214
Merit: 278


View Profile
August 23, 2015, 06:04:11 PM
 #33

Have you considered my suggestion here about 66% full blocks target of 1 month moving average and soft limit configured from the client?
Why 66% and not 75% or 80%? Where is this magic figure coming from? It is like Gavin's 8MB: he first suggested 20MB and then, to get the Chinese miners' support, agreed to 8MB. These magic figures should not be the pillars of a robust system that can scale with time.

I don't like the idea to force some fixed compensation for the fee - the network should choose that and it is enough to give it the (right) triggers to do so.
As far as I can see, per OP's Proposal 2 the network is choosing everything. Where did you find fixed compensation in Proposal 2? (Proposal 1 does not take care of mining fees, so the question of compensation does not arise.)
KNK
Hero Member
*****
Offline Offline

Activity: 692
Merit: 502


View Profile
August 23, 2015, 06:42:46 PM
Last edit: August 23, 2015, 07:05:05 PM by KNK
 #34

Why 66% and not 75% or 80%? Where is this magic figure coming from? It is like Gavin's 8MB: he first suggested 20MB and then, to get the Chinese miners' support, agreed to 8MB. These magic figures should not be the pillars of a robust system that can scale with time.
Good question and I will explain it in my next post.

As far as I can see, per OP's Proposal 2 the network is choosing everything. Where did you find fixed compensation in Proposal 2? (Proposal 1 does not take care of mining fees, so the question of compensation does not arise.)
The network is choosing, but based on fixed rules, which can easily be gamed in both directions. Halving and doubling in Proposal 1 may seem OK for now, but think about 10MB and 20MB blocks: that is 40k+ transactions per block added - you ruin the need for fees for quite a while, or cause the size to flip-flop between 10MB and 20MB each time

KNK
Hero Member
*****
Offline Offline

Activity: 692
Merit: 502


View Profile
August 23, 2015, 06:57:38 PM
Last edit: August 23, 2015, 07:23:47 PM by KNK
 #35

Now, where 66% came from ...
See these charts again: {1}, {2} and {3}, and also {4}

  • Pick any two nearby minima and maxima from {1} - it's close to a 2:3 proportion
  • Check July's attack on {1} - it's close to 1:2, and the fees on {3} are close to 1:2 too, but the actual average size in {4} is still 2:3

Including days destroyed ({2}) to ignore short-lived extreme volumes would be a good idea, but I have no idea how to do it properly, so better not to do it at all.

99Percent
Full Member
***
Offline Offline

Activity: 403
Merit: 100


🦜| Save Smart & Win 🦜


View Profile WWW
August 24, 2015, 08:29:02 PM
 #36

I was about to post a similar suggestion too. I think this is a great idea.

I hope the core devs take a serious look at it.

CounterEntropy
Full Member
***
Offline Offline

Activity: 214
Merit: 278


View Profile
August 25, 2015, 12:06:06 AM
 #37

Now, where 66% came from ...
See these charts again: {1}, {2} and {3}, and also {4}

  • Pick any two nearby minima and maxima from {1} - it's close to a 2:3 proportion
  • Check July's attack on {1} - it's close to 1:2, and the fees on {3} are close to 1:2 too, but the actual average size in {4} is still 2:3

Including days destroyed ({2}) to ignore short-lived extreme volumes would be a good idea, but I have no idea how to do it properly, so better not to do it at all.

Deriving a number from a previous chart might work well in the short term, but it probably is not a good solution for the long run, because the chart will behave completely differently for a bubble, a spam attack or a tech innovation. In fact, that is the reason I do not like BIP 101, though I support bigger blocks. Gavin derived 20MB and then 8MB from previous statistics. The better way, in my opinion, is to take signals from the network itself, as proposed by the OP.
KNK
Hero Member
*****
Offline Offline

Activity: 692
Merit: 502


View Profile
August 25, 2015, 07:06:38 AM
 #38

The better way, in my opinion, is to take signals from the network itself as proposed by OP.
That's what my proposal for a Soft Limit does too, and not just via signals, but as a way for the miners even to shrink the block size if they have consensus on that with a large enough hashrate:
Example: 50% of the miners want to keep the current block size, so they mine ~40% full blocks, and the size will not change even if the rest of the network mines full blocks and there are enough transactions to fill them all.
If they want it lower, they mine even smaller blocks (non-empty, as empty ones are simply ignored), knowing that they will miss some fees. With 10% full blocks from 50% of the miners, it is guaranteed that the new size will be less than 83% of the current size, and with 1% full blocks, 75% of the current.

Swordsoffreedom
Legendary
*
Offline Offline

Activity: 2800
Merit: 1115


Leading Crypto Sports Betting & Casino Platform


View Profile WWW
August 25, 2015, 09:56:57 AM
 #39

I like these proposals as they factor in growth alongside transaction fees.

Setting a dynamically adjusted max cap is the true solution to the block size problem: a right-sized limit in all cases puts the issue to rest. It is also the best middle ground, and in my opinion it will cause far less polarization, since it makes sense to me, and likely to others, that block size growth should match demand, with a dynamic margin that grows or shrinks alongside usage in the future.

That said, one setting worth adding alongside a dynamic max cap would be a recommended minimum spec for client installations, as it would address concerns about nodes not having the needed resources, together with a warning if a node approaches a limit beyond which it would not be able to function optimally in the future.

DooMAD
Legendary
*
Offline Offline

Activity: 3808
Merit: 3160


Leave no FUD unchallenged


View Profile
August 25, 2015, 10:12:35 AM
 #40

While this proposal is my second preference, I'd suggest getting a move on and coding it into existence if this is your first choice for how the network should be run.  Public support seems to be rallying around other proposals, namely BIP101 and BIP100.  If you want a dynamic blocksize, you need to get this out in the open pretty quick to make it viable.
RocketSingh
Legendary
*
Offline Offline

Activity: 1662
Merit: 1050


View Profile
August 25, 2015, 12:44:04 PM
 #41

While this proposal is my second preference, I'd suggest getting a move on and coding it into existence if this is your first choice for how the network should be run.  Public support seems to be rallying around other proposals, namely BIP101 and BIP100.  If you want a dynamic blocksize, you need to get this out in the open pretty quick to make it viable.

The public is totally confused, and the devs are clearly circling around BIP 100, 101 & 103, as all of them come from existing core devs. Core devs seem to be ignoring proposals from outsiders without weighing their merits. This is really unfortunate. I can smell celebrity culture taking over bitcoin development.

DumbFruit
Sr. Member
****
Offline Offline

Activity: 433
Merit: 263


View Profile
August 25, 2015, 01:21:17 PM
 #42

Oh, I didn't realize a second different algorithm was made.
One problem with the new one is that it presumes that Bitcoin transaction fees, in bitcoins, should always go up with a block size increase, which isn't always the case.
If 1 TPS costs 1 cent to fund a healthy network, and a bitcoin costs 1 cent, then 1 TPS would cost 1 bitcoin to move.
If the network grows to 2 TPS, and bitcoin's value triples to 3 cents, then 2 TPS would cost 0.66 bitcoin to move.
So you could have the case where transaction fees could be bid up to 0.9 bitcoin, well into healthy territory, but it would appear that transaction fees, in bitcoin, have actually gone down substantially.
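The arithmetic in the example above can be checked directly. A minimal sketch, using the post's hypothetical figures (fiat cost per TPS and exchange rates are the example's assumptions, not real data):

```python
# Hypothetical figures from the example: the fiat cost of funding a healthy
# network is fixed per TPS, while the BTC exchange rate rises between periods.
fiat_fee_per_tps = 0.01                       # $0.01 of fees needed per TPS

# Period 1: 1 TPS at a BTC price of $0.01
fees_btc_1 = (1 * fiat_fee_per_tps) / 0.01    # = 1.0 BTC

# Period 2: 2 TPS at a BTC price of $0.03
fees_btc_2 = (2 * fiat_fee_per_tps) / 0.03    # ≈ 0.67 BTC

# Fees measured in BTC fell, even though fiat-denominated fee revenue doubled.
print(fees_btc_1, fees_btc_2)
```

So an algorithm that only looks at fee totals denominated in BTC would read this healthy growth as falling fees.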

By their (dumb) fruits shall ye know them indeed...
RocketSingh
Legendary
*
Offline Offline

Activity: 1662
Merit: 1050


View Profile
August 25, 2015, 09:13:59 PM
 #43

As I can see, the proposal is now being discussed on the front page of www.reddit.com/r/bitcoin... interesting Smiley

https://www.reddit.com/r/Bitcoin/comments/3iblg7/bipdraft_dynamically_controlled_bitcoin_block/

Carlton Banks
Legendary
*
Offline Offline

Activity: 3430
Merit: 3074



View Profile
August 25, 2015, 09:53:00 PM
 #44

As I can see, the proposal is now being discussed on the front page of www.reddit.com/r/bitcoin... interesting Smiley

https://www.reddit.com/r/Bitcoin/comments/3iblg7/bipdraft_dynamically_controlled_bitcoin_block/

That's much better. good work, upal

Vires in numeris
skang
Sr. Member
****
Offline Offline

Activity: 452
Merit: 252


from democracy to self-rule.


View Profile
August 26, 2015, 01:18:01 AM
 #45

As I can see, the proposal is now being discussed on the front page of www.reddit.com/r/bitcoin... interesting Smiley

https://www.reddit.com/r/Bitcoin/comments/3iblg7/bipdraft_dynamically_controlled_bitcoin_block/

And the top comments state the problems which are exactly what I have said about it here and other thread.

"India is the guru of the nations, the physician of the human soul in its profounder maladies; she is destined once more to remould the life of the world and restore the peace of the human spirit.
But Swaraj is the necessary condition of her work and before she can do the work, she must fulfil the condition."
CounterEntropy
Full Member
***
Offline Offline

Activity: 214
Merit: 278


View Profile
August 26, 2015, 12:36:12 PM
 #46

Oh, I didn't realize a second different algorithm was made.
One problem with the new one is that it presumes that Bitcoin transaction fees, in bitcoins, should always go up with a block size increase, which isn't always the case.
I don't see that assumed anywhere. The Tx fee has just been added as an extra filter criterion. The max block cap will rise only if both block size and Tx fees increase.

If 1 TPS costs 1 cent to fund a healthy network, and a bitcoin costs 1 cent, then 1 TPS would cost 1 bitcoin to move.
If the network grows to 2 TPS, and bitcoin's value triples to 3 cents, then 2 TPS would cost 0.66 bitcoin to move.
So you could have the case where transaction fees could be bid up to 0.9 bitcoin, well into healthy territory, but it would appear that transaction fees, in bitcoin, have actually gone down substantially.
This is the wrong way to set up the problem; it creates a fictitious situation. The Tx fee should not be calculated in cents. Do it in bitcoin and you'll have your answer. No one will pay 1 BTC to move 1 BTC. Do you see anyone paying 10000 satoshi to move 10000 satoshi? But if you want to move 10000 satoshi, you still have to pay the fee in satoshi, i.e. in BTC. If you remove economics and only discuss technology, then bitcoin is a fallacy.
ZephramC
Sr. Member
****
Offline Offline

Activity: 475
Merit: 255



View Profile
August 26, 2015, 03:52:07 PM
 #47

I like this proposal. It creates a predictable system with feedback and neutral rules. Specific parameters and issues have to be fine-tuned, of course. But it needs both miners and Bitcoin users to agree and to actually "act like change is needed".
Like stated elsewhere: miners decide what is the longest chain; full nodes decide what is the valid chain; Bitcoin is the longest valid chain. So it needs both, and the block size change proposal and mechanism should reflect that.

I have a meta-question... why are the details (in the OP's first post) on a (dynamic) PHP page?
upal (OP)
Full Member
***
Offline Offline

Activity: 165
Merit: 102


View Profile
August 26, 2015, 04:07:08 PM
 #48

I have a meta-question... why the details (in the first OP post) are a (dynamic) PHP page?

Do you mean http://upalc.com/maxblocksize.php ?
DumbFruit
Sr. Member
****
Offline Offline

Activity: 433
Merit: 263


View Profile
August 26, 2015, 04:28:17 PM
Last edit: August 26, 2015, 08:15:04 PM by DumbFruit
 #49

Oh, I didn't realize a second different algorithm was made.
One problem with the new one is that it presumes that Bitcoin transaction fees, in bitcoins, should always go up with a block size increase, which isn't always the case.
I don't see that assumed anywhere. The Tx fee has just been added as an extra filter criterion. The max block cap will rise only if both block size and Tx fees increase.

If 1 TPS costs 1 cent to fund a healthy network, and a bitcoin costs 1 cent, then 1 TPS would cost 1 bitcoin to move.
If the network grows to 2 TPS, and bitcoin's value triples to 3 cents, then 2 TPS would cost 0.66 bitcoin to move.
So you could have the case where transaction fees could be bid up to 0.9 bitcoin, well into healthy territory, but it would appear that transaction fees, in bitcoin, have actually gone down substantially.
This is the wrong way to set up the problem; it creates a fictitious situation. The Tx fee should not be calculated in cents. Do it in bitcoin and you'll have your answer. No one will pay 1 BTC to move 1 BTC. Do you see anyone paying 10000 satoshi to move 10000 satoshi? But if you want to move 10000 satoshi, you still have to pay the fee in satoshi, i.e. in BTC. If you remove economics and only discuss technology, then bitcoin is a fallacy.

I was not calculating transaction fees in cents; I was referring to the value of the transaction fee using an arbitrary exchange rate (USD) rather than the actual quantity of bitcoins in the fee.

The point is that the rising value of Bitcoin could make transaction fee amounts in bitcoins appear to go down, even if the value of the transaction fees is actually going up. This would deadlock the algorithm during periods of increasing value.

Looking at a different problem with the new proposal: the new checks were designed to make sure transaction fees were going up before a block size increase took place, but they only consider the last ~4000 blocks (~27 days), so if transaction fees fall to any degree over a 27-day period and then rise, the check can be passed. In that way nodes would still consolidate to compete at lower transaction fees and larger blocks.
This could easily be gamed by paying huge transaction fees to oneself at the end of this period, which costs a miner virtually nothing.
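The "costs a miner virtually nothing" point is easy to see with a little arithmetic. A sketch with made-up figures (the fee amount and orphan rate are illustrative assumptions, not measurements):

```python
# A miner includes its own transaction carrying a huge fee in its own block.
# The fee leaves one of the miner's addresses and comes straight back through
# the block reward, so the main real cost is the risk of the block being
# orphaned, in which case someone else collects the fee.
self_paid_fee = 50.0    # BTC of "fees" paid to oneself (made-up figure)
orphan_rate = 0.01      # chance the block is orphaned (made-up figure)

# Expected cost of the manoeuvre: the fee is only lost if the block orphans.
expected_cost = self_paid_fee * orphan_rate
print(expected_cost)    # 0.5 BTC expected cost to fake 50 BTC of fee volume
```

So the fee statistics can be inflated at a tiny fraction of their face value, which is why fee-based checks alone are weak evidence of demand.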

ZephramC
Sr. Member
****
Offline Offline

Activity: 475
Merit: 255



View Profile
August 26, 2015, 08:20:30 PM
 #50

I have a meta-question... why the details (in the first OP post) are a (dynamic) PHP page?

Do you mean http://upalc.com/maxblocksize.php ?

Yes, that is what I mean.
BTW, I tried to contribute to your reddit thread, but the formatting and line breaks there are a mess; I had to re-delete my post 4 times :-). So I hope I did not cause any problems.
upal (OP)
Full Member
***
Offline Offline

Activity: 165
Merit: 102


View Profile
August 27, 2015, 01:49:43 PM
 #51

I have a meta-question... why the details (in the first OP post) are a (dynamic) PHP page?

Do you mean http://upalc.com/maxblocksize.php ?

Yes, that is what I mean.
BTW, I tried to contribute to your reddit thread, but the formatting and line breaks there are a mess; I had to re-delete my post 4 times :-). So I hope I did not cause any problems.

Well, upalc.com is my personal blog, and there are many common sections across the article pages. Hence I coded them in PHP.

Regarding the reddit post, it is not mine. I posted the BIP draft on the bitcoin-dev list as per GMaxwell's suggestion. Someone read it there and posted it on reddit. But I don't think posting and editing your own post will create any problem for anyone.
achow101
Moderator
Legendary
*
expert
Offline Offline

Activity: 3416
Merit: 6699


Just writing some code


View Profile WWW
August 27, 2015, 05:09:59 PM
 #52

Have you submitted this to gmaxwell yet for bip number assignment? If you did, what was his response?

upal (OP)
Full Member
***
Offline Offline

Activity: 165
Merit: 102


View Profile
August 27, 2015, 05:47:00 PM
 #53

Have you submitted this to gmaxwell yet for bip number assignment? If you did, what was his response?

I did. On August 18, 2015 he said that the normal procedure is to allow some time for discussion on the list, and asked me to post the draft text as well. I did not have the draft text ready by then. As you'll see, the draft (https://github.com/UpalChakraborty/bips/blob/master/BIP-DynamicMaxBlockSize.mediawiki) was made on August 24, 2015. So, after posting the draft text, I contacted him again for a BIP number. Since then, I have not heard from him.
CounterEntropy
Full Member
***
Offline Offline

Activity: 214
Merit: 278


View Profile
August 28, 2015, 06:16:50 PM
 #54

I was not calculating transaction fees in cents; I was referring to the value of the transaction fee using an arbitrary exchange rate (USD) rather than the actual quantity of bitcoins in the fee.

The point is that the rising value of Bitcoin could make transaction fee amounts in bitcoins appear to go down, even if the value of the transaction fees is actually going up. This would deadlock the algorithm during periods of increasing value.
Users can always pay what they feel is right to get their Tx included in a block. On the other hand, miners can always choose which Tx they'll add. Now, if the Tx fee goes down but the FIAT value of the Tx fee goes up, then the market will decide whether to lower the Tx fee further or not. The Bitcoin protocol does have this freedom, and this proposal does not seem to block it either. So, I don't see any deadlock situation arising.

Looking at a different problem with the new proposal, the new checks were designed so that it would make sure transaction fees were going up before a block size increase took place, but it only considers the last ~4000 blocks (~27 days), so if transaction fees fall to any degree over a 27 day period and then rise, the check can be passed. In that way nodes would still consolidate to compete at lower transaction fees and larger blocks.
This could easily be gamed by paying huge transaction fees to oneself at the ending of this period, which costs a miner virtually nothing.
It cannot be gamed so easily, because this is not the only check in proposal 2. There are two other checks as well, and when all three checks are satisfied over a period of time, you can safely assume that the Tx volume is really increasing and that the max block size increase is happening due to market demand. Even if all three parameters are gamed, miners have to keep burning an increasing amount of money to keep the max block size cap up. If it is artificially increased, it'll automatically come down in the coming difficulty periods, as soon as miners stop burning their money.
achow101
Moderator
Legendary
*
expert
Offline Offline

Activity: 3416
Merit: 6699


Just writing some code


View Profile WWW
August 29, 2015, 03:51:17 AM
 #55

So, one thing that people have pointed out in other threads that also mention this proposal is that someone can spam transactions which fill blocks and force the block limit up. Miners can also lower the block limit to an infinitely small amount. I think you should add upper and lower bounds on what the block size limit can be, perhaps 1 MB and 32 MB, to prevent this kind of thing from happening.
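That suggestion amounts to clamping whatever the adjustment algorithm produces. A sketch (the 1 MB / 32 MB bounds are the figures suggested above, nothing official):

```python
MIN_CAP = 1_000_000      # 1 MB floor, as suggested above
MAX_CAP = 32_000_000     # 32 MB ceiling, as suggested above

def clamp_cap(proposed_cap: int) -> int:
    """Keep the dynamically computed max block size inside hard bounds."""
    return max(MIN_CAP, min(MAX_CAP, proposed_cap))

# Spam cannot push the cap past 32 MB, and miners cannot vote it below 1 MB.
print(clamp_cap(500_000), clamp_cap(64_000_000))   # 1000000 32000000
```

The dynamic rule still operates freely between the bounds; the clamp only limits the damage either attack can do at the extremes.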

99Percent
Full Member
***
Offline Offline

Activity: 403
Merit: 100




View Profile WWW
August 29, 2015, 06:34:42 AM
 #56

It has come to my understanding (please correct me if I am wrong, and why) that miners can inject bogus transactions into their blocks.

If that is true, then this proposal is crap: miners can put in whatever transactions with whatever fees to inflate the statistics to suit whatever new block size they target.

I firmly believe miners will do whatever it takes to make the block size as large as they can, via whatever mechanism is allowed (including BIP 100), since it will give them maximum flexibility to decide. They can always set a soft limit if they choose, after all.

goatpig
Legendary
*
Offline Offline

Activity: 3682
Merit: 1347

Armory Developer


View Profile
August 29, 2015, 10:15:10 AM
 #57

Thanks to everyone for providing good arguments for the improvement of the proposal. I have derived a second proposal and updated the OP accordingly. If you have any counter-argument to this proposal, feel free to put it here or in the comment section of the article - http://upalc.com/maxblocksize.php

I would insist on implementing a decay function for the sake of spam control and to prevent gaming the system.

I will repeat my point: the status quo path, where the current size is maintained if no increase/decrease is triggered, is damaging in that it becomes trivial to maintain a size increase after a spam attack, or in case a miner wants to game the system. At the same time, a decrease becomes quasi-impossible with the current thresholds.

An increase should be triggered at 66~75% capacity, not ~50%, and the fees should be at least 20% superior to the cumulated subsidy of the previous period. This is critical, as it forces an attacker to compound his effort to inflict several unnatural increases on the network, instead of simply giving the network a nudge and maintaining the increase trivially.

The goal of this proposal is to automatically adapt the max block size to the demand, not to offer a voting mechanism where spammers and large miners alike can pump up the block size and keep it there at minimal cost. If this is what you are aiming for, then Garzik's approach makes more sense.

In order to dynamically resize the block ceiling, the algorithm needs to distinguish between spam/attacks and organic growth. The simplest way to tell one from the other is that spam is acute, while organic growth is chronic (or long-lasting, if you prefer). This means natural growth will always eventually trigger an increase, which is why tighter thresholds make sense. Natural growth will always manage to get past a sane threshold, but the higher the threshold, the more expensive it is to game the system.

However, in case the attacker is willing to pay the price for an upscaling, the effect of that attack should fade once the attack is over, which is why we should have a decay function instead of a status quo condition. Only organic growth will be powerful enough to maintain a ceiling increase. With proper thresholds, an attacker would have to keep on spending more fees and increasing the difficulty significantly to keep the ceiling growing, which is the only way he'd have to force in a lasting effect. At that point he is better off just mining for profit as a sane market actor, which is what a PoW blockchain relies on to begin with: there is more profit in participating in the network than in attacking it.
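No formula for the decay is given above; one possible shape, sketched here with made-up parameters (the floor, the 2x growth step and the 0.9 decay factor are all illustrative assumptions), is a cap that drifts back toward a floor every period unless the growth conditions re-trigger:

```python
def next_cap(cap, floor_cap, increase_triggered, decay=0.9):
    """One difficulty period of a decaying max cap (illustrative only).

    If the usage/fee thresholds were met this period, the cap doubles;
    otherwise it decays toward floor_cap instead of holding steady, so an
    attacker must keep paying every period to sustain an artificial increase.
    """
    if increase_triggered:
        return cap * 2
    # Decay: shrink toward the floor, but never below it.
    return max(floor_cap, cap * decay)

cap = 8.0                      # MB, say, after a spam-driven doubling
for _ in range(10):            # ten quiet periods with no trigger
    cap = next_cap(cap, floor_cap=1.0, increase_triggered=False)
print(cap)                     # has drifted well back toward the 1 MB floor
```

Only sustained organic demand, which keeps re-triggering the increase condition, can hold the ceiling up under such a rule.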

RocketSingh
Legendary
*
Offline Offline

Activity: 1662
Merit: 1050


View Profile
August 29, 2015, 11:28:17 AM
 #58

I would insist on implementing a decay function for the sake of spam control and to prevent gaming the system.
Assuming all your suggestions are implemented by the OP, will Armory publicly come out in support of this proposal? I have a feeling that without support from a major player, BIPs are just meaningless. Moreover, if a proposal is not from a core dev, it has a hard time even getting a BIP number.

goatpig
Legendary
*
Offline Offline

Activity: 3682
Merit: 1347

Armory Developer


View Profile
August 29, 2015, 11:45:45 AM
 #59

I would insist on implementing a decay function for the sake of spam control and to prevent gaming the system.
Assuming all your suggestions are implemented by the OP, will Armory publicly come out in support of this proposal? I have a feeling that without support from a major player, BIPs are just meaningless. Moreover, if a proposal is not from a core dev, it has a hard time even getting a BIP number.

No, I speak for myself in all these matters. Etotheipi sets the technical stance for Armory. I've been a bitcointalk member since before Armory even existed; I believe that is a sufficient distinction to signify that my position is personal and does not reflect what Armory as a business believes is the better route.

If this proposal gets enough discussion and refinement, I may implement it myself to speed up the BIP number processing. Historically, Armory has been pretty neutral in consensus discussions, so I do not believe that makes us a big player in this particular field, or that my voice as an Armory employee carries more weight than any other poster's in D&DT.

If I believed I had some authority in this matter, I would just go ahead and implement my own proposal.

Carlton Banks
Legendary
*
Offline Offline

Activity: 3430
Merit: 3074



View Profile
August 29, 2015, 02:33:52 PM
 #60

If I believed I had some authority in this matter, I would just go ahead and implement my own proposal.

Be fair on yourself. You have a more authoritative view than most others, seeing as your work (designing & implementing the block handling + storage for a major Bitcoin wallet) deals with both the subtle details and the overarching dynamics of this very topic (at least in respect of how Bitcoin works now).

goatpig
Legendary
*
Offline Offline

Activity: 3682
Merit: 1347

Armory Developer


View Profile
August 29, 2015, 04:07:40 PM
 #61

Be fair on yourself. You have a more authoritative view than most others, seeing as your work (designing & implementing the block handling + storage for a major Bitcoin wallet) deals with both the subtle details and the overarching dynamics of this very topic (at least in respect of how Bitcoin works now).

I have no experience with network design, so I can't come up on my own with a proposal that accounts for the entire engineering scope this metric affects. Maybe I would be more motivated to run with my own proposal and implementation if I did.

btcdrak
Legendary
*
Offline Offline

Activity: 1064
Merit: 1000


View Profile
September 02, 2015, 12:20:49 AM
 #62

Have you submitted this to gmaxwell yet for bip number assignment? If you did, what was his response?

I did. On August 18, 2015 he said that the normal procedure is to allow some time for discussion on the list, and asked me to post the draft text as well. I did not have the draft text ready by then. As you'll see, the draft (https://github.com/UpalChakraborty/bips/blob/master/BIP-DynamicMaxBlockSize.mediawiki) was made on August 24, 2015. So, after posting the draft text, I contacted him again for a BIP number. Since then, I have not heard from him.

Did you formally request the BIP number? Can I suggest you open a PR in the bips repository and gmaxwell can assign you a number there. That's how I got mine for BIP 105.
CounterEntropy
Full Member
***
Offline Offline

Activity: 214
Merit: 278


View Profile
September 02, 2015, 12:42:56 AM
 #63

Have you submitted this to gmaxwell yet for bip number assignment? If you did, what was his response?

I did. On August 18, 2015 he said that the normal procedure is to allow some time for discussion on the list, and asked me to post the draft text as well. I did not have the draft text ready by then. As you'll see, the draft (https://github.com/UpalChakraborty/bips/blob/master/BIP-DynamicMaxBlockSize.mediawiki) was made on August 24, 2015. So, after posting the draft text, I contacted him again for a BIP number. Since then, I have not heard from him.

Did you formally request the BIP number? Can I suggest you open a PR in the bips repository and gmaxwell can assign you a number there. That's how I got mine for BIP 105.

Won't BIP 105 be included under https://github.com/bitcoin/bips/ like BIP 101?
btcdrak
Legendary
*
Offline Offline

Activity: 1064
Merit: 1000


View Profile
September 02, 2015, 07:17:17 AM
 #64

Have you submitted this to gmaxwell yet for bip number assignment? If you did, what was his response?

I did. On August 18, 2015 he said that the normal procedure is to allow some time for discussion on the list, and asked me to post the draft text as well. I did not have the draft text ready by then. As you'll see, the draft (https://github.com/UpalChakraborty/bips/blob/master/BIP-DynamicMaxBlockSize.mediawiki) was made on August 24, 2015. So, after posting the draft text, I contacted him again for a BIP number. Since then, I have not heard from him.

Did you formally request the BIP number? Can I suggest you open a PR in the bips repository and gmaxwell can assign you a number there. That's how I got mine for BIP 105.

Won't BIP 105 be included under https://github.com/bitcoin/bips/ like BIP 101?

It's there as a pull request for the time being: https://github.com/bitcoin/bips/pull/187. I think there's still plenty of room for discussion.
juiceayres
Newbie
*
Offline Offline

Activity: 6
Merit: 0


View Profile
September 02, 2015, 10:44:30 PM
 #65

Good job, OP! I saw discussions about dynamic block sizes about 2 years ago, but no one was able to introduce the idea formally.

I would prefer proposal 1 (keep it simple) with some modifications, in order not to end up with an unnecessarily large block size. Imagine a current block size of 20 MB: the next jump would be to 40 MB, just to use maybe 22 MB.

I suggest adding a discrete amount to MaxBlockSize (e.g. 12.5%), up to doubling it within the current day (~next 144 blocks).

Code:
If more than 50% of blocks' sizes, found in the first 2000 of the last difficulty period, are more than 90% of MaxBlockSize
    MaxBlockSize = MaxBlockSize + 12.5%
    Limit = 2 * MaxBlockSize_last_144

I think increasing by discrete amounts can damp the system, making the block size more predictable and reducing tx fee gambling.

The same principle can be added to cut block size.
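Under those assumptions (a 12.5% step and a 2x-per-day limit, both from the suggestion above), the adjustment could look like this sketch; `cap_at_day_start` is a hypothetical name for the cap ~144 blocks ago:

```python
STEP = 0.125   # 12.5% per adjustment, as suggested above

def step_cap(cap, cap_at_day_start, grow):
    """Adjust the cap by one discrete 12.5% step, never more than doubling
    (or halving) the size it had ~144 blocks ago (illustrative only)."""
    if grow:
        return min(cap * (1 + STEP), cap_at_day_start * 2)
    return max(cap * (1 - STEP), cap_at_day_start / 2)

# Five consecutive upward steps from 20 MB: 1.125**5 ≈ 1.80x, still under
# the 2x daily limit; a sixth step would clamp at 40 MB.
cap = 20.0
for _ in range(5):
    cap = step_cap(cap, cap_at_day_start=20.0, grow=True)
print(round(cap, 2))
```

Compared with outright doubling, the cap tracks demand in smaller increments, which is exactly the damping effect described above.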

 


upal (OP)
Full Member
***
Offline Offline

Activity: 165
Merit: 102


View Profile
September 05, 2015, 09:35:01 PM
 #66

This BIP has now been assigned a BIP number and pulled into BIP repository on Github.

BIP 106: https://github.com/bitcoin/bips/blob/master/bip-0106.mediawiki
CounterEntropy
Full Member
***
Offline Offline

Activity: 214
Merit: 278


View Profile
September 06, 2015, 12:12:23 AM
 #67

Thanks to everyone for providing good arguments for the improvement of the proposal. I have derived a second proposal and updated the OP accordingly. If you have any counter-argument to this proposal, feel free to put it here or in the comment section of the article - http://upalc.com/maxblocksize.php

I would insist on implementing a decay function for the sake of spam control and to prevent gaming the system.

I will repeat my point: the status quo path, where the current size is maintained if no increase/decrease is triggered, is damaging in that it becomes trivial to maintain a size increase after a spam attack, or in case a miner wants to game the system. At the same time, a decrease becomes quasi-impossible with the current thresholds.

An increase should be triggered at 66~75% capacity, not ~50%, and the fees should be at least 20% superior to the cumulated subsidy of the previous period. This is critical, as it forces an attacker to compound his effort to inflict several unnatural increases on the network, instead of simply giving the network a nudge and maintaining the increase trivially.

The goal of this proposal is to automatically adapt the max block size to the demand, not to offer a voting mechanism where spammers and large miners alike can pump up the block size and keep it there at minimal cost. If this is what you are aiming for, then Garzik's approach makes more sense.

In order to dynamically resize the block ceiling, the algorithm needs to distinguish between spam/attacks and organic growth. The simplest way to tell one from the other is that spam is acute, while organic growth is chronic (or long-lasting, if you prefer). This means natural growth will always eventually trigger an increase, which is why tighter thresholds make sense. Natural growth will always manage to get past a sane threshold, but the higher the threshold, the more expensive it is to game the system.

However, in case the attacker is willing to pay the price for an upscaling, the effect of that attack should fade once the attack is over, which is why we should have a decay function instead of a status quo condition. Only organic growth will be powerful enough to maintain a ceiling increase. With proper thresholds, an attacker would have to keep on spending more fees and increasing the difficulty significantly to keep the ceiling growing, which is the only way he'd have to force in a lasting effect. At that point he is better off just mining for profit as a sane market actor, which is what a PoW blockchain relies on to begin with: there is more profit in participating in the network than in attacking it.

As I can see, you have talked about various numbers, like 66~75%, 20%, etc. These appear to be magic numbers to me, like BIP 101's 8 MB or BIP 103's 4.4% & 17.7%. How do you derive them?
goatpig
Legendary
*
Offline Offline

Activity: 3682
Merit: 1347

Armory Developer


View Profile
September 06, 2015, 07:49:30 AM
 #68

As I can see, you have talked about various numbers, like 66~75%, 20% etc. These appears to be magic number to me, like BIP 101's 8mb or BIP 103's 4.4% & 17.7%. How do you derive them ?

I've stated that all these figures need to be discussed. I believe these thresholds need to exist, but a decent value, or a decent way to compute these values, needs to be discussed. I can give you the reasons these values need to exist, but I have not done the research to determine which figures are the most appropriate, or whether there is a way to set them dynamically as well.

The 66~75% figure is a proposal for the block space usage threshold at which a resizing should be tested against secondary conditions. The rationale is that organic market growth will always burst through any threshold (until it hits a hard cap eventually), whereas an attacker won't necessarily. Raising the space usage threshold increases the effort required by ill-intentioned parties, and doesn't change a thing for natural market growth. As a reminder, the current threshold is 50%.

The 20% figure denotes the fee growth threshold, i.e. a resizing should only occur if fees have gone X% either way compared to the previous period. Currently there is no such threshold, making it trivial for any attacker to push up the block size and maintain it high.

As long as these thresholds are in place and tight enough, an effective decay function can be implemented. The goal is to distinguish between organic growth in demand and spam attacks, and to use a safety-net mechanism (the decay function) to correct any growth that is not supported by actual demand. It would actually mimic commodity prices in a speculative market: large speculators can pump the price for a while, but eventually the market will always correct itself, with the valid demand as its baseline.

The first threshold is not critical. It will always be reached first when demand climbs, so its particular value matters less; it could be 90% for all I care, because fees won't start climbing until blocks are nearing max capacity. It just needs to be > 50% to make room for the decay function.

The threshold that truly needs to be discussed is the second one. It can't be so low that an attacker can throw a couple of extra BTC at the network and trigger a size increase on the cheap under the right conditions. It can't be so big that the network gets clogged with a massive backlog before it resizes. However, an increase in fee subsidy is an increase in revenue, which will eventually translate into increased mining power. It can be expected that such tight thresholds will result in bursty cap growth, which is another reason for a decay function, but generally I believe we are better off with high values than with low ones.
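Putting the two thresholds above together, the per-period decision can be sketched as follows (66% usage and 20% fee growth are the figures proposed in this thread; the function shape and names are illustrative assumptions, not the BIP text):

```python
def should_increase(avg_fullness, fees_now, fees_prev,
                    usage_threshold=0.66, fee_growth=0.20):
    """True only if blocks are genuinely full AND fee revenue grew enough
    versus the previous period (sketch of the thresholds discussed above)."""
    return (avg_fullness >= usage_threshold and
            fees_now >= fees_prev * (1 + fee_growth))

# Organic growth: full blocks and rising fee revenue -> increase.
print(should_increase(0.80, fees_now=130.0, fees_prev=100.0))   # True
# Spam that fills blocks but doesn't lift fee revenue -> no increase.
print(should_increase(0.95, fees_now=105.0, fees_prev=100.0))   # False
```

Requiring both signals at once is what makes an attack expensive: filling blocks is cheap, but lifting fee revenue 20% period after period is not.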
