Bitcoin Forum

Bitcoin => Development & Technical Discussion => Topic started by: hello_good_sir on October 08, 2014, 04:16:39 AM



Title: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: hello_good_sir on October 08, 2014, 04:16:39 AM
My concern is that there is little room for error with geometric growth.  Let's say that things are happily humming along with bandwidth and block size both increasing by 50% per year.  Then a decade goes by where bandwidth only increases by 30% per year.  In that decade block size grew to 5767% of its starting value while bandwidth grew to only 1379%.  So now people's connections are only 24% as capable of handling the blockchain.

Not a big deal?  Well, except that we can expect the power of nodes to follow some sort of curve ("exponential" in the vernacular) such that most nodes are barely above the threshold of viability, meaning that such an event would cause the majority of nodes to shut down, likely permanently.
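
A quick check of that compounding divergence (a minimal sketch; the 50% and 30% rates are the hypothetical figures above, not measurements):

Code:
    // Compound a hypothetical 50%/yr block-size cap against a hypothetical
    // 30%/yr bandwidth growth rate over one decade.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double capGrowth = 1.50;   // assumed block-size cap growth per year
        const double bwGrowth  = 1.30;   // assumed bandwidth growth per year
        const int years = 10;

        double cap = std::pow(capGrowth, years);  // ~57.7x, i.e. 5767%
        double bw  = std::pow(bwGrowth, years);   // ~13.8x, i.e. 1379%

        // Remaining headroom relative to the cap: ~24%.
        std::printf("cap: %.1fx  bandwidth: %.1fx  headroom: %.0f%%\n",
                    cap, bw, 100.0 * bw / cap);
        return 0;
    }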


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: solex on October 08, 2014, 09:24:16 AM
My concern is that there is little room for error with geometric growth.  Let's say that things are happily humming along with bandwidth and block size both increasing by 50% per year.  Then a decade goes by where bandwidth only increases by 30% per year.  In that decade block size grew to 5767% of its starting value while bandwidth grew to only 1379%.  So now people's connections are only 24% as capable of handling the blockchain.

Not a big deal?  Well, except that we can expect the power of nodes to follow some sort of curve ("exponential" in the vernacular) such that most nodes are barely above the threshold of viability, meaning that such an event would cause the majority of nodes to shut down, likely permanently.

Compression techniques (e.g. by using transaction hashes and/or IBLT), once implemented, will certainly keep the new block message size growth rate much lower than the bandwidth growth rate.  

At the moment the 1MB limit in CheckBlock() is agnostic as to how blocks are received.

Code:
    // Size limits
    if (block.vtx.empty() || block.vtx.size() > MAX_BLOCK_SIZE || ::GetSerializeSize(block, SER_NETWORK, PROTOCOL_VERSION) > MAX_BLOCK_SIZE)
        return state.DoS(100, error("CheckBlock() : size limits failed"),
                         REJECT_INVALID, "bad-blk-length");

Consider that bandwidth is the binding constraint, with disk space perhaps 10x less so. This implies that a 1MB maximum for transmitted blocks could be reflected as a 10MB maximum for old blocks read from / written to disk (especially once node bootstrapping is enhanced by headers-first sync and an available utxo set).

Put another way, a newly mined 2MB block might be transmitted across the network in a compressed form, perhaps of only 200KB, but it would still be rejected, even though it falls within the resource constraints the network currently accepts.
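
A minimal sketch of that idea (the constants and the function here are hypothetical illustrations, not how Bitcoin Core structures the check above):

Code:
    // Hypothetical only: decouple the wire-message budget from the on-disk
    // consensus limit, per the 1MB-transmitted / 10MB-stored idea above.
    static const unsigned int MAX_STORED_BLOCK_SIZE = 10 * 1000 * 1000; // limit on the serialized block
    static const unsigned int MAX_WIRE_MESSAGE_SIZE =  1 * 1000 * 1000; // budget for the network message

    bool CheckBlockSizes(unsigned int nSerializedSize, unsigned int nWireMessageSize)
    {
        if (nSerializedSize > MAX_STORED_BLOCK_SIZE)
            return false;   // too large to store or relay at all
        if (nWireMessageSize > MAX_WIRE_MESSAGE_SIZE)
            return false;   // the (possibly compressed) message exceeds the bandwidth budget
        return true;        // e.g. a 2MB block sent as a ~200KB compressed message passes
    }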


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: gmaxwell on October 08, 2014, 11:13:35 PM
(e.g. by using transaction hashes and/or IBLT), once implemented, will certainly keep the new block message size growth rate much lower than the bandwidth growth rate.  
Keep in mind these techniques don't reduce the amount of data that needs to be sent (except, at most, by a factor of two). They reduce the amount of latency critical data. Keeping up with the blockchain still requires transferring and verifying all the data.

Quote
Not a big deal?  Well, except that we can expect the power of nodes to follow some sort of curve ("exponential" in the vernacular) such that most nodes are barely above the threshold of viability, meaning that such an event would cause the majority of nodes to shut down, likely permanently.
Right. There is a decentralization trade-off at the margin.  But this isn't scaleless-- there is _some_ level, even some level of growth, which presents little to no hazard even way down the margin.   As a soft stewardship goal (not a system rule, since it can't be one), the commitment should be that the system is run so that it fits into an acceptable portion of common residential broadband, so that the system does not become dependent on centralized entities. As some have pointed out, being decentralized is Bitcoin's major (and perhaps only) strong competitive advantage compared to traditional currencies and payment systems. How to meet that goal best is debatable in the specifics.

At the moment there are a bunch of silly low-hanging fruit that make running a node more costly than it needs to be; we're even at the point where some people developing on Bitcoin Core have told me they've stopped running a node at home. It's hard to reason about the wisdom of these things while the system is still being held back by some warts we've long known how to correct and are in the process of correcting.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 13, 2014, 08:58:44 AM
(e.g. by using transaction hashes and/or IBLT), once implemented, will certainly keep the new block message size growth rate much lower than the bandwidth growth rate.  
Keep in mind these techniques don't reduce the amount of data that needs to be sent (except, at most, by a factor of two). They reduce the amount of latency critical data. Keeping up with the blockchain still requires transferring and verifying all the data.

Quote
Not a big deal?  Well, except that we can expect the power of nodes to follow some sort of curve ("exponential" in the vernacular) such that most nodes are barely above the threshold of viability, meaning that such an event would cause the majority of nodes to shut down, likely permanently.
Right. There is a decentralization trade-off at the margin.  But this isn't scaleless-- there is _some_ level, even some level of growth, which presents little to no hazard even way down the margin.   As a soft stewardship goal (not a system rule, since it can't be one), the commitment should be that the system is run so that it fits into an acceptable portion of common residential broadband, so that the system does not become dependent on centralized entities. As some have pointed out, being decentralized is Bitcoin's major (and perhaps only) strong competitive advantage compared to traditional currencies and payment systems. How to meet that goal best is debatable in the specifics.

At the moment there are a bunch of silly low-hanging fruit that make running a node more costly than it needs to be; we're even at the point where some people developing on Bitcoin Core have told me they've stopped running a node at home. It's hard to reason about the wisdom of these things while the system is still being held back by some warts we've long known how to correct and are in the process of correcting.

It doesn't make sense to guess at this.  Any guess is bound to be wrong.
If, after picking the low-hanging fruit, there is still an issue here (and there may be), it ought not be resolved by a guess when there is data within the block chain that would be useful for determining the max block size.
In the same way that difficulty adjustment is sensitive to data within the block chain, so too could this be.

I don't know what the right answer is any more than Gavin does, but making an estimate would not be the best way to solve this in any case.

.....
One example of a better way would be to use a sliding window of some number of blocks, 100+ deep, basing the max allowed size on some percentage over the average while dropping anomalous outliers from that calculation.  Using a method that is sensitive to reality as it may exist in the unpredictable future gives some assurance that we won't just be changing this whenever circumstances change.
Do it right, do it once.

There isn't a way to predict what networks will look like in the future, other than to use the data of the future to do just that.
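
A minimal sketch of what such a rule might look like (the window depth, trimming fraction, headroom, and floor here are hypothetical illustrations, not numbers proposed in this post):

Code:
    // Hypothetical sliding-window cap: trim outliers from the sizes of the last
    // N blocks, average the rest, and allow some headroom over that average.
    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    uint64_t NextMaxBlockSize(std::vector<uint64_t> recentSizes,  // sizes of the last N blocks
                              double headroom = 1.5,              // allow 50% over the trimmed average
                              double trimFraction = 0.1,          // drop lowest/highest 10% as outliers
                              uint64_t floorSize = 1000000)       // never fall below the current 1MB
    {
        if (recentSizes.empty())
            return floorSize;

        std::sort(recentSizes.begin(), recentSizes.end());
        size_t trim = static_cast<size_t>(recentSizes.size() * trimFraction);

        uint64_t sum = 0;
        size_t count = 0;
        for (size_t i = trim; i + trim < recentSizes.size(); ++i) {
            sum += recentSizes[i];
            ++count;
        }
        if (count == 0)
            return floorSize;

        uint64_t candidate = static_cast<uint64_t>(headroom * (static_cast<double>(sum) / count));
        return std::max<uint64_t>(candidate, floorSize);
    }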



Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Phrenico on October 13, 2014, 02:52:48 PM

It doesn't make sense to guess at this.  Any guess is bound to be wrong.
If, after picking the low-hanging fruit, there is still an issue here (and there may be), it ought not be resolved by a guess when there is data within the block chain that would be useful for determining the max block size.
In the same way that difficulty adjustment is sensitive to data within the block chain, so too could this be.

I don't know what the right answer is any more than Gavin does, but making an estimate would not be the best way to solve this in any case.

.....
One example of a better way would be to use a sliding window of some number of blocks, 100+ deep, basing the max allowed size on some percentage over the average while dropping anomalous outliers from that calculation.  Using a method that is sensitive to reality as it may exist in the unpredictable future gives some assurance that we won't just be changing this whenever circumstances change.
Do it right, do it once.

There isn't a way to predict what networks will look like in the future, other than to use the data of the future to do just that.



Is this 50% per year intended to be a hardcoded rule like the block reward?

That's not how I interpreted Gavin's report. It sounded more like a goal that the developers thought was attainable.

That said, 50% per year does seem aggressive. At some point, the opportunity cost of including more transactions is going to exceed the tx fee value, certainly as long as the block reward exists, so the blocksize cannot increase indefinitely. And so what if there is little room in the blockchain? Not every single tiny transaction needs to be recorded indefinitely. Since the cost of increasing the block size is (I expect) increased centralization, shouldn't the developers be hesitant to make such a commitment without allowing for discretion?

I also wonder what the best approach will be, way out in the future, when the block reward is near zero. Can there be an equilibrium transaction fee if the difficulty is allowed to continue to fall? A simple, kludgy solution might be to fix the difficulty at some level, allowing the block rate to depend on the accumulated bounty of transaction fees.

Though I'm sure some new kind of proof of work/stake approach could best solve this problem and make the network more secure and cheaper.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 13, 2014, 05:17:44 PM

It doesn't make sense to guess at this.  Any guess is bound to be wrong.
If, after picking the low-hanging fruit, there is still an issue here (and there may be), it ought not be resolved by a guess when there is data within the block chain that would be useful for determining the max block size.
In the same way that difficulty adjustment is sensitive to data within the block chain, so too could this be.

I don't know what the right answer is any more than Gavin does, but making an estimate would not be the best way to solve this in any case.

.....
One example of a better way would be to use a sliding window of some number of blocks, 100+ deep, basing the max allowed size on some percentage over the average while dropping anomalous outliers from that calculation.  Using a method that is sensitive to reality as it may exist in the unpredictable future gives some assurance that we won't just be changing this whenever circumstances change.
Do it right, do it once.

There isn't a way to predict what networks will look like in the future, other than to use the data of the future to do just that.



Is this 50% per year intended to be a hardcoded rule like the block reward?

That's not how I interpreted Gavin's report. It sounded more like a goal that the developers thought was attainable.

That said, 50% per year does seem aggressive. At some point, the opportunity cost of including more transactions is going to exceed the tx fee value, certainly as long as the block reward exists, so the blocksize cannot increase indefinitely. And so what if there is little room in the blockchain? Not every single tiny transaction needs to be recorded indefinitely. Since the cost of increasing the block size is (I expect) increased centralization, shouldn't the developers be hesitant to make such a commitment without allowing for discretion?

I also wonder what the best approach will be, way out in the future, when the block reward is near zero. Can there be an equilibrium transaction fee if the difficulty is allowed to continue to fall? A simple, kludgy solution might be to fix the difficulty at some level, allowing the block rate to depend on the accumulated bounty of transaction fees.

Though I'm sure some new kind of proof of work/stake approach could best solve this problem and make the network more secure and cheaper.

It also may be contrary to the eventual goal of usage-driven mining, where transaction fees ultimately overtake the block reward in value.  This proposal may drive TX fees to zero forever.  Block chain space is a somewhat scarce resource, just as the total # of coins is.  Adding an arbitrary 50% yearly inflation changes things detrimentally.

If this forks as currently proposed, I'll be selling all my BTC on Gavin's fork and mining on the other.  I suspect I will not be the only one.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Gavin Andresen on October 13, 2014, 07:05:00 PM
It also may be contrary to the eventual goal of usage-driven mining, where transaction fees ultimately overtake the block reward in value.  This proposal may drive TX fees to zero forever.  Block chain space is a somewhat scarce resource, just as the total # of coins is.  Adding an arbitrary 50% yearly inflation changes things detrimentally.

I'm sending a follow-up blog post to a couple of economists to review, to make sure my economic reasoning is correct, but I don't believe that even an infinite blocksize would drive fees to zero forever.

Commodity prices never drop to zero, no matter how abundant they are (assuming a reasonably free market-- government can, of course supply "free" goods, but the results are never pretty). The suppliers of the commodities have to make a profit, or they'll find something else to do.

That has very little to do with whether or not transaction fees will be enough to secure the network in the future. I think both the "DON'T RAISE BLOCKSIZE OR THE WORLD WILL END!" and "MUST RAISE THE BLOCKSIZE OR THE WORLD WILL END!" factions confuse those two issues. I don't think adjusting the block size up or down or keeping it the same will have any effect on whether or not transaction fees will be enough to secure the network as the block subsidy goes to zero (and, as I said, I'll ask professional economists what they think).

If this forks as currently proposed, I'll be selling all my BTC on Gavin's fork and mining on the other.  I suspect I will not be the only one.

Okey dokey. You can join the people still mining on we-prefer-50-BTC-per-block fork (if you can find them... I think they gave up really quickly after the 50 to 25 BTC subsidy decrease).


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 13, 2014, 07:30:06 PM
It also may be contrary to the eventual goal of usage-driven mining, where transaction fees ultimately overtake the block reward in value.  This proposal may drive TX fees to zero forever.  Block chain space is a somewhat scarce resource, just as the total # of coins is.  Adding an arbitrary 50% yearly inflation changes things detrimentally.

I'm sending a follow-up blog post to a couple of economists to review, to make sure my economic reasoning is correct, but I don't believe that even an infinite blocksize would drive fees to zero forever.

Commodity prices never drop to zero, no matter how abundant they are (assuming a reasonably free market-- government can, of course supply "free" goods, but the results are never pretty). The suppliers of the commodities have to make a profit, or they'll find something else to do.

That has very little to do with whether or not transaction fees will be enough to secure the network in the future. I think both the "DON'T RAISE BLOCKSIZE OR THE WORLD WILL END!" and "MUST RAISE THE BLOCKSIZE OR THE WORLD WILL END!" factions confuse those two issues.

Great, we agree on all of this.

I don't think adjusting the block size up or down or keeping it the same will have any effect on whether or not transaction fees will be enough to secure the network as the block subsidy goes to zero (and, as I said, I'll ask professional economists what they think).
Here is where it jumps the tracks.  
Your thoughts and my thoughts aren't going to answer this.
Math will.  It is not about opinion, it is about measurement and calculation.  Picking 50% out of a hat is hubris, and you know it in your heart.
Justify it, show your work, or it cannot be taken seriously.  Looking forward to your follow-up and its analysis: economists, sure, but let's have a game-theory analysis as well as an analysis of new risks.

If this forks as currently proposed, I'll be selling all my BTC on Gavin's fork and mining on the other.  I suspect I will not be the only one.
Okey dokey. You can join the people still mining on we-prefer-50-BTC-per-block fork (if you can find them... I think they gave up really quickly after the 50 to 25 BTC subsidy decrease).
Strawmen will make you look stupid and petty.  Play well with the other scientists, please?  If this was your best and final offer, you needn't bother responding.  I don't know the answer, but so far we haven't seen it in sufficient detail to end dialog and discovery.

Not to belabor it, but the obvious difference is that the 50 BTC folks were going against Satoshi's design, whereas those following the 50% love-it-or-leave-it fork would be the ones going against Satoshi's design.  If we need a hard fork, we do it right so that it need not be repeated.

Your proposal started a dialog that may bring a good result.  
The first effort isn't that end result.  If we think we got it perfect on a first guess, our minds are closed to learning and consensus.


No comment on this?
Quote
One example of a better way would be to use a sliding window of some number of blocks, 100+ deep, basing the max allowed size on some percentage over the average while dropping anomalous outliers from that calculation.  Using a method that is sensitive to reality as it may exist in the unpredictable future gives some assurance that we won't just be changing this whenever circumstances change.
Do it right, do it once.

There isn't a way to predict what networks will look like in the future, other than to use the data of the future to do just that.  Where we are guessing, we ought to acknowledge that.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 13, 2014, 07:40:49 PM
Commodity prices never drop to zero, no matter how abundant they are (assuming a reasonably free market-- government can, of course supply "free" goods, but the results are never pretty). The suppliers of the commodities have to make a profit, or they'll find something else to do.

Not only that, but there will always be non-infinite bandwidth and storage available to users, while anyone can create transaction spam essentially for free. So minimum fees remain necessary for non-priority transactions.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 13, 2014, 07:45:28 PM
Commodity prices never drop to zero, no matter how abundant they are (assuming a reasonably free market-- government can, of course supply "free" goods, but the results are never pretty). The suppliers of the commodities have to make a profit, or they'll find something else to do.
Not only that, but there will always be non-infinite bandwidth and storage available to users, while anyone can create transaction spam essentially for free. So minimum fees remain necessary for non-priority transactions.

Check your assumptions.

1) We don't know what the future networks will look like.
2) Commodity prices do go to zero for periods of time.  Sometimes they rot in silos, and cost money to dispose of them (negative worth).


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: wachtwoord on October 13, 2014, 09:08:43 PM

If this forks as currently proposed, I'll be selling all my BTC on Gavin's fork and mining on the other.  I suspect I will not be the only one.

Okey dokey. You can join the people still mining on we-prefer-50-BTC-per-block fork (if you can find them... I think they gave up really quickly after the 50 to 25 BTC subsidy decrease).


This is so weak. If we follow this analogy YOU are the one wanting to mine 50 BTC blocks ad infinitum since halving to 25 BTC is what Satoshi proposed.

I really don't like the way you are handling this. It seems like you are trying to push your little pet project through as a little dictator. As long as you don't change I'm with NewLiberty on this one and will hold Bitcoin instead of GavinCoin.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 13, 2014, 09:28:48 PM
Commodity prices never drop to zero, no matter how abundant they are (assuming a reasonably free market-- government can, of course supply "free" goods, but the results are never pretty). The suppliers of the commodities have to make a profit, or they'll find something else to do.
Not only that, but there will always be non-infinite bandwidth and storage available to users, while anyone can create transaction spam essentially for free. So minimum fees remain necessary for non-priority transactions.

Check your assumptions.

1) We don't know what the future networks will look like.

No, but we do know the science of today. I'm not sure you appreciate the meaning of infinite.

It's not possible to transmit information with perfect efficiency, unless, probably, using quantum entanglement. It's also not possible to store unlimited meaningful information within a confined space, never mind making it all computationally accessible. I'd say my statement is less an assumption and more an observation, unless of course you can show how it's reasonably possible to make use of quantum phenomena in ways we can't imagine today.


2) Commodity prices do go to zero for periods of time.  Sometimes they rot in silos, and cost money to dispose of them (negative worth).

I think he meant go to zero permanently, or at least substantially long periods of time.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: TraderTimm on October 14, 2014, 03:42:43 AM
I think this is where the "Bitcoin is just an experiment" attitude rears its ugly head.

You see, Gavin could totally nuke Bitcoin, but he has the plausible deniability that Bitcoin is just an "experiment". You know, something you just putter about on in the garage; if raccoons break in and tear it apart, hell, it's just foolin' around, no big loss.

And that is the attitude that is being put forth here. 50%? Sure, why the hell not. Maybe roll a D-100 and decide that way; it would be just as rigorous as a complete and utter guess.

What is completely unreasonable is not basing any of these metrics on actual USAGE, with a sliding window a la difficulty adjustments, to adhere to what is actually HAPPENING in the network.

Gavin doesn't know, but hey, we have to trust him.

I don't think we do...


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Syke on October 14, 2014, 04:01:16 AM
What would happen if the blocksize were increased to 1 GB tomorrow? Pretty much nothing. Miners will always be able to create blocks less than the maximum blocksize.
What would happen if the blocksize were decreased to 1 KB tomorrow? Bitcoin would come grinding to a halt.

Too small blocksize = death to bitcoin.
Too big blocksize = non-issue.

I'd rather see the blocksize too big than too small.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: greenlion on October 14, 2014, 05:35:27 AM
What would happen if the blocksize were increased to 1 GB tomorrow? Pretty much nothing. Miners will always be able to create blocks less than the maximum blocksize.
What would happen if the blocksize were decreased to 1 KB tomorrow? Bitcoin would come grinding to a halt.

Too small blocksize = death to bitcoin.
Too big blocksize = non-issue.

I'd rather see the blocksize too big than too small.

IBLT makes it an issue because there would no longer be a risk/reward tradeoff on tx fees vs propagation delay in building the largest possible blocks. As a result the miner is incentivized to always build the largest possible block to collect maximum tx fees with no propagation risk.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: solex on October 14, 2014, 06:18:24 AM
What would happen if the blocksize were increased to 1 GB tomorrow? Pretty much nothing. Miners will always be able to create blocks less than the maximum blocksize.
What would happen if the blocksize were decreased to 1 KB tomorrow? Bitcoin would come grinding to a halt.

Too small blocksize = death to bitcoin.
Too big blocksize = non-issue.

I'd rather see the blocksize too big than too small.

IBLT makes it an issue because there would no longer be a risk/reward tradeoff on tx fees vs propagation delay in building the largest possible blocks. As a result the miner is incentivized to always build the largest possible block to collect maximum tx fees with no propagation risk.

IBLT encourages good behaviour because you can't successfully publish an IBLT full of transactions which the rest of the network doesn't want, unlike now, when a block could be full of rubbish 1sat transactions from a secret spam generator. The whole point of IBLT is that each node knows (and accepts) most of the transactions in advance, and has them in its mempool. It is only a smallish set of differences which are required from the IBLT when processing it. So the fees market should be helped by this development.
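
As an illustration of that point (this shows only the set-difference intuition, not the actual IBLT construction): if a node already has most of a block's transactions in its mempool, only the missing ones still need to be transferred, so the relay message shrinks toward the size of the difference rather than the size of the block.

Code:
    // Illustration of set-difference relay (not the real IBLT data structure):
    // only transactions absent from the local mempool must still be downloaded.
    #include <set>
    #include <string>
    #include <vector>

    std::vector<std::string> TxidsToRequest(const std::vector<std::string>& blockTxids,
                                            const std::set<std::string>& mempoolTxids)
    {
        std::vector<std::string> missing;
        for (const std::string& txid : blockTxids)
            if (mempoolTxids.count(txid) == 0)
                missing.push_back(txid);   // these are the only ones left to fetch
        return missing;
    }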


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 14, 2014, 12:50:43 PM
What would happen if the blocksize were increased to 1 GB tomorrow? Pretty much nothing. Miners will always be able to create blocks less than the maximum blocksize.
What would happen if the blocksize were decreased to 1 KB tomorrow? Bitcoin would come grinding to a halt.

Too small blocksize = death to bitcoin.
Too big blocksize = non-issue.

I'd rather see the blocksize too big than too small.
That is nothing like Gavin's proposal, for good reasons.  To answer your question: what would also happen if the block size were increased to 1 GB tomorrow is the introduction of new attack vectors, which, if exploited, would require intervention by miners and developers to resolve.
It is not enough to design something that works, we must also design so that it does not become more fragile.

Why not strive for a dynamic limit that prevents the need for future hard forks over the same issue?
Gavin's proposal is "the simplest that could possibly work".

I'll argue that it is just too simple, and too inflexible.

This proposal may be opening Bitcoin to new types of coin-killing attacks by assuming that anti-spam fees will always be sufficient to prevent bloating attacks.   Consider that the entire value of all bitcoin is currently less than 1/10th of the wealth of the world's richest man, and that man has spoken publicly against Bitcoin.  When you include wealthy institutions and even governments within the potential threat vector, the risks may become more apparent.  We cannot assume Bitcoin's success, and then predicate decisions necessary for that success on that success having already been accomplished.

If Bitcoin has to change due to a crisis, it ought at least be made better... so that the crisis need not be revisited.  (Hard forks get progressively more challenging in the future).  Design for the next 100s of years, not for the next bubble.  Fix it right, and we fix it once.

Designs ought to have safeguards to avoid unintended consequences and the ability to adjust as circumstances change.
My suggestion is that perhaps we can do better than to simply assume an infinite extrapolation, when there exists a means to measure and respond to the actual needs as they may exist in the future, within the block chain.

50% may be too much in some years, too little in others.  The proposal is needlessly inflexible and assumes too much (an indefinite extrapolation of network resources).  Picking the inflating percentage numbers out of a hat by a small group is what CENTRAL BANKERS do; this is not Satoshi's Bitcoin.

I'm not convinced a crisis necessitating a hard fork is at hand, but I am sure that the initial proposal is not the answer to it.  I look forward to its revision and refinement.  


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: painlord2k on October 14, 2014, 01:59:03 PM
In my opinion, the 50% increase per year of the block size is too conservative in the short run and too optimistic in the long run.
If Bitcoin had an exponential increase in usage, even usage from fields where it is currently uneconomic to implement a payment service, we would have a faster increase in the short run and a slowdown in the long run.

Spamming the blockchain is not a real issue, for me.
If tomorrow we had a 1 GB max block size, some entity could spam the blockchain with dust transactions, because the minimum fee is about 1 cent. The reaction of the users would simply be to raise the fee they pay: going from 1 cent to 10 cents, the attacker would need to pay ten times as much to the miners (who thank them for it) to produce transactions with the same fee and priority as the real users.

What would he accomplish? A bigger blockchain? People and large operators can buy truckloads of multi-terabyte HDs.
Actually, you don't even need to keep the whole blockchain on disk, just the last few weeks or months. People could just download or share the previous blocks on hardware media, as they will never change.
Just to be clear: although nominally the blockchain could change if someone dedicated enough time and resources to rebuilding it from the genesis block (with a larger proof-of-work), any change to the chain more than a day/week/month old will always be rejected.

Large entities will not attack the blockchain:
1) because it is, in any case, against some law to wreak havoc in a computer network and rewrite it for nefarious purposes.
2) because governments would need to justify it. They hold the monopoly on coercion, the monopoly on violence. If they resort to indirect attacks, they are just admitting that the threat of violence, and violence itself, is not working against Bitcoin's users. It would amount to starting to bleed in a shark-infested sea.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Gavin Andresen on October 14, 2014, 03:11:15 PM
No comment on this?
Quote
One example of a better way would be to use a sliding window of some number of blocks, 100+ deep, basing the max allowed size on some percentage over the average while dropping anomalous outliers from that calculation.  Using a method that is sensitive to reality as it may exist in the unpredictable future gives some assurance that we won't just be changing this whenever circumstances change.
Do it right, do it once.

That does not address the core of people's fears, which is that big, centralized mining concerns will collaborate to push smaller competitors off the network by driving up the median block size.

There isn't a way to predict what networks will look like in the future, other than to use the data of the future to do just that.  Where we are guessing, we ought to acknowledge that.

Yes, that is a good point, made by other people in the other thread about this. A more conservative rule would be fine with me, e.g.

Fact: average "good" home Internet connection is 250GB/month bandwidth.
Fact: Internet bandwidth has been growing at 50% per year for the last 20 years.
  (if you can find better data than me on these, please post links).

So I propose the maximum block size be increased to 20MB as soon as we can be sure the reference implementation code can handle blocks that large (that works out to about 40% of 250GB per month).
Increase the maximum by 40% every year (really, double every two years-- thanks to whoever pointed out 40% per year is 96% over two years)
Since nothing can grow forever, stop doubling after 20 years.
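
For reference, the schedule that straw-man works out to (a minimal sketch using the 20MB start, 40%/year rate, and 20-year cutoff proposed above):

Code:
    // Straw-man schedule: 20MB starting cap, 40%/year growth (roughly doubling
    // every two years), with growth frozen after year 20.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double startMB = 20.0;
        const double growth  = 1.40;
        const int capYears   = 20;

        for (int year = 0; year <= 25; ++year) {
            int effective = (year < capYears) ? year : capYears;  // no further growth after year 20
            std::printf("year %2d: %.0f MB\n", year, startMB * std::pow(growth, effective));
        }
        return 0;
    }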



Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: TonyT on October 14, 2014, 04:08:39 PM
Ah, this is where the big boys post...ohh.  Impressive.

As for little me, I am downloading the entire blockchain (> 50 GB, I guess) in a Third World country for my Armory client, and it's fun but has an experimental feel to it.  Even with a 1.5 Mbps internet connection, it's close to 24 hours and I'm only two-thirds done.  I understand that subsequent incremental downloads of the blockchain should be a lot quicker and smaller once the initial download is finished.  I do understand, however, that Bitcoin transactions can take 1 hour to verify, which is probably related to the size of the blockchain.  The Bobos in Paradise (upper middle class) in the developed countries will not like that; for those off the grid this is a minor quibble.

As for compression of the blockchain, it's amazing what different algorithms can do.  For the longest time the difference between WinZip and WinRAR was trivial, then came 7-Zip, and using whatever algorithm that author uses, the shrinkage is dramatically better.  I can now compress a relational database much more using 7-Zip than WinZip on a Windows platform.  But there must be some tradeoff; I imagine 7-Zip is more resource-intensive and hence should take longer (though I've not seen this).

TonyT


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 14, 2014, 08:24:35 PM
As I think it through 50% per year may not be aggressive.

Drilling down into the problem we find the last mile is the bottleneck in bandwidth:

http://en.wikipedia.org/wiki/Last_mile

That page is a great read/refresher for this subject, but basically:

Quote
The last mile is typically the speed bottleneck in communication networks; its bandwidth limits the bandwidth of data that can be delivered to the customer. This is because retail telecommunication networks have the topology of "trees", with relatively few high capacity "trunk" communication channels branching out to feed many final mile "leaves". The final mile links, as the most numerous and thus most expensive part of the system, are the most difficult to upgrade to new technology. For example, telephone trunklines that carry phone calls between switching centers are made of modern optical fiber, but the last mile twisted pair telephone wiring that provides service to customer premises has not changed much in 100 years.

I expect Gavin's great link to Nielsen's Law of Internet Bandwidth (http://www.nngroup.com/articles/law-of-bandwidth/) is only referencing copper wire lines. Nielsen's experience, which has been updated through this year (and continues to be in line with his law), tops out at 120 Mbps in 2014. Innovation allowing increases in copper lines is likely near its end, although DSL is the dominant broadband access technology globally according to a 2012 study (http://point-topic.com/wp-content/uploads/2013/02/Sample-Report-Global-Broadband-Statistics-Q2-2012.pdf).

The next step is fiber to the premises (http://en.wikipedia.org/wiki/Fiber_to_the_x#Fiber_to_the_premises). A refresher on fiber-optics communication:

Quote
Fiber-optic communication is a method of transmitting information from one place to another by sending pulses of light through an optical fiber. The light forms an electromagnetic carrier wave that is modulated to carry information. First developed in the 1970s, fiber-optic communication systems have revolutionized the telecommunications industry and have played a major role in the advent of the Information Age. Because of its advantages over electrical transmission, optical fibers have largely replaced copper wire communications in core networks in the developed world. Optical fiber is used by many telecommunications companies to transmit telephone signals, Internet communication, and cable television signals. Researchers at Bell Labs have reached internet speeds of over 100 petabits per second using fiber-optic communication.

The U.S. has one of the highest ratios of Internet users to population, but is far from leading the world in bandwidth. Being first in technology isn't always advantageous (see iPhone X vs iPhone 1). Japan leads FTTP with 68.5 percent penetration of fiber-optic links, with South Korea next at 62.8 percent. The U.S. by comparison is 14th place with 7.7 percent. Similar to users leapfrogging to mobile phones for technology driven services in parts of Africa, I expect many places to go directly to fiber as Internet usage increases globally.

Interestingly, fiber is a future-proof technology in contrast to copper, because once laid future bandwidth increases can come from upgrading end-point optics and electronics without changing the fiber infrastructure.

So while it may be expensive to initially deploy fiber, once it's there I foresee deviation from Nielsen's Law to the upside. Indeed, in 2012 Wilson Utilities, located in Wilson, North Carolina, rolled out their FTTH (fiber to the home) service with speed offerings of 20/40/60/100 megabits per second. In late 2013 they achieved 1 gigabit fiber to the home.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: trout on October 14, 2014, 09:16:17 PM
It doesn't matter how high the physical limit is: with an exponential-growth rule (any x% per year) it is going to be reached and exceeded, whereupon keeping the rule is as good as making the max size infinite.


why not linear growth, like  +n MB per block halving, or quadratic like +n MB per n'th block halving?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Gavin Andresen on October 14, 2014, 10:07:23 PM
why not linear growth, like  +n MB per block halving, or quadratic like +n MB per n'th block halving?

Because network bandwidth, CPU, main memory, and disk storage (the potential bottlenecks) are all growing exponentially right now, and are projected to continue growing exponentially for the next couple decades.

Why would we choose linear growth when the trend is exponential growth?

Unless you think we should artificially limit Bitcoin itself to linear growth for some reason. Exponential growth in number of users and usage is what we want, yes?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Syke on October 14, 2014, 11:45:38 PM
To answer your question of What would also happen if the block size were increased to 1 GB tomorrow is the introduction of new attack vectors, which if exploited would require intervention to resolve by miners, and development.

Like what? What "new" attack vectors? It is already quite cheap to attack the current 1 MB blocksize. What would it cost to attack a 1 GB blocksize vs the current 1 MB blocksize?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 15, 2014, 12:18:33 AM
To answer your question of What would also happen if the block size were increased to 1 GB tomorrow is the introduction of new attack vectors, which if exploited would require intervention to resolve by miners, and development.

Like what? What "new" attack vectors? It is already quite cheap to attack the current 1 MB blocksize. What would it cost to attack a 1 GB blocksize vs the current 1 MB blocksize?
The cost is not that significant.  Heck, the whole BTC market cap is not that significant.

If there were 6 GB block size bloat per hour?
A financial attack could do this independently.
Miners could do this free-ish.
Small miners would fail, as would all hobby miners.

Full nodes would become centralized, increased 51% risks, etc.
These are just the obvious.  No more decentralisation for Bitcoin.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Syke on October 15, 2014, 05:51:07 AM
The cost is not that significant.  Heck, the whole BTC market cap is not that significant.

If there were 6 GB block size bloat per hour?
A financial attack could do this independently.
Miners could do this free-ish.
Small miners would fail, as would all hobby miners.

Full nodes would become centralized, increased 51% risks, etc.
These are just the obvious.  No more decentralisation for Bitcoin.

From the wiki:

Quote
Note that a typical transaction is 500 bytes, so the typical transaction fee for low-priority transactions is 0.1 mBTC (0.0001 BTC), regardless of the number of bitcoins sent.

To spam the 1 MB blocksize takes roughly .2 BTC per block, or 1.2 BTC per hour. That's only $500 per hour.

To spam a 1 GB blocksize takes roughly 200 BTC per block, or 1200 BTC per hour. That's $500,000 per hour!

A 1 GB blocksize is far more costly to attack. We could increase the blocksize to 1 GB now and nothing would happen because there aren't that many transactions to fill such blocks.
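
The arithmetic behind those figures (a minimal sketch; the ~$400/BTC exchange rate is an assumption chosen to roughly match the quoted dollar amounts):

Code:
    // Spam cost from the wiki figures above: ~500-byte transactions at the
    // 0.0001 BTC minimum fee, six blocks per hour.
    #include <cstdio>

    int main() {
        const double txBytes       = 500.0;
        const double feeBTC        = 0.0001;
        const double usdPerBTC     = 400.0;   // assumed 2014-era exchange rate
        const double blocksPerHour = 6.0;

        const double sizesMB[] = {1.0, 1000.0};   // 1 MB vs 1 GB blocks
        for (double blockMB : sizesMB) {
            double txPerBlock = blockMB * 1e6 / txBytes;
            double btcPerHour = txPerBlock * feeBTC * blocksPerHour;
            std::printf("%7.0f MB blocks: %.1f BTC/block, %.0f BTC/hour (~$%.0f/hour)\n",
                        blockMB, txPerBlock * feeBTC, btcPerHour, btcPerHour * usdPerBTC);
        }
        return 0;
    }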


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: painlord2k on October 15, 2014, 01:45:06 PM

To spam the 1 MB blocksize takes roughly .2 BTC per block, or 1.2 BTC per hour. That's only $500 per hour.

To spam a 1 GB blocksize takes roughly 200 BTC per block, or 1200 BTC per hour. That's $500,000 per hour!

A 1 GB blocksize is far more costly to attack. We could increase the block size to 1GB now and nothing would happen because there aren't that many transactions to fill such blocks.


I get a current cost of $252/hour to spam 1MB blocks --> $252,000/hour to spam 1GB blocks.
With 1 MB blocks, if an attacker spams the blocks, the users have no way to counter the attack by raising the fees they pay. To move the cost for the attacker to $252,000/hour they would have to pay $10 per transaction.
With 1 GB blocks, if the attacker spams the blocks, the users just need to move from 1 cent to 2 cents and the cost of the attack moves from $252K to $504K per hour. At 4 cents per transaction it becomes $1M per hour.

Remember, the cost of spamming the blocks goes directly into the pockets of the miners, so they can reinvest the money in better bandwidth and storage and move to 2GB blocks, doubling the cost for the attacker.
At $1M per hour that is $100M in four days, $750M in a month, and $8.76 billion per year (more than ten times the miners' income today)


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: trout on October 15, 2014, 05:05:01 PM
why not linear growth, like  +n MB per block halving, or quadratic like +n MB per n'th block halving?

Why would we choose linear growth when the trend is exponential growth?


Because exponential growth is unsustainable, it is bound to cap at some point in the near future.
We have no idea at what constant it will reach saturation. Instead  we can try
 a slow growth of the parameter, knowing that it will surpass any constant and thus probably
catch up with the real limit at some point, and hoping that the growth is slow enough to be
at most a minor nuisance after that.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 15, 2014, 05:56:05 PM
Because exponential growth is unsustainable

Not inherently. It depends on the rate of growth and what is growing. For example, a 1% per year addition to bandwidth is exceedingly conservative, based on historical evidence.

it is bound to cap at some point in the near future.

Define 'near future'. Is that 5 years, 10 years, 40? And what makes you say that? It's easy to make a general unsupported statement. Don't be intellectually lazy. Show the basis for your reasoning, please.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: trout on October 15, 2014, 06:23:31 PM
Because exponential growth is unsustainable

Not inherently. It depends on the rate of growth and what is growing. For example, a 1% per year addition to bandwidth is exceedingly conservative, based on historical evidence.

it is bound to cap at some point in the near future.


physical parameters have physical limits, which are constants.
So unbounded growth is unsustainable. Even linear growth.
However, with less than exponential growth one can expect it to be negligible
from some point on (that is, less than x% per year for any x).

Looking at the past data and just extrapolating the exponent one sees is myopic
reasoning: the exponential growth is only due to the novelty of the given technology.
It will stop when saturation is reached, that is, when the physical limit of the parameter
in question is close.

If you want a concrete example, look at CPU clock speed growth over the last few decades.

Quote
Define 'near future'. Is that 5 years, 10 years, 40? And what makes you say that? It's easy to make a general unsupported statement. Don't be intellectually lazy. Show the basis for your reasoning, please.
I'm not making predictions about constants that we don't know; but when speaking
about exponential growth it is not even necessary.  Want to know how fast the exponent
grows? Take your 50% growth, and just out of curiosity see for which n your (1.5)^n exceeds
the number of atoms in the universe. Gives some idea.  Yes, you can put 1% or (1.01)^n; the difference is
not important.
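
Working that thought experiment through (a minimal check; 10^80 is the usual rough estimate for the number of atoms in the observable universe):

Code:
    // Smallest n for which 1.5^n exceeds ~10^80.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double atoms = 1e80;                    // rough order-of-magnitude estimate
        double n = std::log(atoms) / std::log(1.5);   // roughly 454
        std::printf("1.5^n exceeds 1e80 at n ~= %.0f\n", std::ceil(n));
        return 0;
    }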

Of course one can say, let's put it 50% per year until the bandwidth stops growing that fast,
and then we fork again. But this only postpones the problem.  Trying to predict now  exactly when this happens, and to  program for it now, seems futile.





Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Gavin Andresen on October 15, 2014, 06:34:45 PM
Of course one can say, let's put it 50% per year until the bandwidth stops growing that fast,
and then we fork again. But this only postpones the problem.  Trying to predict now  exactly when this happens, and to  program for it now, seems futile.

Okey dokey.  My latest straw-man proposal is 40% per year growth for 20 years. That seems like a reasonable compromise based on current conditions and trends.

You seem to be looking hard for reasons not to grow the block size-- for example, yes, CPU clock speed growth has stopped. But number of cores put onto a chip continues to grow, so Moore's Law continues.  (and the reference implementation already uses as many cores as you have to validate transactions)

PS: I got positive feedback from a couple of full-time, professional economists on my "block size economics" post, it should be up tomorrow or Friday.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: trout on October 15, 2014, 07:39:13 PM
Of course one can say, let's put it 50% per year until the bandwidth stops growing that fast,
and then we fork again. But this only postpones the problem.  Trying to predict now  exactly when this happens, and to  program for it now, seems futile.

Okey dokey.  My latest straw-man proposal is 40% per year growth for 20 years. That seems like a reasonable compromise based on current conditions and trends.

You seem to be looking hard for reasons not to grow the block size-- for example, yes, CPU clock speed growth has stopped. But number of cores put onto a chip continues to grow, so Moore's Law continues.  (and the reference implementation already uses as many cores as you have to validate transactions)

Actually, I'm not looking for reasons not to grow the block size: I  suggested sub-exponential growth instead, like, for example, quadratic (that was a serious suggestion).

About  the 40% over the 20 years - what if you overshoot, by, say, 10 years?
And as a result of 40% growth over the 10 extra years the max block size grows so much
that it's effectively infinite? ( 1.4^10 ~ 30). The point being, with an exponent it's too easy to
overshoot. Then if you want to solve the resulting problem by another fork, it may be much
harder to reach a consensus, since the problem will be of a very different nature (too much centralization
vs too expensive transactions).



Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 15, 2014, 08:14:08 PM
I'm not making predictions on constants that we don't know; but when speaking
about exponential growth it  is not even necessary.  Want to know how fast the exponent
growth? Take your 50% growth, and just out of curiosity  see for which n your (1.5)^n exceeds
the number of atoms in the universe. Gives some idea.

But the proposal isn't to exceed the number of atoms in the universe. It's to increase the block size for 20 years and then stop. If we do that starting with a 20MB block at 50% per year, we arrive at 44,337 MB after 20 years. That's substantially under the number of atoms in the universe.

The point being, with an exponent it's too easy to overshoot.

How so? You can know exactly what value each year yields. It sounds like you're faulting exponents for exponents' sake. Instead, give the reason you feel the resulting values are inappropriate. Here they are:

1: 20
2: 30
3: 45
4: 68
5: 101
6: 152
7: 228
8: 342
9: 513
10: 769
11: 1153
12: 1730
13: 2595
14: 3892
15: 5839
16: 8758
17: 13137
18: 19705
19: 29558
20: 44337
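
Those values follow directly from 20MB compounded at 50% per year (a minimal sketch reproducing the list above):

Code:
    // Reproduces the schedule above: 20MB in year 1, growing 50% per year.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double startMB = 20.0;
        for (int year = 1; year <= 20; ++year)
            std::printf("%2d: %.0f\n", year, startMB * std::pow(1.5, year - 1));
        return 0;
    }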


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 15, 2014, 08:53:18 PM

To spam the 1 MB blocksize takes roughly .2 BTC per block, or 1.2 BTC per hour. That's only $500 per hour.

To spam a 1 GB blocksize takes roughly 200 BTC per block, or 1200 BTC per hour. That's $500,000 per hour!

A 1 GB blocksize is far more costly to attack. We could increase the block size to 1GB now and nothing would happen because there aren't that many transactions to fill such blocks.


I get a current cost of $252/hour to spam 1MB blocks --> $252,000/hour to spam 1GB blocks.
With 1 MB blocks, if an attacker spams the blocks, the users have no way to counter the attack by raising the fees they pay. To move the cost for the attacker to $252,000/hour they would have to pay $10 per transaction.
With 1 GB blocks, if the attacker spams the blocks, the users just need to move from 1 cent to 2 cents and the cost of the attack moves from $252K to $504K per hour. At 4 cents per transaction it becomes $1M per hour.

Remember, the cost of spamming the blocks goes directly into the pockets of the miners, so they can reinvest the money in better bandwidth and storage and move to 2GB blocks, doubling the cost for the attacker.
At $1M per hour that is $100M in four days, $750M in a month, and $8.76 billion per year (more than ten times the miners' income today)
Theory and practice diverge here.
A miner can put as many transactions as they like in a block with no fees.
The cost is then replicated across every full node which must store it in perpetuity.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 15, 2014, 09:01:05 PM
Perhaps I am stating the obvious:

1MB/10m = 1,667B/s

Do not try running Bitcoin on a system with less bandwidth, i.e. even 9600 baud isn't enough.  Hmm, what does happen?  Do peers give up trying to catch it up?

A (the?) serious risk of continuing with a block size that is too small: if/when the block size bottlenecks Bitcoin, the backlog of transactions will accumulate.  If the inflow doesn't subside for long enough, the backlog will grow without limit until something breaks; and besides, who wants transactions sitting in some queue for ages?

What functional limit(s) exist constraining the block size?  2MB, 10MB, 1GB, 10GB, 1TB, 100TB, at some point something will break.  Let's crank up the size on the testnet until it fails just to see it happen.

The alternative to all this is to reduce the time between blocks.  Five minutes between blocks gives us the same thing as jumping up to 2MB.
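
The equivalence is just throughput = max size / block interval (a minimal sketch):

Code:
    // Capacity is max size divided by block interval, so 2MB every 10 minutes
    // and 1MB every 5 minutes provide the same throughput.
    #include <cstdio>

    int main() {
        struct Case { const char* label; double mb; double minutes; };
        const Case cases[] = {
            {"1MB / 10min", 1.0, 10.0},
            {"2MB / 10min", 2.0, 10.0},
            {"1MB /  5min", 1.0,  5.0},
        };
        for (const Case& c : cases)
            std::printf("%s -> %.0f bytes/s\n", c.label, c.mb * 1e6 / (c.minutes * 60.0));
        return 0;
    }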

Why do we fear hard forks?  They are a highly useful tool/technique.  We do not know for sure what the future will bring.  When the unexpected comes then we must trust those on the spot to handle the issue.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: trout on October 15, 2014, 09:10:57 PM

10: 769
...
20: 44337

so if the bandwidth growth happens to stop in 10 years, then in 20 years you end up
with a max block of 44337 MB whereas the "comfortable" size (if we consider 1MB comfortable
right now) is only 769 MB.
I call that "easy to overshoot" because predicting technology decades ahead is hard,
and the difference between these numbers is huge.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: solex on October 15, 2014, 09:35:58 PM
A miner can put as many transactions as they like in a block with no fees.

This is solved by implementing IBLT as the standard block transmission method, although this is not a short-term goal.

A miner can get his or her IBLT blocks accepted only if the vast majority of the transactions in them are already known to, and accepted as sensible by, the majority of the network. It shifts the pendulum of power back towards all the non-mining nodes, because miners must treat the consensus tx mempool as work they are obliged to do. It also allows for huge efficiency gains, shifting the bottleneck from bandwidth to disk storage, RAM and CPU (which already have a much greater capacity). In theory, 100MB blocks which get written to the blockchain can be sent using only 1 or 2MB messages on the network. I don't think many people appreciate how fantastic this idea is.

The debate here is largely being conducted on the basis that the existing block transmission method is going to remain unimproved. This is not the case, and a different efficiency method, tx hashes in relayed blocks, is already live.

Okey dokey.  My latest straw-man proposal is 40% per year growth for 20 years. That seems like a reasonable compromise based on current conditions and trends.

Gavin, I hope you proceed with what you think is best.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 15, 2014, 09:45:09 PM
Why do we fear hard forks?  They are a highly useful tool/technique.  We do not know for sure what the future will bring.  When the unexpected comes then we must trust those on the spot to handle the issue.
From a security perspective, the more useful something is, the more risk it tends to have.
Hard forks today are much easier than they will be later: there are only a couple million systems to update simultaneously, with an error-free code change and no easy back-out process.
Later there will hopefully be a few more systems, with more functionality and complexity.
This is the reason I maintain hope that a protocol can be designed to accommodate the needs of the future with less guesswork/extrapolation.  This is not an easy proposition, and it is not one most are accustomed to developing for.  It is not enough to make something that works; we need something that can't be broken in an unpredictable future.

Whatever the result of this, we limit the usability of Bitcoin to some segment of the world's population and limit the use cases.
2.0 protocols have larger transaction sizes.  Some of this comes down to how the revenue gets split, with whom and when.  Broadly the split is between miners capitalizing on the scarce resource of block size to exact fees, and the Bitcoin protocol users who are selling transactions.

Block rewards are mapped out to 2140.  If we are looking at 10-20 years ahead only, I think we can still do better.

If we start with Gavin's proposal of a target increase of 50% per year, but make this increase sensitive to the contents of the block chain (fee amounts, number of transactions, transaction sizes, etc.) and adjust the maximum size up or down based on actual usage, need and network capability, we may get a result that can survive as well as accommodate the changes that we are not able to predict.

50% may be too high, it may be too low, 40% ending after a period likewise, maybe high maybe low.
The problem is that we do not know today what the future holds, these are just best guesses and so they are guaranteed to be wrong.

Gavin is on the payroll of TBF, which primarily represents the protocol users and, somewhat less, the miners.  This is not to suggest that his loyalties are suspect; I start with the view that we all want what is best for Bitcoin, but I recognize that he may simply be getting more advice and concern from some interests and less from others.  All I want is the best result we can get, and to have the patience to wait for that.  After all, how often do you get to work on something that can change the world?  It is worth the effort to try for the best answer.

JustusRanvier had a good insight on needing bandwidth price data from the future, which is not available in the block chain today, but ultimately may be with 2.0 oracles.  However depending on those for the protocol would introduce other new vulnerabilities.  The main virtue of Gavin's proposal is its simplicity, its main failures are that it is arbitrary, insensitive to changing conditions and inflexible.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 15, 2014, 10:03:51 PM
so if the bandwith growth happens to  stop in 10 years

Why would it? Why on earth would it???

Look, Jakob Nielsen reports his bandwidth in 2014 is 120Mbps, which is around the 90Mbps figure Gavin mentions (https://gist.github.com/gavinandresen/e20c3b5a1d4b97f79ac2) for his own calculations. Let's use 100Mbps as a "good" bandwidth starting point which yields:

1: 100
2: 150
3: 225
4: 338
5: 506
6: 759
7: 1139
8: 1709
9: 2563
10: 3844
11: 5767
12: 8650
13: 12975
14: 19462
15: 29193
16: 43789
17: 65684
18: 98526
19: 147789
20: 221684

Researchers at Bell Labs just set a record (http://www.pcmag.com/article2/0,2817,2460682,00.asp) for data transmission over copper lines of 10Gbps. So we can use that as a bound for currently existing infrastructure in the U.S. We wouldn't hit that until year 12 above, and that's copper.

Did you not read my earlier post (https://bitcointalk.org/index.php?topic=815712.msg9202223#msg9202223) on society's bandwidth bottleneck, the last mile? I talk about society moving to fiber to the premises (FTTP) to upgrade bandwidth. Countries like Japan and South Korea already have installed this at over 60% penetration. The U.S. is at 7.7% and I personally saw fiber lines being installed to a city block a week ago. Researchers at Bell Labs have achieved over 100 petabits per second internet data transmission over fiber-optic lines. Do you realize how much peta is? 1 petabit = 10^15bits = 1 000 000 000 000 000 bits = 1000 terabits

That's a real world bound for fiber, and that's what we're working toward. Your fears appear completely unsubstantiated. On what possible basis, given what I've just illustrated, would you expect bandwidth to stop growing, even exponentially from now, after only 10 years?!?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Syke on October 15, 2014, 10:33:00 PM
Theory and practice diverge here.
A miner can put as many transactions as they like in a block with no fees.
The cost is then replicated across every full node which must store it in perpetuity.

And the rest of the miners are free to ignore a block like that. You have yet to convince me there's a problem. Miners can fill blocks with worthless free transactions today.

Then maybe the problem isn't a large maximum blocksize, but the allowance of unlimited feeless transactions. They are not free for the network to store in perpetuity, so why not eliminate them?

Eliminate free transactions and eliminate the maximum blocksize. Problem solved forever.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 15, 2014, 11:39:41 PM
Theory and practice diverge here.
A miner can put as many transactions as they like in a block with no fees.
The cost is then replicated across every full node which must store it in perpetuity.

And the rest of the miners are free to ignore a block like that. You have yet to convince me there's a problem. Miners can fill blocks with worthless free transactions today.

Then maybe the problem isn't a large maximum blocksize, but the allowance of unlimited feeless transactions. They are not free for the network to store in perpetuity, so why not eliminate them?

Eliminate free transactions and eliminate the maximum blocksize. Problem solved forever.
I'll let Satoshi speak to that: 
Free transactions are nice and we can keep it that way if people don’t abuse them.
Let's not break what needn't be broken in order to facilitate micropayments (the reason for this proposal in the first place, yes?).


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: 2112 on October 16, 2014, 01:03:40 AM
And the rest of the miners are free to ignore a block like that. You have yet to convince me there's a problem. Miners can fill blocks with worthless free transactions today.

Then maybe the problem isn't a large maximum blocksize, but the allowance of unlimited feeless transactions. They are not free for the network to store in perpetuity, so why not eliminate them?

Eliminate free transactions and eliminate the maximum blocksize. Problem solved forever.
How would you enforce those non-free transactions? If the transaction spammer is the miner then he can include any fee whatsoever because he pays himself. The only cost for him is that the coins are frozen for about 100 decaminutes.

Are you thinking of making the validity of the block dependent on how well the transactions in it were propagated over the network? I don't think this is going to work without a complete overhaul of the protocol.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Syke on October 16, 2014, 01:47:58 AM
I'll let Satoshi speak to that: 
Free transactions are nice and we can keep it that way if people don’t abuse them.
Lets not break what needn't be broken in order to facilitate micropayments (the reason for this proposal in the first place, yes?)

I agree with Satoshi. You pointed out that free transactions can be abused, so let's eliminate them when we adjust the max blocksize.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Syke on October 16, 2014, 01:48:46 AM
How would you enforce those non-free transactions? If the transaction spammer is the miner then he can include any fee whatsoever because he pays himself. The only cost for him is that the coins are frozen for about 100 decaminutes.

You're thinking of the current network. We're talking about a hard fork. A hard fork could require all transactions to have some sort of minimum fee included.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: 2112 on October 16, 2014, 01:58:09 AM
You're thinking of the current network. We're talking about a hard fork. A hard fork could require all transactions to have some sort of minimum fee included.
Again, what is the point of minimum fee if the transaction isn't propagated? Miner-spammer includes the fee payable to himself. What's the point of that exercise?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Syke on October 16, 2014, 02:55:42 AM
Again, what is the point of minimum fee if the transaction isn't propagated? Miner-spammer includes the fee payable to himself. What's the point of that exercise?

Ok, good point, thanks.

It would have to be a massive miner bloating only his own blocks he solves. Why would someone do that? It's a very limited and not very interesting attack as that is possible today yet doesn't happen.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: trout on October 16, 2014, 12:13:36 PM
so if the bandwith growth happens to  stop in 10 years

Why would it? Why on earth would it???

It's all about predicting the value of some constants in the future.
I've no idea what they would be in 10 years.
I'm sure there are people who have a much better idea than I do.
While (I think we'd all agree that) predicting technology decades ahead is hard,
it is not impossible that a group of specialists, after a thorough discussion, could
get the prediction about right.
Maybe we should all bet on them to make it. (No sarcasm intended.)

However it seems reasonable to me to try to find a solution that involves
predicting fewer constants, and that minimizes the impact of not getting those few
constants right.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 16, 2014, 01:05:04 PM
How would you enforce those non-free transactions? If the transaction spammer is the miner then he can include any fee whatsoever because he pays himself. The only cost for him is that the coins are frozen for about 100 decaminutes.

You're thinking of the current network. We're talking about a hard fork. A hard fork could require all transactions to have some sort of minimum fee included.

Either we've gone through the looking glass, or else the goal is that Bitcoin should fail and some alt coin take its place?
Why hard fork Bitcoin to enable microtransactions if only to make them too expensive, as well as add risk and cost and also remove functionality in the process?

Gavin's 2nd proposal also seems worse than the first by the arbitrariness factor.  x20 size first year, for years two through ten x1.4, then stop.  

Is there some debate method outside Lewis Carroll where you get increasingly absurd until folks stop talking with you, and then you declare victory?
Lets stop this painting-the-roses-red stuff and get back to serious discussion if we want to increase the block size limit at all.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 16, 2014, 06:03:53 PM
While (I think we'd all agree  that) predicting technology decades ahead is hard,
 it is not impossible that a group of specialists,  after a thorough discussion, could
get the prediction about right.

I linked you to the report of Bell Labs achieving 10Gbps over copper wire. Here is the link to them achieving 100 petabits per second over fiber in 2009:

http://www.alcatel-lucent.com/press/2009/001797

Quote
This transmission experiment involved sending the equivalent of 400 DVDs per second over 7,000 kilometers, roughly the distance between Paris and Chicago.

These are demonstrated capacities for these two mediums. The only limiting factors for achieving such rates for individual consumers are physical and economic considerations for building out the infrastructure. Nonetheless the technologies for achieving exponential increases in bandwidth over current offerings are proven. Achieving these rates in practice on a scale coinciding with historical exponential growth of 50% annually, which does take into consideration economic and physical realities, seems well within reason. I'm sure telecommunications specialists would agree.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Cubic Earth on October 16, 2014, 08:01:55 PM
Either we've gone through the looking glass, or else the goal is that Bitcoin should fail and some alt coin take its place?
Why hard fork Bitcoin to enable microtransactions if only to make them too expensive, as well as add risk and cost and also remove functionality in the process?

Gavin's 2nd proposal also seems worse than the first by the arbitrariness factor.  x20 size first year, for years two through ten x1.4, then stop.  

1) It's not about micro-transactions.  It's about the network having enough capacity to handle normal transactions (let's say over $0.01) as adoption grows.

2) It's not arbitrary.  Gavin's revised proposal is better than the first because it is more finely tuned to match the current and projected technological considerations.  Remember, the point of the proposal is not to maximize miner revenue or create artificial scarcity.  It is to allow the network to grow as fast as possible while still keeping full-node / solo mining ability within the reach of the dedicated home user.  That is not an economic question, but rather a technical one.  Gavin and the rest of the core devs are computer experts and are as well equipped to make guesses about bandwidth and computer power growth as anyone.

Let's look at each of the three phases of Gavin's revised proposal.  Step one: raise the MaxBlockSize to 20MB as soon as possible.  That would be a maximum of 87GB per month of chain growth, or about 1 TB per year - easy to store on consumer equipment.  Using myself as an example, I have a 20Mbps cable connection, which would actually be able to handle 1.4 GB every 10 minutes, so 20 MB blocks would utilize just 1/70th of my current bandwidth.  I think most of us would agree 20MB blocks would not squeeze out interested individuals.
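
For anyone checking the arithmetic, a quick sketch of those figures (raw numbers, before protocol overhead, which is why they come out slightly more generous than the 1.4 GB and 1/70th quoted above):

Code:
#include <cstdio>

// Back-of-the-envelope check of the 20 MB phase-one figures.
int main()
{
    const double blockMB      = 20.0;    // proposed maximum block size
    const double blocksPerDay = 144.0;   // one block every 10 minutes
    const double linkMbps     = 20.0;    // example cable connection

    double gbPerMonth = blockMB * blocksPerDay * 30.0 / 1000.0;   // ~86 GB
    double tbPerYear  = blockMB * blocksPerDay * 365.0 / 1e6;     // ~1.05 TB
    double gbPer10Min = linkMbps / 8.0 * 600.0 / 1000.0;          // ~1.5 GB
    double fraction   = (blockMB / 1000.0) / gbPer10Min;          // ~1/75

    std::printf("chain growth: %.0f GB/month, %.2f TB/year\n", gbPerMonth, tbPerYear);
    std::printf("a %.0f Mbps link moves %.1f GB per 10 minutes; a %.0f MB block is 1/%.0f of that\n",
                linkMbps, gbPer10Min, blockMB, 1.0 / fraction);
    return 0;
}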

Phase 2 is 40% yearly growth of the MaxBlockSize.  That seems entirely reasonable considering expected improvements in computers and bandwidth.

Phase 3 is stopping the pre-programmed growth after 20 years.  This recognizes that nothing can grow forever at 40%, and that our ability to predict the future diminishes the farther out we look.  Also, let's imagine that in years 16-20 computational resources only grow by 30% per year, and the network becomes increasingly centralized.  After year 20 network capacity would freeze, but computer speed growth would not, so the forces of decentralization would have a chance to catch up.

Gavin - I hope you have enough support to implement this.  You are the best chance for reaching consensus.  And thanks for being willing to lobby on its behalf.  100% consensus is impossible, but 75% - 80% should be enough to safely move forward.

Everyone else - remember that without consensus we are stuck at 1 MB blocks.  Gavin's proposal doesn't have to be perfect for it to still be vastly better than the status quo.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 16, 2014, 09:29:38 PM
While (I think we'd all agree  that) predicting technology decades ahead is hard,
 it is not impossible that a group of specialists,  after a thorough discussion, could
get the prediction about right.

I linked you to the report of Bell Labs achieving 10Gbps over copper wire. Here is the link to them achieving 100 petabits per second over fiber in 2009:

http://www.alcatel-lucent.com/press/2009/001797

Quote
This transmission experiment involved sending the equivalent of 400 DVDs per second over 7,000 kilometers, roughly the distance between Paris and Chicago.

These are demonstrated capacities for these two mediums. The only limiting factors for achieving such rates for individual consumers are physical and economic considerations for building out the infrastructure. Nonetheless the technologies for achieving exponential increase in bandwidth over current offerings is proven. Achieving these rates in practice on a scale coinciding with historical exponential growth of 50% annually, which does take into consideration economic and physical realities, seems well within reason. I'm sure telecommunications specialists would agree.

As a telecommunication specialist, No. I do not agree.

Sure, we were also able to get x.25 and x.75 telecom to run over barbed wire, in the lab.  (There are places in the world that still use these protocols, some of which would deeply benefit from bitcoin in their area.)
The logistical challenges of implementation are not what you find in the lab.
This stuff has to go out in environments where someone backs a truck up to a cross-country line so they can cut it and drive off with a few miles of copper to sell as scrap.  We live in the world, not in the lab.

Designing something to work and designing to not fail are entirely different endeavors and someone qualified for one is not necessarily qualified to even evaluate the other.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Gavin Andresen on October 16, 2014, 09:50:58 PM
Designing something to work and designing to not fail are entirely different endeavors and someone qualified for one is not necessarily qualified to even evaluate the other.
Pure coincidence, but I had lunch today with a local developer who will be putting up a building in downtown Amherst. They are planning on running fiber to the building, because they want to build for the future and the people they want to sell to (like me in a few years, when we downsize after my kids are in college) want fast Internet.

If I gaze into my crystal ball...  I see nothing but more and more demand for bandwidth.

We've got streaming Netflix now, at "pretty good" quality.  We'll want enough bandwidth to stream retina-display-quality to every family member in the house simultaneously.

Then we'll want to stream HD 3D surround video to our Oculus Rift gizmos, which is probably another order of magnitude in bandwidth. To every member of the family, simultaneously. While our home security cameras stream to some security center off-site that is storing it as potential evidence in case of burglary or vandalism....

Then... who knows? Every prediction of "this will surely be enough technology" has turned out to be wrong so far.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 16, 2014, 09:59:59 PM
Then... who knows? Every prediction of "this will surely be enough technology" has turned out to be wrong so far.
We agree.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 17, 2014, 01:14:14 AM
Sure, we were also able to get x.25 and x.75 telecom to run over barbed wire, in the lab.  (There are places in the world that still use these protocols, some of which would deeply benefit from bitcoin in their area.)
The logistical challenges of implementation is not what you find in the lab.  
This stuff has to go out in environments where someone backs up their truck into a cross country line so they can cut it and drive off with a few miles of copper to sell as scrap.  We live in the world, not in the lab.

We're in luck then, because one advantage of fiber lines over copper is that they're not good for anything other than telecom :)

I'm no telecommunications specialist, but do have an electronics engineering background. Raise some issue with fundamental wave transmission and maybe I can weigh in. My understanding is it's easier to install fiber lines, for example, because there is no concern over electromagnetic interference. Indeed, the fiber lines I witnessed being installed a week ago were being strung right from power poles.

However, is such theoretical discussion even necessary? We have people being offered 2Gbps bandwidth (http://www.theverge.com/2013/4/15/4226428/sony-so-net-2gbps-download-internet-tokyo-japan) over fiber not in theory but in practice in Japan, today.

That's already orders of magnitude over our starting bandwidth numbers. I agree with Gavin that demand for more bandwidth is inevitable. It's obvious all networks are converging - telephone, television, radio, internet. We'll eventually send all our data over the internet, as we largely do now, but with ever increasing bandwidth usage. To imagine progress in technology will somehow stop for no apparent reason, when history is chock full of people underestimating the technological capacity we actually end up with, is not only shortsighted, it borders on unbelievable.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 17, 2014, 02:16:06 AM
Sure, we were also able to get x.25 and x.75 telecom to run over barbed wire, in the lab.  (There are places in the world that still use these protocols, some of which would deeply benefit from bitcoin in their area.)
The logistical challenges of implementation is not what you find in the lab.  
This stuff has to go out in environments where someone backs up their truck into a cross country line so they can cut it and drive off with a few miles of copper to sell as scrap.  We live in the world, not in the lab.

We're in luck then, because one advantage of fiber lines over copper is they're not good used for anything other than telecom :)

I'm no telecommunications specialist, but do have an electronics engineering background. Raise some issue with fundamental wave transmission and maybe I can weigh in. My understanding is it's easier to install fiber lines, for example, because there is no concern over electromagnetic interference. Indeed, the fiber lines I witnessed being installed a week ago were being strung right from power poles.

However, is such theoretical discussion even necessary? We have people being offered 2Gbps bandwidth (http://www.theverge.com/2013/4/15/4226428/sony-so-net-2gbps-download-internet-tokyo-japan) over fiber not in theory but in practice in Japan, today.

That's already orders of magnitude over our starting bandwidth numbers. I agree with Gavin that demand for more bandwidth is inevitable. It's obvious all networks are converging - telephone, television, radio, internet. We'll eventually send all our data over the internet, as we largely do now, but in ever increasing bandwidth usage. To imagine progress in technology will somehow stop for no apparent reason, when history is chock full of people underestimating what technological capacity we actually experience is not only shortsighted, it borders unbelievable.

Perhaps few disagree that Bitcoin can be improved by a plan for block size maximum adjustment.  My issues with the proposals are less about what they achieve (a good thing) than about what they don't achieve (preventing this from happening again in the future).

There are myriad external realities that we can not know about.  The development of the telecom technology is perhaps less the issue than what the world has in store for us in the coming decades.  I don't know, and no one else does either, but that shouldn't stop us from striving to achieve what has not been done before.

Undersea cables are cut accidentally and by hostile actions; economic meltdowns and military conflicts halt or destroy deployments; plagues, natural disasters, etc.  OR new developments can accelerate everything; robots might do it all for us.  We can't know by guessing today what the right numbers will be.  We could be high or low.  I am just hoping that some more serious thought goes into avoiding the need to guess or extrapolate (an educated guess, but still a guess).  We do not have a crisis today other than some pending narrow business concerns (some of which are on the board of TBF and possibly suggested that Gavin "do something").  I am also thankful that he is doing so.  This is an effort that deserves attention (even with the other mitigating efforts already in development).  Gavin is a forward-thinking man, and is serving his role well.  We should all be glad that he is not alone in this, and that no one person has the power to make such decisions arbitrarily for others.

The difficulty adjustment algorithm works without knowing the future.  We should similarly look for a way that can also work for many generations, come what may, and save Bitcoin from as many future hard forks as we can. 

This is our duty, to our future, by virtue of us being here at this time.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: e4xit on October 17, 2014, 09:54:22 AM
Sure, we were also able to get x.25 and x.75 telecom to run over barbed wire, in the lab.  (There are places in the world that still use these protocols, some of which would deeply benefit from bitcoin in their area.)
The logistical challenges of implementation is not what you find in the lab.  
This stuff has to go out in environments where someone backs up their truck into a cross country line so they can cut it and drive off with a few miles of copper to sell as scrap.  We live in the world, not in the lab.

We're in luck then, because one advantage of fiber lines over copper is they're not good used for anything other than telecom :)

I'm no telecommunications specialist, but do have an electronics engineering background. Raise some issue with fundamental wave transmission and maybe I can weigh in. My understanding is it's easier to install fiber lines, for example, because there is no concern over electromagnetic interference. Indeed, the fiber lines I witnessed being installed a week ago were being strung right from power poles.

However, is such theoretical discussion even necessary? We have people being offered 2Gbps bandwidth (http://www.theverge.com/2013/4/15/4226428/sony-so-net-2gbps-download-internet-tokyo-japan) over fiber not in theory but in practice in Japan, today.

That's already orders of magnitude over our starting bandwidth numbers. I agree with Gavin that demand for more bandwidth is inevitable. It's obvious all networks are converging - telephone, television, radio, internet. We'll eventually send all our data over the internet, as we largely do now, but in ever increasing bandwidth usage. To imagine progress in technology will somehow stop for no apparent reason, when history is chock full of people underestimating what technological capacity we actually experience is not only shortsighted, it borders unbelievable.

Perhaps few disagree that Bitcoin can be improved by a plan for block size maximum adjustment.  My issues with the proposals are less what it achieves (a good thing) but what it doesn't (preventing this from happening in the future).

There are myriad external realities that we can not know about.  The development of the telecom technology is perhaps less the issue than what the world has in store for us in the coming decades.  I don't know, and no one else does either, but that shouldn't stop us from striving to achieve what has not been done before.

Undersea cables are cut accidentally, and by hostile actions, economic meltdowns and military conflicts halt or destroy deployments, plagues, natural disasters etc, OR new developments can accelerate everything, robots might do this all for us.  We can't know by guessing today what the right numbers will be.  We could be high or low.  I am just hoping that some more serious thought goes into avoiding the need to guess or extrapolate (an educated guess but still a guess).  We do not have a crisis today other than some pending narrow business concerns (some of which are on the board of TBF and possibly suggested that Gavin "do something").  I am also thankful that he is doing so.  This is an effort that deserves attention (even with the other mitigating efforts already in development).  Gavin is a forward thinking man, and is serving his role well.  We should be all glad that he is not alone in this, and that no one person has the power to make such decisions arbitrarily for others.

The difficulty adjustment algorithm works without knowing the future.  We should similarly look for a way that can also work for many generations, come what may, and save Bitcoin from as many future hard forks as we can. 

This is our duty, to our future, by virtue of us being here at this time.

Decreasing the block limit (note, not required block size) in the future would not be a hard fork, it would be a soft fork.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 17, 2014, 02:59:58 PM
It would seem that there could be a simple mathematical progressive increase/decrease, based on the factual block chain needs and realities of the time, that can work forever into the future.

Here is an example that can come close to Gavin's first proposal of 50% increase per year.

If the average block size of the last 2 weeks is 60-75% of the maximum, increase the maximum 1%; if >75%, increase 2%.
If the average block size of the last 2 weeks is 25-40% of the maximum, decrease the maximum 1%; if <29%, decrease 2%.

Something like this, would have no external dependencies, would adjust based on what future events may come, and won't expire or need to be changed.

These percentage numbers are ones that I picked arbitrarily.  They are complete guesses and so I don't like them any more than any other number.  This is just to create a model of the sort of thing that would be better than extrapolating.  To do even better, we can do a regression analysis of previous blocks to see where we would be now and tune it further from there.
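
As an illustration only, here is a minimal sketch of that rule, reading the sample thresholds above literally (the percentages are the arbitrary ones from the post, the function name is made up, and the stronger adjustment is taken to win where the bands overlap):

Code:
#include <numeric>
#include <vector>

// Sketch of the usage-based adjustment: called once per retarget period with
// the block sizes seen since the last adjustment, it nudges the maximum up or
// down depending on how full recent blocks were. Thresholds are the sample
// values from the post, explicitly described there as arbitrary.
double AdjustMaxBlockSize(double curMax, const std::vector<double>& recentBlockSizes)
{
    if (recentBlockSizes.empty() || curMax <= 0.0)
        return curMax;

    double avg = std::accumulate(recentBlockSizes.begin(), recentBlockSizes.end(), 0.0)
                 / recentBlockSizes.size();
    double fill = avg / curMax;              // fraction of the current maximum in use

    if (fill > 0.75)  return curMax * 1.02;  // more than 75% full: +2%
    if (fill >= 0.60) return curMax * 1.01;  // 60-75% full: +1%
    if (fill < 0.29)  return curMax * 0.98;  // under 29% full: -2%
    if (fill <= 0.40) return curMax * 0.99;  // 29-40% full: -1%
    return curMax;                           // 40-60% full: leave it alone
}

If the adjustment is applied at each two-week retarget, the extremes compound to roughly 1.01^26, about +30% per year, and 1.02^26, about +67% per year, which is the ballpark of the +50%/year being aimed for.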


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 17, 2014, 03:20:46 PM
http://en.wikipedia.org/wiki/Hysteresis


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 17, 2014, 03:28:34 PM
One never wants to see a burst of transactions sit in a queue for very long, waiting to be included in the chain.  Instead we should address the source of transactions to make sure excessive spam is precluded or excluded.  Having a maximum block size at all is an aberration, except where there is some functional restriction; in the face of such a restriction, the only reasonable course forward would be to reduce the time between blocks.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: trout on October 17, 2014, 03:36:47 PM
It would seem that there could be a simple mathematical progressive increase/decrease, which is based on the factual block chain needs and realities of the time that can work forever into the future.

Here is an example that can come close to Gavin's first proposal of 50% increase per year.

If average block size of last 2 weeks is 60-75% of the maximum, increase maximum 1%, if >75% increase 2%
If average block size of last 2 weeks is 25-40% of the maximum decrease maximum 1%, if <29% decrease 2%

Something like this, would have no external dependencies, would adjust based on what future events may come, and won't expire or need to be changed.

These percentage numbers are ones that I picked arbitrarily.  They are complete guesses and so I don't like them anymore than any other number.  This is just to create a model of the sort of thing that would be better than extrapolating.  To do even better, we can do a regression analysis of previous blocks to see where we would be now and tune it further from there.

This may be manipulable:  miners with good bandwidth can start filling the blocks to capacity, to increase the max and push miners with smaller bandwidth out of competition.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 17, 2014, 05:20:54 PM
It would seem that there could be a simple mathematical progressive increase/decrease, which is based on the factual block chain needs and realities of the time that can work forever into the future.

Here is an example that can come close to Gavin's first proposal of 50% increase per year.

If average block size of last 2 weeks is 60-75% of the maximum, increase maximum 1%, if >75% increase 2%
If average block size of last 2 weeks is 25-40% of the maximum decrease maximum 1%, if <29% decrease 2%

Something like this, would have no external dependencies, would adjust based on what future events may come, and won't expire or need to be changed.

These percentage numbers are ones that I picked arbitrarily.  They are complete guesses and so I don't like them anymore than any other number.  This is just to create a model of the sort of thing that would be better than extrapolating.  To do even better, we can do a regression analysis of previous blocks to see where we would be now and tune it further from there.

This may be manipulable:  miners with good bandwidth can start filling the blocks to capacity, to increase the max and push miners with smaller bandwidth out of competition.

Agreed.  And thank you for contributing.

It is offered as an example of the sort of thing that can work, rather than a finished product.
It is merely "better" not best.  I don't think we know of something that will work yet.
By better, I mean that Gavin gets his +50%/year, iff it is needed, and not if it isn't.  And if circumstances change, so does the limit.

If it is 100% manipulated, it is only as bad as Gavin's first proposal. (+4% or so)
That of course could only happen if miners with good bandwidth win all blocks and also want to manipulate.

If we fear manipulation, we can add anomaly dropping and exclude the 10% most extreme blocks outside of standard variance (so that fully padded and empty blocks are dropped out of the calculations).
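
A sketch of what that anomaly dropping might look like, under the assumption that "the 10% most extreme" means trimming that fraction from each end of the sorted sizes before averaging (the exact rule is one of the things the regression analysis would have to settle):

Code:
#include <algorithm>
#include <cstddef>
#include <vector>

// Trimmed average of recent block sizes: sort, drop the most extreme 10%
// from each tail, and average the rest, so fully padded and empty blocks
// carry no weight in the adjustment. The 10% figure is the example from
// the post, not a recommendation.
double TrimmedAverage(std::vector<double> sizes)
{
    if (sizes.empty())
        return 0.0;

    std::sort(sizes.begin(), sizes.end());
    std::size_t drop = sizes.size() / 10;    // 10% from each tail

    double sum = 0.0;
    std::size_t count = 0;
    for (std::size_t i = drop; i + drop < sizes.size(); ++i) {
        sum += sizes[i];
        ++count;
    }
    return sum / count;
}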

It would be good to avoid creating any perverse incentives entirely wherever possible.

And again, the percentages chosen here are samples only, arbitrarily chosen.  A regression analysis of the block chain ought be employed to determine where we would be with this sort of thing as well as how it would affect the path forward.


The point here is to allow market forces to dictate.  If some miners want to shrink block size to make transactions more precious and extract fees, others will want to get those fees and increase block size.  We want something that can work in perpetuity, not a temporary fix which may get adjusted centrally whenever the whim arises.

Our guide must be math and measurement, not central committees, no matter how smart they may be.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 17, 2014, 05:55:38 PM
I am just hoping that some more serious thought goes into avoiding the need to guess or extrapolate (an educated guess but still a guess).

It is offered as an example of the sort of thing that can work, rather than a finished product.

This is the problem.

People don't seem to realize Gavin's proposal may be the best we can do. I happen to think it is. If anyone had a better idea we'd have heard it by now. We, the entire community, have brooded on this issue for months if not years now. Here is a spoiler alert: nobody can predict the future.

Did anyone ever stop to think Bitcoin couldn't work? I mean I have, not for reasons technological, but for reasons of solving issues via consensus. Have you ever watched a three-legged human race, you know where one leg gets tied to the leg of another person? The reason they're funny is because it's hard to coordinate two separate thinking entities with different ideas on how to move forward, the result being slow or no progress and falling over. That may be our fate and progress gets harder the more legs get tied in. That's the reason for taking action sooner rather than later.

I've posted it before, but I'll say it again. I think a big reason Satoshi left is because he took Bitcoin as far as he could. With Gavin and other devs coming on-board he saw there was enough technical expertise to keep Bitcoin moving forward. I don't think he thought he had any more ironclad valuable ideas to give Bitcoin. Its fate would be up to the community/world he released it into. Bitcoin is an experiment. People don't seem to want to accept that, but it is. What I'd love to see is somebody against Gavin's proposal offer an actual debatable alternative. Don't just say, sorry it has to be 1MB blocks and as for what else, well that's not our thought problem; and don't just say no we don't want Gavin's proposal because it doesn't matter-of-factly predict the future, and as for what else, well we don't know.

Come up with something else or realize we need to take a possibly imperfect route, but one which could certainly work, so that we take some route at all.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: calim on October 17, 2014, 07:33:57 PM
I happen to think we can do better than Gavin's idea.  I like the idea of trying to come up with a solution that works with the blockchain and adapts over time instead of relying on Gavin, or NL or whomever.  The answer should be in the blockchain.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 17, 2014, 08:55:34 PM
I happen to think we can do better than Gavin's idea.  I like the idea of trying to come up with a solution that works with the blockchain and adapts over time instead of relying on Gavin, or NL or whomever.  The answer should be in the blockchain.

This.


To imagine I (or anyone) can predict the future would be engaging in hubris.
Thanks to Satoshi we do not have to predict anything, because the block chain will be there, in the future, telling us what is needed.

I've offered one option.  Heard one good criticism and responded to that with a modification that I think will resolve the concern.
Then outlined one research task to help further refine this option (regression testing with the block chain).

There is more work to be done here, that much is clear.  There are graphs to be plotted, data to be crunched, and code to be written.   There are a LOT of smart folks engaged in this, who else can step up with a critique, or spare some cycles to work on the data?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: jgarzik on October 17, 2014, 11:51:46 PM
It would seem that there could be a simple mathematical progressive increase/decrease, which is based on the factual block chain needs and realities of the time that can work forever into the future.

This can be easily gamed by stuffing transactions into the blockchain, shutting out smaller players prematurely.



Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 18, 2014, 12:03:47 AM
It would seem that there could be a simple mathematical progressive increase/decrease, which is based on the factual block chain needs and realities of the time that can work forever into the future.

This can be easily gamed by stuffing transactions into the blockchain, shutting out smaller players prematurely.
Thank you for contributing.
This was already mentioned earlier, you may have missed it.  Yes, it can possibly be gamed in the way you mention; it is just unlikely, unprofitable, and ineffective to do so.

This effect of such an "attack" is limited by
1) Anomaly dropping
2) The % of blocks won
3) The disadvantage to those that do so by requiring transmission of larger blocks
4) Even if this "attack" is performed with 100% success by all miners, the max size grows only a bit over 50% per year anyway (with the proposed numbers - so in the worst-case scenario, it is about the same as Gavin's proposal).
5) Counter-balanced, perhaps, by other miners who may want to shrink the limit and make inclusion in a block more valuable?

If you think that these factors are insufficient disincentive, and the benefits of doing such an attack are still worth it, please help us to better understand why that is?  

I maintain that I do not think we have the best answer yet, so these criticisms are valuable.  This is better than other proposals we have seen so far simply because it accommodates an unpredictable future, but IMHO it is not yet good enough for implementation.  It still needs regression testing on the previous block chain and some more game-theory analysis.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: hello_good_sir on October 18, 2014, 03:14:55 AM
Decreasing the block limit (note, not required block size) in the future would not be a hard fork, it would be a soft fork.

It won't be a soft fork, it will be an impossibility.  The miners of the future will be few in number and hostile to the ideas of bitcoin.  This is the reality that we need to design for.  Entities in control of a node will want to keep the price of maintaining a node as high as possible, so that they can control access to the information in the blockchain.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: tdryja on October 18, 2014, 03:47:35 AM
My 2uBTC on this issue:
Instead of guessing the costs of the network via extrapolation, code in a constant-cost negative feedback mechanism.  For example, similar to difficulty adjustments, if mean non-coinbase block reward > 1 BTC, increase max size.  If mean block reward < 1 BTC, decrease max size (floor of 1MB).

Here's why I think this is a long term solution.  With Bitcoin, "costs" and "value" have a very interesting relationship; currently with mining, the costs to run the network are determined by the exchange value of a bitcoin.  Long term, the block size constrains both the cost and value of the network.  By "long term", I mean 100 years from now.  Long term, there's no more coinbase reward.  So miners compete for transaction fees.  Limited block size causes transactors to compete for space in the block, driving up the fees.  An unlimited block size would, without other costs, tend to drive fees to near-zero, and then there's not enough incentive for miners to bother, and the security of the system is compromised.  That's the death spiral idea anyway, which may not actually happen, but it's a legitimate risk, and should be avoided.  The value and utility of bitcoin today has a lot to do with the probability that it will have value in 100 years.

Max block sizes doubling every two years makes them pretty much unlimited.  Capping after 20 years is also a big guess. That also extrapolates Moore's law for potentially longer than the law keeps going.  Gigabit ethernet is what, 15 years old?  And that's what every PC has now; I've never seen 10G over copper ethernet.  Reliance on everything else becoming awesome is a very fragile strategy.

An issue I have with an exponentially increasing block size, or a static block size, is that there's no feedback, and it can't respond to changes in the system.  The block size in many ways determines the value of the network. All else being equal, a network that can handle more transactions per day is more useful and more valuable.

I think that similar to the current system of mining costs determined by bitcoin value, block propagation, verification and storage should be determined by how much people are willing to pay.  If transaction fees are high, block space is scarce, and will expand.  If transaction fees are low, block space is too cheap, and the max block size will shrink.

This fixes a cost independent of the mining coinbase reward, allowing for sustainable, predictable mining revenue.  The issue is we would have to come up with a number.  What should it cost to run the bitcoin network?  1% of M0 per year?  That would be 210,000 coins per year in transaction fees to miners.  That would be about 3BTC per block.

0.5% M0 annually would be 1.5BTC per block, and so on.  This would be a ceiling cost; it could cost less, if people didn't make too many transactions, or most things happened off-blockchain, and the blocks tended back towards the 1MB floor.  It would effectively put a ceiling on the maintenance cost of the network, however; if blocks were receiving 6BTC in fees, the size would double at the next difficulty adjustment, which would tend to push total fees down.

If you wanted to get fancy you could have hysteresis and non-linearity and stuff like that but if it were up to me I'd keep it really simple and say that max block size is a linear function of the previous epoch block rewards.

This can be "gamed" in 2 ways.  It can be gamed to a limited extent by miners who want to push up the max block size.  They can pay a bunch of fees to themselves and push up that average.  I can't think of a clean way to get rid of that, but hopefully that's OK; isn't it the miners who want smaller blocks anyway?  If miners are competing for larger blocks, why would the non-mining users complain?  The only issue is one miner who wants larger blocks, and everyone else wants smaller ones.  Maybe use median instead of mean to chop out malicious miners or fat-finger giant transaction fees.

It can also be gamed the other way.  Your transaction fee is 0, but you have some off-channel account with my mining group which includes all your txs for a flat monthly rate.  This also seems unlikely; if it were more expensive that way, transactors would stop using the off-channel method and just go to the open market for transaction inclusion.  If it were cheaper, why would the miner forgo that revenue?

So if I ran this whole Bitcoin thing (which would defeat the point... :), that's what I would do.  The question is how much it should cost.  1BTC per block sounds OK, it's a nice round number.  That's 50K BTC per year for the miners.

I'd welcome comments / criticism of why having such a feedback mechanism is a good or bad idea.
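
For illustration, a minimal sketch of that feedback rule under stated assumptions: the fee target is the example 1 BTC, the adjustment runs at each difficulty epoch, the median is used as suggested to blunt self-paid fees, the response is linear in fees/target, and the per-epoch swing is capped (the cap, constants and names are mine, not part of the proposal):

Code:
#include <algorithm>
#include <cstddef>
#include <vector>

// Fees are in satoshis, sizes in bytes.
static const long long COIN = 100000000LL;
static const long long TARGET_FEES_PER_BLOCK = 1 * COIN;   // example 1 BTC target
static const long long FLOOR_BLOCK_SIZE = 1000000LL;       // 1 MB floor

// epochBlockFees holds the non-coinbase fees collected by each block in the
// previous difficulty epoch. Fees above target mean space is scarce, so the
// maximum grows; fees below target mean space is too cheap, so it shrinks,
// never below the floor.
long long NextMaxBlockSize(long long curMax, std::vector<long long> epochBlockFees)
{
    if (epochBlockFees.empty())
        return curMax;

    // Median per-block fees for the epoch.
    std::size_t mid = epochBlockFees.size() / 2;
    std::nth_element(epochBlockFees.begin(), epochBlockFees.begin() + mid,
                     epochBlockFees.end());
    long long medianFees = epochBlockFees[mid];

    // Linear response, with the per-epoch swing capped at halving/doubling.
    double scale = static_cast<double>(medianFees) / TARGET_FEES_PER_BLOCK;
    scale = std::min(std::max(scale, 0.5), 2.0);

    long long next = static_cast<long long>(curMax * scale);
    return std::max(next, FLOOR_BLOCK_SIZE);
}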


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 18, 2014, 05:26:42 AM
We should anticipate governments becoming miners; if they aren't already.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 18, 2014, 05:33:59 AM
A government with a strong enough military/police can potentially take over a miner's equipment by force/violence all in the name of supposed social good while calling it eminent domain http://en.wikipedia.org/wiki/Eminent_domain.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 18, 2014, 06:14:25 AM
My 2uBTC on this issue:
...
I'd welcome comments / criticism of why having such a feedback mechanism is a good or bad idea.
I realize transactions can come in a wide variety of sizes, so my back-of-the-envelope calculations need to be taken with a big grain of salt:

https://blockchain.info/charts/n-transactions-per-block shows around 3-Mar-2014 a peak of 618 transactions in a block (as averaged over 24 hours) & https://blockchain.info/block-index/477556 is a 396KB block with 710 transactions in it.
1BTC/710txn ~= 0.0014BTC/txn, or about $0.53 at the current exchange rate; so much for micro-transactions.
Also, 396KB/710txn ~= 558B/txn, so 1MB/558B/txn ~= 1792txn/MB.  Even 1BTC/1792txn*$377.79/BTC ~= $0.21/txn.
I think maybe 0.1BTC/block would be nice.  If the exchange rate climbs to $2000/BTC and the block size were still at 1MB, then 0.1BTC/1792txn*$2000/BTC ~= $0.11/txn, but if the block size were at 2MB then the per-transaction fee drops to $0.055 or so.
As legit transaction rates climb, presumably so does the exchange rate.  At what point does BTC decouple from fiat?
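
The same arithmetic as a small sketch, using the figures quoted above (558 bytes per average transaction, so roughly 1792 per MB; the fee budgets, block sizes and exchange rates are the ones from the post):

Code:
#include <cstdio>

// Given a per-block fee budget, a block size and an exchange rate, what does
// one transaction cost?
int main()
{
    const double avgTxBytes = 558.0;                  // 396KB block / 710 txs
    const double txPerMB    = 1000000.0 / avgTxBytes; // ~1792 tx per MB

    // { fee budget in BTC per block, block size in MB, USD per BTC }
    const double scenarios[][3] = {
        { 1.0, 1.0,  377.79 },
        { 0.1, 1.0, 2000.00 },
        { 0.1, 2.0, 2000.00 },
    };
    for (const auto& s : scenarios) {
        double feePerTxBtc = s[0] / (txPerMB * s[1]);
        std::printf("%.1f BTC/block, %.0f MB blocks @ $%.2f/BTC: ~$%.3f per tx\n",
                    s[0], s[1], s[2], feePerTxBtc * s[2]);
    }
    return 0;
}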


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: wachtwoord on October 18, 2014, 10:04:48 AM
I am just hoping that some more serious thought goes into avoiding the need to guess or extrapolate (an educated guess but still a guess).

It is offered as an example of the sort of thing that can work, rather than a finished product.

This is the problem.

People don't seem to realize Gavin's proposal may be the best we can do. I happen to think it is. If anyone had a better idea we'd have heard it by now. We, the entire community, have brooded on this issue for months if not years now. Here is a spoiler alert: nobody can predict the future.

New Liberty's unpolished prototype is already far superior to Gavin's nonsense so this is easily debunked.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 18, 2014, 10:54:10 AM
My 2uBTC on this issue:
Instead of guessing the costs of the network via extrapolation, code in a constant-cost negative feedback mechanism.  For example, similar to difficulty adjustments, if mean non-coinbase block reward > 1 BTC, increase max size.  If mean block reward < 1 BTC, decrease max size (floor of 1MB).

Here's why I think this is a long term solution.  With Bitcoin, "costs" and "value" have a very interesting relationship; currently with mining, the costs to run the network are determined by the exchange value of a bitcoin.  Long term, the block size constrains both the cost and value of the network.  By "long term", I mean 100 years from now.  Long term, there's no more coinbase reward.  So miners compete for transaction fees.  Limited block size causes transactors to compete for space in the block, driving up the fees.  An unlimited block size would, without other costs, tend to drive fees to near-zero, and then there's not enough incentive for miners to bother, and the security of the system is compromised.  That's the death spiral idea anyway, which may not actually happen, but it's a legitimate risk, and should be avoided.  The value and utility of bitcoin today has a lot to do with the probability that it will have value in 100 years.

Max block sizes doubling every two years makes them pretty much unlimited.  Capping after 20 years is also a big guess. That also extrapolates Moore's law for potentially longer than the law keeps going.  Gigabit ethernet is what, 15 years old?  And that's what every PC has now, I've never seen 10G over copper ethernet.  Reliance on everything else becoming awesome is a very fragile strategy.

An issue I have with exponentially increasing block size, or static block size, is there's no feedback, and can't respond to changes in the system.  The block size in many ways determines the value of the network. All else being equal, a network that can handle more transactions per day is more useful and more valuable.

I think that similar to the current system of mining costs determined by bitcoin value, block propagation, verification and storage should be determined by how much people are willing to pay.  If transaction fees are high, block space is scarce, and will expand.  If transaction fees are low, block space is too cheap, and the max block size will shrink.

This fixes a cost independent of the mining coinbase reward, allowing for sustainable, predictable mining revenue.  The issue is we would have to come up with a number.  What should it cost to run the bitcoin network?  1% of M0 per year?  That would be 210,000 coins per year in transaction fees to miners.  That would be about 3BTC per block.

0.5% M0 annually would be 1.5BTC per block, and so on.  This would be a ceiling cost; it could cost less, if people didn't make too many transactions, or most things happened off-blockchain, and the blocks tended back towards the 1MB floor.  It would effectively put a ceiling on the maintenance cost of the network, however; if blocks were receiving 6BTC in fees, the size would double at the next difficulty adjustment, which would tend to push total fees down.

If you wanted to get fancy you could have hysteresis and non-linearity and stuff like that but if it were up to me I'd keep it really simple and say that max block size is a linear function of the previous epoch block rewards.

This can be "gamed" in 2 ways.  It can be gamed to a limited extent by miners who want to push up the max block size.  They can pay a bunch of fees to themselves and push up that average.  I can't think of a clean way to get rid of that, but hopefully that's OK; isn't it the miners who want smaller blocks anyway?  If miners are competing for larger blocks, why would the non-mining users complain?  The only issue is one miner who wants larger blocks, and everyone else wants smaller ones.  Maybe use median instead of mean to chop out malicious miners or fat-finger giant transaction fees.

It can also be gamed the other way.  Your transaction fee is 0, but you have some off-channel account with my mining group which includes all your txs for a flat monthly rate.  This also seems unlikely; if it were more expensive that way, transactors would stop using the off-channel method and just go to the open market for transaction inclusion.  If it were cheaper, why would the miner forgo that revenue?

So if I ran this whole Bitcoin thing (which would defeat the point... :), that's what I would do.  The question is how much it should cost.  1BTC per block sounds OK, it's nice round number.  That's 50K BTC per year for the miners.

I'd welcome comments / criticism of why having such a feedback mechanism is a good or bad idea.

Thank you for this.

At first look I very much like this feedback mechanism.  I'd also considered using the non-coinbase transaction fees initially for the source data, but had abandoned the idea, perhaps prematurely.  It may be a better place to look for a mechanism to determine this.

I had dropped it for two main reasons.  
1)  I looked at the historical charts. Number of transactions per block was the closest representation I could swiftly find that approximates block size (although it ignores the effects of 2.0 transactions, which are larger; there are few of these now).  The fee chart shows much greater variation and less of the steady rise that I see as needed to enable adoption.
2)  I wasn't able to reconcile a way around needing an external value for BTC, to get at the mining cost.

Your proposal, tdryja, shows that both of my initial reasons are not sufficient to abandon the idea of using fees paid as the means of sensing the appropriate block size from the chain data.

I like this proposal foremost because it draws on the data within the block chain to self-correct for the unpredictable future.  I also like it for its simplicity, and I like that it uses a directly financial metric.
After thinking about it a bit more, I may have some useful criticisms.




Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: trout on October 18, 2014, 01:30:18 PM
My 2uBTC on this issue:
Instead of guessing the costs of the network via extrapolation, code in a constant-cost negative feedback mechanism.  For example, similar to difficulty adjustments, if mean non-coinbase block reward > 1 BTC, increase max size.  If mean block reward < 1 BTC, decrease max size (floor of 1MB).


a miner can include in his block a transaction with an arbitrarily large fee (which he gets back, of course), throwing the mean off the chart.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Gavin Andresen on October 18, 2014, 03:16:46 PM
I happen to think we can do better than Gavin's idea.  I like the idea of trying to come up with a solution that works with the blockchain and adapts over time instead of relying on Gavin, or NL or whomever.  The answer should be in the blockchain.

The answer cannot be in the blockchain, because the problem being addressed (resource usage rising too quickly so only people willing to spend tens of thousands of dollars can participate as fully validating nodes) is outside the blockchain.

You will go down the same path as the proof-of-stake folks, coming up with ever more complicated on-blockchain solutions to a problem that fundamentally involves something that is happening outside the blockchain. In this case, real-world CPU and bandwidth growth. In the POS case, proof that some kind of real-world effort was performed.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: 2112 on October 18, 2014, 03:28:19 PM
a miner can include into his block a transaction with an arbitrary large fee (which he gets back of course), throwing the mean off the chart.
What about the following modification:

a1) fold a modified p2pool protocol into the mainline protocol
a2) require that transactions mined into the mainline blockchain have to be seen in the majority of p2pool blocks
a3) p2pool then has an additional function of proof-of-propagation: at least 50% of miners have seen the tx
a4) we can then adjust the fees and incentives individually for:
a4.1) permanent storage of transactions (in the mainline blockchain)
a4.2) propagation of transactions (in the p2pool blockchain, which is ephemeral)

Right now the problem is that miners receive all fees due, both for permanent storage and for network propagation.

Another idea in the similar vein:

b1) make mining a moral equivalent of a second-price auction: the mining fees of block X accrue to the miner of block X+1
b2) possibly even replace 1 above with a higher, constant natural number N.
Late edit:
b3) reduce the coinbase maturity requirement by N
Later edit:
b4) since nowadays the fees are very low compared to the subsidy, (b3) would imply a temporary gap in global mining income. Subsidy of block X accrues to the miner of X, fees of block X accrue to the miner of block X+N.
End of edits.

Both proposals above aim to incentivize and enforce propagation of the transactions on the network and discourage self-mining of non-public transactions and self-dealing on the mining fee market.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 18, 2014, 04:06:13 PM
I happen to think we can do better than Gavin's idea.  I like the idea of trying to come up with a solution that works with the blockchain and adapts over time instead of relying on Gavin, or NL or whomever.  The answer should be in the blockchain.

The answer cannot be in the blockchain, because the problem being addressed (resource usage rising too quickly so only people willing to spend tens of thousands of dollars can participate as fully validating nodes) is outside the blockchain.

You will go down the same path as the proof-of-stake folks, coming up with ever more complicated on-blockchain solutions to a problem that fundamentally involves something that is happening outside the blockchain. In this case, real-world CPU and bandwidth growth. In the POS case, proof that some kind of real-world effort was performed.


Thank you for your contribution and criticism.

Since the difficulty adjustment already effectively assesses real-world CPU growth, I'm not ready to assume that a real-world assessment of bandwidth is impossible, as there is evidence of both in the block chain awaiting our use.
Analogies to PoS are also not proof of a negative.  

The answer may be in the block chain, and it seems the best place to look, as the block chain will be there in the future providing evidence of bandwidth usage if we can avoid breaking the Bitcoin protocol today.  

I don't need anyone to be right or wrong here so long as in the end we get the best result for Bitcoin.  I am very happy to be wrong if that means an improvement can be made.

Gavin, I remain grateful for your raising the issue publicly, and for keeping engaged in the discussion.  I do not agree that discussion on the matter ought to end, and think we can do better through continuing.

Wherever we can squeeze out arbitrary human decision through math and measurement, it is our duty to the future to do so.  The alternative is to commit our progeny to the whims and discretion of whoever is in authority in the decades to come.  As David Rabahy pointed out a few posts ago, we may not be pleased with that result.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 18, 2014, 04:41:45 PM
a miner can include into his block a transaction with an arbitrary large fee (which he gets back of course), throwing the mean off the chart.
What about the following modification:

a1) fold a modified p2ppool protocol into the mainline protocol
a2) require that transactions mined into the mainline blockchain have to be seen in the majority of p2ppool blocks
a3) p2ppool then has an additional function of proof-of-propagation: at least 50% of miners have seen the tx
a4) we can then individually adjust the fees and incentives individually for:
a4.1) permanent storage of transactions (in the mainline blockchain)
a4.2) propagation of transactions (in the p2ppool blockchain, which is ephemeral)

Right now the problem is that miners receive all fees due, both for permanent storage and for network propagation.

Another idea in the similar vein:

b1) make mining a moral equivalent of a second-price auction: the mining fees of block X accrue to the miner of block X+1
b2) possibly even replace 1 above with a higher, constant natural number N.
Late edit:
b3) reduce the coinbase maturity requirement by N
End of edit.

Both proposals above aim to incentivize and enforce propagation of the transactions on the network and discourage self-mining of non-public transactions and self-dealing on the mining fee market.

These are interesting propositions in their own right.
There is a virtue in simplicity, in that it is less likely to create perverse incentives.  (Gavin alludes to this in his critique.)
For example, adding a p2pool dependency may carry complexity risks we don't see, so by that metric the (b) series may be better than the (a) series.



Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 18, 2014, 06:27:33 PM
As I see it Bitcoin is like the U.S. government. It has made too many promises to keep. I agree with Gavin that Bitcoin has been sold as being able to serve the world's population. At the same time it has been sold as being effectively decentralized. These two things can't happen at the same time with today's technology, because bandwidth numbers (primarily) don't align with global transaction data numbers. They will work eventually, but they don't today.

The question is how to get from today to the future day when Bitcoin can handle the world's transaction needs while remaining decentralized down to technology available to average people.

We have effectively three choices.

- Do nothing and remain at 1MB blocks
- Gavin's proposal to grow transaction capacity exponentially, possibly fitting in line with Bitcoin adoption numbers
- Some algorithmic formula to determine block size which is probably more conservative than exponential growth, but less predictable

I think doing nothing is unrealistic.

I like Gavin's proposal because it can solve the issue while also being predictable. Predictability has value when it comes to money. I agree that some other algorithm using real world inputs is safer, but I wonder at what expense. In the worst case, using Gavin's proposal, there may be some risk of heavy-hitting players hogging market share from lesser miners, maybe even to the extent of becoming centralized cartels. I don't think there is a good chance of that happening, but agree it's in the realm of possibility. In that case, though, nobody would be forced to continue using Bitcoin, since it's a voluntary currency. It's easy to move to an alternative coin. Free market forces, in my mind, would solve the problem.

If we try to be as cautious as possible, seeking inputs along the way, we can probably rest assured centralization won't happen with Bitcoin. At the same time, though, the market has to continually reassess Bitcoin's transaction capacity, and therefore its value. I'm not sure how that would play out.

My question is can a majority of the community (say 70-80%) be convinced to choose one of the last two options?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: trout on October 18, 2014, 07:05:39 PM
a miner can include into his block a transaction with an arbitrary large fee (which he gets back of course), throwing the mean off the chart.
What about the following modification:

a1) fold a modified p2ppool protocol into the mainline protocol
a2) require that transactions mined into the mainline blockchain have to be seen in the majority of p2ppool blocks
a3) p2ppool then has an additional function of proof-of-propagation: at least 50% of miners have seen the tx
a4) we can then individually adjust the fees and incentives individually for:
a4.1) permanent storage of transactions (in the mainline blockchain)
a4.2) propagation of transactions (in the p2ppool blockchain, which is ephemeral)

Right now the problem is that miners receive all fees due, both for permanent storage and for network propagation.

Another idea in the similar vein:

b1) make mining a moral equivalent of a second-price auction: the mining fees of block X accrue to the miner of block X+1
b2) possibly even replace 1 above with a higher, constant natural number N.
Late edit:
b3) reduce the coinbase maturity requirement by N
Later edit:
b4) since nowadays the fees are very low compared to subsidy, (b3) would imply a temporary gap of global mining income. Subsidy of block X accrues to the miner of X, fees of block X accrue to the miner of block X+N.
End of edits.

Both proposals above aim to incentivize and enforce propagation of the transactions on the network and discourage self-mining of non-public transactions and self-dealing on the mining fee market.


a) is vulnerable to Sybil attacks
b) smothers the incentive to include any transactions in blocks: why should I (as a miner) include a tx if the fee would go to someone else?

Also, it seems both are too disruptive to be implemented in Bitcoin.
Anything this different would need an altcoin to be tried.



Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 18, 2014, 07:30:12 PM
I'd welcome comments / criticism of why having such a feedback mechanism is a good or bad idea.

As with the proposal I offered, this proposal has the virtue of expanding MAX_BLOCK_SIZE when it is in demand, and contracting if fees are not sufficient to support the network (so that fees will rise).

Some issues for examination:
Previous block size:
In its simplest form, the tdryja proposal does not factor in the block size of previous epochs.  This leaves MAX_BLOCK_SIZE subject to rapid switching, which, as tdryja mentions, could be cured by hysteresis, or also (a new suggestion borrowed from my proposal) by making the new MAX_BLOCK_SIZE a product of the previous MAX_BLOCK_SIZE and the tdryja transaction-fee metric (i.e. a percentage increase or decrease).  Rapid switching could be problematic if some event prompts many decentralized miners to radically reduce the block size limit in order to restrain commerce during that event.  (It doesn't take a conspiracy; a single factor influencing miners in aggregate can do this.)

Coinbase Fee
As mentioned, I like the tdryja proposal for its simplicity, so I'd look for ways to keep that virtue.  Still, if transaction fees are the primary metric, there may be some peril in ignoring the coinbase entirely, given its impact on mining in the early years.  It is currently about 300x the transaction fees and so it almost entirely supports the mining effort.

There may be a way of using the coinbase reward also in this calculation, but treated differently.  The coinbase reward primarily serves the emission and distribution functions, but also stimulates adoption in the early years.  It might be used as a way of amplifying the metric in the early years (when lack of adoption is a significant existential risk, and percentage growth is presumably higher) and then let this effect subside in later years by some form of multiplying by (Coinbase)^(-1/2).

Currently the cost per transaction, with the coinbase included, is often higher than the transacted amount.  Such transactions would not occur without the coinbase, so some way to account for it is needed; otherwise we would be unlikely to see any meaningful MAX_BLOCK_SIZE increases so long as the coinbase reward is the main funding source for the network.

It would be good to increase MAX_BLOCK_SIZE long before the coinbase reward is no longer the driving force of network growth.

Squeezing out arbitrariness
There isn't much in the tdryja proposal which is arbitrarily declared by decree (fiat), other than the answer to "What should it cost to run the Bitcoin network?"
We have some indication of this from the hash rate and the total fees.  Currently the total block reward (coinbase plus transaction fees) is stimulating growth in difficulty even in declining markets.




Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: tdryja on October 18, 2014, 10:04:38 PM
David Rabahy:
I generally try not to think in dollar terms about the economic issues in Bitcoin.  If there is a feedback system such that block rewards from tx fees tend towards 1BTC / block, the blocks could potentially be quite large; 100MB/block, or with your estimates 179,000 transactions, at a cost of 5.5 uBTC per tx.  More transactions trying to fit into a 100MB block will tend to push up the per-tx fee, which would expand the max block size to say 110MB, which pushes the fees per tx back down such that the new 110MB blocks are just about full of txs at a 5.4 uBTC / tx fee, still earning 1BTC per block.
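
A quick back-of-envelope check of those figures (a throwaway sketch; the 1 BTC target and the 179,000-transaction count are taken from the paragraph above, and the results land in the same ballpark as the quoted uBTC numbers):

Code:
    // Per-transaction fee (in uBTC) needed for a block of nTxCount transactions
    // to carry dTargetBTC in total fees.
    #include <cstdio>

    double FeePerTxMicroBTC(double dTargetBTC, double nTxCount)
    {
        return dTargetBTC / nTxCount * 1e6;
    }

    int main()
    {
        std::printf("100MB block: %.1f uBTC/tx\n", FeePerTxMicroBTC(1.0, 179000.0));        // ~5.6 uBTC
        std::printf("110MB block: %.1f uBTC/tx\n", FeePerTxMicroBTC(1.0, 179000.0 * 1.1));  // ~5.1 uBTC
        return 0;
    }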

trout:
I address this in my initial post and go into detail below.

2112:
I've thought about the same set of changes, and have decided that it's probably too much of a change to practically push through into Bitcoin.  Something where the miner of block n gets 1/2 the tx fees, and the miner of block n+10 gets the other half, would both incent inclusion of high-fee transactions and eliminate the risk that miners would pay fees to themselves.  Such a fundamental change however is probably impractical, as it would be dismissed by many as "not Bitcoin".  Integrating something like p2pool is also quite complex and will be viewed as too risky.

NewLiberty:
I wasn't clear enough about this in my post, but I meant for the new epoch's block size to be a function of the previous one, just like the difficulty adjustments.  Difficulty adjustments don't actually care about hash rate, just the time taken for the 2016-block epoch, and a relative difficulty adjustment is made based on the divergence from the two-week target.  Similarly, I agree that max block size should use transaction fees as a relative adjustment factor.  I mention bounds of this adjustment below.

-

Simply using median transaction fees per block over the past epoch is hopefully simple and straightforward enough to be accepted by people, and does not have significant incentive problems.

There are two ways this can be 'gamed' by miners.  The way that is most dangerous to the non-mining users of the network would be for miners to artificially limit block size to a small value, in the hopes that they would profit from high transaction fees.  Doing this requires malicious collusion among miners (in excess of that in a proof-of-idle system which I've written about), and in a situation where most of the miners are trying to harm Bitcoin, we're already in much bigger trouble.  In practice miners will grab all the fees they can fit into a block, especially if they know the next miner will do the same.

The more problematic way a malicious miner can 'game' this is by paying fees to itself.  Using thresholds, or the median instead of mean, or some other mathematical way to cut out the outliers may be helpful.  I like using the median block reward -- it's really quite simple and would prevent anyone with <50% of the hash power from accomplishing much.  And the assumption in all of this is that there is no >50% miner. 

If a miner did somehow push the fees and blocksize up, that miner could then publish large blocks in an attempt to spam / DoS the network.  That's the only real threat, and it could be very costly and slow for the miner to accomplish.  Unlike the difficulty adjustment, which is bounded at 0.25X to 4X, the max block size adjustment could have a much tighter bound, like 10%, so that it would take months to double the max block size.
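
A rough sketch of how such an epoch adjustment might look, with the median taken over the total fees of each block in the previous retarget period.  Everything here is an assumption for illustration (the function name, the 1 BTC target, and the exact 10% clamp); the point is only that the median resists self-paid-fee outliers and the clamp keeps the limit from moving quickly.

Code:
    // Illustrative only: adjust the max block size once per 2016-block epoch,
    // based on the median of the total transaction fees in each block.
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    static const int64_t COIN_SATOSHIS = 100000000;
    static const int64_t TARGET_MEDIAN_FEES = 1 * COIN_SATOSHIS;   // assumed 1 BTC target

    uint64_t NextMaxBlockSize(uint64_t nPrevMaxSize, std::vector<int64_t> vFeesPerBlock)
    {
        if (vFeesPerBlock.empty())
            return nPrevMaxSize;

        // The median is robust: a minority miner padding its own blocks with
        // self-paid fees (or mining empty blocks) barely moves it.
        size_t nMid = vFeesPerBlock.size() / 2;
        std::nth_element(vFeesPerBlock.begin(), vFeesPerBlock.begin() + nMid, vFeesPerBlock.end());
        double dRatio = double(vFeesPerBlock[nMid]) / double(TARGET_MEDIAN_FEES);

        // Clamp to +/-10% per epoch, far tighter than the 0.25x-4x difficulty bound,
        // so doubling the limit would take roughly seven to eight epochs (a few months).
        dRatio = std::max(0.9, std::min(1.1, dRatio));
        return static_cast<uint64_t>(nPrevMaxSize * dRatio);
    }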

I think this is simple and straightforward enough that miners, developers, and bitcoin users can read it, understand it, and be OK with it.  I also think that it's safe long-term, and doesn't require human intervention once set up, regardless of how computer technology evolves (or fails to) over the next few decades.

Thanks to everyone who's read and commented on this; I actually thought of this a few years ago and mentioned it to people, but it never got any attention.  My personal opinion is that Gavin's idea of just increasing blocks based on a guess that Moore's law will continue would probably work fine... but I like my idea a little better :)  Thanks for the comments.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: trout on October 18, 2014, 11:44:16 PM
yep, median would work much better than the mean, and a group of <50% miners would only have limited power.

However, I don't quite agree with the reliance on no collusion above 50%.
I understand the premise that a group of >50% miners can do something much worse: a doublespend.  But it is not at all the same type of collusion.
Assembling a group to collude for a doublespend and destroying the credibility and value of bitcoin in the eyes of the public is one thing, and assembling a group to push the max block size to infinity, in order to slowly push out low-bandwidth competitors from mining, is a very different thing.  It seems the latter is much easier.

This said, I find both this and NewLiberty's idea interesting.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: trout on October 19, 2014, 02:49:48 AM
.. more about this:
there's actually the opposite kind of manipulation (or rather attack) possible:
empty blocks. Right now they exist but don't hurt anyone; here they would push
the max block size down, hurting the network.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 19, 2014, 03:51:41 AM
A miner that fills a block with self-dealing transactions (for whatever reason; malicious or stupid) is a nuisance or perhaps worse.  Is there a way to reject blocks that contain transactions that haven't appeared on the network yet?  If transactions must appear on the network before they can appear in a block, then some other miner might block them before the bad actor and obtain the associated fees, undermining the entity attempting to bloat blocks with self-dealing transactions.  I suppose such a bad actor could hold the self-dealing transactions until they have a block ready and then transmit the self-dealing transactions and the block together as close as possible in time, in an attempt to minimize the risk of another miner grabbing their fees.

Oh, I wonder; Does a full node have to have enough bandwidth to keep up with both the blocks *and* transactions waiting to be blocked?  If so then my earlier calculation based on just the blocks (and no orphans for that matter) is low.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 19, 2014, 03:53:59 AM
.. more about this:
there's actually the opposite kind of manipulation (or rather attack) possible:
empty blocks. Right now they exist but don't hurt anyone; here they would push
the max block size down, hurting the network.
Would it be reasonable to reject blocks with too few transactions in them if the pool of transactions waiting is above some threshold?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: tdryja on October 19, 2014, 04:19:30 AM
trout:
empty blocks are possible now, and not a big deal.  They become very expensive longer term as fees take over the block reward; an empty block could have no or negligible reward.  If the median is used, this attack will have minimal effect on the network, while costing the attacker 1 BTC per empty block.  I don’t think we need to worry about an attack which is very expensive for the attacker, and has no appreciable effect on the network.

I agree that it may be easier to form a majority cartel if the only thing at stake is block size.  But a majority cartel of miners can pretty much do this anyway; they just tell everyone “Hey guys, the new max block size is 1GB.  We’re all moving our mining power there, you’d best update your clients.”

Basically I think worrying about a majority of miners doing something you don’t want them to is beyond the scope of the problem.  And if they all want to have huge blocks and put all my transactions in there for free, I for one welcome our new benevolent mining overlords :)

David Rabahy:
The idea of only allowing known transactions into a block has been discussed before, but has been found unworkable.  The purpose of the block is to achieve consensus on which transactions have happened.  Presupposing consensus on the set of transactions removes the need for the block.  In other words, if all the miners already agree on what’s going to be in the next block, why bother broadcasting it to each other?

There are different ways to try to make that work, and I’ve discussed it with several people, but I think it’s fundamentally incompatible with Bitcoin’s current consensus system.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 19, 2014, 05:35:17 AM
I'd welcome comments / criticism of why having such a feedback mechanism is a good or bad idea.
There may be a way of using the coinbase reward also in this calculation, but treated differently.  The coinbase reward primarily serves the emission and distribution functions, but also stimulates adoption in the early years.  It might be used as a way of amplifying the metric in the early years (when lack of adoption is a significant existential risk, and percentage growth is presumably higher) and then let this effect subside in later years by some form of multiplying by (Coinbase)^(-1/2).

Currently the cost per transaction, with the coinbase included, is often higher than the transacted amount.  Such transactions would not occur without the coinbase, so some way to account for it is needed; otherwise we would be unlikely to see any meaningful MAX_BLOCK_SIZE increases so long as the coinbase reward is the main funding source for the network.
If I miner did somehow push the fees and blocksize up, that miner could then publish large blocks in an attempt to spam / DoS the network.  That's the only real threat, and it could be very costly and slow for the miner to accomplish.  Unlike the difficulty adjustment, which is bounded at 0.25X to 4X, the max block size adjustment could have a much tighter bound, like 10%, so that it would take months to double the max block size.

Currently the TX fees are way below 1 BTC per block, and this will likely continue for quite a while.  The network collects less than 15 BTC per day in fees.
By including the coinbase reward (or maybe a square or other root of it) we would come closer to Gavin's increase in the early years and move steadily toward fee-supported mining within the next 20 years or so, while increasing MAX_BLOCK_SIZE.  A rough sketch of what that blending could look like follows below.
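
One way that blending could be sketched (illustrative only; the function name, the 0.1 weighting, and the idea of feeding the result into the same fee-target comparison are assumptions, not part of any proposal above).  The coinbase subsidy enters through a square root, so it dominates the metric while the subsidy is large and fades as it halves:

Code:
    // Illustrative sketch: blend the coinbase subsidy into the fee metric,
    // dampened by a square root so its influence fades as the subsidy halves.
    #include <cmath>

    // dMedianFees and dSubsidy are in BTC; the result would be compared against the
    // same per-block target used by the fee-only adjustment.
    double EffectiveFeeMetric(double dMedianFees, double dSubsidy)
    {
        // With a 25 BTC subsidy, sqrt is 5; after the next halving ~3.5; long term -> 0.
        return dMedianFees + 0.1 * std::sqrt(dSubsidy);   // the 0.1 weighting is an arbitrary example
    }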

I think this is simple and straightforward enough that miners, developers, and bitcoin users can read it, understand it, and be OK with it.  I also think that it's safe long-term, and doesn't require human intervention once set up, regardless of how computer technology evolves (or fails to) over the next few decades.

yes.

edit:
Another critique of the fee-basis method vs the block-size basis might be that the "% of M0 to dedicate to mining" would gradually increase over time as bitcoins are lost/destroyed.  I don't see this as highly important, but it may be a source of future refinement if it were ever to become a concern.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Gavin Andresen on October 19, 2014, 02:49:09 PM
By including the coinbase fee (or maybe a square or other root of it) we would come closer to Gavin's increase in the early years and move steadily toward a fee supported mining within the next 20 years or so while increasing the MAX_BLOCK_SIZE.

Did you read my "blocksize economics" blog post?

I don't understand why you think MAX_BLOCK_SIZE necessarily has anything to do with "supporting mining" (aka securing the network).

What stops this from happening:

Big miners accept off-blockchain payments from big merchants and exchanges that want their transactions confirmed. They are included in very small blocks with zero fees.  The blocksize stays at 1MB forever.

Let's look at incentives:

Big miners: have cozy agreements with Big Merchants. Those agreements keep the little guys out.

Big Merchants: same thing. The need to get an agreement with a miner to get your transactions accepted is a barrier to entry for the Little Guys.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 19, 2014, 05:08:56 PM
Is there a known functional limit above which MAX_BLOCK_SIZE breaks the code?  Have we ever cranked the MAX_BLOCK_SIZE up on testnet and then deliberately filled a block up with transactions and seen it fail?

Do any instabilities appear when the pool of unconfirmed transactions grows large enough?

Does every transaction eventually get put into a block for sure?  Is it possible for a transaction to hang out in the pool forever?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: tdryja on October 19, 2014, 05:18:00 PM
Did you read my "blocksize economics" blog post?

I don't understand why you think MAX_BLOCK_SIZE necessarily has anything to do with "supporting mining" (aka securing the network).

I can't speak for NewLiberty, but I have certainly read it, and agree with the majority of what you've written.  The part about "Block Subsidy, Fees, and Blockchain Security" is most relevant here.  I agree that as it stands, there is no guarantee that 1MB blocks would be full of high value transactions, and no guarantee that 1GB blocks would be full enough of low value transactions to secure the network. 

However, if the max block size is linked to the transaction fees, we can at least know that the 1GB block does have sufficient fees, because the size would contract if it didn't.  The other scenario -- a half empty 1MB block with minimal fees on a few large transactions -- implies that Bitcoin has either failed or been superseded, at which point the max block size is not relevant.


What stops this from happening:

Big miners accept off-blockchain payments from big merchants and exchanges that want their transactions confirmed. They are included in very small blocks with zero fees.  The blocksize stays at 1MB forever.

2 things: 1 which stops it from happening, and 1 which means it could happen anyway.

This scenario supposes that 1MB is sufficient to maintain the miner / merchant cartel's transactions, which may not be the case, but is plausible.  What is implausible is that every member of this cartel of miners continues to reject a vast mempool of outsider fee paying transactions.  Thousands of merchants saying "shut up and take my bitcoins! include my tx!" and the miners all say "No!", maintain their cartel, and deny themselves that money?  Or, if they try to on-board these merchants into their cartel, the 1MB block isn't big enough anymore.  Similarly for merchants, are they getting a better deal with the cartel?  If so, great, but why is the cartel being nice to the merchants; it's much more likely that the merchants would hate the cartel and try to get their transactions in a cheaper, independent block.

Why maintain membership in the cartel if you make less money?  One of those two groups (miners, merchants) must be making less money.

This type of cartel is also possible with an open-loop exponential expansion of max block size.  The majority of the miners can stick to 1MB blocks, and reject blocks with transactions not in their cartel.  >50% of miners need to participate in this cartel to effectively push down the median fees.  It doesn't make rational sense (unless megabytes are extremely expensive) in this case either, but if we worry about a malicious majority mining cartel, it is still doable.

I think an open-loop larger block size would probably be fine, but it involves a lot of extrapolation.  Maybe computers get way better really fast, and 1GB is laughably small.  Or maybe they stay the same, and 1GB is too large, meaning the networking and storage costs of mining exceed the sha256 costs, centralizing mining.  I think a closed-loop feedback system based on median aggregate transaction fees is able to reduce these risks.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 19, 2014, 06:03:54 PM
Did you read my "blocksize economics" blog post?
Yes; I should take this as a request for comment in the thread more appropriate for that.

I don't understand why you think MAX_BLOCK_SIZE necessarily has anything to do with "supporting mining" (aka securing the network).
Simply put: It is the supply side of the mining resource which miners are selling.
This should be clear enough.  
I can go into more detail in its thread, tdryja made some decent comments here already.

What stops this from happening:

Big miners accept off-blockchain payments from big merchants and exchanges that want their transactions confirmed. They are included in very small blocks with zero fees.  The blocksize stays at 1MB forever.

Lets look at incentives:

Big miners: have cozy agreements with Big Merchants. Those agreements keep the little guys out.

Big Merchants: same thing. The need to get an agreement with a miner to get your transactions accepted is a barrier to entry for the Little Guys.

The fear of this theoretical arrangement was addressed in tdryja's initial post, and further explained in the latest.
I do agree that any feedback mechanism such as we are seeking with this line of discussion holds the potential for creating a perverse incentive.  


Admittedly there is also a philosophical basis for what may seem like a useless discussion to some since the Chief Scientist of The Bitcoin Foundation has already decided and is seeking to end discussion.  


Consider the existence of a central authority, employed by a member organization with the charter of interfacing with governments.  The Chiefs then take the role of arbitrarily deciding on the supply and adjusting it as the organization's economic advisers suggest; we have then progressed towards replicating the Federal Reserve Bank.

It is nothing personal with Gavin, I like you and love what you do.  I think your proposal also could possibly work in the short term, except that it sets a most dangerous precedent.  One risk is certain, and that is that those who come after us will not be us, but it is our hope, and the effort for which we strive mightily, that Bitcoin will still be Bitcoin. It is this which I am hoping to protect by seeking for a way to put this authority on the block chain, and not on the decree of any person now, or in the future.

Both of these risks (a perverse miner/merchant cartel, and a perverse central authority) are theoretical.  On balance, weighing the risk of creating perverse incentives by basing decisions affecting the monetary support of the network on the evidence provided by the block chain, against the risk of decisions by unknown people of the future who may have their own, harder-to-observe perverse incentives, I would give the role of this governance to the Bitcoin block chain.  Simply because there it will be exposed and may be seen, and it is also a much easier perversity to dislodge.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 19, 2014, 08:04:34 PM
Consider the existence of a central authority, employed by a member organization with the charter of interfacing with governments.  The Chiefs then take the role of arbitrarily deciding on the supply and adjusting as the organization's economic advisers suggest, we then have progressed towards replicating the Federal Reserve Bank.

I completely disagree with this. Believe it or not it's actually not that easy for the Fed to adjust monetary policy. I mean all things considered, it's exceptionally easy, but they still have to get their board to go along and sell the public on what they're doing. That's a task made harder as they try more extraordinary things (like now) and the public becomes more astute to the way money works and its importance (like now), and that's a center driven design.

Bitcoin is designed from the ground up to be the opposite. It's extraordinarily hard to implement changes affecting the whole without consent from the whole. I sincerely believe after a certain point of adoption it will be impossible to make changes to Bitcoin, even ones not so controversial; if there isn't a do or die mandate behind the action (like a break in SHA256) I don't see consensus from millions and millions of independent thinkers coming easily. Somebody's going to think differently for some reason, even if it appears irrational. People call this ossifying of the protocol.

Think how hard this 1MB issue is. There was a time when Satoshi simply told everyone to start running a protocol change without question. He knew there was a severe bug allowing excess coins, but people simply upgraded and now the fork ignoring that block is locked in.

Bitcoin isn't the first to come up with decentralization. That was actually the idea behind America. Instead of power reigning down from monarchs it would be vested within all the individuals. However, the founders even then recognized authority by committee wasn't always ideal. It would be a clear disadvantage if attacked since the battle might be lost before it was decided what to do. That's why the president has full authority to respond militarily in case of attack.

It sounds like you're objecting for reasons more ideological than practical. While that's admirable and understandable I hope you also recognize that's not automatically best given the circumstances.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: tl121 on October 19, 2014, 08:07:01 PM
KISS:

1. Since technology allows increase to 20 MB per block today, make an increase to this size as soon as consensus and logistics allow.

2. Continue to evaluate the situation based on computer-communications technology growth, transaction growth and observed network behavior.  There will be ample time to make a second hard fork should this become necessary.  (A one-time jump of 20x is equivalent to 40% annual growth for 9 years, as the quick check below confirms.)
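
The parenthetical arithmetic checks out (a trivial verification, included only for the numbers):

Code:
    // 40% annual growth compounded for 9 years: 1.4^9 is about 20.7, i.e. roughly a one-time 20x jump.
    #include <cmath>
    #include <cstdio>

    int main()
    {
        std::printf("1.4^9 = %.1f\n", std::pow(1.4, 9.0));   // prints 20.7
        return 0;
    }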


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 19, 2014, 09:07:53 PM
KISS:

1. Since technology allows increase to 20 MB per block today, make an increase to this size as soon as consensus and logistics allow.

And what if consensus never allows it? Do we never do anything? It seems a lot of people have an "oh just do this" game plan, without really considering that things might not work the way they think they will.

It's entirely possible hard and even somewhat messy choices may have to be made with Bitcoin. This is because some people will never be on the page you're trying to get them on, no matter how much conversation occurs.

2. Continue to evaluate the situation ...

Did you not read what I wrote above? I fully expect (as do others) for changes to become harder if not impossible to make as adoption grows. If some tangible solution isn't enacted within a fairly short period of time (meaning before the next bubble of interest and increased adoption) I myself may seriously have to re-evaluate the viability of Bitcoin - not cryptocurrency mind you, just this particular version of Bitcoin.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: ElysianBaws on October 19, 2014, 10:35:21 PM
IS THAT THE GAVIN WHO IS NOW IN SATOSHIS POSITION ???


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 20, 2014, 08:23:37 AM
Consider the existence of a central authority, employed by a member organization with the charter of interfacing with governments.  The Chiefs then take the role of arbitrarily deciding on the supply and adjusting as the organization's economic advisers suggest, we then have progressed towards replicating the Federal Reserve Bank.

I completely disagree with this. Believe it or not it's actually not that easy for the Fed to adjust monetary policy. I mean all things considered, it's exceptionally easy, but they still have to get their board to go along and sell the public on what they're doing. That's a task made harder as they try more extraordinary things (like now) and the public becomes more astute to the way money works and its importance (like now), and that's a center driven design.

Bitcoin is designed from the ground up to be the opposite. It's extraordinarily hard to implement changes affecting the whole without consent from the whole. I sincerely believe after a certain point of adoption it will be impossible to make changes to Bitcoin, even ones not so controversial; if there isn't a do or die mandate behind the action (like a break in SHA256) I don't see consensus from millions and millions of independent thinkers coming easily. Somebody's going to think differently for some reason, even if it appears irrational. People call this ossifying of the protocol.

Think how hard this 1MB issue is. There was a time when Satoshi simply told everyone to start running a protocol change without question. He knew there was a severe bug allowing excess coins, but people simply upgraded and now the fork ignoring that block is locked in.

Bitcoin isn't the first to come up with decentralization. That was actually the idea behind America. Instead of power reigning down from monarchs it would be vested within all the individuals. However, the founders even then recognized authority by committee wasn't always ideal. It would be a clear disadvantage if attacked since the battle might be lost before it was decided what to do. That's why the president has full authority to respond militarily in case of attack.

It sounds like you're objecting for reasons more ideological than practical. While that's admirable and understandable I hope you also recognize that's not automatically best given the circumstances.

I understand you believe that Bitcoin is doomed to fail because of insufficient central authority.
We disagree.

In any case, even if you were right and such a thing were needed, that should not stop people from offering better ideas to those who are claiming to have authority.
So you are about as wrong as anyone can possibly be, to suggest that just because someone claims authority, they should make decisions and everyone should blindly follow when they see clearly better solutions available.
Why?
Just for the sake of establishing authorities?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: helmax on October 20, 2014, 02:54:30 PM
i agree with this idea
gavin is correct !


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 20, 2014, 04:46:40 PM
i agree with this idea
gavin is correct !

It would be nice to add some reason for an opinion, or even clarify what it is that you are opining upon.

So which of these do you agree is correct?  Is it
a) that whoever the Chief Scientist of TBF is at the moment should decide what MAX_BLOCK_SIZE ought to be and hard fork as desired, as the new central authority for Bitcoin? or
b) that past changes in network technology, as measured across the current first-world user base, sufficiently predict the future and should be used as the basis for governing? or
c) acoindr's assertion that central authority is necessary for the survival of bitcoin because meritocracy consensus is doomed to fail?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 20, 2014, 05:22:37 PM
I understand you believe that Bitcoin is doomed to fail ...

I never said I believe Bitcoin is doomed to fail, although my questioning of its viability may strengthen with developments. That should be everyone's position, because Bitcoin is an experiment. Those who think Bitcoin is guaranteed to succeed in serving the world are not understanding the situation. This doesn't mean it can't succeed at that, only that it's not guaranteed (how could it be?).

... because of insufficient central authority

My position isn't Bitcoin needs central authority. My position is Bitcoin needs a viable solution. If you read the post (https://bitcointalk.org/index.php?topic=815712.msg9247444#msg9247444) I made above you'll see I asked whether the majority community could be convinced to accept Gavin's proposal or one more like what you're crafting. My position was one of adopting a viable solution.

In any case, even if you were right and such a thing were needed, that should not stop people from offering better ideas to those who are claiming to have authority.

Who has claimed any authority? Where? All I see is people putting forth their suggestions.

So you are about as wrong as anyone can possibly be, to suggest that just because someone claims authority, that they should make decisions and everyone blindly follow when they see clearly better solutions available.  
Why?
Just for the sake of establishing authorities?

Like I said above, it seems you're arguing from a position of ideology. You seem to see resolving the block size issue as divided between those who tend toward centralization and those who demand absolute decentralization, even to the point of seeing people establishing positions they haven't. That is the reason I question Bitcoin's viability. It's because people have their own thoughts about how things should work, or how things can work, and even if there is a solution which can work (I'm not saying which) it may not be possible to get everyone to agree, because it's not possible to do a Vulcan mind meld and have everyone understand everyone else's thoughts, conclusions, and informing information. People think differently (and with differing abilities). In the absence of some deciding force (usually a leader or authority as you call it) the result may be no clear decision whatsoever.

I'm simply seeking something which can work, something a majority can agree upon, nothing more.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 20, 2014, 05:43:48 PM
I understand you believe that Bitcoin is doomed to fail ...

I never said I believe Bitcoin is doomed to fail, although my questioning it's viability may strengthen with developments. That should be everyone's position, because Bitcoin is an experiment. Those who think Bitcoin is guaranteed to succeed in serving the world are misunderstanding the situation. This doesn't mean it can't succeed at that, only that it's not guaranteed (how could it be?).

... because of insufficient central authority

My position isn't Bitcoin needs central authority. My position is Bitcoin needs a viable solution. If you read the post (https://bitcointalk.org/index.php?topic=815712.msg9247444#msg9247444) I made above you'll see I asked whether the majority community could be convinced to accept Gavin's proposal or one more like what you're crafting. My position was one of adopting a viable solution.

In any case, even if you were right and such a thing were needed, that should not stop people from offering better ideas to those who are claiming to have authority.

Who has claimed any authority? Where? All I see is people putting forth their suggestions.

So you are about as wrong as anyone can possibly be, to suggest that just because someone claims authority, that they should make decisions and everyone blindly follow when they see clearly better solutions available.  
Why?
Just for the sake of establishing authorities?

Like I said above, it seems you're arguing from a position of ideology. You seem to see resolving the block size issue as divided between those who tend toward centralization and those who demand absolute decentralization, even to the point of seeing people establishing positions they haven't. That is the reason I question Bitcoin's viability. It's because people have their own thoughts about how things should work, or how things can work, and even if there is a solution which can work (I'm not saying which) it may not be possible to get everyone to agree, because it's not possible to do a Vulcan mind meld and have everyone understand everyone else's thoughts, conclusions, and informing information. People think differently (and with differing abilities). In the absence of some deciding force (usually a leader or authority as you call it) the result may be no clear decision whatsoever.

I'm simply seeking something which can work, something a majority can agree upon, nothing more.

The ideology is:  To the extent that arbitrariness can be squeezed out of the most elemental variables of the protocol, that ought to be the fundamental design goal.  The corollaries to this include what you have identified here,
a) that the more arbitrariness there is, the more the risk of needing future changes
b) the more changes needed, the greater the need for a leader/authority/deciding force.  
c) the more authority driven changes, the more degradation there is on the value of Bitcoin's decentralization

Thus my purpose here supports your goal of reducing that risk of stagnation for the future.

Thank you for re-characterizing your position.

Gavin's proposal is fit for immediate purposes, but it just kicks the can down the road for future readjustment.  This is problematic in that it will then re-require this leader/authority/deciding force.  Rather than accepting an extrapolation which is guaranteed to be wrong, we should look for one that self-corrects, so that we can solve this for the future as well as today and dispense with the ongoing readjustments.
We ignore that at the peril of creating the very dystopian future you have imagined.

FWIW, I am still favoring the block-size adjustment basis over the fee-based adjustment, but there may be a way to combine the two and still maintain simplicity.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 20, 2014, 06:05:58 PM
Gavin's proposal is fit for immediate purposes, but it just kicks the can down the road for future readjustment.  This is problematic in that it will then re-require this leader/authority/deciding force.

This is one of those instances I'm talking about regarding people thinking differently.

You and I seem to fundamentally think differently here, and who is to say who is right? I believe whatever hard fork change we make, if we make one, it will be locked in quite probably forever more. It won't be subject to adjustment. Whatever it is future users will have to work with, sort of like we simply have to work with 1MB if we can't adequately change it. This is due to an ossifying of the protocol, again, as I mentioned above (https://bitcointalk.org/index.php?topic=815712.msg9257470#msg9257470).

Rather than accepting an extrapolation which is guaranteed to be wrong, ...

Define "wrong".


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Cubic Earth on October 20, 2014, 09:50:29 PM
The answer cannot be in the blockchain, because the problem being addressed (resource usage rising too quickly so only people willing to spend tens of thousands of dollars can participate as fully validating nodes) is outside the blockchain.

IMHO, this is the most salient point on this whole thread.  Sometimes you just have to think deeply and clearly to see the truth.  

The core dev team are more than just experts in computers; they are also experts in human relations.  It's why Gavin and others have so much respect in the community, and certainly why they have my respect.  Bitcoin is all about free will and voluntary participation.  Every aspect of bitcoin must be in philosophical alignment with those concepts.  I'm not worried; I believe the consensus around the principles that bitcoin embodies will continue to grow.  This thread has been a great discussion.  Some of the alternatives to Gavin's plan just don't work on the philosophical level.  Therefore, they cannot function on a technical level.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 21, 2014, 01:41:52 AM
Gavin's proposal is fit for immediate purposes, but it just kicks the can down the road for future readjustment.  This is problematic in that it will then re-require this leader/authority/deciding force.

This is one of those instances I'm talking about regarding people thinking differently.

You and I seem to fundamentally think differently here, and who is to say who is right? I believe whatever hard fork change we make, if we make one, it will be locked in quite probably forever more. It won't be subject to adjustment. Whatever it is future users will have to work with, sort of like we simply have to work with 1MB if we can't adequately change it. This is due to an ossifying of the protocol, again, as I mentioned above (https://bitcointalk.org/index.php?topic=815712.msg9257470#msg9257470).

Rather than accepting an extrapolation which is guaranteed to be wrong, ...

Define "wrong".

Necessitating future adjustment.  A change that does not resolve the fundamental problem, and addresses only the immediate perceptions of today.

In the same way that a fixed 1MB is "wrong".  


The answer cannot be in the blockchain, because the problem being addressed (resource usage rising too quickly so only people willing to spend tens of thousands of dollars can participate as fully validating nodes) is outside the blockchain.

IMHO, this is the most salient point on this whole thread.  Sometimes you just have to think deeply and clearly to see the truth.  

The core dev team are more than just experts in computers, they are also experts in human relations.  It's why Gavin and others have so much respect in the community, and certainly why they have my respect.  Bitcoin is all about free will and voluntary participation.  Every aspect of bitcoin must be in philosophical alignment with those concepts.  I'm not worried, I believe the consensus around the principles that bitcoin embodies will continue to grow.  This thread has been a great discussion.  Some of alternatives to Gavin's plan just don't work on the philosophical level.  Therefore, they cannot function on a technical level.

IMHO Gavin's point there is in the category of "just don't work on the philosophical level".  It speaks of dollars.  It misses the mark on the value of the block chain as a resource.  It is short-sighted.  

Further, in practical terms, if you are suggesting that even my first proposal (that essentially replicates Gavin's first proposal in its effect) "cannot function on a technical level", then you would be suggesting that neither could Gavin's.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 21, 2014, 02:38:46 AM
Necessitating future adjustment.  A change that does not resolve the fundamental problem, and addresses only the immediate perceptions of today.  

Now define "right". Is it simply a block size which grows/shrinks dynamically with real world bandwidth over time? What if usage demand is far higher? What if the BTC exchange rate experiences unending volatility due to uncertainty about usage capacity (ie its user base)?

In other words does not needing future bandwidth adjustment automatically mean "right"?

In the same way that a fixed 1MB is "wrong".  

Not according to these people (http://keepbitcoinfree.org/). They think 1MB is the right answer for "many more years" and until we know it's "safe" to change. Can't you see any answer given is subjective? With that being the case doesn't it make sense to adopt a solution which can fit the most common perception of Bitcoin's promise, which certainly includes global usage, and can win popular support?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 21, 2014, 05:33:09 AM
Necessitating future adjustment.  A change that does not resolve the fundamental problem, and addresses only the immediate perceptions of today.  

Now define "right". Is it simply a block size which grows/shrinks dynamically with real world bandwidth over time? What if usage demand is far higher? What if the BTC exchange rate experiences unending volatility due to uncertainty about usage capacity (ie its user base)?

In other words does not needing future bandwidth adjustment automatically mean "right"?

Correct.  If you fix something and it never needs fixing again, and it just keeps working from then on, you have fixed it in the right way.
Automatic adjustment based on the environment of the future at least has a possibility of being right. 
We have this potential with the block chain.

We can play "what if" about all sorts of things, but at least the ones you mention here are not going to take additional hard forks to accommodate if we use any either of the methods described in this thread.  Each of them is responsive to usage demand to adjust capacity.

In the same way that a fixed 1MB is "wrong".  

Not according to these people (http://keepbitcoinfree.org/). They think 1MB is the right answer for "many more years" and until we know it's "safe" to change. Can't you see any answer given is subjective? With that being the case doesn't it make sense to adopt a solution which can fit the most common perception of Bitcoin's promise, which certainly includes global usage, and can win popular support?

At best they are saying "not now". 

Previously you suggested that you thought Satoshi left because he didn't have anything more to offer.  Another plausible hypothesis is that he understood he would have undue influence if he continued to contribute under that name, and that the resulting groupthink might cause the project to suffer.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 21, 2014, 07:04:06 PM
Correct.  If you fix something and it never needs fixing again, and just keeps working from them on, you have fixed in a right way.

What about this from you?

I do agree that any feedback mechanism such as we are seeking with this line of discussion holds the potential for creating a perverse incentive.

How can you know if your solution is right and still admit this?

Automatic adjustment based on the environment of the future at least has a possibility of being right.  

So do increases in line with 30 years of historical data (http://www.nngroup.com/articles/law-of-bandwidth/).

We can play "what if" about all sorts of things, but at least the ones you mention here are not going to take additional hard forks to accommodate if we use any either of the methods described in this thread.  

We need to clear something up. As I've said, I don't expect future hard forks to be possible due to the protocol ossifying. You keep talking about changes which might occur later and how you're against that. I'm saying I don't believe there can be any changes after a certain point, one which we may be nearing, because it's harder to gain consensus the more people that need to be consulted.

Whatever change we make that will probably be it. How well it works in the future depends on how technology plays out and the community's ability to adapt around the protocol's shortcomings if there are any.

Please tell me if you agree an ossifying of the protocol - the fact it will become increasingly hard, probably impossible to make changes as adoption grows - is what we'll likely see.

At best they are saying "not now".  

If they're saying not now it may become impossible to change from 1MB. I don't believe their position is realistic, but who is more right? Everything is subjectively based on the priorities of the advocate.

My goal, as you said earlier, isn't to be right; it's to arrive at some solution which can gain consensus while meeting Bitcoin's promises. If that's Gavin's proposal, fine. If that's your solution, fine. Let's just get something that meets that criteria so we're not stuck.

Why I think Gavin's proposal would play out better:

  • it has good chance at gaining consensus given Gavin's position
  • it provides predictability, which shouldn't be underappreciated; most people and businesses are not nerds enjoying complex ideas, they want simple, understandable guidelines to make decisions upon, and they can chart the numbers with Gavin's model
  • similar to now the max size is a cap, not what is actually used; the cap stays commensurate with Moore's Law and Nielsen's Law
  • it accommodates the largest number of users (inline with exponential adoption) while still offering a protective cap
  • in the worst case, if centralization occurs, there is still the community to deal with it (remember GHash.io), and the community has alternative coins to turn to

What I dislike about an input based solution:

  • possibly does not serve the greatest number of people
  • less certain about gaining consensus as people that need to be swayed may include business types like BitPay, Circle, Bitcoin Foundation, etc.
  • it doesn't provide clear predictability; whatever happens is dynamic based on the network which is influenced by various factors
  • possible perverse incentives created
  • a more complex solution means greater chance something doesn't work as expected

I see your version as fitting more like a glove. I'd agree it's probably more conservative and protective than Gavin's proposal, but at what expense? Nothing will be ideal because the technology and likely usage demand don't match, and won't for some time. Perhaps you can produce a bullet point list too and we can begin debating specifics.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 21, 2014, 09:05:23 PM
A maximum block size which is too small will naturally lead to more off-chain activity; folks/entities will not be denied the ability to transact.

A maximum block size which is too big thwarts participation by bandwidth-starved nodes.  So?

I propose we set MAX_BLOCK_SIZE to the maximum functional value possible today and walk away, trusting the future to the caretakers of that time.  If any idiots or malicious bad actors try to take advantage of it and attack, then Bitcoin was vulnerable to that already anyway.  No one is even filling up the current 1MB blocks with self-dealing transactions as it is.  Remind me again why it was lowered?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 21, 2014, 09:06:34 PM
Anyone that wants to transact off-chain is able to do so independent of MAX_BLOCK_SIZE.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 21, 2014, 10:46:18 PM
No one is even filling up the current 1MB blocks with self-dealing transactions as it is.  Remind me again why it was lowered?

This is a perfect example of why I'm skeptical about good consensus at any point, especially after large adoption numbers. Everyone has theoretically equal vote/opinion/input in Bitcoin, but not everyone is working with the same information (or expertise, abilities, integrity etc. etc.), no offense to David Rabahy.

We need something which can pass the community and in my opinion fairly soon.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Syke on October 22, 2014, 03:50:58 AM
A maximum block size which is too small will naturally lead to more off-chain activity; folks/entities will not be denied the ability to transact.

A maximum block size which is too big thwarts participation by bandwidth-starved nodes.  So?

I propose we set MAX_BLOCK_SIZE to the maximum functional value possible today and walk away, trusting the future to the caretakers of that time.  If any idiots or malicious bad actors try to take advantage of it and attack, then Bitcoin was vulnerable to that already anyway.

+1

No one is even filling up the current 1MB blocks with self-dealing transactions as it is.  Remind me again why it was lowered?

Now it would be very expensive to flood the network. When there was no specified limit (the effective limit was 32 MB), someone could have flooded the network with junk transactions for virtually no cost.
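
To put rough numbers on "very expensive" (the figures below are assumptions for illustration, not data from this thread: ~250-byte transactions and a 0.0001 BTC minimum fee), a flooder now pays on the order of tens of BTC per day, whereas before any limit or fee the same flood cost nothing but bandwidth:

Code:
    // Rough illustration with assumed numbers (not from this thread): the cost
    // of keeping 1 MB blocks full of junk for a day once a minimum fee applies.
    #include <cstdio>

    int main() {
        const double feePerTx     = 0.0001;         // assumed minimum fee, in BTC
        const int    txPerBlock   = 1000000 / 250;  // 1 MB block, ~250-byte transactions
        const int    blocksPerDay = 144;            // ~one block every 10 minutes

        double costPerDay = feePerTx * txPerBlock * blocksPerDay;
        std::printf("~%.1f BTC/day to keep blocks full\n", costPerDay);   // ~57.6
        // With no limit and no required fee, flooding 32 MB blocks was essentially free.
        return 0;
    }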


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 22, 2014, 06:01:15 AM
I have an idea. Why not ask everyone in Bitcoin what they think we should do, then just do all of them! Or, we can just debate each idea until it no longer matters since the year will be 2150.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 22, 2014, 02:08:31 PM
Perhaps the largest block http://blockexplorer.com/block/00000000000000000898b6d7e23fd5e42cc4374ab4d054a3213691ffb5d98c38 to date just happens to pop out of the system during this discussion.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 22, 2014, 04:00:20 PM
.. more about this:
there's actually the opposite kind of manipulation (or rather attack) possible: empty blocks. Right now they exist but don't hurt anyone; here they would push the max block size down, hurting the network.
Would it be reasonable to reject blocks with too few transactions in them if the pool of transactions waiting is above some threshold?
https://bitcointalk.org/index.php?topic=165.msg1595#msg1595 gets at my point.
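
To sketch what such a check might look like, here is a fragment in the style of the CheckBlock() size test quoted earlier in this thread; the constants and the mempool reference are invented for illustration, and since every node's mempool differs, something like this could only ever be local relay policy rather than a strict consensus rule:

Code:
    // Hypothetical only -- names and thresholds invented for illustration.
    // Reject a near-empty block while this node's own mempool is heavily backlogged.
    static const unsigned int MIN_TX_WHEN_BACKLOGGED    = 10;
    static const unsigned int MEMPOOL_BACKLOG_THRESHOLD = 4000;

    if (mempool.size() > MEMPOOL_BACKLOG_THRESHOLD &&
        block.vtx.size() < MIN_TX_WHEN_BACKLOGGED)
        return state.DoS(0, error("CheckBlock() : near-empty block during backlog"),
                         REJECT_INVALID, "bad-blk-too-few-tx");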


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 22, 2014, 04:02:48 PM
https://en.bitcoin.it/wiki/Weaknesses#Spamming_transactions talks about spamming transactions but does not connect that with a miner.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 22, 2014, 04:13:32 PM
I have an idea. Why not ask everyone in Bitcoin what they think we should do, then just do all of them! Or, we can just debate each idea until it no longer matters since the year will be 2150.
Essentially a lot of ideas are being tried out via altcoins.

Rushing to do anything just to get something done does not seem prudent.  Hesitating forever will lead naturally to real consequences.

Waiting for the MAX_BLOCK_SIZE to become an emergency is waiting too long.  https://bitcointalk.org/index.php?topic=419185.msg4552409#msg4552409 was an attempt to find the biggest queue to date.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: btchris on October 22, 2014, 05:06:18 PM
We all agree that the current max block size is too restrictive.

What seems obvious to me is that different people have different opinions on the underlying purpose of any blocksize limit.

At the risk of putting words into his mouth (for which I apologize if I'm wrong), Gavin sees it as a technical anti-DOS measure: to prevent miners from DOSing voting enthusiasts out of the network. If this is true, the best solution would be for an automatically adjusting limit that tracked the speed of residential connections and of residential hard drive capacities (enthusiast residences). Since that seems impossible, gavin's limited-exponential growth seems like the best educated guess that can be made today.

Others see it as a as an economic issue, and would like to tie the limit to some economic aspect of Bitcoin to solve this perceived economic threat. I'm no economist, and I certainly don't know if they're right. But guess what: I personally don't care if they're right.

Any restriction on blocksize is an artificial one, a regulation determined by some authority who believes (perhaps correctly) they know better than I. I'm OK with technical restrictions, and those that improve the ability to vote, but I am completely against any restrictions whose purpose is to alter an otherwise free-market system.

To put it bluntly, I would rather see a restriction-free Bitcoin crash and burn in the hands of a free-market system than participate in a regulated Bitcoin. To me, Bitcoin should be a free market experiment (as much as is technically feasible), even if this leads to its eventual failure. Of course, that's just my personal opinion, but it's the basis for my dislike of more-limited blocksizes.

I mean no disrespect to some of the clever alternatives presented in this thread-- but I personally wouldn't want any of these "regulations" in Bitcoin.

Let me ask a question: is there anyone here who both (a) favors additional blocksize restrictions (more strict than Gavin's), and also (b) believes such restrictions are not additional "regulations" that subtract from an otherwise more-free-market system?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: btchris on October 22, 2014, 05:28:26 PM
Please tell me if you agree an ossifying of the protocol - the fact it will become increasingly hard, probably impossible to make changes as adoption grows - is what we'll likely see.

Not that I was asked, but I'll offer an opinion anyways.

At worst harder, but not impossible.

Today we have a sort of self-enforced (by the core devs) consensus system, plus of course the ultimate ability to vote with your node and with your mining capacity. I wouldn't expect the latter to ever change (indeed some blocksize limit is required to maintain this goal). For the former, however, I doubt that having this little governance around important changes to Bitcoin will last forever -- 20 years hence I would expect a much more regimented procedure, somewhat more akin to a standards organization than what we have today (perhaps with a combination of academic, miner, and corporate interests represented, but that'd be an argument for a different thread).

More governance is both bad and good-- in particular on the good side, bright lines can be drawn when it comes to voting in a way that doesn't happen so much today. If the ISO can finally manage to crank out C++11, despite the contentious issues and compromises that were ultimately required (and C++14 just two months ago too!), pretty much anything is possible IMO.

If you're that worried about ossification, perhaps you'd prefer a dead man's switch: in 20 years, the blocksize reverts to its current 1 MB.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Syke on October 22, 2014, 05:31:01 PM
At the risk of putting words into his mouth (for which I apologize if I'm wrong), gavin sees it as a technical anti-DOS measure: to prevent miners from DOSing voting enthusiasts out of the network.

But that's a very costly attack, yet it doesn't accomplish anything.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: btchris on October 22, 2014, 05:54:39 PM
At the risk of putting words into his mouth (for which I apologize if I'm wrong), gavin sees it as a technical anti-DOS measure: to prevent miners from DOSing voting enthusiasts out of the network.

But that's a very costly attack, yet it doesn't accomplish anything.

It's a free attack for a miner, and can arbitrarily kick anyone off the network (even if temporarily) who doesn't have sufficient bandwidth or ultimately enough disk space.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 22, 2014, 06:09:29 PM
At worst harder, but not impossible.

LOL are you not following this thread? What easy way forward do you see emerging for the block size issue?

If the ISO can finally manage to crank out C++11, despite the contentious issues and compromises that were ultimately required (and C++14 just two months ago too!), pretty much anything is possible IMO.

That's for a programming language not a protocol. Also see Andreas Antonopoulos's comment (http://www.reddit.com/r/Bitcoin/comments/2jw5pm/im_gavin_andresen_chief_scientist_at_the_bitcoin/clfnorn) on ossification considering hardware, which I also agree with.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: btchris on October 22, 2014, 06:55:05 PM
At worst harder, but not impossible.

LOL are you not following this thread? What easy way forward do you see emerging for the block size issue?

Consensus does not imply unanimous appeal. Although threads such as these help to flesh out different ideas and opinions, they do very little towards determining where consensus lies. This thread could represent a vocal majority just as easily as it could a small but vocal minority (on either side).

This is exactly where a more formal governance model (as I mentioned) could help. It too would surely be imperfect, but just about anything would be better than determining consensus based on who writes the most posts, thoughtful though they may be.

A formal governance model could draw distinct conclusions: yes the BIP passed, or no it didn't. If it didn't, it can lead to compromise. If, for example, I knew that there was little support for gavin's version, I for one would be much more willing to compromise. But I simply don't know.... instead, I choose to assume that people who support Bitcoin do so because they support the ideals of a free market, but I could be wrong.

If the ISO can finally manage to crank out C++11, despite the contentious issues and compromises that were ultimately required (and C++14 just two months ago too!), pretty much anything is possible IMO.

That's for a programming language not a protocol. Also see Andreas Antonopoulos's comment (http://www.reddit.com/r/Bitcoin/comments/2jw5pm/im_gavin_andresen_chief_scientist_at_the_bitcoin/clfnorn) on ossification considering hardware, which I also agree with.

I'm having trouble imagining a use case where embedded hardware with difficult-to-update software would connect to the P2P network, much less having anything to do with handling the blockchain, but my imagination isn't all that great. I also have trouble in general with any device whose purpose is highly security related that isn't software upgradeable. (Such things do exist today, and they're equally ill-advised.)


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 22, 2014, 07:34:40 PM
Another nice big block https://blockchain.info/block-height/326505 came through while we discuss the topic, yet the backlog of transactions https://blockchain.info/unconfirmed-transactions wasn't really huge, at just over 4000 (or are we just getting used to such big backlogs?).

We are bumping into the ceiling gentlemen.  It is safe to say we will begin to accumulate a bigger backlog pretty soon, when we start getting multiple blocks in a row near the current 1MB limit.  In my experience, in terms of queuing theory http://en.wikipedia.org/wiki/Queueing_theory, we can expect real signs of trouble as the average block size over a reasonable period of time, e.g. an hour or maybe more like a day, begins to exceed 70% of the maximum.  I'm going to try to build a model using JMT http://jmt.sourceforge.net/.
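
To show where that 70% rule of thumb comes from, here is a minimal sketch assuming a simple M/M/1 queue (real transaction arrivals and block intervals are lumpier, so treat it as an illustration only): the expected backlog grows sharply well before utilization reaches 100%.

Code:
    #include <cstdio>

    int main() {
        const double maxBlockBytes = 1000000.0;            // current MAX_BLOCK_SIZE
        for (int i = 5; i <= 9; ++i) {
            double rho = i / 10.0;                          // utilization of block space
            double demandBytes = rho * maxBlockBytes;       // demand arriving per ~10 minutes
            // For an M/M/1 queue the expected amount in the system grows as rho/(1-rho),
            // which is why trouble shows up well before 100% utilization.
            double backlogBlocks = rho / (1.0 - rho);
            std::printf("utilization %d%%: ~%.0f KB demand per block, backlog ~%.1f block-equivalents\n",
                        i * 10, demandBytes / 1000.0, backlogBlocks);
        }
        return 0;
    }

At 70% utilization the backlog is already a couple of blocks' worth of transactions, and it grows to roughly four blocks' worth by 80%.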

Perhaps we could two-step our way to the functional maximum.  We need to find the reliable functional maximum via testing.  To give ourselves some time to find it perhaps we could increase MAX_BLOCK_SIZE to the proposed 20MB right away (or as soon as is reasonable) and then work diligently to find the greatest workable maximum and then jump to it when we're ready.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 22, 2014, 07:39:25 PM
Consensus does not imply unanimous appeal. Although threads such as these help to flesh out different ideas and opinions, they do very little towards determining where consensus lies. This thread could represent a vocal majority just as easily as it could a small but vocal minority (on either side).

I had a complete reply typed out for all your points but my browser closed before I sent it  :'(

Ah well, I'm not re-typing it. The gist is I'm aware of the above, but don't think that's the case. I tend to think those in this part of the forum on this thread have sentiments which are not isolated. If we can gain consensus here we have a good chance in the wider community; if not then who knows, but it would become ever harder with passing time.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 22, 2014, 07:44:43 PM
We have less of a crisis than a request for comment.
Fears over lack of consensus are rhetoric to encourage people to accept shoddy work out of a mistaken sense of urgency.
So far there have been some well considered comments, though the responses by the requesters seem assumptive and tersely dismissive, as if comment were not honestly sought?
Frankly I'd expected better.  This is how science happens.  Peers review proposals, and both experts and consensus must be challenged to accomplish this.
It takes a mere engineer or technician to craft a patch, science produces novel results.  If we reach a crisis ahead of science happening, we can always patch.  Consensus is congealed in crisis, but crisis decisions are often also ill-considered.  For this I am grateful for Gavin's proposal.  I see it as a back-up plan in case these better solutions do not mature.

Richard Feynman in 1966 taught us that:
"Science is the belief in the ignorance of experts."

Essentially an "expert" may have well formed opinions, and ignore options due to confirmation bias.  Revisiting assumptions often proves valuable.

It is not time that hinders consensus, so much as quality.  The BIPs flow like water when they are solid improvements.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: btchris on October 22, 2014, 07:46:27 PM
Consensus does not imply unanimous appeal. Although threads such as these help to flesh out different ideas and opinions, they do very little towards determining where consensus lies. This thread could represent a vocal majority just as easily as it could a small but vocal minority (on either side).

I had a complete reply typed out for all your points but my browser closed before I sent it  :'(

Ah well, I'm not re-typing it. The gist is I'm aware of the above, but don't think that's the case. I tend to think those in this part of the forum on this thread have sentiments which are not isolated. If we can gain consensus here we have a good chance in the wider community; if not then who knows, but it would become ever harder with passing time.

Fair enough.

Aside: did you preview your post at any point? It's in your draft history (https://bitcointalk.org/index.php?action=drafts) if so...


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 22, 2014, 07:56:37 PM
Asside: did you preview your post at any point? It's in your draft history (https://bitcointalk.org/index.php?action=drafts) if so...

Thanks! Never noticed that before. Here is the full reply:

Consensus does not imply unanimous appeal. Although threads such as these help to flesh out different ideas and opinions, they do very little towards determining where consensus lies. This thread could represent a vocal majority just as easily as it could a small but vocal minority (on either side).

This I know, but tend to think it's not the case. Anyone in this part of the forum, on this thread, probably doesn't have some obscure minority point of view. Whatever the basis for their reasoning it's probably not isolated. I think if we can get some sort of consensus on a thread like this we can too in the wider community. If we can't it would be harder, maybe not impossible, but harder depending how people dug into their positions. The longer the wait the harder. If this were mid 2010 there would likely be zero problem. High profile devs (like Satoshi/Gavin) would simply explain the necessary way forward and the small community would move along. If we're talking 2019 I don't see that happening so easily, or at all actually.

This is exactly where a more formal governance model (as I mentioned) could help. It too would surely be imperfect, but just about anything would be better than determining consensus based on who writes the most posts, thoughtful though they may be.

I'd be in favor of more structure for progress, but you won't convince everybody. There will be purists that cry centralization.

If, for example, I knew that there was little support for gavin's version, I for one would be much more willing to compromise. But I simply don't know....

Yes, I think some sort of poll will be in order at some point. I haven't pushed that yet because I think people still need time to stew with their positions.

I'm having trouble imagining a use case where embedded hardware with difficult-to-update software would connect to the P2P network, much less having anything to do with handling the blockchain, but my imagination isn't all that great. I also have trouble in general with any device whose purpose is highly security related that isn't software upgradeable. (Such things do exist today, and they're equally ill-advised.)

There is always a long tail of technology out into the marketplace. Just because our community is at the cutting edge of technology doesn't mean everyone is. For example, I was surprised to learn of a story in the community I came from (Hacker News) about a very successful business that still ran BASIC (http://en.wikipedia.org/wiki/BASIC). This was used for order fulfillment, accounting, you name it. The business was profitable in the millions if I recall, but completely reliant on their workhorse infrastructure. It wasn't cutting edge, but it worked, and that's all that mattered. A similar story exists in the banking industry.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 22, 2014, 08:27:33 PM
Fears over lack of consensus are rhetoric to encourage people to accept shoddy work out of a mistaken sense of urgency.

You didn't answer this:

Quote
Please tell me if you agree an ossifying of the protocol - the fact it will become increasingly hard, probably impossible to make changes as adoption grows - is what we'll likely see.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: btchris on October 22, 2014, 08:32:31 PM
This I know, but tend to think it's not the case. Anyone in this part of the forum, on this thread, probably doesn't have some obscure minority point of view. Whatever the basis for their reasoning it's probably not isolated. I think if we can get some sort of consensus on a thread like this we can too in the wider community. If we can't it would be harder, maybe not impossible, but harder depending how people dug into their positions. The longer the wait the harder. If this were mid 2010 there would likely be zero problem. High profile devs (like Satoshi/Gavin) would simply explain the necessary way forward and the small community would move along. If we're talking 2019 I don't see that happening so easily, or at all actually.

Except for that last clause perhaps, no arguments here.

I'd be in favor of more structure for progress, but you won't convince everybody. There will be purists that cry centralization.

More structure can cut both ways. If done well (big if), it can reduce centralization by better distributing "votes." But you're right that you can't convince everybody.

There is always a long tail of technology out into the marketplace. Just because our community is at the cutting edge of technology doesn't mean everyone is. For example, I was surprised to learn of a story in the community I came from (Hacker News) about a very successful business that still ran BASIC (http://en.wikipedia.org/wiki/BASIC). This was used for order fulfillment, accounting, you name it. The business was profitable in the millions if I recall, but completely reliant on their workhorse infrastructure. It wasn't cutting edge, but it worked, and that's all that mattered. A similar story exists in the banking industry.

My first assignment after college (20ish years ago) was with a defense contractor maintaining a codebase written in... BASIC. In any case, point taken.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 22, 2014, 08:36:19 PM
More structure can cut both ways. If done well (big if), it can reduce centralization by better distributing "votes." But you're right that you can't convince everybody.

Agreed.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Syke on October 22, 2014, 10:45:03 PM
It's a free attack for a miner, and can arbitrarily kick anyone off the network (even if temporarily) who doesn't have sufficient bandwidth or ultimately enough disk space.

It's not free. The larger the block, the higher chance it has to be orphaned. No miner is going to inflate his blocks to reduce his chance to win a block race.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 22, 2014, 11:18:09 PM
It's a free attack for a miner, and can arbitrarily kick anyone off the network (even if temporarily) who doesn't have sufficient bandwidth or ultimately enough disk space.

It's not free. The larger the block, the higher chance it has to be orphaned. No miner is going to inflate his blocks to reduce his chance to win a block race.
Yes... and the orphan risk rate also decreases with bandwidth availability.
I continue to maintain that market forces can rightsize MAX_BLOCK_SIZE if an algorithm with a feedback mechanism can be introduced, and that doing so introduces both less centralization risk than an arbitrary patch, and less risk of future manual arbitrary adjustments.
Fix it right, fix it once.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: naplam on October 22, 2014, 11:26:51 PM
I don't know if this has been addressed before in this thread
I don't think adjusting the block size up or down or keeping it the same will have any effect on whether or not transaction fees will be enough to secure the network as the block subsidy goes to zero (and, as I said, I'll ask professional economists what they think).
But it's pretty simple: miners won't mine until they have enough transaction fees in a block to be able to pay what it costs them to mine. You'll have to increase the fees enough or increase adoption enough (more txs) to have a decent enough hashrate for the network to be reasonably secure.

Another side effect is that miners will have more power than they have now; they'll be able to establish cartels and request premiums for fast transactions. Right now we're used to transactions being treated more or less equally and fees being essentially zero, but that equal treatment will come to an end. There is no question the block size will need to have been increased by then in order to fit a larger number of txs; without many more txs than we have now, fees would be pretty high.

Ideally, mass adoption would be enough to keep txs cheap and net hashrate high enough. But I don't think we can expect that to happen while other issues remain unresolved. Bitcoin is not a very good payment system and the average person has no reason to adopt it yet. Some libertarians call for ignoring that and recognising Bitcoin for what it is now and its only strength: circumventing state and bank control over our money. But that is short-sighted too because Bitcoin is not designed to survive that way: without mass adoption it is just a very expensive and clunky trustless way of irreversibly sending money.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: naplam on October 22, 2014, 11:36:20 PM
It's a free attack for a miner, and can arbitrarily kick anyone off the network (even if temporarily) who doesn't have sufficient bandwidth or ultimately enough disk space.

It's not free. The larger the block, the higher chance it has to be orphaned. No miner is going to inflate his blocks to reduce his chance to win a block race.
Yes... and the orphan risk rate also decreases with bandwidth availability.
I continue to maintain that market forces can rightsize MAX_BLOCK_SIZE if an algorithm with a feedback mechanism can be introduced, and that doing so introduces both less centralization risk than an arbitrary patch, and less risk of future manual arbitrary adjustments.
Fix it right, fix it once.
Yeah, something that looks at block sizes over a past period of time to determine the next max block size for a certain period would be ok for example.
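
One possible shape for such a rule, sketched with invented names and numbers (nothing in this thread fixes them): recompute the cap every difficulty period from the median size of the blocks just mined, bounded so it can only move slowly in either direction.

Code:
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Hypothetical sketch: called once per 2016-block period with that period's block sizes.
    uint64_t NextMaxBlockSize(std::vector<uint64_t> recentSizes, uint64_t currentMax)
    {
        if (recentSizes.empty())
            return currentMax;

        // Median rather than mean, so a handful of stuffed or empty blocks moves it less.
        std::sort(recentSizes.begin(), recentSizes.end());
        uint64_t median = recentSizes[recentSizes.size() / 2];

        uint64_t proposed = median * 2;                      // headroom above typical demand
        uint64_t ceiling  = currentMax + currentMax / 10;    // grow at most 10% per period
        uint64_t floor_   = currentMax - currentMax / 10;    // shrink at most 10% per period

        return std::max(floor_, std::min(proposed, ceiling));
    }

The hard bounds are there because of the gaming concern raised earlier in the thread: they cap how fast either stuffed blocks or deliberately empty ones can drag the limit.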


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Syke on October 23, 2014, 12:15:18 AM
Yeah, something that looks at block sizes over a past period of time to determine the next max block size for a certain period would be ok for example.

Any feedback loop can be gamed. You might as well just pick a fixed +xx% per year and be done with it.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Gavin Andresen on October 23, 2014, 01:50:34 PM
I continue to maintain that market forces can rightsize MAX_BLOCK_SIZE if an algorithm with a feedback mechanism can be introduced, and that doing so introduces both less centralization risk than an arbitrary patch, and less risk of future manual arbitrary adjustments.
Fix it right, fix it once.

I think you are confusing MAX_BLOCKSIZE with the floating, whatever-the-market-demands blocksize.

MAX_BLOCKSIZE is, in my mind, purely a safety valve-- a "just in case" upper limit to make sure it doesn't grow faster than affordable hardware and software can support.

Ideally, we never bump into it. If we go with my proposal (increase to 20MB now, then double ten times over the next twenty years) I think it is reasonably likely the market-determined size will never bump into MAX_BLOCKSIZE.

I think it is very unlikely that in 20 years we will need to support more Bitcoin transactions than all of the cash, credit card and international wire transactions that happen in the world today (and that is the scale of transactions that a pretty-good year-2035 home computer and network connection should be able to support).
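
To make the schedule concrete (a sketch, reading "double ten times over the next twenty years" as one doubling every two years from a 20MB start):

Code:
    #include <cstdio>
    #include <cstdint>

    int main() {
        uint64_t maxBlockBytes = 20000000ULL;              // 20 MB starting point
        for (int year = 0; year <= 20; year += 2) {
            std::printf("year %2d: %llu MB\n", year,
                        (unsigned long long)(maxBlockBytes / 1000000));
            maxBlockBytes *= 2;                            // one doubling every two years
        }
        return 0;
    }

That ends around 20GB per block in year twenty, and works out to roughly 41% growth per year, a little under the 50%/year figure this thread's title questions.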


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 23, 2014, 02:01:55 PM
I continue to maintain that market forces can rightsize MAX_BLOCK_SIZE if an algorithm with a feedback mechanism can be introduced, and that doing so introduces both less centralization risk than an arbitrary patch, and less risk of future manual arbitrary adjustments.
Fix it right, fix it once.

I think you are confusing MAX_BLOCKSIZE with the floating, whatever-the-market-demands blocksize.

MAX_BLOCKSIZE is, in my mind, purely a safety valve-- a "just in case" upper limit to make sure it doesn't grow faster than affordable hardware and software can support.

Ideally, we never bump into it. If we go with my proposal (increase to 20MB now, then double ten times over the next twenty years) I think it is reasonably likely the market-determined size will never bump into MAX_BLOCKSIZE.

I think it is very unlikely that in 20 years we will need to support more Bitcoin transactions than all of the cash, credit card and international wire transactions that happen in the world today (and that is the scale of transactions that a pretty-good year-2035 home computer and network connection should be able to support).


No, I am not confused on this matter.  I don't know why you would imagine this.  
It seems weird and bizarre (as if you imagine anyone that disagrees with your proposal must obviously be confused or insane...)


Visa today averages about 2000 tx per second in non-peak times.
Yes, I think Bitcoin can surpass this.  There are other ways scalability is limited besides the block size; this is just one of them.
And we agree on the purpose of the block size limit.  Just not on how to set it.

You don't know what the future market will look like.  You don't know what bandwidth or storage will be available.  Neither do I or anyone else.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Gavin Andresen on October 23, 2014, 02:20:48 PM
Yes, I think Bitcoin can surpass this.  There are other ways scalability is limited besides the block size; this is just one of them.
And we agree on the purpose of the block size limit.  Just not on how to set it.

You don't know what the future market will look like.  You don't know what bandwidth or storage will be available.  Neither do I or anyone else.

When you respond to me saying patronizing things like "there are other problems with the way scalability is limited," I have trouble not thinking you are either confused or insane. Or just lazy, and did not read my "Scalability Roadmap" blog post.

It is certainly true that nobody can predict the future with 100% accuracy. We might get hit by an asteroid before I finish this sentence. (whew! didn't!)

But extrapolating current trends seems to me to be the best we can do-- we are just as likely to be too conservative as too aggressive in our assumptions.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 23, 2014, 02:28:55 PM
Yes, I think Bitcoin can surpass this.  There are other ways scalability is limited besides the block size; this is just one of them.
And we agree on the purpose of the block size limit.  Just not on how to set it.

You don't know what the future market will look like.  You don't know what bandwidth or storage will be available.  Neither do I or anyone else.

When you respond to me saying patronizing things like "there are other problems with the way scalability is limited," I have trouble not thinking you are either confused or insane. Or just lazy, and did not read my "Scalability Roadmap" blog post.

It is certainly true that nobody can predict the future with 100% accuracy. We might get hit by an asteroid before I finish this sentence. (whew! didn't!)

But extrapolating current trends seems to me to be the best we can do-- we are just as likely to be too conservative as too aggressive in our assumptions.

Are you simply unaware of the other ways scalability is limited and only focused on this one?
We can go into it if you like.
I was looking to keep this discussion on the more narrow issue.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Gavin Andresen on October 23, 2014, 02:30:20 PM
Are you simply unaware of the other ways scalability is limited and only focused on this one?
We can go into it if you like.
I was looking to keep this discussion on the more narrow issue.

Start a new thread.  HAVE you read my Scalability Roadmap blog post?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 23, 2014, 02:38:11 PM
Are you simply unaware of the other ways scalability is limited and only focused on this one?
We can go into it if you like.
I was looking to keep this discussion on the more narrow issue.

Start a new thread.  HAVE you read my Scalability Roadmap blog post?

I read it, I offered some criticisms in the thread by that title a while back.

It is nice and theoretical.  There are practical things it misses (such as the zero-cost mining that does occur in the real world from time to time when the equipment is not owned by the person controlling it).
There are also non-economic actors that do things for reasons other than money, and those working in larger economies (in which Bitcoin is only a minor part) with entirely different agendas.

It was a nice blog post and explained things in a simple way under ideal conditions.  I would refer people to it that need a primer on the matter.


In basic physics we give students problems that assume they are operating in a vacuum.  Basic economics also does this.  The real world is more complex.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: naplam on October 23, 2014, 02:47:56 PM

I don't know why there's so much discussion about the max block size when the real issue should be how you are going to increase adoption (more people willing to do more txs and pay more fees to keep the network secure) so that Bitcoin is sustainable once the block subsidy is much lower or zero in the future.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: btchris on October 23, 2014, 02:48:13 PM
NewLiberty, I have a quick question for you which will hopefully clarify your position in my mind.

Excluding DOS flooding or other malicious actors, do you believe it would ever be a beneficial thing to have the blocksize limit hit?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: amaclin on October 23, 2014, 02:49:38 PM
Quote
It is certainly true that nobody can predict the future with 100% accuracy.

I can.
Bitcoin will die in 5 months.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 23, 2014, 02:55:56 PM
NewLiberty, I have a quick question for you which will hopefully clarify your position in my mind.

Excluding DOS flooding or other malicious actors, do you believe it would ever be a beneficial thing to have the blocksize limit hit?

Primarily the limit is a safeguard, a backstop.  It is not meant to be a constraint on legitimate commerce.
It also serves to facilitate adoption and decentralization by keeping the ability to participate affordable.
Backstops are beneficial when hit, if they are protecting grandma from getting hit with the ball.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 23, 2014, 02:57:50 PM

I don't know why there's so much discussion about the max block size when the real issue should be how you are going to increase adoption (more people willing to do more txs and pay more fees to keep the network secure) so that Bitcoin is sustainable once the block subsidy is much lower or zero in the future.

There are WAY more people working on the adoption issue than there are on this one.  If your point is that I should go do something else, granted.  If this is done properly I would certainly be doing something else.  But consider that doing it properly is itself accretive to adoption as well.

In later days, Bitcoin will be supported by its tx fees.  Currently fees make up only about 1/300th of the miner payment.
We are doing about 1 tx per second or so, and the limit is about 7 tx per second, so now is the time to address this.
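
That ~7 tx per second ceiling follows directly from the 1MB cap; a quick check, assuming an average transaction of roughly 250 bytes (an assumption for illustration, not a figure from this thread):

Code:
    #include <cstdio>

    int main() {
        const double maxBlockBytes   = 1000000.0;   // MAX_BLOCK_SIZE
        const double avgTxBytes      = 250.0;       // assumed average transaction size
        const double secondsPerBlock = 600.0;       // 10-minute target interval

        double txPerSecond = (maxBlockBytes / avgTxBytes) / secondsPerBlock;
        std::printf("~%.1f tx/s at the 1 MB cap\n", txPerSecond);   // ~6.7
        return 0;
    }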

It takes time to do things right.  The alternative is that we just patch it and move on, sweeping the problem under the rug until the next time it needs to be patched.  I think that is what is likely, and that we (or our children) may regret it.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: btchris on October 23, 2014, 03:01:06 PM
Primarily the limit is a safeguard, a backstop.  It is not meant to be a constraint on legitimate commerce.
It also serves to facilitate adoption and decentralization by keeping the ability to participate affordable.
Backstops are beneficial when hit, if they are protecting grandma from getting hit with the ball.

Thank  you.

Would you agree, if it were possible (although it is not), that the blocksize limit should somehow be automatically tied to "the bandwidth and disk space an average enthusiast can afford"?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 23, 2014, 03:07:00 PM
Primarily the limit is a safeguard, a backstop.  It is not meant to be a constraint on legitimate commerce.
It also serves to facilitate adoption and decentralization by keeping the ability to participate affordable.
Backstops are beneficial when hit, if they are protecting grandma from getting hit with the ball.

Thank  you.

Would you agree, if it were possible (although it is not), that the blocksize limit should somehow be automatically tied to "the bandwidth and disk space an average enthusiast can afford"?

Yes.  Though you should also recognize that block size and bandwidth use are not so tightly tied.
Network and disk compression can compress data, the block size is measured after decompression.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: btchris on October 23, 2014, 03:13:42 PM
Thank  you.

Would you agree, if it were possible (although it is not), that the blocksize limit should somehow be automatically tied to "the bandwidth and disk space an average enthusiast can afford"?

Yes.  Though you should also recognize that block size and bandwidth use are not so tightly tied.
Network and disk compression can compress data, the block size is measured after decompression.

Fair enough- block size may not be the best parameter to tweak to maintain the stated "bandwidth and disk space" goal, but it is a technically simple parameter to tweak.

Do you believe that there exists somewhere in the blockchain a metric, let's call it X, which would serve as a good predictor of "the bandwidth and disk space an average enthusiast can afford"?

I think this is the same question, though you may disagree: Do you believe that this metric X has a causal relationship with "the bandwidth and disk space an average enthusiast can afford"?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 23, 2014, 03:20:16 PM
Thank  you.

Would you agree, if it were possible (although it is not), that the blocksize limit should somehow be automatically tied to "the bandwidth and disk space an average enthusiast can afford"?

Yes.  Though you should also recognize that block size and bandwidth use are not so tightly tied.
Network and disk compression can compress data, the block size is measured after decompression.

Fair enough- block size may not be the best parameter to tweak to maintain the stated "bandwidth and disk space" goal, but it is a technically simple parameter to tweak.

Do you believe that there exists somewhere in the blockchain a metric, let's call it X, which would serve as a good predictor of "the bandwidth and disk space an average enthusiast can afford"?

I think this is the same question, though you may disagree: Do you believe that this metric X has a causal relationship with "the bandwidth and disk space an average enthusiast can afford"?

The Socratic inquiry is a bit pedantic, don't you think?
Skip to your point please.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: btchris on October 23, 2014, 03:29:15 PM
Thank  you.

Would you agree, if it were possible (although it is not), that the blocksize limit should somehow be automatically tied to "the bandwidth and disk space an average enthusiast can afford"?

Yes.  Though you should also recognize that block size and bandwidth use are not so tightly tied.
Network and disk compression can compress data, the block size is measured after decompression.

Fair enough- block size may not be the best parameter to tweak to maintain the stated "bandwidth and disk space" goal, but it is a technically simple parameter to tweak.

Do you believe that there exists somewhere in the blockchain a metric, let's call it X, which would serve as a good predictor of "the bandwidth and disk space an average enthusiast can afford"?

I think this is the same question, though you may disagree: Do you believe that this metric X has a causal relationship with "the bandwidth and disk space an average enthusiast can afford"?

The Socratic inquiry is a bit pedantic, don't you think?
Skip to your point please.

Very well.

No metric that can be gleaned from the blockchain has a causal relationship with "the bandwidth and disk space an average enthusiast can afford", and therefore any such predictor has a high danger of being either too restrictive or not restrictive enough.

Using Nielsen's Law also has a danger of being inaccurate, however given that it has at least been historically accurate, I find this danger much lower.

Do you disagree? (let's leave ossification out of this just for the moment, if you don't mind)


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 23, 2014, 04:37:28 PM
Thank  you.

Would you agree, if it were possible (although it is not), that the blocksize limit should somehow be automatically tied to "the bandwidth and disk space an average enthusiast can afford"?

Yes.  Though you should also recognize that block size and bandwidth use are not so tightly tied.
Network and disk compression can compress data, the block size is measured after decompression.

Fair enough- block size may not be the best parameter to tweak to maintain the stated "bandwidth and disk space" goal, but it is a technically simple parameter to tweak.

Do you believe that there exists somewhere in the blockchain a metric, let's call it X, which would serve as a good predictor of "the bandwidth and disk space an average enthusiast can afford"?

I think this is the same question, though you may disagree: Do you believe that this metric X has a causal relationship with "the bandwidth and disk space an average enthusiast can afford"?

The Socratic inquiry is a bit pedantic, don't you think?
Skip to your point please.

Very well.

No metric that can be gleaned from the blockchain has a causal relationship with "the bandwidth and disk space an average enthusiast can afford", and therefore any such predictor has a high danger of being either too restrictive or not restrictive enough.

Using Nielsen's Law also has a danger of being inaccurate, however given that it has at least been historically accurate, I find this danger much lower.

Do you disagree? (let's leave ossification out of this just for the moment, if you don't mind)

Thank you.  You saved yourself a lot of time.  I had enough of the Socratic method in law school.  And we'll set aside ossification for your benefit even though it cuts against your position here.

Yes, I disagree. 
Both block size and transaction fees may be better tools than Nielsen's law, and the combination may be better still.  Dismissing inquiry on the matter is a missed opportunity.

Having worked in multinational telcos for a few decades designing resilient scalable systems serving 193+ countries, managing teams of security software engineers, and holding responsibility for security and capacity management, the concepts are not so foreign.  The ability of something like the block chain to provide consolidated data for rightsizing an application over time for its audience is ripe fruit.


Nielsen's law is less fit for purpose.
1) It has measured fixed-line connections.
- Usage demographics have changed over the period of history it covers.  More connections are mobile now than previously, and telco resources and growth have shifted.  There are other shifts to come.  These are not accommodated in the historical averages, nor are they factored into the current ones under Nielsen.

2) It is not a measure of the average enthusiast.
- It measures a first world enthusiast, whose means have improved with age, in a rich nation with good infrastructure in time of peace.  This is not average with respect to place and time through history.

3) Following bandwidth growth is not the only function of max block size, though tying it to the average enthusiast's capabilities (if that were possible) would be a suitable way of addressing other functions.
- ultimately it must accommodate the transactions whose fees maintain the network, and not constrain reasonable commerce.  These will be business decisions which may depend on the capacity and cost of the Bitcoin network and its associated fees, and they may radically bend the curve one way or another.  A fixed, non-responsive rate cannot adapt to a changing environment, and requiring central decision makers to accommodate (or not accommodate) such changes puts perverse incentives on Bitcoin developers.

I get that the core devs (and former core devs) do have to deal with a lot of crazies.  But what is not needed is the "either you agree with me or you are stupid, crazy, or lazy" dismissal of doing real science rather than mere technician's work.  Science is hard, but it is often worth it.

I recall Gavin's talk in San Jose in 2013 being a lot more nuanced on this matter, and it looked like there were real solutions coming, with a future-proof, market-sensitive approach.  That conference was better in many ways than the one TBF put on this year in Amsterdam.

That earlier stance was optimistic and well founded, yet it was abandoned.  The explanations for why it was abandoned don't seem compelling at all.


In my first proposal in this thread, I mirrored Gavin's Nielsen's-law approach with a simple algorithm that replicated it in effect but took its cues from the block chain (so growth would stop or accelerate if real-world circumstances changed).  This was simply an exercise to show that it would be easy enough to do.
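
To make that concrete, here is a reconstruction of the kind of rule described, with invented parameters (it is not the exact algorithm from earlier in the thread): the cap may grow at roughly Nielsen's-law rates, but each step is taken only when recent blocks show the space is actually being used.

Code:
    #include <cstdint>
    #include <vector>

    uint64_t NextPeriodMax(const std::vector<uint64_t>& recentSizes, uint64_t currentMax)
    {
        if (recentSizes.empty())
            return currentMax;

        uint64_t total = 0;
        for (uint64_t s : recentSizes)
            total += s;
        uint64_t avgSize = total / recentSizes.size();

        // ~50%/year spread across 26 two-week retarget periods is roughly +1.6% each.
        uint64_t grown = currentMax + (currentMax * 16) / 1000;

        // Take the growth step only if recent blocks averaged more than half full;
        // otherwise hold the limit, so growth pauses when real demand pauses.
        return (avgSize * 2 > currentMax) ? grown : currentMax;
    }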


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: jonny1000 on October 23, 2014, 06:08:28 PM
Commodity prices never drop to zero, no matter how abundant they are (assuming a reasonably free market-- government can, of course supply "free" goods, but the results are never pretty). The suppliers of the commodities have to make a profit, or they'll find something else to do.

Gavin
Thanks for being so responsive on this issue.  However, I am still not fully convinced by the blocksize economics post.

Suppliers of “commodities” need to make a profit, and in this case, if mining is competitive, the difficulty will adjust and miners' profit will reach a new equilibrium level.  The question is: what is the equilibrium level of difficulty?  Letting normal market forces work means the price reaches some level; however, this market has a “positive externality”, which is network security.  Using an artificially low blocksize limit could be a good, effective and transparent way of manipulating the market to ensure network security.

Network security can be looked at in two ways:
1.   The network hashrate
2.   Aggregate mining revenue per block (as in theory at least, the cost of renting the bitcoin network’s hashrate to attack it could be related to mining revenue)

Mining revenue is therefore an important factor in network security.  Please try to consider this carefully when considering the maximum blocksize issue.  To be clear, I am not saying it shouldn’t increase above 1MB, I think it should.  However please consider mining incentives once the block reward falls, as one of the factors.  Bandwidth and the related technical issues should not be the only consideration.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: btchris on October 23, 2014, 06:39:25 PM
NewLiberty, thanks again for taking the time to explain your point of view.

The reason, by the way, I was asking the earlier questions was because I actually didn't know the answers. In particular, this answer (happily) surprised me:

Excluding DOS flooding or other malicious actors, do you believe it would ever be a beneficial thing to have the blocksize limit hit?

Primarily the limit is a safeguard, a backstop.  It is not meant to be a constraint on legitimate commerce.
It also serves to facilitate adoption and decentralization by keeping the ability to participate affordable.
Backstops are beneficial when hit, if they are protecting grandma from getting hit with the ball.

Regarding Nielsen's law:
Yes, I disagree.  
Both block size and transaction fees may be better tools than Nielsen's law, and the combination may be better still.  Dismissing inquiry on the matter is a missed opportunity.

I don't disagree that Nielsen's law is inaccurate; however, I remain quite skeptical that there's something in the blockchain that can more accurately predict grandma's computing resources. Having said that, I think I'm misunderstanding your goal here (and I'm maybe OK with that): it seems as though you're not interested in using grandma's computing resources as a block size limit; you'd prefer a much lower bound at times when transaction volume isn't growing.

My biggest concern with the alternatives discussed in this thread isn't the potential for unchecked growth, but rather the potential for miners creating forced artificial scarcity (hence my first question, for which I expected a different response).

For example in the first algorithm you suggested, a majority mining cartel could artificially limit the max block size, preventing a mining minority from including transactions. It's this lack of free-market choice that I'd disagree with.

If the difference between average block size and max block size were an order of magnitude or two, I'd find it much more agreeable.

My (ideal) goals, in particular, would be to (1) never kick out grandma, and (2) never prevent a miner from including a legitimate transaction. (edited to add: those are in priority order)


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 23, 2014, 06:49:30 PM
NewLiberty, you seem to be ignoring me (https://bitcointalk.org/index.php?topic=815712.msg9295024#msg9295024).

Your sticking point, in my mind, is less about solving this issue than it is you feel people are not taking adequate time to find an input based solution to "fix it right".

As I said before, my goal isn't to be right. It's to find a solution which can pass the community so we're not stuck. Ideally it also meets Bitcoin's promises of decentralization and global service. I made a bullet point list (https://bitcointalk.org/index.php?topic=815712.msg9281473#msg9281473) outlining my thinking on the two proposals, but please note I didn't refer to any specific plan from you. I said any input-based solution, which includes any that take accurate measurements - a lack of consideration in uncovering such measurements isn't relevant. I fundamentally think that approach wouldn't work as well for reasons I outlined.

Would you make a bullet point list of your likes and dislikes on the two proposed paths so we can at least see in a more granular way where our beliefs differ?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 23, 2014, 07:07:35 PM
NewLiberty, you seem to be ignoring me (https://bitcointalk.org/index.php?topic=815712.msg9295024#msg9295024).

Your sticking point, in my mind, is less about solving this issue than it is you feel people are not taking adequate time to find an input based solution to "fix it right".

As I said before, my goal isn't to be right. It's to find a solution which can pass the community so we're not stuck. Ideally it also meets Bitcoin's promises of decentralization and global service. I made a bullet point list (https://bitcointalk.org/index.php?topic=815712.msg9281473#msg9281473) outlining my thinking on the two proposals, but please note I didn't refer to any specific plan from you. I said any input-based solution, which includes any that take accurate measurements - a lack of consideration in uncovering such measurements isn't relevant. I fundamentally think that approach wouldn't work as well for reasons I outlined.

Would you make a bullet point list of your likes and dislikes on the two proposed paths so we can at least see in a more granular way where our beliefs differ?
Oh?  And I thought you were ignoring me.

I understand your goal, and your ossification fears.  I don't mean to be ignoring you, only thought this was already fully addressed.

If your ossification fears are justified (and they may be), then (I would argue) it is more important to do it right than to do it fast, as the ossification would be progressive, and changes more difficult in years to come.
I understand your position to be that a quick fix to patch this element is needed, that we are at a crisis, and it may be now or never.
I disagree.  If it were a crisis (even in an ossified state), consensus would be easy, and even doing something foolish would be justified and broadly accepted.

Unless you are Jeremy Allaire, I probably want this particular issue fixed even more than you do, but I would rather see it fixed for good and all, than continuously twiddled with over the decades to come.

To your bullet point assignment...  maybe.
One of my publishers has been pestering me for a paper so I will likely 'write something'.  I'll try not to point to it and say "but didn't you read this" as if it were the definitive explanation of everything, because it surely will not be that.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: BitMos on October 23, 2014, 07:44:24 PM
Finally, good news; even Satoshi said the 1MB limit is temporary.

But easier would be to square the max block size at every block reward halving, to keep Bitcoin simple (sketched below)...

50 BTC = 1 MB
25 BTC = 2 MB
12.5 BTC = 4 MB
6.25 BTC = 16 MB
3.125 BTC = 256 MB


and so on.
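For what it's worth, the rule as listed is "double at the first halving, then square at every further halving". A tiny sketch (purely illustrative) that reproduces the table:

Code:
# Reproduces the schedule above: the max doubles at the first halving,
# then is squared at every further halving (illustration only).
reward, size_mb = 50.0, 1
while reward >= 3.125:
    print("%7.3f BTC = %d MB" % (reward, size_mb))
    size_mb = size_mb * 2 if size_mb == 1 else size_mb ** 2
    reward /= 2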

(edit by me in bold)

you're welcome.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 23, 2014, 09:02:06 PM
I dunno; here I am watching for blocks at or near the limit of 1MB and along comes ... it just seems strange to me https://blockchain.info/block-height/326639 -- apparently the miner couldn't be bothered to include even one transaction other than the coinbase transaction in the block?  Could the pool have been empty from his point of view?

Miner algorithm: listen for a block to be broadcast and immediately begin searching for the next block with only their coinbase transaction in it, ignore all other transactions.  Is there some sort of advantage to ignoring the other transactions?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 23, 2014, 09:05:43 PM
Hmm, it came only 19 seconds (if the timestamps can be trusted) after the previous one; lucky guy.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 23, 2014, 09:13:09 PM
I'm trying to build the JMT queuing model of Bitcoin.  What is the measured distribution http://en.wikipedia.org/wiki/Probability_distribution of times between blocks?  https://blockchain.info/charts/avg-confirmation-time are points averaged over 24 hours which isn't helping me see it.  I know the target is 10 minutes but it's clear that is not being achieved consistently.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: solex on October 23, 2014, 09:28:00 PM
David, this is what you need

At this point it might be advisable to relax the presentation with some charts based on actual data ;).

Unfortunately I'm using quite old data (missing some blocks so the longest chain ends at block 210060), as you can see by this query result.

Code:
     last_block_date     | blocks_in_db | avg_blocktime_seconds | avg_blocktime_minutes
------------------------+--------------+-----------------------+-----------------------
 2012-11-29 01:19:00+01 |       210060 |  586.2221984194991907 |    9.7769351613824622

So here we go: some histograms, click images for slightly larger versions.

https://i.imgur.com/Vd9dgl.png (https://i.imgur.com/Vd9dg.png)

https://i.imgur.com/kfCUBl.png (https://i.imgur.com/kfCUB.png)

https://i.imgur.com/yqkELl.png (https://i.imgur.com/yqkEL.png)

Observations and clarifications/notes:

  • I'm looking at overlapping sequences, so a block that takes 127 minutes to calculate would result in multiple sequences being counted
  • The case of a 3-block sequence taking at least 127 minutes to find happened 759 out of 210,060 times (0.3613%)
  • The case of a 4-block sequence taking at least 127 minutes to find happened 1,551 out of 210,060 times (0.7383%)
  • 135,421 blocks (out of 210,060) have been solved in less than 10 minutes (64.47%)
  • There can be negative block times because miners' clocks can be unsynced.
  • The block that took longest to calculate was block #2 (7719 minutes). It might've been block #1, but we don't know how long that took.
  • I put a confusing date on the upper right corner; the data is from 2012-11-29
  • first and last "bins" include the rest of the data (for example the last bin contains the number of blocks that took 127 minutes or more to find)
  • Surprisingly to me the "bin" with the most blocks is the 1-2 minutes bin (1:00 to 1:59.999) (bar labeled "1" in the charts)

queries I used are here: http://pastebin.com/tPg1RQtG

Does this stuff look like it could be correct to you guys?
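As a quick sanity check on those figures (my own back-of-the-envelope, not part of the dataset): if block discovery is roughly a Poisson process, inter-block times should be approximately exponential with the measured mean of ~9.78 minutes, which predicts about 64% of blocks arriving in under 10 minutes, close to the 64.47% above.

Code:
import math

mean_minutes = 9.78   # measured average block time from the query above

def frac_under(t):
    """P(inter-block time < t minutes) for an exponential distribution."""
    return 1.0 - math.exp(-t / mean_minutes)

print("predicted under 10 min: %.2f%% (observed: 64.47%%)" % (100 * frac_under(10)))
# A pure exponential puts the most mass in the 0-1 minute bin, not 1-2, so the
# observed 1-2 minute peak probably reflects timestamp noise (consistent with
# the negative block times noted above).
print("predicted 0-1 min bin: %.2f%%, 1-2 min bin: %.2f%%"
      % (100 * frac_under(1), 100 * (frac_under(2) - frac_under(1))))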

EDIT: some corrections


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 24, 2014, 12:35:57 AM
NewLiberty, we can continue back and forth trying to sway one another and who knows how that will turn out. How about the following compromise:

We implement Gavin's plan - go to 20MB blocks and 50% annual increases thereafter. That is the default. However, we add a voting component. We make it possible to restrain the increase by, say, 1/2 if enough blocks contain some flag in the block header. It could also be used to increase the scheduled increase by 1/2 if the model is too conservative for computing growth. There was a header variable mentioned before, I think, in the block size debate the first time around.

I think this is the best of both worlds. It provides a measure of predictability, and simplicity, while allowing the community to bend capacity more in line with growth at the time if needed. What do you think?
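To make the mechanics concrete, here is a minimal sketch of how such a flag could bend the schedule. The window size, threshold and flag names below are my own placeholders, not part of the proposal:

Code:
BASE_GROWTH = 0.50    # default annual max-block-size growth
WINDOW = 1000         # look-back window in blocks (assumed)
THRESHOLD = 0.75      # supermajority needed to bend the schedule (assumed)

def effective_growth(flags):
    """flags: entries of 'slower', 'faster' or None from recent block headers.
    Returns the annual growth rate applied at the next adjustment."""
    votes = flags[-WINDOW:]
    slower = votes.count('slower') / float(len(votes))
    faster = votes.count('faster') / float(len(votes))
    if slower >= THRESHOLD:
        return BASE_GROWTH * 0.5      # restrain the scheduled increase by 1/2
    if faster >= THRESHOLD:
        return BASE_GROWTH * 1.5      # boost the scheduled increase by 1/2
    return BASE_GROWTH

# Example: 800 of the last 1000 blocks flag 'slower' -> growth drops to 25%/year
print(effective_growth(['slower'] * 800 + [None] * 200))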


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 24, 2014, 03:21:08 AM
NewLiberty, we can continue back and forth trying to sway one another and who knows how that will turn out. How about the following compromise:

We implement Gavin's plan - go to 20MB blocks and 50% annual increases thereafter. That is the default. However, we add a voting component. We make it possible to restrain the increase by, say, 1/2 if enough blocks contain some flag in the block header. It could also be used to increase the scheduled increase by 1/2 if the model is too conservative for computing growth. There was a header variable mentioned before, I think, in the block size debate the first time around.

I think this is the best of both worlds. It provides a measure of predictability, and simplicity, while allowing the community to bend capacity more in line with growth at the time if needed. What do you think?

I don't recall Gavin ever proposing what you are suggesting here.  The 1st round was 50% per year; the 2nd proposal was 20MB + 40% per year, yes?


I'm less a fan of voting than you might imagine.  
It is mostly useful when there are two bad choices rather than one good one, and a choice is forced.  I maintain hope for a good solution yet, one that gives us an easy consensus.

This flag gives only miners the votes?  This doesn't seem better than letting the transactions or the miner fee be the votes?
It's better than a bad idea though, as it does provide some flexibility and sensitivity to future realities, and relies on proof of work for voting.
It fails the test of being a self-regulating approach, and remains based on arbitrary guesses.
So I don't think it is the "best" of either world, but also not the worst.  More like an engineering trade-off.

Presumably this is counting years by blocks, yes?
This would give a 100MB max block size in 2018 and gigabyte blocks 6 years later, but blocks are coming faster than the years, so it likely wouldn't take that long.

At such increases, Bitcoin could support (current) Visa processing peak rates within a decade, and a lot sooner if the votes indicate faster and the block solving doesn't slow too much.  (perhaps as soon as 6 years, by 2020)
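Rough numbers behind that (my own arithmetic, assuming calendar-year compounding and ~500 bytes per transaction):

Code:
# Back-of-the-envelope projection of "20 MB + 50%/year", counting calendar
# years from 2014; counting by block height would run somewhat faster.
size_mb = 20.0
for year in range(2014, 2026):
    tps = size_mb * 1e6 / 500 / 600      # ~500 bytes/tx assumed
    print("%d: %7.0f MB max (~%5.0f tx/s)" % (year, size_mb, tps))
    size_mb *= 1.5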

The idea has a lot of negatives.  Possibly it's fixable.
Thank you for bringing forward the suggestion.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 24, 2014, 03:38:46 AM
My (ideal) goals, in particular, would be to (1) never kick out grandma, and (2) never prevent a miner from including a legitimate transaction. (edited to add: those are in priority order)

We share these design goals, and priority.  They aren't comprehensive for me though.

I'd add (at least)
3) provide conditions conducive for mining when the transaction fees are supporting the network  
4) avoid future changes on the same issue.
And of course avoid introducing new unmitigated risks (more of a criteria than a goal).


If we wanted to be brutally pedantic on ourselves we could kick around the definitions of who grandma might be, and what makes a transaction legitimate, but I agree with the sentiment entirely.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Syke on October 24, 2014, 05:31:59 AM
If we wanted to be brutally pedantic on ourselves we could kick around the definitions of who grandma might be, and what makes a transaction legitimate, but I agree with the sentiment entirely.

I'd rather not implement a grandma-cap on bitcoin's growth. Grandma doesn't need to run a full node. She can use an SPV or other thin client.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: -ck on October 24, 2014, 05:53:35 AM
I dunno; here I am watching for blocks at or near the limit of 1MB and along comes ... it just seems strange to me https://blockchain.info/block-height/326639 -- apparently the miner couldn't be bothered to include even one transaction other than the coinbase transaction in the block?  Could the pool have been empty from his point of view?
This is a common optimisation virtually all crappy pools use shortly after a new block since their software can't scale to get miners to work on the new block full of transactions quickly, they just broadcast a blank sheet for the first lot of work after a block change. Most pools blame bitcoind for being so slow to accept a block and generate a new template, and this is actually quite slow, but it's obviously more than just this (since I don't ever include transaction-free blocks in my own pool software).
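To put a rough number on how often that behaviour shows up (the template-build delays below are assumed figures, not measurements of any pool): because block discovery is memoryless, a pool that hashes on a transaction-free template for t seconds after each block will find roughly 1 - e^(-t/600) of its blocks empty.

Code:
import math

# Fraction of a pool's blocks expected to be empty if it spends t seconds
# hashing on a transaction-free "stub" template after each block change.
# The delays are assumed for illustration.
for t in (1, 2, 5, 10, 30):
    p = 1 - math.exp(-t / 600.0)
    print("%2ds template delay -> ~%.2f%% empty blocks" % (t, 100 * p))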


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: solex on October 24, 2014, 06:01:22 AM
I dunno; here I am watching for blocks at or near the limit of 1MB and along comes ... it just seems strange to me https://blockchain.info/block-height/326639 -- apparently the miner couldn't be bothered to include even one transaction other than the coinbase transaction in the block?  Could the pool have been empty from his point of view?
This is a common optimisation virtually all crappy pools use shortly after a new block since their software can't scale to get miners to work on the new block full of transactions quickly, they just broadcast a blank sheet for the first lot of work after a block change. Most pools blame bitcoind for being so slow to accept a block and generate a new template, and this is actually quite slow, but it's obviously more than just this.

Gee. When gmaxwell said that there was a lot of low hanging fruit, in terms of possible improvements, perhaps it was not obvious just how low and how dangling some of that fruit actually is.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: btchris on October 24, 2014, 12:38:05 PM
If we wanted to be brutally pedantic on ourselves we could kick around the definitions of who grandma might be, and what makes a transaction legitimate, but I agree with the sentiment entirely.

I'd rather not implement a grandma-cap on bitcoin's growth. Grandma doesn't need to run a full node. She can use an SPV or other thin client.

Although we can argue about details, we (or at least I) have been using "grandma" as shorthand for "Bitcoin hobbyist", which Gavin had equated to "somebody with a current, reasonably fast computer and Internet connection, running an up-to-date version of Bitcoin Core and willing to dedicate half their CPU power and bandwidth to Bitcoin." Is that reasonable?
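For a rough sense of what that definition implies in bandwidth terms (entirely my own back-of-the-envelope; the relay overhead factor is an assumption):

Code:
# Approximate bandwidth for a full node at various max block sizes.
# RELAY_FACTOR is the assumed extra bytes sent/received per block byte when
# relaying transactions and blocks to peers; real overhead varies widely.
RELAY_FACTOR = 8
for size_mb in (1, 20, 100, 1000):
    raw_kbps = size_mb * 1e6 / 600 / 1000        # block data alone, kB/s
    total_kbps = raw_kbps * (1 + RELAY_FACTOR)
    print("%4d MB blocks: %7.1f kB/s raw, ~%7.0f kB/s with relay"
          % (size_mb, raw_kbps, total_kbps))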


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: btchris on October 24, 2014, 12:53:48 PM
My (ideal) goals, in particular, would be to (1) never kick out grandma, and (2) never prevent a miner from including a legitimate transaction. (edited to add: those are in priority order)

We share these design goals, and priority.  They aren't comprehensive for me though.

I'd add (at least)
3) provide conditions conducive for mining when the transaction fees are supporting the network  
4) avoid future changes on the same issue.
And of course avoid introducing new unmitigated risks (more of a criteria than a goal).

It seems to me that (1) and (2) could both be implemented with either the static (Gavin's) method or some reactive method, although I suspect the reactive method can do (1) more safely/conservatively. If a reactive method can do (2) safely enough (I suspect it could), I'd prefer it. A reactive method seems much more likely to meet (4).

If I understand you correctly, (3) takes us back to an artificial cap on block size to prevent a perceived, as Gavin put it, "Transaction Fee Death Spiral." I've already made my rant on that subject (https://bitcointalk.org/index.php?topic=815712.msg9292869#msg9292869); no need to repeat it.

I'm of the opinion that reaching consensus on (3) is more important, and possibly more difficult, than any static-vs-reactive consensus. (3) is an economic question, whereas static-vs-reactive is closer to an implementation detail.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 24, 2014, 01:34:04 PM
If we wanted to be brutally pedantic on ourselves we could kick around the definitions of who grandma might be, and what makes a transaction legitimate, but I agree with the sentiment entirely.

I'd rather not implement a grandma-cap on bitcoin's growth. Grandma doesn't need to run a full node. She can use an SPV or other thin client.
lol @ grandma-cap for Bitcoin
We agree, in this case "grandma" is substituting for "bitcoin enthusiast" WRT bandwidth availability, I think.
I'm guessing he was thinking that our enthusiast might be living with grandma, or maybe is grandma, IDK?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: btchris on October 24, 2014, 01:40:20 PM
I'd rather not implement a grandma-cap on bitcoin's growth. Grandma doesn't need to run a full node. She can use an SPV or other thin client.
lol @ grandma-cap for Bitcoin
We agree, in this case "grandma" is substituting for "bitcoin enthusiast" WRT bandwidth availability, I think.
I'm guessing he was thinking that our enthusiast might be living with grandma, or maybe is grandma, IDK?

Actually I picked the term up from NewLiberty's post (https://bitcointalk.org/index.php?topic=815712.msg9303401#msg9303401), but yes that's what I was assuming it meant. Should the term "grandma-cap" make it into the BIP?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 24, 2014, 01:59:47 PM
I'd rather not implement a grandma-cap on bitcoin's growth. Grandma doesn't need to run a full node. She can use an SPV or other thin client.
lol @ grandma-cap for Bitcoin
We agree, in this case "grandma" is substituting for "bitcoin enthusiast" WRT bandwidth availability, I think.
I'm guessing he was thinking that our enthusiast might be living with grandma, or maybe is grandma, IDK?

Actually I picked the term up from NewLiberty's post (https://bitcointalk.org/index.php?topic=815712.msg9303401#msg9303401), but yes that's what I was assuming it meant. Should the term "grandma-cap" make it into the BIP?

Ah yes, the backstop reference, grandma at the ball park watching grandkid play, protected by the backstop.
http://mlblogscookandsonbats.files.wordpress.com/2011/03/1720-20grandma20and20kellan20at20spring20training1.jpg
It's that fence behind the kid that protects them from the wild pitch and thrown bat.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 24, 2014, 02:37:20 PM
My (ideal) goals, in particular, would be to (1) never kick out grandma, and (2) never prevent a miner from including a legitimate transaction. (edited to add: those are in priority order)

We share these design goals, and priority.  They aren't comprehensive for me though.

I'd add (at least)
3) provide conditions conducive for mining when the transaction fees are supporting the network  
4) avoid future changes on the same issue.
And of course avoid introducing new unmitigated risks (more of a criteria than a goal).

It seems to me that (1) and (2) could both be implemented with either the static (Gavin's) method or some reactive method, although I suspect the reactive method can do (1) more safely/conservatively. If a reactive method can do (2) safely enough (I suspect it could), I'd prefer it. A reactive method seems much more likely to meet (4).

If I understand you correctly, (3) takes us back to an artificial cap on block size to prevent a perceived, as Gavin put it, "Transaction Fee Death Spiral." I've already made my rant on that subject (https://bitcointalk.org/index.php?topic=815712.msg9292869#msg9292869); no need to repeat it.

I'm of the opinion that reaching consensus on (3) is more important, and possibly more difficult, than any static-vs-reactive consensus. (3) is an economic question, whereas static-vs-reactive is closer to an implementation detail.

I think you are missing the point entirely on #3, probably my fault for being overly brief there and not really explaining the point in this context.

The artificial cap on block size would fail the test of #3.  So would too high a max block size, if node maintenance and storage costs make processing transactions infeasible when supported only by TX fees.  We have never seen a coin yet that survives on transaction-fee-supported mining.  Bitcoin survives on its inflation.  What is sought there is to compensate at an appropriate level.  We don't know what that level is, but it may be something like a fraction of a percent of all coins.
Currently TX fees are 1/300th of miner compensation.  After the next halving, we may be around 1/100 if TX volume continues to grow.  Fees will still be well within marginal costs and so still not significant.
This is fundamentally a centralisation risk, and a security risk, hence the emphasis on not creating perverse incentives.
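Putting rough numbers on that ratio (my own arithmetic; the post-halving fee-growth factors are assumptions):

Code:
# What "fees are 1/300th of miner compensation" implies today, and where the
# ratio could sit after the next halving under assumed fee growth.
subsidy_now = 25.0                 # BTC per block
fees_now = subsidy_now / 299.0     # so fees / (subsidy + fees) = 1/300
print("fees per block now: ~%.3f BTC" % fees_now)

subsidy_next = 12.5                # after the next halving
for growth in (1.0, 1.5, 3.0):     # assumed fee growth by then
    fees = fees_now * growth
    print("fee growth x%.1f -> fees are ~1/%.0f of compensation"
          % (growth, (fees + subsidy_next) / fees))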

Much mining can be done with a single node.  Costs are discrete between nodes and mining and asymmetric.  If costs for node maintenance overwhelm the expected rewards at <x hashrate / y nodes then we lose all mining under that hashrate irrespective of other costs, and we lose nodes per hashrate.  People look at hashrate to determine network health and not so much at node population and distribution, but both are essential.

It is not so much an artificial limit created for profitability; it is a technical limit to preserve network resilience through node population and distribution, by being sensitive to the ratio.  Much of the discussion on blocksize economics treats mining and node maintenance as the same thing.  They aren't the same thing at all.  It's more a chain length vs hashrate issue.
In later years, there is a very long chain, and most coins transacted will be the most recent.  Old coins are meant to get to move for free, which reduces the UTXO block depth.  We don't know how it will play out; it's uncharted territory.  #3 is more about not creating a perverse incentive to unbalance this (one that wouldn't materialize until the distant future) than about encouraging compensation through an artificial constraint on supply.

For better clarity I should swap #3 for

3) provide conditions conducive for node maintenance and mining when the transaction fees are supporting the network by avoiding perverse incentives.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: btchris on October 24, 2014, 04:46:39 PM
Much mining can be done with a single node.  Costs are discrete between nodes and mining[,] and [those costs are] asymmetric.
...
People look at hashrate to determine network health and not so much at node population and distribution, but both are essential.

(Text in brackets added by me to indicate what I understood you to be saying.) Agreed.

If costs for node maintenance overwhelm the expected rewards at <x hashrate / y nodes then we lose all mining under that hashrate irrespective of other costs, and we lose nodes per hashrate.

We lose nodes per hashrate, which is bad and leads to (or rather continues the practice of) miners selling their votes to node operators, but I don't see how we lose hashrate; we just centralize control of hashrate to amortize node maintenance costs (still bad).

In later years, there is a very long chain, and most coins transacted will be the most recent.
...
We don't know how it will play out; it's uncharted territory.  #3 is more about not creating a perverse incentive to unbalance this (one that wouldn't materialize until the distant future) than about encouraging compensation through an artificial constraint on supply.

So long as the grandma-cap can be maintained, it seems like all of your discussion would already be covered. The hope has always been that new techniques (IBLT, tx pruning, UTXO commitments, etc.) will keep this possible.

However there is no way to see into the distant future. Any chosen grandma-cap could be incorrect, and any cap more restrictive than that to meet #3 could also be incorrect. I don't disagree that #3 is desirable, only that it may not be implementable. Having said that, as long as a more restrictive cap has little to no chance of interfering with #2 (never prevent a miner from including a legitimate tx), I'd have no problem with it.

3) provide conditions conducive for node maintenance and mining when the transaction fees are supporting the network by avoiding perverse incentives.

TL;DR - This goal implies the "only permit an exponential increase in the max blocksize during periods of demand" rule in your initial example, correct?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Syke on October 24, 2014, 04:48:29 PM
lol @ grandma-cap for Bitcoin
We agree, in this case "grandma" is substituting for "bitcoin enthusiast" WRT bandwidth availability, I think.
I'm guessing he was thinking that our enthusiast might be living with grandma, or maybe is grandma, IDK?

Some people want everyone to run a full node. I'm suggesting that's not a good idea. We should not limit bitcoin growth such that everyone can run a full node. Not everyone needs to run a full node to benefit from bitcoin.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: BitMos on October 24, 2014, 04:56:09 PM

This is a common optimisation virtually all crappy pools use shortly after a new block since their software can't scale to get miners to work on the new block full of transactions quickly, they just broadcast a blank sheet for the first lot of work after a block change. Most pools blame bitcoind for being so slow to accept a block and generate a new template, and this is actually quite slow, but it's obviously more than just this (since I don't ever include transaction-free blocks in my own pool software).

It's a personal honor to read from you.  :o


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: btchris on October 24, 2014, 04:57:58 PM
lol @ grandma-cap for Bitcoin
We agree, in this case "grandma" is substituting for "bitcoin enthusiast" WRT bandwidth availability, I think.
I'm guessing he was thinking that our enthusiast might be living with grandma, or maybe is grandma, IDK?

Some people want everyone to run a full node. I'm suggesting that's not a good idea. We should not limit bitcoin growth such that everyone can run a full node. Not everyone needs to run a full node to benefit from bitcoin.

A "bitcoin enthusiast" is not everyone. See Gavin's definition I just quoted (https://bitcointalk.org/index.php?topic=815712.msg9314055#msg9314055) above. Without at least some limit, bitcoin nodes become centralized. Also, the suggested exponential upper limits seem very unlikely to limit bitcoin growth.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 24, 2014, 05:39:00 PM
I don't recall Gavin ever proposed what you are suggesting here.  1st round was 50% per year, 2nd proposal was 20MB + 40% per year, yes?

His Scalability Roadmap (https://bitcoinfoundation.org/2014/10/a-scalability-roadmap/) calls for 50% annually. He has since mentioned 40% being acceptable, but none of his proposals seem to have been accepted by you, so what's the difference? Do you feel 40% is better than 50%? Would 30% be even better? What is your fear?

That's why I asked you for a bullet point list (not a paper), to get an idea of your thinking in specifics of concern.

I'm less a fan of voting than you might imagine.  
It is mostly useful when there are two bad choices rather than one good one, and a choice is forced.  I maintain hope for a good solution yet.  To give us an easy consensus.

This is the problem I have with you. You seem to think there is some mystical silver bullet that simply hasn't been discovered yet, and you implore everyone to keep searching for it, for once it's found the population will cheer, exalt it to the highest and run smiling to the voting booths in clear favor. That is a pipe dream. Somebody is always going to see things differently. There is no ideal solution because everything is subjective and arbitrary in terms of priority of the advocate. The only ideal solution is to remove all areas of concern, meaning no cap at all but with everyone in the world having easy access to enough computing resources to keep up with global transaction numbers. That's not our situation so we have to deal with things as best we can. Best, again, is completely subjective. The people at keepbitcoinfree.org don't want to change the 1MB now at all. They think, for Tor and other considerations, it's necessary, but I agree with Syke that not everyone needs to be able to run a full node.

The idea has a lot of negatives.  Possibly its fixable.
Thank you for bringing forward the suggestion.

Then suggest something. At least I tried moving in a direction toward your priority. Can we see that from you? Again - what I think is most important involves some measure of simplicity and predictability. We're building a global payment system, to be used potentially by enormous banks and the like; this isn't aiming to be some arbitrary system for a few geeks trading game points.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 24, 2014, 06:23:23 PM
1) What is the maximum value for MAX_BLOCKSIZE functionally possible given the APIs being used?

2) What is the maximum value which has been tested successfully?  Have any sizes been tested that fail?

3) Why not just set it to that value once right now (or soon) to the value which works and leave it at that?
       3.1) What advantage is there to delaying the jump to maximum tested value?

No miner is consistently filling up even the tiny 1MB blocks possible now.  We see no evidence of self-dealing transactions.  What are we afraid of?

Heck, jump to 20MB and grow it at 40% for 20 years; that's fine with me *but* be prepared to alter that if there be a need.  How will we know we need to jump it up faster?  A few blocks at the current maximum are hardly a reason to panic, but when the pool of transactions waiting to be included in a block starts to grow without any apparent limit, then we've waited too long.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 24, 2014, 06:40:27 PM
Heck, jump to 20MB and grow it at 40% for 20 years; that's fine with me *but* be prepared to alter that if there be a need.

That's fine by me. My last proposal does this. What does everyone think? I say we start building some idea of what can pass community consensus. We may need to leave NewLiberty searching for the legendary ideal solution.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: andrewbb on October 24, 2014, 06:44:09 PM
Maybe this is a stupid question, but...


Miners are in it for the fees (plus mining coins).

Why not just set a minimum fee in relation to the probability of mining a coin?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: solex on October 24, 2014, 07:27:59 PM
Heck, jump to 20MB and grow it at 40% for 20 years; that's fine with me *but* be prepared to alter that if there be a need.

That's fine by me. My last proposal does this. What does everyone think? I say we start building some idea of what can pass community consensus. We may need to leave NewLiberty searching for the legendary ideal solution.

Since the monumental debate which occurred in February 2013 (which you will remember), I have despaired for 18 months that nothing will get done until the limit is actually hit, throughput is crippled for days, and world's press laughs at Bitcoin for years afterward.

Which is why I even argued for a 3MB cap, just to buy some time.

https://bitcointalk.org/index.php?topic=709970.msg8129058#msg8129058

That 40% for 20 years is more than fine by me :-)


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 24, 2014, 07:43:13 PM
Since the monumental debate which occurred in February 2013 (which you will remember), I have despaired for 18 months that nothing will get done until the limit is actually hit, throughput is crippled for days, and world's press laughs at Bitcoin for years afterward.

I do remember. I'm really hoping it doesn't take running into the limit to provide impetus to take action. Not only would we likely get negative press as you mention, but it would highlight the issue to people completely oblivious to it. If we can't get people with good knowledge of the subject to agree how would we fare after adding even more noise to the signal?

Which is why I even argued for a 3MB cap, just to buy some time.

https://bitcointalk.org/index.php?topic=709970.msg8129058#msg8129058

I think half-measures only increase the likelihood we can't get a comprehensive solution.

That 40% for 20 years is more than fine by me :-)

Thanks for your feedback and for being IMO reasonable :)


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Syke on October 24, 2014, 08:12:35 PM
1) What is the maximum value for MAX_BLOCKSIZE functionally possible given the APIs being used?

There was a 32 MB limit to messages (which I assume still exists), so 32 MB is the max it could simply be raised to right now without further code changes. Breaking blocks into multiple messages would be a significant code change to go above 32 MB.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 24, 2014, 09:25:29 PM
So who are we kidding with this?  Are we doing the block segment code now or later?  Bump it to 32MB now to buy us time to do the block segment code.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: solex on October 24, 2014, 09:30:47 PM
Breaking blocks into multiple messages would be a significant code change to go above 32 MB.

So who are we kidding with this?  Are we doing the block segment code now or later?  Bump it to 32MB now to buy us time to do the block segment code.

The block segment code is not even needed for a very long time (edit: apart from node bootstrapping).
IBLT blocks of 32MB would support at least 3GB standard blocks on disk, or 20,000 TPS.
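Unpacking that claim with assumed figures (the compression ratio and average transaction size below are my guesses, not measurements):

Code:
# Rough consistency check of "32 MB IBLT messages ~ 3 GB blocks ~ 20,000 TPS".
# ~250 bytes/tx is about half the ~500 bytes/tx implied by ~2,000 tx per 1 MB
# block today; using 500 would halve the TPS figure.
wire_mb = 32.0
compression = 100        # assumed wire-to-full-block ratio
avg_tx_bytes = 250       # assumed average transaction size

block_bytes = wire_mb * 1e6 * compression
print("full block: %.1f GB, throughput: ~%.0f tx/s"
      % (block_bytes / 1e9, block_bytes / avg_tx_bytes / 600))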


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Realpra on October 24, 2014, 09:33:12 PM
I will just weigh in on this as someone who has worked with Bitcoin merkle trees in practice:

1. The limit could be infinite and we would be fine. Hence I support any proposal to increase block size as much as possible.
2. That said a slow ~40% growth rate gives us time to improve the clients to scale nicely. Again I give full support to this.
3. The things that make this possible are swarm nodes and aggressive merkle tree pruning.

There are two hard forks needed in Bitcoin; this is the first. The next will be more decimals. Nothing else I know about is needed.
(Made sure of that before I jumped the wagon you know ;))

Scaling details:
Swarm nodes:
Put/implemented as SIMPLY as possible (can also be trustless, decentralized and peer to peer) -> Two people run a "half" node each and simply tell each other whether their half of the block was valid; boom, 2X network capacity.
(Rinse and repeat/complicate as needed ;))

Aggressive merkle tree pruning:
1. Spent/provably unspendable TXs are pruned.
2. Dust size and rarely/never used unspent TXs can ALSO be pruned by miners -> The owner, should he exist, will just later have to provide the merkle branch leading to the header chain and other TX data at spend time. (Self storage of TX data, not just keys, basically; see the sketch just below.)
A miner who does not know about a TX will have to either A not include it in his blocks or B Get the information from someone else.
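The "self storage" idea in (2) leans on ordinary merkle proofs: the owner keeps the sibling hashes from their transaction up to the block header's merkle root and presents them at spend time. A minimal verification sketch (it ignores Bitcoin's byte-order conventions and the duplicate-last-hash rule for odd transaction counts):

Code:
import hashlib

def sha256d(b):
    """Bitcoin-style double SHA256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_merkle_branch(tx_hash, branch, merkle_root):
    """branch: list of (sibling_hash, sibling_is_on_right) pairs, leaf to root.
    Returns True if tx_hash is committed to by merkle_root."""
    h = tx_hash
    for sibling, sibling_on_right in branch:
        h = sha256d(h + sibling) if sibling_on_right else sha256d(sibling + h)
    return h == merkle_root

# Toy two-transaction "block":
txa, txb = sha256d(b"tx a"), sha256d(b"tx b")
root = sha256d(txa + txb)
print(verify_merkle_branch(txa, [(txb, True)], root))   # True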

Security:
Complex issue, but it will be okay. (In another thread I have described how Bitcoin clients can delay miner blocks from miners that NEVER include their valid TXs for instance.)

In general ->
Bitcoin is consensus based; if the issue is serious enough it will be solved. A Bitcoin software "crash" will never happen because all issues will be solved.
In 2010 anyone could spend anyone else's Bitcoin... you probably didn't even know about that, right? What happened? -> Nothing; solved and "forgotten".


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 24, 2014, 11:09:08 PM
2. That said a slow ~40% growth rate gives us time to improve the clients to scale nicely. Again I give full support to this.

Awesome. We're looking good for 40% annual increases :)

As Cubic Earth said we don't need 100% consensus. We just need general consensus. Let's try to keep rallying around a 40% increase game plan. The more people trumpet it the more it becomes the agreed upon way forward.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 24, 2014, 11:11:33 PM
Since the monumental debate which occurred in February 2013 (which you will remember), I have despaired for 18 months that nothing will get done until the limit is actually hit, throughput is crippled for days, and world's press laughs at Bitcoin for years afterward.

I do remember. I'm really hoping it doesn't take running into the limit to provide impetus to take action. Not only would we likely get negative press as you mention, but it would highlight the issue to people completely oblivious to it. If we can't get people with good knowledge of the subject to agree how would we fare after adding even more noise to the signal?

Which is why I even argued for a 3MB cap, just to buy some time.

https://bitcointalk.org/index.php?topic=709970.msg8129058#msg8129058

I think half-measures only increase the likelihood we can't get a comprehensive solution.

That 40% for 20 years is more than fine by me :-)

Thanks for your feedback and for being IMO reasonable :)


The 40% per year starting at 20MB is itself a half measure.
It's an improvement over the first round of 50%, but it is still picking numbers that, despite some justification, are arbitrary guesses.

We aren't seeking "legendary" or "ideal", but thank you for your rhetoric, and also for being a solidly reliable, unvarying advocate for whatever the loudest voice says.
I know I can rely on you for that: if any of the better suggestions catch traction, you will just pile on with whichever you think is likely to get consensus.
You are also very reasonable, and your reasons clear:  Seek consensus.  Attack dissension.

I don't need to be right.  I am just as happy to be wrong; the happiness comes from improvement.

It will be nice to have this 40% solution in pocket as the minimum quality, temporary patch, while a fix may be devised that would not need future adjustment.
The max block size was already reduced once for being too large, and also once for being too small.  It isn't as though we haven't been here before; it would be a good one to see solved eventually.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 24, 2014, 11:18:25 PM
It will be nice to have this 40% solution in pocket as the minimum quality, temporary patch, while a fix may be devised that would not need future adjustment.

Awesome. Like I said, I'm happy for you to keep searching. If I can count you in for passing a 40% solution in the meantime I'll be your best friend ;)


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 25, 2014, 01:30:29 AM
Um, is that it?  How do we know if we've reached consensus?  When will the version with the increased MAX_BLOCKSIZE be available?


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 25, 2014, 02:38:39 AM
Um, is that it?  How do we know if we've reached consensus?  When will the version with the increased MAX_BLOCKSIZE be available?

I would stipulate that we agree that both Gavin's first and second solutions are an improvement over the current code; I'd further opine that the second is a better guess even than the first.
I would maintain that our best so far is still a horrible miss of an opportunity.  With any luck we won't get another opportunity on this one in quite a while.  It is not a good solution, but it can get us at least up to the next time it has to be adjusted.

It is probably a different question whether to make a change, and if so when.  And another question as to whether there is a consensus to do so.

The answer to both might be in the same little bit of work.

In order to increase predictability, we might want to have some criteria for looking at this parameter, not just for now, but also for the future?
We have done the expedient before, in changing it.
Each time should continue to be an improvement over the last.  It is a patch, not a fix, and it will probably last longer than what came before.
It is far less than Satoshi's suggestion.  We should recognize that it very well may need to change again.


Your questions, David, are good ones.  They suggest the way to answer it may be in a few other questions:

If the plan is to keep changing MAX_BLOCKSIZE whenever we think MAX_BLOCKSIZE is awry, how does one know when MBS is off? 
What defines a crisis sufficient to get easy consensus?

Or put another way:
How to measure the risk of preventing legitimate transactions?  When the risk is high enough, we do this again.


Answering these satisfactorily would likely foster an easy consensus.

This would also be a step toward the design goals, discussed on the last page.
If we get those defined, ahead of hitting that change criteria, we may yet end up with something still better.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: BitMos on October 25, 2014, 02:54:38 AM
The people at keepbitcoinfree.org don't want to change the 1MB now at all. They think, for Tor and other considerations, it's necessary, but I agree with Syke that not everyone needs to be able to run a full node.

Thank you for bringing this perspective so eloquently. There are, to my limited knowledge, only 6 options for scalability:

1. with the fees, it will be adjusted automatically (don't pay enough, no tx for you) / BEST OPTION
2. bigger blocks
3. faster blocks
4. alts
5. data compression (it will fit in those 640KB btw)
6. dynamic blocks (everything changes depending on usage)

6 being a little bit complex, and with alts there is no need to fork! Maybe the global payment system is just a pipe dream... but a global payment system, why not...


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 25, 2014, 03:09:50 AM
1) What is the maximum value for MAX_BLOCKSIZE functionally possible given the APIs being used?

2) What is the maximum value which has been tested successfully?  Have any sizes been tested that fail?

3) Why not just set it to that value once right now (or soon) to the value which works and leave it at that?
       3.1) What advantage is there to delaying the jump to maximum tested value?

No miner is consistently filling up even the tiny 1MB blocks possible now.  We see no evidence of self-dealing transactions.  What are we afraid of?

Heck, jump to 20MB and grow it at 40% for 20 years; that's fine with me *but* be prepared to alter that if there be a need.  How will we know we need to jump it up faster?  A few blocks at the current maximum are hardly a reason to panic, but when the pool of transactions waiting to be included in a block starts to grow without any apparent limit, then we've waited too long.

The first time it was fixed, it was reduced from 32MB to 1MB temporarily, until other things were fixed.  Pretty much all the reasons for that have since abated, though.
(backstops in front of backstops)

The maximum successfully "tested" is what we have now, 1MB,
and there it sits at the top of the wish list.
https://en.bitcoin.it/wiki/Hardfork_Wishlist


We are at an average of less than 1/3rd of that now?
https://blockchain.info/charts/avg-block-size
https://blockchain.info/charts/avg-block-size?showDataPoints=false&timespan=all&show_header=true&daysAverageString=7&scale=0&address=

If we were to extrapolate the growth rate, we are still far from a crisis, or from getting transactions backed up because of this.
This provides opportunity for still better proposals to emerge in the months ahead.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 25, 2014, 04:42:30 AM
But clearly some blocks are already full right up to the 1MB limit.  I've been doing transactional systems for 30+ years; the serious trouble will start when the average over reasonable periods of time, e.g. an hour or so but not more than a day, begins to approach ~70%.

http://en.wikipedia.org/wiki/Little's_law
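A crude illustration of why ~70% is the usual rule of thumb (all parameters below are assumed, not measured): with random block intervals, the waiting pool stays small at moderate utilisation and blows up as utilisation approaches 100%.

Code:
import random

def avg_backlog(utilisation, capacity=2000, blocks=200000, seed=1):
    """Long-run average number of waiting transactions when demand equals
    utilisation * capacity per average block interval.  Block intervals are
    exponential (mean = one block time); arrivals are treated as a smooth flow."""
    rng = random.Random(seed)
    backlog, total = 0.0, 0.0
    for _ in range(blocks):
        interval = rng.expovariate(1.0)
        backlog += utilisation * capacity * interval   # new transactions arrive
        backlog = max(0.0, backlog - capacity)         # one block is mined
        total += backlog
    return total / blocks

for u in (0.3, 0.5, 0.7, 0.9, 0.95):
    print("utilisation %.2f -> average backlog ~%6.0f tx" % (u, avg_backlog(u)))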

Per https://blockchain.info/charts/n-transactions?showDataPoints=true&timespan=1year&show_header=true&daysAverageString=1&scale=0&format=csv&address=, Nov. 28, 2013 had the most transactions in a day, i.e. 102010.  From https://blockchain.info/block-height/271850 to https://blockchain.info/block-height/272030, i.e. 180 blocks that day, one wonders what the block size distribution looked like.  Gosh, it would be useful to have the size of the pool of waiting transactions at that time.

Per https://blockchain.info/charts/n-transactions-per-block?showDataPoints=false&timespan=1year&show_header=true&daysAverageString=1&scale=0&format=csv&address=, we had an average of 560 transactions per block (only the 8th highest day so far).  Feb. 27, 2014 had the highest average transactions per block of 618 so far.

April 3, 2014 had the highest average block size at 0.365623MB.  Arg, a day is too long.  I just bet the hourly average peaks around 70% of 1MB.

Does *anyone* have a record of the pool of waiting transactions?  That's our key.  When there are ~2,000 transactions in the queue waiting then we would expect a full 1MB block to be coming out next.  When there are ~4,000 transactions in the queue waiting then we would expect two full 1MB blocks to be coming out next.  In this state, transactions can expect to take ~20 minutes to confirm.  ~6,000 waiting -> 30 minute confirmation times.  And so on.

7t/s * 60s/m = 420t/m, 420t/m * 10m/block = 4200t/block.  That does not match observations:  Observations reveal only about 2000t/block.  2000t/block * 1block/10m = 200t/m, 200t/m * 1m/60s ~= 3.3t/s.  Who thinks we can squeeze 4200t/block?  3.3t/s * 86400s/d = 285,120t/d.  Trouble is closer than we thought.  70% * 285,120t/d = 199,584t/d.

Gentlemen, I've seen this too many times before; when the workload grows to somewhere north of 200,000t/d we *will* begin to see the pool of waiting transactions grow to tens of thousands and confirmation times will be well over an hour.

Increase the MAX_BLOCKSIZE as soon as is reasonable.  20MB, 32MB, whatever.  Then enhance the code to segment blocks to exceed the API limit after that.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 25, 2014, 05:35:44 AM
But clearly some blocks are already full right up to the 1MB limit.  I've been doing transactional systems for 30+ years; the serious trouble will start when the average over reasonable periods of time, e.g. an hour or so but not more than a day, begins to approach ~70%.

So if there were a flexible adjustment that kept the highest average below 70%, or even below 50% to be safer, then we would have a flexible adjustment that would be fit for purpose?   Maybe even better than a fixed increase?  Would it be even better if the target max size were 400% of the average, to bring the average to 25% of the max?
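For illustration, a minimal sketch of the sort of reactive rule floated in that question; the retarget window, floor and per-step clamp are my own placeholders, not a proposal:

Code:
WINDOW = 2016                # blocks per retarget period (assumed)
FLOOR = 1000000              # never drop below 1 MB (assumed)
MAX_STEP = 2.0               # never more than double per retarget (assumed)

def next_max_size(recent_sizes, current_max):
    """Retarget the max so the recent average block fills ~25% of it."""
    avg = sum(recent_sizes) / float(len(recent_sizes))
    target = 4 * avg                         # average ends up at 25% of max
    target = min(target, current_max * MAX_STEP)
    return int(max(target, FLOOR))

# Example: recent blocks average 300 KB -> next max is 1.2 MB
print(next_max_size([300000] * WINDOW, 1000000))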


http://en.wikipedia.org/wiki/Little's_law

Per https://blockchain.info/charts/n-transactions?showDataPoints=true&timespan=1year&show_header=true&daysAverageString=1&scale=0&format=csv&address=, Nov. 28, 2013 had the most transactions in a day, i.e. 102010.  From https://blockchain.info/block-height/271850 to https://blockchain.info/block-height/272030, i.e. 180 blocks that day, one wonders what the block size distribution looked like.  Gosh, it would be useful to have the size of the pool of waiting transactions at that time.
The largest block of that period, by a good margin, was in the 880KB range.  https://blockchain.info/block-height/271998
The average was a bit less than half that.

Per https://blockchain.info/charts/n-transactions-per-block?showDataPoints=false&timespan=1year&show_header=true&daysAverageString=1&scale=0&format=csv&address=, we had an average of 560 transactions per block (only the 8th highest day so far).  Feb. 27, 2014 had the highest average transactions per block of 618 so far.

April 3, 2014 had the highest average block size at 0.365623MB.  Arg, a day is too long.  I just bet the hourly average peaks around 70% of 1MB.

Does *anyone* have a record of the pool of waiting transactions?  That's our key.  When there are ~2,000 transactions in the queue waiting then we would expect a full 1MB block to be coming out next.  When there are ~4,000 transactions in the queue waiting then we would expect two full 1MB blocks to be coming out next.  In this state, transactions can expect to take ~20 minutes to confirm.  ~6,000 waiting -> 30 minute confirmation times.  And so on.

7t/s * 60s/m = 420t/m, 420t/m * 10m/block = 4200t/block.  That does not match observations:  Observations reveal only about 2000t/block.  2000t/block * 1block/10m = 200t/m, 200t/m * 1m/60s ~= 3.3t/s.  Who thinks we can squeeze 4200t/block?  3.3t/s * 86400s/d = 285,120t/d.  Trouble is closer than we thought.  70% * 285,120t/d = 199,584t/d.

Gentlemen, I've seen this too many times before; when the workload grows to somewhere north of 200,000t/d we *will* begin to see the pool of waiting transactions grow to tens of thousands and confirmation times will be well over an hour.

What did you see, and where did you see it?
200Kt/d may be several years away, yes?
https://blockchain.info/charts/n-transactions?timespan=all&showDataPoints=false&daysAverageString=1&show_header=true&scale=0&address=
(If you are fond of extrapolating this might tell you something)

Or 200kt/d may be much sooner.
So flexible limits are probably better than "whatever" right?

Increase the MAX_BLOCKSIZE as soon as is reasonable.  20MB, 32MB, whatever.  Then enhance the code to segment blocks to exceed the API limit after that.



Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: acoindr on October 25, 2014, 06:30:34 PM
Gavin, if you're still reading in my humble opinion we need a few things to move forward.

You're still the closest thing this community has to Satoshi/leadership so I think impetus comes from you. IMO you should update your Scalability Roadmap (http://bitcoinfoundation.org/2014/10/a-scalability-roadmap/) to reflect 40% annual increases noting that seems to be able to garner consensus. Maybe a second update includes a step-by-step process to rolling out the update (fork), so people know what to expect. I think starting to talk about things in a matter-of-fact way will engender confidence and expectation of what's to come. Thanks again for your hard work.

For the rest of us I think it's helpful to be supportive, in any way we can, of pushing forward this update. Notice I've updated my signature. If we can explain, to those wondering whether it's the best option, that implementing something which covers as many bases as possible certainly is, people may go along.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Cubic Earth on October 25, 2014, 09:06:35 PM
Gavin, if you're still reading in my humble opinion we need a few things to move forward.

You're still the closest thing this community has to Satoshi/leadership so I think impetus comes from you. IMO you should update your Scalability Roadmap (http://bitcoinfoundation.org/2014/10/a-scalability-roadmap/) to reflect 40% annual increases noting that seems to be able to garner consensus. Maybe a second update includes a step-by-step process to rolling out the update (fork), so people know what to expect. I think starting to talk about things in a matter-of-fact way will engender confidence and expectation of what's to come. Thanks again for your hard work.

For the rest of us I think it's helpful to be supportive, in any way we can, of pushing forward this update. Notice I've updated my signature. If we can explain, to those wondering whether it's the best option, that implementing something which covers as many bases as possible certainly is, people may go along.

My sentiments exactly.  I have actually found this whole thread to be quite heartening.

NewLiberty - you have done a good job tirelessly advocating for a certain approach.  It almost seems as if every even-numbered post is yours, and every odd-numbered post is from a different poster who steps up to the plate to explain to you the flaws in your position.  You seem like a perfectionist, which in general isn't a bad thing at all.  But action is needed now, even if we don't have some perfect 'forever' solution. Fortunately we don't need a 100% consensus to move forward, just an undefined super-majority.  I'd guess we have it around the 20MB / 40% concept.

My personal opinion is 20MB /40% is a kick-ass combo.  Between that, the headers-first downloading, the O(1) miner backbone system, and (hopefully) sidechains, we are looking at the most important set of technical improvements since bitcoin started.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: solex on October 25, 2014, 11:16:04 PM
My personal opinion is 20MB /40% is a kick-ass combo.  Between that, the headers-first downloading, the O(1) miner backbone system, and (hopefully) sidechains, we are looking at the most important set of technical improvements since bitcoin started.

+1 Great summary.

OT, but relevant. An interesting piece by Jake Yocom-Piatt, highlighting the CPU bottleneck of signature verification, which seems to rear its head next, after the artificial constraint on block size is lifted:

https://blog.conformal.com/btcsim-simulating-the-rise-of-bitcoin/

The road ahead might be rocky, but at least it is better than facing a dead-end.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 26, 2014, 01:28:31 AM
Gavin, if you're still reading in my humble opinion we need a few things to move forward.

You're still the closest thing this community has to Satoshi/leadership so I think impetus comes from you. IMO you should update your Scalability Roadmap (http://bitcoinfoundation.org/2014/10/a-scalability-roadmap/) to reflect 40% annual increases noting that seems to be able to garner consensus. Maybe a second update includes a step-by-step process to rolling out the update (fork), so people know what to expect. I think starting to talk about things in a matter-of-fact way will engender confidence and expectation of what's to come. Thanks again for your hard work.

For the rest of us I think it's helpful to be supportive, in any way we can, of pushing forward this update. Notice I've updated my signature. If we can explain, to those wondering whether it's the best option, that implementing something which covers as many bases as possible certainly is, people may go along.

My sentiments exactly.  I have actually found this whole thread to be quite heartening.

NewLiberty - you have done a good job tirelessly advocating for a certain approach.  It almost seems as if every even-numbered post is yours, and every odd-numbered post is from a different poster who steps up to the plate to explain to you the flaws in your position.  You seem like a perfectionist, which in general isn't a bad thing at all.  But action is needed now, even if we don't have some perfect 'forever' solution. Fortunately we don't need a 100% consensus to move forward, just an undefined super-majority.  I'd guess we have it around the 20MB / 40% concept.

My personal opinion is 20MB /40% is a kick-ass combo.  Between that, the headers-first downloading, the O(1) miner backbone system, and (hopefully) sidechains, we are looking at the most important set of technical improvements since bitcoin started.

There have been no unaddressed flaws yet in what I have advocated, other than the fact that it does not include a concrete proposal for a specific finished algorithm.  :-\
What I have been advocating is for a flexible solution to be designed so that we won't have to do this again.  It hasn't been accomplished yet.

If folks decide that reaching an average of 1/3 full blocks is a sufficient impetus to implement something without delay, even if that implementation may well have to be adjusted again in the future (one way or the other), and in future years when it may be much harder to put through such a change... then of course the expedient 2nd Gavin solution will be implemented.

If, however, before the implementation there is a flexible proposal that doesn't also introduce unmanaged perverse incentives, I suspect folks may line up behind that.

In the meantime, I expect to continue taking this role of the loyal opposition in order to either 1) find that better solution, or 2) galvanize the consensus.
If this discussion we are having here doesn't happen publicly, and doesn't look at every option, people may think that what is selected is not the best that we can do under the circumstances.

If, after exhausting all arguments against it, the time comes to implement (probably early 2015), the discussions and debate should have concluded with every criticism having had a chance to be heard, and the best we can do at the moment being implemented.  That will either be "the simplest thing that can possibly work" or something less simple but with better chances of working indefinitely.  Either is an improvement.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: shorena on October 26, 2014, 08:31:57 PM
-snip-
Does *anyone* have a record of the pool or waiting transactions?  That's our key.  When there are ~2,000 transactions in the queue waiting then we would expect a full 1MB block to be coming out next.  When there are ~4,000 transactions in the queue waiting then we would expect two full 1MB blocks to be coming out next.  In this state, transactions can expect to take ~20 minutes to confirm.  ~6,000 waiting -> 30 minute confirmation times.  And so on.
-snip-

I don't have historical data, but I just set up an rrdtool database to track the number of transactions waiting on my full node. The stats for the last 24 hours are shown here [1]; the pic is updated every 30 minutes, and I will add graphs for 30 and 360 days once the database has enough data. As you can see from the little data that's already there (collecting for ~1 hour now), we are already closer to 4000 waiting transactions than to 2000.
The raw data is gathered every minute with the following command
Code:
 bitcoind getrawmempool false | wc -l

and is not filtered in any way that is not inherent to bitcoind.

Code:
 bitcoind getrawmempool true | grep fee | grep 0.00000000 | wc -l

shows that right now 2792 of 3685 TX are without fee. I might make another database to improve the stats.

[1] http://213.165.91.169/pic/mempool24h.png
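
For anyone who wants to replicate this, here is a rough sketch of the per-minute collector in Python (run from cron). It is not the exact script running on this node; the database file "mempool.rrd" and its single data source are placeholder names for whatever your rrdtool create call used.
Code:
#!/usr/bin/env python3
# Sketch: count waiting transactions and push the value into an rrdtool
# database once per minute. "mempool.rrd" is an assumed file name.
import json
import subprocess

def mempool_count():
    # "getrawmempool false" returns a JSON array of txids
    out = subprocess.check_output(["bitcoind", "getrawmempool", "false"])
    return len(json.loads(out.decode()))

if __name__ == "__main__":
    subprocess.check_call(["rrdtool", "update", "mempool.rrd",
                           "N:%d" % mempool_count()])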


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: Cubic Earth on October 26, 2014, 10:30:19 PM
Nice work shorena!  I sent you a small tip to the address in your profile.  I checked out all the other stats on your full node; overall it is an awesome testament to Bitcoin's openness.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: painlord2k on October 26, 2014, 10:47:28 PM

What did you see, and where did you see it?
200Kt/d may be several years away, yes?
https://blockchain.info/charts/n-transactions?timespan=all&showDataPoints=false&daysAverageString=1&show_header=true&scale=0&address=
(If you are fond of extrapolating this might tell you something)

Or 200kt/d may be much sooner.
So flexible limits are probably better than "whatever" right?

Increase the MAX_BLOCKSIZE as soon as is reasonable.  20MB, 32MB, whatever.  Then enhance the code to segment blocks to exceed the API limit after that.

"The holiday season accounts for between 20 percent and 40 percent of typical retailers' total annual sales."
http://blogs.constantcontact.com/fresh-insights/holiday-shopping-stats/ (http://blogs.constantcontact.com/fresh-insights/holiday-shopping-stats/)

Now, we have to hope there is only a doubling (the 20% case) of daily transactions during the coming holiday season (November/December 2014).
Bitcoin does help limit spending somewhat, as it is a "deflationary currency" that increases in value over long time periods.
But given that we are at about 80K transactions/day, a doubling would be around 160K/day. And it will not be evenly distributed across the day: it will peak during European and American business hours and days.

Now, compared with last year, we have many more and larger online retailers accepting BTC, and probably four to ten times as many brick-and-mortar places.

We could get long delays during this shopping season already, not just the next one. And it would not be pretty.
The slowdown in hash-rate growth will not help either: part of the reason we saw smaller blocks in the past and larger ones now is simply that there are fewer blocks per day, something like 1/6 fewer.
I hope no large miner has problems during this season, because if the hash rate falls for some reason at a critical time, the network could find 25% fewer blocks for some hours, increasing queue times and confirmation delays.
And remember: good luck is blind, but bad luck sees you perfectly even in complete darkness. I would not like it if miners, through bad luck, found no block for an hour during peak shopping time.
And I hope the "40% of retailers' typical total annual sales" case does not happen for Bitcoin this year, because the network is in no way able to handle such a load, even without a cascading failure to add insult to injury.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 27, 2014, 12:45:32 AM
I don't have historical data, but I just set up an rrdtool database to track the number of transactions waiting on my full node. The stats for the last 24 hours are shown here [1]; the pic is updated every 30 minutes, and I will add graphs for 30 and 360 days once the database has enough data. As you can see from the little data that's already there (collecting for ~1 hour now), we are already closer to 4000 waiting transactions than to 2000.
The raw data is gathered every minute with the following command
Code:
 bitcoind getrawmempool false | wc -l

and is not filtered in any way that is not inherent to bitcoind.

Code:
 bitcoind getrawmempool true | grep fee | grep 0.00000000 | wc -l

shows that right now 2792 of 3685 TX are without fee. I might make another database to improve the stats.

[1] http://213.165.91.169/pic/mempool24h.png
Inspirational!

Starting from your spark I found https://blockchain.info/tx/e30a4add629882d360bc87ecc529733a9824d557690d1e5769453954ea4a1056.  It appears to be the oldest transaction waiting at this moment.  It was 31:34 old at the time of block #327136.  Block #326954 was the first block that could have added it to the block chain; 182 blocks ago.  One wonders how old the oldest transaction is that includes a fee.
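
In case anyone wants to automate that lookup against their own node, here is a quick sketch that prints the oldest waiting transaction overall and the oldest one that pays a fee. It uses only fields that "getrawmempool true" returns (see the sample output further down); treat it as a sketch, not something battle-tested.
Code:
#!/usr/bin/env python3
# Sketch: find the oldest transaction waiting in the local mempool, and the
# oldest one that actually pays a fee, from "getrawmempool true".
import json
import subprocess
import time

raw = subprocess.check_output(["bitcoind", "getrawmempool", "true"])
mempool = json.loads(raw.decode())  # {txid: {"size":..., "fee":..., "time":...}}

def age_minutes(entry):
    return (time.time() - entry["time"]) / 60.0

oldest = min(mempool.items(), key=lambda kv: kv[1]["time"])
paying = [kv for kv in mempool.items() if kv[1]["fee"] > 0]

print("oldest waiting tx: %s (%.0f min)" % (oldest[0], age_minutes(oldest[1])))
if paying:
    oldest_paying = min(paying, key=lambda kv: kv[1]["time"])
    print("oldest with a fee: %s (%.0f min)" % (oldest_paying[0],
                                                age_minutes(oldest_paying[1])))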


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 27, 2014, 04:44:07 AM
Windows 8.1
Satoshi v0.9.3.0-g40d2041-beta
example output from Debug console "getrawmempool true" command;

{
    "000308b9c51a0ba76d57efd8897159d95b8278e4fc0e3cb480b3d15343a1aadd" : {
        "size" : 374,
        "fee" : 0.00010000,
        "time" : 1414369834,
        "height" : 327133,
        "startingpriority" : 4976624.92307692,
        "currentpriority" : 5160750.84553682,
        "depends" : [
            "60c66a89e247760aa4cb29517ba79bbb2bbe773823996135fc7035c74f8be171"
        ]
    },
    "00349a4799b7b787e9733f38fc01a8f5dc801f7e35e3071a706831395d67086e" : {
        "size" : 520,
        "fee" : 0.00000001,
        "time" : 1414209735,
        "height" : 326867,
        "startingpriority" : 40.33333333,
        "currentpriority" : 10311.10448718,
        "depends" : [
            "75ba09c16b35b3495a7d829030dbafbed4e8e6806c8bc58207f8472e85749187"
        ]
    },
    ...
}

DOS batch file to collapse the output so that each transaction ends up on a single line (good for feeding into Excel);

@echo off
REM Collapse multi-line "getrawmempool true" output (saved as raw.txt) into
REM one line per transaction, using the "}," that ends each entry as the
REM record separator.
Setlocal EnableDelayedExpansion
SET new_line=
FOR /F "delims=" %%l IN (raw.txt) DO (
  if "%%l" == "}," (
    REM End of one transaction entry: print the accumulated line and reset.
    echo !new_line!
    SET new_line=
  ) ELSE (
    REM Still inside an entry: append this line to the accumulator.
    SET new_line=!new_line! %%l
  )
)

To invoke;

C:\bitcoin>collapse >btc_txn.txt
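
For readers who would rather not collapse the braces by hand, a rough Python equivalent that parses the same raw.txt and emits one CSV row per transaction. Column names are taken from the output above; it is a sketch, not a drop-in replacement for the batch file.
Code:
#!/usr/bin/env python3
# Sketch: read "getrawmempool true" output saved in raw.txt and write one
# CSV row per transaction, which imports into Excel just as easily.
import csv
import json
import sys

with open("raw.txt") as f:
    mempool = json.load(f)

fields = ["size", "fee", "time", "height", "startingpriority", "currentpriority"]
writer = csv.writer(sys.stdout)
writer.writerow(["txid"] + fields)
for txid, entry in mempool.items():
    writer.writerow([txid] + [entry[k] for k in fields])

Invoked as, for example, python collapse.py > btc_txn.csv.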


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: shorena on October 27, 2014, 08:40:53 AM
Nice work shorena!  I sent you a small tip to the address in your profile.  I checked out all the other stats on your full node; overall it is an awesome testament to Bitcoin's openness.

Thanks, I appreciate it. The pictures are indeed on the main page [1], but I thought I'd just post the picture directly; less scrolling involved.

-snip-
Inspirational!

Starting from your spark I found https://blockchain.info/tx/e30a4add629882d360bc87ecc529733a9824d557690d1e5769453954ea4a1056.  It appears to be the oldest transaction waiting at this moment.  It was 31:34 old at the time of block #327136.  Block #326954 was the first block that could have added it to the block chain; 182 blocks ago.  One wonders how old the oldest transaction is that includes a fee.

Glad you like it. I think the long queue is mainly due to transactions without fees and miners being greedy. The TX in question has small (in BTC value) inputs/outputs, does not pay a fee, and its input is not very old, which in my experience results in miners simply ignoring it. There are plenty of TX with fees, so why should they confirm this one? I had a similar one for testing, with 4 inputs and a single output of 0.0022. It didn't confirm in 8 days - by that time the inputs were 2-3 weeks old - so I had to remove it from core and "double-spend" it. I am not sure this would change with a bigger block size, as most blocks are not full yet even though they could be. The way the mempool currently looks, every block should be at the limit now, but they are not [2]. Maybe someone who operates a pool can shed some light on this.


[1] http://213.165.91.169/
[2] https://www.blocktrail.com/BTC/block-all/1 - you can sort by pool by clicking on its name, but I have not found one that has exclusively big blocks.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: -ck on October 27, 2014, 09:48:22 AM
Maybe someone that operates a pool can shed some light on this.
A significant number of pools simply run the default bitcoind client and its transaction rules. Some have their own rules which they claim are "anti-spam" (see the many threads about luke-jr's custom patches to the bitcoind client in the Gentoo packaging; probably any pool running his pool software follows them too). Most of the alleged anti-spam selection criteria are aimed (presumably out of ideological objection) at gambling sites.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: David Rabahy on October 27, 2014, 12:27:21 PM
[2] https://www.blocktrail.com/BTC/block-all/1 - you can sort by pool by clicking on its name, but I have not found one that has exclusively big blocks.
Based on this, for example, Polmine tends strongly toward smaller blocks.  Meanwhile, DiscusFish/F2Pool does a much better job of producing bigger blocks.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 27, 2014, 02:49:24 PM
This is the sort of fundamental analysis that would also get us closer to having a client that can tell its user how much TX fee to include in order to expect the transaction to be confirmed in X minutes.
A nice feature to have.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: shorena on October 27, 2014, 08:51:11 PM
Maybe someone that operates a pool can shed some light on this.
A significant number of pools simply run the default bitcoind client and its transaction rules. Some have their own rules which they claim are "anti-spam" (see the many threads about luke-jr's custom patches to the bitcoind client in the Gentoo packaging; probably any pool running his pool software follows them too). Most of the alleged anti-spam selection criteria are aimed (presumably out of ideological objection) at gambling sites.

I'm not sure whether I missed something when I read the source (or rather the comments). The way I understand it, the TXs get sorted by priority and added. So with over 5k TX in the queue now, all blocks should be close to the limit, unless there is some sort of minimum priority I missed. Currently 328 / 1424 TX have a priority of 0.00000000.
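
For what it's worth, the priority figure getrawmempool reports is, as far as I understand it, the sum of each input's value times its age in blocks, divided by the transaction size. A small sketch of that arithmetic; the "free area" cut-off of 1 BTC * 144 blocks / 250 bytes is the commonly quoted default and an assumption here, not something re-checked against the source.
Code:
# Sketch of the legacy priority calculation, as commonly described:
#   priority = sum(input_value_in_satoshis * input_age_in_blocks) / tx_size_bytes
COIN = 100000000  # satoshis per BTC

def tx_priority(inputs, tx_size_bytes):
    """inputs: list of (value_in_btc, age_in_blocks) tuples."""
    return sum(value * COIN * age for value, age in inputs) / float(tx_size_bytes)

# Commonly quoted default cut-off for the "free" block area (assumption).
FREE_PRIORITY_THRESHOLD = COIN * 144 / 250.0  # 57,600,000

# Example: one 0.5 BTC input, 10 blocks old, in a 500-byte transaction.
p = tx_priority([(0.5, 10)], 500)
print(p, p > FREE_PRIORITY_THRESHOLD)  # 1000000.0 False -> not eligible as "free"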

I made a 2nd database this morning to get separate data for TX with and without fees. The load was too much for the server to handle [1], so I had to reboot. Thus the data after 21:00 is not reliable for now.

I think the message is clear anyway: the majority of TX without fees get ignored even though there is space left in the blocks. The number of waiting fee-less transactions climbed above 2000 three times in the last 24 hours. Those transactions might be spam, though. Maybe David Rabahy can share some Excel results?

https://i.imgur.com/AKXrKAt.png

up to date pictures are on the nodes info page [2] below the connection and traffic stats. I might change the order though.

[1] The white gaps in the pictures indicate that the task updating the database could not complete within 30 seconds and was terminated. rrdtool treats missing values as "unknown", which are shown as blanks in the graphs.
[2] http://213.165.91.169/


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 28, 2014, 09:15:37 PM
The cost is not that significant.  Heck, the whole BTC market cap is not that significant.

If there were 6 GB block size bloat per hour?
A financial attack could do this independently.
Miners could do this free-ish.
Small miners would fail, as would all hobby miners.

Full nodes would become centralized, 51% risks would increase, etc.
These are just the obvious consequences.  No more decentralisation for Bitcoin.

From the wiki:

Quote
Note that a typical transaction is 500 bytes, so the typical transaction fee for low-priority transactions is 0.1 mBTC (0.0001 BTC), regardless of the number of bitcoins sent.

To spam the 1 MB blocksize takes roughly .2 BTC per block, or 1.2 BTC per hour. That's only $500 per hour.

To spam a 1 GB blocksize takes roughly 200 BTC per block, or 1200 BTC per hour. That's $500,000 per hour!

A 1 GB blocksize is far more costly to attack. We could increase the blocksize to 1 GB now and nothing would happen because there aren't that many transactions to fill such blocks.

I didn't see if this was addressed elsewhere already...
The projected attack would come from a mining concern that is looking to shut out smaller players and consolidate its mining regime.
The cost of the attack is the marginal cost of a winning block being orphaned.  The transaction fees are paid by and to the attacker, so they cost the attacker nothing.

However, if it is not orphaned, the reward is significant.  While the block is being downloaded and verified by lower-bandwidth nodes, the "attacker" is already at work on the next block and, facing reduced competition, gains an advantage.  It is essentially a denial of service by higher-bandwidth miners against lower-bandwidth miners.
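
For reference, here is the arithmetic behind the figures quoted from the wiki above; the point being that a miner mining its own spam gets these fees back, so they are not a real cost to it. The ~$400/BTC price is only an assumption chosen to land near the quoted "$500 per hour".
Code:
# Back-of-the-envelope reproduction of the wiki figures quoted above.
TX_SIZE_BYTES = 500
FEE_PER_TX_BTC = 0.0001     # 0.1 mBTC low-priority fee
BLOCKS_PER_HOUR = 6
BTC_USD = 400.0             # rough late-2014 price, assumed

def spam_cost_per_hour(block_size_bytes):
    txs_per_block = block_size_bytes // TX_SIZE_BYTES
    btc_per_hour = txs_per_block * FEE_PER_TX_BTC * BLOCKS_PER_HOUR
    return btc_per_hour, btc_per_hour * BTC_USD

print(spam_cost_per_hour(1000 * 1000))         # ~1.2 BTC, ~$480 per hour
print(spam_cost_per_hour(1000 * 1000 * 1000))  # ~1200 BTC, ~$480,000 per hour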


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: TierNolan on October 30, 2014, 05:27:16 PM
What is the reason to stop at 20MB?  That just seems to push the decision further into the future.

It does mean that the 32MB message size limit wouldn't be a problem though.


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: NewLiberty on October 30, 2014, 05:43:31 PM
What is the reason to stop at 20MB?  That just seems to push the decision further into the future.

It does mean that the 32MB message size limit wouldn't be a problem though.
Under Gavin's 2nd proposal (starting at 20MB and +40% per year), >32MB is 2 years out.

Most blocks are about 1/3 MB, so the proposal amounts to roughly 100x current utilization two years out.
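
A quick sketch of that schedule, showing why the 32MB p2p message size limit gets crossed in year two:
Code:
# 20 MB starting limit, +40% per year: when does it pass 32 MB?
limit_mb = 20.0
for year in range(6):
    marker = "  <-- exceeds 32 MB" if limit_mb > 32 else ""
    print("year %d: %.1f MB%s" % (year, limit_mb, marker))
    limit_mb *= 1.4
# year 0: 20.0, year 1: 28.0, year 2: 39.2 (first above 32 MB), ...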


Title: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by: TierNolan on October 30, 2014, 06:56:10 PM
Under Gavin's 2nd proposal (starting at 20MB and +40% per year), >32MB is 2 years out.

I misunderstood; I thought it was +40% per year but stopping at 20MB.