Author Topic: Increasing the block size is a good idea; 50%/year is probably too aggressive  (Read 14265 times)
hello_good_sir (OP)
Hero Member
*****
Offline Offline

Activity: 1008
Merit: 531



View Profile
October 08, 2014, 04:16:39 AM
 #1

My concern is that there is little room for error with geometric growth.  Let's say that things are happily humming along with bandwidth and block size both increasing by 50% per year.  Then a decade goes by where bandwidth only increases by 30% per year.  In that decade block size grew to 5767% of its starting value while bandwidth grew to only 1379%.  So now people's connections are only about 24% as capable of handling the blockchain.

Not a big deal?  Well, except that we can expect the power of nodes to follow some sort of curve ("exponential" in the vernacular) such that most nodes are barely above the threshold of viability.  An event like this would therefore cause the majority of nodes to shut down, likely permanently.
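
As a quick check on that arithmetic, here is a standalone sketch (the 50%/30% rates and the ten-year window are the ones assumed above; nothing else is):

Code:
#include <cmath>
#include <cstdio>

int main()
{
    double blocksize = std::pow(1.5, 10);   // 1.5^10 ~= 57.7x growth
    double bandwidth = std::pow(1.3, 10);   // 1.3^10 ~= 13.8x growth
    std::printf("block size: %.0f%%  bandwidth: %.0f%%  ratio: %.0f%%\n",
                blocksize * 100, bandwidth * 100, bandwidth / blocksize * 100);
    return 0;   // prints roughly 5767%, 1379%, 24%
}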

solex
Legendary
*
Offline Offline

Activity: 1078
Merit: 1002


100 satoshis -> ISO code


View Profile
October 08, 2014, 09:24:16 AM
 #2

My concern is that there is little room for error with geometric growth.  Let's say that things are happily humming along with bandwidth and block size both increasing by 50% per year.  Then a decade goes by where bandwidth only increases by 30% per year.  In that decade block size grew to 5767% of its starting value while bandwidth grew to only 1379%.  So now people's connections are only about 24% as capable of handling the blockchain.

Not a big deal?  Well, except that we can expect the power of nodes to follow some sort of curve ("exponential" in the vernacular) such that most nodes are barely above the threshold of viability.  An event like this would therefore cause the majority of nodes to shut down, likely permanently.

Compression techniques (e.g. by using transaction hashes and/or IBLT), once implemented, will certainly keep the new block message size growth rate much lower than the bandwidth growth rate.  

At the moment the 1MB limit in CheckBlock is agnostic as to how blocks are received.  

Code:
    // Size limits
    if (block.vtx.empty() || block.vtx.size() > MAX_BLOCK_SIZE || ::GetSerializeSize(block, SER_NETWORK, PROTOCOL_VERSION) > MAX_BLOCK_SIZE)
        return state.DoS(100, error("CheckBlock() : size limits failed"),
                         REJECT_INVALID, "bad-blk-length");

Consider that bandwidth is the binding constraint, and disk space perhaps 10x less so. This implies that a 1MB maximum for transmitted blocks should be reflected as a 10MB maximum for old blocks read from / written to disk (especially once node bootstrapping is enhanced by headers-first and an available utxo set).

Put another way, a newly mined 2MB block might be transmitted across the network in a compressed form of perhaps only 200KB, yet it would still be rejected, even though it falls within the resource constraints we currently accept.
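
As a rough sketch of that split (illustrative only: MAX_TRANSMITTED_BLOCK_SIZE, MAX_STORED_BLOCK_SIZE and the nTransmittedSize parameter are invented names, not anything in the current code), the check could distinguish the wire size from the stored size:

Code:
// Hypothetical sketch only, reusing the existing CheckBlock error pattern.
static const unsigned int MAX_TRANSMITTED_BLOCK_SIZE = 1000000;      // 1 MB on the wire
static const unsigned int MAX_STORED_BLOCK_SIZE = 10 * 1000000;      // 10 MB serialized

bool CheckBlockSizes(const CBlock& block, unsigned int nTransmittedSize, CValidationState& state)
{
    // Constrain the latency-critical data actually sent over the network...
    if (nTransmittedSize > MAX_TRANSMITTED_BLOCK_SIZE)
        return state.DoS(100, error("CheckBlockSizes() : transmitted size limit failed"),
                         REJECT_INVALID, "bad-blk-netlength");
    // ...while allowing a larger limit for the fully serialized block on disk.
    if (block.vtx.empty() || ::GetSerializeSize(block, SER_NETWORK, PROTOCOL_VERSION) > MAX_STORED_BLOCK_SIZE)
        return state.DoS(100, error("CheckBlockSizes() : stored size limit failed"),
                         REJECT_INVALID, "bad-blk-length");
    return true;
}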

gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4158
Merit: 8382



View Profile WWW
October 08, 2014, 11:13:35 PM
 #3

(e.g. by using transaction hashes and/or IBLT), once implemented, will certainly keep the new block message size growth rate much lower than the bandwidth growth rate.  
Keep in mind these techniques don't reduce the amount of data that needs to be sent (except, at most, by a factor of two). They reduce the amount of latency critical data. Keeping up with the blockchain still requires transferring and verifying all the data.
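
To put rough numbers on that (a back-of-the-envelope sketch; the 1 MB block and 20 KB diff figures are assumptions, not measurements):

Code:
#include <cstdio>

int main()
{
    const double kBlockBytes    = 1e6;    // assume ~1 MB of transactions per block
    const double kIbltDiffBytes = 20e3;   // assumed size of the set-reconciliation diff

    // Today: transactions arrive once via relay, then again inside the full block.
    double total_today = kBlockBytes + kBlockBytes;

    // With IBLT-style relay: transactions still arrive once via relay; only the
    // small diff is needed at block time, so the latency-critical part shrinks.
    double total_iblt = kBlockBytes + kIbltDiffBytes;

    std::printf("today: %.2f MB per block, IBLT-style: %.2f MB per block\n",
                total_today / 1e6, total_iblt / 1e6);
    return 0;   // overall transfer is at most roughly halved
}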

Quote
Not a big deal?  Well, except that we can expect the power of nodes to follow some sort of curve ("exponential" in the vernacular) such that most nodes are barely above the threshold of viability.  An event like this would therefore cause the majority of nodes to shut down, likely permanently.
Right. There is a decentralization trade-off at the margin.  But this isn't scaleless-- there is _some_ level, even some level of growth, which presents little to no hazard even way down the margin.  As a soft stewardship goal (not a system rule, since it can't be), the commitment should be that the system is run so that it fits into an acceptable portion of common residential broadband, so that it does not become dependent on centralized entities. As some have pointed out, being decentralized is Bitcoin's major (and perhaps only) strong competitive advantage compared to traditional currencies and payment systems. How best to meet that goal is debatable in the specifics.

At the moment there are a bunch of silly low-hanging fruit that make running a node more costly than it needs to be; we're even at the point where some people developing on Bitcoin Core have told me they've stopped running a node at home. It's hard to reason about the wisdom of these things while the system is still being held back by some warts we've long known how to correct and are in the process of correcting.
NewLiberty
Legendary
*
Offline Offline

Activity: 1204
Merit: 1002


Gresham's Lawyer


View Profile WWW
October 13, 2014, 08:58:44 AM
 #4

(e.g. by using transaction hashes and/or IBLT), once implemented, will certainly keep the new block message size growth rate much lower than the bandwidth growth rate.  
Keep in mind these techniques don't reduce the amount of data that needs to be sent (except, at most, by a factor of two). They reduce the amount of latency critical data. Keeping up with the blockchain still requires transferring and verifying all the data.

Quote
Not a big deal?  Well, except that we can expect the power of nodes to follow some sort of curve ("exponential" in the vernacular) such that most nodes are barely above the threshold of viability.  An event like this would therefore cause the majority of nodes to shut down, likely permanently.
Right. There is a decentralization trade-off at the margin.  But this isn't scaleless-- there is _some_ level, even some level of growth, which presents little to no hazard even way down the margin.  As a soft stewardship goal (not a system rule, since it can't be), the commitment should be that the system is run so that it fits into an acceptable portion of common residential broadband, so that it does not become dependent on centralized entities. As some have pointed out, being decentralized is Bitcoin's major (and perhaps only) strong competitive advantage compared to traditional currencies and payment systems. How best to meet that goal is debatable in the specifics.

At the moment there are a bunch of silly low-hanging fruit that make running a node more costly than it needs to be; we're even at the point where some people developing on Bitcoin Core have told me they've stopped running a node at home. It's hard to reason about the wisdom of these things while the system is still being held back by some warts we've long known how to correct and are in the process of correcting.

It doesn't make sense to guess at this.  Any guess is bound to be wrong.
If, after picking the low-hanging fruit, there is still an issue here (and there may be), it ought not be resolved by a guess when there is data within the block chain that would be useful for determining the max block size.
In the same way that difficulty adjustment is sensitive to data within the block chain, this could be too.

I don't know what the right answer is any more than Gavin does, but making an estimate would not be the best way to solve this in any case.

.....
One example of a better way would be to use a sliding window of blocks, 100+ deep, and base the max allowed size on some percentage over the average, dropping anomalous outliers from that calculation.  Using a method that is sensitive to reality as it may exist in the unpredictable future gives some assurance that we won't just be changing this whenever circumstances change.
Do it right, do it once.

There isn't a way to predict what networks will look like in the future, other than to use the data of the future to do just that.
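
As a rough sketch of the kind of rule being described here (every parameter below -- window depth, trim fraction, headroom, floor -- is an arbitrary placeholder, not a concrete proposal):

Code:
#include <algorithm>
#include <cstdint>
#include <vector>

// Sliding-window limit: trim outliers from the last N block sizes, then allow
// some headroom over the trimmed average. All parameters are placeholders.
uint64_t AdaptiveMaxBlockSize(std::vector<uint64_t> recentSizes /* last 100+ block sizes */)
{
    const double   kTrimFraction = 0.05;     // drop the smallest/largest 5% as outliers
    const double   kHeadroom     = 1.5;      // allow 50% over the trimmed average
    const uint64_t kFloor        = 1000000;  // never drop below the current 1 MB

    std::sort(recentSizes.begin(), recentSizes.end());
    const size_t trim = static_cast<size_t>(recentSizes.size() * kTrimFraction);

    uint64_t sum = 0;
    size_t count = 0;
    for (size_t i = trim; i + trim < recentSizes.size(); ++i) {
        sum += recentSizes[i];
        ++count;
    }
    if (count == 0)
        return kFloor;

    const uint64_t limit = static_cast<uint64_t>((sum / count) * kHeadroom);
    return std::max(limit, kFloor);
}

The point of the sketch is only that the limit would be driven by observed block sizes rather than by an extrapolated growth constant.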


Phrenico
Member
**
Offline Offline

Activity: 75
Merit: 10


View Profile
October 13, 2014, 02:52:48 PM
 #5


It doesn't make sense to guess at this.  Any guess is bound to be wrong.
If, after picking the low-hanging fruit, there is still an issue here (and there may be), it ought not be resolved by a guess when there is data within the block chain that would be useful for determining the max block size.
In the same way that difficulty adjustment is sensitive to data within the block chain, this could be too.

I don't know what the right answer is any more than Gavin does, but making an estimate would not be the best way to solve this in any case.

.....
One example of a better way would be to use a sliding window of blocks, 100+ deep, and base the max allowed size on some percentage over the average, dropping anomalous outliers from that calculation.  Using a method that is sensitive to reality as it may exist in the unpredictable future gives some assurance that we won't just be changing this whenever circumstances change.
Do it right, do it once.

There isn't a way to predict what networks will look like in the future, other than to use the data of the future to do just that.



Is this 50% per year intended to be a hardcoded rule like the block reward?

That's not how I interpreted Gavin's report. It sounded more like a goal that the developers thought was attainable.

That said, 50% per year does seem aggressive. At some point, the opportunity cost of including more transactions is going to exceed the tx fee value, certainly as long as the block reward exists, so the blocksize cannot increase indefinitely. And so what if there is little room in the blockchain? Not every single tiny transaction needs to be recorded indefinitely. Since the cost of increasing the block size is (I expect) increased centralization, shouldn't the developers be hesitant to make such a commitment without allowing for discretion?

I also wonder what the best approach will be, way out in the future, when the block reward is near zero. Can there be an equilibrium transaction fee if the difficulty is allowed to continue to fall? A simple, kludgy solution might be to fix the difficulty at some level, allowing the block rate to depend on the accumulated bounty of transaction fees.

Though I'm sure some new kind of proof of work/stake approach could best solve this problem and make the network more secure and cheaper.
NewLiberty
Legendary
*
Offline Offline

Activity: 1204
Merit: 1002


Gresham's Lawyer


View Profile WWW
October 13, 2014, 05:17:44 PM
Last edit: October 14, 2014, 02:19:00 PM by NewLiberty
 #6


It doesn't make sense to guess at this.  Any guess is bound to be wrong.
If, after picking the low-hanging fruit, there is still an issue here (and there may be), it ought not be resolved by a guess when there is data within the block chain that would be useful for determining the max block size.
In the same way that difficulty adjustment is sensitive to data within the block chain, this could be too.

I don't know what the right answer is any more than Gavin does, but making an estimate would not be the best way to solve this in any case.

.....
One example of a better way would be to use a sliding window of blocks, 100+ deep, and base the max allowed size on some percentage over the average, dropping anomalous outliers from that calculation.  Using a method that is sensitive to reality as it may exist in the unpredictable future gives some assurance that we won't just be changing this whenever circumstances change.
Do it right, do it once.

There isn't a way to predict what networks will look like in the future, other than to use the data of the future to do just that.



Is this 50% per year intended to be a hardcoded rule like the block reward?

That's not how I interpreted Gavin's report. It sounded more like a goal that the developers thought was attainable.

That said, 50% per year does seem aggressive. At some point, the opportunity cost of including more transactions is going to exceed the tx fee value, certainly as long as the block reward exists, so the blocksize cannot increase indefinitely. And so what if there is little room in the blockchain? Not every single tiny transaction needs to be recorded indefinitely. Since the cost of increasing the block size is (I expect) increased centralization, shouldn't the developers be hesitant to make such a commitment without allowing for discretion?

I also wonder what the best approach will be, way out in the future, when the block reward is near zero. Can there be an equilibrium transaction fee if the difficulty is allowed to continue to fall? A simple, kludgy solution might be to fix the difficulty at some level, allowing the block rate to depend on the accumulated bounty of transaction fees.

Though I'm sure some new kind of proof of work/stake approach could best solve this problem and make the network more secure and cheaper.

It also may be contrary to the eventual goal of usage-driven mining, where transaction fees ultimately overtake the block reward in value.  This proposal may drive TX fees to zero forever.  Block chain space is a somewhat scarce resource, just as the total # of coins is.  Adding an arbitrary 50% yearly inflation of that space changes things detrimentally.

If this forks as currently proposed, I'll be selling all my BTC on Gavin's fork and mining on the other.  I suspect I will not be the only one.

Gavin Andresen
Legendary
*
qt
Offline Offline

Activity: 1652
Merit: 2216


Chief Scientist


View Profile WWW
October 13, 2014, 07:05:00 PM
 #7

It also may be contrary to the eventual goal of usage-driven mining, where transaction fees ultimately overtake the block reward in value.  This proposal may drive TX fees to zero forever.  Block chain space is a somewhat scarce resource, just as the total # of coins is.  Adding an arbitrary 50% yearly inflation of that space changes things detrimentally.

I'm sending a follow-up blog post to a couple of economists to review, to make sure my economic reasoning is correct, but I don't believe that even an infinite blocksize would drive fees to zero forever.

Commodity prices never drop to zero, no matter how abundant they are (assuming a reasonably free market-- government can, of course supply "free" goods, but the results are never pretty). The suppliers of the commodities have to make a profit, or they'll find something else to do.

That has very little to do with whether or not transaction fees will be enough to secure the network in the future. I think both the "DON'T RAISE BLOCKSIZE OR THE WORLD WILL END!" and "MUST RAISE THE BLOCKSIZE OR THE WORLD WILL END!" factions confuse those two issues. I don't think adjusting the block size up or down or keeping it the same will have any effect on whether or not transaction fees will be enough to secure the network as the block subsidy goes to zero (and, as I said, I'll ask professional economists what they think).

If this forks as currently proposed, I'll be selling all my BTC on Gavin's fork and mining on the other.  I suspect I will not be the only one.

Okey dokey. You can join the people still mining on we-prefer-50-BTC-per-block fork (if you can find them... I think they gave up really quickly after the 50 to 25 BTC subsidy decrease).

NewLiberty
Legendary
*
Offline Offline

Activity: 1204
Merit: 1002


Gresham's Lawyer


View Profile WWW
October 13, 2014, 07:30:06 PM
Last edit: October 13, 2014, 08:15:34 PM by NewLiberty
 #8

It also may be contrary to the eventual goal of usage-driven mining, where transaction fees ultimately overtake the block reward in value.  This proposal may drive TX fees to zero forever.  Block chain space is a somewhat scarce resource, just as the total # of coins is.  Adding an arbitrary 50% yearly inflation of that space changes things detrimentally.

I'm sending a follow-up blog post to a couple of economists to review, to make sure my economic reasoning is correct, but I don't believe that even an infinite blocksize would drive fees to zero forever.

Commodity prices never drop to zero, no matter how abundant they are (assuming a reasonably free market-- government can, of course supply "free" goods, but the results are never pretty). The suppliers of the commodities have to make a profit, or they'll find something else to do.

That has very little to do with whether or not transaction fees will be enough to secure the network in the future. I think both the "DON'T RAISE BLOCKSIZE OR THE WORLD WILL END!" and "MUST RAISE THE BLOCKSIZE OR THE WORLD WILL END!" factions confuse those two issues.

Great, we agree on all of this.

I don't think adjusting the block size up or down or keeping it the same will have any effect on whether or not transaction fees will be enough to secure the network as the block subsidy goes to zero (and, as I said, I'll ask professional economists what they think).
Here is where it jumps the tracks.  
Your thoughts and my thoughts aren't going to answer this.  
Math will.  It is not about opinion, it is about measurement and calculation.  Picking 50% out of a hat is hubris, and you know it in your heart.
Justify it, show your work, or it cannot be taken seriously.  Looking forward to your follow-up and its analysis; economists, sure, but let's have a game-theory analysis as well as an analysis of new risks.

If this forks as currently proposed, I'll be selling all my BTC on Gavin's fork and mining on the other.  I suspect I will not be the only one.
Okey dokey. You can join the people still mining on we-prefer-50-BTC-per-block fork (if you can find them... I think they gave up really quickly after the 50 to 25 BTC subsidy decrease).
Strawmen will make you look stupid and petty.  Play well with the other scientists, please?  If this was your best and final offer, you needn't bother responding.  I don't know the answer, but so far we haven't seen it in sufficient detail to end dialog and discovery.

Not to belabor it, but the obvious difference is that the 50 BTC folks were going against Satoshi's design, whereas this time it is those following the 50% love-it-or-leave-it fork who would be going against Satoshi's design.  If we need a hard fork, we should do it right so that it need not be repeated.

Your proposal started a dialog that may bring a good result.  
The first effort isn't that end result.  If we think we got it perfect on a first guess, our minds are closed to learning and consensus.


No comment on this?
Quote
One example of a better way would be to use a sliding window of blocks, 100+ deep, and base the max allowed size on some percentage over the average, dropping anomalous outliers from that calculation.  Using a method that is sensitive to reality as it may exist in the unpredictable future gives some assurance that we won't just be changing this whenever circumstances change.
Do it right, do it once.

There isn't a way to predict what networks will look like in the future, other than to use the data of the future to do just that.  Where we are guessing, we ought to acknowledge that.

acoindr
Legendary
*
Offline Offline

Activity: 1050
Merit: 1002


View Profile
October 13, 2014, 07:40:49 PM
 #9

Commodity prices never drop to zero, no matter how abundant they are (assuming a reasonably free market-- government can, of course supply "free" goods, but the results are never pretty). The suppliers of the commodities have to make a profit, or they'll find something else to do.

Not only that, but there will always be non-infinite bandwidth and storage available to users, while anyone can create transaction spam essentially for free. So minimum fees remain necessary for non-priority transactions.
NewLiberty
Legendary
*
Offline Offline

Activity: 1204
Merit: 1002


Gresham's Lawyer


View Profile WWW
October 13, 2014, 07:45:28 PM
 #10

Commodity prices never drop to zero, no matter how abundant they are (assuming a reasonably free market-- government can, of course supply "free" goods, but the results are never pretty). The suppliers of the commodities have to make a profit, or they'll find something else to do.
Not only that, but there will always be non-infinite bandwidth and storage available to users, while anyone can create transaction spam essentially for free. So minimum fees remain necessary for non-priority transactions.

Check your assumptions.

1) We don't know what the future networks will look like.
2) Commodity prices do go to zero for periods of time.  Sometimes commodities rot in silos and cost money to dispose of (negative worth).

wachtwoord
Legendary
*
Offline Offline

Activity: 2324
Merit: 1125


View Profile
October 13, 2014, 09:08:43 PM
 #11


If this forks as currently proposed, I'll be selling all my BTC on Gavin's fork and mining on the other.  I suspect I will not be the only one.

Okey dokey. You can join the people still mining on we-prefer-50-BTC-per-block fork (if you can find them... I think they gave up really quickly after the 50 to 25 BTC subsidy decrease).


This is so weak. If we follow this analogy, YOU are the one wanting to mine 50 BTC blocks ad infinitum, since halving to 25 BTC is what Satoshi proposed.

I really don't like the way you are handling this. It seems like you are trying to push your little pet project through as a little dictator. As long as you don't change, I'm with NewLiberty on this one and will hold Bitcoin instead of GavinCoin.
acoindr
Legendary
*
Offline Offline

Activity: 1050
Merit: 1002


View Profile
October 13, 2014, 09:28:48 PM
 #12

Commodity prices never drop to zero, no matter how abundant they are (assuming a reasonably free market-- government can, of course supply "free" goods, but the results are never pretty). The suppliers of the commodities have to make a profit, or they'll find something else to do.
Not only that, but there will always be non-infinite bandwidth and storage available to users, while anyone can create transaction spam essentially for free. So minimum fees remain necessary for non-priority transactions.

Check your assumptions.

1) We don't know what the future networks will look like.

No, but we do know the science of today. I'm not sure you appreciate the meaning of infinite.

It's not possible to transmit information with perfect efficiency, except perhaps by using quantum entanglement. It's also not possible to store unlimited meaningful information within a confined space, never mind making it all computationally accessible. I'd say my statement is less an assumption and more an observation, unless of course you can show how it's reasonably possible to make use of quantum phenomena in ways we can't imagine today.


2) Commodity prices do go to zero for periods of time.  Sometimes commodities rot in silos and cost money to dispose of (negative worth).

I think he meant go to zero permanently, or at least for substantially long periods of time.
TraderTimm
Legendary
*
Offline Offline

Activity: 2408
Merit: 1121



View Profile
October 14, 2014, 03:42:43 AM
 #13

I think this is where "Bitcoin is just an experiment" raises its ugly head.

You see, Gavin could totally nuke Bitcoin, but he has the plausible deniability that Bitcoin is just an "experiment". You know, something you just putter about on in the garage; if raccoons break in and tear it apart, hell, it's just foolin' around, no big loss.

And that is the attitude being put forth here. 50%? Sure, why the hell not. Maybe roll a D-100 and decide that way; it would be just as rigorous as a complete and utter guess.

What is completely unreasonable is why you wouldn't base any of these metrics on actual USAGE, with a sliding window a la difficulty adjustments, to adhere to what is actually HAPPENING in the network.

Gavin doesn't know, but hey, we have to trust him.

I don't think we do...

Syke
Legendary
*
Offline Offline

Activity: 3878
Merit: 1193


View Profile
October 14, 2014, 04:01:16 AM
 #14

What would happen if the blocksize were increased to 1 GB tomorrow? Pretty much nothing. Miners will always be able to create blocks less than the maximum blocksize.
What would happen if the blocksize were decreased to 1 KB tomorrow? Bitcoin would come grinding to a halt.

Too small blocksize = death to bitcoin.
Too big blocksize = non-issue.

I'd rather see the blocksize too big than too small.

greenlion
Hero Member
*****
Offline Offline

Activity: 667
Merit: 500


View Profile
October 14, 2014, 05:35:27 AM
 #15

What would happen if the blocksize were increased to 1 GB tomorrow? Pretty much nothing. Miners will always be able to create blocks less than the maximum blocksize.
What would happen if the blocksize were decreased to 1 KB tomorrow? Bitcoin would come grinding to a halt.

Too small blocksize = death to bitcoin.
Too big blocksize = non-issue.

I'd rather see the blocksize too big than too small.

IBLT makes it an issue because there would no longer be a risk/reward tradeoff on tx fees vs propagation delay in building the largest possible blocks. As a result the miner is incentivized to always build the largest possible block to collect maximum tx fees with no propagation risk.
solex
Legendary
*
Offline Offline

Activity: 1078
Merit: 1002


100 satoshis -> ISO code


View Profile
October 14, 2014, 06:18:24 AM
 #16

What would happen if the blocksize were increased to 1 GB tomorrow? Pretty much nothing. Miners will always be able to create blocks less than the maximum blocksize.
What would happen if the blocksize were decreased to 1 KB tomorrow? Bitcoin would come grinding to a halt.

Too small blocksize = death to bitcoin.
Too big blocksize = non-issue.

I'd rather see the blocksize too big than too small.

IBLT makes it an issue because there would no longer be a risk/reward tradeoff on tx fees vs propagation delay in building the largest possible blocks. As a result the miner is incentivized to always build the largest possible block to collect maximum tx fees with no propagation risk.

IBLT encourages good behaviour because you can't successfully publish an IBLT full of transactions which the rest of the network doesn't want, unlike now, when a block could be full of rubbish 1-satoshi transactions from a secret spam generator. The whole point of IBLT is that each node already knows (and accepts) most of the transactions in advance and has them in its mempool; only a smallish set of differences is required from the IBLT when processing it. So the fee market should be helped by this development.
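
To illustrate with made-up numbers (the transaction size, block size, and overhead factor below are assumptions): the reconciliation data a peer needs scales with how many of the block's transactions it has not already seen, so a block stuffed with unseen spam costs more to announce than an honest one.

Code:
#include <cstdio>
#include <initializer_list>

int main()
{
    const double kTxBytes      = 500;    // assumed average transaction size
    const double kTxPerBlock   = 2000;   // roughly a 1 MB block
    const double kIbltOverhead = 1.5;    // assumed coding overhead per missing tx

    for (double unseen : {0.02, 0.5, 1.0}) {
        double diffBytes = kTxPerBlock * unseen * kTxBytes * kIbltOverhead;
        std::printf("%3.0f%% of txs unseen by peers -> ~%4.0f KB of reconciliation data\n",
                    unseen * 100, diffBytes / 1e3);
    }
    return 0;   // a block full of unseen spam needs more data to announce than it contains
}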

NewLiberty
Legendary
*
Offline Offline

Activity: 1204
Merit: 1002


Gresham's Lawyer


View Profile WWW
October 14, 2014, 12:50:43 PM
Last edit: October 14, 2014, 01:52:07 PM by NewLiberty
 #17

What would happen if the blocksize were increased to 1 GB tomorrow? Pretty much nothing. Miners will always be able to create blocks less than the maximum blocksize.
What would happen if the blocksize were decreased to 1 KB tomorrow? Bitcoin would come grinding to a halt.

Too small blocksize = death to bitcoin.
Too big blocksize = non-issue.

I'd rather see the blocksize too big than too small.
That is nothing like Gavin's proposal, for good reasons.  To answer your question: what would also happen if the block size were increased to 1 GB tomorrow is the introduction of new attack vectors which, if exploited, would require intervention by miners and developers to resolve.
It is not enough to design something that works; we must also design it so that it does not become more fragile.

Why not strive for a dynamic limit that prevents the need for future hard forks over the same issue?
Gavin's proposal is "the simplest that could possibly work".

I'll argue that it is just too simple, and too inflexible.

This proposal may be opening Bitcoin to new types of coin-killing attacks by assuming that anti-spam fees will always be sufficient to prevent bloating attacks.  Consider that the entire value of all bitcoin is currently less than 1/10th of the wealth of the world's richest man, and that man has spoken publicly against Bitcoin.  When you include wealthy institutions and even governments in the potential threat model, the risks become more apparent.  We cannot assume Bitcoin's success and then predicate the decisions necessary for that success on its having already been accomplished.

If Bitcoin has to change due to a crisis, it ought at least be made better... so that the crisis need not be revisited.  (Hard forks get progressively more challenging in the future).  Design for the next 100s of years, not for the next bubble.  Fix it right, and we fix it once.

Designs ought to have safeguards to avoid unintended consequences and the ability to adjust as circumstances change.
My suggestion is that perhaps we can do better than to simply assume an infinite extrapolation, when there exists a means to measure and respond to the actual needs as they may exist in the future, within the block chain.

50% may be too much in some years, too little in others.  The proposal is needlessly inflexible and assumes too much (an indefinite extrapolation of network resources).  Having a small group pick the inflation percentages out of a hat is what CENTRAL BANKERS do; this is not Satoshi's Bitcoin.

I'm not convinced a crisis necessitating a hard fork is at hand, but I am sure that the initial proposal is not the answer to it.  I look forward to its revision and refinement.  

painlord2k
Sr. Member
****
Offline Offline

Activity: 453
Merit: 254


View Profile
October 14, 2014, 01:59:03 PM
 #18

In my opinion, the 50% increase per year of the block size is too conservative in the short run and too optimistic in the long run.
If Bitcoin were to see exponential growth in usage, including usage from fields where it is currently uneconomic to implement a payment service, we would see a faster increase in the short run and a slowdown in the long run.

Spamming the blockchain is not a real issue, in my view.
If tomorrow we had a 1 GB max block size, some entity might be able to spam the blockchain with dust transactions, because the minimum fee is about 1 cent. The reaction of users would simply be to raise the fee they pay: if they go from 1 cent to 10 cents, the attacker would need to pay ten times as much to the miners (who would be grateful for it) to produce transactions with the same fee and priority as the real users.

What would they accomplish? A bigger blockchain? People and large operators can buy truckloads of multi-terabyte hard drives.
Actually, you don't need to keep the whole blockchain on disk, just the last few weeks or months. People could download or share the older blocks on hardware media, as they will never change.
Just to be clear: although nominally the blockchain could change if someone dedicated enough time and resources to rebuild it from the genesis block (with a larger proof-of-work), any change to the chain more than a day/week/month old will always be rejected.
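
Putting rough numbers on that fee arithmetic (a sketch with assumed figures for transaction size, fee levels, and block size):

Code:
#include <cstdio>
#include <initializer_list>

int main()
{
    const double kBlockBytes   = 1e9;    // hypothetical 1 GB maximum block size
    const double kTxBytes      = 250;    // assumed average dust-transaction size
    const double kBlocksPerDay = 144;

    for (double feeUsd : {0.01, 0.10}) { // fee of 1 cent vs 10 cents per transaction
        double txPerBlock = kBlockBytes / kTxBytes;
        double usdPerDay  = txPerBlock * feeUsd * kBlocksPerDay;
        std::printf("at $%.2f per tx: ~%.1f million USD/day to keep blocks full (all paid to miners)\n",
                    feeUsd, usdPerDay / 1e6);
    }
    return 0;
}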

Large entities will not attack the blockchain:
1) because it is, in any case, against some law to wreak havoc in a computer network and rewrite it for nefarious purposes;
2) because governments would need to justify it. They hold the monopoly on coercion, the monopoly on violence. If they resort to indirect attacks, they are admitting that the threat of violence, and violence itself, is not working against Bitcoin's users. It would amount to starting to bleed in a shark-infested sea.
Gavin Andresen
Legendary
*
qt
Offline Offline

Activity: 1652
Merit: 2216


Chief Scientist


View Profile WWW
October 14, 2014, 03:11:15 PM
 #19

No comment on this?
Quote
One example of a better way would be to use a sliding window of blocks, 100+ deep, and base the max allowed size on some percentage over the average, dropping anomalous outliers from that calculation.  Using a method that is sensitive to reality as it may exist in the unpredictable future gives some assurance that we won't just be changing this whenever circumstances change.
Do it right, do it once.

That does not address the core of people's fears, which is that big, centralized mining concerns will collaborate to push smaller competitors off the network by driving up the median block size.

There isn't a way to predict what networks will look like in the future, other than to use the data of the future to do just that.  Where we are guessing, we ought to acknowledge that.

Yes, that is a good point, made by other people in the other thread about this. A more conservative rule would be fine with me, e.g.

Fact: average "good" home Internet connection is 250GB/month bandwidth.
Fact: Internet bandwidth has been growing at 50% per year for the last 20 years.
  (if you can find better data than mine on these, please post links).

So I propose the maximum block size be increased to 20MB as soon as we can be sure the reference implementation code can handle blocks that large (that works out to about 40% of 250GB per month).
Increase the maximum by 40% per year (really, double every two years -- thanks to whoever pointed out that 40% per year is 96% over two years).
Since nothing can grow forever, stop doubling after 20 years.
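
Worked out as a sketch (the 2015 start year is an assumption for illustration only; 20 MB * 144 blocks/day * 30 days is roughly 86 GB of block data per month, which is in the ballpark of the "about 40% of 250GB" figure once relay overhead is allowed for, and 40% per year compounds to ~96% over two years, hence "double every two years"):

Code:
#include <algorithm>
#include <cmath>
#include <cstdio>

int main()
{
    const double kStartMB     = 20.0;
    const int    kStartYear   = 2015;   // assumed activation year, for illustration only
    const int    kGrowthYears = 20;     // "stop doubling after 20 years"

    for (int year = kStartYear; year <= kStartYear + 24; year += 4) {
        int elapsed = std::min(year - kStartYear, kGrowthYears);
        double maxMB = kStartMB * std::pow(2.0, elapsed / 2.0);
        std::printf("%d: max block size ~%.0f MB\n", year, maxMB);
    }

    // Bandwidth at the initial 20 MB cap: 20 MB * 144 blocks/day * 30 days.
    std::printf("20 MB blocks ~= %.0f GB of block data per month\n", 20.0 * 144 * 30 / 1000);
    return 0;
}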


TonyT
Full Member
***
Offline Offline

Activity: 210
Merit: 100


View Profile
October 14, 2014, 04:08:39 PM
 #20

Ah, this is where the big boys post...ohh.  Impressive.

As for little me, I am downloading the entire blockchain (> 50 GB, I guess) in a Third World country for my Armory client, and it's fun, but it has an experimental feel to it.  Even with a 1.5 Mbps internet connection, it's taken close to 24 hours and I'm only two-thirds done.  I understand that subsequent incremental downloads of the blockchain should be much quicker and smaller once the initial download is finished.  I do understand, however, that Bitcoin transactions can take an hour to verify, which is probably related to the size of the blockchain.  The Bobos in Paradise (upper middle class) in the developed countries will not like that; for those off the grid it is a minor quibble.

As for compression of the blockchain, it's amazing what different algorithms can do.  For the longest time the difference between WinZip and WinRAR was trivial; then came 7-Zip, and using whatever algorithm its author uses, the shrinkage is dramatically better.  I can now compress a relational database much more with 7-Zip than with WinZip on a Windows platform.  But there must be some tradeoff; I imagine 7-Zip is more resource-intensive and hence should take longer (though I've not seen this).

TonyT
