Bitcoin Forum

Bitcoin => Development & Technical Discussion => Topic started by: Jeweller on January 31, 2013, 07:23:52 AM



Title: The MAX_BLOCK_SIZE fork
Post by: Jeweller on January 31, 2013, 07:23:52 AM
I’d like to discuss the scalability of the bitcoin network, specifically the current maximum block size of 1 megabyte.  The bitcoin wiki states:
Quote
Today the Bitcoin network is restricted to a sustained rate of 7 tps by some artificial limits. … Once those limits are lifted, the maximum transaction rate will go up significantly.
… and then goes on to theorize about transaction rates many orders of magnitude higher.  Certainly from a software engineering point of view, medium-term scalability is a trivial problem. An extra zero in the
Code:
static const unsigned int MAX_BLOCK_SIZE = 1000000;
line would be fine for a good while.  But I think dismissing the block size issue as the wiki and many others have done is a serious mistake.

Some background on the arguments can be found in this thread (https://bitcointalk.org/index.php?topic=134024.0) and others (https://bitcointalk.org/index.php?topic=92654.40).

Changing this limit needs to be discussed now, before we start hitting it.  Already a quick glance at the blockchain shows plenty of blocks exceeding 300KB.  Granted most of that’s probably S.Dice, but nobody can really dispute that bitcoin is rapidly growing, and will hit the 1MB ceiling fairly soon.

So... what happens then?  What is the method for implementing a hard fork?  No precedent, right?  Do we have a meeting?  With who?  Vote?  Ultimately it’s the miners that get to decide, right?  What if the miners like the 1MB limit, because they think the imposed scarcity of blockchain space will lead to higher transaction fees, and more bitcoin for them?  How do we decide on these things when nobody is really in charge?  Is a fork really going to happen at all?

Personally I would disagree with any pro-1MB miners, and think that it’s in everyone’s interest, miners included, to expand the limit.  I think any potential reductions in fees would be exceeded by the increased value of the block reward as the utility of the network expands.  But this is a source of significant uncertainty for me -- I just don’t know how it’s going to play out.  I wouldn’t be surprised if we are in fact stuck with the 1MB limit simply because we have no real way to build a consensus and switch.  Certainly not the end of bitcoin, but personally it would be disappointing.  A good analogue would be the 4-byte addresses of IPv4... all over again.  You can get around it (NAT), and you can fix it (IPv6) but the former is annoying and the latter is taking forever.

So what do you think?  Will we address this issue?  Before or after every block ≈ 1,000,000 bytes?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: flower1024 on January 31, 2013, 07:29:13 AM
I guess most miners won't like this change.
They are speculating that if it's harder to place a transaction in a block (e.g. because of size), people will pay more transaction fees.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: notme on January 31, 2013, 07:30:25 AM
The first thing you need to understand is that it's not just a matter of the majority of miners for a hard fork.... it's got to be pretty much everybody.  Otherwise, you will have a blockchain split with two different user groups both wanting to call their blockchain "bitcoin".  Unspent outputs at the time of the fork can be spent once on each new chain.  Mass confusion.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: flower1024 on January 31, 2013, 07:32:08 AM
The first thing you need to understand is that it's not just a matter of the majority of miners for a hard fork.... it's got to be pretty much everybody.  Otherwise, you will have a blockchain split with two different user groups both wanting to call their blockchain "bitcoin".  Unspent outputs at the time of the fork can be spent once on each new chain.  Mass confusion.

+1

but i am sure that there is a need for a hardfork in the future (more digits or bigger blocks). the earlier the better....but its always hard to predict the future ;)


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Jeweller on January 31, 2013, 07:47:12 AM
The first thing you need to understand is that it's not just a matter of the majority of miners for a hard fork.... it's got to be pretty much everybody.

Quite true.  In fact even more so because "old" protocol nodes will only accept small blocks, while the "new" protocol nodes will accept either small (<1MB) or large (>1MB) blocks.  Thus all blocks produced by old miners will be accepted by the new ones as valid, even when there's an extra 500KB of transactions waiting in line to be published.

You'd need something like a >90% simultaneous switch to avoid total chaos.  In that case substantially all the blocks published would be >1MB, and the old-protocol miners wouldn't be able to keep up.  If normal nodes switched at the same time, they would start pushing transactions that old-protocol clients / miners would lose track of.  It seems very likely that when / if the change takes place, blocks will have been at the 1MB limit for some time, and the end of the limit would immediately result in 1.5MB blocks, so it would have to be coordinated well in advance.
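
To make the asymmetry concrete, here is a minimal sketch in C++ of how the two kinds of nodes would judge an incoming block. The names, the 10MB figure, and the functions are purely illustrative assumptions, not actual client code:
Code:
// Illustrative only: the "new" rule is a strict relaxation of the "old" rule,
// so old nodes accept everything new miners produce until someone actually
// mines a block over the old limit, at which point the chains diverge.
#include <cstddef>

static const size_t OLD_MAX_BLOCK_SIZE = 1000000;   // limit enforced by old nodes
static const size_t NEW_MAX_BLOCK_SIZE = 10000000;  // assumed larger limit for new nodes

bool OldNodeAcceptsBlock(size_t nBlockSize) {
    return nBlockSize <= OLD_MAX_BLOCK_SIZE;         // anything bigger is invalid forever
}

bool NewNodeAcceptsBlock(size_t nBlockSize) {
    return nBlockSize <= NEW_MAX_BLOCK_SIZE;         // includes every block old nodes accept
}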



Title: Re: The MAX_BLOCK_SIZE fork
Post by: da2ce7 on January 31, 2013, 07:55:37 AM
This has been discussed again and again.  This is a hard-limit in the protocol, changing it is as hard as changing the total number of coins... ie. virtually impossible.

Many people have invested into Bitcoin under the pretence that the hard-limits of the protocol do not change.

Even if a super-majority wanted the change.  A significant amount of people (myself included) will reject the chain.  Thus creating a fork.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: notme on January 31, 2013, 07:55:49 AM
The first thing you need to understand is that it's not just a matter of the majority of miners for a hard fork.... it's got to be pretty much everybody.

Quite true.  In fact even more so because "old" protocol nodes will only accept small blocks, while the "new" protocol nodes will accept either small (<1MB) or large (>1MB) blocks.  Thus all blocks produced by old miners will be accepted by the new ones as valid, even when there's an extra 500KB of transactions waiting in line to be published.

You'd need something like a >90% simultaneous switch to avoid total chaos.  In that case substantially all the blocks published would be >1MB, and the old-protocol miners wouldn't be able to keep up.  If normal nodes switched at the same time, they would start pushing transactions that old-protocol clients / miners would lose track of.  It seems very likely that when / if the change takes place, blocks will have been at the 1MB limit for some time, and the end of the limit would immediately result in 1.5MB blocks, so it would have to be coordinated well in advance.



It's not that bad.  If the larger block miners are >50%, they will build off the longest valid chain, so they will ignore the smaller block miners blocks since they have a lower difficulty.  If the smaller block miners are >50%, they will always have the longest chain and no large blocks will ever survive reorganization for more than a couple confirmations.  Block headers contain the hash of the previous block, so once your chain forks, the blocks built after the first split block are not compatible with the other chain.
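
(A rough sketch of why the split is permanent once it happens; the struct below is a simplified stand-in for the real header, with types and names approximated:)
Code:
// Each block header commits to the hash of its parent, so a block built on top of
// an "oversized" parent can never be re-attached to the chain kept by nodes that
// rejected that parent.  Simplified illustration, not the actual data structure.
#include <cstdint>
#include <string>

struct BlockHeaderSketch {
    int32_t     nVersion;
    std::string hashPrevBlock;    // parent this block extends
    std::string hashMerkleRoot;
    uint32_t    nTime;
    uint32_t    nBits;
    uint32_t    nNonce;
};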


Title: Re: The MAX_BLOCK_SIZE fork
Post by: gmaxwell on January 31, 2013, 07:57:13 AM
It's not that bad.  If the larger block miners are >50%, they will build off the longest valid chain, so they will ignore the smaller block miners blocks since they have a lower difficulty.  If the smaller block miners are >50%, they will always have the longest chain and no large blocks will ever survive reorganization for more than a couple confirmations.  Block headers contain the hash of the previous block, so once your chain forks, the blocks built after the first split block are not compatible with the other chain.
No— "longest valid chain", all of the nodes which have not adopted your Bitcoin-prime will reject the >50% hashpower's "invalid chain" to the 'true' Bitcoin network those miners will simply stop existing. From one currency you will have two. It is a maximally pessimal outcome at the near 50% split, and it wold be against the issue of any Bitcoin user to accept a non-trivial risk of that outcome no matter what the benefit.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: gmaxwell on January 31, 2013, 08:01:14 AM
Opinions differ on the subject. The text on the Wiki largely reflects Mike Hearn's views.

Here are my views:

Without a sharp constraint on the maximum blocksize there is currently _no_ rational reason to believe that Bitcoin would be secure at all once the subsidy goes down.

Bitcoin is valuable because of scarcity. One of the important scarcities is the limited supply of coins, another is the limited supply of block-space: Limited blockspace creates a market for transaction fees, the fees fund the mining needed to make the chain robust against hostile reorganization.  I have not yet seen any suggestion as to how Bitcoin is long term viable without this except ones that argue for cartel or regulatory behavior (both of which I don't consider viable: they moot the decentralized purpose of Bitcoin).

Even going beyond fee funding— as Dan Kaminsky argued so succinctly— with gigabyte blocks, bitcoin would not be functionally decentralized in any meaningful way: only a small self-selecting group of some thousands of major banks would have the means and the motive to participate in validation (much less mining), just as some thousands of major banks are the primary drivers of the USD and other major world currencies. An argument that Bitcoin can simply scale directly like that is an argument that the whole decentralization thing is a pretext: and some have argued that it's evidence that bitcoin is just destined to become another centralized currency (with some "bonus" wealth redistribution in the process, which they suggest is the real motive— that the decentralization is a cynical lie).

Obviously decentralization can be preserved for increased scale with technical improvements, and those should be done— but if decentralization doesn't come first I think we would lose what makes Bitcoin valuable and special...  and I think that would be sad. (Though, to be frank— Bitcoin becoming a worldwide centrally controlled currency could quite possibly be the most profitable for me— but I would prefer to profit by seeing the world be a diverse place with many good and personally liberating choices available to people)

Perhaps the proper maximum size isn't 1MB but some other value which is also modest and still preserves decentralization— I don't have much of an opinion beyond the fact that there is some number of years in the future where— say— 10MB will be no worse than 1MB today. It's often repeated that Satoshi intended to remove "the limit" but I always understood that to be the 500k maximum generation soft limit... quite possibly I misunderstood, but I don't understand why it would be a hardforking protocol rule otherwise. (And why the redundant soft limit— and why not make it a rule for which blocks get extended when mining instead of a protocol rule? ... and if that protocol rule didn't exist? I would have never become convinced that Bitcoin could survive... so where are the answers to long term survival?)
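
(For readers following along: the distinction being drawn here is between the consensus rule and the local generation policy, which sit side by side in the client source roughly as below; the names and the /2 relationship are paraphrased from memory and should be treated as approximate:)
Code:
// Approximate sketch of the two different limits, not a verbatim copy of the source.
static const unsigned int MAX_BLOCK_SIZE = 1000000;              // protocol rule: blocks above this
                                                                  // are invalid for every node
                                                                  // (changing it is a hard fork)
static const unsigned int MAX_BLOCK_SIZE_GEN = MAX_BLOCK_SIZE/2; // "generation" soft limit: only
                                                                  // constrains blocks a node creates,
                                                                  // so it can be changed freely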

(In any case the worst thing that can possibly happen to a distributed consensus system is that it fails to achieve consensus. A substantial persistently forked network is the worst possible failure mode for Bitcoin: Spend all your own coins twice!  No hardfork can be tolerated that wouldn't result in a thoroughly dominant chain with near-certain probability)

But before I think we can even have a discussion about increasing it I think there must be evidence that the transaction load has gone over the optimum level for creating a market for fees (e.g. we should already be at some multiple of saturation and still see difficulty increasing or at least holding steady).  This would also have the benefit of further incentivizing external fast payment networks, which I think must exist before any blockchain increase: it would be unwise to argue an increase is an urgent emergency because we've painted ourselves into a corner by using the system stupidly and not investing in building the infrastructure to use it well.

Quote
You can get around it (NAT), and you can fix it (IPv6) but the former is annoying and the latter is taking forever

It's not really analogous at all.  Bitcoin has substantial limits that cannot be fixed within the architecture, unrelated to the artificial* block-size cap. The blockchain is a worldwide broadcast medium and will always scale poorly (even if rocket boosters can be strapped to that pig), the consensus it provides takes time to converge with high probability— you can't have instant confirmations,  you can't have reversals for anti-fraud (even when the parties all desire and consent to it),  and the privacy is quite weak owing to the purely public nature of all transactions.

(*artificial doesn't mean bad, unless you think that the finite supply of coin or the limitations on counterfeiting, or all of the other explicit rules of the system are also bad...)

It's important to distinguish Bitcoin the currency and Bitcoin the payment network.  The currency is worthwhile because of the highly trustworthy extreme decentralization which we only know how to create through a highly distributed and decentralized public blockchain.  But the properties of the blockchain that make it a good basis for an ultimately trustworthy worldwide currency do _not_ make it a good payment network.  Bitcoin is only as much of a payment network as it must be in order to be a currency and in order to integrate other payment networks.

Or, by analogy— Gold may be a good store of value, but it's a cruddy payment system (especially online!).  Bitcoin is a better store of value— for one reason because it can better integrate good payment systems.

See retep's post on fidelity bonded chaum token banks for my personal current favorite way to produce infinitely scalable trustworthy payments networks denominated in Bitcoin.

Cheers,


Title: Re: The MAX_BLOCK_SIZE fork
Post by: notme on January 31, 2013, 08:13:08 AM
It's not that bad.  If the larger block miners are >50%, they will build off the longest valid chain, so they will ignore the smaller block miners blocks since they have a lower difficulty.  If the smaller block miners are >50%, they will always have the longest chain and no large blocks will ever survive reorganization for more than a couple confirmations.  Block headers contain the hash of the previous block, so once your chain forks, the blocks built after the first split block are not compatible with the other chain.
No— "longest valid chain", all of the nodes which have not adopted your Bitcoin-prime will reject the >50% hashpower's "invalid chain" to the 'true' Bitcoin network those miners will simply stop existing. From one currency you will have two. It is a maximally pessimal outcome at the near 50% split, and it wold be against the issue of any Bitcoin user to accept a non-trivial risk of that outcome no matter what the benefit.


I'm not sure I understand the "No".  As far as I can tell you are agreeing with me, but your notation is confusing me.

I was just refuting his claim that bitcoin prime miners would accept the blocks of the bitcoin classic miners by explaining that blocks wouldn't be compatible between chains since they have to include the hash of the previous block.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Jeweller on January 31, 2013, 08:18:30 AM
Wow - thanks for the quick, extremely well crafted responses.

da2ce7 - sorry if this is an old topic; I think my confusion stems from the wiki -- it strongly implies a consensus that the size limit will be lifted.

gmaxwell - thanks; I hadn't thought through how the blockchain would actually fork.  Yeah, you really would immediately get two completely separate chains.  Yikes.

In general I agree, the block size needs to be limited so that tx fees incentivize mining.  Overly high limits mean someone, somewhere, will mine for free, allowing people to low-ball transactions, and ruining mining incentives in general.

What I meant by the IPv4 thing is that... 1MB?  That's it?  Like 500,000 tx a day.  If only they had said 100MB, that wouldn't have really made any difference in the long run, and then millions of people could get their transaction in there every day.  Which is what I've often thought about with IP addresses: if only they'd done 6-bytes like a hardware MAC address, then maybe we wouldn't have to worry about it...

So, the wiki should be changed, right?  I'd say, just reading this thread, that anyone holding bitcoins would from a conservative perspective want to avoid the chaos of a split blockchain at all costs, and not consider changing the protocol numbers.  I had been under the impression, and I think many others are too, that the network (not just the currency) would in fact be scaling up enormously in the future.

As for centralization: the decentralization of the bitcoin transaction network will suffer in a way.  Right now, anyone can send their bitcoins wherever they wish.  Years from now, when people are bidding against each other for space in the constantly over-crowded blockchain, no normal people will be able to make on-chain, published transactions...


Title: Re: The MAX_BLOCK_SIZE fork
Post by: da2ce7 on January 31, 2013, 08:47:48 AM
Without a sharp constraint on the maximum blocksize there is currently _no_ rational reason to believe that Bitcoin would be secure at all once the subsidy goes down.


This is a very interesting game-theory question.

I have done some preliminary analysis of the problem and have found that gmaxwell may not be correct on this assertion.  (I used to support the razing of the max block size limit; however I now reject it on moral grounds, the same moral grounds on which I would reject any change to the number of coins: it would render any past uses of the protocol as having been made under a false pretence.)

Now I will try and explain why a bitcoin-like protocol could be secure without the max-block-size limit.

The core issue is mixing up 'cost to attack' with 'active hash rate'; in the long run they are quite independent quantities.  Because the vast majority of miners' income currently comes from mining blocks for new bitcoin, there happens to be a very strong coupling between the two, but this coupling doesn't need to hold.

The first thing that we can define is the 'damage cost of double spends'. This cost can be modelled by the equation:
cost = time x value
where time is a real number between 0 and 1.
A double spend a long time after a transaction, for a large amount, is very costly (up to the full value of the tx).


In the free market, in the long term, the market will sustain any fraud rate that is less than the cost of reducing it (i.e., it is cheaper to buy insurance than to increase the difficulty of an attack).
I see no reason why a bitcoin-like protocol wouldn't be subject to the same principles: the network will spend the absolute minimum on maintaining its security (either via hashing or via insurance).

So what is the cheapest form of network security? Dynamic response to threats.
Bitcoin miners incur virtually no cost unless they are actively mining.  Bitcoin insurance companies could amass huge collections of bitcoin miners and turn them on when it is cheaper for them to out-mine the double-spend than to pay the insurance out.

The bitcoin network will look quite easy to attack; well, until you try to attack it.
This will raise the COST TO ATTACK (that is a constant), while the COST TO DEFEND is kept at a minimum: only the minimum number of miners are turned on to defend the chain when it is attacked.
Otherwise a background mining operation will be run by bitcoin companies for 'general network health.'


Title: Re: The MAX_BLOCK_SIZE fork
Post by: theymos on January 31, 2013, 08:59:57 AM
It's often repeated that Satoshi intended to remove "the limit" but I always understood that to be the 500k maximum generation soft limit... quite possibly I misunderstood, but I don't understand why it would be a hardforking protocol rule otherwise.

Satoshi definitely intended to increase the hard max block size. See:
https://bitcointalk.org/index.php?topic=1347.0

I believe that Satoshi expected most people to use some sort of lightweight node, with only companies and true enthusiasts being full nodes. Mike Hearn's view is similar to Satoshi's view.

I strongly disagree with the idea that changing the max block size is a violation of the "Bitcoin currency guarantees". Satoshi said that the max block size could be increased, and the max block size is never mentioned in any of the standard descriptions of the Bitcoin system.

IMO Mike Hearn's plan would probably work. The market/community would find a way to pay for the network's security, and it would be easy enough to become a full node that the currency wouldn't be at risk. The max block size would not truly be unlimited, since miners would always need to produce blocks that the vast majority of full nodes and other miners would be able and willing to process in a reasonable amount of time.

However, enforcing a max block size is safer. It's not totally clear that an unlimited max block size would work. So I tend to prefer a max block size for Bitcoin. Some other cryptocurrency can try the other method. I'd like the limit to be set in a more decentralized, free-market way than a fixed constant in the code, though.

So, the wiki should be changed, right?

It's not yet known how this issue will be handled. The wiki describes one possibility, and this work shouldn't be removed.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: flower1024 on January 31, 2013, 09:04:52 AM
What do you think about a dynamic block size based on the amount of transactions in the last blocks?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: theymos on January 31, 2013, 09:12:09 AM
What do you think about a dynamic block size based on the amount of transactions in the last blocks?

That's easily exploited. The limit shouldn't depend entirely on the block chain.

Here's one idea:
The block size limit doesn't need to be centrally-determined. Each node could automatically set its max block size to a calculated value based on disk space and bandwidth: "I have 100 GB disk space available, 10 MB per 10 minutes download speed and 1 MB per 10 minutes upload speed, so I'll stop relaying blocks [discouraging them] if they're near 1/8 MB [enough for each peer] and stop accepting them at all if they're over 2MB because I'd run out of disk space in less than a year at that rate". If Bitcoin ends up rejecting a long chain due to its max block size, it can ask the user whether he wants to switch to a lightweight mode.

Users could also specify target difficulty levels that they'd like the network to have and reduce their max block size when the network's actual difficulty level drops below that. A default target difficulty level could maybe be calculated based on how fast the user's computer is -- as users' computers get faster, you'd expect mining to also get faster.
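
(To illustrate the idea rather than specify it, here is a sketch of the kind of per-node calculation described above; the struct, the function, and the one-year retention assumption are all hypothetical:)
Code:
// Derive a purely local block-size policy from this node's own resources.
// Matches the worked example above: 1 MB upload / 8 peers gives a ~1/8 MB relay limit,
// and 100 GB free disk / one year of blocks gives a ~2 MB hard acceptance limit.
#include <algorithm>
#include <cstdint>

struct NodeResources {
    uint64_t nFreeDiskBytes;          // e.g. 100 GB
    uint64_t nUploadBytesPer10Min;    // e.g. 1 MB
    unsigned int nPeers;              // peers this node relays blocks to
};

struct LocalBlockPolicy {
    uint64_t nRelayLimit;             // above this, stop relaying (discourage the block)
    uint64_t nAcceptLimit;            // above this, reject the block outright
};

LocalBlockPolicy ComputeLocalPolicy(const NodeResources& res) {
    const uint64_t BLOCKS_PER_YEAR = 6 * 24 * 365;   // ~52,560 ten-minute blocks
    LocalBlockPolicy policy;
    policy.nRelayLimit  = res.nUploadBytesPer10Min / std::max(1u, res.nPeers);
    policy.nAcceptLimit = res.nFreeDiskBytes / BLOCKS_PER_YEAR;
    return policy;
}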


Title: Re: The MAX_BLOCK_SIZE fork
Post by: MPOE-PR on January 31, 2013, 09:19:23 AM
Unspent outputs at the time of the fork can be spent once on each new chain.  Mass confusion.

No, this is actually great insurance for Bitcoin users. Practically it says that if you get Bitcoins now and Bitcoin later forks, you will have your Bitcoins in each and every individual fork. You can never "lose" your Bitcoins for being "on the wrong side" of the fork, because you'll be on all sides.

This incidentally also offers a very efficient market mechanism for handling the issue: people will probably be interested in selling fork-x Bitcoins they own to buy more fork-y Bitcoins if they believe fork-y is good or fork-x bad. This imbalance of offer/demand will quickly bring the respective price ratios into a position where continuing the "bad" fork is economically unfeasible (sure, miners could continue mining forever from a technical standpoint, but in reality people with infinite bank accounts are rare).

Without a sharp constraint on the maximum blocksize there is currently _no_ rational reason to believe that Bitcoin would be secure at all once the subsidy goes down.

Bitcoin is valuable because of scarcity. One of the important scarcities is the limited supply of coins, another is the limited supply of block-space: Limited blockspace creates a market for transaction fees, the fees fund the mining needed to make the chain robust against hostile reorganization.

This is actually true.

(And the worst thing that can possibly happen to a distributed consensus system is that it fails to achieve consensus. A substantial persistently forked network is the worst possible failure mode for Bitcoin: Spend all your own coins twice!  No hardfork can be tolerated that wouldn't result in a thoroughly dominant chain with near-certain probability)

This is significantly overstated.

Surely from an "I want to be THE BITCOIN DEV!!!" perspective that scenario is the very avatar of complete and unmitigated disaster. The fact is however that most everyone currently propping their ego and answering the overwhelming "what is your point in this world and what are you doing with your life" existentialist questions with "I r Bitcoin Dev herp" will be out before the decade is out, and that includes you. Whether Bitcoin forks persistently or not, you still won't be "in charge" for very much longer.

Knowing that I guess you can view the matter a little closer to what it is: who cares? People do whatever they want. If they want eight different Bitcoin forks, more power to them. It will be even more decentralized that way, it will be even more difficult for "government" to "stop it" - heck, it'd be even impossible to know what the fuck anyone's talking about anymore. That failure mode of horror can very well be a survival mode of greatness, in the end. Who knows? Not me. Not you either, for that matter.

It's important to distinguish Bitcoin the currency and Bitcoin the payment network.  The currency is worthwhile because of the highly trustworthy extreme decentralization which we only know how to create through a highly distributed and decentralized public blockchain.  But the properties of the blockchain that make it a good basis for an ultimately trustworthy worldwide currency do _not_ make it a good payment network.  Bitcoin is only as much of a payment network as it must be in order to be a currency and in order to integrate other payment networks.

This is also very true. Bitcoin is not a payment network any more than a girl that went to Stanford and graduated top of her class is a cook: for that limited interval where she's stuck with it. Course I've been saying that for a year now and pretty much everyone just glazes over and goes into derpmode. I guess it's a distinction whose time has not yet come or something.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: solex on January 31, 2013, 09:53:14 AM
The max block size seems to me to be a very important issue, because 1MB is certainly too small to support a global currency with a significant user base, even if bitcoin's core function is just as a currency rather than an all-singing, all-dancing payment system. Like everyone here I would very much like to see bitcoin one day replace the disastrously managed fiat currencies.

My question is: Does increasing the max block size really need to be a hard fork?

Couldn't the block versioning be used as already described below regarding the introduction of version 2?

"As of version 0.7.0, a new block version number has been introduced. The network now uses version 2 blocks, which include the blockheight in the coinbase, to prevent same-generation-hash problems. As soon as a supermajority, defined as 95% of the last 1000 blocks, uses this new block version number, this version will be automatically enforced, rejecting any new block not using version 2."  (source http://blockorigin.pfoe.be/top.php)

Let's say a block size solution is determined, such as a variable limit or a simple increase to a new fixed value, and that it is planned for block version 3.

The new software change could be inactive until a supermajority of the last 1000 blocks are version 3. Then the change to the max block size becomes active. The result is close to a "soft fork" with minimum risk and disruption. This would prevent some of worst blockchain forking scenarios described above.
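
(A minimal sketch of the supermajority trigger this would rely on, modelled on the version-2 rollout quoted above; the constants and names are illustrative only. Note that, as gmaxwell argues further down, counting miners this way still amounts to a coordinated hard fork for old non-mining nodes.)
Code:
// The larger-block rule stays dormant until 95% of the last 1000 blocks
// advertise the new block version, mirroring the version-2 enforcement rule.
#include <deque>

static const int    NEW_BLOCK_VERSION = 3;
static const size_t WINDOW            = 1000;
static const size_t THRESHOLD         = 950;   // 95% of the window

bool LargerBlocksActivated(const std::deque<int>& recentBlockVersions) {
    if (recentBlockVersions.size() < WINDOW)
        return false;
    size_t count = 0;
    for (size_t i = recentBlockVersions.size() - WINDOW; i < recentBlockVersions.size(); ++i)
        if (recentBlockVersions[i] >= NEW_BLOCK_VERSION)
            ++count;
    return count >= THRESHOLD;
}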


Title: Re: The MAX_BLOCK_SIZE fork
Post by: caveden on January 31, 2013, 09:55:26 AM
Changing this limit needs to be discussed now, before we start hitting it. 

This has been discussed for a while (https://bitcointalk.org/index.php?topic=1865.0).

I used to support the idea of an algorithm to recalculate the limit, as it's done for the difficulty. But currently I just think miners should be able to create their own limits together with multiple "tolerance levels", like  "I won't accept chains containing blocks larger than X unless it's already N blocks deeper than mine". Each miner should set their own limits. That would push towards a consensus. Miners with limits too different than the average would end up losing work. The point is that like this the consensus is achieved through "spontaneous order" (decentralized), and not via a top-down decision.

That said, I do have the feeling that this change will only be scheduled once we start hitting the limit.
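
(A sketch of how a single miner's "tolerance level" rule might look; the policy fields and the function are hypothetical, purely to illustrate the decision being described:)
Code:
// Each miner sets its own soft limit X and reorg-depth tolerance N.
// It ignores a chain containing an oversized block until that chain is
// at least N blocks deeper than its own, at which point it gives in.
#include <cstdint>

struct MinerPolicy {
    uint64_t nMaxAcceptedBlockSize;   // X: largest block accepted without hesitation
    int      nToleranceDepth;         // N: depth advantage required before yielding
};

bool AcceptCompetingChain(const MinerPolicy& policy,
                          uint64_t nLargestBlockInChain,
                          int nDepthAdvantage) {
    if (nLargestBlockInChain <= policy.nMaxAcceptedBlockSize)
        return true;                                   // within limits: no conflict
    return nDepthAdvantage >= policy.nToleranceDepth;  // oversized: yield only once it is N deeper
}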


Title: Re: The MAX_BLOCK_SIZE fork
Post by: MPOE-PR on January 31, 2013, 10:22:57 AM
Changing this limit needs to be discussed now, before we start hitting it. 

This has been discussed for a while (https://bitcointalk.org/index.php?topic=1865.0).

I used to support the idea of an algorithm to recalculate the limit, as it's done for the difficulty. But currently I just think miners should be able to create their own limits together with multiple "tolerance levels", like  "I won't accept chains containing blocks larger than X unless it's already N blocks deeper than mine". Each miner should set their own limits. That would push towards a consensus. Miners with limits too different than the average would end up losing work. The point is that like this the consensus is achieved through "spontaneous order" (decentralized), and not via a top-down decision.

That said, I do have the feeling that this change will only be scheduled once we start hitting the limit.

Probably the most sensible approach.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: flower1024 on January 31, 2013, 10:34:27 AM
What do you think about a dynamic block size based on the amount of transactions in the last blocks?

That's easily exploited. The limit shouldn't depend entirely on the block chain.

Here's one idea:
The block size limit doesn't need to be centrally-determined. Each node could automatically set its max block size to a calculated value based on disk space and bandwidth: "I have 100 GB disk space available, 10 MB per 10 minutes download speed and 1 MB per 10 minutes upload speed, so I'll stop relaying blocks [discouraging them] if they're near 1/8 MB [enough for each peer] and stop accepting them at all if they're over 2MB because I'd run out of disk space in less than a year at that rate". If Bitcoin ends up rejecting a long chain due to its max block size, it can ask the user whether he wants to switch to a lightweight mode.

Users could also specify target difficulty levels that they'd like the network to have and reduce their max block size when the network's actual difficulty level drops below that. A default target difficulty level could maybe be calculated based on how fast the user's computer is -- as users' computers get faster, you'd expect mining to also get faster.

I don't like that approach very much, because I think it gives too much influence to nodes.
What about this one: block size is determined by median transaction fees?

This is not very easy to game (unless you are a big pool, which should want to reduce the block size anyway, so there is no incentive).


Title: Re: The MAX_BLOCK_SIZE fork
Post by: gmaxwell on January 31, 2013, 11:56:22 AM
Couldn't the block versioning be used as already described below regarding the introduction of version 2?
[...]
The result is close to a "soft fork" with minimum risk and disruption. This would prevent some of worst blockchain forking scenarios described above.
In our normal language a softforking change is one which is fully reverse compatible. They are changes which never produce behavior which an original bitcoin node would recognize as a violation of the rules which make bitcoin ... bitcoin.  What you're trying to describe is a coordinated hardfork, which is what you'd need to do to change any of the system fundamentals, e.g. change the supply of coins or the time between blocks— something we've never done— and something that isn't easily made safe.

Softforking changes are safe so long as a sufficient super-majority of mining is on... to older nodes they just look like some txn are indefinitely delayed and some blocks are surprisingly orphaned, but no violations.

A hardforking change requires almost universal adoption by bitcoin users (note: not miners, miners are irrelevant for a hardforking change: a miner that doesn't follow a change that is followed by all the users simply isn't a miner anymore), so taking a count of miners is not a good or safe way to go about it.  The obvious way to implement one would be to achieve sufficient consensus, and then strike a fixed switchover time at some point in the future. ... though the savvy analyst is asking themselves what happens when the next revision of the rules is prejudicial to their interests?...

When Bitcoin's behavior is merely a system of computer rules you can trust it because you (or people you trust who read code) can point to the rules and say "it is so because of cryptographic proof, the mathematics of the program make it thusly".  If the rules are up for revision by popularity contest or whatever system you like— then you have a much more complicated trust equation where you have to ask if that process will make decisions which are not only wise but also respect your needs. Who will cast the ballots, who will count them? Even if the process is democratically fair— is it something that lets the wolves vote to eat the sheep or does the process somehow respect personal liberty and autonomy?  All the blockchain distributed consensus stuff starts sounding easy by comparison.

An alternative theory I present is: if some hardforking change is so valuable, why couldn't an altcoin prove that value and earn its place in the free market and eventually supplant the inferior alternative? Why is that inferior to changing the immutable (within the context of the system) rules when doing so is against the will of any of its users[1]?  Or to use the language of libertarian dogma: Must change only come by force?   Can any blockchain cryptocurrency survive if it becomes a practice and perception that the underlying software contract will be changed?

Hardforks: There be technological and philosophical dragons.


[1] if the rules are subtly broken and ~everyone agrees that they /must/ be changed that is another matter and not the subject I'm talking about.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Jeweller on January 31, 2013, 12:13:41 PM
caveden - Thanks for the link to the thread from 2010.  It's interesting that many people, including Satoshi, were discussing this long before the limit was approached.  And I definitely agree that having it hard-coded in will make it much harder to change in 2014 than in 2010.

da2ce7 - I understand your opposition to any protocol change.  Makes sense; we signed up for 1MB blocks, so that's what we stay with.  What I'd like to know is, what would your response be if there was a widespread protocol change?  If version 0.9 of the qt-client had some type of increased, or floating max block size (presumably with something like solex proposes), would you:

- refuse the change and go with a small-block client
- grudgingly accept it and upgrade to a big-block client
- give up on bitcoin all together?

I worry about this scenario from a store-of-value point of view.  Bitcoins are worth something because of the decentralized consensus of the block chain.  To me, anything that threatens that consensus threatens the value of my bitcoins.  So in my case, whether I'm on the big-block side or the small-block side, I'm actually just going to side with whichever side is bigger, because I feel the maintenance of consensus is more valuable than any benefits / problems based on the details of the protocol.  Saying you reject it on "moral" terms though makes me think you might not be willing to make that kind of pragmatic compromise.

That said, 1MB is really small.  I'm trying to envision a world-finance-dominating network with 1MB blocks every 10 minutes and it's tough.  While there are lots of great ideas, it does seem to defeat the purpose a little bit to have the vast majority of transactions taking place outside the blockchain. 
And if the 1MB limit does stay, it calls into question the need for client improvements in terms of efficiency and so on.  If the blocks never get appreciably bigger than they do now, well, any half-decent laptop made in the past few years can handle being a full node with no problem.

Perhaps a better question then I'd like to ask people here is: The year is 2015.  Every block is a megabyte.  Someone wrote a new big-block client fork, and people are switching.  What will you do?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: justusranvier on January 31, 2013, 12:31:44 PM
I don't remember who proposed it but the best proposal I've heard is to make the maximum block size scale based on the difficulty.

The transition to ASIC mining should represent the last step increase in hashing power, so after that's done would be a good time to establish a baseline for whatever formula gets used.
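
(One way the "scale with difficulty" idea could be written down, purely as an illustration; the baseline difficulty below is a placeholder, not a proposed constant:)
Code:
// Grow the limit in proportion to how far difficulty has risen above a fixed baseline,
// and never let it fall below the current 1 MB.
#include <cstdint>

static const uint64_t BASELINE_MAX_BLOCK_SIZE = 1000000;   // 1 MB at the baseline
static const double   BASELINE_DIFFICULTY     = 3.0e6;     // placeholder baseline difficulty

uint64_t MaxBlockSizeForDifficulty(double currentDifficulty) {
    double scale = currentDifficulty / BASELINE_DIFFICULTY;
    if (scale < 1.0)
        scale = 1.0;
    return static_cast<uint64_t>(BASELINE_MAX_BLOCK_SIZE * scale);
}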


Title: Re: The MAX_BLOCK_SIZE fork
Post by: MPOE-PR on January 31, 2013, 01:34:29 PM
That said, 1MB is really small.  I'm trying to envision a world-finance-dominating network with 1MB blocks every 10 minutes and it's tough.

What makes you suspect it's tough because of the blocksize? Maybe it's tough because it's just not something you'd be very good at, for a multitude of unrelated reasons.

Perhaps a better question then I'd like to ask people here is: The year is 2015.  Every block is a megabyte.  Someone wrote a new big-block client fork, and people are switching.  What will you do?

I've asked MP. While nobody can really know the future, turns out what we'll likely do is start an entirely new coin, this time guaranteed to never be hard-forked; not by a bunch of coder nobodies, but by MP himself. In practice that'll most likely work out to simply staying with the old version and replacing some code monkeys. This, mind you, not because we really care all that much if it's 1 Mb or 100 Gb, but because the precedent of hardforking "for change" is intolerable. We'll find ourselves in due time under a lot more pressure to fuck up Bitcoin than some vague "I can't manage to envision the future" sort of bs.

An alternative theory I present is: if some hardforking change is so valuable, why couldn't an altcoin prove that value and earn its place in the free market and eventually supplant the inferior alternative?

An excellent question. If y'all are going to be dicking around with hardforks might as well do it on whatever devcoin nobody cares about, watch it continue to be resoundingly not cared about and draw the logical inference from there.

The transition to ASIC mining should represent the last step increase in hashing power

Nonsense. The only group that has shipped one (some?) ASICs is using 110nm tech. We are at the beginning of a curve, not at the end of it.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Mike Hearn on January 31, 2013, 02:36:10 PM
You could argue against any change to Bitcoin ever based on "those are the protocol rules that were signed up for", but obviously, the protocol has changed many times since its first creation.

All this thread says to me is we need a better FAQ page. The topic comes up repeatedly and no new insight is gained by doing it 11 times rather than 10.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Jeweller on January 31, 2013, 03:20:15 PM
Mike Hearn - Sorry if this feels like a redundant question, or that it's decreasing the signal to noise ratio here in any way.  I suppose at its base it's not really an answerable question: what's the future of bitcoin?  We'll have to see.

What's interesting is that there seem to be two fairly strongly divergent viewpoints on this matter: some people assume the transaction network will continue to grow to rival paypal or even credit cards, and see the block size limit as an unimportant detail that will be quickly changed when needed.  Others see the limit as a fundamental requirement, or even dogma, of the bitcoin project, and view the long term network as mainly an international high-value payment system, or the backing of derivative currencies.  Both views seem reasonable, yet mutually exclusive.

I don't see this kind of disagreement with other often-brought up and redundant issues, such as "satoshi's aren't small enough", "people losing coins means eventually there won't be anything left" and so on.  Those aren't real problems.  I'm not saying the 1MB limit is a "problem" though, I just want to know what people are going to do, and what's going to happen.  Regardless of anyone's opinion on the issue, given the large number of people using bitcoin, the ease with which the change can be made, and the impending demand for more transactions, someone will compile a client with a larger block limit.  What if people want to start using it?

I can see this issue limiting bitcoin acceptance as a payment method for websites: why go to all the trouble of implementing a high-security bitcoin processing system for your e-commerce site if in a couple years bitcoin won't be usable for small transactions?  Maybe it will in fact scale up, but without any clear path for how that would happen, many will choose to wait on bitcoin and see what evolves rather than adopt it for their organization.

Sorry if I'm being dense -- from the wiki this is indeed classified as "Probably not a problem", and if some developers came on here and told me, "Quiet, you, it's being worked on," I would concede the point to them.  To me though the uncertainty itself of whether the 1MB limit will remain gives me pause.  The threads from 3 years ago debating the same topic perhaps make this conversation redundant, but don't settle the issue for me: this was a tricky problem 3 years ago, and it still is.  The only thing that's changed with regard to the block size is that we're getting ever closer to hitting the limit.

Perhaps this is a conversation we'll just need to have in a year or so when the blocks are saturated.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: MPOE-PR on January 31, 2013, 03:49:15 PM
You could argue against any change to Bitcoin ever based on "those are the protocol rules that were signed up for", but obviously, the protocol has changed many times since its first creation.

All this thread says to me is we need a better FAQ page. The topic comes up repeatedly and no new insight is gained by doing it 11 times rather than 10.

Any hard forks in that list of many changes?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Mike Hearn on January 31, 2013, 05:23:28 PM
What's interesting is that there seem to be two fairly strongly divergent viewpoints on this matter: some people assume the transaction network will continue to grow to rival paypal or even credit cards, and see the block size limit as an unimportant detail that will be quickly changed when needed.  Others see the limit as a fundamental requirement, or even dogma, of the bitcoin project, and view the long term network as mainly an international high-value payment system, or the backing of derivative currencies.  Both views seem reasonable, yet mutually exclusive.

It's an issue that will become clearer with time, I think.

By the way, I'm not assuming that Bitcoin will grow to rival PayPal or credit cards. That would be a wild, runaway success that would mark a major turning point in the history of money itself. And the internet is littered with the carcasses of dead attempts to revolutionize payments. It'd be presumptuous to presume future success.

However, if transaction volumes do grow to reach the block size limits, I would (for now) be advocating for a move to a floating limit based on chain history.

To recap: the primary arguments against are

1) Rising requirements for running a node make Bitcoin more centralized
2) The economics of providing for network security when block inclusion is free and inflation has dwindled

For (1), Satoshi always took the position that Moore's law would accommodate us. I wrote the Scalability page on the wiki to flesh out his belief with real numbers. As you can see, even with no improvements to today's technology at all Bitcoin can scale beyond PayPal .... by orders of magnitude. It requires nodes to run on server-class machines rather than laptops, but many already do, so I don't see that as a big deal. If Bitcoin ever reaches high traffic levels, CPU time, bandwidth, storage capacity ... all very likely to be cheaper than today. I don't think the "only banks and megacorps run full nodes" scenario will ever happen.

For (2) I have proposed building a separate p2p network on which participants take part in automatically negotiated assurance contracts. I think it would work, but it won't be possible to be truly convincing until it's tried out for real. That in turn requires:

a) That there be some real incentive to boost network security, like semi-frequent re-orgs leading to spends being reversed and merchants/exchanges losing money. Inflation will likely provide us enough security for the medium-term future.
b) Somebody actually build the tool and get people using it.

Then you have to wait and see if community participants step up and use it.

In short, I can't see this question being resolved before we actually run up against the limit, which is unfortunate. I wish Satoshi had put a floating limit in place right from the start. But unfortunately there were many issues for him to consider and only limited time to consider each one; a fixed size limit probably didn't seem like a big deal when he wrote it.
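
(For concreteness, one possible shape of such a floating limit; the window, multiplier, and floor below are invented parameters, not part of any actual proposal:)
Code:
// Let the limit float as a multiple of the median size of recent blocks.
// The median resists manipulation by a handful of deliberately stuffed blocks,
// and the floor keeps the limit from ever dropping below today's 1 MB.
#include <algorithm>
#include <cstdint>
#include <vector>

uint64_t FloatingMaxBlockSize(std::vector<uint64_t> recentBlockSizes,
                              uint64_t nFloor = 1000000,
                              uint64_t nMultiplier = 2)
{
    if (recentBlockSizes.empty())
        return nFloor;
    std::nth_element(recentBlockSizes.begin(),
                     recentBlockSizes.begin() + recentBlockSizes.size() / 2,
                     recentBlockSizes.end());
    uint64_t median = recentBlockSizes[recentBlockSizes.size() / 2];
    return std::max(nFloor, median * nMultiplier);
}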


Title: Re: The MAX_BLOCK_SIZE fork
Post by: bpd on January 31, 2013, 06:02:39 PM

It's an issue that will become clearer with time, I think.

By the way, I'm not assuming that Bitcoin will grow to rival PayPal or credit cards. That would be a wild, runaway success that would mark a major turning point in the history of money itself. And the internet is littered with the carcasses of dead attempts to revolutionize payments. It'd be presumptuous to presume future success.

However, if transaction volumes do grow to reach the block size limits, I would (for now) be advocating for a move to a floating limit based on chain history.

To recap: the primary arguments against are

1) Rising requirements for running a node make Bitcoin more centralized
2) The economics of providing for network security when block inclusion is free and inflation has dwindled

For (1), Satoshi always took the position that Moore's law would accommodate us. I wrote the Scalability page on the wiki to flesh out his belief with real numbers. As you can see, even with no improvements to today's technology at all Bitcoin can scale beyond PayPal .... by orders of magnitude. It requires nodes to run on server-class machines rather than laptops, but many already do, so I don't see that as a big deal. If Bitcoin ever reaches high traffic levels, CPU time, bandwidth, storage capacity ... all very likely to be cheaper than today. I don't think the "only banks and megacorps run full nodes" scenario will ever happen.

For (2) I have proposed building a separate p2p network on which participants take part in automatically negotiated assurance contracts. I think it would work, but it won't be possible to be truly convincing until it's tried out for real. That in turn requires:

a) That there be some real incentive to boost network security, like semi-frequent re-orgs leading to spends being reversed and merchants/exchanges losing money. Inflation will likely provide us enough security for the medium-term future.
b) Somebody actually build the tool and get people using it.

Then you have to wait and see if community participants step up and use it.

In short, I can't see this question being resolved before we actually run up against the limit, which is unfortunate. I wish Satoshi had put a floating limit in place right from the start. But unfortunately there were many issues for him to consider and only limited time to consider each one; a fixed size limit probably didn't seem like a big deal when he wrote it.

Agree, (1) is not a huge issue. Moore's law will prevail. Plenty of people will run full nodes, and those that can't will run header-only or blockchain-less clients.

For (2), I feel like there's a factor I never see mentioned. In the short run (12+ years), the block rewards are more than enough to incentivize mining, especially as we're moving to a world where the variable cost (electricity) of mining is plummeting. Over that same timeframe, the cost of ASICs should also plummet to the marginal cost of production at the same time Moore's law is increasing their power. Hashing power is going to be cheap. Very cheap. I actually hypothesize that even in the case where transaction fees are negligible, if bitcoin has "succeeded", i.e. the value is much much higher than it is today, then we will have de facto proof of stake. Those people and entities with large holdings of bitcoin will have both the resources and the incentive to mine or to pay for hashing power to secure the network. Similar to how those with lots of gold tend to build massive expensive vaults and pay for large amounts of security.

I think it's a mistake to project one's vision of what bitcoin SHOULD be (high-value international transaction network) while ignoring what bitcoin IS becoming in the present. To choke off the growth in the payment network at this stage would be completely counterproductive to getting the value of bitcoin to where we all want it to be longer term. The block size limit MUST be addressed, and most likely within the next year and a half.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: jtimon on January 31, 2013, 06:25:45 PM
About Mike's problem two, this has been discussed many times.
The bitcoin protocol as it is needs a block size limit (not necessarily the one we have today) to avoid a tragedy of the commons on mining when subsidies are gone.
I remember some people advocating for proof of stake (I think that's how that concept started) and me alone advocating for demurrage (http://en.wikipedia.org/wiki/Demurrage_currency).

And now sorry for the free advertising...
Fortunately we have Freicoin (http://freico.in/) which doesn't suffer from this potential problem even if there's no block limit at all.
Freicoin has a perpetual reward for miners financed through demurrage fees on holdings.
Before everybody starts complaining about savings and demanding mercy for their grandmas: freicoin is purposely designed to be a medium of exchange and NOT a store of value.
That's the beauty of a free monetary market: different monies can have different qualities and purposes.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: MatthewLM on January 31, 2013, 07:08:57 PM
I disagree with those who say this is bad because it breaks the trust of bitcoin. People who use bitcoin want to retain ownership and usability of their money and to be assured that the inflation rate won't change. More space in blocks would be a benefit due to lower transaction fees. The integrity of people's bitcoins remains as it ever was.

The problem is that the fork may not go seamlessly. If a fork is to be made, as many issues as possible should be resolved at the same time. For instance, the block timestamp can be made a 64-bit integer, as it always should have been, so that disturbances like this are as infrequent as possible.

Considerations must be made for how bitcoin can scale to much larger block sizes. I have mentioned before that once block sizes reach a certain point it will be beneficial to relay blocks in separate parts.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: caveden on January 31, 2013, 07:52:49 PM
2) The economics of providing for network security when block inclusion is free and inflation has dwindled

For (2), I feel like there's a factor I never see mentioned. In the short run (12+ years), the block rewards are more than enough to incentivize mining, especially as we're moving to a world where the variable cost (electricity) of mining is plummeting. Over that same timeframe, the cost of ASICs should also plummet to the marginal cost of production at the same time Moore's law is increasing their power. Hashing power is going to be cheap. Very cheap.

Hashing power being cheap is not relevant to the security of the network, since it would be equally cheap to an attacker. It's the total amount of resources employed by honest miners relative to those of an eventual attacker that actually matters.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: MPOE-PR on January 31, 2013, 09:02:34 PM
To recap, this is my issue with Hearn: he makes a false claim (quoted above) to prop a false generalization of his. If that doesn't work, he just ignores the point. Intellectual dishonesty at its finest, and not quite the first time either.

Any hard forks in that list of many changes you weasel you!

And now sorry for the free advertising...
Fortunately we have Freicoin which doesn't suffer from this potential problem even if there's no block limit at all.
Freicoin has a perpetual reward for miners financed through demurrage fees on holdings.
Before everybody starts complaining about savings and demanding mercy for their grandmas: freicoin is purposely designed to be a medium of exchange and NOT a store of value.
That's the beauty of a free monetary market: different monies can have different qualities and purposes.

Nothing to be sorry about (I had no idea it existed, for one) and good luck with it.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: d'aniel on January 31, 2013, 10:32:23 PM
To recap, this is my issue with Hearn: he makes a false claim (quoted above) to prop a false generalization of his. If that doesn't work, he just ignores the point. Intellectual dishonesty at its finest, and not quite the first time either.

Any hard forks in that list of many changes you weasel you!

IIRC, a couple years ago there was a buffer overflow bug that required an emergency hard forking change to fix.  I think there was one more that was rolled out gradually over a couple years, but I don't care enough to look it up for you.

Perhaps Mike didn't notice your demand that he address your point because he (understandably) has you on his ignore list.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: bpd on January 31, 2013, 10:46:59 PM
2) The economics of providing for network security when block inclusion is free and inflation has dwindled

For (2), I feel like there's a factor I never see mentioned. In the short run (12+ years), the block rewards are more than enough to incentivize mining, especially as we're moving to a world where the variable cost (electricity) of mining is plummeting. Over that same timeframe, the cost of ASICs should also plummet to the marginal cost of production at the same time Moore's law is increasing their power. Hashing power is going to be cheap. Very cheap.

Hashing power being cheap is not relevant to the security of the network, since it would be equally cheap to an attacker. It's the total amount of resources employed by honest miners relative to those of an eventual attacker that actually matters.

Sorry, I said this badly. My point is, even if transaction fees amount to tens of thousands of dollars daily, as the current block reward does, who has more incentive to run mining equipment? People going after a fraction of those fees, or people trying to protect their billions of dollars of savings?  Transaction fees are not the only incentive to run mining equipment.



Title: Re: The MAX_BLOCK_SIZE fork
Post by: da2ce7 on February 01, 2013, 12:05:26 AM
Jeweller, thank you for your great questions, I’ll endeavour to answer them.

First I wish to point out that I believe a bitcoin-like protocol with no block-size limit, where the miners choose the ‘best block’ based upon two factors, block size and difficulty, would still be economically secure. I suggest miners would orphan blocks that are too large (and thus too expensive to maintain). The network would find some sort of equilibrium.

If I were to pull out the central issue here:
Max_Block_Size is both a network issue and an economic issue.

Changing the max block size has real economic consequences that can be split into two areas:  Fees and Uses.

Fees:
While I don’t think that the network needs a low Max_Block_Size to remain secure, that shouldn’t rest on my own analysis alone.  People may have invested in Bitcoin because they believe the 1MB limit to be the value required to maintain a secure network.

We don't know whether they would have invested in the first place had Bitcoin used a different limit.

Uses:
In 10-years, a modern smartphone/computer will be able to run the full processing node if the Max_Block_Size remains 1MB.  This is a clear economic benefit for Bitcoin: Decentralized Bitcoin verification.  In fact in a few years, virtually every computer will be able to process the entire blockchain without issue thus making Bitcoin extremely unique in the realm of payments.

We don't know whether they would have invested in the first place had Bitcoin used a different limit.


So I believe that the Max_Block_Size is clearly a moral issue, more than a practical issue.  Just like the Maximum Bitcoins, or Generation Rates.




- refuse the change and go with a small-block client
- grudgingly accept it and upgrade to a big-block client
- give up on bitcoin all together?
Keep on using the smaller chain.  Although I could double spend my coins in the larger chain for my own benefit.


Saying you reject it on "moral" terms though makes me think you might not be willing to make that kind of pragmatic compromise.
Yes.

Perhaps a better question then I'd like to ask people here is: The year is 2015.  Every block is a megabyte.  Someone wrote a new big-block client fork, and people are switching.  What will you do?
If it is better for my purposes than Bitcoin I would move over.  However I believe that the network effect will be that bitcoin is always my first store of value.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: BradZimdack on February 01, 2013, 06:19:43 AM
I'm becoming very, very concerned about this 1MB block limit, especially since we're already seeing blocks that put us more than 25% of the way there.  A mere 7 transactions per second is nothing.  One particularly large online retailer does more than that alone.  If this limit is not increased somehow, and it sounds like it's not going to be possible, then I'm really afraid this will have a very negative impact on Bitcoin's ultimate value, utility, and security.

Just speaking hypothetically, suppose that:

* We're constantly capping out on the 1MB limit;
* We're at a point where the majority of miners' revenue comes from transaction fees;
* Mining has become optimally efficient such that marginal profit approaches zero.

In this scenario, competition for block space would be so fierce as to push transaction fees into the range of several dollars per KB.  It could also then take several hours to get some available block space, even with high fees.  I can't foresee it ever costing more than what a bank charges for a wire transfer, which at my bank is $30.  A wire is also pretty fast, so it's doubtful someone is going to pay more than $30 for a Bitcoin transfer and put up with waiting longer than it would take to do a bank wire.

At 7 tps, that's 604,800 transactions per day maximum.  At a maximum average fee of $30 per transaction, that's about $18.1 million per day in fees.  If we're assuming that mining will approach zero profit, including electricity, hardware depreciation, and R&D -- meaning that 100% of fees paid are going directly to the services needed to secure the network -- then the total annual investment in mining work will not exceed $6.6 billion.  As has been mentioned before, network strength should be measured in currency, not hash rate, because that's the actual cost an attacker would have to incur to break it.  So, in this scenario, we have a maximum network strength of $6.6 billion and it could never, ever be stronger than that without a lot of money being spent to run at a loss.  Pulling off a 51% attack would mean only slightly exceeding that.  If there's enough value being transacted to demand $30 transaction fees, then banks and governments will probably be somewhat irked that so much value is changing hands without them getting their cut.  $6.6 billion is pocket change for a bank or a government, especially because they'd see such a big return on investment from blocking all these direct transfers.
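
As a rough sanity check of the arithmetic above, here is a minimal sketch under the same assumptions (7 tps, a $30 ceiling per transaction, and zero mining profit); it shows nothing the paragraph doesn't already say, it just makes the numbers reproducible:
Code:
#include <cstdio>

int main() {
    const double tps           = 7.0;                      // transactions per second at the 1MB cap
    const double fee_ceiling   = 30.0;                     // USD, roughly a bank wire fee
    const double tx_per_day    = tps * 86400.0;            // 604,800 transactions per day
    const double fees_per_day  = tx_per_day * fee_ceiling; // ~$18.1 million per day
    const double fees_per_year = fees_per_day * 365.0;     // ~$6.6 billion per year
    // If mining runs at zero profit, every fee dollar goes into securing the network,
    // so fees_per_year is an upper bound on what honest miners can spend annually.
    printf("tx/day: %.0f  fees/day: $%.1fM  security ceiling: $%.1fB/year\n",
           tx_per_day, fees_per_day / 1e6, fees_per_year / 1e9);
    return 0;
}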

That's just from the security side of things.  From a utility point of view, once fees start to cost a few bucks, Bitcoin is going to look like a pretty lousy way to pay for most everyday items, and credit cards will look cheap again -- even with their massive fraud rate.

I don't know enough about how all the pieces work, but it sure seems to me like we need a lot more than a 1MB limit...and quick.  Can someone tell me what I'm missing here or reassure me that I have no idea what I'm talking about and that everything will be fine?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Atruk on February 01, 2013, 07:03:06 AM
To recap, this is my issue with Hearn: he makes a false claim (quoted above) to prop a false generalization of his. If that doesn't work, he just ignores the point. Intellectual dishonesty at its finest, and not quite the first time either.

Any hard forks in that list of many changes you weasel you!

And now sorry for the free advertising...
Fortunately we have Freicoin, which doesn't suffer from this potential problem even if there's no block limit at all.
Freicoin has a perpetual reward for miners, financed through demurrage fees on holdings.
Before everybody starts complaining about savings and demanding mercy for their grandmas: Freicoin is purposely designed to be a medium of exchange and NOT a store of value.
That's the beauty of a free monetary market: different monies can have different qualities and purposes.

Nothing to be sorry about (I had no idea it existed, for one) and good luck with it.

I don't blame you for not slumming it up with us little people who watch the altchain discussion. Freicoin was big news about a month ago until people realized it was just the lewest, pumpingest, and dumpingest of the altcoins. A full 80% of the initially issued coins flows into a hardcoded, developer-controlled foundation, which it would take a hard fork (not quite sure if that counts as irony) to remove.

Apologies for the quality of the following linked thread https://bitcointalk.org/index.php?topic=134665.0 (https://bitcointalk.org/index.php?topic=134665.0)

Subsidizing mining by demurrage may or may not be a good idea in a future cryptocurrency (probably a bad idea), but Bitcoin's momentum and the prestige that comes with having substantial value are why these discussions over single technical issues inspire so much passion. This same affinity for the majority of Bitcoin's traits and features means that any fork or altchain from Joe Idea that incorporates his entire wish list of changes isn't going to be adopted by many people beyond Joe Idea, because to the majority Joe Idea's "improvements" are going to seem like a step backwards, if they don't think the changes broke everything they liked about the system.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: MPOE-PR on February 01, 2013, 08:33:40 AM
Perhaps Mike didn't notice your demand that he address your point because he (understandably) has you on his ignore list.

Perhaps. A suicidal strategy, but a strategy nonetheless.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: ThePok on February 01, 2013, 09:04:30 AM
The idea to calculate it from block reward + transaction fees is great.

How much reward do we need to make Bitcoin ultimately safe? And how much is not TOO much? I didn't think too long about it, but 1% is not too far away from good. So 1% of 20 million BTC is 200,000 coins, and there are 144 blocks a day * 360 days in a year. So every block needs about 3.9 BTC of reward (transaction fees are around 0.33 today). So I think there should be a rule that the block size limit increases 5% if the reward + transaction fees were high enough in the last difficulty period, and decreases by maybe 5% if there wasn't enough reward to keep the projected yield at 1%.
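
Restated as a small sketch (just the arithmetic above, using the 144 blocks/day * 360 days approximation):
Code:
#include <cstdio>

int main() {
    const double coins            = 20e6;           // ~20 million BTC in circulation
    const double target_yield     = 0.01;           // 1% of the money supply per year
    const double blocks_per_year  = 144.0 * 360.0;  // 51,840 blocks (approximation above)
    const double needed_per_block = coins * target_yield / blocks_per_year;
    printf("required reward + fees per block: %.2f BTC\n", needed_per_block);  // ~3.86 BTC
    return 0;
}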

We won't hit that limit for a long time... 50, 25, 12.5, 6.25, 3.125 -> maybe in 16 years... until then the block size is bound by hardware capability :)


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Akka on February 01, 2013, 09:09:57 AM
I always thought two of the benefits of Bitcoin are:

Fast transactions
Low transaction fees

It seems that these points are outright lies.

If the blocksize limit isn't lifted and Bitcoin grows, it will at some point be virtually impossible to make transactions unless a ridiculously high transaction fee is paid.
I can't see how we can ask any merchant to accept Bitcoin if it is clear that, unless Bitcoin remains a sideline payment system, at some point it will be impossible to accept Bitcoin for small payments.

So learning that changing the blocksize limit would require a hardfork gets me very concerned.

Also, I strongly disagree that the 1MB blocksize limit is one of the principles of Bitcoin.

IMO it's scarcity of transaction space.

Therefore my Proposal:


Make the max blocksize a mathematical Function.

Let's say, for example, we set a target that the average block size is always 80% of the max block size, and adjust the max block size by at most +-20% every 2016 blocks to meet this target.

This would ensure that blockchain space always remains scarce, thereby ensuring TX fees for fast transactions, while at the same time ensuring that it will always be possible to make a transaction.

I'm looking forward to learning why this wouldn't work.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Atruk on February 01, 2013, 10:00:38 AM
Let's say, for example, we set a target that the average block size is always 80% of the max block size, and adjust the max block size by at most +-20% every 2016 blocks to meet this target.

This would ensure that blockchain space always remains scarce, thereby ensuring TX fees for fast transactions, while at the same time ensuring that it will always be possible to make a transaction.

I'm looking forward to learning why this wouldn't work.

The problem with this is that some pools still mine blocks where the only transaction is the one that awards them their subsidy, and that can really poison averages.

As far as using bitcoins in retail on a reasonable timeline goes, a few of the gambling sites have developed very good ways to prevent nasty stuff by identifying the low risk transactions which may as well be accepted immediately. I don't find the interval of time between blocks to be problematic.

Honestly as far as block size goes, I'm pretty comfortable with Gavin making a decision as benevolent dictator and letting everyone know at what point in the future the blocksize increases to its next finite (or formulaic) step.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Akka on February 01, 2013, 10:16:27 AM
Let's say, for example, we set a target that the average block size is always 80% of the max block size, and adjust the max block size by at most +-20% every 2016 blocks to meet this target.

This would ensure that blockchain space always remains scarce, thereby ensuring TX fees for fast transactions, while at the same time ensuring that it will always be possible to make a transaction.

I'm looking forward to learning why this wouldn't work.

The problem with this is that some pools still mine blocks where the only transaction is the one that awards them their subsidy, and that can really poison averages.


Then change it to the average of the biggest 50% of all blocks mined every 2016 blocks. That would also mean that at least 50% of all miners would have to agree that an increase of the max blocksize is necessary, while also ensuring that no minority can keep an increase from happening.

So (numbers changed a little):

Target: the average block size of the biggest 1008 blocks is always 90% of the max blocksize.
Adjustment: the max blocksize moves by at most +-20% every 2016 blocks to meet this target.
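
A rough sketch of how that adjustment could be computed (purely an illustration of the proposal, with made-up names, not a patch against any client):
Code:
#include <algorithm>
#include <cstdio>
#include <vector>

// block_sizes: sizes (in bytes) of the last 2016 blocks.
// Returns the new max block size under the proposed rule.
double adjust_max_block_size(std::vector<double> block_sizes, double current_max) {
    std::sort(block_sizes.begin(), block_sizes.end());           // ascending
    const size_t half = block_sizes.size() / 2;                  // biggest 1008 of 2016
    double sum = 0.0;
    for (size_t i = block_sizes.size() - half; i < block_sizes.size(); ++i)
        sum += block_sizes[i];
    const double avg_big = sum / half;                           // average of the biggest half
    double target = avg_big / 0.9;                               // limit such that avg_big is 90% of it
    if (target > current_max * 1.2) target = current_max * 1.2;  // clamp to +20% per period
    if (target < current_max * 0.8) target = current_max * 0.8;  // clamp to -20% per period
    return target;
}

int main() {
    std::vector<double> sizes(2016, 200000.0);                   // e.g. every recent block is 200KB
    printf("new limit: %.0f bytes\n", adjust_max_block_size(sizes, 1000000.0));
    return 0;
}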


As far as using bitcoins in retail on a reasonable timeline goes, a few of the gambling sites have developed very good ways to prevent nasty stuff by identifying the low risk transactions which may as well be accepted immediately. I don't find the interval of time between blocks to be problematic.

A block time of 10 minutes is fine by me, and there is no need to change this in any way IMO. I just meant that if there were, say, 1 million transactions a day, ~400K each day would never be confirmed, solely due to the 1MB limit, and this would add up day by day.

Honestly as far as block size goes, I'm pretty comfortable with Gavin making a decision as benevolent dictator and letting everyone know at what point in the future the blocksize increases to its next finite (or formulaic) step.

I agree, but I would be far more comfortable with a solution that would fix this once and for all and would still work for any unforeseen challenges that might come in 50 years.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Atruk on February 01, 2013, 10:37:44 AM
Let's say, for example, we set a target that the average block size is always 80% of the max block size, and adjust the max block size by at most +-20% every 2016 blocks to meet this target.

This would ensure that blockchain space always remains scarce, thereby ensuring TX fees for fast transactions, while at the same time ensuring that it will always be possible to make a transaction.

I'm looking forward to learning why this wouldn't work.

The problem with this is that some pools still mine blocks where the only transaction is the one that awards them their subsidy, and that can really poison averages.


Then change it to the average of the biggest 50% of all blocks mined every 2016 blocks. That would also mean that at least 50% of all miners would have to agree that an increase of the max blocksize is necessary, while also ensuring that no minority can keep an increase from happening.

So (numbers changed a little):

Target: the average block size of the biggest 1008 blocks is always 90% of the max blocksize.
Adjustment: the max blocksize moves by at most +-20% every 2016 blocks to meet this target.

Isn't it amazing the difference adding one small caveat makes? This is why I'd default to trusting Gavin's judgement: everyone is trying to smash all of these ideas in his face, and he has to make decisions that protect brilliant ideas from naive attacks. http://www.schneier.com/blog/archives/2011/04/schneiers_law.html (http://www.schneier.com/blog/archives/2011/04/schneiers_law.html)


Title: Re: The MAX_BLOCK_SIZE fork
Post by: solex on February 01, 2013, 11:25:24 AM
A look at historical stats shows roughly:

   500 transactions occurred per day in Jan/Feb 2011,
 5000 per day in Jan/Feb 2012,
50000 per day in late Jan 2013, with an average block size of about 180Kb today (Feb 1st).

https://blockchain.info/charts/n-transactions?showDataPoints=false&show_header=true&daysAverageString=1&timespan=all&scale=1&address=

Transaction numbers are growing by roughly an order of magnitude per year; if this continues, we will see block saturation at 1MB well before the end of 2013.
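
As a rough check on that extrapolation (a sketch only; it assumes the average block size keeps growing tenfold per year from ~180KB in February 2013, and the growth rate is of course the big assumption):
Code:
#include <cmath>
#include <cstdio>

int main() {
    const double now_kb   = 180.0;    // average block size, Feb 2013
    const double limit_kb = 1000.0;   // 1MB hard limit
    const double growth   = 10.0;     // tenfold per year, per the figures above
    // Solve now_kb * growth^t = limit_kb for t (in years).
    const double years = std::log(limit_kb / now_kb) / std::log(growth);
    printf("average blocks reach the limit in ~%.1f months\n", years * 12.0);  // ~8.9 months
    return 0;
}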

I think it is reasonable to call the max block size limit a "time bomb" within bitcoin. If this limit is reached many transactions will languish for hours or days. People will spread the word that bitcoin is breaking down resulting in negative publicity, panic selling and disinvestment, and websites dropping bitcoin. Perhaps the languishing transactions will then be processed, but great reputational damage will be done to a currency which should be as good as virtual gold.

The moral imperative must be to protect the utility and integrity of bitcoin for the public who are using it, the merchant websites accepting it, and the investors who are buying it to hold as an alternative to central bank fiat.

When I recently purchased some coins as an investment it was despite the block size limit being a negative factor. I thought it was a temporary restriction for sensible reasons like training wheels on a bicycle. 99% of bitcoin holders are unaware of it, and the crippling ramifications of it.
 
Akka's solution sounds very good, and it makes sense to include another important improvement as MatthewLM suggests.  I really hope that something can proceed as it would be a shame for bitcoin to suffer from an internal cause when so many external threats still exist.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: mintymark on February 01, 2013, 12:37:11 PM
solex, +1.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: BradZimdack on February 01, 2013, 05:44:16 PM
I always thought two of the benefits of Bitcoin are:

Fast transactions
Low transaction fees

It seems that these points are outright lies.

I'm beginning to think you're absolutely right.  That scalability wiki page seems like a big fat lie too if block size can never be increased.  I'd really like someone who knows more than me to provide some reassurance here.  This seems like a very worrisome problem.


Therefore my Proposal:
...
I'm looking forward to learn why this wouldn't work.

With any ideas on what the block size "should be", no matter how good they are, isn't the underlying problem that any change will require a hard fork?  Isn't the idea of a hard fork, especially for something this controversial, at this point in the game, basically impossible?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: 2112 on February 01, 2013, 05:51:13 PM
This seems like a very worrisome problem.
When you say "this" do you mean:

1) MAX_BLOCK_SIZE problem
2) problem with prevalence of white lies and other errors of omission in the Bitcoin milieu

?

Thanks.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: MatthewLM on February 01, 2013, 06:00:05 PM
If the transaction fees are changed to algorithmically follow the block space as I expect would be an alternative solution to this, what will happen is that bitcoin will become expensive enough for an alternative crypto-currency to arise. An alternative to bitcoin which is cheaper will succeed and bitcoin will fail. Thus the only way to keep bitcoin alive is to allow for more volume, such that demand can be satisfied.

If it does ever come to bitcoin reaching its limits, then I would be one to support another crypto-currency, as it would then be needed. I think this could be a good thing as it provides an opportunity to improve upon the many flaws of bitcoin with something new.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Akka on February 01, 2013, 06:06:52 PM
With any ideas on what the block size "should be", no matter how good they are, isn't the underlying problem that any change will require a hard fork?  Isn't the idea of a hard fork, especially for something this controversial, at this point in the game, basically impossible?

I thought I read this proposal here, but I can't find it anymore.

It could be introduced that all blocks of miners that use the hardfork are somehow "marked".
As soon as an overwhelming majority of all blocks (for example 85%) is marked, a countdown starts and 10,000 blocks later the hardfork is activated.
This way damage and chaos could be minimized.
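
A sketch of how clients might count those marks (purely illustrative, with made-up names; no such mechanism exists in the current client):
Code:
#include <cstdio>
#include <deque>

// Hypothetical rule: versions of the most recent blocks, newest at the back.
// Returns the activation height, or -1 if the threshold has not been reached.
long long activation_height(const std::deque<int>& recent_versions,
                            long long current_height,
                            int fork_version = 3,          // "marked" blocks carry this version or higher
                            double threshold = 0.85,       // 85% of the window must be marked
                            long long countdown = 10000) { // then wait 10,000 more blocks
    if (recent_versions.empty()) return -1;
    long long marked = 0;
    for (int v : recent_versions)
        if (v >= fork_version) ++marked;
    const double fraction = double(marked) / recent_versions.size();
    return fraction >= threshold ? current_height + countdown : -1;
}

int main() {
    std::deque<int> versions(2016, 3);   // suppose every block in the window is marked
    printf("would activate at height %lld\n", activation_height(versions, 300000));
    return 0;
}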

But I see no way this could be "patched" without creating some damage and chaos, and a majority would still have to agree to it in the first place, which also isn't ensured.

I also think this is important, as it gets harder to implement with each passing day.

Correct me if I'm wrong here, I'm far from being an expert.



If the transaction fees are changed to algorithmically follow the block space as I expect would be an alternative solution to this, what will happen is that bitcoin will become expensive enough for an alternative crypto-currency to arise. An alternative to bitcoin which is cheaper will succeed and bitcoin will fail. Thus the only way to keep bitcoin alive is to allow for more volume, such that demand can be satisfied.

If it does ever come to bitcoin reaching its limits, then I would be one to support another crypto-currency, as it would then be needed. I think this could be a good thing as it provides an opportunity to improve upon the many flaws of bitcoin with something new.

I think Bitcoin should be "patched" if possible. It would be devastating if we increase Bitcoin business further and finally get some merchants on board, only to reach a point where it becomes virtually impossible for a merchant to use it.

Then, when they are frustrated and dumping BTC, we tell them we now have a "better" cryptocurrency, so just use this instead.

This would mean going back to square one for cryptocurrencies.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: jgarzik on February 01, 2013, 06:09:52 PM
If the transaction fees are changed to algorithmically follow the block space as I expect would be an alternative solution to this, what will happen is that bitcoin will become expensive enough for an alternative crypto-currency to arise. An alternative to bitcoin which is cheaper will succeed and bitcoin will fail. Thus the only way to keep bitcoin alive is to allow for more volume, such that demand can be satisfied.

Boy that's a shortsighted analysis.

Bitcoin will grow layers above the base layer -- the blockchain -- that will enable instant transactions, microtransactions, and other scalable issues.

Do not think that the blockchain is the only way to transfer bitcoins.

Larger aggregators will easily compensate for current maximum block size in a scalable manner.

All nation-state/fiat currencies are multi-layer.  Too many people look at what bitcoin does now, and assume that those are the only currency services that will ever exist.



Title: Re: The MAX_BLOCK_SIZE fork
Post by: HostFat on February 01, 2013, 07:53:59 PM
@jgarzik
I don't have deep knowledge of the core of Bitcoin and the possibilities that cryptography can open, but I hope that these "layers" don't mean centralizing Bitcoin and then opening new weaknesses.

The other thing isn't about the Bitcoin dev team specifically, but I have seen other open source projects die slowly because of their main team.
They pushed back against many requests from the community because they were convinced they already knew "the best" thing to do, or that they had already achieved the best results. They were wrong.
I know that Bitcoin is different; it's really fragile, so every change needs a deep examination.
But I hope that doesn't stop you from checking every day whether there are possibilities to fix things today instead of tomorrow.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Gavin Andresen on February 01, 2013, 08:30:52 PM
For the record:

I'm on the "let there be no fixed maximum block size" side of the debate right now.

I think we should let miners decide on the maximum size of blocks that they'll build on. I'd like to see somebody come up with a model for time-to-transmit-and-receive-and-validate-a-block versus increased-chance-that-block-will-be-an-orphan.

Because that is the tradeoff that will keep miners from producing 1 Terabyte blocks (or, at least, would keep them from producing 1 Terabyte blocks right now-- if we have petabyte thumb-drives and Terabyte/second networks in 10 years maybe 1Terabyte blocks will be just fine).
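
One very rough way to frame such a model (an illustrative sketch only: it assumes blocks arrive as a Poisson process with a 600-second mean, and that propagating plus validating a block costs some fixed number of seconds per megabyte; both numbers below are made up):
Code:
#include <cmath>
#include <cstdio>

int main() {
    const double block_interval = 600.0;  // seconds, expected time between blocks
    const double secs_per_mb    = 10.0;   // assumed propagation + validation cost per MB
    const double sizes_mb[]     = {0.25, 1.0, 10.0, 100.0};
    for (double size_mb : sizes_mb) {
        const double t = size_mb * secs_per_mb;
        // Chance someone else finds a competing block while this one is still in flight.
        const double p_orphan = 1.0 - std::exp(-t / block_interval);
        printf("%7.2f MB block -> ~%5.2f%% orphan risk\n", size_mb, 100.0 * p_orphan);
    }
    return 0;
}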

Right now, miners that use the reference implementation and don't change any settings will produce blocks no larger than 250Kbytes big.

So we're finding out right now how miners collectively react to bumping up against a block size limit. I'd like to let that experiment run for at least a few months before arguing that we do or do not need to eliminate the 1MB hard limit, and start arguing about what the default rules for acceptable block size should be.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: johnyj on February 01, 2013, 08:58:06 PM
If the transaction fees are changed to algorithmically follow the block space as I expect would be an alternative solution to this, what will happen is that bitcoin will become expensive enough for an alternative crypto-currency to arise. An alternative to bitcoin which is cheaper will succeed and bitcoin will fail. Thus the only way to keep bitcoin alive is to allow for more volume, such that demand can be satisfied.

Boy that's a shortsighted analysis.

Bitcoin will grow layers above the base layer -- the blockchain -- that will enable instant transactions, microtransactions, and other scalable issues.

Do not think that the blockchain is the only way to transfer bitcoins.

Larger aggregators will easily compensate for current maximum block size in a scalable manner.

All nation-state/fiat currencies are multi-layer.  Too many people look at what bitcoin does now, and assume that those are the only currency services that will ever exist.



Highly agreed. I also think all the changes could be done at a higher level without modifying the original protocol, to keep the integrity of the blockchain.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: justusranvier on February 01, 2013, 10:16:16 PM
Bitcoin will grow layers above the base layer -- the blockchain -- that will enable instant transactions, microtransactions, and other scalable issues.

Do not think that the blockchain is the only way to transfer bitcoins.

Larger aggregators will easily compensate for current maximum block size in a scalable manner.

All nation-state/fiat currencies are multi-layer.  Too many people look at what bitcoin does now, and assume that those are the only currency services that will ever exist.
These other layers may indeed assume that role someday, but it should happen because they prove to be superior on their own merits, not because the capabilities of blockchain transfers are deliberately crippled.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: ildubbioso on February 01, 2013, 11:00:14 PM
For the record:

I'm on the "let there be no fixed maximum block size" side of the debate right now.

I'd like to let that experiment run for at least a few months before arguing that we do or do not need to eliminate the 1MB hard limit, and start arguing about what the default rules for acceptable block size should be.


Isn't waiting dangerous?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: jl2012 on February 02, 2013, 03:39:05 AM
For the record:

I'm on the "let there be no fixed maximum block size" side of the debate right now.

I'd like to let that experiment run for at least a few months before arguing that we do or do not need to eliminate the 1MB hard limit, and start arguing about what the default rules for acceptable block size should be.


Isn't waiting dangerous?

If we want to do it, this is the best moment. As ASICs start running, mining becomes less decentralized for a short period, which means we don't need to persuade so many people.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: fornit on February 02, 2013, 04:00:18 AM
Guys, it's not THAT urgent. The first thing that happens when the blocks start filling up is that transactions with low or no fees will be delayed so much that they are no longer possible.
Right now, transaction fees are 0.011% of the current transaction volume: on average you pay 1/10000 of the transferred amount, or about 1/5 of a US-dollar cent, as a fee. So there is plenty of room before non-micro-transactions are noticeably affected by this in any way. SatoshiDice, however, might run into problems much sooner, I suppose.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: BladeMcCool on February 02, 2013, 06:47:09 AM
We could maybe future-date a larger block size allowance by doing something like: at block 250000, 2MB blocks become allowed... or something like that. Why not, anyway? And for tx fees in blocks, more tx with lower fees vs. fewer tx with higher fees = the same amount of fees in a block that took essentially the same resources to compute, so it seems pretty moot. Am I missing something?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: mp420 on February 02, 2013, 10:23:54 AM

In 10-years, a modern smartphone/computer will be able to run the full processing node if the Max_Block_Size remains 1MB.  This is a clear economic benefit for Bitcoin: Decentralized Bitcoin verification.  In fact in a few years, virtually every computer will be able to process the entire blockchain without issue thus making Bitcoin extremely unique in the realm of payments.

What is the use of having portable devices act as full nodes if you can't (because of fees) use bitcoin for purchasing anything smaller than a house? As I see it, your argument is not valid. With 1MB blocksize limit, even if Bitcoin remains a relatively small niche currency, the limit will act as a hard constraint on the potential utility of the currency. Of course, once we start hitting the limit, it will hurt Bitcoin's public image so much that it's conceivable so many people will move away from Bitcoin that we get few more years of time to fix the issue.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Realpra on February 02, 2013, 01:02:05 PM
The first thing you need to understand that it's not just a matter of the majority of miners for a hard fork.... it's got to be pretty much everybody.  Otherwise, you will have a blockchain split with two different user groups both wanting to call their blockchain "bitcoin".  Unspent outputs at the time of the fork can be spent once on each new chain.  Mass confusion.
The only way to do it is to get most major clients to accept larger blocks AFTER a future specified date.

That way once say "2017 Dec 31" rolls around 90% of BTC users will all accept larger blocks at the same time and the confusion will be minimal.

This is not that hard, honestly: just get the MyWallet, Armory, Satoshi client and Electrum programmers/distributors to agree on a date, say "2020", and a new limit, say "100MB", and you're done.
I don't see why that many people would reject this change, and as such the new standard should be rolled out well before it takes effect.

Miners are irrelevant in this, but they should welcome the change; BTC will never grow if normal people can't use it. We would be right back to trusting banks, only backed by BTC instead of gold this time. We all know what a wild success THAT has been!


Title: Re: The MAX_BLOCK_SIZE fork
Post by: jl2012 on February 02, 2013, 01:18:35 PM
Currently (Feb 2013), we have about 50000 tx per day, or 0.579 tx per second (tps), or 347 tx per block (tpb). We are paying miners 25 BTC per block, or $500 per block at current rate. If bitcoin becomes the VISA scale, it has to handle 4000 tps, or 2400000 tpb, or 6916x of the current volume. To keep mining profitable, we may need to pay $50000 per block (to pay electricity, harddrive space, bandwidth, CPU time for ECDSA). As the block reward will become 0 in the future, this $50000 has to be covered by fee. Since we will have 2400000 tpb, each tx will pay $0.021, not too bad when you can send unlimited amount of money to anywhere in the world in no time.
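
The same numbers as a tiny sketch, just to make the arithmetic explicit (the $50000/block figure is the assumption above, not a derived quantity):
Code:
#include <cstdio>

int main() {
    const double visa_tps       = 4000.0;
    const double secs_per_block = 600.0;
    const double tx_per_block   = visa_tps * secs_per_block;  // 2,400,000 tpb
    const double current_tpb    = 347.0;                      // ~50,000 tx/day today
    const double usd_per_block  = 50000.0;                    // assumed mining budget per block
    printf("scale-up factor: %.0fx\n", tx_per_block / current_tpb);  // ~6916x
    printf("fee per tx: $%.3f\n", usd_per_block / tx_per_block);     // ~$0.021
    return 0;
}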

That means mining is profitable even without any block size limit.

On the other hand, the 1MB constraint will certainly kill bitcoin. 1MB is only about 2500 tpb, or 0.1% of VISA scale. We are already at 13.9% of this limit. If we don't act before problem arises, people will start migrating to alt-coins.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: da2ce7 on February 02, 2013, 01:46:32 PM

In 10-years, a modern smartphone/computer will be able to run the full processing node if the Max_Block_Size remains 1MB.  This is a clear economic benefit for Bitcoin: Decentralized Bitcoin verification.  In fact in a few years, virtually every computer will be able to process the entire blockchain without issue thus making Bitcoin extremely unique in the realm of payments.

What is the use of having portable devices act as full nodes if you can't (because of fees) use bitcoin for purchasing anything smaller than a house? As I see it, your argument is not valid. With 1MB blocksize limit, even if Bitcoin remains a relatively small niche currency, the limit will act as a hard constraint on the potential utility of the currency. Of course, once we start hitting the limit, it will hurt Bitcoin's public image so much that it's conceivable so many people will move away from Bitcoin that we get few more years of time to fix the issue.

Lets not get ahead of ourselves here.  I expect that we will have a multi-layered system the vast majority of the transactions being made off-chain.

Additionally; who is to say that one wouldn't want to verify their house transaction with a smart-phone.

People completely mis-judge the free-market.   If you have to use alt-chains because the fees are so high, well isn't that a success of bitcoin already!  By no stage has bitcoin failed then.

The argument for larger blocks is VALID for a protocol that isn't Bitcoin.  However, it is a catch 22 for Bitcoin.  It only becomes a problem if Bitcoin is a success.  If Bitcoin is a success, by definition it isn't a problem.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: jl2012 on February 02, 2013, 02:12:34 PM
Currently (Feb 2013), we have about 50000 tx per day, or 0.579 tx per second (tps), or 347 tx per block (tpb). We are paying miners 25 BTC per block, or $500 per block at current rate. If bitcoin becomes the VISA scale, it has to handle 4000 tps, or 2400000 tpb, or 6916x of the current volume. To keep mining profitable, we may need to pay $50000 per block (to pay electricity, harddrive space, bandwidth, CPU time for ECDSA). As the block reward will become 0 in the future, this $50000 has to be covered by fee. Since we will have 2400000 tpb, each tx will pay $0.021, not too bad when you can send unlimited amount of money to anywhere in the world in no time.

That means mining is profitable even without any block size limit.

On the other hand, the 1MB constraint will certainly kill bitcoin. 1MB is only about 2500 tpb, or 0.1% of VISA scale. We are already at 13.9% of this limit. If we don't act before problem arises, people will start migrating to alt-coins.

Let's assume a miner with moderate hashing power can mine 1 in 10000 blocks (i.e. one block in 10 weeks). With $50000/block in fees, his expected revenue is about $5 per network block.

2500000tpb (VISA scale) means about 1GB/block. Currently a 2000GB drive costs about $100, or $0.05/GB. Therefore, the harddrive cost is only 1% of his mining income. It's negligible. (and harddrive will be much cheaper in the future)

A quad-core Intel Core i7 is able to handle 4000tps (https://en.bitcoin.it/wiki/Scalability#CPU) at 77W. Assuming $0.15/kWh, it costs about $0.012/h, or $0.002/block. Even if energy is 10x more expensive in the future, it's still negligible. (And CPUs will be much more efficient in the future.)

1GB/block needs a bandwidth of 4.3TB/month. Including all overhead it may take 10TB/month, and may cost $300/month currently for a dedicated server in a datacentre. That is $300/(30*24*6) = $0.069/block. Again, it is negligible compared with the $5/block reward.

He will still earn $5-0.05-0.002-0.069 = $4.879/block after deducting the harddrive, CPU, and bandwidth cost. It is $29/hr or $21077/month and is a ridiculous amount given he only owns 0.01% of total hashing power. He still needs to pay for the electricity bill for the mining equipment. It is hard to estimate but even if he uses 90% of the earning for the electricity bill, he will still earn $2107/month.
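
The same cost breakdown as a sketch (all inputs are the estimates above, several of them quite uncertain):
Code:
#include <cstdio>

int main() {
    const double blocks_per_month  = 30.0 * 24.0 * 6.0;        // 4320 blocks
    const double revenue_per_block = 5.0;                      // USD, for 0.01% of hashpower
    const double disk_per_block    = 1.0 * 0.05;               // 1GB per block at $0.05/GB
    const double cpu_per_block     = 0.012 / 6.0;              // $0.012/hour at 6 blocks/hour
    const double net_per_block     = 300.0 / blocks_per_month; // $300/month of bandwidth
    const double margin = revenue_per_block - disk_per_block - cpu_per_block - net_per_block;
    printf("margin: $%.3f/block  (~$%.0f/month before the hashing electricity bill)\n",
           margin, margin * blocks_per_month);
    return 0;
}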


Title: Re: The MAX_BLOCK_SIZE fork
Post by: jl2012 on February 02, 2013, 02:17:10 PM

In 10-years, a modern smartphone/computer will be able to run the full processing node if the Max_Block_Size remains 1MB.  This is a clear economic benefit for Bitcoin: Decentralized Bitcoin verification.  In fact in a few years, virtually every computer will be able to process the entire blockchain without issue thus making Bitcoin extremely unique in the realm of payments.

What is the use of having portable devices act as full nodes if you can't (because of fees) use bitcoin for purchasing anything smaller than a house? As I see it, your argument is not valid. With 1MB blocksize limit, even if Bitcoin remains a relatively small niche currency, the limit will act as a hard constraint on the potential utility of the currency. Of course, once we start hitting the limit, it will hurt Bitcoin's public image so much that it's conceivable so many people will move away from Bitcoin that we get few more years of time to fix the issue.

Lets not get ahead of ourselves here.  I expect that we will have a multi-layered system the vast majority of the transactions being made off-chain.

Additionally; who is to say that one wouldn't want to verify their house transaction with a smart-phone.

People completely mis-judge the free-market.   If you have to use alt-chains because the fees are so high, well isn't that a success of bitcoin already!  By no stage has bitcoin failed then.

The argument for larger blocks is VALID for a protocol that isn't Bitcoin.  However, it is a catch 22 for Bitcoin.  It only becomes a problem if Bitcoin is a success.  If Bitcoin is a success, by definition it isn't a problem.

Set up your own Electrum server on your computer at home and verify your house transaction with a smart-phone through it. That way you don't need to trust a third party.

A smart-phone was never designed to run a bitcoin full node.

Moreover, the sentence "It only becomes a problem if Bitcoin is a success.  If Bitcoin is a success, by definition it isn't a problem." is self-contradicting.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: mp420 on February 02, 2013, 05:06:30 PM

In 10-years, a modern smartphone/computer will be able to run the full processing node if the Max_Block_Size remains 1MB.  This is a clear economic benefit for Bitcoin: Decentralized Bitcoin verification.  In fact in a few years, virtually every computer will be able to process the entire blockchain without issue thus making Bitcoin extremely unique in the realm of payments.

What is the use of having portable devices act as full nodes if you can't (because of fees) use bitcoin for purchasing anything smaller than a house? As I see it, your argument is not valid. With 1MB blocksize limit, even if Bitcoin remains a relatively small niche currency, the limit will act as a hard constraint on the potential utility of the currency. Of course, once we start hitting the limit, it will hurt Bitcoin's public image so much that it's conceivable so many people will move away from Bitcoin that we get few more years of time to fix the issue.

Lets not get ahead of ourselves here.  I expect that we will have a multi-layered system the vast majority of the transactions being made off-chain.

So, full nodes act as banks and issue Bitcoin-denominated instruments to their clients. Maybe the clients do not even have to trust the banks, thanks to some kind of cryptographic magic. Because of the economic scale of the transactions, only big companies and financial institutions have any reason to actually make a Bitcoin transaction, and these kinds of actors can run any kind of full node anyway. The clients can run something similar to what is outlined in this thread: https://bitcointalk.org/index.php?topic=88208.0

Actually, that thread outlines the way that future PCs (if not smartphones) could conceivably run a full node (or "almost-full" node) even with no limit / floating limit.

I just can't see why this artificial limit that was intended as temporary from the start should be accepted as an immutable part of the protocol.

There is going to be a hard fork in any case, more likely sooner than later. It should be planned beforehand, if we care about Bitcoin at all.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: gmaxwell on February 02, 2013, 09:09:59 PM
Actually, that thread outlines the way that future PCs (if not smartphones) could conceivably run a full node (or "almost-full" node) even with no limit / floating limit.
There are many merits to etotheipi's writing but what he proposes massively _increases_ the IO and computational cost of running a full node (or a fully validating but historyless node) over a plain committed UTXO set for validation. The increased node burden is one of the biggest arguments against what he's proposing and I suspect will ultimately doom the proposal.

I have seen nothing proposed except Moore's law that would permit full validation on "desktop" systems with gigabyte blocks.

Quote
I just can't see why this artificial limit that was intended as temporary from the start should be accepted as an immutable part of the protocol.
There are plenty of soft limits in bitcoin (like the 500k softlimit for maximum block size). The 1MB limit is not soft. I'm not aware of any evidence to suggest that it was temporary from the start— and absent it I would have not spent a dollar of my time on Bitcoin: without some answer to how the system remains decentralized with enormous blocks and how miners will be paid to provide security without blockspace scarcity or cartelization the whole idea is horribly flawed.  I also don't think a network rule should be a suicide pact— my argument for the correctness of making the size limited has nothing to do with the way it always was, but that doesn't excuse being inaccurate about the history.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Gavin Andresen on February 02, 2013, 10:01:48 PM
I'm not aware of any evidence to suggest that it was temporary from the start...

Ummm, see this old forum thread (https://bitcointalk.org/index.php?topic=1347), where Satoshi says:

Quote
It can be phased in, like:

if (blocknumber > 115000)
    maxblocksize = largerlimit

It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete.

When we're near the cutoff block number, I can put an alert to old versions to make sure they know they have to upgrade.



Title: Re: The MAX_BLOCK_SIZE fork
Post by: ildubbioso on February 02, 2013, 10:23:39 PM

Ummm, see this old forum thread (https://bitcointalk.org/index.php?topic=1347), where Satoshi says:


So you think the max block size is not a pressing problem, don't you?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: caveden on February 02, 2013, 10:33:18 PM
I'm on the "let there be no fixed maximum block size" side of the debate right now.

Heh, the "lead developer" is on the same side as I am, nice. :)

I think we should let miners decide on the maximum size of blocks that they'll build on.

How difficult would it be to implement it on bitcoind right now, without touching the 1Mb hard limit?
I mean the multiple limits and tolerance levels idea.

Miners using bitcoind would be able to set, in the config file, a list of value pairs. One value would be a size limit, the other the number of blocks longer a chain breaking that limit would have to be in order for you to accept building on top of it. Do you see what I'm saying?
That could be done right now on bitcoind, with the sole condition that anything above 1Mb will be rejected no matter what.
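
A sketch of the acceptance rule being described (one reading of it, with made-up names; this is not an actual bitcoind option): each pair says "I will only build on a chain containing a block bigger than this size once that chain is at least this many blocks ahead of mine."
Code:
#include <cstdio>
#include <vector>

struct SizeTolerance {
    unsigned int size_limit;      // bytes: blocks above this are "oversized" to me
    int          depth_required;  // how many blocks ahead the other chain must be
};

// biggest_block: largest block in the competing chain, in bytes.
// lead: how many blocks longer the competing chain is than my current chain.
bool accept_competing_chain(unsigned int biggest_block, int lead,
                            const std::vector<SizeTolerance>& prefs) {
    for (const SizeTolerance& p : prefs)
        if (biggest_block > p.size_limit && lead < p.depth_required)
            return false;   // breaks this limit and isn't far enough ahead yet
    return true;
}

int main() {
    std::vector<SizeTolerance> prefs = {{500000, 2}, {1000000, 10}};
    // A chain whose biggest block is 600KB is accepted once it is 2+ blocks ahead.
    printf("%s\n", accept_competing_chain(600000, 3, prefs) ? "build on it" : "not yet");
    return 0;
}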


Title: Re: The MAX_BLOCK_SIZE fork
Post by: impulse on February 03, 2013, 12:58:51 AM
Couldn't the limit be adjusted with every difficulty change so that it is approximately in line with the demand of the previous difficulty period? If the block size were capped near the transaction volume ceiling there would still be incentive to include mining fees while never running the risk of running out of block space.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: jl2012 on February 03, 2013, 02:22:25 AM
Actually, that thread outlines the way that future PCs (if not smartphones) could conceivably run a full node (or "almost-full" node) even with no limit / floating limit.
There are many merits to etotheipi's writing but what he proposes massively _increases_ the IO and computational cost of running a full node (or a fully validating but historyless node) over a plain committed UTXO set for validation. The increased node burden is one of the biggest arguments against what he's proposing and I suspect will ultimately doom the proposal.

I have seen nothing proposed except Moore's law that would permit full validation on "desktop" systems with gigabyte blocks.

Quote
I just can't see why this artificial limit that was intended as temporary from the start should be accepted as an immutable part of the protocol.
There are plenty of soft limits in bitcoin (like the 500k softlimit for maximum block size). The 1MB limit is not soft. I'm not aware of any evidence to suggest that it was temporary from the start— and absent it I would have not spent a dollar of my time on Bitcoin: without some answer to how the system remains decentralized with enormous blocks and how miners will be paid to provide security without blockspace scarcity or cartelization the whole idea is horribly flawed.  I also don't think a network rule should be a suicide pact— my argument for the correctness of making the size limited has nothing to do with the way it always was, but that doesn't excuse being inaccurate about the history.

Read my calculation above. With each transaction paying just $0.021 as a fee, we have more than enough money to pay miners to handle 4000 transactions per second.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: deepceleron on February 03, 2013, 05:10:13 AM
Once there are 25BTC of fees per block to replace the mining reward, instead of 0.25, paid by transactions that have to pay to get into a block in a reasonable amount of time, then it might be time to consider a larger block size. Until then it needs to stay scarce. I have 2.5GB of other people's gambling on my hard drive, because it's cheap.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Jeweller on February 03, 2013, 11:52:57 AM
Some ideas to throw into the pile:

Idea 1: Quasi-unanimous forking.

If the block size fork is attempted, it is critical to minimize disruption to the network.  Setting it up well in advance based on block number is OK, but that lacks any kind of feedback mechanism.  I think something like
Code:
if( block_number > 300000) AND ( previous 100 blocks are all version > 2)
then { go on and up the MAX_BLOCK_SIZE }
Maybe 100 isn't enough, but if all of the blocks in a fairly long sequence have been published by miners who have upgraded, that's a good indication that a very large super-majority of the network has switched over.  I remember reading something like this in the qt-client documentation (version 1 -> 2?) but can't seem to find it.

Alternatively, instead of just relying on block header versions, also look at the transaction data format version (first 4 bytes of a tx message header). Looking at the protocol it seems that every tx published in the block will also have that version field, so we could even say "no more than 1% of all transactions in the last 1000 blocks of version 2 means it's OK to switch to version 3".

This has the disadvantage of possibly taking forever if there are even a few holdouts (da2ce7? ;D), but my thinking is that agreement and avoiding a split blockchain is of primary importance and a block size change should only happen if it's almost unanimous.  Granted, "almost" is ambiguous: 95%?  99%?  Something like that though.  So that anyone who hasn't upgraded for a long time, and somehow ignored all the advisories would just see blocks stop coming in.

Idea 2:  Measuring the "Unconfirmable Transaction Ratio"
I agree with gmaxwell that an unlimited max block size, long term, could mean disaster.  While we have the 25BTC reward coming in now, I think competition for block space will more securely incentivize mining once the block reward incentive has diminished.  So basically, blocks should be full.  In a bitcoin network 10 years down the road, the max_block_size should be a limitation that we're hitting basically every block so that fees actually mean something.  Lets say there are 5MB of potential transactions that want to get published, and only 1MB can due to the size limit.  You could then say there's a 20% block inclusion rate, in that 20% of the outstanding unconfirmed transactions made it into the current block.

I realize this is a big oversimplification and you would need to more clearly define what constitutes that 5MB "potential" pool.  Basically you want a nice number of how much WOULD be confirmed, except can't be due to space constraints.  Every miner would report a different ratio given their inclusion criteria.  But this ratio seems like an important aspect of a healthy late-stage network.  (By late-stage I mean most of the coins have been mined)  Some feedback toward maintaining this ratio would seem to alleviate worries about mining incentives. 

Which leads to:

Idea 3:  Fee / reward ratio block sizing.

This may have been previously proposed as it is fairly simple.  (Sorry if it has; I haven't seen it but there may be threads I haven't read.)

What if you said:
Code:
MAX_BLOCK_SIZE = 1MB + ((total_block_fees / block_reward)*1MB)
so that the block size would scale up with the ratio of fees to reward.  So right now, if you wanted a 2MB block, there would need to be 25BTC of total fees in that block.  If you wanted a block of roughly 10MB, that's about 250BTC in fees.

In 4 years, when the reward is 12.5BTC, 250BTC in fees will allow for a block of roughly 20MB.
It's nice and simple and seems to address many of the concerns raised here.  It does not remove the freedom for miners to decide on fees -- blocks under 1MB have the same fee rules.  Other nodes will recognize a multiple-megabyte block as valid if the block had tx fees in excess of the reward (indicative of a high unconfirmable transaction ratio.)
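
A quick numerical check of the formula (the round figures above are approximations; the exact outputs come out a shade higher):
Code:
#include <cstdio>

// Max block size in MB under the proposed rule.
double max_block_mb(double total_fees_btc, double block_reward_btc) {
    return 1.0 + (total_fees_btc / block_reward_btc) * 1.0;
}

int main() {
    printf("%.1f MB\n", max_block_mb( 25.0, 25.0));  //  2.0 MB today with 25 BTC of fees
    printf("%.1f MB\n", max_block_mb(250.0, 25.0));  // 11.0 MB today with 250 BTC of fees
    printf("%.1f MB\n", max_block_mb(250.0, 12.5));  // 21.0 MB once the reward halves to 12.5
    return 0;
}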

The problem with this is that it doesn't work long term because the reward goes to zero.  So maybe put a "REAL" max size at 1GB or something, as ugly as that is.  Short / medium term though it seems like it would work. You may get an exponentially growing max block size, but it's really slow (it doubles every few years).  One problem I can think of is an attacker including huge transaction fees just to bloat the blockchain, but that would be a very expensive attack.  Even if the attacker controlled his own miners, there's a high risk he wouldn't mine his own high-fee transaction.

Please let me know what you think of these ideas, not because I think we need to implement them now, but because I think thorough discussion of the issue can be quite useful for the time when / if the block size changes.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Jeweller on February 03, 2013, 12:17:55 PM
Wait, no, I spoke too soon.  The fee/reward ratio is a bit too simplistic. 
An attacker could publish one of those botnet type of blocks with 0 transactions.  But instead, fill the block with spam transactions that were never actually sent through the network and where all inputs and outputs are controlled by the attacker.  Since the attacker also mines the block, he then gets back the large fee.  This would allow an attacker to publish oversized spam blocks where the size is only limited by the number of bitcoins the attacker controls, and it doesn't cost the attacker anything.  In fact he gets 25BTC with each successful attack.  So an attacker controlling 1000BTC could force a roughly 40MB spam block into the blockchain whenever he mines a block.

Not the end of the world, but ugly. 
There are probably other holes in the idea too.
Anyway, I'm just suggesting that something akin to a (total fee/block reward) calculation may be useful.  Not sure how you'd filter out spammers with lots of bitcoins.  And filtering out spammers was the whole point (at least according to Satoshi's comments) of the initial 1MB limit.

I'll keep pondering this, though I guess it's more about WHAT the fork might be, rather than HOW (or IF) to do it.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: justusranvier on February 03, 2013, 04:35:10 PM
Without a sharp constraint on the maximum blocksize there is currently _no_ rational reason to believe that Bitcoin would be secure at all once the subsidy goes down.
Can you walk me through the reasoning that you used to conclude that bitcoin will remain more secure if it's limited to a fixed number of transactions per block?

Are you suggesting more miners will compete for the fees generated by 7 transactions per second than will compete for the fees generated by 4000 transactions per second?

If the way to maximize fee revenue is to limit the transaction rate, why do Visa, Mastercard, and every business that's trying to maximize their revenue process so many of them?

If limiting the allowed number of transactions doesn't maximize revenue for any other transaction processing network, why would it work for Bitcoin?

If artificially limiting the number of transactions reduces potential revenue, how does that not result in fewer miners, and therefore more centralization?

In what scenario does your proposed solution not result in the exact opposite of what you claim to be your desired outcome?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: FreeMoney on February 03, 2013, 08:22:54 PM
Without a sharp constraint on the maximum blocksize there is currently _no_ rational reason to believe that Bitcoin would be secure at all once the subsidy goes down.
Can you walk me through the reasoning that you used to conclude that bitcoin will remain more secure if it's limited to a fixed number of transactions per block?

Are you suggesting more miners will compete for the fees generated by 7 transactions per second than will compete for the fees generated by 4000 transactions per second?

If the way to maximize fee revenue is to limit the transaction rate, why do Visa, Mastercard, and every business that's trying to maximize their revenue process so many of them?

If limiting the allowed number of transactions doesn't maximize revenue for any other transaction processing network, why would it work for Bitcoin?

If artificially limiting the number of transactions reduces potential revenue, how does that not result in fewer miners, and therefore more centralization?

In what scenario does your proposed solution not result in the exact opposite of what you claim to be your desired outcome?

If space in a block is not a limited resource then miners won't be able to charge for it, mining revenue will drop as the subsidy drops and attacks will become more profitable relative to honest mining.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Akka on February 03, 2013, 08:33:16 PM
If space in a block is not a limited resource then miners won't be able to charge for it, mining revenue will drop as the subsidy drops and attacks will become more profitable relative to honest mining.

That's why transaction space should IMO be scarce, but not hard limited.

A hard cap will just make transactions impossible at a certain point, no matter how high the fees paid. If we had 1 million legit transactions a day, with the 1MB limit 400k would never be confirmed, no matter the fees.

An algorithm adjusting the max blocksize in a way that transaction space remains scarce, while ensuring all transactions can be put into the blockchain, is IMO a reasonable solution.

Ensuring fees would always have to be paid for fast transactions, but also ensuring every transaction has a chance to get confirmed.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: justusranvier on February 03, 2013, 08:47:03 PM
If space in a block is not a limited resource then miners won't be able to charge for it, mining revenue will drop as the subsidy drops and attacks will become more profitable relative to honest mining.
How many businesses can you name that maximize their profitability by restricting the number of customers they serve?

If it really worked like that, then why stop at 1 MB? Limit block sizes to a single transaction and all the miners would be rich beyond measure! That would certainly make things more decentralized because miners all over the world would invest in hardware to collect the massive fee that one lucky person per block will be willing to pay.

Why stop there? I'm going to start a car dealership and decide to only sell 10 cars per year. Because I've made the number of cars I sell a limited resource I'll be able to charge more for them, right?

Then I'll open a restaurant called "House of String-Pushing" that only serves regular food but only lets in 3 customers at a time.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: solex on February 03, 2013, 10:55:56 PM
I have 2.5GB of other people's gambling on my hard drive, because it's cheap.

Snap! Me too.

Zero-fee transactions are an overhead to bitcoin. One benefit of them might be to encourage take-up by new users, maintaining the momentum of growth. If they need to be discouraged then, agreed, it could be done by using the max block size limit.

An initial split ensuring "high or reasonable" fee transactions get processed into the blockchain within an average of 10 minutes, and "low or zero" fee transactions get processed within an average of 20 minutes might be the way to go.

Consider the pool of unprocessed transactions:

Each transaction has a fee in BTC and an origination time. Sort the non-zero fees in the pool and let fm = the median fee value.

The block size limit is then dynamically calculated to accommodate all transactions with a fee value > fm, plus all the remaining transactions that originated more than 10 minutes ago. If a large source of zero-fee transactions tried to get around this by putting, say, a 17 satoshi fee on all its transactions, then fm would likely be 17 satoshis and these would still get delayed. A block limit of 10x the average block size during the previous difficulty period is also a desirable safeguard.

The public would learn that low or zero-fee transactions take twice as long to obtain confirmation. It then opens the door for further granularity, where the lower half (or more) of the pool is divided 3, 4, or 5 times such that very low-fee transactions take half an hour and zero-fee transactions take an average of an hour. The public will accept that as normal. Miners would reap the benefits of a block-limit-enforced fee incentive system.
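
A rough sketch of how a node might compute such a dynamic limit from its pool of unconfirmed transactions (all names and structures here are illustrative only, not actual bitcoin client code):

Code:
// Sketch of the median-fee block size rule described above.
// PendingTx and the other names are illustrative, not real Bitcoin-Qt code.
#include <algorithm>
#include <cstdint>
#include <ctime>
#include <vector>

struct PendingTx {
    int64_t feeSatoshi;     // fee attached to the transaction
    std::time_t firstSeen;  // when this node first saw it
    size_t sizeBytes;       // serialized size
};

// Median of the non-zero fees in the pool (0 if there are none).
int64_t MedianNonZeroFee(std::vector<int64_t> fees) {
    fees.erase(std::remove(fees.begin(), fees.end(), 0), fees.end());
    if (fees.empty()) return 0;
    std::sort(fees.begin(), fees.end());
    return fees[fees.size() / 2];
}

// Dynamic limit: room for every tx paying more than the median fee,
// plus every tx (any fee) that has been waiting longer than 10 minutes,
// capped at 10x the average block size of the previous difficulty period.
size_t DynamicMaxBlockSize(const std::vector<PendingTx>& pool,
                           std::time_t now,
                           size_t avgBlockSizeLastPeriod) {
    std::vector<int64_t> fees;
    for (const PendingTx& tx : pool) fees.push_back(tx.feeSatoshi);
    const int64_t fm = MedianNonZeroFee(fees);

    size_t limit = 0;
    for (const PendingTx& tx : pool) {
        bool aboveMedianFee = tx.feeSatoshi > fm;
        bool waitedTenMinutes = std::difftime(now, tx.firstSeen) > 10 * 60;
        if (aboveMedianFee || waitedTenMinutes) limit += tx.sizeBytes;
    }
    return std::min(limit, 10 * avgBlockSizeLastPeriod);
}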


Title: Re: The MAX_BLOCK_SIZE fork
Post by: fornit on February 04, 2013, 03:33:52 AM
An initial split ensuring "high or reasonable" fee transactions get processed into the blockchain within an average of 10 minutes, and "low or zero" fee transactions get processed within an average of 20 minutes might be the way to go.

Consider the pool of unprocessed transactions:

Each transaction has a fee in BTC and an origination time. If the transaction pool  is sorted by non-zero fee size then: fm =  median (middle) fee value.

[...]

The public would learn that low or zero fee transactions take twice as long to obtain confirmation. It then opens the door for further granularity where the lower half (or more) of the pool is divided 3, 4, or 5 times such that very low-fee transactions take half an hour, zero-fee transactions take an average of an hour. The public will accept that as normal. Miners would reap the benefits of a block limit enforced fee incentive system.

i doubt that transactions are that evenly distributed over a 24h or a 7day period. you might end up with all low-fee transactions being pushed back several hours, to times when rush hour only applies to people living in the middle of the atlantic or pacific ocean.
which is imho perfectly okay for tips or micro-donations.



Title: Re: The MAX_BLOCK_SIZE fork
Post by: kjj on February 04, 2013, 04:44:59 AM
If space in a block is not a limited resource then miners won't be able to charge for it, mining revenue will drop as the subsidy drops and attacks will become more profitable relative to honest mining.
How many business can you name that maximize their profitability by restricting the number of customers they serve?

If it really worked like that, then why stop at 1 MB? Limit block sizes to a single transaction and all the miners would be rich beyond measure! That would certainly make things more decentralized because miners all over the world would invest in hardware to collect the massive fee that one lucky person per block will be willing to pay.

Why stop there? I'm going to start a car dealership and decide to only sell 10 cars per year. Because I've made the number of cars I sell a limited resource I'll be able to charge more for them, right?

Then I'll open a restaurant called "House of String-Pushing" that only serves regular food but only lets in 3 customers at a time.

If car dealerships sold cars for however much you were willing to pay, down to and including free, you can bet they'd limit the number of cars they "sold".  And I doubt you'd even get 10 out of them.

The problem is that we really don't know yet how to operate with the system we have, much less a different one.  In a decade or two, when the subsidy is no longer the dominant part of the block reward, maybe then we'll have some idea how to price transactions, and we will be able to think clearly about mechanisms to adjust the block size.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: jl2012 on February 04, 2013, 05:00:22 AM
If space in a block is not a limited resource then miners won't be able to charge for it, mining revenue will drop as the subsidy drops and attacks will become more profitable relative to honest mining.
How many business can you name that maximize their profitability by restricting the number of customers they serve?

If it really worked like that, then why stop at 1 MB? Limit block sizes to a single transaction and all the miners would be rich beyond measure! That would certainly make things more decentralized because miners all over the world would invest in hardware to collect the massive fee that one lucky person per block will be willing to pay.

Why stop there? I'm going to start a car dealership and decide to only sell 10 cars per year. Because I've made the number of cars I sell a limited resource I'll be able to charge more for them, right?

Then I'll open a restaurant called "House of String-Pushing" that only serves regular food but only lets in 3 customers at a time.

If car dealerships sold cars for however much you were willing to pay, down to and including free, you can bet they'd limit the number of cars they "sold".  And I doubt you'd even get 10 out of them.

The problem is that we really don't know yet how to operate with the system we have, much less a different one.  In a decade or two, when the subsidy is no longer the dominant part of the block reward, maybe then we'll have some idea how to price transactions, and we will be able to think clearly about mechanisms to adjust the block size.

Why don't we just let miners decide the optimal block size?

If a miner is generating a 1-GB block and it is just too big for other miners, other miners may simply drop it. That will stop anyone from generating 1-GB blocks, because they will become orphans anyway. An equilibrium will be reached and block space will still be scarce.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Jeweller on February 04, 2013, 06:39:40 AM
Why don't we just let miners to decide the optimal block size?

If a miner is generating an 1-GB block and it is just too big for other miners, other miners may simply drop it. It will just stop anyone from generating 1-GB blocks because that will become orphans anyway. An equilibrium will be reached and the block space is still scarce.

Unfortunately it's not that simple for a couple reasons.

First, right now clients will reject oversized blocks from miners.  Other miners aren't the only ones who need to store the blocks; all full nodes do, even if they just want to transact without mining.  So what if all the miners are fine with the 1-GB block and none of the client nodes are?  Total mess.  Miners are minting coins only other miners recognize, and as far as clients are concerned the network hash rate has just plummeted.

Second, right now we have a very clear method for determining the "true" blockchain. It's the valid chain with the most work.  "Most work" is easily verified, everyone will agree.  "Valid" is also easily tested with unambiguous rules, and everyone will agree.  Miners can't "simply drop" blocks they don't like.  Maybe if that block is at depth -1 from the current block, sure.  But what if someone publishes a 1GB block, then someone else publishes a 1MB block on top of that?  Do you ignore both?  How far back do you go to start your own chain and try to orphan that whole over-size branch?

I think you can see the mess this would create.  The bitcoin network needs to operate with nearly unanimous consensus.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: jl2012 on February 04, 2013, 07:07:09 AM
Why don't we just let miners to decide the optimal block size?

If a miner is generating an 1-GB block and it is just too big for other miners, other miners may simply drop it. It will just stop anyone from generating 1-GB blocks because that will become orphans anyway. An equilibrium will be reached and the block space is still scarce.

Unfortunately it's not that simple for a couple reasons.

First, right now clients will reject oversized blocks from miners.  Other miners aren't the only ones who need to store the blocks, all full nodes do even if they just want to transact without mining.  So what if all the miners are fine with the 1-GB block and none of the clients nodes are?  Total mess.  Miners are minting coins only other miners recognize, and as far as clients are concerned the network hash rate has just plummeted.

Second, right now we have a very clear method for determining the "true" blockchain. It's the valid chain with the most work.  "Most work" is easily verified, everyone will agree.  "Valid" is also easily tested with unambiguous rules, and everyone will agree.  Miners can't "simply drop" blocks they don't like.  Maybe if that block is at depth -1 from the current block, sure.  But what if someone publishes a 1GB block, then someone else publishes a 1MB block on top of that?  Do you ignore both?  How far back do you go to start your own chain and try to orphan that whole over-size branch?

I think you can see the mess this would create.  The bitcoin network needs to operate with nearly unanimous consensus.

I know the 1MB is a hard limit which affects both miners and clients. I'm assuming a world without MAX_BLOCK_SIZE at all, for both miners and clients.

Miners can ALWAYS drop a valid block if they don't like it, just like ignoring any valid transaction. Currently, miners including non-standard transactions run a higher risk of their blocks being orphaned, because other miners may not like those blocks.

If a miner (Bob) sees a new valid block with height N but doesn't like it for whatever reason, he will simply keep mining on top of block N-1. When Bob finds another valid block (N2), he will broadcast it to the network and other miners will choose between N and N2. Here Bob takes a risk of being orphaned because other miners may build on block N. If block N+1 is built on N, Bob has to reconsider the risk and he may decide to keep mining on N+1, instead of N-1 or his N2. However, if Bob (or his team) owns 51% of the network, he will always win and block N will eventually be orphaned. (You may call it a 51% attack but this is exactly how the system works.)

Therefore, if the majority of miners do not like 1GB blocks, building one will become very risky and no one will do so.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: kjj on February 04, 2013, 07:30:44 AM
I know the 1MB is a hard limit which affects both miners and clients. I'm assuming a world without MAX_BLOCK_SIZE at all, both miners and clients.

Miners can ALWAYS drop a valid block if they don't like it, just like ignoring any valid transaction. Currently, miners taking non-standard transaction has higher risks of orphaned block because other miners may not like these block.

If a miner (Bob) sees a new valid block with height N but doesn't like it for whatever reason, he will simply keep mining on top of block N-1. When Bob finds another valid block (N2), he will broadcast to the network and other miners will choose one between N and N2. Here Bob takes a risk of being orphaned because other miners may build on block N. If block N+1 is built on N, Bob has to reconsider the risk and he may decide to keep mining on N+1, instead of N-1 or his N2. However, if Bob (or his team) owns 51% of the network, he will always win and block N must be eventually orphaned. (You may call it a 51% attack but this is exactly how the system works)

Therefore, if the majority of miners do not like 1GB block, building 1GB block will become very risky and no one will do so.

What you are describing is much worse than a mere fork, the only word I can think of for it is a shatter.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: jl2012 on February 04, 2013, 07:42:36 AM
I know the 1MB is a hard limit which affects both miners and clients. I'm assuming a world without MAX_BLOCK_SIZE at all, both miners and clients.

Miners can ALWAYS drop a valid block if they don't like it, just like ignoring any valid transaction. Currently, miners taking non-standard transaction has higher risks of orphaned block because other miners may not like these block.

If a miner (Bob) sees a new valid block with height N but doesn't like it for whatever reason, he will simply keep mining on top of block N-1. When Bob finds another valid block (N2), he will broadcast to the network and other miners will choose one between N and N2. Here Bob takes a risk of being orphaned because other miners may build on block N. If block N+1 is built on N, Bob has to reconsider the risk and he may decide to keep mining on N+1, instead of N-1 or his N2. However, if Bob (or his team) owns 51% of the network, he will always win and block N must be eventually orphaned. (You may call it a 51% attack but this is exactly how the system works)

Therefore, if the majority of miners do not like 1GB block, building 1GB block will become very risky and no one will do so.

What you are describing is much worse than a mere fork, the only word I can think of for it is a shatter.

This is actually happening, and it forces some miners to drop transactions from Satoshi Dice to keep their blocks slimmer. Ignoring big blocks might not be intentional, but big blocks are non-competitive for an obvious reason (they take longer to propagate).

Maybe I should rephrase it:

Therefore, if the majority of miners are unable to handle a 1GB block N in a timely manner, they will keep building on N-1 until N is verified. Block N is exposed to a higher risk of orphaning, so building a 1GB block will become very risky and no one will do so.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: MPOE-PR on February 04, 2013, 09:43:14 AM
I know the 1MB is a hard limit which affects both miners and clients. I'm assuming a world without MAX_BLOCK_SIZE at all, both miners and clients.

Miners can ALWAYS drop a valid block if they don't like it, just like ignoring any valid transaction. Currently, miners taking non-standard transaction has higher risks of orphaned block because other miners may not like these block.

If a miner (Bob) sees a new valid block with height N but doesn't like it for whatever reason, he will simply keep mining on top of block N-1. When Bob finds another valid block (N2), he will broadcast to the network and other miners will choose one between N and N2. Here Bob takes a risk of being orphaned because other miners may build on block N. If block N+1 is built on N, Bob has to reconsider the risk and he may decide to keep mining on N+1, instead of N-1 or his N2. However, if Bob (or his team) owns 51% of the network, he will always win and block N must be eventually orphaned. (You may call it a 51% attack but this is exactly how the system works)

Therefore, if the majority of miners do not like 1GB block, building 1GB block will become very risky and no one will do so.

What you are describing is much worse than a mere fork, the only word I can think of for it is a shatter.

Actually sounds like correct behavior.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Gavin Andresen on February 04, 2013, 05:17:08 PM
Why don't we just let miners to decide the optimal block size?

If a miner is generating an 1-GB block and it is just too big for other miners, other miners may simply drop it. It will just stop anyone from generating 1-GB blocks because that will become orphans anyway. An equilibrium will be reached and the block space is still scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).
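
Purely for illustration, the default rule could look roughly like the sketch below. None of these names are real Bitcoin-Qt symbols; it just times the full verification call and compares against a user-overridable threshold.

Code:
#include <chrono>

// User-overridable thresholds (e.g. via a command-line option).
static double nMaxVerifySecondsSyncing  = 60.0;  // catching up with the chain
static double nMaxVerifySecondsCaughtUp = 5.0;   // fully synced

// Returns true if the block should be accepted, false if it should be
// ignored because it took too long to verify on this node's hardware.
template <typename Block, typename VerifyFn>
bool AcceptBlockByVerifyTime(const Block& block, VerifyFn verify,
                             bool fCatchingUp) {
    const auto start = std::chrono::steady_clock::now();
    const bool fValid = verify(block);  // full script/signature checks
    const auto end = std::chrono::steady_clock::now();
    const double seconds = std::chrono::duration<double>(end - start).count();

    if (!fValid) return false;  // invalid blocks are rejected regardless
    const double limit =
        fCatchingUp ? nMaxVerifySecondsSyncing : nMaxVerifySecondsCaughtUp;
    return seconds <= limit;
}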



Title: Re: The MAX_BLOCK_SIZE fork
Post by: jl2012 on February 04, 2013, 05:34:25 PM
Why don't we just let miners to decide the optimal block size?

If a miner is generating an 1-GB block and it is just too big for other miners, other miners may simply drop it. It will just stop anyone from generating 1-GB blocks because that will become orphans anyway. An equilibrium will be reached and the block space is still scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).



And if there are more transactions than the available block space can hold, people will pay higher transaction fees and miners will have more money to upgrade their hardware and network for bigger blocks.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: ShadowOfHarbringer on February 04, 2013, 06:13:24 PM
Why don't we just let miners to decide the optimal block size?

If a miner is generating an 1-GB block and it is just too big for other miners, other miners may simply drop it. It will just stop anyone from generating 1-GB blocks because that will become orphans anyway. An equilibrium will be reached and the block space is still scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).

This.

I am all for "let-the-market-decide" elastic algorithms.

If you let people select what is best for their interests, they will make the best choices through multiple tries in order to maximize profit & minimize risk.

Nobody wants to lose money, and everybody wants to earn the most. Therefore the market will balance out the block size and reach perfect equilibrium automatically.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Akka on February 04, 2013, 06:21:24 PM
This.

I am all for "let-the-market-decide" elastic algorithms.

If you let people select what is best for their interests, they will make the best choices through multiple tries in order to maximize profit & minimize risk.

Nobody wants to lose money, and everybody wants to earn the most. Therefore market will balance out the block size and reach perfect equilibrium automatically.

I concur,

a kind of "natural selection" in an open market ends in the best possible solution for the current environment (hardware)

this also allows us to adapt to better hardware, as there is no way to tell with 100% certainty where development will go. (At least that's my opinion)


Title: Re: The MAX_BLOCK_SIZE fork
Post by: ildubbioso on February 04, 2013, 07:05:05 PM
So, shouldn't we (you developers actually) change it as fast as possible?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: caveden on February 04, 2013, 07:25:00 PM
There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

That's nice. Just don't forget to include total download time in the "time to verify", as well as any other I/O time. Bandwidth will be a significant bottleneck once blocks start getting larger.

EDIT: Oh, and of course, there must be tolerance levels too (if I'm X blocks behind the chain I once rejected, I'll give up and start building on top of it). You don't want to create that many chain forks! :)


Title: Re: The MAX_BLOCK_SIZE fork
Post by: arklan on February 04, 2013, 07:25:34 PM
probably something to aim to have in place before 1.0 is released... and since we're closing in on .8... :D


Title: Re: The MAX_BLOCK_SIZE fork
Post by: MPOE-PR on February 04, 2013, 08:12:55 PM
Why don't we just let miners to decide the optimal block size?

If a miner is generating an 1-GB block and it is just too big for other miners, other miners may simply drop it. It will just stop anyone from generating 1-GB blocks because that will become orphans anyway. An equilibrium will be reached and the block space is still scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).

Spoken like a true Gavin. No objections.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: FreeMoney on February 04, 2013, 08:49:43 PM
Why don't we just let miners to decide the optimal block size?

If a miner is generating an 1-GB block and it is just too big for other miners, other miners may simply drop it. It will just stop anyone from generating 1-GB blocks because that will become orphans anyway. An equilibrium will be reached and the block space is still scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).



This rule would apply to blocks until they are 1 deep, right? Do you envision no check-time or size rule for blocks that are built on? Or a different much more generous rule?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: notme on February 05, 2013, 04:34:12 AM
probably something ot aim to have in place before 1.0 is released... and since were closing in on .8... :D

We're only 2 minor releases away!!!.... from 0.10


Title: Re: The MAX_BLOCK_SIZE fork
Post by: solex on February 05, 2013, 04:40:50 AM
probably something ot aim to have in place before 1.0 is released... and since were closing in on .8... :D

We're only 2 minor releases away!!!.... from 0.10

1.0 or 1.10?  :)


Title: Re: The MAX_BLOCK_SIZE fork
Post by: jl2012 on February 05, 2013, 04:50:09 AM
The 1MB MAX_BLOCK_SIZE is obviously an arbitrary and temporary limit. Imagine that bitcoin had been invented in 1996 instead of 2009, when 99% of normal internet users connected through telephone lines at 28.8kb/s, or 3.6kB/s. Transferring a typical 200kB block of today would take nearly a minute, and the system would fail due to a very high stale rate and many branches in the chain. If the "1996 Satoshi" had used a 25kB MAX_BLOCK_SIZE, would we still stick with it till the end of bitcoin?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: notig on February 05, 2013, 06:15:06 AM
Why don't we just let miners to decide the optimal block size?

If a miner is generating an 1-GB block and it is just too big for other miners, other miners may simply drop it. It will just stop anyone from generating 1-GB blocks because that will become orphans anyway. An equilibrium will be reached and the block space is still scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).



does this still involve a fork?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Maged on February 05, 2013, 06:46:12 AM
EDIT: Oh, and of course, there must be tolerance levels too (if I'm X blocks behind the chain I once rejected, I'll give up and start building on top of it). You don't want to create that many chain forks! :)
Absolutely. Of course, that sadly means that we won't be able to ever trust a block until it gets past that point (which I think should be 2-4 blocks). So, to mitigate the damage that will cause to the practical confirmation time...
Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.
...let's lower that constant. Additionally, by lowering the block-creation-time constant, you increase the chances of there being natural orphans by a much larger factor than you are lowering the constant (5 minute blocks would on average have 4x as many orphans as 10 minute blocks over the same time period). Currently, we see that as a bad thing since it makes the network weaker against an attacker. So, the current block time was set so that the block verification time network-wide would be mostly negligible. Let's make it so that it's not.

To miners, orphans are lost money, so instead of using a block-time constant so large that orphans rarely happen in the first place, force the control of the orphan rate onto the miners. To avoid orphans, they'd then be forced to use such block-ignoring features. In turn, the smaller the block-time constant we pick, the exponentially smaller the blocks would have to be. Currently, I suspect that a 50 MB block made up of pre-verified transactions would be no big deal for the current network. However, a .2 MB block on a 2.35-seconds-per-block network (yes, extreme example) absolutely would be a big deal (especially because at that speed even an empty block with just a coinbase is a problem).

There are also some side benefits: because miners would strongly avoid transactions most of the network hasn't seen, only high-fee transactions would be likely to make it into the very next block, but many transactions would make it in eventually. It might even encourage high-speed relay networks to appear, which would require a cut of the transaction fees the miners make in order to let them join.

In summary, I propose that to avoid the tragedy-of-the-commons problem, instead of limiting the available space, we limit the time allowed for a block to propagate. Now THAT is a Bitcoin 2.0 (or rather, 1.0)
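
As a rough sanity check of the orphan-rate claim above, here is a toy calculation. The 10-second propagation time and the simple "orphan probability ≈ 1 - exp(-propagation/interval)" model are assumptions, purely for illustration:

Code:
#include <cmath>
#include <cstdio>

int main() {
    const double tau = 10.0;  // assumed network propagation time, seconds
    const double intervals[] = {600.0, 300.0};  // 10-minute vs 5-minute blocks
    for (double interval : intervals) {
        double pOrphan = 1.0 - std::exp(-tau / interval);  // per-block orphan chance
        double blocksPerDay = 86400.0 / interval;
        std::printf("T=%4.0fs  p(orphan)=%.4f  orphans/day=%.2f\n",
                    interval, pOrphan, pOrphan * blocksPerDay);
    }
    return 0;
}

Halving the interval roughly doubles the per-block orphan chance and doubles the number of blocks per day, so orphans per day go up about 4x, as stated.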


Title: Re: The MAX_BLOCK_SIZE fork
Post by: solex on February 05, 2013, 07:08:27 AM
Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.
...let's lower that constant. ...
In summary, I propose that to avoid the tragedy of the commons problem, instead of limiting the available space, we limit the available time allowed for the block to propagate instead. Now THAT is a Bitcoin 2.0 (or rather, 1.0)

For the rest of us who are catching up, are you proposing what seems far more radical than eliminating the 1Mb limit? Can you please clarify. Are you proposing reducing the 10 min average block creation time? If so, what happens to the 25 BTC reward which would be excessive, and need a pro-rata reduction for increased block frequency?



Title: Re: The MAX_BLOCK_SIZE fork
Post by: caveden on February 05, 2013, 08:04:34 AM
EDIT: Oh, and of course, there must be tolerance levels too (if I'm X blocks behind the chain I once rejected, I'll give up and start building on top of it). You don't want to create that many chain forks! :)
Absolutely. Of course, that sadly means that we won't be able to ever trust a block until it gets past that point (which I think should be 2-4 blocks).

Would it really? I've never seen any actual analysis, but I'd say that honest splits would mostly carry the same transactions, with the obvious exception of coinbase and "a few others". Has anyone ever done an analysis of how many transactions (in relative terms) are actually lost in a reorg and need to get reconfirmed?

Btw, interested nodes could attempt to download, and perhaps even relay, all sides of a split. If you see that your transaction is in all of them, you know it actually had its first confirmation for good. Relaying orphans sounds like a less radical change than changing the 10m delay...


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Maged on February 05, 2013, 11:22:02 PM
Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.
...let's lower that constant. ...
In summary, I propose that to avoid the tragedy of the commons problem, instead of limiting the available space, we limit the available time allowed for the block to propagate instead. Now THAT is a Bitcoin 2.0 (or rather, 1.0)

For the rest of us who are catching up, are you proposing what seems far more radical than eliminating the 1Mb limit?
Quite possibly. However, if we accept that the 10 minute constant doesn't actually have to stay fixed, we can adjust it so that at the time we remove the 1 MB limit, the largest block that miners would practically want to make would still be about 1 MB. Basically, this would protect us from jumping from a 1 MB limit one day to a practical 50 MB limit the next (or whatever is currently practical with the 10 minute constant). I mainly want people to remember that changing the block time is also something that can be on the table.

Can you please clarify. Are you proposing reducing the 10 min average block creation time?
Yes.

If so, what happens to the 25 BTC reward which would be excessive, and need a pro-rata reduction for increased block frequency?
Just like you said, it would have a pro-rata reduction for increased block frequency. Sorry, I assumed that was obvious, since changing anything about the total currency created is absolutely off the table.

EDIT: Oh, and of course, there must be tolerance levels too (if I'm X blocks behind the chain I once rejected, I'll give up and start building on top of it). You don't want to create that many chain forks! :)
Absolutely. Of course, that sadly means that we won't be able to ever trust a block until it gets past that point (which I think should be 2-4 blocks).

Would it really?
Yes. You wouldn't be able to trust that a majority of the network acknowledged a block until it gets past the point where all clients are required to accept it as part of the chain.

Imagine that only 10% of the network accepts blocks over 10 MB and 100% accepts blocks less than 1 MB. What if that 10% got lucky and generated two 11 MB blocks in a row? Well, the other 90% would just ignore them because they are too large. So, those blocks get orphaned because the rest of the network found three small blocks. If you just accepted the 11 MB blocks as a confirmation and sent goods because of it, you could be screwed if there was a double-spend.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: misterbigg on February 06, 2013, 07:18:35 AM
Bitcoin works great as a store of value, but should we also have as a requirement that it operates great as a payment network?

It seems that the debate over whether the maximum block size should be increased is really a question of whether or not the Bitcoin protocol should be improved so that it serves both purposes. Specifically:

1) That transactions should verify quickly (less time between blocks)

2) Transactions fees should be low

3) There should be no scarcity for transaction space in blocks

Open questions:

Right now there's about what, 1.4 SatoshiDICEs worth of transaction volume?

Should Bitcoin scale to support 20 times the volume of SatoshiDICE?

Should Bitcoin scale to support 1000 times the volume of SatoshiDICE?

Should we allow the blockchain to grow without a bound on the rate (right now it is 1 megabyte per 10 minutes, or 144MB/day)?

Is it reasonable to require that Bitcoin should always be able to scale to include all possible transactions?

Is it a requirement that Bitcoin eventually be able to scale to accommodate the volume of any existing fiat payment system (or the sum of the volumes of more than one existing payment system)?

Will it ever be practical to accept transactions to ship physical goods with 0 confirmations?

Will the time for acceptance of a transaction into a block ever be on the order of seconds?

How would one implement a "Bitcoin vending machine" which can dispense a product immediately without the risk of fraud?

Can't we just leave parameters like 10 minutes / 1 megabyte alone (since they require a hard fork) and build new market-specific payment networks that use Bitcoin as the back end, processing transactions in bulk at a lower frequency (say, once per 10 minutes)?

Aren't high transaction fees a good thing, since they make mining more profitable resulting in greater total network hashrate (more security)?



Title: Re: The MAX_BLOCK_SIZE fork
Post by: misterbigg on February 06, 2013, 07:45:52 AM
There is something about the artificial scarcity of transaction space in a block that appeals to me. My gut tells me that miners should always have to make a choice about which transactions to keep and which ones to drop. That choice will probably always be based on the fees per kilobyte, so as to maximize the revenue per block. This competition between transactions solves the problem where successive reductions in block subsidies are not balanced by a corresponding increase in transaction fees.

If the block size really needs to increase, here's an idea for doing it in a way that balances scarcity versus transaction volume:

DEPRECATED DUE TO VULNERABILITIES (see the more recent post)

1) Block size adjustments happen at the same time that network difficulty adjusts (every 210,000 tx?)

2) On a block size adjustment, the size either stays the same or goes up by a fixed percentage (say, 10%). This percentage is a baked-in constant requiring a hard fork to change.

3) The block size is increased if the sum of miner's fees excluding block subsidies for all blocks since the last adjustment would exceed a fixed percentage of the total coins transmitted (say, 1.5%). This percentage is also a baked-in constant.

4) There should be no client or miner limits on the number of kilobytes of free transactions in a block - if there's space left in the block after including all the paid transactions, there's nothing wrong with filling up the remaining space with as many free tx as possible.

Example:

When an adjustment period arrives, clients add up the miner's fees exclusive of subsidies, and add up the total coins transferred. If the percentage of miner's fees is 1.5% or more of the total coins transferred then the max block size is permanently increased by 10%.

This scheme offers a lot of nice properties:

- Consensus is easy to determine

- The block size will tend towards a size that accommodates the total transaction volume over a 24 hour period

- The average transaction fees are capped and easily calculated in the client when sending money. A fee of 1.5% should get included after several blocks. A fee greater than 1.5% will get included faster. Fees under 1.5% will get included more slowly.

- Free transactions will eventually get included (during times of the day or week where transaction volume is at a low)

- Since the percentage of growth is capped, any increase in transaction volume that exceeds the growth percentage will eventually get accommodated but miners will profit from additional fees (due to competition) until the blocks reach the equilibrium size. Think of this as a 'gold rush'.



Title: Re: The MAX_BLOCK_SIZE fork
Post by: flower1024 on February 06, 2013, 07:47:46 AM
There is something about the artificial scarcity of transaction space in a block that appeals to me. My gut tells me that miners should always have to make a choice about which transactions to keep and which ones to drop. That choice will probably always be based on the fees per kilobyte, so as to maximize the revenue per block. This competition between transactions solves the problem where successive reductions in block subsidies are not balanced by a corresponding increase in transaction fees.

If the block size really needs to increase, here's an idea for doing it in a way that balances scarcity versus transaction volume:

1) Block size adjustments happen at the same time that network difficulty adjusts (every 210,000 tx?)

2) On a block size adjustment, the size either stays the same or goes up by a fixed percentage (say, 10%). This percentage is a baked-in constant requiring a hard fork to change.

3) The block size is increased if the sum of miner's fees excluding block subsidies for all blocks since the last adjustment would exceed a fixed percentage of the total coins transmitted (say, 1.5%). This percentage is also a baked-in constant.

4) There should be no client or miner limits on the number of kilobytes of free transactions in a block - if there's space left in the block after including all the paid transactions, there's nothing wrong with filling up the remaining space with as many free tx as possible.

Example:

When an adjustment period arrives, clients add up the miner's fees exclusive of subsidies, and add up the total coins transferred. If the percentage of miner's fees is 1.5% or more of the total coins transferred then the max block size is permanently increased by 10%.

This scheme offers a lot of nice properties:

- Consensus is easy to determine

- The block size will tend towards a size that accommodates the total transaction volume over a 24 hour period

- The average transaction fees are capped and easily calculated in the client. A fee of 1.5% should get included after several blocks. A fee greater than 1.5% will get included faster. Fees under 1.5%, will get included slower.

- Free transactions will eventually get included (during times of the day or week where transaction volume is at a low)

- Since the percentage of growth is capped, any increase in transaction volume that exceeds the growth percentage will eventually get accommodated but miners will profit from additional fees (due to competition) until the blocks reach the equilibrium size. Think of this as a 'gold rush'.



+1
I really like the idea of dynamic block sizes, but i don't know enough about economics to know what magic numbers are needed.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: misterbigg on February 06, 2013, 07:55:26 AM
I really like the idea of dynamic block sizes. but i dont know enough about economy to know what magic numbers are needed.

Thanks. I used 10% and 1.5% as examples but they are not based on any calculations. My intuition tells me that the 10% figure doesn't matter a whole heck of a lot, it mostly controls the rate of convergence. The penalty for a number that is too high is that the size would overshoot and there wouldn't be any scarcity. I think that this would only happen if the percentage was huge, like 50% or more. If this number is too low it would just take longer to converge, and there would be a temporary period when miners generated above average profits.

As for the miner's fees I would imagine that number matters more. Too high, and the block size might never increase. Too low and there might never be scarcity in the blocks.

What are the "correct" values? I have no idea.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Jeweller on February 06, 2013, 08:41:49 AM
misterbigg - interesting idea, and I agree with your stance, but here are some problems.  It seems intuitively clear that, “Hm, if transaction fees are 3% of total bitcoins transmitted, that’s too high, the potential block space needs to expand.”

Problem is, how do you measure the number of bitcoins transmitted?

Address A has 100BTC.  It sends 98BTC to address B and 1BTC to address C.  How many bitcoins were spent?

Probably 1, right?  That is the assumption the blockchaininfo site makes.  But maybe it’s actually sending 98.  Or none.  Or 99, to two separate parties.  So we can’t know the actual transfer amount.  We can assume the maximum.  But then that means anyone with a large balance which is fairly concentrated address-wise is going to skew that “fee %” statistic way down.  In the above transaction, you know that somewhere between 0 and 99BTC were transferred.  The fee was 1BTC.  Was the fee 100% or 1%?

This also opens it up to manipulation.  Someone could mine otherwise empty blocks with enormous looking fees... which, since they mined the block, they get back, costing them nothing.  They could then work to expand the block size for whatever nefarious reason.

So while I think the “fees as % of transfer” is a nice number to work with in theory, in practice it’s not really available.  If we want to maintain scarcity of transactions in the blockchain while still having a way to expand it, I think the (total fee) / (block reward) ratio is a good metric because it scales with time and maintains miner incentive.  While in its simplistic form it is also somewhat open to manipulation, you could just have an average of 10 blocks or so, and if an attacker is publishing 10 blocks in a row you’ve got way bigger problems. (Also I don’t think a temporary block size increase attack is really that damaging... within reason, we can put up with occasional spam.  Heck, we’ve all got a gig of S.Dice gambling on our drives right now.)
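
As a sketch of that metric (reading "block reward" here as the block subsidy, and averaging over the last ten blocks as suggested; the structures are illustrative, not real client code):

Code:
#include <algorithm>
#include <cstdint>
#include <vector>

struct BlockSummary {
    int64_t totalFees;  // sum of fees in the block, in satoshis
    int64_t subsidy;    // block subsidy, in satoshis
};

// Ratio of total fees to total subsidy over the most recent `window` blocks.
// Averaging over several blocks blunts manipulation by any single miner.
double FeeToRewardRatio(const std::vector<BlockSummary>& recentBlocks,
                        size_t window = 10) {
    size_t n = std::min(window, recentBlocks.size());
    int64_t fees = 0;
    int64_t subsidy = 0;
    for (size_t i = recentBlocks.size() - n; i < recentBlocks.size(); ++i) {
        fees += recentBlocks[i].totalFees;
        subsidy += recentBlocks[i].subsidy;
    }
    return subsidy > 0 ? double(fees) / double(subsidy) : 0.0;
}

A rule could then, for example, widen the block size limit only while this ratio stays above some target, keeping blockchain space scarce but not fixed.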


Title: Re: The MAX_BLOCK_SIZE fork
Post by: misterbigg on February 06, 2013, 08:52:08 AM
Without a sharp constraint on the maximum blocksize there is currently _no_ rational reason to believe that Bitcoin would be secure at all once the subsidy goes down...Limited blockspace creates a market for transaction fees, the fees fund the mining needed to make the chain robust against hostile reorganization.

I agree that there needs to be scarcity. I believe that tying the scarcity to the average amount of tx fees ensures that the block size can grow but also that there will always be a market for tx fees.

I strongly disagree with the idea that changing the max block size is a violation of the "Bitcoin currency guarantees"...It's not totally clear that an unlimited max block size would work.

I agree. It seems obvious that if the max block size is left at 1MB, and there are always non-free transactions getting left out of blocks, the fees for transactions will keep increasing to a high level.

Each node could automatically set its max block size to a calculated value based on disk space and bandwidth

Not really a fan of this idea. Disk space and bandwidth should have little to do with the determination of max block size. Disk space should be largely a non issue: if the goal is to make Bitcoin more useful as a payment network, we should not be hamstrung by temporary limitations in storage space. If bandwidth is an issue then we have bigger problems than max block size - it means that the overlay network (messages sent between peers) has congestion and we need some sort of throttling scheme. If the goal is to make Bitcoin accommodate as much transaction volume as possible, the sensible choice is for nodes to demote themselves to thin clients if they can't keep up.

I just think miners should be able to create their own limits together with multiple "tolerance levels"...That would push towards a consensus. Miners with limits too different than the average would end up losing work.

This doesn't make sense. Given any set of network parameters, there is always a single global optimum strategy for miners to maximize their revenue by prioritizing transactions. Tolerances for block sizes are not something that miners will have a wide variety of opinions on - the goal is always to make money through fees (and the subsidy, but that doesn't change based on which tx are included). Besides, why on earth would we want to waste hashing power by causing more orphans?

If the blocks never get appreciably bigger than they do now, well any half-decent laptop made in the past few years can handle being a full node with no problem.

If Bitcoin's transaction volume never exceeds an average of 1mb per block then we have bigger problems, because the transaction fees will tend towards zero. There's no incentive for paying a fee if transactions always get included. To maintain fees, transaction space must be scarce. To keep fees low, the maximum block size must grow, and in a decentralized fashion that doesn't create extra orphans.

the best proposal I've heard is to make the maximum block size scale based on the difficulty.

Disagree. If this causes the maximum block size to increase to such a size that there is always room for more transactions, then we will end up killing off the fees (no incentive to include a fee).



Title: Re: The MAX_BLOCK_SIZE fork
Post by: misterbigg on February 06, 2013, 09:02:16 AM
Problem is, how do you measure the number of bitcoins transmitted?...This also opens it up to manipulation...So while I think the “fees as % of transfer” is a nice number to work with in theory, in practice it’s not really available.

Whoops! You're right of course, and I was expecting at least a hole or two. Here's an alternative:

1) Block size adjustments happen at the same time that network difficulty adjusts (every 2016 blocks)

2) On a block size adjustment, the size either stays the same or is increased by a fixed percentage (say, 10%). This percentage is a baked-in constant requiring a hard fork to change.

3) The block size is increased if more than 50% of the blocks in the previous interval have a sum of transaction fees greater than 50BTC minus the block subsidy. The 50BTC constant and the threshold percentage are baked in.

Example:

A block size adjustment arrives, and the current subsidy is 12.5BTC. The last 2016 blocks are analyzed, and it is determined that 62% of them have over 37.5BTC in transaction fees. The maximum block size is increased by 10% as a result.

Instead of targeting a fixed percentage of fees (1.5% in my original proposal), this targets a fixed block value (measured in BTC). This scheme still creates scarcity while allowing the max block size to grow. One interesting property is that during growth phases, blocks will reward 50BTC regardless of the subsidy. If transaction volume declines, fees will be reduced. Hopefully this will be the result of Bitcoin gaining purchasing power (correlating roughly to the fiat exchange rate). For this reason, the scheme does not allow the block size to shrink, or else the transaction fees might become too large with respect to purchasing power.

Another desirable property is that a client can display a reasonable upper limit for the default fee given the size of the desired transaction. It is simply 50BTC divided by the block size in bytes, multiplied by the size of the desired transaction.
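
Here is a rough sketch of that rule, plus the fee estimate from the last paragraph (constants and names are illustrative only):

Code:
#include <cstdint>
#include <vector>

static const int64_t COIN = 100000000;                // satoshis per BTC
static const int64_t TARGET_BLOCK_VALUE = 50 * COIN;  // the baked-in 50BTC
static const double  GROWTH_FACTOR = 1.10;            // +10% per adjustment
static const double  THRESHOLD = 0.50;                // more than 50% of blocks

struct BlockSummary {
    int64_t totalFees;  // fees collected by the block, in satoshis
    int64_t subsidy;    // block subsidy at that height, in satoshis
};

// Returns the new maximum block size after one adjustment interval.
// The size only grows; per the proposal it is never allowed to shrink.
size_t AdjustMaxBlockSize(size_t currentMax,
                          const std::vector<BlockSummary>& interval) {
    if (interval.empty()) return currentMax;
    size_t fullValueBlocks = 0;
    for (const BlockSummary& b : interval) {
        if (b.totalFees > TARGET_BLOCK_VALUE - b.subsidy)
            ++fullValueBlocks;
    }
    double fraction = double(fullValueBlocks) / double(interval.size());
    return fraction > THRESHOLD ? size_t(currentMax * GROWTH_FACTOR)
                                : currentMax;
}

// Reasonable upper limit for a default fee:
// (50 BTC / max block size in bytes) * transaction size in bytes.
int64_t SuggestedFeeSatoshi(size_t maxBlockSize, size_t txSizeBytes) {
    return TARGET_BLOCK_VALUE * int64_t(txSizeBytes) / int64_t(maxBlockSize);
}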

Someone could mine otherwise empty blocks with enormous looking fees... which, since they mined the block, they get back, costing them nothing.  They could then work to expand the block size for whatever nefarious reason.

I believe this problem is solved with the new proposal. If someone mines a block with a huge fee, it still counts as just one block. This would be a problem if the miner could produce 50% of the blocks in the interval with that property, but this is equivalent to a 51% attack and therefore irrelevant.

The expected behavior of miners and clients is a little harder to analyze than with the fixed fee, can someone help me with a critique?



Title: Re: The MAX_BLOCK_SIZE fork
Post by: caveden on February 06, 2013, 01:48:47 PM
Absolutely. Of course, that sadly means that we won't be able to ever trust a block until it gets past that point (which I think should be 2-4 blocks).

Would it really?
Yes. You wouldn't be able to trust that a majority of the network acknowledged a block until it gets past the point where all clients are required to accept it as part of the chain.

Imagine that only 10% of the network accepts blocks over 10 MB and 100% accepts blocks less than 1 MB. What if that 10% got lucky and generated two 11 MB blocks in a row? Well, the other 90% would just ignore them because they are too large. So, those blocks get orphaned because the rest of the network found three small blocks. If you just accepted the 11 MB blocks as a confirmation and sent goods because of it, you could be screwed if there was a double-spend.

I understand; I just don't think it's a serious enough issue to motivate changing the 10 min interval as well. Instead, if relaying orphans becomes normal practice, nodes would be able to see whether there's another branch in which their transactions don't exist. If your transaction is currently in all branches being mined, you can be certain that you got your confirmation.
So, to counter the problem you raise, I think that relaying orphans is good enough. Why wouldn't it be?

I just think miners should be able to create their own limits together with multiple "tolerance levels"...That would push towards a consensus. Miners with limits too different than the average would end up losing work.

This doesn't make sense. Given any set of network parameters, there is always a single global optimum strategy for miners to maximize their revenue by prioritizing transactions. Tolerances for block sizes are not something that miners will have a wide variety of opinions on - the goal is always to make money through fees (and the subsidy, but that doesn't change based on which tx are included). Besides, why on earth would we want to waste hashing power by causing more orphans?

There would be some variety, surely. In the blocks they produce themselves, miners will seek to optimize the ratio (time to propagate / revenue in fees), while for blocks they receive from other miners, they would rather they be as small as possible. These parameters are not the same for different miners, particularly the "time to propagate" one, as it strongly depends on how many connections you can keep established and on your bandwidth/network lag.

Plus, if there is a "global optimal max size", it's quite pretentious to claim you can come up with the "optimal formula" to calculate it. Even if you could, individual peers would never have all the necessary data to feed into this formula, as it would have to take into consideration the hardware resources of all miners and the network as a whole. That's impracticable. Such a maximum size must be established via a decentralized/spontaneous order. It's pretty much like economic central planning versus free markets, actually.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: mrvision on February 06, 2013, 02:21:57 PM
Plus, if there is an "global optimal max size", it's quite pretentious to claim you can come up with the "optimal formula" to calculate it. Even if you could, individual peers would never have all necessary data to feed to this formula, as it would have to take into consideration the hardware resources of all miners and the network as a whole. That's impracticable. Such maximum size must be established via a decentralized/spontaneous order. It's pretty much like economical central planning versus free markets actually.

I think that if we reach the 1mb limit and don't upgrade with a solution, then the spontaneous order will create fiat currencies backed with bitcoins, in order to reduce the amount of transactions in the bitcoin network. So, this would also lead to less revenues for the miners (plus a loss on reputation for the bitcoin network).

The block size hard limit is nothing but a protectionist policy.

Even when misterbigg's approach might not be the optimal solution, at least it's an idea.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: misterbigg on February 06, 2013, 02:51:53 PM
if there is an "global optimal max size", it's quite pretentious to claim you can come up with the "optimal formula" to calculate it.

Definitely, but I was talking about an optimum strategy for prioritizing transactions, not an optimum choice of max block size. Typically the strategy will be either:

1) Include all known pending transactions with fees

or

2) Choose the pending transactions with the highest fees per kilobyte ratio and fill the block up to a certain size
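
For illustration, strategy 2 might look something like this minimal C++ sketch (the Tx struct, the numbers and the 1000-byte cap are made up for the example, not actual client code):

Code:
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical pending transaction: size in bytes and fee in satoshis.
struct Tx { uint32_t size; uint64_t fee; };

// Strategy 2: greedily pick the highest fee-per-byte transactions
// until the block size cap is reached.
std::vector<Tx> fillBlock(std::vector<Tx> mempool, uint32_t maxBlockSize)
{
    std::sort(mempool.begin(), mempool.end(),
              [](const Tx& a, const Tx& b) {
                  // Compare a.fee/a.size > b.fee/b.size without floating point.
                  return (uint64_t)a.fee * b.size > (uint64_t)b.fee * a.size;
              });
    std::vector<Tx> block;
    uint32_t used = 0;
    for (const Tx& tx : mempool) {
        if (used + tx.size <= maxBlockSize) {
            block.push_back(tx);
            used += tx.size;
        }
    }
    return block;
}

int main()
{
    std::vector<Tx> mempool = {{400, 50000}, {250, 10000}, {800, 200000}, {600, 5000}};
    std::vector<Tx> block = fillBlock(mempool, 1000); // tiny 1000-byte cap just for the demo
    uint64_t totalFees = 0;
    for (const Tx& tx : block) totalFees += tx.fee;
    std::printf("included %zu tx, total fees %llu satoshis\n",
                block.size(), (unsigned long long)totalFees);
    return 0;
}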

There would be some variety, surely. In the blocks they produce themselves, miners will seek to optimize the ratio (time to propagate / revenue in fees), while in blocks they receive from other miners, they would rather it be as small as possible. These parameters are not the same for different miners, particularly the "time to propagate" one, as it strongly depends on how many connections you can keep established and on your bandwidth/network lag.

I don't understand this aspect of the network. Why do miners want smaller blocks from other miners? Do blocks take a long time to propagate? Are you saying that newly solved blocks are sent around on the same peer connections used to transmit messages, and that while a connection is being used to send a block (which can be large relative to the size of a transaction) it holds up the queue for individual tx?

If this is the case, perhaps an easier way to deal with the propagation of blocks is to have two overlays, one for tx and the other for blocks.

I think that if we reach the 1mb limit and don't upgrade with a solution, then the spontaneous order will create fiat currencies backed with bitcoins, in order to reduce the amount of transactions in the bitcoin network.

I'm not so sure this is a bad thing. These ad-hoc "fiat" currencies may be created with unique properties that make them better suited to the task at hand than Bitcoin. For example, a private payment network that provides instant confirmation and requires no mining (relying on trust in a central authority).

Quote
So, this would also lead to less revenues for the miners (plus a loss on reputation for the bitcoin network).

The average transaction fee per kilobyte is inversely proportional to the block size, so leaving the block size at 1MB will cause fees to increase once blocks are regularly full. The rate of increase in fees will be proportional to the growth in the number of transactions.

Miners would love it if all blocks had a one transaction maximum, this would maximize fees (assuming people didn't leave the Bitcoin network due to high fees).






Title: Re: The MAX_BLOCK_SIZE fork
Post by: DeathAndTaxes on February 06, 2013, 03:03:48 PM
Sadly any attempt to find an "optimal" block size is likely doomed because it can be gamed and "optimal" is hard to quantify.

Optimal for the short-term-thinking non-miner - block size large enough that fees are driven down to zero or close to it.

Optimal for the network - block size large enough to create sufficient competition that fees can support the network relative to true economic value.

Optimal for the short-term-thinking miner - never rising above 1MB to maximize fee revenue.

However I would point out that the blockchain may eventually become the equivalent of bank wire transactions.  FedWire for example transferred ~$663 trillion USD in 2011 using 127 million transactions.  If FedWire used a 10 minute block that would be ~2,500 transactions per block.  At roughly 400 bytes per tx, that fits within a 1MB block.  So it shows that Bitcoin can support a pretty massive fund transfer network even with a 1MB block limit.
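
The arithmetic behind those figures, as a quick sanity check (a sketch only; the 400 bytes per transaction is an assumed average):

Code:
#include <cstdio>

int main()
{
    // FedWire 2011: ~127 million transactions for the whole year.
    double fedwireTxPerYear = 127e6;
    double blocksPerYear = 365.0 * 24 * 6;          // one block every 10 minutes
    double txPerBlock = fedwireTxPerYear / blocksPerYear;
    double avgTxBytes = 400.0;                      // assumed average transaction size
    std::printf("FedWire tx per 10-minute block: %.0f\n", txPerBlock);          // ~2400
    std::printf("block size needed at 400 B/tx: %.0f bytes\n",
                txPerBlock * avgTxBytes);           // comfortably under 1 MB
    return 0;
}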

Some would dismiss this as too centralized but I would point out that direct access to FedWire is impossible for anyone without a banking charter.  Direct access to the blockchain simply requires payment of a fee and computing resources capable of running a node.  This means the blockchain will always remain far more open.

I think one modest change (which is unlikely to make anyone happy but would allow higher tx volume) would be to take it out of the hands of everyone.  The block subsidy follows a specific exact path for a reason.  If it was open to human control Bitcoin would likely be hyperinflated and nearly worthless today.  A proposal could be made for a hard fork to double (or increase by some other factor) the max size of a block on every subsidy cut. 

This would allow for example (assuming avg tx is 400 bytes):
2012 - 1MB block =~ 360K daily transactions (131M annually)
2016 - 2MB block =~ 720K daily transactions (262M annually)
2020 - 4MB block =~ 1.44M daily transactions (525M annually)
2024 - 8MB block =~ 2.88M daily transactions (1B annually)
2028 - 16MB block =~5.76M daily transactions (2B annually)
2032 - 32MB block =~ 11.52M daily transactions (4B annually)

Moore's law should ensure that processing a 32MB block in 2032 is computationally less of a challenge than doing so with a 1MB block today.
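
That schedule is easy to reproduce; here is a sketch assuming ~400 bytes per transaction and a doubling of the cap at each subsidy halving, as proposed above:

Code:
#include <cstdio>

int main()
{
    const double avgTxBytes = 400.0;      // assumed average transaction size
    const double blocksPerDay = 144.0;    // one block every 10 minutes
    double maxBlockBytes = 1e6;           // 1 MB starting point
    for (int year = 2012; year <= 2032; year += 4) {
        double txPerBlock = maxBlockBytes / avgTxBytes;
        double dailyTx = txPerBlock * blocksPerDay;
        std::printf("%d: %2.0f MB block =~ %.2fM daily tx (%.0fM annually)\n",
                    year, maxBlockBytes / 1e6, dailyTx / 1e6, dailyTx * 365 / 1e6);
        maxBlockBytes *= 2;               // double the cap at each subsidy halving
    }
    return 0;
}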


Title: Re: The MAX_BLOCK_SIZE fork
Post by: DeathAndTaxes on February 06, 2013, 03:15:46 PM
I don't understand this aspect of the network. Why do miners want smaller blocks from other miners? Do blocks take a long time to propagate? Are you saying that newly solved blocks are sent around on the same peer connections used to transmit messages, and that while a connection is being used to send a block (which can be large relative to the size of a transaction) it holds up the queue for individual tx?

If this is the case, perhaps an easier way to deal with the propagation of blocks is to have two overlays, one for tx and the other for blocks.

Yes, there is a propagation delay for larger blocks. When two blocks are produced by different miners at roughly the same time, the larger block is more likely to be orphaned. The subsidy distorts the market effect.  Say you know that by making the block 4x as large you can gain 20% more fees.  If this increases the risk of an orphan by 20% then the larger block is break even.  However the subsidy distorts the revenue to size ratio.  20% more fees may only mean 0.4% more total revenue if fees make up only 2% of revenue (i.e. 25 BTC subsidy + 0.5 BTC fees).  As a result a 20% increase in orphan rates isn't worth a 0.4% increase in total revenue.

As the subsidy becomes a smaller % of miner total compensation the effect of the distortion will be less.  There has been some brainstorming on methods to remove the "large block penalty".  It likely would require a separate mining overlay.
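
To make the distortion concrete, here is a rough expected-revenue sketch using the same hypothetical numbers as above plus an assumed 3% baseline orphan probability (all figures illustrative, not measured):

Code:
#include <cstdio>

// Expected revenue of a solved block: (1 - orphan probability) * (subsidy + fees).
double expectedRevenue(double orphanProb, double subsidy, double fees)
{
    return (1.0 - orphanProb) * (subsidy + fees);
}

int main()
{
    double subsidy = 25.0;        // BTC
    double baseFees = 0.5;        // BTC, i.e. fees are ~2% of revenue
    double baseOrphan = 0.03;     // assumed baseline orphan probability

    // Hypothetical trade: a much larger block earns 20% more fees
    // but raises the orphan risk by 20% (relative).
    double bigFees = baseFees * 1.20;
    double bigOrphan = baseOrphan * 1.20;

    std::printf("small block E[revenue]: %.4f BTC\n",
                expectedRevenue(baseOrphan, subsidy, baseFees));
    std::printf("large block E[revenue]: %.4f BTC\n",
                expectedRevenue(bigOrphan, subsidy, bigFees));
    // With the 25 BTC subsidy dominating, the extra 0.1 BTC in fees does not
    // cover the extra orphan risk; if fees were the whole revenue, the 20%
    // fee gain would dominate a small change in orphan rate.
    return 0;
}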


Title: Re: The MAX_BLOCK_SIZE fork
Post by: misterbigg on February 06, 2013, 03:29:51 PM
Yes there is a propagation delay for larger blocks

There's a delay regardless of whether or not two different blocks are solved at the same time?

Quote
when two blocks are produced by different miners at roughly the same time, the larger block is more likely to be orphaned.

You mean that when two different blocks are solved at the same time, the smaller block will propagate faster and therefore more miners will start building on it versus the larger block?

Quote
...increases the risk of an orphan by 20%

Is there a straightforward way to estimate the risk of an orphan?

Quote
As the subsidy becomes a smaller % of miner total compensation the effect of the distortion will be less.  There has been some brainstorming on methods to remove the "large block penalty".  It likely would require a separate mining overlay.

Even with a separate overlay, two blocks solved at the same time is a problem. And I would imagine that adding a new overlay is an extreme solution to be considered as a last resort only.

...any attempt to find an "optimal" block size is likely doomed because it can be gamed and "optimal" is hard to quantify.

What are your thoughts on the last scheme I described (https://bitcointalk.org/index.php?topic=140233.msg1507328#msg1507328)?

...
2016 - 2MB block =~ 720K daily transactions (262M annually)
...

Hmm...this seems problematic. If the transaction volume doesn't grow sufficiently, this could kill fees. But if the transaction volume grows too much, fees will become exorbitant. IF we accept that max block size needs to change, I believe it should be done in a way that decreases scarcity in response to a rise in average transaction fees.

There would be some variety, surely. In the blocks they produce themselves, miners will seek to optimize the ratio (time to propagate / revenue in fees), while in blocks they receive from other miners, they would rather it be as small as possible.

Sure, a miner might "rather" received blocks be as small as possible but since there's no way to refuse to receive a block from a peer, this point is moot. They could drop a block that is too big once they get it but this doesn't help them very much other than not having to forward it to the remaining peers. And even this has little global effect since those other peers will just receive it from someone else.

These parameters are not the same for different miners, particularly the "time to propagate" one, as it strongly depends on how many connections you can keep established and on your bandwidth/network lag.

Bandwidth will be the limiting factor in determining the number of connections that may be maintained. For purposes of analysis we should assume that miners choose degree (number of peers) such that bandwidth is not fully saturated, because doing otherwise would mean not being able to collect the largest number of transactions possible for the amount of bandwidth available, limiting revenue.

Do people in mining pools even need to run a full node?




Title: Re: The MAX_BLOCK_SIZE fork
Post by: jl2012 on February 06, 2013, 03:45:21 PM
Problem is, how do you measure the number of bitcoins transmitted?...This also opens it up to manipulation...So while I think the “fees as % of transfer” is a nice number to work with in theory, in practice it’s not really available.

Whoops! You're right of course, and I was expecting at least a hole or two. Here's an alternative:

1) Block size adjustments happen at the same time that network difficulty adjusts (every 210,000 blocks?)

2) On a block size adjustment, the size either stays the same or is increased by a fixed percentage (say, 10%). This percentage is a baked-in constant requiring a hard fork to change.

3) The block size is increased if more than 50% of the blocks in the previous interval have a sum of transaction fees greater than 50BTC minus the block subsidy. The 50BTC constant and the threshold percentage are baked in.

Example:

A block size adjustment arrives, and the current subsidy is 12.5BTC. The last 210,000 blocks are analyzed, and it is determined that 62% of them have over 37.5BTC in transaction fees. The maximum block size is increased by 10% as a result.

Instead of targeting a fixed percentage of fees (1.5% in my original proposal), this targets a fixed block value (measured in BTC). This scheme still creates scarcity while allowing the max block size to grow. One interesting property is that during growth phases, blocks will reward 50BTC regardless of the subsidy. If transaction volume declines, fees will be reduced. Hopefully this will be the result of Bitcoin gaining purchasing power (correlating roughly to the fiat exchange rate). For this reason, the scheme does not allow the block size to shrink, or else the transaction fees might become too large with respect to purchasing power.

Another desirable property is that a client can display a reasonable upper limit for the default fee given the size of the desired transaction. It is simply 50BTC divided by the block size in bytes, multiplied by the size of the desired transaction.
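
A sketch of that adjustment rule with the stated parameters (10% step, 50% threshold, 50 BTC target; the BlockSummary type and the example values are hypothetical):

Code:
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical per-block summary for the previous adjustment interval.
struct BlockSummary { double totalFees; };  // BTC

// Grow the cap by 10% if more than half the blocks in the interval collected
// fees exceeding (50 BTC - subsidy); otherwise keep it. The size never shrinks.
uint64_t adjustMaxBlockSize(uint64_t currentMax,
                            const std::vector<BlockSummary>& interval,
                            double subsidy)
{
    const double targetBlockValue = 50.0;          // BTC, baked-in constant
    const double feeThreshold = targetBlockValue - subsidy;
    size_t qualifying = 0;
    for (const BlockSummary& b : interval)
        if (b.totalFees > feeThreshold)
            ++qualifying;
    if (qualifying * 2 > interval.size())          // more than 50% of blocks
        return currentMax + currentMax / 10;       // +10%
    return currentMax;
}

int main()
{
    // Toy example: every block in the interval carries 40 BTC in fees, subsidy is 12.5.
    std::vector<BlockSummary> interval(210000, BlockSummary{40.0});
    uint64_t newMax = adjustMaxBlockSize(1000000, interval, 12.5);
    std::printf("new max block size: %llu bytes\n", (unsigned long long)newMax);
    return 0;
}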

Someone could mine otherwise empty blocks with enormous looking fees... which, since they mined the block, they get back, costing them nothing.  They could then work to expand the block size for whatever nefarious reason.

I believe this problem is solved with the new proposal. If someone mines a block with a huge fee, it still counts as just one block. This would be a problem if the miner could produce 50% of the blocks in the interval with that property, but this is equivalent to a 51% attack and therefore irrelevant.

The expected behavior of miners and clients is a little harder to analyze than with the fixed fee, can someone help me with a critique?



There is no reason to fix the total reward at 50 BTC because you need to consider the purchasing power. Although we only have a 25 BTC block reward at this moment, finding a block now will give you >1000x more in USD than a 50 BTC block in 2009. A future miner may be satisfied with a 1 BTC block if it is equivalent to US$10000 of today's purchasing power. However, the purchasing power is not mathematically determined and you cannot put it into the formula.

Also, requiring a total reward of 50BTC means requiring 25BTC in fee NOW. As the typical total tx fee in a block is about 0.25BTC, the fee has to increase by 100x and obviously this will kill the system.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: misterbigg on February 06, 2013, 03:53:47 PM
Also, requiring a total reward of 50BTC means requiring 25BTC in fee NOW. As the typical total tx fee in a block is about 0.25BTC, the fee has to increase by 100x and obviously this will kill the system.

How and why would the system be "killed"? The max block size would simply not increase.

There is no reason to fix the total reward at 50 BTC because you need to consider the purchasing power.

Here's yet another alternative scheme:

1) Block size adjustments happen at the same time that network difficulty adjusts

2) On a block size adjustment, the size either stays the same or is increased by a fixed percentage (say, 10%). This percentage is a baked-in constant requiring a hard fork to change.

3) The block size is increased if more than 50% of the blocks in the previous interval have a size greater than or equal to 90% of the max block size. Both of the percentage thresholds are baked in.

Example:

A block size adjustment arrives, and the current max block size is 1024KB. The last 210,000 blocks are analyzed, and it is determined that 125,000 of them are at least 922KB in size. The maximum block size is increased by 10% as a result.

Instead of targeting a fixed block reward, this scheme tries to determine if miners are consistently reaching the max block limit when filling a block with transactions (the 90% percentage should be tuned based on historical transaction data). This creates scarcity (in proportion to the 50% figure) while remaining independent of the purchasing power.
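
As before, a sketch of this rule; the 90% fullness test, the 50% block-count threshold and the 10% step are the baked-in constants from the proposal, everything else is made up for the example:

Code:
#include <cstdint>
#include <cstdio>
#include <vector>

// Grow the cap by 10% if more than 50% of blocks in the interval were filled
// to at least 90% of the current maximum size; never shrink it.
uint64_t adjustMaxBlockSize(uint64_t currentMax, const std::vector<uint64_t>& blockSizes)
{
    uint64_t nearlyFull = 0;
    for (uint64_t size : blockSizes)
        if (size * 10 >= currentMax * 9)           // size >= 90% of the cap
            ++nearlyFull;
    if (nearlyFull * 2 > blockSizes.size())        // more than 50% of the interval
        return currentMax + currentMax / 10;       // +10%
    return currentMax;
}

int main()
{
    // Example from the post: 125,000 of 210,000 blocks at >= 922KB with a 1024KB cap.
    std::vector<uint64_t> sizes(125000, 944128);   // ~922 KB blocks
    sizes.resize(210000, 500000);                  // the rest are smaller
    std::printf("new cap: %llu bytes\n",
                (unsigned long long)adjustMaxBlockSize(1024 * 1024, sizes));
    return 0;
}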


Title: Re: The MAX_BLOCK_SIZE fork
Post by: DeathAndTaxes on February 06, 2013, 04:02:28 PM
There's a delay regardless of whether or not two different blocks are solved at the same time?

Yes, but if a miner knew that no other block would be found in the next x seconds they wouldn't care.  Since that can never be known, the longer the propagation delay the higher the probability that a competing block will be found before propagation completes and potentially wins the race.

Quote
You mean that when two different blocks are solved at the same time, the smaller block will propagate faster and therefore more miners will start building on it versus the larger block?

Yes, although it is more like the smaller block has a higher probability of winning the race.  A miner can never know if he will be in a race condition or which races he will lose, but over the long run, everything else being equal, a miner with a longer propagation delay will suffer a higher rate of orphaned blocks.

Quote
Is there a straightforward way to estimate the risk of an orphan?

Not that I know of.  I do know pools have looked into this to improve their orphan rates and remain competitive.  My guess is any analysis is crude because it would be difficult to model, so testing needs to be done with real blocks = real earnings. A pool at least wants to ensure its orphan rate isn't significantly higher than its peers' (or the global average).


Quote
Even with a separate overlay, two blocks solved at the same time is a problem. And I would imagine that adding a new overlay is an extreme solution to be considered as a last resort only.

True, but if it became a large enough problem, a mining network would allow for direct transmission to other miners.  A block notification superhighway of sorts.  Blocks could be digitally signed by a miner and if that miner is trusted by other miners (based on prior submitted work) those miners could start mining the next block immediately.  Think of it as WOT for miners, but instead of rating financial transactions miners are trusting other miners based on prior "good work".

The propagation delay on large blocks is a combination of the relay -> verify -> relay nature of the bitcoin network, combined with relatively slow block verification (a large fraction of a second), and the potential need for multiple hops.  All combined this can result in a delay of multiple seconds before a majority of miners start work on this block.

A single hop, trusting enough to start work on the next block, and verifying after the fact would make the "cost" of a larger block negligible.   It is just an idea.  I care less about mining these days so that is something for major miners to work out.    Even if this network never became "official" I would imagine some sort of private high speed data network would emerge.  It would allow participating miners to gain a small but real competitive advantage over other miners.  Fewer orphans and the ability to include more tx (and thus higher fees) = more net revenue for miners.

Quote
What are your thoughts on the last scheme I described?

Any system which relies on trivial input can be gamed.  I (and other merchants) could buy/rent enough hashing power to solve 1% of blocks and fill them with tx containing massive fees (which come right back to us) and inflate the average fee per block.

I would point out that a fixed money supply and static inflation curve is non-optimal. In theory a central bank should be able to do a better job.  By matching the growth of the money supply to economic growth (or contraction) prices never need to rise or fall (in aggregate).  A can of soup which costs $0.05 in 1905 would cost $0.05 today.  At least the inflation aspect.  The actual price may vary for non-inflationary reasons such as improved productivity or true scarcity of resources.  

The problem with central banks isn't the theory ... it is the humans.  The models of monetary policy rely on flawed humans making perfect decisions and that is why they are doomed to failure: flawed humans choosing the benefit for the many (the value of price stability) over the increased benefit for the few (direct profit from manipulation of the money supply).  Maybe someday when we create a utopia such ideas will work but until then they will be manipulated for personal benefit.

The value of Bitcoin comes from the inability to manipulate the money supply.  Sure many times a fixed money supply and static inflation curve is non-optimal but it can't be manipulated and thus this non-optimal system has the potential to outperform systems which in theory are superior but have the flaw of needing perfect humans to run them.

On edit:
On rereading I noticed you proposed using median block reward not average.  That is harder to manipulate.  It probably is better than a fixed block size but I would expect 50 BTC per block isn't necessary on a very large tx volume so it may result in higher than needed fees (although still not as bad as 1MB fixed).  It is worth considering.  Not sure if a consensus for a hard fork can ever be reached though.

Quote
Hmm...this seems problematic. If the transaction volume doesn't grow sufficiently, this could kill fees. But if the transaction volume grows too much, fees will become exhorbitant. IF we accept that max block size needs to change, I believe it should be done in a way that decreases scarcity in response to a rise in average transaction fees.

Likewise a fixed subsidy reduction schedule is non-optimal.  What if tx fees don't cover the drop in subsidy value in 2016?  Network security will be reduced.  Should we also make the money supply adjustable? :)  (Kidding but I hope it illustrates the point).

TL/DR: Fixed but non-optimal vs adjustable but manipulable? Your choice.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: jl2012 on February 06, 2013, 04:09:11 PM
Also, requiring a total reward of 50BTC means requiring 25BTC in fee NOW. As the typical total tx fee in a block is about 0.25BTC, the fee has to increase by 100x and obviously this will kill the system.

How and why would the system be "killed"? The max block size would simply not increase.


So you propose that the size will only increase but never decrease? Anyway, the major problem is the choice of total reward amount because change in purchasing power is unpredictable.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: misterbigg on February 06, 2013, 04:22:53 PM
Any system which relies on trivial input can be easily gamed.  I (and other merchants) could buy/rent enough hashing power to solve 1% of blocks and fill them with massive fees (which come right back to us) and inflate the average fee per block.

Hmm...This was a problem in my first idea but I fixed it for the last two. Do you see an exploitable problem with the most recent proposal (https://bitcointalk.org/index.php?topic=140233.msg1507870#msg1507870)?

Quote
I would point out that a fixed money supply and static inflation curve is non-optimal. In theory a central bank should be able to do a better job.  By matching the growth of the money supply to economic growth (or contraction) prices never need to rise or fall (in aggregate).

I disagree. There's nothing particularly attractive about fixed prices. Especially troublesome is when growth in the money supply is politically directed (versus doled out through proof of work). But this heads us in the direction of "deflationary currency" debate so I'll stop here.

...you propose that the size will only increase but never decrease? Anyway, the major problem is the choice of total reward amount because change in purchasing power is unpredictable.

Yes, size would only increase. If we allow the size to decrease then it could cause fees to skyrocket. I proposed a new scheme (https://bitcointalk.org/index.php?topic=140233.msg1507870#msg1507870) that does not depend on total reward amount, I believe it addresses your concerns.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: mrvision on February 06, 2013, 06:10:05 PM
I think that if we reach the 1mb limit and don't upgrade with a solution, then the spontaneous order will create fiat currencies backed with bitcoins, in order to reduce the amount of transactions in the bitcoin network.

I'm not so sure this is a bad thing. These ad-hoc "fiat" currencies may be created with unique properties that make them better suited to the task at hand than Bitcoin. For example, a private payment network that provides instant confirmation and requires no mining (relying on trust in a central authority).


It is a bad thing because even though you may know the amount of deposits they have (since you can audit the blockchain), you don't actually know the amount of notes they will have issued, so eventually this will drive us to a fractional reserve system again.

That's what i think.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: misterbigg on February 06, 2013, 06:14:01 PM
eventually this will drive us to a fractional reserve system again.

There's nothing wrong with a fractional reserve system, just look at systems of self-issued credit (like Ripple). The problem is with legal tender laws that force you to use a particular debt instrument. With voluntary exchange, competition between systems would keep them honest.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: mrvision on February 06, 2013, 06:20:33 PM
eventually this will drive us to a fractional reserve system again.

There's nothing wrong with a fractional reserve system, just look at systems of self-issued credit (like Ripple). The problem is with legal tender laws that force you to use a particular debt instrument. With voluntary exchange, competition between systems would keep them honest.


Well indeed there is a problem if the only option you have is to use a fractional reserve system because of the hard limit of 1MB. Especially if the 'bank' cannot give you back your bitcoins because everybody else has withdrawn theirs for whatever reason :D


Title: Re: The MAX_BLOCK_SIZE fork
Post by: solex on February 06, 2013, 08:31:22 PM

Well indeed there is a problem if the only option you have is to use a fractional reserve system because of the hard limit of 1MB. Especially if the 'bank' cannot give you back your bitcoins because everybody else has withdrawn theirs for whatever reason :D


Agreed 100%

If Bitcoin cripples itself at such an early stage then the central bankers will be laughing as they quaff bourbon in their gentlemen's clubs. Bitcoin is a very disruptive technology which has a huge first-mover advantage, potentially returning a lot of power from governments and TBTF banks to the people. If this is thrown away then expect the next major cryptocurrency to be FedCoin or ECBcoin or even IMFcoin which will be designed to integrate somehow with the existing fiat systems. (Oh. And expect attempts to ban community-based alternatives when "official" ones are up and running.)


Title: Re: The MAX_BLOCK_SIZE fork
Post by: SimonL on February 09, 2013, 04:00:39 PM
I haven't read the entire thread so this may have been covered. I've been stewing over this problem for a while and would just like to think aloud here....

I very much think the blocksize should be network regulated much like difficulty is used to regulate propagation windows based on the amount of computation cycles used to find hashes for particular difficulty targets. To clarify, when I say CPU I mean CPUs, GPUs, and ASICs collectively.

Difficulty is very much focused on the network's collective CPU cycles to control propagation windows (1 block every 10 mins), avoid 51% attacks, and distribute new coins.

However the max_blocksize is not related to the computing resources needed to validate transactions and keep up regular block propagation; it is geared much more to network speed, the storage capacity of miners (and even of non-mining full nodes) and the verification of transactions (which as I understand it means hammering the disk). What we need to determine is whether the nodes supporting the network can quickly and easily propagate blocks without this affecting the propagation window.

Interestingly there is a connection between CPU resources, the calculation of the propagation window with difficulty targets, and network propagation health. If we have no max_blocksize limit in place, it leaves the network open to a special type of manipulation of the difficulty.

The propagation window can be manipulated in two ways as I see it. One is creating more blocks as we classically know it: throw more CPUs at block creation and we transmit more blocks; more computation power = more blocks produced, and the difficulty ensures the propagation window doesn't get manipulated this way. The difficulty adjustment uses the timestamps in the blocks to determine whether more or fewer blocks were created in a certain period and whether difficulty goes up or down. All taken care of.

The propagation window could also be manipulated in a more subtle way though, that being transmission of large blocks (huge blocks in fact). Large blocks take longer to transmit, longer to verify, and longer to write to disk, though this manipulation of the number of blocks being produced is unlikely to be noticed until a monster block gets pushed across the network (in a situation where there is no limit on blocksize that is). Now because there is only a 10 minute window the block can't take longer than that I'm guessing. If it does, difficulty will sink and we have a whole new problem, that being manipulation of the difficulty through massive blocks. Massive blocks could mess with difficulty and push out smaller miners, causing all sorts of undesirable centralisations. In short, it would probably destroy the Bitcoin network.

So we need a maximum block size that is high enough that the vast majority of nodes are comfortable with it, and isn't so big that it can be used to manipulate the difficulty by artificially slowing propagation across the network with massive blocks. With the help of the propagation window maintained through the difficulty, we may be able to determine whether the propagation of blocks is slowing and whether the max_blocksize should be adjusted down to ensure the propagation window remains stable.

Because the difficulty can be potentially manipulated this way we could possibly have a means of knowing what the Bitcoin network is comfortable with propagating. And it could be determined thusly:

If the median size of the blocks transmitted in the last difficulty period is bumping up against the max_blocksize (median being chosen to avoid situations where one malicious entity, or entities tries to arbitrarily push up the max_blocksize limit), and the difficulty is "stable", increase the max_blocksize (say by 10%) for the next difficulty period (say the median is within 20% of the max_blocksize), but if the median size of blocks for the last period is much lower (say less than half the current blocksize_limit), then lower the size by 20% instead.

However, if the median size of the blocks transmitted in the last difficulty period is bumping up against the max_blocksize and the difficulty is NOT stable, don't increase the max_blocksize since there is a possibility that the network is not currently healthy and increasing or decreasing the max_blocksize is a bad idea. Or alternatively in those situations lower the max_blocksize by 10% for the next difficulty period anyway (not sure if this is a good idea or not though).

In either case 1MB should be the lowest the max_blocksize can go if it continues to shrink.

Checking the stability of the last difficulty period and the next one is what determines whether the network is spitting out blocks at a regular rate or not. If the median size of blocks transmitted in the last difficulty period is bumping up against the limit and difficulty is going down, it could mean a significant number of nodes can't keep up: blocks aren't getting to all the nodes in time and hashing capacity is getting cut off because nodes are too busy verifying the blocks they received. If the difficulty is going up and the median block size is bumping up against the limit, then there's a strong indication that nodes are all processing the blocks they receive easily, so raising the max_blocksize limit a little should be OK. The one thing I'm not sure of though is determining whether the difficulty is "stable" or not; I'm very much open to suggestions on the best way of doing that. The argument that what is deemed "stable" is arbitrary and could still lead to manipulation of the max_blocksize, just over a longer and more sustained period, is possible too, so I'm not entirely sure this approach could be made foolproof. How does the calculation of difficulty targets take these things into consideration?
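
To pin the rule down, here's a rough sketch (the "difficulty stable" check is just passed in as a flag, since determining it is the open question above; thresholds follow the description, everything else is made up):

Code:
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

const uint64_t MIN_MAX_BLOCK_SIZE = 1000000;   // 1 MB floor

uint64_t medianBlockSize(std::vector<uint64_t> sizes)
{
    std::sort(sizes.begin(), sizes.end());
    return sizes[sizes.size() / 2];
}

// One adjustment per difficulty period, driven by the median block size.
uint64_t adjustMaxBlockSize(uint64_t currentMax,
                            const std::vector<uint64_t>& lastPeriodSizes,
                            bool difficultyStable)
{
    uint64_t median = medianBlockSize(lastPeriodSizes);
    uint64_t next = currentMax;
    if (median * 10 >= currentMax * 8 && difficultyStable)
        next = currentMax + currentMax / 10;       // median within 20% of the cap: +10%
    else if (median * 2 < currentMax)
        next = currentMax - currentMax / 5;        // median below half the cap: -20%
    return std::max(next, MIN_MAX_BLOCK_SIZE);     // never below 1 MB
}

int main()
{
    std::vector<uint64_t> sizes(2016, 950000);     // most blocks near the 1 MB cap
    std::printf("next cap: %llu bytes\n",
                (unsigned long long)adjustMaxBlockSize(1000000, sizes, true));
    return 0;
}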

OK, guys, tear it apart. ;)


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Ari on February 10, 2013, 02:38:33 PM
We will eventually hit the limit, and the limit will be raised.  People running old versions will get disconnected from the network when that happens.  The only question is how close we will come to a 50% network split.  It would be good to reach some consensus well in advance, so that this is minimally disruptive.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: notme on February 10, 2013, 09:07:09 PM
We will eventually hit the limit, and the limit will be raised.  People running old versions will get disconnected from the [s]network[/s] new block chain and continue on the old, causing confusion as to which chain is the official "bitcoin" chain when that happens.  The only question is how close we will come to a 50% network split.  It would be good to reach some consensus well in advance, so that this is minimally disruptive.

FTFY


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Ari on February 10, 2013, 09:42:06 PM
Yeah.  If there's a major split, the <1MB blockchain will probably continue for a while.  It will just get slower and slower with transactions not confirming.  It would be better if there is a clear upgrade path so we don't end up with a lot of people in that situation.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: notme on February 10, 2013, 09:44:48 PM
Yeah.  If there's a major split, the <1MB blockchain will probably continue for a while.  It will just get slower and slower with transactions not confirming.  It would be better if there is a clear upgrade path so we don't end up with a lot of people in that situation.

You are assuming miners want to switch.  They have a very strong incentive to keep the limit in place (higher fees, lower storage costs).


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Anon136 on February 10, 2013, 10:08:37 PM
Opinions differ on the subject. The text on the Wiki largely reflects Mike Hearn's views.

Here are my views:

Without a sharp constraint on the maximum blocksize there is currently _no_ rational reason to believe that Bitcoin would be secure at all once the subsidy goes down.

Bitcoin is valuable because of scarcity. One of the important scarcities is the limited supply of coins, another is the limited supply of block-space: Limited blockspace creates a market for transaction fees, the fees fund the mining needed to make the chain robust against hostile reorganization.  I have not yet seen any suggestion as to how Bitcoin is long term viable without this except ones that argue for cartel or regulatory behavior (both of which I don't consider viable: they moot the decentralized purpose of Bitcoin).

Even going beyond fee funding— as Dan Kaminsky argued so succinctly— with gigabyte blocks, bitcoin would not be functionally decentralized in any meaningful way: only a small self selecting group of some thousands of major banks would have the means and the motive to participate in validation (much less mining), just as some thousands of major banks are the primary drivers of the USD and other major world currencies. An argument that Bitcoin can simply scale directly like that is an argument that the whole decentralization thing is a pretext: and some have argued that it's evidence that bitcoin is just destined to become another centralized currency (with some "bonus" wealth redistribution in the process, that they suggest is the real motive— that the decentralization is a cynical lie).

Obviously decentralization can be preserved for increased scale with technical improvements, and those should be done— but if decentralization doesn't come first I think we would lose what makes Bitcoin valuable and special...  and I think that would be sad. (Though, to be frank— Bitcoin becoming a worldwide centrally controlled currency could quite possibly be the most profitable for me— but I would prefer to profit by seeing the world be a diverse place with many good and personally liberating choices available to people)

Perhaps the proper maximum size isn't 1MB but some other value which is also modest and still preserves decentralization— I don't have much of an opinion beyond that fact that there is some number of years in the future where— say— 10MB will be no worse than 1MB today. It's often repeated that Satoshi intended to remove "the limit" but I always understood that to be the 500k maximum generation soft limit... quite possible I misunderstood, but I don't understand why it would be a hardforking protocol rule otherwise. (and why the redundant soft limit— and why not make it a rule for which blocks get extended when mining instead of a protocol rule? ...  and if that protocol rule didn't exist? I would have never become convinced that Bitcoin could survive... so where are the answers to long term survival?)

(In any case the worst thing that can possibly happen to a distributed consensus system is that it fails to achieve consensus. A substantial persistently forked network is the worst possible failure mode for Bitcoin: Spend all your own coins twice!  No hardfork can be tolerated that wouldn't result in a thoroughly dominant chain with near certain probability)

But before I think we can even have a discussion about increasing it I think there must be evidence that the transaction load has gone over the optimum level for creating a market for fees (e.g. we should already be at some multiple of saturation and still see difficulty increasing or at least holding steady).  This would also have the benefit of further incentivizing external fast payment networks, which I think must exist before any blockchain increase: it would be unwise to argue an increase is an urgent emergency because we've painted ourselves into a corner by using the system stupidly and not investing in building the infrastructure to use it well.

Quote
You can get around it (NAT), and you can fix it (IPv6) but the former is annoying and the latter is taking forever

It's not really analogous at all.  Bitcoin has substantial limits that cannot be fixed within the architecture, unrelated to the artificial* block-size cap. The blockchain is a worldwide broadcast medium and will always scale poorly (even if rocket boosters can be strapped to that pig), the consensus it provides takes time to converge with high probability— you can't have instant confirmations,  you can't have reversals for anti-fraud (even when the parties all desire and consent to it),  and the privacy is quite weak owing to the purely public nature of all transactions.

(*artificial doesn't mean bad, unless you think that the finite supply of coin or the limitations on counterfeiting, or all of the other explicit rules of the system are also bad...)

It's important to distinguish Bitcoin the currency and Bitcoin the payment network.  The currency is worthwhile because of the highly trustworthy extreme decentralization which we only know how to create through a highly distributed and decentralized public blockchain.  But the properties of the blockchain that make it a good basis for an ultimately trustworthy worldwide currency do _not_ make it a good payment network.  Bitcoin is only as much of a payment network as it must be in order to be a currency and in order to integrate other payment networks.

Or, by analogy— Gold may be a good store of value, but it's a cruddy payment system (especially online!).  Bitcoin is a better store of value— for one reason because it can better integrate good payment systems.

See retep's post on fidelity bonded chaum token banks for my personal current favorite way to produce infinitely scalable trustworthy payments networks denominated in Bitcoin.

Cheers,

Thanks gmaxwell. I'm not technically minded and I was wondering the whole time why no one brought this up. Once bitcoin becomes popular enough to use the whole 1MB on 100% serious transactions, higher demand will simply lead to a need to pay higher transaction fees in order to get your transaction processed. If these fees become high enough to be prohibitive then centrally managed servers can be used for small transactions that will be recorded on their records but not on the blockchain.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: d'aniel on February 10, 2013, 10:14:47 PM
Yeah.  If there's a major split, the <1MB blockchain will probably continue for a while.  It will just get slower and slower with transactions not confirming.  It would be better if there is a clear upgrade path so we don't end up with a lot of people in that situation.

You are assuming miners want to switch.  They have a very strong incentive to keep the limit in place (higher fees, lower storage costs).
Sometimes more customers paying less results in higher profit.  Miners will surely have an incentive to lower their artificially high prices to accommodate new customers, instead of having them all go with much cheaper off-blockchain competitors.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: markm on February 10, 2013, 11:49:41 PM
Don't forget merged mining. Smaller transactions could use one of the merged-mined  blockchains, there are several such blockchains already, this kind of pressure might just cause one or more of them to increase in popularity, and miners would still reap the fees.

-MarkM-


Title: Re: The MAX_BLOCK_SIZE fork
Post by: deepceleron on February 11, 2013, 12:10:09 AM
I'm just going to leave this here:

http://en.wikipedia.org/wiki/Parkinson%27s_law_of_triviality


Title: Re: The MAX_BLOCK_SIZE fork
Post by: notme on February 11, 2013, 06:04:39 AM
I'm just going to leave this here:

http://en.wikipedia.org/wiki/Parkinson%27s_law_of_triviality


 ;D


Title: Re: The MAX_BLOCK_SIZE fork
Post by: fornit on February 11, 2013, 02:37:12 PM
Very much depends on what you consider trivial. Little more than two years ago, all copies of the blockchain all around the world would have fit on a single hard drive. Right now, if every client were a full node, we would need something around a thousand hard drives.

In a few years, we might easily end up with a few million or billion hard drives assuming everyone is a full node. So imho how this issue is handled directly determines how many full nodes we will have long term. I have no idea how many full nodes is acceptable, 1%? 0.1%? But I am pretty sure 0.0000001% won't cut it.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: mrvision on February 11, 2013, 02:43:55 PM
Why don't we set up a 1kb limit? That way the miners will earn a lot more from fees :) [/ironic]

Oh yes! Because that way people would start making trades off the chain, and miners won't get those 'big fees' they are coveting. There's an optimum where the miners have the biggest earnings and don't encourage people to use whatever other coin (physical or virtual) to trade.

I am all for "let-the-market-decide" elastic algorithms.

Right now i like this.

I think 1MB should be the smallest limit, but maybe I want to accept 4 MB of transactions, for whatever reason, and earn a lot more from fees. Maybe I've got a rig of ASICs and I can process a lot more MB in 10 minutes instead of pushing up the difficulty. So maybe a GPU miner can mine 1MB blocks, and an ASIC miner can mine 20 MB blocks for the same difficulty, having then the same odds of solving the problem.

I am just thinking aloud :D


Title: Re: The MAX_BLOCK_SIZE fork
Post by: misterbigg on February 11, 2013, 04:47:49 PM
Maybe I've got a rig of ASICs and I can process a lot more MB in 10 minutes instead of pushing up the difficulty. So maybe a GPU miner can mine 1MB blocks, and an ASIC miner can mine 20 MB blocks for the same difficulty

The time required to mine the block is independent of the size of the block.



Title: Re: The MAX_BLOCK_SIZE fork
Post by: Ari on February 11, 2013, 05:22:12 PM
As much as I would like to see some sort of constraint on blockchain bloat, if this is significantly curtailed then I suspect S.DICE shareholders will invest in mining.  I suspect that getting support from miners was a large part of the motivation for taking S.DICE public, as there clearly wasn't a need to raise capital.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: zebedee on February 25, 2013, 04:36:34 AM
Why don't we just let miners decide the optimal block size?

If a miner is generating a 1-GB block and it is just too big for other miners, they may simply drop it. That will stop anyone from generating 1-GB blocks because they will become orphans anyway. An equilibrium will be reached and block space will still be scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).
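
A minimal sketch of that default rule (the Block type and verification routine here are simple stand-ins, not the real client code; real verification would run the full script and signature checks):

Code:
#include <chrono>
#include <cstdio>
#include <thread>

// Placeholder block type and verification routine for the sketch only.
struct Block { int numTx; };

bool verifyBlock(const Block& block)
{
    // Simulate verification work roughly proportional to the number of transactions.
    std::this_thread::sleep_for(std::chrono::milliseconds(block.numTx));
    return true;
}

// Proposed default: ignore a block whose verification takes longer than
// 60 seconds while catching up with the chain, or 5 seconds once caught up.
// The thresholds would be user-configurable.
bool acceptBlock(const Block& block, bool catchingUp)
{
    const double limitSeconds = catchingUp ? 60.0 : 5.0;
    auto start = std::chrono::steady_clock::now();
    bool valid = verifyBlock(block);
    std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
    return valid && elapsed.count() <= limitSeconds;
}

int main()
{
    Block b{2500};   // a block with ~2,500 transactions in this toy model
    std::printf("accepted: %s\n", acceptBlock(b, false) ? "yes" : "no");
    return 0;
}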


I'm a bit late to this discussion, but I'm glad to see that an elastic, market-based solution is being seriously considered by the core developers.

It is clearly the right way to go to balance the interests of all concerned parties.  Free markets are naturally good at that.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: misterbigg on February 25, 2013, 04:51:10 AM
It is clearly the right way to go to balance the interests of all concerned parties.  Free markets are naturally good at that.

By this logic, we should leave it up to the free market to determine the block subsidy. And the time in between blocks.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: solex on February 25, 2013, 05:02:54 AM
It is clearly the right way to go to balance the interests of all concerned parties.  Free markets are naturally good at that.

By this logic, we should leave it up to the free market to determine the block subsidy. And the time in between blocks.


And that type of argument takes us nowhere. There have been thousands of comments on the subject and we need to close in on a solution rather than spiral away from one. I have seen your 10 point schedule for what happens when the 1MB blocks are saturated. There is some probability you are right, but it is not near 100%, and if you are wrong then the bitcoin train hits the buffers.

Please consider this and the next posting:
https://bitcointalk.org/index.php?topic=144895.msg1556506#msg1556506

I am equally happy with Gavin's solution which zebedee quotes. Either is better than letting a huge unknown risk become a real event.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: caveden on February 25, 2013, 08:24:37 AM
By this logic, we should leave it up to the free market to determine the block subsidy. And the time in between blocks.

The block subsidy will be determined by the free market once inflation is no longer relevant, iff the block size limit is dropped. Even Bitcoin inflation itself in a sense may one day be determined by the free market, if we start seeing investment assets quoted in Bitcoin being traded with high liquidity: such highly-liquid BTC-quoted assets would end up being used in trades, and would become a flexible monetary aggregate. Fractional reserves is not the only way to do it.

Concerning the time between blocks, there have been proposals of ways to make such a parameter fluctuate according to supply and demand. I think it was Meni Rosomething, IIRC, who once came up with such ideas. Although potentially feasible, that's a technical risk that might not be worth taking. Perhaps some alternative chain will try it one day, and if it really shows itself worthwhile as a feature, people might consider it for Bitcoin, why not. I'm just not sure it's that important, 10 min seems to be fine enough.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: justusranvier on February 28, 2013, 06:57:05 PM
There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).
Using this proposal all nodes could select for themselves what block size they are willing to accept. The only part that is missing is to communicate this information to the rest of the network somehow.

Each node could keep track of the ratio of transaction size to verification time averaged over a suitable interval. Using that number it could calculate the maximum block size likely to meet the time constraint, and include that maximum block size in the version string it reports to other nodes. Then miners could make educated decisions about what size of blocks the rest of the network will accept.
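
A sketch of how a node might derive and advertise such a figure (the class and field names are made up; reporting the number in the version string is the suggestion above):

Code:
#include <cstdint>
#include <cstdio>

// Tracks a running ratio of transaction bytes verified to time spent verifying,
// then converts the node's verification-time budget into a maximum block size.
class VerificationTracker {
public:
    void record(uint64_t bytesVerified, double seconds)
    {
        totalBytes += bytesVerified;
        totalSeconds += seconds;
    }

    // Largest block this node expects to verify within 'budgetSeconds'.
    uint64_t maxAcceptableBlockSize(double budgetSeconds) const
    {
        if (totalSeconds <= 0.0) return 0;
        double bytesPerSecond = totalBytes / totalSeconds;
        return (uint64_t)(bytesPerSecond * budgetSeconds);
    }

private:
    uint64_t totalBytes = 0;
    double totalSeconds = 0.0;
};

int main()
{
    VerificationTracker tracker;
    tracker.record(500000, 0.8);    // e.g. a 500 KB block verified in 0.8 s
    tracker.record(300000, 0.5);
    // The result could then be appended to the version string sent to peers.
    std::printf("advertised max block size: %llu bytes\n",
                (unsigned long long)tracker.maxAcceptableBlockSize(5.0));
    return 0;
}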


Title: Re: The MAX_BLOCK_SIZE fork
Post by: marcus_of_augustus on March 12, 2013, 07:18:54 PM
Watching.

Seems BDB's MAX_LOCK needs to be taken into account also, for backward compatibility.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: solex on March 12, 2013, 09:03:51 PM
Watching.

Seems BDB's MAX_LOCK needs to be taken into account also, for backward compatibility.

Yes. All miners should be migrating to v0.8 as soon as possible (while maintaining default limits), so that the above is no longer a factor.
Edit: stay on 0.7 until 0.8.1 is available.

General question. Is Deepbit too conservative for its own good?  
They are refusing to upgrade from version 0.3. Deepbit, please prove me wrong!



Title: Re: The MAX_BLOCK_SIZE fork
Post by: marcus_of_augustus on March 13, 2013, 02:34:15 AM
Watching.

Seems BDB's MAX_LOCK needs to be taken into account also, for backward compatibility.

Yes. All miners should be migrating to v0.8 as soon as possible (while maintaining default limits), so that the above is no longer a factor.

General question. Is Deepbit too conservative for its own good?  
They are refusing to upgrade from version 0.3. Deepbit, please prove me wrong!



No, you misunderstand the problem and in the process are spreading FUD. 0.8 LevelDB was required to emulate BDB behaviour and it didn't.

Rushing everyone onto 0.8 is asking for problems.

Deepbit has been prudent and a pillar of defending the blockchain and you are pressuring them to do what exactly?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: notme on March 13, 2013, 02:58:28 AM
Watching.

Seems BDB's MAX_LOCK needs to be taken into account also, for backward compatibility.

Yes. All miners should be migrating to v0.8 as soon as possible (while maintaining default limits), so that the above is no longer a factor.

General question. Is Deepbit too conservative for its own good?  
They are refusing to upgrade from version 0.3. Deepbit, please prove me wrong!



Please give the development team time to put together a plan.  If the majority of miners are on 0.8, a single bad actor can cause another fork by making a block with too many transactions for <= 0.7 to handle.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: ArticMine on March 13, 2013, 03:08:36 AM

Please give the development team time to put together a plan.  If the majority of miners are on 0.8, a single bad actor can cause another fork by making a block with too many transactions for <= 0.7 to handle.

+1


Title: Re: The MAX_BLOCK_SIZE fork
Post by: solex on March 13, 2013, 03:16:25 AM

No, you misunderstand the problem and in the process are spreading FUD. 0.8 LevelDB was required to emulate BDB behaviour and it didn't.

Rushing everyone onto 0.8 is asking for problems.

Deepbit has been prudent and a pillar of defending the blockchain and you are pressuring them to do what exactly?

Criticizing 0.8 for not emulating an unknown bug (let alone that it was in 3rd-party software) is itself FUD.
It appears 60% of the network would have recognized the problem block. If more people were prepared to upgrade in a timely manner then it might have been closer to 90% and a minor issue, arguably leaving a better situation than exists now.


Please give the development team time to put together a plan.  If the majority of miners are on 0.8, a single bad actor can cause another fork by making a block with too many transactions for <= 0.7 to handle.

+1

Yes. I agree with that because of where the situation is now.



Title: Re: The MAX_BLOCK_SIZE fork
Post by: marcus_of_augustus on March 13, 2013, 03:34:33 AM

No, you misunderstand the problem and in the process are spreading FUD. 0.8 LevelDB was required to emulate BDB behaviour and it didn't.

Rushing everyone onto 0.8 is asking for problems.

Deepbit has been prudent and a pillar of defending the blockchain and you are pressuring them to do what exactly?

Criticizing 0.8 for not emulating an unknown bug (let alone that it was in 3rd-party software) is itself FUD.


For the last time IT WAS NOT A BUG!

http://www.stanford.edu/class/cs276a/projects/docs/berkeleydb/ref/lock/max.html (http://www.stanford.edu/class/cs276a/projects/docs/berkeleydb/ref/lock/max.html)

0.8 LevelDB as implemented by Mike Hearn (who also propagated the "just bump your block limit" meme with the miners) did not faithfully emulate BDB, which it was minimally required to do.

Like I said, you do not fully understand the problem so are not qualified to comment any further.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: caveden on March 13, 2013, 07:31:43 AM
Criticizing 0.8 for not emulating an unknown bug (let alone that it was in 3rd-party software) is itself FUD.


For the last time IT WAS NOT A BUG!

http://www.stanford.edu/class/cs276a/projects/docs/berkeleydb/ref/lock/max.html (http://www.stanford.edu/class/cs276a/projects/docs/berkeleydb/ref/lock/max.html)

0.8 LevelDB as implemented by Mike Hearn (who also propagated the "just bump your block limit" meme with the miners) did not faithfully emulate BDB, which it was minimally required to do.

Come on, such an obscure limit was not known by anyone in the Bitcoin world up until it blew up yesterday. You may claim it's not a bug on the BDB side*, which is arguable, but it is definitely a bug on the bitcoin implementation side.
Everybody should be able to handle blocks up to 1MB. That was the general agreement, the protocol spec if you will. The particular implementation of the Satoshi client <= 0.7 was not capable of following that protocol specification as it should. 0.8 onward was. If anything, 0.8 is the "correct version". Bringing everybody back to 0.7 was an "emergency plan" since pushing everybody to 0.8 was believed to be much harder to accomplish (and likely truly would be).

* And bug or not, the fact that nobody here even knew about it just shows how much we cannot rely on BDB - not a single person among all the brilliant minds on the core dev team fully understands how this thing works (and neither did Satoshi).
Moving out of BDB is certainly a desirable thing. Now that it imposes an even more crippling block size limit, it's pretty much urgent.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: marcus_of_augustus on March 13, 2013, 08:28:07 AM
Criticizing 0.8 for not emulating an unknown bug (let alone that it was in 3rd-party software) is itself FUD.


For the last time IT WAS NOT A BUG!

http://www.stanford.edu/class/cs276a/projects/docs/berkeleydb/ref/lock/max.html (http://www.stanford.edu/class/cs276a/projects/docs/berkeleydb/ref/lock/max.html)

0.8 levelDB as implemented by Mike Hearn (who also propagated the "just bump your block limit" meme with the miners) did not faithfully emulate BDB, which it was minimally required to do.

Come on, such an obscure limit was not known by anyone in the Bitcoin world up until it blew up yesterday. You may claim it's not a bug on the BDB side*, which is arguable, but it is definitely a bug on the bitcoin implementation side.
Everybody should be able to handle blocks up to 1 MB. That was the general agreement, the protocol spec if you will. The particular implementation of the Satoshi client <= 0.7 was not capable of following such a protocol specification as it should. 0.8 onward was. If anything, 0.8 is the "correct version". Bringing everybody back to 0.7 was an "emergency plan" since pushing everybody to 0.8 was believed to be much harder to accomplish (and likely truly would have been).

* And bug or not, the fact that nobody here even knew about it just shows how much we cannot rely on BDB - not a single person among all the brilliant minds on the core dev team fully understands how this thing works (and neither did Satoshi).
Moving off BDB is certainly a desirable thing. Now with this even more crippling block size limit, it's pretty much urgent.

How can it be a bug if it is a clearly defined behaviour in the documentation of the s/ware dependency?

The fact that the devs (or anyone) seem to have never read the documentation of the standard dependencies is more the worry, in my opinion.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: markm on March 13, 2013, 08:39:59 AM
I was told on #bitcoin-dev that actually the devs have met the BDB configuration numbers before, and to look at db.cpp to see where in the bitcoin code they explicitly set the numbers they want BDB to use.

Also, that they ran into problems with it before.

So supposedly they were not unaware that BDB can be configured. They even confided that the page size BDB uses is by default the block size of the underlying block device (the disk drive, for example).

So from the sound of it they simply had not set the configuration numbers high enough to accommodate all platforms, or maybe all possible sizes of blockchain reorganisation. (During a re-org, apparently, it needs enough locks to deal with two blocks at once in one BDB transaction?)

-MarkM-


Title: Re: The MAX_BLOCK_SIZE fork
Post by: marcus_of_augustus on March 13, 2013, 09:08:28 AM
I was told on #bitcoin-dev that actually the devs have met the BDB configuration numbers before, and to look at db.cpp to see where in the bitcoin code they explicitly set the numbers they want BDB to use.

-MarkM-


Removed incorrect, thnx Jeff.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: markm on March 13, 2013, 09:37:18 AM
Weird, 0.8 uses leveldb not BDB, doesn't it?

Does leveldb use those same calls to set its configuration?

-MarkM-


Title: Re: The MAX_BLOCK_SIZE fork
Post by: jgarzik on March 13, 2013, 09:47:41 AM
Okay, line 83 in db.cpp appears to have been changed from 0.7 to 0.8 ... this is exactly where the incompatibility was introduced.
[...]
So it seems like an unannounced change to an implicit protocol rule.

Incorrect.  0.8 does not use BDB for blockchain indexing.   Those BDB settings you quote are only relevant to the wallet.dat file in 0.8.

The BDB lock limitation simply does not exist in 0.8, because leveldb is used for blockchain indexing, not BDB.



Title: Re: The MAX_BLOCK_SIZE fork
Post by: caveden on March 13, 2013, 01:38:02 PM
How can it be a bug if it is a clearly defined behaviour in the documentation of the s/ware dependency?

The fact that the devs (or anyone) seem to have never read the documentation of the standard dependencies is more the worry, in my opinion.

A non-understood limitation of an implementation dependency does not define the protocol. Under the Bitcoin protocol, blocks up to 1 MB are allowed. That was the consensus, that was what all the available documentation said. The <= 0.7 Satoshi implementation wasn't capable of dealing with such blocks. That implementation was bugged, not 0.8.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: justusranvier on March 13, 2013, 01:48:27 PM
The <= 0.7 Satoshi implementation wasn't capable of dealing with such blocks.
It's worse than that. If < 0.8 implementations all behaved the exact same way you could call their behavior a de facto standard, but since their behavior was not consistent between nodes of the exact same version you can't even say that. The behavior of all implementations prior to 0.8 with regards to the protocol is non-deterministic.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: caveden on March 13, 2013, 01:57:20 PM
It's worse than that. If < 0.8 implementations all behaved the exact same way you could call their behavior a de facto standard,

I'd still call it a bug. Actually, it's still an unsolved bug. We've only worked around it, but it's still there.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Gavin Andresen on March 14, 2013, 03:25:53 AM
How can it be a bug if it is a clearly defined behaviour in the documentation of the s/ware dependency?
Ah, excellent, can you please send me the documentation that says exactly how many locks will be taken by each bdb operation?  I haven't been able to find that.  Thanks!


Title: Re: The MAX_BLOCK_SIZE fork
Post by: alir on March 14, 2013, 03:36:27 AM
Ah, excellent, can you please send me the documentation that says exactly how many locks will be taken by each bdb operation?  I haven't been able to find that.  Thanks!
Now, now, Gavin. Play nice. ;)


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Gavin Andresen on March 14, 2013, 03:47:20 AM
If Mr. Augustus found some documentation that I don't know about, I genuinely want to know about it, because it will save me time. Right now I'm throwing blocks at an instrumented v0.7.2 bitcoind that tells me how many locks are taken, so I can be sure whatever fix we implement will always work.
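
For anyone curious, the general idea is nothing exotic. Something along these lines (just a sketch assuming BDB's DbEnv::lock_stat call and the dbenv object from db.cpp, not the actual instrumentation) would report how many locks an environment has used:
Code:
#include <db_cxx.h>
#include <cstdio>
#include <cstdlib>

// Rough sketch only (not the actual instrumentation): ask the BDB environment
// how many locks are in use and what the high-water mark has been, assuming
// the usual DB_LOCK_STAT fields and the dbenv object from db.cpp.
void PrintLockStats(DbEnv& dbenv)
{
    DB_LOCK_STAT* pStats = NULL;
    if (dbenv.lock_stat(&pStats, 0) == 0 && pStats != NULL)
    {
        printf("locks held right now : %u\n", (unsigned int)pStats->st_nlocks);
        printf("peak locks this run  : %u\n", (unsigned int)pStats->st_maxnlocks);
        printf("configured lock limit: %u\n", (unsigned int)pStats->st_maxlocks);
        free(pStats); // lock_stat() allocates the struct; the caller frees it
    }
}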


Title: Re: The MAX_BLOCK_SIZE fork
Post by: alir on March 14, 2013, 04:03:06 AM
I don't believe he does, Mr. Andresen.

But now that the bug is resolved, will there still be an update to allow increased block sizes? I'm not understanding how the team intends to allow older versions to accept larger blocks from 0.8+.

Will a full-scale 0.8 update be forced?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Atruk on March 14, 2013, 05:23:15 AM
How can it be a bug if it is a clearly defined behaviour in the documentation of the s/ware dependency?
Ah, excellent, can you please send me the documentation that says exactly how many locks will be taken by each bdb operation?  I haven't been able to find that.  Thanks!


Per operation is difficult to calculate. I don't know much about how the bitcoin code interacts with BDB, so much of what I am going to write should be taken with a grain of salt. In the next few weeks I plan on printing out chunks of the Bitcoin source code and trying to understand how it works with the database as a personal enrichment exercise, and I'll share anything I find that might be useful. I can't promise anything material, but when my plate clears I can try to make an effort (for whatever all of those qualifiers are worth).

Here's Oracle's rather worthless documentation of locking and blocking in BDB. (http://docs.oracle.com/cd/E17076_02/html/gsg_txn/CXX/blocking_deadlocks.html) Normally there's one lock per database transaction, but it can get messy because of lock contention. What it sounds like you are looking for, though, is better documentation. Such a thing might exist, but Oracle...

A real max block size fix is probably going to have to involve picking a distant future deadline and hardforking things over to LevelDB. This might involve the painful experience of making a LevelDB version of 0.3.x to accompany newer clients, but LevelDB and BDB are different enough in their quirks that leaving BDB for LevelDB will probably mean abandoning BDB wholesale. How many years have gone by without a good WordPress port for PostgreSQL?

The deadline to move to a LevelDB bitcoin client might have to be set in the very distant future, maybe even at the next halving day. BDB isn't documented well enough for its quirks to be accounted for in any other database software for all but simple cases (Oracle probably likes it this way, and BDB's lock contention is probably a selling point for their other database products), and if Bitcoin has demonstrated anything this far into its life, it is that Bitcoin attracts fringe use cases.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: markm on March 14, 2013, 07:23:00 AM
This simply shows yet again that, sorry, the implementation is not and cannot be the specification.

That functioning in the heterogeneous wild might be a "hard problem" might suck, but it is probably really a stronger call for standards and specifications than functioning in the land of fairies and unicorns would be.

The more sure it is that we'll never never actually be able to be fully up to spec, the more important it probably is to actually have a spec to aspire to.

If we don't even have "second star to the right and straight on 'til morning" things might not bode well, but if we do have it, ahhh, then, the land of never never might not actually be so very bad.

So please, let's just admit we aren't up to spec and have never been up to spec, and focus on getting there instead of pretending that "the world is after all perfect, if requiring a little adjustment" is a bastion of software engineering.

(Just because it is true doesn't mean it is computer science. :))

-MarkM-


Title: Re: The MAX_BLOCK_SIZE fork
Post by: caveden on March 14, 2013, 07:26:19 AM
But now that the bug is resolved,

It is not. It's been worked around. But we still have this bug limiting blocks to 5k transactions tops, when they should be able to handle more.

will there still be an update to allow increased block sizes? I'm not understanding how the team intends to allow older versions to accept larger blocks from 0.8+.

Will a full-scale 0.8 update be forced?

I don't see any other way around it.

The deadline to move to a LevelDB bitcoin client might have to be set in the very distant future, maybe even at the next halving day.

hehe, dude, we'll have to abandon BDB in a few weeks. Or at least abandon this configuration which limits it to 5k transactions. We're already bumping up against it. The only reason we had this fork was precisely because 5k tx per block is already not enough sometimes.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Atruk on March 14, 2013, 07:43:20 AM
The deadline to move to a LevelDB bitcoin client might have to be set in the very distant future, maybe even at the next halving day.

hehe, dude, we'll have to abandon BDB in a few weeks. Or at least abandon this configuration which limits it to 5k transactions. We're already bumping up against it. The only reason we had this fork was precisely because 5k tx per block is already not enough sometimes.

The problem with BDB is actually worse than that... 5k database transactions isn't necessarily 5k bitcoin transactions. BDB is a monster that needs to be killed with fire.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: caveden on March 14, 2013, 07:49:54 AM
Yes, my point was that we definitely cannot wait until the next halving day to kill this beast, or it will kill us instead. ;)


Title: Re: The MAX_BLOCK_SIZE fork
Post by: 2112 on March 14, 2013, 10:34:57 AM
If Mr. Augustus found some documentation that I don't know about, I genuinely want to know about it, because it will save me time. Right now I'm throwing blocks at an instrumented v0.7.2 bitcoind that tells me how many locks are taken, so I can be sure whatever fix we implement will always work.
The number of locks required depends not only on the number of transactions in the block (N) but also on the size of the blkindex.dat (S). Even if you have a fixed upper limit on N, S grows without an upper bound.

The most common requested lock is "read with intent to write". As the process descends down the B-tree it takes one of those locks for each page traversed until it reaches the leaf page. Then it promotes the last lock to the "write lock". So the number of locks required to insert a single Bitcoin transaction is O(H) where H is the height of the B-tree. This in turn is O(log_b S) where b is the number of k,v-pairs per page of the B-tree.

So for a single database transaction which consists of all Bitcoin transactions in the block the number of locks required is O(N * log_b S). This number has no upper bound; it increases logarithmically with the size of blkindex.dat.
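
To put the same estimate in code, here is a back-of-the-envelope sketch; all three inputs are illustrative assumptions, not measured values:
Code:
#include <cmath>

// Back-of-the-envelope version of the estimate above: roughly one lock per
// B-tree level for every transaction inserted, i.e. O(N * log_b S).
// Assumes nIndexEntries > 1 and nEntriesPerPage > 1; inputs are illustrative.
unsigned int EstimateLocksForBlock(unsigned int nTxInBlock,   // N: transactions in the block
                                   double nIndexEntries,      // S: entries in blkindex.dat
                                   double nEntriesPerPage)    // b: k,v-pairs per B-tree page
{
    double dTreeHeight = std::ceil(std::log(nIndexEntries) / std::log(nEntriesPerPage));
    return nTxInBlock * (unsigned int)dTreeHeight;
}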

I'm sorry I can't give a good reference. It is in some of Springer's series "Lecture Notes in Computer Science", something related to concurrency and deadlock detection/avoidance in database systems. The only thing I remember now is that it was one of the thickest LNiCS volumes on the whole shelf with LNiCS.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: caveden on March 14, 2013, 10:49:07 AM
Argh... BDB doesn't allow for an unlimited number of locks? (how many your system can handle, that is)
It really needs to specify a limit?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: markm on March 14, 2013, 11:08:19 AM
Argh... BDB doesn't allow for an unlimited number of locks? (how many your system can handle, that is)
It really needs to specify a limit?

Well, legend has it that once upon a time some systems had more things to do than just lock pages of databases, so, in short, yes.

Much as we have a maximum block size... ;)

So hey, y'know, maybe that darn magic number actually is a darn specification not an implementation artifact after all?

No, wait... it merely means that it is not only the maximum size of the blocks but also the maximum size of the block index at a given time (as measured in block height) that is a crucial limit to specify; and, happily enough, it turns out that the max size of a block has a relatively predictable effect upon the size of the block index at any given "height"?

-MarkM-

EDIT: Do we need to account for orphan blocks, too? Thus need to specify an orphan rate tolerance?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: marcus_of_augustus on March 14, 2013, 08:38:29 PM
How can it be a bug if it is a clearly defined behaviour in the documentation of the s/ware dependency?
Ah, excellent, can you please send me the documentation that says exactly how many locks will be taken by each bdb operation?  I haven't been able to find that.  Thanks!


Well that's implementation specific, as the tutorial states (posted above); sorry I can't be more helpful, but I am looking into it so you'll be the first to know if I find anything.

Pre-0.8 bitcoin specifies the limits for its BDB implementation in db.cpp, lines 82 and 83

Code:
dbenv.set_lk_max_locks(10000);
dbenv.set_lk_max_objects(10000);

The main point is that this is most definitely a limitation, but not a bug nor an "unknown behaviour". Unfortunately it is a proxy limitation on bitcoin block sizes, only loosely correlated with block data size, that we will have to live with for now, and have already lived with for some time.
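
Purely as an illustration of where that headroom knob lives (the figure below is made up, not from any release), the same two calls could simply be given larger values before dbenv.open() is called:
Code:
// Illustrative only: larger ceilings for locks and lock objects, set before the
// environment is opened. 400000 is an arbitrary example figure, not a tested value.
dbenv.set_lk_max_locks(400000);
dbenv.set_lk_max_objects(400000);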

Maybe basing transaction fees on the number of locks a transaction uses is another angle for a solution? Loosely specifying "1MB" as the block limit is in fact abstracting a data size away from the actual physical configuration of how that data is stored/accessed, which is what actually defines the total transaction cost, including CPU cycles, disk reads/writes and storage.



Title: Re: The MAX_BLOCK_SIZE fork
Post by: markm on March 14, 2013, 10:51:03 PM
Excellent reasoning... If bitcoin was about locking databases that would be just the very kind of quantification that should go into its specification.

Unfortunately, bitcoin is a little more about attaining, storing and transmitting a proven state of consensus than about controlling how many entities can access how many bits of it, and which bits, exactly, for that matter. If number of simultaneous accesses to the data enters into the spec it ought more to lean toward "as many as possible, even if that means having more than one complete copy of the entire dataset in existence on the planet at any given moment".

Basically, bitcoin is intended to give lots of entities access to the same data, so locks are actually anathema to its entire purpose and goal.

Thus while acknowledging the brilliance of your solution I find myself forced to say sorry but it still seems to me that number of database locks in a single corporation or other entity's living executing implementation or copy of an implementation is one of the many things that should if at all possible be abstracted away by the second star to the right and straight on 'til morning specification.

-MarkM-


Title: Re: The MAX_BLOCK_SIZE fork
Post by: zebedee on March 15, 2013, 01:38:04 AM
Excellent reasoning... If bitcoin was about locking databases that would be just the very kind of quantification that should go into its specification.

Unfortunately, bitcoin is a little more about attaining, storing and transmitting a proven state of consensus than about controlling how many entities can access how many bits of it, and which bits, exactly, for that matter. If number of simultaneous accesses to the data enters into the spec it ought more to lean toward "as many as possible, even if that means having more than one complete copy of the entire dataset in existence on the planet at any given moment".

Basically, bitcoin is intended to give lots of entities access to the same data, so locks are actually anathema to its entire purpose and goal.

Thus while acknowledging the brilliance of your solution I find myself forced to say sorry but it still seems to me that number of database locks in a single corporation or other entity's living executing implementation or copy of an implementation is one of the many things that should if at all possible be abstracted away by the second star to the right and straight on 'til morning specification.

-MarkM-

Not only that, but it's easy to imagine implementations where verification is done in a single-threaded process that is isolated and does nothing else (and hence locks of any kind are entirely unnecessary), and any persistent db is maintained separately.

It's a shame the original bitcoin implementation was a monolithic piece of poor code.  The lack of clean separation of responsibilities is really hindering its progress and testing.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: marcus_of_augustus on March 15, 2013, 04:55:33 AM
Quote
Maybe basing transaction fees on the number of locks a transaction uses is another angle for a solution? Loosely specifying "1MB" as the block limit is in fact abstracting a data size away from the actual physical configuration of how that data is stored/accessed, which is what actually defines the total transaction cost, including CPU cycles, disk reads/writes and storage.

Quote
Excellent reasoning... If bitcoin was about locking databases that would be just the very kind of quantification that should go into its specification.

Unfortunately, bitcoin is a little more about attaining, storing and transmitting a proven state of consensus than about controlling how many entities can access how many bits of it, and which bits, exactly ...

Quite. My suggestion was just an example, a hint towards engaging in some lateral thinking on the exact nature of the underlying problem we are facing here. It could be mem. locks (I hope not), it could be physical RAM space, it could be mem. accesses, CPU cycles, or total HD storage space occupied network-wide.

Point being, we are searching for a prescription for the physical atomic limitation, as a network rule, that needs to be priced by the market of miners validating transactions and the nodes storing them, so that fees are paid and resources allocated correctly in such a way that the network scales, in the way that we already know that it theoretically can.

If we are going to hard fork, let's make sure it is for a justifiable, quantifiable reason. Or we could be merely embarking on a holy grail pursuit of the 'best' DB upgrades to keep scaling up, endlessly.

Bitcoin implementations could be DB agnostic if the protocol were to use the right metric. Jeff Garzik has some good ideas like a "transactions accessed" metric, as I'm sure others do also. Maybe some kind of scale-independent "transactions accessed" block limit and fee scheduling rule?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: bbulker on March 15, 2013, 10:05:37 AM
Have only read OP post, but are you saying that if everyone in the world tried to do a transaction right now that it would take 30+ years to verify them all (assuming the hardware and software remained unchanged)? Wow!  :o


Title: Re: The MAX_BLOCK_SIZE fork
Post by: markm on March 15, 2013, 10:35:19 AM
Have only read OP post, but are you saying that if everyone in the world tried to do a transaction right now that it would take 30+ years to verify them all (assuming the hardware and software remained unchanged)? Wow!  :o

Ha ha, nice way of looking at it. I won't presume to check your math, but really, even if you dropped or picked up an order of magnitude, that still sounds like it's lucky for us that not everyone in the world is on the internet yet.

-MarkM-


Title: Re: The MAX_BLOCK_SIZE fork
Post by: jtimon on March 16, 2013, 01:45:01 PM
I agree with those who push for a formal specification for the protocol instead of letting the reference implementation be the protocol. It is hard work, but the way to go IMO.
Miners will have an incentive to upgrade just to be closer to the specifications and have less risk of being on the "wrong side" of the fork, as the newest version of the reference implementation will probably always be the one that first implements the newest version of the spec. Just as cpython is the reference implementation for python, the specification of the language. Yes, cpython can sometimes fail to comply with python, and those are bugs to be fixed.
Actually, I thought we had something like that already with the wiki.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: bg002h on March 17, 2013, 02:08:45 AM
It's not the miners who make the call on the max_block_size issue, right?  I mean, all the miners could gang up and say: "we're not going to process blocks with more than 2 transactions," if they wanted to.  It's the validating nodes that make the call as far as what will be a valid block.  If all the nodes ganged up, they could change the limit as well.

I think miners should keep their own market-determined limit on transactions.  If I was a big pool, I'd make people pay for access to my speedy blocks.  They're being nice processing no-fee tx's as it is.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: jgarzik on March 17, 2013, 04:38:55 AM
It's not the miners who make the call on the max_block_size issue, right?  I mean, all the miners could gang up and say: "we're not going to process blocks with more than 2 transactions," if they wanted to.  It's the validating nodes that make the call as far as what will be a valid block.  If all the nodes ganged up, they could change the limit as well.

Correct.

Any miner that increases MAX_BLOCK_SIZE beyond 1MB will self-select themselves away from the network, because all other validating nodes would ignore that change.

Just like if a miner decides to issue themselves 100 BTC per block.  All other validating nodes consider that invalid data, and do not relay or process it further.
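
In code terms the rule being described is nothing more than a size check at validation time. A rough sketch, not the literal client code:
Code:
// Sketch of the validation-side rule: any block whose serialized size exceeds
// the hard limit is simply treated as invalid, so it is neither accepted nor
// relayed, and a lone miner who raises the constant only orphans himself.
static const unsigned int MAX_BLOCK_SIZE = 1000000;

bool CheckBlockSize(unsigned int nSerializedBlockSize)
{
    if (nSerializedBlockSize > MAX_BLOCK_SIZE)
        return false; // reject: ignore the block, do not build on it
    return true;
}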



Title: Re: The MAX_BLOCK_SIZE fork
Post by: commonancestor on March 17, 2013, 11:47:35 AM
Certainly from a software engineering point of view, medium-term scalability is a trivial problem. An extra zero in the
Code:
static const unsigned int MAX_BLOCK_SIZE = 1000000;
line would be fine for a good while.

Yes please, 10 megabytes per block is the right answer.

Also 100 MB may be considered in the future, but for some users it would be lots of traffic these days.
Also it would be nice to have the old tx pruning function some day as the database starts growing faster.

So... what happens then?  What is the method for implementing a hard fork?  No precedent, right?  Do we have a meeting?  With who?  Vote?  Ultimately it’s the miners that get to decide, right?  What if the miners like the 1MB limit, because they think the imposed scarcity of blockchain space will lead to higher transaction fees, and more bitcoin for them?  How do we decide on these things when nobody is really in charge?  Is a fork really going to happen at all?

Actually there is a hard fork in progress right now: there is a database locking glitch in <=v0.7.2 so everyone needs to upgrade to v0.8.1 by 15/May/2013.

Increasing block size does not seem significantly different.
First a block number is selected from which the increased block size applies. For example block >=261840, which is expected around Oct 2013.
Then - a few months before Oct 2013 - this logic is implemented in the full-node clients - Satoshi client, bitcoinj, ... - and people are asked to upgrade, or have their old clients stop working after Oct 2013.
That's all.
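
As a sketch, the rule change itself would be tiny. The height and the new size below are just the example numbers from this post, not an agreed plan:
Code:
// Sketch of a height-gated size rule: blocks before the fork height keep the
// old 1 MB limit, blocks at or after it get the larger one. Example numbers only.
static const unsigned int OLD_MAX_BLOCK_SIZE = 1000000;
static const unsigned int NEW_MAX_BLOCK_SIZE = 10000000;
static const int SIZE_FORK_HEIGHT = 261840;

unsigned int GetMaxBlockSize(int nHeight)
{
    return (nHeight >= SIZE_FORK_HEIGHT) ? NEW_MAX_BLOCK_SIZE : OLD_MAX_BLOCK_SIZE;
}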


Title: Re: The MAX_BLOCK_SIZE fork
Post by: mp420 on March 17, 2013, 05:00:29 PM
Certainly from a software engineering point of view, medium-term scalability is a trivial problem. An extra zero in the
Code:
static const unsigned int MAX_BLOCK_SIZE = 1000000;
line would be fine for a good while.

Yes please, 10 megabytes per block is the right answer.


How about no? Making it 10MB would just necessitate another hard fork in the future. We should have as few hard forks as possible, so make it dynamic somehow so that that part of the protocol need not ever be changed again.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: commonancestor on March 17, 2013, 06:32:28 PM
How about no? Making it 10MB would just necessitate another hard fork in the future. We should have as few hard forks as possible, so make it dynamic somehow so that that part of the protocol need not ever be changed again.

One of the key elements of Bitcoin is decentralization via the P2P network. Average Joe needs to be able to run a full node. In my opinion 10MB blocks (some 100MB per hour) are acceptable for average Joe these days, but 100MB blocks (some 1GB per hour) are a bit too much now (maybe ok in 1-2 years). Unfortunately there is no reliable indicator for Bitcoin to know what current network speeds and hard-disk sizes are. If tying it to past results, big miners could game such a system. I can't see any good dynamic solution. Imho if it can have one hard fork now, then it can have another after 1-2 years. Everybody understands what replacing 1MB with 10MB means, it's no rocket science.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: markm on March 17, 2013, 06:52:57 PM
Everybody understands what replacing 1MB with 10MB means, it's no rocket science.

Yeah, it's the same kind of math as replacing $10 bitcoins with $100 bitcoins. :P

But wait, we're only at $50! Maybe try 5MB ?

-MarkM-


Title: Re: The MAX_BLOCK_SIZE fork
Post by: ChristianK on April 19, 2013, 12:05:42 PM
Quote
Yes, they are. This is what being a developer means in this context: that you are a servant. A slave, if you prefer that terminology. One who obeys. An inferior. A steward. Nobody, politically speaking. I'm running out of alternative ways to put this, but I would hope you get the idea.
I think you've got that wrong. Being a developer in no way implies that you are a servant.

Lead developers in open-source projects get titles such as benevolent dictator.

In the case of bitcoin, the lead developer gets paid by the foundation, and the foundation has a bunch of important stakeholders in it. Together the foundation probably has the political power to do anything with bitcoin that it likes, whether or not you approve.
Quote
I agree with those who push for a formal specification for the protocol instead of letting the reference implementation be the protocol. It is hard work, but the way to go IMO.
Are you willing to pay for that hard work to be done?


Title: Re: The MAX_BLOCK_SIZE fork
Post by: johnyj on April 19, 2013, 06:50:29 PM
I believe it is good to have the possibility of future changes in the protocol, since no one can predict the future from today's environment. But then some kind of consensus-based voting/poll mechanism should become a practice


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Anon136 on April 19, 2013, 07:06:32 PM
I believe it is good to have the possibility of future changes in the protocol, since no one can predict the future from today's environment. But then some kind of consensus-based voting/poll mechanism should become a practice

this is de facto required for the fork to be adopted. If there is not enough consensus then the devs' attempt to fork the chain will fail all on its own.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: warpio on April 19, 2013, 07:25:07 PM
If the BTC chain can't be successfully forked, there's always the option of starting a new, entirely separate cryptocurrency with the same rules as BTC but with a higher block size... Then in the event that BTC can't handle its transaction volume, people will naturally want to move to this new altcoin, until eventually the majority have switched over, and that new altcoin becomes the new de facto Bitcoin.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Fry on April 19, 2013, 11:44:58 PM
Why don't we just let miners decide the optimal block size?

If a miner is generating a 1-GB block and it is just too big for other miners, other miners may simply drop it. It will just stop anyone from generating 1-GB blocks because those will become orphans anyway. An equilibrium will be reached and block space will remain scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).
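
A minimal sketch of that rule, using the suggested defaults; the function and its inputs are hypothetical, standing in for however a node times its own verification:
Code:
// Hypothetical sketch of the time-to-verify acceptance rule described above.
// nVerifySeconds is however long this node took to verify the block.
static const double VERIFY_LIMIT_CATCHING_UP = 60.0; // seconds, while syncing the chain
static const double VERIFY_LIMIT_CAUGHT_UP   = 5.0;  // seconds, once fully caught up

bool AcceptBlockByVerifyTime(double nVerifySeconds, bool fCatchingUp)
{
    double nLimit = fCatchingUp ? VERIFY_LIMIT_CATCHING_UP : VERIFY_LIMIT_CAUGHT_UP;
    return nVerifySeconds <= nLimit; // over the limit: ignore the block, do not relay it
}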





This would make it very easy for a miner to fork the blockchain.
He would just have to create a Block that's so large that it gets rejected by half of the Network.
He could then fork one branch of the fork again.
This branch could be forked again and so on... until the mining power on one branch is so low that he could perform a 51% Attack on that branch.


Title: Re: The MAX_BLOCK_SIZE fork
Post by: Fry on April 19, 2013, 11:50:01 PM

This rule would apply to blocks until they are 1 deep, right? Do you envision no check-time or size rule for blocks that are built on? Or a different much more generous rule?


Even if this rule only applies as long as the difference is one block:
what if both branches of the fork have the same depth?
And how could one node know for sure it is on the shorter branch if it cannot check the blocks of the other branch because they are too large to be transferred or checked?