Bitcoin Forum

Bitcoin => Development & Technical Discussion => Topic started by: amincd on July 25, 2014, 05:55:38 PM



Title: Share your ideas on what to replace the 1 MB block size limit with
Post by: amincd on July 25, 2014, 05:55:38 PM
I think the biggest obstacle to mass adoption of Bitcoin is the 1 MB block size limit.

It's a major source of uncertainty for the Bitcoin economy because, on the one hand:

Bitcoin can't achieve mass adoption and global currency status until the limit is lifted, because 7 transactions per second (the transaction rate at which blocks reach the 1 MB size limit) is minuscule for any global payment network or currency

and on the other hand, a change in the protocol to lift the 1 MB block size limit portends many risks, the biggest two of which are:

  • a split in the community leading to the Bitcoin blockchain being forked
  • poor bloat control leading to garbage being dumped into the blockchain by malicious actors, making it too costly to run a full node for all but the largest players

I think we should have more discussion about potential replacements for the current block size limit, in order to get us closer to a solution.

Some might argue that we should wait until we are closer to the 1 MB block limit before discussing it, but consider that from May 2012 to May 2013, Bitcoin's transaction volume increased almost tenfold.

If we see similar growth in transaction volume, we would reach the block limit in a matter of four to five months (average block size is currently around 240 KB, meaning volume can grow roughly four-fold before hitting the limit). And then what happens? The uncertainty hangs over future Bitcoin development.

For my part, I think the best solution is a two part one.

For the first part, we should eliminate the block size limit altogether, as Gavin Andresen and Mike Hearn advocate. If a miner creates a block that is too big, the other miners will simply reject it. This would not be a protocol-level rule, but it would be enforced as if it were: any miner whose default block size limit is not accepted by at least 50% of the network hashing rate will eventually see all of their blocks and block rewards orphaned, so they would have an incentive to conform to the most common limit.

For the second part, miners should adopt a rule whereby their block size limit tracks the difficulty. This is a simple construction that will allow Bitcoin to scale as the economic value of the network increases. It's not perfect, but then no solution is, and between imperfect solutions, simpler ones are better.
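To make the second part concrete, here is a rough sketch of how a miner's policy could tie its block size limit to difficulty. The linear scaling, the anchor constants, and all names are my own illustration, not part of any concrete proposal:

```python
# Hypothetical sketch: a miner policy where the (soft) block size limit
# scales with difficulty. BASE_LIMIT_BYTES and BASE_DIFFICULTY are
# illustrative anchor points, not values from any actual proposal.

BASE_LIMIT_BYTES = 1_000_000      # 1 MB starting point
BASE_DIFFICULTY = 50_000_000.0    # difficulty at the moment the rule activates

def soft_block_size_limit(current_difficulty: float) -> int:
    """Scale the accepted block size linearly with difficulty.

    The idea: difficulty is a rough proxy for the economic value of the
    network, so the limit grows (or shrinks) along with it.
    """
    ratio = current_difficulty / BASE_DIFFICULTY
    return int(BASE_LIMIT_BYTES * ratio)

def accept_block(block_size: int, current_difficulty: float) -> bool:
    # Each miner enforces the limit locally by refusing to build on
    # oversized blocks; no protocol-level rule is required.
    return block_size <= soft_block_size_limit(current_difficulty)
```

Linear scaling is only the simplest choice; any monotone function of difficulty would express the same idea.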

If you have an idea on what to replace the 1 MB block size limit with, please post it here.

Edit:

Gmaxwell makes some great points, which I'll include in the OP for visibility:

Imagine— you want your message to be read by dozens or hundreds of people— consuming a few minutes of their valuable time each. It makes sense to spend quite a few minutes making sure you are well informed first, considering how much of other people's time your message will consume.

In particular, I think it's especially unhelpful when people make posts which make it clear that they don't understand that there isn't a free lunch here. In particular, I think any productive post will have been made understanding the following points:

  • Blocksize has a trade-off with decentralization. If verifying the blockchain is made expensive (relative to hardware and bandwidth costs), then past some limit Bitcoin becomes a centralized system where everyone is economically forced to trust some consortium of large miners— which are themselves more efficient if centralized since they can just verify once, instead of verifying for themselves. (If the economic majority is trusting and not verifying, you need to also do the same so you don't end up split from the other users of the system.)
  • Bitcoin isn't secure unless there is income to pay to apply computation to the honest chain (and thus far the alternatives appear not clearly workable (https://download.wpsoftware.net/bitcoin/pos.pdf)); we argue that once the subsidy is gone transaction fees will support the security. But the existence of a market for transaction fees requires a degree of scarcity to make the rational price non-zero and to encourage efficient use. Just like Bitcoin itself wouldn't be valuable if everyone had access to infinite Bitcoins, our incentives require a degree of scarcity of blockspace.
  • Bitcoin currency throughput can be increased to arbitrary levels without increasing blocksizes, especially if you're willing to make decentralization tradeoffs. Importantly, handling high volume transactions in other ways than expressing each and every one in the global bitcoin ledger can help avoid pulling down the available security for all transactions just because a large volume of low value transactions need the throughput and can accept the lower security. Work in this space has been under-developed, but I'm not aware of anyone disagreeing with the broad possibilities here. Because of the lack of need until now it's only recently become possible to raise substantial funding for work in this space.

None of this is to say that it's not an interesting subject to discuss (though it has been discussed in depth before), but it's at least my view that posts which are unaware of these points are unlikely to be productive. If you don't understand what I'm saying in these points, you need to read up more (or feel free to contact me in PM to talk about them one on one before taking the stage yourself).

The Bitcoin system exists in a careful and somewhat subtle balance between two extremes: one where it is too costly to transact in, thus not valuable, or one where it is too costly to verify and so it offers little to no trustlessness advantage over traditional systems (which have a much more efficient and scalable design, made possible in part because they are not attempting to be trustless). Like most engineering tradeoff discussions, every choice has ups and downs.

Also, you can review some previous discussions on the 1 MB block size limit in these links:

https://bitcointalk.org/index.php?topic=1865.0  Block size limit automatic adjustment (one of the earliest discussions on it, from 11/2010)

https://bitcointalk.org/index.php?topic=140233.0  The MAX_BLOCK_SIZE fork

https://bitcointalk.org/index.php?topic=144895.0  How a floating blocksize limit inevitably leads towards centralization

https://bitcointalk.org/index.php?topic=96097.0  Max block size and transaction fees


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: ArticMine on July 25, 2014, 07:33:59 PM
My suggestion, to get this discussion started, is a dynamic block size limit that is based on the following two parameters:

1) The median size of a set of previous blocks. A good example is the model of the CryptoNote coins, for example Monero (XMR): https://cryptonote.org/inside.php
2) We could also add the requirement that for an increase in the block size limit the difficulty must be rising, and for a decrease the difficulty must be falling.
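As a sketch of how these two parameters could combine (the window size, the 2x growth cap, and all names are illustrative guesses; CryptoNote's actual formula differs):

```python
from statistics import median

# Hypothetical sketch of the two-parameter rule above. WINDOW and the 2x
# growth factor are illustrative, not CryptoNote's actual parameters.

WINDOW = 100  # number of recent blocks whose sizes feed the median

def next_block_size_limit(recent_sizes: list[int],
                          current_limit: int,
                          difficulty_rising: bool,
                          difficulty_falling: bool) -> int:
    """The limit follows the median of recent block sizes, but may only
    move up while difficulty is rising and only move down while it is
    falling (parameter 2 gates parameter 1)."""
    target = 2 * int(median(recent_sizes[-WINDOW:]))
    if target > current_limit and difficulty_rising:
        return target
    if target < current_limit and difficulty_falling:
        return target
    return current_limit
```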


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: tsoPANos on July 25, 2014, 07:49:13 PM
Well, I'm not a bitcoin expert, but here's an idea.

I think the limit should be removed to allow the network to achieve mass adoption.

Bitcoin devs should:
Set a date, like January 1, 2015, and declare that a new client with no limit will be released on that date.
Also, make every client released from now on have a timestamp-based limit (i.e. change the limit automatically depending on the date).
As the clock hits 1/1/2015, hopefully more than 50% of the network will have updated to timestamp-based clients.
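The timestamp-based limit described above could look something like this (the flag date mirrors the post; everything else is my illustration):

```python
import datetime

# Hypothetical sketch of a timestamp-scheduled limit: clients released
# today enforce 1 MB until a flag day, then drop the limit entirely.
# The date and values mirror the post above, not any real client.

FLAG_DAY = datetime.datetime(2015, 1, 1, tzinfo=datetime.timezone.utc)
LEGACY_LIMIT = 1_000_000  # 1 MB

def max_block_size(block_time: datetime.datetime) -> float:
    """Return the block size limit in effect at a given block timestamp."""
    if block_time < FLAG_DAY:
        return LEGACY_LIMIT
    return float("inf")  # no limit once the flag day has passed
```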

Well, I hope you get the point.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: gmaxwell on July 25, 2014, 08:09:30 PM
This thread is beginning to just rehash the discussion from the several prior ones; please take the time to search the forum a bit, read, and contemplate before replying.

Imagine— you want your message to be read by dozens or hundreds of people— consuming a few minutes of their valuable time each. It makes sense to spend quite a few minutes making sure you are well informed first, considering how much of other people's time your message will consume.

In particular, I think it's especially unhelpful when people make posts which make it clear that they don't understand that there isn't a free lunch here. In particular, I think any productive post will have been made understanding the following points:

  • Blocksize has a trade-off with decentralization. If verifying the blockchain is made expensive (relative to hardware and bandwidth costs), then past some limit Bitcoin becomes a centralized system where everyone is economically forced to trust some consortium of large miners— which are themselves more efficient if centralized since they can just verify once, instead of verifying for themselves. (If the economic majority is trusting and not verifying, you need to also do the same so you don't end up split from the other users of the system.)
  • Bitcoin isn't secure unless there is income to pay to apply computation to the honest chain (and thus far the alternatives appear not clearly workable (https://download.wpsoftware.net/bitcoin/pos.pdf)); we argue that once the subsidy is gone transaction fees will support the security. But the existence of a market for transaction fees requires a degree of scarcity to make the rational price non-zero and to encourage efficient use. Just like Bitcoin itself wouldn't be valuable if everyone had access to infinite Bitcoins, our incentives require a degree of scarcity of blockspace.
  • Bitcoin currency throughput can be increased to arbitrary levels without increasing blocksizes, especially if you're willing to make decentralization tradeoffs. Importantly, handling high volume transactions in other ways than expressing each and every one in the global bitcoin ledger can help avoid pulling down the available security for all transactions just because a large volume of low value transactions need the throughput and can accept the lower security. Work in this space has been under-developed, but I'm not aware of anyone disagreeing with the broad possibilities here. Because of the lack of need until now it's only recently become possible to raise substantial funding for work in this space.

None of this is to say that it's not an interesting subject to discuss (though it has been discussed in depth before), but it's at least my view that posts which are unaware of these points are unlikely to be productive. If you don't understand what I'm saying in these points, you need to read up more (or feel free to contact me in PM to talk about them one on one before taking the stage yourself).

The Bitcoin system exists in a careful and somewhat subtle balance between two extremes: one where it is too costly to transact in, thus not valuable, or one where it is too costly to verify and so it offers little to no trustlessness advantage over traditional systems (which have a much more efficient and scalable design, made possible in part because they are not attempting to be trustless). Like most engineering tradeoff discussions, every choice has ups and downs.

With respect to the suggestion to use the scheme from bytecoin (and its forks like monero and fantomcoin), since that didn't exist when the prior threads were active, I might as well give my thoughts on it:

Sadly, what bytecoin does is objectively broken. (Had their paper had a modicum of peer review this would have been noticed</whine>) My understanding is that it has a limitless blocksize (well, at some point I presume nodes run out of RAM, so not really limitless, just "undefined") with a median operation such that a miner cannot produce a block larger than a median of recent blocks without throwing out a fraction of their fees that is quadratically related to the size: the bigger the block, the more coins you must throw out. Unfortunately, it's perfectly possible to pay miner fees 'out of band', e.g. author a transaction with zero fee that pays the miner directly (some Bitcoin mining pools like Eligius already do this), so this cannot work as a control on blocksize in the long run. It also fails to control for the incentives of larger centralized pools, who can justify beefier nodes due to the mining income, in pushing everyone else out. I understand that monero will be hardforking soon, in part to rein in the blockchain growth (which has grown almost 2GB in its 3 months of operation in spite of far lower exposure than Bitcoin), though also because the median is too constraining when there haven't been a lot of transactions recently.
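For illustration, a minimal sketch of the kind of penalty scheme described here, and of why it fails; the exact constants and cutoff are made up, not Bytecoin's actual formula:

```python
# Hypothetical sketch of the penalty scheme described above: blocks
# larger than the median of recent blocks forfeit a quadratically
# growing share of the block reward. Constants are illustrative only.

def penalized_reward(base_reward: float, block_size: int, median_size: int) -> float:
    if block_size <= median_size:
        return base_reward          # at or below the median: no penalty
    if block_size > 2 * median_size:
        return 0.0                  # far oversized: reward fully forfeited
    excess = block_size / median_size - 1.0   # fractional excess, in (0, 1]
    return base_reward * (1.0 - excess ** 2)  # quadratic penalty

# The flaw gmaxwell points out: the penalty only touches the in-band
# reward. A transaction can carry zero fee and instead pay the miner
# through a direct output, so the "cost" of a big block can be pushed
# entirely onto out-of-band payments the penalty never sees.
```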

Mod note: I'm going to remove posts that look like they haven't even read _this_ thread, much less past ones; prior threads spiraled a bit into uselessness with people jumping in with repeated, rehashed, uninformed opinions that had been thoughtfully covered in prior posts.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: ArticMine on July 25, 2014, 10:04:37 PM
First I should point out that I am familiar with many of the old threads and the arguments in them. Ironically, I came across CryptoNote and Monero while researching those old threads. I will first address two points, miner incentives and decentralization, because they are both very valid. They also form part of most of the arguments against increasing the 1 MB block size limit.

Bitcoin isn't secure unless there is income to pay to apply computation to the honest chain (and thus far the alternatives appear not clearly workable (https://download.wpsoftware.net/bitcoin/pos.pdf)); we argue that once the subsidy is gone transaction fees will support the security. But the existence of a market for transaction fees requires a degree of scarcity to make the rational price non-zero and to encourage efficient use. Just like Bitcoin itself wouldn't be valuable if everyone had access to infinite Bitcoins, our incentives require a degree of scarcity of blockspace.
This is all very true, but it can be addressed by making an increase in difficulty a necessary requirement for an increase in the block size limit. In short, if the miners are not receiving the proper incentives, then the difficulty cannot also be rising at the same time. This is why I included the second requirement of a rising difficulty for a block size increase in my suggestion.

Blocksize has a trade-off with decentralization. If verifying the blockchain is made expensive (relative to hardware and bandwidth costs), then past some limit Bitcoin becomes a centralized system where everyone is economically forced to trust some consortium of large miners— which are themselves more efficient if centralized since they can just verify once, instead of verifying for themselves. (If the economic majority is trusting and not verifying, you need to also do the same so you don't end up split from the other users of the system.)
The problem here is that a fixed block size does not take into account that these hardware and bandwidth costs are dropping at an exponential rate. Furthermore, it does not take into account that what will likely stop the "economic majority", namely consumers, from verifying transactions is not cost but rather the fact that they choose to cede control over their devices to large centralized corporations such as Apple and Microsoft. An iPad or a Surface tablet may have the processing power and memory to handle a full Bitcoin node, but is not able to do so because of the DRM in the device. Now, for under 20% of the cost of an iPad, one can purchase a used laptop running GNU/Linux that is perfectly capable of handling a full Bitcoin node. If the 1-2% of Bitcoin users who choose to run GNU/Linux are the only ones verifying transactions, that still provides enough decentralization to secure the network. Furthermore, it avoids an attack by a proprietary OS vendor who could use the DRM in their OS to cripple Bitcoin. One thing we must keep in mind here is that those who run a proprietary OS do not control their devices; they only think they do. Crippling Bitcoin with a 1 MB block size limit is not going to solve the centralization issue.

This brings me to the third point
 
Bitcoin currency throughput can be increased to arbitrary levels without increasing blocksizes, especially if you're willing to make decentralization tradeoffs. Importantly, handling high volume transactions in other ways than expressing each and every one in the global bitcoin ledger can help avoid pulling down the available security for all transactions just because a large volume of low value transactions need the throughput and can accept the lower security. Work in this space has been under-developed, but I'm not aware of anyone disagreeing with the broad possibilities here. Because of the lack of need until now it's only recently become possible to raise substantial funding for work in this space.
True, but this is essentially self-defeating. If it turns out that a centralized or semi-centralized solution can be competitive with Bitcoin in certain situations, then so be it. This, however, should be the result of true market forces and not an arbitrary limit placed on Bitcoin.

Edit: With respect to CryptoNote and Monero, I do see merit in the argument that the fee penalty alone may not be enough to constrain block size; however, when combined with the difficulty-increase requirement, the picture changes. As for the specifics of the Monero blockchain, there are also other factors, including dust from mining pools, that led to the early bloating, and we must also keep in mind that CryptoNote and Monero have built-in privacy, which adds to the block size by its very nature.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: gmaxwell on July 26, 2014, 12:01:42 AM
This is all very true but it can be addressed by making an increase in difficulty a necessary requirement for an increase in the blocksize limit.
It seems like a reasonable thing, not sufficient on its own but reasonable... in fact, I've previously proposed it myself. (Not sufficient because, among other things, it doesn't do anything to keep incentives aligned or to keep centralization gains moderate.)

Quote
Now for under 20% of the cost of an iPad one can purchase a used laptop running GNU/Linux that is perfectly capable of handling a full Bitcoin node. If the 1-2% of Bitcoin users
Yes, but that's only true for a certain set of limits at a certain size. 100 MB blocks wouldn't be tolerable on thrifty resources like that today. You may be making an error in reading my message as some kind of opposition instead of as a set of tradeoffs which must be understood and carefully handled.

Quote
True but this is essentially self defeating. If it turns out that a centralized or semi centralized solution can be competitive with Bitcoin in certain situations then so be it. This however should be a result of true market forces and not an arbitrary limit placed on Bitcoin.
Centralized services are inherently more efficient, enormously so. My desktop could handle 40,000 TPS with a simple ledger system, giving nearly instant confirmation... physics creates limits for decentralized systems in scale and latency, and that's okay: the decentralized systems are much better from a security perspective. My point about there being alternatives to increased scaling is not limited to (semi-)centralized systems; though those are useful tools which will exist in the ecosystem, there are decentralized approaches as well.

I also think it's wrong to think of semi-centralized systems as being in competition with Bitcoin; if they process transactions for Bitcoin value, they're part of the broader ecosystem, and Bitcoin's trustlessness should enable semi-centralized systems which are far more trustworthy than what is possible absent something like Bitcoin. We should be able to adopt them in the places where they make the most sense and have the least risk, rather than try to force all of Bitcoin to the level of centralization needed to make processing $0.25 coffee cup purchases economically efficient while still doing a poor job of it.

Quote
Edit: With respect to CryptoNote and Monero, I do see merit in the argument that the fee penalty alone may not be enough to constrain blocksize; however when combined with the difficulty increase requirement the picture changes. As for the specifics of the Monero blockchain there are also other factors including dust from mining pools that led to the early bloating, and we must also keep in mind the CryptoNote and Monero also have built in privacy which adds to the blocksize by its very nature.
Yes, it's complicated, but it's also concerning. The only altcoins I'm aware of which have diverged from the current Bitcoin behavior on this front, and did so with the benefit of all the prior discussions being available to them, have been suffering from crippling bloat. That they have stronger privacy features which make transactions somewhat larger is somewhat relevant, but the average mixing size on monero is small... and we may need to adopt similar functionality in Bitcoin (especially as increased mining centralization, partially fueled by the cost of operating nodes, makes censorship more of a risk). This doesn't prove anything one way or another, just something to think about: the way it was originally presented sounded to me like you were saying it was solved over there, but instead I think that the experience in Bytecoin and monero raises more questions than answers.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: jubalix on July 26, 2014, 01:58:45 AM
Some reflection on backbone currencies may yield the view that the utility of high security far outweighs micro-transactions for a store of wealth.

Backbones can run 80% of the economy; the other 20%, purchasing from Starbucks and other retail, cannot.

Notice how when, say, Target or K-mart or some other retail store fails, nothing really changes in your quality of life, and another provider can pop up.

Notice how your quality of life changes when you make large payments, e.g. a house. These large payments are where the value is.

BTC seems to take a slant toward more security than TPS, which is a fine balance and probably the right one. The entire system crumbles if security is compromised, or even perceived to be lessened by centralization.

An example of a thin blockchain set up for high value and low TPS is Peercoin. Download that blockchain and see exactly how fast it is, in contrast to LTC/DOGE and basically everything else that is bloating itself out of existence by having too many "features" and not enough focus on a clear objective.



Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: ArticMine on July 26, 2014, 02:34:58 AM
This is all very true but it can be addressed by making an increase in difficulty a necessary requirement for an increase in the blocksize limit.
It seems like a reasonable thing, not sufficient on its own but reasonable... in fact, I've previously proposed it myself. (Not sufficient because, among other things, it doesn't do anything to keep incentives aligned or to keep centralization gains moderate.)

Quote
Now for under 20% of the cost of an iPad one can purchase a used laptop running GNU/Linux that is perfectly capable of handling a full Bitcoin node. If the 1-2% of Bitcoin users
Yes, but that's only true for a certain set of limits at a certain size. 100 MB blocks wouldn't be tolerable on thrifty resources like that today. You may be making an error in reading my message as some kind of opposition instead of as a set of tradeoffs which must be understood and carefully handled.

Quote
True but this is essentially self defeating. If it turns out that a centralized or semi centralized solution can be competitive with Bitcoin in certain situations then so be it. This however should be a result of true market forces and not an arbitrary limit placed on Bitcoin.
Centralized services are inherently more efficient, enormously so. My desktop could handle 40,000 TPS with a simple ledger system, giving nearly instant confirmation... physics creates limits for decentralized systems in scale and latency, and that's okay: the decentralized systems are much better from a security perspective. My point about there being alternatives to increased scaling is not limited to (semi-)centralized systems; though those are useful tools which will exist in the ecosystem, there are decentralized approaches as well.

I also think it's wrong to think of semi-centralized systems as being in competition with Bitcoin; if they process transactions for Bitcoin value, they're part of the broader ecosystem, and Bitcoin's trustlessness should enable semi-centralized systems which are far more trustworthy than what is possible absent something like Bitcoin. We should be able to adopt them in the places where they make the most sense and have the least risk, rather than try to force all of Bitcoin to the level of centralization needed to make processing $0.25 coffee cup purchases economically efficient while still doing a poor job of it.

Quote
Edit: With respect to CryptoNote and Monero, I do see merit in the argument that the fee penalty alone may not be enough to constrain blocksize; however when combined with the difficulty increase requirement the picture changes. As for the specifics of the Monero blockchain there are also other factors including dust from mining pools that led to the early bloating, and we must also keep in mind the CryptoNote and Monero also have built in privacy which adds to the blocksize by its very nature.
Yes, it's complicated, but it's also concerning. The only altcoins I'm aware of which have diverged from the current Bitcoin behavior on this front, and did so with the benefit of all the prior discussions being available to them, have been suffering from crippling bloat. That they have stronger privacy features which make transactions somewhat larger is somewhat relevant, but the average mixing size on monero is small... and we may need to adopt similar functionality in Bitcoin (especially as increased mining centralization, partially fueled by the cost of operating nodes, makes censorship more of a risk). This doesn't prove anything one way or another, just something to think about: the way it was originally presented sounded to me like you were saying it was solved over there, but instead I think that the experience in Bytecoin and monero raises more questions than answers.


I consider the difficulty-increase requirement on its own a necessary but not sufficient condition, and the CryptoNote / Monero solution or something similar also a necessary but not sufficient condition. My proposal is that requiring both together is a necessary and sufficient condition. Both suggestions have been made before individually, but I have not seen a proposal that required both of them.

The problem of 100 MB blocks needs to be considered not just in terms of current technology, but in terms of technology costs down the road (2, 5, 10 years, etc.). I run a full Bitcoin node on an over-ten-year-old laptop (it still has its Windows 2000 logo and a floppy drive) that is way inferior in performance to what one can buy used for, say, 100-200 USD today.

The efficiency-of-centralization argument is on the surface very valid; furthermore, my desktop can also handle "40,000 TPS with a simple ledger system, giving nearly instant confirmation". The problem arises when an individual with value on your desktop wishes to do business with an individual with value on mine. It is then that the fees and costs start to get really expensive. There are two problems here:
First, businesses left to their own devices tend not to co-operate with their competitors in order to allow each other's customers to do business. They instead prefer to keep their customers for the most part locked up in their own "walled gardens". Just witness the behaviour of Apple with iOS or, while on the subject of coffee, Keurig adding DRM to their coffee makers in order to prevent customers from purchasing coffee from suppliers other than those approved by Keurig.
The second is regulatory: if we are in different jurisdictions, then we both now have to comply with two sets of regulators. As the number of jurisdictions increases, so does the number of regulators, and, even worse, the sometimes conflicting interactions between the various regulators each provider has to deal with.
These are the reasons why we see many innovative payment methods that work only within one jurisdiction, but only a few expensive options across international borders. Paying 2.50 USD for coffee at the local coffee shop is not where Bitcoin shines, but paying 2.50 USD for a good or service from a provider across the world is where Bitcoin can really shine. Centralized and semi-centralized solutions do have a role to play in reducing blockchain bloat, but cannot by themselves solve the problem. For example: Coinbase provides both exchange and merchant services to persons in the US, and requires a US bank account. If one of their exchange customers makes a purchase from one of their merchants with Bitcoin, that transaction does not need to go through the blockchain. On the other hand, if I in Canada make a purchase from a Coinbase merchant, my transaction does have to go through the blockchain, and I, by the way, obtained my BTC from Virtex, who requires Canadian citizenship as a condition of doing business with them!

One thing to note about Monero and Bytecoin is that Monero has two orders of magnitude more transaction volume by value (BTC or USD) than Bytecoin for very similar capitalizations, so the stress test of bloat should happen in Monero long before it happens in Bytecoin, even though Bytecoin is the older coin.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: Challisto on July 26, 2014, 04:41:59 AM
I think improving block propagation should take priority over block size. Currently block distribution is an O(n) operation: the bigger the block, the longer it takes to propagate, which means miners are forced to mine small blocks to reduce orphan rates. Even if the max block size is increased, there is no incentive for miners to use it.

This is a possible way to reduce block propagation time, if a solution to the drawback is found. Since most nodes will have a nearly identical mempool of transactions, a template could be sent from which an entire newly mined block can be reconstructed. The benefits are that block propagation time is no longer directly tied to block size and that bandwidth is reduced. The drawback is that if some transactions are missing, the receiving node will have to send a request for them, and this process can end up being slower than just receiving the entire block from the start.
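A minimal sketch of the template idea under discussion (all names are mine; a real implementation would use compact transaction ids and handle ordering and the coinbase specially):

```python
# Hypothetical sketch: announce a new block as its header plus an ordered
# list of transaction ids, and let peers reconstruct it from their own
# mempool, requesting only the transactions they lack.

def make_template(block_txids: list[str], header: bytes) -> dict:
    """The compact announcement a miner would broadcast."""
    return {"header": header, "txids": block_txids}

def reconstruct(template: dict, mempool: dict[str, bytes]) -> tuple[list[bytes], list[str]]:
    """Return the transactions we can fill in locally, plus the ids we
    still need to fetch (the drawback: each miss costs a round trip)."""
    have, missing = [], []
    for txid in template["txids"]:
        if txid in mempool:
            have.append(mempool[txid])
        else:
            missing.append(txid)
    return have, missing
```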

   


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: gmaxwell on July 26, 2014, 06:34:11 AM
while the CryptoNote / Monero solution or something similar also a necessary but not a sufficient condition
The quadratic disincentive that Bytecoin and its forks use is very bad; there is no upside to it. It doesn't achieve the advertised end: it encourages people to pay miners directly instead of via fees, thus improving the position of large, well-known miners. (If you just mean using a median-controlled limit, sure, but that's an old proposal from these threads that has nothing to do with bytecoin. I would point out that an explicit desired size is better than the actual size in that calculation, to avoid miners needing to pad their blocks or omit transactions they could have included just to express a desire for another limit.)

Sadly there is still no proposal that I've seen which really closes the loop between user capabilities (esp bandwidth handling, as bandwidth appears to be the 'slowest' of the technology to improve), at best I've seen applying some additional local exponential cap based on historical bandwidth numbers, which seems very shaky since small parameter changes can make the difference between too constrained and completely unconstrained easily.  The best proxy I've seen for user choice is protocol rule limits, but those are overly brittle and hard to change.

Petertodd and jdillion previously had a POS-ish voting proposal that might be interesting... but anything in that family seemed complex to implement. The notion there was that signers of recent transactions, selected by non-interactive cut-and-choose using the block hashes weighted by coin-days destroyed, could reveal signatures of block hashes after their transactions in order to register approval for increasing the block size. Since miners can always decrease the block size unilaterally and would usually be in favor of increases, and they can censor users, if the users' consent is one-sided and needed for increases, and the protocol is constructed so that users can't be forced to give consent in advance except by giving up their private keys, this actually has a fighting chance of keeping the users in the decision. But there are a lot of free parameters in how it could work... and giving a majority of coin holders a voice is actually a pretty poor proxy for doing something smarter: democracy is still an oppressive system which forces the will of some onto others, and those holding the most coins might be large corporations or governments who benefit from centralization. Still, it's better than just handing the decision unilaterally to miners.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: coinft on July 26, 2014, 06:18:58 PM
Just an idea regarding incentives:

A miner who wants to mine a block larger than 1MB needs to give up part of the block reward. That way there is a strong incentive not to break the 1MB limit unless there are enough transactions available whose fees make up for the lost reward (and the higher orphan risk).

Of course the actual numbers need to be determined with great care.

I would guess that even if this were introduced today with conservative numbers, it wouldn't be used much to mint larger blocks until several block reward halvings have passed, or until TX volume picks up a lot. Also, by destroying part of the block reward this lowers the theoretical maximum supply of 21M coins, unless further measures are taken. And there will have to be an upper limit for the time when block rewards reach zero.

Has anyone ever thought about such a system?

-coinft


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: gmaxwell on July 26, 2014, 06:54:21 PM
A miner who wants to mine a block larger than 1MB needs to give up part of the block reward. That way there is a strong incentive not to break the 1MB limit unless there are enough transactions available whose fees make up for the lost reward (and the higher orphan risk).
This is the approach we were talking about with Bytecoin above. The income is reduced by a multiple determined as a function of size^2. This, sadly, does not work in the long run once the subsidy is not an overwhelming portion of miner income, because transactions will just pay miners directly (e.g. just author N different transactions, one for each miner you know of, each including an output for that miner; Eligius started supporting being paid fees in this way in 2011).
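The penalty family being described can be sketched as follows. The shape (full reward up to the median of recent block sizes, quadratic reduction above it, rejection past twice the median) follows my reading of the CryptoNote design; the concrete numbers are purely illustrative.

```python
def penalized_reward(base_reward: float, block_size: int, median_size: int) -> float:
    """CryptoNote-style subsidy penalty: full reward up to the median size,
    quadratic reduction up to 2x the median, zero (block invalid) beyond that."""
    if block_size <= median_size:
        return base_reward
    if block_size > 2 * median_size:
        return 0.0  # such a block would be rejected outright
    excess = block_size / median_size - 1.0   # fraction over the median, in (0, 1]
    return base_reward * (1.0 - excess ** 2)

# A block 50% over the median forfeits 25% of the reward:
print(penalized_reward(25.0, 150_000, 100_000))  # 18.75
```

Note the failure mode described above: the penalty scales the *generated* coin, so fees routed around it (as extra outputs paying the miner directly) escape the disincentive entirely.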

A version that wasn't a multiple but instead an absolute number of coins to destroy might work, since you couldn't escape it by moving fees to outputs. But then you have a free parameter: not knowing the value of a bitcoin to its users. And as we've seen with fees, constants that depend on how much a bitcoin is worth easily get out of whack. Any magical thoughts there?


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: coinft on July 26, 2014, 08:15:15 PM
This is the approach we were talking about with Bytecoin above. The income is reduced by a multiple determined as a function of size^2. This, sadly, does not work in the long run once the subsidy is not an overwhelming portion of miner income, because transactions will just pay miners directly (e.g. just author N different transactions, one for each miner you know of, each including an output for that miner; Eligius started supporting being paid fees in this way in 2011).

A version that wasn't a multiple but instead an absolute number of coins to destroy might work, since you couldn't escape it by moving fees to outputs. But then you have a free parameter: not knowing the value of a bitcoin to its users. And as we've seen with fees, constants that depend on how much a bitcoin is worth easily get out of whack. Any magical thoughts there?

In what I proposed above you don't give up a percentage of the fees (which may be zero due to out-of-band fees) but part of the block reward itself. And you need to do that for every single block that exceeds 1MB; it does not change the limit for future blocks. For example, if for x% of the block reward you may increase the block by N*x%, with N=10, the new block limit range becomes 1-11MB for a linear range of subsidies from 25 to 0 BTC. There is no way to "cheat" with out-of-band fees.

At the highest end there is no block reward at all. It is true this will become cheaper for miners as the block reward goes down, and eventually it will be free, but in my opinion that's a feature compensating for cheaper future storage and processing resources. With N=10 and today's subsidy, few miners would elect to create larger blocks unless the market changes a lot, but it becomes more reasonable over time.
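The rule above, as a sketch (N=10 as in the example; the function name and units are illustrative):

```python
ONE_MB = 1_000_000

def max_block_size(subsidy_forfeited: float, full_subsidy: float, n: int = 10) -> int:
    """coinft's proposal: forfeiting fraction x of the block subsidy buys a
    per-block limit of 1MB * (1 + N*x); with N=10 the range is 1-11 MB."""
    x = subsidy_forfeited / full_subsidy
    assert 0.0 <= x <= 1.0, "cannot forfeit more than the full subsidy"
    return int(ONE_MB * (1 + n * x))

print(max_block_size(0.0, 25.0))    # 1000000   (no forfeit: 1 MB)
print(max_block_size(12.5, 25.0))   # 6000000   (half the subsidy: 6 MB)
print(max_block_size(25.0, 25.0))   # 11000000  (entire subsidy: 11 MB)
```

Because the forfeit is paid out of the subsidy rather than fees, out-of-band fee payments don't bypass it, but the cost of a large block falls to zero as the subsidy halves away, which is the objection raised below.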


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: gmaxwell on July 26, 2014, 09:35:39 PM
but in my opinion that's a feature compensating for cheaper future storage and processing resources
Storage in the (not so distant) future will not be free. Talk about "programmed destruction", yikes. What the Bytecoin scheme does is reduce all the generated coin, including subsidy; what you're suggesting really is a duplicate of it, but less completely considered, so please check out the Bytecoin whitepaper. I suppose that leaving _out_ the fees at least avoids the bad incentive trap. It's still broken nonetheless, and you really can't wave your hands and ignore the fact that the subsidy will be pretty small in only a few years... especially with the same approach being apparently ineffective in Bytecoin and Monero while their subsidy is currently quite large.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: coinft on July 26, 2014, 09:53:59 PM
Storage in the (not so distant) future will not be free. Talk about "programmed destruction", yikes. What the Bytecoin scheme does is reduce all the generated coin, including subsidy; what you're suggesting really is a duplicate of it, but less completely considered, so please check out the Bytecoin whitepaper. I suppose that leaving _out_ the fees at least avoids the bad incentive trap. It's still broken nonetheless, and you really can't wave your hands and ignore the fact that the subsidy will be pretty small in only a few years... especially with the same approach being apparently ineffective in Bytecoin and Monero while their subsidy is currently quite large.

Yes, it was an idea of the moment to fix the Bytecoin model, especially the adaptive limit. With N small enough it might not have bad effects, but it also limits blocks to only 1+N MB forever, which is conservative but maybe not worth the effort. This suits me fine; I would much prefer to compromise on trust and decentralization for payments of small amounts off the blockchain anyway, and keep the core system as small and trusted (distributed) as possible.



Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: KriszDev on July 27, 2014, 06:17:06 AM
Why do you want to change the 1MB limit? Currently most blocks are <500KB.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: amincd on July 30, 2014, 11:17:43 PM
BTC seems to take a slant toward more security over TPS, which is a fine balance and probably the right one. The entire system crumbles if security is compromised, or even perceived to be lessened by centralization.

If Bitcoin is merely a high powered money for international settlement between large commercial entities, it will have failed in its mission of providing the world with a decentralized electronic cash. Bitcoin has to be accessible for the average Joe. We can draw the line at on-chain micro-transactions, but I don't see any reason why Bitcoin can't match Visa's transaction volumes. Satoshi's original calculations argued as much.

I do agree that security should take priority, but I believe a balance can be found that doesn't compromise BTC's decentralized structure, while also not relegating it to only large value international transactions.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: gmaxwell on July 30, 2014, 11:35:19 PM
If Bitcoin is merely a high powered money for international settlement between large commercial entities, it will have failed in its mission of providing the world with a decentralized electronic cash. Bitcoin has to be accessible
Without commenting on the rest, the logic doesn't follow here. That soda pop purchases don't happen directly on the Bitcoin blockchain does not mean that Bitcoin hasn't provided people with decentralized electronic cash.

Quote
but I don't see any reason why Bitcoin can't match Visa's
The Bitcoin blockchain is a very different system from the Visa payment network and makes very different trade-offs. It will always be the case that in some respects the Bitcoin blockchain doesn't match Visa, just as much as Visa fails to match Bitcoin. If you insist your floor wax also be a tasty dessert topping, you may get something which is the worst of all worlds instead.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: amincd on July 31, 2014, 02:57:03 AM
Without commenting on the rest, the logic doesn't follow here. That soda pop purchases don't happen directly on the Bitcoin blockchain does not mean that Bitcoin hasn't provided people with decentralized electronic cash.

How's that so? How can bitcoin be decentralized cash if it can't be used for everyday purchases like buying soda pop while using the decentralized network?

Quote
The Bitcoin blockchain is a very different system from the visa payment work which makes very different trade-offs. It will always be the case that in some respects the bitcoin blockchain doesn't match visa, just as much as visa fails to match Bitcoin.

Visa's maximum possible transaction throughput will undoubtedly be higher than Bitcoin's, but the current transaction volume, which is limited by consumer demand rather than the technical capabilities of Visa's data centers, is something Satoshi envisioned Bitcoin matching:

http://www.mail-archive.com/cryptography@metzdowd.com/msg09964.html

Quote
The bandwidth might not be as prohibitive as you think.  A typical transaction
would be about 400 bytes (ECC is nicely compact).  Each transaction has to be
broadcast twice, so lets say 1KB per transaction.  Visa processed 37 billion
transactions in FY2008, or an average of 100 million transactions per day.  
That many transactions would take 100GB of bandwidth, or the size of 12 DVD or
2 HD quality movies, or about $18 worth of bandwidth at current prices.

If the network were to get that big, it would take several years, and by then,
sending 2 HD movies over the Internet would probably not seem like a big deal.
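Satoshi's back-of-the-envelope figures above are easy to re-check directly (variable names are mine):

```python
visa_tx_per_year = 37_000_000_000        # FY2008 figure from the quote
tx_per_day = visa_tx_per_year / 365
bytes_per_tx = 1_000                     # ~400 bytes, broadcast twice, rounded up

daily_bandwidth_gb = tx_per_day * bytes_per_tx / 1e9
print(round(tx_per_day / 1e6))    # ~101 million transactions per day
print(round(daily_bandwidth_gb))  # ~101 GB/day, matching the quoted 100GB figure
```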


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: gmaxwell on July 31, 2014, 03:44:23 AM
Without commenting on the rest, the logic doesn't follow here. That soda pop purchases don't happen directly on the Bitcoin blockchain does not mean that Bitcoin hasn't provided people with decentralized electronic cash.
How's that so? How can bitcoin be decentralized cash if it can't be used for everyday purchases like buying soda pop while using the decentralized network?
It is perfectly possible to transact in Bitcoin without using the blockchain for every transaction (in a decentralized way too, though for buying soda pop, federated solutions may be much less costly).
Quote
That many transactions would take 100GB of bandwidth, or the size of 12 DVD or
Yep, though note that that post was also from before the million-byte limit was added to Bitcoin, along with many other protections against loss of decentralization and against denial of service... it's an argument for the general feasibility of this class of approach, and indeed, it's fine. We're still not at a point where sending 100GB/day is "not a big deal", nor is demand for Bitcoin transactions anything like that (and, arguably, if Bitcoin required 100GB/day now we would never reach that level of demand, because such a costly system at this point would be completely centralized and inferior to traditional banks and Visa). Visa's 2008 transaction volume is also a long way from handling the total volume of the world's small cash transactions, as you seemed to be envisioning. Yet Bitcoin _can_ accommodate that, but not if you continue to believe you can shove all the world's transactions constantly into a single global broadcast network.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: amincd on July 31, 2014, 05:48:34 PM
It is perfectly possible to transact in Bitcoin without using the blockchain for every transaction (in a decentralized way too, though for buying soda pop, federated solutions may be much less costly).

How can you transact in Bitcoin in a decentralized way without the blockchain? Rapidly adjusted payment channels need aggregators (centralized parties) if you want to use them to transact with parties with whom you have no previous or ongoing relationship (e.g. a random vending machine), and they require locking up your bitcoin for a period.
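For readers unfamiliar with the rapidly adjusted payment channels being debated here, a toy model of the idea (the on-chain locking transaction, signatures, and time-locked refunds are all elided; this is a conceptual sketch, not a real channel protocol):

```python
class PaymentChannel:
    """Toy unidirectional payment channel: the payer locks a deposit on
    chain, then sends signed balance updates off chain; only the final
    state is settled on the blockchain."""

    def __init__(self, deposit: int):
        self.deposit = deposit       # locked on chain for the channel's lifetime
        self.paid = 0                # cumulative amount promised to the payee

    def pay(self, amount: int):
        if self.paid + amount > self.deposit:
            raise ValueError("channel exhausted")
        self.paid += amount          # off-chain update: no blockchain transaction

    def settle(self):
        """Close the channel: one on-chain transaction splits the deposit."""
        return {"payee": self.paid, "refund": self.deposit - self.paid}

ch = PaymentChannel(deposit=100_000)
for _ in range(50):                  # fifty soda-pop sized payments...
    ch.pay(1_000)
print(ch.settle())                   # ...settled by a single transaction
```

The objection in this post still stands in the model: the channel only works against one fixed counterparty, and the deposit is locked until settlement.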

Quote
Yep, though note that that post was also from before the million-byte limit was added to Bitcoin, along with many other protections against loss of decentralization and against denial of service... it's an argument for the general feasibility of this class of approach, and indeed, it's fine. We're still not at a point where sending 100GB/day is "not a big deal", nor is demand for Bitcoin transactions anything like that (and, arguably, if Bitcoin required 100GB/day now we would never reach that level of demand, because such a costly system at this point would be completely centralized and inferior to traditional banks and Visa).

I agree entirely. We're not at the point where Bitcoin should be handling 4,000 tps. I'm just making a case for not sticking with the 1 MB block size limit, and for putting in place a mechanism by which, over time, it can (automatically) scale to that volume.

Quote
Visa's 2008 transaction volume is also a long way from handling the total volume of the world's small cash transactions, as you seemed to be envisioning. Yet Bitcoin _can_ accommodate that, but not if you continue to believe you can shove all the world's transactions constantly into a single global broadcast network.

That is true. Ultimately, the Bitcoin blockchain cannot handle the total volume of the world's small cash transactions. I think we can cross that bridge when we get there though. Getting to 4,000 tps would radically transform the world's financial system, and inject a massive amount of capital and manpower into the Bitcoin community, making it easier for new solutions (e.g. sidechains) to be developed.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: solex on August 01, 2014, 01:58:34 AM
gmaxwell makes clear that this subject has been debated in many threads. It keeps getting raised, and the reason is that 18 months have passed since Jeweller's thread (https://bitcointalk.org/index.php?topic=140233.0), and there is probably less than 18 months until the block size limit starts crippling transaction flows, just as the 250KB soft-limit did early in March 2013.

Sadly there is still no proposal that I've seen which really closes the loop between user capabilities (esp bandwidth handling, as bandwidth appears to be the 'slowest' of the technology to improve), at best I've seen applying some additional local exponential cap based on historical bandwidth numbers, which seems very shaky since small parameter changes can make the difference between too constrained and completely unconstrained easily.  The best proxy I've seen for user choice is protocol rule limits, but those are overly brittle and hard to change.

Satoshi put the 1MB limit into place nearly 4 years ago, mainly as an anti-spam measure. Now that the block limit exists, at the very minimum it should increase at the same rate as the average global internet broadband speed.

UK consumer broadband (download) average speed each year (http://consumers.ofcom.org.uk/news/broadband-speeds-april-14/)
http://images.ofcom.org.uk/images/consumer/bb-speeds-by-year

It is a reasonable assumption that all major countries which host bitcoin nodes have seen a similar growth pattern, and that upload speeds follow the pattern as well.
So, if a 1MB max block size was acceptable, within the goal of maintaining decentralization, in 2010, then 3MB must be acceptable today.

Large blocks are already being created, as a matter of course, by different miners:

Height   Txs    Value          Relayed by                      Size (KB)
313377   298    3,178.83 BTC   5.9.24.81                       731.56
313376   1230   3,322.72 BTC   Eligius                         877.88
313375   1447   2,434.47 BTC   Unknown with 1AcAj9p Address    731.47
313374   1897   5,733.43 BTC   GHash.IO                        731.61

The bare minimum which needs doing is something like:
if block height > 330000
   maxblocksize = 3MB [and recalculate dependent variables]

Or, better still, a flexible limit based upon demand. Remember, people are paying for their transactions to be processed:

One option: the median size of a set of previous blocks.
A set of 2016 blocks is a large sample, representative of real bitcoin usage, so a flexible limit determined at each difficulty change makes sense.
The fees market (which is still dysfunctional) is a lesser concern at the present time.
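A minimal sketch of such a retarget-time flexible limit (the 1MB floor follows the thread's premise; the per-period growth cap is an illustrative safeguard I've added, not part of the post):

```python
from statistics import median

def next_block_limit(prev_2016_sizes, floor=1_000_000, growth_cap=2.0):
    """Flexible limit recomputed at each difficulty retarget: the median
    of the last 2016 block sizes, with a hard floor and a cap on how far
    the limit can rise in a single period."""
    m = median(prev_2016_sizes)
    return int(min(max(m, floor), floor * growth_cap))

# Mostly ~1.4MB blocks: the limit steps up to the median.
sizes = [1_400_000] * 1500 + [300_000] * 516
print(next_block_limit(sizes))  # 1400000
```

Using the median rather than the mean means a minority of miners stuffing giant blocks cannot drag the limit up on their own, which is the usual argument for this family of rules.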

Bitcoin Core version 0.8 focused on LevelDB, 0.9 on Payment protocol. Version 0.10 really needs to address the block size.

It is crazy to allow the scenario below to happen over a 1MB constant when all nodes, not just miners, would be affected:

By default Bitcoin will not create blocks larger than 250kb even though it could do so without a hard fork. We have now reached this limit. Transactions are stacking up in the memory pool and not getting cleared fast enough.

What this means is, you need to take a decision and do one of these things:

  • Start your node with the -blockmaxsize flag set to something higher than 250kb, for example -blockmaxsize=1023000. This will mean you create larger blocks that confirm more transactions. You can also adjust the size of the area in your blocks that is reserved for free transactions with the -blockprioritysize flag.
  • Change your node's code to de-prioritize or ignore transactions you don't care about; for example, Luke-Jr excludes SatoshiDice transactions, which makes way for other users.
  • Do nothing.

If everyone does nothing, then people will start having to attach higher and higher fees to get into blocks until Bitcoin fees end up being uncompetitive with competing services like PayPal.

If you mine on a pool, ask your pool operator what their policy will be on this, and if you don't like it, switch to a different pool.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: ArticMine on August 03, 2014, 01:48:38 AM
Here is an excellent graphic on when we would likely reach the 1 MB Block limit. https://bitcointalk.org/index.php?topic=400235.msg8153516#msg8153516 (https://bitcointalk.org/index.php?topic=400235.msg8153516#msg8153516). A reasonable prediction is within 12 months, likely during the next major price move.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: ABISprotocol on August 21, 2014, 06:55:29 AM
but in my opinion that's a feature compensating for cheaper future storage and processing resources
Storage in the (not so far distant future) will not be free. Talk about "programmed destruction" yikes. What the bytecoin stuff does reduces all the generated coin, including subsidy— what you're suggesting really is a duplicate of it, but less completely considered, please check out the bytecoin whitepaper. I suppose that leaving _out_ the fees at least avoids the bad incentive trap. It's still broken, none the less, and you really can't wave your hands and ignore the fact that subsidy will be pretty small in only a few years... esp with the same approach being apparently ineffective in bytecoin and monero when their subsidy is currently quite large.

I just read this whole thread and found it very interesting. After reading it, I decided to go back and re-read this:

Output Distribution Obfuscation (posted July 16, 2014), by Gregory Maxwell and Andrew Poelstra. It involves use of CryptoNote-based Bytecoin (BCN) ring signatures, described as a possibility for Bitcoin: "Using Bytecoin Ring Signatures (BRS), described at cryptonote.org, it is possible to disguise which of the utxos is being referenced in any given transaction input. This is done by simply referencing all the utxos, then ring-signing with a combination of all their associated pubkeys."
http://download.wpsoftware.net/bitcoin/wizardry/brs-arbitrary-output-sizes.txt
This, in part, proposes "an output-encoding scheme by which outputs of *every possible size* are created alongside each real output (...) further requir(ing) that the 'ghost outputs' are indistinguishable from the real ones to anyone not in possession of the outputs' associated private key. (With ring signatures hiding the exact outputs which are being spent, even spending a real output will not identify it as real.)" In this scenario, ghost outputs are chosen randomly, and users improve anonymity when selecting ghost outputs "by trying to reuse n for any given P, V."

(Background to this:)
(...)the bytecoin ring signature is pretty straight forward to add to Bitcoin— though it implies a pretty considerable scalability tradeoff. Andytoshi and I have come up with some pretty substantial cryptographic improvements, e.g. https://download.wpsoftware.net/bitcoin/wizardry/brs-arbitrary-output-sizes.txt

So, my questions:

How would this output-encoding scheme work realistically for something of *every possible size*? And assuming this were applied to bitcoin as an option [much as SharedCoin is in blockchain.info], wouldn't it still come at a cost, both in terms of the size of the data for transactions that use the scheme and in terms of the corresponding additional fee(s)? How are the scalability issue(s) addressed? (Please also explain both the scripting and no-scripting scenarios.)


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: Gavin Andresen on August 21, 2014, 02:27:35 PM
Quote
Edit: With respect to CryptoNote and Monero, I do see merit in the argument that the fee penalty alone may not be enough to constrain blocksize; however when combined with the difficulty increase requirement the picture changes. As for the specifics of the Monero blockchain there are also other factors including dust from mining pools that led to the early bloating, and we must also keep in mind the CryptoNote and Monero also have built in privacy which adds to the blocksize by its very nature.
Yes, it's complicated, but it's also concerning. The only altcoins that I'm aware of which have diverged from the current Bitcoin behavior on this front, and did so with the benefit of all the prior discussions being available to them, have been suffering from crippling bloat.

Glancing at block explorers for Monero and ByteCoin (https://minergate.com/blockchain/bcn/blocks)... I'm not seeing crippling bloat right now. I see lots of very-few-transactions blocks.

Glancing at recent release notes for ByteCoin (https://forum.cryptonote.org/viewtopic.php?f=12&t=257), it looks like transactions were not being prioritized by fee, which is fundamental to getting a working fee market.

Have Monero and ByteCoin fixed the bloat problem, or did the transaction spammers just get bored and go away?



Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: ABISprotocol on August 21, 2014, 05:34:53 PM
Quote
Edit: With respect to CryptoNote and Monero, I do see merit in the argument that the fee penalty alone may not be enough to constrain blocksize; however when combined with the difficulty increase requirement the picture changes. As for the specifics of the Monero blockchain there are also other factors including dust from mining pools that led to the early bloating, and we must also keep in mind the CryptoNote and Monero also have built in privacy which adds to the blocksize by its very nature.
Yes, it's complicated, but it's also concerning. The only altcoins that I'm aware of which have diverged from the current Bitcoin behavior on this front, and did so with the benefit of all the prior discussions being available to them, have been suffering from crippling bloat.

Glancing at block explorers for Monero and ByteCoin (https://minergate.com/blockchain/bcn/blocks)... I'm not seeing crippling bloat right now. I see lots of very-few-transactions blocks.

Glancing at recent release notes for ByteCoin (https://forum.cryptonote.org/viewtopic.php?f=12&t=257), it looks like transactions were not being prioritized by fee, which is fundamental to getting a working fee market.

Have Monero and ByteCoin fixed the bloat problem, or did the transaction spammers just get bored and go away?

Not sure whether bloat has been addressed to the extent it would need to be in Bytecoin (BCN), but I do know that recent updates dropped the default fee from 10 BCN to 0.01 BCN (1000 times cheaper). The updates also let the user specify the fee, so the higher the fee, the faster the transaction makes it in, like this:

In this example, I've shown the general format of the transfer command, with the -f (fee) set to 10 bytecoin:

Code:
transfer <mixin_count> <address> <amount> [-p payment_id] [-f fee] 
Code:
transfer 10 27sfd....kHfjnW 10000 -p cfrsgE...fdss -f 10

My understanding is that gmaxwell and andytoshi (et al.?) have come up with "substantial cryptographic improvements" to the BCN system which are potentially "pretty straightforward to add to Bitcoin", as per gmaxwell; see https://download.wpsoftware.net/bitcoin/wizardry/brs-arbitrary-output-sizes.txt and previous comments cited in this thread. However, I still have my (unanswered) questions, to wit:

How would this output-encoding scheme work realistically for something of *every possible size?*  

Assuming this were applied to bitcoin as an option [much as SharedCoin is in blockchain.info], wouldn't it still come at a cost, both in terms of the size of the data for transactions that use the scheme and in terms of the corresponding additional fee(s)?

How are the scalability issue(s) addressed?


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: jl2012 on August 21, 2014, 06:05:28 PM
1MB-block supporters have 2 major arguments: decentralization and block space scarcity. Considering ONLY these 2 factors, however, the BEST solution is to limit a block to only 2 transactions: a reward transaction and a normal transaction. This would keep the block size to an absolute minimum, and make sure everyone could mine with a 9.6k modem and an 80386 computer.

The truth is that the 1MB limit was just an arbitrary choice by Satoshi, made without considering its implications carefully (at least, not in any well-documented way). He chose "1" simply because it's the first natural number. Had he chosen 2MB instead of 1MB, I am pretty sure that Bitcoin would have worked in exactly the same way as it works now. Had he chosen 0.5MB, we might have already run into big trouble.

We want to maximize miner profit because that translates to security. However, a block size limit does not directly translate to maximized miner profit. Consider the most extreme "2 transactions per block" limit: that would crush the value of Bitcoin to zero and no one would mine for it. We need to find a reasonable balance, but 1MB is definitely not a good one. Assume that we aim at paying $1 million/block ($52 billion/year) to secure the network (I consider this a small amount if Bitcoin ever grows to a trillion-dollar market cap). The current 7tps limit would require a fee of $238/tx, which is way too expensive even for a global settlement network among banks.

To answer the question of "what to replace the 1 MB block size limit with", we first need to set a realistic goal for Bitcoin. In the long term, I don't think bitcoin could or should be used for buying a cup of coffee. To be competitive with VISA, the wiki quotes 2000tps, or 1,200,000tx/block, or 586MB/block (assuming 512 bytes/tx). To be competitive with SWIFT, which has about 20 million transactions per day, it takes 232tps, or 138,889tx/block, or 68MB/block. Divide these into the $1 million fee/block, and that would be $0.83/tx and $7.2/tx respectively. A fixed rate of $7.2/tx is a bit high but still (very) competitive with wire transfer; $0.83/tx is very competitive for transactions over $100 in value. I think a reasonable choice, with the implications for centralization considered, would be around 100MB/block. That takes 1.5Mb/s of bandwidth in a perfect scenario. That would be a better equilibrium in technical and economic terms.
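The figures in this post can be verified directly (function names are mine; the $1M/block budget and 512 bytes/tx are the post's assumptions):

```python
BLOCK_INTERVAL_S = 600              # target seconds per block
TARGET_FEES_PER_BLOCK = 1_000_000   # assumed $1M/block security budget
TX_BYTES = 512                      # assumed average transaction size

def fee_per_tx(tps: float) -> float:
    """Fee each transaction must pay to fund the per-block budget."""
    return TARGET_FEES_PER_BLOCK / (tps * BLOCK_INTERVAL_S)

def block_mb(tps: float) -> float:
    """Block size in MB implied by a given transaction rate."""
    return tps * BLOCK_INTERVAL_S * TX_BYTES / 2**20

print(round(fee_per_tx(7)))                               # 238  -> $238/tx at the 7 tps cap
print(round(block_mb(2000)), round(fee_per_tx(2000), 2))  # 586 MB, $0.83/tx (VISA-scale)
print(round(block_mb(232)), round(fee_per_tx(232), 1))    # 68 MB, $7.2/tx (SWIFT-scale)
```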


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: mmeijeri on August 21, 2014, 06:29:10 PM
We want to maximize miner profit because that will translate to security.

That step in the argument needs more work. Security isn't the only consideration, nor does it obviously trump all others. At some point the incremental value of additional security might not be worth it.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: mmeijeri on August 21, 2014, 06:31:30 PM
We need to find a reasonable balance but 1MB is definitely not a good one.

Blocks of 1MB combined with tree-chains could turn out to be a perfectly adequate solution. The trade space is larger than just changing the block size.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: gmaxwell on August 21, 2014, 08:17:06 PM
1MB-block supporters have 2 major arguments: decentralization and block space scarcity. By considering ONLY these 2 factors, however, the BEST solution is to limit a block to only 2 transactions:
Gee. And yet no one is suggesting that. Perhaps this should suggest that your understanding of other people's views is flawed, before you cling to it, insult people with an oversimplification of their views, and prevent polite discourse as a result? :-/

Quote
Had he chosen 2MB instead of 1MB, I am pretty sure that Bitcoin would have worked in exactly the same way as how it works now.
Maybe, though we've suffered major losses of decentralization, with even many major commercial players not running their own verifying nodes and the overwhelming majority of miners instead relying on centralized services like Blockchain.info and mining pools. Even some of the mining pools have tried not running their own nodes, instead proxying work from other pools. The cost of running a node is an often-cited reason. Some portion of this cost may be an illusion, some may be a constant (e.g. software maintenance), but to the extent that the cost is proportional to the load on the network, higher limits would not be improving things.

What we saw in Bitcoin last year was a rise of ludicrously inefficient services: ones that bounced transactions through several addresses for every logical transaction made by users, games that produced a pair of transactions per move, etc. Transaction volume rose precipitously, but when fees and delays became substantial, many of these services changed strategies and increased their efficiency. Though I can't prove it, I think it is no coincidence that the load has equalized near the default target size.

Quote
We want to maximize miner profit because that will translate to security.
But this isn't the only objective, we also must have ample decentralization since this is what provides Bitcoin with any uniqueness or value vs the vastly more efficient centralized payment systems.

Quote
We need to find a reasonable balance
Agreed.

Quote
but 1MB is definitely not a good one.
At the moment it seems fine. Forever? Not likely— I agree, and on all counts. We can reasonably expect available bandwidth, storage, CPU power, and software quality to improve. In some span of time 10MB will have similar relative costs to 1MB today, and so all factors that depend on relative costs will be equally happy with some other size.

Quote
Assume that we aim at paying $1 million/block ($52 billion/year) to secure the network (I consider this a small amount if Bitcoin ever grows to a trillion-dollar market cap). The current 7 tps limit will require a fee of $238/tx, which is way too expensive even for a global settlement network among banks.
This is ignoring various kinds of merged mining income, which might change the equation somewhat... but this is hard to factor in today.

Quote
I think a reasonable choice, with the implications for centralization considered, would be around 100MB/block. That takes 1.5Mb/s of bandwidth in a perfect scenario. That would be a better equilibrium in technical and economical terms.
At the moment— based on how we're seeing things play out with the current load levels on the network— I think 100MB blocks would be pretty much devastating to decentralization; in a few years, likely less so, but at the moment it would be even more devastating to the existence of a fee market.

Have Monero and ByteCoin fixed the bloat problem, or did the transaction spammers just get bored and go away?
Yes, sort of— fee requirements at major pools, and Monero apparently planning a hard-fork to change the rules; I'm not sure where that stands— I'll ping some of their developers to comment. Monero's blockchain size is currently about 2.1GBytes on my disk here.

My understanding is that gmaxwell and andytoshi (et al.?) have come up with "substantial cryptographic improvements" to the BCN system which are potentially "pretty straight forward to add to Bitcoin", as per gmaxwell; see: https://download.wpsoftware.net/bitcoin/wizardry/brs-arbitrary-output-sizes.txt and previous comment(s) cited in this thread. However, I still have my (unanswered) questions, to wit:
How would this output-encoding scheme work realistically for something of *every possible size?*  
Assuming this were applied to bitcoin as an option [much as SharedCoin is in blockchain.info], wouldn't it still come at a cost both in terms of size of the data corresponding to whatever transactions involved the scheme in the cases where users choose to utilize it, as well as corresponding additional fee(s)?  
How are the scalability issue(s) addressed?
The improvements Andrew and I came up with do not change the scalability at all; they change the privacy (and do work for all possible sizes), and since it's not scalability related it's really completely off-topic for this thread.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: evanito on August 22, 2014, 03:19:19 AM
I don't think the size per block matters, as long as we can improve the transactions-per-second cap.
As you said, 7 transactions per second is minuscule, and we should focus on finding the right balance for maximum speed.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: jl2012 on August 22, 2014, 03:43:05 AM
1MB-block supporters have 2 major arguments: decentralization and block space scarcity. By considering ONLY these 2 factors, however, the BEST solution is to limit a block to only 2 transactions:
Gee. And yet no one is suggesting that. Perhaps this should suggest that your understanding of other people's views is flawed, before you cling to it, insult people with an oversimplification of their views, and prevent polite discourse as a result? :-/


Quote
Quote
but 1MB is definitely not a good one.
At the moment it seems fine. Forever? Not likely— I agree, and on all counts. We can reasonably expect available bandwidth, storage, CPU power, and software quality to improve. In some span of time 10MB will have similar relative costs to 1MB today, and so all factors that depend on relative costs will be equally happy with some other size.

Some of the 1MB-block supporters believe we should keep the limit forever, and move 99.9% of the transactions to off-chain. I just want to point out that their logic is completely flawed.



Quote
Quote
Had he chosen 2MB instead of 1MB, I am pretty sure that Bitcoin would have worked in exactly the same way as how it works now.
Maybe, but we've suffered major losses of decentralization, with even many major commercial players not running their own verifying nodes and the overwhelming majority of miners instead relying on centralized services like Blockchain.info and mining pools. Even some of the mining pools have tried not running their own nodes and instead proxying work from other pools. The cost of running a node is an often-cited reason. Some portion of this cost may be an illusion, and some may be a constant (e.g. software maintenance), but to the extent that the cost is proportional to the load on the network, higher limits would not be improving things.

I've been maintaining a node on my 100Mb/s domestic connection since 2012. It takes less than 800MB of RAM now, and I have 24GB. CPU load is <0.5% of a Core i5. Hard drive space is essentially infinite. I don't anticipate any problem even if everything scales up by 10x, or 100x with some optimization.

Therefore, people are not running full nodes simply because they don't really care. Cost is mostly an excuse. From a development and maintenance standpoint it's just easier to rely on Blockchain.info than to run a full node. People are not solo mining mostly because of variance, as only pools could survive when the difficulty was growing by 20% every 2 weeks. This may sound bad, but the majority of commercial players and miners are here for profit, not the ideology of Bitcoin.

At the end of the day, theoretically, we only require one honest full node on the network to capture all the wrongdoing in the blockchain and tell the whole world. I'm pretty sure we will have enough big bitcoin whales and altruistic players to maintain full nodes as long as the cost is reasonable, say $100/month. I don't think we will hit this even by scaling up 1000x.

The real problem for scaling is probably in mining. I hope Gavin's O(1) propagation would help a bit.

Quote
What we saw in Bitcoin last year was a rise of ludicrously inefficient services: ones that bounced transactions through several addresses for every logical transaction made by users, games that produced a pair of transactions per move, etc. Transaction volume rose precipitously, but when fees and delays became substantial, many of these services changed strategies and increased their efficiency. Though I can't prove it, I think it is no coincidence that the load has equalized near the default target size.

If the limit was 2MB, the load would be higher, but not doubled. Some people would have to shut down their bitcoind, but we should still have more than enough full nodes to maintain a healthy network. Core developers may have different development priorities (e.g. optimization of network use rather than the payment protocol). These are not questions of life or death.

I hate those spams too but I also recognize that a successful bitcoin network has to be able to handle much more than that.

Quote
Quote
Assume that we aim at paying $1 million/block ($52 billion/year) to secure the network (I consider this a small amount if Bitcoin ever grows to a trillion-dollar market cap). The current 7 tps limit will require a fee of $238/tx, which is way too expensive even for a global settlement network among banks.
This is ignoring various kinds of merged mining income, which might change the equation somewhat... but this is hard to factor in today.

Merge mining incurs extra cost, with the same scaling properties as bitcoin. I'm not sure how bitcoin mining could be substantially funded by merge mining.

Quote
Quote
I think a reasonable choice, with the implications for centralization considered, would be around 100MB/block. That takes 1.5Mb/s of bandwidth in a perfect scenario. That would be a better equilibrium in technical and economical terms.
At the moment— based on how we're seeing things play out with the current load levels on the network— I think 100MB blocks would be pretty much devastating to decentralization; in a few years, likely less so, but at the moment it would be even more devastating to the existence of a fee market.

I'm just trying to set a realistic target, not saying that we should raise the limit to 100MB today. However, the 1MB limit will become a major limiting factor quite soon, most likely within 2 years.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: jl2012 on August 22, 2014, 04:41:11 AM
This is how I see the problem:

1. 1MB was just an arbitrary choice to protect the network at alpha stage. Satoshi made it clear that he intended to raise it.

2. With some calculation I think 100MB is a realistic target, to keep the whole thing reasonably decentralized, to charge a competitive transaction fee, and to offer enough profit for miners to keep the network safe.

3. To reach the 100MB target we should raise it gradually

4. We should consider limiting the growth of the UTXO set if MAX_BLOCK_SIZE is increased. For each block, calculate (total size of new outputs - total size of spent outputs) and put a limit on it.

5. We won't know the price of bitcoin in the future. Requesting miners to give up a fixed amount of bitcoin for a bigger block size could become problematic.

6. I have demonstrated that the block size could be increased with a soft-fork. I would like to know whether people prefer a cumbersome soft-fork as I suggested (https://bitcointalk.org/index.php?topic=283746.0), or a simple hard-fork as Satoshi suggested (https://bitcointalk.org/index.php?topic=1347.msg15366#msg15366). Either choice has its own risk and benefit.
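Point 4 could be sketched roughly like this (the limit value and the flat lists of byte sizes are purely illustrative; real consensus code would measure the serialized outputs in the actual block):

```python
# Sketch of point 4: cap the per-block growth of the UTXO set.
# Sizes are serialized sizes in bytes; the cap is not a real consensus value.
MAX_UTXO_GROWTH = 100_000  # hypothetical per-block cap, in bytes

def utxo_growth(new_output_sizes, spent_output_sizes):
    """Net change in UTXO-set size caused by one block, in bytes:
    (total size of new outputs) - (total size of spent outputs)."""
    return sum(new_output_sizes) - sum(spent_output_sizes)

def block_within_utxo_limit(new_output_sizes, spent_output_sizes):
    return utxo_growth(new_output_sizes, spent_output_sizes) <= MAX_UTXO_GROWTH
```

A block that spends as much output data as it creates passes regardless of its raw size, which is the point: the rule taxes UTXO bloat rather than throughput.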


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: gmaxwell on August 22, 2014, 05:31:13 AM
Some of the 1MB-block supporters believe we should keep the limit forever, and move 99.9% of the transactions to off-chain. I just want to point out that their logic is completely flawed.
Can you cite these people specifically?  The strongest I've seen is that it "may not be necessary" and shouldn't be done unless the consequences are clear (including mining incentives, etc), the software well tested, etc.

Quote
I've been maintaining a node on my 100Mb/s domestic connection since 2012. It takes less than 800MB of RAM now, and I have 24GB. CPU load is <0.5% of a Core i5. Hard drive space is essentially infinite. I don't anticipate any problem even if everything scales up by 10x, or 100x with some optimization.
Great. I live in the middle of silicon valley and no such domestic connection is available at any price (short of me paying tens of thousands of dollars NRE to lay fiber someplace). This is true for much of the world today.

Quote
Therefore, people are not running full node simply because they don't really care. Cost is mostly an excuse.
I agree with this partially, but I know it's not at all the whole truth of it. Right now, even on a host with solid gigabit connectivity you will take days to synchronize the blockchain— this is due to dumb software limitations which are being fixed... but even with them fixed, on a quad-core i7 3.2GHz and a fast SSD you're still talking about three hours. With 100x that load you're talking about 300 hours— 12.5 days.

Few who are operating in any kind of task driven manner— e.g. "setup a service" are willing to tolerate that, and I can't blame them.

Quote
People are not solo mining mostly because of variance,
There is no need to delegate your mining vote to a third party to mine— it would be perfectly possible for pools to pay according to shares that pay them, regardless of where you got your transaction lists from— but they don't do this.

Quote
At the end of the day, theoretically, we only require one honest full node on the network to capture all the wrongdoing in the blockchain, and tell the whole world.
And tell them what? That hours ago the miners created a bunch of extra coin out of thin air ("no worries, the inflation was needed to support the economy/security/etc. and there is nothing you can do about it because it's hours buried and the miners won't reorg it out and any attempt to do so opens up a bunch of double-spending risk")— How exactly does this give you anything over a centralized service that offers to let people audit it? In both cases there can always be some excuse good enough to justify compromising the properties once you've left the law of math and resorted to enforcement by men and politics.

In the whitepaper a better path is mentioned that few seem to have noticed: "One strategy to protect against this would be to accept alerts from network nodes when they detect an invalid block, prompting the user's software to download the full block and alerted transactions to confirm the inconsistency". Sadly, I'm not aware of any brainstorming behind what it would take to make that a reality beyond a bit I did a few years ago (https://en.bitcoin.it/wiki/User:Gmaxwell/features#Proofs). (... even if I worked on Bitcoin full time I couldn't possibly personally build all the things I think we need to build, there just aren't enough hours in the day)

That isn't the whole tool in the belt, but I point it out to highlight that what you're suggesting above is a real and concerning relaxation of the security model, which moves bitcoin closer to the trust-us-were-lolgulated banking industry... and that it is not at all obvious to me that such compromises are necessary.

It's beyond infuriating to me when I hear a dismissive tone, since pretending these things don't have a decentralization impact removes all interest from working on the technical tools needed to bridge the gap.

Quote
The real problem for scaling is probably in mining.
I'm not sure why you think that— miners are paid for their participation. Some of them have been extracting revenue of hundreds of thousands of dollars a month in fees from their hashers. There are a lot of funds there to pay for equipment.

Quote
I hate those spams
Oh, I wasn't trying to express any opinion/dislike on the inefficient use but to point out that to some extent load expands to fill capacity, and if the price is too low people will use it wastefully or selfishly.

Quote
Merge mining incurs extra cost, with the same scaling properties as bitcoin. I'm not sure how bitcoin mining could be substantially funded by merge mining.
Same cost for miners, who are paid for their resources. Not the same cost for verifiers, because not everyone has to verify everything.

Quote
I'm just trying to set a realistic target, not saying that we should raise the limit to 100MB today. However, the 1MB limit will become a major limiting factor quite soon, most likely within 2 years.
In spite of all the nits I'm picking above I agree with you in broad strokes.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: jl2012 on August 22, 2014, 08:17:36 AM
Some of the 1MB-block supporters believe we should keep the limit forever, and move 99.9% of the transactions to off-chain. I just want to point out that their logic is completely flawed.
Can you cite these people specifically?  The strongest I've seen is that it "may not be necessary" and shouldn't be done unless the consequences are clear (including mining incentives, etc), the software well tested, etc.

I am not going to cite them and provoke unnecessary debate. As you are not one of them, let's stop here.

Quote
Quote
I've been maintaining a node on my 100Mb/s domestic connection since 2012. It takes less than 800MB of RAM now, and I have 24GB. CPU load is <0.5% of a Core i5. Hard drive space is essentially infinite. I don't anticipate any problem even if everything scales up by 10x, or 100x with some optimization.
Great. I live in the middle of silicon valley and no such domestic connection is available at any price (short of me paying tens of thousands of dollars NRE to lay fiber someplace). This is true for much of the world today.

This kind of connection has been available in Hong Kong for maybe 10 years. Today, for $20/mo we have 100Mb/s. We can even have a 1Gb/s fiber connected directly to the computer at home for only $70/mo.

Anyway, I know our case is atypical. However, I'd be really surprised if you couldn't do the same in silicon valley in 10 years. Also, I doubt one would have any difficulty renting a 1U colocation space with 100Mb/s for $100/mo in silicon valley.

Quote
Quote
Therefore, people are not running full node simply because they don't really care. Cost is mostly an excuse.
I agree with this partially, but I know it's not at all the whole truth of it. Right now, even on a host with solid gigabit connectivity you will take days to synchronize the blockchain— this is due to dumb software limitations which are being fixed... but even with them fixed, on a quad-core i7 3.2GHz and a fast SSD you're still talking about three hours. With 100x that load you're talking about 300 hours— 12.5 days.

Few who are operating in any kind of task driven manner— e.g. "setup a service" are willing to tolerate that, and I can't blame them.

Most of them are here for profit, so they won't do it anyway, whether it's 1MB or 100MB.

Also, I can't see why we really need to verify every single transaction back to the genesis block. If there is no fork floating around, and no one has complained in the last few months, it's really safe to assume the blockchain (up to a few months ago) is legitimate.

Quote
Quote
People are not solo mining mostly because of variance,
There is no need to delegate your mining vote to a third party to mine— it would be perfectly possible for pools to pay according to shares that pay them, regardless of where you got your transaction lists from— but they don't do this.

Again, they won't do it whether it's 1MB or 100MB, unless the protocol forces them to do so.

Quote
Quote
At the end of the day, theoretically, we only require one honest full node on the network to capture all the wrongdoing in the blockchain, and tell the whole world.
And tell them what? That hours ago the miners created a bunch of extra coin out of thin air ("no worries, the inflation was needed to support the economy/security/etc. and there is nothing you can do about it because it's hours buried and the miners won't reorg it out and any attempt to do so opens up a bunch of double-spending risk")— How exactly does this give you anything over a centralized service that offers to let people audit it? In both cases there can always be some excuse good enough to justify compromising the properties once you've left the law of math and resorted to enforcement by men and politics.

In the whitepaper a better path is mentioned that few seem to have noticed: "One strategy to protect against this would be to accept alerts from network nodes when they detect an invalid block, prompting the user's software to download the full block and alerted transactions to confirm the inconsistency". Sadly, I'm not aware of any brainstorming behind what it would take to make that a reality beyond a bit I did a few years ago (https://en.bitcoin.it/wiki/User:Gmaxwell/features#Proofs). (... even if I worked on Bitcoin full time I couldn't possibly personally build all the things I think we need to build, there just aren't enough hours in the day)

That isn't the whole tool in the belt, but I point it out to highlight that what you're suggesting above is a real and concerning relaxation of the security model, which moves bitcoin closer to the trust-us-were-lolgulated banking industry... and that it is not at all obvious to me that such compromises are necessary.

It's beyond infuriating to me when I hear a dismissive tone, since pretending these things don't have a decentralization impact removes all interest from working on the technical tools needed to bridge the gap.


Why would it take hours to broadcast a warning like that? Let's say you are a merchant and you can't afford your own full node. You primarily rely on Blockchain.info, but you also run an SPV client to monitor the block headers. As long as one of your peers is honest, you should be able to detect any problem in the data of Blockchain.info within 6 confirmations (since the block headers won't match).
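That header cross-check could be sketched like this (a hypothetical toy, not real SPV client code; the "chains" here are just lists of header hashes by height):

```python
# Hypothetical sketch: a merchant's light client compares the header-hash
# chain reported by a hosted service (e.g. Blockchain.info) against the
# chain its own SPV peers report. A mismatch flags the service's data.
def first_divergence(peer_chain, service_chain):
    """Return the first height where the two header-hash chains disagree,
    or None if they agree over the compared range."""
    for height, (ours, theirs) in enumerate(zip(peer_chain, service_chain)):
        if ours != theirs:
            return height
    return None
```

If at least one peer is honest, a fabricated block from the service diverges from the honest header chain at the forged height, well within the 6 confirmations mentioned above.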


Quote
Quote
The real problem for scaling is probably in mining.
I'm not sure why you think that— miners are paid for their participation. Some of them have been extracting revenue of hundreds of thousands of dollars a month in fees from their hashers. There are a lot of funds there to pay for equipment.

I mean, a 100MB block is not a problem for bitcoin whales and altruistic players running full nodes, and we will have enough honest full nodes to support SPV clients. For miners, however, as block propagation is crucial for their profit, a big block with O(n) propagation time will cause problems. Gavin's O(1) proposal gives some hope, but I have to admit I don't understand the maths behind it.

Quote
Quote
I'm just trying to set a realistic target, not saying that we should raise the limit to 100MB today. However, the 1MB limit will become a major limiting factor quite soon, most likely within 2 years.
In spite of all the nits I'm picking above I agree with you in broad strokes.


Whether it will be a hardfork or an auxiliary-block softfork, this will be the most dramatic change to the protocol. However, I can't see any real progress toward consensus despite years of debate.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: solex on August 23, 2014, 07:03:20 AM
Whether it will be a hardfork or an auxiliary-block softfork, this will be the most dramatic change to the protocol. However, I can't see any real progress toward consensus despite years of debate.

Indeed, but it is now apparent (to me anyway) that simply increasing the block limit to allow larger and larger blocks to propagate is not necessary, as this is not the optimal long-term solution. The optimal solution takes advantage of the fact that most transactions are already known to most peers before the next block is mined. So, "highly abbreviated" new blocks can be propagated instead. This is beyond mere data compression because it relies on the receiver knowing most of the block contents in advance.

We see in the O(1) thread that there are excellent proposals on the table for block propagation efficiency:
A) short transaction hashes: as in block network coding (https://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding), and similarly in the optimized block relay (https://bitcoinfoundation.org/2014/08/a-bitcoin-backbone/) (Matt Corallo already has a relay service live)
B) IBLT blocks (https://gist.github.com/gavinandresen/e20c3b5a1d4b97f79ac2)

Even better, they are compatible such that A can be used within B giving enormous efficiency gains. This must be the long-term goal.

The next question is: Can the max block size be made flexible (for example: a function of the median size of the previous 2016 blocks) as a phase in the process of introducing block propagation efficiency as a consensus change?
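For example, a flexible limit along those lines might look like the following (the multiplier and the 1MB floor are illustrative, not a concrete proposal):

```python
from statistics import median

def next_max_block_size(recent_sizes, multiple=2.0, floor=1_000_000):
    """Hypothetical adaptive limit: a multiple of the median size (bytes)
    of the last 2016 blocks, never dropping below the current 1MB floor."""
    return max(floor, int(multiple * median(recent_sizes[-2016:])))
```

So if recent blocks median 300KB the limit stays at the 1MB floor, while a sustained rise in block sizes would raise the cap one retarget period at a time.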


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: coinft on August 23, 2014, 12:58:17 PM
Whether it will be a hardfork or an auxiliary-block softfork, this will be the most dramatic change to the protocol. However, I can't see any real progress toward consensus despite years of debate.

Indeed, but it is now apparent (to me anyway) that simply increasing the block limit to allow larger and larger blocks to propagate is not necessary, as this is not the optimal long-term solution. The optimal solution takes advantage of the fact that most transactions are already known to most peers before the next block is mined. So, "highly abbreviated" new blocks can be propagated instead. This is beyond mere data compression because it relies on the receiver knowing most of the block contents in advance.

We see in the O(1) thread that there are excellent proposals on the table for block propagation efficiency:
A) short transaction hashes: as in block network coding (https://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding), and similarly in the optimized block relay (https://bitcoinfoundation.org/2014/08/a-bitcoin-backbone/) (Matt Corallo already has a relay service live)
B) IBLT blocks (https://gist.github.com/gavinandresen/e20c3b5a1d4b97f79ac2)

Even better, they are compatible such that A can be used within B giving enormous efficiency gains. This must be the long-term goal.

The next question is: Can the max block size be made flexible (for example: a function of the median size of the previous 2016 blocks) as a phase in the process of introducing block propagation efficiency as a consensus change?

As far as I understand those schemes, they are only good if you run a node with a current memory pool. The full transactions still need to be communicated at some point, and still need to be written to the blockchain in full. You couldn't just write IBLTs to the blockchain, because no one without your memory pool could reconstruct the TXs.
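A toy sketch of that constraint (hypothetical names and short-id scheme): a short-id block can only be reconstructed from transactions already in the local mempool, and anything missing must still be transferred in full.

```python
# Toy sketch: reconstructing a relayed block from short transaction ids.
# Only transactions already in the local mempool can be recovered; the
# rest must still be requested from a peer as full transactions.
def reconstruct_block(short_ids, mempool):
    """mempool maps a short id (e.g. a txid prefix) -> full transaction."""
    recovered, missing = [], []
    for sid in short_ids:
        if sid in mempool:
            recovered.append(mempool[sid])
        else:
            missing.append(sid)  # fetch the full transaction separately
    return recovered, missing
```

This is why the schemes save propagation bandwidth at block time but don't shrink what ultimately has to be stored in the chain.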


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: lnternet on August 28, 2014, 06:09:58 PM
I think I'm not alone when I say we are dangerously close to hitting the limit already, and that we need a quick fix right away. The only easy and quick fix is raising the limit, and, to not make people upset, just doubling it to 2MB.

Reading about soft fork vs hard fork, this appears to be more difficult than a simple user like me imagines. But something needs to be done soon. Transactions getting stuck when the next boom hits is something we can all agree shouldn't happen.



In related news, I also feel spamming the network is way too cheap right now. I can spam 10 tx/s for less than 9 BTC for a full day (assuming a 0.01 mBTC fee). If I were planning to buy in big time, I would do this, expecting a price drop with probability high enough to make the whole endeavor worth it.
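The arithmetic behind that figure (0.01 mBTC = 1,000 satoshi per transaction):

```python
# Cost of spamming 10 tx/s for one day at a 0.01 mBTC fee per transaction.
FEE_SATOSHI = 1_000                  # 0.01 mBTC = 1,000 satoshi
tx_per_day = 10 * 86_400             # 10 tx/s sustained for 24 hours
daily_cost_btc = tx_per_day * FEE_SATOSHI / 100_000_000
print(daily_cost_btc)                # 8.64 -- just under 9 BTC per day
```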


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: gmaxwell on August 28, 2014, 10:43:47 PM
The next question is: Can the max block size be made flexible (for example: a function of the median size of the previous 2016 blocks) as a phase in the process of introducing block propagation efficiency as a consensus change?
Letting miners _freely_ expand blocks is a bit like asking the foxes to guard the hen-house— it's almost equivalent to no limit at all. Individual miners have every incentive to put as many fee-paying transactions in their own blocks as they can (esp. presuming propagation delays are resolved, or the miner has a lot of hashpower and so propagation hardly matters)— because they only need to verify once, the cost of a few more CPUs or disks isn't a big deal. In theory (I say this because miners have been bad with software updates), they can afford the operating costs of fixing things like integer overflows that arise with larger blocks, especially since they usually have little customization— other nodes, not so much.

Since miners can always put fewer transactions in, it's not unreasonable for the block-chain to coordinate that soft-limit (in the hopes that baking in the conspiracy discourages worse ones from forming). But in that case it shouldn't be based on the actual size, but instead on an explicit limit, so that expressing your honest belief that the network would be better off with smaller blocks is not at odds with maximizing your revenue in this block.

If you want to talk about new limit-agreement mechanisms, I think that txout creation is more interesting to limit than the size directly though... or potentially both.

Even for these uses, median might not be the right metric— consider that recently it would have given control to effectively a single party, and at the moment it would effectively give it to two parties. You could easily use the median for lowering the limit and (say) the 25th percentile for raising it, though even that's somewhat sloppy, because having more than half the hashrate in your conspiracy means you can have all the hashrate if you really want it. :(
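A sketch of that asymmetric rule (illustrative only; the inputs are miners' explicitly voted limits rather than actual block sizes, per the previous paragraph, and the percentile indexing is deliberately crude):

```python
# Lower the limit when the median of miners' voted limits is below it,
# but require the 25th percentile (i.e. 75% of votes) to be above it
# before raising. All names and thresholds are made up for illustration.
def adjust_limit(current, voted_limits):
    votes = sorted(voted_limits)
    median_vote = votes[len(votes) // 2]
    pct25_vote = votes[len(votes) // 4]
    if median_vote < current:
        return median_vote   # a simple majority can lower the limit
    if pct25_vote > current:
        return pct25_vote    # raising needs 75% of votes above current
    return current
```

The asymmetry means a bare majority can tighten the limit, but loosening it demands much broader agreement.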


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: WhiteBeard on August 29, 2014, 12:04:53 AM
While this may not directly address block size, it will have an effect on it as well as other aspects of the entire system.

There is an inefficiency in bitcoin related to people needing to consolidate their inputs and it ends up costing the system more than the transaction fees are worth due to unnecessary traffic.   What we need to do is remove the need for people to consolidate all their tiny transactions, by doing it for them automatically.

Couldn't we add a feature to the wallet system that automatically consolidates all extant inputs to a particular address into one transaction, if any inputs are at least X days old? This would work by adding a marker/token to a new block that effectively authenticates the presence of all these inputs from previous blocks, so there is no need to search further back down the block-chain to authenticate them, thus decreasing the data size of any future outputs from that address. I would also make it automatically re-consolidate the returned change inputs by adding them back to the address from which they originally came, without having them travel the block-chain again.

To take this further, it effectively would be an auto-pruning measure, because at some point you would never need to go back to the genesis block for authentication.
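The age-based selection step of this idea might be sketched as follows (hypothetical UTXO records with a `timestamp` field; X here defaults to 30 days):

```python
import time

# Hypothetical sketch: gather all outputs at an address that are at least
# X days old, as candidates for an automatic consolidation transaction.
def outputs_to_consolidate(utxos, min_age_days=30, now=None):
    """utxos: list of {"timestamp": unix_time, ...} records (made-up shape)."""
    now = time.time() if now is None else now
    cutoff = now - min_age_days * 86_400
    return [u for u in utxos if u["timestamp"] <= cutoff]
```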

This, ultimately, may or may not have an effect on difficulty, so care needs to be taken that we do not introduce inflation into the economy.

What size should each block be?  I'd say as compact as we can make them with as much data as we can cram into them. My proposal helps with that aspect.  Eventually, we will see the block-chain housing significantly more data than it already does.  

There are advantages and disadvantages to every proposal so far.  Does increasing/decreasing block sizes contribute to maintenance of current and future difficulty standards and thus have a positive or negative impact on inflation?  When we get a protocol in place to begin pruning we will then see a need to balance block size against chain length. To that end, I suppose, there will have to be a block size growth factor calculated in or the chain will become "unmanageably" long even with pruning and difficulty will skyrocket, especially as seen against the back-drop of the goal of bitcoin to become the currency of the world economy...

Just my ideas! What do you all think?

whitebeard


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: smooth on September 02, 2014, 05:36:21 AM
Quote
Edit: With respect to CryptoNote and Monero, I do see merit in the argument that the fee penalty alone may not be enough to constrain blocksize; however when combined with the difficulty increase requirement the picture changes. As for the specifics of the Monero blockchain there are also other factors including dust from mining pools that led to the early bloating, and we must also keep in mind the CryptoNote and Monero also have built in privacy which adds to the blocksize by its very nature.
Yes, it's complicated, but it's also concerning. The only altcoins that I'm aware of which have diverged from the current Bitcoin behavior on this front, and did so with the benefit of all the prior discussion being available to them, have been suffering from crippling bloat.

Glancing at block explorers for Monero and ByteCoin (https://minergate.com/blockchain/bcn/blocks)... I'm not seeing crippling bloat right now. I see lots of very-few-transactions blocks.

Glancing at recent release notes for ByteCoin (https://forum.cryptonote.org/viewtopic.php?f=12&t=257), it looks like transactions were not being prioritized by fee, which is fundamental to getting a working fee market.

Have Monero and ByteCoin fixed the bloat problem, or did the transaction spammers just get bored and go away?

Sorry for the late reply. I wasn't aware of this thread discussing Monero.

There was never any "crippling bloat." I'm not even sure what gmaxwell is talking about. There are certainly some forward-looking concerns about the future size of the blockchain (as with Bitcoin). But as of today there is no crippling. I run a Monero node on a smartphone-class nettop (just to see if I could), and it works fine.

There have been some growing pains. Early on after the release, one pool's code didn't have a minimum payout threshold, so pools paid out on every block. Obviously that increased the volume of transactions, but the network still functioned normally and required no adjustment (save for fixing the pool code). And there was recently a spam attack on Monero, and we responded by raising the (previously unrealistic, <0.01 USD) default fee; but that only lasted a day or so and added 20 MB to the blockchain before the higher fee kicked in and the spammers stopped. Finally, the on-disk format is currently inefficient, and is being improved.

Still, none of these issues has been crippling at all.


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: jl2012 on December 15, 2016, 03:15:26 AM
Thanks to David Jerry for bringing this up (https://twitter.com/digitsu/status/809152914023251968 ). Here is my reply.

1MB-block supporters have 2 major arguments: decentralization and block space scarcity. By considering ONLY these 2 factors, however, the BEST solution is to limit a block to only 2 transactions: a reward transaction and a normal transaction. This would keep the block size to the absolute minimum, and make sure everyone could mine with a 9.6k modem and an 80386 computer.

Yes, if we consider ONLY these 2 factors, this is the best solution. But the very obvious fact is that these are not the only factors.

The truth is that the 1MB limit was just an arbitrary choice by Satoshi, made without considering its implications carefully (at least not in any well-documented way). He chose "1" simply because it's the first natural number. Had he chosen 2MB instead of 1MB, I am pretty sure that Bitcoin would have worked in exactly the same way as it works now. Had he chosen 0.5MB, we might have already run into big trouble.

Yes, I still think 1MB is an arbitrary limit. If Satoshi had used 2MB, Bitcoin should have worked in exactly the same way as it did in 2014 (when I wrote the original message). However, if we had 2MB today, without all the optimizations brought by Bitcoin Core in the past 2 years (libsecp256k1, compact blocks, pruning, etc.), many full nodes would have broken.

Had he chosen 0.5MB, it might have been easier for people to reach consensus to increase it. However, I have to say the effect of hitting the limit is not as bad as I thought. One indication is the robust price uptrend this year. OTOH, most tx confirmation problems are related to wallet design.

We want to maximize miner profit because that will translate to security. However, a block size limit does not directly translate to maximized miner profit. Consider the most extreme "2 transactions per block" limit: that would crush the value of Bitcoin to zero and no one would mine for it. We need to find a reasonable balance, but 1MB is definitely not a good one. Assume that we aim at paying $1 million/block ($52 billion/year) to secure the network (I consider this a small amount if Bitcoin ever grows to a trillion-dollar market cap). The current 7 tps limit would require a fee of $238/tx, which is way too expensive even for a global settlement network among banks.

Note that when I say "1MB is definitely not a good one", the context is bitcoin would become a global settlement network with a trillion market cap. I think my estimation is still correct, but this is a long term vision.


To answer the question of "what to replace the 1 MB block size limit with", we first need to set a realistic goal for Bitcoin. In the long term, I don't think bitcoin could/should be used for buying a cup of coffee. To be competitive with VISA, the wiki quotes 2000tps, or 1200000tx/block, or 586MB/block (assuming 512bytes/tx). To be competitive with SWIFT, which has about 20 million transactions per day, it takes 232tps, or 138889tx/block, or 68MB/block. Divide the $1 million/block target by these transaction counts, and that's $0.83/tx and $7.2/tx respectively. A fixed rate of $7.2/tx is a bit high but still (very) competitive with wire transfer. $0.83/tx is very competitive for transactions over $100 of value. I think a reasonable choice, with the implications for centralization considered, would be around 100MB/block. That takes 1.5Mb/s of bandwidth in a perfect scenario. That would be a better equilibrium in technical and economic terms.

This 100MB estimation does not take payment channels like LN into consideration. It also doesn't take better cryptography like Schnorr signature aggregation (saving lots of block space) into consideration. Schnorr signatures alone could easily cut my estimated size by more than half (and they are totally on-chain). With more optimizations (weak blocks, UTXO set growth limits, fraud proofs, partial block validation by SPV wallets), I don't think 50MB blocks is an insane idea. It just can't happen today.

Finally, as a not-so-related note, one may also note that I've always been a fan of softforks since I joined this forum, for example:
https://bitcointalk.org/index.php?topic=283746.0
https://bitcointalk.org/index.php?topic=256516.10
https://bitcointalk.org/index.php?topic=253385.0
https://bitcointalk.org/index.php?topic=1103281.0


Title: Re: Share your ideas on what to replace the 1 MB block size limit with
Post by: Jet Cash on December 15, 2016, 05:38:17 AM
There are some great replies in this thread.

The impression that I get is that there are a number of objectives that have been discussed (blocksize is a possible solution and not an objective).

- Speed up transaction confirmation for end users
- Reduce bloat in transactions
- Reduce the number of records required for each transaction
- Maintain the de-centralisation of mining.

At the moment block generation time is too long, and it can result in transactions taking up to an hour to be confirmed. Bitcoin won't remain competitive if this continues, or gets worse. I read the replies to my thread suggesting a reduction in the generation time, and I am grateful for the informed comments, and I accept the reasons for not reducing the time in the current environment. I think it is desirable for better minds than mine to address this problem to see if there can be a way to reduce the generation time.

Reducing bloat in transactions - one way to do this is to REDUCE the blocksize, maybe to 500K, and combine this with faster block generation. Obviously this is subject to the constraints mentioned above. SegWit should be a great help with this as well.

Reducing the number of records - there was an interesting suggestion about combining micro-payments into one transaction within a wallet. It would be useful if this could be done with a zero-fee transaction, but would miners be prepared to support this? There is not the same priority for the confirmation of these consolidations, so maybe they could wait for anything up to a week. Listing them in a consolidation pool after they have been verified could give miners a handy alternative to mining empty or near-empty blocks. This would help to speed up future payments from that wallet, and it would reduce the number of transactions waiting to go into a block. You wouldn't need a fork for this, as miners not supporting it would just ignore the second pool, and those who did support it would create blocks that were valid under the current rules - is this true, or am I misunderstanding the block filling process?

Maintaining the decentralisation of mining. I accept the point that reducing the block generation time can create "churning" of the blockchain, and would give larger miners an advantage when the block generation time is minimal. If it were possible to enforce a minimum time for the generation of a new block (say one minute), then surely that would cut out much of the churning, and may even attract a few small miners to return to mining.

Side chains are also beneficial, and maybe we can explore ways to increase their use more effectively. Maybe all faucet payments should be moved onto a specialist side chain, for example.