Bitcoin Forum

Bitcoin => Development & Technical Discussion => Topic started by: caveden on November 20, 2010, 11:33:48 PM



Title: Block size limit automatic adjustment
Post by: caveden on November 20, 2010, 11:33:48 PM
Hello all,

I recently posted in another thread to express my concern about this subject, but I thought it might deserve a topic of its own.

This block size rule is something really "dangerous" to the protocol. Rules like that are almost impossible to change once many clients implement the protocol. Take SMTP as an example: several improvements could be made to it, but how? It's impractical to synchronize the change.

And, well, if we ever want to scale, such a limit will have to grow. I really think we should address this problem while there is only one client used by everyone and changes to the protocol are still feasible, because in the future we may not be able to make them at all.

As far as I understand, one of the purposes of this block size limit is to prevent flooding. Another, as mentioned here (http://bitcointalk.org/index.php?topic=1847.msg22836#msg22836), is to keep transaction fees from getting "too small," in order to preserve an incentive for block generation once coin production isn't that interesting anymore. (If only a limited number of transactions can enter a block, those with the smallest fees won't be processed quickly...)

So, if we really need a block size limit, and if we also need it to scale, why not make the limit adjust itself to the transaction rate, just as the generation difficulty adjusts itself to the generation rate?

Some of the smart guys in this forum could come up with an adjustment formula that takes into consideration the total size of all transactions in the latest X blocks and calculates what the block size limit should be for the next X blocks, just like the difficulty factor. This way we avoid this "dangerous" constant in the protocol.
One of the things the smart guys would have to decide is how tight the adjustment should be. Should it always leave enough room for all transactions in the next block, or should blocks be kept "tight" enough that some transactions have to wait, thus pushing up the transaction fees?
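
Just to make the idea concrete, here is a rough sketch in Python of what such a recalculation could look like. The window size and headroom factor are placeholders I invented; picking the right values is exactly the part the smart guys would have to work out:

Code:
# Sketch of a difficulty-style block size limit adjustment.
# WINDOW and HEADROOM are made-up placeholders, not a proposal
# for the actual values.
WINDOW = 2016        # recalculate every WINDOW blocks
HEADROOM = 1.5       # how much room to leave above recent demand

def next_size_limit(recent_block_sizes):
    """recent_block_sizes: byte sizes of the last WINDOW blocks."""
    average = sum(recent_block_sizes) / len(recent_block_sizes)
    # A small HEADROOM keeps blocks "tight" (pushing fees up);
    # a large one leaves room for every pending transaction.
    return int(average * HEADROOM)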

Okay, I do realize that it would allow flooders to slowly increase the limit, but to what end? As long as generators aren't accepting 0-fee transactions, a flooder would have to pay to perform his attack.

So, what do you think?


Title: Re: Block size limit automatic adjustment
Post by: RHorning on November 21, 2010, 12:20:31 AM
So, if we really need a block size limit, and if we also need it to scale, why not make the limit adjust itself to the transaction rate, just as the generation difficulty adjusts itself to the generation rate?

Some of the smart guys in this forum could come up with an adjustment formula that takes into consideration the total size of all transactions in the latest X blocks and calculates what the block size limit should be for the next X blocks, just like the difficulty factor. This way we avoid this "dangerous" constant in the protocol.
One of the things the smart guys would have to decide is how tight the adjustment should be. Should it always leave enough room for all transactions in the next block, or should blocks be kept "tight" enough that some transactions have to wait, thus pushing up the transaction fees?

Okay, I do realize that it would allow flooders to slowly increase the limit, but to what end? As long as generators aren't accepting 0-fee transactions, a flooder would have to pay to perform his attack.

So, what do you think?

I also think this is a useful idea to follow up on.  In this case, it might be nice to have a "floating average" of, say, the previous 2000 blocks, plus some constant or constant percentage.  I'm suggesting perhaps the mean + 50%, to give some flexibility for it to expand.  We could quibble over the exact amount of expansion room (perhaps allow 100% or 200% of the mean), but some sort of limit certainly sounds like a good idea and is something very easy and quick to compute.  It could also be calculated independently by all of the clients to quickly accept or reject a particular block.
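
Every client could run a cheap check along these lines against each incoming block (just a sketch; the 2000-block window and the mean + 50% are the suggestions above, and just as open to quibbling):

Code:
# Sketch: accept or reject a block by comparing its size to the
# floating average of the previous 2000 blocks plus 50%.
def block_size_acceptable(block_size, last_2000_sizes):
    mean = sum(last_2000_sizes) / len(last_2000_sizes)
    return block_size <= mean * 1.5   # placeholder expansion factor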

A genuine and sustained increase in transactions over the long term would be accepted into the network at the increased rate without too much "push back" as the network adjusts to the new level.

Besides, we can run the current chain through the algorithm (whatever we come up with) and see how the network would have adjusted based on "real world" data.  It might be fun to see just how that would have worked out, too.


Title: Re: Block size limit automatic adjustment
Post by: MoonShadow on November 21, 2010, 12:42:00 AM
I think that having a floating block size limit is likely to affect the generation of blocks by causing a not-so-small percentage of generated blocks to be rejected by the network.

Perhaps a better way is to have a max size for free transactions, perhaps derived as a percentage of the fee-paying portion of the block, with 1 meg as the starting point.

Say that all fee-paying transactions, no matter how small, can be included in the current block, and then 20% of that total may be added in free transactions, if the generator's policy provides for that.  If the resulting size is below 1 meg, then more free transactions can be included up to that point.  This allows the block size to grow as needed without uncapping the restraint on spamming, while also allowing backlogs of free transactions to clear during off-peak periods.
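
As I picture it, the validity check would be something like this sketch (the 20% and the 1 meg floor are the starting points described above, nothing final):

Code:
MIN_MAX = 1_000_000  # the 1 meg starting point, in bytes

def free_tx_allowance(fee_paying_bytes):
    # Free transactions may add 20% on top of the fee-paying
    # portion; if that still leaves the block under 1 meg, they
    # may instead fill the block up to the 1 meg floor.
    return max(int(0.20 * fee_paying_bytes),
               MIN_MAX - fee_paying_bytes)

def block_valid(fee_paying_bytes, free_bytes):
    return free_bytes <= free_tx_allowance(fee_paying_bytes)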

This adds yet another calculation that the client must perform to check a block's validity, but it changes the max block limit from a hard limit, as it is now, to one that can 'stretch' to meet demand and still perform its primary function.  It also allows future client upgrades to have a higher 'open max' or 'min max' (what the hell would we call this rule?) than older clients without it being a destructive change.  Said another way, a future client development group decides that it's time to up the 'min max' from 1 meg to 3 megs, so they allow their client to accept 3 meg blocks that don't adhere to the 20%-or-less free rule, but don't yet produce them.  Other development groups might agree or disagree, or might move at a different pace, but the change would only really take effect once more than half of the network had upgraded; then generated blocks with a 3 meg 'min max' would be accepted into the main chain, forcing the rest of the network to comply or fork, but without harm to the interim processing of the network.


Title: Re: Block size limit automatic adjustment
Post by: theymos on November 21, 2010, 01:18:52 AM
The main reason for the block size limit is disk space. At 1MB, an attacker can force every generator to permanently store 53GB per year. At 10MB, an attacker can force every generator to permanently store 526GB per year. Even a small change in block size makes a big difference on the load that generators must bear. It must not be changed until the network is ready for it, and that time cannot be predicted reliably.
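
The arithmetic behind those figures:

Code:
blocks_per_year = 6 * 24 * 365          # one block per ~10 minutes
print(blocks_per_year)                  # 52560
print(blocks_per_year * 1_000_000)      # 52,560,000,000 bytes: ~53 GB/year at 1 MB
print(blocks_per_year * 10_000_000)     # ~526 GB/year at 10 MB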

If the block size limit is too high, an attacker can destroy the network by making it impossible for anyone to be a generator. If the block size is too low, fees get higher until the problem is fixed. Automatic adjustments would still carry the risk of adjusting to a limit that is too low.


Title: Re: Block size limit automatic adjustment
Post by: caveden on November 21, 2010, 01:37:03 AM
Even a small change in block size makes a big difference on the load that generators must bear.

Generators must bear whatever the network demands. If we ever reach a "professional" level of hundreds of transactions per minute, generators will have to bear that.

If the block size limit is too high, an attacker can destroy the network by making it impossible for anyone to be a generator. If the block size is too low, fees get higher until the problem is fixed.

That's why an automatic adjustment would be important: it wouldn't allow "too high" or "too low" limits to persist for long.

You do realize that if this limit is a constant, it will be really hard to change it when needed, right?

Automatic adjustments would still carry the risk of adjusting to a limit that is too low.

Yes, like the difficulty adjustment, there might be periods where it's not that precise. But it'd be much better than a constant value, wouldn't it?


Title: Re: Block size limit automatic adjustment
Post by: RHorning on November 21, 2010, 01:51:27 AM
I think that having a floating block size limit is likely to affect the generation of blocks by causing a not-so-small percentage of generated blocks to be rejected by the network.

I'm curious: how would the floating block size limit reject a block that is following the rules?  I'm not convinced on this issue.  The expected maximum size of the block would be known before the block is created, and a block size limit is something already in the network.  All that is being asked here is that this size become a variable rather than a constant, a variable which can grow with the activity of the network as a whole rather than throwing in a whole bunch of exceptions.

Rejected blocks would come about if this rule change created blocks larger than the current maximum block size, which would be rejected by earlier clients following the older rules without the variable block size.  I am suggesting that this would be rare and likely wouldn't happen until well after the whole network (or at least 51%+ of the processing power) switches to the new rule, if it is adopted, whatever the rule is that comes out of this idea.

The main reason for the block size limit is disk space. At 1MB, an attacker can force every generator to permanently store 53GB per year. At 10MB, an attacker can force every generator to permanently store 526GB per year. Even a small change in block size makes a big difference on the load that generators must bear. It must not be changed until the network is ready for it, and that time cannot be predicted reliably.

If the block size limit is too high, an attacker can destroy the network by making it impossible for anyone to be a generator. If the block size is too low, fees get higher until the problem is fixed. Automatic adjustments would still carry the risk of adjusting to a limit that is too low.

I realize that this is to limit the potential damage from a malicious attacker who might try to force a whole bunch of miscellaneous data onto the network in an inefficient manner.  If anything, an algorithm using this concept might actually reduce the amount of data needed by miners and coin generators under such an attack, at least while the number of transactions is still quite small on average.  The question is how to set up the formula so that such an attack is essentially futile and would only result in forcing participants to add fees to their transactions.

Such an attack would have to be persistent and prolonged, something that I think those contributing to Bitcoin would pick out well before it started to even remotely cause much damage in terms of sucking up a whole bunch of permanent storage.  On the other hand, it would provide flexibility, so that special exceptions wouldn't have to be made a couple of years from now, if or when what most people would genuinely acknowledge as a large number of "legitimate" transactions really does require that much data storage on average.  It is a problem we are going to face eventually, and leaving the constant in place just puts off a problem that we know is coming.


Title: Re: Block size limit automatic adjustment
Post by: caveden on November 21, 2010, 01:53:25 AM
I think that having a floating block size limit is likely to affect the generation of blocks by causing a not-so-small percentage of generated blocks to be rejected by the network.

If this rule is defined and documented while we still can, it would become a "protocol-rule". Generators would know it, and know how to adapt to it in order not to lose their blocks.


Title: Re: Block size limit automatic adjustment
Post by: theymos on November 21, 2010, 04:49:11 AM
Generators must bear whatever the network demands. If we ever reach a "professional" level of hundreds of transactions per minute, generators will have to bear that.

If no generators are capable of storing the 10TB block chain or whatever, then there will be no generators and Bitcoin will die. Limit adjustments won't make the chain smaller once it has already grown to a gigantic size.

Quote
You do realize that if this limit is a constant, it will be really hard to change it when needed, right?

It will not be hard to change. It will cause a certain amount of disruption, but it's not difficult. A certain group will change, and the change will either catch on with everyone else or the people changing will realize their new coins are becoming worthless and change back.


Title: Re: Block size limit automatic adjustment
Post by: MoonShadow on November 21, 2010, 05:58:04 AM
Generators must bear whatever the network demands. If we ever reach a "professional" level of hundreds of transactions per minute, generators will have to bear that.

If no generators are capable of storing the 10TB block chain or whatever, then there will be no generators and Bitcoin will die. Limit adjustments won't make the chain smaller once it has already grown to a gigantic size.


It's not actually necessary for generators to keep the entire blockchain.  And even if the blockchain were to outpace growth in storage (something I question), there isn't really a need for most generators to keep a local copy of the blockchain at all.  There is no technical reason that prevents a specialized generation client from contracting with some online shared-storage service that keeps the archive of the blockchain older than a year, in return for a donation of 1% of generated coins.  A new client would still be able to verify the chain and then keep just the block headers, fetching block data older than a year only when needed.  Such fetching would be very rare.  1000 different generating users sharing one well-protected read-only copy of the blockchain would render your concerns moot, whether they were all in one datacenter owned by one financial institution or individuals spread across the Internet.
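
To put a number on how light such a client could be, assuming the 80-byte block header size:

Code:
header_bytes = 80                 # fixed size of a Bitcoin block header
blocks_per_year = 6 * 24 * 365    # 52560
print(header_bytes * blocks_per_year)   # 4,204,800 bytes: ~4 MB of headers/year

So the header chain grows by only a few megabytes per year, no matter how large the blocks themselves get.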


Title: Re: Block size limit automatic adjustment
Post by: caveden on November 21, 2010, 02:06:18 PM
If no generators are capable of storing the 10TB block chain or whatever, then there will be no generators and Bitcoin will die. Limit adjustments won't make the chain smaller once it has already grown to a gigantic size.

I think creighto said all that needed to be said about this. Generators can find a way around this issue, if it ever becomes an issue. And, again, with a proper interface for adding transaction fees, there would be no incentive for a flood attack. So blocks would only grow if they really need to grow - if people really are transacting that much.

Quote
You do realize that if this limit is a constant, it will be really hard to change it when needed, right?

It will not be hard to change. It will cause a certain amount of disruption, but it's not difficult. A certain group will change, and the change will either catch on with everyone else or the people changing will realize their new coins are becoming worthless and change back.

I have to disagree here. You don't easily change a protocol constant. It can only be done while just a few programs implement the protocol. Once the protocol is well diffused throughout the Internet, it's almost impossible to change.
What you are proposing is like a fork of the project, since the chains wouldn't be compatible. Having to fork the project just because the value of a constant became obsolete? It's way too radical. People wouldn't do it, and the bad constant would remain there, causing issues like excessively high transaction fees.


Title: Re: Block size limit automatic adjustment
Post by: ribuck on November 21, 2010, 03:27:09 PM
You don't easily change a protocol constant.

Agreed.

If we can find a workable algorithmic block size, it makes sense to adopt it earlier rather than later.

I have never understood the argument that when transaction numbers rise you can pay a transaction fee for priority, or use free transactions which will get processed "eventually". It makes no sense: if the average number of transactions per hour is more than six blocks' worth, the transaction queue will grow and grow without bound, transaction fees or not.


Title: Re: Block size limit automatic adjustment
Post by: FreeMoney on November 21, 2010, 06:49:44 PM
You don't easily change a protocol constant.

Agreed.

If we can find a workable algorithmic block size, it makes sense to adopt it earlier rather than later.

I have never understood the argument that when transaction numbers rise you can pay a transaction fee for priority, or use free transactions which will get processed "eventually". It makes no sense: if the average number of transactions per hour is more than six blocks' worth, the transaction queue will grow and grow without bound, transaction fees or not.

Something like making the max block size increase to 110% of the average size of the last 2016 blocks seems good.

I thought at first that spam would grow without limit, but it won't if just a few percent of generators refuse to include it. Generators themselves don't have an incentive to bloat blocks, both because it will cost them future disk space and because bigger blocks reduce the equilibrium fee.

Should max block size ever decrease? I don't think so, but way way down the road it might be a problem.
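
A sketch of that retarget, including the no-decrease behavior I'm leaning toward (the 110% and the 2016-block window are the figures above, not anything final):

Code:
def retarget_max_block_size(old_max, last_2016_sizes):
    avg = sum(last_2016_sizes) / len(last_2016_sizes)
    # Grow toward 110% of recent average usage, but never shrink.
    return max(old_max, int(avg * 1.10))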


Title: Re: Block size limit automatic adjustment
Post by: db on November 21, 2010, 07:45:47 PM
How about having the transaction fees decide the block size? Perhaps by a rule like this: The total fees of the least expensive half of the transactions in a block must be bigger than half the total fees of the most expensive half. (All transactions get to add their fair share of the newly minted coins to their fee.)

Such a scheme has several benefits.

* Prohibits flooding
* Allows unlimited numbers of transactions if there is real demand
* Makes transaction fees depend on the demand for transactions
* No constants
* No guesses about the future market
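
A sketch of the proposed check, where each fee is taken as the transaction's explicit fee plus its share of the newly minted coins:

Code:
def block_fees_ok(fees):
    # The cheaper half of a block's transactions must together
    # pay more than half of what the more expensive half pays.
    fees = sorted(fees, reverse=True)
    half = len(fees) // 2
    expensive, cheap = fees[:half], fees[half:]
    return sum(cheap) > sum(expensive) / 2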


Title: Re: Block size limit automatic adjustment
Post by: FreeMoney on November 21, 2010, 08:33:32 PM
How about having the transaction fees decide the block size? Perhaps by a rule like this: The total fees of the least expensive half of the transactions in a block must be bigger than half the total fees of the most expensive half. (All transactions get to add their fair share of the newly minted coins to their fee.)

Such a scheme has several benefits.

* Prohibits flooding
* Allows unlimited numbers of transactions if there is real demand
* Makes transaction fees depend on the demand for transactions
* No constants
* No guesses about the future market


But fees could get very large and very even, no?

I think we need to know what the purpose of limiting the block size is.

If it is to stop spam only, then it should be set to grow with consistent use near the max. Spam cannot push it up, because at least a few generators will not include spam, especially given that their future pricing power (weak as it may be anyway) would be further reduced by allowing the block size to increase. On the other hand, without collusion, generators will not refuse legitimate transactions with fees.

If it is to keep the size of the chain down, then we need to somehow weigh that against the goodness of cheap transactions. I don't know how this can be settled at all. Everyone bears the tiny cost of remembering the transaction for potentially a long time, but only one generator gets the payment for putting it in there.


Title: Re: Block size limit automatic adjustment
Post by: db on November 21, 2010, 08:56:21 PM
But fees could get very large and very even, no?
No, if all fees are large and even there is plenty of room for cheap transactions at the bottom.

I think we need to know what the purpose of limiting the block size is.
The most important and most difficult purpose is to keep the transaction fees both reasonable and high enough to give an incentive to generators to provide the unrelated public good of hashing difficulty.


Title: Re: Block size limit automatic adjustment
Post by: FreeMoney on November 21, 2010, 09:11:11 PM
But fees could get very large and very even, no?
No, if all fees are large and even there is plenty of room for cheap transactions at the bottom.

I think we need to know what the purpose of limiting the block size is.
The most important and most difficult purpose is to keep the transaction fees both reasonable and high enough to give an incentive to generators to provide the unrelated public good of hashing difficulty.


Maybe I am confused, imagine a max block size of about 10 transactions and this schedule of people's willingness to pay.

.42BTC, .41BTC, .41BTC, .41BTC, .41BTC, .4BTC, .4BTC, .4BTC, .39BTC, .39BTC, .37BTC, .36BTC, .36BTC, .34BTC, .33BTC, .33BTC, .33BTC, .33BTC, .32BTC, .31BTC, .3BTC, .3BTC...

and so on for maybe hundreds or thousands, but only the top 10 very similar payments will actually be in blocks and available for determining whether to increase the max block size, and so it will not be increased. Maybe my list is not a realistic structure, but I can't see why the fees at the top and the bottom would always have to be very different, especially once block size becomes a limiting factor.
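
Checking those numbers against the half-fees rule (the cheaper half must pay more than half of what the expensive half pays):

Code:
top_ten = [0.42, 0.41, 0.41, 0.41, 0.41,
           0.40, 0.40, 0.40, 0.39, 0.39]
expensive, cheap = top_ten[:5], top_ten[5:]
print(sum(cheap))          # ~1.98
print(sum(expensive) / 2)  # ~1.03

A block of these ten very similar fees satisfies the rule comfortably.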

But it does occur to me that in the "very slowly decreasing desire to pay fees" scenario, generators may actually want to increase the block size. It will lower average fees, but probably not total fees. I guess this has to do with the elasticity of demand for sending a transfer.


Title: Re: Block size limit automatic adjustment
Post by: db on November 21, 2010, 09:26:03 PM
Maybe I am confused, imagine a max block size of about 10 transactions and this schedule of people's willingness to pay.

Ah, sorry if this was not clear: There is no maximum block size that is adjusted. The size of a block is determined by the size of the most profitable rule-abiding set of transactions that can go into it. Completely independent of previous block sizes.


Title: Re: Block size limit automatic adjustment
Post by: FreeMoney on November 21, 2010, 10:05:28 PM
Maybe I am confused, imagine a max block size of about 10 transactions and this schedule of people's willingness to pay.

Ah, sorry if this was not clear: There is no maximum block size that is adjusted. The size of a block is determined by the size of the most profitable rule-abiding set of transactions that can go into it. Completely independent of previous block sizes.


Wait, so what stops an attacker from generating a block with one million or more spam transactions?


Title: Re: Block size limit automatic adjustment
Post by: db on November 21, 2010, 10:35:22 PM
Wait, so what stops an attacker from generating a block with one million or more spam transactions?

Nothing, but it stops attackers from drowning legitimate transactions in junk inside normal blocks. Are entire gargantuan phony blocks really a problem? They would be expensive to produce and wouldn't be long-lived, as they are extremely hard not to spot, and no generator in their right mind would continue building the chain from one of them. They would lose the income from any subsequent blocks when everyone else ditches the offending block. So that should take care of itself through generator self-interest.


Title: Re: Block size limit automatic adjustment
Post by: asdf on November 21, 2010, 10:49:58 PM
It seems to me that the spam issue and the txfee issue are related. Some want to limit the block size to stop spam, and some want to limit it to create an artificial scarcity to drive up txfees.

The problem, as I see it, is that there is NO incentive NOT to accept a fee-paying transaction, unless the fee is ridiculously small. Once a generator has established his infrastructure, it costs a negligible amount to process a transaction. If you can impose some sort of protocol rule on blocks that makes smaller-fee transactions less desirable, this would solve both problems.

Automatically adjusting the block size is a solution, if you can find an algorithm that scales appropriately with economic activity. If it's set too high, there will be too much spam, transactions will be too cheap, and generators will leave. If it's set too low, transactions will become very expensive and people will stop using bitcoin.

Also, there is the idea of restricting the distribution of transaction fees in each block, like mandating that the frequency distribution of fees fit a linear scale. I don't know if this is workable; I'm just throwing ideas around.

So, I think that we need an incentive for generators NOT to accept fee-paying transactions as the fees get smaller. In particular, build in some sort of fixed cost to processing a transaction that adjusts with the market.


Title: Re: Block size limit automatic adjustment
Post by: FreeMoney on November 21, 2010, 10:53:57 PM
Wait, so what stops an attacker from generating a block with one million or more spam transactions?

Nothing, but it stops attackers from drowning legitimate transactions in junk inside normal blocks. Are entire gargantuan phony blocks really a problem? They would be expensive to produce and wouldn't be long-lived, as they are extremely hard not to spot, and no generator in their right mind would continue building the chain from one of them. They would lose the income from any subsequent blocks when everyone else ditches the offending block. So that should take care of itself through generator self-interest.


Hmm, okay, outrageous blocks containing only junk would be easy to spot. But what about badly sized blocks, say 100k when the average is 10k, that contain mostly junk but also the legit transactions that had been received? Maybe some will reject them and some will not? If there is no uniform rule, there will be splits all over the place. And even normal users will be affected: if their transaction is in a very oversized block, do they just hope it will stay? Or hope it will not be accepted and send again? I think there must be a max block size to avoid this.

Also the "public good of difficulty" comment made me realize that block size may need to be "artificially" limited in some way. But I think updating it along with difficulty to be slightly more than the average size of the previous 2016 blocks, but never decreasing, is a resonable way to do it.


Title: Re: Block size limit automatic adjustment
Post by: db on November 21, 2010, 11:25:28 PM
Hmm, okay, outrageous blocks containing only junk would be easy to spot. But what about badly sized blocks, say 100k when the average is 10k, that contain mostly junk but also the legit transactions that had been received? Maybe some will reject them and some will not? If there is no uniform rule, there will be splits all over the place. And even normal users will be affected: if their transaction is in a very oversized block, do they just hope it will stay? Or hope it will not be accepted and send again? I think there must be a max block size to avoid this.
Too expensive. But it doesn't matter. This made me realize the whole idea won't work anyway. Generators could just pad their blocks with transactions to themselves with fees set so that they can include as many transactions as they want, i.e. all of them.

Also the "public good of difficulty" comment made me realize that block size may need to be "artificially" limited in some way. But I think updating it along with difficulty to be slightly more than the average size of the previous 2016 blocks, but never decreasing, is a resonable way to do it.
The blocks would quickly grow too large.


Title: Re: Block size limit automatic adjustment
Post by: db on November 21, 2010, 11:32:48 PM
Too expensive. But it doesn't matter. This made me realize the whole idea won't work anyway. Generators could just pad their blocks with transactions to themselves with fees set so that they can include as many transactions as they want, i.e. all of them.

Which could be prevented if other generators ignore new blocks with lots of unpublished transactions. But that feels a little messy.


Title: Re: Block size limit automatic adjustment
Post by: MoonShadow on November 21, 2010, 11:52:36 PM
Wait, so what stops an attacker from generating a block with one million or more spam transactions?

Nothing, but it stops attackers from drowning legitimate transactions in junk inside normal blocks. Are entire gargantuan phony blocks really a problem? They would be expensive to produce and wouldn't be long-lived, as they are extremely hard not to spot, and no generator in their right mind would continue building the chain from one of them. They would lose the income from any subsequent blocks when everyone else ditches the offending block. So that should take care of itself through generator self-interest.


There is no 'dropping' a valid block, spamming or not.


Title: Re: Block size limit automatic adjustment
Post by: db on November 22, 2010, 12:08:33 AM
There is no 'dropping' a valid block, spamming or not.

Sure there is. Just ignore it and continue building the chain from the previous block.


Title: Re: Block size limit automatic adjustment
Post by: MoonShadow on November 22, 2010, 12:13:42 AM
There is no 'dropping' a valid block, spamming or not.

Sure there is. Just ignore it and continue building the chain from the previous block.


Then you have created a new rule that will split the network.  Part of the point of agreeing in advance on a common set of network rules is to avoid regularly splitting the chain.


Title: Re: Block size limit automatic adjustment
Post by: db on November 22, 2010, 12:20:26 AM
Then you have created a new rule that will split the network.  Part of the point of agreeing in advance on a common set of network rules is to avoid regularly splitting the chain.

Yes, but these particular splits would be very small and unnoticeable for the normal user.


Title: Re: Block size limit automatic adjustment
Post by: RHorning on November 22, 2010, 04:08:34 AM
Then you have created a new rule that will split the network.  Part of the point of agreeing in advance on a common set of network rules is to avoid regularly splitting the chain.

Yes, but these particular splits would be very small and unnoticeable for the normal user.


If some "normal user" happened to get a transaction adopted into one of these forks, they'd sure notice.

What decides which side of a chain split is accepted is 51% of the CPU power.  This isn't even theoretical speculation, as there have been similar chain splits in the network already, most notably when the clients upgraded from 0.3.9 to 0.3.10.  The "bad" transactions were thrown into "good" blocks, and those blocks were rejected as falling outside the rules by some of the generators but accepted by others.  Yes, it created a mess, but what I'm saying is that the network has already dealt with this situation, and it passed with flying colors.

Of course, warning messages had to be passed around for everybody to "know" which chain was more likely to be permanently accepted by the network, as the split did take place during the upgrade of the clients and generators.  This is also why there is still a warning not to use clients prior to 0.3.10: they are missing some of the rules which stopped what appears to have been an attack on the network.

BTW, otherwise "valid" blocks were dropped because they were included in the "wrong" chain, and unfortunately this did include a few legitimate transactions.  Not many transactions were lost, as the warning messages were sent out that it was a problem at the time.

There is no 'dropping' a valid block, spamming or not.

Sure there is. Just ignore it and continue building the chain from the previous block.


Then you have created a new rule that will split the network.  Part of the point of agreeing in advance on a common set of network rules is to avoid regularly splitting the chain.

Agreed, but that doesn't imply that the rules of the network must always stay the same either.  The main point is that most of the network must agree to the same rules, and if the rules change it must be something seen as necessary to keep the network running... stopping spamming or some other attack on the network would be the most logical reason for adding rules.  This is similar to other networking protocols, which do change from time to time, sometimes because of malicious attacks on the network.

The reason to deal with this issue now, rather than later, is that we can still talk objectively about what solutions or algorithms we might want to implement to resolve it.  If there is huge pressure because transactions are starting to pile up and transaction fees are escalating as a result, any change to the algorithm and network protocols is going to be seen as a huge advantage to one group or another, and it will become a political process instead.

Politics and computer programming don't mix very well.


Title: Re: Block size limit automatic adjustment
Post by: theymos on November 22, 2010, 05:02:37 AM
Not many transactions were lost, as warning messages about the problem were sent out at the time.

IIRC, the legitimate chain overtook the "contaminated" chain within the 100-block maturation time, so all transactions were ported to the new chain (except for the illegal ones).

Chain forks are not inherently bad. If the network disagrees about a policy, then a split is good. The better policy will win. If block forks start happening a lot, it would be simple to consider a transaction unconfirmed if it relies on a generation that isn't 500 blocks deep or whatever.


Title: Re: Block size limit automatic adjustment
Post by: caveden on November 22, 2010, 08:48:17 AM
This made me realize the whole idea won't work anyway. Generators could just pad their blocks with transactions to themselves with fees set so that they can include as many transactions as they want, i.e. all of them.

There is no economic incentive in flooding. Actually, you can only do it for free in the blocks you create yourself; otherwise you have to pay fees for it.
So flooding would be done just by silly people trying to attack the system. They would hardly be numerous enough to cause what you say here:

The blocks would quickly grow too large.

That would only happen if flooders were numerous, which I doubt. Not to mention that, if the block max size is "just right," there will always be quite a few paying transactions waiting to be added - maybe enough to fill the block with. Thought of this way, there is an incentive not to flood.


Title: Re: Block size limit automatic adjustment
Post by: caveden on November 22, 2010, 08:53:13 AM
Not to mention that the larger the block, the longer it takes to propagate it to the network, which I suppose can slightly increase the chance that another block generated by somebody else propagates faster. A really tiny chance, but anyway, it's another counter-incentive to flooding...


Title: Re: Block size limit automatic adjustment
Post by: db on November 22, 2010, 10:07:53 AM
This made me realize the whole idea won't work anyway. Generators could just pad their blocks with transactions to themselves with fees set so that they can include as many transactions as they want, i.e. all of them.

There is no economic incentive in flooding. Actually, you can only do it for free in the blocks you create yourself; otherwise you have to pay fees for it.
So flooding would be done just by silly people trying to attack the system.

Definitely; the worry wasn't flooding but circumventing the artificial scarcity keeping transaction fees above zero.

They would hardly be numerous enough to cause what you say here:

The blocks would quickly grow too large.

That would only happen if flooders were numerous, which I doubt. Not to mention that, if the block max size is "just right," there will always be quite a few paying transactions waiting to be added - maybe enough to fill the block with. Thought of this way, there is an incentive not to flood.

Again, the worry wasn't flooding but keeping the block size small enough to support transaction fees. But anyway, under that scheme, wouldn't it take just one person filling every block with free transactions to make the block size grow exponentially?


Title: Re: Block size limit automatic adjustment
Post by: caveden on November 22, 2010, 12:45:30 PM
I see, you think people could push the limit up to be sure that it would always be big enough to fit every transaction, therefore collecting more fees. In the long run that would be bad for the generators themselves, though, as fee values would fall.
I'm not even sure this is interesting to the generator even in the short run. He wouldn't collect more fees in the "flooded" blocks he generates... and he doesn't have a real guarantee of being able to do so in future blocks either.


Title: Re: Block size limit automatic adjustment
Post by: caveden on November 22, 2010, 01:04:29 PM
But anyway, under that scheme, wouldn't it take just one person filling every block with free transactions to make the block size grow exponentially?

Regarding free transactions, I don't see why anybody would accept them as long as the client gives users the option to add fees to their transactions.
A generator could add free or dummy transactions to his own blocks with the intent of pushing the limit up, but one person alone wouldn't be that effective at increasing the limit, as s/he wouldn't be able to generate enough blocks. If the adjustment periods are short, one person wouldn't even manage one block per period, so in the end s/he would be harmless, as the limit would fall back.
Only an attacker with strong computing power could push the limit up considerably. And I don't see much incentive to use strong computing power like this... do you?
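
To put rough numbers on that, here is a toy simulation with purely hypothetical figures, assuming a mean-plus-50% style adjustment like the one discussed earlier in the thread:

Code:
# How far can a lone attacker push a mean-based limit?
honest_avg = 20_000   # bytes: assumed honest average block size
p = 0.05              # attacker's assumed share of generating power
limit = 100_000       # starting limit, bytes

for _ in range(5):    # five adjustment periods
    # Attacker pads all of his blocks to the limit; everyone else
    # produces blocks at the honest average.
    mean = (1 - p) * honest_avg + p * limit
    limit = round(mean * 1.5)
    print(limit)      # 36000, 31200, 30840, 30813, 30811

Without the attacker the limit would settle at 30000, so padding every one of his blocks to the maximum buys him only a few percent.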


Title: Re: Block size limit automatic adjustment
Post by: db on November 22, 2010, 01:12:24 PM
I see, you think people could push the limit up to be sure that it would always be big enough to fit every transaction, therefore collecting more fees. In the long run that would be bad for the generators themselves, though, as fee values would fall.
I'm not even sure this is interesting to the generator even in the short run. He wouldn't collect more fees in the "flooded" blocks he generates... and he doesn't have a real guarantee of being able to do so in future blocks either.

Not quite. A max block size that grows to accommodate all transactions won't impose scarcity even without flooding. And, unrelated: a single attacker could cheaply do a massive flooding attack, not for profit but out of malice.

In the rule scheme without a max block size, the problem is that a generator could game the rule to fit more transactions into each individual block it generates, if it is allowed to include unpublished transactions to itself.


Title: Re: Block size limit automatic adjustment
Post by: db on November 22, 2010, 01:16:48 PM
Regarding free transactions, I don't see why anybody would accept them as long as the client gives users the option to add fees to their transactions.

Possibly not. But what about very low fee transactions?


Title: Re: Block size limit automatic adjustment
Post by: caveden on November 22, 2010, 02:04:03 PM
Well, I do think that there must be some blocks with enough free space to fit almost-free transactions once in a while... the adjustment shouldn't be such that all blocks are filled up.
I suppose transfers don't happen homogeneously over the 24 hours of a day or the 7 days of a week. So, if the limit is "just right," there will be blocks that are filled, and there will be blocks with free space where even free transactions could enter, if the generator doesn't mind.


Title: Re: Block size limit automatic adjustment
Post by: ShadowOfHarbringer on November 22, 2010, 02:13:14 PM
I think I agree with caveden - having such an important constant hardcoded in bitcoin may be devastating at some point, when the network changes significantly or grows much larger than it is now.
Generally, almost every important value in the core of the bitcoin algorithms should be a non-constant, elastic variable which can adapt to changes.

Anyway, I still really would like to see Satoshi's & Gavin's opinions on this.


Title: Re: Block size limit automatic adjustment
Post by: ByteCoin on November 26, 2010, 02:13:48 AM
To clarify, the block size limit is the size beyond which a received block will be rejected by a client just because it's too big.

I agree with caveden that having a fixed block size limit could cause problems in future.

Let's consider the scenario in which Bitcoin becomes popular and the non-spam transaction rate starts to rise. The current fees-and-priority scheme is fine until the size of the required fees becomes a disincentive for new users to start using Bitcoin. The miners must choose between taking a smaller fee from a given transaction or maintaining their fee schedule and effectively turning away lots of new users, perhaps to competing cryptographic currency schemes.
I think it's reasonable to imagine that everyone will decide to drop fees to a level that encourages the widest possible adoption of Bitcoin until other limiting factors (such as network bandwidth) come into play.
So with the reduced fees, block sizes increase until blocks get rejected by old clients with lower hard block size limits. These clients can't relay the new blocks, so new clients would have to connect only to other new clients. Miners which reject the large blocks would continue to build a block chain of "normal"-sized blocks. As soon as transactions start to refer to coins in new large blocks, the old clients will reject those transactions, and the coins could be double-spent on the "old" client network. I don't think this would be pretty.

The ostensible reason for hard block limits is to prevent spam. As ribuck mentions, current spam attacks have two effects, one which you can see and one that you can't. You can see block sizes rising, but this counteracts the less visible problem of your transaction cache filling up with spam transactions. I believe that memory exhaustion due to the transaction cache filling will be the main problem with spam attacks, so large blocks that remove lots of transactions from the cache will mitigate it. The real solution to spam is "shunning", which I will outline in another post. I believe having any block limits is likely to exacerbate the adverse effects of spam transactions.

As FreeMoney observes, in the absence of block limits there's nothing to stop a miner from including arbitrary amounts of its own spam transactions in a block. This is true. However, it's certainly not in the non-generating client's interest to reject the block, even if it only removes a few transactions from the cache. Rather, the onus is on the other miners to notice that the new block does not remove enough transactions from the cache and to reject it. They will then build the longer chain while ignoring that block, which becomes an orphan. Hence the spamming miner is punished.

The moral of this story is that the non-generating clients operate on the network at the pleasure of the miners. The miners are effectively in control of the "health" of the network, and the current block size limits reflect that. For example, block http://blockexplorer.com/b/92037 is about 200455 bytes long and mostly contains spam, while normal blocks max out at 50k. This shows that at least one generator has chosen to waive the current fee schedule. I think that letting miners effectively decide their own fee schedules will be seen to be the least bad option.

ByteCoin


Title: Re: Block size limit automatic adjustment
Post by: asdf on November 26, 2010, 07:50:38 AM
The moral of this story is that the non-generating clients operate on the network at the pleasure of the miners. The miners are effectively in control of the "health" of the network, and the current block size limits reflect that. For example, block http://blockexplorer.com/b/92037 is about 200455 bytes long and mostly contains spam, while normal blocks max out at 50k. This shows that at least one generator has chosen to waive the current fee schedule. I think that letting miners effectively decide their own fee schedules will be seen to be the least bad option.

We came to a similar conclusion in this thread:
http://bitcointalk.org/index.php?topic=1847.0;all
My concern is that I don't see any inherent force that will stabilize transaction fees.

Generators have the ability to accept any transactions they see fit, as well as to reject any block that doesn't adhere to their "ethics". The question is: will this game result in an oligopoly of price-gouging generators, will it result in a dead market where no one generates, or will competing forces reach a common ground of a fair, stable fee structure?

I'm not smart enough to figure this out. I wish Satoshi would weigh in on this issue. I suspect he may have already envisioned the outcome.


Title: Re: Block size limit automatic adjustment
Post by: MoonShadow on November 26, 2010, 02:59:39 PM

I'm not smart enough to figure this out. I wish Satoshi would weigh in on this issue. I suspect he may have already envisioned the outcome.

Austrian economic theory says that no one is smart enough, because no one can have all of the information.  The best that we can do is take a guess, and I would guess that it isn't going to be a real problem.  Certainly not in my lifetime.


Title: Re: Block size limit automatic adjustment
Post by: RHorning on November 26, 2010, 05:21:37 PM

I'm not smart enough to figure this out. I wish Satoshi would weigh in on this issue. I suspect he may have already envisioned the outcome.

Austrian economic theory says that no one is smart enough, because no one can have all of the information.  The best that we can do is take a guess, and I would guess that it isn't going to be a real problem.  Certainly not in my lifetime.

The main issue raised by this thread does seem like something of concern for the long-term health of this project, as there does seem to be the very real possibility that the current limit is not going to be sufficient for "ordinary" transactions at some point in the future.  Judging by the data processing and transaction rates of other payment processing systems like PayPal, the current limit is not only going to be insufficient but woefully insufficient.  I realize we aren't anywhere near those demand levels, but it is still an issue to think about.

There are also plenty of examples where software architecture decisions, including the use of constants, have had profound real-world impact simply because the design team was short-sighted and didn't anticipate the future very effectively.  Examples include the Y2K bugs, the Unix 2038 date overflow bug (it remains to be seen how that will be completely solved), and, perhaps most similar to the current situation, the IPv4 address space issue.  There are other instances where a hard-coded constant can come back and bite end-users in unexpected ways... one of the reasons software developers call these kinds of figures "magic numbers".  When you have some very intelligent people complaining that an issue of this nature will have significant impact, it is at least something which needs attention.

The specifics of how to avoid this problem are the point of this thread, along with a strong suggestion that "rules" ought to be incorporated into the network for how this hard-coded limit can be allowed to change.

The moral of this story is that the non-generating clients operate on the network at the pleasure of the miners. The miners are effectively in control of the "health" of the network, and the current block size limits reflect that. For example, block http://blockexplorer.com/b/92037 is about 200455 bytes long and mostly contains spam, while normal blocks max out at 50k. This shows that at least one generator has chosen to waive the current fee schedule. I think that letting miners effectively decide their own fee schedules will be seen to be the least bad option.

I will say, in regard to control of the network by the miners, that this is mostly true but not 100% of the time.  Blocks sent out by miners can also be rejected by "the vast masses of clients" who simply refuse to recognize them.  Perhaps some miner will do something, within the rules set up by the network, that another miner doesn't take into consideration, and that particular block is simply going to be rejected.  With the rules as currently established, a miner who chooses to create a very large block is simply going to have that block ignored by the current network.  Essentially this is "proof" that the miners don't have absolute authority here.  Miners also work at the pleasure of the network as a whole, and have "constitutional limits" imposed upon them by the networking rules.  This particular issue of the maximum block size is one of the few rules that is outside the control of a single miner.  Other, similar rules could be adopted by a significant portion of the clients, rules that might exclude certain miners or even groups of miners, providing a check against a sort of "tyranny of the miners".

I'm not going to speculate about how such rules might be established or what other potential rules might be, other than suggesting that the block limit rule is one such rule and that it needs to be reconsidered, certainly as a fixed size.  The long-term consequence is that, without change, transaction fees may escalate to absurd levels as people competing to get a particular transaction incorporated set off a sort of "fee arms race", particularly when miners are simply unable to get larger blocks incorporated into the network.

The opposite situation is a voluntary self-limiting policy by miners who simply choose not to grow blocks to large sizes.  As long as somebody somewhere is allowed to make an arbitrarily large block, it will pick up the transactions with a low fee, or perhaps no fee at all, even if it takes a while for those blocks to be incorporated into the network.


Title: Re: Block size limit automatic adjustment
Post by: MoonShadow on November 26, 2010, 06:20:38 PM

I'm not smart enough to figure this out. I wish Satoshi would weigh in on this issue. I suspect he may have already envisioned the outcome.

Austrian economic theory says that no one is smart enough, because no one can have all of the information.  The best that we can do is take a guess, and I would guess that it isn't going to be a real problem.  Certainly not in my lifetime.

The main issue raised by this thread does seem like something of concern for the long-term health of this project, as there does seem to be the very real possibility that the current limit is not going to be sufficient for "ordinary" transactions at some point in the future.

I was responding specifically to his concern about transaction fees, not the block size limit.  The hard limit is a real concern, but also one that I imagine has been well considered by others before us.  Did anyone bother to search the archives before diving into this thread?  The block limit exists to prevent spamming from packing the blocks, not to support transaction fees.  I think that the recently instituted priority rule does the job at least as well as the hard block limit, but it's not enough on its own.


Title: Re: Block size limit automatic adjustment
Post by: Alex Beckenham on May 10, 2011, 03:49:33 PM
Has anyone had any further thoughts on a dynamic block size limit, in light of the recent slowdown of free transactions?

Is the block size limit still a concern?


Title: Re: Block size limit automatic adjustment
Post by: MoonShadow on May 10, 2011, 09:32:17 PM
Has anyone had any further thoughts on a dynamic block size limit, in light of the recent slowdown of free transactions?

Is the block size limit still a concern?


It never really was a real problem except for future scalability.


Title: Re: Block size limit automatic adjustment
Post by: Alex Beckenham on May 10, 2011, 09:44:56 PM
Is the block size limit still a concern?
It never really was a real problem except for future scalability.
Doesn't that by itself make it a real problem?


Title: Re: Block size limit automatic adjustment
Post by: MoonShadow on May 10, 2011, 09:53:00 PM
Is the block size limit still a concern?
It never really was a real problem except for future scalability.
Doesn't that by itself make it a real problem?


Well, yes.  But the talk recently has been about maintaining the transaction fees in order to maintain the hashing power of the network, not scalability.  This particular thread was mostly about scalability, and in that case the max block limit can be raised or removed.  It's only present as a backstop against the possibility of some presently unknown exploit that would permit limitless transaction spamming of the blockchain, not as a means to support the transaction fees.  The short answer to the scalability issue is that the limit can easily be removed long before network traffic is high enough for the max block size to become an actual scalability issue.


Title: Re: Block size limit automatic adjustment
Post by: Alex Beckenham on May 10, 2011, 10:25:47 PM
Okay, great. My understanding from reading this thread was that it was not so 'easily' removed: having to get every client updated at basically the same time.

Good to read otherwise.


Title: Re: Block size limit automatic adjustment
Post by: MoonShadow on May 10, 2011, 10:46:58 PM
Okay, great. My understanding from reading this thread was that it was not so 'easily' removed: having to get every client updated at basically the same time.

Good to read otherwise.


No, not at the same time - just before hitting that limit becomes a regular event.  We could change that rule in the next vanilla client release, so long as everyone agreed that we should and voiced consent by downloading the new client.  At the current rate we still have months, if not years, before we hit that limit regularly.  As far as I know, we have never come close to it.

If we wait until every other block is hitting that limit, however, implementing such a rule change is going to be problematic.


Title: Re: Block size limit automatic adjustment
Post by: caveden on May 11, 2011, 08:17:08 AM
Okay, great. My understanding from reading this thread was that it was not so 'easily' removed:

It could be relatively easy now, or even some months from now. But once bitcoin goes mainstream, making such a change will not be that easy, mainly due to the coordination effort. And the more popular bitcoin gets, the harder the change becomes; that's why I still think it should be done just once. And to do it just once, a self-adjusting rule should be created... just raising a constant is bad for several reasons (multiple backward-incompatible changes, more space for spammer-miners to abuse, the risk of dropping the fee value and consequently the difficulty factor in the future, etc.)


Title: Re: Block size limit automatic adjustment
Post by: Mike Hearn on May 11, 2011, 09:36:14 AM
Block size limits are only relevant to miners, as it's they who decide whether to "accept" a block by building up on it or not. Most users will end up on lightweight clients which don't need to check the block size. So as long as mining consolidates around professionals who communicate and keep up, there probably won't be a "doomsday" scenario.



Title: Re: Block size limit automatic adjustment
Post by: caveden on May 11, 2011, 09:52:22 AM
Block size limits are only relevant to miners, as it's they who decide whether to "accept" a block by building up on it or not. Most users will end up on lightweight clients which don't need to check the block size. So as long as mining consolidates around professionals who communicate and keep up, there probably won't be a "doomsday" scenario.

Like the professional ISPs who have waited until the last minute to migrate from IPv4 to IPv6 - not to mention the many that still haven't migrated? Or like the equally professional SMTP servers that have never implemented a stronger authentication system for e-mail transfers? :)

I'm not saying it's impossible, neither that there will be a "doomsday" due to this. I'm just saying it's a problem that will need to be fixed someday, and the earlier it's done, the easier it is. If it's done too much later, it may provoke avoidable problems like long chain splits, people not understanding why their bitcoin is not working anymore etc.

And it's not an issue only to miners, every full client performs block validations. I don't think miners will be the only ones running full clients. Those who serve the lightweight clients for ex., they will need to be full clients.


Title: Re: Block size limit automatic adjustment
Post by: Mike Hearn on May 11, 2011, 09:59:39 AM
Quote
Or like the equally professional SMTP operators who never implemented a stronger authentication system for e-mail transfers? :)

Virtually all large, professionally run SMTP networks do authenticate their mail, as far as I know. We track how much mail is authenticated coming into the Google network (i.e. gmail consumer/business editions), and it's pretty high. There's a long tail of home-run SMTP servers that will never be upgraded, but they also don't represent a whole lot of users.

Quote
And it's not an issue only for miners; every full client performs block validations. I don't think miners will be the only ones running full clients. Those who serve the lightweight clients, for example, will need to be full clients.

Yeah, we might want to change that :-) I'll ask Gavin about it next time I see him on IRC. I think non-miners don't need to check the block size even if they are full nodes, since an attempt to maliciously explode the size of the chain will be overridden by genuine miners pretty quickly. There's a risk of temporarily flooding the broadcast network with a gigantic block, but it's not an easy attack to pull off. Nodes can prune blocks on side chains after a while, so the increase in storage required would be temporary. If anyone ever did it, a patch to forcibly delete a block from storage would be written pretty fast.


Title: Re: Block size limit automatic adjustment
Post by: ribuck on May 11, 2011, 10:48:51 AM
What's the current failure mode? What happens if the existing Bitcoin client encounters an over-long block?


Title: Re: Block size limit automatic adjustment
Post by: theymos on May 11, 2011, 11:54:57 AM
What's the current failure mode? What happens if the existing Bitcoin client encounters an over-long block?

It is considered invalid and rejected.


Title: Re: Block size limit automatic adjustment
Post by: MoonShadow on May 11, 2011, 12:21:01 PM
Okay great. My understanding from reading this thread was that it was not so 'easily' removed;

It could be relatively easy now, or even some months from now. But once bitcoin goes mainstream, making such a change will not be that easy, mainly due to the coordination effort. And the more popular bitcoin gets, the harder such a change becomes; that's why I still think it should be done just once. And to do it just once, a self-adjusting rule should be created... just raising a constant is bad for several reasons (multiple backward-incompatible changes, more space for spammer-miners to abuse, the risk of dropping the fee value and, by consequence, the difficulty factor in the future, etc.).

I have already laid out my plan for a self-adjusting limit, based on the average number of blocks a fee-paying transaction spends in the queue, with some fudge factor to allow for different miners having slightly different averages across the network.  To account for fee rates, the average could be weighted by the amount of the fee, so that a fee larger than the minimum affects the average more than the average transaction fee does.

The space available for free transactions could be set statically, or as a percentage of the moving block size.  The block size could be adjusted when the difficulty is adjusted, with the first winning block after the change encoding the calculated max limit in some fashion.  If the network accepts that block, that max limit becomes fixed for another 2015 blocks.
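
To make that concrete, here is a rough sketch in C++ of how such an adjustment might be computed at each retarget. Every name and constant in it (the 2-block target wait, the 25% fudge cap) is invented for illustration; this is not actual client code, just one possible reading of the idea.

Code:
// Rough sketch only: invented names and constants, not actual client code.
// Assumes we can observe, for each fee-paying transaction mined in the last
// retarget period, how many blocks it waited in the queue and what it paid.
#include <cstdint>
#include <vector>

struct MinedTx {
    double feeBtc;     // fee paid, in BTC
    int blocksWaited;  // blocks spent in the queue before inclusion
};

// Fee-weighted average wait, in blocks: bigger fees weigh more.
double WeightedAvgWait(const std::vector<MinedTx>& txs) {
    double weightedWait = 0.0, totalFee = 0.0;
    for (const MinedTx& tx : txs) {
        weightedWait += tx.feeBtc * tx.blocksWaited;
        totalFee += tx.feeBtc;
    }
    return totalFee > 0.0 ? weightedWait / totalFee : 0.0;
}

// Recomputed once per difficulty retarget. TARGET_WAIT and FUDGE are the
// "fudge factor" knobs alluded to above; both values are made up.
uint64_t NextMaxBlockSize(uint64_t currentMax, const std::vector<MinedTx>& txs) {
    const double TARGET_WAIT = 2.0;  // desired average wait, in blocks
    const double FUDGE = 0.25;       // cap the change at +/-25% per period
    double ratio = WeightedAvgWait(txs) / TARGET_WAIT;
    if (ratio > 1.0 + FUDGE) ratio = 1.0 + FUDGE;
    if (ratio < 1.0 - FUDGE) ratio = 1.0 - FUDGE;
    return static_cast<uint64_t>(currentMax * ratio);
}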


Title: Re: Block size limit automatic adjustment
Post by: ribuck on May 11, 2011, 12:49:13 PM
Here's an idea that might be dismissed as stupid-simple, but sometimes stupid-simple ideas work really well.

How about: The maximum block size equals the higher of: (a) the current hard-coded maximum block size, and (b) 'difficulty' bytes.
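
In code the whole rule is a one-liner. A minimal C++ sketch (the example difficulty values are only illustrative):

Code:
#include <algorithm>
#include <cstdint>
#include <iostream>

// The rule above: the limit is whichever is higher, the existing
// hard-coded 1 MB cap or "difficulty" bytes.
uint64_t MaxBlockSize(double difficulty) {
    const uint64_t HARD_CODED_MAX = 1000000;  // current 1 MB limit
    return std::max(HARD_CODED_MAX, static_cast<uint64_t>(difficulty));
}

int main() {
    // Below difficulty 1,000,000 the old cap still governs.
    std::cout << MaxBlockSize(244139.0) << "\n";   // 1000000
    // Above it, the cap grows with difficulty.
    std::cout << MaxBlockSize(2000000.0) << "\n";  // 2000000
}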


Title: Re: Block size limit automatic adjustment
Post by: MoonShadow on May 11, 2011, 12:52:23 PM
Here's an idea that might be dismissed as stupid-simple, but sometimes stupid-simple ideas work really well.

How about: The maximum block size equals the higher of: (a) the current hard-coded maximum block size, and (b) 'difficulty' bytes.

That's awesome.  And I agree it's sledgehammer simple.  I like that rule better than mine.  The free section, and the tiered fee schedule, would have to become percentages of that number; also sledgehammer simple.  No fudge factors.  Elegant.

EDIT: Rule (b) might have to be some agreed-upon multiple of difficulty, however. If the block size does not naturally increase until difficulty is over one million, I'm afraid we really would have some scalability issues. Can anyone think of a metric that can be used for that multiple, or must it be fixed?

EDIT #2:  And Rule (a) should be reduced by half at least.


Title: Re: Block size limit automatic adjustment
Post by: Mike Hearn on May 11, 2011, 01:12:40 PM
There are actually several block size limits. Practically speaking, the client holds incoming messages in RAM, so gigabyte-sized blocks would require a gigabyte of RAM to receive. Any block size limit would have to ensure the max message size is also adjusted to take it into account, at least until blocks are distributed as header + tx hash lists.

We can probably just set it to a gigabyte max for non-miners and forget about it for a while. That's approximately what it'd take to keep up with VISA - might as well aim high, right? :-) Miners can be more discriminating as long as they're responsive.

Automatic adjustments based on difficulty assume difficulty will scale with traffic. I'm not convinced that relationship will hold. If there's going to be an automatic formula, (median size of recent blocks * 1.1) seems as good as any, and is also simple.
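
A minimal C++ sketch of that formula, with the window length left to the caller (everything here is illustrative, not a client patch):

Code:
// Rough sketch: next block's cap = 1.1 * median size of recent blocks.
#include <algorithm>
#include <cstdint>
#include <vector>

uint64_t NextMaxBlockSize(std::vector<uint64_t> recentSizes) {
    if (recentSizes.empty()) return 1000000;  // fall back to the current 1MB cap
    // A partial sort is enough to find the median element.
    std::nth_element(recentSizes.begin(),
                     recentSizes.begin() + recentSizes.size() / 2,
                     recentSizes.end());
    uint64_t median = recentSizes[recentSizes.size() / 2];
    return median + median / 10;  // median * 1.1, in integer math
}

A nice property of the median over the mean: a handful of deliberately stuffed blocks barely moves it, so a lone spammer can't drag the limit up by himself.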


Title: Re: Block size limit automatic adjustment
Post by: sandos on May 11, 2011, 01:13:44 PM
Here's an idea that might be dismissed as stupid-simple, but sometimes stupid-simple ideas work really well.

How about: The maximum block size equals the higher of: (a) the current hard-coded maximum block size, and (b) 'difficulty' bytes.

I was just thinking that block size should depend on some measurement of how much it taxes the network, just as difficulty measures how fast blocks are being found and corrects that rate.

I have a hard time seeing how to objectively measure block-size impact (which is global!) in a way similar to mining, though.


Title: Re: Block size limit automatic adjustment
Post by: ribuck on May 11, 2011, 01:28:11 PM
How about: The maximum block size equals the higher of: (a) the current hard-coded maximum block size, and (b) 'difficulty' bytes.

That's awesome...

EDIT: Rule (b) might have to be some agreed-upon multiple of difficulty, however. If the block size does not naturally increase until difficulty is over one million, I'm afraid we really would have some scalability issues.
What? Difficulty will be above one million real soon now. Two to three months probably.

Quote
And Rule (a) should be reduced by half at least.
Why risk compatibility with existing software, just for the sake of a minor tweak that will only be relevant for the next two or three months?


Title: Re: Block size limit automatic adjustment
Post by: caveden on May 11, 2011, 01:49:06 PM
I think non-miners don't need to check the block size even if they are full nodes

I'm not really convinced of that...

There are some arbitrary rules regarding what a valid block is which are of interest to the entire bitcoin community, not only miners. And I'm not talking about obvious rules like no double-spending or signature validation. I mean rules like the difficulty factor or block rewards, for example. These two concern inflation control, which is of interest to every bitcoin user.

Of course, miners who disagree with the current rules could always try to change them. But if users reject their blocks, the result of their mining may be worth much less, as it would be a fork used by few.
So, when users validate blocks, they create a strong incentive for miners to obey the consensus of the entire user base. If instead users accept all blocks that miners decide to build upon, then it's up to the miner consensus alone to decide these kinds of rules. Even if they change them to something which is not really in the interest of the entire user base, users will passively accept it.

I think that the maximum block size is a rule of this kind. It's not only about spam. It's about creating an artificial scarcity too.
It's true that miners may come up with a good agreement, since this artificial scarcity is good for them, but still, it sounds dangerous to me for the entire user base to give miners carte blanche to decide on that entirely on their own... don't you think?


Title: Re: Block size limit automatic adjustment
Post by: caveden on May 11, 2011, 01:51:13 PM
Automatic adjustments based on difficulty assume difficulty will scale with traffic. I'm not convinced that relationship will hold.

Neither am I. Using the size of the last X blocks seems more reasonable.


Title: Re: Block size limit automatic adjustment
Post by: Gavin Andresen on May 11, 2011, 01:53:29 PM
I'd tweak the formula to be:  max block size = 1000000 + (int64)(difficulty)

... just to avoid "if block number is < X max block size = 1000000 else..." logic.  Adding in the current 1MB max limit means all the old blocks are valid under the new rule.

I like Mike's point that difficulty and transaction volume aren't necessarily related.  Maybe a better formula for miners would be something like:

max block size = 1000000 + (average size of last N blocks in the best chain)
... where N is maybe 144 (smooth over 24-hours of transactions)
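
As a rough C++ sketch (names invented, not a patch against the actual client):

Code:
// Rough sketch of the formula above.
#include <algorithm>
#include <cstdint>
#include <deque>
#include <numeric>

uint64_t MaxBlockSize(const std::deque<uint64_t>& chainBlockSizes) {
    const uint64_t BASE = 1000000;  // keeps every old block valid
    const size_t N = 144;           // ~24 hours of blocks
    size_t n = std::min(N, chainBlockSizes.size());
    if (n == 0) return BASE;
    uint64_t sum = std::accumulate(chainBlockSizes.end() - n,
                                   chainBlockSizes.end(), uint64_t(0));
    return BASE + sum / n;
}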

Anybody have access to what Visa daily transaction volume looks like in the days around Christmas?  Are there huge, sudden spikes that the above formula wouldn't handle?


Title: Re: Block size limit automatic adjustment
Post by: ribuck on May 11, 2011, 02:40:52 PM
Automatic adjustments based on difficulty assume difficulty will scale with traffic.

Here's how it scales automatically:

If blocks are getting full, people pay higher fees to get their transactions in the block. Increased mining profitability causes increased mining which causes increased difficulty.


Title: Re: Block size limit automatic adjustment
Post by: MoonShadow on May 11, 2011, 02:55:06 PM
Automatic adjustments based on difficulty assume difficulty will scale with traffic.

Here's how it scales automatically:

If blocks are getting full, people pay higher fees to get their transactions in the block. Increased mining profitability causes increased mining which causes increased difficulty.

I agree with this perspective.  This simple rule maintains scarcity, prevents scalability issues, and is likely to find its own equilibrium via transaction price discovery.


Title: Re: Block size limit automatic adjustment
Post by: znGoat on May 11, 2011, 04:10:24 PM
In the long run the miners are all going to have their own rules on fee schedules; the best we can do is set the default rules with the expectation that one day they will be ignored.

It will be in the big miners' interest to maximize profit (the sum of all fees). That might mean a smaller number of transactions, each paying a large fee, or many, many small-fee transactions.


I propose that the fee schedule is:

A: (optional) first 100KB open to any transactions - this is not adjusted no matter the block size/fees; the miner can optionally not include any free transactions.

B: (recommended) next 100KB given to the highest-fee transactions - a miner must include up to 100KB of the highest-fee transactions. (I don't know if you could enforce this.)

C: (enforced max) Based upon the average of part B over the last 100 blocks, the miner can accept transactions up to:

Max size of Section C = (Total fees in B section)/(AVG fees in last 100 B sections) * 100KB

Total Max: Must not be over 100x the AVG size of the last 6 blocks. (This can grow very large very quickly, if those making the transactions are willing to pay for it.)
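
As a rough C++ sketch of the Section C rule (the names are mine, and I've read the "100x" rule as an overall cap applied to Section C; treat that as one possible interpretation):

Code:
// Rough sketch; names invented, interpretation hedged as noted above.
#include <cstdint>

uint64_t MaxSectionC(double totalFeesB,      // fees collected in this block's B
                     double avgFeesB100,     // avg B fees over the last 100 blocks
                     uint64_t avgSizeLast6)  // avg block size of the last 6 blocks
{
    const uint64_t KB100 = 100000;
    // Paying more into Section B than the recent average buys
    // proportionally more Section C space.
    uint64_t c = (avgFeesB100 > 0.0)
        ? static_cast<uint64_t>(totalFeesB / avgFeesB100 * KB100)
        : 0;
    // Total max: never more than 100x the average size of the last 6 blocks.
    uint64_t cap = 100 * avgSizeLast6;
    return c < cap ? c : cap;
}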


Why I propose the above schedule:

1.  Has a no-cost but limited-size area for any transactions of the miner's choice, e.g. the miner can choose to include transactions from his buddies with no transaction fee. (Section A)

2.  Top-priority transactions have a dedicated place in every block to compete for. (Section B)

3.  If there is a strong demand for fee-paying transactions then the blocks will scale quite large very quickly (aka Christmas shopping).

4.  The total fees must always be significantly more than the average for very large blocks.


I have put quite a bit of thought into this fee schedule; I would love the forum's comments on it.

Overall, whatever we decide will not matter, as one day the big miners will decide for themselves... This is just my best guess about what will fit with the natural economics of bitcoin.


Title: Re: Block size limit automatic adjustment
Post by: jimbobway on May 11, 2011, 04:19:55 PM
I'd tweak the formula to be:  max block size = 1000000 + (int64)(difficulty)

... just to avoid "if block number is < X max block size = 1000000 else..." logic.  Adding in the current 1MB max limit means all the old blocks are valid under the new rule.

I like Mike's point that difficulty and transaction volume aren't necessarily related.  Maybe a better formula for miners would be something like:

max block size = 1000000 + (average size of last N blocks in the best chain)
... where N is maybe 144 (smooth over 24-hours of transactions)

Anybody have access to what Visa daily transaction volume looks like in the days around Christmas?  Are there huge, sudden spikes that the above formula wouldn't handle?


I think averaging the "last N blocks in the best chain" is good, but there may be a better way.  How about we try to predict the size of the next block?  We take the last N blocks and determine whether the trend is linear, exponential, or polynomial.  Then we solve the linear or polynomial equation to determine the N+1 point.  Basically, this method attempts to predict the size of the next block.

We can start off simple and just use y=mx+b.


Title: Re: Block size limit automatic adjustment
Post by: caveden on May 11, 2011, 04:40:39 PM
How about we try to predict the size of the next block?  We take the last N blocks and determine whether the trend is linear, exponential, or polynomial.  Then we solve the linear or polynomial equation to determine the N+1 point.  Basically, this method attempts to predict the size of the next block.

That starts to get more complex than it needs to be, IMHO. As long as the readjustment delay is short (24h, for example, as Gavin suggested), any formula which slightly increases the last average size should be fine. Maybe just making the increase relative instead of absolute would help with commercial holidays.


Title: Re: Block size limit automatic adjustment
Post by: MoonShadow on May 11, 2011, 06:18:19 PM
I was thinking about all this on my commute to work, and I have a proposal.

max block size = 1,000,000 + (difficulty * K) bytes

where K is some factor high enough that the max block size is never really an issue, say K=2.  But some analysis is due on that.

But here is another change to the default fee schedule, granted that individual miners are likely to have tighter requirements themselves than this...

First, the max block size calculated above becomes the basis metric for a set of soft max block sizes.  I suggest 16 tiers of equal size.

0 to one-sixteenth of max block size: no special requirements; miners can include whatever valid transactions they desire up to this point.

1 to 2 (sixteenths): at least one transaction paying a fee equal to or greater than the minimum fee required for unusual transactions must be present.  That transaction can be one wherein the fee was required or not.  As long as at least one is present, miners can include whatever else they desire up to this limit.

2 to 3: at least one transaction paying a fee double that of the above class.

3 to 4: at least one transaction paying at least double the fee of the tier above this one must be present.

And so on, so the fee paid by the highest-paying transaction sets the bar for the block, and the miner can then include whatever other transactions it sees fit.  This not only encourages the use of -sendtomany whenever possible, which is more efficient for the network anyway; most of the fee-paying transactions (and free transactions) are then competing for the fill-in space left by the one transaction that is paying for the bandwidth.  This also sets up a method of ongoing price discovery: any client can look at the last block and its own transaction queue and predict how much it will have to pay in order to get into the next block (probably equal to or higher than the highest single fee in the last block if the queue is steady, slightly more if it is growing, slightly less if it is shrinking).  It likewise establishes a bidding mechanism for the 'median' transaction to be included in a block in the near future, as all transactions besides the high one are then bidding for the remaining space: each sender looks at its own queue of transactions, guesses which will be the high transaction (and therefore the size of the space available), and looks at the second highest to outbid if it wishes to be included in the next block.

In this way, the well-heeled senders set the bar.  Imagine if Wal-Mart, which has half a million employees to pay each week, were to compile that entire pay list into a single -sendtomany transaction.  They would be able to definitively determine the minimum fee they would have to offer just to be considered, based solely on the actual size of the transaction, and then be able to guess how much more they should offer based upon how many large senders there were in the previous several blocks.  Say this transaction had a million outputs (probably 10 million inputs) and was 3.2 MB once done.  The difficulty was 2 million at the last adjustment, so Wal-Mart knows that the max block size is 5 MB.  In order to fit their 3.2 MB single transaction into the block, they have to offer a fee at least 16 times the minimum fee (5/8 = .625, 3.2/.625 = 5.12, so 6th tier; the first tier is free, the second is equal to the minimum fee, so the 6th tier is 4 doublings of the minimum fee).  If the minimum is .01, then Wal-Mart pays at least .16 just to qualify.

EDIT: somewhere along the way I switched my numbers in my head from 16 tiers to only eight.  So my numbers are wrong, but hopefully I conveyed the idea.
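
Here is the same idea as a rough C++ sketch using the 16-tier numbers, so the worked example comes out a bit differently from my 8-tier arithmetic above. All constants are illustrative:

Code:
// Rough sketch of the 16-tier reading; every constant is illustrative.
#include <cmath>
#include <cstdint>
#include <cstdio>

int TierForBlockSize(uint64_t blockSize, uint64_t maxBlockSize) {
    uint64_t tierWidth = maxBlockSize / 16;       // 16 tiers of equal size
    int tier = static_cast<int>(blockSize / tierWidth) + 1;
    return tier > 16 ? 16 : tier;
}

double QualifyingFee(int tier, double minFee) {
    if (tier <= 1) return 0.0;                    // first tier is free
    return minFee * std::pow(2.0, tier - 2);      // doubles each tier after that
}

int main() {
    // Wal-Mart example: difficulty 2,000,000 and K=2 give a 5MB cap
    // under the formula above (1,000,000 + difficulty * K).
    uint64_t maxSize = 1000000 + 2000000 * 2;
    int tier = TierForBlockSize(3200000, maxSize);  // the 3.2MB transaction
    std::printf("tier %d, qualifying fee %.2f BTC\n",
                tier, QualifyingFee(tier, 0.01));   // tier 11, 5.12 BTC
}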


Title: Re: Block size limit automatic adjustment
Post by: gim on May 11, 2011, 06:20:36 PM
max block size = 1000000 + (average size of last N blocks in the best chain)
... where N is maybe 144 (smooth over 24-hours of transactions)

With this formula, asymptotically, the block size cannot increase by more than 2MB per 24 hours.
That is roughly 300000 transactions a day.
(What about Visa spikes? Probably similar.)

This is a hard limit, so if bitcoins are still in use in a hundred years, maybe it would be better to scale exponentially. For example:
Quote
max block size = 1000000 + 1.01 * (average size of last N blocks in the best chain)
and the block size would scale up to (about) 2% per 24 hours.

Yes, that is one more random constant :p
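
For anyone who wants to check those growth rates, here is a quick C++ simulation (mine, purely illustrative) that assumes the worst case of every block being stuffed exactly to the current limit:

Code:
// Quick simulation, not part of the proposal: every block is assumed
// to be stuffed exactly to the current limit.
#include <cstdio>
#include <deque>
#include <numeric>

int main() {
    const int N = 144;
    std::deque<double> lin(N, 1e6), expo(N, 1e6);
    auto avg = [](const std::deque<double>& d) {
        return std::accumulate(d.begin(), d.end(), 0.0) / d.size();
    };
    for (int day = 1; day <= 3; ++day) {
        double maxLin = 0, maxExp = 0;
        for (int i = 0; i < N; ++i) {
            maxLin = 1e6 + avg(lin);
            maxExp = 1e6 + 1.01 * avg(expo);
            lin.pop_front();  lin.push_back(maxLin);
            expo.pop_front(); expo.push_back(maxExp);
        }
        std::printf("day %d: linear cap %.0f bytes, exponential cap %.0f bytes\n",
                    day, maxLin, maxExp);
    }
}

Under that assumption the linear formula settles at roughly 2MB of growth per day, while the 1.01 variant compounds at roughly 2% per day, matching the figures above.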


Title: Re: Block size limit automatic adjustment
Post by: MoonShadow on May 11, 2011, 06:20:37 PM
I'd tweak the formula to be:  max block size = 1000000 + (int64)(difficulty)

... just to avoid "if block number is < X max block size = 1000000 else..." logic.  Adding in the current 1MB max limit means all the old blocks are valid under the new rule.

I like Mike's point that difficulty and transaction volume aren't necessarily related.  Maybe a better formula for miners would be something like:

max block size = 1000000 + (average size of last N blocks in the best chain)
... where N is maybe 144 (smooth over 24-hours of transactions)

Anybody have access to what Visa daily transaction volume looks like in the days around Christmas?  Are there huge, sudden spikes that the above formula wouldn't handle?


I think averaging the "last N blocks in the best chain" is good, but there may be a better way.  How about we try to predict the size of the next block?  We take the last N blocks and determine whether the trend is linear, exponential, or polynomial.  Then we solve the linear or polynomial equation to determine the N+1 point.  Basically, this method attempts to predict the size of the next block.

We can start off simple and just use y=mx+b.

How does this do anything but grow?


Title: Re: Block size limit automatic adjustment
Post by: Mike Hearn on May 11, 2011, 07:17:41 PM
Visa handles around 8,000 transactions per second during holiday shopping and has burst capacity up to 10,000tps.

Of course MasterCard also handles quite a bit. I don't have figures for them but I guess it'd be in the same ballpark.

I don't believe artificial scarcity is a good plan nor necessary in the long run, so requiring end-user software to enforce these sorts of rules makes me nervous. I don't plan on adding max size checks to BitCoinJ at least; they aren't even enforceable, as future SPV clients probably won't request full blocks.


Title: Re: Block size limit automatic adjustment
Post by: FreeMoney on May 11, 2011, 07:59:05 PM
I'd tweak the formula to be:  max block size = 1000000 + (int64)(difficulty)

... just to avoid "if block number is < X max block size = 1000000 else..." logic.  Adding in the current 1MB max limit means all the old blocks are valid under the new rule.

I like Mike's point that difficulty and transaction volume aren't necessarily related.  Maybe a better formula for miners would be something like:

max block size = 1000000 + (average size of last N blocks in the best chain)
... where N is maybe 144 (smooth over 24-hours of transactions)

Anybody have access to what Visa daily transaction volume looks like in the days around Christmas?  Are there huge, sudden spikes that the above formula wouldn't handle?

I like it. Don't worry about Christmas, I'm pretty sure that's a bubble.


Title: Re: Block size limit automatic adjustment
Post by: jimbobway on May 11, 2011, 08:35:04 PM
I'd tweak the formula to be:  max block size = 1000000 + (int64)(difficulty)

... just to avoid "if block number is < X max block size = 1000000 else..." logic.  Adding in the current 1MB max limit means all the old blocks are valid under the new rule.

I like Mike's point that difficulty and transaction volume aren't necessarily related.  Maybe a better formula for miners would be something like:

max block size = 1000000 + (average size of last N blocks in the best chain)
... where N is maybe 144 (smooth over 24-hours of transactions)

Anybody have access to what Visa daily transaction volume looks like in the days around Christmas?  Are there huge, sudden spikes that the above formula wouldn't handle?


I think averaging the "last N blocks in the best chain" is good, but there may be a better way.  How about we try to predict the size of the next block?  We take the last N blocks and determine whether the trend is linear, exponential, or polynomial.  Then we solve the linear or polynomial equation to determine the N+1 point.  Basically, this method attempts to predict the size of the next block.

We can start off simple and just use y=mx+b.

How does this do anything but grow?

Not sure if I am answering your question, but y = mx + b is the high-school algebra equation for a line on a graph.  Using this equation, or some other polynomial equation, to predict the size of the next block shouldn't be too hard.  Just plug in values for m, x, and b and solve for y.

http://www.math.com/school/subject2/lessons/S2U4L2DP.html
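
For what it's worth, a least-squares fit of y = mx + b is only a few lines of C++. This is just a toy sketch with made-up numbers:

Code:
// Toy sketch: least-squares fit y = mx + b through the last N block
// sizes, then extrapolate to point N+1. Example data is made up.
#include <cstdio>
#include <vector>

double PredictNextSize(const std::vector<double>& sizes) {
    const double n = static_cast<double>(sizes.size());
    double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
    for (size_t i = 0; i < sizes.size(); ++i) {
        double x = static_cast<double>(i + 1);
        sumX += x;             sumY += sizes[i];
        sumXY += x * sizes[i]; sumXX += x * x;
    }
    double m = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
    double b = (sumY - m * sumX) / n;
    return m * (n + 1) + b;  // solve for y at the next point
}

int main() {
    std::vector<double> sizes = {12000, 15000, 14000, 18000, 21000};
    std::printf("predicted next block size: %.0f bytes\n", PredictNextSize(sizes));
}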

I think Gavin is right in that we need some data, maybe plot it on a graph, and determine which method/equation can best fit that graph.

Just my two millicoins.


Title: Re: Block size limit automatic adjustment
Post by: MoonShadow on May 11, 2011, 08:50:39 PM

Quote
How does this do anything but grow?

Not sure if I am answering your question, but y = mx + b is the high-school algebra equation for a line on a graph.  Using this equation, or some other polynomial equation, to predict the size of the next block shouldn't be too hard.  Just plug in values for m, x, and b and solve for y.


That wasn't really what I was asking.  I'm not a math geek, I'm an econo-geek (and a radio geek, but that's not relevant).  I think that a simple equation to predict the trend in order to set a block size has the incentives wrong, and almost certainly trends toward infinity, because both those paying for transactions to be processed and miners have an incentive for every transaction to be included in every block.  Then we truly do have a 'tragedy of the commons' situation: the block size shoots to the moon, senders no longer have an incentive to pay anything over a token fee, and miners start dropping out because the fees can't cover the cost of bandwidth and electricity, resulting in a difficulty level that is too low for the network to defend itself as the block reward is reduced.

There needs to be some mechanism that resists arbitrary growth of the block size, even if only a little.  Tying the max block size to the difficulty in some linear fashion is a smooth way to do this.  I'm not married to the details, but the implementation just seems smooth to me.  I have no concept of how difficult that would be to implement in the code, because I'm not a coder, but I imagine it would still be easier than a rolling average or a predictive algorithm, because it's just linear math.


Title: Re: Block size limit automatic adjustment
Post by: MoonShadow on May 11, 2011, 08:52:54 PM
Visa handles around 8,000 transactions per second during holiday shopping and has burst capacity up to 10,000tps.


What's the average size of a simple transaction?


Title: Re: Block size limit automatic adjustment
Post by: Mike Hearn on May 11, 2011, 09:37:34 PM
Why would it matter? BitCoin is the only financial system I know of that cares about wire-size. Any serious processing company just builds a few datacenters and you're done. They don't even have to be very big.

And the only reason BitCoin cares about wire-size is that we're afraid of scaling up the system, pretty much.


Title: Re: Block size limit automatic adjustment
Post by: MoonShadow on May 11, 2011, 09:42:51 PM
Why would it matter? BitCoin is the only financial system I know of that cares about wire-size. Any serious processing company just builds a few datacenters and you're done. They don't even have to be very big.

And the only reason BitCoin cares about wire-size is that we're afraid of scaling up the system, pretty much.

Wire-size?

Scaling isn't really an issue if the system is suited to compensating the network for the resources; that's what I'm concerned about.  If we choose an algorithm that just permits limitless growth, then we might as well remove the block size limit altogether and cross our fingers, because the result is the same.  We don't have the option of "just build a few datacenters", because this is the method by which we must pay for those datacenters, and ours need to be bigger and faster than any others.


Title: Re: Block size limit automatic adjustment
Post by: xf2_org on May 11, 2011, 09:43:26 PM
And the only reason BitCoin cares about wire-size is that we're afraid of scaling up the system, pretty much.

Well, it's still pretty cheap to dump tons of useless data into the block chain.

Satoshi didn't seem to think the block size limit should be changed... until it needed to be.  Right now, we are nowhere near the limit, so his rationale still seems sound.



Title: Re: Block size limit automatic adjustment
Post by: Mike Hearn on May 11, 2011, 09:46:31 PM
The problem is that we will never reach the point at which the block size "needs" to be increased because by the time this is regularly happening people will be trying BitCoin, deciding that transactions are expensive and slow, then leaving again. It's a self-fulfilling prophecy in that sense.


Title: Re: Block size limit automatic adjustment
Post by: gim on May 11, 2011, 10:17:50 PM
Visa handles around 8,000 transactions per second during holiday shopping and has burst capacity up to 10,000tps.
Ugh... I'm realizing I underestimated it by a lot in my first post.
Assuming the daily transaction increase is, let's say, 1/100 of this peak, the proposed linear adjustment is definitely far too small for that scale.
The 1MB constant should be multiplied by at least 20.

Exponential adjustment is not an option IMO. Localized spammers can't exploit it anyway (even with a sloppy 5%, 10% or even 100% per day adjustment).


Title: Re: Block size limit automatic adjustment
Post by: gim on May 11, 2011, 10:43:02 PM
I don't believe artificial scarcity is a good plan nor necessary in the long run,
Forgot to say I fully agree, but it was probably obvious.

so requiring end-user software to enforce these sorts of rules makes me nervous. I don't plan on adding max size checks to BitCoinJ at least; they aren't even enforceable, as future SPV clients probably won't request full blocks.
Yes, as long as most miners perform size checks and refuse to build on top of obviously oversized blocks, everything should be fine.
There's no reason for light clients to check that.


Title: Re: Block size limit automatic adjustment
Post by: caveden on May 12, 2011, 07:07:37 AM
I don't believe artificial scarcity is a good plan nor necessary in the long run,
Forgot to say I fully agree, but it was probably obvious.

so requiring end-user software to enforce these sorts of rules makes me nervous. I don't plan on adding max size checks to BitCoinJ at least; they aren't even enforceable, as future SPV clients probably won't request full blocks.
Yes, as long as most miners perform size checks and refuse to build on top of obviously oversized blocks, everything should be fine.
There's no reason for light clients to check that.

So are you both among those who think miners would charitably work at a net loss for the "common good" in the distant future (not entirely impossible, but I'd rather not rely on that), or do you have another idea of how transaction fees will remain above 0.01 µBTC?
Or maybe you believe the entire user base can expect miners to set up an agreement regarding such artificial scarcity? It's true it may happen, but I feel uneasy about it... I'd feel more comfortable if this were set by a "client consensus" instead of a "miner consensus"...


Title: Re: Block size limit automatic adjustment
Post by: Mike Hearn on May 12, 2011, 08:53:46 AM
I already explained in the "disturbingly low tx fee equilibrium" thread how I think fees will be set in future. It does not rely on altruism.


Title: Re: Block size limit automatic adjustment
Post by: caveden on November 29, 2011, 10:09:38 PM
Reviving this old topic just to say that my opinion on this subject has mostly changed, mainly after some very interesting talks with Stefan Thomas and others (even the great economist Detlev Schlichter was there :) ) during the bitcoin conference in Prague.

We probably don't need to bother with fixing a max block size in the protocol. The spam problem is a problem for pool operators and solo miners only, as already said in this thread. And the transaction fee "tragedy of the commons" scenario would probably be better solved by market agents (like insurers) than by arbitrary rules in the protocol. By fixing arbitrary rules, we will either not create the amount of incentives needed or, more likely, create more incentives than are actually needed, provoking an unnecessary waste of resources.


Title: Re: Block size limit automatic adjustment
Post by: zellfaze on November 30, 2011, 05:48:46 AM
max block size = 1000000 + (average size of last N blocks in the best chain)
... where N is maybe 144 (smooth over 24-hours of transactions)

With this formula, asymptotically, the block size cannot increase by more than 2MB per 24 hours.
That is roughly 300000 transactions a day.
(What about Visa spikes? Probably similar.)

This is a hard limit, so if bitcoins are still in use in a hundred years, maybe it would be better to scale exponentially. For example:
Quote
max block size = 1000000 + 1.01 * (average size of last N blocks in the best chain)
and the block size would scale up to (about) 2% per 24 hours.


This is the way I would do it, personally.  I don't really see much of a problem with it.


Title: Re: Block size limit automatic adjustment
Post by: Vandroiy on December 03, 2011, 01:23:07 AM
Reviving this old topic just to say that my opinion on this subject has mostly changed, mainly after some very interesting talks with Stefan Thomas and others (even the great economist Detlev Schlichter was there :) ) during the bitcoin conference in Prague.

We probably don't need to mind with fixing a max block size on the protocol. The spam problem is a problem for pool operators and solo-miners only, as already said in this thread. And the transaction fee "tragedy of the commons" scenario would probably be better solved by market agents (like insurers) than by arbitrary rules on the protocol. By fixing arbitrary rules, we will either not create the amount of incentives needed, or, more likely, we will create more incentives than what's actually needed, provoking unnecessary waste of resources.

I was already worried nobody would discuss the topic when I didn't have time to come to Prague. Thanks for keeping it up. :)

Concerning the difficulty equilibrium: this is also the conclusion I'm currently at. Insurers are the way to go; this will keep tx fees very low, which is a very good thing. 8)

But I'm still a little uncertain about the block size limit because of storage. A single miner with access to a lot of storage might flood the block chain to get rid of competition with less storage capacity, so some mechanism against arbitrarily large blocks might be good. "The spam problem is a problem for pool operators", I agree, so what do we do about it?

Also, CAN the block size limit be changed easily, can we reach a consensus without splitting the network?


@zellfaze: scaling exponentially is almost the same as having no limit in the first place. If a large miner network wants to start abusing it, this limit would make no difference, just delaying the attack by a few hours or maybe days.


Title: Re: Block size limit automatic adjustment
Post by: caveden on December 04, 2011, 06:53:00 PM
But I'm still a little uncertain about the block size limit because of storage. A single miner with access to a lot of storage might flood the block chain to get rid of competition with less storage capacity, so some mechanism against arbitrarily large blocks might be good. "The spam problem is a problem for pool operators", I agree, so what do we do about it?

It's in the interest of miners not to have huge blocks, which they didn't mine, occupying space on their hard drives. Also, I believe it's in their interest not to split the network. So, they can create rules of the kind "I won't build on top of blocks larger than X unless they are already Y blocks deep". They can actually have multiple rules with different values for X and Y. If most miners roughly agree on these parameters, it would be really hard to keep a chain with a block larger than such limits.

Plus, if they want, they can actually use the same mechanism to try to boycott miners which accept transactions with "too low fees". "I won't mine on top of blocks which had transactions paying less than X/Kb unless it is already Y blocks deep". That would be another "decentralized" way to deal with the incentives issue, besides the eventual insurers.
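
As a rough C++ sketch of what the size rule could look like on the miner side (all names and thresholds are invented):

Code:
// Rough sketch of a miner-side policy; all names and thresholds invented.
#include <cstdint>
#include <cstdio>
#include <vector>

struct SizeRule {
    uint64_t maxSize;  // X: tolerate blocks up to this size freely...
    int minDepth;      // ...Y: bigger ones only once they are this deep
};

// blockSize: size of the offending block; depth: blocks already on top of it.
bool ShouldBuildOn(uint64_t blockSize, int depth,
                   const std::vector<SizeRule>& rules) {
    for (const SizeRule& r : rules) {
        if (blockSize > r.maxSize && depth < r.minDepth)
            return false;  // too big and not yet buried: boycott it
    }
    return true;
}

int main() {
    // Example policy: >2MB blocks need to be 6 deep, >10MB blocks 100 deep.
    std::vector<SizeRule> policy = {{2000000, 6}, {10000000, 100}};
    std::printf("%d\n", ShouldBuildOn(3000000, 2, policy));   // 0: boycott
    std::printf("%d\n", ShouldBuildOn(3000000, 10, policy));  // 1: accept
}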

Also, CAN the block size limit be changed easily, can we reach a consensus without splitting the network?

It would be a backward-incompatible change, so it would need to be scheduled well in advance to minimize this risk.

@zellfaze: scaling exponentially is almost the same as having no limit in the first place. If a large miner network wants to start abusing it, this limit would make no difference, just delaying the attack by a few hours or maybe days.

Not entirely sure. As it would involve faking transactions, only their blocks would push the limit up. Every other "true block" would be way below the limit, pushing the average down. The amount of waste such an abusing network would manage to create would be proportional to the amount of processing power it has, and I don't think you can gather that many people for such a pointless attack...


Title: Re: Block size limit automatic adjustment
Post by: Meni Rosenfeld on December 04, 2011, 07:29:34 PM
And the transaction fee "tragedy of the commons" scenario would probably be better solved by market agents (like insurers)
Can you explain exactly how this will work? Back then I was not at all convinced by Mike's vision of the mechanics, and I still consider this an open problem.

Plus, if they want, they can actually use the same mechanism to try to boycott miners which accept transactions with "too low fees". "I won't mine on top of blocks which had transactions paying less than X/Kb unless it is already Y blocks deep". That would be another "decentralized" way to deal with the incentives issue, besides the eventual insurers.
If your solution relies on a cartel of miners boycotting competitors who undercut them, with nobody having a clear idea what they need to do to have their blocks accepted, I'd say you already lost.


Title: Re: Block size limit automatic adjustment
Post by: caveden on December 04, 2011, 08:20:00 PM
And the transaction fee "tragedy of the commons" scenario would probably be better solved by market agents (like insurers)
Can you explain exactly how this will work? Back then I was not at all convinced by Mike's vision of the mechanics, and I still consider this an open problem.

Yes, I guess Mike Hearn was the first to come up with this insurance idea a while ago, but at the time maybe I didn't pay it the attention it deserved, or I just didn't think it through well enough; I don't remember.

Obviously I can't "explain exactly how this will work", since I have no crystal ball. But I tend to trust spontaneous order more than centrally planned order, particularly when it comes to economic incentives. And that's what Stefan made me see in those talks: by arguing for an arbitrary formula to set up a moving block size limit, I was in a sense arguing for central planning instead of spontaneous order.

I will just give a rough sketch of the idea of this insurance for those who haven't heard about it yet. Basically, people interested in not being the target of double-spends, as well as in being able to spend at all (the "freezing the network" attack scenario), could buy insurance for that. Say, for example, some organization wants to freeze bitcoin's network with a >50% attack. If that happens, insurers would have to pay a huge amount to their clients. They have a financial interest in renting enough processing power to overcome such an attack as quickly as possible. (Actually, I would expect most bitcoin users to collaborate... not only for ideological reasons, but simply to be able to spend their money again.)
If you want a more "discussed" scenario which can be compared to bitcoin's "transaction fee tragedy of the commons" scenario, I'd suggest that of stateless defense. It's obviously not exactly the same thing, but I think it's the closest one in the economic literature. For example, half of the book Chaos Theory, by Bob Murphy, is about stateless defense.

If your solution relies on a cartel of miners boycotting competitors who undercut them, with nobody having a clear idea what they need to do to have their blocks accepted, I'd say you already lost.

It's not "my solution". And what do you mean by "nobody having a clear idea what they need to do to have their blocks accepted"? Nothing needs to be done in secret; actually, pool operators had better announce everything they do pretty clearly, since they are using other people's resources, after all.
And, also, what did I lose?


Title: Re: Block size limit automatic adjustment
Post by: Vandroiy on December 04, 2011, 08:42:56 PM
It's in the interest of miners not to have huge blocks, which they didn't mine, occupying space on their hard drives. Also, I believe it's in their interest not to split the network. So, they can create rules of the kind "I won't build on top of blocks larger than X unless they are already Y blocks deep". They can actually have multiple rules with different values for X and Y. If most miners roughly agree on these parameters, it would be really hard to keep a chain with a block larger than such limits.

Plus, if they want, they can actually use the same mechanism to try to boycott miners which accept transactions with "too low fees". "I won't mine on top of blocks which had transactions paying less than X/Kb unless it is already Y blocks deep". That would be another "decentralized" way to deal with the incentives issue, besides the eventual insurers.

All of this assumes that there is a proxy service giving miners the historical data. Otherwise, the proposed situation is not stable: the miner consensus would continuously kick out miners with too little storage space, until the weaker half of miners in terms of storage are the stronger half in processing power. Who knows when that would happen?

Always keep in mind that miners have no say once they're kicked out of the market for whatever reason! Neglecting that was the major reason people did not acknowledge the low difficulty equilibrium for so long.

I really hope nobody introduces some major mistake in the dynamics when changing the protocol. Make noise if any protocol change is upcoming, and let's make sure the dynamics are right. We might not be able to deflate the block chain again once it's enormous.


Title: Re: Block size limit automatic adjustment
Post by: Meni Rosenfeld on December 04, 2011, 09:50:11 PM
I will just give a rough sketch of the idea of this insurance for those who haven't heard about it yet. Basically, people interested in not being the target of double-spends, as well as in being able to spend at all (the "freezing the network" attack scenario), could buy insurance for that. Say, for example, some organization wants to freeze bitcoin's network with a >50% attack. If that happens, insurers would have to pay a huge amount to their clients. They have a financial interest in renting enough processing power to overcome such an attack as quickly as possible.
Sorry, I'm just not seeing it. Relying on these insurers to counter any potential attack seems only one step removed from dropping the whole proof of work thing and just letting a few trusted servers synchronize transactions.

I'd much rather see a carefully planned incentive structure / branch selection criterion (which IMO should involve some combination of proof-of-stake, cementing, Bitcoin days destroyed and proof-of-work) which naturally leads to an efficient decentralized market.

(Actually, I would expect most bitcoin users to collaborate... not only for ideological reasons, but simply to be able to spend their money again.)
The effect each user's mining has on his own ability to spend bitcoins is negligible and not much of an incentive. That's pretty much what "tragedy of the commons" means.

If you want a more "discussed" scenario which can be compared to bitcoin's "transaction fee tragedy of the commons" scenario, I'd suggest that of stateless defense. It's obviously not exactly the same thing, but I think it's the closest one in the economic literature. For example, half of the book Chaos Theory, by Bob Murphy, is about stateless defense.
Thanks, sounds interesting, I'll try to have a look.

If your solution relies on a cartel of miners boycotting competitors who undercut them, with nobody having a clear idea what they need to do to have their blocks accepted, I'd say you already lost.
And what do you mean by "nobody having a clear idea what they need to do to have their blocks accepted"? Nothing needs to be done in secret; actually, pool operators had better announce everything they do pretty clearly, since they are using other people's resources, after all.
This still gives big miners too much power in demanding draconian tx fees. That solves the difficulty equilibrium problem, but creates a new problem. (Fees that are too high will reduce Bitcoin tx volume and thus the total fees collected. But I see no reason why the point of maximum collected fees is the best point for Bitcoin in general. Efficiency is when you compete with someone other than yourself.)

And, also, what did I lose?
Lost in your efforts to bring about a decentralized (as in, not run by a cartel) currency.


Title: Re: Block size limit automatic adjustment
Post by: caveden on December 05, 2011, 08:04:14 AM
All of this assumes that there is a proxy service giving miners the historical data.

 ???
I didn't understand this. I'm not making such an assumption.

Otherwise, the proposed situation is not stable: the miner consensus would continuously kick out miners with too little storage space, until the weaker half of miners in terms of storage are the stronger half in processing power.

I'm not sure I follow. You think the "miner consensus" would artificially create huge blocks with fake transactions in them, just to kick out those who do not have the resources to handle them? Because otherwise, if they are not faking anything and the blocks are big, it's the bitcoin network itself that's kicking them out.
I don't think it would be clever for pool operators to try to fake large blocks.

I really hope nobody includes some major mistake in dynamics when changing the protocol.

That's not a change in the protocol, other than eventually eliminating the 1MB limit - which will have to be done anyway.
Miners are the only ones who should be concerned with huge blocks, so why not let them work that out?


Title: Re: Block size limit automatic adjustment
Post by: caveden on December 05, 2011, 08:32:49 AM
Sorry, I'm just not seeing it. Relying on these insurers to counter any potential attack seems only one step removed from dropping the whole proof of work thing and just letting a few trusted servers synchronize transactions.

Oh, but mining will be a professional thing regardless. Actually, it already is: if we consider only those who mine solo or operate pools, they are quite few.
I wouldn't compare that to a centralized solution, though. Anyone with the skills and resources can enter the market, and most importantly, pool operators don't own the resources they use to mine. Even if a pool operator is shut down by force, the actual miners, those who own the cards, can just switch to another pool or start mining solo. Somebody may also try to create a new pool to gather all those pool-less miners. Anyway, it's not comparable to a centralized solution where you kill everything by killing the servers.

I'd much rather see a carefully planned incentive structure / branch selection criterion (which IMO should involve some combination of proof-of-stake, cementing, Bitcoin days destroyed and proof-of-work) which naturally leads to an efficient decentralized market.

Don't you think that's the political way of solving problems? We're more likely to create new problems than to solve the existing one. :)

This still gives big miners too much power in demanding draconian tx fees.

Please keep in mind that there's nothing we can do to prevent this kind of "cartel boycott" from happening right now. I'd argue that not even CPU-friendly proof-of-work algorithms prevent it, since in the end there would be big pools anyway (and while they don't prevent cartels, they create new vulnerabilities).

And I don't think transaction fees will ever be draconian, for the reason you also note:

(Fees that are too high will reduce Bitcoin tx volume and thus the total fees collected. But I see no reason why the point of maximum collected fees is the best point for Bitcoin in general. Efficiency is when you compete with someone other than yourself.)

The fact that it's not that difficult to transact outside of the blockchain already prevents miners from abusing. And they are competing with someone else: they are competing with other miners, with e-wallets which take transactions out of the chain, with offline means of payment like Casascius coins and Bitbills, etc.


Title: Re: Block size limit automatic adjustment
Post by: Maged on December 05, 2011, 06:33:07 PM
This still gives big miners too much power in demanding draconian tx fees.

Please keep in mind that there's nothing we can do to prevent this kind of "cartel boycott" from happening right now.
This is an important point to keep in mind. You also must consider that a cartel might even be desirable. While individual people might have a slight interest in protecting Bitcoin from attacks, that interest might not be very strong. Across the whole group, that means the protection is pretty weak. For example, think of your average 1-rig miner out there right now. If Bitcoin collapsed today, how much of an impact would that really have on them? How much would they really lose? A strong cartel, on the other hand, would have a significant vested interest in keeping Bitcoin safe, because each individual member has invested a lot of money into equipment that wouldn't be worth much for any other application.

As for the worries about large blocks, you have to remember that large blocks already carry a significant cost for the miner who mined them, at least in terms of risk. The larger the block...

1) the longer it takes to initially transmit to all of the miner's peers.
2) the longer it takes peers to individually verify the block.
3) the longer it takes those peers to then broadcast the block.

The longer that process takes to propagate the block throughout the network, the more likely it is for the block to be ultimately rejected. In fact, once a miner accepts such a large block, they might be willing to switch to building off a block with fewer transactions if one comes in before they are confident that the large block has fully propagated. That is especially true if the small block collects far fewer fees than the large one.
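
To put rough numbers on that risk: if blocks are found as a Poisson process with a 600-second mean, a block that takes t seconds to propagate is orphaned with probability of roughly 1 - e^(-t/600). A toy C++ sketch, where the seconds-per-megabyte figure is invented just to show the shape of the curve:

Code:
// Back-of-the-envelope model, not from the thread: blocks are found as a
// Poisson process with a 600-second mean, so a block that takes t seconds
// to propagate is orphaned with probability ~ 1 - e^(-t/600).
#include <cmath>
#include <cstdio>

double OrphanRisk(double propagationSeconds) {
    return 1.0 - std::exp(-propagationSeconds / 600.0);
}

int main() {
    // Assume ~10 seconds of transmit-plus-verify time per megabyte
    // (an invented figure, just to show the shape of the curve).
    for (double mb = 1; mb <= 32; mb *= 2)
        std::printf("%4.0f MB block, ~%4.0f s to propagate: %5.1f%% orphan risk\n",
                    mb, mb * 10, 100.0 * OrphanRisk(mb * 10));
}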


Title: Re: Block size limit automatic adjustment
Post by: Meni Rosenfeld on December 05, 2011, 07:21:26 PM
This still gives big miners too much power in demanding draconian tx fees.
Please keep in mind that there's nothing we can do to prevent this kind of "cartel boycott" from happening right now.
If we don't need to rely on a cartel to maintain difficulty equilibrium, we can try to come up with a solution to the problem of a cartel forming.

While individual people might have a slight interest in protecting Bitcoin from attacks, that interest might not be very strong. Across the whole group, that means the protection is pretty weak. For example, think of your average 1-rig miner out there right now. If Bitcoin collapsed today, how much of an impact would that really have on them? How much would they really lose?
Yes, this is the problem. No, I don't agree that a cartel is a desirable solution.