Bitcoin Forum

Bitcoin => Development & Technical Discussion => Topic started by: Meni Rosenfeld on May 06, 2012, 03:57:10 PM



Title: Dynamic block frequency
Post by: Meni Rosenfeld on May 06, 2012, 03:57:10 PM
tl; dr: The target time span between blocks, which for Bitcoin is 10 minutes, is an important design parameter. Since it is difficult to determine the optimal value in advance, it may be desired to have it dynamically set by the market rather than hardcoded - and this is possible.

Introduction. The Bitcoin mining difficulty is adjusted so that a block will be found, on average, once every 10 minutes. This figure appears in Satoshi's whitepaper, but not much motivation is given for this choice. It is a compromise obtained from considering the following points:

Advantages of long time span:
1a. Less time wasted trying to continue an old chain while waiting for new blocks to propagate, which would give attackers an advantage.
2a. Less overhead for transmitting and storing block headers.

Advantages of short time span:
1b. Less variance for miners (and you should care about miners (https://bitcointalk.org/index.php?topic=79757.0)).
2b. Less time merchants need to wait to receive a given level of security of a transaction (there are some subtleties and much confusion about this point, but it's basically correct).

There is no guarantee that the value of 10 minutes is optimal, and some alternative blockchain-based currencies ("alts") have tried their luck with a different target. It is also likely that the optimal value will change over time, so it may be desired not to have it hardcoded, but rather have a system that allows market forces to determine it dynamically. In this post I will propose just such a system.

I will first focus on the scenario where the block reward consists entirely of newly generated coins, without any transaction fees, and on the respective advantages 1a and 1b of long and short time spans. The general case will be considered afterwards.

I don't know if the block target time is considered to be a part of the Schelling point of Bitcoin which must never be changed, or if the proposed system is too much trouble to implement now that the wheels are already in motion. At the very least, it could be something to consider for a well-designed alt.

Basics. The network will still have a target hash value such that a hash lower than the target is found, on average, once every 10 minutes. However, when mining a block, the miner can choose a specific target for this block, which can be distinct from the network target. As long as the block hash is lower than the specific target, the block is valid. The specific target is part of the hashed block header, so the miner can only choose the target before he starts mining.

Branch selection will be based on the total work contained in the branch, obtained by summing the inverses of the specific targets.

The block's weight will be the ratio between the network target and the specific target. An easy block with a higher target will have less weight. The amount of new generated coins will be the normal block reward multiplied by the weight. A block which is ten times easier to find than a normal block will only give a tenth of the bitcoins as a reward.
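
To make the arithmetic concrete, here is a rough Python sketch of the weight, reward and branch-work rules above (all names and constants are illustrative placeholders, not a specification):

NETWORK_TARGET = 2**224 // 50_000     # stand-in network target
BASE_SUBSIDY = 50 * 100_000_000       # normal block reward, in satoshis

def block_weight(specific_target):
    # Weight = network target / specific target; an easier block has a lower weight.
    return NETWORK_TARGET / specific_target

def block_generation(specific_target):
    # New coins generated = normal reward scaled by the block's weight.
    return BASE_SUBSIDY * NETWORK_TARGET // specific_target

def branch_work(specific_targets):
    # Branch selection: total work = sum of the inverses of the specific targets.
    return sum(1 / t for t in specific_targets)

# A block ten times easier to find has weight 0.1 and generates a tenth of the coins.
easy = NETWORK_TARGET * 10
assert abs(block_weight(easy) - 0.1) < 1e-9
assert block_generation(easy) == BASE_SUBSIDY // 10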

Various landmarks which are now triggered by a certain number of blocks will instead be triggered by a certain total weight. The normal block reward will be halved whenever the total weight reaches a multiple of 210,000. A block which crosses the threshold will be rewarded proportionally, based on the portion of its weight that rests on either side of the crossover point. This guarantees that the total bitcoins generated remains 21M, and on the same schedule (though with different granularity).
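
A sketch of how a block whose weight straddles a halving point could be paid (again Python, with illustrative constants; the weight accounting is simplified):

HALVING_WEIGHT = 210_000
INITIAL_SUBSIDY = 50 * 100_000_000    # satoshis

def era_subsidy(era):
    return INITIAL_SUBSIDY >> era     # halves every era

def straddling_generation(total_weight_before, weight):
    # Pay each portion of the block's weight at the rate of the era it falls in.
    reward, position, remaining = 0, total_weight_before, weight
    while remaining > 0:
        era = int(position // HALVING_WEIGHT)
        room = (era + 1) * HALVING_WEIGHT - position   # weight left in this era
        portion = min(remaining, room)
        reward += portion * era_subsidy(era)
        position += portion
        remaining -= portion
    return int(reward)

# A weight-2 block starting one weight unit before the first halving:
# one unit is paid at 50 BTC and one at 25 BTC.
assert straddling_generation(209_999, 2) == 75 * 100_000_000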

The difficulty (the network target) will be adjusted every 2016 weight units, so that the time to find 2016 weight units is 2 weeks.
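
The retarget itself is the usual Bitcoin formula, only counted in weight units; a sketch (the clamping factor is an assumption borrowed from the current rules, not part of the proposal):

TARGET_TIMESPAN = 14 * 24 * 60 * 60   # two weeks, in seconds

def retarget(old_network_target, actual_timespan):
    # New network target so that the next 2016 weight units take about two weeks.
    actual_timespan = max(TARGET_TIMESPAN // 4, min(actual_timespan, TARGET_TIMESPAN * 4))
    return old_network_target * actual_timespan // TARGET_TIMESPAN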

If miners in general end up choosing a weight higher/lower than 1, this will have a similar effect as hardcoding a time span longer/shorter than 10 minutes.

Market forces. Choosing a weight different from 1 has no effect on the theoretical expected reward from mining. However, it affects variance and efficiency.

A miner will not want the weight to be too large, as that would mean a smaller chance to get a higher reward, which increases variance. If the default time span is so large that the variance is costly, miners will tend towards choosing a lower weight.

A miner will not want the weight to be too small, since that would increase the chance that his block will be rejected. Suppose the average timespan is so short that forking is a matter of course. If two miners find a block at the same time, the one that chose a higher weight will be accepted, because the resulting branch has more total work. Thus the higher-weight miner will enjoy a higher percentage of accepted blocks and a higher average reward, and this will push the market back to an average timespan where conflicts aren't that much of a problem.

The same considerations apply to mining pools, which for the sake of their users want to minimize both variance and the invalid block rate.

All in all, this will allow the market to reach an equilibrium point which strikes a balance between variance and forking.

Attacker manipulation. The incentives of an attacker are different from those of honest miners. He will not care about the variance of long time spans, and choosing higher-weight blocks will make an attack easier. In fact, it can be shown that if the attacker can choose an arbitrarily high weight, any double-spend attempt is guaranteed to succeed eventually, whatever the hashrates of the attacker and the honest network (roughly, each attempt has some positive probability of producing a single block heavy enough to outweigh the entire honest branch, so repeated attempts eventually succeed). Being able to choose a very small weight makes Finney attacks easier.

Therefore, some limitation on weight selection will need to be placed. For example, it can be bounded from above and below by some multiple of the mean weight of the recent blocks. This will still allow the market to gradually converge towards a different time span, while preventing an attacker from choosing time spans much higher or lower than the rest of the network.
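
A minimal sketch of such a check (the window size and the multipliers are placeholders, not a proposal for specific values):

def weight_allowed(weight, recent_weights, low=0.5, high=2.0, window=500):
    # Accept a block's chosen weight only if it lies within a band around the
    # mean weight of the recent blocks.
    recent = recent_weights[-window:]
    mean = sum(recent) / len(recent)
    return low * mean <= weight <= high * mean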

Future work. The resulting equilibrium may be such that merchants will suffer from needlessly long confirmation times; miners can choose short time spans that, while not causing many conflicts, overload the blockchain with numerous headers; and the proportional relationship between block weight and reward is not at all clear for a scenario where transaction fees are significant.

I believe all of these issues can be solved within a reworked framework of the way transaction fees work, which I think is needed anyway. In the future I may write about that in a separate post.

CONTINUED:

Transaction fees. When the transaction fees become significant, the procedure described above loses the direct proportionality between block weight and reward, twisting the incentive structure and favoring shorter blocks. This can be solved by using the continuous version of rollover transaction fees (https://bitcointalk.org/index.php?topic=80387.0), which pays out transaction fees not to a single block but to several blocks based on their weight. The proportionality is thus approximately reinstated.

If the decay is fairly fast, this still gives a higher reward per weight unit the lighter the block is. This gives Bitcoin users some control over the miners' choice. If the confirmation speed leaves something to be desired because block times are too long, they will typically pay a higher fee to expedite their transactions; if everyone does this, transaction fees become proportionally higher than coin generation, which increases the incentive to mine light blocks and mitigates the problem.
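
A sketch of this effect, assuming for illustration an exponentially decaying rollover pool from which a block of weight w collects a fraction 1 - exp(-w/k):

import math

def reward_per_weight(w, subsidy_per_weight, fee_pool, k=1.0):
    # Generation scales linearly with weight; the rollover payout does not.
    generation = subsidy_per_weight * w
    fees = fee_pool * (1 - math.exp(-w / k))
    return (generation + fees) / w

# With a fast decay, lighter blocks earn more per weight unit, so a larger fee
# pool (relative to generation) pushes miners towards shorter blocks.
assert reward_per_weight(0.5, 50.0, 10.0) > reward_per_weight(1.0, 50.0, 10.0)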

This effect no longer holds when the coin generation becomes negligible, in which case transaction fees are the only significant component of the reward, and increasing fees does not encourage shorter blocks. Finding a mechanism to allow users to encourage shorter blocks in this case is an open problem. One idea is to not use only an exponentially decaying function, but to give users a choice between functions with different kurtosis.

Block header upkeep cost. Every block generated creates a burden on the network, due to the need of every node, even SPV clients, to download and store it. Thus the miner should pay for the resources consumed by his block. This can take the form of a fixed deduction from the block reward, which will be added to the long-term fee pool described here (https://bitcointalk.org/index.php?topic=80435.0). This will prevent miners from choosing very short blocks which overload the network.
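
As a sketch, with an illustrative fixed deduction credited to the long-term pool:

HEADER_COST = 1_000_000   # satoshis per block; purely a placeholder value

def settle_block(gross_reward, fee_pool):
    # Deduct the upkeep cost from the miner's payout and credit it to the pool.
    miner_payout = max(0, gross_reward - HEADER_COST)
    fee_pool += min(gross_reward, HEADER_COST)
    return miner_payout, fee_pool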

Contested transaction fees. Because transaction fees decay, miners could be incentivized to intentionally reject a valid block and try to find their own heavier block, allowing them to collect more fees. This can be solved by the principle that contested transaction fees are deferred - that is, in case of a fork, the fees which the blocks have in common are not paid to either of them, but rather suspended and added back to the pool until blocks are found with a total weight equal to the current maximum weight. This applies only to fees and not to the coinbase. This way there is no incentive to create a fork when you know you will lose the reward you tried to reclaim. This also makes unintentional forks that much more painful, but this will be compensated by the deferred fees that will exist at any time.

The implementation of this could be tricky, and requires, among other things, that invalid blocks be considered in the calculation of the block rewards of the valid chain.
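
As a rough sketch of the bookkeeping for a simple two-block fork (simplified: it ignores the weight threshold that releases the deferred fees, and the coinbase is untouched):

def settle_fork(winner_fees, loser_fees, fee_pool):
    # winner_fees / loser_fees map txid -> fee for the two competing blocks.
    contested = winner_fees.keys() & loser_fees.keys()
    paid_now = sum(fee for txid, fee in winner_fees.items() if txid not in contested)
    fee_pool += sum(winner_fees[txid] for txid in contested)   # deferred for later payout
    return paid_now, fee_pool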


Title: Re: Dynamic block frequency
Post by: Maged on May 06, 2012, 08:45:42 PM
Attacker manipulation. The incentives of an attacker are different from honest miners. He will not care about the variance in long time spans, and choosing higher weight blocks will make an attack easier. In fact, it can be shown that if the attacker can choose an arbitrarily high weight, any double-spend attempt is guaranteed to succeed eventually, whatever the hashrate of the attacker and the honest network.

Therefore, some limitation on weight selection will need to be placed. For example, it can be bounded by some multiple of the median weight of the recent blocks. This will still allow the market to gradually converge towards a higher time span, while preventing an attacker from choosing time spans much higher than the rest of the network.
Unfortunately, this is where your scheme loses some of its greatest usefulness. It would in no way allow merchants to receive a "given level of security of a transaction" that would be any more useful than it is now. The biggest jump in the security of a transaction is when it reaches 1 confirmation, and that's because that is when a Finney Attack is no longer effective. However, under your proposal, a Finney Attack remains equally possible and effective all the way up to the cutoff point for the maximum weight of a single block. Even worse, though, if an attacker doesn't even need that much weight to pull a Finney Attack on someone, they can make blocks with a lower weight, allowing them to pull off the attack more often. This means that in order for someone to have an equal amount of security as one confirmation right now, they need to wait until the maximum weight for a single block has been added to the chain. Now, before you say that that's a good thing since it allows more fine-grained control of the security of a transaction, that's only barely true: a transaction at 99% weighted confirmations is barely more secure than a 0-confirmation transaction, but is still significantly less secure than a 100% weighted confirmation transaction, and that difference is almost the same as the current difference between 0-confirmation transactions and 1-confirmation transactions.

Another factor that makes Finney Attacks more effective is the reduced variance this would give the blockchain. For example, let's say that most miners produce blocks at 10% of the maximum weight, and that 100% weight is defined to average to 10 minutes, meaning that 10% blocks are produced on average every 1 minute. This dynamic all but ensures that a 100% weight Finney Attack has 10 minutes before the Finney block would have to be released, allowing Finney Attacks to be used in places that were previously not practical because of how long it would take to checkout. Namely, the real world. Whereas, with the current setup, it becomes increasingly riskier (and unpredictably so) to hold onto a Finney block for a period of time before releasing it, making it dangerous to use in the real world. As a result, even 0-confirmation transactions would be riskier than they are now.

Now that I've shown that the advantage for merchants is no longer true, that leaves us with "Less variance for miners". If miners want less variance, there are already solutions for that, the best of which is P2Pool. Speaking of pools, it would be in the interest of all pools to mine the maximum weight because they already don't have to worry much about variance. Thus, practically, this doesn't change much anyway. At worst, if it dynamically allows the maximum weight to result in blocks on average coming less often than every ten minutes (most pools wouldn't even mind if a block was only made once an hour), this makes the life of merchants much harder.


I would support a global change in the re-targeting time, or even some way for miners to express support for automatic global changes, but it'd be hard for me to support a situation where it's possible for two blocks to not be equal.


Title: Re: Dynamic block frequency
Post by: Meni Rosenfeld on May 07, 2012, 04:03:07 AM
@Maged - if I understand your objections correctly, they are solvable by imposing a lower bound in addition to the upper bound, and tightening it. For example, the weight of a block needs to be between 0.9 and 1.1 times the mean weight of the 500 most recent blocks. This way the attacker can't get much advantage by using a block weight significantly different than the rest of the network; but if the current time span is suboptimal, the network will slowly but surely drift towards a better value.

It's also not true that pools don't care about variance. Good PPS pools charge a 5% fee to compensate for their risk, and the threads of normal pools are littered with discussions of good and bad luck, and complaints about maturity time. The big pools become bigger because they have less variance.

P2Pool basically emulates a blockchain with a smaller timespan, and I agree it removes some of the need for an actually shorter span. I still believe it would be better for the timespan itself to be more optimal.


Title: Re: Dynamic block frequency
Post by: Maged on May 07, 2012, 06:22:47 AM
@Maged - if I understand your objections correctly, they are solvable by imposing a lower bound in addition to the upper bound, and tightening it. For example, the weight of a block needs to be between 0.9 to 1.1 times the weight of the 500 most recent blocks. This way the attacker can't get much advantage by using a block weight significantly different than the rest of the network; but if the current time span is suboptimal, the network will slowly but surely drift towards a better value.
Interesting idea. At worst, a merchant would require one additional confirmation to know that they are safe from a Finney Attack, but because of the market dynamics this idea has, the market might just push average block time to less than half of our current system anyway. You could actually even widen the range of possible weights to .67 to 1.33 without hurting the merchants any more than .9 to 1.1 would. However, to allow it to stretch that far, you wouldn't want the range of weights to be calculated on a rolling basis, but that's fine. If you did want it recalculated every block, you would have to tighten the range just slightly.

It's also not true that pools don't care about variance. Good PPS pools charge 5% fee to compensate for their risk, and threads of normal pools are littered with discussions of good and bad luck, and complaints about maturity time. The big pools become bigger because they have less variance.
Fair enough. I guess we'd just have to see how this would play out.

P2Pool basically emulates a blockchain with a smaller timespan, and I agree it removes some of the need for an actually shorter span. I still believe it would be better for the timespan itself to be more optimal.
I do agree with you there. 10 minutes is really far too long, in my opinion.


Title: Re: Dynamic block frequency
Post by: kjj on May 07, 2012, 02:55:12 PM
The subsidy is based on an exact integer operation.


Title: Re: Dynamic block frequency
Post by: Meni Rosenfeld on May 07, 2012, 05:49:36 PM
The subsidy is based on an exact integer operation.
And this is relevant how? Multiply the integer block reward by the integer network target, divide by the specific target. The reward will be rounded to the nearest satoshi but that's immaterial.

Implementation details should serve the design, not the other way around.
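
In code it is just (illustrative sketch, integer arithmetic throughout):

def weighted_reward(base_reward, network_target, specific_target):
    # Integer division rounds down; the sub-satoshi remainder is immaterial.
    return base_reward * network_target // specific_target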


Title: Re: Dynamic block frequency
Post by: kjj on May 07, 2012, 06:41:15 PM
The subsidy is based on an exact integer operation.
And this is relevant how? Multiply the integer block reward by the integer network target, divide by the specific target. The reward will be rounded to the nearest satoshi but that's immaterial.

Implementation details should serve the design, not the other way around.

The subsidy is currently 5,000,000,000 units, giving you a lot of dynamic range to play with.  That will not always be true.  And rounding to the "nearest" satoshi can be gamed, for trivial gain at first to be sure.

Also, I'm pretty sure that your idea will create total chaos.  Not less forks, but more.  Lots more.  Like on nearly every block.  Not at first, necessarily, but as the subsidy shrinks relative to the transaction fees for sure.


Title: Re: Dynamic block frequency
Post by: Meni Rosenfeld on May 07, 2012, 07:02:37 PM
The subsidy is currently 5,000,000,000 units, giving you a lot of dynamic range to play with.  That will not always be true.  And rounding to the "nearest" satoshi can be gamed, for trivial gain at first to be sure.
It is obvious to me that by the time the generation reward is significantly reduced, satoshis will be split so that what is now known as 1 bitcoin will be more than 100,000,000 atomic units. Even in the highly unlikely case that a satoshi will be significant, since the value is rounded down, this will only create a preference for weights which result in an integer, which causes no harm.

Once again, implementation should follow design, and the "2.1E+15 atomic units" detail can be changed.

Also, I'm pretty sure that your idea will create total chaos.  Not less forks, but more.  Lots more.  Like on nearly every block.  Not at first, necessarily, but as the subsidy shrinks relative to the transaction fees for sure.
Did I say it will create less forks? I predict that the equilibrium found will be less than 10 minutes, which means more forks.

I doubt it will be "on nearly every block". As I explained, if it comes to that, miners will choose a larger weight which decreases their invalid rate.

I didn't talk yet about how to handle transaction fees, so you may want to defer judgement until then. Hint: You won't be able to get the entire fee of all floating transactions by creating a very light block.


Title: Re: Dynamic block frequency
Post by: kjj on May 07, 2012, 07:46:58 PM
The subsidy is currently 5,000,000,000 units, giving you a lot of dynamic range to play with.  That will not always be true.  And rounding to the "nearest" satoshi can be gamed, for trivial gain at first to be sure.
It is obvious to me that by the time the generation reward is significantly reduced, satoshis will be split so what is now known as 1 bitcoin will be more than 100,000,000 atomic units. Even in the highly unlikely case that a satoshi will be significant, since the value is rounded down, this will only create a preference to weights which result in an integer, which causes no harm.

Once again, implementation should follow design, and the "2.1E+15 atomic units" detail can be changed.

You are probably right about that.

Oh, but did you notice that you changed your scheme already?  From "round to nearest" to "round down".  One of those was probably just a silly mistake, the sort that happens to everyone from time to time.  Simplicity has a value that is hard to quantify.

Also, I'm pretty sure that your idea will create total chaos.  Not less forks, but more.  Lots more.  Like on nearly every block.  Not at first, necessarily, but as the subsidy shrinks relative to the transaction fees for sure.
Did I say it will create less forks? I predict that the equilibrium found will be less than 10 minutes, which means more forks.

I doubt it will be "on nearly every block". As I explained, if it comes to that, miners will choose a larger weight which decreases their invalid rate.

Actually, miners will choose the weight based on whether they got the last block or not, and whether they are getting paid kickbacks to support/override blocks from the previous miner.  For most miners, I think the best payoff would be prevDifficulty+1, except when they are working on extending their own block, in which case it would be the minimum, or close to it.  The exact game theory optimum would depend on the acceptable range.

I didn't talk yet about how to handle transaction fees, so you may want to defer judgement until then. Hint: You won't be able to get the entire fee of all floating transactions by creating a very light block.

Oh?

Option 1.  You pay "easy" blocks less and put the balance into a fund used to pay extra for "hard" blocks.  Equilibrium at 10 minutes, which is what you wanted to avoid.

Option 2.  You pay "easy" blocks less, and the remainder carries into the next block.  Equilibrium at X minutes, which is no more optimal than 10 was.

Option 3.  You pay "easy" blocks less, and the remainder is lost.  Equilibrium at 10 minutes.

Option 4.  You pay "easy" blocks less, and "hard" blocks more, creating money out of thin air.  Equilibrium at whatever the weight cap is.

Did I miss any?


Title: Re: Dynamic block frequency
Post by: Meni Rosenfeld on May 07, 2012, 08:19:48 PM
Oh, but did you notice that you changed your scheme already?  From "round to nearest" to "round down".  One of those was probably just a silly mistake, the sort that happens to everyone from time to time.  Simplicity has a value that is hard to quantify.
Are we really arguing semantics now? When I said "round to nearest" I meant "round to the nearest integer smaller than the value or equal to it", and I thought it was clear that I'm referring to normal integer division (which rounds down). I'm not as precise in my language in conversation as in formal proposals.

And, with any other rounding method, there's still very little to game. I said it's "immaterial" which implies I couldn't care less how it's rounded, or how people would interpret how I say it's rounded. Energies are better saved for important things.

Also, I'm pretty sure that your idea will create total chaos.  Not less forks, but more.  Lots more.  Like on nearly every block.  Not at first, necessarily, but as the subsidy shrinks relative to the transaction fees for sure.
Did I say it will create less forks? I predict that the equilibrium found will be less than 10 minutes, which means more forks.

I doubt it will be "on nearly every block". As I explained, if it comes to that, miners will choose a larger weight which decreases their invalid rate.

Actually, miners will choose the weight based on whether they got the last block or not, and whether they are getting paid kickbacks to support/override blocks from the previous miner.  For most miners, I think the best payoff would be prevDifficulty+1, except when they are working on extending their own block, in which case it would be the minimum, or close to it.  The exact game theory optimum would depend the acceptable range.
As I said, honest miners will build on the longest (most difficult) branch they know. This is a Nash equilibrium - a miner will want to build on the longest branch, as that improves the chances that their own block will be accepted. Increasing the weight decreases the probability that their block will be rejected in a conflict.

I didn't talk yet about how to handle transaction fees, so you may want to defer judgement until then. Hint: You won't be able to get the entire fee of all floating transactions by creating a very light block.
Oh?

Option 1.  You pay "easy" blocks less and put the balance into a fund used to pay extra for "hard" blocks.  Equilibrium at 10 minutes, which is what you wanted to avoid.

Option 2.  You pay "easy" blocks less, and the remainder carries into the next block.  Equilibrium at X minutes, which is no more optimal than 10 was.

Option 3.  You pay "easy" blocks less, and the remainder is lost.  Equilibrium at 10 minutes.

Option 4.  You pay "easy" blocks less, and "hard" blocks more, creating money out of thin air.  Equilibrium at whatever the weight cap is.

Did I miss any?
Options 1 and 2 aren't that far off. But seriously, hypothesizing what the equilibrium will be is hard enough when the system is fully specified, trying to deduce it based on guesses on what I want to propose is just silly. I would appreciate the benefit of the doubt that what I want to propose is not totally idiotic and broken.


Title: Re: Dynamic block frequency
Post by: kjj on May 07, 2012, 08:59:43 PM
Oh, but did you notice that you changed your scheme already?  From "round to nearest" to "round down".  One of those was probably just a silly mistake, the sort that happens to everyone from time to time.  Simplicity has a value that is hard to quantify.
Are we really arguing semantics now? When I said "round to nearest" I meant "round to the nearest integer smaller than the value or equal to it", and I thought it was clear that I'm referring to normal integer division (which rounds down). I'm not as precise in my language in conversation as in formal proposals.

And, with any other rounding method, there's still very little to game. I said it's "immaterial" which implies I couldn't care less how it's rounded, or how people would interpret how I say it's rounded. Energies are better saved for important things.

I thought I had made my point pretty well.  You are proposing to replace something that is really simple, always exact, and impossible to mess up with something that has none of those properties.  People will get it wrong.

I'm not saying that anyone should reject the proposal simply because of this, but it should give you pause.

Also, I'm pretty sure that your idea will create total chaos.  Not less forks, but more.  Lots more.  Like on nearly every block.  Not at first, necessarily, but as the subsidy shrinks relative to the transaction fees for sure.
Did I say it will create less forks? I predict that the equilibrium found will be less than 10 minutes, which means more forks.

I doubt it will be "on nearly every block". As I explained, if it comes to that, miners will choose a larger weight which decreases their invalid rate.

Actually, miners will choose the weight based on whether they got the last block or not, and whether they are getting paid kickbacks to support/override blocks from the previous miner.  For most miners, I think the best payoff would be prevDifficulty+1, except when they are working on extending their own block, in which case it would be the minimum, or close to it.  The exact game theory optimum would depend the acceptable range.
As I said, honest miners will build on the longest (most difficult) branch they know. This is a Nash equilibrium - a miner will want to build on the longest branch, as that improves the chances that their own block will be accepted. Increasing the weight decreases the probability that their block will be rejected in a conflict.

You sure about that?  No one will ever want to ignore an easy block and try to replace it with their own hard block?

I didn't talk yet about how to handle transaction fees, so you may want to defer judgement until then. Hint: You won't be able to get the entire fee of all floating transactions by creating a very light block.
Oh?

Option 1.  You pay "easy" blocks less and put the balance into a fund used to pay extra for "hard" blocks.  Equilibrium at 10 minutes, which is what you wanted to avoid.

Option 2.  You pay "easy" blocks less, and the remainder carries into the next block.  Equilibrium at X minutes, which is no more optimal than 10 was.

Option 3.  You pay "easy" blocks less, and the remainder is lost.  Equilibrium at 10 minutes.

Option 4.  You pay "easy" blocks less, and "hard" blocks more, creating money out of thin air.  Equilibrium at whatever the weight cap is.

Did I miss any?
Options 1 and 2 aren't that far off. But seriously, hypothesizing what the equilibrium will be is hard enough when the system is fully specified, trying to deduce it based on guesses on what I want to propose is just silly. I would appreciate the benefit of the doubt that what I want to propose is not totally idiotic and broken.

Don't take it personally.  Lots of people propose changes that fix things that aren't problems, or that don't fix an actual problem, or that create too many new problems.  I have no idea if your proposal fits one of those three categories, or if it actually fixes something that is actually a problem and doesn't create a whole new mess.  I find out by poking at the edges.

P.S.  Read the bold part again.  Warning bells should be going off as you do.


Title: Re: Dynamic block frequency
Post by: Maged on May 08, 2012, 01:36:03 AM
Also, I'm pretty sure that your idea will create total chaos.  Not less forks, but more.  Lots more.  Like on nearly every block.  Not at first, necessarily, but as the subsidy shrinks relative to the transaction fees for sure.
Did I say it will create less forks? I predict that the equilibrium found will be less than 10 minutes, which means more forks.

I doubt it will be "on nearly every block". As I explained, if it comes to that, miners will choose a larger weight which decreases their invalid rate.

Actually, miners will choose the weight based on whether they got the last block or not, and whether they are getting paid kickbacks to support/override blocks from the previous miner.  For most miners, I think the best payoff would be prevDifficulty+1, except when they are working on extending their own block, in which case it would be the minimum, or close to it.  The exact game theory optimum would depend the acceptable range.
As I said, honest miners will build on the longest (most difficult) branch they know. This is a Nash equilibrium - a miner will want to build on the longest branch, as that improves the chances that their own block will be accepted. Increasing the weight decreases the probability that their block will be rejected in a conflict.

You sure about that?  No one will ever want to ignore an easy block and try to replace it with their own hard block?
Are you sure about that? If we assume that average block times are going to tend to be very quick, it'd be stupid to ignore any block. This assumes, of course, 2*minimum weight > 1*maximum weight. With that, if any weight block is added to the block you are ignoring, you lose. In fact, assuming that lower weight blocks put a portion of the fees from the block forward to the next block, it would be in every miner's interest to mine on the low-weight block.

As for why this dynamic wouldn't just push people to mine on the lowest-weight block released, that's because, generally, people wouldn't want forks. Not to mention, we could have it so that when miners are able to spend their coins depends on weight, meaning that building on a higher-weight block would result in getting paid sooner.

Woah, multiple dynamics are going on there...


Title: Re: Dynamic block frequency
Post by: Meni Rosenfeld on May 08, 2012, 04:42:16 AM
I thought I had made my point pretty well.  You are proposing to replace something that is really simple, always exact, and impossible to mess up with something that has none of those properties.  People will get it wrong.
The current block reward calculation isn't any more exact than what I propose - when the block reward is halved, the amount of satoshis is rounded down.

It's hard to make a case about simplicity by taking as an example a minute detail about one of the simpler aspects. If multiplying two numbers gives us trouble, I worry.

I'm not saying that anyone should reject the proposal simply because of this, but it should give you pause.
Clearly the proposal adds complexity. But the whole integer arithmetic thing is not it.

Also, I'm pretty sure that your idea will create total chaos.  Not less forks, but more.  Lots more.  Like on nearly every block.  Not at first, necessarily, but as the subsidy shrinks relative to the transaction fees for sure.
Did I say it will create less forks? I predict that the equilibrium found will be less than 10 minutes, which means more forks.

I doubt it will be "on nearly every block". As I explained, if it comes to that, miners will choose a larger weight which decreases their invalid rate.

Actually, miners will choose the weight based on whether they got the last block or not, and whether they are getting paid kickbacks to support/override blocks from the previous miner.  For most miners, I think the best payoff would be prevDifficulty+1, except when they are working on extending their own block, in which case it would be the minimum, or close to it.  The exact game theory optimum would depend the acceptable range.
As I said, honest miners will build on the longest (most difficult) branch they know. This is a Nash equilibrium - a miner will want to build on the longest branch, as that improves the chances that their own block will be accepted. Increasing the weight decreases the probability that their block will be rejected in a conflict.

You sure about that?  No one will ever want to ignore an easy block and try to replace it with their own hard block?
I think it's possible to parameterize it so that this will be rare.

P.S.  Read the bold part again.  Warning bells should be going off as you do.
Not really. If I were able to predict what the market would do I'd be rich. I just want it to have the tools to find what's most efficient for it, and that's easier to demonstrate.


Title: Re: Dynamic block frequency
Post by: kjj on May 08, 2012, 06:42:22 AM
I thought I had made my point pretty well.  You are proposing to replace something that is really simple, always exact, and impossible to mess up with something that has none of those properties.  People will get it wrong.
The current block reward calculation isn't any more exact than what I propose - when the block reward is halved, the amount of satoshis is rounded down.

It's hard for you to make a case about simplicity taking as an example a minute detail about one of the simpler aspects. If multiplying two numbers gives us troubles, I worry.

Don't take this the wrong way, but you did get this "minute detail about one of the simpler aspects" wrong.  This is an extreme example, one that I often choose because it is so trivial and so simple, and still so easy to get wrong.  (See my posts in various other threads with proposals to change up the subsidy calculation.)

"Right shift by 1 bit" is not the same operation as "divide by two and round down".  They obtain the same result if done properly, but have different meta costs, as you helped me demonstrate.  That extra cost should not be paid (now and for eternity) without good reason.  The bitcoin software is chock full of things far more complicated than this, so clearly we can get this detail right too.  But should we?

Do you get what I'm trying to say?

P.S.  Read the bold part again.  Warning bells should be going off as you do.
Not really. If I were able to predict what the market would do I'd be rich. I just want it to have the tools to find what's most efficient for it, and that's easier to demonstrate.

It is relatively simple to show that 5 seconds between blocks is too short, and that 3600 seconds between blocks is too long.  It is much harder to show that 590 or 610 seconds is better than 600, or that 300 or 900 are better than 600.

As you say, it is hard to predict what this system will look like once it gets rolling.  Will we be trading one non-optimal block average for a different non-optimal average?  Will we be making a system that cycles or floats through a range of values, each just as non-optimal as the others?  How can we tell a "good" state from a "bad" state?

Usually I make these points in threads about moving the decimal point, but they really apply to all of the magic numbers.  Why 10 minutes?  Why 2 weeks?  Why 210,000 blocks?  The answer is always the same, because they are "good enough" and it is really hard to show that any different numbers would be any better.

I suspect that the same applies here.  It seems to me that it is just as hard to show that a dynamic system is any better for this job, and it doesn't just have to be any better, it has to be better enough to pay for the extra complexity costs.

All that said, unless you want to discuss these things further, I'm willing to bow out here and let you get this thread back on track so that you can finish presenting your idea.  I really would like to see the rest of it.


Title: Re: Dynamic block frequency
Post by: Meni Rosenfeld on May 08, 2012, 10:06:11 AM
Don't take this the wrong way, but you did get this "minute detail about one of the simpler aspects" wrong.
Sorry, but no, I didn't get anything wrong. I failed to communicate an idea clearly, which I would have spent more effort avoiding if it referred to anything important.

"Right shift by 1 bit" is not the same operation as "divide by two and round down".  They obtain the same result if done properly, but have different meta costs, as you helped me demonstrate.
I don't understand. Bit-shift may take less CPU time but it's much more difficult conceptually. The actual operation being done is division by 2 and rounding down, bit-shift is the specific code used to do it.

Do you get what I'm trying to say?
No.

P.S.  Read the bold part again.  Warning bells should be going off as you do.
Not really. If I were able to predict what the market would do I'd be rich. I just want it to have the tools to find what's most efficient for it, and that's easier to demonstrate.

It is relatively simple to show that 5 seconds between blocks is too short, and that 3600 seconds between blocks is too long.  It is much harder to show that 590 or 610 seconds is better than 600, or that 300 or 900 are better than 600.

As you say, it is hard to predict what this system will look like once it gets rolling.  Will we be trading one non-optimal block average for a different non-optimal average?  Will we be making a system that cycles or floats through a range of values, each just as non-optimal as the others?  How can we tell a "good" state from a "bad" state?
It's all about the incentive structure. If we can find a system that allows the people for whom the value actually matters to vote with their feet, we should come up with something good.

Usually I make these points in threads about moving the decimal point, but they really apply to all of the magic numbers.  Why 10 minutes?  Why 2 weeks?  Why 210,000 blocks?  The answer is always the same, because they are "good enough" and it is really hard to show that any different numbers would be any better.

I suspect that the same applies here.  It seems to me that it is just as hard to show that a dynamic system is any better for this job, and it doesn't just have to be any better, it has to be better enough to pay for the extra complexity costs.
Some magic numbers are more important than others. 21M total bitcoins is unimportant, 100M units per bitcoin is unimportant. 2 weeks is somewhat important. 4 years and 10 minutes are quite important. We can't change 4 years for Bitcoin itself, but maybe we can change 10 minutes.

I agree completely that it is hard to deduce that one value is better than another, but I don't think it's as hard to show that a certain system has the incentives to find an optimal value properly aligned.

Also, even if the dynamic system can't be trusted to be robust, the proposal gives a framework for changing the static block time, if there is ever a consensus that it should be changed.

All that said, unless you want to discuss these things further, I'm willing to bow out here and let you get this thread back on track so that you can finish presenting your idea.  I really would like to see the rest of it.
We can discuss it further. Some of the discussion is better deferred until I write about some related ideas. I'm not saying I have all the answers, but I think it's possible to make a useful system out of this.


Title: Re: Dynamic block frequency
Post by: kjj on May 08, 2012, 05:38:20 PM
Don't take this the wrong way, but you did get this "minute detail about one of the simpler aspects" wrong.
Sorry, but no, I didn't get anything wrong. I failed to communicate an idea clearly, which I would have spent more effort avoiding if it referred to anything important.

It seems like we are unlikely to come to an agreement on the importance of the subsidy calculation.  I maintain that it is very important, and that it should be exact and simple, barring a very good reason to be otherwise.

"Right shift by 1 bit" is not the same operation as "divide by two and round down".  They obtain the same result if done properly, but have different meta costs, as you helped me demonstrate.
I don't understand. Bit-shift may take less CPU time but it's much more difficult conceptually. The actual operation being done is division by 2 and rounding down, bit-shift is the specific code used to do it.

As someone that knows a little bit about CPUs and digital logic, I can promise you that the right shift operation is not done by divide-and-round, it is done in dedicated hardware called a shifter that literally shifts bits.  It might be harder for laymen to understand, but programmers (and CPUs) understand shift very literally.

P.S.  Read the bold part again.  Warning bells should be going off as you do.
Not really. If I were able to predict what the market would do I'd be rich. I just want it to have the tools to find what's most efficient for it, and that's easier to demonstrate.

It is relatively simple to show that 5 seconds between blocks is too short, and that 3600 seconds between blocks is too long.  It is much harder to show that 590 or 610 seconds is better than 600, or that 300 or 900 are better than 600.

As you say, it is hard to predict what this system will look like once it gets rolling.  Will we be trading one non-optimal block average for a different non-optimal average?  Will we be making a system that cycles or floats through a range of values, each just as non-optimal as the others?  How can we tell a "good" state from a "bad" state?
It's all about the incentive structure. If we can find a system that allows the people for whom the value actually matters to vote with their feet, we should come up with something good.

Usually I make these points in threads about moving the decimal point, but they really apply to all of the magic numbers.  Why 10 minutes?  Why 2 weeks?  Why 210,000 blocks?  The answer is always the same, because they are "good enough" and it is really hard to show that any different numbers would be any better.

I suspect that the same applies here.  It seems to me that it is just as hard to show that a dynamic system is any better for this job, and it doesn't just have to be any better, it has to be better enough to pay for the extra complexity costs.
Some magic numbers are more important than others. 21M total bitcoins is unimportant, 100M units per bitcoin is unimportant. 2 weeks is somewhat important. 4 years and 10 minutes are quite important. We can't change 4 years for Bitcoin itself, but maybe we can change 10 minutes.

I agree completely that it is hard to deduce that one value is better than another, but I don't think it's as hard to show that a certain system has the incentives to find an optimal value properly aligned.

Also, even if the dynamic system can't be trusted to be robust, the proposal gives a framework for changing the static block time, if there is ever a consensus that it should be changed.

I think that this is the part that needs the most work.  Miners will seek to maximize things based on whatever incentive scheme we come up with.  We need to show that there is a problem (or at least a sub-optimal condition) and that this change fixes (or at least improves) it.

I agree with you that 600 seconds forever is very unlikely to be optimal.  But is dynamic actually any better?  If we make it dynamic, and set up a system that adjusts based on the desires of miners, and miners adjust things to their liking, is that evidence that the new value is better?  Or is it just evidence that the miners are gaming the incentive?  And how do we measure better-ness anyway?


Title: Re: Dynamic block frequency
Post by: Meni Rosenfeld on May 08, 2012, 06:54:43 PM
"Right shift by 1 bit" is not the same operation as "divide by two and round down".  They obtain the same result if done properly, but have different meta costs, as you helped me demonstrate.
I don't understand. Bit-shift may take less CPU time but it's much more difficult conceptually. The actual operation being done is division by 2 and rounding down, bit-shift is the specific code used to do it.
As someone that knows a little bit about CPUs and digital logic, I can promise you that the right shift operation is not done by divide-and-round, it is done in dedicated hardware called a shifter that literally shifts bits.  It might be harder for laymen to understand, but programmers (and CPUs) understand shift very literally.
Look, we're not going to get anywhere if you keep misinterpreting what I say. Of course how a CPU physically does a bit shift is completely different from how it does integer division. That's why I said it's faster. But the mathematical operation being performed is the same. Do you really find "block reward starts at 5 billion satoshis, and halves every 4 years" less intuitive than "block reward starts at 5 billion satoshis, and is shifted by 1 bit to the right every 4 years"?

It seems to me we're speaking completely different languages - you as a programmer (I presume?) are passionate about implementation details (and you interpret everything I say as an implementation specification which must be rigorous), for me as a mathematician the concepts come first, the implementation can be worried about later - especially in those cases when it is clear that how we choose to implement it is not going to matter.

I maintain that it is very important, and that it should be exact and simple, barring a very good reason to be otherwise.
As I said, I agree with this, but I don't see how what I'm talking about is any less simple or exact. The claim that bit-shift is somehow more exact than integer division is ludicrous.

I think that this is the part that needs the most work.  Miners will seek to maximize things based on whatever incentive scheme we come up with.  We need to show that there is a problem (or at least a sub-optimal condition) and that this change fixes (or at least improves) it.

I agree with you that 600 seconds forever is very unlikely to be optimal.  But is dynamic actually any better?  If we make it dynamic, and set up a system that adjusts based on the desires of miners, and miners adjust things to their liking, is that evidence that the new value is better?  Or is it just evidence that the miners are gaming the incentive?  And how do we measure better-ness anyway?
The users, who are the second half of the equation, would need to have influence over the incentive structure of the miners. I still need to iron out a few things with this.


Title: Re: Dynamic block frequency
Post by: symbols on May 08, 2012, 07:11:55 PM
Your proposal is interesting, and I agree the confirmation time is definitely a point of some friction. I like litecoin's 2.5 minute target, but even that could get unrealistically large for things like high frequency trading. Definitely worth considering. One of the potential weaknesses is that it creates an incentive to oscillate the difficulty. This incentive already exists, but you need a significant amount of hashing power to benefit from it in the current system. Bounding the weight deviation seems an effective way to keep that incentive down.


Title: Re: Dynamic block frequency
Post by: Meni Rosenfeld on May 08, 2012, 07:41:36 PM
Your proposal is interesting, and I agree the confirmation time is definitely a point of some friction. I like litecoin's 2.5 minute target, but even that could get unrealistically large for things like high frequency trading.
The system isn't going to do any magic. Too frequent blocks will cause excessive forking, so some things will just have to be done with layers on top of Bitcoin.


Title: Re: Dynamic block frequency
Post by: Meni Rosenfeld on May 11, 2012, 12:20:47 PM
I have continued the proposal.


Title: Re: Dynamic block frequency
Post by: nybble41 on May 11, 2012, 06:45:18 PM
As someone that knows a little bit about CPUs and digital logic, I can promise you that the right shift operation is not done by divide-and-round, it is done in dedicated hardware called a shifter that literally shifts bits.  It might be harder for laymen to understand, but programmers (and CPUs) understand shift very literally.
Look, we're not going to get anywhere if you keep misinterpreting what I say. Of course how a CPU physically does a bit shift is completely different from how it does integer division. That's why I said it's faster.
Actually, they're not implemented all that differently. True, shifts are not implemented via division; it's the other way around. Division can be implemented in hardware as a series of right-shift and subtract operations (a naive binary version of long division), and division by two reduces to a simple right-shift, though there may be some wasted cycles. For that reason, any decent optimizing compiler will generate the opcode for a right-shift operation rather than integer division when the code specifies integer division by a fixed power of two. As you said, it's exactly the same mathematical operation. The compiler is just unrolling the loop implied by the division algorithm.
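
For example (Python here, but the same holds for integer division in any language):

subsidy = 5_000_000_000
# Right-shifting by one bit and dividing by two with rounding down give the
# same result on a non-negative integer.
assert subsidy >> 1 == subsidy // 2 == 2_500_000_000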

Your proposal is interesting, and I agree the confirmation time is definitely a point of some friction. I like litecoin's 2.5 minute target, but even that could get unrealistically large for things like high frequency trading.
The system isn't going to do any magic. Too frequent blocks will cause excessive forking, so some things will just have to be done with layers on top of Bitcoin.
Would it be possible to re-integrate the forks instead, provided there are no conflicting transactions? It should be rare to have forks which cannot be reconciled. Example:

Chain 1: A(t1, t2) -> B(t3, t4)
Chain 2: A(t1, t2) -> C(t3, t5, t6)
Next: A(t1, t2) -> {B(t3, t4), C(t3, t5, t6)} -> D(t7)


Instead of a single parent, block D would list the heads of all the chains it's compatible with. The height would be the maximum along any path from the genesis block (i.e. height(D) = height(A) + max(difficulty(B, C))). In the event of conflicting transactions, it would be up to the miner of block D to choose a non-conflicting subset of the available chains.
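
A sketch of one reading of that height rule (names and data structures are illustrative):

def height(block_id, parents, difficulty):
    # Height = maximum cumulative difficulty along any path from the genesis
    # block, taken over all of the parent chains the block references.
    if not parents[block_id]:                    # genesis block
        return difficulty[block_id]
    return difficulty[block_id] + max(height(p, parents, difficulty)
                                      for p in parents[block_id])

# For the example above: parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}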


Title: Re: Dynamic block frequency
Post by: Meni Rosenfeld on May 12, 2012, 05:26:38 PM
Your proposal is interesting, and I agree the confirmation time is definitely a point of some friction. I like litecoin's 2.5 minute target, but even that could get unrealistically large for things like high frequency trading.
The system isn't going to do any magic. Too frequent blocks will cause excessive forking, so some things will just have to be done with layers on top of Bitcoin.
Would it be possible to re-integrate the forks instead, provided there are no conflicting transactions? It should be rare to have forks which cannot be reconciled. Example:

Chain 1: A(t1, t2) -> B(t3, t4)
Chain 2: A(t1, t2) -> C(t3, t5, t6)
Next: A(t1, t2) -> {B(t3, t4), C(t3, t5, t6)} -> D(t7)


Instead of a single parent, Block D would list the heads of all the chains its compatible with. The height would be the maximum along any path from the genesis block (i.e. height(D) = height(A) + max(difficulty(B, C))). In the event of conflicting transactions, it would be up to the miner of block D to choose a non-conflicting subset of the available chains.
There has been talk here (https://bitcointalk.org/index.php?topic=57647.10) about doing pretty much this. Maged has a few ideas how it would work. It would be interesting to see how to combine the ideas of dynamic block times and a different graph structure for the blockchain.


Title: Re: Dynamic block frequency
Post by: nybble41 on May 14, 2012, 04:35:31 AM
Your proposal is interesting, and I agree the confirmation time is definitely a point of some friction. I like litecoin's 2.5 minute target, but even that could get unrealistically large for things like high frequency trading.
The system isn't going to do any magic. Too frequent blocks will cause excessive forking, so some things will just have to be done with layers on top of Bitcoin.
Would it be possible to re-integrate the forks instead, provided there are no conflicting transactions? It should be rare to have forks which cannot be reconciled. Example:

Chain 1: A(t1, t2) -> B(t3, t4)
Chain 2: A(t1, t2) -> C(t3, t5, t6)
Next: A(t1, t2) -> {B(t3, t4), C(t3, t5, t6)} -> D(t7)


Instead of a single parent, Block D would list the heads of all the chains its compatible with. The height would be the maximum along any path from the genesis block (i.e. height(D) = height(A) + max(difficulty(B, C))). In the event of conflicting transactions, it would be up to the miner of block D to choose a non-conflicting subset of the available chains.
There has been talk here (https://bitcointalk.org/index.php?topic=57647.10) about doing pretty much this. Maged has a few ideas how it would work. It would be interesting to see how to combine the ideas of dynamic block times and a different graph structure for the blockchain.
There is a serious problem with reintegration which I didn't consider before: the fees and block rewards will inevitably conflict between the different chains. Assuming B and C were discovered by different miners, the fee for transaction t3 in block B would conflict with the fee for the same transaction in block C, and the {B,C} set, while still representing only a single link in the chain, would have double the combined block reward.


Title: Re: Dynamic block frequency
Post by: Meni Rosenfeld on May 14, 2012, 09:36:58 AM
Your proposal is interesting, and I agree the confirmation time is definitely a point of some friction. I like litecoin's 2.5 minute target, but even that could get unrealistically large for things like high frequency trading.
The system isn't going to do any magic. Too frequent blocks will cause excessive forking, so some things will just have to be done with layers on top of Bitcoin.
Would it be possible to re-integrate the forks instead, provided there are no conflicting transactions? It should be rare to have forks which cannot be reconciled. Example:

Chain 1: A(t1, t2) -> B(t3, t4)
Chain 2: A(t1, t2) -> C(t3, t5, t6)
Next: A(t1, t2) -> {B(t3, t4), C(t3, t5, t6)} -> D(t7)


Instead of a single parent, Block D would list the heads of all the chains its compatible with. The height would be the maximum along any path from the genesis block (i.e. height(D) = height(A) + max(difficulty(B, C))). In the event of conflicting transactions, it would be up to the miner of block D to choose a non-conflicting subset of the available chains.
There has been talk here (https://bitcointalk.org/index.php?topic=57647.10) about doing pretty much this. Maged has a few ideas how it would work. It would be interesting to see how to combine the ideas of dynamic block times and a different graph structure for the blockchain.
There is a serious problem with reintegration which I didn't consider before: the fees and block rewards will inevitably conflict between the different chains. Assuming B and C were discovered by different miners, the fee for transaction t3 in block B would conflict with the fee for the same transaction in block C, and the {B,C} set, while still representing only a single link in the chain, would have double the combined block reward.
I think the rewards can be split between the referenced blocks one way or another.
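For instance (purely illustrative, since no particular rule is being proposed here), the payout of a merged link could be shared in proportion to the work of the referenced blocks, with each distinct fee counted once:

Code:
SUBSIDY = 50.0

def split_rewards(parents):
    """parents: block name -> (difficulty, {txid: fee}).
    Returns block name -> payout: one subsidy plus each distinct fee counted
    once, shared in proportion to each referenced block's work."""
    total_work = sum(diff for diff, _ in parents.values())
    merged_fees = {}
    for _, txs in parents.values():
        merged_fees.update(txs)                 # a shared tx like t3 counts once
    pot = SUBSIDY + sum(merged_fees.values())
    return {name: pot * diff / total_work
            for name, (diff, _) in parents.items()}

print(split_rewards({
    "B": (1.0, {"t3": 0.02, "t4": 0.01}),
    "C": (1.0, {"t3": 0.02, "t5": 0.01, "t6": 0.01}),
}))
# -> roughly {'B': 25.025, 'C': 25.025}; together one link's worth (50.05)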


Title: Re: Dynamic block frequency
Post by: nybble41 on May 14, 2012, 05:59:53 PM
There is a serious problem with reintegration which I didn't consider before: the fees and block rewards will inevitably conflict between the different chains. Assuming B and C were discovered by different miners, the fee for transaction t3 in block B would conflict with the fee for the same transaction in block C, and the {B,C} set, while still representing only a single link in the chain, would have double the combined block reward.
I think the rewards can be split between the referenced blocks one way or another.
I considered that, but any such split would necessarily be after the fact; there are any number of ways blocks B and C, and possibly other blocks, could be recombined into block D, and then into block E, and so on. Reducing the reward after a valid block has been discovered doesn't seem very fair to the miners. Similarly, splitting transaction fees ex post facto would introduce chaos into the miners' transaction selection process. It also seems like a much more involved protocol change than simply adding multiple parent blocks and merging their transactions.


Title: Re: Dynamic block frequency
Post by: Meni Rosenfeld on May 14, 2012, 06:28:18 PM
There is a serious problem with reintegration which I didn't consider before: the fees and block rewards will inevitably conflict between the different chains. Assuming B and C were discovered by different miners, the fee for transaction t3 in block B would conflict with the fee for the same transaction in block C, and the {B,C} set, while still representing only a single link in the chain, would have double the combined block reward.
I think the rewards can be split between the referenced blocks one way or another.
I considered that, but any such split would necessarily be after the fact; there are any number of ways blocks B and C, and possibly other blocks, could be recombined into block D, and then into block E, and so on. Reducing the reward after a valid block has been discovered doesn't seem very fair to the miners. Similarly, splitting transaction fees ex post facto would introduce chaos into the miners' transaction selection process. It also seems like a much more involved protocol change than simply adding multiple parent blocks and merging their transactions.
Mining rewards mature only after 120 blocks, so it seems reasonable for them to change during that window. How to do this technically might be a challenge, though.


Title: Re: Dynamic block frequency
Post by: galambo on July 27, 2012, 12:15:22 AM
Optimal block frequency is determined by the conditions of the network, not the market.

You are making an economic mistake that many bitcoiners do.

As far as the field of economics is concerned, coinbase transactions contain zero information. It's what you do with it, not what it is.

The economic value of bitcoin is the non-coinbase transactions.



Title: Re: Dynamic block frequency
Post by: Meni Rosenfeld on July 27, 2012, 04:07:31 AM
Optimal block frequency is determined by the conditions of the network, not the market.

You are making an economic mistake that many bitcoiners do.

As far as the field of economics is concerned, coinbase transactions contain zero information. It's what you do with it, not what it is.

The economic value of bitcoin is the non-coinbase transactions.
I think you've missed the point. Block frequency has the effects that I described on Bitcoin users, nodes and miners.


Title: Re: Dynamic block frequency
Post by: gmaxwell on July 27, 2012, 04:28:42 AM
I think you've missed the point. Block frequency has the effects that I described on Bitcoin users, nodes and miners.

On the contrary, you missed his. The most important element of the inter-block time is the network convergence radius: if the inter-block time is too small relative to the block validating and forwarding delay, the network is likely to never converge. Somewhat larger, but still small— a concentrated attacker gains an unusual advantage against distributed honest defenders, because the defenders lose hash power extending multiple separate forks due to slow convergence. If you set the time just high enough to avoid the most fatal effects, you still greatly increase the computational burden on full validating nodes, increase the storage burden on simplified nodes, and damage decentralization by discouraging validating or semi-validating nodes and driving people to centralized services.

It's not really a market parameter at all— and to the extent that it is one, it would be one highly prone to a race to the bottom which devalues the system for everyone. Even if you set it to stupidly-low-will-break-for-sure levels, there will intermittently be very long blocks. If you're engaging in activity where you need fast evidence of irreversibility, it's almost always the case that you need consistently fast evidence of irreversibility— "often fast, sometimes not" isn't good enough. So making the block times different doesn't really solve anything here. Fortunately there are other ways to get evidence of irreversibility, lower variance for mining, etc. that don't require breaking the system.
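To put rough, illustrative numbers on the propagation point above (the 15-second validate-and-forward delay is a guess rather than a measurement, and block arrivals are simply modelled as a Poisson process):

Code:
from math import exp

d = 15.0  # assumed network-wide validate-and-forward delay, in seconds

for T in (600.0, 150.0, 60.0, 30.0, 15.0):      # mean block interval, seconds
    race_prob = 1.0 - exp(-d / T)               # chance a rival block appears while yours propagates
    print(f"T = {T:5.0f} s  ->  ~{100 * race_prob:4.1f}% of blocks race a rival")

With these (made-up) figures, about 2.5% of blocks race a rival at a 10-minute interval, but over 60% do at a 15-second interval.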


Title: Re: Dynamic block frequency
Post by: allten on July 27, 2012, 04:37:51 AM
There's a strong atmosphere in the Bitcoin community of "if it's not broken, don't fix it". It may not be optimal or flexible, but it works. Until we reach a scenario where it becomes obvious to all that varied block timing would be beneficial, it will be very hard to bring about the change.

Anyway, thanks for the proposal. It's definitely good food for thought.


Title: Re: Dynamic block frequency
Post by: Meni Rosenfeld on July 27, 2012, 07:09:52 AM
@gmaxwell - I've already addressed most of these points. I think the problem is too narrow a definition of "market" - everything can be a market if there's an appropriate incentive scheme. I guess your point is that there's currently no good incentive scheme - which is a challenge, but I don't think an insurmountable one.

The most important element of the inter-block time is the network convergence radius: if the inter-block time is too small relative to the block validating and forwarding delay, the network is likely to never converge.
More frequent blocks will make forks longer, but I don't think there's a sense in which they can not converge.

Somewhat larger, but still small— a concentrated attacker gains an unusual advantage against distributed honest defenders, because the defenders lose hash power extending multiple separate forks due to slow convergence.
That's the market of miners who don't wish their hashes to be wasted, and the market of users who prefer to minimize the probability of their transactions being double-spent. How to give users a way to act on this preference is still an open problem.

If you set the time just high enough to avoid the most fatal effects you still greatly increase the computational burden on full validating nodes,
The market of nodes who wish to cut costs.

increase the storage burden on simplified nodes,
The market of users. Though this is a small enough problem that it can be solved by agreement rather than direct action.

and damage decentralization by discouraging validating or semi-validating nodes and driving people to centralized services.
The users have a preference for more decentralization. Again, some work is needed to find the incentive scheme that best lets them act on it.

So making the block times different doesn't really solve anything here.
There's a tradeoff with regard to block frequency. Where there's a tradeoff there's an optimal value, and I see no reason to believe 10 minutes is it.

What I'm trying to say is that the weight placed on arbitrary decrees should be minimized. "10 minutes is the block frequency and there's nothing you can do about it" may solve the incentive issues, but it leads to a result far from the optimum. With a dynamic block frequency you'll still need some decree to keep the incentives in line, but most of the heavy lifting of finding the best value will be done by the actions of those who are affected by the decision. Think of it as a sort of amplification.

Fortunately there are other ways to get evidence of irreversibility, lower variance for mining, etc. that don't require breaking the system.
That's probably true. P2Pool emulates frequent blocks for variance purposes, and these days I have a different idea (https://bitcointalk.org/index.php?topic=91732.0) on how to make fast payments, so having a dynamic block frequency isn't critical.


Title: Re: Dynamic block frequency
Post by: gmaxwell on July 27, 2012, 07:17:46 AM
More frequent blocks will make forks longer, but I don't think there's a sense in which they can not converge.

A toy example: the block rate is infinitely fast, and the communication delay between miners is 0+ε. All miners are on their own forks and will never converge onto a single chain. This was even demonstrated in practice on an altcoin.

Quote
That's the market of miners who don't wish their hashes to be wasted, and the market of users who prefer to minimize the probability of their transactions being double-spent.

The miners' hashes aren't "wasted", since everyone takes the losses from orphaning. There is only the loss of security from blocks which are frequently below the communications horizon, and the race to the bottom to include more transactions regardless of the overall security implications.

Quote
The market of users. Though this is a small enough problem that it can be solved by agreement rather than direct action.

There are many problems which are basically unsolvable in a dynamic system but are trivial when the parties can pre-commit. Bitcoin is defined by its rules; it's a pre-commitment that we've all signed up for. Changes that favor some over others would rightfully undermine any trust people put in it...

Quote
That's probably true. P2Pool emulates frequent blocks for variance purposes, and these days I have a different idea (https://bitcointalk.org/index.php?topic=91732.0) on how to make fast payments, so having a dynamic block frequency isn't critical.

... especially when there are alternatives.



Title: Re: Dynamic block frequency
Post by: Meni Rosenfeld on July 27, 2012, 07:31:46 AM
More frequent blocks will make forks longer, but I don't think there's a sense in which they can not converge.
A toy example: the block rate is infinitely fast, and the communication delay between miners is 0+ε. All miners are on their own forks and will never converge onto a single chain. This was even demonstrated in practice on an altcoin.
Of course, if the block frequency is infinite, so will the forks be. Interestingly, I think the length of forks grows very rapidly as the time T between blocks diminishes, roughly like exp(1/T).
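A crude toy model one could use to probe that intuition (it is a caricature, not a claim about the real network): split the hash power into two equal halves racing from a common tip; while tied, each side's next block arrives after an exponential wait with mean 2T, and the race stays tied for another round only if the two waits differ by less than the propagation delay d. In that model the expected number of rounds until one side pulls ahead is exp(d/(2T)), which blows up in the exp(1/T) fashion just described. All parameters below are illustrative.

Code:
import random
from math import exp

def mean_race_length(T, d, trials=100000, seed=7):
    """Average number of rounds until one side of the fork pulls ahead,
    under the toy model described above."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        rounds = 1
        # stay tied while the two sides' next blocks land within d of each other
        while abs(rng.expovariate(1 / (2 * T)) - rng.expovariate(1 / (2 * T))) < d:
            rounds += 1
        total += rounds
    return total / trials

d = 15.0                                  # assumed propagation delay, seconds
for T in (600.0, 60.0, 20.0, 10.0):       # mean block interval, seconds
    print(f"T = {T:5.0f} s: simulated {mean_race_length(T, d):5.2f} rounds, "
          f"predicted exp(d/2T) = {exp(d / (2 * T)):5.2f}")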

Quote
That's the market of miners who don't wish their hashes to be wasted, and the market of users who prefer to minimize the probability of their transactions being double-spent.
The miners' hashes aren't "wasted", since everyone takes the losses from orphaning. There is only the loss of security from blocks which are frequently below the communications horizon, and the race to the bottom to include more transactions regardless of the overall security implications.
If the block time decreases for everyone, then yes, miners aren't worse off. In the context of my proposal, a miner who personally opts for a shorter time will have his hashes wasted, since his block has a higher chance of being rejected. That's one of the market forces restoring equilibrium.
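A rough expected-value sketch of that equilibrium argument, using the weight rule from the opening post (weight = network target / specific target). The survival function below is a made-up placeholder whose only job is to show the direction of the incentive; it is not part of the proposal.

Code:
SUBSIDY = 50.0

def expected_coins_per_period(k, survival):
    """k: the personal target is k x the network target (k > 1 = easier blocks).
    survival(k): assumed probability such a block ends up on the winning branch."""
    blocks_per_period = k           # k times as many blocks for the same hash power
    reward_per_block = SUBSIDY / k  # each has weight 1/k of a normal block
    return blocks_per_period * reward_per_block * survival(k)

def survival(k):
    # Made-up placeholder: lower-weight blocks are assumed more likely to be
    # displaced in a race. Only the decreasing shape matters here.
    return 0.99 ** k

for k in (1, 2, 5, 10):
    print(k, round(expected_coins_per_period(k, survival), 2))
# Without the survival factor the expected coins would be exactly SUBSIDY for
# every k; with it, going easier only lowers a miner's expected revenue.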


Title: Re: Dynamic block frequency
Post by: jl2012 on July 27, 2012, 07:46:19 AM
Miners will decide the "optimal" block frequency based only on their own interests (e.g. reducing mining variance), not on network security / efficiency.


Title: Re: Dynamic block frequency
Post by: Meni Rosenfeld on July 27, 2012, 08:01:10 AM
Miners will decide the "optimal" block frequency based only on their own interests (e.g. reducing mining variance), not on network security / efficiency.
Exactly.


Title: Re: Dynamic block frequency
Post by: jl2012 on July 27, 2012, 08:48:07 AM
Miners will decide the "optimal" block frequency based only on their own interests (e.g. reducing mining variance), not on network security / efficiency.
Exactly.

Therefore, your proposal may actually have a harmful effect on network security / efficiency. For example, if miners find that a 20-minute block is optimal for mining, all transactions will wait much longer before getting confirmed.


Title: Re: Dynamic block frequency
Post by: Meni Rosenfeld on July 27, 2012, 08:55:28 AM
Miners will decide the "optimal" block frequency based only on their own interests (e.g. reducing mining variance), not on network security / efficiency.
Exactly.
Therefore, your proposal may actually have a harmful effect on network security / efficiency. For example, if miners find that a 20-minute block is optimal for mining, all transactions will wait much longer before getting confirmed.
Not if there is an incentive system in place which allows users who want fast confirmations to pay the miners who deliver a quick block.


Title: Re: Dynamic block frequency
Post by: ripper234 on July 27, 2012, 10:33:19 PM
Nice thread. Might be worthwhile for Bitcoin 2.0.

Added to the Hardfork wishlist (https://en.bitcoin.it/wiki/Hardfork_Wishlist#Major_structural_changes).

Keep up the good work & innovation Meni!


Title: Re: Dynamic block frequency
Post by: Meni Rosenfeld on July 28, 2012, 07:45:02 PM
Keep up the good work & innovation Meni!
Thanks. I think I've had this idea since Nov 2011; too bad it took me until May to write about it.