Author Topic: The MAX_BLOCK_SIZE fork  (Read 35542 times)
notig (Sr. Member | Activity: 294 | Merit: 250)
February 05, 2013, 06:15:06 AM  #101

Why don't we just let miners decide the optimal block size?

If a miner generates a 1-GB block and it is just too big for other miners, they may simply drop it. That alone will stop anyone from generating 1-GB blocks, because those blocks would become orphans anyway. An equilibrium will be reached and block space will still be scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be 60 seconds if you are catching up with the blockchain and 5 seconds if you are all caught up, but allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).
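A minimal sketch of what that default policy could look like node-side, in Python (the function names, the verify_fn callback and the way verification time is measured are invented for the illustration, not taken from any real client):

Code:
import time

CATCHING_UP_LIMIT = 60.0   # seconds allowed while still syncing the chain
CAUGHT_UP_LIMIT   = 5.0    # seconds allowed once fully caught up

def should_accept_block(block, verify_fn, catching_up, overrides=None):
    """Accept the block only if it fully verifies within the time budget."""
    limits = {'catching_up': CATCHING_UP_LIMIT, 'caught_up': CAUGHT_UP_LIMIT}
    if overrides:                          # miners/merchants/users can change the defaults
        limits.update(overrides)
    budget = limits['catching_up'] if catching_up else limits['caught_up']

    start = time.monotonic()
    ok = verify_fn(block)                  # full script/signature verification
    elapsed = time.monotonic() - start
    return ok and elapsed <= budget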



Does this still involve a fork?
Maged (Legendary | Activity: 1204 | Merit: 1015)
February 05, 2013, 06:46:12 AM  #102

EDIT: Oh, and of course, there must be tolerance levels too (if I'm X blocks behind the chain I once rejected, I'll give up and start building on top of it). You don't want to create that many chain forks! Smiley
Absolutely. Of course, that sadly means that we won't be able to ever trust a block until it gets past that point (which I think should be 2-4 blocks). So, to mitigate the damage that will cause to the practical confirmation time...
Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.
...let's lower that constant. Additionally, by lowering the block-creation-time constant, you increase the chances of there being natural orphans by a much larger factor than you are lowering the constant (5 minute blocks would on average have 4x as many orphans as 10 minute blocks over the same time period). Currently, we see that as a bad thing since it makes the network weaker against an attacker. So, the current block time was set so that the block verification time network-wide would be mostly negligible. Let's make it so that it's not.

To miners, orphans are lost money, so instead of using such a large block-time constant that orphans rarely happen in the first place, push control of the orphan rate onto the miners. To avoid orphans, they'd then be forced to use such block-ignoring features. In turn, the smaller the block-time constant we pick, the exponentially smaller the blocks would have to be. Currently, I suspect that a 50 MB block made up of pre-verified transactions would be no big deal for the current network. However, a 0.2 MB block on a 2.35-seconds-per-block network (yes, an extreme example) absolutely would be a big deal (especially because at that speed even an empty block with just a coinbase is a problem).
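A common back-of-the-envelope way to look at this trade-off (my own illustration, not from the thread): if blocks arrive as a Poisson process, the chance that a competing block turns up while yours is still propagating and being verified is roughly 1 - exp(-t_prop / T_block). Over a fixed wall-clock period, halving the interval roughly doubles both the number of blocks and the per-block risk, which is where the "4x as many orphans" figure comes from.

Code:
import math

def orphan_risk(propagation_seconds, block_interval_seconds):
    """Rough probability that another block appears during the propagation window."""
    return 1.0 - math.exp(-propagation_seconds / block_interval_seconds)

for interval in (600, 300, 150):          # 10, 5 and 2.5 minute blocks
    for prop in (5, 15, 60):              # seconds to propagate + verify
        print(interval, prop, round(orphan_risk(prop, interval), 4))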

There are also some side benefits: because miners would strongly avoid transactions most of the network hasn't seen, only high-fee transactions would be likely to make it into the very next block, but many transactions would make it in eventually. It might even encourage high-speed relay networks to appear, which would take a cut of the transaction fees miners make in exchange for letting them join.

In summary, I propose that to avoid the tragedy of the commons problem, instead of limiting the available space, we limit the available time allowed for the block to propagate instead. Now THAT is a Bitcoin 2.0 (or rather, 1.0)

solex (Legendary | Activity: 1078 | Merit: 1002)
February 05, 2013, 07:08:27 AM (last edit: 10:37:23 AM)  #103

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.
...let's lower that constant. ...
In summary, I propose that to avoid the tragedy of the commons problem, instead of limiting the available space, we limit the available time allowed for the block to propagate instead. Now THAT is a Bitcoin 2.0 (or rather, 1.0)

For the rest of us who are catching up, are you proposing something that seems far more radical than eliminating the 1 MB limit? Can you please clarify: are you proposing reducing the 10-minute average block creation time? If so, what happens to the 25 BTC reward, which would then be excessive and need a pro-rata reduction for the increased block frequency?


caveden (Legendary | Activity: 1106 | Merit: 1004)
February 05, 2013, 08:04:34 AM  #104

EDIT: Oh, and of course, there must be tolerance levels too (if I'm X blocks behind the chain I once rejected, I'll give up and start building on top of it). You don't want to create that many chain forks! Smiley
Absolutely. Of course, that sadly means that we won't be able to ever trust a block until it gets past that point (which I think should be 2-4 blocks).

Would it really? I've never seen any actual analysis, but I'd say that honest splits would mostly carry the same transactions, with the obvious exception of coinbase and "a few others". Has anyone ever done an analysis of how many transactions (in relative terms) are actually lost in a reorg and need to get reconfirmed?

Btw, interested nodes could attempt to download, and perhaps even relay, all sides of a split. If you see that your transaction is in all of them, you know it actually had its first confirmation for good. Relaying orphans sounds like a less radical change than changing the 10-minute delay...
Maged (Legendary | Activity: 1204 | Merit: 1015)
February 05, 2013, 11:22:02 PM  #105

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.
...let's lower that constant. ...
In summary, I propose that to avoid the tragedy of the commons problem, instead of limiting the available space, we limit the available time allowed for the block to propagate instead. Now THAT is a Bitcoin 2.0 (or rather, 1.0)

For the rest of us who are catching up, are you proposing what seems far more radical than eliminating the 1Mb limit?
Quite possibly. However, if we accept that the 10-minute constant doesn't actually have to stay fixed, we can adjust it so that, at the time we remove the 1 MB limit, the largest block miners would practically want to make is still about 1 MB. Basically, this would protect us from jumping straight from a 1 MB limit to a practical 50 MB limit (or whatever is currently practical with the 10-minute constant). I mainly want people to remember that changing the block time is also something that can be on the table.

Can you please clarify. Are you proposing reducing the 10 min average block creation time?
Yes.

If so, what happens to the 25 BTC reward which would be excessive, and need a pro-rata reduction for increased block frequency?
Just like you said, it would have a pro-rata reduction for increased block frequency. Sorry, I assumed that was obvious, since changing anything about the total currency created is absolutely off the table.

EDIT: Oh, and of course, there must be tolerance levels too (if I'm X blocks behind the chain I once rejected, I'll give up and start building on top of it). You don't want to create that many chain forks! Smiley
Absolutely. Of course, that sadly means that we won't be able to ever trust a block until it gets past that point (which I think should be 2-4 blocks).

Would it really?
Yes. You wouldn't be able to trust that a majority of the network acknowledged a block until it gets past the point where all clients are required to accept it as part of the chain.

Imagine that only 10% of the network accepts blocks over 10 MB and 100% accepts blocks less than 1 MB. What if that 10% got lucky and generated two 11 MB blocks in a row? Well, the other 90% would just ignore them because they are too large. So, those blocks get orphaned because the rest of the network found three small blocks. If you just accepted the 11 MB blocks as a confirmation and sent goods because of it, you could be screwed if there was a double-spend.
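A toy way to see how reliably such oversized blocks get orphaned (a gambler's-ruin style simulation; the 10% hashpower share and the two-block head start come from the example above, everything else is illustrative):

Code:
import random

def minority_fork_overtaken(minority_share=0.10, head_start=2, trials=100_000):
    """Fraction of trials in which a minority fork that starts `head_start`
    blocks ahead is eventually caught and orphaned by the majority chain.
    Each new block extends the minority chain with probability minority_share,
    otherwise the majority chain; the walk stops when the lead hits 0 or 100."""
    overtaken = 0
    for _ in range(trials):
        lead = head_start
        while 0 < lead < 100:
            lead += 1 if random.random() < minority_share else -1
        overtaken += (lead == 0)
    return overtaken / trials

print(minority_fork_overtaken())   # ~1.0: the big-block fork essentially always dies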

misterbigg (Legendary | Activity: 1064 | Merit: 1001)
February 06, 2013, 07:18:35 AM  #106

Bitcoin works great as a store of value, but should we also require that it work well as a payment network?

It seems that the debate over whether the maximum block size should be increased is really a question of whether the Bitcoin protocol should be improved so that it serves both purposes. Specifically:

1) That transactions should verify quickly (less time between blocks)

2) Transactions fees should be low

3) There should be no scarcity for transaction space in blocks

Open questions:

Right now there's about what, 1.4 SatoshiDICEs worth of transaction volume?

Should Bitcoin scale to support 20 times the volume of SatoshiDICE?

Should Bitcoin scale to support 1000 times the volume of SatoshiDICE?

Should we allow the blockchain to grow without a bound on the rate (right now it is 1 megabyte per 10 minutes, or 144MB/day)?

Is it reasonable to require that Bitcoin should always be able to scale to include all possible transactions?

Is it a requirement that Bitcoin eventually be able to scale to accommodate the volume of any existing fiat payment system (or the sum of the volumes of more than one existing payment system)?

Will it ever be practical to accept transactions to ship physical goods with 0 confirmations?

Will the time for acceptance of a transaction into a block ever be on the order of seconds?

How would one implement a "Bitcoin vending machine" which can dispense a product immediately without the risk of fraud?

Can't we just leave parameters like 10 minutes / 1 megabyte alone (since changing them requires a hard fork) and build new market-specific payment networks that use Bitcoin as the back end, processing transactions in bulk at a lower frequency (say, once per 10 minutes)?

Aren't high transaction fees a good thing, since they make mining more profitable, resulting in a greater total network hashrate (more security)?

misterbigg (Legendary | Activity: 1064 | Merit: 1001)
February 06, 2013, 07:45:52 AM (last edit: 09:43:34 AM)  #107

There is something about the artificial scarcity of transaction space in a block that appeals to me. My gut tells me that miners should always have to make a choice about which transactions to keep and which ones to drop. That choice will probably always be based on the fees per kilobyte, so as to maximize the revenue per block. This competition between transactions solves the problem where successive reductions in block subsidies are not balanced by a corresponding increase in transaction fees.

If the block size really needs to increase, here's an idea for doing it in a way that balances scarcity versus transaction volume:

DEPRECATED DUE TO VULNERABILITIES (see the more recent post)

1) Block size adjustments happen at the same time that network difficulty adjusts (every 210,000 blocks?)

2) On a block size adjustment, the size either stays the same or goes up by a fixed percentage (say, 10%). This percentage is a baked-in constant requiring a hard fork to change.

3) The block size is increased if the sum of miner's fees excluding block subsidies for all blocks since the last adjustment would exceed a fixed percentage of the total coins transmitted (say, 1.5%). This percentage is also a baked-in constant.

4) There should be no client or miner limits on the number of kilobytes of free transactions in a block - if there's space left in the block after including all the paid transactions, there's nothing wrong with filling up the remaining space with as many free tx as possible.

Example:

When an adjustment period arrives, clients add up the miner's fees exclusive of subsidies, and add up the total coins transferred. If the percentage of miner's fees is 1.5% or more of the total coins transferred then the max block size is permanently increased by 10%.

This scheme offers a lot of nice properties:

- Consensus is easy to determine

- The block size will tend towards a size that accommodates the total transaction volume over a 24 hour period

- The average transaction fees are capped and easily calculated in the client when sending money. A fee of 1.5% should get included after several blocks. A fee greater than 1.5% will get included faster. Fees under 1.5% will get included slower.

- Free transactions will eventually get included (during times of the day or week where transaction volume is at a low)

- Since the percentage of growth is capped, any increase in transaction volume that exceeds the growth percentage will eventually get accommodated but miners will profit from additional fees (due to competition) until the blocks reach the equilibrium size. Think of this as a 'gold rush'.

flower1024 (Legendary | Activity: 1428 | Merit: 1000)
February 06, 2013, 07:47:46 AM  #108

There is something about the artificial scarcity of transaction space in a block that appeals to me. ... If the block size really needs to increase, here's an idea for doing it in a way that balances scarcity versus transaction volume ... This scheme offers a lot of nice properties...



+1
I really like the idea of dynamic block sizes, but I don't know enough about the economics to know what magic numbers are needed.
misterbigg (Legendary | Activity: 1064 | Merit: 1001)
February 06, 2013, 07:55:26 AM  #109

I really like the idea of dynamic block sizes, but I don't know enough about the economics to know what magic numbers are needed.

Thanks. I used 10% and 1.5% as examples, but they are not based on any calculations. My intuition tells me that the 10% figure doesn't matter a whole heck of a lot; it mostly controls the rate of convergence. The penalty for a number that is too high is that the size would overshoot and there wouldn't be any scarcity. I think that would only happen if the percentage were huge, like 50% or more. If the number is too low it would just take longer to converge, and there would be a temporary period when miners generated above-average profits.

As for the miner's fees I would imagine that number matters more. Too high, and the block size might never increase. Too low and there might never be scarcity in the blocks.

What are the "correct" values? I have no idea.
Jeweller (OP) (Newbie | Activity: 24 | Merit: 1)
February 06, 2013, 08:41:49 AM  #110

misterbigg - interesting idea, and I agree with your stance, but here are some problems. It seems intuitively clear that, "Hm, if transaction fees are 3% of total bitcoins transmitted, that's too high; the potential block space needs to expand."

Problem is, how do you measure the number of bitcoins transmitted?

Address A has 100BTC.  It sends 98BTC to address B and 1BTC to address C.  How many bitcoins were spent?

Probably 1, right?  That is the assumption the blockchain.info site makes.  But maybe it's actually sending 98.  Or none.  Or 99, to two separate parties.  So we can't know the actual transfer amount.  We can assume the maximum, but then anyone with a large balance that is fairly concentrated address-wise is going to skew that "fee %" statistic way down.  In the above transaction, you know that somewhere between 0 and 100 BTC were transferred.  The fee was 1 BTC.  Was the fee 100% or 1%?

This also opens it up to manipulation.  Someone could mine otherwise empty blocks with enormous looking fees... which, since they mined the block, they get back, costing them nothing.  They could then work to expand the block size for whatever nefarious reason.

So while I think the "fees as % of transfer" is a nice number to work with in theory, in practice it's not really available.  If we want to maintain scarcity of transactions in the blockchain while still having a way to expand it, I think (total fee) / (block reward) is a good metric, because it scales with time and maintains miner incentive.  While in its simplest form it is also somewhat open to manipulation, you could just average over 10 blocks or so, and if an attacker is publishing 10 blocks in a row you've got way bigger problems. (Also, I don't think a temporary block-size-increase attack is really that damaging... within reason, we can put up with occasional spam.  Heck, we've all got a gig of S.Dice gambling on our drives right now.)
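A tiny sketch of that metric, in Python (the field names are invented, and the 0.25 BTC-per-block fee figure mentioned later in the thread is used only as an illustration):

Code:
def fee_to_reward_ratio(blocks, window=10):
    """Average (total fees) / (block subsidy) over the last `window` blocks."""
    recent = blocks[-window:]
    total_fees = sum(b['fees'] for b in recent)
    total_subsidy = sum(b['subsidy'] for b in recent)
    return total_fees / total_subsidy

# Ten blocks at a 25 BTC subsidy with 0.25 BTC of fees each -> ratio of 0.01
blocks = [{'subsidy': 25.0, 'fees': 0.25}] * 10
print(fee_to_reward_ratio(blocks))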
misterbigg (Legendary | Activity: 1064 | Merit: 1001)
February 06, 2013, 08:52:08 AM  #111

Without a sharp constraint on the maximum blocksize there is currently _no_ rational reason to believe that Bitcoin would be secure at all once the subsidy goes down...Limited blockspace creates a market for transaction fees, the fees fund the mining needed to make the chain robust against hostile reorganization.

I agree that there needs to be scarcity. I believe that tying the scarcity to the average amount of tx fees assures that that block size can grow but also that there will always be a market for tx fees.

I strongly disagree with the idea that changing the max block size is a violation of the "Bitcoin currency guarantees"...It's not totally clear that an unlimited max block size would work.

I agree. It seems obvious that if the max block size is left at 1 MB, and there are always non-free transactions that get left out of blocks, the fees for transactions will keep increasing to a high level.

Each node could automatically set its max block size to a calculated value based on disk space and bandwidth

Not really a fan of this idea. Disk space and bandwidth should have little to do with the determination of max block size. Disk space should be largely a non issue: if the goal is to make Bitcoin more useful as a payment network, we should not be hamstrung by temporary limitations in storage space. If bandwidth is an issue then we have bigger problems than max block size - it means that the overlay network (messages sent between peers) has congestion and we need some sort of throttling scheme. If the goal is to make Bitcoin accommodate as much transaction volume as possible, the sensible choice is for nodes to demote themselves to thin clients if they can't keep up.

I just think miners should be able to create their own limits together with multiple "tolerance levels"...That would push towards a consensus. Miners with limits too different than the average would end up losing work.

This doesn't make sense. Given any set of network parameters, there is always a single global optimum strategy for miners to maximize their revenue by prioritizing transactions. Tolerances for block sizes are not something that miners will have a wide variety of opinions on - the goal is always to make money through fees (and the subsidy, but that doesn't change based on which tx are included). Besides, why on earth would we want to waste hashing power by causing more orphans?

If the blocks never get appreciably bigger than they do now, well any half-decent laptop made in the past few years can handle being a full node with no problem.

If Bitcoin's transaction volume never exceeds an average of 1mb per block then we have bigger problems, because the transaction fees will tend towards zero. There's no incentive for paying a fee if transactions always get included. To maintain fees, transaction space must be scarce. To keep fees low, the maximum block size must grow, and in a decentralized fashion that doesn't create extra orphans.

the best proposal I've heard is to make the maximum block size scale based on the difficulty.

Disagree. If this causes the maximum block size to increase to such a size that there is always room for more transactions, then we will end up killing off the fees (no incentive to include a fee).

misterbigg (Legendary | Activity: 1064 | Merit: 1001)
February 06, 2013, 09:02:16 AM (last edit: 09:42:55 AM)  #112

Problem is, how do you measure the number of bitcoins transmitted?...This also opens it up to manipulation...So while I think the “fees as % of transfer” is a nice number to work with in theory, in practice it’s not really available.

Whoops! You're right of course, and I was expecting at least a hole or two. Here's an alternative:

1) Block size adjustments happen at the same time that network difficulty adjusts (every 210,000 blocks?)

2) On a block size adjustment, the size either stays the same or is increased by a fixed percentage (say, 10%). This percentage is a baked-in constant requiring a hard fork to change.

3) The block size is increased if more than 50% of the blocks in the previous interval have a sum of transaction fees greater than 50BTC minus the block subsidy. The 50BTC constant and the threshold percentage are baked in.

Example:

A block size adjustment arrives, and the current subsidy is 12.5BTC. The last 210,000 blocks are analyzed, and it is determined that 62% of them have over 37.5BTC in transaction fees. The maximum block size is increased by 10% as a result.
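A rough Python sketch of this rule, using the constants as proposed (the data structures and function names are invented for the illustration, and the fee-hint helper just restates the formula given below):

Code:
TARGET_TOTAL_BTC   = 50.0   # target total block value (subsidy + fees)
THRESHOLD_FRACTION = 0.50   # more than half the interval's blocks must hit the target
GROWTH_FACTOR      = 1.10   # +10% per adjustment; never shrinks

def next_max_block_size(current_max_bytes, interval_fees, subsidy):
    """interval_fees: total fees (in BTC) of each block in the interval just ended."""
    fee_target = TARGET_TOTAL_BTC - subsidy
    hits = sum(1 for fees in interval_fees if fees > fee_target)
    if hits > THRESHOLD_FRACTION * len(interval_fees):
        return int(current_max_bytes * GROWTH_FACTOR)
    return current_max_bytes

def default_fee_hint(tx_size_bytes, max_block_bytes):
    """Upper-limit fee suggestion: 50 BTC divided by the block size, times the tx size."""
    return TARGET_TOTAL_BTC * tx_size_bytes / max_block_bytes

# The example above: subsidy 12.5 BTC, 62% of 210,000 blocks with > 37.5 BTC in fees
interval_fees = [40.0] * 130_000 + [1.0] * 80_000
print(next_max_block_size(1_000_000, interval_fees, 12.5))   # -> 1,100,000 bytes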

Instead of targeting a fixed percentage of fees (1.5% in my original proposal), this targets a fixed block value (measured in BTC). This scheme still creates scarcity while allowing the max block size to grow. One interesting property is that during growth phases, blocks will reward 50BTC regardless of the subsidy. If transaction volume declines, fees will be reduced. Hopefully this will be the result of Bitcoin gaining purchasing power (correlating roughly to the fiat exchange rate). For this reason, the scheme does not allow the block size to shrink, or else the transaction fees might become too large with respect to purchasing power.

Another desirable property is that a client can display a reasonable upper limit for the default fee given the size of the desired transaction. It is simply 50BTC divided by the block size in bytes, multiplied by the size of the desired transaction.

Someone could mine otherwise empty blocks with enormous looking fees... which, since they mined the block, they get back, costing them nothing.  They could then work to expand the block size for whatever nefarious reason.

I believe this problem is solved with the new proposal. If someone mines a block with a huge fee, it still counts as just one block. This would be a problem if the miner could produce 50% of the blocks in the interval with that property, but this is equivalent to a 51% attack and therefore irrelevant.

The expected behavior of miners and clients is a little harder to analyze than with the fixed fee percentage; can someone help me with a critique?

caveden (Legendary | Activity: 1106 | Merit: 1004)
February 06, 2013, 01:48:47 PM  #113

Absolutely. Of course, that sadly means that we won't be able to ever trust a block until it gets past that point (which I think should be 2-4 blocks).

Would it really?
Yes. You wouldn't be able to trust that a majority of the network acknowledged a block until it gets past the point where all clients are required to accept it as part of the chain.

Imagine that only 10% of the network accepts blocks over 10 MB and 100% accepts blocks less than 1 MB. What if that 10% got lucky and generated two 11 MB blocks in a row? Well, the other 90% would just ignore them because they are too large. So, those blocks get orphaned because the rest of the network found three small blocks. If you just accepted the 11 MB blocks as a confirmation and sent goods because of it, you could be screwed if there was a double-spend.

I understand; I just think that's not a serious enough issue to motivate changing the 10-minute interval as well. Instead, if relaying orphans becomes a normal practice, nodes would be able to see whether there's another branch in which their transactions don't exist. If your transaction is currently in all branches being mined, you can be certain that you got your confirmation.
So, to counter the problem you raise, I think that relaying orphans is good enough. Why wouldn't it be?

I just think miners should be able to create their own limits together with multiple "tolerance levels"...That would push towards a consensus. Miners with limits too different than the average would end up losing work.

This doesn't make sense. Given any set of network parameters, there is always a single global optimum strategy for miners to maximize their revenue by prioritizing transactions. Tolerances for block sizes are not something that miners will have a wide variety of opinions on - the goal is always to make money through fees (and the subsidy, but that doesn't change based on which tx are included). Besides, why on earth would we want to waste hashing power by causing more orphans?

There would be some variety, surely. For the blocks they produce themselves, miners will seek to optimize the ratio of time-to-propagate to revenue in fees, while for blocks they receive from other miners, they would rather they be as small as possible. These parameters are not the same for different miners, particularly the time to propagate, as it strongly depends on how many connections you can keep established and on your bandwidth/network lag.

Plus, if there is a "global optimal max size", it's quite pretentious to claim you can come up with the "optimal formula" to calculate it. Even if you could, individual peers would never have all the necessary data to feed into this formula, as it would have to take into consideration the hardware resources of all miners and the network as a whole. That's impracticable. Such a maximum size must be established via a decentralized/spontaneous order. It's pretty much like economic central planning versus free markets, actually.
mrvision (Sr. Member | Activity: 527 | Merit: 250)
February 06, 2013, 02:21:57 PM  #114

Plus, if there is a "global optimal max size", it's quite pretentious to claim you can come up with the "optimal formula" to calculate it. Even if you could, individual peers would never have all the necessary data to feed into this formula, as it would have to take into consideration the hardware resources of all miners and the network as a whole. That's impracticable. Such a maximum size must be established via a decentralized/spontaneous order. It's pretty much like economic central planning versus free markets, actually.

I think that if we reach the 1 MB limit and don't upgrade with a solution, then the spontaneous order will create fiat currencies backed by bitcoins, in order to reduce the number of transactions on the Bitcoin network. So this would also lead to less revenue for the miners (plus a loss of reputation for the Bitcoin network).

The block size hard limit is nothing but a protectionist policy.

Even if misterbigg's approach might not be the optimal solution, at least it's an idea.
misterbigg (Legendary | Activity: 1064 | Merit: 1001)
February 06, 2013, 02:51:53 PM  #115

if there is a "global optimal max size", it's quite pretentious to claim you can come up with the "optimal formula" to calculate it.

Definitely, but I was talking about an optimum strategy for prioritizing transactions, not an optimum choice of max block size. Typically the strategy will be either:

1) Include all known pending transactions with fees

or

2) Choose the pending transactions with the highest fees per kilobyte ratio and fill the block up to a certain size
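One way to express strategy (2) is a simple greedy selection by fee density, sketched here in Python (the mempool representation is invented for the illustration):

Code:
def select_transactions(mempool, max_block_bytes):
    """mempool: list of (txid, size_bytes, fee_btc) tuples.
    Greedily fill the block with the highest fee-per-byte transactions
    (same ordering as fee per kilobyte)."""
    by_fee_density = sorted(mempool, key=lambda tx: tx[2] / tx[1], reverse=True)
    chosen, used = [], 0
    for txid, size, fee in by_fee_density:
        if used + size <= max_block_bytes:
            chosen.append(txid)
            used += size
    return chosen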

There would be some variety, surely. For the blocks they produce themselves, miners will seek to optimize the ratio of time-to-propagate to revenue in fees, while for blocks they receive from other miners, they would rather they be as small as possible. These parameters are not the same for different miners, particularly the time to propagate, as it strongly depends on how many connections you can keep established and on your bandwidth/network lag.

I don't understand this aspect of the network. Why do miners want smaller blocks from other miners? Do blocks take a long time to propagate? Are you saying that newly solved blocks are sent around on the same peer connections used to transmit messages, and that while a connection is being used to send a block (which can be large relative to the size of a transaction) it holds up the queue for individual tx?

If this is the case, perhaps an easier way to deal with the propagation of blocks is to have two overlays, one for tx and the other for blocks.

I think that if we reach the 1 MB limit and don't upgrade with a solution, then the spontaneous order will create fiat currencies backed by bitcoins, in order to reduce the number of transactions on the Bitcoin network.

I'm not so sure this is a bad thing. These ad-hoc "fiat" currencies may be created with unique properties that make them better suited to the task at hand than Bitcoin. For example, a private payment network that provides instant confirmation and requires no mining (relying on trust in a central authority).

Quote
So this would also lead to less revenue for the miners (plus a loss of reputation for the Bitcoin network).

The average transaction fee per kilobyte is inversely proportional to the block size, so leaving the block size at 1 MB will cause fees to increase once blocks are regularly full. The rate of increase in fees will be proportional to the growth in the number of transactions.

Miners would love it if all blocks had a one-transaction maximum; this would maximize fees (assuming people didn't leave the Bitcoin network due to high fees).




DeathAndTaxes (Donator, Legendary | Activity: 1218 | Merit: 1079 | Gerald Davis)
February 06, 2013, 03:03:48 PM  #116

Sadly, any attempt to find an "optimal" block size is likely doomed because it can be gamed and "optimal" is hard to quantify.

Optimal for the short-term-thinking non-miner: a block size large enough that fees are driven down to zero or close to it.

Optimal for the network: a block size large enough to create sufficient competition that fees can support the network relative to its true economic value.

Optimal for the short-term-thinking miner: never rising above 1 MB, to maximize fee revenue.

However, I would point out that the blockchain may eventually become the equivalent of bank wire transfers.  FedWire, for example, transferred ~$663 trillion USD in 2011 using 127 million transactions.  If FedWire used a 10-minute block, that would be ~2,500 transactions per block; for Bitcoin, 2,500 transactions in a 1 MB block works out to roughly 400 bytes per tx.  So it shows that Bitcoin can support a pretty massive fund transfer network even with a 1 MB block limit.

Some would dismiss this as too centralized, but I would point out that direct access to FedWire is impossible for anyone without a banking charter.  Direct access to the blockchain simply requires payment of a fee and computing resources capable of running a node.  This means the blockchain will always remain far more open.

I think one modest change (which is unlikely to make anyone happy but would allow higher tx volume) would be to take it out of everyone's hands.  The block subsidy follows a specific, exact path for a reason.  If it were open to human control, Bitcoin would likely be hyperinflated and nearly worthless today.  A proposal could be made for a hard fork to double (or increase by some other factor) the max size of a block on every subsidy cut.

This would allow for example (assuming avg tx is 400 bytes):
2012 - 1MB block =~ 360K daily transactions (131M annually)
2016 - 2MB block =~ 720K daily transactions (262M annually)
2020 - 4MB block =~ 1.44M daily transactions (525M annually)
2024 - 8MB block =~ 2.88M daily transactions (1B annually)
2028 - 16MB block =~5.76M daily transactions (2B annually)
2032 - 32MB block =~ 11.52M daily transactions (4B annually)

Moore's law should ensure that processing a 32MB block in 2032 is computationally less of a challenge than processing a 1MB block today.
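The arithmetic behind that table is easy to reproduce (assuming the same ~400-byte average transaction and 144 blocks per day):

Code:
AVG_TX_BYTES   = 400
BLOCKS_PER_DAY = 144

size_mb = 1
for year in (2012, 2016, 2020, 2024, 2028, 2032):
    tx_per_day = size_mb * 1_000_000 // AVG_TX_BYTES * BLOCKS_PER_DAY
    print(year, f"{size_mb} MB", f"{tx_per_day:,} tx/day",
          f"{tx_per_day * 365 / 1e6:.0f}M annually")
    size_mb *= 2   # max block size doubles at each subsidy halving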
DeathAndTaxes (Donator, Legendary | Activity: 1218 | Merit: 1079 | Gerald Davis)
February 06, 2013, 03:15:46 PM  #117

I don't understand this aspect of the network. Why do miners want smaller blocks from other miners? Do blocks take a long time to propagate? Are you saying that newly solved blocks are sent around on the same peer connections used to transmit messages, and that while a connection is being used to send a block (which can be large relative to the size of a transaction) it holds up the queue for individual tx?

If this is the case, perhaps an easier way to deal with the propagation of blocks is to have two overlays, one for tx and the other for blocks.

Yes, there is a propagation delay for larger blocks, so when two blocks are produced by different miners at roughly the same time, the larger block is more likely to be orphaned. The subsidy distorts the market effect.  Say you know that by making the block 4x as large you can gain 20% more fees.  If this increases the risk of an orphan by 20%, then the larger block is break-even.  However, the subsidy distorts the revenue-to-size ratio.  20% more fees may only mean 0.4% more total revenue if fees make up only 2% of revenue (i.e. 25 BTC subsidy + 0.5 BTC fees).  As a result, a 20% increase in orphan rates isn't worth a 0.4% increase in total revenue.

As the subsidy becomes a smaller percentage of miners' total compensation, the effect of the distortion will lessen.  There has been some brainstorming on methods to remove the "large block penalty".  It likely would require a separate mining overlay.
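A quick back-of-the-envelope on the distortion, using the figures above and treating the baseline orphan risk as small (this is my own illustration, not from the post): the bigger block pays off only if the absolute increase in orphan probability stays below the extra fees' share of the total block value.

Code:
def tolerable_orphan_increase(subsidy, fees, fee_gain=0.20):
    """Largest absolute increase in orphan probability for which a block
    earning `fee_gain` more in fees still has higher expected revenue,
    neglecting the (small) baseline orphan risk."""
    extra_fees = fees * fee_gain
    return extra_fees / (subsidy + fees + extra_fees)

print(tolerable_orphan_increase(25.0, 0.5))   # ~0.004 -> about 0.4 percentage points
print(tolerable_orphan_increase(0.0, 0.5))    # ~0.167 -> about 17 points with no subsidy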
misterbigg (Legendary | Activity: 1064 | Merit: 1001)
February 06, 2013, 03:29:51 PM (last edit: 03:43:32 PM)  #118

Yes there is a propagation delay for larger blocks

There's a delay regardless of whether or not two different blocks are solved at the same time?

Quote
when two blocks are produced by different miners at roughly the same time, the larger block is more likely to be orphaned.

You mean that when two different blocks are solved at the same time, the smaller block will propagate faster and therefore more miners will start building on it versus the larger block?

Quote
...increases the risk of an orphan by 20%

Is there a straightforward way to estimate the risk of an orphan?

Quote
As the subsidy becomes a smaller percentage of miners' total compensation, the effect of the distortion will lessen.  There has been some brainstorming on methods to remove the "large block penalty".  It likely would require a separate mining overlay.

Even with a separate overlay, two blocks solved at the same time is a problem. And I would imagine that adding a new overlay is an extreme solution to be considered as a last resort only.

...any attempt to find an "optimal" block size is likely doomed because it can be gamed and "optimal" is hard to quantify.

What are your thoughts on the last scheme I described?

...
2016 - 2MB block =~ 720K daily transactions (262M annually)
...

Hmm... this seems problematic. If the transaction volume doesn't grow sufficiently, this could kill fees. But if the transaction volume grows too much, fees will become exorbitant. If we accept that the max block size needs to change, I believe it should be done in a way that decreases scarcity in response to a rise in average transaction fees.

There would be some variety, surely. For the blocks they produce themselves, miners will seek to optimize the ratio of time-to-propagate to revenue in fees, while for blocks they receive from other miners, they would rather they be as small as possible.

Sure, a miner might prefer that received blocks be as small as possible, but since there's no way to refuse to receive a block from a peer, this point is moot. They could drop a block that is too big once they get it, but this doesn't help them very much other than not having to forward it to the remaining peers. And even this has little global effect, since those other peers will just receive it from someone else.

These parameters are not the same for different miners, particularly the time to propagate, as it strongly depends on how many connections you can keep established and on your bandwidth/network lag.

Bandwidth will be the limiting factor in determining the number of connections that can be maintained. For purposes of analysis we should assume that miners choose their degree (number of peers) such that bandwidth is not fully saturated, because doing otherwise would mean not being able to collect the largest number of transactions possible for the available bandwidth, limiting revenue.

Do people in mining pools even need to run a full node?


jl2012 (Legendary | Activity: 1792 | Merit: 1092)
February 06, 2013, 03:45:21 PM  #119

Problem is, how do you measure the number of bitcoins transmitted?...

Whoops! You're right of course, and I was expecting at least a hole or two. Here's an alternative: ... 3) The block size is increased if more than 50% of the blocks in the previous interval have a sum of transaction fees greater than 50BTC minus the block subsidy. The 50BTC constant and the threshold percentage are baked in. ... Instead of targeting a fixed percentage of fees (1.5% in my original proposal), this targets a fixed block value (measured in BTC). ...



There is no reason to stick the total reward at 50 BTC, because you need to consider purchasing power. Although we only have a 25 BTC block reward at this moment, finding a block now gives you >1000x more in USD than a 50 BTC block did in 2009. A future miner may be satisfied with a 1 BTC block if it is equivalent to US$10,000 of today's purchasing power. However, purchasing power is not mathematically determined and you cannot put it into the formula.

Also, requiring a total reward of 50 BTC means requiring 25 BTC in fees NOW. As the typical total tx fee in a block is about 0.25 BTC, fees would have to increase by 100x, and obviously that would kill the system.

misterbigg (Legendary | Activity: 1064 | Merit: 1001)
February 06, 2013, 03:53:47 PM (last edit: 04:06:55 PM)  #120

Also, requiring a total reward of 50 BTC means requiring 25 BTC in fees NOW. As the typical total tx fee in a block is about 0.25 BTC, fees would have to increase by 100x, and obviously that would kill the system.

How and why would the system be "killed"? The max block size would simply not increase.

There is no reason to stick the total reward at 50 BTC, because you need to consider purchasing power.

Here's yet another alternative scheme:

1) Block size adjustments happen at the same time that network difficulty adjusts

2) On a block size adjustment, the size either stays the same or is increased by a fixed percentage (say, 10%). This percentage is a baked-in constant requiring a hard fork to change.

3) The block size is increased if more than 50% of the blocks in the previous interval have a size greater than or equal to 90% of the max block size. Both of the percentage thresholds are baked in.

Example:

A block size adjustment arrives, and the current max block size is 1024KB. The last 210,000 blocks are analyzed, and it is determined that 125,000 of them are at least 922KB in size. The maximum block size is increased by 10% as a result.

Instead of targeting a fixed block reward, this scheme tries to determine if miners are consistently reaching the max block limit when filling a block with transactions (the 90% percentage should be tuned based on historical transaction data). This creates scarcity (in proportion to the 50% figure) while remaining independent of the purchasing power.
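A short Python sketch of this fullness-based variant, with the constants as proposed (the data layout is again invented for the illustration):

Code:
FULLNESS_THRESHOLD = 0.90   # a block "counts" if it is at least 90% of the current max
BLOCK_SHARE        = 0.50   # more than half of the interval's blocks must count
GROWTH_FACTOR      = 1.10   # +10% per adjustment; never shrinks

def next_max_block_size(current_max_bytes, block_sizes):
    """block_sizes: size in bytes of every block in the interval just ended."""
    nearly_full = sum(1 for s in block_sizes if s >= FULLNESS_THRESHOLD * current_max_bytes)
    if nearly_full > BLOCK_SHARE * len(block_sizes):
        return int(current_max_bytes * GROWTH_FACTOR)
    return current_max_bytes

# The example above: 125,000 of 210,000 blocks at >= ~922KB with a 1024KB cap
sizes = [950_000] * 125_000 + [200_000] * 85_000
print(next_max_block_size(1_048_576, sizes))   # -> grows by 10%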