Author Topic: Elastic block cap with rollover penalties  (Read 24004 times)
Meni Rosenfeld (OP)
Donator, Legendary
Activity: 2058 | Merit: 1054
June 02, 2015, 08:16:17 PM (last edit: June 08, 2015, 02:28:18 PM)
#1

tl;dr: I propose replacing the hard cap on the data size of a block (1MB, 20MB, or whatever it is) with an elastic one, where resistance to larger blocks accumulates progressively. Miners will be required to pay a superlinear penalty for large blocks, to be paid into a rollover fee pool. This will greatly increase Bitcoin's robustness to a situation where the block cap is approached, and allow a healthy fee market.

Background
These days there is heated debate about whether the block size limit of 1MB should be relaxed. Many arguments are being discussed, but the most common one is the classic tradeoff:

Bigger blocks -> Harder to run a node -> Fewer nodes -> More centralization
Smaller blocks -> Fewer payments can be made directly as transactions on the blockchain -> More reliance on payment processing solutions involving 3rd parties -> More centralization

Most agree that there is some optimal value for this tradeoff, and the debate is about what it is, how to determine it, how to make decisions regarding it, etc. And most assume that if the blocks are too small, we'll get a "heads up" in the form of increasing tx fees.

However, Mike argues, and I tend to agree, that this is not how it will work. Rather, if the block limit is too small for the transaction demand, the Bitcoin network will crash. The gist of his argument, as I see it, is - market forces should find a fee equilibrium where transaction supply matches demand, whatever they are. However, market forces don't react fast enough to fluctuations, creating buildups that technically can't be handled by the software.

Mike argues also that since we don't know when we'll reach the limit - and once we do, we will have catastrophic failure without warning - we must hurry to raise the limit to remain safe. That part I have an issue with - if Bitcoin can't gracefully degrade in the face of rising transaction volume, we will have problems no matter what the current block size limit is. We should instead focus on fixing that problem.

In this post I will introduce a protocol modification that might eliminate this problem. I will describe the suggestion and analyze how it can help. With it in place, we no longer run the risk of a crash landing, only rising fees - giving us an indication that something should be changed.

And then, we can go back to arguing what the block size should be, given the tradeoff above.

Rollover fee pool
This suggestion requires, as a prerequisite, the use of a rollover fee pool. I first introduced the concept here - this post is worth a read, but it used the concept for a different purpose than we have here.

The original idea was that fees for a transaction would not be paid to the one miner of the block which includes it - rather, fees would be paid into a pool, from which the funds would gradually be paid out to future miners. For example, in each block the miner could be entitled to 1% of the current funds in the pool (in addition to any other block rewards).

In the current suggestion, we will use such a pool that is paid out over time - but it will not be the users who put money into the pool. Transaction fees will be paid to the miner who found the block as normal.

Edit: Saying again for clarity - In the current proposal, fees from transactions will be paid to the miner of the current block instantly and in full. The miners can't gain anything by accepting tx fees out of band. The one paying into the rollover pool is the miner himself, as explained below.
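
To make the pool dynamics concrete, here is a minimal sketch (a toy model, not consensus code; the 1% payout fraction is the example figure from above, and the draw-then-deposit ordering is an assumption):

Code:
# Toy model of the rollover pool: each block, the miner draws a fixed
# fraction of the pool balance, and - under this proposal - pays his
# block size penalty into it.
PAYOUT_FRACTION = 0.01  # the 1%-per-block example given above

def pool_step(pool, penalty_paid_in):
    # One block: the miner collects his pool share, then the penalty flows in.
    miner_share = pool * PAYOUT_FRACTION
    return pool - miner_share + penalty_paid_in, miner_share

pool = 0.0
for height in range(1, 6):
    pool, share = pool_step(pool, penalty_paid_in=0.5)  # assume a 0.5 BTC penalty per block
    print(f"block {height}: miner draws {share:.4f} BTC, pool now {pool:.4f} BTC")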

Elastic block cap
The heart of the suggestion is - instead of forbidding large blocks, penalize them. The miner of a large block must pay a penalty that depends on the block's size. The penalty will be deducted from the funds he collects in the generation transaction, and paid into the rollover pool, to be distributed among future miners. If the penalty exceeds the miner's income of tx fees + minted coins + his share of the current rollover pool, the block is invalid.

This requires choosing a function f that returns the penalty for a given block size. There is great flexibility and there's little that can go wrong if we choose a "wrong" function. The main requirements are that it is convex, and has significant curvature around the size we think blocks should be. My suggestion: Choose a target block size T. Then for any given block size x, set f(x) = Max(x-T,0)^2 / (T*(2T-x)). (graph)

This will mean that there are no penalties for blocks up to size T. As the block size increases, there is a penalty for each additional transaction - negligible at first, but eventually sharply rising. Blocks bigger than 2T are forbidden.
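
A minimal sketch of f and the resulting generation-transaction rule (a toy illustration; the units and exact accounting order are assumptions, not part of the proposal text):

Code:
def block_penalty(x, T):
    # f(x) = max(x - T, 0)^2 / (T * (2T - x)): zero up to T, rising
    # sharply toward 2T; blocks of size >= 2T are invalid outright.
    if x >= 2 * T:
        raise ValueError("block size >= 2T: invalid")
    return max(x - T, 0) ** 2 / (T * (2 * T - x))

def generation_claim_valid(claimed, minted, tx_fees, pool_share, x, T):
    # The miner may claim at most his rewards minus the penalty.
    return claimed <= minted + tx_fees + pool_share - block_penalty(x, T)

T = 3_000_000  # a 3 MB target, the value suggested later in the thread
for size in (3_000_000, 4_000_000, 5_000_000, 5_900_000):
    print(f"{size / 1e6:.1f} MB -> penalty {block_penalty(size, T):.3f}")
# 3.0 MB -> 0.000, 4.0 MB -> 0.167, 5.0 MB -> 1.333, 5.9 MB -> 28.033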

Analysis
I assume we do want scarcity in the blockchain - this prevents useless transactions that bloat the blockchain and make it harder to run nodes, and incentivizes users to fund the Bitcoin infrastructure. A block size limit creates scarcity - but only if there ever really are situations where we reach the limit. But as things stand, reaching the limit creates technical malfunctions.

Mike calls the problem "crash landing", but to me it's more like hitting a brick wall. Transaction demand runs towards the limit with nothing slowing it down, until it painfully slams into it. One minute there is spare room in the blocks, and no reason to charge tx fees - and the next, there's no room, and we run into problems.

If more transactions enter the network than can be put in blocks, the queue will grow ever larger and can take hours to clear, meaning that transactions will take hours to confirm. Miners can use fees to signal to the market to ease up on new transactions, but the market will be too slow to react. First, because the software isn't optimized to receive this signal; and second, because transaction demand is inelastic in short time scales. If, over sustained periods of time, transaction fees are high, I will stop looking for places to pay with Bitcoin, I will sign up to a payment facilitation service, etc. But short-term, if I have a transaction I'm set on sending right now (e.g. a restaurant tab), I'll be willing to pay very high fees for it if I must. So fees are not effective in controlling the deluge of transactions.

Enter an elastic cap. When tx demand exceeds the target block size, a backlog doesn't accumulate - miners simply include the extra transactions in the block. They will start to require a bit of extra fees to compensate for the penalty, but can still clear the queue at the rate it is filling. If the incoming tx rate continues to increase, the marginal penalty miners have to pay per tx will increase, so fees will become higher - but since the process is more gradual, clients will be in a better position to understand what fee they need to pay for quick confirmation. The process will resemble climbing a hill rather than running into a brick wall. As we push the limits, the restoring force will grow stronger until we reach an equilibrium.

On longer time scales, the market will have an understanding of the typical fees, and make decisions accordingly. It can also analyze times of the day/week when fees are lower, and use those for any non-time-sensitive transactions.

With a hard cap, the queue of transactions can only clear at a specific rate. Below this rate there is no fee tension, and above it there is instability. With an elastic cap, a longer queue causes transaction fees to be higher, encouraging miners to include more in a block, increasing the speed at which the queue clears. This is a stable equilibrium that prevents the queue from growing unboundedly, while always maintaining some fee tension.
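
A toy simulation of this equilibrium argument (assumptions: exponential per-block demand averaging 3.2 MB against a 3 MB target, and a made-up response curve for how far past T miners are willing to go as the backlog grows):

Code:
import random

T = 3.0  # target block size, MB

def elastic_capacity(queue_mb):
    # Assumed miner response: blocks grow past T while the backlog is
    # long, approaching but never reaching the hard bound of 2T.
    return T + 0.9 * T * queue_mb / (queue_mb + 10.0)

def simulate(capacity_fn, mean_demand_mb=3.2, blocks=2000):
    queue = 0.0
    for _ in range(blocks):
        queue += random.expovariate(1.0 / mean_demand_mb)  # bursty arrivals
        queue = max(queue - capacity_fn(queue), 0.0)
    return queue

random.seed(1)
print("hard cap backlog:    %.1f MB" % simulate(lambda q: T))       # grows without bound
print("elastic cap backlog: %.1f MB" % simulate(elastic_capacity))  # stays small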

Incidentally, the current system is a special case of this suggestion, with f(x) = If (x<T, 0, Infinity). We assume that an invalid block is equivalent to an infinite penalty.
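
In code terms, the existing rule is the degenerate penalty function:

Code:
def hard_cap_penalty(x, T):
    # Free below the limit, infinitely costly (i.e. invalid) at or above it.
    return 0.0 if x < T else float("inf")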

The way forward
I believe there is very little downside, if any, to implementing this method. It can prove useful even if the crash landing scenario isn't as grave as Mike suggests. And the primitives will be handy for other possible protocol improvements. I believe it is an essentially simple and fully fleshed-out idea. As such, I hope it can be accepted without much controversy.

It is however a major change which may require some significant coding. When discussing this idea with Gavin, he explained he'd be in a better position to evaluate it if given working, testable code. Writing code isn't my thing, though. If anyone is interested in implementing this, I'll be happy to provide support.

Related work
A similar system exists in CryptoNote, see e.g. section 6.2.3 of https://cryptonote.org/whitepaper.pdf.

Greg Maxwell proposed a similar system with variable mining effort instead of reward penalty - see e.g. http://sourceforge.net/p/bitcoin/mailman/message/34100485/ for discussion.

This proposal is not directly related to floating block limit methods, where the limit parameters are determined automatically based on recent history of transaction demand.

lechango
Newbie
Activity: 14 | Merit: 0
June 02, 2015, 09:09:16 PM
#2

This is a very good idea, but as you said it will require a lot of work; it seems like a good portion of BTC's code would need to be rewritten. I'd say we're better off just letting BTC croak, and eventually, once the damage caused to its public image has faded, transitioning to a better alt will be in the best interest of the whole cryptocurrency community.
ArticMine
Legendary
Activity: 2282 | Merit: 1050
Monero Core Team
June 02, 2015, 09:12:50 PM
#3

The key here is how T is set. If T is fixed, then 2T becomes the hard limit and the problem remains. If T is set based on some average of previously mined blocks, then this may address the problem.

alani123
Legendary
Activity: 2380 | Merit: 1411
June 02, 2015, 09:43:50 PM
#4

Would this address bandwidth issues that China could suffer from if block size was increased?

Meni Rosenfeld (OP)
Donator, Legendary
Activity: 2058 | Merit: 1054
June 02, 2015, 09:45:47 PM
#5

This is a very good idea, but as you said it will require a lot of work; it seems like a good portion of BTC's code would need to be rewritten.
I don't think the changes are that extensive. Basically you just need to change the rules for generation transaction validity.

I'd say we're better off just letting BTC croak, and eventually, once the damage caused to its public image has faded, transitioning to a better alt will be in the best interest of the whole cryptocurrency community.
Umm... No?


The key here is how T is set. If T is fixed, then 2T becomes the hard limit and the problem remains. If T is set based on some average of previously mined blocks, then this may address the problem.
We still need some way to determine the optimal block size, but we have much more leeway. The wrong choice will not cause catastrophic failure - rather, gradually increasing fees, which will indicate that a buff is needed. The flexibility will make it easier to reach community consensus about changing hardcoded parameters.

Reusing what I wrote to Gavin in a private exchange - I don't believe in having a block limit calculated automatically based on past blocks, because it really doesn't put a limit at all. Suppose I wanted to spam the network. Now there is a limit of 1MB/block, so I create 1MB/block of junk. If I keep this up, the rule will update the size to 2MB/block, and then I spam with 2MB/block. Then 4MB, ad infinitum. The effect of increasing demand for legitimate transactions is similar. There's no real limit and no real market for fees.

Perhaps we can find a solution that uses an automatic rule for short-term fluctuations, and hardcoded parameters for long-term trends. If a good automatic cap rule can be found, it will be compatible with this method.


Would this address bandwidth issues that China could suffer from if block size was increased?
Only very indirectly - it can help Bitcoin function with a low cap, hence reducing the need to increase the cap.

binaryFate
Legendary
Activity: 1484 | Merit: 1003
Still wild and free
June 02, 2015, 09:49:39 PM
#6

Isn't this precisely what is implemented in Monero? (Except you don't have a rollover pool; the penalty is simply deducted from the block reward for good.)

leopard2
Legendary
Activity: 1372 | Merit: 1014
June 02, 2015, 09:52:25 PM
#7

I think (in contrast to global politics) we should follow the most intelligent people, and Meni is certainly smarter than most people here.

So, I support the idea. Just that 2T is a bit high - will that work? I mean, will it be possible to download a blockchain with 2T blocks in it? A 2T blockchain consisting of smaller blocks is not an issue, but what happens if the download gets stuck DURING the 2T?

Is the Bitcoin client ready to download blocks partially and continue after a crash/shutdown/process kill?  Huh

Meni Rosenfeld (OP)
Donator, Legendary
Activity: 2058 | Merit: 1054
June 02, 2015, 10:10:53 PM (last edit: June 02, 2015, 10:42:16 PM)
#8

Isn't it precisely what is implemented in Monero? (except you don't have a rollover pool, the penalty is simply deducted from the block reward for good).
No idea what happens in Monero, but if so, more power to them.

In theory you could have the penalty coins vanish here as well, but that has several disadvantages.

So, I support the idea. Just that 2T is a bit high - will that work? I mean, will it be possible to download a blockchain with 2T blocks in it? A 2T blockchain consisting of smaller blocks is not an issue, but what happens if the download gets stuck DURING the 2T?

Is the Bitcoin client ready to download blocks partially and continue after a crash/shutdown/process kill?  Huh
In case it's unclear, T doesn't mean terabyte. It's just a parameter that can be configured based on our needs. At this point in time I suggest T = 3MB.

digicoin
Legendary
Activity: 1106 | Merit: 1000
June 02, 2015, 10:18:17 PM
#9

Ah, the Monero solution to the block size limit.

The block size limit itself is not the important part of this discussion. What matters is:

+ Too many people don't understand what the block size and the block size limit are
+ Political discussions: Bitcoin for everybody, or for elites only (lots of people try to ignore where Bitcoin's network effect comes from)
+ Group interests
+ Boys crying wolf: Viacoin, Blockstream devs
franckuestein
Legendary
Activity: 1960 | Merit: 1130
Truth will out!
June 02, 2015, 11:46:41 PM
#10

Finally a new opinion that has nothing to do with the 1 or 20MB blocks debate...



At this point in time I suggest T = 3MB.

Just one question, Meni:
Why do you think that a target block size T (equal to 3MB) would be a good choice?

Thanks!

chmod755
Legendary
Activity: 1386 | Merit: 1020
June 03, 2015, 01:13:55 AM
#11

If T = 3MB, it's like a 6MB limit with pressure to keep blocks smaller than 3MB, unless there are enough transactions paying fees to make including them worth it?

I think T should scale over time as bandwidth grows. 42 transactions per second is still a low limit for a global payment network.

bitcoinfees
Newbie
Activity: 4 | Merit: 0
June 03, 2015, 02:34:26 AM
#12

Just a comment on the following:

Quote
With a hard cap, the queue of transactions can only clear at a specific rate. Below this rate there is no fee tension, and above it there is instability.

I don't think you can say that - that would be like saying, queues are never a problem as long as utilization is < 1 (which of course is required for stability). But long queues do in fact develop when utilization is < 1, due to variability in service / arrival times (re: bitcoin, the dominant source of variability is in inter-block times).

As long as there are queues, fee tension will be present, as mempool transactions are largely prioritised by feerate. Empirically, we are observing periods of fee tension (i.e. busy periods, when pools' max block size is reached) quite often these days.

Otherwise I like this perspective on the block size problem (even though I can't really comment on the proposed solution), in particular the observation that in the short term transaction demand is inelastic. (Upon further thought, however, proponents of increasing block size would say that the inelastic demand problem doesn't really apply if the block size cap is sufficiently higher than the average transaction rate.)
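
A toy sketch of the point about utilization < 1 (assuming exponential inter-block times and transactions arriving at a constant 90% of capacity):

Code:
import random

random.seed(7)
queue, peak = 0.0, 0.0
for _ in range(10_000):
    interval = random.expovariate(1.0)  # inter-block time, mean 1.0
    queue += 0.9 * interval             # demand arrives at 90% of capacity
    queue = max(queue - 1.0, 0.0)       # each block clears one block's worth
    peak = max(peak, queue)
print(f"utilization 0.9, yet the peak backlog reached {peak:.1f} blocks")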
tacotime
Legendary
Activity: 1484 | Merit: 1005
June 03, 2015, 03:09:57 AM (last edit: June 03, 2015, 03:38:29 AM)
#13

Isn't it precisely what is implemented in Monero? (except you don't have a rollover pool, the penalty is simply deducted from the block reward for good).
No idea what happens in Monero, but if so, more power to them.

Yes. There is a quadratic penalty imposed for blocks above the median size (with a maximum size of 2 * median(size of last 400 blocks)), with a dynamic, elastic block sizing algorithm. Unlike Meni's suggestion, the reduction in block subsidy is not given to a pool but rather deferred to future miners because the subsidy algorithm is based around the number of coins.

See section 6.2.3 of the CryptoNote whitepaper: https://cryptonote.org/whitepaper.pdf

It was one of the components most criticized by the Bitcoin Core developers, but so far testing on mainnet and testnet has not turned up any failures.
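
For reference, a sketch of the penalty as described above (a paraphrase of section 6.2.3; the exact constants and validity rules are Monero's, not verified here):

Code:
from statistics import median

def penalized_base_reward(base_reward, block_size, last_400_sizes):
    # Quadratic subsidy penalty above the median of recent block sizes;
    # blocks larger than 2 * median are invalid outright.
    m = median(last_400_sizes)
    if block_size <= m:
        return base_reward
    if block_size > 2 * m:
        raise ValueError("block larger than 2 * median: invalid")
    return base_reward * (1 - (block_size / m - 1) ** 2)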

There is a major shortcoming I can see in the rollover fee pool suggestion: miners are incentivized to accept fees out of band so they can obtain all the highest fees instantly, thus defeating the entire purpose of that feature.

Bitcoin Betting Guide
Full Member
Activity: 180 | Merit: 100
June 03, 2015, 03:13:59 AM
#14

By George you have done it old boy! This solves so many problems.

Quote
We still need some way to determine the optimal block size, but we have much more leeway. The wrong choice will not cause catastrophic failure

Fantastic, we can just take our best guess without pulling our hair out worrying about what disasters will ensue if the guess is wrong. Perhaps an 8MB + 20%-per-year elastic cap, where those values can be adjusted with a soft fork.

What if the rollover fee could become the full node reward?

gmaxwell
Moderator, Legendary
Activity: 4158 | Merit: 8382
June 03, 2015, 04:31:48 AM
#15

There is a major shortcoming I can see in the rollover fee pool suggestion: miners are incentivized to accept fees out of band so they can obtain all the highest fees instantly, thus defeating the entire purpose of that feature.
That is the main aspect of what Monero/Bytecoin does that I complained about: that you can simply pay fees out of band and bypass it as the subsidy declines (even with the constant inflation, the subsidy might be inconsequential compared to the value of the transactions if the system took off). In Bitcoin this is not hypothetical: since 2011 at least, pools have accepted out-of-band payments, and it's not unusual for various businesses to have express handling deals with large pools; and this is absent any major reason to pull fees out of band.

My proposal (on bitcoin-development and previously on the forum) is effectively the Monero/Bytecoin behavior (and explicitly credited to it), but rather than transferring fees/subsidy, it changes the cost of being successful at the work function.

I haven't had a chance yet to read and internalize the specifics of what Meni is suggesting (or rather, I've read it, but the complete, _precise_ meaning isn't clear to me yet). The main things to watch out for in solutions of this class are (1) bypass vulnerability (where you pay fees, or the like, out of band to avoid the scheme) and (2) scale invariance (the scheme should work regardless of Bitcoin's value). My proposal used effort adjustment (it's imprecise to call it difficulty adjustment, though I did, because it doesn't change the best chain rule; it just changes how hard it is for a miner to meet that rule).

I think this kind of proposal is a massive improvement on proposals without this kind of control; and while it does not address all of the issues around larger blocks - e.g. they do not incentive-align miners and non-mining users of the system - it seems likely that proposals in this class would greatly improve some of them, and as such they are worth a lot more consideration.

Thanks for posting, Meni-- I'm looking forward to thinking more about what you've written.
goatpig
Legendary
Activity: 3668 | Merit: 1345
Armory Developer
June 03, 2015, 05:17:58 AM
#16

I personally think a P2P network cannot rely on magic numbers, in this case 1MB or 20MB blocks. The bar is either set too low, creating an artificial choke point, or set too high, opening room for both DOS attacks and more centralization. As such, a solution that pins block size to fees is, in my view, the most sensible alternative to explore. Fees are the de facto index of transaction size and priority, so they are also the best candidate to determine valid block size.

However, I have a couple of divergences from Meni's proposal:

First, while I find the fee pool idea intriguing, and I certainly see benefits to it (like countering "selfish" mining), I don't believe it is a necessary device for pinning block size limits to fees. Simply put, I think it's a large implementation effort - or at least a much larger one than is needed to achieve the task at hand. It also requires modifying, or at least amending, consensus rules, something the majority of the Core team has been trying to keep to a minimum. I believe there is wisdom in that position.

Second, I expect a fee pool alone will increase block verification cost. If it is tied to the block size validity as well, it would increase that cost even further. The people opposing block size growth base it on the rationale that an increase in resource to propagate and verify blocks effectively raises the barrier to entry of the network, resulting in more centralization. This point has merits and thus I think any solution to the issue needs to keep the impact on validation cost as low as possible.

Meni's growth function still depends on predetermined constants (i.e. magic numbers), but it is largely preferable to static limits. Meni wants to use it to define revenue penalties to miners for blocks larger than T.

I would scrap the fee pool and use the function the opposite way: the total sum of fees paid in the block defines the maximum block size. The seeding constant for this function could itself be inversely tied to the block difficulty target, which is an acceptable measure of coin value: i.e. the stronger the network, the higher the BTC value, and reciprocally the lower the nominal fee to block size balance point.

With an exponential function in the fashion of Meni's own, we can keep a healthy cap on the cost of larger blocks, which impedes spam by design, while allowing miners to include transactions with larger fees without outright kicking lower fee transactions out of their blocks.

Like Meni's proposal, this isn't perfect, but I believe it comes with the advantage of lower implementation cost and less disturbance to the current model, while keeping the mechanics behind block size elasticity straightforward. Whichever the community favors, I would personally support a solution that ties the block size limit to fees over any of the current proposals.
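
One way to make this inversion concrete - a sketch that simply inverts Meni's f, treating a block's total fees as a penalty budget p that buys size above T (one possible reading, not goatpig's exact specification):

Code:
import math

def max_size_for_fees(p, T):
    # Largest x with f(x) <= p, where f(x) = max(x-T,0)^2 / (T*(2T-x)).
    # Closed-form root of the quadratic; approaches 2T as p grows.
    return T * (2 - p) / 2 + T * math.sqrt(p * (p + 4)) / 2

T = 3_000_000
for p in (0.0, 0.1, 1.0, 10.0):
    print(f"fee budget {p:>4}: max block size {max_size_for_fees(p, T) / 1e6:.2f} MB")
# 0.0 -> 3.00 MB, 0.1 -> 3.81 MB, 1.0 -> 4.85 MB, 10.0 -> 5.75 MB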



gmaxwell
Moderator, Legendary
Activity: 4158 | Merit: 8382
June 03, 2015, 05:55:35 AM
#17

I expect a fee pool alone will increase block verification cost.
It would not, in any meaningful way.
edmundedgar
Sr. Member
Activity: 352 | Merit: 250
https://www.realitykeys.com
June 03, 2015, 06:56:07 AM (last edit: June 03, 2015, 07:13:02 AM)
#18

If we're making drastic changes, it may be worthwhile to target the balance of unspent outputs instead of (or as well as) block size, since UTXO set size seems to be a more likely bottleneck than the network, unfortunate American core devs still connected to the internet by 1870s-style copper wires notwithstanding.
Zangelbert Bingledack
Legendary
Activity: 1036 | Merit: 1000
June 03, 2015, 07:36:29 AM
#19

But short-term, if I have a transaction I'm set on sending right now (e.g. a restaurant tab), I'll be willing to pay very high fees for it if I must. So fees are not effective in controlling the deluge of transactions.

This part seems a bit off. At any given time, some people will have an urgent need for settlement, but many/most won't. So we get smooth scaling for quite a while from a purely economic perspective. Now once we reach a point in adoption where there are so many urgent transactions that they fill the blocks on their own, that kicks up the frustration to unacceptable levels, but even then some will be willing to outbid others and again it's a gradual increase in pain, not a deluge.

Insofar as the prices miners charge do rise properly and users have an easy way of getting their transactions in at some price, fees will limit transactions even in the short term. All you're really describing here is reaching a point that is pretty far along that smooth pain curve, after all the less important transactions have been priced out of the market.

Overall this is a great idea, though!
Meni Rosenfeld (OP)
Donator, Legendary
Activity: 2058 | Merit: 1054
June 03, 2015, 07:39:38 AM
#20

Unlike Meni's suggestion, the reduction in block subsidy is not given to a pool but rather deferred to future miners because the subsidy algorithm is based around the number of coins.
Well, the pool does have the ultimate effect of deferring rewards to future miners.

See section 6.2.3 of the CryptoNote whitepaper: https://cryptonote.org/whitepaper.pdf
Interesting. I argue that regardless of other issues, the particular function they suggest is not optimal, and the cap it creates is too soft.

There is a major shortcoming I can see in the rollover fee pool suggestion: miners are incentivized to accept fees out of band so they can obtain all the highest fees instantly, thus defeating the entire purpose of that feature.
This is only (potentially) a problem in my 2012 rollover fee proposal. Here, tx fees don't go into the pool - fees are paid to the miner instantly. It is only the miner's block size penalty that goes into the pool, and he must pay it if he wants to include all these transactions.

Of course, if you'd like to post this criticism on that thread, I'll be happy to discuss it.


Just a comment on the following:

Quote
With a hard cap, the queue of transactions can only clear at a specific rate. Below this rate there is no fee tension, and above it there is instability.

I don't think you can say that - that would be like saying, queues are never a problem as long as utilization is < 1 (which of course is required for stability). But long queues do in fact develop when utilization is < 1, due to variability in service / arrival times (re: bitcoin, the dominant source of variability is in inter-block times).

As long as there are queues, fee tension will be present, as mempool transactions are largely prioritised by feerate. Empirically, we are observing periods of fee tension (i.e. busy periods, when pools' max block size is reached) quite often these days.

Otherwise I like this perspective on the block size problem (even though I can't really comment on the proposed solution), in particular the observation that in the short term transaction demand is inelastic. (Upon further thought, however, proponents of increasing block size would say that the inelastic demand problem doesn't really apply if the block size cap is sufficiently higher than the average transaction rate.)
That's true in general; however, for the specific time scales and fluctuations in queue fill rate we have here, I'd say that "no fee tension" may be an exaggeration, but it captures the essence.

Sure, if the block limit is high enough we can always clear transactions... But then there will be little incentive to pay fees or conserve space on the blockchain. The post assumes that we agree that too low is bad, too high is bad, and we don't want to be walking a tightrope in between.


(1) bypass vulnerability (where you pay fees, or the like, out of band to avoid the scheme)
I don't think this is an issue here. Transaction fees are paid instantly in full to miners, so users have no reason to pay fees out of band. Miners are forced by protocol to pay the penalty if they want to include the transactions (and if they don't include the transactions, they're not doing anything they could be paid for). You could talk about miners paying penalty into the pool hoping to only get it back in future blocks, but with a pool that clears slowly enough, that's not a problem.

(2) scale invariance (the scheme should work regardless of Bitcoin's value).
In the space of block sizes, you do need to occasionally update the parameter to match the current situation. I think it's best this way - currently only humans can reliably figure out if the fees are too high or node count is too low. But if a good automatic size adjustment rule can be found, it can be combined with this method.

In the space of Bitcoin value, transaction fees and hardware cost, the proposed function is multi-scaled and thus cares little about all of that. For any combination of the above, the function will find an equilibrium block size for which the penalty structure makes sense.

I think this kind of proposal is a massive improvement on proposals without this kind of control; and while it does not address all of the issues around larger blocks - e.g. they do not incentive-align miners and non-mining users of the system - it seems likely that proposals in this class would greatly improve some of them, and as such they are worth a lot more consideration.
I'm still hoping Red Balloons, or something like it, will solve that problem.

Thanks for posting, Meni-- I'm looking forward to thinking more about what you've written.
Thanks, and likewise - I've only recently heard about your effort-penalty suggestion, I hope to be able to examine it and do a comparison.


I expect a fee pool alone will increase block verification cost.
It would not, in any meaningful way.
Right, the proposal here only adds a constant amount of calculations per block, taking a few microseconds. It doesn't even grow with the number of transactions.

I would scrap the fee pool and use the function the opposite way: the total sum of fees paid in the block defines the maximum block size. The seeding constant for this function could itself be inversely tied to the block difficulty target, which is an acceptable measure of coin value: i.e. the stronger the network, the higher the BTC value, and reciprocally the lower the nominal fee to block size balance point.

With an exponential function in the fashion of Meni's own, we can keep a healthy cap on the cost of larger blocks, which impedes spam by design, while allowing miners to include transactions with larger fees without outright kicking lower fee transactions out of their blocks.

Like Meni's proposal, this isn't perfect, but I believe it comes with the advantage of lower implementation cost and less disturbance to the current model, while keeping the mechanics behind block size elasticity straightforward. Whichever the community favors, I would personally support a solution that ties the block size limit to fees over any of the current proposals.
This is similar to the idea of eschewing a block limit and simply hardcoding a required fee per tx size. The main issue I have with this kind of idea is that it doesn't give the market enough opportunity to make smart decisions, such as preferring to send txs when traffic is low, or upgrading hardware to match demand for txs.

Another issue with this is miner spam - a miner can create huge blocks with his own fee-paying txs, which he can easily do since he collects the fees.

Using difficulty to determine the size/fee ratio is interesting. I wanted to say you have the problem that difficulty is affected not only by BTC rate, but also by hardware technology. But then I realized that the marginal resource costs of transactions also scales down with hardware. The two effects partially cancel out, so we can have a wide operational range without much need for tweaking parameters.
