This is similar to the idea of eschewing a block limit and simply hardcoding a required fee per tx size.
I assume you are referring to the debate on "hard block size limit + organic fees" versus "no block size limit + hard fees", the third option (no block limit and organic fees) being a non-solution. Obviously an "organic block size limit + organic fees" would be the ideal solution, but I think the issue is non-trivial, and I have no proposal to achieve it. I don't even know if it's philosophically possible.
In this light, a "pseudo-elastic block size limit + organic fees" is the best and most accessible solution at the moment, and I will argue that my proposal cannot be reduced to "no block size limit + hard fees", and that it actually falls under the same category as yours. Indeed, like your proposal, mine relies on an exponential function to establish the fee-to-block-size ratio. Essentially the T-2T range remains, where any block below T needs no fees to be valid, and the required total fee grows exponentially from T to 2T.
In this regard, my proposal uses the same soft-hard cap range mechanics as yours. As I said, ideally I'd prefer a fully scalable solution (without any artificial hard cap), but for now this kind of elastic soft-hard cap mechanic is better than what we have, and simple enough to review and implement. The fact that my solution has caps implies there will be competition for fees, as long as the seeding constants of the capping function are tuned correctly. On this front it behaves neither worse nor better than your idea.
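As a rough illustration, the capping function I have in mind could look like the sketch below. All constants here are placeholders of my own choosing, not tuned values, and the exact shape of the exponential is up for debate:

```python
import math

T = 1_000_000        # soft cap in bytes (illustrative placeholder)
SCALE = 0.001        # fee scale constant (illustrative placeholder)

def required_fee(block_size: int) -> float:
    """Minimum total fee for a block of the given size to be valid.
    Blocks at or below T need no fees; the requirement grows
    exponentially over the T..2T range; 2T is a hard cap."""
    if block_size <= T:
        return 0.0
    if block_size > 2 * T:
        raise ValueError("block exceeds the 2T hard cap")
    excess = (block_size - T) / T   # 0..1 across the soft-hard range
    return SCALE * (math.exp(excess * math.log(100.0)) - 1.0)
```

Miners may then fill a block past T whenever the fees carried by the extra transactions stay above this curve.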
Since I believe fees should be pegged to difficulty, fees wouldn't be hard-coded either. Rather, the baseline would progress inversely to network hashrate, while leaving room for competition over scarce block room.
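A minimal sketch of what I mean by pegging, with a hypothetical calibration point (both reference constants below are made up for the example):

```python
REFERENCE_DIFFICULTY = 50e9   # hypothetical calibration difficulty
REFERENCE_FEE = 0.0001        # baseline fee per kB at that difficulty (made up)

def baseline_fee(difficulty: float) -> float:
    """Baseline fee per kB, shrinking as network difficulty grows."""
    return REFERENCE_FEE * REFERENCE_DIFFICULTY / difficulty
```

Competition over scarce block room then plays out on top of this moving baseline, rather than on top of a fixed constant.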
The main issue I have with this kind of idea is that it doesn't give the market enough opportunity to make smart decisions
I will again argue to the contrary. As a matter of fact, I believe your solution offers no room for such adaptive market behavior, while mine does. To take both your examples in order:
such as preferring to send txs when traffic is low
With T being the soft cap and 2T the hard cap, your solution proposes to penalize all miners creating blocks larger than T. This pegs blockchain space to fees, the same as my proposal: the more txs waiting in the mempool, the higher the fee you need to get included in the next block. Inversely, the fewer items in the mempool, the more likely you are to have your low/zero-fee tx mined right away, which creates an incentive to emit transactions during low-traffic periods.
While your approach supports emitting txs during low traffic by imposing extra costs on high traffic, mine simply does without the extra cost. That doesn't mean emitting transactions during low traffic is NOT cheaper. As a matter of fact, it is, but the difference between low and high traffic isn't as significant.
The true difference between my solution and yours is that while mine allows miners to exceed the T soft cap as long as there are enough fees to go by, yours penalizes all blocks outgrowing T, which effectively locks all blocks at size T.
Indeed, a selfish miner would have no incentive to build blocks beyond T, and would also benefit from not including 0/low-fee transactions. By leaving all 0/low-fee transactions in the mempool and only creating small blocks, where the block size is defined as min(total size of high-fee txs, T), a large selfish miner can deplete the mempool of all high-fee transactions, leaving the rest of the network to pick up the slack.
Other miners are left with the choice to fill blocks up to T, but no further. Due to the selfish miners' actions (pumping out high-fee, small-size blocks), there are not enough fees to be redeemed from the mempool, and the penalties for including transactions past T would come out of the good-willed miners' coinbase rewards. You may have "benevolent" miners who would rather empty the mempool than follow game theory, but they only stand to earn less money than everybody else. On the other hand, selfish miners still qualify for a cut of the fee pool, to which they make a point of not contributing, effectively siphoning revenue from good-willed and "benevolent" miners.
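The strategy described above can be sketched as follows. Transaction selection is simplified to a greedy pass, and `fee_threshold` is a hypothetical cutoff for what the selfish miner considers worth mining:

```python
def selfish_block(mempool, T, fee_threshold):
    """Pick only high-fee txs, never exceed the soft cap T,
    and leave all low/zero-fee txs in the mempool.
    mempool: list of (size, fee) tuples."""
    picked, used = [], 0
    # greedy pass over txs sorted by fee density, best first
    for size, fee in sorted(mempool, key=lambda tx: tx[1] / tx[0], reverse=True):
        if fee < fee_threshold:
            continue          # low/zero-fee txs are left for others
        if used + size > T:
            continue          # never grow past T: no penalties, ever
        picked.append((size, fee))
        used += size
    return picked
```

Run repeatedly by a large miner, this drains exactly the transactions that would have funded other miners' penalty budgets.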
The true effect on the network is that no one will bother creating blocks larger than T, and we will still have a de facto hard-coded block size cap.
My proposal allows miners to take in extra fees as long as they remain below the curve defined by the capping function. Selfish miners no longer have an opportunity to vampirize good-willed miners, so while the behavior itself (high-fee, low-size blocks) is not deterred, it is at least not encouraged.
Please keep in mind that this analysis of your system relies on my current understanding of it. I'm still not 100% clear on how the "fee pool" functions. I'm assuming it is either funded only by penalties, with all fees paid to miners directly, or that all fees are pooled and distributed equally per block. The latter assumption seems pointless since it can be easily bypassed, so I'm using the former as the basis for my critique of your proposal.
or to upgrade hardware to match demand for txs
Again, I will argue that my proposal supports hardware improvement while yours doesn't. In your case, T will act as a de facto hard cap on block size, so there is no incentive for miners to be able to handle more traffic and create blocks beyond that value. As long as miners won't output blocks larger than T, there is no reason for the rest of the network to upgrade either.
With my solution, as long as there are enough fees to go by, up to the 2T block size (or whatever the hard cap ends up being), miners are motivated to include fee-paying transactions beyond the soft cap, which justifies hardware improvement to handle the extra load, with the consequences this has on the rest of the network.
Another issue with this is miner spam - A miner can create huge blocks with his own fee-paying txs, which he can easily do since he collects the fees.
Both in the current state and in the solution you propose, malevolent miners can disturb the network by mining either empty blocks or blocks full of "useless transactions", sending their own coins back to themselves. In my solution, malevolent miners also have the added opportunity to mine large blocks by paying fees to themselves. Let's analyze what counters exist to these disturbance attacks.
1) In case of empty blocks, all good-willed and selfish miners should simply ignore them. It increases their revenue and impedes the attack.
2) In case of "useless transactions", either the transactions were never made public, in which case the blocks are easy to identify (full of txs that never hit the mempool) and can be ignored for the same reason as above, or the transactions have been published publicly, and the attacker is making a point of mining only these. At this point you can't really distinguish this miner from good-willed ones under either solution.
3) With my solution, malevolent miners can pay themselves fees and enlarge blocks. However, that only holds true if they are keeping the transactions private. In this case other miners can identify such blocks as malevolent and ignore them entirely (2T-wide blocks full of large-fee txs that never hit the mempool). If the attacking miner makes the transactions public, he can't expect to rake in all the fees, so the response to this attack is no different from 1 & 2.
4) With your solution, what is to stop a malevolent miner from maxing out blocks with 0-fee transactions? Sure, he would give up the coinbase reward in the form of penalties, but now these are available for the taking by other miners. Good-willed miners may ignore such blocks, but selfish miners probably won't.
With my solution, there is an incentive for both good-willed and selfish miners to ignore blocks constructed outright to bloat the network. With yours, there is a built-in cost to such disturbance, so selfish miners can choose to ignore the disturbance for the benefit of the reward. In my system, the only viable economic option is to ignore the blocks.
Using difficulty to determine the size/fee ratio is interesting. I wanted to say you have the problem that difficulty is affected not only by BTC rate, but also by hardware technology. But then I realized that the marginal resource costs of transactions also scales down with hardware. The two effects partially cancel out, so we can have a wide operational range without much need for tweaking parameters.
Two years ago I would have opposed pegging fees and block size to difficulty, because ASIC technology was catching up to current manufacturing processes and as such was growing much faster than all the other hardware supporting the network. That would have required too many manual adjustments of the pegging function to be acceptable. As time passes, that criticism loses ground, and now is not a bad time to consider it.
I would be interested to see if you have room to factor difficulty in your current function.
I expect a fee pool alone will increase block verification cost.
It would not, in any meaningful way.
I try not to be so quick to draw such conclusions. I'm not savvy with the Core codebase, but my experience with blockchain analysis has taught me that the less complicated a design is, the more room for optimization it has. You can't argue that adding a verification mechanic will simplify code or reduce verification cost, although the magnitude of the impact is obviously relevant. I'm not in a position to evaluate that, but I would rather remain cautious.
The point still remains: you don't need a fee pool to establish a relationship between fees, block size, and possibly difficulty.
Don't get me wrong, I believe the idea has merits. What I don't believe is that these merits apply directly to the issue at hand. It can fix other issues, but other issues aren't threatening to split the network. I also don't think this idea is mature enough.
As Gavin says, without an implementation and some tests it is hard to see how the system will perform. If we are going to “theorycraft”, I will attempt to keep it as lean as possible.
It also requires modifying, or at least amending consensus rules, something the majority of the Core team has been trying to keep to a minimum. I believe there is wisdom in that position.
Obviously increasing the block size requires a hard fork, but the fee pool part could be accomplished purely with a soft fork.
The coinbase transaction must pay <size penalty> BTC to OP_TRUE as its first output. Even if there is no size penalty, the output needs to exist but pay zero.
The second transaction must be the fee pool transaction.
The fee pool transaction must have two inputs; the coinbase OP_TRUE output from 100 blocks previously and the OP_TRUE output from the fee pool transaction in the previous block.
The transaction must have a single output that is 99% (or some other value) of the sum of the inputs paid to OP_TRUE.
By ignoring fees paid in the block, it protects against miners using alternative channels for fees.
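Taking the quoted rules at face value, the pool's evolution per block reduces to something like the following toy model (not consensus code; amounts in BTC, and the 1% miner cut is my reading of the "99% of the sum of the inputs" rule):

```python
def step_pool(pool, matured_penalty, keep_ratio=0.99):
    """One block's fee-pool update: the 100-block-old coinbase penalty
    matures into the pool, the pool tx pays keep_ratio of the total
    forward to OP_TRUE, and the remainder is the miner's implicit fee."""
    total = pool + matured_penalty
    new_pool = keep_ratio * total     # carried into the next block's pool tx
    miner_cut = total - new_pool      # claimed by this block's miner
    return new_pool, miner_cut
```

Under this reading, a one-time penalty of P is paid out geometrically, roughly 1% of the remaining pool per block.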
It seems your implementation hands the fee pool to the very next block, whichever miner finds it. That partly defeats the pool's purpose. The implementation becomes more complicated when you have to distribute pool rewards gradually to "good" miners only, while continuing to rake in penalties from larger blocks.
Otherwise, a large block paying high penalties could be followed right away by another large block, which would offset its penalties with the fee pool reward. The idea here is to penalize miners going over the soft cap and to reward those staying under it. If you let the miners going over the cap take a cut of the rewards, they can offset their penalties and never care about the whole system.
As a result you need a rolling fee pool, not a pool with a one-block lifetime, and that complicates the implementation, because you need to keep track of the pool size across a range of blocks.
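Concretely, the rolling pool I have in mind would behave along these lines (a sketch under my own assumptions: penalties always feed the pool, and only blocks at or under the soft cap T draw the per-block payout; `payout_ratio` is an arbitrary placeholder):

```python
def process_block(pool, block_size, penalty, T, payout_ratio=0.01):
    """One block of a rolling fee pool. Oversized blocks pay in but
    never draw out, so they cannot offset their own penalties."""
    pool += penalty                 # penalties always feed the pool
    reward = 0.0
    if block_size <= T:             # only compliant blocks get a cut
        reward = payout_ratio * pool
        pool -= reward
    return pool, reward
```

The pool balance has to be tracked as consensus state across blocks, which is exactly the added implementation cost I am pointing at.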