acoindr
Legendary
Offline
Activity: 1050
Merit: 1002
|
|
March 30, 2013, 06:45:39 PM |
|
Actually, forget my earlier "heartbeat" block size. I have a better idea. ... All that needs to happen is allow the 1MB to be replaced by a capping algorithm which just keeps pace ahead of demand. ...
I think this is right. It's effectively not a cap at all, just like the U.S. debt ceiling. The problem with the debt ceiling is that people, at least previously, were not paying attention, but there is a check in place - raising the ceiling requires a vote. Increasing the block size could happen the same way, but instead of congressmen ignorant of economics and/or apathetic about votes, miners have a financial incentive to vote responsibly. I think a brilliant idea of Gavin's is this:

A hard fork won't happen unless the vast super-majority of miners support it. E.g. from my "how to handle upgrades" gist https://gist.github.com/gavinandresen/2355445

Example: increasing MAX_BLOCK_SIZE (a 'hard' blockchain split change)
Increasing the maximum block size beyond the current 1MB per block (perhaps changing it to a floating limit based on a multiple of the median size of the last few hundred blocks) is a likely future change to accommodate more transactions per block. A new maximum block size rule might be rolled out by:
New software creates blocks with a new block.version.
Allow greater-than-MAX_BLOCK_SIZE blocks if their version is the new block.version or greater and 100% of the last 1000 blocks are new blocks (51% of the last 100 blocks if on testnet).

100% of the last 1000 blocks is a straw-man; the actual criteria would probably be different (maybe something like block.timestamp is after 1-Jan-2015 and 99% of the last 2000 blocks are new-version), since this change means the first valid greater-than-MAX_BLOCK_SIZE block immediately kicks anybody running old software off the main block chain.
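For concreteness, a minimal sketch (in Python) of the kind of activation check the quoted gist describes; the block objects, the NEW_VERSION value and the 100%-of-1000 threshold are straw-man assumptions taken from the quote, not an actual proposal.
Code:
# Straw-man activation check: only accept an oversized block when its version is new
# enough and essentially all recent blocks came from upgraded software.
MAX_BLOCK_SIZE = 1_000_000   # current 1MB limit, in bytes
NEW_VERSION = 3              # hypothetical new block.version
WINDOW = 1000                # look back over the last 1000 blocks
THRESHOLD = 1.0              # "100% of the last 1000 blocks" straw-man

def oversized_block_allowed(block, recent_blocks):
    """recent_blocks: the last WINDOW blocks; block objects expose .version and .size."""
    if block.version < NEW_VERSION:
        return False
    sample = recent_blocks[-WINDOW:]
    upgraded = sum(1 for b in sample if b.version >= NEW_VERSION)
    return upgraded >= THRESHOLD * len(sample)

def block_size_valid(block, recent_blocks):
    if block.size <= MAX_BLOCK_SIZE:
        return True
    return oversized_block_allowed(block, recent_blocks)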
Checking for version numbers is, IMO, how almost all network changes should be handled - if a certain percentage isn't compliant, no change happens. Doing this would have prevented the recent accidental hard fork. It's what I call an anti-fork ideology: either we all move forward the same way or we don't change at all. That's important given the economic aspects of Bitcoin.

So we use this model also to meter block size. One of the points in the debate is that future technological advances can be an accommodating factor for decentralization, but that's unfortunately unknown. No problem - let the block size increase by polling to see what miners can handle. Think of a train many, many boxcars long. Maybe the biggest, most impressive boxcars are up front near the engine powering along, but way back are small-capacity cars barely staying connected. To ensure no cars are lost, even the smallest car has powerful brakes that can limit the speed of the entire train.

Gavin's earlier thoughts are close: .. (perhaps changing it to a floating limit based on a multiple of the median size of the last few hundred blocks) ...

The problem here is that within a network of increasingly centralized mining capacity, the median size of almost any number of blocks will always be too high to account for small-scale miners, allowing larger limits by default. Instead we make it more like that train. The network checks for the lowest block limit (maybe in 100MB increments) announced by at least, say, 10% of all blocks every thousand blocks (or whatever). It can't be the absolute lowest value found at any given time, since some people will simply not change out of neglect. However, I think 10% or so sends a clear signal that people are not ready to go higher. At the same time, all miners have a financial incentive to allow higher capacity as soon as possible due to the fees they can collect.

This method would keep the block size ideal for decentralization as long as there was good decentralization of miners. So it's like the 51% attack rationale - centralized miners could only become monopolies by controlling nearly 100% of all blocks found.
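A rough sketch (in Python) of the "slowest boxcar" rule above, under one reading of it: each block announces the largest size its miner is comfortable with, and the cap becomes the lowest announced bucket that at least 10% of recent blocks sit at or below. The 100MB increment, 10% threshold and 1000-block window are the placeholders from the post.
Code:
from collections import Counter

INCREMENT = 100_000_000   # 100MB buckets (placeholder from the post)
THRESHOLD = 0.10          # a limit is binding once >=10% of blocks announce it or lower
WINDOW = 1000             # poll over the last 1000 blocks (placeholder)

def polled_block_size_limit(announced_limits):
    """announced_limits: size limits (in bytes) announced in the last WINDOW blocks."""
    sample = announced_limits[-WINDOW:]
    buckets = sorted(Counter((v // INCREMENT) * INCREMENT for v in sample).items())
    seen = 0
    for bucket, count in buckets:
        seen += count
        if seen / len(sample) >= THRESHOLD:
            return bucket          # lowest limit that a ~10% minority is asking for
    return buckets[-1][0]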
|
|
|
|
solex
Legendary
Offline
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
|
|
March 30, 2013, 08:39:03 PM |
|
Good points acoindr, but I am not clear on the last paragraphs. (I think you mean 100KB too).
Demand is indeed predictable based upon the last few thousand blocks. Because Bitcoin is a global currency, transaction volumes, over a time period of a week or two, should ebb and flow steadily like the sea level.
|
|
|
|
acoindr
Legendary
Offline
Activity: 1050
Merit: 1002
|
|
March 30, 2013, 09:33:53 PM |
|
Good points acoindr, but I am not clear on the last paragraphs. (I think you mean 100KB too).
I didn't think through the math, so the 100MB increment and one-thousand-block sampling size are only placeholders. I don't know how often would be best to check for a possible block size increase. I don't think it should be a constant thing. Instead it could be done once per year, or maybe once every 3 months. I think knowing the fixed limit for a significant time period is helpful for planning. So that would mean maybe checking the last one thousand blocks (about 1 week's worth) every 13,400 blocks, which is about every 3 months. For the size of increase, I don't know... I'm thinking 10-100MB (just a guesstimate), which may accommodate even explosive global adoption rates. Remember, this carries into a future of technological capacity we don't yet know.

Demand is indeed predictable based upon the last few thousand blocks. Because Bitcoin is a global currency, transaction volumes, over a time period of a week or two, should ebb and flow steadily like the sea level.
This actually doesn't care about demand. It only cares about the network capacity comfortable for even miners of lower resources to continue participating. It guards against mining operations evolving into monopolies and oligopolies resulting from an unlimited block size (he who has the highest bandwidth/resources wins), without the automatic crippling of widespread usage that a hard limit would ensure. There may be times when the available network capacity doesn't keep pace with total demand, but that simply puts market pressure on increasing network capacity and/or viable alternate channels. At least the entire project isn't wrecked because neither the capped nor the uncapped implementation can gain consensus.
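As a back-of-envelope check on the cadence numbers above (assuming the usual ~144 blocks per day):
Code:
blocks_per_day = 24 * 6          # one block every 10 minutes on average
print(1000 / blocks_per_day)     # ~6.9 days, i.e. roughly a week's worth of blocks
print(13400 / blocks_per_day)    # ~93 days, i.e. roughly every 3 months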
|
|
|
|
Timo Y
Legendary
Offline
Activity: 938
Merit: 1001
bitcoin - the aerogel of money
|
|
March 31, 2013, 08:53:09 AM |
|
There would be tons of freeloaders.
What are freeloaders doing? They are betting that the hashrate will be above their desired value, even if they don't pledge. So why not bring them into the system and let them bet for profit?
If you pledge for a certain hashrate, and the hashrate doesn't materialize, you get back your pledge + x percent profit. The profit comes from the people who pledged for the hashrate that did materialize. A fraction of their pledge goes to the miners and another fraction is used for betting.
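A minimal sketch (in Python) of how such a settlement could work, assuming a single pool of pledges against hashrate targets; the split fractions and all names here are illustrative assumptions, not part of the proposal.
Code:
MINER_FRACTION = 0.8   # share of a "materialized" pledge paid to miners (assumption)
BONUS_FRACTION = 0.2   # share used to reward the pledgers whose target did not materialize

def settle(pledges, realized_hashrate):
    """pledges: list of (pledger, target_hashrate, amount)."""
    winners = [p for p in pledges if p[1] <= realized_hashrate]   # target materialized
    losers  = [p for p in pledges if p[1] > realized_hashrate]    # target did not

    to_miners  = sum(a for _, _, a in winners) * MINER_FRACTION
    bonus_pool = sum(a for _, _, a in winners) * BONUS_FRACTION
    lost_stake = sum(a for _, _, a in losers) or 1

    payouts = {}
    for pledger, _, amount in losers:
        # refund plus a pro-rata share of the bonus pool ("pledge + x percent profit")
        payouts[pledger] = amount + bonus_pool * (amount / lost_stake)
    return to_miners, payouts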
|
|
|
|
Mike Hearn (OP)
Legendary
Offline
Activity: 1526
Merit: 1134
|
|
March 31, 2013, 12:21:44 PM |
|
That sounds a bit like a dominant assurance contract with a twist. My question is why it's better/worth the extra complexity.
|
|
|
|
marcus_of_augustus
Legendary
Offline
Activity: 3920
Merit: 2349
Eadem mutata resurgo
|
|
March 31, 2013, 08:48:26 PM |
|
Increasing the maximum block size beyond the current 1MB per block (perhaps changing it to a floating limit based on a multiple of the median size of the last few hundred blocks) is a likely future change to accommodate more transactions per block. A new maximum block size rule might be rolled out by:

Did not know Gavin (or anyone) was considering a floating MAX_BLOCK_SIZE; when I suggested it a while back on IRC it went down like a lead balloon. Anyway, IMO it needs to float and be based on some sensible calculation of previous block sizes over a 'long enough' period. Also, I think there needs to be a way to float the min tx fee; this is the other piece that is hard-coded and adjusted by human 'seems about right' to prevent spam tx. Obviously, as the value of BTC goes higher, what is and isn't considered a spam tx changes. The two variables max_block_size and min_tx_fee are coupled, though. Maybe a simple LQR controller for a 2-variable system could be sufficient for closing the loop for stability here?
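To make the closed-loop idea concrete, here is a toy sketch (in Python); a real LQR design would derive the gain matrix from a cost function via a Riccati equation, whereas the gain, state variables and targets below are purely illustrative assumptions.
Code:
import numpy as np

# State: [max_block_size in MB, min_tx_fee in BTC]. A real design would include
# off-diagonal terms in K to capture the coupling between the two variables.
K = np.array([[0.3, 0.0],
              [0.0, 0.3]])

def step(state, target):
    """One retarget interval: move the state a fraction of the way toward the target."""
    return state - K @ (state - target)

state  = np.array([1.0, 0.0005])   # current hard-coded values
target = np.array([2.0, 0.0002])   # hypothetically derived from recent block fill / median fee

for _ in range(5):
    state = step(state, target)
    print(state)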
|
|
|
|
bitlancr
|
|
March 31, 2013, 11:22:01 PM |
|
Idea: why not increase the hash difficulty for larger blocks? So if a miner wants to pass the 1MB threshold, require an extra zero bit on the hash. There's a big disincentive to include lots of tx dust.
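A tiny sketch (in Python) of what that validity rule might look like; the exact schedule below (one extra leading zero bit, i.e. a halved target, per extra MB) is an illustrative assumption.
Code:
BASE_SIZE = 1_000_000  # 1MB threshold, in bytes

def required_target(base_target, block_size):
    """Return the hash target (smaller = harder) a block of this size must satisfy."""
    if block_size <= BASE_SIZE:
        return base_target
    extra_mb = (block_size - 1) // BASE_SIZE   # ceiling of the extra megabytes
    return base_target >> extra_mb             # one extra leading zero bit per extra MB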
|
|
|
|
Frozenlock
|
|
April 01, 2013, 03:20:34 AM |
|
I very much like the idea of adjusting the difficulty with the block size.
This gives an incentive to keep the blocks small, unless there are enough fees to counteract the additional difficulty. E.g. for 2x the difficulty, I can make 5x as much profit in fees. It can also deal more easily with a fast surge, such as the week before Christmas.
This keeps pressure on keeping the size small, unlike simply adjusting the limit after X blocks.
From there, the minimum fee required for a transaction to be relayed by the network could be a fraction of the smallest fee in the newest block. So if that fee was 0.00001 BTC and you try to send with a fee of 0.00000001, you are most likely not to be included in the next block and you are not relayed.
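A minimal sketch (in Python) of that relay rule; the one-half fraction is an illustrative assumption.
Code:
RELAY_FRACTION = 0.5  # fraction of the newest block's smallest fee (assumption)

def min_relay_fee(newest_block_fees):
    """newest_block_fees: fees (in BTC) paid by the transactions in the newest block."""
    return min(newest_block_fees) * RELAY_FRACTION

def should_relay(tx_fee, newest_block_fees):
    return tx_fee >= min_relay_fee(newest_block_fees)

# Example from the post: smallest fee in the newest block is 0.00001 BTC,
# so a transaction paying 0.00000001 BTC would not be relayed.
print(should_relay(0.00000001, [0.00001, 0.0002]))  # False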
|
|
|
|
ShadowOfHarbringer
Legendary
Offline
Activity: 1470
Merit: 1006
Bringing Legendary Har® to you since 1952
|
|
April 01, 2013, 03:26:38 AM |
|
I very much like the idea of adjusting the difficulty with the block size.
This gives an incentive to keep the blocks small, unless there are enough fees to counteract the additional difficulty. E.g. for 2x the difficulty, I can make 5x as much profit in fees. It can also deal more easily with a fast surge, such as the week before Christmas.
This keeps pressure on keeping the size small, unlike simply adjusting the limit after X blocks.
From there, the minimum fee required for a transaction to be relayed by the network could be a fraction of the smallest fee in the newest block. So if that fee was 0.00001 BTC and you try to send with a fee of 0.00000001, you are most likely not to be included in the next block and you are not relayed.
This idea seems nice; however, I am afraid there will be some hidden consequences. Can we have a comment on that from a developer?
|
|
|
|
Sukrim
Legendary
Offline
Activity: 2618
Merit: 1007
|
|
April 01, 2013, 07:48:58 AM |
|
That sounds a bit like a dominant assurance contract with a twist. My question is why it's better/worth the extra complexity.
It would encourage betting/bidding as opposed to not betting/bidding. Either you get some small profit or you really paid for the hash rate you were looking for. However, I'm not too sure it would work out - who would even bet on any realistic difficulty? All you need to do is bet higher than 4 times the current diff (impossible to reach) to always get a profit. If everybody does that though, nobody gets a profit at all. Probably it would be smart to have an (exponentially? quadratically or cubically diminishing?) larger reward for people who bet close to the actually reached difficulty, or simply do parimutuel betting...

All in all it seems to me as if you want to do some kind of "penny auction" of mining fees or network costs. Since this is about money - the reward is good for anyone contributing, and people donating out of pure charity is unlikely/unsustainable - there need to be rewards or potential rewards for anyone taking part, not only miners, and as few freeloaders as possible. Also, the system needs to be as automatable as possible: betting might be fun and nice for a few times, but if I had to look at hash rate diagrams and estimates every week to make sure I either get a small profit or lose the whole wager to secure the network, I guess I'd quickly choose to cash out and leave this behind.

On the other hand, this could be done by an external service as well for the beginning - do parimutuel betting with, for example, a half-cut-off bell-shaped curve for rewards on the difficulty after the next difficulty switch; pay high bettors 10%(?) of the low bets and 90% as pure fee transactions towards each of the 2016 blocks created. Should blocks be built too fast to include some transactions, just divide the remaining reward by the new count and the remaining blocks then get paid a little more. It might not be as lucrative as SD, but (if done charitably --> only covering hosting+maintenance costs) it could be used by various services as some kind of CSR measure - they either support the miners or make a small profit that they can automatically reinvest for the next round, or just directly donate to a pool or service they like and still donate to miners...
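A very rough sketch (in Python) of the external betting service floated at the end; the "bell shaped curve" weighting is simplified here to "closest guess wins", and the 10%/90% split plus everything else is an illustrative assumption.
Code:
def settle_difficulty_bets(bets, realized_difficulty, winner_share=0.10, miner_share=0.90):
    """bets: list of (bettor, guessed_difficulty, stake). Settled after the retarget."""
    best = min(abs(g - realized_difficulty) for _, g, _ in bets)
    winners = [b for b in bets if abs(b[1] - realized_difficulty) == best]
    losers  = [b for b in bets if abs(b[1] - realized_difficulty) != best]

    losing_pool = sum(s for _, _, s in losers)
    winner_pool = losing_pool * winner_share          # paid to the closest guessers
    miner_pool  = losing_pool * miner_share           # paid out as fees to miners

    winner_stake = sum(s for _, _, s in winners) or 1
    payouts = {b: s + winner_pool * (s / winner_stake) for b, _, s in winners}
    fee_per_block = miner_pool / 2016                 # spread over the next 2016 blocks
    return payouts, fee_per_block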
|
|
|
|
jgarzik
Legendary
Offline
Activity: 1596
Merit: 1100
|
|
April 01, 2013, 03:36:50 PM |
|
In general it sounds like an unworkable scheme. There is definitely no consensus on the block size issue at all.
|
Jeff Garzik, Bloq CEO, former bitcoin core dev team; opinions are my own. Visit bloq.com / metronome.io Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
|
|
|
Mike Hearn (OP)
Legendary
Offline
Activity: 1526
Merit: 1134
|
|
April 01, 2013, 03:41:48 PM |
|
What sounds unworkable? The last post? Or the whole thread?
|
|
|
|
jgarzik
Legendary
Offline
Activity: 1596
Merit: 1100
|
|
April 01, 2013, 03:54:39 PM |
|
Infinite block sizes.
|
Jeff Garzik, Bloq CEO, former bitcoin core dev team; opinions are my own. Visit bloq.com / metronome.io Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
|
|
|
Mike Hearn (OP)
Legendary
Offline
Activity: 1526
Merit: 1134
|
|
April 01, 2013, 03:59:39 PM |
|
Yes, but what's your reasoning for that? What specific thing about using assurance contracts to fund mining with large (or floating capped) block sizes seems unworkable to you?
|
|
|
|
TierNolan
Legendary
Offline
Activity: 1232
Merit: 1104
|
|
April 01, 2013, 04:38:09 PM |
|
Idea: why not increase the hash difficulty for larger blocks? So if a miner wants to pass the 1MB threshold, require an extra zero bit on the hash. There's a big disincentive to include lots of tx dust.
Hmm, so a block with twice the difficulty can have twice the size? In effect, this allows the block rate to increase faster than once every 10 minutes, by combining multiple headers into a single header. If you have 1MB of transactions worth 10BTC and another 1MB worth 5BTC (since they have lower tx fees), then it isn't worth combining them. A double difficulty block would give you 50% of the odds of winning, but you only win 15BTC if you win.
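Spelling out the expected-value arithmetic behind that (fees only, with an arbitrary per-interval win probability p):
Code:
p = 0.01                          # chance of finding a normal-difficulty block (arbitrary)
ev_small_block  = p * 10          # mine only the 1MB worth 10 BTC in fees
ev_double_block = (p / 2) * 15    # double difficulty, both megabytes, 15 BTC in fees
print(ev_small_block, ev_double_block)   # 0.1 vs 0.075 -> combining is not worth it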
|
1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
|
|
|
jgarzik
Legendary
Offline
Activity: 1596
Merit: 1100
|
|
April 01, 2013, 05:50:47 PM |
|
Idea: why not increase the hash difficulty for larger blocks? So if a miner wants to pass the 1MB threshold, require an extra zero bit on the hash. There's a big disincentive to include lots of tx dust.
Hmm, so a block with twice the difficulty can have twice the size? In effect, this allows the block rate to increase faster than once every 10 minutes, by combining multiple headers into a single header. If you have 1MB of transactions worth 10BTC and another 1MB worth 5BTC (since they have lower tx fees), then it isn't worth combining them. A double difficulty block would give you 50% of the odds of winning, but you only win 15BTC if you win.

That actually effectively illustrates part of the difficulty in creating a solution: our economic reasoning will be clouded for many years by the block subsidy, which will probably dwarf the transaction fees for years to come. Efficiencies which must exist in the self-supporting, fee-only future are unseen at this time.
|
Jeff Garzik, Bloq CEO, former bitcoin core dev team; opinions are my own. Visit bloq.com / metronome.io Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
|
|
|
|
bitlancr
|
|
April 01, 2013, 06:15:48 PM |
|
If you have 1MB of transactions worth 10BTC and another 1MB worth 5BTC (since they have lower tx fees), then it isn't worth combining them. A double difficulty block would give you 50% of the odds of winning, but you only win 15BTC if you win.
OK, so it doesn't work if you pick those particular numbers. How about if doubling the difficulty gave you 4x the space then? Or more?
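Re-running the same back-of-envelope arithmetic for this follow-up, with a purely assumed 4 BTC of additional fees in the extra 2MB of space:
Code:
p = 0.01
ev_small_block = p * 10                   # 1MB, 10 BTC in fees, normal difficulty
ev_big_block   = (p / 2) * (10 + 5 + 4)   # 4MB at double difficulty (extra fees assumed)
print(ev_small_block, ev_big_block)       # 0.1 vs 0.095 -> still depends on the fee tail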
|
|
|
|
acoindr
Legendary
Offline
Activity: 1050
Merit: 1002
|
|
April 01, 2013, 07:47:50 PM |
|
Mike, sorry to derail this thread some, but creating multiple threads on the same general subject seems unhelpful. I've finally had time to consider assurance contracts. The idea is good (e.g. Kickstarter) but IMO it won't work for Bitcoin. The reason stems from something you pointed out and which I mentioned elsewhere: transactions come in different types.

Often in block size discussions we refer to transactions quite generically - we view them only in the sense of data. However, transactions are not created equally. You correctly note some transactions can happily wait days for clearance while others, like micropayments, are not much concerned with double spends. This line of thinking led me to realize the block size issue should be easily solvable. I'll get to that in a second. The reason assurance contracts won't work for Bitcoin is that the participants you imagine might form such contracts won't opt for an inefficient payment channel.

... I think it'd likely work like this - merchants that have noticed that they start seeing double spends when network speed drops below 5 THash/sec would clearly be targeting that amount and no more. ..
If I'm a dentist or coffee shop proprietor, why would I be using the block-chain for transactions? As you note, it's subject to double spends, and at the very least an unpredictable payment confirmation delay which can stretch well over an hour. As I've posted often before, I see Bitcoin transactions evolving to be handled largely off-chain. Native Bitcoin is not ideal for the majority of the world's transactions and never will be. If I'm a legitimate, lawful business I care little about anonymous payments and irreversibility for clients. I'd just as soon use "BitcoinPal" to accept bitcoins instantly. If I'd pledge money for an assurance contract, I'd certainly offer it to an entrepreneur that could offer a more elegant and professional payment solution.

So the block size issue should be easy to solve. We'll never need an infinite block size to ensure Bitcoin can handle the world's transactions. The majority of these will prefer not to route through Bitcoin, so why are we insisting they be able to? Instead, all we need to achieve is usability for those transactions which uniquely value native Bitcoin. Such transactions (like those of Silk Road) should have little problem paying a decent-sized fee for the value the block-chain provides. This solves DoS attacks for trivial fee amounts and other block-chain spam from low-value transactions.

As I describe above, we should allow the cap to be dynamically set by polling miners every so often to see when they are comfortable - even those of lower resources - with a higher block size limit. This prevents formation of mining monopolies while allowing Bitcoin to scale with technological progress.
|
|
|
|
marcus_of_augustus
Legendary
Offline
Activity: 3920
Merit: 2349
Eadem mutata resurgo
|
|
April 01, 2013, 11:04:47 PM |
|
That actually effectively illustrates part of the difficulty in creating a solution: our economic reasoning will be clouded for many years by the block subsidy, which will probably dwarf the transaction fees for years to come. Efficiencies which must exist in the self-supporting, fee-only future are unseen at this time.

Agreed, the block reward incentive drives hash-power only, whereas the fees-only regime (after circa 2040) is expected to incentivise both hash-power and tx storage. The phase we are entering now will be a gradual transition between the two regimes over approximately the next 25 years. Maybe an interim solution for an interim situation?
|
|
|
|
|