Mike Hearn (OP)
Legendary
Offline
Activity: 1526
Merit: 1134
April 05, 2013, 10:05:55 AM
Most full nodes do not need to store the entire chain. Though it's not implemented yet, block pruning will mean that your disk usage will eventually stabilize at some multiple of transaction traffic. Only a small number of nodes really need to store the entire thing stretching all the way back to 2009.
Anyway, ultimately this will be decided by Gavin and so far he's been saying he wants to raise the block size limit.
caveden
Legendary
Offline
Activity: 1106
Merit: 1004
April 05, 2013, 10:15:50 AM
Anyway, ultimately this will be decided by Gavin and so far he's been saying he wants to raise the block size limit.
I'd say ultimately it's the main services (MtGox, Bitpay, BitInstant, BitcoinStore, WalletBit, Silk Road etc.) that will decide. If they all stay on the same side of the fork, that will likely be the side that "wins", regardless of Gavin's or even the miners' will (most miners would just follow the money). Of course, these services have a strong interest in staying on the branch that's more professionally supported by developers, so yeah, if most of the core team goes to one side, we could predict most of these services would too.
solex
Legendary
Offline
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
April 05, 2013, 10:57:21 AM Last edit: April 05, 2013, 11:08:04 AM by solex
Thanks Peter for the detailed explanation of your position. I do understand the thrust of your arguments but disagree over a number of areas...

There isn't going to be a single service that does this, that's my whole point: if you achieve scalability by just raising the blocksize, you wind up with all your trust in a tiny number of validating nodes and mining pools. If you achieve scalability through off-chain transactions, you will have many choices and options. ... On the other hand, if the blocksize is raised and it leads to centralization, Bitcoin as a decentralized currency will be destroyed forever.
I am already concerned about the centralization seen in mining. Only a handful of pools are mining most of the blocks, so decentralization is already being lost there. Work is needed in two areas before the argument for off-chain solutions becomes strong: first, blockchain pruning; second, initial propagation of headers (presumably with the associated UTXO) so that hashing can begin immediately while the last block is propagated and its verification done in parallel. These would help greatly to preserve decentralization.

MtGox and other sites are not a good place for people to leave their holdings permanently. As has been pointed out, most people will not run nodes to support the blockchain if their own transactions are forced or priced away from it. Bitcoin cannot be a store of value without being a payment system as well. The two are inseparable.

It might not be sexy and exciting, but like it or not leaving the 1MB limit in place for the foreseeable future is the sane, sober and conservative approach.
Unfortunately, this is the riskiest approach at the present time. The conservative approach is to steadily increase it ahead of demand, which maintains the status quo as much as market forces permit. The dead puppy transaction sources have forced this issue much earlier than would otherwise be the case.

You mention your background in passing, so I will just mention mine. I spent many years at the heart of one of the largest TBTF banks working on its equities proprietary trading system. For a while 1% of the shares traded globally (by value) was our execution flow. On average every three months we encountered limitations of one sort or another (software, hardware, network, satellite systems), yet every one of them was solved by scaling, rewriting or upgrading. We could not stand still as the never-ending arms race for market-share meant that to accept a limitation was to throw in the towel.

The block limit here is typical of default/preset software limits that have to be frequently reviewed, revised or even changed automatically. The plane that temporarily choked on 700 passengers may now be able to carry 20,000. Bitcoin's capacity while maintaining a desired level of decentralization may be far higher than we think, especially if a lot of companies start to run nodes. It just needs the chance to demonstrate this.
Peter Todd
Legendary
Offline
Activity: 1120
Merit: 1160
April 05, 2013, 11:10:39 AM Last edit: April 05, 2013, 08:12:04 PM by retep
Of course, these services have a strong interest in staying on the branch that's more professionally supported by developers, so yeah, if most of the core team goes to one side, we could predict most of these services would too.
FWIW, currently a majority of the core team members - Gregory Maxwell, Jeff Garzik and Pieter Wuille - have all stated they are against increasing the blocksize as the solution to the scalability problem. Each of course has different opinions on exactly what that position constitutes, but ultimately all of them believe off-chain transactions need to be the primary way to make Bitcoin scale.

EDIT: to be clear, no-one, including myself, thinks the blocksize must never change. Rather, achieve scalability first through off-chain transactions, and only then consider increasing the limit. I made a rough guess myself that it may make sense to raise the blocksize at a market cap of around 1 trillion - still far off in the future. Fees in this scenario would be something like $5 per transaction, or $1 billion/year of proof-of-work security (not including the inflation subsidy). That's low enough to be affordable for things like payroll, and is still a lot cheaper than international wire transfers. Hopefully at that point Bitcoin will need less security against takedowns by authority, and/or technological improvements will make it easier to run nodes.

As far as I know Wladimir J. van der Laan and Nils Schneider haven't stated an opinion, leaving Gavin Andresen.

I think Jeff Garzik's post on the issue is apropos, particularly his last point:

That was more than I intended to type, about block size. It seems more like The Question Of The Moment on the web, than a real engineering need. Just The Thing people are talking about right now, and largely much ado about nothing. The worst that can happen if the 1MB limit stays is growth gets slowed for a while. In the grand scheme of things that's a manageable problem.
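As a rough sanity check of those figures (a sketch only: the ~2,400 transactions per 1MB block density is the estimate solex uses later in this thread, and the $5 fee is Peter's own guess), the arithmetic comes out at the same order of magnitude:

Code:
#include <cstdio>

int main()
{
    // Assumptions quoted in the thread, not measured data.
    const double txPerBlock   = 2400.0;  // ~600 tx per 250KB block => ~2400 per 1MB
    const double blocksPerDay = 144.0;   // one block every ~10 minutes
    const double feePerTxUSD  = 5.0;     // Peter's rough per-transaction fee

    double txPerYear   = txPerBlock * blocksPerDay * 365.0;
    double feesPerYear = txPerYear * feePerTxUSD;

    // Prints roughly 126 million tx/year and ~$0.6 billion/year with 1MB blocks,
    // the same order of magnitude as the "$1 billion/year" figure quoted above.
    std::printf("~%.0f million tx/year, ~$%.2f billion/year in fees\n",
                txPerYear / 1e6, feesPerYear / 1e9);
    return 0;
}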
TierNolan
Legendary
Offline
Activity: 1232
Merit: 1104
April 05, 2013, 11:18:31 AM
Anyway, ultimately this will be decided by Gavin and so far he's been saying he wants to raise the block size limit.
I'd say ultimately it's the main services (MtGox, Bitpay, BitInstant, BitcoinStore, WalletBit, Silk Road etc.) that will decide. If they all stay on the same side of the fork, that will likely be the side that "wins", regardless of Gavin's or even the miners' will (most miners would just follow the money).

Doubtful; in practice the miners will decide. However, a "suggestion" by Gavin (and more to the point an update to the reference client) would be a very strong push, as would a suggestion by large users of bitcoin. In fact, have any pool owners stated what their opinion is?
1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
conv3rsion
April 05, 2013, 05:25:28 PM
The idea that Bitcoin must be crippled to 1 MB blocksizes forever is absurd and perverse and I'm extremely thankful that people smarter than I am agree with that stance. If people like you and Gavin weren't working on Bitcoin (for example, if it was run by Peter), I would be getting as far away from it as I could.
acoindr
Legendary
Offline
Activity: 1050
Merit: 1002
April 05, 2013, 06:59:24 PM Last edit: April 05, 2013, 08:13:55 PM by acoindr
Anyway, ultimately this will be decided by Gavin and so far he's been saying he wants to raise the block size limit.
That gives us pretty much zero information. I'm sure 99% of us "want to raise the block size limit". The question is how. Do we raise it to 2MB or 10MB or infinite? Do we raise it now? If not now, when? Do we raise it once? What about dynamically? Dynamically using data or preset parameters? Do we consider hard fork risks in the decision? There are many ways to raise the limit and all have different ramifications. No matter the precise course of action someone will be dissatisfied.

Actually, what Gavin said, quoting directly, is this:

A hard fork won't happen unless the vast super-majority of miners support it. E.g. from my "how to handle upgrades" gist https://gist.github.com/gavinandresen/2355445

Example: increasing MAX_BLOCK_SIZE (a 'hard' blockchain split change)
Increasing the maximum block size beyond the current 1MB per block (perhaps changing it to a floating limit based on a multiple of the median size of the last few hundred blocks) is a likely future change to accommodate more transactions per block. A new maximum block size rule might be rolled out by:
1. New software creates blocks with a new block.version.
2. Allow greater-than-MAX_BLOCK_SIZE blocks if their version is the new block.version or greater and 100% of the last 1000 blocks are new blocks (51% of the last 100 blocks if on testnet).

100% of the last 1000 blocks is a straw-man; the actual criteria would probably be different (maybe something like block.timestamp is after 1-Jan-2015 and 99% of the last 2000 blocks are new-version), since this change means the first valid greater-than-MAX_BLOCK_SIZE-block immediately kicks anybody running old software off the main block chain.
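For illustration, the quoted rollout rule could be sketched roughly as below. The version number, the new limit and the SupermajorityReached helper are placeholders for this post, not code from Gavin's gist or the reference client:

Code:
#include <cstddef>
#include <cstdint>
#include <vector>

static const uint32_t NEW_BLOCK_VERSION  = 3;         // assumed new version number
static const size_t   OLD_MAX_BLOCK_SIZE = 1000000;   // 1MB
static const size_t   NEW_MAX_BLOCK_SIZE = 10000000;  // example larger limit

struct BlockHeader {
    uint32_t nVersion;
};

// True if at least `required` of the last `window` blocks carry the new version.
bool SupermajorityReached(const std::vector<BlockHeader>& chain,
                          size_t window, size_t required)
{
    if (chain.size() < window) return false;
    size_t count = 0;
    for (size_t i = chain.size() - window; i < chain.size(); ++i)
        if (chain[i].nVersion >= NEW_BLOCK_VERSION) ++count;
    return count >= required;
}

size_t MaxBlockSize(const std::vector<BlockHeader>& chain, uint32_t blockVersion)
{
    // The straw-man criterion: 100% of the last 1000 blocks must be new-version
    // before a new-version block may exceed the old limit.
    if (blockVersion >= NEW_BLOCK_VERSION && SupermajorityReached(chain, 1000, 1000))
        return NEW_MAX_BLOCK_SIZE;
    return OLD_MAX_BLOCK_SIZE;
}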
I think this shows great consideration and judgement because I note and emphasize the following: 100% of the last 1000 blocks is a straw-man; the actual criteria would probably be different ... since this change means the first valid greater-than-MAX_BLOCK_SIZE-block immediately kicks anybody running old software off the main block chain.
What I think is of greatest value in Gavin's quote is that it's inclusive of data from the field. It's not him unilaterally saying the new size will be X, deal with it. Instead he essentially says the new size can be X if Y and Z are also true. It appears he has a regard for the ability of the market to decide. Indeed no change remains an option and is actually the default.

I think Jeff Garzik's post on the issue is apropos, particularly his last point:

Thanks for that link. I hadn't seen that post and I think it's brilliant. It probably aligns with my views 99.999%. Ironically it's his last point I disagree with most:

Just The Thing people are talking about right now, and largely much ado about nothing.
I completely disagree. Think how easily this issue could have been solved if in 2009 Satoshi implemented a rule such as Jeff suggests here:

My off-the-cuff guess (may be wrong) for a solution was: if (todays_date > SOME_FUTURE_DATE) { MAX_BLOCK_SIZE *= 2, every 1 years } [Other devs comment: too fast!] That might be too fast, but the point is, not feedback based nor directly miner controlled.
I think the above could be a great solution (though I tend to agree it might be too fast). However, implementing it now will meet resistance from someone feeling it misses their views. If Satoshi had implemented it then it wouldn't be an issue now. We would simply be dealing with it and the market working around it. Now however there is a lot of money tied up in protocol changes and many more views about what should or shouldn't be done. That will only increase, meaning the economic/financial damage possible from ungraceful changes increases as well. I also note early in Jeff's post he says he reversed his earlier stance, my point here being people are not infallible. I actually agree with his updated views, but what if they too are wrong? Who is to say? So the same could apply to Gavin. That's why I think it's wise he appears to include a response from the market in any change, and no change is the default.
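A sketch of what Jeff's one-liner would mean in practice (the flag date is a placeholder and this is illustration only, not a concrete proposal from the thread): the limit depends only on the calendar, with no feedback from miners or fee levels.

Code:
#include <cstddef>
#include <cstdint>

static const size_t  INITIAL_MAX_BLOCK_SIZE = 1000000;    // 1MB
static const int64_t SOME_FUTURE_DATE       = 1420070400; // 1-Jan-2015 UTC, example only
static const int64_t SECONDS_PER_YEAR       = 31536000;

size_t ScheduledMaxBlockSize(int64_t now)
{
    size_t limit = INITIAL_MAX_BLOCK_SIZE;
    if (now <= SOME_FUTURE_DATE)
        return limit;

    // MAX_BLOCK_SIZE *= 2, every year after the flag date.
    int64_t yearsElapsed = (now - SOME_FUTURE_DATE) / SECONDS_PER_YEAR;
    for (int64_t i = 0; i < yearsElapsed; ++i)
        limit *= 2;
    return limit;
}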
conv3rsion
April 06, 2013, 01:37:01 AM
EDIT: to be clear, no-one, including myself, thinks the blocksize must never change. Rather, achieve scalability first through off-chain transactions, and only then consider increasing the limit. I made a rough guess myself that it may make sense to raise the blocksize at a market cap of around 1 trillion - still far off in the future. Fees in this scenario would be something like $5 per transaction, or $1 billion/year of proof-of-work security (not including the inflation subsidy). That's low enough to be affordable for things like payroll, and is still a lot cheaper than international wire transfers. Hopefully at that point Bitcoin will need less security against takedowns by authority, and/or technological improvements will make it easier to run nodes.

We won't get to $10 billion, let alone $1 trillion, if it costs $5 to make a bitcoin transaction. I refuse to believe that you truly do not understand this. Bitcoin will not capture enough of the economy if it is expensive to use, because it won't be useful. Nobody is going to convert their fiat currencies twice in order to use bitcoin as a wire transfer service with a marginal (at best) savings. And they will have to convert back to their fiat currencies, because bitcoin is too expensive to spend or to move to an off-chain transaction service (to spend there). Expensive is not decentralized.

Go fucking make PeterCoin, give it a 10KB block size, and see how far you get with that horseshit.
jgarzik
Legendary
Offline
Activity: 1596
Merit: 1100
April 06, 2013, 03:08:30 AM
I don't agree with the idea that keeping the 1mb limit is conservative. It's actually a highly radical position. The original vision for Bitcoin was that it supports everyone who wants to use it, that's why the very first discussion of it ever was about scalability and Satoshi answered back with calculations based on VISA traffic levels. The only reason the limit is there at all was to avoid anyone mining huge blocks "before the community was ready for it", in his words, so it was only meant to be temporary.
You continue to repeat this -- but it is only half the story. Satoshi also intended the subsidy-free, fee-only future to support bitcoin. He did not describe fancy assurance contracts and infinite block sizes; he clearly indicated that fees would be driven in part by competition for space in the next block. Unlimited block sizes are also a radical position quite outside whatever was envisioned by the system's creator -- who clearly did think that far ahead.
Jeff Garzik, Bloq CEO, former bitcoin core dev team; opinions are my own. Visit bloq.com / metronome.io Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
justusranvier
Legendary
Offline
Activity: 1400
Merit: 1013
April 06, 2013, 03:51:59 AM
Satoshi also intended the subsidy-free, fee-only future to support bitcoin. He did not describe fancy assurance contracts and infinite block sizes; he clearly indicated that fees would be driven in part by competition for space in the next block. Unlimited block sizes are also a radical position quite outside whatever was envisioned by the system's creator -- who clearly did think that far ahead.

Appeal to authority: Satoshi didn't mention assurance contracts, therefore they cannot be part of the economics of the network.
Strawman argument: the absence of a specific protocol-defined limit implies infinite block sizes.
False premise: a specific protocol-defined block size limit is required to generate fee revenue.
solex
Legendary
Offline
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
April 06, 2013, 03:56:20 AM Last edit: April 06, 2013, 04:14:23 AM by solex
Satoshi also intended the subsidy-free, fee-only future to support bitcoin. He did not describe fancy assurance contracts and infinite block sizes; he clearly indicated that fees would be driven in part by competition for space in the next block.

(side-stepping justusranvier's points, right now, but accepting the above statement)

So what block size, realistically, allows a fees market to function? It can be approximated fairly easily if we assume that it happens once the fee revenue per block matches or exceeds the block reward. We have another 3.5 years at 25 BTC, so this needs to be the starting point.

How many transactions fit in a block? It varies, because they have a variable number of inputs and outputs. Using blockchain.info terminology, an average of 600 transactions populate an average 250KB block, so 2,400 will fit in a 1MB block. Perhaps most are "vanilla" Bitcoin transactions having a few inputs and one output.

What is a sensible fee for a vanilla transaction in the market-place? I had considered for a while that it is closer to the BTC equivalent of 5c than 0.5c or 50c. So 2,400 transactions will accrue $120 in fees. With the Bitcoin fx rate at $150, the block reward is worth $3,750, which implies (a rounded) 30MB block size before the fees market functions properly. A few weeks ago this was 10MB. Perhaps by the end of the year it will be anywhere between 10 and 100MB. These are quite large blocks, so perhaps a realistic fee would be more like 20c per transaction, reducing the required block size to the range 2.5MB to 25MB. The market will find the optimum if it is given a chance.

Under no scenario will a 1MB limit give a chance for the fees market to become established, unless transaction fees are forced up to average $1.50, or we wait 11 years until the block reward is 6.25 BTC, or the fx rate collapses back to something like $5 per BTC. None are desirable options.
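Written out explicitly (a sketch only; the transaction density, fee levels and fx rates are the assumptions stated above, not measured constants), the break-even block size is simply the block reward divided by fee revenue per MB:

Code:
#include <cstdio>

int main()
{
    const double txPerMB        = 2400.0; // ~600 tx per average 250KB block
    const double blockRewardBTC = 25.0;

    const double feesPerTxUSD[] = {0.05, 0.20};          // 5c and 20c scenarios
    const double fxRatesUSD[]   = {50.0, 150.0, 500.0};  // example BTC prices

    for (double fee : feesPerTxUSD)
        for (double fx : fxRatesUSD) {
            double rewardUSD   = blockRewardBTC * fx;
            double feesPerMB   = txPerMB * fee;
            double breakEvenMB = rewardUSD / feesPerMB;   // fees match reward here
            std::printf("fee $%.2f, BTC at $%.0f -> ~%.0fMB blocks\n",
                        fee, fx, breakEvenMB);
        }
    return 0;
}

At 5c fees and $150/BTC this prints roughly 31MB, matching the "rounded 30MB" above; the other combinations reproduce approximately the 10-100MB and 2.5-25MB ranges.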
Qoheleth
Legendary
Offline
Activity: 960
Merit: 1028
Spurn wild goose chases. Seek that which endures.
April 06, 2013, 04:07:02 AM
I completely disagree. Think how easily this issue could have been solved if in 2009 Satoshi implemented a rule such as Jeff suggests here: My off-the-cuff guess (may be wrong) for a solution was: if (todays_date > SOME_FUTURE_DATE) { MAX_BLOCK_SIZE *= 2, every 1 years } [Other devs comment: too fast!] That might be too fast, but the point is, not feedback based nor directly miner controlled.
Satoshi also intended the subsidy-free, fee-only future to support bitcoin. He did not describe fancy assurance contracts and infinite block sizes; he clearly indicated that fees would be driven in part by competition for space in the next block. Unlimited block sizes are also a radical position quite outside whatever was envisioned by the system's creator -- who clearly did think that far ahead.

It's my impression that the 2009 Satoshi implementation didn't have a block size limit - that it was a later addition to the reference client as a temporary anti-spam measure, which was left in until it became the norm. Is this impression incorrect?
If there is something that will make Bitcoin succeed, it is growth of utility - greater quantity and variety of goods and services offered for BTC. If there is something that will make Bitcoin fail, it is the prevalence of users convinced that BTC is a magic box that will turn them into millionaires, and of the con-artists who have followed them here to devour them.
Zeilap
April 06, 2013, 04:27:40 AM
Satoshi also intended the subsidy-free, fee-only future to support bitcoin. He did not describe fancy assurance contracts and infinite block sizes; he clearly indicated that fees would be driven in part by competition for space in the next block.

(side-stepping justusranvier's points, right now, but accepting the above statement) So what block size, realistically, allows a fees market to function? (snip)

This argument doesn't work; you're working backwards from the premise that miners need to maintain their current income to be profitable.
solex
Legendary
Offline
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
April 06, 2013, 04:41:57 AM
This argument doesn't work; you're working backwards from the premise that miners need to maintain their current income to be profitable.

The argument is attempting to determine when the fees market becomes functional. There is an argument that this is worrying about nothing because the block reward will maintain the network for many years without significant fees. Based upon the high fx rate this argument looks better by the day! Trying to force fees to match the block reward might be a task for the next decade, and counterproductive today. The risk remains from dead puppy transaction sources, which would need to be throttled directly somehow, as the fees market won't do it.
Peter Todd
Legendary
Offline
Activity: 1120
Merit: 1160
April 06, 2013, 08:55:39 AM
I think this shows great consideration and judgement because I note and emphasize the following: 100% of the last 1000 blocks is a straw-man; the actual criteria would probably be different ... since this change means the first valid greater-than-MAX_BLOCK_SIZE-block immediately kicks anybody running old software off the main block chain.
What I think is of greatest value in Gavin's quote is that it's inclusive of data from the field. It's not him unilaterally saying the new size will be X, deal with it. Instead he essentially says the new size can be X if Y and Z are also true. It appears he has a regard for the ability of the market to decide. Indeed no change remains an option and is actually the default.

Think through this voting proposal carefully: can you think of a way to game the vote?

I completely disagree. Think how easily this issue could have been solved if in 2009 Satoshi implemented a rule such as Jeff suggests here: My off-the-cuff guess (may be wrong) for a solution was: if (todays_date > SOME_FUTURE_DATE) { MAX_BLOCK_SIZE *= 2, every 1 years } [Other devs comment: too fast!] That might be too fast, but the point is, not feedback based nor directly miner controlled. I think the above could be a great solution (though I tend to agree it might be too fast). However, implementing it now will meet resistance from someone feeling it misses their views. If Satoshi had implemented it then it wouldn't be an issue now. We would simply be dealing with it and the market working around it. Now however there is a lot of money tied up in protocol changes and many more views about what should or shouldn't be done. That will only increase, meaning the economic/financial damage possible from ungraceful changes increases as well.

There was some heated IRC discussion on this point - gmaxwell made the excellent comment that any type of exponentially increasing blocksize function runs into the enormous risk that the constant is either too large, and the blocksize blows up, or too small, and you need the off-chain transactions it was supposed to avoid anyway. There is very little room for error, yet changing it later if you get it wrong is impossible due to centralization.

I also note early in Jeff's post he says he reversed his earlier stance, my point here being people are not infallible. I actually agree with his updated views, but what if they too are wrong? Who is to say? So the same could apply to Gavin. That's why I think it's wise he appears to include a response from the market in any change, and no change is the default.

Same here. I read Satoshi's predictions about achieving scalability through large blocks about two years ago, and simply accepted it as inevitable and ok. It was only much, much later that I thought about the issue carefully and realized how dangerous the implications would be to Bitcoin's decentralization.

I am already concerned about the centralization seen in mining. Only a handful of pools are mining most of the blocks, so decentralization is already being lost there. Work is needed in two areas before the argument for off-chain solutions becomes strong: first, blockchain pruning; second, initial propagation of headers (presumably with the associated UTXO) so that hashing can begin immediately while the last block is propagated and its verification done in parallel. These would help greatly to preserve decentralization.
(snip)
You mention your background in passing, so I will just mention mine. I spent many years at the heart of one of the largest TBTF banks working on its equities proprietary trading system. For a while 1% of the shares traded globally (by value) was our execution flow. On average every three months we encountered limitations of one sort or another (software, hardware, network, satellite systems), yet every one of them was solved by scaling, rewriting or upgrading. We could not stand still as the never-ending arms race for market-share meant that to accept a limitation was to throw in the towel.
It's good that you have had experience with such environments, but remember that centrally run systems, even if the architecture itself is distributed, are far, far easier to manage than truly decentralized systems. When your decentralized system must be resistant to attack, the problem is even worse. As an example, consider your (and Mike's) proposal to propagate headers and/or transaction hash information, rather than full transactions. Can you think of a way to attack such a system? Can you think of a way that such a system could make network forking events more likely? I'll give you a hint for the latter: reorganizations.

It's my impression that the 2009 Satoshi implementation didn't have a block size limit - that it was a later addition to the reference client as a temporary anti-spam measure, which was left in until it became the norm.
Is this impression incorrect?
Bitcoin began with a 32MiB blocksize limit, which was reduced to 1MB later by Satoshi.
Zeilap
April 06, 2013, 11:35:09 AM
As an example, consider your (and Mike's) proposal to propagate headers and/or transaction hash information, rather than full transactions. Can you think of a way to attack such a system? Can you think of a way that such a system could make network forking events more likely? I'll give you a hint for the latter: reorganizations.
Are you talking about the case where I mine a block with one of my own transactions which I haven't announced, so you have to make another request for the missing transactions before you can verify it? If I refuse to provide the missing transactions, then no-one will accept my block and I lose. If I delay announcing the missing transactions, competing blocks may be found in the meantime, so this is no different to delaying the announcement of the block itself, which has no benefit for me. What am I missing?
acoindr
Legendary
Offline
Activity: 1050
Merit: 1002
April 06, 2013, 05:40:29 PM
Okay, here is a proposal which IMO is "safe" enough to win support, but first a quick question. Infinite block sizes appear dangerous for centralization, but also for DoS attacks and chain bloat from transactions with no or trivial fees, which size limits and higher fees help prohibit. Has there been a good answer for the latter? (apologies for having missed it if so)
This new proposal builds on Gavin's model, but IMO makes it "safer":
Increases of 4MB every 26,400 blocks (every 6 months) if block.timestamp is after 1-Jan-2014 and 95% of the last 4,400 blocks (1 month) are new-version. (A code sketch of this rule follows the pros and cons below.)
Pros:
- no action until future date gives market time to respond gracefully
- no change as default, means no fork risk
- allows possible increase of 16MB in 2 years and has no final limit
- puts increases in the hands of miners; even the small ones have a say
- should not be prone to gaming due to large sampling size, but worst case is 8MB per year increase
Cons:
- advocates of infinite size will likely find it conservative
- ?
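For concreteness, here is a rough sketch of how such a rule could be expressed (the version number, flag-date encoding and helper names are placeholders, not part of the proposal itself):

Code:
#include <cstddef>
#include <cstdint>
#include <vector>

static const size_t   STEP_BYTES        = 4000000;     // +4MB per step
static const size_t   STEP_INTERVAL     = 26400;       // ~6 months of blocks
static const size_t   VOTE_WINDOW       = 4400;        // ~1 month of blocks
static const double   VOTE_THRESHOLD    = 0.95;
static const int64_t  FLAG_DATE         = 1388534400;  // 1-Jan-2014 UTC
static const uint32_t NEW_BLOCK_VERSION = 3;           // assumed

struct BlockHeader { uint32_t nVersion; int64_t nTime; };

bool VotePasses(const std::vector<BlockHeader>& chain)
{
    if (chain.size() < VOTE_WINDOW) return false;
    size_t count = 0;
    for (size_t i = chain.size() - VOTE_WINDOW; i < chain.size(); ++i)
        if (chain[i].nVersion >= NEW_BLOCK_VERSION) ++count;
    return count >= static_cast<size_t>(VOTE_THRESHOLD * VOTE_WINDOW);
}

// Evaluated once per STEP_INTERVAL boundary; `currentLimit` carries the state.
size_t NextMaxBlockSize(const std::vector<BlockHeader>& chain, size_t currentLimit)
{
    if (!chain.empty() && chain.back().nTime > FLAG_DATE && VotePasses(chain))
        return currentLimit + STEP_BYTES;
    return currentLimit;   // no change is the default
}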
Please note in my view Bitcoin could survive with a 1MB limit and not be "crippled". I think one thing to remember is Bitcoin would not exist if government were completely out of money. Bitcoin only has value because it allows people to control their stored wealth and the free market to work. Regulation is the reason payments come with fees and inefficiency. A truly free market would have long since given users better and cheaper options. Indeed, payments existing outside the "system" was PayPal's original idea, believe it or not.
So what Bitcoin allows are services to be built on top of it, e.g. ones which rely less on the block-chain, to extend its user empowerment. What I'm saying is Bitcoin is not only the block-chain; it's the idea of allowing free market efficiency the chance to work, so off-chain Bitcoin transactions can be considered Bitcoin too, as Bitcoin makes them efficiently possible.
Gavin Andresen
Legendary
Offline
Activity: 1652
Merit: 2301
Chief Scientist
April 06, 2013, 06:49:31 PM
So the longer I think about the block size issue, the more I'm reminded of this Hayek quote:

The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.
- F.A. Hayek, The Fatal Conceit

We can speculate all we want about what is going to happen in the future, but we don't really know. So, what should we do if we don't know? My default answer is "do the simplest thing that could possibly work, but make sure there is a Plan B just in case it doesn't work."

In the case of the block size debate, what is the simplest thing that just might possibly work? That's easy! Eliminate the block size limit as a network rule entirely, and trust that miners and merchants and users will reject blocks that are "obviously too big." Where what is "obviously too big" will change over time as technology changes.

What is Plan B if just trusting miners/merchants/users to do the right thing doesn't work? Big-picture it is easy: schedule a soft-fork that imposes some network-rule-upper-limit, with whatever formula seems right to correct whatever problem crops up. Small-picture: hard to see what the "right" formula would be, but I think it will be much easier to define after we run into some actual practical problem, rather than guessing where problems might crop up.
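As an illustration only (this is not reference-client code, and the default figure is made up), "no network rule at all" would mean each node applies its own local judgement of "obviously too big" instead of a consensus constant:

Code:
#include <cstddef>

struct NodePolicy {
    // Operator-chosen ceiling; what counts as "obviously too big" can be
    // raised over time as bandwidth and storage improve.
    size_t maxAcceptableBlockBytes = 32 * 1000 * 1000;   // example default only
};

bool AcceptBlockLocally(size_t blockBytes, const NodePolicy& policy)
{
    // No protocol rule is violated by a larger block; this node simply
    // declines to relay or build on blocks it considers excessive.
    return blockBytes <= policy.maxAcceptableBlockBytes;
}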
How often do you get the chance to work on a potentially world-changing project?
acoindr
Legendary
Offline
Activity: 1050
Merit: 1002
April 06, 2013, 07:29:41 PM
So the longer I think about the block size issue, the more I'm reminded of this Hayek quote:

The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.
- F.A. Hayek, The Fatal Conceit

(snip)

Well, thanks for weighing in. Under the circumstances you're the closest to a leader this leaderless grand experiment has, so it means a lot. We simply must get some resolution to this so we can move forward. As I alluded to earlier, no action could ever satisfy everyone nor be provably correct.

I'm a bit torn on this avenue, because on one hand I believe strongly in the ability of the market to work effectively, so your rationale that it will self-regulate block size makes sense to me. On the other, I consider this the riskiest route for the things which can go wrong and the consequences if they do. However, this is one of the reasons I began pushing alt-coins. The more options a free market has the better it can work.

In the end, because of that, I can live with any decision for Satoshi's version of Bitcoin that resolves block size without an economically and confidence damaging fork. So I say let's start pushing forward with such plans. It will be better for all interests if we do.