Peter Todd (OP)
Legendary
Offline
Activity: 1120
Merit: 1164
This is a re-post of a message I sent to the bitcoin-dev mailing list. There has been a lot of talk lately about raising the block size limit, and I fear very few people understand the perverse incentives miners have with regard to blocks large enough that not all of the network can process them, in particular the way these incentives inevitably lead towards centralization. I wrote the below in terms of block size, but the idea applies equally to ideas like Gavin's maximum block validation time concept. Either way miners, especially the largest miners, make the most profit when the blocks they produce are large enough that less than 100%, but more than 50%, of the network can process them.
One of the beauties of bitcoin is that the miners have a very strong incentive to distribute as widely and as quickly as possible the blocks they find...they also have a very strong incentive to hear about the blocks that others find.
The idea that miners have a strong incentive to distribute blocks as widely and as quickly as possible is a serious misconception. The optimal situation for a miner is if they can guarantee their blocks reach just over 50% of the overall hashing power, but no more. The reason is orphans.

Here's an example that makes this clear: suppose Alice, Bob, Charlie and David are the only Bitcoin miners, and each of them has exactly the same amount of hashing power. We will also assume that every block they mine is exactly the same size, 1MiB. However, Alice and Bob both have pretty fast internet connections, 2MiB/s and 1MiB/s respectively. Charlie isn't so lucky: he's on an average internet connection for the US, 0.25MiB/s. Finally, David lives in a country with a failing currency, and his local government is trying to ban Bitcoin, so he has to mine behind Tor and can only reliably transfer 50KiB/s.

Now the transactions themselves aren't a problem: 1MiB per 10 minutes is just 1.8KiB/s on average. However, what happens when someone finds a block? Say Alice finds one. With her 2MiB/s connection she transfers her newfound block to her three peers simultaneously; she has enough bandwidth to serve all three at once, so Bob has it in 1 second, Charlie in 4 seconds, and finally David in 20 seconds. The thing is, David has effectively spent those 20 seconds doing nothing. Even if he found a new block in that time, he wouldn't be able to upload it to his other peers fast enough to beat Alice's block. In addition, there was also a probabilistic time window before Alice found her block in which, even if David found a block, he couldn't get it to the majority of the hashing power fast enough to matter. Basically, we can say David spent about 30 seconds doing nothing, and thus his effective hash power is now down by 5%.

However, it gets worse.
Let's say a rolling-average mechanism to determine the maximum block size has been implemented, and since demand is high enough that every block is at the maximum, the rolling average lets the blocks get bigger. Say we're now at 10MiB blocks. Average transaction volume is now 18KiB/s, so David has just 32KiB/s left, and a 10MiB block takes 5.3 minutes to download. Including the time window in which David finds a new block but can't upload it, he's down to doing useful mining a bit over 3 minutes per block on average. Alice, on the other hand, now has 15% less competition, so she has actually clearly benefited from the fact that her blocks can't propagate quickly to 100% of the installed hashing power.

Now I know you are going to complain that this is BS because obviously we don't need to actually transmit the full block; everyone already has the transactions, so you just need to transfer the tx hashes, roughly a 10x reduction in bandwidth. But it doesn't change the fundamental principle: instead of David being pushed off-line at 10MiB blocks, he'll be pushed off-line at 100MiB blocks. Either way, the incentive is to create blocks so large that they only reliably propagate to a bit over 50% of the hashing power, *not* 100%.

Of course, who's to say Alice and Bob are mining blocks full of transactions known to the network anyway? Right now the block reward is still high and tx fees are low. If there isn't actually 10MiB worth of transactions per block on the network, it still makes sense for them to pad their blocks to that size anyway to force David out of the mining business. They would gain from the reduced hashing power, and collect the tx fees he would have collected. Finally, once there are just three miners, whether or not their blocks ever get to Charlie is totally irrelevant to Alice and Bob; they have every reason to make their blocks even bigger.

Would this happen in the real world?
With pools, chances are people would quit mining solo or via P2Pool and switch to central pools. Then, as block sizes got large enough, they would quit pools with higher stale rates in preference for pools with lower ones, and eventually the pools with lower stale rates would probably wind up clustering geographically so that the cost of the high-bandwidth internet connections between them would be cheaper. Miners are already very sensitive to orphan rates, and will switch pools over small differences in that rate.

Ultimately, the reality is that miners have very, very perverse incentives when it comes to block size. If you assume malice, these perverse incentives lead to nasty outcomes. And even if you don't assume malice, pool operators face a natural cycle: slightly reduced profitability leads to less ability to invest in and maintain fast network connections, which leads to more orphans, fewer miners, and finally further reduced profitability due to higher overhead. That cycle will inevitably lead to centralization of mining capacity.
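The bandwidth arithmetic in the Alice/Bob/Charlie/David example can be checked with a few lines of Python (the speeds and block sizes are the ones assumed in the post, nothing more):

```python
# Back-of-envelope check of the propagation example above. Download rates
# are in MiB/s, as assumed in the post.

def transfer_seconds(block_mib: float, download_mib_s: float) -> float:
    """Seconds for a peer to receive a block, limited by its download rate."""
    return block_mib / download_mib_s

# 1 MiB blocks: Bob (1 MiB/s), Charlie (0.25 MiB/s), David (50 KiB/s)
for name, rate in {"Bob": 1.0, "Charlie": 0.25, "David": 50 / 1024}.items():
    print(f"1 MiB block reaches {name} in {transfer_seconds(1, rate):.0f} s")

# 10 MiB blocks: 18 KiB/s of transaction traffic leaves David ~32 KiB/s
david_left = 32 / 1024  # MiB/s
print(f"10 MiB block reaches David in "
      f"{transfer_seconds(10, david_left) / 60:.1f} min")
```

This reproduces the 20-second and 5.3-minute figures above; the extra "dead time" before a block is found is a rougher estimate and isn't modeled here.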
Sukrim
Legendary
Offline
Activity: 2618
Merit: 1007
February 18, 2013, 05:05:15 PM
Wouldn't a valid header (or even just the hash of that header) already be enough to start mining at least an empty block? Also, if you produce blocks large and fast enough to drive someone out of mining, you'd also drive a lot more full clients off the network.
Miners already have quite high incentives to DDoS (or otherwise break) all the other pools they are not part of, no matter the block size. I think there are more effective, less user-disruptive, and cheaper ways of driving competing miners off the grid than a bandwidth war.
Gavin Andresen
Legendary
Offline
Activity: 1652
Merit: 2311
Chief Scientist
February 18, 2013, 05:14:32 PM
So... I start from "more transactions == more success"
I strongly feel that we shouldn't aim for Bitcoin topping out as a "high power money" system that can process only 7 transactions per second.
I agree with Stephen Pair-- THAT would be a highly centralized system.
Oh, sure, mining might be decentralized. But who cares if you either have to be a gazillionaire to participate directly on the network as an ordinary transaction-creating customer, or have to have your transactions processed via some centralized, trusted, off-the-chain transaction processing service?
So, as I've said before: we're running up against the artificial 250K block size limit now, I would like to see what happens. There are lots of moving pieces here, so I don't think ANYBODY really knows what will happen (maybe miners will collectively decide to keep the block size low, so they get more fees. Maybe they will max it out to force out miners on slow networks. Maybe they will keep it small so their blocks relay through slow connections faster (maybe there will be a significant fraction of mining power listening for new blocks behind tor, but blasting out new blocks not via tor)).
I think we should put users first. What do users want? They want low transaction fees and fast confirmations. Let's design for that case, because THE USERS are who ultimately give Bitcoin value.
How often do you get the chance to work on a potentially world-changing project?
Peter Todd (OP)
Legendary
Offline
Activity: 1120
Merit: 1164
February 18, 2013, 05:28:43 PM
Wouldn't already a valid header (or even just the hash of that header) be enough to start mining at least an empty block?
Yes, but an empty block doesn't earn you any revenue as the block reward drops, so mining is still pointless. You still need the full block to know what transactions were mined, and thus which transactions in the mempool are safe to include in the block you want to attempt to mine. Additionally, without the full block you don't know if the header is valid, so you are vulnerable to miners feeding you invalid blocks. Of course, someone has to create those invalid blocks, but the large miners are the only ones who can validate them. So if smaller miners respond by taking risks, like mining blocks even when they don't know what transactions have already been mined, the larger miners can run lots of nodes to find those invalid blocks as fast as possible and distribute them to other small miners without the equipment to validate them.

Also if you produce blocks large and fast enough to drive someone out of mining you'd also drive a lot more full clients off the network.
Sure, but all the scenarios where extremely large blocks are allowed also assume that most people run, at best, mini-SPV clients; if one of the smaller "full-node transaction feed" services gets put off-line, your customers, that is, transaction creators, will just move to a larger service for their tx feed.

Miners already have quite high incentives to DDoS (or otherwise break) all other pools that they are not part of, no matter the block size. I think there are more effective, less disruptive for users and cheaper ways of driving competing miners off the grid than a bandwidth war.
Yes, but DoSing nodes by launching DoS attacks is illegal; DoSing full nodes just by making larger blocks isn't. For the largest miner/full-node service, the cost of launching such an attack is zero: they've already paid for the extra hardware capacity, so why not use it to its full advantage? So what if doing so causes 5% of the network to drop out?

The most dangerous part of this scenario is that you don't need miners to even act maliciously for it to happen. The miner with the largest investment in fixed costs, network capacity and CPU power, has a profit motive to use that expensive capacity to the fullest extent possible. That doing so happens to push the miner with the smallest investment in fixed costs off of the network, furthering the largest miner's mining profits, is inevitable. Furthermore, the process is guaranteed to happen again, because the largest miner has no reason not to take those further mining profits and invest in yet more network capacity and CPU power.

Again, remember that those fixed costs do not make the network more secure. A 51% attacker doesn't care about valid transactions at all; they're trying to mine blocks that don't have the transactions that the main network does, so they don't need to spend any money on their network connection. Every cent that miners spend on internet connections and fast computers because they need to process huge blocks is money that could have gone towards securing the network with hashing power, but didn't.
Peter Todd (OP)
Legendary
Offline
Activity: 1120
Merit: 1164
February 18, 2013, 06:08:14 PM
So... I start from "more transactions == more success"
I strongly feel that we shouldn't aim for Bitcoin topping out as a "high power money" system that can process only 7 transactions per second.
Hey, I want a pony too. But Bitcoin is an O(n) system, and we have no choice but to limit n.

I agree with Stephen Pair-- THAT would be a highly centralized system.
A "highly centralized" system where anyone can get a transaction confirmed by paying the appropriate fee? A fee that would be about $20 (1) for a typical transaction even if $10 million a day, or $3.65 billion a year, goes to miners keeping the network secure for everyone? I'd be very happy to be able to wire money anywhere in the world, completely free from central control, for only $20. Equally I'll happily accept more centralized methods to transfer money when I'm just buying a chocolate bar. 1) $10,000,000/144blocks = $69,440/block / 1MiB/block = $69.44/KiB A two-in, two-out transaction with compressed keys is about 300 bytes, thus $20.35 per transaction. So, as I've said before: we're running up against the artificial 250K block size limit now, I would like to see what happens. There are lots of moving pieces here, so I don't think ANYBODY really knows what will happen (maybe miners will collectively decide to keep the block size low, so they get more fees. Maybe they will max it out to force out miners on slow networks. Maybe they will keep it small so their blocks relay through slow connections faster (maybe there will be a significant fraction of mining power listening for new blocks behind tor, but blasting out new blocks not via tor)).
That sounds like a whole lot of "maybe". I agree that we need to move cautiously, but fundamentally I've shown why a purely profit-driven miner has an incentive to create blocks large enough to push other miners out of the game, and gmaxwell has made the point that a purely profit-driven miner has no incentive not to add an additional transaction to a block if the transaction fee is greater than the cost in terms of decreased block propagation leading to orphans. The two problems are complementary, in that decreased block propagation actually increases revenue up to a point, and the effect is most significant for the largest miners. Unless someone can come up with a clear reason why gmaxwell and I are both wrong, I think we've shown pretty clearly that floating block size limits will lead to centralization. Variance has already caused the number of pools out there to be fairly limited; we really don't want more incentives for pools to get larger.

I think we should put users first. What do users want? They want low transaction fees and fast confirmations. Lets design for that case, because THE USERS are who ultimately give Bitcoin value.
They want something that is impossible to get from an O(n) system without making it centralized. We've already got lots of centralized systems; creating another one doesn't do the world any good. We've only got one major decentralized payment system, Bitcoin, and I want to keep it that way. Users can always use centralized systems for low-value transactions, and if block sizes are limited they'll even be able to very effectively audit the on-chain transactions produced by those centralized systems. Large blocks do not let you do that.

Ultimately, the problem is the huge amount of expensive infrastructure built around the assumption that transactions are nearly free. Businesses make decisions based on what will happen at most 3-5 years in the future, so naturally the likes of Mt. Gox, BitInstant, SatoshiDice and others have every reason to want the block size limit to be lifted. It'll save them money now, even if it will lead to a centralized Bitcoin five or ten years down the road.
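The fee arithmetic in footnote (1) above can be re-derived in a short script (using the post's assumptions of a $10M/day security budget, 144 blocks/day, 1MiB blocks, and ~300-byte transactions; the post's $20.35 figure uses looser 1000-byte KiB rounding):

```python
# Checking the footnote's fee estimate under the post's assumptions.
DAILY_BUDGET = 10_000_000    # dollars/day going to miners (post's assumption)
BLOCKS_PER_DAY = 144
BLOCK_BYTES = 1024 * 1024    # 1 MiB per block
TX_BYTES = 300               # two-in, two-out tx with compressed keys

per_block = DAILY_BUDGET / BLOCKS_PER_DAY
per_tx = per_block * TX_BYTES / BLOCK_BYTES
print(f"${per_block:,.0f} per block, ${per_tx:.2f} per transaction")
```

Either way the result lands at roughly $20 per typical transaction, which is the figure the argument relies on.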
Mike Hearn
Legendary
Offline
Activity: 1526
Merit: 1134
February 18, 2013, 06:26:03 PM
I agree with Gavin, and I don't understand what outcome you're arguing for.
You want to keep the block size limit so Dave can mine off a GPRS connection forever? Why should I care about Dave? The other miners will make larger blocks than he can handle and he'll have to stop mining and switch to an SPV client. Sucks to be him.
Your belief we have to have some hard cap on the N in O(N) doesn't ring true to me. Demand for transactions isn't actually infinite. There is some point at which Bitcoin may only grow very slowly if at all (and is outpaced by hardware improvements).
Likewise, miners have all kinds of perverse incentives in theory that don't seem to happen in practice. Like, why do miners include any transactions at all? They can minimize their costs by not doing so. Yet, transactions confirm. You really can't prove anything about miners behaviour, just guess at what some of them might do.
I don't personally have any interest in working on a system that boils down to a complicated and expensive replacement for wire transfers. And I suspect many other developers, including Gavin, don't either. If Gavin decides to lift the cap, I guess you and Gregory could create a separate alt-coin that has hard block size caps and see how things play out over the long term.
Gavin Andresen
Legendary
Offline
Activity: 1652
Merit: 2311
Chief Scientist
February 18, 2013, 06:29:47 PM
Half-baked thoughts on the O(N) problem:
So, we've got O(T) transactions that have to get verified.
And, right now, we've got O(P) full nodes on the network that verify every single transaction.
So, we get N verifications, where N = T*P.
The observation is that if both T and P increase at the same rate n, the total number of verifications grows as O(n^2).
... and at this point your (and gmaxwell's) imagination seems to run out, and you throw up your hands and say "We Must Limit Either T or P."
Really?
If we have 20,000 full nodes on the network, do we really need every transaction to be verified 20,000 separate times?
I think as T and P increase it'd be OK if full nodes with limited CPU power or bandwidth decide to only fetch and validate a random subset of transactions.
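To put rough numbers on the random-subset idea (a sketch only; the node count and sampling fraction here are made up for illustration, and nodes are assumed to sample independently):

```python
# Suppose a block contains exactly one invalid transaction, and each of
# `nodes` full nodes independently validates a random fraction `f` of the
# block's transactions.

def miss_probability(f: float, nodes: int) -> float:
    """Chance that not a single sampling node happens to check the one
    invalid transaction: each node misses it with probability (1 - f)."""
    return (1 - f) ** nodes

# e.g. 20,000 nodes each checking just 0.1% of transactions:
p = miss_probability(0.001, 20_000)
print(f"network-wide miss probability: {p:.1e}")
```

Detection somewhere on the network is overwhelmingly likely, but any individual node misses the invalid transaction 99.9% of the time, so some mechanism is still needed to broadcast the discovery to everyone else.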
Peter Todd (OP)
Legendary
Offline
Activity: 1120
Merit: 1164
February 18, 2013, 06:42:30 PM
I agree with Gavin, and I don't understand what outcome you're arguing for.
You want to keep the block size limit so Dave can mine off a GPRS connection forever? Why should I care about Dave? The other miners will make larger blocks than he can handle and he'll have to stop mining and switch to an SPV client. Sucks to be him.
I primarily want to keep the limit fixed so we don't have a perverse incentive. Ensuring that everyone can audit the network properly is secondary. If there were consensus to, say, raise the limit to 100MiB, that's something I could be convinced of, but only if raising the limit is not something that happens automatically under miner control, and not if the limit is just going to be raised year after year.

Your belief we have to have some hard cap on the N in O(N) doesn't ring true to me. Demand for transactions isn't actually infinite. There is some point at which Bitcoin may only grow very slowly if at all (and is outpaced by hardware improvements).
Yes, there will likely only be around 10 billion people on the planet, but that's a hell of a lot of transactions: at one transaction per person per day, you've got 115,700 transactions per second. Sorry, but there are lots of reasons to think Moore's law is coming to an end, and in any case the issue I'm most worried about is network scaling, which doesn't even follow Moore's law. Making design decisions on the assumption that technology will keep getting exponentially better is a huge risk when transistors are already only a few orders of magnitude away from being single atoms.

Likewise, miners have all kinds of perverse incentives in theory that don't seem to happen in practice. Like, why do miners include any transactions at all? They can minimize their costs by not doing so. Yet, transactions confirm. You really can't prove anything about miners behaviour, just guess at what some of them might do.
The fact that miners include transactions at all is a great example of how small the block limit is. Right now the risk of orphans due to slow propagation is low enough that the difference between a 1KiB block and a 250KiB block is so inconsequential that pools just run the reference client code and don't bother tweaking it. I wouldn't be the slightest bit surprised to be told that no pool has even a single full-time employee, so why would I expect people to really put in the effort to optimize revenue, when it'll probably lead to a bunch of angry forum posts and miners leaving because they think the pool will damage Bitcoin?

I don't personally have any interest in working on a system that boils down to a complicated and expensive replacement for wire transfers. And I suspect many other developers, including Gavin, don't either. If Gavin decides to lift the cap, I guess you and Gregory could create a separate alt-coin that has hard block size caps and see how things play out over the long term.
I don't have any interest in working on a system that boils down to a complicated and expensive replacement for PayPal. Decentralization is the fundamental thing that makes Bitcoin special.
Zeilap
February 18, 2013, 06:57:42 PM
Likewise, miners have all kinds of perverse incentives in theory that don't seem to happen in practice. Like, why do miners include any transactions at all? They can minimize their costs by not doing so. Yet, transactions confirm. You really can't prove anything about miners behaviour, just guess at what some of them might do.
The fact that miners include transactions at all is a great example of how small the block limit is. Right now the risk of orphans due to slow propagation is low enough that the difference between a 1KiB block and a 250KiB block is so inconsequential that pools just run the reference client code and don't bother tweaking it.

I don't think that was the point Mike was making. Rather, the cost of computing the hash of a block is directly proportional to the size of the block, so doubling the block size is like halving the hashrate for a miner. Thus, while rewards for finding blocks are large compared to fees, it is more profitable for a miner to mine as small a block as possible, because his effective hashrate increases and he is more likely to find blocks. What this says about miners (or really, pool operators) is that either:
- they're too lazy to change the code
- they're not arseholes / aren't purely motivated by short-term profit
- they realise that by mining empty blocks, the usability of bitcoin would suffer, hence the market price, hence their profits
Peter Todd (OP)
Legendary
Offline
Activity: 1120
Merit: 1164
February 18, 2013, 07:09:02 PM
Half-baked thoughts on the O(N) problem:
So, we've got O(T) transactions that have to get verified.
And, right now, we've got O(P) full nodes on the network that verify every single transaction.
So, we get N verifications, where N = T*P.
The observation is that if both T and P increase at the same rate, that formula is O(N^2).
... and at this point your (and gmaxwell's) imagination seems to run out, and you throw up your hands and say "We Must Limit Either T or P."
Really?
If we have 20,000 full nodes on the network, do we really need every transaction to be verified 20,000 separate times?
I think as T and P increase it'd be OK if full nodes with limited CPU power or bandwidth decide to only fetch and validate a random subset of transactions.
Well, you'll have to implement the fraud proofs stuff d'aniel talked about and I later expanded on. You'll also need a DHT so you can retrieve arbitrary transactions. Both require a heck of a lot of code to be written, a working UTXO scheme for fraud proofs in particular; random transaction verification is quite useless without the ability to tell everyone else that the block is invalid.

Things get ugly though: block validation isn't deterministic anymore. I can have one tx out of a million invalid, yet it still makes the whole block invalid. You'd better hope someone is in fact running a full-block validator and the fraud-proof mechanism is working well, or it might take a whole bunch of blocks before random sampling finds the invalid one. The whole fraud-proof implementation also becomes part of the consensus problem; that's a lot of code to get right. In addition, partial validation still doesn't solve the problem that you don't know which tx's in your mempool are safe to include in the next block unless you know which ones were spent by the previous block. Mining becomes a game of odds, and the UTXO tree proposals don't help. A UTXO bloom filter might, but you'll have to be very careful that it isn't subject to chosen-key attacks.

Transaction volume itself leads to centralization too, simply by ensuring that only a miner able to keep up with the large volume of low-fee transactions can make a profit.

I've already thought of your idea, and I'm sure gmaxwell has too... our imagination didn't "run out".
cjp
February 18, 2013, 07:12:08 PM
I think we need to have a block size limit. My original objection against removing it was that, as the number of new coins per block drops to zero, the mining incentive will also drop to zero if you have nothing to keep transaction fees above zero (transaction capacity has to be "scarce"). The OP showed an entirely new way things can go wrong if there is no block size limit. I don't see how making the block size limit "auto-adjustable" is different in this respect from having no block size limit at all.

In my opinion, the future block size limit can be very high, to allow for very high (but not unlimited) transaction volume. But it has to be low enough to prevent all the problems related to unlimited block sizes.

See the paper I presented in this thread: https://bitcointalk.org/index.php?topic=94674.0. In chapter 3, it contains some estimates of the scalability of different concepts, including the number of transactions needed by different technologies when used worldwide for all transactions. Assuming 2 transactions per person per day for 10^10 people, some conclusions are:
- normal Bitcoin system: 1e8 transactions/block
- when my proposed system is widely used: 1e5 transactions/block

That should give you an idea of how high the block size limit should be. Maybe it should even be a bit lower, to increase scarcity a bit and, at the current level of technology, to allow normal-PC users to verify the entire block chain. For comparison, the current limit is around 1e3 transactions/block.

So, as I've said before: we're running up against the artificial 250K block size limit now, I would like to see what happens. There are lots of moving pieces here, so I don't think ANYBODY really knows what will happen (maybe miners will collectively decide to keep the block size low, so they get more fees. Maybe they will max it out to force out miners on slow networks. Maybe they will keep it small so their blocks relay through slow connections faster (maybe there will be a significant fraction of mining power listening for new blocks behind tor, but blasting out new blocks not via tor)).
I'd like to see that too, since it's IMHO such an important piece of Bitcoin, and I'd rather have it tested now than when the whole world starts using Bitcoin. After the successful halving of the block reward, this is the next big step.

I think we should put users first. What do users want? They want low transaction fees and fast confirmations. Lets design for that case, because THE USERS are who ultimately give Bitcoin value.
I think the users want more than that, at least in the current Bitcoin community. Bitcoin's most unique characteristics come from its decentralized nature; if you lose that, everything else is in danger. If you just want low fees and fast confirmation, Bitcoin is not the right technology: it would be far more efficient to have a couple of centralized debit card issuers who issue properly secured cards without chargeback. Every transaction would only need to be verified and stored once or twice, so there would be almost no costs (and hence almost no transaction fees), and confirmation would be near-instantaneous.
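The "1e8 transactions/block" figure above is a straightforward order-of-magnitude calculation, which can be verified directly:

```python
# Order-of-magnitude check of the worldwide-usage estimate above:
# 10^10 people making 2 transactions each per day, spread over 144 blocks/day.
PEOPLE = 10**10
TX_PER_PERSON_PER_DAY = 2
BLOCKS_PER_DAY = 144

tx_per_block = PEOPLE * TX_PER_PERSON_PER_DAY / BLOCKS_PER_DAY
print(f"{tx_per_block:.1e} transactions per block")
```

That works out to about 1.4e8 transactions per block, consistent with the rounded 1e8 figure in the post.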
Gavin Andresen
Legendary
Offline
Activity: 1652
Merit: 2311
Chief Scientist
February 18, 2013, 07:18:36 PM
RE: lots of code to write if you can't keep up with transaction volume: sure. So?

Transaction volume itself leads to centralization too, simply by ensuring that only a miner able to keep up with the large volume of low-fee transactions can make a profit.
I really don't understand this logic. Yes, it is a fact of life that if you have a system where people are competing, the people who are less efficient will be driven out of business. So there will be fewer people in that business. You seem to be saying that we should subsidize inefficient miners by limiting the block size, therefore driving up fees and making users pay for their inefficiency. All in the name of vague worries about "too much centralization."
Peter Todd (OP)
Legendary
Offline
Activity: 1120
Merit: 1164
February 18, 2013, 07:45:30 PM
RE: lots of code to write if you can't keep up with transaction volume: sure. So?
Well, one big objection is that the code required is very similar to that required by fidelity-bonded bank/ledger implementations, but unlike the fidelity-bonded stuff, because it's consensus code, screwing it up creates problems that are far more difficult to fix and far more widespread in scale.

Transaction volume itself leads to centralization too, simply by ensuring that only a miner able to keep up with the large volume of low-fee transactions can make a profit.
I really don't understand this logic. Yes, it is a fact of life that if you have a system where people are competing, the people who are less efficient will be driven out of business. So there will be fewer people in that business. You seem to be saying that we should subsidize inefficient miners by limiting the block size, therefore driving up fees and making users pay for their inefficiency.

"This mining thing is crazy, like, all that work when you could just verify a transaction's signatures and, I dunno, ask a bunch of trusted people if the transaction existed?"

So, why do we give miners transaction fees anyway? Well, they are providing the service of "mining a block", but the real service they provide is being independent from other miners, and we value that because we don't want >50% of the hashing power to be controlled by any one entity. When you say these small miners are inefficient, you're completely ignoring what we actually want miners to do, which is to provide independent hashing power. The small miners are the most efficient at providing this service, not the least.

The big issue is that the cost of being a miner comes in two forms: hashing power and overhead. The former is what makes the network secure. The latter is a necessary evil, and costs the same for every independent miner. Fortunately, with 1MiB blocks the overhead is low enough that individual miners can profitably mine on P2Pool, but with 1GiB blocks P2Pool mining just won't be profitable. We already have 50% of the hashing power controlled by about three or four pools; if running a pool requires thousands of dollars worth of equipment, the situation will get even worse.

Of course, we've also been focusing a lot on miners, when the same issue applies to relay nodes too.
Preventing DoS attacks on the flood-fill network is going to be a lot harder when most nodes can't verify blocks fast enough to know whether a transaction is valid or not, and hence whether the limited resource of priority or fees is being expended by broadcasting it. Yet if the "solution" is fewer relay nodes, you've broken the key security assumption that information is easy to spread and difficult to stifle.

All in the name of vague worries about "too much centralization."
Until Bitcoin has undergone a serious attack, we just aren't going to have a firm idea of what counts as "too much centralization".
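The fixed-overhead squeeze described above can be illustrated with a toy model (every number here is made up purely for illustration: a total monthly revenue figure and two overhead levels standing in for "small blocks" vs "huge blocks"):

```python
# Toy model: a miner earns its share of total network revenue and pays a
# fixed overhead (bandwidth, validation hardware) that is the same for
# every independent miner, large or small.

def monthly_profit(hash_share: float, network_revenue: float,
                   overhead: float) -> float:
    """Profit = proportional share of block revenue minus fixed overhead."""
    return hash_share * network_revenue - overhead

NETWORK_REVENUE = 1_000_000  # $/month paid to all miners combined (assumed)

for share in (0.30, 0.001):          # a large pool vs a small P2Pool miner
    for overhead in (500, 5_000):    # cheap vs expensive infrastructure
        p = monthly_profit(share, NETWORK_REVENUE, overhead)
        print(f"share {share:.1%}, overhead ${overhead}: profit ${p:,.0f}")
```

The large miner barely notices the higher overhead, while the small miner flips from profitable to unprofitable, which is exactly the centralizing pressure the post describes.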
cjp
February 18, 2013, 07:49:24 PM
I really don't understand this logic.
Yes, it is a fact of life that if you have a system where people are competing, the people who are less efficient will be driven out of business. So there will be fewer people in that business.
You seem to be saying that we should subsidize inefficient miners by limiting the block size, therefore driving up fees and making users pay for their inefficiency.
All in the name of vague worries about "too much centralization."
It's interesting, and a bit worrying too, to see the same ideological differences of the "real" world come back in the world of Bitcoin.

In my view, the free market is a good, but inherently unstable system. Economies of scale and network effects favor large parties, so large parties can get larger and small parties will disappear, until only one or just a few parties are left. You see this in nearly all markets nowadays. Power also speeds up this process: more powerful parties can eliminate less powerful parties; less powerful parties can only survive if they subject themselves to more powerful parties, so the effect is that power tends to centralize. For the anarchists among us: this is why we have governments. It's not because people once thought it was a good idea, it's because that happens to be the natural outcome of the mechanisms that work in society.

In the light of this, and because the need for bitcoins primarily comes from the need for a decentralized, no-point-of-control system, I think it's not sufficient to call worries about centralization "vague": you have to clearly defend why this particular form of centralization cannot be dangerous. The default is "centralization is bad".
|
|
|
|
Mike Hearn
Legendary
Offline
Activity: 1526
Merit: 1134
|
|
February 18, 2013, 07:55:28 PM |
|
In the absence of a block size cap miners can be supported using network assurance contracts. It's a standard way to fund public goods, which network security is, so I am not convinced by that argument.
I feel these debates have been going on for years. We just have wildly different ideas of what is affordable or not.
Perhaps I've been warped by working at Google so long, but 100,000 transactions per second just feels totally inconsequential. At 100x the volume of PayPal, each node could still run on a single machine, and not even a very powerful one. So there's absolutely no chance of Bitcoin turning into a PayPal equivalent even if we stop optimizing the software tomorrow.
But we're not going to stop optimizing the software. Removing the block cap means a hard fork, and once we decided to do that we may as well throw in some "no brainer" upgrades as well, like supporting ed25519 which is orders of magnitude faster than ECDSA+secp256k1. Then a single strong machine can go up to hundreds of thousands of transactions per second.
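A rough sanity check of the single-machine claim above. The per-core verification rates are assumptions for illustration (a circa-2013 OpenSSL-style ECDSA rate vs. a fast ed25519 implementation), not benchmarks:

```python
# How many CPU cores signature verification alone would need at the
# claimed throughput, under ASSUMED per-core verification rates.
TARGET_TPS = 100_000               # "100,000 transactions per second"
ECDSA_VERIFY_PER_CORE = 10_000     # assumed secp256k1 verifies/sec/core
ED25519_VERIFY_PER_CORE = 100_000  # assumed ed25519 verifies/sec/core

cores_ecdsa = TARGET_TPS / ECDSA_VERIFY_PER_CORE
cores_ed25519 = TARGET_TPS / ED25519_VERIFY_PER_CORE
print(f"ECDSA: {cores_ecdsa:.0f} cores, ed25519: {cores_ed25519:.0f} core(s)")
```

Under these assumed rates the CPU side is indeed tractable on commodity hardware, which is why the later replies shift the argument to bandwidth and storage instead.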
The cost of a Bitcoin transaction is just absurdly low and will continue to fall in future. It's like nothing at all. Saying Bitcoin is going to get centralized because of high transaction rates is kinda like saying in 1993 that the web can't possibly scale because if everyone used it web servers would fall over and die. Well yes, they would have done, in 1993. But not everyone started using the web overnight and by the time they did, important web sites were all using hardware load balancers and multiple data centers and it was STILL cheap enough that Wikipedia - one of the world's top websites - could run entirely off donations.
|
|
|
|
OhShei8e
Legendary
Offline
Activity: 1792
Merit: 1059
|
|
February 18, 2013, 08:04:41 PM |
|
So, as I've said before: we're running up against the artificial 250K block size limit now, I would like to see what happens. There are lots of moving pieces here, so I don't think ANYBODY really knows what will happen.
A rational approach. I think we should put users first. What do users want? They want low transaction fees and fast confirmations. Let's design for that case, because THE USERS are who ultimately give Bitcoin value.
I agree. I'm a user. :-)
|
|
|
|
cjp
|
|
February 18, 2013, 08:22:37 PM |
|
I feel these debates have been going on for years. We just have wildly different ideas of what is affordable or not.
I don't think the most fundamental debate is about how high the limit should be. I made some estimates about how high it would have to be for worldwide usage, which is quite a wild guess, and I suppose any estimation about what is achievable with either today's or tomorrow's technology is also a wild guess. We can only hope that what is needed and what is possible will somehow continue to match.

But the most fundamental debate is about whether it is dangerous to (effectively) disable the limit. These are some ways to effectively disable the limit:
- actually disabling it
- making it "auto-adjusting" (so it can increase indefinitely)
- making it so high that it won't ever be reached
I think the current limit will have to be increased at some point in time, requiring a "fork". I can imagine you don't want to set the new value too low, because that would make you have to do another fork in the future. Since it's hard to know what's the right value, I can imagine you want to develop an "auto-adjusting" system, similar to how the difficulty is "auto-adjusting". However, if you don't do this extremely carefully, you could end up effectively disabling the limit, with all the potential dangers discussed here.

You have to carefully choose the goal you want to achieve with the "auto-adjusting", and you have to carefully choose the way you measure your "goal variable", so that your system can control it towards the desired value (similar to how the difficulty adjustment steers towards 10 minutes/block).

One "goal variable" would be the number of independent miners (a measure of decentralization). How to measure it? Maybe you can offer miners a reward for being "non-independent"? If they accept that reward, they prove non-independence of their different mining activities (e.g. different blocks mined by them); the reward should be larger than the profits they could get from further centralizing Bitcoin. This is just a vague idea; naturally it should be thought out extremely carefully before even thinking of implementing this.
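The control-loop analogy above can be sketched in a few lines. This is purely hypothetical illustration: the goal variable (an estimated count of independent miners), the target value, and the 4x clamp borrowed from difficulty retargeting are all assumptions, not a proposal:

```python
# Hypothetical "auto-adjusting" block size limit steered by a measured
# goal variable (e.g. an estimate of independent miners), clamped per
# adjustment period like difficulty retargeting (at most a 4x move).
def retarget_limit(current_limit, measured, target, max_step=4.0):
    """Shrink the limit when decentralization falls short of target,
    grow it when there is headroom; never move more than max_step."""
    ratio = measured / target
    ratio = max(1.0 / max_step, min(max_step, ratio))
    return current_limit * ratio

# If only 50 of a desired 100 independent miners are measured, halve
# the limit; an extreme reading is clamped to a 4x move at most.
print(retarget_limit(1_000_000, 50, 100))  # 500000.0
print(retarget_limit(1_000_000, 1, 100))   # 250000.0 (clamped)
```

The hard part, as the post says, is not the controller but the measurement: everything hinges on making "number of independent miners" observable and non-gameable.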
|
|
|
|
Peter Todd (OP)
Legendary
Offline
Activity: 1120
Merit: 1164
|
|
February 18, 2013, 08:35:12 PM |
|
In the absence of a block size cap miners can be supported using network assurance contracts. It's a standard way to fund public goods, which network security is, so I am not convinced by that argument.
Network assurance contracts are far from a sure thing. It's basically an attempt to solve the tragedy of the commons, and the success rate society has had there is pitiful, even with strong central authorities. Assuming they will work is a big risk.

Perhaps I've been warped by working at Google so long but 100,000 transactions per second just feels totally inconsequential. At 100x the volume of PayPal each node would need to be a single machine and not even a very powerful one. So there's absolutely no chance of Bitcoin turning into a PayPal equivalent even if we stop optimizing the software tomorrow.
But we're not going to stop optimizing the software. Removing the block cap means a hard fork, and once we decided to do that we may as well throw in some "no brainer" upgrades as well, like supporting ed25519 which is orders of magnitude faster than ECDSA+secp256k1. Then a single strong machine can go up to hundreds of thousands of transactions per second.
I don't see any reason to think CPU power will be the issue. It's network capacity and disk space that is the problem. Your 100x the volume of PayPal is 4000 transactions a second, or about 1.2MiB/second, and you'll want to be able to burst quite a bit higher than that to keep your orphan rate down when new blocks come in. Like it or not, that's well beyond what most internet connections in most of the world can handle, both in sustained speed and in quota. (that's 3TiB/month) Again, P2Pool will look a heck of a lot less attractive.

You also have to ask the question: what % of that 3TiB/month results in unspent txouts? Ultimately it's the UTXO set that is the hard limit on the storage requirements for full validating nodes. Even at just 1% volume growth, you're looking at 3GiB/month growth in your requirement for fast-random-access memory. That's an ugly, ugly requirement - after all, if a block has n transactions, your average access time per transaction must be limited to 10 minutes/n to even just keep up.

EDIT: also, it occurs to me that one of the worst things about the UTXO set is the continually increasing overhead it implies. You'll probably be lucky if cost/op/s scales by even something as good as log(n) due to physical limits, so you'll gradually be adding more and more expensive constantly-online hardware for less and less value. All the time you're spending waiting for transactions to be retrieved from memory is time you aren't hashing. In addition your determinism goes down because inevitably the UTXO set will be striped across multiple storage devices, so at worst every tx turns out to be behind one low-bandwidth connection. God help you if an attacker figures out a way to find the worst sub-set to pick. UTXO proofs can help a bit - a transaction would include its own proof that it is in the UTXO set for each txin - but that's a lot of big scary changes with consensus-sensitive implications.
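The bandwidth figures above can be checked back-of-the-envelope. The ~300 bytes per transaction is an assumed average (chosen because it reproduces the 1.2MiB/s figure), not a protocol constant:

```python
# Sanity check of the post's bandwidth arithmetic under an assumed
# average transaction size.
TX_PER_SEC = 4000    # "100x the volume of PayPal"
BYTES_PER_TX = 300   # assumed average transaction size

bytes_per_sec = TX_PER_SEC * BYTES_PER_TX
mib_per_sec = bytes_per_sec / 2**20
tib_per_month = bytes_per_sec * 86400 * 30 / 2**40

print(f"{mib_per_sec:.2f} MiB/s sustained")  # ~1.14 MiB/s
print(f"{tib_per_month:.1f} TiB/month")      # ~2.8 TiB/month
```

Sustained rate and monthly quota both land near the post's figures, and that is before the burst headroom needed to keep orphan rates down.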
Again, keeping blocks small means that scaling mistakes, like the stuff Sergio keeps on finding, are far less likely to turn into major problems.

The cost of a Bitcoin transaction is just absurdly low and will continue to fall in future. It's like nothing at all. Saying Bitcoin is going to get centralized because of high transaction rates is kinda like saying in 1993 that the web can't possibly scale because if everyone used it web servers would fall over and die. Well yes, they would have done, in 1993. But not everyone started using the web overnight and by the time they did, important web sites were all using hardware load balancers and multiple data centers and it was STILL cheap enough that Wikipedia - one of the world's top websites - could run entirely off donations.
Your example has nothing to do with Bitcoin. Even in the early days it would be obvious to anyone who understood comp-sci that static websites are O(1) scaling per client, so there isn't any reason to think you couldn't create websites for as much load as you wanted. Meanwhile, unlike Wikipedia, Bitcoin requires global shared state that must be visible to, and mutable by, every client. Comparing the two ignores some really basic computer science that was very well understood even when the early internet was created in the '70s.
|
|
|
|
OhShei8e
Legendary
Offline
Activity: 1792
Merit: 1059
|
|
February 18, 2013, 08:45:38 PM |
|
In the light of this, and because the need for bitcoins primarily comes from the need for a decentralized, no-point-of-control system, I think it's not sufficient to call worries about centralization "vague": you have to clearly defend why this particular form of centralization can not be dangerous. The default is "centralization is bad".
It is a technical decision, not a political one. The block size cannot be determined on the basis of political beliefs. I'm pretty sure about this. If we're talking about centralization we should focus on Mt. Gox, but that's a different story.
|
|
|
|
misterbigg
Legendary
Offline
Activity: 1064
Merit: 1001
|
|
February 18, 2013, 08:47:24 PM |
|
I think we should put users first. What do users want? They want low transaction fees and fast confirmations.

This comes down to Bitcoin as a payment network versus Bitcoin as a store of value. I thought it was already determined that there will always be better payment networks that function as alternatives to Bitcoin. A user who cares about the store-of-value use case is going to want the network hash rate to be as high as possible. This is at odds with low transaction fees and fast confirmations.
|
|
|
|
|