rme (OP)
|
|
July 23, 2014, 07:07:18 PM |
|
We all know the recent news: the GHash.io pool controlling 51% of the hashrate. While some consider it a threat, others think it is not harmful.
The thing is that we have to do something to stop this from happening again.
My proposal is to start thinking about miners that join a pool as independent miners, not slave miners. This includes creating a new mining protocol that does not rely on the pool sending the list of transactions to include in a block. Each individual miner has to collect transactions on his own and mine those; this can be achieved by running a full node, or by running an SPV-like node that asks other nodes for transactions.
Once this protocol is developed and standardised, we as a community could require all pools to use it (because it is better, because it is more trustless...), not by imposing it but by recommending it.
Pool owners could send some instructions to the miner using this protocol: how many transactions to include per block (some pools want small blocks), how many 0-fee transactions to include, what the minimum fee per kB to include a transaction is, and some info for the coinbase field of the block.
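To make the idea concrete, such a policy message might look like the sketch below. This is purely hypothetical: no such protocol exists, and every field name here is made up for illustration.

```python
# Hypothetical sketch of the policy message a pool could send its miners
# under the proposed protocol. All field names are invented for illustration;
# the miner would apply this policy locally while choosing transactions itself.
import json

policy = {
    "max_txns_per_block": 2000,               # some pools want small blocks
    "max_free_txns": 10,                      # how many 0-fee txns to allow
    "min_fee_per_kb": 1000,                   # satoshis per kB
    "coinbase_text": "Mined by ExamplePool",  # made-up pool tag for the coinbase
}

message = json.dumps(policy)   # what the pool would send over the wire
received = json.loads(message) # what the miner would apply locally
print(received["min_fee_per_kb"])
```

The point is that the pool only ships policy, never a transaction list, so the miner's local node stays in control of what goes into the block.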
This way it is impossible to perform some of the possible 51% attacks:
- A pool owner can't mine a secret chain (selfish mining), because pool clients run an SPV or full node that has checkpoints and asks other peers about the length of the chain.
- A pool owner can't perform double spends or reverse transactions, because pool clients know all the transactions relayed to the network and whether they are already included in a block.
- A pool owner can't decide which transactions not to include (though he can configure the minimum fee).
- A pool owner can't capture all the rewards by preventing other pools from mining blocks, because the pool client knows the latest block regardless of whether it comes from his pool or another.
The only thing that a 51% pool owner can do is shut down his pool and drop the network hashrate by 51%, because he does not control the miners.
If the pool owner owns all the hardware in the pool, my proposal is not valid; if the pool clients don't use this protocol, my proposal is not valid.
I want to know if this is possible, if it is being developed, or if there is already a working protocol like this. I also want to read other people's ways to address this threat. Thanks for reading.
|
|
|
|
DeathAndTaxes
Donator
Legendary
Offline
Activity: 1218
Merit: 1079
Gerald Davis
|
|
July 23, 2014, 07:20:00 PM |
|
It is already possible with getblocktemplate, and it would be trivial for stratum to add similar support. However, no pools care and no miners care. The reality is most miners know it is bad (if only for perception) that a single pool has so much hashrate. They could easily move to another pool; it would take all of five minutes of work. They don't care enough to do that. What makes you think the same people who can't be bothered to use another pool, or to use p2pool, would instead generate their own transaction sets (which requires running a full node)? They won't. This is a human problem, not a technological one.
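For illustration, here is roughly what "selecting your own transaction set" looks like with getblocktemplate: the miner receives the template's transaction list and filters it against its own policy. The field names follow BIP 22 (`transactions`, with per-transaction `data` hex and `fee` in satoshis); the template data itself is made up, and this is only a sketch, not a full block assembler.

```python
# Sketch: a miner applying its own fee policy to the transaction list of a
# getblocktemplate response. Field names follow BIP 22; the sample template
# below is entirely made up for illustration.

MIN_FEE_PER_KB = 1000  # satoshis per kB: the miner's own policy, not the pool's

def select_txns(template, min_fee_per_kb=MIN_FEE_PER_KB):
    """Keep only transactions meeting the miner's minimum fee-rate policy."""
    selected = []
    for tx in template.get("transactions", []):
        size_kb = len(tx["data"]) / 2 / 1000.0      # hex chars -> bytes -> kB
        fee_per_kb = tx.get("fee", 0) / size_kb if size_kb else 0
        if fee_per_kb >= min_fee_per_kb:
            selected.append(tx)
    return selected

# A made-up template for illustration:
template = {
    "previousblockhash": "00" * 32,
    "transactions": [
        {"data": "ab" * 250, "fee": 5000},  # 250 bytes -> 20000 sat/kB: kept
        {"data": "cd" * 400, "fee": 100},   # 400 bytes ->   250 sat/kB: dropped
    ],
}
print(len(select_txns(template)))  # -> 1
```

In practice the template would come from a full node over JSON-RPC, which is exactly the "requires running a full node" cost being discussed.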
|
|
|
|
rme (OP)
|
|
July 23, 2014, 07:25:29 PM |
|
> It is already possible with getblocktemplate, and it would be trivial for stratum to add similar support. However, no pools care and no miners care. The reality is most miners know it is bad (if only for perception) that a single pool has so much hashrate. They could easily move to another pool; it would take all of five minutes of work. They don't care enough to do that. What makes you think the same people who can't be bothered to use another pool, or to use p2pool, would instead generate their own transaction sets (which requires running a full node)? They won't. This is a human problem, not a technological one.
I believe that getblocktemplate is slower and needs a full node running. I'm talking about a new protocol that works like an SPV client (asking random nodes in the network for transactions), is fast, and does not penalise the miner.
|
|
|
|
DeathAndTaxes
Donator
Legendary
Offline
Activity: 1218
Merit: 1079
Gerald Davis
|
|
July 24, 2014, 12:16:05 AM Last edit: July 24, 2014, 02:58:48 AM by DeathAndTaxes |
|
> I'm talking about a new protocol that works like an SPV client (asking random nodes in the network for transactions), is fast, and does not penalise the miner.
SPV nodes aren't trustless when it comes to unconfirmed txns. SPV nodes validate txns by ensuring they are in a block on the longest chain. Validity is directly related to depth, and a depth of zero means validity is indeterminate.

Also, I can't think of a more horribly inefficient (and painfully slow) method to reconstruct the txn set than by effectively running a DoS attack against other nodes rather than maintaining a local copy. Remember there is no such thing as "the network"; the network is a collection of other nodes. So you are suggesting that a miner would not run a full node and instead would offload that work onto other full nodes. Miners (i.e. the people getting paid to secure the network) would put all the txn-set load on non-mining (i.e. unpaid) nodes rather than run a full node themselves. You can also see that querying multiple nodes for the txn set is never going to be faster than a lookup taking a few milliseconds against your local full node.

Still, the larger point remains that (many) miners simply don't care. Today a miner could:
a) use p2pool or solo-mine (IIRC a miner with 5% of the network was using ghash.io; if anyone can solo-mine, it is a massive farm with 5% of the network);
b) demand pools allow them to select the txn set (i.e. use GBT, or have stratum enhanced to support txn selection);
c) use any centralized pool other than the largest one (yes, still centralized, but hashrate spread across more entities is better than it all concentrated at the top).

The reality is 40%+ choose the single worst possible option: using the largest centralized pool available. Any other option would improve security and wouldn't take more than a token amount of effort. If miners aren't willing to do that, why would they be willing to do even more?
|
|
|
|
gmaxwell
Moderator
Legendary
Offline
Activity: 4284
Merit: 8808
|
|
July 24, 2014, 02:10:30 AM |
|
> The reality is 40%+ choose the single worst possible option, which is using the largest centralized pool. Any other option would improve security and wouldn't take more than a token amount of effort.
There are a bunch of complicating factors. For one, 'the largest' pool physically owns and/or controls a substantial portion of that hashpower. They admit to 40%, but there are reasons to believe they are not being completely honest (during DoS attacks which made the pool seemingly unreachable from the internet, they did not experience a large apparent hashrate drop).

Then combine that with the fact that many miners, even— historically— some pool operators, have misunderstood mining as a race in which the 'fastest' gets super-linear rewards (even ignoring misconduct like selfish mining)... Then you have factors like— I've met people who have invested millions in mining and had _no idea_ mining served any purpose other than being the lottery for the initial coin distribution.

Plus we have ecosystem symmetry-breaking: centralized pools that must be trusted not to rob the miners were widely adopted first, the trust requirements hurt competition, and the lack of good reporting tools except via centralized pools discourages using multiple pools (assuming equal risk and fees, load-balancing between pools is the variance-minimizing strategy)... etc. Just a lot of little cuts, which makes it hard to make progress.

In my experience most people mining now are utterly saturated dealing with the untrustworthiness of hardware vendors; they're too exhausted (or bankrupt) after that to worry that their actions might be making the Bitcoin they're receiving worthless... Sometimes I think the maturity window should have been longer than 100 blocks, too, since the fact that many miners immediately sell their income can't help things either.
|
|
|
|
DeathAndTaxes
Donator
Legendary
Offline
Activity: 1218
Merit: 1079
Gerald Davis
|
|
July 24, 2014, 03:01:34 AM |
|
> sometimes I think the maturity window should have been longer than 100 blocks

It is something I have considered myself. What if the maturity window were very long, like say 4,320 blocks? Yes, miners would probably prefer it not be that long, but it would mean their reward was tied to the future value of the network. Do something stupid (like let a pool perform a 51% attack) and your earned but uncollected rewards may end up worthless (or at least worth a lot less).
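For scale, here is the arithmetic behind those two window sizes at the target average of one block per ten minutes:

```python
# Quick arithmetic: how long a coinbase maturity window lasts in wall-clock
# time at the target average of one block per 10 minutes.
BLOCK_INTERVAL_MIN = 10

def window_days(blocks):
    """Length of a maturity window of `blocks` blocks, in days."""
    return blocks * BLOCK_INTERVAL_MIN / 60 / 24

print(window_days(100))   # current window: ~0.69 days (about 17 hours)
print(window_days(4320))  # proposed window: 30.0 days
```

So the proposal stretches the window from under a day to a full month, which is what would tie a miner's pending income to the network's medium-term health.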
|
|
|
|
theymos
Administrator
Legendary
Offline
Activity: 5390
Merit: 13427
|
|
July 24, 2014, 04:02:58 AM |
|
Yeah, the incentives for miners are wrong. I think that some changes to how mining works will probably be necessary eventually, though it unfortunately might take a serious incident to make this palatable for Bitcoin users. I really like Proof of Activity. It fixes several problems at once:
- 51% attacks are made much more difficult, and pools have a greater disincentive to even try them.
- Users are incentivized to run full nodes, which is important.
- Miners lose all power over choosing which transactions to confirm. Instead, this power goes to randomly selected Bitcoin users. I think that this will tend to make fee and standardness policies more varied and fair.

Some people have proposed making pooling impossible, but I think that this would actually make the problem worse. Everyday people would be even less inclined to mine because their chance of winning would be almost 0. Only people who could afford to buy a significant amount of mining power would try mining, and economies of scale would still favor larger miners. Ideally, everyone would be required to use GBT with a full node or something like P2Pool, but enforcing this seems difficult or impossible. We might just have to learn to live with big pools (and learn how to do this without giving up on decentralization).
|
1NXYoJ5xU91Jp83XfVMHwwTUyZFK64BoAD
|
|
|
rme (OP)
|
|
July 24, 2014, 06:31:31 AM |
|
The thing is to make getblocktemplate (or something similar) easy enough for miners that they can set it up as fast as stratum. Then the Bitcoin Foundation, the core devs, or other people with influence should try to contact the main pools and get them to agree on some rules (use this brand new protocol because it is better; gradually remove the other one because it is centralised; never reach 30% or more of the hashrate or you will be considered harmful).
This way pools would soon start showing on their websites "This pool follows the Fair Mining Treaty and uses getblocktemplate". If this is done the right way (most pools join, the new protocol is faster...), most miners in public pools will use it; then we tell mining-software developers to deprecate the old protocol because it is centralised and only a few miners still use it. Finally we tell pool owners that they need to remove the old protocol. They do it, and we are fine.
I know this sounds difficult, but we should work on protocols that let the individual miner choose the transactions they want to include; that way the pool owner has no say about double spends, mining another chain, etc.
But pool owners should still decide things like the minimum fee to include in a block, the coinbase, and the maximum block size (and why not, a minimum size).
|
|
|
|
gmaxwell
Moderator
Legendary
Offline
Activity: 4284
Merit: 8808
|
|
July 24, 2014, 08:53:18 AM |
|
> Yeah, the incentives for miners are wrong.

I don't think the incentives are wrong. But even in the face of acceptable incentives, sometimes bad outcomes happen. There are people mining right now with millions invested who hardly understand Bitcoin at all... Incentives and rationality assume a degree of knowledge which doesn't always exist.

Unfortunately, many of the issues that apply to POS also apply to proof of activity. Sometimes they're less severe (see, for example, the issues with the worthlessness of old private keys— with PoA it just gives you a huge reduction in computation costs, but the computation is not free), but I think in all cases they create some amount of true centralization advantage (mining is no longer linear in resources, because you must combine them) and admit attacks which Bitcoin would have been completely secure against. Maybe in reality and with the right parameters the trade-offs are good, but it's very hard to analyze— ever so much more so than POW.

PoA also has some issues with behavior at odds with good security practices— if the actual spending keys must be used, then multisig, offline wallets, and HSMs become expensive to use— we don't need more disincentives to good security. If the mining rights are delegated to other keys, users may transfer ownership— further complicating the incentive arguments. The schemes that break pooled mining are non-starters; they'd create huge advantages for vertically integrated operations which can get acceptable variance without risk, and huge advantages for hosted operations.

The sets of proposals I am most fond of are:
* Coinbase-only mining— where miners pool only for their coinbase transactions, and are free to generate their network consensus locally or get it from another source. With this the entire network could safely (in terms of network safety, not miner income) be on a single pool.
* Smart-property mining— where mining ASICs will only process consensus-state work authenticated by their owner (or owner-designated parties). This would allow miners in hosted facilities to not be at risk of being redirected for attacks. (Sure, any hardware tamper-proofing can be defeated, but if you have to decap chips to bypass the protection, it's cheaper to just manufacture new chips.)
|
|
|
|
trout
|
|
July 24, 2014, 06:53:23 PM |
|
I think miners choose the largest pool not because they don't care, but because they want to reduce the variance. Variance is very important when the difficulty is rising fast - if you mine less than expectation in the first two weeks, you may never recover the expectation if the difficulty jumps but your hashrate doesn't.
Thus it is rational (though selfish) to select the biggest pool.
|
|
|
|
gmaxwell
Moderator
Legendary
Offline
Activity: 4284
Merit: 8808
|
|
July 24, 2014, 09:47:12 PM |
|
> I think miners choose the largest pool not because they don't care, but because they want to reduce the variance. Variance is very important when the difficulty is rising fast - if you mine less than expectation in the first two weeks, you may never recover the expectation if the difficulty jumps but your hashrate doesn't.
> Thus it is rational (though selfish) to select the biggest pool.
You're reasoning incorrectly about expected returns. The expected return is the expected return, regardless of whether there are any future blocks or not. The hashrate decreasing or increasing isn't relevant— mining is an instantaneous, memoryless process. If you are unlucky and get less than your expectation, or lucky and get more— there is no making up for it, regardless of what the difficulty does in the future (see also: gambler's fallacy).

Beyond that, the difference in variance between mining in a 10% hashrate pool and a 40% hashrate pool is negligible— the variance is dominated by the network block-finding variance long before that point. Moreover, the optimal variance-reducing strategy (assuming that all pools are equally good) is to mine on all pools weighted proportionally to their hashrate— something which I've never seen anyone do; so much for "rational". (Not to mention that the fees and risk of undetectable theft further lower the expectation of centralized pooling.)

Even ignoring that, it still remains that control over the consensus state and pooling for payment are technically orthogonal. Even if you had some great need to be in the largest possible income-sharing pool, there is no need to fuck over the decentralization of Bitcoin (and a lot of reason not to, since doing so _will_ result in outcomes adverse to your interest, e.g. the network changing the POW function, or eroding trust in Bitcoin), and so you'd expect to see people solving that so they could have 80% pools safely if the motivation were really variance reduction and not ignorance and laziness...
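The shape of the variance argument can be checked with a rough Monte Carlo sketch. The parameters here are my own illustrative assumptions, not from the thread: a miner with 0.01% of the network hashrate, proportional payout with no fees, a 2016-block difficulty period, and a 25 BTC reward. The expectation is identical on any pool; only the spread differs, and it shrinks with diminishing returns as the pool grows.

```python
# Rough Monte Carlo sketch of pool-size variance. Assumptions (mine, for
# illustration only): proportional payout, no fees, independent block trials,
# 25 BTC reward, a miner owning 0.01% of the network hashrate.
import random
import statistics

random.seed(42)
BLOCKS = 2016          # blocks in one difficulty period (~2 weeks)
REWARD = 25.0          # block reward in BTC (2014-era)
MINER_FRACTION = 1e-4  # miner's share of total network hashrate

def simulate_income(pool_share, trials=1000):
    """Miner's income per period when mining on a pool of size pool_share."""
    incomes = []
    for _ in range(trials):
        # Blocks found by the pool in one period: Binomial(BLOCKS, pool_share).
        found = sum(random.random() < pool_share for _ in range(BLOCKS))
        # Proportional payout: the miner gets his fraction of the pool's haul.
        incomes.append(found * REWARD * MINER_FRACTION / pool_share)
    return incomes

small = simulate_income(0.01)  # a 1% pool
large = simulate_income(0.40)  # a 40% pool
print(round(statistics.mean(small), 2), round(statistics.stdev(small), 2))
print(round(statistics.mean(large), 2), round(statistics.stdev(large), 2))
# Mean income is ~5.04 BTC in both cases; only the standard deviation differs.
```

Under these assumptions the spread shrinks roughly with the square root of pool size, which is why the step from 10% to 40% buys far less variance reduction than the step from 1% to 10%.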
|
|
|
|
trout
|
|
July 25, 2014, 12:28:55 AM |
|
>> I think miners choose the largest pool not because they don't care, but because they want to reduce the variance. Variance is very important when the difficulty is rising fast - if you mine less than expectation in the first two weeks, you may never recover the expectation if the difficulty jumps but your hashrate doesn't.
>> Thus it is rational (though selfish) to select the biggest pool.

> You're reasoning incorrectly about expected returns. The expected return is the expected return, regardless of if there are any future blocks or not. The hashrate decreasing or increasing isn't relevant— mining is an instantaneous, memory-less process. If you are unlucky and get less than your expectation or lucky and get more— there is no making up for it, regardless of what the difficulty does in the future (see also: gamblers fallacy).

I wasn't reasoning about the expectation, only about the variance. The expectation is the same on any pool, but on a small pool you are more likely to get a value well below the expectation (or well above, indeed), which is a gamble. Taking this gamble is not rational. In other words, continuing the gambling analogy, many would prefer winning 2 with probability 1/2 over winning 100 with probability 1/100, because the former is less risky. Minimizing variance is minimizing risk, and I still maintain that this is rational given that the expectation is the same.

The increasing hashrate is relevant because, had it been stable, in the long run the profit per day from mining on any pool (or even solo) would be the same: it converges to the expectation. (This is what I referred to as "recovering the expectation".)

> Beyond that, the difference in variance between mining in a 10% hashrate pool and a 40% hashrate pool is negligible— the variance is dominated by the network finding variance long before that point.

I didn't get the last point.

> Moreover, the optimal variance reducing strategy (assuming that all pools are equally good) is to mine on all pools weighed proportional to their hashrate

I don't think so. I just ran some numbers, and unless I'm making some very silly mistakes the variance only increases.

> Even ignoring that, it still remains that control over the consensus state and pooling for payment is technically orthogonal. Even if you had some great need to be in the largest possible income sharing pool, there is no need to fuck over the decentralization of bitcoin (and a lot of reason not to, since doing so _will_ result in outcomes adverse to your interest, e.g. the network changing POW function or eroding trust in Bitcoin), and so you'd expect to see people solving that so they could have 80% pools safely if the motivation were really variance reduction and not ignorance and lazyness...

It's an example of the tragedy of the commons: being selfish and letting others solve the problem, or sacrificing one's interest for the sake of a negligible contribution to the greater good. I agree, though, that the variance difference between the largest and the second or perhaps even the third largest pool is negligible, so choosing the largest pool is probably mostly ignorance. However, if 3-4 pools have >50% of the hashrate this is still hardly a decentralized environment, so choosing between the few largest is hardly relevant. Choosing one of those 1% pools (incl. p2pool) does appear risky though.
|
|
|
|
gmaxwell
Moderator
Legendary
Offline
Activity: 4284
Merit: 8808
|
|
July 25, 2014, 12:46:07 AM |
|
> The increasing hashrate is relevant, since had it been stable, in the long run the profit per day from mining on any pool (or even solo) would be the same: it converges to the expectation. (This is what I referred to as "recovering expectation".)

It does not converge to the expectation after the fact. This is the gambler's fallacy. You expect to be near the expectation, indeed, but after your first die is cast your winnings and losses are _permanent_ and have no bearing on future results.

>> Beyond that, the difference in variance between mining in a 10% hashrate pool and a 40% hashrate pool is negligible— the variance is dominated by the network finding variance long before that point.

> I didn't get the last point.

There are pretty substantial differences in the mining income for the whole network... Once you're up at the 10% level, the overall network variation is a substantial part of the variation you experience; the change in your 10th-percentile income between a 10% and a 40% pool is pretty small... a percent or so.

> It's an example of the tragedy of the commons. Being selfish and letting others solve the problem or sacrificing one's interest for the sake of a negligible contribution to the greater good.

Usually "tragedy of the commons" refers to interests being out of alignment (e.g. what's good for you is bad for everyone). Arguably the issue is a freeloading loss, but I'm doubtful. Considering that hardware companies are successfully selling hardware at prices _far_ beyond what reasonable models of future income show is profitable, it's really hard for me to believe that miners are acting rationally enough to be micromanaging operating costs to the point where they're sitting around not writing functionality because they hope someone else will.

> I agree though that the variance difference between the largest and second or perhaps even the third largest pool is negligible, so choosing the largest pool is probably mostly ignorance. However, if 3-4 pools have >50% hashrate this is still hardly a decentralized environment, so choosing between the few largest is hardly relevant. Choosing one of those 1% pools (incl. p2pool) does appear risky though.

Okay, I am glad we agree on that first point. On the second— well, let's see— people buying hardware that is a sure loss by basically any reasonable calculation, dropping it on 5% PPS pools... taking half their income to a blockchain dice site... Every nameable centralized pool has been hacked (except f2pool AFAIK, but it hasn't been around that long), in some cases with _large_ amounts stolen. In the case of ghash.io, in addition to taking the operator's funds, they executed doublespends to the tune of 3k BTC lost. And that's after survivorship bias— many pools in the past were popular and vanished with users' funds. By comparison I don't think P2Pool is risky at all; it's the centralized pools that are risky, even if you're counting on selling your coin before ecosystem damage makes it worthless.
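The convergence point being argued here can be illustrated with a short simulation (my sketch, with made-up parameters for a tiny solo miner): the *average* payout per day does settle near its expectation, but the *cumulative* shortfall from expectation is a random walk, so past bad luck is never "made up".

```python
# Sketch of the convergence argument under constant difficulty. Assumptions
# (mine, for illustration): a tiny solo miner, one independent Bernoulli
# trial per block, fixed win probability, 25 BTC reward.
import random
random.seed(7)

P_WIN = 0.001        # chance this miner finds any given block
BLOCKS_PER_DAY = 144 # blocks per day at one block per 10 minutes
REWARD = 25.0
DAYS = 10000

total = 0.0
for _ in range(DAYS):
    total += sum(random.random() < P_WIN for _ in range(BLOCKS_PER_DAY)) * REWARD

expected = DAYS * BLOCKS_PER_DAY * P_WIN * REWARD
print(total / DAYS)      # per-day average: settles near 3.6 (= 144 * 0.001 * 25)
print(total - expected)  # cumulative deviation: a random walk, typically
                         # growing like sqrt(time) rather than shrinking to 0
```

Both sides of the exchange are visible at once: the per-day rate converges (trout's constant-difficulty claim), while the absolute deviation has no tendency to return to zero (the gambler's-fallacy point).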
|
|
|
|
trout
|
|
July 25, 2014, 01:26:14 AM |
|
>> The increasing hashrate is relevant, since had it been stable, in the long run the profit per day from mining on any pool (or even solo) would be the same: it converges to the expectation. (This is what I referred to as "recovering expectation".)

> It does not converge to the expectation after the fact. This is the gamblers fallacy. You expect to be near the expectation, indeed, but after your first dice is cast your winnings and losses are _permanent_ and have no bearing on the future results.

Sure, all winnings are permanent and independent at every instant. What I'm saying has nothing to do with this, though. If the difficulty is constant, your payout per day converges to the expectation of the payout on the first day, which is the same on any pool; the variance is irrelevant. If the difficulty increases, your payout per day does not converge to the expectation on the first day, since the expectation itself changes (it goes to 0). This makes the variance, and consequently the choice of pool, relevant.

>> Beyond that, the difference in variance between mining in a 10% hashrate pool and a 40% hashrate pool is negligible— the variance is dominated by the network finding variance long before that point.

>> I didn't get the last point.

> There is pretty substantial differences in the mining income for the whole network... once you're up at the 10% level the overall network variation is a substantial part of the variation you experience, the change in your 10%-tile income between 10% and 40% pools is pretty small.. a percent or so.

>> It's an example of the tragedy of the commons. Being selfish and letting others solve the problem or sacrificing one's interest for the sake of a negligible contribution to the greater good.

> Usually "tragedy of the commons" refers to interests being out of alignment (e.g. whats good for you is bad for everyone). Arguably the issue is a freeloading loss, but I'm doubtful. Considering that hardware companies are successfully selling hardware at price _far_ beyond what reasonable models of future income show is profitable, it's really hard for me to buy that miners are acting rationally enough to be micromanaging operating costs to the point where they're sitting around not writing functionality because they hope someone else will.

OK, I agree that what is rational is one thing and what people do is another. I think miners get to experience the variance very quickly, though. It probably often goes like this: a miner with his new rig joins a small pool because it's good for everyone. He watches his income for a couple of hours and sees that it is zero. Meanwhile ghash gets a few blocks. He thinks: well, I could have already made some coin! Damn. I guess I have to stick with my pool, because now we are more likely to find a block! Wait, that's the gambler's fallacy. What I was doing in the past few hours does not influence my future income. I can just as well switch to ghash now and start earning some coin!!! Of course there are also some who quickly get a profit from that 1% pool, but it being a 1% pool, most don't.

> Every nameable centeralized pool has been hacked (except f2pool afaik, but it hasn't been around that long), in some cases with _large_ amounts stolen. In the case of ghash.io in addition to taking the operators funds they executed doublespends to the tune of 3kbtc lost. And thats after the survivorship bias— many pools in the past were popular and vanished with users funds. By comparison I don't think P2Pool is risky at all, it's the centralized pools that are risky even if you're counting on selling your coin before ecosystem damage makes it worthless.

Yes, that's an important consideration and a different argument. Ultimately I think it would have been easy to make P2Pool dominate the network, and then the variance argument would play in its favour, had it not been for the fact that with P2Pool you need to maintain a copy of the blockchain.
|
|
|
|
|