jonald_fyookball (OP)
Legendary
Offline
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
September 04, 2015, 03:16:48 AM
I understand the block size limit is an anti-spam measure, but why can't miners just form their own consensus on how big a block is acceptable? If more than 51% of the mining power agrees a block is too big, they will ignore it and build a longer chain.
We might have more reorgs, but what's wrong with this idea?
The main idea here is that the block size limit was actually a temporary measure and arguably should not be a protocol rule at all. That's why we are having so much difficulty: we are trying to form consensus on an implementation detail rather than on the core rules.
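A minimal sketch of the policy proposed here, assuming each miner simply applies a locally configured soft cap when deciding which blocks to build on (the limit value and function names are hypothetical, not from any real implementation):

Code:
# Illustrative sketch (not actual Bitcoin software): each miner enforces
# its own soft cap on block size. A block is orphaned whenever miners
# controlling a majority of the hashrate refuse to build on it.

MY_SOFT_LIMIT = 8_000_000  # bytes; hypothetical value each miner chooses

def will_build_on(block_size_bytes: int, soft_limit: int = MY_SOFT_LIMIT) -> bool:
    """Return True if this miner will extend a chain tipped by this block."""
    return block_size_bytes <= soft_limit

# If more than 51% of hashpower returns False for a block, the chain
# without that block accumulates work faster and eventually overtakes it.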
Panthers52
September 04, 2015, 03:31:35 AM
This would create uncertainty about whether a transaction is confirmed or not. As it stands now, if a transaction has one confirmation, it is all but certain to make it into the "final" blockchain, with virtually zero chance of being double-spent. Under your proposal, a transaction with even 2-3 (or more) confirmations could still be double-spent.
This would also leave miners not knowing which block to build on top of. There could be 2-3 branches, each 1-3 blocks deep, that various mining pools are attempting to extend.
This would create an incentive not to mine, because a block that follows what the miner believes are the rules could get orphaned even if it propagates fully throughout the network.
Lastly and most importantly, this would violate one of Bitcoin's core principles: that the chain with the most work is the valid chain.
jonald_fyookball (OP)
Legendary
Offline
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
September 04, 2015, 03:37:27 AM
Thanks for the reply, Panthers.
In my vision, there could be consensus on the block size limit, but it would not be hardcoded. It could be changed whenever mining power decides to change it, thereby eliminating those confirmation issues.
I don't quite understand the "chain with the most work" point you bring up. I thought the measure of work is based on the difficulty of the hash, which has nothing to do with how many transactions are in the block.
Panthers52
September 04, 2015, 04:02:30 AM
Quote from: jonald_fyookball
I don't quite understand the "chain with the most work" point you bring up. I thought the measure of work is based on the difficulty of the hash, which has nothing to do with how many transactions are in the block.

You are correct: the measure of work is based on the difficulty of each block contained in the blockchain. If the network consensus were that 5 MB blocks are valid, a pool were to mine 3 consecutive 4.5 MB blocks, and these blocks propagated properly throughout the network, then all nodes that received these blocks should mine on top of the third 4.5 MB block. Under your proposal, a miner could ignore those blocks if it arbitrarily thought they were too large, and could start mining on top of a chain that is not the most cumulatively difficult chain.
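For reference, a minimal sketch of the "most cumulative work" rule under discussion, assuming each block is represented only by its difficulty target; the per-block work is the expected hash count, computed the way Bitcoin does, and everything else is simplified for illustration:

Code:
def block_work(target: int) -> int:
    """Expected number of hash attempts needed to meet this target."""
    return 2**256 // (target + 1)

def most_work_chain(chains):
    """Given candidate chains as lists of per-block targets, return the
    chain with the greatest total work -- not the one with the most blocks."""
    return max(chains, key=lambda chain: sum(block_work(t) for t in chain))

# Note that block *size* never appears: a 4.5 MB block and a 1 kB block at
# the same difficulty contribute identical work, which is jonald's point.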
jonald_fyookball (OP)
Legendary
Offline
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
September 04, 2015, 04:10:45 AM
Quote from: Panthers52
Under your proposal, a miner could ignore those blocks if it arbitrarily thought they were too large, and could start mining on top of a chain that is not the most cumulatively difficult chain.

Yes, but that is no different from ignoring any other "longer" chain that the mining node deems invalid. In your example, the miner ignoring the blocks is going against consensus and will therefore sooner or later have a shorter chain than the main one.
Panthers52
September 04, 2015, 04:27:32 AM
Quote from: jonald_fyookball
Yes, but that is no different from ignoring any other "longer" chain that the mining node deems invalid. In your example, the miner ignoring the blocks is going against consensus and will therefore sooner or later have a shorter chain than the main one.

I am not aware of any reorg in which 3+ valid blocks were properly propagated, zero additional blocks were broadcast for 5+ minutes, and then those 3 blocks were orphaned (I am not an expert, so I may not be aware of a situation matching this description). In your proposal, what is considered "valid" is not really known until after the fact.

As it stands now, the largest three mining pools find roughly 53% of the blocks. If these three pools made it a habit to orphan chains that do not contain any of their blocks in the most recent three, all of them would see a mass exodus of miners; under your proposal, however, these pools could easily claim they were ignoring blocks that were "too large". Miners need certainty about whether a given block is ultimately going to be accepted into the "final" blockchain.
jonald_fyookball (OP)
Legendary
Offline
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
September 04, 2015, 04:35:39 AM Last edit: September 04, 2015, 04:46:26 AM by jonald_fyookball
You make a good point that the miners want to know which blocks will be valid. I guess I'm seeing the wisdom of BIP 100, where the limit can float but does so in an organized manner.
I wonder what BIP 100 would look like if we changed the voting threshold from 80 percent to 51 percent. It would eliminate the so-called 21 percent attack and make things more orderly.
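A hypothetical sketch of the 51% variant being floated here, with an assumed voting window, vote encoding, and doubling/halving steps; this is illustrative only and is not BIP 100's actual mechanism:

Code:
def adjust_limit(recent_votes, current_limit, threshold=0.51):
    """recent_votes: each miner's preferred size in bytes, one per recent block."""
    n = len(recent_votes)
    want_bigger = sum(1 for v in recent_votes if v > current_limit)
    want_smaller = sum(1 for v in recent_votes if v < current_limit)
    if want_bigger / n >= threshold:
        return current_limit * 2    # majority asked for more room
    if want_smaller / n >= threshold:
        return current_limit // 2   # majority asked for less
    return current_limit            # no 51% majority: limit stays put

# With a 51% threshold, a 21% minority of hashpower can no longer block a
# change on its own -- the "21 percent attack" jonald refers to.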
Panthers52
September 04, 2015, 04:57:33 AM
Quote from: jonald_fyookball
You make a good point that the miners want to know which blocks will be valid. I guess I'm seeing the wisdom of BIP 100, where the limit can float but does so in an organized manner.

I don't like BIP 100 because it gives too much control to a small minority of miners. BIP 100 also does not address long-term scalability issues, but rather kicks the can down the road a little bit (it acts like Congress). I like BIP 101 because it (nearly) immediately increases the maximum block size, which is needed, and causes the maximum block size to grow over time at roughly the rate of Moore's law.

Some critics say that Moore's law is slowing down and will not keep up with BIP 101's maximum block size increases. To counter this argument, I would suggest a compromise: the maximum block size would still double every ~105,000 blocks (as in BIP 101), but starting 15,000 blocks before each scheduled increase, a soft-fork proposal would automatically be put in place that could stop or cancel the next doubling. The soft-fork proposal could say, for example, that if at any time from 15,000 blocks before the increase until 2,000 blocks before it, 90% of the most recent 1,000 blocks signal agreement with the soft fork, then the next doubling is canceled and instead rescheduled for 105,000 blocks later.

For example, if the maximum block size is about to increase from 32 MB to 64 MB at block 700,000, and the miners oppose this increase by including a flag in 90% of the most recent 1,000 found blocks, then the maximum block size will not increase at block 700,000 and will instead be scheduled to increase to 64 MB at block 805,000 (assuming a similar soft fork is not approved immediately before that increase).
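A sketch of the cancellation rule described above, under the stated assumptions (105,000-block doublings, a veto window from 15,000 down to 2,000 blocks before each doubling, and a 90% threshold over the most recent 1,000 blocks); all names here are illustrative:

Code:
DOUBLING_INTERVAL = 105_000   # blocks between scheduled doublings
WINDOW_OPEN = 15_000          # window opens this many blocks before a doubling
WINDOW_CLOSE = 2_000          # and closes this many blocks before it
LOOKBACK = 1_000              # flags are tallied over this many recent blocks
VETO_SHARE = 0.90             # share of flagged blocks needed to cancel

def doubling_canceled(height, scheduled_height, oppose_flags):
    """oppose_flags: booleans from the most recent LOOKBACK blocks."""
    in_window = (scheduled_height - WINDOW_OPEN
                 <= height
                 <= scheduled_height - WINDOW_CLOSE)
    return in_window and sum(oppose_flags) / len(oppose_flags) >= VETO_SHARE

# Worked example from the post: a 32 MB -> 64 MB doubling at block 700,000
# that gets vetoed is rescheduled for block 700,000 + 105,000 = 805,000.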
tadakaluri
September 04, 2015, 05:24:09 AM
Most major Bitcoin mining outfits (21 Inc., BitFury, BTCChina, etc.) support BIP 100, whereas among services (BitGo, BitPay, Circle, Blockchain.info, etc.), most support BIP 101.
brg444
September 04, 2015, 03:17:27 PM
Quote from: Panthers52
I don't like BIP 100 because it gives too much control to a small minority of miners. [...] I like BIP 101 because it (nearly) immediately increases the maximum block size, which is needed, and causes the maximum block size to grow over time at roughly the rate of Moore's law.

BIP 101's proposed increases are unnecessarily large, and it also favors large miners. Your soft-fork proposition amounts to having miners vote, which is IMO never a good idea. At a certain point large miners, who should become the majority, will be incentivized to create blocks as large as they can and would veto any such attempt to "cancel" the increase.
"I believe this will be the ultimate fate of Bitcoin, to be the "high-powered money" that serves as a reserve currency for banks that issue their own digital cash." Hal Finney, Dec. 2010
Peter R
Legendary
Offline
Activity: 1162
Merit: 1007
September 04, 2015, 05:21:32 PM Last edit: September 04, 2015, 05:42:42 PM by Peter R
Jonald is correct: the longest chain (composed of valid transactions) should and will decide, just as it has historically done and just as described in the Bitcoin white paper.

Although we've had an explicit block size limit since the summer of 2010, the economic effects of this limit were nil. The limit was far above the free-market equilibrium point for block space production, and so did not distort the market.

The proposals in favour of a restrictive block size limit would act as a "production quota" that forces the block size below its free-market equilibrium. As shown in the figure below, economic theory suggests that we suffer a "deadweight loss" of economic activity equal to the area shaded brown in the lower chart. This is an entirely different economic model than the one Bitcoin has been operating on until recently.

My understanding is that a production quota that distorts free-market dynamics (lower chart) can only be a net social good if it reduces or eliminates some negative externality, and the net loss of economic activity is outweighed by the net gain from reducing that externality. I suppose the small-blockers could argue that the negative externality is "centralization" or "insufficient hash power for security"; however, as theZerg points out, the area shaded brown is also "unmet" demand that could leak into an alt-coin or a sidechain...
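A toy numerical version of this deadweight-loss argument, assuming linear demand and supply curves for block space; every number below is made up purely for illustration:

Code:
def deadweight_loss(a, b, c, d, quota):
    """Demand: P = a - b*Q.  Supply: P = c + d*Q.  quota: capped quantity Qq.
    A binding quota leaves a triangle of foregone trade:
    DWL = 1/2 * (Q* - Qq) * (Pd(Qq) - Ps(Qq))."""
    q_star = (a - c) / (b + d)       # free-market equilibrium quantity Q*
    if quota >= q_star:
        return 0.0                    # quota above equilibrium distorts nothing
    p_demand = a - b * quota          # willingness to pay at the quota
    p_supply = c + d * quota          # marginal cost at the quota
    return 0.5 * (q_star - quota) * (p_demand - p_supply)

# A non-binding cap (like the limit historically) costs nothing (Q* = 30 here):
print(deadweight_loss(a=100, b=1, c=10, d=2, quota=50))  # 0.0
# A binding cap destroys surplus -- the brown triangle in the lower chart:
print(deadweight_loss(a=100, b=1, c=10, d=2, quota=20))  # 150.0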
poeEDgar
September 04, 2015, 05:28:58 PM
Quote from: jonald_fyookball
I understand the block size limit is an anti-spam measure, but why can't miners just form their own consensus on how big a block is acceptable? If more than 51% of the mining power agrees a block is too big, they will ignore it and build a longer chain.

Ask Mike Hearn why he thinks checkpoints might be necessary, just in case this happens.
I woulda thunk you were old enough to be confident that technology DOES improve. In fits and starts, but over the long term it definitely gets better.
jonald_fyookball (OP)
Legendary
Offline
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
September 04, 2015, 05:58:26 PM
Quote from: poeEDgar
Ask Mike Hearn why he thinks checkpoints might be necessary, just in case this happens.

I fail to see what Mike Hearn or checkpoints have to do with this conversation. Please stay on topic.

@Peter R: thanks for the comments. Yes, the longest chain "will" decide. The hardcoding of a block size is part of consensus, just as miners are running the same codebase overall; I question the wisdom of making it part of the code at all. However, it seems that having no limit could be problematic, and as Panthers pointed out, miners do want some predictability, which is why I'm liking BIP 100 more. Do you know if Jeff Garzik ever considered making it a 51% vote?
poeEDgar
September 04, 2015, 06:01:38 PM
Quote from: jonald_fyookball
I fail to see what Mike Hearn or checkpoints have to do with this conversation. Please stay on topic.

It may have been a mere quip, but it was on topic. Are you suggesting that the threshold for consensus in a hard fork be lowered to 51%?
I woulda thunk you were old enough to be confident that technology DOES improve. In fits and starts, but over the long term it definitely gets better.
jonald_fyookball (OP)
Legendary
Offline
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
September 04, 2015, 06:03:39 PM
Quote from: poeEDgar
It may have been a mere quip, but it was on topic. Are you suggesting that the threshold for consensus in a hard fork be lowered to 51%?

Not necessarily. It depends on what kind of change we're talking about. What I'm hypothesizing in this thread is that maybe block size shouldn't be a protocol rule to begin with.
tl121
September 04, 2015, 06:04:12 PM
There are a few good reasons to support removing the block size limit from the consensus rules.
1. It is the simplest possible solution to expanding the block size, involving only the removal of code from the consensus-critical code.
2. It does not require divination (technology forecasting) and the resulting contentious numbers that are trolling for argument.
3. It relies on the existing "longest chain" mechanism and does not require a new distributed algorithm.
4. With the existence of numerous complex proposals (BIPs and otherwise) to increase the block size, this is the only proposal that can possibly serve as a focal point (Schelling point) on which the community can reach agreement.
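A simplified sketch of point 1 (Python pseudocode, not Bitcoin Core's actual C++): the limit amounts to a single check inside consensus-critical block validation, so removing it is literally a deletion of code rather than an addition:

Code:
MAX_BLOCK_SIZE = 1_000_000   # the 1 MB consensus constant, circa 2015

def check_block(serialized_size, proof_of_work_valid, all_txs_valid):
    if not proof_of_work_valid or not all_txs_valid:
        return False
    # The single consensus rule under debate. tl121's proposal is to delete
    # the two lines below and let miners enforce size limits themselves:
    if serialized_size > MAX_BLOCK_SIZE:
        return False
    return True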
jonald_fyookball (OP)
Legendary
Offline
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
September 04, 2015, 06:18:55 PM
Quote from: tl121
There are a few good reasons to support removing the block size limit from the consensus rules. [...]

That is nice to read. Have others proposed this too? Can you think of reasons AGAINST it? I think it could be one of those things where having no limit sounds like a bad idea on the surface, but the miners can actually police themselves. If a block is ridiculously huge, throw it out! If 51% of the network agrees, it's probably OK.

I just wonder what would happen if a really huge block was allowed to stay in the chain. What kind of edge cases are possible here?
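One way to quantify the self-policing claim, using the attacker-catch-up formula from section 11 of the Bitcoin white paper: a branch backed by hashrate fraction q, trailing the majority branch (p = 1 - q) by z blocks, catches up with probability (q/p)^z:

Code:
def catch_up_probability(q: float, z: int) -> float:
    """Chance a minority branch (hashrate share q < 0.5) erases a z-block deficit."""
    p = 1.0 - q
    if q >= p:
        return 1.0                # a majority branch always wins eventually
    return (q / p) ** z

# A ridiculously huge block accepted by only 30% of hashpower, whose branch
# is already 3 blocks behind the majority chain, survives with probability:
print(catch_up_probability(0.30, 3))   # ~0.079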
Peter R
Legendary
Offline
Activity: 1162
Merit: 1007
September 04, 2015, 06:19:27 PM Last edit: September 04, 2015, 06:34:53 PM by Peter R
Quote from: jonald_fyookball
The hardcoding of a block size is part of consensus, just as miners are running the same codebase overall; I question the wisdom of making it part of the code at all. However, it seems that having no limit could be problematic, and as Panthers pointed out, miners do want some predictability, which is why I'm liking BIP 100 more. Do you know if Jeff Garzik ever considered making it a 51% vote?

Sickpig (a forum member here) suggested that the block size limit was a transport-layer constraint that crept into the consensus layer. I agree. I don't think we really need a protocol-enforced limit because:
1. There is a significant economic cost to producing large spam blocks as an attack.
2. There is a physical limitation to how large a block can be (due to bandwidth and other constraints).
3. The miners can enforce an effective limit anyway.
BIP 100 seems redundant (and dangerous if it's not based on majority votes). Let's get rid of the limit and let the miners sort it out.
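A rough sketch of point 1, in the spirit of Peter R's fee-market argument, under assumed (not measured) numbers: propagation time grows with block size, and a block still propagating when a competitor is found tends to be orphaned, so oversized blocks carry an expected cost even without a protocol limit:

Code:
import math

BLOCK_INTERVAL = 600.0            # seconds, expected time between blocks

def orphan_probability(block_bytes, secs_per_mb=15.0):
    """P(orphan) ~ 1 - exp(-tau/T); secs_per_mb is an assumed propagation
    rate, not a measured one."""
    tau = secs_per_mb * block_bytes / 1e6     # propagation time in seconds
    return 1.0 - math.exp(-tau / BLOCK_INTERVAL)

def expected_orphan_cost(block_bytes, block_reward=25.0):
    """Expected coinbase revenue (BTC) forfeited by publishing a block this big."""
    return block_reward * orphan_probability(block_bytes)

# Under these assumptions a 100 MB spam block costs its miner roughly 23 BTC
# in expected orphaned rewards -- the "significant economic cost" in point 1:
print(round(expected_orphan_cost(100_000_000), 1))   # ~22.9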
Peter R
Legendary
Offline
Activity: 1162
Merit: 1007
September 04, 2015, 06:20:35 PM
Quote from: jonald_fyookball
Can you think of reasons AGAINST it?

Yes. Gavin says there's some memory overflow attack (but I think this limit is node-dependent, and to fend off that attack the limit can be made very large).
jonald_fyookball (OP)
Legendary
Offline
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
September 04, 2015, 06:23:37 PM
Quote from: Peter R
Sickpig suggested that the block size limit was a transport-layer constraint that crept into the consensus layer. [...] Let's get rid of the limit and let the miners sort it out.

OK, cool, let's. Is there a BIP for this? If not, can we create one?