jl2012 (OP)
Legendary
Offline
Activity: 1792
Merit: 1111
|
|
June 18, 2015, 03:53:03 AM |
|
The 5 largest mining pools in China, including 2 of the busiest exchanges in the world, have released a joint declaration supporting raising the MAX_BLOCK_SIZE to 8MB. They currently control 60% of the network hashrate. https://imgur.com/a/LlDRr
Chinese companies operate under a very oppressive environment: all internet activity is strictly censored and outbound bandwidth is limited. Even so, they agree to increase the block size. Although one may argue that on this issue the merchants' view matters more than the miners', a hardfork won't be successful without miner support. And don't forget, this statement includes 2 major exchanges, BTCChina and Huobi.
I hope this concludes the debate over "raise the block size or not" and "how much to raise". I hope we can now focus on the pathway to 8MB. Should that be a simple raise or a step function? If a step function, how? Should we limit other parameters, e.g. sigop count and UTXO growth? The hard fork should also consider the pathway beyond 8MB, if we don't want to repeat this debate (too early).
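For the "step function" question, here is one purely hypothetical shape such a schedule could take. The start date, 8MB base, and doubling period below are invented for illustration; nothing like this was proposed in the declaration itself.

```python
from datetime import datetime, timezone

# Hypothetical step-function schedule: start at 8 MB and double at fixed
# intervals. All constants here are illustrative assumptions.
START = datetime(2016, 1, 1, tzinfo=timezone.utc)
BASE_MB = 8
DOUBLING_YEARS = 2

def max_block_size_mb(when: datetime) -> int:
    """Max block size (MB) in effect at `when` under the sketch schedule."""
    if when < START:
        return 1  # pre-fork limit
    elapsed_years = (when - START).days / 365.25
    doublings = int(elapsed_years // DOUBLING_YEARS)
    return BASE_MB * (2 ** doublings)

print(max_block_size_mb(datetime(2017, 6, 1, tzinfo=timezone.utc)))  # 8
print(max_block_size_mb(datetime(2019, 6, 1, tzinfo=timezone.utc)))  # 16
```

A real proposal would key the steps to block height rather than wall-clock time, and would need a hard cap on the growth, which is exactly the "pathway beyond 8MB" question.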
|
Donation address: 374iXxS4BuqFHsEwwxUuH3nvJ69Y7Hqur3 (Bitcoin ONLY) LRDGENPLYrcTRssGoZrsCT1hngaH3BVkM4 (LTC) PGP: D3CC 1772 8600 5BB8 FF67 3294 C524 2A1A B393 6517
|
|
|
tupelo
Member
Offline
Activity: 99
Merit: 10
|
|
June 18, 2015, 05:13:24 AM |
|
Why exactly 8 MB? Should Bitcoin really be dictated by archaic and irrational Chinese superstition? Or is there more substance to this number?
|
|
|
|
TierNolan
Legendary
Offline
Activity: 1232
Merit: 1104
|
|
June 18, 2015, 08:24:31 AM Last edit: June 18, 2015, 08:49:08 AM by TierNolan |
|
I think 8MB is just a compromise between 1MB and 20MB.
I think they are worried about the bandwidth between China and the rest of the world. Very large blocks could cause problems for them.
There are a few different network simulators; they give different results depending on the parameters you set.
They are concerned that pools outside China might produce large blocks; those blocks would take longer to reach them, which means wasted hashing power.
Under some conditions they might even benefit from lower bandwidth into China. Assuming >50% of the hashing power uses Chinese pools, if a Chinese pool and a non-Chinese pool both find a block at the same time, the Chinese pool's block reaches a majority of the hashing power first. If the non-Chinese block is 20MB, it would take even longer to enter China.
Mining farms and mining pools don't have to be at the same location. It would be possible for miners in China to use mining pools outside China, if it ever became a problem. This could shift the majority of the hashing power out of China, and then mining pools would have to leave China in order to have a good connection to the majority of the hashing power.
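The bandwidth concern can be put in rough numbers. The 10 Mbit/s effective border link below is an assumed figure for illustration, not a measurement, and real propagation involves relaying through many hops, so this is only the single-link lower bound:

```python
# Back-of-the-envelope sketch: how long a block takes to cross one
# constrained link, and hence the head start the near side gets in an
# orphan race. Link speed is an assumption, not a measured value.
def transfer_seconds(block_mb: float, link_mbit_per_s: float) -> float:
    """Seconds to push a block of `block_mb` MB over a link of given Mbit/s."""
    return block_mb * 8 / link_mbit_per_s

# Suppose the China<->world link effectively delivers 10 Mbit/s to a miner.
for size in (1, 8, 20):
    print(f"{size:2d} MB block: {transfer_seconds(size, 10):6.1f} s")
```

Against a 600-second average block interval, a 16-second transfer delay for a 20MB block corresponds to very roughly a 2-3% chance that the far side finds a competing block in the meantime, which is the wasted-hashpower effect described above.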
|
1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
|
|
|
bitnanigans
|
|
June 18, 2015, 09:09:35 AM |
|
Personally, I think the hard limit should be removed completely. Remember the "640K is enough memory for everyone" quote? Look at where we are today. Technology advances at a very quick pace, and we'll have more than enough storage and bandwidth to handle the increases as adoption grows.
|
|
|
|
dogie
Legendary
Offline
Activity: 1666
Merit: 1185
dogiecoin.com
|
|
June 18, 2015, 09:51:18 AM |
|
Why exactly 8 MB? Should Bitcoin really be dictated by archaic and irrational Chinese superstition? Or is there more substance to this number?
Yes, yes it should, because 8MB is an arbitrary number just like 20MB. And just like 20MB, no one is predicting 8MB will be consumed any time soon, and definitely not so quickly that another consensus could not be reached to increase it. Other arbitrary bitcoin stuff: a block targeted every 10 minutes, the block reward halved every 4 years, the 50 BTC initial block reward, yada yada yada. None of these things is set in stone, based on anything in particular, or vitally important.
|
|
|
|
Carlton Banks
Legendary
Offline
Activity: 3430
Merit: 3080
|
|
June 18, 2015, 11:27:51 AM |
|
Why exactly 8 MB? Should Bitcoin really be dictated by archaic and irrational Chinese superstition? Or is there more substance to this number?
Yes, yes it should, because 8MB is an arbitrary number just like 20MB. And just like 20MB, no one is predicting 8MB to be consumed any time soon and definitely not so quickly that another consensus could not be reached to increase it. Other arbitrary bitcoin stuff: A block targeted every 10 minutes, block reward halved every 4 years, 50btc block reward yada yada yada. None of these things are set in stone, based on anything in particular or vitally important.
The 1 MB limit, and certainly the 10 minute discovery target, were objectively chosen, albeit still in a can-kicking guesswork category of design decisions. There are, *ahem*, plenty of arbitrary numbers that wouldn't work in their place, so that description isn't very apt.
|
Vires in numeris
|
|
|
spazzdla
Legendary
Offline
Activity: 1722
Merit: 1000
|
|
June 18, 2015, 06:39:46 PM |
|
20MB is way too much of a jump. The blockchain is already very large, which prevents people from running a full node. This is just kicking the can down the road; we should not think of this as a fix to the problem. We are supposed to be better than the central banks. I am on board for 8MB, but after that we need to stop kicking the problem to our children.
|
|
|
|
TierNolan
Legendary
Offline
Activity: 1232
Merit: 1104
|
|
June 18, 2015, 07:21:06 PM |
|
20MB is way too much of a jump. The blockchain is already very large, which prevents people from running a full node. This is just kicking the can down the road; we should not think of this as a fix to the problem. We are supposed to be better than the central banks. I am on board for 8MB, but after that we need to stop kicking the problem to our children.
They have added pruning to the latest release. That reduces the amount of disk space required to run a full node. It stores at least 288 blocks or 500MB, whichever is larger. At 20MB per block, that is 5.7GB.
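TierNolan's arithmetic, as a quick sketch. The 288-block window and 500MB floor are the figures from the post; the UTXO set and undo data, which a pruned node also stores, are ignored here:

```python
# A pruned node keeps the last 288 blocks or 500 MB of block data,
# whichever is larger (UTXO set and undo data ignored for simplicity).
def prune_floor_gb(block_mb: float, window_blocks: int = 288,
                   floor_mb: float = 500) -> float:
    """Minimum block storage in GB for a pruned node."""
    return max(window_blocks * block_mb, floor_mb) / 1000

print(prune_floor_gb(1))   # 0.5  -- 288 MB of blocks is under the 500 MB floor
print(prune_floor_gb(20))  # 5.76 -- matches the ~5.7 GB in the post
```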
|
1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
|
|
|
shorena
Copper Member
Legendary
Offline
Activity: 1498
Merit: 1540
No I dont escrow anymore.
|
|
June 18, 2015, 07:30:45 PM |
|
20MB is way too much of a jump. The blockchain is already very large, which prevents people from running a full node. This is just kicking the can down the road; we should not think of this as a fix to the problem. We are supposed to be better than the central banks. I am on board for 8MB, but after that we need to stop kicking the problem to our children.
They have added pruning to the latest release. That reduces the amount of disk space required to run a full node. It stores at least 288 blocks or 500MB, whichever is larger. At 20MB per block, that is 5.7GB.
Latest as in 0.11? 0.10.x does not allow running a node in pruning mode, AFAIK.
|
Im not really here, its just your imagination.
|
|
|
spazzdla
Legendary
Offline
Activity: 1722
Merit: 1000
|
|
June 18, 2015, 07:34:54 PM |
|
20MB is way too much of a jump. The blockchain is already very large, which prevents people from running a full node. This is just kicking the can down the road; we should not think of this as a fix to the problem. We are supposed to be better than the central banks. I am on board for 8MB, but after that we need to stop kicking the problem to our children.
They have added pruning to the latest release. That reduces the amount of disk space required to run a full node. It stores at least 288 blocks or 500MB, whichever is larger. At 20MB per block, that is 5.7GB.
I was under the impression this would make it difficult to follow transactions from the beginning?
|
|
|
|
TierNolan
Legendary
Offline
Activity: 1232
Merit: 1104
|
|
June 18, 2015, 08:30:14 PM Last edit: June 18, 2015, 08:48:54 PM by TierNolan |
|
Latest as in 0.11? 0.10.x does not allow to run a node in pruning mode AFAIK.
Right, sorry, I meant 0.11, so the next release.
I was under the impression this would make it difficult to follow transactions from the beginning?
Yes, but each node only needs to download it once and doesn't need to keep everything. [Edit] There is a suggestion on the mailing list for each node to store some of the blocks. If everyone stored 1% of the blockchain, then you can download each block from lots of different nodes. Once you are synced, you can prune your block store.
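The "everyone stores 1% of the blockchain" idea could be sketched like this. The selection rule, seed, and keep ratio below are invented for illustration; the actual mailing-list proposal may work differently:

```python
import hashlib

# Each node derives, from its own seed, a deterministic pseudo-random
# subset of block heights to archive. Nodes with different seeds cover
# different subsets, so collectively the whole chain stays retrievable.
def keeps_block(node_seed: bytes, height: int, keep_ratio: int = 100) -> bool:
    """True if this node should archive the block at `height` (~1/keep_ratio)."""
    digest = hashlib.sha256(node_seed + height.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big") % keep_ratio == 0

seed = b"node-A"  # hypothetical per-node identifier
kept = [h for h in range(100_000) if keeps_block(seed, h)]
print(len(kept))  # roughly 1000 of 100000 heights
```

The point of deriving the subset from a per-node seed is that a syncing peer can ask any node which heights it keeps and fetch each historical block from whichever peers cover it.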
|
1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
|
|
|
acquafredda
Legendary
Offline
Activity: 1316
Merit: 1481
|
|
June 18, 2015, 09:05:19 PM |
|
Why exactly 8 MB? Should Bitcoin really be dictated by archaic and irrational Chinese superstition? Or is there more substance to this number?
Does anyone remember the Olympics? The 2008 Summer Olympics opening ceremony was held at the Beijing National Stadium, also known as the Bird's Nest. It began at 20:00 China Standard Time (UTC+8) on Friday, 8 August 2008, as the number 8 is considered auspicious.
|
|
|
|
solex
Legendary
Offline
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
|
|
June 19, 2015, 12:45:03 AM Last edit: June 19, 2015, 01:17:31 AM by solex |
|
Those of us who want to see an increased block size limit are at a disadvantage, because "doing nothing" achieves the same result as a consensus decision to keep the 1MB limit, and we then get to see what happens to the Bitcoin ecosystem when confirmation times blow out from the 10-minute average that everyone expects today.
Yet, Wladimir commented last month that he was "weakly against" making this change.
And since then, a clear majority for this change has emerged in all the user polls, among businesses and wallet providers, and now in mining opinion.
Mike and Gavin didn't follow consensus procedures? Boo hoo, too bad. They knew there was a lot of entrenched opinion against changing the 1MB limit (whether misguided about Satoshi's original vision or not), so they probably tried very long and very hard to obtain consensus among the developers with GitHub commit access. They failed, so they rightly took all the arguments public, where they found overwhelming support for the change.
Core Dev need to ask themselves: "If the 1MB limit did not exist, would a proposal to introduce it in an upcoming release (e.g. v0.11) get any ACKs?" This is not a rhetorical question; it is a valid question to ask. I bet that this type of change, a blunt hard limit with unknown consequences, would get zero support on bitcoin-dev. There would be all sorts of objections that its vague goals, "increase fees", "stop spam", "slow the decline in full nodes", could be achieved far more effectively and in far more elegant ways. Implementing the 1MB limit today would get a unanimous NACK.
Core Dev also need to make a decision. They either:
a) delay v0.11 until it gets a scaling improvement which replaces the max block size constant (whether by a patch reflecting Jeff's BIP 100, or a functionally comparable patch from Gavin). After all, Gavin gave notice 2 months ago that he wanted to submit a patch for v0.11
b) release v0.11 without the above, which is effectively a declaration that they are prepared to let the 1MB limit be maxed out (noticeably affecting user confirmation times) before considering a patch for it. This might not be "1MB 4EVR", but it is practically equivalent as far as the rest of us are concerned, who want to see the limit raised/modified/improved/removed before the inevitable PR disaster from inaction.
We have heard Gregory's opinion loud and clear on Bitcointalk and Reddit, so what does Wladimir think today?
|
|
|
|
Carlton Banks
Legendary
Offline
Activity: 3430
Merit: 3080
|
|
June 19, 2015, 05:54:48 PM |
|
Core Dev also need to make a decision. They either:
a) delay v0.11 until it gets a scaling improvement which replaces the max block size constant (whether by a patch reflecting Jeff's BIP 100, or a functionally comparable patch from Gavin). After all, Gavin gave notice 2 months ago that he wanted to submit a patch for v0.11
b) release v0.11 without the above, which is effectively a declaration that they are prepared to let the 1MB limit be maxed out (noticeably affecting user confirmation times) before considering a patch for it. This might not be "1MB 4EVR", but it is practically equivalent as far as the rest of us are concerned, who want to see the limit raised/modified/improved/removed before the inevitable PR disaster from inaction.
Maybe the choosing will be between using the pruned database feature or the 20 MB block fork feature.
Like so much with this debate, the answer is that you actually want both, and that these supposedly opposing ideas, in fact, support one another.
|
Vires in numeris
|
|
|
induktor
|
|
June 19, 2015, 10:59:14 PM |
|
IMHO, I am more concerned about using 20MB blocks than about having to store the full blockchain. Nowadays HDD space is cheap, but bandwidth is still a problem in several countries like mine.
A typical upload speed here is 512Kb; a 1Mbit upload speed is fantastic here, and not very common. So an increase to 20MB could cause some problems, as the Chinese letter claims. 8MB seems more reasonable, I think.
To be honest I would prefer not to change anything, but I understand that something must be done.
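Worked through with induktor's numbers: 512 kbit/s upload, relaying one full block to a single peer, protocol overheads ignored:

```python
# Time to upload one block to a single peer at a 512 kbit/s home link.
UPLOAD_KBIT = 512

for size_mb in (1, 8, 20):
    seconds = size_mb * 8 * 1000 / UPLOAD_KBIT  # MB -> kbit, then divide
    print(f"{size_mb:2d} MB: {seconds / 60:5.2f} min")
```

A 20MB block takes over 5 minutes to push to just one peer, more than half the 10-minute block interval, which is why low-upload connections struggle far more with 20MB than with 8MB.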
|
BTC addr: 1vTGnFgaM2WJjswwmbj6N2AQBWcHfimSc
|
|
|
tspacepilot
Legendary
Offline
Activity: 1456
Merit: 1081
I may write code in exchange for bitcoins.
|
|
June 19, 2015, 11:35:31 PM |
|
Core Dev also need to make a decision. They either:
a) delay v0.11 until it gets a scaling improvement which replaces the max block size constant (whether by a patch reflecting Jeff's BIP 100, or a functionally comparable patch from Gavin). After all, Gavin gave notice 2 months ago that he wanted to submit a patch for v0.11
b) release v0.11 without the above, which is effectively a declaration that they are prepared to let the 1MB limit be maxed out (noticeably affecting user confirmation times) before considering a patch for it. This might not be "1MB 4EVR", but it is practically equivalent as far as the rest of us are concerned, who want to see the limit raised/modified/improved/removed before the inevitable PR disaster from inaction.
Maybe the choosing will be between using the pruned database feature or the 20 MB block fork feature. Like so much with this debate, the answer is that you actually want both, and that these supposedly opposing ideas, in fact, support one another.
For me, this kind of observation suggests that 8MB might be a nice compromise between block size and storage. You could also compromise on the pruning side, say 1GB instead of 500MB; that would be meeting in the middle, right?
|
|
|
|
Carlton Banks
Legendary
Offline
Activity: 3430
Merit: 3080
|
|
June 20, 2015, 12:09:48 AM |
|
Core Dev also need to make a decision. They either:
a) delay v0.11 until it gets a scaling improvement which replaces the max block size constant (whether by a patch reflecting Jeff's BIP 100, or a functionally comparable patch from Gavin). After all, Gavin gave notice 2 months ago that he wanted to submit a patch for v0.11
b) release v0.11 without the above, which is effectively a declaration that they are prepared to let the 1MB limit be maxed out (noticeably affecting user confirmation times) before considering a patch for it. This might not be "1MB 4EVR", but it is practically equivalent as far as the rest of us are concerned, who want to see the limit raised/modified/improved/removed before the inevitable PR disaster from inaction.
Maybe the choosing will be between using the pruned database feature or the 20 MB block fork feature. Like so much with this debate, the answer is that you actually want both, and that these supposedly opposing ideas, in fact, support one another.
For me, this kind of observation leads me to think that 8MB might be a nice compromise between block size and storage. You could also compromise on the pruning side, say 1GB instead of 500MB; that would be meeting in the middle, right?
I kind of agree with the comment from induktor: storage is cheap and set to get vastly cheaper and more capacious still. Bandwidth is getting faster, but the prices per unit are not falling at the rate storage prices are. The optimistic part of me says it won't matter in 5 years; mesh technology will be too easy, too cheap and too ubiquitous for this to matter. A lot else could happen given that scenario, though, lol.
|
Vires in numeris
|
|
|
CryptoTrout
|
|
June 20, 2015, 02:43:33 AM |
|
That's better than sticking to 1MB, but I think venture capital wants to take Bitcoin mainstream very quickly, so they want it larger.
|
|
|
|
muhrohmat
|
|
June 23, 2015, 10:51:18 AM |
|
Doesn't controlling 60% of the mining hashrate (that share of Th/s) make double spends possible? And if China is such a developer of BTC, why does the government prohibit the coin?
|
|
|
|
BitUsher
Legendary
Offline
Activity: 994
Merit: 1035
|
|
June 23, 2015, 01:10:31 PM |
|
“The pool operators could actually make such a change themselves without proposing that the core developers do such a thing. Instead, we would like to express our views and concerns to the core developers, and let the community form a discussion rather than rudely cast a divergence. We are happy to see the consensus on the final improvement plan. After all, a 'forked' community is not what we are chasing after.”
It is great that these 5 mining pools are doing the right thing and trying to develop a consensus with developers but it is disturbing to realize 5 companies can completely decide Bitcoin's fate. We seriously need to work on decentralizing mining and hash power globally.
|
|
|
|
|