Bitcoin Forum

Bitcoin => Bitcoin Discussion => Topic started by: cryptocoimor on June 04, 2015, 04:53:18 PM



Title: Satoshi said: Let's go for the bigger block!
Post by: cryptocoimor on June 04, 2015, 04:53:18 PM
We can simply modify a few lines of code in bitcoin-qt to support a 20MB or even a 20GB block:

Code:
if blocksize > 20MB then blocksize = first 20MB; the rest stand in line and wait for the next block.

This is what miners should do when they include txs in the block they find; Satoshi said so, see below.

It can be phased in, like:

if (blocknumber > 115000)   <-- the block height of the 1MB limit
    maxblocksize = largerlimit   <-- he means a number > 1MB, i.e. 20MB

It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete.

When we're near the cutoff block number, I can put an alert to old versions to make sure they know they have to upgrade.

Satoshi did say we should use a bigger block size rather than a sidechain like GMaxwell's lightning.network.

You don't agree with 20MB? Fine! Tor can easily support a 100KB/s download today, so pick anything with 1MB < your pick < 60MB (100KB * 60 s * 10 min).

You don't agree that we will reach 1MB per block after Q1 2016? Fine! Just say when: Q4 2015? 2017? 2018?

There is always only one blockchain.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: oblivi on June 04, 2015, 05:04:24 PM
We can simply modify a few lines of code in bitcoin-qt to support a 20MB or even a 20GB block:

Code:
if block > 20MB then block = first 20MB

or


It can be phased in, like:

if (blocknumber > 2000000)   <-- was 115000
    maxblocksize = 20MB   <-- was largerlimit

It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete.

When we're near the cutoff block number, I can put an alert to old versions to make sure they know they have to upgrade.

Why do we have to hard fork the blockchain? Why not just modify these lines in bitcoin-qt, so that everyone using bitcoin-qt can recognize the larger block?

I don't understand why they don't just do this, but it's clear there is a reason; otherwise a hard fork wouldn't be risked. It seems modifying the block size is not as easy as that and has a deeper impact on the system.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: cryptworld on June 04, 2015, 05:10:08 PM
If they want to do a hard fork, it is because it is necessary; a hard fork is a risky thing that no one wants to do unless it is mandatory.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: shorena on June 04, 2015, 05:14:19 PM
-snip-

Why do we have to hard fork the blockchain? Why not just modify these lines in bitcoin-qt, so that everyone using bitcoin-qt can recognize the larger block?

Not everyone. The old versions would not accept the block as valid, thus splitting the network into those running old versions and those running new versions. This is what is called a hard fork, as your client either accepts the changes or not. The number of lines you have to edit does not matter. It also does not matter if you push the hard fork into the future by a certain number of blocks; it only lessens the impact of the change, as it is more likely for people to update over a long period of time than over a short one.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: cryptocoimor on June 04, 2015, 05:19:49 PM
-snip-

Why do we have to hard fork the blockchain? Why not just modify these lines in bitcoin-qt, so that everyone using bitcoin-qt can recognize the larger block?

Not everyone. The old versions would only recognize the first 1MB of a given block, thus splitting the network into those running old versions and those running new versions. This is what is called a hard fork, as your client either reads the first 1MB or the whole of the block. The number of lines you have to edit does not matter. It also does not matter if you push the hard fork into the future by a certain number of blocks; it only lessens the impact of the change, as it is more likely for people to update over a long period of time than over a short one.

So: "everyone using a new version of bitcoin-qt can recognize the larger block"

Do you mean these few-line changes = the whole hard fork thing we are talking about everywhere? I see people talking about two independent blockchains. If we go my way, there is only one blockchain: an old-version qt can only recognize the first 1MB of a given larger block, a new qt can recognize all 20MB of it, that's it.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: Klestin on June 04, 2015, 05:21:08 PM
We can simply modify a few lines of code in bitcoin-qt to support a 20MB or even a 20GB block:

Code:
if block > 20MB then block = first 20MB


Absolutely nobody is talking about changing the block size to 20 MB.  They are talking about changing the MAXIMUM block size, in a phased approach, to eventually reach 20 MB.  Even when the 20 MB max is set, block sizes will not all be 20 MB.  Some will be 1 MB. Some will be 100 KB.

This change is by definition a hard fork.  The nodes that still have the old 1MB limit will not accept the larger blocks.  Such is the nature of cryptocurrencies.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: bitllionaire on June 04, 2015, 05:22:59 PM
Because everybody needs to update their wallets with the new code implemented; if not, there would be rejected blocks and transactions, i.e., chaos.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: shorena on June 04, 2015, 05:28:46 PM
-snip-

Why do we have to hard fork the blockchain? Why not just modify these lines in bitcoin-qt, so that everyone using bitcoin-qt can recognize the larger block?

Not everyone. The old versions would only recognize the first 1MB of a given block, thus splitting the network into those running old versions and those running new versions. This is what is called a hard fork, as your client either reads the first 1MB or the whole of the block. The number of lines you have to edit does not matter. It also does not matter if you push the hard fork into the future by a certain number of blocks; it only lessens the impact of the change, as it is more likely for people to update over a long period of time than over a short one.

So: "everyone using a new version of bitcoin-qt can recognize the larger block"

Do you mean these few-line changes = the whole hard fork thing we are talking about everywhere? I see people talking about two independent blockchains. If we go my way, there is only one blockchain: an old-version qt can only recognize the first 1MB of a given larger block, a new qt can recognize all 20MB of it, that's it.

No, the old version cannot recognize the first 1MB of a block, because you would try to give it a bigger block. It would look at it and say: nope, not valid, and be done with it. Changing the old version is a patch that makes it no longer an old version.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: Elwar on June 04, 2015, 05:30:05 PM
We could take the phased-in approach of switching to XT, which has the code to phase it in over time.

...when Gavin asked if that was a good idea, he was labeled a heretic, and every thread on bitcointalk suggested he was destroying Bitcoin.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: cryptocoimor on June 04, 2015, 05:37:02 PM
-snip-

Why do we have to hard fork the blockchain? Why not just modify these lines in bitcoin-qt, so that everyone using bitcoin-qt can recognize the larger block?

Not everyone. The old versions would only recognize the first 1MB of a given block, thus splitting the network into those running old versions and those running new versions. This is what is called a hard fork, as your client either reads the first 1MB or the whole of the block. The number of lines you have to edit does not matter. It also does not matter if you push the hard fork into the future by a certain number of blocks; it only lessens the impact of the change, as it is more likely for people to update over a long period of time than over a short one.

So: "everyone using a new version of bitcoin-qt can recognize the larger block"

Do you mean these few-line changes = the whole hard fork thing we are talking about everywhere? I see people talking about two independent blockchains. If we go my way, there is only one blockchain: an old-version qt can only recognize the first 1MB of a given larger block, a new qt can recognize all 20MB of it, that's it.

No, the old version cannot recognize the first 1MB of a block, because you would try to give it a bigger block. It would look at it and say: nope, not valid, and be done with it. Changing the old version is a patch that makes it no longer an old version.

Why the heck don't people agree with this change:
Code:
if blocksize > 1MB then blocksize = 1MB
to
Code:
if blocksize > 20MB then blocksize = 20MB

Of course this is not the final solution, since the block size would need to be > 10GB in the future, but it makes Bitcoin stronger and buys us some time for a better solution, doesn't it?


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: SpanishSoldier on June 04, 2015, 05:43:46 PM
-snip-

Why do we have to hard fork the blockchain? Why not just modify these lines in bitcoin-qt, so that everyone using bitcoin-qt can recognize the larger block?

Not everyone. The old versions would only recognize the first 1MB of a given block, thus splitting the network into those running old versions and those running new versions. This is what is called a hard fork, as your client either reads the first 1MB or the whole of the block. The number of lines you have to edit does not matter. It also does not matter if you push the hard fork into the future by a certain number of blocks; it only lessens the impact of the change, as it is more likely for people to update over a long period of time than over a short one.

So: "everyone using a new version of bitcoin-qt can recognize the larger block"

Do you mean these few-line changes = the whole hard fork thing we are talking about everywhere? I see people talking about two independent blockchains. If we go my way, there is only one blockchain: an old-version qt can only recognize the first 1MB of a given larger block, a new qt can recognize all 20MB of it, that's it.

No, the old version cannot recognize the first 1MB of a block, because you would try to give it a bigger block. It would look at it and say: nope, not valid, and be done with it. Changing the old version is a patch that makes it no longer an old version.

Why the heck don't people agree with this change:
Code:
if blocksize > 1MB then blocksize = 1MB
to
Code:
if blocksize > 20MB then blocksize = 20MB

Of course this is not the final solution, since the block size would need to be > 10GB in the future, but it makes Bitcoin stronger and buys us some time for a better solution, doesn't it?

The green part is the reason Gavin Andresen wants the change. Gavin will go for a hard fork only if 90% of the network is running on XT.

The red part is the reason Gregory Maxwell does not want the change. GMaxwell proposes that the problem be solved by implementing a very complex thing called the lightning.network.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: cryptocoimor on June 04, 2015, 05:59:15 PM
-snip-

Why do we have to hard fork the blockchain? Why not just modify these lines in bitcoin-qt, so that everyone using bitcoin-qt can recognize the larger block?

Not everyone. The old versions would only recognize the first 1MB of a given block, thus splitting the network into those running old versions and those running new versions. This is what is called a hard fork, as your client either reads the first 1MB or the whole of the block. The number of lines you have to edit does not matter. It also does not matter if you push the hard fork into the future by a certain number of blocks; it only lessens the impact of the change, as it is more likely for people to update over a long period of time than over a short one.

So: "everyone using a new version of bitcoin-qt can recognize the larger block"

Do you mean these few-line changes = the whole hard fork thing we are talking about everywhere? I see people talking about two independent blockchains. If we go my way, there is only one blockchain: an old-version qt can only recognize the first 1MB of a given larger block, a new qt can recognize all 20MB of it, that's it.

No, the old version cannot recognize the first 1MB of a block, because you would try to give it a bigger block. It would look at it and say: nope, not valid, and be done with it. Changing the old version is a patch that makes it no longer an old version.

Why the heck don't people agree with this change:
Code:
if blocksize > 1MB then blocksize = 1MB
to
Code:
if blocksize > 20MB then blocksize = 20MB

Of course this is not the final solution, since the block size would need to be > 10GB in the future, but it makes Bitcoin stronger and buys us some time for a better solution, doesn't it?

The green part is the reason Gavin Andresen wants the change. Gavin will go for a hard fork only if 90% of the network is running on XT.

The red part is the reason Gregory Maxwell does not want the change. GMaxwell proposes that the problem be solved by implementing a very complex thing called the lightning.network.

Can't we just go for the 20MB first, then use the time (years) we bought to properly test Gregory Maxwell's lightning.network again and again, and if the result is all good, then use it?


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: jeannemadrigal2 on June 04, 2015, 06:10:02 PM
-snip-

Why do we have to hard fork the blockchain? Why not just modify these lines in bitcoin-qt, so that everyone using bitcoin-qt can recognize the larger block?

Not everyone. The old versions would only recognize the first 1MB of a given block, thus splitting the network into those running old versions and those running new versions. This is what is called a hard fork, as your client either reads the first 1MB or the whole of the block. The number of lines you have to edit does not matter. It also does not matter if you push the hard fork into the future by a certain number of blocks; it only lessens the impact of the change, as it is more likely for people to update over a long period of time than over a short one.

So: "everyone using a new version of bitcoin-qt can recognize the larger block"

Do you mean these few-line changes = the whole hard fork thing we are talking about everywhere? I see people talking about two independent blockchains. If we go my way, there is only one blockchain: an old-version qt can only recognize the first 1MB of a given larger block, a new qt can recognize all 20MB of it, that's it.

No, the old version cannot recognize the first 1MB of a block, because you would try to give it a bigger block. It would look at it and say: nope, not valid, and be done with it. Changing the old version is a patch that makes it no longer an old version.

Why the heck don't people agree with this change:
Code:
if blocksize > 1MB then blocksize = 1MB
to
Code:
if blocksize > 20MB then blocksize = 20MB

Of course this is not the final solution, since the block size would need to be > 10GB in the future, but it makes Bitcoin stronger and buys us some time for a better solution, doesn't it?

The green part is the reason Gavin Andresen wants the change. Gavin will go for a hard fork only if 90% of the network is running on XT.

The red part is the reason Gregory Maxwell does not want the change. GMaxwell proposes that the problem be solved by implementing a very complex thing called the lightning.network.

You forgot another reason (https://bitcointalk.org/index.php?topic=1075323.0) for opposing the bigger block size ;)

But it is clear at this point that inaction is not an option.  Not doing anything will bring a change too, just not a very good one.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: SpanishSoldier on June 04, 2015, 06:17:34 PM
-snip-

Why do we have to hard fork the blockchain? Why not just modify these lines in bitcoin-qt, so that everyone using bitcoin-qt can recognize the larger block?

Not everyone. The old versions would only recognize the first 1MB of a given block, thus splitting the network into those running old versions and those running new versions. This is what is called a hard fork, as your client either reads the first 1MB or the whole of the block. The number of lines you have to edit does not matter. It also does not matter if you push the hard fork into the future by a certain number of blocks; it only lessens the impact of the change, as it is more likely for people to update over a long period of time than over a short one.

So: "everyone using a new version of bitcoin-qt can recognize the larger block"

Do you mean these few-line changes = the whole hard fork thing we are talking about everywhere? I see people talking about two independent blockchains. If we go my way, there is only one blockchain: an old-version qt can only recognize the first 1MB of a given larger block, a new qt can recognize all 20MB of it, that's it.

No, the old version cannot recognize the first 1MB of a block, because you would try to give it a bigger block. It would look at it and say: nope, not valid, and be done with it. Changing the old version is a patch that makes it no longer an old version.

Why the heck don't people agree with this change:
Code:
if blocksize > 1MB then blocksize = 1MB
to
Code:
if blocksize > 20MB then blocksize = 20MB

Of course this is not the final solution, since the block size would need to be > 10GB in the future, but it makes Bitcoin stronger and buys us some time for a better solution, doesn't it?

The green part is the reason Gavin Andresen wants the change. Gavin will go for a hard fork only if 90% of the network is running on XT.

The red part is the reason Gregory Maxwell does not want the change. GMaxwell proposes that the problem be solved by implementing a very complex thing called the lightning.network.

Can't we just go for the 20MB first, then use the time (years) we bought to properly test Gregory Maxwell's lightning.network again and again, and if the result is all good, then use it?

Gavin proposed exactly what you are proposing here, but GMaxwell did not agree. Rumour is that GMaxwell's company Blockstream, which has received 21M USD in funding, is working on solving the lightning.network problem and the implementation of sidechains; hence GMaxwell wants the 1MB cap to stay so Blockstream can reap the benefit of solving the problem. Due to this disagreement, Gavin said that if consensus cannot be reached at the dev level, then it goes to the node level. Hence he asked people to use XT (which is currently almost identical to Bitcoin Core) to show support for him. If 50% of the bitcoin network runs XT, he'll request the devs again to modify Bitcoin Core so that a hard fork does not happen. If they still do not agree, he'll wait for the network to run 90% on XT and then implement the changes on XT. That is when the hard fork happens, but with 90% of the network already running XT, the XT chain will invariably survive. In any case, none of this is going to happen before February 2016. All these things are now at the discussion level.

You forgot another reason (https://bitcointalk.org/index.php?topic=1075323.0) for opposing the bigger block size ;)

But it is clear at this point that inaction is not an option.  Not doing anything will bring a change too, just not a very good one.

Didn't forget. But didn't mention it, because I have respect for GMaxwell and believe it is not as blunt as it has been portrayed. But I've mentioned it now.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: achow101 on June 04, 2015, 07:11:58 PM

Why the heck don't people agree with this change:
Code:
if blocksize > 1MB then blocksize = 1MB
to
Code:
if blocksize > 20MB then blocksize = 20MB

Of course this is not the final solution, since the block size would need to be > 10GB in the future, but it makes Bitcoin stronger and buys us some time for a better solution, doesn't it?
Because Bitcoin does not work like that. A valid block header contains a merkle root, which is a hash of all of the transactions contained within the block. If Bitcoin nodes only looked at a portion of the block, validation would fail, because hashing only the transactions in that portion would NOT produce the merkle root committed to in the header. Thus, the block would be rejected as invalid. Also, how would anyone know that the other transactions contained in the rest of the block were confirmed?


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: cryptocoimor on June 04, 2015, 07:21:31 PM

Why the heck don't people agree with this change:
Code:
if blocksize > 1MB then blocksize = 1MB
to
Code:
if blocksize > 20MB then blocksize = 20MB

Of course this is not the final solution, since the block size would need to be > 10GB in the future, but it makes Bitcoin stronger and buys us some time for a better solution, doesn't it?
Because Bitcoin does not work like that. A valid block header contains a merkle root, which is a hash of all of the transactions contained within the block. If Bitcoin nodes only looked at a portion of the block, validation would fail, because hashing only the transactions in that portion would NOT produce the merkle root committed to in the header. Thus, the block would be rejected as invalid. Also, how would anyone know that the other transactions contained in the rest of the block were confirmed?


You are talking about the reason why an old-version bitcoin-qt can't just read the first 1MB if a block is > 1MB, right? Because your explanation has nothing to do with the quote.

Quote
how would anyone know that the other transactions contained in the rest of the block were confirmed?

No, they don't. That's why people will soon all use a new version of bitcoin-qt to read the whole 20MB block.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: achow101 on June 04, 2015, 07:31:45 PM

Why the heck don't people agree with this change:
Code:
if blocksize > 1MB then blocksize = 1MB
to
Code:
if blocksize > 20MB then blocksize = 20MB

Of course this is not the final solution, since the block size would need to be > 10GB in the future, but it makes Bitcoin stronger and buys us some time for a better solution, doesn't it?
Because Bitcoin does not work like that. A valid block header contains a merkle root, which is a hash of all of the transactions contained within the block. If Bitcoin nodes only looked at a portion of the block, validation would fail, because hashing only the transactions in that portion would NOT produce the merkle root committed to in the header. Thus, the block would be rejected as invalid. Also, how would anyone know that the other transactions contained in the rest of the block were confirmed?


You are talking about the reason why an old-version bitcoin-qt can't just read the first 1MB if a block is > 1MB, right? Because your explanation has nothing to do with the quote.
It does, I think, because your idea is that if the size of the block is greater than X, then you only read and take the first X MB of the block, correct? Please correct me if I'm wrong.

Quote
Quote
how would anyone know that the other transactions contained in the rest of the block were confirmed?

No, they don't. That's why people will soon all use a new version of bitcoin-qt to read the whole 20MB block.
Maybe I misunderstood your proposal.

Also, your quote is not what Satoshi was saying. His actual quote is this:
It can be phased in, like:

if (blocknumber > 115000)
    maxblocksize = largerlimit

It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete.

When we're near the cutoff block number, I can put an alert to old versions to make sure they know they have to upgrade.

This is talking about how and when to implement the larger blocks. It is saying that the hard fork to the new block size should occur at a block number so far in the future that everyone has upgraded to a client that will support the new blocks.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: cryptocoimor on June 04, 2015, 07:32:23 PM
-snip-

Why do we have to hard fork the blockchain? Why not just modify these lines in bitcoin-qt, so that everyone using bitcoin-qt can recognize the larger block?

Not everyone. The old versions would only recognize the first 1MB of a given block, thus splitting the network into those running old versions and those running new versions. This is what is called a hard fork, as your client either reads the first 1MB or the whole of the block. The number of lines you have to edit does not matter. It also does not matter if you push the hard fork into the future by a certain number of blocks; it only lessens the impact of the change, as it is more likely for people to update over a long period of time than over a short one.

So: "everyone using a new version of bitcoin-qt can recognize the larger block"

Do you mean these few-line changes = the whole hard fork thing we are talking about everywhere? I see people talking about two independent blockchains. If we go my way, there is only one blockchain: an old-version qt can only recognize the first 1MB of a given larger block, a new qt can recognize all 20MB of it, that's it.

No, the old version cannot recognize the first 1MB of a block, because you would try to give it a bigger block. It would look at it and say: nope, not valid, and be done with it. Changing the old version is a patch that makes it no longer an old version.

Why the heck don't people agree with this change:
Code:
if blocksize > 1MB then blocksize = 1MB
to
Code:
if blocksize > 20MB then blocksize = 20MB

Of course this is not the final solution, since the block size would need to be > 10GB in the future, but it makes Bitcoin stronger and buys us some time for a better solution, doesn't it?

The green part is the reason Gavin Andresen wants the change. Gavin will go for a hard fork only if 90% of the network is running on XT.

The red part is the reason Gregory Maxwell does not want the change. GMaxwell proposes that the problem be solved by implementing a very complex thing called the lightning.network.

Can't we just go for the 20MB first, then use the time (years) we bought to properly test Gregory Maxwell's lightning.network again and again, and if the result is all good, then use it?

Gavin proposed exactly what you are proposing here, but GMaxwell did not agree. Rumour is that GMaxwell's company Blockstream, which has received 21M USD in funding, is working on solving the lightning.network problem and the implementation of sidechains; hence GMaxwell wants the 1MB cap to stay so Blockstream can reap the benefit of solving the problem. Due to this disagreement, Gavin said that if consensus cannot be reached at the dev level, then it goes to the node level. Hence he asked people to use XT (which is currently almost identical to Bitcoin Core) to show support for him. If 50% of the bitcoin network runs XT, he'll request the devs again to modify Bitcoin Core so that a hard fork does not happen. If they still do not agree, he'll wait for the network to run 90% on XT and then implement the changes on XT. That is when the hard fork happens, but with 90% of the network already running XT, the XT chain will invariably survive. In any case, none of this is going to happen before February 2016. All these things are now at the discussion level.

You forgot another reason (https://bitcointalk.org/index.php?topic=1075323.0) for opposing the bigger block size ;)

But it is clear at this point that inaction is not an option.  Not doing anything will bring a change too, just not a very good one.

Didn't forget. But didn't mention it, because I have respect for GMaxwell and believe it is not as blunt as it has been portrayed. But I've mentioned it now.

Then why doesn't Gavin agree with GMaxwell's lightning.network? Has Gavin proved that it doesn't work, or that it is worse than his 20MB block?


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: achow101 on June 04, 2015, 07:35:51 PM
Then why doesn't Gavin agree with GMaxwell's lightning.network? Has Gavin proved that it doesn't work, or that it is worse than his 20MB block?
It hasn't been implemented yet, and it will be difficult to implement a stable and working version within a year. The only thing that exists for the lightning network is a proposal describing the way the system will work. It is still a proposal. On the other hand, Gavin's proposal for the 20MB block size is relatively easy to implement and test within a year.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: cryptocoimor on June 04, 2015, 07:39:15 PM
Code:
if blocksize > 1MB then blocksize = 1MB
to
Code:
if blocksize > 20MB then blocksize = 20MB

This is what miners should do when they include txs in the block they find, not something the user who reads the block does.


Then why doesn't Gavin agree with GMaxwell's lightning.network? Has Gavin proved that it doesn't work, or that it is worse than his 20MB block?
It hasn't been implemented yet, and it will be difficult to implement a stable and working version within a year. The only thing that exists for the lightning network is a proposal describing the way the system will work. It is still a proposal. On the other hand, Gavin's proposal for the 20MB block size is relatively easy to implement and test within a year.

OK, but from what I saw, Satoshi did say we should use the bigger block rather than a sidechain like GMaxwell's lightning.network, am I correct?

if (blocknumber > 115000)
    maxblocksize = largerlimit   <-- he means a number > 1MB here, i.e. 10MB or 20MB


Title: Re: Modify few codes instead of a hard fork
Post by: wr104 on June 04, 2015, 07:40:29 PM
We can simply modify a few lines of code in bitcoin-qt to support a 20MB or even a 20GB block:

Code:
if blocksize > 20MB then blocksize = first 20MB; the rest stand in line and wait for the next block.

or

It can be phased in, like:

if (blocknumber > 2000000)   <-- was 115000
    maxblocksize = 20MB; the rest stand in line and wait for the next block.

It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete.

When we're near the cutoff block number, I can put an alert to old versions to make sure they know they have to upgrade.

Why do we have to hard fork the blockchain? Why not just modify these lines in bitcoin-qt, so that everyone using a new-version bitcoin-qt can recognize 20MB of a given larger block, while those using an old version can only recognize the first 1MB of it?

In my case, there is always only one blockchain.

Unfortunately, it is not that simple because you are changing network rules.  If you are not careful, you risk creating consensus forks which is the worst nightmare for the coin.

You would need a little more code, like the Git patch I wrote below for Bitcoin 0.9.5.

Basically, you need a kill switch for older wallet versions (for example, begin rejecting older block versions after 95% of the nodes have upgraded), and then you allow the larger block size after a certain height (for example 400,000, which is ~9 months from today).

Code:
 src/core.h        |  2 +-
 src/init.cpp      |  4 ++++
 src/main.cpp      | 30 ++++++++++++++++++++++++------
 src/main.h        | 11 ++++++++---
 src/miner.cpp     | 10 ++++++----
 src/rpcmining.cpp | 10 ++++++++--
 6 files changed, 51 insertions(+), 16 deletions(-)

diff --git a/src/core.h b/src/core.h
index d89f06b..01af749 100644
--- a/src/core.h
+++ b/src/core.h
@@ -345,7 +345,7 @@ class CBlockHeader
 {
 public:
     // header
-    static const int CURRENT_VERSION=3;
+    static const int CURRENT_VERSION=4;
     int nVersion;
     uint256 hashPrevBlock;
     uint256 hashMerkleRoot;
diff --git a/src/init.cpp b/src/init.cpp
index 6f9abca..d3f40be 100644
--- a/src/init.cpp
+++ b/src/init.cpp
@@ -1103,6 +1103,10 @@ bool AppInit2(boost::thread_group& threadGroup)
 
     RandAddSeedPerfmon();
 
+    // Check if the network can begin accepting larger block size
+    if (chainActive.Height() >= 400000)
+        fNewBlockSizeLimit = true;
+
     //// debug print
     LogPrintf("mapBlockIndex.size() = %u\n",   mapBlockIndex.size());
     LogPrintf("nBestHeight = %d\n",                   chainActive.Height());
diff --git a/src/main.cpp b/src/main.cpp
index a42bb8a..caa2352 100644
--- a/src/main.cpp
+++ b/src/main.cpp
@@ -47,6 +47,7 @@ bool fImporting = false;
 bool fReindex = false;
 bool fBenchmark = false;
 bool fTxIndex = false;
+bool fNewBlockSizeLimit = false;
 unsigned int nCoinCacheSize = 5000;
 
 /** Fees smaller than this (in satoshi) are considered zero fee (for transaction creation) */
@@ -1809,6 +1810,7 @@ bool ConnectBlock(CBlock& block, CValidationState& state, CBlockIndex* pindex, C
     int64_t nFees = 0;
     int nInputs = 0;
     unsigned int nSigOps = 0;
+    const int nSigOpsLimit = fNewBlockSizeLimit ? MAX_BLOCK_SIGOPS : OLD_MAX_BLOCK_SIGOPS;
     CDiskTxPos pos(pindex->GetBlockPos(), GetSizeOfCompactSize(block.vtx.size()));
     std::vector<std::pair<uint256, CDiskTxPos> > vPos;
     vPos.reserve(block.vtx.size());
@@ -1818,7 +1820,7 @@ bool ConnectBlock(CBlock& block, CValidationState& state, CBlockIndex* pindex, C
 
         nInputs += tx.vin.size();
         nSigOps += GetLegacySigOpCount(tx);
-        if (nSigOps > MAX_BLOCK_SIGOPS)
+        if (nSigOps > nSigOpsLimit)
             return state.DoS(100, error("ConnectBlock() : too many sigops"),
                              REJECT_INVALID, "bad-blk-sigops");
 
@@ -1834,7 +1836,7 @@ bool ConnectBlock(CBlock& block, CValidationState& state, CBlockIndex* pindex, C
                 // this is to prevent a "rogue miner" from creating
                 // an incredibly-expensive-to-validate block.
                 nSigOps += GetP2SHSigOpCount(tx, view);
-                if (nSigOps > MAX_BLOCK_SIGOPS)
+                if (nSigOps > nSigOpsLimit)
                     return state.DoS(100, error("ConnectBlock() : too many sigops"),
                                      REJECT_INVALID, "bad-blk-sigops");
             }
@@ -1966,6 +1968,10 @@ void static UpdateTip(CBlockIndex *pindexNew) {
             // strMiscWarning is read by GetWarnings(), called by Qt and the JSON-RPC code to warn the user:
             strMiscWarning = _("Warning: This version is obsolete, upgrade required!");
     }
+    // Check if the network is ready to accept larger block size
+    if (!fNewBlockSizeLimit && chainActive.Height() >= 400000) {
+        fNewBlockSizeLimit = true;
+    }
 }
 
 // Disconnect chainActive's tip.
@@ -2319,7 +2325,8 @@ bool CheckBlock(const CBlock& block, CValidationState& state, bool fCheckPOW, bo
     // that can be verified before saving an orphan block.
 
     // Size limits
-    if (block.vtx.empty() || block.vtx.size() > MAX_BLOCK_SIZE || ::GetSerializeSize(block, SER_NETWORK, PROTOCOL_VERSION) > MAX_BLOCK_SIZE)
+    const int nBlkSizeLimit = fNewBlockSizeLimit ? MAX_BLOCK_SIZE : OLD_MAX_BLOCK_SIZE;
+    if (block.vtx.empty() || block.vtx.size() > (nBlkSizeLimit / 60) || ::GetSerializeSize(block, SER_NETWORK, PROTOCOL_VERSION) > nBlkSizeLimit)
         return state.DoS(100, error("CheckBlock() : size limits failed"),
                          REJECT_INVALID, "bad-blk-length");
 
@@ -2367,7 +2374,8 @@ bool CheckBlock(const CBlock& block, CValidationState& state, bool fCheckPOW, bo
     {
         nSigOps += GetLegacySigOpCount(tx);
     }
-    if (nSigOps > MAX_BLOCK_SIGOPS)
+    const int nSigOpsLimit = fNewBlockSizeLimit ? MAX_BLOCK_SIGOPS : OLD_MAX_BLOCK_SIGOPS;
+    if (nSigOps > nSigOpsLimit)
         return state.DoS(100, error("CheckBlock() : out-of-bounds SigOpCount"),
                          REJECT_INVALID, "bad-blk-sigops", true);
 
@@ -2426,7 +2434,7 @@ bool AcceptBlock(CBlock& block, CValidationState& state, CDiskBlockPos* dbp)
         // Reject block.nVersion=1 blocks when 95% (75% on testnet) of the network has upgraded:
         if (block.nVersion < 2)
         {
-            if ((!TestNet() && CBlockIndex::IsSuperMajority(2, pindexPrev, 950, 1000)) ||
+            if (fNewBlockSizeLimit || (!TestNet() && CBlockIndex::IsSuperMajority(2, pindexPrev, 950, 1000)) ||
                 (TestNet() && CBlockIndex::IsSuperMajority(2, pindexPrev, 75, 100)))
             {
                 return state.Invalid(error("AcceptBlock() : rejected nVersion=1 block"),
@@ -2436,13 +2444,23 @@ bool AcceptBlock(CBlock& block, CValidationState& state, CDiskBlockPos* dbp)
         // Reject block.nVersion=2 blocks when 95% (75% on testnet) of the network has upgraded:
         if (block.nVersion < 3)
         {
-            if ((!TestNet() && CBlockIndex::IsSuperMajority(3, pindexPrev, 950, 1000)) ||
+            if (fNewBlockSizeLimit || (!TestNet() && CBlockIndex::IsSuperMajority(3, pindexPrev, 950, 1000)) ||
                 (TestNet() && CBlockIndex::IsSuperMajority(3, pindexPrev, 75, 100)))
             {
                 return state.Invalid(error("AcceptBlock() : rejected nVersion=2 block"),
                                      REJECT_OBSOLETE, "bad-version");
             }
         }
+        // Reject block.nVersion=3 blocks when 95% (75% on testnet) of the network has upgraded:
+        if (block.nVersion < 4)
+        {
+            if (fNewBlockSizeLimit || (!TestNet() && CBlockIndex::IsSuperMajority(4, pindexPrev, 950, 1000)) ||
+                (TestNet() && CBlockIndex::IsSuperMajority(4, pindexPrev, 75, 100)))
+            {
+                return state.Invalid(error("AcceptBlock() : rejected nVersion=3 block"),
+                    REJECT_OBSOLETE, "bad-version");
+            }
+        }
         // Enforce block.nVersion=2 rule that the coinbase starts with serialized block height
         if (block.nVersion >= 2)
         {
diff --git a/src/main.h b/src/main.h
index dc50dff..0b50ad4 100644
--- a/src/main.h
+++ b/src/main.h
@@ -33,8 +33,10 @@ class CBlockIndex;
 class CBloomFilter;
 class CInv;
 
-/** The maximum allowed size for a serialized block, in bytes (network rule) */
-static const unsigned int MAX_BLOCK_SIZE = 1000000;
+/** The NEW maximum allowed size for a serialized block, in bytes (network rule) */
+static const unsigned int MAX_BLOCK_SIZE = 20 * 1024 * 1024;
+/** The OLD maximum allowed size for a serialized block, in bytes (network rule) */
+static const unsigned int OLD_MAX_BLOCK_SIZE = 1000000;
 /** Default for -blockmaxsize and -blockminsize, which control the range of sizes the mining code will create **/
 static const unsigned int DEFAULT_BLOCK_MAX_SIZE = 750000;
 static const unsigned int DEFAULT_BLOCK_MIN_SIZE = 0;
@@ -42,8 +44,10 @@ static const unsigned int DEFAULT_BLOCK_MIN_SIZE = 0;
 static const unsigned int DEFAULT_BLOCK_PRIORITY_SIZE = 50000;
 /** The maximum size for transactions we're willing to relay/mine */
 static const unsigned int MAX_STANDARD_TX_SIZE = 100000;
-/** The maximum allowed number of signature check operations in a block (network rule) */
+/** The NEW maximum allowed number of signature check operations in a block (network rule) */
 static const unsigned int MAX_BLOCK_SIGOPS = MAX_BLOCK_SIZE/50;
+/** The OLD maximum allowed number of signature check operations in a block (network rule) */
+static const unsigned int OLD_MAX_BLOCK_SIGOPS = OLD_MAX_BLOCK_SIZE / 50;
 /** Default for -maxorphantx, maximum number of orphan transactions kept in memory */
 static const unsigned int DEFAULT_MAX_ORPHAN_TRANSACTIONS = 100;
 /** Default for -maxorphanblocks, maximum number of orphan blocks kept in memory */
@@ -95,6 +99,7 @@ extern int64_t nTimeBestReceived;
 extern bool fImporting;
 extern bool fReindex;
 extern bool fBenchmark;
+extern bool fNewBlockSizeLimit;
 extern int nScriptCheckThreads;
 extern bool fTxIndex;
 extern unsigned int nCoinCacheSize;
diff --git a/src/miner.cpp b/src/miner.cpp
index e8abb8c..d587fde 100644
--- a/src/miner.cpp
+++ b/src/miner.cpp
@@ -126,8 +126,9 @@ CBlockTemplate* CreateNewBlock(const CScript& scriptPubKeyIn)
 
     // Largest block you're willing to create:
     unsigned int nBlockMaxSize = GetArg("-blockmaxsize", DEFAULT_BLOCK_MAX_SIZE);
-    // Limit to betweeen 1K and MAX_BLOCK_SIZE-1K for sanity:
-    nBlockMaxSize = std::max((unsigned int)1000, std::min((unsigned int)(MAX_BLOCK_SIZE-1000), nBlockMaxSize));
+    // Limit to betweeen 1K and MAX_BLOCK_SIZE-1K for sanity. After height 400,000 we allow miners to create larger blocks.
+    const int nBlkSizeLimit = (!TestNet() && (chainActive.Tip()->nHeight + 1) > 400000) ? MAX_BLOCK_SIZE : OLD_MAX_BLOCK_SIZE;
+    nBlockMaxSize = std::max((unsigned int)1000, std::min((unsigned int)(nBlkSizeLimit - 1000), nBlockMaxSize));
 
     // How much of the block should be dedicated to high-priority transactions,
     // included regardless of the fees they pay
@@ -228,6 +229,7 @@ CBlockTemplate* CreateNewBlock(const CScript& scriptPubKeyIn)
         uint64_t nBlockSize = 1000;
         uint64_t nBlockTx = 0;
         int nBlockSigOps = 100;
+        const int nSigOpsLimit = fNewBlockSizeLimit ? MAX_BLOCK_SIGOPS : OLD_MAX_BLOCK_SIGOPS;
         bool fSortedByFee = (nBlockPrioritySize <= 0);
 
         TxPriorityCompare comparer(fSortedByFee);
@@ -250,7 +252,7 @@ CBlockTemplate* CreateNewBlock(const CScript& scriptPubKeyIn)
 
             // Legacy limits on sigOps:
             unsigned int nTxSigOps = GetLegacySigOpCount(tx);
-            if (nBlockSigOps + nTxSigOps >= MAX_BLOCK_SIGOPS)
+            if (nBlockSigOps + nTxSigOps >= nSigOpsLimit)
                 continue;
 
             // Skip free transactions if we're past the minimum block size:
@@ -273,7 +275,7 @@ CBlockTemplate* CreateNewBlock(const CScript& scriptPubKeyIn)
             int64_t nTxFees = view.GetValueIn(tx)-tx.GetValueOut();
 
             nTxSigOps += GetP2SHSigOpCount(tx, view);
-            if (nBlockSigOps + nTxSigOps >= MAX_BLOCK_SIGOPS)
+            if (nBlockSigOps + nTxSigOps >= nSigOpsLimit)
                 continue;
 
             CValidationState state;
diff --git a/src/rpcmining.cpp b/src/rpcmining.cpp
index ef99cb3..8597238 100644
--- a/src/rpcmining.cpp
+++ b/src/rpcmining.cpp
@@ -579,8 +579,14 @@ Value getblocktemplate(const Array& params, bool fHelp)
     result.push_back(Pair("mintime", (int64_t)pindexPrev->GetMedianTimePast()+1));
     result.push_back(Pair("mutable", aMutable));
     result.push_back(Pair("noncerange", "00000000ffffffff"));
-    result.push_back(Pair("sigoplimit", (int64_t)MAX_BLOCK_SIGOPS));
-    result.push_back(Pair("sizelimit", (int64_t)MAX_BLOCK_SIZE));
+    if (fNewBlockSizeLimit) {
+        result.push_back(Pair("sigoplimit", (int64_t)MAX_BLOCK_SIGOPS));
+        result.push_back(Pair("sizelimit", (int64_t)MAX_BLOCK_SIZE));
+    }
+    else {
+        result.push_back(Pair("sigoplimit", (int64_t)OLD_MAX_BLOCK_SIGOPS));
+        result.push_back(Pair("sizelimit", (int64_t)OLD_MAX_BLOCK_SIZE));
+    }
     result.push_back(Pair("curtime", (int64_t)pblock->nTime));
     result.push_back(Pair("bits", HexBits(pblock->nBits)));
     result.push_back(Pair("height", (int64_t)(pindexPrev->nHeight+1)));


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: achow101 on June 04, 2015, 07:42:48 PM
Code:
if blocksize> 1MB then blocksize=1MB
to
Code:
if blocksize> 20MB then blocksize=20MB

This is what miners should do when they include txs in the block they find, not something the user who reads the block has to do.


This is essentially what they do now, but with 1 MB blocks. The change, as I said above, is relatively simple to implement. However, the hard fork occurs because old nodes will consider the larger blocks invalid.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: cryptocoimor on June 04, 2015, 07:53:07 PM
Code:
if blocksize> 1MB then blocksize=1MB
to
Code:
if blocksize> 20MB then blocksize=20MB

This is what miners should do when they include txs in the block they find, not something the user who reads the block has to do.


This is essentially what they do now, but with 1 MB blocks. The change, as I said above, is relatively simple to implement. However, the hard fork occurs because old nodes will consider the larger blocks invalid.

But Satoshi did say we should use a bigger block rather than a sidechain like GMaxwell's lightning.network, am I correct?

if (blocknumber > 115000)
    maxblocksize = largerlimit <-- He means a number > 1MB here, ie: 10MB or 20MB


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: achow101 on June 04, 2015, 07:54:57 PM
But Satoshi did say we should use a bigger block rather than a sidechain like GMaxwell's lightning.network, am I correct?
Not necessarily. Lightning network and sidechains did not exist when Satoshi made that post. These were all ideas that came later, long after Satoshi had left.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: cryptocoimor on June 04, 2015, 08:00:49 PM
But Satoshi did say we should use a bigger block rather than a sidechain like GMaxwell's lightning.network, am I correct?
Not necessarily. Lightning network and sidechains did not exist when Satoshi made that post. These were all ideas that came later, long after Satoshi had left.

But if Satoshi said a bigger block > 1MB was the original plan and it's OK to go, I think most of us will then agree with Gavin (we can talk about what size to go for: 20MB, 10MB, or 5MB) and finish this chaos.

I would like to change this title to "Satoshi said: Lets go for the bigger block!"


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: achow101 on June 04, 2015, 08:07:50 PM
But Satoshi did say we should use a bigger block rather than a sidechain like GMaxwell's lightning.network, am I correct?
Not necessarily. Lightning network and sidechains did not exist when Satoshi made that post. These were all ideas that came later, long after Satoshi had left.

But if Satoshi said a bigger block > 1MB was the original plan and it's OK to go, I think most of us will then agree with Gavin (we can talk about what size to go for: 20MB, 10MB, or 5MB) and finish this chaos.

I would like to change this title to "Satoshi said: Lets go for the bigger block!"
However, things have changed since Satoshi's comment. There are alternatives and possibly better ways to solve the problem. While I agree that we should raise the block size limit and that a solution must be found, I don't think we should do something only because Satoshi said it five years ago, when other solutions did not exist; what he said may no longer be applicable.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: neurotypical on June 04, 2015, 09:19:01 PM
-snip-

Why do we have to hard fork the blockchain? Why not just modify these lines in bitcoin-qt, so everyone using bitcoin-qt can recognize the larger block?

Not everyone. The old versions would not accept the block as valid at all (they would not just recognize its first 1MB), thus splitting the network into those running old versions and those running new versions. This is what is called a hard fork: your client either accepts the new rules or it does not; there is no reading just the first 1MB of the block. The number of lines you have to edit does not matter. It also does not matter if you push the hard fork into the future by a certain number of blocks; that only lessens the impact of the change, as it is more likely for people to update over a long period of time than over a short one.

So: "everyone using a new version of bitcoin-qt can recognize the larger block."

Do you mean that these few lines of changes = the whole hard fork thing we are talking about everywhere? I see people talking about two independent blockchains. If we go my way, there is only one blockchain: old-version Qt can only recognize the first 1MB of a given larger block, new Qt can recognize the full 20MB of it, that's it.

No, the old version cannot recognize the first 1MB of a bigger block, because you would be trying to give it a block larger than it allows. It would look at it, say "nope, not valid", and be done with it. Changing the old version so that it can is a patch that makes it no longer an old version.

Why the heck don't people agree with this change?:
Code:
if blocksize> 1MB then blocksize=1MB
to
Code:
if blocksize> 20MB then blocksize=20MB

Of course this is not the final solution, since the block size would need to be > 10GB in the future, but it makes Bitcoin stronger and buys us some time for a better solution, doesn't it?

The green part is the reason Gavin Andresen wants the change. Gavin will go for a hard fork only if 90% of the network is running on XT.

The red part is the reason Gregory Maxwell does not want the change. GMaxwell proposes to solve the problem by implementing a very complex thing called the lightning.network.

But even with the Lightning Network, we'll eventually hit the 1MB maximum block size, so what's the point? We need both things, if anything.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: SpanishSoldier on June 04, 2015, 09:45:54 PM
-snip-

Why do we have to hard fork the blockchain? Why not just modify these lines in bitcoin-qt, so everyone using bitcoin-qt can recognize the larger block?

Not everyone. The old versions would not accept the block as valid at all (they would not just recognize its first 1MB), thus splitting the network into those running old versions and those running new versions. This is what is called a hard fork: your client either accepts the new rules or it does not; there is no reading just the first 1MB of the block. The number of lines you have to edit does not matter. It also does not matter if you push the hard fork into the future by a certain number of blocks; that only lessens the impact of the change, as it is more likely for people to update over a long period of time than over a short one.

So: "everyone using a new version of bitcoin-qt can recognize the larger block."

Do you mean that these few lines of changes = the whole hard fork thing we are talking about everywhere? I see people talking about two independent blockchains. If we go my way, there is only one blockchain: old-version Qt can only recognize the first 1MB of a given larger block, new Qt can recognize the full 20MB of it, that's it.

No, the old version cannot recognize the first 1MB of a bigger block, because you would be trying to give it a block larger than it allows. It would look at it, say "nope, not valid", and be done with it. Changing the old version so that it can is a patch that makes it no longer an old version.

Why the heck don't people agree with this change?:
Code:
if blocksize> 1MB then blocksize=1MB
to
Code:
if blocksize> 20MB then blocksize=20MB

Of course this is not the final solution, since the block size would need to be > 10GB in the future, but it makes Bitcoin stronger and buys us some time for a better solution, doesn't it?

The green part is the reason Gavin Andresen wants the change. Gavin will go for a hard fork only if 90% of the network is running on XT.

The red part is the reason Gregory Maxwell does not want the change. GMaxwell proposes to solve the problem by implementing a very complex thing called the lightning.network.

But even with the Lightning Network, we'll eventually hit the 1MB maximum block size, so what's the point? We need both things, if anything.

The Lightning Network is still a theory, so we don't know whether it can bypass the 1MB cap or not. But if I have understood GMaxwell correctly, he is trying to bypass the 1MB cap by jointly using the lightning.network and sidechains.


Title: Re: Why have to hard fork instead of modify few lines of code?
Post by: yayayo on June 04, 2015, 11:32:28 PM
We could take the phased-in approach of switching to XT, which has the code to phase it in over time.

which... when Gavin asked if that was a good idea, he was labeled a heretic, and every thread on bitcointalk suggested he was destroying Bitcoin

Which is simply true if he divides Bitcoin into two different coins (Bitcoin and GavinCoin), because there is good reason not to join the "OMG-blocks-are-full-Bitcoin-gonna-freeze"-panic and blindly follow Gavin.

While I, more than most people, appreciate what Satoshi has given us, anything he's said hasn't considered the last 4 years of data (obviously since he hasn't spoken in over 4 years). You have to take that into consideration when you appeal to his authority.

Exactly. There's too much personality cult going on here.

I think a bit of fee pressure is healthy to drive out transaction spam. Blocks may increase in the future, yes - but not limitlessly, and certainly not based on the absence of an alternative solution for micropayments. Decentralization is what gives Bitcoin value. Decentralization is the reason I'm here.

ya.ya.yo!


Title: Re: Satoshi said: Lets go for the bigger block!
Post by: Soros Shorts on June 05, 2015, 12:18:37 AM
We can simply modify a few lines of code in bitcoin-qt to support 20MB or even 20GB blocks:

You realize that a 20GB block would never work with a 32-bit client, don't you? The maximum addressable memory for a 32-bit process is 4GB, and that needs to hold the memory cache and everything else.


Title: Re: Satoshi said: Lets go for the bigger block!
Post by: Eastfist on June 05, 2015, 12:27:01 AM
People really need to stop invoking "Satoshi says..." just to get their way. Take some responsibility for your own thinking for once.


Title: Re: Satoshi said: Lets go for the bigger block!
Post by: achow101 on June 05, 2015, 12:36:41 AM
We can simply modify a few lines of code in bitcoin-qt to support 20MB or even 20GB blocks:

You realize that a 20GB block would never work with a 32-bit client, don't you? The maximum addressable memory for a 32-bit process is 4GB, and that needs to hold the memory cache and everything else.
Bitcoin Core has a 64-bit client whose address space is more than enough to support a 20 GB block, provided that the hardware has significantly more memory than current standards.