cryptocoimor (OP)
Newbie
Offline
Activity: 9
Merit: 0
June 04, 2015, 04:53:18 PM (Last edit: June 04, 2015, 08:39:54 PM by cryptocoimor)
We can simply modify a few lines of code at bitcoin-qt to support a 20MB or even a 20GB block: if blocksize > 20MB, then blocksize = the first 20MB, and the rest of the transactions stand in line to wait for the next block.
This is what miners should do when they include txs in the block they found; Satoshi said so, see below. It can be phased in, like:
if (blocknumber > 115000) maxblocksize = largerlimit <-- by "largerlimit" he means a number > 1MB, ie: 20MB
"It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete.
When we're near the cutoff block number, I can put an alert to old versions to make sure they know they have to upgrade."
Satoshi did say we should use a bigger block size rather than a sidechain like GMaxwell's lightning network. You don't agree with 20MB? Fine! Tor can easily support a 100KB/s download currently, so pick anything with 1MB < your pick < 60MB (100KB * 60 * 10). You don't agree that we will reach 1MB per block after Q1 2016? Fine! Just say when: Q4 2015? 2017? 2018? There is always only one blockchain.
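Satoshi's phase-in line can be read as a tiny height-based rule. The sketch below is only an illustration of the idea under discussion: the fork height (115000) comes from the quote above, and the 20MB figure is the number proposed in this thread, not anything deployed.

```python
# Sketch of Satoshi's phase-in: the larger limit only takes effect
# once the chain passes a cutoff height chosen far in the future.
FORK_HEIGHT = 115000          # cutoff height from the quoted post
OLD_LIMIT = 1_000_000         # current 1 MB cap, in bytes
NEW_LIMIT = 20_000_000        # the 20 MB "largerlimit" proposed here

def max_block_size(block_number: int) -> int:
    # Equivalent to: if (blocknumber > 115000) maxblocksize = largerlimit
    return NEW_LIMIT if block_number > FORK_HEIGHT else OLD_LIMIT
```

At or below the cutoff the old 1 MB rule still applies, so nodes that upgrade early keep agreeing with old nodes until the cutoff passes.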
oblivi
June 04, 2015, 05:04:24 PM
We can simply modify a few lines of code at bitcoin-qt to support a 20MB or even a 20GB block: if block > 20MB then block = first 20MB
or it can be phased in, like:
if (blocknumber > 2000000) maxblocksize = 20MB
It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete.
When we're near the cutoff block number, I can put an alert to old versions to make sure they know they have to upgrade.
Why do we have to hard fork the blockchain? Why not just modify this code at bitcoin-qt, so that everyone using bitcoin-qt can recognize the larger block? I don't understand why they don't just do this, but it's clear there is a reason, otherwise a hard fork wouldn't be risked. It seems modifying the block size is not as easy as that and has a deeper impact on the system.
cryptworld
June 04, 2015, 05:10:08 PM
If they want to do a hard fork, it is because it is necessary; a hard fork is a risky thing that no one wants to do unless it is mandatory.
shorena
Copper Member
Legendary
Offline
Activity: 1498
Merit: 1520
No I dont escrow anymore.
June 04, 2015, 05:14:19 PM
-snip-
Why do we have to hard fork the blockchain? Why not just modify this code at bitcoin-qt, so that everyone using bitcoin-qt can recognize the larger block?
Not everyone: the old versions would not accept the block as valid, thus splitting the network into those running old versions and those running new versions. This is what is called a hard fork, as your client either accepts the changes or not. The number of lines you have to edit does not matter. It also does not matter if you push the hard fork into the future by a certain number of blocks; that only lessens the impact of the change, since people are more likely to update over a long period of time than over a short one.
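The split described here can be illustrated with a toy model. These two functions are hypothetical node rules reduced to a size check only; a real client validates far more than size.

```python
# Two node versions that differ only in the block size they accept.
def old_node_accepts(block_size_bytes: int) -> bool:
    return block_size_bytes <= 1_000_000       # old 1 MB rule

def new_node_accepts(block_size_bytes: int) -> bool:
    return block_size_bytes <= 20_000_000      # proposed 20 MB rule

# A 2 MB block mined after the change:
big_block = 2_000_000
# Old nodes reject it while new nodes accept it, so each group extends
# a different chain tip: that disagreement is the hard fork.
print(old_node_accepts(big_block), new_node_accepts(big_block))  # False True
```

Blocks under 1 MB satisfy both rules, which is why nothing splits until someone actually mines an oversized block.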
Im not really here, its just your imagination.
cryptocoimor (OP)
Newbie
Offline
Activity: 9
Merit: 0
June 04, 2015, 05:19:49 PM
-snip-
Why do we have to hard fork the blockchain? Why not just modify this code at bitcoin-qt, so that everyone using bitcoin-qt can recognize the larger block?
Not everyone: the old versions would only recognize the first 1MB of a given block. This is what is called a hard fork, as your client either reads the first 1MB or the whole of the block. -snip-
So: "everyone using the new version of bitcoin-qt can recognize the larger block." Do you mean these few lines of changes = the whole hard-fork thing we are talking about everywhere? I see people talking about two independent blockchains. If we go my way, there is only one blockchain: the old-version qt can only recognize the first 1MB of a given larger block, the new qt can recognize the whole 20MB of it. That's it.
Klestin
June 04, 2015, 05:21:08 PM
We can simply modify a few lines of code at bitcoin-qt to support a 20MB or even a 20GB block: if block > 20MB then block = first 20MB
Absolutely nobody is talking about changing the block size to 20 MB. They are talking about changing the MAXIMUM block size, in a phased approach, to eventually reach 20 MB. Even when the 20 MB max is set, block sizes will not all be 20 MB. Some will be 1 MB. Some will be 100 KB. This change is by definition a hard fork. The nodes that still have the old 1MB limit will not accept the larger blocks. Such is the nature of cryptocurrencies.
bitllionaire
Legendary
Offline
Activity: 1120
Merit: 1000
June 04, 2015, 05:22:59 PM
Because everybody needs to update their wallet with the new code; if not, there would be rejected blocks and transactions, i.e., chaos.
shorena
Copper Member
Legendary
Offline
Activity: 1498
Merit: 1520
No I dont escrow anymore.
June 04, 2015, 05:28:46 PM
-snip-
-snip- If we go my way, there is only one blockchain: the old-version qt can only recognize the first 1MB of a given larger block, the new qt can recognize the whole 20MB of it. That's it.
No, the old version cannot recognize the first 1MB of a block, because you would be trying to give it a bigger block. It would look at it, say "nope, not valid", and be done with it. Changing the old version is a patch that makes it no longer an old version.
Im not really here, its just your imagination.
Elwar
Legendary
Offline
Activity: 3598
Merit: 2386
Viva Ut Vivas
June 04, 2015, 05:30:05 PM
We could take the phased-in approach by switching to XT, which has the code to phase it in over time...
...which, when Gavin asked if that was a good idea, got him labeled a heretic, and every thread on bitcointalk suggested he was destroying Bitcoin.
First seastead company actually selling sea homes: Ocean Builders https://ocean.builders Of course we accept bitcoin.
cryptocoimor (OP)
Newbie
Offline
Activity: 9
Merit: 0
June 04, 2015, 05:37:02 PM (Last edit: June 04, 2015, 06:00:09 PM by cryptocoimor)
-snip-
-snip- Changing the old version is a patch that makes it no longer an old version.
Why the heck don't people agree with this change: from "if blocksize > 1MB then blocksize = 1MB" to "if blocksize > 20MB then blocksize = 20MB"? Of course this is not the final solution, since the block size would need to be > 10GB in the future, but it makes Bitcoin stronger and buys us some time for a better solution, doesn't it?
SpanishSoldier
June 04, 2015, 05:43:46 PM
-snip-
-snip- Of course this is not the final solution, since the block size would need to be > 10GB in the future, but it makes Bitcoin stronger and buys us some time for a better solution, doesn't it?
The green part is the reason Gavin Andresen wants the change; Gavin will go for a hard fork only if 90% of the network is running XT. The red part is the reason Gregory Maxwell does not want the change; GMaxwell proposes solving the problem by implementing a very complex thing called the lightning network.
cryptocoimor (OP)
Newbie
Offline
Activity: 9
Merit: 0
June 04, 2015, 05:59:15 PM
-snip-
-snip- The red part is the reason Gregory Maxwell does not want the change; GMaxwell proposes solving the problem by implementing a very complex thing called the lightning network.
Can't we just go for the 20MB first, then use the time (years) we bought to properly test Gregory Maxwell's lightning network again and again, and if the results are all good, then use it?
jeannemadrigal2
June 04, 2015, 06:10:02 PM
-snip-
-snip- GMaxwell proposes solving the problem by implementing a very complex thing called the lightning network.
You forgot another reason for opposing the bigger block size. But it is clear at this point that inaction is not an option: not doing anything will bring a change too, just not a very good one.
SpanishSoldier
June 04, 2015, 06:17:34 PM
-snip-
-snip- Can't we just go for the 20MB first, then use the time (years) we bought to properly test Gregory Maxwell's lightning network again and again, and if the results are all good, then use it?
Gavin proposed exactly what you are proposing here, but GMaxwell did not agree. Rumour is that GMaxwell's company Blockstream, which has received 21M USD in funding, is working on solving the lightning network problem and an implementation of sidechains; hence GMaxwell wants the 1 MB cap to stay, so Blockstream can reap the benefit of solving the problem. Due to this disagreement, Gavin said that if consensus cannot be reached at the dev level, then it goes to the node level. Hence he asked people to use XT (which is currently almost identical to Bitcoin Core) to show support for him. If 50% of the bitcoin network runs XT, he'll request the devs again to modify Bitcoin Core so that a hard fork does not happen. If they still do not agree, he'll wait for the network to run 90% XT and then implement the changes on XT. That is when the hard fork happens, but with 90% of the network already running XT, the XT chain will invariably survive. In any case, none of this is going to happen before February 2016; all of this is still at the discussion level.
-snip- You forgot another reason for opposing the bigger block size.
Didn't forget, but didn't mention it, because I have respect for GMaxwell and believe it is not as blunt as it has been portrayed. But I have mentioned it now.
achow101
Staff
Legendary
Offline
Activity: 3430
Merit: 6705
Just writing some code
June 04, 2015, 07:11:58 PM
Why the heck don't people agree with this change: from "if blocksize > 1MB then blocksize = 1MB" to "if blocksize > 20MB then blocksize = 20MB"? -snip-
Because Bitcoin does not work like that. A valid block header contains a merkle root, which is a hash of all of the transactions contained within the block. If Bitcoin nodes only looked at a portion of the block, validation would fail, because hashing the transactions in that portion would NOT reproduce the merkle root, which in turn produces a different header. Thus the block would be rejected as invalid. Also, how would the node know that all of the other transactions in the rest of the block were confirmed?
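This merkle-root argument can be checked with a toy reconstruction of Bitcoin-style merkle hashing (double SHA-256, with an odd node paired with itself). Hashing only the "first half" of a block's transactions yields a different root, so the header no longer validates. The txids below are fabricated for illustration.

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    # Bitcoin uses double SHA-256 throughout
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids: list) -> bytes:
    # Pair hashes level by level; duplicate the last hash when odd
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

txids = [dsha256(bytes([i])) for i in range(8)]  # 8 fake txids
full_root = merkle_root(txids)
partial_root = merkle_root(txids[:4])            # "only read the first part"
print(full_root != partial_root)                 # True: the roots differ
```

Since the header commits to full_root, a node that hashes only part of the block computes partial_root, sees the mismatch, and rejects the block.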
cryptocoimor (OP)
Newbie
Offline
Activity: 9
Merit: 0
June 04, 2015, 07:21:31 PM
-snip- Because Bitcoin does not work like that. A valid block header contains a merkle root, which is a hash of all of the transactions contained within the block. -snip-
You are talking about the reason why an old-version bitcoin-qt can't just read the first 1MB if a block is > 1MB, right? Because your explanation has nothing to do with the quote.
How would the node know that all of the other transactions in the rest of the block were confirmed?
No, they don't. That's why people will soon all use the new version of bitcoin-qt to read the whole 20MB block.
achow101
Staff
Legendary
Offline
Activity: 3430
Merit: 6705
Just writing some code
June 04, 2015, 07:31:45 PM
-snip- You are talking about the reason why an old-version bitcoin-qt can't just read the first 1MB if a block is > 1MB, right? Because your explanation has nothing to do with the quote.
It does, I think: your idea is that if the size of the block is greater than X, then you only read and take the first X MB of the block, correct? Please correct me if I'm wrong.
-snip- No, they don't. That's why people will soon all use the new version of bitcoin-qt to read the whole 20MB block.
Maybe I misunderstood your proposal. Also, your quote is not what Satoshi was saying. His actual quote is this:
It can be phased in, like:
if (blocknumber > 115000) maxblocksize = largerlimit
It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete.
When we're near the cutoff block number, I can put an alert to old versions to make sure they know they have to upgrade.
This is about how and when to implement the larger blocks: the hard fork to the new block size should occur at a block number so far in the future that everyone has upgraded to a client that supports the new blocks.
cryptocoimor (OP)
Newbie
Offline
Activity: 9
Merit: 0
June 04, 2015, 07:32:23 PM
-snip-
-snip- Didn't forget, but didn't mention it, because I have respect for GMaxwell and believe it is not as blunt as it has been portrayed. But I have mentioned it now.
Then why doesn't Gavin agree with GMaxwell's lightning network? Has Gavin proved that it doesn't work, or that it is worse than his 20MB blocks?
achow101
Staff
Legendary
Offline
Activity: 3430
Merit: 6705
Just writing some code
June 04, 2015, 07:35:51 PM
Then why doesn't Gavin agree with GMaxwell's lightning network? Has Gavin proved that it doesn't work, or that it is worse than his 20MB blocks?
It hasn't been implemented yet, and it will be difficult to implement a stable, working version within a year. All that exists of the lightning network is a proposal describing how the system will work; it is still a proposal. On the other hand, Gavin's proposal for the 20 MB block size is relatively easy to implement and test within a year.
cryptocoimor (OP)
Newbie
Offline
Activity: 9
Merit: 0
June 04, 2015, 07:39:15 PM (Last edit: June 04, 2015, 07:52:24 PM by cryptocoimor)
From "if blocksize > 1MB then blocksize = 1MB" to "if blocksize > 20MB then blocksize = 20MB"
This is what miners should do when they include txs in the block they found, not what the user who reads the block should do.
-snip- On the other hand, Gavin's proposal for the 20 MB block size is relatively easy to implement and test within a year.
OK, but from what I saw, Satoshi did say we should use bigger blocks rather than a sidechain like GMaxwell's lightning network. Am I correct?
if (blocknumber > 115000) maxblocksize = largerlimit <-- he means a number > 1MB here, ie: 10MB or 20MB
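The miner-side rule described above (fill a block up to the cap; the rest stand in line for the next block) can be sketched as a greedy fill. This is a hypothetical illustration only; real miners also prioritize transactions by fee, which the sketch ignores.

```python
MAX_BLOCK_BYTES = 20_000_000  # the 20 MB cap discussed in this thread

def fill_block(tx_sizes):
    """Greedily include transactions until the cap; leave the rest queued."""
    included, leftover, used = [], [], 0
    for i, size in enumerate(tx_sizes):
        if used + size <= MAX_BLOCK_BYTES:
            included.append(i)
            used += size
        else:
            leftover.append(i)  # waits in line for the next block
    return included, leftover

print(fill_block([15_000_000, 4_000_000, 2_000_000]))  # ([0, 1], [2])
```

The third transaction would push the block over the cap, so it stays queued; nothing is truncated or lost, it is simply confirmed one block later.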