Bitcoin Forum
Topic: Satoshi said: Lets go for the bigger block!  (Read 1640 times)
wr104
Sr. Member
Activity: 329
Merit: 250
June 04, 2015, 07:40:29 PM
#21

We can simply modify a few lines of code in bitcoin-qt to support 20 MB or even 20 GB blocks:

Code:
if blocksize > 20MB then blocksize = first 20MB; the rest stand in line and wait for the next block.

or

It can be phased in, like:

if (blocknumber > 115000)      <-- e.g. raise the threshold to 2000000
    maxblocksize = largerlimit <-- e.g. 20MB; the rest stand in line and wait for the next block.

It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete.

When we're near the cutoff block number, I can put an alert to old versions to make sure they know they have to upgrade.

Why do we have to hard fork the blockchain? Why not just modify this code in bitcoin-qt, so that everyone running a new version of bitcoin-qt can recognize the full 20 MB of a larger block, while those running an old version only recognize the first 1 MB of it?

In my scheme, there is always only one blockchain.

Unfortunately, it is not that simple, because you are changing network rules. If you are not careful, you risk creating consensus forks, which are the worst nightmare for a coin.

You would need a little more code, like the Git patch I wrote below against Bitcoin 0.9.5.

Basically, you need a kill switch for older wallet versions (for example: begin rejecting older block versions once 95% of the nodes have upgraded) and then allow the larger block size after a certain block height (for example 400,000, which is roughly nine months from today).

Code:
 src/core.h        |  2 +-
 src/init.cpp      |  4 ++++
 src/main.cpp      | 30 ++++++++++++++++++++++++------
 src/main.h        | 11 ++++++++---
 src/miner.cpp     | 10 ++++++----
 src/rpcmining.cpp | 10 ++++++++--
 6 files changed, 51 insertions(+), 16 deletions(-)

diff --git a/src/core.h b/src/core.h
index d89f06b..01af749 100644
--- a/src/core.h
+++ b/src/core.h
@@ -345,7 +345,7 @@ class CBlockHeader
 {
 public:
     // header
-    static const int CURRENT_VERSION=3;
+    static const int CURRENT_VERSION=4;
     int nVersion;
     uint256 hashPrevBlock;
     uint256 hashMerkleRoot;
diff --git a/src/init.cpp b/src/init.cpp
index 6f9abca..d3f40be 100644
--- a/src/init.cpp
+++ b/src/init.cpp
@@ -1103,6 +1103,10 @@ bool AppInit2(boost::thread_group& threadGroup)
 
     RandAddSeedPerfmon();
 
+    // Check if the network can begin accepting larger block size
+    if (chainActive.Height() >= 400000)
+        fNewBlockSizeLimit = true;
+
     //// debug print
     LogPrintf("mapBlockIndex.size() = %u\n",   mapBlockIndex.size());
     LogPrintf("nBestHeight = %d\n",                   chainActive.Height());
diff --git a/src/main.cpp b/src/main.cpp
index a42bb8a..caa2352 100644
--- a/src/main.cpp
+++ b/src/main.cpp
@@ -47,6 +47,7 @@ bool fImporting = false;
 bool fReindex = false;
 bool fBenchmark = false;
 bool fTxIndex = false;
+bool fNewBlockSizeLimit = false;
 unsigned int nCoinCacheSize = 5000;
 
 /** Fees smaller than this (in satoshi) are considered zero fee (for transaction creation) */
@@ -1809,6 +1810,7 @@ bool ConnectBlock(CBlock& block, CValidationState& state, CBlockIndex* pindex, C
     int64_t nFees = 0;
     int nInputs = 0;
     unsigned int nSigOps = 0;
+    const int nSigOpsLimit = fNewBlockSizeLimit ? MAX_BLOCK_SIGOPS : OLD_MAX_BLOCK_SIGOPS;
     CDiskTxPos pos(pindex->GetBlockPos(), GetSizeOfCompactSize(block.vtx.size()));
     std::vector<std::pair<uint256, CDiskTxPos> > vPos;
     vPos.reserve(block.vtx.size());
@@ -1818,7 +1820,7 @@ bool ConnectBlock(CBlock& block, CValidationState& state, CBlockIndex* pindex, C
 
         nInputs += tx.vin.size();
         nSigOps += GetLegacySigOpCount(tx);
-        if (nSigOps > MAX_BLOCK_SIGOPS)
+        if (nSigOps > nSigOpsLimit)
             return state.DoS(100, error("ConnectBlock() : too many sigops"),
                              REJECT_INVALID, "bad-blk-sigops");
 
@@ -1834,7 +1836,7 @@ bool ConnectBlock(CBlock& block, CValidationState& state, CBlockIndex* pindex, C
                 // this is to prevent a "rogue miner" from creating
                 // an incredibly-expensive-to-validate block.
                 nSigOps += GetP2SHSigOpCount(tx, view);
-                if (nSigOps > MAX_BLOCK_SIGOPS)
+                if (nSigOps > nSigOpsLimit)
                     return state.DoS(100, error("ConnectBlock() : too many sigops"),
                                      REJECT_INVALID, "bad-blk-sigops");
             }
@@ -1966,6 +1968,10 @@ void static UpdateTip(CBlockIndex *pindexNew) {
             // strMiscWarning is read by GetWarnings(), called by Qt and the JSON-RPC code to warn the user:
             strMiscWarning = _("Warning: This version is obsolete, upgrade required!");
     }
+    // Check if the network is ready to accept larger block size
+    if (!fNewBlockSizeLimit && chainActive.Height() >= 400000) {
+        fNewBlockSizeLimit = true;
+    }
 }
 
 // Disconnect chainActive's tip.
@@ -2319,7 +2325,8 @@ bool CheckBlock(const CBlock& block, CValidationState& state, bool fCheckPOW, bo
     // that can be verified before saving an orphan block.
 
     // Size limits
-    if (block.vtx.empty() || block.vtx.size() > MAX_BLOCK_SIZE || ::GetSerializeSize(block, SER_NETWORK, PROTOCOL_VERSION) > MAX_BLOCK_SIZE)
+    const int nBlkSizeLimit = fNewBlockSizeLimit ? MAX_BLOCK_SIZE : OLD_MAX_BLOCK_SIZE;
+    if (block.vtx.empty() || block.vtx.size() > (nBlkSizeLimit / 60) || ::GetSerializeSize(block, SER_NETWORK, PROTOCOL_VERSION) > nBlkSizeLimit)
         return state.DoS(100, error("CheckBlock() : size limits failed"),
                          REJECT_INVALID, "bad-blk-length");
 
@@ -2367,7 +2374,8 @@ bool CheckBlock(const CBlock& block, CValidationState& state, bool fCheckPOW, bo
     {
         nSigOps += GetLegacySigOpCount(tx);
     }
-    if (nSigOps > MAX_BLOCK_SIGOPS)
+    const int nSigOpsLimit = fNewBlockSizeLimit ? MAX_BLOCK_SIGOPS : OLD_MAX_BLOCK_SIGOPS;
+    if (nSigOps > nSigOpsLimit)
         return state.DoS(100, error("CheckBlock() : out-of-bounds SigOpCount"),
                          REJECT_INVALID, "bad-blk-sigops", true);
 
@@ -2426,7 +2434,7 @@ bool AcceptBlock(CBlock& block, CValidationState& state, CDiskBlockPos* dbp)
         // Reject block.nVersion=1 blocks when 95% (75% on testnet) of the network has upgraded:
         if (block.nVersion < 2)
         {
-            if ((!TestNet() && CBlockIndex::IsSuperMajority(2, pindexPrev, 950, 1000)) ||
+            if (fNewBlockSizeLimit || (!TestNet() && CBlockIndex::IsSuperMajority(2, pindexPrev, 950, 1000)) ||
                 (TestNet() && CBlockIndex::IsSuperMajority(2, pindexPrev, 75, 100)))
             {
                 return state.Invalid(error("AcceptBlock() : rejected nVersion=1 block"),
@@ -2436,13 +2444,23 @@ bool AcceptBlock(CBlock& block, CValidationState& state, CDiskBlockPos* dbp)
         // Reject block.nVersion=2 blocks when 95% (75% on testnet) of the network has upgraded:
         if (block.nVersion < 3)
         {
-            if ((!TestNet() && CBlockIndex::IsSuperMajority(3, pindexPrev, 950, 1000)) ||
+            if (fNewBlockSizeLimit || (!TestNet() && CBlockIndex::IsSuperMajority(3, pindexPrev, 950, 1000)) ||
                 (TestNet() && CBlockIndex::IsSuperMajority(3, pindexPrev, 75, 100)))
             {
                 return state.Invalid(error("AcceptBlock() : rejected nVersion=2 block"),
                                      REJECT_OBSOLETE, "bad-version");
             }
         }
+        // Reject block.nVersion=3 blocks when 95% (75% on testnet) of the network has upgraded:
+        if (block.nVersion < 4)
+        {
+            if (fNewBlockSizeLimit || (!TestNet() && CBlockIndex::IsSuperMajority(4, pindexPrev, 950, 1000)) ||
+                (TestNet() && CBlockIndex::IsSuperMajority(4, pindexPrev, 75, 100)))
+            {
+                return state.Invalid(error("AcceptBlock() : rejected nVersion=3 block"),
+                    REJECT_OBSOLETE, "bad-version");
+            }
+        }
         // Enforce block.nVersion=2 rule that the coinbase starts with serialized block height
         if (block.nVersion >= 2)
         {
diff --git a/src/main.h b/src/main.h
index dc50dff..0b50ad4 100644
--- a/src/main.h
+++ b/src/main.h
@@ -33,8 +33,10 @@ class CBlockIndex;
 class CBloomFilter;
 class CInv;
 
-/** The maximum allowed size for a serialized block, in bytes (network rule) */
-static const unsigned int MAX_BLOCK_SIZE = 1000000;
+/** The NEW maximum allowed size for a serialized block, in bytes (network rule) */
+static const unsigned int MAX_BLOCK_SIZE = 20 * 1024 * 1024;
+/** The OLD maximum allowed size for a serialized block, in bytes (network rule) */
+static const unsigned int OLD_MAX_BLOCK_SIZE = 1000000;
 /** Default for -blockmaxsize and -blockminsize, which control the range of sizes the mining code will create **/
 static const unsigned int DEFAULT_BLOCK_MAX_SIZE = 750000;
 static const unsigned int DEFAULT_BLOCK_MIN_SIZE = 0;
@@ -42,8 +44,10 @@ static const unsigned int DEFAULT_BLOCK_MIN_SIZE = 0;
 static const unsigned int DEFAULT_BLOCK_PRIORITY_SIZE = 50000;
 /** The maximum size for transactions we're willing to relay/mine */
 static const unsigned int MAX_STANDARD_TX_SIZE = 100000;
-/** The maximum allowed number of signature check operations in a block (network rule) */
+/** The NEW maximum allowed number of signature check operations in a block (network rule) */
 static const unsigned int MAX_BLOCK_SIGOPS = MAX_BLOCK_SIZE/50;
+/** The OLD maximum allowed number of signature check operations in a block (network rule) */
+static const unsigned int OLD_MAX_BLOCK_SIGOPS = OLD_MAX_BLOCK_SIZE / 50;
 /** Default for -maxorphantx, maximum number of orphan transactions kept in memory */
 static const unsigned int DEFAULT_MAX_ORPHAN_TRANSACTIONS = 100;
 /** Default for -maxorphanblocks, maximum number of orphan blocks kept in memory */
@@ -95,6 +99,7 @@ extern int64_t nTimeBestReceived;
 extern bool fImporting;
 extern bool fReindex;
 extern bool fBenchmark;
+extern bool fNewBlockSizeLimit;
 extern int nScriptCheckThreads;
 extern bool fTxIndex;
 extern unsigned int nCoinCacheSize;
diff --git a/src/miner.cpp b/src/miner.cpp
index e8abb8c..d587fde 100644
--- a/src/miner.cpp
+++ b/src/miner.cpp
@@ -126,8 +126,9 @@ CBlockTemplate* CreateNewBlock(const CScript& scriptPubKeyIn)
 
     // Largest block you're willing to create:
     unsigned int nBlockMaxSize = GetArg("-blockmaxsize", DEFAULT_BLOCK_MAX_SIZE);
-    // Limit to betweeen 1K and MAX_BLOCK_SIZE-1K for sanity:
-    nBlockMaxSize = std::max((unsigned int)1000, std::min((unsigned int)(MAX_BLOCK_SIZE-1000), nBlockMaxSize));
+    // Limit to betweeen 1K and MAX_BLOCK_SIZE-1K for sanity. After height 400,000 we allow miners to create larger blocks.
+    const int nBlkSizeLimit = (!TestNet() && (chainActive.Tip()->nHeight + 1) > 400000) ? MAX_BLOCK_SIZE : OLD_MAX_BLOCK_SIZE;
+    nBlockMaxSize = std::max((unsigned int)1000, std::min((unsigned int)(nBlkSizeLimit - 1000), nBlockMaxSize));
 
     // How much of the block should be dedicated to high-priority transactions,
     // included regardless of the fees they pay
@@ -228,6 +229,7 @@ CBlockTemplate* CreateNewBlock(const CScript& scriptPubKeyIn)
         uint64_t nBlockSize = 1000;
         uint64_t nBlockTx = 0;
         int nBlockSigOps = 100;
+        const int nSigOpsLimit = fNewBlockSizeLimit ? MAX_BLOCK_SIGOPS : OLD_MAX_BLOCK_SIGOPS;
         bool fSortedByFee = (nBlockPrioritySize <= 0);
 
         TxPriorityCompare comparer(fSortedByFee);
@@ -250,7 +252,7 @@ CBlockTemplate* CreateNewBlock(const CScript& scriptPubKeyIn)
 
             // Legacy limits on sigOps:
             unsigned int nTxSigOps = GetLegacySigOpCount(tx);
-            if (nBlockSigOps + nTxSigOps >= MAX_BLOCK_SIGOPS)
+            if (nBlockSigOps + nTxSigOps >= nSigOpsLimit)
                 continue;
 
             // Skip free transactions if we're past the minimum block size:
@@ -273,7 +275,7 @@ CBlockTemplate* CreateNewBlock(const CScript& scriptPubKeyIn)
             int64_t nTxFees = view.GetValueIn(tx)-tx.GetValueOut();
 
             nTxSigOps += GetP2SHSigOpCount(tx, view);
-            if (nBlockSigOps + nTxSigOps >= MAX_BLOCK_SIGOPS)
+            if (nBlockSigOps + nTxSigOps >= nSigOpsLimit)
                 continue;
 
             CValidationState state;
diff --git a/src/rpcmining.cpp b/src/rpcmining.cpp
index ef99cb3..8597238 100644
--- a/src/rpcmining.cpp
+++ b/src/rpcmining.cpp
@@ -579,8 +579,14 @@ Value getblocktemplate(const Array& params, bool fHelp)
     result.push_back(Pair("mintime", (int64_t)pindexPrev->GetMedianTimePast()+1));
     result.push_back(Pair("mutable", aMutable));
     result.push_back(Pair("noncerange", "00000000ffffffff"));
-    result.push_back(Pair("sigoplimit", (int64_t)MAX_BLOCK_SIGOPS));
-    result.push_back(Pair("sizelimit", (int64_t)MAX_BLOCK_SIZE));
+    if (fNewBlockSizeLimit) {
+        result.push_back(Pair("sigoplimit", (int64_t)MAX_BLOCK_SIGOPS));
+        result.push_back(Pair("sizelimit", (int64_t)MAX_BLOCK_SIZE));
+    }
+    else {
+        result.push_back(Pair("sigoplimit", (int64_t)OLD_MAX_BLOCK_SIGOPS));
+        result.push_back(Pair("sizelimit", (int64_t)OLD_MAX_BLOCK_SIZE));
+    }
     result.push_back(Pair("curtime", (int64_t)pblock->nTime));
     result.push_back(Pair("bits", HexBits(pblock->nBits)));
     result.push_back(Pair("height", (int64_t)(pindexPrev->nHeight+1)));
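For a sense of scale, the limits defined in the patch above work out as follows. This is just a standalone check of the constants (compiled separately, not part of the patch): the new cap is roughly 21x the old one, and the sigops limit scales with it.

Code:
#include <cstdio>

int main()
{
    // Values follow directly from the definitions in the diff above.
    const unsigned int MAX_BLOCK_SIZE       = 20 * 1024 * 1024;        // 20,971,520 bytes
    const unsigned int OLD_MAX_BLOCK_SIZE   = 1000000;                 //  1,000,000 bytes
    const unsigned int MAX_BLOCK_SIGOPS     = MAX_BLOCK_SIZE / 50;     //    419,430 sigops
    const unsigned int OLD_MAX_BLOCK_SIGOPS = OLD_MAX_BLOCK_SIZE / 50; //     20,000 sigops

    std::printf("new: %u bytes, %u sigops\n", MAX_BLOCK_SIZE, MAX_BLOCK_SIGOPS);
    std::printf("old: %u bytes, %u sigops\n", OLD_MAX_BLOCK_SIZE, OLD_MAX_BLOCK_SIGOPS);
    return 0;
}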
achow101
Staff
Legendary
Activity: 3402
Merit: 6642
Just writing some code
June 04, 2015, 07:42:48 PM
#22

Code:
if blocksize> 1MB then blocksize=1MB
to
Code:
if blocksize> 20MB then blocksize=20MB

This is what miners should do when they include txs in the block they find; it is not something for the users who read the block.


This is essentially what they do now, but with 1 MB blocks. The change, as I said above, is a relatively simple implementation. However, the hard fork occurs because old nodes will consider the larger blocks invalid.
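To make the "old nodes will consider the larger blocks invalid" point concrete, here is a minimal sketch (illustrative names, not actual Bitcoin Core code) of the size check an un-upgraded client keeps applying. Any 20 MB block an upgraded miner produces fails this check, so old nodes reject it and keep building on their own chain - that divergence is the hard fork.

Code:
#include <cstddef>

// The 1 MB consensus rule hard-coded into an old client (illustrative constant).
static const size_t OLD_MAX_BLOCK_SIZE = 1000000;

// An old node's view of block-size validity: it knows nothing about any new
// 20 MB limit, so a larger block is simply invalid to it.
bool OldNodeBlockSizeOk(size_t nSerializedBlockSize)
{
    return nSerializedBlockSize <= OLD_MAX_BLOCK_SIZE;
}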

cryptocoimor (OP)
Newbie
Activity: 9
Merit: 0
June 04, 2015, 07:53:07 PM
#23

Code:
if blocksize> 1MB then blocksize=1MB
to
Code:
if blocksize> 20MB then blocksize=20MB

This is what miners should do when they include txs in the block they find; it is not something for the users who read the block.


This is essentially what they do now, but with 1 MB blocks. The change, as I said above, is a relatively simple implementation. However, the hard fork occurs because old nodes will consider the larger blocks invalid.

But Satoshi did say we should use bigger blocks rather than using sidechains like GMaxwell's lightning.network, am I correct?

if (blocknumber > 115000)
    maxblocksize = largerlimit <-- He means a number > 1 MB here, i.e. 10 MB or 20 MB
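Read literally, the quoted snippet is just a height-gated limit. A hedged sketch of what that might look like in code (the 20 MB figure is the number debated in this thread, not something Satoshi specified beyond "largerlimit"):

Code:
// Illustrative values only: 115000 and "largerlimit" come from Satoshi's quote,
// 20 MB is the size under discussion in this thread.
static const unsigned int OLD_LIMIT_BYTES    = 1000000;       // 1 MB
static const unsigned int LARGER_LIMIT_BYTES = 20 * 1000000;  // e.g. 20 MB
static const int          SWITCH_HEIGHT      = 115000;

unsigned int MaxBlockSizeAt(int nBlockHeight)
{
    return (nBlockHeight > SWITCH_HEIGHT) ? LARGER_LIMIT_BYTES : OLD_LIMIT_BYTES;
}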
achow101
Staff
Legendary
Activity: 3402
Merit: 6642
Just writing some code
June 04, 2015, 07:54:57 PM
#24

But Satoshi did say we should use bigger blocks rather than using sidechains like GMaxwell's lightning.network, am I correct?
Not necessarily. Lightning network and sidechains did not exist when Satoshi made that post. These were all ideas that came later, long after Satoshi had left.

cryptocoimor (OP)
Newbie
Activity: 9
Merit: 0
June 04, 2015, 08:00:49 PM
#25

But Satoshi did say we should use bigger blocks rather than using sidechains like GMaxwell's lightning.network, am I correct?
Not necessarily. Lightning network and sidechains did not exist when Satoshi made that post. These were all ideas that came later, long after Satoshi had left.

But if Satoshi said a bigger block (> 1 MB) was the original plan and it's OK to go ahead, I think most of us will then agree with Gavin (we can discuss what size to go for: 20 MB, 10 MB or 5 MB) and end this chaos.

I would like to change this title to "Satoshi said lets go for the bigger block!"
achow101
Staff
Legendary
Activity: 3402
Merit: 6642
Just writing some code
June 04, 2015, 08:07:50 PM
#26

But Satoshi did say we should use bigger blocks rather than using sidechains like GMaxwell's lightning.network, am I correct?
Not necessarily. Lightning network and sidechains did not exist when Satoshi made that post. These were all ideas that came later, long after Satoshi had left.

But if Satoshi said a bigger block (> 1 MB) was the original plan and it's OK to go ahead, I think most of us will then agree with Gavin (we can discuss what size to go for: 20 MB, 10 MB or 5 MB) and end this chaos.

I would like to change this title to "Satoshi said lets go for the bigger block!"
However, things have changed since Satoshi's comment. There are alternatives and possibly better ways to solve the problem. While I agree that we should raise the block size limit and that a solution must be found, I don't think we should do only what Satoshi said five years ago, when other solutions did not exist; what he said may no longer be applicable.

neurotypical
Hero Member
Activity: 672
Merit: 502
June 04, 2015, 09:19:01 PM
#27

-snip-

Why do we have to hard fork the blockchain? Why not just modify this code in bitcoin-qt, so that everyone using bitcoin-qt can recognize the larger block?

Not everyone: the old versions would not accept the block as valid (not merely "recognize the first 1 MB" of it), thus splitting the network into those running old versions and those running new versions. This is what is called a hard fork, as your client either accepts the changed rules or it does not (it does not "read the first 1 MB" instead of the whole block). The number of lines you have to edit does not matter. It also does not matter if you push the hard fork into the future by a certain number of blocks; that only lessens the impact of the change, since it is more likely for people to update over a long period of time than over a short one.

So: "everyone using the new-version bitcoin-qt can recognize the larger block."

Do you mean these few lines of changes are the whole hard-fork thing we are talking about everywhere? I see people talking about two independent blockchains. If we go my way, there is only one blockchain: old-version Qt can only recognize the first 1 MB of a given larger block, while new Qt can recognize the full 20 MB of it, that's it.

No, the old version cannot recognize the first 1 MB of a bigger block you try to give it. It would look at it, say "nope, not valid," and be done with it. Changing the old version is a patch that makes it no longer an old version.

Why the heck don't people agree with this change?:
Code:
if blocksize> 1MB then blocksize=1MB
to
Code:
if blocksize> 20MB then blocksize=20MB

Of course this is not the final solution, since the block size would need to be > 10 GB in the future, but it makes Bitcoin stronger and buys us some time for a better solution, doesn't it?

The green part - "it makes Bitcoin stronger and buys us some time" - is the reason Gavin Andresen wants the change. Gavin will go for a hard fork only if 90% of the network is running on XT.

The red part - "the block size would need to be > 10 GB in the future" - is the reason Gregory Maxwell does not want the change. GMaxwell proposes to solve the problem by implementing a very complex thing called the lightning.network.

But even with the Lightning Network, we'll eventually hit the 1 MB maximum block size, so what's the point? We need both things, if anything.
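Rough numbers behind that worry, assuming an average transaction size of about 250 bytes (an assumption for illustration, not a figure from this thread): a 1 MB block every ~10 minutes caps on-chain throughput at only a handful of transactions per second.

Code:
#include <cstdio>

int main()
{
    // Assumed figures for illustration only.
    const double nBlockBytes      = 1000000.0; // 1 MB block size limit
    const double nAvgTxBytes      = 250.0;     // assumed average transaction size
    const double nSecondsPerBlock = 600.0;     // ~10 minutes per block

    const double nTxPerBlock = nBlockBytes / nAvgTxBytes;      // ~4000 tx per block
    const double nTxPerSec   = nTxPerBlock / nSecondsPerBlock; // ~6.7 tx per second

    std::printf("~%.0f tx per block, ~%.1f tx per second on-chain\n", nTxPerBlock, nTxPerSec);
    return 0;
}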
SpanishSoldier
Sr. Member
Activity: 686
Merit: 255
June 04, 2015, 09:45:54 PM
#28

-snip-

Why do we have to hard fork the blockchain? Why not just modify this code in bitcoin-qt, so that everyone using bitcoin-qt can recognize the larger block?

Not everyone: the old versions would not accept the block as valid (not merely "recognize the first 1 MB" of it), thus splitting the network into those running old versions and those running new versions. This is what is called a hard fork, as your client either accepts the changed rules or it does not (it does not "read the first 1 MB" instead of the whole block). The number of lines you have to edit does not matter. It also does not matter if you push the hard fork into the future by a certain number of blocks; that only lessens the impact of the change, since it is more likely for people to update over a long period of time than over a short one.

So: "everyone using the new-version bitcoin-qt can recognize the larger block."

Do you mean these few lines of changes are the whole hard-fork thing we are talking about everywhere? I see people talking about two independent blockchains. If we go my way, there is only one blockchain: old-version Qt can only recognize the first 1 MB of a given larger block, while new Qt can recognize the full 20 MB of it, that's it.

No, the old version cannot recognize the first 1 MB of a bigger block you try to give it. It would look at it, say "nope, not valid," and be done with it. Changing the old version is a patch that makes it no longer an old version.

Why the heck don't people agree with this change?:
Code:
if blocksize> 1MB then blocksize=1MB
to
Code:
if blocksize> 20MB then blocksize=20MB

Of course this is not the final solution, since the block size would need to be > 10 GB in the future, but it makes Bitcoin stronger and buys us some time for a better solution, doesn't it?

The green part - "it makes Bitcoin stronger and buys us some time" - is the reason Gavin Andresen wants the change. Gavin will go for a hard fork only if 90% of the network is running on XT.

The red part - "the block size would need to be > 10 GB in the future" - is the reason Gregory Maxwell does not want the change. GMaxwell proposes to solve the problem by implementing a very complex thing called the lightning.network.

But even with the Lightning Network, we'll eventually hit the 1 MB maximum block size, so what's the point? We need both things, if anything.

The Lightning Network is still just a theory, so we don't know whether it can bypass the 1 MB cap or not. But, if I have understood GMaxwell correctly, he is trying to bypass the 1 MB cap by jointly using lightning.network and sidechains.
yayayo
Legendary
Activity: 1806
Merit: 1024
June 04, 2015, 11:32:28 PM
#29

We could take the phased-in approach of switching to XT, which has the code to phase it in over time.

which... when Gavin asked if that was a good idea, he was labeled a heretic, and every thread on bitcointalk suggested he was destroying Bitcoin

Which is simply true if he divides Bitcoin into two different coins (Bitcoin and GavinCoin), because there is good reason not to join the "OMG-blocks-are-full-Bitcoin-is-going-to-freeze" panic and blindly follow Gavin.

While I, more than most people, appreciate what Satoshi has given us, anything he's said hasn't considered the last 4 years of data (obviously since he hasn't spoken in over 4 years). You have to take that into consideration when you appeal to his authority.

Exactly. There's too much personality cult going on here.

I think a bit of fee pressure is healthy to drive out transaction spam. Blocks may increase in size in the future, yes - but not without limit, and certainly not merely because an alternative solution for micropayments is absent. Decentralization is what gives Bitcoin value. Decentralization is the reason I'm here.

ya.ya.yo!

Soros Shorts
Donator
Legendary
Activity: 1617
Merit: 1012
June 05, 2015, 12:18:37 AM
#30

We can simply modify a few lines of code in bitcoin-qt to support 20 MB or even 20 GB blocks:

You realize that a 20 GB block would never work with a 32-bit client, don't you? The maximum addressable memory for a 32-bit process is 4 GB, and that has to hold the memory cache and everything else.
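The arithmetic behind that point, as a quick standalone check (nothing here is client code, just address-space math): a 32-bit process can address at most 2^32 bytes, i.e. 4 GiB, which is a small fraction of a single 20 GB block.

Code:
#include <cstdint>
#include <cstdio>

int main()
{
    const uint64_t nAddressable32Bit = (uint64_t)1 << 32;          // 4,294,967,296 bytes (4 GiB)
    const uint64_t n20GBBlock        = 20ULL * 1000 * 1000 * 1000; // 20,000,000,000 bytes

    std::printf("32-bit address space: %llu bytes\n", (unsigned long long)nAddressable32Bit);
    std::printf("one 20 GB block:      %llu bytes\n", (unsigned long long)n20GBBlock);
    return 0;
}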
Eastfist
Full Member
Activity: 210
Merit: 100
June 05, 2015, 12:27:01 AM
#31

People really need to stop invoking "Satoshi says..." just to get their way. Take some responsibility for your own thinking for once.
achow101
Staff
Legendary
Activity: 3402
Merit: 6642
Just writing some code
June 05, 2015, 12:36:41 AM
#32

We can simply modify a few lines of code in bitcoin-qt to support 20 MB or even 20 GB blocks:

You realize that a 20 GB block would never work with a 32-bit client, don't you? The maximum addressable memory for a 32-bit process is 4 GB, and that has to hold the memory cache and everything else.
Bitcoin Core has a 64-bit client, which can address more than enough memory to support a 20 GB block, provided that the hardware has significantly more memory than is standard today.
