Author Topic: Keep bitcoin decentralized while increasing block size  (Read 3999 times)
jl2012 (OP)
Legendary

Activity: 1792
Merit: 1093


July 02, 2013, 10:54:40 AM
Last edit: July 17, 2013, 04:14:40 AM by jl2012
 #1

"Big blocks will make bitcoin centralized" is a common argument against increasing block size: http://www.youtube.com/watch?v=cZp7UGgBR0I

These ideas are not new, but I would like to summarize the solutions to this problem.

1. Partial validating nodes with a web-of-trust. Currently, there are three major types of bitcoin clients: full clients (e.g. bitcoind), SPV clients (e.g. bitcoinj), and server-trusting clients (e.g. Electrum). We can implement a fourth type: partial validating clients (PVCs).

Users may assign a certain amount of system resources (CPU power / bandwidth / storage space) to bitcoind. If the node is unable to verify a block before the next block arrives, it will automatically turn into a PVC. Instead of verifying every block, a PVC will skip some of them. When it does verify a block, a PVC will publish a signature for that block. Through a web-of-trust, people can be reasonably confident that the longest chain is a valid chain.
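A minimal sketch of this fallback logic in Python; verify_block and the attestation format are placeholders I am assuming, not an actual bitcoind interface:

Code:
import hashlib
import random

def verify_block(block_bytes):
    # Placeholder for full consensus validation of a block.
    return True

def attest(secret, block_hash):
    # Placeholder attestation; a real PVC would sign with its web-of-trust key.
    return hashlib.sha256(secret + block_hash).hexdigest()

class PartialValidator:
    def __init__(self, budget_secs, secret):
        self.budget_secs = budget_secs  # CPU time the user granted per block interval
        self.secret = secret

    def on_block(self, block_bytes, est_cost_secs):
        block_hash = hashlib.sha256(block_bytes).digest()
        if est_cost_secs > self.budget_secs:
            # Can't keep up: verify only with probability budget/cost, so that
            # across many PVCs every block is still checked by someone.
            if random.random() > self.budget_secs / est_cost_secs:
                return None  # skipped; rely on other PVCs for this block
        if verify_block(block_bytes):
            return attest(self.secret, block_hash)  # published via the web-of-trust
        return "INVALID " + block_hash.hex()  # broadcast the failure instead

pvc = PartialValidator(budget_secs=2.0, secret=b"demo-key")
print(pvc.on_block(b"raw block bytes", est_cost_secs=10.0))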

This could even be extended to mobile devices. For example, a smartphone that normally runs as an SPV client may turn into a PVC when it is connected to AC power and wifi. Even if only one block is verified per day, the network is strengthened as a whole.

2. Mining pools and full nodes on trusted platforms. As some people are advocating centralized "bitcoin banks" running on trusted platforms (TP), I think mining pools and public full nodes could also use TPs. A full node on a TP would accept new blocks and new transactions through an encrypted channel, so that the network administrator cannot censor any block or transaction. It would also answer queries like a normal full node, again over an encrypted channel. With remote attestation and a fidelity bond (https://bitcointalk.org/index.php?topic=134827.0), it is very unlikely that the operator could cheat.

Based on TP full nodes, we can further establish TP mining pools. Such a pool would work like a normal getblocktemplate pool, so miners are free to construct their own blocks without doing any transaction validation themselves, assuming the pool is trustworthy. Again, the integrity of the pool is secured by remote attestation and a fidelity bond. Therefore, mining would stay decentralized even with a huge block size. Mining over Tor would also be possible.

Again, people can verify the integrity of TP full nodes with the partial validating nodes described above.

3. Limiting the growth of the UTXO set per block. This may not be necessary, but some have raised related concerns (https://bitcointalk.org/index.php?topic=153133.0). When the max block size is raised, we could add another hard limit on how much each block may grow the UTXO set: total size of new outputs minus total size of spent outputs. This would also encourage people/miners to merge dust outputs.
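A sketch of what such a consensus check might look like (the limit value and the transaction fields here are illustrative assumptions, not a concrete proposal):

Code:
MAX_UTXO_GROWTH = 100_000  # bytes of net UTXO growth per block; illustrative only

def utxo_growth_ok(block_txs):
    created = sum(s for tx in block_txs for s in tx["new_output_sizes"])
    spent = sum(s for tx in block_txs for s in tx["spent_output_sizes"])
    return created - spent <= MAX_UTXO_GROWTH

# A dust-merging transaction (many inputs, one output) has negative growth,
# so including it buys room for other transactions in the same block.
print(utxo_growth_ok([{"new_output_sizes": [34], "spent_output_sizes": [34, 34]}]))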

4. Distributing the historical blockchain offline. Currently the blockchain fits on two DVD-Rs (4.3GB each). A single BD-R (25GB) should be enough for the coming year, and a next-generation disc (the Holographic Versatile Disc) is expected to store 6TB. We shouldn't depend on the p2p network to distribute the historical blockchain.

Donation address: 374iXxS4BuqFHsEwwxUuH3nvJ69Y7Hqur3 (Bitcoin ONLY)
LRDGENPLYrcTRssGoZrsCT1hngaH3BVkM4 (LTC)
PGP: D3CC 1772 8600 5BB8 FF67 3294 C524 2A1A B393 6517
AnonyMint
Hero Member

Activity: 518
Merit: 521


August 01, 2013, 10:52:09 AM
 #2

Posted the following to James A. Donald's blog (he was the first person to interact with Satoshi on the cryptography mailing list where he first appeared).

http://blog.jim.com/economics/bitcoin-scaling-problems.html/comment-page-1#comment-342437

Quote
Purging old records has the benefit of eliminating lost (or abandoned, e.g. after death) wallets, making the money supply more quantifiable. Owners can send new transactions to themselves to update their timestamps. Purging will not hold the blockchain size constant, because the rate of transactions is growing (probably exponentially).

Although the blockchain is currently only ~8GB (up from ~2GB a year ago) and thus still easily fits on the 4TB hard disks available to and affordable by the consumer market, it will not only eventually outpace Moore's Law applied to hard disk space, but is already too large for many consumer internet connections to download in any quick-start scenario. If non-hosted ISP connections provide 0.1 - 1GB per 10 minutes, then (assuming a resumable download manager for dropped connections) 8GB is a one-hour to one-day download; at 4TB, roughly a month to a year. Note a mining peer could begin processing before downloading the entire blockchain, if it is downloaded from newest to oldest and all the transactions in the current block spend outputs from blocks already downloaded.
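The arithmetic behind those download times, under the stated 0.1 - 1GB per 10 minutes assumption:

Code:
def download_days(size_gb, gb_per_10min):
    return size_gb / gb_per_10min * 10 / (60 * 24)

for size_gb in (8, 4000):        # today's ~8 GB chain vs a 4 TB chain
    for rate in (1.0, 0.1):      # GB per 10 minutes, fast and slow
        print(size_gb, "GB at", rate, "GB/10min:",
              round(download_days(size_gb, rate), 1), "days")
# 8 GB: ~0.1 to ~0.6 days; 4 TB: ~28 to ~278 days.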

At a Visa scale of 16 million transactions per 10-minute block, the blockchain would grow at roughly 23 GB per day, or 8 TB per year. However, some percentage of this can be reduced by pruning the blockchain of addresses that have been entirely spent (and possibly also those beyond a certain age).

I propose that although we need to broadcast the transactions, the blockchain should only need to store the balances of the addresses (perhaps after the current 100-block maturity cycle, to account for resolution of competing forks). There would be two proofs-of-work, i.e. two parallel blockchains: one containing the transaction data and the other only the addresses with updated balances, with the former provided first and all peers then competing to provide the latter. The reward would be split in half, and the difficulty for both blockchains would be set so each averages completion every 10 minutes. Alternatively, the latter blockchain could be a digest of, say, every 10 to 100 blocks, with its difficulty adjusted so it completes every 100 to 1000 minutes.

If the number of addresses in existence could be limited (by an automated free-market algorithm in the protocol that raised the price of creating new addresses while giving a simultaneous credit for spending all of, and thus deleting, an address), then the size of the blockchain could be limited. Four billion addresses, each a ~20-byte hash carrying a 4-byte balance, would require roughly 100 GB, thus a 12-hour to 12-day download. With perhaps 100 million Bitcoin users at most over the next several years, that is 40 addresses each. By the time the entire human population needs to use Bitcoin, ISP bandwidth will probably have increased an order of magnitude, so the limit could be raised by up to an order of magnitude.
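The ~100 GB figure is consistent with each entry being a 20-byte address hash plus the stated 4-byte balance; the 20-byte entry layout is an assumption, since the post only states the balance size:

Code:
ADDRESSES = 4_000_000_000
ENTRY_BYTES = 20 + 4                  # assumed 20-byte address hash + 4-byte balance
total_gb = ADDRESSES * ENTRY_BYTES / 1e9
print(total_gb, "GB")                 # 96.0 GB, i.e. "roughly 100 GB"
# At 0.1 - 1 GB per 10 minutes this takes about 16 hours to 7 days to download,
# the same order of magnitude as the quoted 12 hours to 12 days.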

For many reasons, including that mining is the only way to obtain bitcoins truly anonymously, we don't want mining to be limited to those with certain resources (in particular, we don't want to exclude normal ISP accounts!).

Every mining peer has to have the evidence that supports a transaction; otherwise peers could disagree on consensus about new blocks and forks could appear (see my conclusion that alternatives to proof-of-work must centralize to obtain consensus).

Assume the blockchain is partitioned into N sections, where each mining peer only has to hold the section determined from its private key by partitioning the private-key space into N sections.

If the blockchain evidence for each transaction is not sent to every mining peer, then transactions require a factor of N more time to be added to the blockchain (one must wait for the proof-of-work to be won by a mining peer holding the section that records the sender's balance), and forks can appear because (N-1)/N of the mining peers won't be able to verify (N-1)/N of the transactions in the current block before starting proof-of-work on the next block.

So if the blockchain is N-partitioned, the only viable design is for the evidence to be sent to all mining peers for each transaction. This increases the bandwidth required for proof-of-work while reducing the bandwidth required for new peers to download the entire blockchain: the number of peers that will request the evidence is N-1, while the size of the blockchain a new peer has to download is total/N.
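A rough model of that tradeoff (the 8 TB total is the Visa-scale yearly figure from above; all sizes illustrative):

Code:
def bootstrap_gb(total_gb, n):
    return total_gb / n          # a new peer downloads only its own section

def evidence_fraction(n):
    return (n - 1) / n           # share of peers that must receive per-tx evidence

for n in (2, 10, 100):
    print(n, bootstrap_gb(8000, n), round(evidence_fraction(n), 2))
# Bootstrap cost falls linearly with N, but the evidence overhead saturates near 1,
# which is why large N is effectively "send the evidence to everyone".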

I believe Jim is correct that the only evidence that needs to be sent is the branch of the Merkle tree within the block, up to the block hash. All mining peers would keep a complete history of block headers, since these are only 80 bytes * 6/hr * 24hr * 365 = ~4MB per year.

The Merkle tree is a perfectly balanced binary tree, so its depth is log2(T), where T is the number of transactions in a block. The number of nodes (each contributing 2 hashes of evidence) between a transaction and the block-hash root is therefore log2(T)-1, so the Merkle-branch evidence bandwidth required in the limit N -> infinity is T_current x ((log2(T_old)-1) x 2 x hashsize + transactionsize/2). Note this is in addition to the data for the current block, which is T_current x (hashsize + transactionsize) - hashsize.

Visa scale is ~16 million transactions per 10 minutes. If hashsize is 20 bytes (instead of the current 32 bytes) and transactionsize is 50 bytes, then for ~16 million transactions per block the 1.1GB of block data grows to 15.8GB per 10-minute block once the evidence is included.
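Reproducing that arithmetic, along with the per-connection throughput used in the next paragraph (the small gaps against the quoted 15.8GB and 118/1046 thousand figures come down to how log2(T) and per-transaction overhead are rounded):

Code:
import math

HASH, TXSIZE = 20, 50            # bytes, per the assumptions above
T = 16_000_000                   # transactions per 10-minute block (Visa scale)

block_gb = (T * (HASH + TXSIZE) - HASH) / 1e9
proof_bytes = (math.log2(T) - 1) * 2 * HASH + TXSIZE / 2   # per-tx Merkle evidence
evidence_gb = T * proof_bytes / 1e9

print(round(block_gb, 2), "GB block data")   # ~1.12 GB
print(round(evidence_gb, 1), "GB evidence")  # ~15.1 GB
for budget_gb in (0.1, 1.0):                 # partitioned-chain throughput
    print(budget_gb, "GB/10min ->",
          round(budget_gb * 1e9 / proof_bytes / 1e3), "thousand tx/block")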

Non-hosted ISP connections are limited to an order of magnitude of 100MB to 1GB of bandwidth per 10 minutes, equating to 1.4 - 14.3 million transactions per 10-minute block with Bitcoin's non-partitioned blockchain, or 118 to 1046 thousand transactions per 10-minute block with the partitioned blockchain proposed here.

Thus I conclude that the only way to scale to Visa scale while retaining freedom of mining for all (and thus anonymity for all) is to limit the number of addresses as I proposed above. This also has the advantages of keeping the required bandwidth, and thus unreliable-connection hiccups, lower, and of discarding the history of transaction graphs, which increases anonymity w.r.t. private-sector attacks (although the NSA has the zettabyte-scale storage resources to retain the transaction graphs even at Visa scale).

Does anyone see a problem with that proposal?

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse | THIS FORUM ACCOUNT IS NO LONGER ACTIVE
TierNolan
Legendary

Activity: 1232
Merit: 1083


August 01, 2013, 11:47:31 AM
 #3

Quote
1. Partial validating nodes with a web-of-trust. Currently, there are three major types of bitcoin clients: full clients (e.g. bitcoind), SPV clients (e.g. bitcoinj), and server-trusting clients (e.g. Electrum). We can implement a fourth type: partial validating clients (PVCs).

If the Merkle tree were modified to be a sum-tree, then a node could verify individual transactions in the block.
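A minimal sketch of such a sum-tree, with an encoding invented for illustration: each node commits to its children's hashes plus the sum of the values beneath it, so a single branch proves both a transaction's inclusion and its contribution to the block total.

Code:
import hashlib

def leaf(tx_hash, value):
    return (hashlib.sha256(tx_hash + value.to_bytes(8, "big")).digest(), value)

def parent(left, right):
    total = left[1] + right[1]
    data = left[0] + right[0] + total.to_bytes(8, "big")
    return (hashlib.sha256(data).digest(), total)

# Four-leaf example: the root's sum must equal the block's total value, so a
# mismatched sum anywhere changes the root hash and the proof fails to verify.
leaves = [leaf(bytes([i]) * 32, v) for i, v in enumerate([50, 25, 10, 15])]
root = parent(parent(leaves[0], leaves[1]), parent(leaves[2], leaves[3]))
print(root[1])   # 100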

Your web-of-trust isn't really required (since block validity is a matter of maths). All you need is an invalidity-proof system. If a block is added to the block chain (so it meets POW), then that fork can be terminated by a proof of invalidity.

By broadcasting the proof of invalidity, all nodes will blacklist the illegal block. Nodes would need to remember invalidity proofs, effectively "attaching" them to blocks that were already in the block tree, so the block is permanently blacklisted. There would probably be a threshold: only proofs for forks that have accumulated more than one (current) block's worth of POW after the illegal block need to be stored.
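In code, the blacklisting rule might look like the sketch below; check_proof is a stub, since the real check would re-run whichever consensus rule the proof claims was violated:

Code:
blacklist = {}                            # block hash -> invalidity proof

def check_proof(block_hash, proof):
    return True                           # stub: re-validate the claimed violation

def on_invalidity_proof(block_hash, proof):
    if check_proof(block_hash, proof):
        blacklist[block_hash] = proof     # remembered permanently, attached to the tree

def chain_acceptable(chain_hashes):
    # Reject any chain that contains a blacklisted block (or builds on one).
    return not any(h in blacklist for h in chain_hashes)

on_invalidity_proof("deadbeef", proof="double-spend evidence")
print(chain_acceptable(["aaaa", "deadbeef", "bbbb"]))   # False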

Another thing to remember is that signature verification is the big cost.  A PVC could potentially verify all the transactions but check only a random subset of the signatures (with a CPU load limit set).
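A sketch of that sampling idea, with placeholder checks and the CPU limit expressed as a signature budget:

Code:
import random

def verify_structure(tx):
    return True              # cheap checks on every tx: syntax, amounts, no double spend

def verify_signature(tx):
    return True              # the expensive ECDSA check (placeholder)

def check_block(txs, sig_budget):
    if not all(verify_structure(tx) for tx in txs):
        return False         # every transaction still gets the cheap checks
    sample = random.sample(txs, min(sig_budget, len(txs)))
    return all(verify_signature(tx) for tx in sample)   # signatures only sampled

print(check_block([{"id": i} for i in range(1000)], sig_budget=50))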

Quote
3. Limiting the growth of the UTXO set per block. This may not be necessary, but some have raised related concerns (https://bitcointalk.org/index.php?topic=153133.0). When the max block size is raised, we could add another hard limit on how much each block may grow the UTXO set: total size of new outputs minus total size of spent outputs. This would also encourage people/miners to merge dust outputs.

Yeah, that is a good idea.  However, it hard-limits the number of coins users can have.

This has the plus that users are encouraged to merge dust coins.  If UTXOs become limited, merchants might conserve them.

At the moment, a payment is:

In 1: from customer

Out 1: to merchant
Out 2: change (to customer)

This generates a net +1 UTXO.

In a limited-UTXO situation, the merchant would have to provide a second input (net UTXO effect sketched below):

In 1: payment from customer
In 2: coin from merchant

Out 1: to merchant
Out 2: change to customer
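
The net effect on the UTXO set for the two payment shapes:

Code:
def utxo_delta(n_inputs, n_outputs):
    return n_outputs - n_inputs   # inputs destroy UTXOs, outputs create them

print(utxo_delta(1, 2))   # ordinary payment: +1
print(utxo_delta(2, 2))   # merchant supplies a second input: net 0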

Quote
4. Distributing the historical blockchain offline. Currently the blockchain fits on two DVD-Rs (4.3GB each). A single BD-R (25GB) should be enough for the coming year, and a next-generation disc (the Holographic Versatile Disc) is expected to store 6TB. We shouldn't depend on the p2p network to distribute the historical blockchain.

You could also have dedicated storage nodes.  They wouldn't even have to do verification: if a block gets added to the block tree, store it.

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
AnonyMint
Hero Member

Activity: 518
Merit: 521


August 01, 2013, 01:03:47 PM
 #4

Quote
4. Distributing the historical blockchain offline. Currently the blockchain fits on two DVD-Rs (4.3GB each). A single BD-R (25GB) should be enough for the coming year, and a next-generation disc (the Holographic Versatile Disc) is expected to store 6TB. We shouldn't depend on the p2p network to distribute the historical blockchain.

Quote
You could also have dedicated storage nodes.  They wouldn't even have to do verification: if a block gets added to the block tree, store it.

NFG (the impolite form) on both.

1. Requiring mailing or personal exchange of the blockchain impinges on anonymity, expediency, mobility, and geographic reach. Besides, at Visa scale we would need 8TB per year.

2. Storage nodes mean either there is no consensus or the evidence-bandwidth requirements skyrocket. See my prior post for the calculations.


unheresy.com - Prodigiously Elucidating the Profoundly Obtuse | THIS FORUM ACCOUNT IS NO LONGER ACTIVE