Bitcoin Forum
Author Topic: Size of BTC blockchain centuries from now...  (Read 10790 times)
Yurock
Sr. Member
Activity: 462
Merit: 250
June 26, 2013, 07:01:27 AM
 #81

A limit that is always above the average actual block size at least prevents an attacker from creating one enormous block.
PerfectAgent
Sr. Member
Activity: 252
Merit: 250

Still the Best 1973
June 27, 2013, 06:14:12 PM
 #82

I thought that was the only intent behind the block size limit.

Valerian77
Sr. Member
Activity: 437
Merit: 255
July 10, 2013, 12:01:06 AM
 #83

I do not see the actual problem.

True - the blockchain has reached an impressive size of 10 GB now, and some BTC users might not want to handle that.

But ...

1. anybody is free to use a lightweight client

2. compression is part of any modern file system or operating system

3. according to the statistics it is currently growing by about 5 GB every 6 months - as long as that is not a traffic or computational problem, it should not be a storage problem given current hard drive sizes.

Where the heck is the problem? In the end there will be, first, a group of miners, secondly a group of heavyweight-client users, and thirdly the crowd of lightweight-client users. Maybe from a security point of view the situation becomes serious if the number of full-blockchain users drops below a certain level, but I do not see that happening now.

I currently use a heavyweight client myself and still have no problem with the size.
runeks
Legendary
Activity: 980
Merit: 1008
July 10, 2013, 07:57:34 AM
 #84

Quote from: Yurock on June 26, 2013, 07:01:27 AM
A limit that is always above the average actual block size at least prevents an attacker from creating one enormous block.
This is a fair point.

But perhaps it would be wiser to include a built-in limit based on, say, a rolling median or average of the last 1000 blocks instead.

On the other hand, that would introduce added complexity. Perhaps it's a good idea to transition to that eventually, but to start out by manually increasing the limit.
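To make the idea concrete, here is a minimal sketch of such a rolling-median cap (the 1000-block window comes from the post above; the multiplier, the 1 MB floor, and all names are illustrative assumptions, not an actual proposal):

Code:
from statistics import median

WINDOW = 1000            # look at the last 1000 blocks, as suggested above
HARD_FLOOR = 1_000_000   # assumed: never drop below the current 1 MB limit
MULTIPLIER = 2           # assumed: allow blocks up to 2x the recent median

def dynamic_block_size_limit(recent_block_sizes):
    """Compute a block size cap from a rolling median of recent block sizes."""
    window = recent_block_sizes[-WINDOW:]
    if not window:
        return HARD_FLOOR
    return max(HARD_FLOOR, int(MULTIPLIER * median(window)))

# Example: with recent blocks around 300 kB, the cap stays at the 1 MB floor.
print(dynamic_block_size_limit([300_000] * 1000))  # -> 1000000

A rule like this keeps the limit above typical usage while still bounding how large a single malicious block can be, which is the property discussed above.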
Yurock
Sr. Member
Activity: 462
Merit: 250
July 12, 2013, 10:27:30 PM
 #85

Quote from: Valerian77 on July 10, 2013, 12:01:06 AM
I do not see the actual problem.
Problems may not be obvious now, but we need to be ready to scale. Right now we have about 0.48 transactions per second in Bitcoin. That is a tiny volume compared to popular payment networks. If Bitcoin adoption continues at a good rate, it will not be long until we need to scale to something like 10 TPS.

Quote from: Valerian77 on July 10, 2013, 12:01:06 AM
computational problem
Verification is the most demanding function of nodes. Every input of every transaction has to be matched with an output of a previous transaction. We need to look up that previous output in a database of unspent transaction outputs. If the UTXO DB does not fit in RAM, lookups involve disk I/O, which is relatively slow. Over time, both the transaction volume and the UTXO DB grow: nodes will have to verify more transactions, and each verification will take more time on average.
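As a rough illustration of that lookup step (a toy model in Python, not how the reference client actually stores or verifies anything), input verification boils down to something like:

Code:
# Toy model: every input must reference an output that is still present
# in the set of unspent transaction outputs (the UTXO set).
utxo_set = {
    # (txid, output_index) -> amount in satoshis
    ("txid_1", 0): 50_000_000,
}

def total_input_value(inputs):
    """Return the summed value of the referenced outputs, or None if any is unknown/spent."""
    total = 0
    for txid, index in inputs:
        amount = utxo_set.get((txid, index))
        if amount is None:       # missing -> never existed or already spent
            return None
        total += amount
    return total

print(total_input_value([("txid_1", 0)]))  # -> 50000000

If the real UTXO database no longer fits in RAM, each of those lookups can turn into a disk read, which is the slowdown described above (signature checks come on top of this).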

The main difficulty with all this is synchronization. If a node goes offline for some time, it will have to catch up. At 10 TPS, 28 hours offline results in a backlog of about 1 million transactions (10 × 3,600 × 28 = 1,008,000). If you think that is not much, consider that the UTXO DB will be much larger than it is now and that an average transaction will take more time to verify than it does today. During the catch-up, a big share of the computer's resources, and possibly its downstream network bandwidth, will be occupied by Bitcoin.

Initial synchronization in its current form will simply be infeasible for most users. Nodes will have to start in SPV mode and download and verify the full transaction history in the background - again, eating some share of the computer's resources.
DeathAndTaxes
Donator
Legendary
Activity: 1218
Merit: 1079

Gerald Davis
July 12, 2013, 11:17:28 PM
 #86

Quote from: Yurock on July 12, 2013, 10:27:30 PM
Verification is the most demanding function of nodes. Every input of every transaction has to be matched with an output of a previous transaction. We need to look up that previous output in a database of unspent transaction outputs. If the UTXO DB does not fit in RAM, lookups involve disk I/O, which is relatively slow. Over time, both the transaction volume and the UTXO DB grow: nodes will have to verify more transactions, and each verification will take more time on average.

Well, the good news is that the UTXO set grows much more slowly. For example, in the last 6 months the full blockchain has nearly doubled, but the UTXO set has only grown from ~200 MB to ~250 MB. This is because currency is eventually reused: new transactions require older outputs to be spent. The one exception is "dust", where the economic value of an output is less than the cost to spend it. The ~250 MB UTXO set is about one quarter dust, and most of that will never be spent; if it eventually is, it is only because luck and time push the exchange rate up enough that the dust isn't dust anymore. This is why the changes in 0.8.2 are so important: by preventing worthless dust, they limit the portion of the UTXO set that never gets spent. If the majority of the UTXO set is regularly spent, its size will grow even more slowly. The size of the UTXO set is related more to the number of unique users (with full control over their private keys) than to the number of transactions.

Also, the method of caching the UTXO set is relatively inefficient right now. The entire transaction is stored; however, since the inputs are already verified (which they are), the UTXO set could contain only the already-verified outputs, plus possibly the full transaction hash for lookup against the full DB. Outputs are much smaller than inputs - the average output is 34 bytes and the average input 72 bytes - so a roughly 66% reduction in UTXO size should be possible. The vast majority of outputs are "standard" (OP_DUP OP_HASH160 <pubKeyHash> OP_EQUALVERIFY OP_CHECKSIG), and for the purposes of the UTXO set they could be simplified to a ledger entry like this:

TxHash - 32 bytes
OutIndex - 2 bytes
Amount - 8 bytes    
PubKeyHash - 20 bytes
Total: 62 bytes
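For illustration, a quick sketch (my own code, just to make the byte counts concrete; not anything in the client) that packs such a 62-byte ledger entry and checks the size figures:

Code:
import struct

# 32-byte txid + 2-byte output index + 8-byte amount + 20-byte pubkey hash = 62 bytes
ENTRY_FORMAT = ">32sHQ20s"   # big-endian, no padding

def pack_entry(txid, out_index, amount_satoshi, pubkey_hash):
    """Serialize one unspent standard output into a fixed 62-byte record."""
    return struct.pack(ENTRY_FORMAT, txid, out_index, amount_satoshi, pubkey_hash)

assert struct.calcsize(ENTRY_FORMAT) == 62
entry = pack_entry(b"\x00" * 32, 0, 50_000_000, b"\x00" * 20)
print(len(entry))                    # 62 bytes per entry
print(1_100_000 * 62 / 1e6)          # ~68.2 MB for ~1.1 million unspent outputs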

The current UTXO set has ~1.1 million unspent outputs, so we are talking about ~68 MB of data (before DB/disk overhead). Obviously this isn't a pressing concern, but given that the UTXO set grows relatively linearly, available memory grows exponentially, and the current set can be reduced to ~68 MB, the risk of the UTXO set spilling out of available memory and slowing tx validation is minimal. The one threat was dust: since dust is almost never spent, it causes the UTXO set to grow much faster (the dust just keeps accumulating rather than cycling in and out of the set). With 0.8.2 that threat is removed.

This is a significant refactoring, so don't expect it to appear anytime soon. Eventually I think you will see the blockchain stored in three different formats.

1) The full chain. Nodes keeping the full chain (containing all spent tx back to the genesis block) will be archive nodes. It isn't important for every node, or even most nodes, to be an archive node, but we would want a robust number of them to keep this archival copy of the blockchain decentralized.

2) The pruned chain & memory pool. Most nodes will likely retain only the pruned blockchain, replacing spent tx with placeholders. The memory pool contains the set of unconfirmed transactions.

3) The in-memory UTXO ledger. Nodes will build this from the pruned database. Since transactions are already validated (or the tx and/or block would be rejected), it isn't necessary to store the entire tx in the UTXO ledger - just the output portion. Obviously you can't have "only" the UTXO ledger, or fetch the output-only portion from other nodes, because it can't be independently validated, but that doesn't stop nodes from building their OWN output-only ledger from the pruned database (a rough sketch of this follows below).
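A rough sketch of point 3 (purely illustrative: the block and transaction shapes here are simplified stand-ins, not the real data structures) showing how a node could build its own output-only ledger by replaying the pruned chain:

Code:
def build_utxo_ledger(blocks):
    """Replay blocks in order: add every new output, drop every output that gets spent."""
    ledger = {}  # (txid, output_index) -> (amount, pubkey_hash)
    for block in blocks:
        for tx in block["transactions"]:
            for prev_txid, prev_index in tx["inputs"]:     # spends remove entries
                ledger.pop((prev_txid, prev_index), None)  # coinbase txs simply have no inputs here
            for index, (amount, pubkey_hash) in enumerate(tx["outputs"]):
                ledger[(tx["txid"], index)] = (amount, pubkey_hash)
    return ledger

Because the node validated every transaction as it arrived, this ledger only needs the output data, which is exactly why it can be so much smaller than the full chain.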

As for CPU being a limitation, I don't see that anytime soon. A single 2.0 GHz i7 core can perform 4,300 256-bit ECDSA signature validations per second. That is just the ECDSA portion, but it is the bulk of the computation needed to validate a transaction. Let's assume the Bitcoin implementation is no worse than half as fast and the average tx has two inputs; that is a minimum of 2,150 tps. I would caution that this probably massively understates what is possible and assumes only one core is in use, but take it as a worst-case scenario. Catching up on 1 million transaction validations (roughly 28 hours of downtime @ 10 tps) would take about 8 minutes. Remember, this is just the CPU "bottleneck" (or lack thereof). For historical usage we can assume that @ 10 tps the first x years are a rounding error (the blockchain today has ~3 million transactions, or about 3 days at a maxed-out 10 tps). To bootstrap from nothing, a node could validate (again, just the CPU bottleneck, so assuming network and disk can keep up) 185 million transactions per day - about 215 days' worth of traffic at 10 tps. So for the near future the CPU isn't really a bottleneck.
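A quick back-of-the-envelope check of those figures (using only the numbers quoted in this post, nothing measured independently):

Code:
SIGS_PER_SEC = 4300                 # 256-bit ECDSA verifications per 2.0 GHz i7 core (figure above)
EFFECTIVE_TPS = SIGS_PER_SEC / 2    # assume the implementation is no better than half as fast
SECONDS_PER_DAY = 86_400

backlog = 10 * 3600 * 28                        # 28 hours offline at 10 tps ~= 1,008,000 tx
print(backlog / EFFECTIVE_TPS / 60)             # ~7.8 minutes of pure CPU time to catch up
print(EFFECTIVE_TPS * SECONDS_PER_DAY / 1e6)    # ~185.8 million tx validated per day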

Now at 1,000 tps it is a different story, which is why IMHO the first increase to the block limit should be a single manual bump (to, say, 5 MB or 10 MB) to buy time to come up with more robust handling of high transaction volumes. Those who wish to jump to an unlimited block size are underestimating the risk that, through either negligence or malice, the blockchain grows so fast that, while existing nodes can "keep up", new nodes lose the ability to bootstrap. Given that today we are at only ~0.4 tps, I just don't see why we would take on the potential risks of going from 1 MB to unlimited overnight. A modest one-time increase (when necessary) would provide "breathing room" to come up with a more comprehensive solution.
Yurock
Sr. Member
Activity: 462
Merit: 250
July 12, 2013, 11:33:50 PM
 #87

It's good to know that these problems have solutions. Hopefully, Bitcoin software will keep up with the ever-increasing load.