Bitcoin Forum
Author Topic: Bitcoin 9000: a long-term scaling plan  (Read 5966 times)
goodsamaritan9k (OP)
Newbie
Offline

Activity: 1
Merit: 0

March 01, 2016, 09:32:53 PM
 #1

We propose a strategy to scale Bitcoin to a far greater throughput and performance than available today while keeping the risk of centralization and costs to a minimum. To achieve this we decrease block validation latency with diff blocks, parallelize transaction validation, enable UTXO sharding with transaction input block height annotations, and deploy a series of extension blocks for sustainable capacity increases.

Download whitepaper:
https://github.com/goodsamaritan9000/scalingbitcoin/raw/master/Bitcoin9000.pdf
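
To make the "transaction input block height annotation" idea concrete, here is a rough C sketch (an illustration, not code from the paper; the struct, field names, and shard span are assumptions): each input records the height of the block containing the output it spends, so a validator only needs the UTXO shard covering that height range.

Code:
/* Hypothetical illustration of the "transaction input block height
 * annotation" idea from the abstract: each input carries the height of the
 * block containing the output it spends, so a validator only needs the UTXO
 * shard covering that height range. Names are made up for this sketch. */
#include <stdint.h>
#include <stdio.h>

#define SHARD_SPAN 50000            /* blocks covered by one UTXO shard (assumed) */

struct annotated_txin {
    uint8_t  prev_txid[32];         /* hash of the transaction being spent    */
    uint32_t prev_vout;             /* output index within that transaction   */
    uint32_t prev_height;           /* NEW: height of the block with the UTXO */
};

/* Map an annotated input to the UTXO shard that can validate it. */
static uint32_t shard_for_input(const struct annotated_txin *in)
{
    return in->prev_height / SHARD_SPAN;
}

int main(void)
{
    struct annotated_txin in = { {0}, 1, 402500 };
    printf("input spends a UTXO from shard %u\n", shard_for_input(&in));
    return 0;
}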
jl777
Legendary
Offline

Activity: 1176
Merit: 1134

March 01, 2016, 09:53:48 PM
 #2

Quote from: goodsamaritan9k on March 01, 2016, 09:32:53 PM
We propose a strategy to scale Bitcoin to a far greater throughput and performance than available today while keeping the risk of centralization and costs to a minimum. To achieve this we decrease block validation latency with diff blocks, parallelize transaction validation, enable UTXO sharding with transaction input block height annotations, and deploy a series of extension blocks for sustainable capacity increases.

Download whitepaper:
https://github.com/goodsamaritan9000/scalingbitcoin/raw/master/Bitcoin9000.pdf
Nice!

The iguana bitcoin core implements a parallel download where the vast majority of data goes into read-only files. This avoids needing a DB and also allows the files to be put onto a compressed filesystem via mksquashfs. By processing the data in several stages, it is possible to stream data in at bandwidth-saturation levels; I am not seeing any bottlenecks until it exceeds 500 mbps. The parallel download is able to get 70 to 120 megabytes/sec, which works out to about 12 minutes for the entire 60 GB blockchain.
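
A rough C sketch of the read-only file approach (illustration only, not iguana source; the file name is an assumption): each bundle file is memory-mapped read-only, so lookups are served straight from the page cache with no database in the way, and the whole bundle directory can later be packed with mksquashfs.

Code:
/* Sketch of serving blockchain data from immutable, memory-mapped bundle
 * files instead of a database. Illustration of the approach described above,
 * not iguana code; the file naming is an assumption. Once written, the bundle
 * directory can be packed read-only with e.g.
 *   mksquashfs bundles/ bundles.sqsh -comp xz                              */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map one read-only bundle file and return a pointer to its contents. */
static const void *map_bundle(const char *path, size_t *len)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return NULL;
    struct stat st;
    if (fstat(fd, &st) != 0) {
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);                       /* mapping stays valid after close */
    if (p == MAP_FAILED)
        return NULL;
    *len = (size_t)st.st_size;
    return p;
}

int main(void)
{
    size_t len = 0;
    const void *bundle = map_bundle("bundles/blocks_0_1999.bin", &len);
    if (bundle)
        printf("mapped %zu bytes read-only\n", len);
    return 0;
}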

Using 8 cores, all of the data structures are created in parallel, with hash tables and bloom filters built into the read-only files. I am seeing about half an hour to get to the point where things are ready for the last pass, which does the final processing that is needed.
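
A minimal sketch of that per-bundle parallelism (again an illustration, not the actual code; process_bundle() and the bundle count are placeholders): one worker per core, each building the lookup structures for its share of bundles independently.

Code:
/* Minimal sketch of building per-bundle data structures in parallel,
 * one worker thread per core. Illustration only; process_bundle() is a
 * placeholder for building the hash tables / bloom filters for a bundle. */
#include <pthread.h>
#include <stdio.h>

#define NUM_CORES   8
#define NUM_BUNDLES 200              /* assumed bundle count for the sketch */

static void process_bundle(int bundle_id)
{
    /* placeholder: parse blocks, build hash table + bloom filter,
     * then write them into the bundle's read-only file */
    (void)bundle_id;
}

static void *worker(void *arg)
{
    int core = *(int *)arg;
    /* static interleaved assignment: core k handles bundles k, k+8, k+16, ... */
    for (int b = core; b < NUM_BUNDLES; b += NUM_CORES)
        process_bundle(b);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_CORES];
    int ids[NUM_CORES];
    for (int i = 0; i < NUM_CORES; i++) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < NUM_CORES; i++)
        pthread_join(threads[i], NULL);
    printf("all %d bundles processed\n", NUM_BUNDLES);
    return 0;
}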

So, parallel processing somewhat similar to what you describe is already working in a functioning project, and it does remove the bottlenecks that DB-oriented approaches incur. The only data that changes retroactively is the state of the unspents, but this is encoded into 6 bytes per unspent by assigning a deterministic 32-bit integer to each of the high-entropy hashes, so the net result is a relatively compact UTXO set. Even the spends data can be processed in parallel once all the blocks are loaded, creating vectors of updates to the unspents. OR'ing these vectors together creates the current set of unspents relatively quickly.
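
A toy version of the OR'ing step (illustrative sizes, assuming one spend vector per parallel worker): each vector is a bitmap indexed by the deterministic 32-bit unspent index, and OR'ing them yields the global spent set, whose complement is the current unspents.

Code:
/* Toy sketch of merging spend vectors produced in parallel. Each vector is
 * a bitmap, indexed by the deterministic 32-bit unspent index, marking which
 * outputs that worker saw spent; OR'ing the bitmaps yields the global spent
 * set, and anything not set is still unspent. Sizes are illustrative only. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_UNSPENTS (1u << 20)                  /* assumed total output count */
#define WORDS        (NUM_UNSPENTS / 64)
#define NUM_VECTORS  4

static uint64_t spend_vectors[NUM_VECTORS][WORDS];  /* filled in parallel elsewhere */
static uint64_t spent[WORDS];                        /* merged result */

static void merge_spend_vectors(void)
{
    memset(spent, 0, sizeof(spent));
    for (int v = 0; v < NUM_VECTORS; v++)
        for (size_t w = 0; w < WORDS; w++)
            spent[w] |= spend_vectors[v][w];
}

static int is_unspent(uint32_t unspentind)
{
    return ((spent[unspentind / 64] >> (unspentind % 64)) & 1) == 0;
}

int main(void)
{
    /* mark unspent #12345 as spent in vector 2, then merge */
    spend_vectors[2][12345 / 64] |= 1ULL << (12345 % 64);
    merge_spend_vectors();
    printf("unspent 12345 still unspent? %d\n", is_unspent(12345));
    return 0;
}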

The searches using the read-only bundles can also be done in parallel, but even serially processing the parallel files I am seeing times of about 2 milliseconds for the equivalent of an importprivkey operation on a 1.4 GHz i5 laptop.
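
For reference, a sketch of that search path (not iguana code; the hash functions and sizes are stand-ins): check each bundle's bloom filter first and only fall through to the per-bundle hash table on a possible hit.

Code:
/* Sketch of the serial search path over read-only bundles: consult each
 * bundle's bloom filter first, and only do the (more expensive) hash-table
 * lookup on a possible hit. Illustration only; the structures are assumed. */
#include <stdint.h>
#include <stdio.h>

#define NUM_BUNDLES 200
#define BLOOM_BITS  (1u << 20)

struct bundle {
    uint8_t bloom[BLOOM_BITS / 8];   /* mapped from the read-only file */
    /* ... hash table, block data, etc. ... */
};

/* Cheap stand-in hashes for the sketch; a real filter would use several
 * independent hash functions over the rmd160/sha256 data. */
static uint32_t h1(const uint8_t *key, size_t len)
{
    uint32_t h = 2166136261u;                     /* FNV-1a */
    for (size_t i = 0; i < len; i++) { h ^= key[i]; h *= 16777619u; }
    return h % BLOOM_BITS;
}
static uint32_t h2(const uint8_t *key, size_t len)
{
    uint32_t h = 5381;                            /* djb2 */
    for (size_t i = 0; i < len; i++) h = h * 33 + key[i];
    return h % BLOOM_BITS;
}

static int bloom_maybe_contains(const struct bundle *bp, const uint8_t *key, size_t len)
{
    uint32_t a = h1(key, len), b = h2(key, len);
    return (bp->bloom[a / 8] & (1 << (a % 8))) && (bp->bloom[b / 8] & (1 << (b % 8)));
}

/* Serial scan, as described above; each bundle could also be searched in
 * parallel since the files are independent and read-only. */
static void find_address(struct bundle *bundles, const uint8_t rmd160[20])
{
    for (int i = 0; i < NUM_BUNDLES; i++)
        if (bloom_maybe_contains(&bundles[i], rmd160, 20)) {
            /* hash-table lookup within bundle i would go here */
            printf("possible hit in bundle %d\n", i);
        }
}

int main(void)
{
    static struct bundle bundles[NUM_BUNDLES];   /* zeroed: empty filters */
    uint8_t rmd160[20] = {0};
    find_address(bundles, rmd160);
    return 0;
}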

James

http://www.digitalcatallaxy.com/report2015.html
100+ page annual report for SuperNET
pixelRobot
Newbie
Offline

Activity: 1
Merit: 0

March 04, 2016, 08:23:52 AM
Last edit: March 05, 2016, 04:04:43 AM by pixelRobot
 #3

Love the quote on the front page  Cheesy

Very intriguing paper. A few questions come to mind:

1. Part 3 can be a soft fork. Do Parts 1 and 2 require a hard fork?

2. Have you tested this? If so, could you tell us about it?

--
pxR




hv_
Legendary
Offline

Activity: 2534
Merit: 1055

Clean Code and Scale

March 06, 2016, 06:21:46 PM
 #4

Under present circumstances it might be realized in the year 9001... Grin

Carpe diem  -  understand the White Paper and mine honest.
Fix real world issues: Check out b-vote.com
The simple way is the genius way - Satoshi's Rules: humana veris _
Shawshank
Legendary
Offline

Activity: 1623
Merit: 1608

March 14, 2016, 09:01:20 PM
 #5

The document just says: "Safely scale Bitcoin to process over 9000 transactions", but later on it says: "This sequence would yield a total capacity of over 9000 Mb as required."

So, what is this 9000 exactly?

Lightning Address: shawshank@getalby.com
oleganza
Full Member
Offline

Activity: 200
Merit: 104

Software design and user experience.

May 03, 2016, 06:44:37 AM
 #6

Seems like GitHub deleted the repo, so I moved the PDF here:
https://github.com/oleganza/bitcoin-papers/blob/master/Bitcoin9000.pdf


Quote from: Shawshank on March 14, 2016, 09:01:20 PM
So, what is this 9000 exactly?

It's a reference to the "It's Over 9000" meme: http://knowyourmeme.com/memes/its-over-9000




Bitcoin analytics: blog.oleganza.com / 1TipsuQ7CSqfQsjA9KU5jarSB1AnrVLLo