Author Topic: Bitcoin 9000: a long-term scaling plan  (Read 5838 times)
goodsamaritan9k
Newbie
*
Offline

Activity: 1


View Profile
March 01, 2016, 09:32:53 PM
 #1

We propose a strategy to scale Bitcoin to far greater throughput and performance than are available today while keeping the risk of centralization and costs to a minimum. To achieve this, we decrease block validation latency with diff blocks, parallelize transaction validation, enable UTXO sharding with transaction input block-height annotations, and deploy a series of extension blocks for sustainable capacity increases.

Download whitepaper:
https://github.com/goodsamaritan9000/scalingbitcoin/raw/master/Bitcoin9000.pdf
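Below is a minimal Python sketch of the height-annotation idea only (not the paper's actual design; the shard span and data layout are invented for illustration): because each input carries the creation height of the output it spends, a validator consults exactly one shard of the UTXO set, and different shards can be checked by different workers.

Code:
# Toy sketch (hypothetical names/sizes): sharding the UTXO set by the
# creation height of each output. Every transaction input is annotated
# with that height, so validation touches exactly one shard.
SHARD_SPAN = 10_000  # hypothetical number of blocks per shard

class UtxoShards:
    def __init__(self):
        self.shards = {}  # shard index -> {(txid, vout): amount}

    def _shard(self, height):
        return self.shards.setdefault(height // SHARD_SPAN, {})

    def add(self, height, txid, vout, amount):
        # New outputs land in the shard for their creation height.
        self._shard(height)[(txid, vout)] = amount

    def spend(self, height_annotation, txid, vout):
        # The input's height annotation picks the one shard to consult.
        return self._shard(height_annotation).pop((txid, vout), None)

utxos = UtxoShards()
utxos.add(420_000, "ab" * 32, 0, 50_000)
assert utxos.spend(420_000, "ab" * 32, 0) == 50_000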
jl777
Legendary
*
Offline

Activity: 1162


View Profile WWW
March 01, 2016, 09:53:48 PM
 #2

Quote from: goodsamaritan9k on March 01, 2016, 09:32:53 PM
We propose a strategy to scale Bitcoin to far greater throughput and performance than are available today while keeping the risk of centralization and costs to a minimum. To achieve this, we decrease block validation latency with diff blocks, parallelize transaction validation, enable UTXO sharding with transaction input block-height annotations, and deploy a series of extension blocks for sustainable capacity increases.

Download whitepaper:
https://github.com/goodsamaritan9000/scalingbitcoin/raw/master/Bitcoin9000.pdf
Nice!

The iguana bitcoin core does a parallel download where the vast majority of data goes into read-only files. This avoids needing a DB and also allows the files to be put into a compressed file system via mksquashfs. By processing the data in several stages, it is possible to stream data in at bandwidth-saturation levels; I am not seeing any bottlenecks until it exceeds 500 Mbps. The parallel download is able to get 70 to 120 megabytes/sec, which is about 12 minutes for the entire 60GB blockchain.
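As a rough illustration of the staging (a toy Python sketch of the idea only; iguana itself is C, and the bundle size and fetch step here are stand-ins):

Code:
# Toy sketch of a staged parallel download: fetch fixed-size bundles of
# blocks concurrently and write each bundle to its own read-only file,
# so later passes can process the files independently (and the files
# can go onto a compressed filesystem). Values are hypothetical.
from concurrent.futures import ThreadPoolExecutor

BUNDLE_SIZE = 2000  # hypothetical blocks per bundle

def fetch_bundle(start_height):
    # Stand-in for pulling blocks [start, start + BUNDLE_SIZE) from peers.
    return b"blockdata" * 100

def download(tip_height, workers=8):
    starts = range(0, tip_height, BUNDLE_SIZE)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for start, data in zip(starts, pool.map(fetch_bundle, starts)):
            # Each bundle becomes an immutable file; no DB writes needed.
            with open("bundle_%d.dat" % start, "wb") as f:
                f.write(data)

download(10_000)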

Using 8 cores, all of the data structures are created in parallel, with hash tables and bloom filters built into the read-only files. I am seeing about half an hour to get to the point where things are ready for the last pass, which does the final processing that is needed.
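For example, a per-bundle filter baked into the read-only file is what lets later passes skip irrelevant bundles cheaply. A minimal sketch, with illustrative sizes and hashing rather than iguana's actual layout:

Code:
# Minimal sketch of baking a bloom filter into each read-only bundle so
# later passes can skip bundles that cannot contain a given txid.
import hashlib

FILTER_BITS = 1 << 16  # illustrative filter size

def bit_positions(item, k=4):
    # Derive k bit positions from slices of a single sha256 digest.
    h = hashlib.sha256(item).digest()
    return [int.from_bytes(h[4 * i:4 * i + 4], "little") % FILTER_BITS
            for i in range(k)]

def build_filter(txids):
    bits = bytearray(FILTER_BITS // 8)
    for txid in txids:
        for p in bit_positions(txid):
            bits[p // 8] |= 1 << (p % 8)
    return bytes(bits)  # written into the bundle file once, then read-only

def maybe_contains(bits, txid):
    # False positives are possible, false negatives are not.
    return all(bits[p // 8] & (1 << (p % 8)) for p in bit_positions(txid))

txids = [hashlib.sha256(i.to_bytes(4, "little")).digest() for i in range(1000)]
f = build_filter(txids)
assert all(maybe_contains(f, t) for t in txids)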

So, parallel processing somewhat similar to what you describe is already working in a functioning project, and it does remove the bottlenecks that DB-oriented approaches incur. The only thing that changes retroactively is the state of the unspents, but this is encoded into 6 bytes per unspent by assigning a deterministic 32-bit integer to each of the high-entropy hashes, so the net result is a relatively compact set of utxo. Even the spends data can be processed in parallel once all the blocks are loaded, creating a vector of updates to the unspents for each bundle. OR'ing these vectors together produces the current set of unspents relatively quickly.
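In sketch form, those two tricks look like this (illustrative Python; the real encoding packs this into 6 bytes per unspent in C):

Code:
# Deterministic 32-bit ids assigned in first-seen order, plus per-bundle
# spend bitvectors that OR together in any order, hence in parallel.

class UnspentIndex:
    def __init__(self):
        self.ids = {}  # high-entropy hash -> deterministic uint32

    def id_for(self, hash_hex):
        # First-seen order is identical on every node replaying the same
        # chain, so this small integer is deterministic and can stand in
        # for the full 32-byte hash.
        return self.ids.setdefault(hash_hex, len(self.ids))

def or_spend_vectors(vectors, n_unspents):
    # Each bundle contributes a bitvector of "spent" flags; OR'ing them
    # yields the current spent set regardless of processing order.
    spent = bytearray((n_unspents + 7) // 8)
    for vec in vectors:
        for i, byte in enumerate(vec):
            spent[i] |= byte
    return bytes(spent)

idx = UnspentIndex()
assert idx.id_for("deadbeef") == idx.id_for("deadbeef") == 0
assert or_spend_vectors([b"\x01", b"\x04"], 8) == b"\x05"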

The searches using the read-only bundles can also be done in parallel, but even just serially processing the parallel files I am seeing times of about 2 milliseconds for the equivalent of an importprivkey operation on a 1.4GHz i5 laptop.
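A sketch of that search (the bundle layout here is hypothetical); each probe is independent, which is why it parallelizes trivially:

Code:
# importprivkey-style search over read-only bundles: probe each bundle's
# immutable index in turn; the probes could just as easily run in parallel.

def find_utxos(address_hash, bundle_indexes):
    # bundle_indexes: one dict per bundle, address hash -> list of utxos
    hits = []
    for index in bundle_indexes:
        hits.extend(index.get(address_hash, []))
    return hits

bundles = [{"addr1": [("txid0", 0)]}, {}, {"addr1": [("txid9", 1)]}]
assert find_utxos("addr1", bundles) == [("txid0", 0), ("txid9", 1)]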

James

http://www.digitalcatallaxy.com/report2015.html
100+ page annual report for SuperNET
pixelRobot
Newbie
*
Offline

Activity: 1


View Profile
March 04, 2016, 08:23:52 AM
 #3

Love the quote on the front page  Cheesy

Very intriguing paper. A few questions come to mind:

1. Part 3 can be a soft fork. Do Parts 1 and 2 require a hard fork?

2. Have you tested this? If so, could you tell us about it?

--
pxR




hv_
Hero Member
*****
Offline

Activity: 672


View Profile
March 06, 2016, 06:21:46 PM
 #4

Under present circumstances it might be realized in the year 9001... Grin

Carpe diem  -  cut the down side  -  be anti-fragile
A feature that needs more than one convincing argument is a no, and Satoshi owes me no proof.
My coding style is legendary but limited to 1MB, sorry but cannot come much over my C64, Bill Gates and Tom Bombadil
Shawshank
Legendary
*
Offline

Activity: 1349



View Profile
March 14, 2016, 09:01:20 PM
 #5

The document just says: "Safely scale Bitcoin to process over 9000 transactions", but later on it says: "This sequence would yield a total capacity of over 9000 Mb as required."

So, what is this 9000 exactly?

A bank-only system is similar to having your Bitcoin wallet confined to your national ID, essentially forfeiting your privacy and handing all private keys to the government
oleganza
Full Member
***
Offline

Activity: 200


Software design and user experience.


View Profile WWW
May 03, 2016, 06:44:37 AM
 #6

Seems like GitHub deleted the repo, so I moved the PDF here:
https://github.com/oleganza/bitcoin-papers/blob/master/Bitcoin9000.pdf


Quote from: Shawshank on March 14, 2016, 09:01:20 PM
So, what is this 9000 exactly?

It's a reference to the "It's Over 9000!" meme: http://knowyourmeme.com/memes/its-over-9000




Bitcoin analytics: blog.oleganza.com / 1TipsuQ7CSqfQsjA9KU5jarSB1AnrVLLo