Author Topic: A proposal for a scalable blockchain.  (Read 6161 times)
Gavin Andresen
Legendary
*
qt
Offline Offline

Activity: 1652
Merit: 2216


Chief Scientist


View Profile WWW
December 01, 2011, 03:01:12 AM
 #41

Splendid! Would you be so kind as to rerun the calculations while discarding TxOuts with small BTC values, please?
Let's say anything below 0.001 can be discarded.

Interesting idea!

Thinking out loud...  There Doesn't Have To Be One Way To Do It.  Piuk (or somebody) should hack together a client or miner that uses a ledger system; it could refuse to relay or include any transactions with inputs smaller than 0.001 BTC, so it only needed a truncated ledger to create new blocks (if it is a miner, maybe it connects to a trusted 'traditional' bitcoin node to make sure it only builds on valid blocks which might contain tiny inputs).
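Very roughly, such a node's policy might look something like this (just a sketch in Python; the Tx/TxIn structures, the threshold constant, and the ledger dict are made-up stand-ins, not real client code):

Code:
from dataclasses import dataclass

DUST_THRESHOLD = 0.001  # BTC; inputs below this are ignored entirely (assumed cutoff)

@dataclass
class TxIn:
    prev_txid: str
    prev_index: int

@dataclass
class Tx:
    txid: str
    inputs: list      # list of TxIn
    outputs: list     # list of (value_btc, script) tuples

# Truncated ledger: only outputs >= DUST_THRESHOLD are tracked at all.
truncated_ledger = {}  # (txid, index) -> (value_btc, script)

def should_relay(tx: Tx) -> bool:
    """Relay/include a tx only if every input it spends is in the truncated ledger."""
    return all((i.prev_txid, i.prev_index) in truncated_ledger for i in tx.inputs)

def apply_block_tx(tx: Tx) -> None:
    """Update the truncated ledger for a tx that made it into a valid block."""
    for i in tx.inputs:
        truncated_ledger.pop((i.prev_txid, i.prev_index), None)
    for index, (value, script) in enumerate(tx.outputs):
        if value >= DUST_THRESHOLD:
            truncated_ledger[(tx.txid, index)] = (value, script)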


How often do you get the chance to work on a potentially world-changing project?
ByteCoin
Sr. Member
****
expert
Offline Offline

Activity: 416
Merit: 277


View Profile
December 01, 2011, 03:13:42 AM
 #42

Interesting idea!

somebody should hack together a client or miner that uses a ledger system; it could refuse to relay or include any transactions with inputs smaller than 0.001 BTC, so it only needed a truncated ledger to create new blocks

Quite so. If the transactions were sorted in the merkle tree according to value, then a client (or miner) not interested in fiddling with small change could stub off whole chunks of the tree. I agree that "There Doesn't Have To Be One Way To Do It" but one has to be careful - going too far down that route can result in small TxOuts being shunned. If a majority of miners don't relay and don't accept transactions using the tiny TxOuts then such transactions will take a long time to confirm - if they confirm at all.
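To make the stubbing concrete, here is a toy sketch (Python, with illustrative data and hashing only, not the real block format): a pruned subtree is replaced by its hash, and the root still verifies.

Code:
import hashlib

def h(b: bytes) -> bytes:
    """Double-SHA256, as Bitcoin uses for merkle hashing."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

class Stub:
    """A pruned subtree: we keep only its hash, not its contents."""
    def __init__(self, digest: bytes):
        self.digest = digest

def node_hash(node):
    """node is bytes (a leaf), a Stub, or a (left, right) pair of nodes."""
    if isinstance(node, Stub):
        return node.digest
    if isinstance(node, bytes):
        return h(node)
    left, right = node
    return h(node_hash(left) + node_hash(right))

# Value-sorted outputs: the left subtree holds all the dust, the right the big ones.
dust = (b'out:0.00001', b'out:0.00002')
big  = (b'out:1.5', b'out:20.0')
root = node_hash((dust, big))

# A node not interested in small change keeps only the hash of the dust subtree:
pruned = (Stub(node_hash(dust)), big)
assert node_hash(pruned) == root  # the root still verifies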

I'm in two minds as to whether this scenario is natural optimisation at work or a bad thing.

On the other hand, similar pressures exist under the current system if block chain pruning were implemented. There would be a temptation to prune small unspent transactions to free up disk space.

ByteCoin
Atheros
Sr. Member
****
Offline Offline

Activity: 249
Merit: 251



View Profile WWW
December 01, 2011, 04:11:42 AM
 #43

On the other hand, similar pressures exist under the current system if block chain pruning were implemented. There would be a temptation to prune small unspent transactions to free up disk space.
If a miner did this, how would they verify all transactions? And if they don't verify transactions, how will they know what to include in blocks? If they include bad transactions in blocks then the block is bad and they just wasted a whole lot of time and electricity.

BM-GteJMPqvHRUdUHHa1u7dtYnfDaH5ogeY
Bitmessage.org - Decentralized, trustless, encrypted, authenticated messaging protocol and client.
Skybuck
Full Member
***
Offline Offline

Activity: 384
Merit: 110


View Profile
December 01, 2011, 12:58:57 PM
 #44

This is against the idea of bitcoin.

Since there is a limited supply of coins, at least in theory, some day a bitcoin could be worth 1000 dollars.

Thus 0.001 could be worth 1 dollar.

Do you really want to throw away dollars like that? I don't think so ;)
piuk (OP)
Hero Member
*****
expert
Offline Offline

Activity: 910
Merit: 1005



View Profile WWW
December 01, 2011, 03:43:55 PM
 #45

Splendid! Would you be so kind as to rerun the calculations while discarding TxOuts with small BTC values, please?
Let's say anything below 0.001 can be discarded.
If you're feeling keen could you please plot a cumulative distribution of TxOut value? Possibly on a log value scale?
I have a strong feeling that the vast majority of the ledger will be taken up with very small TxOuts, many of them vanity addresses.

The fact that a ledger system seems to result in reclaiming 90% of the disc space is encouraging. Now that many bitcoins have been exchanged, the space-saving results of a ledger system are more obvious than when I first proposed it over a year ago.

The fees system could be reworked to provide incentives for keeping the ledger small rather than the blockchain. Some transactions would result in ledger size decreasing and could perhaps be encouraged by refunding some previously paid fee.
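As a toy illustration of what such an incentive might look like (Python; every constant here is invented for the example, nothing about the actual fee rules): charge by the net number of ledger entries a transaction adds, and refund when it removes more than it adds.

Code:
BYTES_PER_LEDGER_ENTRY = 70          # rough guess: txid + index + value + script pointer
FEE_PER_LEDGER_BYTE    = 0.00000100  # BTC per byte of ledger growth (assumed rate)

def ledger_fee(num_inputs_spent: int, num_outputs_created: int) -> float:
    """Positive result: fee owed for growing the ledger.
    Negative result: a (partial) refund for shrinking it, e.g. consolidation txs."""
    net_entries = num_outputs_created - num_inputs_spent
    return net_entries * BYTES_PER_LEDGER_ENTRY * FEE_PER_LEDGER_BYTE

# A consolidation that spends 10 old outputs into 1 new one gets money back:
print(ledger_fee(num_inputs_spent=10, num_outputs_created=1))   # negative -> refund
# A "dust spray" paying 100 outputs from 1 input pays extra:
print(ledger_fee(num_inputs_spent=1, num_outputs_created=100))  # positive -> fee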

As Gavin implies, all nodes would have to verify ledger integrity in the same way that block integrity is verified. If this were implemented, a couple of extra optimizations should accompany it: the transaction signature shouldn't contribute to the transaction hash, and the signature should not be recorded in the block or ledger. This would result in further considerable space savings and pave the way for enabling transaction replacement and other advanced contract features.

Note that if the ledger is implemented as a hash tree and incoming transactions are incorporated into the tree according to a suitable algorithm then when a new block is found, each client can recalculate the ledger independently and hence the whole ledger need only be downloaded once.

ByteCoin

Here's the output, filtering out all outputs <= 0.01 BTC:

Quote
Building Test Ledger
Ledger generation took 128s
Original disk size 750.153MB
Number of unspent outputs 733392
Ledger size 50.6512MB
6.75211% of the original size

To be honest I would have thought the saving would be greater. Regardless, unless the official position is that Bitcoin won't support transactions under 0.01 BTC, you couldn't filter them out. The majority of the space is taken up by transaction hashes and scripts, so if there were a good way to compress these you could reduce the size just as well.

One possible scheme would be:
Order the ledger by transaction hash in ascending order. Each hash is then recorded as a var_int encoding the difference from the previous hash.
Scripts are stored at the end of the ledger, with duplicate scripts written only once. Each CTxOut then points to a script index instead of having the script written inline.
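Roughly, in Python (the varint format and field layout here are just for illustration, not the actual serialization):

Code:
def write_varint(n: int) -> bytes:
    """Minimal LEB128-style unsigned varint, for illustration only."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def compress_ledger(entries):
    """entries: list of (txid_int, vout, value_satoshis, script_bytes),
    with txid given as a 256-bit integer."""
    entries = sorted(entries, key=lambda e: e[0])   # ascending by hash
    script_table = {}                               # script -> index
    body = bytearray()
    prev_hash = 0
    for txid, vout, value, script in entries:
        idx = script_table.setdefault(script, len(script_table))
        body += write_varint(txid - prev_hash)      # delta from previous hash
        body += write_varint(vout)
        body += write_varint(value)
        body += write_varint(idx)                   # pointer into the script table
        prev_hash = txid
    tail = bytearray()
    for script in script_table:                     # insertion order == index order
        tail += write_varint(len(script)) + script
    return bytes(body), bytes(tail)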

I will play around with some different formats.

Adjusting fees based on a transaction's effect on ledger size is an excellent idea.



finway
Hero Member
*****
Offline Offline

Activity: 714
Merit: 500


View Profile
December 06, 2011, 09:44:03 AM
 #46

Gavin says the CPU is the bottleneck right now,
but recently I downloaded the whole blockchain again (it took almost a whole day), and I saw the hard disk light on the whole time while CPU usage was only 20%~30%.

Maybe some reorganization of blk0001.dat would help?


gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4172
Merit: 8420



View Profile WWW
December 07, 2011, 02:24:31 PM
 #47

If a miner did this, how would they verify all transactions? And if they don't verify transactions, how will they know what to include in blocks? If they include bad transactions in blocks then the block is bad and they just wasted a whole lot of time and electricity.

They don't include transactions they can't verify.  Their only risk is that someone else wasted a lot of time and electricity in the prior block they received to make them waste a lot of time and electricity.

It's also possible to do the opposite of what ByteCoin suggests, i.e. _don't_ sort the tree by size, sort it by txn hash, so that you can't escape the space requirement of small transactions easily (if you prune them out of the ledger you'll still have to keep their hashes in order to be able to update the tree - so the space savings aren't great).

In any case, if the ledger is committed the pruning is less bad, because someone who wants to spend a pruned input could provide the tree fragment themselves... though it's not clear to me how a miner could add new outputs to a size-sorted tree for a size range they weren't maintaining.

Another random point is that where the ledger is deterministic the bigger issue is attackers intentionally creating unbalanced trees, so you'd have to penalize txn outs which would attach at ugly (too deep) points.

etotheipi
Legendary
*
expert
Offline Offline

Activity: 1428
Merit: 1093


Core Armory Developer


View Profile WWW
May 07, 2012, 01:18:03 AM
Last edit: May 26, 2012, 01:43:39 AM by etotheipi
 #48

Has anyone considered this idea using address/P2SH hashes as the "key" of the tree instead of TxOuts/OutPoints?  By doing this, lite-weight nodes could retrieve the entire unspent-TxOut list of an address by downloading only a couple of kilobytes, but still have the same security as a full node!

Like the OP's generic merkle-tree-of-unspent-TxOuts, this would provide the same level of compression but additionally enable lightweight nodes to verify address balances within seconds.  That's because the leaf nodes of the ledger tree are actually lists of unspent outputs, instead of individual outputs (organized in sub-merkle-trees).

For convenience, I will use the vague term "recipient" to refer to the pub-key-hash160, or a P2SH script.  I'm sure a scheme could be created that takes into account arbitrary TxOut scripts and still has the benefit of uniform distribution.  Here's how the "ledger" is created on a given block:

  • All nodes traverse the blockchain and collect unspent outputs (can be done on an already pruned tree to be updated)
  • The unspent outputs are sorted by their recipient, which can be done in linear time using bucket-sort (because the sorting keys are hashes which are uniformly distributed)
  • For each recipient all of its unspent outputs are put into a merkle sub-tree.
  • The sub-tree root for each individual recipient is put into a master merkle tree
  • The fingerprint of the blockchain on a given block is the master-merkle-tree root.

Therefore, each address/recipient is one leaf node of the master merkle tree, and the value of each leaf node is the merkle root of the unspent-TxOut list for that recipient.
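A quick sketch of building that fingerprint (Python; the hashing and serialization details are made up for illustration, not a real format):

Code:
import hashlib
from collections import defaultdict

def h(b: bytes) -> bytes:
    """Double-SHA256, as Bitcoin uses for merkle hashing."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(hashes):
    if not hashes:
        return h(b'')
    level = list(hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])   # duplicate last, Bitcoin-style
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def ledger_fingerprint(unspent_outputs):
    """unspent_outputs: iterable of (recipient_hash160_bytes, serialized_txout_bytes)."""
    by_recipient = defaultdict(list)
    for recipient, txout in unspent_outputs:
        by_recipient[recipient].append(txout)
    sub_roots = []
    # Recipients sorted by their hash: uniformly distributed, so roughly balanced.
    for recipient in sorted(by_recipient):
        sub_root = merkle_root([h(txout) for txout in by_recipient[recipient]])
        sub_roots.append(h(recipient + sub_root))   # one leaf of the master tree
    return merkle_root(sub_roots)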

There's plenty of discussion already about how this fingerprint would be distributed, so we assume that all full nodes have it and agree on it for each block.  So now you are a lite-weight node getting onto the network for the first time.  You have a list of addresses ("recipients") to collect balances for.

  • You download the headers (80 bytes each) plus the ledger-fingerprint at each block (32 bytes each).  Compute the longest chain.
  • For each address, you query your peers with the recipient string.
  • Each peer responds with the sub-merkle-root of that recipient, along with the entire merkle branch -- which should be 2*log(N) hashes, where N is the number of "recipients" in the blockchain with balance > 0.  If the blockchain has 100,000,000 addresses, this is about 2 kB worth of data.
  • You verify the merkle branch against the ledger fingerprint for this block.  You now know that the sub-merkle-root is valid.
  • You request the sub-merkle tree from your peers (which is just the unspent TxOut list for that address).
  • You compute the merkle root of the unspent TxOuts and verify it matches the sub-merkle-root previously verified.

As long as the ledger-fingerprint (which is just an enormous-merkle-tree root) is somehow included in the protocol, then lite-clients could import addresses and verify balances for only a couple kilobytes and with the same security as downloading the whole blockchain!  
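For concreteness, the branch check on the lite node might look like this (sketch only; the branch encoding is an assumption, not an actual protocol message):

Code:
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_branch(leaf_hash: bytes, branch, fingerprint: bytes) -> bool:
    """branch: list of (sibling_hash, sibling_is_on_left) pairs, deepest level first.
    Returns True if the leaf really is committed to by the ledger fingerprint."""
    node = leaf_hash
    for sibling, sibling_on_left in branch:
        node = h(sibling + node) if sibling_on_left else h(node + sibling)
    return node == fingerprint

# Usage: the peer sends the recipient's sub-merkle-root plus the ~2 kB branch;
# the client folds them together and compares the result against the 32-byte
# ledger fingerprint it already has from the headers it downloaded.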

Oh, and you'd get the benefit of blockchain pruning, too.  A small bonus...

Founder and CEO of Armory Technologies, Inc.
Armory Bitcoin Wallet: Bringing cold storage to the average user!
Only use Armory software signed by the Armory Offline Signing Key (0x98832223)

Please donate to the Armory project by clicking here!    (or donate directly via 1QBDLYTDFHHZAABYSKGKPWKLSXZWCCJQBX -- yes, it's a real address!)
cbeast
Donator
Legendary
*
Offline Offline

Activity: 1736
Merit: 1006

Let's talk governance, lipstick, and pigs.


View Profile
May 07, 2012, 03:22:35 AM
 #49

Has anyone considered this idea using address/P2SH hashes as the "key" of the tree instead of TxOuts/OutPoints?   By doing this, lite-weight nodes could retrieve the entire unspent-TxOut-list of an address by downloading only a couple kilobytes but still have the same security of a full-node!   
So now you are a lite-weight node getting onto the network for the first time.
Oh, and you'd get the benefit of blockchain pruning, too.  A small bonus...

Someone had an epiphany.

Any significantly advanced cryptocurrency is indistinguishable from Ponzi Tulips.
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4172
Merit: 8420



View Profile WWW
May 07, 2012, 03:29:30 AM
 #50

Has anyone considered this idea using address/P2SH hashes as the "key" of the tree instead of TxOuts/OutPoints?

Then I arrange a bunch of transactions under that key and unbalance your tree. Updates become O(N). Yuck.   (And bumping zombie threads? double yuck!)
BrightAnarchist
Donator
Legendary
*
Offline Offline

Activity: 853
Merit: 1000



View Profile
May 07, 2012, 03:32:40 AM
 #51

watching
etotheipi
Legendary
*
expert
Offline Offline

Activity: 1428
Merit: 1093


Core Armory Developer


View Profile WWW
May 07, 2012, 08:21:55 PM
Last edit: May 08, 2012, 02:56:01 AM by etotheipi
 #52

Then I arrange a bunch of transactions under that key and unbalance your tree. Updates become O(N). Yuck.

I don't follow.  Or rather, I don't know why this is any more computationally inconvenient than a pure unspent-TxOut-list merkle tree.  Much of the tree would probably have to be recomputed anyway when you have hundreds of transactions removing and adding branches to the tree.  The only difference is that your main tree now only holds all "recipients" instead of all unspent outputs, and each recipient has its own sub-tree of unspent outputs.  Either way, you have N unspent outputs that have to be represented in the tree, and updating 300 of them on a new block still requires substantial recomputation.

In this case, each op requires updating a sub-tree and then the master tree after that.  If all unspent outputs were concentrated behind a single recipient, then we're back to the original proposal of unspent-TxOut-list trees, which was the basis for this thread.

Unless your point has to do with the general feasibility of maintaining such a tree.  Then it's a fair point, but I'm simply building on the original proposal (and its flaws).  

If one argument against it was that the 90% space savings isn't worth the tremendous complexity of changes to the client and protocol, then my question is: is it worth revisiting, knowing that a change of similar complexity could save you 90% and give nodes the ability to verify balances without downloading the whole tree?


(And bumping zombie threads? double yuck!)

Would you have preferred I started a new thread?  Then you could complain about the dozens of threads already out there discussing this general topic.  Plus, this isn't a "bump"... it's a legitimate expansion of the exact, original idea proposed in this thread.




Founder and CEO of Armory Technologies, Inc.
Armory Bitcoin Wallet: Bringing cold storage to the average user!
Only use Armory software signed by the Armory Offline Signing Key (0x98832223)

Please donate to the Armory project by clicking here!    (or donate directly via 1QBDLYTDFHHZAABYSKGKPWKLSXZWCCJQBX -- yes, it's a real address!)
piuk (OP)
Hero Member
*****
expert
Offline Offline

Activity: 910
Merit: 1005



View Profile WWW
June 15, 2012, 11:51:27 AM
 #53

Up to date test re-run:

Quote
Building Test Ledger
Ledger generation took 128s
Number of unspent outputs 1053033
Original disk size 1716.7MB
Compressed Ledger size 71.9706MB
4.19238% of the original size

Since the last test the blockchain has more than doubled in size but the ledger has only increased by 20MB.

In my opinion this is the only way that Bitcoin will scale. Merkle tree pruning is a dead end, because not even centralised nodes holding the full blockchain will be able to scale.

Keep a variable number of recent blocks in full (default ~2 weeks); unspent outputs in blocks older than that are compressed into the ledger.
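Something like this, as a rough sketch (Python; the block/tx structures and window constant are invented just to show the rolling window):

Code:
LEDGER_WINDOW_BLOCKS = 2016   # ~2 weeks at one block per 10 minutes (assumed default)

def compact(chain_tip_height, full_blocks, ledger):
    """full_blocks: dict height -> block; ledger: dict (txid, vout) -> txout.
    Folds blocks older than the window into the ledger and drops them, so the
    ledger ends up holding the unspent set as of the cutoff height."""
    cutoff = chain_tip_height - LEDGER_WINDOW_BLOCKS
    for height in sorted(hgt for hgt in full_blocks if hgt < cutoff):
        block = full_blocks.pop(height)
        for tx in block['txs']:
            for txin in tx['inputs']:
                ledger.pop((txin['prev_txid'], txin['prev_index']), None)
            for vout, txout in enumerate(tx['outputs']):
                ledger[(tx['txid'], vout)] = txout
    return ledger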

Realpra
Hero Member
*****
Offline Offline

Activity: 815
Merit: 1000


View Profile
June 15, 2012, 03:08:19 PM
 #54

I like this proposal and I think it would work.

However to avoid miners printing money I think the snapshots should be larger than 2 weeks: If someone had 51% for two weeks (not THAT hard to imagine) he could basically rewrite ledger history.

Maybe make it 5 YEARS: that keeps the chain from becoming infinite, but is still quite safe, plus most outputs from before that date should have been spent, so the ledger stays smaller too.

Win-win.


Anyone from this thread like my swarm proposal? (other thread)

Cheap and sexy Bitcoin card/hardware wallet, buy here:
http://BlochsTech.com