Bitcoin Forum
Author Topic: A proposal for a scalable blockchain.  (Read 6160 times)
Skybuck
Full Member
***
Offline Offline

Activity: 384
Merit: 110


View Profile
November 26, 2011, 02:21:02 PM
 #21

Quote
3. Instead of storing the transactions, the balances are stored, which could cut back on data.
This would require quite a rework of the entire bitcoin network.

Not really, the balance chain would be something extra.

It could even start locally in each client without any additional protocol; a balance chain protocol could be added later on.

Quote
Right now it is based on transactions not just for the sake of it, but for flexibility which is achieved with scripts. Basically a transaction can have many different inputs and many different outputs, and many different conditions that will have to be met to claim the outputs. Just throwing this all away and simply storing balances of each address would not be compatible with this and would mean we would have to rework the whole concept.
Cool idea though.

I don't see how it would be incompatible, since bitcoin addresses start with a balance of 0.

Instead, a bitcoin address can now start at an arbitrary value which is validated by the balance chain, and thus inputs and outputs could be resolved up to that balance value. So instead of resolving down to 0, it is resolved against the balance value.

If you believe otherwise, please explain why this would not work, or where the problems/incompatibilities would be.
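To make the idea concrete, here is a minimal sketch of resolving a transfer against a stored balance rather than against a chain of prior transactions. It is purely illustrative: BalanceSheet and applyTransfer are made-up names, amounts are in satoshis, and none of bitcoin's script flexibility is modelled.

Code:
#include <cstdint>
#include <iostream>
#include <map>
#include <string>

// Illustrative only: a balance sheet keyed by address, amounts in satoshis.
typedef std::map<std::string, int64_t> BalanceSheet;

// Apply a simple transfer against the sheet; reject it if the sender's
// recorded (validated) balance cannot cover the amount.
bool applyTransfer(BalanceSheet &sheet, const std::string &from,
                   const std::string &to, int64_t amount)
{
    if (amount <= 0 || sheet[from] < amount)
        return false;                    // would overspend the validated balance
    sheet[from] -= amount;
    sheet[to]   += amount;
    return true;
}

int main()
{
    BalanceSheet sheet;
    sheet["1Alice"] = 5000000000LL;      // 50 BTC carried over from the balance chain
    std::cout << applyTransfer(sheet, "1Alice", "1Bob", 1000000000LL) << "\n";   // 1 (ok)
    std::cout << applyTransfer(sheet, "1Bob", "1Carol", 9000000000LL) << "\n";   // 0 (overspend)
}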
cbeast
Donator
Legendary
*
Offline Offline

Activity: 1736
Merit: 1006

Let's talk governance, lipstick, and pigs.


View Profile
November 26, 2011, 02:44:36 PM
 #22

+∞

Explosive growth in Bitcoin use = explosive growth in Bitcoin value. Internet speed and storage space are growing exponentially. As Bitcoin grows in value, miners with full clients will be connected to fiber optic networks operating at multi GB speeds and everyone else can use lite clients. There is plenty of time to explore other options.

Quote
There won't be any explosive growth in use unless big problems like scalability are addressed first.

What is the maximum theoretical bandwidth of fiber optics? Unknown. Scalability problem will be solved through hardware in the short term, Bitcoin core development in the long term.

Any significantly advanced cryptocurrency is indistinguishable from Ponzi Tulips.
Mike Hearn
Legendary
*
expert
Offline Offline

Activity: 1526
Merit: 1128


View Profile
November 26, 2011, 02:46:50 PM
 #23

Quote
There won't be any explosive growth in use unless big problems like scalability are addressed first.

If you agree that the current limitations on growth are software performance related (and not exchanges or economy size or whatever) then there are plenty of simpler optimizations that don't require protocol extensions, which still aren't implemented. That'd definitely be the first thing to think about. Gavin is planning to work on the most important of these, but I'm sure he'd appreciate help. Otherwise you could contribute to MultiBit/BitcoinJ which already implement a form of lightweight mode.
cbeast
Donator
Legendary
*
Offline Offline

Activity: 1736
Merit: 1006

Let's talk governance, lipstick, and pigs.


View Profile
November 26, 2011, 03:05:19 PM
 #24

Quote
What is the maximum theoretical bandwidth of fiber optics? Unknown. Scalability problem will be solved through hardware in the short term, Bitcoin core development in the long term.

Quote
If scalability is a non-issue, then why is storing generic data like Namecoin banned/heavily discouraged?

Namecoin is icky  Tongue

Seriously, I think it won't be an issue in the long run and can be addressed in future core development. For the time being the focus should be on useful things.

Any significantly advanced cryptocurrency is indistinguishable from Ponzi Tulips.
Gavin Andresen
Legendary
*
qt
Offline Offline

Activity: 1652
Merit: 2216


Chief Scientist


View Profile WWW
November 27, 2011, 06:38:19 PM
 #25

Quote
If scalability is a non-issue, then why is storing generic data like Namecoin banned/heavily discouraged?

As Mike said, help on "initial headers-only download" would be much appreciated.

Work-in-progress is here:  https://github.com/bitcoin/bitcoin/tree/blockheaders

... and my notes on issues that have to be worked out are here:  https://gist.github.com/1059233

As for scalability in general: it looks to me like CPU time to validate transactions will be the bottleneck before bandwidth or disk space, so I don't see a strong reason to switch to a 'ledger' or 'balance sheet' method. Effective optimization/scalability is all about identifying and eliminating bottlenecks.

Quote
"Premature optimization is the root of all evil (or at least most of it) in programming." -- Donald Knuth

How often do you get the chance to work on a potentially world-changing project?
piuk (OP)
Hero Member
*****
expert
Offline Offline

Activity: 910
Merit: 1005



View Profile WWW
November 27, 2011, 11:04:44 PM
 #26

Quote
As for scalability in general:  it looks to me like CPU time to validate transactions will be the bottleneck before bandwidth or disk space, so I don't see a strong reason to switching to a 'ledger' or 'balance sheet' method.

It's worth noting that this method would reduce block validation time considerably, something which Merkle pruning would not help with. I agree that there are better optimisations that can be implemented first but I still think it might be worth discussing for the future. Can you see any obvious flaws in the proposal Gavin?

Gavin Andresen
Legendary
*
qt
Offline Offline

Activity: 1652
Merit: 2216


Chief Scientist


View Profile WWW
November 28, 2011, 01:16:54 AM
 #27

Quote
I agree that there are better optimisations that can be implemented first but I still think it might be worth discussing for the future. Can you see any obvious flaws in the proposal Gavin?
Ummm, yes.

It seems to me miners will have an incentive to lie about the transaction ledger, and put fake ledger hashes in their blocks. Either so their transactions might be considered 'unspent' by unsuspecting nodes that trust them, or so that other miners that don't have the full block chain create invalid blocks (eliminate the competition!)

And I don't see a proposal that everybody check the ledger and reject blocks that contain invalid ledger hashes.

I also don't see what the ledger hash accomplishes.  If you're going to trust some other node's version of unspent-transaction-reality, then you could just ask "send me the ledger state before (or after) the block with THIS block hash".

But if you're going to trust one or more nodes anyway... then it seems to me sending an ever-increasing-in-size ledger is a bad way to get scalable. If size-of-full-blockchain becomes a problem before the mining pools and big exchanges/merchants/transactions processors all have transaction processing clusters with a terabyte of ram and petabyte hard drive array then I think extending the protocol to make it easy to request all transactions involved in a given Merkle branch will probably be the way to go.
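For context, the check that such a "request the transactions for a given Merkle branch" extension would rest on is recomputing the Merkle root from a transaction hash and the sibling hashes along its branch. A minimal sketch, with a placeholder hash standing in for Bitcoin's double SHA-256 (hashPair and rootFromBranch are illustrative names, not existing client code):

Code:
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Placeholder for Bitcoin's double SHA-256; illustrative only.
static std::string hashPair(const std::string &a, const std::string &b)
{
    return std::to_string(std::hash<std::string>()(a + b));
}

// Recompute the Merkle root from a transaction hash, its index in the block,
// and the sibling hashes along its branch (deepest level first). The result
// is compared against the root stored in the block header.
std::string rootFromBranch(std::string txHash, uint32_t index,
                           const std::vector<std::string> &branch)
{
    for (size_t i = 0; i < branch.size(); ++i) {
        if (index & 1)                       // we are the right child at this level
            txHash = hashPair(branch[i], txHash);
        else                                 // we are the left child at this level
            txHash = hashPair(txHash, branch[i]);
        index >>= 1;
    }
    return txHash;
}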

But before then I expect the bitcoin network will look very different from the way it looks today, and I expect there will be several different solutions for how to scale up. If (when!) Bitcoin gets that successful, there will be serious money hiring the same smart people who figured out how to scale up PayPal and Visa.

How often do you get the chance to work on a potentially world-changing project?
Skybuck
Full Member
***
Offline Offline

Activity: 384
Merit: 110


View Profile
November 28, 2011, 02:42:39 AM
 #28

Quote
I agree that there are better optimisations that can be implemented first but I still think it might be worth discussing for the future. Can you see any obvious flaws in the proposal Gavin?
Ummm, yes.

It seems to me miners will have an incentive to lie about the transaction ledger, and put fake ledger hashes in their blocks. Either so their transactions might be considered 'unspent' by unsuspecting nodes that trust them, or so that other miners that don't have the full block chain create invalid blocks (eliminate the competition!)

And I don't see a proposal that everybody check the ledger and reject blocks that contain invalid ledger hashes.

I also don't see what the ledger hash accomplishes.  If you're going to trust some other node's version of unspent-transaction-reality, then you could just ask "send me the ledger state before (or after) the block with THIS block hash".

But if you're going to trust one or more nodes anyway... then it seems to me sending an ever-increasing-in-size ledger is a bad way to get scalable. If size-of-full-blockchain becomes a problem before the mining pools and big exchanges/merchants/transactions processors all have transaction processing clusters with a terabyte of ram and petabyte hard drive array then I think extending the protocol to make it easy to request all transactions involved in a given Merkle branch will probably be the way to go.

But before then I expect the bitcoin network will look very different from the way it looks today, and I expect there will be several different solutions for how to scale up. If (when!) Bitcoin gets that successful, there will be serious money hiring the same smart people who figured out how to scale up PayPal and Visa.

This is kinda funny.

So on the one hand you consider a blockchain safe?

But on the other hand you consider a balancechain to be unsafe?

You can't have it both ways ?!? Wink Smiley

The only difference so far is that the blockchain is anchored into the application and fully available versus throwing information away, but what remains could be anchored as well, so there is not much difference there.

Though how important such an anchor is remains to be seen. Perhaps it's not important and can be worked around.

If it is important, then maybe a balancechain anchor could be built in as well.

Either bitcoin is flawed or it's not; which is it, Gavin? Wink
Skybuck
Full Member
***
Offline Offline

Activity: 384
Merit: 110


View Profile
November 28, 2011, 04:38:55 AM
 #29

Nothing is stopping you from trying to make it coherent.

Let me/us know when you've figured out how to make it coherent ?!?
cbeast
Donator
Legendary
*
Offline Offline

Activity: 1736
Merit: 1006

Let's talk governance, lipstick, and pigs.


View Profile
November 28, 2011, 11:47:31 AM
 #30

Skybuck, Gavin is not saying you should trust anybody. Can you provide a link where Gavin talked about balancechain?

Quote
And I don't see a proposal that everybody check the ledger and reject blocks that contain invalid ledger hashes.


Any significantly advanced cryptocurrency is indistinguishable from Ponzi Tulips.
Skybuck
Full Member
***
Offline Offline

Activity: 384
Merit: 110


View Profile
November 28, 2011, 12:38:34 PM
 #31

The balance-blockchain (=ledger-blockchain ?) would work the same as the transaction-blockchain.

The concept of a blockchain can be applied to anything.

This includes all the blockchain-mechanisms for rejecting and accepting, etc.

This should not need to be re-explained.

Perhaps the proposal of the original/first poster is weak/insecure or worrisome.

My slightly different proposal is to make a separate balance-blockchain, which should be stronger... assuming the original/first proposal is rejected because of security concerns or an incomplete proposal.

I'd like to hear objections or security concerns against my proposal of a separate balance-blockchain.
cbeast
Donator
Legendary
*
Offline Offline

Activity: 1736
Merit: 1006

Let's talk governance, lipstick, and pigs.


View Profile
November 28, 2011, 12:50:05 PM
 #32

Quote
The balance-blockchain (=ledger-blockchain ?) would work the same as the transaction-blockchain.

The concept of a blockchain can be applied to anything.

This includes all the blockchain-mechanisms for rejecting and accepting, etc.

So why should we trust a balance-blockchain? Should we also have a balancechain for that, and then another for that?

Quote
This should not need to be re-explained.

Perhaps the proposal of the original/first poster is weak/insecure or worriesome.

My slightly different proposal is to make a seperate balance-blockchain which should be stronger... assuming the original/first proposal is rejected because of security concerns or incomplete proposal.

I'd like to hear objections or security concerns against my proposal the seperate balance-blockchain.


Humor me. Please explain.

Any significantly advanced cryptocurrency is indistinguishable from Ponzi Tulips.
Skybuck
Full Member
***
Offline Offline

Activity: 384
Merit: 110


View Profile
November 28, 2011, 03:46:29 PM
 #33

Quote
The balance-blockchain (=ledger-blockchain ?) would work the same as the transaction-blockchain.

The concept of a blockchain can be applied to anything.

This includes all the blockchain-mechanisms for rejecting and accepting, etc.

So why should we trust a balance-blockchain? Should we also have a balancechain for that, and then another for that?


The real question is: why not?

You can change the balance, but it will not be accepted by other clients, because your balance would be a mismatch.

The mismatch can be handled in two ways:

1. The traditional way: the longest chain wins, which is what bitcoin does.

2. The voting way: confirmations.

An attacker could create a fake balance chain, which could be longer, and broadcast it to you.

However, this should not happen as long as the majority of processing power is honest, since the majority ultimately produces the longer balance chain.

The exception is if you are disconnected from the real network and can only connect to a fake network; that is the same problem the transaction block chain has.

Other attacks could exist too.

The hashes in the balance chain also protect against modifications: make a single modification and you will have to find a hash collision, or redo an entire fake balance chain, which becomes computationally expensive. A balance chain might not even be necessary; only one balance sheet needs to be stored, the previous one. However, balance sheets could also function as checkpoints, though that is probably not necessary.

My suggestion was also to make the difficulty 1000x greater than the blockchain's, if the blockchain is allowed to be 1000 blocks ahead. Whether this leads to 1000x hash search times remains to be seen; maybe it will become too difficult, so this idea will have to be examined.

So the concept of a balance chain could be dismissed; instead a single balance sheet is stored by all clients.

It could be considered a "sliding balance sheet". It slides along the transactions over time.

However, if the concept of a balance chain is dismissed, then the race can no longer happen and nobody wins.

This becomes a problem: there is no winning selection criterion anymore, except perhaps the voting situation. Though that's a bit shaky.

Therefore the idea could be simply:

Store the hashes of the balance chain only.

The balance sheets themselves are thrown away, except the last one.

At least this makes it possible for people who have all the transactions, and know exactly when to calculate a balance and a balance hash, to check the chain for consistency.

This also preserves the proof-of-work idea.

An attacker would have to redo those balance chain hashes, which is computationally expensive.

It's even very interesting, since an attacker might not have the necessary data to make the balance chain.

Therefore the attacker is restricted to new data only, which gives him less attack surface. That would be a lazy attacker, though; it's better to assume determined attackers who have all the data.

By storing balance chain hashes only, the old transactions can be thrown away and the old balance sheets can be thrown away; what remains are the balance chain hashes, about 256 bits per hash, plus a recent balance sheet.
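A minimal sketch of this "keep only the hashes" variant, with std::hash standing in for a real cryptographic hash and made-up names throughout (BalanceSheet and BalanceChain are not part of any client):

Code:
#include <cstdint>
#include <functional>
#include <map>
#include <string>
#include <vector>

typedef std::map<std::string, int64_t> BalanceSheet;   // address -> satoshis

// Placeholder for a real cryptographic hash over a serialized sheet.
// std::map iterates in key order, so the serialization is deterministic.
static size_t hashSheet(const BalanceSheet &sheet)
{
    std::string blob;
    for (BalanceSheet::const_iterator it = sheet.begin(); it != sheet.end(); ++it)
        blob += it->first + ":" + std::to_string(it->second) + ";";
    return std::hash<std::string>()(blob);
}

// The retained data: one hash per balance block, plus the latest sheet.
struct BalanceChain {
    std::vector<size_t> sheetHashes;   // oldest first
    BalanceSheet        current;       // the only sheet actually kept

    // Close off the current sheet: record its hash, then keep sliding forward.
    void commitCurrentSheet() { sheetHashes.push_back(hashSheet(current)); }

    // A node that still has the old transactions can rebuild the sheet at a
    // given height and confirm it matches the recorded hash.
    bool matchesRecordedHash(size_t height, const BalanceSheet &rebuilt) const
    {
        return height < sheetHashes.size()
            && sheetHashes[height] == hashSheet(rebuilt);
    }
};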
Skybuck
Full Member
***
Offline Offline

Activity: 384
Merit: 110


View Profile
November 28, 2011, 03:53:03 PM
 #34

Perhaps some terms are confusing.

A visualization of the idea:

Balance Block 0 (Anchor hardcoded into application)
+-----------------------------+
| Genesis Balance Sheet Hash  |
+-----------------------------+

Balance Block 1
+-----------------------------+
| Previous Balance Sheet Hash |
+-----------------------------+

Balance Block 2
+-----------------------------+
| Previous Balance Sheet Hash |
+-----------------------------+

Balance Block 3
+-----------------------------+
| Previous Balance Sheet Hash |
+-----------------------------+

Balance Block 4
+-----------------------------+
| Previous Balance Sheet Hash |
+-----------------------------+

Current Balance Sheet
+-----------------+---------------+
| Bitcoin Address | Balance Value |
+-----------------+---------------+
| Bitcoin Address | Balance Value |
+-----------------+---------------+
| Bitcoin Address | Balance Value |
+-----------------+---------------+
| Bitcoin Address | Balance Value |
+-----------------+---------------+
| Bitcoin Address | Balance Value |
+-----------------+---------------+
| Bitcoin Address | Balance Value |
+-----------------+---------------+
| Bitcoin Address | Balance Value |
+-----------------+---------------+
| Bitcoin Address | Balance Value |
+-----------------+---------------+
| Bitcoin Address | Balance Value |
+-----------------+---------------+
^
All known bitcoin addresses, possibly only those with balance value above zero.

Clients only need the most recent balance block to participate. Older balance blocks can be used to check historic data.

(The transaction block chain keeps existing to stabilize the balance sheet, which will be considered stable after 1000 or 2000 blocks. The transaction block chain can later be thrown away up to the most recent/current balance sheet, or it can be stored for checking/verification purposes at the expense of more hard disk usage; that is up to the user/application to decide.)
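A rough sketch of the sliding step described above, assuming simplified transfer records rather than real bitcoin transactions (SimpleTransfer, slideBalanceSheet and the 1000-block maturity are illustrative, not part of any client):

Code:
#include <cstdint>
#include <deque>
#include <map>
#include <string>

typedef std::map<std::string, int64_t> BalanceSheet;   // address -> satoshis

struct SimpleTransfer { std::string from, to; int64_t amount; };
struct SimpleBlock    { std::deque<SimpleTransfer> transfers; };

// Fold blocks that are at least `maturity` blocks deep into the sheet and
// drop them, so only recent (still contestable) blocks need to stay on disk.
void slideBalanceSheet(BalanceSheet &sheet, std::deque<SimpleBlock> &chainTail,
                       size_t maturity = 1000)
{
    while (chainTail.size() > maturity) {
        const SimpleBlock &oldest = chainTail.front();
        for (size_t i = 0; i < oldest.transfers.size(); ++i) {
            const SimpleTransfer &t = oldest.transfers[i];
            sheet[t.from] -= t.amount;   // a coinbase-style issuance would only credit `to`
            sheet[t.to]   += t.amount;
        }
        chainTail.pop_front();           // the old transactions can now be discarded
    }
}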
piuk (OP)
Hero Member
*****
expert
Offline Offline

Activity: 910
Merit: 1005



View Profile WWW
November 30, 2011, 04:17:17 PM
 #35

Quote
I agree that there are better optimisations that can be implemented first but I still think it might be worth discussing for the future. Can you see any obvious flaws in the proposal Gavin?
Ummm, yes.

It seems to me miners will have an incentive to lie about the transaction ledger, and put fake ledger hashes in their blocks. Either so their transactions might be considered 'unspent' by unsuspecting nodes that trust them, or so that other miners that don't have the full block chain create invalid blocks (eliminate the competition!)

And I don't see a proposal that everybody check the ledger and reject blocks that contain invalid ledger hashes.

I also don't see what the ledger hash accomplishes.  If you're going to trust some other node's version of unspent-transaction-reality, then you could just ask "send me the ledger state before (or after) the block with THIS block hash".

But if you're going to trust one or more nodes anyway... then it seems to me sending an ever-increasing-in-size ledger is a bad way to get scalable. If size-of-full-blockchain becomes a problem before the mining pools and big exchanges/merchants/transactions processors all have transaction processing clusters with a terabyte of ram and petabyte hard drive array then I think extending the protocol to make it easy to request all transactions involved in a given Merkle branch will probably be the way to go.

But before then I expect the bitcoin network will look very different from the way it looks today, and I expect there will be several different solutions for how to scale up. If (when!) Bitcoin gets that successful, there will be serious money hiring the same smart people who figured out how to scale up PayPal and Visa.

It would be much simpler if enforcing a correct ledger hash could be done, but it would make adoption much more difficult as it would require 50% of all miners to upgrade at once.

A solution to this problem would be to have miners include the hash of the previous ledger and the block height they believe to be correct. During the initial blockchain download the client would continue to download blocks until at least one ledger can be agreed upon. Some kind of decaying time window would need to be implemented so that if the majority of hashing power agrees on one "ledger chain", a minority of clients cannot force a full blockchain download.

You don't have to trust any one node's version of unspent reality; you have to trust 50% of the hashing power's version of unspent reality - something which you kind of have to do anyway. Although the consequences of a malicious entity gaining 50% of the hashing power would be more severe (they would need to hold 50% for a long time).
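A minimal sketch of that agreement rule, simplified to one vote per recent block rather than weighting by actual hashing power (LedgerCommitment and findAgreedLedger are hypothetical names):

Code:
#include <cstdint>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Hypothetical per-block commitment: the ledger hash and height a miner endorses.
struct LedgerCommitment { uint32_t ledgerHeight; std::string ledgerHash; };

// Scan the most recent block commitments and return the ledger that a strict
// majority of them agree on, if any; otherwise the client keeps downloading
// further back until some ledger can be agreed upon.
bool findAgreedLedger(const std::vector<LedgerCommitment> &recent,
                      LedgerCommitment &agreed)
{
    std::map<std::pair<uint32_t, std::string>, size_t> votes;
    for (size_t i = 0; i < recent.size(); ++i) {
        std::pair<uint32_t, std::string> key(recent[i].ledgerHeight,
                                             recent[i].ledgerHash);
        if (++votes[key] * 2 > recent.size()) {      // strict majority of the window
            agreed.ledgerHeight = key.first;
            agreed.ledgerHash   = key.second;
            return true;
        }
    }
    return false;
}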

Skybuck: I'm not sure we're on the same page. I wasn't really talking about a separate balance chain. The problem with a chain is that it's a chain, so it grows indefinitely; thus block validation time, disk space, and bandwidth requirements all grow indefinitely. At some point you should be able to look into the past, say these transactions are no longer under contention, and archive/compress them.

Gavin Andresen
Legendary
*
qt
Offline Offline

Activity: 1652
Merit: 2216


Chief Scientist


View Profile WWW
November 30, 2011, 04:29:59 PM
 #36

Quote
A solution this problem would be to have miners include the hash of previous ledger and block height they believe to be correct.
So go implement it and see how well it works.

Create a little HTTP-based protocol with, oh, three methods:
  • You send a block height or block hash, you get back a ledger hash.
  • You send a ledger hash, you get back the full ledger or an "I have no idea what you're talking about, that's not a valid ledger hash".
  • You send two ledger hashes, you get back the changes from one to the other or an "I have no idea what you're talking about, one of those isn't a valid ledger hash".

Then you just need to convince a bunch of semi-trustworthy people to run "ledger servers." And maybe have some mechanism for reporting when a ledger server has a bug or 'goes rogue' and reports a ledger hash that is different from everybody else.

Oh, and you might need to solve the incentive problem of "why would I run a ledger server if I'm not paid for it" (and maybe write a bunch of denial-of-service-prevention code in case some jerk with a botnet decides to ask for 10,000 full ledgers from 10,000 different IP addresses).
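For illustration only, the three queries could start out as a plain in-memory interface before any HTTP plumbing or denial-of-service protection is added (LedgerServer and its members are made-up names, not an existing API):

Code:
#include <map>
#include <string>

// An in-memory stand-in for the three queries sketched above; a real server
// would expose these behind HTTP endpoints.
struct LedgerServer {
    std::map<std::string, std::string> hashByBlock;   // block height/hash -> ledger hash
    std::map<std::string, std::string> ledgerByHash;  // ledger hash -> serialized ledger

    // 1) block height or block hash -> ledger hash (empty string if unknown).
    std::string ledgerHashFor(const std::string &blockId) const {
        std::map<std::string, std::string>::const_iterator it = hashByBlock.find(blockId);
        return it == hashByBlock.end() ? std::string() : it->second;
    }
    // 2) ledger hash -> full ledger, or a "not a valid ledger hash" reply.
    std::string fullLedger(const std::string &ledgerHash) const {
        std::map<std::string, std::string>::const_iterator it = ledgerByHash.find(ledgerHash);
        return it == ledgerByHash.end() ? std::string("not a valid ledger hash") : it->second;
    }
    // 3) two ledger hashes -> the delta between them; here only the validity
    //    check is shown, computing the diff itself is left out of the sketch.
    bool knowsBoth(const std::string &a, const std::string &b) const {
        return ledgerByHash.count(a) && ledgerByHash.count(b);
    }
};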

How often do you get the chance to work on a potentially world-changing project?
Maged
Legendary
*
Offline Offline

Activity: 1204
Merit: 1015


View Profile
November 30, 2011, 05:37:32 PM
 #37

The most obvious way to prevent this from being exploited is by incorporating it directly into the protocol as a requirement for block acceptance. Of course, this would initially require 50% of the miners in order to work, but that's standard procedure for adding new requirements to the protocol. It also might only need to be included in certain blocks, such as every difficulty change. However, that might make miners lazy and just not check the hash when it comes up, so it'd have to be included fairly commonly - likely every block. As for how it'd be checked, that's simple: it'd be independently calculated by everyone.

That being said, I don't really see what this would accomplish. The difference between this and block pruning would likely be small. People bootstrapping with this ledger would still need to at least check the block headers all the way back to the last checkpoint in order to verify that they are actually on the main chain and not a fake one that is only a few thousand blocks long and isn't actually attached to the main chain. The savings just aren't enough to justify this large overhead.

If you want to prove me wrong, go and calculate the savings that this would currently provide, along with the savings of block pruning. Bonus points if you model out, simulate, and calculate this on projected future growth.

There are only two major benefits I see to this ledger proposal:
1) A client doesn't need to hold on to the Merkle branch of all their unspent transactions to ensure that they will always be able to spend their coins, even if the miners don't (easily) have access to old Merkle trees.
2) It is not possible for clients to provide over-pruned Merkle trees (where they prune even unspent transactions) to new clients attempting to bootstrap.

On the plus side, this could be used as a low-certainty check (where this hash isn't required to be in blocks, nor is it required to be checked) so that new clients know that they have all of the unspent transactions and didn't receive over-pruned blocks. But at that point, it might just be easier to request this hash outside of the chain from trusted peers/multiple untrusted peers.
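A rough sketch of that out-of-band cross-check: ask several peers for the ledger hash at a given point and accept it only when they all agree (illustrative names, no real networking shown):

Code:
#include <set>
#include <string>
#include <vector>

// Compare ledger hashes reported by several peers for the same block height.
// Accept the hash only if every queried peer reports the same value; any
// disagreement means the ledger has to be rebuilt or fetched the slow way.
bool crossCheckLedgerHash(const std::vector<std::string> &reportedHashes,
                          std::string &agreedHash)
{
    std::set<std::string> distinct(reportedHashes.begin(), reportedHashes.end());
    if (distinct.size() != 1)
        return false;                 // peers disagree: the low-certainty check fails
    agreedHash = *distinct.begin();
    return true;
}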

piuk (OP)
Hero Member
*****
expert
Offline Offline

Activity: 910
Merit: 1005



View Profile WWW
November 30, 2011, 08:45:52 PM
Last edit: November 30, 2011, 11:27:42 PM by piuk
 #38

Quote
If you want to prove me wrong, go and calculate the savings that this would currently provide, along with the savings of block pruning. Bonus points if you model out, simulate, and calculate this on projected future growth.

I wrote some test code to produce a vector of all unspent outputs (basically the ledger).

Quote
#include <list>
#include <set>
#include <utility>
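// linked_map: a small helper container that looks values up by key (via a
// std::set of list iterators ordered by key) while preserving insertion order
// (via the std::list that actually owns the key/value pairs).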
template < typename KeyType, typename MappedType,
typename Comp = std::less< KeyType > >
struct linked_map {
    typedef KeyType key_type;
    typedef MappedType mapped_type;
    typedef std::pair< const key_type, mapped_type > value_type;
private:
    typedef std::list< value_type >      list_type;
    typedef typename list_type::iterator list_iterator;
    struct compare_keys {
        Comp the_order;
        compare_keys ( Comp o )
        : the_order ( o )
        {}
        bool operator() ( list_iterator lhs, list_iterator rhs ) const {
            return ( the_order( lhs->first, rhs->first ) );
        }
    };
    typedef std::set< list_iterator, compare_keys > set_type;
    typedef typename set_type::iterator             set_iterator;
    list_type the_list;
    set_type  the_set;
public:
    typedef list_iterator iterator;
    typedef typename set_type::size_type size_type;
    linked_map ( Comp o = Comp() )
    : the_list()
    , the_set ( compare_keys( o ) )
    {}
    iterator find ( key_type const & key ) {
        value_type dummy_value ( key, mapped_type() );
        list_type  dummy_list;
        dummy_list.push_back( dummy_value );
        set_iterator where = the_set.find( dummy_list.begin() );
        if ( where == the_set.end() ) {
            return ( the_list.end() );
        }
        return ( *where );
    }
    iterator insert ( value_type const & value ) {
        list_type dummy;
        dummy.push_back( value );
        set_iterator where = the_set.find( dummy.begin() );
        if ( where == the_set.end() ) {
            the_list.push_back( value );
            list_iterator pos = the_list.end();
            -- pos;
            the_set.insert( pos );
            return ( pos );
        } else {
            (*where)->second = value.second;
            return ( *where );
        }
    }
    iterator erase ( iterator where ) {
        the_set.erase( where );
        return ( the_list.erase( where ) );
    }
    iterator begin ( void ) {
        return ( the_list.begin() );
    }
    iterator end ( void ) {
        return ( the_list.end() );
    }
    size_type size ( void ) const {
        return ( the_set.size() );
    }
    mapped_type & operator[] ( key_type const & key ) {
        iterator pos = insert( std::make_pair( key, mapped_type() ) );
        return ( pos->second );
    }
};

void buildTestLedger() {
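    // Replay every transaction in block-height order, maintaining the set of
    // currently unspent outputs, then compare the size of that "ledger" with
    // the size of the raw block data. Assumes it is compiled inside the 2011
    // Bitcoin client, where CBlock, CBlockIndex and mapBlockIndex live.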
    
    cout << "Building Test Ledger" << endl;
    
    float start = time(NULL);

    vector<pair<int, CBlockIndex*> > vSortedByHeight;
    vSortedByHeight.reserve(mapBlockIndex.size());
    BOOST_FOREACH(const PAIRTYPE(uint256, CBlockIndex*)& item, mapBlockIndex)
    {
        CBlockIndex* pindex = item.second;
        vSortedByHeight.push_back(make_pair(pindex->nHeight, pindex));
    }
    sort(vSortedByHeight.begin(), vSortedByHeight.end());
    
    linked_map< COutPoint, CTxOut > unspent;
    
    long originalSize = 0;
    
    BOOST_FOREACH(const PAIRTYPE(int, CBlockIndex*)& item, vSortedByHeight)
    {
        CBlockIndex* pindex = item.second;

        CBlock block;
        
        block.ReadFromDisk(pindex);
      
        originalSize += GetSerializeSize(block, SER_DISK);
        
        BOOST_FOREACH(CTransaction & tx, block.vtx) {
            
            //Check each input and remove spent
            BOOST_FOREACH(CTxIn & in, tx.vin) {
               linked_map< COutPoint, CTxOut >::iterator it = unspent.find(in.prevout);
                
                if (it != unspent.end()) {
                    unspent.erase(it);
                }
            }
            
            int ii = 0;
            
            //Add each output to unspent
            BOOST_FOREACH(CTxOut & out, tx.vout) {
                COutPoint point(tx.GetHash(), ii);
                
                linked_map<COutPoint, CTxOut>::iterator it = unspent.insert(make_pair(point, out));
                
                ++ii;
            }
        }
    }
        
    //Here you would write the ledger to disk
    
    float end = time(NULL);
    
    long ledgerSize = unspent.size() * (sizeof(COutPoint) + sizeof(CTxOut));

    cout << "Ledger generation took " << end - start << "s" << endl;

    cout << "Original disk size " << originalSize / 1024.f / 1024.f << "MB" << endl;

    cout << "Number of unspent outputs " << unspent.size() <<  " Ledger size " << ledgerSize / 1024.f / 1024.f << "MB" << endl;

    cout << (ledgerSize / (double)originalSize) * 100 << "% of the original size" << endl;
}


Sample output:

Quote
Building Test Ledger
Ledger generation took 128s
Original disk size 748.074MB
Number of unspent outputs 1212159 Ledger size 78.6083MB
10.5081% of the original size

Plus you would need to hold a certain number of recent blocks, at least until all chain forks can be resolved, or 2016 blocks to calculate the difficulty target. You're probably looking at an 85% reduction in disk size and the same in block validation time. I wouldn't even know where to begin calculating an estimate for Merkle pruning; would you have to download the full Merkle trees?

Edit: Come to think of it, assuming the requirement of a correct balance ledger was enforced, you don't even need the latest full blocks. You could take the latest balance ledger from any chain head and download only the block headers to verify the proof of work. That way you're looking at more than an 85% reduction in chain size.

ByteCoin
Sr. Member
****
expert
Offline Offline

Activity: 416
Merit: 277


View Profile
December 01, 2011, 02:30:15 AM
 #39

Quote
Building Test Ledger
...
10.5081% of the original size

Splendid! Would you be so kind as to rerun the calculations while discarding TxOuts with small BTC values, please?
Let's say anything below 0.001 can be discarded.
If you're feeling keen could you please plot a cumulative distribution of TxOut value? Possibly on a log value scale?
I have a strong feeling that the vast majority of the ledger will be taken up with very small TxOuts, many of them vanity addresses.
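A rough sketch of the kind of tally being asked for here, operating on a plain list of unspent output values in satoshis rather than on the CTxOut objects in the test code above (the 0.001 BTC threshold equals 100000 satoshis; the names and the order-of-magnitude bucketing are illustrative):

Code:
#include <cstdint>
#include <iostream>
#include <map>
#include <vector>

// Illustrative threshold: 0.001 BTC expressed in satoshis.
static const int64_t DUST_THRESHOLD = 100000;

// Given the values of all unspent outputs (in satoshis), report how many fall
// below the threshold and bucket the rest by order of magnitude, giving a
// crude log-scale distribution of TxOut values.
void dustReport(const std::vector<int64_t> &unspentValues)
{
    size_t dropped = 0;
    std::map<int, size_t> decadeBuckets;          // floor(log10(value)) -> count
    for (size_t i = 0; i < unspentValues.size(); ++i) {
        int64_t v = unspentValues[i];
        if (v < DUST_THRESHOLD) { ++dropped; continue; }
        int decade = 0;
        for (int64_t x = v; x >= 10; x /= 10) ++decade;
        ++decadeBuckets[decade];
    }
    std::cout << dropped << " of " << unspentValues.size()
              << " unspent outputs fall below 0.001 BTC\n";
    for (std::map<int, size_t>::iterator it = decadeBuckets.begin();
         it != decadeBuckets.end(); ++it)
        std::cout << "~1e" << it->first << " satoshi bucket: " << it->second << "\n";
}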

The fact that a ledger system seems to result in reclaiming 90% of the disc space is encouraging. Now that many bitcoins have been exchanged, the space-saving results of a ledger system are more obvious than when I first proposed it over a year ago.

The fees system could be reworked to provide incentives for keeping the ledger small rather than the blockchain. Some transactions would result in ledger size decreasing and could perhaps be encouraged by refunding some previously paid fee.

As Gavin implies, all nodes would have to verify ledger integrity in the same way that block integrity is verified. If this were implemented, a couple of extra optimizations should accompany it: the transaction signature shouldn't contribute to the hash, and the signature should not be recorded in the block or ledger. This would result in further considerable space savings and pave the way for enabling transaction replacement and other advanced contract features.

Note that if the ledger is implemented as a hash tree and incoming transactions are incorporated into the tree according to a suitable algorithm, then when a new block is found each client can recalculate the ledger independently, and hence the whole ledger need only be downloaded once.

ByteCoin
finway
Hero Member
*****
Offline Offline

Activity: 714
Merit: 500


View Profile
December 01, 2011, 02:52:27 AM
 #40

Scalability is a problem. Any proposal should be welcome.  Embarrassed
