Bitcoin Forum
Author Topic: Ultimate blockchain compression w/ trust-free lite nodes  (Read 87947 times)
d'aniel (Sr. Member)
June 10, 2013, 01:16:47 AM (last edit: June 10, 2013, 02:35:24 AM by d'aniel)
#361

Quote
However SCIP is probably years away from getting to the point where we could use it in the Bitcoin core. One big issue is that a SCIP proof for a validated merkle tree has to be recursive: you need to create a SCIP proof that you ran a program that correctly validated a SCIP proof. Creating those recursive proofs is extremely expensive; gmaxwell can say more, but his rough estimate was that we'd have to hire a big fraction of Amazon EC2 and assemble a cluster of machines with hundreds of terabytes of RAM. But math gets better over time, so there is hope.
The talk left me with the impression that their non-recursive SCIP proofs are inexpensive, so I wonder if recursion could be avoided.  For example, if the full state were encoded locally in pairs of adjacent blocks  - as the proposal in this thread would achieve - then a SCIP proof validating the next block could simply assume validity of the two prior blocks, which is fine if the node verifying this proof has verified the SCIP proofs of all preceding blocks as well.  Once blocks become individually unwieldy, perhaps verifying each block would simply take a few extra SCIP proof validations - with SCIP proof authors tackling the transaction and UTXO patricia/radix tree updates by branches.  Could this approach properly remove the need to nest SCIP proofs inside of SCIP proofs, or is there something obvious I'm missing?

Edit: I suppose this would mean that Alice would be sending a slightly different program for Bob to run to produce each SCIP proof in each block?   I guess these programs would have to be a protocol standard, since 'Alice' is really everybody, and would differ only by the hash of the previous block?  All of this is very vague and magical to me still...
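
In code, the chained check might look something like this (a rough Python sketch; verify_scip_proof, TRANSITION_PROGRAM and the block field names are hypothetical stand-ins, not a real SCIP API):

Code:
TRANSITION_PROGRAM = "validate-next-block"  # placeholder identifier

def verify_scip_proof(proof, program, public_inputs):
    # Stand-in: a real implementation would run the SCIP verifier here.
    return True

def verify_chain(blocks):
    """blocks: list of dicts with 'hash', 'utxo_root' and 'proof' keys.

    Each proof asserts "given blocks i-2 and i-1, which together encode
    the full local state, block i's transition to utxo_root is valid".
    No proof ever verifies another proof; this loop checks them all.
    """
    for i in range(2, len(blocks)):
        inputs = (blocks[i - 2]['hash'], blocks[i - 1]['hash'],
                  blocks[i]['utxo_root'])
        if not verify_scip_proof(blocks[i]['proof'], TRANSITION_PROGRAM, inputs):
            return False
    return True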
d'aniel (Sr. Member)
June 10, 2013, 03:29:05 AM
#362

Quote
A non-SCIP approach that we can do now would be to use fraud detection with punishment. Peers assemble some part of the merkle tree and digitally sign that they have done so honestly with an identity. (A commutative accumulator is another possibility.) The tree is probabilistically validated, and any detected fraud is punished somehow, perhaps by destroying a fidelity bond that the peer holds.  You still need some level of global consensus so the act of destroying a bond is meaningful of course, and there are a lot of tricky details to get right, but the rough idea is plausible with the cryptography available to us now.
I do like this approach as well, and hadn't thought to use fidelity bonds for expensive punishment of misbehaving anonymous 'miner helpers'.  Though unlike a SCIP approach, it is susceptible to attacks on the p2p network: surrounding groups of nodes and blocking the relay of fraud proofs to them.  Not sure how important this is in practice though.
Peter Todd (Legendary)
June 11, 2013, 11:43:05 PM
#363

Quote
The talk left me with the impression that their non-recursive SCIP proofs are inexpensive, so I wonder if recursion could be avoided.  For example, if the full state were encoded locally in pairs of adjacent blocks  - as the proposal in this thread would achieve - then a SCIP proof validating the next block could simply assume validity of the two prior blocks, which is fine if the node verifying this proof has verified the SCIP proofs of all preceding blocks as well.  Once blocks become individually unwieldy, perhaps verifying each block would simply take a few extra SCIP proof validations - with SCIP proof authors tackling the transaction and UTXO patricia/radix tree updates by branches.  Could this approach properly remove the need to nest SCIP proofs inside of SCIP proofs, or is there something obvious I'm missing?

If you are talking about pairs of adjacent blocks, all you've achieved is making validating the chain possibly a bit cheaper; those creating the blocks still need to have the full UTXO set.


Going back to the merkle tree thing, it occurs to me that achieving synchronization is really difficult. For instance, if the lowest level of the tree is indexed by tx hash, you've achieved nothing because there is no local UTXO set consensus.

If the lowest level of the tree is indexed by txout hash, H(txid:vout), you now have the problem that you in effect have a set of merge-mined alt-coins. Suppose I have a txout whose hash starts with A and I want to spend it in a transaction that would result in a txout with a hash starting with B.

So I create a transaction spending that txout in chain A, destroying the coin in that chain, and use the merkle path to "prove" to chain B that the transaction happened and chain B can create a coin out of thin air. (note how the transaction will have to contain a nothing-up-my-sleeve nonce, likely a blockhash from chain B, to ensure you can't re-use the txout)

This is all well and good, but a 51% attack on just chain A, which overall might be a 5% attack, is enough to create coins out of thin air, because chain B isn't actually able to validate anything other than that there was a valid merkle path leading back to the chain A blockheader. It's not a problem with recursive SCIP because there is proof the rules were followed, but without it you're screwed - at best you can probabilistically try to audit things, which just means an attacker gets lucky periodically. You can try to reverse the transaction after the fact, but that has serious issues too - how far back do you go?
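
A toy sketch of what chain B can actually check here (Python, purely illustrative - the point being that the merkle-path check below is the only validation available to it):

Code:
import hashlib

def H(data):
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_path(leaf, path, root):
    """path: list of (sibling_hash, sibling_is_right) pairs, leaf to root."""
    h = H(leaf)
    for sibling, sibling_is_right in path:
        h = H(h + sibling) if sibling_is_right else H(sibling + h)
    return h == root

def chain_b_accepts_move(destroying_tx, path, chain_a_merkle_root):
    # All chain B can check is that the destroying tx is committed under
    # some chain A header -- not that chain A's rules were followed.  So
    # a 51% attack on chain A alone (perhaps 5% of total hashpower) can
    # fabricate exactly this proof and mint coins on chain B.
    return verify_merkle_path(destroying_tx, path, chain_a_merkle_root)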

Achieving consensus without actually having a consensus isn't easy...

Quote
I do like this approach as well, and hadn't thought to use fidelity bonds for expensive punishment of misbehaving anonymous 'miner helpers'.  Though unlike a SCIP approach, it is susceptible to attacks on the p2p network: surrounding groups of nodes and blocking the relay of fraud proofs to them.  Not sure how important this is in practice though.

Bitcoin in general assumes a jam-proof P2P network is available.

An important issue is that determining how to value the fidelity bonds would be difficult; at any time the value of the bond must be more than the return on committing fraud. That's easy to do in the case of a bank with deposits denominated in BTC, much harder to reason about when you're talking about keeping an accurate ledger.

Peter Todd (Legendary)
June 12, 2013, 12:21:07 AM
#364

Here's a rough sketch of another concept:

Suppose you have 2*k blockchains where each blockheader is actually the header of two blocks, that is, chain n mod 2*k and chain (n+1) mod 2*k. In English: picture a ring of blockchains in which miners "mine" pairs of adjacent chains.

The rule is that the difference in height between any adjacent pair of chains can be no more than 1 block, and finding a valid PoW creates a pair of blocks with an equal reward in each chain. Because the miners get the equal reward they have an incentive to honestly mine both chains, or they'd produce an invalid block and lose that reward. To move coins between one chain and its neighbor, create a special transaction doing so, which will be validated fully because a miner will have full UTXO set knowledge for both chains. Of course, this means it might take k steps to actually get a coin moved from one side of the ring to the other, but the movement will be fully validated the whole way around.
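
A toy model of that rule (Python, purely illustrative):

Code:
def ring_heights_valid(heights):
    n = len(heights)  # n == 2*k chains arranged in a ring
    return all(abs(heights[i] - heights[(i + 1) % n]) <= 1 for i in range(n))

def mine_pair(heights, i):
    """One valid PoW extends chains i and i+1 together, one reward each."""
    j = (i + 1) % len(heights)
    heights[i] += 1
    heights[j] += 1
    assert ring_heights_valid(heights), "pair mining must not break the rule"

heights = [0] * 8  # k = 4, so 2*k = 8 chains
mine_pair(heights, 3)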

Again, what used to be a 51% attack can now become something much weaker. On the other hand, because the data to store the PoWs and block headers (but not full blocks) is small, PoWs for one pair of chains can include the hashes of every chain, and the system can simply treat that extra PoW as an additional hurdle for an attacker to rewrite any individual chain. What a 51% attack on a pair of chains involves is actually managing to get into a situation where you are the only person bothering to mine a particular pair of chains - hopefully a much higher barrier if people pick the pair of chains they validate randomly.

The ring is just a nice example; in reality I think it'd be good enough to just have n chains and let miners pick pairs of chains to mine. The number of pairs that needs to be mined for a fully interconnected set is n(n-1)/2 ~= n^2/2. The big advantage of a fully connected set is that the slicing can happen on a per-txout-hash basis, i.e. a transaction spending a txout starting with A and creating a txout starting with B can be mined by anyone mining both the A and B chains, though note how you'll wind up paying fees for both, and with more outputs you can wind up with a partially confirmed transaction. Also note how a miner with only the UTXO set for the A chain can safely mine that transaction by simply creating a 1-transaction block in the B chain... ugly. You probably need proof-of-UTXO-set-possession on top of proof-of-work to keep the incentives correct.

We've created weird incentives for hashers: moment to moment the reward (fees) for mining each pair will be determined by the transactions required to bridge that pair, so pools will pop up like crazy and your mining software will pool-hop automatically - another perverse result in a system designed to aid decentralization, although probably a manageable one with probabilistic auditing.

Maybe the idea works, but I'll have to think very carefully about it... there's probably a whole set of attacks and perverse incentives lurking in the shadows...

maaku (Legendary)
June 12, 2013, 12:22:14 AM
#365

Quote
If you are talking about pairs of adjacent blocks, all you've achieved is making validating the chain possibly a bit cheaper,

*A lot* cheaper. But anyway:

Quote
those creating the blocks still need to have the full UTXO set.
To create a transaction you only need access to your own inputs. Why would you need the full UTXO set?

Quote
Going back to the merkle tree thing, it occurs to me that achieving synchronization is really difficult. For instance, if the lowest level of the tree is indexed by tx hash, you've achieved nothing because there is no local UTXO set consensus.
Can you explain this?

etotheipi (OP, Legendary; Core Armory Developer)
June 12, 2013, 03:10:05 AM
#366

On the synchronization question, I'll point to one of my previous posts.  It was a thought experiment to figure out how to download the Reiner-tree between nodes, given that the download will take a while and you'll get branch snapshots at different block heights:

https://bitcointalk.org/index.php?topic=88208.msg1408410#msg1408410

I just wanted to make sure it wasn't something to be concerned about (like all sorts of hidden complexity).  It looks like it's workable.

d'aniel (Sr. Member)
June 16, 2013, 01:20:13 AM (last edit: June 16, 2013, 05:24:37 AM by d'aniel)
#367

Regarding having nested subtries for coins with the same scripts, I wonder if it's such a good idea to complicate the design like this in order to accommodate address reuse?  Address reuse is discouraged for privacy and security reasons, and will become increasingly unnecessary with the payment protocol and deterministic wallets.

Also, was there a verdict on the 2-way (bitwise) trie vs. 256-way + Merkle trees in each node?  I've been thinking lately about sharding block creation/verification, and am noticing the advantages of the bitwise trie since its updates require a much more localized/smaller set of data.
maaku (Legendary)
June 16, 2013, 02:57:19 AM
#368

I've pushed an in-memory hybrid PATRICIA-Briandais tree implementation to github:

https://github.com/maaku/utxo-index

I may experiment with the internal structure of this tree (for example: different radix sizes, script vs hash(script) as key, storing extra information per node). 2-way tries probably involve way too much overhead, but I think a convincing argument could be made for 16-way tries (two levels per byte). Once I get a benchmark runner written we can get some empirical evidence on this.

Having sub-trees isn't so much about address reuse as it is that two different keys are needed: the key is properly (script, txid:n). In terms of implementation difficulty I don't think it's actually that much more complicated. But again, we can empirically determine this.
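
For illustration, here is how the key-to-symbol split differs across those radix choices (a Python sketch, not code from the repo above):

Code:
def key_symbols(key: bytes, radix: int):
    if radix == 2:    # bitwise trie: one bit per level
        return [(b >> (7 - i)) & 1 for b in key for i in range(8)]
    if radix == 16:   # one hex nibble per level, two levels per byte
        return [n for b in key for n in (b >> 4, b & 0x0F)]
    if radix == 256:  # one whole byte per level
        return list(key)
    raise ValueError("unsupported radix")

# A 32-byte hash key becomes 256, 64 or 32 trie symbols respectively:
assert len(key_symbols(bytes(32), 2)) == 256
assert len(key_symbols(bytes(32), 16)) == 64
assert len(key_symbols(bytes(32), 256)) == 32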

d'aniel (Sr. Member)
June 16, 2013, 04:57:49 PM
#369

Quote
I've pushed an in-memory hybrid PATRICIA-Briandais tree implementation to github:

https://github.com/maaku/utxo-index
Cool!

On second thought, I don't think the radix size really matters too much for sharding the node.  The choice of keying OTOH...
d'aniel (Sr. Member)
June 16, 2013, 04:58:51 PM (last edit: June 17, 2013, 04:21:42 AM by d'aniel)
#370

@retep, here's roughly how I imagine sharding a node could be done without diluting hashing power across multiple separate chains (that sounds terrible!):

First I'll assume we include in the block headers the digest of a utxo tree keyed by (txid:n, script) instead of by (script, txid:n), as this will turn out to be much more natural, for this purpose at least.  Second, I'll assume the tx digest is created from the authenticated prefix tree of their txids, which will also turn out to be much more natural.  (Last minute thought: doesn't the tx ordering matter in the usual tx Merkle tree, i.e. earlier txs can't spend TxOuts created by later txs?  Or can it just be assumed that the block is valid if there exists some valid ordering which is up to the verifier to construct?)  The radix size turns out not to matter, but let's call it k.

Distributed block construction

Division of labor is as follows: we have a coordinator who directs the efforts of N well-mirrored branch curators, who separately update each of the utxo tree branches below level log_k(N) and process subblocks of any number of transactions.

A branch curator downloads the incoming txs whose txids lie in his particular branch.  Notice that due to our convenient choice of keying, all of his newly created TxOuts will lie in his own branch.  For each TxIn in a given tx, he needs to download the corresponding TxOut from his relevant well-mirrored counterparts. Note that TxOuts will always be uniquely identifiable with only a few bytes, even for extremely large utxo sets. Also, having to download the TxOuts for the corresponding TxIns isn't typically that much extra data, relatively speaking - ~40 bytes/corresponding TxOut, compared to ~500 bytes for the average tx having 2-3 TxIns.  With just these TxOuts, he can verify that his txs are self-consistent, but cannot know whether any given TxOut has already been spent.

This is where the coordinator comes into play.  He cycles through the N branches, and for each branch, nominates one of the curator mirrors that wishes to submit a subblock.  This branch curator then gathers a bunch of self-consistent txs, and compresses the few byte ids of their TxIns into a prefix tree.  He sends his respective counterparts - or rather, one of their mirrors who are up to date with the previous subblock - the appropriate branches, and they send back subbranches of those that are invalid with respect to the previous subblock.  Note that this communication is cheap - a few bytes per tx.  He then removes the invalid txs from his bunch, informs his counterparts of the TxIns that remain so they can delete the corresponding utxos from their respective utxo tree branches, deletes those relevant to him, inserts all of his newly created TxOuts into his utxo tree branch, and builds his tx tree.  He submits his tx and utxo tree root hashes to the coordinator, who also gathers the other branch curators' updated utxo tree root hashes.  This data is used to compute the full tx and utxo tree root hashes, which are then finally submitted to miners.

When the coordinator has cycled through all N branches, he goes back to the first who we note can perform very efficient updates to his existing tx tree.
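
As a toy illustration of the assignment step (Python; the branch count and names are made up):

Code:
import hashlib

N_BRANCHES = 16  # e.g. all branches below one level of a radix-16 trie

def txid(raw_tx: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()

def branch_of(txid_: bytes) -> int:
    return txid_[0] >> 4  # top nibble of the txid picks the curator

def assign_to_curators(raw_txs):
    # Because the utxo tree is keyed by (txid:n, script), every output a
    # tx creates lands in the same curator's branch as the tx itself.
    branches = {i: [] for i in range(N_BRANCHES)}
    for raw_tx in raw_txs:
        branches[branch_of(txid(raw_tx))].append(raw_tx)
    return branches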

Some notes:

  • Mutual trust between all parties was assumed in lots of ways, but this could be weakened using a fraud detection and punishments scheme - ingredients being e.g. authenticated sum trees, fidelity bonds, and lots of eyes to audit each step.  Trusted hardware or SCIP proofs at each step would be the ideal future tech for trust-free cooperation.
  • The job of the coordinator is cheap and easy.  The branch curators could all simultaneously replicate all of its functions, except nominating subblock submissions.  For that they'd need a consensus-forming scheme.  Perhaps having miners include in their coinbase a digest of their preferred next several subblock nominees, and broadcast sub-difficulty PoWs, would be a good alternative.
  • Subblock nominees could be selected by largest estimated total fee, or estimated total fee / total size of txs, or some more complicated metric that takes into account changes to the utxo set size.
  • Revision requests for a chain of subblocks could be managed such that the whole chain will be valid when each of the subblocks comes back revised, thus speeding up the rate at which new blocks can be added to the chain.
  • Nearby branch curators will have no overlap in txs submitted, and very little overlap in utxos spent by them (only happens for double spends).

Distributed block verification

To catch up with other miners' blocks, branch curators would download the first few identifying bytes of the txids in their respective branches, to find which txs need to be included in the update.  The ones they don't have are downloaded.  Then in rounds, they would perform collective updates to the tx and utxo trees, so that txs that depend on previous txs will all eventually be covered.  If by the end the tx and utxo tree root hashes match those in the block header, the block is valid.
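
A sketch of those rounds (Python, illustrative; txs are plain dicts and the sharded utxo set is one dict for simplicity):

Code:
def apply_in_rounds(txs, utxo):
    """txs: dicts with 'inputs' (list of outpoints) and 'outputs'
    (dict mapping (txid, n) -> txout)."""
    pending = list(txs)
    while pending:
        applied, still_pending = [], []
        for tx in pending:
            if all(op in utxo for op in tx['inputs']):
                for op in tx['inputs']:
                    del utxo[op]
                utxo.update(tx['outputs'])
                applied.append(tx)
            else:
                still_pending.append(tx)  # depends on a later-round tx
        if not applied:
            raise ValueError("no valid ordering exists: block is invalid")
        pending = still_pending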

Future tech: branch curators would instead simply verify a small chain of SCIP proofs :)

Additional note: branch curators can also maintain an index of (script, txid:n) for their branch, in order to aid lightweight clients doing lookups by script.
maaku (Legendary)
June 17, 2013, 05:33:14 AM
#371

An index keyed by (txid:n) will have to be maintained for block validation anyway. My current plan is to have one index (hash(script), txid:n) -> balance for wallet operations, and another (txid:n) -> CCoins for validation.

Transactions within blocks are processed in-order, and so cannot depend on later transactions.
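
As plain Python dicts, those two indexes would look roughly like this (a sketch only; 'coin' stands in for Core's CCoins record):

Code:
wallet_index = {}      # (script_hash, (txid, n)) -> value, for wallets
validation_index = {}  # (txid, n) -> coin record, for block validation

def add_utxo(script_hash, txid, n, value, coin):
    wallet_index[(script_hash, (txid, n))] = value
    validation_index[(txid, n)] = coin

def spend_utxo(script_hash, txid, n):
    del wallet_index[(script_hash, (txid, n))]
    del validation_index[(txid, n)]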

etotheipi (OP, Legendary; Core Armory Developer)
June 17, 2013, 06:08:07 AM
#372

Completely off topic.  I was thinking that "Reiner-Friedenbach tree" is way too long and way too German.  What about the "Freiner tree"? Too cheesy?  It does seem to be an elegant mixture of Reiner, Friedenbach, and Freicoin.

I really wanted to rename this thread to something more appropriate, but I'm not sure what, yet. 

d'aniel (Sr. Member)
June 17, 2013, 08:32:20 AM
#373

Quote
An index keyed by (txid:n) will have to be maintained for block validation anyway.
Right, I guess that's why it's the natural keying for distributed nodes.

Quote
My current plan is to have one index (hash(script), txid:n) -> balance for wallet operations, and another (txid:n) -> CCoins for validation.
The question then is which one's digest gets included in the block header?  Having both would be nice, but maintaining the (hash(script), txid:n) one seems to make distributing a node a lot more complex and expensive.  The downside to only having the (txid:n, script) one's digest committed in the block header is that you can't concisely prove a utxo with a given script doesn't exist.  But you can still concisely prove when one does, and that seems to be what's really important.  Also, if only one of the trees needs the authenticating structure, then this would be less overhead.
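
To make the asymmetry concrete (Python sketch; the dict stands in for the authenticated tree):

Code:
utxo = {}  # (txid, n) -> script, mirroring the committed keying

def has_outpoint(outpoint):
    # Provable to a lightweight client with one short tree path.
    return outpoint in utxo

def utxos_paying(script):
    # No short proof possible under this keying: ruling out a utxo with
    # a given script means scanning the entire set, which is exactly
    # what a lightweight client cannot do.
    return [op for op, s in utxo.items() if s == script]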

Quote
Transactions within blocks are processed in-order, and so cannot depend on later transactions.
Okay, but I don't think an individual block really needs to encode an explicit ordering of transactions, as the tx DAG is the same for any valid ordering.  As long as a node can find some ordering that's consistent, then that's good enough.
d'aniel (Sr. Member)
June 17, 2013, 08:53:03 AM
#374

Quote
Completely off topic.  I was thinking that "Reiner-Friedenbach tree" is way too long and way too German.  What about the "Freiner tree"? Too cheesy?  It does seem to be an elegant mixture of Reiner, Friedenbach, and Freicoin.

I really wanted to rename this thread to something more appropriate, but I'm not sure what, yet. 
How about "Authenticated dictionary of unspent coins"?

I like it because it's self-explanatory :)
maaku (Legendary)
June 17, 2013, 08:54:15 AM
#375

Quote
The downside to only having the (txid:n, script) one's digest committed in the block header is that you can't concisely prove a utxo with a given script doesn't exist.  But you can still concisely prove when one does, and that seems to be what's really important.

No, you need to be able to prove both, otherwise we're right back where we started from in terms of scalability and lightweight clients.

One data structure is needed for creating transactions, the other is required for validating transactions. It's rather silly and myopic to optimize one without the other, and I will accept no compromise - both will be authenticated and committed so long as I have any say in it.

Quote
Transactions within blocks are processed in-order, and so cannot depend on later transactions.
Okay, but I don't think an individual block really needs to encode an explicit ordering of transactions, as the tx DAG is the same for any valid ordering.  As long as a node can find some ordering that's consistent, then that's good enough.

I'm simply reporting how bitcoin works: if you include transactions out of order, your block will not validate.

ThomasV (Legendary)
June 17, 2013, 08:56:07 AM
#376

Quote
Also, was there a verdict on the 2-way (bitwise) trie vs. 256-way + Merkle trees in each node?  I've been thinking lately about sharding block creation/verification, and am noticing the advantages of the bitwise trie since its updates require a much more localized/smaller set of data.

I guess what really matters is the amount of database operations.
If the hash of a node is stored at its parent (and each node stores the hashes of all its children), then updating the hash of a node requires only one database read and one write, instead of reading all children (see my previous post).
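
Something like this (Python sketch; db is any key-value store, here a dict of node records):

Code:
import hashlib

def node_hash(node):
    return hashlib.sha256(b''.join(node['child_hashes'])).digest()

def update_child_hash(db, node_key, child_index, new_child_hash):
    node = db[node_key]                    # one read
    node['child_hashes'][child_index] = new_child_hash
    node['hash'] = node_hash(node)         # children never touched
    db[node_key] = node                    # one write
    return node['hash']                    # hand up to the parent next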

d'aniel (Sr. Member)
June 17, 2013, 09:20:30 AM (last edit: June 17, 2013, 07:24:50 PM by d'aniel)
#377

Quote
No, you need to be able to prove both, otherwise we're right back where we started from in terms of scalability and lightweight clients.

One data structure is needed for creating transactions, the other is required for validating transactions. It's rather silly and myopic to optimize one without the other, and I will accept no compromise - both will be authenticated and committed so long as I have any say in it.
I'm not sure I follow.  I understand that one of the main benefits of the idea is to be able to prove to a lightweight client that a coin is currently valid; whereas now, when given a Merkle path to a tx in some block, a lightweight client doesn't necessarily know if a txout in this tx was spent in another tx after that block.  That seems like a big improvement.  But the problem isn't that the malicious peer isn't serving data at all, it's that he's serving it in a misleading way.  Isn't it true that either keying ensures peers can't mislead lightweight clients in this way?

Quote
I'm simply reporting how bitcoin works: if you include transactions out of order, your block will not validate.
I appreciate that.  The distributed node idea assumed a couple significant protocol changes, and I just wanted to be sure they wouldn't break anything.
d'aniel (Sr. Member)
June 17, 2013, 09:43:26 AM (last edit: June 17, 2013, 10:04:00 AM by d'aniel)
#378

Quote
Also, was there a verdict on the 2-way (bitwise) trie vs. 256-way + Merkle trees in each node?  I've been thinking lately about sharding block creation/verification, and am noticing the advantages of the bitwise trie since its updates require a much more localized/smaller set of data.

Quote
I guess what really matters is the amount of database operations.
If the hash of a node is stored at its parent (and each node stores the hashes of all its children), then updating the hash of a node requires only one database read and one write, instead of reading all children (see my previous post).

I noted somewhere a while back in this thread that a group of 2-way nodes can be 'pruned' to produce a (e.g.) 256-way one for storage, and a 256-way one can be 'filled in' after retrieving it from storage to produce a group of 2-way ones.  Not sure what the optimal radix size is for storage, but database operations are definitely the primary concern.

If a write-back cache policy is used so that the 'trunk' of the tree isn't being constantly rewritten on the disk, then using a bitwise trie would mean not having to rebuild full Merkle trees from scratch for each of the upper nodes during every update.  Not sure if this is a big deal though.

Edit: Never mind about that last point.  Merkle tree leaves are rarely ever added or removed in the upper nodes, so updating the Merkle tree would usually only require changing a single path.
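
The prune/fill correspondence, sketched in Python (assumes a complete depth-8 subtree for simplicity; a real PATRICIA trie would be sparse):

Code:
import hashlib

def H(left, right):
    return hashlib.sha256(left + right).digest()

def fill_in(child_hashes):
    """Recompute the 254 discarded interior hashes of the depth-8 binary
    subtrie above 256 children; returns the 256-way node's root hash."""
    level = list(child_hashes)
    while len(level) > 1:
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

root = fill_in([bytes([i]) * 32 for i in range(256)])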
jtimon (Legendary)
June 18, 2013, 09:26:00 AM
#379

About SCIP: if the UTXO tree digest is committed in each block, it can be done non-recursively.
Clients would download the whole header chain, with not only the proof of work but also the "signature" that proves the transition from the previous UTXO set to the current one is correct.
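
Roughly like this (Python sketch; verify_transition and check_pow are hypothetical stand-ins for the real SCIP verifier and proof-of-work check):

Code:
def check_pow(header):
    return True  # stand-in for the usual proof-of-work check

def verify_transition(proof, prev_utxo_root, utxo_root):
    return True  # stand-in for the SCIP verifier

def verify_header_chain(headers, genesis_utxo_root):
    prev_root = genesis_utxo_root
    for h in headers:  # dicts with 'proof' and 'utxo_root' keys
        if not check_pow(h):
            return False
        if not verify_transition(h['proof'], prev_root, h['utxo_root']):
            return False
        prev_root = h['utxo_root']  # chain the roots, not the proofs
    return True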

But I don't know SCIP in detail either, nor their schedule. So, yes, it would be interesting to have someone who knows this stuff first hand explain how it applies here.
Can anyone bring Ben to this conversation? I don't even know his nick on the forums...

flipperfish (Sr. Member)
June 18, 2013, 01:39:44 PM
#380

Quote
Possible? Yes. Desirable? No. It's important that miners verify that they haven't been duped onto a side chain. It is, however, okay for them to throw away those historical transactions once they have been verified and just keep the UTXO set.

Yeah, I did not mention the UTXO set because I thought it was obvious.

The reason I brought this up is that I believe a lot of us are willing to run a USB miner to secure the network without generating any noticeable revenue.  Now that they're out and very power-efficient, the power cost of keeping one running is fairly negligible; but if we have to download and store the rapidly growing full chain, the cost may grow significantly.

Quote
The miner could validate the entire history or synchronize with constant storage requirements, throwing away data as it is no longer required.

Why can the miner not use the block headers only to verify that he's on the right chain?  Is there a reason all the already-spent transactions of the past have to be stored somewhere?