Bitcoin Forum
Topic: Including merkleroot of blockheaders in coinbase?
jl2012 (OP) · May 04, 2015, 07:45:59 AM · #1

With the ever-growing blockchain, storing and verifying block headers may become a problem for extra-light SPV clients. Would it be beneficial to include the merkle root of block headers in the coinbase (as a soft fork)? SPV clients would then only need to store the headers of relevant blocks.

jl2012 (OP) · May 04, 2015, 07:48:07 AM · #2

Well, as soon as I posted this I realized this may not be a good idea: SPV clients have to verify the PoW, so they need all block headers anyway. However, is this idea useful in some other way?

gmaxwell · May 04, 2015, 08:42:03 AM · #3

Quote from: jl2012 on May 04, 2015, 07:48:07 AM
Well, as soon as I posted this I realized this may not be a good idea: SPV clients have to verify the PoW, so they need all block headers anyway. However, is this idea useful in some other way?

Behold: http://blockstream.com/sidechains.pdf (read appendix B, "Efficient SPV proofs"). In it we present a design that allows you to precisely measure the amount of work done (in expectation) on a chain, computing exactly the same value as a user linearly reading the headers, using only log communication. (The tradeoff is that because there are few samples the measurement has high variance, so an attacker authoring a false proof could get lucky, similar to how you might find a block on your CPU. So even though the expected work to forge a proof is exactly the same, the probability of success in an attack is higher than for an equivalent-length reorg; there are various ways to further harden it, but most further decrease communication efficiency.)
fbueller · May 04, 2015, 12:04:34 PM · #4

I don't think this is possible, for the same reason a signature can't sign itself: the coinbase transaction is used to compute the merkle root, and therefore cannot contain that root without altering the very hash you wished to include.

jl2012 (OP) · May 04, 2015, 12:14:50 PM · #5

Quote from: fbueller on May 04, 2015, 12:04:34 PM
I don't think this is possible, for the same reason a signature can't sign itself: the coinbase transaction is used to compute the merkle root, and therefore cannot contain that root without altering the very hash you wished to include.

To be precise, I'm talking about the merkle root of the hashes of all previous blocks.
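
For illustration, a minimal sketch of building such a commitment, assuming Bitcoin's double-SHA256 and its duplicate-the-last-entry pairing rule (the helper names are illustrative, not from any proposal); a client holding one committed root plus a log-sized branch could then verify an old header's inclusion:

Code:
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(hashes: list) -> bytes:
    """Merkle root over 32-byte hashes, duplicating the last entry of
    odd-length levels, as Bitcoin's transaction tree does."""
    level = list(hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Each new block's coinbase would commit to the root over all previous
# block-header hashes; dummy stand-ins here:
prev_header_hashes = [dsha256(bytes([i])) for i in range(8)]
commitment = merkle_root(prev_header_hashes)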

jl2012 (OP) · May 04, 2015, 01:08:38 PM (last edit: May 04, 2015, 03:25:06 PM) · #6

Quote from: gmaxwell on May 04, 2015, 08:42:03 AM
Behold: http://blockstream.com/sidechains.pdf (read appendix B, "Efficient SPV proofs"). In it we present a design that allows you to precisely measure the amount of work done (in expectation) on a chain, computing exactly the same value as a user linearly reading the headers, using only log communication. (The tradeoff is that because there are few samples the measurement has high variance, so an attacker authoring a false proof could get lucky, similar to how you might find a block on your CPU. So even though the expected work to forge a proof is exactly the same, the probability of success in an attack is higher than for an equivalent-length reorg; there are various ways to further harden it, but most further decrease communication efficiency.)

Do you mean one can use a very low-value hash to replace the whole blockchain?

EDIT: I made a mistake

So with the current chainwork = 0x69fdb53bd3217bc08921f at block 354922, I can rewrite the whole blockchain if I'm lucky enough to get a hash below 0000000000000000000026a50fe1b18437....... ?
(I get this number by taking 1/0x69fdb53bd3217bc08921f: https://www.wolframalpha.com/input/?i=1%2F0x69fdb53bd3217bc08921f)

It is 39136 times more difficult than the current target of 00000000000000001713dd000000000000000000000000000000000000000000, and one such hash is expected about every 38.8 weeks at the current hashrate.
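
For the curious, this arithmetic checks out; a quick Python verification, with both hex constants taken from above and assuming one block per 10 minutes:

Code:
# Sanity check of the numbers above (Python 3).
chainwork = 0x69fdb53bd3217bc08921f   # cumulative work at block 354922

# A single hash "replacing" that much work must fall below 2^256 / chainwork:
replace_target = (1 << 256) // chainwork
print(f"{replace_target:064x}")       # -> 0000000000000000000026a50fe1b18437...

# Per-block target at the time of the post:
block_target = 0x00000000000000001713dd000000000000000000000000000000000000000000

factor = block_target // replace_target
print(factor)                         # -> ~39136: that many times harder than one block

# At one block per 10 minutes, the whole network expects one such hash in:
print(f"{factor * 10 / (60 * 24 * 7):.1f} weeks")   # -> ~38.8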

TierNolan · May 04, 2015, 03:05:04 PM · #7

Quote from: jl2012 on May 04, 2015, 01:08:38 PM
it is 39136 times more difficult than the current target of 00000000000000001713dd000000000000000000000000000000000000000000 and is expected to have one in 19.4 weeks with current hashrate

39136 blocks is nearly 1 year of blocks. If you had that much hashing power, then you could replace the chain directly.

The underlying problem is that if the hashing power is increasing exponentially, then all previous PoW done is equal to about one doubling time's worth of the current hashing power.
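
(A quick check of that claim: if the hashrate grows as H(t) = H0 * 2^(t/T) with doubling time T, the total past work is the integral from -infinity to 0 of H0 * 2^(t/T) dt = H0 * T / ln 2 ≈ 1.44 * H0 * T, i.e. roughly one doubling period's worth of hashing at the current rate H0.)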

jl2012 (OP) · May 04, 2015, 03:29:10 PM · #8

Quote from: TierNolan on May 04, 2015, 03:05:04 PM
Quote from: jl2012
it is 39136 times more difficult than the current target of 00000000000000001713dd000000000000000000000000000000000000000000 and is expected to have one in 19.4 weeks with current hashrate
39136 blocks is nearly 1 year of blocks. If you had that much hashing power, then you could replace the chain directly.

The underlying problem is that if the hashing power is increasing exponentially, then all previous PoW done is equal to about one doubling time's worth of the current hashing power.

It is higher, since the difficulty has been quite stable in the past few months.

gmaxwell · May 05, 2015, 12:22:20 AM · #9

Quote from: jl2012 on May 04, 2015, 01:08:38 PM
Do you mean one can use a very low-value hash to replace the whole blockchain?
Not quite; you commit to the work you're targeting, so you can't just find something and replace as much as you can: you have to set out in advance what amount of work you're claiming, and the link does nothing unless you hit it. If you try but aren't successful, you've just wasted your effort. In other words, with the same amount of hashing power you could on average make a false compact proof that replaces the chain, or you could replace the whole chain normally; they take exactly the same expected amount of work. But if you only had enough hashing power to, say, replace 6 months, then maybe you could get lucky and successfully create a compact proof for the whole chain with, say, a 0.01% probability of success, while the probability of success for just getting lucky directly mining the blocks would be 0.0000001%, just due to the reduced number of samples.
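
To put rough numbers on that intuition, a sketch under assumed parameters (not the figures from this post): block and jump finds are modeled as Poisson events, the chain holds N blocks' worth of work, and the attacker can afford half of it in expectation:

Code:
import math

def poisson_tail(lam: float, n: int) -> float:
    """P[Poisson(lam) >= n]."""
    return 1 - sum(math.exp(-lam) * lam ** k / math.factorial(k)
                   for k in range(n))

N = 100          # blocks of work to replace (illustrative assumption)
budget = N / 2   # attacker's expected blocks' worth of hashing (assumption)

# Direct reorg: needs N ordinary block finds; finds ~ Poisson(budget).
p_reorg = poisson_tail(budget, N)

# Compact proof: ~log2(N) jumps, each needing N/k blocks' worth of work,
# so per-hash success scales by k/N and jump finds ~ Poisson(budget*k/N).
k = round(math.log2(N))
p_proof = poisson_tail(budget * k / N, k)

print(f"direct reorg:  {p_reorg:.1e}")   # ~3e-10
print(f"compact proof: {p_proof:.1e}")   # ~7e-2: same expected work, far likelier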

(Though note, you can boost your reorg success probability even without the compact proofs by mining so that the difficulty goes up at the highest possible rate (e.g. 4x every retarget)... and _if_ you assume that everyone's hashrate goes up exponentially forever, then any fixed proportion of the hashrate (e.g. 0.1%) will in finite, but potentially very large, time eventually replace the whole chain just due to that 4x ramping variance attack: eventually you have a chain with similar work but far fewer samples, and so you can get lucky.)

As for the historical work: http://bitcoin.sipa.be/powdays-50k.png shows how much time it would take at the current hashrate to replace the work in the chain.
jl2012 (OP) · May 05, 2015, 04:51:57 AM · #10

Quote from: gmaxwell on May 05, 2015, 12:22:20 AM
you have to set out in advance what amount of work you're claiming

Do you mean morphing the "bits" field in the header? Is it legal to use a lower-than-required "bits" in a Bitcoin header?

Quote from: gmaxwell on May 05, 2015, 12:22:20 AM
and the link does nothing unless you hit it. If you try but aren't successful, you've just wasted your effort. In other words, with the same amount of hashing power you could on average make a false compact proof that replaces the chain, or you could replace the whole chain normally; they take exactly the same expected amount of work. But if you only had enough hashing power to, say, replace 6 months, then maybe you could get lucky and successfully create a compact proof for the whole chain with, say, a 0.01% probability of success, while the probability of success for just getting lucky directly mining the blocks would be 0.0000001%, just due to the reduced number of samples.

The chance of success is higher because of the higher variance, right? The normal way follows a gamma (Erlang) distribution while the compact proof follows an exponential distribution?

This sounds related to my earlier (bad) idea of "decreasing the variance of block interval w/o decreasing the mean of interval": https://bitcointalk.org/index.php?topic=572816.0

gmaxwell · May 05, 2015, 08:35:00 AM · #11

Quote from: jl2012 on May 05, 2015, 04:51:57 AM
Do you mean morphing the "bits" field in the header? Is it legal to use a lower-than-required "bits" in a Bitcoin header?
No; the commitment included in the block is a hashtree with leaves that look like:

{index for how many back, previous_block_hash, compact encoded work difference between now and then}
...

for multiple backpointers. A back pointer is only valid (considered acceptable for traversal in a proof) if the block committing to it meets the work-difference threshold. Similar to merge mining, it's possible to perform N hashcashes in parallel, all with different thresholds. Just as, if namecoin's difficulty were higher than Bitcoin's, you could merge-mine namecoin and have a bitcoin block with a namecoin commitment in it which is still useless for namecoin even though it's a valid bitcoin block.

So to follow a link that goes one block back, you need bits work-- this is the normal case.

To follow a link that jumps two back, your block must be valid at the 2*bits level... and so on; to successfully mine a 10000-back hop you need a block that is valid when considered with a work threshold of whatever was spanned by those blocks (e.g. 10000x greater if difficulty has been constant).
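
In code terms, the traversal rule just described might look like the following sketch; the leaf layout and names are illustrative, not the paper's exact encoding, and difficulty is assumed constant:

Code:
from dataclasses import dataclass

@dataclass
class BackLink:
    span: int           # how many blocks back this pointer jumps
    prev_hash: bytes    # hash of the block `span` back
    work_delta: int     # compact-encoded work spanned by the jump

def link_is_traversable(block_hash: int, block_target: int,
                        link: BackLink) -> bool:
    # A jump spanning `span` blocks' worth of work is only valid if the
    # committing block's hash meets a target `span` times harder than
    # normal (with constant difficulty, target // span approximates it).
    return block_hash <= block_target // link.span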

This is why the proof can hide data, but you can't fake one with less work (on average) than just mining normally. Individually, any block is unlikely to jump far back, but in the aggregate long jumps happen, and as a result the proof is log-sized in the length of the chain. (Actually less when the hashrate increases constantly.)

For a given set of commitments there are multiple possible paths, and so multiple proofs for the same chain are possible: if you could have gone 100 back you could also go 50 back, and sometimes going less far back makes for a smaller proof; perhaps at 51 back there is a single hop that goes all the way. The smallest proof can be constructed in linear time via dynamic programming: start with the block you're trying to reach with your proof (e.g. genesis) and scan forward; for each block keep track of the size of the best proof from it to the destination and the first hop back. When you consider a block, check each of its valid backlinks, add the cost of taking that backlink to the cost of its destination's best proof, and take the minimum as your best; then move to the next block. When you get to the end, walk backwards. This algorithm always produces the smallest possible proof, though a simple greedy algorithm (always jump as far as you can, but not past your destination) isn't usually _that_ much larger. A sketch of that dynamic program, with an assumed backlinks table and size units, follows.

Code:
def smallest_proof(backlinks, src, dst):
    """Smallest proof from block `src` back to `dst` (src > dst).
    `backlinks[i]` lists (span, encoded_size) pairs of valid jumps from i.
    Table layout and size units are illustrative assumptions."""
    INF = float("inf")
    best = {dst: 0}   # best[i]: size of the smallest proof from i down to dst
    hop = {}          # hop[i]: block the best proof from i jumps to first
    # Scan forward from the destination toward the tip, as described above.
    for i in range(dst + 1, src + 1):
        best[i] = INF
        for span, size in backlinks.get(i, []):
            j = i - span
            if j >= dst and best.get(j, INF) + size < best[i]:
                best[i] = best[j] + size
                hop[i] = j
    # Walk backwards from src, collecting the chosen hops.
    path, i = [src], src
    while i != dst:
        i = hop[i]        # a KeyError here would mean no proof exists
        path.append(i)
    return best[src], path
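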

Quote from: jl2012 on May 05, 2015, 04:51:57 AM
The chance of success is higher because of the higher variance, right? The normal way follows a gamma (Erlang) distribution while the compact proof follows an exponential distribution?
Right, exactly-- well, to be more precise, the compact proof and the normal chain are both gamma; the compact proof takes the log of the shape parameter while increasing lambda, so the same expected work is represented by a smaller number of rarer events: the expectation is the same but the distribution shape is different.
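
Numerically, with illustrative figures (not from the paper): both totals are Gamma-distributed sums of exponentials with the same mean, but the few-sample compact version has far higher variance:

Code:
import math, random
from statistics import mean, pvariance

random.seed(1)
N = 1000                   # blocks' worth of expected work (assumed)
k = round(math.log2(N))    # ~log-many samples in a compact proof

# Work totals as sums of exponential waiting times (i.e. Gamma-distributed):
normal  = [sum(random.expovariate(1.0) for _ in range(N)) for _ in range(2000)]
compact = [sum(random.expovariate(k / N) for _ in range(k)) for _ in range(2000)]

print(mean(normal), mean(compact))            # both ~N: identical expected work
print(pvariance(normal), pvariance(compact))  # compact is ~N/k times larger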

Technically it's possible to make the proofs "over-prove", e.g. to go X work back you have to show a*X expected total work; but I think I concluded that you can only get constant-factor improvements that way unless you break the asymptotic log scaling of the proofs. One nice thing is that such techniques are purely between the prover and verifier; the commitments don't change.