Maybe the idea is that the UTXOs a block needs to verify, say M UTXOs (about 2-3k), are all contiguous in the Merkle structure, so together they form one large subtree and need fewer than M proof nodes in total, since they can all be proved by one or two paths; i.e. O(log(total UTXO set size)), let's call it O(log N). But since in practice you typically prove about 2k UTXOs using about 10k nodes, and you consider this an achievement (compared to the average of 2k × log(76×10⁶) nodes), then why don't you just send the stored 2k hashes directly, maybe with one accumulated hash over all of them as a guard against transmission errors?
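To make the contiguity point concrete, here is a toy sketch (my own model, not Utreexo's actual accumulator; the function name, tree size, and leaf spacing are all illustrative assumptions) that counts how many sibling hashes a batched proof needs when 2k target leaves are adjacent versus scattered across a tree of roughly UTXO-set scale:

```python
def batch_proof_size(n_leaves: int, targets) -> int:
    """Count the sibling hashes a verifier needs, beyond the target leaves
    themselves, to recompute the root of a complete Merkle tree.
    Toy model: n_leaves is a power of two, targets are leaf indices."""
    level = set(targets)            # positions we can compute at the current level
    proof_nodes = 0
    width = n_leaves
    while width > 1:
        next_level = set()
        for i in level:
            if i ^ 1 not in level:  # sibling is not computable from the targets,
                proof_nodes += 1    # so it must be shipped in the proof
            next_level.add(i // 2)
        level = next_level
        width //= 2
    return proof_nodes

N = 1 << 26                                # ~67M leaves, roughly UTXO-set scale
adjacent  = range(2000)                    # 2k contiguous leaves
scattered = range(0, 2000 * 30000, 30000)  # 2k leaves spread over the tree

print(batch_proof_size(N, adjacent))   # 17 hashes: essentially one or two paths
print(batch_proof_size(N, scattered))  # tens of thousands: close to 2k separate paths
```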
I agree with you.
I'm sorry, but this wasn't the right thing to agree with?!
I corrected it in the next reply: we use Merkle proofs because, with them, nodes hold a piece of the proof (the root) and can verify the correctness of the UTXO hash by themselves. If the server just told you "yes, the hash you have is valid" or "no, it is not", how would you be sure the server isn't lying?
(That's why SPV clients usually ask more than one node.)
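For completeness, here is a minimal sketch of what "verify it yourself against the root" means; verify_merkle_proof is a hypothetical helper, and the hashing is simplified (plain SHA-256, no domain separation), unlike production implementations:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_proof(leaf_hash: bytes, leaf_index: int,
                        siblings: list, root: bytes) -> bool:
    """Recompute the path from the leaf up to the root using the supplied
    sibling hashes, then compare against the root we already hold.
    A server cannot fake a 'yes' without breaking the hash function."""
    h, idx = leaf_hash, leaf_index
    for sib in siblings:
        # concatenation order depends on whether we are the left or right child
        h = sha256(h + sib) if idx % 2 == 0 else sha256(sib + h)
        idx //= 2
    return h == root
```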
This doesn't contradict the fact that we hope to get shorter proofs if we manage to make the UTXOs that need proving adjacent in the tree.
And yes, the paper is not about the Bitcoin blockchain, but it somehow gave me an intuition, because of the difference between:
1- Append-to-the-right Merkle trees, like Utreexo and MMRs, where you can keep all your trees complete (if the UTXOs are arranged by creation time, you can assign them, in order, to power-of-two complete trees); see the sketch after this list.
2- Merkle trees keyed on a hash, which are not complete by nature. The paper assumes a complete, large, power-of-two tree with some default-value leaves, say 0, so the real tree becomes a sparse subset of this big complete one.
- The paper then, solving a different problem with different constraints, compares storing this tree as B, B+, and B- trees, plus another previously known method (maybe hash tables, I don't remember now), for space-time trade-offs.
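Here is a small sketch of that contrast, under my own simplifying assumptions (this is not Utreexo's or the paper's actual code): an append-to-the-right forest keeps one perfect tree of size 2^k for every set bit of the leaf count, while a keyed/sparse tree pretends to be a full tree of depth D whose empty subtrees all hash to precomputable defaults:

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def append_right_forest(num_leaves: int) -> list:
    """Append-to-the-right accumulators (Utreexo/MMR style): the leaves,
    taken in creation order, fill perfect power-of-two trees, one tree of
    size 2^k per set bit in num_leaves. Real implementations keep only
    the roots of these trees."""
    return [1 << k for k in range(num_leaves.bit_length())
            if (num_leaves >> k) & 1]

print(append_right_forest(2000))  # [16, 64, 128, 256, 512, 1024]

# Keyed (sparse) Merkle tree: conceptually a complete tree of depth D over
# all 2^D possible keys, where absent keys hold a default value. The hash
# of an all-default subtree at each height can be precomputed once, so only
# the non-default branches ever need to be materialized.
D = 256                                  # e.g. keys are 256-bit hashes
empty_subtree = [sha256(b"\x00")]        # hash of a default leaf
for _ in range(D):
    empty_subtree.append(sha256(empty_subtree[-1] + empty_subtree[-1]))
```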