I see 2 big problems here.
- If it's Murphy's law you are working against, you are doomed to failure. Murphy always wins, and in this case I believe it would mean that ASICs that run your coin will come out BEFORE the software implementation is done.
- Your memory requirements, growing over absolute time, will eventually outpace Moore's law, leading to an increasingly impractical cost of participation that eventually requires more RAM than is physically possible.
The first one is a funny typo, but the second is a big issue.
At SOME point we are going to see Moore's law slow down and maybe even plateau as we reach atomic densities. This protocol will just get more and more memory intensive until merely scanning the blockchain takes millions of dollars' worth of hardware, let alone mining. Bitcoin solves this issue by NOT using time as an absolute factor, but hashrate/difficulty instead. This means that whether Moore's law stops working tomorrow or in 2050, the network will self-stabilize difficulty against the available computing power.
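To make the contrast concrete, here's roughly how Bitcoin's retarget rule works, simplified (the constants are Bitcoin's mainnet values; the function name is just mine):

```python
# Simplified sketch of Bitcoin-style retargeting: difficulty follows the
# observed block production rate, not calendar time, so it can fall just
# as easily as it rises if hashrate disappears.

TARGET_SPACING = 10 * 60                          # seconds aimed for per block
RETARGET_INTERVAL = 2016                          # blocks between adjustments
EXPECTED_TIMESPAN = TARGET_SPACING * RETARGET_INTERVAL

def retarget(old_target: int, actual_timespan: int) -> int:
    """Return the new proof-of-work target after one retarget period."""
    # Clamp the measured timespan to a 4x swing, as Bitcoin does, so one
    # period can't move difficulty too violently in either direction.
    actual_timespan = max(EXPECTED_TIMESPAN // 4,
                          min(actual_timespan, EXPECTED_TIMESPAN * 4))
    # A larger target means easier blocks; scaling by observed/expected
    # time pushes production back toward one block per TARGET_SPACING.
    return old_target * actual_timespan // EXPECTED_TIMESPAN
```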
I fixed that, thank you.
I have been wondering this myself, as it's only 10 angstroms to a nanometer and we're moving to sub-10-nanometer designs in the next decade.
2 conclusions I'm drawing from this:
- We need to prevent excessive computation from being required to validate the blockchain or operate as a client. The more asymmetric we can make the verification:generation ratio, the better the efficiency for clients compared to the mining effort (see the sketch after this list).
- Don't tie ANYTHING to an absolute growth direction except the number of objects. Assume that any adjustments that are made might need to be reversed in order to keep the network operating.
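To illustrate the first point with a toy (plain SHA-256 here, not your memory-hard hash, and all the names are made up): verification costs one hash no matter how low the target goes, while generation gets arbitrarily expensive. That's the asymmetry worth protecting.

```python
import hashlib

def meets_target(header: bytes, nonce: int, target: int) -> bool:
    """Verification: a single hash, no matter how hard mining has become."""
    digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < target

def mine(header: bytes, target: int) -> int:
    """Generation: expected work grows as the target shrinks, while the
    verifier above still pays for exactly one hash."""
    nonce = 0
    while not meets_target(header, nonce, target):
        nonce += 1
    return nonce

if __name__ == "__main__":
    easy_target = 1 << 250          # deliberately easy so the loop finishes fast
    n = mine(b"example-header", easy_target)
    assert meets_target(b"example-header", n, easy_target)
```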
I suppose that's kind of the fail-safe built into the Bitcoin network's difficulty: it's always reversible instead of always increasing.
Suggestions:
Maybe your difficulty adjustment should be a composite metric that includes harder targets AND more RAM instead of having them tied to 2 different events?
As far as increasing the ratio between computation and verification goes, would it be possible to sign each block twice? Sign it once with a simple algo, then sign the block, the simple signature, and a nonce with the complex algo, and retain both hashes. Mining could require full verification of the previous complex hashes, but that would only need to cover recent blocks.
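Roughly what I mean, as a sketch; scrypt is only a stand-in for whatever memory-hard function you end up with, and the names and parameters are made up:

```python
import hashlib

def simple_hash(data: bytes) -> bytes:
    """The cheap commitment any client can recompute instantly."""
    return hashlib.sha256(data).digest()

def complex_hash(data: bytes) -> bytes:
    """Placeholder for the memory-hard algo; tiny scrypt parameters here,
    NOT a recommendation for real settings."""
    return hashlib.scrypt(data, salt=b"block-seal", n=1024, r=1, p=1, dklen=32)

def seal_block(block: bytes, nonce: int) -> tuple[bytes, bytes]:
    """Hash the block once with the simple algo, then hash the block, the
    simple hash, and the nonce with the complex algo, and retain both."""
    cheap = simple_hash(block)
    expensive = complex_hash(block + cheap + nonce.to_bytes(8, "big"))
    return cheap, expensive
```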
Those are my thoughts so far, hope they help.
Well, I think a possible composite algorithm for difficulty adjustment could be a long-term retarget for memory (35, 70, or 140 days) and a short-term retarget (3.5 days) for difficulty. The problem with this approach is that if the network becomes inundated with miners, the memory retarget could grow too large and destroy the infrastructure of the chain. I think if we're using 35-day retargets for memory, the maximum increase should be 5%-10% while the maximum decrease should be 20%-50%.
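As a sketch of what that two-speed adjustment with clamps might look like (block counts assume 2.5-minute spacing so the windows line up with 3.5 and 35 days, the 10%/50% clamps are the outer ends of the ranges above, and all the names are placeholders):

```python
DIFF_WINDOW = 2016              # blocks per difficulty retarget (~3.5 days at 2.5-min blocks)
MEM_WINDOW = DIFF_WINDOW * 10   # blocks per memory retarget (~35 days)

def retarget_difficulty(old_target: int, actual_secs: int, expected_secs: int) -> int:
    """Short-term retarget: plain ratio rule, fully reversible either way."""
    return old_target * actual_secs // expected_secs

def retarget_memory(old_mem_kib: int, actual_secs: int, expected_secs: int) -> int:
    """Long-term memory retarget with asymmetric clamps: at most +10% per
    period but up to -50%, so a flood (or exodus) of miners can never push
    the memory requirement somewhere the network can't recover from."""
    proposed = old_mem_kib * expected_secs // actual_secs   # faster blocks -> more memory
    upper = old_mem_kib * 110 // 100
    lower = old_mem_kib * 50 // 100
    return max(lower, min(proposed, upper))
```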
The last point I'm not knowledgeable about. I think you'd need two symmetric merkle trees, one with the simple hash and one with the complex hash. The complex hash would need to be solved first, and then whoever solves it would solve the simple hash at approximately the same time (it'd have to be easy enough to be near-instantaneous) and so would sign for both. The simple "dummy" tree nodes would just contain data about who solved the block, what the transactions were, and what the network settings were at the time. Both trees would then have to be constantly checked for congruence by "master nodes" with full hashing capabilities; this would expose the network to master-node sybil attacks, though. Master nodes would then be the source of the dummy tree for clients.
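A bare-bones picture of the congruence check a master node might run for recent blocks; every field and name here is invented for illustration, not a concrete format:

```python
from dataclasses import dataclass

@dataclass
class DummyRecord:
    """One node of the lightweight 'dummy' tree: who solved the block,
    the transaction root, the network settings, and the simple hash."""
    height: int
    miner: str
    tx_root: bytes
    settings_hash: bytes
    simple_hash: bytes

def check_congruence(record: DummyRecord, full_block: bytes, nonce: int,
                     complex_chain_hash: bytes, simple_hash_fn, complex_hash_fn) -> bool:
    """A master node with full hashing capability recomputes both hashes
    from the full block and confirms the dummy record and the complex
    chain are describing the same block."""
    cheap = simple_hash_fn(full_block)
    expensive = complex_hash_fn(full_block + cheap + nonce.to_bytes(8, "big"))
    return cheap == record.simple_hash and expensive == complex_chain_hash
```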
Probably any such network-simplification scheme is going to expose clients relying on "dummy trees" or other simplified merkle tree structures to this sort of attack. Since Bitcoin will eventually face a data storage problem stemming from the same issue, a solution will probably be found sometime soon; I'm just not sure what it is.