It's nice to know I'm not talking to myself
So, more news, and general commentary. I got through a lot of tasks today and I need to debrief a bit.
There are 7 new algorithms being added: blake14lr (as used in Decred), blake2b (Sia), Lyra2REv2 (Vertcoin), skein (one of Myriad's five), x13 and x11. These were selected because all of the coins they are associated with use the standard Bitcoin mining RPC API, and none of them require changes to the block header structure (Equihash requires an additional ~1344 bytes, and Monero's difficulty field is double the length at 64 bits). Most of it is tedious administrative work, going through all the places where I already added Scrypt support to the btcd base; there will now be 9 algorithms in total. I am most of the way through the initial implementation.
The difficulty adjustment deserves a little more discussion. KGW and Dark Gravity Wave are the better-known continuous difficulty adjustment algorithms, and as people may know, the Verge attack was due to flaws in DGW. I spent a lot of the last week staring at logs of the new node mining with, well, only one algorithm, but this doesn't substantially matter, because all of them are Poisson point processes anyway, so if designed right it should behave not too differently with more. It might be a day or two before I have them all fully working, and then I can see how my algorithm copes with multiple algorithms running together.
So, the existing difficulty adjustment is terrible; I don't know how long the original authors spent testing, but I suspect not very long. It bases its calculation on the time between a block of an algorithm and the previous block of the same algorithm, with a mere 10 samples used to generate the average upon which it adjusts linearly.
Firstly, yes, of course this new algorithm uses exponents. Not squares, as in many of the others, but a cubic curve: take the divergence ratio (target:actual), subtract one, cube the result, then add one back. This is a slightly rough description:
https://github.com/parallelcointeam/pod/blob/master/docs/parabolic_filter_difficulty_adjustment.md

One subject I didn't fully address, because it wasn't on my mind so much as I was simply testing to see how it works, is how the different algorithms cooperate with each other. The many schemes I see, Monero's and others, do a LOT of processing and slicing and weighting, and I think if this can be avoided, it is better, as this is a process that takes place during replay, and nothing is more irritating than waiting for sync because of bandwidth or disk access problems. It can be worse depending on the database; leveldb is not bad, but down the track I would prefer to switch it all to the Badger database, which separates the key and value fields and (optionally) keeps all of the keys in memory.
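Coming back to the curve itself, here is a minimal sketch of the cubic filter as I roughly described it above: subtract one from the divergence ratio, cube it, add one back. The function name and exact types are illustrative assumptions, not the actual pod code.

```go
package main

import (
	"fmt"
	"math"
)

// parabolicAdjust sketches the cubic response: take the divergence
// ratio (target:actual), subtract one, cube it, then add one back.
// This is an illustrative assumption of the shape, not the real
// pod implementation.
func parabolicAdjust(target, actual float64) float64 {
	return 1 + math.Pow(target/actual-1, 3)
}

func main() {
	// Near the target the response is almost flat; far out it kicks hard.
	fmt.Println(parabolicAdjust(300, 330)) // 10% slow: barely moves (~0.99925)
	fmt.Println(parabolicAdjust(300, 600)) // 2x slow: eases difficulty to 0.875
	fmt.Println(parabolicAdjust(300, 150)) // 2x fast: doubles difficulty
}
```

The cubing is what produces the flat middle and steep edges: small divergences are damped almost to nothing, while large ones are amplified.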
So, I basically (lazily) implemented it, without too much thinking beforehand, such that for each algorithm it always steps back to the previous block of that algorithm, then walks backwards some number of blocks (probably 288, but I'm not completely sure yet), and simply uses those two timestamps, subtracting the older from the newer, and adjusts from that.
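The lazy scheme can be sketched like this; the types, names, and window size here are my illustrative assumptions rather than the actual pod code:

```go
package main

import "fmt"

// block is a minimal stand-in for a chain entry.
type block struct {
	algo      string
	timestamp int64 // unix seconds
}

const window = 288 // tentative sample depth mentioned above

// averageBlockTime steps back to the newest block of the given
// algorithm, then walks back a fixed window of blocks and derives
// the average spacing from just the two endpoint timestamps,
// subtracting the older from the newer.
func averageBlockTime(chain []block, algo string) int64 {
	top := -1
	for i := len(chain) - 1; i >= 0; i-- {
		if chain[i].algo == algo { // previous block of this algorithm
			top = i
			break
		}
	}
	if top < window {
		return 0 // not enough history; caller falls back to a default
	}
	elapsed := chain[top].timestamp - chain[top-window].timestamp
	return elapsed / window
}

func main() {
	demo := make([]block, 300)
	for i := range demo {
		demo[i] = block{algo: "scrypt", timestamp: int64(i) * 300}
	}
	fmt.Println(averageBlockTime(demo, "scrypt")) // 300
}
```

Note that only the endpoint of the walk is algorithm-specific; the window itself spans all blocks, which is what produces the 'time travelling' behaviour discussed below.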
What I did not realise immediately was how this would work with this many algorithms. As I discussed in the document above, after getting that average block time, the difficulty is dithered by a tiny amount, just the last two bits, before being fed into the curve filter. This is a unique feature among difficulty adjustments; I thought about more 'random' ways of doing it, but the fact is that when you are dithering, you should not shuffle things too much. The bit-flipping may not be sufficient to smooth out the bumpy, pointy timestamp resolution, but it probably goes a long way towards eliminating resonances caused by common factors between numbers.
So, in every case, the difficulty adjustment is computed based on what will often be some time in the past: maybe 1 block, maybe hundreds. The adjustment influences only the one algorithm's blocks, but those blocks mingle with all the others. The most important effect, which in my mind was just a side effect I haven't modelled yet, is that no matter how long the gap while an algorithm isn't being used, it is as though no time has passed, so each of the different algorithms brings new, complex 'echoes' into the feedback loop that computes difficulty, which I figure will somewhat resemble reverb in sound or radiosity in light. In other words, the possibly different inherent distributions of solutions, which are already independent of each other, will mix a lot of different numbers together.
Pure Gaussian noise is by its nature soft and fluffy: the sound is like the whooshing of breath, and visually it is blurring. Density of dots, for example, is used for most forms of print matter, and the best-looking images are ones that have the natural chaotic patterning you see in film. Even many new developments in camera and display technology increasingly use multiple exposures, and highly pattern-free dithering algorithms like Floyd-Steinberg and later, fluffier types of ditherers.
So, put simply, the only thing that really differentiates my implementation is that I am making maximum use of the benefits of Poisson point processes and Gaussian distributions with intentional noise. Too much would overwhelm it, but between the bit flips and the time-travelling block difficulty, which always ignores the immediately previous block and may reach back much further to the previous time the algorithm produced a block, the echoes will sometimes be short, nearly not echoes at all, and sometimes reach quite a ways back. These will definitely add more randomness to the edges of the adjustments, and this is very important, in my opinion, because difficulty adjustment regimes really are not up to date compared to other fields of science where control systems are implemented.
You may have heard of Fuzzy Logic, but probably don't realise that almost all cars, washing machines, and many other devices now use fuzzy logic systems to progressively adapt via feedback loops, keeping random or undesirably shaped response patterns from appearing in the behaviour of the device. These unwanted patterns tend to cause inefficiency and catastrophic changes if they are not mitigated.
The new stochastic parabolic filter difficulty adjustment scheme should fix almost all of the issues relating to blockchain difficulty attacks, including the most devastatingly efficient: the time warp attack, first seen live against Verge earlier this year, in which the attacker succeeded in freezing the clock and issuing arbitrary tokens (limited by block height and reward size, of course, but not in sheer number).
The only remaining attack vector is the pure and simple 51%: simply overwhelming the chain for long enough to be able to rewrite it.
Just some brief comments on the timing behaviour I have observed so far in tests. With the dithering added in particular, the block times started to swing nicely back and forth in an almost alternating pattern; it's probably actually a 4-part pattern, because there are 4 different ways 2 bits can be set. Firstly, the algorithm does not react strongly to blocks that fall within the inner 20% of divergence, between 80% and 125%, so even with quite dramatic changes in hashpower it stays pretty close to target. But once block times randomly land out at the further edges, below 50% or above 200%, the algorithm kicks them hard. So in the event of a sharp rise in hashpower, the difficulty rapidly increases, *but* not in a linear fashion: it is dithered in the smaller harmonics while remaining smooth like a mirror (and made smoother by the dithering), so it homes in on the correct adjustment quite quickly.
From what I have seen so far, a sharp rise sees the difficulty accelerate upwards, but not smoothly. When a big miner stops mining, yes, there may be longer gaps between blocks, but as I mentioned, the block times seem to oscillate in roughly 1:4 and 1:2 ratios, and usually within 3 blocks the difficulty will come back down the same way it went up.
Most algorithms don't try to address the issue of aliasing distortion in the signal, and it is this that makes difficulty adjustment algorithms get stuck or oscillate excessively.
Anyway, I will have all of the bits changed to fit the 7 new algorithms in the next days, and then we will be able to see more concrete data about the additional effect of running 9 different algorithms, as well as of the rule that ignores the newest block when it is the same algorithm, meaning that due to the random distribution of the algorithms, the averaging interval is itself dithered as a product of the Poisson process of finding solutions.
Blockchains basically take a random source and attempt to modulate it with difficulty to make it more regular. Regular blocks are very important because the time between blocks is somewhat long; in fact blockchains can't really do much better than 2.5 minutes at best, and in my opinion 5 minutes is a good, sufficiently long interval. Between 10-minute blocks there are 600 seconds. You would not be happy if your screen were only 600 pixels across (these days), and I think comparing an analogous phenomenon helps people understand what is going on.
Blockchains are highly dependent on clocks, and because of the many random factors between nodes, this is also the most difficult thing to really get right. Many of the older coins have terrible difficulty adjustments; Bitcoin only adjusts every 2 weeks, and the only reason it can get away with that is that with so much loyal hashpower, the volatility is small. But the more ASICs and GPUs that are available, and the more variance there can be between hashpower and profitability across chains, the bigger a problem this becomes for more chains.
I have thought about other strategies, and maybe they will be added later, but I am not sure yet whether they are even necessary. The most challenging thing with difficulty adjustment, and the main type of attack carried out on practically a daily basis, mostly unintentionally, is that blockchains can be made to pause for indeterminate amounts of time: difficulty adjusts up due to high hashpower, and then when that hashpower goes away, difficulty cannot come back down until a new block hits the old target. Minimum-difficulty blocks, as used in test networks, do not work in the real world, because it is not possible to come to a clear consensus about time; the network can be highly divided about whether the necessary amount of time has elapsed, and this gets even worse when people alter their nodes to feed in fake or divergent times as part of attempts to game the system.
I think the solution is just noise, basically, in case you hadn't figured that's what it's all about: using noise to eliminate distortion. Even if the chain has dramatic jumps in difficulty, maybe even as high as 20x, with the added wiggle it does not get caught, slowed or sped up, on the way up; it wiggles all the way to the top, and at the top, though there will be a gap in time, the 'time travelling' averaging I have devised will likely further counter this.
There will naturally be clumpiness in the distribution, even given a long stretch of all algorithms at a stable hashrate, and especially with so many algorithms, now 9. This means that when difficulty goes up a lot, an algorithm that has gone through a long dry patch will potentially already have a lower (or higher) target, so when an algorithm without many recent solutions finds a solution, or another giant pool shows up, there are far more opportunities for a block to come in on target sooner than it would have otherwise.
The real results will of course be seen with more testing and when the hardfork activates. I hold to Proof of Work as currently, still, and probably always the best strategy for protecting the security of blockchains, as I saw first-hand how easily Proof of Stake blockchains become extremely badly distributed. Proof of Stake is also very cheap to acquire in the beginning, and after a short time only the early people or someone with a lot of money can have any influence. This is not conducive to adoption, nor is it conducive to security.