Hmmm, I read the MegaCoin forum post on it - but I can't seem to actually find the math they use for it.
https://forum.megacoin.in/index.php?topic=893.0
It only explains the problem it's trying to solve.
If I had to guess, it's probably just an algorithm that adjusts difficulty continuously based on the last x blocks, instead of every 2016 blocks - so at every block it looks at the last 2016 blocks and decides what the new difficulty should be.
They probably have it set up so that multiple running averages are taken - say the last 2016 blocks is weighted a bit, then the last 1008 blocks, then the last 504 blocks, or something like that. You could even make this more continuous: take the average over the last n blocks for every n from 1 to 2016, and have each of those averages contribute to the overall weighted average in proportion to how many blocks it takes into account.
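Something along these lines, in Python just to sketch it - to be clear, none of these names, the window sizes, or the 2.5-minute target are from the MegaCoin source, I'm purely guessing at the shape of it:

```python
TARGET_SPACING = 150  # assumed 2.5-minute block target; just a guess

def average_spacing(timestamps, n):
    """Average block time over the last n blocks (timestamps oldest -> newest)."""
    window = timestamps[-(n + 1):]        # n intervals need n+1 timestamps
    if len(window) < 2:
        return TARGET_SPACING             # not enough history yet, assume on-target
    return (window[-1] - window[0]) / (len(window) - 1)

def next_difficulty(current_difficulty, timestamps, windows=(2016, 1008, 504)):
    """Recompute difficulty every block from several trailing windows,
    weighting each window in proportion to how many blocks it covers."""
    total_weight = sum(windows)
    weighted_avg = sum(w * average_spacing(timestamps, w) for w in windows) / total_weight
    # Blocks coming too fast -> raise difficulty; too slow -> lower it.
    return current_difficulty * TARGET_SPACING / weighted_avg
```

The short windows are what let it react quickly, the long ones are what damp the noise - the whole design question is where you put that balance.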
The problem I see with this is that it gives you too much control over how quickly blocks are generated - I think I read somewhere that the lower the variance in block generation time, the easier it is to fork the chain? Not sure on that, a quick Google search didn't turn up anything.
Either way, I'd really like to see the specific algorithm they used - I'll have to peek through the source a bit. They shouldn't need to design a new one from the ground up, because there are already very good algorithms from engineers/physicists who work with signal-response feedback control systems, and any one of those could probably work well here. Things like Kalman filters:
http://en.wikipedia.org/wiki/Kalman_filter
But even then, the worry is it being too good - though I think you could have a filter like this target a desired variance in block time fairly well.
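For a rough idea of what I mean, here's a toy scalar Kalman filter that tracks difficulty / block-time (proportional to network hashrate) and then picks the difficulty that should hit the target spacing. Again, this is not what MegaCoin does - I haven't read their source - and every constant here is invented for illustration:

```python
TARGET_SPACING = 150.0   # assumed target seconds per block

class HashrateFilter:
    """Scalar Kalman filter tracking difficulty / observed block time,
    which is proportional to network hashrate."""

    def __init__(self, initial_estimate, process_var=1e-2, measurement_var=0.5):
        self.x = initial_estimate   # current hashrate estimate (arbitrary units)
        self.p = 1.0                # estimate variance
        self.q = process_var        # how fast we think hashrate drifts per block
        self.r = measurement_var    # how noisy a single block time is

    def update(self, difficulty, observed_spacing):
        # Predict: model hashrate as a random walk, so the estimate carries
        # over and only its uncertainty grows.
        self.p += self.q
        # Measure: one block gives a (very noisy) hashrate sample.
        z = difficulty / observed_spacing
        # Correct: blend prediction and measurement by their uncertainties.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

    def next_difficulty(self):
        # The difficulty that would give the target spacing
        # if the hashrate estimate is right.
        return self.x * TARGET_SPACING
```

The ratio of process_var to measurement_var is exactly the knob for how aggressively it reacts - which is also where the "too good" worry comes in.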