So it is a hard fork?
And I must ask... Are you SURE that this fixes the issues? Better not release it yet if you are not sure...
Yes, I am fairly confident. The trouble was finding out why no other Caps clones have been affected. This simple yet rather tricky little bug was hard for me to locate, but to a more experienced eye it was easier to pinpoint. I will be taking the updates Balthazar puts out and adding the other fixes Caps needs as well.
As the stake system currently functions, it is not doing what it was designed to do. Users are not really protecting the network with such short block times and low block maturity. I have not decided whether I will address these separate issues at a later date or roll them all into one update while we are doing the hard fork now.
The problems have arisen from the high starting difficulty Caps was born with:
Caps: Note the minimum difficulty for Proof of Work is much higher than that of Proof of Stake
static CBigNum bnProofOfWorkLimit(~uint256(0) >> 30);
static CBigNum bnProofOfStakeLimit(~uint256(0) >> 24);
NVC: Note PoS difficulty is set higher than PoW difficulty
CBigNum bnProofOfWorkLimit(~uint256(0) >> 20); // "standard" scrypt target limit for proof of work, results with 0,000244140625 proof-of-work difficulty
CBigNum bnProofOfStakeLegacyLimit(~uint256(0) >> 24); // proof of stake target limit from block #15000 and until 20 June 2013, results with 0,00390625 proof of stake difficulty
CBigNum bnProofOfStakeLimit(~uint256(0) >> 27); // proof of stake target limit since 20 June 2013, equal to 0.03125 proof of stake difficulty
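To make the comparison concrete: on these scrypt chains difficulty 1 corresponds to a target of ~uint256(0) >> 32 (this follows from the NVC comments above, e.g. >> 20 giving 0.000244140625 = 2^-12), so each extra bit of shift in a limit doubles its minimum difficulty. A quick sketch of the arithmetic; the MinDifficulty helper below is just for illustration:

#include <cstdio>
#include <cmath>

// Minimum difficulty implied by a target limit of (~uint256(0) >> shift).
// Difficulty 1 corresponds to ~uint256(0) >> 32, so each extra bit of
// shift doubles the minimum difficulty.
double MinDifficulty(int shift)
{
    return std::pow(2.0, shift - 32);
}

int main()
{
    printf("Caps PoW (>> 30): %.12g\n", MinDifficulty(30)); // 0.25
    printf("Caps PoS (>> 24): %.12g\n", MinDifficulty(24)); // 0.00390625
    printf("NVC  PoW (>> 20): %.12g\n", MinDifficulty(20)); // 0.000244140625
    printf("NVC  PoS (>> 27): %.12g\n", MinDifficulty(27)); // 0.03125
    return 0;
}

Note the inversion: in Caps the PoW floor (0.25) sits 64x above the PoS floor (0.00390625), while in NVC the PoS floor is the higher of the two, which is exactly the arrangement the shared code silently assumes.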
The function the PoS blocks were failing, and which led to the issues when combined with other factors, was ComputeMinWork(), shown below as it appears in Caps. It was originally believed that the issues were caused by this function never having been adjusted to match the shorter nTargetTimespan. But there was no indication as to why no other clones had been affected (the code for this was identical in all of them). Still, I knew blocks were failing this function and that it had obviously never been correctly adjusted, so in 1.4.1 its per-step decrement was lowered from 24 * 60 * 60 to the value below (0.16 * 24 * 60 * 60 = 13824 seconds, roughly 3.84 hours), but the issue remained:
static const int64 nTargetTimespan = 0.16 * 24 * 60 * 60; // 4-hour
static const int64 nTargetSpacingWorkMax = 12 * nStakeTargetSpacing; // 2-hour
//
// minimum amount of work that could possibly be required nTime after
// minimum work required was nBase
//
unsigned int ComputeMinWork(unsigned int nBase, int64 nTime)
{
    CBigNum bnTargetLimit = bnProofOfWorkLimit;
    CBigNum bnResult;
    bnResult.SetCompact(nBase);
    bnResult *= 2;
    while (nTime > 0 && bnResult < bnTargetLimit)
    {
        // Maximum 200% adjustment per day...
        bnResult *= 2;
        nTime -= 0.16 * 24 * 60 * 60;
    }
    if (bnResult > bnTargetLimit)
        bnResult = bnTargetLimit;
    return bnResult.GetCompact();
}
As you can see, it is calculated against bnProofOfWorkLimit, whose minimum difficulty is much higher than that of bnProofOfStakeLimit. This has never been an issue in any other PoS implementation because the bnProofOfStakeLimit difficulty is usually the higher of the two, so the clamp never interferes with valid PoS targets.
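To see the failure concretely: ComputeMinWork() feeds the checkpoint sanity check in ProcessBlock() (the bnNewBlock > bnRequired / "block with too little proof-of-work" path in Bitcoin-derived code), and because its result is clamped to bnProofOfWorkLimit, any PoS target sitting between the two limits trips it. A minimal self-contained model, with shift counts standing in for the real 256-bit targets and a hypothetical >> 25 block:

#include <cstdio>

int main()
{
    // Targets modeled as the shift s in (~uint256(0) >> s): a LARGER shift
    // means a SMALLER target, i.e. more work. Real code compares 256-bit
    // numbers; the ordering is the same.
    const int powLimitShift = 30; // Caps bnProofOfWorkLimit
    const int posLimitShift = 24; // Caps bnProofOfStakeLimit
    const int posBlockShift = 25; // hypothetical PoS block target

    // Valid proof-of-stake: target at or below the PoS limit.
    bool validPoS = posBlockShift >= posLimitShift;    // true

    // ComputeMinWork() clamps its result to bnProofOfWorkLimit, so the
    // required target can never be larger (easier) than the >> 30 limit.
    int requiredShift = powLimitShift;

    // ProcessBlock(): "if (bnNewBlock > bnRequired) reject". A larger
    // target is a smaller shift, so the comparison flips here.
    bool rejected = posBlockShift < requiredShift;     // true

    printf("valid PoS target: %s\n", validPoS ? "yes" : "no");
    printf("checkpoint check: %s\n", rejected ? "rejected (too little work)" : "accepted");
    return 0;
}

This prints "rejected" even though a >> 25 target is perfectly valid proof-of-stake, which is the behaviour Caps has been seeing.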
NVC uses a different function, as Balthazar has an independent development path from PPC. This function is correct for NVC, as it has 10-minute block spacing and a one-week nTargetTimespan:
//
// minimum amount of work that could possibly be required nTime after
// minimum proof-of-work required was nBase
//
unsigned int ComputeMinWork(unsigned int nBase, int64 nTime)
{
    return ComputeMaxBits(bnProofOfWorkLimit, nBase, nTime);
}
//
// maximum nBits value that could possibly be required nTime after
//
unsigned int ComputeMaxBits(CBigNum bnTargetLimit, unsigned int nBase, int64 nTime)
{
    CBigNum bnResult;
    bnResult.SetCompact(nBase);
    bnResult *= 2;
    while (nTime > 0 && bnResult < bnTargetLimit)
    {
        // Maximum 200% adjustment per day...
        bnResult *= 2;
        nTime -= 24 * 60 * 60;
    }
    if (bnResult > bnTargetLimit)
        bnResult = bnTargetLimit;
    return bnResult.GetCompact();
}
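Because ComputeMaxBits() takes the target limit as a parameter, it also points at the shape of the fix for Caps: clamp PoS headers against bnProofOfStakeLimit instead of bnProofOfWorkLimit. A minimal sketch along those lines; ComputeMinStake is the name PPC-family code uses for such a wrapper, but treat this as illustrative rather than the exact patch going into the fork:

unsigned int ComputeMinWork(unsigned int nBase, int64 nTime)
{
    // PoW headers keep the old behaviour: clamped to ~uint256(0) >> 30.
    return ComputeMaxBits(bnProofOfWorkLimit, nBase, nTime);
}

unsigned int ComputeMinStake(unsigned int nBase, int64 nTime)
{
    // PoS headers are clamped to ~uint256(0) >> 24 instead, so a valid
    // PoS target can no longer be rejected for being easier than the far
    // stricter PoW limit.
    return ComputeMaxBits(bnProofOfStakeLimit, nBase, nTime);
}

The checkpoint check in ProcessBlock() would then call whichever wrapper matches the incoming block type (PPC-derived code exposes this as pblock->IsProofOfStake()).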