Author Topic: [ANN] Catcoin - 0.9.1.1 - Old thread. Locked. Please use 0.9.2 thread.  (Read 130908 times)
kuroman — Hero Member (Activity: 588, Merit: 501)
February 04, 2014, 07:14:06 PM (last edit: 07:24:15 PM) — #281

Quote from: envy2010
The only remaining problem with the difficulty adjustment is that it doesn't "know" the difference between being off by 2000% and being off by 20%. It serves up the same 12% change in both scenarios, when the two cases call for very different responses.

KGW and the double-moving-average are both solutions which would probably work well, but both are complex (and as far as I can tell have not been applied to a long blocktime coin like CAT).

I would like to throw another possible solution out there which is extremely simple (one line of code), readily understandable, and responds well to hash attacks in my simulations. It doesn't change anything about the coin except how it approaches the 12% limit. Block time, max change, etc. all remain the same.

That solution is an exponentially weighted average: instead of simply dividing the target blocktime by the average blocktime to get the change amount, it takes the logarithm of that ratio. In pseudocode:

Modified_Actual_36_Block_Time = Target_36_Block_Time + Target_Blocktime*NaturalLog(Target_36_Block_Time/Actual_36_Block_Time)

When the actual blocktime is 5 minutes (50% of target), the retarget only goes up 1.93%. More examples are shown in the table here:

[table image not preserved]
The modeled system response to 1 GH/s attacks every time the difficulty drops below 45 is shown in the plot:

[plot image not preserved]

The actual code implementation is to add one line:

Code:
 if(pindexLast->nHeight >= fork2Block){
        numerator = 112;
        denominator = 100;
    }
    int64 nActualTimespan = pindexLast->GetBlockTime() - pindexFirst->GetBlockTime();
    int64 lowLimit = nTargetTimespanLocal*denominator/numerator;
    int64 highLimit = nTargetTimespanLocal*numerator/denominator;
    printf("  nActualTimespan = %"PRI64d"  before bounds\n", nActualTimespan);
    if (nActualTimespan < lowLimit)
        nActualTimespan = lowLimit;
    if (nActualTimespan > highLimit)
        nActualTimespan = highLimit;

to:

Code:
 if(pindexLast->nHeight >= fork2Block){
        numerator = 112;
        denominator = 100;
    }
    int64 nActualTimespan = pindexLast->GetBlockTime() - pindexFirst->GetBlockTime();
    nActualTimespan = nTargetTimespanLocal + nTargetSpacing*log((double)nTargetTimespanLocal/(double)nActualTimespan); // cast to double: int64 division would truncate the log argument
    int64 lowLimit = nTargetTimespanLocal*denominator/numerator;
    int64 highLimit = nTargetTimespanLocal*numerator/denominator;
    printf("  nActualTimespan = %"PRI64d"  before bounds\n", nActualTimespan);
    if (nActualTimespan < lowLimit)
        nActualTimespan = lowLimit;
    if (nActualTimespan > highLimit)
        nActualTimespan = highLimit;

The limits would still kick in at 12%, which happens when the actual time is roughly 75x or 1/75th the target time (solving 600·ln(r) = 0.12·21600 gives r = e^4.32 ≈ 75).
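
(For reference, here is a minimal standalone sketch that reproduces the table's numbers and the point where the clamp engages. This is my illustration, not code from the coin; it assumes Catcoin's 600 s spacing and 36-block window.)

Code:
// Illustration only: effective retarget step of the proposed log-weighted rule.
// Assumes Catcoin's parameters: 600 s target spacing, 36-block window.
#include <cmath>
#include <cstdio>

int main() {
    const double nTargetSpacing  = 600.0;          // 10-minute blocks
    const double nTargetTimespan = 36.0 * 600.0;   // 36-block target time
    const double ratios[] = {0.01, 0.1, 0.5, 1.0, 2.0, 10.0, 100.0};
    for (double r : ratios) {
        double actual   = r * nTargetTimespan;
        double modified = nTargetTimespan
                        + nTargetSpacing * std::log(nTargetTimespan / actual);
        double pct = 100.0 * (modified - nTargetTimespan) / nTargetTimespan;
        // The existing +/-12% clamp still applies on top of the log rule.
        if (pct >  12.0) pct =  12.0;
        if (pct < -12.0) pct = -12.0;
        printf("actual = %6.2fx target -> %+6.2f%% timespan change\n", r, pct);
    }
    return 0;
}

At 0.5x it prints the table's +1.93%; the clamp only engages past about a 75x deviation.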

This is not true. KGW has been used in coins with long, medium, and very short block target times. For example: MMC has a 6 min block time, Anon 3 min, Mega 2.5 min, Fox 1 min, Franko 30 s.

As for your graph, that's how the x% limit is supposed to work in theory: it pushes the coin to converge. But it doesn't work like that in practice, due to other external parameters. For example, profitability pools do not have a fixed hashrate; they can hit one time with an average hashrate and another time with peak hashrate, making the difficulty spike again. These pools don't mine for a fixed time or until a fixed difficulty; they keep mining as long as the coin is reasonably profitable, etc. Hence the need for a dynamic function that adapts to those conditions.
SlimePuppy — Hero Member (Activity: 655, Merit: 500)
February 04, 2014, 07:34:17 PM — #282

50 CAT bounty for the first post that shows that KGW was or is used successfully in a coin with a 10 minute or longer block time.


Disclaimer:  This is my money, not CATs that belong in any way to development or that have been donated by anyone for any reason.
envy2010 — Full Member (Activity: 168, Merit: 100)
February 04, 2014, 07:34:39 PM — #283

Quote
This is not true. KGW has been used in coins with long, medium, and very short block target times. For example: MMC has a 6 min block time, Anon 3 min, Mega 2.5 min, Fox 1 min, Franko 30 s.
Those are still significantly shorter than CAT. I'm not saying it wouldn't work, but it would need as much testing as any other solution, and it is much more complex.

Quote
As for your graph, that's how the x% limit is supposed to work in theory: it pushes the coin to converge. But it doesn't work like that in practice, due to other external parameters,
Actually, the % limit doesn't help much at all with convergence, even if you assume a fixed hashrate, because the difficulty spends so little time in the narrow band where there is usable feedback. Here's a graph projecting difficulty with a fixed network hashrate of 300 MH/s:

[graph image not preserved]
Quote
for example, profitability pools do not have a fixed hashrate; they can hit one time with an average hashrate and another time with peak hashrate, making the difficulty spike again. These pools don't mine for a fixed time or until a fixed difficulty; they keep mining as long as the coin is reasonably profitable, etc. Hence the need for a dynamic function that adapts to those conditions

I've included this in my model. As difficulty goes up, I recalculate the hashrate, dropping off exponentially, which is a reasonable assumption if you look at the pool hashrates here: http://cat.coinium.org/index.php?page=statistics&action=graphs

In my graph on the previous page I assumed that 1 GH/s worth of profit pools would jump in at any difficulty below 45, which is about where the current price/profitability point is. That's a bit arbitrary, but as the graph shows, after 4 cycles the difficulty just stays very slightly above profitability... which is exactly what we want. It would do the same if we assumed profitability occurred at 30 or 50 or 100.
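
(To make the setup concrete, here is a minimal sketch of that kind of simulation. It is my reconstruction, not envy2010's actual model: the exponential falloff uses the fitted constants envy2010 gives later in this thread, the 1 GH/s multipool switches on below diff 45, block times are taken at their expected value, and the log rule is applied as a direct multiplier on difficulty, following the table convention above.)

Code:
// Sketch of the modeled response; all assumptions noted inline.
#include <cmath>
#include <cstdio>
#include <deque>

int main() {
    const double target = 600.0;              // 10-minute blocks
    const double tgt36  = 36.0 * target;      // 36-block target timespan
    double diff = 30.0;
    std::deque<double> times(36, target);     // block-time window, seeded at target
    for (int block = 0; block < 300; ++block) {
        // Manual switchers tail off exponentially with difficulty
        // (fitted constants from a later post in this thread).
        double hash = 2681.0 * std::exp(-0.059 * diff) + 40.0;   // MH/s
        if (diff < 45.0) hash += 1000.0;      // multipool piles on below diff 45
        // Expected block time in seconds (deterministic; real solves are random).
        double t = diff * 4294967296.0 / (hash * 1e6);
        times.pop_front(); times.push_back(t);
        double actual = 0.0;
        for (double x : times) actual += x;
        // Log-weighted rule, clamped to +/-12%, applied as a difficulty multiplier.
        double modified = tgt36 + target * std::log(tgt36 / actual);
        if (modified > tgt36 * 1.12) modified = tgt36 * 1.12;
        if (modified < tgt36 / 1.12) modified = tgt36 / 1.12;
        diff *= modified / tgt36;
        if (block % 20 == 0)
            printf("block %3d  diff %6.2f  hash %7.1f MH/s\n", block, diff, hash);
    }
    return 0;
}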
zerodrama — Sr. Member (Activity: 364, Merit: 250)
February 04, 2014, 08:10:09 PM — #284

envy: The relative weighting of the 12% limit is something I've been thinking about. We should try that.

kuroman — Hero Member (Activity: 588, Merit: 501)
February 04, 2014, 08:22:57 PM (last edit: 08:40:43 PM) — #285

Quote
Those are still significantly shorter than CAT. I'm not saying it wouldn't work, but it would need as much testing as any other solution, and it is much more complex.

30 s is significantly shorter, but 6 min is not; we are talking about the same order of magnitude (60%), while 30 s is obviously far shorter (5%). You are just nitpicking; I'm sure if it were 8 min instead of 6 you would have said the same thing. I can also return the question to you: how many new scrypt altcoins can you count that have a 10-minute block time? You have your answer...


Quote
Actually, the % limit doesn't help much at all with convergence, even if you assume a fixed hashrate, because the difficulty spends so little time in the narrow band where there is usable feedback. Here's a graph projecting difficulty with a fixed network hashrate of 300 MH/s:

[graph image not preserved]
It does. With the current variation (whether you interpolate polynomially, e.g. with a Chebyshev polynomial, or, for those not initiated in numerical mathematics, just linearly with straight lines), the diff will converge.

Simplified explanation: take the case where the diff is increasing. The 12% limit makes it move in 12% steps per block. Now imagine the hashrate is such that it should push us to diff = 100. The diff starts climbing in 12% steps toward that value, but at some point, say diff = 60, the coin is no longer profitable and the profitability pools leave. So the peak we get is diff 60 instead of 100, because the diff increased in limited steps and never reached the top value it was headed for (with the next block, the diff target is already significantly lower and the diff starts decreasing). The same thing happens on the way down: the coin reaches a point where it is profitable again, say diff 20, while without the limit the diff might have fallen to 1 due to the low hashrate. So each cycle the coin bounces with a smaller minimum and maximum, converging toward the diff at which the coin is profitable. (No, this doesn't take the 36-block average into consideration; that average is what makes everything jerky, in addition to everything I mentioned before.) So maybe removing the 36-block average can solve the issue, if the other parameters don't swing in a major way.
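
(A toy illustration of this bouncing argument, under deliberately crude assumptions of mine: a single on/off pool that mines only below diff 40, per-block retargeting, a hard 12% step, and no 36-block average.)

Code:
// Toy model: fixed 12% steps against an on/off profitability pool.
#include <cstdio>

int main() {
    double diff = 10.0;
    for (int block = 0; block < 40; ++block) {
        // Assumed profitability threshold: the pool mines only below diff 40.
        bool poolOn = diff < 40.0;
        // Per-block retarget at the full 12% limit in whichever direction applies.
        diff *= poolOn ? 1.12 : 1.0 / 1.12;
        printf("block %2d  diff %6.2f  pool %s\n", block, diff, poolOn ? "on" : "off");
    }
    return 0;
}

The difficulty climbs to the profitability point and then oscillates within one 12% step of it; the 36-block average and swinging outside parameters are exactly what this toy leaves out.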


Quote
I've included this in my model. As difficulty goes up, I recalculate the hashrate, dropping off exponentially, which is a reasonable assumption if you look at the pool hashrates here: http://cat.coinium.org/index.php?page=statistics&action=graphs

In my graph on the previous page I assumed that 1 GH/s worth of profit pools would jump in at any difficulty below 45, which is about where the current price/profitability point is. That's a bit arbitrary, but as the graph shows, after 4 cycles the difficulty just stays very slightly above profitability... which is exactly what we want. It would do the same if we assumed profitability occurred at 30 or 50 or 100.

Sadly, this is a wrong assumption, and I understand the reasoning behind it, but it doesn't work like that. Consider the bulk of the increased hashrate from profitability pools as an on/off value, because these pools switch all their hashrate from one coin to another instantly (switching ports dynamically for everyone at the same time). There is a minority that switches coins manually; while that part is random, it can be assimilated to, or interpolated as, an exponential function.
zccopwrx — Full Member (Activity: 306, Merit: 100)
February 04, 2014, 08:40:48 PM — #286

HashFaster's Crypto-Mining Network

http://cat.hashfaster.com has upgraded to the latest wallet!!!

Come help us grow.

ZC,
Owner of HashFaster
SlimePuppy — Hero Member (Activity: 655, Merit: 500)
February 04, 2014, 08:42:03 PM (last edit: February 05, 2014, 03:01:58 AM) — #287

Code complexity...

Kimoto Gravity Well code from the creator's coin - Megacoin.  Main.cpp, beginning on line 1276.  I cannot confirm this is all the code, just the main block I can easily identify.

https://github.com/megacoin/megacoin/blob/master/src/main.cpp

Code:
unsigned int static KimotoGravityWell(const CBlockIndex* pindexLast, const CBlockHeader *pblock, uint64 TargetBlocksSpacingSeconds, uint64 PastBlocksMin, uint64 PastBlocksMax) {
    /* current difficulty formula, megacoin - kimoto gravity well */
    const CBlockIndex  *BlockLastSolved = pindexLast;
    const CBlockIndex  *BlockReading    = pindexLast;
    const CBlockHeader *BlockCreating   = pblock;
    BlockCreating = BlockCreating;  // no-op; silences the unused-parameter warning
    uint64  PastBlocksMass = 0;
    int64   PastRateActualSeconds = 0;
    int64   PastRateTargetSeconds = 0;
    double  PastRateAdjustmentRatio = double(1);
    CBigNum PastDifficultyAverage;
    CBigNum PastDifficultyAveragePrev;
    double  EventHorizonDeviation;
    double  EventHorizonDeviationFast;
    double  EventHorizonDeviationSlow;

    if (BlockLastSolved == NULL || BlockLastSolved->nHeight == 0 || (uint64)BlockLastSolved->nHeight < PastBlocksMin) { return bnProofOfWorkLimit.GetCompact(); }

    for (unsigned int i = 1; BlockReading && BlockReading->nHeight > 0; i++) {
        if (PastBlocksMax > 0 && i > PastBlocksMax) { break; }
        PastBlocksMass++;

        // Cumulative moving average of the difficulty over the blocks read so far
        if (i == 1) { PastDifficultyAverage.SetCompact(BlockReading->nBits); }
        else        { PastDifficultyAverage = ((CBigNum().SetCompact(BlockReading->nBits) - PastDifficultyAveragePrev) / i) + PastDifficultyAveragePrev; }
        PastDifficultyAveragePrev = PastDifficultyAverage;

        PastRateActualSeconds   = BlockLastSolved->GetBlockTime() - BlockReading->GetBlockTime();
        PastRateTargetSeconds   = TargetBlocksSpacingSeconds * PastBlocksMass;
        PastRateAdjustmentRatio = double(1);
        if (PastRateActualSeconds < 0) { PastRateActualSeconds = 0; }
        if (PastRateActualSeconds != 0 && PastRateTargetSeconds != 0) {
            PastRateAdjustmentRatio = double(PastRateTargetSeconds) / double(PastRateActualSeconds);
        }
        // The "event horizon": how far the rate ratio may deviate before the window is cut short
        EventHorizonDeviation     = 1 + (0.7084 * pow((double(PastBlocksMass)/double(144)), -1.228));
        EventHorizonDeviationFast = EventHorizonDeviation;
        EventHorizonDeviationSlow = 1 / EventHorizonDeviation;

        if (PastBlocksMass >= PastBlocksMin) {
            if ((PastRateAdjustmentRatio <= EventHorizonDeviationSlow) || (PastRateAdjustmentRatio >= EventHorizonDeviationFast)) { assert(BlockReading); break; }
        }
        if (BlockReading->pprev == NULL) { assert(BlockReading); break; }
        BlockReading = BlockReading->pprev;
    }

    // Retarget: scale the averaged difficulty by actual vs. target elapsed time
    CBigNum bnNew(PastDifficultyAverage);
    if (PastRateActualSeconds != 0 && PastRateTargetSeconds != 0) {
        bnNew *= PastRateActualSeconds;
        bnNew /= PastRateTargetSeconds;
    }
    if (bnNew > bnProofOfWorkLimit) { bnNew = bnProofOfWorkLimit; }

    /// debug print
    printf("Difficulty Retarget - Kimoto Gravity Well\n");
    printf("PastRateAdjustmentRatio = %g\n", PastRateAdjustmentRatio);
    printf("Before: %08x  %s\n", BlockLastSolved->nBits, CBigNum().SetCompact(BlockLastSolved->nBits).getuint256().ToString().c_str());
    printf("After:  %08x  %s\n", bnNew.GetCompact(), bnNew.getuint256().ToString().c_str());

    return bnNew.GetCompact();
}

unsigned int static GetNextWorkRequired_V2(const CBlockIndex* pindexLast, const CBlockHeader *pblock)
{
    static const int64 BlocksTargetSpacing = 2.5 * 60; // 2.5 minutes
    unsigned int TimeDaySeconds = 60 * 60 * 24;
    int64  PastSecondsMin = TimeDaySeconds * 0.25;
    int64  PastSecondsMax = TimeDaySeconds * 7;
    uint64 PastBlocksMin  = PastSecondsMin / BlocksTargetSpacing;
    uint64 PastBlocksMax  = PastSecondsMax / BlocksTargetSpacing;

    return KimotoGravityWell(pindexLast, pblock, BlocksTargetSpacing, PastBlocksMin, PastBlocksMax);
}


Code from Phoenixcoin's 4th fork (Oct 2013), where they implemented their difficulty fix. Their method is to take the actual timespan of the last 100 blocks and of the last 500 blocks, average the two, then apply 10:1 damping toward the target. Finally, a 1.02 limiter allows at most a 2% move each block. I think I have the appropriate code, but might have missed something. Beginning on line 937 of main.cpp:
https://github.com/ghostlander/Phoenixcoin/blob/master/src/main.cpp

Code:
    // Basic 100 blocks averaging after the 4th livenet or 1st testnet hard fork
    if((nHeight >= nForkFour) || (fTestNet && (nHeight >= nTestnetForkOne))) {
        nInterval *= 5;
        nTargetTimespan *= 5;
    }


   // Extended 500 blocks averaging after the 4th livenet or 1st testnet hard fork
    if((nHeight >= nForkFour) || (fTestNet && (nHeight >= nTestnetForkOne))) {
        nInterval *= 5;

        const CBlockIndex* pindexFirst = pindexLast;
        for(int i = 0; pindexFirst && i < nInterval; i++)
          pindexFirst = pindexFirst->pprev;

        int nActualTimespanExtended =
          (pindexLast->GetBlockTime() - pindexFirst->GetBlockTime())/5;

        // Average between the basic and extended windows
        int nActualTimespanAvg = (nActualTimespan + nActualTimespanExtended)/2;

        // Apply 0.1 damping
        nActualTimespan = nActualTimespanAvg + 9*nTargetTimespan;
        nActualTimespan /= 10;

        printf("RETARGET: nActualTimespanExtended = %d (%d), nActualTimeSpanAvg = %d, nActualTimespan (damped) = %d\n",
          nActualTimespanExtended, nActualTimespanExtended*5, nActualTimespanAvg, nActualTimespan);
    }

   // The 4th livenet or 1st testnet hard fork (1.02 difficulty limiter)
    if((nHeight >= nForkFour) || (fTestNet && (nHeight >= nTestnetForkOne))) {
        nActualTimespanMax = nTargetTimespan*102/100;
        nActualTimespanMin = nTargetTimespan*100/102;
    }
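
(To make the damping concrete, a small sketch of my own with made-up numbers: even a window that ran at half the target time survives the 10:1 damping as only a 5% deviation, and the 1.02 limiter then clips the move to 2%.)

Code:
// Worked example of Phoenixcoin-style damping and limiting (illustrative numbers).
#include <cstdio>

int main() {
    const double T   = 1.0;   // target timespan, normalized
    const double avg = 0.5;   // averaged window came in at half the target time
    // 0.1 damping: nActualTimespan = (avg + 9*T) / 10
    double damped = (avg + 9.0 * T) / 10.0;          // -> 0.95
    // 1.02 limiter: at most a 2% move per block
    if (damped < T / 1.02) damped = T / 1.02;
    if (damped > T * 1.02) damped = T * 1.02;
    printf("damped timespan = %.4f of target -> difficulty moves %+.2f%%\n",
           damped, 100.0 * (T / damped - 1.0));
    return 0;
}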




Catcoin's current 1 block retarget, 36 block SMA, 12% limit.  Not sure I have all of this... don't shoot me quite yet. ;)  Main.cpp, various sections after line 1045:

https://github.com/CatcoinOfficial/CatcoinRelease/blob/master/src/main.cpp

Code:
static const int64 nTargetTimespan = 6 * 60 * 60; // 6 hours
static const int64 nTargetSpacing = 10 * 60;
static const int64 nInterval = nTargetTimespan / nTargetSpacing;

static const int64 nTargetTimespanOld = 14 * 24 * 60 * 60; // two weeks
static const int64 nIntervalOld = nTargetTimespanOld / nTargetSpacing;


    // after fork2Block we retarget every block  
    if(pindexLast->nHeight < fork2Block){
        // Only change once per interval
        if ((pindexLast->nHeight+1) % nIntervalLocal != 0)


 // Go back by what we want to be 14 days worth of blocks
    const CBlockIndex* pindexFirst = pindexLast;
    for (int i = 0; pindexFirst && i < blockstogoback; i++)
        pindexFirst = pindexFirst->pprev;
    assert(pindexFirst);


    // Limit adjustment step
    int numerator = 4;
    int denominator = 1;
    if(pindexLast->nHeight >= fork2Block){
        numerator = 112;
        denominator = 100;
    }
    int64 nActualTimespan = pindexLast->GetBlockTime() - pindexFirst->GetBlockTime();
    int64 lowLimit = nTargetTimespanLocal*denominator/numerator;
    int64 highLimit = nTargetTimespanLocal*numerator/denominator;
    printf("  nActualTimespan = %"PRI64d"  before bounds\n", nActualTimespan);
    if (nActualTimespan < lowLimit)
        nActualTimespan = lowLimit;
    if (nActualTimespan > highLimit)
        nActualTimespan = highLimit;

envy2010 — Full Member (Activity: 168, Merit: 100)
February 04, 2014, 08:45:58 PM — #288

Quote
Quote
Those are still significantly shorter than CAT. I'm not saying it wouldn't work, but it would need as much testing as any other solution, and it is much more complex.
30 s is significantly shorter, but 6 min is not; we are talking about the same order of magnitude (60%), while 30 s is obviously far shorter (5%). You are just nitpicking; I'm sure if it were 8 min instead of 6 you would have said the same thing. I can also return the question to you: how many new scrypt altcoins can you count that have a 10-minute block time? You have your answer...

The dynamic response of a 6-minute coin is significantly different from that of a 10-minute coin. We don't know how KGW will respond, therefore we have to test, just like any other solution. I never said it wouldn't work, just that it's neither simple nor a copy/paste solution.

Quote
It does. With the current variation (whether you interpolate polynomially, e.g. with a Chebyshev polynomial, or, for those not initiated in numerical mathematics, just linearly with straight lines), the diff will converge.

Simplified explanation: take the case where the diff is increasing. The 12% limit makes it move in 12% steps per block. Now imagine the hashrate is such that it should push us to diff = 100. The diff starts climbing in 12% steps toward that value, but at some point, say diff = 60, the coin is no longer profitable and the profitability pools leave. So the peak we get is diff 60 instead of 100, because the diff increased in limited steps and never reached the top value it was headed for. The same thing happens on the way down: the coin reaches a point where it is profitable again, say diff 20, while without the limit the diff might have fallen to 1. So each cycle the coin bounces with a smaller minimum and maximum, converging toward the diff at which the coin is profitable. So maybe removing the 36-block average can solve the issue, if the other parameters don't swing in a major way.

I am very familiar with dynamic systems theory, so an explanation is not necessary here. What is needed is proof to back up what you are saying. If that's how you think the system will behave, model it; show your inputs and results like I did.

And no, removing the trailing average completely won't help, because the goal is to stabilize an inherently unstable system, which is very difficult even if you know where it should be headed (which is the function of the average).

Quote
these pools switch all their hashrate from one coin to another instantly (switching ports dynamically for everyone at the same time)

I have both the exponential function (manual switchers) AND the instant switching (auto pools) modeled. I'm throwing an extra 1000 MH/s at the simulation the block after diff goes under 45 (ON TOP of the exponentially increasing manual switchers), and keeping it there until diff goes over 45. 45 is an arbitrary approximation, but I can prove that the response is similar if profitability occurs at 30 or 50 or 100.
envy2010 — Full Member (Activity: 168, Merit: 100)
February 04, 2014, 08:51:25 PM — #289

Quote
envy: The relative weighting of the 12% limit is something I've been thinking about. We should try that.

The response isn't perfect, but it's pretty good from what I've seen -- and the implementation (just one line) is so simple that it could be compiled and running on the testnet in minutes.

The response could be further tweaked by using different bases than e. Base 10 or 2 would be easy to try out.

Edit: from a quick look, base 2 is underdamped, 2.5 and 3 look similar to e (not surprising), 5 is well damped, and 10 is very well damped. Anything from e to 10 is worth a second look.
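
(If it helps, changing the base just rescales the step by 1/ln(b); a quick sketch of mine, with the same assumed constants as before:)

Code:
// Step size at a 2x deviation for different logarithm bases.
#include <cmath>
#include <cstdio>

int main() {
    const double window = 36.0;            // spacing/timespan ratio is 1/36
    const double bases[] = {2.0, std::exp(1.0), 5.0, 10.0};
    for (double b : bases) {
        // change% = (spacing/timespan) * log_b(2) * 100 = 100*ln(2)/ln(b)/36
        double step = 100.0 * std::log(2.0) / std::log(b) / window;
        printf("base %5.2f -> %.2f%% step when blocks run 2x off target\n", b, step);
    }
    return 0;
}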
kuroman — Hero Member (Activity: 588, Merit: 501)
February 04, 2014, 09:09:29 PM — #290

Quote
The dynamic response of a 6-minute coin is significantly different from that of a 10-minute coin. We don't know how KGW will respond, therefore we have to test, just like any other solution. I never said it wouldn't work, just that it's neither simple nor a copy/paste solution.
No, it is not. Tell me which difference is greater: from 6 min to 10, or from 30 s to 6 min? KGW has proven to work in every case from 30 s to 6 min. By any law of probability you want to use, there is an 80-90% chance it will work from the get-go with a 10-minute block time.

Quote
I am very familiar with dynamic systems theory, so an explanation is not necessary here. What is needed is proof to back up what you are saying. If that's how you think the system will behave, model it; show your inputs and results like I did.

And no, removing the trailing average completely won't help, because the goal is to stabilize an inherently unstable system, which is very difficult even if you know where it should be headed (which is the function of the average).
I've added the explanation for those who are not math-initiated. (And I don't know what people want anymore; some complain about complex math or code, others complain about explanations...)
The explanation proves itself on its own; it's just interpolation math. If you think it is wrong, feel free to point out where, and I will be happy to provide argumentation if needed. If you want a graphical representation I can do that as well, although I've already done so before, under the assumption of a one-way limit (a limit on increases and none on decreases); check my previous comments. And no, the average is making matters worse right now, as explained before. Will removing it solve the issue? That depends on the other parameters: if they don't swing and stay at a constant level, it will most likely solve the problem, but that is a BIG assumption. Hence a dynamic value instead of a fixed 12%, which negates the effect of the other parameters.


Quote
I have both the exponential function (manual switchers) AND the instant switching (auto pools) modeled. I'm throwing an extra 1000 MH/s at the simulation the block after diff goes under 45 (ON TOP of the exponentially increasing manual switchers), and keeping it there until diff goes over 45. 45 is an arbitrary approximation, but I can prove that the response is similar if profitability occurs at 30 or 50 or 100.


It will converge, and I agree with you here; this is what the current fork was supposed to achieve. What you aren't taking into consideration are the other parameters I mentioned before (and obviously I was referring to the exponential decrease you mentioned, which is a wrong assumption).

Also, Envy, thank you for your effort. I'm not discrediting your work by any means, just giving you my honest analysis and bringing facts to back it up. I feel that if we keep discussing and simulating different scenarios we will find a solution, but for now I believe we can use KGW, as it is available and proven to work for every coin it was used on. I personally believe the dev process should not be limited to solving major issues; it should be continuous work to keep improving the coin.
SlimePuppy — Hero Member (Activity: 655, Merit: 500)
February 04, 2014, 09:15:29 PM — #291



Quote
[chart image not preserved]
So... a new fork coming? They clearly have >51%.
Network hashrate is ~300 MH/s and theirs is almost 200 MH/s, so they have almost 2/3 of the entire network (~65%), which is A LOT.

And also the Team CatCoin pool (around 80 MH/s, so ~25% of the network) is somewhat stuck, with stats like this:
PPLNS Target: 195812
Est. Shares: 156687 (done: 302.82%)
Pool Valid: 474480

This is even worse than Bitcoin at the moment, with cex.io having almost half of the network...
Don't put a ton of stock in this chart - it doesn't include all the pools or any solo miners.  Just participate in a crowd-sourced attempt to keep things in better balance, that's all.  THANKS!
kuroman — Hero Member (Activity: 588, Merit: 501)
February 04, 2014, 09:18:01 PM — #292

Quote
Code complexity...

Kimoto Gravity Well code from the creator's coin - Megacoin.  Main.cpp, beginning on line 1276.  I cannot confirm this is all the code, just the main block I can easily identify.

https://github.com/megacoin/megacoin/blob/master/src/main.cpp

Code:
    if (BlockLastSolved == NULL || BlockLastSolved->nHeight == 0 || (uint64)BlockLastSolved->nHeight < PastBlocksMin) { return bnProofOfWorkLimit.GetCompact(); }

for (unsigned int i = 1; BlockReading && BlockReading->nHeight > 0; i++) {
if (PastBlocksMax > 0 && i > PastBlocksMax) { break; }
PastBlocksMass++;

if (i == 1) { PastDifficultyAverage.SetCompact(BlockReading->nBits); }
else { PastDifficultyAverage = ((CBigNum().SetCompact(BlockReading->nBits) - PastDifficultyAveragePrev) / i) + PastDifficultyAveragePrev; }
PastDifficultyAveragePrev = PastDifficultyAverage;

PastRateActualSeconds = BlockLastSolved->GetBlockTime() - BlockReading->GetBlockTime();
PastRateTargetSeconds = TargetBlocksSpacingSeconds * PastBlocksMass;
PastRateAdjustmentRatio = double(1);
if (PastRateActualSeconds < 0) { PastRateActualSeconds = 0; }
if (PastRateActualSeconds != 0 && PastRateTargetSeconds != 0) {
PastRateAdjustmentRatio = double(PastRateTargetSeconds) / double(PastRateActualSeconds);
}
EventHorizonDeviation = 1 + (0.7084 * pow((double(PastBlocksMass)/double(144)), -1.228));
EventHorizonDeviationFast = EventHorizonDeviation;
EventHorizonDeviationSlow = 1 / EventHorizonDeviation;

if (PastBlocksMass >= PastBlocksMin) {
if ((PastRateAdjustmentRatio <= EventHorizonDeviationSlow) || (PastRateAdjustmentRatio >= EventHorizonDeviationFast)) { assert(BlockReading); break; }
}
if (BlockReading->pprev == NULL) { assert(BlockReading); break; }
BlockReading = BlockReading->pprev;
}

CBigNum bnNew(PastDifficultyAverage);
if (PastRateActualSeconds != 0 && PastRateTargetSeconds != 0) {
bnNew *= PastRateActualSeconds;
bnNew /= PastRateTargetSeconds;
}
    if (bnNew > bnProofOfWorkLimit) { bnNew = bnProofOfWorkLimit; }

    /// debug print
    printf("Difficulty Retarget - Kimoto Gravity Well\n");
    printf("PastRateAdjustmentRatio = %g\n", PastRateAdjustmentRatio);
    printf("Before: %08x  %s\n", BlockLastSolved->nBits, CBigNum().SetCompact(BlockLastSolved->nBits).getuint256().ToString().c_str());
    printf("After:  %08x  %s\n", bnNew.GetCompact(), bnNew.getuint256().ToString().c_str());

return bnNew.GetCompact();
}



Code from Phoenixcoin's 4th fork (Oct 2013), where they implemented their difficulty fix. Their method is to take the actual timespan of the last 100 blocks and of the last 500 blocks, average the two, then apply 10:1 damping toward the target. Finally, a 1.02 limiter allows at most a 2% move each block. I think I have the appropriate code, but might have missed something. Beginning on line 937 of main.cpp:
https://github.com/ghostlander/Phoenixcoin/blob/master/src/main.cpp

Code:
    // Basic 100 blocks averaging after the 4th livenet or 1st testnet hard fork
    if((nHeight >= nForkFour) || (fTestNet && (nHeight >= nTestnetForkOne))) {
        nInterval *= 5;
        nTargetTimespan *= 5;
    }


   // Extended 500 blocks averaging after the 4th livenet or 1st testnet hard fork
    if((nHeight >= nForkFour) || (fTestNet && (nHeight >= nTestnetForkOne))) {
        nInterval *= 5;

        const CBlockIndex* pindexFirst = pindexLast;
        for(int i = 0; pindexFirst && i < nInterval; i++)
          pindexFirst = pindexFirst->pprev;

        int nActualTimespanExtended =
          (pindexLast->GetBlockTime() - pindexFirst->GetBlockTime())/5;

        // Average between the basic and extended windows
        int nActualTimespanAvg = (nActualTimespan + nActualTimespanExtended)/2;

        // Apply 0.1 damping
        nActualTimespan = nActualTimespanAvg + 9*nTargetTimespan;
        nActualTimespan /= 10;

        printf("RETARGET: nActualTimespanExtended = %d (%d), nActualTimeSpanAvg = %d, nActualTimespan (damped) = %d\n",
          nActualTimespanExtended, nActualTimespanExtended*5, nActualTimespanAvg, nActualTimespan);
    }

   // The 4th livenet or 1st testnet hard fork (1.02 difficulty limiter)
    if((nHeight >= nForkFour) || (fTestNet && (nHeight >= nTestnetForkOne))) {
        nActualTimespanMax = nTargetTimespan*102/100;
        nActualTimespanMin = nTargetTimespan*100/102;
    }




Catcoin's current 1 block retarget, 36 block SMA, 12% limit.  Not sure I have all of this... don't shoot me quite yet. ;)  Main.cpp, various sections after line 1045:

https://github.com/CatcoinOfficial/CatcoinRelease/blob/master/src/main.cpp

Code:
    // after fork2Block we retarget every block   
    if(pindexLast->nHeight < fork2Block){
        // Only change once per interval
        if ((pindexLast->nHeight+1) % nIntervalLocal != 0)


(code missing...check back when I find a clue...or more breadcrumbs...)


    // Limit adjustment step
    int numerator = 4;
    int denominator = 1;
    if(pindexLast->nHeight >= fork2Block){
        numerator = 112;
        denominator = 100;
    }
    int64 nActualTimespan = pindexLast->GetBlockTime() - pindexFirst->GetBlockTime();
    int64 lowLimit = nTargetTimespanLocal*denominator/numerator;
    int64 highLimit = nTargetTimespanLocal*numerator/denominator;
    printf("  nActualTimespan = %"PRI64d"  before bounds\n", nActualTimespan);
    if (nActualTimespan < lowLimit)
        nActualTimespan = lowLimit;
    if (nActualTimespan > highLimit)
        nActualTimespan = highLimit;


This comparison is biased, because you don't remove the variable-definition part from KGW or add it for the other coins' code (those are defined at the beginning of the file, if I'm not mistaken).

The exact code comparison without the definitions would look like this:
KGW

Code:
    if (BlockLastSolved == NULL || BlockLastSolved->nHeight == 0 || (uint64)BlockLastSolved->nHeight < PastBlocksMin) { return bnProofOfWorkLimit.GetCompact(); }

for (unsigned int i = 1; BlockReading && BlockReading->nHeight > 0; i++) {
if (PastBlocksMax > 0 && i > PastBlocksMax) { break; }
PastBlocksMass++;

if (i == 1) { PastDifficultyAverage.SetCompact(BlockReading->nBits); }
else { PastDifficultyAverage = ((CBigNum().SetCompact(BlockReading->nBits) - PastDifficultyAveragePrev) / i) + PastDifficultyAveragePrev; }
PastDifficultyAveragePrev = PastDifficultyAverage;

PastRateActualSeconds = BlockLastSolved->GetBlockTime() - BlockReading->GetBlockTime();
PastRateTargetSeconds = TargetBlocksSpacingSeconds * PastBlocksMass;
PastRateAdjustmentRatio = double(1);
if (PastRateActualSeconds < 0) { PastRateActualSeconds = 0; }
if (PastRateActualSeconds != 0 && PastRateTargetSeconds != 0) {
PastRateAdjustmentRatio = double(PastRateTargetSeconds) / double(PastRateActualSeconds);
}
EventHorizonDeviation = 1 + (0.7084 * pow((double(PastBlocksMass)/double(144)), -1.228));
EventHorizonDeviationFast = EventHorizonDeviation;
EventHorizonDeviationSlow = 1 / EventHorizonDeviation;

if (PastBlocksMass >= PastBlocksMin) {
if ((PastRateAdjustmentRatio <= EventHorizonDeviationSlow) || (PastRateAdjustmentRatio >= EventHorizonDeviationFast)) { assert(BlockReading); break; }
}
if (BlockReading->pprev == NULL) { assert(BlockReading); break; }
BlockReading = BlockReading->pprev;
}

CBigNum bnNew(PastDifficultyAverage);
if (PastRateActualSeconds != 0 && PastRateTargetSeconds != 0) {
bnNew *= PastRateActualSeconds;
bnNew /= PastRateTargetSeconds;
}
    if (bnNew > bnProofOfWorkLimit) { bnNew = bnProofOfWorkLimit; }

    /// debug print
    printf("Difficulty Retarget - Kimoto Gravity Well\n");
    printf("PastRateAdjustmentRatio = %g\n", PastRateAdjustmentRatio);
    printf("Before: %08x  %s\n", BlockLastSolved->nBits, CBigNum().SetCompact(BlockLastSolved->nBits).getuint256().ToString().c_str());
    printf("After:  %08x  %s\n", bnNew.GetCompact(), bnNew.getuint256().ToString().c_str());

return bnNew.GetCompact();
}


And the KGW block still includes the debug printing, which is not included for the other coins and comes much later in the code, if I'm not mistaken.

Compared to:

Code:
 // Basic 100 blocks averaging after the 4th livenet or 1st testnet hard fork
    if((nHeight >= nForkFour) || (fTestNet && (nHeight >= nTestnetForkOne))) {
        nInterval *= 5;
        nTargetTimespan *= 5;
    }


   // Extended 500 blocks averaging after the 4th livenet or 1st testnet hard fork
    if((nHeight >= nForkFour) || (fTestNet && (nHeight >= nTestnetForkOne))) {
        nInterval *= 5;

        const CBlockIndex* pindexFirst = pindexLast;
        for(int i = 0; pindexFirst && i < nInterval; i++)
          pindexFirst = pindexFirst->pprev;

        int nActualTimespanExtended =
          (pindexLast->GetBlockTime() - pindexFirst->GetBlockTime())/5;

        // Average between the basic and extended windows
        int nActualTimespanAvg = (nActualTimespan + nActualTimespanExtended)/2;

        // Apply 0.1 damping
        nActualTimespan = nActualTimespanAvg + 9*nTargetTimespan;
        nActualTimespan /= 10;

        printf("RETARGET: nActualTimespanExtended = %d (%d), nActualTimeSpanAvg = %d, nActualTimespan (damped) = %d\n",
          nActualTimespanExtended, nActualTimespanExtended*5, nActualTimespanAvg, nActualTimespan);
    }

   // The 4th livenet or 1st testnet hard fork (1.02 difficulty limiter)
    if((nHeight >= nForkFour) || (fTestNet && (nHeight >= nTestnetForkOne))) {
        nActualTimespanMax = nTargetTimespan*102/100;
        nActualTimespanMin = nTargetTimespan*100/102;
    }




Maverickthenoob (OP) — Member (Activity: 70, Merit: 10)
February 04, 2014, 09:35:49 PM — #293



Cryptsy is now in sync.

SlimePuppy — Hero Member (Activity: 655, Merit: 500)
February 04, 2014, 09:36:37 PM — #294

Code complexity...

Kimoto Gravity Well code from the creator's coin - Megacoin.  Main.cpp, beginning on line 1276.  I cannot confirm this is all the code, just the main block I can easily identify.

<snips>

This comparison is biased, because you don't remove the variable-definition part from KGW or add it for the other coins' code (those are defined at the beginning of the file, if I'm not mistaken)
<snips>
Fear not, Kuroman - I'm not interested in a code-length contest and don't think it should be a factor.  None of the solutions or attempted solutions are 1 line of code. ;)
envy2010 — Full Member (Activity: 168, Merit: 100)
February 04, 2014, 09:43:13 PM — #295

We should implement KGW if it's shown to be the best solution, after considering all options. It's not going to be a copy/paste solution, but it could certainly be done, and there is no hurry for the next update.

Quote
no, the average is making matters worse right now, as explained before. Will removing it solve the issue? That depends on the other parameters: if they don't swing and stay at a constant level, it will most likely solve the problem, but that is a BIG assumption. Hence a dynamic value instead of a fixed 12%, which negates the effect of the other parameters.

Block solving is random, so the difficulty HAS to be based on some sort of an average. Obviously the 36SMA isn't perfect, but it's working now and could be very good with very minor modifications.

Quote
It will converge, and I agree with you here; this is what the current fork was supposed to achieve. What you aren't taking into consideration are the other parameters I mentioned before (and obviously I was referring to the exponential decrease you mentioned, which is a wrong assumption).

I'm no longer sure it will converge. I can't get it to converge in my sims; I only get more of the same no matter what the hash input is.

And the function I've found works BEST (not perfect) by experimentation is: nethash = 2681*EXP(-0.059*current_diff)+40. Go pull some historical difficulties and the corresponding hashrates, and see that it's a reasonable approximation. There is certainly a non-linear correlation between difficulty and nethash.
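
(For a feel of that fit's shape, a quick evaluation of mine at a few difficulties:)

Code:
// Evaluating envy2010's fitted falloff: nethash (MH/s) vs. difficulty.
#include <cmath>
#include <cstdio>

int main() {
    const double diffs[] = {10.0, 30.0, 45.0, 100.0};
    for (double d : diffs) {
        double nethash = 2681.0 * std::exp(-0.059 * d) + 40.0;
        printf("diff %6.1f -> ~%4.0f MH/s\n", d, nethash);
    }
    return 0;
}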

Quote
Also, Envy, thank you for your effort. I'm not discrediting your work by any means, just giving you my honest analysis and bringing facts to back it up. I feel that if we keep discussing and simulating different scenarios we will find a solution, but for now I believe we can use KGW, as it is available and proven to work for every coin it was used on. I personally believe the dev process should not be limited to solving major issues; it should be continuous work to keep improving the coin.

The next update to the coin is not a rush because it's already functioning well enough to get by. I agree that it should and will be a continuous improvement process.
envy2010 — Full Member (Activity: 168, Merit: 100)
February 04, 2014, 09:48:00 PM — #296

Code complexity...

Kimoto Gravity Well code from the creator's coin - Megacoin.  Main.cpp, beginning on line 1276.  I cannot confirm this is all the code, just the main block I can easily identify.

<snips>

This comparison is biased, because you don't remove the variable-definition part from KGW or add it for the other coins' code (those are defined at the beginning of the file, if I'm not mistaken)
<snips>
Fear not, Kuroman - I'm not interested in a code-length contest and don't think it should be a factor.  None of the solutions or attempted solutions are 1 line of code. ;)

Actually, I proposed a possible one-line exponentially weighted average solution that uses only previously defined variables and functions. Not that code complexity should be the biggest factor, but it's something that should be considered...

The actual code implementation is to add one line:

Code:
    int64 nActualTimespan = pindexLast->GetBlockTime() - pindexFirst->GetBlockTime();
    int64 lowLimit = nTargetTimespanLocal*denominator/numerator;
    int64 highLimit = nTargetTimespanLocal*numerator/denominator;

to:

Code:
    int64 nActualTimespan = pindexLast->GetBlockTime() - pindexFirst->GetBlockTime();
    nActualTimespan = nTargetTimespanLocal + nTargetSpacing*log((double)nTargetTimespanLocal/(double)nActualTimespan); // cast to double: int64 division would truncate the log argument
    int64 lowLimit = nTargetTimespanLocal*denominator/numerator;
    int64 highLimit = nTargetTimespanLocal*numerator/denominator;
SlimePuppy — Hero Member (Activity: 655, Merit: 500)
February 04, 2014, 10:08:16 PM — #297

How about this, everyone - first, we put options on the table.  Then we arm wrestle, fight, measure size, etc.  Then we code some finalists and get them on testnet.  Then we choose one, code it, verify the code, and implement it.

As I see it, we're in phase one: fill the table.  This is not the time to start arm wrestling. ;)

peace...
envy2010 — Full Member (Activity: 168, Merit: 100)
February 04, 2014, 10:49:24 PM — #298

Quote
How about this, everyone - first, we put options on the table.  Then we arm wrestle, fight, measure size, etc.  Then we code some finalists and get them on testnet.  Then we choose one, code it, verify the code, and implement it.

As I see it, we're in phase one: fill the table.  This is not the time to start arm wrestling. ;)

peace...

Haha, sure. Just pointing out that a one-liner might be feasible.
SlimePuppy — Hero Member (Activity: 655, Merit: 500)
February 04, 2014, 11:32:33 PM (last edit: February 05, 2014, 12:39:21 AM) — #299

Quote
50 CAT bounty for the first post that shows that KGW was or is used successfully in a coin with a 10 minute or longer block time.


Disclaimer:  This is my money, not CATs that belong in any way to development or that have been donated by anyone for any reason.
50 CAT bounty for the first post that shows that the Kimoto Gravity Well (KGW) was or is used successfully in a coin with a 10 minute or longer block time.

Anyone?
SlimePuppy — Hero Member (Activity: 655, Merit: 500)
February 04, 2014, 11:33:54 PM — #300

Quote
How about this, everyone - first, we put options on the table.  Then we arm wrestle, fight, measure size, etc.  Then we code some finalists and get them on testnet.  Then we choose one, code it, verify the code, and implement it.

As I see it, we're in phase one: fill the table.  This is not the time to start arm wrestling. ;)

peace...

Haha, sure. Just pointing out that a one-liner might be feasible.
Agreed.   Just trying to keep the gravity well folks from lighting their torches.  No holy wars yet. LOL