thsminer | Sr. Member | Activity: 332 | Merit: 250
October 21, 2014, 03:25:18 PM  #5381


Quote from: thsminer
Beside all these solutions, I still think the problem is the way DGW3 is implemented. It just does not react the way it should, and that has nothing to do with what it was designed for, because it's a counting algo based on the last 24 blocks. One thing that also keeps me busy is the fact that KGW did not work properly either, so maybe the solution is in some totally different place in the code. You sometimes see the same strange behaviour when variables are used outside their memory space and get overwritten by some other procedure.

Quote from: /GeertJohan
DGW3 was not modified, except for some code comments and alignment. It looks like DGW3 is calculating correctly; the problem is that it was not created for jump pools at this scale. I have been thinking about increasing the number of blocks taken into the calculation (currently 24), but I'm afraid that would only result in multipools getting more blocks before the diff spikes. What do you think?

No, what I meant is that two separate dampening solutions that should work pretty well both jump around like crazy. That made me wonder: maybe it's not the KGW or DGW code but something that precedes or supersedes it. You could maybe ask Rijk what exactly was done about the KGW problem back in April, because that's when KGW went haywire. It's just a wild guess, but it's strange that we have more problems with those jump pools than other coins do.
ny2cafuse | Legendary | Activity: 1582 | Merit: 1002 | "HODL for life."
October 21, 2014, 03:45:21 PM  #5382

Quote from: /GeertJohan
@ny2cafuse

The idea is good, but it's not a complete fix. Multipools could still mine up to 11 blocks directly after a high-diff block was mined (which probably took some hours). This is because the actualTimespan is several hours, while the targetTimespan is 12 * 2.5 = 30 minutes. With this change, the diff would jump x2 as soon as the high-diff block is more than 12 blocks back in the chain (and no longer taken into the calculation), because at that point only blocks with timespans of several seconds are left in the DGW3 calculation, so the diff factor maxes out at 2.0.

I think this is a step in the right direction, but more is required for a full fix.

Then drop it even further to 6 blocks. You need the retarget to happen faster than it's happening now, but less severely. With our current 3x jump, let's say we go from a difficulty of 250 to 750 in one jump. That knocks the multi off the chain until the diff drops back down. But who's to say a difficulty of 300 wouldn't have done the same? And if it isn't sufficient, the diff ramps up again, until it is.

Lesser changes that happen faster. That's the way digishield works, as evidenced by almost any difficulty graph for digishield coins. We just need to emulate that.

-Fuse

Community > Devs
veertje | Legendary | Activity: 952 | Merit: 1000
October 21, 2014, 04:17:04 PM  #5383


Quote from: ny2cafuse
Then drop it even further to 6 blocks. You need the retarget to happen faster than it's happening now, but less severely. With our current 3x jump, let's say we go from a difficulty of 250 to 750 in one jump. That knocks the multi off the chain until the diff drops back down. But who's to say a difficulty of 300 wouldn't have done the same? And if it isn't sufficient, the diff ramps up again, until it is.

Lesser changes that happen faster. That's the way digishield works, as evidenced by almost any difficulty graph for digishield coins. We just need to emulate that.

-Fuse

I think you are correct on this. Sounds very plausible. But I am not a programmer. Hope it can be that easy to adjust the script. Smiley

ny2cafuse | Legendary | Activity: 1582 | Merit: 1002 | "HODL for life."
October 21, 2014, 04:22:48 PM  #5384


Quote from: ny2cafuse
Then drop it even further to 6 blocks. You need the retarget to happen faster than it's happening now, but less severely. With our current 3x jump, let's say we go from a difficulty of 250 to 750 in one jump. That knocks the multi off the chain until the diff drops back down. But who's to say a difficulty of 300 wouldn't have done the same? And if it isn't sufficient, the diff ramps up again, until it is.

Lesser changes that happen faster. That's the way digishield works, as evidenced by almost any difficulty graph for digishield coins. We just need to emulate that.

-Fuse

Quote from: veertje
I think you are correct on this. Sounds very plausible. But I am not a programmer. Hope it can be that easy to adjust the script. Smiley

Additionally, you could use a weighted average where the last 6 blocks carry more weight than the preceding 18. So essentially you could still focus on the 24 blocks, but with more emphasis on the last 6.

-Fuse

Community > Devs
veertje | Legendary | Activity: 952 | Merit: 1000
October 21, 2014, 04:30:55 PM  #5385

Fuse, are those numbers of 24 in line with the other digishield coins and their target blocktimes? We have a target blocktime of 2.5 minutes, and those numbers should match up with the target blocktime, I think. I don't know if it was originally 24. So I'll leave it to Geert-Johan to react further, because I am not a programmer.

But I wanted to react, as what you say seems plausible.

Quote from: ny2cafuse
Additionally, you could use a weighted average where the last 6 blocks carry more weight than the preceding 18. So essentially you could still focus on the 24 blocks, but with more emphasis on the last 6.

That would be nice as well, if needed.
thsminer | Sr. Member | Activity: 332 | Merit: 250
October 21, 2014, 04:41:59 PM  #5386

To illustrate the behaviour we experience, look at the blocks below. At 137136 we had a diff of 412, and because the block preceding it took two hours, the diff dropped to 118. But the max drop is 1/3, so it should have dropped to about 138, not 118. The diff keeps lowering: despite blocktimes of tens of seconds, it lowers and lowers, till 137159, when the calculation says "hoooo fellas, this is too fast" and raises the diff from 29.8 to 162.1. Not exactly the max three times that was in the DGW design... So why is the diff adjusted by more than the DGW design allows? And second, it could behave much better with a weighted average instead of a plain one.

 
137184 2014-10-20 19:37:56 1 1000 227.657 307183000 64.5077 4675.82 61.0856%
137183 2014-10-20 19:37:43 1 1000 188.93 307182000 64.5077 4675.82 61.0856%
137182 2014-10-20 19:37:14 6 45685.2034 431.768 307181000 64.5076 4675.82 61.0858%
137181 2014-10-20 19:01:13 1 1000 374.057 307180000 64.4828 4675.79 61.095%
137180 2014-10-20 19:00:42 2 1422.978 332.835 307179000 64.4827 4675.79 61.0951%
137179 2014-10-20 18:53:02 1 1000 302.237 307178000 64.4776 4675.79 61.097%
137178 2014-10-20 18:52:47 1 1000 276.263 307177000 64.4776 4675.79 61.0971%
137177 2014-10-20 18:52:02 1 1000 256.457 307176000 64.4773 4675.79 61.0973%
137176 2014-10-20 18:50:03 1 1000 240.951 307175000 64.4761 4675.78 61.0978%
137175 2014-10-20 18:49:57 3 1975.29793541 228.569 307174000 64.4763 4675.78 61.0978%
137174 2014-10-20 18:47:50 1 1000 218.365 307173000 64.475 4675.78 61.0984%
137173 2014-10-20 18:47:41 1 1000 210.006 307172000 64.4751 4675.78 61.0984%
137172 2014-10-20 18:47:39 1 1000 203.035 307171000 64.4753 4675.78 61.0984%
137171 2014-10-20 18:47:24 1 1000 197.242 307170000 64.4753 4675.78 61.0985%
137170 2014-10-20 18:47:19 1 1000 192.377 307169000 64.4755 4675.78 61.0985%
137169 2014-10-20 18:46:52 1 1000 188.239 307168000 64.4754 4675.78 61.0986%
137168 2014-10-20 18:46:48 1 1000 184.775 307167000 64.4756 4675.78 61.0986%
137167 2014-10-20 18:46:44 1 1000 181.831 307166000 64.4757 4675.78 61.0987%
137166 2014-10-20 18:46:41 1 1000 179.32 307165000 64.4759 4675.78 61.0987%
137165 2014-10-20 18:46:29 1 1000 177.128 307164000 64.476 4675.78 61.0987%
137164 2014-10-20 18:45:32 1 1000 175.297 307163000 64.4755 4675.78 61.099%
137163 2014-10-20 18:45:08 1 1000 173.454 307162000 64.4755 4675.78 61.0991%
137162 2014-10-20 18:43:29 2 6641.46782322 171.971 307161000 64.4745 4675.78 61.0995%
137161 2014-10-20 18:42:48 1 1000 170.843 307160000 64.4744 4675.78 61.0996%
137160 2014-10-20 18:42:27 1 1000 162.133 307159000 64.4744 4675.78 61.0997%
137159 2014-10-20 18:42:16 1 1000 29.857 307158000 64.4744 4675.78 61.0997%
137158 2014-10-20 18:41:59 1 1000 32.026 307157000 64.4744 4675.78 61.0998%
137157 2014-10-20 18:41:53 1 1000 34.314 307156000 64.4746 4675.78 61.0998%
137156 2014-10-20 18:41:40 1 1000 36.323 307155000 64.4746 4675.78 61.0999%
137155 2014-10-20 18:41:37 1 1000 38.742 307154000 64.4748 4675.78 61.0999%
137154 2014-10-20 18:41:34 1 1000 37.496 307153000 64.475 4675.78 61.0999%
137153 2014-10-20 18:41:18 1 1000 40.622 307152000 64.475 4675.78 61.1%
137152 2014-10-20 18:40:59 1 1000 43.965 307151000 64.475 4675.78 61.1001%
137151 2014-10-20 18:40:52 1 1000 47.547 307150000 64.4751 4675.78 61.1001%
137150 2014-10-20 18:40:48 1 1000 50.685 307149000 64.4753 4675.78 61.1001%
137149 2014-10-20 18:40:47 1 1000 54.738 307148000 64.4755 4675.78 61.1001%
137148 2014-10-20 18:40:43 1 1000 58.675 307147000 64.4757 4675.78 61.1001%
137147 2014-10-20 18:40:31 1 1000 63.313 307146000 64.4757 4675.78 61.1002%
137146 2014-10-20 18:40:03 1 1000 68.027 307145000 64.4756 4675.78 61.1003%
137145 2014-10-20 18:39:48 1 1000 72.557 307144000 64.4757 4675.78 61.1004%
137144 2014-10-20 18:39:44 1 1000 78.178 307143000 64.4758 4675.78 61.1004%
137143 2014-10-20 18:39:00 1 1000 83.37 307142000 64.4755 4675.78 61.1006%
137142 2014-10-20 18:38:55 1 1000 88.63 307141000 64.4757 4675.78 61.1006%
137141 2014-10-20 18:38:54 1 1000 92.904 307140000 64.4759 4675.78 61.1006%
137140 2014-10-20 18:38:49 1 1000 99.356 307139000 64.476 4675.78 61.1006%
137139 2014-10-20 18:38:33 1 1000 97.431 307138000 64.476 4675.78 61.1007%
137138 2014-10-20 18:37:44 1 1000 105.188 307137000 64.4757 4675.78 61.1009%
137137 2014-10-20 18:37:00 8 92544.22974057 118.799 307136000 64.4754 4675.78 61.1011%
137136 2014-10-20 18:36:42 3 8363.86928814 412.358 307135000 64.4763 4675.78 61.1006%
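
For reference, a minimal sketch of the DGW3 retarget core that these numbers are being checked against (simplified from the commonly published DarkGravityWave v3 code, not necessarily NLG's exact source). One plausible reading of the anomaly: the 3x / (1/3) clamp is applied relative to the average difficulty of the whole 24-block window, not to the previous block's difficulty, so per-block moves like 412 to 118 or 29.8 to 162.1 can exceed the nominal limits without the clamp ever firing.

Code:
// Simplified DGW3 retarget core (sketch; variable names follow the usual
// DarkGravityWave v3 implementation, not necessarily NLG's exact code).
// PastDifficultyAverage and nActualTimespan come from the 24-block loop.
int64 _nTargetTimespan = CountBlocks * nTargetSpacing;   // 24 * 150s = 3600s

// Clamp the measured timespan to [target/3, target*3] ...
if (nActualTimespan < _nTargetTimespan / 3)
    nActualTimespan = _nTargetTimespan / 3;              // caps diff increase at 3x
if (nActualTimespan > _nTargetTimespan * 3)
    nActualTimespan = _nTargetTimespan * 3;              // caps diff decrease at 1/3

// ... then scale the AVERAGE target of the window, not the last block's.
// Target is the inverse of difficulty: raising the target lowers the diff.
CBigNum bnNew(PastDifficultyAverage);
bnNew *= nActualTimespan;
bnNew /= _nTargetTimespan;

if (bnNew > bnProofOfWorkLimit)
    bnNew = bnProofOfWorkLimit;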
 
ny2cafuse | Legendary | Activity: 1582 | Merit: 1002 | "HODL for life."
October 21, 2014, 04:45:36 PM  #5387

Quote from: veertje
Fuse, are those numbers of 24 in line with the other digishield coins and their target blocktimes? We have a target blocktime of 2.5 minutes, and those numbers should match up with the target blocktime, I think. I don't know if it was originally 24. So I'll leave it to Geert-Johan to react further, because I am not a programmer.

But I wanted to react, as what you say seems plausible.

Quote from: ny2cafuse
Additionally, you could use a weighted average where the last 6 blocks carry more weight than the preceding 18. So essentially you could still focus on the 24 blocks, but with more emphasis on the last 6.

That would be nice as well, if needed.

Digishield doesn't calculate the difficulty based on block averages, but rather on the time between individual blocks. At least that's the way I interpret the digishield code from POT:

Code:
unsigned int static GetNextWorkRequired_V3(const CBlockIndex* pindexLast, const CBlockHeader *pblock){
 
     unsigned int nProofOfWorkLimit = bnProofOfWorkLimit.GetCompact();
     int nHeight = pindexLast->nHeight + 1;
 
 
     int64 retargetTimespan = nTargetTimespan;
     int64 retargetSpacing = nTargetSpacing;
     int64 retargetInterval = nInterval;
 
     retargetInterval = nTargetTimespanNEW / nTargetSpacing;
     retargetTimespan = nTargetTimespanNEW;
 
     // Genesis block
     if (pindexLast == NULL)
         return nProofOfWorkLimit;
 
     // Only change once per interval
     if ((pindexLast->nHeight+1) % retargetInterval != 0)
     {
         // Special difficulty rule for testnet:
         if (fTestNet)
         {
             // If the new block's timestamp is more than 2* nTargetSpacing minutes
             // then allow mining of a min-difficulty block.
             if (pblock->nTime > pindexLast->nTime + retargetSpacing*2)
                 return nProofOfWorkLimit;
             else
             {
                 // Return the last non-special-min-difficulty-rules-block
                 const CBlockIndex* pindex = pindexLast;
                 while (pindex->pprev && pindex->nHeight % retargetInterval != 0 && pindex->nBits == nProofOfWorkLimit)
                     pindex = pindex->pprev;
                 return pindex->nBits;
             }
         }
 
         return pindexLast->nBits;
     }
 
     // Dogecoin: This fixes an issue where a 51% attack can change difficulty at will.
     // Go back the full period unless it's the first retarget after genesis. Code courtesy of Art Forz
     int blockstogoback = retargetInterval-1;
     if ((pindexLast->nHeight+1) != retargetInterval)
         blockstogoback = retargetInterval;
 
     // Go back by what we want to be 14 days worth of blocks
     const CBlockIndex* pindexFirst = pindexLast;
     for (int i = 0; pindexFirst && i < blockstogoback; i++)
         pindexFirst = pindexFirst->pprev;
     assert(pindexFirst);
 
     // Limit adjustment step
     int64 nActualTimespan = pindexLast->GetBlockTime() - pindexFirst->GetBlockTime();
     printf(" nActualTimespan = %"PRI64d" before bounds\n", nActualTimespan);
 
     CBigNum bnNew;
     bnNew.SetCompact(pindexLast->nBits);
 
     // DigiShield implementation - thanks to RealSolid & WDC for this code
     // amplitude filter - thanks to daft27 for this code
     nActualTimespan = retargetTimespan + (nActualTimespan - retargetTimespan)/8;
     printf("DIGISHIELD RETARGET\n");
     if (nActualTimespan < (retargetTimespan - (retargetTimespan/4)) ) nActualTimespan = (retargetTimespan - (retargetTimespan/4));
     if (nActualTimespan > (retargetTimespan + (retargetTimespan/2)) ) nActualTimespan = (retargetTimespan + (retargetTimespan/2));
     // Retarget
 
     bnNew *= nActualTimespan;
     bnNew /= retargetTimespan;
 
     if (bnNew > bnProofOfWorkLimit)
         bnNew = bnProofOfWorkLimit;
 
     /// debug print
     printf("GetNextWorkRequired RETARGET\n");
     printf("nTargetTimespan = %"PRI64d" nActualTimespan = %"PRI64d"\n", retargetTimespan, nActualTimespan);
     printf("Before: %08x %s\n", pindexLast->nBits, CBigNum().SetCompact(pindexLast->nBits).getuint256().ToString().c_str());
     printf("After: %08x %s\n", bnNew.GetCompact(), bnNew.getuint256().ToString().c_str());
 
     return bnNew.GetCompact();
 
 
 }
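
The key pieces are the amplitude filter (only 1/8 of the deviation from target passes through) and the asymmetric bounds. A quick worked example with illustrative numbers (assuming a 150-second target spacing; these are not figures from the NLG chain):

Code:
// Worked example of the amplitude filter above (illustrative numbers only).
int64 retargetTimespan = 150;   // assumed 150s target spacing
int64 nActualTimespan  = 1200;  // suppose the last block took 20 minutes

// Amplitude filter: pass only 1/8 of the deviation from target.
// 150 + (1200 - 150)/8 = 150 + 131 = 281
nActualTimespan = retargetTimespan + (nActualTimespan - retargetTimespan) / 8;

// Asymmetric bounds: -25% and +50% of the target timespan.
if (nActualTimespan < retargetTimespan - retargetTimespan / 4)
    nActualTimespan = retargetTimespan - retargetTimespan / 4;   // floor: 113
if (nActualTimespan > retargetTimespan + retargetTimespan / 2)
    nActualTimespan = retargetTimespan + retargetTimespan / 2;   // ceiling: 225

// Here 281 is clamped to 225, so the difficulty scales by 150/225 = 2/3:
// at most a one-third drop (or a 150/113, roughly 1.33x, rise) per block.
// Hence "lesser changes that happen faster".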

To clarify: although I originally believed digishield was the solution we needed to go with, I stand by my opinion that we should try to tweak DGW3 to get the result we want. I am not advocating for digishield at this time. That may change in the future, but for now let's try to fix DGW3.

-Fuse

Community > Devs
ny2cafuse | Legendary | Activity: 1582 | Merit: 1002 | "HODL for life."
October 21, 2014, 04:52:37 PM  #5388

Quote from: thsminer
To illustrate the behaviour we experience, look at the blocks below. At 137136 we had a diff of 412, and because the block preceding it took two hours, the diff dropped to 118. But the max drop is 1/3, so it should have dropped to about 138, not 118. The diff keeps lowering: despite blocktimes of tens of seconds, it lowers and lowers, till 137159, when the calculation says "hoooo fellas, this is too fast" and raises the diff from 29.8 to 162.1. Not exactly the max three times that was in the DGW design... So why is the diff adjusted by more than the DGW design allows? And second, it could behave much better with a weighted average instead of a plain one.

+1 for the detailed info, mate.  I'm with you 100%.

I really do believe either looking at fewer blocks for the average, or creating a weighted average, is the way to go. Additionally, there needs to be a limit on how much the difficulty can increase or decrease per block, so we're not throwing the difficulty all over the place.

Again: less extreme changes that happen more often.

-Fuse

Community > Devs
Halofire | Hero Member | Activity: 938 | Merit: 1000 | @halofirebtc
October 21, 2014, 04:57:59 PM  #5389

Thinking a little outside the box here: how about if the network senses that an hour (or however long) has passed, and then automatically drops the diff by voiding/cancelling the current big-diff block and releasing a new small-diff block? Is there a reason the network can't cancel and re-submit the diff of the block currently being mined? Just ideas to keep people thinking, food for thought; I don't know if it's even possible. Blockchain and code are capable of so many things, figured I'd ask.

OC Development - oZwWbQwz6LAkDLa2pHsEH8WSD2Y3LsTgFt
SMC Development - SgpYdoVz946nLBF2hF3PYCVQYnuYDeQTGu
Friendly reminder: Back up your wallet.dat files!!
strataghyst | Sr. Member | Activity: 393 | Merit: 250
October 21, 2014, 05:09:05 PM  #5390

I also think this is a good plan, as a plain 24-block average alone can never straighten out the big swings from a multipool. So in theory a 24-block weighted average with the emphasis on the last 6 will do much better.

ny2cafuse | Legendary | Activity: 1582 | Merit: 1002 | "HODL for life."
October 21, 2014, 05:31:42 PM  #5391

Quote from: Halofire
Thinking a little outside the box here: how about if the network senses that an hour (or however long) has passed, and then automatically drops the diff by voiding/cancelling the current big-diff block and releasing a new small-diff block? Is there a reason the network can't cancel and re-submit the diff of the block currently being mined? Just ideas to keep people thinking, food for thought; I don't know if it's even possible. Blockchain and code are capable of so many things, figured I'd ask.

I might not be 100% correct on this, but I don't think this would be possible. The code works off submitted blocks, so you need blocks created and submitted to the chain to have numbers to calculate against. It might be possible if you had something like POS always generating blocks separately from POW, but that's a whole other can of worms. Frankly, implementing something like that would be a huge undertaking.

-Fuse

Community > Devs
ny2cafuse | Legendary | Activity: 1582 | Merit: 1002 | "HODL for life."
October 21, 2014, 05:55:18 PM  #5392

I'm no coder, but I can read and understand code. If the solution is a weighted average, the section we would need to adjust is the following snippet:

Code:
    // loop over the past n blocks, where n == PastBlocksMax
    for (unsigned int i = 1; BlockReading && BlockReading->nHeight > 0; i++) {
        if (PastBlocksMax > 0 && i > PastBlocksMax) { break; }
        CountBlocks++;

        // Calculate average difficulty based on the blocks we iterate over in this for loop
        if(CountBlocks <= PastBlocksMin) {
            if (CountBlocks == 1) { PastDifficultyAverage.SetCompact(BlockReading->nBits); }
            else { PastDifficultyAverage = ((PastDifficultyAveragePrev * CountBlocks)+(CBigNum().SetCompact(BlockReading->nBits))) / (CountBlocks+1); }
            PastDifficultyAveragePrev = PastDifficultyAverage;
        }

        // If this is not the first iteration (LastBlockTime was set on a previous pass)
        if(LastBlockTime > 0){
            // Calculate the time difference between the previous block and the current block
            int64 Diff = (LastBlockTime - BlockReading->GetBlockTime());
            // Increment the actual timespan
            nActualTimespan += Diff;
        }
        // Set LastBlockTime to the block time of the block in the current iteration
        LastBlockTime = BlockReading->GetBlockTime();

        if (BlockReading->pprev == NULL) { assert(BlockReading); break; }
        BlockReading = BlockReading->pprev;
    }

Instead of cycling through all 24 blocks equally, we'd cycle through the first 6 (or whatever number; lower is better in my mind) at full weight, and then cycle through the next 18 and count them at a lesser weight; see the sketch below.
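
A minimal sketch of that weighted loop (hypothetical: the 6/18 split and the 3:1 weights are illustrative, the variable names follow the DGW3 snippet above, and none of this is tested):

Code:
// Hypothetical weighted variant of the DGW3 averaging loop (sketch only).
static const unsigned int PastBlocksMax = 24;  // full window
static const unsigned int RecentBlocks  = 6;   // heavily weighted window
static const int64 RecentWeight = 3;           // weight for the newest 6 blocks
static const int64 OlderWeight  = 1;           // weight for the older 18 blocks

CBigNum PastTargetSum = 0;                     // weighted sum of block targets
int64 TotalWeight     = 0;
int64 nActualTimespan = 0;
int64 LastBlockTime   = 0;
const CBlockIndex* BlockReading = pindexLast;

for (unsigned int i = 1; BlockReading && BlockReading->nHeight > 0; i++) {
    if (i > PastBlocksMax) break;

    // Newest blocks count triple, so the average tracks recent hashrate faster.
    int64 Weight = (i <= RecentBlocks) ? RecentWeight : OlderWeight;
    PastTargetSum += CBigNum().SetCompact(BlockReading->nBits) * Weight;
    TotalWeight   += Weight;

    if (LastBlockTime > 0)
        nActualTimespan += (LastBlockTime - BlockReading->GetBlockTime());
    LastBlockTime = BlockReading->GetBlockTime();

    if (BlockReading->pprev == NULL) break;
    BlockReading = BlockReading->pprev;
}

// Weighted average target; the per-block timespans could be weighted the
// same way so that fast recent blocks push the diff up sooner.
CBigNum PastDifficultyAverage = PastTargetSum / CBigNum(TotalWeight);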

I ran this theory through an Excel spreadsheet, and the weighted average reacts faster and carries a slightly higher difficulty than a plain average. Try it yourself and you'll see what I mean.

Of course, I'm now starting to wonder if I'm even calculating the correct values, and whether or not I should be looking at the block times instead. Can someone steer me in the right direction?

-Fuse

Community > Devs
thsminer | Sr. Member | Activity: 332 | Merit: 250
October 21, 2014, 06:16:46 PM  #5393

Quote from: Halofire
Thinking a little outside the box here: how about if the network senses that an hour (or however long) has passed, and then automatically drops the diff by voiding/cancelling the current big-diff block and releasing a new small-diff block? Is there a reason the network can't cancel and re-submit the diff of the block currently being mined? Just ideas to keep people thinking, food for thought; I don't know if it's even possible. Blockchain and code are capable of so many things, figured I'd ask.

Quote from: ny2cafuse
I might not be 100% correct on this, but I don't think this would be possible. The code works off submitted blocks, so you need blocks created and submitted to the chain to have numbers to calculate against. It might be possible if you had something like POS always generating blocks separately from POW, but that's a whole other can of worms. Frankly, implementing something like that would be a huge undertaking.

-Fuse

Yep, Fuse is right. The problem with the current block is that we don't know what's going on until it's solved.

Imagine it this way: you want to do a job, and I hand you a calculation to solve. The difficulty of the calculation I give you is estimated so that it will take you 150 seconds. So you start working, and when you're done and have found a solution, you shout it to the community. Between the moment you asked for work and the moment you shout your result, we don't have any contact. I don't know it's taking you so long, and you don't know what to do other than solving the calculation. The only way to stop you in between is another person shouting the answer to this calculation, so you know it's solved.

Sure, there are ways to notify, but as Fuse stated, that's a huge undertaking.
24Kilo | Sr. Member | Activity: 672 | Merit: 250
October 21, 2014, 06:30:24 PM  #5394

Quote from: Halofire
Thinking a little outside the box here: how about if the network senses that an hour (or however long) has passed, and then automatically drops the diff by voiding/cancelling the current big-diff block and releasing a new small-diff block? Is there a reason the network can't cancel and re-submit the diff of the block currently being mined? Just ideas to keep people thinking, food for thought; I don't know if it's even possible. Blockchain and code are capable of so many things, figured I'd ask.

It is 05:30 here and I am off to the markets... thus the short reply. While the above is a great idea, it is wide open to a time warp attack. It would leave the network vulnerable and very insecure; this is essentially proof of time, not proof of work.

I could destroy the NLG network with only a small percentage of the nethash.
Halofire | Hero Member | Activity: 938 | Merit: 1000 | @halofirebtc
October 21, 2014, 06:55:48 PM  #5395

Quote from: Halofire
Thinking a little outside the box here: how about if the network senses that an hour (or however long) has passed, and then automatically drops the diff by voiding/cancelling the current big-diff block and releasing a new small-diff block? Is there a reason the network can't cancel and re-submit the diff of the block currently being mined? Just ideas to keep people thinking, food for thought; I don't know if it's even possible. Blockchain and code are capable of so many things, figured I'd ask.

Quote from: ny2cafuse
I might not be 100% correct on this, but I don't think this would be possible. The code works off submitted blocks, so you need blocks created and submitted to the chain to have numbers to calculate against. It might be possible if you had something like POS always generating blocks separately from POW, but that's a whole other can of worms. Frankly, implementing something like that would be a huge undertaking.

-Fuse

Quote from: thsminer
Yep, Fuse is right. The problem with the current block is that we don't know what's going on until it's solved.

Imagine it this way: you want to do a job, and I hand you a calculation to solve. The difficulty of the calculation I give you is estimated so that it will take you 150 seconds. So you start working, and when you're done and have found a solution, you shout it to the community. Between the moment you asked for work and the moment you shout your result, we don't have any contact. I don't know it's taking you so long, and you don't know what to do other than solving the calculation. The only way to stop you in between is another person shouting the answer to this calculation, so you know it's solved.

Sure, there are ways to notify, but as Fuse stated, that's a huge undertaking.

I knew that the blocks were based on calculated averages of the past. I'm a bonehead sometimes, haha. Instead of cancelling the block as I described, why can't the network start to drop the diff if the last block was found, say, an hour ago? Have a decaying diff within the time between blocks instead of a fixed diff per block, and put a limit on the maximum time between blocks that the decay math bases itself on.

OC Development - oZwWbQwz6LAkDLa2pHsEH8WSD2Y3LsTgFt
SMC Development - SgpYdoVz946nLBF2hF3PYCVQYnuYDeQTGu
Friendly reminder: Back up your wallet.dat files!!
/GeertJohan | Sr. Member | Activity: 409 | Merit: 250
October 21, 2014, 07:07:38 PM  #5396

Quote from: thsminer
Beside all these solutions, I still think the problem is the way DGW3 is implemented. It just does not react the way it should, and that has nothing to do with what it was designed for, because it's a counting algo based on the last 24 blocks. One thing that also keeps me busy is the fact that KGW did not work properly either, so maybe the solution is in some totally different place in the code. You sometimes see the same strange behaviour when variables are used outside their memory space and get overwritten by some other procedure.

Quote from: /GeertJohan
DGW3 was not modified, except for some code comments and alignment. It looks like DGW3 is calculating correctly; the problem is that it was not created for jump pools at this scale. I have been thinking about increasing the number of blocks taken into the calculation (currently 24), but I'm afraid that would only result in multipools getting more blocks before the diff spikes. What do you think?

Quote from: thsminer
No, what I meant is that two separate dampening solutions that should work pretty well both jump around like crazy. That made me wonder: maybe it's not the KGW or DGW code but something that precedes or supersedes it. You could maybe ask Rijk what exactly was done about the KGW problem back in April, because that's when KGW went haywire. It's just a wild guess, but it's strange that we have more problems with those jump pools than other coins do.

I agree, it could be something else, something not related to KGW or DGW. But when doing the math (and I'm sure you've done it too), it makes sense that it's simply DGW not being able to handle the huge amount of hash/sec.
ny2cafuse | Legendary | Activity: 1582 | Merit: 1002 | "HODL for life."
October 21, 2014, 07:16:27 PM  #5397

Quote from: Halofire
I knew that the blocks were based on calculated averages of the past. I'm a bonehead sometimes, haha. Instead of cancelling the block as I described, why can't the network start to drop the diff if the last block was found, say, an hour ago? Have a decaying diff within the time between blocks instead of a fixed diff per block, and put a limit on the maximum time between blocks that the decay math bases itself on.

You still need a block to be found to trigger the change. When a block is found, the network broadcasts the next block's difficulty. The difficulty algorithm takes the time between blocks into account and adjusts the difficulty accordingly for the next block. But without a block being solved to send out a new record of the next block's difficulty, you can't find out what the difficulty is going to be.

I suspect any change that would do something like a "difficulty ping" would need to write those pings to the blockchain somehow (like POS blocks). Not only would that bloat the blockchain, it would also open additional avenues for time-based attacks: essentially all an attacker would need to do is set their clocks forward on numerous nodes to simulate a larger time gap.

-Fuse

Edit:

Quote from: thsminer
Beside all these solutions, I still think the problem is the way DGW3 is implemented. It just does not react the way it should, and that has nothing to do with what it was designed for, because it's a counting algo based on the last 24 blocks. One thing that also keeps me busy is the fact that KGW did not work properly either, so maybe the solution is in some totally different place in the code. You sometimes see the same strange behaviour when variables are used outside their memory space and get overwritten by some other procedure.

Quote from: /GeertJohan
DGW3 was not modified, except for some code comments and alignment. It looks like DGW3 is calculating correctly; the problem is that it was not created for jump pools at this scale. I have been thinking about increasing the number of blocks taken into the calculation (currently 24), but I'm afraid that would only result in multipools getting more blocks before the diff spikes. What do you think?

Quote from: thsminer
No, what I meant is that two separate dampening solutions that should work pretty well both jump around like crazy. That made me wonder: maybe it's not the KGW or DGW code but something that precedes or supersedes it. You could maybe ask Rijk what exactly was done about the KGW problem back in April, because that's when KGW went haywire. It's just a wild guess, but it's strange that we have more problems with those jump pools than other coins do.

Quote from: /GeertJohan
I agree, it could be something else, something not related to KGW or DGW. But when doing the math (and I'm sure you've done it too), it makes sense that it's simply DGW not being able to handle the huge amount of hash/sec.

This is exactly why you need to reduce the number of blocks taken into consideration. IMO, a weighted average is the way to go. You want to reduce the number of blocks the multis can take. Increasing the count evens the difficulty graph out over time, but it doesn't solve the issue of network jumping. You need to limit the number of blocks they can mine by quickly ramping up the difficulty, but without overshooting the magic number.

Again, less change more often.  I believe a weighted average would do this.

The other option is digishield, but that is another very serious fork.

-Fuse

Community > Devs
/GeertJohan | Sr. Member | Activity: 409 | Merit: 250
October 21, 2014, 07:29:13 PM  #5398

Quote from: thsminer
To illustrate the behaviour we experience, look at the blocks below. At 137136 we had a diff of 412, and because the block preceding it took two hours, the diff dropped to 118. But the max drop is 1/3, so it should have dropped to about 138, not 118. The diff keeps lowering: despite blocktimes of tens of seconds, it lowers and lowers, till 137159, when the calculation says "hoooo fellas, this is too fast" and raises the diff from 29.8 to 162.1. Not exactly the max three times that was in the DGW design... So why is the diff adjusted by more than the DGW design allows? And second, it could behave much better with a weighted average instead of a plain one.

Quote from: ny2cafuse
+1 for the detailed info, mate.  I'm with you 100%.

I really do believe either looking at fewer blocks for the average, or creating a weighted average, is the way to go. Additionally, there needs to be a limit on how much the difficulty can increase or decrease per block, so we're not throwing the difficulty all over the place.

Again: less extreme changes that happen more often.

-Fuse

I think we're on a good path here!

I've actually started implementing a new algorithm, from scratch. It performs faster re-adjustment, limited to a 1.2x or 0.8x difficulty change between individual blocks. But it also limits the difficulty change to 3.0x or 0.33x compared to the average difficulty of the last 120 blocks. This means the diff can rise to ~3.0x over 6 blocks (1.2^6 ≈ 3.0) and fall to ~0.33x over 5 blocks (0.8^5 ≈ 0.33). In the linked formulas, the impact of the new blocks themselves is not counted in the 120-block average.
The idea behind this is that it will be able to handle large joins and leaves, but won't be tricked into settling on a high difficulty too fast.

Thoughts?

I'll share the code when I'm happy with the initial implementation.
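
To make the described limits concrete, here is a rough sketch (an illustration of the stated rules only, with hypothetical names; not GeertJohan's actual implementation, which had not been published at the time of this post):

Code:
// Illustration of the limits described above (hypothetical sketch only).
double NextDifficulty(double prevDiff, double avg120Diff, double rawTarget)
{
    double next = rawTarget;  // difficulty suggested by the raw retarget math

    // Per-block clamp: at most 1.2x up or 0.8x down versus the previous block.
    if (next > prevDiff * 1.2) next = prevDiff * 1.2;
    if (next < prevDiff * 0.8) next = prevDiff * 0.8;

    // Window clamp: at most 3.0x up or 0.33x down versus the 120-block average
    // (per the post, the new blocks themselves are excluded from that average).
    if (next > avg120Diff * 3.0)  next = avg120Diff * 3.0;
    if (next < avg120Diff * 0.33) next = avg120Diff * 0.33;

    return next;
}
// Under a sustained hashrate jump the diff climbs 1.2x per block, reaching the
// 3.0x window cap after 6 blocks (1.2^6 ≈ 2.99); falling at 0.8x per block it
// hits the 0.33x floor after 5 blocks (0.8^5 ≈ 0.33).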
ny2cafuse | Legendary | Activity: 1582 | Merit: 1002 | "HODL for life."
October 21, 2014, 07:36:47 PM  #5399

Quote from: /GeertJohan
I think we're on a good path here!

I've actually started implementing a new algorithm, from scratch. It performs faster re-adjustment, limited to a 1.2x or 0.8x difficulty change between individual blocks. But it also limits the difficulty change to 3.0x or 0.33x compared to the average difficulty of the last 120 blocks. This means the diff can rise to ~3.0x over 6 blocks (1.2^6 ≈ 3.0) and fall to ~0.33x over 5 blocks (0.8^5 ≈ 0.33). In the linked formulas, the impact of the new blocks themselves is not counted in the 120-block average.
The idea behind this is that it will be able to handle large joins and leaves, but won't be tricked into settling on a high difficulty too fast.

Thoughts?

I'll share the code when I'm happy with the initial implementation.


I think you're shooting too high with a 3x increase. We'll have the same result we have now, where the diff jumps too high for normal miners and we get stuck. The drop back down is fine, but the jump up in difficulty should be smaller, IMO.

As 24Kilo always tells me, we don't need to reinvent the wheel; we just need to make sure our current wheel is round.

-Fuse

Community > Devs
/GeertJohan | Sr. Member | Activity: 409 | Merit: 250
October 21, 2014, 08:57:50 PM  #5400

Quote from: /GeertJohan
I think we're on a good path here!

I've actually started implementing a new algorithm, from scratch. It performs faster re-adjustment, limited to a 1.2x or 0.8x difficulty change between individual blocks. But it also limits the difficulty change to 3.0x or 0.33x compared to the average difficulty of the last 120 blocks. This means the diff can rise to ~3.0x over 6 blocks (1.2^6 ≈ 3.0) and fall to ~0.33x over 5 blocks (0.8^5 ≈ 0.33). In the linked formulas, the impact of the new blocks themselves is not counted in the 120-block average.
The idea behind this is that it will be able to handle large joins and leaves, but won't be tricked into settling on a high difficulty too fast.

Thoughts?

I'll share the code when I'm happy with the initial implementation.

Quote from: ny2cafuse
I think you're shooting too high with a 3x increase. We'll have the same result we have now, where the diff jumps too high for normal miners and we get stuck. The drop back down is fine, but the jump up in difficulty should be smaller, IMO.

As 24Kilo always tells me, we don't need to reinvent the wheel; we just need to make sure our current wheel is round.

-Fuse

I think you misunderstand: the diff will rise by at most a factor of 1.2 between consecutive blocks. So if the diff is 300, the next block's diff can be at most 360; the full ~3x only builds up over six consecutive fast blocks.