Frais (Sr. Member)
December 28, 2014, 10:32:55 AM  #7101

Good to know the simulator isn't necessarily the first step to a solution (I always thought it was), and it's good that more and more possible solutions are being made public. As long as the simulator hasn't brought us a solution, every other suggestion brought by the community must be taken seriously and discussed in the open (instead of being ignored).

thsminer (Sr. Member)
December 28, 2014, 01:18:11 PM  #7102

>> Yeah. In the meantime we can approach Golang developers anyway; it's handy to have one or two we can call on. Can you give me assistance with this, Fuse? Send me a list of requirements/code that the Golang devs need to implement, and where? Preferably via PM, so I can type up a job post on several freelance boards.
>>
>> Thanks.
>>
>> And to calm everyone down: please take a moment to breathe; testing thoroughly is very, very important. Don't rush into pushing for DIGI just because it seems the right thing to do. We need to know it is the best play, based on maths and not an educated guess from tests on the testnet. Sure, that goes a long way and is a lot better than no testing at all. But having multiple numbers and tests just gives everyone more bang for their buck.
>>
>> Not to worry. Buerra's pushing things now. LOL MONEY MONEY LOL

> I'll start looking on Odesk and Elance. Anyone with C++ and Go experience would be able to do the port. I'd rather let /GJ decide whether he wants to dev the rest; it's his baby after all. Has any post been made to announce the simulator to the crypto community? There is a dev section of these forums that is filled with very competent devs who eat up this kind of development. I would guess that announcing it there would lead to an "outsourcing" of community development. Might just be easier to go that route.
>
> What I say next might seem argumentative, but it's really just for my clarification. Don't take it the wrong way.
>
> With respect to the testing, how is real-world mining not actual maths? It is how the algorithm is going to behave on a chain, rather than how a simulator says it will behave. I would think if you wanted true test data you would truly test it, would you not? Like I said, I'm for testing against the simulator, but we need to know that the simulator (which simulates actual mining, rather than actually mining) is giving us substantiated results. We can't say the simulator is the key to testing algorithms without testing the simulator against an actual testnet first.
>
> For example, you would need to mine on a testnet. You would simulate a large wave, a small wave, etc., and record all the data. Then you would need to replicate that chain with the simulator and compare the results. If the simulator was off by more than a predetermined margin, you wouldn't be able to say it was accurate. After all, the testnet chain would be the definitive data... it's the actual mining.
>
> So to say that testnet data is an educated guess is a little off base. It is actual, substantiated data collected from mining.
>
> -Fuse

I totally agree with Fuse on this. I've been holding back lately because of the strange discussions going on over here regarding the algo and the simulator. It's amazing to see the statement that testnet is an educated guess.
I know I'm going to disturb some dreams again, but I'm a little tired of the powers being ascribed to a simulator.

Let me make another statement: "a simulator is nothing more or less than a tool for testing situations that CAN'T be tested otherwise."
Why do you think cars are crashed into a concrete wall? Why do you think they test-fly a new plane? Because all the simulations give pretty good insight but are no substitute for the real live situation. Simulators are the next best thing when you can't afford to test live!
That said, a simulator for crypto is an unnecessary toy. Real live testing is possible in the testnet environment: real software, real hashes from real miners.

The Criptoe team tested Digi in a modified form on testnet and the results are looking OK. What reason can any dev have for a cumbersome coin NOT to change the algo to something that's properly tested?

BioMike (Legendary)
December 28, 2014, 01:28:56 PM  #7103

Please don't touch the reward and the block time. That's something you do with scam coins, not Guldencoin (and they are, for me, important determinants of trust in this coin).

/GeertJohan (Sr. Member)
December 28, 2014, 01:49:26 PM  #7104

> A 25% modified Digishield diff algo is my recommendation.

How is the algo modified? I'd be very interested to see how it works!

> While I completely trust GJ, a simulator is just that... a simulation.

A simulation is not "just a simulation".. it's a piece of code that executes the exact same maths as the satoshi client does, but does not require actual hashing power.. thereby making it easier to see how an algo reacts.. This makes it a much more powerful tool for testing algo changes.

> 1. Look at network difficulty adjustment in increments instead of minutes or seconds -
>
> The common fault is to look at a network as blocks per minute or blocks per 24 hours, such as presented by 'nlgblogger'; instead, a network needs to be looked at as the number of diff adjustments per 24 hours. With a 60sec block time, there are 1440 increments per 24 hours; with 150sec (NLG), there are only 576. So any diff algo designed for a 60sec block time needs to be looked at over 1440 blocks, not a 24-hour period, to see if it is working as desired when that algo is applied to any other block time. NLG has only about 1/3 of the diff increments in 24 hours compared to a 60sec block time network. Both DigiShield and DGW3 were designed expressly for a 60-second block time, DigiByte in the former case and DarkCoin in the latter. This is why neither Digi nor DGW3 is effective without tuning.

I don't really agree. The main variable that drives the DGW3 calculation is the average block time for the past n blocks. Hashing power can be added or removed instantaneously; it's not like hashing machines need to "warm up" and slowly increase their hashing speed over a few minutes or hours. So it doesn't make sense for a diff algo to take into account how many re-adjustments per hour are happening.
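
For reference, here is the core of what DGW3 computes, boiled down to a sketch (simplified C++, not the verbatim DarkCoin or Guldencoin source; the function name and the use of floating point are illustrative):

Code:
// Simplified sketch of the DGW3 idea: the new target is the average of the
// last N block targets, scaled by how long those N blocks actually took
// versus how long they should have taken.
#include <cstdint>
#include <cstdio>
#include <vector>

double NextDifficultyDGW3(const std::vector<double>& lastDiffs, // last N difficulties, N = 24 in DGW3
                          int64_t actualTimespan,               // seconds spent on those N blocks
                          int64_t targetSpacing)                // 150 for NLG
{
    double avg = 0.0;
    for (double d : lastDiffs) avg += d;
    avg /= lastDiffs.size();                // smoothing: this averaging is what produces the "false lows"

    int64_t targetTimespan = (int64_t)lastDiffs.size() * targetSpacing;
    if (actualTimespan < targetTimespan / 3) actualTimespan = targetTimespan / 3; // clamp the reaction
    if (actualTimespan > targetTimespan * 3) actualTimespan = targetTimespan * 3;

    return avg * (double)targetTimespan / (double)actualTimespan; // fast blocks -> difficulty rises
}

int main() {
    std::vector<double> lastDiffs(24, 1000.0); // 24 blocks at difficulty 1000
    // Blocks came in twice as fast as targeted: difficulty should roughly double.
    std::printf("next diff: %.1f\n", NextDifficultyDGW3(lastDiffs, 24 * 150 / 2, 150));
    return 0;
}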

> Digishield is a much simpler and more elegant diff algo that scales easily and effectively, while DGW3 is very complex and does not scale well.

I wouldn't say DGW3 is very complex.. It's not even complex relative to Digishield, because the two look a lot alike. Please tell me how you think Digi is more elegant and what you mean by "scaling".. Where in the maths does Digi scale better than DGW3?

> I would fork NLG to a 75sec block target time with a 500NLG reward that retargets every block using a custom Digishield implementation.

Block reward isn't related to the difficulty algorithm at all.. It wouldn't have anything to do with Digishield...

> (....) This would reduce any 'stuck network' time by 50%, increase the number of diff increments per 24hrs by 100%, reduce the number of blocks that Clever can mint per 24hrs by 80%, and make NLG a much better network for micro-transactions.

How did you figure out these numbers?


Please don't take these remarks personally.. I'm merely pointing out that with the hashrate/sec spikes we're experiencing, neither DGW3 nor Digishield will handle it. I know I haven't been active the last week or so; I'm sorry for that. But please keep in mind that we want a permanent and solid solution. We actually need a more complex algo...

/GeertJohan (Sr. Member)
December 28, 2014, 02:07:23 PM  #7105

> With respect to the testing, how is real-world mining not actual maths? [...] We can't say the simulator is the key to testing algorithms without testing the simulator against an actual testnet first.
>
> For example, you would need to mine on a testnet. [...] After all, the testnet chain would be the definitive data... it's the actual mining.
>
> So to say that testnet data is an educated guess is a little off base. It is actual, substantiated data collected from mining.
>
> -Fuse

> Let me make another statement: "a simulator is nothing more or less than a tool for testing situations that CAN'T be tested otherwise." [...]
> That said, a simulator for crypto is an unnecessary toy. Real live testing is possible in the testnet environment: real software, real hashes from real miners.
>
> The Criptoe team tested Digi in a modified form on testnet and the results are looking OK. What reason can any dev have for a cumbersome coin NOT to change the algo to something that's properly tested?

I should clarify a bit on the simulator.
First of all, the comparison to car-crash testing is not very accurate. We crash cars into walls because it's very, very hard to simulate all the moving parts: the different materials, the impact and weight of all those materials, the temperature and friction of all the parts, etc. So real live testing is the way to go there.
A difficulty re-adjustment algorithm is ~100 lines of code performing some calculations.. When you give it the same input, it will output the same thing every time.. The only thing that makes the simulator different from a testnet is that no actual hashing is required, and one can "fake" any amount of GH/s without having to purchase large machines..
I'm not saying testnet testing shouldn't happen; it's very important.. But unless you own a fortune and just bought a lot of new mining machines, it can't test against the same volumes of hashing power.. Another argument for my simulator is that it makes testing easier and faster.. On a testnet, testing a chain of 100 blocks should take 100*150 seconds.. The simulator can do this instantly, because time is "simulated".. Again, that does not make the maths being done any less reliable.. it's just fooling DGW3 by telling it more time has passed than actually did in the real world.

And yes, Fuse is correct that if you record the testnet results and input them into the simulator, somewhere near the same output should come out.. (only faster).
There is one large difference between the testnet results and the simulator: the simulator calculates with 'perfect' hashrate chance. So if you roll a die 60 times, it will land on each face 10 times.. whereas real mining has a chance of "having luck".. In the end this doesn't really matter for the diff algo testing, because "having luck" with a 100 MH miner is nowhere near as much impact as 600GH/s rolling through..
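
To make that concrete, here is roughly what such a simulator's inner loop boils down to; a minimal sketch in C++ rather than the actual Go code (the ~2^32-hashes-per-difficulty-1-block constant, the 20 GH/s baseline, and the DIGI-style retarget are illustrative assumptions; the exponential draw models the per-block "luck" discussed above):

Code:
#include <cstdio>
#include <random>

// DIGI-style retarget, mirroring the amplitude filter and clamps in the code
// posted later in the thread (illustrative, not the exact client source).
static double Retarget(double diff, double actualSpacing, double targetSpacing) {
    double t = targetSpacing + (actualSpacing - targetSpacing) / 8.0; // amplitude filter
    if (t < targetSpacing * 0.75) t = targetSpacing * 0.75;           // timespan floor: -25%
    if (t > targetSpacing * 1.50) t = targetSpacing * 1.50;           // timespan ceiling: +50%
    return diff * targetSpacing / t;
}

int main() {
    const double targetSpacing = 150.0;         // NLG block time in seconds
    const double hashesPerDiff1 = 4294967296.0; // assumption: ~2^32 hashes per difficulty-1 block
    double hashrate = 20e9;                     // pretend baseline: 20 GH/s, no real miners needed
    double diff = hashrate * targetSpacing / hashesPerDiff1; // start in equilibrium

    std::mt19937 rng(42);
    std::exponential_distribution<double> luck(1.0); // per-block solve-time "luck"

    double simClock = 0.0; // time is simulated, so 1000 blocks run instantly
    for (int height = 1; height <= 1000; ++height) {
        if (height == 500) hashrate *= 30.0;    // jump pool arrives: 30x spike in one block
        if (height == 520) hashrate /= 30.0;    // ...and leaves again
        double expected = diff * hashesPerDiff1 / hashrate;
        double spacing = expected * luck(rng);  // exponential around the expectation
        simClock += spacing;
        diff = Retarget(diff, spacing, targetSpacing);
        std::printf("h=%4d  spacing=%7.1fs  diff=%10.2f\n", height, spacing, diff);
    }
    return 0;
}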

Halofire (Hero Member)
December 28, 2014, 02:20:26 PM  #7106

100 lines of code? That's it? Granted, they are very important lines of code, but damn! You made it seem like it was longer than main.cpp...

Thanks for the update/opinions, GJ. I guess it wasn't what everyone wanted to hear, as your statements are reflected in the price, but you can't just give them what they want; that's not healthy. Onward. Just don't put this off for another year.

All luck balances out over time. Luck should not be treated as a factor of great weight.

ny2cafuse (Legendary)
December 28, 2014, 04:47:54 PM (last edited 05:10:43 PM)  #7107

>> A 25% modified Digishield diff algo is my recommendation.

> How is the algo modified? I'd be very interested to see how it works!

You change the max difficulty adjustment. Other than that it's standard DIGI.


>> While I completely trust GJ, a simulator is just that... a simulation.

> A simulation is not "just a simulation".. it's a piece of code that executes the exact same maths as the satoshi client does, but does not require actual hashing power.. [...]

Can it simulate a fork caused by an instant, massive hashrate increase? You also talk about luck with a 100MH miner... well, the 600GH miner is going to have luck too. Computer simulations, while very accurate (when proven so), are only as good as the code. There's a reason why robot AI isn't to the point of consciousness yet.


>> 1. Look at network difficulty adjustment in increments instead of minutes or seconds - [...]

> I don't really agree. The main variable that drives the DGW3 calculation is the average block time for the past n blocks. [...]

I think you're missing the point here. The reason Kilo brings this up is VERY valid: the more retarget iterations you have in a given period, the more change can happen. Let's say the NLG codebase were only allowed to be altered every 6 months. You would have to make a lot of major changes to account for everything that happened in that time period. Maybe something happened 5 months ago, but now you're waiting to catch up. If you can change the codebase every 3 months, now you're only 2 months behind instead of 5. Mind you, I know the blocks carry the changes with each block, but when they are targeted for a longer time, the reaction is more severe. I'll get back to this.


>> Digishield is a much simpler and more elegant diff algo that scales easily and effectively, while DGW3 is very complex and does not scale well.

> I wouldn't say DGW3 is very complex.. It's not even complex relative to Digishield, because the two look a lot alike. Please tell me how you think Digi is more elegant and what you mean by "scaling".. Where in the maths does Digi scale better than DGW3?

Averaging difficulties to calculate the new difficulty is more complex in some ways: it allows more room for error and skewed calculations. It's pretty obvious DGW3 doesn't scale well. You can't argue with that, because the chain shows it, and has for a long time now.

I'll take a stab at what Kilo meant by elegant. DIGI is simple. It's a 1-to-2-line addition to the base LTC difficulty algorithm. That's it. It's elegant in that it's simple and it works. Additionally, Kilo and I have witnessed DIGI implementations over the last year that scaled very well. When a chain's hashrate grows to 5X its size in a day and stays healthy, I would say it scales pretty darn well.


>> I would fork NLG to a 75sec block target time with a 500NLG reward that retargets every block using a custom Digishield implementation.

> Block reward isn't related to the difficulty algorithm at all.. It wouldn't have anything to do with Digishield...

Completely related, if you reduce the block time to accommodate an algo that is meant for shorter block times. A 150-second block time works for LTC because LTC uses the standard algo with a shit ton of hashing power, think petahash, behind it consistently. NLG doesn't have that. Most other altcoins that have healthy chains use a block time closer to 60 seconds, but not less. 90 seconds would be optimal, but Kilo's 75 seconds is probably a very safe number for NLG. Less than 60 seconds and you run the risk of timewarp attacks; Kilo has successfully done this to a few coins just to prove the maths. But that aside, shortening the block time is a solid idea. It reduces the magnification of the retarget changes and allows for faster reactions. If you shorten the block time, you have to reduce the reward, unless you want to mine out faster. Normally I would be against reducing block rewards, but halving the block time and the block reward gives you the same coins mined daily, with 100% more difficulty changes in that time.
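
To put rough numbers on that (assuming the current schedule is 1000 NLG per 150-second block, which is what halving to 500 NLG at 75 seconds implies): today, 86400/150 = 576 blocks/day at 1000 NLG = 576,000 NLG/day; after the change, 86400/75 = 1152 blocks/day at 500 NLG = 576,000 NLG/day. Same daily issuance, twice the retarget opportunities.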


>> (....) This would reduce any 'stuck network' time by 50%, increase the number of diff increments per 24hrs by 100%, reduce the number of blocks that Clever can mint per 24hrs by 80%, and make NLG a much better network for micro-transactions.

> How did you figure out these numbers?

I would say the 50% stuck time is conservative with DIGI. I would guess you would reduce the amount of stuck time (blocks that take forever to mine) by closer to 90-95%. The chain would function more smoothly, and not make wild swings into the stratosphere like it does now with DGW3. The 100% increase in difficulty adjustments is correct if you halve the block time. The reduction in the number of blocks that Clever could mint is about right. It's not hard to see in the charts I posted that if you threw 10X the network hashrate at the chain, your profitability would be gone before you could grab 15 blocks in under a minute. There's no averaging, so DIGI doesn't need to take into account the false lows that DGW3 produces. The reaction is instant instead of delayed by a moving average.

The numbers, while not statistically absolute, are sound. No need to dismiss them.


> Please don't take these remarks personally.. I'm merely pointing out that with the hashrate/sec spikes we're experiencing, neither DGW3 nor Digishield will handle it. [...]

I don't take the remarks personally at all. I think you are 100% incorrect about DIGI not handling the hashrate spikes, though. We've proved it on a testnet. It's funny... we presented actual data, but there are still people who deny it, like it was all made up lol. DIGI works, the graphs back that up, and the solution will work until the entire altcoin environment makes its next paradigm shift. You can't predict the future of mining, so you can't create an algo for it. You can, however, use what is effective in this era of mining. DIGI is the answer.

-Fuse

xtrix (Full Member)
December 28, 2014, 04:56:50 PM  #7108

The main question: when will the algo issue be solved? The sooner the better.

ny2cafuse (Legendary)
December 28, 2014, 05:05:09 PM  #7109

> The main question: when will the algo issue be solved? The sooner the better.

This.

/GJ, don't take my earlier post, or what I'm going to say right now, as an attack.

You spent that entire post trying to debunk my team's findings and shoot down the possibility of using DIGI. However, you didn't mention a single plan of action, other than to state that a more complex algo needs to be created.

A plan has been needed for months now. Give us direction. You're the rudder. Don't let us steer ourselves into the rocks.

-Fuse

LTEX (Legendary)
December 28, 2014, 05:27:37 PM (last edited 05:41:10 PM)  #7110

Maybe I have become too commercial over time, but I am also a nerd from hour zero (my first company was actually the third Internet provider in The Netherlands, way back when the WWW wasn't even thought of and we all used Gopher and 300-baud modems).

What hits me reading the above discussions is that we tend to focus more and more on details when we try to solve problems. This is why most of us are discussing detailed parts of the algo. On the other hand, we have a DEV team with a wider focus, overseeing a long-term strategy.

I personally have learned to step back from things first to see the big picture. But I have also experienced that missing the details can have quite crucial effects. This is why I think we need to do both and try to combine experiences.

Having said that, I see why our DEV team wants to take time to create the best possible foundation for their long-term views. I also see why a lot of us are becoming impatient and frustrated over the lack of defense against CleverMining's raids. And I have to agree with both!

Still, my gut feeling tells me it would be wise to accept the fact that DGW hasn't given us what we expected. This is in no way something our DEV team has to answer for; we were all behind this move. I do sense, however, that the majority is coming to terms with the fact that the implementation of DGW has failed.

I'd like to point out that we now have 4 months of real-time testing behind us on our live chain, only to draw the conclusion that the chosen path isn't the right one. My logical question would be: why choose to stay on a path proven wrong while trying to figure out the ultimate alternative, when we have a far better (and well-argued) path we can hop onto almost effortlessly, while doing the same (figuring out the ultimate alternative)?

To cut this long reply short: My vote goes to Digi ASAP...

/GeertJohan (Sr. Member)
December 28, 2014, 06:21:49 PM  #7111

>>> A 25% modified Digishield diff algo is my recommendation.

>> How is the algo modified? I'd be very interested to see how it works!

> You change the max difficulty adjustment. Other than that it's standard DIGI.

Okay.. And it is increased to 25% then? No other changes are made?

>>> While I completely trust GJ, a simulator is just that... a simulation.

>> A simulation is not "just a simulation".. it's a piece of code that executes the exact same maths as the satoshi client does, but does not require actual hashing power.. [...]

> Can it simulate a fork caused by an instant, massive hashrate increase? You also talk about luck with a 100MH miner... well, the 600GH miner is going to have luck too. [...]

Forks aren't caused by the diff readjustment algorithm, so they're currently out of scope for the simulator; they are not part of the difficulty re-adjustment problem at all.
The chances of a 1MH miner having three lucky blocks are a lot larger than the chances of a 600GH miner having three lucky blocks.

>>> 1. Look at network difficulty adjustment in increments instead of minutes or seconds - [...]

>> I don't really agree. The main variable that drives the DGW3 calculation is the average block time for the past n blocks. [...]

> I think you're missing the point here. The reason Kilo brings this up is VERY valid: the more retarget iterations you have in a given period, the more change can happen. [...]

I totally agree that having more adjustments (smaller block times) will give the diff re-adjustment more opportunities to re-adjust, so in theory it will work better.. But this is not something the algorithm itself should be aware of. Also, let's not forget that smaller block times introduce a new problem: more chance of chain splits. (The network must converge within the block time; if it doesn't, a split might occur.. and if the split isn't handled well by the network, it becomes permanent and we're stuck with a fork..)

>>> Digishield is a much simpler and more elegant diff algo that scales easily and effectively, while DGW3 is very complex and does not scale well.

>> I wouldn't say DGW3 is very complex.. It's not even complex relative to Digishield, because the two look a lot alike. Please tell me how you think Digi is more elegant and what you mean by "scaling".. [...]

> Averaging difficulties to calculate the new difficulty is more complex in some ways: it allows more room for error and skewed calculations. It's pretty obvious DGW3 doesn't scale well. [...]
>
> I'll take a stab at what Kilo meant by elegant. DIGI is simple. It's a 1-to-2-line addition to the base LTC difficulty algorithm. [...] When a chain's hashrate grows to 5X its size in a day and stays healthy, I would say it scales pretty darn well.

I know DGW3 doesn't handle the current spikes very well.. But what is this "scaling" factor? And why would Digi handle it well? Is there an example of a coin with Digi that sees x30 hashrate increases (e.g. clevermining) at the same level Guldencoin currently does?

Also, Digishield is not really a 1-to-2-line addition to the base LTC algorithm. Please read https://github.com/digibyte/digibyte/blob/master/src/main.cpp#L1268-L1555
And we don't have 5x hashrate in a day.. we have 30x hashrate in 3 seconds..

>>> I would fork NLG to a 75sec block target time with a 500NLG reward that retargets every block using a custom Digishield implementation.

>> Block reward isn't related to the difficulty algorithm at all.. It wouldn't have anything to do with Digishield...

> Completely related, if you reduce the block time to accommodate an algo that is meant for shorter block times. [...] Normally I would be against reducing block rewards, but halving the block time and the block reward gives you the same coins mined daily, with 100% more difficulty changes in that time.

With the current network stability (which isn't very high) I wouldn't recommend lower block times.. Furthermore, it simply doesn't work, because although lower block times mean more blocks and thus more re-adjustments, it's the same calculations being done... Say we halve the block time to 75 seconds; that means the difficulty would also be halved, so we would get twice the number of blocks within the same amount of time. The income/costs for dedicated miners wouldn't change; they would just get twice the blocks at half the reward per block. This also applies to Clever: they would have to spend the same amount of hashing power to get twice the number of blocks at half the reward per block.. so in the end it's just the same thing.. The only thing that changes is that if the dgw3 or digi impact isn't halved too, it will cause blocktime spikes of twice the size (both up and down).. So you'd have to halve the dgw3/digi impact.. which means that in the end you're left with zero effect (apart from a network that's more vulnerable to timewarp attacks and chain splits). The faster reaction you're talking about would work if the hashrate changed over a number of blocks' time. But with a jump pool, it's instantaneous. If I'm missing something, please explain.

>>> (....) This would reduce any 'stuck network' time by 50%, increase the number of diff increments per 24hrs by 100%, reduce the number of blocks that Clever can mint per 24hrs by 80%, and make NLG a much better network for micro-transactions.

>> How did you figure out these numbers?

> I would say the 50% stuck time is conservative with DIGI. [...] There's no averaging, so DIGI doesn't need to take into account the false lows that DGW3 produces. The reaction is instant instead of delayed by a moving average.
>
> The numbers, while not statistically absolute, are sound. No need to dismiss them.

The fact that Digi doesn't take into account the false lows /does/ sound good.. as that is actually the largest part of the problem we're seeing.

>> Please don't take these remarks personally.. I'm merely pointing out that with the hashrate/sec spikes we're experiencing, neither DGW3 nor Digishield will handle it. [...]

> I don't take the remarks personally at all. I think you are 100% incorrect about DIGI not handling the hashrate spikes, though. We've proved it on a testnet. [...] DIGI is the answer.

> You spent that entire post trying to debunk my team's findings and shoot down the possibility of using DIGI. However, you didn't mention a single plan of action, other than to state that a more complex algo needs to be created.
>
> A plan has been needed for months now. Give us direction. You're the rudder. Don't let us steer ourselves into the rocks.

So, I don't really believe in the halved block times, but investigating Digi further seems to be worth the time..
Before Christmas I had the idea to write 'GDR': "Guldencoin Difficulty Readjustment". Maybe it's a good approach to take what we've learned so far, including the DGW3 and Digi approaches, and work out an algorithm that is better. After all, that is how DGW3 and Digi were born too: by applying lessons learned and incremental development.
I would be extremely happy to do this, but not alone. If anyone can provide me with Digi ported to the simulator, that would be great; it's pretty simple, but will take some time.

If the community doesn't want a new algorithm and places its bet on Digi, then I WILL help with that. I'd be happy to work together with you guys to apply and deploy the software patch. Just as before, I will provide compiled binaries for all major platforms, update the seeds, and send a network alert. I believe last time that went pretty well, and this time we don't have to do a chain fork.

Please understand, though, that we should not do this without being absolutely sure. There are always risks.

Cheers!

strataghyst (Sr. Member)
December 28, 2014, 06:30:31 PM  #7112

> To cut this long reply short: My vote goes to Digi ASAP...

Same here!

BioMike (Legendary)
December 28, 2014, 06:43:32 PM  #7113

/GJ, the reason people vote for DIGI is that it is currently more certain as a "fix" (we don't know if it is 100% effective) than a yet-to-be-developed GDR at some undetermined point in the future.

People have offered as much help as they can, and it seems there is a better-than-DGW3 solution available, but we get very little feedback on these things. Everybody here feels they are poking in the dark, not knowing what to do.

As far as I know, we could provide a DIGI implementation in Go, but nobody here knows that language, so it has been suggested to hire a dev for that.

LTEX (Legendary)
December 28, 2014, 06:54:06 PM  #7114

> /GJ, the reason people vote for DIGI is that it is currently more certain as a "fix" (we don't know if it is 100% effective) than a yet-to-be-developed GDR at some undetermined point in the future.
>
> People have offered as much help as they can, and it seems there is a better-than-DGW3 solution available, but we get very little feedback on these things. Everybody here feels they are poking in the dark, not knowing what to do.
>
> As far as I know, we could provide a DIGI implementation in Go, but nobody here knows that language, so it has been suggested to hire a dev for that.

Referring to my earlier analogy, I would like to ask whether we are going to "prepare a list" this coming "weekend" or do the "job" (fix)...

Please also note that my vote also stands behind G-J and all the tremendous work he is doing, and has been doing, for us!

ny2cafuse (Legendary)
December 28, 2014, 07:38:23 PM  #7115

> Okay.. And it is increased to 25% then? No other changes are made?

The charts I provided were for 20%. 25% is really aggressive, but it works as well. That is the only real change made, besides the retarget time variables.


> Forks aren't caused by the diff readjustment algorithm, so they're currently out of scope for the simulator; they are not part of the difficulty re-adjustment problem at all.
> The chances of a 1MH miner having three lucky blocks are a lot larger than the chances of a 600GH miner having three lucky blocks.

No, forks aren't caused by the algo, but a fork instantly alters the network hashrate when it splits and then rejoins. With enough hashrate you can fork anything. That's something the simulator won't account for that a testnet can.


> I totally agree that having more adjustments (smaller block times) will give the diff re-adjustment more opportunities to re-adjust, so in theory it will work better.. But this is not something the algorithm itself should be aware of. [...]

Correction... the algorithm is aware of the percentage of time elapsed between blocks. That's how DIGI calculates the next difficulty: go over 150 seconds and the difficulty shifts down; go under and the difficulty shoots up, all proportionate to the percentage of time. Time is a huge factor in our issue here, one that has been discussed to the point of completely ignoring blocks under a certain time. By reducing the total block time, you allow for smoother adjustments that happen more often, and DIGI eats that up like roast beast at a Whoville feast.


> I know DGW3 doesn't handle the current spikes very well.. But what is this "scaling" factor? And why would Digi handle it well? Is there an example of a coin with Digi that sees x30 hashrate increases (e.g. clevermining) at the same level Guldencoin currently does?

By scaling, we mean being able to increase in hashrate without issue, whether we're at 10GH total network hashrate or 10PH. However, once we get past a certain point, an event horizon of sorts, you no longer need a readjustment algo like DIGI or DGW3. You just implement the stock algo and let it ride out until the end.

As far as examples go: any coin, pre-hashlets, that implemented DIGI. I'm sure I could track down a list, but I know POT, for instance, did it. They shot themselves in the foot with a flawed block-halving scheme, but their network hashrate increased substantially after their DIGI implementation, and the multipools dropped off almost completely. There are others that had similar results... just search the forums.


> Also, Digishield is not really a 1-to-2-line addition to the base LTC algorithm. Please read https://github.com/digibyte/digibyte/blob/master/src/main.cpp#L1268-L1555
> And we don't have 5x hashrate in a day.. we have 30x hashrate in 3 seconds..

The code you highlighted is not what I based our code on. This is the DIGI function, as it was on our testnet:

Code:
unsigned int static GetNextWorkRequired_DIGI(const CBlockIndex* pindexLast, const CBlockHeader *pblock){

    unsigned int nProofOfWorkLimit = bnProofOfWorkLimit.GetCompact();
    int nHeight = pindexLast->nHeight + 1;

    int64 retargetTimespan = nTargetTimespan;
    int64 retargetSpacing = nTargetSpacing;
    int64 retargetInterval = nInterval;

    retargetInterval = nTargetTimespanNEW / nTargetSpacing;
    retargetTimespan = nTargetTimespanNEW;
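    // Note: with nTargetTimespanNEW equal to nTargetSpacing (both 2.5 minutes,
    // per the constants below), retargetInterval works out to 1, i.e. the
    // difficulty retargets every block.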

    // Genesis block
    if (pindexLast == NULL)
        return nProofOfWorkLimit;

    // Only change once per interval
    if ((pindexLast->nHeight+1) % retargetInterval != 0)
    {
        // Special difficulty rule for testnet:
        if (fTestNet)
        {
            // If the new block's timestamp is more than 3x the target spacing
            // then allow mining of a min-difficulty block.
            if (pblock->nTime > pindexLast->nTime + retargetSpacing*3)
                return nProofOfWorkLimit;
            else
            {
                // Return the last non-special-min-difficulty-rules-block
                const CBlockIndex* pindex = pindexLast;
                while (pindex->pprev && pindex->nHeight % retargetInterval != 0 && pindex->nBits == nProofOfWorkLimit)
                    pindex = pindex->pprev;
                return pindex->nBits;
            }
        }

        return pindexLast->nBits;
    }

    // Dogecoin: This fixes an issue where a 51% attack can change difficulty at will.
    // Go back the full period unless it's the first retarget after genesis. Code courtesy of Art Forz
    int blockstogoback = retargetInterval-1;
    if ((pindexLast->nHeight+1) != retargetInterval)
        blockstogoback = retargetInterval;

    // Go back by what we want to be 14 days worth of blocks
    const CBlockIndex* pindexFirst = pindexLast;
    for (int i = 0; pindexFirst && i < blockstogoback; i++)
        pindexFirst = pindexFirst->pprev;
    assert(pindexFirst);

    // Limit adjustment step
    int64 nActualTimespan = pindexLast->GetBlockTime() - pindexFirst->GetBlockTime();
    printf(" nActualTimespan = %"PRI64d" before bounds\n", nActualTimespan);

    CBigNum bnNew;
    bnNew.SetCompact(pindexLast->nBits);

    //DigiShield implementation - thanks to RealSolid & WDC for this code
    // amplitude filter - thanks to daft27 for this code
    nActualTimespan = retargetTimespan + (nActualTimespan - retargetTimespan)/8;
    printf("DIGISHIELD RETARGET\n");
    if (nActualTimespan < (retargetTimespan - (retargetTimespan/4)) ) nActualTimespan = (retargetTimespan - (retargetTimespan/4));
    if (nActualTimespan > (retargetTimespan + (retargetTimespan/2)) ) nActualTimespan = (retargetTimespan + (retargetTimespan/2));
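    // Net effect: only 1/8 of the raw deviation from the target timespan
    // counts, and the filtered timespan is clamped to [0.75, 1.5] x target
    // before the retarget below.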
    // Retarget

    bnNew *= nActualTimespan;
    bnNew /= retargetTimespan;

    if (bnNew > bnProofOfWorkLimit)
        bnNew = bnProofOfWorkLimit;

    /// debug print
    printf("GetNextWorkRequired RETARGET\n");
    printf("nTargetTimespan = %"PRI64d" nActualTimespan = %"PRI64d"\n" , retargetTimespan, nActualTimespan);
    printf("Before: %08x %s\n", pindexLast->nBits, CBigNum().SetCompact(pindexLast->nBits).getuint256().ToString().c_str());
    printf("After: %08x %s\n", bnNew.GetCompact(), bnNew.getuint256().ToString().c_str());

    return bnNew.GetCompact();


}

You need this:

Code:
static const int64 nTargetTimespan = 3.5 * 24 * 60 * 60; // Guldencoin: 3.5 days
static const int64 nTargetTimespanNEW = 2.5 * 60; // 2.5 minutes: one block, so retarget every block
static const int64 nTargetSpacing = 2.5 * 60; // Guldencoin: 2.5 minutes
static const int64 nInterval = nTargetTimespan / nTargetSpacing;

And you change this to set the max change:

Code:
bnResult = (bnResult * 120) / 100;

That's for 20%.  Replace the 120 with 125 for 25%.

It's pretty straightforward. Compared to the original LTC algo and the GetNextWorkRequired_original function, you can see why I would say it's only 1-2 lines of additional code.

> With the current network stability (which isn't very high) I wouldn't recommend lower block times.. Furthermore, it simply doesn't work, because although lower block times mean more blocks and thus more re-adjustments, it's the same calculations being done... [...]

DIGI accounts for time, so it scales with the block time. The difficulty doesn't change just because you halve the block time. DIGI says: this is what the time should be, and this is what percentage of that time the last block took. If you go over the time, the difficulty goes down; if you go under, the difficulty goes up. With our current code, you would be correct, but I'm not talking about our current code. I'm talking about the code that will fix our situation. Halving the block time isn't a necessity; keep it where it is and use a higher max adjustment. The selling point is that with a shorter block time, you can reduce the max change and get more adjustments per period of time. Again, though, not a necessity. Just an added step toward making things better.


> The fact that Digi doesn't take into account the false lows /does/ sound good.. as that is actually the largest part of the problem we're seeing.

> So, I don't really believe in the halved block times, but investigating Digi further seems to be worth the time..
> Before Christmas I had the idea to write 'GDR': "Guldencoin Difficulty Readjustment". [...]
> I would be extremely happy to do this, but not alone. If anyone can provide me with Digi ported to the simulator, that would be great; it's pretty simple, but will take some time.
>
> If the community doesn't want a new algorithm and places its bet on Digi, then I WILL help with that. [...]
>
> Please understand, though, that we should not do this without being absolutely sure. There are always risks.
>
> Cheers!

Please don't let the port to the simulator hold up the talks. While I am in agreement that the simulator should be used, I need to know if it is ready to take into account everything needed for real-life results. If it isn't ready to simulate every variable that matters, then we need to put it aside for these discussions. If that's not an option, we need to know when additional features will be implemented so we can press forward.

Additionally, the simulator needs to be verified against a chain.  I would seriously suggest creating a post in the development sub-forum here to get it out to the public and let them develop and test it for you.  There is no reason why it shouldn't be made public.  It will strengthen development of the simulator and it will bring publicity to NLG.
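
A sketch of what that verification could look like (a hypothetical harness, not existing simulator code; BlockRecord, the placeholder Retarget rule, and the error margin are all illustrative): replay the recorded testnet timestamps through the retarget routine and compare its difficulty sequence against what the chain actually did.

Code:
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

// One recorded testnet block: its timestamp and the difficulty the real
// client assigned to the next block (both read off the testnet chain).
struct BlockRecord { int64_t nTime; double nextDiff; };

// Placeholder for the retarget rule under test (DIGI, DGW3, ...): here a
// trivial proportional rule, just so the harness is self-contained.
static double Retarget(double prevDiff, int64_t actual, int64_t target) {
    if (actual < 1) actual = 1;
    return prevDiff * (double)target / (double)actual;
}

// Replay the recorded spacings through the retarget code and report the worst
// relative deviation from what the testnet actually produced.
static double MaxRelativeError(const std::vector<BlockRecord>& chain,
                               double startDiff, int64_t targetSpacing) {
    double diff = startDiff, worst = 0.0;
    for (size_t i = 1; i < chain.size(); ++i) {
        diff = Retarget(diff, chain[i].nTime - chain[i - 1].nTime, targetSpacing);
        double err = std::fabs(diff - chain[i].nextDiff) / chain[i].nextDiff;
        if (err > worst) worst = err;
    }
    return worst; // trust the simulator only if this stays under the agreed margin
}

int main() {
    // Tiny fabricated example: three blocks at perfect 150-second spacing.
    std::vector<BlockRecord> chain = {{0, 1000.0}, {150, 1000.0}, {300, 1000.0}};
    std::printf("max relative error: %.4f\n", MaxRelativeError(chain, 1000.0, 150));
    return 0;
}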

I'll reiterate this point, though:

You can't predict the future of mining, so you can't future-proof the algorithm. I've been in crypto for a little over 2 years now, active here for 90% of that time. Things change dramatically over small timeframes in crypto. People need to realize that no algorithm or solution will ever be the final answer. Coins need to adapt to the changes; it's evolution at the code level. If you don't adapt and overcome, you can't survive. But you also can't adapt to something that hasn't happened yet, because the world can take the opposite turn, and then you'll be left exactly where we are right now.

Don't be afraid to make future changes. It's not a sign of weak coin planning, but rather a sign of active adaptation to an ever-changing landscape.

-Fuse

Digithusiast (Hero Member)
December 28, 2014, 07:54:22 PM  #7116

Could you please add the voting option "Change to M7M cpu-mining algo" too?

BioMike (Legendary)
December 28, 2014, 08:02:25 PM  #7117

> Could you please add the voting option "Change to M7M cpu-mining algo" too?

Scrypt is not under discussion. It isn't broken, so why change it?

Digithusiast (Hero Member)
December 28, 2014, 08:03:55 PM  #7118

It ensures fair mining and kicks out botnets and mining farms. I believe that is the whole clevermining issue here.

ny2cafuse (Legendary)
December 28, 2014, 08:07:09 PM  #7119

>> Could you please add the voting option "Change to M7M cpu-mining algo" too?

> Scrypt is not under discussion. It isn't broken, so why change it?

Exactly this ^

Absolutely no need to change from scrypt. Doing so would alienate the majority of dedicated miners. We can't make a change like this just because you made the mistake of buying rackmount blade servers instead of ASICs.

-Fuse

Digithusiast (Hero Member)
December 28, 2014, 08:17:54 PM (last edited 08:30:02 PM)  #7120

ny2, it is not about me, and I did not buy the equipment you mention.
I have no personal-gain motive for changing the algo to M7M. I think it is the best solution to a problem that has been dragging on for months now.
The M7M algo has proven its value in keeping mining fair, keeping out mining farms, and preventing multipools like Clever from taking over.
Moreover, XMG dev Joe, who implemented M7M very successfully, has already offered to help.
So it seems like a good solution to the problem to me.
