Bitcoin Forum
May 07, 2024, 08:14:31 PM *
News: Latest Bitcoin Core release: 27.0 [Torrent]
 
Pages: « 1 ... 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 [271] 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 ... 676 »
Author Topic: NA  (Read 893540 times)
ny2cafuse
Legendary
*
Offline Offline

Activity: 1582
Merit: 1002


HODL for life.


View Profile
October 21, 2014, 09:04:54 PM
 #5401

I think you misunderstand. The diff will rise by at most a factor of 1.2 between blocks. So if the diff is 300, the next block can be at most 360.

I definitely misread your text lol

This is a decent approach.  As long as we don't go from 250 to 750 in a single jump, you should see better block times.  Additionally, as the difficulty drops back down, we'll get to the point where we're riding a sweet spot where it's not profitable for the miners doing short-term profit calculations (aka multis).

-Fuse

Community > Devs
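The per-block cap being discussed can be sketched as a simple clamp. This is a hypothetical illustration in Python, not Guldencoin's actual retarget code; the function name and the 0.8 downward factor are my assumptions:

```python
def clamp_next_difficulty(prev_diff, computed_diff, max_up=1.2, max_down=0.8):
    """Limit how far the next difficulty may move relative to the
    previous block (illustrative sketch, not the real retarget code)."""
    upper = prev_diff * max_up
    lower = prev_diff * max_down
    return max(lower, min(computed_diff, upper))

# With the cap at 1.2x, a diff of 300 can rise to at most 360,
# so a 250 -> 750 jump is impossible in a single block.
capped = clamp_next_difficulty(300, 750)
```

The point of clamping rather than rejecting is that the computed value still pulls the difficulty in the right direction, just never by more than the allowed factor per block.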
LTEX
Legendary
*
Offline Offline

Activity: 1023
Merit: 1000


ltex.nl


View Profile WWW
October 21, 2014, 09:32:24 PM
 #5402

I think you misunderstand. The diff will rise by at most a factor of 1.2 between blocks. So if the diff is 300, the next block can be at most 360.

I definitely misread your text lol

This is a decent approach.  As long as we don't go from 250 to 750 in a single jump, you should see better block times.  Additionally, as the difficulty drops back down, we'll get to the point where we're riding a sweet spot where it's not profitable for the miners doing short-term profit calculations (aka multis).

-Fuse

I feel like a reward is due soon!  Grin

A fool will just look at the finger, even if it points to paradise!
24Kilo
Sr. Member
****
Offline Offline

Activity: 672
Merit: 250


View Profile
October 21, 2014, 10:00:38 PM
 #5403

To illustrate the behaviour we experience, look at the blocks below. At 137136 we had a diff of 412, and because the preceding block took two hours the diff dropped to 118. But the max drop is 1/3, so it should have dropped to 138, not 118. The diff keeps lowering, and despite block times of tens of seconds it lowers and lowers, until 137159, when the calculation says "whoa fellas, this is too fast" and raises the diff from 29.8 to 162.1, not exactly the max three times that was in the DGW design... So why is the diff adjusted by more than the DGW design allows? And second, it could behave much better with a weighted average instead of a plain one.

+1 for the detailed info, mate.  I'm with you 100%.

I really do believe that either looking at fewer blocks for the average, or creating a weighted average, is the way to go.  Additionally, there needs to be a limit on the amount of difficulty increase/decrease so we're not throwing the difficulty all over the place.

Again- less extreme changes that happen more often.

-Fuse

I think we're on a good path here!

I've actually started implementing a new algorithm, from scratch. It performs faster re-adjustment, limited to a 1.2x or 0.8x difficulty change between individual blocks. But it also limits the difficulty change to 3.0x or 0.33x compared to the average difficulty of the last 120 blocks. This means that the diff can rise ~3.0x in 6 blocks, and it can fall to ~0.33x in 5 blocks. In these formulas the impact of the new blocks themselves is not included in the 120-block average.
The idea behind this is that it will be able to handle large joins and leaves, but won't be tricked into settling on a high difficulty too fast.

Thoughts?

I'll share the code when I'm happy with initial implementation.
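A minimal sketch of the dual limit described above, as I read it (hypothetical Python, not the code GJ is writing; the function and parameter names are made up, and `diff_history` is assumed to hold prior difficulties, newest last):

```python
def next_difficulty(computed, diff_history, window=120,
                    block_up=1.2, block_down=0.8,
                    window_up=3.0, window_down=1.0 / 3.0):
    """Clamp a freshly computed difficulty twice: first against the
    previous block, then against the last 120 blocks' average
    (sketch only)."""
    prev = diff_history[-1]
    recent = diff_history[-window:]
    avg = sum(recent) / len(recent)
    # Per-block limit: at most 1.2x up / 0.8x down vs. the last block.
    new = max(prev * block_down, min(computed, prev * block_up))
    # Window limit: at most 3.0x up / ~0.33x down vs. the 120-block average.
    return max(avg * window_down, min(new, avg * window_up))

# Six maximal rises from a flat history at 100 reach ~1.2**6 = 2.99x,
# which is how the diff can rise ~3.0x in 6 blocks.
history = [100.0] * 120
for _ in range(6):
    history.append(next_difficulty(10_000.0, history))
```

The window clamp is what keeps a jump pool from walking the difficulty arbitrarily high: after six maximal steps the per-block limit collides with the 3.0x-of-average ceiling.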


Please send me your code by PM to test it for time-based attacks... remember that the Achilles heel of difficulty retargeting algos that adjust every block is vulnerability to time-based attacks... this is why KGW had to be recoded... I would not try to reinvent the wheel with this... a Digishield variant is the best next step in finding a solution... I know this algo has pretty much resolved the multi-pool problem for POT and has proven to be quite resistant to attack.

This is the approach Criptoe is working on at present.
c_e_d
Member
**
Offline Offline

Activity: 100
Merit: 10


View Profile
October 21, 2014, 10:49:04 PM
 #5404


I've actually started implementing a new algorithm, from scratch. It performs faster re-adjustment, limited to a 1.2x or 0.8x difficulty change between individual blocks. But it also limits the difficulty change to 3.0x or 0.33x compared to the average difficulty of the last 120 blocks. This means that the diff can rise ~3.0x in 6 blocks, and it can fall to ~0.33x in 5 blocks. In these formulas the impact of the new blocks themselves is not included in the 120-block average.
The idea behind this is that it will be able to handle large joins and leaves, but won't be tricked into settling on a high difficulty too fast.

Thoughts?

DGW3 was developed at a time and for a network that had far smaller spikes than we see today.
I still think what we need is a faster reaction to the hash rate spikes and drops we see.
The more time it takes to settle in to the desired diff for the actual hash rate, the more time we give pools to take advantage of it; and after a drop, normal miners will suffer for the time it takes to bring the diff back down.

The idea I wrote about a few pages back and some others picked up too is to give the newer blocks in the interval a higher weight than the older ones.

Let's say we split the 24 blocks into 4 parts when calculating nActualTimespan.

older ---> newer
part1, part2, part3, part4 (each part 6 blocks)

a more conservative approach:
part1: block times * 1   (counted like 06)
part2: block times * 2   (counted like 12)
part3: block times * 3   (counted like 18)
part4: block times * 4   (counted like 24)

sum / 60 = weighted average
weighted average * 24 = nActualTimespanWeighted

a more aggressive approach:
part1: block times * 2^0  (=block times * 1)   (counted like 06)
part2: block times * 2^1  (=block times * 2)   (counted like 12)
part3: block times * 2^2  (=block times * 4)   (counted like 24)
part4: block times * 2^3  (=block times * 8)   (counted like 48)

sum / 90 = weighted average
weighted average * 24 = nActualTimespanWeighted

Splitting it into more parts would make it react even faster, both on the way up and on the way down.
Giving the newest one or two blocks additional weight can speed it up even more; but giving the very latest blocks too much weight could open the door to an attack.

It should be an easy mod to our DGW3 diff algo with lower risk than a completely new algo.
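The two weighting schemes above can be written out as follows. This is an illustrative Python sketch, not the DGW3 source; `block_times` is assumed to be the 24 solve times in seconds, oldest first:

```python
def weighted_timespan(block_times, weights):
    """Weight 24 block times in 4 parts of 6 blocks (oldest part first),
    then rescale back to a 24-block span, as in the post above."""
    assert len(block_times) == 24 and len(weights) == 4
    total = 0.0
    for part, weight in enumerate(weights):
        total += weight * sum(block_times[part * 6:(part + 1) * 6])
    weighted_avg = total / (6 * sum(weights))  # the "sum / 60" or "sum / 90" step
    return weighted_avg * 24                   # nActualTimespanWeighted

fast_tail = [150.0] * 18 + [30.0] * 6  # newest 6 blocks mined very fast

on_target = weighted_timespan([150.0] * 24, [1, 2, 3, 4])  # 3600s, the plain target
conservative = weighted_timespan(fast_tail, [1, 2, 3, 4])  # 2448s
aggressive = weighted_timespan(fast_tail, [1, 2, 4, 8])    # 2064s
```

With a steady 150s history both variants return the 3600s target, so they are neutral at equilibrium; with a fast tail the aggressive weighting reports the shorter timespan (2064s vs. 2448s, where a plain average would give 2880s), so the diff rises sooner.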
ny2cafuse
Legendary
*
Offline Offline

Activity: 1582
Merit: 1002


HODL for life.


View Profile
October 22, 2014, 12:17:59 AM
 #5405

DGW3 was developed at a time and for a network that had far smaller spikes than we see today.
I still think what we need is a faster reaction to the hash rate spikes and drops we see.
The more time it takes to settle in to the desired diff for the actual hash rate, the more time we give pools to take advantage of it; and after a drop, normal miners will suffer for the time it takes to bring the diff back down.

The idea I wrote about a few pages back and some others picked up too is to give the newer blocks in the interval a higher weight than the older ones.

Let's say we split the 24 blocks into 4 parts when calculating nActualTimespan.

older ---> newer
part1, part2, part3, part4 (each part 6 blocks)

a more conservative approach:
part1: block times * 1   (counted like 06)
part2: block times * 2   (counted like 12)
part3: block times * 3   (counted like 18)
part4: block times * 4   (counted like 24)

sum / 60 = weighted average
weighted average * 24 = nActualTimespanWeighted

a more aggressive approach:
part1: block times * 2^0  (=block times * 1)   (counted like 06)
part2: block times * 2^1  (=block times * 2)   (counted like 12)
part3: block times * 2^2  (=block times * 4)   (counted like 24)
part4: block times * 2^3  (=block times * 8)   (counted like 48)

sum / 90 = weighted average
weighted average * 24 = nActualTimespanWeighted

Splitting it into more parts would make it react even faster, both on the way up and on the way down.
Giving the newest one or two blocks additional weight can speed it up even more; but giving the very latest blocks too much weight could open the door to an attack.

It should be an easy mod to our DGW3 diff algo with lower risk than a completely new algo.

I think this is getting more complicated than it needs to be.  If we're looking at a weighted average, like I suggested, take the first 6 blocks and then the next 18 counted as one block.  I don't see how additional weight on the latest blocks would allow for an exploit.  Care to elaborate?

I do agree though that the changes need to happen faster, much like I originally pointed out here: https://bitcointalk.org/index.php?topic=554412.msg8992812;topicseen#msg8992812.  The POT chart there is a solid representation of what we should be trying to achieve.

-Fuse

Community > Devs
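Fuse's simpler split (the newest 6 blocks kept individually, the older 18 collapsed into a single averaged entry) might be sketched like this; hypothetical Python, with the rescale-to-24 step being my assumption about how it would plug into DGW3:

```python
def fuse_timespan(block_times):
    """Average the oldest 18 blocks into one pseudo-block, keep the
    newest 6 as-is (7 entries total), then rescale to a 24-block span."""
    assert len(block_times) == 24
    older, newest = block_times[:18], block_times[18:]
    entries = [sum(older) / 18.0] + list(newest)
    return sum(entries) / len(entries) * 24  # nActualTimespan equivalent

steady = fuse_timespan([150.0] * 24)             # stays at the 3600s target
burst = fuse_timespan([150.0] * 18 + [30.0] * 6)  # reacts hard to a fast tail
```

Note the design trade-off: this gives the newest 6 blocks an effective weight of 6/7 (about 86%), much heavier than either of c_e_d's schemes, which is exactly the territory where his warning about over-weighting recent blocks starts to apply.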
c_e_d
Member
**
Offline Offline

Activity: 100
Merit: 10


View Profile
October 22, 2014, 01:45:10 AM
 #5406


I think this is getting more complicated than it needs to be.  If we're looking at a weighted average, like I suggested, take the first 6 blocks and then the next 18 counted as one block.  I don't see how additional weight on the latest blocks would allow for an exploit.  Care to elaborate?

I do agree though that the changes need to happen faster, much like I originally pointed out here: https://bitcointalk.org/index.php?topic=554412.msg8992812;topicseen#msg8992812.  The POT chart there is a solid representation of what we should be trying to achieve.

-Fuse

We basically follow the same path. The more weight you give to the newer blocks, the quicker it reacts to the hash rate change.
Neither your idea nor mine is a big deal to implement; only a few lines of code in the loop that calculates the actual time and average.
From how I understand it, you are weighting it as the 6 new blocks against 1 entry averaged over the 18 older ones.
My approach is to increase the weight in steps the closer you come to the newest block, giving the most relevant set of blocks a weight of 40% or 53%.
Which one is better? We can speculate but best would be to see some numbers from test cases or network tests.

The danger of putting too much weight on the latest block:
Let's take it to an extreme and use only the last 1 or 2 blocks. A pool can easily cherry-pick those blocks, leave for 2 blocks, come back... and repeat. The diff would jump up and down like crazy.
So it is important to fine-tune the settings and find a good balance between instant reaction and smoothing the diff changes.

(I had a quick look at NiteGravityWell and without digging too deep into it, it looks like a slightly modded KGW.)
forzendiablo
Legendary
*
Offline Offline

Activity: 1526
Merit: 1000


the grandpa of cryptos


View Profile
October 22, 2014, 02:08:43 AM
 #5407

i really like this coin's idea

yolo
ny2cafuse
Legendary
*
Offline Offline

Activity: 1582
Merit: 1002


HODL for life.


View Profile
October 22, 2014, 02:11:55 AM
 #5408

The danger of putting too much weight on the latest block:
Let's take it to an extreme and use only the last 1 or 2 blocks. A pool can easily cherry-pick those blocks, leave for 2 blocks, come back... and repeat. The diff would jump up and down like crazy.
So it is important to fine-tune the settings and find a good balance between instant reaction and smoothing the diff changes.

I would never recommend a 1-2 block focus.  That would be crazypants thinking.

I seriously think 6 is a nice even number, and it falls in line with about the average number of blocks that Clever rapes at any given time.  You put more focus on those 6 blocks, instead of 24 evenly, and you'll see a big change in Clever's ability to own 90% of the network.

Of course, GJ has some code on the table right now.  I guess we'll all just need to wait to see what it looks like before we try to push one way or another.

-Fuse

Community > Devs
strataghyst
Sr. Member
****
Offline Offline

Activity: 393
Merit: 250


View Profile
October 22, 2014, 04:53:29 AM
 #5409

The danger of putting too much weight on the latest block:
Let's take it to an extreme and use only the last 1 or 2 blocks. A pool can easily cherry-pick those blocks, leave for 2 blocks, come back... and repeat. The diff would jump up and down like crazy.
So it is important to fine-tune the settings and find a good balance between instant reaction and smoothing the diff changes.

I would never recommend a 1-2 block focus.  That would be crazypants thinking.

I seriously think 6 is a nice even number, and it falls in line with about the average number of blocks that Clever rapes at any given time.  You put more focus on those 6 blocks, instead of 24 evenly, and you'll see a big change in Clever's ability to own 90% of the network.

Of course, GJ has some code on the table right now.  I guess we'll all just need to wait to see what it looks like before we try to push one way or another.

-Fuse

Nice to see some constructive consensus  Grin
ShopemNL
Legendary
*
Offline Offline

Activity: 1025
Merit: 1001



View Profile WWW
October 22, 2014, 06:04:21 AM
 #5410


thsminer
Sr. Member
****
Offline Offline

Activity: 332
Merit: 250



View Profile
October 22, 2014, 07:58:31 AM
 #5411

The danger of putting too much weight on the latest block:
Let's take it to an extreme and use only the last 1 or 2 blocks. A pool can easily cherry-pick those blocks, leave for 2 blocks, come back... and repeat. The diff would jump up and down like crazy.
So it is important to fine-tune the settings and find a good balance between instant reaction and smoothing the diff changes.

I would never recommend a 1-2 block focus.  That would be crazypants thinking.

I seriously think 6 is a nice even number, and it falls in line with about the average number of blocks that Clever rapes at any given time.  You put more focus on those 6 blocks, instead of 24 evenly, and you'll see a big change in Clever's ability to own 90% of the network.

Of course, GJ has some code on the table right now.  I guess we'll all just need to wait to see what it looks like before we try to push one way or another.

-Fuse

I'm with CED and Fuse on this; I would say three or four blocks minimum, because otherwise the diff will be all over the place. I'm also missing some structure in this discussion. I put down a list yesterday with strange diff values; imho that's problem number one. If we don't solve that problem, every solution will probably show the same weird behaviour that KGW and DGW are showing.

Second, if you want to modify the algo you need to identify the problem properly, and with the problem description you can work out a solution. Without a proper problem description there is a huge risk of drifting away from the goal. Why take 6 blocks, or 10, or 24? Because that's the number of doors in your office? Or is it the number of blocks that are being mined fast? So in my opinion the path could look like this:

phase 1
- identify the cause of the swings beyond 3x and 1/3x
- work out a solution for point 1 first
- if the fix for point 1 is outside DGW code, see if DGW is acting as expected in the first place
phase 2
- identify the problem and describe it properly (yeah I know it's the jump pool, but what's really the underlying thing that causes the trouble)
- work out a modification of the algo by testing it on past hashrate swings and see if it smooths those out
- test the modification thoroughly for vulnerabilities and flaws
- implement it












 
strataghyst
Sr. Member
****
Offline Offline

Activity: 393
Merit: 250


View Profile
October 22, 2014, 08:27:32 AM
 #5412

To illustrate the behaviour we experience, look at the blocks below. At 137136 we had a diff of 412, and because the preceding block took two hours the diff dropped to 118. But the max drop is 1/3, so it should have dropped to 138, not 118. The diff keeps lowering, and despite block times of tens of seconds it lowers and lowers, until 137159, when the calculation says "whoa fellas, this is too fast" and raises the diff from 29.8 to 162.1, not exactly the max three times that was in the DGW design... So why is the diff adjusted by more than the DGW design allows? And second, it could behave much better with a weighted average instead of a plain one.

 
137184 2014-10-20 19:37:56 1 1000 227.657 307183000 64.5077 4675.82 61.0856%
137183 2014-10-20 19:37:43 1 1000 188.93 307182000 64.5077 4675.82 61.0856%
137182 2014-10-20 19:37:14 6 45685.2034 431.768 307181000 64.5076 4675.82 61.0858%
137181 2014-10-20 19:01:13 1 1000 374.057 307180000 64.4828 4675.79 61.095%
137180 2014-10-20 19:00:42 2 1422.978 332.835 307179000 64.4827 4675.79 61.0951%
137179 2014-10-20 18:53:02 1 1000 302.237 307178000 64.4776 4675.79 61.097%
137178 2014-10-20 18:52:47 1 1000 276.263 307177000 64.4776 4675.79 61.0971%
137177 2014-10-20 18:52:02 1 1000 256.457 307176000 64.4773 4675.79 61.0973%
137176 2014-10-20 18:50:03 1 1000 240.951 307175000 64.4761 4675.78 61.0978%
137175 2014-10-20 18:49:57 3 1975.29793541 228.569 307174000 64.4763 4675.78 61.0978%
137174 2014-10-20 18:47:50 1 1000 218.365 307173000 64.475 4675.78 61.0984%
137173 2014-10-20 18:47:41 1 1000 210.006 307172000 64.4751 4675.78 61.0984%
137172 2014-10-20 18:47:39 1 1000 203.035 307171000 64.4753 4675.78 61.0984%
137171 2014-10-20 18:47:24 1 1000 197.242 307170000 64.4753 4675.78 61.0985%
137170 2014-10-20 18:47:19 1 1000 192.377 307169000 64.4755 4675.78 61.0985%
137169 2014-10-20 18:46:52 1 1000 188.239 307168000 64.4754 4675.78 61.0986%
137168 2014-10-20 18:46:48 1 1000 184.775 307167000 64.4756 4675.78 61.0986%
137167 2014-10-20 18:46:44 1 1000 181.831 307166000 64.4757 4675.78 61.0987%
137166 2014-10-20 18:46:41 1 1000 179.32 307165000 64.4759 4675.78 61.0987%
137165 2014-10-20 18:46:29 1 1000 177.128 307164000 64.476 4675.78 61.0987%
137164 2014-10-20 18:45:32 1 1000 175.297 307163000 64.4755 4675.78 61.099%
137163 2014-10-20 18:45:08 1 1000 173.454 307162000 64.4755 4675.78 61.0991%
137162 2014-10-20 18:43:29 2 6641.46782322 171.971 307161000 64.4745 4675.78 61.0995%
137161 2014-10-20 18:42:48 1 1000 170.843 307160000 64.4744 4675.78 61.0996%
137160 2014-10-20 18:42:27 1 1000 162.133 307159000 64.4744 4675.78 61.0997%
137159 2014-10-20 18:42:16 1 1000 29.857 307158000 64.4744 4675.78 61.0997%
137158 2014-10-20 18:41:59 1 1000 32.026 307157000 64.4744 4675.78 61.0998%
137157 2014-10-20 18:41:53 1 1000 34.314 307156000 64.4746 4675.78 61.0998%
137156 2014-10-20 18:41:40 1 1000 36.323 307155000 64.4746 4675.78 61.0999%
137155 2014-10-20 18:41:37 1 1000 38.742 307154000 64.4748 4675.78 61.0999%
137154 2014-10-20 18:41:34 1 1000 37.496 307153000 64.475 4675.78 61.0999%
137153 2014-10-20 18:41:18 1 1000 40.622 307152000 64.475 4675.78 61.1%
137152 2014-10-20 18:40:59 1 1000 43.965 307151000 64.475 4675.78 61.1001%
137151 2014-10-20 18:40:52 1 1000 47.547 307150000 64.4751 4675.78 61.1001%
137150 2014-10-20 18:40:48 1 1000 50.685 307149000 64.4753 4675.78 61.1001%
137149 2014-10-20 18:40:47 1 1000 54.738 307148000 64.4755 4675.78 61.1001%
137148 2014-10-20 18:40:43 1 1000 58.675 307147000 64.4757 4675.78 61.1001%
137147 2014-10-20 18:40:31 1 1000 63.313 307146000 64.4757 4675.78 61.1002%
137146 2014-10-20 18:40:03 1 1000 68.027 307145000 64.4756 4675.78 61.1003%
137145 2014-10-20 18:39:48 1 1000 72.557 307144000 64.4757 4675.78 61.1004%
137144 2014-10-20 18:39:44 1 1000 78.178 307143000 64.4758 4675.78 61.1004%
137143 2014-10-20 18:39:00 1 1000 83.37 307142000 64.4755 4675.78 61.1006%
137142 2014-10-20 18:38:55 1 1000 88.63 307141000 64.4757 4675.78 61.1006%
137141 2014-10-20 18:38:54 1 1000 92.904 307140000 64.4759 4675.78 61.1006%
137140 2014-10-20 18:38:49 1 1000 99.356 307139000 64.476 4675.78 61.1006%
137139 2014-10-20 18:38:33 1 1000 97.431 307138000 64.476 4675.78 61.1007%
137138 2014-10-20 18:37:44 1 1000 105.188 307137000 64.4757 4675.78 61.1009%
137137 2014-10-20 18:37:00 8 92544.22974057 118.799 307136000 64.4754 4675.78 61.1011%
137136 2014-10-20 18:36:42 3 8363.86928814 412.358 307135000 64.4763 4675.78 61.1006%
 



For the people who missed the list thsminer is talking about
veertje
Legendary
*
Offline Offline

Activity: 952
Merit: 1000


View Profile
October 22, 2014, 09:23:24 AM
 #5413

https://www.thunderclap.it/projects/17235-de-gulden-is-weer-terug

Only 2 to go...  Smiley
/GeertJohan
Sr. Member
****
Offline Offline

Activity: 409
Merit: 250


View Profile
October 22, 2014, 09:29:29 AM
 #5414

The danger of putting too much weight on the latest block:
Let's take it to an extreme and use only the last 1 or 2 blocks. A pool can easily cherry-pick those blocks, leave for 2 blocks, come back... and repeat. The diff would jump up and down like crazy.
So it is important to fine-tune the settings and find a good balance between instant reaction and smoothing the diff changes.

I would never recommend a 1-2 block focus.  That would be crazypants thinking.

I seriously think 6 is a nice even number, and it falls in line with about the average number of blocks that Clever rapes at any given time.  You put more focus on those 6 blocks, instead of 24 evenly, and you'll see a big change in Clever's ability to own 90% of the network.

Of course, GJ has some code on the table right now.  I guess we'll all just need to wait to see what it looks like before we try to push one way or another.

-Fuse

I'm with CED and Fuse on this; I would say three or four blocks minimum, because otherwise the diff will be all over the place. I'm also missing some structure in this discussion. I put down a list yesterday with strange diff values; imho that's problem number one. If we don't solve that problem, every solution will probably show the same weird behaviour that KGW and DGW are showing.

Second, if you want to modify the algo you need to identify the problem properly, and with the problem description you can work out a solution. Without a proper problem description there is a huge risk of drifting away from the goal. Why take 6 blocks, or 10, or 24? Because that's the number of doors in your office? Or is it the number of blocks that are being mined fast? So in my opinion the path could look like this:

phase 1
- identify the cause of the swings beyond 3x and 1/3x
It is limited to a 3x and 1/3x diff change. But the new diff is calculated from the average diff over the previous 24 blocks, so you can't compare the diff to the previous block as displayed in the list. Please see https://github.com/nlgcoin/guldencoin/blob/master/src/main.cpp#L1286 and line 1299 and lines 1303-1306.
- work out a solution for point 1 first
We could change it so the diff is calculated only from the diff of the previous block. Then the diff change between individual blocks could never exceed 3x or 0.33x. But that also means the diff can go up 3^3 (27x!!!) when there are 3 lucky/fast blocks. I don't think that is a smart thing to do.
- if the fix for point 1 is outside DGW code, see if DGW is acting as expected in the first place
I don't think point 1 needs to be fixed. I'm not aware of any code influencing the difficulty outside DGW. DGW is acting as it should (as it does in other coins), but simply can't handle these hash/sec spikes.
phase 2
- identify the problem and describe it properly (yeah I know it's the jump pool, but what's really the underlying thing that causes the trouble)
You're right. It is good to have a verbose explanation with some examples, as long as we keep in mind that diff readjustment should work for more problems than only this one.
- work out a modification of the algo by testing it on past hashrate swings and see if it smooths those out
Testing is very important. We shouldn't release a new algo too fast. Placing a new algo on the existing chain doesn't give a proper indication though, as it does not influence block times and doesn't change multipool behaviour. It would be better to simulate several cases of hashrate joins/leaves. I have done this before with DGW3, but never with the amounts that we currently see. This time we must simulate extreme hashrates, even multiple times more than we currently see with clevermining. I plan to release the software I used for simulations; it just needs some polishing and persistence of scenarios (currently the user must enter the block times manually, each time). That way everyone can apply and test changes locally.
- test the modification thoroughly for vulnerabilities and flaws
Very important! I think 24Kilo can be of great help here.
- implement it
This time deployment will be very smooth. We can release a new algo one week in advance with a hardcoded block at which all nodes switch algos.

 

I really like the weighted-average ideas being discussed here. I had the idea of a weighted average in my mind, but I hadn't worked it out yet. It's great to see the discussion here, please keep it going! It is definitely influencing the code I'm writing.

Please understand that when I release the code I have so far, it's not a final version. We do this together. It will be our algorithm, created by the Guldencoin community. So the code is open for feedback and changes. This is the only way we can fix this problem. We all have great ideas and knowledge. When we keep combining our efforts we will create the best possible algorithm.
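GJ's two points, that DGW3 clamps against the 24-block average rather than the previous block, and that a pure per-block 3x clamp would compound, can be demonstrated with a few lines of hypothetical Python (the function names are made up; see the linked main.cpp for the real code):

```python
def clamp_vs_previous(prev, computed, limit=3.0):
    """The rejected idea: clamp against the previous block only."""
    return max(prev / limit, min(computed, prev * limit))

def clamp_vs_average(history, computed, window=24, limit=3.0):
    """Roughly what DGW3 does: clamp against the 24-block average,
    so a single block may still jump >3x vs. its direct predecessor."""
    recent = history[-window:]
    avg = sum(recent) / len(recent)
    return max(avg / limit, min(computed, avg * limit))

# Three lucky blocks under the per-block clamp compound to 3**3 = 27x:
diff = 100.0
for _ in range(3):
    diff = clamp_vs_previous(diff, 10_000.0)  # 300 -> 900 -> 2700

# Under the average-based clamp, one block after a low-diff run can
# legitimately jump far more than 3x vs. the previous block alone,
# which is what the 29.8 -> 162.1 step in the posted block list was:
history = [100.0] * 23 + [30.0]
jump = clamp_vs_average(history, 10_000.0)  # ~291, nearly 10x the last block's 30
```

This is why the per-block jumps in the explorer list can exceed 3x without DGW misbehaving: the 3x bound holds against the trailing average, not the single preceding block.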
strataghyst
Sr. Member
****
Offline Offline

Activity: 393
Merit: 250


View Profile
October 22, 2014, 09:46:04 AM
 #5415

The danger of putting too much weight on the latest block:
Let's take it to an extreme and use only the last 1 or 2 blocks. A pool can easily cherry-pick those blocks, leave for 2 blocks, come back... and repeat. The diff would jump up and down like crazy.
So it is important to fine-tune the settings and find a good balance between instant reaction and smoothing the diff changes.

I would never recommend a 1-2 block focus.  That would be crazypants thinking.

I seriously think 6 is a nice even number, and it falls in line with about the average number of blocks that Clever rapes at any given time.  You put more focus on those 6 blocks, instead of 24 evenly, and you'll see a big change in Clever's ability to own 90% of the network.

Of course, GJ has some code on the table right now.  I guess we'll all just need to wait to see what it looks like before we try to push one way or another.

-Fuse

I'm with CED and Fuse on this; I would say three or four blocks minimum, because otherwise the diff will be all over the place. I'm also missing some structure in this discussion. I put down a list yesterday with strange diff values; imho that's problem number one. If we don't solve that problem, every solution will probably show the same weird behaviour that KGW and DGW are showing.

Second, if you want to modify the algo you need to identify the problem properly, and with the problem description you can work out a solution. Without a proper problem description there is a huge risk of drifting away from the goal. Why take 6 blocks, or 10, or 24? Because that's the number of doors in your office? Or is it the number of blocks that are being mined fast? So in my opinion the path could look like this:

phase 1
- identify the cause of the swings beyond 3x and 1/3x
It is limited to a 3x and 1/3x diff change. But the new diff is calculated from the average diff over the previous 24 blocks, so you can't compare the diff to the previous block as displayed in the list. Please see https://github.com/nlgcoin/guldencoin/blob/master/src/main.cpp#L1286 and line 1299 and lines 1303-1306.
- work out a solution for point 1 first
We could change it so the diff is calculated only from the diff of the previous block. Then the diff change between individual blocks could never exceed 3x or 0.33x. But that also means the diff can go up 3^3 (27x!!!) when there are 3 lucky/fast blocks. I don't think that is a smart thing to do.
- if the fix for point 1 is outside DGW code, see if DGW is acting as expected in the first place
I don't think point 1 needs to be fixed. I'm not aware of any code influencing the difficulty outside DGW. DGW is acting as it should (as it does in other coins), but simply can't handle these hash/sec spikes.
phase 2
- identify the problem and describe it properly (yeah I know it's the jump pool, but what's really the underlying thing that causes the trouble)
You're right. It is good to have a verbose explanation with some examples, as long as we keep in mind that diff readjustment should work for more problems than only this one.
- work out a modification of the algo by testing it on past hashrate swings and see if it smooths those out
Testing is very important. We shouldn't release a new algo too fast. Placing a new algo on the existing chain doesn't give a proper indication though, as it does not influence block times and doesn't change multipool behaviour. It would be better to simulate several cases of hashrate joins/leaves. I have done this before with DGW3, but never with the amounts that we currently see. This time we must simulate extreme hashrates, even multiple times more than we currently see with clevermining. I plan to release the software I used for simulations; it just needs some polishing and persistence of scenarios (currently the user must enter the block times manually, each time). That way everyone can apply and test changes locally.
- test the modification thoroughly for vulnerabilities and flaws
Very important! I think 24Kilo can be of great help here.
- implement it
This time deployment will be very smooth. We can release a new algo one week in advance with a hardcoded block at which all nodes switch algos.

 

I really like the weighted-average ideas being discussed here. I had the idea of a weighted average in my mind, but I hadn't worked it out yet. It's great to see the discussion here, please keep it going! It is definitely influencing the code I'm writing.

Please understand that when I release the code I have so far, it's not a final version. We do this together. It will be our algorithm, created by the Guldencoin community. So the code is open for feedback and changes. This is the only way we can fix this problem. We all have great ideas and knowledge. When we keep combining our efforts we will create the best possible algorithm.


Nice feedback! I think we are really on the right track here. Keep the discussion going, people!

And remember http://vimeo.com/m/50104966

For all new people now is a great time to buy in!
LTEX
Legendary
*
Offline Offline

Activity: 1023
Merit: 1000


ltex.nl


View Profile WWW
October 22, 2014, 10:05:12 AM
 #5416


Nice feedback! I think we are really on the right track here. Keep on discussing people!

And remember http://vimeo.com/m/50104966

For all new people now is a great time to buy in!

One of my favorite quotes:

"Even if you're on the right track, a train will run over you if you just sit there!"

Nice to see we are keeping pace!

A fool will just look at the finger, even if it points to paradise!
Coincookie
Sr. Member
****
Offline Offline

Activity: 249
Merit: 250


View Profile
October 22, 2014, 12:19:44 PM
 #5417

I am extremely happy to see such work being done together. When you see this many people getting involved the way some of you do, it just shows you the power of a community. How far we have come in just over 6 months is amazing, and it makes me proud to be a part of it. Though I am no coder and cannot contribute in the way the previous posts do, I do feel I am a part of this. Again, proud and happy. I see a solution to our problem in the near future.

Cookie.

NLG: GNszAQ7Nwhz2E5ygqucYjVV36gV2Nkikzj
bram_vnl
Legendary
*
Offline Offline

Activity: 1148
Merit: 1000


View Profile
October 22, 2014, 12:42:13 PM
 #5418

Meije
Full Member
***
Offline Offline

Activity: 192
Merit: 100


View Profile
October 22, 2014, 12:46:53 PM
 #5419

LTEX, are you sure we can scan the QR code on your stickers? I have tried it with 3 different phones and it does not work. Is it possible that only the iPhone can scan these stickers?
bram_vnl
Legendary
*
Offline Offline

Activity: 1148
Merit: 1000


View Profile
October 22, 2014, 01:17:44 PM
 #5420

The sticker QR code is working, but it points to this URL: http://q-r.to/qz0