Author Topic: [ANN] Noirbits Update required Improved algos !!!!  (Read 74431 times)
stbgefltc
Full Member | Activity: 154 | Merit: 100
July 04, 2013, 09:10:08 PM (last edit: July 04, 2013, 09:21:17 PM by stbgefltc)
#701

Agreed. The current source only applies a limited (80%) diff drop after 4 hours.

Have a look here: https://github.com/ftcminer/Noirbits/tree/Testing

I moved the diff calc into diff.h & diff.cpp (that's actually why I need help with the Qt Makefile, to add these files in. It's already done for Noirbitsd, but not the UI client).

* Disclaimer: I'm no C++ pro...
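For the curious, the old clamp is shaped roughly like this -- a reconstruction from memory, NOT the actual source, and I'm reading "limited (80%) diff drop" as "difficulty may fall by at most 80% in one emergency retarget":

Code:
// Rough guess at the current clamp, NOT the actual source.
// Assumes CBigNum from bignum.h and int64 from the usual Bitcoin headers.
static const int64 nEmergencyWindow = 4 * 60 * 60; // 4 hours, in seconds

unsigned int ClampEmergencyDrop(unsigned int nOldBits, unsigned int nNewBits,
                                int64 nTimeSinceLastBlock)
{
    if (nTimeSinceLastBlock < nEmergencyWindow)
        return nOldBits; // no emergency retarget before the 4-hour mark

    CBigNum bnOld, bnNew;
    bnOld.SetCompact(nOldBits);
    bnNew.SetCompact(nNewBits);

    CBigNum bnMaxTarget = bnOld * 5; // 5x the target == 1/5 the difficulty
    if (bnNew > bnMaxTarget)
        bnNew = bnMaxTarget;         // cap the drop at 80%
    return bnNew.GetCompact();
}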

oatmo
Member | Activity: 104 | Merit: 10
July 04, 2013, 10:04:05 PM
#702

Quote from: stbgefltc on July 04, 2013, 09:10:08 PM
What's Qt? What are you using to compile? I'm decent with regular makefiles, and makepp as well.
stbgefltc
Full Member | Activity: 154 | Merit: 100
July 04, 2013, 10:43:39 PM (last edit: July 04, 2013, 11:06:52 PM by stbgefltc)
#703

Well, I only ever compile the daemon with make and the makefiles in /src.

Qt is the cross-platform library used for the UI. There's a Makefile in the main directory that's used to compile Noirbits-qt (the UI client), and I'm not sure how I should handle dependencies in that one... If you can have a look at it, I'd appreciate it. Otherwise, I'll go ahead and try with Qt Creator. I asked barwizi to have a look at the makefiles, but he seems to be MIA since yesterday, probably busy.

Edit: never mind that, Qt Creator handles that file automagically...

What do you think about the new retarget implementation? Feel free to improve it if you want...

oatmo
Member | Activity: 104 | Merit: 10
July 05, 2013, 03:20:14 PM
#704

Quote from: stbgefltc on July 04, 2013, 10:43:39 PM

I looked at the retarget implementation. Unfortunately I have to work today, so I can't work on it. My one comment is that it's probably better to factor out whole functions rather than specific steps in the code, i.e. have a set of "new" functions and a set of "old" functions. That way it's easy to look at each set of functions and see what it does. It also makes it easier to handle switchovers (hoping we won't have many, but we'll probably have a non-zero number after this one): we get to the target block, do the switchover, then you go into the code, copy the "new" functions over the "old" functions, and remove the switchover block ID. If another change to the algorithm is required, you simply set a new switchover block ID, modify the "new" functions again, etc. If you make that change now, and comment the code for the "new" and "old" functions, it will make any future changes, if they're required, really easy to implement and test.

If we write a testbench to simulate these types of fluctuations, we would use it on the new functions only. It's easier if we have whole functions that aren't wound up with switchover block ID checks, etc. That kind of testbench would be great not only for us, but also for the original bitcoin codebase (it would help anyone else starting a new coin to see how their algorithm behaves before releasing it and getting screwed in the wild).
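The shape I mean, as a sketch (all names made up, not your actual code):

Code:
// Sketch only -- illustrative names. The point: the height check lives in
// exactly one dispatcher, and each rule set is a whole, branch-free function.
class CBlockIndex; // from main.h

static const int nSwitchoverBlock = 21000; // placeholder switchover height

unsigned int GetNextWorkRequired_Old(const CBlockIndex* pindexLast); // current rules
unsigned int GetNextWorkRequired_New(const CBlockIndex* pindexLast); // new rules

unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast, int nHeight)
{
    // After the switchover: copy "new" over "old", bump nSwitchoverBlock
    // for the next change, and only ever edit the "new" functions.
    if (nHeight >= nSwitchoverBlock)
        return GetNextWorkRequired_New(pindexLast);
    return GetNextWorkRequired_Old(pindexLast);
}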

stbgefltc
Full Member | Activity: 154 | Merit: 100
July 05, 2013, 04:34:46 PM
#705

Quote from: oatmo on July 05, 2013, 03:20:14 PM

I see your point. I usually factor steps into functions because function names carry meaning and make the code easier to parse mentally and to maintain later.

I've started reworking the code as you suggested, splitting CMainNetDiff into two classes, COldNetRules and CNewNetRules, each applying its own retarget algorithm without any branching dependent on height. I'm introducing a static CDiffProvider class that handles the branching based on height and the net in use (test or main) and returns the appropriate CDiff instance.
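On paper it looks something like this (sketch of the new diff.h; the actual code on my github may differ a bit):

Code:
// Sketch of the new diff.h layout (the code on my github may differ).
class CBlockIndex; // from main.h

class CDiff
{
public:
    virtual unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast) const = 0;
    virtual ~CDiff() {}
};

class COldNetRules : public CDiff   // retarget rules for the existing chain
{
public:
    unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast) const;
};

class CNewNetRules : public CDiff   // the new retarget algorithm
{
public:
    unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast) const;
};

class CDiffProvider
{
public:
    // The only place that branches on height and on main/test net.
    static CDiff* GetDiff(int nHeight);
};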

I'm thinking that writing a testbench with that structure will be a breeze: all we'll have to do is construct a fake chain of CBlockIndex's with timestamps, heights, and target bits that reflect the hashrate variations we've seen on the Noirbits network, or that could plausibly be seen. I'll get back to you when I'm done, probably not tonight though; it's Friday night here, hence drinking time, and my brain's gonna be off for the next 12 hours or so ;)
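Building the fake chain shouldn't be much more than this (rough sketch; assuming the usual CBlockIndex fields from main.h -- nHeight, nTime, nBits, pprev):

Code:
// Testbench sketch: build a fake chain from a list of inter-block times.
#include <vector>

std::vector<CBlockIndex*> BuildFakeChain(const std::vector<int>& vBlockSpacings,
                                         unsigned int nStartBits)
{
    std::vector<CBlockIndex*> vChain;
    CBlockIndex* pprev = NULL;
    unsigned int nTime = 1372000000; // arbitrary start timestamp

    for (size_t i = 0; i < vBlockSpacings.size(); i++)
    {
        CBlockIndex* pindex = new CBlockIndex();
        nTime += vBlockSpacings[i];        // seconds since the previous block
        pindex->nHeight = (int)i;
        pindex->nTime   = nTime;
        pindex->nBits   = nStartBits;      // the bench recomputes this per block
        pindex->pprev   = pprev;
        vChain.push_back(pindex);
        pprev = pindex;
    }
    return vChain;
}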

oatmo
Member | Activity: 104 | Merit: 10
July 05, 2013, 05:18:43 PM (last edit: July 05, 2013, 05:49:05 PM by oatmo)
#706

Quote from: stbgefltc on July 05, 2013, 04:34:46 PM

Sounds great. I'm thinking that the testbench (and test suite) can have a set of random tests.

The tests would define parameters like the following:

- BaseHashingPower: distinguishes a situation like bitcoin, where there are tons of constant hash power, from a new coin where maybe 5 MH/s is constant and everything else comes and goes. (Currently about 5 MH/s for noirbits.)
- VariableHashingPower: the amount that can come in and out. (Currently about 150 MH/s for noirbits.)
- A set of probabilities describing how quickly people add or remove power. What we want is something that lets us randomly add hashing: the lower the difficulty, the higher the probability of hash being added at any point in time, and when the difficulty increases, people are more likely to drop out. We could also define some other factors for how the hashing comes in and out.

The test would then simulate timesteps (like a minute), randomly finding blocks based on hashing power and randomly adding or removing hashing power (in some minimum increment). This should let us model a number of different cases, and also pre-test our algorithm change to see how it reacts to a situation like the one we're in now (and try a number of variations to see what works best). Pass/fail for a test would be a set of benchmarks on variance from the target behavior (in our case, variance from 2-minute blocks over time, max/min time to generate a run of 30 blocks, etc.). We're looking for actual numbers, but also an acceptance test where the algorithm fails if we go out of range.
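To make that concrete, here's the kind of loop I'm picturing -- a toy sketch where every constant is a placeholder to tune, and the every-30-blocks retarget is just a stand-in for whichever rules we're actually testing:

Code:
// Toy simulator -- all numbers are placeholders. Uses the usual convention
// that one difficulty-1 block takes ~2^32 hashes on average.
#include <cstdio>
#include <cstdlib>

static double frand() { return (double)rand() / RAND_MAX; }

int main()
{
    const double baseHash   = 5.0e6;    // 5 MH/s that never leaves
    const double maxVarHash = 150.0e6;  // up to 150 MH/s of hoppers
    const double hashStep   = 10.0e6;   // hoppers move in 10 MH/s chunks
    double varHash = 0.0, difficulty = 0.5;

    int nBlocks = 0, nLastRetarget = 0;
    long tLastRetarget = 0;

    for (long tNow = 0; tNow < 7 * 24 * 3600; tNow += 60) // one week, 1-min steps
    {
        // Placeholder behavior: low diff attracts hashpower, high diff sheds it.
        if (varHash < maxVarHash && frand() < 0.05 / (difficulty + 0.1)) varHash += hashStep;
        if (varHash > 0.0        && frand() < 0.05 * difficulty)         varHash -= hashStep;

        // P(block this minute) ~= hashrate * 60 / (difficulty * 2^32).
        // (Naive: ignores the chance of 2+ blocks per step -- fine for a toy.)
        double pBlock = (baseHash + varHash) * 60.0 / (difficulty * 4294967296.0);
        if (frand() >= pBlock)
            continue;

        nBlocks++;
        if (nBlocks - nLastRetarget >= 30) // stand-in retarget: every 30 blocks
        {
            double nActual = (double)(tNow - tLastRetarget);
            difficulty *= (30.0 * 120.0) / nActual; // scale toward 2-min blocks
            nLastRetarget = nBlocks;
            tLastRetarget = tNow;
        }
    }
    printf("blocks in one week: %d (ideal: %d)\n", nBlocks, 7 * 24 * 3600 / 120);
    return 0;
}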

In my everyday design life, this is how we make 90% of our design decisions: make a model, run a bunch of representative tests of the thing we're trying to optimize, try a large range of parameters, and select the ones that work best (since I do chip design, management is usually presented with a set of cost/performance alternatives).

[edit]

One other thing required is a random delay in the system's response. Since at this point it looks like humans are deciding to add/remove hashing power, the effective difficulty the system responds to will lag by some amount after a change. You'd have a distribution that says, e.g., 10% of the system is using the difficulty from an hour ago, 25% is using it from 2 minutes ago, 20% from 10 minutes ago. This models the delay in reaction to the change (when the difficulty gets high, it takes a little time for power to drop out, and when it's low, it takes a little while for it to come in).
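In the model that could just be a lag table; something like this (the 45% "instant" share is my own filler to make it sum to 1):

Code:
// Sketch: fractions of the variable hashpower that react to the difficulty
// as it was N minutes ago.
#include <deque>

struct LagSlice { int nMinutesAgo; double fraction; };
static const LagSlice lagTable[] = {
    { 60, 0.10 },  // 10% still act on the difficulty from an hour ago
    {  2, 0.25 },  // 25% on the diff from 2 minutes ago
    { 10, 0.20 },  // 20% from 10 minutes ago
    {  0, 0.45 },  // the rest react instantly
};

// One difficulty sample per simulated minute, newest at the back.
double PerceivedDifficulty(const std::deque<double>& history)
{
    if (history.empty())
        return 0.0;
    double d = 0.0;
    for (size_t i = 0; i < sizeof(lagTable) / sizeof(lagTable[0]); i++)
    {
        size_t nLag = (size_t)lagTable[i].nMinutesAgo;
        size_t idx = history.size() > nLag ? history.size() - 1 - nLag : 0;
        d += lagTable[i].fraction * history[idx];
    }
    return d;
}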

The other thing (pie-in-the-sky talking here) would be to write something that goes through the block chain and generates the probability parameters. You could then run through the block chains of all the other coins and use them as test cases for your algorithm. A lot of other people have tried and screwed things up, so this would be a good way to use all of their mistakes to tune our algorithm to be the best possible.

Having said all that, I think we need an algorithm change ASAP, so we should get our current proposed changes in as quickly as possible, because I think they will improve things. Then we can target another algorithm refinement for three weeks away or so, test a bunch of tweaked parameters, find a set that's much better, and make one more change that makes the coin more solid than any that has come before.
barwizi (OP)
Legendary | Activity: 882 | Merit: 1000
July 06, 2013, 06:02:17 AM
#707

I'm back and very hung over, what's new?
stbgefltc
Full Member | Activity: 154 | Merit: 100
July 06, 2013, 08:38:19 AM (last edit: July 06, 2013, 11:29:46 AM by stbgefltc)
#708

Same here :)

We just need to decide on the height for the new retarget algorithm... 25020 (edited up from 25000) sounds like a safe bet to give everyone time to upgrade...

stbgefltc
Full Member | Activity: 154 | Merit: 100
July 06, 2013, 03:40:58 PM
#709

Alright, just pushed a commit with the new CDiff implementations (COldNetDiff & CMainNetDiff). I've validated that COldNetDiff works with the current blockchain; now I still need to test CMainNetDiff, which provides the new rules.

If I still have enough time before BBQ time, I'll start working on the test benches; shouldn't take too long.

oatmo
Member | Activity: 104 | Merit: 10
July 06, 2013, 04:25:35 PM
#710

Quote from: stbgefltc on July 06, 2013, 08:38:19 AM
I think 21000. We're getting the wild fluctuations because people are still jumping in on the low difficulty, so we're not averaging the right number of blocks per day. Can we push a message to all the clients telling them to upgrade? We're currently getting 600 blocks per day, so that gives about 3 days. That seems like enough to reach the majority of the clients that matter (the pools are the big deal).

stbgefltc
Full Member | Activity: 154 | Merit: 100
July 06, 2013, 04:38:31 PM
#711

Quote from: oatmo on July 06, 2013, 04:25:35 PM

I'm not sure, I'd have to check the source, but not that I know of. The JSON-RPC API has nothing like that... It would actually be a nifty feature to be able to push "upgrade required" messages to clients; it would make transitions like this one easier. That's why I'm saying 25020 is safer: if we get a diff drop again like the other day, the 2K blocks are gonna fly by in less than 2 days.

We could split the difference, say the closest multiple of 60 in the 22500 range; that should give an extra 1.5 days. It's not too far in the future, and it gives some margin in case of hashrate increases.

We're gonna need barwizi and everyone who can to communicate about the change as soon as we merge Testing into master, so we reach a maximum of users. Concerning pools, I run miners-united, so that won't be an issue, but there's still coinmine (feeleep needs to be contacted), minepool (I don't know the owner, most likely MarkusRomanus since he advertised on the thread), and the P2Pools... For cryptsy, BigVern can apparently push the new client within 12 hours. We also need to relay the info on the other cryptocoin forums that have Noirbits listed.

But I'm out for today, it's nice & hot outside, and the grilled pork & rosé are calling me...

Edit: if you have time, please test the latest version from my github: edit diff.h, change the min. height required to the current height, and check that you can successfully mine with the new algo on testnet. I haven't gotten around to it, and it's the last thing that needs to be done before we can safely merge the changes into the master branch.

oatmo
Member | Activity: 104 | Merit: 10
July 06, 2013, 04:54:19 PM
#712

Can we put a hard floor on difficulty at 0.22?
stbgefltc
Full Member | Activity: 154 | Merit: 100
July 06, 2013, 05:02:09 PM
#713

Quote from: oatmo on July 06, 2013, 04:54:19 PM
Can we put a hard floor on difficulty at 0.22?

Yeah, it's currently defined in two places in my implementation (I need to fix that, but it's not urgent): once in main.cpp, and once in diff.cpp. However, you need to figure out how many target bits that is, since the min. difficulty (or max. target) is defined as:

Code:
const CBigNum CDiffProvider::bnProofOfWorkLimit(~uint256(0) >> 20);

Just replace the 20 with the shift that gives the desired min. difficulty and it should be good.
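For reference -- assuming Noirbits keeps the standard difficulty-1 target of 0xFFFF * 2^208, which I haven't double-checked -- the min. difficulty for a shift of N is roughly 0xFFFF / 2^(48 - N), so each extra bit of shift doubles the floor:

Code:
#include <cmath>
#include <cstdio>

int main()
{
    // min difficulty for (~uint256(0) >> N), assuming the standard
    // difficulty-1 target: roughly 0xFFFF / 2^(48 - N).
    for (int nShift = 20; nShift <= 31; nShift++)
        printf("shift %2d -> min difficulty ~ %.6f\n",
               nShift, 65535.0 / pow(2.0, 48 - nShift));
    return 0;
}

That gives ~0.000244 for our current 20, and the closest a pure shift gets to a 0.22 floor is ~0.25 at >> 30; an exact 0.22 would need a hand-picked target constant instead of a shift.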

Now I'm really gone until tomorrow...


oatmo
Member | Activity: 104 | Merit: 10
July 06, 2013, 05:52:52 PM
#714


Quote from: stbgefltc on July 06, 2013, 04:38:31 PM

Are there directions for mining on testnet somewhere? I can pull it into my linux VM and do CPU mining if I know how to do it. Also, don't I need 2 clients on testnet? I don't know if I have a second VM that I can easily use. And I don't have Visual Studio handy, so I'm going to have problems testing a windows client.
oatmo
Member | Activity: 104 | Merit: 10
July 06, 2013, 10:45:55 PM
#715

Quote from: stbgefltc on July 06, 2013, 05:02:09 PM


The hard floor should probably be 0.22, or 90% of the last difficulty, whichever is lower. That way it's still possible to go below 0.22, but only if you were already there, and it would only drift down slowly from that point.
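In code terms, something like this (a sketch in doubles for clarity -- the real thing would work on the compact target):

Code:
#include <algorithm>

// Soft floor sketch: 0.22, or 90% of the last difficulty if that is already
// lower -- below 0.22 the difficulty can only decay 10% per retarget.
double ApplyDiffFloor(double dNewDiff, double dLastDiff)
{
    double dFloor = std::min(0.22, 0.9 * dLastDiff);
    return std::max(dNewDiff, dFloor);
}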
barwizi (OP)
Legendary | Activity: 882 | Merit: 1000
July 07, 2013, 06:14:13 AM
#716

I have merged to the testing branch; I will test it out later today.
barwizi (OP)
Legendary | Activity: 882 | Merit: 1000
July 07, 2013, 02:52:44 PM
#717

No problems for me so far. So shall we make this official?
jasinlee
Hero Member | Activity: 742 | Merit: 500
July 07, 2013, 03:44:55 PM
#718

Go for it :D

TheRickTM
Newbie | Activity: 42 | Merit: 0
July 07, 2013, 04:05:34 PM
#719

Quote from: barwizi on July 07, 2013, 02:52:44 PM
No problems for me so far. So shall we make this official?

Sounds good to me.
stbgefltc
Full Member | Activity: 154 | Merit: 100
July 07, 2013, 04:25:26 PM
#720

Quote from: barwizi on July 07, 2013, 02:52:44 PM

I guess so, if you tested everything :)
