kano
Legendary
Offline
Activity: 4676
Merit: 1858
Linux since 1997 RedHat 4
July 03, 2015, 11:59:49 AM (last edit: July 03, 2015, 12:26:11 PM by kano)
Quote from: pekatete
I am hoping this is the right thread to ask about the changes in the API in the latest cgminer for the S3. Where do you pick up / compute the GH/S(paid) and GH/S(avg) values? (With the last API, there were MHS 5s and MHS av.) Also, could you throw some light on the extra fill_*** and read_*** variables returned by a stats request from the API? EDIT: Found GH/S(avg) in the summary response ... not the paid one, though! Still after more light on the fill_*** and read_*** values in the stats response.

Paid = Difficulty Accepted * 2^32 / Elapsed ... which should be sorta obvious. That's also what I do here: https://github.com/ckolivas/cgminer/blob/master/miner.php#L327
The new stats are ones I added to sort out the bitmain stupidity and tune the code. Edit: their code checks for sent work nonces something like 100,000 times a second ... so yeah, that's pretty pointless. My default does about 1000 times a second, so it uses a lot less CPU with hardly any extra latency added - certainly not enough to ever care about. You'd have to read my code to work out all the fill/read stats: https://github.com/ckolivas/cgminer/blob/master/driver-bitmain.c#L3007
Also, I've got all the settings (at the top below 'Min', and opt_* at the bottom) in stats, so you can see the settings it's running with.
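The Paid formula kano quotes can be sketched in a couple of lines (a minimal illustration, not cgminer's actual code; the function name and the 12-hour example are mine):

```python
def paid_ghs(difficulty_accepted, elapsed_seconds):
    """GH/S(paid): each accepted share at difficulty D represents, on
    average, D * 2^32 hashes, so total hashes over elapsed time gives
    the rate the pool actually pays for."""
    hashes = difficulty_accepted * 2**32
    return hashes / elapsed_seconds / 1e9  # H/s -> GH/s

# e.g. 5000 difficulty-1 shares accepted over 12 hours:
rate = paid_ghs(5000, 12 * 3600)
```

This is why the "paid" figure drifts toward the real hashrate only over time: it is share-based, so short windows are noisy.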
pekatete
July 03, 2015, 12:22:16 PM
Quote from: kano
Paid = Difficulty Accepted * 2^32 / Elapsed ... which should be sorta obvious ... You'd have to read my code to work out all the fill/read stats.

Thanks ... I'll try and read the code to get a bit of a grasp of the read/fill stats. With regard to the paid value ... yep, obvious, I suppose, but I was under the assumption you had the computed value somewhere in a response from the API (no worries, I'll do the maths on my end ... or even just go with the 5s metric, as it still resides in the summary response). Not sure whether this has to do with how bitmain muddled things up in the first place, but the last S3 API response to a stats command contained the hash speeds too (and so do the few other non-S3 rigs running older cgminer versions that I have). Would it be a big ask to re-include those metrics in the stats command, so as to have consistency in the API across versions?
I know someone mentioned in another thread that it has broken M's monitor too ... I am sure other monitors will also have to revisit their code to accommodate the new S3 cgminer API.
kano
July 03, 2015, 12:28:12 PM
Use summary+estats. No idea why they copied information from one reply to another - yet another stupid thing they did. It has never done that in master cgminer.
pekatete
July 03, 2015, 12:44:33 PM
Quote from: kano
Use summary+estats. ... It has never done that in master cgminer.

Yep, I had recoded / resorted to summary+stats to get around that ... it just crossed my mind that for the currently unmaintained monitors, users are going to be left scratching their heads. Though I agree, that's no reason to break your end to accommodate bitmain's lapses.
kano
July 03, 2015, 12:52:30 PM
Quote from: pekatete
... it just crossed my mind that for the currently unmaintained monitors, users are going to be left scratching their heads ...

Well, if someone has coded a monitor to look for the hash rate in stats, that won't work for most miners. Probably only Bitmain ... and not even all Bitmain either; I'm pretty sure the original S1 didn't do that. I'd guess you don't realise that 'summary+estats' is better to use than 'summary+stats' for what you are doing: estats excludes the special pool low-level stats list and (though not relevant for a Bitmain Sn) ignores zombies - neither of which you'd want.
pekatete
July 03, 2015, 01:01:08 PM
Quote from: kano
I'd guess you don't realise, 'summary+estats' is better to use than 'summary+stats' for what you are doing.

For the S1, it has always worked with stats+devs, and for the S3 (before cgminer 4.9.2 & API 3.6) just stats worked OK. For most of the rest (non-bitmain), devs had it all. But you are right, estats is more compact, so I may resort to that (though stats has the few objects I want pulled, too).
kano
July 03, 2015, 01:12:10 PM
Yes, you can use devs or summary on a Bitmain Sn miner. That's coz there's only one dev, so they have the same values without having to add up the devs. If you do that on any mining device that internally has more than one dev, you need to add up all the devs to get the summary amount.
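Summing the per-device figures from a devs reply to reproduce the summary number might look like this (a sketch; the dict mirrors the shape of a decoded devs reply, and "MHS av" is the cgminer field name, but the example values are made up):

```python
def total_mhs_av(devs_reply):
    """Sum the average hashrate over every device section of a decoded
    'devs' API reply; on a multi-dev miner this is what 'summary'
    reports for the whole unit."""
    return sum(dev.get("MHS av", 0.0) for dev in devs_reply.get("DEVS", []))

# Illustrative two-dev miner:
reply = {"DEVS": [{"ASC": 0, "MHS av": 250000.0},
                  {"ASC": 1, "MHS av": 251000.0}]}
total = total_mhs_av(reply)
```

On a single-dev device like the S3 the sum trivially equals the one dev's value, which is kano's point.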
pekatete
July 03, 2015, 01:22:17 PM (last edit: July 03, 2015, 01:37:33 PM by pekatete)
Quote from: kano
If you do that on any mining device that internally has more than one dev, you need to add up all the devs to get the summary amount.

My custom monitor is on Windows, and LINQ makes it trivial to pull the non-null/empty values from the API response and average them out (though I just tend to list the values for each). Saying that, with the new S3 API it throws up a weird character on chain_acs11 (I've also seen it on chain_acs10). It keeps changing, though ... here's a screenshot of it in putty (my monitor is currently in debug mode, but I'll post how it manifests on the form when I am up and running again). EDIT: Here's the gremlin in my monitor!
kano
July 03, 2015, 01:57:23 PM
Quote from: pekatete
... with the new S3 API it throws up a weird character on chain_acs11 (I've also seen it on chain_acs10). It keeps changing, though ...
http://s11.postimg.org/n67k0lqjn/S3_Gremlin.png
http://s2.postimg.org/87zo87th5/S3_Gremlin1.png

I'll look into it (I don't see it at all on mine, so it may be a bug on your end). But you already know not to display it: [miner_count] => 2 applies to tempX, chain_acnX and chain_acsX - i.e. 2 means 0 and 1 for X - and of course the same for fans: [fan_num] => 2 for fanX.
pekatete
July 03, 2015, 02:09:46 PM (last edit: July 03, 2015, 05:36:36 PM by pekatete)
Quote from: kano
I'll look into it (I don't see it at all on mine, so it may be a bug on your end). But you already know not to display it: [miner_count] => 2 for tempX, chain_acnX and chain_acsX ...

It definitely is on my end. I initially thought it was because I was running a pre-release putty 0.65 (to fix the bug, fixed in a windows update, that meant putty could not render its window), but then it showed up in my form. And yes, I could have (and now have) used the miner count, or even checked for length, but thought you may want to know in case there was something more to it. While on that subject (and I'll make this the last one), I also noticed that initially the response for chains 1 and 2 had double the "chips", with the first set all dashes ... however, this cleared up soon enough, and I have not replicated it since I've left the S3 I am testing on to run (now 24 hrs+). Again, I did not mention it earlier as it cleared up quickly ... EDIT: Did a restart, and here is the initial confusion per my monitor ...
I know I said that was the last one, but the API still returns "broken" JSON when queried with two commands on the same line. Easy enough to fix by adding a comma between any curly braces backing onto each other ...
luthermarcus
July 03, 2015, 08:52:16 PM
Some more feedback about the S3+: I noticed that HW errors dropped on most of my miners by 0.003%. Everything is running well here, about 20 hours on each miner. I like the new view of the miner stats - simple and straightforward, the way I like it. Great work. Bitmain should hire you (if you would choose to work for a company like that). I don't get it, with all the epic fails on their part; I don't know why they don't try to pick up a dev that knows what they are doing.
Donate Bitcoin 1Mz7ZHxPhoH1ZK2yQvo62NdHvvsS2quhzc Donate TRX TB3WiLEj6iuSBU5tGUKyZkjB4vqrBDvoYM
Mikestang
Legendary
Offline
Activity: 1274
Merit: 1001
July 04, 2015, 06:36:31 AM
Quote from: kano
Yes, just the steps shown at the top do it - the tar extract and the cgset do everything. The rest below just explains what's going on. Of course, you should check your settings as it suggests. ... Have you ever tried the U3 on cgminer (latest 4.9.2)? We use proper USB access to devices, not 30-year-old filtered serial access that hides important information (and cgminer has since I first changed it to use USB long ago). Direct USB has many advantages over the filtered serial access. ... though of course the U3 itself is pretty shoddy

Thanks, I still haven't gotten around to updating my S3+; I will do that soon. I have not used 4.9.2 on my U3s yet - I have them at another location at the moment, and that machine does not grant me the permissions necessary to use zadig and modify USB drivers, so I am pursuing other avenues there. Once summer is over, I'll bring them home and run them with cgminer again.
kano
July 04, 2015, 10:49:17 AM
Quote from: pekatete
... the API still returns "broken" JSON when queried with two commands on the same line. Easy enough to fix by adding a comma between any curly braces backing onto each other ...

What command did you send it, so I can test it? If you send json as multiple commands with a + between them, they become an array of replies:
{"command":"cmd1+cmd2"}
replies with
{"cmd1":[{ ... reply1 ... }],"cmd2":[{ ... reply2 ... }]}
where { ... reply1 ... } is what you'd get from {"command":"cmd1"}
Edit: reading your comment again - you can't send 2 commands, only one per API access (and then the API socket closes). You can join them, as I've mentioned above, with a +, to get an array of answers in one command (but they can only be "report" commands) ... as in https://github.com/ckolivas/cgminer/blob/master/API-README
pekatete
July 04, 2015, 11:06:10 AM (last edit: July 04, 2015, 11:22:54 AM by pekatete)
Quote from: kano
... you can't send 2 commands, only one per API access (and then the API socket closes). You can join them, as I've mentioned above, with a +, to get an array of answers in one command ...

The command sent was stats+summary, JSON-encoded (i.e. I use the .NET JavaScriptSerializer to serialize a dictionary of string, string to JSON, then use the serialized string to poll the API). As you mention above, it SHOULD respond with the two responses separated by a comma, but it does not put the comma there. Additionally (and I have not checked this properly yet), normally the API will terminate the response to a single command with a null at the end; it may be that the API also includes a null at the end of the first command's response in a two-command poll, which would cause loops looking for a terminating null to bail out early on the first null.
kano
July 04, 2015, 11:25:15 AM (last edit: July 04, 2015, 11:46:11 AM by kano)
It's some bug in your code, your json implementation, or .NET. The output format is as I stated; there is no closing } followed directly by an opening {.
The API puts a null at the end of the full reply (not in the middle) on purpose. It's a socket-level optimisation. It is guaranteed to be the only null, and it clearly terminates the socket message. Various code in various places had random handling to determine the end of a socket message; there is no such confusion with the API socket. Once you get the null, you know you have all the data and do not need to look for/wait for anything else. Until you get the null, you know you need to keep reading. Thus only in the very rare case of a transmission error/failure do you ever wait on the socket and get a timeout.
You can test what the reply is directly on linux:
echo '{"command":"summary+stats"}' | ncat -4 MinerIPAddress 4028
Edit: note it's not 2 responses separated by a comma, it's a json list. If you are getting 2 {} responses, then you must be making 2 connections and sending 2 {command} requests.
pekatete
July 04, 2015, 12:28:24 PM
Quote from: kano
It's some bug in your code, json implementation or .NET. The output format is as I stated; there is no closing } followed directly by an opening {.

Turns out I was looking at the API output from another S3 that has not been updated with the latest cgminer ... I have checked on the updated one, and it returns unbroken json (so not a bug in my code, and certainly not in .NET!).

Quote from: kano
The API puts a null at the end of the full reply (not in the middle) on purpose. It's a socket-level optimisation. It is guaranteed to be the only null, and it clearly terminates the socket message.

Like I said, I had not tested that (but I know that if it does, it would cause the issue I mentioned), and I have yet to confirm either way.
Mikestang
July 05, 2015, 05:45:04 AM
Just updated my S3+, a couple of issues:
1) Under API allow, if I use W:[my local ip address], then CryptoGlance reports the S3+, but the web page doesn't show any stats under Miner Status. If I use W:127.0.0.1, then it shows stats under Miner Status, but CryptoGlance shows the S3+ as dead.
2) On the old version my unit hashed at 500 GH/s; now I don't see over 420.
Going to have to revert versions ...
kano
July 05, 2015, 06:17:15 AM (last edit: July 05, 2015, 06:32:57 AM by kano)
Quote from: Mikestang
1) Under API allow, if I use W:[my local ip address], then CryptoGlance reports the S3+, but the web page doesn't show any stats under Miner Status. If I use W:127.0.0.1, then it shows stats under Miner Status, but CryptoGlance shows the S3+ as dead.

Right, so you use both, since you want both to have access ... W:127.0.0.1,W:[my local ip address]
https://github.com/ckolivas/cgminer/blob/master/API-README#L18
If you had it before as W:0/0, anyone on the planet could change your miner to mine for them if they had network access and found it ... e.g. your neighbours, if you have WiFi and they can hack into it. I also have no idea what Bitmain did to the API - but it SHOULD ONLY give access to whatever you tell it to have access to, as that is how I designed and wrote the API and api-allow.

Quote from: Mikestang
2) On the old version my unit hashed at 500 GH/s; now I don't see over 420. Going to have to revert versions ...

Try setting the Advanced settings to what they were before, saving them, and making sure they are the same. Edit: you can also see the settings it is running with if you look at the API estats command output (in our version, not in the bitmain version).
kano
July 05, 2015, 06:31:01 AM
Quote from: pekatete
Turns out I was looking at the API output from another S3 that has not been updated with the latest cgminer ... I have checked on the updated one, and it returns unbroken json (so not a bug in my code, and certainly not in .NET!)

Bitmain ships old versions of their fork of cgminer in their miners ... all the more reason to update to our master cgminer.
Mikestang
July 05, 2015, 07:03:48 AM
Quote from: kano
Right, so you use both, since you want both to have access ... W:127.0.0.1,W:[my local ip address]
https://github.com/ckolivas/cgminer/blob/master/API-README#L18
<snip>
Try setting the Advanced settings to what they were before, saving them, and making sure they are the same. Edit: you can also see the settings it is running with if you look at the API estats command output (in our version, not in the bitmain version)

I should read the read-me, shouldn't I? Thank you, that makes sense and it's now fixed. I had checked the advanced settings, and they were the same as pre-update. I've reinstalled 4.9.2, and I think I just needed to let it run a bit longer to even out. Hashing away happily at over 500 GH/s now, thanks. Would be great to see if 4.9.2 also fixes why my S3+ would decrease from 500 GH to 480, sometimes over the course of 1 day, sometimes over several. Fingers crossed it holds steady at 500 GH+.