Is Slush's pool safe from the problems that affected P2Pool and Antpool last night (where they were not validating incoming blocks correctly and forked onto an incorrect chain for 6 blocks)?
I assume that slush is more competent than these other pool operators, and that his team keeps on top of required updates.
But currently slush is not listed on the "safe" pool list, so I have to ask.
Slush is as safe as houses! They forked to v3 on and in time ..... you can confirm that by checking any of the blocks the pool has won lately (you do not need to be logged in to do that).
|
|
|
It's some bug in your code, JSON implementation or .NET. The output format is as I stated; there are no closing } followed directly by an opening {.
Turns out I was looking at the API output from another S3 that has not been updated with the latest cgminer .... I have checked on the updated one and it returns unbroken JSON (so not a bug in my code and certainly not in .NET!)

The API puts a null at the end of the full reply (not in the middle) on purpose. It's a socket level optimisation. It is guaranteed to be the only null and it clearly terminates the socket message.
Like I said, I had not tested that (but I know if it does, it would cause the issue I mentioned), and have yet to confirm either way.
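For anyone else polling the API from their own monitor, the terminating null kano describes can be handled by reading until the socket closes and stripping the single trailing null before decoding. A minimal Python sketch; the host address and helper names are mine, not from cgminer, and port 4028 is the usual API default:

```python
import json
import socket

def parse_api_reply(raw):
    """Strip the single trailing null cgminer appends, then decode the JSON."""
    return json.loads(raw.rstrip(b"\x00").decode())

def query(command, host="192.168.1.100", port=4028):
    """One command per connection - the API socket closes after each reply."""
    with socket.create_connection((host, port)) as s:
        s.sendall(json.dumps({"command": command}).encode())
        raw = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:  # socket closed by the miner = end of reply
                break
            raw += chunk
    return parse_api_reply(raw)
```

Reading to end-of-stream (rather than bailing on the first null) sidesteps the early-termination worry raised above.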
|
|
|
... I know I said that was the last one, but the API still returns "broken" JSON when queried with two commands on the same line. Easy enough to fix by adding a comma between any curly braces backing onto each other .....
What command did you send it so I can test it? If you send JSON as multiple commands with + between them, they become an array of replies: {"command":"cmd1+cmd2"} replies with {"cmd1":[{ ... reply1 ... }],"cmd2":[{ ... reply2 ... }]} where { ... reply1 ... } is what you'd get from {"command":"cmd1"}

Edit: reading your comment again - you can't send 2 commands - only one per API access (and then the API socket closes). You can join them, as I've mentioned above, with a +, to get an array of answers in one command (but they can only be "report" commands) ... as in https://github.com/ckolivas/cgminer/blob/master/API-README

Command sent was stats+summary JSON encoded (i.e. I use the .NET JavaScriptSerializer to serialize a dictionary of string, string to JSON, then use the serialized object / string to poll the API). As you mention above, it SHOULD respond with the two responses separated by a comma, but it does not put the comma there. Additionally (and I have not checked this properly yet), normally the API will terminate a response for a single command with a null at the end; it may be that the API also includes a null at the end of the first command response in a two command poll, which would cause loops looking for a terminating null to bail out early on the first null.
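Until a fixed build lands, the comma workaround described above can be applied client-side before parsing. A sketch in Python; the sample broken string is fabricated for illustration, and I am assuming the missing comma only ever shows up as two braces backing directly onto each other:

```python
import json

def repair_reply(text):
    """Insert the comma the broken build omits between back-to-back braces."""
    return text.replace("}{", "},{")

# Fabricated example of the breakage: two replies butted together inside one object.
broken = '{"stats":[{"Elapsed":60}{"ID":"BTM0"}],"summary":[{"Accepted":5}]}'
fixed = json.loads(repair_reply(broken))
```

Since cgminer never emits `}{` in valid JSON output, the replace is safe to run on good replies too.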
|
|
|
If you do that on any mining device that internally has more than one dev, you need to add up all the devs to get the summary amount.
My custom monitor is on Windows and LINQ makes it trivial to pull the non null/empty values from the API response and average them out (though I just tend to list the values for each). Saying that, with the new S3 API, it throws up a weird character on chain_acs11 (also seen it on chain_acs10). It keeps changing though ... here's a screenshot of it in putty (my monitor is currently in debug mode but will post how it manifests on the form when I am up and running again). http://s11.postimg.org/n67k0lqjn/S3_Gremlin.png

EDIT: Here's the gremlin in my monitor! http://s2.postimg.org/87zo87th5/S3_Gremlin1.png

I'll look into it (I don't see it at all on mine - so it may be a bug on your end). But you already know not to display it: [miner_count] => 2 for tempX, chain_acnX and chain_acsX, i.e. 2 means 0 and 1 for X; of course same for fan: [fan_num] => 2 for fanX

It definitely is on my end. I initially thought it was because I was running a pre-release putty 0.65 (to fix the bug that was fixed in a windows update that meant putty could not render its window), but then it showed up in my form. And yes, I could (and now have) use the miner count, or even check for length, but thought you may want to know in case there was something more to it. While on that subject (and I'll make this the last one), I also noticed that initially the response for chains 1 and 2 had double the "chips", with the first set all dashes ..... however this cleared up soon enough and I have not replicated it since I've left the S3 I am testing this on to run (now 24 hrs+). Again, did not mention it earlier as it cleared up quickly .....

EDIT: Did a restart and here is the initial confusion per my monitor ...

I know I said that was the last one, but the API still returns "broken" JSON when queried with two commands on the same line. Easy enough to fix by adding a comma between any curly braces backing onto each other .....
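The miner_count approach mentioned above is enough to skip the junk fields entirely. A sketch, assuming (per the post) that [miner_count] => 2 means X runs over 0 and 1 for tempX / chain_acnX / chain_acsX; the sample stats fragment is fabricated:

```python
def chain_fields(stats, prefix):
    """Return only the per-chain values that miner_count says are real."""
    return [stats[f"{prefix}{x}"] for x in range(stats["miner_count"])]

# Fabricated stats fragment: chain_acs10/11 hold the gremlin bytes and get skipped.
stats = {
    "miner_count": 2,
    "chain_acs0": "oooooooo",
    "chain_acs1": "oooooxoo",
    "chain_acs10": "\x7f\x1b",
    "chain_acs11": "\x00\xff",
}
```

The same helper works for fans by swapping in "fan" as the prefix and "fan_num" as the count key.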
|
|
|
If you do that on any mining device that internally has more than one dev, you need to add up all the devs to get the summary amount.
My custom monitor is on Windows and LINQ makes it trivial to pull the non null/empty values from the API response and average them out (though I just tend to list the values for each). Saying that, with the new S3 API, it throws up a weird character on chain_acs11 (also seen it on chain_acs10). It keeps changing though ... here's a screenshot of it in putty (my monitor is currently in debug mode but will post how it manifests on the form when I am up and running again). EDIT: Here's the gremlin in my monitor!
|
|
|
Use summary+estats. No idea why they copied information from one reply to another. So yet another stupid thing they did. It has never done that in master cgminer.

Yep, had recoded / resorted to summary+stats to get around that ..... it just crossed my mind that for the currently un-maintained monitors, users are going to be left scratching their heads, though I agree, that's no reason to break your end to accommodate Bitmain's lapses.

Well if someone has coded into a monitor to look for the hash rate in stats, that won't work for most miners. Probably only Bitmain ... and not even all Bitmain either, I'm pretty sure the original S1 didn't do that. I'd guess you don't realise, 'summary+estats' is better to use than 'summary+stats' for what you are doing. estats excludes the special pool low level stats list and (though not relevant for Bitmain Sn) ignores zombies - neither of which you'd want.

For S1 it's always worked with stats+devs, and for the S3 (before cgminer 4.9.2 & API 3.6) just stats worked OK. For most of the rest, non-Bitmain, devs had it all. But you are right, estats is more compact so may resort to that (though stats has the few objects I want pulled too)
|
|
|
Use summary+estats. No idea why they copied information from one reply to another. So yet another stupid thing they did. It has never done that in master cgminer.

Yep, had recoded / resorted to summary+stats to get around that ..... it just crossed my mind that for the currently un-maintained monitors, users are going to be left scratching their heads, though I agree, that's no reason to break your end to accommodate Bitmain's lapses.
|
|
|
I am hoping this is the right thread to ask about the changes in the API on the latest cgminer for S3. From where do you pick up / compute the GH/S(paid) and GH/S(avg) values? (With the last API, there were the MHS 5s and MHS av). Also, could you throw some light on the extra fill_*** and read_*** variables returned by a stats request from the API?

EDIT: Found the GH/S(avg) in the summary response ...... not the paid though! More light on the fill_*** and read_*** in the stats response?

Paid = Difficulty Accepted * 2^32 / Elapsed ... which should sorta be obvious ... That's also what I do here: https://github.com/ckolivas/cgminer/blob/master/miner.php#L327

The new stats I added to sort out the bitmain stupidity and tune the code. Their code checks for nonces something like 100,000 times a second ... so yeah that's pretty pointless. My default does about 1000 times a second so uses a lot less CPU with hardly any extra latency added - certainly not enough to ever care about. You'd have to read my code to work out all the fill/read stats: https://github.com/ckolivas/cgminer/blob/master/driver-bitmain.c#L3007

Also I've got all the settings (at the top below 'Min', and opt_* at the bottom) in stats so you can see the settings it's running.

Thanks ..... I'll try and read the code to get a bit of a grasp on the read/fill stats. With regard to the paid ..... yep, obvious too I suppose, but I was on the assumption you had the computed value somewhere in a response to the API (no worries, will do the maths on my end ... or even just go with the 5s metric as it still resides in the summary response). Not sure whether this has to do with how Bitmain muddled things up in the first place, but the last S3 API response from a stats command contained the hash speeds too (and so do the few other non-S3 rigs running older cgminer versions that I have). Would it be a big ask to request to re-include those metrics in the stats command so as to have consistency in the API across versions?
I know someone mentioned in another thread that it has broken M's monitor too ..... I am sure other monitors will also have to revisit their code to accommodate the new S3 cgminer API.
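Kano's paid formula above is a two-liner to do on the monitor end. A quick Python sketch of the maths (the input numbers are made up, taken from the Difficulty Accepted and Elapsed fields of the summary reply):

```python
def paid_ghs(difficulty_accepted, elapsed_seconds):
    """GH/S(paid) = Difficulty Accepted * 2^32 / Elapsed, scaled to GH/s."""
    return difficulty_accepted * 2**32 / elapsed_seconds / 1e9

# e.g. 900 difficulty-1 shares' worth accepted over an hour:
rate = paid_ghs(900, 3600)  # ~1.074 GH/s
```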
|
|
|
I am hoping this is the right thread to ask about the changes in the API on the latest cgminer for S3. From where do you pick up / compute the GH/S(paid) and GH/S(avg) values? (With the last API, there were the MHS 5s and MHS av). Also, could you throw some light on the extra fill_*** and read_*** variables returned by a stats request from the API?

EDIT: Found the GH/S(avg) in the summary response ...... not the paid though! More light on the fill_*** and read_*** in the stats response?
|
|
|
Ooops-a-daisy! Do tell ..... why's that, as I use it very often?
2 reasons: 1) a restart won't get new settings, it just restarts with the settings it was first started with. 2) restart isn't 100% reliable - processes can lock up while trying to do that under rare circumstances. quit works fine

I use it in the knowledge it won't get new settings (can set those before if I wanted though) and reliability has not (touch wood) been an issue for me thus far .... (big sigh!) Will post about the other 2 issues later (though you seem to have quashed the restart one). Just noticed with the API that the type has changed to AS3 ... so that may be the issue (though have not looked at the returned data from the poll just yet). I'll leave you to get your kip ...
|
|
|
API allow is now shown in the web settings so you can see and change it there
Yep I noticed that (which is part of my belated WOW!)

I did remove a 'glitch' that Xiangfu put in there for Avalon (that Bitmain copied over) and I doubt that's the cause, but if it keeps doing that for you I'll look into it tomorrow (it's 00:45 here now)
I assume this is in reference to the API ... I'll look at my polling code later (possibly nearer to your wake up time) and see what the new API returns to my poll. Off the top of my head I think I use a JSON encoded stats+devs command with 0 as a parameter, but will confirm this later.

Also, never use cgminer API restart

Ooops-a-daisy! Do tell ..... why's that, as I use it very often?
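For reference, the poll described above would be built along these lines - a sketch in Python of the JSON the .NET serializer ends up producing; the "command" / "parameter" field names are the ones the cgminer API-README documents:

```python
import json

# Joined "report" commands plus a parameter, sent as one API access.
payload = json.dumps({"command": "stats+devs", "parameter": "0"})
```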
|
|
|
cgminer for S3 looking good with the all new webui and the extras there.... just a few gremlins thus far.
1. Seems like the API has been changed a bit as I can not get any values from it (using my own monitor ... I'll check later to see where!)
2. If I save & apply from the miner configuration tab, it does not start hashing on its own; I have to go into System -> Startup and restart cgminer from there.
3. Tried restarting via cgminer-api restart but that does not seem to restart hashing (though it echoes pid ..... etc)
EDIT: Otherwise ... WOW! Thanks to the coder and funder of this upgrade, what a makeover both in settings and UI data candy!
|
|
|
Just like I thought ..... he's ALWAYS on the ball ..... and seems like a few PH just returned too! Here's to luck continuing to do its thing.

I don't think that's called "on the ball". Done in time - yes. On the ball - no.

Semantics really, but if switching over to v3 is literally that (turning a switch), then he'd have had to do the wiring for the switch beforehand for v3 to turn on when the switch is flicked (I'd say that's being on the ball). That slush switched over in (and on) time is a bonus to faint-hearted mining there....
|
|
|
Just like I thought ..... he's ALWAYS on the ball ..... and seems like a few PH just returned too! Here's to luck continuing to do its thing.
|
|
|
Ahh ... it is still v2! Still, slush is usually on the ball so we'll have to assume (for now) that they'll flick over as and when mining v3 becomes mandatory. PS. Did anyone say something? Seems the pool hash spiked a bit in the last half hour or so .... may the luck keep doing its bit!
|
|
|
I have to say, whatever tweak slush did to get the luck back, he should have done it a while ago. If indeed it is switching over to the latest v3 whatever, then it was long overdue! Here's to the luck keeping on doing its thing (block version notwithstanding)!
|
|
|
Has anyone actually confirmed that the price raising temperatures here is correct? If so, does it apply to the 100 unit "bulk orders" or to single units? Also, does that include shipping?
|
|
|
It's my understanding they are getting chips from Spondooliestech. So, when SPT's chips are in new rigs mining [As they are now], we may see them begin development very soon
Ok, while more recent than before, it just clouds the issue further. What chips does Spondoolies have that are newer than the Rockerbox (SP20), and where are they mining with them? What's your source for this?

That's the miner-edge discussion in the SFARDS thread I believe ... no wonder it's clouded!
|
|
|
$1129 ..... by eck! They can't have that strong weed down that end .. surely not.
<Seinfeld "I'm out" .gif> At that price point so am I .... still hopeful it is a number plucked out of thin air though ... I'd probably buy a few of these if SFARDS can get the pricing down to $800, and offer individual unit purchases vs. 100 unit MOQ.
I would also like to see some test units sent to trusted veterans on Bitcointalk, so we know what we're dealing with. Otherwise I'll hold out until Bitmain launches the S7.
If the reported bulk buy price of $1129 is correct (I assume it is FOB), then there is not a chance in hell of getting single units at $800, at least not in the next 6-10 weeks (if ever). Saying that, I'm still trying to get that price into my head ...
|
|
|
$1129 ..... by eck! They can't have that strong weed down that end .. surely not.
|
|
|
|