BeerMan81
|
|
October 03, 2017, 05:35:45 PM |
|
What do you mean by coinbase generation?
|
|
|
|
|
Upstream
Newbie
Offline
Activity: 41
Merit: 0
|
|
October 03, 2017, 07:39:02 PM |
|
Hey folks, I just pointed my 1x S9 at this pool, and have 29 more in the mail.
Is it possible to somehow get notifications when my miners go down? I know this pool isn't configured for it, but maybe there's something I can do on my end?
I will have 30 S9's in a shipping container, and I need to know when they go down (bad internet, loss of power, etc.). I was planning on using Slushpool because they have notifications, but I would prefer to support this pool and pay lower fees.
|
|
|
|
wavelengthsf
|
|
October 03, 2017, 07:54:11 PM |
|
Hey folks, I just pointed my 1x S9 at this pool, and have 29 more in the mail.
Is it possible to somehow get notifications when my miners go down? I know this pool isn't configured for it, but maybe there's something I can do on my end?
I will have 30 S9's in a shipping container, and I need to know when they go down (bad internet, loss of power, etc.). I was planning on using Slushpool because they have notifications, but I would prefer to support this pool and pay lower fees.
A couple of ways to do it:
1) Use the cgminer API on the S9. You can build something that calls the API and retrieves the stats. If this fails, it's down.
2) Use the pool stats for your worker. Again, build something to occasionally poll the pool stats, and if your worker is down, take some action.
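For option 1, here's a minimal Python 3 sketch of polling the cgminer API. Port 4028 is the cgminer default; the reply shape and the "GHS 5s" field name are assumptions and vary between cgminer versions and firmware, so check your miner's actual output first.

```python
import json
import socket

def query_cgminer(host, port=4028, timeout=5):
    """Send the 'summary' command to a miner's cgminer API (default port 4028).

    A socket error here is itself a down signal: the miner is unreachable."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(json.dumps({"command": "summary"}).encode())
        reply = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            reply += chunk
    # cgminer terminates its reply with a NUL byte
    return json.loads(reply.rstrip(b"\x00").decode())

def is_hashing(summary_reply):
    """True if the 5-second hashrate in a 'summary' reply is above zero.

    The "GHS 5s" key is an assumption; some versions report "MHS 5s"."""
    return summary_reply["SUMMARY"][0].get("GHS 5s", 0) > 0
```

Wrap `query_cgminer` in a try/except and treat any connection failure as "miner down" — that covers the bad-internet and loss-of-power cases in one check.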
|
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4298
Merit: 1645
Ruu \o/
|
|
October 03, 2017, 08:15:28 PM |
|
When a block is found by this pool, instead of waiting for me to process the payouts and send you a payment, you get the reward immediately with the block solve, as a "mined" entry in your wallet. Your wallet will show it as immature and you won't be able to spend the reward until it has 100 confirmations, but basically you receive newly generated coins.
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
tfbiii
Jr. Member
Offline
Activity: 63
Merit: 1
http://ckpool.org
|
|
October 03, 2017, 10:08:34 PM |
|
I will have 30 S9's in a shipping container, I need .... [snip]
In a shipping container? How does that work? Where do you get cooling? I'm also kinda jealous that you have 30 S9's...

As for monitoring, I would take a multi-pronged approach. You need to monitor your environment, otherwise you cook the whole batch. Also run a simple ping test per device with an e-mail alert if a unit stops answering; there are several free tools out there for that in the network-monitoring world.

There are a few tools that can monitor the miners but maybe not give alerts. There is Minera, which runs on a rPi. While built as a miner controller, it can monitor "external" miners that have an open API port. That gives you a per-unit view of what is going on via a nice www interface.

-Fred
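The per-device ping test could be sketched like this in Python 3. The `ping` flags shown are the Linux ones, the miner addresses are placeholders, and the e-mail alert step is left out:

```python
import subprocess

def is_up(ip, timeout=2):
    """One ICMP ping; True if the host answered. Linux 'ping' flags assumed."""
    try:
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout), ip],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return result.returncode == 0
    except FileNotFoundError:
        # no ping binary on this system; treat as down/unknown
        return False

# Replace with your miners' real addresses; hook an e-mail alert on 'down'
miners = ["127.0.0.1"]
down = [ip for ip in miners if not is_up(ip)]
```

A cron job running something like this every few minutes covers the "unit stopped answering" case without any dedicated monitoring suite.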
|
http://ckpool.org
|
|
|
Upstream
Newbie
Offline
Activity: 41
Merit: 0
|
|
October 03, 2017, 11:04:14 PM |
|
Hey folks, I just pointed my 1x S9 at this pool, and have 29 more in the mail.
Is it possible to somehow get notifications when my miners go down? I know this pool isn't configured for it, but maybe there's something I can do on my end?
I will have 30 S9's in a shipping container, and I need to know when they go down (bad internet, loss of power, etc.). I was planning on using Slushpool because they have notifications, but I would prefer to support this pool and pay lower fees.
A couple of ways to do it:
1) Use the cgminer API on the S9. You can build something that calls the API and retrieves the stats. If this fails, it's down.
2) Use the pool stats for your worker. Again, build something to occasionally poll the pool stats, and if your worker is down, take some action.

Good ideas, thanks! I have a programmer friend who might help me do this (I'm just a dumb mech. eng.).

In a shipping container? How does that work? Where do you get cooling?
I'm also kinda jealous that you have 30 S9's...
As for monitoring, I would take a multi-pronged approach. You need to monitor your environment, otherwise you cook the whole batch. Also run a simple ping test per device with an e-mail alert if a unit stops answering; there are several free tools out there for that in the network-monitoring world. There are a few tools that can monitor the miners but maybe not give alerts. There is Minera, which runs on a rPi. While built as a miner controller, it can monitor "external" miners that have an open API port. That gives you a per-unit view of what is going on via a nice www interface.

Yes, in a shipping container. Two large fans exhaust the heat. A controller monitors the temperature and regulates fan speed. Thx for the tip. The idea that I have to learn coding / networking makes my head spin
|
|
|
|
Upstream
Newbie
Offline
Activity: 41
Merit: 0
|
|
October 03, 2017, 11:27:17 PM |
|
Couple ways to do it:
1) Use the cgminer API on the S9. You can build something that calls the API and retrieves the stats. If this fails, its down.
2) Use the pool stats for your worker. Again, build something to occasionally poll the pool stats, and if your worker is down, take some action.
OK, yes I think 2) is probably pretty easy to implement. However, I noticed I can't check individual workers like:
ckpool.org/users/*user*.0
ckpool.org/users/*user*.1
etc.
So am I right in thinking I won't be able to monitor individual workers this way? Or would I have to set up each worker as a separate username / address?
|
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4298
Merit: 1645
Ruu \o/
|
|
October 03, 2017, 11:48:21 PM |
|
Couple ways to do it:
1) Use the cgminer API on the S9. You can build something that calls the API and retrieves the stats. If this fails, its down.
2) Use the pool stats for your worker. Again, build something to occasionally poll the pool stats, and if your worker is down, take some action.
OK, yes I think 2) is probably pretty easy to implement. However, I noticed I can't check individual workers like:
ckpool.org/users/*user*.0
ckpool.org/users/*user*.1
etc.
So am I right in thinking I won't be able to monitor individual workers this way? Or would I have to set up each worker as a separate username / address?

All the individual workers are listed within the user stats. The stats are all in valid JSON format, so you need to use a JSON parser to get what you want and extract the per-worker information.
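A minimal Python 3 sketch of that parsing step. The sample JSON mimics the field names that appear elsewhere in this thread (`worker`, `workername`, `hashrate1m`), but treat the exact shape of the real stats page as an assumption:

```python
import json

# Hypothetical sample shaped like the per-user stats page on ckpool
sample = '''{"hashrate1m": "13.9T", "hashrate5m": "13.8T",
 "worker": [{"workername": "1Example.0", "hashrate1m": "13.9T"},
            {"workername": "1Example.1", "hashrate1m": "0"}]}'''

stats = json.loads(sample)
# The "worker" array holds one entry per worker; flag any with zero 1m hashrate
idle = [w["workername"] for w in stats["worker"] if w["hashrate1m"] == "0"]
print(idle)  # → ['1Example.1']
```

So one username/address is enough: every worker suffix (.0, .1, ...) shows up as its own entry in the `worker` array.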
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
Upstream
Newbie
Offline
Activity: 41
Merit: 0
|
|
October 04, 2017, 07:27:35 PM |
|
So am I right in thinking I would need a dedicated computer to run software that polls the URL showing my user stats and notifies me for each worker that goes to 0 hashrate? It does seem simple...
I have a programmer friend I will ask to see if he can figure this out for me, but if anyone thinks it's easy and is interested, I would pay them in BTC to help me set it up. PM me if it seems like a reasonably easy / interesting project.
|
|
|
|
hurricandave
Legendary
Offline
Activity: 966
Merit: 1003
|
|
October 05, 2017, 02:21:04 AM |
|
Sounds like a good job fer one'a dem dehr PiZero's
|
|
|
|
ZedZedNova
Sr. Member
Offline
Activity: 475
Merit: 265
Ooh La La, C'est Zoom!
|
|
October 05, 2017, 05:47:51 AM Last edit: October 05, 2017, 06:01:02 AM by ZedZedNova |
|
So am I right in thinking I would need a dedicated computer to run software that polls the URL showing my user stats and notifies me for each worker that goes to 0 hashrate? It does seem simple...
I have a programmer friend I will ask to see if he can figure this out for me, but if anyone thinks it's easy and is interested, I would pay them in BTC to help me set it up. PM me if it seems like a reasonably easy / interesting project.
Here you go:

import json
import time
import urllib2

for x in range(10):
    response = urllib2.urlopen('http://ckpool.org/users/12PbPgw7e5SANspLJYezBxsi4ociS1PBiK')
    miners = json.loads(response.read())
    for miner in miners['worker']:
        if miner['hashrate1m'] == '0':
            print 'poll {}: Worker {} has stopped hashing'.format(x, miner['workername'])
    time.sleep(2)

It's not pretty, but fairly simple. You can make it as fancy as you like. One thing I would consider: rather than alerting when hashrate1m goes to zero, maybe wait for three polls before alerting, or check hashrate5m instead. Here is the output (one line per poll, trimmed):

poll 0: Worker 12PbPgw7e5SANspLJYezBxsi4ociS1PBiK.Compac1 has stopped hashing
poll 1: Worker 12PbPgw7e5SANspLJYezBxsi4ociS1PBiK.Compac1 has stopped hashing
...
poll 9: Worker 12PbPgw7e5SANspLJYezBxsi4ociS1PBiK.Compac1 has stopped hashing

The for loop could be changed to a while True loop so it runs forever, and the time.sleep(2) could be changed so the polling happens less frequently, say time.sleep(60), so the polls happen once per minute. To see if it works for you, change 12PbPgw7e5SANspLJYezBxsi4ociS1PBiK to your BTC address. If you use multiple BTC addresses you would need to iterate over all of them with another loop:

import json
import time
import urllib2

btc_addresses = ['12PbPgw7e5SANspLJYezBxsi4ociS1PBiK', '12PbPgw7e5SANspLJYezBxsi4ociS1PBiK', '12PbPgw7e5SANspLJYezBxsi4ociS1PBiK']

for x in range(10):
    for btc_address in btc_addresses:
        response = urllib2.urlopen('http://ckpool.org/users/{}'.format(btc_address))
        miners = json.loads(response.read())
        for miner in miners['worker']:
            if miner['hashrate1m'] == '0':
                print 'poll {}: Worker {} has stopped hashing'.format(x, miner['workername'])
    time.sleep(2)

And the output from this one (one line per address per poll, trimmed):

poll 0: Worker 12PbPgw7e5SANspLJYezBxsi4ociS1PBiK.Compac1 has stopped hashing
poll 0: Worker 12PbPgw7e5SANspLJYezBxsi4ociS1PBiK.Compac1 has stopped hashing
poll 0: Worker 12PbPgw7e5SANspLJYezBxsi4ociS1PBiK.Compac1 has stopped hashing
...
poll 9: Worker 12PbPgw7e5SANspLJYezBxsi4ociS1PBiK.Compac1 has stopped hashing
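The "wait for three polls before alerting" suggestion could be handled with a small per-worker counter. A sketch (Python 3; `should_alert` is a hypothetical helper, not part of any pool API):

```python
def should_alert(counts, worker, is_idle, threshold=3):
    """Count consecutive idle polls per worker; fire exactly once at threshold."""
    if is_idle:
        counts[worker] = counts.get(worker, 0) + 1
        return counts[worker] == threshold
    counts[worker] = 0  # a good poll resets the streak
    return False

counts = {}
# Four idle polls in a row: the alert fires only on the third
results = [should_alert(counts, "S9-1", True) for _ in range(4)]
print(results)  # → [False, False, True, False]
```

Comparing `== threshold` rather than `>=` means one alert per outage instead of one per poll, which matters when you're sleeping 60 seconds between polls on 30 machines.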
|
No mining at the moment.
|
|
|
bussumseheide
Newbie
Offline
Activity: 47
Merit: 0
|
|
October 06, 2017, 09:30:59 PM |
|
http://ckpool.org/pool/pool.status gives me a JSON error in Chrome (probably because I installed the JSONView Chrome app):

Error: Parse error on line 1:
..."Disconnected": 12}{"hashrate1m": "1.33
----------------------^
Expecting 'EOF', '}', ',', ']'

A missing comma?
|
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4298
Merit: 1645
Ruu \o/
|
|
October 06, 2017, 09:34:51 PM |
|
http://ckpool.org/pool/pool.status gives me a JSON error in Chrome (probably because I installed the JSONView Chrome app):
Error: Parse error on line 1:
..."Disconnected": 12}{"hashrate1m": "1.33
----------------------^
Expecting 'EOF', '}', ',', ']'
A missing comma?

The user stats are valid JSON. The pool status page is there for your perusal and is a dump of multiple lines of JSON, so don't try to parse it directly unless you feed it in as multiple lines. The pool work page can be parsed directly, though.
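In other words, the status page is several JSON objects, one per line, so parse each line as its own document. A Python 3 sketch (the sample text is shaped like the page, with made-up values):

```python
import json

# pool.status-style text: several JSON objects, one per line (sample values)
status_text = '''{"runtime": 1779447, "Users": 100, "Workers": 253}
{"hashrate1m": "1.37P", "hashrate5m": "1.35P"}
{"SPS1m": 49.3, "SPS5m": 49.1}'''

# json.loads on the whole page fails; parse each line separately instead
objects = [json.loads(line) for line in status_text.splitlines() if line.strip()]
print(objects[0]["Users"])  # → 100
```

That is exactly why JSONView chokes on it: the viewer tries to parse the whole page as one document and hits the second `{` where it expects EOF.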
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
ComputerGenie
|
|
October 06, 2017, 09:36:28 PM |
|
... (probably because I installed JSONView chrome app): ...
I'm guessing so, because Firefox shows it just fine (as does PHP's json_decode):

{"runtime": 1779447, "lastupdate": 1507325678, "Users": 100, "Workers": 253, "Idle": 52, "Disconnected": 3}
{"hashrate1m": "1.37P", "hashrate5m": "1.35P", "hashrate15m": "1.36P", "hashrate1hr": "1.36P", "hashrate6hr": "1.35P", "hashrate1d": "1.26P", "hashrate7d": "1.07P"}
{"SPS1m": 49.3, "SPS5m": 49.1, "SPS15m": 49.1, "SPS1h": 49.5}
{"diff": "52.0", "accepted": 590041764328, "rejected": 1799121189, "lns": 2077756956366.8, "herp": 2083269876050.313, "reward": 13.91181271}
|
If you have to ask "why?", you wouldn`t understand my answer. Always be on the look out, because you never know when you'll be stalked by hit-men that eat nothing but cream cheese....
|
|
|
daemondazz
|
|
October 06, 2017, 11:07:26 PM |
|
I jumped from the BTC.com pool to here just to try it. I see that my configuration is correct, as my hashrate is consistent with what my miner is capable of.
But how do you know how much bitcoin you are getting? I mean, yes, there is a mark where it will pay out, but is there any other way to see whether you are near a payout or not?
Thank you.
You'll get a payout when the pool finds a block, assuming you meet the requirements under "unique ckpool.org features" in the first post. If you do not meet those requirements, you'll get a payout in a later block.
|
Computers, Amateur Radio, Electronics, Aviation - 1dazzrAbMqNu6cUwh2dtYckNygG7jKs8S
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4298
Merit: 1645
Ruu \o/
|
|
October 06, 2017, 11:15:33 PM |
|
I jumped from the BTC.com pool to here just to try it. I see that my configuration is correct, as my hashrate is consistent with what my miner is capable of.
But how do you know how much bitcoin you are getting? I mean, yes, there is a mark where it will pay out, but is there any other way to see whether you are near a payout or not?
Thank you.
If you click on the pool work link http://ckpool.org/pool/pool.work you will see who's scheduled to receive a reward on the next block find under "payouts". If you're in the "postponed" list, that means you haven't yet contributed enough hashrate to receive a payout and will be postponed to a later block. As you contribute more hashrate, you can move up into the payout list before a block is found, provided your hashrate is significant enough. The work page is regenerated every minute, changes continuously, and is an estimate of the payout should a block be found at that instant.
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4298
Merit: 1645
Ruu \o/
|
|
October 07, 2017, 02:39:05 AM |
|
Right on. So is it luck that pushes me from postponed to payouts, or will all of the hashrate I contributed previously accumulate until I am pushed into payouts?
Mostly hashrate, with a small contribution from luck.
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
o_solo_miner
Legendary
Offline
Activity: 2488
Merit: 1487
-> morgen, ist heute, schon gestern <-
|
|
October 07, 2017, 10:29:59 PM |
|
Right on. So is it luck that pushes me from postponed to payouts, or will all of the hashrate I contributed previously accumulate until I am pushed into payouts?
Mostly hashrate, with a small contribution from luck.

Thank you ck for the confirmation. Another question: do small mining machines have a chance to get lucky?

Of course they have a chance, otherwise no one would try it. The chance may be statistically low, but luck is unpredictable anyway. The smallest miner ever to find a block solo was a single S3, I think.
|
from the creator of CGMiner http://solo.ckpool.org for Solominers paused: passthrough for solo.ckpool.org => stratum+tcp://rfpool.org:3334
|
|
|
EarthWide
Newbie
Offline
Activity: 12
Merit: 0
|
|
October 11, 2017, 09:23:26 PM |
|
What is the number that follows the addresses in the "postponed" section of the pool work page? Is that an accumulation of the user/address HERP?
|
|
|
|
|