bb
Member
Offline
Activity: 84
Merit: 10
|
|
July 24, 2011, 09:26:12 PM |
|
Also, it would be easier for pools to fake stats explicitly for that service.
But it would reduce the work needed to adapt to any changes: the code gets modified in one place instead of in every client.
|
|
|
|
flower1024
Legendary
Offline
Activity: 1428
Merit: 1000
|
|
July 24, 2011, 09:29:08 PM |
|
Would you accept a service which has (sometimes) wrong stats? We would need a group of trustworthy people.
After all, wrong stats mean less money.
|
|
|
|
ahitman
|
|
July 24, 2011, 09:40:26 PM |
|
It is true, it would be a big target and a possible point of failure...
Maybe at least someone could code up a local proxy one can use to accomplish the same thing; then you would only have one request no matter how many bitHoppers you were running.
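The local caching idea ahitman describes could be sketched like this. It is a minimal sketch only: the `StatsCache` class and its `fetch` callable are hypothetical, not part of bitHopper; the point is that any number of local hoppers share one upstream request per TTL window.

```python
import time

class StatsCache:
    """Cache pool stats so several local bitHoppers share one upstream request.

    Hypothetical sketch: `fetch` is any callable that hits the pool API;
    the cached copy is served until `ttl` seconds have passed.
    """
    def __init__(self, fetch, ttl=30):
        self.fetch = fetch
        self.ttl = ttl
        self._value = None
        self._stamp = 0.0

    def get(self):
        now = time.time()
        if self._value is None or now - self._stamp >= self.ttl:
            self._value = self.fetch()   # only here does the pool see a hit
            self._stamp = now
        return self._value

# Demo: five local get() calls inside the TTL cause a single upstream fetch.
calls = []
cache = StatsCache(lambda: calls.append(1) or {"shares": 123}, ttl=30)
for _ in range(5):
    stats = cache.get()
```

However many hoppers poll the cache, the pool is hit at most once per TTL window, which is exactly the load reduction ahitman is after.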
|
|
|
|
bb
Member
Offline
Activity: 84
Merit: 10
|
|
July 24, 2011, 09:41:21 PM |
|
would you accept a service which has (sometimes) wrong stats?
I already am. Since I am usually asleep for ~6 hours a day, my miners can get (and have gotten) hung up on a pool faking stats.
|
|
|
|
c00w (OP)
|
|
July 24, 2011, 09:43:13 PM |
|
1) Eskimo? Fixed that.
2) Overhammering servers? Well, I raised the minimum to 30 seconds between hits. But you really should be caching your pages and not re-rendering every time you get hit. Or provide a JSON interface which is a flat file updated regularly...
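The flat-file interface c00w suggests could be sketched like this (hypothetical helper, not any pool's actual code; the `{"solved": ...}` shape mirrors the format MrSam posts later in the thread). Writing to a temp file and renaming means clients never read a half-written file.

```python
import json
import os
import tempfile

def write_stats_file(path, stats):
    """Dump current round stats to a flat JSON file.

    A pool could run this from a timer or cron job; clients then read a
    static file instead of triggering a page render on every hit.
    Sketch only: field names are illustrative, not a real pool API.
    """
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(stats, f)
    os.replace(tmp, path)  # atomic on POSIX: readers see old or new, never half

# Demo round-trip in a scratch directory.
path = os.path.join(tempfile.mkdtemp(), "stats.json")
write_stats_file(path, {"solved": 2619271})
with open(path) as f:
    loaded = json.load(f)
```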
|
1HEmzeuVEKxBQkEenysV1yM8oAddQ4o2TX
|
|
|
MrSam
|
|
July 24, 2011, 09:48:01 PM |
|
2) Overhammering servers? Well I raised the minimum to 30 seconds between hits. But you really should be caching your pages and not re-rendering every time you get hit. Or provide a json interface which is a flat file updated regularly...
Stats are already cached, but please use http://api.triplemining.com/json/stats; this way you are not using SSL and not triggering other useless stats.
|
|
|
|
MrSam
|
|
July 24, 2011, 09:54:30 PM |
|
Please push/spread this api method
|
|
|
|
msb8r
|
|
July 24, 2011, 10:18:14 PM |
|
[triple]
name: Triple Mining
mine_address: eu1.triplemining.com:8344
api_method: re
api_address: http://api.triplemining.com/json/stats
api_key: (\d+)
role: mine

New code for triple. json isn't applicable as [] are used instead of {}
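For reference, the `api_method:re` style of entry boils down to running the configured pattern over the response body. A minimal sketch (the `parse_shares` helper is hypothetical; the `(\d+)` pattern is the one from the [triple] entry above):

```python
import re

def parse_shares(body, pattern=r"(\d+)"):
    """Extract the round share count from an API response with a regex,
    in the spirit of bitHopper's `api_method:re` configs.
    Hypothetical helper; the default pattern matches the first digit run."""
    m = re.search(pattern, body)
    return int(m.group(1)) if m else None

# Works on the {"solved":"2619271"} body MrSam switched the API to.
parse_shares('{"solved":"2619271"}')
```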
|
|
|
|
flower1024
Legendary
Offline
Activity: 1428
Merit: 1000
|
|
July 24, 2011, 10:20:20 PM |
|
json isn't applicable as [] are used instead of {}
JSON IS capable of []; it's just an array. Index 1 is the round shares (for me json.loads works).
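A quick illustration of the point: a top-level `[]` is valid JSON and `json.loads` returns a plain list. The array contents below are made up; only the index-1 convention comes from the post.

```python
import json

# A top-level JSON array parses to a Python list; no wrapping {} required.
stats = json.loads('["triplemining", "2646553", "prop"]')  # illustrative values
round_shares = int(stats[1])  # index 1 held the round shares, per flower1024
```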
|
|
|
|
MrSam
|
|
July 24, 2011, 10:21:13 PM |
|
json isn't applicable as [] are used instead of {}
json IS capable of [] - it's just an array. index 1 is round shares (for me json.loads works)
Indeed. Flower: can you post your configuration for others to copy?
UPDATE: I changed it anyway -> {"solved":"2619271"}
|
|
|
|
msb8r
|
|
July 24, 2011, 10:22:44 PM |
|
json isn't applicable as [] are used instead of {}
json IS capable of [] - it's just an array. index 1 is round shares (for me json.loads works)
My bad. Got a pool API error when I tried the JSON approach.
|
|
|
|
flower1024
Legendary
Offline
Activity: 1428
Merit: 1000
|
|
July 24, 2011, 10:23:35 PM |
|
json isn't applicable as [] are used instead of {}
json IS capable of [] - it's just an array. index 1 is round shares (for me json.loads works) Indeed.. Flower: can you post your configuration for others to copy ?
Wouldn't work with the latest c00w, but anyway:

def triplemining_sharesResponse(response):
    global servers
    r = json.loads(response)
    shares = int(r[1])
    servers['triplemining']['shares'] = shares
|
|
|
|
c00w (OP)
|
|
July 24, 2011, 10:34:42 PM |
|
1) errors? Fixed. Again. I messed that up a little.
2) Triple? Changed to work. I still have it disabled.
|
1HEmzeuVEKxBQkEenysV1yM8oAddQ4o2TX
|
|
|
flower1024
Legendary
Offline
Activity: 1428
Merit: 1000
|
|
July 24, 2011, 11:02:14 PM Last edit: July 24, 2011, 11:27:06 PM by flower1024 |
|
I am trying with a pool hopping service. At the moment it's free; if I get any problems regarding traffic I'll come back to you: http://www.k1024.de/servers.json

My vserver (which runs bitHopper) prints out this JSON file every ten seconds. The file is immediately uploaded to an FTP server (which you can access using the link above), so at least my hopper stays somewhat secure. Tomorrow I'll add hashrate and another round share counter (parsed through regex), so pool stats-faking is a little more detectable. The format is as follows:

[
  { "shares": 3702496, "pool": "rfcpool", "mode": "prop" },
  { "shares": 2646553, "pool": "triplemining", "mode": "prop" },
  { "shares": 152019, "pool": "ozco", "mode": "prop" },
  { "shares": 136717, "pool": "nofee", "mode": "prop" },
  { "shares": 2360466, "pool": "polmine", "mode": "prop" },
  { "shares": 94054, "pool": "bitclockers", "mode": "prop" },
  { "shares": 24772, "pool": "mtred", "mode": "prop" }
]

BTW: if you are interested in joining this project, please PM me. I have a job, so I can't watch the pool all day. Maybe four people from different timezones would be nice.
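A client consuming this feed could be sketched as follows. The `shares_by_pool` helper is hypothetical; a real client would fetch the servers.json URL above instead of parsing a literal, and the sample below just reuses two entries from the format shown.

```python
import json

SAMPLE = """[
  {"shares": 3702496, "pool": "rfcpool", "mode": "prop"},
  {"shares": 24772, "pool": "mtred", "mode": "prop"}
]"""

def shares_by_pool(doc):
    """Index the aggregated stats list by pool name.

    Sketch only: assumes the list-of-objects layout flower1024 posted,
    with "pool" and "shares" keys on every entry."""
    return {entry["pool"]: entry["shares"] for entry in json.loads(doc)}

table = shares_by_pool(SAMPLE)
```

With the feed indexed this way, a hopper can look up each pool's round shares in one dictionary access instead of polling every pool itself.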
|
|
|
|
|
Sukrim
Legendary
Offline
Activity: 2618
Merit: 1007
|
|
July 25, 2011, 12:16:13 AM |
|
It just takes time until people update their clients; the fix is already in the code. Imho you can already start (again) faking stats; this will lead to faster updates on the bitHopper side, I guess...

About Namecoin mining: on http://www.nmcwatch.com/ there's a 24h running average BTC vs. NMC price (currently: "average: 0.02852482"). As long as this value is above NMCdiff/BTCdiff (94037.96 / 1690906.20) = 0.0556139424, NMC mining (and hopping) pays off for sure. It might even pay off at levels slightly below that, due to higher gains from hopping, but I think this solution is safer. All that is needed is:
* getting NMC difficulty in addition to BTC difficulty
* parsing NMCwatch every once in a while for the 24h average price
* comparing the price to the "calculated price" of NMCdiff/BTCdiff
* switching to NMC/BTC as needed
* ...
* PROFIT!

Edit: I also support a way to get the stats externally/centralized. I have a few ideas already and this could come in handy...
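Sukrim's rule of thumb reduces to a single comparison. A sketch using the figures quoted in the post (the function name is hypothetical, and a live client would fetch current difficulty and price values rather than hard-coding them):

```python
def nmc_worth_mining(avg_price, nmc_diff=94037.96, btc_diff=1690906.20):
    """Sukrim's threshold: switch to Namecoin while the 24h average
    NMC/BTC price exceeds the difficulty ratio NMCdiff/BTCdiff.
    Default difficulties are the values quoted in the post."""
    return avg_price > nmc_diff / btc_diff

# At the quoted 24h average of 0.02852482, the ratio 0.0556... is not
# exceeded, so by this rule NMC hopping would not (yet) pay off.
nmc_worth_mining(0.02852482)
```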
|
|
|
|
|
organofcorti
Donator
Legendary
Offline
Activity: 2058
Merit: 1007
Poor impulse control.
|
|
July 25, 2011, 01:35:28 AM |
|
json isn't applicable as [] are used instead of {}
json IS capable of [] - it's just an array. index 1 is round shares (for me json.loads works) Indeed.. Flower: can you post your configuration for others to copy ? UPDATE: i changed it anyway -> {"solved":"2619271"}
Thanks MrSam. Hope you'll be changing to an XXPPS-type payout when you do, so we can use you for backup.
|
|
|
|
shotgun
Member
Offline
Activity: 98
Merit: 11
|
|
July 25, 2011, 01:54:22 AM Last edit: July 25, 2011, 04:22:57 AM by shotgun |
|
3) Differences between this and flexible miner proxy? This one hops. Oh, and it changes servers in the case of server death, etc... We also have LP support. I know they have support for individual worker statistics, which we don't have yet.
Can you (or someone) post a screenshot of the web interface so I can compare it to Flexible Proxy? I'd love to get started with pool hopping and this project sounds interesting. Flexible Proxy does have LP support, just in case anyone is wondering.

Um... individual worker stats, that is very useful and I'd certainly want to have that info displayed. I can add those to this pool hopping proxy quite easily if someone isn't already working on it.

Also, let me just post here this code that I wrote this weekend. It adds health stats to the flexible proxy but can easily be modified so it adds the same health stats to the pool hopping proxy (as long as you're OK with adding a couple of tables to the MySQL schema). Basically you just run one cronjob per miner on your servers; it polls aticonfig for device temperature, core clock speed, memory speed, and fan speed, then inserts those values into the DB.

I'm pretty good with Python, so I could add miner stats (mhash/s, submitted shares, stales, etc.) and health stats (from this script) to the pool hopping proxy if you want; just let me know and I'll get started.

#!/bin/bash
#########################################################
# Reports worker stats to BTC Mining Proxy 'modified'   #
# See schema changes below to setup database for script #
# Author: shotgun                                       #
# Date: 2011 07 22                                      #
# Donations welcome: 1BUV1p5Yr3xEtSGbixLSospmK6B8NCdqiW #
#########################################################

## Database settings for BTCproxy server. This should be a
## separate user that only has INSERT,SELECT on the
## worker_health table so that things are secure on the db
U="btcinsert"
P="password"
H="ip_address_of_proxy_database_server"
DB="btcproxy"

## SQL to create table in BTCproxy schema and GRANT
## statement for user that will insert health reports.
## This table and user must exist before running the script
## For [network] use only the class C address, ex: 10.1.1.%
## For [password] be secure, use the MD5 of a random string
## For [schema] set to the schema name from btc proxy
##
# CREATE TABLE worker_health_current (
#   id int(32) NOT NULL auto_increment,
#   worker_id int(11) NOT NULL,
#   temp int(5) NOT NULL,
#   speed_clock int(4) NOT NULL,
#   speed_mem int(4) NOT NULL,
#   speed_fan int(3) NOT NULL,
#   date datetime NOT NULL,
#   PRIMARY KEY (id),
#   UNIQUE KEY worker_id_ix (worker_id)
# ) ENGINE=MyISAM AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
#
# CREATE TABLE worker_health_archive (
#   id int(32) NOT NULL auto_increment,
#   worker_id int(11) NOT NULL,
#   temp int(5) NOT NULL,
#   speed_clock int(4) NOT NULL,
#   speed_mem int(4) NOT NULL,
#   speed_fan int(3) NOT NULL,
#   date datetime NOT NULL,
#   PRIMARY KEY (id),
#   KEY worker_id_ix (worker_id)
# ) ENGINE=MyISAM AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
#
# GRANT UPDATE,SELECT,INSERT ON [schema].worker_health_current TO 'btcinsert'@'[network]' IDENTIFIED BY '[password]';
# GRANT SELECT,INSERT ON [schema].worker_health_archive TO 'btcinsert'@'[network]' IDENTIFIED BY '[password]';
# GRANT SELECT ON [schema].worker TO 'btcinsert'@'[network]' IDENTIFIED BY '[password]';
#
## END DATABASE CHANGES

## CRONTAB entries to run health report on schedule
## I recommend polling at 5 minute intervals, but use whatever
## you need for your setup. You need one line entry per worker
## Set the arguments correctly, or your reporting won't work.
## Example below. Remove the leading # character when pasting
## into /etc/crontab, set the script location per your setup.
#
# ATI GPU Health Monitoring
# * * * * * exec-user script-location worker_name device_number > logfile
# ┬ ┬ ┬ ┬ ┬
# │ │ │ │ └───── day of week (0 - 7) (Sunday=0 or 7)
# │ │ │ └────────── month (1 - 12)
# │ │ └─────────────── day of month (1 - 31)
# │ └──────────────────── hour (0 - 23)
# └───────────────────────── min (0 - 59) or interval (*/5 = every 5 minutes)
# */5 * * * * user /opt/phoenix-1.50_patched/worker-health.sh worker0 0 > /tmp/worker_health-0.log
# */5 * * * * user /opt/phoenix-1.50_patched/worker-health.sh worker1 1 > /tmp/worker_health-1.log
#

## Check for help
if [ "$1" = "-h" ] || [ "$1" = "help" ]; then
    echo "Usage: worker_health.sh [worker_name] [device number]"
    echo "Example: worker_health.sh quad0c0 0"
    exit 0
fi

## Check for arguments
if [ $# -ne 2 ]; then
    echo "Usage: worker_health.sh [worker_name] [device number]"
    exit 65
fi

WORKER=$1
DEVICE=$2

## Environment variables
export AMDAPPSDKSAMPLESROOT=/opt/AMD-APP-SDK-v2.4-lnx64
export LD_LIBRARY_PATH=/opt/AMD-APP-SDK-v2.4-lnx64/lib/x86_64:
export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/opt/phoenix-1.50_patched
export AMDAPPSDKROOT=/opt/AMD-APP-SDK-v2.4-lnx64
export HOME=/home/user
export LOGNAME=user
export DISPLAY=:0
export _=/usr/bin/env

## Get health values
FAN=`/usr/bin/aticonfig --pplib-cmd "get fanspeed 0" | grep "Fan Speed" | awk -F: '{print $3}'`
TEMP=`/usr/bin/aticonfig --odgt --adapter=$DEVICE | grep -o '[0-9][0-9].[0-9][0-9]' | sed 's/\.[0-9][0-9]//g'`
CLOCK=`/usr/bin/aticonfig --odgc --adapter=$DEVICE | grep "Current Clock" | grep -o '[0-9][0-9][0-9]' | head -n 1`
MEM=`/usr/bin/aticonfig --odgc --adapter=$DEVICE | grep "Current Clock" | grep -o '[0-9][0-9][0-9][0-9]' | tail -n 1`

SQL0="INSERT INTO worker_health_archive (id,worker_id,temp,speed_clock,speed_mem,speed_fan,date) VALUES (NULL,(SELECT id FROM worker WHERE name='$WORKER'),'$TEMP','$CLOCK','$MEM','$FAN',NOW());"
SQL1="INSERT INTO worker_health_current (id,worker_id,temp,speed_clock,speed_mem,speed_fan,date) VALUES (NULL,(SELECT id FROM worker WHERE name='$WORKER'),'$TEMP','$CLOCK','$MEM','$FAN',NOW()) ON DUPLICATE KEY UPDATE temp='$TEMP', speed_clock='$CLOCK', speed_mem='$MEM', speed_fan='$FAN', date=NOW();"

## Locate mysql client binary
mysqlbinary=`which mysql`
if [ "$mysqlbinary" = "" ]; then
    echo "MySQL client not found in PATH=$PATH. Please install or put binary into path."
    exit 1
fi

## Insert SQL to the database
$mysqlbinary --user=$U --password=$P --host=$H $DB -e "$SQL0"
if [ "$?" = "0" ]; then
    $mysqlbinary --user=$U --password=$P --host=$H $DB -e "$SQL1"
    if [ "$?" = "0" ]; then
        echo "Health reported to database: successful. w: $WORKER d: $DEVICE"
        exit 0
    else
        echo "Health reported to database: failed current sql. Check user permissions. w: $WORKER d: $DEVICE"
        exit 1
    fi
else
    echo "Health reported to database: failed archive sql. Check user permissions. w: $WORKER d: $DEVICE"
    exit 1
fi
|
<luke-jr> Catholics do not believe in freedom of religion.
|
|
|
bb
Member
Offline
Activity: 84
Merit: 10
|
|
July 25, 2011, 02:14:41 AM |
|
For this to be useful at all you definitely need a timestamp in there somewhere, so one can fall back on individual polling if your service fails.

It just takes time until people update their clients, the fix is already in the code. Imho you can already start (again) faking stats, this will lead to faster updates on bitHopper side I guess...
Erm, there are different timezones, you know. You should give people a chance to wake up...

I'm pretty good with Python so I could add miner stats (mhash/s, submitted shares, stales, etc) and health stats (from this script) to the pool hopping proxy if you want, just let me know and I'll get started.
I for one am monitoring my miners and my cards using a different set of scripts. The proxy has nothing to do with miner monitoring. You should really adhere to the Unix philosophy here.
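The timestamp suggestion could be sketched like this. The `ts` field name and the `stats_are_fresh` helper are assumptions; the published servers.json feed has no timestamp field yet.

```python
import time

def stats_are_fresh(entry, max_age=60):
    """Decide whether a published stats entry is recent enough to trust.

    Sketch of bb's point: with a "ts" (Unix time) field in each entry, a
    client can detect a stale or dead feed and fall back to polling the
    pool directly. Entries without "ts" are treated as stale."""
    return time.time() - entry.get("ts", 0) <= max_age

# A just-written entry passes; one with no timestamp is treated as stale.
fresh = stats_are_fresh({"pool": "mtred", "shares": 24772, "ts": time.time()})
stale = stats_are_fresh({"shares": 1})
```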
|
|
|
|
|