krom
Newbie
Offline
Activity: 12
Merit: 0
|
|
June 19, 2011, 12:53:25 PM |
|
Every time the client requests new work from the proxy (so every 40-50 seconds), this is what happens:
The proxy asks the first pool in its priority list for new work. If that pool does not respond within X seconds, the proxy moves on and asks the next pool in its priority list. This repeats until a pool responds and delivers new work that the proxy can relay to the client, or until the end of the priority list is reached.
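In pseudo-PHP the loop looks roughly like this (a sketch only, not the proxy's actual code; the function name, the 5-second timeout and the option layout are my own assumptions):

<?php
// Sketch of the failover: try each pool in priority order until one answers.
function get_work_from_pools(array $pools, $request_body) {
    foreach ($pools as $pool) {              // $pools is assumed to be sorted by priority
        $ctx = stream_context_create(array('http' => array(
            'method'  => 'POST',
            'header'  => "Content-Type: application/json\r\n" .
                         'Authorization: Basic ' .
                         base64_encode($pool['username'] . ':' . $pool['password']) . "\r\n",
            'content' => $request_body,
            'timeout' => 5,                  // the "X seconds" above; illustrative value
        )));
        $work = @file_get_contents($pool['url'], false, $ctx);
        if ($work !== false) {
            return $work;                    // relay this pool's work to the client
        }
    }
    return null;                             // end of the priority list reached
}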
|
|
|
|
teknohog
|
|
June 21, 2011, 03:26:13 PM |
|
Has anyone got this working with Lighttpd (under Gentoo)? I am trying to rule out wider system-level errors before digging deeper into this code. Basically, my clients (Phoenix) are connecting to the proxy, but they are not getting any work.
I got exactly the same error with Apache, and I even did some debugging via tcpdump; it seemed like the proxy was not even asking the pools for any work. The solution turned out to be simple: I had to set allow_url_fopen = On in php.ini. So lighttpd (which runs PHP via FastCGI) works fine. As I only just got it going, I have no long-term data, which might matter for long polling, but so far things are OK.
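If anyone else hits this, a quick way to confirm what the web server's PHP actually sees (drop it next to the proxy scripts and load it once; ini_get is standard PHP, nothing proxy-specific):

<?php
// check.php -- shows whether URL fopen wrappers are enabled for this PHP instance.
var_dump((bool) ini_get('allow_url_fopen'));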
|
|
|
|
lachesis
|
|
June 22, 2011, 03:03:05 AM |
|
I'm getting the response {"error":"No enabled pools responded to the work request.","result":null,"id":1} with Multipool. I just verified -- Multipool is up. In fact, when I make the exact same work request which I made to the proxy (with updated credentials) to multipool.hpc.tw, I get a response in under a second. How can I troubleshoot this?
|
|
|
|
lebuen
|
|
June 22, 2011, 01:25:00 PM |
|
Unfortunately, I get extreme stale rates (>10%) when using the proxy on my local mining-rig. But I'm behind a firewall - could it be that the Mining Proxy needs to be listening on port 80 from the "outside" to be able to receive LP-queries?
|
|
|
|
kripz
|
|
June 22, 2011, 02:25:26 PM |
|
I randomly get this using DiabloMiner. Hashkill will just crash. Phoenix seems to be 100% stable and never crashes, but I don't doubt it runs into the same problem behind the scenes.
[23/06/11 12:20:47 AM] DEBUG: Attempt 26 found on Juniper (#1)
[23/06/11 12:20:47 AM] Accepted block 26 found on Juniper (#1)
[23/06/11 12:21:08 AM] ERROR: Cannot connect to Bitcoin: Bitcoin returned error message: No enabled pools responded to the work request.
[23/06/11 12:21:09 AM] ERROR: Cannot connect to Bitcoin: Bitcoin returned error message: No enabled pools responded to the work request.
[23/06/11 12:21:09 AM] ERROR: Cannot connect to Bitcoin: Bitcoin returned error message: No enabled pools responded to the work request.
[23/06/11 12:21:17 AM] DEBUG: Attempt 27 found on Juniper (#1)
[23/06/11 12:21:23 AM] Rejected block 1 found on Juniper (#1)
[23/06/11 12:21:46 AM] DEBUG: Attempt 28 found on Juniper (#1)
[23/06/11 12:21:46 AM] Rejected block 2 found on Juniper (#1)
[23/06/11 12:22:43 AM] DEBUG: Attempt 29 found on Juniper (#1)
[23/06/11 12:22:43 AM] Accepted block 27 found on Juniper (#1)
192.168.1.10 - beasty [23/Jun/2011:00:20:13 +1000] "POST / HTTP/1.1" 200 394 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:20:47 +1000] "POST / HTTP/1.1" 200 394 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:21:06 +1000] "POST / HTTP/1.1" 200 372 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:21:06 +1000] "POST / HTTP/1.1" 200 373 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:21:06 +1000] "POST / HTTP/1.1" 200 373 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:21:17 +1000] "POST / HTTP/1.1" 200 395 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:21:46 +1000] "POST / HTTP/1.1" 200 395 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:22:08 +1000] "POST / HTTP/1.1" 200 951 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:22:09 +1000] "POST / HTTP/1.1" 200 952 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:22:09 +1000] "POST / HTTP/1.1" 200 952 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:22:43 +1000] "POST / HTTP/1.1" 200 394 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:23:09 +1000] "POST / HTTP/1.1" 200 951 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:23:09 +1000] "POST / HTTP/1.1" 200 952 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:23:09 +1000] "POST / HTTP/1.1" 200 952 "-" "Java/1.6.0_18"
|
|
|
|
cdhowie (OP)
|
|
June 22, 2011, 10:17:37 PM |
|
Do you want patches? I'm testing putting 50 miners through this proxy and was running into stability issues. I have MySQL on a separate machine and am used to that being a bottleneck, so I looked into making it use persistent connections. I am not familiar with the PDO extension, but I found the instantiator in common.inc.php on line 31; I added array(PDO::ATTR_PERSISTENT => true) to the new call and this took care of my problems. I now have 32 connections from the proxy web server staying open (which is what I want).
Sounds like a good idea. I'll test this on my setup and commit it if there are no issues.
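For anyone who wants to try it before that lands, the change being described is roughly the following (a sketch only; PDO::ATTR_PERSISTENT is a standard PDO option, but the DSN and the configuration key names here are my assumptions, not necessarily the proxy's):

<?php
// Keep MySQL connections open between requests instead of reconnecting every time.
$pdo = new PDO(
    'mysql:host=' . $BTC_PROXY['db_host'] . ';dbname=' . $BTC_PROXY['db_name'],
    $BTC_PROXY['db_user'],
    $BTC_PROXY['db_pass'],
    array(PDO::ATTR_PERSISTENT => true)   // the added options array
);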
There should be a way to copy worker/pool configs, so that instead of setting up 10 pools on 10 workers you can set them up once, copy the configuration to the rest, and then just change the logins as necessary. Or a way to use a variable in the config, so you can build X workers with the suffix changed on each one.
I might add one or more of these ideas in the future. Right now I'm trying to keep the number of features low and work on the ones that are having trouble.
Problem 3 was solved by adding 'http://' to the pool's URL. It won't work without it.
Yup. I'll add something like this to the readme, and maybe add some validation too, so that it bails unless it sees a scheme in the URL.
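Something along these lines should do for the validation (a sketch, not the committed code; parse_url is standard PHP):

<?php
// Bail out early when a pool URL is missing its scheme (http:// or https://).
function require_url_scheme($url) {
    $scheme = parse_url($url, PHP_URL_SCHEME);
    if ($scheme === null || $scheme === false) {
        die("Pool URL '$url' has no scheme; it must start with http:// or https://");
    }
}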
Really wish there was a way to monitor anything on solo mining besides getwork w/ the proxy.
I've no clue what you're asking for here.
What sort of values can I set for 'average_interval', and what effect will it have?
When reporting the number of shares submitted and the miner speed, this is the window of time it looks back over to gather data. If you set it longer, the query takes longer and the numbers generally even out more; if you set it shorter, the query runs faster but the reported worker speed (for example) fluctuates wildly as the worker has lucky and unlucky periods. It is used purely for reports; it has no effect on mining.
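To make the effect concrete, the report does something shaped like this (a sketch only; the table and column names are assumptions, and 2^32 is the usual expected number of hashes per difficulty-1 share rather than a proxy setting):

<?php
// Estimate a worker's speed from the shares it submitted inside the window.
function estimate_mhash(PDO $pdo, $worker_id, $average_interval) {
    $cutoff = date('Y-m-d H:i:s', time() - $average_interval);
    $stmt = $pdo->prepare(
        'SELECT COUNT(*) FROM submitted_work WHERE worker_id = ? AND time > ?');
    $stmt->execute(array($worker_id, $cutoff));
    $shares = (int) $stmt->fetchColumn();

    // A difficulty-1 share represents about 2^32 hashes on average, so a longer
    // window smooths the estimate while a shorter one tracks luck swings.
    return $shares * pow(2, 32) / $average_interval / 1e6;   // MHash/sec
}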
Also, please add rejected per hour and efficiency.
Another user has a nearly-ready patch that adds a lot of this information.
Connections to the proxy started timing out, so restarting Apache fixed it, but I didn't trust it enough to keep going.
This was on a VM w/ 1GB of RAM.
What kind of hardware are you guys running on?
I'm running with 768MB, most of which is in use by a Minecraft server... Try increasing the number of worker children in your Apache configuration. With more miners, you need more workers to keep up with the demand; if Apache hits its limit, it will simply stop responding to requests.
I'm getting the response {"error":"No enabled pools responded to the work request.","result":null,"id":1} with Multipool. I just verified -- Multipool is up. In fact, when I make the exact same work request which I made to the proxy (with updated credentials) to multipool.hpc.tw, I get a response in under a second. How can I troubleshoot this?
If you can, run a sniffer on the web server and see if it even tries to connect. If it does, see if you can diagnose the problem from the content of the HTTP conversation. If it does not try to connect:
- Verify that you have pools assigned to the worker you are using.
- Verify that the php allow_url_fopen configuration flag is set to On.
Unfortunately, I get extreme stale rates (>10%) when using the proxy on my local mining-rig. But I'm behind a firewall - could it be that the Mining Proxy needs to be listening on port 80 from the "outside" to be able to receive LP-queries?
No, the proxy needs no open ports except the one the miners themselves connect to. It may be worth running a packet sniffer and seeing whether the LP requests are actually making it through. If you are not running on Apache, some config tweaks may be necessary to get LP to work at all.
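To illustrate why: long polling is just an outbound HTTP request that the proxy holds open against the pool, exactly as the miner holds one open against the proxy, and the pool advertises its LP URL through the X-Long-Polling header of a normal getwork response. A rough sketch (not the proxy's actual code; everything except that header name is an assumption on my part):

<?php
// The miner's LP request is already parked on this script, so we simply block on an
// outbound request to the pool's long-polling URL and relay whatever it returns.
function relay_long_poll($lp_url, $username, $password) {
    $ctx = stream_context_create(array('http' => array(
        'method'  => 'GET',
        'header'  => 'Authorization: Basic ' . base64_encode("$username:$password") . "\r\n",
        'timeout' => 3600,   // hold the outbound connection open until a new block appears
    )));
    $work = file_get_contents($lp_url, false, $ctx);   // blocks until the pool answers
    header('Content-Type: application/json');
    echo $work;   // hand the fresh work straight back to the waiting miner
}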
|
Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ Thanks to ye, we have the final piece. PGP key fingerprint: 2B7A B280 8B12 21CC 260A DF65 6FCE 505A CF83 38F5 SerajewelKS @ #bitcoin-otc
|
|
|
lachesis
|
|
June 22, 2011, 11:22:14 PM |
|
I'm getting the response {"error":"No enabled pools responded to the work request.","result":null,"id":1} with Multipool. I just verified -- Multipool is up. In fact, when I make the exact same work request which I made to the proxy (with updated credentials) to multipool.hpc.tw, I get a response in under a second. How can I troubleshoot this?
If you can, run a sniffer on the web server and see if it even tries to connect. If it does, see if you can diagnose the problem from the content of the HTTP conversation. If it does not try to connect:
- Verify that you have pools assigned to the worker you are using.
- Verify that the php allow_url_fopen configuration flag is set to On.
My client tried to connect twice. The proxy tried to connect a number of times and received several responses, but forwarded none on to my miner. I wonder if there is an issue here: this server has two IPs on the same subnet. Is there a way to prioritize your proxy to choose one of them exclusively? I'm afraid it's expecting an answer on eth0 but receiving one on eth1 and not properly handling it.
|
|
|
|
cdhowie (OP)
|
|
June 23, 2011, 01:40:13 PM |
|
My client tried to connect twice. The proxy tried to connect a number of times and received several responses, but forwarded none on to my miner.
If you can save a pcap file and email it to me, maybe I could have a look and see what's going on.
I wonder if there is an issue here: this server has two IPs on the same subnet. Is there a way to prioritize your proxy to choose one of them exclusively? I'm afraid it's expecting an answer on eth0 but receiving one on eth1 and not properly handling it.
The kernel should take care of routing stuff correctly. If traffic is coming back on the wrong interface, that would be a problem with an upstream router and not any particular configuration on your box. If you can load other websites fine then the network config shouldn't be interfering.
|
Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ Thanks to ye, we have the final piece. PGP key fingerprint: 2B7A B280 8B12 21CC 260A DF65 6FCE 505A CF83 38F5 SerajewelKS @ #bitcoin-otc
|
|
|
carlo
|
|
June 23, 2011, 01:51:22 PM |
|
Hello, the Dashboard is getting slower and slower with all this data. Does somebody have a quick and dirty solution to erase the "submitted_work" and the "work_data" tables every 24h? I thought about dropping the data from hour 0 -> 23 every 24h... so I keep some stats (like the hashrate of the last hour). If I get my cron job working, I'll let you know... and if somebody already has one, please let me know. Cheers
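Something in this direction is what I have in mind, in case it saves someone typing (untested sketch; the time column name and the PDO handle exposed by common.inc.php are guesses on my part):

<?php
// prune.php -- run from cron once a day, e.g.: 0 0 * * * php /path/to/prune.php
// Deletes everything older than one hour so the last hour of stats survives.
require_once 'common.inc.php';

$cutoff = date('Y-m-d H:i:s', time() - 3600);
foreach (array('submitted_work', 'work_data') as $table) {
    $stmt = $pdo->prepare("DELETE FROM $table WHERE time < ?");
    $stmt->execute(array($cutoff));
}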
|
|
|
|
wyze
Newbie
Offline
Activity: 28
Merit: 0
|
|
June 23, 2011, 02:48:44 PM |
|
Hello, the Dashboard is getting slower and slower with all this data. Does somebody have a quick and dirty solution to erase the "submitted_work" and the "work_data" tables every 24h? I thought about dropping the data from hour 0 -> 23 every 24h... so I keep some stats (like the hashrate of the last hour). If I get my cron job working, I'll let you know... and if somebody already has one, please let me know. Cheers
How big is your database? Mine is at 19MB and my dashboard still loads in under a second, with about 4-5 days' worth of data in there. I get the same speed when accessing the dashboard from work, even though it is hosted on a little web server in my basement.
|
|
|
|
cdhowie (OP)
|
|
June 23, 2011, 03:29:16 PM |
|
Also, please add rejected per hour and efficiency.
Another user has a nearly-ready patch that adds a lot of this information. I misread you the first time -- the rejected-per-hour stats are there, but efficiency is not. How do you define efficiency, and how would you compute it?
|
Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ Thanks to ye, we have the final piece. PGP key fingerprint: 2B7A B280 8B12 21CC 260A DF65 6FCE 505A CF83 38F5 SerajewelKS @ #bitcoin-otc
|
|
|
cdhowie (OP)
|
|
June 23, 2011, 03:33:50 PM |
|
the Dashboard is getting slower and slower with all this data.
There are some indexes missing from the schema. I need to add those and provide a migration script for older installs, but that will take a bit of work and I haven't finished it yet. Once it is done, running the upgrade script will create the indexes and the dashboard will load much faster.
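If anyone wants to experiment in the meantime, the migration will do this kind of thing (a sketch only; the columns the committed migration indexes may well differ, and the PDO handle name is assumed):

<?php
// add_indexes.php -- one-off script; indexes the columns the dashboard joins on.
require_once 'common.inc.php';

$pdo->exec('CREATE INDEX idx_sw_worker ON submitted_work (worker_id, time)');
$pdo->exec('CREATE INDEX idx_sw_pool   ON submitted_work (pool_id, time)');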
Does somebody have a quick and dirty solution to erase the "submitted_work" and the "work_data" tables every 24h? I thought about dropping the data from hour 0 -> 23 every 24h... so I keep some stats (like the hashrate of the last hour).
See the "Database maintenance" section of the readme.
|
Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ Thanks to ye, we have the final piece. PGP key fingerprint: 2B7A B280 8B12 21CC 260A DF65 6FCE 505A CF83 38F5 SerajewelKS @ #bitcoin-otc
|
|
|
Salain
Newbie
Offline
Activity: 18
Merit: 0
|
|
June 23, 2011, 04:34:39 PM |
|
Also, please add rejected per hour and efficiency.
Another user has a nearly-ready patch that adds a lot of this information. I misread you the first time -- the rejected-per-hour stats are there, but efficiency is not. How do you define efficiency, and how would you compute it?
Based on context, I think he's referring to a % good/stale shares stat, i.e. (# good shares) / (# good + # stale shares). Any better efficiency metric would require you to load the current difficulty...
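In proxy terms that would be something like the following (a sketch; I'm assuming submitted_work has a result flag for accepted shares, which may not match the real schema):

<?php
// Efficiency = accepted / (accepted + rejected), shown as a percentage.
$row = $pdo->query(
    'SELECT SUM(result = 1) AS good, SUM(result = 0) AS stale FROM submitted_work'
)->fetch(PDO::FETCH_ASSOC);

$total = $row['good'] + $row['stale'];
echo $total ? round(100 * $row['good'] / $total, 2) . '%' : 'n/a';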
|
|
|
|
PulsedMedia
|
|
June 23, 2011, 06:34:46 PM |
|
Can this include some actual proxy-like features, i.e. caching, so that it could be used behind a flaky internet connection to keep miners 100% at work when the flakiness is in the seconds range? Or does it try to connect to the pool directly from the same instance the miner connects to, etc.? What I'm wondering is whether this would enable me to run miners behind a 3G connection.
I'll try this out soon, and I will then optimize the MySQL DB for you once it starts to slow down (one of my specialties is optimization, especially MySQL) to increase performance. Not sure how well you've done performance-wise, but we can make it work even with TB-sized datasets if you want to. Contact me via PM or on freenode #PulsedMedia if you want to chat about it.
I have a mining hosting idea (in the mining hardware section) for which using this as an intermediary could be PERFECT. It just needs more stats etc., and eventually tagging, grouping of nodes/GPUs and other "advanced workflow features", plus a RESTful API (or more like my own relaxed version, which is much easier to use than the full REST spec).
Sorry for the basic questions; it's just not yet time for me to research this properly. I have other dev tasks to finish first before I get to play around with this.
|
|
|
|
PulsedMedia
|
|
June 24, 2011, 01:07:36 AM |
|
Installed and it seems to work, though the weighting seems to be quite a bit off (I haven't looked at that portion of the code yet). A quick glance at the DB shows no reason why it should be slow, if the queries match the schema, so those who are having speed issues probably just have a bad MySQL config. MySQL scales really efficiently and I don't see any immediate reason why this would be slow, but I have to wait for the data set to grow. If anyone has, say, an 8G+ dataset they don't mind sharing, I'd be willing to look into it -- or any size with which someone is having serious performance issues. The code should be PLENTY more commented, btw.
|
|
|
|
kjj
Legendary
Offline
Activity: 1302
Merit: 1026
|
|
June 24, 2011, 01:10:22 AM |
|
Installed and it seems to work, though the weighting seems to be quite a bit off (I haven't looked at that portion of the code yet). A quick glance at the DB shows no reason why it should be slow, if the queries match the schema, so those who are having speed issues probably just have a bad MySQL config. MySQL scales really efficiently and I don't see any immediate reason why this would be slow, but I have to wait for the data set to grow. If anyone has, say, an 8G+ dataset they don't mind sharing, I'd be willing to look into it -- or any size with which someone is having serious performance issues. The code should be PLENTY more commented, btw.
Check out the join that creates the status display.
|
17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8 I routinely ignore posters with paid advertising in their sigs. You should too.
|
|
|
PulsedMedia
|
|
June 24, 2011, 01:21:03 AM |
|
Installed and it seems to work, though the weighting seems to be quite a bit off (I haven't looked at that portion of the code yet). A quick glance at the DB shows no reason why it should be slow, if the queries match the schema, so those who are having speed issues probably just have a bad MySQL config. MySQL scales really efficiently and I don't see any immediate reason why this would be slow, but I have to wait for the data set to grow. If anyone has, say, an 8G+ dataset they don't mind sharing, I'd be willing to look into it -- or any size with which someone is having serious performance issues. The code should be PLENTY more commented, btw.
Check out the join that creates the status display.
LOL! Yeah, that would cause some serious issues (the first query in admin/index.php); the 3rd query is a monstrosity. Well, there is the problem: using dynamic (created on the fly) tables etc. These queries are almost like SELECT *; I wonder if they ever hit any indexes... In any case, I need a bigger data set before I can optimize them properly.
But in the first query, FROM ( XXXX XXXX ) sw should be changed to just select from the table itself, moving the LIMIT 10 to the whole query. Inner joins -> just select from multiple tables and use the same p.id = sw.pool_id condition. So something like:
SELECT w.name AS worker, p.name AS pool, p.pool_id AS poolId, p.id AS poolId, sw.worker_id AS workerId [AND SO ON FOR ALL FIELDS REQUIRED]
FROM submitted_work sw, pool p, worker w
WHERE p.id = sw.pool_id AND w.id = sw.worker_id
ORDER BY ... LIMIT ...
INNER JOINs are like SELECT *, if I recall right.
The bottom line is that carefully crafted queries can do a scored full-text match on a 100G dataset with multiple text matches, joining multiple tables (not using the JOIN clause, though it's still called joining), on a quad-core Xeon with 16G RAM (pre-i7 Xeon) in well under 100ms. (The target was 15 searches/sec; the achieved peak was above that, and the real-world bottleneck was actually the PHP parsing that transformed our simplified custom query language into a MySQL query for easier use by end users.)
|
|
|
|
kjj
Legendary
Offline
Activity: 1302
Merit: 1026
|
|
June 24, 2011, 01:47:12 AM |
|
Meh. Took me like 5 minutes to modify the work_data table, add new history tables, write a cron job to rotate the records out, and post my scripts. Bonus: I didn't even have to think of any clever SQL tricks.
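The rotation itself is nothing fancier than this kind of thing (rough shape only; the table and column names here are placeholders rather than my actual scripts):

<?php
// rotate.php -- archive old rows into a history table, then trim the live table.
require_once 'common.inc.php';

$cutoff = date('Y-m-d H:i:s', time() - 3600);
$pdo->exec("INSERT INTO work_data_history SELECT * FROM work_data WHERE time < '$cutoff'");
$pdo->exec("DELETE FROM work_data WHERE time < '$cutoff'");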
|
17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8 I routinely ignore posters with paid advertising in their sigs. You should too.
|
|
|
nick5429
Member
Offline
Activity: 79
Merit: 14
|
|
June 24, 2011, 03:15:04 AM |
|
Just as another data point: loading Multipool into the proxy doesn't work for me either. My miner sits there at 0 MHash/sec, while mining at Multipool directly works fine for me. Also, I get weird authentication issues with hashkill: about half the time it fails the initial auth; the other half it authenticates, does a little bit of work, and then claims it had authentication issues:
[hashkill] Version 0.2.5
[hashkill] Plugin 'bitcoin' loaded successfully
[hashkill] Found GPU device: Advanced Micro Devices, Inc. - Barts
[hashkill] GPU0: AMD Radeon HD 6800 Series [busy:0%] [temp:49C]
[hashkill] Temperature threshold set to 90 degrees C
[hashkill] This plugin supports GPU acceleration.
[hashkill] Initialized hash indexes
[hashkill] Initialized thread mutexes
[hashkill] Spawned worker threads
[hashkill] Successfully connected and authorized at my.bitcoinminingproxy.com:80
[hashkill] Compiling OpenCL kernel source (amd_bitcoin.cl)
[hashkill] Binary size: 349376
[hashkill] Doing BFI_INT magic...
Mining statistics... Speed: 273 MHash/sec [proc: 4] [subm: 1] [stale: 0] [eff: 25%]
[error] (ocl_bitcoin.c:141) Cannot authenticate!
Phoenix miner works fine for me with the same proxy settings.
|
|
|
|
nick5429
Member
Offline
Activity: 79
Merit: 14
|
|
June 24, 2011, 03:35:12 AM Last edit: June 24, 2011, 11:47:23 PM by nick5429 |
|
In case anyone is interested, I was able to get the mining proxy working on my Dreamhost shared server with a bit of tweaking. This should be equally applicable to other hosts that impose similar restrictions. Since Dreamhost forces PHP-CGI (rather than mod_php) on its shared hosting users, the .htaccess tricks don't work and the PHP_AUTH_USER / PHP_AUTH_PW variables in the scripts are empty.

First, follow all the basic setup instructions in the standard proxy guide. Then you need to make sure that magic_quotes_gpc and allow_url_fopen are set to the defaults we need; otherwise I don't think there's anything you can do. To check this, create a file called phpinfo.php in your htdocs with the following code:

<?php phpinfo(); ?>

and browse to it in your browser. Search for magic_quotes_gpc (it needs to be off) and allow_url_fopen (it needs to be on). If these settings don't match, getting the proxy working is beyond the scope of this tweak. If those settings are okay, then we can start with the tweaking.

The way the script is written won't allow you to authenticate, but everything else works fine. To fix this, replace the contents of your .htaccess file with the following:

Options -Indexes
RewriteEngine on
RewriteRule .* - [env=HTTP_AUTHORIZATION:%{HTTP:Authorization},last]

Then edit your common.inc.php file to include this code inside the do_admin_auth() function:

if (preg_match('/Basic\s+(.*)$/i', $_SERVER['HTTP_AUTHORIZATION'], $matches)) {
    list($name, $password) = explode(':', base64_decode($matches[1]));
    $_SERVER['PHP_AUTH_USER'] = strip_tags($name);
    $_SERVER['PHP_AUTH_PW'] = strip_tags($password);
}

The function will now look something like:

function do_admin_auth() {
    global $BTC_PROXY;

    if (preg_match('/Basic\s+(.*)$/i', $_SERVER['HTTP_AUTHORIZATION'], $matches)) {
        list($name, $password) = explode(':', base64_decode($matches[1]));
        $_SERVER['PHP_AUTH_USER'] = strip_tags($name);
        $_SERVER['PHP_AUTH_PW'] = strip_tags($password);
    }

    if (!isset($_SERVER['PHP_AUTH_USER'])) {
        auth_fail();
    }

    if ($_SERVER['PHP_AUTH_USER'] != $BTC_PROXY['admin_user']
        || $_SERVER['PHP_AUTH_PW'] != $BTC_PROXY['admin_password']) {
        auth_fail();
    }
}

EDIT: You also need to include this same code snippet near the top of index.php.
|
|
|
|
|