Bitcoin Forum
December 05, 2016, 04:48:51 PM *
News: Latest stable version of Bitcoin Core: 0.13.1  [Torrent].
 
Poll
Question: Would you use such a mining proxy?  (Voting closed: April 13, 2011, 10:26:05 PM)
Yes, it seems like a good idea. - 7 (63.6%)
Maybe. - 1 (9.1%)
No, I don't like the idea. - 3 (27.3%)
No, I use something similar already. - 0 (0%)
Total Voters: 11

Pages: « 1 2 3 4 5 6 7 8 9 10 [11] 12 13 14 15 16 17 »  All
Author Topic: Flexible mining proxy  (Read 83904 times)
kjj
Legendary
Offline
Activity: 1302
June 24, 2011, 01:47:12 AM
#201

Meh.  Took me like 5 minutes to modify the work_data table, add new history tables, write a cron job to rotate the records out, and post my scripts.  Bonus: I didn't even have to think of any clever SQL tricks.
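For reference, the rotation described above boils down to an archive-then-trim pair of statements run from cron. This is only a sketch: the history table name and the retention window are assumptions, not kjj's actual script.
Code:
-- Hypothetical: move day-old rows into a history table, then trim the live one.
INSERT INTO work_data_history
    SELECT * FROM work_data WHERE time < NOW() - INTERVAL 1 DAY;
DELETE FROM work_data WHERE time < NOW() - INTERVAL 1 DAY;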

p2pcoin: a USB/CD/PXE p2pool miner - 1N8ZXx2cuMzqBYSK72X4DAy1UdDbZQNPLf - todo
I routinely ignore posters with paid advertising in their sigs.  You should too.
nick5429
Member
Offline
Activity: 70
June 24, 2011, 03:15:04 AM
#202

Just as another data point: loading Multipool into the proxy doesn't work for me, either.  My miner sits there at 0 Mhash/sec.  Mining at Multipool directly works fine for me.

Also, I get weird authentication issues with hashkill.  About half the time it fails the initial auth; the other half it authenticates, does a little bit of work, and then claims it had authentication issues:
Code:
[hashkill] Version 0.2.5
[hashkill] Plugin 'bitcoin' loaded successfully
[hashkill] Found GPU device: Advanced Micro Devices, Inc. - Barts
[hashkill] GPU0: AMD Radeon HD 6800 Series [busy:0%] [temp:49C]
[hashkill] Temperature threshold set to 90 degrees C
[hashkill] This plugin supports GPU acceleration.
[hashkill] Initialized hash indexes
[hashkill] Initialized thread mutexes
[hashkill] Spawned worker threads
[hashkill] Successfully connected and authorized at my.bitcoinminingproxy.com:80
[hashkill] Compiling OpenCL kernel source (amd_bitcoin.cl)
[hashkill] Binary size: 349376
[hashkill] Doing BFI_INT magic...

Mining statistics...
Speed: 273 MHash/sec [proc: 4] [subm: 1] [stale: 0] [eff: 25%]     [error] (ocl_bitcoin.c:141) Cannot authenticate!

Phoenix miner works fine for me with the same proxy settings.

Computer Engineering professional by day, tinkerer and Bitcoin miner by night.
Multiclone operator -- a Multipool clone
1GaZUsCAdUNbvdwFToZenkDDxAPi6ULavA
nick5429
Member
Offline
Activity: 70
June 24, 2011, 03:35:12 AM
#203

In case anyone is interested, I was able to get the mining proxy working on my Dreamhost shared server with a bit of tweaking.  This should be equally applicable to other hosts that may impose similar restrictions.

Since Dreamhost forces PHP-CGI (rather than mod_php) on its shared hosting users, the .htaccess tricks don't work and the PHP_AUTH_USER / PHP_AUTH_PW variables in the scripts are empty.

First, follow all the basic setup instructions in the standard Proxy guide.

Then, you need to make sure that magic_quotes_gpc and allow_url_fopen are set to the defaults we need; otherwise I don't think there's anything you can do.  To check, create a file called phpinfo.php in your htdocs with the following code:
Code:
<?php
phpinfo();
?>

Then browse to it in your browser.

Search for magic_quotes_gpc (it needs to be off) and allow_url_fopen (it needs to be on).  If these settings don't match, getting the proxy working is beyond the scope of this tweak.


If those settings are okay, then we can start with the tweaking.  The way the script is written won't allow you to authenticate, but everything else works fine.  To fix this...

Replace the contents of your .htaccess file with the following:
Code:
Options -Indexes
RewriteEngine on
RewriteRule .* - [env=HTTP_AUTHORIZATION:%{HTTP:Authorization},last]

Then, edit your common.inc.php file to include this code inside the "do_admin_auth()" function:
Code:
if (preg_match('/Basic\s+(.*)$/i', $_SERVER['HTTP_AUTHORIZATION'], $matches))
    {
        // Limit explode to 2 parts so passwords containing ':' are not truncated.
        list($name, $password) = explode(':', base64_decode($matches[1]), 2);
        $_SERVER['PHP_AUTH_USER'] = strip_tags($name);
        $_SERVER['PHP_AUTH_PW'] = strip_tags($password);
    }

This function will now look something like:
Code:
function do_admin_auth() {
    global $BTC_PROXY;
    if (preg_match('/Basic\s+(.*)$/i', $_SERVER['HTTP_AUTHORIZATION'], $matches))
    {
        // Limit explode to 2 parts so passwords containing ':' are not truncated.
        list($name, $password) = explode(':', base64_decode($matches[1]), 2);
        $_SERVER['PHP_AUTH_USER'] = strip_tags($name);
        $_SERVER['PHP_AUTH_PW'] = strip_tags($password);
    }
    if (!isset($_SERVER['PHP_AUTH_USER'])) {
        auth_fail();
    }

    if (    $_SERVER['PHP_AUTH_USER'] != $BTC_PROXY['admin_user'] ||
            $_SERVER['PHP_AUTH_PW']   != $BTC_PROXY['admin_password']) {
        auth_fail();
    }
}

EDIT: You also need to include this code snippet near the top of index.php

Computer Engineering professional by day, tinkerer and Bitcoin miner by night.
Multiclone operator -- a Multipool clone
1GaZUsCAdUNbvdwFToZenkDDxAPi6ULavA
PulsedMedia
Sr. Member
Offline
Activity: 402
June 24, 2011, 06:53:57 AM
#204

Meh.  Took me like 5 minutes to modify the work_data table, add new history tables, write a cron job to rotate the records out, and post my scripts.  Bonus: I didn't even have to think of any clever SQL tricks.

I looked at the linked post... There are plenty of bad things I could now say about that cron job ...

Slow, fault-prone duct-tape hacks to cover up performance issues? People! This is how we create bloatware!

Fix the problem, not the symptoms!

http://PulsedMedia.com - Semidedicated rTorrent seedboxes
PulsedMedia
Sr. Member
Offline
Activity: 402
June 24, 2011, 08:07:43 AM
#205

Wow, there are many kinds of bad things going on here, code-QA-wise.

This code is what I call confused newbie abstraction.

Using functions to output HTML in a view (which is already outputting HTML)? Yup!

Using an abstraction function to output a simple form image button? Yup!

Doing an insanely joined, dynamic-table query for a few fields of simple data? Yup!

Anyway, the first 2 queries are optimized: one less table to look up, and the indices are actually being hit. Far from completely optimized (a filesort still happens!), but it should prove to be an order of magnitude faster, DESPITE hitting more rows. I have NO way to test, however, nor profile properly, due to the small dataset size, so any measurements would be within the margin of error.

Replace the first 2 queries in admin/index.php with those found at: http://pastebin.com/hwncLV1w

NOTE: I have not done proper testing; the results seem to be correct though Smiley

EDIT: Looking into the 3rd query now; it's worse than expected. I will rework the complete query and the accompanying view portion. It actually checks all rows multiple times, assuming MySQL figures out how to optimize it. It creates 5 dynamic (on-the-fly) tables, hits *ALL* rows of submitted work, creates temp tables (for lack of a suitable index), filesorts 3 times, and touches 13 tables in total; if I interpret it correctly, the total number of rows to go through is in the range of hundreds of thousands or more with my test data of 2526 submitted shares! :O

http://PulsedMedia.com - Semidedicated rTorrent seedboxes
PulsedMedia
Sr. Member
Offline
Activity: 402
June 24, 2011, 10:38:55 AM
#206

Well, I fooled around a bit. According to profiling, my changes are beneficial to the 3rd query, but with the really tiny sample data set I have, the actual measurements mean nothing; the time spent might all be in parsing and the optimization engine, not the actual query.

So take even the earlier ones with a grain of salt: I've got no clue of the impact, as I can't actually measure the difference, even though I'm testing on an ancient Athlon XP 1900+ ...

As soon as I have enough data, I will check again how my changed queries affect performance.

http://PulsedMedia.com - Semidedicated rTorrent seedboxes
kripz
Full Member
Offline
Activity: 182
June 24, 2011, 11:21:39 AM
#207

I will test, got a fix for the 3rd?

 Merged mining, free SMS notifications, PayPal payout and much more.
http://btcstats.net/sig/JZCODg2
PulsedMedia
Sr. Member
Offline
Activity: 402
June 24, 2011, 11:28:56 AM
#208

I will test, got a fix for the 3rd?

Yes, but the approach was wrong, so it might actually perform worse.

Also try these and see how they affect things: http://pastebin.com/kcWN9gPH

How many rows are in your submitted_work and work_data tables?

http://PulsedMedia.com - Semidedicated rTorrent seedboxes
kripz
Full Member
Offline
Activity: 182
June 24, 2011, 11:38:52 AM
#209

30k and 50k

 Merged mining, free SMS notifications, PayPal payout and much more.
http://btcstats.net/sig/JZCODg2
PulsedMedia
Sr. Member
Offline
Activity: 402
June 24, 2011, 11:41:09 AM
#210

30k and 50k

That might give some hint of the effect, but we really start seeing a difference at around 10x that size ...

http://PulsedMedia.com - Semidedicated rTorrent seedboxes
wyze
Newbie
Offline
Activity: 28
June 24, 2011, 12:09:33 PM
#211

Anyway, the first 2 queries are optimized: one less table to look up, and the indices are actually being hit. Far from completely optimized (a filesort still happens!), but it should prove to be an order of magnitude faster, DESPITE hitting more rows. I have NO way to test, however, nor profile properly, due to the small dataset size, so any measurements would be within the margin of error.

Replace the first 2 queries in admin/index.php with those found at: http://pastebin.com/hwncLV1w

Looking at the image below, I fail to see how your 'optimized queries' are better. Maybe I just don't have a full grasp of the EXPLAIN command from phpMyAdmin, but it looks to me as though your query scans all rows in the index, which comes out to 2436 with my current data set. That is far more than what the query was scanning before. Please correct me if I am wrong in reading the output. I only looked at the first query and then stopped to get clarification.

http://i.imgur.com/G4bJM.png
PulsedMedia
Sr. Member
Offline
Activity: 402
June 24, 2011, 12:13:50 PM
#212

those are not my queries! :O

Here they are:
Code:
$viewdata['recent-submissions'] = db_query($pdo, '
            SELECT w.name AS worker, p.name AS pool, sw.result AS result, sw.time AS time
            FROM submitted_work sw, pool p, worker w
            WHERE p.id=sw.pool_id AND w.id = sw.worker_id
            ORDER BY sw.time DESC
            LIMIT 10
        ');
        
        
        $viewdata['recent-failed-submissions'] = db_query($pdo, '
            SELECT w.name AS worker, p.name AS pool, sw.result AS result, sw.time AS time
            FROM submitted_work sw, pool p, worker w
            WHERE sw.result=0 AND p.id = sw.pool_id AND w.id = sw.worker_id
            ORDER BY sw.time DESC
            LIMIT 10
        ');


FYI, I haven't actually checked what they do exactly; I just wrote simplified queries.

EDIT: I just checked, and interestingly it doesn't hit an index, which is really weird. Oh well, I'll check why when I have more time.

http://PulsedMedia.com - Semidedicated rTorrent seedboxes
wyze
Newbie
Offline
Activity: 28
June 24, 2011, 12:36:05 PM
#213

those are not my queries! :O

Didn't explain the image, lol. The top results were from your optimized version of the first query and the bottom result was from the query as it is now. phpMyAdmin may display it differently, but it still runs the same. Like I said, I didn't go past comparing the first query until I got some clarification on whether I was reading the results of EXPLAIN correctly.
kripz
Full Member
Offline
Activity: 182
June 24, 2011, 12:39:52 PM
#214

Why is my proxy all of a sudden not submitting shares?

If I point my miner at the pool directly, all is fine.

EDIT: While I'm here, can somebody change the last submitted/requested times to say "X days/hours/seconds ago"?

 Merged mining, free SMS notifications, PayPal payout and much more.
http://btcstats.net/sig/JZCODg2
cdhowie
Full Member
Offline
Activity: 182
June 24, 2011, 03:59:51 PM
#215

Can this include some actual proxy-type features, i.e. caching? So that it could be used behind a flaky internet connection to keep miners working 100% of the time, if the flakiness is in the seconds range?
A non-PHP getwork backend is planned to resolve this and other long-polling (LP) issues.  PHP is not well-suited to this kind of task.

Check out the join that creates the status display.

LOL! Yeah, that would cause some serious issues (the first query in admin/index.php); the 3rd query is a monstrosity.

Well, there is the problem: using dynamic (created on the fly) tables etc.

These queries are almost like SELECT *; I wonder if they ever hit any indexes ...
On my database, an EXPLAIN against that query shows zero table scans and very little processing.  I spent a lot of time optimizing that query.  Note that a few indexes that are required to prevent table scans are not present in the schema; these were added later and I don't have a database migration script just yet, so it's expected that these queries will run a bit slow unless you've manually created the needed indexes.

Well, there is the problem: using dynamic (created on the fly) tables etc.
This in particular made me lol.  Subqueries can be an effective optimization technique if you know how to use them correctly, and any DBA knows that.  In the "last 10 submissions" cases, MySQL creates a plan that executes the LIMIT after the JOIN, which results in a full table scan of work_data/submitted_work.  Querying those tables in a subquery with LIMIT forces it to execute the limit first, which results in a very fast join.  This was a pain point until I refactored the query to use a subquery to derive the data tables.  Please know WTF you are talking about and use EXPLAIN, kthx.

EDIT: I just checked, and interestingly it doesn't hit an index, which is really weird. Oh well, I'll check why when I have more time.
Exactly.  MySQL doesn't use the indexes in this case because it has decided to apply the LIMIT after the joins.  So it does a table scan.  And you don't need indexes to do a table scan, now do you?  Essentially, MySQL's query analyzer sucks, and the subquery is the workaround.
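To make the workaround concrete, here is a sketch of the subquery shape being described, based on the two dashboard queries quoted earlier in the thread. It illustrates the technique only; it is not the proxy's actual refactored query:
Code:
-- The derived table forces the LIMIT to run first;
-- the join then only touches those 10 rows.
SELECT w.name AS worker, p.name AS pool, sw.result AS result, sw.time AS time
FROM (SELECT * FROM submitted_work ORDER BY time DESC LIMIT 10) AS sw
JOIN pool p   ON p.id = sw.pool_id
JOIN worker w ON w.id = sw.worker_id
ORDER BY sw.time DESC;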

So let's do some investigation:

Code:
mysql> SELECT COUNT(*) FROM work_data;
+----------+
| COUNT(*) |
+----------+
|    76422 |
+----------+
1 row in set (0.01 sec)

mysql> SELECT COUNT(*) FROM submitted_work;
+----------+
| COUNT(*) |
+----------+
|   126715 |
+----------+
1 row in set (0.00 sec)

After executing the dashboard status query:
Code:
3 rows in set (0.11 sec)

EXPLAIN on the dashboard status query:

Code:
+----+-------------+----------------+--------+------------------------------------------------+-------------------------+---------+---------------------------------+------+----------------------------------------------+
| id | select_type | table          | type   | possible_keys                                  | key                     | key_len | ref                             | rows | Extra                                        |
+----+-------------+----------------+--------+------------------------------------------------+-------------------------+---------+---------------------------------+------+----------------------------------------------+
|  1 | PRIMARY     | w              | ALL    | NULL                                           | NULL                    | NULL    | NULL                            |    3 | Using temporary; Using filesort              |
|  1 | PRIMARY     | <derived2>     | ALL    | NULL                                           | NULL                    | NULL    | NULL                            |    1 |                                              |
|  1 | PRIMARY     | <derived4>     | ALL    | NULL                                           | NULL                    | NULL    | NULL                            |    2 |                                              |
|  1 | PRIMARY     | <derived6>     | ALL    | NULL                                           | NULL                    | NULL    | NULL                            |    1 |                                              |
|  6 | DERIVED     | sw             | range  | dashboard_status_index2                        | dashboard_status_index2 | 8       | NULL                            |  136 | Using where; Using temporary; Using filesort |
|  4 | DERIVED     | <derived5>     | ALL    | NULL                                           | NULL                    | NULL    | NULL                            |    2 | Using temporary; Using filesort              |
|  4 | DERIVED     | sw             | ref    | dashboard_status_index,dashboard_status_index2 | dashboard_status_index  | 13      | sw2.worker_id,sw2.latest        |    1 |                                              |
|  4 | DERIVED     | p              | eq_ref | PRIMARY                                        | PRIMARY                 | 4       | bitcoin-mining-proxy.sw.pool_id |    1 |                                              |
|  5 | DERIVED     | submitted_work | range  | NULL                                           | dashboard_status_index  | 5       | NULL                            |    3 | Using where; Using index for group-by        |
|  2 | DERIVED     | <derived3>     | system | NULL                                           | NULL                    | NULL    | NULL                            |    1 |                                              |
|  2 | DERIVED     | wd             | ref    | PRIMARY,worker_time                            | worker_time             | 12      | const,const                     |    1 |                                              |
|  2 | DERIVED     | p              | eq_ref | PRIMARY                                        | PRIMARY                 | 4       | bitcoin-mining-proxy.wd.pool_id |    1 |                                              |
|  3 | DERIVED     | work_data      | range  | NULL                                           | worker_time             | 4       | NULL                            |    3 | Using index for group-by                     |
+----+-------------+----------------+--------+------------------------------------------------+-------------------------+---------+---------------------------------+------+----------------------------------------------+

Sorry, the query is fine.  A bit big, but it's attempting to reduce quite a bit of data down to three rows.  So meh.  You do better and I'll take a patch.

Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ

Thanks to ye, we have the final piece.

PGP key fingerprint: 2B7A B280 8B12 21CC 260A  DF65 6FCE 505A CF83 38F5

SerajewelKS @ #bitcoin-otc
cdhowie
Full Member
Offline
Activity: 182
June 24, 2011, 04:59:49 PM
#216

Note that a few indexes that are required to prevent table scans are not present in the schema; these were added later and I don't have a database migration script just yet, so it's expected that these queries will run a bit slow unless you've manually created the needed indexes.
I just took a few minutes to knuckle-down and get this done.  I recommend that all users get the latest from master and apply the DB migration script.  Dashboard performance should significantly improve (to sub-second response times).
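For anyone who cannot pull the migration script, the missing indexes are presumably along these lines. The names and column lists below are guesses reconstructed from the key names in the EXPLAIN output above, so check the actual script before applying them:
Code:
-- Hypothetical reconstruction of the migration's indexes.
CREATE INDEX worker_time ON work_data (worker_id, time);
CREATE INDEX dashboard_status_index ON submitted_work (worker_id, time);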

Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ

Thanks to ye, we have the final piece.

PGP key fingerprint: 2B7A B280 8B12 21CC 260A  DF65 6FCE 505A CF83 38F5

SerajewelKS @ #bitcoin-otc
cdhowie
Full Member
Offline
Activity: 182
June 24, 2011, 05:08:54 PM
#217

Installed and seems to work, though the weighting seems to be quite a bit off (I haven't looked at that portion of the code yet).
There is no such thing as "weighting."  Did you read the readme?  Particularly the section on how priority works?

Code should be PLENTY more commented btw Wink
Probably, particularly index.php.  The rest is fairly well-separated and should be readable as-is.  My philosophy on comments is that if you have to comment a lot, your code doesn't explain itself well enough and should be refactored.

Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ

Thanks to ye, we have the final piece.

PGP key fingerprint: 2B7A B280 8B12 21CC 260A  DF65 6FCE 505A CF83 38F5

SerajewelKS @ #bitcoin-otc
nick5429
Member
Offline
Activity: 70
June 24, 2011, 07:46:54 PM
#218

Has anyone successfully used Multipool as a target pool through the Flexible Mining Proxy? cdhowie, have you tried it?

I've seen at least 2-3 reports in addition to mine claiming it doesn't work, and nobody saying it works for them.

No signup or setup is needed -- just connect to http://multipool.hpc.tw:8337 with a Bitcoin address as your username, and anything for your password.  I'd be interested to hear if multipool is broken for everyone....

Computer Engineering professional by day, tinkerer and Bitcoin miner by night.
Multiclone operator -- a Multipool clone
1GaZUsCAdUNbvdwFToZenkDDxAPi6ULavA
wyze
Newbie
Offline
Activity: 28
June 24, 2011, 09:07:31 PM
#219

Has anyone successfully used Multipool as a target pool through the Flexible Mining Proxy? cdhowie, have you tried it?

I've seen at least 2-3 reports in addition to mine claiming it doesn't work, and nobody saying it works for them.

No signup or setup is needed -- just connect to http://multipool.hpc.tw:8337 with a Bitcoin address as your username, and anything for your password.  I'd be interested to hear if multipool is broken for everyone....

I will take a look at this for you over the weekend.
cdhowie
Full Member
Offline
Activity: 182
June 24, 2011, 09:10:00 PM
#220

Has anyone successfully used Multipool as a target pool through the Flexible Mining Proxy? cdhowie, have you tried it?

I've seen at least 2-3 reports in addition to mine claiming it doesn't work, and nobody saying it works for them.

No signup or setup is needed -- just connect to http://multipool.hpc.tw:8337 with a Bitcoin address as your username, and anything for your password.  I'd be interested to hear if multipool is broken for everyone....

I will take a look at this for you over the weekend.
Damn, beat me to it, wyze.  Smiley

Yes, if nobody can connect to multipool then it's likely some communication issue caused by the proxy code.  If wyze figures it out then he'll probably fix it (he has a fork over on Github you could use until I merge his fix) and if not I'll have a look sometime this weekend too.

Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ

Thanks to ye, we have the final piece.

PGP key fingerprint: 2B7A B280 8B12 21CC 260A  DF65 6FCE 505A CF83 38F5

SerajewelKS @ #bitcoin-otc
Powered by MySQL Powered by PHP Powered by SMF 1.1.19 | SMF © 2006-2009, Simple Machines Valid XHTML 1.0! Valid CSS!