Bitcoin Forum
Poll
Question: Would you use such a mining proxy?  (Voting closed: April 13, 2011, 10:26:05 PM)
Yes, it seems like a good idea. - 7 (63.6%)
Maybe. - 1 (9.1%)
No, I don't like the idea. - 3 (27.3%)
No, I use something similar already. - 0 (0%)
Total Voters: 11

Pages: « 1 2 3 4 5 6 7 8 9 10 [11] 12 13 14 15 16 »  All
Author Topic: Flexible mining proxy  (Read 88544 times)
PulsedMedia
Sr. Member
****
Offline

Activity: 402
Merit: 250


View Profile WWW
June 24, 2011, 06:53:57 AM
 #201

Meh.  Took me like 5 minutes to modify the work_data table, add new history tables, write a cron job to rotate the records out, and post my scripts.  Bonus: I didn't even have to think of any clever SQL tricks.
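For context, the rotation approach described above boils down to something like the following sketch; the work_data_history table and time column are assumptions, not the actual posted script.

Code:
-- Hypothetical nightly rotation: archive day-old work rows, then purge them.
INSERT INTO work_data_history
    SELECT * FROM work_data
    WHERE time < NOW() - INTERVAL 1 DAY;

DELETE FROM work_data
WHERE time < NOW() - INTERVAL 1 DAY;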

I looked at the linked post... There are a bunch of bad things I could say about that cron job ...

Slow, fault-prone gum h4x to fix performance issues? People! This is how we create bloatware!

Fix the problem, not the symptoms!

http://PulsedMedia.com - Semidedicated rTorrent seedboxes
PulsedMedia
Sr. Member
****
Offline

Activity: 402
Merit: 250


View Profile WWW
June 24, 2011, 08:07:43 AM
 #202

Wow, there are many kinds of bad things going on, code-QA-wise.

This code is what I call confused newbie abstraction.

Using functions to output HTML in a view (which is already outputting HTML)? Yup!

Using an abstraction function to output a simple form image button? Yup!

Doing an insane joined and dynamic-tables query for a few fields of simple data? Yup!

Anyways, first 2 queries optimized: one less table to look up, and indices actually being hit. Far from completely optimized (still a filesort happening!), but it should prove to be an order of magnitude faster, DESPITE hitting more rows. I have NO way to test, however, nor profile correctly, due to the small dataset size, so measurements would be within the margin of error.

Replace the first 2 queries in admin/index.php with those found at: http://pastebin.com/hwncLV1w

NOTE: I have not done proper testing; results seem to be correct tho Smiley

EDIT: Looking into the 3rd query now; it's worse than expected. Will rework the complete query and the accompanying view portion. It actually checks all rows multiple times, assuming MySQL realizes how to optimize. It makes 5 dynamic (on-the-fly created) tables, hits *ALL* rows of submitted work, creates temp tables (lack of a suitable index), filesorts 3 times, touches 13 tables total, and if I interpret it correctly, the total rows to go through is in the range of hundreds of thousands or more with my 2526 submitted-shares test data! :O
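For anyone wanting to reproduce these observations: prefixing a query with EXPLAIN makes MySQL show the plan instead of running it. This is generic MySQL usage, nothing proxy-specific; the query below is just one of the dashboard queries quoted later in the thread.

Code:
EXPLAIN SELECT w.name AS worker, sw.result, sw.time
FROM submitted_work sw, worker w
WHERE w.id = sw.worker_id
ORDER BY sw.time DESC
LIMIT 10;
-- Warning signs: "Using temporary" / "Using filesort" in the Extra column,
-- and NULL in the key column (no index is used for that table).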

http://PulsedMedia.com - Semidedicated rTorrent seedboxes
PulsedMedia
Sr. Member
****
Offline

Activity: 402
Merit: 250


View Profile WWW
June 24, 2011, 10:38:55 AM
 #203

Well, I fooled around. According to profiling, my changes are beneficial to the 3rd query, but due to the really tiny sample data set I've got, the actual measurements mean nothing; the time spent might just be all in parsing the query and in the optimization engine, not in the actual query.

So take even the earlier ones with a grain of salt: I've got no clue of the impact, as I can't actually measure the difference, even tho I'm using an ancient Athlon XP 1900+ for testing ...

As soon as I've got enough data, I will check again how my changed queries affect performance.
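For repeating this kind of measurement, MySQL's session profiler gives per-query timings; again, this is generic MySQL, nothing proxy-specific.

Code:
SET profiling = 1;                    -- enable the session profiler
SELECT COUNT(*) FROM submitted_work;  -- run the query under test here
SHOW PROFILES;                        -- wall-clock time for each query this session
SHOW PROFILE FOR QUERY 1;             -- stage-by-stage breakdown of query #1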

http://PulsedMedia.com - Semidedicated rTorrent seedboxes
kripz
Full Member
***
Offline

Activity: 182
Merit: 100


View Profile
June 24, 2011, 11:21:39 AM
 #204

I will test, got a fix for the 3rd?

 Merged mining, free SMS notifications, PayPal payout and much more.
http://btcstats.net/sig/JZCODg2
PulsedMedia
Sr. Member
****
Offline

Activity: 402
Merit: 250


View Profile WWW
June 24, 2011, 11:28:56 AM
 #205

I will test, got a fix for the 3rd?

Yes, but the approach was wrong, so it might actually perform worse.

Also try these and see how they affect things: http://pastebin.com/kcWN9gPH

How many rows in your submitted_work and work_data tables?

http://PulsedMedia.com - Semidedicated rTorrent seedboxes
kripz
Full Member
***
Offline

Activity: 182
Merit: 100


View Profile
June 24, 2011, 11:38:52 AM
 #206

30k and 50k

 Merged mining, free SMS notifications, PayPal payout and much more.
http://btcstats.net/sig/JZCODg2
PulsedMedia
Sr. Member
****
Offline

Activity: 402
Merit: 250


View Profile WWW
June 24, 2011, 11:41:09 AM
 #207

30k and 50k

That might give some hint of the effect, but we really start seeing a difference at around 10x that size ...

http://PulsedMedia.com - Semidedicated rTorrent seedboxes
wyze
Newbie
*
Offline

Activity: 28
Merit: 0


View Profile WWW
June 24, 2011, 12:09:33 PM
 #208

Anyways, first 2 queries optimized: one less table to look up, and indices actually being hit. Far from completely optimized (still a filesort happening!), but it should prove to be an order of magnitude faster, DESPITE hitting more rows. I have NO way to test, however, nor profile correctly, due to the small dataset size, so measurements would be within the margin of error.

Replace the first 2 queries in admin/index.php with those found at: http://pastebin.com/hwncLV1w

Looking at the image below, I fail to see how your 'optimized queries' are better. Maybe I just don't have a full grasp of the EXPLAIN command from phpMyAdmin, but it looks to me as if your query scans all rows in the index, which comes out to 2436 with my current data set. That is way more than what the query was scanning before. Please correct me if I am wrong in reading the output. I only looked at the first query and stopped to get clarification.

https://i.imgur.com/G4bJM.png
PulsedMedia
Sr. Member
****
Offline

Activity: 402
Merit: 250


View Profile WWW
June 24, 2011, 12:13:50 PM
 #209

those are not my queries! :O

Here they are:
Code:
$viewdata['recent-submissions'] = db_query($pdo, '
    SELECT w.name AS worker, p.name AS pool, sw.result AS result, sw.time AS time
    FROM submitted_work sw, pool p, worker w
    WHERE p.id = sw.pool_id AND w.id = sw.worker_id
    ORDER BY sw.time DESC
    LIMIT 10
');

$viewdata['recent-failed-submissions'] = db_query($pdo, '
    SELECT w.name AS worker, p.name AS pool, sw.result AS result, sw.time AS time
    FROM submitted_work sw, pool p, worker w
    WHERE sw.result = 0 AND p.id = sw.pool_id AND w.id = sw.worker_id
    ORDER BY sw.time DESC
    LIMIT 10
');


FYI, I have not actually even checked what they do exactly; I just wrote simplified queries.

EDIT: I just checked, and interestingly it doesn't hit the index, which is really weird. Oh well, I'll check why at a better time.
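One index that would plausibly serve the ORDER BY sw.time DESC ... LIMIT 10 pattern in both queries above (an assumption; it is not claimed to be part of the project's schema):

Code:
-- Lets MySQL read the newest rows straight off the index instead of filesorting:
CREATE INDEX submitted_work_time ON submitted_work (time);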

http://PulsedMedia.com - Semidedicated rTorrent seedboxes
wyze
Newbie
*
Offline

Activity: 28
Merit: 0


View Profile WWW
June 24, 2011, 12:36:05 PM
 #210

those are not my queries! :O

Didn't explain the image, lol. The top results were from your optimized version of the first query and the bottom result was from the query as it is now. It may be displayed differently by phpMyAdmin, but it still runs the same. Like I said, I didn't go past comparing the first query until I got some clarification on whether I was reading the results of EXPLAIN correctly.
kripz
Full Member
***
Offline

Activity: 182
Merit: 100


View Profile
June 24, 2011, 12:39:52 PM
Last edit: June 24, 2011, 01:17:02 PM by kripz
 #211

Why is my proxy all of a sudden not submitting shares?

If I point my miner at the pool directly, all is fine.

EDIT: While I'm here, can somebody change the last submitted/requested display to say "X days/hours/seconds ago"?
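Purely as an illustration, the age could be computed on the SQL side and then formatted into days/hours/seconds in the view:

Code:
-- Seconds since the most recent submitted share:
SELECT TIMESTAMPDIFF(SECOND, MAX(time), NOW()) AS seconds_ago
FROM submitted_work;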

 Merged mining, free SMS notifications, PayPal payout and much more.
http://btcstats.net/sig/JZCODg2
cdhowie (OP)
Full Member
***
Offline

Activity: 182
Merit: 107



View Profile WWW
June 24, 2011, 03:59:51 PM
Last edit: June 24, 2011, 05:09:22 PM by cdhowie
 #212

Can this include some actual proxy kind of features, i.e. caching?
So that this could be used behind a flaky internet connection to keep miners 100% at work, if the flakiness is in the seconds range?
A non-PHP getwork backend is planned to resolve this and other LP issues.  PHP is not well-suited to this kind of task.

Check out the join that creates the status display.

LOL! Yeah that would cause some serious issues (first query in admin/index.php) 3rd query is a monstrosity.

Well there is the problem, using dynamic (on the fly created) tables etc.

These queries are almost like SELECT *; wonder if they ever hit any indexes ...
On my database, an EXPLAIN against that query shows zero table scans and very little processing.  I spent a lot of time optimizing that query.  Note that a few indexes that are required to prevent table scans are not present in the schema; these were added later and I don't have a database migration script just yet, so it's expected that these queries will run a bit slow unless you've manually created the needed indexes.

Well there is the problem, using dynamic (on the fly created) tables etc.
This in particular made me lol.  Subqueries can be an effective optimization technique if you know how to use them correctly, and any DBA knows that.  In the "last 10 submissions" cases, MySQL creates a plan that executes the LIMIT after the JOIN, which results in a full table scan of work_data/submitted_work.  Querying those tables in a subquery with LIMIT forces it to execute the limit first, which results in a very fast join.  This was a pain point until I refactored the query to use a subquery to derive the data tables.  Please know WTF you are talking about and use EXPLAIN, kthx.
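A minimal sketch of that derived-table pattern, applied to the "last 10 submissions" case (not the repository's exact query):

Code:
-- The LIMIT executes inside the derived table first,
-- so the outer joins only ever see 10 rows:
SELECT w.name AS worker, p.name AS pool, sw.result, sw.time
FROM (
    SELECT worker_id, pool_id, result, time
    FROM submitted_work
    ORDER BY time DESC
    LIMIT 10
) sw
JOIN pool p ON p.id = sw.pool_id
JOIN worker w ON w.id = sw.worker_id
ORDER BY sw.time DESC;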

EDIT: I just checked, and interestingly it doesn't hit the index, which is really weird. Oh well, I'll check why at a better time.
Exactly.  MySQL doesn't use the indexes in this case because it has decided to apply the LIMIT after the joins.  So it does a table scan.  And you don't need indexes to do a table scan, now do you?  Essentially, MySQL's query analyzer sucks, and the subquery is the workaround.

So let's do some investigation:

Code:
mysql> SELECT COUNT(*) FROM work_data;
+----------+
| COUNT(*) |
+----------+
|    76422 |
+----------+
1 row in set (0.01 sec)

mysql> SELECT COUNT(*) FROM submitted_work;
+----------+
| COUNT(*) |
+----------+
|   126715 |
+----------+
1 row in set (0.00 sec)

After executing the dashboard status query:
Code:
3 rows in set (0.11 sec)

EXPLAIN on the dashboard status query:

Code:
+----+-------------+----------------+--------+------------------------------------------------+-------------------------+---------+---------------------------------+------+----------------------------------------------+
| id | select_type | table          | type   | possible_keys                                  | key                     | key_len | ref                             | rows | Extra                                        |
+----+-------------+----------------+--------+------------------------------------------------+-------------------------+---------+---------------------------------+------+----------------------------------------------+
|  1 | PRIMARY     | w              | ALL    | NULL                                           | NULL                    | NULL    | NULL                            |    3 | Using temporary; Using filesort              |
|  1 | PRIMARY     | <derived2>     | ALL    | NULL                                           | NULL                    | NULL    | NULL                            |    1 |                                              |
|  1 | PRIMARY     | <derived4>     | ALL    | NULL                                           | NULL                    | NULL    | NULL                            |    2 |                                              |
|  1 | PRIMARY     | <derived6>     | ALL    | NULL                                           | NULL                    | NULL    | NULL                            |    1 |                                              |
|  6 | DERIVED     | sw             | range  | dashboard_status_index2                        | dashboard_status_index2 | 8       | NULL                            |  136 | Using where; Using temporary; Using filesort |
|  4 | DERIVED     | <derived5>     | ALL    | NULL                                           | NULL                    | NULL    | NULL                            |    2 | Using temporary; Using filesort              |
|  4 | DERIVED     | sw             | ref    | dashboard_status_index,dashboard_status_index2 | dashboard_status_index  | 13      | sw2.worker_id,sw2.latest        |    1 |                                              |
|  4 | DERIVED     | p              | eq_ref | PRIMARY                                        | PRIMARY                 | 4       | bitcoin-mining-proxy.sw.pool_id |    1 |                                              |
|  5 | DERIVED     | submitted_work | range  | NULL                                           | dashboard_status_index  | 5       | NULL                            |    3 | Using where; Using index for group-by        |
|  2 | DERIVED     | <derived3>     | system | NULL                                           | NULL                    | NULL    | NULL                            |    1 |                                              |
|  2 | DERIVED     | wd             | ref    | PRIMARY,worker_time                            | worker_time             | 12      | const,const                     |    1 |                                              |
|  2 | DERIVED     | p              | eq_ref | PRIMARY                                        | PRIMARY                 | 4       | bitcoin-mining-proxy.wd.pool_id |    1 |                                              |
|  3 | DERIVED     | work_data      | range  | NULL                                           | worker_time             | 4       | NULL                            |    3 | Using index for group-by                     |
+----+-------------+----------------+--------+------------------------------------------------+-------------------------+---------+---------------------------------+------+----------------------------------------------+

Sorry, the query is fine.  A bit big, but it's attempting to reduce quite a bit of data down to three rows.  So meh.  You do better and I'll take a patch.

Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ

Thanks to ye, we have the final piece.

PGP key fingerprint: 2B7A B280 8B12 21CC 260A  DF65 6FCE 505A CF83 38F5

SerajewelKS @ #bitcoin-otc
cdhowie (OP)
Full Member
***
Offline

Activity: 182
Merit: 107



View Profile WWW
June 24, 2011, 04:59:49 PM
 #213

Note that a few indexes that are required to prevent table scans are not present in the schema; these were added later and I don't have a database migration script just yet, so it's expected that these queries will run a bit slow unless you've manually created the needed indexes.
I just took a few minutes to knuckle down and get this done.  I recommend that all users get the latest from master and apply the DB migration script.  Dashboard performance should improve significantly (to sub-second response times).

Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ

Thanks to ye, we have the final piece.

PGP key fingerprint: 2B7A B280 8B12 21CC 260A  DF65 6FCE 505A CF83 38F5

SerajewelKS @ #bitcoin-otc
cdhowie (OP)
Full Member
***
Offline

Activity: 182
Merit: 107



View Profile WWW
June 24, 2011, 05:08:54 PM
 #214

Installed and seems to work. Tho weighting seems to be quite a bit off (haven't looked at that portion of code yet).
There is no such thing as "weighting."  Did you read the readme?  Particularly the section on how priority works?

Code should be PLENTY more commented btw Wink
Probably, particularly index.php.  The rest is fairly well-separated and should be readable as-is.  My philosophy on comments is that if you have to comment a lot, your code doesn't explain itself well enough and should be refactored.

Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ

Thanks to ye, we have the final piece.

PGP key fingerprint: 2B7A B280 8B12 21CC 260A  DF65 6FCE 505A CF83 38F5

SerajewelKS @ #bitcoin-otc
nick5429
Member
**
Offline

Activity: 79
Merit: 14


View Profile
June 24, 2011, 07:46:54 PM
 #215

Has anyone successfully used Multipool as a target pool through the Flexible Mining Proxy? cdhowie, have you tried it?

I've seen at least 2-3 reports in addition to mine claiming it doesn't work, and nobody saying it works for them.

No signup or setup is needed -- just connect to http://multipool.hpc.tw:8337 with a Bitcoin address as your username, and anything for your password.  I'd be interested to hear if multipool is broken for everyone....
wyze
Newbie
*
Offline

Activity: 28
Merit: 0


View Profile WWW
June 24, 2011, 09:07:31 PM
 #216

Has anyone successfully used Multipool as a target pool through the Flexible Mining Proxy? cdhowie, have you tried it?

I've seen at least 2-3 reports in addition to mine claiming it doesn't work, and nobody saying it works for them.

No signup or setup is needed -- just connect to http://multipool.hpc.tw:8337 with a Bitcoin address as your username, and anything for your password.  I'd be interested to hear if multipool is broken for everyone....

I will take a look at this for you over the weekend.
cdhowie (OP)
Full Member
***
Offline

Activity: 182
Merit: 107



View Profile WWW
June 24, 2011, 09:10:00 PM
 #217

Has anyone successfully used Multipool as a target pool through the Flexible Mining Proxy? cdhowie, have you tried it?

I've seen at least 2-3 reports in addition to mine claiming it doesn't work, and nobody saying it works for them.

No signup or setup is needed -- just connect to http://multipool.hpc.tw:8337 with a Bitcoin address as your username, and anything for your password.  I'd be interested to hear if multipool is broken for everyone....

I will take a look at this for you over the weekend.
Damn, beat me to it, wyze.  Smiley

Yes, if nobody can connect to multipool then it's likely some communication issue caused by the proxy code.  If wyze figures it out then he'll probably fix it (he has a fork over on Github you could use until I merge his fix) and if not I'll have a look sometime this weekend too.

Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ

Thanks to ye, we have the final piece.

PGP key fingerprint: 2B7A B280 8B12 21CC 260A  DF65 6FCE 505A CF83 38F5

SerajewelKS @ #bitcoin-otc
nick5429
Member
**
Offline

Activity: 79
Merit: 14


View Profile
June 25, 2011, 04:53:24 AM
Last edit: June 25, 2011, 05:21:11 AM by nick5429
 #218

Re: the multipool issue, I added some debugging dumps to the place_json_call function.  

Slush's response to a getwork request looks like:
Code:
 {"id": "1", "result": {"hash1": "00000000000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000010000", "data": "00000001575fd9ef5901da864fff435588650d21f8619fa33cb3510f000006d2000000005fc23ab758404b1f6067f111978fcbcc1d4a227a377315ec590f71ed04bc0d8a4e0567941a0c2a1200000000000000800000000000000000000000000000000000000000000000000000000000000000000000000000000080020000", "midstate": "bd4be1f1643712bd150248e7e2d9ace616611d3b9e8ea76b6b76a0180f6b00ce", "target": "ffffffffffffffffffffffffffffffffffffffffffffffffffffffff00000000"}, "error": null}

Whereas Multipool's looks like:
Code:
{"error":null,"id":"json","result":{"target":"ffffffffffffffffffffffffffffffffffffffffffffffffffffffff00000000","midstate":"6dfada9f763c6ae458d123a4a9e71a56bf5fc65946d7b40c8b679e865d7ebad6","data":"00000001575fd9ef5901da864fff435588650d21f8619fa33cb3510f000006d200000000a006b13a7db011c6779926e01ec4f67bc3246bc44419c1b4d204c0650be396a64e0567021a0c2a1200000000000000800000000000000000000000000000000000000000000000000000000000000000000000000000000080020000","hash1":"00000000000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000010000"}}

I haven't investigated enough to determine if this is the issue, but any chance it's due to Multipool's "id" field being non-numeric?
edit: Doubt that's it; slush's pool sometimes returns id="json" as well.

Either way, it appears (to my eyes, which aren't familiar with the pool mining protocol) that Multipool is returning valid data, but it isn't making its way into the work_data table.  There are zero entries with the pool_id that corresponds to Multipool.

edit2: the JSON response when Multipool is enabled is: {"error":"No enabled pools responded to the work request.","result":null,"id":1}

Hope this data helps...
wyze
Newbie
*
Offline

Activity: 28
Merit: 0


View Profile WWW
June 25, 2011, 06:25:10 AM
Last edit: June 25, 2011, 07:04:22 AM by wyze
 #219

It looks like when we do a getwork request to MultiPool, they are also returning extra headers after the JSON string. I am not sure if they are checking the user agent or what on their side. I have a workaround so that the data makes it into the work_data table. Now, it still looks like it has a problem submitting the data. I will look into this later in the day (Saturday), after I get some sleep. I'm thinking it is going to be along the same lines: after we submit the work, MultiPool returns JSON + headers.
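One way such trailing junk could be stripped before decoding is sketched below; the variable names are assumptions, and this is not necessarily the actual fix.

Code:
// Sketch only: cut the response at the last closing brace of the top-level
// JSON object, so trailing header junk doesn't break json_decode().
$end = strrpos($body, '}');
if ($end !== false) {
    $body = substr($body, 0, $end + 1);
}
$response = json_decode($body, true);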

EDIT: I have a workable solution. I was able to successfully submit work with the proxy to MultiPool.

I will discuss the fix further with cdhowie in the morning and see how he wants to go about implementing it.
cdhowie (OP)
Full Member
***
Offline

Activity: 182
Merit: 107



View Profile WWW
June 25, 2011, 08:20:54 PM
 #220

.

Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ

Thanks to ye, we have the final piece.

PGP key fingerprint: 2B7A B280 8B12 21CC 260A  DF65 6FCE 505A CF83 38F5

SerajewelKS @ #bitcoin-otc