Bitcoin Forum
March 19, 2024, 10:47:25 AM
News: Latest Bitcoin Core release: 26.0 [Torrent]
 
Poll
Question: Would you use such a mining proxy?  (Voting closed: April 13, 2011, 10:26:05 PM)
Yes, it seems like a good idea. - 7 (63.6%)
Maybe. - 1 (9.1%)
No, I don't like the idea. - 3 (27.3%)
No, I use something similar already. - 0 (0%)
Total Voters: 11

Pages: « 1 2 3 4 5 6 7 8 9 10 11 [12] 13 14 15 16 »  All
Author Topic: Flexible mining proxy  (Read 88527 times)
kripz
Full Member
***
Offline Offline

Activity: 182
Merit: 100


View Profile
June 26, 2011, 04:56:38 AM
Last edit: June 26, 2011, 05:31:55 AM by kripz
 #221

Well, I think I found the bug, cdhowie; I sent you a packet dump yesterday.

http://forum.bitcoin.org/index.php?topic=1721.msg285078#msg285078


lol, the pool went down at the time of the update, what a coincidence...

Can't be me... 90% of shares aren't being submitted?

Bug?

Miner 1
mhash 177.7/178.9 | a/r/hwe: 1/0/0 | ghash: 110.2 | fps: 30.0

Miner 2
mhash 819.5/826.6 | a/r/hwe: 0/3/0 | ghash: 114.0 113.3 112.3 | fps: 30.2

Quote
[24/06/11 8:49:58 PM] DEBUG: Attempt 77 found on Cypress (#3)
[24/06/11 8:50:02 PM] DEBUG: Attempt 78 found on Cypress (#2)
[24/06/11 8:50:07 PM] DEBUG: Attempt 79 found on Cypress (#2)
[24/06/11 8:50:12 PM] DEBUG: Attempt 80 found on Cypress (#3)
[24/06/11 8:50:13 PM] DEBUG: Attempt 81 found on Cypress (#2)
[24/06/11 8:50:14 PM] DEBUG: Attempt 82 found on Cypress (#3)
[24/06/11 8:50:15 PM] DEBUG: Attempt 83 found on Cypress (#2)
[24/06/11 8:50:18 PM] DEBUG: Attempt 84 found on Cypress (#3)
[24/06/11 8:50:21 PM] DEBUG: Attempt 85 found on Cypress (#1)
[24/06/11 8:50:23 PM] DEBUG: Attempt 86 found on Cypress (#2)
[24/06/11 8:50:33 PM] DEBUG: Attempt 87 found on Cypress (#2)

Both updated to the latest git.

Just updated the Windows machine.

Quote
[24/06/11 8:56:17 PM] DEBUG: Attempt 3 found on Cayman (#2)
[24/06/11 8:56:18 PM] DEBUG: Attempt 4 found on Cayman (#2)
[24/06/11 8:56:20 PM] DEBUG: Attempt 5 found on Cayman (#2)
[24/06/11 8:56:26 PM] DEBUG: Forcing getwork update due to nonce saturation
[24/06/11 8:56:31 PM] DEBUG: Forcing getwork update due to nonce saturation
[24/06/11 8:56:32 PM] DEBUG: Attempt 6 found on Cayman (#2)
[24/06/11 8:56:32 PM] DEBUG: Attempt 7 found on Cayman (#2)
[24/06/11 8:56:34 PM] DEBUG: Attempt 8 found on Cayman (#2)
[24/06/11 8:56:38 PM] DEBUG: Attempt 9 found on Cayman (#2)

mhash 364.5/362.8 | a/r/hwe: 0/1/0 | ghash: 30.1 | fps: 30.4

Nothing is being submitted?

EDIT: now how do I go back to the old version?

Nope, it's DiabloMiner and/or the flexible proxy (though I never touched the proxy). It will find a few results, submit one or two, and say accepted. After that it logs "Attempt found" but never submits?

Phoenix works 100% rock solid.

The proxy probably does not correctly support things DiabloMiner does, such as time incrementing and returning multiple nonces for the same getwork over short periods. It looks like the sendwork thread is being choked by the proxy.

So, clearly, it's a proxy bug.

Edit: IIRC I get the same behaviour with hashkill.

 Merged mining, free SMS notifications, PayPal payout and much more.
http://btcstats.net/sig/JZCODg2
The block chain is the main innovation of Bitcoin. It is the first distributed timestamping system.
kripz
Full Member
***
Offline Offline

Activity: 182
Merit: 100


View Profile
June 26, 2011, 08:34:45 AM
Last edit: June 26, 2011, 09:56:41 AM by kripz
 #222

Since the recent dashboard changes, "recent rejected" is not working properly; it seems to be missing a lot of records.


This will display "5 seconds/minutes/days/weeks/years ago" instead of a full date-time string. Unfortunately the database doesn't store milliseconds.



Code: (common.inc.php)
function human_time($difference)
{
        $postfix = array("second", "minute", "hour", "day", "week", "month", "year");
        $lengths = array(60, 60, 24, 7, 4.3452380952380952, 12); // ~4.345 weeks per month

        // Bound $i so a difference of a year or more cannot run past the arrays.
        for ($i = 0; $i < count($lengths) && $difference >= $lengths[$i]; $i++)
                $difference /= $lengths[$i];

        $difference = round($difference);

        if ($difference != 1)
                $postfix[$i] .= "s";

        return "$difference $postfix[$i] ago";
}

function format_date($date)
{
        global $BTC_PROXY;
        $obj = new DateTime($date, new DateTimeZone('UTC'));
        $obj->setTimezone(new DateTimeZone($BTC_PROXY['timezone']));

        if($BTC_PROXY['date_format'] != "human")
                return $obj->format($BTC_PROXY['date_format']);
        else
        {
                $now = new DateTime("now", new DateTimeZone('UTC'));
                $now->setTimezone(new DateTimeZone($BTC_PROXY['timezone']));
                $timespan = $now->getTimestamp() - $obj->getTimestamp();
                return human_time($timespan);
        }
}

Code: (config.inc.php)
# Custom php date format or "human" for "x days/minutes/seconds ago"
'date_format'           => 'Y-m-d H:i:s T',
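For illustration, the same "x units ago" rounding can be sketched in Python (a hypothetical stand-alone helper, not part of the proxy; it includes an explicit bounds check so a multi-year difference cannot index past the unit arrays):

```python
def human_time(difference: float) -> str:
    """Render a time difference in seconds as 'N units ago'."""
    units = ["second", "minute", "hour", "day", "week", "month", "year"]
    lengths = [60, 60, 24, 7, 365 / 84, 12]  # 365/84 ~ 4.345 weeks per month

    # Repeatedly convert to the next-larger unit while it still fits.
    i = 0
    while i < len(lengths) and difference >= lengths[i]:
        difference /= lengths[i]
        i += 1

    n = round(difference)
    unit = units[i] if n == 1 else units[i] + "s"
    return f"{n} {unit} ago"
```

For example, a 3600-second difference converts through minutes to exactly one hour.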

wyze
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile WWW
June 26, 2011, 03:00:47 PM
 #223

Since the recent dashboard changes, "recent rejected" is not working properly; it seems to be missing a lot of records.

No code was changed that would affect this. I simply removed the Result column, as it was not really needed since they were all rejected anyway. Smiley

feature request:
  • bulk pool priority change
  • drag and release to change priority of pools

These look pretty good. Please open an issue here with some more detail and we can get that labeled properly for you.

for the stale rate, wouldn't it make more sense if it were daily or even lifetime instead of the default 1h interval?
but for the submitted-share alert, I would probably like 5 min, to check whether one card is down.
for hashing speed, 15 min will be fine.

TL;DR: I recommend separate interval settings for the submitted-share alert, hashing speed and stale rate.

You can always open an issue here for the recommended changes. Smiley
nick5429
Member
**
Offline Offline

Activity: 79
Merit: 14


View Profile
June 26, 2011, 09:55:22 PM
 #224

.

Just wanted to point out that this was an empty reply Smiley
cdhowie (OP)
Full Member
***
Offline Offline

Activity: 182
Merit: 107



View Profile WWW
June 27, 2011, 12:38:02 AM
 #225

.

Just wanted to point out that this was an empty reply Smiley
Yes, I wrote a reply and then realized that it was incorrect.  And since I can't delete my own posts, I just edited it to be empty.  Smiley

Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ

Thanks to ye, we have the final piece.

PGP key fingerprint: 2B7A B280 8B12 21CC 260A  DF65 6FCE 505A CF83 38F5

SerajewelKS @ #bitcoin-otc
dishwara
Legendary
*
Offline Offline

Activity: 1855
Merit: 1016



View Profile
June 27, 2011, 07:05:33 AM
 #226

you can delete your own post.

The option to delete posts was there before; it has since been removed by the admins/mods.
Before: QUOTE EDIT DELETE
Now only: QUOTE EDIT
PulsedMedia
Sr. Member
****
Offline Offline

Activity: 402
Merit: 250


View Profile WWW
June 27, 2011, 09:43:08 AM
 #227

Installed and it seems to work. Though the weighting seems to be quite a bit off (haven't looked at that portion of the code yet).
There is no such thing as "weighting."  Did you read the readme?  Particularly the section on how priority works?

The code should be PLENTY better commented, btw Wink
Probably, particularly index.php.  The rest is fairly well separated and should be readable as-is.  My philosophy on comments is that if you have to comment a lot, your code doesn't explain itself well enough and should be refactored.

IMHO, code should be commented nevertheless; it saves reading time. E.g. doc blocks to explain what a method is for. Later on this saves time because you can just generate code documentation to refer to. The larger the project, the more important this is. Though this might be a bit small a project for that level of documentation, a short comment here and there is still a good idea.

If you check the commenting rules or philosophies of some major projects, like the Linux kernel, they suggest that one comment per 10 lines is a good level to target.

As for the queries, if you can avoid tables created on the fly, that's better. Indeed, I did not even check how my queries did, as I was expecting them to work as expected.

I spent some time on the third one, and in under an hour I managed to reduce the amount of work: fewer subqueries and on-the-fly tables, no temporaries, no filesorts, etc. But on my testing machine (an Athlon XP 1900+, so REALLY slow) I couldn't verify the results, as the times fell within the margin of error; it might just be that parsing the query and deciding on indices takes that long on such a slow CPU with my tiny dataset at the time. It was faster without the MHash rate and slower with it added (comparing oranges to oranges: I compared against yours without the MHash rate, and then with it). Now I have a larger dataset, so when I get time I'll check it again.
Though I might begin from scratch, as the approach is wrong.

Now I have 50k rows in the table, but that's still too small; I want at least 150k+ to be able to go through it properly, even on this slow machine.

Why are you using the echo_html function in the view templates? Have you looked into Smarty, btw? It's very easy to integrate, and it makes views much simpler to read (just view them as HTML; most highlighters handle that perfectly). To those saying it forces bad code behaviour: it's all about how you use it. It doesn't force anything; it simply compiles the views you ask for, provides an excellent isolation layer between the different layers of code, and eases debugging quite a bit, saving tons of time. The overhead is minimal too; the first time you even need to think about it is when you're hitting a high number of requests per second.

Also, I noticed some core functionality issues with this: phoenix started crashing now and then, about once a day per instance. I'm using BTCGuild, Bitcoin.cz and Deepbit, with two workers pointed at it: one runs two discrete GPUs, the other just one. Sometimes it will just not pass new work, and sometimes it pauses work for a while. Is it only me? I'm using LinuxCoin v0.2a and its accompanying phoenix.

EDIT: Still getting quite fast dashboard loads with this dataset. In fact most are in the ms range, at slowest nearly a second. At what point do people start experiencing serious performance degradation all the time?

http://PulsedMedia.com - Semidedicated rTorrent seedboxes
cdhowie (OP)
Full Member
***
Offline Offline

Activity: 182
Merit: 107



View Profile WWW
June 27, 2011, 12:44:08 PM
 #228

I'll reply to the rest of the post shortly, but wanted to answer this question before I leave for work:

EDIT: Still getting quite fast dashboard loads with this dataset. In fact most are in the ms range, at slowest nearly a second. At what point do people start experiencing serious performance degradation all the time?
If you've fetched the latest code and applied the database migration script, then you shouldn't be seeing any degradation anymore.  If you didn't, then you would certainly see horrible dashboard performance after a while, as there were some missing indexes that would cause table scans against submitted_work during the dashboard query.  (As submitted_work tends to grow slower than work_data -- but proportionally to it -- you'll need a fast miner, or a bunch of slow ones, to start seeing the performance issues quicker.)

sodgi7
Newbie
*
Offline Offline

Activity: 11
Merit: 0


View Profile
June 27, 2011, 01:37:36 PM
 #229

I have phoenix crashing issues too, but I'm not really certain this mining proxy is causing them. The problem went away when I switched to another miner (rpcminer). There are some known problems with phoenix. Then again, without the proxy phoenix doesn't crash as often, so I dunno.

I guess the truth is somewhere in the middle, but since other miners work with this proxy I don't see this as a problem.
PulsedMedia
Sr. Member
****
Offline Offline

Activity: 402
Merit: 250


View Profile WWW
June 27, 2011, 02:11:33 PM
 #230

I'll reply to the rest of the post shortly, but wanted to answer this question before I leave for work:

EDIT: Still getting quite fast dashboard loads with this dataset. In fact most are in the ms range, at slowest nearly a second. At what point do people start experiencing serious performance degradation all the time?
If you've fetched the latest code and applied the database migration script, then you shouldn't be seeing any degradation anymore.  If you didn't, then you would certainly see horrible dashboard performance after a while, as there were some missing indexes that would cause table scans against submitted_work during the dashboard query.  (As submitted_work tends to grow slower than work_data -- but proportionally to it -- you'll need a fast miner, or a bunch of slow ones, to start seeing the performance issues quicker.)

Nope, not using the latest, but at least the first two queries are mine, and my added indexes are in place.

Haven't yet checked what the diffs are.

I'm running about 1.1 GHash against this, and I noticed a severe drop in earnings too, due to the more frequent phoenix crashes.

Will need to set aside two identical rigs to properly test whether there is a difference.
My rejected rate is also "through the roof", at about 7% via this proxy! But again, I need the testing setup of two identical rigs to verify.

Need to go buy more 6950s or 6770s to set up two identical rigs Wink

sodgi7
Newbie
*
Offline Offline

Activity: 11
Merit: 0


View Profile
June 27, 2011, 02:50:30 PM
 #231

Just remembered another thing. Even when I'm not using the Phoenix miner, I see quite a high rejected-share rate when checking the mining proxy database, yet when I check the mining pool dashboard everything is fine. So I guess this proxy currently marks certain bad-connection issues or something similar as rejected shares when they really are not?

But yeah, PulsedMedia, I wouldn't be surprised if your issues are partly caused by phoenix. I'm going to test-run poclbm on Windows on my machines today; so far it seems very stable with this proxy, as do the rpcminer clients.
twmz
Hero Member
*****
Offline Offline

Activity: 737
Merit: 500



View Profile
June 28, 2011, 04:45:36 PM
Last edit: June 28, 2011, 05:05:49 PM by ewal
 #232

Hey, guys.  I am not sure whether bitcoin mining proxy has support yet for X-Roll-NTime and for clients that increment the time header (DiabloMiner), but I added both to my ASP.NET implementation of the proxy last night and wanted to share what I had to do to make it work, so that you can add it to the original PHP implementation if you like.

If you are not aware, X-Roll-NTime is a header that some pools return to indicate that clients are allowed to simply increment the time header once they have exhausted the nonce space of the getwork they hold. This allows poclbm, for example, to do only one getwork request per minute instead of one every 5-10 seconds on fast GPUs. This is not only better for the pool, which has to respond to fewer getwork requests, but it also appears to dramatically reduce the occurrence of "the miner is idle", because the miner can always keep mining by incrementing the time header while waiting for new getwork.
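As an illustration of what "rolling" the time header means, here is a sketch in Python (a hypothetical helper, not actual proxy code; real getwork data is byte-swapped per 32-bit word, which is glossed over here by treating the word as big-endian):

```python
def roll_ntime(data_hex: str, step: int = 1) -> str:
    """Increment the ntime word of a getwork data string.

    The first 152 hex chars are 19 32-bit words of the block header;
    ntime is the second-to-last of them, at hex chars 136-143.  Byte
    order is simplified for illustration.
    """
    ntime = int(data_hex[136:144], 16)
    rolled = format((ntime + step) & 0xFFFFFFFF, "08x")
    # Everything outside the ntime word is left untouched.
    return data_hex[:136] + rolled + data_hex[144:]
```

Each increment gives the miner a fresh nonce space to search without another getwork round-trip.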

Not all pools include this header, so the first change I made was to look for it when proxying getwork (both an actual getwork and a submitted share) and when proxying a long poll. I then made sure to pass the header through to the actual mining client when it was present.

The second, related situation is that some mining clients (DiabloMiner) will now choose to increment the time header even when this header is not present, in circumstances where the miner would otherwise be idle. They do this not as a mechanism to reduce the need for frequent getworks, but as a means of continuing to look for hashes even in the presence of network problems that interfere with getwork requests (either erroring them out or just slowing them down).

In both cases (X-Roll-NTime and DiabloMiner's new algorithm), a submitted share will come in with data that is not exactly the same as the data we returned in the getwork, even looking only at the first 152 characters of the hex data string. This means we won't find the submitted data in the work_data table and won't be able to determine which pool to submit it to.

I fixed this by changing what data is stored in the work_data table, stripping out the integer (8 hex characters) that represents the timestamp being changed. I don't claim to understand what all 19 integers (152 hex characters) of the data are, but I determined experimentally that it was the second-to-last integer (8 hex characters) that changed when the miners incremented the time header.

So, my code looks like this for pruning both the data returned from a getwork request to a pool and the data submitted by the client during a submit. Note, this is C#, but you can probably tell what the equivalent PHP code would look like:

Code:
string fullDataString = (string)parameters[0];
string data;
if (fullDataString.Length > 152)
{
    // Drop the ntime word (hex chars 136-143) so shares whose
    // time header has been incremented still match the stored work.
    data = fullDataString.Substring(0, 136) + fullDataString.Substring(144, 8);
}
else
{
    data = fullDataString;
}

With this change (and with the change to correctly proxy the X-Roll-NTime header), my proxy now successfully proxies submits where the time header has been incremented.
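The same pruning can be sketched in Python (a hypothetical illustration; the PHP version would be a similar one-liner with substr). Two data strings that differ only in the ntime word normalize to the same lookup key:

```python
def normalize_work_data(data_hex: str) -> str:
    """Build a work_data lookup key from the first 19 header words,
    minus the ntime word at hex chars 136-143, so rolled-ntime
    submissions still match the stored getwork."""
    if len(data_hex) <= 152:
        return data_hex  # too short to contain the full 19 words
    return data_hex[:136] + data_hex[144:152]
```

The key is 144 hex characters: 17 words of version/prev-block/merkle-root plus the bits word, with the timestamp removed.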

I hope this is useful to you.

Update: See this thread for context on the DiabloMiner change:  http://forum.bitcoin.org/index.php?topic=1721.msg290512#msg290512

Was I helpful?  1TwmzX1wBxNF2qtAJRhdKmi2WyLZ5VHRs
WoT, GPG

Bitrated user: ewal.
cdhowie (OP)
Full Member
***
Offline Offline

Activity: 182
Merit: 107



View Profile WWW
June 28, 2011, 07:44:17 PM
Last edit: June 28, 2011, 07:59:29 PM by cdhowie
 #233

Hey, guys.  I am not sure whether bitcoin mining proxy has support yet for X-Roll-NTime and for clients that increment the time header (DiabloMiner), but I added both to my ASP.NET implementation of the proxy last night and wanted to share what I had to do to make it work, so that you can add it to the original PHP implementation if you like.
I'll add support for this at some point.  Right now I'm trying to get the new C# getwork backend finished up.

Note that as long as X-Roll-NTime is sent as an HTTP header, the proxy should still work with DiabloMiner; since the proxy will not forward this HTTP header on to the miner, the miner should think that the pool doesn't support it.  If DiabloMiner assumes the pool supports it (I can't find a reference to X-Roll-NTime in the DiabloMiner sources) then, well, that's a DiabloMiner bug.

twmz
Hero Member
*****
Offline Offline

Activity: 737
Merit: 500



View Profile
June 28, 2011, 07:46:24 PM
 #234

Hey, guys.  I am not sure whether bitcoin mining proxy has support yet for X-Roll-NTime and for clients that increment the time header (DiabloMiner), but I added both to my ASP.NET implementation of the proxy last night and wanted to share what I had to do to make it work, so that you can add it to the original PHP implementation if you like.
I'll add support for this at some point.  Right now I'm trying to get the new C# getwork backend finished up.

Note that as long as X-Roll-NTime is sent as an HTTP header, the proxy should still work with DiabloMiner; since the proxy will not forward this HTTP header on to the miner, the miner should think that the pool doesn't support it.

DiabloMiner does its time-increment thing with or without the X-Roll-NTime header, in any circumstance where it would otherwise have to be idle (getworks not returning fast enough, getworks erroring out, etc.). So DiabloMiner is going to submit data that won't be found, with or without that header, at least some of the time.

cdhowie (OP)
Full Member
***
Offline Offline

Activity: 182
Merit: 107



View Profile WWW
June 28, 2011, 08:04:29 PM
 #235

DiabloMiner does its time-increment thing with or without the X-Roll-NTime header, in any circumstance where it would otherwise have to be idle (getworks not returning fast enough, getworks erroring out, etc.). So DiabloMiner is going to submit data that won't be found, with or without that header, at least some of the time.
I can't even find a reference to X-Roll-NTime in the DiabloMiner sources.  As long as it only does this when otherwise idle, it's as though the pool didn't support the feature anyway.  So the effect will be the same as if DiabloMiner didn't do this at all, since all of those shares will be rejected.  Therefore this is effectively a feature request against the proxy, not a bug.

twmz
Hero Member
*****
Offline Offline

Activity: 737
Merit: 500



View Profile
June 28, 2011, 08:10:23 PM
 #236

DiabloMiner does its time-increment thing with or without the X-Roll-NTime header, in any circumstance where it would otherwise have to be idle (getworks not returning fast enough, getworks erroring out, etc.). So DiabloMiner is going to submit data that won't be found, with or without that header, at least some of the time.
I can't even find a reference to X-Roll-NTime in the DiabloMiner sources.  As long as it only does this when otherwise idle, it's as though the pool didn't support the feature anyway.  So the effect will be the same as if DiabloMiner didn't do this at all, since all of those shares will be rejected.  Therefore this is effectively a feature request against the proxy, not a bug.

As far as I know, poclbm is the only miner that supports X-Roll-NTime.

I think the reason people complain about Diablo is that it makes Diablo's output look wrong (it shows massive rejected shares).

That said, you are right that it should only happen in cases where it wouldn't have been productive anyway, so no harm done.

kripz
Full Member
***
Offline Offline

Activity: 182
Merit: 100


View Profile
June 29, 2011, 03:50:56 AM
 #237

Diablo doesn't work at all; after a while the proxy stops accepting shares. Rejected shares don't even show up.

kripz
Full Member
***
Offline Offline

Activity: 182
Merit: 100


View Profile
June 29, 2011, 01:24:08 PM
 #238

What happens if I have a single account on a pool and I create 3 miners on the proxy, all pointed at that one pool using the same account?

teknohog
Sr. Member
****
Offline Offline

Activity: 518
Merit: 252


555


View Profile WWW
June 29, 2011, 02:59:13 PM
 #239

What happens if I have a single account on a pool and I create 3 miners on the proxy, all pointed at that one pool using the same account?

Just try it. It's worked well enough for me.

world famous math art | masternodes are bad, mmmkay?
Every sha(sha(sha(sha()))), every ho-o-o-old, still shines
Naven
Newbie
*
Offline Offline

Activity: 22
Merit: 0


View Profile
June 29, 2011, 04:47:57 PM
 #240

@cdhowie, this app is amazing, but it needs some work on SQL optimization and the hash-rate calculator.