Bitcoin Forum
December 05, 2016, 06:41:26 PM *
Poll
Question: Would you use such a mining proxy?  (Voting closed: April 13, 2011, 10:26:05 PM)
Yes, it seems like a good idea. - 7 (63.6%)
Maybe. - 1 (9.1%)
No, I don't like the idea. - 3 (27.3%)
No, I use something similar already. - 0 (0%)
Total Voters: 11

Pages: « 1 2 3 4 5 6 7 8 9 10 11 [12] 13 14 15 16 17 »  All
Author Topic: Flexible mining proxy  (Read 83905 times)
nick5429
Member
**
Offline Offline

Activity: 70


View Profile
June 25, 2011, 04:53:24 AM
 #221

Re: the multipool issue, I added some debugging dumps to the place_json_call function.  

Slush's response to a getwork request looks like:
Code:
{"id": "1", "result": {"hash1": "00000000000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000010000", "data": "00000001575fd9ef5901da864fff435588650d21f8619fa33cb3510f000006d2000000005fc23ab758404b1f6067f111978fcbcc1d4a227a377315ec590f71ed04bc0d8a4e0567941a0c2a1200000000000000800000000000000000000000000000000000000000000000000000000000000000000000000000000080020000", "midstate": "bd4be1f1643712bd150248e7e2d9ace616611d3b9e8ea76b6b76a0180f6b00ce", "target": "ffffffffffffffffffffffffffffffffffffffffffffffffffffffff00000000"}, "error": null}

Whereas Multipool's looks like:
Code:
{"error":null,"id":"json","result":{"target":"ffffffffffffffffffffffffffffffffffffffffffffffffffffffff00000000","midstate":"6dfada9f763c6ae458d123a4a9e71a56bf5fc65946d7b40c8b679e865d7ebad6","data":"00000001575fd9ef5901da864fff435588650d21f8619fa33cb3510f000006d200000000a006b13a7db011c6779926e01ec4f67bc3246bc44419c1b4d204c0650be396a64e0567021a0c2a1200000000000000800000000000000000000000000000000000000000000000000000000000000000000000000000000080020000","hash1":"00000000000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000010000"}}

I haven't investigated enough to determine if this is the issue, but any chance it's due to Multipool's "id" field being non-numeric?
edit: Doubt that's it; slush's pool sometimes returns id="json" as well.

Either way, it appears (to my eyes, which aren't familiar with the pool mining protocol) that Multipool is returning valid data, but it isn't making its way into the work_data table.  There are zero entries with the pool_id that corresponds to Multipool.

edit2: the JSON response when Multipool is enabled is: {"error":"No enabled pools responded to the work request.","result":null,"id":1}
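For what it's worth, the difference between the two dumps is only the key order plus the "id" value; once parsed, both decode to the same structure. A quick sketch (using hypothetical, shortened payloads standing in for the real responses above):
Code:
```python
import json

# Hypothetical, shortened payloads standing in for the real getwork
# responses above; only the key order and the "id" field differ.
slush = '{"id": "1", "result": {"hash1": "aa", "data": "bb", "midstate": "cc", "target": "dd"}, "error": null}'
multipool = '{"error": null, "id": "json", "result": {"target": "dd", "midstate": "cc", "data": "bb", "hash1": "aa"}}'

a, b = json.loads(slush), json.loads(multipool)
# Key order is irrelevant after parsing: the result objects compare equal,
# so a non-numeric "id" is the only remaining difference.
same_result = a["result"] == b["result"]
```
So key order alone shouldn't matter to any sane JSON parser; whatever is going wrong is elsewhere.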

Hope this data helped...

Computer Engineering professional by day, tinkerer and Bitcoin miner by night.
Multiclone operator -- a Multipool clone
1GaZUsCAdUNbvdwFToZenkDDxAPi6ULavA
wyze
Newbie
*
Offline Offline

Activity: 28



View Profile WWW
June 25, 2011, 06:25:10 AM
 #222

It looks like when we do a getwork request to MultiPool, they are also returning extra headers after the JSON string. I am not sure if they are checking the user agent or something else on their side. I have a workaround so that the data makes it into the work_data table. It still looks like there is a problem submitting the data, though; I will look into this later in the day (Saturday), after I get some sleep. I suspect it is the same issue: after we submit the work, MultiPool returns JSON plus headers.
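If the pool really is appending headers after the JSON body, one tolerant approach is to parse only the leading JSON value and discard whatever follows. A sketch in Python (the PHP proxy would need the equivalent, e.g. trimming the body before json_decode):
Code:
```python
import json

def parse_json_prefix(body: str):
    """Parse the leading JSON value in body and return it together with
    whatever trailing junk (e.g. stray headers) follows it."""
    obj, end = json.JSONDecoder().raw_decode(body)
    return obj, body[end:]

# Hypothetical response body with extra headers appended after the JSON.
raw = '{"error": null, "id": "json", "result": null}\r\nX-Some-Header: 1\r\n'
obj, trailing = parse_json_prefix(raw)
```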

EDIT: I have a workable solution. I was able to successfully submit work with the proxy to MultiPool.

I will discuss the fix further with cdhowie in the morning and see how he wants to go about implementing it.
cdhowie
Full Member
***
Offline Offline

Activity: 182



View Profile WWW
June 25, 2011, 08:20:54 PM
 #223

.

Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ

Thanks to ye, we have the final piece.

PGP key fingerprint: 2B7A B280 8B12 21CC 260A  DF65 6FCE 505A CF83 38F5

SerajewelKS @ #bitcoin-otc
kripz
Full Member
***
Offline Offline

Activity: 182



View Profile
June 26, 2011, 04:56:38 AM
 #224

Well, I think I found the bug, cdhowie; I sent you a packet dump yesterday.

http://forum.bitcoin.org/index.php?topic=1721.msg285078#msg285078


lol, the pool went down at the time of the update, what a coincidence...

Can't be me... 90% of shares aren't being submitted?

Bug?

Miner 1
mhash 177.7/178.9 | a/r/hwe: 1/0/0 | ghash: 110.2 | fps: 30.0

Miner 2
mhash 819.5/826.6 | a/r/hwe: 0/3/0 | ghash: 114.0 113.3 112.3 | fps: 30.2

Quote
[24/06/11 8:49:58 PM] DEBUG: Attempt 77 found on Cypress (#3)
[24/06/11 8:50:02 PM] DEBUG: Attempt 78 found on Cypress (#2)
[24/06/11 8:50:07 PM] DEBUG: Attempt 79 found on Cypress (#2)
[24/06/11 8:50:12 PM] DEBUG: Attempt 80 found on Cypress (#3)
[24/06/11 8:50:13 PM] DEBUG: Attempt 81 found on Cypress (#2)
[24/06/11 8:50:14 PM] DEBUG: Attempt 82 found on Cypress (#3)
[24/06/11 8:50:15 PM] DEBUG: Attempt 83 found on Cypress (#2)
[24/06/11 8:50:18 PM] DEBUG: Attempt 84 found on Cypress (#3)
[24/06/11 8:50:21 PM] DEBUG: Attempt 85 found on Cypress (#1)
[24/06/11 8:50:23 PM] DEBUG: Attempt 86 found on Cypress (#2)
[24/06/11 8:50:33 PM] DEBUG: Attempt 87 found on Cypress (#2)

Both updated to the latest git.

Just updated the Windows machine

Quote
[24/06/11 8:56:17 PM] DEBUG: Attempt 3 found on Cayman (#2)
[24/06/11 8:56:18 PM] DEBUG: Attempt 4 found on Cayman (#2)
[24/06/11 8:56:20 PM] DEBUG: Attempt 5 found on Cayman (#2)
[24/06/11 8:56:26 PM] DEBUG: Forcing getwork update due to nonce saturation
[24/06/11 8:56:31 PM] DEBUG: Forcing getwork update due to nonce saturation
[24/06/11 8:56:32 PM] DEBUG: Attempt 6 found on Cayman (#2)
[24/06/11 8:56:32 PM] DEBUG: Attempt 7 found on Cayman (#2)
[24/06/11 8:56:34 PM] DEBUG: Attempt 8 found on Cayman (#2)
[24/06/11 8:56:38 PM] DEBUG: Attempt 9 found on Cayman (#2)

mhash 364.5/362.8 | a/r/hwe: 0/1/0 | ghash: 30.1 | fps: 30.4

Nothing is being submitted?

EDIT: now how do I go back to the old version?

Nope, it's DiabloMiner and/or the flexible proxy (though I never touched the proxy). It will find a few results, submit one or two which are accepted, and after that it logs "Attempt found" but never submits?

Phoenix works 100% rock solid

The proxy probably does not correctly support things DiabloMiner does, such as time incrementing and returning multiple nonces for the same getwork over short periods. It looks like the sendwork thread is being choked by the proxy.

So, clearly, it's a proxy bug.

Edit: IIRC I get the same behaviour with hashkill.

 Merged mining, free SMS notifications, PayPal payout and much more.
http://btcstats.net/sig/JZCODg2
kripz
Full Member
***
Offline Offline

Activity: 182



View Profile
June 26, 2011, 08:34:45 AM
 #225

Since the recent dashboard changes, the recent-rejected list is not working properly; it seems to be missing a lot of records.


This will display "5 seconds/minutes/days/weeks/years ago" instead of a full date/time string. Unfortunately, the database doesn't store milliseconds.



Code: (common.inc.php)
function human_time($difference)
{
        $postfix = array("second", "minute", "hour", "day", "week", "month", "year");
        $lengths = array(60, 60, 24, 7, 4.3452380952380952380952380952381, 12);

        // Guard $i so a difference measured in years cannot index past
        // the end of $lengths.
        for($i = 0; $i < count($lengths) && $difference >= $lengths[$i]; $i++)
                $difference /= $lengths[$i];

        $difference = round($difference);

        if($difference != 1)
                $postfix[$i] .= "s";

        return "$difference $postfix[$i] ago";
}

function format_date($date)
{
        global $BTC_PROXY;
        $obj = new DateTime($date, new DateTimeZone('UTC'));
        $obj->setTimezone(new DateTimeZone($BTC_PROXY['timezone']));

        if($BTC_PROXY['date_format'] != "human")
                return $obj->format($BTC_PROXY['date_format']);
        else
        {
                $now = new DateTime("now", new DateTimeZone('UTC'));
                $now->setTimezone(new DateTimeZone($BTC_PROXY['timezone']));
                $timespan = $now->getTimestamp() - $obj->getTimestamp();
                return human_time($timespan);
        }
}

Code: (config.inc.php)
# Custom php date format or "human" for "x days/minutes/seconds ago"
'date_format'           => 'Y-m-d H:i:s T',

 Merged mining, free SMS notifications, PayPal payout and much more.
http://btcstats.net/sig/JZCODg2
hipaulshi
Jr. Member
*
Offline Offline

Activity: 32


View Profile
June 26, 2011, 08:56:46 AM
 #226

For the stale rate, wouldn't it make more sense to use a daily or even lifetime interval instead of the default 1h?
But for the submitted-share alert I would probably like 5 min, to check whether one card is down.
For hashing speed, 15 min will be fine.

TL;DR: I recommend separate interval settings for the submitted-share alert, hashing speed, and stale rate.
hipaulshi
Jr. Member
*
Offline Offline

Activity: 32


View Profile
June 26, 2011, 09:51:33 AM
 #227

feature request:
  • bulk pool priority change
  • drag and release to change priority of pools
wyze
Newbie
*
Offline Offline

Activity: 28



View Profile WWW
June 26, 2011, 03:00:47 PM
 #228

Since the recent dashboard changes, the recent-rejected list is not working properly; it seems to be missing a lot of records.

No code was changed that would affect this. I simply removed the Result column, as it was not really needed since they were all rejected anyway. Smiley

feature request:
  • bulk pool priority change
  • drag and release to change priority of pools

These look pretty good. Please open an issue here with some more detail and we can get that labeled properly for you.

For the stale rate, wouldn't it make more sense to use a daily or even lifetime interval instead of the default 1h?
But for the submitted-share alert I would probably like 5 min, to check whether one card is down.
For hashing speed, 15 min will be fine.

TL;DR: I recommend separate interval settings for the submitted-share alert, hashing speed, and stale rate.

You can always open an issue here for the recommended changes. Smiley
nick5429
Member
**
Offline Offline

Activity: 70


View Profile
June 26, 2011, 09:55:22 PM
 #229

.

Just wanted to point out that this was an empty reply Smiley

Computer Engineering professional by day, tinkerer and Bitcoin miner by night.
Multiclone operator -- a Multipool clone
1GaZUsCAdUNbvdwFToZenkDDxAPi6ULavA
cdhowie
Full Member
***
Offline Offline

Activity: 182



View Profile WWW
June 27, 2011, 12:38:02 AM
 #230

.

Just wanted to point out that this was an empty reply Smiley
Yes, I wrote a reply and then realized that my reply was incorrect.  And since I can't delete my own posts, I just edited it to be empty.  Smiley

Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ

Thanks to ye, we have the final piece.

PGP key fingerprint: 2B7A B280 8B12 21CC 260A  DF65 6FCE 505A CF83 38F5

SerajewelKS @ #bitcoin-otc
dishwara
Legendary
*
Offline Offline

Activity: 1372


Truth may get delay, but NEVER fails


View Profile
June 27, 2011, 07:05:33 AM
 #231

You used to be able to delete your own post.

The option to delete a post was there before; it has since been removed by the admins/mods.
It used to be QUOTE, EDIT, DELETE.
Now it is only QUOTE, EDIT.
PulsedMedia
Sr. Member
****
Offline Offline

Activity: 402


View Profile WWW
June 27, 2011, 09:43:08 AM
 #232

Installed, and it seems to work. Though the weighting seems to be quite a bit off (haven't looked at that portion of the code yet).
There is no such thing as "weighting."  Did you read the readme?  Particularly the section on how priority works?

The code should be PLENTY more commented, btw Wink
Probably, particularly index.php.  The rest is fairly well separated and should be readable as-is.  My philosophy on comments is that if you have to comment a lot, your code doesn't explain itself well enough and should be refactored.

IMHO, code should be commented nevertheless; it saves reading time. E.g. doc blocks to explain what a method is for. Later on this saves time, since you can just generate code documentation to refer to. The larger the project, the more important this is. This project might be a bit small for that level of documentation, but a short comment here and there is still a good idea.

If you check the commenting rules or philosophies of some major projects, like the Linux kernel, they suggest that one comment per 10 lines is a good level to target.

As for the queries: if you can avoid tables created on the fly, that's better. Indeed, I did not even check how my queries performed, as I was expecting them to work as expected.

I spent some time working on the third one, and within an hour I managed to reduce the amount of work: fewer subqueries and on-the-fly tables, no temporaries, no filesorts, etc. But on the testing machine I'm using (an Athlon XP 1900+, so REALLY slow) I couldn't verify the results, since the timings fell easily within the margin of error; it might just be that parsing the query and choosing indices takes that long on such a slow CPU with my then-tiny dataset. It was faster without the MHash rate and slower with it added (comparing like with like: I compared against yours both without the MHash rate and with it). Now I have a larger dataset, so when I have time I'll check it again.
Though I might begin from scratch, as the approach is wrong.

Now I have 50k rows in the table, but that's still too small; I want at least 150k+ to be able to go through it properly, even on this slow machine.

Why are you using the echo_html function in the view templates? Have you looked into Smarty, by the way? It's very easy to implement and makes views much simpler to read (just view them as HTML; most highlighters handle that perfectly). To those saying it forces bad code behaviour: it's all about how you use it. It doesn't force anything; it simply compiles the views you ask for, provides an excellent isolation layer between the layers of code, and eases debugging quite a bit, saving tons of time. The overhead is extremely minimal too; the first time you even need to think about it is when you are hitting high requests per second.

Also, I noticed some core functionality issues with this: Phoenix started crashing now and then, about once a day per instance. Using BTCGuild, Bitcoin.cz, and Deepbit, I have two workers set up: one runs two discrete GPUs, the other just one. Sometimes it will just not pass new work, and sometimes it pauses work for a while. Is it only me? I'm using LinuxCoin v0.2a and its accompanying Phoenix.

EDIT: I'm still getting quite fast dashboard loads with this dataset; in fact, most of the time it's in the ms range, at slowest nearly a second. At what point do people start to experience serious performance degradation all the time?

http://PulsedMedia.com - Semidedicated rTorrent seedboxes
cdhowie
Full Member
***
Offline Offline

Activity: 182



View Profile WWW
June 27, 2011, 12:44:08 PM
 #233

I'll reply to the rest of the post shortly, but wanted to answer this question before I leave for work:

EDIT: I'm still getting quite fast dashboard loads with this dataset; in fact, most of the time it's in the ms range, at slowest nearly a second. At what point do people start to experience serious performance degradation all the time?
If you've fetched the latest code and applied the database migration script, then you shouldn't be seeing any degradation anymore.  If you didn't, then you would certainly see horrible dashboard performance after a while, as there were some missing indexes that would cause table scans against submitted_work during the dashboard query.  (As submitted_work tends to grow slower than work_data -- but proportionally to it -- you'll need a fast miner, or a bunch of slow ones, to start seeing the performance issues quicker.)

Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ

Thanks to ye, we have the final piece.

PGP key fingerprint: 2B7A B280 8B12 21CC 260A  DF65 6FCE 505A CF83 38F5

SerajewelKS @ #bitcoin-otc
sodgi7
Newbie
*
Offline Offline

Activity: 11


View Profile
June 27, 2011, 01:37:36 PM
 #234

I have Phoenix crashing issues too, but I'm not really certain that it is this mining proxy causing them. The problem went away when I switched over to another miner (rpcminer). There are some known problems with Phoenix. Then again, without the proxy Phoenix doesn't crash as often, so I dunno.

I guess the truth is somewhere in the middle, but since other miners work fine with this proxy, I don't see this as a problem.
PulsedMedia
Sr. Member
****
Offline Offline

Activity: 402


View Profile WWW
June 27, 2011, 02:11:33 PM
 #235

I'll reply to the rest of the post shortly, but wanted to answer this question before I leave for work:

EDIT: I'm still getting quite fast dashboard loads with this dataset; in fact, most of the time it's in the ms range, at slowest nearly a second. At what point do people start to experience serious performance degradation all the time?
If you've fetched the latest code and applied the database migration script, then you shouldn't be seeing any degradation anymore.  If you didn't, then you would certainly see horrible dashboard performance after a while, as there were some missing indexes that would cause table scans against submitted_work during the dashboard query.  (As submitted_work tends to grow slower than work_data -- but proportionally to it -- you'll need a fast miner, or a bunch of slow ones, to start seeing the performance issues quicker.)

Nope, not using the latest, but at least the first two queries are mine, and my added indexes are in place.

Haven't yet checked what the diffs are.

I'm running about 1.1 GHash against this, and I noticed a severe drop in earnings too, due to the more frequent Phoenix crashes.

Will need to put aside two identical rigs to properly test whether there is a difference.
My rejected rate is also "through the roof", at about 7% via this proxy! But again, I need the testing setup of two identical rigs to verify.

Need to go buy more 6950s or 6770s to set up two identical rigs Wink

http://PulsedMedia.com - Semidedicated rTorrent seedboxes
sodgi7
Newbie
*
Offline Offline

Activity: 11


View Profile
June 27, 2011, 02:50:30 PM
 #236

Just remembered another thing. Even when I'm not using the Phoenix miner, I seem to get quite a high rejected-share rate when checking the mining proxy database. When I check the mining pool's dashboard, though, everything is fine. So I guess this proxy currently marks certain bad-connection issues or something similar as rejected shares when they really are not?

But yeah, PulsedMedia, I wouldn't be surprised if your issues are partly caused by Phoenix. I'm going to test-run poclbm on Windows on my machines today; so far it seems very stable with this proxy, as do the rpcminer clients.
twmz
Hero Member
*****
Offline Offline

Activity: 737



View Profile
June 28, 2011, 04:45:36 PM
 #237

Hey, guys.  I am not sure if the bitcoin mining proxy has support yet for X-Roll-NTime and for clients that increment the time header (DiabloMiner), but I added it to my ASP.NET implementation of the proxy last night and wanted to share what I had to do to make it work, so that you guys can add it to the original PHP implementation if you like.

If you are not aware, X-Roll-NTime is a header that some pools may return to indicate to clients that they are allowed to simply increment the time header when they have exhausted the nonce space of the getwork they have.  This allows poclbm, for example, to do only one getwork request per minute instead of one every 5-10 seconds or so on fast GPUs.  This is not only better for the pool, because it has to respond to fewer getwork requests, but it also appears to dramatically reduce the occurrence of "the miner is idle", because the miner can always keep mining by just incrementing the time header while waiting for new getwork.

Not all pools include this header, so the first change I made was to look for it when proxying getwork (both an actual getwork and submitted share) and when proxying a long poll.  I then made sure to pass the header through to the actual mining client when it was present.
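In outline, the pass-through amounts to copying the header only when the pool actually sent it. A sketch (hypothetical dict-based headers for illustration; the real proxy deals with raw HTTP):
Code:
```python
def forward_roll_ntime(pool_headers: dict, miner_headers: dict) -> None:
    """Copy X-Roll-NTime from the pool's response headers into the
    response we send to the miner, but only when the pool sent it."""
    value = pool_headers.get("X-Roll-NTime")
    if value is not None:
        miner_headers["X-Roll-NTime"] = value

pool = {"Content-Type": "application/json", "X-Roll-NTime": "Y"}
miner = {"Content-Type": "application/json"}
forward_roll_ntime(pool, miner)
```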

The second, related situation is that some mining clients (DiabloMiner) will now choose to increment the time header even when this header is not present in circumstances where the miner would otherwise be idle.  They are doing this not as a mechanism to reduce the need for frequent getworks, but instead as a means of continuing looking for hashes even in the presence of network problems that are interfering with getwork requests (either erroring them out or just slowing them down).

In both cases (X-Roll-NTime and DiabloMiner's new algorithm), what will happen is that a submitted share will come in with data that is not exactly the same as the data we returned in the getwork, even after only looking at the first 152 characters of the hex data string.  This means that we won't find their submitted data in the work_data table and won't be able to determine which pool to submit to.

I fixed this by changing what data is stored in the work_data table, stripping out the integer (8 hex characters) that represents the timestamp being changed.  I don't claim to understand what all 19 integers (152 hex characters) of the data are, but I determined experimentally that it was the second-to-last integer (8 hex characters) that changed when the miners incremented the time header.

So, my code looks like this, for pruning both the data returned from a getwork request to a pool and the data submitted by the client during a submit.  Note, this is C#, but you can probably tell what the equivalent PHP code would look like:

Code:
string fullDataString = (string)parameters[0];
string data;
if (fullDataString.Length > 152)
{
    // Keep the first 136 chars, skip the 8-char ntime word, keep the next 8.
    data = fullDataString.Substring(0, 136) + fullDataString.Substring(144, 8);
}
else
{
    data = fullDataString;
}

With this change (and with the change to correctly proxy the X-Roll-NTime header), my proxy now successfully proxies submits where the time header has been incremented.
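A Python rendering of the same normalization (hypothetical helper name; the offsets follow the C# above, dropping the 8-hex-char ntime word at positions 136-143):
Code:
```python
def work_key(data_hex: str) -> str:
    """Normalize a getwork 'data' hex string for lookup in work_data:
    keep the first 136 chars, skip the 8-char ntime word (136..143),
    and keep the next 8 chars (144..151), so shares with a rolled
    timestamp still match the stored work."""
    if len(data_hex) <= 152:
        return data_hex
    return data_hex[:136] + data_hex[144:152]

# Two submissions that differ only in the ntime word map to the same key.
work_a = "a" * 136 + "11111111" + "b" * 16
work_b = "a" * 136 + "22222222" + "b" * 16
```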

I hope this is useful to you.

Update: See this thread for context on the DiabloMiner change:  http://forum.bitcoin.org/index.php?topic=1721.msg290512#msg290512

Was I helpful?  1TwmzX1wBxNF2qtAJRhdKmi2WyLZ5VHRs
WoT, GPG

Bitrated user: ewal.
cdhowie
Full Member
***
Offline Offline

Activity: 182



View Profile WWW
June 28, 2011, 07:44:17 PM
 #238

Hey, guys.  I am not sure if the bitcoin mining proxy has support yet for X-Roll-NTime and for clients that increment the time header (DiabloMiner), but I added it to my ASP.NET implementation of the proxy last night and wanted to share what I had to do to make it work, so that you guys can add it to the original PHP implementation if you like.
I'll add support for this at some point.  Right now I'm trying to get the new C# getwork backend finished up.

Note that as long as X-Roll-NTime is sent as an HTTP header, the proxy should still work with DiabloMiner; since the proxy will not forward this HTTP header on to the miner, it should think that the pool doesn't support it.  If DiabloMiner is assuming that the pool supports it (I can't find a reference to X-Roll-NTime in the DiabloMiner sources) then, well, that's a DiabloMiner bug.

Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ

Thanks to ye, we have the final piece.

PGP key fingerprint: 2B7A B280 8B12 21CC 260A  DF65 6FCE 505A CF83 38F5

SerajewelKS @ #bitcoin-otc
twmz
Hero Member
*****
Offline Offline

Activity: 737



View Profile
June 28, 2011, 07:46:24 PM
 #239

Hey, guys.  I am not sure if the bitcoin mining proxy has support yet for X-Roll-NTime and for clients that increment the time header (DiabloMiner), but I added it to my ASP.NET implementation of the proxy last night and wanted to share what I had to do to make it work, so that you guys can add it to the original PHP implementation if you like.
I'll add support for this at some point.  Right now I'm trying to get the new C# getwork backend finished up.

Note that as long as X-Roll-NTime is sent as an HTTP header, the proxy should still work with DiabloMiner; since the proxy will not forward this HTTP header on to the miner, it should think that the pool doesn't support it.

DiabloMiner does its time-increment thing with or without the X-Roll-NTime header, in any circumstance where it would otherwise have to be idle (getworks not returning fast enough, getworks erroring out, etc.).  So DiabloMiner is going to submit data that won't be found, with or without that header, at least some of the time.

Was I helpful?  1TwmzX1wBxNF2qtAJRhdKmi2WyLZ5VHRs
WoT, GPG

Bitrated user: ewal.
cdhowie
Full Member
***
Offline Offline

Activity: 182



View Profile WWW
June 28, 2011, 08:04:29 PM
 #240

DiabloMiner does its time-increment thing with or without the X-Roll-NTime header, in any circumstance where it would otherwise have to be idle (getworks not returning fast enough, getworks erroring out, etc.).  So DiabloMiner is going to submit data that won't be found, with or without that header, at least some of the time.
I can't even find a reference to X-Roll-NTime in the DiabloMiner sources.  As long as it only does this when otherwise idle, it's as though the pool didn't support the feature anyway.  So the effect will be the same as if DiabloMiner didn't do this at all, since all of those shares will be rejected.  Therefore this is effectively a feature request against the proxy and not a bug.

Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ

Thanks to ye, we have the final piece.

PGP key fingerprint: 2B7A B280 8B12 21CC 260A  DF65 6FCE 505A CF83 38F5

SerajewelKS @ #bitcoin-otc