Bitcoin Forum
Author Topic: Variable difficulty shares, can efficiency be improved for fast miners?  (Read 6048 times)
rjk (OP)
Sr. Member
January 26, 2012, 08:07:21 PM
 #1

I have been considering the load that miners typically place on a pool, and started wondering whether it might be possible for a pool operator to give users the option to tweak the difficulty of the shares they submit.

For instance, most pools (all?) hand out work with a difficulty 1 target for each request. However, newer technology has greatly increased the speed with which such a share can be solved, and the number of getworks might become more and more of a problem for pools to deal with in a cost-effective manner.

So, what I would like to know is whether it would be possible for the pool op to place a configurable selection in the worker configuration to raise the difficulty of shares sent to the worker, perhaps as high as, say, 10. This would reduce server load, and I wonder if it could increase efficiency as well.

Any ops care to comment on whether a) this is a good idea, and b) it is technically even possible?

EDIT: I might also add that this could have positive benefits in high-latency, low-throughput networks such as Tor.
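
For context, a minimal sketch of what such a per-worker difficulty knob would mean, assuming standard getwork semantics: the pool lowers the share target, so fewer of the hashes a miner finds qualify as shares. The helper names here are invented for illustration; the constant is the well-known difficulty-1 target.

Code:
# Sketch only: how a per-worker difficulty setting could map to a share target.
# DIFF1_TARGET is the standard getwork-era difficulty-1 target.
DIFF1_TARGET = 0x00000000FFFF0000000000000000000000000000000000000000000000000000

def share_target(difficulty: int) -> int:
    """Target a hash must be at or below to count as a share."""
    return DIFF1_TARGET // difficulty

def is_valid_share(hash_value: int, difficulty: int) -> bool:
    # Raising the difficulty makes qualifying shares proportionally rarer,
    # which is the "fewer submits" effect being proposed in this thread.
    return hash_value <= share_target(difficulty)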

Inaba
Legendary
January 26, 2012, 08:25:00 PM
 #2

Hmm... configurable.  Not a bad idea...

I've been toying with the idea of increasing the difficulty to lessen the load. I think this has been tried in the past and it didn't work out so well, but I can't remember why.

rjk (OP)
Sr. Member
January 26, 2012, 08:28:04 PM
 #3

Quote from: Inaba
Hmm... configurable.  Not a bad idea...

I've been toying with the idea of increasing the difficulty to lessen the load. I think this has been tried in the past and it didn't work out so well, but I can't remember why.

I even thought of another idea: perhaps the pool could automatically vary the difficulty based on ask rate (obviously this would be an option that is disabled by default) or some other statistic.

eleuthria
Legendary
January 26, 2012, 08:44:32 PM
 #4

A few problems with varying difficulty:

The load only drops on the pool side for verifying shares (higher difficulty = fewer shares are returned).  The rate at which a client asks for work stays the same, since many miners now exhaust a full getwork before asking for more, whereas a few months ago they would submit a share and request new work even though it's possible to hash multiple valid shares from one getwork.

This means that the pool is still doing just as much work software-side for asks, which is where most of the load on a pool server comes from (verifying shares is VERY easy/low load).  The outbound traffic is also unaffected for the same reason.

One issue with implementing it is that some miners submit all diff=1 hashes regardless of the difficulty target (I believe cgminer still does this, not sure about which others).

The only real advantage of a higher difficulty is the lower load on the database, since valid shares will be submitted less often (and the database stays smaller).  I'm not sure about other pools, but I know BTC Guild hasn't had any issue with database load when it comes to logging shares in the last few months.


As far as I'm aware, all miners queue at least 1 getwork beyond what is being worked on actively.  Some cache more, or can be configured to do so (phoenix and cgminer I know have this).  This can help you if there is an unstable network connection between you and the pool.  But since most miners exhaust a full getwork nonce range before moving on to a different getwork, a deeper queue only matters if you are running an EXTREMELY fast miner (or have some kind of cluster setup where multiple cards are splitting the nonce range).
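
To put numbers on that point, a back-of-the-envelope sketch (the probability is the standard diff-1 approximation; the function name is invented):

Code:
# Difficulty changes how many shares come back per getwork,
# not how often getworks are requested.
NONCE_RANGE = 2 ** 32   # one getwork covers the 32-bit nonce field
P_DIFF1 = 2.0 ** -32    # approx. chance a single hash meets difficulty 1

def expected_shares_per_getwork(difficulty: float) -> float:
    return NONCE_RANGE * P_DIFF1 / difficulty

print(expected_shares_per_getwork(1))    # ~1.0 share per exhausted getwork
print(expected_shares_per_getwork(10))   # ~0.1: 10x fewer submits, same ask rate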


Inaba
Legendary
January 26, 2012, 08:47:43 PM
 #5

Yeah, database load is not an issue.

That's right... I remember now why increasing the difficulty doesn't help: it just reduces the number of shares returned.  It's the outbound getwork traffic that's the bottleneck.


rjk (OP)
Sr. Member
January 26, 2012, 09:02:32 PM
 #6

I understand, thanks for the responses. The main reason I considered this was for overclocked 7970s and other such fast things. I think the benefit might only become apparent when a single device is clocking over 1 GH/s (doesn't happen yet), which may not be too far down the road given various bits of technology being worked on. Another way such a scheme could be utilized is if cgminer would split nonce ranges across all the devices it controls - every now and then I hear pool operators moaning about cgminer being a getwork hog.

OneFixt
Member
January 27, 2012, 12:34:51 AM
 #7

We've solved this issue at BitPenny (website) by providing an open-source client and setting the difficulty to 8.  This keeps the number of submitted shares manageable while allowing users to get latency-free work locally as often as they wish, even with an array of fast GPUs.

DrHaribo
Legendary
January 29, 2012, 01:21:40 PM
 #8

This is certainly a good idea. I think the reason it has only been talked about in the past and never actually done is lack of support in software. I think most pool software always sends a difficulty 1 target, and some/most miners ignore the target and pretend it is always difficulty 1?

Perhaps we can add a new column "variable difficulty" to this wiki page https://en.bitcoin.it/wiki/Getwork_support

Does anyone know which miners and pool programs support this?

Inaba
Legendary
January 29, 2012, 03:19:36 PM
 #9

Err, did you read the whole thread, DrHaribo?

DrHaribo
Legendary
January 29, 2012, 05:04:36 PM
 #10

Quote from: Inaba
Err, did you read the whole thread, DrHaribo?

Yes. And I think it's obvious that >1 difficulty can reduce the number of requests delivering proofs of work from fast miners. To reduce the number of requests fetching new work for fast miners, you can use X-Roll-NTime.

Dealing with slow workers (CPU miners) is much harder. Perhaps noncerange could help, but I suspect it won't do that much. The noncerange extension is also only supported by one miner that no one is using.

Inaba
Legendary
January 29, 2012, 06:56:31 PM
 #11

Well... ok.  Are you seeing problems with the number of submitted shares vs. the number of getworks?  The load submitted shares put on the server is pretty minimal for my servers; it's the getwork requests that clog the bandwidth.

rjk (OP)
Sr. Member
January 30, 2012, 04:02:48 AM
 #12

Quote from: Inaba
Well... ok.  Are you seeing problems with the number of submitted shares vs. the number of getworks?  The load submitted shares put on the server is pretty minimal for my servers; it's the getwork requests that clog the bandwidth.
This is what I was hoping the variable diff could fix, but from what you are saying it might not make any difference at all. I don't really understand why not, since it appears to me that it would result in fewer getwork requests. But perhaps I am missing something obvious?

The idea was that someone with either a) a smart client that split the nonce range, or b) an extremely fast single miner (rig box? :P) would be able to adjust such a setting himself, possibly increasing efficiency for high-speed devices on flaky network connections, because of less time spent getting work over said flaky connection.

I dunno, I tend to come up with lots of ideas that sound good on the surface, but end up not being practical for one reason or another.

P.S., to combat a botnet, could you force-feed them shares with diff=999999999999999? And disable LP? :D

eleuthria
Legendary
January 30, 2012, 04:24:37 AM
 #13

Quote from: rjk
This is what I was hoping the variable diff could fix, but from what you are saying it might not make any difference at all. I don't really understand why not, since it appears to me that it would result in fewer getwork requests. But perhaps I am missing something obvious?

A getwork request itself is unrelated to difficulty.  It is a blob of data your miner needs in order to construct a valid hash for the pool.  This means that hashing higher difficulty shares will not change the getwork load.  It simply means your miner won't submit work to the server unless hashing that getwork produces a share of difficulty X or greater.  Since a single getwork can produce multiple shares of difficulty X, your miner will continue hashing with that getwork request until it exhausts the entire nonce range (or at least MOST mining software will).

DeathAndTaxes
Donator Legendary
January 30, 2012, 04:30:06 AM
 #14

Quote from: rjk
The main reason I considered this was for overclocked 7970s and other such fast things. [...] Another way such a scheme could be utilized is if cgminer would split nonce ranges across all the devices it controls - every now and then I hear pool operators moaning about cgminer being a getwork hog.

Changing difficulty doesn't make a miner any less of a getwork hog, just less of a share submitter.

How pool mining works is this (a simplified version; modern miners take extra measures to improve efficiency):
1) Miner issues a getwork
2) Pool provides miner with a block header (minus nonce)
3) The miner starts with a nonce of 0, adds it to the rest of the header, and hashes it. It then increments the nonce and hashes again.
4) The miner returns any shares found.

The problem is that the nonce is only a 32-bit number.  So there are only ~4 billion hashes per nonce range.  A 1 GH/s miner will need a new getwork every 4 seconds.  A 10 GH/s miner will need a new getwork every 0.4 seconds.  If only Satoshi had made the nonce field 64-bit.

You could make the difficulty 1.25 million (solo mining) and a 1 GH/s miner would still need one getwork every 4 seconds.

Still, ntime rolling can be used to significantly reduce the number of getworks from fast miners.  If the pool allows an ntime roll of 5 seconds, then no matter how fast a GPU becomes it only needs a new getwork every 5 seconds.  When a GPU finishes a nonce range it simply increments the timestamp and hashes it again.  When the ntime roll expires it gets new work.

https://en.bitcoin.it/wiki/Getwork#rollntime

Maybe a pool op can correct me, but there is no reason one couldn't use LP combined with a very long ntime roll (say 60 sec) to reduce the number of getworks to one every minute per GPU/CPU, regardless of how fast they are.
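
A quick sketch of that arithmetic (hypothetical speeds, not tied to any particular hardware; the function is illustrative, not from any miner):

Code:
def seconds_per_getwork(hashrate_hps: float, rollntime_secs: int = 0) -> float:
    base = 2 ** 32 / hashrate_hps       # time to exhaust one nonce range
    # With ntime rolling the miner re-hashes the same header with the
    # timestamp bumped +1s, +2s, ... so one getwork can be stretched to
    # cover the whole allowed roll window.
    return max(base, rollntime_secs)

print(seconds_per_getwork(1e9))         # 1 GH/s  -> ~4.3 s per getwork
print(seconds_per_getwork(1e10))        # 10 GH/s -> ~0.43 s per getwork
print(seconds_per_getwork(1e10, 60))    # 60 s roll window -> 1 getwork/min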

rjk (OP)
Sr. Member
January 30, 2012, 04:35:17 AM
 #15

Quote from: DeathAndTaxes
Changing difficulty doesn't make a miner any less of a getwork hog. [...] Maybe a pool op can correct me, but there is no reason one couldn't use LP combined with a very long ntime roll (say 60 sec) to reduce the number of getworks to one every minute per GPU/CPU.


DeathAndTaxes, thank you, that was the key to my puzzle. I assumed that raising the difficulty of a share would lead to spending more time hashing on each getwork, causing fewer requests, whereas you say that the time spent is not related to difficulty? Correct me if I am wrong, please...

In addition, each LP results in a storm of getworks from miners that are discarding the current work and starting fresh, is this correct? Or have I misunderstood yet another important thing about how this works? ;)

DeathAndTaxes
Donator Legendary
January 30, 2012, 04:44:30 AM
 #16

Quote from: rjk
DeathAndTaxes, thank you, that was the key to my puzzle. I assumed that raising the difficulty of a share would lead to spending more time hashing on each getwork, causing fewer requests, whereas you say that the time spent is not related to difficulty? Correct me if I am wrong, please...

Correct.

How it works is the miner gets a block header (minus nonce).

The miner adds a nonce of 0, hashes it, and checks whether the resulting hash meets the pool's difficulty target (not the block difficulty).
If it does, the miner submits it.  If it doesn't, the miner discards it.

THEN, regardless, the miner increments the nonce to 1 and does the same thing
...
increment, hash, check (and possibly submit)
increment, hash, check (and possibly submit)
increment, hash, check (and possibly submit)
increment, hash, check (and possibly submit)
increment, hash, check (and possibly submit)
....
4 billion iterations

The nonce range is exhausted.  The miner requests new work via getwork.  At that point the miner starts all over with a nonce of 0.
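
A toy rendering of that loop (CPU-speed Python for illustration only; real miners run this on GPUs, and header serialization details are glossed over):

Code:
import hashlib
import struct

def scan_nonce_range(header76: bytes, target: int):
    """Yield each nonce whose double-SHA256 meets the pool's share target."""
    for nonce in range(2 ** 32):                    # ~4 billion iterations
        header = header76 + struct.pack('<I', nonce)
        digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
        # increment, hash, check (and possibly submit):
        if int.from_bytes(digest, 'little') <= target:
            yield nonce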

With a difficulty of 1, the miner submits 1 share per getwork (on average) with perfect efficiency.
With a difficulty of 200 (current p2pool difficulty), the miner finds and submits 1 share per 200 getwork requests.

Still, the rate of getwork requests remains the same.  If the miner is perfectly efficient, it is roughly one every (2^32)/(speed of GPU in hashes per second) seconds.

It is actually slow miners which are hard on the server.  100 GH/s made up of 100 1 GH/s GPUs is a pretty easy load, but 100 GH/s made up of 5000 CPUs is pretty rough.  The reason is that slow miners are inefficient due to the low likelihood of finding a share before it goes stale.  This means they make lots of getwork requests for each share submitted.

Ntime rolling can be used to reduce the number of getworks by allowing the miner to increment the timestamp locally.
A hybrid (aka split) pool can reduce the getwork load on the server to zero by having miners generate block headers locally.  While p2pool does this, it could also be used by a "traditional" pool.

Quote from: rjk
In addition, each LP results in a storm of getworks from miners that are discarding the current work and starting fresh, is this correct? Or have I misunderstood yet another important thing about how this works? ;)

Correct.  All work issued to miners is now worthless, so the pool server will issue an LP with new work.  The miner locally discards any queued-up work and begins to process the new work.  Obviously the pool will need to recalculate block headers for every miner in the pool.  This can be computationally intensive.  Some pools optimize efficiency by issuing LPs to the fastest miners first.
eleuthria
Legendary
January 30, 2012, 04:58:26 AM
 #17

Just to build on DeathAndTaxes' response regarding the storm of getworks after an LP:  Not only is the pool having to generate a getwork for every miner (an LP is sent by pushing a fresh getwork over the connection), it is also generating significant amounts of extra work because almost all miners will request 1 getwork for local queuing in addition to the active getwork.  On some miners it is an even larger queue, so when an LP hits, a pool may be preparing many thousands of getworks all at once for both the LPs and the subsequent extra requests.
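
Rough sizing of that burst (the worker count and queue depth are made-up figures, purely illustrative):

Code:
def lp_burst_getworks(workers: int, queue_depth: int = 1) -> int:
    # one fresh getwork pushed per connection for the LP itself, plus
    # each miner immediately refilling its local queue afterwards
    return workers * (1 + queue_depth)

print(lp_burst_getworks(5000))  # 10000 getworks prepared almost at once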

rjk (OP)
Sr. Member
January 30, 2012, 04:59:44 AM
 #18

Thank you for the detailed explanation. I see now why my idea is not useful in general, and how it would only have changed how often a worker submits to a pool. However, perhaps those with 50 GH/s could want to submit less often, and this is where the difference would be made. But on the other hand, submissions need next to zero bandwidth, so it is not a problem for either the miner or the pool.

And, as posted above, it sounds as though some miners submit all shares of diff 1 and higher regardless of the target diff. So no gain here either.

Again, thanks for the clear explanations, I am learning more about the Bitcoin protocol than ever before. The more I learn, the more I am able to boil it down into explainable chunks, while still being able to delve into detail when asked. The last time I presented Bitcoin to a tech-savvy person, it took close to 2 hours to run through the basics of the system, with the occasional deviation into details, even with my overly terse style of presentation. I hope to be able to present Bitcoin in the future in less time than that, while still maintaining a high level of understandability for the "explainee".

DrHaribo
Legendary
January 30, 2012, 07:37:08 PM
 #19

Quote from: DeathAndTaxes
Maybe a pool op can correct me, but there is no reason one couldn't use LP combined with a very long ntime roll (say 60 sec) to reduce the number of getworks to one every minute per GPU/CPU, regardless of how fast they are.

This is perfectly correct. There are two forms of ntime rolling support, with different messages from the server:

X-Roll-NTime: Y
X-Roll-NTime: expire=N

The second form is more flexible and allows the server to tell the client how many seconds (N) forward it is allowed to increment the timestamp (ntime). The first form is the original roll-ntime, which gives the client permission to increment ntime without limit. I recently looked at the source code of the latest versions of a few miners, and only DiabloMiner supported the "expire=N" form. Most miners will interpret the second form to mean they can roll ntime without limit, ignoring the N value.
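
A sketch of how a client might interpret the two forms (hypothetical helper, not taken from any of the miners mentioned):

Code:
def parse_roll_ntime(header_value: str):
    """Return the allowed ntime roll window in seconds; None = unlimited."""
    value = header_value.strip().lower()
    if value.startswith('expire='):
        return int(value.split('=', 1)[1])   # second form: bounded window
    if value in ('n', 'no', 'false'):
        return 0                             # rolling not permitted
    return None                              # first form ("Y"): no limit

print(parse_roll_ntime('Y'))           # None -> roll without limit
print(parse_roll_ntime('expire=60'))   # 60   -> roll up to 60 seconds ahead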

Quote from: rjk
Thank you for the detailed explanation. I see now why my idea is not useful in general, and how it would only have changed how often a worker submits to a pool.

Don't dismiss >1 difficulty. It is quite obvious that it is a useful concept. Also, both fetching new work and delivering proofs of work you found are done through the "getwork" JSON-RPC function. It's a very bad API design, but "getwork" refers to both fetching new work AND handing in results. So yes, higher difficulty does mean fewer "getwork" requests, unless miners ignore the target they are given.

Yes, using >1 difficulty WILL reduce bandwidth and CPU load on the server. Coupled with ntime rolling you can reduce the load coming from the fastest miners by a lot.

It is true that getting new work takes a few bytes more bandwidth and some CPU cycles more than delivering work results. But that doesn't mean the second is insignificant. Take a look at this thread for examples of both types of request and response: https://bitcointalk.org/index.php?topic=51281.0

Luke-jr. and I set up a wiki page showing what the different miners and servers support: https://en.bitcoin.it/wiki/Getwork_support. Hassle your favorite developer to get better support. I added "expire=N" (2nd generation roll ntime) and ">1 difficulty" columns just now.

So yes, you can optimize the hell out of fast miners. The problem is slow miners. The noncerange extension could at least help a bit with the server CPU usage they cause, but it is not widely supported, as you can see in the feature tables on the wiki page.

cheat_2_win
Full Member
February 03, 2012, 03:01:52 PM
 #20

Doesn't P2pool already support what you are discussing here?