Bitcoin Forum
Author Topic: [ANN] Stratum mining protocol - ASIC ready  (Read 143484 times)
kano
Legendary | Activity: 2268 | Linux since 1997 RedHat 4
September 29, 2012, 06:20:36 AM
 #121

Meanwhile ... Smiley
I think there is a change needed to the protocol ... interested in your opinion.

Suggestion: you should completely remove "mining.set_difficulty" and put difficulty in "mining.notify"

The main issue I see is that splitting the information across 2 messages overly complicates difficulty handling, by creating problems that must then be worked around

The only argument I've heard so far for having it separate is that it saves some bytes per "notify" - but the whole point of stratum is that you don't need to send data very often - so who cares if you save ~1kbyte an hour per worker? ...
(less than 20 bytes x 1 "notify" per minute)
... and difficulty changes (requiring a whole new "notify") would not be an overhead since you would simply send the new difficulty change as part of the next "notify" as per normal (I can think of no urgency to send a difficulty change before the next notify)

"set_difficulty" seems to represent a work restart in exactly the same way as a "notify" does.
In fact it seems to be similar (but not the same) level as a "notify" with "Clean Jobs"=true
Any work you are working on is no longer based on the definition when the work was started
If the difficulty actually went down, you may end up throwing away work that is now valid, since the pool will accept the lower difficulty work at the time it sends out the "set_difficulty", but the miner has to receive and process that message before it will allow the lower difficulty work
If the difficulty went up then your work that was valid at the time it was started may no longer be valid, even though only a "Clean Jobs"=true (i.e. an LP) should make the work invalid

This also means an issue in the miner that may have been overlooked so far:
The code that deals with checking the difficulty must all be exclusively thread locked, since the difficulty is not at the work level, it is at a global level (with the current "getwork", difficulty is a direct attribute of the work)
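Kano's locking point can be sketched in a few lines of Python (illustrative only - the names here are made up, not cgminer's actual structures): because the difficulty arrives in its own message and applies per connection rather than per work item, every thread that checks a share must synchronise on one shared value.

```python
import threading

# Illustrative sketch (not cgminer code): difficulty lives per connection,
# not per work item, so every thread that checks a share must lock it.
class StratumConnection:
    def __init__(self, difficulty=1.0):
        self._difficulty = difficulty
        self._lock = threading.Lock()

    def set_difficulty(self, new_diff):
        # Network thread, on receiving mining.set_difficulty.
        with self._lock:
            self._difficulty = new_diff

    def current_difficulty(self):
        # Hashing/submit threads must take the same global lock just to read.
        with self._lock:
            return self._difficulty

conn = StratumConnection()
conn.set_difficulty(16.0)
print(conn.current_difficulty())
```

With getwork, by contrast, the target travels inside each work item, so there is no shared state to lock.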

Also, how is this handled at the pool?
It certainly represents a loss of any lower difficulty work that was sent to the pool after the pool has sent the "set_difficulty"
(remember, pool<->miner is not instant ... in fact it is quite a long time when counting hashes ...)

The problem I see is that having them as 2 different messages means that what actually happens when a "set_difficulty" is sent depends on the pool implementation as well as the miner implementation. It also complicates that process unnecessarily for no real gain.
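For reference, the two server-to-miner notifications being argued about look roughly like this as JSON-RPC lines (field layout per the Stratum draft; the hex values here are placeholders). Kano's suggestion amounts to folding the single parameter of "set_difficulty" into "notify"'s parameter list.

```python
import json

# mining.set_difficulty carries one value: the new share difficulty.
set_difficulty = {"id": None, "method": "mining.set_difficulty", "params": [16]}

# mining.notify carries the job; note that difficulty is absent.
notify = {"id": None, "method": "mining.notify", "params": [
    "job1",      # job_id
    "00" * 32,   # prevhash (placeholder)
    "",          # coinb1 (placeholder)
    "",          # coinb2 (placeholder)
    [],          # merkle branch
    "00000002",  # version
    "1c2ac4af",  # nbits
    "504e86b9",  # ntime
    False,       # clean_jobs
]}
print(json.dumps(set_difficulty))
print(json.dumps(notify))
```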

P.S. ckolivas doesn't care about this - he ignored me Smiley

Pool: https://kano.is Here on Bitcointalk: Forum BTC: 1KanoPb8cKYqNrswjaA8cRDk4FAS9eDMLU
FreeNode IRC: irc.freenode.net channel #kano.is Majority developer of the ckpool code
Help keep Bitcoin secure by mining on pools with full block verification on all blocks - and NO empty blocks!
Luke-Jr
Legendary | Activity: 2268
September 29, 2012, 06:36:40 AM
 #122

I wouldn't care about it either. If one is implementing Stratum in the first place, including difficulty in each job really doesn't help any.

-ck
Moderator | Legendary | Activity: 2338 | Ruu \o/
September 29, 2012, 06:47:58 AM
 #123

P.S. ckolivas doesn't care about this - he ignored me Smiley
To be fair, I had exactly the same discussion with Eleuthria but I had bigger fish to fry and stopped caring about it. He or slush can come and defend why they think it's ok. I'll work around the protocol whatever it is.

Primary developer/maintainer for cgminer and ckpool/ckproxy.
ZERO FEE Pooled mining at ckpool.org 1% Fee Solo mining at solo.ckpool.org
-ck
kano
Legendary | Activity: 2268 | Linux since 1997 RedHat 4
September 29, 2012, 06:58:56 AM
 #124

I wouldn't care about it either. If one is implementing Stratum in the first place, including difficulty in each job really doesn't help any.
Well - that immediately means it must be defcon 4 critical if you don't think it's necessary Smiley

Mobius
Hero Member | Activity: 946
September 29, 2012, 09:59:46 AM
 #125

I wouldn't care about it either. If one is implementing Stratum in the first place, including difficulty in each job really doesn't help any.

Does that indicate you won't be copying ("merging") ckolivas' stratum code into your spork?
or
Will you be developing your own code or waiting for someone else to copy ("merge") from?
Luke-Jr
Legendary | Activity: 2268
September 29, 2012, 10:04:15 AM
 #126

I wouldn't care about it either. If one is implementing Stratum in the first place, including difficulty in each job really doesn't help any.

Does that indicate you won't be copying ("merging") ckolivas' stratum code into your spork?
or
Will you be developing your own code or waiting for someone else to copy ("merge") from?
I was responding to Kano's request to change the Stratum protocol in a stupid way because of some implementation problem he imagines exists.

As I said before, if someone else writes the code, I will (probably - ie, if the code is reasonable) merge it into BFGMiner.
Con is capable of producing reasonable code, so I expect that when he finishes it, I will accept it into BFGMiner.

Mobius
Hero Member | Activity: 946
September 29, 2012, 10:11:52 AM
 #127

I was responding to Kano's request to change the Stratum protocol in a stupid way because of some implementation problem he imagines exists.

As I said before, if someone else writes the code, I will (probably - ie, if the code is reasonable) merge it into BFGMiner.
Con is capable of producing reasonable code, so I expect that when he finishes it, I will accept it into BFGMiner.

It will be interesting to see how you alter his code when you copy it into your cgmerge branch to suit your needs.
Luke-Jr
Legendary | Activity: 2268
September 29, 2012, 10:15:38 AM
 #128

It will be interesting to see how you alter his code when you copy it into your cgmerge branch to suit your needs.
I agree.

kano
Legendary | Activity: 2268 | Linux since 1997 RedHat 4
September 29, 2012, 10:51:14 AM
 #129

I was responding to Kano's request to change the Stratum protocol in a stupid way because of some implementation problem he imagines exists.

As I said before, if someone else writes the code, I will (probably - ie, if the code is reasonable) merge it into BFGMiner.
Con is capable of producing reasonable code, so I expect that when he finishes it, I will accept it into BFGMiner.
No, the word is not 'accept', it's 'copy' - reality please - you seriously have some problem with reality ...

The code will be more than reasonable, the problem for you will be how 'unreasonable' you make it after you copy it and if you can even implement it in your cgminer clone.

Since you've already made it pretty clear that it's too difficult for you to do - it's been obvious for a while you'll just copy it from cgminer

Like most of the internals of cgminer, your clone is certainly no better - since you copy the code directly from cgminer - but it is often worse, due to the changes you've made without any guidance or feedback ... that you seriously need ... (you also clearly have issues when it comes to software performance)

Even your statement above, about my comment regarding the Stratum protocol, proves you can't see past your own shallow understanding of something. You do not know everything, and your know-it-all attitude is your own downfall since you clearly do NOT know it all.
It is not a stupid change suggestion, and it is quite easy to see that it produces an unnecessary implementation issue - a clear timing issue due to the fact the information has been separated. This yet again shows how severely unable you are to test code properly, since the problem is quite apparent as soon as you understand that no code takes 'zero' time to execute (as you once argued with me about code that you said does take zero time to execute Tongue) - and as soon as a network delay is added, the issue becomes many times worse.

-ck
Moderator | Legendary | Activity: 2338 | Ruu \o/
September 29, 2012, 01:29:42 PM
 #130

All I can say is

http://www.youtube.com/watch?v=BNsrK6P9QvI

http://www.youtube.com/watch?v=j_oUjMxR0YU

420
Hero Member | Activity: 756
September 29, 2012, 07:18:25 PM
 #131

Surprising that mining has not yet moved away from difficulty-1 mining.


what is the significance?

Donations: 1JVhKjUKSjBd7fPXQJsBs5P3Yphk38AqPr - TIPS
the hacks, the hacks, secure your bits!
jgarzik
Legendary | Activity: 1470
September 29, 2012, 07:39:36 PM
 #132

Surprising that mining has not yet moved away from difficulty-1 mining.


what is the significance?

Same payout, less server <-> miner traffic.
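The arithmetic behind "same payout, less traffic" is easy to sketch, using the 54 GH/s ASIC figure mentioned later in this thread: a difficulty-d share is found 1/d as often but counts d times as much, so expected reward per hash is unchanged while submissions shrink by a factor of d.

```python
hashrate = 54e9            # 54 GH/s, an early ASIC figure from this thread
diff1_hashes = 2 ** 32     # expected hashes per difficulty-1 share

for d in (1, 16, 256):
    shares_per_min = 60.0 * hashrate / (diff1_hashes * d)
    # Each share is "worth" d difficulty-1 shares, so expected payout is unchanged.
    print(f"difficulty {d:>3}: ~{shares_per_min:8.1f} shares/min submitted")
```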


Jeff Garzik, bitcoin core dev team and BitPay engineer; opinions are my own, not my employer.
Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
420
Hero Member | Activity: 756
September 29, 2012, 07:59:06 PM
 #133

Surprising that mining has not yet moved away from difficulty-1 mining.


what is the significance?

Same payout, less server <-> miner traffic.

what are the technicals what does this mean though, difficulty-1?

jgarzik
Legendary | Activity: 1470
September 29, 2012, 08:11:02 PM
 #134

what are the technicals what does this mean though, difficulty-1?

https://en.bitcoin.it/wiki/Difficulty
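To summarise the wiki page: difficulty just scales the share target, so a higher difficulty makes valid shares rarer, not more valuable per hash - you can't earn more by "choosing" a huge difficulty. A minimal sketch, using the pool ("pdiff") difficulty-1 target:

```python
# Difficulty-1 pool target: the largest hash value that still counts as
# a difficulty-1 share.
TARGET_1 = 0x00000000FFFF0000000000000000000000000000000000000000000000000000

def target_for_difficulty(d):
    # A difficulty-d share must hash below roughly TARGET_1 / d.
    return TARGET_1 // d

def difficulty_of_hash(h):
    # How hard a given 256-bit hash value was to find.
    return TARGET_1 / h

print(hex(target_for_difficulty(16)))
```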

Jeff Garzik, bitcoin core dev team and BitPay engineer; opinions are my own, not my employer.
Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
420
Hero Member | Activity: 756
September 29, 2012, 08:37:38 PM
 #135

what are the technicals what does this mean though, difficulty-1?

https://en.bitcoin.it/wiki/Difficulty


well if I make difficulty-5000000000 wouldn't I make much more bitcoins than everyone

slush
Legendary | Activity: 1372
September 30, 2012, 08:19:12 PM
 #136

I think there is a change needed to the protocol ... interested in your opinion.
Suggestion: you should completely remove "mining.set_difficulty" and put difficulty in "mining.notify"

I was thinking about this quite a lot when I designed calls and parameters and I think I can defend current protocol "as is":

1. The major architectural reason for not including the share target in the job broadcast message is: the share target is NOT a part of the job definition ;-). The value of the target is not related to the job itself; it is bound to the connection/worker and its hashrate. Also, the difficulty can technically change at any time, not necessarily during a new job broadcast. For example, my retargeting algorithm (not on the production server yet) sends set_difficulty immediately when the pool detects dangerous flooding by the miner, just to stop the flood without sending him a new job, because jobs are generated pool-wide at some predefined interval.

2. The job definition in the broadcast is the same for everybody. Maybe this is not so obvious, so I repeat it :-) : That message is composed one time, but broadcast to everybody. Including a single connection-specific variable would break the design completely, because pools would need to compose the message per-connection, which is a major performance downside.

Under the current protocol design, the miner receives all connection-specific values through other channels (at this time they're "coinbase1" in mining.subscribe and "difficulty" in mining.set_difficulty).

Quote
The only argument I've heard so far for having it separate is that is saves some bytes per "notify"

Who told you so? Not me, correct? :-)

Quote
"set_difficulty" seems to represent a work restart in exactly the same way as a "notify" does.

I understand from your description that you need to know the target difficulty when you're creating the job. Well, this depends on the implementation of the miner; you can start a job without knowing the "target" difficulty. Technically there's no reason to "restart" anything; you can just filter out low-diff shares on the output (some miners are doing it this way). I understand this may be tricky in cgminer, because of your heavy threading architecture and locking issues, but this definitely isn't a reason in itself why the protocol should be changed.

Quote
Also, how is this handled at the pool?

There are many ways to handle difficulty on the pool, and there's no recommended solution so far.

kano
Legendary | Activity: 2268 | Linux since 1997 RedHat 4
October 01, 2012, 12:14:18 AM
 #137

I think there is a change needed to the protocol ... interested in your opinion.
Suggestion: you should completely remove "mining.set_difficulty" and put difficulty in "mining.notify"

I was thinking about this quite a lot when I designed calls and parameters and I think I can defend current protocol "as is":

1. The major architectural reason for not including the share target in the job broadcast message is: the share target is NOT a part of the job definition ;-). The value of the target is not related to the job itself; it is bound to the connection/worker and its hashrate. Also, the difficulty can technically change at any time, not necessarily during a new job broadcast. For example, my retargeting algorithm (not on the production server yet) sends set_difficulty immediately when the pool detects dangerous flooding by the miner, just to stop the flood without sending him a new job, because jobs are generated pool-wide at some predefined interval.
The fact that it isn't part of the work definition in your protocol is what creates the issues.
It's a separate, global, per-worker, independent piece of information according to your protocol.

Basically you are defining work that you will reject - and that you must reject, since the work returned cannot prove the difficulty that it was processed at - work difficulty is not encoded anywhere in the hash either (you left it out of the hash to gain performance ...)

This means that if anyone generates a set of shares, but has connectivity problems, and during that time they were sent a difficulty increase, they will lose work that was valid at the lower difficulty, but not at the new difficulty. Late submission of work is not handled by the protocol in this case.

A difficulty change does indeed mean throwing away work that was valid prior to receiving the difficulty change ... since the work is missing the difficulty information at both the pool and the miner.

The time from starting work to it being confirmed by the pool is quite long ... it includes the network delay from the miner to the pool and back ... which, when hashing at 54GH/s on an ASIC device, is certainly the slowest part of the whole process, not the mining.
This also means that even during normal connectivity, work will often already be in transit when a difficulty change is received

Quote
2. Job definition in broadcast is the same for everybody. Maybe this is not so obvious, so I repeat it :-) : That message is composed one-time, but broadcasted to everybody. Including single connection-specific variable will break the design completely, because pools will need to compile the message per-connection, which is major performance downside.
No - not at all.
You must already keep information valid per worker: the difficulty ... as well as a bunch more, like who they are and where they are, that must be looked at in order to send the work out.
You simply add the work difficulty to the information sent - rather than sending it separately.
Your code MUST already process through several levels to get from the job definition to sending it to a worker.
... and suggesting that a software 'change' is a reason to not implement something is really bad form Tongue
Adding a small amount of information per worker is a negligible hit on the pool software since the pool must already have that information per worker and it is simply added to the message, not a regeneration of the message.

Quote
At current protocol design miner will receive all connection-specific values by other channels (at this time they're "coinbase1" in mining.subscribe and "difficulty" in mining.set_difficulty).

Quote
The only argument I've heard so far for having it separate is that is saves some bytes per "notify"

Who told you so? Not me, correct? :-)
Yep - not you.
I was looking for reasons and stating why I wanted them - I had heard none reasonable so far at that point Smiley

Quote
Quote
"set_difficulty" seems to represents a work restart in exactly the same way as a "notify" does.

I understand from your description, that you need to know target difficulty when you're creating the job. Well, but this depends on implementation of the miner, you can start a job without knowing "target" difficulty. Technically there's no reason to "restart" anything, you can just filter out low-diff shares on the output (some miners are doing it this way). I understand this may be tricky in cgminer, because of your heavy threading architecture and locking issues, but this definitely isn't the reason itself why protocol should be changed.
No it's not trickier in cgminer.
It's a performance hit due to making something global for all workers' work, yet the value can change at any time; it's not an attribute of the work according to the pool, yet in reality it indeed is.

Basic database 101 - 3rd normal form - 2 phase commit - and all that jazz Smiley

It's simply the case that any miner that isn't brain dead and does use threading properly (like any good software has for a VERY long time Smiley) has a locking issue when dealing with work: the test definition for a share's validity can unknowingly change before the test starts (the pool sends the difficulty change), or the change can become known during the test but before it completes (arrival of the difficulty change), and thus the result is no longer true (which will also not be rare when a difficulty change arrives)
It forces a global thread lock on access to the work difficulty information - since it is global information - you can't put it in the work details since the pool doesn't do that either.

Quote
Quote
Also, how is this handled at the pool?

There are many ways how to handle difficulty on the pool and there's no recommended solution so far.
Just thought I'd leave that one as it stands Smiley

-------

... and just in case it wasn't obvious about the point of this discussion ...
The point of my discussion is not to say that the current protocol cannot be implemented - it will be - and it will include these issues if they are not changed.
It's discussing why the protocol should or shouldn't include the difficulty as part of the work information.

-------

However, I will also add that this part of the protocol definition seems to be directly aimed at helping the pool (but in reality with very little performance gain) at the expense of the miner sometimes losing shares unnecessarily.

-ck
Moderator | Legendary | Activity: 2338 | Ruu \o/
October 01, 2012, 02:35:42 AM
 #138

However, I will also add, that this part of the protocol definition seems to be directly aimed at helping the pool (but in reality very little performance gain) at the expense of the miner losing shares unnecessarily sometimes.
This may well be the most important part of the argument. If it's of no detriment to the pool beyond the effort required to implement it, and it's of benefit to the miner, then I can only see an advantage. No need to set the protocol in stone at this early a stage. I'll implement whatever it is, but as I said earlier, I saw it kano's way.

m0mchil
Full Member | Activity: 171
October 01, 2012, 09:25:13 AM
 #139

How about making the difficulty re-target an optional parameter of mining.notify?

slush
Legendary | Activity: 1372
October 01, 2012, 10:45:39 AM
 #140

The fact that it isn't part of the work definition in your protocol is what creates the issues.

I can agree that it creates an issue in your implementation. It doesn't create any issue in other miners or in my proxy, which supports the same protocol.

Quote
It's a separate, global per worker, independent piece of information according to your protocol.

Yes, it is information independent of the job definition. As I said previously, there's no need to bundle it with the job definition; it can be freely used independently.

Quote
Basically you are defining work that you will reject - and that you must reject, since the work returned cannot prove the difficulty that it was processed at - work difficulty is not encoded anywhere in the hash either (you left it out of the hash to gain performance ...)

Basically - you're right. In some edge cases, like connecting a 2THash miner with difficulty 1 to the pool, it may happen that the miner will waste the first few seconds of work. However, I'm thinking about a "mining.propose_difficulty" command, where the miner will be able to propose some minimum difficulty for the connection instead of the "default" 1, so with a proper implementation no waste will happen.

Quote
but has connectivity problems, and during that time they were sent a difficulty increase, they will lose work that was valid at the lower difficulty, bit not at the new difficulty. Late submission of work is not handled by the protocol in this case.

This is TCP, not HTTP. You cannot "lose" part of the data transmitted over the socket. You'll receive retransmissions of TCP packets (so you'll receive the difficulty adjustment), or the socket will be dropped and the "late submission" won't be accepted in any way. So that's not an argument here.

Quote
A difficulty change does indeed mean throwing away work that was valid prior to receiving the difficulty change ...

Every sane pool implementation will send the standard retargeting command together with a new job (and clean_jobs=True). So the miner has to drop previous jobs anyway - no work is lost here.
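That pairing, as the two lines a pool would write back-to-back on the socket (placeholder job values; layout per the Stratum draft):

```python
import json

# set_difficulty first, then the new job with clean_jobs=true so the miner
# drops all outstanding work cut at the old difficulty.
lines = [
    '{"id": null, "method": "mining.set_difficulty", "params": [32]}',
    '{"id": null, "method": "mining.notify", "params": '
    '["job2", "", "", "", [], "00000002", "1c2ac4af", "504e86b9", true]}',
]
msgs = [json.loads(line) for line in lines]
print(msgs[0]["method"], "->", msgs[1]["method"])
```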

Quote
since the work is missing the difficulty information at both the pool and the miner.

We're repeating here. You don't need to know target difficulty when you're creating work in miner, right?

Quote
The time from starting work, to it being confirmed, by the pool is quite long ... it includes the network delay from the miner to the pool and back ... which when hashing at 54GH/s using an ASIC device, is certainly the slowest part of the whole process, not the mining.

No, I probably don't understand your point (language barrier). How is network latency during share submission related to a difficulty change?

Quote
This also means that even during normal connectivity, work will often be in transit already when a difficulty change is received

Most common case - yes. But not necessarily; there are use cases where sending it separately has some advantage, as I described above.

Quote
You must already keep information valid per worker: the difficulty ... as well as a bunch more: like who they are and where they are, that must be looked at in order to send the work out.
You simply add the work difficulty to the information sent - rather than send it separately.
Your code MUST already process through levels to get from the job definition to sending it to a worker.

Why MUST? You didn't tell me the real reason why it MUST be known at the beginning of the job. Except for that threading issue, which is implementation specific and can be solved easily.

Why per worker? Difficulty is related to the connection, not the worker. Btw, bundling difficulty into the job definition doesn't change that either.

As far as I can say, m0mchil and I have implemented the same thing already (me in the proxy, m0mchil in poclbm) and we don't have such issues at all. All this suggests to me that your complaints are related to the miner implementation.

Quote
... and suggesting that a software 'change' is a reason to not implement something is really bad form Tongue

From the architecture view, keeping it separate is much cleaner. Maybe some things lead to ugly hacks in some implementation, but optimizing the *protocol* for some implementation *is* a bad example of doing software architecture.


Quote
Adding a small amount of information per worker is a negligible hit on the pool software since the pool must already have that information per worker and it is simply added to the message, not a regeneration of the message.

Again, I don't understand your point here. Of course the pool knows the difficulty per connection. But how is this related to your issue with retargeting? I was talking about the fact that creating *jobs* is pool-wide, which is a different story.

Quote
I was looking for reasons and stating why I wanted them - I had heard none reasonable so far at that point Smiley

I think I'm giving you a lot of reasonable points. I'm just aware that you are too focused on the implementation in cgminer.

Quote
No it's not trickier in cgminer.
It's a performance hit due to making something global for all worker's work, yet the value can change at any time, it's not an attribute of the work according to the pool, yet in reality it indeed is.

How the hell can a lookup of a single value (read only!) become a "performance bottleneck"?

Quote
Basic database 101 - 3rd normal form - 2 phase commit - and all that jazz Smiley

Don't lecture me about database design, please ;-).

I'm all fine with this. Just make additional filtering of difficulty when you're sending shares to the network and I'll be happy :-).
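The output-side filtering slush describes can be sketched like this (illustrative names, not any miner's real structures): each share is stamped with the difficulty it actually met, and under-difficulty ones are dropped just before submission instead of locking the whole hashing path.

```python
def filter_for_submission(shares, current_diff):
    """Drop shares that no longer meet the connection difficulty.

    shares: list of (job_id, nonce, met_difficulty) tuples, where
    met_difficulty is how hard the found hash actually was.
    """
    return [s for s in shares if s[2] >= current_diff]

pending = [("job1", 0x1234, 8.0), ("job1", 0x5678, 64.0)]
print(filter_for_submission(pending, 32.0))
```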

Quote
It's simply a case of any miner that isn't brain dead and does use threading properly (like any good software has used for a VERY long time Smiley) to deal with work properly, has a locking issue dealing with the fact that with testing the validity of a share, the test definition can unknowingly change before the test starts (the pool sends the difficulty change) and the change can be known during the test, but before, the test completes (arrival of the difficulty change) and thus the result is no longer true (which will also not be rare when a difficulty change arrives)
It forces a global thread lock on access to the work difficulty information - since it is global information - you can't put it in the work details since the pool doesn't do that either.

Do you know the term "over-optimization"? This seems to be such a case. You can safely forget about the race condition where the miner receives a new difficulty *exactly* at the same time some thread is checking share validity. Nothing serious will happen if you send one, two or ten low-diff shares in this case.

Quote
It forces a global thread lock on access to the work difficulty information

This caught my eye. You don't have read-write locks in C/C++? :-P

Quote
It's discussing why the protocol should or shouldn't include the difficulty as part of the work information.

I hope I explained this clearly already. The job definition and the difficulty are logically two separate things (although it maybe doesn't look like it in your implementation). Short summary: the job may change without a difficulty change, and the difficulty may change without a job change. And the job is the same for all miners on the pool, but the difficulty is defined per-connection. Is this enough reason for not bundling them together?

Quote
However, I will also add, that this part of the protocol definition seems to be directly aimed at helping the pool (but in reality very little performance gain)

Yeah, this is a valid point! Of course it is helping the pool, I'm not hiding it! But I'm strongly against saying "this feature is helping the pool" and "this other feature is helping miners". Both pool and miners have the same goal - the highest real hashrate, lowest resource consumption and highest block reward. Period. There's no real reason to pit miners against pool ops. When the protocol gives pool ops tools to drive resource consumption in some nice way, then miners will benefit from this also - by faster server replies, lower stale rates and potentially lower fees, because the pool doesn't need to handle ugly peaks in load like it does nowadays.

Quote
at the expense of the miner losing shares unnecessarily sometimes.

This may happen only on some pool implementations and only in some edge cases. Losing shares is definitely not "by design" or expected.
