Bitcoin Forum
Author Topic: [ANN] Stratum mining protocol - ASIC ready  (Read 145767 times)
kano
Legendary
Activity: 4466
Merit: 1798
Linux since 1997 RedHat 4
October 01, 2012, 01:59:33 PM
#141

...
Quote
at the expense of the miner losing shares unnecessarily sometimes.

This may happen only on some pool implementations and only in some edge cases. Losing jobs is definitely not "by design" and expected.
It is expected, with a percentage on every difficulty increase that depends on the old and new difficulty.
Since, as you say, difficulty is NOT an attribute of work, when you start a piece of work the validity of the result can change at any time up until you test the difficulty of the work after completing it.

The reality of any nonce found in a piece of work is that it is ALWAYS valid work at difficulty 1.
Then of course the obvious follows on from that: at difficulty 2, on average every 2nd nonce found will be valid work ... and so on for each difficulty level.

However, if the pool increases the difficulty, there is a % chance, related to the change in difficulty, that you will have to throw away work that was valid when you started it.

To work out that %:
A difficulty increase from A to B means a ((1/A) - (1/B)) * 100% chance of losing work, due to the difficulty change, that would have been valid if the difficulty were instead an attribute of the work.

e.g. a difficulty change from 1 to 2 (obviously) means each nonce found in the work currently being worked on has a 50% chance of being discarded instead of accepted - even though it was valid when you started on it;
a change from 1.8 to 2.2 means roughly a 10% chance ...
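kano's formula can be sanity-checked with a few lines of Python (an illustrative sketch; the function name is mine, not from any miner code):

```python
def lost_work_fraction(old_diff: float, new_diff: float) -> float:
    """Chance that an in-flight share, valid at old_diff, fails new_diff.

    The probability that a random nonce meets difficulty D scales as 1/D,
    so the shares lost to a retarget are exactly those that fall between
    the old and new targets.
    """
    if new_diff <= old_diff:
        return 0.0  # difficulty dropped or unchanged: nothing is discarded
    return (1.0 / old_diff) - (1.0 / new_diff)

print(lost_work_fraction(1.0, 2.0))            # 0.5 -> 50% chance of discard
print(round(lost_work_fraction(1.8, 2.2), 3))  # 0.101 -> roughly 10%
```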

It's nothing like "some edge cases" - guessing doesn't work too well ...

Pool: https://kano.is - low 0.5% fee PPLNS 3 Days - Most reliable Solo with ONLY 0.5% fee   Bitcointalk thread: Forum
Discord support invite at https://kano.is/ Majority developer of the ckpool code - k for kano
The ONLY active original developer of cgminer. Original master git: https://github.com/kanoi/cgminer
"You Asked For Change, We Gave You Coins" -- casascius
slush (OP)
Legendary
Activity: 1386
Merit: 1097
October 01, 2012, 02:20:31 PM
#142

kano, your calculations are irrelevant. It depends on the implementation on the pool side, and there are algorithms which can change difficulty without losing any work (e.g. sending it together with a new job which already contains clean_jobs=True).

Quote
Since, as you say, difficulty is NOT an attribute of work

Difficulty is not an attribute of the job definition, which is clearly true (what logical relation is there between a job and difficulty?), but set_difficulty can be sent together with a new job definition, just in a separate message.

Can we please dig into the troubles you have implementing set_difficulty as a separate call? Is it something with threading? What exactly?

This high-level discussion is going nowhere: I'm stating that two separate stratum client implementations don't have any problem with it, and you're stating that you have serious issues with it but don't think it is an implementation-specific issue. The only obvious way to settle this is to dig into the implementation details you are having trouble with. We should be more constructive.

kano
Legendary
Activity: 4466
Merit: 1798
Linux since 1997 RedHat 4
October 01, 2012, 03:48:56 PM
#143

kano, your calculations are irrelevant. It depends on the implementation on the pool side, and there are algorithms which can change difficulty without losing any work (e.g. sending it together with a new job which already contains clean_jobs=True).
So you are now saying that they should be joined to stop losing work? Smiley

However, if you can't understand those simple calculations then I guess there's not much point in this discussion.
Either tell me what is 'wrong' with them, or try to understand them.
Quote
Quote
Since, as you say, difficulty is NOT an attribute of work

Difficulty is not an attribute of the job definition, which is clearly true (what logical relation is there between a job and difficulty?), but set_difficulty can be sent together with a new job definition, just in a separate message.
Well ... you have forced them to be two separate messages.
Quote
Can we please dig into the troubles you have implementing set_difficulty as a separate call? Is it something with threading? What exactly?
As I have already stated, it's not a difficulty in implementation - I'm not even implementing it.
It's a design issue that, in my opinion, drives a wedge between the expected results and a well-designed implementation.
Quote
This high-level discussion is going nowhere: I'm stating that two separate stratum client implementations don't have any problem with it, and you're stating that you have serious issues with it but don't think it is an implementation-specific issue. The only obvious way to settle this is to dig into the implementation details you are having trouble with. We should be more constructive.
The problem is that the protocol means throwing away shares - and your client implementations will be throwing away these shares.

Just because you ignore that they are throwing them away (or, more likely, don't see them) doesn't mean it doesn't happen.
It's quite clear why it happens, as I have already stated.

Any work that is in progress when a difficulty increase arrives has a % chance of being discarded due to the increase.
However, if difficulty were an attribute of the work, these shares would either not show up as rejected or discarded for being of too low difficulty (if your miner doesn't hide that), or would not disappear (if your miner hides that).

Miners never go idle doing nothing unless they are programmed badly - every time a difficulty increase arrives, the miner will be working on something at the old difficulty ... oh and as I've already implied the obvious: network transfers take a VERY long time compared to hashing devices or even CPUs ...

slush (OP)
Legendary
Activity: 1386
Merit: 1097
October 01, 2012, 04:19:55 PM
#144

So you are now saying that they should be joined to stop losing work? Smiley

Yes, I'm saying that it is possible. I'm not hiding the fact that it is possible in some cases to lose work, but it is not necessary. It depends on how aggressive the pool is about tolerating a flood of shares.

As I described in one of the first posts about this, my implementation on the pool has two modes: minor difficulty changes are propagated together with the new job definition, so no work is lost for 99.9% of users. But when some really, really strong miner joins at diff 1 (i.e. it doesn't use mining.suggest_difficulty on startup), the pool will send it a new difficulty immediately, without waiting for the new job definition (which, as I already stated, is generated periodically for all connected miners). In this case a few seconds of work really is lost, but it protects the pool from DoS. And all of this happens only in an edge case (a really, really strong miner not kind enough to suggest a higher difficulty by itself).

Quote
However, if you can't understand those simple calculations then I guess there's not much point in this discussion.
Either tell me what is 'wrong' with them, or try to understand them.

Come on, I understand these calculations. I just stated that they're irrelevant, not wrong. Please don't become another troll on this forum :-P.

Quote
Quote
Difficulty is not an attribute of the job definition, which is clearly true (what logical relation is there between a job and difficulty?), but set_difficulty can be sent together with a new job definition, just in a separate message.
Well ... you have forced them to be two separate messages.

I "forced" them to be separate messages because of arguments which I already gave you.

Quote
The problem is that the protocol means throwing away shares - and your client implementations will be throwing away these shares.

Not necessarily, as I have described many times already. set_difficulty CAN be sent together with mining.notify.

Quote
Just because you ignore that they are throwing them away (or, more likely, don't see them) doesn't mean it doesn't happen.

Can you please summarize the reasons for having it all together in one message? I'm serious about this. I've probably lost track of all the reasons during this discussion.

Quote
Miners never go idle doing nothing unless they are programmed badly - every time a difficulty increase arrives, the miner will be working on something at the old difficulty

Please correct me if I'm wrong, but the worker is not working on the difficulty, it is working on the job. That job would be generated regardless of the current difficulty. A share can then be discarded by "output control" without losing any real job. I don't see any issue with this. The mining "core" doesn't need to know the current difficulty. Generating a share below the current target is not lost work, it is just low-difficulty work.

Quote
... oh and as I've already implied the obvious: network transfers take a VERY long time compared to hashing devices or even CPUs ...

I have already asked you about this, but where's the relation between latency and difficulty? The worst case is that you'll see a "low difficulty" response from the pool when the miner submits a share exactly at the time the difficulty changes. But what is the *real* problem with this? Btw, this can happen only when the difficulty is changed at a different time from when the new job definition is broadcast. If the difficulty is changed along with a clean_jobs=True job, this won't even happen, because that share would be stale anyway (regardless of the difficulty change).
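The "output control" described here can be sketched as a filter sitting between the hashing core and share submission (an illustration only, not actual proxy code; the function names and list-based interface are my assumptions):

```python
# DIFF1_TARGET is the standard Bitcoin difficulty-1 target (compact 0x1d00ffff expanded).
DIFF1_TARGET = 0x00000000FFFF0000000000000000000000000000000000000000000000000000

def share_difficulty(hash_int: int) -> float:
    """Difficulty achieved by a share: how far below the diff-1 target it falls."""
    return DIFF1_TARGET / hash_int

def output_control(hashes, pool_difficulty):
    """Pass on only shares meeting the pool's current difficulty.

    The hashing core that produced `hashes` never needed to know
    pool_difficulty; the filter decides what gets submitted.
    """
    return [h for h in hashes if share_difficulty(h) >= pool_difficulty]
```

In this model, a share whose difficulty falls between the old and new target is exactly the work that kano's percentage counts as lost.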

slush (OP)
Legendary
Activity: 1386
Merit: 1097
October 03, 2012, 04:41:05 PM
#145

I just released proxy version 0.8.5. It fixes some "unhandled errors" reported in the console and also implements the "midstate" getwork extension, which speeds up getwork when requested by a modern getwork miner.

Joshwaa
Hero Member
Activity: 497
Merit: 500
October 03, 2012, 05:25:28 PM
#146

I think you meant 0.8.6.

Like what I said : 1JosHWaA2GywdZo9pmGLNJ5XSt8j7nzNiF
Don't like what I said : 1FuckU1u89U9nBKQu4rCHz16uF4RhpSTV
slush (OP)
Legendary
Activity: 1386
Merit: 1097
October 03, 2012, 05:39:11 PM
#147

I found one bug almost instantly after release, so I removed the EXE of 0.8.5 from github and released 0.8.6 just now.

If you already downloaded 0.8.5 (four people did) and you don't see any issue after you start your miners, then you don't need to update. The bug was related to the situation where a miner doesn't send x-miner-extensions in the HTTP header.

I think you meant 0.8.6.

slush (OP)
Legendary
Activity: 1386
Merit: 1097
October 03, 2012, 07:30:41 PM
#148

I just released version 0.9.0, introducing experimental support for socks5 proxies and native support for Tor. If you don't need these features, you don't need to update from 0.8.6.

For mining over Tor, you need a Tor client running locally; then just run "mining_proxy.exe --tor". The proxy itself configures the socks5 settings, pool URL and port. All these settings can also be defined manually; this is just a shortcut for my pool.

makomk
Hero Member
Activity: 686
Merit: 564
October 03, 2012, 10:00:40 PM
#149

1. The major architectural reason for not including the share target in the job broadcast message is: the share target is NOT part of the job definition ;-). The target value is not related to the job itself; it is bound to the connection/worker and its hashrate. Also, the difficulty can technically change at any time, not necessarily during a new job broadcast. For example, my retargeting algorithm (not on the production server yet) sends set_difficulty immediately when the pool detects dangerous overflooding by a miner, just to stop the flood without sending a new job, because jobs are generated pool-wide at some predefined interval.

Yes. That's the problem. Suppose that the pool sends out a difficulty change, and then - at the same time - the miner submits a share, so that the two messages cross on the wire. What difficulty applies to that share? The pool and the miner have completely different views on this. In fact, pretty much the only safe time to change difficulty is when you're sending out a new item of work, because otherwise there's no way for the miner and the pool to prove to each other that a given share was meant to have a particular difficulty associated with it.

(The whole "every miner gets the same message" thing is unfortunate too. From what I can tell it makes it impossible to create a Stratum proxy that distributes work to several different upstream servers.)

Quad XC6SLX150 Board: 860 MHash/s or so.
SIGS ABOUT BUTTERFLY LABS ARE PAID ADS
slush (OP)
Legendary
Activity: 1386
Merit: 1097
October 03, 2012, 10:58:57 PM
#150

Yes. That's the problem. Suppose that the pool sends out a difficulty change, and then - at the same time - the miner submits a share, so that the two messages cross on the wire.  What difficulty applies to that share?

A share will be accepted according to the difficulty on the pool side. If the submitted share has a difficulty higher than the "new difficulty just sent by the pool", it will still be accepted, even if it was generated by a miner who knew only the old difficulty. Difficulty is not part of the job source data, so no work is lost on roundtrips, because a share submitted at the previous difficulty can still be accepted!

Quote
(The whole "every miner gets the same message" thing is unfortunate too. From what I can tell it makes it impossible to create a Stratum proxy that distributes work to several different upstream servers.)

"every miner gets the same message" is *major* improvement in that whole thing. Losing this feature will kill the whole concept. Can you please describe your feature request (?) a bit more? How it is supposed to work? If miner want to work for more pools, he can use more stratum connections and use some round robin strategy for generating jobs locally. Maybe I don't see your point.

kano
Legendary
Activity: 4466
Merit: 1798
Linux since 1997 RedHat 4
October 04, 2012, 05:32:48 AM
#151

Yes. That's the problem. Suppose that the pool sends out a difficulty change, and then - at the same time - the miner submits a share, so that the two messages cross on the wire.  What difficulty applies to that share?

A share will be accepted according to the difficulty on the pool side. If the submitted share has a difficulty higher than the "new difficulty just sent by the pool", it will still be accepted, even if it was generated by a miner who knew only the old difficulty. Difficulty is not part of the job source data, so no work is lost on roundtrips, because a share submitted at the previous difficulty can still be accepted!

...
You conveniently ignored half the answer Tongue
Seriously slush - why would you do that?
(which is why I'd given up responding above)

But since you mentioned half the answer - I'll tell you what you ignored:

If the submitted share was sent to the pool because it was above the difficulty at the time it was generated, but has a difficulty lower than the "new difficulty just sent by the pool", it will be rejected.
Since the difficulty is not an attribute of the work.

Thus the pool throws away that piece of work the miner did ... so the miner can lose work on a difficulty change ... and, as I already pointed out, there's an expected % that's easy to calculate.

Edit: there's also the reverse situation:
When the difficulty drops, the miner will NOT send a share at the lower difficulty until it knows about the lower difficulty - which will be some time AFTER the pool has decided to allow the lower difficulty.
Thus there is again an amount of work lost here.

-ck
Legendary
Activity: 4088
Merit: 1631
Ruu \o/
October 04, 2012, 06:03:54 AM
#152

Slush, the miners are concerned about the potential losses of moving to stratum as well as the gains. The miners know about the possible gains and are querying the possible losses. While I will do whatever I can within the mining software to minimise these, you need to explain why this compromise is there, as ultimately you are trying to convince miners to move to this protocol.

Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel
2% Fee Solo mining at solo.ckpool.org
-ck
slush (OP)
Legendary
Activity: 1386
Merit: 1097
October 04, 2012, 12:03:07 PM
Last edit: October 04, 2012, 12:52:43 PM by slush
#153

You're right. My biggest mistake is trying to explain everything in depth, from all angles, and to defend every corner case. That's definitely a problem, because the most important message then gets lost in my over-complicated explanations. I'll try to summarize it once again:

1. There's a major problem with bundling difficulty into the job definition. Stratum is designed to have the same job message for every subscribed miner. Putting difficulty, which is connection-specific, directly into this message would break the whole design. This is not just some minor advantage: it is like having job broadcasts 10000x faster for 10000 connected miners, simply by decoupling the job definition and the difficulty, because the server doesn't need to serialize the same message again and again with just one changed number. 10000x faster broadcasts potentially mean a 10000x lower stale ratio for miners.

2. Previously I described some edge cases where sending the difficulty alone (without a job) can potentially improve things. Maybe I wasn't absolutely clear, but this is not supposed to happen normally, only in extreme corner cases. When I have to choose between losing a few seconds of a flooding miner's work (say 500+ shares per second) and overall pool stability, I'll choose pool stability. Again, almost all miners gain from this decision, because a quick and reliable pool is in the interest of all participants.

3. As I have stated many times, in the standard case mining.set_difficulty and mining.notify are *expected* to be sent together. And because mining.set_difficulty doesn't restart jobs AND the pool sends set_difficulty immediately before notify, losing even a millisecond of miner work is impossible.

So for the standard case, all this discussion is about these two options:

Current Stratum protocol (Edit: both messages are sent together):
Code:
{"id": null, "method": "mining.set_difficulty", "params": [NEW_DIFFICULTY]}
{"id": null, "method": "mining.notify", "params": [JOB_DEFINITION]}

Alternative proposal:
Code:
{"id": null, "method": "mining.notify", "params": [JOB_DEFINITION, NEW_DIFFICULTY]}

As you can see, we're making a mountain out of a molehill, because in the standard case there's no difference, except that the current proposal is more flexible and much faster in the real world.
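Why the standard case behaves the same under either wire format: both messages travel in order on one TCP stream, so a client that simply processes them in arrival order applies the new difficulty before it starts the new job. A minimal dispatcher sketch (the class and its handling are illustrative, not taken from any real stratum client):

```python
import json

class StratumClient:
    """Processes messages in arrival order, as a client reading one TCP
    stream does; set_difficulty sent immediately before notify therefore
    takes effect before the new job starts."""

    def __init__(self):
        self.difficulty = 1.0
        self.job = None

    def handle_line(self, line: str):
        msg = json.loads(line)
        if msg["method"] == "mining.set_difficulty":
            self.difficulty = msg["params"][0]
        elif msg["method"] == "mining.notify":
            self.job = msg["params"]  # starts under the already-applied difficulty

c = StratumClient()
c.handle_line('{"id": null, "method": "mining.set_difficulty", "params": [8]}')
c.handle_line('{"id": null, "method": "mining.notify", "params": ["job-1"]}')
# c.difficulty is now 8, and "job-1" is mined at that difficulty
```

Bundling the difficulty into notify would give the same result here; the dispute is about the server-side cost and the standalone-set_difficulty case, not the client-side outcome of the back-to-back pair.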

kano
Legendary
Activity: 4466
Merit: 1798
Linux since 1997 RedHat 4
October 04, 2012, 12:39:24 PM
#154

1. There's a major problem with bundling difficulty into the job definition. Stratum is designed to have the same job message for every subscribed miner. Putting difficulty, which is connection-specific, directly into this message would break the whole design. This is not just some minor advantage: it is like having job broadcasts 10000x faster for 10000 connected miners, simply by decoupling the job definition and the difficulty, because the server doesn't need to serialize the same message again and again with just one changed number. 10000x faster broadcasts potentially mean a 10000x lower stale ratio for miners.
Sigh ...
You already need to process each user separately, since you need to know each user before you can send them work, and each has a separate id.
You already know the difficulty you want each user to work at - since you said you send it beforehand anyway.
You must keep an independent difficulty per user in the pool.

... so why is adding that difficulty (in your words, one changed number) to the end of the string being sent to the miner 10000x slower?
Bullshit.

Quote
2. Previously I described some edge cases where sending the difficulty alone (without a job) can potentially improve things. Maybe I wasn't absolutely clear, but this is not supposed to happen normally, only in extreme corner cases. When I have to choose between losing a few seconds of a flooding miner's work (say 500+ shares per second) and overall pool stability, I'll choose pool stability. Again, almost all miners gain from this decision, because a quick and reliable pool is in the interest of all participants.
So your pool is unable to generate a new notify quickly, for one person, in less than a fraction of a second? ... that's a worry ...
I was hoping that generating notify data was pretty quick ... since it's almost exactly what pools already do, generating work for each user every time they need it ...
Your 'special' case is when a user first connects with the wrong difficulty - so the pool has to do the same old work it used to do ... once per user, when they first connect ...
Or you could even be smart and remember what each worker was doing the last time they connected, so only a minority of users - those connecting at vastly different hash rates from last time - would need it ...

Quote
3. As I have stated many times, in the standard case mining.set_difficulty and mining.notify are *expected* to be sent together. And because mining.set_difficulty doesn't restart jobs AND the pool sends set_difficulty immediately before notify, losing even a millisecond of miner work is impossible.
Unless you happen to write crappy miner software ... the miner is always mining.
So it will always be mining when you send a "set_difficulty".
Since you are changing the rules while it is mining a valid share, the miner will lose work if that is a standalone "set_difficulty".
So ... either you don't send them standalone ... in which case why have them separate ...
... or you do send them standalone, and the miner has an expected % chance of losing work.

Quote
So for the standard case, all this discussion is about these two options:

Current Stratum protocol:
Code:
{"id": null, "method": "mining.set_difficulty", "params": [NEW_DIFFICULTY]}
{"id": null, "method": "mining.notify", "params": [JOB_DEFINITION]}

Alternative proposal:
Code:
{"id": null, "method": "mining.notify", "params": [JOB_DEFINITION, NEW_DIFFICULTY]}

As you can see, we're making a mountain out of a molehill, because in the standard case there's no difference, except that the current proposal is more flexible and much faster in the real world.
How is it much faster?
Why is it 10000x faster to send a string than to add 20 bytes to the end of the string and send it?
Do you manually add those 20 bytes each time using a keyboard, or does a 3GHz processor do it?

Your 'flexibility' loses shares for miners.
Yes, it doesn't lose anything for the pool - just the miners ... so I guess that doesn't matter to you.

slush (OP)
Legendary
Activity: 1386
Merit: 1097
October 04, 2012, 12:45:58 PM
Last edit: October 04, 2012, 01:01:02 PM by slush
#155

kano, how many pool servers have you implemented? How many of them had thousands of concurrently connected clients?

I'm quite bored with arguing with you, because every time you use some new weird argument. Most of the comments in your last message are irrelevant in some way. I'm really not going to calculate CPU cycles for you; it is simply out of scope of this discussion.

Edit:
However, if you're really interested, here's pseudocode for both solutions:
Code:
for diff in waiting_notifications: # Iterate only over connections that need a new difficulty
    diff.send()

job = prepare_job()
job.broadcast() # Send the same packet to all connected clients

Code:
for client in iterate_clients: # Iterate over all connected clients every time
    job = prepare_job(client.difficulty) # Custom serialization per client
    job.send() # Send one packet

Do you see the difference?
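The difference between the two loops can be made concrete with a toy version in Python (dicts stand in for connections and returned strings stand in for sent packets; this is an illustration, not actual pool code):

```python
import json

def broadcast_decoupled(clients, job_params):
    """Serialize the notify payload once; identical bytes go to every client.
    Per-client messages are needed only for connections being retargeted."""
    msgs = []
    for c in clients:
        if c.get("pending_diff") is not None:  # only clients awaiting a retarget
            msgs.append(json.dumps({"id": None, "method": "mining.set_difficulty",
                                    "params": [c["pending_diff"]]}))
    job_msg = json.dumps({"id": None, "method": "mining.notify", "params": job_params})
    msgs.extend([job_msg] * len(clients))  # one serialization, N sends
    return msgs

def broadcast_bundled(clients, job_params):
    """Bundled alternative: one distinct serialization per connected client."""
    return [json.dumps({"id": None, "method": "mining.notify",
                        "params": job_params + [c["difficulty"]]})
            for c in clients]
```

The decoupled loop pays the JSON serialization cost once per job instead of once per client, which is where slush's 10000x figure for 10000 connections comes from; kano's counter is that appending one number per client is cheap either way.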

http://www.forbes.com/sites/chrisbarth/2011/12/29/want-to-make-good-decisions-avoid-mount-stupid/

beekeeper
Sr. Member
Activity: 406
Merit: 250
LTC
October 04, 2012, 09:43:46 PM
#156

Guys, please: how can I set the difficulty client-side without hacking the proxy? The higher diff set pool-side eats 5% of my hashing power.

25Khs at 5W Litecoin USB dongle (FPGA), 45kHs overclocked
https://bitcointalk.org/index.php?topic=310926
Litecoin FPGA shop -> http://ltcgear.com
slush (OP)
Legendary
Activity: 1386
Merit: 1097
October 04, 2012, 10:04:29 PM
#157

Guys, please: how can I set the difficulty client-side without hacking the proxy? The higher diff set pool-side eats 5% of my hashing power.

If the pool supports it, there should be some setting for the required difficulty in your pool profile. But I don't have such an option (yet) on the profile, and afaik btcguild is the same.

Btw, can you elaborate on the "eats 5% of my hashing power" sentence? How is that possible?

Higher difficulty means that the share submission rate is lower, but the weight of every submitted share is proportionally higher. With difficulty 5, you'll submit 5x fewer shares, but you'll be credited 5 shares for every one submitted. Apart from somewhat higher variance in share submission (which depends on the particular pool's implementation), there's no reason to think that "higher difficulty is eating my hashpower".
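The accounting described above can be verified as a one-line expected-value check (illustrative arithmetic only, not pool code): a share meets difficulty D with probability proportional to 1/D, and each accepted share is credited as D difficulty-1 shares, so the product is constant.

```python
def expected_credit_per_attempt(difficulty: float) -> float:
    """Expected difficulty-1-equivalent credit per unit of hashing effort.

    Higher difficulty trades submission rate for share weight; the
    expected credit per hash is unchanged."""
    p_meets_target = 1.0 / difficulty   # chance a diff-1 share also meets D
    credit_per_share = difficulty       # an accepted share counts as D shares
    return p_meets_target * credit_per_share

for d in (1, 5, 8):
    print(d, expected_credit_per_attempt(d))  # always 1.0
```

Only the variance changes with D, which is what the 5% discrepancy question below is really about.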

beekeeper
Sr. Member
Activity: 406
Merit: 250
LTC
October 04, 2012, 10:21:18 PM
#158

Over time, the ratio of getworks with a solution to getworks without a solution at diff > 1 (diff 8) is 5% lower than the same ratio at diff 1. I have 3 workers: 2 at diff 1 with the same ratio, and one through the proxy which is moved incrementally to diff 8 after connecting; its ratio is 5% less than at diff 1. I want to use stratum to save pool resources, and I assume the ratio difference comes from the vardiff algorithm. It may also be caused by some relationship with network latency, since the diff-1 workers are connected to the DE server (Europe). However, I think vardiff is most likely, so I would at least like to test the same worker at diff 1 through stratum.

slush (OP)
Legendary
Activity: 1386
Merit: 1097
October 04, 2012, 10:33:20 PM
#159

Oh, this! The proxy works in "compatibility mode" by default, to ensure that all miners, even very old ones, work with it without issues. If you have a modern miner (cgminer, poclbm or so), you can run the proxy with --real-target. Currently the proxy filters low-diff shares, so you may see different numbers in the miner and on the pool. With --real-target, the numbers in your miner won't be skewed anymore.

kano
Legendary
Activity: 4466
Merit: 1798
Linux since 1997 RedHat 4
October 04, 2012, 10:44:47 PM
#160

kano, how many pool servers have you implemented? How many of them had thousands of concurrently connected clients?

I'm quite bored with arguing with you, because every time you use some new weird argument. Most of the comments in your last message are irrelevant in some way. I'm really not going to calculate CPU cycles for you; it is simply out of scope of this discussion.

Edit:
However, if you're really interested, here's pseudocode for both solutions:
Code:
for diff in waiting_notifications: # Iterate only over connections that need a new difficulty
    diff.send()

job = prepare_job()
job.broadcast() # Send the same packet to all connected clients

Code:
for client in iterate_clients: # Iterate over all connected clients every time
    job = prepare_job(client.difficulty) # Custom serialization per client
    job.send() # Send one packet

Do you see the difference?

http://www.forbes.com/sites/chrisbarth/2011/12/29/want-to-make-good-decisions-avoid-mount-stupid/
Yeah I see the difference - you lose as a programmer Tongue
I've even commented on that without looking at your code.
