> > capped pay per share is the same as PPLNS that pays each share only once (the variant that goes back and pays the unpaid shares instead of double-paying the same shares) and N = difficulty
>
> No, because on short rounds, the extra goes toward the N on future rounds in a buffer. Also, straight CPPS isn't being considered, just the CPPSB and CPPSEB variants.
>
> > straight CPPS is better; you don't want people mining only when there's a large buffer and hopping off when it seems that the buffer is getting small
>
> That doesn't make sense. People are more likely to hop off with straight CPPS than with the variants thereof.
>
> > I see what you mean, because of what happens on short rounds. But the variants are no better: when there's a buffer, there's a buffer against bad pool luck and you have a better chance of getting 100% of your share's worth. When lots of people have backpay waiting on long rounds, you'll be getting paid for only part of your work. So based on previous performance you can see that your expected value is lower due to bad previous pool luck, and you would avoid the pool. So after a long streak of bad luck there would be no point in mining the pool when you're expecting maybe 90% of PPS on a long block and only 100% on a short block.

No. CPPS* always offers full PPS for each share when it's submitted. Only when the buffer plus current-block funds are exhausted does it begin discarding or moving the oldest current-round shares to the future "backpay queue", to make sure it can continue to offer full PPS to the current miners.

> > with PPLNS you'd be getting 80%-120%, so more variance, but previous performance has no bearing on your decision whether to mine the pool or not

The whole point of CPPS* is that past history has no bearing on present or future earnings.
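The CPPSB mechanics described here can be sketched roughly as follows. This is a toy simplification, not actual pool code; `settle_round`, its parameters, and the share lists are illustrative names: full PPS is paid per share from the block reward plus the buffer, the oldest shares the funds can't cover are deferred to a backpay queue, and any surplus from short rounds refills the buffer.

```python
def settle_round(reward, buffer, shares, pps=1.0):
    """Toy CPPSB settlement (hypothetical simplification of the scheme
    described above). Every payable share earns full PPS from
    reward + buffer; shares the funds can't cover are deferred."""
    funds = reward + buffer
    payable = min(len(shares), int(funds // pps))
    # The newest shares are paid first, so current miners always see
    # full PPS; only the oldest shares of the round get deferred.
    backpay = shares[:len(shares) - payable]
    paid = shares[len(shares) - payable:]
    new_buffer = funds - payable * pps
    return paid, backpay, new_buffer
```

For example, a short round `settle_round(5.0, 0.0, ["a", "b"])` pays both shares and banks 3.0 in the buffer, while a long round with eight shares and funds for six defers the two oldest to backpay.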
|
|
|
> > capped pay per share is the same as PPLNS that pays each share only once (the variant that goes back and pays the unpaid shares instead of double-paying the same shares) and N = difficulty
>
> No, because on short rounds, the extra goes toward the N on future rounds in a buffer. Also, straight CPPS isn't being considered, just the CPPSB and CPPSEB variants.
>
> > straight CPPS is better; you don't want people mining only when there's a large buffer and hopping off when it seems that the buffer is getting small

That doesn't make sense. People are more likely to hop off with straight CPPS than with the variants thereof.
|
|
|
I like the current setup; ideally the change would be to use the "server" funds to pay outstanding balances as well as the 50 BTC from blocks. It'd be perfect at that point, IMO.
The closest match in this poll to the current setup is ESMPPS
|
|
|
> capped pay per share is the same as PPLNS that pays each share only once (the variant that goes back and pays the unpaid shares instead of double-paying the same shares) and N = difficulty

No, because on short rounds, the extra goes toward the N on future rounds in a buffer. Also, straight CPPS isn't being considered, just the CPPSB and CPPSEB variants.
|
|
|
There is a significant risk when PPPS is negative, but the upside is that it gets to rebuild a buffer sooner rather than trying to pay people off first and possibly getting even further into the negative. So in theory, it might stay on the up-side more often than a *MPPS or CPPS*B method.
I might be wrong about this, but I imagine PPPS would actually rebuild its buffer slower, because all the "sensible" miners will have left as soon as the pool looks like it will go into the negative. The resulting mass exodus will cut down the pool's hashrate and prolong the time it takes to find future blocks. Well, I mean, if the pool survives the block. Once the block is over, they might as well come back.
|
|
|
> > I'd like to propose a variant of CPPS that borrows from RSMPPS. Maybe call it RCPPS.
>
> CPPSRB would fit with the existing names better.
>
> > Luke-Jr, what do you think?
>
> I think I intentionally left RSMPPS off the poll, and voted for PPLNS and CPPSB.
>
> > Bailing on a Prop/PPS pool when it goes negative is a different kind of hopping than 43% hopping; it doesn't harm the other miners on the pool by taking earnings out of their pockets. That said, if everyone does it, then the pool shrivels up and dies if even a 3-4 day bad-luck streak ever happens. What am I missing? I feel like I must be missing something, since so many people are voting for it.

There is a significant risk when PPPS is negative, but the upside is that it gets to rebuild a buffer sooner, rather than trying to pay people off first and possibly getting even further into the negative. So in theory, it might stay on the up side more often than a *MPPS or CPPS*B method.
|
|
|
Any thoughts on recoding the payout system as part of this?
Knowing if payouts would still be exclusively via ~50 BTC generate txs or if you are looking at mixing in (or otherwise using in some automated way) sendmany txs will make a difference in which reward methods I would vote for.
For now, let's assume the payout system will be rewritten to match the reward system.
|
|
|
Artefact2 has requested new indexes on the webserver SQL for his new graphs v3.0, so web-side SQL is down while it builds the indexes. This mainly means hashrates are showing 0.
I have also just deployed step 2 of my anti-stale improvements. Please report if you get any "invalid-time" rejected shares, or if your miner shows "idle" (especially around longpolls). Older versions of DiabloMiner have a bug, and are currently being exempted from the new behaviour (which makes it less effective for everyone), so if you use DM please upgrade to the latest version.
|
|
|
FWIW, I tried it out and couldn't get connected at all...
|
|
|
> How about a middle ground: the client could pre-generate the next, say, 100 addresses, and then whenever a backup is performed, these pre-generated future addresses are also saved. "New" addresses are actually being pulled from this pool of pre-generated addresses.

It already does.

> Finally, philosophically, I think wallets need to be living entities, and an immortal wallet is not a good idea. I can't concisely explain the reasons why this is best, besides the above example about the backup found in the trash. There's a reason the PGP key system is designed to allow keys to expire, and all good security systems require occasional password changes.

Even after a PGP key has expired, you can simply edit it to extend the expiration date. Passwords on deterministic wallets can be changed like any other.
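The pre-generated address pool described in the quote can be sketched like this. This is a toy illustration only; `ToyWallet`, `KEYPOOL_SIZE`, and the key/address derivation are made-up stand-ins, not the real wallet code: "new" addresses are drawn from a pool of keys generated in advance, and the pool is topped up after every draw, so a backup taken now also covers the next pool's worth of addresses handed out later.

```python
import os
import hashlib
from collections import deque

KEYPOOL_SIZE = 100  # the "say, 100" from the quote above


class ToyWallet:
    """Toy keypool sketch; names and derivation are illustrative."""

    def __init__(self):
        self.keypool = deque()
        self._refill()

    def _refill(self):
        # Pre-generate keys so future "new" addresses already exist
        # at backup time.
        while len(self.keypool) < KEYPOOL_SIZE:
            secret = os.urandom(32)                            # stand-in private key
            address = hashlib.sha256(secret).hexdigest()[:34]  # stand-in address
            self.keypool.append((secret, address))

    def get_new_address(self):
        # Hand out the oldest pre-generated address: it is the one most
        # likely to be covered by an existing backup.
        secret, address = self.keypool.popleft()
        self._refill()
        return address
```

A backup made at any point thus remains valid for the next `KEYPOOL_SIZE` addresses, which is the middle ground the quoted post asks for.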
|
|
|
The following branches are stable for merging to mainline:
- bugfix_workspecific_rollntime -- Don't disable rollntime if the header is missing on share submissions (only on new work)
- extensions_header -- Send X-Mining-Extensions and X-Mining-Hashrate headers
- logformat -- Split off log-style printing into a separate --logformat option
- efficiency -- Display work efficiency (accepted shares / works)
- submit_retry -- Retry submitting works after network errors
The above are all merged into my branch named combo. I also have an extended_timeout branch that people with poor network connectivity can merge for a more reliable mining experience. This one is not part of combo, and probably not suitable for merging to mainline. To pull any of the above, run:

git fetch git://gitorious.org/~Luke-Jr/bitcoin/luke-jrs-poclbm.git PutBranchNameHere && git merge FETCH_HEAD
|
|
|
> So if we use phoenix, we are missing out on a useful feature, but there won't be any hit to mining efficiency, is that correct?

rollntime improves efficiency drastically (e.g., from 0.8 accepts/getwork to over 5 accepts/getwork for me). Phoenix doesn't support it, no matter what a pool does.
|
|
|
> Can I still use Phoenix with Eligius now? Is there a list of accepted mining software?

You can still use all miners with Eligius. This change should only affect those with broken implementations of the rollntime extension. Phoenix doesn't support it at all. Since it is a helpful feature, I would recommend switching to poclbm if you can get the same hashrate out of it (it's usually better once you find the right settings).
|
|
|
I don't think it will be much of a performance hit to make the mining client keep its counter in a non-native byte ordering. If the native byte ordering doesn't match, the client just has to use an 8-bit counter for the inner loop, plus a tiny bit of logic for incrementing and checking the remaining bytes when it overflows every 256 hashes.
Even if the conversion from network order to native order took as much work as a full (double) SHA256 hash, the performance hit would be under 0.4% (one conversion per 256 hashes). In practice, it will be MUCH less overhead. On an Intel, it won't even take an extra register.
Just make sure that all of your range boundaries are multiples of 256.
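The 8-bit inner-loop trick described above can be sketched as follows. Python is used purely for illustration, `scan_range_be` is a made-up name, and a single nested SHA-256 over the nonce stands in for the real double hash of the block header. Because the range boundaries are multiples of 256, the three high bytes of the big-endian counter stay fixed for an entire inner loop and only need converting and carrying once per 256 hashes.

```python
import hashlib
import struct


def scan_range_be(start, count):
    """Iterate a 4-byte big-endian nonce using a cheap 8-bit inner
    counter; the high bytes are converted only once per 256 hashes."""
    assert start % 256 == 0 and count % 256 == 0  # boundaries on multiples of 256
    hashes = 0
    for base in range(start, start + count, 256):
        high = struct.pack(">I", base)[:3]  # high bytes, fixed for this inner loop
        for low in range(256):              # 8-bit inner counter
            nonce = high + bytes([low])
            # Stand-in for the real double SHA-256 over header + nonce.
            hashlib.sha256(hashlib.sha256(nonce).digest()).digest()
            hashes += 1
    return hashes
```

The conversion cost (the `struct.pack` call here) is amortized over 256 hashes, which is where the under-0.4% upper bound above comes from.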
Except that AIUI, GPUs don't iterate. They run them all at once...
|
|
|
> > 0 to 1000000 is very different in little endian and big endian. Since SHA256 is on the byte level, the hash for 1000000 in little endian and big endian will be different.
>
> Sure. That's why you convert all values to the proper endianness before hashing.

And what kind of overhead will that have? I'm under the impression it's pretty bad.
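The byte-level point can be demonstrated in a few lines (Python used for illustration): the same integer serialized in the two byte orders produces different bytes, and therefore unrelated SHA-256 digests.

```python
import hashlib
import struct

value = 1_000_000  # 0x000F4240

little = struct.pack("<I", value)  # b'\x40\x42\x0f\x00'
big = struct.pack(">I", value)     # b'\x00\x0f\x42\x40'

# SHA-256 sees only bytes, so the two serializations of the same
# integer hash to completely different digests.
assert hashlib.sha256(little).hexdigest() != hashlib.sha256(big).hexdigest()
```

This is why client and server must agree on one endianness before hashing, as the reply above says.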
|
|
|
> > > You can flip endianness while converting the hex values
> >
> > But then your hashes will all be wrong...?
>
> We only have to agree on an endianness for the communication between server and client. Big or little doesn't really matter. The value has to be converted only once for every getwork request. Or are we misunderstanding you?

0 to 1000000 is very different in little endian and big endian. Since SHA256 is on the byte level, the hash for 1000000 in little endian and big endian will be different.
|
|
|
> You can flip endianness while converting the hex values

But then your hashes will all be wrong...?
|
|
|
I would appreciate it if the miner using software that rolls ntime when it's not told to, and doesn't send any User-Agent at all, would get in contact with me. Example: 19BLtj3bSsJjfHp8b47eDwfGBRLognDDu2
|
|
|