slush (OP)
Legendary
Offline
Activity: 1386
Merit: 1097
|
|
January 27, 2011, 07:16:21 PM |
|
I decided not to report this when I noticed it, because I was concerned somebody would find a way to exploit it, but I imagine it's better to get it fixed:
Hi, it's absolutely fine that you're reporting this stuff. I used rpcminer-cuda, which reports the hashes it finds. Within the past few days, one of my miners found the same hash a number of times in a row, and the server accepted it each time. It seems the server is sometimes assigning duplicate work?
I'm pretty sure the server isn't giving out the same job twice. I don't know the rpcminer internals, but recalculating/resubmitting the same job might happen when some error appears. For example, the miner can resubmit the same hash when the first attempt fails at the network level. EDIT: This just happened again, but this time the duplicate hashes (including the first) were rejected by the server. (result: false, error: null). Is this an error on my side or on the server's side?
Could you send me the specific hash that was submitted many times? I'll search the server logs...
|
|
|
|
xloem
Newbie
Offline
Activity: 2
Merit: 0
|
|
January 27, 2011, 09:09:30 PM |
|
I'm afraid I don't have a hash that was submitted many times, each one successful, but I'll note it down if I see it happen again. The exchanges where my client calculated the same hash twice and the server rejected it twice each looked like this:
Sending to server: {"method":"getwork","params":["00000001ac490b07cf60a40113378138499a874c5f8041fadf537e150000bf3d00000000470ed189b06be2ae6299181c7dfdcc41b7681a65a7c1c028580fe5eb10ff03e44d41ade61b02fa2906b344ff000000800000000000000000000000000000000000000000000000000000000000000000000000000000000080020000"],"id":1}
Server sent: {"result": false, "id": "1", "error": null}
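For reference, the nonce in that exchange can be read straight out of the getwork "data" field: the first 160 hex characters are the 80-byte block header, and its last 8 hex characters are the nonce. A minimal sketch of slicing out the header fields (getwork serializes the data in its own word-swapped byte order; the slices below just read the fields as they sit in the string):

```python
# Sketch: pull the block-header fields out of a getwork "data" hex string.
# Layout of the first 160 hex chars: version (8), previous block hash (64),
# merkle root (64), timestamp (8), bits (8), nonce (8); the rest is padding.

def parse_getwork_data(data_hex: str) -> dict:
    header = data_hex[:160]
    return {
        "version":     header[0:8],
        "prev_hash":   header[8:72],
        "merkle_root": header[72:136],
        "time":        header[136:144],
        "bits":        header[144:152],
        "nonce":       header[152:160],
    }

# The exact data string from the exchange above, split by field:
data = ("00000001"                                                          # version
        "ac490b07cf60a40113378138499a874c5f8041fadf537e150000bf3d00000000"  # prev hash
        "470ed189b06be2ae6299181c7dfdcc41b7681a65a7c1c028580fe5eb10ff03e4"  # merkle root
        "4d41ade6" "1b02fa29" "06b344ff"                                    # time, bits, nonce
        "00000080" + "0" * 80 + "80020000")                                 # getwork padding
```

Slicing this string yields nonce "06b344ff", which is exactly the nonce slush's server log reports below.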
|
|
|
|
slush (OP)
Legendary
Offline
Activity: 1386
Merit: 1097
|
|
January 27, 2011, 09:30:53 PM |
|
I'm afraid I don't have a hash that was submitted many times, each one successful, but I'll note it down if I see it happen again. The exchanges where my client calculated the same hash twice and the server rejected it twice each looked like this:
Great, this is enough. From the server logs:
Share found by xloem.t0, nonce 06b344ff
duplicate nonce 06b344ff
duplicate nonce 06b344ff
That means the miner tried to submit the block 3x, but only the first share was accepted (which is OK). So it looks fine from the pool side. The remaining question is why the miner tried to submit one job multiple times.
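The behavior in that log ("first accepted, repeats rejected") needs nothing more than a per-job set of already-seen nonces on the server side. A sketch of the idea (illustrative only, not the pool's real implementation):

```python
# Sketch of server-side duplicate-share detection: the first submission
# of a nonce for a given job is accepted, and repeats are rejected with
# "duplicate nonce", as in the log above. Names here are made up.

seen_shares = {}  # job id -> set of nonces already submitted for that job

def submit_share(job_id: str, nonce: str) -> bool:
    nonces = seen_shares.setdefault(job_id, set())
    if nonce in nonces:
        return False   # duplicate nonce -- reject
    nonces.add(nonce)
    return True        # share accepted
```

A resubmission after a network-level failure, as slush suspects above, would hit the `duplicate nonce` branch on every retry after the first.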
|
|
|
|
dooglus
Legendary
Offline
Activity: 2940
Merit: 1333
|
|
January 27, 2011, 10:35:25 PM |
|
I'm retiring from my career in bitcoin mining.
I've been working flat out for a few weeks now and have finally managed to mine a whole coin.
Thanks slush for making this possible - I think I'd have been waiting a long long time to get my first coin without your help.
Please feel free to do whatever you like with the 0.00763751 BTC remaining in my mining pool account.
Na zdraví. (Cheers.)
Chris.
|
|
|
|
cdb000
Member
Offline
Activity: 112
Merit: 11
|
|
January 27, 2011, 11:31:27 PM |
|
In the case of a block with a transaction fee (e.g. block 10492), what happens to the fee? Does it get shared out along with the 50 BTC generated?
|
|
|
|
geekmug
Newbie
Offline
Activity: 4
Merit: 0
|
|
January 28, 2011, 01:28:01 AM |
|
Maybe I am just lucky, but I feel like the 80 BTC I've earned is well behind the 200 BTC I would've earned all on my own.
Yes, this is just luck. There is no real reason for such a big difference; you have only a small statistical population to draw conclusions from. Well, there is a reason: my luck has made me an outlier in the positive direction, and correspondingly there is (collectively) a worse-than-average performance for the rest of the network -- it's just statistics. I just find it interesting that being part of such a large pool essentially pegs you to average performance. Joining the pool is like putting your resources into a bank that pays interest, whereas going it alone is more like playing the lotto with your resources. It is just perceptually unfortunate to have had really good luck in the pool... maybe it would be better not to tell me how many blocks I found.
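geekmug's bank-versus-lotto intuition can be made concrete: pooled and solo mining have the same expected earnings, but the solo payout has far higher variance. A quick sketch with made-up numbers (not real pool parameters), modeling solo mining as a Bernoulli win of the full block reward each round versus an idealized pool large enough to find essentially every block:

```python
# Compare expected earnings and their spread for solo vs. pooled mining.
# Numbers are illustrative, not taken from the actual pool.
import math

BLOCK_REWARD = 50.0    # BTC per block (2011 subsidy)
n_rounds = 10_000      # blocks found by the whole network
p = 0.01               # our fraction of total network hashrate

# Solo: each round we win the full reward with probability p (a lottery).
solo_mean = n_rounds * p * BLOCK_REWARD
solo_std = math.sqrt(n_rounds * p * (1 - p)) * BLOCK_REWARD

# Idealized pool: every round pays our proportional slice with certainty
# (real pools retain a small residual variance, not exactly zero).
pool_mean = n_rounds * p * BLOCK_REWARD
pool_std = 0.0
```

Same mean, wildly different spread: that is exactly the "pegged to average performance" effect, and why a lucky solo streak looks better than the pool ever can.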
|
|
|
|
slush (OP)
Legendary
Offline
Activity: 1386
Merit: 1097
|
|
January 28, 2011, 01:11:29 PM |
|
Please feel free to do whatever you like with the 0.00763751 BTC remaining in my mining pool account. Na zdravi.
Díky (thanks)
|
|
|
|
slush (OP)
Legendary
Offline
Activity: 1386
Merit: 1097
|
|
January 28, 2011, 01:19:56 PM Last edit: January 29, 2011, 07:17:24 PM by slush |
|
In the case of a block with a transaction fee (eg. 10492) what happens to the fee? Does it get shared out along with the 50BTC generated?
As I've stated many times before, the pool currently keeps the fees for itself. AFAIK, in the whole pool history there is only ~0.05 BTC in fees. Once it becomes a significant amount, I'll start including fees in participant rewards. Btw, including fees in rewards is difficult right now, because the pool does not see generated blocks through the current JSON API (so I don't know how much the fees are for each block). Hacking the Bitcoin client for 0.05 BTC is hardly worthwhile at this point. I hope the next Bitcoin release will fix that, so adding fees to rewards will be much easier.
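If the pool did fold fees into rewards, the arithmetic would be the obvious extension of the proportional payout: split (50 BTC + fees) by share counts instead of just the 50 BTC subsidy. A hypothetical sketch (worker names and share counts are made up, not real pool data):

```python
# Sketch: proportional payout with transaction fees included in the pot.
# Workers and share counts below are hypothetical.

def split_reward(shares_by_worker: dict,
                 block_reward: float = 50.0,
                 fees: float = 0.0) -> dict:
    total_shares = sum(shares_by_worker.values())
    pot = block_reward + fees
    return {worker: pot * n / total_shares
            for worker, n in shares_by_worker.items()}

# 0.05 BTC in fees split 60/40 by submitted shares:
payouts = split_reward({"alice": 600, "bob": 400}, fees=0.05)
```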
|
|
|
|
sc8nt4u
|
|
January 29, 2011, 06:26:19 PM |
|
587 2011-01-29 03:11:38 0:54:20 23619 0.99919554 105101 invalid
How often are blocks invalid? First time I've seen an invalid block.
|
|
|
|
slush (OP)
Legendary
Offline
Activity: 1386
Merit: 1097
|
|
January 29, 2011, 07:18:41 PM |
|
How often are blocks invalid? First time I've seen an invalid block.
AFAIK it is the second invalid block in the whole pool history.
|
|
|
|
bitcool
Legendary
Offline
Activity: 1441
Merit: 1000
Live and enjoy experiments
|
|
January 29, 2011, 08:47:58 PM |
|
How often are blocks invalid? First time I've seen an invalid block.
AFAIK it is the second invalid block in the whole pool history. Can anyone explain what scenarios can cause an invalid block to be created?
|
|
|
|
slush (OP)
Legendary
Offline
Activity: 1386
Merit: 1097
|
|
January 29, 2011, 10:17:15 PM |
|
Can anyone explain what scenarios can cause an invalid block to be created?
When two independent miners find a block with the same previous hash at the same time, only one of them can become a valid Bitcoin block. So the pool announced its new block a second after someone else...
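A greatly simplified "first seen wins" sketch of that race (real nodes keep both branches and follow the longest chain; this only illustrates why one of two simultaneous announcements ends up invalid for the pool):

```python
# Simplified block race: a node extends a given parent block only once,
# so the second block announcing the same previous hash loses the race.
# (Real consensus tracks competing branches; this is deliberately minimal.)

chain_tips = {}  # prev_hash -> hash of the block that extended it first

def receive_block(prev_hash: str, block_hash: str) -> str:
    if prev_hash in chain_tips:
        return "invalid"   # lost the race: parent was already extended
    chain_tips[prev_hash] = block_hash
    return "accepted"
```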
|
|
|
|
fabianhjr
Sr. Member
Offline
Activity: 322
Merit: 250
Do The Evolution
|
|
January 29, 2011, 10:56:51 PM |
|
The pool is now over 30 GHashes/sec.
|
|
|
|
slush (OP)
Legendary
Offline
Activity: 1386
Merit: 1097
|
|
January 30, 2011, 09:14:49 PM |
|
I just restarted the pool server application. All of you who changed worker passwords on the website recently, please check that your workers are working correctly. I see some "Bad password" messages in the server log. This is because I still haven't fixed reloading worker credentials from the database into the running application, so if you changed your password a few days ago, it has only NOW been applied O:-).
|
|
|
|
bitcool
Legendary
Offline
Activity: 1441
Merit: 1000
Live and enjoy experiments
|
|
January 30, 2011, 11:24:55 PM |
|
When two independent miners find a block with the same previous hash at the same time, only one of them can become a valid Bitcoin block. So the pool announced its new block a second after someone else...
Thanks for the explanation; is this latency an exploitable vulnerability, though? Just wondering.
|
|
|
|
slush (OP)
Legendary
Offline
Activity: 1386
Merit: 1097
|
|
January 30, 2011, 11:28:22 PM |
|
Thanks for the explanation; is this latency an exploitable vulnerability, though? Just wondering.
I don't think so, but I'm not an expert on this. You can ask anybody else in #bitcoin-dev, because this is not a pool-related question.
|
|
|
|
slush (OP)
Legendary
Offline
Activity: 1386
Merit: 1097
|
|
January 30, 2011, 11:43:47 PM |
|
Today's pool update introduced a small change in how shares are counted. Only submitted shares which are valid for the current Bitcoin block are counted. So if your miner asks for a job and submits a share, that share isn't counted if a new Bitcoin block arrived in the meantime, because it can no longer be a candidate for the next Bitcoin block. This does not affect the fairness of the pool when your miners are configured correctly. Please check whether your miners use a custom getwork timeout; the default getwork period (typically 5 seconds) is the best setting. That way, you should see 'stale' shares with only ~1% probability.
Please note that other miner settings can also affect the time between getwork() and share submission. For example, the "-f 1" parameter in the Diablo miner raised the latency between getwork and submit significantly (from <5s to >10s on an ATI 5970). I solved this with "-f 5".
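The new counting rule boils down to comparing the previous-block hash baked into a share's work against the current chain tip. A minimal sketch (illustrative, not the pool's actual code):

```python
# Sketch: a share is "stale" when the job it solves was built on a
# previous-block hash that is no longer the chain tip, i.e. a new
# Bitcoin block arrived between getwork and share submission.

def is_stale(share_prev_hash: str, current_prev_hash: str) -> bool:
    return share_prev_hash != current_prev_hash

def count_share(share_prev_hash: str, current_prev_hash: str) -> bool:
    # Under the new rule, only shares valid for the current block count.
    return not is_stale(share_prev_hash, current_prev_hash)
```

With the default 5-second getwork period, the window for the tip to change between getwork and submit is small, which is where the ~1% stale figure comes from.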
|
|
|
|
FairUser
Sr. Member
Offline
Activity: 1344
Merit: 264
|
|
January 31, 2011, 12:27:19 AM |
|
Today's pool update introduced a small change in how shares are counted. Only submitted shares which are valid for the current Bitcoin block are counted. So if your miner asks for a job and submits a share, that share isn't counted if a new Bitcoin block arrived in the meantime, because it can no longer be a candidate for the next Bitcoin block. This does not affect the fairness of the pool when your miners are configured correctly. Please check whether your miners use a custom getwork timeout; the default getwork period (typically 5 seconds) is the best setting. That way, you should see 'stale' shares with only ~1% probability.
Please note that other miner settings can also affect the time between getwork() and share submission. For example, the "-f 1" parameter in the Diablo miner raised the latency between getwork and submit significantly (from <5s to >10s on an ATI 5970). I solved this with "-f 5".
Please look at http://nullvoid.org/bitcoin/statistix.php and check the 100-block durations. SEVERAL BLOCKS ARE FOUND IN LESS THAN 60 SECONDS. You should change the system to accept submitted answers for both the last block and the current block, so that anyone who submits old work from the previous block doesn't get an invalid. I understand the desire to cut off those abusing the system, which is a good thing, but you should make sure it doesn't affect those playing by the rules.
|
|
|
|
bitcool
Legendary
Offline
Activity: 1441
Merit: 1000
Live and enjoy experiments
|
|
January 31, 2011, 01:39:48 AM |
|
I understand the desire to cut off those abusing the system, which is a good thing, but you should make sure it doesn't affect those playing by the rules.
Well, slush is changing the rule since he is the ruler. However, I do think the new rule is fair, because if a block is found within seconds, slow workers submitting their shares for the previous block wouldn't have had a chance anyway -- with or without a pool.
|
|
|
|
slush (OP)
Legendary
Offline
Activity: 1386
Merit: 1097
|
|
January 31, 2011, 07:44:49 AM |
|
That way anyone who submits old work from the previous block doesn't get an invalid.
The worker doesn't get an 'invalid' or 'stale', but a share from the previous block cannot be a valid Bitcoin block, so I don't see a reason to accept it. I understand the desire to cut off those abusing the system, which is a good thing, but you should make sure it doesn't affect those playing by the rules.
Well, I'm sure. This change affected only the counting of shares, not the validation of hashes against bitcoind. Even if I made some strange mistake in this update (which I don't expect), the pool won't miss any block, because every share is still fully checked.
|
|
|
|
|