tucenaber
|
|
July 08, 2012, 10:44:10 AM Last edit: July 08, 2012, 11:08:38 AM by tucenaber |
|
No. P2pool is getting work from bitcoind. Each node does the same, but not all nodes have exactly the same work, because not all txes are in all parts of the bitcoin network at the same time. So if your node finds a block, it will be closed the way your bitcoind sees it. Then your "block found" and your closing tx are spread across the p2pool nodes. So it IS possible that you see a block value of 61 while I see the same block at 55 and someone else sees only 50.

Good info, thanks. What still puzzles me a little is why I consistently saw blocks in the low 60s. It wasn't a one-time deal; it stayed that way until we found a block, then it dropped. And it was that way long enough for my payout to increase from .9 to 1.2, and as we know, the payout doesn't change rapidly. I don't know how long it takes this info to propagate across the network. That implies there's something wrong with my logic, or propagation is pretty slow.

BTW, I'm still regularly getting dupe submission messages. I just saw this (biggest one I've seen so far):

2012-07-07 09:29:47.581000 > Worker q6600 @ 127.0.0.1 submitted share more than once!
2012-07-07 09:29:47.897000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:49.355000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:50.915000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:51.948000 > Worker miner1 @ 192.168.0.110 submitted share more than once!

M

I have this problem too and I've been doing some counting. The numbers as of now: accepted: 29732, duplicates: 2330, rejected: 426. That is 7-8% duplicate shares of the total, and the actual good shares submitted are only 91%. I have also confirmed that the same hashes show up in cgminer's sharelog and they are never submitted more than once.
(EDIT: not true; occasionally the same hash is submitted three times.) Since the set containing the submitted hashes is local to the got_response closure, it seems to me that the duplicate check only catches shares submitted against the same get_work. Am I correct to conclude that this means the problem is with cgminer?

def get_work(self, pubkey_hash, desired_share_target, desired_pseudoshare_target):
    # ...
    received_header_hashes = set()
    def got_response(header, request):
        # ...
        elif header_hash in received_header_hashes:
            print >>sys.stderr, 'Worker %s @ %s submitted share %064x more than once!' % (
                request.getUser(), request.getClientIP(), header_hash)
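To illustrate the scoping point being made here, the following is a hypothetical sketch (simplified names, not p2pool's actual implementation): a set created inside each get_work's closure only detects duplicates within that one work unit, while a set shared across work units would also catch the same header hash arriving via two different getwork requests.

```python
# Hypothetical sketch of the closure-local vs. shared duplicate check.
# per_work_seen mimics the set local to got_response; shared_seen is
# what a cross-getwork check would look like.

def make_handler(shared_seen):
    per_work_seen = set()  # fresh for every get_work, like the closure-local set
    def got_response(header_hash):
        dup_local = header_hash in per_work_seen    # only same-get_work dupes
        dup_global = header_hash in shared_seen     # dupes across get_works too
        per_work_seen.add(header_hash)
        shared_seen.add(header_hash)
        return dup_local, dup_global
    return got_response

shared = set()
work1 = make_handler(shared)   # first getwork
work2 = make_handler(shared)   # second getwork

first = work1(0xABC)           # (False, False): never seen before
local, glob = work2(0xABC)     # same hash via a different getwork:
                               # local is False, glob is True
```

The same header hash resubmitted through a second getwork slips past the closure-local check, which is exactly why duplicates spanning get_work calls would go unflagged.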
|
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
July 08, 2012, 12:12:59 PM |
|
BTW, I'm still regularly getting dupe submission messages. I just saw this (biggest one I've seen so far):
2012-07-07 09:29:47.581000 > Worker q6600 @ 127.0.0.1 submitted share more than once!
2012-07-07 09:29:47.897000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:49.355000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:50.915000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:51.948000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
M
I have this problem too and I've been doing some counting. The numbers as of now: accepted: 29732, duplicates: 2330, rejected: 426. That is 7-8% duplicate shares of the total, and the actual good shares submitted are only 91%.
I have also confirmed that the same hashes show up in cgminer's sharelog and they are never submitted more than once. (EDIT: not true; occasionally the same hash is submitted three times.)
Since the set containing the submitted hashes is local to the got_response closure, it seems to me that the duplicate check only catches shares submitted against the same get_work. Am I correct to conclude that this means the problem is with cgminer?
I think the problem is in p2pool, because two of my miners are cgminer, one is phoenix, and I get dupes on all three. M
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
tucenaber
|
|
July 08, 2012, 12:15:33 PM |
|
I guess you are right. Then my only idea is the caching of work in WorkerInterface. Can someone shed some light on that?
|
|
|
|
forrestv (OP)
|
|
July 08, 2012, 01:23:12 PM |
|
I guess you are right. Then my only idea is the caching of work in WorkerInterface. Can someone shed some light on that?
The work caching preserves the merkle root, but advances the timestamp by 12 seconds every time. It may be possible that CGminer is advancing the timestamp more than that (ignoring the X-Roll-NTime header, which is set to "expire=10"), and so redoing work.

EDIT: I'm going to add a check to P2Pool that warns about improperly rolled work. If anyone _ever_ sees this warning, we'll know that something is broken.

On the other hand, maybe CGminer retries submitting work before the original work submit is finished, if it's slow? That would mean no work is actually being lost. I should really just look at the code...

I use CGminer and almost never see this message. Is there anything special about the mining rigs this happens on? How many GPUs do they have?
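The behaviour described above suggests a simple validity window per issued work unit. This is an illustrative sketch under the stated assumptions (each cached getwork keeps the merkle root but bumps the base timestamp by 12 seconds, and X-Roll-NTime "expire=10" grants the miner at most 10 seconds of rolling), not P2Pool's actual check:

```python
# Illustrative ntime-roll check, assuming the behaviour described above:
# base timestamp advances 12 s per cached getwork, miner may roll at most
# 10 s past the base it was handed ("expire=10").

ROLL_EXPIRE = 10   # seconds of ntime rolling granted to the miner
CACHE_STEP = 12    # seconds p2pool advances the timestamp per getwork

def properly_rolled(base_ntime, submitted_ntime, expire=ROLL_EXPIRE):
    """True iff the submitted ntime stays within the granted window."""
    return base_ntime <= submitted_ntime <= base_ntime + expire

base = 1_341_750_000
ok_edge = properly_rolled(base, base + 10)    # at the limit: allowed
too_far = properly_rolled(base, base + 11)    # rolled past expire: flagged

# Note CACHE_STEP > ROLL_EXPIRE: a miner rolling by 12 s lands on the
# base timestamp of the *next* cached getwork, i.e. it redoes work.
```

Since the cache step (12 s) exceeds the rolling allowance (10 s), any miner that rolls past the allowance can collide with work handed out by a later getwork, which is the redone-work scenario forrestv describes.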
|
1J1zegkNSbwX4smvTdoHSanUfwvXFeuV23
|
|
|
tucenaber
|
|
July 08, 2012, 01:38:07 PM |
|
I use CGminer and almost never see this message. Is there anything special about the mining rigs that this happens to? How many GPUs do they have?
I have two machines and both show this phenomenon, but one has a much higher hashrate, so it is more noticeable there. One desktop machine with a single 5850, and a dedicated rig with 4 x 5850 + 10 x Icarus. Both are running Linux, and I have the latest versions of both p2pool and cgminer. Thanks for looking into it.
|
|
|
|
kano
Legendary
Offline
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
|
|
July 08, 2012, 01:42:27 PM |
|
Also look at the java API stats; they may show some interesting numbers (though I have no idea what values to expect from p2pool).
|
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
July 08, 2012, 02:30:34 PM |
|
I guess you are right. Then my only idea is the caching of work in WorkerInterface. Can someone shed some light on that?
The work caching preserves the merkle root, but advances the timestamp by 12 seconds every time. It may be possible that CGminer is advancing the timestamp more than that (ignoring the X-Roll-NTime header, which is set to "expire=10"), and so redoing work.

EDIT: I'm going to add a check to P2Pool that warns about improperly rolled work. If anyone _ever_ sees this warning, we'll know that something is broken.

On the other hand, maybe CGminer retries submitting work before the original work submit is finished, if it's slow? That would mean no work is actually being lost. I should really just look at the code...

I use CGminer and almost never see this message. Is there anything special about the mining rigs this happens on? How many GPUs do they have?

2012-07-07 09:29:47.581000 > Worker q6600 @ 127.0.0.1 submitted share more than once!
2012-07-07 09:29:47.897000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:49.355000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:50.915000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:51.948000 > Worker miner1 @ 192.168.0.110 submitted share more than once!

q6600 = my workstation, one 7870 on cgminer. p2pool is running here.
miner1 = 4x7870 on cgminer
miner2 = 4x5870 on phoenix

I don't see miner2 that often, but I do see it.

M
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
Peterae
Newbie
Offline
Activity: 22
Merit: 0
|
|
July 08, 2012, 06:15:52 PM |
|
Can someone please explain to me why my local rate chart shows a total mean area of 314Mhs for a week and a 22Mhs dead mean, yet the P2Pool stats page only shows me at 194Mhs and is normally way below what my live stats show?
Thx
|
|
|
|
Smoovious
|
|
July 08, 2012, 10:50:00 PM |
|
Can someone please explain to me why my local rate chart shows a total mean area of 314Mhs for a week and a 22Mhs dead mean, yet the P2Pool stats page only shows me at 194Mhs and is normally way below what my live stats show?
Thx
The chart is averaged over the period the chart is displaying, while the stats shown in the console are averaged over roughly the past 10 minutes, based on your diff-1 share submissions, so luck will play a part too. -- Smoov
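The estimate Smoovious describes can be sketched as follows. A difficulty-1 share represents about 2**32 hashes on average, so a hashrate inferred from share counts over a short window (like the ~10 minute console figure) is a noisy Poisson-style estimate; the numbers below are made-up illustrations, not the poster's actual stats:

```python
# Sketch of hashrate estimation from diff-1 share submissions, as
# described above. Each difficulty-1 share represents ~2**32 hashes on
# average, so over a window:  rate ~= shares * 2**32 / seconds.
# Short windows make this estimate noisy ("luck will play a part").

def est_hashrate(diff1_shares, window_seconds):
    """Estimated hashrate in hashes/second from diff-1 share counts."""
    return diff1_shares * 2**32 / window_seconds

# Illustration: 44 diff-1 shares in a 10-minute window
rate = est_hashrate(44, 600)   # ~315 MH/s
```

A lucky or unlucky 10-minute stretch shifts the share count, and hence the displayed rate, noticeably; averaging over a week (as the chart does) smooths this out.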
|
|
|
|
rav3n_pl
Legendary
Offline
Activity: 1361
Merit: 1003
Don`t panic! Organize!
|
|
July 08, 2012, 11:34:11 PM |
|
Can someone please explain to me why my local rate chart shows a total mean area of 314Mhs for a week and a 22Mhs dead mean, yet the P2Pool stats page only shows me at 194Mhs and is normally way below what my live stats show?
Thx
On p2pool.info you can see your last-24-hours hashrate as measured by shares accepted by the pool. On your own chart you can see hour/day/week/month rates measured by diff=1 shares. Depending on your luck, the rate reflected in pool shares can be higher or lower than the diff-1 share rate. To compare what you see on p2pool.info with the local page, always look at the local "last day" view.
|
|
|
|
organofcorti
Donator
Legendary
Offline
Activity: 2058
Merit: 1007
Poor impulse control.
|
|
July 08, 2012, 11:35:01 PM |
|
Can someone please explain to me why my local rate chart shows a total mean area of 314Mhs for a week and a 22Mhs dead mean, yet the P2Pool stats page only shows me at 194Mhs and is normally way below what my live stats show?
Thx
You're confusing the area (total megahashes, Mh) with the rate (megahashes per second, MHps). The actual rate you should expect from your given stats is: 314*86400*7/10^6 = 189.9 MHps, which is quite close to the 194 MHps you quoted.
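The arithmetic quoted in this post can be reproduced directly (this just restates the figures as given, without endorsing the unit conversion):

```python
# Reproducing the arithmetic as stated in the post above:
# 314 * 86400 * 7 / 10**6
rate = 314 * 86400 * 7 / 10**6   # -> 189.9072, the ~189.9 MHps figure,
                                 # close to the 194 MHps stats-page value
```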
|
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
July 09, 2012, 01:12:55 AM |
|
I guess you are right. Then my only idea is the caching of work in WorkerInterface. Can someone shed some light on that?
On the other hand, I am seeing it a lot more from my two cgminer instances than from my phoenix instance. Considering my phoenix instance runs around 1.6 Gh/s and my local 7870 at 660 Mh/s, you'd think I'd see a lot more from phoenix than from the 7870. But I don't. M
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
Peterae
Newbie
Offline
Activity: 22
Merit: 0
|
|
July 09, 2012, 05:40:34 AM |
|
Thanks for the explanations, that certainly clears that up
|
|
|
|
Ente
Legendary
Offline
Activity: 2126
Merit: 1001
|
|
July 09, 2012, 07:43:11 AM |
|
2012-07-07 09:29:47.581000 > Worker q6600 @ 127.0.0.1 submitted share more than once!
2012-07-07 09:29:47.897000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:49.355000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:50.915000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:51.948000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
M
I have this problem too and I've been doing some counting. The numbers as of now: accepted: 29732, duplicates: 2330, rejected: 426. That is 7-8% duplicate shares of the total, and the actual good shares submitted are only 91%.

What if... those are the missing 10% we see globally on the p2pool stats?

Ente
|
|
|
|
organofcorti
Donator
Legendary
Offline
Activity: 2058
Merit: 1007
Poor impulse control.
|
|
July 09, 2012, 03:04:00 PM |
|
2012-07-07 09:29:47.581000 > Worker q6600 @ 127.0.0.1 submitted share more than once!
2012-07-07 09:29:47.897000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:49.355000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:50.915000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:51.948000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
M
I have this problem too and I've been doing some counting. The numbers as of now: accepted: 29732, duplicates: 2330, rejected: 426. That is 7-8% duplicate shares of the total, and the actual good shares submitted are only 91%.

What if... those are the missing 10% we see globally on the p2pool stats? Ente

If the pool used 100% of shares to estimate the global pool hashrate instead of 91%, then yes. But I think P2Pool only counts valid shares? If not, you might notice 9% less payout, but the reporting would be accurate.
|
|
|
|
tucenaber
|
|
July 09, 2012, 03:33:15 PM |
|
I have this problem too and I've been doing some counting. The numbers as of now: accepted: 29732, duplicates: 2330, rejected: 426. That is 7-8% duplicate shares of the total, and the actual good shares submitted are only 91%.
What if... those are the missing 10% we see globally on the p2pool stats? Ente

If the pool used 100% of shares to estimate the global pool hashrate instead of 91%, then yes. But I think P2Pool only counts valid shares? If not, you might notice 9% less payout, but the reporting would be accurate.

It counts valid shares and dead-on-arrival shares, but not duplicates, which are just dropped. So yes, you are right about that.
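The percentages quoted throughout this exchange follow directly from the raw counts reported earlier (accepted 29732, duplicates 2330, rejected 426):

```python
# Checking the quoted percentages against the raw counts from the thread.
accepted, duplicates, rejected = 29732, 2330, 426

total = accepted + duplicates + rejected   # 32488 submissions in all
dup_frac = duplicates / total              # ~0.072 -> the "7-8%" figure
good_frac = accepted / total               # ~0.915 -> "only 91%" good shares
```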
|
|
|
|
tucenaber
|
|
July 09, 2012, 05:32:41 PM |
|
2012-07-09 19:31:07.554921 > Miner digger @ 192.168.1.102 rolled timestamp improperly! This may be a bug in the miner that is causing you to lose work!
2012-07-09 19:31:13.877677 > Miner digger @ 192.168.1.102 rolled timestamp improperly! This may be a bug in the miner that is causing you to lose work!
2012-07-09 19:31:16.723297 > Miner digger @ 192.168.1.102 rolled timestamp improperly! This may be a bug in the miner that is causing you to lose work!
2012-07-09 19:31:22.509470 > Miner digger @ 192.168.1.102 rolled timestamp improperly! This may be a bug in the miner that is causing you to lose work!
2012-07-09 19:31:30.192033 > Miner digger @ 192.168.1.102 rolled timestamp improperly! This may be a bug in the miner that is causing you to lose work!
Now what?
|
|
|
|
rav3n_pl
Legendary
Offline
Activity: 1361
Merit: 1003
Don`t panic! Organize!
|
|
July 09, 2012, 05:51:28 PM |
|
Looks like you have updated to a fresh version that checks for and warns about double-sending.
|
|
|
|
RandomQ
|
|
July 09, 2012, 09:56:58 PM |
|
I love the latest from the cgminer 2.5.0 change log:

----
I've also created my own workaround for the biggest problem with existing bitforce devices - they can now abort work as soon as a longpoll hits, which means literally half as much work on average wasted across longpoll than previously, and a much lower reject rate. Note these devices are still inefficient across longpoll since they don't even have the support the minirig devices have - and they never will, according to bfl. This means you should never mine with them on p2pool.
----

P2pool - The Anti-BFL pool
|
|
|
|
tucenaber
|
|
July 11, 2012, 11:31:57 AM |
|
So I have been having problems, and now at least I think I know what's going on. The problem is this: p2pool increments the getwork timestamp at every request and assumes that the miner will respect X-Roll-NTime, which is set to "expire=10". cgminer (and apparently phoenix) doesn't respect that, and sometimes there is a hash collision between two different getwork requests that have both been rolled past 10 seconds. (See the cgminer thread.)

(The check forrestv added yesterday, warning about this, is broken, since it only catches 1/12 of the shares rolled past 10 seconds, but that's not very important.)

Now, my question is what do I do about it? One reason I get more of these than other people, I suspect, is that my miners produce many shares per minute.

- Would the best short-term solution be to increase the local difficulty?
- ckolivas suggested setting --scan-time in cgminer to 10, but that doesn't make much of a difference.
- Hack p2pool to increase the 12-second increment to something higher? That seems a bit risky...

Other suggestions?
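The collision mechanism described here can be sketched with simplified stand-in headers (tuples, not real 80-byte block headers): p2pool hands out the same merkle root with the base timestamp advanced 12 seconds per getwork, so a miner that rolls ntime by 12 seconds on its own reproduces exactly the header a later getwork would issue.

```python
# Simplified sketch of the collision described above. Tuples of
# (merkle_root, timestamp) stand in for real block headers.
CACHE_STEP = 12   # seconds p2pool advances the timestamp per getwork

def getwork(merkle_root, base_ts, n):
    """nth cached getwork: same merkle root, timestamp advanced n steps."""
    return (merkle_root, base_ts + n * CACHE_STEP)

root, ts = "deadbeef", 1_341_850_000
work_a = getwork(root, ts, 0)   # handed to one miner/thread
work_b = getwork(root, ts, 1)   # handed out 12 s later

# A miner ignoring "expire=10" and rolling ntime by 12 s on work_a
# produces exactly work_b's base header:
rolled_a = (work_a[0], work_a[1] + 12)
# rolled_a == work_b, so the same header hash can be submitted from
# what look like two different work units -> duplicate share.
```

This is why both increasing local difficulty (fewer shares, fewer collisions) and widening the 12-second increment (rolled work no longer lands on another getwork's base) were floated as mitigations.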
|
|
|
|
|