forrestv (OP)
|
|
November 17, 2012, 01:56:58 AM |
|
So - how is the pool hash rate calculated?
It includes orphans and dead on arrival shares as they are seen by every node.
Gyver, not quite. Shares have a flag that indicates whether the node that generated them got a stale share in the past, and from those flags, nodes can compute the stale share percentage and then the total hash rate with an expression like: unstale_hash_rate / (1 - pool_stale_proportion)
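(For illustration only, a minimal Python sketch of that estimate; the variable names mirror the expression above and are not the real p2pool code.)

def estimate_pool_hash_rate(unstale_hash_rate, pool_stale_proportion):
    # Shares that make it into the share chain only reflect the non-stale work.
    # Dividing by (1 - stale_proportion) adds back the work lost to orphan/DOA
    # shares, as estimated from the flags described above.
    if not 0 <= pool_stale_proportion < 1:
        raise ValueError("stale proportion must be in [0, 1)")
    return unstale_hash_rate / (1 - pool_stale_proportion)

# Example: 400 GH/s of counted shares with a 10% stale rate
# implies roughly 444 GH/s of actual work.
print(estimate_pool_hash_rate(400e9, 0.10))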
|
1J1zegkNSbwX4smvTdoHSanUfwvXFeuV23
|
|
|
Syke
Legendary
Offline
Activity: 3878
Merit: 1193
|
|
November 17, 2012, 03:41:33 AM |
|
Yes, our 90-day moving average is impressive, and it is being used to sugar coat or slick over the issue we are trying to bring up. Just because our overall luck is good does not mean we don't have a scalability problem. When we get those bumps in hash speed, the person(s) that jumped on board usually see that their payouts are low because of the bad luck during the period they hopped on, and they leave. That drops our hash rate back down to where the supposed scalability issue is no longer in play.
So until we have a run of good luck in the 350+ GH/s range, I and probably some others will still have doubts.
That's just ridiculous. Take a look at the last week. Hashrate has been climbing to over 400 GH/s, and the 7-day average is still over 115%.
|
Buy & Hold
|
|
|
sharky112065
|
|
November 17, 2012, 04:31:11 AM |
|
Yes, our 90-day moving average is impressive, and it is being used to sugar coat or slick over the issue we are trying to bring up. Just because our overall luck is good does not mean we don't have a scalability problem. When we get those bumps in hash speed, the person(s) that jumped on board usually see that their payouts are low because of the bad luck during the period they hopped on, and they leave. That drops our hash rate back down to where the supposed scalability issue is no longer in play.
So until we have a run of good luck in the 350+ GH/s range, I and probably some others will still have doubts.
That's just ridiculous. Take a look at the last week. Hashrate has been climbing to over 400 GH/s, and the 7-day average is still over 115%.
As you said... "has been climbing". So for part of the week we were below 350 GH/s, and the 7-day average is now down to 110%. I'm done (as in, not gonna argue with you anymore); no one will take a serious look into the issue as long as people are sugar coating it with "see, look, our overall luck is good". What I and others are saying is that we are noticing that blocks are being solved by the pool less frequently when we are over a certain hash rate.
|
Donations welcome: 12KaKtrK52iQjPdtsJq7fJ7smC32tXWbWr
|
|
|
Syke
Legendary
Offline
Activity: 3878
Merit: 1193
|
|
November 17, 2012, 05:11:07 AM |
|
As you said... "has been climbing". So for part of the week we were below 350 GH/s, and the 7-day average is now down to 110%.
So what you're saying is, when you flip a coin less than 350 times a day you get 50/50, but when you flip it more than 350 times in a day you get 60/40? Claim it all you want, it makes no sense. Let's look at this week day by day (which is a bad idea, as there's too much variation). 2 blocks or less is below expected, 3 blocks or more is above average.
1 day ago: 2 - Below
2 days ago: 3 - Above
3 days ago: 4 - Above
4 days ago: 4 - Above
5 days ago: 4 - Above
6 days ago: 1 - Below
7 days ago: 0 - Below
4 days above average, 3 days below average. Where's the problem???
I'm done (as in, not gonna argue with you anymore); no one will take a serious look into the issue as long as people are sugar coating it with
Go for it. Take a serious look: https://github.com/forrestv/p2pool
And when ASICs hit you'll see that you're imagining patterns where there are none.
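(For context, a quick sketch of how noisy single days are expected to be, assuming the ~2.3 expected blocks/day figure forrestv uses later in the thread; requires scipy.)

from scipy.stats import poisson

mu = 2.3  # assumed expected blocks per day for the pool
for k in range(6):
    print(f"P({k} blocks in a day) = {poisson.pmf(k, mu):.3f}")
print(f"P(2 or fewer blocks in a day) = {poisson.cdf(2, mu):.3f}")  # ~0.60

So even at exactly the expected rate, roughly six days in ten will show 2 blocks or fewer, which is why day-by-day comparisons say very little.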
|
Buy & Hold
|
|
|
kano
Legendary
Offline
Activity: 4620
Merit: 1851
Linux since 1997 RedHat 4
|
|
November 17, 2012, 05:31:52 AM |
|
So - how is the pool hash rate calculated?
It includes orphans and dead on arrival shares as they are seen by every node.
Gyver, not quite. Shares have a flag that indicates whether the node that generated them got a stale share in the past, and from those flags, nodes can compute the stale share percentage and then the total hash rate with an expression like: unstale_hash_rate / (1 - pool_stale_proportion)
So it doesn't include all the 1-diff shares generated by the miner ... ?
|
|
|
|
forrestv (OP)
|
|
November 17, 2012, 05:59:24 AM |
|
So - how is the pool hash rate calculated?
It includes orphans and dead on arrival shares as they are seen by every node.
Gyver, not quite. Shares have a flag that indicates whether the node that generated them got a stale share in the past, and from those flags, nodes can compute the stale share percentage and then the total hash rate with an expression like: unstale_hash_rate / (1 - pool_stale_proportion)
So it doesn't include all the 1-diff shares generated by the miner ... ?
No... pool hash rate is only determined by actual shares, which is accurate enough.
|
1J1zegkNSbwX4smvTdoHSanUfwvXFeuV23
|
|
|
kano
Legendary
Offline
Activity: 4620
Merit: 1851
Linux since 1997 RedHat 4
|
|
November 17, 2012, 06:20:18 AM |
|
So - how is the pool hash rate calculated?
It includes orphans and dead on arrival shares as they are seen by every node.
Gyver, not quite. Shares have a flag that indicates whether the node that generated them got a stale share in the past, and from those flags, nodes can compute the stale share percentage and then the total hash rate with an expression like: unstale_hash_rate / (1 - pool_stale_proportion)
So it doesn't include all the 1-diff shares generated by the miner ... ?
No... pool hash rate is only determined by actual shares, which is accurate enough.
So back to my original statement then. The pool hash rate reported is below the actual hash rate of the miners (like all pools). However, since p2pool has 60x the number of LPs compared to other pools, the actual hash rate is quite a bit higher than reported, whereas with a normal pool it shouldn't be more than about 1% higher (I typically get <0.4% for GPU/BFL/ICA and ~0.7% for MMQ). Yet among those ignored shares, if there is a valid block, it will of course get through, so rather than the number of blocks being representative of the number of valid shares, it is in fact representative of all the work generated by all p2pool miners, which is of course higher than the reported hash rate. So when people report that p2pool is getting 110% luck consistently (which of course isn't possible; the probability of getting 110% over even one month is excessively unlikely), they are in fact not comparing the correct numbers. Yes, p2pool may well now be averaging 100% as expected, but those >100% numbers are the result of not comparing the correct numbers.
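(To make the arithmetic of kano's claim concrete, a toy Python sketch with made-up numbers; whether the unreported fraction is actually this large is exactly what is being debated in the thread.)

# If some real work never shows up in the reported hash rate, "luck" measured
# against the reported rate looks better than 100% even with perfectly average
# variance. All numbers here are illustrative, not measurements.
reported_hash_rate = 400e9        # hash rate computed from counted shares
unreported_fraction = 0.05        # hypothetical work lost around long polls
actual_hash_rate = reported_hash_rate / (1 - unreported_fraction)

# Blocks are found in proportion to actual work, but expected blocks are
# usually computed from the reported rate:
apparent_luck = actual_hash_rate / reported_hash_rate
print(f"apparent luck from mismeasurement alone: {apparent_luck:.1%}")  # ~105.3%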
|
|
|
|
gyverlb
|
|
November 17, 2012, 09:56:32 AM |
|
Gyver, not quite. Shares have a flag that indicates whether the node that generated them got a stale share in the past, and from those flags, nodes can compute the stale share percentage and then the total hash rate with an expression like: unstale_hash_rate / (1 - pool_stale_proportion)
Is it just a flag or a counter? We have whole-network stats for DOA and orphans too; are there 2 flags/counters? With a counter, the stale estimation should be pretty good. If it's a flag, multiple stales between shares would be miscounted as a single stale. What might be underestimated too is the amount of work from nodes which stop mining after stale shares.
|
|
|
|
K1773R
Legendary
Offline
Activity: 1792
Merit: 1008
/dev/null
|
|
November 17, 2012, 10:37:25 AM |
|
So - how is the pool hash rate calculated?
It includes orphans and dead on arrival shares as they are seen by every node.
Gyver, not quite. Shares have a flag that indicates whether the node that generated them got a stale share in the past, and from those flags, nodes can compute the stale share percentage and then the total hash rate with an expression like: unstale_hash_rate / (1 - pool_stale_proportion)
So it doesn't include all the 1-diff shares generated by the miner ... ?
No... pool hash rate is only determined by actual shares, which is accurate enough.
So back to my original statement then. The pool hash rate reported is below the actual hash rate of the miners (like all pools). However, since p2pool has 60x the number of LPs compared to other pools, the actual hash rate is quite a bit higher than reported, whereas with a normal pool it shouldn't be more than about 1% higher (I typically get <0.4% for GPU/BFL/ICA and ~0.7% for MMQ). Yet among those ignored shares, if there is a valid block, it will of course get through, so rather than the number of blocks being representative of the number of valid shares, it is in fact representative of all the work generated by all p2pool miners, which is of course higher than the reported hash rate. So when people report that p2pool is getting 110% luck consistently (which of course isn't possible; the probability of getting 110% over even one month is excessively unlikely), they are in fact not comparing the correct numbers. Yes, p2pool may well now be averaging 100% as expected, but those >100% numbers are the result of not comparing the correct numbers.
You can find a block with an orphaned share too! Think of it as: with a 1-diff share being stale on a normal pool, you can still get a block (in relation to p2pool).
|
[GPG Public Key] BTC/DVC/TRC/FRC: 1K1773RbXRZVRQSSXe9N6N2MUFERvrdu6y ANC/XPM: AK1773RTmRKtvbKBCrUu95UQg5iegrqyeA NMC: NK1773Rzv8b4ugmCgX789PbjewA9fL9Dy1 LTC: LKi773RBuPepQH8E6Zb1ponoCvgbU7hHmd EMC: EK1773RxUes1HX1YAGMZ1xVYBBRUCqfDoF BQC: bK1773R1APJz4yTgRkmdKQhjhiMyQpJgfN
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
November 17, 2012, 11:10:16 AM |
|
Yes, our 90-day moving average is impressive, and it is being used to sugar coat or slick over the issue we are trying to bring up. Just because our overall luck is good does not mean we don't have a scalability problem. When we get those bumps in hash speed, the person(s) that jumped on board usually see that their payouts are low because of the bad luck during the period they hopped on, and they leave. That drops our hash rate back down to where the supposed scalability issue is no longer in play.
So until we have a run of good luck in the 350+ GH/s range, I and probably some others will still have doubts.
That's just ridiculous. Take a look at the last week. Hashrate has been climbing to over 400 GH/s, and the 7-day average is still over 115%.
As you said... "has been climbing". So for part of the week we were below 350 GH/s, and the 7-day average is now down to 110%. I'm done (as in, not gonna argue with you anymore); no one will take a serious look into the issue as long as people are sugar coating it with "see, look, our overall luck is good". What I and others are saying is that we are noticing that blocks are being solved by the pool less frequently when we are over a certain hash rate.
If you look at my posts, I used to say the same thing, except I was saying 300 GH/s was the magic number. M
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
cabin
|
|
November 17, 2012, 01:03:13 PM |
|
So - how is the pool hash rate calculated?
It includes orphans and dead on arrival shares as they are seen by every node.
Gyver, not quite. Shares have a flag that indicates whether the node that generated them got a stale share in the past, and from those flags, nodes can compute the stale share percentage and then the total hash rate with an expression like: unstale_hash_rate / (1 - pool_stale_proportion)
So it doesn't include all the 1-diff shares generated by the miner ... ?
No... pool hash rate is only determined by actual shares, which is accurate enough.
So back to my original statement then. The pool hash rate reported is below the actual hash rate of the miners (like all pools). However, since p2pool has 60x the number of LPs compared to other pools, the actual hash rate is quite a bit higher than reported, whereas with a normal pool it shouldn't be more than about 1% higher (I typically get <0.4% for GPU/BFL/ICA and ~0.7% for MMQ). Yet among those ignored shares, if there is a valid block, it will of course get through, so rather than the number of blocks being representative of the number of valid shares, it is in fact representative of all the work generated by all p2pool miners, which is of course higher than the reported hash rate. So when people report that p2pool is getting 110% luck consistently (which of course isn't possible; the probability of getting 110% over even one month is excessively unlikely), they are in fact not comparing the correct numbers. Yes, p2pool may well now be averaging 100% as expected, but those >100% numbers are the result of not comparing the correct numbers.
One thing that might mitigate some of this is that p2pool does send a header that tells the miner to submit all work, even if it is considered dead right after a long poll. This work should show up in the number of DOA shares the pool reports. So we might be counting most of the hash rate that other pools miss right after a long poll. Also, +/- 10% over the course of a month will happen fairly regularly with a pool of this size.
|
|
|
|
organofcorti
Donator
Legendary
Offline
Activity: 2058
Merit: 1007
Poor impulse control.
|
|
November 17, 2012, 01:11:05 PM |
|
The correlation coefficient between hashrate per round and shares per round on D is 0.08. That means no significant correlation at all. Hashrate does not correlate with round length in a linear way. Plotting one against the other doesn't really show any pattern at all. It looks like a cloud of mosquitos.
Unfortunately the data only goes back to August - is there access to all rounds somewhere?
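(For anyone who wants to reproduce that check, a minimal sketch; the arrays below are placeholders, with the real per-round figures coming from p2pool.info.)

import numpy as np

# Placeholder data: replace with the actual per-round history.
hashrate_per_round = np.array([310e9, 345e9, 280e9, 402e9, 390e9])
shares_per_round = np.array([5200, 4800, 6100, 5000, 5600])

r = np.corrcoef(hashrate_per_round, shares_per_round)[0, 1]
print(f"Pearson correlation: {r:.2f}")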
|
|
|
|
cabin
|
|
November 17, 2012, 02:58:04 PM |
|
The correlation coefficient between hashrate per round and shares per round on D is 0.08. That means no significant correlation at all. Hashrate does not correlate with round length in a linear way. Plotting one against the other doesn't really show any pattern at all. It looks like a cloud of mosquitos.
Unfortunately the data only goes back to August - is there access to all rounds somewhere?
This looks promising: http://p2pool.info/blocks?from=0
|
|
|
|
twmz
|
|
November 17, 2012, 02:59:45 PM |
|
The correlation coefficient between hashrate per round and shares per round on D is 0.08. That means no significant correlation at all. Hashrate does not correlate with round length in a linear way. Plotting one against the other doesn't really show any pattern at all. It looks like a cloud of mosquitos.
Unfortunately the data only goes back to August - is there access to all rounds somewhere?
If you are getting them from my site, you have to add a querystring parameter to get all data since I changed the main page to only show 90 days worth to improve routine performance: http://p2pool.info/blocks?all=true
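(A minimal way to pull that data, assuming the endpoint returns a JSON array; the exact fields of each block record are not assumed here.)

import json
import urllib.request

url = "http://p2pool.info/blocks?all=true"
with urllib.request.urlopen(url) as resp:
    blocks = json.load(resp)  # assumed: JSON array of block records
print(f"fetched {len(blocks)} block records")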
|
Was I helpful? 1TwmzX1wBxNF2qtAJRhdKmi2WyLZ5VHRs WoT, GPG, Bitrated user: ewal.
|
|
|
forrestv (OP)
|
|
November 17, 2012, 04:25:25 PM |
|
So back to my original statement then. The pool hash rate reported is below the actual hash rate of the miners (like all pools).
However, since p2pool has 60x the number of LPs compared to other pools, the actual hash rate is quite a bit higher than reported, whereas with a normal pool it shouldn't be more than about 1% higher (I typically get <0.4% for GPU/BFL/ICA and ~0.7% for MMQ).
Yet among those ignored shares, if there is a valid block, it will of course get through, so rather than the number of blocks being representative of the number of valid shares, it is in fact representative of all the work generated by all p2pool miners, which is of course higher than the reported hash rate.
Work done during that time is counted as DOA shares, as cabin said:
One thing that might mitigate some of this is that p2pool does send a header that tells the miner to submit all work, even if it is considered dead right after a long poll. This work should show up in the number of DOA shares the pool reports. So we might be counting most of the hash rate that other pools miss right after a long poll.
So when people report that p2pool is getting 110% luck consistently (which of course isn't possible; the probability of getting 110% over even one month is excessively unlikely), they are in fact not comparing the correct numbers. Yes, p2pool may well now be averaging 100% as expected, but those >100% numbers are the result of not comparing the correct numbers.
No, the probability of us getting >110% luck for a given month is 21% (assuming we should get 2.3 blocks/day and a month has 30 days).
In[23]:= expected = 2.3*30;
In[24]:= 1 - CDF[PoissonDistribution[expected], expected*1.1] // N
Out[24]= 0.214626
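(The same calculation in Python, for anyone without Mathematica; requires scipy.)

from scipy.stats import poisson

expected = 2.3 * 30                     # expected blocks in a 30-day month
threshold = int(expected * 1.1)         # 110% of expected, rounded down
print(poisson.sf(threshold, expected))  # P(blocks > threshold) ~= 0.21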
Gyver, not quite. Shares have a flag that indicates whether the node that generated them got a stale share in the past, and from those flags, nodes can compute the stale share percentage and then the total hash rate with an expression like: unstale_hash_rate / (1 - pool_stale_proportion)
Is it just a flag or a counter? We have whole-network stats for DOA and orphans too; are there 2 flags/counters? With a counter, the stale estimation should be pretty good. If it's a flag, multiple stales between shares would be miscounted as a single stale. What might be underestimated too is the amount of work from nodes which stop mining after stale shares.
It's a flag, but P2Pool buffers the number of stale shares, setting the flag on as many subsequent shares as it needs to. Nodes leaving before getting a share with the flag set is a problem, yes, but I think it's a small one.
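(A hedged sketch of the buffering forrestv describes, not the actual p2pool code: the node keeps a count of unreported stales and sets the per-share flag until the count is drained.)

class StaleTracker:
    """Toy model of the stale-share bookkeeping described above."""

    def __init__(self):
        self.pending_stales = 0

    def record_stale(self):
        # Called whenever locally generated work turns out to be orphan/DOA.
        self.pending_stales += 1

    def flag_for_next_share(self):
        # Called when the node builds its next share: set the "previous work
        # was stale" flag if any stales are still waiting to be reported.
        if self.pending_stales > 0:
            self.pending_stales -= 1
            return True
        return False

tracker = StaleTracker()
tracker.record_stale()
tracker.record_stale()
print([tracker.flag_for_next_share() for _ in range(3)])  # [True, True, False]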
|
1J1zegkNSbwX4smvTdoHSanUfwvXFeuV23
|
|
|
gyverlb
|
|
November 17, 2012, 05:16:50 PM |
|
Gyver, not quite. Shares have a flag that indicates whether the node that generated them got a stale share in the past, and from those flags, nodes can compute the stale share percentage and then the total hash rate with an expression like: unstale_hash_rate / (1 - pool_stale_proportion)
Is it just a flag or a counter? We have whole-network stats for DOA and orphans too; are there 2 flags/counters? With a counter, the stale estimation should be pretty good. If it's a flag, multiple stales between shares would be miscounted as a single stale. What might be underestimated too is the amount of work from nodes which stop mining after stale shares.
Note that I don't think it's an actual problem. With around 9-10 GH/s, I've always seen above 100% PPS, and I'm currently at around 115% PPS for the last active weeks (computed from my actual earnings vs actual hash power). It matches:
- the p2pool.info luck,
- my efficiency,
- the average tx fee percentage.
This is why I'm back mining on p2pool: leasing hashpower promises at least 105% PPS but falls short due to high rejects; I compared my actual PPS over several weeks and the stats are largely in favor of p2pool for me. I left p2pool because I had I/O issues and like to test novelties, but now bitcoind and p2pool are both on SSDs and they are just flying (at least when I'm not rewiring the whole apartment...).
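(A back-of-the-envelope version of that PPS comparison; every number below is a placeholder rather than gyverlb's actual figures, and the difficulty value is only indicative of the era.)

hashrate = 9.5e9                 # H/s, roughly "9-10 GH/s"
seconds = 14 * 24 * 3600         # two weeks of mining
difficulty = 3.4e6               # network difficulty, placeholder
block_reward = 50.0              # BTC subsidy, pre-halving, fees ignored

# Expected earnings if paid pure PPS: expected blocks times the reward.
expected_btc = hashrate * seconds / (difficulty * 2**32) * block_reward
actual_btc = 45.0                # placeholder: what the wallet actually received
print(f"earned {actual_btc / expected_btc:.0%} of PPS")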
|
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
November 17, 2012, 05:34:24 PM |
|
Gyver, not quite. Shares have a flag that indicates whether the node that generated them got a stale share in the past, and from those flags, nodes can compute the stale share percentage and then the total hash rate with an expression like: unstale_hash_rate / (1 - pool_stale_proportion)
Is it just a flag or a counter? We have whole-network stats for DOA and orphans too; are there 2 flags/counters? With a counter, the stale estimation should be pretty good. If it's a flag, multiple stales between shares would be miscounted as a single stale. What might be underestimated too is the amount of work from nodes which stop mining after stale shares.
Note that I don't think it's an actual problem. With around 9-10 GH/s, I've always seen above 100% PPS, and I'm currently at around 115% PPS for the last active weeks (computed from my actual earnings vs actual hash power). It matches:
- the p2pool.info luck,
- my efficiency,
- the average tx fee percentage.
This is why I'm back mining on p2pool: leasing hashpower promises at least 105% PPS but falls short due to high rejects; I compared my actual PPS over several weeks and the stats are largely in favor of p2pool for me. I left p2pool because I had I/O issues and like to test novelties, but now bitcoind and p2pool are both on SSDs and they are just flying (at least when I'm not rewiring the whole apartment...).
My first SSD should be arriving soon. I'm seriously hoping it decreases my stale rate - like you, I think it's I/O related. M
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
kano
Legendary
Offline
Activity: 4620
Merit: 1851
Linux since 1997 RedHat 4
|
|
November 18, 2012, 01:20:37 AM |
|
Gyver, not quite. Shares have a flag that indicates whether the node that generated them got a stale share in the past, and from those flags, nodes can compute the stale share percentage and then the total hash rate with an expression like: unstale_hash_rate / (1 - pool_stale_proportion)
Is it just a flag or a counter? We have whole-network stats for DOA and orphans too; are there 2 flags/counters? With a counter, the stale estimation should be pretty good. If it's a flag, multiple stales between shares would be miscounted as a single stale. What might be underestimated too is the amount of work from nodes which stop mining after stale shares.
Note that I don't think it's an actual problem. With around 9-10 GH/s, I've always seen above 100% PPS, and I'm currently at around 115% PPS for the last active weeks (computed from my actual earnings vs actual hash power). ...
If you have always seen above 100% PPS then you are missing something. The block+fees average is currently 100.36% according to blockchain.info. Thus you must be failing to count some work somewhere ...
|
|
|
|
lenny_
Legendary
Offline
Activity: 1036
Merit: 1000
DARKNETMARKETS.COM
|
|
November 18, 2012, 01:46:39 AM |
|
Gyver, not quite. Shares have a flag that indicates whether the node that generated them got a stale share in the past, and from those flags, nodes can compute the stale share percentage and then the total hash rate with an expression like: unstale_hash_rate / (1 - pool_stale_proportion)
Is it just a flag or a counter? We have whole-network stats for DOA and orphans too; are there 2 flags/counters? With a counter, the stale estimation should be pretty good. If it's a flag, multiple stales between shares would be miscounted as a single stale. What might be underestimated too is the amount of work from nodes which stop mining after stale shares.
Note that I don't think it's an actual problem. With around 9-10 GH/s, I've always seen above 100% PPS, and I'm currently at around 115% PPS for the last active weeks (computed from my actual earnings vs actual hash power). It matches:
- the p2pool.info luck,
- my efficiency,
- the average tx fee percentage.
This is why I'm back mining on p2pool: leasing hashpower promises at least 105% PPS but falls short due to high rejects; I compared my actual PPS over several weeks and the stats are largely in favor of p2pool for me. I left p2pool because I had I/O issues and like to test novelties, but now bitcoind and p2pool are both on SSDs and they are just flying (at least when I'm not rewiring the whole apartment...).
My first SSD should be arriving soon. I'm seriously hoping it decreases my stale rate - like you, I think it's I/O related. M
Have you got the latest bitcoind from git? It's much faster than the stable version, with the new database. With the latest bitcoind from git, my getwork latency dropped about 5-10x.
|
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
November 18, 2012, 02:10:43 AM |
|
Gyver, not quite. Shares have a flag that indicates whether the node that generated them got a stale share in the past, and from those flags, nodes can compute the stale share percentage and then the total hash rate with an expression like: unstale_hash_rate / (1 - pool_stale_proportion)
Is it just a flag or a counter? We have whole-network stats for DOA and orphans too; are there 2 flags/counters? With a counter, the stale estimation should be pretty good. If it's a flag, multiple stales between shares would be miscounted as a single stale. What might be underestimated too is the amount of work from nodes which stop mining after stale shares.
Note that I don't think it's an actual problem. With around 9-10 GH/s, I've always seen above 100% PPS, and I'm currently at around 115% PPS for the last active weeks (computed from my actual earnings vs actual hash power). It matches:
- the p2pool.info luck,
- my efficiency,
- the average tx fee percentage.
This is why I'm back mining on p2pool: leasing hashpower promises at least 105% PPS but falls short due to high rejects; I compared my actual PPS over several weeks and the stats are largely in favor of p2pool for me. I left p2pool because I had I/O issues and like to test novelties, but now bitcoind and p2pool are both on SSDs and they are just flying (at least when I'm not rewiring the whole apartment...).
My first SSD should be arriving soon. I'm seriously hoping it decreases my stale rate - like you, I think it's I/O related. M
Have you got the latest bitcoind from git? It's much faster than the stable version, with the new database. With the latest bitcoind from git, my getwork latency dropped about 5-10x.
I'll try it on my server. It sounds risky for my main wallet, but the main wallet isn't used for mining. Thanks! M
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
|