Bitcoin Forum
April 26, 2024, 09:08:41 AM *
Author Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool  (Read 2591624 times)
ChanceCoats123
Hero Member
*****
Offline Offline

Activity: 682
Merit: 500



View Profile
April 26, 2012, 01:48:53 AM
 #2081

This is a bit off topic, and is probably something major that I should know, but why does p2pool use such high share difficulties?
cabin
Sr. Member
****
Offline Offline

Activity: 604
Merit: 250


View Profile
April 26, 2012, 01:57:47 AM
 #2082


Conventional pool rejected shares = invalid hashes + stale hashes (after Bitcoin LP)
p2pool DOA shares = invalid hashes + stale hashes (after Bitcoin LP) + "pre-orphaned" hashes (hash that would be 100% orphaned if submitted)
Only the last one is valid work which can solve a block.


I agree completely with this summary; the percentages I'm not sure about. I'm pretty sure the leads are worth exploring, though. If we are mistakenly including too many dead hashes in the luck calculations, that is no good for the pool's image.
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
April 26, 2012, 02:07:50 AM
 #2083

This is a bit off topic, and is probably something major that I should know, but why does p2pool use such high share difficulties?

No, it is perfectly on topic.

p2pool tries to maintain a 10 s LP window (10 s between shares).  This is a compromise: a shorter LP window means more orphans; a longer LP window means higher difficulty.

One way to look at it: p2pool is ~350 GH/s.  At difficulty 1 that is ~81 shares per second, or <0.01 s between shares.  Work would become stale before it even finished propagating across the network.

So the difficulty is simply a compromise.  It takes a difficulty of ~800 to keep 350 GH/s producing one share per 10 seconds.
twmz
Hero Member
*****
Offline Offline

Activity: 737
Merit: 500



View Profile
April 26, 2012, 02:12:48 AM
 #2084


Conventional pool rejected shares = invalid hashes + stale hashes (after Bitcoin LP)
p2pool DOA shares = invalid hashes + stale hashes (after Bitcoin LP) + "pre-orphaned" hashes (hash that would be 100% orphaned if submitted)
Only the last one is valid work which can solve a block.


I agree completely with this summary; the percentages I'm not sure about. I'm pretty sure the leads are worth exploring, though. If we are mistakenly including too many dead hashes in the luck calculations, that is no good for the pool's image.

p2pool.info relies entirely on the number reported by p2pool itself at http://localhost:9332/rate.  If that is wrong, then the luck calculation will be wrong.  Fixing it is not something I can do; the fix, if there is one to be made, needs to come from forrestv.

Was I helpful?  1TwmzX1wBxNF2qtAJRhdKmi2WyLZ5VHRs
WoT, GPG

Bitrated user: ewal.
kano
Legendary
*
Offline Offline

Activity: 4466
Merit: 1800


Linux since 1997 RedHat 4


View Profile
April 26, 2012, 02:15:31 AM
 #2085

This is a bit off topic, and is probably something major that I should know, but why does p2pool use such high share difficulties?
Because every share is relayed to every p2pool node (not just every block, as in the normal block chain).

So shares must be rare enough to arrive at a rate the network can handle; 10 s is quite fast. :)

To make it 10 s per share on average, the difficulty must rise to match the network hashrate (350 GH/s):

350 GH/s * 10 s / 2^32 hashes per diff-1 share ≈ 815 difficulty
(1 diff = 2^32 hashes)
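The arithmetic above can be sanity-checked in a few lines, using the post's own convention that difficulty 1 corresponds to 2^32 expected hashes:

```python
# Sketch of kano's arithmetic: the share difficulty needed so that a given
# pool hashrate finds one share every `interval_s` seconds on average.
# Difficulty 1 corresponds to 2**32 hashes on average.

def share_difficulty(hashrate_hs: float, interval_s: float) -> float:
    """Difficulty such that `hashrate_hs` yields one share per `interval_s`."""
    return hashrate_hs * interval_s / 2**32

# 350 GH/s with a 10 s share interval:
print(round(share_difficulty(350e9, 10)))  # 815, matching the post
```

So 815 is indeed correct for 350 GH/s and a 10 s target interval.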

Pool: https://kano.is - low 0.5% fee PPLNS 3 Days - Most reliable Solo with ONLY 0.5% fee   Bitcointalk thread: Forum
Discord support invite at https://kano.is/ Majority developer of the ckpool code - k for kano
The ONLY active original developer of cgminer. Original master git: https://github.com/kanoi/cgminer
organofcorti
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1007


Poor impulse control.


View Profile WWW
April 26, 2012, 03:07:23 AM
 #2086

Please stop spreading FUD. P2pool is open source and there is nothing suspect in the code. Do you know what variability and luck are?

Yes, what I'm suggesting is that we may be having issues with orphaned blocks or some such issue.
Just saying, the odds of luck as bad as p2pool now displays are less than 1%

How are you calculating that?

Binomial distribution. I couldn't work out an average share difficulty over the life of p2pool, so I took the values for shares submitted vs. expected shares required for our current number of blocks from p2pool.info and assumed an average difficulty of 1.5 million. So N = some huge number, p = 0.00000066667, k = actual shares found / p.

Not the best, but it shouldn't be far off.

Negative binomial?
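For a sense of the computation being discussed, here is a hedged sketch using a Poisson approximation to the binomial (for a huge number of trials with tiny success probability, the two agree closely; D&T's negative-binomial framing would instead model shares-until-k-blocks). All counts below are invented for illustration; the real inputs would come from p2pool.info:

```python
import math

# How unlikely is it to have found only `blocks_found` blocks when
# `expected_blocks` were expected from the shares submitted?
# Binomial(N huge, p tiny) ~ Poisson(lam = N * p).

def poisson_cdf(k: int, lam: float) -> float:
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(lam**i * math.exp(-lam) / math.factorial(i) for i in range(k + 1))

expected_blocks = 50.0  # hypothetical: shares submitted * per-share block probability
blocks_found = 31       # hypothetical observed count (~160% of expected round length)
p = poisson_cdf(blocks_found, expected_blocks)
print(f"P(<= {blocks_found} blocks | {expected_blocks} expected) = {p:.4f}")
```

With these made-up numbers the tail probability comes out well under 1%, which is the flavour of the "less than 1%" claim above.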

Bitcoin network and pool analysis 12QxPHEuxDrs7mCyGSx1iVSozTwtquDB3r
follow @oocBlog for new post notifications
ChanceCoats123
Hero Member
*****
Offline Offline

Activity: 682
Merit: 500



View Profile
April 26, 2012, 03:52:46 AM
 #2087

Thank you two for the explanation. Makes perfect sense. :)
organofcorti
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1007


Poor impulse control.


View Profile WWW
April 26, 2012, 06:19:04 AM
 #2088

@D&T:

The '% of expected' round 'lengths' published on p2pool.info are uniformly distributed, and have a very regular mean of about 160% (p-value for the linear model is less than 10^-12). If the '% of expected' works the way I think it does (normalises round length to D) then I'd expect '% of expected' to be geometrically distributed and the average to be 100%.

I think I'm not getting something about the way the '% luck' is calculated?

Ente
Legendary
*
Offline Offline

Activity: 2126
Merit: 1001



View Profile
April 26, 2012, 09:45:00 AM
 #2089


The dead shares are not relevant to the bad luck since they can actually solve for a block and result in everyone getting rewarded. P2Pool does look at them and double check if they solve a block before reporting back to cgminer as 'rejected'.

The type of problem we are looking for is exactly the opposite, actually: i.e. someone who is successfully submitting non-orphan, non-DOA shares, yet not finding any blocks. I can only think of far-fetched scenarios under which this might happen, but here are two such examples:

1) Someone modifies their p2pool code so they submit shares as normal but never submit a block. Stupid and unlikely, but theoretically possible.
2) Some obscure bug in some combination of bitcoind and p2pool, such that rarely a found block fails to be passed on to bitcoind (and yet shares and getworks continue to flow freely, the bug would have to only affect the case where it is an actual block solve)

As an example of 2): if anyone finds this line in your logs, congratulations, you are the problem. :)
"Error while processing potential block"


About 1):
Every miner working on Bitcoin, whether at a pool, solo, or on p2pool, has to decide what he wants to deliver before he finds the hash. That is because he first has to decide *what* to hash, then try a billion variants of that data; if/when he finds a valid hash, he cannot change the data and still have a valid hash.
Exactly that discussion came up some weeks ago about the share-difficulty:

1) decide which share difficulty you work on, e.g. 600 or 2000
2) hash a million nonces, where you are allowed to change a "non-important" part of the data on every try, so you get a different hash result each time
3) eventually, a hash reaches the target decided upon in 1); that means the first x digits of the hash are zero
4) submit exactly the block you solved (including the exact transactions and nonce) to the pool, or the bitcoin network, or the p2pool network
5) profit

what you try:

3) you found a block+nonce combination with a valid hash. Let's say you worked on "600 difficulty" data, as decided in 1). You now luckily and randomly found a hash with many more zeros than needed for "600"; in fact it would pass a difficulty of "5000"!
4a) you think "yay, I want to be paid out for that "5000" now, not just the stinky "600"!"
4b) you take that exact block+nonce and edit the "this is a 600 block" claim to "this is a 5000 block lol"
4c) you hash that block+nonce from 4b)
4d) you get an entirely different hash, which surely won't even pass a "1" target. Crash and burn.

I wrote that example around p2pool share difficulty because I find it easier to understand. The point is, you have no way to change the block+nonce and still get the same or a similar hash. The hash, and the hash only, decides between "jackpot" and "totally worthless".  And, of course, all the important data is committed to by the block, including, among other things, the hash of the previous block.
Now when you work on p2pool shares, the work commits to both the "p2pool share data" and the "bitcoin block data". Any manipulation will change the hash, so if you strip the p2pool part out of the data you found a valid hash for, the hash no longer fits.
(I haven't read up on the details yet; the way the p2pool and bitcoin data are combined is quite clever, or else we could never solve a bitcoin block at all. Merged mining works along the same lines.)

The only thing you, as a p2pool miner, can do is drop the valid bitcoin block hash you found. No profit for the p2pool gang, but obviously no profit for you either. The only effect is that it makes the "luck" graph look bad. Which is exactly what we are talking about, of course..

Hope that helps, and correct anything I got wrong!

Ente
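Ente's point can be demonstrated in a few lines: re-labelling already-hashed data produces an unrelated hash, so a miner cannot upgrade a "600" share into a "5000" share after the fact. The "header" strings here are made-up stand-ins, not real block headers:

```python
import hashlib

# Double SHA-256, as Bitcoin uses. The claimed difficulty is part of the
# hashed data, so changing the claim invalidates the hash you found.

def dhash(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

header_600 = b"prev_hash|merkle_root|nonce=12345|claimed_diff=600"
header_5000 = b"prev_hash|merkle_root|nonce=12345|claimed_diff=5000"

h1 = dhash(header_600)
h2 = dhash(header_5000)
print(h1.hex())
print(h2.hex())
print("hashes match:", h1 == h2)  # False: editing the claim destroys the work
```

The same mechanism is why a p2pool share that also meets the bitcoin target cannot have its p2pool data stripped out: the hash commits to all of it at once.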

twmz
Hero Member
*****
Offline Offline

Activity: 737
Merit: 500



View Profile
April 26, 2012, 12:34:35 PM
 #2090

@D&T:

The '% of expected' round 'lengths' published on p2pool.info are uniformly distributed, and have a very regular mean of about 160% (p-value for the linear model is less than 10^-12). If the '% of expected' works the way I think it does (normalises round length to D) then I'd expect '% of expected' to be geometrically distributed and the average to be 100%.

I think I'm not getting something about the way the '% luck' is calculated?

Raw data is here:  http://p2pool.info/blocks

For each block, you can see:

  • The actual number of "difficulty 1" shares submitted before the block was found.  Note that, since we don't actually know how many difficulty-1 shares were submitted, this is an estimate of how many should have been submitted, based on the average hashrate at the time and the duration of the round.  So if the hashrate published by http://localhost:9332/rate is wrong, this share count will also be wrong.
  • The estimated number of "difficulty 1" shares theoretically needed based on the bitcoin difficulty at the time

% expected for a single block is:  actual shares / expected shares

% luck over a 30 day window is (sum of expected shares for all blocks found within 30 days) / (sum of all actual shares for all same blocks)
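The two formulas above can be sketched directly. The per-block numbers below are invented, but shaped like the records at p2pool.info/blocks (an "ActualShares" estimate plus an expected-shares figure derived from the bitcoin difficulty):

```python
# Hypothetical per-block records; the field names are illustrative.
blocks = [
    {"ActualShares": 545905, "ExpectedShares": 400000},
    {"ActualShares": 300000, "ExpectedShares": 410000},
    {"ActualShares": 820000, "ExpectedShares": 420000},
]

def pct_expected(block) -> float:
    """% expected for a single block: actual shares / expected shares."""
    return 100.0 * block["ActualShares"] / block["ExpectedShares"]

def pct_luck(window) -> float:
    """% luck over a window: sum of expected shares / sum of actual shares."""
    return 100.0 * (sum(b["ExpectedShares"] for b in window)
                    / sum(b["ActualShares"] for b in window))

for b in blocks:
    print(f"{pct_expected(b):.1f}% of expected")
print(f"window luck: {pct_luck(blocks):.1f}%")
```

Note the two ratios are inverses of each other: rounds running longer than expected (high "% of expected") drag the window's "% luck" below 100%.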

Ente
Legendary
*
Offline Offline

Activity: 2126
Merit: 1001



View Profile
April 26, 2012, 12:43:08 PM
 #2091

I would say the "difficulty 1 shares" are close enough as an estimate of hashing power. They are easy enough to find, and other pools use them for calculating payouts (and surely the pool's hashpower) too.
So, if we can be sure the "number of diff-1 shares submitted between the last share and this share" is correct, I would use it as the base of all later calculations.


  • The actual number of "difficulty 1" shares submitted before the block was found.  Note, that since we don't actually know how many "difficulty 1" shares were submitted, [..]
I don't understand that. Sounds contradictory to me?

Ente
organofcorti
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1007


Poor impulse control.


View Profile WWW
April 26, 2012, 12:58:42 PM
 #2092

Either way, both 'actual' and 'estimated' shares should be geometrically distributed, right?

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
April 26, 2012, 01:42:27 PM
 #2093

@D&T:

The '% of expected' round 'lengths' published on p2pool.info are uniformly distributed, and have a very regular mean of about 160% (p-value for the linear model is less than 10^-12). If the '% of expected' works the way I think it does (normalises round length to D) then I'd expect '% of expected' to be geometrically distributed and the average to be 100%.

I think I'm not getting something about the way the '% luck' is calculated?

Raw data is here:  http://p2pool.info/blocks

For each block, you can see:

  • The actual number of "difficulty 1" shares submitted before the block was found.  Note that, since we don't actually know how many difficulty-1 shares were submitted, this is an estimate of how many should have been submitted, based on the average hashrate at the time and the duration of the round.  So if the hashrate published by http://localhost:9332/rate is wrong, this share count will also be wrong.
  • The estimated number of "difficulty 1" shares theoretically needed based on the bitcoin difficulty at the time

% expected for a single block is:  actual shares / expected shares

% luck over a 30 day window is (sum of expected shares for all blocks found within 30 days) / (sum of all actual shares for all same blocks)

Thanks for all that.  If there is an error, my guess is it is in the "average hashrate".  It might be better to simply count the total shares (and difficulty) received by your node.

I will take a closer look at how the avg hashrate is calculated.  An error there would mean you are starting with "dirty data".

Note: I am not saying there IS an error, just that given the 15% divergence we should take a closer look.  Your computation "downstream" of the avg hashrate looks valid.

On edit:
A clarification
Code:
"ActualShares":545905
You are calculating this by polling http://localhost:9332/rate periodically? how often?
then getting avg hashrate over the block?
then duration * avg hashrate = # of hashes?
then # of hashes / 2^32 = # of shares?
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
April 26, 2012, 01:48:29 PM
 #2094


  • The actual number of "difficulty 1" shares submitted before the block was found.  Note, that since we don't actually know how many "difficulty 1" shares were submitted,

I don't understand that. Sounds contradictory to me?

p2pool doesn't record the # of diff-1 shares anywhere.  twmz is estimating the # of diff-1 shares from the average hashrate during the block.  If there is some inaccuracy in the average hashrate, it will show up as an inaccurate # of "actual shares".

"Actual" is more of an estimate; he is using the word to mean actual as opposed to expected.
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
April 26, 2012, 04:38:43 PM
Last edit: April 26, 2012, 06:18:50 PM by DeathAndTaxes
 #2095

So I have been taking a close look at p2pool lately (for my own peace of mind if nothing else).

Shouldn't the p2pool node report all DOA shares as rejected back to the miner?

Across my entire farm cgminer shows ~ 1.5% reject rate ("R" in cgminer).  p2pool shows ~2.5% DOA rate.

My assumption is the logic flow is something like this:
1) miner requests work (getwork).
2) p2pool provides work.
3) miner returns low diff shares (I have mine set to a static diff 1).
4) p2pool verifies share
   a) if the share is DOA it increments the DOA count by one, and sends a reject notification to the miner
   b) if share is > share diff and not dead it submits it to all peers for inclusion in share chain and sends accept notification to miner
   c) if share is > block diff it submits it to bitcoin network and sends accept notification to miner

So why doesn't DOA % match reject % in cgminer?
Red Emerald
Hero Member
*****
Offline Offline

Activity: 742
Merit: 500



View Profile WWW
April 26, 2012, 06:02:59 PM
 #2096

So I have been taking a close look at p2pool lately (for my own peace of mind if nothing else).

Shouldn't the p2pool node report all DOA shares as rejected back to the miner?

Across my entire farm cgminer shows ~ 1.5% reject rate ("R" in cgminer).  p2pool shows ~2.5% DOA rate.

My assumption is the logic flow is something like this:
1) miner requests work (getwork).
2) p2pool provides work.
3) miner returns low diff shares (I have mine set to a static diff 1).
4) p2pool verifies share
   a) if the share is DOA it increments the DOA count by one, and sends a reject notification to the miner
   b) if share is < share diff and not dead it submits it to all peers for inclusion in share chain and sends accept notification to miner
   c) if share is < block diff it submits it to bitcoin network and sends accept notification to miner

So why doesn't DOA % match reject % in cgminer?
I think you meant less than, not greater than. I don't know the answer to your question, though. :(

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
April 26, 2012, 06:18:28 PM
 #2097

Less than target, or greater than difficulty: one or the other.
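The comparisons being untangled here can be sketched as a toy classifier: a hash wins by being at or below a target, and a higher difficulty means a lower target, so "greater than difficulty" and "less than target" describe the same event. The constants and flow below are illustrative assumptions following D&T's description, not p2pool's actual code:

```python
# A toy sketch (not p2pool's real code) of the verification flow described
# upthread. A hash "meets" a difficulty when, taken as a 256-bit integer,
# it is at or below that difficulty's target.

MAX_TARGET = 0xFFFF * 2**208  # approximate difficulty-1 target

def target_for(difficulty: float) -> int:
    """Target corresponding to a given difficulty (illustrative)."""
    return int(MAX_TARGET / difficulty)

def classify(hash_int: int, share_diff: float, block_diff: float, is_stale: bool) -> str:
    """Classify a submitted hash along the lines of the flow above."""
    if hash_int > target_for(1):
        return "invalid"      # doesn't even meet difficulty-1 work
    if is_stale:
        return "doa"          # counted dead; reject sent back to the miner
    if hash_int <= target_for(block_diff):
        return "block"        # also a valid share; submitted to bitcoin
    if hash_int <= target_for(share_diff):
        return "share"        # accepted into the share chain
    return "pseudoshare"      # diff-1 work: counts toward stats only

# e.g. a hash just meeting the share target, with share difficulty 800:
print(classify(target_for(800), 800, 1_500_000, False))  # share
```

(As noted earlier in the thread, real p2pool also re-checks dead shares against the block target before rejecting them; this sketch keeps D&T's simpler ordering.)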
Red Emerald
Hero Member
*****
Offline Offline

Activity: 742
Merit: 500



View Profile WWW
April 26, 2012, 06:23:33 PM
 #2098

Less than target, or greater than difficulty: one or the other.
Ah yes. I see now.

twmz
Hero Member
*****
Offline Offline

Activity: 737
Merit: 500



View Profile
April 27, 2012, 04:17:29 AM
 #2099

Thanks for all that.  If there is an error, my guess is it is in the "average hashrate".  It might be better to simply count the total shares (and difficulty) received by your node.

That is exactly how the pool hashrate is calculated, as far as I know.  p2pool does the calculation and publishes the hashrate at http://localhost:9332.  I just pull that value and calculate an average over the duration of a block.

I will take a closer look at how the avg hashrate is calculated.  An error there would mean you are starting with "dirty data".

Agreed.  I have said basically the same thing several times in this thread.  If the hashrate reported by p2pool is too high, our luck will appear artificially lower than it actually is.  Forrest has reassured me several times that p2pool's hashrate calculation is correct, but I have not validated that myself by reviewing the code.


A clarification
Code:
"ActualShares":545905
You are calculating this by polling http://localhost:9332/rate periodically? how often?
then getting avg hashrate over the block?
then duration * avg hashrate = # of hashes?
then # of hashes / 2^32 = # of shares?

Yes.  I collect the pool's hashrate from the http://localhost:9332/rate API every 5 minutes, then calculate the average hashrate over the time between blocks being found, then calculate the # of shares exactly as you indicated (hashrate * duration / 2^32).  The only reason I use diff-1 shares instead of hashes directly is the convenience of smaller numbers.

Raw hashrate data is here:  http://p2pool.info/stats

Note: hashrate stats are only one per hour for the first few months of p2pool's life, because that is what forrest had available to backfill my database.  From the time p2pool.info was created (sometime in early Feb) onward, hashrate data is in 5-minute increments.  The code that calculates the average hashrate actually computes a weighted average, to deal with the mixed frequency of hashrate samples within a single block.
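The pipeline described here (periodic /rate samples, a time-weighted average over the round, then hashrate × duration / 2^32) can be sketched as follows. The sample data is invented for illustration:

```python
# Hypothetical (unix_time, hashrate in H/s) pairs, as polled from /rate.
samples = [
    (0,   340e9),
    (300, 360e9),
    (600, 350e9),
]
round_end = 900  # round length in seconds

def weighted_avg_hashrate(samples, end_time) -> float:
    """Average hashrate, weighting each sample by how long it was current.

    Assumes samples are time-ordered and each rate holds until the next
    sample (the last one holds until end_time) — this handles the mixed
    1-hour / 5-minute sample frequencies mentioned above.
    """
    total = 0.0
    for (t, rate), (t_next, _) in zip(samples, samples[1:] + [(end_time, None)]):
        total += rate * (t_next - t)
    return total / (end_time - samples[0][0])

avg = weighted_avg_hashrate(samples, round_end)
actual_shares = avg * round_end / 2**32  # duration * avg hashrate / 2^32
print(f"avg hashrate {avg / 1e9:.0f} GH/s, ~{actual_shares:.0f} diff-1 shares")
```

Any bias in the polled hashrate propagates directly into "ActualShares", which is exactly the "dirty data" concern raised above.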

forrestv (OP)
Hero Member
*****
Offline Offline

Activity: 516
Merit: 643


View Profile
April 27, 2012, 03:54:37 PM
 #2100

So I have been taking a close look at p2pool lately (for my own peace of mind if nothing else).

Shouldn't the p2pool node report all DOA shares as rejected back to the miner?

Across my entire farm cgminer shows ~ 1.5% reject rate ("R" in cgminer).  p2pool shows ~2.5% DOA rate.

My assumptions is the logic flow is something like this
1) miner requests work (getwork).
2) p2pool provides work.
3) miner returns low diff shares (I have mine set to a static diff 1).
4) p2pool verifies share
   a) if the share is DOA it increments the DOA count by one, and sends a reject notification to the miner
   b) if share is > share diff and not dead it submits it to all peers for inclusion in share chain and sends accept notification to miner
   c) if share is > block diff it submits it to bitcoin network and sends accept notification to miner

So why doesn't DOA % match reject % in cgminer?

You do mean the dead-on-arrival pseudoshare rate, not the dead share rate, right? The dead-on-arrival rate should match cgminer's, but the one displayed on the console is only averaged over 10 minutes. Did you look at the graphs to get a more accurate rate? (area of local dead / area of local total)

1J1zegkNSbwX4smvTdoHSanUfwvXFeuV23