Bitcoin Forum
December 03, 2016, 11:57:35 PM
Pages: « 1 ... [96] ... 744 »
Author Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool  (Read 2029198 times)
Ente
Legendary
*
Offline Offline

Activity: 1834



View Profile
April 11, 2012, 07:24:25 AM
 #1901

just got port forwarding setup so connections for both bitcoin and p2pool are on the rise

Incoming connections on p2pool too I guess?
Do you have a static ip address?

Ente
camolist
Hero Member
*****
Offline Offline

Activity: 868


View Profile WWW
April 11, 2012, 07:30:13 AM
 #1902

just got port forwarding setup so connections for both bitcoin and p2pool are on the rise

Incoming connections on p2pool too I guess?
Do you have a static ip address?

Ente

only bitcoin at the moment

strangely not yet in p2pool but it's only been about 6 hours. think someone mentioned on this forum that it starts happening around 8 hours? so we'll see


static-ish. router will keep the ip as long as it has power - will get a new ip if down for more than the couple of minutes it takes to reboot


thinking of setting this up on a lightly used personal server in the datacenter with my business stuff which would have a static /dedicated ip available

spiccioli
Legendary
*
Offline Offline

Activity: 1376

nec sine labore


View Profile
April 11, 2012, 12:33:25 PM
 #1903

Well 6 blocks in last 24 hours is a good start.   But dug ourselves a deep hole over the last 7 days.

D&T,

given this from p2pool wiki

Code:
Because the sharechain is 60 times faster than the Bitcoin chain, stales are common and expected. However, because the payout is PPLNS, only your stale rate relative to other nodes is relevant; the absolute rate is not.

and given 90-day luck around 90% and a p2pool-wide stale rate around 10%, can it be that the stale rate is not relevant for individual miners but becomes relevant for p2pool as a whole?

Or, to say it another way: p2pool wastes around 10% of its hashing power, and because of this all graphs which use p2pool hashing power, without subtracting the stale rate, report a higher than correct expected number of blocks?

spiccioli.
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218


Gerald Davis


View Profile
April 11, 2012, 12:43:24 PM
 #1904

There is no such thing as "wasting hashing power".
You either find a block or you don't.  If you don't find a block the nonce is worthless.  I don't mean worth little, I mean absolutely worthless.
Nonces which don't solve a block are like losing lottery tickets.  Failed attempts aren't progress towards a block.  A large number of failed tries doesn't increase the chance of finding a block in the future.  The only share worth anything is the one that solves a block, and at current difficulty that occurs once every 2^32 * 1,626,856.73 =  6,987,296,450,627,500 nonces.

1 nonce = 50 BTC
6,987,296,450,627,499 nonces = 0 BTC

If we awarded compensation based on actual value of nonces found it is .... solo mining.  One would either get 50 BTC or 0 BTC for every nonce hashed.  Pool mining is a method to FAIRLY and EQUITABLY distribute that same 50 BTC to reduce variance.  We track FAILED WORTHLESS WORK because it can't be cheated.  Nothing more, nothing less.

Pools are more like block insurance companies.  In keeping with the "no/limited trusted 3rd party" mantra of Bitcoin, we use cryptography to ensure work can't be faked.  A pool could use a custom miner which records nonces (like a nonce odometer) and every time a block is found collects nonce recordings from all miners and splits the block by how many nonces each miner attempted.  The obvious problem is that a hacked miner would allow a miner to cheat.  Since hashes of lower difficulty can't be faked, they provide good "proof" of the approximate # of hashes attempted (subject to variance).

The 10% of work which was attempted (but failed) and then was orphaned needs to be included in the stats because it was valid work attempted.  The fact that the technicalities of p2pool orphaned it from the sharechain (compensated work) doesn't change that.

It is just layers of abstraction.
Actual Work: # of nonces accurately and timely hashed (it takes on average ~7 quadrillion hashes to find one which meets current difficulty target)
Proxy for Work: nonces meeting share difficulty (1 for most pools, ~600 for p2pool) thus one p2pool share (diff 600) is a PROXY for 2.58 trillion nonces attempted.
Compensated Work: shares included in the chain (excludes orphaned shares which are a technical limit of p2pool short LP time)

One could say a solo miner orphans (discards) 100% of the failed work.  You wouldn't say a solo miner finds 1 block per share, right?  The attempted work, even if discarded, needs to be recorded.

If your orphan rate is ~= pool's orphan rate then your % of compensated work (shares in sharechain) ~= your % of proxy for work (shares) ~= your % of actual work (nonces hashed).
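The arithmetic above is easy to sanity-check; a minimal Python sketch (the difficulty and the ~600 share difficulty are the figures quoted in this post):

```python
# Expected nonces (hashes) tried per block at the quoted difficulty.
difficulty = 1_626_856.73
hashes_per_block = difficulty * 2**32
print(f"{hashes_per_block:,.0f}")  # ~6,987,296,450,627,500 - D&T's figure

# One p2pool share at difficulty ~600 is a proxy for this many nonces:
share_difficulty = 600
hashes_per_share = share_difficulty * 2**32
print(f"{hashes_per_share:,.0f}")  # 2,576,980,377,600 - the "~2.58 trillion"
```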


Orphaned blocks
p2pool doesn't show any higher (~1% of main fork) orphan rate ON BLOCKS.  If p2pool had 10% of its BLOCKS orphaned that would obviously cause a reduction in revenue and a corresponding increase in # of shares per block.  The orphan rate on p2pool blocks is a better indication of "lost work".  An abnormally high number would indicate a problem/delay w/ the network broadcasting its blocks.  Keep in mind even here relative values are what matter.  The true figure should be ~1% higher due to orphaned blocks (yes, orphaned blocks represent work).  If there were no orphaned blocks, miners collectively wouldn't earn any more.

Dead on arrival (bad/stale/invalid shares killed by the local node)
DOA are bad/stale/invalid shares that die before they even get broadcast to the p2pool network; they can never be a block, even if they meet the diff target, as they would be rejected by the Bitcoin network.  They are "wasted hashing power".  DOAs, unlike orphans, do affect the # of shares per block.  If 2% of shares are bad/stale/invalid (not just orphaned) then 2% of your blocks will be bad/stale/invalid.  This affects all pools, not just p2pool.  I would assume that any hashing graph is using post-DOA hashrates.  Still, DOA is a much smaller % than orphans so the difference isn't very large.
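A hedged sketch of that distinction in Python (the 500 GH/s pool hashrate is a made-up illustration; the difficulty and the ~2% DOA figure are from this thread):

```python
# DOA work never reaches the Bitcoin network, so it scales down the
# expected block rate; orphaned p2pool shares do not have this effect.
def expected_blocks(hashrate_hs, difficulty, seconds, doa_rate=0.0):
    """Expected blocks found in `seconds` at `hashrate_hs` hashes/sec,
    after discounting dead-on-arrival work. Names are illustrative."""
    effective_hashrate = hashrate_hs * (1 - doa_rate)
    return effective_hashrate * seconds / (difficulty * 2**32)

# A hypothetical 500 GH/s pool over one day, with 2% DOA:
print(expected_blocks(500e9, 1_626_856.73, 24 * 3600, doa_rate=0.02))
```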

On edit: modified for clarification (of course that made it even longer ... grr).
On edit 2:  crap crap crap.  tried to simplify and turned it into a novel ("as the nonce hashes").  Sorry for wall of text.  I don't have the heart to rip it up now.
DiabloD3
Legendary
*
Offline Offline

Activity: 1162


DiabloMiner author


View Profile WWW
April 11, 2012, 12:45:24 PM
 #1905

There is no such thing as "wasting hashing power".
You either find a block or the share is worthless.
If some of the worthless shares get orphaned how much have you lost?

p2pool doesn't show any higher (~1% of main fork) orphan rate ON BLOCKS.

p2pool could function with a 99.999% SHARE orphan rate but the variance and unfairness of that would cause PR (not technical) problems.

10 second LP is a compromise between share orphans and difficulty.

The thing that seems hard for people to understand is shares have ABSOLUTELY NO VALUE.  They aren't progress towards a block they are completely worthless.  We simply use them because it is a cheat proof mechanism to fairly split rewards.  Nothing more, nothing less.  So if 10% of worthless shares are lost how much value/blocks/work is lost? .... Nothing.  10% of 0 is still 0. Smiley

Only DOA shares affect the rate blocks are found.  DOA are bad/stale/invalid shares that die before they even get broadcast to the p2pool network.  Thus even if they met the target requirements for a block, the Bitcoin network (not just p2pool) would reject them.  I would assume that any hashing graph is using post-DOA hashrates.  Still, DOA looks to only be about 2% of the network.

All of this so goddamned much.

spiccioli
Legendary
*
Offline Offline

Activity: 1376

nec sine labore


View Profile
April 11, 2012, 01:07:13 PM
 #1906

DeathAndTaxes,

very clear and very much appreciated!

Thanks.

spiccioli.

ps. this is where I heard a click inside my  brain Wink

Quote
If we awarded compensation based on actual value of nonces found it is .... solo mining.  Pool mining simply finds a mechanism to FAIRLY distribute that 50 BTC to reduce variance.  We track FAILED WORTHLESS WORK because it can't be cheated.  Nothing more, nothing less.
fabrizziop
Hero Member
*****
Offline Offline

Activity: 602



View Profile
April 11, 2012, 01:11:56 PM
 #1907

Any plans to implement merged mining on P2Pool?
spiccioli
Legendary
*
Offline Offline

Activity: 1376

nec sine labore


View Profile
April 11, 2012, 01:17:46 PM
 #1908

There is no such thing as "wasting hashing power".

...

D&T

this should go, IMHO, into 1st page and/or p2pool wiki.

spiccioli
kano
Legendary
*
Offline Offline

Activity: 1918


Linux since 1997 RedHat 4


View Profile
April 11, 2012, 01:57:40 PM
 #1909

Except ... it isn't correct.

There IS wasted hashing power.

The problem is not P2Pool specifically, but rather that people believe it is OK to get a high crappy reject rate (9%) because someone here said it was OK to be that high rate while they were getting a much lower rate.

If you use a good miner program and configure it correctly you will not get a high crappy 9% reject rate.

The cause is actually that the miners are not by default configured to handle the ridiculously high share rate (10 seconds)
So P2Pool is the cause, but the solution is simply to configure your miner to handle that issue.

Aside: if you have one or more BFL FPGA Singles, you cannot mine on P2Pool without wasting a large % of your hash rate.

Pool: https://kano.is BTC: 1KanoiBupPiZfkwqB7rfLXAzPnoTshAVmb
CKPool and CGMiner developer, IRC FreeNode #ckpool and #cgminer kanoi
Help keep Bitcoin secure by mining on pools with Stratum, the best protocol to mine Bitcoins with ASIC hardware
forrestv
Hero Member
*****
Offline Offline

Activity: 510


View Profile
April 11, 2012, 02:04:54 PM
 #1910

Any plans to implement merged mining on P2Pool?

First, P2Pool has long supported solo merged mining. However, as for pooled merged mining...

Merged mining, as it exists, can not efficiently be implemented because every share would need to include the full generation transaction from the parent chain. However, P2Pool's generation transaction is pretty large, and so would increase the size of P2Pool shares by more than an order of magnitude (along with P2Pool's network usage).

There are a few solutions: compute main-P2Pool's generation transaction instead of redundantly  storing nearly the same thing over and over. Alternatively, change the merged mining spec to not require storing the entire parent gentx.

I don't like the first because it would be very complex and tie the MM-P2Pool to the main-P2Pool. The second is obviously impractical in the short term.

Anyone else have ideas?

1J1zegkNSbwX4smvTdoHSanUfwvXFeuV23
DiabloD3
Legendary
*
Offline Offline

Activity: 1162


DiabloMiner author


View Profile WWW
April 11, 2012, 02:12:10 PM
 #1911

Except ... it isn't correct.

There IS wasted hashing power.

The problem is not P2Pool specifically, but rather that people believe it is OK to get a high crappy reject rate (9%) because someone here said it was OK to be that high rate while they were getting a much lower rate.

If you use a good miner program and configure it correctly you will not get a high crappy 9% reject rate.

The cause is actually that the miners are not by default configured to handle the ridiculously high share rate (10 seconds)
So P2Pool is the cause, but the solution is simply to configure your miner to handle that issue.

Aside: if you have one or more BFL FPGA Singles, you cannot mine on P2Pool without wasting a large % of your hash rate.

Except reject rate means nothing, delta of average reject rate is what you need to pay attention to.

Also, BFL's firmware is broken, they won't return shares until it's done 2^32 hashes, and any attempt to force it to update on long poll dumps valid shares. BFL needs to fix their shit before they sell any more FPGAs.

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218


Gerald Davis


View Profile
April 11, 2012, 02:14:46 PM
 #1912

Except ... it isn't correct.

There IS wasted hashing power.

The problem is not P2Pool specifically, but rather that people believe it is OK to get a high crappy reject rate (9%) because someone here said it was OK to be that high rate while they were getting a much lower rate.

If you use a good miner program and configure it correctly you will not get a high crappy 9% reject rate.

The cause is actually that the miners are not by default configured to handle the ridiculously high share rate (10 seconds)
So P2Pool is the cause, but the solution is simply to configure your miner to handle that issue.

Aside: if you have one or more BFL FPGA Singles, you cannot mine on P2Pool without wasting a large % of your hash rate.

Orphans aren't wasted hashing power for the p2pool "pool", which is what was being discussed.  The node will broadcast any blocks it finds to all p2pool peers and all Bitcoin peers.  Thus even a miner with an 80% orphan rate isn't wasting his hashing power from the point of view being discussed, which is avg # of shares per block (or pool luck).

I think it is made pretty clear one's PERSONAL compensation depends on relative orphan rate.

Miner has a 5% orphan rate, p2pool has a 10% orphan rate.  Miner is compensated 5% over "fair value".
Miner has a 10% orphan rate, p2pool has a 10% orphan rate.  Miner is compensated "fair value".
Miner has a 15% orphan rate, p2pool has a 10% orphan rate.  Miner is compensated 5% under "fair value".

Quote
Aside: if you have one or more BFL FPGA Singles, you cannot mine on P2Pool without wasting a large % of your hash rate.
Even there the hashing power isn't WASTED.  Blocks will still be found at the same rate regardless of orphan rate, but the miner's compensation will be lower (due to the miner having a higher orphan rate relative to the pool).


Still, theoretically I do think it is possible to make a "merged" sharechain.  Bitcoin must have a single block at each height.  This is an absolute requirement because blocks aren't just compensation: they include tx, and there must be a single consensus on which tx are included in a block (or set of blocks).

With p2pool it may be possible to include "late shares" in the chain to reduce the orphan rate.  Honestly not sure if it is worth it because, as discussed, if one's orphan rate is ~= the pool's orphan rate the absolute values don't really matter.

Miner 0% orphan, pool 0% orphan is the same as Miner 10% orphan, pool 10% orphan.
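Those "fair value" examples reduce to a simple ratio; a sketch under that reading (the function name is mine, and D&T's ±5% figures are rounded versions of the ~±5.6% this ratio gives):

```python
def relative_compensation(miner_orphan_rate, pool_orphan_rate):
    """Fraction of 'fair value' a PPLNS miner earns when his share orphan
    rate differs from the pool's as a whole. 1.0 means exactly fair."""
    return (1 - miner_orphan_rate) / (1 - pool_orphan_rate)

print(relative_compensation(0.05, 0.10))  # ~1.056: paid over "fair value"
print(relative_compensation(0.10, 0.10))  # 1.0: exactly "fair value"
print(relative_compensation(0.15, 0.10))  # ~0.944: paid under "fair value"
# 0%/0% and 10%/10% really are equivalent, as the post says:
print(relative_compensation(0.0, 0.0) == relative_compensation(0.10, 0.10))  # True
```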
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218


Gerald Davis


View Profile
April 11, 2012, 02:26:25 PM
 #1913

There are a few solutions: compute main-P2Pool's generation transaction instead of redundantly storing nearly the same thing over and over. Alternatively, change the merged mining spec to not require storing the entire parent gentx.

I don't like the first because it would be very complex and tie the MM-P2Pool to the main-P2Pool. The second is obviously impractical in the short term.

Anyone else have ideas?

Yeah, even if space/bandwidth weren't an issue I don't like complicating the sharechain w/ merge mining data.  Most of the alt chains are nearly worthless and I wonder about the load if it became popular to merge-mine a dozen or more alt-chains on p2pool.

Local generation may be rough on low end nodes so anything which makes p2pool less viable isn't worth the cost IMHO.

Would it be possible to have a separate merge mining chain and a different p2pool instance?  Still, I am not clear on what level of communication or interaction is necessary between instances, or even if it is possible.

Given the nearly worthless nature of alt-coins I don't see it as a useful venture.  There is so much that can be done to improve p2pool (in terms of GUI frontends, monitoring/reporting, updated docs, custom distros, simplification, etc) that I would hate to see any skill, resources, and time devoted to worthless alt-chains.
freshzive
Sr. Member
****
Offline Offline

Activity: 447


View Profile
April 11, 2012, 02:50:38 PM
 #1914

p2pool randomly freezes up (freezing my Mac for ~10 seconds) every half an hour or so. Any idea what's causing this? Should I use a different version of Python?

Code:
2012-04-11 07:47:16.501037 > Watchdog timer went off at:
2012-04-11 07:47:16.501107 >   File "run_p2pool.py", line 5, in <module>
2012-04-11 07:47:16.501141 >     main.run()
2012-04-11 07:47:16.501172 >   File "/Users/christian/p2pool/p2pool/main.py", line 1005, in run
2012-04-11 07:47:16.501203 >     reactor.run()
2012-04-11 07:47:16.501234 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.0.0-py2.7-macosx-10.5-intel.egg/twisted/internet/base.py", line 1169, in run
2012-04-11 07:47:16.501267 >     self.mainLoop()
2012-04-11 07:47:16.501297 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.0.0-py2.7-macosx-10.5-intel.egg/twisted/internet/base.py", line 1178, in mainLoop
2012-04-11 07:47:16.501331 >     self.runUntilCurrent()
2012-04-11 07:47:16.501361 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.0.0-py2.7-macosx-10.5-intel.egg/twisted/internet/base.py", line 800, in runUntilCurrent
2012-04-11 07:47:16.501394 >     call.func(*call.args, **call.kw)
2012-04-11 07:47:16.501424 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.0.0-py2.7-macosx-10.5-intel.egg/twisted/internet/defer.py", line 368, in callback
2012-04-11 07:47:16.501456 >     self._startRunCallbacks(result)
2012-04-11 07:47:16.501487 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.0.0-py2.7-macosx-10.5-intel.egg/twisted/internet/defer.py", line 464, in _startRunCallbacks
2012-04-11 07:47:16.501520 >     self._runCallbacks()
2012-04-11 07:47:16.501550 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.0.0-py2.7-macosx-10.5-intel.egg/twisted/internet/defer.py", line 551, in _runCallbacks
2012-04-11 07:47:16.501583 >     current.result = callback(current.result, *args, **kw)
2012-04-11 07:47:16.501614 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.0.0-py2.7-macosx-10.5-intel.egg/twisted/internet/defer.py", line 1101, in gotResult
2012-04-11 07:47:16.501647 >     _inlineCallbacks(r, g, deferred)
2012-04-11 07:47:16.501677 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.0.0-py2.7-macosx-10.5-intel.egg/twisted/internet/defer.py", line 1045, in _inlineCallbacks
2012-04-11 07:47:16.501710 >     result = g.send(result)
2012-04-11 07:47:16.501740 >   File "/Users/christian/p2pool/p2pool/main.py", line 799, in status_thread
2012-04-11 07:47:16.501770 >     print this_str
2012-04-11 07:47:16.501799 >   File "/Users/christian/p2pool/p2pool/util/logging.py", line 81, in write
2012-04-11 07:47:16.501830 >     self.inner_file.write(data)
2012-04-11 07:47:16.501860 >   File "/Users/christian/p2pool/p2pool/util/logging.py", line 69, in write
2012-04-11 07:47:16.501891 >     self.inner_file.write('%s %s\n' % (datetime.datetime.now(), line))
2012-04-11 07:47:16.501921 >   File "/Users/christian/p2pool/p2pool/util/logging.py", line 55, in write
2012-04-11 07:47:16.501951 >     output.write(data)
2012-04-11 07:47:16.501981 >   File "/Users/christian/p2pool/p2pool/util/logging.py", line 46, in write
2012-04-11 07:47:16.502011 >     self.inner_file.write(data)
2012-04-11 07:47:16.502041 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/codecs.py", line 691, in write
2012-04-11 07:47:16.502073 >     return self.writer.write(data)
2012-04-11 07:47:16.502103 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/codecs.py", line 352, in write
2012-04-11 07:47:16.502134 >     self.stream.write(data)
2012-04-11 07:47:16.558924 >   File "/Users/christian/p2pool/p2pool/main.py", line 702, in <lambda>
2012-04-11 07:47:16.559463 >     sys.stderr.write, 'Watchdog timer went off at:\n' + ''.join(traceback.format_stack())

rudrigorc2
Legendary
*
Offline Offline

Activity: 1064



View Profile
April 11, 2012, 02:55:53 PM
 #1915

There are a few solutions: compute main-P2Pool's generation transaction instead of redundantly storing nearly the same thing over and over. Alternatively, change the merged mining spec to not require storing the entire parent gentx.

I don't like the first because it would be very complex and tie the MM-P2Pool to the main-P2Pool. The second is obviously impractical in the short term.

Anyone else have ideas?

Yeah, even if space/bandwidth weren't an issue I don't like complicating the sharechain w/ merge mining data.  Most of the alt chains are nearly worthless and I wonder about the load if it became popular to merge-mine a dozen or more alt-chains on p2pool.

Local generation may be rough on low end nodes so anything which makes p2pool less viable isn't worth the cost IMHO.

Would it be possible to have a separate merge mining chain and a different p2pool instance?  Still, I am not clear on what level of communication or interaction is necessary between instances, or even if it is possible.

Given the nearly worthless nature of alt-coins I don't see it as a useful venture.  There is so much that can be done to improve p2pool (in terms of GUI frontends, monitoring/reporting, updated docs, custom distros, simplification, etc) that I would hate to see any skill, resources, and time devoted to worthless alt-chains.

2x
gyverlb
Hero Member
*****
Offline Offline

Activity: 896



View Profile
April 11, 2012, 02:59:33 PM
 #1916

If you use a good miner program and configure it correctly you will not get a high crappy 9% reject rate.
I'm not sure how. I have ~9% reject rate with 5x 2.3.1 cgminer connected to a p2pool node with 5 to 30ms latency. cgminer is set to use only one thread and intensity 8, which on my hardware (300+MH/s for each GPU) adds between 0 to 3 ms latency when cgminer must wait for a GPU thread to return.

If there's a way to get better results, I'd like to know it. Currently I think the large majority of orphan/dead blocks on my configuration are caused by the whole P2Pool network latency, not my configuration but I'd be glad to be proven wrong.

P2pool tuning guide
Trade BTC for €/$ at bitcoin.de (referral), it's cheaper and faster (acts as escrow and lets the buyers do bank transfers).
Tip: 17bdPfKXXvr7zETKRkPG14dEjfgBt5k2dd
spiccioli
Legendary
*
Offline Offline

Activity: 1376

nec sine labore


View Profile
April 11, 2012, 03:40:48 PM
 #1917

If you use a good miner program and configure it correctly you will not get a high crappy 9% reject rate.
I'm not sure how. I have ~9% reject rate with 5x 2.3.1 cgminer connected to a p2pool node with 5 to 30ms latency. cgminer is set to use only one thread and intensity 8, which on my hardware (300+MH/s for each GPU) adds between 0 to 3 ms latency when cgminer must wait for a GPU thread to return.

If there's a way to get better results, I'd like to know it. Currently I think the large majority of orphan/dead blocks on my configuration are caused by the whole P2Pool network latency, not my configuration but I'd be glad to be proven wrong.

gyverlb,

same here, at times I'm a little better than the pool, at time a little worse.

You can use two threads per GPU, though, so that when a long poll comes in, one thread can start fetching new data while the other is waiting for the GPU to finish.

spiccioli
gyverlb
Hero Member
*****
Offline Offline

Activity: 896



View Profile
April 11, 2012, 04:07:58 PM
 #1918

You can use two threads per GPU, though, so that when a long poll comes in, one thread can start fetching new data while the other is waiting for the GPU to finish.
Are you sure? The way I understand cgminer's threads, they should all try to keep working in parallel (for <n> threads each thread should be using 1/n of the processing power) and fetching work is done asynchronously so that it is ready as soon as a GPU thread is available. So with a given intensity, the more threads you have, the more time you should spend working on a workbase invalidated by a long poll. This is how I understood the advice ckolivas gives in cgminer's README to use only one thread.

P2pool tuning guide
Trade BTC for €/$ at bitcoin.de (referral), it's cheaper and faster (acts as escrow and lets the buyers do bank transfers).
Tip: 17bdPfKXXvr7zETKRkPG14dEjfgBt5k2dd
spiccioli
Legendary
*
Offline Offline

Activity: 1376

nec sine labore


View Profile
April 11, 2012, 04:23:48 PM
 #1919

You can use two threads per GPU, though, so that when a long poll comes in, one thread can start fetching new data while the other is waiting for the GPU to finish.
Are you sure? The way I understand cgminer's threads, they should all try to keep working in parallel (for <n> threads each thread should be using 1/n of the processing power) and fetching work is done asynchronously so that it is ready as soon as a GPU thread is available. So with a given intensity, the more threads you have, the more time you should spend working on a workbase invalidated by a long poll. This is how I understood the advice ckolivas gives in cgminer's README to use only one thread.

gyverlb,

the one thread per GPU was a work-around for old versions of cgminer.

As I understand it, while a GPU is processing a batch, the thread that submitted it is blocked waiting for the answer, so, if you have a single thread it cannot fetch new work before the GPU completes its batch.

Using two threads makes it possible to have the second thread starting to fetch new work while the first one is still waiting for the GPU to finish its work.

I'm using two threads without problems (stales are around 1-2% lower than p2pool ones).

spiccioli.

kano
Legendary
*
Offline Offline

Activity: 1918


Linux since 1997 RedHat 4


View Profile
April 11, 2012, 04:30:03 PM
 #1920

Except ... it isn't correct.

There IS wasted hashing power.

The problem is not P2Pool specifically, but rather that people believe it is OK to get a high crappy reject rate (9%) because someone here said it was OK to be that high rate while they were getting a much lower rate.

If you use a good miner program and configure it correctly you will not get a high crappy 9% reject rate.

The cause is actually that the miners are not by default configured to handle the ridiculously high share rate (10 seconds)
So P2Pool is the cause, but the solution is simply to configure your miner to handle that issue.

Aside: if you have one or more BFL FPGA Singles, you cannot mine on P2Pool without wasting a large % of your hash rate.

Except reject rate means nothing, delta of average reject rate is what you need to pay attention to.
Well, yes, but that is of course what I meant by saying that 9% is bad and you can get a lower %

Quote
Also, BFL's firmware is broken, they won't return shares until it's done 2^32 hashes, and any attempt to force it to update on long poll dumps valid shares. BFL needs to fix their shit before they sell any more FPGAs.
Yep, but to put it more specifically, the time to do 2^32 hashes at 830MH/s is 5.17s
Thus each BFL device will complete, on average, 1 nonce range, and then abort the 2nd one for each average 10 second share.
Thus, on average, it would only mine 5.17s out of every 10s, or 51.7%, thus wasting 48.3% of its hashes ... yep it's that bad Tongue
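kano's duty-cycle arithmetic checks out; a quick sketch (the 830 MH/s rate and the 10 s share interval are the figures from his post):

```python
# A BFL Single won't return until it has exhausted a full 2^32 nonce range,
# so on a 10-second share interval only the first completed range counts.
hashrate = 830e6          # hashes/sec (kano's figure)
share_interval = 10.0     # p2pool's target share time, seconds

range_time = 2**32 / hashrate            # seconds per full nonce range
useful_fraction = range_time / share_interval

print(round(range_time, 2))        # 5.17
print(round(useful_fraction, 3))   # 0.517 -> ~48.3% of hashes wasted
```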

Oddly enough, that is more similar to GPU mining than Icarus FPGA mining ...
GPU mining cannot be aborted for each nonce sub-range sent to the GPU, but of course as long as the processing time of the sub-range is small, then you aren't wasting much time waiting after an LP occurs

In this case each LP, you waste 1/2 of the expected processing time for a sub-nonce range (which is very small - but higher as you increase the intensity - each increase in intensity in cgminer increases it 2x)
On cgminer, an intensity of 9 usually means a nonce range of 2^24 or 2^25, which is of the order of 4.5 to 9 ms on ~370MH/s (e.g. ~ ATI 6950) and of course different on other hardware

Thus with GPUs, reducing the intensity by one reduces the amount of time wasted each LP and since there are 60 times the number of LP's with P2Pool vs normal network LP's then of course that makes sense.

With the Icarus FPGA it aborts when it finds a share and returns it immediately.
This means that if you have to abort the work due to an LP, you know the hashes being thrown away contain no shares (there is a very tiny window afterwards during which there could be shares - until the new work is sent)
So being able to abort and restart is very advantageous
Approx time is less than 0.014s on my hardware.
~0.014s is the overhead when processing a work (job start time after sending the work to the FPGA and the time to return the result if there is one)

Pool: https://kano.is BTC: 1KanoiBupPiZfkwqB7rfLXAzPnoTshAVmb
CKPool and CGMiner developer, IRC FreeNode #ckpool and #cgminer kanoi
Help keep Bitcoin secure by mining on pools with Stratum, the best protocol to mine Bitcoins with ASIC hardware