Bitcoin Forum
June 03, 2015, 01:45:20 PM *
News: Latest stable version of Bitcoin Core: 0.10.2 [Torrent]
 
Page 387 of 635
Author Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool  (Read 1154490 times)
cackalacky (Newbie, Activity: 23)
February 07, 2014, 06:32:46 PM  #7721

Has anyone noticed any correlation between how well distributed the p2pool hash rate is across p2pool nodes and how efficient smaller nodes are? I set up p2pool nodes for a few scrypt altcoins. For more popular coins with dozens of p2pool nodes my node's efficiency is > 100%. For less popular coins where one p2pool node has more than 2/3 of the total p2pool hash rate my node's efficiency is horrible - usually 60% - 80%.

My nodes all run on the same server with the same p2pool version. There seems to be a pattern: when one node has an overly large share of the hash rate, the smaller nodes' efficiency suffers. Or is this 99.99% likely to be some other factor I am overlooking? I would love to hear from anyone who can prove or disprove this theory.
roy7 (Sr. Member, Activity: 434)
February 07, 2014, 07:01:32 PM  #7722

Quote: "Could you be any more vague? LOL"

An example of people trying to deal with it.

http://www.reddit.com/r/Bitcoin/comments/1f4t2e/90_transaction_fee_how_can_this_be_avoided/ca6typi

You should google "bitcoin dust" and read up on how it works.

RoyalMiningCo: Pools retired. Was fun!
freebit13 (Sr. Member, Activity: 336)
February 07, 2014, 07:27:54 PM  #7723

After 21 million BTC have been created, mining will move on to dustbusting ;)

Decentralize EVERYTHING!
bitpop (Legendary, Activity: 1372)
February 07, 2014, 07:29:10 PM  #7724

Quote from: freebit13: "After 21 million BTC have been created, mining will move on to dustbusting ;)"

1 dust = $1 million

Reputation and Escrow Service  |  PGP  |  OpenVPN  |  PayCon  |  Couple Rings
Bitcoin: 1BitPoPevGTcnSGWqHGrFiVg6fVC7y9NVK
Bitmessage: BM-2cXN9j8NFT2n1FxDVQ6HQq4D4MZuuaBFyb
freebit13 (Sr. Member, Activity: 336)
February 07, 2014, 07:30:51 PM  #7725

Quote from: bitpop: "1 dust = $1 million"
Zactly! Plus the transaction fees once Bitcoin has taken over the world... Satoshi takes good care of miners ;)

smooth (Hero Member, Activity: 700)
February 07, 2014, 09:51:07 PM  #7726

Quote from: roy7 (#7722), linking the Reddit post explaining "bitcoin dust" and how it works.

That post is slightly out of date, as the current client removed the 0.01 output rule and reduced the transaction size limit from 10k to 3k, but the principle is correct.
roy7 (Sr. Member, Activity: 434)
February 08, 2014, 06:54:40 PM  #7727

Interesting. I've always assumed small miners on public nodes have high variance simply because of share difficulty. But that's not the whole story: the bigger the pool is, the more it pushes up those miners' share targets so that the pool as a whole hits its share target. As far as I understand it so far, anyway. This doesn't happen if a miner runs his own pool, since you'd then be working at the minimum difficulty if you aren't getting enough shares.

Please see this post and my two followup replies:

https://bitcointalk.org/index.php?topic=214512.msg5019286#msg5019286

For example I have a dozen of these for miners:

2014-02-08 19:06:36.628849 New work for worker! Difficulty: 0.002211 Share difficulty: 1.629049 Total block value: 50.017000 VTC including 14 transactions

And here is my ADDR/.000001 test:

2014-02-08 19:06:36.633762 New work for worker! Difficulty: 0.002211 Share difficulty: 0.054301 Total block value: 50.017000 VTC including 14 transactions

Share difficulty from the web interface is .054. So, left to the defaults, I'd be able to get shares onto the share chain only about 1/30 as often?
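As a back-of-the-envelope check (my own sketch, using the two difficulty values from the log lines above; share-finding frequency at a fixed hash rate is inversely proportional to share difficulty):

```python
# Share-finding frequency scales as 1/difficulty for the same hardware,
# so compare the default vardiff target against the pinned ADDR/.000001 one.
default_difficulty = 1.629049   # from the normal worker's log line
pinned_difficulty = 0.054301    # from the ADDR/.000001 test worker

slowdown = default_difficulty / pinned_difficulty
print(round(slowdown))  # → 30, i.e. shares land on the chain ~30x less often
```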

roy7 (Sr. Member, Activity: 434)
February 08, 2014, 08:07:40 PM  #7728

I realize that p2pool's intention is to treat each node as a single miner, and p2pool isn't intended to operate as a pseudo-public pool. So as I look at the code, I see how the 1.67% cap is applied to the node (not to individual connected miners), and that makes total sense in the context that a node is a single operation (an individual or group using p2pool with all of their own hardware as a replacement for solo mining).

However, to better support miners that want to use a public node for whatever reason I think it'd be good if that could be handled in a way that will, in effect, simulate the same result as if they were running a p2pool node of their own instead. Maybe as a command line option that is off by default so any changes make zero difference to existing operations.

Basically this comes down to making the share target for a miner (by which I mean a person or group with one or more physical mining devices) based on that miner, and applying the 1.67% cap to that miner, not to the node as a whole.

The key code in get_work currently is:

Code:
if desired_share_target is None:
    desired_share_target = 2**256-1
    local_hash_rate = self._estimate_local_hash_rate()
    if local_hash_rate is not None:
        desired_share_target = min(desired_share_target,
            bitcoin_data.average_attempts_to_target(local_hash_rate * self.node.net.SHARE_PERIOD / 0.0167)) # limit to 1.67% of pool shares by modulating share difficulty
However, we wouldn't want to just change local_hash_rate to be the miner whose new work is being assigned (the physical mining device with a connection to the pool); that would defeat the purpose of things like the 1.67% cap. What if, instead, it was based on the estimated hash rate of the destination payment address? So if I have 4 antminers all mining to ADDR_X, the target share rate is based on their combined speed. But someone else connecting their two antminers with ADDR_Y will have a lower target share rate. ADDR_X and ADDR_Y each have the 1.67% cap applied individually, etc.

Someone operating a node now with dozens of pieces of equipment all paying to the same address would see zero change even if they did toggle this on. Individual miners on a public node would see reduced variance in their own shares, since pool hash rate is taken out of the equation. They could do this by hand now with ADDR/1 (or, say, /.000001 for scrypt), but I think handling it automatically makes more sense (and keeps vardiff alive for miners that are perhaps too big to justify using ADDR/1).

The way I view this is that if ADDR_X and ADDR_Y were running their own nodes instead of connecting to a public node, their target share rates would be based on only their own hash rates anyway. The 1.67% would be applied to each of them individually (instead of all combined in the public node). By adjusting their target share rates only to their own speeds, it simulates them running their own nodes.
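For intuition, here is a minimal sketch of the cap math, assuming `average_attempts_to_target` behaves roughly like `(2**256 - 1) // attempts` (a simplification of the real p2pool helper) and a hypothetical SHARE_PERIOD:

```python
SHARE_PERIOD = 15  # seconds per chain share (hypothetical; set per network in networks.py)

def capped_share_target(addr_hash_rate):
    """Approximate per-address share target under the 1.67% cap.

    A source hashing at addr_hash_rate (hashes/sec) should find at most
    ~1.67% of chain shares, i.e. one share per SHARE_PERIOD / 0.0167 sec,
    which takes addr_hash_rate * SHARE_PERIOD / 0.0167 hashes on average.
    """
    attempts = int(addr_hash_rate * SHARE_PERIOD / 0.0167)
    return min(2**256 - 1, (2**256 - 1) // attempts)

# Doubling an address's hash rate roughly halves its target (doubles its
# difficulty), independent of any other hash rate sharing the same node:
t1 = capped_share_target(1_000_000)
t2 = capped_share_target(2_000_000)
```

Basing the estimate on the address rather than the node only changes whose hash rate feeds this formula; the cap itself is unchanged.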

Thoughts?

TLDR: A small miner connecting to a busy public node has much higher variance than running a node of their own.

roy7 (Sr. Member, Activity: 434)
February 09, 2014, 04:54:58 AM  #7729

As an experiment I'm running this live on one of my VTC nodes now, after testing it out some. The share difficulties are being set exactly as expected. I will check again in the morning, but it seems to be working great. Here is a diff for anyone's review/comments, please. Remember Python is whitespace-sensitive if you try to apply this to test it yourself. What happens is that the share target when sending out work is set based on the payment address's hash rate instead of the whole node's hash rate. This way each person/group mining to an address on a public node finds shares at the same difficulty as if they ran a local private p2pool, and doesn't get increased variance based on the size of the public node.

Code:
diff --git a/p2pool/work.py b/p2pool/work.py
index e1c677d..285fa3e 100644
--- a/p2pool/work.py
+++ b/p2pool/work.py
@@ -245,12 +245,11 @@ class WorkerBridge(worker_interface.WorkerBridge):

         if desired_share_target is None:
             desired_share_target = 2**256-1
-            local_hash_rate = self._estimate_local_hash_rate()
-            if local_hash_rate is not None:
+            local_addr_rates = self.get_local_addr_rates()
+            local_hash_rate = local_addr_rates.get(pubkey_hash, 0)
+            if local_hash_rate > 0.0:
                 desired_share_target = min(desired_share_target,
                     bitcoin_data.average_attempts_to_target(local_hash_rate * self.node.net.SHARE_PERIOD / 0.0167)) # limit to 1.67% of pool shares by modulating share difficulty
-
-            local_addr_rates = self.get_local_addr_rates()
             lookbehind = 3600//self.node.net.SHARE_PERIOD
             block_subsidy = self.node.bitcoind_work.value['subsidy']
             if previous_share is not None and self.node.tracker.get_height(previous_share.hash) > lookbehind:

One weakness of the ADDR/1 workaround is that it overrides vardiff completely and ignores the networks.py dust threshold. The optimal solution is still for every miner to run their own p2pool node.

Biggen (Full Member, Activity: 161)
February 09, 2014, 06:21:42 PM  #7730

What is the time zone being displayed under the "Last Blocks" tab?  Is it in GMT?  The reason I ask is I see this for my node:
Code:
93499 Sun Feb 09 2014 08:40:00 GMT-0600 (Central Standard Time) 02ad14053894887b847d9060fbad4d070c2349b6eeb21a5ebd97ce329372bb6c

So is that saying that that block was found at 08:40 GMT and I need to use a -0600 offset for my time zone (which is Central btw)?
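Not authoritative, but for what it's worth: that `GMT-0600 (Central Standard Time)` suffix is JavaScript's Date string format, and the clock time shown is already local Central time; the UTC instant is obtained by subtracting the offset, so 08:40 CST would correspond to 14:40 GMT. A quick Python check of that reading:

```python
from datetime import datetime, timezone, timedelta

# "Sun Feb 09 2014 08:40:00 GMT-0600" means 08:40 local time at offset -6h
local = datetime(2014, 2, 9, 8, 40, tzinfo=timezone(timedelta(hours=-6)))
utc = local.astimezone(timezone.utc)
print(utc.strftime('%Y-%m-%d %H:%M'))  # → 2014-02-09 14:40
```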
WutriCoin (Full Member, Activity: 168)
February 09, 2014, 11:19:25 PM  #7731

Could you please add DigiByte support?

cr1776 (Hero Member, Activity: 812)
February 10, 2014, 12:27:34 AM  #7732

Quote from: roy7 (#7728), proposing to base the share target and the 1.67% cap on each payment address rather than on the node as a whole.

This is an interesting discussion, please let us know the results.

P2pool is important for bitcoin, and making it accessible to more people is important. This type of change helps public pools, and that helps bitcoin and p2pool. Let's face it: many people won't set up their own p2pool node; they want easy, and compared to pools that require registration, p2pool is easy.

Decreasing variance (since people are impatient and don't care about the math) could be helpful too. 
roy7 (Sr. Member, Activity: 434)
February 10, 2014, 12:59:27 AM  #7733

Quote from: cr1776 (#7732): "This is an interesting discussion, please let us know the results."

It has been working well so far. The difficulty miners get is lower than it'd normally be, but not as low as if they did /1 to mine at the minimum share difficulty all of the time. The reason /1 goes lower is that it overrides the dust-prevention code. So it's best not to tell all public-node miners to use /1; instead, only people who can't stomach the variance and are willing to pay the tx fees should do that (or they should use a normal pool).

The node I'm testing on is

http://vtc-us-east.royalminingco.com:9171/static/graphs.html?Week

and the code went live at the Feb 9 mark (early morning). You can see from the pool shares graph that the peaks/valleys have smoothed out, and you can see it happening in a similar way at the per-miner level on the miner graphs.

I don't run a BTC node at the present time, so I don't know what sort of share difficulties normally get assigned to workers on the public pools. The VTC p2pool network is over 5% of the global network speed and finds many blocks per day, so the behavior here might be quite different than on BTC itself. Hopefully forrestv or someone with more knowledge of and experience with the p2pool code base will review and comment.

IYFTech (Hero Member, Activity: 686)
February 10, 2014, 09:21:00 AM  #7734

Nice to see someone actually trying to do some development on p2pool - well done! :)

Keep us posted, dude...

Cheers ;D

-- :)  Thank you for smoking  :) --  If you paid VAT to dogie for items you should read this thread:  https://bitcointalk.org/index.php?topic=1018906.0
roy7 (Sr. Member, Activity: 434)
February 10, 2014, 03:17:00 PM  #7735

Now that we have more history, here is the node with my change:

http://vtc-us-east.royalminingco.com:9171/static/graphs.html?Day

and here is my other bigger node without it:

http://vtc.royalminingco.com:9171/static/graphs.html?Day

You can see how on the test node, around 600 Kh/s (2 GPUs; vertcoin hash speed is half litecoin speed) is enough to almost never have a gap in payment history. (Consider VaHA4dVWiSJKgWeeqmLDFQPPgrvS6BMkA3.) The payments may be as low as .025, but that honors the dust threshold in networks.py, and since there are normally more individual shares, the payments are usually higher than that (since 2+ shares are alive at any one time).

On the "normal" busier node, there are miners at 1 Mh/s who have more empty space than shares. (Consider VfDC2i3psuH5iSiqmb73qauY5DPQQzT6b4.) The minimum share value here is about .3, but smaller miners often have only one share, if any.

smoothrunnings (Hero Member, Activity: 518)
February 10, 2014, 03:22:20 PM  #7736

Quote from: roy7 (#7733), reporting early results of the per-address share-target change on his test node.

Can you do a comparison shot of before and after? I still don't get what you have changed.

Oops, just saw you did that. :)
roy7 (Sr. Member, Activity: 434)
February 10, 2014, 10:40:14 PM  #7737

When updating networks.py to a new identifier/prefix and a new spread setting, is it okay to keep the existing share chain data file for the new network? I know you can't run two networks.py configs on the same network, but it seems like keeping the prior share data wouldn't hurt the new network.

We'll probably bite the bullet and roll out the corrected SPREAD setting this week for VTC.

PS: It'd be nice if anyone else who has tested my earlier diff could share feedback on it.

smoothrunnings (Hero Member, Activity: 518)
February 10, 2014, 11:30:41 PM  #7738

Quote from: roy7 (#7737), asking about keeping the share chain data file across a networks.py change and announcing the corrected SPREAD setting for VTC.

So your update only applies to the front-end that has the graphs? Will I get anything out of it if I apply it to my P2Pool that has a different front-end?

roy7 (Sr. Member, Activity: 434)
February 10, 2014, 11:36:21 PM  #7739

Quote from: smoothrunnings (#7738): "So your update only applies to the front-end that has the graphs?"

Do you run a Vertcoin p2pool node? We need to do a hard fork of the network to fix a setting that was wrong when the network first went live. It has nothing to do with the graphs.

roy7 (Sr. Member, Activity: 434)
February 10, 2014, 11:52:55 PM  #7740

Quote from: cr1776 (#7732): "This is an interesting discussion, please let us know the results."

Compare the "local rate reflected in shares" between:

http://vtc.royalminingco.com:9171/static/graphs.html?Day  (standard node)
http://vtc-us-east.royalminingco.com:9171/static/graphs.html?Day  (test node)

The standard node has about twice the hash power. Its share graph has peaks up to 10x the mean speed, and many gaps. The test node, with half the power, has peaks only about 5x the mean speed, and almost no gaps.

I guess one could calculate the standard deviation on the graphs to get an idea of the variance reduction between the two, but just visually it's very clear.
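To make that concrete, one hedged way to quantify the comparison (with hypothetical sample data below, not values read off the actual graphs) is the coefficient of variation of the polled pool-rate samples:

```python
import statistics

def coefficient_of_variation(samples):
    """Standard deviation relative to the mean; lower = smoother graph."""
    return statistics.pstdev(samples) / statistics.fmean(samples)

# Hypothetical per-interval pool-rate samples (arbitrary units):
spiky = [0, 0, 10, 0, 0, 9, 0, 0, 0, 11]   # tall peaks separated by gaps
steady = [2, 3, 4, 3, 2, 3, 4, 3, 3, 3]    # smoother, steadier share flow

assert coefficient_of_variation(spiky) > coefficient_of_variation(steady)
```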

