Author Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool  (Read 2591880 times)
freebit13 (Hero Member)
February 07, 2014, 04:48:04 PM  #7701

Quote
There was an article posted in December of last year that says:

"P2pool is technically a mining pool, but one that acts like solo mining in terms of the end user's view. You enter the pool using your wallet's address, rather than signing up to get an account, and the payments are automatically paid out once blocks are found by anyone on the pool. This cuts down on the middle man and also reduces the time it takes to get paid, although it does bring up another issue: you will get a ton of payments out of it. With each new payment you get, you are potentially adding some bytes (around 230) to your transaction size for when you send coins to someone. As this number grows, the cost of sending the coins also grows along with it."

Does anyone know what this comment means? "With each new payment you get, you are potentially adding some bytes (around 230) to your transaction size for when you send coins to someone. As this number grows, the cost of sending the coins also grows along with it."

Here is the original article: http://www.michaelnielsen.org/ddi/how-the-bitcoin-protocol-actually-works/

Thanks,

I think that's a reference to dust...

smoothrunnings (Hero Member)
February 07, 2014, 04:49:02 PM  #7702

Quote from: freebit13 on February 07, 2014, 04:48:04 PM
I think that's a reference to dust...

Could you be any more vague? LOL

freebit13 (Hero Member)
February 07, 2014, 04:55:23 PM  #7703

Quote from: smoothrunnings on February 07, 2014, 04:49:02 PM
Could you be any more vague? LOL

Basically, tiny transactions that cost more to send than they are worth and clutter the blockchain... I know more space was added to each block for free transactions, so I'm not sure if that affects it at all.
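As a rough back-of-the-envelope sketch of why that matters (assuming the usual pre-segwit P2PKH sizes of roughly 148 bytes per input and 34 per output; the article's ~230 bytes is about a minimal one-input transaction):

Code:
# Rough estimate of the bytes (and fee) needed to spend n small p2pool
# payouts at once. Each payout you receive becomes one more input you
# must include when you later send coins.
def tx_size_bytes(n_inputs, n_outputs=2):  # 2 outputs: payee + change
    return 10 + 148 * n_inputs + 34 * n_outputs

def fee_btc(n_inputs, fee_per_kb=0.0001):  # 0.0001 BTC/kB, a common default then
    return tx_size_bytes(n_inputs) * fee_per_kb / 1000.0

for n in (1, 10, 100):
    print n, 'inputs:', tx_size_bytes(n), 'bytes, fee %.6f BTC' % fee_btc(n)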

cackalacky (Newbie)
February 07, 2014, 06:32:46 PM  #7704

Has anyone noticed any correlation between how well distributed the p2pool hash rate is across p2pool nodes and how efficient smaller nodes are? I set up p2pool nodes for a few scrypt altcoins. For more popular coins with dozens of p2pool nodes my node's efficiency is > 100%. For less popular coins where one p2pool node has more than 2/3 of the total p2pool hash rate my node's efficiency is horrible - usually 60% - 80%.

My nodes all run on the same server with the same p2pool version. It seems like there's a pattern: when one node has an overly large share of the hash rate, the smaller nodes' efficiency suffers. Or is this 99.99% likely to be some other factor I'm overlooking? I would love to hear from anyone who can prove/disprove this theory.
roy7 (Sr. Member)
February 07, 2014, 07:01:32 PM  #7705

Quote from: smoothrunnings on February 07, 2014, 04:49:02 PM
Could you be any more vague? LOL

An example of people trying to deal with it.

http://www.reddit.com/r/Bitcoin/comments/1f4t2e/90_transaction_fee_how_can_this_be_avoided/ca6typi

You should google "bitcoin dust" and read up on how it works.
freebit13 (Hero Member)
February 07, 2014, 07:27:54 PM  #7706

After 21 million BTC have been created, mining will move on to dustbusting ;)

bitpop (Legendary)
February 07, 2014, 07:29:10 PM  #7707

Quote from: freebit13 on February 07, 2014, 07:27:54 PM
After 21 million BTC have been created, mining will move on to dustbusting ;)

1 dust = $1 million

freebit13 (Hero Member)
February 07, 2014, 07:30:51 PM  #7708

Quote from: bitpop on February 07, 2014, 07:29:10 PM
Quote from: freebit13 on February 07, 2014, 07:27:54 PM
After 21 million BTC have been created, mining will move on to dustbusting ;)
1 dust = $1 million

Zactly! Plus the transaction fees once Bitcoin has taken over the world... Satoshi takes good care of miners ;)

smooth (Legendary)
February 07, 2014, 09:51:07 PM  #7709

Quote from: roy7 on February 07, 2014, 07:01:32 PM
An example of people trying to deal with it.

http://www.reddit.com/r/Bitcoin/comments/1f4t2e/90_transaction_fee_how_can_this_be_avoided/ca6typi

You should google "bitcoin dust" and read up on how it works.

That post is slightly out of date, as the current client removed the 0.01 output rule and reduced the transaction size limit from 10k to 3k, but the principle is correct.
roy7 (Sr. Member)
February 08, 2014, 06:54:40 PM (last edit: February 08, 2014, 08:13:42 PM by roy7)  #7710

Interesting. I've always assumed small miners on public nodes have high variance simply because of share difficulty. But that's not the whole story: the bigger the pool is, the more it pushes up its miners' share difficulty so that the pool as a whole hits its share target. As far as I understand it so far, anyway. This doesn't happen if a miner runs his own node, since you'd then be working at the minimum difficulty if you aren't finding enough shares.

Please see this post and my two followup replies:

https://bitcointalk.org/index.php?topic=214512.msg5019286#msg5019286

For example, I have a dozen of these for miners:

2014-02-08 19:06:36.628849 New work for worker! Difficulty: 0.002211 Share difficulty: 1.629049 Total block value: 50.017000 VTC including 14 transactions

And here is my ADDR/.000001 test:

2014-02-08 19:06:36.633762 New work for worker! Difficulty: 0.002211 Share difficulty: 0.054301 Total block value: 50.017000 VTC including 14 transactions

Share difficulty from the web interface is .054. So, left at the defaults, I'd get shares onto the share chain only about 1/30th as often (1.629 / 0.054 ≈ 30)?
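A quick sanity check on those two log lines, just ratio arithmetic, nothing p2pool-specific:

Code:
# Expected share frequency scales as 1/share_difficulty, so the ratio of
# the default difficulty to the ADDR/.000001 floor is the variance penalty.
default_diff = 1.629049  # assigned by the node's pool-wide vardiff
floor_diff = 0.054301    # assigned with the ADDR/.000001 override
print 'shares arrive %.0fx less often at the default' % (default_diff / floor_diff)
# -> ~30x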
roy7 (Sr. Member)
February 08, 2014, 08:07:40 PM (last edit: February 09, 2014, 05:08:24 AM by roy7)  #7711

I realize that p2pool's intention is to treat each node as a single miner, and p2pool isn't intended to operate as a pseudo public pool. So as I look in the code, I see how the 1.67% cap is applied to the node (not to individual connected miners), and that makes total sense in the context that a node is a single operation (an individual or group using p2pool with all of their own hardware as a replacement for solo mining).

However, to better support miners that want to use a public node for whatever reason I think it'd be good if that could be handled in a way that will, in effect, simulate the same result as if they were running a p2pool node of their own instead. Maybe as a command line option that is off by default so any changes make zero difference to existing operations.

Basically this comes down to making the share target for a miner (by which I mean a person or group with one or more physical mining devices) based on that miner, with the 1.67% cap applied to that miner, not to the node as a whole.

The key code in get_work currently is:

Code:
if desired_share_target is None:
    desired_share_target = 2**256-1
    local_hash_rate = self._estimate_local_hash_rate()
    if local_hash_rate is not None:
        desired_share_target = min(desired_share_target,
            bitcoin_data.average_attempts_to_target(local_hash_rate * self.node.net.SHARE_PERIOD / 0.0167)) # limit to 1.67% of pool shares by modulating share difficulty

However, we wouldn't want to just change local_hash_rate to be the miner whose new work is being assigned (the physical mining device with a connection to the pool); that would defeat the purpose of things like the 1.67% cap. What if, instead, it was based on the estimated hash rate of the destination payment address? So if I have four Antminers all mining to ADDR_X, the target share rate is based on their combined speed, but someone else connecting their two Antminers with ADDR_Y will have a lower target share rate. ADDR_X and ADDR_Y each have the 1.67% cap applied individually, etc. Someone operating a node now with dozens of pieces of equipment all paying to the same address would see zero change even if they did toggle this on. Individual miners on a public node would see reduced variance in their own shares, since pool hash rate is taken out of the equation. They could do this by hand now with ADDR/1 (or say /.000001 for scrypt), but I think handling it automatically makes more sense (and keeps vardiff alive for miners big enough that ADDR/1 wouldn't make sense).

The way I view this is that if ADDR_X and ADDR_Y were running their own nodes instead of connecting to a public node, their target share rates would be based on only their own hash rates anyway. The 1.67% would be applied to each of them individually (instead of all combined in the public node). By adjusting their target share rates only to their own speeds, it simulates them running their own nodes.
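To make the idea concrete, here is a toy sketch of the proposed per-address cap; the hash rates and SHARE_PERIOD are made-up numbers, and average_attempts_to_target mirrors the shape of the conversion in p2pool/bitcoin/data.py:

Code:
SHARE_PERIOD = 10  # illustrative share-chain period in seconds

def average_attempts_to_target(average_attempts):
    # same shape as the conversion in p2pool/bitcoin/data.py
    return min(int(2**256 / average_attempts - 1), 2**256 - 1)

def capped_target(addr_hash_rate):
    # an address may earn at most 1.67% of chain shares, i.e. one share
    # per SHARE_PERIOD / 0.0167 seconds of its own hashing
    return average_attempts_to_target(addr_hash_rate * SHARE_PERIOD / 0.0167)

# ADDR_X runs four miners, ADDR_Y two; each gets its own cap:
for name, rate in (('ADDR_X', 4 * 300e3), ('ADDR_Y', 2 * 300e3)):
    print name, 'capped target: %064x' % capped_target(rate)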

Thoughts?

TLDR: A small miner connecting to a busy public node has much higher variance than running a node of their own.
roy7 (Sr. Member)
February 09, 2014, 04:54:58 AM (last edit: February 09, 2014, 05:10:59 AM by roy7)  #7712

As an experiment I'm running this live on one of my VTC nodes now, after testing it out some. The share difficulties are being set exactly as expected. I'll check again in the morning, but it seems to be working great. Here is a diff for anyone's review/comments. Remember Python is whitespace-sensitive if you try to apply this by hand. What happens is that the share target when sending out work is set based on the payment address's hash rate instead of the whole node's hash rate. This way each person/group mining to an address on a public node finds shares at the same difficulty as if they ran a local private p2pool, and doesn't get increased variance based on the size of the public node.

Code:
diff --git a/p2pool/work.py b/p2pool/work.py
index e1c677d..285fa3e 100644
--- a/p2pool/work.py
+++ b/p2pool/work.py
@@ -245,12 +245,11 @@ class WorkerBridge(worker_interface.WorkerBridge):

         if desired_share_target is None:
             desired_share_target = 2**256-1
-            local_hash_rate = self._estimate_local_hash_rate()
-            if local_hash_rate is not None:
+            local_addr_rates = self.get_local_addr_rates()
+            local_hash_rate = local_addr_rates.get(pubkey_hash, 0)
+            if local_hash_rate > 0.0:
                 desired_share_target = min(desired_share_target,
                     bitcoin_data.average_attempts_to_target(local_hash_rate * self.node.net.SHARE_PERIOD / 0.0167)) # limit to 1.67% of pool shares by modulating share difficulty
-
-            local_addr_rates = self.get_local_addr_rates()
             lookbehind = 3600//self.node.net.SHARE_PERIOD
             block_subsidy = self.node.bitcoind_work.value['subsidy']
             if previous_share is not None and self.node.tracker.get_height(previous_share.hash) > lookbehind:

One weakness of the ADDR/1 workaround is that it overrides vardiff completely and ignores the networks.py dust threshold. The optimal solution is for every miner to run their own p2pool node.
Biggen (Full Member)
February 09, 2014, 06:21:42 PM  #7713

What is the time zone being displayed under the "Last Blocks" tab?  Is it in GMT?  The reason I ask is I see this for my node:
Code:
93499 Sun Feb 09 2014 08:40:00 GMT-0600 (Central Standard Time) 02ad14053894887b847d9060fbad4d070c2349b6eeb21a5ebd97ce329372bb6c 

So is that saying that that block was found at 08:40 GMT and I need to use a -0600 offset for my time zone (which is Central btw)?
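For reference: that string is a JavaScript-style Date, so the clock time shown is already local and GMT-0600 is its UTC offset, i.e. 08:40 here is 08:40 Central (14:40 UTC). A small sketch to double-check:

Code:
from datetime import datetime, timedelta

# JavaScript-style Date string: local clock time plus its UTC offset
s = 'Sun Feb 09 2014 08:40:00 GMT-0600'
local = datetime.strptime(s[:24], '%a %b %d %Y %H:%M:%S')
sign = -1 if s[28] == '-' else 1
offset = timedelta(hours=sign * int(s[29:31]), minutes=sign * int(s[31:33]))
print local, 'local =', (local - offset), 'UTC'  # 08:40 CST -> 14:40 UTC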
WutriCoin (Full Member)
February 09, 2014, 11:19:25 PM  #7714

Could you please add DigiByte support?
cr1776 (Legendary)
February 10, 2014, 12:27:34 AM  #7715

Quote from: roy7 on February 08, 2014, 08:07:40 PM
[...]
TLDR: A small miner connecting to a busy public node has much higher variance than running a node of their own.

This is an interesting discussion, please let us know the results.

P2pool is important for bitcoin, and making it accessible to more people is important. This type of change helps public pools, and that helps bitcoin and p2pool. Let's face it: many people won't set up their own p2pool node; they want easy, and compared to pools that require registration, p2pool is easy.

Decreasing variance (since people are impatient and don't care about the math) could be helpful too. 
roy7 (Sr. Member)
February 10, 2014, 12:59:27 AM  #7716

Quote from: cr1776 on February 10, 2014, 12:27:34 AM
This is an interesting discussion, please let us know the results.
[...]

It has been working well so far. The diff miners get is lower than it'd normally be, but not as low as if they used /1 to mine at the minimum share diff all of the time. The reason /1 goes lower is that it overrides the dust prevention code. So it's best not to tell all public-node miners to use /1; only people who can't stomach the variance and are willing to pay the tx fees should do that (or they should use a normal pool).
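To put rough numbers on the variance/dust trade-off (hypothetical miner, using the difficulty-1 = 2^32 hashes convention; scrypt chains use a scaled pseudo-difficulty, but the proportionality is the same):

Code:
# Expected seconds between shares for one miner at a given share difficulty.
def seconds_per_share(hash_rate, share_difficulty):
    return share_difficulty * 2**32 / hash_rate

# A hypothetical 10 GH/s miner at vardiff-assigned difficulty 100 vs /1:
print seconds_per_share(10e9, 100)  # ~43 s between shares on average
print seconds_per_share(10e9, 1)    # ~0.4 s -> a flood of tiny dust payouts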

The node I'm testing on is

http://vtc-us-east.royalminingco.com:9171/static/graphs.html?Week

and the code went live at the Feb 9 mark (early morning). You can see from the pool shares graph that the peaks/valleys have smoothed out, and you can see it happening in a similar way at the per-miner level on the miner graphs.

I don't run a BTC node at the present time, so I don't know what sort of share difficulties normally get assigned to workers on the public pools. The VTC p2pool network is over 5% of the global network speed and finds many blocks per day, so the behavior here might be quite different from BTC itself. Hopefully forrestv or someone with more knowledge and experience with the p2pool code base will review and comment.
IYFTech (Hero Member)
February 10, 2014, 09:21:00 AM  #7717

Nice to see someone actually trying to do some development on p2pool - well done! :)

Keep us posted, dude...

Cheers :D

roy7 (Sr. Member)
February 10, 2014, 03:17:00 PM  #7718

Now that we have more history, here is the node with my change:

http://vtc-us-east.royalminingco.com:9171/static/graphs.html?Day

and here is my other bigger node without it:

http://vtc.royalminingco.com:9171/static/graphs.html?Day

You can see how on the test node, around 600 Kh/s (2 GPUs; vertcoin hash speed is half litecoin speed) is enough to almost never have a gap in payment history. (Consider VaHA4dVWiSJKgWeeqmLDFQPPgrvS6BMkA3.) The payments may be as low as .025, but that honors the dust threshold in networks.py, and since 2+ shares are normally alive at any one time, the payments are usually higher than that.

On the "normal" busier node, there are miners at 1 Mh/s who have more empty space than shares. (Consider VfDC2i3psuH5iSiqmb73qauY5DPQQzT6b4.) The minimum share value here is about .3, but smaller miners often hold only one share, if any.
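A back-of-the-envelope view of those payment sizes (the window total below is a made-up illustrative figure, not either node's actual weight):

Code:
# p2pool PPLNS: each block's reward is split over the shares in the
# window, weighted by share difficulty, so one share of difficulty d in
# a window whose weights total W earns block_value * d / W.
def payment_for_share(block_value, share_diff, window_total_diff):
    return block_value * share_diff / window_total_diff

print payment_for_share(50.017, 0.05, 100.0)  # ~0.025 VTC, near the dust floor
print payment_for_share(50.017, 0.60, 100.0)  # ~0.30 VTC, like the busier node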
smoothrunnings (Hero Member)
February 10, 2014, 03:22:20 PM (last edit: February 10, 2014, 03:47:21 PM by smoothrunnings)  #7719

Quote from: roy7 on February 10, 2014, 12:59:27 AM
It has been working well so far. [...]

Can you do a comparison shot of before and after? I still don't get what you've changed.

Oops, just saw you did that. :)
roy7 (Sr. Member)
February 10, 2014, 10:40:14 PM  #7720

When updating networks.py to a new identifier/prefix and a new spread setting, is it okay to keep the existing share chain data file for the new network? I know you can't run two networks.py configs on the same network, but it seems like keeping the prior share data wouldn't hurt the new network.

We'll probably bite the bullet and roll out the corrected SPREAD setting this week for VTC.
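For readers following along, this is roughly the shape of the networks.py entry being discussed; field names follow upstream p2pool (entries are math.Object instances there), but every value below is a placeholder, not VTC's real one:

Code:
# Illustrative sketch only - not VTC's actual settings.
vertcoin_net = dict(
    SHARE_PERIOD=10,                # target seconds between shares
    CHAIN_LENGTH=24*60*60//10,      # number of shares kept in the chain
    SPREAD=3,                       # payout spread, in units of block difficulty
    IDENTIFIER='0123456789abcdef'.decode('hex'),  # p2p network magic
    PREFIX='fedcba9876543210'.decode('hex'),      # share message prefix
)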

PS: It'd be nice if anyone else who has tested my earlier diff could share feedback on it.