Bitcoin Forum
December 03, 2016, 03:57:33 PM *
Pages: « 1 ... [475] ... 744 »
Author Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool  (Read 2029050 times)
sconklin321
Member
**
Offline Offline

Activity: 109


View Profile
July 05, 2014, 03:24:23 AM
 #9481

The good news:
I managed to make a stratum proxy that works with p2pool.

The bad news:
It looks to me like the S2s can't respond quickly enough to the restarts from p2pool.  That causes rejects.  If the difficulty is too low, it gets out of hand: the shares coming from the S2s are from prior job IDs, as many as two or three back.

I'm watching the data flow through in real time, and shares submitted 1-5 seconds after the restart request (with the new job ID) are rejected because the job ID is the wrong one.

Other things of note:
- It'd be useful if p2pool returned some indication of what was wrong when a share is rejected.  I'm going to have to track job IDs myself to visually indicate whether a share was rejected because of a bad job ID.
- The "id" in the response from p2pool doesn't match the "id" from the sender.  The stratum specs are a bit vague, but it seems to me the response "id" should be the same as the sender's id.  Instead it seems to be an incremental number for that connection.

More to come.

M

I know this is used for scrypt and not BTC, but could this be solved by setting up a sub-pool like doge.st has set up?  They run a stratum-to-stratum proxy that, if I understand how it works correctly, splits the stratum work up and then distributes it to the miners looking for smaller shares.  Here is the link to their proxypool server:

https://github.com/dogestreet/proxypool

I'm not sure if this works outside of scrypt-based coins, as I don't know the inner workings of the protocols involved, but I do know that, if I remember correctly, the doge p2pool is set to 10- or 15-second share times, so it should be requesting new work faster and more frequently than our p2pool does.  Just an idea I had to throw out; I'd test it, but I don't have an S2 to test it with.  If it does work for the S2, I wonder if it would work with the Bitfury stuff that is preventing Petamine from mining with p2pool.

Edit:  Just read the readme and it only works with scrypt upstream servers, but maybe it's something worth forking if someone knows how to get it working.
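The job-ID tracking described above can be sketched as follows. This is a hypothetical illustration, not code from the actual proxy; it assumes the standard stratum `mining.notify` parameter layout, where the job ID is the first element:

```python
class JobTracker:
    """Track recent stratum job IDs so a reject can be labeled as stale or not."""

    def __init__(self, history=3):
        self.history = history  # how many old job IDs to remember
        self.jobs = []          # newest last

    def on_notify(self, params):
        # mining.notify params: [job_id, prevhash, coinb1, coinb2,
        #                        merkle_branch, version, nbits, ntime, clean_jobs]
        self.jobs.append(params[0])
        self.jobs = self.jobs[-self.history:]

    def classify_reject(self, submitted_job_id):
        if submitted_job_id and self.jobs and submitted_job_id == self.jobs[-1]:
            return "rejected for another reason"  # job ID was current
        if submitted_job_id in self.jobs:
            behind = len(self.jobs) - 1 - self.jobs.index(submitted_job_id)
            return "stale job (%d behind)" % behind
        return "unknown job ID"

tracker = JobTracker()
for jid in ("a1", "a2", "a3"):
    tracker.on_notify([jid, "", "", "", [], "", "", "", True])

print(tracker.classify_reject("a1"))  # two restarts behind -> stale
print(tracker.classify_reject("a3"))  # current job -> reject must have another cause
```

With such a tracker wired into the proxy, the "two or three jobs back" pattern described above would show up directly in the logs.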
nreal
Full Member
***
Offline Offline

Activity: 182


View Profile
July 05, 2014, 05:41:36 AM
 #9482

Pool Luck(?) (7 days / 30 days / 90 days): 185.7% / 102.4% / 98.9%

nreal
Full Member
***
Offline Offline

Activity: 182


View Profile
July 05, 2014, 06:30:52 AM
 #9483

The good news:
I managed to make a stratum proxy that works with p2pool.

The bad news:
It looks to me like the S2s can't respond quickly enough to the restarts from p2pool.  That causes rejects.  If the difficulty is too low, it gets out of hand: the shares coming from the S2s are from prior job IDs, as many as two or three back.

I'm watching the data flow through in real time, and shares submitted 1-5 seconds after the restart request (with the new job ID) are rejected because the job ID is the wrong one.

Other things of note:
- It'd be useful if p2pool returned some indication of what was wrong when a share is rejected.  I'm going to have to track job IDs myself to visually indicate whether a share was rejected because of a bad job ID.
- The "id" in the response from p2pool doesn't match the "id" from the sender.  The stratum specs are a bit vague, but it seems to me the response "id" should be the same as the sender's id.  Instead it seems to be an incremental number for that connection.

More to come.

M

So it works like a charm with walletaddress/10000000+1024 ?

And without those 30s altcoins..
mdude77
Legendary
*
Offline Offline

Activity: 1358


View Profile
July 05, 2014, 08:42:45 AM
 #9484

The good news:
I managed to make a stratum proxy that works with p2pool.

The bad news:
It looks to me like the S2s can't respond quickly enough to the restarts from p2pool.  That causes rejects.  If the difficulty is too low, it gets out of hand: the shares coming from the S2s are from prior job IDs, as many as two or three back.

I'm watching the data flow through in real time, and shares submitted 1-5 seconds after the restart request (with the new job ID) are rejected because the job ID is the wrong one.

Other things of note:
- It'd be useful if p2pool returned some indication of what was wrong when a share is rejected.  I'm going to have to track job IDs myself to visually indicate whether a share was rejected because of a bad job ID.
- The "id" in the response from p2pool doesn't match the "id" from the sender.  The stratum specs are a bit vague, but it seems to me the response "id" should be the same as the sender's id.  Instead it seems to be an incremental number for that connection.

More to come.

M

So it works like a charm with walletaddress/10000000+1024 ?

And without those 30s altcoins..

I'm not sure if 512 or 1024 works best.  Overall, while a proxy definitely helps, the fundamental issues appear to be:

1 - the Ants can't change work fast enough to keep up with the constant changing of job IDs from p2pool
2 - p2pool doesn't accept work with old job IDs.  Ever.

Eligius does send new job IDs roughly every 30-50 seconds, but it usually doesn't tell the Ants to drop what they're doing and restart (clean_jobs == false).

So I'd say either Bitmain needs to improve the response time on the Ants, or p2pool needs to stop telling the workers to drop their work and start fresh every 30-40 seconds.

M
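For reference, the restart behavior described above is carried by the last element of a stratum `mining.notify` message, the `clean_jobs` flag. A minimal sketch with illustrative placeholder values (not a capture of real p2pool traffic):

```python
import json

# A stratum mining.notify pushed by the pool.  The final boolean is clean_jobs:
#   true  -> miner must abandon all in-flight work immediately (the p2pool case),
#   false -> miner may finish current work and still submit against the old job
#            (the behavior described for Eligius).
notify = json.loads("""
{"id": null, "method": "mining.notify",
 "params": ["job42", "prevhash", "coinb1", "coinb2",
            [], "00000002", "1c2ac4af", "53b8b55b", true]}
""")

job_id = notify["params"][0]
clean_jobs = notify["params"][-1]
if clean_jobs:
    print("drop everything and restart on job", job_id)
else:
    print("new work available; old job still accepted for a while")
```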

MMinerMonitor author, monitor/auto/schedule reboots/alerts/remote/MobileMiner for Ants and Spondoolies! Latest (5.2). MPoolMonitor author, monitor stats/workers for most pools, global BTC stats (current/nxt diff/USD val/hashrate/calc)! Latest (v4.2) 
Buyer beware of Bitmain hardware and services.
nreal
Full Member
***
Offline Offline

Activity: 182


View Profile
July 05, 2014, 09:12:11 AM
 #9485

The good news:
I managed to make a stratum proxy that works with p2pool.

The bad news:
It looks to me like the S2s can't respond quickly enough to the restarts from p2pool.  That causes rejects.  If the difficulty is too low, it gets out of hand: the shares coming from the S2s are from prior job IDs, as many as two or three back.

I'm watching the data flow through in real time, and shares submitted 1-5 seconds after the restart request (with the new job ID) are rejected because the job ID is the wrong one.

Other things of note:
- It'd be useful if p2pool returned some indication of what was wrong when a share is rejected.  I'm going to have to track job IDs myself to visually indicate whether a share was rejected because of a bad job ID.
- The "id" in the response from p2pool doesn't match the "id" from the sender.  The stratum specs are a bit vague, but it seems to me the response "id" should be the same as the sender's id.  Instead it seems to be an incremental number for that connection.

More to come.

M

So it works like a charm with walletaddress/10000000+1024 ?

And without those 30s altcoins..

What do the graphs tell, hourly?  Is it hashing below 1 TH/s?

I'm not sure if 512 or 1024 works best.  Overall, while a proxy definitely helps, the fundamental issues appear to be:

1 - the Ants can't change work fast enough to keep up with the constant changing of job IDs from p2pool
2 - p2pool doesn't accept work with old job IDs.  Ever.

Eligius does send new job IDs roughly every 30-50 seconds, but it usually doesn't tell the Ants to drop what they're doing and restart (clean_jobs == false).

So I'd say either Bitmain needs to improve the response time on the Ants, or p2pool needs to stop telling the workers to drop their work and start fresh every 30-40 seconds.

M
nreal
Full Member
***
Offline Offline

Activity: 182


View Profile
July 05, 2014, 09:14:55 AM
 #9486

The good news:
I managed to make a stratum proxy that works with p2pool.

The bad news:
It looks to me like the S2s can't respond quickly enough to the restarts from p2pool.  That causes rejects.  If the difficulty is too low, it gets out of hand: the shares coming from the S2s are from prior job IDs, as many as two or three back.

I'm watching the data flow through in real time, and shares submitted 1-5 seconds after the restart request (with the new job ID) are rejected because the job ID is the wrong one.

Other things of note:
- It'd be useful if p2pool returned some indication of what was wrong when a share is rejected.  I'm going to have to track job IDs myself to visually indicate whether a share was rejected because of a bad job ID.
- The "id" in the response from p2pool doesn't match the "id" from the sender.  The stratum specs are a bit vague, but it seems to me the response "id" should be the same as the sender's id.  Instead it seems to be an incremental number for that connection.

More to come.

M

So it works like a charm with walletaddress/10000000+1024 ?

And without those 30s altcoins..

What do the graphs tell, hourly?  Is it hashing below 1 TH/s?

I'm not sure if 512 or 1024 works best.  Overall, while a proxy definitely helps, the fundamental issues appear to be:

1 - the Ants can't change work fast enough to keep up with the constant changing of job IDs from p2pool
2 - p2pool doesn't accept work with old job IDs.  Ever.

Eligius does send new job IDs roughly every 30-50 seconds, but it usually doesn't tell the Ants to drop what they're doing and restart (clean_jobs == false).

So I'd say either Bitmain needs to improve the response time on the Ants, or p2pool needs to stop telling the workers to drop their work and start fresh every 30-40 seconds.

M

So it's below 1 TH/s in http://127.0.0.1:9332/static/graphs.html?Hour ?
mdude77
Legendary
*
Offline Offline

Activity: 1358


View Profile
July 05, 2014, 11:32:25 AM
 #9487


It's close to 1 TH/s.  The problem is the reject level is too high.

What I don't understand is why it appears so different when using a proxy.  The reject level is high enough to make the effective hash rate what's showing when you connect directly.  But through the proxy it shows the reject count.

M

nreal
Full Member
***
Offline Offline

Activity: 182


View Profile
July 05, 2014, 12:34:47 PM
 #9488


It's close to 1 TH/s.  The problem is the reject level is too high.

What I don't understand is why it appears so different when using a proxy.  The reject level is high enough to make the effective hash rate what's showing when you connect directly.  But through the proxy it shows the reject count.

M

Isn't the DOA the same thing?
mdude77
Legendary
*
Offline Offline

Activity: 1358


View Profile
July 05, 2014, 12:55:00 PM
 #9489


It's close to 1 TH/s.  The problem is the reject level is too high.

What I don't understand is why it appears so different when using a proxy.  The reject level is high enough to make the effective hash rate what's showing when you connect directly.  But through the proxy it shows the reject count.

M

Isn't the DOA the same thing?

Yes.  I think the problem is that p2pool isn't properly feeding the reject back to cgminer, so cgminer isn't counting the rejects.

It also looks like the Ants have problems changing difficulty quickly.  The lower the difficulty was set, the longer it takes to change.  That causes boatloads of "worker submitted hash > target" messages.  That message itself is misleading: it's actually hash < target the way you and I think about it, but in Bitcoin's convention it's > target.

M
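The >/< confusion comes from the target being an upper bound on the hash value: a share is valid when its hash is at or below the target, and a higher difficulty means a smaller target. A sketch using the standard difficulty-1 target constant:

```python
# Difficulty 1 corresponds to the maximum pool target; a higher pseudo-share
# difficulty shrinks the target, so fewer hashes qualify.
DIFF1_TARGET = 0x00000000FFFF0000000000000000000000000000000000000000000000000000

def target_for_difficulty(difficulty):
    return DIFF1_TARGET // difficulty

def share_is_valid(hash_value, difficulty):
    # "worker submitted hash > target" in the pool's log means the submitted
    # hash exceeded this bound, i.e. the share missed the requested difficulty.
    return hash_value <= target_for_difficulty(difficulty)

t1000 = target_for_difficulty(1000)
assert t1000 < DIFF1_TARGET          # higher difficulty => smaller target
print(share_is_valid(t1000, 1000))   # a hash exactly at the bound is valid
print(share_is_valid(t1000 + 1, 1000))
```

So a "good" hash in the human sense (a small number) is one that is *below* the target, which is why the log message reads backwards at first glance.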

mdude77
Legendary
*
Offline Offline

Activity: 1358


View Profile
July 05, 2014, 12:57:24 PM
 #9490

I found a hack that might get my proxy to work with S2s and p2pool.

I always set the job ID in the submitted work to the current job ID.  p2pool doesn't seem to mind.  It rejects one or two every so often, but that's it.

I need someone who knows p2pool or cgminer better than I do to see if that's legit or not.

M
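The hack described above could look roughly like this inside a stratum proxy. The function and variable names are illustrative, not the actual proxy code:

```python
import json

current_job_id = None  # updated from each mining.notify the pool sends

def on_pool_notify(line):
    """Remember the newest job ID from a pool-side mining.notify line."""
    global current_job_id
    msg = json.loads(line)
    if msg.get("method") == "mining.notify":
        current_job_id = msg["params"][0]

def rewrite_submit(line):
    # mining.submit params: [worker, job_id, extranonce2, ntime, nonce].
    # Overwrite the miner's (possibly stale) job_id with the current one
    # before forwarding upstream.  Caveat: if the job really changed, the
    # share's merkle/header data no longer matches the claimed job, which
    # is likely why p2pool still rejects one or two every so often.
    msg = json.loads(line)
    if msg.get("method") == "mining.submit" and current_job_id is not None:
        msg["params"][1] = current_job_id
    return json.dumps(msg)

on_pool_notify('{"id":null,"method":"mining.notify",'
               '"params":["j7","p","c1","c2",[],"v","b","t",true]}')
out = rewrite_submit('{"id":4,"method":"mining.submit",'
                     '"params":["w","j5","00","t","n"]}')
print(out)  # the stale "j5" has been replaced by the current "j7"
```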

PatMan
Hero Member
*****
Offline Offline

Activity: 924


Watch out for the "Neg-Rep-Dogie-Police".....


View Profile WWW
July 05, 2014, 01:04:30 PM
 #9491

Very interesting mdude - maybe drop kano a line? He'd know. It's all a bit above me I'm afraid.......

"When one person is deluded it is called insanity - when many people are deluded it is called religion" - Robert M. Pirsig.  I don't want your coins, I want change.
Amazon UK BTC payment service - https://bitcointalk.org/index.php?topic=301229.0 - with FREE delivery!
http://www.ae911truth.org/ - http://rethink911.org/ - http://rememberbuilding7.org/
nreal
Full Member
***
Offline Offline

Activity: 182


View Profile
July 05, 2014, 01:33:59 PM
 #9492

P2pool's expected share time should be higher than 30s..
mdude77
Legendary
*
Offline Offline

Activity: 1358


View Profile
July 05, 2014, 01:42:55 PM
 #9493

At the moment my conclusion is a proxy isn't going to help get S2s working with p2pool.

I believe the problem is the constant "drop EVERYTHING you are doing and restart" messages that p2pool sends every 30 seconds.  Then it rejects all the old work coming in, leading to a large DOA rate.

My recommendations for p2pool to become more S2-friendly:

1 - Sending new work every 30 seconds is fine.  It's apparently not necessary to force the miner to restart with "clean jobs" = true.  It also needs to gracefully accept work from prior job IDs for a period of time, like other pools do.

2 - Start at a higher pseudo difficulty than 1.

3 - Allow a higher pseudo difficulty than 1000.

4 - Send the actual errors back for rejected work, instead of just "false".


If I understood Python, I'd take a stab at 2 through 4.  1 might be a big deal.

M
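Point 4 in stratum terms: a minimal sketch contrasting a bare `false` result with a response carrying a stratum-style error triple `[code, message, data]`. The code 21 / "Job not found" pairing is a common stratum convention; the values here are illustrative, not captured p2pool output:

```python
import json

# A rejected-share response with no reason given (result false, error null),
# vs. one carrying an error triple that tells the miner *why* it was rejected.
bare   = '{"id": 7, "result": false, "error": null}'
useful = '{"id": 7, "result": null, "error": [21, "Job not found (stale)", null]}'

for line in (bare, useful):
    msg = json.loads(line)
    if msg["error"]:
        code, reason, _ = msg["error"]
        print("rejected: %s (code %d)" % (reason, code))
    elif msg["result"] is False:
        print("rejected: reason unknown")
```

With the second form, a proxy or miner could distinguish stale-job rejects from difficulty or duplicate rejects without tracking job IDs itself.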

ceslick
Full Member
***
Offline Offline

Activity: 161

digging in the bits... now ant powered!


View Profile WWW
July 05, 2014, 02:17:24 PM
 #9494

I have 4 S1s with kano's update applied; what other ASICs would be recommended for p2pool?

http://www.integratedideas.net  - Home of Rock Solid Miners
NZ Based BTC P2Pool: http://www.integratedideas.net/p2pool-btc/  -  NZ Based DOGE P2Pool: http://www.integratedideas.net/p2pool-doge/
Cloud mining with CEX.IO: https://cex.io/r/2/ceslicknz/0/
windpath
Legendary
*
Offline Offline

Activity: 938


View Profile WWW
July 05, 2014, 04:31:40 PM
 #9495

I have 4 S1s with kano's update applied; what other ASICs would be recommended for p2pool?

I'd suggest this list is incomplete at best, but it's a start: https://en.bitcoin.it/wiki/P2Pool#Interoperability_table

SP10 needs to be added.

windpath
Legendary
*
Offline Offline

Activity: 938


View Profile WWW
July 05, 2014, 04:47:32 PM
 #9496

At the moment my conclusion is a proxy isn't going to help get S2s working with p2pool.

I believe the problem is the constant "drop EVERYTHING you are doing and restart" messages that p2pool sends every 30 seconds.  Then it rejects all the old work coming in, leading to a large DOA rate.

My recommendations for p2pool to become more S2-friendly:

1 - Sending new work every 30 seconds is fine.  It's apparently not necessary to force the miner to restart with "clean jobs" = true.  It also needs to gracefully accept work from prior job IDs for a period of time, like other pools do.

2 - Start at a higher pseudo difficulty than 1.

3 - Allow a higher pseudo difficulty than 1000.

4 - Send the actual errors back for rejected work, instead of just "false".


If I understood Python, I'd take a stab at 2 through 4.  1 might be a big deal.

M

Dude, GREAT WORK!

I'd suggest that perhaps Bitmain or Kano could fix the Ants: Bitmain because they should, Kano because he's awesome Wink

From your findings it sounds like fixing the Ants so they respond to work restarts properly would fix the problem:

Quote
I believe the problem is the constant "drop EVERYTHING you are doing and restart" messages that p2pool sends every 30 seconds.

The more frequent work restarts are a direct result of p2pool's higher share rate...

Quote
I found a hack that might get my proxy to work with S2s and p2pool. I always set the jobid in the work submitted to the current jobid

I'm not sure of the impact this may have; I would think it would break the share header?

From the wiki:
Quote
P2Pool shares form a "sharechain" with each share referencing the previous share's hash. Each share contains a standard Bitcoin block header, some P2Pool-specific data that is used to compute the generation transaction (total subsidy, payout script of this share, a nonce, the previous share's hash, and the current target for shares), and a Merkle branch linking that generation transaction to the block header's Merkle hash.

but if it works it sounds like a good short term solution Smiley

mdude77
Legendary
*
Offline Offline

Activity: 1358


View Profile
July 05, 2014, 05:53:57 PM
 #9497

I'd suggest that perhaps Bitmain or Kano could fix the Ants: Bitmain because they should, Kano because he's awesome Wink

From your findings it sounds like fixing the Ants so they respond to work restarts properly would fix the problem:

Quote
I believe the problem is the constant "drop EVERYTHING you are doing and restart" messages that p2pool sends every 30 seconds.

The more frequent work restarts are a direct result of p2pool's higher share rate...

The problem is the Ants are hardware.  There's a finite amount that can be done to them through software.  If it's a hardware issue, I'd be surprised if the S3s work with p2pool.  If it's software, I don't understand why Bitmain hasn't fixed it.  Surely they could do what I did to see what's going on.

Quote
I found a hack that might get my proxy to work with S2s and p2pool. I always set the jobid in the work submitted to the current jobid
Quote
I'm not sure of the impact this may have; I would think it would break the share header?

Doing that fixed the absurd number of rejects I was getting.  However, I still had the decreased-hashrate problem.  Because of the current true share size, my 2.2 TH/s would take an average of 2 hours to find a share.  I didn't let it chug long enough to see if I regularly found shares, or if I got a larger-than-average amount of dead shares.  I'm not going to throw away a day of mining to experiment with it. Sad

M

mdude77
Legendary
*
Offline Offline

Activity: 1358


View Profile
July 05, 2014, 06:08:56 PM
 #9498

Also, I noticed p2pool would complain a little about shares being submitted over difficulty every time it changed the pseudo-share size.  It even did this when I had the pseudo-share size forced to the highest value (1000) by appending +1000 to my address!  It also happened when I overrode the share size on the proxy side to something larger than p2pool wanted.  I didn't check the math of the shares, so it could be an Ant problem, or it could be a p2pool problem.

M

JakeTri
Full Member
***
Offline Offline

Activity: 154


View Profile
July 05, 2014, 06:48:19 PM
 #9499

At the moment my conclusion is a proxy isn't going to help get S2s working with p2pool.

I believe the problem is the constant "drop EVERYTHING you are doing and restart" messages that p2pool sends every 30 seconds.  Then it rejects all the old work coming in, leading to a large DOA rate.

My recommendations for p2pool to become more S2-friendly:

1 - Sending new work every 30 seconds is fine.  It's apparently not necessary to force the miner to restart with "clean jobs" = true.  It also needs to gracefully accept work from prior job IDs for a period of time, like other pools do.

2 - Start at a higher pseudo difficulty than 1.

3 - Allow a higher pseudo difficulty than 1000.

4 - Send the actual errors back for rejected work, instead of just "false".

If I understood Python, I'd take a stab at 2 through 4.  1 might be a big deal.

Here are 2 patches for p2pool that address points 2 and 3 from your list.

First, a simple patch that allows a pseudo-difficulty higher than 1000:
https://github.com/jaketri/p2pool/commit/05b630f2c8f93b78093043b28c0c543fafa0a856

And another patch that adds a "--min-difficulty" parameter to p2pool.  For my setup I use 256 as the starting pseudo-difficulty:

https://github.com/jaketri/p2pool/commit/5f02f893490f2b9bfa48926184c4b1329c4d1554

BTC donations always welcome: 1JakeTriwbahMYp1rSfJbTn7Afd1w62p2q
sconklin321
Member
**
Offline Offline

Activity: 109


View Profile
July 05, 2014, 06:55:22 PM
 #9500

At the moment my conclusion is a proxy isn't going to help get S2s working with p2pool.

I believe the problem is the constant "drop EVERYTHING you are doing and restart" messages that p2pool sends every 30 seconds.  Then it rejects all the old work coming in, leading to a large DOA rate.

My recommendations for p2pool to become more S2-friendly:

1 - Sending new work every 30 seconds is fine.  It's apparently not necessary to force the miner to restart with "clean jobs" = true.  It also needs to gracefully accept work from prior job IDs for a period of time, like other pools do.

2 - Start at a higher pseudo difficulty than 1.

3 - Allow a higher pseudo difficulty than 1000.

4 - Send the actual errors back for rejected work, instead of just "false".


If I understood Python, I'd take a stab at 2 through 4.  1 might be a big deal.

M

Does p2pool force clean jobs every time a share is found (which should be about every 30 seconds due to the share time)?  If so, wouldn't not forcing that just result in lower node efficiency through more DOAs on the sharechain?  That would be why Eligius doesn't need to force a restart every time it sends new work, just every time a bitcoin block is found (7 to 10 minutes).  And if this is truly where the error is (I could be wrong), wouldn't that mean you are hashing at the full hash rate, but it's just getting reported wrong?