Author Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool  (Read 2591623 times)
sconklin321
Sr. Member
****
Offline Offline

Activity: 543
Merit: 250

It's been ages since my original account, ToRiKaN, was banned


View Profile
August 05, 2014, 12:23:31 AM
 #9861



If you want to get recent blocks found by P2Pool, you can change your front end's code to make a call to the following: https://blockchain.info/blocks/P2Pool?format=json

This will return something like the following:
Code:
{ 
"blocks" : [
{
"height" : 313941,
"hash" : "0000000000000000366aaafabda75d7fc0a184b4177a8769dc258e97262cd09b",
"time" : 1407153595,
"main_chain" : true
},

{
"height" : 313846,
"hash" : "00000000000000000d086863f8131bc9fdd99689577b06829f8a2e6c171174b5",
"time" : 1407090619,
"main_chain" : true
},

{
"height" : 313791,
"hash" : "0000000000000000183861a2afd974f94116dd533f5d6a0b5d700fbb53e22960",
"time" : 1407056128,
"main_chain" : true
},

{
"height" : 313621,
"hash" : "00000000000000002804cd7f5fea7980524c0cf9dd11d3dd89e413bbd4b02a3f",
"time" : 1406975059,
"main_chain" : true
},

{
"height" : 313578,
"hash" : "000000000000000023a88888e16b1d3bade12cb9e4d47901d0d9e141b291307c",
"time" : 1406949661,
"main_chain" : true
},

{
"height" : 313449,
"hash" : "00000000000000002dc6bc02fbfaa506f6c8e8f80dcf97b2e9961ba28d7ca6da",
"time" : 1406878816,
"main_chain" : true
}
]
}


Is this the line where I need to change the call to https://blockchain.info/blocks/P2Pool?format=json ?

Code:
$.getJSON(api_url + '/recent_blocks', function(data) {
          if(data) recent_blocks= data;
          $(document).trigger('update_blocks');
        })

- Original account: ToRiKaN, banned long ago -

https://youtube.com/c/KriptoParatoner

www.twitter.com/torikan
jonnybravo0311
Legendary
*
Offline Offline

Activity: 1344
Merit: 1023


Mine at Jonny's Pool


View Profile WWW
August 05, 2014, 12:49:16 AM
 #9862



If you want to get recent blocks found by P2Pool, you can change your front end's code to make a call to the following: https://blockchain.info/blocks/P2Pool?format=json

This will return something like the following:
Code:
{ 
"blocks" : [
{
"height" : 313941,
"hash" : "0000000000000000366aaafabda75d7fc0a184b4177a8769dc258e97262cd09b",
"time" : 1407153595,
"main_chain" : true
},

{
"height" : 313846,
"hash" : "00000000000000000d086863f8131bc9fdd99689577b06829f8a2e6c171174b5",
"time" : 1407090619,
"main_chain" : true
},

{
"height" : 313791,
"hash" : "0000000000000000183861a2afd974f94116dd533f5d6a0b5d700fbb53e22960",
"time" : 1407056128,
"main_chain" : true
},

{
"height" : 313621,
"hash" : "00000000000000002804cd7f5fea7980524c0cf9dd11d3dd89e413bbd4b02a3f",
"time" : 1406975059,
"main_chain" : true
},

{
"height" : 313578,
"hash" : "000000000000000023a88888e16b1d3bade12cb9e4d47901d0d9e141b291307c",
"time" : 1406949661,
"main_chain" : true
},

{
"height" : 313449,
"hash" : "00000000000000002dc6bc02fbfaa506f6c8e8f80dcf97b2e9961ba28d7ca6da",
"time" : 1406878816,
"main_chain" : true
}
]
}


Is this the line where I need to change the call to https://blockchain.info/blocks/P2Pool?format=json ?

Code:
$.getJSON(api_url + '/recent_blocks', function(data) {
          if(data) recent_blocks= data;
          $(document).trigger('update_blocks');
        })
Yes, that's where it should be changed.  Note, however, that in my own testing the blockchain.info site throws back an error if you try to call it from script like that.  Even though their site clearly states it can be done, it fails.  I'm still looking for a way to incorporate it into the JS rather than rewriting everything in PHP.

Here's what happens when I change the call.  First the code for the call:
Code:
$.getJSON('http://blockchain.info/blocks/P2Pool?format=json&cors=true', function(data) {
  if(data) recent_blocks= data["blocks"];
  $(document).trigger('update_blocks');
});

And the resulting error:
Code:
XMLHttpRequest cannot load http://blockchain.info/blocks/P2Pool?format=json&cors=true. Received an invalid response. Origin 'http://69.141.89.74:9332' is therefore not allowed access.

If I change it to use https, the error becomes:
Code:
XMLHttpRequest cannot load https://blockchain.info/blocks/P2Pool?format=json&cors=true. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://69.141.89.74:9332' is therefore not allowed access.

It's an issue on blockchain.info.  They specifically state they allow cross-site connections (that's what the cors=true is supposed to handle), yet as you can see it fails.  I've also tried forcing CORS locally by doing:
Code:
$.support.cors=true
But, that fails as well.

Oh well... the API is there, and it works if you just paste that link in your browser.  Hopefully someone at blockchain.info can fix it.  Like I said, I could probably write PHP to get it, but I'm just not that motivated at the moment Tongue.
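
For what it's worth, until blockchain.info fixes their CORS headers, one workaround would be a tiny same-origin proxy on the node box that fetches the block list server-side and serves it to the front end locally. This is just a rough sketch (Python 3 standard library, untested against the live API; port 8334 and the /recent_blocks path are arbitrary choices, not anything from p2pool), and it unwraps the "blocks" array the same way the data["blocks"] access above does:
Code:
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

UPSTREAM = 'https://blockchain.info/blocks/P2Pool?format=json'

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != '/recent_blocks':
            self.send_error(404)
            return
        try:
            raw = urlopen(UPSTREAM, timeout=10).read()
            blocks = json.loads(raw)['blocks']   # unwrap the "blocks" array
        except Exception:
            self.send_error(502)                 # upstream unreachable or bad JSON
            return
        body = json.dumps(blocks).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.send_header('Access-Control-Allow-Origin', '*')  # harmless; requests are same-origin anyway
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    HTTPServer(('', 8334), ProxyHandler).serve_forever()

The $.getJSON call in the front end would then point at that local port instead of blockchain.info, so the browser never makes a cross-origin request at all.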

Jonny's Pool - Mine with us and help us grow!  Support a pool that supports Bitcoin, not a hardware manufacturer's pockets!  No SPV cheats.  No empty blocks.
jonnybravo0311
Legendary
*
Offline Offline

Activity: 1344
Merit: 1023


Mine at Jonny's Pool


View Profile WWW
August 05, 2014, 01:05:47 AM
 #9863

The concepts are the same but perhaps I should have posted that it is written with primarily scrypt miners in mind Undecided

So, yeah, the "low setting" caution statement is for coins that could incur large fees when sent, due to low payout amounts (dust) going to smaller miners. I do believe the /0 setting overrides the minimum "dust" payout code written into p2pool, so you might want to double-check that. I don't remember if I verified it by reviewing the code, but I'm relatively sure that's how it works... thinking about it, though, the minimum p2pool share difficulty on BTC is too high for this to matter...
I'll take a look at the code to see if /0 really would override the dust payout threshold.  I didn't see anything to that effect, but then again I wasn't looking for it.  I'll post back here (probably tomorrow, I'm tired of looking at code) with any findings.  Of course, if anybody else can provide an answer to what effect /0 might have on dust threshold, that'd be great, too.

As I wrote a few pages back, I have seen no difference in the performance of my S3s between letting the node determine difficulty, and setting it manually as per Bitmain's direction (ADDRESS/256+256).  The only actual difference I have seen is on the S3 web UI's miner status page.  The value of Best Share is always "0" when manually setting the difficulty.

The real miners who should be setting the difficulty are the big hitters.  If your hashing rate has you expecting to find more than a share an hour, set your difficulty appropriately (i.e. HIGHER than the p2pool share difficulty) so the rest of the miners on the node don't suffer for it.

Huh... there's an interesting thing to look into as well.  If I have 5 miners on my node, 4 miners have 1TH/s each and the 5th miner has 100TH/s, would the node not consider the big hitter's hash rate when setting the difficulty for the other miners if the big hitter sets his own difficulty using ADDRESS/value?  If not, then it is advisable for all miners on public nodes to set a difficulty manually.
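
For anyone following along, the ADDRESS/value+value convention being discussed works like this (per the work.py parsing quoted a few posts down): '/' sets your desired share difficulty and '+' sets your desired pseudoshare difficulty. Below is a simplified, illustrative re-implementation of that convention only, not p2pool's actual parsing code:
Code:
import re

def parse_worker(user):
    # Hypothetical helper for illustration only; p2pool's real parsing lives in
    # get_user_details() in work.py.
    address, *rest = re.split(r'(?=[/+])', user)   # keep the '/' and '+' markers
    share_diff = pseudo_diff = None
    for token in rest:
        symbol, value = token[0], float(token[1:])
        if symbol == '/':
            share_diff = value          # desired share difficulty
        elif symbol == '+':
            pseudo_diff = value         # desired pseudoshare difficulty
    return address, share_diff, pseudo_diff

print(parse_worker('1ExampleAddr/256+256'))   # Bitmain's suggestion: ('1ExampleAddr', 256.0, 256.0)
print(parse_worker('1ExampleAddr+8192'))      # pseudoshare diff only: ('1ExampleAddr', None, 8192.0)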

It would be nice if forrestv actually provided the answers to these, and other queries.  Unfortunately, he's not been on the boards since 7/26, so this post will probably not gain his attention.

Jonny's Pool - Mine with us and help us grow!  Support a pool that supports Bitcoin, not a hardware manufacturer's pockets!  No SPV cheats.  No empty blocks.
wlz2011
Member
**
Offline Offline

Activity: 71
Merit: 10


View Profile
August 05, 2014, 07:27:16 AM
 #9864

Please help me

I have two p2pool nodes on my LAN, at 192.168.0.250:9332 and 192.168.0.252:9332.

How do I make these two nodes connect to each other as fixed peers?

Which file or folder should be modified? Thank you. Smiley
jedimstr
Hero Member
*****
Offline Offline

Activity: 798
Merit: 1000



View Profile
August 05, 2014, 10:14:49 AM
 #9865

On number 2 Grin
I have been wondering about this. This isn't really about shares, but diff with high hashrate miners question. When Mr 80TH's shows up on pool it's just killing everyone else it seems. Default diff is 999 or so. Based on what you're saying that is not actually correct because if you get a share it's worth more.
Currently one address around 12TH/s and you just see all the estimated earnings going down for smaller miners.

Have your smaller miners use /0 or whatever you think is appropriate so their share diff is not raised from the larger ones. On my front end I show the miners share diff and expected time to share by address so they can adjust it as necessary.

This is if you use a public node...if you have your own it really shouldn't matter.

Did you lift some of your suggested settings from Norgz' Antminer settings page?
http://www.norgzpool.net.au/antminer.html


bryonp
Member
**
Offline Offline

Activity: 85
Merit: 10


View Profile
August 05, 2014, 10:37:54 AM
 #9866

On number 2 Grin
I have been wondering about this. This isn't really about shares, but diff with high hashrate miners question. When Mr 80TH's shows up on pool it's just killing everyone else it seems. Default diff is 999 or so. Based on what you're saying that is not actually correct because if you get a share it's worth more.
Currently one address around 12TH/s and you just see all the estimated earnings going down for smaller miners.

Have your smaller miners use /0 or whatever you think is appropriate so their share diff is not raised from the larger ones. On my front end I show the miners share diff and expected time to share by address so they can adjust it as necessary.

This is if you use a public node...if you have your own it really shouldn't matter.

Did you lift some of your suggested settings from Norgz' Antminer settings page?
http://www.norgzpool.net.au/antminer.html



OK, this link is the easiest for me to follow and accomplish...
Do you really feel that this will make my outcome much better? I hate to screw around with the settings and end up with a mess on my hands.
I am running 16 S1s.

As soon as we jump up in hashing, my income drops to peanuts?

PatMan
Hero Member
*****
Offline Offline

Activity: 924
Merit: 1000


Watch out for the "Neg-Rep-Dogie-Police".....


View Profile WWW
August 05, 2014, 10:38:31 AM
 #9867

Greetings guys/gals,

Been out of the country for a few weeks so only got to setting up my S3's a few days ago, but I'm now able to get back to mining with p2pool again & thought I'd share my experience with them after tinkering for a few days trying to find the optimal settings. Generally I'm quite impressed with the S3, even though the performance can vary massively between different devices with the same settings.

They are all B1 units flashed with the latest (non-beeping) firmware for my own sanity. I did some reading up on other users' experiences/settings & decided to stay away from using --scan-time 1 --expiry 1 on the advice of ck, choosing to play with the --queue settings & frequencies instead. I have always used --queue 0 with p2pool, believing this to be the optimal requirement, but found that --queue 1 resulted in the lowest discard rate & stopped the CPU maxing out at 100% - why this is I'm not sure, but I'll roll with it. All of my S3s' CPU usage now hovers ~85% with a much more acceptable discard rate. The freq settings are either 218.75, 237.5 or 250 depending on the unit - some flat out refuse to go above 218.75 without losing the plot - others sit there at 250 with fewer HW errors/discards/rejects than those running at 218.75   Huh I also found 225 to be the worst performing freq of the lot!

Generally I'm quite impressed with them though and it's good to be back on p2pool. I had an email from Bitmain saying they would have a solution for various p2pool issues soon; that was 4 weeks ago & I've not heard a dicky bird since. I did also try their recommended setting of /256+256 but found it to make no difference whatsoever, so I let p2pool decide the diff, which seems to work pretty well. All the units were stripped & checked for loose screws etc. before using, but generally the build quality was sound.

With everything that is going on with AMT, KNC, Rockminer, Black Arrow, Spondoolies etc., it seems Bitmain have not only come out with another little gem - but have probably saved p2pool's bacon by ensuring that the S3s are at least usable with p2pool, even if not 100% efficiently - hopefully this will be addressed soon with a little more whining encouragement from p2pool users..... the increase in p2pool hashrate is testament to this and I'm wondering if the massive hashrate fluctuations are actually Bitmain keeping true to their promise of investing their equipment into p2pool...... that would be nice  Grin

Did we get a new dev yet by the way?

Peace  Smiley

"When one person is deluded it is called insanity - when many people are deluded it is called religion" - Robert M. Pirsig.  I don't want your coins, I want change.
Amazon UK BTC payment service - https://bitcointalk.org/index.php?topic=301229.0 - with FREE delivery!
http://www.ae911truth.org/ - http://rethink911.org/ - http://rememberbuilding7.org/
CartmanSPC
Legendary
*
Offline Offline

Activity: 1270
Merit: 1000



View Profile
August 05, 2014, 06:42:42 PM
 #9868

On number 2 Grin
I have been wondering about this. This isn't really about shares, but diff with high hashrate miners question. When Mr 80TH's shows up on pool it's just killing everyone else it seems. Default diff is 999 or so. Based on what you're saying that is not actually correct because if you get a share it's worth more.
Currently one address around 12TH/s and you just see all the estimated earnings going down for smaller miners.

Have your smaller miners use /0 or whatever you think is appropriate so their share diff is not raised from the larger ones. On my front end I show the miners share diff and expected time to share by address so they can adjust it as necessary.

This is if you use a public node...if you have your own it really shouldn't matter.

Did you lift some of your suggested settings from Norgz' Antminer settings page?
http://www.norgzpool.net.au/antminer.html


Nope, have not seen that site before. Their recommendation of /0+220 seems good for S1s. You could play with the 220 until you get a pseudoshare diff that you're comfortable with.

In regard to the --scan-time 1 --expiry 1 settings, I read a detailed post (I think from ck) saying those were essentially obsolete. Wish I had saved it.

Duce
Full Member
***
Offline Offline

Activity: 175
Merit: 100


View Profile
August 05, 2014, 06:50:17 PM
 #9869

On number 2 Grin
I have been wondering about this. This isn't really about shares, but diff with high hashrate miners question. When Mr 80TH's shows up on pool it's just killing everyone else it seems. Default diff is 999 or so. Based on what you're saying that is not actually correct because if you get a share it's worth more.
Currently one address around 12TH/s and you just see all the estimated earnings going down for smaller miners.

Have your smaller miners use /0 or whatever you think is appropriate so their share diff is not raised from the larger ones. On my front end I show the miners share diff and expected time to share by address so they can adjust it as necessary.

This is if you use a public node...if you have your own it really shouldn't matter.

Did you lift some of your suggested settings from Norgz' Antminer settings page?
http://www.norgzpool.net.au/antminer.html


Nope, have not seen that site before. Their recommendation of /0+220 seems good for S1s. You could play with the 220 until you get a pseudoshare diff that you're comfortable with.

In regard to the --scan-time 1 --expiry 1 settings, I read a detailed post (I think from ck) saying those were essentially obsolete. Wish I had saved it.

Here it is https://bitcointalk.org/index.php?topic=18313.msg8036549#msg8036549
jonnybravo0311
Legendary
*
Offline Offline

Activity: 1344
Merit: 1023


Mine at Jonny's Pool


View Profile WWW
August 05, 2014, 11:38:21 PM
 #9870

On number 2 Grin
I have been wondering about this. This isn't really about shares, but diff with high hashrate miners question. When Mr 80TH's shows up on pool it's just killing everyone else it seems. Default diff is 999 or so. Based on what you're saying that is not actually correct because if you get a share it's worth more.
Currently one address around 12TH/s and you just see all the estimated earnings going down for smaller miners.

Have your smaller miners use /0 or whatever you think is appropriate so their share diff is not raised from the larger ones. On my front end I show the miners share diff and expected time to share by address so they can adjust it as necessary.

This is if you use a public node...if you have your own it really shouldn't matter.

Did you lift some of your suggested settings from Norgz' Antminer settings page?
http://www.norgzpool.net.au/antminer.html


Nope, have not seen that site before. Their recommendation of /0+220 seems good for S1s. You could play with the 220 until you get a pseudoshare diff that you're comfortable with.

In regard to the --scan-time 1 --expiry 1 settings, I read a detailed post (I think from ck) saying those were essentially obsolete. Wish I had saved it.
Ok... did a little investigating on this issue.  First off, setting the difficulty to /0 or +0 actually initially sets both to the highest possible values.  First the code in work.py in get_user_details:
Code:
desired_pseudoshare_target = None
desired_share_target = None
for symbol, parameter in zip(contents2[::2], contents2[1::2]):
  if symbol == '+':
    try:
      desired_pseudoshare_target = bitcoin_data.difficulty_to_target(float(parameter))
    except:
      if p2pool.DEBUG:
        log.err()
  elif symbol == '/':
    try:
      desired_share_target = bitcoin_data.difficulty_to_target(float(parameter))
    except:
      if p2pool.DEBUG:
        log.err()
Now the code for bitcoin_data.difficulty_to_target:
Code:
def difficulty_to_target(difficulty):
    assert difficulty >= 0
    if difficulty == 0: return 2**256-1
    return min(int((0xffff0000 * 2**(256-64) + 1)/difficulty - 1 + 0.5), 2**256-1)

So for sure don't set your miners to /0 or +0.
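
As a quick sanity check on those values, running that function on its own (copied verbatim from the snippet above) shows what a /0 or +0 request turns into:
Code:
def difficulty_to_target(difficulty):
    assert difficulty >= 0
    if difficulty == 0: return 2**256-1
    return min(int((0xffff0000 * 2**(256-64) + 1)/difficulty - 1 + 0.5), 2**256-1)

print(hex(difficulty_to_target(0)))   # 0xfff...fff: 2**256 - 1, the largest possible target value
print(hex(difficulty_to_target(1)))   # ~0xffff0000 << 192, the classic difficulty-1 target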

The only time I can find any mention of the dust threshold of the coin in the code is when you have not set a difficulty.  In this case if the expected payout per block is less than the dust threshold, it resets your desired difficulty such that your expected payout is greater than that threshold.  Here's the code:
Code:
if desired_share_target is None:
    desired_share_target = 2**256-1
    local_hash_rate = self._estimate_local_hash_rate()
    if local_hash_rate is not None:
        desired_share_target = min(desired_share_target,
            bitcoin_data.average_attempts_to_target(local_hash_rate * self.node.net.SHARE_PERIOD / 0.0167)) # limit to 1.67% of pool shares by modulating share difficulty

    local_addr_rates = self.get_local_addr_rates()
    lookbehind = 3600//self.node.net.SHARE_PERIOD
    block_subsidy = self.node.bitcoind_work.value['subsidy']
    if previous_share is not None and self.node.tracker.get_height(previous_share.hash) > lookbehind:
        expected_payout_per_block = local_addr_rates.get(pubkey_hash, 0)/p2pool_data.get_pool_attempts_per_second(self.node.tracker, self.node.best_share_var.value, lookbehind) \
            * block_subsidy*(1-self.donation_percentage/100) # XXX doesn't use global stale rate to compute pool hash
        if expected_payout_per_block < self.node.net.PARENT.DUST_THRESHOLD:
            desired_share_target = min(desired_share_target,
                bitcoin_data.average_attempts_to_target((bitcoin_data.target_to_average_attempts(self.node.bitcoind_work.value['bits'].target)*self.node.net.SPREAD)*self.node.net.PARENT.DUST_THRESHOLD/block_subsidy)
            )

I'm going to keep digging through the code to see what more I can learn from it.
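
To put rough numbers on that dust check (purely illustrative; the hashrates and subsidy below are examples, and the actual DUST_THRESHOLD constant is defined per network in p2pool):
Code:
# Back-of-the-envelope version of the expected_payout_per_block calculation above.
local_rate    = 1e12      # 1 TH/s miner (example)
pool_rate     = 2.5e15    # p2pool was around 2.5 PH/s at the time
block_subsidy = 25.0      # BTC per block in 2014
donation_pct  = 0.0

expected_payout_per_block = local_rate / pool_rate * block_subsidy * (1 - donation_pct / 100)
print(expected_payout_per_block)   # 0.01 BTC; only if this fell below the network's
                                   # DUST_THRESHOLD would the share target be tightened
                                   # (i.e. the share difficulty raised)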

Jonny's Pool - Mine with us and help us grow!  Support a pool that supports Bitcoin, not a hardware manufacturer's pockets!  No SPV cheats.  No empty blocks.
contactlight
Full Member
***
Offline Offline

Activity: 168
Merit: 100


View Profile
August 05, 2014, 11:48:58 PM
 #9871

Why is this not documented? No wonder P2Pool isn't taking off. We need all critical code paths documented and logic explained.

On a side note, I refuse to believe that what I am experiencing is just variance and bad luck. It has been over 24 hours and I still haven't found a share with my 2 x S3s running at 1TH/s. There is definitely some sort of incompatibility between S3s and P2Pool.

While I love the idea behind P2Pool, sadly I won't be able to be a part of it until this is fixed. However, I am not hopeful that it will be fixed as let alone identifying the problem, it isn't even acknowledged yet. Pointing my miners to Eligius.
CartmanSPC
Legendary
*
Offline Offline

Activity: 1270
Merit: 1000



View Profile
August 05, 2014, 11:49:55 PM
Last edit: August 06, 2014, 12:03:49 AM by CartmanSPC
 #9872

Ok... did a little investigating on this issue.  First off, setting the difficulty to /0 or +0 actually initially sets both to the highest possible values.  First the code in work.py in get_user_details:

So for sure don't set your miners to /0

I'm going to keep digging through the code to see what more I can learn from it.

Keep digging... I don't think your assertion is correct. I know from experience that /0 absolutely sets your miners to get shares at the smallest possible p2pool share difficulty (currently 4608650.012 for BTC).

ceslick
Full Member
***
Offline Offline

Activity: 161
Merit: 100

digging in the bits... now ant powered!


View Profile WWW
August 05, 2014, 11:52:33 PM
 #9873

Then Norgz' guide saying to add /0+220 to your address is not optimal?

Should it just be +220?

http://www.integratedideas.net  - Home of Rock Solid Miners
NZ Based BTC P2Pool: http://www.integratedideas.net/p2pool-btc/  -  NZ Based DOGE P2Pool: http://www.integratedideas.net/p2pool-doge/
Cloud mining with CEX.IO: https://cex.io/r/2/ceslicknz/0/
mdude77
Legendary
*
Offline Offline

Activity: 1540
Merit: 1001



View Profile
August 06, 2014, 12:14:32 AM
 #9874

Why is this not documented? No wonder P2Pool isn't taking off. We need all critical code paths documented and logic explained.

On a side note, I refuse to believe that what I am experiencing is just variance and bad luck. It has been over 24 hours and I still haven't found a share with my 2 x S3s running at 1TH/s. There is definitely some sort of incompatibility between S3s and P2Pool.

While I love the idea behind P2Pool, sadly I won't be able to be a part of it until this is fixed. However, I am not hopeful that it will be fixed as let alone identifying the problem, it isn't even acknowledged yet. Pointing my miners to Eligius.

Try running a local p2pool node.  Point your miners to it, and see what p2pool says for "expected time to share".

M

I mine at Kano's Pool because it pays the best and is completely transparent!  Come join me!
jonnybravo0311
Legendary
*
Offline Offline

Activity: 1344
Merit: 1023


Mine at Jonny's Pool


View Profile WWW
August 06, 2014, 12:22:33 AM
 #9875

Why is this not documented? No wonder P2Pool isn't taking off. We need all critical code paths documented and logic explained.

On a side note, I refuse to believe that what I am experiencing is just variance and bad luck. It has been over 24 hours and I still haven't found a share with my 2 x S3s running at 1TH/s. There is definitely some sort of incompatibility between S3s and P2Pool.

While I love the idea behind P2Pool, sadly I won't be able to be a part of it until this is fixed. However, I am not hopeful that it will be fixed as let alone identifying the problem, it isn't even acknowledged yet. Pointing my miners to Eligius.
I've got 5 S3s pointed to p2pool and have found 11 shares in the past 24 hours.  There's nothing wrong with how they function with p2pool.  I wish you the best of luck on Eligius.  Wizkid's got a good pool there.

Ok... did a little investigating on this issue.  First off, setting the difficulty to /0 or +0 actually initially sets both to the highest possible values.  First the code in work.py in get_user_details:

So for sure don't set your miners to /0

I'm going to keep digging through the code to see what more I can learn from it.

Keep digging... I don't think your assertion is correct. I know from experience that /0 absolutely sets your miners to get shares at the smallest possible p2pool share difficulty (currently 4608650.012 for BTC).
That's why I'm still looking.  I want to be absolutely sure.  To this point, however, there is nothing in the code that I've seen to indicate anything other than what I've posted.  Besides, why take the chance?  If you intend to set your difficulty manually to some silly low value, just use /1 instead.

Basically, my search is now concentrating on finding something to the effect of "take the greater of my desired share difficulty and p2pool's minimum share difficulty" during miner payout calculations.  If your assertion is correct, then I need to find some other piece of code that says, "if the desired share target equals 2**256-1, use p2pool's minimum share difficulty".

Jonny's Pool - Mine with us and help us grow!  Support a pool that supports Bitcoin, not a hardware manufacturer's pockets!  No SPV cheats.  No empty blocks.
ceslick
Full Member
***
Offline Offline

Activity: 161
Merit: 100

digging in the bits... now ant powered!


View Profile WWW
August 06, 2014, 12:27:53 AM
 #9876

In other news, p2pool is approaching 2.5 PH/s.

http://www.integratedideas.net  - Home of Rock Solid Miners
NZ Based BTC P2Pool: http://www.integratedideas.net/p2pool-btc/  -  NZ Based DOGE P2Pool: http://www.integratedideas.net/p2pool-doge/
Cloud mining with CEX.IO: https://cex.io/r/2/ceslicknz/0/
contactlight
Full Member
***
Offline Offline

Activity: 168
Merit: 100


View Profile
August 06, 2014, 12:49:35 AM
 #9877

Why is this not documented? No wonder P2Pool isn't taking off. We need all critical code paths documented and logic explained.

On a side note, I refuse to believe that what I am experiencing is just variance and bad luck. It has been over 24 hours and I still haven't found a share with my 2 x S3s running at 1TH/s. There is definitely some sort of incompatibility between S3s and P2Pool.

While I love the idea behind P2Pool, sadly I won't be able to be a part of it until this is fixed. However, I am not hopeful that it will be fixed as let alone identifying the problem, it isn't even acknowledged yet. Pointing my miners to Eligius.

Try running a local p2pool node.  Point your miners to it, and see what p2pool says for "expected time to share".

M

I've tried that as well. I live in San Francisco and I have dedicated servers in a datacenter in San Francisco as well. I set up a full Bitcoin node on one of them and set up P2Pool. My latency was under 20ms and my DOA was around 2%. It was pretty much the most optimal setup that you can get without having them on the same network.

Expected time to share was about 5 hours and I haven't gotten any shares in over 24 hours with that setup either.

Why is this not documented? No wonder P2Pool isn't taking off. We need all critical code paths documented and logic explained.

On a side note, I refuse to believe that what I am experiencing is just variance and bad luck. It has been over 24 hours and I still haven't found a share with my 2 x S3s running at 1TH/s. There is definitely some sort of incompatibility between S3s and P2Pool.

While I love the idea behind P2Pool, sadly I won't be able to be a part of it until this is fixed. However, I am not hopeful that it will be fixed as let alone identifying the problem, it isn't even acknowledged yet. Pointing my miners to Eligius.
I've got 5 S3s pointed to p2pool and have found 11 shares in the past 24 hours.  There's nothing wrong with how they function with p2pool.  I wish you the best of luck on Eligius.  Wizkid's got a good pool there.

While Eligius is really good and I am fine with switching to them, I really want to stick with P2Pool if possible. It is the "right" way to mine in my mind.

I have two S3s and you would expect them to have found around 4 shares since your 5 found 11. Variance, bad luck etc. but still 0 shares? That doesn't make much sense to me. We can't just decide not to even try to debug a distributed system because it has a random component. It's like saying it's not a bug, it's a feature.
jonnybravo0311
Legendary
*
Offline Offline

Activity: 1344
Merit: 1023


Mine at Jonny's Pool


View Profile WWW
August 06, 2014, 01:02:22 AM
 #9878

Why is this not documented? No wonder P2Pool isn't taking off. We need all critical code paths documented and logic explained.

On a side note, I refuse to believe that what I am experiencing is just variance and bad luck. It has been over 24 hours and I still haven't found a share with my 2 x S3s running at 1TH/s. There is definitely some sort of incompatibility between S3s and P2Pool.

While I love the idea behind P2Pool, sadly I won't be able to be a part of it until this is fixed. However, I am not hopeful that it will be fixed as let alone identifying the problem, it isn't even acknowledged yet. Pointing my miners to Eligius.

Try running a local p2pool node.  Point your miners to it, and see what p2pool says for "expected time to share".

M

I've tried that as well. I live in San Francisco and I have dedicated servers in a datacenter in San Francisco as well. I set up a full Bitcoin node on one of them and set up P2Pool. My latency was under 20ms and my DOA was around 2%. It was pretty much the most optimal setup that you can get without having them on the same network.

Expected time to share was about 5 hours and I haven't gotten any shares in over 24 hours with that setup either.

Why is this not documented? No wonder P2Pool isn't taking off. We need all critical code paths documented and logic explained.

On a side note, I refuse to believe that what I am experiencing is just variance and bad luck. It has been over 24 hours and I still haven't found a share with my 2 x S3s running at 1TH/s. There is definitely some sort of incompatibility between S3s and P2Pool.

While I love the idea behind P2Pool, sadly I won't be able to be a part of it until this is fixed. However, I am not hopeful that it will be fixed as let alone identifying the problem, it isn't even acknowledged yet. Pointing my miners to Eligius.
I've got 5 S3s pointed to p2pool and have found 11 shares in the past 24 hours.  There's nothing wrong with how they function with p2pool.  I wish you the best of luck on Eligius.  Wizkid's got a good pool there.

While Eligius is really good and I am fine with switching to them, I really want to stick with P2Pool if possible. It is the "right" way to mine in my mind.

I have two S3s and you would expect them to have found around 4 shares since your 5 found 11. Variance, bad luck etc. but still 0 shares? That doesn't make much sense to me. We can't just decide not to even try to debug a distributed system because it has a random component. It's like saying it's not a bug, it's a feature.
I agree it sucks that you haven't found any shares but what exactly would somebody debug?  If every S3 suffered the problem (like the S2's documented loss of hash rate) then there would be something to look into.  If your miners are reporting good hash rates with low errors, low rejects and low DOA then they're working properly.  Didn't you previously post that the same hardware was finding shares a few weeks ago?  If that wasn't you then I apologize.  My point though is that the most likely answer is simply bad luck.  There just isn't enough evidence to point to problems with the hardware.
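
For what it's worth, you can put a rough number on "bad luck" here, assuming share finds behave roughly like a Poisson process and the ~5 hour "expected time to share" estimate quoted above is accurate:
Code:
import math

expected_time_h = 5.0    # node's estimated time to share (from the post above)
elapsed_h = 24.0         # time observed with zero shares

p_zero_shares = math.exp(-elapsed_h / expected_time_h)
print(p_zero_shares)     # ~0.008, i.e. roughly a 1-in-120 dry spell

So an unusually unlucky run, but not an impossible one for a single miner; on its own it doesn't prove or rule out an S3-specific problem.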

Jonny's Pool - Mine with us and help us grow!  Support a pool that supports Bitcoin, not a hardware manufacturer's pockets!  No SPV cheats.  No empty blocks.
contactlight
Full Member
***
Offline Offline

Activity: 168
Merit: 100


View Profile
August 06, 2014, 01:10:19 AM
 #9879

Why is this not documented? No wonder P2Pool isn't taking off. We need all critical code paths documented and logic explained.

On a side note, I refuse to believe that what I am experiencing is just variance and bad luck. It has been over 24 hours and I still haven't found a share with my 2 x S3s running at 1TH/s. There is definitely some sort of incompatibility between S3s and P2Pool.

While I love the idea behind P2Pool, sadly I won't be able to be a part of it until this is fixed. However, I am not hopeful that it will be fixed as let alone identifying the problem, it isn't even acknowledged yet. Pointing my miners to Eligius.

Try running a local p2pool node.  Point your miners to it, and see what p2pool says for "expected time to share".

M

I've tried that as well. I live in San Francisco and I have dedicated servers in a datacenter in San Francisco as well. I set up a full Bitcoin node on one of them and set up P2Pool. My latency was under 20ms and my DOA was around 2%. It was pretty much the most optimal setup that you can get without having them on the same network.

Expected time to share was about 5 hours and I haven't gotten any shares in over 24 hours with that setup either.

Why is this not documented? No wonder P2Pool isn't taking off. We need all critical code paths documented and logic explained.

On a side note, I refuse to believe that what I am experiencing is just variance and bad luck. It has been over 24 hours and I still haven't found a share with my 2 x S3s running at 1TH/s. There is definitely some sort of incompatibility between S3s and P2Pool.

While I love the idea behind P2Pool, sadly I won't be able to be a part of it until this is fixed. However, I am not hopeful that it will be fixed as let alone identifying the problem, it isn't even acknowledged yet. Pointing my miners to Eligius.
I've got 5 S3s pointed to p2pool and have found 11 shares in the past 24 hours.  There's nothing wrong with how they function with p2pool.  I wish you the best of luck on Eligius.  Wizkid's got a good pool there.

While Eligius is really good and I am fine with switching to them, I really want to stick with P2Pool if possible. It is the "right" way to mine in my mind.

I have two S3s and you would expect them to have found around 4 shares since your 5 found 11. Variance, bad luck etc. but still 0 shares? That doesn't make much sense to me. We can't just decide not to even try to debug a distributed system because it has a random component. It's like saying it's not a bug, it's a feature.
I agree it sucks that you haven't found any shares but what exactly would somebody debug?  If every S3 suffered the problem (like the S2's documented loss of hash rate) then there would be something to look into.  If your miners are reporting good hash rates with low errors, low rejects and low DOA then they're working properly.  Didn't you previously post that the same hardware was finding shares a few weeks ago?  If that wasn't you then I apologize.  My point though is that the most likely answer is simply bad luck.  There just isn't enough evidence to point to problems with the hardware.

If P2Pool was better documented, I would be able to form some sort of hypothesis but I can't due to the lack of documentation.

I know that the P2Pool server essentially "lies" to the miner with pseudoshares so that it keeps mining. That's why I don't think the miner-side statistics would be very helpful in this case. I could look into the code and try to figure out what might be going wrong, but that's what I do all day at my job anyway. Smiley

And you're correct, that was me. When I first started mining with my S3s on P2Pool, I found four, maybe five shares in a row but I pretty much never found any other shares after that.

I am running on the latest firmware by the way. I am not sure if others upgraded to it. It could be a problem with that firmware.

As far as I can tell, Antminer S3s are running a modified version of cgminer 3.12.0. This seems to be outdated and I might try cross-compiling the newest version. However, that would lack the Bitmain modifications, so it would be less than ideal.
mdude77
Legendary
*
Offline Offline

Activity: 1540
Merit: 1001



View Profile
August 06, 2014, 01:11:23 AM
 #9880

I've tried that as well. I live in San Francisco and I have dedicated servers in a datacenter in San Francisco as well. I set up a full Bitcoin node on one of them and set up P2Pool. My latency was under 20ms and my DOA was around 2%. It was pretty much the most optimal setup that you can get without having them on the same network.

My node (http://96.44.166.190:9332) is in LA.  Looks like I have two folks there with a pair of S3s.  Try pointing your workers there and see if you notice a difference.

Also, despite all the talk going on here, I don't recommend messing with share size with + or /.  Just use your payout address and see what happens.  Once you know that's working, then try customizing it.

M

I mine at Kano's Pool because it pays the best and is completely transparent!  Come join me!