Bitcoin Forum
December 02, 2016, 10:28:03 PM *
News: Latest stable version of Bitcoin Core: 0.13.1  [Torrent].
 
Author Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool  (Read 2028615 times)
windpath
Legendary
*
Offline Offline

Activity: 938


View Profile WWW
June 07, 2015, 06:01:00 PM
 #12721


The correct and direct definition of luck (where >100% is good luck and less than 100% is bad luck) is simply DifficultyExpected/DifficultySubmitted

I'm not sure I understand how you would calculate this; wouldn't the submitted diff for a valid block always be greater than or equal to the expected diff?

We use the average of the stored hashrate since the last block was found by the pool (weighting and averaging the difficulty if it changed since the last block was found) to determine an average expected time to block, then compare that to the actual time for the block in question.

If the times are equal, it's 100%.

If we found it faster, it's > 100%.

If we found it slower, it's < 100%.
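
As a rough sketch (the function names are my own, not windpath's actual code), the expected-time-vs-actual-time calculation above amounts to:

```python
def expected_block_time(avg_hashrate, network_difficulty):
    """Expected seconds to find a block: a difficulty-D block takes
    on average D * 2**32 hashes."""
    return network_difficulty * 2**32 / avg_hashrate

def luck(avg_hashrate, network_difficulty, actual_seconds):
    """Luck as expected time / actual time; > 1.0 (i.e. > 100%)
    means the block was found faster than expected."""
    return expected_block_time(avg_hashrate, network_difficulty) / actual_seconds
```

For example, a pool averaging 2**32 H/s against difficulty 1 expects a block every second; finding one in two seconds gives luck 0.5, i.e. 50%.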

yslyung
Legendary
*
Offline Offline

Activity: 1050


Mine Mine Mine


View Profile
June 07, 2015, 08:15:46 PM
 #12722


The correct and direct definition of luck (where >100% is good luck and less than 100% is bad luck) is simply DifficultyExpected/DifficultySubmitted

I'm not sure I understand how you would calculate this; wouldn't the submitted diff for a valid block always be greater than or equal to the expected diff?

We use the average of the stored hashrate since the last block was found by the pool (weighting and averaging the difficulty if it changed since the last block was found) to determine an average expected time to block, then compare that to the actual time for the block in question.

If the times are equal, it's 100%.

If we found it faster, it's > 100%.

If we found it slower, it's < 100%.

math & coding is hard ...

but afaik with my poor math & coding skills ... p2pool luck is better than kano.is ?
windpath
Legendary
*
Offline Offline

Activity: 938


View Profile WWW
June 07, 2015, 08:27:30 PM
 #12723


The correct and direct definition of luck (where >100% is good luck and less than 100% is bad luck) is simply DifficultyExpected/DifficultySubmitted

I'm not sure I understand how you would calculate this; wouldn't the submitted diff for a valid block always be greater than or equal to the expected diff?

We use the average of the stored hashrate since the last block was found by the pool (weighting and averaging the difficulty if it changed since the last block was found) to determine an average expected time to block, then compare that to the actual time for the block in question.

If the times are equal, it's 100%.

If we found it faster, it's > 100%.

If we found it slower, it's < 100%.

math & coding is hard ...

but afaik with my poor math & coding skills ... p2pool luck is better than kano.is ?

Sometimes it is, sometimes it is not - that's the nature of luck ;)

chalkboard17
Sr. Member
****
Offline Offline

Activity: 271


View Profile
June 07, 2015, 08:39:54 PM
 #12724

I am using antminer s5 interface and pointing miner to the ip.

Use kano's cgminer replacement - bitmain's cgminer is borked for p2pool:

SSH into your S5 as root then copy/paste:

Code:
cd /tmp
# Fetch kano's replacement cgminer build for the S5
wget http://ck.kolivas.org/apps/cgminer/antminer/s5/4.9.0-150105/cgminer
chmod +x cgminer
# Back up the stock binary, then install the replacement
mv /usr/bin/cgminer /usr/bin/cgminer.bak
cp cgminer /usr/bin
/etc/init.d/cgminer.sh restart

Then press enter.
Thank you, worked.

p3yot33at3r
Sr. Member
****
Offline Offline

Activity: 266



View Profile
June 07, 2015, 09:07:59 PM
 #12725

Thank you, worked.

Thank kano  Wink

It's not persistent - you'll have to do the same after every reboot. Also, change your queue setting to 1 or 0 - whatever works best for you. There's a custom firmware mentioned a few pages back that does all this for you & is persistent - might be better for you.
jonnybravo0311
Hero Member
*****
Offline Offline

Activity: 994


Mine at Jonny's Pool


View Profile WWW
June 08, 2015, 02:54:22 PM
 #12726


The correct and direct definition of luck (where >100% is good luck and less than 100% is bad luck) is simply DifficultyExpected/DifficultySubmitted

I'm not sure I understand how you would calculate this; wouldn't the submitted diff for a valid block always be greater than or equal to the expected diff?

We use the average of the stored hashrate since the last block was found by the pool (weighting and averaging the difficulty if it changed since the last block was found) to determine an average expected time to block, then compare that to the actual time for the block in question.

If the times are equal, it's 100%.

If we found it faster, it's > 100%.

If we found it slower, it's < 100%.
He's referring to the actual number of shares submitted vs the expected shares submitted to find a block.  Since p2pool has no real knowledge of any miner's actual hash rate and submitted shares the way ckpool does, the best we could do with p2pool is to evaluate how many share-chain shares we'd expect it to take to find a block vs how many share-chain shares were actually submitted to find it.

The problem is the number of expected shares is constantly changing on p2pool because the share difficulty constantly changes, unlike the BTC network where it's static for 2016 blocks.  The best you're ever going to get is just an approximation of luck using the expected vs actual figures, so I see no real reason to change the calculations you're currently using, since they're providing an approximation as well.
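
A hedged sketch of the share-count approximation described above (function names are hypothetical, and on the real share chain avg_share_difficulty moves constantly, which is exactly the problem being pointed out):

```python
def expected_sharechain_shares(network_difficulty, avg_share_difficulty):
    """Share-chain shares we'd expect before one also meets the
    Bitcoin network difficulty."""
    return network_difficulty / avg_share_difficulty

def share_based_luck(network_difficulty, avg_share_difficulty, actual_shares):
    """Expected shares / actual shares; > 1.0 is good luck."""
    return expected_sharechain_shares(network_difficulty, avg_share_difficulty) / actual_shares
```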

Jonny's Pool - Mine with us and help us grow!  Support a pool that supports Bitcoin, not a hardware manufacturer's pockets!  No SPV cheats.  No empty blocks.
kano
Legendary
*
Offline Offline

Activity: 1918


Linux since 1997 RedHat 4


View Profile
June 08, 2015, 11:20:40 PM
 #12727


The correct and direct definition of luck (where >100% is good luck and less than 100% is bad luck) is simply DifficultyExpected/DifficultySubmitted

I'm not sure I understand how you would calculate this; wouldn't the submitted diff for a valid block always be greater than or equal to the expected diff?

We use the average of the stored hashrate since the last block was found by the pool (weighting and averaging the difficulty if it changed since the last block was found) to determine an average expected time to block, then compare that to the actual time for the block in question.

If the times are equal, it's 100%.

If we found it faster, it's > 100%.

If we found it slower, it's < 100%.
He's referring to the actual number of shares submitted vs the expected shares submitted to find a block.  Since p2pool has no real knowledge of any miner's actual hash rate and submitted shares the way ckpool does, the best we could do with p2pool is to evaluate how many share-chain shares we'd expect it to take to find a block vs how many share-chain shares were actually submitted to find it.

The problem is the number of expected shares is constantly changing on p2pool because the share difficulty constantly changes, unlike the BTC network where it's static for 2016 blocks.  The best you're ever going to get is just an approximation of luck using the expected vs actual figures, so I see no real reason to change the calculations you're currently using, since they're providing an approximation as well.
DifficultyExpected = 47589591153.62500763
It only changes once every 2016 blocks ... so yes, you know what it is.
Each share accepted into the sharechain has a difficulty PoW requirement ... that's why it's accepted.
Sum up the sharechain difficulties (the PoW requirement, not the actual share difficulty of course) to get DifficultySubmitted.
Yeah as I said, you have those numbers.

Those numbers are how p2pool determines the pool hash rate, except it's not very accurate, and you are using that hash rate number to show the luck ...
Look at the pool hash rate and watch it change ... often up to 20% ... all over the place ... it's not very accurate.

The problem on top of all this is if you include the (rare) non-share-chain blocks in your calculation - but don't include the hashes that were used to find those blocks ... so ... your stated luck would be higher than it really is ... hmm, that doesn't sound good ... stating it higher than it really is.
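
kano's DifficultyExpected/DifficultySubmitted definition can be sketched like this (assuming each accepted share-chain share's PoW target difficulty is available; the names are illustrative, not p2pool's actual data model):

```python
def sharechain_luck(network_difficulty, share_pow_difficulties):
    """Luck per kano's definition: DifficultyExpected / DifficultySubmitted,
    where DifficultySubmitted is the sum of the PoW requirement of every
    share accepted into the share chain since the last block found."""
    difficulty_submitted = sum(share_pow_difficulties)
    return network_difficulty / difficulty_submitted
```

Since network difficulty only changes every 2016 blocks, DifficultyExpected is a known constant for any given block, as kano notes.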

Pool: https://kano.is BTC: 1KanoiBupPiZfkwqB7rfLXAzPnoTshAVmb
CKPool and CGMiner developer, IRC FreeNode #ckpool and #cgminer kanoi
Help keep Bitcoin secure by mining on pools with Stratum, the best protocol to mine Bitcoins with ASIC hardware
jonnybravo0311
Hero Member
*****
Offline Offline

Activity: 994


Mine at Jonny's Pool


View Profile WWW
June 09, 2015, 12:11:02 AM
 #12728


The problem on top of all this is if you include the (rare) non-share-chain blocks in your calculation - but don't include the hashes that were used to find those blocks ... so ... your stated luck would be higher than it really is ... hmm, that doesn't sound good ... stating it higher than it really is.
You can't include the share difficulty of the shares that are orphaned/dead that solve blocks, because those shares are never transmitted to the p2pool network.  Furthermore, even standard orphaned/dead shares can't ever be counted, for the same reason - they aren't transmitted.  The only thing you've got is what can be gleaned from the share chain itself, which would only ever be accurate if there were absolutely no orphans or dead shares... which is pretty much guaranteed never to happen :)

In effect, any calculation of luck is ALWAYS going to be higher than actuality because of orphaned/dead shares that never make it onto the share chain.

EDIT: I forgot to mention that the share chain doesn't keep record of all shares, so there is also the possibility that some shares drop off the chain between block finds.  So unless you're recording every share that is submitted (which you certainly should be if you're trying to capture luck) your calculations will be off from there as well.

Jonny's Pool - Mine with us and help us grow!  Support a pool that supports Bitcoin, not a hardware manufacturer's pockets!  No SPV cheats.  No empty blocks.
yslyung
Legendary
*
Offline Offline

Activity: 1050


Mine Mine Mine


View Profile
June 09, 2015, 12:15:29 AM
 #12729


The problem on top of all this is if you include the (rare) non-share-chain blocks in your calculation - but don't include the hashes that were used to find those blocks ... so ... your stated luck would be higher than it really is ... hmm, that doesn't sound good ... stating it higher than it really is.
You can't include the share difficulty of the shares that are orphaned/dead that solve blocks, because those shares are never transmitted to the p2pool network.  Furthermore, even standard orphaned/dead shares can't ever be counted, for the same reason - they aren't transmitted.  The only thing you've got is what can be gleaned from the share chain itself, which would only ever be accurate if there were absolutely no orphans or dead shares... which is pretty much guaranteed never to happen :)

In effect, any calculation of luck is ALWAYS going to be higher than actuality because of orphaned/dead shares that never make it onto the share chain.

minus the DOA "should" be closer to actual luck ?

well even taking away the 20% is still good luck & i hope it continues.

mine on ! 
kano
Legendary
*
Offline Offline

Activity: 1918


Linux since 1997 RedHat 4


View Profile
June 09, 2015, 01:36:40 AM
 #12730


The problem on top of all this is if you include the (rare) non-share-chain blocks in your calculation - but don't include the hashes that were used to find those blocks ... so ... your stated luck would be higher than it really is ... hmm, that doesn't sound good ... stating it higher than it really is.
You can't include the share difficulty of the shares that are orphaned/dead that solve blocks, because those shares are never transmitted to the p2pool network.  Furthermore, even standard orphaned/dead shares can't ever be counted, for the same reason - they aren't transmitted.  The only thing you've got is what can be gleaned from the share chain itself, which would only ever be accurate if there were absolutely no orphans or dead shares... which is pretty much guaranteed never to happen :)

In effect, any calculation of luck is ALWAYS going to be higher than actuality because of orphaned/dead shares that never make it onto the share chain.

minus the DOA "should" be closer to actual luck ?

well even taking away the 20% is still good luck & i hope it continues.

mine on !  
Ignoring the non-share-chain blocks would give you a valid luck value for the share-chain p2pool blocks - using the simple DifficultyExpected/DifficultySubmitted

The only catch, of course, would be knowing whether the non-share-chain blocks have roughly the same expected luck as the normal blocks,
i.e. is there some code/network-related factor that affects their luck differently from the others?
The assumption would probably be no difference.

A very rough estimate of the non-share-chain blocks work would be 95% of all the pool's stale work - since it is that work (and only that work) that produces those blocks.
"95%" since on average, 19 out of 20 share-chain shares are submitted when there isn't a network block change.
... 30s per share = average 20 share changes per block change (on a 0% diff change), but only one of the 20 is a block change.
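
The 95% figure above falls out of the timing arithmetic; a sketch using the nominal 30-second share and 600-second block intervals:

```python
SHARE_INTERVAL = 30    # p2pool share-chain target, seconds
BLOCK_INTERVAL = 600   # Bitcoin block target, seconds

# ~20 share-chain share changes per network block change...
shares_per_block = BLOCK_INTERVAL // SHARE_INTERVAL

# ...but only 1 of those 20 coincides with a block change, so
# 19/20 = 95% of shares (and hence of stale work) occur when
# there isn't a network block change.
off_block_change_fraction = (shares_per_block - 1) / shares_per_block

def non_sharechain_block_work(total_stale_work):
    """Very rough estimate of the work behind non-share-chain blocks:
    ~95% of the pool's total stale work, per kano's reasoning."""
    return off_block_change_fraction * total_stale_work
```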

Pool: https://kano.is BTC: 1KanoiBupPiZfkwqB7rfLXAzPnoTshAVmb
CKPool and CGMiner developer, IRC FreeNode #ckpool and #cgminer kanoi
Help keep Bitcoin secure by mining on pools with Stratum, the best protocol to mine Bitcoins with ASIC hardware
windpath
Legendary
*
Offline Offline

Activity: 938


View Profile WWW
June 09, 2015, 01:48:57 PM
 #12731

I did not implement a way to track non-share-chain blocks; perhaps I will in the future, but the historical data is gone.

I believe it is a much higher % than you think. I'd speculate (and yes it's just speculation) that it is somewhere around 5-7% of found blocks.

I understand how your calculation works now, thank you, however I don't see how it could be applied to p2pool practically.

The share difficulty changes with every share, and often nodes disagree on difficulty (due to propagation times).

While the reported hash rate is only an estimate I believe it's the best number we have to work with, and by using the 1 minute average since the last block was found I think we get about as accurate a picture as possible without storing every single share for the long term.

Storing all the shares would be cool, but just don't see how to do it in a way that is directly query-able without creating an additional enterprise scale DB on top of what we already have.

How do you store submitted work for your pool? Do you plan to keep all the data in a query-able state for all time?

After thinking about this on and off for over a year now, I'm OK with having an ~estimated~ luck that is consistently calculated from the data we have available.

It serves its intended purpose, which is to reflect how the pool is performing overall, over time.

I view having to use imperfect data as a trade-off for getting completely trust-less decentralization, something I value much more than a luck stat that is 100% accurate all the time.

While I do understand your points, I find it ironic for you to criticize P2Pool's infrastructure while running your own closed source centralized pool.

Serious question: What would it take to get you and CK to merge your pool with P2Pool, and focus on its scalability problems?

We could sure use your expertise Smiley

yslyung
Legendary
*
Offline Offline

Activity: 1050


Mine Mine Mine


View Profile
June 09, 2015, 01:57:11 PM
 #12732


While I do understand your points, I find it ironic for you to criticize P2Pool's infrastructure while running your own closed source centralized pool.


+9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999

Well said VERY well said, sending u a tip shortly for your effort for p2pool.

post up an addy.
p3yot33at3r
Sr. Member
****
Offline Offline

Activity: 266



View Profile
June 09, 2015, 02:07:55 PM
 #12733


Serious question: What would it take to get you and CK to merge your pool with P2Pool, and focus on its scalability problems?

We could sure use your expertise Smiley

I've seen posts by others on this thread regarding the scalability thing - to me this is the biggest issue with p2pool.

Has anyone actually conversed with the dev (if he's still around) to see if he has any ideas/solutions?

Was a bounty ever organised with regard to finding a dev who could find a solution?
yslyung
Legendary
*
Offline Offline

Activity: 1050


Mine Mine Mine


View Profile
June 09, 2015, 02:36:06 PM
 #12734


Serious question: What would it take to get you and CK to merge your pool with P2Pool, and focus on its scalability problems?

We could sure use your expertise Smiley

I've seen posts by others on this thread regarding the scalability thing - to me this is the biggest issue with p2pool.

- yes it is & it still is atm sadly. i think it can be scaled but some of those who have the knowledge might not be willing to help or share Huh

Has anyone actually conversed with the dev (if he's still around) to see if he has any ideas/solutions?

- u mean forrestv? he has not been here in a long time; now it's the community that's supporting p2pool

Was a bounty ever organised with regard to finding a dev who could find a solution?

- yes but splashed with cold water ... i'm still a supporter for p2pool & hoping something will happen someday.
jonnybravo0311
Hero Member
*****
Offline Offline

Activity: 994


Mine at Jonny's Pool


View Profile WWW
June 09, 2015, 05:17:47 PM
 #12735


Serious question: What would it take to get you and CK to merge your pool with P2Pool, and focus on its scalability problems?

We could sure use your expertise Smiley

I've seen posts by others on this thread regarding the scalability thing - to me this is the biggest issue with p2pool.

Has anyone actually conversed with the dev (if he's still around) to see if he has any ideas/solutions?

Was a bounty ever organised with regard to finding a dev who could find a solution?
A ton of ideas have been thrown around but none of them have proved to be implementable.  The problem is in the concept of the share chain.  In effect it really is nothing more than a relatively low difficulty coin that you are solo mining.  The solution, which contains things like payout information, gets added to the share chain.  If that share also happens to solve a block of BTC, the block award gets distributed according to the payout information in the share that solves the block.

Because of this construct there's no easily implemented solution to the problem of variance.  OgNasty and Nonnakip have come up with a very interesting approach which puts ckpool on top of the p2pool backbone.  The tradeoff is that by choosing to mine there you are bound by the same constraints as a typical centralized pool.  For example, mining on a typical p2pool node, you can failover to any other p2pool node and none of your work is lost.  Not so with their implementation - no other standard p2pool node has any concept of work you've done because you don't have any individual shares on the chain.  You sacrifice the completely trust-less decentralized nature of p2pool for variance reduction.  It's a nice step forward and I've had a couple S3s pointed to OgNasty's pool (both standard p2pool and NastyPoP) since November of last year.  You can see my long-running thread about it here: https://bitcointalk.org/index.php?topic=891298.0

If there were a viable solution, it could be implemented.

Jonny's Pool - Mine with us and help us grow!  Support a pool that supports Bitcoin, not a hardware manufacturer's pockets!  No SPV cheats.  No empty blocks.
minerpool-de
Hero Member
*****
Offline Offline

Activity: 686


View Profile
June 09, 2015, 06:49:05 PM
 #12736

Somehow I think P2Pool is dying out. The network performance is ridiculous; the last visitors to my node passed by weeks ago. I think I'll shut my node down.

p3yot33at3r
Sr. Member
****
Offline Offline

Activity: 266



View Profile
June 09, 2015, 07:00:02 PM
 #12737

Somehow I think P2Pool is dying out. The network performance is ridiculous; the last visitors to my node passed by weeks ago. I think I'll shut my node down.

 Huh

Don't you use your own node then?

The last couple of weeks have been excellent for p2pool - way above average.
yslyung
Legendary
*
Offline Offline

Activity: 1050


Mine Mine Mine


View Profile
June 09, 2015, 07:40:24 PM
 #12738

Somehow I think P2Pool is dying out. The network performance is ridiculous; the last visitors to my node passed by weeks ago. I think I'll shut my node down.

 Huh

Don't you use your own node then?

The last couple of weeks have been excellent for p2pool - way above average.

math is hard . . .
chalkboard17
Sr. Member
****
Offline Offline

Activity: 271


View Profile
June 09, 2015, 08:08:46 PM
 #12739

I'm facing a problem.
Every so often board 1 stops hashing for a few minutes, decreasing my overall long-term hashrate from ~1385 GH/s to ~1350 GH/s - an over 2.5% loss that would force me to use a centralized pool.

This is happening as I write this post.
http://prntscr.com/7f3hc8
http://prntscr.com/7f2vt3
http://prntscr.com/7f2vmw
The 4 drops in the day chart are all because of this problem.

I never had this problem before.
I had to do this in order to avoid having a hashrate below 1250 GH/s.
I have a stable internet connection and a good computer. I don't believe it's my node, or else it would switch pools or make both boards stop.
I am running bitcoin-xt 0.10.2 and board 1 always runs 8°C hotter than #2, but neither of them has ever reached 60°C and they run in a cool place.

My diff1 and diffa were always very close, but when I make the cited software modification diff1 doubles and diffa decreases a little. Is that OK?
Any input is greatly appreciated, and I hope I can fix this as I really want to run my own node and help decentralization.
Thank you.

-ck
Moderator
Legendary
*
Offline Offline

Activity: 1988


Ruu \o/


View Profile WWW
June 10, 2015, 12:44:13 AM
 #12740

Serious question: What would it take to get you and CK to merge your pool with P2Pool, and focus on its scalability problems?
We've both said numerous times, there is no valid solution to its most significant scalability problems. Pooled mining is a solution to the variance of solo mining - the more miners there are the less the variance. p2pool is the anti-solution to pooled mining - above a certain size, the more miners there are, the more variance there is; it is just solo mining on a smaller scale. It doesn't matter how much money you throw at us, we can't solve this problem in its current form. Rewriting the p2pool software in a scalable language like C only makes each node able to handle more miners - and that's not really important since the idea is each miner runs their own node for their own hardware. It does not solve the design issue.
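
-ck's variance point can be made concrete with a sketch (the nominal 30 s share target; the numbers are illustrative): the share chain behaves like a low-difficulty coin that each miner solo mines, so a small miner's average wait between share-chain shares, and hence their payout variance, grows with the inverse of their share of the pool's hashrate.

```python
SHARE_INTERVAL = 30  # p2pool share-chain target, seconds

def mean_seconds_between_shares(miner_hashrate, pool_hashrate):
    """Average wait for one share-chain share for a given miner."""
    return SHARE_INTERVAL * pool_hashrate / miner_hashrate
```

For example, a miner with 0.1% of the pool's hashrate waits on average 30,000 seconds (about 8.3 hours) per share - solo-mining variance in miniature.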

While I do understand your points, I find it ironic for you to criticize P2Pool's infrastructure while running your own closed source centralized pool.

Kano's ckpool runs ckpool software which is absolutely all free and open code. Centralised it may be, but closed it most definitely is not.

Primary developer/maintainer for cgminer and ckpool/ckproxy.
Pooled mine at kano.is, solo mine at solo.ckpool.org
-ck