Bitcoin Forum
Author Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool  (Read 2591881 times)
ancow
Full Member
***
Offline Offline

Activity: 373
Merit: 100


View Profile WWW
February 21, 2012, 12:52:36 AM
 #901

OK, if cgminer isn't getting work, you should do a debug run (enable the -T, -P, -D and --verbose parameters in addition to your normal parameters), let it run for a few minutes, pastebin the complete output and head over to the cgminer thread. Perhaps someone there will be able to tell what is going wrong.

BTC: 1GAHTMdBN4Yw3PU66sAmUBKSXy2qaq2SF4
Krak
Hero Member
*****
Offline Offline

Activity: 591
Merit: 500



View Profile WWW
February 21, 2012, 04:41:44 AM
 #902

Just started having this pop up on me today. Dunno why, I didn't change anything and it was working fine before.

Code:
2012-02-20 23:37:03.723000 
2012-02-20 23:37:03.723000 Error when requesting noncached value:
2012-02-20 23:37:03.724000 > Traceback (most recent call last):
2012-02-20 23:37:03.724000 >   File "twisted\internet\defer.pyc", line 388, in errback
2012-02-20 23:37:03.724000 >     
2012-02-20 23:37:03.724000 >   File "twisted\internet\defer.pyc", line 455, in _startRunCallbacks
2012-02-20 23:37:03.725000 >     
2012-02-20 23:37:03.725000 >   File "twisted\internet\defer.pyc", line 542, in _runCallbacks
2012-02-20 23:37:03.725000 >     
2012-02-20 23:37:03.725000 >   File "twisted\internet\defer.pyc", line 1076, in gotResult
2012-02-20 23:37:03.725000 >     
2012-02-20 23:37:03.725000 > --- <exception caught here> ---
2012-02-20 23:37:03.725000 >   File "twisted\internet\defer.pyc", line 1018, in _inlineCallbacks
2012-02-20 23:37:03.726000 >     
2012-02-20 23:37:03.726000 >   File "twisted\python\failure.pyc", line 350, in throwExceptionIntoGenerator
2012-02-20 23:37:03.726000 >     
2012-02-20 23:37:03.726000 >   File "p2pool\main.pyc", line 178, in <lambda>
2012-02-20 23:37:03.726000 >     
2012-02-20 23:37:03.726000 >   File "twisted\internet\defer.pyc", line 1018, in _inlineCallbacks
2012-02-20 23:37:03.726000 >     
2012-02-20 23:37:03.727000 >   File "twisted\python\failure.pyc", line 350, in throwExceptionIntoGenerator
2012-02-20 23:37:03.727000 >     
2012-02-20 23:37:03.727000 >   File "p2pool\util\jsonrpc.pyc", line 67, in callRemote
2012-02-20 23:37:03.727000 >     
2012-02-20 23:37:03.727000 > p2pool.util.jsonrpc.Error: -5 Block not found

BTC: 1KrakenLFEFg33A4f6xpwgv3UUoxrLPuGn
twmz
Hero Member
*****
Offline Offline

Activity: 737
Merit: 500



View Profile
February 21, 2012, 04:53:28 AM
 #903


The 2.0 version of my stats page is now live.  Here are the changes:

  • It is now running on a healthy web server (btcstats.net was acting funny) and I gave it its own domain.
  • The backend is now based on bitcoind directly (with a patch to get full block details) instead of being dependent on blockchain.info and blockexplorer.com.  Eventually, I will add those two back in as transparent fallback options, though.
  • The issue with the active user count being too low is fixed.  Active users now include anyone who submitted a valid share in the past 24 hours instead of just the past 2 hours.
  • It now displays the current payouts.  You can star your own address so that it is easier to find in the list.
  • It now displays a list of the active users and their hashrates.  Note, hashrates are very iffy as they are based on the shares each user has found over the past 24 hours.  The jury is still out on whether these estimates are reasonably close to accurate.  I spot-checked a few and they seemed to be within 10%, though.
  • Orphaned blocks are now shown greyed out, if the site knows about them.  Sometimes an orphaned share never makes it to my site, so they may not all show up.  But if they do, they are now indicated as orphaned.
  • You can now disable the audio alert when a block is found (see Settings in the upper right).  Note that this setting is per browser.

I just pushed the button to switch over to the new site, so I'm considering it beta for the next 24 hours.  It has been working well for me, but I rewrote a lot of it in the last couple of days, so I may have broken something that used to work.  Let me know if you see issues.

http://p2pool.info


Was I helpful?  1TwmzX1wBxNF2qtAJRhdKmi2WyLZ5VHRs
WoT, GPG

Bitrated user: ewal.
kano
Legendary
*
Offline Offline

Activity: 4620
Merit: 1851


Linux since 1997 RedHat 4


View Profile
February 21, 2012, 10:47:44 AM
 #904

Hmm the highest is only 508.9%
Not bad - haven't hit a really long one yet.

Pool: https://kano.is - low 0.5% fee PPLNS 3 Days - Most reliable Solo with ONLY 0.5% fee   Bitcointalk thread: Forum
Discord support invite at https://kano.is/ Majority developer of the ckpool code - k for kano
The ONLY active original developer of cgminer. Original master git: https://github.com/kanoi/cgminer
Ed
Member
**
Offline Offline

Activity: 69
Merit: 10


View Profile
February 21, 2012, 11:21:21 AM
 #905


The 2.0 version .......

http://p2pool.info

it is brilliant!
IMHO
btc_artist
Full Member
***
Offline Offline

Activity: 154
Merit: 102

Bitcoin!


View Profile WWW
February 21, 2012, 02:50:18 PM
 #906

The limitation of Bitcoin is that the block chain is only aware of the total hashing power, not individual miners, and thus can only adjust accordingly. The P2Pool protocol chain is short and easy to change, and each instance of P2Pool is aware of both the pool's hashing power and its own local hashing power.
Would it be possible to change the algorithm from one that adjusts difficulty to make a pool block every ten seconds based on overall pool hashing power, to one that bases it on the fraction of your hashing power compared to the overall pool? Have the difficulty start out at the average and, as you mine, recalculate your local difficulty every thirty minutes based on reported hashing power, so that strong miners get increased difficulty and fewer shares and weak miners get more?
Or is this too difficult because all blocks in the chain need to be the same, or too risky because it could be easily hacked?

Currently shares are all the same because (your payout) = (your shares) / (total last n shares).

While you could make shares variable in difficulty, so that (your payout) = (sum of your shares' difficulty) / (sum of the last n shares' difficulty), that doesn't get around the orphan problem.
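Spelled out as a minimal sketch (illustrative helper names, not actual p2pool code), the two payout rules look like this:

```python
# Hypothetical sketch of the two payout rules above; names are illustrative.

def payout_equal(my_shares, total_shares, block_reward):
    # Equal-difficulty shares: (your payout) = (your shares) / (total last n shares)
    return block_reward * my_shares / total_shares

def payout_weighted(my_share_diffs, all_share_diffs, block_reward):
    # Variable-difficulty shares: weight each share by its difficulty.
    return block_reward * sum(my_share_diffs) / sum(all_share_diffs)

# With 3 of the last 10 equal shares you get 30% of the reward; the
# difficulty-weighted rule reduces to the same split when all
# difficulties are equal.
```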

Bitcoin rarely has orphaned blocks because the round time is ~600 seconds.  The shorter the round time, the more likely two entities on the network find a solution at roughly the same time and one of them gets orphaned.  P2pool compromises between share difficulty and orphan rate by using a 10 second round time: it sets difficulty so that someone will find a share roughly every 10 seconds, and hopefully most of the time that "solution" can be shared with everyone else in time to avoid duplicated work.

So to avoid a higher orphan rate, you still need the average share time to be ~10 seconds.  You could, within reason, allow smaller miners to use lower difficulty and larger miners higher difficulty, but the average must still work out to ~1 share per 10 seconds.
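As a back-of-the-envelope sketch (assuming the usual convention that difficulty 1 averages about 2^32 hashes per solution; function name is illustrative), the share difficulty needed for a given pool speed and share time is:

```python
# Sketch: pick a share difficulty so the whole pool finds ~1 share per
# target interval. Assumes difficulty 1 ~= 2**32 hashes on average.

HASHES_PER_DIFF1 = 2 ** 32

def share_difficulty(pool_hashrate_hs, share_time_s=10):
    # Hashes the pool does in one interval, expressed in difficulty units.
    return pool_hashrate_hs * share_time_s / HASHES_PER_DIFF1

# e.g. a 200 GH/s pool with a 10 second share time needs difficulty ~466.
```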

So that solution has two problems:
a) the amount share difficulty can vary is not much, and if most miners are small it is very little at all.
b) larger miners would be accepting higher variance in order to give smaller miners lower variance.  Something for nothing.  It's unlikely they will do that.


The way I see it there are four decentralized solutions:  multiple p2pools, merged share chain, dynamic p2pools, sub-p2pools.

multiple p2pools.
The simplest solution is to simply start a second p2pool.  There is no reason only one has to exist.  Take the source code, modify it so the alternate p2pool can be identified, and start one node.  Nodes can join using the modified client.  Eventually the client could be modified to let the user indicate which instance of the network to join, or even scan all instances and give the user the option.  If the two pools get too large, they too could be split.  The disadvantage is that each split requires migration, and that requires people to look out for the good of the network.  For example, 3 p2pools with 10 GH/s, 20 GH/s, and 2.87 TH/s isn't exactly viable.

--------------------------------------

merged share chain
In Bitcoin there can only be one block which links to the prior block; the reason is that this prevents double spends.  Double spends aren't as much of a problem in p2pool.  Sure, one needs to ensure that workers don't get duplicate credit, but that can be solved without a strict "only one" block chain.  Modifying the protocol to allow multiple branches at one level would seem to be possible.  Since this would allow orphans to be counted (within reason), it would be possible to reduce the share time.  For example, a 1 TH/s p2pool with a 2 second share time would have no higher difficulty than a 200 GH/s p2pool with a 10 second share time.  There likely are "gotchas" which need to be resolved, but I believe a share chain which allows "merging" is possible.

--------------------------------------

dynamic p2pool.
Building upon the idea of multiple p2pools, the protocol could be expanded so that a new node queries a p2pool information net and gets the status of existing p2pools.  The network assigns new nodes where they best optimize the balance of the network.  If the protocol enforces this pool assignment, then there is no human gaming involved and the pools will stay relatively balanced.  As pools grow or shrink, they can be split or combined with other pools by the protocol.  Some simulation would be needed to find the optimal balance between share variance and block variance.  The network could even intentionally allow variance in pool size and share time: larger pools with high difficulty and long share times to accommodate very large miners, and smaller pools with lower difficulty to provide the optimal solution for smaller individual miners.

--------------------------------------

sub p2pools
Imagine p2pool forming a "backbone"; for maximum efficiency the share time would be longer, say 1 share per 60 seconds instead of 10 (difficulty goes up by a factor of 6).  At 1 TH/s that is ~12,000 difficulty (which is high, but not as high as the block difficulty of 1.3 million).  Due to the 12K+ difficulty, the only participants on this backbone would be a) major hashing farms, b) conventional pools, and c) sub p2pools.
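The ~12,000 figure can be sanity-checked with the same difficulty-1 ≈ 2^32-hashes convention; treat this as an order-of-magnitude check, since the exact number depends on rounding:

```python
# Order-of-magnitude check of the backbone difficulty quoted above.
hashes_per_diff1 = 2 ** 32
backbone_diff = 1e12 * 60 / hashes_per_diff1  # 1 TH/s, 60 s share time
# backbone_diff comes out around 1.4e4 -- the same ballpark as the
# ~12,000 quoted, and far below the ~1.3 million block difficulty
# of the time.
```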

You'll notice I said conventional pools.  Conventional pools which submit valid shares to p2pool are less of a security risk to Bitcoin than opaque proprietary pools. 

Smaller miners who want a fully decentralized solution could form/join "sub-p2pools".  These pools could be tuned for different speed miners to provide an optimal balance between block difficulty and share difficulty.  They would maintain a sub-p2pool level share chain and use it to set the reward breakout for the subpool.  When a node in the subpool solves a "master p2pool" difficulty share (12K in the above example), it submits it to the main pool, which updates the ultimate reward split to include the subpool's current split for that share.  Subpools could be created manually (a Rassah small-miner subpool), or eventually dynamically by a protocol similar to the second solution.  This requires a modest change to the existing p2pool (which would form the backbone): currently 1 share can only be assigned to 1 address; to make sub-p2pools possible, it would need to be possible to include an address list and distribution % for 1 share. 

--------------------------------------


Note: these ideas aren't fleshed out.  Likely someone can point out issues and areas where the explanation is incomplete.  They are designed more as a thought exercise, to look at potential avenues for expanding p2pool to someday handle 51% of network hashing power (at which point an internal 51% attack becomes impossible).  Obviously these are complex ideas which will take time to implement.  I believe that "front ends" are preferable to small miners going back to deepbit, and could act as a bridge to transition p2pool from 250 GH/s to 1 TH/s+ while more decentralized solutions are developed.
forrestv, are you considering acting on any of these ideas?  What are your current thoughts on this?

BTC: 1CDCLDBHbAzHyYUkk1wYHPYmrtDZNhk8zf
LTC: LMS7SqZJnqzxo76iDSEua33WCyYZdjaQoE
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
February 21, 2012, 02:55:36 PM
 #907

Not sure if this has been covered but I have found that using cgminer the optimal settings (at least for 5970s) on p2pool differs from normal pools.

Regular Pool
queue: 2
threads per GPU: 2
intensity: 9

P2pool
queue: 1
threads per GPU: 1
intensity: 8

This seems to have cut my stales and orphans significantly.   Anyone else experience the same?  

I think cgminer using deep queue and multiple threads conflicts with the ultra short LP time used by p2pool.

Thoughts?
kano
Legendary
*
Offline Offline

Activity: 4620
Merit: 1851


Linux since 1997 RedHat 4


View Profile
February 21, 2012, 03:00:27 PM
 #908

Not sure if this has been covered but I have found that using cgminer the optimal settings (at least for 5970s) on p2pool differs from normal pools.
.......
Thoughts?

See, you may not be able to hop P2Pool, but you can get more than the expected BTC by using cgminer and lowering your rejects below the 9% that everyone else gets.

Hmm, time to add an anti-cgminer pool calculation, coz they can get more than their fair share
[/sarcasm]

You see, saying that 9% is OK is like telling someone it's OK to mine on a prop pool ... now who was it who said 9% was OK?

Rassah
Legendary
*
Offline Offline

Activity: 1680
Merit: 1035



View Profile WWW
February 21, 2012, 04:40:11 PM
 #909

Not sure if this has been covered but I have found that using cgminer the optimal settings (at least for 5970s) on p2pool differs from normal pools.
.......
Thoughts?


On my 5830, lowering my intensity from 9 to 8 alone dropped my rejects from 9% to 0 :P
JayCoin
Sr. Member
****
Offline Offline

Activity: 409
Merit: 251


Crypt'n Since 2011


View Profile WWW
February 21, 2012, 09:08:43 PM
 #910

I have multiple miners mining at one p2pool server. Each miner has its own bitcoin address.

How is the payout split to each bitcoin address?

Does each miner need to find blocks added to the share chain, or is the payout split to the addresses based on work submitted to my p2pool server?

Thanks

Hello There!
Red Emerald
Hero Member
*****
Offline Offline

Activity: 742
Merit: 500



View Profile WWW
February 21, 2012, 09:36:17 PM
 #911

I have multiple miners mining at one p2pool server. Each miner has its own bitcoin address.
.......
Thanks
The workers are paid for the work they do.  Pretty simple.

Any particular reason you are mining to different addresses?  I have 2 p2pool servers (one primary, one backup) and they both mine to the same address.  My miners just have a descriptive username instead of a payment address.

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
February 21, 2012, 09:36:47 PM
 #912

I have multiple miners mining at one p2pool server. Each miner has its own bitcoin address.
.......
Thanks

You are only paid for shares submitted to the share chain (that are still valid when a block is found).  Each address is paid separately depending on how many "full difficulty" shares are in the chain at the time a block is found.  You could simply use the same address for all miners.
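A minimal sketch of that split (illustrative names, not p2pool's actual data structures): each payout address is credited in proportion to its shares in the chain when a block is found.

```python
# Hypothetical sketch of the per-address payout split described above.

def split_reward(shares_by_address, block_reward):
    # Credit each address in proportion to its shares in the chain.
    total = sum(shares_by_address.values())
    return {addr: block_reward * n / total
            for addr, n in shares_by_address.items()}

# Example: two workers mining to different addresses split a reward 3:1.
```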
Rassah
Legendary
*
Offline Offline

Activity: 1680
Merit: 1035



View Profile WWW
February 21, 2012, 10:46:25 PM
 #913

I have multiple miners mining at one p2pool server. Each miner has its own bitcoin address.
.......
Thanks

Each miner has its own address? I thought addresses were at the P2Pool level, not the miner level, so all those miners would only contribute to the single P2Pool instance getting paid to its address?
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
February 21, 2012, 10:50:19 PM
 #914

I have multiple miners mining at one p2pool server. Each miner has its own bitcoin address.
.......
Thanks

Each miner has its own address? I thought addresses were at P2Pool level, not miner level, so all those miners will only contribute to the single P2Pool instance getting paid to its address?

You can use worker-level addresses.  It is useful for someone running a public p2pool.  The address in the username is where credit is given.
Red Emerald
Hero Member
*****
Offline Offline

Activity: 742
Merit: 500



View Profile WWW
February 22, 2012, 04:48:20 AM
 #915

The limitation of Bitcoin is that the block chain is only aware of the total hashing power, not individual miners, and thus can only adjust accordingly.
.......
forrestv, are you considering acting on any of these ideas?
This code is probably worthy of a bounty

kano
Legendary
*
Offline Offline

Activity: 4620
Merit: 1851


Linux since 1997 RedHat 4


View Profile
February 22, 2012, 05:36:17 AM
 #916

Hmm the highest is only 508.9%
Not bad - haven't hit a really long one yet.
... and also to better understand the meaning of that:

14:50:53 20-Feb-2012 UTC
DeepBit: block 167671 (share count)/difficulty = 829% ...

Aion2n
Hero Member
*****
Offline Offline

Activity: 700
Merit: 503



View Profile
February 22, 2012, 05:50:11 AM
 #917

Why is there always such a difference between the statistics and the client?
kjj
Legendary
*
Offline Offline

Activity: 1302
Merit: 1026



View Profile
February 22, 2012, 06:03:48 AM
 #918

Why is there always such a difference between the statistics and the client?

Because what you see as your hashing rate is an estimate based on a random process.

Any time you see a hash rate other than directly from your mining software, you need to read it as "the amount of work that another person would have to do to have a good chance of duplicating my work", not as "the amount of work I've done".
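A rough sketch of why those displayed rates are so noisy (assuming difficulty 1 ≈ 2^32 hashes; the function name is illustrative): the pool can only infer your rate from a count of shares found, which is a Poisson process.

```python
# Sketch: the pool infers your hash rate from the shares you found in a
# time window -- a Poisson count, hence the noise in the estimate.

HASHES_PER_DIFF1 = 2 ** 32

def estimated_hashrate(shares_found, share_difficulty, window_s):
    # Work implied by the shares, spread evenly over the window.
    return shares_found * share_difficulty * HASHES_PER_DIFF1 / window_s

# The relative error shrinks like 1/sqrt(shares_found), so short windows
# or few shares make the displayed rate jump around.
```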

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
forrestv (OP)
Hero Member
*****
Offline Offline

Activity: 516
Merit: 643


View Profile
February 22, 2012, 06:12:46 AM
 #919

forrestv, are you considering acting on any of these ideas?  What are you current thoughts on this?

I've thought about it quite a bit :) I plan to start a second P2Pool once P2Pool reaches about 400 GH/s, because only then will we have enough power to make splitting into two okay. The upcoming protocol change lets new P2Pools be created safely.

Any method of dynamically creating P2Pools runs the risk of hurting miners, because a pool can't simply be terminated if the hash rate drops: the last day of shares that were mined would no longer be built on top of and wouldn't get their fair reward.

I intend to move towards the high-difficulty p2pool backbone idea eventually, but that will obviously require a lot of thought and changes.

1J1zegkNSbwX4smvTdoHSanUfwvXFeuV23
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
February 22, 2012, 01:40:46 PM
 #920

I intend to move towards the high-difficulty p2pool backbone idea eventually, but that will obviously require a lot of thought and changes.

Excellent.  I doubt I can help with the coding, but if you need help nailing down the details of the implementation I would be glad to participate in any technical discussion or go over any whitepapers or design docs.

It is the most ambitious goal, but a long-share-time, high-difficulty backbone would be immensely valuable.

* Would allow creating an arbitrary number of p2pools.
* Would allow conventional pools to connect, providing a third-party method to verify a pool is legit.
* Would allow concepts like a distributed PPS pool (payment would be semi-centralized, meaning the operator could cheat, but it would be immediately obvious).