ancow
|
|
February 21, 2012, 12:52:36 AM |
|
OK, if cgminer isn't getting work, you should do a debug run (enable the -T, -P, -D and --verbose parameters in addition to your normal parameters), let it run for a few minutes, pastebin the complete output and head over to the cgminer thread. Perhaps someone there will be able to tell what is going wrong.
|
BTC: 1GAHTMdBN4Yw3PU66sAmUBKSXy2qaq2SF4
|
|
|
Krak
|
|
February 21, 2012, 04:41:44 AM |
|
Just started having this pop up on me today. Dunno why, I didn't change anything and it was working fine before.
2012-02-20 23:37:03 Error when requesting noncached value:
> Traceback (most recent call last):
>   File "twisted\internet\defer.pyc", line 388, in errback
>   File "twisted\internet\defer.pyc", line 455, in _startRunCallbacks
>   File "twisted\internet\defer.pyc", line 542, in _runCallbacks
>   File "twisted\internet\defer.pyc", line 1076, in gotResult
> --- <exception caught here> ---
>   File "twisted\internet\defer.pyc", line 1018, in _inlineCallbacks
>   File "twisted\python\failure.pyc", line 350, in throwExceptionIntoGenerator
>   File "p2pool\main.pyc", line 178, in <lambda>
>   File "twisted\internet\defer.pyc", line 1018, in _inlineCallbacks
>   File "twisted\python\failure.pyc", line 350, in throwExceptionIntoGenerator
>   File "p2pool\util\jsonrpc.pyc", line 67, in callRemote
> p2pool.util.jsonrpc.Error: -5 Block not found
|
BTC: 1KrakenLFEFg33A4f6xpwgv3UUoxrLPuGn
|
|
|
twmz
|
|
February 21, 2012, 04:53:28 AM |
|
The 2.0 version of my stats page is now live. Here are the changes: - It is now running on a healthy web server (btcstats.net was acting funny) and I gave it its own domain.
- The backend is now based on bitcoind directly (with a patch to get full block details) instead of being dependent on blockchain.info and blockexplorer.com. Eventually, I will add those two back in as transparent fallback options, though.
- The issue with active user count being too low is fixed. Active users now includes anyone who submitted a valid share in the past 24 hours instead of just in the past 2 hours.
- It now displays the current payouts. You can star your own address so that it is easier to find in the list.
- It now displays a list of the active users and their hashrates. Note that the hashrates are very iffy, as they are based on the shares each user has found over the past 24 hours. The jury is still out on whether these estimates are reasonably close to accurate; the few I spot-checked seemed to be within 10%, though.
- Orphaned blocks are now shown greyed out, if the site knows about them. Sometimes an orphaned share will never make it to my site, so they may not all show up. But if they do, they are now indicated as orphaned.
- You can now disable the audio alert when a block is found (see Settings in the upper right). Note that this setting is per browser.
I just pushed the button to switch over to the new site and so I'm considering it beta for the next 24 hours. It has been working well for me, but I did rewrite a lot of it in the last couple days so I may have broken something that used to be working. Let me know if you see issues. http://p2pool.info
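The per-user hashrate estimate described above can be sketched in a few lines. This is not the site's actual code, just the standard shares-to-work conversion; the share count and difficulty below are hypothetical:

```python
# A sketch (not p2pool.info's real code) of estimating a miner's hashrate
# from the shares they found over a 24-hour window. Numbers are made up.

def estimate_hashrate(shares_found, share_difficulty, window_seconds=24 * 3600):
    # A share at difficulty D takes ~D * 2**32 hashes on average to find,
    # so total expected work is shares * D * 2**32.
    return shares_found * share_difficulty * 2**32 / window_seconds

# e.g. 25 shares at share difficulty 320 over the past 24 hours:
rate = estimate_hashrate(25, 320)
print(f"{rate / 1e6:.1f} MH/s")  # ~397.7 MH/s
```

With only a couple dozen shares in the window, the estimate swings a lot from day to day, which matches the "iffy" caveat above.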
|
Was I helpful? 1 TwmzX1wBxNF2qtAJRhdKmi2WyLZ5VHRs WoT, GPGBitrated user: ewal.
|
|
|
kano
Legendary
Offline
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
|
|
February 21, 2012, 10:47:44 AM |
|
Hmm, the highest is only 508.9%. Not bad - haven't hit a really long one yet.
|
|
|
|
Ed
Member
Offline
Activity: 69
Merit: 10
|
|
February 21, 2012, 11:21:21 AM |
|
|
|
|
|
btc_artist
Full Member
Offline
Activity: 154
Merit: 102
Bitcoin!
|
|
February 21, 2012, 02:50:18 PM |
|
The limitation of Bitcoin is that the block chain is only aware of the total hashing power, not individual miners, and thus can only adjust accordingly. The P2Pool share chain, by contrast, is short and easy to change, and each instance of P2Pool is aware of both the pool's hashing power and its own local hashing power. Would it be possible to change the algorithm from one that adjusts difficulty to produce a pool share every ten seconds based on overall pool hashing power, to one that bases it on the fraction your hashing power represents of the overall pool? Have the difficulty start out at the average, and as you mine, recalculate your local difficulty every thirty minutes based on reported hashing power, so that strong miners get higher difficulty and fewer shares and weak miners get more? Or is this too difficult due to all blocks in the chain needing to be the same, or too risky due to being easily hacked?
Currently shares are all the same because (your payout) = (your shares) / (total last n shares). While you could make shares variable in difficulty and make it (your payout) = (sum of your share difficulties) / (sum of the last n share difficulties), it doesn't get around the orphan problem. Bitcoin rarely has orphaned blocks because the round time is ~600 seconds. The shorter the round time, the more likely two entities on the network find a solution at roughly the same time and one of them gets orphaned. P2pool compromises between share difficulty & orphan rate by using a 10 second round time. It sets difficulty so someone will find a share roughly every 10 seconds, and hopefully most of the time that "solution" can be propagated to everyone else in time to avoid duplicated work. So to avoid a higher orphan rate you still need the average share time to be ~10 seconds. You could, within reason, allow smaller miners to use lower difficulty and larger miners higher difficulty, but the average must still work out to ~1 share per 10 seconds. So that solution has two problems: a) the amount share difficulty can vary is not much, and if most miners are small it is very little at all; b) larger miners would be accepting higher variance in order to give smaller miners lower variance. Something for nothing. Unlikely they will do that.
The way I see it there are four decentralized solutions: multiple p2pools, merged share chain, dynamic p2pools, sub-p2pools.
--------------------------------------
multiple p2pools
The simplest solution is to simply start a second p2pool. There is no reason only one has to exist. Take the source code, modify it so the alternate p2pool can be identified, and start one node. Nodes can join using the modified client. Eventually the client could be modified to let the user indicate which instance of the network to join, or even scan all instances and give the user the option. If the two pools get too large, they too could be split.
The disadvantage is that each split requires migration, and that requires people to look out for the good of the network. For example, 3 p2pools with 10GH/s, 20GH/s, and 2.87TH/s isn't exactly viable.
--------------------------------------
merged share chain
In Bitcoin there can only be "one block" which links to the prior block; this is what prevents double spends. Double spends aren't as much of a problem in p2pool. Sure, one needs to ensure that workers don't get duplicate credit, but that can be solved without a strict "only one" block-chain. Modifying the protocol to allow multiple branches at one level would seem to be possible. Since this would allow orphans to be counted (within reason), it would be possible to reduce the share time. For example, a 1 TH/s p2pool with a 2 second share time would have no higher difficulty than a 200 GH/s p2pool with a 10 second share time. There likely are "gotchas" which need to be resolved, but I believe a share chain which allows "merging" is possible.
--------------------------------------
dynamic p2pools
Building upon the idea of multiple p2pools, the protocol could be expanded so that a new node queries a p2pool information net and gets the status of existing p2pools. The network assigns new nodes where they best optimize the balance of the network. If the protocol enforces this pool assignment, then there is no human gaming involved and the pools will be relatively balanced. As pools grow or shrink, they can be split or combined with other pools by the protocol. Some simulation would be needed to find the optimal balance between share variance and block variance. The network could even intentionally allow variance in pool size and share time: larger pools with high difficulty and long share times to accommodate very large miners, and smaller pools with lower difficulty to provide an optimal solution for smaller individual miners.
--------------------------------------
sub-p2pools
Imagine the p2pool forming a "backbone"; for max efficiency the share time would be longer. Say 1 share per 60 seconds instead of 10 (difficulty goes up by a factor of 6). At 1TH/s that is ~12,000 difficulty (which is high, but not as high as the block difficulty of 1.3 million). Due to the 12K+ difficulty, the only participants on this backbone would be a) major hashing farms, b) conventional pools, and c) sub-p2pools. You'll notice I said conventional pools: conventional pools which submit valid shares to p2pool are less of a security risk to Bitcoin than opaque proprietary pools. Smaller miners who want a fully decentralized solution could form/join "sub-p2pools". These pools could be tuned for different speed miners to provide an optimal balance between block difficulty and share difficulty. They would maintain a sub-p2pool level share chain and use that to set the reward breakout for the subpool. When one node in the subpool solves a "master p2pool" difficulty share (12K in the above example), it submits it to the main pool (which updates the ultimate reward split to include the subpool's current split for that share). Subpools could be created manually (a "Rassah small-miner subpool", say), or eventually dynamically, by a protocol similar to the second solution. This requires a modest change to the existing p2pool (which would form the backbone): currently 1 share can only be assigned to 1 address; to make sub-p2pools possible, it would need to be possible to include an address list and distribution % for 1 share.
--------------------------------------
Note: these ideas aren't fleshed out. Likely someone can point out issues and areas where the explanation is incomplete. They are designed more as a thought exercise to look at potential avenues for expanding p2pool to handle, potentially, someday, 51% of network hashing power (at which point an internal 51% attack becomes impossible).
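The backbone arithmetic follows from the fact that a share at difficulty D takes about D × 2^32 hashes to find, so the difficulty yielding one share per T seconds at pool hashrate H is roughly H × T / 2^32. A quick back-of-the-envelope check (a sketch, not p2pool code) lands in the same ballpark as the ~12,000 quoted:

```python
# Rough check of the backbone share-difficulty figures discussed above.
# Expected hashes per share at difficulty D is D * 2**32, so one share
# per T seconds at hashrate H requires difficulty ~ H * T / 2**32.

def share_difficulty(pool_hashrate, share_time_seconds):
    return pool_hashrate * share_time_seconds / 2**32

d60 = share_difficulty(1e12, 60)  # 1 TH/s backbone with 60 s shares
d10 = share_difficulty(1e12, 10)  # 1 TH/s with the current 10 s shares
print(round(d60), round(d10))     # ~13970 vs ~2328
```

The exact figure depends on the assumed pool hashrate, which is presumably why the post rounds it to ~12,000.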
Obviously these are complex ideas which will take time to implement. I believe that "front ends" are preferable to small miners going back to Deepbit and could act as a bridge to transition p2pool from 250GH/s to 1TH/s+ while more decentralized solutions are developed. forrestv, are you considering acting on any of these ideas? What are your current thoughts on this?
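The two payout formulas quoted at the top of this post can be made concrete with a short sketch (all share data here is invented, and the 50 BTC block reward is the current subsidy):

```python
# Sketch of the two payout schemes discussed above, with made-up share data.

def payout_equal(my_shares, all_shares, block_reward=50.0):
    """Current scheme: every share in the window weighs the same."""
    return block_reward * my_shares / all_shares

def payout_weighted(my_share_diffs, all_share_diffs, block_reward=50.0):
    """Variable-difficulty scheme: weight each share by its difficulty."""
    return block_reward * sum(my_share_diffs) / sum(all_share_diffs)

# 10 difficulty-1 shares out of a 100-share window:
print(payout_equal(10, 100))                       # 5.0 BTC
# The same work expressed as 2 shares at difficulty 5, same total window:
print(payout_weighted([5, 5], [5, 5] + [1] * 90))  # 5.0 BTC
```

The expected payout is identical either way; as the post says, the scheme changes only the variance, not the orphan problem.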
|
BTC: 1CDCLDBHbAzHyYUkk1wYHPYmrtDZNhk8zf LTC: LMS7SqZJnqzxo76iDSEua33WCyYZdjaQoE
|
|
|
DeathAndTaxes
Donator
Legendary
Offline
Activity: 1218
Merit: 1079
Gerald Davis
|
|
February 21, 2012, 02:55:36 PM |
|
Not sure if this has been covered but I have found that using cgminer the optimal settings (at least for 5970s) on p2pool differs from normal pools.
Regular Pool queue: 2 threads per GPU: 2 intensity: 9
P2pool queue: 1 threads per GPU: 1 intensity: 8
This seems to have cut my stales and orphans significantly. Anyone else experience the same?
I think cgminer using deep queue and multiple threads conflicts with the ultra short LP time used by p2pool.
Thoughts?
|
|
|
|
kano
Legendary
Offline
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
|
|
February 21, 2012, 03:00:27 PM |
|
Not sure if this has been covered but I have found that using cgminer the optimal settings (at least for 5970s) on p2pool differs from normal pools.
Regular Pool queue: 2 threads per GPU: 2 intensity: 9
P2pool queue: 1 threads per GPU: 1 intensity: 8
This seems to have cut my stales and orphans significantly. Anyone else experience the same?
I think cgminer using deep queue and multiple threads conflicts with the ultra short LP time used by p2pool.
Thoughts?
See you may not be able to hop P2Pool, but you can get more than expected BTC by using cgminer and lowering your rejects below the 9% that everyone else gets. Hmm time to add an anti-cgminer pool calculation coz they can get more than their fair share [/sarcasm] You see saying that 9% is OK is like telling someone it's OK to mine on a Prop pool ... now who was it who said 9% was OK?
|
|
|
|
Rassah
Legendary
Offline
Activity: 1680
Merit: 1035
|
|
February 21, 2012, 04:40:11 PM |
|
Not sure if this has been covered but I have found that using cgminer the optimal settings (at least for 5970s) on p2pool differs from normal pools.
Regular Pool queue: 2 threads per GPU: 2 intensity: 9
P2pool queue: 1 threads per GPU: 1 intensity: 8
This seems to have cut my stales and orphans significantly. Anyone else experience the same?
I think cgminer using deep queue and multiple threads conflicts with the ultra short LP time used by p2pool.
Thoughts?
On my 5830, lowering my intensity from 9 to 8 alone dropped my rejects from 9% to 0
|
|
|
|
JayCoin
|
|
February 21, 2012, 09:08:43 PM |
|
I have multiple miners mining at one p2pool server. Each miner has its own bitcoin address.
How is the payout split to each bitcoin address?
Does each miner need to find blocks added to the share chain or is the payout split to the addresses based on work submitted to my p2pool server?
Thanks
|
Hello There!
|
|
|
Red Emerald
|
|
February 21, 2012, 09:36:17 PM |
|
I have multiple miners mining at one p2pool server. Each miner has its own bitcoin address.
How is the payout split to each bitcoin address?
Does each miner need to find blocks added to the share chain or is the payout split to the addresses based on work submitted to my p2pool server?
Thanks
The workers are paid for the work they do. Pretty simple. Any particular reason you are mining to different addresses? I have 2 p2pool servers (one primary, one backup) and they both mine to the same address. My miners just have a descriptive username instead of a payment address.
|
|
|
|
DeathAndTaxes
Donator
Legendary
Offline
Activity: 1218
Merit: 1079
Gerald Davis
|
|
February 21, 2012, 09:36:47 PM |
|
I have multiple miners mining at one p2pool server. Each miner has its own bitcoin address.
How is the payout split to each bitcoin address?
Does each miner need to find blocks added to the share chain or is the payout split to the addresses based on work submitted to my p2pool server?
Thanks
You are only paid for shares submitted to the share chain (that are still valid when a block is found). Each address is paid separately, depending on how many "full difficulty" shares it has in the chain at the time a block is found. You could simply use the same address for all miners.
|
|
|
|
Rassah
Legendary
Offline
Activity: 1680
Merit: 1035
|
|
February 21, 2012, 10:46:25 PM |
|
I have multiple miners mining at one p2pool server. Each miner has its own bitcoin address.
How is the payout split to each bitcoin address?
Does each miner need to find blocks added to the share chain or is the payout split to the addresses based on work submitted to my p2pool server?
Thanks
Each miner has its own address? I thought addresses were at P2Pool level, not miner level, so all those miners will only contribute to the single P2Pool instance getting paid to its address?
|
|
|
|
DeathAndTaxes
Donator
Legendary
Offline
Activity: 1218
Merit: 1079
Gerald Davis
|
|
February 21, 2012, 10:50:19 PM |
|
I have multiple miners mining at one p2pool server. Each miner has its own bitcoin address.
How is the payout split to each bitcoin address?
Does each miner need to find blocks added to the share chain or is the payout split to the addresses based on work submitted to my p2pool server?
Thanks
Each miner has its own address? I thought addresses were at P2Pool level, not miner level, so all those miners will only contribute to the single P2Pool instance getting paid to its address? You can use worker level addresses. It is useful for someone running a public p2pool. The address in the username is where credit is given.
|
|
|
|
Red Emerald
|
|
February 22, 2012, 04:48:20 AM |
|
forrestv, are you considering acting on any of these ideas? What are your current thoughts on this?
This code is probably worthy of a bounty
|
|
|
|
kano
Legendary
Offline
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
|
|
February 22, 2012, 05:36:17 AM |
|
Hmm the highest is only 508.9% Not bad - haven't hit a really long one yet.
... and also, to better understand the meaning of that:
14:50:53 20-Feb-2012 UTC DeepBit: block 167671 (share count)/difficulty = 829%
...
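For reference, a figure like that is just the round's share count measured against the block difficulty: 100% is an average-luck round, anything above is a long one. A small sketch with invented numbers (DeepBit's actual counts aren't shown here):

```python
# How a "(share count)/difficulty = 829%" figure is computed. The round's
# submitted difficulty-1 shares are compared to the block difficulty, which
# is the expected number of such shares per block. Numbers are hypothetical.

def round_length_pct(diff1_shares_in_round, block_difficulty):
    # 100% = the pool found exactly the expected number of shares before
    # solving the block; >100% = an unlucky (long) round.
    return 100.0 * diff1_shares_in_round / block_difficulty

print(f"{round_length_pct(10_888_000, 1_313_000):.0f}%")  # 829%
```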
|
|
|
|
Aion2n
|
|
February 22, 2012, 05:50:11 AM |
|
Why is there always such a difference between the statistics page and what the client shows?
|
|
|
|
kjj
Legendary
Offline
Activity: 1302
Merit: 1026
|
|
February 22, 2012, 06:03:48 AM |
|
Why such a difference in the statistics and the client at all times?
Because what you see as your hashing rate is an estimate based on a random process. Any time you see a hash rate anywhere other than directly in your mining software, you need to read it as "the amount of work another person would have to do to have a good chance of duplicating my work", not as "the amount of work I've done".
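That randomness can be put in numbers: the share count n in the averaging window is roughly Poisson-distributed, so any rate estimated from it has a relative standard error of about 1/sqrt(n). A sketch with illustrative share counts:

```python
import math

# Why pool-side hashrate readings bounce around: the window's share count n
# is approximately Poisson, so a rate estimated from it carries a relative
# standard error of roughly 1/sqrt(n). Share counts below are illustrative.

def relative_error(shares_in_window):
    return 1.0 / math.sqrt(shares_in_window)

small = relative_error(25)    # small miner, 25 shares in the window
large = relative_error(2500)  # large miner, 2500 shares in the window
print(f"{small:.0%} vs {large:.0%}")  # 20% vs 2%
```

So a small miner's displayed rate can easily wander 20% either way with no change in actual hardware performance.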
|
17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8 I routinely ignore posters with paid advertising in their sigs. You should too.
|
|
|
forrestv (OP)
|
|
February 22, 2012, 06:12:46 AM |
|
forrestv, are you considering acting on any of these ideas? What are you current thoughts on this?
I've thought about it quite a bit. I plan to start a second P2Pool once P2Pool reaches about 400GH/s, because only then will we have enough power to make splitting into two okay. The upcoming protocol change lets new P2Pools be created safely. Any method of dynamically creating P2Pools runs the risk of hurting miners, because a pool can't simply be terminated if the hash rate drops, since the last day of shares that were mined won't be built on top of and won't get their fair reward. I intend to move towards the high-difficulty p2pool backbone idea eventually, but that will obviously require a lot of thought and changes.
|
1J1zegkNSbwX4smvTdoHSanUfwvXFeuV23
|
|
|
DeathAndTaxes
Donator
Legendary
Offline
Activity: 1218
Merit: 1079
Gerald Davis
|
|
February 22, 2012, 01:40:46 PM |
|
I intend to move towards the high-difficulty p2pool backbone idea eventually, but that will obviously require a lot of thought and changes.
Excellent. I doubt I can help with the coding, but if you need help nailing down the details of the implementation, I would be glad to participate in any technical discussion or go over any whitepapers or design docs. It is the most ambitious goal, but a long-share-time, high-difficulty backbone would be immensely valuable.
* Would allow creating an arbitrary number of p2pools.
* Would allow conventional pools to connect, and provide a 3rd-party method to verify a pool is legit.
* Would allow creating concepts like a distributed PPS pool (payment would be semi-centralized, meaning the operator could cheat, but it would be immediately obvious).
|
|
|
|
|