reallive1
Newbie
Offline
Activity: 5
Merit: 0
|
|
February 20, 2015, 10:41:41 PM |
|
As far as rewriting goes: Stratum being a "human readable protocol" (JSON-based), it's string manipulation in the end; Python should be good enough.
The stratum protocol bits are just string manipulation, but that part alone is only one tiny component of writing pool software. It was good enough in the days when only one client was expected to connect to a p2pool instance, but if you want p2pool to be useful it needs to attract big miners to push the hashrate, which means hundreds if not thousands of clients for a local p2pool instance. It does not remotely scale. There are far more fundamental problems that need addressing in the modern world of mining than the mostly cosmetic startup exception issue.
You're right that the cosmetic start-up issue is not the biggest problem. Starting from there, could you list the known fundamental problems? To my knowledge, stratum doesn't keep its socket open (am I wrong?), so scalability shouldn't be much of an issue here if the computer is fast enough. This could be profiled to get a better idea.
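For what it's worth, the "string manipulation" in question is newline-delimited JSON-RPC. A minimal sketch of the parsing side, assuming standard stratum framing (the `mining.subscribe` method name is from the stratum protocol; the helper function itself is hypothetical, not p2pool code):

```python
import json

def parse_stratum_line(line):
    """Parse one newline-delimited stratum JSON-RPC message."""
    msg = json.loads(line)
    # Requests carry "method"/"params"; responses carry "result"/"error".
    if "method" in msg:
        return ("request", msg["method"], msg.get("params", []))
    return ("response", msg.get("result"), msg.get("error"))

# A mining.subscribe request as a client would send it:
kind, method, params = parse_stratum_line(
    '{"id": 1, "method": "mining.subscribe", "params": []}'
)
```

Parsing really is this simple; as -ck says below, the hard part is everything around it (work generation, share validation, thousands of concurrent sockets), not the protocol itself.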
|
|
|
|
wilth1
Member
Offline
Activity: 63
Merit: 10
|
|
February 21, 2015, 01:03:02 AM |
|
Spun up a node yesterday and pointed some gear at it
|
|
|
|
Prelude
Legendary
Offline
Activity: 1596
Merit: 1000
|
|
February 21, 2015, 01:04:53 AM |
|
Count me in for a bounty. We can't just let p2pool die.
|
|
|
|
iegservers
Member
Offline
Activity: 96
Merit: 10
|
|
February 21, 2015, 01:07:16 AM |
|
Spun up a node yesterday and pointed some gear at it
Count me in for a bounty. We can't just let p2pool die.
More of this please!!!! I will throw some btc at development as well.
|
|
|
|
kano
Legendary
Offline
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
|
|
February 21, 2015, 01:10:03 AM |
|
Spun up a node yesterday and pointed some gear at it
Count me in for a bounty. We can't just let p2pool die.
More of this please!!!! I will throw some btc at development as well.
Development isn't the current issue. You need a workable design that overcomes the current problems first ...
|
|
|
|
Prelude
Legendary
Offline
Activity: 1596
Merit: 1000
|
|
February 21, 2015, 01:20:57 AM |
|
Spun up a node yesterday and pointed some gear at it
Count me in for a bounty. We can't just let p2pool die.
More of this please!!!! I will throw some btc at development as well.
Development isn't the current issue. You need a workable design that overcomes the current problems first ...
No, but it's an issue nonetheless. A bounty might hopefully get the ball rolling.
|
|
|
|
nezroy
Newbie
Offline
Activity: 4
Merit: 0
|
|
February 21, 2015, 01:41:08 AM |
|
Development isn't the current issue. You need a workable design that overcomes the current problems first ...
Can you summarize (or link to one?) as to what the current problems are? I'm relatively new to p2pool and haven't had a chance to catch up on all 600+ pages of this thread yet. Thanks
|
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
February 21, 2015, 02:12:27 AM |
|
Development isn't the current issue. You need a workable design that overcomes the current problems first ...
Can you summarize (or link to one?) as to what the current problems are? I'm relatively new to p2pool and haven't had a chance to catch up on all 600+ pages of this thread yet. Thanks
I would say the three main problems are:
1 - It's single threaded. That means it starts to bog down, FAST, with heavy loads.
2 - Share chain difficulty is directly proportional to pool hashpower. The more hashpower there is, the higher the share chain difficulty, making it harder and harder for smaller miners to get a share.
3 - 30 second work restart. This directly relates to #2. p2pool targets a share on the alt chain once every 30 seconds. The more hashpower, the quicker shares are found, so the higher the share chain requirement. Weak hardware (which is most of it) does not like 30 second restarts. Some hardware tolerates it but suffers (Bitmain hardware with up-to-date firmware). Some hardware is okay (Spondoolies). Some hardware doesn't work at all. Increasing the time frame for a share means higher share difficulty; see #2.
I think #2 and #3 are the fatal flaws of the current p2pool design. Those have to change radically for p2pool to be successful on a wide scale. When trying to solve #2 and #3, remember p2pool is decentralized: each share submitted to the alt chain is a potential block and contains the payout information for all miners who've successfully submitted shares on the alt chain. Furthermore, each p2pool node verifies the work submitted to the block chain.
M
EDIT: Most hardware today is designed with the BTC protocol in mind, which means complete work restarts every 10 minutes. That's 20x less frequent than p2pool. Most conventional pools (at least those I've watched the data flow through closely) submit new work every few minutes or less (without a work restart requirement), but they continue to accept old work until the miner can switch to the new jobs. p2pool doesn't do that. Every 30 seconds (on average), it's a hard restart, and old work is ignored.
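Problem #2 can be put in numbers. Since the share chain targets one share pool-wide every 30 seconds and each hash is equally likely to win, a miner's expected wait for a share scales inversely with its fraction of the pool. A back-of-the-envelope sketch (the 30-second target is from the thread; the example hashrates are illustrative, not real pool stats):

```python
SHARE_INTERVAL = 30.0  # p2pool targets one share-chain share per 30 s

def expected_share_time(miner_hashrate, pool_hashrate):
    """Expected seconds for one miner to find a share-chain share.

    Shares land pool-wide every SHARE_INTERVAL seconds on average,
    so a miner's expected wait is that interval divided by its
    fraction of the pool's total hashrate.
    """
    return SHARE_INTERVAL * pool_hashrate / miner_hashrate

# A 1 TH/s miner on a 1 PH/s pool waits 30 * 1000 = 30000 s,
# i.e. roughly 8.3 hours per share on average:
hours = expected_share_time(1e12, 1e15) / 3600
```

This is why growing pool hashpower squeezes out small miners: the 30-second pool-wide target is fixed, so every new terahash raises everyone else's expected time between shares (and the payout variance that comes with it).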
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
-ck
Legendary
Offline
Activity: 4242
Merit: 1644
Ruu \o/
|
|
February 21, 2015, 02:17:24 AM |
|
To my knowledge, stratum don't keep its socket open(am I wrong?), thus scalability shouldn't be much of an issue here if the computer is fast enough. This could be profiled to get a better idea.
Let me be clear - stratum is not the issue here, it is just a communication protocol for the actual mining and it is NOT stratum or p2pool's stratum implementation that is the problem. You need to get a firm grasp of what is involved in mining, pooled mining and the p2pool share chain first before you can understand where the problems are. Let that not stop you from starting on the bits you do understand though since you have to start somewhere when helping out with an established project you're new to.
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
armedmilitia
|
|
February 21, 2015, 05:37:56 AM |
|
Isn't it true that if you mine on a node with other people (in your local area, to keep latency down), you can reduce the difficulty to get a share that way (as the entire node finds a share and distributes it to the participants via a lower-difficulty PPS method)? I see that as a viable workaround to flaw #2 above. Sure, it might boost centralization a bit as miners conglomerate into local nodes, but as long as there are geographic distances between miners, super-nodes (51% of hashrate) shouldn't exist due to latency. If this has already been answered in the thread, sorry!
|
|
|
|
-ck
Legendary
Offline
Activity: 4242
Merit: 1644
Ruu \o/
|
|
February 21, 2015, 07:04:05 AM |
|
Isn't it true that if you mine on a node with other people (in your local area, to keep latency down), you can reduce the difficulty to get a share that way (as the entire node finds a share and distributes it to the participants via a lower-difficulty PPS method)? I see that as a viable workaround to flaw #2 above. Sure, it might boost centralization a bit as miners conglomerate into local nodes, but as long as there are geographic distances between miners, super-nodes (51% of hashrate) shouldn't exist due to latency. If this has already been answered in the thread, sorry!
Yes, it's been discussed (and done) many times before. You'll see the irony in this solution when you realise that the solution to the problem with distributed mining is... pooled mining. It requires a level of communication, trust and coordination which totally undoes the whole concept of everyone running their own p2pool node, which need not have any trust component to it.
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
jonnybravo0311
Legendary
Offline
Activity: 1344
Merit: 1024
Mine at Jonny's Pool
|
|
February 21, 2015, 07:06:26 AM |
|
Isn't it true that if you mine on a node with other people (in your local area, to keep latency down), you can reduce the difficulty to get a share that way (as the entire node finds a share and distributes it to the participants via a lower-difficulty PPS method)? I see that as a viable workaround to flaw #2 above. Sure, it might boost centralization a bit as miners conglomerate into local nodes, but as long as there are geographic distances between miners, super-nodes (51% of hashrate) shouldn't exist due to latency. If this has already been answered in the thread, sorry!
No. You can only lower your share difficulty to the network's determined minimum. You do this by using the "/" after your address. This may sound confusing, but it's pretty easy in reality. If the node you're mining on has a considerably higher hash rate than what you are contributing, then it is advantageous to tell the node you want the network minimum share value.
For example, let's say that I run my own node and I have 30TH/s on it. If you bring a single S5 onto my node, then you should set up your user name like this: "MYBTCADDRESS/1000". That "/1000" will ensure that the node will accept the minimum share difficulty for your miner - which right now is 3,200,000ish. My 30TH/s miner, if I set the user name up like this: "MYBTCADDRESS", will be assigned a share difficulty of about 14,000,000.
P2Pool assigns your miner's difficulty based upon the node's total hash rate. You override that value by using "/". Hope that helps.
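The behaviour described above (a "/DIFF" suffix requests a lower difficulty, but can never go below the share chain's minimum, matching mdude77's note that the minimum can only be raised) can be sketched like this. This is a simplified illustration, not p2pool's actual parsing code, and the parameter names are hypothetical:

```python
def parse_worker(username, node_assigned_diff, chain_min_diff):
    """Split 'ADDRESS/DIFF' and resolve the effective share difficulty.

    Without a suffix the node assigns difficulty from its total
    hashrate; a '/DIFF' suffix overrides that assignment, but the
    result is clamped so it never drops below the share chain's
    minimum difficulty.
    """
    if "/" in username:
        address, suffix = username.split("/", 1)
        requested = float(suffix)
        return address, max(requested, chain_min_diff)
    return username, node_assigned_diff

# The S5 example from the post: '/1000' requests diff 1000,
# which is clamped up to the ~3.2M share-chain minimum:
addr, diff = parse_worker("MYBTCADDRESS/1000", 14_000_000, 3_200_000)
```

So the suffix can't beat the share chain's floor; it only stops the node from assigning you the much higher difficulty it derives from its own total hashrate.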
|
Jonny's Pool - Mine with us and help us grow! Support a pool that supports Bitcoin, not a hardware manufacturer's pockets! No SPV cheats. No empty blocks.
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
February 21, 2015, 12:02:30 PM |
|
Isn't it true that if you mine on a node with other people (in your local area, to keep latency down), you can reduce the difficulty to get a share that way (as the entire node finds a share and distributes it to the participants via a lower-difficulty PPS method)? I see that as a viable workaround to flaw #2 above. Sure, it might boost centralization a bit as miners conglomerate into local nodes, but as long as there are geographic distances between miners, super-nodes (51% of hashrate) shouldn't exist due to latency. If this has already been answered in the thread, sorry!
You cannot lower the minimum alt share requirement. You can only raise it.
M
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
aurel57
Legendary
Offline
Activity: 1232
Merit: 1000
|
|
February 21, 2015, 01:50:32 PM Last edit: February 21, 2015, 07:12:04 PM by aurel57 |
|
|
|
|
|
DiCE1904
Legendary
Offline
Activity: 1118
Merit: 1002
|
|
February 21, 2015, 05:06:47 PM |
|
and another one
|
|
|
|
Duce
|
|
February 21, 2015, 05:38:10 PM |
|
With everybody leaving I have just about made up for the slow week as my share value has more than doubled. Good thing I was on the road last week and just decided not to bother with the miners until I came home.
|
|
|
|
yslyung
Legendary
Offline
Activity: 1500
Merit: 1002
Mine Mine Mine
|
|
February 21, 2015, 06:08:43 PM |
|
and a block ! more to come
|
|
|
|
PatMan
|
|
February 21, 2015, 07:43:45 PM |
|
At last! Hopefully there will be a few more to make up for the terrible last 10 days we've had - although I do have the feeling that the reason we got these 2 blocks is because so many users left.....
|
|
|
|
aurel57
Legendary
Offline
Activity: 1232
Merit: 1000
|
|
February 21, 2015, 08:15:28 PM |
|
Isn't it true that if you mine on a node with other people (in your local area, to keep latency down), you can reduce the difficulty to get a share that way (as the entire node finds a share and distributes it to the participants via a lower-difficulty PPS method)? I see that as a viable workaround to flaw #2 above. Sure, it might boost centralization a bit as miners conglomerate into local nodes, but as long as there are geographic distances between miners, super-nodes (51% of hashrate) shouldn't exist due to latency. If this has already been answered in the thread, sorry!
No. You can only lower your share difficulty to the network's determined minimum. You do this by using the "/" after your address. This may sound confusing, but it's pretty easy in reality. If the node you're mining on has a considerably higher hash rate than what you are contributing, then it is advantageous to tell the node you want the network minimum share value.
For example, let's say that I run my own node and I have 30TH/s on it. If you bring a single S5 onto my node, then you should set up your user name like this: "MYBTCADDRESS/1000". That "/1000" will ensure that the node will accept the minimum share difficulty for your miner - which right now is 3,200,000ish. My 30TH/s miner, if I set the user name up like this: "MYBTCADDRESS", will be assigned a share difficulty of about 14,000,000.
P2Pool assigns your miner's difficulty based upon the node's total hash rate. You override that value by using "/". Hope that helps.
So I am on a node that has 10TH/s+ and I have an S5 there running at 500GH/s, so I should set it with "/"? Use 1000?
|
|
|
|
Songminer
Member
Offline
Activity: 76
Merit: 10
|
|
February 21, 2015, 08:15:55 PM |
|
Hey, I clicked on the open pool list on P2Pool.org... and it's blank: http://p2pool-nodes.info/ It was populated the other day...
|
|
|
|
|