Matt Corallo
August 31, 2014, 09:09:11 PM
Quote
Nice results on the RelayNode mod... But how does it compare to having the standalone Java node and p2pool instance versus the combined Python implementation? Pros/cons? Would PyPy execution make any appreciable difference for the modified p2pool instance?
Right now I'm running the mainline p2pool via PyPy and Matt's Java relay separately and it's functioning fine, so I'm interested in whether there would be any gain from switching to the combined implementation.
The only thing I can think of is that, because Python is effectively single-threaded (both PyPy and CPython have a global lock, the GIL, so multiple threads contend for it, which can actually decrease performance compared to running a single thread and multiplexing between them), you may end up waiting for the relay network thread to finish processing something. But because all of its processing is incredibly lightweight, I highly doubt it would make more than a few nanoseconds' difference. Of course, if you don't already have a spare physical CPU core for running the Java version, you probably save on process context switching anyway...
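A quick way to see that GIL contention, using nothing but the standard library (the busy loop here is an arbitrary stand-in workload, not anything p2pool actually does):

```python
import threading
import time

def count(n):
    # Pure-Python CPU work; only one thread can hold the GIL at a time.
    while n > 0:
        n -= 1

N = 2_000_000

# One thread doing all the work sequentially.
start = time.perf_counter()
count(N)
count(N)
single = time.perf_counter() - start

# Two threads splitting the same total work: they fight over the GIL,
# so on CPython this is usually no faster, and often slower.
start = time.perf_counter()
t1 = threading.Thread(target=count, args=(N,))
t2 = threading.Thread(target=count, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
threaded = time.perf_counter() - start

print(f"single thread: {single:.2f}s, two threads: {threaded:.2f}s")
```

On most CPython builds the two-thread run takes at least as long as the single-threaded one, which is the contention Matt describes.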
kgb2mining
Member
Offline
Activity: 112
Merit: 10
September 01, 2014, 01:40:34 AM
I'm not too sure about the CPU overhead - I have a 16-core machine, and honestly it didn't break a sweat before and isn't now. However, Java introduced about a 2.5 GB RAM overhead, which is completely gone now with the pure-Python version.
I'll just echo what PatMan said - even if there's no real performance gain between the two, there is definitely a maintenance gain with everything now streamlined, and that is enough for me. I don't need to worry about the Java process on its own, don't have to worry about reboots or scripting it up to restart, etc. It's all happy under one roof.
Yesterday I was tinkering around with adding peering nodes and I think I tinkered too much - my efficiency never got back above 100%. I fixed most of what I was doing, and since restarting about 12 hours ago I have 7/0/0 shares and 110%+ efficiency again. GBT is hovering around 0.16-0.18 on average, memory usage is small at around 500 MB, and my bandwidth usage is only around 50 KB/s.
hamburgerhelper
Member
Offline
Activity: 83
Merit: 10
September 01, 2014, 03:19:55 AM
Thanks for the info, kgb2mining.
It looks like the Java version is using about 70 MB of RAM and 0.2% CPU. Since I already have the startup scripts, and upgrading is so simple, I think I'll just keep running the Java version. It gives me some flexibility when it comes to upgrading (e.g. I can upgrade the RelayNodeClient without restarting p2pool).
Interesting about your p2pool peer experimentation. Would you mind sharing your peer list? Has anyone found that there is an optimal number of peers? Several of these peers are gone now, but here's my list:
-n p2pool.org -n minefast.coincadence.com -n 14.17.121.234 -n 58.22.92.36 -n 115.201.192.188 -n 125.126.130.134 -n 107.170.178.16 -n 50.251.148.42 -n 115.202.27.30 -n 182.41.211.14 -n 50.248.204.210 -n 192.71.218.197 -n 107.170.116.123 -n 173.160.157.222 -n 82.196.8.44
kgb2mining
Member
Offline
Activity: 112
Merit: 10
September 01, 2014, 04:12:03 AM
Heh, I went for the "scorched earth" peer list approach to see what happens... I went to the p2pool node info page, sorted by country, exported to CSV, then grep'd, awk'd, cut and echo'd the 84 U.S. nodes that resulted. Currently it appears I have active outbound peering connections to 60+ of them (hovering between 62-64 connected on average). I'm of the mindset that it doesn't hurt to peer with as many nodes as you can reliably communicate with, since you want to get your shares out to the network as fast as possible, before everyone else. If you can announce your share outbound quickly to as many nodes as possible, you might just win that race. I could be completely off in my thinking, and hurting myself by having so many peers connected. Time should tell, though, and if my efficiency suffers, then it might be better to approach it more methodically.
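That CSV-to-peer-list step could equally be scripted in Python along these lines. The "address"/"country" column names are just a guess at the export's layout, so adjust them to match the real file:

```python
import csv
import io

def peer_flags(csv_file, country="US"):
    """Turn exported node rows into a string of p2pool -n arguments."""
    return " ".join(
        "-n " + row["address"]
        for row in csv.DictReader(csv_file)
        if row["country"] == country
    )

# Tiny demo with made-up rows instead of the real export:
sample = io.StringIO("address,country\n1.2.3.4,US\n5.6.7.8,DE\n9.10.11.12,US\n")
print(peer_flags(sample))  # -n 1.2.3.4 -n 9.10.11.12
```

The resulting string can be appended to the run_p2pool.py command line the same way as the hand-built list above.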
hamburgerhelper
Member
Offline
Activity: 83
Merit: 10
September 01, 2014, 04:39:50 AM
Quote from: kgb2mining
Heh, I went for the "scorched earth" peer list approach to see what happens... [...]

Ok, this was my line of thinking. But you took it waaaay further. For that, I applaud you! I'm beefing up my peers with a slightly different strategy: sort the node list by uptime and use the oldest nodes with reasonable latency. I figure the oldest nodes will have the deepest roots into the p2pool network. Ok, I hope we just didn't ruin our competitive advantage here...
Matt Corallo
September 01, 2014, 05:07:58 AM
Quote from: hamburgerhelper
Ok, this was my line of thinking. But you took it waaaay further. For that, I applaud you!
I'm beefing up my peers with a slightly different strategy: sort the node list by uptime and use the oldest nodes with reasonable latency. I figure that the oldest nodes will have the deepest roots into the p2pool network. Ok, I hope we just didn't ruin our competitive advantage here...
This kind of approach rarely works. In fact, simply decreasing peering within the relay network improved relay times. Mostly, using this approach means your peaks will be significantly higher (remember, when a new block is found you get huge bandwidth peaks, even though amortized across even a second you won't see all that much bandwidth), leading to strange things on the network, mostly increased packet loss. This means that instead of getting a block pretty quickly, you'll have to wait for the packet to time out and be resent. I would recommend finding only a handful of peers which are both geographically local and have high hashpower, so you'll get the blocks they find straight from the horse's mouth.
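To put rough numbers on those peaks (the ~1 MB block size and the one-second window are illustrative assumptions, not measurements):

```python
# Back-of-envelope: burst rate needed to push one full ~1 MB block
# (relayed over plain bitcoin p2p) to N peers within about a second.
BLOCK_BYTES = 1_000_000

for peers in (5, 20, 60):
    mbps = BLOCK_BYTES * 8 * peers / 1e6
    print(f"{peers:2d} peers -> ~{mbps:.0f} Mbps burst")
```

The burst scales linearly with peer count: 60 peers needs an order of magnitude more instantaneous bandwidth than 5, which is where the packet loss comes from.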
hamburgerhelper
Member
Offline
Activity: 83
Merit: 10
September 01, 2014, 05:24:09 AM
Quote from: Matt Corallo
This kind of approach rarely works. [...]

Matt, we're referring to p2pool peers, not bitcoin peers. I think p2pool blocks (the "shares") are very, very small in comparison to bitcoin blocks. Since my node is the origin of the newly found share (the share is outbound to the rest of the p2pool network), it can only get out as fast as my node's maximum outbound bandwidth. It seems to me that I want to try to hit that max (which in my case is 250 Mbps) so that there are many spokes to my hub (this looks like a hub-and-spoke topology for the first round of share transmissions, which then turns into a graph as the share gets retransmitted by my peer nodes). Am I totally off base here? I'm not an expert on p2pool, I just read some stuff.
norgan
September 01, 2014, 06:15:17 AM
Hey all,
Been away on holidays and lurking here to see what's going on. Just a quick Q: I have a miner on my node that has almost 100% DOA. Is that going to have any negative impact on my node?
Matt Corallo
September 01, 2014, 06:15:39 AM
Quote from: hamburgerhelper
Matt, we're referring to p2pool peers, not bitcoin peers. [...]
Indeed, the relay network is a bit different. Since many of its peers still use standard bitcoin p2p connections, full blocks get sent over the network, making the peaks much higher. Still, you have to think about more than just your own node. If you found the share, maxing your uplink isn't a bad idea; but if someone else found the share, you still want their share to propagate quickly (at least to you...), and for that you shouldn't fill up their connection count. Of course, you also have to account for bitcoind being on the same uplink (what if your share was a block?).
Matt Corallo
September 01, 2014, 06:17:01 AM
In any case, this thread is moving on, so I'm gonna unsubscribe. If anyone has any further relay network questions, don't hesitate to reach out via bitcoin-peering@my full name.com
K1773R
Legendary
Offline
Activity: 1792
Merit: 1008
/dev/null
September 01, 2014, 06:49:22 AM
Quote from: Matt Corallo
In any case, this thread is moving on, I'm gonna unsubscribe. [...]

You should create your own thread for it.
[GPG Public Key] BTC/DVC/TRC/FRC: 1K1773RbXRZVRQSSXe9N6N2MUFERvrdu6y ANC/XPM: AK1773RTmRKtvbKBCrUu95UQg5iegrqyeA NMC: NK1773Rzv8b4ugmCgX789PbjewA9fL9Dy1 LTC: LKi773RBuPepQH8E6Zb1ponoCvgbU7hHmd EMC: EK1773RxUes1HX1YAGMZ1xVYBBRUCqfDoF BQC: bK1773R1APJz4yTgRkmdKQhjhiMyQpJgfN
norgan
September 01, 2014, 09:57:03 AM
This is pretty big for p2pool!
murdof
September 01, 2014, 10:24:51 AM
And now everyone starts thinking that maybe forrestv is working for Bitmain on the new branch, and that's why he disappeared?
PatMan
September 01, 2014, 10:29:12 AM
coinme.info
September 01, 2014, 10:35:59 AM
Quote from: norgan
This is pretty big for p2pool!

So they have S2s working with ANTPOOL, their new p2pool implementation:

Quote from: Bitmain
5,000 AntMiner S2 are being operated in the mining farm, and the hash rate of 4,000 AntMiner S2 was switched into HASHNEST. We are investing significant resources into the development of the p2pool mining protocol. At present, the hash rate of HASHNEST is mined from ANTPOOL with zero (0) fees, and will soon be contributed to p2pool. The p2pool work is almost complete and is in final testing and deployment. The code of ANTPOOL will be open sourced, and we are confident that you will be impressed by the outstanding contribution that BITMAIN has put into the p2pool protocol. We have a focused team developing the p2pool mining protocol, and the goal is to have 80% of the total network hash rate join p2pool within the next 12 months. At that time, the whole Bitcoin community will no longer need to be anxious about the decentralization risks associated with the current pool mining model. UMISOO, the first Operator on HASHNEST, will provide 4 PH/s of hashing power and plans to start the Round 1 subscription at Beijing time 22:00, 2nd September 2014 (UTC+8:00). The official subscription flat price is 0.0016 BTC/GH/s. Due to anticipated high demand, each user is limited to a maximum of 1,000 GH/s. The Round 1 hash rate is generated by AntMiner S2, with a maintenance fee of $0.0032424/GH/s/day.

So the max buy-in is 0.0016 BTC/GH/s × 1,000 GH/s = 1.6 BTC, and the maintenance fee is $0.0032424/GH/s/day × 1,000 GH/s = $3.2424 per day, or about $98.623 per month. A hosted S2 in China for 1.6 BTC plus $3.24 per day, with no shipping, taxes, power costs etc... sounds OK.
This part I'm confused about: "At that time, the whole Bitcoin community will never be anxious about the potential risks of the decentralization risk associated with the pool mining model now." If we all buy into running hosted miners, how is that not taking away the very decentralization that a lot of us are running our own p2pool nodes for?
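Re-running that arithmetic as a check (the monthly figure assumes an average month of 365/12 days, which appears to be how the $98.623 was derived):

```python
# Sanity-checking the quoted HASHNEST numbers.
price_btc_per_ghs = 0.0016
maintenance_usd_per_ghs_day = 0.0032424
max_ghs = 1000

buy_in_btc = price_btc_per_ghs * max_ghs              # the quoted 1.6 BTC max
fee_per_day = maintenance_usd_per_ghs_day * max_ghs   # the quoted $3.2424/day
fee_per_month = fee_per_day * 365 / 12                # assuming an average-length month

print(buy_in_btc, fee_per_day, round(fee_per_month, 3))
```

The figures check out: 1.6 BTC up front and roughly $98.62 a month in maintenance at the full 1,000 GH/s allocation.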
murdof
September 01, 2014, 10:37:59 AM
On a side note, this answers why they switched the S3 to preorders. I don't know if this is good news or bad news, as it leaves only Spondoolies offering hardware without running a farm at the same time. Yes, you will say that Bitmain is "renting" its power, but since they've come out and said they officially have a farm, we don't know how much extra power they have (maybe the S3s that didn't ship for 1.5 months), and of course they are merge mining.
So is this good for p2pool? Well, nobody will care in 12 months' time, because it seems we will all be forced to switch to cloud mining while others keep the benefits of p2pool. We will just get payouts based on a calculator every week.
It seems that serious companies have copied pbmining's ponzi scheme. But this time, since they have their own hardware, they will just remove the "ponzi" part, do it legit, and we will all accept it and be happy.
It seems mining for the average folks will soon be a thing of the past...
IYFTech
September 01, 2014, 10:38:58 AM
Quote from: murdof
And now everyone starts thinking that maybe forrestv is working for bitmain on the new branch thats why he had disappeared?

TBH, his infamous little rant was an invitation for someone to take up the reins - and who better than a manufacturer the size of Bitmain, who also has a desire for decentralization and an R&D department with limitless capability? This could be sweeeeeet
coinme.info
September 01, 2014, 10:41:12 AM
Quote from: murdof
It seems mining for the average folks will be a thing of the past soon...

I've felt this was on the cards for a while now, and recent posts here about the viability of running our own pools have certainly had me thinking that it will all be DC-style farms by the end of the year.
PatMan
September 01, 2014, 10:45:37 AM
I'm presuming that, as their new p2pool code is open source, it will be configurable in the same way it is now, so anyone can still run their own node... Maybe their hosted miners can be configured this way too, so that they can be pointed at any node the user wishes? Just speculating, of course...