jtoomim
|
|
August 28, 2017, 07:37:18 PM Last edit: August 28, 2017, 08:57:30 PM by jtoomim |
|
Quote
so i tried https://github.com/jtoomim/p2pool - it worked (although the invalid share rate was higher, sometimes 100% cpu usage..) now i switched back to https://github.com/p2pool/p2pool/ . but here the network is only 0.5 PH big (3 PH in the other version) so am i on the wrong network, or are miners just frightened of the transition? will someone join me?

jtoomimnet requires more CPU and less RAM than mainnet ("master") p2pool. Because of this, it is strongly recommended to use pypy with jtoomimnet, as pypy trades higher RAM usage for lower CPU usage. If you use pypy on jtoomimnet, you should see lower rates of invalid (DOA + stale) shares than when you mine on mainnet.

...

Yes, jtoomimnet has more hashrate. This is unlikely to change. jtoomimnet makes larger blocks and collects more fee revenue. If p2pool finds a block within about 30 seconds of the previous block, jtoomimnet will make a ~999 kB block (plus whatever Segwit bonus there is), but mainnet would make a ~100 kB block. After 60 seconds, it's 999 kB vs 200 kB. After about 300 seconds, they reach parity and mainnet can mine full blocks.

If you wish to verify this for yourself, wait for a block to be published (e.g. check blockchain.info), and then check the "Current block value:" on these two nodes:

http://crypto.mine.nu:9332/static/classic/ (mainnet)
http://crypto.mine.nu:9330/static/classic/ (jtoomimnet)

You can also use the share explorer to look at how the share/block values change over time. For example, these are the first shares mined by p2pool on top of block 482381 for each of the two networks (links will go invalid the next time sawa restarts his nodes):

http://crypto.mine.nu:9332/static/classic/share.html#0000000000000e27a506eb2eacacdb12ae77cd1c28fe23789e5da35edd001d69 (mainnet, 12.66496688 BTC)
http://crypto.mine.nu:9330/static/classic/share.html#0000000000000df957e19f5712d4efbf04bc825ab673a1e0db2e696bd8ce9961 (jtoomimnet, 15.53978325 BTC)

Those two shares were mined within 1 second of each other. If the hash had been good enough to be a valid block, then jtoomimnet would have earned 2.874 more BTC (22.7% more) than mainnet. Here's a quick table of the differences for the first few shares after block 482381:

Share# | BTC_difference | jtoomimnet_btc | mainnet_btc
1      | 2.87           | 15.54          | 12.66
2      | 2.69           | 15.54          | 12.85
3      | 2.50           | 15.57          | 13.07
4      | 2.82           | 15.69          | 12.86
5      | 2.79           | 15.76          | 12.97
6      | 2.36           | 15.81          | 13.45
7      | 1.95           | 15.97          | 14.03
8      | 1.78           | 16.00          | 14.23
9      | 3.53           | 16.07          | 12.54
10     | 1.69           | 16.13          | 14.44
11     | 1.34           | 15.98          | 14.64
12     | 1.48           | 16.14          | 14.66
The payout per block will sometimes be the same between the two networks, especially if the current block round has been going on for a while, but when there's a difference, it should be big and in jtoomimnet's favor.

The cost of the change that allows higher revenue on jtoomimnet is that p2pool has to deal with more transactions in the share chain. This increases the CPU, RAM, and bandwidth requirements for jtoomimnet. I added optimizations to the code that more than offset the RAM requirements and partially offset the CPU requirements. I have not yet addressed the network requirements, nor have I finished addressing the CPU requirements. Unfortunately, I'm a lazy loafer and only spend on average a few hours a week working on p2pool code, so things don't get done as fast as I'd like.

If you have a fast enough CPU and network, enough RAM, and can get pypy working, then jtoomimnet will have better expected revenue (bigger blocks), more frequent payouts (higher hashrate), fairer payouts (fewer orphan + DOA shares, smaller advantage to large miners), and will be better for Bitcoin (larger blocks) than mainnet (v17). jtoomimnet's support for Segwit also appears to be less glitchy than mainnet's at the moment.

...

I expect I will have the code ready to have jtoomimnet fork into a Segwit2x chain and a Core chain before the Bitcoin fork happens. I would prefer it if nobody mined on the Core chain, but I think that freedom of choice is more important than having people follow my choice. The code for auto-forking is about 40% done so far.

The jtoomimnet code still has some problems with altcoins, and I haven't had a chance to investigate or fix them yet, so jtoomimnet is not ready to be merged into p2pool master. I would also like to fix the CPU usage (and maybe the RAM and network usage too), but whether I do that before or after merging into p2pool master is TBD.
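For illustration only, here is a minimal Python sketch of the simplified block-size ramp described above (roughly 100 kB per expected 30-second share on mainnet vs a nearly full ~999 kB template on jtoomimnet). The constants are approximations taken from this post, not values from the p2pool source, and as noted later in the thread the real per-share growth varies.

Code:
# Rough sketch (not p2pool code) of the simplified block-size ramp above.
MAINNET_KB_PER_SHARE = 100.0   # approximate template growth per mainnet share
SHARE_INTERVAL_S = 30.0        # expected share-chain share interval
MAX_BLOCK_KB = 999.0           # ~1 MB block, ignoring any Segwit bonus

def expected_mainnet_kb(seconds_since_block):
    """Approximate mainnet template size t seconds after the previous block."""
    shares = seconds_since_block / SHARE_INTERVAL_S
    return min(MAX_BLOCK_KB, shares * MAINNET_KB_PER_SHARE)

def expected_jtoomimnet_kb(seconds_since_block):
    """jtoomimnet refills its template from the mempool almost immediately."""
    return MAX_BLOCK_KB

if __name__ == "__main__":
    for t in (30, 60, 150, 300, 600):
        print("%4ds: mainnet ~%4.0f kB, jtoomimnet ~%4.0f kB"
              % (t, expected_mainnet_kb(t), expected_jtoomimnet_kb(t)))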
|
Hosting bitcoin miners for $65 to $80/kW/month on clean, cheap hydro power. http://Toom.im
|
|
|
Acejam
|
|
August 28, 2017, 09:12:09 PM Last edit: August 28, 2017, 09:37:18 PM by Acejam |
|
I've been running P2Pool in a Docker container for over a year now with great success, so I've decided to make my Docker container public. Nothing super special here, but this should allow you to easily run P2Pool Core regardless of what platform you're on, without having to worry about binaries (as long as you have Docker installed). I currently have a tag up for 17.0 on Docker Hub, but can add more later once future releases are made.

https://hub.docker.com/r/acejam/p2pool/
https://github.com/acejam/docker-p2pool

Code:
docker run -p 9332:9332 -p 9333:9333 acejam/p2pool:17.0 ${p2pool_command_options_here}
|
|
|
|
Blue Bear
Newbie
Offline
Activity: 31
Merit: 0
|
|
August 29, 2017, 01:58:19 AM |
|
Thanks everyone that helped.
I have my node back online and running with no issues.
I have learned a lot in these past few days getting my node upgraded to v17.
BB
|
|
|
|
kano
Legendary
Offline
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
|
|
August 29, 2017, 02:04:45 AM |
|
Quote
... There is always a statistically significant chance (albeit a small one) that the next mainnet P2Pool block would be found in less than five minutes after the previous block, due to how Bitcoin mining inherently relies on chance. ...
Actually it's a simple CDF calculation that shows how bad it is to limit block sizes to taking an exorbitant 5 minutes to get to 1MB ...

5 minutes is 50% of the expected block time on a 10-minute network, so the CDF is 0.39346934028737: 1 in 1.6 blocks (60.65%) will take over 5 minutes, or reversing that, 1 in 2.54 blocks (39.3%) will take under 5 minutes ... yeah, that's a big % of blocks, not "albeit a small one" at all.

I'm often surprised at how little maths people understand about Bitcoin when they are considered experts in it ... and/or spending money mining on it ...

Note of course, this last comment is directed at someone else, not you ...
|
|
|
|
Blue Bear
Newbie
Offline
Activity: 31
Merit: 0
|
|
August 29, 2017, 02:10:50 AM |
|
Quote from: kano on August 29, 2017, 02:04:45 AM
Actually it's a simple CDF calculation that shows how bad it is to limit block sizes to taking an exorbitant 5 minutes to get to 1MB ...

can you show me the algebraic formula so I can understand it better ....
|
|
|
|
kano
Legendary
Offline
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
|
|
August 29, 2017, 02:12:34 AM |
|
Quote from: Blue Bear on August 29, 2017, 02:10:50 AM
can you show me the algebraic formula so I can understand it better ....

https://en.wikipedia.org/wiki/Cumulative_distribution_function
|
|
|
|
Cryptonomist
Newbie
Offline
Activity: 27
Merit: 0
|
|
August 29, 2017, 09:16:13 AM Last edit: August 29, 2017, 09:28:52 AM by Cryptonomist |
|
Quote from: Blue Bear on August 29, 2017, 02:10:50 AM
can you show me the algebraic formula so I can understand it better ....

Hello,

The Bitcoin mining process can be modelled as a Poisson process. From this we can calculate the probabilistic interarrival times (the time between two events, here the finding of two blocks), which are exponentially distributed with lambda = 1/600. The probability of getting an interarrival time (IAT) larger than 5 min or 300 sec is:

Pr{IAT > 300} = e^(-300/600) ≈ 0.60653 = 60.65%.

So the probability of getting two blocks less than 300 sec apart is:

Pr{IAT < 300} = 1 - Pr{IAT > 300} ≈ 0.39347 = 39.35%.

This is the probability of the Bitcoin network finding a block less than 300 sec after the previous one. We can of course do the same for P2Pool. The probability that the P2Pool network finds a block less than 300 sec after the previous one (assuming P2Pool has 0.1% of the total hashing power of the Bitcoin network, which is more than it currently has) uses lambda = 1/600000:

Pr{IAT < 300} = 1 - e^(-300/600000) ≈ 0.0005 = 0.05%.

(This is an exercise I did for myself. It may contain errors related to how P2Pool works. If I made errors, please make me aware of them so I can learn from it.)

We can also find the expected number of kB that P2Pool can add to a block if P2Pool finds that block 5 min after the previous one, assuming 100 kB per share, no change from the dynamic difficulty adjustment mechanism, and that after the last block found by the network P2Pool had to start again from scratch, i.e. from 0 kB. As explained above, the finding of shares is also a Poisson process. The probability of finding no shares in an interval of length 300 sec is:

Pr(N(300) = 0) = (((300/30)^0)/0!)*e^(-300/30) ≈ 0.00005.

In this case P2Pool will add 0 kB to the block. We can continue this up to a ridiculous and irrelevant number of shares, so I will truncate it at 20 shares:

Pr(N(300) = 1) ≈ 0.00045 -> 100 kB
Pr(N(300) = 2) ≈ 0.00227 -> 200 kB
Pr(N(300) = 3) ≈ 0.00757 -> 300 kB
...
Pr(N(300) = 9) ≈ 0.12511 -> 900 kB
Pr(N(300) = 10) ≈ 0.12511 -> 1000 kB
Pr(N(300) = 11) ≈ 0.11374 -> 1100 kB
...
Pr(N(300) = 19) ≈ 0.00373 -> 1900 kB
Pr(N(300) = 20) ≈ 0.00187 -> 2000 kB

From these results we can calculate the weighted average: 996.55 kB. So if a Bitcoin block is found by P2Pool 300 sec after the previous one, then the expected amount of data in that block is 996.55 kB.

I hope this clarifies a thing or two. If you find errors, please make me aware of them so I can learn from it.
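To make the arithmetic easy to check, here is a small Python sketch (an illustration of the calculation above, not p2pool code) that reproduces these figures under the same assumptions: a 600-second expected block interval, a 30-second expected share interval, 100 kB per share, and truncation at 20 shares.

Code:
# Sketch only: reproduces the exponential/Poisson figures in the post above.
import math

BLOCK_INTERVAL = 600.0   # expected seconds between Bitcoin blocks
SHARE_INTERVAL = 30.0    # expected seconds between p2pool shares
KB_PER_SHARE = 100.0     # assumed template growth per share (mainnet model)
T = 300.0                # window of interest: 5 minutes

# Exponential interarrival times for blocks
p_over = math.exp(-T / BLOCK_INTERVAL)   # Pr{IAT > 300}
p_under = 1.0 - p_over                   # Pr{IAT < 300}

# Poisson-distributed share count in a 300 s window (mean = 10)
mean_shares = T / SHARE_INTERVAL
def poisson_pmf(k, lam):
    return lam ** k / math.factorial(k) * math.exp(-lam)

# Expected block size, truncating at 20 shares as in the post above
expected_kb = sum(k * KB_PER_SHARE * poisson_pmf(k, mean_shares)
                  for k in range(0, 21))

print("Pr{IAT > 300s} = %.5f" % p_over)                              # ~0.60653
print("Pr{IAT < 300s} = %.5f" % p_under)                             # ~0.39347
print("Pr{0 shares in 300s} = %.5f" % poisson_pmf(0, mean_shares))   # ~0.00005
print("Expected size after 300s ~ %.2f kB" % expected_kb)            # ~996.5 kB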
|
|
|
|
kano
Legendary
Offline
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
|
|
August 29, 2017, 12:59:59 PM |
|
Quote from: Cryptonomist on August 29, 2017, 09:16:13 AM
Hello, The Bitcoin mining process can be modelled as a Poisson process. ...

Firstly, the time between any 2 blocks is what matters: when p2pool finds the 2nd block, it doesn't matter who found the previous one. If p2pool happens to get two in a row, well, good to know; the chance of that is simply the size of p2pool over the whole network.

The point of the problem is that standard p2pool will only allow adding 100KB per 30 seconds, given 30 seconds is the expected time per share-chain share. That's a simple linear function:

300s = 30s x 10 = 100KB x 10 = 1MB

Until 300s it's not expected to be a full block, thus the CDF comes into play to simply say that 39.3% of blocks found on p2pool will be 900KB or less.

Almost everyone on the bitcoin network is expected to use up the same best transactions for each block found, and thus all pools including p2pool should normally reset back to zero when a block is found. But then again, as before, p2pool will produce, on average, only a 100KB block in the first 30s after any previous block, no matter who found it - p2pool or someone else. That's not the case by design for a 'standard' pool, though a lot of pools create empty or partial blocks in the first work generated after a block, but then switch to a full block very shortly after that.

As for the CDF, it's simply the area under the Poisson distribution curve. You normally integrate a function to get the area under a curve, but in the case of the Poisson there are simple approximations to calculate it, given certain limitations. The simplest calculation is (1 - e^(-x)) where x is the ratio, e.g. 50% = 0.5. As long as x isn't too small, this gives a good approximation, i.e. 1 - e^(-0.5) = 0.393... or 39.3% ... thus a simple (and correct) explanation.
|
|
|
|
jtoomim
|
|
August 29, 2017, 03:12:41 PM |
|
Quote from: kano on August 29, 2017, 12:59:59 PM
The point of the problem is that standard p2pool will only allow adding 100KB per 30 seconds, given 30 seconds is the expected time per share-chain share. That's a simple linear function: 300s = 30s x 10 = 100KB x 10 = 1MB

Note: 100 kB per 30-second share is a bit of an oversimplification of how the mainnet code works. In reality, it's sometimes more than 100 kB (when transactions can be resurrected from a previous block round), and often less than 100 kB (when different nodes are using different transaction sets, when the transactions in a template change, or when transactions are large).

I have looked closely at the shares mined for two blocks on mainnet. In the first instance, it took around 20 shares to get up to 1 MB instead of 10 shares. In the second instance, it hadn't gotten to 1 MB even after 16 shares. In both, there were several shares that were actually smaller than the previous share.
|
Hosting bitcoin miners for $65 to $80/kW/month on clean, cheap hydro power. http://Toom.im
|
|
|
kano
Legendary
Offline
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
|
|
August 29, 2017, 03:26:41 PM |
|
Quote from: jtoomim on August 29, 2017, 03:12:41 PM
Note: 100 kB per 30 second share is a bit of an oversimplification of how the mainnet code works. ...

Well, you can probably add the word 'expected' to every sentence I wrote, but that was the point of it all: the expected averages (or limits), not exactly what happened at any particular event.
|
|
|
|
jtoomim
|
|
August 29, 2017, 04:20:07 PM |
|
I just pushed a performance improvement commit to 1mb_segwit. It seems to reduce CPU usage on pypy by about 45% and RAM usage by about 65% by improving the code for data deserialization. Share loading time went from 31 s to 17 s on pypy and from 101 s to 79 s on CPython. There may be additional performance benefits to be had by doing some work on data serialization in addition to deserialization.
|
Hosting bitcoin miners for $65 to $80/kW/month on clean, cheap hydro power. http://Toom.im
|
|
|
Comandante77
Newbie
Offline
Activity: 12
Merit: 0
|
|
August 29, 2017, 06:21:58 PM |
|
Strange situation. I installed and ran p2pool in solo mining mode 5 days ago for DASH. For 3 or 4 days everything was fine, something like 4-5 blocks per 24h. And now 0 blocks per 24h.
Maybe some tuning will help?
Miners are Baikal.
|
|
|
|
ChicagoCryptocurrency
Member
Offline
Activity: 210
Merit: 10
|
|
August 29, 2017, 06:58:18 PM |
|
Hey, guys! So glad to see p2pool still in discussion and alive. I had a question in regards to Bitcoin Cash: can I mine it with p2pool, and is there any thread I can be directed to for the resources?

@Kano Good to see you still active. Thanks for all the help you provide to the Bitcoin Community! You the man!
Thanks
|
|
|
|
jtoomim
|
|
August 29, 2017, 07:02:33 PM |
|
Quote from: ChicagoCryptocurrency on August 29, 2017, 06:58:18 PM
Bitcoin Cash can I mine it with p2pool
Not yet. I've been working on adding support for it to jtoomimnet, but it's only about 40% done.
|
Hosting bitcoin miners for $65 to $80/kW/month on clean, cheap hydro power. http://Toom.im
|
|
|
ChicagoCryptocurrency
Member
Offline
Activity: 210
Merit: 10
|
|
August 29, 2017, 07:59:44 PM |
|
Quote from: jtoomim on August 29, 2017, 07:02:33 PM
Not yet. I've been working on adding support for it to jtoomimnet, but it's only about 40% done.

Keep me posted, I would love to check it out.
|
|
|
|
belcher
|
|
August 29, 2017, 09:45:14 PM |
|
Thanks for your reply jtoomim.

Quote from: jtoomim
I would like to strip all transaction data out of the share data structure in the share chain in order to cut this memory footprint issue and reduce the CPU requirements for processing shares, but until that is done, increasing the share chain length is a bad idea.

How would you do this? In fact, why do shares need references to transactions at all? They already commit to all the transactions via the merkle root. If a hasher turns out to have created an invalid block then they will lose money.
|
1HZBd22eQLgbwxjwbCtSjhoPFWxQg8rBd9 JoinMarket - CoinJoin that people will actually use. PGP fingerprint: 0A8B 038F 5E10 CC27 89BF CFFF EF73 4EA6 77F3 1129
|
|
|
jtoomim
|
|
August 29, 2017, 10:18:44 PM |
|
It's part of p2pool's fast block propagation technology, which for p2pool is part of the consensus layer. P2pool had one of the first systems for propagating a block without transmitting its full serialized size over the network. It did this by having share objects encode the block's transaction list as (previous share's index, transaction index) tuples. Usually, this encoding takes two or three bytes per transaction. For transactions that are being included in a share for the first time, p2pool encodes them in shares as the full 32-byte transaction hash, and assumes that the recipient of the share will already have the mapping from the 32-byte hash to the full transaction. (This is ensured by code in the p2p layer.)
P2pool in principle does not need to include the full (encoded) transaction list in the share object. But getting rid of that means getting rid of p2pool's fast block propagation tech. Since modern fast block propagation tech has advanced so much outside of p2pool (e.g. FIBRE, Falcon, Xthin, Compact Blocks), I think that p2pool should strip it out and leave that task to the better (and faster) code in bitcoind. Alternately, if fast block propagation in p2pool is desirable, it should probably be done outside of the consensus layer, so that it only gets used for actual blocks (where the resource usage is actually useful) rather than on every single share.
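As a rough illustration of the encoding described above (a simplified sketch, not p2pool's actual data structures or serialization format; the names are illustrative), a share could reference most transactions with a compact (shares-back, index) pair and fall back to the full 32-byte hash for transactions appearing for the first time:

Code:
# Simplified sketch of the share transaction-reference idea described above.
# Not p2pool's real classes or wire format; field names are illustrative.

def encode_tx_refs(block_txids, recent_shares):
    """Encode a block's txid list against the txids of recent shares.

    recent_shares is a list of previous shares' txid lists, most recent first.
    Returns ('ref', shares_back, index) tuples (a few bytes each when
    serialized) or ('new', txid) entries carrying the full transaction hash.
    """
    refs = []
    for txid in block_txids:
        for shares_back, share_txids in enumerate(recent_shares):
            if txid in share_txids:
                refs.append(('ref', shares_back, share_txids.index(txid)))
                break
        else:
            # First appearance in the share chain: the peer is assumed to
            # already have the full tx for this hash (handled by the p2p
            # layer), so only the hash is referenced here.
            refs.append(('new', txid))
    return refs

# Example: two older shares already referenced tx_a and tx_b; tx_c is new.
old_shares = [['tx_a', 'tx_b'], ['tx_b']]
print(encode_tx_refs(['tx_b', 'tx_c'], old_shares))
# [('ref', 0, 1), ('new', 'tx_c')]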
|
Hosting bitcoin miners for $65 to $80/kW/month on clean, cheap hydro power. http://Toom.im
|
|
|
belcher
|
|
August 30, 2017, 04:39:15 PM |
|
I came up with an idea to improve p2pool's scalability and greatly reduce payout variance: https://bitcointalk.org/index.php?topic=2135429.0

Please tell me what you think.
|
1HZBd22eQLgbwxjwbCtSjhoPFWxQg8rBd9 JoinMarket - CoinJoin that people will actually use. PGP fingerprint: 0A8B 038F 5E10 CC27 89BF CFFF EF73 4EA6 77F3 1129
|
|
|
Meuh6879
Legendary
Offline
Activity: 1512
Merit: 1012
|
|
August 30, 2017, 07:03:20 PM |
|
2017-08-30 19:00:53 Switchover imminent. Upgraded: 82.511% Threshold: 95.000%
|
|
|
|
jtoomim
|
|
August 30, 2017, 08:28:53 PM |
|
Meanwhile, on the jtoomimnet branch of p2pool...

Code:
2017-08-30 13:29:59.819445 Generating a share with 1000644 bytes (36617 new) and 1779 transactions (49 new)
|
Hosting bitcoin miners for $65 to $80/kW/month on clean, cheap hydro power. http://Toom.im
|
|
|
|