gyverlb
|
|
June 09, 2013, 07:30:10 PM |
|
My P2Pool node hashes at 5 GH/s. That's roughly a 0.5% chance of finding any given p2pool block.
So why should I reduce my efficiency on every payout just to collect the tx fees once every 200 p2pool blocks?
Where were you asked to reduce your efficiency?
|
|
|
|
zvs
Legendary
Offline
Activity: 1680
Merit: 1000
https://web.archive.org/web/*/nogleg.com
|
|
June 10, 2013, 02:45:17 AM |
|
re: Punishing share for 'Block-stale detected!', etc.
I get a lot of orphans when this happens... because my bitcoind is too fast?
I went to several other pools (my other one in Florida, OVH in Canada, p2pool.org, some others) and they all had my share listed, which means that it was accepted and got orphaned later.
This occurred after several entries like:
2013-06-09 21:34:35.132788 Skipping from block 4bc4be449a7de0c01cb29a9d661045a19393dea2cefc2e26b to block 9b14c16ea0d67eda77564adca4de0ac3e84b1efbd8cfb5adce!
2013-06-09 21:34:36.831062 Punishing share for 'Block-stale detected! 4bc4be449a7de0c01cb29a9d661045a19393dea2cefc2e26b < 9b14c16ea0d67eda77564adca4de0ac3e84b1efbd8cfb5adce'! Jumping from 2ba6238a to e8ee1640!
2013-06-09 21:34:36.917817 Punishing share for 'Block-stale detected! 4bc4be449a7de0c01cb29a9d661045a19393dea2cefc2e26b < 9b14c16ea0d67eda77564adca4de0ac3e84b1efbd8cfb5adce'! Jumping from 2ba6238a to e8ee1640!
2013-06-09 21:34:36.920728 Punishing share for 'Block-stale detected! 4bc4be449a7de0c01cb29a9d661045a19393dea2cefc2e26b < 9b14c16ea0d67eda77564adca4de0ac3e84b1efbd8cfb5adce'! Jumping from 2ba6238a to e8ee1640!
2013-06-09 21:34:40.064626 GOT SHARE! Brusthonin 32cdc8e4 prev e8ee1640 age 2.69s
... and then that share was orphaned. It seems to me that if I modified the source to turn off the whole 'punishing share' stuff, I'd get fewer orphans?
|
|
|
|
bicer
Newbie
Offline
Activity: 23
Merit: 0
|
|
June 10, 2013, 04:12:22 AM |
|
reaching 3 to 5% additional fee income
Any way to calculate this? These stats are obviously underestimating what you can get: they are based on what most miners do, not what they could do. The fees could easily be 2 to 2.5% with bitcoind 0.8.2 default settings.
Con: you get a lot more orphans and more processing power is needed, not so much because of the getblock latency as because of the actual transferring within the p2pool network itself.
Not if you configure bitcoind and p2pool to lower their network traffic as (again...) explained in the guide in my signature. TL;DR: I have an average ADSL connection and use it for several other kinds of traffic (bitcoind + P2Pool is ~30% of my bandwidth). I can even use 1 MB blocks and lower minimum fees without hurting my efficiency (reaching 3 to 5% additional fee income). I only had to limit the number of bitcoind connections (10) and P2Pool connections (5 outgoing + 5 incoming) to get this result.
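For reference, a minimal sketch of that kind of setup. The values are illustrative and the P2Pool flag names are from memory, so check bitcoind --help and run_p2pool.py --help for your versions:

bitcoin.conf:
maxconnections=10
blockmaxsize=1000000

P2Pool started with reduced peer counts:
python run_p2pool.py --outgoing-conns 5 --max-conns 5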
|
|
|
|
gyverlb
|
|
June 10, 2013, 04:24:33 AM |
|
re: Punishing share for 'Block-stale detected!', etc.
I get a lot of orphans when this happens... because my bitcoind is too fast?
I went to several other pools (my other one in Florida, OVH in Canada, p2pool.org, some others) and they all had my share listed, which means that it was accepted and got orphaned later.
This occurred after several entries like:
2013-06-09 21:34:35.132788 Skipping from block 4bc4be449a7de0c01cb29a9d661045a19393dea2cefc2e26b to block 9b14c16ea0d67eda77564adca4de0ac3e84b1efbd8cfb5adce!
2013-06-09 21:34:36.831062 Punishing share for 'Block-stale detected! 4bc4be449a7de0c01cb29a9d661045a19393dea2cefc2e26b < 9b14c16ea0d67eda77564adca4de0ac3e84b1efbd8cfb5adce'! Jumping from 2ba6238a to e8ee1640!
2013-06-09 21:34:36.917817 Punishing share for 'Block-stale detected! 4bc4be449a7de0c01cb29a9d661045a19393dea2cefc2e26b < 9b14c16ea0d67eda77564adca4de0ac3e84b1efbd8cfb5adce'! Jumping from 2ba6238a to e8ee1640!
2013-06-09 21:34:36.920728 Punishing share for 'Block-stale detected! 4bc4be449a7de0c01cb29a9d661045a19393dea2cefc2e26b < 9b14c16ea0d67eda77564adca4de0ac3e84b1efbd8cfb5adce'! Jumping from 2ba6238a to e8ee1640!
2013-06-09 21:34:40.064626 GOT SHARE! Brusthonin 32cdc8e4 prev e8ee1640 age 2.69s
... and then that share was orphaned. It seems to me that if I modified the source to turn off the whole 'punishing share' stuff, I'd get fewer orphans?
If I understand correctly, P2Pool doesn't want to build a share on top of one it knows to be stale (built on a previous Bitcoin block). It may be that your bitcoind has better connections to the Bitcoin P2P network and learns of new blocks before others: if the majority of the network is slower, it will not see the share as "Block-stale" as you do and will not punish it, refusing your share in the relatively rare case where you submit one just after this "Block-stale". You could lose a bit of income this way, but it should not be noticeable: this can only affect you during the ~5s a block needs to fully propagate across the Bitcoin network, which is less than 1% of the block interval, so in the worst case you shouldn't lose more than 0.5% of efficiency.
If you modified the source to turn off the 'punishing share' behavior, you would find your node at the opposite end of the spectrum: it would build on potential stales and create orphans too.
You can find out if it's a problem by counting the shares you found just after a block that were orphaned versus those that were accepted. If you have enough logs/hashrate you can get a meaningful sample (one hundred shares generated just after blocks would be adequate) and see whether this population has a higher orphan rate than your overall orphan rate over the same period. This would confirm whether there is some imbalance, and I think we could even find out whether reversing the logic would benefit you in your case (I'm a bit tired and going to bed, but I think it's simple probabilities).
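A rough sketch of how to pull that sample out of your node's output, assuming it is saved to a file named p2pool.log and contains lines shaped like the ones quoted above ("Skipping from block ..." and "GOT SHARE! ..."). Whether each of those shares ended up orphaned still has to be checked by hand, e.g. against the web interface:

import re
from datetime import datetime, timedelta

WINDOW = timedelta(seconds=5)   # how long after a block a share counts as "just after"
LINE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) (.*)$")

last_block_time = None
suspect_shares = []             # share ids found within WINDOW of a new block

with open("p2pool.log") as log:
    for raw in log:
        m = LINE.match(raw)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
        rest = m.group(2)
        if rest.startswith("Skipping from block"):
            last_block_time = ts            # a new Bitcoin block was just seen
        elif rest.startswith("GOT SHARE!"):
            share_id = rest.split()[3]      # e.g. "32cdc8e4" in the line quoted above
            if last_block_time is not None and ts - last_block_time <= WINDOW:
                suspect_shares.append(share_id)

print(len(suspect_shares), "shares found within", WINDOW, "of a block:", suspect_shares)
# Compare how many of these were orphaned with your overall orphan rate over the same period.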
|
|
|
|
gyverlb
|
|
June 10, 2013, 04:34:42 AM |
|
reaching 3 to 5% additional fee income
Any way to calculate this?
Ideally: run another node with appropriate settings and parse the result of getblocktemplate to compute the fees over an extended period. My estimate is less accurate than that: it's based on a fairly small sample (several dozen data points with these settings over the past few days), so it can only illustrate what you could have gotten over those days (and maybe detect a trend). If one large pool upgrades its configuration and starts confirming the transactions currently waiting, this number might go down. On the other hand, if a service starts generating large numbers of transactions with fees just below what most miners accept (accepting a delay until a minority of miners confirm them or the transactions age enough), this number will go up.
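To illustrate the "ideal" approach, here is a minimal sketch that asks a local bitcoind for its current block template and sums the fees it would collect. It assumes a local bitcoind with RPC enabled; the credentials are placeholders, the 25 BTC subsidy matches the 2013 reward, and much newer bitcoind versions expect a template request parameter such as {"rules": ["segwit"]}:

import base64, json, urllib.request

RPC_URL = "http://127.0.0.1:8332"
RPC_USER, RPC_PASS = "rpcuser", "rpcpassword"   # placeholders: use your own rpcuser/rpcpassword

def rpc(method, params=None):
    payload = json.dumps({"id": 0, "method": method, "params": params or []}).encode()
    req = urllib.request.Request(RPC_URL, payload, {"Content-Type": "application/json"})
    auth = base64.b64encode(("%s:%s" % (RPC_USER, RPC_PASS)).encode()).decode()
    req.add_header("Authorization", "Basic " + auth)
    return json.loads(urllib.request.urlopen(req).read())["result"]

tmpl = rpc("getblocktemplate")
subsidy = 25 * 10**8                            # block subsidy in satoshis (2013 era)
fees = tmpl["coinbasevalue"] - subsidy          # or sum(tx.get("fee", 0) for tx in tmpl["transactions"])
print("fees waiting: %.8f BTC (%.1f%% on top of the subsidy)"
      % (fees / 1e8, 100.0 * fees / subsidy))
# Run this periodically (e.g. from cron) for a few days to get a meaningful average.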
|
|
|
|
zvs
Legendary
Offline
Activity: 1680
Merit: 1000
https://web.archive.org/web/*/nogleg.com
|
|
June 10, 2013, 11:30:23 AM Last edit: June 10, 2013, 12:10:01 PM by zvs |
|
re: Punishing share for 'Block-stale detected!', etc.
I get a lot of orphans when this happens... because my bitcoind is too fast?
I went to several other pools (my other one in Florida, OVH in Canada, p2pool.org, some others) and they all had my share listed, which means that it was accepted and got orphaned later.
This occurred after several entries like:
2013-06-09 21:34:35.132788 Skipping from block 4bc4be449a7de0c01cb29a9d661045a19393dea2cefc2e26b to block 9b14c16ea0d67eda77564adca4de0ac3e84b1efbd8cfb5adce!
2013-06-09 21:34:36.831062 Punishing share for 'Block-stale detected! 4bc4be449a7de0c01cb29a9d661045a19393dea2cefc2e26b < 9b14c16ea0d67eda77564adca4de0ac3e84b1efbd8cfb5adce'! Jumping from 2ba6238a to e8ee1640!
2013-06-09 21:34:36.917817 Punishing share for 'Block-stale detected! 4bc4be449a7de0c01cb29a9d661045a19393dea2cefc2e26b < 9b14c16ea0d67eda77564adca4de0ac3e84b1efbd8cfb5adce'! Jumping from 2ba6238a to e8ee1640!
2013-06-09 21:34:36.920728 Punishing share for 'Block-stale detected! 4bc4be449a7de0c01cb29a9d661045a19393dea2cefc2e26b < 9b14c16ea0d67eda77564adca4de0ac3e84b1efbd8cfb5adce'! Jumping from 2ba6238a to e8ee1640!
2013-06-09 21:34:40.064626 GOT SHARE! Brusthonin 32cdc8e4 prev e8ee1640 age 2.69s
... and then that share was orphaned. It seems to me that if I modified the source to turn off the whole 'punishing share' stuff, I'd get fewer orphans?
If I understand correctly, P2Pool doesn't want to build a share on top of one it knows to be stale (built on a previous Bitcoin block). It may be that your bitcoind has better connections to the Bitcoin P2P network and learns of new blocks before others: if the majority of the network is slower, it will not see the share as "Block-stale" as you do and will not punish it, refusing your share in the relatively rare case where you submit one just after this "Block-stale". You could lose a bit of income this way, but it should not be noticeable: this can only affect you during the ~5s a block needs to fully propagate across the Bitcoin network, which is less than 1% of the block interval, so in the worst case you shouldn't lose more than 0.5% of efficiency.
If you modified the source to turn off the 'punishing share' behavior, you would find your node at the opposite end of the spectrum: it would build on potential stales and create orphans too.
You can find out if it's a problem by counting the shares you found just after a block that were orphaned versus those that were accepted. If you have enough logs/hashrate you can get a meaningful sample (one hundred shares generated just after blocks would be adequate) and see whether this population has a higher orphan rate than your overall orphan rate over the same period. This would confirm whether there is some imbalance, and I think we could even find out whether reversing the logic would benefit you in your case (I'm a bit tired and going to bed, but I think it's simple probabilities).
Yeah, like this:
2013-06-10 06:10:29.076187 Punishing share for 'Block-stale detected! 4d2f83a096d4f50ce5fccfd3d81fb97988ad9eac42ddfad5b6 < 1c3c0e8358b6fe20f6d09bfeef1f9aa0fc897ad634c8da56aa'! Jumping from 5538fa58 to be8220df!
2013-06-10 06:10:31.197403 GOT SHARE! Brusthonin 10823d52 prev be8220df age 1.81s
2013-06-10 06:10:33.597242 Shares: 33 (1 orphan, 4 dead) Stale rate: ~15.2% (6-31%) Efficiency: ~104.6% (85-116%) Current payout: 0.3049 BTC
2013-06-10 06:10:43.603981 Shares: 33 (2 orphan, 4 dead) Stale rate: ~18.2% (8-35%) Efficiency: ~100.9% (80-113%) Current payout: 0.3017 BTC
That just happened again.
Oh, here is what it looked like from a different node:
2013-06-10 06:10:29.383634 Skipping from block 4d2f83a096d4f50ce5fccfd3d81fb97988ad9eac42ddfad5b6 to block 1c3c0e8358b6fe20f6d09bfeef1f9aa0fc897ad634c8da56aa!
2013-06-10 06:10:30.688724 Punishing share for 'Block-stale detected! 4d2f83a096d4f50ce5fccfd3d81fb97988ad9eac42ddfad5b6 < 1c3c0e8358b6fe20f6d09bfeef1f9aa0fc897ad634c8da56aa'! Jumping from 5538fa58 to be8220df!
2013-06-10 06:10:35.396229 P2Pool: 17360 shares in chain (17364 verified/17364 total) Peers: 64 (8 incoming)
2013-06-10 06:10:35.396380 Local: 0H/s in last 0.0 seconds Local dead on arrival: Expected time to share:
2013-06-10 06:10:35.396506 Shares: 0 (0 orphan, 0 dead) Stale rate: Efficiency: Current payout: 0.3049 BTC
2013-06-10 06:10:35.396555 Pool: 736GH/s Stale rate: 18.9% Expected time to block: 1.1 days
Nobody is mining on it though, so it doesn't show reports of new work. The US node runs slower since it's on a crappy dual Xeon and only using 8 cores.
Oh, you can see how it did log my share as valid though, since the payout increased to 0.3049. The times:
06/10/2013_06:56:34:620911792 (US)
06/10/2013_06:56:34:634211668 (Germany)
|
|
|
|
zvs
Legendary
Offline
Activity: 1680
Merit: 1000
https://web.archive.org/web/*/nogleg.com
|
|
June 10, 2013, 12:03:16 PM Last edit: June 10, 2013, 12:16:34 PM by zvs |
|
The one before the block and the one after the block are from 185Kip6odGYs4eSHD6DYsWVDJBg2DNLfiV, though wouldn't that only matter if the next one in the chain was also from 185Kip6odGYs4eSHD6DYsWVDJBg2DNLfiV? (Mine would have been sandwiched between the two.)
Edit: OK, so the real issue here would then be that 185Kip6odGYs4eSHD6DYsWVDJBg2DNLfiV is operating at 2s latency or higher?
All I can think of is that his client tossed out my share as invalid, then he got another share and his node broadcast that share, and for some reason my share got replaced by his. That should only happen if he had gotten another one after that though, eh? Is there some other explanation? Because that could be abused.
|
|
|
|
rav3n_pl
Legendary
Offline
Activity: 1361
Merit: 1003
Don`t panic! Organize!
|
|
June 10, 2013, 02:03:27 PM |
|
It is possible. Share chain timeline:
s1 s2 s3 ... sa
sb < sa  <- yours
sx < sa  <- his
sy < sx  <- his
and your sb is orphaned. The longer chain wins. If he quickly gets 2 shares he can orphan any other single share. Worst case scenario: every node finds a share, but one node finds 2 shares quickly and orphans all the others.
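A toy sketch of that "longest chain wins" selection (not P2Pool's actual code, just the idea; the share names follow the timeline above):

# Each share records its parent; the tip of the longest chain wins.
shares = {"sa": None}       # sa is the last share both nodes agree on

def add(share, parent):
    shares[share] = parent

def height(share):
    h = 0
    while shares[share] is not None:
        share, h = shares[share], h + 1
    return h

def chain(tip):
    out = []
    while tip is not None:
        out.append(tip)
        tip = shares[tip]
    return out

add("sb", "sa")   # your share, child of sa
add("sx", "sa")   # his first share, a competing child of sa
add("sy", "sx")   # his second share arrives quickly on top of sx

tip = max(shares, key=height)
winning = set(chain(tip))
print("best tip:", tip)                                      # -> sy
print("orphaned:", [s for s in shares if s not in winning])  # -> ['sb']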
|
|
|
|
gyverlb
|
|
June 10, 2013, 05:33:04 PM |
|
The one before the block and the one after the block are from 185Kip6odGYs4eSHD6DYsWVDJBg2DNLfiV, though wouldn't that only matter if the next one in the chain was also from 185Kip6odGYs4eSHD6DYsWVDJBg2DNLfiV? (Mine would have been sandwiched between the two.)
Edit: OK, so the real issue here would then be that 185Kip6odGYs4eSHD6DYsWVDJBg2DNLfiV is operating at 2s latency or higher?
Yes. There are 2 other contributing factors:
- a node always prefers mining on a share it just produced (this is a gamble which doesn't change fairness: you either get 2 shares through or none),
- according to the payouts, this address controls nearly 10% of the P2Pool hashrate, so it can orphan other shares more easily than the average node.
All I can think of is that his client tossed out my share as invalid, then he got another share and his node broadcast that share, and for some reason my share got replaced by his.
The longer chain always wins, so this behavior is expected. It shouldn't produce any imbalance though: it's more likely that the rest of the network (90% of the hashrate) will invalidate the "Block-stale" shares. One time out of 10 this address finds a share just before a block, and one time out of 100 it should find the following share too (sometimes both will be valid, sometimes the first will be a Block-stale that makes it through thanks to the second, and sometimes the Block-stale will be orphaned). So what you witnessed should happen roughly every 100 blocks (16 hours or less given the rise in hashrate) on average. At these time scales you would need several weeks of data to get a good enough sample.
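The back-of-the-envelope arithmetic behind that estimate, assuming this address really controls 10% of the pool's hashrate and the usual 10-minute Bitcoin block interval:

p = 0.10                          # fraction of P2Pool hashrate at this address
p_pair = p * p                    # it finds both the share right before a block and the next one
blocks_per_event = 1 / p_pair     # ~100 blocks
hours_per_event = blocks_per_event * 10 / 60.0   # ~16.7 hours at one block per 10 minutes
print(blocks_per_event, hours_per_event)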
|
|
|
|
zvs
Legendary
Offline
Activity: 1680
Merit: 1000
https://web.archive.org/web/*/nogleg.com
|
|
June 12, 2013, 03:21:13 AM |
|
Germany node:
2013-06-12 02:30:22 received block 00000000000000d9a40124da12aa3c7b26d7adad47eab9dc5cd95113d345a7b6
2013-06-12 02:30:24 received block 000000000000003eb0df7952601d567ed07a1683793c3aeb1c90fae6b378cfaf
That bitparking one came in late =p
US node:
2013-06-12 02:30:22 received block 000000000000003eb0df7952601d567ed07a1683793c3aeb1c90fae6b378cfaf
2013-06-12 02:30:26 received block 00000000000000d9a40124da12aa3c7b26d7adad47eab9dc5cd95113d345a7b6 from 5.9.24.81:25861
2013-06-12 02:30:27 received block 00000000000000d9a40124da12aa3c7b26d7adad47eab9dc5cd95113d345a7b6 from 127.0.0.1:14265
That p2pool one came in late, and it was even sent faster from Germany than from my local p2pool.
|
|
|
|
bitpop
Legendary
Offline
Activity: 2912
Merit: 1060
|
|
June 12, 2013, 08:28:07 AM |
|
So, Jalapeno on p2pool: yes or no?
|
|
|
|
gyverlb
|
|
June 12, 2013, 09:35:35 AM |
|
So, Jalapeno on p2pool: yes or no?
No (see my guide for why).
|
|
|
|
bitpop
Legendary
Offline
Activity: 2912
Merit: 1060
|
|
June 12, 2013, 09:38:32 AM |
|
Damn, I'm going to have to leave unless I can make cgminer use only the BFL on a second instance.
|
|
|
|
centove
|
|
June 12, 2013, 12:51:52 PM |
|
So, Jalapeno on p2pool: yes or no?
No (see my guide for why).
What about if you load-balance between p2pool and a traditional pool?
|
|
|
|
freshzive
|
|
June 13, 2013, 12:53:10 AM |
|
Does p2pool work with the USB Block Erupters (ASIC)?
|
|
|
|
daemondazz
|
|
June 13, 2013, 01:14:51 AM |
|
Does p2pool work with the USB Block Erupters (ASIC)?
Yes, I'm currently doing this.
|
Computers, Amateur Radio, Electronics, Aviation - 1dazzrAbMqNu6cUwh2dtYckNygG7jKs8S
|
|
|
freshzive
|
|
June 13, 2013, 06:05:00 AM |
|
Does p2pool work with the USB Block Erupters (ASIC)?
Yes, I'm currently doing this.
What flags? I'm getting super high rejects. EDIT: Never mind, I think that was just when I first started mining, for whatever reason.
|
|
|
|
daemondazz
|
|
June 13, 2013, 06:51:02 AM |
|
With cgminer I'm currently using: --icarus-options 115200:1:1 --icarus-timing 3.0=100 -S /dev/ttyUSB0
Power is currently off at home, so I can't check the current reject rate.
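For context, a complete command line against a local P2Pool node would look something like the one below. The payout address is a placeholder: P2Pool takes your payout address as the username and ignores the password.
cgminer -o http://127.0.0.1:9332 -u 1YourPayoutAddressHere -p x --icarus-options 115200:1:1 --icarus-timing 3.0=100 -S /dev/ttyUSB0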
|
Computers, Amateur Radio, Electronics, Aviation - 1dazzrAbMqNu6cUwh2dtYckNygG7jKs8S
|
|
|
MacDschie
Newbie
Offline
Activity: 33
Merit: 0
|
|
June 13, 2013, 08:52:44 AM |
|
Hi there, I have some problems connecting slush's stratum proxy to p2pool. The pool is up and running fine. When I start the proxy, I get the following output:
> ./mining_proxy.py -o localhost -p 9332 -gp 8341 -v
2013-06-13 10:47:38,023 DEBUG stats logger.get_logger # Logging initialized
2013-06-13 10:47:38,030 DEBUG protocol logger.get_logger # Logging initialized
2013-06-13 10:47:38,030 DEBUG socket_transport logger.get_logger # Logging initialized
2013-06-13 10:47:38,050 DEBUG proxy logger.get_logger # Logging initialized
2013-06-13 10:47:38,051 DEBUG proxy logger.get_logger # Logging initialized
2013-06-13 10:47:38,051 INFO proxy jobs.<module> # C extension for midstate not available. Using default implementation instead.
2013-06-13 10:47:38,051 DEBUG proxy logger.get_logger # Logging initialized
2013-06-13 10:47:38,051 DEBUG proxy logger.get_logger # Logging initialized
2013-06-13 10:47:38,052 DEBUG proxy logger.get_logger # Logging initialized
2013-06-13 10:47:38,052 DEBUG proxy logger.get_logger # Logging initialized
2013-06-13 10:47:38,052 DEBUG proxy logger.get_logger # Logging initialized
2013-06-13 10:47:38,052 DEBUG proxy logger.get_logger # Logging initialized
2013-06-13 10:47:38,060 ERROR proxy mining_proxy.main # Stratum host/port autodetection failed
Traceback (most recent call last):
  File "./mining_proxy.py", line 178, in main
    new_host = (yield utils.detect_stratum(args.host, args.port))
  File "/usr/local/lib/python2.7/dist-packages/Twisted-13.0.0-py2.7-linux-x86_64.egg/twisted/internet/defer.py", line 1070, in _inlineCallbacks
    result = g.send(result)
  File "/home/macdschie/src/stratum-mining-proxy/mining_libs/utils.py", line 69, in detect_stratum
    header = f.response_headers.get('x-stratum', None)[0]
TypeError: 'NoneType' object has no attribute '__getitem__'
2013-06-13 10:47:38,060 WARNING proxy mining_proxy.main # Stratum proxy version: 1.5.2
2013-06-13 10:47:38,062 WARNING proxy mining_proxy.test_update # Checking for updates...
2013-06-13 10:47:38,242 WARNING proxy mining_proxy.main # Trying to connect to Stratum pool at localhost:9332
2013-06-13 10:47:38,245 INFO stats stats.print_stats # 1 peers connected, state changed 1 times
2013-06-13 10:47:38,245 DEBUG protocol protocol.connectionMade # Connected 127.0.0.1
2013-06-13 10:47:38,245 DEBUG protocol protocol.connectionMade # Resuming connection: []
2013-06-13 10:47:38,245 INFO proxy mining_proxy.on_connect # Connected to Stratum pool at localhost:9332
2013-06-13 10:47:38,245 INFO proxy mining_proxy.on_connect # Subscribing for mining jobs
2013-06-13 10:47:38,245 DEBUG protocol protocol.writeJsonRequest # < {"params": [], "id": 1, "method": "mining.subscribe"}
2013-06-13 10:47:38,248 DEBUG protocol protocol.lineReceived # > {u'result': [[u'mining.notify', u'ae6812eb4cd7735a302a8a9dd95cf71f'], u'', 2], u'jsonrpc': u'2.0', u'id': 1, u'error': None}
2013-06-13 10:47:38,250 WARNING proxy mining_proxy.main # -----------------------------------------------------------------------
2013-06-13 10:47:38,251 WARNING proxy mining_proxy.main # PROXY IS LISTENING ON ALL IPs ON PORT 3333 (stratum) AND 8341 (getwork)
2013-06-13 10:47:38,251 WARNING proxy mining_proxy.main # -----------------------------------------------------------------------
2013-06-13 10:47:38,251 DEBUG protocol protocol.lineReceived # > {u'jsonrpc': u'2.0', u'params': [0.9999847412109375], u'method': u'mining.set_difficulty', u'id': 158037711}
2013-06-13 10:47:38,251 INFO proxy client_service.handle_event # Setting new difficulty: 0.999984741211
2013-06-13 10:47:38,251 DEBUG protocol protocol.writeJsonResponse # < {"error": null, "id": 158037711, "result": null}
2013-06-13 10:47:38,252 INFO proxy mining_proxy.on_disconnect # Disconnected from Stratum pool at localhost:9332
2013-06-13 10:47:38,252 INFO stats stats.print_stats # 0 peers connected, state changed 1 times
2013-06-13 10:47:38,252 DEBUG socket_transport socket_transport.clientConnectionLost # [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.error.ConnectionLost'>: Connection to the other side was lost in a non-clean fashion.
]
2013-06-13 10:47:40,618 INFO stats stats.print_stats # 1 peers connected, state changed 1 times
2013-06-13 10:47:40,618 DEBUG protocol protocol.connectionMade # Connected 127.0.0.1
2013-06-13 10:47:40,618 DEBUG protocol protocol.connectionMade # Resuming connection: []
2013-06-13 10:47:40,618 INFO proxy mining_proxy.on_connect # Connected to Stratum pool at localhost:9332
2013-06-13 10:47:40,618 INFO proxy mining_proxy.on_connect # Subscribing for mining jobs
2013-06-13 10:47:40,619 DEBUG protocol protocol.writeJsonRequest # < {"params": [], "id": 1, "method": "mining.subscribe"}
2013-06-13 10:47:40,621 DEBUG protocol protocol.lineReceived # > {u'result': [[u'mining.notify', u'ae6812eb4cd7735a302a8a9dd95cf71f'], u'', 2], u'jsonrpc': u'2.0', u'id': 1, u'error': None}
2013-06-13 10:47:40,621 DEBUG protocol protocol.lineReceived # > {u'jsonrpc': u'2.0', u'params': [0.9999847412109375], u'method': u'mining.set_difficulty', u'id': 650029787}
2013-06-13 10:47:40,621 INFO proxy client_service.handle_event # Setting new difficulty: 0.999984741211
2013-06-13 10:47:40,621 DEBUG protocol protocol.writeJsonResponse # < {"error": null, "id": 650029787, "result": null}
2013-06-13 10:47:40,622 INFO proxy mining_proxy.on_disconnect # Disconnected from Stratum pool at localhost:9332
2013-06-13 10:47:40,622 INFO stats stats.print_stats # 0 peers connected, state changed 1 times
2013-06-13 10:47:40,622 DEBUG socket_transport socket_transport.clientConnectionLost # [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.error.ConnectionLost'>: Connection to the other side was lost in a non-clean fashion. ]
The last few lines repeat over and over again: the proxy tries to connect to the pool, exchanges some parameters and disconnects. I cannot find any output regarding the connection attempts in the p2pool logs. When I start the proxy with other pools it works fine. Also, the traceback at the beginning (TypeError: 'NoneType' object has no attribute '__getitem__') only appears with p2pool. Does anybody have any clue what's going on here?
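If I read the traceback correctly, the autodetection fetches the pool URL over HTTP and looks for an 'x-stratum' response header to find the stratum server; when that header is missing, .get() returns None and indexing it raises exactly the TypeError above. A minimal reproduction of just that failure mode:

response_headers = {}                                 # pool reply without an 'x-stratum' header
header = response_headers.get('x-stratum', None)      # -> None
# header[0] would raise: TypeError: 'NoneType' object has no attribute '__getitem__'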
|
|
|
|
rav3n_pl
Legendary
Offline
Activity: 1361
Merit: 1003
Don`t panic! Organize!
|
|
June 13, 2013, 09:16:13 AM |
|
It has been described before: you need to use forrestv's version of the stratum mining proxy with P2Pool. You should also compile the midstate extension.
|
|
|
|
|