yslyung
Legendary
Offline
Activity: 1500
Merit: 1002
Mine Mine Mine
|
|
January 14, 2015, 07:34:58 PM |
|
It will take a while to download the p2pool sharechain, so expect some errors at first. Just let it do its thing for a while. I have never used an S4, but I suggest you reduce the queue setting to 1 or 0 and let p2pool decide the diff level, i.e. don't set it.
Which difficulty are you talking about, PatMan? The current share diff or the miner diff? The miner diff is maxed out at 999.984741.
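For anyone following along, a sketch of what that advice looks like on a cgminer-style miner (the node hostname and payout address below are placeholders, and --queue is cgminer's work-queue flag):

```shell
# Placeholder node hostname and payout address -- substitute your own.
# --queue 0 keeps the local work queue minimal, as suggested above.
# No difficulty suffix on the username, so p2pool chooses the diff itself.
cgminer -o stratum+tcp://your-node:9332 -u 1YourPayoutAddress -p x --queue 0
```

If memory serves, p2pool also honors a difficulty suffix appended to the worker name; leaving it off, as here, is exactly the "don't set it" advice.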
|
|
|
|
IYFTech
|
|
January 14, 2015, 08:23:52 PM |
|
I presume he means don't set it at the miner? I never set it with p2pool, as I find p2pool manages it all very nicely.
|
|
|
|
PatMan
|
|
January 14, 2015, 08:59:01 PM |
|
I presume he means don't set it at the miner? I never set it with p2pool as I find p2pool manages it all very nicely
Exactly. Sorry, I've been AFK trying to coax a dead S5 into life again.......
|
|
|
|
IYFTech
|
|
January 14, 2015, 09:01:33 PM |
|
I presume he means don't set it at the miner? I never set it with p2pool as I find p2pool manages it all very nicely
Exactly. Sorry, I've been AFK trying to coax a dead S5 into life again.......
Any luck?
|
|
|
|
PatMan
|
|
January 14, 2015, 09:05:51 PM |
|
I presume he means don't set it at the miner? I never set it with p2pool as I find p2pool manages it all very nicely
Exactly. Sorry, I've been AFK trying to coax a dead S5 into life again.......
Any luck?
It lives!!
|
|
|
|
jedimstr
|
|
January 15, 2015, 01:48:58 PM |
|
So with the latest fix in github for p2pool's random address enhancement (which I don't use, since I point my node at a separate wallet rather than my node's Bitcoin Core), and with my curiosity getting the better of me over Bitcoin Core 0.10.0rc3, released a few days ago... I restarted my node with the latest pulls of both. I'm definitely liking the new Peers tab in Bitcoin Core (running with no-wallet settings): an at-a-glance realtime view of the currently connected peers and their ping times. I know I can get similar info from the console commands, but this is nicer, imho. So far so good... hopefully there aren't any gotchas.
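For anyone stuck on the console, the same numbers can be pulled from getpeerinfo and summarized with a few lines of python; a minimal sketch (the peer addresses and ping times below are made-up sample data, not real output):

```python
import json

# Made-up sample in the shape of `bitcoin-cli getpeerinfo` output;
# only the two fields used below are included.
sample = json.loads("""
[
  {"addr": "203.0.113.5:8333",  "pingtime": 0.042},
  {"addr": "198.51.100.7:8333", "pingtime": 0.131}
]
""")

# Summarize peer count and ping times, roughly what the Peers tab shows.
pings = [p["pingtime"] for p in sample if "pingtime" in p]
print("peers:", len(sample))
print("best ping: %.3fs" % min(pings))
print("avg ping: %.3fs" % (sum(pings) / len(pings)))
```

In practice you would feed real `bitcoin-cli getpeerinfo` output into the script instead of the hardcoded sample.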
|
|
|
|
corrow
Member
Offline
Activity: 97
Merit: 10
|
|
January 15, 2015, 03:02:10 PM Last edit: January 15, 2015, 03:45:46 PM by corrow |
|
2014-12-01 08:58:52.270247 > p2pool.util.p2protocol.TooLong: payload too long
What's up?
Did it become an orphan because the payload was too long? Too many users on the pool, too many transactions in a block, or what?
I received the same error as well, at around the same time. I will post it on github if anyone hasn't already.

I posted the issue here https://github.com/forrestv/p2pool/issues/238 so others can update as well. Did this happen to anyone else?

Well, I experience more or less the same issue. P2Pool and bitcoind do not crash, but the static site of P2Pool ( http://localhost:9332/static/ ) freezes during this error and mining becomes impossible. Sometimes it lasts up to 60 seconds and sometimes several minutes; the error keeps showing up until the block has been solved. The "freeze" of the static website and the interruption of mining are temporary and only happen while the "payload too long" error occurs. Please check the log below:

2015-01-15 14:50:04.013590 > Error submitting primary block: (will retry)
2015-01-15 14:50:04.013683 > Traceback (most recent call last):
2015-01-15 14:50:04.013742 >   File "/home/bitcoind/p2pool/p2pool/work.py", line 381, in got_response
2015-01-15 14:50:04.013785 >     helper.submit_block(dict(header=header, txs=[new_gentx] + other_transactions), False, self.node.factory, self.node.bitcoind, self.node.bitcoind_work, self.node.net)
2015-01-15 14:50:04.013817 >   File "/home/bitcoind/p2pool/p2pool/bitcoin/helper.py", line 86, in submit_block
2015-01-15 14:50:04.013858 >     submit_block_p2p(block, factory, net)
2015-01-15 14:50:04.013900 >   File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1181, in unwindGenerator
2015-01-15 14:50:04.013932 >     return _inlineCallbacks(None, gen, Deferred())
2015-01-15 14:50:04.013977 >   File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1039, in _inlineCallbacks
2015-01-15 14:50:04.014007 >     result = g.send(result)
2015-01-15 14:50:04.014037 > --- <exception caught here> ---
2015-01-15 14:50:04.014067 >   File "/home/bitcoind/p2pool/p2pool/util/deferral.py", line 41, in f
2015-01-15 14:50:04.014097 >     result = yield func(*args, **kwargs)
2015-01-15 14:50:04.014127 >   File "/home/bitcoind/p2pool/p2pool/bitcoin/helper.py", line 67, in submit_block_p2p
2015-01-15 14:50:04.014157 >     factory.conn.value.send_block(block=block)
2015-01-15 14:50:04.014191 >   File "/home/bitcoind/p2pool/p2pool/util/p2protocol.py", line 102, in <lambda>
2015-01-15 14:50:04.014222 >     return lambda **payload2: self.sendPacket(command, payload2)
2015-01-15 14:50:04.014257 >   File "/home/bitcoind/p2pool/p2pool/util/p2protocol.py", line 93, in sendPacket
2015-01-15 14:50:04.014289 >     raise TooLong('payload too long')
2015-01-15 14:50:04.014318 > p2pool.util.p2protocol.TooLong: payload too long

This error showed up during block #339060 ( https://blockchain.info/block-height/339060 ), which took ~22 minutes to solve and contained over 2200 transactions. Note: timezone is GMT+1.

What is wrong here? Why does the "payload too long" error occur? I've had this issue for 1-2 weeks now and noticed that the error occurs when blocks take longer than average to solve (10+ minutes, 2000+ transactions). Does anyone have any idea how to fix it?
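For what it's worth, the exception itself just means the serialized block was bigger than the largest payload p2pool's peer protocol will send in one packet. A toy sketch of that kind of guard (the class, function, and limit below are made up for illustration; they are not p2pool's actual code or values):

```python
class TooLong(Exception):
    """Raised when a packet's payload exceeds the protocol limit."""

MAX_PAYLOAD_LENGTH = 1000000  # illustrative limit, not p2pool's real one

def send_packet(command, payload):
    # Refuse oversized payloads instead of sending them; this mirrors
    # the check that raises "payload too long" in the traceback above.
    if len(payload) > MAX_PAYLOAD_LENGTH:
        raise TooLong('payload too long')
    return command, payload

send_packet(b'block', b'\x00' * 100)  # small payload: fine
try:
    send_packet(b'block', b'\x00' * (MAX_PAYLOAD_LENGTH + 1))
except TooLong as e:
    print('refused:', e)
```

That would explain why it only bites on large, transaction-heavy blocks: the bigger the block, the bigger the payload.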
|
|
|
|
sEpuLchEr
Sr. Member
Offline
Activity: 248
Merit: 250
Are we there yet?
|
|
January 15, 2015, 05:51:00 PM |
|
Need a little help.
How do you show blocks found more than a day ago? I've been looking at the code but I can't seem to find it, or rather, the changes I make don't seem to have any effect. So, as you can tell, I'm not a programmer and am only good at changing letters here and there.
Reason being, it looks really, really bad when Recent Blocks is blank.......
Thanks much.
http://minefast.coincadence.com/p2pool-stats.php
Coincadence gives pool stats and all blocks ever mined on p2pool; it is very useful. However, if you are asking how to modify your p2pool node's website to show more than just one day of blocks, that code is in p2pool/web.py. Find the area of code with "web_root.putChild('recent_blocks', ..." where it defines the time period. Several lines down it should say something like "24*60*60", which represents 24 hours of 60 minutes of 60 seconds. If you would like to show more than one day of data, just multiply this number by the number of days you would like to display.
That's what I was looking for. Thank you very much!
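The arithmetic being described can be sketched as a standalone snippet (the function and variable names are made up; the real code is inline in p2pool/web.py):

```python
SECONDS_PER_DAY = 24 * 60 * 60  # the 24*60*60 constant mentioned above

def blocks_within_window(block_timestamps, now, days=1):
    """Keep only blocks found within the last `days` days."""
    cutoff = now - days * SECONDS_PER_DAY
    return [ts for ts in block_timestamps if ts >= cutoff]

# Made-up timestamps: one block two hours ago, one three days ago.
now = 1421337600
blocks = [now - 2 * 60 * 60, now - 3 * SECONDS_PER_DAY]
print(len(blocks_within_window(blocks, now, days=1)))  # only the recent one
print(len(blocks_within_window(blocks, now, days=7)))  # both
```

So changing 24*60*60 to 7*24*60*60, say, widens the window to a week.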
|
|
|
|
|
phillipsjk
Legendary
Offline
Activity: 1008
Merit: 1001
Let the chips fall where they may.
|
|
January 16, 2015, 03:53:37 AM |
|
....(memory usage went up too from 500MB+ to 1500MB+)...
Seriously!? 3x more memory?
When I played with pypy, I got similar results. I concluded that the initial efficiency gain was simply the number of peers going down. The system python libraries appeared to be C-optimized, though. If the libraries are not compiled, you may see more improvement.
|
James' OpenPGP public key fingerprint: EB14 9E5B F80C 1F2D 3EBE 0A2F B3DE 81FF 7B9D 5160
|
|
|
sEpuLchEr
Sr. Member
Offline
Activity: 248
Merit: 250
Are we there yet?
|
|
January 16, 2015, 01:33:43 PM |
|
....(memory usage went up too from 500MB+ to 1500MB+)...
Seriously!? 3 x more memory? When I played with pypy, I got similar results. I concluded the initial efficiency was simply the number of peers going down. The system python libraries appeared to be c-optimized though. If the libraries are not compiled, you may see more improvement.
Just tried pypy... and wow... the RAM usage is... just wow. But it really does load faster. Start up p2pool on python and on pypy and you can really see the difference in loading speed. Less DOA too, and the registered hashrate picks up faster, but even with Matt's relay, the getblock latency increased (tested for around 16+ hours). Can we just rm all the .pyc files, and will pypy then be able to run p2pool? Or is there something else we have to do?
|
|
|
|
idonothave
|
|
January 16, 2015, 04:07:18 PM |
|
....(memory usage went up too from 500MB+ to 1500MB+)...
Seriously!? 3 x more memory? When I played with pypy, I got similar results. I concluded the initial efficiency was simply the number of peers going down. The system python libraries appeared to be c-optimized though. If the libraries are not compiled, you may see more improvement.
Just tried pypy.. and wow.. the RAM usage is ... just wow. But it really does load faster. Start up p2pool on python and on pypy and you can really see the increase in speed in loading p2pool. Less DOA too and registered hashrate is faster, but even with matt's relay, the getblock latency increased (tested for around 16+ hours). Can we just rm all the .pyc files and will pypy be able to run p2pool? Or is there something else we have to do?
I've been running p2pool for a few days with pypy. Today I tried $ find -iwholename "*.pyc" -delete and reran with pypy -O -E. Local rate: 8.49TH/s (1.3% DOA) (3*SP20, 4*S3, 1*S4). GBLatency 0.275s (running 0.9.4.0, qt version, ubuntu distro). RAM usage 2.19GB (there is 16GB total, so I do not care). Subjectively it is better. Objectively we would need more miners using p2pool, and to find some blocks!
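A word of caution before running that against a real checkout: try the same find pattern on a throwaway directory first. A sketch (the directory and filenames are scratch values created on the fly):

```shell
# Build a scratch directory with a mix of .py and .pyc files.
dir=$(mktemp -d)
touch "$dir/a.pyc" "$dir/b.py" "$dir/c.pyc"

# Same pattern as above: delete every .pyc under the tree.
find "$dir" -iwholename "*.pyc" -delete

ls "$dir"        # only b.py survives
rm -rf "$dir"
```

The idea upthread being that stale CPython bytecode shouldn't be lying around when you switch the interpreter over to pypy.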
|
|
|
|
sEpuLchEr
Sr. Member
Offline
Activity: 248
Merit: 250
Are we there yet?
|
|
January 16, 2015, 04:17:47 PM |
|
Running p2pool few days with pypy. Today I tried $ find -iwholename "*.pyc" -delete and rerun with pypy -O -E. Local rate: 8.49TH/s (1.3% DOA) (3*SP20,4*S3,1*S4). GBLatency 0.275s (running 0.9.4.0, qt version, ubuntu distro). RAM usage 2.19GB (there is 16GB total, I do not care). Subjectively it is better. Objectively we would need more miners using p2pool and to find some block!
Hmm... I think I'll try that later. I can't do it now, as there are others mining on my node. And yes, what we really need is to get more people using p2pool to find blocks! Just wondering what it will take to make people understand they have to be patient and leave their miners running on the node for at least a week or two to see the results. I keep having people come in, mine for an hour, and leave... you're just wasting electricity doing that. Maybe I should add a link to the p2pool site maintained by coincadence. Or all of us running public nodes should add some info. Lucky guy, too: within a few seconds with 400GH he got a share. Well done.
|
|
|
|
yslyung
Legendary
Offline
Activity: 1500
Merit: 1002
Mine Mine Mine
|
|
January 16, 2015, 05:33:44 PM |
|
Running p2pool few days with pypy. Today I tried $ find -iwholename "*.pyc" -delete and rerun with pypy -O -E. Local rate: 8.49TH/s (1.3% DOA) (3*SP20,4*S3,1*S4). GBLatency 0.275s (running 0.9.4.0, qt version, ubuntu distro). RAM usage 2.19GB (there is 16GB total, I do not care). Subjectively it is better. Objectively we would need more miners using p2pool and to find some block!
hmm... Think I'll try that later. Can't do that now as there are others mining in my node. And yes.. what we really need is to get more ppl using p2pool to find blocks! Just wondering what it will take to make ppl understand they have to be patient and leave the miner running on the node for at least a week or 2 to see the results. I keep having ppl come in.. mine an hour and leave.. You're just wasting electricity doing that. Maybe have to add a link to the p2pool site maintained by coincadence. Or all of us running public nodes should add some info. Lucky guy too. Within a few seconds with 400gh, he got a share. Well done.
You can move them over to my pool temporarily while you're doing upgrades? I'll have a look at pypy, but so far running the Windows exe version seems OK to me.
|
|
|
|
IYFTech
|
|
January 16, 2015, 06:04:32 PM |
|
....(memory usage went up too from 500MB+ to 1500MB+)...
Seriously!? 3 x more memory? When I played with pypy, I got similar results. I concluded the initial efficiency was simply the number of peers going down. The system python libraries appeared to be c-optimized though. If the libraries are not compiled, you may see more improvement.
Just tried pypy.. and wow.. the RAM usage is ... just wow. But it really does load faster. Start up p2pool on python and on pypy and you can really see the increase in speed in loading p2pool. Less DOA too and registered hashrate is faster, but even with matt's relay, the getblock latency increased (tested for around 16+ hours). Can we just rm all the .pyc files and will pypy be able to run p2pool? Or is there something else we have to do?
What's the command for running p2pool with pypy?
|
|
|
|
sEpuLchEr
Sr. Member
Offline
Activity: 248
Merit: 250
Are we there yet?
|
|
January 16, 2015, 06:29:50 PM |
|
u can move them temporarily to my pool while you are doing upgrades ?
i'll have a look at pypy but so far running on exe win version seems ok to me.
How do you do that? It's OK, I don't have the time to reset the node now anyway. Running pypy seems faster, but the getblock latency really goes up. RAM is not an issue. I've been tinkering with it but just can't seem to get the latency down. Without pypy, latency is ~0.1-0.2s. With pypy it goes up to 0.5s and above!
|
|
|
|
sEpuLchEr
Sr. Member
Offline
Activity: 248
Merit: 250
Are we there yet?
|
|
January 16, 2015, 06:31:25 PM |
|
What's the command for running p2pool with pypy?
screen -d -m -S p2pool pypy ~/p2pool/run_p2pool.py blah blah blah blah
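An annotated version of the same command, for anyone unfamiliar with the flags (the trailing "blah blah" stands in for your usual p2pool options):

```shell
# -d -m     : start the screen session detached, so it survives logout
# -S p2pool : name the session "p2pool" for easy reattaching
screen -d -m -S p2pool pypy ~/p2pool/run_p2pool.py   # ...your usual options
# Reattach later with: screen -r p2pool
```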
|
|
|
|
|
IYFTech
|
|
January 16, 2015, 07:43:22 PM |
|
Perfect, that's what I needed - thanks
See if you can spot where I changed over to pypy:.........
I'm running 16GB RAM, so it's OK. I'll see how it fares like this for a while
|
|
|
|
idonothave
|
|
January 16, 2015, 07:46:55 PM |
|
Perfect, that's what I needed - thanks See if you can spot where I changed over to pypy:......... I'm running 16GB Ram, so it's OK. I'll see how it fares like this for a while
I have the same picture.
|
|
|
|
|