jcumins
Full Member
Offline
Activity: 312
Merit: 100
Bcnex - The Ultimate Blockchain Trading Platform
|
|
February 02, 2015, 08:35:25 PM |
|
Wow my best share on a S3 running the latest firmware was showing 0
I think I will try that new cgminer.
Stats-wise they're not bad; shares are about 15% dead or orphaned. The efficiency runs 98 to 105, so not bad.
Will see what the new cgminer does
|
|
|
|
PatMan
|
|
February 02, 2015, 08:44:17 PM |
|
Wow my best share on a S3 running the latest firmware was showing 0
You can ignore that - it's another Bitmain firmware bug. Run it with what IYFTech suggested and you'll be good. Newer Bitmain firmware releases tend to be worse than the older ones they replace.......
|
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
February 02, 2015, 09:24:49 PM |
|
Wow my best share on a S3 running the latest firmware was showing 0
I think I will try that new cgminer.
Stats-wise they're not bad; shares are about 15% dead or orphaned. The efficiency runs 98 to 105, so not bad.
Will see what the new cgminer does
The best share problem is caused by a difficulty set to a power of 2, i.e. 512, 1024, 2048, etc. Change the share difficulty slightly by adding one or two, say 514, 1025, or 2049, and your best shares will show up again. M
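The pattern above is easy to check programmatically: a value is an exact power of 2 when it has a single bit set, and nudging it by one moves it off that boundary. A minimal sketch - the `is_power_of_two` and `nudge_difficulty` helpers are illustrative, not part of any firmware or of p2pool:

```python
def is_power_of_two(n):
    """True when n is an exact power of 2 (exactly one bit set)."""
    return n > 0 and (n & (n - 1)) == 0

def nudge_difficulty(diff):
    """Hypothetical helper: bump a power-of-2 difficulty by 1 so the
    S3 UI's 'best share' field keeps updating (per the post above)."""
    return diff + 1 if is_power_of_two(diff) else diff

print(nudge_difficulty(2048))  # prints: 2049
print(nudge_difficulty(1797))  # unchanged, prints: 1797
```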
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
PatMan
|
|
February 02, 2015, 09:30:31 PM |
|
I don't set share difficulty, p2pool looks after it & my display works fine...... Eg:
|
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
February 02, 2015, 10:06:06 PM |
|
I don't set share difficulty, p2pool looks after it & my display works fine...... Eg:
Not setting it means you're using whatever p2pool sets it to, which is most likely not a power of 2. Ergo, no problem. M
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
jcumins
Full Member
Offline
Activity: 312
Merit: 100
Bcnex - The Ultimate Blockchain Trading Platform
|
|
February 02, 2015, 10:34:58 PM |
|
Well that version of cgminer will not work with the latest version of firmware for the S3
It will not even start
|
|
|
|
jcumins
Full Member
Offline
Activity: 312
Merit: 100
Bcnex - The Ultimate Blockchain Trading Platform
|
|
February 02, 2015, 10:37:50 PM |
|
I will give that a try.
A -2049 on the address line, that's easy to try.
|
|
|
|
PatMan
|
|
February 02, 2015, 10:40:01 PM |
|
Not too sure about that tbh. Here's another screen I just took - only one of the miners is using a power-of-2 diff, yet the best share is still being displayed - unless the display is only updated when the diff is a power of 2, of course......I've heard a few miners say they can't get the best share to show up either on the miner gui or on the monitor, but I've never had an issue with it personally. Lucky, I guess.....
|
|
|
|
jcumins
Full Member
Offline
Activity: 312
Merit: 100
Bcnex - The Ultimate Blockchain Trading Platform
|
|
February 02, 2015, 10:45:02 PM |
|
That worked well have best share back.
Thanks much
|
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
February 02, 2015, 10:49:52 PM |
|
Not too sure about that tbh. Here's another screen I just took - only one of the miners is using a power-of-2 diff, yet the best share is still being displayed - unless the display is only updated when the diff is a power of 2, of course......
I don't follow? The screenshot above shows them at 1,797, except for one, which is at 2,112. Neither is a power of 2 (2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, etc). Every Ant I've had since the S1s has had this issue. M
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
PatMan
|
|
February 02, 2015, 10:56:24 PM |
|
I don't follow? The screenshot above shows them at 1,797, except for one, which is at 2,112. Neither is a power of 2 (2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, etc).
Every Ant I've had since S1s has had this issue.
M
Ah - I misunderstood, so it's only when the diff is a power of 2 that the Ants don't display best share - is that right? Peace
|
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
February 02, 2015, 11:06:09 PM |
|
I don't follow? The screenshot above shows them at 1,797, except for one, which is at 2,112. Neither is a power of 2 (2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, etc).
Every Ant I've had since S1s has had this issue.
M
Ah - I misunderstood, so it's only when the diff is a power of 2 that the Ants don't display best share - is that right? Peace
Bingo! And if you don't set your difficulty on p2pool, the odds of getting a difficulty that is a power of 2 are pretty slim. M
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
PatMan
|
|
February 02, 2015, 11:11:24 PM |
|
I don't follow? The screenshot above shows them at 1,797, except for one, which is at 2,112. Neither is a power of 2 (2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, etc).
Every Ant I've had since S1s has had this issue.
M
Ah - I misunderstood, so it's only when the diff is a power of 2 that the Ants don't display best share - is that right? Peace
Bingo! And if you don't set your difficulty on p2pool, the odds of getting a difficulty that is a power of 2 are pretty slim. M
Cool, that explains why I've never had a problem with it then...... Thanks mdude. I seem to remember messing about with diff back in the day, but I found it to be of no benefit quite early on, so I just let p2pool sort it out - easy.
|
|
|
|
IYFTech
|
|
February 02, 2015, 11:16:49 PM |
|
Same here, p2pool does it nicely
|
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
February 03, 2015, 12:28:45 AM |
|
Same here, p2pool does it nicely
Yes, it does do it nicely. However, if you have overburdened, overworked miners, such as all Antminers are, there is a reason to set the pseudo share size. I've pointed my former Ants through my homegrown proxy to see what causes rejects and why. One cause is beyond the control of p2pool, although it can be alleviated by setting the queue size to 1 or 0. The other, however, you can solve by fixing the pseudo share size that p2pool feeds your Ants. Here's what happens:
p2pool: work size is now 121
Ant: got it, I'll start using 121 as soon as I can
Ant: here's some work from the prior work size of 105
p2pool: rejected! difficulty is too high (it really says this! in reality the difficulty is too low, but to keep it simple for people, p2pool reverses it and says it's too high)
Ant: alright, here's some work of size 121
p2pool: accepted!
Ant: and more
p2pool: accepted!
p2pool: whoa, hold on Ant, you're feeding me work too fast. work size is now 151!
Ant: got it, I'll start using 151 as soon as I can
Ant: here's some work from the prior work size of 121
p2pool: rejected! difficulty is too high!
And so forth. Fix your pseudo share size, and no more rejects because of work size that is "too high". M
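The sequence described above is a timing race: the pool retargets the pseudo share difficulty, but the miner still has work in flight against the old target. A toy simulation of that race - the `Pool`/`mine` names and numbers are illustrative, not p2pool or cgminer code:

```python
class Pool:
    """Toy stand-in for a vardiff pool. Not real p2pool code."""
    def __init__(self, vardiff=True, fixed_diff=None):
        self.diff = fixed_diff if fixed_diff else 105
        self.vardiff = vardiff
        self.rejects = 0

    def submit(self, share_diff):
        if share_diff < self.diff:
            self.rejects += 1        # logged as "difficulty too high" per the post
            return False
        if self.vardiff:
            self.diff += 16          # pool retargets; miner hears about it late
        return True

def mine(pool, rounds, queue_depth=2):
    known_diff = pool.diff
    for _ in range(rounds):
        # the Ant keeps queue_depth work items in flight at its last-known target
        for _ in range(queue_depth):
            pool.submit(known_diff)
        known_diff = pool.diff       # miner only catches up after submitting

racy = Pool(vardiff=True)
mine(racy, 10)
fixed = Pool(vardiff=False, fixed_diff=105)
mine(fixed, 10)
print(racy.rejects, fixed.rejects)   # prints: 10 0
```

Setting `queue_depth=1` also brings the racy pool's rejects to zero, which matches the observation that shrinking the work queue to 1 or 0 alleviates the first cause.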
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
February 03, 2015, 12:38:06 AM |
|
Two additional points on my prior comment:
1 - For work that is rejected this way, it obviously doesn't hurt anything, because it's of too low a difficulty to be of any value. Any significantly sized share (like an alt chain share) wouldn't be rejected. So this only inflates your reject rate for no good reason.
2 - The poor overworked Ants have enough to do already without having to switch work sizes all the time, so fixing the pseudo share size should help in some manner or another, however slight.
M
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
kano
Legendary
Offline
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
|
|
February 03, 2015, 01:00:46 AM |
|
A few things about that ...
Rejecting shares that are worth nothing really should not be of concern. You have to send an XMillion share before it counts as anything at all ... and the rejections you are talking about are WAY below that and won't affect sharechain shares that are actually worth something.
Secondly, you'd need to check what the message is that p2pool is sending. The protocol allows p2pool to tell the miner to discard all work and use the new diff, and the reverse, meaning finish your current work and then move on to the new diff. If the protocol messages aren't telling the miner to discard work and immediately move on to the new diff, then it's a bug in p2pool to reject them. Again, you lose nothing coz they are worthless, but it shouldn't reject them in this scenario.
If the protocol is telling the miner to discard work, and you have "submit-stale" on (which is on by default, and ALL p2pool miners must have it on), then it really doesn't matter at all, since the miner is simply submitting stale shares as is required for p2pool, to avoid throwing away valid bitcoin blocks - which would be a REALLY bad thing to do without "submit-stale" enabled.
Though some of the Bitmain firmware does exactly that in their version of the driver - discards valid blocks that p2pool says are stale - but you'd have to check their code for the driver you are using if it's not master cgminer.
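In stratum terms, the two cases described above map to the clean_jobs flag on a mining.notify message: true means drop all in-flight work immediately, false means finish what's queued and only use the new job for fresh work. A minimal miner-side sketch of that distinction - illustrative only, not cgminer's actual driver code:

```python
def handle_notify(state, job, clean_jobs):
    """Apply a stratum mining.notify to toy miner state.
    state: dict with 'queued' (pending work ids) and 'job' (current job id)."""
    if clean_jobs:
        state["queued"].clear()   # discard everything in flight, switch now
    state["job"] = job            # new work uses the new job either way

state = {"queued": ["old-1", "old-2"], "job": "old"}

handle_notify(state, "new", clean_jobs=False)
# queued work survives and will be submitted (possibly stale) - this is why
# submit-stale must stay on: a "stale" share can still be a valid block
print(state["queued"])  # prints: ['old-1', 'old-2']

handle_notify(state, "newer", clean_jobs=True)
print(state["queued"])  # prints: []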
|
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
February 03, 2015, 01:10:32 AM |
|
A few things about that ... Rejecting shares that are worth nothing really should not be of concern. You have to send an XMillion share before it counts as anything at all ... and the rejections you are talking about are WAY below that and won't affect sharechain shares that are actually worth something.
Yup, see my addendum #1 above.
Secondly, you'd need to check what the message is that p2pool is sending. The protocol allows p2pool to tell the miner to discard all work and use the new diff, and the reverse, meaning finish your current work and then move on to the new diff. If the protocol messages aren't telling the miner to discard work and immediately move on to the new diff, then it's a bug in p2pool to reject them. Again, you lose nothing coz they are worthless, but it shouldn't reject them in this scenario.
It's been a while, so I don't recall the exact verbiage. I do remember the sequence of events, however, and it's exactly as I described above. p2pool says "difficulty change!" and up to 10 seconds later the Ant submits work with the old difficulty and it gets rejected. There wasn't a "work restart" request in between either. This was tested with my S2 long before Bitmain significantly improved performance of the S2s on p2pool, so it's probably 5 seconds or so now. And on the p2pool side, I clearly saw "worker X submitted share with too high difficulty, expected X (received Y)", or something like that.
If the protocol is telling the miner to discard work, and you have "submit-stale" on (which is on by default and ALL p2pool miners must have it on), then it really doesn't matter at all, since the miner is simply submitting stale shares as is required for p2pool to avoid throwing away valid bitcoin blocks. Though some of the Bitmain firmware does this in their version of the driver - discards valid blocks that p2pool says are stale - but you'd have to check their code for the driver you are using if it's not master cgminer.
I no longer have any Ants in my possession, so I'm not able to look at this again to confirm. M
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
mahrens917
|
|
February 03, 2015, 03:17:18 AM |
|
So doing some digging, the "payload too long" error comes about when a packet's length is greater than max_payload_length. This can be seen in p2pool/util/p2protocol.py. It is called from p2pool/bitcoin/p2p.py and p2pool/p2p.py, where max_payload_length is set to 1,000,000. I am going to change the code to print out the packet size, as well as change max_payload_length to twice the size. I will let the code run for a while, see what happens, and report back.
If I'm correct, the 'maxblocksize' is hardcoded to 1MB in the Bitcoin protocol. So I guess changing the payload length does do the trick within the p2pool code (until the block size is increased). Well, it is still unclear to me what part of the code you have changed. Logically it had to do with Bitcoin Core, which is responsible for "packaging" the block and verifying its hash. It would be awesome if you could still update us with news. I've also updated my Bitcoin Core and I'm waiting for a long block time with lots of transactions to debug my new setup. So, I'll keep you guys posted! Thanks!
Yes, the block should not be coming into p2pool at over 1,000,000, or perhaps it is sometimes coming in with a size of 1,024,000 (another meaning of MB). Or perhaps p2pool or bitcoind is somehow corrupting it at certain times, which is causing this issue. We'll see - my logging should let us know a little better what is happening.
I am running two p2pool servers. One crashed and one didn't (they usually crash at roughly the same time over this payload issue). One node, however, had an increased max_payload_length. In the logs it states:
p2pool.log:2015-01-24 13:55:07.867204 PAYLOAD sharereply 1088513
p2pool.log:2015-01-24 13:55:27.520303 PAYLOAD sharereply 1080454
The number represents the size of the payload, which is over 1,000,000 and would have crashed my node. So by changing the max payload in the p2p.py files I resolved this issue (for now).
The lines I changed are in p2p.py: p2protocol.Protocol.__init__(self, node.net.PREFIX, 2000000, node.traffic_happened)
and in bitcoin/p2p.py: p2protocol.Protocol.__init__(self, net.P2P_PREFIX, 2000000, ignore_trailing_payload=True)
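The guard being relaxed by those two changes is essentially a length check on incoming messages. A simplified sketch of the behaviour - not the actual p2protocol.py code, which does this inside its Twisted protocol class; the `TooLong`/`check_payload` names are made up for illustration:

```python
class TooLong(Exception):
    """Raised when a message exceeds the configured payload cap."""
    pass

def check_payload(payload, max_payload_length=1000000):
    """Reject messages larger than the cap, as p2pool's p2protocol layer
    does; this raising is what was crashing nodes on big share replies."""
    if len(payload) > max_payload_length:
        raise TooLong("payload too long: %d bytes" % len(payload))
    return payload

check_payload(b"x" * 999999)             # fits under the default 1,000,000 cap
check_payload(b"x" * 1088513, 2000000)   # the logged sharereply, OK once doubled
```

With the default cap, a 1,088,513-byte sharereply like the one in the log above would raise; doubling the cap to 2,000,000 lets it through.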
|
|
|
|
aurel57
Legendary
Offline
Activity: 1246
Merit: 1000
|
|
February 03, 2015, 07:30:10 PM |
|
I have a p2pool payment question. After the blocks were found yesterday I got paid both times, then later in the day I received a couple of very small payments. Not sure what the small payments were for?
|
|
|
|
|