Someone was saying that the Ant S1s are eating more in electricity than they earn. Well, here in Finland those four undervolted units take 70 dollars, and one day's income, if we take today's numbers, is 36 dollars. So one more day left in this month - can I be so unlucky?
uv = undervolted
|
|
|
Yeah, this pool is awesome - just about everything is well thought out, except it would only be decent to give away 1%. I have gotten 100% more this week than I ought to have, so it's not a big deal. With shares I got 100%; I haven't counted the coins yet, it might be much more. Watch that movie Reservoir Dogs!
|
|
|
I think we should add another vote on whether we stay with the current settings or go ahead and take our chances. Before we take it that far, we all need to know how bad the hardware we are mining with really is.
|
|
|
As I have said many times before, don't count your errors - count your shares. That is what you have to do with p2pool. It can't be that difficult to move from GHash to p2pool.
My earnings with p2pool are awesome - last week it's been like 1000 GH/s effective with 580 GH/s of hardware.
Count your shares, not the rejects, because you don't get paid for those and we all have them: rejected, orphan, and dead. Not so many dead lately...
|
|
|
How long does it take to make the move from GHash to p2pool? Earnings are so low with this that it can't be anything but laziness. If 50% of the shares peta gets are dead, then we are near where we ought to be.
Try to think: we voted for p2pool, so go ahead. Please.
|
|
|
Thank you all for this great waste of time and electricity. Even when I'm picking blueberries I'm thinking about p2pool.
|
|
|
I was talking about walletaddress/10000000 as the minimum setting with 1 TH/s of hardware. I have 4 Ants pencil-modded to 580 GH/s total: 150 + 150 + 140 + 140. p2pool.info tells me that my hashrate is 1Lesbkz3Th7Xk4xJEw6SGyLGwLF6GubNwd 6285 GH/s, so I'm lucky.
|
|
|
With walletaddress/10000000, one share is worth ~0.009, or about 500 GH/s per day in the graphs; a bigger hook can give you a bigger fish. And no, it's not too high - it's nice to get 2 or more of those with 600 GH/s in a day, and it would be very difficult to get 10-15 shares at difficulty 2000000 a day without orphans...
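The arithmetic behind "one big share every ~20 hours" can be checked: one unit of difficulty corresponds to 2^32 hashes on average, so the expected time between shares is difficulty × 2^32 / hashrate. A quick sketch using this thread's numbers (share difficulty 10,000,000 and ~580 GH/s):

```python
# Expected time to find one share of a given difficulty at a given hashrate.
# One unit of difficulty corresponds to 2**32 hashes on average.

def expected_share_time_seconds(difficulty, hashrate_hs):
    """Average seconds between shares at `difficulty` with `hashrate_hs` hashes/sec."""
    return difficulty * 2**32 / hashrate_hs

# Numbers from this thread: share difficulty ~10M, four pencil-modded Ant S1s at ~580 GH/s.
t = expected_share_time_seconds(10_000_000, 580e9)
print(round(t / 3600, 1), "hours")  # prints: 20.6 hours - roughly the ~20 h quoted later in the thread
```

This matches the claim further down that a 580 GH/s setup at difficulty 10M sees about one paying share per 20 hours.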
|
|
|
You get an orphan share if it's found just before or just after a block is found. Everyone gets those, even with CPU mining!
|
|
|
Increasing the share period to 2 minutes would solve all the problems: SHARE_PERIOD=30, # seconds -> SHARE_PERIOD=120, # seconds. Why not? What is wrong with that? Then bitcoind's GetBlockTemplate latency wouldn't matter as much as it does now, either - the pool could accept more transactions -> more income...
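For context, SHARE_PERIOD is a per-network constant in p2pool's source (the exact file varies by version). A sketch of the proposal and its main trade-off, with illustrative values only:

```python
# p2pool retargets the share chain so that shares arrive, on average,
# every SHARE_PERIOD seconds. The post above proposes raising it 30 -> 120.
SHARE_PERIOD_CURRENT = 30    # seconds, the shipped value
SHARE_PERIOD_PROPOSED = 120  # seconds, as suggested above

# Consequence: with the same total pool hashrate, each share is 4x harder to
# find, so individual miners see 4x fewer (but 4x larger) payouts -> more
# payout variance, in exchange for fewer work restarts and less latency pressure.
variance_factor = SHARE_PERIOD_PROPOSED / SHARE_PERIOD_CURRENT
print(variance_factor)  # prints: 4.0
```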
|
|
|
Don't count the rejects, count the shares you get. Don't mine with minimal settings on 1 TH/s hardware - that's for USB Block Erupters. Use at minimum walletaddress/10000000 and see what happens in a week or two... Remember to relax.
|
|
|
What should be edited in the code to raise the node's minimum share difficulty? It would be much easier than changing it on every miner (walletaddress/100000000... 100M).
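For reference, the per-miner override being discussed is, as I understand p2pool's convention (worth verifying against your node's version), encoded in the worker name as ADDRESS/share_difficulty+pseudoshare_difficulty, which is why changing it miner by miner is tedious. A hypothetical sketch of parsing that suffix, with an assumed node-wide floor (`node_min_difficulty` is my invention, not a real p2pool option) that would answer the question in one place:

```python
def parse_worker(worker, node_min_difficulty=None):
    """Parse a p2pool-style worker name: ADDRESS[/share_diff][+pseudoshare_diff].

    Hypothetical sketch - the node_min_difficulty floor is an assumption,
    not p2pool's actual code.
    """
    pseudo_diff = None
    share_diff = None
    if '+' in worker:
        worker, pseudo = worker.split('+', 1)
        pseudo_diff = float(pseudo)
    if '/' in worker:
        worker, share = worker.split('/', 1)
        share_diff = float(share)
    # A node-wide minimum, applied in one place, would replace editing every miner:
    if node_min_difficulty is not None:
        share_diff = max(share_diff or 0, node_min_difficulty)
    return worker, share_diff, pseudo_diff

addr, sd, pd = parse_worker("1Lesbkz3Th7Xk4xJEw6SGyLGwLF6GubNwd/10000000+1024")
print(addr, sd, pd)  # the address, 10000000.0, 1024.0
```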
|
|
|
What is the DOA% with the S2? On the pool level it's 13% now, so anything below 13 is just fine.
I believe it's much higher than that.
M
25%? Local dead on arrival: ~ =?
When I run it through my proxy, the Ants report in excess of 50% rejects. I'm not sure that number is right, but I haven't double-checked it to confirm. I suspect that p2pool isn't properly returning matching reject messages, so the Ants don't really know the amount of rejects. I'm not 100% sure of this, as the stratum specs leave a lot to interpretation / figuring out on your own. What I'm referring to is this: there's a unique ID number that's passed with each message from the miner to the pool. That number is incremented for each message, and each share is a message. So say the Ant sends message #122 and it's a share submission. If p2pool rejects it, the ID it uses on the reject is its own internal counter, so it could be #155. I'm not sure cgminer knows how to match that up. My proxy takes care of matching those up, so the message the Ant gets back from the proxy (relayed from the pool) carries the same number as the share from the Ant - in the example above, #122.
M
With walletaddress/10000000 I can expect one share that pays in 20 hours - with 4 Ant S1s pencil-modded to 150 GH/s each - anything else is just a waste of space. It's the only share in 20 hours I'm interested in here. If I get 2 of those in a day I call it a lucky day. The pool might just be usable for S2s with similar settings - some days you score big, some not so well. I think you are using those S2s with their own wallet address for each. This is serious mining, going after the bigger fish. I'm not worried about the variance - this is more fun than chasing those smaller shares...
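The ID-matching behavior described above maps onto Stratum's JSON-RPC framing: each request carries an `id`, and the response should echo it. A minimal sketch of a remapping proxy (not the poster's actual code, and it assumes the pool echoes whatever id the proxy sends upstream; a pool that truly answers with an unrelated counter would have to be matched by message order instead):

```python
import json

class IdRemapper:
    """Map the pool's response ids back to the miner's original request ids.

    Sketch of the behavior described above: the proxy rewrites the id it
    forwards upstream, remembers the mapping, and restores the miner's id
    on the way back.
    """
    def __init__(self):
        self._next_upstream_id = 1
        self._upstream_to_miner = {}

    def to_pool(self, raw):
        msg = json.loads(raw)
        upstream_id = self._next_upstream_id
        self._next_upstream_id += 1
        self._upstream_to_miner[upstream_id] = msg["id"]
        msg["id"] = upstream_id
        return json.dumps(msg)

    def to_miner(self, raw):
        msg = json.loads(raw)
        msg["id"] = self._upstream_to_miner.pop(msg["id"], msg["id"])
        return json.dumps(msg)

remap = IdRemapper()
sent = remap.to_pool('{"id": 122, "method": "mining.submit", "params": []}')
reply = remap.to_miner('{"id": 1, "result": false, "error": null}')
print(json.loads(reply)["id"])  # prints: 122 - the Ant sees its own message number
```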
|
|
|
P2pool's expected share time should be higher than 30 s...
|
|
|
It's close to 1 TH/s. The problem is that the reject level is too high. What I don't understand is why it appears so different when using a proxy. The reject level is high enough to make the effective hash rate what's showing when you connect directly, but through the proxy it shows the reject count.
M
Isn't DOA the same thing?
|
|
|
The good news: I managed to make a stratum proxy that works with p2pool.
The bad news: It looks to me like the S2s can't respond quickly enough to the restarts from p2pool. That causes rejects. If the difficulty is too low, it gets out of hand, to where the shares coming from the S2s are from prior job IDs, as many as two or three back.
I'm watching the data flow through in real time, and shares submitted 1-5 seconds after the restart request (with the new job ID) are rejected because the job ID is the wrong one.
Other things of note:
- It'd be useful if p2pool returned some indication as to what was wrong when an item is rejected. I'm going to have to track job IDs myself to visually indicate whether it was rejected because of a bad job ID.
- The "id" in the response from p2pool doesn't match the "id" from the sender. The stratum specs are a bit vague, but it seems to me the response "id" should be the same as the sender's id. Instead it seems to be an incremental number for that connection.
More to come.
M
So it works like a charm with walletaddress/10000000+1024? And without those 30 s altcoins... What do the graphs tell, hourly? Is it hashing below 1 TH/s?
I'm not sure if 512 or 1024 works best. Overall, while a proxy definitely helps, the fundamental issue appears to be:
1 - the Ants can't change work fast enough to keep up with the constant changing of job IDs from p2pool
2 - p2pool doesn't accept work with old job IDs, ever.
Eligius does submit new job IDs roughly every 30-50 seconds, but it usually doesn't tell the Ants to drop what they're doing and restart ("clean_jobs == false"). So I'd say either Bitmain needs to improve the response time on the Ants... or p2pool needs to stop telling the workers to drop their work and start fresh every 30-40 seconds.
M
So it's below 1 TH/s in http://127.0.0.1:9332/static/graphs.html?Hour ?
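The stale-share failure mode described above is easy to model: Stratum's mining.notify carries a job ID and a clean_jobs flag, and after a clean-jobs restart any share against an older job ID is rejected. A toy model of that server-side rule (an illustration of the described behavior, not p2pool's code):

```python
class JobTracker:
    """Model of the reject rule described above: after a clean_jobs notify,
    only the newest job id is acceptable; otherwise recent ids remain valid."""
    def __init__(self):
        self._valid_jobs = []

    def notify(self, job_id, clean_jobs):
        if clean_jobs:
            # Miners were told to drop everything: old job ids become stale.
            self._valid_jobs = [job_id]
        else:
            self._valid_jobs.append(job_id)

    def accepts(self, job_id):
        return job_id in self._valid_jobs

pool = JobTracker()
pool.notify("job-1", clean_jobs=True)
pool.notify("job-2", clean_jobs=True)   # p2pool restarts work roughly every 30 s
print(pool.accepts("job-2"))  # prints: True
print(pool.accepts("job-1"))  # prints: False - a slow S2 still submitting job-1 gets a reject
```

An Eligius-style notify with clean_jobs=False would instead leave "job-1" valid, which is why the slower Ants see far fewer rejects there.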