Bitcoin Forum
  Show Posts
Page 42 of 47
821  Bitcoin / Pools / Re: [600 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: July 31, 2014, 10:22:07 PM
There was someone saying that the Ant S1s are eating more than they bring in. Well, here in Finland those four uv units take 70 dollars, and one day's income, taking today as the example, is 36 dollars... so one more day left in this month - can I be so unlucky?

(uv = undervolted)
822  Bitcoin / Pools / Re: [600 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: July 31, 2014, 10:12:52 PM
Yeah, this pool is awesome - just about everything is well thought out, except that one could just as well give away 1%. I have got 100% more this week than I ought to have, so it's not a big deal. With shares I got 100% extra; I haven't counted the coins yet, it might be much more Smiley
Watch that movie Reservoir Dogs!
823  Economy / Securities / Re: [HAVELOCK] PETAMINE - 1,150 TH/S HASH RATE (1GH/S per Unit) on: July 31, 2014, 10:03:31 PM
I think we should add another vote: do we stay with the current settings or go ahead and take our chances? Before we can take it that far, we all need to know how bad this hardware we are mining with really is.
824  Economy / Securities / Re: [HAVELOCK] PETAMINE - 1,150 TH/S HASH RATE (1GH/S per Unit) on: July 31, 2014, 09:51:16 PM
As I have said many times before, don't count your errors - count your shares.
That is what you have to do with p2pool.
It can't be that difficult to move from GHash to p2pool.

My earnings with p2pool are awesome - last week it's been like 1000 GH worth of payouts from 580 GH of hardware.

Count your shares, not the rejects, because you don't get paid for those and we all have them: rejected and orphan and dead. Not so many dead lately...
825  Economy / Securities / Re: [HAVELOCK] PETAMINE - 1,150 TH/S HASH RATE (1GH/S per Unit) on: July 31, 2014, 09:40:10 PM
How long does it take to make the move from GHash to p2pool? Earnings are so low with this that it can't be anything but a lazy man's job.
If 50% of the shares PETA gets were dead, then we would be near what we ought to be getting.

Think about it - we voted for p2pool, so go ahead, please.
826  Bitcoin / Pools / Re: [600 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: July 17, 2014, 09:55:36 PM
Thank you all for this great waste of time and electricity.

Even when I'm picking blueberries I'm thinking about p2pool  Cry
827  Bitcoin / Pools / Re: [600 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: July 16, 2014, 06:09:05 AM
I was talking about walletaddress/10000000 as the minimum setting for 1 TH hardware.

I have 4 Ants pencil-modded to 580 GH in total -- 150 + 150 + 140 + 140.

p2poolinfo tells me that my hashrate is 1Lesbkz3Th7Xk4xJEw6SGyLGwLF6GubNwd    6285 GH/s

So I'm lucky  Grin
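
For context: that displayed figure is inferred from the shares found recently rather than measured from the hardware. At share difficulty 10,000,000 each share stands for about 10,000,000 × 2^32 ≈ 4.3 × 10^16 hashes, so averaged over a day (assuming the site uses roughly a 24-hour window) a single share already looks like ≈ 4.3 × 10^16 / 86,400 s ≈ 0.5 TH/s - which is why a lucky streak from 580 GH of real hardware can show up as 6285 GH/s.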
828  Bitcoin / Pools / Re: [600 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: July 10, 2014, 02:37:44 AM
With walletaddress/10000000 one share is worth ~0.009 BTC, or 500 GH/s per day on the graphs; a bigger hook can give you a bigger fish.

And no, it's not too high  Cheesy. It's nice to get two or more of those in a day with 600 GH/s, and it would be very difficult to get 10-15 difficulty-2,000,000 shares a day without orphans...

829  Bitcoin / Pools / Re: [600 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: July 09, 2014, 12:22:47 PM
You get an orphan share if it's found just before a block is found or just after it. Everyone gets those, with CPU mining too!
830  Bitcoin / Pools / Re: [600 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: July 09, 2014, 10:26:52 AM
Increase the share period to 2 min and all problems are solved  Tongue

 SHARE_PERIOD=30, # seconds

 SHARE_PERIOD=120, # seconds

Why not? What is wrong with that? Then bitcoind GetBlockTemplate latency wouldn't matter as much as it does now either - we could accept more transactions -> more income...
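
For anyone wanting to try it, this is roughly where that constant lives. Depending on the p2pool version, the Bitcoin sharechain parameters sit in p2pool/networks.py or p2pool/networks/bitcoin.py (a sketch from memory, not a verbatim copy), and note that SHARE_PERIOD is a sharechain consensus parameter, so a single node changing it on its own would fork itself onto its own share chain:

 # Bitcoin network definition (p2pool/networks.py or p2pool/networks/bitcoin.py,
 # depending on version) -- approximate sketch, not the exact source
 SHARE_PERIOD=30, # seconds; the change discussed above would make this 120
 # ...the other sharechain parameters (CHAIN_LENGTH, SPREAD, ...) follow here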
831  Bitcoin / Pools / Re: [600 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: July 08, 2014, 12:14:37 PM

Don't count the rejects; count the shares you get.
Don't mine with the minimal settings on 1 TH hardware - those are for USB Block Erupters. Use at minimum walletaddress/10000000 and see what happens in a week or two...

Remember to relax Wink
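
For anyone unsure where that suffix goes: it is simply appended to the wallet address in the miner's pool settings. A minimal sketch, assuming a p2pool node on the default worker port 9332 (the hostname and address below are placeholders):

 Pool URL:  stratum+tcp://your-p2pool-node:9332
 Worker:    1YourBitcoinAddressHere/10000000
 Password:  x

As far as I understand p2pool's username syntax, /N asks the node for share difficulty N (bigger, rarer, paying shares), while +N only sets the local pseudoshare difficulty that affects the stats and how often the miner submits work.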

832  Bitcoin / Pools / Re: [600 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: July 08, 2014, 07:26:57 AM
What should be edited in the code to increase the node's minimum share difficulty? It would be much easier than changing it on every miner. (walletaddress/100000000)... 100M
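
If I remember the source right, those /N and +N suffixes are parsed per worker in get_user_details() in p2pool/work.py, so that is the neighbourhood to look at. A rough Python sketch of the idea of a node-wide floor - illustrative only, not the actual p2pool code, and MIN_SHARE_DIFFICULTY is a made-up name:

 # Illustrative only -- not the real p2pool source.  Idea: whatever share target
 # a miner requests via its username, clamp it so this node never hands out
 # shares easier than a node-wide minimum difficulty.
 MIN_SHARE_DIFFICULTY = 100000000  # hypothetical node-wide floor (100M)

 def difficulty_to_target(difficulty):
     # Bitcoin-style conversion: difficulty 1 corresponds to target 0xffff * 2**208
     return (0xffff << 208) // difficulty

 def clamp_share_target(requested_target):
     # a lower target means a harder share, so never allow a target above the floor's
     return min(requested_target, difficulty_to_target(MIN_SHARE_DIFFICULTY))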

833  Bitcoin / Pools / Re: [600 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: July 06, 2014, 08:56:18 PM
What is the DOA% with the S2? At pool level it's 13% now, so anything below 13 is just fine.

I believe it's much higher than that.

M

25% ?
Local dead on arrival: ~      =?


When I run it through my proxy, the Ants report in excess of 50% rejects.  I'm not sure that number is right, but I haven't double checked it to confirm.

I'm suspecting that p2pool isn't properly returning matching reject messages, so the Ants don't really know the amount of rejects.  I'm not 100% sure of this, as the stratum specs leave a lot to interpretation/figuring out on your own.

What I'm referring to is this.

There's a unique ID # that's passed with each message from the miner to the pool.  That number is incremented for each message.  Each share is a message.  So say the Ant sends message #122 and it's a share submission.  If p2pool rejects it, the ID it uses on the reject is its own internal counter, so it could be #155.  I'm not sure cgminer knows how to match that up.

My proxy takes care of matching those up, so the message the Ant gets back from the proxy (Relayed from the pool) is the same # of the share from the Ant, in the example above, #122.

M
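
To make that ID bookkeeping concrete, here is a minimal sketch of the kind of matching described above - purely illustrative, not the actual proxy, and it assumes the pool answers submissions in the order they were sent:

 # Illustrative sketch, not the actual proxy code.  Since the pool's reply "id"
 # doesn't echo the miner's, remember the miner's ids in order and hand them back
 # as the replies arrive (assumes in-order replies on the connection).
 import collections, json

 pending = collections.deque()  # miner ids for requests still awaiting a reply

 def rewrite_for_pool(miner_line):
     msg = json.loads(miner_line)
     pending.append(msg.get('id'))  # remember e.g. #122
     return json.dumps(msg)         # forward to the pool as-is

 def rewrite_for_miner(pool_line):
     msg = json.loads(pool_line)
     if 'result' in msg and pending:    # a reply, not a notification
         msg['id'] = pending.popleft()  # give the Ant back its own #122
     return json.dumps(msg)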

With walletaddress/10000000 I can expect one paying share in about 20 h with 4 Ant S1s pencil-modded to 150 GH each - anything else is just a waste of space. It's only that one share in 20 h I'm interested in here. If I get two of those in a day, I call it a lucky day. The pool might just be usable for S2s with similar settings - some days you score big time, some not so well. I think you are using those S2s with their own wallet address for each.

This is serious mining, going after bigger fish. I'm not worried about the variance - this is more fun than chasing those smaller shares...
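
As a sanity check on that 20 h figure: the expected time to find one share is difficulty × 2^32 / hashrate, so 10,000,000 × 2^32 / 600 GH/s ≈ 4.3 × 10^16 / 6.0 × 10^11 ≈ 72,000 s ≈ 20 hours (taking the four ~150 GH Ants as roughly 600 GH/s in total).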
834  Bitcoin / Pools / Re: [600 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: July 06, 2014, 03:51:51 PM
What is the DOA% with the S2? At pool level it's 13% now, so anything below 13 is just fine.

I believe it's much higher than that.

M

25% ?
Local dead on arrival: ~      =?
835  Bitcoin / Pools / Re: [600 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: July 06, 2014, 02:31:38 PM
What is the DOA% with the S2? At pool level it's 13% now, so anything below 13 is just fine.
836  Bitcoin / Pools / Re: [600 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: July 06, 2014, 07:00:49 AM
What is the DOA% with the S2?
837  Bitcoin / Pools / Re: [600 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: July 05, 2014, 01:33:59 PM
P2pool's expected share time should be higher than 30 s...
838  Bitcoin / Pools / Re: [600 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: July 05, 2014, 12:34:47 PM

It's close to 1 TH/s.  The problem is the reject level is too high.

What I don't understand is why it appears so different when using a proxy.  The reject level is high enough to bring the effective hash rate down to what's showing when you connect directly.  But through the proxy it shows the reject count.

M

Isn't the DOA the same thing?
839  Bitcoin / Pools / Re: [600 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: July 05, 2014, 09:14:55 AM
The good news:
I managed to make a stratum proxy that works with p2pool.

The bad news:
It looks to me like the S2s can't respond quick enough to the restarts from p2pool.  That causes rejects.  If the difficulty is too low, it gets out of hand, where the shares coming from the S2s are from prior jobIDs, as many as two or three back.

I'm watching the data flow through real time, and shares submitted 1-5 seconds after the restart request (with the new jobID) are rejected because the jobID is the wrong one.

Other things of note:
- It'd be useful if p2pool returned some indication as to what was wrong when an item is rejected.  I'm going to have to track jobIDs myself to visually indicate if it was rejected because of a bad jobid.
- The "id" in the response from p2pool doesn't match the "id" from the sender.  The stratum specs are a bit vague, but it seems to me the response "id" should be the same as the sender id.  Instead it seems to be an incremental number for that connection.

More to come.

M

So it works like a charm with walletaddress/10000000+1024 ?

And without those 30 s altcoins...

What do the graphs tell, hourly? Is it hashing below 1 TH?

I'm not sure if 512 or 1024 works best.  Overall, while a proxy definitely helps, the fundamental issue appears to be:

1 - the Ants can't change work fast enough to keep up with the constant changing of job IDs from p2pool
2 - p2pool doesn't accept work with old job IDs.  ever.

Eligius does send new job IDs roughly every 30-50 seconds, but it usually doesn't tell the Ants to drop what they're doing and restart ("clean_jobs == false").

So I'd say either Bitmain needs to improve the response time on the Ants.. or p2pool needs to stop telling the workers to drop their work and start fresh every 30-40 seconds.

M

So it's below 1 TH in http://127.0.0.1:9332/static/graphs.html?Hour ?
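
For reference, the restart behaviour discussed in the quote above travels in the last field of stratum's mining.notify message. A sketch of such a message with placeholder values, not real job data:

 # Sketch of a stratum mining.notify; every value below is a placeholder.
 notify = {
     "id": None,
     "method": "mining.notify",
     "params": [
         "4f2a",        # job_id
         "...",         # previous block hash
         "...", "...",  # coinb1 / coinb2 (coinbase halves)
         [],            # merkle branch
         "00000002",    # block version
         "1819012f",    # nbits (encoded difficulty target)
         "53b7a2c1",    # ntime
         True,          # clean_jobs: True tells miners to drop in-flight work;
                        # per the quote, Eligius mostly sends false here while
                        # p2pool sends true on its frequent restarts
     ],
 }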
840  Bitcoin / Pools / Re: [600 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: July 05, 2014, 09:12:11 AM