Bitcoin Forum

Bitcoin => Pools => Topic started by: worldinacoin on October 14, 2011, 01:24:46 PM



Title: Bandwidth of a Pool
Post by: worldinacoin on October 14, 2011, 01:24:46 PM
Just checking: what is the ballpark figure for the bandwidth we need to prepare for a mining pool?


Title: Re: Bandwidth of a Pool
Post by: Caesium on October 14, 2011, 01:52:20 PM
I can only tell you what I've observed: rfcpool is using about 3 Mbit/s outbound, 2 Mbit/s inbound, to do 50 GH/s.


Title: Re: Bandwidth of a Pool
Post by: worldinacoin on October 14, 2011, 02:26:14 PM
Wow, the larger pools must be using tons of bandwidth.  This will be cheap in the USA and certain parts of Europe, but expensive everywhere else :(


Title: Re: Bandwidth of a Pool
Post by: teukon on October 14, 2011, 03:02:31 PM
I can only tell you what I've observed: rfcpool is using about 3 Mbit/s outbound, 2 Mbit/s inbound, to do 50 GH/s.

How much of that is down to the shares?  If you created a pool accepting difficulty 100 shares then would your bandwidth requirements drop significantly?


Title: Re: Bandwidth of a Pool
Post by: Caesium on October 14, 2011, 03:43:38 PM

How much of that is down to the shares?  If you created a pool accepting difficulty 100 shares then would your bandwidth requirements drop significantly?


Basically all of it. The website is doing virtually nothing compared to the poolserver. In the last 12 hours (the span of my log rotation) I've had 43,000 hits to the website. Most of them are API hits, which are pretty small (a couple of hundred bytes). I don't bother running detailed stats on it at the moment though.

In the same timeframe I've had 1,400,000 hits to the poolserver.
A getwork request is about 600 bytes, and a submit work is about 40 bytes.
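
For a sense of scale, here is a back-of-envelope sketch using the figures above. It is an assumption-laden estimate, crudely treating every poolserver hit as one getwork round trip; real traffic is larger once HTTP headers and TCP framing are counted:

```python
# Back-of-envelope payload bandwidth from the figures above (assumptions,
# not measurements): 1,400,000 poolserver hits in 12 hours, ~600-byte
# getwork responses, ~40-byte work submissions.
hits = 1_400_000
window_s = 12 * 3600                 # log-rotation window, in seconds

req_per_s = hits / window_s          # ~32 requests/second
out_bps = req_per_s * 600 * 8        # getwork responses, bits/second
in_bps = req_per_s * 40 * 8          # work submissions, bits/second

print(f"{req_per_s:.1f} req/s; out ~{out_bps / 1e6:.2f} Mbit/s payload; "
      f"in ~{in_bps / 1e6:.3f} Mbit/s payload")
# Raw JSON payload lands well under the ~3 Mbit/s observed earlier in the
# thread, suggesting HTTP/TCP overhead dominates the actual bandwidth.
```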


Title: Re: Bandwidth of a Pool
Post by: BurningToad on October 14, 2011, 04:07:59 PM
Last month, ArsBitcoin (~800 GH/s or so?) used the following:

data transfer in:  495.223 GB
data transfer out: 676.921 GB

Some of that is probably backing up files and such, but probably not too much.
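
For reference, converting monthly totals like these into average line rates (a sketch assuming a 30-day month and perfectly steady traffic, which no real pool has):

```python
# Convert a monthly transfer total into an average line rate.
def gb_per_month_to_mbit_s(gb: float, days: int = 30) -> float:
    """Average Mbit/s for `gb` gigabytes transferred over `days` days."""
    return gb * 1e9 * 8 / (days * 86400) / 1e6

print(gb_per_month_to_mbit_s(676.921))  # out: ~2.09 Mbit/s average
print(gb_per_month_to_mbit_s(495.223))  # in:  ~1.53 Mbit/s average
```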


Title: Re: Bandwidth of a Pool
Post by: Graet on October 14, 2011, 06:12:40 PM
Last month, ozco.in:
traffic in 224 GB
traffic out 457 GB
base hashrate ~70-90 GH/s


Title: Re: Bandwidth of a Pool
Post by: teukon on October 14, 2011, 06:36:36 PM
Basically all of it.

Interesting.  I wonder why pools use such low difficulty for their shares then.  With most pools, people submit many more shares than they receive payments, so the reason is certainly not related to variance.  Higher-difficulty shares would also ease the load on server resources, which would let pools operate more cheaply.

Is there some technical reason why difficulty-1 shares are preferred?


Title: Re: Bandwidth of a Pool
Post by: Caesium on October 14, 2011, 06:49:10 PM
The more you raise the difficulty miners solve at, the less granular your picture of their hashrate becomes. Low-hashrate miners may not even get to submit a share between longpolls in extreme instances. I dare say you could safely raise it to 2 or 4 without too much trouble though.

Probably the reason most don't do it is that pushpool is set to 1 by default, and few pool ops are skilled enough to change this without causing all manner of side effects. Personally, my load and network usage are well within acceptable parameters, so raising it would just cost share granularity for no real gain.
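
To put numbers on the granularity tradeoff, here is a sketch assuming a steady hashrate and shares averaging difficulty × 2^32 hashes each:

```python
# Expected seconds between shares for a steadily-hashing miner:
# one share takes difficulty * 2**32 hashes on average.
def seconds_per_share(hashrate_hps: float, difficulty: int = 1) -> float:
    return difficulty * 2**32 / hashrate_hps

for diff in (1, 2, 4, 100):
    t = seconds_per_share(100e6, diff)  # a 100 MH/s miner
    print(f"difficulty {diff:>3}: one share every ~{t:,.0f} s")
# difficulty 1: ~43 s; difficulty 4: ~172 s; difficulty 100: ~4,295 s,
# i.e. over an hour -- many longpolls can pass without a single share.
```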


Title: Re: Bandwidth of a Pool
Post by: teukon on October 14, 2011, 06:59:14 PM
The more you raise the difficulty miners solve at, the less granular your picture of their hashrate becomes. Low-hashrate miners may not even get to submit a share between longpolls in extreme instances. I dare say you could safely raise it to 2 or 4 without too much trouble though.

Probably the reason most don't do it is that pushpool is set to 1 by default, and few pool ops are skilled enough to change this without causing all manner of side effects. Personally, my load and network usage are well within acceptable parameters, so raising it would just cost share granularity for no real gain.

I certainly understand if it's not an easy parameter to change and the servers can take it anyway.

I don't see how share granularity is much of a plus, though.  There is a bonus in that the pool can quickly detect when a user has stopped mining and send out a cautionary e-mail, and low-difficulty shares can be helpful for people trying to measure stales too.  Other than that, I don't see the problem with submitting no shares between longpolls, or why the pool needs to know users' hashrates.

Given the low fee that most pools ask for, I might expect a pool server to have to watch its BTC/watt in a similar way to a miner, so surely there is an incentive to make the servers more efficient.

Ah well, this is just curiosity.  I don't run a pool server nor do I intend to start.


Title: Re: Bandwidth of a Pool
Post by: Caesium on October 14, 2011, 07:04:42 PM

Other than that, I don't see the problem with submitting no shares between longpolls, or why the pool needs to know users' hashrates.


I suppose mostly that's psychological. A user likes to see that the pool knows their approximate hashrate; it's the fastest way to confirm that their setup is working properly and their shares are being accounted for. My users tend to use the hashrate estimate as their go-to stat - if it's more than 20% out (acceptable variance given the way we work it out), they know something is wrong and can investigate further.

If a user has to go more than 10 minutes without submitting a single share, it would be very difficult for us to work out this figure to any acceptable accuracy over a reasonably short timeframe.
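
The estimate in question is roughly hashrate ≈ shares × difficulty × 2^32 / window. A sketch of the arithmetic (not necessarily how rfcpool computes it):

```python
# Estimate a miner's hashrate from shares submitted in a time window.
# With few shares per window, variance swamps the estimate -- the
# problem described above for high difficulties and slow miners.
def estimate_hashrate(shares: int, window_s: float,
                      difficulty: int = 1) -> float:
    return shares * difficulty * 2**32 / window_s

print(estimate_hashrate(14, 600) / 1e6)   # ~100 MH/s from 14 shares/10 min
# At difficulty 100 the same miner averages ~0.14 shares per 10 minutes;
# a single share in the window would imply ~716 MH/s, 7x the true rate.
print(estimate_hashrate(1, 600, 100) / 1e6)
```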


Title: Re: Bandwidth of a Pool
Post by: teukon on October 14, 2011, 07:40:37 PM
I suppose mostly that's psychological. A user likes to see that the pool knows their approximate hashrate; it's the fastest way to confirm that their setup is working properly and their shares are being accounted for. My users tend to use the hashrate estimate as their go-to stat - if it's more than 20% out (acceptable variance given the way we work it out), they know something is wrong and can investigate further.

If a user has to go more than 10 minutes without submitting a single share, it would be very difficult for us to work out this figure to any acceptable accuracy over a reasonably short timeframe.

That's very true.  I was used to this effect from solo mining, and I admit it was much easier to be sure everything was working when you could see shares rolling in.

Still, if a server is having resource issues then dropping to difficulty-2 shares seems like a much better idea than renting/buying a second server.


Title: Re: Bandwidth of a Pool
Post by: worldinacoin on October 15, 2011, 05:06:42 AM
That will cost a ton of money if the servers are in Australia. 

Last month, ozco.in:
traffic in 224 GB
traffic out 457 GB
base hashrate ~70-90 GH/s


Title: Re: Bandwidth of a Pool
Post by: Graet on October 15, 2011, 05:11:33 AM
That will cost a ton of money if the servers are in Australia. 

Last month, ozco.in:
traffic in 224 GB
traffic out 457 GB
base hashrate ~70-90 GH/s
Yes, we are currently setting up a US server in Dallas to take some of that pressure off :D


Title: Re: Bandwidth of a Pool
Post by: worldinacoin on October 15, 2011, 07:37:45 AM
The last quote I got, I fell off my chair :), they were pricing per 50 GB blocks of bandwidth.  I think there is a difference in US bandwidth; if I remember correctly, northern-US IDCs are faster to Australia than the rest.  But do correct me, old folks do not have good memories :).  BTW, are you planning merged mining?  I was about to sign up with you yesterday, but found no merged mining.


Title: Re: Bandwidth of a Pool
Post by: Graet on October 15, 2011, 08:56:17 AM
We are watching merged mining, but have other things (like the US server) on our priority list before we can implement it :)
Paying 55 BTC per block though :)


Title: Re: Bandwidth of a Pool
Post by: shads on October 15, 2011, 10:44:33 AM
A getwork request is about 600 bytes, and a submit work is about 40 bytes.

submit work should be a lot more than 40 bytes... Unless you mean outbound...

Getwork could be reduced to about 40 bytes, plus maybe another 40 for TCP overhead, with a proper differential binary protocol.  80 bytes vs 640 = ~88% reduction.  I'm thinking of working on that as part of the next phase of poolserverj development, but I'd be interested to hear if bandwidth costs really are an issue for pool ops... If so, I just have to hope some miner devs will step up and implement the client side of it.

Basically, the first request contains all the fields of a normal getwork (though in binary) except the midstate, which is redundant.  Subsequent requests contain only a new merkle root and timestamp; these are the only fields that actually change except at longpoll time.
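
As an illustration of the sizes involved (a hypothetical wire layout, not poolserverj's actual protocol), the steady-state message only needs the two changing fields:

```python
import struct

# Hypothetical differential getwork encoding (illustration only, not
# poolserverj's real wire format). The first response carries the full
# block-header template; later responses carry only the fields that
# change between works: merkle root (32 bytes) + ntime (4 bytes).
def full_work(version, prev_hash, merkle_root, ntime, nbits):
    # 4 + 32 + 32 + 4 + 4 = 76 bytes; the nonce is the miner's to fill
    return struct.pack("<I32s32sII", version, prev_hash, merkle_root,
                       ntime, nbits)

def delta_work(merkle_root, ntime):
    # 32 + 4 = 36 bytes -- in line with the ~40-byte figure above
    return struct.pack("<32sI", merkle_root, ntime)

print(len(full_work(1, b"\x00" * 32, b"\x00" * 32, 0, 0)))  # 76
print(len(delta_work(b"\x00" * 32, 0)))                     # 36
```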


Title: Re: Bandwidth of a Pool
Post by: DeathAndTaxes on October 16, 2011, 09:17:25 PM
I certainly understand if it's not an easy parameter to change and the servers can take it anyway.

I don't see how share granularity is much of a plus, though.  There is a bonus in that the pool can quickly detect when a user has stopped mining and send out a cautionary e-mail, and low-difficulty shares can be helpful for people trying to measure stales too.  Other than that, I don't see the problem with submitting no shares between longpolls, or why the pool needs to know users' hashrates.

One thing to consider is that higher share difficulty punishes smaller miners.

Pools don't pay for partial shares, so there is already an advantage to being a higher-hashrate miner.

Difficulty 1 = ~4 billion hashes (2^32 ≈ 4.29 billion).

With 10 minutes per block change, a miner at 100 MH/s will on average complete ~14 shares per block change.

At 400 MH/s, a miner will on average complete ~56 shares per block change.

The reality is that the miner has technically completed some fraction of a share which is lost in the block change.  However, for the slower miner it is a larger % of their aggregate output.

14.5 shares completed = 14 shares accepted = 0.5 shares lost ≈ 4%
56.5 shares completed = 56 shares accepted = 0.5 shares lost ≈ <1%

The effect is small but real.  Higher-throughput miners suffer less "block change friction" than lower-throughput miners.

If a pool paid for the exact amount of work completed this would be a non-issue, but doing that isn't possible.  A pool approximates work by counting ONLY FULL SHARES.

With a difficulty-4 share:
100 MH/s = ~3.5 shares per block change.  Assuming a fractional loss of 0.5 shares = ~14% inefficiency
400 MH/s = ~14 shares per block change.  Assuming a fractional loss of 0.5 shares = ~4% inefficiency

Yes, this does mean that even today one 400 MH/s GPU is worth slightly more than two 200 MH/s GPUs.
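
A sketch of the arithmetic in this post's model (assuming, as the post does, a flat half-share of work lost at each block change; the replies below dispute that premise):

```python
# The "block change friction" model described above: a flat half-share
# is assumed lost at every block change. (The replies below argue that
# no such partial progress exists -- treat this as the post's model,
# not established fact.)
def shares_per_block(hashrate_hps, difficulty=1, block_s=600):
    return hashrate_hps * block_s / (difficulty * 2**32)

def friction(hashrate_hps, difficulty=1):
    return 0.5 / shares_per_block(hashrate_hps, difficulty)

for mh in (100, 400):
    for diff in (1, 4):
        n = shares_per_block(mh * 1e6, diff)
        print(f"{mh} MH/s, diff {diff}: {n:.1f} shares/block, "
              f"{friction(mh * 1e6, diff):.1%} 'lost'")
```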


Title: Re: Bandwidth of a Pool
Post by: teukon on October 16, 2011, 09:30:30 PM
The reality is that the miner has technically completed some fraction of a share which is lost in the block change.

There is no such thing as a partially completed share.


Title: Re: Bandwidth of a Pool
Post by: DeathAndTaxes on October 16, 2011, 09:40:00 PM
The reality is that the miner has technically completed some fraction of a share which is lost in the block change.

There is no such thing as a partially completed share.


Exactly, that is the point.  However, when a block changes, a miner will have "wasted" any hashes being worked on.

Maybe my wording is unclear, but since shares are an artificial unit of measurement there is some "loss", which means a lower-throughput miner spends MORE hashes on average per accepted share than a higher-throughput miner does.

This is because the pool only sees work in "full share" steps.  If shares were smaller this effect would be smaller, and if shares were larger it would be bigger.

Currently a 400 MH/s GPU outperforms 2x 200 MH/s GPUs in terms of shares earned by ~3%, and outperforms 4x 100 MH/s GPUs by about 5%.  I know because I experimented by downclocking GPUs to simulate slower ones and running them for nearly a week to compare shares against hashrate.

With higher difficulty this effect will be increased.



Title: Re: Bandwidth of a Pool
Post by: teukon on October 16, 2011, 10:20:22 PM
Exactly, that is the point.  However, when a block changes, a miner will have "wasted" any hashes being worked on.

Maybe my wording is unclear, but since shares are an artificial unit of measurement there is some "loss", which means a lower-throughput miner spends MORE hashes on average per accepted share than a higher-throughput miner does.

This is because the pool only sees work in "full share" steps.  If shares were smaller this effect would be smaller, and if shares were larger it would be bigger.

Currently a 400 MH/s GPU outperforms 2x 200 MH/s GPUs in terms of shares earned by ~3%, and outperforms 4x 100 MH/s GPUs by about 5%.  I know because I experimented by downclocking GPUs to simulate slower ones and running them for nearly a week to compare shares against hashrate.

With higher difficulty this effect will be increased.

When hashing (assuming a fixed hashing rate), the time to find a share can be approximately modelled by an exponential distribution.  This distribution has a "lack of memory" property: the expected time until the next share is found is independent of past events.  To say that one has lost progress towards a share is to suggest that it is possible to make progress towards a share; this is simply not compatible with the lack-of-memory property.  A miner can no more make progress towards finding a share than a pool can make progress towards finding a block.
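
Formally, if the time T to the next share is exponentially distributed with rate λ, the memorylessness described here falls straight out of the survival function:

```latex
% Memorylessness of the exponential distribution: having already hashed
% for time s gives no "progress" toward the next share.
P(T > s + t \mid T > s)
  = \frac{P(T > s + t)}{P(T > s)}
  = \frac{e^{-\lambda (s + t)}}{e^{-\lambda s}}
  = e^{-\lambda t}
  = P(T > t)
```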

One thing which can cause the illusion of partial shares is frequently seeing rejects at the very beginning or end of a round; this is generally caused by setting high aggression.


Title: Re: Bandwidth of a Pool
Post by: twmz on October 18, 2011, 01:40:04 PM
The only hashes that a miner "wastes time on" are those that go stale because a new block has been found elsewhere on the network (and the miner does not know about it yet).  They are wasted because, if they do find a block, the block will very likely become orphaned.

As teukon said, all other hashing is not wasted, because every single hash you calculate is effectively you "starting over" in your search for a valid block.  No matter how many hashes you have already checked, the very next hash you check has the exact same probability of being valid or invalid.

No matter how many times you have flipped a coin and had it come up "heads", the next time you flip it, it still has exactly a 50% chance of coming up "heads".
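
A quick simulation sketch of this point (toy parameters; each hash is modelled as an independent Bernoulli trial and all in-flight work is cut off at block changes): shares found per hash come out the same for slow and fast miners, with no half-share penalty.

```python
import random

# Toy simulation: miners of different speeds across many block changes.
# Each hash is an independent Bernoulli trial, so throwing work away at
# a block change costs nothing in expectation -- the "no partial shares"
# point above. p_share stands in for 1/2**32, scaled up so this runs fast.
def shares_per_hash(hashes_per_block, n_blocks=10_000, p_share=1e-3):
    rng = random.Random(42)
    shares = 0
    for _ in range(n_blocks):
        for _ in range(hashes_per_block):  # work discarded at block change
            if rng.random() < p_share:
                shares += 1
    return shares / (hashes_per_block * n_blocks)

print(shares_per_hash(100))   # "slow" miner: ~0.001 shares per hash
print(shares_per_hash(400))   # "fast" miner: ~0.001 shares per hash
```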


Title: Re: Bandwidth of a Pool
Post by: deepceleron on October 19, 2011, 08:02:21 AM
bitcoins.lc has 2x 100 Mbit links, and it has still been DDoS'd by a single botnet actor until they got IP-banned.  You would want enough bandwidth to stay up in the face of evildoers.