Bitcoin Forum
Author Topic: Bandwidth of a Pool  (Read 4213 times)
worldinacoin (OP)
Hero Member
Activity: 756
Merit: 500
October 14, 2011, 01:24:46 PM
#1
Just checking what is the ball park figure of the bandwidth that we need to prepare for a mining pool? 
Caesium
Hero Member
Activity: 546
Merit: 500
October 14, 2011, 01:52:20 PM
Last edit: October 14, 2011, 02:07:23 PM by Caesium
#2

I can only tell you what I've observed: rfcpool is using about 3 Mbit/s outbound and 2 Mbit/s inbound to do 50 GH/s.

worldinacoin (OP)
Hero Member
Activity: 756
Merit: 500
October 14, 2011, 02:26:14 PM
#3

Wow, the larger pools must be using tons of bandwidth.  This will be cheap in the USA and certain parts of Europe but expensive everywhere else. :(
teukon
Legendary
Activity: 1246
Merit: 1002
October 14, 2011, 03:02:31 PM
#4

Quote from: Caesium
Can only tell you what I've observed - rfcpool is using about 3Mbit outbound, 2Mbit inbound, to do 50GH/s.

How much of that is down to the shares?  If you created a pool accepting difficulty 100 shares then would your bandwidth requirements drop significantly?
Caesium
Hero Member
Activity: 546
Merit: 500
October 14, 2011, 03:43:38 PM
#5


Quote from: teukon
How much of that is down to the shares?  If you created a pool accepting difficulty 100 shares then would your bandwidth requirements drop significantly?


Basically all of it. The website is doing virtually nothing compared to the poolserver. In the last 12 hours (the time since my log rotation) I've had 43,000 hits to the website. Most of them are API hits, which are pretty small (a couple of hundred bytes). I don't bother running detailed stats on it at the moment, though.

In the same timeframe I've had 1,400,000 hits to the poolserver.
A getwork request is about 600 bytes, and a submit work is about 40 bytes.
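A rough sanity check of these figures, taking the 1,400,000 poolserver hits over 12 hours and the ~600-byte getwork responses as givens (the overhead multiplier below is an assumption, not something from the post):

```python
# Back-of-envelope check of the poolserver figures quoted above.
# Assumption: the hits are dominated by getwork requests, and HTTP
# headers plus TCP framing roughly triple the raw payload size.

hits = 1_400_000          # poolserver hits in 12 hours (from the post)
window_s = 12 * 3600      # 12 hours in seconds
getwork_bytes = 600       # approximate getwork response payload

req_per_sec = hits / window_s
payload_kbit = req_per_sec * getwork_bytes * 8 / 1000
overhead_factor = 3       # assumed HTTP/TCP overhead multiplier

print(f"{req_per_sec:.1f} requests/s")
print(f"raw payload: {payload_kbit:.0f} kbit/s outbound")
print(f"with overhead: {payload_kbit * overhead_factor / 1000:.2f} Mbit/s")
```

At ~32 requests/s this gives only ~156 kbit/s of raw payload, well short of the ~3 Mbit observed at rfcpool, which suggests HTTP headers (often several hundred bytes per request) and longpoll pushes account for most of the real traffic.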

BurningToad
Full Member
Activity: 207
Merit: 100
October 14, 2011, 04:07:59 PM
#6

Last month, ArsBitcoin (~800 GH/s or so) used the following:

data transfer in    495.223 GB
data transfer out 676.921 GB

Some of that is probably backing up files and such, but probably not too much.

Graet
VIP
Legendary
Activity: 980
Merit: 1001
October 14, 2011, 06:12:40 PM
#7

last month ozco.in
traffic in 224 GB
traffic out 457 GB
base hashrate ~70-90Ghash/s

| Ozcoin Pooled Mining Pty Ltd https://ozcoin.net Double Geometric Reward System https://lc.ozcoin.net for Litecoin mining DGM| https://crowncloud.net VPS and Dedicated Servers for the BTC community
teukon
Legendary
Activity: 1246
Merit: 1002
October 14, 2011, 06:36:36 PM
#8

Quote from: Caesium
Basically all of it.

Interesting.  I wonder why pools use such low difficulty for their shares, then.  With most pools, people submit many more shares than they receive payments, so the reason is certainly not variance.  Higher-difficulty shares would also reduce server load, which would let pools operate more cheaply.

Is there some technical reason why difficulty-1 shares are preferred?
Caesium
Hero Member
Activity: 546
Merit: 500
October 14, 2011, 06:49:10 PM
#9

The more you raise the difficulty miners solve at, the less granularity you have for estimating their hashrate. Low-hashrate miners may not even get to submit a share between longpolls in extreme instances. I dare say you could safely raise it to 2 or 4 without too much trouble, though.

Probably the reason most don't do it is that pushpool is set to 1 by default, and few pool ops are skilled enough to change this without causing all manner of side effects. Personally, my load and network usage are well within acceptable parameters, so raising it would just cost share granularity for no real gain.
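The granularity trade-off described here can be quantified: a difficulty-d share takes on average d × 2^32 hashes to find, so the expected time between shares is d × 2^32 / hashrate. A small sketch (the example hashrates are illustrative, not from the thread):

```python
# Expected time between shares as a function of share difficulty.
# Finding a difficulty-d share requires on average d * 2**32 hashes.

def seconds_per_share(hashrate_hps: float, difficulty: int = 1) -> float:
    return difficulty * 2**32 / hashrate_hps

for mhps in (100, 400):                 # illustrative miner speeds, MH/s
    for d in (1, 2, 4):
        t = seconds_per_share(mhps * 1e6, d)
        print(f"{mhps} MH/s, diff {d}: one share every {t:.0f} s")
```

A 100 MH/s miner submits a difficulty-1 share roughly every 43 s on average; raising the difficulty to 4 stretches that to nearly three minutes, which is where the "low-hashrate miners may not submit a share between longpolls" concern comes from.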

teukon
Legendary
Activity: 1246
Merit: 1002
October 14, 2011, 06:59:14 PM
#10

Quote from: Caesium
The more you raise the difficulty miners solve at, the less granular you can be sure of their hashrate. Low hashrate miners may not even get to submit a share in between longpolls in extreme instances. I dare say you could safely raise it to 2 or 4 without too much trouble though.

Probably the reason most don't do it is pushpool is set to 1 by default and few pool ops are skilled enough to change this without causing all manner of side effects. Personally, my load and network usage are well within acceptable parameters so raising it would just cause loss of share granularity for no real gain.

I certainly understand if it's not an easy parameter to change and the servers can take it anyway.

I don't see how share granularity is much of a plus, though.  There is a bonus in that the pool can quickly detect when a user has stopped mining and send out a cautionary e-mail.  Low-difficulty shares can be helpful for people trying to measure stales too.  Other than that, I don't see the problem with submitting no shares between longpolls, or why the pool needs to know users' hashrates.

Given the low fee that most pools ask for, I would expect a pool server to have to watch its BTC/Watt in much the same way a miner does, so surely there is an incentive to make the servers more efficient.

Ah well, this is just curiosity.  I don't run a pool server, nor do I intend to start.
Caesium
Hero Member
Activity: 546
Merit: 500
October 14, 2011, 07:04:42 PM
#11


Quote from: teukon
Other than that I don't see the problem with submitting no shares between longpolls or why the pool needs to know user's hashrates.


I suppose mostly that's psychological. A user likes to see that the pool knows their approximate hashrate; it's the fastest way to confirm that their setup is working properly and their shares are being accounted for. My users tend to use the hashrate estimate as their go-to stat: if it's more than 20% out (acceptable variance given the way we work it out), they know something is wrong and can investigate further.

If a user has to go more than 10 minutes without submitting a single share, it would be very difficult for us to work out this figure to any acceptable accuracy over a reasonably short timeframe.
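The estimate described here can be made concrete: counting n accepted shares of difficulty d over t seconds gives a hashrate estimate of n·d·2^32/t, and because share arrivals are approximately Poisson, the relative standard error is about 1/√n. A sketch (the window length and miner speed are illustrative assumptions):

```python
import math

# Estimate a miner's hashrate from accepted shares, plus the relative
# error of that estimate. Share finding is approximately a Poisson
# process, so the relative standard error is about 1/sqrt(n_shares).

def estimate_hashrate(n_shares: int, difficulty: int, window_s: float):
    rate = n_shares * difficulty * 2**32 / window_s
    rel_err = 1 / math.sqrt(n_shares) if n_shares else float("inf")
    return rate, rel_err

# Illustrative: a miner that found 14 difficulty-1 shares in 10 minutes.
rate, err = estimate_hashrate(14, 1, 600)
print(f"~{rate / 1e6:.0f} MH/s, +/- {err:.0%}")
```

With only ~14 shares in the window the estimate is good to roughly ±27%, which is why even a 20% tolerance needs a fairly long averaging window, and why difficulty-4 shares (a quarter as many samples) would make the figure much noisier.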

teukon
Legendary
Activity: 1246
Merit: 1002
October 14, 2011, 07:40:37 PM
#12

Quote from: Caesium
I suppose mostly that's psychological. A user likes to see that the pool knows their approximate hashrate, its the fastest way to confirm that their setup is working properly and their shares are being accounted for. My users tend to use the hashrate estimate as their go-to stat - if its more than 20% out (acceptable variance given the way we work it out), they know something is wrong and can investigate further.

If a user has to go more than 10 minutes without submitting a single share it would be very difficult for us to work out this figure to any acceptable estimate, over a reasonably short timeframe.

That's very true.  I was used to this effect from solo mining, and I admit it was much easier to be sure everything was working when you could see shares rolling in.

Still, if a server is having resource issues, dropping to difficulty-2 shares seems like a much better idea than renting or buying a second server.
worldinacoin (OP)
Hero Member
Activity: 756
Merit: 500
October 15, 2011, 05:06:42 AM
#13

That will cost a ton of money if the servers are in Australia.

Quote from: Graet
last month ozco.in
traffic in 224 GB
traffic out 457 GB
base hashrate ~70-90Ghash/s
Graet
VIP
Legendary
Activity: 980
Merit: 1001
October 15, 2011, 05:11:33 AM
#14

Quote from: worldinacoin
That will cost a ton of money if the servers are in Australia.

Yes, we are currently setting up a US server based in Dallas to take some of that pressure off. :D

worldinacoin (OP)
Hero Member
Activity: 756
Merit: 500
October 15, 2011, 07:37:45 AM
#15

The last quote I had, I fell off my chair :) ; they were pricing per 50 GB blocks of bandwidth.  I think there is a difference within US bandwidth; if I remember correctly, northern-US IDCs are faster to Australia than the rest.  But do correct me, old folks do not have good memories :) .  BTW, are you planning merged mining?  I was about to sign up with you yesterday, but found no merged mining.
Graet
VIP
Legendary
Activity: 980
Merit: 1001
October 15, 2011, 08:56:17 AM
#16

We are watching merged mining but have other things (like the US server) on our priority list before we can implement it. :)
Paying 55 BTC per block though. :)

shads
Sr. Member
Activity: 266
Merit: 254
October 15, 2011, 10:44:33 AM
#17

Quote from: Caesium
A getwork request is about 600 bytes, and a submit work is about 40 bytes.

Submit work should be a lot more than 40 bytes... unless you mean outbound?

Getwork could be reduced to about 40 bytes, plus maybe another 40 of TCP overhead, with a proper differential binary protocol: 80/640 = an 88% reduction.  I'm thinking of working on that as part of the next phase of poolserverj development, but I'd be interested to hear whether bandwidth costs really are an issue for pool ops... If so, we just have to hope some miner devs will step up and implement the client side of it.

Basically, the first request contains all the fields of a normal getwork (though in binary) except the midstate, which is redundant.  Subsequent requests contain only a new merkle root and timestamp; these are the only fields that actually change except at longpoll time.
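The sizes quoted here line up with the standard 80-byte block header layout. A sketch of the idea (field layout only; the function names, pack format, and wire framing are illustrative assumptions, not the actual poolserverj protocol):

```python
import struct

# Sketch of a differential binary getwork. The first message carries the
# whole header template (minus the nonce, which the miner iterates); later
# messages carry only the fields that change between getworks: the merkle
# root and the timestamp.

def full_work(version: int, prev_hash: bytes, merkle_root: bytes,
              ntime: int, nbits: int) -> bytes:
    assert len(prev_hash) == 32 and len(merkle_root) == 32
    return struct.pack("<I32s32sII", version, prev_hash, merkle_root,
                       ntime, nbits)

def delta_work(merkle_root: bytes, ntime: int) -> bytes:
    return struct.pack("<32sI", merkle_root, ntime)

first = full_work(1, b"\x00" * 32, b"\x11" * 32, 1318600000, 0x1a0abbcf)
later = delta_work(b"\x22" * 32, 1318600030)
print(len(first), len(later))   # 76-byte initial message, 36-byte delta
```

That 36-byte delta plus framing is roughly the "40 bytes + maybe another 40 for TCP overhead" figure in the post; the midstate can be omitted because the miner can recompute it from the first 64 bytes of the header.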

PoolServerJ Home Page - High performance java mining pool engine

Quote from: Matthew N. Wright
Stop wasting the internet.
DeathAndTaxes
Donator
Legendary
Activity: 1218
Merit: 1079
Gerald Davis
October 16, 2011, 09:17:25 PM
#18

Quote from: teukon
I certainly understand if it's not an easy parameter to change and the servers can take it anyway.

I don't see how share granularity is much of a plus though.  There is a bonus in that the pool can quickly detect when a user has stopped mining and send out a cautionary e-mail.  Low difficulty shares can be helpful for people trying to measure stales too.  Other than that I don't see the problem with submitting no shares between longpolls or why the pool needs to know user's hashrates.

One thing to consider is that higher share difficulty punishes smaller miners.

Pools don't pay for partial shares, so there is already an advantage to being a higher-hashrate miner.

A difficulty-1 share = 2^32 ≈ 4.3 billion hashes.

With ~10 minutes per block change, a 100 MH/s miner on average completes ~14 shares per block change, and a 400 MH/s miner ~56.

The reality is that the miner has technically completed some fraction of a share which is lost at the block change.  For the slower miner it is a larger % of their aggregate output:

~14.5 shares completed = 14 shares accepted = 0.5 shares lost ≈ 3.5%
~56.5 shares completed = 56 shares accepted = 0.5 shares lost ≈ <1%

The effect is small but real.  Higher-throughput miners suffer less "block change friction" than lower-throughput miners.

If a pool paid for the exact amount of work completed this would be a non-issue, but that isn't possible: a pool approximates work by counting ONLY FULL SHARES.

With difficulty-4 shares:
100 MH/s ≈ 3.5 shares per block change.  Assuming a fractional loss of 0.5 shares ≈ 12.5% inefficiency.
400 MH/s ≈ 14 shares per block change.  Assuming a fractional loss of 0.5 shares ≈ 3.5% inefficiency.

Yes, this does mean that even today one 400 MH/s GPU is worth slightly more than 2x 200 MH/s GPUs.
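The argument above follows from a simple model: a miner completes hashrate × 600 / (d × 2^32) shares per 10-minute block change and, under the post's assumption, loses a 0.5-share fraction at each change. Recomputing exactly gives slightly different numbers than the post's rounded ones; a sketch (the hashrates are illustrative, and the 0.5-share loss is the post's assumption, not an established fact):

```python
# The "block change friction" model from the post: only whole shares are
# credited, and on average half a share of work is in flight when the
# block changes.

def shares_per_block(hashrate_hps: float, difficulty: int,
                     block_s: float = 600) -> float:
    return hashrate_hps * block_s / (difficulty * 2**32)

LOST = 0.5  # assumed fractional share lost per block change (from the post)

for mhps in (100, 400):
    for d in (1, 4):
        n = shares_per_block(mhps * 1e6, d)
        print(f"{mhps} MH/s, diff {d}: ~{n:.1f} shares/block, "
              f"~{LOST / (n + LOST):.1%} lost")
```

Quadrupling the difficulty roughly quadruples the relative loss, and at any given difficulty the slower miner loses a larger fraction, which is the punchline of the post. Whether any work is actually "lost" at a block change is exactly what the following replies dispute.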
teukon
Legendary
Activity: 1246
Merit: 1002
October 16, 2011, 09:30:30 PM
#19

Quote from: DeathAndTaxes
The reality is the miner technically has completed some fraction of shares which are lost in the block change.

There is no such thing as a partially completed share.
DeathAndTaxes
Donator
Legendary
Activity: 1218
Merit: 1079
Gerald Davis
October 16, 2011, 09:40:00 PM
#20

Quote from: DeathAndTaxes
The reality is the miner technically has completed some fraction of shares which are lost in the block change.
Quote from: teukon
There is no such thing as a partially completed share.


Exactly, and that is the point: when a block changes, a miner will have "wasted" whatever hashes were in flight.

Maybe my wording is unclear, but since the share is an artificial unit of measurement there is some "loss", which means a lower-throughput miner needs MORE hashes on average per accepted share than a higher-throughput miner.

This is because the pool only sees work in "full share" steps.  If shares were smaller the effect would be weaker, and if shares were larger it would be stronger.

Currently a 400 MH/s GPU outperforms 2x 200 MH/s GPUs in shares earned by ~3%, and outperforms 4x 100 MH/s GPUs by about 5%.  I know because I experimented: I downclocked GPUs to simulate slower ones and ran them for nearly a week to compare shares against hashrate.

With higher difficulty this effect will be increased.
