Bitcoin Forum
Author Topic: ANTMINER S2 Discussion and Support Thread  (Read 355597 times)
googlemaster1
Sr. Member
****
Offline

Activity: 280
Merit: 250


View Profile
March 21, 2014, 09:03:38 PM
 #361

Capitalism at its finest right here, boys. China comin' in and doin' it for cheaper, hehehehe.

But what you didn't expect is them having better customer service to boot. US and Eurozone just got doubly fucked there.

BTC: 15565dcUp4LEWe6KYT7tawMHFRL4cBbFGN
smooth
Legendary
*
Offline

Activity: 2968
Merit: 1198



View Profile
March 21, 2014, 10:19:02 PM
Last edit: March 21, 2014, 10:32:00 PM by smooth
 #362

Yes, I had previously done that calculation as well. The only thing I can think of is if you (or maybe your employer) are already paying for a rack that is underutilized power-wise but has a little space left, you can add a few of the miners.

I couldn't give a rat's ass about nanometers or "sexy" miners (seriously, you guys need to get out more). All I care about is price, delivery, reliability, hash rate, and power usage.

Obviously, I don't have to agree with you 100% (except for the part that I should get out more), but you're right on the money with the fact that the ASIC doesn't need to look "pretty". Getting the job done is the right attitude, and it's not like we'd look like poster boys ourselves either.

Furthermore, it left me thinking about the 1.25U competitor. If they consume between 1.2-1.35kW of power, why bother with such a small form factor (and risk not being able to dissipate the heat effectively, hence more noise from fans spinning at max)? Yes, it is the coolest-looking kid on the blockchain, but as most data centers can normally only facilitate between 8 and 12kW of power per rack (including the equivalent in cooling), where lies the benefit in 1.25U when you can fit at most 10 of those into a 42U rack, leaving 70% of the rack unpopulated from a density point of view? Data centers that could facilitate more power are few and far between.

In this sense, BITMAIN seems to be spot on with the sizing - but it does come down to the price and how well they can position themselves on the market. No doubt more competition is coming, and the big names of the past are looking to regain their position.
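The rack-density argument above is easy to check numerically. A minimal sketch, assuming the figures quoted in the post (12kW usable per rack, 1.2kW per miner, 1.25U form factor, 42U rack):

```python
# Back-of-envelope check of the rack-density argument (assumed numbers only).
rack_power_kw = 12.0   # typical upper bound per rack, incl. cooling headroom
rack_space_u = 42
miner_power_kw = 1.2
miner_size_u = 1.25

max_by_power = int(rack_power_kw // miner_power_kw)   # power-limited count
max_by_space = int(rack_space_u // miner_size_u)      # space-limited count
units = min(max_by_power, max_by_space)               # the binding constraint

space_used_u = units * miner_size_u
fraction_empty = 1 - space_used_u / rack_space_u
print(f"{units} miners fit; {fraction_empty:.0%} of the rack is left empty")
```

With these assumptions the rack is power-limited at 10 units (12.5U of 42U), which is where the "70% unpopulated" figure comes from.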

dogie
Legendary
*
Offline

Activity: 1666
Merit: 1183


dogiecoin.com


View Profile WWW
March 21, 2014, 10:44:55 PM
 #363

Quote from: smooth on March 21, 2014, 10:19:02 PM
[...] In this sense, BITMAIN seems to be spot on with the sizing - but it does come down to the price and how well they can position themselves on the market.

What sort of servers are used that have higher densities of power than the usual rack? (fitting with the scenario you suggested above).

biggzi
Hero Member
*****
Offline

Activity: 543
Merit: 502



View Profile
March 21, 2014, 11:29:58 PM
 #364

BITMAIN, can you tell us when you will start taking orders for batch 3?
smooth
Legendary
*
Offline

Activity: 2968
Merit: 1198



View Profile
March 22, 2014, 02:45:08 AM
 #365

Quote from: dogie on March 21, 2014, 10:44:55 PM
What sort of servers are used that have higher densities of power than the usual rack? (fitting with the scenario you suggested above).

The situation I described would have lower power density. Thus it would have excess power, which could be brought closer to the power-density limit by putting some miners in the rack. An example would be servers that are usually idle, or large storage arrays. One might even speculate that "the Israelis" (I can't manage to get myself to memorize their name) have such a scenario in mind for their hosting product, or a major customer who does.

klondike_bar
Legendary
*
Offline

Activity: 2128
Merit: 1005

ASIC Wannabe


View Profile
March 22, 2014, 03:04:39 AM
 #366

Quote from: dogie on March 21, 2014, 10:44:55 PM
What sort of servers are used that have higher densities of power than the usual rack? (fitting with the scenario you suggested above).

There are examples of server racks running 3 Antminers per 4U, i.e. more than 30 Antminers or ~12kW per rack. I assume that with enough airflow the high power demand can be handled. Most racks, though, are not equipped with more than 1 or 2 6" PDU units, generally capable of 4-6kW each depending on the outlet style. However, adding another PDU, or simply employing multiple shorter but equally powerful PDUs, would at least deliver the power to the rack.

It's also quite possible that not many datacenters expect such power density, and they may not have enough available outlets per rack even if airflow/cooling are not a limitation.
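The 4-6kW-per-PDU range quoted above can be sanity-checked from amperage and voltage alone. A rough sketch, assuming the common 80% continuous-load derating (exact derating rules vary by jurisdiction and DC policy):

```python
# Rough usable-power estimate for a PDU (assumed 80% continuous derating).
def pdu_usable_kw(amps, volts, phases=1, derate=0.8):
    """Usable continuous power for a simple single- or three-phase circuit."""
    if phases == 1:
        va = amps * volts
    else:  # three-phase, line-to-line voltage
        va = 3 ** 0.5 * amps * volts
    return va * derate / 1000

print(pdu_usable_kw(30, 208))             # ~5.0 kW: middle of the 4-6kW range
print(pdu_usable_kw(30, 208, phases=3))   # ~8.6 kW on a 3-phase feed
```

So a single 30A 208V PDU sits squarely in the "4-6kW each" range, and it takes two of them (or a 3-phase feed) to approach the 12kW rack described above.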

24" PCI-E cables with 16AWG wires and stripped ends - great for server PSU mods, best prices https://bitcointalk.org/index.php?topic=563461
No longer a wannabe - now an ASIC owner!
1l1l11ll1l
Legendary
*
Offline

Activity: 1274
Merit: 1000


View Profile WWW
March 22, 2014, 03:28:19 AM
 #367

Quote from: klondike_bar on March 22, 2014, 03:04:39 AM
[...] It's also quite possible that not many datacenters expect such power density, and they may not have enough available outlets per rack even if airflow/cooling are not a limitation.


The 2 DCs that I'm in are completely modular; I could order multiple 60A 3-phase connections if I wanted, so power density isn't an issue. I currently have 2x 30A 208V single phase per rack, which is enough for 27 S1s overclocked, and the plan will be for 10 S2s. Depending on the DC and their rules, you may be able to go above the 80%; at mine, the only thing I lose is the uptime guarantee if I trip my own breaker.

2 of these gives you a 10KW rack:

http://www.newegg.com/Product/Product.aspx?Item=N82E16812120346

That PDU doesn't take up very much space. If density required it, 4, 6 or even 8 is completely feasible; however, if you're running a 40KW rack, you'll probably want to think about 3-phase.
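The numbers in this post line up: two 30A 208V circuits at 80% continuous load give just under 10kW. A quick check, assuming roughly 360W per overclocked S1 and 1kW per S2 (neither figure is stated in the thread):

```python
# Sanity check on the per-rack figures above (miner wattages are assumptions).
circuits = 2
usable_w = circuits * 30 * 208 * 0.8   # 80% continuous rule -> ~9984 W
s1_w = 360    # assumed draw of an overclocked S1
s2_w = 1000   # assumed draw of an S2

print(usable_w / s1_w)   # ~27.7 -> 27 S1s fit
print(usable_w / s2_w)   # ~10 S2s
```

Both the "27 S1s overclocked" and "10 S2s" plans fall just inside the derated capacity, which is consistent with the comment about tripping a breaker by going above the 80%.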


Soros Shorts
Donator
Legendary
*
Offline

Activity: 1617
Merit: 1012



View Profile
March 22, 2014, 03:35:26 AM
 #368

Quote from: 1l1l11ll1l on March 22, 2014, 03:28:19 AM
[...] I currently have 2x 30A 208V single phase per rack, which is enough for 27 S1s overclocked, and the plan will be for 10 S2s.


May I ask how much a 30A 208V line costs per month in your DC?
MoreBloodWine
Legendary
*
Offline

Activity: 1050
Merit: 1001


View Profile
March 22, 2014, 03:43:57 AM
 #369

Quote from: 1l1l11ll1l on March 22, 2014, 03:28:19 AM
That PDU doesn't take up very much space. If density required it, 4, 6 or even 8 is completely feasible, however if you're running a 40KW rack, you'll probably want to think about 3-phase.
Instead of, say, a 60A 208V 3-phase PDU, couldn't someone just get 2 30A 220/240V PDUs, space permitting of course?

Might just mean an extra outlet / breaker etc.

To be decided...
klondike_bar
Legendary
*
Offline

Activity: 2128
Merit: 1005

ASIC Wannabe


View Profile
March 22, 2014, 04:06:07 AM
 #370

Quote from: MoreBloodWine on March 22, 2014, 03:43:57 AM
Instead of, say, a 60A 208V 3-phase PDU, couldn't someone just get 2 30A 220/240V PDUs, space permitting of course?

I had an electrician wire up 2x L6-30 sockets (30A 213V) and an L21-20 (20,20,20A 120V) for my space, giving me about 10kW of usable 213V and 6kW of 120V.
One PDU per outlet.
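Those usable-power figures check out arithmetically. A sketch, assuming the same 80% continuous derating as above:

```python
# Usable power from the outlets described above (assumed 80% derating).
l6_30_kw = 2 * 30 * 213 * 0.8 / 1000   # two L6-30 circuits at 213V
l21_20_kw = 3 * 20 * 120 * 0.8 / 1000  # L21-20: three 20A legs of 120V each
print(l6_30_kw, l21_20_kw)  # roughly 10.2 kW and 5.8 kW
```

That matches the quoted "about 10kW of usable 213V and 6kW of 120V", and also explains the three 20s in "20,20,20A": one 20A rating per phase leg of the L21-20.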

MoreBloodWine
Legendary
*
Offline

Activity: 1050
Merit: 1001


View Profile
March 22, 2014, 04:16:11 AM
 #371

Quote from: klondike_bar on March 22, 2014, 04:06:07 AM
I had an electrician wire up 2x L6-30 sockets (30A 213V) and an L21-20 (20,20,20A 120V) for my space, giving me about 10kW of usable 213V and 6kW of 120V. One PDU per outlet.

I don't get the 20,20,20A 120V socket, I mean the whole three-20 thing.

smooth
Legendary
*
Offline

Activity: 2968
Merit: 1198



View Profile
March 22, 2014, 04:33:52 AM
 #372

Typical datacenter limits are 10-20kW per rack. I've also seen this stated as watts per square foot. If you need more watts in a rack, they may be able to handle it, but you have to rent more datacenter space even if you aren't using it, so again you end up with empty space. There is no real advantage to such high power densities unless there is an angle to it, such as some unusual situation like extra available power capacity with little available space.

I have no doubt it is possible to design a non-commodity datacenter with >10-20kW/rack, but then there will be additional costs, and the economics of those 1.25U miners, which already aren't that good, get even worse.

I'm also not aware of anyplace on earth with very low power costs that doesn't also have lots of available space.

The economics of 1 kW/U just don't make sense without some missing piece.

Swimmer63
Legendary
*
Offline

Activity: 1593
Merit: 1004



View Profile
March 22, 2014, 09:49:36 AM
 #373

2x 30A, 120V - $1075/month. Full cabinet.
Skaterdiejosh
Member
**
Offline

Activity: 115
Merit: 10


View Profile WWW
March 22, 2014, 02:59:27 PM
 #374

So it's pretty much Dragon Miner vs Antminer S2! The one thing I wonder is if Bitmain will be able to keep up with China on manufacturing these things. You can get a 1 TH/s Dragon Miner right now and have it at your door, like the sold-out batch one S2 orders, by April 1st for $3600! I just hope Bitmain gets some stock going and stops doing pre-order batches. 1 or 2 batches is cool, but once you go past that, people are paying for a pre-order and getting it months later. Anything longer than 2 weeks after you pay is too long. People need to get the clue: there is no reason to do pre-orders for any type of mining hardware! Those days are over; there is way too much competition for that crap! And that goes for KnCMiner and any other manufacturer!

Tips -  13q4b2Rq2dA579KShuC2LapSK8E94ZeAWy
https://coinbase.com/?r=5276ab
Aexcu
Full Member
***
Offline

Activity: 130
Merit: 100


View Profile
March 22, 2014, 03:04:53 PM
 #375

Agreed, I detest this "Batch x, shipping on yy of month" crap.

Either have stock or don't. That'd be more in line with the way we've grown to expect BITMAIN to do business. Besides, it was fun.

klondike_bar
Legendary
*
Offline

Activity: 2128
Merit: 1005

ASIC Wannabe


View Profile
March 22, 2014, 03:42:09 PM
 #376

Quote from: MoreBloodWine on March 22, 2014, 04:16:11 AM
I don't get the 20,20,20A 120V socket, I mean the whole three-20 thing.

It's a 5-pin outlet (L, L, L, N, G), so there are 3 phases of 120V available on the same power strip.

klondike_bar
Legendary
*
Offline

Activity: 2128
Merit: 1005

ASIC Wannabe


View Profile
March 22, 2014, 03:44:58 PM
 #377

Quote from: Swimmer63 on March 22, 2014, 09:49:36 AM
2x 30A, 120V - $1075/month. Full cabinet.

So about 6kW plus space? That's pretty reasonable; a lot of places I looked at wanted about $1100-$1300. I ended up building my own space with rack shelving that has up to 18kW available ($0.15/kWh), and rent is $300/month.
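The colo-versus-self-build trade can be made concrete. A sketch under stated assumptions: the colo price includes power, the self-built space pays metered power on top of rent, ~730 hours per month, and the same ~6kW load in both:

```python
# Monthly cost comparison (all figures from the posts above; load is assumed).
hours = 730                # approximate hours in a month
colo_monthly = 1075.0      # full cabinet with 2x 30A 120V, power included
diy_rent = 300.0
diy_load_kw = 6.0          # running the same ~6 kW load in the DIY space
diy_monthly = diy_rent + diy_load_kw * hours * 0.15   # $0.15/kWh metered
print(diy_monthly)         # ~$957/month, with headroom up to 18 kW
```

At equal load the DIY space comes out slightly cheaper, and the real advantage is the 18kW ceiling; the break-even shifts if the colo's bundled power or the DIY buildout costs are priced differently.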

baltimoreppc
Newbie
*
Offline

Activity: 52
Merit: 0


View Profile
March 22, 2014, 04:38:09 PM
 #378

Quote from: klondike_bar on March 22, 2014, 03:44:58 PM
[...] I ended up building my own space with rack shelving that has up to 18kW available ($0.15/kWh), and rent is $300/month.

How did you build out? Can you share with us? I'm thinking about doing the same thing.
klondike_bar
Legendary
*
Offline

Activity: 2128
Merit: 1005

ASIC Wannabe


View Profile
March 22, 2014, 05:06:54 PM
 #379

Quote from: baltimoreppc on March 22, 2014, 04:38:09 PM
How did you build out? Can you share with us? I'm thinking about doing the same thing.


You can't see two of the outlets in this photo. I have 4 Antminers in the stacked black crates at the side that I will probably unpack into the shelves on Tuesday, as well as 3 more Antminers on the way. The L21-20 socket was stupidly expensive; I would not use it again. Stick with L6-30 outlets and a 5-20 duplex if you have anything that CAN'T use 220V.

The shelf was $50, the network switch was $10, and I got most of the 10ft LAN cables for free or $0.50 each. The C13-C14 power cables cost about $3-5 depending on length.

1l1l11ll1l
Legendary
*
Offline

Activity: 1274
Merit: 1000


View Profile WWW
March 22, 2014, 08:30:24 PM
 #380

Quote from: Soros Shorts on March 22, 2014, 03:35:26 AM
May I ask how much a 30A 208V line costs per month in your DC?

$585/mo
