Author Topic: [Klondike] Case design thread for K16  (Read 37919 times)
Miner99er
Sr. Member
****
Offline Offline

Activity: 310
Merit: 250


View Profile
June 11, 2013, 02:58:40 AM
 #61

The rack in my shop begs to differ, and begs to hold as many of those as it can.

Bought From Yochdogx2, Alexmat, SgtSpike, David_Benz, Beaflag VonRathburg, Slaveindebt, Cptmooseinc, Coinhoarder

Donations? SURE! 16foPr8FAjYXKL8ApQAzihnigXm1qNhi8Q

http://pyramining.com/referral/yfab9med7   
http://pyramining.com/referral/ahmc7en6z
http://pyramining.com/referral/pagndq4xc   
http://pyramining.com/referral/79b2gmrzx
http://pyramining.com/referral/e2ghz4asy
Bicknellski
Hero Member
*****
Offline Offline

Activity: 924
Merit: 1000



View Profile
June 11, 2013, 04:42:52 AM
 #62

The rack in my shop begs to differ, and begs to hold as many of those as it can.

Which? 1U server idea? K256? 700W?

Dogie trust abuse, spam, bullying, conspiracy posts & insults to forum members. Ask the mods or admins to move Dogie's spam or off topic stalking posts to the link above.
KS
Sr. Member
****
Offline Offline

Activity: 448
Merit: 250


View Profile
June 11, 2013, 04:47:32 PM
Last edit: June 11, 2013, 05:55:46 PM by KS
 #63


I can bet anyone 1 satoshi that no datacenter is gonna accept a unit so dense. Personally I think it's a little dangerous.

The DCs I have contacted recently will let you have a 2200W server per half rack.

edit: 2200W usual, 3500W peak
redphlegm
Sr. Member
****
Offline Offline

Activity: 246
Merit: 250


My spoon is too big!


View Profile
June 12, 2013, 08:05:57 PM
Last edit: June 12, 2013, 09:08:54 PM by redphlegm
 #64

Depending on the data center, you'll find most rack densities in the 3-7kW per rack range. At 42U, that's 71-167W per U on average. You can go denser if you want, but you're going to pay for the power / cooling that you use, and it won't do you any good to make it denser. Ultimately, the colo provider had to build their data center with a design power in mind, and if you chew up the power before the usable space, they can't sell bare white space without the power / cooling capacity to support it. You really can't look at it in terms of U-space or even physical (white) space alone. Stick to kW per rack or even watts per U. 100W per U is a pretty good number, since it would give you 4.2kW per rack if it were fully populated. Go higher and you're probably going to be using blanking plates.
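
For a rough sanity check, here's a back-of-envelope sketch in Python of the same arithmetic. The 42U rack and the ~32W-per-K16 figure (the estimate quoted later in this thread) are assumptions, not measured numbers:

Code:

# Back-of-envelope rack density check (assumes a 42U rack and ~32W per K16,
# the estimate quoted later in this thread).
RACK_UNITS = 42
WATTS_PER_K16 = 32

def watts_per_u(rack_kw, rack_units=RACK_UNITS):
    """Average power budget per rack unit for a given whole-rack budget."""
    return rack_kw * 1000.0 / rack_units

for rack_kw in (3, 4.2, 7):
    w_per_u = watts_per_u(rack_kw)
    print(f"{rack_kw:>4} kW/rack -> {w_per_u:6.1f} W per U -> ~{w_per_u / WATTS_PER_K16:.1f} K16 per U")

# Prints roughly: 3 kW -> 71.4 W/U (~2.2 K16), 4.2 kW -> 100 W/U (~3.1), 7 kW -> 166.7 W/U (~5.2)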

Source: I'm an 18-year "critical power industry" veteran.

Whiskey Fund: (BTC) 1whiSKeYMRevsJMAQwU8NY1YhvPPMjTbM | (Ψ) ALcoHoLsKUfdmGfHVXEShtqrEkasihVyqW
crazyates
Legendary
*
Offline Offline

Activity: 952
Merit: 1000



View Profile
June 13, 2013, 06:11:32 AM
 #65

...you'll find most rack densities in the 3-7kW per rack range. At 42U, that's 71-167W per U on average. ...Stick to kW per rack or even watts per U. 100W per U is a pretty good number, since it would give you 4.2kW per rack.
I can't remember how many W each K16 uses. How many K16s would 100W limit you to?

Tips? 1crazy8pMqgwJ7tX7ZPZmyPwFbc6xZKM9
Previous Trade History - Sale Thread
WynX
Member
**
Offline Offline

Activity: 77
Merit: 10


View Profile
June 13, 2013, 06:22:09 AM
 #66

I can't remember how many W each K16 uses. How many K16s would 100W limit you to?

32W per K16 estimated

3x32W=96W
4x32W=128W

not too much eh..
turtle83
Sr. Member
****
Offline Offline

Activity: 322
Merit: 250


Supersonic


View Profile WWW
June 13, 2013, 11:16:52 AM
 #67

I can't remember how many W each K16 uses. How many K16s would 100W limit you to?

32W per K16 estimated

3x32W=96W
4x32W=128W

not too much eh..

https://www.stackpop.com/configure/colocation/singapore/singapore/singapore/19885
4 kVA for 42U, $2500/month
125 x K16 = 4 kVA
125 x K16 = ~3 K16 per 1U (most likely there are some unusable slots, used for switches, spacing, etc.)

https://www.stackpop.com/configure/colocation/united_states/virginia/ashburn/17237
24 kVA, 59U for $1000 per month (or $2000; unsure if power is included or extra)
750 x K16 = 24 kVA
750 x K16 = ~12.7 K16 per 1U, or ~51 per 4U (most likely there are some unusable slots, used for switches, spacing, etc.)

But these can only be used once the stability of these devices is proven, so they don't need much babysitting by hand.

Cramming higher density into 1U is only viable if you are hosting the rack on your own premises and can arrange adequate cooling. But then why would one use 1U and not bigger cases, where you can have better airflow using bigger, more efficient fans?
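
As a rough illustration of the same sizing, here's a small Python sketch. It treats kVA as roughly kW (as the figures above do) and assumes ~32W per K16 plus ~5GH/s per K16, the latter being only implied later in the thread:

Code:

# Rough colo sizing sketch (assumes kVA ~= kW as in the figures above,
# ~32W per K16, and ~5GH/s per K16 as implied elsewhere in the thread).
WATTS_PER_K16 = 32
GHS_PER_K16 = 5  # assumption

def colo_fit(kva, rack_units, usd_per_month):
    boards = int(kva * 1000 // WATTS_PER_K16)
    per_u = boards / rack_units
    ghs = boards * GHS_PER_K16
    usd_per_ghs = usd_per_month / ghs
    print(f"{kva} kVA / {rack_units}U: {boards} boards (~{per_u:.1f} per U), "
          f"~{ghs} GH/s, ~${usd_per_ghs:.2f} per GH/s per month")

colo_fit(4, 42, 2500)    # the Singapore quote above
colo_fit(24, 59, 1000)   # the Ashburn quote above (if power is included)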

Bicknellski
Hero Member
*****
Offline Offline

Activity: 924
Merit: 1000



View Profile
June 13, 2013, 01:27:06 PM
 #68

I can't remember how many W each K16 uses. How many K16s would 100W limit you to?

32W per K16 estimated

3x32W=96W
4x32W=128W

not too much eh..

https://www.stackpop.com/configure/colocation/singapore/singapore/singapore/19885
4 kVA for 42U, $2500/month
125 x K16 = 4 kVA
125 x K16 = ~3 K16 per 1U (most likely there are some unusable slots, used for switches, spacing, etc.)

https://www.stackpop.com/configure/colocation/united_states/virginia/ashburn/17237
24 kVA, 59U for $1000 per month (or $2000; unsure if power is included or extra)
750 x K16 = 24 kVA
750 x K16 = ~12.7 K16 per 1U, or ~51 per 4U (most likely there are some unusable slots, used for switches, spacing, etc.)

But these can only be used once the stability of these devices is proven, so they don't need much babysitting by hand.

Cramming higher density into 1U is only viable if you are hosting the rack on your own premises and can arrange adequate cooling. But then why would one use 1U and not bigger cases, where you can have better airflow using bigger, more efficient fans?

For something like the 1U I designed to go into a datacenter, you will need 2nd or 3rd gen ASICs, where power is a third of what it is now. Maybe the 55nm Avalons will work; then it will be datacenter ready. That is why I will have to go the cheaper route and build my own data center? LOL... sounds funny to say, but it will be.

Dogie trust abuse, spam, bullying, conspiracy posts & insults to forum members. Ask the mods or admins to move Dogie's spam or off topic stalking posts to the link above.
TheSwede75
Full Member
***
Offline Offline

Activity: 224
Merit: 100



View Profile
June 13, 2013, 02:43:44 PM
 #69

Thinking... if the goal is co-location, I don't think datacenters allow 1kW for 1 or 2U of rackspace... it's just too dense...
As for flipping the boards: they are fin to fin so that the heat is pushed out through the fins, getting the best airflow from the fans. Also, the Klego is easier to put together in a fin-to-fin configuration. Access, however, is not easy, and maintenance and inspection will require a lot of time. But given this is a stanchion solution, it is easy enough to test both configurations by changing the stanchion length. I can test both ways and see which is better. I have a feeling fin to fin is optimal if the airflow is constant and in that Venturi-type configuration.

Not flipping the boards. Take the whole box *as is* and turn it upside down. You don't change the current layout, just put it on its head. Smiley

The heat would be trapped then... again, having the larger space above will allow heat to dissipate faster... heat rises. Flipping the design as it is would trap more heat, I think. Again, I can test all this out, run various configurations, and see. I think the best solution is to get airflow into both the upper and lower cavity and push the air out, and I think I might have to, given the ambient temperatures here in Indonesia. I am not interested in air-conditioning the floor just for a single unit, so I will have to be very concerned with airflow. The 3 X6500s I have at home seem to do well, although they run at between 40C and 52C all the time. I suspect that with the right heat sink and fan configuration, the K256 in a 1U should be more than adequate without AC. If I have multiple units, however, then I think colocation will have to be the option, or I'll build my own data center on the 3rd floor.

The Avalon boxes have a lot of air blowing at the chips and board, as well as fins, in their vertical configuration. The poor goop job results in less efficient heat transfer, as indicated in the thread of the Avalon user who pulled the heat sinks from the PCB. But it seems Avalon is really trying to tunnel the air into a very, very tight space. I just want to see if I can compact this down into something that is manageable size-wise. I am really keen on what might be possible with a GEN2 Avalon chip, so blade servers or hot-swappable SATA-style cards could be something that works as well with a preexisting server chassis. We will see what BKKCoins finds when he gets test boards and heat sinks, but I think the Klondike is easily configured, so testing this will be quite easy. Personally, seeing things like the Tesla 8-GPU server in a 3U/4U configuration, and some of the 2-GPU Teslas crammed into a 1U, gives me confidence that if you get the airflow and heat sinks right, it is possible to get this sort of density.

15% Air Flow Top (Left)
-----------
70% Air Flow Middle Fin-2-Fin Tunnel
-----------
15% Air Flow Bottom (Right)

That way you only need the fan banks front and back. 16 fans for 16 boards seems right, as that is what people will do in a stack configuration anyhow. It is such an amazing design that BKKCoins has come up with. Even if you are not going to do a lot of DIY board building, just configuring cases etc. is going to be great fun, and I bet GEN2 Klondikes will be even more versatile.

Isolated view of the low profile fin-2-fin tunnel concept for a K32 section:



This is a brilliant design! I especially like the idea since many cases already have high airflow designs for hard drive bays and extra fan mounts available stock.
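
For anyone wanting to put numbers on the 15/70/15 split quoted above, here's a rough Python sketch of the exhaust temperature rise through the middle fin-to-fin tunnel. The ~64W K32 section (2 x 32W K16), the ~20 CFM per-slot fan figure, and the assumption that essentially all of the heat leaves through the middle channel are illustrative, not measured values:

Code:

# Quick exhaust-temperature-rise estimate for the fin-to-fin tunnel idea above.
# All inputs are assumptions for illustration: a K32 section at ~64W (2 x 32W K16),
# a fan pushing ~20 CFM through the slot, 70% of it through the middle tunnel.
RHO_AIR = 1.2      # kg/m^3
CP_AIR = 1005.0    # J/(kg*K)
CFM_TO_M3S = 0.000471947

def delta_t(watts, cfm):
    """Air temperature rise (C) for a given heat load and volumetric flow."""
    m3s = cfm * CFM_TO_M3S
    return watts / (RHO_AIR * m3s * CP_AIR)

fan_cfm = 20.0               # assumed per-slot fan airflow
tunnel_cfm = 0.70 * fan_cfm  # the 70% middle channel from the sketch above
print(f"~{delta_t(64, tunnel_cfm):.1f} C rise through the fin tunnel")  # ~8 C
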
crazyates
Legendary
*
Offline Offline

Activity: 952
Merit: 1000



View Profile
June 13, 2013, 04:41:51 PM
 #70

I can't remember how many W each K16 uses. How many K16s would 100W limit you to?

32W per K16 estimated

3x32W=96W
4x32W=128W

not too much eh..

So each 1U server would literally be housing 3-4 tiny little boards for only 15-20GH/s tops. That's not a whole lot, especially when you consider the cost of the 1U server chassis, the PSU, controller, etc.

I'd rather go with a 3U or 4U case that can handle a decent ATX PSU, and throw a whole lot more of those K16s in there, maybe sandwiched together into pairs with fans between them?

Tips? 1crazy8pMqgwJ7tX7ZPZmyPwFbc6xZKM9
Previous Trade History - Sale Thread
trigeek
Sr. Member
****
Offline Offline

Activity: 252
Merit: 250


View Profile WWW
June 13, 2013, 06:13:25 PM
 #71

I can't remember how many W each K16 uses. How many K16s would 100W limit you to?

32W per K16 estimated

3x32W=96W
4x32W=128W

not too much eh..

So each 1U server would literally be housing 3-4 tiny little boards for only 15-20GH/s tops. That's not a whole lot, especially when you consider the cost of the 1U server chassis, the PSU, controller, etc.

I'd rather go with a 3U or 4U case that can handle a decent ATX PSU, and throw a whole lot more of those K16s in there, maybe sandwiched together into pairs with fans between them?

That would certainly be more cost effective. I've seen one proposal that tries to fit 72 K16s in a single 3U case... at that density you'd run out of power after only 6U of the rack is filled. I think that 2.5kW in a single 3U case is way overboard, but even if you cut that in half to 36 K16s, you'd need just over 16kW for the whole rack, and that is unheard of. It would be over 2TH/s in a single rack though, which is pretty sweet.
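
A quick Python check of those rack numbers, assuming ~32W and ~5GH/s per K16 (both estimates from earlier in the thread) and a 42U rack filled with 3U cases:

Code:

# Sanity check on the 3U numbers above (assumes ~32W per K16, ~5GH/s per K16,
# and a 42U rack fully populated with 3U cases).
WATTS_PER_K16 = 32
GHS_PER_K16 = 5  # assumption
CASES_PER_RACK = 42 // 3  # 14 x 3U cases

for boards_per_case in (72, 36):
    case_w = boards_per_case * WATTS_PER_K16
    rack_kw = case_w * CASES_PER_RACK / 1000.0
    rack_th = boards_per_case * CASES_PER_RACK * GHS_PER_K16 / 1000.0
    print(f"{boards_per_case} K16 per 3U: {case_w} W per case, "
          f"{rack_kw:.1f} kW and ~{rack_th:.1f} TH/s per full rack")

# 72 per case -> ~2.3 kW per 3U, so two cases (~4.6 kW) already hit a typical 4-5 kW rack budget;
# 36 per case -> ~1.2 kW per 3U and just over 16 kW for a full rack.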

# HashStrike $ Mining Pools -- Ruby -- Karma -- Mint -- Leaf -- Zeit -- Syn
** Low Fees ** Awesome support ** Super stable **

crazyates
Legendary
*
Offline Offline

Activity: 952
Merit: 1000



View Profile
June 13, 2013, 09:01:54 PM
 #72

That would certainly be more cost effective. I've seen one proposal that tries to fit 72 K16s in a single 3U case... at that density you'd run out of power after only 6U of the rack is filled. I think that 2.5kW in a single 3U case is way overboard, but even if you cut that in half to 36 K16s, you'd need just over 16kW for the whole rack, and that is unheard of. It would be over 2TH/s in a single rack though, which is pretty sweet.
It really depends on how much you want to cram into a rack. If you're just looking for a single, really dense unit, sure, you could cram them in there until they don't fit anymore. At that point, physical placement is your bigger concern. If you're talking about populating a full rack, power distribution becomes your limiting factor.

20 of them in a 4U chassis would be about 700W (including fans/controller), which is totally doable from a thermal perspective. A 4U chassis could vent a lot more than 700W of heat, especially if you use a lot of fans on the front and back in a push/pull configuration. 10 of those in a rack would be about 1TH/s, at ~7kW. That would be manageable, I think.
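
The same arithmetic for the 4U idea, in a short Python sketch; the ~60W of fan/controller overhead per chassis is an assumed figure:

Code:

# 4U scenario (assumes ~32W per K16, ~5GH/s per K16, ~60W fan/controller overhead per chassis).
boards = 20
chassis_w = boards * 32 + 60          # ~700W per 4U chassis
chassis_per_rack = 10                 # 40U of a 42U rack
rack_kw = chassis_w * chassis_per_rack / 1000.0
rack_ths = boards * chassis_per_rack * 5 / 1000.0
print(f"{chassis_w} W per 4U chassis, {rack_kw:.1f} kW and ~{rack_ths} TH/s per rack")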

Tips? 1crazy8pMqgwJ7tX7ZPZmyPwFbc6xZKM9
Previous Trade History - Sale Thread
turtle83
Sr. Member
****
Offline Offline

Activity: 322
Merit: 250


Supersonic


View Profile WWW
June 14, 2013, 03:54:25 AM
 #73

Had a random thought.

This is only for on premise hosting... not datacenter.

If we use such dense layouts, airflow is clearly channeled. All of the cool air would be taken in from one side and the hot air would come out the other end. Why not seal the outlets, attach ducts, and release the hot air outside? I guess the devices should run fine with 25-30C ambient in the tropics. One would normally need air conditioners because the hot air released raises the ambient much higher, but that wouldn't happen if you channel the heat out... The hot air should be 5-10C warmer than ambient, I presume...
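
To put rough numbers on that guess, here's a small Python sketch of how much ducted airflow a ~1kW rig would need for a given exhaust temperature rise (standard air properties assumed; the 1kW load is just an example figure):

Code:

# Rough check of the "5-10C warmer exhaust" guess: how much duct airflow a
# ~1kW rig needs for a given exhaust temperature rise. Air properties assumed.
RHO_AIR = 1.2    # kg/m^3
CP_AIR = 1005.0  # J/(kg*K)
M3S_TO_CFM = 2118.88

def cfm_needed(watts, delta_t_c):
    """Volumetric airflow (CFM) needed to carry `watts` of heat at a `delta_t_c` rise."""
    m3s = watts / (RHO_AIR * CP_AIR * delta_t_c)
    return m3s * M3S_TO_CFM

for dt in (5, 10):
    print(f"1000 W at +{dt} C exhaust: ~{cfm_needed(1000, dt):.0f} CFM of ducted airflow")
# ~350 CFM for a +5C rise, ~175 CFM for +10C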

I see hosting in a datacenter or controlling airflow as the only reasons for even needing a case... if no airflow control is needed, just keep naked PCBs...

crazyates
Legendary
*
Offline Offline

Activity: 952
Merit: 1000



View Profile
June 14, 2013, 04:11:37 AM
 #74

I drew this little diagram in another thread, but thought it might be applicable here.

If you're not in a datacenter, then sealing everything up and channeling your airflow is the best way to go. For proper cooling, you need a low ambient temp. If the miner is dumping 1000+W into your spare bedroom, then the ambient is going to skyrocket. You could throw an AC into the room to combat the increase in temp, or you could throw all the heat out the window, literally, and just keep a fresh supply of cooler air coming in.

This thread has some good ideas for properly insulating and exhausting your rigs. At that point, you don't need to worry about giant AC units; you would just need a steady stream of cool outside air. I live in the NE USA, and even in the summer it barely gets to 25C, and hardly ever to 30C. In the winter, it's not uncommon to see below 0C. Fresh, cool air isn't a huge concern for me. Wink

    COLD              HOT
    AIR               AIR
     |                 ^
     v                 |          OUTSIDE
---FAN---------------FAN---------------------
     |                 ^          INSIDE
     v                 |
      --> ASICs -->

Tips? 1crazy8pMqgwJ7tX7ZPZmyPwFbc6xZKM9
Previous Trade History - Sale Thread
turtle83
Sr. Member
****
Offline Offline

Activity: 322
Merit: 250


Supersonic


View Profile WWW
June 14, 2013, 04:21:54 AM
 #75

I drew this little diagram in another thread, but thought it might be applicable here.

If you're not in a datacenter, then sealing everything up and channeling your airflow is the best way to go. For proper cooling, you need a low ambient temp. If the miner is dumping 1000+W into your spare bedroom, then the ambient is going to skyrocket. You could throw an AC into the room to combat the increase in temp, or you could throw all the heat out the window, literally, and just keep a fresh supply of cooler air coming in.

This thread has some good ideas for properly insulating and exhausting your rigs. At that point, you don't need to worry about giant AC units; you would just need a steady stream of cool outside air. I live in the NE USA, and even in the summer it barely gets to 25C, and hardly ever to 30C. In the winter, it's not uncommon to see below 0C. Fresh, cool air isn't a huge concern for me. Wink

    COLD              HOT
    AIR               AIR
     |                 ^
     v                 |          OUTSIDE
---FAN---------------FAN---------------------
     |                 ^          INSIDE
     v                 |
      --> ASICs -->

I guess in winter you have free heating by removing the ducts...

Here the ambient can go as high as 35C!

The question is... if all the heat is thrown out the window, would a 35C peak ambient still need to be cooled using AC? If hot air is thrown out, naturally the room would suck in fresh ambient air from other sources to maintain the pressure...

redphlegm
Sr. Member
****
Offline Offline

Activity: 246
Merit: 250


My spoon is too big!


View Profile
June 14, 2013, 04:38:02 AM
 #76

While this seems good in principle, there are some things to keep in mind when using outside air for cooling.

1) Just exhausting air to the outside and using the room as the supply means that the negative pressure you're creating draws the air from somewhere else in the house / building. Exhausting the air out your window means the air to make up for what you're exhausting is going to be coming into the house from somewhere else. You may not feel it but it's happening.
2) Using an inlet and an outlet will remove this but then you have to worry about air quality, humidity, and even potentially rain. If you have a long enough duct or have one of those dryer vent covers, you'll avoid rain but humidity (specifically condensation) is a more difficult challenge. The last thing you want to have happen is for the temperature to fall below dewpoint and you start getting condensation (see the quick dewpoint check sketched after this list). Probably not going to be a huge issue since the temperature change across the device is going to be positive therefore producing exhaust air that is capable of holding more moisture than the intake air; but if it's enclosed and you're taking in cold air in the winter, it could potentially cause condensation on the exterior that could mess some things up. Additionally there's debris to worry about like dust, pollen, or whatever else. This can accumulate in the heat sinks or other airflow areas and either block airflow, foul heat transfer surfaces (meaning the internal temperature rises), or even accumulate and pose a fire risk.
3) In addition to humidity, electronics may experience damage or small cracks due to excessive rate of change of temperature (called heatup and cooldown rates). ASHRAE TC 9.9 (This is the 2011 version which was recently superseded by the 2012 version, though it's not publicly available from what I could find) is the industry standard for this type of thing. This is all to prolong life and prevent damage or danger to the environment. The point here is that if you have this running in the winter (presumably powered by an external fan) and the hash rate goes down (pool goes down, lose connection to internet, or any number of other reasons), if you continue to pump cold air through there, that could stress sensitive components and ultimately lead to premature failure.
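
On the condensation point in 2), here's a quick dewpoint check in Python using the Magnus approximation; the 30C / 80% RH inputs are just example values, not measurements:

Code:

# Quick dewpoint check for the condensation concern in point 2 above.
# Uses the Magnus approximation; inputs are example values, not measurements.
import math

def dew_point_c(temp_c, rel_humidity_pct):
    """Approximate dewpoint (C) from air temperature and relative humidity."""
    a, b = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# e.g. 30C outside air at 80% RH (tropics): any surface below ~26C will sweat.
print(f"Dewpoint: {dew_point_c(30, 80):.1f} C")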

There are some other considerations but these are the top things to keep in mind. It's not as simple as just rejecting heat to the outside. It can be done but if it was simply that easy, every data center would be doing it.

Whiskey Fund: (BTC) 1whiSKeYMRevsJMAQwU8NY1YhvPPMjTbM | (Ψ) ALcoHoLsKUfdmGfHVXEShtqrEkasihVyqW
turtle83
Sr. Member
****
Offline Offline

Activity: 322
Merit: 250


Supersonic


View Profile WWW
June 14, 2013, 04:58:02 AM
 #77

While this seems good in principle, there are some things to keep in mind when using outside air for cooling.

1) Just exhausting air to the outside and using the room as the supply means that the negative pressure you're creating draws the air from somewhere else in the house / building. Exhausting the air out your window means the air to make up for what you're exhausting is going to be coming into the house from somewhere else. You may not feel it but it's happening.
That is what I was counting on... the house/building temperature is not that warm, close to ambient.
2) Using an inlet and an outlet will remove this but then you have to worry about air quality, humidity, and even potentially rain. If you have a long enough duct or have one of those dryer vent covers, you'll avoid rain but humidity (specifically condensation) is a more difficult challenge. The last thing you want to have happen is for the temperature to fall below dewpoint and you start getting condensation. Probably not going to be a huge issue since the temperature change across the device is going to be positive therefore producing exhaust air that is capable of holding more moisture than the intake air; but if it's enclosed and you're taking in cold air in the winter, it could potentially cause condensation on the exterior that could mess some things up. Additionally there's debris to worry about like dust, pollen, or whatever else. This can accumulate in the heat sinks or other airflow areas and either block airflow, foul heat transfer surfaces (meaning the internal temperature rises), or even accumulate and pose a fire risk.
It doesn't get cold here, and never gets anywhere near the dewpoint, but the problem is humidity and/or pollution/dust. I feel using internal building air should be fine, since it's not coming directly from outside...
3) In addition to humidity, electronics may experience damage or small cracks due to excessive rate of change of temperature (called heatup and cooldown rates). ASHRAE TC 9.9 (This is the 2011 version which was recently superseded by the 2012 version, though it's not publicly available from what I could find) is the industry standard for this type of thing. This is all to prolong life and prevent damage or danger to the environment. The point here is that if you have this running in the winter (presumably powered by an external fan) and the hash rate goes down (pool goes down, lose connection to internet, or any number of other reasons), if you continue to pump cold air through there, that could stress sensitive components and ultimately lead to premature failure.
Interesting. Will go through the doc after a while. Hadn't thought about the heat-cool cycle causing problems.
There are some other considerations but these are the top things to keep in mind. It's not as simple as just rejecting heat to the outside. It can be done but if it was simply that easy, every data center would be doing it.

In fact, I thought about this because I remembered hearing about someone (it was either Amazon or Facebook, unsure) throwing away the air from the hot aisle and using fresh air for cooling instead. They obviously have some expensive equipment in place to treat the air before unleashing it onto the servers...

redphlegm
Sr. Member
****
Offline Offline

Activity: 246
Merit: 250


My spoon is too big!


View Profile
June 14, 2013, 05:14:48 AM
 #78

Oh for sure, they do these kinds of things and they can be done, but they are in situations where they're basically using commodity hardware (for them) and have environments where the failure of a machine, rack, row, or even in extreme cases entire data centers doesn't translate to lost service to customers. They can afford to be pioneers and push the envelope. Most of us simply don't have this type of ability in our houses. They save money on power / cooling equipment and just accept a potentially lower MTBF (i.e., more frequent failures) for server equipment. I worked at a "big company's" data center in Seattle for a while where they had a tent outside that was basically protecting servers from the rain. Intel did similar things. These types of endeavors are what have helped open up the TC 9.9 operating window. It used to be a max of 68F back in 2008 (iirc) but it has since been raised to 80.6F for the cold aisle (inlet), and even as high as 104F for some classes of machines.

Whiskey Fund: (BTC) 1whiSKeYMRevsJMAQwU8NY1YhvPPMjTbM | (Ψ) ALcoHoLsKUfdmGfHVXEShtqrEkasihVyqW
tosku
Sr. Member
****
Offline Offline

Activity: 367
Merit: 250



View Profile WWW
June 15, 2013, 09:57:13 AM
Last edit: June 15, 2013, 03:51:01 PM by tosku
 #79

I've been thinking some more about the K64 cube heat sink. The one I originally used is very complicated. Here's a model using a 90x100mm heatsink, similar to off-the-shelf sinks available today:



The heat sink I made up only has fins over the actual Avalon ICs. Such a heat sink could be cheaper, but the airflow between the fins would be reduced by the large opening in the middle. Maybe it's better to use a heat sink with fins everywhere?

Here's a model using a "perfect" heat sink I made up:



The idea is to increase the heat exchange at the far end of the tunnel, which would be pointing upwards when in use. Unfortunately, again, a heat sink like that would be complicated and expensive to make. It looks darn slick, though. Smiley

Skude.se/BTC - an easier way to request your daily free coins!
jimrome
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
June 15, 2013, 02:50:27 PM
 #80
