Bitcoin Forum
Author Topic: [Klondike] Case design thread for K16  (Read 37917 times)
mjmvisser (OP)
Newbie
*
Offline Offline

Activity: 58
Merit: 0


View Profile
June 02, 2013, 05:52:08 PM
 #1

Post your Klondike custom case designs here!

steamboat's 3U 72xK16 proof-of-concept:
https://i.imgur.com/1ohwFnC.png
https://i.imgur.com/erBoEu8.png

marto74's 8xK16 with and without space for a PSU and router:
http://s4.postimg.org/z670utkk9/8xk16.jpg
http://s22.postimg.org/z857k2cb1/8xk16inabox.jpg

I'd love to see a design for a tower-style 2-wide-by-16-high.
"Bitcoin: the cutting edge of begging technology." -- Giraffe.BTC
Bicknellski
Hero Member
*****
Offline Offline

Activity: 924
Merit: 1000



View Profile
June 02, 2013, 06:08:13 PM
 #2



Trying to fit 512 into a rack mounted server... wish me luck.

Dogie trust abuse, spam, bullying, conspiracy posts & insults to forum members. Ask the mods or admins to move Dogie's spam or off topic stalking posts to the link above.
turtle83
Sr. Member
****
Offline Offline

Activity: 322
Merit: 250


Supersonic


View Profile WWW
June 02, 2013, 06:12:32 PM
 #3

Trying to fit 512 into a rack mounted server... wish me luck.

Looks good.
Appears to be 2 or 3U.
Possible to fit it all in 1U?

I think the PSU is not going to take the full-length space; if that's so, there's ample room for some embedded computer to control these suckers...

KS
Sr. Member
****
Offline Offline

Activity: 448
Merit: 250


View Profile
June 02, 2013, 06:26:44 PM
 #4

Trying to fit 512 into a rack mounted server... wish me luck.

Looks good.
Appears to be 2 or 3U.
Possible to fit it all in 1u?

I think the PSU is not gonna take full length space, if thats so then ample room for some embedded computer to control these suckers...

Or a USB hub and a Pi/BeagleBoard hanging between the cables in the rack. Lots of wasted space there Smiley
Bicknellski
Hero Member
*****
Offline Offline

Activity: 924
Merit: 1000



View Profile
June 02, 2013, 06:30:02 PM
Last edit: June 03, 2013, 06:58:50 PM by Bicknellski
 #5

Trying to fit 512 into a rack mounted server... wish me luck.

Looks good.
Appears to be 2 or 3U.
Possible to fit it all in 1u?

I think the PSU is not gonna take full length space, if thats so then ample room for some embedded computer to control these suckers...

2U it has to be, I think, for 512 and possibly 768; 1U for 256. It's hard to find a PSU that delivers the +1200 W needed, fits in a 1U, and can power boards arranged in 40x40cm (even in a 1-layer design) as 2 layers of 4 x K64. Depending on the heat sink size it might be possible to jam 512 into a 1U, and since I am trying to get a venturi effect to push that air hard through the tunnel I am creating, I think it might be possible. The leftover space could house a controller... depending on what is used in front of the PSU on the corner next to the front fan bank.
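Back-of-the-envelope PSU sizing for those chip counts, as a sketch: the ~2 W per chip figure is inferred from the K256 @ 512 W numbers quoted later in the thread, and the 20% headroom factor is my own assumption.

```python
# Rough PSU sizing for the chip counts discussed above.
# Assumes ~2 W per chip (from the K256 @ 512 W figure in this thread);
# the 20% headroom for conversion losses and fans is a guess.
W_PER_CHIP = 2.0
HEADROOM = 1.2

def psu_watts(chips, w_per_chip=W_PER_CHIP, headroom=HEADROOM):
    """Return a suggested PSU rating (W) for a given chip count."""
    return chips * w_per_chip * headroom

for chips in (256, 512, 768):
    print(f"{chips} chips -> ~{psu_watts(chips):.0f} W PSU")
```

For 512 chips this lands right around the +1200 W mentioned above (512 × 2 W × 1.2 ≈ 1229 W).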

Not much head room on top there... not sure that you'd want to put anything on top; wires are OK, but nothing else, since it's going to be pretty warm there. I was thinking that the top and bottom of the case will need venting slats.

Front View:


KS
Sr. Member
****
Offline Offline

Activity: 448
Merit: 250


View Profile
June 02, 2013, 06:48:11 PM
 #6

Not much head room on top there... not sure that you'd want to put anything on top wires ok but nothing else going to be pretty warm there. I was thinking that the top of the case and bottom will need venting slats.

That would defeat the point of putting it in a rackmount case. You'd need to leave space between the servers to exhaust the hot air.

Why not put bigger / additional fans to circulate the air on top?

Or, turn the design upside down (hot air will migrate to the tunnel) and let the fans push the hot air out through the tunnel.
tosku
Sr. Member
****
Offline Offline

Activity: 367
Merit: 250



View Profile WWW
June 02, 2013, 06:56:04 PM
 #7

I'd like to play as well. Would anyone care to share a model file for a single Klondike 16 board?

Skude.se/BTC - an easier way to request your daily free coins!
KS
Sr. Member
****
Offline Offline

Activity: 448
Merit: 250


View Profile
June 02, 2013, 07:01:37 PM
 #8

I'd like to play as well. Would anyone care to share a model file for a single Klondike 16 board?

Github?

https://github.com/bkkcoins/klondike
Bicknellski
Hero Member
*****
Offline Offline

Activity: 924
Merit: 1000



View Profile
June 02, 2013, 07:05:32 PM
 #9

Not much head room on top there... not sure that you'd want to put anything on top wires ok but nothing else going to be pretty warm there. I was thinking that the top of the case and bottom will need venting slats.

That would defeat the point of putting it in a rackmount case. You'd need to leave space between the servers to exhaust the hot air.

Why not put bigger / additional fans to circulate the air on top?

Or, turn the design upside down (hot air will migrate to the tunnel) and let the fans push the hot air out through the tunnel.


No, not really, since the tunnel is sandwiched between the boards. The top does not get airflow, so it definitely needs top slats; the bottom gets airflow, so it is fine without. The majority of the heat is from the chips, so again, sticking something near them on top is not a good plan. The idea is to stack the boards so airflow can be achieved in the smaller space, so I will adjust the stanchion heights and heat sink heights until I can get a balance. A 1.5U could accommodate 512 chips; the max height for fans, though, still has to be 40mm or 50mm.

KS
Sr. Member
****
Offline Offline

Activity: 448
Merit: 250


View Profile
June 02, 2013, 07:16:22 PM
 #10

Not much head room on top there... not sure that you'd want to put anything on top wires ok but nothing else going to be pretty warm there. I was thinking that the top of the case and bottom will need venting slats.

That would defeat the point of putting it in a rackmount case. You'd need to leave space between the servers to exhaust the hot air.

Why not put bigger / additional fans to circulate the air on top?

Or, turn the design upside down (hot air will migrate to the tunnel) and let the fans push the hot air out through the tunnel.


No not really since the tunnel is sandwiched between the boards. The top does not get air flow so definitely need top slats bottom gets airflow so it is fine without. The majority of the the heat is the chips so again sticking something near them on top etc not a good plan. The idea is to stack the boards so airflow can be achieved in the smaller space so I will adjust the stanchion heights and heat sink heights till I can get a balance. A 1.5U could accommodate 512 chips max height for fans though still has to be 40mm or 50mm fans.

I am saying this because you are pushing air under the boards. If you flip it upside down (just to be clear: the whole case, not the insides), the hot air underneath will naturally rise into the tunnel. You could also use a 2U design with 80mm fans.
turtle83
Sr. Member
****
Offline Offline

Activity: 322
Merit: 250


Supersonic


View Profile WWW
June 02, 2013, 07:20:20 PM
 #11

Thinking... if the goal is co-location, I don't think datacenters allow 1 kW for 1 or 2U of rack space... it's just too dense...

Bicknellski
Hero Member
*****
Offline Offline

Activity: 924
Merit: 1000



View Profile
June 02, 2013, 07:32:05 PM
Last edit: June 02, 2013, 07:43:31 PM by Bicknellski
 #12

Thinking... if the goal is co-location, i dont think datacenters  allow 1kw for 1 or 2u rackspace... its just too dense...

Checking that out. There are plenty of GPU servers that draw that sort of wattage and more, so I'm not sure that is a hard and fast rule; it depends on the nature of the data center and what services they provide. http://www.nvidia.com/object/tesla-servers.html If I need to go to a 1U K256, so be it; that is still doable. I just do not want to have to build a server room on my 3rd floor, so this is really an experiment to see if I can get my chips into a rack-mount configuration. If it works I will definitely be doing more of them in the future.

As for flipping the boards: they are fin to fin so that the heat is pushed out through the fins, getting the best airflow from the fans. Also, the Klego is easier to put together in a fin-to-fin configuration. Access, however, is not easy, and maintenance and inspection will require a lot of time. But given this is a stanchion solution, it is easy enough to test both configurations by changing stanchion length. I can test both ways and see which is better. I have a feeling fin to fin is optimal if the airflow is constant and in that venturi-type configuration.

tosku
Sr. Member
****
Offline Offline

Activity: 367
Merit: 250



View Profile WWW
June 02, 2013, 07:47:21 PM
 #13

I'd like to play as well. Would anyone care to share a model file for a single Klondike 16 board?

Github?

https://github.com/bkkcoins/klondike

Thanks

Toolhead
Full Member
***
Offline Offline

Activity: 178
Merit: 101


View Profile
June 02, 2013, 08:00:51 PM
 #14

which 3d modeling program did you use?
turtle83
Sr. Member
****
Offline Offline

Activity: 322
Merit: 250


Supersonic


View Profile WWW
June 02, 2013, 08:02:51 PM
 #15

Thinking... if the goal is co-location, i dont think datacenters  allow 1kw for 1 or 2u rackspace... its just too dense...

Checking that out. There are plenty of GPU servers that throw that sort of wattage and more so not sure that is a hard and fast rule depending on the nature of the data center and what services they provide.  [...]

The thing is, typically a datacenter will bill you per rack unit, and a particular amount of power (and hence cooling expense) comes with it. Generally, people that need more power buy more rack units than they need. I've never investigated this point, but that's my understanding.

Hmm... after writing the above I googled a little... it seems power is not an issue.

https://www.stackpop.com/search/colocation — hit search without entering anything, choose cabinet space 1U... there is a provider offering 15A, even...

KS
Sr. Member
****
Offline Offline

Activity: 448
Merit: 250


View Profile
June 02, 2013, 08:14:56 PM
 #16

Thinking... if the goal is co-location, i dont think datacenters  allow 1kw for 1 or 2u rackspace... its just too dense...

Checking that out. There are plenty of GPU servers that throw that sort of wattage and more so not sure that is a hard and fast rule depending on the nature of the data center and what services they provide.  [...]

The thing is typically a datacenter will bill you per rack unit, and a particular amount of power(and hence cooling expense) comes with it. Generally people that need more power buy more rack units than they need. Ive never investigated this point, but thats my understanding.

hmm... after writing the above i googled a little... it seems power is not an issue.

https://www.stackpop.com/search/colocation  hit search without entering anything, choose cabinet space 1U .. there is a provider offering 15A even...


Power IS a problem. Standard racks are 20A redundant (230V). I have 1U servers that need half a rack just for themselves; it's getting ridiculous. You can ask for more power, but the prices are through the roof.

I'm seriously thinking of getting a DC in Texas or Washington (state).
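To put numbers on that, a quick sketch: at the quoted 20A/230V per rack (the 42U rack height is my assumption), the per-U power budget works out to roughly 110 W, so a ~1 kW 1U miner eats the budget of around nine rack units.

```python
# How much of a standard rack's power budget a dense 1U miner consumes.
# 20A at 230V is from the post above; 42U rack height is an assumption.
AMPS, VOLTS, RACK_UNITS = 20, 230, 42

rack_watts = AMPS * VOLTS              # 4600 W for the whole rack
watts_per_u = rack_watts / RACK_UNITS  # ~110 W per rack unit

miner_watts = 1000                     # the ~1 kW 1U/2U build discussed here
units_of_budget = miner_watts / watts_per_u

print(f"Rack budget: {rack_watts} W (~{watts_per_u:.0f} W per U)")
print(f"A {miner_watts} W miner uses the budget of ~{units_of_budget:.0f} U")
```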
KS
Sr. Member
****
Offline Offline

Activity: 448
Merit: 250


View Profile
June 02, 2013, 08:16:41 PM
 #17

Thinking... if the goal is co-location, i dont think datacenters  allow 1kw for 1 or 2u rackspace... its just too dense...
As for flipping the boards they are fin to fin so that the heat is pushed out through the fins getting the best airflow from of the fans. Also the Klego is easier to put togther in a fin to fin configuration. Access however is not and maintenance and inspection will require a lot of time.  But given this a stanchion solution it is easy enough to test both configurations by changing stanchion length. I can test both ways and see which is better. I have a feeling fin to fin is optimal if the air flow is constant and in that Venturi type configuration.

Not flipping the boards. Take the whole box *as is* and turn it upside down. You don't change the current layout, just put it on its head. Smiley
dogie
Legendary
*
Offline Offline

Activity: 1666
Merit: 1183


dogiecoin.com


View Profile WWW
June 02, 2013, 08:59:34 PM
 #18

Thinking... if the goal is co-location, i dont think datacenters  allow 1kw for 1 or 2u rackspace... its just too dense...

Checking that out. There are plenty of GPU servers that throw that sort of wattage and more so not sure that is a hard and fast rule depending on the nature of the data center and what services they provide.  http://www.nvidia.com/object/tesla-servers.html If I need to go to a 1U K256 so be it that is still doable I just do not want to have to build a server room on my 3rd floor so this is really an experiment to see if I can get my chips into a rack mount configuration. If it works I will definitely be doing more of them in the future.

As for flipping the boards they are fin to fin so that the heat is pushed out through the fins getting the best airflow from of the fans. Also the Klego is easier to put togther in a fin to fin configuration. Access however is not and maintenance and inspection will require a lot of time.  But given this a stanchion solution it is easy enough to test both configurations by changing stanchion length. I can test both ways and see which is better. I have a feeling fin to fin is optimal if the air flow is constant and in that Venturi type configuration.

I really don't think this is the "venturi" layout you're looking for. There is no significant reduction in cross section, just a hell of a lot of turbulent flow. I really think that even with a middle row of fans you'd struggle to remove that crazy amount of heat from there.

Bicknellski
Hero Member
*****
Offline Offline

Activity: 924
Merit: 1000



View Profile
June 03, 2013, 02:13:48 AM
 #19

Thinking... if the goal is co-location, i dont think datacenters  allow 1kw for 1 or 2u rackspace... its just too dense...

Checking that out. There are plenty of GPU servers that throw that sort of wattage and more so not sure that is a hard and fast rule depending on the nature of the data center and what services they provide.  [...]

The thing is typically a datacenter will bill you per rack unit, and a particular amount of power(and hence cooling expense) comes with it. Generally people that need more power buy more rack units than they need. Ive never investigated this point, but thats my understanding.

hmm... after writing the above i googled a little... it seems power is not an issue.

https://www.stackpop.com/search/colocation  hit search without entering anything, choose cabinet space 1U .. there is a provider offering 15A even...


Power IS a problem. Standard racks are 20A redundant (230V). I have 1U servers that need a half rack just for themselves, it's getting ridiculous. You can ask for more power but the prices are through the roof.

I'm seriously thinking of getting a DC in Texas or Washington (state).

I am in Indonesia, and I know this is a huge issue; then again, our problems here are air conditioning (ambient is 28C to 33C every day), power outages, and security. So for the extra electricity costs it might well be worth a colocation setup. I am checking that out now and will definitely get back to you guys about it. The other issue for me is I don't want to have to deal with housing it in my school over the longer term; it's more of a space issue. I can put a few of these K256s together, but I really don't want to house them all here... I suspect that if you have any number of these, the cost of water blocks and maintenance might make it worth a look to colocate if the price is right. It's always about the price.

Again, you can do a K256 @ 512 W and get 72.192 GH/s in a 1U... so that is certainly not going to be an issue. As a solution, 1U makes sense if you can get a PSU and an empty 1U barebones chassis at a good price. There must be plenty of chassis lying around somewhere at a great price. Group buy? I am still going to try for a denser K512 in a 1.5U / 2U and test it out at school.
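For anyone checking the arithmetic, the 72.192 GH/s figure falls out of 256 chips at a nominal 282 MH/s per chip (the per-chip rate implied by the post's totals); the W/GH efficiency below is just derived from the same two numbers.

```python
# Where the "K256 @ 512 W ... 72.192 GH/s" figure comes from:
# 256 chips at a nominal 282 MH/s each (implied by the post's totals).
CHIPS = 256
MHS_PER_CHIP = 282
BOARD_WATTS = 512

ghs = CHIPS * MHS_PER_CHIP / 1000       # 72.192 GH/s
print(f"K256: {ghs} GH/s")
print(f"Efficiency: {BOARD_WATTS / ghs:.2f} W/GH")
```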

Bicknellski
Hero Member
*****
Offline Offline

Activity: 924
Merit: 1000



View Profile
June 03, 2013, 02:25:29 AM
Last edit: June 03, 2013, 04:01:00 AM by Bicknellski
 #20

Thinking... if the goal is co-location, i dont think datacenters  allow 1kw for 1 or 2u rackspace... its just too dense...
As for flipping the boards they are fin to fin so that the heat is pushed out through the fins getting the best airflow from of the fans. Also the Klego is easier to put togther in a fin to fin configuration. Access however is not and maintenance and inspection will require a lot of time.  But given this a stanchion solution it is easy enough to test both configurations by changing stanchion length. I can test both ways and see which is better. I have a feeling fin to fin is optimal if the air flow is constant and in that Venturi type configuration.

Not flipping the boards. Take the whole box *as is* and turn it upside down. You don't change the current layout, just put it on its head. Smiley

The heat would be trapped then... again, having the larger space above will allow heat to dissipate faster... heat rises. Flipping the design as it is would trap more heat, I think. Again, I can test all this out, run various configurations, and see. I think the best solution is to get airflow into both the upper and lower cavity and push the air out, and I think I might have to, given the ambient temperatures here in Indonesia. I am not interested in air-conditioning the floor just for a single unit, so I will have to be very concerned with airflow. The 3 X6500s I have at home seem to do well, although they run at between 40C and 52C all the time. I suspect that with the right heat sink and fan configuration the K256 in a 1U should be more than adequate without AC. If I have multiple units, however, then I think colocation will have to be the option, or I'll build my own data center on the 3rd floor.

The Avalon boxes have a lot of air blowing at the chips and board, as well as at the fins, in their vertical configuration. The poor goop job results in less efficient heat transfer, as indicated in the thread of the Avalon user who pulled the heat sinks from the PCB. It seems Avalon is really trying to tunnel the air into a very, very tight space; I just want to see if I can compact this down into something that is manageable size-wise. I am really keen on what might be possible with a GEN2 Avalon chip, so blade servers or SATA hot-swappable cards could be something that works as well with a preexisting server chassis. We will see what BKKCoins finds when he gets test boards and heat sinks, but I think the Klondike is easily configured, so testing this will be quite easy. Personally, seeing things like the Tesla 8-GPU server in a 3U/4U configuration, and some of the 2-GPU Teslas crammed into a 1U, gives me confidence that if you get the airflow and heat sinks right, it is possible to get this sort of density.

15% Air Flow Top (Left)
-----------
70% Air Flow Middle Fin-2-Fin Tunnel
-----------
15% Air Flow Bottom (Right)

That way you only need the fan banks front and back. 16 fans for 16 boards seems right, as that is what people will do in a stack configuration anyhow. It is such an amazing design that BKKCoins has come up with. Even if you are not going to do a lot of DIY board building, just configuring cases etc. is going to be great fun, and I bet GEN2 Klondikes will be even more versatile.

Isolated view of the low profile fin-2-fin tunnel concept for a K32 section:

