DeathAndTaxes
Donator
Legendary
Offline
Activity: 1218
Merit: 1079
Gerald Davis
|
|
October 20, 2013, 09:56:47 PM |
|
It's also pretty darn stupid to pull air in from the back of the rack, where everything else is dumping its heat. The PSU would be breathing in warm air, heating it further, then dumping it inside the cabinet where it can't get out. The PSU (like every PSU made) exhausts out the "back". If installed as shown in the HF model it would intake from the outside and exhaust out the back. If you flip the PSU 180 it would intake from the inside and still exhaust out the back.
|
|
|
|
crumbs
|
|
October 20, 2013, 10:03:56 PM |
|
What's funny about the whole layout is that the radiators/fans are at the front of the case, dumping the hot air *inside* the case, making the PSs suck in hot exhaust when the PSs are flipped 180.

Every single rackmount server pulls in warm case air to cool the PSU. PSUs these days are highly efficient; they don't need ultra-cold air or high-RPM fans. A low airflow of hot (90°F+) air is well within design spec. It is a non-issue. At a 600W load and 90% efficiency you are talking about 67W of heat. They make passive power supplies -- more expensive, due to the amount of metal in the larger heatsink -- because ~67W in a space the size of a power supply is nothing: less than a lightbulb, in a space 10x larger.

Well, no. These servers are not preheating the air at the intake and toasting the case with it. The heat is generated halfway down the case, at the CPUs, VRMs, etc. The air is also *managed* -- it's routed, not just haphazardly splashed all over the place. The power supplies don't suck preheated air, as they would in this design. Here's a random gif i just dug up -- certainly not the best-planned case, but an example of what i mean. Notice the PS getting cool, fresh air.

Edit: Just looked closer at the pic i chose -- dual P3s. Crankin'!
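A quick sanity check on the waste-heat arithmetic: at 90% efficiency the PSU draws load/0.9 from the wall, and everything it doesn't deliver as DC becomes heat inside the enclosure -- about 67W for a 600W load, a bit more than the figure quoted in the thread. A minimal sketch:

```python
def psu_waste_heat(load_w: float, efficiency: float) -> float:
    """Heat dissipated inside a PSU delivering load_w of DC power.

    Wall draw is load_w / efficiency; the difference between what
    goes in and what comes out is dissipated as heat in the PSU.
    """
    input_w = load_w / efficiency
    return input_w - load_w

# 600 W DC load at 90% efficiency: roughly 67 W of heat in the PSU
print(round(psu_waste_heat(600, 0.90), 1))  # 66.7
```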
|
|
|
|
Puppet
Legendary
Offline
Activity: 980
Merit: 1040
|
|
October 20, 2013, 10:05:27 PM |
|
The PSU (like evey PSU made) exhausts out the "back". If installed as shown in HF model it would intake from the outside and exhaust out the back.
That's not what whateverhisname claimed HF told him. But since it makes no sense at all, I agree -- he probably got it wrong.

As for cooling 50-70W inside a PSU: I'm not disputing that the PSU will work fine and won't have trouble with slightly elevated air temperatures, but your analogy with the lightbulb is flawed. First of all, how hot does a 60W lightbulb get under normal circumstances? Googling suggests ~110°C above ambient. Inside your PSU enclosure, without active cooling, it would likely heat up to 160-170°C -- hot enough to start melting solder. Secondly, 60W also happens to be the power of my favorite soldering iron, and it's not even pulling 60W constantly. As you may have guessed, surface area matters.
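The surface-area point can be made concrete with a crude lumped estimate from Newton's law of cooling, delta-T ≈ P / (h·A). The coefficient and areas below are assumed round numbers for illustration (real bulbs also shed a lot of heat by radiation), so only the scaling is meaningful, not the absolute temperatures:

```python
def steady_state_rise(power_w: float, area_m2: float, h: float = 10.0) -> float:
    """Crude steady-state temperature rise (K) above ambient:
    delta-T = P / (h * A). h is an assumed natural-convection
    coefficient (~5-15 W/m^2/K in still air); absolute values
    are illustrative only -- the inverse scaling with area is
    the point."""
    return power_w / (h * area_m2)

# Same 60 W, spread over ten times the surface area:
bulb = steady_state_rise(60, 0.005)  # bulb-sized envelope (assumed area)
psu = steady_state_rise(60, 0.05)    # PSU-sized enclosure (assumed area)
assert bulb / psu == 10              # one tenth the temperature rise
```

Same wattage, ten times the area, one tenth the rise -- which is why 60W in a bulb-sized envelope melts things a PSU-sized box shrugs off.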
|
|
|
|
crumbs
|
|
October 20, 2013, 10:06:30 PM Last edit: October 20, 2013, 10:21:15 PM by crumbs |
|
It's also pretty darn stupid to pull air in from the back of the rack, where everything else is dumping its heat. The PSU would be breathing in warm air, heating it further, then dumping it inside the cabinet where it can't get out. The PSU (like every PSU made) exhausts out the "back". If installed as shown in the HF model it would intake from the outside and exhaust out the back. If you flip the PSU 180 it would intake from the inside and still exhaust out the back.

I got it -- you didn't read the entire thread (don't blame you). Cypherdoc claimed that the power supply *exhausts* where normal PSs intake air. Completely backwards. Leading us to suspect that he was pulling things out of ...

Edit: As far as PSs being fine with sucking on hot air, i'm not so sure. Making them do it just because people were too lazy to think about proper air management is absurd.
|
|
|
|
DeathAndTaxes
Donator
Legendary
Offline
Activity: 1218
Merit: 1079
Gerald Davis
|
|
October 20, 2013, 11:46:46 PM |
|
As far as PSs being fine with sucking on hot air, i'm not so sure. Making them do it just because people were too lazy to think about proper air management is absurd.

ATX PSUs are designed to intake heated case air. It isn't laziness. A rackmount case is only 17.5" wide; a power supply is 3.4" wide. You would need a more expensive server power supply (designed for front-to-back airflow), and even then you would lose 20% of your front surface area. Removing 750W+ by watercooling is no small task, and that means a relatively large radiator surface area. Sure, you could make the case larger (6U+) or put only 2 Sierras per case, but those would be inferior choices IMHO.

Power supplies are designed to work at high ambient temps. Both servers and PCs intake cooling air from inside the case. Some high-end PCs allow flipping the PSU to draw outside air, but that is the exception, not the rule. SeaSonic puts a 5-yr warranty on their power supplies, knowing that 90%+ of the time they will be drawing heated case air. They are designed to handle that. If anything, it has become an easier engineering challenge: as PSUs become more efficient, they have less of a heat load to shed.
|
|
|
|
crumbs
|
|
October 21, 2013, 01:04:06 AM Last edit: October 21, 2013, 01:27:06 AM by crumbs |
|
As far as PSs being fine with sucking on hot air, i'm not so sure. Making them do it just because people were too lazy to think about proper air management is absurd. ATX PSUs are designed to intake heated case air. It isn't laziness. A rackmount case is only 17.5" wide; a power supply is 3.4" wide. You would need a more expensive server power supply (designed for front-to-back airflow), and even then you would lose 20% of your front surface area. Removing 750W+ by watercooling is no small task, and that means a relatively large radiator surface area. Sure, you could make the case larger (6U+) or put only 2 Sierras per case, but those would be inferior choices IMHO. Power supplies are designed to work at high ambient temps. Both servers and PCs intake cooling air from inside the case. Some high-end PCs allow flipping the PSU to draw outside air, but that is the exception, not the rule. SeaSonic puts a 5-yr warranty on their power supplies, knowing that 90%+ of the time they will be drawing heated case air. They are designed to handle that. If anything, it has become an easier engineering challenge: as PSUs become more efficient, they have less of a heat load to shed.

On the off-chance you're seriously missing my point, i'll try again.

ATX power supplies are designed to function within their datasheet specs. Depending on those specs, they may or may not function reliably sucking hot air, but they will certainly have a slimmer thermal margin and a shorter MTBF. Debating this is silly.

Power supplies inside PCs do not suck on hot exhaust air. The exhaust from a typical GPU, for instance, leaves through the mounting bracket and out of the case. Exhaust from a properly mounted CPU cooler is also aimed at the back of the case and evacuated, often aided by an additional fan in the rear of the case. In Hashfast's case, *all of the air inside the case is hot exhaust*. All of it.

The picture used earlier in this thread, with the yellow highlighter, is familiar to every child who has assembled a computer. It is an example of *WHAT NOT TO DO*: it shows a CPU cooler exhausting into the intake of the PS, and the caption beneath it reads: DO. NOT. WANT. This is really basic stuff, not open for debate.

Finally, it is plain stupid to have radiators exhausting inside the case. That's why it is never done. Scratch that -- it's done by Hashfast. Buy a water cooling kit. See if it exhausts inside the case. If it appears that it does, go back, read the instructions, and correct. Now that you're done, and the radiator exhaust is aimed the right way, ask yourself why Hashfast didn't spend the same amount of time reading instructions. Fini.

Edit: Please examine the pic of the dual P3 i took the time to post for you. Tell me if you feel that the power supply in that ancient box is sucking on hot air.
|
|
|
|
itod
Legendary
Offline
Activity: 1974
Merit: 1077
^ Will code for Bitcoins
|
|
October 21, 2013, 01:06:04 AM |
|
HF may have its sins -- possibly being late -- but does anyone really think they were careless with something as simple as air-flow direction? Come on: HF did thermal simulations with heated bodies in place of the chips. They surely have someone who knows a thing or two about thermodynamics. Air-flow is the least of our worries.
|
|
|
|
crumbs
|
|
October 21, 2013, 01:17:58 AM |
|
HF may have its sins -- possibly being late -- but does anyone really think they were careless with something as simple as air-flow direction? Come on: HF did thermal simulations with heated bodies in place of the chips. They surely have someone who knows a thing or two about thermodynamics. Air-flow is the least of our worries.

No one ever went broke by banking on the stupidity of the American people. I won't be the first. Your argument that you're "sure they know what they are doing" shows us one of two things: 1. that they do know what they're doing, or 2. that you're dead wrong. You know which one i'm betting on, right?
|
|
|
|
DeathAndTaxes
Donator
Legendary
Offline
Activity: 1218
Merit: 1079
Gerald Davis
|
|
October 21, 2013, 03:41:55 AM Last edit: October 21, 2013, 04:23:21 AM by DeathAndTaxes |
|
crumbs, there is a finite amount of space available in a rackmount unit. Design a rackmount unit with better airflow, buy modules from HF in bulk, and resell the package -- I think you will find it is harder than it looks. Putting the radiator in the back would be an "easy" solution, except you only have 17" by (1.75 x U-height) inches to work with. If the power supply and the radiator are both mounted on the back, the radiator will be tiny -- too little surface area to effectively cool 750W+. To keep delta-T under 10°C over ambient you are going to need 1 to 2 cm² of radiator face per watt (i.e. on the order of a 420mm x 120mm core for a 750W heat load), even with pretty extreme airflow (3000 RPM pusher and puller fans). There is only so much surface area on the back or front panel of a rackmount unit.

Sure, if you don't want to compromise, build a massively expensive 6U chassis with straight-through-flow power supplies and the entire rest of the back panel devoted to a radiator. Of course, when you do, you price yourself out of the market, and people will just buy the more economical solution from Hashfast or Cointerra.
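The rule of thumb quoted above (1-2 cm² of radiator face per watt at high airflow) can be turned into a quick sizing check. The 1.5 cm²/W default below just splits that quoted range, and the 420mm x 120mm dimensions (a common triple-120mm-fan core) are an assumed reading of the figures in the post, not a HashFast spec:

```python
def radiator_face_cm2(heat_w: float, cm2_per_watt: float = 1.5) -> float:
    """Frontal radiator area implied by the thread's rule of thumb
    (~1-2 cm^2 of face per watt with strong push-pull fans).
    The 1.5 default is the middle of that range, not a datasheet
    value."""
    return heat_w * cm2_per_watt

needed = radiator_face_cm2(750)   # ~1125 cm^2 for a 750 W heat load
triple_120 = 42.0 * 12.0          # 420mm x 120mm core face: 504 cm^2
print(needed, triple_120)         # one triple-fan core falls well short
```

Which is the back-panel problem in a nutshell: a 17"-wide, few-U-tall panel minus the PSU cutout simply doesn't offer four figures of cm².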
|
|
|
|
xstr8guy
|
|
October 21, 2013, 04:04:35 AM |
|
Hmm, maybe the illustrator mistakenly put the rack ears on the back of the case. PSUs at the front of the case would be odd, but still functional. A well-built, modern PSU doesn't exhaust much hot air.
|
|
|
|
Paladin69
|
|
October 21, 2013, 05:34:11 AM |
|
0.75 btc per day for 400gh at the current diff
less than 0.5 btc per day for the next adjustment.
People will get a lot more btc for their money buying direct.
|
|
|
|
itod
Legendary
Offline
Activity: 1974
Merit: 1077
^ Will code for Bitcoins
|
|
October 21, 2013, 09:29:56 AM |
|
HF may have its sins -- possibly being late -- but does anyone really think they were careless with something as simple as air-flow direction? Come on: HF did thermal simulations with heated bodies in place of the chips. They surely have someone who knows a thing or two about thermodynamics. Air-flow is the least of our worries.

No one ever went broke by banking on the stupidity of the American people. I won't be the first. Your argument that you're "sure they know what they are doing" shows us one of two things: 1. that they do know what they're doing, or 2. that you're dead wrong. You know which one i'm betting on, right?

Your arrogance is entertaining. My argument was not that I'm "sure they know what they are doing"; it was that they look like they know their way around thermodynamics, having done simulations I haven't heard other BTC rig manufacturers did. Doesn't it bother you to claim you know more about this than they do?
|
|
|
|
Puppet
Legendary
Offline
Activity: 980
Merit: 1040
|
|
October 21, 2013, 09:48:31 AM |
|
I haven't heard other BTC rig manufacturers did.
|
|
|
|
itod
Legendary
Offline
Activity: 1974
Merit: 1077
^ Will code for Bitcoins
|
|
October 21, 2013, 10:07:43 AM |
|
I stand corrected, thanks.
|
|
|
|
crumbs
|
|
October 21, 2013, 10:39:45 AM Last edit: October 21, 2013, 11:00:07 AM by crumbs |
|
crumbs, there is a finite amount of space available in a rackmount unit. Design a rackmount unit with better airflow, buy modules from HF in bulk, and resell the package -- I think you will find it is harder than it looks. Putting the radiator in the back would be an "easy" solution, except you only have 17" by (1.75 x U-height) inches to work with. If the power supply and the radiator are both mounted on the back, the radiator will be tiny -- too little surface area to effectively cool 750W+. To keep delta-T under 10°C over ambient you are going to need 1 to 2 cm² of radiator face per watt (i.e. on the order of a 420mm x 120mm core for a 750W heat load), even with pretty extreme airflow (3000 RPM pusher and puller fans). There is only so much surface area on the back or front panel of a rackmount unit. Sure, if you don't want to compromise, build a massively expensive 6U chassis with straight-through-flow power supplies and the entire rest of the back panel devoted to a radiator. Of course, when you do, you price yourself out of the market, and people will just buy the more economical solution from Hashfast or Cointerra.

I won't find anything "harder than it looks" -- i've designed and built cooling solutions for a wide range of gear, from 'puter boxen to cars. I know what i'm doing. That usually helps. Putting the radiators in the back is not an elegant option -- it's a design constraint. If an engineer can't figure out how it's done, there are plenty of careers in ditch digging which remain open to him.

1. The figures you quote for radiator heat dissipation are simply wrong. A radiator core has three dimensions: height, width, and DEPTH -- that's how THICK the core is. Your cm² figure ignores that. It also ignores the cooling fin design -- it is the surface area of those fins which counts, not the H x W of the radiator. The volume of water that flows through the core and the design of the header tanks also factor in. This, again, is elementary stuff, known by every child who played with "My First Watercooler."

2. The twin, non-redundant power supplies are a waste of space and a sign of sloppy, afterthought engineering. 3 modules in the case? 2 power supplies? One module gets one, and the remaining two get the other? Two power supplies to provide 750W? Honest? There is no *single* off-the-shelf PS which could handle 750W? They had to enter into a contract with Sea Sonic to provide them with *TWO ANEMIC PSs per box*? Really?

3. As xstr8guy suggested above (not sure if he was joking, but wait...), even flipping the whole megillah around, so that the back of the case faces the front (becomes the inlet side), would be a more elegant solution. At least the case would only get the exhaust from the Rube Goldbergian twin PSs, not the full furnace blast of the 3 ASICs.

Finally, @itod: there's nothing arrogant in what i say. The problems with this "design" are obvious to a dull-normal 5-year-old -- the same 5-year-old who can fire up her dad's CAD and really botch things up. I'm not saying that the whole thing will fail on the merits of its cooling solution alone. It likely won't -- 750W is not a huge amount of power to dissipate. What i *am* saying is this: if their cooling and packaging design is indicative of their ASIC skillz, what we have here is a giant fail.
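Point 1 above -- that fin area, not the H x W of the face, is what transfers heat -- can be illustrated with a toy geometry calculation. The fin pitch and core depth below are assumed round numbers (roughly a 30mm core at ~20 fins per inch), not any particular radiator's spec:

```python
def fin_area_cm2(width_cm: float, height_cm: float,
                 depth_cm: float, fins_per_cm: float) -> float:
    """Total fin surface area of a radiator core, counting both
    sides of each fin. Ignores tubes and end tanks; the geometry
    is an illustration, not a real radiator's datasheet."""
    n_fins = width_cm * fins_per_cm
    return n_fins * (height_cm * depth_cm) * 2

frontal = 42 * 12                   # 420mm x 120mm face: 504 cm^2
fins = fin_area_cm2(42, 12, 3, 8)   # ~30mm core, ~20 FPI: 24192 cm^2
print(frontal, fins)                # fin area dwarfs the frontal area
```

So a "cm² of face per watt" rule is really a proxy: doubling core depth or fin density multiplies the true heat-transfer area without changing the H x W at all, which is crumbs's objection.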
|
|
|
|
|
ImI
Legendary
Offline
Activity: 1946
Merit: 1019
|
|
October 21, 2013, 12:51:36 PM |
|
0.75 btc per day for 400gh at the current diff
less than 0.5 btc per day for the next adjustment.
People will get a lot more btc for their money buying direct.
|
|
|
|
Speakeron
Newbie
Offline
Activity: 47
Merit: 0
|
|
October 21, 2013, 02:03:45 PM |
|
The back is out of reach?
You've never seen the inside of a datacenter, have you? I spend quite a lot of time in one, and the back of a rack is most certainly accessible, by opening the door in the warm corridor. (Otherwise, how would you attach power and network cables to the servers?) I must admit, though, that nothing else about the design of these units makes much sense. Apart from the power supplies, I don't see any screw holes for rack slides; these seem like pretty heavy cases to be mounted purely on the ears.
|
|
|
|
jjiimm_64
Legendary
Offline
Activity: 1876
Merit: 1000
|
|
October 21, 2013, 03:10:59 PM |
|
2. The twin, non-redundant power supplies are a waste of space and a sign of sloppy, afterthought engineering. 3 modules in the case? 2 power supplies? One module gets one, and the remaining two get the other? Two power supplies to provide 750W? Honest? There is no *single* off-the-shelf PS which could handle 750W? They had to enter into a contract with Sea Sonic to provide them with *TWO ANEMIC PSs per box*? Really? I'm not saying that the whole thing will fail on the merits of its cooling solution alone. It likely won't -- 750W is not a huge amount of power to dissipate. What i *am* saying is this: if their cooling and packaging design is indicative of their ASIC skillz, what we have here is a giant fail.

Correct me if I am wrong, but isn't the total more like 1400 watts per unit? And you cannot get a good, efficient PSU at 1600W, so 2 x 850 were chosen? Also, if 2 PCI-E cables power each miniboard, then you can have a PCI-E cable from each PSU powering the 3rd board.
|
1jimbitm6hAKTjKX4qurCNQubbnk2YsFw
|
|
|
crumbs
|
|
October 21, 2013, 03:39:48 PM |
|
2. The twin, non-redundant power supplies are a waste of space and a sign of sloppy, afterthought engineering. 3 modules in the case? 2 power supplies? One module gets one, and the remaining two get the other? Two power supplies to provide 750W? Honest? There is no *single* off-the-shelf PS which could handle 750W? They had to enter into a contract with Sea Sonic to provide them with *TWO ANEMIC PSs per box*? Really? I'm not saying that the whole thing will fail on the merits of its cooling solution alone. It likely won't -- 750W is not a huge amount of power to dissipate. What i *am* saying is this: if their cooling and packaging design is indicative of their ASIC skillz, what we have here is a giant fail. Correct me if I am wrong, but isn't the total more like 1400 watts per unit? And you cannot get a good, efficient PSU at 1600W, so 2 x 850 were chosen? Also, if 2 PCI-E cables power each miniboard, then you can have a PCI-E cable from each PSU powering the 3rd board.

According to Hashfast, their chips draw 0.65 W/GH, so: 0.65 * 400 * 3 = 780W. Let's round it up to 900W to cover fans, pumps, and fluff. Sea Sonic, the company Hashfast has entered into some sort of deal with (the PR release is unclear), has platinum-rated 1000W units available at Newegg for $239. As for running two power supplies in parallel: it might work, and it might be fireworks, or one PS "loafing" at idle and not adding to the fun -- depending on the output circuitry of the PS. It's possible to bridge switching PSs, but it's not worth the effort. Hope this helps.
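The power budget above works out as follows. The 120W overhead is just the gap between the 780W ASIC figure and the 900W round-up used in the post -- an assumed allowance for fans and pumps, not a HashFast number:

```python
def box_power_w(watts_per_gh: float, gh_per_module: float,
                modules: int, overhead_w: float = 120.0) -> float:
    """Chassis power budget: ASIC draw (W/GH * GH per module *
    module count) plus an assumed fixed overhead for fans and
    pumps. The 120 W default is the post's 780 -> 900 W round-up,
    not a vendor figure."""
    return watts_per_gh * gh_per_module * modules + overhead_w

# 0.65 W/GH * 400 GH * 3 modules = 780 W of ASICs; ~900 W total
print(box_power_w(0.65, 400, 3))
```

On that budget, a single 1000W platinum unit covers the box with margin, which is crumbs's point about the twin supplies.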
|
|
|
|
|