Author Topic: .25 BTC BOUNTY for the best answer  (Read 13566 times)
yochdog (OP)
Legendary
Activity: 2044 | Merit: 1000
December 29, 2013, 01:11:15 AM  #1

I am in need of some technical advice from those in a position to know.

In building out a new "data center" for mining, I am considering different options for cooling the space.  I am interested in using a unit like this:

http://portablecoolers.com/models/PAC2K482S.html

My understanding is that evaporative cooling can pose a risk to electronics, as the air might become saturated with moisture.  I am in search of some expert advice on this subject. 

1)  Is this a feasible option?
2)  How risky is it?
3)  Can the risks be addressed?
4)  If not, what is a better option?

The most comprehensive answer, with the best information, gets the bounty.

Thanks!

deepceleron
Legendary
Activity: 1512 | Merit: 1036
December 29, 2013, 01:16:36 AM  #2

Those work best when you are pumping outside air through the facility, air that is already hot and needs to be cooled. You cannot just recirculate the same air unless you want to create a rain forest. If it is cool outside, you don't need evaporative cooling, just lots of outside air.

You can look at Facebook's system: http://gigaom.com/2012/08/17/a-rare-look-inside-facebooks-oregon-data-center-photos-video/

They use misters to cool large volumes of outside air; the evaporation of the water cools the outside air as it adds humidity.
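A rough sketch of why this works, using the standard direct-evaporative-cooling relation; the effectiveness and the wet-bulb temperatures below are just assumed for illustration, not measurements from Facebook's site:

Code:
# Direct evaporative cooling: supply air approaches the outdoor wet-bulb
# temperature, limited by the cooler's saturation effectiveness
# (assumed 0.7-0.9 for typical wetted media).

def evap_supply_temp_f(dry_bulb_f, wet_bulb_f, effectiveness=0.85):
    """Estimate the supply-air temperature from a direct evaporative cooler."""
    return dry_bulb_f - effectiveness * (dry_bulb_f - wet_bulb_f)

# A 110 F day with a 70 F wet-bulb (dry climate):
print(evap_supply_temp_f(110.0, 70.0))   # ~76 F supply air
# The same 110 F day with an 85 F wet-bulb (humid climate):
print(evap_supply_temp_f(110.0, 85.0))   # ~89 F -- not much help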
perfectsquare
Member
Activity: 70 | Merit: 10
December 29, 2013, 01:17:47 AM  #3

I design and build data centers, or rather I used to.

I don't want your bounty, but I will happily offer up my two pence worth.

This is not a concept I would choose; cooling is more complex than that.

How many servers are we talking about? What are the BTU ratings, etc.?

How will air flow around the space? Will each server get enough cooled air?

Why not free cooling? Just high-speed fans in a small space with extraction can be enough on small setups.

It's not as simple as your question makes it sound.

deepceleron
Legendary
Activity: 1512 | Merit: 1036
December 29, 2013, 01:21:24 AM  #4

Here's a facebook update:

http://www.datacenterknowledge.com/archives/2012/07/16/facebook-revises-data-center-cooling-system/

In phase 2 of the Prineville project, Facebook has replaced the misters with an evaporative cooling system featuring adiabatic media made of fiberglass. Warm air enters through the media, which is dampened by a small flow of water that enters the top of the media. The air is cooled as it passes through the wet media.
Air Not Fully “Scrubbed”

The change followed an incident in which a plume of smoke from a fire spread across the area around the Facebook data center. Staff could smell the smoke inside the data center. That prompted Facebook's data center team to examine other options for treating and "scrubbing" air as it makes its way into the data center.


To clarify the above, there was a brush fire outside and they were pumping smoke and ash through their data center. They now have a waterfall through filter media that pulls particulates out of the air and into the water.

I actually attempted to make my own swamp cooler for my GPU room. It didn't work out so well, as I was just drizzling water through a block of stacked cardboard. Swamp-cooler pads aren't available in stores where I live: http://reviews.homedepot.com/1999/100343657/aspen-snow-cool-29-in-x-29-in-replacement-evaporative-cooler-pad-reviews/reviews.htm
yochdog (OP)
Legendary
Activity: 2044 | Merit: 1000
December 29, 2013, 01:24:26 AM  #5

Quote from: perfectsquare on December 29, 2013, 01:17:47 AM
...

I am not sure of the BTU rating, but I will need to dissipate upwards of 40,000 watts.

Air flow would be a bit tricky. I was planning on an intake and an exhaust fan to get rid of humidity.

The summers here are brutally hot and can get up to 110 F. High-speed fans don't really cut it in those conditions, at least they haven't the last couple of years.
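For scale, a quick back-of-the-envelope conversion of that 40,000 W figure; the 1.08 factor assumes sea-level air density, and the 20 F temperature rise is just an assumed design point:

Code:
# Sizing numbers for a 40,000 W heat load.
watts = 40_000
btu_per_hr = watts * 3.412               # ~136,500 BTU/hr
tons_of_cooling = btu_per_hr / 12_000    # ~11.4 tons of refrigeration

delta_t_f = 20                           # assumed allowable air temperature rise, F
cfm_needed = btu_per_hr / (1.08 * delta_t_f)   # ~6,300 CFM of airflow

print(f"{btu_per_hr:,.0f} BTU/hr = {tons_of_cooling:.1f} tons")
print(f"~{cfm_needed:,.0f} CFM at a {delta_t_f} F air temperature rise")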

KonstantinosM
Hero Member
Activity: 1492 | Merit: 763
December 29, 2013, 01:28:09 AM  #6

Please remember that bitcoin mining runs at a much higher heat density than most data centers.

You will need a lot of electricity (like a small hydroelectric dam's worth), and you will have to innovate to get it cooled. I would say it would be most efficient to locate a bitcoin mining data center on a mountain. The electricity could be produced with wind turbines plus some solar. Cooling would be easy, since being up on a mountain significantly lowers the temperature of the air.


Millicent
Member
Activity: 84 | Merit: 10
December 29, 2013, 01:28:58 AM  #7

A friend of mine works for a company that services the Amazon facility in Milpitas. If memory serves, they use Trane equipment. You might start here for some research: http://www.trane.com/datacenter/

Most office building suites have adequate cooling for small to mid-sized server rack configurations. Where are you planning on building this out? A warehouse or an office suite? What part of the world?

NewLiberty
Legendary
Activity: 1204 | Merit: 1002
December 29, 2013, 01:32:53 AM  #8

Quote from: yochdog on December 29, 2013, 01:24:26 AM
...

I'm also an ex data center designer with .02 to offer, and will leave it to another to write the best answer, but here is a piece:
Condensing coolers will remove humidity from the air; an evaporative cooler will add it. Evaporative coolers are typically used for more open-air systems, but you could do something novel with a combination, being careful with the runoff and cleanliness.
I have contacts at Liebert and Klein, et al.; Trane is also good, if you want the full industrial engineering work done for your environment.

undeadbitcoiner
Sr. Member
Activity: 910 | Merit: 273
December 29, 2013, 01:41:07 AM  #9

Quote from: KonstantinosM on December 29, 2013, 01:28:09 AM
...


Best idea: choose cool mountains and try to get a good electricity supply.

EvilPanda
Hero Member
Activity: 658 | Merit: 500
December 29, 2013, 02:02:47 AM  #10

Have you considered this option?

This might create a cheap boost to your cooling by lowering the air temps and not causing humidity problems.


alumar
Full Member
Activity: 208 | Merit: 117
December 29, 2013, 02:06:11 AM  #11

Location: Obviously wherever you live will play a huge part in this ... if you're near mountains, the suggestions above will get you some interesting ambient air to play with, along with the possibility of cheap local electricity if you put up some windmill/solar near the facility. Again, that is just additional capital cost when you're probably more focused on spending as much as you can on GH/s versus hedging your own power source.

Building a data center for BITCOIN or ANYCOIN should follow most of the current standards out there. Running any computer equipment at high temperatures for extended periods of time greatly reduces the reliability and longevity of components and will likely cause unplanned downtime. Maintaining an ambient temperature range of 68F to 75F (20 to 24C) is optimal for system reliability. This temperature range provides a safe buffer for equipment to operate in the event of air conditioning or HVAC equipment failure while making it easier to maintain a safe relative humidity level.

It is a generally agreed upon standard in the computer industry that expensive IT equipment should not be operated in a computer room or data center where the ambient room temperature has exceeded 85F (30C).

In today's high-density data centers and computer rooms, measuring the ambient room temperature is often not enough. The temperature of the air where it enters a miner can be measurably higher than the ambient room temperature, depending on the layout of the data center and a higher concentration of heat-producing rigs. Measuring the temperature of the aisles in the data center at multiple heights can give an early indication of a potential temperature problem. For consistent and reliable temperature monitoring, place a temperature sensor at least every 25 feet in each aisle, with sensors placed closer together if high-temperature equipment like blade servers is in use. I would recommend installing TemPageR, Room Alert 7E or Room Alert 11E rack units at the top of each rack in the data center. As the heat generated by the components in the rack rises, TemPageR and Room Alert units will provide an early warning and notify staff of temperature issues before critical systems, servers or network equipment is damaged.

Recommended Computer Room Humidity
Relative humidity (RH) is defined as the amount of moisture in the air at a given temperature in relation to the maximum amount of moisture the air could hold at the same temperature. In a Mining Farm or computer room, maintaining ambient relative humidity levels between 45% and 55% is recommended for optimal performance and reliability.

When relative humidity levels are too high, water condensation can occur which results in hardware corrosion and early system and component failure. If the relative humidity is too low, computer equipment becomes susceptible to electrostatic discharge (ESD) which can cause damage to sensitive components. When monitoring the relative humidity in the data center, the recommendation is to set early warning alerts at 40% and 60% relative humidity, with critical alerts at 30% and 70% relative humidity. It is important to remember that the relative humidity is directly related to the current temperature, so monitoring temperature and humidity together is critical.
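A minimal alert-logic sketch using the temperature and humidity bands above; the function and constant names are placeholders of my own, not an actual TemPageR or Room Alert API:

Code:
# Alert logic for the temperature/humidity bands quoted above.
TEMP_WARN_F, TEMP_CRIT_F = 80.0, 85.0   # ambient ceiling from the post
RH_WARN_BAND = (40.0, 60.0)             # early-warning band, % RH
RH_CRIT_BAND = (30.0, 70.0)             # critical band, % RH

def check_environment(temp_f, rh):
    """Return a list of alert strings for one temperature/humidity reading."""
    alerts = []
    if temp_f >= TEMP_CRIT_F:
        alerts.append(f"CRITICAL: room temperature {temp_f:.1f} F >= {TEMP_CRIT_F} F")
    elif temp_f >= TEMP_WARN_F:
        alerts.append(f"WARNING: room temperature {temp_f:.1f} F approaching the limit")
    if not RH_CRIT_BAND[0] <= rh <= RH_CRIT_BAND[1]:
        alerts.append(f"CRITICAL: relative humidity {rh:.0f}% outside 30-70%")
    elif not RH_WARN_BAND[0] <= rh <= RH_WARN_BAND[1]:
        alerts.append(f"WARNING: relative humidity {rh:.0f}% outside the 40-60% band")
    return alerts

print(check_environment(temp_f=88.0, rh=25.0))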

So in closing, there are many ways to cool, from traditional air conditioning to evaporation systems ... that part really is a math equation on capital cost. The real focus should be maintaining the optimal environmental conditions inside the mining farm to ensure your core capital investment stays operational as efficiently as possible.

everythingbitcoin
Sr. Member
Activity: 423 | Merit: 250
December 29, 2013, 02:13:31 AM  #12

An air-side economizer intakes outside air into the building when it is easier to cool than the air being returned from the conditioned space and distributes it to the space; exhaust air from the servers is vented outside. Under certain weather conditions, the economizer may mix intake and exhaust air to meet the temperature and humidity requirements of the computer equipment.

Evaporative cooling uses non-refrigerated water to reduce indoor air temperature to the desirable range. Commonly referred to as swamp coolers, evaporative coolers utilize water in direct contact with the air being conditioned. Either the water is sprayed as a fine mist or a wetted medium is used to increase the rate of water evaporation into the air. As the water evaporates, it absorbs heat energy from the air, lowering the temperature of the air as the relative humidity of the air increases.

These systems are very energy efficient as no mechanical cooling is employed. However, the systems do require dry air to work effectively, which limits full application to specific climates. Even the most conservative organizations, such as financial institutions, are beginning to use these types of systems, especially because ASHRAE has broadened the operating-temperature recommendations for data centers. ASHRAE's Technical Committee 9.9 recommendations allow dry-bulb operating temperatures between 64.4 degrees F (18 degrees C) and 80.6 degrees F (27 degrees C), with humidity controlled to keep dew points below 59.0 degrees F (15 degrees C) or 60 percent RH, whichever is lower. This has given even the most reluctant owners a green light to consider these options.

Airside economizers and evaporative cooling systems are difficult to implement in existing data centers because they typically require large HVAC ductwork and a location close to the exterior of the building. In new facilities, these systems increase the capital cost of the facility (i.e., larger building volume), HVAC equipment and ductwork. However, over the course of the lifetime of the facility, these systems significantly reduce operating costs when used in the appropriate climate, ideally, locations with consistent moderate temperatures and low humidity. Even under ideal conditions, the owner of a high-density data center that relies on outside air for cooling must minimize risks associated with environmental events, such as a forest fire generating smoke, and HVAC equipment failures.
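To make that ASHRAE envelope concrete, here is a rough sketch of how the mode decision could look; the mode names and thresholds simply restate the recommendations above and are not anyone's actual control sequence:

Code:
# Choosing a free-cooling mode against the ASHRAE TC 9.9 envelope quoted above:
# dry-bulb up to 27 C, dew point no higher than 15 C (illustrative only).
ASHRAE_DB_MAX_C = 27.0
ASHRAE_DEWPOINT_MAX_C = 15.0

def pick_cooling_mode(outdoor_db_c, outdoor_dewpoint_c):
    if outdoor_dewpoint_c > ASHRAE_DEWPOINT_MAX_C:
        return "mechanical cooling (outside air too humid for the envelope)"
    if outdoor_db_c <= ASHRAE_DB_MAX_C:
        return "air-side economizer (bring outside air in directly)"
    # Hot but dry: evaporation can pull the dry-bulb down toward the wet-bulb.
    return "evaporative cooling of outside air"

print(pick_cooling_mode(outdoor_db_c=43.0, outdoor_dewpoint_c=7.0))   # ~110 F, dry day
print(pick_cooling_mode(outdoor_db_c=22.0, outdoor_dewpoint_c=12.0))  # mild day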


Just as water is an effective heat-exchange medium in evaporative cooling systems, it can also be circulated throughout the data center to cool the IT equipment at the cabinet level. In fact, water cooling is far more energy efficient than air cooling. A gallon of water can absorb the same energy per degree of temperature change as 500 cubic feet of air. This yields significant operational savings in typical applications because the circulation of air to remove heat will require 10 times the amount of energy that would be required to move the water to transport the same amount of heat.
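A quick arithmetic check of that water-versus-air comparison, using standard property values (my numbers; small rounding differences are expected):

Code:
# Heat capacity of 1 gallon of water vs. 500 cubic feet of air.
GAL_TO_KG = 3.785        # 1 US gallon of water is about 3.785 kg
CP_WATER = 4186.0        # J/(kg*K)
FT3_TO_M3 = 0.0283168
AIR_DENSITY = 1.2        # kg/m^3 near sea level at room temperature
CP_AIR = 1005.0          # J/(kg*K)

water_j_per_k = GAL_TO_KG * CP_WATER                      # ~15,800 J/K
air_j_per_k = 500 * FT3_TO_M3 * AIR_DENSITY * CP_AIR      # ~17,100 J/K

print(f"1 gallon of water: {water_j_per_k:,.0f} J/K")
print(f"500 ft^3 of air:   {air_j_per_k:,.0f} J/K")
# Both land in the same ballpark, which is the point of the comparison.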

However, it is more expensive to install water piping than ductwork. An engineer can provide cost comparisons that give the owner the financial insight to make a sound decision when constructing a new facility. It is not usually a feasible retrofit for an existing data center.

Rear-door heat exchangers and integral water cooling are options in existing air-cooled data centers to reduce the energy use and cost associated with cooling. They put the water-cooling power of heat exchangers where they are really needed: on the server racks.

Rear-door heat exchangers are mounted on the back of each server rack. Sealed coils within the heat exchanger circulate chilled water supplied from below the raised floor. Hot air exhausted from the server passes over the coils, transferring the heat to the water and cooling the exhaust air to room temperature before it re-enters the room. The heated water is returned to the chiller plant, where the heat is exhausted from the building. Owners can achieve significant operational savings using these devices. To protect the systems during loss of utility power, many facilities put the pumps for the systems on a dedicated uninterruptible power supply (UPS) system.
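For a sense of the water flow a rear-door heat exchanger implies, a rough sketch from Q = m_dot * cp * dT; the 10 kW rack load and 6 C water temperature rise are assumed values, not figures from the article:

Code:
# Chilled-water flow needed to absorb one rack's heat load.
CP_WATER = 4186.0        # J/(kg*K)
rack_load_w = 10_000     # assumed heat load for one high-density rack, W
water_delta_t_c = 6.0    # assumed water temperature rise across the coil, C

mass_flow_kg_s = rack_load_w / (CP_WATER * water_delta_t_c)
liters_per_min = mass_flow_kg_s * 60          # water is roughly 1 kg per liter
gpm = liters_per_min / 3.785

print(f"{mass_flow_kg_s:.2f} kg/s ~ {liters_per_min:.0f} L/min ~ {gpm:.1f} GPM per rack")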

Owners have been cautious in adopting this approach due to the risk of leaks. The heat exchanger is equipped with baffles that prevent water spraying into the computer in the rare event of a leak. However, water could still leak onto the floor.

Another alternative is integral cooling, a sort of a "mini AC unit" between the cabinets. This close-coupled system takes the hot air discharged from the servers, cools it immediately and then blows it back to the inlet of the server. The system contains the water within the AC unit itself. The installation can also be designed to drain to a piping system under the floor, and it can incorporate leak detectors.



everythingbitcoin
Sr. Member
Activity: 423 | Merit: 250
December 29, 2013, 02:23:02 AM  #13

The move to water-cooled applications raises many challenges for facility executives. For example, experience shows that a building’s chilled water system is anything but clean. Few data center operators understand the biology and chemistry of open or closed loop cooling systems. Even when the operating staff does a great job of keeping the systems balanced, the systems still are subject to human errors that can wreak permanent havoc on pipes.

Installing dedicated piping to in-row coolers is difficult enough the first time, but it will be nearly intolerable to have to replace that piping under the floor if, in less than five years, it begins to leak due to microbial or chemical attacks. That does happen, and sometimes attempts to correct the problem make it worse.

Consider these horror stories:

• A 52-story single-occupant building with a tenant condenser water system feeding its data center and trading systems replaced its entire piping system (live) due to microbial attack.

• A four-story data center replaced all of its chilled and condenser water systems (live) when the initial building operators failed to address cross-contamination of the chilled water and the condenser water systems while on free cooling.

• In yet another high-rise building, a two-pipe (non-critical) system was used for heating in the winter and cooling in the summer. Each spring and fall the system would experience water flow blockages, so a chemical cleaning agent was added to the pipes to remove scale build-up. Before the cleaning agent could be diluted or removed, the heating system was turned on. Thanksgiving night, the 4-inch lines let loose. Chemically treated 180-degree water flooded down 26 stories of the tower. Because no one on site knew how to shut the system down, it ran for two hours before being stopped.

Isolation
Water quality isn’t the only issue to consider. Back in the days of water-cooled mainframes, chilled water was delivered to a flat plate heat exchanger provided by the CPU manufacturer. The other side of the heat exchanger was filled with distilled water and managed by technicians from the CPU manufacturer. Given this design, the areas of responsibility were as clear as the water flowing through the computers.

In today’s designs, some of the better suppliers promote this physical isolation through the use of a “cooling distribution unit” (CDU) with the flat plate heat exchanger inside. Not all CDUs are alike and some are merely pumps with a manifold to serve multiple cooling units. It is therefore wise to be cautious. Isolation minimizes risk.

Currently, vendor-furnished standard CDUs are limited in the number of water-cooled IRC units they can support. Typically these are supplied to support 12 to 24 IRCs with a supply and return line for each. That’s 24 to 48 pipes that need to be run from a single point out to the IRCs. If there are just a few high-density cabinets to cool, that may be acceptable, but, as the entire data center becomes high-density, the volume of piping can become a challenge. Even 1-inch diameter piping measures two inches after it is insulated.

The solution will be evolutionary. Existing data centers will go the CDU route until they reach critical mass. New data centers and ones undergoing major renovations will have the opportunity to run supply and return headers sized for multiple rows of high-density cabinets with individual, valved take-offs for each IRC unit. This reduces clutter under the floor, allowing reasonable airflow to other equipment that remains air-cooled. Again, the smart money will have this distribution isolated from the main chilled water supply and could even be connected to a local air-cooled chiller should the main chilled water plant fail.

Evaluating IRC Units

Given the multitude of water-cooled IRC variations, how do facility executives decide what’s best for a specific application? There are many choices and opportunities for addressing specific needs.

One consideration is cooling coil location. Putting the coils on top saves floor space, and the performance of top-of-the-rack designs is seldom affected by the daily operations of server equipment installs and de-installs. But many older data centers and some new ones have been shoehorned into buildings with minimal floor-to-ceiling heights, and many data centers run data cabling in cable trays directly over the racks. Both of these situations could make it difficult to put coils on top.

If the coil is on top, does it sit on top of the cabinet or is it hung from the structure above? The method of installation will affect data cabling paths, cable tray layout, sprinklers, lighting and smoke detectors. Be sure that these can all be coordinated within the given overhead space.

Having the coil on the bottom also saves floor space. Additionally it keeps all piping under the raised floor and it allows for overhead cable trays to be installed without obstruction. But it will either increase the height of the cabinet or reduce the number of “U” spaces in the cabinet. A “U” is a unit of physical measure to describe the height of a server, network switch or other similar device. One “U” or “unit” is 44.45 mm (1.75 inches) high. Most racks are sized between 42 and 50 “U”s (6 to 7 feet high) of capacity. To go taller is impractical because doing so usually requires special platforms to lift and install equipment at the top of the rack. To use smaller racks diminishes the opportunities to maximize the data center capacity.

With a coil on the bottom, a standard 42U cabinet will be raised 12 to 14 inches. Will that be too tall to fit through data center and elevator doors? How will technicians install equipment in the top U spaces? One option is a cabinet with fewer U spaces, but that will mean more footprint for the same capacity.
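The door-clearance question is easy to check with the numbers just given; the 80-inch door opening below is an assumed typical height, not a figure from the article:

Code:
# Does a bottom-coil cabinet still fit through a doorway?
U_HEIGHT_IN = 1.75           # one rack unit, inches
DOOR_HEIGHT_IN = 80.0        # assumed typical door opening, inches

for u_count in (42, 45, 48):
    rack_in = u_count * U_HEIGHT_IN
    for riser_in in (12, 14):                    # coil plinth range from the text
        total_in = rack_in + riser_in
        verdict = "fits" if total_in <= DOOR_HEIGHT_IN else "does NOT fit"
        print(f"{u_count}U ({rack_in:.1f} in) + {riser_in} in coil = {total_in:.1f} in -> {verdict}")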

Alternative Locations
Another solution is 1-foot-wide IRC units that are installed between each high-density cabinet. This approach offers the most redundancy and is the simplest to maintain. It typically has multiple fans and can have multiple coils to improve reliability. Piping and power are from under the floor. This design also lends itself to low-load performance enhancements in the future. What’s more, this design usually has the lowest installed price.

On the flip side, it uses more floor space than the other approaches, with a footprint equal to half a server rack. It therefore allows a data center to go to high-density servers but limits the total number of computer racks that can be installed. Proponents of this design concede that this solution takes up space on the data center floor. They admit that data centers have gone to high-density computing for reduced footprint as well as for speed, but they contend that the mechanical cooling systems now need to reclaim some of the space saved.

Rear-door solutions are a good option where existing racks need more cooling capacity. But the design's performance is more affected by daily operations than the other designs, because the door is opened when servers are being installed or removed. Facility executives should determine what happens to the cooling (and the servers) when the rear door is opened.

No matter which configuration is selected, facility executives should give careful consideration to a range of specific factors:

Connections. These probably pose the greatest risk no matter which configuration is selected. Look at the connections carefully. Are they of substance, able to take the stresses of the physical abuse when data cables get pulled around them or do they get stepped on when the floor is open? The connections can be anything from clear rubber tubing held on with hose clamps to threaded brass connections.

Think about how connections are made in the site as well as how much control can be exercised over underfloor work. Are workers aware of the dangers of putting stresses on pipes? Many are not. What if the fitting cracks or the pipe joint leaks? Can workers find the proper valve to turn off the leak? Will they even try? Does the data center use seal-tight electrical conduits that will protect power connections from water? Can water flow under the cables and conduits to the nearest drain or do the cables and conduits act like dams holding back the water and forcing it into other areas?

Valve quality. This is a crucial issue regardless of whether the valves are located in the unit, under the floor or in the CDU. Will the valve seize up over time and become inoperable? Will it always hold tight? To date, ball valves seem to be the most durable. Although valves are easy to take for granted, the ramifications of valve selection will be significant.

Servicing.
Because everything mechanical will eventually fail, one must look at IRC units with respect to servicing and replacement. How easy will servicing be? Think of it like servicing a car. Is everything packed so tight that it literally has to be dismantled to replace the cooling coil? What about the controls? Can they be replaced without shutting the unit down? And are the fans (the component that most commonly fails) hard wired or equipped with plug connections?

Condensate Drainage.
A water-cooled IRC unit is essentially a mini computer-room air conditioning (CRAC) unit. As such, it will condense water on its coils that will need to be drained away. Look at the condensate pans. Are they well drained or flat allowing for deposits to build up? If condensate pumps are needed what is the power source?

Some vendors are promoting systems that do sensible cooling only. This is good for maintaining humidity levels in the data center. If the face temperature of the cooling coil remains above the dew point temperature in the room, there will not be any condensation. The challenge is starting up a data center, getting it stabilized and then having the ability to track the data center’s dew point with all the controls automatically adjusting to maintain a sensible cooling state only.
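As a sketch of that dew-point tracking, using the common Magnus approximation (the coefficients are standard; the example room conditions and coil face temperature are assumed):

Code:
# Keep a sensible-cooling coil above the room dew point so it stays dry.
import math

MAGNUS_A, MAGNUS_B = 17.27, 237.7     # Magnus coefficients, valid roughly 0-60 C

def dew_point_c(temp_c, rh_percent):
    gamma = math.log(rh_percent / 100.0) + (MAGNUS_A * temp_c) / (MAGNUS_B + temp_c)
    return (MAGNUS_B * gamma) / (MAGNUS_A - gamma)

room_temp_c, room_rh = 24.0, 50.0
coil_face_temp_c = 16.0               # assumed coil face temperature

dp = dew_point_c(room_temp_c, room_rh)
print(f"room dew point: {dp:.1f} C")
print("coil stays dry (sensible cooling only)" if coil_face_temp_c > dp
      else "coil will condense -- needs a drain")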

Power. Data centers already do not have enough circuits to wire the computers, and now many more circuits are being added for the IRC units. What's more, designs must be consistent and power the mechanical systems to mimic the power distribution of the computers. What is the benefit of having 15 minutes of battery back-up if the servers go out on thermal overload in less than a minute? That being the case, IRC units need to be dual power corded as well. That requirement doubles the IRC circuit quantities, along with the associated distribution boards and feeders back to the service entrance.

Before any of the specifics of IRC unit selection really matter, of course, facility executives have to be comfortable with water in the data center. Many are still reluctant to take that step. There are many reasons:

• There's a generation gap. Relatively few professionals who have experience with water-cooled processors are still around.
• The current generation of operators has been trained so well about keeping water out of the data center that the idea of water-cooled processors is beyond comprehension.
• There is a great perceived risk in making water connections in and around live electronics.
• There is currently a lack of standard offerings from the hardware manufacturers.
The bottom line is that water changes everything professionals have been doing in data centers for the last 30 years. And that will create a lot of sleepless nights for many data center facility executives.

Before You Dive In
Traditionally, data centers have been cooled by computer-room air conditioning (CRAC) units via underfloor air distribution. Whether a data center can continue using that approach depends on many factors. The major factors include floor height, underfloor clutter, hot and cold aisle configurations, loss of air through tile cuts and many more too long to list here.

Generally speaking, the traditional CRAC concept can cool a reasonably designed and maintained data center averaging 4 kW to 6 kW per cabinet. Between 6 kW and 18 kW per cabinet, supplementary fan assist is generally needed to increase the airflow through the cabinets.
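Relating those per-cabinet densities back to the 40 kW load mentioned earlier in the thread (simple arithmetic, assuming the load can be spread evenly across cabinets):

Code:
# Cabinets needed for a 40 kW load at different per-cabinet cooling densities.
import math

total_load_kw = 40
for kw_per_cabinet in (4, 6, 12, 18):     # plain CRAC range vs. fan-assisted range
    cabinets = math.ceil(total_load_kw / kw_per_cabinet)
    print(f"{kw_per_cabinet:>2} kW per cabinet -> {cabinets} cabinets")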

The fan-assist technology comes in many varieties and has evolved over time.

• First there were the rack-mounted, 1-U type of fans that increase circulation to the front of the servers, particularly to those at the top of the cabinet.

• Next came the fixed muffin fans (mounted top, bottom and rear) used to draw the air through the cabinet. Many of these systems included a thermostat to cycle individual fans on and off as needed.

• Later came larger rear-door and top-mounted fans of various capacities integrated into the cabinet design to maximize the air flow evenly through the entire cabinet and in some cases even to direct the air discharge.

All these added fans add load to the data center and particularly to the UPS. To better address this and to maximize efficiencies, the latest fan-assist design utilizes variable-speed fans that adjust airflow rates to match the needs of a particular cabinet.

Until recently, manufacturers did not include anything more than muffin fans with servers. In the past year, this has started to change. Server manufacturers are now starting to push new solutions out of research labs and into production. At least one server manufacturer is now utilizing multiple variable turbine-type fans in their blade servers. These are compact, high air volume, redundant and part of the manufactured product. More of these server-based cooling solutions can be expected in the coming months.

deepceleron
Legendary
Activity: 1512 | Merit: 1036
December 29, 2013, 02:25:38 AM  #14

Quote from: everythingbitcoin on December 29, 2013, 02:23:02 AM
The move to water-cooled applications raises many challenges for facility executives. ...

You could just post links instead of being a tool:
www.facilitiesnet.com/datacenters/article/Free-Air-WaterCooled-Servers-Increase-Data-Center-Energy-Efficiency--12989
http://www.facilitiesnet.com/datacenters/article/ripple-effect--8227

Quote from: alumar on December 29, 2013, 02:06:11 AM
Location: Obviously wherever you live will play a huge part in this ...
...
You too:
http://www.avtech.com/About/Articles/AVT/NA/All/-/DD-NN-AN-TN/Recommended_Computer_Room_Temperature_Humidity.htm
kwoody
Sr. Member
Activity: 454 | Merit: 250
December 29, 2013, 02:45:00 AM  #15

lol @ copypaste answers
empoweoqwj
Hero Member
Activity: 518 | Merit: 500
December 29, 2013, 03:44:44 AM  #16

If you set up in a place where the temperature reaches 110 F in the summer, you are going to end up costing yourself an arm and a leg whatever cooling method you employ. Finding a cooler location should be your first priority.
Andaloons
Newbie
Activity: 45 | Merit: 0
December 29, 2013, 04:23:20 AM  #17

You might consider liquid immersion:

http://www.electronics-cooling.com/1996/05/direct-liquid-immersion-cooling-for-high-power-density-microelectronics/
Tino
Newbie
Activity: 10 | Merit: 0
December 29, 2013, 04:56:53 AM  #18

I live in a country with long and hot summers.

1. Warm air rises while cool air sinks
2. Open systems are not that hard to cool
3. All in life is about the flow, not how big or how expensive

If you create a big box or a small room where cool air sinks in and settles at the bottom, you have an inflow that is ready to be absorbed / sucked up. Like normal cards, the devices can suck up the cool air and spit it out, and they should spit it out upwards. iiiii ... i = device, and the dot is the hot air blown upward.

- Mount the devices low, with still 2 inches of space underneath, so that you could pour water on the side of the box/floor without immediate danger; it would run right under the devices and level out (only air is needed, just visualize it).
- The tops of the devices blow the hot air upwards while cool air sinks in from the side.
- [ \ i i i i i i ] cool air slides in to the bottom under the devices, and the fans suck it up and blow it up the way warm air would go anyway.

Circulating air without knowing what it wants to do is not going to work, so adjust to nature; you only need air if you understand the flow. I would use the AC airflow with a gap in the inflow, so that if there were ever any water it would drip out onto the floor before flowing into the cool box with the devices (not a common little drain pipe that can get blocked by one fly or something).

You could put some kind of kitchen hood (a fan mounted backwards in a tunnel connected to a mouth/hood) above the devices, with the hot air flowing out and up above the devices (iiii). Not too close to the cool-air input, and maybe not so low that it sucks the cool bottom air up before the little fans on the devices can (you're going to have to play with it).

Now if all that won't work, you have an insanely expensive setup that should be under the ground or not in a cold country  Wink

Just air will do the job: when I opened up the computer on the left side and turned on a normal living-room fan at the left front, it cooled down from 78C to 66C. I also have less dust than a normal closed computer inside the box.

In my next step I will have more cards in a huge cool box with the AC blowing in (not even through a tunnel, just pointed at it and not on swing), just not right underneath in case it drips in the future. If the ceiling fills up with a cloud of hot air, I will just point a fan backwards into another room or outside, right where the AC unit is. Maybe I will still create a tunnel with just two wide sheets leading from under the AC towards the box on the right \*\
timk225
Hero Member
Activity: 955 | Merit: 1004
December 29, 2013, 05:04:24 AM  #19

Two-part answer.

1. The answer is 42.

2. Datacenter servers are all CPU-based; there are no GPUs, much less GPUs that can mine scrypt coins efficiently. GPU mining is a waste of time for Bitcoin now.

3. So unless you are building custom mining PCs, scrap the datacenter idea.

Here's my wallet address:  12eBRzZ37XaWZsetusHtbcRrUS3rRDGyUJ

Thank You!   Grin
NewLiberty
Legendary
Activity: 1204 | Merit: 1002
December 29, 2013, 05:42:19 AM  #20

No GPUs in data centers? You should come to Hollywood, or see Pixar's render walls, for some data center GPU mania.
http://www.youtube.com/watch?v=i-htxbHP34s
Any animation or gaming studio of much stature has these GPU render walls.
