Bitcoin Forum
Author Topic: DataTank Mining: 1.2MW 3M Novec Immersion Cooled 2PH Mining Container  (Read 44433 times)
NotLambchop (Sr. Member, Activity: 378, Merit: 254)
June 30, 2014, 01:38:17 PM | #181

You are taking the post (including the picture) out of context, it was an ironic / comedic post which nobody expects to be taken seriously.

It is my post.  As its author (and patint hodler), I assure you it is super srs. It is the future of Bitcoin mining, and I'm IPOing it on Havelock.

Quote
...You are not talking 1m and 10m here but 500* 10m-100m...

Not sure what you're talking about, but I'm talking about this:
...a 10 ft electrical cable costs 10 times less than a 100 ft cable to keep it simple and make an example = cost savings again.

My example uses these exact lengths, not "1m and 10m."  antirack's example uses 10 ft and 100 ft, not "10m-100m."  Please don't make any more sciences.

Quote
Are you seriously complaining that a company is trying to have the average miner get into immersion cooled mining?
It is a great solution to many problems that come with high density miner deployment. (as already mentioned enough times)

Not complaining.  Just stating the obvious: if there were industry demand for this product, why offer it to laymen through an unlicensed Panamanian exchange?

Quote
Usually people are more upset when the big companies develop technologies and keep them to themselves. Offering anyone the chance to profit who is willing to take the risk is a good idea in my opinion.

If you think DataTank Mining is doing this to share the wealth, I doubt anything further I could say would sway you. 

  ~Happy investing!
NotLambchop
June 30, 2014, 02:04:52 PM (last edit: 02:28:28 PM) | #182

Re. your edit:

...
- cable example: see above. It was meant as an example for some hardware costs, but still stands on a large scale (if you take it literally)

See reply above.

Quote
- Enclosures for the sp30 are in fact needed, as they direct airflow and make the unit stackable / rack mountable.
  The custom rack solution would need a lot of NRE costs and most likely be more expensive than just fitting enclosures.

Compared to this Rube Goldberg contraption, the costs would be insignificant.  Further, to mount proprietary gear in these "Data Tanks" also requires NRE.  Unless you think that the installation is simply a question of backing up a dump truck full of PC boards to a 40-footer and tilting the bed.

Quote
- PSU costs are included in the fixed costs of the container. They are regular server grade platinum PSUs with some small tweaks for immersion cooling.
  They power every hardware that goes in the DataTanks and are reusable with every new miner generation.

So how is this different from "576 power supplies [that] cost money"?  How is this an example of savings?

Quote
- The "40ft container" is just the big enclosure, the tanks are much smaller. A very small amount of fluid is needed per blade and the cost is included in the overall cost of 0.5-0.7$/W.

If you're using the "it's included in teh cost" argument, then this entire discussion is irrelevant.  

Quote
- Shipping costs are saved as miners are lighter, take up less volume etc., the container is shipped only once (but can be shipped to another location if need be)

The miners are lighter, but (the 40-footer + cooling tower + miners) ain't.  All that stuff doesn't ship itself.

Quote
- DataTank gear needs less testing (during manufacturing) as there are fewer manufacturing steps (money saved here as well!).

When a board (blade) goes bad in the "DataTank," replacing it isn't as trivial as replacing an air/indirect water cooled board.  A whole cluster needs to be powered down and cooled before being serviced.

Quote
There is no need to power down the whole farm for servicing (I don't know how you have come up with that).

Unless you're suggesting that DataTank uses something similar to the LEM system IBM toyed with and discarded in the 70s (which antirack referenced), that's exactly what would need to happen, unless the techs are wearing rebreathers :)

Quote
There is also no need to put on any special protective gear (though putting on gloves shouldn't set you back too many hours ;) )

Judging by the vids, hundreds of boards sit in the same Novec tank.  This is a closed-loop system, with the vapors condensing and returning to the tank.  Opening that tank with the miners powered up and boiling Novec would do two things: vent the vapor into the environment (losing exorbitantly expensive Novec, if nothing else), and suffocate you (unless you think that Novec turns into air in the vapor phase).
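As a rough sanity check on the vapor point above: at equal temperature and pressure, a vapor's density scales with its molar mass, so Novec vapor is far denser than air and will pool rather than disperse. This sketch assumes Novec 649 (the thread never names the exact fluid), so the exact ratio is illustrative only.

```python
# Comparing Novec vapor density to air via molar mass (ideal-gas approximation).
# ASSUMPTION: Novec 649 / FK-5-1-12, molar mass ~316 g/mol; thread doesn't say.

M_NOVEC = 316.0   # g/mol, Novec 649 (assumed)
M_AIR = 29.0      # g/mol, mean molar mass of dry air

# At the same T and P, density is proportional to molar mass:
density_ratio = M_NOVEC / M_AIR   # roughly 11x denser than air
```

A vapor that heavy sits in and above the tank rather than mixing away, which is why an open, boiling tank is a displacement hazard even if the fluid itself is non-toxic.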

Quote
- It obviously takes more time to completely manufacture finished miners than to just get PCB blades from the factory, ready to be deployed in the tanks. (It not only saves time but also labor = money)

Etc., etc.
Collider (Hero Member, Activity: 714, Merit: 500)
June 30, 2014, 02:22:03 PM | #183

At the end of the day the only thing you need to know is this:

Immersion cooling is cheaper than building a high density facility to host your miners in.

Immersion cooling is more efficient than even a free-cooling facility.

Immersion cooling has cheaper operating costs, as fewer personnel are required, cheaper electricity can be sourced, and efficiency is higher.


Additional savings come from reusable power supplies and lower cost per unit of hardware (faster/cheaper manufacturing, no assembly required).

Warm fuzzy feeling because *bubbles* (this is not meant to add monetary value).

Your very own part of an ultra-high-density hosting facility!
NotLambchop
June 30, 2014, 02:27:09 PM | #184

OK Collider, I'll take your word for it :D

Collider
June 30, 2014, 02:29:21 PM | #185

If you do not even know the cost of a 1.2MW brick-and-mortar facility, how can you contest the value proposition?

Please share your inside knowledge of the alternative you are proposing to solve the high density miner deployment problem.
NotLambchop
June 30, 2014, 02:34:42 PM | #186

Lol Collider, a mining farm has little to do with data centers.  Think "chicken coop" :)

Collider
June 30, 2014, 02:40:40 PM | #187

1st: I said facility. This is not limited to a datacenter.

2nd: Do you know the cost of the pictured facility? People usually choose to have their miners enclosed so nobody with sticky fingers gets to them. (And one miner magically turns into two overnight ;) )

3rd: Assuming it is in China, power cost is between 8ct and 12ct/kWh (if you have the right connections)

DataTank can be deployed anywhere in the world with cheap (and clean) power (starting at 2.5ct/kWh).

DataTank miners are cheaper.

Something to consider for the future:
Chinese power costs are highly subsidized by the government. It is unlikely that they will remain so "low" forever.
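To put the rates being argued over in perspective, here is a quick sketch of what a continuous 1.2 MW load (the container's rating from the thread title) costs per year at the posters' quoted prices. The rates are theirs, not mine; the arithmetic is just kWh times rate.

```python
# Annual electricity cost for a continuous load at a flat per-kWh rate.
# Rates (2.5ct, 8ct, 12ct/kWh) are the figures quoted in the thread above.

def annual_cost_usd(load_mw: float, rate_ct_per_kwh: float) -> float:
    """Cost in USD of running `load_mw` continuously for one year."""
    kwh_per_year = load_mw * 1000 * 24 * 365      # MW -> kW, times hours/year
    return kwh_per_year * rate_ct_per_kwh / 100   # cents -> dollars

cheap = annual_cost_usd(1.2, 2.5)        # claimed DataTank rate: ~$263k/yr
china_low = annual_cost_usd(1.2, 8.0)    # low end of quoted China range: ~$841k/yr
china_high = annual_cost_usd(1.2, 12.0)  # high end: ~$1.26M/yr
```

The spread is roughly a million dollars a year per container, which is why the thread keeps circling back to who can actually source 2.5ct power.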
NotLambchop
June 30, 2014, 02:56:18 PM | #188

Ur sadly misinformed/confused about electrical prices in general...  US average electrical prices per kWh, in pennies:


http://www.eia.gov/electricity/data/browser/#/topic/7?geo=g&agg=0,1&endsec=vg


http://www.eia.gov/electricity/monthly/epm_table_grapher.cfm?t=epmt_5_3

The Moar You Know
Ozziecoin (Sr. Member, Activity: 448, Merit: 250)
June 30, 2014, 03:03:07 PM | #189

It's pointless to reason with nlc.

Collider
June 30, 2014, 03:26:03 PM | #190

It's pointless to reason with nlc.
I realize that.

@nlc
What exactly are you criticizing about my proposed electricity price?

Obviously you are not looking to get the "average industrial rate" but the cheapest.

Quote
Datatank can be deployed anywhere in the world with cheap (and clean) power. (starting at 2.5ct/kWh)

Are you claiming that there is no place on earth with stable electricity at ~2.5ct/kWh?
NotLambchop
June 30, 2014, 03:50:43 PM (last edit: 04:11:36 PM) | #191

I'm responding to the tidbit below, and pointing out just the opposite: even the average US industrial power cost is quite low, and it is substantially lower in some states, like Washington.

...
Chinese power costs are highly subsidized by the government. It is unlikely that they will remain so "low" forever.

Building a Goldbergian contraption is unnecessary, as the pic of a mining farm shows.  antirack's comparing of DataTank's operational costs to data centers is apples to oranges: IRL, large-scale mining farms don't rent data center space.*  They are built in hangars (KNC) or cheap warehouses/corrugated steel sheds (pic above).  No AC, no humidity control, no filtered air that you pay for in DCs.
That's what DataTank should be compared to, not DCs.

*Unless you're talking about mining farms selling shares to rubes, like PETA.  But then cryptx gots *solar power* :D

Anyhow, I see you have quite a thing for DataTank Mining, sorry if I offended your waifu :-\

  ~Happy investing
antirack (Hero Member, Activity: 489, Merit: 500, "Immersionist")
June 30, 2014, 04:11:44 PM (last edit: 04:26:58 PM) | #192

I understand that this is entertainment for you, but it's absolutely not for us, and we do take this seriously. This is our full-time job, and we know a thing or two about hardware, technology, and deploying Bitcoin clusters. If you have no valuable contribution to make to this thread, please do not post. A polite request.

The people and companies you mention are personal contacts of ours, the mines you post are places we visited.

KNC's facility was built by The Node Pole in Sweden, the same company that built Facebook's data center in their neighborhood. They don't come cheap. KNC hired them after doing $75 million in turnover within half a year.
KNC pays dozens and dozens of people to keep the fans running. That's a lot of salary in Sweden, on a monthly basis. Ask Sam Cole or read his press releases.
So much for cheap mines.

The picture in China is nice, but has little to do with today's hardware, which pulls a lot of power per device (i.e. the 2500W SP30). If HW is not stolen, it breaks down.

Even in China there is a drive towards immersion; after all, China is the pioneer. Give it a few months ::)
NotLambchop
June 30, 2014, 04:30:13 PM | #193

20A@230V?
4600W?
If we're still talking Spondoolies SP30, that's 2500W, or 2100W less.  The other monster is the TerraMiner IV @ 2100W, or 2500W less.
Why do you feel the need to exaggerate?
Also, what makes you think that the gear running in the DataTank will not require maintenance?  I assume that's what you mean by "...dozens and dozens of people to keep the fans running."

*Re. the Chinese mine: It IS today's hardware: "1000 machines (720G/machine using AM Gen 3 chip) on line in hashratio farm in China."
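For reference, the wattage arithmetic in this exchange can be checked directly (device ratings as quoted in the thread):

```python
# Checking the power figures argued over above; all ratings are from the posts.
supply_w = 20 * 230      # 20 A at 230 V implies a 4600 W circuit
sp30_w = 2500            # Spondoolies SP30 rated draw, per the thread
terraminer_w = 2100      # TerraMiner IV rated draw, per the thread

sp30_margin = supply_w - sp30_w            # 2100 W below the 4600 W figure
terraminer_margin = supply_w - terraminer_w  # 2500 W below the 4600 W figure
```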
jimmothy (Hero Member, Activity: 770, Merit: 509)
June 30, 2014, 05:07:41 PM | #194

The fact that crumbs/notlambchop is just grasping at straws is making me more sure that this is a solid investment.
NotLambchop
June 30, 2014, 05:24:16 PM | #195

@Jimmothy:

What can I tell you?  Thus far, all of your investings have been disastrously bad.  No exceptions.
If you could just figure out how to do the exact opposite of everything that you do, you'd be filthy rich Smiley



  ~Happy investing
LeanSixSigma (Newbie, Activity: 14, Merit: 0)
June 30, 2014, 09:01:09 PM | #196

Compared to this Rube Goldberg contraption, the costs would be insignificant.  Further, to mount proprietary gear in these "Data Tanks" also requires NRE.  Unless you think that the installation is simply a question of backing up a dump truck full of PC boards to a 40-footer and tilting the bed.

https://docs.google.com/uc?export=download&id=0ByWHHc0u_thNdzB3c2hvVzJkcTQ
Have you read the design guideline document? Installation is just plugging mining hardware in via an edge connector (like a GPU card), which takes a few seconds at most. The same edge connector could be used for air cooling as well, so no NRE is required. It is an attempt to introduce a standard similar to PCIe to the mining hardware industry, with several key manufacturers very interested in participating. Compare that to installing fans and heatsinks with multiple screws, taking several minutes at least. It's like desperately holding on to installing a GPU via a manufacturer-specific interface instead of via standardized PCIe. Setting up a few is no problem; setting up many thousands is a different story.

Whether something is a Rube Goldberg contraption is a matter of context and perspective. You may see this as over-engineered if you don't compare correctly. But with multiple hardware generations, mining hardware manufacturers having to design a new case, new cooling infrastructure, cooling and performance tests, and the assembly and manufacturing of everything surrounding the boards every single time sounds much more over-engineered to me. Considering that air is inherently inefficient at transporting heat away (it's the insulator in every Starbucks double-walled plastic mug, in double-paned windows, etc.), a lot of engineering has to be employed to make it work effectively.

Did you know that Tencent, Baidu and other big data centers in China have an army of technicians who do nothing but exchange broken fans? Did you know that Intel has reliability studies of electronics rusting away within a few months in India and China because of the much higher sulfur content in the air/humidity/rain? I know for a fact that many Chinese mining operations need to exchange a lot of broken fans as well and have to deal with tons of heat issues; if not the mining hardware, then network switches, PSUs, etc. Those open-air chicken-farm facilities will probably make it impossible to reuse most components like PSUs once they corrode away.

http://www.intelfreepress.com/news/corrosion-testing-procedures-adapt-to-rising-air-pollution/6763
http://www.oregonlive.com/silicon-forest/index.ssf/2013/10/intel_finds_asian_pollution_ma.html

Sure, some of today's mining hardware at 40nm+ may just barely get away with so-called 'free cooling'. But that comes at the expense of spreading out in low density (e.g. a huge 'chicken farm' / an entire building vs. one container). If everyone moves towards 28nm and lower in the not too distant future, many will realize very quickly that the physical limits of heat transfer (W/m²K) in air and single-phase liquid can't be overcome. Cooling hotter mining hardware requires either relatively cool air (a chiller in a hot climate, or a cold climate, which is often paired with higher electricity/logistics/labor costs and taxes) and/or a LOT of air volume. I've heard of Chinese installations having to shut down in the summer and/or deal with sub-par performance because of temperature-induced down-throttling. What a surprise, since it all still worked so nicely in the winter, as many set up air flow only for that scenario... 2-phase immersion cooling has a much higher heat-transfer coefficient, so it doesn't require many pumps inside the tanks at all to transport heat away. Fewer moving parts = less maintenance.
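The heat-transfer claim above can be made concrete with the standard convection relation A = P / (h · ΔT): the surface area needed to reject a given power shrinks in proportion to the film coefficient h. The h values and the 30 K rise below are my assumed textbook-range numbers, not figures from the thread.

```python
# Illustrating why boiling (2-phase) cooling supports far higher density.
# ASSUMPTIONS: forced air h ~ 50 W/m^2K (typical range 25-100); nucleate pool
# boiling h ~ 10^4 W/m^2K; 30 K allowable temperature rise. Not thread data.

def area_m2(power_w: float, h_w_m2k: float, delta_t_k: float) -> float:
    """Surface area needed to reject `power_w` given film coefficient and dT."""
    return power_w / (h_w_m2k * delta_t_k)

P, dT = 2500.0, 30.0                # one SP30-class board's heat load
air = area_m2(P, 50.0, dT)          # forced air: ~1.7 m^2 of finned surface
boiling = area_m2(P, 10000.0, dT)   # boiling immersion: ~0.008 m^2
```

The ratio is just the ratio of the two coefficients, a factor of a few hundred, which is the whole basis for dropping heatsinks and fans entirely.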

Quote
So how is this different from "576 power supplies [that] cost money"?  How is this an example of savings?

By not having to replace PSUs, and instead reusing them across many future hardware generations, you are not forced to buy new miners bundled with new PSUs each time.

But the much more important point is that if miners sell you a finished case with PSUs, it adds lead time to their supply chain. If a new mining chip is developed that looks really good on paper at the prevailing difficulty, it might only reach your hands, ready for mining, several weeks or months later: how fast can boards be assembled into cases if those have to be supplied from elsewhere, shipped back to a logistics center after assembly, etc.? And as mentioned above on the NRE: customizing the previous cooling solution for the new mining board, testing it, and ordering fans/heatsinks/water-cooling blocks in bulk with inevitable delivery time, all before assembly can even start.

In comparison, with immersion cooling the mining board can be shipped out right away. That's technically not a saving but additional realized mining income. The chart and text in the prospectus on the effect of delayed deployment describe how this crucial difference of just 10 days or less in getting online faster could already pay off the entire DTM container cost and still leave you a nice income on top, while others are still waiting for their hardware and still have to start setting up afterwards.
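The payback argument above is easy to sketch numerically. The 10-day head start is from the prospectus claim; the daily income figure is a hypothetical placeholder of mine, not a DTM number.

```python
# Sketch of the "earlier deployment pays for the container" argument.
# ASSUMPTION: $60k/day for a full container early in a hardware generation is
# a made-up placeholder; the 10-day figure is the prospectus's claim.

def early_deployment_income(days_earlier: float, daily_income_usd: float) -> float:
    """Extra mining income from coming online sooner, ignoring the (small)
    difficulty change over that short window."""
    return days_earlier * daily_income_usd

head_start = early_deployment_income(10, 60_000)   # $600,000 in this scenario
```

Whether that actually covers a container depends entirely on the income assumption, which is the part the thread is disputing.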

Quote
The miners are lighter, but (the 40-footer + cooling tower + miners) ain't.  All that stuff doesn't ship itself.

It's all in the prospectus: the containers will be set up first at a location chosen by DTM. Only shortly before setup is complete will mining hardware be purchased, at the best efficiency and price at that point in time. So only the mining hardware has to be shipped around. If it's necessary to ship an entire case, then of course the costs are going to be higher. Some cloud miners have revealed that they resort to splitting mining hardware into boards in one shipment and cases plus heatsinks, etc. in another for customs-duty reasons, because duties are much higher on finished products than on components.

Hobby miners might not care if one or a few cases slip through customs. But for an entire farm, the story looks completely different. And we're not talking about one generation only (although those savings are already significant, despite shipping the container the first time): every subsequent mining hardware generation again incurs much higher shipping costs. A 2U case is about the same volume as probably 5-8 mining boards, while already-deployed DataTank containers don't need to be shipped any more and will be ready for new generations.

Quote
When a board (blade) goes bad in the "DataTank," replacing it isn't as trivial as replacing an air/indirect water cooled board.  A whole cluster needs to be powered down and cooled before being serviced.

See above about replacement time. Consider, furthermore, that most cooling solutions are already at today's maximum. If the TDP is higher, they will need a new solution, so it is very unlikely that the old cooling solution is simply reused for new mining hardware. You can be pretty sure that a cooling solution for 40nm or even 28nm won't work for 20nm or 14nm. Who deliberately ships oversized heatsinks and fans with the current generation of mining hardware? Even if it would work, do you really think mining hardware designers already plan to keep the same PCB mounting holes for heatsinks, fan connectors, etc. in the same place for the next generation?

With 2-phase immersion cooling none of this matters, since the fluid automatically surrounds the new mining hardware, no matter its shape and format. If you extrapolate the 4kW simulation (in 200cc of fluid, in 1L of space) to an entire server rack, you would be at a theoretical 3-4MW per rack. Try imagining how many future mining generations that covers, when today's air-cooling capacity maxes out at maybe 35-45kW per rack with LOTS of effort. It's not only cheap for the first generation; the costs can truly be split over multiple hardware generations because of its incredible excess thermal capacity.
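The rack extrapolation above can be sanity-checked. The 4 kW-per-litre figure is the post's; the rack dimensions and packing fraction below are my assumptions, so treat the result as order-of-magnitude only.

```python
# Sanity check of the "3-4 MW per rack" extrapolation.
# ASSUMPTIONS: ~0.6 m x 1.0 m x 2.0 m rack envelope and 70% usable volume
# are my estimates; the 4 kW/L demo figure is quoted from the post.

demo_kw_per_litre = 4.0                         # quoted: 4 kW in 1 L of space
rack_volume_litres = 0.6 * 1.0 * 2.0 * 1000     # ~1200 L rack envelope (assumed)
usable_fraction = 0.7                           # plumbing/packing overhead (assumed)

rack_mw = demo_kw_per_litre * rack_volume_litres * usable_fraction / 1000  # ~3.4 MW
```

Under those assumptions the naive scaling does land in the claimed 3-4 MW range, though a real rack would also have to deliver that much power and condense that much vapor.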

And how do you know that a whole cluster needs to be powered down? Only a single slot of many inside a tank needs to be powered down, since the PSUs are connected to up to 8 boards within that slot. And even that may not be the end of the road yet.

Quote
Judging by the vids, hundreds of boards are sitting in the same Novec tank.  This is a closed-loop system, with the vapors condensing and returning to the tank.  Opening that tank with the miners powered up and boiling Novec would do 2 things:  Vent the vapor into the environment (thus losing exorbitantly expensive Novec, if nothing else), and suffocating you (unless you think that Novec turns into air in vapor phase).

Again, how do you know? antirack has posted a picture of invited press, 3M people and himself standing around an open, bubbling tank. antirack is obviously still alive and posting... Furthermore, it's on the company website that minimizing fluid and vapor losses is Allied Control's expertise. Having a way to hot-swap is one of the technologies already developed.

Let me quote myself from another post:
Quote
On toxicity and evaporation and all other arguments on that it's not practical - one of the many articles on the web:
http://www.pcworld.idg.com.au/article/542462/intel_sgi_test_full-immersion_cooling_servers/
Does that mean that Intel, SGI, the U.S. Naval Research Laboratory, Lawrence Berkeley National Laboratory, Schneider Electric, etc. are all wrong, suicidal and have no idea what they're doing? Interesting... maybe you could make much more money by teaching all their PhDs a lesson in physics and chemistry and proving they're all wrong...

It's also used as a fire-extinguishing agent in the Library of Congress, as well as by the military for extremely confined spaces with troops inside, as a much healthier and better alternative due to its much better no-effect limits than other solutions. It is intended for fogging an entire room, with people inside, at high concentrations of Novec, without causing any suffocation:
http://solutions.3m.com/wps/portal/3M/en_US/3MNovec/Home/News/PressReleases/?PC_Z7_RJH9U523007AE0IUQAESTF39O3000000_univid=1273685423873
http://www.youtube.com/watch?v=KNSHsUWcplo

It's perfectly fine to have another opinion and I will always respect that. But spreading wrong claims based on no source at all is not really strengthening any position.
Benny1985 (Sr. Member, Activity: 391, Merit: 250)
June 30, 2014, 09:35:27 PM | #197

Ur sadly misinformed/confused about electrical prices in general...  US average electrical prices per KWh, in pennies:


http://www.eia.gov/electricity/data/browser/#/topic/7?geo=g&agg=0,1&endsec=vg


http://www.eia.gov/electricity/monthly/epm_table_grapher.cfm?t=epmt_5_3

The Moar You Know

You do realize that is an average for the US, and not indicative of every state, right? Where I live (Ohio), it's 2.5-4.0c/kWh depending on your provider.
2112 (Legendary, Activity: 2128, Merit: 1073)
June 30, 2014, 10:18:10 PM | #198

If everyone is moving towards 28nm and lower in not too distant future, then many will realize very quickly that physical limits of W/m2K heat transfer in air and single phase liquid can't be overcome.
Another example of a lack of understanding of physics and an intentionally confusing analysis. The secondary loop in Allied Control's system is single-phase. Obviously this poster is neither semi-literate nor semi-numerate. As of now I see two possibilities:

1) dishonest analyst (a.k.a. shill)
2) an honest analyst who confuses single-stage and two-stage liquid-cooled systems with air-cooled systems.

Time will tell, so I'll quote the whole message against possible future deletion or editing.
Compared to this Rube Goldberg contraption, the costs would be insignificant.  Further, to mount proprietary gear in these "Data Tanks" also requires NRE.  Unless you think that the installation is simply a question of backing up a dump truck full of PC boards to a 40-footer and tilting the bed.

https://docs.google.com/uc?export=download&id=0ByWHHc0u_thNdzB3c2hvVzJkcTQ
Have you read the design guideline document? The installation will be just plugging in mining hardware via edge connector (like a GPU card), which takes a few seconds at most. The same edge connector could be used for air-cooling as well, so that no NRE required. It is an attempt to introduce a similar standard like PCIe to the mining hardware industry with several key manufacturers very interested to participate. Compare that to installing fans and heatsinks with multiple screws, taking several minutes at least. It's like desperately holding on to installing a GPU via manufacturer-specific interface instead of via standardized PCIe. Setting up a few is no problem, setting up many thousands is a different story.

Rube Goldberg contraption are subject to context and perspective. You may see that as over-engineered if you don't compare correctly. But with multiple hardware generations, every time mining hardware manufacturers have to design a new case, new cooling infrastructure, cooling and performance tests, assembly and manufacturing of everything surrounding the boards, this sounds to me much more as over-engineered. Considering that air is inherently inefficient to transport heat away (insulator in every Starbucks double-walled plastic mug, double-paned windows, etc.), a lot of engineering has to be employed to make it work effectively.

Did you know that Tencent, Baidu and other big data centers in China have an army of technicians who do nothing else but exchanging broken fans? Did you know that Intel has reliability studies of electronics rusting away within a few months in India and China because of much higher sulfur content in the air/humidity/rain? I know for a fact that many Chinese mining operations need to exchange a lot of broken fans as well and have to deal with tons of heat issues - if not mining hardware, then network switches, PSUs, etc. Those open air chicken farm facilities will probably make it impossible to reuse most of the components like PSUs if they corrode away.

http://www.intelfreepress.com/news/corrosion-testing-procedures-adapt-to-rising-air-pollution/6763
http://www.oregonlive.com/silicon-forest/index.ssf/2013/10/intel_finds_asian_pollution_ma.html

Sure, some of today's mining hardware at 40nm+ may just barely get away with so-called 'free cooling'. But that comes at the expense of spreading out in low density (e.g. huge 'chicken farm' / entire building vs 1 container). If everyone is moving towards 28nm and lower in not too distant future, then many will realize very quickly that physical limits of W/m2K heat transfer in air and single phase liquid can't be overcome. To cool down hotter mining hardware, it either requires relatively cool air (chiller in hot climate, or cold climate but then often paired with higher electricity/logistics/labor costs and taxes) and/or a LOT of air volume. I've heard of China installations having to be shut down in the summer and/or having to deal with sub-par performance because of temperature induced down-throttling. What a surprise, since it all still worked so nicely in the winter, as many have set up air flow only for that scenario... 2-phase immersion cooling has a much higher heat transfer, so that it doesn't require many pumps inside the tanks at all to transport heat away. Less moving parts = less maintenance.

Quote
So how is this different from "576 power supplies [that] cost money"?  How is this an example of savings?

If not having to replace PSUs, but being able to reuse them for many future hardware generations, you are not forced to buy new miners again with new PSUs.

But the much more important point is that if miners are selling you a finished case with PSUs, it will add on to lead time on their supply chain. Meaning that if a new mining chip has been developed, which looks really good on screen at the prevailing difficulty that time, it might only come into your hands and will be ready for mining several weeks or months later: How fast can boards be assembled into cases if they have to be supplied from elsewhere, shipping back to a logistics center after assembly, etc.? And as mentioned above on the NRE: Customizing previous cooling solution for the new mining board, testing it, starting to order fans/heatsinks/water cooling blocks in bulk with inevitable delivery time, before assembly can even start.

In comparison to immersion cooling, the mining board could be shipped out right away. That's technically not any kind of saving, but additional realized mining income. The chart in the prospectus explaining the effect of delayed deployment time and text describe that this crucial time difference of just 10 days or less getting online faster could already pay off the entire DTM container costs and still leave you a nice income on top, while others are still waiting for their hardware and still have to start setting up afterwards.

Quote
The miners are lighter, but (the 40-footer + cooling tower + miners) ain't.  All that stuff doesn't ship itself.

It's all in the prospectus: The containers will be set up first at a location chosen by DTM. Only shortly before completion of setup, mining hardware will be purchased at best efficiency and price at that point of time. So only mining hardware has to be shipped around. But if it's necessary to ship an entire case, then of course the costs are going to be higher. Some cloud miners have revealed that they have to resort to split up mining hardware into boards in one shipment and cases plus heatsinks, etc. in another shipment for custom duties reasons, because those are much higher on finished products than on components.

Hobby miners might not care if one or a few cases slip through customs. But if it's about an entire farm, the story looks completely different. And then again, we're not talking about one generation only (although those savings are already significant, despite shipping the container first time) - for every subsequent mining hardware generation, there are again much higher shipping costs. A 2U case is about the same volume as probably 5-8 mining boards, while already set up DataTank containers don't need to be shipped any more and will be ready for new generations.

Quote
When a board (blade) goes bad in the "DataTank," replacing it isn't as trivial as replacing an air/indirect water cooled board.  A whole cluster needs to be powered down and cooled before being serviced.

See above about replacing time. Consider furthermore, that most cooling solutions are already at today's maximum. If the TDP is higher, then they will need a new solution. So it's very likely not just reusing the old cooling solution for new mining hardware. You can be pretty sure that a cooling solution for 40nm or even 28nm won't work for 20nm or 14nm. Who is deliberately shipping oversized heatsinks and fans with current generation of mining hardware? Even if it would work, do you really think that mining hardware designers already consider to have the same PCB mounting holes for heatsinks, fan connectors, etc. at the same place for the next generation?

With 2-phase immersion cooling all this doesn't matter, since the fluid automatically surrounds the new mining hardware, no matter which shape and format. If you extrapolate the 4kW simulation in 200cc fluid in 1L space to an entire server rack, you would be at a theoretical 3-4MW per rack - try imagining how many future mining generations that would be if today's air cooling capacity maxes out at maybe 35-45kW per rack with LOTS of effort. It's not only cheap for the first generation already, but the costs can be truly split over multiple hardware generations because of it's incredible excess thermal capacity.

And how do you know that a whole cluster needs to be powered down? Only a single slot among many inside a tank needs to be powered down, since its PSUs are connected to up to 8 boards within that slot. And even that may not be the end of the road yet.

Quote
Judging by the vids, hundreds of boards are sitting in the same Novec tank.  This is a closed-loop system, with the vapors condensing and returning to the tank.  Opening that tank with the miners powered up and Novec boiling would do 2 things:  vent the vapor into the environment (thus losing exorbitantly expensive Novec, if nothing else), and suffocate you (unless you think that Novec turns into air in its vapor phase).

Again, how do you know? Antirack has posted a picture of invited press, 3M people and himself standing around an open, bubbling tank. Antirack is obviously still alive and posting... Furthermore, the company website states that minimizing fluid and vapor losses is part of Allied Control's expertise. Having a way to hot-swap is one of the technologies already developed.

Let me quote myself from another post:
Quote
On toxicity, evaporation, and all the other arguments that it's not practical - one of many articles on the web:
http://www.pcworld.idg.com.au/article/542462/intel_sgi_test_full-immersion_cooling_servers/
Does it mean that Intel, SGI, the U.S. Naval Research Laboratory, Lawrence Berkeley National Laboratory, Schneider Electric, etc. are all wrong, suicidal, and have no idea what they're doing? Interesting... maybe you could make much more money by teaching all their PhDs a lesson in physics and chemistry and proving they're all wrong...

It's also used as a fire extinguishing agent in the Library of Congress, as well as by the military in extremely confined spaces with troops inside, as a much healthier and better alternative thanks to its much more favorable no-effect exposure limits than other solutions - it is intended for fogging an entire room with people inside at high concentrations of Novec, without causing any suffocation:
http://solutions.3m.com/wps/portal/3M/en_US/3MNovec/Home/News/PressReleases/?PC_Z7_RJH9U523007AE0IUQAESTF39O3000000_univid=1273685423873
http://www.youtube.com/watch?v=KNSHsUWcplo

It's perfectly fine to have another opinion and I will always respect that. But spreading wrong claims based on no source at all is not really strengthening any position.

NotLambchop
Sr. Member
****
Offline Offline

Activity: 378
Merit: 254


View Profile
June 30, 2014, 10:35:18 PM
 #199

...The installation will be just plugging in mining hardware via an edge connector (like a GPU card), which takes a few seconds at most. The same edge connector could be used for air-cooling as well, so no NRE is required. It is an attempt to introduce a standard similar to PCIe to the mining hardware industry, with several key manufacturers very interested in participating. Compare that to installing fans and heatsinks with multiple screws, which takes several minutes at least. It's like desperately holding on to installing a GPU via a manufacturer-specific interface instead of via standardized PCIe. Setting up a few is no problem; setting up many thousands is a different story.

Nothing wrong with establishing a standard, but this flies in the face of the "compatible with any hardware" claim.  It's compatible with any hardware that follows this standard, which (to my knowledge) is, at this point, just one model from one manufacturer.  Am I mistaken?

DataTank Mining also does not (to my knowledge) have much in the way of IP here, and no chance for patents.  There is nothing to stop copycats once the hard work of developing demand is done.  If large miner manufacturers are genuinely interested, I see no reason to need fundraising through an unlicensed Panamanian exchange.  None.  If large manufacturers are not interested, then investing in this will be no different from investing in all the other Havelock offerings--a sure way to lose money.

Quote
Rube Goldberg contraptions are a matter of context and perspective. You may see this as over-engineered if you don't compare correctly. But over multiple hardware generations, mining hardware manufacturers have to design a new case, new cooling infrastructure, cooling and performance tests, and the assembly and manufacturing of everything surrounding the boards every single time - that sounds much more over-engineered to me. Considering that air is inherently inefficient at transporting heat away (it's the insulator in every double-walled Starbucks plastic mug, in double-paned windows, etc.), a lot of engineering has to be employed to make it work effectively.

You are forgetting the very thing you seem to advocate--standard form factor.  There are hundreds of motherboards, using different Intel chips, but all could use the same CPU cooler.  Standard form factor has nothing to do with 2-phase immersion cooling, and nothing is stopping manufacturers from choosing one and adhering to it.  They are doing it as we speak--using commercially available CPU coolers.  Common as dirt and only a bit more expensive.  No need for custom engineering, and readily available at amazing prices.

Quote
Did you know that Tencent, Baidu and other big data centers in China have an army of technicians who do nothing but exchange broken fans? Did you know that Intel has reliability studies of electronics rusting away within a few months in India and China because of the much higher sulfur content in the air/humidity/rain? I know for a fact that many Chinese mining operations need to exchange a lot of broken fans as well and have to deal with tons of heat issues - if not on mining hardware, then on network switches, PSUs, etc. Those open-air chicken farm facilities will probably make it impossible to reuse most components like PSUs, as they corrode away.

Useful lifespan of mining hardware is less than a year.  If "high sulfur content" corrodes them in 2 years, nothing of value is lost.  Enterprise-grade fans have reasonably good MTBF.  It's also a technology that's been around a bit, and is dirt cheap and simple to repair/replace by unskilled help.  Unlike the DataTank solution.
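For scale on the fan-swap workload both sides are arguing about: under the standard exponential-failure approximation, expected failures per year is roughly N·t/MTBF. A sketch with illustrative numbers (both the MTBF figure and the fleet size are my assumptions, not from the thread):

```python
# Rough expected-failure arithmetic for a fan fleet.
# Assumed numbers (illustrative only): enterprise fans with an MTBF
# around 70,000 hours, a farm running 2,000 fans continuously.
mtbf_hours = 70_000.0
fan_count = 2_000
hours_per_year = 24 * 365

# Exponential-failure approximation: expected failures/year ~ N * t / MTBF
expected_failures_per_year = fan_count * hours_per_year / mtbf_hours
print(f"expected fan swaps per year: {expected_failures_per_year:.0f}")  # ~250
```

Whether ~250 swaps a year is "an army of technicians" or a trivial chore for unskilled help depends entirely on the fleet size you plug in; the formula itself is the only non-negotiable part.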

Quote
Sure, some of today's mining hardware at 40nm+ may just barely get away with so-called 'free cooling'. But that comes at the expense of spreading out in low density (e.g. huge 'chicken farm' / entire building vs 1 container).

The picture I posted is ASICMINER's Gen 3 chips.  Far from the edge of obsolescence.

Quote
If everyone is moving towards 28nm and lower in the not-too-distant future, then many will realize very quickly that the physical limits of W/m2K heat transfer in air and single-phase liquid can't be overcome.

Smaller node size doesn't mean higher W/m2.  KNC is cooling their 20nm chippery with air.  
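The W/m² point can be made concrete: heat flux is just power over die area, so a die shrink raises the flux only if power doesn't fall along with the area. A sketch with hypothetical chip numbers (none of them from the thread):

```python
# Heat flux = power / die area. A die shrink raises W/cm^2 only if
# power doesn't drop along with the area. Illustrative numbers only.
def heat_flux_w_per_cm2(power_w: float, die_area_mm2: float) -> float:
    """Return dissipated heat flux in W/cm^2."""
    return power_w / (die_area_mm2 / 100.0)  # 100 mm^2 = 1 cm^2

# A hypothetical 28 nm chip: 40 W over a 100 mm^2 die
print(heat_flux_w_per_cm2(40, 100))   # 40.0 W/cm^2
# A hypothetical 20 nm shrink: 25 W over a 64 mm^2 die -
# smaller node, yet the flux is essentially unchanged
print(heat_flux_w_per_cm2(25, 64))    # 39.0625 W/cm^2
```

This is exactly why a smaller node doesn't automatically mean a hotter cooling problem: designers can spend the shrink on efficiency instead of density.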

Quote
To cool down hotter mining hardware, it either requires relatively cool air (chiller in hot climate, or cold climate but then often paired with higher electricity/logistics/labor costs and taxes) and/or a LOT of air volume. I've heard of China installations having to be shut down in the summer and/or having to deal with sub-par performance because of temperature induced down-throttling. What a surprise, since it all still worked so nicely in the winter, as many have set up air flow only for that scenario...

In the US, we invented air conditioning, putting the whole 2-phase thing into a discrete box.  You'd be surprised how well it copes with those hot summer days.  In the winter, we use another paragon of Yankee ingenuity--the Off Switch.

Quote
2-phase immersion cooling has a much higher heat transfer, so that it doesn't require many pumps inside the tanks at all to transport heat away. Less moving parts = less maintenance.

Quote
So how is this different from "576 power supplies [that] cost money"?  How is this an example of savings?

If you don't have to replace PSUs but can reuse them for many future hardware generations, you are not forced to buy new miners with new PSUs again.

There's nothing preventing you from reusing standard power supplies.  If you want them to last forever, simply avoid those high-sulfur electronics-eating hot spots on this planet you've mentioned before.  Just like you would avoid building your mining farm 1000 leagues under the sea, or 500 miles away from the closest power pole.  It's common sense.

Quote
But the much more important point is that if miners are selling you a finished case with PSUs, it adds to the lead time in their supply chain.

Wait, maybe that's why KNC, along with many other gear companies, doesn't include a PSU?  Simple solutions to complex problems Cool

Quote
Meaning that if a new mining chip has been developed which looks really good on screen at the prevailing difficulty at that time, it might only come into your hands and be ready for mining several weeks or months later: How fast can boards be assembled into cases if those have to be supplied from elsewhere, shipped back to a logistics center after assembly, etc.? And as mentioned above on the NRE: customizing the previous cooling solution for the new mining board, testing it, and ordering fans/heatsinks/water cooling blocks in bulk with inevitable delivery times, before assembly can even start.

Are you saying that you can concoct a hypothetical in which a manufacturer fumbles every bit of planning, forgoes common sense, makes some idiotic decisions and winds up not getting me the product on time?  Sure, it's even been done.  By going with DataTank, on the other hand, mining manufacturers would be locked into a single-source proprietary solution.  You mess up, and they can't just switch to another heatsink manufacturer--they're dead in the water.  Even the US Army doesn't single-source.

Quote
In comparison, with immersion cooling the mining board could be shipped out right away. That's technically not a saving, but additional realized mining income. The chart and text in the prospectus on the effect of delayed deployment explain how a crucial difference of just 10 days or less in getting online faster could already pay off the entire DTM container cost and still leave you a nice income on top, while others are still waiting for their hardware and then still have to start setting up.

See above.  There are many ways for manufacturers to screw up.  Adding a single-source, unconventional cooling solution into the mix is unlikely to eliminate them.
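Whether that 10-day claim holds depends entirely on the inputs, which the prospectus figures aren't reproduced here to verify. A sketch with made-up but era-plausible numbers (every figure below is my assumption, not a prospectus value):

```python
# Sketch of the "10 days earlier online" payoff claim.
# Every figure below is an illustrative assumption, not a prospectus number.
farm_power_w = 1_200_000          # the 1.2 MW container
efficiency_j_per_gh = 1.0         # assumed chip efficiency (J per GH)
hashrate_ths = farm_power_w / efficiency_j_per_gh / 1000  # -> 1200 TH/s
revenue_per_ths_day = 20.0        # assumed USD per TH/s per day (2014-era)
days_earlier = 10
container_cost = 100_000.0        # assumed container cost, USD

extra_income = hashrate_ths * revenue_per_ths_day * days_earlier
print(f"extra income from starting {days_earlier} days earlier: ${extra_income:,.0f}")
print(f"covers assumed container cost: {extra_income >= container_cost}")
```

With these inputs the head start is worth $240,000 and does cover the assumed container cost; halve the per-TH revenue or double the container price and it no longer does, which is exactly the single-source risk being argued above.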

End of Part the First
NotLambchop
Sr. Member
****
Offline Offline

Activity: 378
Merit: 254


View Profile
June 30, 2014, 11:32:17 PM
 #200


Part the Two

"Americans have spent a million dollars to develop a pen that works in zero gravity and nearly perfect vacuum.  The Russians used a pencil."

...
Quote
The miners are lighter, but (the 40-footer + cooling tower + miners) ain't.  All that stuff doesn't ship itself.

It's all in the prospectus: The containers will be set up first at a location chosen by DTM. Only shortly before completion of setup will mining hardware be purchased, at the best efficiency and price at that point in time. So only mining hardware has to be shipped around. But if it's necessary to ship an entire case, then of course the costs are going to be higher. Some cloud miners have revealed that they have to resort to splitting mining hardware shipments up - boards in one shipment and cases plus heatsinks, etc. in another - for customs duties reasons, because duties are much higher on finished products than on components.

Hobby miners might not care if one or a few cases slip through customs. But for an entire farm, the story looks completely different. And then again, we're not talking about one generation only (although those savings are already significant, despite shipping the container the first time) - for every subsequent mining hardware generation there are again much higher shipping costs. A 2U case is about the same volume as probably 5-8 mining boards, while already-set-up DataTank containers don't need to be shipped any more and will be ready for new generations.

This is getting weird.  An aspiring mining farm will buy your container and have it set up before getting its mining gear?  What a wonderful way to be faqued twice if the ASIC manufacturer doesn't deliver on time--not only having to absorb financial losses from not getting the miners, but being proud owners of some pricey storage containers and a chunk of land to keep them on.

Quote
Quote
When a board (blade) goes bad in the "DataTank," replacing it isn't as trivial as replacing an air/indirect water cooled board.  A whole cluster needs to be powered down and cooled before being serviced.

See above about replacing time. Consider, furthermore, that most cooling solutions are already at today's maximum.

wat

Quote
If the TDP is higher, they will need a new solution, so it's very likely not just a matter of reusing the old cooling solution for new mining hardware. You can be pretty sure that a cooling solution for 40nm or even 28nm won't work for 20nm or 14nm. Who deliberately ships oversized heatsinks and fans with the current generation of mining hardware? Even if it would work, do you really think mining hardware designers already plan to keep the same PCB mounting holes for heatsinks, fan connectors, etc. in the same place for the next generation?

If the market demands upgradability, I'm sure the manufacturers (who are not currently using standard CPU cooling solutions) will accommodate.
This would be simpler than incorporating your edge connector into their design.  Again, why come up with complex solutions for simple problems?

Quote
With 2-phase immersion cooling, none of this matters, since the fluid automatically surrounds the new mining hardware, no matter its shape and format. If you extrapolate the 4kW demonstration in 200cc of fluid in 1L of space to an entire server rack, you would be at a theoretical 3-4MW per rack - try imagining how many future mining generations that would cover, when today's air cooling capacity maxes out at maybe 35-45kW per rack with LOTS of effort. It's not only cheap for the first generation already; the costs can be truly split over multiple hardware generations because of its incredible excess thermal capacity.

To be honest, as much as I feel that Bitcoin is here forever and its price will keep climbing, it would be irresponsible and impractical for mining concerns to lock themselves into such a long-term commitment.  If costs aren't recouped and profits are not realized quickly, people lose interest.

Quote
And how do you know that a whole cluster needs to be powered down? Only a single slot among many inside a tank needs to be powered down, since its PSUs are connected to up to 8 boards within that slot. And even that may not be the end of the road yet.

Please reread my previous posts.

Quote
Quote
Judging by the vids, hundreds of boards are sitting in the same Novec tank.  This is a closed-loop system, with the vapors condensing and returning to the tank.  Opening that tank with the miners powered up and Novec boiling would do 2 things:  vent the vapor into the environment (thus losing exorbitantly expensive Novec, if nothing else), and suffocate you (unless you think that Novec turns into air in its vapor phase).

Again, how do you know? Antirack has posted a picture of invited press, 3M people and himself standing around an open, bubbling tank. Antirack is obviously still alive and posting... Furthermore, the company website states that minimizing fluid and vapor losses is part of Allied Control's expertise. Having a way to hot-swap is one of the technologies already developed.

I'm not saying Novec is toxic.  As I understand it, it is fairly inert.  But so is neon.  Don't try breathing it--it's fairly pointless, you'll suffocate.  Opening a closed-loop container while inside a 40-footer doesn't give you much air to dilute into.  Hence you'll probably want a forced-air suit.
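The dilution worry can be put in rough numbers. A sketch with approximate fluid properties (Novec-649-class density and molar mass; the boil-off volume and the 40-footer's internal volume are my assumptions):

```python
# How much air would escaped Novec vapor displace inside a 40 ft container?
# Fluid properties are approximate; volumes are assumptions, not vendor specs.
liquid_litres = 50.0          # assumed spill/boil-off volume
density_g_per_ml = 1.6        # approximate density of a Novec-class fluid
molar_mass_g = 316.0          # approximate molar mass (Novec 649-like)
molar_volume_l = 24.45        # ideal gas at 25 C, 1 atm
container_m3 = 67.0           # nominal internal volume of a 40 ft container

moles = liquid_litres * 1000 * density_g_per_ml / molar_mass_g
vapor_m3 = moles * molar_volume_l / 1000
print(f"vapor volume: {vapor_m3:.1f} m^3 "
      f"({100 * vapor_m3 / container_m3:.0f}% of the container)")
```

Even a substantial 50 L loss vaporizes to only a single-digit percentage of the container's air, though the heavy vapor pools low, which is why ventilation (or the forced-air suit above) still matters.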

Quote
Let me quote myself from another post:
Quote
On toxicity, evaporation, and all the other arguments that it's not practical - one of many articles on the web:
http://www.pcworld.idg.com.au/article/542462/intel_sgi_test_full-immersion_cooling_servers/
Does it mean that Intel, SGI, the U.S. Naval Research Laboratory, Lawrence Berkeley National Laboratory, Schneider Electric, etc. are all wrong, suicidal, and have no idea what they're doing? Interesting... maybe you could make much more money by teaching all their PhDs a lesson in physics and chemistry and proving they're all wrong...

It's also used as a fire extinguishing agent in the Library of Congress, as well as by the military in extremely confined spaces with troops inside, as a much healthier and better alternative thanks to its much more favorable no-effect exposure limits than other solutions - it is intended for fogging an entire room with people inside at high concentrations of Novec, without causing any suffocation:
http://solutions.3m.com/wps/portal/3M/en_US/3MNovec/Home/News/PressReleases/?PC_Z7_RJH9U523007AE0IUQAESTF39O3000000_univid=1273685423873
http://www.youtube.com/watch?v=KNSHsUWcplo

It's perfectly fine to have another opinion and I will always respect that. But spreading wrong claims based on no source at all is not really strengthening any position.

See above.
