jimmothy
|
|
July 16, 2014, 09:54:16 AM |
|
It is all that matters because you are only calculating the needed airflow for your mining room, not the unit.
Is the needed airflow in the mining room not the same as the airflow through the miners? Those calculations are done by the engineers designing the unit and are therefore irrelevant for the average miner. Spondoolies have done them and designed the unit accordingly.
It's not irrelevant, because if you want max performance you need cooler chips. The only two variables miners have control over are airflow and air temperature. For some people it's just not possible to cool the chips down to ~90C if the dT between the chips and the air is 40-50C (without an AC).
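The airflow/temperature trade-off being argued here can be sanity-checked with the standard sensible-heat relation: airflow needed scales with heat load divided by the allowed air temperature rise. A minimal sketch, assuming sea-level air properties; the 2.5 kW and 15 C figures are illustrative, not any unit's specs:

```python
# How much airflow does it take to carry away a given heat load if the
# exhaust air may only be a fixed number of degrees warmer than intake?
# Assumes standard sea-level air; all wattages/temperatures illustrative.

CFM_TO_M3S = 0.000471947   # cubic feet per minute -> cubic metres per second
AIR_DENSITY = 1.2          # kg/m^3, air at ~20 C, sea level
AIR_CP = 1005.0            # J/(kg*K), specific heat of air

def required_cfm(heat_watts, air_rise_c):
    """CFM needed so exhaust air is only air_rise_c warmer than intake."""
    mass_flow = heat_watts / (AIR_CP * air_rise_c)       # kg/s of air
    return mass_flow / AIR_DENSITY / CFM_TO_M3S          # convert to CFM

# A 2.5 kW unit allowing a 15 C air rise needs roughly 290 CFM;
# halving the allowed rise doubles the required airflow.
cfm_needed = required_cfm(2500, 15)
```

This is why low ambient temperature and high airflow are interchangeable up to a point: either one widens the margin between air temperature and the chips' limit.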
|
|
|
|
Collider
|
|
July 16, 2014, 10:01:39 AM |
|
The chips DO NOT NEED to be that "cold".
If you calculate the cost of using 20-30% additional electricity plus the cost of buying an AC, you will quickly discover that it is just not worth it for a possible 5% performance increase.
Either you already have very low ambient temperatures, in which case you can settle for lower airflow or increase the airflow for better performance, or you don't.
It is not economically viable to cool the air down in your mining room if you can get enough airflow.
|
|
|
|
Searing
Copper Member
Legendary
Offline
Activity: 2898
Merit: 1465
Clueless!
|
|
July 16, 2014, 10:16:16 AM Last edit: July 16, 2014, 10:39:02 AM by Searing |
|
The chips DO NOT NEED to be that "cold".
If you calculate the cost of using 20-30% additional electricity plus the cost of buying an AC, you will quickly discover that it is just not worth it for a possible 5% performance increase.
Either you already have very low ambient temperatures, in which case you can settle for lower airflow or increase the airflow for better performance, or you don't.
It is not economically viable to cool the air down in your mining room if you can get enough airflow.
Ok, I'll try to do it your way... use something like this to get air moving over the unit, so to speak: http://www.menards.com/main/fans/commercial-fans/stanley-blower-fan-pivoting-utility-fan-with-grounded-outlets/p-1393859-c-12728.htm
So what, put a duct behind the SP30 and just duct it out like a clothes dryer to the outside of the house (through a window?), that kind of thing? Again, with like 4500 watts of heat being made in my basement, I still think it's likely I'll be cooling the damn room down with the A/C window unit while blowing hot air out as fast as possible through the duct/window, at least on the SP30 unit, with other fans (plus the one above) around the basement to keep the air moving out.
Again, I can see 10-15 days in August/September where I may have no choice... worst-case scenario, I hope folks are correct about just moving heat out, but somehow I expect that for at least some days (say 15 or so) I'm gonna be closing the door, pumping A/C into the room, and blowing heat out as fast as possible just to keep the SP30 running, more or less. This is why I'm gonna go overkill with the A/C unit: if it turns out to be overkill, I'd rather have it in place if I need it than have to mess with it after the miners trickle in. Again, I hope I'm wrong, but seeing as I really, really bit off more than I can chew with a Titan (Aug) and an SP30 (Sept)...
On a plus note, come the end of September or October I can use the heat for the house (which is why I did not host; also, hosting costs as much as electricity, before shipping/setup costs etc.), so I went with this. Anyway, that's why I got myself into this "somewhat interesting situation". Good to have goals, I guess. I'll put photos up so everyone can ROFL at my kludged-up, hopefully working setup when it's all done.
By the by, Spondoolies-Tech, do you have any advice for me... besides that I'm out of my frigging mind for not putting both the Titan and the SP30 in a data center?
I'll take any abuse for some advice... (I'll likely get a reply like "don't give us your name or we will void your warranty before we ship you the SP30, as there is no hope"). FML. Searing. P.S. Here are some pics of my unit and the large-block basement walls, and also a pic of how my KnC Jupiter arrived full of rain water, heh: http://lostgonzo.imgur.com/
|
Old-style legacy plug & play BBS system. Get it from www.synchro.net. Updated 1/1/2021. It also works with Windows 10 (and likely 11) and allows 16-bit DOS game doors on the same Win 10 machine in multi-node! Five-minute install, and it uninstalls just as fast if you simply want to look it over. Freeware! Full BBS system! It is a frigging hoot! :)
|
|
|
Collider
|
|
July 16, 2014, 10:26:22 AM |
|
Get a hot-air extractor on one side and a fan to push air in on the other. Just look at some growshop websites to get the idea; they have everything on hand needed for ventilating hot rooms.
|
|
|
|
|
RoadStress
Legendary
Offline
Activity: 1904
Merit: 1007
|
|
July 16, 2014, 11:48:28 AM |
|
400 CFM is really not much for 2.6 kW.
Way ahead of you, jimmothy.
SyRenity, please address the issue of SP30 heat dissipation. How do you plan to dissipate 2500W from that small case?
Answer from the team: When designing a cooling solution, the most important factor is the heat density, rather than the performance or the total amount of dissipated heat. The second factor is the maximum allowed ASIC Tj.
You are right, we are going to keep the same air-flow cooling mechanism for the SP30 as well. We are going to use a custom heat sink, designed and manufactured for the ASICs' heat density and Tj, with 80mm fans moving a total of ~300 CFM.
The heat sink will be made of 6061-T6 aluminum (K = 167 W/m·K) with a copper base attached to the ASICs themselves. The entire design is backed by thermal-analysis simulations that we perform as part of the mechanical and electrical design process.
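The Tj arithmetic behind an answer like this is simple in outline: junction temperature is ambient air temperature plus chip power times the total chip-to-air thermal resistance (copper spreader plus heat sink). A sketch with made-up per-chip numbers, since Spondoolies' actual resistances and per-chip wattage aren't public:

```python
# Rough junction-temperature estimate: Tj = T_ambient + P_chip * R_total,
# where R_total (C/W) sums every thermal resistance between die and air.
# All numeric values here are illustrative, not the SP30's real figures.

def junction_temp(ambient_c, chip_watts, r_copper_cw, r_sink_cw):
    """Steady-state junction temperature for one chip."""
    r_total = r_copper_cw + r_sink_cw      # series thermal resistances, C/W
    return ambient_c + chip_watts * r_total

# e.g. a hypothetical 40 W chip, 0.05 C/W copper base, 0.80 C/W sink-to-air,
# in 30 C intake air lands around 64 C at the junction.
tj = junction_temp(30.0, 40.0, 0.05, 0.80)
```

The same formula shows why airflow matters twice: more CFM lowers both the effective sink-to-air resistance and the local air temperature around downstream chips.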
|
|
|
|
jimmothy
|
|
July 16, 2014, 12:14:06 PM |
|
@roadstress
Not sure what you're trying to say.
Correct me if I'm wrong, but didn't they basically sacrifice quietness and/or coolness in order to get a heat density about twice what most modern DCs can handle?
Again, why not double the size of the heatsinks and use quieter or more powerful 120mm fans?
|
|
|
|
klondike_bar
Legendary
Offline
Activity: 2128
Merit: 1005
ASIC Wannabe
|
|
July 16, 2014, 12:23:14 PM |
|
@roadstress
Not sure what you're trying to say.
Correct me if I'm wrong, but didn't they basically sacrifice quietness and/or coolness in order to get a heat density about twice what most modern DCs can handle?
Again, why not double the size of the heatsinks and use quieter or more powerful 120mm fans?
A few reasons in my mind: 1) For those who can handle the density, a small unit is terrific (and I think in 2015 datacenters will start designing new rooms/racks with bitcoin mining in mind). 2) It saves a lot on heatsinks, a larger case, and shipping costs. 3) With immersion cooling on the horizon, it makes sense to understand how efficiently the units can be cooled with the least aluminum and airflow, as chip density in immersion will be a priority.
|
|
|
|
jimmothy
|
|
July 16, 2014, 12:39:45 PM |
|
A few reasons in my mind: 1) For those who can handle the density, a small unit is terrific (and I think in 2015 datacenters will start designing new rooms/racks with bitcoin mining in mind). 2) It saves a lot on heatsinks, a larger case, and shipping costs. 3) With immersion cooling on the horizon, it makes sense to understand how efficiently the units can be cooled with the least aluminum and airflow, as chip density in immersion will be a priority.
1. 40 kW per rack requires extreme (and extremely expensive) cooling. I think most datacenters being built in 2015+ will use immersion cooling.
2. Shipping costs might be a bit more, but extruded aluminum heatsinks are dirt cheap, as are 120mm fans.
3. I think it's about time to just make the switch to immersion cooling. There's really no competition: even with a 40C dT, air cooling still can't match the density or efficiency of immersion cooling.
|
|
|
|
RoadStress
Legendary
Offline
Activity: 1904
Merit: 1007
|
|
July 16, 2014, 12:51:31 PM |
|
@roadstress
Not sure what you're trying to say.
Correct me if I'm wrong, but didn't they basically sacrifice quietness and/or coolness in order to get a heat density about twice what most modern DCs can handle?
Again, why not double the size of the heatsinks and use quieter or more powerful 120mm fans?
I'm saying that you are reviving a topic which has already been covered, as always. In my view they traded heavier units that cost more (because of the bigger radiator plus higher shipping fees) for a more compact case with a few more dB. I don't think they sacrificed coolness, considering the job they did on the SP10. While the SP10 hashes a bit better in a cooler environment, the performance increase is not big enough to be worth the cost of better cooling in the form of a bigger case with bigger radiators, so I am sure they will achieve the same balance on the SP30 too. Just wait a few more days and you will see.
|
|
|
|
jtoomim
|
|
July 16, 2014, 03:54:24 PM |
|
1) For those who can handle the density, a small unit is terrific (and I think in 2015 datacenters will start designing new rooms/racks with bitcoin mining in mind). 2) It saves a lot on heatsinks, a larger case, and shipping costs. 3) With immersion cooling on the horizon, it makes sense to understand how efficiently the units can be cooled with the least aluminum and airflow, as chip density in immersion will be a priority.
1) Absolutely. As a datacenter developer (and an SP30 and SP10 customer), I'm very happy about the SP?0 power density.
1. 40 kW per rack requires extreme (and extremely expensive) cooling. I think most datacenters being built in 2015+ will use immersion cooling. 3. I think it's about time to just make the switch to immersion cooling. There's really no competition: even with a 40C dT, air cooling still can't match the density or efficiency of immersion cooling.
1. I don't think so. If you do it the traditional datacenter way, with in-aisle air conditioners or water-cooling loops with chillers on the roof, then yes, it's expensive. I think there's a better way; I should know in late August whether it works. It will likely be cheaper than immersion cooling. 3. I expect my DC's PUE to be under 1.05 (not including transformer, substation, and voltage-drop losses). That might be higher than Allied Control's claimed 1.01, but not by much. (Their 1.01 figure certainly doesn't include any of the electrical losses I mentioned, since a transformer's efficiency is essentially never above 99%.)
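PUE figures like the 1.05 and 1.01 being compared here are just a ratio: total facility power divided by power delivered to the IT equipment. A sketch with illustrative numbers (not anyone's actual facility data):

```python
# Power Usage Effectiveness: total facility power / IT equipment power.
# A perfect facility scores 1.0; cooling and other overhead push it up.
# The kW figures below are illustrative, not a real datacenter's numbers.

def pue(it_kw, cooling_kw, other_overhead_kw):
    """PUE for a facility with the given load breakdown (all in kW)."""
    total_kw = it_kw + cooling_kw + other_overhead_kw
    return total_kw / it_kw

# e.g. 1 MW of miners, 40 kW of cooling, 10 kW of lighting/controls -> 1.05
example_pue = pue(1000.0, 40.0, 10.0)
```

Note the post's caveat is exactly where such claims get slippery: transformer, substation, and voltage-drop losses sit outside this ratio unless you deliberately count them in the total.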
|
Hosting bitcoin miners for $65 to $80/kW/month on clean, cheap hydro power. http://Toom.im
|
|
|
Biodom
Legendary
Offline
Activity: 3934
Merit: 4453
|
|
July 16, 2014, 04:04:14 PM |
|
1) For those who can handle the density, a small unit is terrific (and I think in 2015 datacenters will start designing new rooms/racks with bitcoin mining in mind). 2) It saves a lot on heatsinks, a larger case, and shipping costs. 3) With immersion cooling on the horizon, it makes sense to understand how efficiently the units can be cooled with the least aluminum and airflow, as chip density in immersion will be a priority.
1) Absolutely. As a datacenter developer (and an SP30 and SP10 customer), I'm very happy about the SP?0 power density.
1. 40 kW per rack requires extreme (and extremely expensive) cooling. I think most datacenters being built in 2015+ will use immersion cooling. 3. I think it's about time to just make the switch to immersion cooling. There's really no competition: even with a 40C dT, air cooling still can't match the density or efficiency of immersion cooling.
1. I don't think so. If you do it the traditional datacenter way, with in-aisle air conditioners or water-cooling loops with chillers on the roof, then yes, it's expensive. I think there's a better way; I should know in late August whether it works. It will likely be cheaper than immersion cooling. 3. I expect my DC's PUE to be under 1.05 (not including transformer, substation, and voltage-drop losses). That might be higher than Allied Control's claimed 1.01, but not by much. (Their 1.01 figure certainly doesn't include any of the electrical losses I mentioned, since a transformer's efficiency is essentially never above 99%.)
Not to put pressure on you, but is that datacenter coming? What's causing the holdup? Smaller retail spaces in Houston are renovated in a day or two max, as long as the square footage is profitable... Is the energy commitment causing the holdup? Sorry for putting you on the spot, but you do talk about your upcoming facility here...
|
|
|
|
murdof
|
|
July 16, 2014, 08:27:49 PM |
|
Can anyone PM me or tell me which rack ears work with the SP10 and are available on Amazon or any other EU site (for lower shipping costs)?
Thanks
|
|
|
|
jimmothy
|
|
July 16, 2014, 09:58:18 PM |
|
1. 40 kW per rack requires extreme (and extremely expensive) cooling. I think most datacenters being built in 2015+ will use immersion cooling. 3. I think it's about time to just make the switch to immersion cooling. There's really no competition: even with a 40C dT, air cooling still can't match the density or efficiency of immersion cooling.
1. I don't think so. If you do it the traditional datacenter way, with in-aisle air conditioners or water-cooling loops with chillers on the roof, then yes, it's expensive. I think there's a better way; I should know in late August whether it works. It will likely be cheaper than immersion cooling. 3. I expect my DC's PUE to be under 1.05 (not including transformer, substation, and voltage-drop losses). That might be higher than Allied Control's claimed 1.01, but not by much. (Their 1.01 figure certainly doesn't include any of the electrical losses I mentioned, since a transformer's efficiency is essentially never above 99%.)
I'm interested to know more about your DC plans. How will it be cooled? Does the 1.05 PUE include the four 15 W fans per SP30? What is the infrastructure cost in $/W (excluding the cost of the ASICs)? Does your fancy cooling method require extra cash or space? What's the total capacity of the DC, in MW and square feet?
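The fan question matters more than it looks: whether a miner's internal fans count as IT load or as cooling overhead shifts a sub-1.05 PUE claim noticeably. A quick sketch using the figures quoted in the thread (four 15 W fans on a 2500 W unit; treat these as assumptions, not measurements):

```python
# Do internal miner fans count as IT load or as cooling overhead?
# Using the thread's figures: four 15 W fans on a 2500 W wall-power unit.

FANS_W = 4 * 15        # total fan power per unit, W (assumed figure)
UNIT_W = 2500          # wall power per unit, fans included (assumed figure)

# Share of each unit's wall power that goes to its own fans (~2.4%).
fan_fraction = FANS_W / UNIT_W

# If the fans are reclassified as cooling overhead instead of IT load,
# they alone account for a PUE of roughly 1.025 before any facility cooling.
pue_fans_as_overhead = UNIT_W / (UNIT_W - FANS_W)
```

So a facility quoting 1.05 with fans counted as IT load and one quoting 1.03 with fans counted as overhead could be nearly identical in practice.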
|
|
|
|
jtoomim
|
|
July 16, 2014, 10:26:28 PM |
|
|
Hosting bitcoin miners for $65 to $80/kW/month on clean, cheap hydro power. http://Toom.im
|
|
|
opentoe
Legendary
Offline
Activity: 1274
Merit: 1000
Personal text my ass....
|
|
July 17, 2014, 12:28:12 AM |
|
I thought SP-Tech would have been the first to get 28nm out the door. Shocking to see those Antminers already shipping. I don't think their efficiency is as good, but if you ordered a whole bunch of them you'd have them in just a few days. Let's just pass on the SP50, go right to the SP100, and skip home mining completely.
|
|
|
|
jtoomim
|
|
July 17, 2014, 12:36:40 AM |
|
I thought SP-Tech would have been the first to get 28nm out the door.
You mean besides KnC, HashFast, CoinTerra, Coincraft, and Black Arrow? Spondoolies isn't aiming to be first. They're aiming to be best.
|
Hosting bitcoin miners for $65 to $80/kW/month on clean, cheap hydro power. http://Toom.im
|
|
|
Zelek Uther
|
|
July 17, 2014, 12:51:48 AM |
|
Can anyone PM me or tell me which rack ears work with the SP10 and are available on Amazon or any other EU site (for lower shipping costs)?
Thanks
The SP10 rack ears are custom; email Spond and they will sell you some.
|
Run a Bitcoin node, support the network.
|
|
|
|
|
|