I'm guessing this will still suck 10 times the power of the FPGA stuff...
But it sounds like you can get these cards cheap?
What's your total cost for putting together a rig, and how much Mhash/s are you targeting?
No, I haven't found them anywhere cheap; this is mostly just pie-in-the-sky dreaming. Like, what if I could interest a large corp or government in such a thing? It would be totally crazy, but they would probably pay quite a lot. For instance, here is a picture of a custom 4-card Nvidia box that was commissioned by some people:
According to the company that makes it, they are charging the customer ~$46k for 4 Tesla cards. Even if I were forced to use dual-slot cards (9 slots instead of 18), that would be pretty epic density. The only comparison is the Dell C410x box, and I have no idea how much that even costs. Linky: http://www.dell.com/us/business/p/poweredge-c410x/pd?~ck=anav
The Dell holds 16 double-wide cards, but to achieve that some of them are in the front and some are in the rear, which means hot air from the front GPUs is blown over the rear GPUs. That seems lame and prone to overheating.
Single-slot air-cooled cards would be cool because I could then fit 18 of them per chassis. But even better would be dual-slot monsters with waterblocks bolted on, because then they would only take up 1 slot each.
Water cooling would be the most epic challenge ever. PCIe slots are on 0.8" centers, and the closest thing I have found to that in a manifold is the following, with 10 ports on 1.5" centers:
So I would have to stack 2 of them next to each other, offset by about 3/4 of an inch. They are 16.5" wide, which just barely fits into a rackmount case. I would need 2 on the supply side and 2 on the return side, plus a pump and reservoir that could handle some serious flow.
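Quick sanity check on that spacing idea (my own arithmetic, assuming 18 slots and flexible tubing making up any slack): two 10-port manifolds on 1.5" centers, offset by 0.75", give a combined port every 0.75", which is close to but not exactly the 0.8" slot pitch, so the mismatch accumulates a bit across the row.

```python
# Two manifolds with ports on 1.5" centers, the second shifted 0.75",
# interleaved against 18 PCIe slots on 0.8" centers.
SLOT_PITCH = 0.8       # inches between PCIe slot centers
PORT_PITCH = 1.5       # inches between ports on one manifold
OFFSET = PORT_PITCH / 2  # second manifold shifted ~3/4"
N_SLOTS = 18

slots = [i * SLOT_PITCH for i in range(N_SLOTS)]

# combined port positions: 0, 0.75, 1.5, 2.25, ... (20 ports total)
ports = sorted([i * PORT_PITCH for i in range(10)] +
               [OFFSET + i * PORT_PITCH for i in range(10)])

# per-slot mismatch that the tubing would have to absorb
drift = [abs(s - p) for s, p in zip(slots, ports)]
print(f"worst-case port-to-slot offset: {max(drift):.2f} in")
```

So by the far end of the row a tube would have to reach sideways close to an inch, which short flexible runs should handle fine.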
If I estimate 375 watts maximum per card, a radiator with a 10 HP rating would be enough to cool them all, but it would take up at least 8U. If I ganged the systems together and ran 1 or 2 rads for several systems, I could probably squeeze the important bits into 5 or 6U per system, with an external reservoir, pump(s), and rad(s).

WC'ed Tesla cards would be epic, but would require a graphics-class backplane. The BP that I have is "server-class", i.e., it doesn't have 16 PCIe lanes per slot, only 4 or 8. Obviously, in the context of mining (or even password cracking and shit like that) we don't need much PCIe bandwidth, but then again we don't need "pro" graphics cards either!

I based most of my calculations on stuffing this thing full of WC'ed 6990s or 5970s, and came up with 7 kW real power draw (14 kW of PSU rating for efficiency), 20ish Ghash/s, and a little more than $2/Mhash/s. Total cost close to $25k. Obviously it would be cheaper to build several air-cooled rigs, but the idea here is DENSITY DENSITY DENSITY, and maybe a little bit of efficiency too if the server PSUs are 91% or better.
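Re-running the arithmetic above as a sketch (the per-card hashrate and the $25k total are my assumptions, and the $/Mhash figure swings a lot with both):

```python
# Back-of-envelope numbers for an 18-card water-cooled chassis.
# ASSUMPTIONS: ~375 W and ~1.1 Ghash/s per dual-GPU card (6990/5970 class),
# ~$25k total build cost. None of these are quotes.
N_CARDS = 18
WATTS_PER_CARD = 375
PSU_EFFICIENCY = 0.91          # "91% or better" server PSUs
MHASH_PER_CARD = 1100          # assumed per dual-GPU card
TOTAL_COST = 25_000            # assumed build cost, USD

real_draw_kw = N_CARDS * WATTS_PER_CARD / 1000   # DC-side draw
wall_draw_kw = real_draw_kw / PSU_EFFICIENCY     # draw at the wall
psu_rating_kw = 2 * real_draw_kw                 # size PSUs near 50% load

total_mhash = N_CARDS * MHASH_PER_CARD
cost_per_mhash = TOTAL_COST / total_mhash

print(f"{real_draw_kw:.2f} kW DC, {wall_draw_kw:.2f} kW at the wall, "
      f"{psu_rating_kw:.1f} kW of PSU rating")
print(f"{total_mhash / 1000:.1f} Ghash/s, ${cost_per_mhash:.2f}/Mhash/s")
```

With those assumptions the draw lands right around the 7 kW / 14 kW figures; the $/Mhash number comes out lower than $2, but it moves fast if the cards hash slower or cost more than assumed.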
Should I put it up on GLBSE?