Back to the OT if I may - it turns out those Swiftech universal GPU blocks end up taking up 2 slots as well.
I think I'm going to end up going with the full-coverage blocks just to keep things more open in the future.
DeathandTaxes, what fittings are you using to connect your 5970 blocks? I'm having trouble finding what I need..
|
|
|
The 32bit claim is false. I eventually got 5 GPUs working on 32-bit 11.04 Ubuntu Linux, without any hacking of the ATI drivers or APP SDK.
That's correct, I didn't catch that before. Both the 32bit and 64bit drivers should support at least 8 GPUs currently. The driver's interface with the OS layer may use 32bit or 64bit addressing depending on the OS, but the binary bits of the driver are likely identical otherwise. The newer Radeon HDs actually use a 256bit-wide memory interface to their own memory internally - and the exchange with the OS is all serial communication (PCIe is serial), so addressing the GPU components from the OS level is just about sending the right hardware address + commands. It could easily be a 16bit OS addressing any number of GPUs, as long as the driver handles interfacing that with the OS properly.
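That "right hardware address + commands" point is easy to see from the legacy PCI configuration-address layout (Configuration Mechanism #1, the 32-bit value written to I/O port 0xCF8). A minimal sketch - the helper name is mine, not from any driver:

```python
def pci_config_address(bus: int, device: int, function: int, register: int) -> int:
    """Pack bus/device/function/register into the 32-bit PCI
    configuration address (legacy Configuration Mechanism #1)."""
    assert 0 <= bus < 256       # 8-bit bus number
    assert 0 <= device < 32     # 5-bit device number
    assert 0 <= function < 8    # 3-bit function number
    return (1 << 31) | (bus << 16) | (device << 11) | (function << 8) | (register & 0xFC)

# 256 buses x 32 devices x 8 functions = 65536 addressable functions,
# regardless of whether the host OS itself is 16-, 32-, or 64-bit.
print(hex(pci_config_address(1, 0, 0, 0)))  # -> 0x80010000
```

The bitness of the OS never enters into it - device selection is just bit-packing into a fixed-width address.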
|
|
|
Well, would they corrode or something if I washed them with water and let them dry ![Tongue](https://bitcointalk.org/Smileys/default/tongue.gif) ? Probably not if you used distilled water, but mineral oil is hydrophobic - you'd need some kind of detergent to bind the oil, like dish soap. You wouldn't be able to just dunk the card either, as the water would seep between the PCB layers and make quite a mess. Come to think of it, the mineral oil probably will too. Any way you look at it, submerging your card in some kind of liquid isn't something you can do on a whim and expect to turn back from.
|
|
|
No, just feeding the trolls ![Tongue](https://bitcointalk.org/Smileys/default/tongue.gif) Your points make sense for anyone who is considering mining - it's far more profitable to buy and trade than to invest in new equipment right now. There's no reason someone already mining should stop if they're profitable at all - and there's no reason they can't buy *and* mine.
|
|
|
6X Icarus mining boards (12X XC6SLX150) @ 360MH/s per board, 2.16GH/s peak total.
Power consumption: 115W (max) at the wall.
Well done sir! That's 6.75x the hashing power of a 5830 at ~80% of its power draw. Pretty amazing stuff.
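A quick back-of-envelope check of those numbers - taking the implied 5830 baseline (~320 MH/s at ~144W, which is what the 6.75x and 80% figures work out to):

```python
icarus_mhs = 6 * 360                # 6 boards @ 360 MH/s = 2160 MH/s
icarus_watts = 115                  # at the wall

hd5830_mhs = icarus_mhs / 6.75      # ~320 MH/s (implied baseline)
hd5830_watts = icarus_watts / 0.80  # ~144 W (implied baseline)

# Efficiency in MH/s per watt for each:
print(round(icarus_mhs / icarus_watts, 2))   # -> 18.78
print(round(hd5830_mhs / hd5830_watts, 2))   # -> 2.23
```

So per watt the FPGA setup is roughly 8.4x as efficient as the GPU - which is exactly the 6.75 / 0.80 ratio.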
|
|
|
Mineral oil cooling for bitcoin. Anyone try it ?
I'm not sure anyone's actually taken the leap, but it's been talked about a few times. The general consensus is that it's not worth destroying your cards for the ePeen factor. Once you dunk them in mineral oil you're certainly not going to be able to resell them, your warranties are null and void, and they become rather difficult to clean up and use outside the mineral oil bath. There's no reason it wouldn't work, but the downsides tend to steer people away.
|
|
|
Turning a computer PSU into a bench power supply gives you a cheap, well-regulated supply with the disadvantage that the voltages aren't adjustable. They're perfect for working with microcontrollers and such, as you get nice, stable 3.3V, 5V, and 12V rails with usually a +/- 10% variance at most. A real scientific bench power supply is much more expensive and has the advantage of adjustable voltage, but is generally rated for relatively low amperage. The voltages are spot on, with +/- 1% variance on a good supply. That precision is attained by basically ignoring efficiency - they do whatever it takes to give you a stable, clean output. So, unless you have a bunch sitting around like you do, and don't need to care about efficiency, it's generally not worthwhile to go in the other direction ![Wink](https://bitcointalk.org/Smileys/default/wink.gif)
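To make those tolerances concrete, here's what the acceptable window looks like on each rail at +/-10% (loose ATX-style regulation) vs +/-1% (a good bench supply) - just arithmetic, no spec numbers beyond the percentages above:

```python
def window(nominal: float, tolerance: float) -> tuple[float, float]:
    """Acceptable (min, max) voltage for a rail at a given tolerance."""
    return (round(nominal * (1 - tolerance), 3),
            round(nominal * (1 + tolerance), 3))

for rail in (3.3, 5.0, 12.0):
    print(rail, "V  +/-10%:", window(rail, 0.10), "  +/-1%:", window(rail, 0.01))
```

On the 12V rail that's the difference between "anywhere from 10.8V to 13.2V" and "11.88V to 12.12V" - which is why sensitive analog work wants the bench supply.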
|
|
|
I'll take the group of 5 for 125btc as long as you have a copy of the receipt in case I need to RMA.
PM sent.
|
|
|
Would you ship to Thailand for free? ![Tongue](https://bitcointalk.org/Smileys/default/tongue.gif) Free ground shipping - continental US only.
Thailand isn't exactly continental US is it? ![Tongue](https://bitcointalk.org/Smileys/default/tongue.gif)
|
|
|
Wasn't speaking of length, but rather speed. x16 maxes at 75W with 25W required at startup and x8-x1 max at 25W.
Not quite - the PCIe spec ties available power to the slot type and configuration, not the physical connector length alone. Per the CEM spec, any PCIe slot supplies up to 25W to a full-height card, and x16 slots can deliver up to 75W to graphics cards after configuration (more via auxiliary power connectors). Legacy PCI is limited to 25W. Beyond power, the only real difference between 1x and 16x is the number of available data lanes.
|
|
|
job's a carrot's second cousin.
Sometimes I think you're just making up half these colloquialisms on the spot ![Tongue](https://bitcointalk.org/Smileys/default/tongue.gif) Anyway - submersion is a viable solution. The trick to keeping the VRM/RAM/GPU from overheating is big fat heatsinks on the chips - they work the same submerged as they do in air - plus an (also submerged) fan pointed at them to push fluid around. In the same respect, the stock HSF setups would work, but you'd burn out the fans pretty quickly moving anything more viscous than air. Of course, this is still combined with pumping all the fluid through an external radiator. I think running a few tubes around and spot-cooling is a little less messy, though ![Wink](https://bitcointalk.org/Smileys/default/wink.gif)
|
|
|
Even if the driver is crippled by laziness, it should be possible to get around it in Linux, though it might take some effort. For example, the more-than-one X session approach may work, or it may be possible to use a Xen virtualized system that appears as a fully-unique system with just the 9th+ GPUs. They probably set the limit figuring it gave enough room and haven't put much thought into it since, sorta like the old Y2K 'issues'.
This is more akin to the "640kb ought to be enough for anybody" line famously (if apocryphally) attributed to Bill Gates. Incidentally, it was only recently that the limit was increased from just four GPUs - they should have realized then that 8 certainly wouldn't be enough for long.
|
|
|
jjiimm_64, when you add the 5th card what happens?
Does aticonfig see the card at all, or seem to just ignore it? Does 'lspci' list all 5 cards?
I'm curious if firing up a second X session tied only to the 5th card may be able to get around the problem..
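Before fighting with a second X session, it's worth confirming the kernel even enumerates all five cards. A sketch of the kind of check I mean - the `lspci` lines below are made up for illustration (on a real box you'd pipe in actual `lspci` output):

```python
# Count VGA controllers in (made-up) `lspci` output. On a real machine,
# feed this the real output, e.g. subprocess.check_output(["lspci"]).
sample_lspci = """\
01:00.0 VGA compatible controller: ATI Technologies Inc Cypress XT [Radeon HD 5870]
02:00.0 VGA compatible controller: ATI Technologies Inc Cypress XT [Radeon HD 5870]
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B
04:00.0 VGA compatible controller: ATI Technologies Inc Cypress XT [Radeon HD 5870]
"""

gpus = [line for line in sample_lspci.splitlines()
        if "VGA compatible controller" in line]
print(len(gpus))  # -> 3
```

If the count here matches the physical card count but aticonfig still ignores one, the problem is in the driver rather than PCI enumeration - which is when the second-X-session trick becomes worth trying.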
|
|
|
Speculation belongs in the speculation thread. FUD doesn't belong here at all. The simple fact is it's pure capitalism - if you're making money, you'd be a fool to stop making money, even if it's just a few cents. Profit is always worthwhile as long as the effort required doesn't exceed it. If you're not making money, cut your losses and fire it up again when you are. The market adjusts accordingly. The only thing the 'newbie traders' on the forum need to know that they might not already know is that you will be constantly discouraged by fools looking to vampire your share ![Tongue](https://bitcointalk.org/Smileys/default/tongue.gif)
|
|
|
If it's the same issue as with Windows, it's a hardware limit based on resources.
Both Windows and Linux are limited to 8 GPUs on 64bit and 4 GPUs on 32bit.
Either of you have a reference for that? It sounds like a BS answer made up by ATI developers who didn't want to try harder. There is no hardware limitation beyond the number of physically available slots. A software limitation I could see, as a 32bit driver could plausibly run out of address space trying to deal with more than 32GB of VRAM ![Cheesy](https://bitcointalk.org/Smileys/default/cheesy.gif)
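Back-of-envelope on that address-space point, assuming a hypothetical 4GiB per card:

```python
ADDRESS_SPACE_32BIT = 2 ** 32    # a flat 32-bit address space: 4 GiB
vram_per_card = 4 * 2 ** 30      # hypothetical 4 GiB card

# How many cards' worth of VRAM could a 32-bit address space cover
# if a driver naively tried to map it all at once?
print(ADDRESS_SPACE_32BIT // vram_per_card)  # -> 1
```

In practice drivers map small apertures rather than all of VRAM at once - which is the only reason 32-bit drivers can handle multiple big cards at all.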
|
|
|
So far I haven't heard any solid take on whether or not more than 8 GPUs are possible using the Linux Catalyst driver - some say it's 8, some say it's 16, some say it's unlimited.. Are you asking because you're trying and can't get it to work, or just looking to see if it's possible? If it's the former, let me be the first to call you insane ![Wink](https://bitcointalk.org/Smileys/default/wink.gif) If it's the latter, I'm not sure there's a definite answer. The only way to hit that limit is more than 4 dual-GPU cards on a motherboard with more than 4 PCIe slots, and both are generally outside the reach of miners' pockets ![Tongue](https://bitcointalk.org/Smileys/default/tongue.gif)
|
|
|
Deltas are beasts and are meant to have long lifetimes under full load.
Keep them clean, and keep them lubricated, and they'll last you next to forever.
There's usually a lubricant port underneath the sticker on the hub - just squeeze in a drop every 3-6 months or so. You can probably go a lot longer, but hey, it's a cheap way to keep 'em spinning for a long, long time.
|
|
|
.. or for fixing GPU RAM heatsinks that otherwise fall off (putting the glue on the sides, obviously, as it's also a very poor thermal conductor).
Careful with that one, hot glue tends to melt when it gets hot ![Tongue](https://bitcointalk.org/Smileys/default/tongue.gif) I'd agree it's uber useful in the electronics world - I don't know how many PSUs I've cracked open to find what looks like someone just poured it in to keep everything in place. For the heatsink problem, I'd highly recommend thermal adhesive instead, as that's kinda what it's meant for ![Wink](https://bitcointalk.org/Smileys/default/wink.gif)
|
|
|
|