DiabloD3
Legendary
Offline
Activity: 1162
Merit: 1000
DiabloMiner author
|
|
April 17, 2012, 12:23:03 AM |
|
Do you guys know what the limit of GPUs that a typical mobo (BIOS) can initialize is? The fastra2 had 13 GPUs and needed a custom BIOS to boot. I have tried to look into this virtualization business, and damn, there are few good howtos available. ESXi could have been an easy solution, but 5970s won't play nice with it.
Depends on the CPU arch's address space (hint: 64-bit processors, internally, do not have 64 bits of address space; it's usually something like 40 or 48 bits). It also depends on how the BIOS allocates space for the PCI-E GART window; some blindly select huge address spaces no matter what the device will actually use, leaving you enough room for 8-10 devices. My BIOS has an option I can enable that allows the use of a 64-bit address table to map devices, instead of the usual 32-bit table. However, it is an option I have never seen on any other board, and I do not yet know whether it works correctly, or even which operating systems support it. When I turned it on and attempted to boot BAMT, the screen filled with corruption and I didn't get far. Similarly, Windows refused to install, citing some unknown hardware capabilities. I think a 64-bit flavor of Linux should work, though. BAMT is worthless shit if it isn't a 64-bit Debian build already. You can't even get more than 4 cards working in 32-bit Linux, sometimes only 3.
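As a quick sanity check of the address-width point above (not from the post; a sketch assuming a Linux host), you can read the physical/virtual address widths the CPU actually implements:

```shell
# Print the physical/virtual address widths the CPU implements.
# On most current x86-64 parts this reports something like
# "address sizes : 48 bits physical, 48 bits virtual" -- not 64.
grep -m1 'address sizes' /proc/cpuinfo || echo "address sizes not reported"
```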
|
|
|
|
rjk (OP)
Sr. Member
Offline
Activity: 448
Merit: 250
1ngldh
|
|
April 17, 2012, 12:25:59 AM |
|
BAMT is worthless shit if it isn't a 64 bit Debian build already. You can't even get more than 4 cards working in 32 bit Linux, sometimes only 3.
I *think* (not sure though) that the current BAMT is 64-bit, but I had been trying with the previous version which wasn't.
|
|
|
|
DiabloD3
Legendary
Offline
Activity: 1162
Merit: 1000
DiabloMiner author
|
|
April 17, 2012, 12:27:25 AM |
|
BAMT is worthless shit if it isn't a 64 bit Debian build already. You can't even get more than 4 cards working in 32 bit Linux, sometimes only 3.
I *think* (not sure though) that the current BAMT is 64-bit, but I had been trying with the previous version which wasn't. Open xterm, uname -a
|
|
|
|
TheHarbinger
Sr. Member
Offline
Activity: 378
Merit: 250
Why is it so damn hot in here?
|
|
April 17, 2012, 12:40:00 AM |
|
BAMT is worthless shit if it isn't a 64 bit Debian build already. You can't even get more than 4 cards working in 32 bit Linux, sometimes only 3.
I *think* (not sure though) that the current BAMT is 64-bit, but I had been trying with the previous version which wasn't. Too lazy to SSH in to check, but I run 5- and 6-card rigs on BAMT 0.5c.
|
12Um6jfDE7q6crm1s6tSksMvda8s1hZ3Vj
|
|
|
JL421
|
|
April 17, 2012, 02:43:33 AM |
|
BAMT is worthless shit if it isn't a 64 bit Debian build already. You can't even get more than 4 cards working in 32 bit Linux, sometimes only 3.
I have 7 GPU cores on my 32 bit BAMT 0.5c install...
|
|
|
|
tnkflx
|
|
April 17, 2012, 05:55:21 AM |
|
Do you guys know what the limit of GPUs that a typical mobo (BIOS) can initialize is? The fastra2 had 13 GPUs and needed a custom BIOS to boot. I have tried to look into this virtualization business, and damn, there are few good howtos available. ESXi could have been an easy solution, but 5970s won't play nice with it.
This depends on the motherboard... For instance, my GA-990FXA-UD7 wouldn't even boot with 8 GPUs until I started disabling some devices in the BIOS (USB3, eSata, etc...). After a couple of tries, everything works and this box has been running stable for several months now.
|
| Operating electrum.be & us.electrum.be |
|
|
|
tnkflx
|
|
April 17, 2012, 05:55:48 AM |
|
BAMT is worthless shit if it isn't a 64 bit Debian build already. You can't even get more than 4 cards working in 32 bit Linux, sometimes only 3.
I *think* (not sure though) that the current BAMT is 64-bit, but I had been trying with the previous version which wasn't. Nope, BAMT is still 32-bit.
|
| Operating electrum.be & us.electrum.be |
|
|
|
DiabloD3
Legendary
Offline
Activity: 1162
Merit: 1000
DiabloMiner author
|
|
April 17, 2012, 08:56:14 AM |
|
Do you guys know what the limit of GPUs that a typical mobo (BIOS) can initialize is? The fastra2 had 13 GPUs and needed a custom BIOS to boot. I have tried to look into this virtualization business, and damn, there are few good howtos available. ESXi could have been an easy solution, but 5970s won't play nice with it.
This depends on the motherboard... For instance, my GA-990FXA-UD7 wouldn't even boot with 8 GPUs until I started disabling some devices in the BIOS (USB3, eSATA, etc...). After a couple of tries, everything works and this box has been running stable for several months now. Yeah, you need to disable as many devices filling up address space blocks as possible. You might be able to get more than 4 on 32-bit Linux just by doing that, and also by putting less memory in it (but it's hard to get DDR3 sticks smaller than 1 GB that aren't crap or volted wrong, since the DDR3 manufacturing process targeted chip sizes for 1 GB-minimum sticks).
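One way to see which address-space blocks the BIOS actually handed out to devices (a sketch, not from the thread; assumes a Linux host, and some entries are hidden unless you run it as root):

```shell
# /proc/iomem lists the physical address ranges the BIOS and kernel
# assigned to each device; the "PCI Bus" windows below 4 GB are what
# onboard devices and GPU BARs compete for on a 32-bit layout.
grep -i 'pci' /proc/iomem || echo "no PCI windows listed (try as root)"
```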
|
|
|
|
DeathAndTaxes
Donator
Legendary
Offline
Activity: 1218
Merit: 1079
Gerald Davis
|
|
April 17, 2012, 01:28:06 PM |
|
This depends on the motherboard... For instance, my GA-990FXA-UD7 wouldn't even boot with 8 GPUs until I started disabling some devices in the BIOS (USB3, eSATA, etc...). After a couple of tries, everything works and this box has been running stable for several months now. That is a good point. The BIOS maps all onboard functions into the virtual memory space, and most are not anticipating 8 GPUs. Good idea to turn off everything you don't need (it also saves some watts). From the NVidia cluster blog it looks like BIOS designers are horribly inefficient with memory space, allocating giant chunks to each device. Still, more than 8 GPUs requires more "exotic" solutions. NVidia makes it easier since GPUs are decoupled from xorg and have no driver-level limit; AMD's driver-level limit and tight coupling make virtualization a requirement.

GPU limitations:
- BIOS limitation - the NVidia cluster with 13 GPUs required a custom BIOS built by the manufacturer
- Driver limit - AMD drivers have a hardcoded limit of 8 (NVidia has no driver-level limit but doesn't support >8 due to other limits)
- Virtual space limit - >8 GPUs likely mandates an x64 OS
- Kernel limit - a large number of addressable devices may require changes to the kernel
- xorg limit - xorg can't handle >8 GPUs (NVidia bypasses this by making its drivers independent of xorg)

So it is "hard" with NVidia but downright impossible with AMD. Virtualization should work though... maybe.
|
|
|
|
ice_chill
|
|
April 17, 2012, 01:51:22 PM |
|
Apparently the ASUS Crosshair V (990FX) comes with AMD's version of VT-d hardware virtualization. They call it IOMMU. Would be interesting to see someone try it.
|
|
|
|
DiabloD3
Legendary
Offline
Activity: 1162
Merit: 1000
DiabloMiner author
|
|
April 17, 2012, 02:31:14 PM |
|
Apparently the ASUS Crosshair V (990FX) comes with AMD's version of VT-d hardware virtualization. They call it IOMMU. Would be interesting to see someone try it.
All FX-series chipsets have it; none of the rest do. That includes 790FX, 890FX, and 990FX. It's really a downscaled version of their server chipset, not a member of their desktop series. IOMMU is the literal description of what it is; both Intel's and AMD's (plus Sun's and IBM's) are called that. All it does is allow two-way remapping of hardware interrupts and device-side memory-map remapping (such as unified remaps of device->system RAM and CPU->device RAM). If you're going to do any virtualization of suitably complex devices, you need an IOMMU. This includes, but isn't limited to, GPUs, higher-end NICs (typically 10gbit and up), high-end RAID controllers, etc.
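To check whether the IOMMU is actually found and enabled (a sketch, not from the post; assumes a Linux host, and the exact kernel-log strings vary by vendor and kernel version; `dmesg` may need root on locked-down systems):

```shell
# AMD-Vi (AMD) or DMAR/VT-d (Intel) lines in the kernel log indicate
# the IOMMU was detected and enabled; silence usually means it is
# disabled in the BIOS or unsupported by the chipset.
dmesg 2>/dev/null | grep -i -E 'amd-vi|dmar|iommu' || echo "no IOMMU messages found"
```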
|
|
|
|
twoBitBasher
Member
Offline
Activity: 85
Merit: 10
|
|
April 18, 2012, 10:49:37 AM |
|
Have you found any good HOWTOs on setting up the virtualization? I guess the options are:

VMware ESXi: the easiest to set up; just install, select devices for passthrough, and install the guest OS. It might even boot an existing BAMT from a USB stick (pass through the USB controller). Free to test! Any dual-GPU card is a no-go, though.

Citrix XenServer: has GPU passthrough support in a super-expensive licensed copy... so no way to tell if it is any easier to implement than the open-source project. I think some documentation would be available, but one might still need to be a Linux guru.

Open-source Xen: according to the Xen wiki, AMD passthrough works OOB. However, some GPU BIOS extracting might be involved, and all kinds of hacking before you even get to the point of attempting a first boot. On the upside, some reports of the 5970 working with this solution are floating around the net.

My GoogleFu is apparently weak, as I have not found a simple and thorough enough guide on how to actually tame this Xen-beast...
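For the open-source Xen route, the moving parts look roughly like this (a sketch based on the Xen wiki's PCI passthrough approach, not something tested in this thread; the guest name, file name, and the 01:00.0 BDF address are made-up examples):

```shell
# Write a minimal, hypothetical Xen guest config with one GPU passed
# through. Find the real BDF address of your card with lspci.
cat > miner.cfg <<'EOF'
name   = "bamt-guest"
memory = 2048
vcpus  = 2
# PCI passthrough: hand the listed device(s) to this guest.
pci    = [ '01:00.0' ]
EOF

# On the real dom0 you would first hide the GPU from the host:
#   modprobe xen-pciback
#   xl pci-assignable-add 01:00.0
# and then boot the guest:
#   xl create miner.cfg
cat miner.cfg
```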
|
If you think my comments have benefitted you it would be nice to hear thanks Doge: DMnfgNp1HQSjtTZ1HcWiYtMwoGP5xcYDcz
|
|
|
rjk (OP)
Sr. Member
Offline
Activity: 448
Merit: 250
1ngldh
|
|
April 18, 2012, 12:36:56 PM |
|
Have you found any good HOWTOs on setting up the virtualization? I guess the options are:

VMware ESXi: the easiest to set up; just install, select devices for passthrough, and install the guest OS. It might even boot an existing BAMT from a USB stick (pass through the USB controller). Free to test! Any dual-GPU card is a no-go, though.

Citrix XenServer: has GPU passthrough support in a super-expensive licensed copy... so no way to tell if it is any easier to implement than the open-source project. I think some documentation would be available, but one might still need to be a Linux guru.

Open-source Xen: according to the Xen wiki, AMD passthrough works OOB. However, some GPU BIOS extracting might be involved, and all kinds of hacking before you even get to the point of attempting a first boot. On the upside, some reports of the 5970 working with this solution are floating around the net.

My GoogleFu is apparently weak, as I have not found a simple and thorough enough guide on how to actually tame this Xen-beast...

Nobody's made anything quite like this, to my knowledge. There is a KVM tut on the previous page that I will be looking at.
|
|
|
|
|
|
Jaryu
Member
Offline
Activity: 90
Merit: 10
|
|
April 19, 2012, 01:32:30 AM |
|
I don't know if it would help you or not, RJK, but since, unlike its predecessors, the PCI-e standard is plug and play (unless it's different on your motherboard), isn't it just a matter of plugging the cards into the MB after it boots to avoid the boot issue and configuring the VM that way? If it works it would still suck to have to reboot the system (unplugging all but 4 cards), but if it works... why reboot the rig ever again, right? :p hehe
|
|
|
|
malevolent
can into space
Legendary
Offline
Activity: 3472
Merit: 1724
|
|
April 19, 2012, 02:51:17 AM |
|
Depends on the CPU arch's address space (hint: 64 bit processors, internally, do not have 64 bits of address space, its usually something like 40 or 48 bit).
So the rest is reserved for something else?
|
Signature space available for rent.
|
|
|
rjk (OP)
Sr. Member
Offline
Activity: 448
Merit: 250
1ngldh
|
|
April 19, 2012, 03:57:46 AM |
|
Virtualbox from Oracle would also be worth checking out.
Virtualbox is a type 2 hypervisor, i.e. it won't work.
I don't know if it would help you or not, RJK, but since, unlike its predecessors, the PCI-e standard is plug and play (unless it's different on your motherboard), isn't it just a matter of plugging the cards into the MB after it boots to avoid the boot issue and configuring the VM that way? If it works it would still suck to have to reboot the system (unplugging all but 4 cards), but if it works... why reboot the rig ever again, right? :p hehe
I've tried this on other platforms with mixed results. I'm not going to risk this expensive board though.
Since KVM has been brought up, Proxmox VE would be my obvious choice because it's a bare-metal Debian distro with KVM already rolled in. You would have to manually edit your vm.conf files with the device IDs for the VGA passthrough, but this should be easy enough for a Linux admin.
So it supports hardware passthrough out of the box? I thought even vanilla KVM needed some special configs and modules to be loaded. Also, alas, I am not a Linux admin, just a Linux tinkerer.
In other news, I ordered the board today to the tune of a little over a grand. It comes with a 30-day evaluation to see if it will even be compatible with my backplane, since the backplane is made by Trenton and the board is from Advantech. I guess we will see whether it works or not.
|
|
|
|
DiabloD3
Legendary
Offline
Activity: 1162
Merit: 1000
DiabloMiner author
|
|
April 19, 2012, 04:06:13 AM |
|
Depends on the CPU arch's address space (hint: 64 bit processors, internally, do not have 64 bits of address space, its usually something like 40 or 48 bit).
So there rest is reserved for something else? No, it just doesn't waste the space internally.
|
|
|
|
rjk (OP)
Sr. Member
Offline
Activity: 448
Merit: 250
1ngldh
|
|
April 19, 2012, 04:31:49 AM |
|
To the best of my knowledge, sir, passthrough works out of the box in PVE. I've admin'd production deployments of it to virtualize KVM machines since 0.9.
I know it isn't available in the web GUI like I said, but you can edit your .conf files, provide device IDs (lspci, lsusb), and pass through devices from your host. I've done it with a USB dongle/key, a PCI modem, and an external USB hard drive, possibly serial - all from a Debian host to a Windows guest, all successful.
Basically PVE installs OpenVZ, KVM, and an Apache web interface on Debian. If you get it installed, get logged into the web interface, and can only create an OpenVZ machine and not a KVM one, enable virtualization in your BIOS. They have fairly good support forums.
Sweet, good info.
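The manual .conf edit described above looks roughly like this (a sketch, not from the thread; the VM ID 101 and the 01:00.0 BDF address are made-up examples, and the hostpci syntax has varied across PVE versions):

```shell
# On a real PVE host the file is /etc/pve/qemu-server/<vmid>.conf;
# a local copy is used here for illustration. Find the GPU's real
# address with:  lspci | grep -i vga
conf=101.conf
echo 'hostpci0: 01:00.0' >> "$conf"
cat "$conf"
```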
|
|
|
|
|