Bitcoin Forum
Pages: « 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 [18] 19 20 21 22 23 24 25 26 »
Author Topic: Mining rig extraordinaire - the Trenton BPX6806 18-slot PCIe backplane [PICS]  (Read 169521 times)
DiabloD3
Legendary
*
Offline Offline

Activity: 1162
Merit: 1000


DiabloMiner author


View Profile WWW
April 17, 2012, 12:23:03 AM
 #341

Do you guys know what the limit is on the number of GPUs a typical mobo (BIOS) can initialize? The FASTRA II had 13 GPUs, and they needed a custom BIOS to boot.
I have tried to look into this virtualization business, and damn, there are few good howtos available. ESXi could have been an easy solution, but 5970s won't play nice with it.

Depends on the CPU arch's address space (hint: 64-bit processors, internally, do not have 64 bits of address space; it's usually something like 40 or 48 bits). It also depends on how the BIOS allocates space for the PCIe GART window; some blindly reserve huge address ranges no matter what the device will actually use, leaving you room for only 8-10 devices.
My BIOS has an option I can enable that uses a 64-bit address table to map devices instead of the usual 32-bit table. However, it is an option I have never seen on any other board, and I do not yet know whether it works correctly, or even which operating systems support it. When I turned it on and attempted to boot BAMT, the screen filled with corruption and I didn't get far. Similarly, Windows refused to install, citing some unknown hardware capability.

I think a 64-bit flavor of Linux should work, though.

BAMT is worthless shit if it isn't a 64-bit Debian build already. You can't even get more than 4 cards working in 32-bit Linux, sometimes only 3.

rjk (OP)
Sr. Member
****
Offline Offline

Activity: 448
Merit: 250


1ngldh


View Profile
April 17, 2012, 12:25:59 AM
 #342


BAMT is worthless shit if it isn't a 64-bit Debian build already. You can't even get more than 4 cards working in 32-bit Linux, sometimes only 3.
I *think* (not sure though) that the current BAMT is 64-bit, but I had been trying with the previous version, which wasn't.

Mining Rig Extraordinaire - the Trenton BPX6806 18-slot PCIe backplane [PICS] Dead project is dead, all hail the coming of the mighty ASIC!
DiabloD3
Legendary
*
Offline Offline

Activity: 1162
Merit: 1000


DiabloMiner author


View Profile WWW
April 17, 2012, 12:27:25 AM
 #343


BAMT is worthless shit if it isn't a 64-bit Debian build already. You can't even get more than 4 cards working in 32-bit Linux, sometimes only 3.
I *think* (not sure though) that the current BAMT is 64-bit, but I had been trying with the previous version, which wasn't.

Open an xterm and run uname -a.
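For anyone unsure how to read the output, a quick sketch using standard Debian/BAMT commands:

```shell
# Print the kernel architecture: "x86_64" means a 64-bit kernel,
# "i686" or similar means 32-bit.
uname -m
# getconf reports the userland word size as 32 or 64:
getconf LONG_BIT
```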

TheHarbinger
Sr. Member
****
Offline Offline

Activity: 378
Merit: 250


Why is it so damn hot in here?


View Profile
April 17, 2012, 12:40:00 AM
 #344


BAMT is worthless shit if it isn't a 64-bit Debian build already. You can't even get more than 4 cards working in 32-bit Linux, sometimes only 3.
I *think* (not sure though) that the current BAMT is 64-bit, but I had been trying with the previous version, which wasn't.

Too lazy to SSH in to check, but I run 5- and 6-card rigs on BAMT 0.5c.

12Um6jfDE7q6crm1s6tSksMvda8s1hZ3Vj
JL421
Hero Member
*****
Offline Offline

Activity: 812
Merit: 510


View Profile
April 17, 2012, 02:43:33 AM
 #345

BAMT is worthless shit if it isn't a 64-bit Debian build already. You can't even get more than 4 cards working in 32-bit Linux, sometimes only 3.

I have 7 GPU cores on my 32-bit BAMT 0.5c install...
tnkflx
Sr. Member
****
Offline Offline

Activity: 349
Merit: 250


View Profile
April 17, 2012, 05:55:21 AM
 #346

Do you guys know what the limit is on the number of GPUs a typical mobo (BIOS) can initialize? The FASTRA II had 13 GPUs, and they needed a custom BIOS to boot.
I have tried to look into this virtualization business, and damn, there are few good howtos available. ESXi could have been an easy solution, but 5970s won't play nice with it.

This depends on the motherboard... For instance, my GA-990FXA-UD7 wouldn't even boot with 8 GPUs until I started disabling some devices in the BIOS (USB3, eSATA, etc.). After a couple of tries, everything worked, and this box has been running stable for several months now.

| Operating electrum.be & us.electrum.be |
tnkflx
Sr. Member
****
Offline Offline

Activity: 349
Merit: 250


View Profile
April 17, 2012, 05:55:48 AM
 #347


BAMT is worthless shit if it isn't a 64-bit Debian build already. You can't even get more than 4 cards working in 32-bit Linux, sometimes only 3.
I *think* (not sure though) that the current BAMT is 64-bit, but I had been trying with the previous version, which wasn't.

Nope, BAMT is still 32-bit.

| Operating electrum.be & us.electrum.be |
DiabloD3
Legendary
*
Offline Offline

Activity: 1162
Merit: 1000


DiabloMiner author


View Profile WWW
April 17, 2012, 08:56:14 AM
 #348

Do you guys know what the limit is on the number of GPUs a typical mobo (BIOS) can initialize? The FASTRA II had 13 GPUs, and they needed a custom BIOS to boot.
I have tried to look into this virtualization business, and damn, there are few good howtos available. ESXi could have been an easy solution, but 5970s won't play nice with it.

This depends on the motherboard... For instance, my GA-990FXA-UD7 wouldn't even boot with 8 GPUs until I started disabling some devices in the BIOS (USB3, eSATA, etc.). After a couple of tries, everything worked, and this box has been running stable for several months now.

Yeah, you need to disable as many address-space-consuming devices as possible. You might be able to get more than 4 cards on 32-bit Linux just by doing that, and also by installing less memory (though it's hard to find DDR3 sticks smaller than 1 GB that aren't crap or volted wrong, since the DDR3 manufacturing process targeted chip sizes for 1 GB minimum sticks).
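A back-of-the-envelope sketch of why 32-bit tops out so quickly (the 256 MiB aperture size is typical for these cards, not guaranteed):

```shell
# A 32-bit address map is 4 GiB total. Each GPU typically claims a
# 256 MiB prefetchable BAR, and RAM, chipset, and onboard devices
# consume most of the rest of the map.
total=$(( 1 << 32 ))           # 4 GiB of 32-bit address space
bar=$(( 256 * 1024 * 1024 ))   # one typical GPU aperture
echo "256 MiB windows that fit in 4 GiB: $(( total / bar ))"
# After RAM and onboard devices are mapped, only a handful of windows
# remain free, which is why 3-4 cards is the practical 32-bit ceiling.
```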

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
April 17, 2012, 01:28:06 PM
 #349

This depends on the motherboard... For instance, my GA-990FXA-UD7 wouldn't even boot with 8 GPUs until I started disabling some devices in the BIOS (USB3, eSATA, etc.). After a couple of tries, everything worked, and this box has been running stable for several months now.

That is a good point. The BIOS maps all onboard functions into the address space, and most BIOSes are not anticipating 8 GPUs. It's a good idea to turn off everything you don't need (it also saves some watts). From the NVIDIA cluster blog, it looks like BIOS designers are horribly inefficient with address space, allocating giant chunks of memory to each device.

Still, moving beyond 8 GPUs requires more "exotic" solutions. NVIDIA makes it easier since its GPUs are decoupled from Xorg and have no driver-level limit. AMD's driver-level limit and tight Xorg coupling make virtualization a requirement.

GPU limitations:
BIOS limitation - (the NVIDIA cluster with 13 GPUs required a custom BIOS built by the manufacturer)
Driver limit - AMD drivers have a hardcoded limit of 8 (NVIDIA has no driver-level limit but doesn't support >8 due to other limits)
Virtual space limit - >8 GPUs likely mandates a 64-bit OS
Kernel limit - a large number of addressable devices may require kernel changes
Xorg limit - Xorg can't handle >8 GPUs (NVIDIA bypasses this by making its drivers independent of Xorg)

So it is "hard" with NVIDIA but downright impossible with AMD. Virtualization should work though... maybe. Smiley
ice_chill
Sr. Member
****
Offline Offline

Activity: 336
Merit: 250


View Profile
April 17, 2012, 01:51:22 PM
 #350

Apparently the ASUS Crosshair V (990FX) comes with AMD's version of VT-d hardware virtualization. They call it IOMMU. Would be interesting to see someone try it.
DiabloD3
Legendary
*
Offline Offline

Activity: 1162
Merit: 1000


DiabloMiner author


View Profile WWW
April 17, 2012, 02:31:14 PM
 #351

Apparently the ASUS Crosshair V (990FX) comes with AMD's version of VT-d hardware virtualization. They call it IOMMU. Would be interesting to see someone try it.

All FX-series chipsets have it; none of the rest do. That includes the 790FX, 890FX, and 990FX. The FX line is really a downscaled version of their server chipset, not a member of the desktop series. IOMMU is the literal description of what it is; Intel's and AMD's implementations (plus Sun's and IBM's) are all called that. All it does is allow two-way remapping of hardware interrupts and device-side memory-map remapping (such as unified remaps of device->system RAM and CPU->device RAM).

If you're going to do any virtualization of suitably complex devices, you need IOMMU. This includes, but isn't limited to, GPUs, higher end NICs (10gbit and up typically), high end RAID controllers, etc.
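If anyone wants to verify the kernel actually picked the IOMMU up, a rough sketch (the exact log messages vary by kernel version, so treat these as starting points):

```shell
# The kernel usually needs amd_iommu=on (AMD) or intel_iommu=on (Intel)
# on its command line in addition to the BIOS option being enabled:
cat /proc/cmdline
# Look for AMD-Vi / DMAR initialization messages after boot:
dmesg | grep -i -e iommu -e amd-vi -e dmar \
    || echo "no IOMMU messages visible (or dmesg restricted)"
```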

twoBitBasher
Member
**
Offline Offline

Activity: 85
Merit: 10



View Profile
April 18, 2012, 10:49:37 AM
 #352

Have you found any good HOWTOs on setting up the virtualization?

I guess the options are:

VMware ESXi: the easiest to set up; just install, select devices for passthrough, and install the guest OS. It might even boot an existing BAMT install from a USB stick (pass through the USB controller). Free to test! Any dual-GPU card is a no-go, though Sad

Citrix XenServer: has GPU passthrough support only in a super-expensive licensed edition... so there's no way to tell whether it is any easier to work with than the open-source project. Some documentation should be available, but one might still need to be a Linux guru.

Open-source Xen: according to the Xen wiki, AMD passthrough works out of the box. However, some GPU BIOS extraction might be involved, plus all kinds of hacking just to get to the point of attempting a first boot. On the upside, some reports of a 5970 working with this solution are floating around the net.

My Google-fu is apparently weak, as I have not found a simple yet thorough guide on how to actually tame this Xen beast...
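For reference, the open-source Xen route boils down to something like this (a sketch only; the PCI addresses are examples, and the exact config syntax varies by Xen version):

```
# On the dom0 side, hide the GPU functions from dom0 so Xen can assign
# them to a guest (substitute your own addresses from lspci):
#   modprobe xen-pciback hide='(01:00.0)(01:00.1)'
#
# Then in the guest config file (e.g. /etc/xen/miner.cfg):
pci = [ '01:00.0', '01:00.1' ]
```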

If you think my comments have benefitted you it would be nice to hear thanks Smiley

Doge: DMnfgNp1HQSjtTZ1HcWiYtMwoGP5xcYDcz
rjk (OP)
Sr. Member
****
Offline Offline

Activity: 448
Merit: 250


1ngldh


View Profile
April 18, 2012, 12:36:56 PM
 #353

Have you found any good HOWTOs on setting up the virtualization?

I guess the options are:

VMware ESXi: the easiest to set up; just install, select devices for passthrough, and install the guest OS. It might even boot an existing BAMT install from a USB stick (pass through the USB controller). Free to test! Any dual-GPU card is a no-go, though Sad

Citrix XenServer: has GPU passthrough support only in a super-expensive licensed edition... so there's no way to tell whether it is any easier to work with than the open-source project. Some documentation should be available, but one might still need to be a Linux guru.

Open-source Xen: according to the Xen wiki, AMD passthrough works out of the box. However, some GPU BIOS extraction might be involved, plus all kinds of hacking just to get to the point of attempting a first boot. On the upside, some reports of a 5970 working with this solution are floating around the net.

My Google-fu is apparently weak, as I have not found a simple yet thorough guide on how to actually tame this Xen beast...
Nobody's made anything quite like this, to my knowledge. There is a KVM tutorial on the previous page that I will be looking at.

Mining Rig Extraordinaire - the Trenton BPX6806 18-slot PCIe backplane [PICS] Dead project is dead, all hail the coming of the mighty ASIC!
dizzy1
Full Member
***
Offline Offline

Activity: 134
Merit: 100


View Profile
April 18, 2012, 04:02:29 PM
 #354

Have you found any good HOWTOs on setting up the virtualization?

...
Nobody's made anything quite like this, to my knowledge. There is a KVM tut on the previous page that I will be looking at.

This seems to be for ESX and maybe ESXi. The catch is that there seems to be a limit on the number of devices that can be passed through (4).

http://blogs.vmware.com/performance/2011/10/gpgpu-computing-in-a-vm.html
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010789
mem
Hero Member
*****
Offline Offline

Activity: 644
Merit: 501


Herp Derp PTY LTD


View Profile
April 19, 2012, 01:25:13 AM
 #355

KVM and Xen on Linux both support PCI (and PCIe) passthrough to guests:
http://wiki.xensource.com/xenwiki/XenVGAPassthrough

VirtualBox from Oracle would also be worth checking out.
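For the KVM route, the flow at the time looked roughly like the script below (a sketch under assumptions: the legacy pci-assign mechanism, an example PCI address of 01:00.0, and an example disk image name; later kernels moved to VFIO instead):

```shell
# Generate a helper script for legacy KVM device assignment. All
# addresses and filenames are examples -- substitute your own from lspci.
cat > assign-gpu.sh <<'EOF'
#!/bin/sh
# Unbind the GPU from its host driver, then bind it to pci-stub
# so the host won't touch it:
echo "0000:01:00.0" > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo "0000:01:00.0" > /sys/bus/pci/drivers/pci-stub/bind
# Boot the guest with the card assigned:
exec qemu-system-x86_64 -enable-kvm -m 2048 \
  -device pci-assign,host=01:00.0 \
  -drive file=bamt.img,if=virtio
EOF
chmod +x assign-gpu.sh
```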


Jaryu
Member
**
Offline Offline

Activity: 90
Merit: 10


View Profile
April 19, 2012, 01:32:30 AM
 #356

I don't know if it would help you or not, rjk, but since, unlike its predecessors, the PCIe standard is plug and play (unless it's different on your motherboard), isn't it just a matter of plugging the cards into the board after it boots, to avoid the boot issue, and configuring the VM that way? It would still suck to have to reboot the system (unplugging all but 4 cards), but if it works... why ever reboot the rig again, right? :p hehe
malevolent
can into space
Legendary
*
Offline Offline

Activity: 3472
Merit: 1724



View Profile
April 19, 2012, 02:51:17 AM
 #357

Depends on the CPU arch's address space (hint: 64 bit processors, internally, do not have 64 bits of address space, its usually something like 40 or 48 bit).

So the rest is reserved for something else?

Signature space available for rent.
rjk (OP)
Sr. Member
****
Offline Offline

Activity: 448
Merit: 250


1ngldh


View Profile
April 19, 2012, 03:57:46 AM
 #358

VirtualBox from Oracle would also be worth checking out.
VirtualBox is a type 2 hypervisor, i.e. it won't work.

I don't know if it would help you or not, rjk, but since, unlike its predecessors, the PCIe standard is plug and play (unless it's different on your motherboard), isn't it just a matter of plugging the cards into the board after it boots, to avoid the boot issue, and configuring the VM that way? It would still suck to have to reboot the system (unplugging all but 4 cards), but if it works... why ever reboot the rig again, right? :p hehe
I've tried this on other platforms with mixed results. I'm not going to risk this expensive board though. Tongue

Since KVM has been brought up, Proxmox VE would be my obvious choice because it's a bare-metal Debian distro with KVM already rolled in. You would have to manually edit your vm.conf files with the device IDs for VGA passthrough, but this should be easy enough for a Linux admin.
So it supports hardware passthrough out of the box? I thought even vanilla KVM needed some special configs and modules to be loaded. Also, alas, I am not a Linux admin, just a Linux tinkerer.

In other news, I ordered the board today to the tune of a little over a grand. It comes with a 30-day evaluation to see if it will even be compatible with my backplane, since the backplane is made by Trenton and the board is from Advantech. I guess we will see whether it works or not.

Mining Rig Extraordinaire - the Trenton BPX6806 18-slot PCIe backplane [PICS] Dead project is dead, all hail the coming of the mighty ASIC!
DiabloD3
Legendary
*
Offline Offline

Activity: 1162
Merit: 1000


DiabloMiner author


View Profile WWW
April 19, 2012, 04:06:13 AM
 #359

Depends on the CPU arch's address space (hint: 64 bit processors, internally, do not have 64 bits of address space, its usually something like 40 or 48 bit).

So there rest is reserved for something else?

No, it just doesn't waste the space internally.

rjk (OP)
Sr. Member
****
Offline Offline

Activity: 448
Merit: 250


1ngldh


View Profile
April 19, 2012, 04:31:49 AM
 #360

To the best of my knowledge, sir, passthrough works out of the box in PVE. I've administered production deployments of it to run KVM machines since 0.9.

Like I said, I know it isn't available in the web GUI, but you can edit your .conf files, provide device IDs (from lspci/lsusb), and pass through devices from your host. I've done it with a USB dongle/key, a PCI modem, an external USB hard drive, and possibly a serial port - all from a Debian host to a Windows guest, all successful.

Basically, PVE installs OpenVZ, KVM, and an Apache web interface on Debian. If you get it installed, log into the web interface, and find you can only create OpenVZ containers and not KVM machines, enable virtualization in your BIOS. They have fairly good support forums.
Sweet, good info.
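For what it's worth, the .conf edit described above looks roughly like this in PVE (a sketch; the file path, the VM ID of 101, and the PCI address are all examples, and the exact hostpci syntax depends on your PVE version):

```
# /etc/qemu-server/101.conf  (example VM ID)
hostpci0: 01:00.0
```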

Mining Rig Extraordinaire - the Trenton BPX6806 18-slot PCIe backplane [PICS] Dead project is dead, all hail the coming of the mighty ASIC!