rjk (OP)
Sr. Member
Offline
Activity: 448
Merit: 250
1ngldh
July 04, 2012, 01:51:50 PM

Quote
So, you are selling the video cards? Sad, I was really hoping to see this completed one day. Backplane up for grabs??

I wasn't thinking of selling it right away unless you were going to make a really great offer.
crosby
July 04, 2012, 01:57:07 PM

Quote
So, you are selling the video cards? Sad, I was really hoping to see this completed one day. Backplane up for grabs??

Quote from: rjk
I wasn't thinking of selling it right away unless you were going to make a really great offer.

I really wish I could
Slushpuppy
Member
Offline
Activity: 106
Merit: 10
July 16, 2012, 12:51:36 AM

this looked awesome
FormerlyAnonymous
Newbie
Offline
Activity: 37
Merit: 0
August 11, 2012, 05:21:29 PM
Dunno if this project is dead or what... but if you continue, for the software side, you might want to consider a Xen-based platform. Xen is without doubt an ugly stepchild in the virtualization world, but it's been used extensively for PCI passthrough -- indeed, if I'm not mistaken, it even supported some crude virtual PCI passthrough capability before hardware IOMMUs came to market.

However, I will warn you that Xen adds some complexity, and if you're already struggling you might not like it. Another approach would be to get rid of Proxmox and use regular Linux. If I were doing this myself, I would probably build it as a Gentoo system with a hand-configured kernel -- but again, that's kinda complex.

I've tried Proxmox and found it to be pretty buggy and fragile. Great concept, but they just need to do more work on their implementation. Everything seemed to be halfway implemented, so as soon as you deviated from the simple use cases they designed it for, everything fell to pieces. Actually, everything fell to pieces even when I tried to follow the simple guides in their wiki.

Proxmox is basically Debian with an OpenVZ kernel and a bunch of Red Hat cluster software. You don't need the extra complexity that OpenVZ or the Red Hat cluster suite bring into the system -- you might have better luck just running Ubuntu and KVM. It's not that hard -- just follow the guides in the wiki and you'll be 90% of the way there.

I understand the allure of a GUI like Proxmox, but often the "simplicity" offered by such tools is illusory and ends up acting more like a "complexity loan" -- installation will be easy, sure, but then try to actually /do/ anything and you will spend more time than you saved during installation working around all the bugs and limitations of whatever mysterious pile of software is sitting underneath that pretty GUI.
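Whichever hypervisor route you take, passthrough first needs the CPU-side virtualization extensions. As a rough illustration (a hypothetical helper, not part of Xen, KVM, or Proxmox), this checks a /proc/cpuinfo dump for the relevant flags:

```python
def has_virt_extensions(cpuinfo_text):
    """Return 'vmx' (Intel VT-x), 'svm' (AMD-V), or None, based on the
    flags line of a /proc/cpuinfo dump. Note that the IOMMU needed for
    PCI passthrough (VT-d / AMD-Vi) is a separate chipset feature that
    must also be enabled in the BIOS and, on Intel, on the kernel
    command line (intel_iommu=on)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
    return None

# On a real host you would pass open("/proc/cpuinfo").read();
# this sample is made up for illustration.
sample = "processor\t: 0\nflags\t\t: fpu vmx sse2 ht\n"
print(has_virt_extensions(sample))  # vmx
```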
rjk (OP)
Sr. Member
Offline
Activity: 448
Merit: 250
1ngldh
August 12, 2012, 01:49:00 AM
Quote from: FormerlyAnonymous
Dunno if this project is dead or what... but if you continue, for the software side, you might want to consider a Xen-based platform. [...]
Thanks for the input. Unfortunately, all of the video cards have been sold, although I do still have the rest of the platform (trying to figure out what the hell to do with it....). By the time I got around to installing anything, KVM was repeatedly being lauded as the be-all, end-all solution, which is why I went with Proxmox. Unfortunately, that was right around the time I got very busy and could no longer devote time to the project.

I would have preferred Xen, since I currently use XenServer virtualization from Citrix for my other work stuff and know it better, but in either case it probably wouldn't have worked out anyway. I am not very good at Linux stuff in general, and slapping on a layer of complexity like hardware virtualization turned the entire process into Greek to me. It's possible that XenServer might work, but as far as I know the hardware virtualization requires an expensive license to use, and using open source Xen would be Greek all over again.

Sooo... I don't really know where to go from here. I have the board all wired into the PSU, ready to go, and I have almost 5 kW of 12 VDC available from 2 other PSUs (but no proper distribution bus or anything). I have an awesome frame built by Spotswood ( http://richchomiczewski.wordpress.com/ ), the backplane itself, 2 host boards (one old and one newer), the 1200 W PSU, and the two 2360 W PSUs. At the beginning of the thread is a list of parts and the total that I have spent on all of them. Wonder if someone is interested in buying it...
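For a sense of scale on those 12 V figures, here is a back-of-envelope check (plain power-over-voltage arithmetic, nothing specific to this hardware):

```python
def amps_at(watts, volts=12.0):
    """Current a DC rail must carry to deliver the given power."""
    return watts / volts

# The supplies mentioned above: two 2360 W blade-server PSUs
# plus the 1200 W unit wired to the backplane.
for w in (2360, 2360, 1200):
    print(f"{w} W -> {amps_at(w):.0f} A at 12 V")

# The two 2360 W units alone work out to roughly 393 A at 12 V,
# which is why a proper distribution bus and heavy cable matter.
```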
FormerlyAnonymous
Newbie
Offline
Activity: 37
Merit: 0
August 12, 2012, 02:30:08 AM Last edit: August 12, 2012, 02:50:52 AM by FormerlyAnonymous
Quote from: rjk
It's possible that XenServer might work, but as far as I know the hardware virtualization requires an expensive license to use, and using open source Xen would be Greek all over again.

For the record -- sounds like you're aware of this, rjk, but I've seen people that aren't -- Xen is open source, and doesn't cost anything; that includes the hardware virtualization support. It just so happens that XenServer uses Xen as part (a big part, I'd presume) of its commercial offering, but there's no reason regular people (who don't hate money) can't use it also, for free.
rjk (OP)
Sr. Member
Offline
Activity: 448
Merit: 250
1ngldh
August 12, 2012, 02:31:24 AM
Quote from: FormerlyAnonymous
For the record, XenServer isn't a product I'd recommend for most purposes due to my not being too fond of the company that makes it. Xen, by contrast, is open source -- it doesn't cost anything; that includes the hardware virtualization support. In other words, it just so happens that XenServer uses Xen as part (most) of its "commercial product," but there's no reason regular people who don't hate money can't use it also, for free.

Of course, but unfortunately I'm more of a GUI guy, and don't know the Xen command line well at all.
420
August 14, 2012, 09:26:22 AM
Quote
Am I mistaken or are you chaining PCI-e extenders? I was wondering if I could do that...

Yep. For mining, with its minimal bandwidth, it should be fine. It has worked in my other rigs as well; you can chain 1x as well as 16x risers, or combine them however you like. I've tried linking 16x risers, though, and both times it failed under Windows 7 64-bit with an ASUS WS 1155 board.
Donations: 1JVhKjUKSjBd7fPXQJsBs5P3Yphk38AqPr - TIPS the hacks, the hacks, secure your bits!
RulerOf
Newbie
Offline
Activity: 14
Merit: 0
September 20, 2012, 03:35:43 AM
Quote from: rjk
At the beginning of the thread is a list of parts and the total that I have spent on all of them. Wonder if someone is interested in buying it...

Probably a bunch of people, myself included, but I guess I would need some more info. How's the 3D performance of the PCIe cards through the x4(?) electrical connection? To your knowledge, is there an SHB available (probably AMD-based) that will allow me to pack at least 36 cores into the box? And at least 64 GB of RAM?

I've built two multi-headed gaming boxes now out of my old mining hardware. Fun stuff. If there's a realistic way to stuff 18 heads into a single case (oh hrm, I'd need more GPUs), I'd be open to looking into it.
rjk (OP)
Sr. Member
Offline
Activity: 448
Merit: 250
1ngldh
September 20, 2012, 03:55:18 AM
Quote from: RulerOf
How's the 3D performance of the PCIe cards through the x4(?) electrical connection? To your knowledge, is there an SHB available (probably AMD-based) that will allow me to pack at least 36 cores into the box? And at least 64 GB of RAM?

It's still available. I unfortunately don't know how well 3D would do on a link smaller than x16. I think there have been studies on it to show that the gaming performance hit would be nil, but rendering and other such workloads would take a major hit.

This is the most powerful SHB on the market that I know of: http://www.trentonsystems.com/products/single-board-computers/picmg-13/bxt7059
It supports up to two 8-core Intel Xeon E5-2448L processors (16 physical and 32 virtual cores with HT), and up to 96 GB of RAM if you use 16 GB modules in all 6 slots.

If you will be processing video, you will want this backplane instead: http://www.trentonsystems.com/products/backplanes/picmg-13-backplanes/video-processing-gpu-backplane-bpg8032
It has x16 links on all slots.

As for how many cards you can actually install and use... you would be on your own there. It requires virtualization and/or driver tricks that I have no idea how to do (that's mainly why this project flopped), and possibly BIOS mods depending on how you are trying to make it work. If you want it, let me know.
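On the x4-vs-x16 question, the raw link numbers are easy to compare using the standard per-lane throughput figures (a generic PCIe calculation, not a measurement on this backplane):

```python
# Approximate usable per-lane throughput in MB/s, after the 8b/10b
# (Gen1/Gen2) or 128b/130b (Gen3) encoding overhead.
PER_LANE_MBPS = {1: 250, 2: 500, 3: 985}

def pcie_bandwidth_mbps(lanes, gen):
    """Approximate one-direction bandwidth of a PCIe link in MB/s."""
    return lanes * PER_LANE_MBPS[gen]

print(pcie_bandwidth_mbps(4, 2))   # 2000 -> an x4 Gen2 link
print(pcie_bandwidth_mbps(16, 2))  # 8000 -> an x16 Gen2 link
```

A quarter of the bandwidth, which lines up with rjk's comment that gaming tends to suffer little while transfer-heavy workloads like video processing take the hit.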
RulerOf
Newbie
Offline
Activity: 14
Merit: 0
September 20, 2012, 05:13:21 AM
Wow.

Quote from: rjk
It's still available. [...] If you want it, let me know.

All things considered, there are a lot of limiting factors to the idea I've got as far as this backplane goes. With that SHB, while the chips on it would be good for a parallelizable workload, they'd likely be terrible for gaming. I was doing some math... Without an AMD option, even if I could pack the machine with the appropriate Intel chips, it'd likely be more than 3-4x the cost of putting together multiple boxes, even considering duplicated hardware such as cases, power supplies, and so on. I naturally expect a premium for enterprise hardware, but at some point... I'm sure you get the idea.

Part of me really wants to see the sight of 15 or so monitors plugged into a noisy metal behemoth. Shame I didn't catch this thread back in February.

Even if VT-d passthrough weren't an option, I probably could have helped you with using Xen PV passthrough to make it all work, and that would have been a real pleasure to see! Oh well. If you want to take a stab at making it work again, drop me a line. I could at least point you in the right direction. Also, thanks for the quick reply!
EnergyVampire
September 23, 2012, 04:17:51 PM Last edit: September 23, 2012, 04:34:16 PM by EnergyVampire
Quote from: EnergyVampire
Hello rjk,
Your rig is looking really nice!
Quick question on your setup. Which driver(s) are you using? Is the driver causing the 11 GPU limitation? I believe I read somewhere that the AMD drivers could only handle 8 GPUs, but your rig has proven otherwise, kudos!

Quote from: rjk
I don't think it's a driver limit because I don't have any drivers loaded. I am just listing the PCI devices, which as far as I know does not require a driver. That's why I'm pretty sure it's a BIOS limit. There will of course be a driver limit, likely still of 8 devices, which is why I was planning on using VT-d and PCIe passthrough with some virtual machines, but I didn't get very far on that.

A research group from the University of Antwerp managed to get 13 GPUs on one ASUS motherboard. They confirm the limitation is BIOS related, as you mentioned. The group ended up customizing the BIOS and hacking the Linux kernel.
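rjk's point that enumeration needs no driver holds because counting cards only requires reading PCI configuration space, which `lspci -n` exposes. A toy parser of that output (the sample bus addresses are made up):

```python
def count_display_controllers(lspci_n_output):
    """Count display-class devices (PCI class 0300 VGA, 0302 3D
    controller, 0380 other display) in `lspci -n` output. Walking
    config space like this needs no vendor driver, so the count is
    not limited by what any graphics driver supports."""
    display_classes = {"0300", "0302", "0380"}
    count = 0
    for line in lspci_n_output.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[1].rstrip(":") in display_classes:
            count += 1
    return count

# Trimmed sample in `lspci -n` style (hypothetical addresses):
sample = """\
00:00.0 0600: 8086:3c00
04:00.0 0300: 1002:6719
05:00.0 0300: 1002:6719
06:00.0 0300: 1002:6719
"""
print(count_display_controllers(sample))  # 3
```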
420
September 27, 2012, 03:37:56 AM
Quote from: EnergyVampire
A research group from the University of Antwerp managed to get 13 GPUs on one ASUS motherboard. They confirm the limitation is BIOS related, as you mentioned. The group ended up customizing the BIOS and hacking the Linux kernel.

Kickass, is there a video of this?
EnergyVampire
February 22, 2013, 06:47:29 PM
Quote from: 420
Kickass, is there a video of this?

Hello 420,
Sorry for my late reply. The system is called Fastra II: http://fastra2.ua.ac.be/
rjk (OP)
Sr. Member
Offline
Activity: 448
Merit: 250
1ngldh
December 03, 2013, 04:16:42 PM Last edit: December 03, 2013, 06:19:34 PM by rjk
I'm bumping this to hopefully reel in some offers for this stuff, because I need to get rid of it. I have tons of parts that need to go. Here's a list:

- The backplane itself, mounted on the custom tray/frame and hard-wired to a 1.2 kW PC Power and Cooling PSU. Because of the way this is mounted, I don't want to take it apart, so I wish to sell it as a package. $750
- The Sandy Bridge based CPU board with 8 GB of ECC DDR3 RAM, as well as the older NLT6313 dual-CPU board with 8 GB of DDR2. $1000
- Or, the backplane and CPU boards together for $1500.
- All the other accessories - buy the entire package for $2000, including the backplane and CPU boards.

These accessories include:

- Delta fans
- IBM 240 volt PDU with 70 amp connector
- Dell 2360 watt 12 volt power supplies for blade servers
- 6 AWG cable with Dell PSU connectors soldered on
- Additional frame parts
- Custom-built 240 volt PDU with power cables (for connecting regular rigs)

SHIPPING IS EXTRA! If you buy everything, it will probably take up 3 or 4 very large boxes (50+ lbs). For this reason, I will not ship internationally. I'm located in Ohio if anyone wants to pick some stuff up.

Payment: I will not take payment entirely in bitcoins, but I will consider a partial bitcoin trade with cash. Since PayPal sucks, I will only consider using it with the most trusted members. I will also consider partial trades of BFL ASIC hardware.

MAKE OFFERS
jimmydorry
Newbie
Offline
Activity: 58
Merit: 0
December 06, 2013, 04:48:26 AM
Not accepting Bitcoins fully? Madness!
Garr255
Legendary
Offline
Activity: 938
Merit: 1000
What's a GPU?
December 06, 2013, 05:10:35 AM
rjk, do you still have the entire PWM fan controller setup that I built for this project years ago? I remember taking a lot of pride in that, being my first bitcoin-related project. Good luck selling, I hope someone will be eager to take on the challenge for scrypt mining!
“First they ignore you, then they laugh at you, then they fight you, then you win.” -- Mahatma Gandhi
Average time between signing on to bitcointalk: Two weeks. Please don't expect responses any faster than that!
rjk (OP)
Sr. Member
Offline
Activity: 448
Merit: 250
1ngldh
December 06, 2013, 04:29:59 PM
Quote from: Garr255
rjk, do you still have the entire PWM fan controller setup that I built for this project years ago?

Yep I do, and that cool micro-router based wifi sharing thingy.