Bitcoin Forum
Author Topic: TI XIO3130  (Read 1498 times)
vector76
Member
Activity: 70
July 09, 2011, 03:52:42 PM
 #1

Someone should make risers with these:
http://focus.ti.com/docs/prod/folders/print/xio3130.html
Then use a Big Bang Marshal and 8 PCI-e risers/switches to drive 24 GPUs from one host.

Any volunteers?  I'll buy you a fire extinguisher.
hugolp
Hero Member
Activity: 742
July 09, 2011, 04:21:29 PM
 #2

Quote from: vector76 on July 09, 2011, 03:52:42 PM
Someone should make risers with these:
http://focus.ti.com/docs/prod/folders/print/xio3130.html
Then use a Big Bang Marshal and 8 PCI-e risers/switches to drive 24 GPUs from one host.

Any volunteers?  I'll buy you a fire extinguisher.

Does standard PCI-e have a limit on the number of devices it can handle? The motherboard's chipset probably does.
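
For what it's worth, here is a minimal, Linux-only sketch (reading the standard sysfs path, nothing specific to this thread's hardware) that counts how many PCI functions and buses a host has actually enumerated. At the addressing level, PCI allows up to 256 buses with 32 devices of 8 functions each; the practical ceiling is whatever the chipset, BIOS resource allocation, and drivers can cope with.

[code]
# Count enumerated PCI functions and buses via sysfs (Linux only).
import os

PCI_ROOT = "/sys/bus/pci/devices"          # standard sysfs location

devices = os.listdir(PCI_ROOT)             # entries look like 0000:01:00.0
buses = {entry.split(":")[1] for entry in devices}   # bus number per domain

print("%d PCI functions on %d buses" % (len(devices), len(buses)))
[/code]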
Zagitta
Member
Activity: 84
July 09, 2011, 07:57:07 PM
 #3

As far as I know there's no limit in the protocol itself, but the motherboard needs enough lanes to support it, so to drive 24 cards you need 24 lanes...

Anyway, I tried requesting a free sample, but their website is a POS and simply gives me a blank page whenever I do that. It also did it after I created an account.

Oh well... they go for $20 there, it seems: http://avnetexpress.avnet.com/store/em/EMController/Miscellaneous/Texas-Instruments/XIO3130ZHC/_/R-14820589/A-14820589/An-0?action=part&catalogId=500201&langId=-1&storeId=500201&listIndex=-1

I'm not American, though...
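
On the lane math above: a lane is really a bandwidth limit rather than a device limit, and behind a switch like the XIO3130 three GPUs would share one upstream x1 link. A back-of-the-envelope sketch (the Gen1 line rate and the work-unit size are ballpark assumptions, not measurements):

[code]
# Rough bandwidth share when three GPUs sit behind one x1 uplink.
PCIE_GEN1_X1_MB_S = 250.0    # approx. usable bandwidth of one Gen1 lane
GPUS_PER_SWITCH = 3          # XIO3130: one upstream port, three downstream

per_gpu = PCIE_GEN1_X1_MB_S / GPUS_PER_SWITCH
print("about %.0f MB/s per GPU behind a shared x1 uplink" % per_gpu)

# Mining work units are on the order of a kilobyte, so even this shared
# link is orders of magnitude more bandwidth than the miner actually uses.
[/code]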
IlbiStarz
Full Member
Activity: 224
July 09, 2011, 08:33:27 PM
 #4

And then run 6990s, so it's 48 GPUs!

No, but really, the max number of GPUs is 8 in Linux and Windows.
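
If anyone wants to see where that ceiling bites on their own box, here is a minimal sketch (assuming the pyopencl package and a working OpenCL runtime) that counts the GPUs the driver actually exposes:

[code]
# Count the GPUs visible to the OpenCL runtime (assumes pyopencl).
import pyopencl as cl

gpu_count = 0
for platform in cl.get_platforms():
    try:
        gpu_count += len(platform.get_devices(device_type=cl.device_type.GPU))
    except cl.RuntimeError:
        pass                               # this platform exposes no GPUs

print("OpenCL reports %d GPU(s)" % gpu_count)
[/code]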

It's better to be pissed off, than to be pissed on.
BTC : 1UgM1rqL9mFtH4PHF8TgvAaceymaKmhmP         LTC : LgCGw2WrRphr94RYS1qXHj2PUuYrTap4vk
FC : 6jc9PEmqxpMSxydfepHtshE4f2jMom1dAJ
vector76
Member
Activity: 70
July 09, 2011, 10:01:49 PM
 #5

I have no idea where the limits are for hardware/firmware/BIOS/software, but I would think it would be possible.  If you dive low enough in the OS you should be able to have complete control over the messages going to each motherboard PCI-e slot, and if so you could negotiate whatever needs to be negotiated to multiplex over the 3 GPUs.  Best case is if it just looks like a bridge and a driver for it already exists; then it might just work out of the box.

I discovered the XIO3130 from it being mentioned here.  This suggests that it can work without hacking the host all that much, although when you stuff 8 in there things might start to change.

I've made electronic circuit boards before, but only for microcontroller stuff with big fat traces that run at tens of MHz.  There's no way I can do a BGA at 2.5 GHz.

Where are the FPGA guys?
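
As a rough way to check the "looks like a bridge, works out of the box" theory on a live system, here is an illustrative Linux-only sketch that flags PCI-to-PCI bridges (class code 0x0604) in sysfs. A transparent switch like the XIO3130 should show up as one upstream and several downstream bridges handled by the kernel's standard bridge support:

[code]
# Flag PCI-to-PCI bridges (base class 0x06, subclass 0x04) via sysfs.
import os

PCI_ROOT = "/sys/bus/pci/devices"

for dev in sorted(os.listdir(PCI_ROOT)):
    with open(os.path.join(PCI_ROOT, dev, "class")) as f:
        class_code = f.read().strip()      # e.g. "0x060400"
    if class_code.startswith("0x0604"):
        print(dev, "is a PCI-to-PCI bridge")
[/code]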
Jack of Diamonds
Sr. Member
Activity: 252
July 10, 2011, 03:05:24 PM
 #6

To run 24 GPUs from a single host, the motherboard needs to supply 1800 watts of electricity.

I don't think even a BB Marshal can handle a constant 1.8 kW stream going through it.
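
The arithmetic behind that figure, and how separate power feeds change it (per-card numbers are ballpark assumptions, not measurements):

[code]
# Ballpark power split for 24 GPUs: slot power vs. auxiliary connectors.
GPUS = 24
SLOT_WATTS_PER_GPU = 75      # max a PCIe x16 slot is specified to deliver
AUX_WATTS_PER_GPU = 150      # e.g. two 6-pin cables straight from the PSU

print("through the motherboard slots: %d W" % (GPUS * SLOT_WATTS_PER_GPU))
print("through PCIe aux cables:       %d W" % (GPUS * AUX_WATTS_PER_GPU))

# If the riser/daughter board feeds the slot's 12 V rail from its own
# connector, the motherboard itself carries very little of that load.
[/code]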

1f3gHNoBodYw1LLs3ndY0UanYB1tC0lnsBec4USeYoU9AREaCH34PBeGgAR67fx
vector76
Member
Activity: 70
July 10, 2011, 03:43:48 PM
 #7

Quote from: Jack of Diamonds on July 10, 2011, 03:05:24 PM
To run 24 GPUs from a single host, the motherboard needs to supply 1800 watts of electricity.

Yes, this would be a problem for a naive implementation.

The motherboard should be just fine if the daughter boards have additional connectors to provide the PCI-e power rails to the GPUs.  Not entirely unlike the extension cables that already exist.
vector76
Member
Activity: 70
July 17, 2011, 01:24:38 AM
 #8

Just came across what appears to be a product that does exactly what I had imagined:
http://www.amfeltec.com/products/x1pcie-splitter3.php

Will have to dremel off the end of their connectors, though.

They also have one that uses an x4 slot for the uplink and provides four x1 slots instead of three.  That means 32 GPUs on the host instead of 24 lol.

They claim (just like the other product) that no extra software is necessary.  I guess the PCI-e standard already contemplates bridges and hubs?

And yes, they have separate power connectors to provide power to the PCI-e slots instead of trying to get it all from the host connector.
Pipesnake
Sr. Member
Activity: 249
July 17, 2011, 02:17:57 AM
 #9

Better to have multiple rigs with 4-6 GPUs each.

If you have one rig with 24 GPUs and it goes down, you're going to be freaking out while they're offline.
Jabba
Member
Activity: 74
July 20, 2011, 12:41:45 PM
 #10

Kids, kids, kids, get real.

Ever thought about the existing driver limitation? AMD just raised the limit from 4 to 8. So dream on, or send emails to AMD to make your dream of a 24 or 32 GPU PC come true.

Jabba