@catfish
Looks like u connected the PCI-E extender to another extender.
Didn't know u could do that. What is the MAX length for PCI-E cables?
Not sure. However, I *do* know that daisy-chaining PCIe extenders works both for low-power cards like the 5670 in this pic and for hotter cards like the overclocked 5830 in my second frame rig. The frame rig has an x1->x1 extender daisy-chained into an x1->x16 extender, with no Molex power augmentation. And it works fine.
I haven't tried more than two extenders chained together - I only did it because I had to. In the case of the Antec case (heh), the 5670 had to go in the drive bays so a second fan could blow air at it, so the extra length was essential. In the frame rig, I wanted the hot dual-slot cards evenly spaced to get decent airflow. Check out the 'show us pictures of your rigs' thread (the 'other' one with the mad open-frame stuff) for my double-logic-board, twin-PSU, 6-card frame - making sure all 6 cards get fresh air meant the leftmost card's extender wouldn't reach any logic board PCIe slot. So I used two extenders.
Given that the 5830 has two auxiliary PCIe power feeds from the PSU, the real proof of concept is the 5670 in the Antec case - all the power that GPU pulls comes PURELY through the PCIe slot. There is *no* auxiliary power feed from the PSU. It's *all* coming through those two extenders chained together... and it's been running overclocked at 900 MHz core (103 megahash/sec) for weeks now, stable as a rock. No fires yet!
I can't see many reasons to chain three normal-length extenders together, unless you've done a NASA on your frame rig design and mixed up centimetres and inches (sorry, couldn't resist). For what it's worth, I've used the Cablesaurus brand, shipped from the US, and the 71leven eBay vendor for next-day, no-import-tax delivery in the UK - the 71leven UK extenders are, dare I say it, better build quality than the Cablesaurus ones.
However, please note that I don't own any cards more powerful than the 5850 series. The 5850 is alleged to be more power-efficient than the 5830, so my overclocked 5830s (hell, ALL my cards are overclocked, so it's redundant to mention it) probably suck the most power. But even if my juiciest card eats 200W, 150W will be taken by the lower-resistance direct-to-PSU feeds and the remaining 50W by the PCIe slot (this is my understanding - happy to be corrected by any EEs here). And I don't really think *any* of my cards are *really* pulling 200W.
If you've got some monster dual-GPU card like a 5970 or the big 6990 (?) things, there's a good chance the card will pull the full-fat 75W max-spec power from the PCIe slot as *well* as full power from the auxiliary feeds. My daisy-chained unpowered PCIe extenders may melt and catch fire at 75W where they don't at a maximum of 50W. Remember that 75W is 6.25 amps at 12V through those thin-gauge ribbon cables (yeah, the power conductors are doubled up, but 6.25A is a lot of current for such thin wire), compared to my theoretical maximum of about 4.17A. At the very least, pushing that much current through the ribbon's resistance will cause a noticeable voltage drop on the 12V supplied via the PCIe slot. YMMV.
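For anyone who wants the arithmetic spelled out, here's a quick back-of-the-envelope sketch of the two cases. It treats the slot draw as all coming off the 12V rail (a simplification) and plugs in a completely made-up 0.05 ohm ribbon resistance just to show the shape of the voltage-drop problem - measure your own cable if you actually care about the real figure:

```python
# Back-of-the-envelope numbers for the two scenarios above. Both the 12V-only
# assumption and the ribbon resistance are illustrative guesses, not measurements.

RAIL_VOLTS = 12.0
RIBBON_OHMS = 0.05  # hypothetical round-trip resistance of the ribbon's 12V path

scenarios = {
    "my worst case: 200 W card, 150 W via the two aux feeds": 50.0,
    "monster card leaning on the slot for the full spec": 75.0,
}

for label, slot_watts in scenarios.items():
    amps = slot_watts / RAIL_VOLTS      # I = P / V
    drop = amps * RIBBON_OHMS           # V = I * R across the ribbon
    print(f"{label}: {slot_watts:.0f} W -> {amps:.2f} A, ~{drop:.2f} V drop")
```

Note that the heat generated in the ribbon itself scales with the square of the current, so 6.25A dumps roughly 2.25x as much heat into the same cable as 4.17A does - which is why the 75W case is disproportionately riskier than the raw wattage difference suggests.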
IMO, if you're running a heavy-duty card (compared to my mid-level cards) on x1->x16 extenders, or worse, on daisy-chained x1 extenders, then overclock the card and monitor the temperature of the ribbon cable with a laser thermometer. If it gets too hot (the cable's maximum temperature rating is often printed on the cable itself), try a different approach (or use Molex-augmented extenders).