Bitcoin Forum
Author Topic: Asus P8Z68-V logic board - why can't I use all 5 PCIe slots?  (Read 3829 times)
catfish
Sr. Member | Activity: 270 | teh giant catfesh
July 30, 2011, 02:32:47 PM  #1

As a base logic board for my open-frame rig, all I needed was a LOT of PCIe slots. Doesn't matter about bandwidth, all slots will have x1 -> x16 extender cables fitted so the GPUs can be mounted up on a frame spaced out for airflow.

The last thread about this line of Asus logic boards got derailed into an operating system discussion so I've started a new thread.

Basically, I bought this board because it was on offer, and I happened to have an appropriate 1155 CPU to fit. I really don't need all the bells & whistles that the logic board comes with - the integrated graphics, fancy CPU overclocking / overvolting tools etc. (though I could use them in the future to *downclock* and *downvolt* the CPU and northbridge to save power, I guess). I don't need the hardware RAID, or the USB3, Turbo Boost, eSATA, etc.

Of course, if I was building a general-purpose PC, these features would have me salivating - especially the hardware RAID and USB3...

But the main reason for this board is for mining with lots of GPUs. It has 3 full length x16 slots, with two x1 slots hiding around the first x16 slot. I know the bandwidth is shared, and there isn't enough for all three x16 running at full x16 speed... but having just the 2 'crossfire enabled' x16 slots suggests to me that there's enough bandwidth to run all five slots at x1.

I've set the BIOS utility option for PCIe to 'X1' which allegedly enables all 5 PCIe slots at x1 speed, but disables one of the USB risers. 12 USB ports is enough for me, so that's no big deal Wink

However I can't get the system to recognise more than 3 GPUs. I have 4 connected to the system, and whatever combination of ports I try (I'm using extender cables, all with x1 male ends), only three can be seen. I'm running Linux (Ubuntu 11.04 natty) and the 2.4 SDK.
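For reference, here's the quick check I run to see what the kernel has actually enumerated - just a sketch (it assumes lspci is installed, which it is on stock Ubuntu, and that the cards report themselves as ATI/AMD VGA controllers):

Code:
#!/usr/bin/env python
# Rough check: count the GPUs the kernel can actually see on the PCI(e) bus.
import subprocess

out = subprocess.check_output(["lspci"]).decode("utf-8", "replace")
gpus = [line for line in out.splitlines()
        if "VGA compatible controller" in line and ("ATI" in line or "AMD" in line)]

print("GPUs visible to the kernel: %d" % len(gpus))
for line in gpus:
    print("  " + line)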


Is this a known logic board issue - or do I have to consider that there's something wrong with either the logic board or the GPUs?

...so I give in to the rhythm, the click click clack
I'm too wasted to fight back...


BTC: 1A7HvdGGDie3P5nDpiskG8JxXT33Yu6Gct
mikeo
Full Member | Activity: 185
July 31, 2011, 11:51:40 AM  #2

You don't mention what power supply you are running. I know when I had a less than optimal PSU on my 5xGPU rig it would not see all the cards. I'm also considering going with PCIe x1 extenders that have the Molex attached to supply power directly to the cards without going through the motherboard, which seems to be an issue with my MSI 890FXA-GD65. Maybe then the board will support its full complement of 6 GPUs.


If this post tickles your fancy or helped you make more bitcoin I'll gladly take a tip:
17DWhv9f5TkRDL6kyA45qiG34d4v1QiwqE
BkkCoins
Hero Member | Activity: 784 | firstbits:1MinerQ
July 31, 2011, 02:04:17 PM  #3

Also consider that, on some boards, an x1 cable in an x16 slot may need some pins patched before the card will be detected.

There is a discussion of this elsewhere on this forum. I'm not sure if you are doing that but it seems better to use full x16 extenders for x16 slots, and x1 extenders for x1 slots. They cost about the same anyway (from 9mart.com at least, $9 for 2).

catfish
Sr. Member | Activity: 270 | teh giant catfesh
July 31, 2011, 04:19:24 PM  #4

You don't mention what power supply you are running. I know when I had a less than optimal PSU on my 5xGPU rig it would not see all the cards. I'm also considering going with PCIe x1 extenders that have the Molex attached to supply power directly to the cards without going through the motherboard, which seems to be an issue with my MSI 890FXA-GD65. Maybe then the board will support its full complement of 6 GPUs.


It's got my best PSU - the Corsair 800 with the 65A single rail 12V line Smiley Should be ample for the four GPUs I've got in there right now - it's only two 5850s and two 5770 single-slot specials. I'll be swapping the 5770s for 5830s when my cheap-deal cards arrive.
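For what it's worth, my back-of-envelope on that PSU looks like this - the per-card wattages are rough assumptions for mining load, not measurements:

Code:
# Back-of-envelope headroom check for a single-rail PSU.
rail_watts = 65 * 12                   # 65A single 12V rail
est_draw = {"5850": 160, "5770": 110}  # assumed per-card draw under mining load (W)
cards = {"5850": 2, "5770": 2}

gpu_total = sum(n * est_draw[name] for name, n in cards.items())
system_overhead = 100                  # CPU, board, fans, drives (assumed)

print("12V rail capacity : %d W" % rail_watts)
print("Estimated GPU load: %d W" % gpu_total)
print("Estimated total   : %d W" % (gpu_total + system_overhead))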

I don't think it's a PSU problem, though I've got the tools to measure power consumption etc.

I'm starting to think that it's a problem with the card - I can swap the x1 extenders around, but this particular card doesn't want to be detected at all. It's only ever that one card - the other 5770 in the same slot gets detected.

I know there's a limitation with bandwidth on the board - but there's meant to be an 'X1' mode where ALL PCIe slots are functional.

Since I'm totally out of GPUs to swap out to test whether it's a bum 5770 card, and I don't want ANY downtime due to slush's time-in-block based scoring system, I'll have to wait until upgrade time Sad

Thanks though, doesn't sound like a known fundamental problem with the board!

...so I give in to the rhythm, the click click clack
I'm too wasted to fight back...


BTC: 1A7HvdGGDie3P5nDpiskG8JxXT33Yu6Gct
catfish
Sr. Member | Activity: 270 | teh giant catfesh
August 17, 2011, 11:32:31 AM  #5

Update on this - well it DOES look like a logic board problem and not a GPU problem. I've managed to get four GPUs working on that board - one 5850 reference, one 5850 'extreme', one cheap 5830 and one Sapphire 5770. Seems to be rock-solid, but the fifth slot can't be used for some reason.

The single-slot 5770 that didn't want to be detected now resides in one of the new rigs, and every GPU I own is working hard. None have failed, so that 'invisible' 5770 wasn't duff after all.

I presume it's just a limitation of the board, with too much power being allocated to the CPU and not enough to the PCIe slots...

I've ordered an MSI 870FX-GD70 thing as they have 6 PCIe slots - this ought to allow a decent density new rig... though having ordered it, I've seen some scary looking threads indicating that some people are having trouble with this particular board Sad Well, at least 5 of the slots should work - they're all configured as x16 sockets, so one presumes that the board was designed to feed 75W to each of the x16 slots? I've got a new 1000W PSU to plug into this board - hoping to put two 5850s and three 5830s into this...

...so I give in to the rhythm, the click click clack
I'm too wasted to fight back...


BTC: 1A7HvdGGDie3P5nDpiskG8JxXT33Yu6Gct
plastic.elastic
Full Member | Activity: 168
August 17, 2011, 03:54:57 PM  #6

Before you go crazy and burn your board, make sure to attach the 12V line from the GPU directly to the PSU. Otherwise your board can't handle that many PCIe cards.



Tips gladly accepted: 1LPaxHPvpzN3FbaGBaZShov3EFafxJDG42
catfish
Sr. Member | Activity: 270 | teh giant catfesh
August 17, 2011, 07:27:06 PM  #7

Before you go crazy and burn your board, make sure to attach the 12V line from the GPU directly to the PSU. Otherwise your board can't handle that many PCIe cards.

¿Qué?

Only my 5770 cards have a single auxiliary power connector to the PSU (75W from PCIe, 75W from the 6-pin power cable from the PSU). All the other cards (5830s and 5850s of various flavours) require *two* auxiliary power cables from the PSU. That's three loads of 75W at maximum, which is 225W. Most commentators here estimate a heavily overclocked 5830 to max out at *less* than 200W, and the more efficient 5850 lower still. However, 200W per card is what most use as a 'rule of thumb' for PSU sizing.
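Putting that budget into numbers (a sketch - the 75W figures are the PCIe spec limits, the connector counts are for my particular cards):

Code:
# Per-card power budget: one PCIe slot plus any 6-pin auxiliary connectors,
# each limited to 75W by the PCIe spec.
SLOT_W = 75
SIX_PIN_W = 75
aux_connectors = {"5770": 1, "5830": 2, "5850": 2}

for name in sorted(aux_connectors):
    budget = SLOT_W + aux_connectors[name] * SIX_PIN_W
    print("%s: up to %d W available" % (name, budget))

print("Rule-of-thumb draw used for PSU sizing: ~200 W per card")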

So what are you talking about? I actually know a bit about what I'm doing, you know... I didn't move my entire business to Apple Macs until OS X 10.0 turned up - before then, I was running Linux and building my own PCs.


You may have a very good point, however, though I'm not sure if you meant it or not. The big issue is that the PCIe spec calls for 75W per slot being made available. The new MSI board I've ordered (890FX-GD70, IIRC) has 5 full-length blue slots - evidently intended for graphics cards. This *theoretically* means that MSI have engineered the board to supply 75W to each of the full-length slots... that's a whopping 375W if I sling 5 cards into the rig (as I intend to do)...

Does the ATX connector supply nearly 400W to the logic board? I doubt it...

Does the CPU power socket join a common 12V rail on the logic board, or is it hypothecated purely for the CPU? My CPU sure as hell won't need hundreds of watts, it's the cheapest CPU I could find and will be underclocked to the minimum - it's a dedicated miner after all. If the CPU power is hypothecated, and the logic board can't route that supply to the PCIe slots, then the *total* 375W at full chat will be sucked down that ATX cable...
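A rough back-of-envelope suggests not - assuming the usual two +12V pins on the 24-pin ATX connector and a conservative ~6A per pin (actual terminal ratings vary):

Code:
# Rough 12V capacity of the 24-pin ATX connector vs worst-case slot load.
# Assumes two +12V pins at ~6A each; real ratings depend on the terminals used.
PINS_12V = 2
AMPS_PER_PIN = 6.0

atx_12v_watts = PINS_12V * AMPS_PER_PIN * 12.0
five_slots_full = 5 * 75

print("24-pin ATX, 12V side  : ~%.0f W" % atx_12v_watts)
print("Five slots at 75W each: %d W" % five_slots_full)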


The question I haven't heard answered is whether the GPUs preferentially draw power from the PCIe 6-pin power cables, or take the full 75W from the PCIe slot itself first. Or, of course, whether the power consumption is balanced against whatever source has the least resistance, theoretically (given the cable thickness) preferring the dedicated PCIe power cables, so a 5-card full-power rig won't pull 375W from the logic board, but only whatever excess over 150W cannot be supplied by the twin PCIe power cables...


What I *do* know is that my new board doesn't appear to have a specialised PCIe slot power socket - there's no way to bump up the power available to the PCIe slots from the PSU, other than the supply from the ATX connector. So I'm hoping the graphics cards preferentially suck power from the dedicated PCIe power cables - straight from the PSU - and only take the 'excess' over 150W from the logic board, because you are quite right that excessive load on the logic board itself may cause problems.

That said - what the hell are MSI doing if they're selling logic boards with 5 blue full-length x16 PCIe slots - if the logic board cannot supply the rated power to the slots when all of them are populated? If my problems with the Asus board were due to the inability to supply 75W to all 5 PCIe slots, then perhaps this is common practice amongst logic board manufacturers, and we all have a problem. But the Asus board had 3 full-length (i.e. intended for GPUs) slots and two x1 slots, so may have been engineered to supply 75W to the three long slots and whatever's left to the rest. The new MSI board has NO excuse. I hope it works; it should arrive tomorrow, so we'll see...

...so I give in to the rhythm, the click click clack
I'm too wasted to fight back...


BTC: 1A7HvdGGDie3P5nDpiskG8JxXT33Yu6Gct
plastic.elastic
Full Member | Activity: 168
August 17, 2011, 08:04:05 PM  #8

LOL,

Tips gladly accepted: 1LPaxHPvpzN3FbaGBaZShov3EFafxJDG42
cicada
Full Member | Activity: 196
August 17, 2011, 08:38:04 PM  #9

You sure write alot... yet all nonsense.... rambling much?

Since you claimed to be know-it-all guy, this is my last response to you. Good luck....

You need a ladder to get off that high horse?  

Most of the OP's previous post was questions - if he already knew, or assumed he knew, the answers, why ask?

So far the only person I see claiming to be 'know it all' certainly isn't the OP.


The question I haven't heard answered is whether the GPUs preferentially draw power from the PCIe 6-pin power cables, or take the full 75W from the PCIe slot itself first.

PCIe specs call for 75W supplied via the PCIe slot, it's up to the vendor how to actually manage that power.  Without tracing pins / caps / resistors on the board I can't say exactly, but you should assume the GPU is going to want and use all 75W.

That said - what the hell are MSI doing if they're selling logic boards with 5 blue full-length x16 PCIe slots

The Big Bang Marshall has 8!  

MSI tends to over-spec their components (that's the 'military class' they like to advertise so often), so it's quite likely the board will have no problem supplying 75W to all the PCIe slots.  Whether or not your ATX 12V pins melt isn't their problem, however.  As a rule of thumb, any time you're using more than 4 cards, I suggest PCIe extenders with Molex power adapters to lessen the strain on your ATX connector.

Team Epic!

All your bitcoin are belong to 19mScWkZxACv215AN1wosNNQ54pCQi3iB7
catfish
Sr. Member | Activity: 270 | teh giant catfesh
August 17, 2011, 09:07:04 PM  #10

You sure write alot... yet all nonsense.... rambling much?

<snip>But here is what you should know to wrap your head around it...

<snip>Stop blaming MSI for your silly expectation. You assumed too much .

<snip>hello ? anyone home?

<snip>read more

<snip>Since you claimed to be know-it-all guy, this is my last response to you. Good luck....
The last resort of the uneducated... ad-hominem attacks.  Roll Eyes

If it was all nonsense, how come you managed to make some effort to 'answer' the questions that I clearly asked?

Anyway I'll not lower myself to your level. Have a good evening.

...so I give in to the rhythm, the click click clack
I'm too wasted to fight back...


BTC: 1A7HvdGGDie3P5nDpiskG8JxXT33Yu6Gct
Elder III
Member | Activity: 91
August 17, 2011, 10:13:19 PM  #11

As a general rule of thumb, anytime you plan to run more than 4 video cards on one motherboard you may need one (or more) of the special PCIe extenders with the Molex adapter to allow the card to draw all its power directly from the PSU.  Some people seem to be able to get 5 GPUs working just fine without that measure, but it seems to vary, even when using the same components.  My personal feeling on the matter is that most hardware out there is going to last longer if it's not stressed as much, and I'd rather spend the $25 to get the adapter from Cablesaurus than fry my motherboard and have to deal with that, let alone all the downtime...  Some motherboards may be designed to supply full power to 5 or more PCIe slots, but I don't know which ones can or can't do that.  *it would make some interesting research though* Smiley
catfish
Sr. Member | Activity: 270 | teh giant catfesh
August 18, 2011, 10:44:23 AM  #12

As a general rule of thumb, anytime you plan to run more than 4 video cards on one motherboard you may need one (or more) of the special PCIe extenders with the Molex adapter to allow the card to draw all its power directly from the PSU.  Some people seem to be able to get 5 GPUs working just fine without that measure, but it seems to vary, even when using the same components.  My personal feeling on the matter is that most hardware out there is going to last longer if it's not stressed as much, and I'd rather spend the $25 to get the adapter from Cablesaurus than fry my motherboard and have to deal with that, let alone all the downtime...  Some motherboards may be designed to supply full power to 5 or more PCIe slots, but I don't know which ones can or can't do that.  *it would make some interesting research though* Smiley
Well, I have a couple of PCIe extenders with the Molex power augmentation so I'll test that out.

The annoying thing is that I'm using Slush's pool, and it's well known that if you drop out of a particular round for any length of time, your reward is rather adversely affected! Smiley Hence I try to keep my miners online and working all the time - tinkering and restarting hits the income stream... I was planning to build an entirely new rig today with the kit I ordered yesterday, but the liars at Scan failed to deliver on the date they promised (and which I paid extra for, since I've got today free to build this thing). So unless I can obtain a 5-card logic board from a retail store reasonably nearby, I'd have to power down my existing Asus board to experiment, plus the original miner that is currently using the Molex-augmented extender...

What the hell, I may do it anyway.

Interestingly though - if 4 cards (300W through the logic board to PCIe slots) is the 'consensus' limit, what's the point of 'extreme' boards like the 'Big Bang' series, and other preternaturally over-endowed boards with preposterous names? These are being sold with 7 or 8 full-length slots. Unless they're being built *solely* for the GPGPU processing crew (as you know, there are plenty of embarrassingly parallel scientific computing projects out there, not just Bitcoin mining), who, like us, may consider augmenting the PCIe power supply with custom-made cables, surely these boards are going to be somewhat disappointing to other customers who expect the slots to 'just work'?

Then again, I can't think of any customer base for 8-slot boards aside from GPGPU computing types... except perhaps the top end of the 'extreme gamer' segment. And given the layout of the boards... only 4 GPUs can be installed without cable extenders anyway. I only use Windows when working with clients on their premises, and I don't do this sort of 'extreme hardware' thing for a living Smiley so have no idea whether Windows drivers even allow 'crossfire' with more than 4 cards. If not, then even the 'extreme gamers' have no need for more than 4 GPUs. I'm not a gamer, so would appreciate being educated on this subject...

So (apologies for thinking out loud) - the conclusion is that these extended ATX boards with loads of PCIe slots only have a market for GPGPU types. Since this sub-community (especially the large operations, or university scientific computing teams) would benefit from the high density afforded by 8 cards on one board, why aren't the logic board manufacturers including an additional power socket on the logic board to ensure each PCIe slot can handle a load of 75W? If the customer base is likely to fill all slots with GPUs, then surely one engineers the board to the likely customer usage?

I seriously don't believe that the majority of people who buy these 'big bang' type boards are just show-offs who boast about 'extreme' hardware but don't actually use the slots.


The main problem is that Bitcoin miners are trying to do this on the cheap. Yes, one can acquire an 'extreme' board for £300 but that's very expensive, so most (including myself) look for the cheaper boards with maybe 2 PCIe x16 and three PCIe x1 - and with luck, and x1-x16 extender cables, this would allow 5 cards on one board... after all, the economics of Bitcoin mining get hit by overly expensive hardware. And *this* type of board almost certainly isn't designed to handle 5 full-load Radeon 58xx series GPUs. However, the *specifically designed* quad-crossfire boards and above, with loads of x16 full-length slots - these surely *should* handle the power, given that all the customers for these boards will be doing something like this?

I guess there's no real answer other than to check out the *successful* mining rigs with 'extreme' logic boards running 5+ GPUs without Molex-augmented extenders - i.e. piggyback on other peoples' R&D rather than do it all myself...



TL;DR - who is running 5+ GPUs successfully on one logic board without Molex-augmented extenders, and which logic board are you using? Have you had any problems? Which boards definitely do *not* work, and have had to be returned? I will propose the Asus P8Z68-V board as unable to power all 5 PCIe slots with GPUs, to start.

...so I give in to the rhythm, the click click clack
I'm too wasted to fight back...


BTC: 1A7HvdGGDie3P5nDpiskG8JxXT33Yu6Gct
mikeo
Full Member | Activity: 185
August 18, 2011, 12:12:38 PM  #13

@catfish,

We need a separate motherboard thread to provide a listing of "What Works" with four or more GPUs. I have some of the same poor experiences with motherboards as yourself and it's a drag having to glean tidbits of information from dozens of threads.

So I'll start a thread.

If this post tickles your fancy or helped you make more bitcoin I'll gladly take a tip:
17DWhv9f5TkRDL6kyA45qiG34d4v1QiwqE
catfish
Sr. Member | Activity: 270 | teh giant catfesh
August 18, 2011, 12:15:26 PM  #14

@catfish,

We need a separate motherboard thread to provide a listing of "What Works" with four or more GPUs. I have some of the same poor experiences with motherboards as yourself and it's a drag having to glean tidbits of information from dozens of threads.

So I'll start a thread.
Excellent, great idea. Will contribute when it's up and running Smiley Thanks!

...so I give in to the rhythm, the click click clack
I'm too wasted to fight back...


BTC: 1A7HvdGGDie3P5nDpiskG8JxXT33Yu6Gct
catfish
Sr. Member | Activity: 270 | teh giant catfesh
August 18, 2011, 07:14:28 PM  #15

BREAKTHROUGH!

I've got all 5 slots working. With a bit of experimentation, I found the sekrit voodoo required to convince this infernal board to recognise all 5 GPUs in a fully populated board.

On Plastic Elastic's advice, I tried running the 5th card to the remaining spare slot using a Molex-augmented PCIe extender - using his/her theory that the logic board wasn't man enough to send the specified 75W to each of the 5 populated slots (which, at a board-level load of 375W, was certainly plausible). Note that all the other cards were connected to their respective slots using my trusty x1->x16 unpowered extenders. Even where I have a full-length slot, I only plug an x1-width 'card end' into it. This has worked fine everywhere else - you can see how I'm using x1 cables in my frame rigs here (apologies for repeating the large photo):

[photo: the open-frame rig with x1 extender cables]
The difference between my cables and the Cablesaurus ones I started with is that mine have a full-length x16 female connector at the end of the two-ply length of x1 ribbon cable, with clip and stub of PCB. I don't see much point in using unwieldy x16 full-width ribbon cable extenders if (a) I don't need the bandwidth, and (b) the x1 cables work fine in x16 slots.

This turned out to be the voodoo magick. A few threads going back seemed to insist that if you had an x16 slot, then you should use a full-width x16 extender. I haven't been doing this, since I've found that x1->x16 extenders work in 99% of cases...

...but switching to an x16->x16 extender for the remaining (grey) PCIe slot on this damn Asus logic board worked! You can see in the photo (taken before I managed to get all 5 slots usable) that there's an unused grey x16 PCIe slot, between two light blue old-PCI slots. For some reason, any GPU plugged into this slot using an x1->x16 extender isn't visible. However, plug the *same* GPU into the slot using an x16->x16 extender, and suddenly it all works...


Interestingly Plastic's power concerns proved unfounded - I thought he/she was almost certainly right and I'd need to buy a load of Molex-augmented PCIe extenders. However the resistance of the logic board seems to be much higher than the direct-to-PSU power leads, and hence the load is distributed unequally. The PCIe power cables provide the power until they're maxed out, and only then does the card pull any significant power from the PCIe slot itself. The very useful Mrb 'whitepixel' blog page about the quad-5970 password hash cracking machine, amongst loads of other great information, also seems to agree that the big GPUs pull power from the direct-to-PSU cables preferentially, leaving any excess over 150W (in an ideal world) taken from the PCIe slot itself. You'd need GPUs pulling 225W to max out each PCIe slot - at which point I agree with Plastic's and Elder III's concern about the safety of the logic board!
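If that model is right, the slot's share is just whatever a card draws above what its two 6-pin leads can cover - a sketch with assumed (not measured) draw figures:

Code:
# If a card prefers its auxiliary 6-pin power, the slot only supplies the excess.
AUX_CAPACITY = 2 * 75   # two 6-pin connectors at 75W each

for card, draw in (("5850", 160), ("5830", 175), ("hypothetical 225W card", 225)):
    slot_draw = max(0, draw - AUX_CAPACITY)
    print("%s drawing %d W: ~%d W from the slot" % (card, draw, slot_draw))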

However none of my GPUs can pull 225W, even with my moderate overclocking. So all seems well so far... the 5-slot board is now sporting two 5850s, two 5830s, and a lone Sapphire dual-slot 5770 (which happily clocks to 1000 on standard voltages and air cooling). It's sucking around 900-950W from the wall, and I'm only using a 1000W PSU, but load from the wall is always higher than actual supply power (due to conversion inefficiencies), so the PSU is probably supplying around 800-850W (if it's a good PSU... it's a Cooler Master SilentPro) to the system. 80-85% of rated power should still be around the PSU's sweet spot - does this sound about right?
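Quick sanity check on those numbers - the efficiency figure is assumed, since I haven't measured this PSU's actual curve:

Code:
# Wall draw -> estimated DC load, given an assumed PSU efficiency.
wall_watts = 925        # midpoint of the 900-950W measured at the wall
efficiency = 0.85       # assumed; a decent unit is typically ~80-88% at this load
psu_rating = 1000

dc_load = wall_watts * efficiency
print("Estimated DC load : %.0f W" % dc_load)
print("Fraction of rating: %.0f%%" % (100.0 * dc_load / psu_rating))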


TL; DR: catfish gets all 5 PCIe slots populated with GPUs on the Asus P8Z68-V logic board by using a full-width x16->x16 PCIe extender cable in the *grey* PCIe slot. All other slots (including the two other full-width x16 slots) happily accept the x1->x16 PCIe extenders. Power-augmented extenders (i.e. with Molex) are not needed. Yes, this makes no sense - ask Asus WTF their engineers were smoking...

...so I give in to the rhythm, the click click clack
I'm too wasted to fight back...


BTC: 1A7HvdGGDie3P5nDpiskG8JxXT33Yu6Gct