Author Topic: 16x extender cable problem on MSI BB Marshal motherboard (Read 3142 times)
dishwara (OP)
Legendary
Activity: 1855
Merit: 1016
June 03, 2011, 12:01:37 PM
#1

My motherboard is http://www.msi.com/product/mb/Big-Bang-Marshal--B3-.html

It has 8 PCIe slots. The cards I use to mine are 5870s. No CrossFire.
Windows 7 32-bit, using Catalyst 11.5 and the latest APP SDK 2.4 drivers.
One monitor; the other cards have dummy plugs.

I connected one card to the 1st slot (PCIE_1) using a 16x extender cable, and in the next slot I connected another card using a 1x-16x extender cable.
Both worked fine.
If I connect a card directly to the board it blocks the 2nd slot as well, so I have to use cables.

I connected a 3rd card to the PCIE_3 slot using a 1x-16x extender cable. Windows didn't detect the card, so I tried another cable, and Windows still didn't detect it.
After some thought I used a 16x extender cable to connect the 3rd card, and Windows detected it.

After some time of mining I tried the 1x-16x extender cable again, and Windows didn't detect the card.

It seems I have to use alternating cable types in alternating slots for Windows to detect the cards:

PCIE_1, PCIE_3, PCIE_5, PCIE_7 with 16x extender cables, and
PCIE_2, PCIE_4, PCIE_6, PCIE_8 with 1x-16x extender cables, to connect eight 5870 cards.

Am I right?
Why won't Windows detect the card in the alternate slot?
I don't have any more cards to try, nor any more 16x extender cables.

The motherboard has 4 switches to turn the PCIe slots on and off, but only 4 switches: PGE1, PGE2, PGE3, PGE4.
Switching PGE2 and the others on/off has no effect.

The rig is not my own; it's my shareholders' mining rig, so I can't take risks.

Please, someone help me with advice on this odd behavior.
Will the same problem happen in Linux?
Will I definitely be able to run 8 cards by alternating cable types in alternating slots?

Thanks in advance.
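
Whichever OS the rig ends up on, a quick way to check how many GPUs the OpenCL runtime actually exposes (which is what the miner sees, rather than what Device Manager shows) is a short PyOpenCL script. This is only a sketch and assumes Python plus the pyopencl package are installed alongside the APP SDK:

Code:
import pyopencl as cl

# Print every GPU each OpenCL platform reports (sketch; assumes pyopencl is installed).
for platform in cl.get_platforms():
    try:
        gpus = platform.get_devices(device_type=cl.device_type.GPU)
    except cl.Error:
        gpus = []
    print("%s: %d GPU(s)" % (platform.name, len(gpus)))
    for i, dev in enumerate(gpus):
        print("  [%d] %s, %d MB" % (i, dev.name, dev.global_mem_size // (1024 * 1024)))

If a card shows up here but not in the miner, the problem is on the miner side; if it is missing here too, the driver or the slot/riser never brought it up.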
Meatball
Sr. Member
Activity: 378
Merit: 250
June 03, 2011, 12:06:44 PM
#2

I'm almost positive I read in another forum post that Windows is capped at 4 GPUs. If you want 8, you'll need to go with Linux.
MiningBuddy
Hero Member
Activity: 927
Merit: 1000
June 03, 2011, 12:11:08 PM
Last edit: August 11, 2011, 09:32:52 PM by MiningBuddy
#3

Eight 5870s on that mobo and you're running the risk of blowing the FSB.

Also, Windows is supposedly capped at 4 GPUs; use Linux, especially if this is a dedicated rig paid for by shareholders.

bcpokey
Hero Member
Activity: 602
Merit: 500
June 03, 2011, 12:16:34 PM
#4

Quote from: Meatball
I'm almost positive I read in another forum post that Windows is capped at 4 GPUs. If you want 8, you'll need to go with Linux.

http://forums.nvidia.com/index.php?showtopic=97478 claims otherwise. I have no first-hand experience, so I can't say directly, but I see no reason why Windows would set such a limit.

/agreed as to possibly destroying the FSB, though. Consider an externally powered PCI-E riser or something.
rezin777
Full Member
Activity: 154
Merit: 100
June 03, 2011, 12:18:10 PM
#5

In fact, if this is a dedicated rig with shareholders, wtf are you even doing thinking about using Windows?

Or making mining as expensive and complicated as possible by using a Big Bang Marshal?
carbonc
Member
Activity: 126
Merit: 60
June 03, 2011, 01:27:43 PM
#6

I can see that filling all 8 PCIe slots would require tremendous power. Probably enough to trip a 15-20 amp fuse in a standard home.
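
For a rough sanity check of that claim, here is a back-of-the-envelope estimate. It is only a sketch: the ~188 W board power per 5870, the 150 W system overhead, and the PSU efficiency are assumptions, not measurements from this rig.

Code:
# Rough power estimate for an 8x HD 5870 rig (all figures are assumptions).
CARD_WATTS = 188        # typical HD 5870 board power under load (assumed)
CARDS = 8
SYSTEM_WATTS = 150      # CPU, board, drives, fans (guess)
PSU_EFFICIENCY = 0.88   # roughly 80 PLUS Gold at this load (assumed)

dc_load = CARDS * CARD_WATTS + SYSTEM_WATTS    # ~1654 W on the DC side
wall_draw = dc_load / PSU_EFFICIENCY           # ~1880 W from the wall

for mains_volts in (120, 230):
    amps = wall_draw / mains_volts
    print("%d V mains: about %.0f W -> %.1f A" % (mains_volts, wall_draw, amps))
# On a 120 V / 15 A household circuit this is already over the limit;
# on 230 V it is closer to 8 A, so whether the fuse trips depends on locale.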

Genrobo
Newbie
Activity: 28
Merit: 0
June 03, 2011, 01:31:52 PM
#7

Quote from: Meatball
I'm almost positive I read in another forum post that Windows is capped at 4 GPUs. If you want 8, you'll need to go with Linux.

Quote from: bcpokey
http://forums.nvidia.com/index.php?showtopic=97478 claims otherwise. I have no first-hand experience, so I can't say directly, but I see no reason why Windows would set such a limit.

/agreed as to possibly destroying the FSB, though. Consider an externally powered PCI-E riser or something.

That forum post on nvidia.com is about 4 dual-GPU cards.
The OP is trying to use 8 single-GPU cards.

The difference is...
They only use 4 PCI-E slots.
Dishwara is literally trying to use 8 PCI-E slots.

Big difference in practical application.
Genrobo
Newbie
Activity: 28
Merit: 0
June 03, 2011, 01:32:51 PM
#8

Dishwara, switch to Linux.
If this is going to be a dedicated machine for shareholders, you'll be much better off.
Secondly...
How in the hell are you powering that behemoth?
dishwara (OP)
Legendary
Activity: 1855
Merit: 1016
June 03, 2011, 01:36:47 PM
#9

I am using two Cooler Master Silent Pro Gold 1200 W power supplies, giving me a total of 2400 W.
Also, I don't understand how it would blow the FSB?
Genrobo
Newbie
Activity: 28
Merit: 0
June 03, 2011, 01:39:36 PM
Last edit: June 03, 2011, 01:51:39 PM by Genrobo
#10

Quote from: dishwara
I am using two Cooler Master Silent Pro Gold 1200 W power supplies, giving me a total of 2400 W.
Also, I don't understand how it would blow the FSB?

Motherboards provide power over the PCI-E slots to whatever's plugged into them.
Up to 75 W per slot.

75 W x 8 = 600 W

If your motherboard tries to pull 600 W through it, it'll probably fry internal connections / blow out the bus.


Also, a PCI-E 2.0 x16 slot can provide 128 Gb/s of bandwidth (both directions combined): it has 16 lanes, each able to do 8 Gb/s.

Let's say you run each slot at its maximum speed...
128 Gb/s x 8 = 1024 Gb/s
That's over a terabit per second.

Do you think your motherboard can internally handle data transfers that high?
Now, that's if you're loading these cards 100%, which hashing doesn't really do, but still, to use them all hashing simultaneously you will probably pull some impressive bandwidth numbers.
Possibly high enough to blow out the bus / stall the system / crash the system.
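
For reference, the per-slot figure works out as below. This is a sketch of the theoretical maxima only, assuming PCI-E 2.0 signalling (5 GT/s per lane with 8b/10b encoding); it says nothing about what the board's bridge chips or the chipset uplink can actually sustain.

Code:
# PCI-E 2.0 bandwidth arithmetic (sketch; all figures are theoretical maxima).
GT_PER_LANE = 5.0            # 5 GT/s raw signalling rate per lane
ENCODING = 8.0 / 10.0        # 8b/10b encoding overhead
LANES_PER_SLOT = 16
SLOTS = 8

gbps_per_lane_each_way = GT_PER_LANE * ENCODING                        # 4 Gb/s per direction
gbps_per_slot_both_ways = gbps_per_lane_each_way * 2 * LANES_PER_SLOT  # 128 Gb/s
total_gbps = gbps_per_slot_both_ways * SLOTS                           # 1024 Gb/s

print("per lane, one direction : %.0f Gb/s" % gbps_per_lane_each_way)
print("per x16 slot, both ways : %.0f Gb/s (%.0f GB/s)" %
      (gbps_per_slot_both_ways, gbps_per_slot_both_ways / 8))
print("8 slots combined        : %.0f Gb/s (%.0f GB/s)" %
      (total_gbps, total_gbps / 8))
# Whether the board can route anything close to this to the CPU is a separate
# question; these are just the slot-side theoretical numbers.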
dishwara (OP)
Legendary
Activity: 1855
Merit: 1016
June 03, 2011, 01:43:17 PM
#11

I am powering the cards from the PSU itself: 4 PCI-E 6/8-pin connectors come straight off the PSU, and another 2 cables were supplied, each with one 6-pin and one 6+2-pin PCI-E connector.
So with one power supply I connect 4 cards, all drawing power from the PSU itself.
Genrobo
Newbie
Activity: 28
Merit: 0
June 03, 2011, 01:45:50 PM
#12

Quote from: dishwara
I am powering the cards from the PSU itself: 4 PCI-E 6/8-pin connectors come straight off the PSU, and another 2 cables were supplied, each with one 6-pin and one 6+2-pin PCI-E connector.
So with one power supply I connect 4 cards, all drawing power from the PSU itself.

Regardless, PCI-E slots still provide some power to the cards, whether they're externally powered or not.
Genrobo
Newbie
Activity: 28
Merit: 0
June 03, 2011, 01:49:49 PM
#13

In a setup like that, I just can't see how one or more cards wouldn't be sitting idle waiting for free bandwidth on the bus to transfer its data, causing internal lag/latency.
bcpokey
Hero Member
Activity: 602
Merit: 500
June 03, 2011, 01:51:14 PM
#14

Quote from: Meatball
I'm almost positive I read in another forum post that Windows is capped at 4 GPUs. If you want 8, you'll need to go with Linux.

Quote from: bcpokey
http://forums.nvidia.com/index.php?showtopic=97478 claims otherwise. I have no first-hand experience, so I can't say directly, but I see no reason why Windows would set such a limit.

/agreed as to possibly destroying the FSB, though. Consider an externally powered PCI-E riser or something.

Quote from: Genrobo
That forum post on nvidia.com is about 4 dual-GPU cards.
The OP is trying to use 8 single-GPU cards.

The difference is...
They only use 4 PCI-E slots.
Dishwara is literally trying to use 8 PCI-E slots.

Big difference in practical application.

I'm not talking about practical application, I am talking about software limitations. 4 dual-GPU cards still act as 8 physical GPUs. Are you saying that if I put 4 GPUs in my Windows machine and then bought a PCI-E RAID controller, Windows would not be able to recognize the 5th PCI-E device?
Genrobo
Newbie
Activity: 28
Merit: 0
June 03, 2011, 01:53:14 PM
#15

Quote from: Meatball
I'm almost positive I read in another forum post that Windows is capped at 4 GPUs. If you want 8, you'll need to go with Linux.

Quote from: bcpokey
http://forums.nvidia.com/index.php?showtopic=97478 claims otherwise. I have no first-hand experience, so I can't say directly, but I see no reason why Windows would set such a limit.

/agreed as to possibly destroying the FSB, though. Consider an externally powered PCI-E riser or something.

Quote from: Genrobo
That forum post on nvidia.com is about 4 dual-GPU cards.
The OP is trying to use 8 single-GPU cards.

The difference is...
They only use 4 PCI-E slots.
Dishwara is literally trying to use 8 PCI-E slots.

Big difference in practical application.

Quote from: bcpokey
I'm not talking about practical application, I am talking about software limitations. 4 dual-GPU cards still act as 8 physical GPUs. Are you saying that if I put 4 GPUs in my Windows machine and then bought a PCI-E RAID controller, Windows would not be able to recognize the 5th PCI-E device?

I don't even think it's 100% a software limitation.
It may very well be a hardware limitation, because of the amount of bandwidth needed/available to run that many devices over PCI-E.
bcpokey
Hero Member
Activity: 602
Merit: 500
June 03, 2011, 01:57:18 PM
#16

Quote from: Meatball
I'm almost positive I read in another forum post that Windows is capped at 4 GPUs. If you want 8, you'll need to go with Linux.

Quote from: bcpokey
http://forums.nvidia.com/index.php?showtopic=97478 claims otherwise. I have no first-hand experience, so I can't say directly, but I see no reason why Windows would set such a limit.

/agreed as to possibly destroying the FSB, though. Consider an externally powered PCI-E riser or something.

Quote from: Genrobo
That forum post on nvidia.com is about 4 dual-GPU cards.
The OP is trying to use 8 single-GPU cards.

The difference is...
They only use 4 PCI-E slots.
Dishwara is literally trying to use 8 PCI-E slots.

Big difference in practical application.

Quote from: bcpokey
I'm not talking about practical application, I am talking about software limitations. 4 dual-GPU cards still act as 8 physical GPUs. Are you saying that if I put 4 GPUs in my Windows machine and then bought a PCI-E RAID controller, Windows would not be able to recognize the 5th PCI-E device?

Quote from: Genrobo
I don't even think it's 100% a software limitation.
It may very well be a hardware limitation, because of the amount of bandwidth needed/available to run that many devices over PCI-E.

I find the bandwidth argument difficult to accept. Why would it work under Linux but not Windows if there were a physical limitation? Moreover, many users report 100% hash rates running cards via x1 PCI-E slots, which theoretically should mean you could run more than 16 cards.
Genrobo
Newbie
Activity: 28
Merit: 0
June 03, 2011, 02:01:44 PM
#17

I never implied his 8-card implementation would work in Linux.
I'd like him to test it just to see, personally.
However, with 8 cards over PCI-E that's a total aggregate bandwidth of 128 GB/s.
That's more than double what AMD's HyperTransport or Intel's QuickPath Interconnect can currently do.

If there's some magical chip on a motherboard that can handle aggregate bandwidth that high, by all means, let me know.
bcpokey
Hero Member
Activity: 602
Merit: 500
June 03, 2011, 02:11:50 PM
#18

Quote from: Genrobo
I never implied his 8-card implementation would work in Linux.
I'd like him to test it just to see, personally.
However, with 8 cards over PCI-E that's a total aggregate bandwidth of 128 GB/s.
That's more than double what AMD's HyperTransport or Intel's QuickPath Interconnect can currently do.

If there's some magical chip on a motherboard that can handle aggregate bandwidth that high, by all means, let me know.

Well, again, as I mentioned, people are reporting full hashing rates using x1 lanes. An x1 lane has a maximum bandwidth of about 1 GB/s, so unless there are some magical x1 lanes that can use x16 bandwidth, or cards request the full bandwidth of their PCI-E slot while mining for no reason other than that it's available, bandwidth is not a concern.
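
To put rough numbers on that point, the sketch below compares what a getwork-style miner might push across the bus with the theoretical slot bandwidth. The 4 KB payload and 5 dispatches per second per card are illustrative assumptions, not measurements of any particular miner.

Code:
# How much PCI-E bandwidth does hashing actually use? (sketch; the payload
# size and dispatch rate are illustrative assumptions, not measurements)
X1_BYTES_PER_SEC = 500e6     # PCI-E 2.0 x1, one direction, theoretical
X16_BYTES_PER_SEC = 8e9      # PCI-E 2.0 x16, one direction, theoretical

WORK_BYTES = 4 * 1024        # assumed payload per kernel dispatch
DISPATCHES_PER_SEC = 5       # assumed dispatch rate per card

bytes_per_sec_per_card = WORK_BYTES * DISPATCHES_PER_SEC
print("per-card traffic     : ~%d bytes/s" % bytes_per_sec_per_card)
print("share of an x1 slot  : %.6f%%" % (100.0 * bytes_per_sec_per_card / X1_BYTES_PER_SEC))
print("share of an x16 slot : %.8f%%" % (100.0 * bytes_per_sec_per_card / X16_BYTES_PER_SEC))
# Even with generous assumptions the bus is essentially idle, which is
# consistent with the reports of full hash rates on x1 risers.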
Genrobo
Newbie
Activity: 28
Merit: 0
June 03, 2011, 02:18:50 PM
#19

Good point. Even so, handling bandwidth requests from 8 different slots simultaneously would surely add some lag.
A motherboard isn't an infinite-lane highway.