Bitcoin Forum
Author Topic: 4X 6990 on MSI 970A-G45  (Read 2550 times)
Striker (OP)
Newbie
Activity: 42
Merit: 0
September 05, 2011, 02:37:52 PM
 #1

Hi,


Has anyone tried to put 4 x 6990s on the MSI 970A-G45 or any other 970-chipset motherboard?

According to the specs, these boards have two PCI-e x16 and two PCI-e x1 slots, all 2.0.

In any case, has anyone tried to use all the PCI-e slots on something like that?


Regards.
"In a nutshell, the network works like a distributed timestamp server, stamping the first transaction to spend a coin. It takes advantage of the nature of information being easy to spread but hard to stifle." -- Satoshi
CanaryInTheMine
Donator
Legendary
Activity: 2352
Merit: 1060
between a rock and a block!
September 05, 2011, 10:29:02 PM
 #2

Quote
Hi,

Has anyone tried to put 4 x 6990s on the MSI 970A-G45 or any other 970-chipset motherboard?

According to the specs, these boards have two PCI-e x16 and two PCI-e x1 slots, all 2.0.

In any case, has anyone tried to use all the PCI-e slots on something like that?

Regards.


Putting 3 dual-GPU cards on this mobo under Windows doesn't work for me. Nothing but problems. The best I can get under 64-bit Win7 is 2 dual-GPU cards plus something like a 5850.
Striker (OP)
Newbie
Activity: 42
Merit: 0
September 06, 2011, 09:22:43 PM
 #3

Hi,


I will be using Linux, not windoze... :)
I would like to know whether the mobo really supports the quad setup and the 8 GPUs under Linux, not Windows.

Regards.
Striker (OP)
Newbie
Activity: 42
Merit: 0
September 12, 2011, 06:18:43 PM
 #4

Hi,



Quote
How do you plan to power the whole thing?


With 375 W x 4 = 1500 W for the cards, plus mobo, memory, CPU, fans, etc., I would say 1650 W is the expected peak power draw.

I will use two OCZ 1000 W PSUs. It could also be two Corsair 1000 W units, depending on availability.
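Just to show how I arrived at that figure, here is the back-of-the-envelope calculation written out (the overhead number is only my own assumption):

Code:
# Worst-case power budget; the overhead figure is an assumption.
CARD_TDP_W = 375          # HD 6990 board power (TDP)
NUM_CARDS = 4
OVERHEAD_W = 150          # mobo + CPU + RAM + fans, assumed

total_w = CARD_TDP_W * NUM_CARDS + OVERHEAD_W
print(f"expected worst-case draw: {total_w} W")                        # 1650 W

PSU_RATING_W = 1000
NUM_PSUS = 2
print(f"combined PSU load: {total_w / (PSU_RATING_W * NUM_PSUS):.0%}")  # ~82%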


Regards.
cicada
Full Member
Activity: 196
Merit: 100
September 12, 2011, 06:33:22 PM
 #5

My experience with dual-GPU cards is nonexistent, but I do know they divide the available bandwidth between the GPUs. Even though mining is bandwidth-minimal, I'd be concerned that a 6990 wouldn't be too happy dividing up an x1 slot.

Take it with a grain of salt, though; again, I know little of these things. If someone else has run a 5970/6990 or other dual-GPU card on x1 PCIe, please chime in ;)

Team Epic!

All your bitcoin are belong to 19mScWkZxACv215AN1wosNNQ54pCQi3iB7
jamesg
VIP
Legendary
Activity: 1358
Merit: 1000
AKA: gigavps
September 13, 2011, 03:38:21 PM
 #6


Quote
This is not a problem.

Bandwidth needs for mining are almost nil.

See zorinaq's blog; he has four 5970s packed into a single mobo via x1-to-x16 extenders.

The real concern when hooking lots of dual-GPU boards into one mobo is that you're going to pull way too much current through the PCI-e bus (beyond spec) and risk blowing/burning something up. Hence the PCI-e extenders-with-molex trick.



The extenders with the molex are absolutely necessary. You will also not be able to get *huge* overclocks with multiple cards. Plan for middle of the road.
haploid23
Legendary
Activity: 812
Merit: 1002
September 13, 2011, 10:36:22 PM
 #7

It should be theoretically possible, although you'll most likely run into problems with the mobo/Windows/drivers accepting all 8 GPUs, and this assumes you're going to use PCI-e extenders with a molex power connector; you're not going to be able to pull that much power (stably) through the PCI-e slots of the motherboard. I can run 6 GPUs fine, but I start having problems with the 7th physical card. I think this is a limitation of my motherboard, not Windows or the drivers.

The only places that sell these extenders with molex power are Cablesaurus and me, I believe:

https://bitcointalk.org/index.php?topic=43614.0
https://bitcointalk.org/index.php?topic=38725.0

Striker (OP)
Newbie
Activity: 42
Merit: 0
September 13, 2011, 11:29:09 PM
 #8

Hi,

About the issues raised:

I am using Linux; I cannot imagine anyone using an OS other than Linux for this or any other serious work... Add to this that my rigs will be in a remote location, and I have to work during the day.
I was obviously planning to use PCI-e adapters with the direct PSU molex connection.

About the bandwidth:
PCI-e x1 2.0 = 500 MB/s (5 GT/s)

I already asked around in the newbie section, and this was indeed my first concern, because I have no mining experience whatsoever, although I have extensive PC assembly, overclocking, and many years of troubleshooting and maintenance experience.
For a dual-GPU card sharing 500 MB/s, that means 250 MB/s per GPU...

My question was exactly that: how can a, say, 750 MH/s card operate on a 500 MB/s bus?
If the hashes calculated on the GPUs were all sent back to the CPU, a 500 MB/s bus certainly could not deliver enough bandwidth to sustain the maximum hashing rate...
Many people told me that it was not a problem... so maybe the PCI-e bus, meaning the CPU-to-GPU I/O bus, is not used like that after all...
Maybe someone could explain that to me in more detail...

I really do have concerns about how such a speed could affect CPU-GPU transfers... I imagine (I could be wrong) that the most intensive and time-consuming part of the algorithm is the calculation done on the GPUs... even so, data must be transferred to the CPU eventually...

Some very simplified (and not accurate) math: suppose you exchange, say, 1 KB per stream processor:

=> 3072 x 1 KB ≈ 3 MB per "batch".

One could then send both GPUs 500 MB / 3 MB ≈ 166.66 "mega-batch" calculations per second.

In that model it would be 166.66 MH/s per board... of course, this could be totally wrong; it all depends on exactly how much data is sent, how it is sent, and so on...
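To make the toy model above concrete, here is the same guesswork written out (every per-batch size is just an assumption, as I said):

Code:
# Toy model only: assume every hash "batch" has to cross the bus.
# The sizes are guesses; this just shows why the model caps out around 166 "batches"/s.
BUS_MB_PER_S = 500          # PCI-e 2.0 x1, one direction
STREAM_PROCESSORS = 3072    # HD 6990 total, both GPUs
KB_PER_SP = 1               # assumed 1 KB exchanged per stream processor

mb_per_batch = STREAM_PROCESSORS * KB_PER_SP / 1024     # ~3 MB
batches_per_s = BUS_MB_PER_S / mb_per_batch             # ~166.7
print(f"{mb_per_batch:.0f} MB per batch, {batches_per_s:.1f} batches per second")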

I think the GPUs take more time to do the calculations than the bus transfers would take, and I may also not be right about how the process works in terms of GPU-CPU I/O...

If someone could explain this it would be helpful ...

Regards.



etotheipi
Legendary
Activity: 1428
Merit: 1093
Core Armory Developer
September 13, 2011, 11:37:48 PM
 #9

It is my understanding that the GPU takes in the block header that needs to be hashed and splits up the array of potential nonces between the threads/cores. Each thread can itself determine whether its own result meets the difficulty threshold, and throws it away if it doesn't. The only communication that needs to happen is for the GPU to send back the roughly 1-in-2^32 results that meet the share difficulty threshold (i.e., completed shares), and the CPU only needs to send new work after a share is complete. We're probably looking at a few kB/sec... I think an x1 slot can handle that.

There might be a little more going on here, but not a lot. As another poster said: the bandwidth requirement is nil.
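If you want a rough sanity check of that, here is the back-of-the-envelope version (the message sizes are just my rough assumptions, not measurements):

Code:
# Rough estimate of CPU<->GPU traffic for mining; message sizes are assumptions.
HASHRATE = 750e6               # ~750 MH/s, roughly one 6990
NONCE_SPACE = 2**32            # nonces per work unit; also ~1 share per 2^32 hashes

work_units_per_s = HASHRATE / NONCE_SPACE   # how often the GPU exhausts its nonce range
shares_per_s = HASHRATE / NONCE_SPACE       # ~1 difficulty-1 share per work unit on average

BYTES_PER_WORK = 1024          # header/midstate plus padding sent to the GPU (assumed)
BYTES_PER_SHARE = 64           # winning nonce plus a little context sent back (assumed)

bytes_per_s = work_units_per_s * BYTES_PER_WORK + shares_per_s * BYTES_PER_SHARE
print(f"about {bytes_per_s:.0f} bytes/s on the bus, versus 500 MB/s for an x1 slot")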

Founder and CEO of Armory Technologies, Inc.
Armory Bitcoin Wallet: Bringing cold storage to the average user!
Only use Armory software signed by the Armory Offline Signing Key (0x98832223)

Please donate to the Armory project by clicking here!    (or donate directly via 1QBDLYTDFHHZAABYSKGKPWKLSXZWCCJQBX -- yes, it's a real address!)
Striker (OP)
Newbie
Activity: 42
Merit: 0
September 13, 2011, 11:52:46 PM
 #10

Hi,

Quote
It is my understanding that the GPU takes in the block header that needs to be hashed and splits up the array of potential nonces between the threads/cores. Each thread can itself determine whether its own result meets the difficulty threshold, and throws it away if it doesn't. The only communication that needs to happen is for the GPU to send back the roughly 1-in-2^32 results that meet the share difficulty threshold (i.e., completed shares), and the CPU only needs to send new work after a share is complete. We're probably looking at a few kB/sec... I think an x1 slot can handle that.

There might be a little more going on here, but not a lot. As another poster said: the bandwidth requirement is nil.

There you go! That settles the issue.

Well, a single 500 MB/s x1 PCI-e slot would then be enough to support all 4 boards... and then some :) :)

Regards.

cicada
Full Member
Activity: 196
Merit: 100
September 14, 2011, 12:39:47 AM
 #11

To give you an idea, the first FPGA miners were running on serial data connections, probably 115 kbaud, with room to spare ;)
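Rough numbers, if you're curious (the message sizes here are only guesses on my part):

Code:
# How much hashing a 115200-baud serial link could keep fed, assuming the
# host only ever sends work units and receives winning nonces. Sizes are guesses.
BAUD = 115200
BYTES_PER_S = BAUD / 10            # ~10 bits per byte on the wire (start/stop bits)
BYTES_PER_EXCHANGE = 64 + 4        # work unit out, golden nonce back (assumed)
HASHES_PER_WORK_UNIT = 2**32       # full nonce range per work unit

exchanges_per_s = BYTES_PER_S / BYTES_PER_EXCHANGE
max_hashrate = exchanges_per_s * HASHES_PER_WORK_UNIT
print(f"~{max_hashrate / 1e9:.0f} GH/s before the serial link becomes the bottleneck")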

Team Epic!

All your bitcoin are belong to 19mScWkZxACv215AN1wosNNQ54pCQi3iB7
Striker (OP)
Newbie
Activity: 42
Merit: 0
September 14, 2011, 03:39:03 PM
 #12

Hi,


Quote
Quote
One could then send both GPUs 500 MB / 3 MB ≈ 166.66 "mega-batch" calculations per second.

You're assuming "hashes" are somehow generated on the CPU and then "uploaded" to the GPU to somehow get "processed".

That's not the way it works.

The CPU simply uploads a tiny bit of data to the GPU, on the order of 1 KB (that's K for kilobyte, i.e. 1024 bytes).

The GPU then sets out to chew through a bazillion prefixes to that tiny piece of data until it either finds a hash(prefix+data) that fits the bill or decides to give up and check back with the CPU to see if new work has appeared.

The large computation described above happens without any communication between the CPU and the GPU and thus requires no bandwidth.


Precisely.
Exactly as I suspected: because everyone told me bandwidth was not the problem, I figured that GPU calculation time must be way greater than the CPU-to-GPU data transfer time...
My numbers only showed that constant 1 KB-per-hash transfers were obviously not possible...
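Something like this is how I picture the loop now (purely illustrative pseudo-Python, not any particular miner; the easy target is only so the demo finishes quickly):

Code:
import hashlib, os, struct

def mine(work, target):
    """Scan the 32-bit nonce space over ~1 KB of work with no CPU<->GPU
    traffic in between; only a winning nonce (a few bytes) comes back."""
    for nonce in range(2**32):              # a real GPU grinds this range in parallel
        header = work + struct.pack("<I", nonce)
        digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
    return None                             # range exhausted: fetch new work from the CPU

# CPU side: hand out fresh ~1 KB work units and collect the rare results.
EASY_TARGET = 2**252                        # easy so this demo finishes; real shares are ~1 in 2^32
for _ in range(3):
    work_unit = os.urandom(1024)            # stand-in for a real block header + padding
    print("found nonce:", mine(work_unit, EASY_TARGET))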



Regards.
Striker (OP)
Newbie
Activity: 42
Merit: 0
September 14, 2011, 03:47:33 PM
 #13

Hi,



Quote
Again: if you want to hook many GPUs into a single mobo, the most I've seen working is 8 GPU cores per mobo, and bandwidth is absolutely not what you should be worrying about.

Power distribution to the GPU cores, on the other hand, is not always an easy nut to crack, and this is where your focus should be.

Cooling (and therefore spatial placement of the H/W) is the other one.



Yeah, I know about that.
I do not intend to overclock the GPUs. Most likely I will only try to reduce the memory clock (core clocks will stay stock) and take great care with the airflow inside the custom-made box.
I am planning to use two 1000 W PSUs per board, either from OCZ or from Corsair.
I have a preference for Thermaltake or Enermax, but they are both very expensive, and I think OCZ/Corsair units will do just fine.

The idea is to have a very stable system.
I actually think that overclocking a 6990 is not a good idea: what one gains in performance is lost to power consumption and to thermal and electrical risk. Overclocking could be an option for, say, a water-cooled single-6990 system, but for quad 6990s it is really not a safe bet.

Regards.
etotheipi
Legendary
Activity: 1428
Merit: 1093
Core Armory Developer
September 14, 2011, 06:11:34 PM
 #14

However, I believe it is a poor decision to design to 100% of max power consumption unless you buy top-of-the-line power supplies. If you anticipate 1200 W of consumption, don't get a 1200 W PSU. It can work for a few weeks, maybe even months, at 100% load, but it's going to wear out eventually. On the other hand, if you only pull 100 W from a 700 W PSU, it will probably run forever, no matter how crappy that PSU is.

I try to keep the max power consumption around 60-70% of max rating of the combined power supplies, and really prefer 50% unless it's a high-end PSU like a Corsair HX.  Perhaps it's paranoia, but a failing power supply can take a lot of other hardware with it, so I usually err on the side of too much power.  It's worth it to me to spend 10% more, and sleep well knowing that my hardware won't be on fire when I wake up.
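For what it's worth, here is the rule of thumb I use, written out (the 60% figure is just my own habit, not gospel):

Code:
def recommended_psu_watts(expected_draw_w, target_load=0.6):
    """Size a PSU so the expected draw sits at roughly 50-70% of its rating;
    0.6 splits the difference, and I drop it toward 0.5 for cheaper units."""
    return expected_draw_w / target_load

# Example: ~1650 W of expected draw split across two supplies.
per_psu_draw_w = 1650 / 2
print(f"aim for a {recommended_psu_watts(per_psu_draw_w):.0f} W unit per PSU")   # ~1375 W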

Founder and CEO of Armory Technologies, Inc.
Armory Bitcoin Wallet: Bringing cold storage to the average user!
Only use Armory software signed by the Armory Offline Signing Key (0x98832223)

Please donate to the Armory project by clicking here!    (or donate directly via 1QBDLYTDFHHZAABYSKGKPWKLSXZWCCJQBX -- yes, it's a real address!)
etotheipi
Legendary
Activity: 1428
Merit: 1093
Core Armory Developer
September 14, 2011, 08:08:36 PM
 #15

Quote
Your 100W-out-of-a-700W-supply example is completely wrong, though. If the draw at the wall is less than ~50% (depends on the model), then you are actually making the power supply LESS efficient and creating more wear, heat, and unnecessary power consumption. Underloading a power supply that much is horribly inefficient, and the excess heat is detrimental to the long-term health of a power supply. A smaller power supply would run cooler, safer, and more efficiently.

So what you're saying is that by letting my home computer run at idle (50W) with my 750W PSU, I'm actually doing more harm than loading it at 80-100%? I don't buy that. I agree the efficiency might be lower at low loads, but the longevity of the device isn't going to be negatively affected.

Regardless, I think we're in agreement about high loads: you want to keep the actual power consumption to about 50-70% of the PSU rating, unless you don't mind changing PSUs every couple of months. They're not guaranteed to fail, but PSU problems tend to show up as problems with other components, and thus are a complete pain to diagnose. I prefer to pay 10-20% extra to avoid that entirely.

My strategy has been not to base anything on manufacturer power consumption numbers. Look at something like the mining hardware comparison, which shows the typical range based on users measuring actual, from-the-wall power consumption. I have a bunch of 5850s, 5870s, and 6950s; I've measured all of them with a Kill-A-Watt meter myself, and my numbers have always matched what's on the page.

On that note, that page shows a non-overclocked 6990 using about 350W at full load, and more once you start cranking up the clock rates. Unfortunately, I don't actually have any of these GPUs, so I can't measure one myself. I'm just saying that in the past, I have found the power consumption numbers there to be reliable.


Founder and CEO of Armory Technologies, Inc.
Armory Bitcoin Wallet: Bringing cold storage to the average user!
Only use Armory software signed by the Armory Offline Signing Key (0x98832223)

Please donate to the Armory project by clicking here!    (or donate directly via 1QBDLYTDFHHZAABYSKGKPWKLSXZWCCJQBX -- yes, it's a real address!)
Striker (OP)
Newbie
Activity: 42
Merit: 0
September 15, 2011, 06:11:42 PM
 #16

Hi,


Quote
It's unlikely you need that much power. 375 W is the TDP; it is the max power the card is designed for. You won't hit 375 W in real life, even at 100% GPU load 24/7. On the 5970 I pull about ~260 W from the wall per card, with 3 cards running off a 1250 W PSU; total draw at the wall is normally around 930 W.

Take a look at benchmarks for the 6990 from various sites to get an idea of real-world power draw:
http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/41404-amd-radeon-hd-6990-4gb-review-21.html

If you want to go with dual power supplies, I would go with maybe 700-800 W units (1400-1600 W combined should be more than enough). Of course, 2 kW of power isn't going to hurt anything; it just may mean you are overspending some.

Thanks for your comment, but I know what TDP is.

As you may have noticed, I took the TDP value in order to size the PSUs, not as a precise figure.
The motherboard, CPU, etc. will also not be consuming at their maximum possible levels.

Also, almost all common commercial PSUs are at their peak efficiency around 50% load... hence the two 1000 W units. I know, I know, it is at most a 2% difference on the efficiency curve, but to me it matters. Over a 5-year period it likely pays back the price difference versus a non-optimal configuration, and after those 5 years I can be sure the PSUs were never pushed to their limits, so longevity is also a factor.
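Rough numbers behind that claim; the electricity price, uptime, and efficiency figures below are all just assumptions for illustration:

Code:
# Does a ~2% efficiency gain pay for bigger PSUs over 5 years?
# Every figure below is an assumption for illustration only.
DC_LOAD_W = 1650                 # power actually delivered to the rig
EFF_LOW, EFF_HIGH = 0.85, 0.87   # assumed efficiency at high load vs. ~50% load
PRICE_PER_KWH = 0.10             # assumed industrial rate, US$
HOURS = 24 * 365 * 5             # 5 years of 24/7 mining

wall_low = DC_LOAD_W / EFF_LOW
wall_high = DC_LOAD_W / EFF_HIGH
savings = (wall_low - wall_high) / 1000 * HOURS * PRICE_PER_KWH
print(f"~US${savings:.0f} saved over 5 years from the efficiency difference")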

Besides the PSUs being at their peak efficiency at 50% load, there is also the issue of electrical spikes on the installation where you run your equipment.

My gear will not be in my home. It will be running in an industrial-type facility where electricity is much cheaper, there is 24/7 security, no noise for the neighbours, and so on...

So apart from having surge protection gear on my electrical circuit, I must make sure my PSUs are not at the very brink of taking US$3500 of hardware each with them when a possible and uncontrollable spike comes along.
The facility in question does have a lot of heavy power machinery... so every precaution is important.

If anything, I really do not mind paying a bit more for the PSUs. It is one of the components I like to invest in, in terms of proper sizing and quality.

Regards.