Bitcoin Forum
Author Topic: could dual power supplies cause problems with stability?  (Read 7840 times)
jjshabadoo (OP)
Hero Member
*****
Offline Offline

Activity: 535
Merit: 500



View Profile
January 14, 2012, 11:40:30 PM
 #1

I have two rigs with two Seasonic 750 W Gold PSUs powering three 5970s each. I used a Lian Li adapter which connects the power supplies together.

The rigs power up and run fine for a certain amount of time, maybe 24 hours, but then they shut down. Other rigs I have with only one PSU seem to be far more stable.

So could the dual PSUs be causing the stability issues?

All rigs use the same motherboards, linuxcoin via USB, etc.
jake262144
Full Member
***
Offline Offline

Activity: 210
Merit: 100


View Profile
January 15, 2012, 12:15:51 AM
Last edit: January 15, 2012, 12:37:42 AM by jake262144
 #2

Of course they COULD. If the machine keeps totally powering down, they most likely DO.

Those are the Seasonic X-750 units, right?
If the whole rig is shutting down, it's the PSU powering the mobo that's to blame.
Were the other one to fail, at most you'd lose the cards it's feeding.

Make sure all connectors are firmly seated (including the modular cable connectors).
Make sure that the PSU powering the mobo is only feeding the hard drive (if any) and one GPU.
The other PSU should take care of the two remaining cards.

If the rig powers down again, swap the PSUs around.
If stability still isn't achieved, remove one GPU so that the PSU powering the mobo doesn't power anything else but the hard drive.

Report success here.
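A quick sanity check of that split, as a rough sketch; the card and platform wattages below are assumptions (AMD rates the HD 5970 at roughly 294 W maximum board power), not measurements from the OP's rigs:

Code:
# Rough per-PSU load budget for the split described above.
# Assumed numbers: HD 5970 max board power ~294 W (AMD's rating),
# ~100 W for mobo + CPU + flash drive, 750 W continuous per X-750.

CARD_W = 294          # HD 5970 max board power (approx.)
PLATFORM_W = 100      # mobo + Sempron + flash drive (rough guess)
PSU_CAPACITY_W = 750  # Seasonic X-750 continuous rating
HEADROOM = 0.80       # keep sustained load under ~80% of rating

psus = {
    "PSU1 (mobo + 1 GPU)": PLATFORM_W + 1 * CARD_W,
    "PSU2 (2 GPUs)":       2 * CARD_W,
}

for name, watts in psus.items():
    limit = PSU_CAPACITY_W * HEADROOM
    status = "OK" if watts <= limit else "OVERLOADED"
    print(f"{name}: ~{watts} W vs {limit:.0f} W budget -> {status}")

On those assumptions, PSU 2 lands just under a 600 W budget, which is why mobo-plus-one-card on one supply and two cards on the other is the sensible split.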
jjshabadoo (OP)
Hero Member
*****
Offline Offline

Activity: 535
Merit: 500



View Profile
January 15, 2012, 05:24:06 AM
 #3

Thanks for the reply, and that is exactly how I have them set up.

PSU 1 is powering the mobo, CPU, and one GPU. I use a USB drive, so nothing else needs power.

The second PSU powers the other two GPUs.

Oh, and the rigs with three 5970s are at stock 725/300. The one rig has two 5970s at 800/300.

I'm thinking I have a problem with the wiring in the basement where the rigs are. Probably some power fluctuations or "dirty" power, etc.

I have nearly identical rigs at my house here, and they have been 100% stable, and they are overclocked. The 5970s are at 820/410 and the 5870s are at 950/300.
One rig has four 5870s on a 1250 W Seasonic Gold PSU.
One rig has three 5870s and one 5970 on a 1250 W Seasonic Gold PSU.
One rig has two 5970s with a 750 W Seasonic Gold PSU.

All have MSI 890FX-GD70 mobos, an AMD Sempron 140, 2 GB RAM, and 16 GB flash drives running linuxcoin final with cgminer 2.0.7.
jake262144
Full Member
***
Offline Offline

Activity: 210
Merit: 100


View Profile
January 15, 2012, 09:23:05 AM
 #4

Quote from: jjshabadoo
I'm thinking I have a problem with the wiring in the basement where the rigs are. Probably some power fluctuations or "dirty" power, etc.

Darn it, those can only be measured with an oscilloscope. I don't suppose you have one lying around?

If you have a Kill-A-Watt, can you measure how much power this rig is drawing? Perhaps you could buy a UPS just strong enough to do the power conditioning necessary.
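Sizing that UPS from a Kill-A-Watt reading is simple arithmetic; a minimal sketch, assuming a hypothetical 900 W reading and the high power factor typical of active-PFC supplies like these Seasonics:

Code:
# Back-of-the-envelope UPS sizing from a Kill-A-Watt reading.
# The 900 W figure is a placeholder; measure the actual rig.

measured_watts = 900      # hypothetical Kill-A-Watt reading for one rig
power_factor = 0.95       # typical for active-PFC supplies
headroom = 1.25           # don't run a UPS near its limit

va_needed = measured_watts / power_factor * headroom
print(f"Look for a line-interactive UPS rated >= {va_needed:.0f} VA "
      f"(and >= {measured_watts * headroom:.0f} W)")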
P4man
Hero Member
*****
Offline Offline

Activity: 518
Merit: 500



View Profile
January 15, 2012, 09:58:02 AM
 #5

Quote from: jjshabadoo
I have two rigs with two Seasonic 750 W Gold PSUs powering three 5970s each. I used a Lian Li adapter which connects the power supplies together.

The rigs power up and run fine for a certain amount of time, maybe 24 hours, but then they shut down. Other rigs I have with only one PSU seem to be far more stable.

So could the dual PSUs be causing the stability issues?

All rigs use the same motherboards, linuxcoin via USB, etc.

How are they connected? I would never power one card from two PSUs; is that what you are doing?

jjshabadoo (OP)
Hero Member
*****
Offline Offline

Activity: 535
Merit: 500



View Profile
January 15, 2012, 07:34:11 PM
 #6

No, I'm not powering cards with lines from two different PSUs. I know that is not only idiotic but will fry your card. I am using a Lian Li power supply adapter; see below.

http://www.frozencpu.com/products/5637/cpa-167/Lian_Li_Dual_Power_Supply_Adapter_Cable.html

I have the 24-pin ATX part connected to what I will call PSU 1, and the 24-pin part with just the green and black wires connected to PSU 2. Of course, the 24-pin ATX part is then connected to the motherboard.

PSU 1 is also connected to the CPU power connector on my motherboard, and one of its PCIe cables is connected to video card 1, which sits in PCIe x16 slot 1; that's the one I connect a monitor to when troubleshooting.

PSU 2 is connected to the other two 5970s with two separate PCIe connectors, of course, and nothing else; just the two cards.
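For what it's worth, those green and black wires are the ATX power-on handshake being passed through. A sketch of the two relevant pins, taken from the standard ATX 24-pin pinout (illustration only, not the adapter's own documentation):

Code:
# Why the Lian Li adapter works: it forwards PSU 1's PS_ON# signal
# (green) plus a shared ground (black) to PSU 2, so both supplies
# switch on together and share a common ground reference.
# Pin numbers are from the standard ATX 24-pin pinout.

ATX_24PIN = {
    16: {"signal": "PS_ON#", "wire": "green", "note": "pull low to turn the PSU on"},
    17: {"signal": "COM",    "wire": "black", "note": "ground reference"},
}

for pin, info in sorted(ATX_24PIN.items()):
    print(f"pin {pin}: {info['signal']} ({info['wire']}) - {info['note']}")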
P4man
Hero Member
*****
Offline Offline

Activity: 518
Merit: 500



View Profile
January 15, 2012, 07:45:30 PM
 #7

If it powers down, that's usually the power supply. Your PSUs ought to be powerful enough, but maybe one is faulty? Any chance it's overheating because of bad airflow, a bad fan, or whatever?

jjshabadoo (OP)
Hero Member
*****
Offline Offline

Activity: 535
Merit: 500



View Profile
January 16, 2012, 12:35:58 AM
 #8

I don't think so; they're Seasonic Gold 750 W units, and I used them separately with just two cards on them before. The rigs have always shut down after a certain point at this location. At first I thought it might be the cooling, since those stock Diamond 5970s from Newegg come with horrible stock TIM and pads.

I haven't replaced the TIM and pads on all the cards, but I have on four, and they run a little longer, but not much. I know I replaced the thermal stuff properly because the temps went down like 5-6 °C after doing so. I haven't figured out how to monitor the RAM temps in Linux yet, so maybe that is my next step.

Also, I did try running them at stock clocks (725/300) and it didn't make a difference, except of course on my hash rate... lol

I'm just thinking these Diamonds from Newegg are refurbed stinkers that they simply got running at the factory and then pawned off to Newegg as "new".

I have an XFX 5970 at my house that has run at 800/300 without issue for over a week. I also have an ATI-brand 5970 paired with it that hasn't had any issues either. Then again, I have fewer problems overall on the rigs at my house versus the ones at my brother-in-law's. I think it just might be his wiring over there. I'm wondering if I can buy something that can "clean up" or maybe stabilize the power over there.

Who knows...
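On the temperature-monitoring point above: with the Catalyst driver of that era, core temps could be polled from the command line with aticonfig. A minimal logging sketch; note that --odgt exposes only the core sensor (VRM/VRAM temps generally weren't readable this way), and the parsing assumes Catalyst's usual "Temperature - NN.NN C" output format:

Code:
# Poll GPU core temps via aticonfig and log them with timestamps,
# so a temperature spike can be lined up against the time a card
# stops hashing.

import re
import subprocess
import time

TEMP_RE = re.compile(r"Temperature\s*-\s*([\d.]+)\s*C")

while True:
    out = subprocess.run(
        ["aticonfig", "--odgt", "--adapter=all"],
        capture_output=True, text=True,
    ).stdout
    temps = TEMP_RE.findall(out)
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    print(f"{stamp}  core temps: {', '.join(temps)} C", flush=True)
    time.sleep(60)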
newunit16
Member
**
Offline Offline

Activity: 133
Merit: 10


View Profile
January 17, 2012, 03:27:32 PM
 #9

Load the 5 V rails on "PS2". Plug the HDD and fans into it. Anything that requires power (aside from the mobo) should be plugged into the secondary PSU.

I have a similar setup, running 24/7 since last June: two PSUs, four GPUs, and that dual-PSU connector cable (makes it much cleaner). Before loading the other rails with some fans and the HDD, my PS2 would actually fail to power up. Some PSUs will power up without load on the other rails, I'm sure. But they will all fail prematurely if not loaded.


On a side note:
To say powering the GPUs from two sources will invite problems is to not know the entire truth. If the two PSUs were not sharing a common ground, then yes, it would likely die in a matter of days. But this is not the case: one of the wires connecting the two PSUs together is a ground wire.

I have all four cards plugged into the mobo with risers, no Molex on the risers. Three of the GPUs are externally powered by PS2, and the fourth uses the PCIe power from PS1. No issues since inception.
P4man
Hero Member
*****
Offline Offline

Activity: 518
Merit: 500



View Profile
January 17, 2012, 03:33:28 PM
 #10

Quote from: newunit16
On a side note:
To say powering the GPUs from two sources will invite problems is to not know the entire truth.

That's not what I said. I specifically said powering one GPU with two PSUs, as that is a recipe for disaster because of voltage regulation. There is no real problem using multiple PSUs for multiple cards, as long as no single GPU is powered by two PSUs.

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
January 17, 2012, 03:39:14 PM
 #11

Quote from: newunit16
On a side note:
To say powering the GPUs from two sources will invite problems is to not know the entire truth.

Quote from: P4man
That's not what I said. I specifically said powering one GPU with two PSUs, as that is a recipe for disaster because of voltage regulation. There is no real problem using multiple PSUs for multiple cards, as long as no single GPU is powered by two PSUs.

P4man is right. Also, you don't want to do it because any time you connect two power sources together you are going to get losses due to voltage mismatch. For similar reasons, you shouldn't power graphics cards which have two power connectors from different rails of the same PSU, as the rails are going to have differing voltage output.
jake262144
Full Member
***
Offline Offline

Activity: 210
Merit: 100


View Profile
January 17, 2012, 04:12:09 PM
Last edit: January 17, 2012, 04:40:31 PM by jake262144
 #12

Quote from: DeathAndTaxes
... For similar reasons, you shouldn't power graphics cards which have two power connectors from different rails of the same PSU, as the rails are going to have differing voltage output.

Sorry DAT, old fellow, I can't agree.

The reason is that the rails would more appropriately be called "virtual rails". They take their input voltage from a common 12 V rail, and it's only at the OCP (Over-Current Protection) chip that the current load of each "rail" is checked. That's the simplest and, today, the only commonly seen approach to multi-rail designs. Any voltage difference between the rails should be negligible.

Code:
                 ┌──[OCP]──[rail0 output]
[common 12V]─────┼──[OCP]──[rail1 output]
                 ├──[OCP]──[rail2 output]
                 ...

A benefit of this approach is that, should a single rail become dangerously overloaded, a good PSU will disable only the troubled rail. Care should be taken never to overload the rail supplying +12 V to the CPU and mobo. Only if the load at the common 12 V input exceeds the safety limit must the whole PSU shut down.

Although a few designs using two independent 12V circuits were manufactured, they were more complex than necessary, required more components (thus being more expensive), and suffered from the issue you described.
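A toy model of that per-rail OCP behaviour, purely illustrative (the trip thresholds below are made up, not any real PSU's numbers):

Code:
# Toy model of a multi-"virtual rail" PSU: every rail draws from one
# 12 V source, each rail has its own OCP trip point, and the whole
# unit shuts down only if the combined 12 V load exceeds its limit.

RAIL_OCP_A = 30      # hypothetical per-rail OCP limit
TOTAL_LIMIT_A = 62   # hypothetical combined 12 V limit (~750 W / 12 V)

def check(loads_amps):
    tripped = [i for i, amps in enumerate(loads_amps) if amps > RAIL_OCP_A]
    if sum(loads_amps) > TOTAL_LIMIT_A:
        return "whole PSU shuts down (combined 12 V limit exceeded)"
    if tripped:
        return f"only rail(s) {tripped} disabled; the rest stay up"
    return "all rails OK"

print(check([25, 20, 10]))   # all rails OK
print(check([35, 20, 5]))    # rail 0 trips, the others keep running
print(check([30, 30, 10]))   # combined load too high -> full shutdown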
ArtForz
Sr. Member
****
Offline Offline

Activity: 406
Merit: 257


View Profile
January 17, 2012, 04:40:00 PM
 #13

For the OP's mysterious shutdowns...
The X750 is single-rail, so there can't be a problem there.
Also, mainboard + stuff + one card on #1 and two cards on #2 is the way to go.
A mysterious shutdown points towards #1 or the mainboard having some issue (at least one piece of true info: you'd only get locked-up cards if #2 dropped out).
Now, for bad power causing issues... possible. My 4x5970 boxes regularly locked up or powered off on brownouts (got a line voltage monitor/logger, so correlation was easy).
Another possible issue... southbridge chipset temp. The MSI 890FXA-GD70 is *very* iffy there.
Oh, and on that board the CPU voltage regulator phase switching is... problematic. Maybe test whether the problem goes away if you disable it, or just keep the CPU at 100% load constantly (no fucking clue why that helps, but it did work here to stabilize an 890FXA with 4x5970 and an Athlon II X2...).
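The brownout correlation mentioned above is easy to reproduce if the voltage logger can dump timestamps; a rough sketch, with file names and formats that are purely hypothetical:

Code:
# Line up logged voltage sags with rig shutdown times.
# volts.csv rows: "YYYY-mm-dd HH:MM:SS,volts" (no header)
# shutdowns.txt: one shutdown timestamp per line.

import csv
from datetime import datetime, timedelta

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")

with open("volts.csv") as f:
    sags = [parse(t) for t, v in csv.reader(f) if float(v) < 108]  # >10% sag on 120 V

with open("shutdowns.txt") as f:
    shutdowns = [parse(line.strip()) for line in f]

window = timedelta(minutes=2)
for down in shutdowns:
    near = [s for s in sags if abs(s - down) <= window]
    print(down, "-> brownout nearby" if near else "-> no sag logged")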

Now... long rant:

So much misinformation in one thread.

Quote from: newunit16
But they will all fail prematurely if not loaded.

Quite a strong general statement there. Got any proof to back that up?
For a group-regulated design you want some load on +5/+3.3, for obvious reasons, and those *can* fail if you don't do it.
But for anything that uses DC/DCs to create the 5 V and 3.3 V outputs... no.
And if you had bothered to check out the X750s the OP is using... guess what... they *are* 12 V + DC/DC based.

Quote from: newunit16
On a side note:
To say powering the GPUs from two sources will invite problems is to not know the entire truth.

Quote from: P4man
That's not what I said. I specifically said powering one GPU with two PSUs, as that is a recipe for disaster because of voltage regulation. There is no real problem using multiple PSUs for multiple cards, as long as no single GPU is powered by two PSUs.

Quote from: DeathAndTaxes
P4man is right. Also, you don't want to do it because any time you connect two power sources together you are going to get losses due to voltage mismatch. For similar reasons, you shouldn't power graphics cards which have two power connectors from different rails of the same PSU, as the rails are going to have differing voltage output.

Uhmmm... nope. No problem there either.
Please find *any* video card that connects the aux connectors together (that would badly violate the PCIe spec).
Or hell, any card that doesn't like one of them at 12.6 V with the other at 11.4 V.
Now why *isn't* that a big problem? Well, the modern multiphase DC/DCs used on any recent graphics card deal with it pretty nicely: you get some uneven average current on the phases powered from different voltages, roughly the same percentage mismatch as the voltage mismatch. Read up on the theory of operation of synchronous multiphase step-down converters with current-mode control of individual phases and you'll figure out why.

So... why on earth can't people stop and check their "facts", or even just think for a bit, before regurgitating the same old myths and overgeneralizations?

bitcoin: 1Fb77Xq5ePFER8GtKRn2KDbDTVpJKfKmpz
i0coin: jNdvyvd6v6gV3kVJLD7HsB5ZwHyHwAkfdw
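A quick arithmetic footnote on those example voltages: 12.6 V and 11.4 V are exactly the ATX specification's ±5% tolerance limits on the 12 V rail, and the phase imbalance ArtForz describes tracks their relative spread:

Code:
# ArtForz's example voltages are the ATX spec's +/-5% limits on 12 V.
# His claim: average phase-current imbalance in a multiphase converter
# is roughly the same percentage as the input-voltage mismatch.

V_NOM = 12.0
v_hi, v_lo = 12.6, 11.4

print(f"high side: {100 * (v_hi - V_NOM) / V_NOM:+.1f}% of nominal")
print(f"low side:  {100 * (v_lo - V_NOM) / V_NOM:+.1f}% of nominal")

spread = (v_hi - v_lo) / V_NOM
print(f"worst-case spread: ~{100 * spread:.0f}% -> expect roughly "
      f"{100 * spread:.0f}% uneven average current across phases")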
jake262144
Full Member
***
Offline Offline

Activity: 210
Merit: 100


View Profile
January 17, 2012, 04:43:12 PM
 #14

[Input validation request] Have I been inaccurate anywhere in this thread, Art?
ArtForz
Sr. Member
****
Offline Offline

Activity: 406
Merit: 257


View Profile
January 17, 2012, 04:56:09 PM
 #15

Quote from: jake262144
Have I been inaccurate anywhere in this thread, Art?
Nope, just factually correct info and helpful suggestions, presented in a professional fashion... Guess there's at least some hope remaining for this forum.

bitcoin: 1Fb77Xq5ePFER8GtKRn2KDbDTVpJKfKmpz
i0coin: jNdvyvd6v6gV3kVJLD7HsB5ZwHyHwAkfdw
jjshabadoo (OP)
Hero Member
*****
Offline Offline

Activity: 535
Merit: 500



View Profile
January 17, 2012, 09:36:40 PM
 #16

Thanks for the help. I think we've isolated it to some type of electrical wiring issue, bad circuit, etc. I don't claim to know jack about electrical work, but my partner does, so he's going to check it out. I think we're just going to get a dedicated 30 A line/wire? for the mining rigs (with correct wiring, etc.; I don't know the technical terms).

We might just be overloading the circuit they're on, which causes fluctuating power issues, or it was wired improperly to begin with.

For the record, the rigs don't "shut down"; the cards just stop mining, yet they are not having any heating issues. They will shut down at 50 °C.

Thanks for all the help though.
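For sizing that dedicated line, the usual North American rule of thumb is that continuous loads (anything running 3+ hours, per the NEC) should stay under 80% of the breaker rating. A rough sketch, with the per-rig wattage an assumption rather than a measurement:

Code:
# How many rigs fit on a branch circuit? The NEC treats loads running
# 3+ hours as continuous and caps them at 80% of the breaker rating.
# The per-rig wattage is a guess; measure with a Kill-A-Watt instead.

VOLTS = 120            # assuming a standard 120 V circuit
RIG_W = 900            # assumed draw for one triple-5970 rig

def budget(breaker_amps):
    watts = breaker_amps * VOLTS * 0.80
    return watts, int(watts // RIG_W)

for amps in (15, 20, 30):
    watts, rigs = budget(amps)
    print(f"{amps} A circuit: {watts:.0f} W continuous budget "
          f"-> at most {rigs} rig(s)")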
newunit16
Member
**
Offline Offline

Activity: 133
Merit: 10


View Profile
January 21, 2012, 02:34:41 AM
 #17

Quote from: newunit16
On a side note:
To say powering the GPUs from two sources will invite problems is to not know the entire truth.

Quote from: P4man
That's not what I said. I specifically said powering one GPU with two PSUs, as that is a recipe for disaster because of voltage regulation. There is no real problem using multiple PSUs for multiple cards, as long as no single GPU is powered by two PSUs.

In my setup, three of the four GPUs are powered from two sources: the motherboard and the secondary power supply.

Zero issues.
jjshabadoo (OP)
Hero Member
*****
Offline Offline

Activity: 535
Merit: 500



View Profile
January 21, 2012, 07:01:01 AM
 #18

I think I should also try the CPU power phase trick that Art suggested.

It's not like the rigs shut down; the video cards themselves just stop hashing at a certain point. I think part of that is that they're those Diamonds from Newegg and are just a "shitty" batch. If you guys could see what the thermal paste/pads look like when you take off the stock coolers, you'd just laugh. The paste is caked on, and the thermal pads on the small RAM (VRM?) chips, I forget what they're called, but the "little" ones, are powdered when you take the cooler off. But I have some of those at home here too, and they hash away like champs.

I'm really thinking there are power issues at the location where I am having problems. The circuit can't handle the load or something.

So we've been replacing the thermal pads and paste, and that has helped. I'll try the CPU power phase thing and maybe a surge protector with "power cleaning"? I just read about these, and it sounds like they help with normal electrical interference on high-end A/V equipment.

I am also thinking the powered PCIe extenders are not needed for the MSI 890FX-GD70, even with four 5970s, and possibly screw things up. Again, in this case I'm only running three 5970s per rig right now.

Although I have a rig here with four 5870s on powered extenders (not connected to Molex), and they have run like champs at 950/160 from day one, at about 750 W (peak), giving me 1.7 GH/s on average.

Who knows... but thank you all very much for your help.
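As a closing sanity check on those last numbers, using only the figures quoted in this post:

Code:
# Efficiency of the four-5870 rig as reported above:
# ~1.7 GH/s total at about 750 W peak at the wall.

hashrate_mhs = 1700.0   # 1.7 GH/s
wall_watts = 750.0      # reported peak draw

print(f"~{hashrate_mhs / wall_watts:.2f} MH/J overall, "
      f"or ~{hashrate_mhs / 4:.0f} MH/s per card")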