Author Topic: 2x6970's Crashing Repeatedly with GUIMiner  (Read 7100 times)
DeathAndTaxes (Donator, Legendary; Activity: 1218)
December 31, 2011, 11:42:04 PM  #21

Quote from: Bananington
Calm down. I opened up the case and the 100c card is getting 80c, and the 80c card is getting 70's-60's. It's just my case. One card runs great. But my second card is close to the bottom with minimal air flow.

Sounds good. Like I said, if the temp drops with the case open, it's an airflow restriction. You need a large volume of cool air to keep those GPUs cool. If you need to use a case, look for one with plenty of room below the bottom card. Also look for one with the HDD cage mounted high (up near the CPU); that gives you a straight shot from the front-mounted fans to the cards' air intakes.

Quote
I've got my mining pretty much stable. I'm tweaking the voltages lower to see what is stable to get cooler temps maxed out.
Lowering the memory clock as far as possible (how low depends on the card, BIOS, and manufacturer) will cut some heat and give you some headroom.

Quote
This rig isn't going to be mining 24/7. It'll be mining overnight or when I'm in class. So it'll be running less than 24 hours per day at those temps and fan speeds. I have a 3 year warranty. Will that cover me if my fans malfunction? I would guess so. I'm not worried about cooking my GPU because anything above 90c is completely unacceptable to me. I try for 70'sc.

Well, 16 hours a day is pretty much the same as 24 hours a day. Mining is hard on hardware. Yeah, a dead fan will be covered under warranty, but it will be a pain in the ass and a 2+ week wait for your refurb card (which was used by someone else who likely abused it :) ).

With good airflow you should be able to keep fan speed down to 75% or less, which will improve the fan's longevity. If your motherboard lets you space the cards farther apart (because it has 3+ x16 slots), that helps too. You can find extra-long CrossFire bridges if you need one for gaming.
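
Since the rig will be mining unattended, one thing you could script yourself is a crude over-temperature watchdog that kills the miner if a card gets out of hand. This is just a sketch of the idea, not something GUIMiner does for you: read_gpu_temps() is a placeholder you would wire up to whatever sensor logging you use (GPU-Z and Afterburner can both log temps), and the 90C cutoff and the poclbm.exe process name are only illustrative guesses.

Code:
# Minimal over-temperature watchdog sketch (illustrative, not part of GUIMiner).
# read_gpu_temps() is a stub you must implement yourself, e.g. by parsing a
# GPU-Z or Afterburner sensor log. The limit and process name are assumptions.
import subprocess
import time

TEMP_LIMIT_C = 90             # assumed cutoff; pick your own
MINER_PROCESS = "poclbm.exe"  # assumed name of the process GUIMiner launched

def read_gpu_temps():
    """Return a list of core temps in Celsius, one entry per GPU.
    Placeholder: hook this up to your own monitoring tool."""
    raise NotImplementedError("fill in a real temperature source")

def kill_miner():
    # taskkill is a stock Windows command; /F forces, /IM selects by image name
    subprocess.call(["taskkill", "/F", "/IM", MINER_PROCESS])

while True:
    temps = read_gpu_temps()
    if any(t >= TEMP_LIMIT_C for t in temps):
        kill_miner()
        break
    time.sleep(30)  # poll every 30 seconds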
Bananington (Sr. Member; Activity: 366)
January 01, 2012, 01:46:55 AM  #22

Quote from: DeathAndTaxes on December 31, 2011, 11:42:04 PM (quoted in full above)
-I'm a gamer, this is a case built for gaming, plus I don't have the money for a new case just to mine.

-My memory clock is as low as possible.

-As for the RMA wait period, I'll always have one 6970 running to keep my games going. Lol

Thank you guys for your advice. I'll keep working on this stuff and update with what I finally get it stabilized at.
Bananington (Sr. Member; Activity: 366)
January 02, 2012, 08:29:34 PM  #23

OK, so I'm tweaking the settings on one 6970. When I run it, after about an hour or so I get a long poll IO error reported by GUIMiner, and by the time I wake up to check on the mining it has been stopped for several hours.

Any ideas? Normally a crash doesn't show up as a long poll IO error, right?
Bananington (Sr. Member; Activity: 366)
January 15, 2012, 05:16:26 AM  #24

So I've come to the conclusion that I can't run both cards at stock speeds while both are plugged in. For some reason GPU 2 can run at stock speeds, but GPU 1 crashes at stock speeds. Each card runs perfectly at stock when it's the only one plugged into the mobo. I've tested both PCI-E x16 slots individually and they both work and run each card at stock speeds without crashing, but when both cards are plugged in, GPU 1 CANNOT run at stock speeds for some reason.

Does anyone else think this is a driver issue? Because it doesn't seem to be a hardware issue.
portron (Member; Activity: 106)
January 15, 2012, 08:17:03 AM  #25

Quote from: Bananington on January 15, 2012, 05:16:26 AM (quoted above)

Need more info...
 
Mobo? Have you updated your BIOS or checked for issues with your particular model?

What OS / Driver version are you using?

Crossfire bridge installed? Are you managing both cards with Afterburner and using identical settings?

What are your temps like now (since you had the issue before)?
P4man (Hero Member; Activity: 504)
January 15, 2012, 09:03:01 AM  #26

Sounds like your power supply isn't up to the task.

jake262144 (Full Member; Activity: 210)
January 15, 2012, 10:04:11 AM  #27

What you say sounds about right, P4.
If the PSU isn't up to snuff, OP might be getting a lot of power fluctuations, resulting in a crash.

OP, please post your PSU specs, including the manufacturer and model info.
Bananington (Sr. Member; Activity: 366)
January 15, 2012, 05:35:33 PM  #28

Mobo - http://www.newegg.com/Product/Product.aspx?Item=N82E16813131655
PSU - http://www.newegg.com/Product/Product.aspx?Item=N82E16817152045
OS - Windows 7 64bit
CCC Driver - 11.12

Crossfire Bridge is installed because I will eventually need to crossfire for my games, but crossfire is disabled via CCC.

My temps are GPU 1 (81c) and GPU 2 (70c); this is with GPU 1's clock lowered from stock until it stops crashing.

My PSU has more power than is required, so it shouldn't be a problem unless it is faulty. :O

Like I said earlier, both cards run perfectly at stock speeds on their own, in either PCI-E slot and with either pair of my 8-pin connectors.

I appreciate all of your responses. :) I hope we can figure this out.
DeathAndTaxes (Donator, Legendary; Activity: 1218)
January 16, 2012, 02:54:36 AM  #29

81C is too high for 24/7 operation.  Are you downclocking the memory? 
Bananington (Sr. Member; Activity: 366)
January 16, 2012, 03:48:47 AM  #30

Quote from: DeathAndTaxes
81C is too high for 24/7 operation.  Are you downclocking the memory?
Everything is turned down as far as it will go without crashing: core voltage, AUX voltage, and memory frequency. I've already come to the conclusion that my case is at fault, causing the extreme GPU temps because of poor airflow. I'm buying a new case ASAP.

But temperatures aren't the issue for me right now: why won't GPU 1 run at stock settings when GPU 2 is installed?

My miner crashes GPU 1 when the clock is at stock speeds, and my games crash it at the stock core clock whenever its usage goes above 80%, as seen in Skyrim. The GPU runs perfectly at stock when the other GPU isn't plugged in. So what is the issue?
DeathAndTaxes (Donator, Legendary; Activity: 1218)
January 16, 2012, 03:52:03 AM  #31

Quote from: Bananington on January 16, 2012, 03:48:47 AM (quoted above)

Installed, or mining?

Does GPU 1 crash when GPU 2 is idle, or only when it's under full load?
What is the VRM temp (not the core temp) on GPU 1 when it crashes?
Bananington (Sr. Member; Activity: 366)
January 16, 2012, 04:07:17 AM  #32

Quote from: DeathAndTaxes on January 16, 2012, 03:52:03 AM (quoted above)

- Installed and/or mining. Even when they are CrossFired (I did this to test mining stability), GPU 1 still crashed.
- GPU 2 can be idle or under full load; it does not matter at all. As long as it is plugged in, the issue occurs. I have not tried switching the GPUs around (putting GPU 1 in GPU 2's slot and vice versa), but I have tested each card on its own in each slot, and both worked fine at stock settings.
- How can I determine the VRM temp of GPU 1? And what is a VRM?

I am also using MSI Afterburner to manage/monitor these GPUs; I believe someone asked that earlier and I failed to answer. Crashing occurs whether the cards have identical settings or different ones.
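
I'll set Afterburner (or GPU-Z) to log the sensors to a file while mining, so I can see what every sensor read right before the crash. Something like this quick script could pull the last few samples out of the log; the file name and the simple comma-separated layout are just my guesses, the real format depends on the tool, so adjust accordingly.

Code:
# Sketch: show the last few sensor samples recorded before a crash.
# The log path and CSV layout are assumptions; GPU-Z / Afterburner each
# have their own format, so adapt the parsing to the real file.
import csv

LOG_FILE = "gpu_sensor_log.csv"   # hypothetical sensor-log export
LAST_N = 10                       # samples to show from just before the crash

with open(LOG_FILE, newline="") as fh:
    rows = [row for row in csv.reader(fh) if row]

header, samples = rows[0], rows[1:]
print(", ".join(col.strip() for col in header))
for row in samples[-LAST_N:]:     # logging stops when the machine dies
    print(", ".join(cell.strip() for cell in row))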
worldinacoin (Hero Member; Activity: 658)
January 16, 2012, 04:15:26 AM  #33

I am also having problems with the 6970. I stopped running three of them and instead left just one mining. It runs way too hot no matter how much you undervolt it, etc. I think the 5xxx series are better for mining.
Bananington (Sr. Member; Activity: 366)
January 16, 2012, 04:24:22 AM  #34

Quote from: worldinacoin on January 16, 2012, 04:15:26 AM (quoted above)
I think you misunderstand; my problem is not a temperature problem. My problem is that when I have two GPUs plugged in, one of them crashes at stock core clock within seconds of usage going above 80%, well before temps get out of the 70s.
P4man (Hero Member; Activity: 504)
January 16, 2012, 07:34:49 AM  #35

Even though on paper your PSU is certainly powerful enough, it still sounds like a power delivery issue to me. Can you try another one?

jake262144 (Full Member; Activity: 210)
January 16, 2012, 10:06:55 AM  #36

OP, read this article:
http://www.jonnyguru.com/modules.php?name=NDReviews&op=Story&reid=246

The PSU reviewed there is built on the same platform as yours: both are manufactured by Andyson to the same spec, and the only difference is the stickers, just like with reference GPUs.

Although it might come as a shocker, contrary to what the label might imply, your PSU is NOT capable of delivering 1200W in continuous mode.

Moreover, your PSU has all its 12V power divided into four rails, each of them limited to 40A (480W).
You need to take very good care when loading up these rails.
If you connect things in such a way that one rail becomes overloaded, then as the PSU heats up and loses some of its headroom (so-called de-rating), it might shut off on you.

This is a half-decent PSU but the rail distribution makes it quite hard to connect it the right way.
You'll need to read the manual and figure out which connectors to use to distribute the load evenly across the rails.
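
To put rough numbers on it (the per-card figure is my ballpark estimate, not something measured in this thread): a 6970 mining flat out draws up to roughly 250W, almost all of it from the 12V rails.

250 W / 12 V ≈ 21 A per card
two cards on one rail ≈ 42 A, already past the 40 A rail limit before any de-rating
one card per rail ≈ 21 A, leaving roughly half the rail free

So the 1200W figure on the label is beside the point; what matters is that no single rail gets asked for more than its 40 A.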
Bananington (Sr. Member; Activity: 366)
January 16, 2012, 06:56:09 PM  #37

Quote from: P4man
Even though on paper your PSU is certainly powerful enough, it still sounds like a power delivery issue to me. Can you try another one?
No, I can't. :(

Quote from: jake262144 on January 16, 2012, 10:06:55 AM (quoted in full above)
I'm not familiar with PSUs at all. I have one video card plugged into one PCI-E socket on the PSU and the other video card plugged into another PCI-E socket on the PSU. Wouldn't that distribute the load?

Any suggestions on how to test this would be appreciated.
DeathAndTaxes (Donator, Legendary; Activity: 1218)
January 16, 2012, 06:59:52 PM  #38

Quote from: Bananington on January 16, 2012, 06:56:09 PM (quoted in full above)

It can be more complicated than that.

Each 6970 has two PCI-E power connectors.

Your PSU has four 12V rails, and each rail is really only powerful enough to support one 6970. If you put both 6970s on the same rail, and/or a 6970 and the motherboard on the same rail, it will overload that rail and make the system unstable.

So you want it hooked up like this:

Motherboard & other 12V loads - rail 1
First 6970 - rail 2
Second 6970 - rail 3

Figuring out which connector goes with which rail can be "tricky"; they are often poorly marked. Looking at the wattage sticker on the PSU, any documentation that came with it, and any specs published online may help.
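
Once you think you know the mapping, you can sanity-check a planned hookup on paper. This is only a sketch of that bookkeeping: the wattages are ballpark assumptions of mine, and the rail numbers only mean anything once you've confirmed which modular sockets actually feed which rail.

Code:
# Sketch: estimate the 12V load on each rail for a planned hookup.
# The wattage figures are rough assumptions, not measurements.
RAIL_LIMIT_AMPS = 40.0   # per-rail limit from the PSU label

# planned assignment of loads (watts) to rails
hookup = {
    1: {"motherboard + CPU + drives": 200.0},
    2: {"6970 #1": 250.0},
    3: {"6970 #2": 250.0},
    4: {},                                   # left unused
}

for rail, loads in sorted(hookup.items()):
    amps = sum(loads.values()) / 12.0
    status = "OK" if amps <= RAIL_LIMIT_AMPS else "OVERLOADED"
    names = ", ".join(loads) or "(unused)"
    print(f"rail {rail}: {amps:4.1f} A / {RAIL_LIMIT_AMPS:.0f} A  {status}  {names}")

Move both cards onto rail 2 in that dictionary and you'll see the rail go past its limit, which is exactly the kind of instability described above.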
Bananington (Sr. Member; Activity: 366)
January 16, 2012, 07:27:47 PM  #39

Alright, I just switched the cards and they both run at stock speeds without crashing. I have no idea anymore. Problem solved so far. I'll post if I have any unforeseen issues later.
Bananington (Sr. Member; Activity: 366)
January 17, 2012, 03:40:04 AM  #40

OK, so I switched slots, so GPU 1 is now GPU 2 and vice versa. Now, instead of the computer locking up when GPU 2 is running at 99% load at stock clock speeds, the video driver crashes and I get a notification saying that my driver crashed and was restarted. This is a step up from my whole computer crashing; now it's just the driver. :/

Any ideas, ladies and gentlemen?