Bitcoin Forum
Author Topic: RIG not booting anymore after a few restarts  (Read 314 times)
tenie (OP)
Newbie
*
Offline

Activity: 6
Merit: 0


View Profile
November 14, 2017, 07:48:47 AM
 #1

Hi,

So the rig has 6 x RX 580 PULSE (with modded BIOS) and a Biostar TB250 BTC motherboard, everything powered by 2 x 700 W Thermaltake PSUs.

I had the rig running pretty well for some time, but a few things happened:

- I had issues with some PSU cables: the SATA connector coming out of the PSU melted. At first I thought the heat in the room caused it, but after a while I noticed that the cable connected to 3 risers and the SSD had melted its connector at the PSU again, so now, finally, I power a max of 2 risers, or 1 riser + the SSD, per cable.

- I noticed some "Display driver stopped responding and has recovered" errors and system freezes a few weeks ago, but I changed the TDR settings in the registry (rough sketch below) and the rig was able to run without crashing for anywhere from 2 to 5 days.
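
(For reference, the TDR change I mean is the Timeout Detection and Recovery delay under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers. Roughly the tweak below, run as administrator and followed by a reboot; the 10-second values are just what guides commonly suggest, nothing official.)

Code:
# rough sketch of the TDR registry tweak (Windows, run elevated, reboot afterwards)
import winreg

path = r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, path, 0, winreg.KEY_WRITE) as key:
    # default TdrDelay is 2 seconds; raising it gives the driver more time before Windows resets it
    winreg.SetValueEx(key, "TdrDelay", 0, winreg.REG_DWORD, 10)
    winreg.SetValueEx(key, "TdrDdiDelay", 0, winreg.REG_DWORD, 10)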


After the last crash I noticed the melted SATA connector at the PSU (from above), reorganized the cables a bit, and reinstalled the drivers. The rig was able to install all the cards, I applied the patch for the drivers, and restarted Windows to have them ready to use.

After the fresh install I was able to start Claymore to do a bit of testing, and all 6 cards were mining well, but I was more interested in the restart issue and whether it would boot back into Windows, so I hit the restart button in Windows a few times. It worked and booted into Windows 2-3 times, but not anymore.


Now I'm still at the point where the cards are installed, drivers installed and patched, but the rig is not able to boot into Windows after a few restarts.

So what could be the issue? A faulty motherboard, risers, GPU?

Thanks :)
Bakhtra
Full Member
***
Offline

Activity: 215
Merit: 100



View Profile
November 14, 2017, 08:30:30 AM
 #2

Try booting without everything in it, just the motherboard, processor, and RAM. Unplug all the GPUs, all the power cables to the risers, and the risers themselves.
Then add the SSD. After that, add one GPU at a time to check each of them.
mittooss
Member
**
Offline

Activity: 154
Merit: 10

DEm1CKDTViM1y9YmEcBaktNLWVx8rwuQUm


View Profile
November 14, 2017, 09:09:25 AM
 #3

Check whether you have any other USB disks connected. If so, remove them and restart. If it still doesn't work, try to boot with a single GPU.
tenie (OP)
Newbie
*
Offline

Activity: 6
Merit: 0


View Profile
November 14, 2017, 08:11:02 PM
 #4

I had the idea to try SimpleMining OS, but somehow that did not work. I tried to boot SimpleMining OS with 1 GPU, but the screen showed just some grey lines. Same with 2 different GPUs. I could not use the integrated graphics because there was no signal to the monitor most of the time with a DVI-D to HDMI adapter.

Now I'm working on testing the GPUs, one at a time: install drivers, patch, mine a bit, and then test the restart and Windows boot.



But "how" normal is to have "Display driver stopped responding and has recovered" in Events logs? Because I saw this again.
cryptocoinfarmer
Member
**
Offline

Activity: 154
Merit: 10


View Profile
November 15, 2017, 03:24:26 PM
 #5

Looks like you have a PSU problem. But I could be wrong.
First of all, if you cannot get a signal to the monitor, then check the BIOS settings for the intended card.
Or you could change the HDMI to DVI adapter, because I have the same motherboard and there are no problems with that.
The problem with "Display driver stopped responding" could be with the overclocked video card.

Could you draw a diagram of your wiring and add some information about the devices you have?
What kind of synchronizer do you have?
  

tenie (OP)
Newbie
*
Offline

Activity: 6
Merit: 0


View Profile
November 17, 2017, 09:23:10 AM
 #6

Quote from: cryptocoinfarmer on November 15, 2017, 03:24:26 PM
Looks like you have a PSU problem. But I could be wrong.
First of all, if you cannot get a signal to the monitor, then check the BIOS settings for the intended card.
Or you could change the HDMI to DVI adapter, because I have the same motherboard and there are no problems with that.
The problem with "Display driver stopped responding" could be with the overclocked video card.

Could you draw a diagram of your wiring and add some information about the devices you have?
What kind of synchronizer do you have?



It had a signal for like 3 seconds, after a few restarts, one time. It was a new DVI-D to HDMI adapter, also cheap, so it could be faulty or something. I'll try to find a better one.

About the overclock: the core clock is down to 1150 from 1340 and only the memory clock is a bit up, to around 2100. For the last 2 days I've also been testing with a bit less voltage for the cards, so I'm running Claymore with 900 mV in the parameters.
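
For reference, the start.bat line I use is roughly like this (pool and wallet are placeholders, and the flag names -cclock / -mclock / -cvddc are as I remember them from the Claymore readme, so double-check there before copying):

Code:
EthDcrMiner64.exe -epool <pool:port> -ewal <wallet> -epsw x -cclock 1150 -mclock 2100 -cvddc 900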

About devices:

1 x Intel Celeron
1 x 4 GB DDR4
1 x Biostar TB250 BTC
2 x 700 W Thermaltake PSU
1 x Wazney synchronizer (but I noticed that sometimes, after a few more forced shutdowns, the second PSU will not start; not sure if the synchronizer has something to do with this)
6 x Wazney (I think) risers
1 x Biostar 120 GB SSD
6 x RX 580 PULSE

About wires:

Before: 1 SATA cable for 3 x risers and 1 x SSD on one PSU, and 1 SATA cable for 3 x risers. And since each PSU has only 2 PCIe connections with split wires, 2 GPUs needed to be on the same wire. (This is also where I had the melting issues, with too many things on one SATA cable.)

Now: 1 SATA cable for 2 x risers and 1 SATA cable for 1 x riser, and on the other PSU 1 SATA cable for 2 x risers plus 1 SATA cable for 1 x riser and 1 x SSD. The GPUs are still connected the same way.
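
(Doing the rough math on why the connector melted, using commonly quoted ratings rather than anything I measured: a riser can be asked for up to the ~75 W a PCIe x16 slot is specified for, while a single SATA power plug is usually rated for only about 54 W, so 3 risers + SSD on one cable was way over. In practice an RX 580 with its 8-pin connected pulls much less than 75 W through the slot, which is probably why 1-2 risers per cable has held up so far.)

Code:
# rough sanity check on SATA-powered risers (spec ratings, not measurements)
SLOT_MAX_W = 75        # worst case a riser can be asked to deliver (PCIe x16 slot spec)
SATA_PLUG_MAX_W = 54   # common rating for one SATA power plug (~4.5 A on 12 V)
SSD_W = 3              # a SATA SSD draws a few watts at most

for risers, ssds in [(3, 1), (2, 0), (1, 1)]:
    worst = risers * SLOT_MAX_W + ssds * SSD_W
    verdict = "over the plug rating" if worst > SATA_PLUG_MAX_W else "ok"
    print(f"{risers} riser(s) + {ssds} SSD: worst case {worst} W on a ~{SATA_PLUG_MAX_W} W plug -> {verdict}")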

Since Thermaltake has a piece of software that can read some information from the PSU (there is a USB cable connected from the PSU to the MOBO), I was able to see some details there.

Yesterday I had only 3 GPUs + risers on one PSU (reporting almost 400 W) and the MOBO + CPU + SSD on the other (reporting up to 50 W), and:
- on the 5 V line the PSU software reported 5.3 V
- one of the GPUs hung at some point and Claymore tried to do a restart.

Before adding the 3rd GPU, while I was away from the rig, it restarted itself and I saw in Windows that it had been an unexpected shutdown. :/

jeswin
Newbie
*
Offline

Activity: 42
Merit: 0


View Profile
November 17, 2017, 09:34:44 AM
 #7

Try removing all the cables and retry
tenie (OP)
Newbie
*
Offline

Activity: 6
Merit: 0


View Profile
November 17, 2017, 01:42:14 PM
 #8

Yes, already did that.

A little update:

With 3 x GPU and 3 x riser on one PSU I have readings like this:
- 11.3 V (even 11.2 sometimes) on the 12 V line (at 34.0 A)
- 5.31 V on the 5 V line
- 3.39 V on the 3.3 V line
How good / safe / stable is this?
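
For reference, from what I can find the ATX spec allows +/-5% on each of these rails, so here is a quick check of the readings (assuming the PSU software is reporting accurately):

Code:
# check the reported rail voltages against the usual ATX +/-5% tolerance
readings = {"+12V": 11.3, "+5V": 5.31, "+3.3V": 3.39}
nominal  = {"+12V": 12.0, "+5V": 5.00, "+3.3V": 3.30}

for rail, value in readings.items():
    low, high = nominal[rail] * 0.95, nominal[rail] * 1.05
    status = "within spec" if low <= value <= high else "outside the +/-5% window"
    print(f"{rail}: {value:.2f} V (allowed {low:.2f}-{high:.2f} V) -> {status}")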

Proton2233
Sr. Member
****
Offline

Activity: 434
Merit: 252


View Profile
November 17, 2017, 02:04:51 PM
 #9

Two 700 W power supplies for 6 GPUs is not enough. If you mine CryptoNight then maybe their power will be enough, but if you run dual mining Ethereum+Decred it may not be. Just one sagging voltage and your setup will not start. A 700 W PSU is enough to power 2 GPUs. Try disabling one GPU to check the efficiency of your installation.
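
Very rough numbers to back this up (the per-card figure is my guess for dual mining, not something measured, so take it lightly):

Code:
# rough power budget, assuming ~190 W per RX 580 when dual mining (less if ETH-only / undervolted)
gpu_w    = 190           # guessed per-card draw while dual mining
riser_w  = 10            # riser, fans, etc. per card (guess)
system_w = 100           # motherboard + CPU + RAM + SSD (guess)
psu_w    = 2 * 700

draw = 6 * (gpu_w + riser_w) + system_w
print(f"estimated draw: {draw} W of {psu_w} W total -> {100 * draw / psu_w:.0f}% load")

The usual advice is to stay well under about 80% of the label rating for 24/7 load, and that is before any rail starts sagging.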
Mike011
Full Member
***
Offline

Activity: 312
Merit: 104


View Profile
November 17, 2017, 02:13:33 PM
 #10

Also try clearing the CMOS / removing the motherboard battery.
tenie (OP)
Newbie
*
Offline

Activity: 6
Merit: 0


View Profile
November 17, 2017, 03:44:36 PM
Last edit: November 18, 2017, 02:04:47 PM by tenie
 #11

I only used it to mine ETH, and it was working with 2 x 700 W. Until it didn't anymore. :-\

I also did a CMOS reset; that's how I was able to get into the BIOS after getting only a black screen, even after restarting from the button.


Also, I'm looking for a replacement PSU and I saw an HP server PSU, 1200 W (or 2000 W, after I did a bit more searching), cabled with at least 6 x 6-pin + 6 x 8-pin connectors.

Should this be good to power 5-6 x GPU + 5-6 x riser at a time? Maybe keep 1 x GPU + riser on the 700 W PSU that I would keep for the MOBO + CPU + SSD.
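
My own rough math for that split, using guessed per-card numbers for undervolted ETH-only mining and the conservative 1200 W figure:

Code:
# rough check of the planned split (guessed draws, undervolted ETH-only mining)
gpu_w, riser_w, system_w = 130, 10, 100   # guesses: per undervolted RX 580, per riser, rest of the rig
server_psu_w, atx_psu_w  = 1200, 700

server_load = 5 * (gpu_w + riser_w)               # 5 GPUs + risers on the HP server PSU
atx_load    = 1 * (gpu_w + riser_w) + system_w    # 1 GPU + riser + MOBO/CPU/SSD on the 700 W unit
print(f"HP PSU:    {server_load} W of {server_psu_w} W ({100 * server_load / server_psu_w:.0f}%)")
print(f"700 W PSU: {atx_load} W of {atx_psu_w} W ({100 * atx_load / atx_psu_w:.0f}%)")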