Bitcoin Forum
November 18, 2017, 02:15:49 PM *
News: Latest stable version of Bitcoin Core: 0.15.1  [Torrent].
Author Topic: GPU utilization  (Read 1078 times)
cignus76 (Newbie, Activity: 14) | February 24, 2016, 09:20:24 PM | #1

I'm mining the x11 algorithm with my GTX 970. I have it set to use only 58% of the power with EVGA PrecisionX. I wanted to lower the GPU utilization to save wear and tear on my card, as this is my main PC. Is there any way to lower the GPU utilization with ccminer?

Or is utilization even the issue I should be correcting if I want to extend the life of the card?

Thanks in advance.
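For reference, ccminer does expose an intensity knob: a lower intensity launches fewer GPU work threads, which leaves the card less loaded. A hedged sketch of the command line (flag names as in the common tpruvot ccminer fork; check `ccminer --help` on your build, and the pool URL, wallet, and intensity value here are placeholders, not recommendations):

```shell
# Run x11 at reduced intensity; the valid -i range depends on the build.
# pool.example.com, port, and WALLET are placeholders.
ccminer -a x11 -i 17 -o stratum+tcp://pool.example.com:3333 -u WALLET -p x
```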
wilding2004 (Jr. Member, Activity: 37) | February 24, 2016, 10:21:18 PM | #2

Quote from: cignus76 on February 24, 2016, 09:20:24 PM
I'm mining the x11 algorithm with my GTX 970. I have it set to use only 58% of the power with EVGA PrecisionX. I wanted to lower the GPU utilization to save wear and tear on my card, as this is my main PC. Is there any way to lower the GPU utilization with ccminer?

Or is utilization even the issue I should be correcting if I want to extend the life of the card?

Thanks in advance.

A GPU is a solid-state device. No moving parts, no mechanics, therefore no wear and tear. As long as you keep to the default voltage and don't allow it to overheat, it will last exactly as long running at 50% as it will running at 100%.
Kalder (Full Member, Activity: 168) | February 25, 2016, 10:15:53 AM | #3

Quote from: cignus76 on February 24, 2016, 09:20:24 PM
I'm mining the x11 algorithm with my GTX 970. I have it set to use only 58% of the power with EVGA PrecisionX. I wanted to lower the GPU utilization to save wear and tear on my card, as this is my main PC. Is there any way to lower the GPU utilization with ccminer?

Or is utilization even the issue I should be correcting if I want to extend the life of the card?

Thanks in advance.

Quote from: wilding2004 on February 24, 2016, 10:21:18 PM
A GPU is a solid-state device. No moving parts, no mechanics, therefore no wear and tear. As long as you keep to the default voltage and don't allow it to overheat, it will last exactly as long running at 50% as it will running at 100%.

Yes. Most mining programs let you undervolt the core or memory to generate less heat.
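On the NVIDIA side there is no direct undervolt switch in the stock tools, but capping the board power has a similar effect: the card drops its clocks and voltage to stay under the cap. A hedged sketch, assuming the stock `nvidia-smi` utility and a card that permits power-limit changes (the 120 W figure is an illustrative value, not a GTX 970 recommendation):

```shell
# Query the supported power-limit range for this card first
nvidia-smi -q -d POWER
# Then cap the board power (requires root; value in watts)
sudo nvidia-smi -pl 120
```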

cignus76 (Newbie, Activity: 14) | February 25, 2016, 08:09:55 PM | #4

Okay, awesome. I can't remember where, but I thought I'd read that the strain from running at 100% constantly could shorten the card's life.
wilding2004 (Jr. Member, Activity: 37) | February 25, 2016, 10:53:07 PM | #5

Quote from: cignus76 on February 25, 2016, 08:09:55 PM
Okay, awesome. I can't remember where, but I thought I'd read that the strain from running at 100% constantly could shorten the card's life.

That might have been referring to the fans. They are obviously mechanical and do suffer from wear and tear.

Just a little FYI: I've been running a pair of 970s at 100% load with a 100 MHz overclock, 24/7, for the past nine months. They're actually running the Folding@home program, which requires much higher levels of stability than mining or gaming does. Undervolting is therefore out of the question, and sometimes overvolting is needed to keep an overclock stable. The side effect is higher power usage and more heat. I've had the fans set to run at 100% since day one, and just recently I've noticed a little whining starting, and also a little oscillation in the fans' rotation.
Kalder (Full Member, Activity: 168) | February 26, 2016, 08:50:54 AM | #6

Quote from: cignus76 on February 25, 2016, 08:09:55 PM
Okay, awesome. I can't remember where, but I thought I'd read that the strain from running at 100% constantly could shorten the card's life.

Quote from: wilding2004 on February 25, 2016, 10:53:07 PM
That might have been referring to the fans. They are obviously mechanical and do suffer from wear and tear.

Just a little FYI: I've been running a pair of 970s at 100% load with a 100 MHz overclock, 24/7, for the past nine months. They're actually running the Folding@home program, which requires much higher levels of stability than mining or gaming does. Undervolting is therefore out of the question, and sometimes overvolting is needed to keep an overclock stable. The side effect is higher power usage and more heat. I've had the fans set to run at 100% since day one, and just recently I've noticed a little whining starting, and also a little oscillation in the fans' rotation.

For AMD cards, take the 280X as an example: the default core voltage is 1.2 V, which is higher than needed; 1.12 V is usually enough to support a 1000 MHz core frequency.

QuintLeo (Hero Member, Activity: 882) | February 26, 2016, 08:52:27 AM | #7

Quote from: cignus76 on February 24, 2016, 09:20:24 PM
I'm mining the x11 algorithm with my GTX 970. I have it set to use only 58% of the power with EVGA PrecisionX. I wanted to lower the GPU utilization to save wear and tear on my card, as this is my main PC. Is there any way to lower the GPU utilization with ccminer?

Or is utilization even the issue I should be correcting if I want to extend the life of the card?

Thanks in advance.

Quote from: wilding2004 on February 24, 2016, 10:21:18 PM
A GPU is a solid-state device. No moving parts, no mechanics, therefore no wear and tear. As long as you keep to the default voltage and don't allow it to overheat, it will last exactly as long running at 50% as it will running at 100%.

100% wrong.

The hotter ANY electronic device runs, the shorter its lifetime (on average) will be.

Running it at 100% WILL make it run hotter.

This is WHY manufacturers will VOID YOUR GUARANTEE if they can prove, or have reasonable grounds to believe, that you have overclocked a GPU (or CPU, or any other device with an adjustable clock): it puts quite a bit more heat stress on the components, which shortens their lifetimes to well below what they were designed for.
Kalder (Full Member, Activity: 168) | February 26, 2016, 08:54:36 AM | #8

Quote from: cignus76 on February 24, 2016, 09:20:24 PM
I'm mining the x11 algorithm with my GTX 970. I have it set to use only 58% of the power with EVGA PrecisionX. I wanted to lower the GPU utilization to save wear and tear on my card, as this is my main PC. Is there any way to lower the GPU utilization with ccminer?

Or is utilization even the issue I should be correcting if I want to extend the life of the card?

Thanks in advance.

Quote from: wilding2004 on February 24, 2016, 10:21:18 PM
A GPU is a solid-state device. No moving parts, no mechanics, therefore no wear and tear. As long as you keep to the default voltage and don't allow it to overheat, it will last exactly as long running at 50% as it will running at 100%.

Quote from: QuintLeo on February 26, 2016, 08:52:27 AM
100% wrong.

The hotter ANY electronic device runs, the shorter its lifetime (on average) will be.

Running it at 100% WILL make it run hotter.

This is WHY manufacturers will VOID YOUR GUARANTEE if they can prove, or have reasonable grounds to believe, that you have overclocked a GPU (or CPU, or any other device with an adjustable clock): it puts quite a bit more heat stress on the components, which shortens their lifetimes to well below what they were designed for.

I agree with you. But he did say "don't allow it to overheat". As long as there is no overheating, the card will last a long time.

wilding2004 (Jr. Member, Activity: 37) | February 26, 2016, 09:17:30 AM | #9

Quote from: cignus76 on February 24, 2016, 09:20:24 PM
I'm mining the x11 algorithm with my GTX 970. I have it set to use only 58% of the power with EVGA PrecisionX. I wanted to lower the GPU utilization to save wear and tear on my card, as this is my main PC. Is there any way to lower the GPU utilization with ccminer?

Or is utilization even the issue I should be correcting if I want to extend the life of the card?

Thanks in advance.

Quote from: wilding2004 on February 24, 2016, 10:21:18 PM
A GPU is a solid-state device. No moving parts, no mechanics, therefore no wear and tear. As long as you keep to the default voltage and don't allow it to overheat, it will last exactly as long running at 50% as it will running at 100%.

Quote from: QuintLeo on February 26, 2016, 08:52:27 AM
100% wrong.

The hotter ANY electronic device runs, the shorter its lifetime (on average) will be.

Running it at 100% WILL make it run hotter.

This is WHY manufacturers will VOID YOUR GUARANTEE if they can prove, or have reasonable grounds to believe, that you have overclocked a GPU (or CPU, or any other device with an adjustable clock): it puts quite a bit more heat stress on the components, which shortens their lifetimes to well below what they were designed for.

No. A maximum temperature is built into the design of any IC. Running at 100% load stays under that designed-for temperature; only going over it will result in heat damage. Overclocking and overvolting could do this, but in reality most ICs reduce power automatically to prevent it.

What CAN happen is that a card contains sub-standard components (capacitors etc.) that simply aren't specified to the level they need to be. (THIS is what manufacturers do to keep costs down.)
QuintLeo (Hero Member, Activity: 882) | February 27, 2016, 08:49:15 PM | #10

There is no "max temperature" built into any IC: the more power you run through it, the hotter it gets, and the more likely it is to fail sooner.

There is ZERO guarantee "built into any IC" about not getting too hot for reliability or outright failure. I can count the exceptions I have seen on one hand, and I do NOT need all the fingers, in 30+ years of working with many, many ICs, both commercial AND mil-spec, SPECIFICALLY INCLUDING GPUs from AMD and NVIDIA.

It's bloody RARE for any IC to have any form of temperature limiting; it is not normally built into an IC, though it's more common in certain types of software (sgminer and the like have temp control as an OPTION, but it doesn't always work and is NOT ENABLED BY DEFAULT).

DO NOT ASSUME TEMP CONTROL.

Temperature SENSING is NOT "TEMP CONTROL" or "TEMP LIMITING". Do not confuse them; they are NOT THE SAME AT ALL.

Temperature SENSING is fairly widespread, and common in GPUs, but it DOES NOT LIMIT THE MAX TEMP OF THE IC.
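Since you cannot assume built-in temp control, one option is a do-it-yourself guard that turns temperature *sensing* into temperature *limiting*. A hedged sketch in Python: `read_gpu_temp` assumes the stock `nvidia-smi` tool is installed (NVIDIA only), the miner-pausing step is left as a comment, and the 80 degree limit and all names here are illustrative, not from this thread.

```python
import subprocess
import time

def should_throttle(temp_c, limit_c=80):
    """Decide whether work should be paused at this temperature."""
    return temp_c >= limit_c

def read_gpu_temp(gpu_index=0):
    """Read the current GPU core temperature via nvidia-smi."""
    out = subprocess.check_output([
        "nvidia-smi",
        f"--id={gpu_index}",
        "--query-gpu=temperature.gpu",
        "--format=csv,noheader",
    ])
    return int(out.strip())

def guard_loop(limit_c=80, poll_s=5):
    """Poll the sensor and act on the limit; sensing alone does nothing."""
    while True:
        if should_throttle(read_gpu_temp(), limit_c):
            pass  # e.g. send SIGSTOP to the miner process here
        time.sleep(poll_s)
```

The decision logic is split out from the sensor read so the guard can be tested without a GPU present.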
wilding2004 (Jr. Member, Activity: 37) | February 28, 2016, 07:43:14 PM | #11

I said built into the design; there's a difference. Same with power, voltage, and thermal cycling: all are ranges of values built into the design. Keep all the variables within design parameters and the IC will have the same life expectancy.
QuintLeo (Hero Member, Activity: 882) | February 29, 2016, 06:25:43 AM | #12

Quote from: wilding2004 on February 28, 2016, 07:43:14 PM
I said built into the design; there's a difference. Same with power, voltage, and thermal cycling: all are ranges of values built into the design. Keep all the variables within design parameters and the IC will have the same life expectancy.

You said "built into the design of any IC".

In any event, a LOT of IC-based circuits have ZERO temp limiting or control, and even those that DO have it often don't enable it by default, or set the default to a rather high temperature.

cignus76 (Newbie, Activity: 14) | March 01, 2016, 03:42:25 PM | #13

Okay, but back to the original question: will 100% utilization of a card have any effect on its life if it is operating at or below a normal temperature?
Kalder (Full Member, Activity: 168) | March 02, 2016, 11:30:55 AM | #14

Quote from: cignus76 on March 01, 2016, 03:42:25 PM
Okay, but back to the original question: will 100% utilization of a card have any effect on its life if it is operating at or below a normal temperature?

If the card is operating below its normal temperature, there is no effect on its life expectancy.

QuintLeo (Hero Member, Activity: 882) | March 02, 2016, 11:45:33 AM | #15

To reiterate: the hotter the card runs, the more likely it is to fail sooner.

PERIOD.

Higher utilization will make the card run hotter. You CAN help keep the card cooler by upping the fan settings or undervolting it, to counteract part or all of that effect.
Bombadil (Hero Member, Activity: 644) | March 02, 2016, 09:03:49 PM | #16

Quote from: QuintLeo on March 02, 2016, 11:45:33 AM
To reiterate: the hotter the card runs, the more likely it is to fail sooner.

PERIOD.

Higher utilization will make the card run hotter. You CAN help keep the card cooler by upping the fan settings or undervolting it, to counteract part or all of that effect.

The correlation between heat and wear on solid-state devices isn't linear but exponential. In practice, this means that if you keep the heat below a certain threshold, you can do with the card whatever you want; it won't eat away much of its lifespan.

The fans have a more linear correlation, but they can easily be swapped out and fixed. I usually keep my CUDA cards at a max of 70°C. They've been mining for a year now without any problems, fans at a max of 90%. Make sure to keep them dust-free and ventilated, so the fans themselves don't need to do much work.
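That "exponential correlation" is usually modelled with an Arrhenius-style acceleration factor in electronics reliability work. A hedged sketch: the 0.7 eV activation energy below is a commonly quoted illustrative value, not a measured figure for any GPU, so the numbers only show the shape of the curve, not a real lifetime prediction.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(temp_c, ref_temp_c, activation_energy_ev=0.7):
    """Relative rate of wear at temp_c compared to ref_temp_c
    under a simple Arrhenius model. A factor > 1 means wear
    accumulates faster at the hotter temperature."""
    t = temp_c + 273.15        # convert to kelvin
    t_ref = ref_temp_c + 273.15
    return math.exp(
        (activation_energy_ev / BOLTZMANN_EV) * (1 / t_ref - 1 / t)
    )

# Under these assumptions, running at 85 °C instead of 70 °C wears
# the silicon roughly 2-3x faster, while 60 °C wears it slower.
print(acceleration_factor(85, 70))
print(acceleration_factor(60, 70))
```

This is why a modest temperature drop (better airflow, a power cap) buys a disproportionate amount of lifetime: the rate falls exponentially, not linearly, with temperature.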