Topic: GPU temperature and its relationship with power consumption
_Vince_ (Jr. Member, Activity: 33)
February 22, 2012, 01:52:35 PM  #1

We all know that the higher the temperature, the higher the current leakage in the transistors. But the question is: how much higher?

Take the same GPU with a constant clock, fan and workload: at 80°C it consumes more energy than it does at 70°C, because current leakage is much higher at the higher temperature and VRM efficiency decreases as VRM temperature rises.

There is an interesting article here:

http://www.techpowerup.com/reviews/Zotac/GeForce_GTX_480_Amp_Edition/27.html

"so for every °C that the card runs hotter it needs 1.2W more power to handle the exact same load."


Have you ever measured your card to see how much additional power it draws when the temperature increases by 1°C?
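
For a rough sense of scale, here is a minimal back-of-the-envelope sketch in Python. It simply applies the ~1.2 W/°C slope quoted above to a 10°C difference; the electricity price is a made-up placeholder, and real cards will have a different slope.

Code:
# Back-of-the-envelope: extra draw and cost from running a card hotter,
# assuming a roughly linear leakage slope (the article quotes ~1.2 W/°C
# for a GTX 480; your card will differ).
WATTS_PER_DEGREE = 1.2   # assumed slope, W per °C
PRICE_PER_KWH = 0.10     # placeholder electricity price, USD

def extra_cost(temp_hot_c, temp_cool_c, hours=24 * 365):
    """Extra watts and yearly cost of running at temp_hot_c instead of temp_cool_c."""
    extra_watts = (temp_hot_c - temp_cool_c) * WATTS_PER_DEGREE
    extra_kwh = extra_watts * hours / 1000.0
    return extra_watts, extra_kwh * PRICE_PER_KWH

watts, usd = extra_cost(80, 70)
print(f"80°C vs 70°C: +{watts:.0f} W, roughly ${usd:.0f} per card per year")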

BookLover (Hero Member, Activity: 535)
February 22, 2012, 02:03:32 PM  #2

(marking)

cpt_howdy (Member, Activity: 70)
February 22, 2012, 02:07:39 PM  #3

If you want to maximise the efficiency of your cooled mining card, there'll be an equilibrium point where a 1W increase in power to your cooling solution (probably a fan) would result in a 1W reduction in the power draw of your card. At this point it's not worth cooling the card down further (unless you go for a more efficient cooling technology)
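
To illustrate that equilibrium numerically, here is a minimal Python sketch with made-up constants: GPU power is assumed to rise ~1.2 W per °C of leakage, fan power to grow with the cube of fan speed, and cooling to show diminishing returns at high airflow. None of the numbers come from a real card; the only point is that total draw has a minimum below 100% fan.

Code:
import math

# Toy model of total wall draw = GPU power (leakage rises with temperature)
# plus fan power. All constants are illustrative, not measured.
LEAK_W_PER_C = 1.2    # assumed extra GPU watts per °C
FAN_MAX_W = 6.0       # assumed fan draw at 100% duty
TEMP_MIN_C = 55.0     # assumed best achievable GPU temperature
TEMP_SPAN_C = 40.0    # assumed extra °C above TEMP_MIN_C at 0% fan
GPU_BASE_W = 200.0    # assumed GPU draw at TEMP_MIN_C

def total_power(duty):
    """GPU + fan watts at a fan duty cycle between 0.0 and 1.0."""
    temp = TEMP_MIN_C + TEMP_SPAN_C * math.exp(-3.0 * duty)  # diminishing returns
    gpu_w = GPU_BASE_W + (temp - TEMP_MIN_C) * LEAK_W_PER_C
    fan_w = FAN_MAX_W * duty ** 3  # fan draw rises steeply with speed
    return gpu_w + fan_w

best = min(range(101), key=lambda d: total_power(d / 100.0))
print(f"Total draw is lowest around {best}% fan: {total_power(best / 100.0):.1f} W")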
DeathAndTaxes (Gerald Davis; Donator, Legendary, Activity: 1218)
February 22, 2012, 02:13:18 PM  #4

Quote from: cpt_howdy on February 22, 2012, 02:07:39 PM
If you want to maximise the efficiency of your cooled mining card, there'll be an equilibrium point where a 1W increase in power to your cooling solution (probably a fan) would result in a 1W reduction in the power draw of your card. At this point it's not worth cooling the card down further (unless you go for a more efficient cooling technology)

True, but one can also run a card cooler by lowering clocks. Lower the clock enough and you can also lower the voltage.

I can run a 5970 @ 40% fan and <60C, but only at 535MHz and 0.7V :)

Right now my power costs, even with "hot GPUs" (~70C), are only about 1/3rd of the revenue, so increased efficiency is mostly academic. However, as the network becomes more efficient (7900-series cards, FPGAs, etc.), things like lower temps, undervolting and underclocking can be used to increase the "effective economic lifespan". When my 12 GH/s farm is no longer economical I can "convert it" to a 6 GH/s farm which is economical and grind out maybe another year's worth of revenue.
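
As a rough illustration of that "effective economic lifespan" point, here is a minimal Python sketch with made-up hash rates, wall power, and electricity price: the lower-clocked configuration stays above break-even at a lower revenue per GH/s.

Code:
# Toy break-even check: how little revenue per GH/s per day can each
# configuration tolerate before power costs exceed income?
# Hash rates, wattages and price are illustrative, not measured.
PRICE_PER_KWH = 0.10  # placeholder electricity price, USD

configs = {
    "stock clocks": (12.0, 3600.0),               # (GH/s, wall watts)
    "underclocked + undervolted": (6.0, 1200.0),  # (GH/s, wall watts)
}

for name, (ghs, watts) in configs.items():
    daily_power_cost = watts / 1000.0 * 24 * PRICE_PER_KWH
    breakeven = daily_power_cost / ghs  # USD per GH/s per day needed to break even
    print(f"{name}: {daily_power_cost:.2f} USD/day in power, "
          f"break-even at {breakeven:.3f} USD per GH/s per day")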
cpt_howdy (Member, Activity: 70)
February 22, 2012, 02:18:48 PM  #5

Quote from: DeathAndTaxes on February 22, 2012, 02:13:18 PM
Quote from: cpt_howdy on February 22, 2012, 02:07:39 PM
If you want to maximise the efficiency of your cooled mining card, there'll be an equilibrium point where a 1W increase in power to your cooling solution (probably a fan) would result in a 1W reduction in the power draw of your card. At this point it's not worth cooling the card down further (unless you go for a more efficient cooling technology)
True, but one can also run a card cooler by lowering clocks. Lower the clock enough and you can also lower the voltage.

I can run a 5970 @ 40% fan and <60C, but only at 535MHz and 0.7V :)

Right now my power costs, even with "hot GPUs" (~70C), are only about 1/3rd of the revenue, so increased efficiency is mostly academic. However, as the network becomes more efficient (7900-series cards, FPGAs, etc.), things like lower temps, undervolting and underclocking can be used to increase the "effective economic lifespan". When my 12 GH/s farm is no longer economical I can "convert it" to a 6 GH/s farm which is economical and grind out maybe another year's worth of revenue.

I agree! There are much greater efficiency gains to be had by undervolting and underclocking, but if you really want to shave off the last few watts, you can ramp up those fans. I personally keep the temps acceptable and the fans low, just so my secret rigs in the cupboards don't get discovered ;)
_Vince_ (Jr. Member, Activity: 33)
February 22, 2012, 02:26:10 PM  #6

If any of you have a Kill-A-Watt and some spare time, please help by running a test:

-With cgminer, set the target temp to 65°C with auto fan, and record the wattage (averaged over 1-2 minutes)

-Repeat with a target temp of 75°C
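
If anyone records readings, here is a minimal Python sketch of the arithmetic with placeholder numbers: average the wall-meter samples at each target temperature, then divide the difference by the 10°C gap to get watts per °C.

Code:
# Replace these with your own Kill-A-Watt samples (one reading every few
# seconds over 1-2 minutes at each cgminer target temperature).
watts_at_65c = [262, 261, 263, 262, 260]  # placeholder samples
watts_at_75c = [274, 275, 273, 276, 274]  # placeholder samples

avg_65 = sum(watts_at_65c) / len(watts_at_65c)
avg_75 = sum(watts_at_75c) / len(watts_at_75c)
delta_w = avg_75 - avg_65

print(f"65°C average: {avg_65:.1f} W, 75°C average: {avg_75:.1f} W")
print(f"Extra draw: {delta_w:.1f} W, about {delta_w / 10:.2f} W per °C")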
