Author Topic: GPU's temperature and its relationship with power consumption  (Read 5223 times)
_Vince_ (OP)
Newbie
Activity: 33
Merit: 0
February 22, 2012, 01:52:35 PM
#1

We all know that the higher the temperature, the higher the current leakage in the transistors. But the question is: how much higher?

Take the same GPU with constant clock, fan, workload, and so on: at 80°C it consumes more power than at 70°C. That is because current leakage is much higher at higher temperatures, and VRM efficiency decreases as VRM temperature rises.

There is an interesting article here:

http://www.techpowerup.com/reviews/Zotac/GeForce_GTX_480_Amp_Edition/27.html

"so for every °C that the card runs hotter it needs 1.2W more power to handle the exact same load."


Have you ever measured your card to see how much additional power it draws when the temperature increases by 1°C?
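
If that 1.2 W/°C figure holds roughly linearly (a big assumption, and it will differ from card to card), the back-of-the-envelope maths is simple. A minimal Python sketch:

Code:
# Back-of-the-envelope sketch, assuming the ~1.2 W/°C figure from the
# linked GTX 480 review applies linearly.  The number is card-specific.
WATTS_PER_DEG_C = 1.2

def extra_power(temp_c, ref_temp_c=70.0):
    """Estimated extra draw at temp_c compared to running at ref_temp_c."""
    return WATTS_PER_DEG_C * (temp_c - ref_temp_c)

print(extra_power(80.0))   # ~12 W more for exactly the same workload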

BookLover
Hero Member
Activity: 533
Merit: 500
^Bitcoin Library of Congress.
February 22, 2012, 02:03:32 PM
#2

(marking)

cpt_howdy
Member
Activity: 70
Merit: 10
February 22, 2012, 02:07:39 PM
#3

If you want to maximise the efficiency of your cooled mining card, there'll be an equilibrium point where a 1W increase in power to your cooling solution (probably a fan) would result in a 1W reduction in the power draw of your card. At this point it's not worth cooling the card down further (unless you go for a more efficient cooling technology).
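
To make that equilibrium concrete, here is a toy sketch; the fan and leakage models in it are invented numbers purely for illustration, not measurements from any real card:

Code:
# Toy illustration of the fan-vs-card equilibrium.  Both models below
# are made up for the example, not measured values.

def card_power(fan_pct):
    """Card wall draw (W) at a given fan speed -- invented model."""
    temp_c = 50.0 + 1200.0 / fan_pct        # cooling has diminishing returns
    return 180.0 + 1.2 * (temp_c - 60.0)    # ~1.2 W extra per degree C

def fan_power(fan_pct, max_fan_w=15.0):
    """Fan draw (W) grows roughly with the cube of its speed."""
    return max_fan_w * (fan_pct / 100.0) ** 3

# Total draw is minimised where an extra 1 W of fan no longer buys at
# least 1 W of card savings; past that point more fan speed is wasted.
best = min(range(20, 101), key=lambda f: card_power(f) + fan_power(f))
print(best, round(card_power(best) + fan_power(best), 1))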
DeathAndTaxes
Donator
Legendary
Activity: 1218
Merit: 1079
Gerald Davis
February 22, 2012, 02:13:18 PM
#4

Quote from: cpt_howdy on February 22, 2012, 02:07:39 PM
If you want to maximise the efficiency of your cooled mining card, there'll be an equilibrium point where a 1W increase in power to your cooling solution (probably a fan) would result in a 1W reduction in the power draw of your card. At this point it's not worth cooling the card down further (unless you go for a more efficient cooling technology).

True, but one can also run a card cooler by lowering clocks. Lower the clock enough and you can also lower the voltage.

I can run a 5970 @ 40% fan and <60°C, but only at 535 MHz and 0.7 V :)

Right now my power costs, even with "hot" GPUs (~70°C), are only about one third of the revenue, so increased efficiency is mostly academic. However, as the network becomes more efficient (7900-series cards, FPGAs, etc.), things like lower temps, undervolting, and underclocking can be used to extend the "effective economic lifespan". When my 12 GH/s farm is no longer economical, I can "convert" it to a 6 GH/s farm that is economical and grind out maybe another year's worth of revenue.
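
Roughly speaking, dynamic power scales with V² × f, so the 535 MHz / 0.7 V point should sit around a third of stock dynamic power. A quick sketch, assuming a 725 MHz / ~1.05 V stock operating point for the 5970 and ignoring leakage and VRM losses:

Code:
# Rough CMOS dynamic-power scaling sketch (P_dyn ~ V^2 * f).  The stock
# point (725 MHz / 1.05 V) is an assumption for illustration; leakage
# and VRM losses are ignored.
def relative_dynamic_power(v, f, v_stock=1.05, f_stock=725.0):
    """Dynamic power relative to the assumed stock settings."""
    return (v / v_stock) ** 2 * (f / f_stock)

print(round(relative_dynamic_power(0.7, 535.0), 2))   # ~0.33 of stock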
cpt_howdy
Member
Activity: 70
Merit: 10
February 22, 2012, 02:18:48 PM
#5

Quote from: DeathAndTaxes on February 22, 2012, 02:13:18 PM
Quote from: cpt_howdy on February 22, 2012, 02:07:39 PM
If you want to maximise the efficiency of your cooled mining card, there'll be an equilibrium point where a 1W increase in power to your cooling solution (probably a fan) would result in a 1W reduction in the power draw of your card. At this point it's not worth cooling the card down further (unless you go for a more efficient cooling technology).

True, but one can also run a card cooler by lowering clocks. Lower the clock enough and you can also lower the voltage.

I can run a 5970 @ 40% fan and <60°C, but only at 535 MHz and 0.7 V :)

Right now my power costs, even with "hot" GPUs (~70°C), are only about one third of the revenue, so increased efficiency is mostly academic. However, as the network becomes more efficient (7900-series cards, FPGAs, etc.), things like lower temps, undervolting, and underclocking can be used to extend the "effective economic lifespan". When my 12 GH/s farm is no longer economical, I can "convert" it to a 6 GH/s farm that is economical and grind out maybe another year's worth of revenue.

I agree! There are much greater efficiency gains to be had by undervolting and underclocking, but if you really want to shave off the last few watts possible, then you can ramp up those fans. I personally keep the temps acceptable and the fans low, just so my secret rigs in the cupboards don't get discovered ;)
_Vince_ (OP)
Newbie
Activity: 33
Merit: 0
February 22, 2012, 02:26:10 PM
#6

If any of you have a Kill-A-Watt and some spare time, please help by running this test:

- With cgminer, set the target temp to 65°C with auto fan, and record the wattage (averaged over 1-2 minutes)

- Repeat with a target temp of 75°C
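
For crunching the numbers afterwards, something like this would do (the readings below are placeholders; just note down whatever the Kill-A-Watt shows every few seconds):

Code:
# Sketch for averaging the two runs and estimating extra watts per °C.
# The readings below are placeholders, not real measurements.
watts_at_65c = [248, 251, 249, 250, 247]   # target temp 65°C, auto fan
watts_at_75c = [259, 262, 260, 261, 258]   # target temp 75°C, auto fan

avg_65 = sum(watts_at_65c) / len(watts_at_65c)
avg_75 = sum(watts_at_75c) / len(watts_at_75c)

print((avg_75 - avg_65) / (75 - 65))   # extra watts per degree C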
