Author Topic: Does underclocking reduce power consumption?  (Read 9938 times)
Yannick (OP) | Member | Activity: 68 | Merit: 10
July 10, 2011, 11:13:09 PM (last edit: July 11, 2011, 01:11:46 AM)  #1

Hey you guys,

At night, I underclock my hardware to try to catch some sleep while the fans are quieter.

So I underclock both 5870 cards to 700MHz core and 250MHz memory. I notice that the GPU fans are only spinning at around 50% and I'm generating ~600 Mhash/sec.

However, GPU1 and GPU2 usage is still the same: 98%!

During the daytime, I obviously overclock the cards with max fan speed and get ~840 Mhash/sec.

Now I'm wondering: given that GPU usage is still 98% while underclocked at night, does this mean power consumption is the same as if the cards were overclocked?

Thanks!

CYPER | Hero Member | Activity: 798 | Merit: 502
July 10, 2011, 11:33:05 PM  #2

Power consumption is lower, but the difference is not significant.
Pentium100 | Full Member | Activity: 126 | Merit: 100
July 11, 2011, 12:51:42 AM  #3

Just get used to sleeping with the fans at full speed. Do it like this: every evening, increase the fan speed by 1% or so from the previous night, and soon you won't notice that the fan is running at full speed.

rethaw | Sr. Member | Activity: 378 | Merit: 255
July 11, 2011, 12:57:07 AM  #4

I believe it is also possible to reduce the voltage to the memory using RBE for some cards. I would be interested to hear others' experiences with this; I believe it would reduce consumption more significantly than reducing the clock alone.

CYPER | Hero Member | Activity: 798 | Merit: 502
July 11, 2011, 01:11:09 AM  #5

Yes, I got mine @ 300MHz, but I have no data on the power consumption decrease. Probably not enough to have a profound effect on the power bill.
KKAtan | Newbie | Activity: 50 | Merit: 0
July 11, 2011, 01:12:26 AM  #6

As rethaw noted, reducing voltage makes power consumption go down very significantly. You should of course note that reducing voltage often requires reducing the clock speed as well; the two go hand in hand.

Other things that can help reduce power consumption:

If you use this client: http://forum.bitcoin.org/index.php?topic=3878.0
you can tweak the -s flag:
Quote
Q: My temperatures are too high, can I throttle the GPU so it runs slower but cooler?
A: If you are mining using OpenCL you can use the -s flag with a value such as 0.01 in order to force the GPU to sleep for 0.01 seconds in between runs. Increase or decrease this value until you have the desired GPU utilization.


And of course the -f flag: using -f60 or -f120 can also significantly reduce consumption.
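To make the idea behind -s concrete, here's a minimal sketch of sleep-based throttling in Python. This is not the miner's actual source; run_kernel() is a hypothetical stand-in for one batch of hashing work.

Code:
import time

SLEEP_BETWEEN_RUNS = 0.01  # seconds, analogous to -s 0.01

def run_kernel():
    # Hypothetical placeholder: a real miner would enqueue
    # one batch of OpenCL hashing work here.
    pass

while True:
    run_kernel()
    # Sleeping between runs lowers GPU utilization (and heat/noise)
    # at the cost of hash rate; tune the value to taste.
    time.sleep(SLEEP_BETWEEN_RUNS)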
Yannick (OP) | Member | Activity: 68 | Merit: 10
July 11, 2011, 02:14:00 AM  #7

Or should I buy this?
http://www.arctic.ac/en/p/cooling/vga/20/accelero-xtreme-5870.html?c=2182
I'm using the Sapphire 5870 now, and I really don't like the noise when the fans go above 50%. :(

Or is it OK as long as I keep my temps below 90°C? My target was 80°C, but I read that staying below 90°C shouldn't be a problem for the 5870s.

CYPER | Hero Member | Activity: 798 | Merit: 502
July 11, 2011, 02:17:40 AM  #8

Quote from: Yannick
Or should I buy this?
http://www.arctic.ac/en/p/cooling/vga/20/accelero-xtreme-5870.html?c=2182
I'm using the Sapphire 5870 now, and I really don't like the noise when the fans go above 50%. :(

Or is it OK as long as I keep my temps below 90°C? My target was 80°C, but I read that staying below 90°C shouldn't be a problem for the 5870s.

I wouldn't recommend keeping your cards at a constant 90°C, not even 85°C. Try to stay below 80°C.
n4l3hp | Full Member | Activity: 173 | Merit: 100
July 11, 2011, 01:28:24 PM  #9

It really comes down to the quality of the card's components. I have a 3850 and a 4850, both overclocked, running BOINC for almost 2 years (Collatz, Milkyway, DNETC, Primegrid) at 90+ degrees. Still fine, no artifacts or anything. If Bitcoin ever flops later on, my newer cards will join their older brothers.
Yannick (OP) | Member | Activity: 68 | Merit: 10
July 11, 2011, 01:52:59 PM  #10

Yeah, I read a post on another board from an Nvidia expert saying that as long as you keep your temps below 100°C and your system is still stable, those hot temperatures are no problem.

So if you can mine with your cards at 90°C on a smooth, stable system, there's nothing to worry about.

CYPER | Hero Member | Activity: 798 | Merit: 502
July 11, 2011, 10:40:04 PM  #11

I bought an energy meter to see how much electricity my rig uses, so:

4x XFX 5870 @ 960MHz core & 300MHz memory
Gigabyte GA-770T-D3L
AMD Athlon II X2 250
2GB 1333MHz DDR3
USB flash drive
3x CM Sickleflow @ 2000rpm
===================================
800-815W from the socket, depending on the time of day
830W peak

So the components themselves draw around 730W once the efficiency of the PSU is taken into account.
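For what it's worth, the arithmetic behind that estimate, assuming roughly 90% PSU efficiency (the efficiency figure is an assumption, not a measurement):

Code:
wall_draw = 810        # W, midpoint of the 800-815W wall reading
psu_efficiency = 0.90  # assumed; not measured

dc_load = wall_draw * psu_efficiency
print("Estimated DC load: %.0f W" % dc_load)  # ~730 W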
Littleshop | Legendary | Activity: 1386 | Merit: 1003
July 11, 2011, 11:25:09 PM  #12

I undervolt my 6990 to 1.075V from stock, which I think is 1.175V, and it saves about 30 watts but really keeps the noise down. I run at 830MHz, not at 860+, which would need more than 1.250V.

Grinder | Legendary | Activity: 1284 | Merit: 1001
July 12, 2011, 09:27:22 AM  #13

Quote from: Yannick
Yeah, I read a post on another board from an Nvidia expert saying that as long as you keep your temps below 100°C and your system is still stable, those hot temperatures are no problem.

Nvidia cards are built to run at a higher temperature than ATI/AMD cards.
n4l3hp | Full Member | Activity: 173 | Merit: 100
July 12, 2011, 10:14:46 AM  #14

Quote from: Grinder
Quote from: Yannick
Yeah, I read a post on another board from an Nvidia expert saying that as long as you keep your temps below 100°C and your system is still stable, those hot temperatures are no problem.
Nvidia cards are built to run at a higher temperature than ATI/AMD cards.

Yes, you might be right when you look at the temps of the original GTX 470 and 480 at full load. :)
KKAtan | Newbie | Activity: 50 | Merit: 0
July 12, 2011, 10:20:29 AM  #15

Quote from: n4l3hp
Quote from: Grinder
Nvidia cards are built to run at a higher temperature than ATI/AMD cards.
Yes, you might be right when you look at the temps of the original GTX 470 and 480 at full load. :)

Nope, it's not true at all. ATI fans used that same excuse back when the 4870/4850 generation ran hot. ::)

They are all excuses; no card should ever exceed 85 Celsius, as that is usually the limit for the capacitors and other components on the card.
Furthermore, TSMC's 40nm process is particularly prone to leaking more power as the temperature rises, so it's like a bad snowball effect.

When you exceed 90 Celsius like the GTX 480 does, it just means you're desperate to retain the performance crown at all costs.
Grinder | Legendary | Activity: 1284 | Merit: 1001
July 12, 2011, 01:34:51 PM  #16

Quote from: KKAtan
Nope, it's not true at all. ATI fans used that same excuse back when the 4870/4850 generation ran hot. ::)

Some people have reported their Fermi cards reaching temperatures of ~110°C. I don't think AMD cards would survive that.

Quote from: KKAtan
They are all excuses; no card should ever exceed 85 Celsius, as that is usually the limit for the capacitors and other components on the card.

The temperature measurement is for the GPU; most of the other components will run much cooler.
compro01 | Hero Member | Activity: 590 | Merit: 500
July 12, 2011, 02:29:21 PM  #17

Underclocking will reduce power consumption, as the switching current of a CMOS gate is proportional to the frequency.

Undervolting will save you more, as power is proportional to the square of the voltage.

The equation is roughly (ignoring a couple of constants not relevant to this discussion): Power = frequency × voltage²
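As a rough worked example of that relation (it ignores static/leakage power, so treat it as a sketch), here's what it predicts for the undervolt Littleshop described above and for a typical underclock:

Code:
def relative_dynamic_power(f_ratio, v_ratio):
    # Dynamic CMOS power scales as P ~ f * V^2; the constants
    # cancel when you take a ratio.
    return f_ratio * v_ratio ** 2

# Undervolt only: 1.175V -> 1.075V at the same clock
print(relative_dynamic_power(1.0, 1.075 / 1.175))            # ~0.84

# Underclock only: e.g. 960MHz -> 700MHz at the same voltage
print(relative_dynamic_power(700.0 / 960.0, 1.0))            # ~0.73

# Both together
print(relative_dynamic_power(700.0 / 960.0, 1.075 / 1.175))  # ~0.61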
KKAtan | Newbie | Activity: 50 | Merit: 0
July 12, 2011, 07:08:59 PM  #18

Quote from: Grinder
Some people have reported their Fermi cards reaching temperatures of ~110°C. I don't think AMD cards would survive that.

1. You're ignoring the leakage issue I pointed out. Like I said, the snowball effect, a positive feedback loop, is a bad thing to deal with; it's best to steer clear of it from the beginning.
2. I've lurked around enough to know that some Fermi distributors have been voiding user warranties for ridiculous reasons like "too much dust on your VGA", even though the user had already cleaned off the dust before sending in the RMA. If that's not a sign of desperation, then....
3. I've seen VRM components on HD5970 cards exceed 100 degrees. If anything, the digital VRMs on reference high-end ATI cards are of higher quality and tolerance than Fermi's cheap, low-cost circuitry.
4. Fermi is made from exactly the same silicon as AMD's chips: they come from the same manufacturing process at the very same semiconductor foundry. As far as the core goes, the only argument on your side is that Fermi is a physically larger chip, so the heat is spread over a wider surface area, but I seriously doubt that carries much weight against the other points I made.

Quote from: Grinder
The temperature measurement is for the GPU; most of the other components will run much cooler.

Actually, some components can run even hotter than the core.
Video card coolers are not magical: they move heat away from the core, but you can't guarantee that the same heat won't transfer directly into the other components.
Grinder | Legendary | Activity: 1284 | Merit: 1001
July 13, 2011, 11:33:07 AM  #19

Quote from: KKAtan
Quote from: Grinder
Some people have reported their Fermi cards reaching temperatures of ~110°C. I don't think AMD cards would survive that.
1. You're ignoring the leakage issue I pointed out. Like I said, the snowball effect, a positive feedback loop, is a bad thing to deal with; it's best to steer clear of it from the beginning.

No, I'm not. The source of the heat is not relevant, and of course staying cooler is always safer.

Quote from: KKAtan
3. I've seen VRM components on HD5970 cards exceed 100 degrees. If anything, the digital VRMs on reference high-end ATI cards are of higher quality and tolerance than Fermi's cheap, low-cost circuitry.

We weren't talking about VRMs; I'm well aware that they get much hotter. On one of my cards, a VRM is at 109°C right now.

Quote from: KKAtan
4. Fermi is made from exactly the same silicon as AMD's chips: they come from the same manufacturing process at the very same semiconductor foundry. As far as the core goes, the only argument on your side is that Fermi is a physically larger chip, so the heat is spread over a wider surface area, but I seriously doubt that carries much weight against the other points I made.

Even though silicon is generally silicon, that doesn't mean some designs can't be more resistant to heat problems than others.

Quote from: KKAtan
Actually, some components can run even hotter than the core.

Which is probably why I said *most*...

Quote from: KKAtan
Video card coolers are not magical: they move heat away from the core, but you can't guarantee that the same heat won't transfer directly into the other components.

Of course you can. If a component doesn't touch the heatsink or the GPU and doesn't create a lot of heat on its own, it will be cooler than the GPU. Whether or not it touches the heatsink, it gets cooler the further from the GPU you go. The designers of graphics cards aren't stupid, and they take this into account.
KKAtan | Newbie | Activity: 50 | Merit: 0
July 14, 2011, 04:03:04 AM (last edit: July 14, 2011, 04:24:29 AM)  #20

Quote from: Grinder
Quote from: KKAtan
Quote from: Grinder
Some people have reported their Fermi cards reaching temperatures of ~110°C. I don't think AMD cards would survive that.
1. You're ignoring the leakage issue I pointed out. Like I said, the snowball effect, a positive feedback loop, is a bad thing to deal with; it's best to steer clear of it from the beginning.
No, I'm not. The source of the heat is not relevant, and of course staying cooler is always safer.

You're not making any sense; the source of the heat is very relevant. The snowball effect, go read up on it: increased temperature -> increased leakage -> increased power consumption -> increased temperature -> increased leakage -> increased power consumption -> ....
I believe I've spelled this out in the simplest way possible for you. This is an issue that all chips from TSMC's 40nm fab have, whether you're named AMD or Nvidia.
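If it helps, here's a toy numerical model of that loop. Every constant below is made up purely for illustration, but it shows how the core temperature either settles at a fixed point or snowballs when the cooling is too weak:

Code:
T_AMBIENT = 25.0      # deg C (all constants here are illustrative assumptions)
THETA = 0.25          # deg C per watt of heatsink thermal resistance
P_DYNAMIC = 180.0     # W of switching power, fixed by clock and voltage
P_LEAK_REF = 20.0     # W of leakage at the reference temperature
T_REF = 60.0          # deg C reference point for the leakage figure
LEAK_DOUBLING = 30.0  # deg C of rise that doubles leakage (assumed)

T = T_AMBIENT
for step in range(100):
    p_leak = P_LEAK_REF * 2.0 ** ((T - T_REF) / LEAK_DOUBLING)
    T_next = T_AMBIENT + THETA * (P_DYNAMIC + p_leak)
    if abs(T_next - T) < 0.01:
        print("settled at %.1f C after %d steps" % (T_next, step))
        break
    T = T_next
else:
    print("still climbing at %.1f C -- thermal runaway" % T)
# With THETA = 0.25 this settles around ~78 C; raise THETA (worse
# cooling) and the loop never converges -- the snowball effect.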

Quote from: Grinder
We weren't talking about VRMs; I'm well aware that they get much hotter. On one of my cards, a VRM is at 109°C right now.

You're wrong; we are. We're talking about video cards supposedly "built to last" at a certain temperature, and VRMs are a very important part of that, and AMD's reference-design VRMs are of significantly higher quality than Nvidia's. Even a small GPU like the HD5770 has reference-design VRMs of higher quality than a GTX 470's.
This is made possible by the size of the chips: AMD's chips are physically smaller; an HD5870, for example, is physically smaller than a GTX 460 while also being much faster. That lets AMD afford higher-quality components in the reference design, while Nvidia has to skimp on component quality to make any sort of profit. I hope you understand that: AMD's reference-design components are of higher quality.

Quote from: Grinder
Quote from: KKAtan
4. Fermi is made from exactly the same silicon as AMD's chips: they come from the same manufacturing process at the very same semiconductor foundry. As far as the core goes, the only argument on your side is that Fermi is a physically larger chip, so the heat is spread over a wider surface area, but I seriously doubt that carries much weight against the other points I made.
Even though silicon is generally silicon, that doesn't mean some designs can't be more resistant to heat problems than others.

When the manufacturing process is the same, you're wrong. This isn't an Intel vs. AMD deal here; this is a TSMC vs. TSMC deal. It's the same stuff, so to speak: both chips come from TSMC, and they share the same issues.

Quote from: Grinder
Which is probably why I said *most*...

....

Quote from: Grinder
Of course you can. If a component doesn't touch the heatsink or the GPU and doesn't create a lot of heat on its own, it will be cooler than the GPU. Whether or not it touches the heatsink, it gets cooler the further from the GPU you go. The designers of graphics cards aren't stupid, and they take this into account.

Don't be naive: the designers of custom boards prioritise cost savings above all else (unless we're dealing with extreme editions like the MSI Lightning, etc.).
And you are wrong when you imply that the GPU temperature has no effect on the other onboard components: the heat WILL transfer to them, whether carried by airflow or through the PCB. (The PCB is actually pretty good at conducting heat; put your hand on the back of your video card and you'll see what I mean. There is no magic to be pulled off here; things get hot.)

As a matter of fact, the video card itself (!!) affects the temperature of the motherboard's components.