Topic: Undervolting a 5870 and a 5770 to achieve better MH/J performance
Mousepotato
January 19, 2012, 12:55:20 AM
 #41

Ohhh yeah I'm an idiot.com/index.php?herp=derp.  I totally misunderstood the definition of "headless"

Starcraftman
January 19, 2012, 04:26:42 PM
 #42

Anyone try this with 5830s yet?
jamesg
January 21, 2012, 04:05:39 AM
 #43

Quote from: Starcraftman on January 19, 2012, 04:26:42 PM
Anyone try this with 5830s yet?

+1
k9quaint
January 21, 2012, 06:45:52 PM
 #44

Unless the idle draw of the cards represents the same amount of logical work done, you can't just subtract them out to get the "mining" power consumed.
If one card idles hot doing tons of work that is not mining related, subtracting that from the final mining number misrepresents how much power that card actually draws in order to mine. Especially if it stops doing that work while mining. Conversely, if one card idles cold it won't have any idle wattage to subtract making its mining numbers seem like they draw more power.

The only numbers that can be compared are 100% mining vs 100% mining. Unless someone wants to map out exactly how much logical work each card does while idle and pro-rate their wattage for the extra work done...

DeathAndTaxes
January 21, 2012, 06:52:33 PM
 #45

Quote from: k9quaint on January 21, 2012, 06:45:52 PM
Unless the idle draw of the cards represents the same amount of logical work done, you can't just subtract them out to get the "mining" power consumed.
If one card idles hot doing tons of work that is not mining related, subtracting that from the final mining number misrepresents how much power that card actually draws in order to mine. Especially if it stops doing that work while mining. Conversely, if one card idles cold it won't have any idle wattage to subtract making its mining numbers seem like they draw more power.

The only numbers that can be compared are 100% mining vs 100% mining. Unless someone wants to map out exactly how much logical work each card does while idle and pro-rate their wattage for the extra work done...

An idle card does no work, i.e. 0 MH/s.

Look at it this way.

System at load: 300 W
System at idle (including GPU idle wattage): 100 W
GPU idle wattage: 10 W

The reason we want to subtract the GPU idle wattage is to get the true GPU load wattage.

100 W - 10 W = 90 W (system idle without the GPU).

300 W - 90 W = 210 W (GPU wattage at full load).

Now we have an apples-to-apples comparison: the GPU wattage at load.


We can also predict other system values.
system w/ 1 GPU  = 90 W + 1*210 W = 300 W
system w/ 2 GPUs = 90 W + 2*210 W = 510 W
system w/ 3 GPUs = 90 W + 3*210 W = 720 W
system w/ 4 GPUs = 90 W + 4*210 W = 930 W
system w/ 5 GPUs = 90 W + 5*210 W = 1140 W
system w/ 6 GPUs = 90 W + 6*210 W = 1350 W
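
For anyone repeating this with their own wall-meter readings, here is a minimal sketch of the same subtraction and projection (the helper names are mine, not anything from the thread):

Code:
def gpu_load_watts(system_load_w, system_idle_w, gpu_idle_w):
    # System idle minus the GPU's own idle draw = the rest of the system.
    base_w = system_idle_w - gpu_idle_w
    # Whole-system draw at load minus the rest of the system = the GPU at full load.
    return system_load_w - base_w

def predicted_system_watts(base_w, gpu_load_w, n_gpus):
    # Rig draw = base system plus n identical GPUs at full load.
    return base_w + n_gpus * gpu_load_w

# Numbers from the post above: 300 W at load, 100 W at idle, 10 W GPU idle.
gpu_w = gpu_load_watts(300, 100, 10)        # 210 W
for n in range(1, 7):
    print(n, "GPU(s):", predicted_system_watts(90, gpu_w, n), "W")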
k9quaint
January 21, 2012, 06:59:26 PM
Last edit: January 21, 2012, 07:19:57 PM by k9quaint
 #46

Quote from: DeathAndTaxes on January 21, 2012, 06:52:33 PM
An idle card does no work, i.e. 0 MH/s.

Look at it this way.

System at load: 300 W
System at idle (including GPU idle wattage): 100 W
GPU idle wattage: 10 W

The reason we want to subtract the GPU idle wattage is to get the true GPU load wattage.

100 W - 10 W = 90 W (system idle without the GPU).
300 W - 90 W = 210 W (GPU wattage at full load).

Now we have an apples-to-apples comparison: the GPU wattage at load.

An idle card does no hashing-related work, but it still consumes watts, so it is doing some work: monitoring, answering driver polls, servicing DMA channels, and so on. If that work happens only on an idle card and not on a mining card, we need to account for it. Once it is accounted for, we can see whether one card is providing, say, "full service with frills" to the OS while the other is "self service" when idle. That condition can skew the measurements, especially for a card designed to go very cold while idle versus one that runs hot churning through idle cycles.

Apples to oranges.

Edit: A concrete example follows.

The AMD Phenom II X6 1100T (3.3 GHz) consumes 20 W while idle and 109 W under full load.
The Intel Core i7 2600K @ 4.4 GHz consumes 5 W while idle and 111 W under full load.

While under full load (in theory), the CPUs are not executing any idle cycles.
Also, voltage may be stepped down while idle and parts of the chip shut off, further skewing the comparison.
Subtracting 20 from 109 and 5 from 111 will not give you anything useful.
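
To see how the subtraction skews things, here is a tiny sketch using the CPU figures above: at the wall the two chips draw nearly the same under load, but "load minus idle" makes them look far apart, purely because their idle behaviour differs.

Code:
cpus = {
    "Phenom II X6 1100T": {"idle_w": 20, "load_w": 109},
    "Core i7 2600K @ 4.4 GHz": {"idle_w": 5, "load_w": 111},
}

for name, w in cpus.items():
    naive = w["load_w"] - w["idle_w"]   # the "subtract idle" approach
    print(f"{name}: {w['load_w']} W at the wall, {naive} W after subtracting idle")
# At the wall: 109 W vs 111 W (nearly identical).
# After subtracting idle: 89 W vs 106 W (a 17 W gap appears out of nowhere).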

NetworkerZ
January 21, 2012, 10:14:39 PM
 #47

Quote from: runeks (OP)
Hello all

I just recently found out how dramatic an increase in efficiency undervolting can achieve. In my mining rig I'd like to optimize this, and preferably get it to consume around 250 W in total. Right now it's consuming around 265 W. I'm running with the following voltage/clock settings currently:

5870: 750 MHz / 1.000 V / 97 W / 343 MH/s / 3.54 MH/J
5770: 750 MHz / 1.010 V / 63 W / 168 MH/s / 2.67 MH/J

As you can see, the 5870 is significantly more efficient than the 5770. Then again, it's also running at 1 V vs. the 5770's 1.01 V. But I simply can't get the 5770 to run properly at 1 V at 750 MHz. At this voltage it keeps stalling in cgminer ("declared SICK") unless I run it at 700 MHz.

What are people's experiences with 5870s and 5770s, and stable voltage/clock combinations for these?

By the way, the 5870 is an HD-587X-ZNFV V1.3 5870, and the 5770 is this Sapphire card, though I'm not sure if it has 512 or 1024 MB RAM.

Hiho!

I have 2x 5830 and 4x 5770 cards in two rigs. Every card is overclocked and undervolted. My settings are:

"Sapphire HD5830"  core=900 memory=300 vddc=1.140
"XFX 5770"            core=880 memory=600 vddc=1.200
"Sapphire HD5770"  core=905 memory=300 vddc=0.960

"Sapphire HD5830"  core=910 memory=300 vddc=1.080
"ASUS HD5770"      core=925 memory=300 vddc=1.050
"Sapphire HD5770"  core=880 memory=300 vddc=1.010

As you can see, there are many different settings on the 5770 cards. The XFX, for example, needs a 600 MHz memory clock; there's no way to go lower. The Sapphire runs at 905 MHz with only 0.960 vddc. I think you have to test a lot until it's stable. Both rigs need 650 watts, but that's with a 3.5" HDD and the 100% bug. So I think I can bring it down to 550 watts at 1.5 GH/s. I'm looking forward to my new XFX 5970 Black Edition at 2x 920 MHz @ 1.200 vddc ;-) (tested, but not built into a rig yet).

Btw: the great thing about my 5770s is that they're all "used" and I got them for around $60 each in Sept. 2010! They're still a good catch, I think.

Greetz
NetworkerZ
cuz0882
January 23, 2012, 08:04:01 AM
 #48

Even with 3x 5970s at 880 MHz (2400 MH/s) drawing around 900 watts, it's only 18 dollars a month to run them. The current payout is $343 a month. That leaves power at 5.2% of expenses. Outside of keeping temps down, it seems to make little sense to underclock. I realize power costs are higher in some places, but it seems like if power were that expensive, mining would be a bad idea to start with; maybe FPGAs would make more sense.
CoinSpeculator
January 24, 2012, 05:42:02 AM
 #49

Quote from: cuz0882 on January 23, 2012, 08:04:01 AM
Even with 3x 5970s at 880 MHz (2400 MH/s) drawing around 900 watts, it's only 18 dollars a month to run them. The current payout is $343 a month. That leaves power at 5.2% of expenses. Outside of keeping temps down, it seems to make little sense to underclock. I realize power costs are higher in some places, but it seems like if power were that expensive, mining would be a bad idea to start with; maybe FPGAs would make more sense.

0.9 kW x 24 h x 30 days = 648 kWh.

$18 / 648 kWh = $0.027/kWh. Huh?

You pay 3 cents a kilowatt-hour?

A more reasonable cost is 10 cents a kilowatt-hour, for around $60 a month in electricity. I think undervolted GPU mining will keep many systems up a while longer, but with FPGAs doing 400+ MH/s at 15 watts, the days of the GPU are coming to an end.
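
For anyone re-running this arithmetic at their own rate, a quick sketch (my own helper, not from the thread) that turns a constant wall draw and a per-kWh price into a monthly bill:

Code:
def monthly_kwh_and_cost(load_watts, price_per_kwh, hours=24 * 30):
    # Energy used by a constant load over a 30-day month, and what it costs.
    kwh = load_watts / 1000 * hours
    return kwh, kwh * price_per_kwh

# The 900 W rig of 3x 5970s discussed above, at both rates.
kwh, cheap = monthly_kwh_and_cost(900, 0.027)    # 648 kWh, ~$17.50
_, typical = monthly_kwh_and_cost(900, 0.10)     # ~$64.80
print(f"{kwh:.0f} kWh/month: ${cheap:.2f} at $0.027/kWh, ${typical:.2f} at $0.10/kWh")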
cuz0882
January 24, 2012, 08:47:37 AM
 #50

Quote from: CoinSpeculator on January 24, 2012, 05:42:02 AM
$18 / 648 kWh = $0.027/kWh. Huh?

You pay 3 cents a kilowatt-hour?
I pay .0202 cents a watt
rjk
January 24, 2012, 03:04:18 PM
 #51

Quote from: cuz0882 on January 24, 2012, 08:47:37 AM
I pay .0202 cents a watt
You don't pay "per watt"; you pay per watt-hour, or to be precise, per kilowatt-hour (running a 1000 W load for one hour costs $0.0202). Further, there are likely taxes and generation fees on top of that. That is a good rate, however.

k9quaint
January 24, 2012, 06:55:06 PM
 #52

Quote from: cuz0882 on January 23, 2012, 08:04:01 AM
Even with 3x 5970s at 880 MHz (2400 MH/s) drawing around 900 watts, it's only 18 dollars a month to run them. The current payout is $343 a month.

You pay $0.0202 per kilowatt-hour. At home I pay ~$0.32 per kilowatt-hour, so my electricity costs about 16 times yours. At your power consumption rate, that would put my power bill at ~80% of the payout. Undervolting can cut that from ~80% to ~40%, which makes it worth doing.
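
Following the same reasoning in numbers (the halving at the end is my own assumption about what undervolting buys, roughly matching the ~80% to ~40% figures above):

Code:
payout = 343                 # USD/month, from the earlier post
cheap_bill = 18              # USD/month at ~$0.0202/kWh
rate_ratio = 0.32 / 0.0202   # home rate is ~16x higher

home_bill = cheap_bill * rate_ratio                                       # ~$285/month
print(f"power eats {home_bill / payout:.0%} of the payout")               # ~83%
print(f"~{home_bill / 2 / payout:.0%} if undervolting halves the draw")   # ~42%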

Mousepotato
January 24, 2012, 07:45:01 PM
 #53

Quote from: cuz0882 on January 24, 2012, 08:47:37 AM
I pay .0202 cents a watt

Damn, that's pretty good. I have one of those variable plans. It toggles from $0.05815/kWh during on-peak hours down to $0.04273/kWh during off-peak hours. Of course, I also have to pay a flat "demand" charge of something like $10 per kW I use. My power company likes to stack on little rinky-dink charges that add up :(
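
As a rough sketch of how such a plan adds up for a rig running around the clock (the 0.9 kW draw and the 50/50 on/off-peak split are illustrative assumptions, not figures from this post):

Code:
def monthly_bill(load_kw, peak_rate, offpeak_rate, demand_per_kw,
                 peak_fraction=0.5, hours=24 * 30):
    # Energy charge at two time-of-use rates plus a flat per-kW demand charge.
    kwh = load_kw * hours
    energy = kwh * (peak_fraction * peak_rate + (1 - peak_fraction) * offpeak_rate)
    return energy + load_kw * demand_per_kw

print(f"${monthly_bill(0.9, 0.05815, 0.04273, 10):.2f} per month")   # ~$41.70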

runeks (OP)
January 25, 2012, 05:50:51 AM
 #54

What would be the theoretical correlation between stable voltage and clock pairs? I mean, if I decrease the voltage by 10%, should I also be able to decrease the clock by 10% and get a stable GPU? Or is it more complex, perhaps so much that trial and error is the only way to know? In any case, it'd be useful to have some rule of thumb to go by, even if it isn't completely accurate.

Currently, my mining rig is placed in a shed, where I would like the temperature to never go below 10C. Because of this, I'd like to be able to change voltage/clock dynamically, in order to control the power that my mining rig dissipates, thus acting as a sort of radiator with a thermostat. So it would be useful to be able to derive stable voltage/clock pairs from a known stable voltage/clock pair.
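
There is a rough first-order answer: dynamic power in CMOS scales with frequency times voltage squared (P ≈ C·f·V²), so a 10% voltage drop saves roughly twice as much as a 10% clock drop. What the rule can't tell you is which voltage/clock pairs are stable — the minimum stable voltage at a given clock varies from chip to chip, so trial and error is still needed. A small sketch of the scaling estimate (illustrative numbers; leakage and VRM losses ignored):

Code:
def relative_dynamic_power(f_new, f_ref, v_new, v_ref):
    # First-order CMOS estimate: P scales with f * V^2 (leakage ignored).
    return (f_new / f_ref) * (v_new / v_ref) ** 2

# Illustrative: dropping a card from 850 MHz / 1.20 V to 750 MHz / 1.00 V.
scale = relative_dynamic_power(750, 850, 1.00, 1.20)
print(f"~{scale:.0%} of the original dynamic power")   # ~61%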
BCMan
January 26, 2012, 03:33:50 AM
Last edit: January 26, 2012, 06:58:28 AM by BCMan
 #55

Underclocking the memory below 300 MHz even gains a little performance!

Here's what I had before on my Sapphire Radeon 5770:
960/300/1.005 V (core/mem/vddc):
temp1: 67
temp2: 72
temp3: 70
fan: 50%
221.39 MH/s

And now:
960/244/1.005 V:
temp1: 66
temp2: 70
temp3: 69
fan: 48%
222.25 MH/s!!!!
Brunic
January 28, 2012, 08:16:24 PM
 #56

So,

I tried some undervolting on my cards. A couple had a VRM chip, so I could change the voltage without any problems. Other cards didn't have any voltage control, so I tried to modify the card's BIOS to change the voltage.

My test card was a Powercolor AX6770 with these default settings:
850 MHz
1.2 V

It partially worked, and I was able to modify the default clock speed and the default voltage. I created a new BIOS with RBE, with the clock at 700 MHz and the voltage at 0.950 V. The BIOS has some "default" clock states, and I modified every one of them to use 0.950 V.

When I booted the card in my mining rig, everything was in order. The card defaulted to 700 MHz, and cgminer showed me that the voltage was at 0.950 V. The thing is, while mining, the card didn't care about the voltage instructions in the BIOS. It was still consuming the same number of watts as before, with the stock BIOS. I could also clock the card at 900 MHz without any problem, and with the same watt consumption as before.

Overall, the card just used the voltage it needed, ignoring the instructions in the BIOS.

Have any of you tried something similar? I believe we have a deep topic on our hands, and it would be interesting to compile the information on what you guys did.

Here are the results of my testing:
Diamond Radeon 5850
http://www.diamondmm.com/5850PE51G.php
Went from 2.68 MH/J to 3.51 MH/J
700 MHz at 0.88 V
260 MH/s

Sapphire Radeon 5830
http://devicegadget.com/hardware/sapphire-radeon-hd-5830-xtreme-review/3760/
From 2.07 MH/J to 3.16 MH/J
725 MHz at 0.95 V
215 MH/s

Sapphire Vapor 5770 (this thing is a beauty!!!)
http://www.sapphiretech.com/presentation/product/?psn=0001&pid=305&lid=1
From 2.13 MH/J to 3.42 MH/J
825 MHz at 0.95 V
178 MH/s

PowerColor 5870 (this one has an Arctic Cooler on it, because I physically broke the stock fan)
http://techiser.com/powercolor-radeon-hd-5870-ax5870-graphic-card-118806.html
From 2.33 MH/J to 3.51 MH/J
750 MHz at 0.95 V
330 MH/s

These cards have no VRM control:
Powercolor AX6770
http://www.shopping.com/power-color-powercolor-ax6770-1gbd5-h-radeon-hd-6770-1gb-128-bit-gddr5-pci-express-2-1-x16-hdcp-ready-video-card/info

Sapphire 6950
http://www.newegg.com/Product/Product.aspx?Item=N82E16814102914
Yeah, this one is surprising. You can use PowerTune to adjust the consumption, but it keeps the same MH/J.

Powercolor 5770
http://www.guru3d.com/article/powercolor-radeon-hd-5770-pcs-review/
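
All of the MH/J figures in this thread come down to the same ratio, hashrate divided by the card's measured draw (one watt being one joule per second); a one-liner sketch, checked against the OP's numbers quoted earlier:

Code:
def mh_per_joule(mhash_per_s, watts):
    # MH/J = (MH/s) / W, because 1 W = 1 J/s.
    return mhash_per_s / watts

# The OP's figures quoted earlier in the thread:
print(f"5870: {mh_per_joule(343, 97):.2f} MH/J")   # 3.54
print(f"5770: {mh_per_joule(168, 63):.2f} MH/J")   # 2.67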

runeks (OP)
January 29, 2012, 07:50:40 AM
 #57

Quote from: Brunic on January 28, 2012, 08:16:24 PM
I tried some undervolting on my cards. A couple had a VRM chip, so I could change the voltage without any problems. Other cards didn't have any voltage control, so I tried to modify the card's BIOS to change the voltage.
I'm pretty sure all non-ancient graphics cards have voltage control. If they didn't, power management would be non-existent.

In Linux you should be able to use AMDOverdriveCtrl to control the voltage (within the limits set by the BIOS). I'm not sure which Windows programs achieve the same. Here's a list that provides some candidates: https://en.bitcoin.it/wiki/GPU_overclocking_tools
DeathAndTaxes
January 29, 2012, 08:08:10 AM
 #58

Quote from: runeks on January 29, 2012, 07:50:40 AM
I'm pretty sure all non-ancient graphics cards have voltage control. If they didn't, power management would be non-existent.

Many modern (mostly cheaper) models lack voltage control. The VRMs are not adjustable. Sure, it wastes power, but it makes the card $5 to $10 cheaper.
Brunic
January 29, 2012, 08:32:52 AM
 #59

Quote from: DeathAndTaxes on January 29, 2012, 08:08:10 AM
Many modern (mostly cheaper) models lack voltage control. The VRMs are not adjustable. Sure, it wastes power, but it makes the card $5 to $10 cheaper.

Bingo! That's my problem. I can "change" the voltage, and the software even shows me the new voltage. But in reality, with a Kill-A-Watt plugged in, you see no difference. I can even input 0 V; the software accepts it and the card runs as normal.

I believe it could be possible to limit the voltage through the BIOS, but I don't know how yet.
runeks (OP)
January 29, 2012, 08:34:56 AM
 #60

Shows what I know. I thought adjustable VRMs were so cheap that no one would bother not putting one on the card. But then again I guess dynamically adjusting the voltage is a bit more complex than simply buying a capable VRM and slapping it on the card (like the additional software/BIOS development required).