Author Topic: Radeonvolt - HD5850 reference voltage tweaking and VRM temp. display for Linux  (Read 27976 times)
zefir (Donator, Hero Member)
April 02, 2012, 08:47:31 PM  #61

Thanks for the clarification.

Whether ADL limits the ranges or the BIOS does, the effect is the same: full control is only possible by bypassing the AMD-provided interfaces and accessing the hardware directly (please correct me if I'm wrong in assuming those controller chips are I2C-accessible).

I doubt that MSI, as a manufacturer, had to reverse engineer anything to get Afterburner done, but the GPU-Z folks surely did. That's why I proposed a social-engineering approach: maybe some guys from the OC scene are also bitcoiners with access to specs or source code and willing to share. Maybe you, as one of the technically most competent bitcoiners, are the one?
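(For the curious: if such a bus were exposed by the driver as a /dev/i2c-N node, poking it would look like the sketch below. In practice fglrx/radeon do not expose the GPU's voltage-controller bus this way, which is why radeonvolt maps the card's registers directly. The device path and the 0x70 slave address are placeholders, not real values for these cards.)

Code:
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
        /* open the (hypothetical) bus and address the controller chip */
        int fd = open("/dev/i2c-1", O_RDWR);
        if (fd < 0 || ioctl(fd, I2C_SLAVE, 0x70) < 0) {
                perror("i2c");
                return 1;
        }

        /* write a register index, read one byte back */
        unsigned char reg = 0x00, val;
        if (write(fd, &reg, 1) == 1 && read(fd, &val, 1) == 1)
                printf("reg 0x%02x = 0x%02x\n", reg, val);

        close(fd);
        return 0;
}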

DeathAndTaxes (Donator, Legendary) - Gerald Davis
April 02, 2012, 08:55:50 PM  #62

Quote from: zefir
Whether ADL limits the ranges or the BIOS does, the effect is the same: full control is only possible by bypassing the AMD-provided interfaces and accessing the hardware directly (please correct me if I'm wrong in assuming those controller chips are I2C-accessible).

Well, not exactly. I, for example, was able to raise the stock voltage on my 5970s by modifying the BIOS. Had the limit been enforced by ADL, I would have had no options.

Quote from: zefir
I doubt that MSI, as a manufacturer, had to reverse engineer anything to get Afterburner done, but the GPU-Z folks surely did. That's why I proposed a social-engineering approach: maybe some guys from the OC scene are also bitcoiners with access to specs or source code and willing to share. Maybe you, as one of the technically most competent bitcoiners, are the one?

I don't think so. GPU-Z is useful because nobody else can do what it does. The author has indicated he has absolutely no interest in ever providing a GPU-Z for Linux, and that he will never release the source code to allow anyone else to write one. I don't have a link, as I researched it well over a year ago, and when I saw that I figured it wouldn't be happening. A very "non-open" attitude, yes, but open source isn't embraced by all software developers.
-ck (Legendary) - Ruu \o/
April 03, 2012, 12:34:41 AM  #63

cgminer is limited by what the BIOS will accept via the driver, which is often -way- outside the reported "safe range" that the ATI Display Library announces. cgminer will happily let you ignore the safe range and set whatever you like. Some cards respond to that; some don't, ignoring the values you pass. On my cards I can overclock the engine to any value I like, and the same goes for the memory, but try to set the memory clock more than 125 MHz below the engine clock and the card ignores it (6970). It also happily ignores -any- voltage setting I pass to it. On the other hand, flash the BIOS on those cards and you can set whatever you like via the ATI Display Library and therefore cgminer.

The other tools that hack via i2c and such are so device- and OS-dependent that they'd be a nightmare to write in a general fashion that could be included in cgminer. Sure, if someone else did the code, I'd include it. But short of having one of each card, and every possible OS to test on, I cannot write that code myself.
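For reference, these ADL overrides are exposed as cgminer flags along the lines below (the pool URL and the values are illustrative placeholders, not recommendations; whether the card honours them depends on its BIOS, as described above):

Code:
cgminer -o http://pool.example:8332 -u user -p pass \
        --gpu-engine 950 --gpu-memclock 825 --gpu-vddc 1.088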

Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel
2% Fee Solo mining at solo.ckpool.org
-ck
zefir (Donator, Hero Member)
April 03, 2012, 06:56:11 AM  #64

To resolve the confusion here: my primary goal was not to further OC the cards to squeeze out their last kH/s, but to maximize H/J (hashes per joule), which under Linux is not possible given the maximum delta between memory and engine clocks that Con describes.
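To make the H/J target concrete (illustrative numbers only, not measurements - a watt is a joule per second):

Code:
400 MH/s / 200 W        = 2.0 MH/J
400 MH/s / 160 W (-20%) = 2.5 MH/J

That is, a 20% power saving at constant hashrate is a 25% efficiency gain.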

Patching the BIOS to surpass the absolute max ranges is fine for pushing a card to the limit (which I won't do any more after bricking a 6950 while trying to unlock it Embarrassed), and the 7970s I recently added to my rig don't really need patching - they just run fine with cgminer (see [1]). But reading that people are able to reduce energy consumption by 20% by lowering memclock and core voltage makes me want to go back to Windows.
(OT: Hell, not long ago I bought standby-killers to turn the TV off overnight instead of letting it draw 5W in standby, and now I'm burning kilowatts 24/7 Undecided - different story.)

My pragmatic idea was to record the i2c commands issued by Afterburner when controlling popular mining cards and to build a library for directly accessing the controller chips (like radeonvolt does for the vt1165). But thinking further: with that lib you'd hand users the perfect tool to fry their cards. Countermeasures (like only allowing values to be lowered) are not enforceable in the open-source world - we'd soon have folks yelling at cgminer/Linux for bricking their cards.

Con, you're often not happy with AMD's Linux drivers (who is?), but you'd agree it's better to live with the limitations on the safe side than to have the freedom to kill miners' cards, right?


OP, sorry for hijacking this thread. Closing here.


[1] https://bitcointalk.org/index.php?topic=67283.msg824652#msg824652

DiabloD3 (Legendary) - DiabloMiner author
April 03, 2012, 07:52:22 AM  #65

Quote from: zefir
(OT: Hell, not long ago I bought standby-killers to turn the TV off overnight instead of letting it draw 5W in standby, and now I'm burning kilowatts 24/7 Undecided - different story.)

New TVs typically use a watt or less on standby, which you exchange for instant-on and less wear on the parts. Disabling standby will just kill your TV faster, which is more expensive than the electricity it is "wasting".

bulanula (Hero Member)
April 03, 2012, 10:04:29 AM  #66

I still see no solution for reading VRM temperatures under Linux.

I have modified this radeonvolt and it still does not appear to list the VRM temperatures, only the core temperatures.

All reference cards, too.
-ck (Legendary) - Ruu \o/
April 03, 2012, 12:08:02 PM (last edit: April 04, 2012, 12:03:15 AM by ckolivas)  #67

Quote from: zefir
To resolve the confusion here: my primary goal was not to further OC the cards to squeeze out their last kH/s, but to maximize H/J (hashes per joule), which under Linux is not possible given the maximum delta between memory and engine clocks that Con describes.

Quote from: zefir
Con, you're often not happy with AMD's Linux drivers (who is?), but you'd agree it's better to live with the limitations on the safe side than to have the freedom to kill miners' cards, right?
Indeed, but I'm not advocating changes that raise voltage or engine clock speed further - there is no apparent limit to how high you can set the engine clock with ADL support alone. I want to lower the memory clock and voltage. I can't say I've heard of underclocking or undervolting harming hardware.
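The lowering described here already works through cgminer's existing flags where the card accepts it, e.g. (illustrative values only):

Code:
cgminer --gpu-memclock 300 --gpu-vddc 1.000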

finway (Hero Member)
April 03, 2012, 02:02:10 PM  #68

This saves power.

runeks (Legendary)
April 04, 2012, 01:00:50 AM  #69


Quote from: zefir
Thanks for the commitment, any support will help. I think most Linux fellows sooner or later feel this desire to move back to Windows, be it because LibreOffice cannot open some DOCX or because your favourite game won't run under WINE, right Sad

[...]

FYI, Office 2007 works fine under Linux using the latest WINE:
http://imgur.com/dtkXz

EDIT: Also, not sure if it's relevant, but using Linuxcoin (which, as far as I'm aware, uses an older version of the Catalyst driver), I'm able to set both core clock and memory clock to any value I like on both my 5870 and 5770. Not sure if that's down to the Catalyst version or to the card models (XFX and Sapphire, respectively).
-ck (Legendary) - Ruu \o/
April 04, 2012, 01:25:17 AM  #70

Quote from: runeks
EDIT: Also, not sure if it's relevant, but using Linuxcoin (which, as far as I'm aware, uses an older version of the Catalyst driver), I'm able to set both core clock and memory clock to any value I like on both my 5870 and 5770. Not sure if that's down to the Catalyst version or to the card models (XFX and Sapphire, respectively).
Some very early drivers had limits on what you could try to change, but anything 11.4+ on Windows and 11.6+ on Linux has none. The 5xxx cards are much more accepting of changes than the 6xxx/7xxx ones, though.

zefir (Donator, Hero Member)
April 04, 2012, 07:01:46 AM  #71

Quote from: zefir
(OT: Hell, not long ago I bought standby-killers to turn the TV off overnight instead of letting it draw 5W in standby, and now I'm burning kilowatts 24/7 Undecided - different story.)

Quote from: DiabloD3
New TVs typically use a watt or less on standby, which you exchange for instant-on and less wear on the parts. Disabling standby will just kill your TV faster, which is more expensive than the electricity it is "wasting".
Hi mod, the OP has not been active for nearly a year, so it's OK to hijack his thread, I guess...

True, the latest energy-efficiency campaigns and green labels really pushed manufacturers to save energy. My current 46" plasma uses as little during operation as my previous 24" one did, and only 0.3W in standby. But my other one wastes ~9W, plus 12W for the cablecom STB, just to be ready to show the news once a day - insane! Using a standby-killer for a month saves enough to power one of my rigs for nearly 18 hours -- insane²! (see the irony?)
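The arithmetic checks out, assuming a rig drawing on the order of 850W:

Code:
(9 W + 12 W) * 720 h = 15.1 kWh per month
15.1 kWh / 0.85 kW  ~= 17.8 h of rig runtime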

zefir (Donator, Hero Member)
April 04, 2012, 07:30:57 AM (last edit: April 08, 2012, 07:10:14 AM by zefir)  #72

I talked to a guy who reverse-engineered WiFi chips at the register level to develop Linux drivers, and realized this will not be easy.

The approach of hooking into the i2c communication and logging command sequences is surely the way to go. But it won't be enough to collect the info per chip; it will most probably be required per card. Setting the clocks at the controller for one card does not necessarily mean the same for a similar one (assembly options, scaling, offset, etc.). That's possibly the reason bulanula can't set his params with radeonvolt.

If you run strings on the Afterburner binaries, there are IDs for the supported cards; most probably they are using per-card settings. We would therefore basically have to rewrite AB to get reliable control over our mining cards - more a man-year task than a weekend's hack.
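That kind of digging is just the standard strings tool, e.g. (the binary name is a placeholder, not a verified Afterburner path):

Code:
strings MSIAfterburner.exe | grep -iE '5850|5870|5970|6970'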

Given a remaining lifetime for mining GPUs of, say, 6-9 months (yeah, I see DAT around the corner asking for my cards -- cheap Wink), I'd say that effort doesn't pay off.

bulanula (Hero Member)
April 07, 2012, 07:47:20 PM  #73

An update.

I did all the proper steps and modifications to try to get this working on my reference ATI 5870s.

Will post back later with some results, but at first sight the VRM temps are too low to be real and are probably just the core temps.

Runeks' fork was even worse and did not show anything other than "supported device".

Again, 100% reference ATI-branded 5870s here ...
DiabloD3 (Legendary) - DiabloMiner author
April 08, 2012, 12:00:49 AM  #74

Quote from: bulanula
An update.

I did all the proper steps and modifications to try to get this working on my reference ATI 5870s.

Will post back later with some results, but at first sight the VRM temps are too low to be real and are probably just the core temps.

Runeks' fork was even worse and did not show anything other than "supported device".

Again, 100% reference ATI-branded 5870s here ...

Some VRMs are higher quality and run at around GPU temps: say the GPU is around 85C; the VRMs could be at around 120C for older/shittier VRMs (and still be within the VRMs' spec), or around 85C for newer/less shitty ones.

bulanula (Hero Member)
April 10, 2012, 01:08:04 PM  #75

Quote from: DiabloD3
Quote from: bulanula
[...]

Some VRMs are higher quality and run at around GPU temps: say the GPU is around 85C; the VRMs could be at around 120C for older/shittier VRMs (and still be within the VRMs' spec), or around 85C for newer/less shitty ones.

Turns out you were right!

I started mining, and the core on one card with a special cooler sat at 39 degrees; radeonvolt reported that card's VRM temps as 50, so it seems to be working.

Thus VRM temps do seem to run close to GPU core temps. Others see 70 on the core with VRMs at, say, 90 - a +20 difference.

On my other cards the core and VRM are almost the same.

Does that mean these GPUs came straight out of the factory or something Huh
QuantumFoam (Full Member)
April 22, 2012, 06:51:53 PM  #76

I've been messing with the source code so I can see the VRM temps under Linux for my 5970. I was able to do this by commenting out the vendor_id and device_id check, as mentioned earlier in this thread, and by modifying it to also accept a device class of PCI_CLASS_DISPLAY_OTHER in addition to PCI_CLASS_DISPLAY_VGA. Without that second change I could only see the VRM temps of one GPU on the 5970. Hope that helps other 5970 Linux users; it's a simple code change in the enum_cards function in radeonvolt.c (the first if branch below the first for loop).
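A hedged sketch of that change (struct and field names follow pciutils/libpci; the actual radeonvolt source may differ slightly):

Code:
/* enum_cards() in radeonvolt.c -- first if branch below the for loop */
for (dev = pacc->devices; dev; dev = dev->next) {
        pci_fill_info(dev, PCI_FILL_IDENT | PCI_FILL_BASES | PCI_FILL_CLASS);

        /* also accept "other display" devices: the second GPU on a
         * 5970 enumerates as PCI_CLASS_DISPLAY_OTHER, not _VGA */
        if (dev->device_class == PCI_CLASS_DISPLAY_VGA ||
            dev->device_class == PCI_CLASS_DISPLAY_OTHER) {
                /* vendor_id/device_id check removed here, so cards
                 * other than the 5850 are enumerated as well */
                /* ... probe this card ... */
        }
}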

DeathAndTaxes (Donator, Legendary) - Gerald Davis
April 22, 2012, 07:42:15 PM  #77

Quote from: QuantumFoam
I've been messing with the source code so I can see the VRM temps under Linux for my 5970. [...]

Could you provide the modified code (pastebin would work fine)? I tried messing around with this but never got it working.
QuantumFoam (Full Member)
April 22, 2012, 07:58:01 PM (last edit: February 15, 2017, 09:48:54 AM by QuantumFoam)  #78



Output on my xubuntu machine:

Device [8]: Hemlock [ATI Radeon HD 5900 Series]
        Current core voltage: 1.0375 V
        Presets: 0.9500 / 1.0000 / 1.0375 / 1.0500 V
        Core power draw: 57.48 A (59.64 W)
        VRM temperatures: 57 / 61 / 60 C


Device [9]: Hemlock [ATI Radeon HD 5900 Series]
        Current core voltage: 1.0375 V
        Presets: 0.9500 / 1.0000 / 1.0375 / 1.0500 V
        Core power draw: 56.61 A (58.74 W)
        VRM temperatures: 81 / 82 / 82 C


I'm sure there's a better solution than removing the vendor_id and device_id checks; one would need to add the relevant IDs for the 5900 series.
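A sketch of that safer variant (0x6899 and 0x689c are the commonly listed PCI device IDs for the HD 5850 and Hemlock/HD 5970 respectively - verify against lspci -nn on your own card before trusting them):

Code:
/* whitelist known cards instead of dropping the check entirely */
if (dev->vendor_id == 0x1002 &&
    (dev->device_id == 0x6899 ||   /* Cypress / HD 5850 */
     dev->device_id == 0x689c)) {  /* Hemlock / HD 5970 */
        /* ... proceed with this card ... */
}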

ummas (Sr. Member)
April 23, 2012, 04:43:19 AM (last edit: April 23, 2012, 05:05:49 AM by ummas)  #79

Yeah, and tell us what to do to get it working :/

EDIT:
I replaced the original radeonvolt.c with the lines QuantumFoam posted (only the parts that fitted were swapped out), but it does not show me anything.
QuantumFoam (Full Member)
April 23, 2012, 06:36:33 AM  #80

I just provided the altered function; the rest of the file should be the same. If it's not showing anything, perhaps differences between operating systems or hardware are causing the problem. You did re-run make after saving, right? It could also be that the device class has a different value on your system. One way to narrow it down would be to put in some printfs and see what they output, if you're comfortable enough with C. If not, I can paste the relevant code with printfs.
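Something like this at the top of the enum_cards loop would show what libpci reports for each device, and hence why a card is being skipped (field names per pciutils; a sketch, adjust to the local source):

Code:
printf("dev %02x:%02x.%d class=0x%04x vendor=0x%04x device=0x%04x\n",
       dev->bus, dev->dev, dev->func,
       dev->device_class, dev->vendor_id, dev->device_id);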
