Author Topic: Mining speed dependence on memory speed.  (Read 2511 times)
MrTeal (OP) | Legendary | Activity: 1274 | Merit: 1004
February 27, 2012, 03:16:21 AM | #1

Between clocktweak and Afterburner I've finally been able to lower the clocks on the memory of my 6870, but the results really have me confused. My understanding from other threads is that underclocking memory should have almost no effect on hashing speed. I found that not to be the case for my personal rig (2500K / Z68X-UD3H-B3 / HIS 6870), and I'm wondering what I'm doing wrong. Voltage, core and memory speeds were verified in both Afterburner and GPU-Z, while power draw was measured at the wall with a P3 Kill-A-Watt.

Code:
V (V)   Core (MHz)   Mem (MHz)   MHash/s   Watts (W)   MH/W
1.168   1000         300         260       163         1.595
1.168   1000         450         277       178         1.556
1.168   1000         525         290       182         1.593
1.168   1000         600         300       186         1.613
1.168   1000         700         310       191         1.623
1.168   1000         800         318       196         1.622
1.168   1000         900         320       201         1.592
1.168   1000         1050        321       206         1.558



I know there is supposed to be a maximum spread of 125 MHz between core and memory speed on the 6xxx GPUs, but both GPU-Z and AB reported the speeds being set correctly. Is there another explanation for these results, and is there anything I can do so I don't lose 60 MHash/s by underclocking my memory?
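For anyone eyeballing the MH/W column, it is just MHash divided by Watts. A quick sanity check in Python, with the values copied straight from the table above:

Code:
# Recompute the MH/W column from the MHash and Watts readings in the table above.
data = [  # (mem MHz, MHash/s, Watts at the wall)
    (300, 260, 163), (450, 277, 178), (525, 290, 182), (600, 300, 186),
    (700, 310, 191), (800, 318, 196), (900, 320, 201), (1050, 321, 206),
]
for mem, mhash, watts in data:
    print(f"mem={mem:4d} MHz  {mhash} MH/s / {watts} W = {mhash / watts:.3f} MH/W")
# Efficiency peaks around 700-800 MHz memory, even though raw MH/s keeps climbing.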
waterboyserver | Full Member | Activity: 126 | Merit: 100
February 27, 2012, 03:36:56 AM | #2

Hmm, that is interesting. I have used a 6850 to mine through BitMinter, and my hash rate did not change when I aggressively underclocked the graphics RAM, though it did keep the card slightly cooler, which is probably why many people do this. On the other hand, some miners may have optimizations for a specific version of the AMD APP SDK that improve hashing by making use of graphics RAM. I now use a 7970 with those optimizations, and increasing or decreasing the graphics RAM clock still does not affect my hash rate. How long did you run your mining app to record each of those values (assuming no video or graphically demanding software was running at the same time)?
MrTeal (OP) | Legendary | Activity: 1274 | Merit: 1004
February 27, 2012, 03:46:01 AM | #3

Sorry, I meant to include that info in the OP. I let each one stabilize for a minute, though it had stopped changing within 10 seconds of being set. I was using Phoenix with the phatk2 kernel, with flags VECTORS4 WORKSIZE=128 BFI_INT FASTLOOP=false AGGRESSION=13.
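For reference, a full Phoenix command line with those flags would look roughly like this (the pool URL, credentials, and DEVICE index are placeholders; adjust them to your own setup):

Code:
phoenix.exe -u http://user:pass@pool.example.com:8332/ -k phatk2 DEVICE=0 VECTORS4 WORKSIZE=128 BFI_INT FASTLOOP=false AGGRESSION=13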
drakahn | Hero Member | Activity: 504 | Merit: 500
February 27, 2012, 03:52:48 AM | #4

What SDK version? I read somewhere that the latest one likes the higher mem speeds.

DeathAndTaxes (Gerald Davis) | Donator, Legendary | Activity: 1218 | Merit: 1079
February 27, 2012, 04:04:16 AM | #5

Which SDK are you running?

Are you sure it is actually changing the memclock? Don't trust AB, which just reports what the clock is set to.
What does cgminer show? What does GPU-Z show on the sensors tab (not the main tab, which also just shows what the card is set to)?
MrTeal (OP) | Legendary | Activity: 1274 | Merit: 1004
February 27, 2012, 04:28:52 AM | #6

Which SDK, AMD APP? I'm not sure which version I'm running; I would assume it's whatever comes with the precert 12.3 CCC drivers. How would I go about finding out?

The memory speed that was displayed in AB was the same as what was reported in the sensors tab of GPU-Z. I haven't run cgminer, but I could give that a shot and see if it changes anything.
MrTeal (OP) | Legendary | Activity: 1274 | Merit: 1004
February 27, 2012, 04:52:12 AM | #7

I checked with cgminer (with debug on), and it reports the same memory speed as AB and GPU-Z.
waterboyserver | Full Member | Activity: 126 | Merit: 100
February 27, 2012, 07:20:59 AM | #8

There is a post here that shows how to find out which SDK you have:

https://bitcointalk.org/index.php?topic=57784.0

diabinc141 | Newbie | Activity: 14 | Merit: 0
February 27, 2012, 08:02:33 AM | #9

Yeah, those drivers come with the newer SDK, which is much worse for mining in my experience. Try going back to SDK 2.5.
bulanula | Hero Member | Activity: 518 | Merit: 500
February 27, 2012, 09:44:21 AM | #10

These results are BS. You are using SDK 2.6 which is not good at all.

MrTeal (OP) | Legendary | Activity: 1274 | Merit: 1004
February 27, 2012, 02:18:39 PM | #11

So, I didn't have the SDK installed at all, though there were some OpenCL binaries in the AMD APP /bin directory. I tried installing the latest version (2.6), but it made no difference. When I get home I will try uninstalling 2.6 and installing 2.5 to see if that changes the results. Running 2.5 with CCC 12.x won't cause an issue, will it?

Quote from: bulanula
These results are BS. You are using SDK 2.6 which is not good at all.
I don't see how the results are BS. Those are the values I'm getting, so unless I'm either measuring wrong or making them up I'd say they're valid.
DeathAndTaxes (Gerald Davis) | Donator, Legendary | Activity: 1218 | Merit: 1079
February 27, 2012, 02:59:04 PM | #12

Quote from: MrTeal
So, I didn't have the SDK installed at all, though there were some OpenCL binaries in the AMD APP /bin directory. I tried installing the latest version (2.6), but it made no difference. When I get home I will try uninstalling 2.6 and installing 2.5 to see if that changes the results. Running 2.5 with CCC 12.x won't cause an issue, will it?

The SDK runtime (yes, horribly stupid name, AMD) is included in the install package. If you picked the "express" install, it installed the 12.1 driver AND the SDK 2.6 runtime. If you start cgminer with cgminer -n, it will tell you which SDK runtime you have installed.

2.6 doesn't work well with a low memclock. For 5000 series cards 2.1 is the best but only by 2% or 3% or so. 2.4 and 2.5 have roughly the same performance, and 2.6 completely blows when using a low memclock (sometimes 20%+ worse performance).

Quote
When I get home I will try uninstalling 2.6 and installing 2.5 to see if that changes the results. Running 2.5 with CCC 12.x won't cause an issue, will it?

No, it won't cause a problem, but removing SDK/runtime 2.6 is a nightmare. The alternative is to check the cgminer thread; conman set up an FTP with already-compiled bin files for various SDK versions.

The OpenCL runtime is only used to CREATE the bin file used by the graphics card. Once the bin file is created, the runtime is no longer used, so you can test different SDK versions by simply deleting (or moving) the bin files from the cgminer folder and replacing them with bin files created under another SDK.
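If you keep the bin files for each SDK in their own folder, the swap can be scripted. A minimal Python sketch; the C:\cgminer and C:\sdk_bins paths are only assumptions for illustration, not anything cgminer requires:

Code:
# Swap the precompiled kernel .bin files in the cgminer folder for a set built
# under a different SDK, without reinstalling anything.
# The folder layout (C:\sdk_bins\2.1, C:\sdk_bins\2.5, ...) is only an example.
import shutil
import sys
from pathlib import Path

CGMINER_DIR = Path(r"C:\cgminer")       # adjust to your cgminer folder
SDK_BIN_STORE = Path(r"C:\sdk_bins")    # one subfolder of .bin files per SDK version

def swap_bins(sdk_version: str) -> None:
    source = SDK_BIN_STORE / sdk_version
    if not source.is_dir():
        raise SystemExit(f"No stored bin files for SDK {sdk_version} in {source}")
    # Move the current bin files aside instead of deleting them.
    backup = CGMINER_DIR / "bin_backup"
    backup.mkdir(exist_ok=True)
    for old in CGMINER_DIR.glob("*.bin"):
        old.replace(backup / old.name)          # overwrites any older backup copy
    # Drop in the set that was compiled under the requested SDK.
    for new in source.glob("*.bin"):
        shutil.copy2(new, CGMINER_DIR / new.name)

if __name__ == "__main__":
    swap_bins(sys.argv[1])    # e.g.  python swap_bins.py 2.5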

If you ever do a clean install, the best thing to do is to first install the SDK/runtime only, and then install the latest driver (select custom and UNSELECT the "OpenCL Runtime", which will be 2.6+).
forsetifox | Sr. Member | Activity: 266 | Merit: 250
February 27, 2012, 03:55:34 PM | #13

Using the installer for the catalyst and/or SDK packages has always been a nightmare.

It seems to use whichever installer was used last. I install the full Catalyst drivers (including the streaming stuff) and then extract the opencl.dll from whatever SDK I want to use and plop that into the cgminer folder. That's why I don't have installer issues.

Removing catalyst drivers is even more annoying.

Uninstall everything.
Delete all ATI/AMD folders, including the ones in Roaming.
Open up regedit and kill all ATI and AMD keys in LocalMachine and CurrentUser.
Reboot into safe mode and run Driver Sweeper.
Then reboot into normal Windows and install Catalyst.

Or you could just re-install windows. =P
DeathAndTaxes (Gerald Davis) | Donator, Legendary | Activity: 1218 | Merit: 1079
February 27, 2012, 04:33:09 PM | #14

Quote from: forsetifox
Or you could just re-install windows. =P

This. Put your user data on a separate drive. Windows 7 installs pretty quick from a decent-speed (newer) USB stick. I hate not knowing whether I got rid of every last crappy driver fragment.
drakahn | Hero Member | Activity: 504 | Merit: 500
February 27, 2012, 04:36:21 PM | #15

http://dl.dropbox.com/u/9768004/AMD-APP-SDK-v2.5.rar
Unrar that to C:\ and then restart your miners with the lower mem speed.

bulanula | Hero Member | Activity: 518 | Merit: 500
February 27, 2012, 05:44:06 PM | #16

Quote from: DeathAndTaxes
For 5000 series cards 2.1 is the best but only by 2% or 3% or so.

Any hard evidence, or just speculation? I am using 2.1 myself, but I don't think it really is that much better than 2.4.

I want to see actual 2.1 vs 2.4 results for a 5xxx card showing 2.1 is better, if that exists anywhere, and also a core-to-memory ratio analysis.

Thanks!
DeathAndTaxes (Gerald Davis) | Donator, Legendary | Activity: 1218 | Merit: 1079
February 27, 2012, 05:53:22 PM | #17

Quote from: bulanula
Quote from: DeathAndTaxes
For 5000 series cards 2.1 is the best but only by 2% or 3% or so.
Any hard evidence, or just speculation? I am using 2.1 myself, but I don't think it really is that much better than 2.4. I want to see actual 2.1 vs 2.4 results for a 5xxx card showing 2.1 is better, if that exists anywhere, and also a core-to-memory ratio analysis.

Just my own analysis (and it was limited to a few clocks, at the memclock I found optimal). The difference is small, but I found 2.1 to provide higher MH/s per clock. With 2.4 I got about 0.42 to 0.45 MH per MHz (MH/s scales linearly with clock speed). With 2.1, at the various clocks I tested (I admit not a comprehensive test), I achieved 0.44 to 0.46. The average improvement was around 2%.

Would be nice to use some script to test:
clock speed
mem speed
vectors
worksize
SDK

in a comprehensive manner (i.e. run 1K shares on each, record the result, then iterate to the next parameter set).

If someone wrote up some code I would be willing to "donate" a 3x5970 rig to run tests for as long as it takes.
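Something like the outline below would cover it. It is only a sketch: set_clocks() and measure_hashrate() are placeholders for whatever clock tool and miner you actually drive, not existing cgminer or Phoenix APIs.

Code:
# Brute-force sweep over the parameters listed above; results go to a CSV.
# set_clocks() and measure_hashrate() are placeholders: wire them up to your
# own clock tool and miner (e.g. via subprocess); they are not real APIs.
import csv
import itertools

CORE_CLOCKS = [850, 900, 950, 1000]     # MHz
MEM_CLOCKS = [300, 600, 900, 1050]      # MHz
VECTORS = ["VECTORS", "VECTORS4"]
WORKSIZES = [64, 128, 256]
SDKS = ["2.1", "2.4", "2.5", "2.6"]     # selected by swapping kernel bin files

def set_clocks(core_mhz: int, mem_mhz: int) -> None:
    # Placeholder: call clocktweak / Afterburner / your tool of choice here.
    print(f"setting core={core_mhz} MHz, mem={mem_mhz} MHz")

def measure_hashrate(vectors: str, worksize: int, sdk: str, shares: int = 1000) -> float:
    # Placeholder: launch the miner with these kernel args, wait until `shares`
    # shares have been accepted, and return the observed MH/s.
    return 0.0

with open("sweep_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["core_mhz", "mem_mhz", "vectors", "worksize", "sdk", "mhash"])
    for core, mem, vec, ws, sdk in itertools.product(
            CORE_CLOCKS, MEM_CLOCKS, VECTORS, WORKSIZES, SDKS):
        set_clocks(core, mem)
        writer.writerow([core, mem, vec, ws, sdk, round(measure_hashrate(vec, ws, sdk), 1)])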
BCMan | Hero Member | Activity: 535 | Merit: 500
February 27, 2012, 07:54:27 PM | #18

Quote from: MrTeal
Between clocktweak and Afterburner I've finally been able to lower the clocks on the memory of my 6870, but the results really have me confused. [...]
LOL. 323 MHash/s here @ 1000/300 at 1.051 V with a Sapphire 6870. Using the 2.6 SDK?
ssateneth | Legendary | Activity: 1344 | Merit: 1004
February 27, 2012, 09:25:21 PM | #19

Quote from: bulanula
Quote from: DeathAndTaxes
For 5000 series cards 2.1 is the best but only by 2% or 3% or so.
Any hard evidence, or just speculation? I am using 2.1 myself, but I don't think it really is that much better than 2.4. I want to see actual 2.1 vs 2.4 results for a 5xxx card showing 2.1 is better, if that exists anywhere, and also a core-to-memory ratio analysis.

Well, I have a good chunk of sample data at https://docs.google.com/spreadsheet/ccc?key=0AjXdY6gpvmJ4dEo4OXhwdTlyeS1Vc1hDWV94akJHZFE&hl=en_US#gid=0 but it's currently for the 2.1 SDK. I would need to uninstall 2.6 and install the 2.4 SDK instead (which is fine for me, I don't use 2.6 anyway) and run all the combos the same way. Here's a pretty graph in case you don't want to read the spreadsheet.
Note: My sample is very incomplete. It takes a lot of time to test all the combos. The environment was Win7 x64, Phoenix 2.0 RC1, phatk 2.2 kernel, 12.1 driver, 2.1 SDK, AGGRESSION=14 BFI_INT with varying worksize and vectors. The card used is an Asus Radeon 5870 @ 1035 MHz core.

MrTeal (OP) | Legendary | Activity: 1274 | Merit: 1004
February 28, 2012, 02:46:32 AM | #20

Well, I fixed my issues. I uninstalled 2.6, installed 2.5, uninstalled that and tried 2.3, and still saw no change. I'm now back on 2.6, but the fix was switching from VECTORS4 to just VECTORS. VECTORS4 gives me a couple of extra MHash/s @ 1150 MHz, but kills the speed at lower memory clocks. I'm now at a wall power draw of 176 W for 319 MHash/s.