Riseman
|
![](https://bitcointalk.org/Themes/custom1/images/post/xx.gif) |
January 04, 2015, 04:18:43 AM |
|
I guess when beta4 comes out, it will consume as much power as the scrypt algo. ![Angry](https://bitcointalk.org/Smileys/default/angry.gif) I think that's rather good, because it means the miner is fully utilizing the GPU. Unless you can't adjust the intensity at all.
|
|
|
|
WORE
|
![](https://bitcointalk.org/Themes/custom1/images/post/xx.gif) |
January 04, 2015, 07:59:06 AM |
|
> I guess when beta4 comes out, it will consume as much power as the scrypt algo. ![Angry](https://bitcointalk.org/Smileys/default/angry.gif) I think that's rather good, because it means the miner is fully utilizing the GPU. Unless you can't adjust the intensity at all.

You know you have full utilization of the card(s) when you no longer need to run the space heater that burns 1500 watts; the trick is not to burn up the new heater... ![Grin](https://bitcointalk.org/Smileys/default/grin.gif)
|
|
|
|
cashclash
Member
![*](https://bitcointalk.org/Themes/custom1/images/star.gif)
Offline
Activity: 86
Merit: 11
|
![](https://bitcointalk.org/Themes/custom1/images/post/xx.gif) |
January 04, 2015, 08:12:39 AM |
|
Nice miner upgrade regarding hashrate. My power consumption went from around 240 Watt to 315 Watt on my Sapphire R9 290.
Update: I forgot the ADL_SDK; after recompiling, consumption is now around 250 Watt.
|
|
|
|
berbip
Member
![*](https://bitcointalk.org/Themes/custom1/images/star.gif)
Offline
Activity: 143
Merit: 10
|
![](https://bitcointalk.org/Themes/custom1/images/post/xx.gif) |
January 04, 2015, 01:15:17 PM |
|
I'm getting around 240 - 247 per 280x, clocked 1000/1050, vddc 1100. Still having problems keeping the heat on my cards down; the secondary card hits 90+ in a couple of minutes.
|
|
|
|
wr104 (OP)
|
![](https://bitcointalk.org/Themes/custom1/images/post/xx.gif) |
January 04, 2015, 02:09:32 PM Last edit: January 04, 2015, 03:52:44 PM by wr104 |
|
> I've switched to beta2, I am worried about my power supply. ![Angry](https://bitcointalk.org/Smileys/default/angry.gif)

There is no need to go back to beta2. The kernel is the same. You can remove the '--cl-opt-disable' option, delete the 'kshake320-546-uint2{GPUNAME}v1w256i4.bin' file, and cgminer beta3 should run just like beta2. Or you can take advantage of the Intensity setting. For your R270, you can use intensity 3 or 4 on each card. Try specifying: -I 3 or -I 4

> @wr104
> I am also getting the HW errors again with the Nvidia cards. Was there something besides the .bin files that needed to be deleted?
> EDIT: It works fine for my 750ti's. Getting around 80 kh/s, up from about 50 kh/s. The 970 gets HW errors though. Not sure what is going on with that yet.

As I said, the kernel hasn't changed. If the '--cl-opt-disable' option isn't helping on the 970, just remove it and cgminer should pretty much work as before. Don't forget to delete 'kshake320-546-uint2{GPUNAME}v1w256i4.bin' every time you make a change, so cgminer is forced to re-compile the kernel the next time it is executed. Also, try changing the work size using the '--worksize' option on your 970. The default is now 256 and your nVidia GPU might not like that value.

> A request for an increased value range of 'gpu-engine': presently it will not accept a value below 1GHz, and by default it tries to run at 1.1GHz. Presently, if using "cl-opt-disable" : true with "Intensity" : "8" on an R9 280x DD, I am getting a fair hash rate of about 235KH/s, but the driver is locking up; not until the card reaches around 90C, though, which puts the VRMs near their max tolerance of 120C with the increased wattage draw under the new kernel. Would it be possible to allow gpu-engine to run down to around 850MHz on Tahiti, to maintain full utilization of memory while providing the ability to lower running temps across all the hardware in the card?

I've been playing with all the GPU settings (manual engine clock, Autotune, Powertune, etc.) on all my cards, and this seems to be an issue with the drivers or with the ADL SDK. On my R9 290x, only reducing Powertune reduces the GPU clock. On my HD7970, nothing makes the GPU clock change. On my HD6950, everything works fine.

> On a 280x I can directly specify the clock setting or give it a range (1000-1100+) which cgminer is supposed to dynamically adjust based on temp-overheat + temp-hysteresis, and it was observed doing so on beta2. It just won't go below 1GHz even if told to. It's a bummer that one cannot do the same based on VRM temps, as this would have saved a lot of cards throughout the history of BTC. No biggie for me though, I'd just swap out burned VRMs with new ones, as I have to do a lot of component-level board repairs at my day gig.
> Could be another interesting implementation of PID in there as well. ![Smiley](https://bitcointalk.org/Smileys/default/smiley.gif) I don't have the GPU temp swap thing going on unless it has GPU-Z spoofed. Funny though, GPU0 does run hotter than GPU1, but the VRM on GPU1 runs hotter than on GPU0. But this has been observed mining under other algos, so it may be a problem with that card. Wacky: the temp-overheat + temp-hysteresis clock-down seems to throttle the cooler card, GPU1. But realistically, the '"cl-opt-disable" : true' option is for water-cooled rigs. I was running the clock at 1GHz, WS 256, I 6, and still hit 90C and caught the VRM at 120C, for only a 24Kh/s gain; not worth it to me unless I can run fewer MHz on the clock and avoid having to unbrick my card.

Unless something got messed up with the ADL library when I built beta3, the Autotune should work the same as in beta2. The temperature mapping issue exists because there is no way to automatically correlate in code what OpenCL thinks is GPU0 with what ADL reports as GPU0. The way to tell for sure is by disabling one GPU in cgminer and watching the temperatures in the status bar. If the disabled GPU doesn't cool down, then you know you have an incorrect mapping and you need to use the --gpu-map option.
Edit: I just recompiled a new cgminer from a fresh sandbox, included the latest ADL SDK, and I get the same results... Autotune isn't working on Tahiti or Hawaii. Cayman works fine.
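Put together, the knobs mentioned in this post would look something like this as a cgminer.conf fragment (illustrative values only, pool settings omitted; pick intensity/worksize for your own card as discussed above):

```json
{
  "intensity": "4",
  "worksize": "256",
  "cl-opt-disable": true,
  "gpu-engine": "1000-1100",
  "temp-overheat": "85",
  "temp-hysteresis": "3"
}
```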
|
|
|
|
WORE
|
![](https://bitcointalk.org/Themes/custom1/images/post/xx.gif) |
January 04, 2015, 05:22:10 PM |
|
> I'm getting around 240 - 247 per 280x clocked 1000/1050 vddc 1100 still problems with keeping the heat on my cards down, secondary card hits 90+ in a couple of minutes

At that temp you'd better watch your VRMs. 120C is the max tolerance, and if you let it go over that for any length of time you are doing damage and will brick your card(s). GPU-Z will tell you your VRM temps.
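As an aside, the temp-overheat + temp-hysteresis behavior described in this thread can be sketched as a toy control step (my own model, not cgminer's actual code; thresholds, clamps, and step size are made-up values):

```python
def throttle_step(temp_c, engine_mhz, overheat=85, hysteresis=3,
                  min_mhz=1000, max_mhz=1100, step=5):
    """One control tick: clock down above the overheat threshold,
    clock back up only once the card cools past the hysteresis band."""
    if temp_c >= overheat:
        return max(min_mhz, engine_mhz - step)   # too hot: back off
    if temp_c < overheat - hysteresis:
        return min(max_mhz, engine_mhz + step)   # cool again: restore
    return engine_mhz                            # inside the band: hold
```

The hysteresis band is what keeps the clock from oscillating every tick when the temperature hovers right at the threshold.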
|
|
|
|
Riseman
|
![](https://bitcointalk.org/Themes/custom1/images/post/xx.gif) |
January 04, 2015, 05:32:39 PM |
|
> 120C is max tolerance

How do you know that? I mean, is there a datasheet or something?
|
|
|
|
WORE
|
![](https://bitcointalk.org/Themes/custom1/images/post/xx.gif) |
January 04, 2015, 05:44:02 PM |
|
> [...] The way to tell for sure is by disabling one GPU in cgminer and watching the temperatures in the status bar. If the disabled GPU doesn't cool down, then you know you have an incorrect mapping and you need to use the --gpu-map option.

If I disable GPU0, then the temp drops on GPU0, but the other condition exists: if GPU0 goes to 88C, it throttles down GPU1 to 1GHz. So maybe the Autotune mappings are crossed in your latest kernel; but then again, the cgminer source has had a lot of hands in it.
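For anyone hitting the same crossed-mapping symptom, the fix discussed is the --gpu-map option; this is an example mapping only, and you would adjust the pairs to match your own rig:

```shell
# OpenCL device 0's load shows up on ADL (temperature) device 1 and
# vice versa, so swap them: --gpu-map takes OpenCL:ADL pairs as CSV.
cgminer --gpu-map 0:1,1:0
```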
|
|
|
|
wr104 (OP)
|
![](https://bitcointalk.org/Themes/custom1/images/post/xx.gif) |
January 04, 2015, 06:14:10 PM Last edit: January 04, 2015, 07:03:38 PM by wr104 |
|
The ADL code in sgminer has a lot of changes. I'm going to try to port them to cgminer-khc and see what happens.
Edit:
No change in behavior. Autotune only works on the 6950. This rig has Win7 x64.
Perhaps there is an issue with the driver in Windows 8.1; my two rigs where autotune won't work are on Windows 8.1 Pro.
|
|
|
|
WORE
|
![](https://bitcointalk.org/Themes/custom1/images/post/xx.gif) |
January 04, 2015, 07:08:48 PM Last edit: January 04, 2015, 07:19:24 PM by WORE |
|
> The ADL code in sgminer has a lot of changes. I'm going to try to port them to cgminer-khc and see what happens.
> Edit: No change in behavior. Autotune only works on the 6950. This rig has Win7 x64. Perhaps there is an issue with the driver in Windows 8.1; my two rigs where autotune won't work are on Windows 8.1 Pro.

I've got the 14.12 Omega on Win7 x64. The drivers seem pretty stable, other than when VRM temps reach the tolerance threshold and starve the core voltage, and that is pretty much what saves my second card from bricking. Both cards are running the 15.43 BIOS, as that idles the cards at 500MHz; the 15.44 BIOS idles at 300MHz, which allows mouse-pointer driver corruption after a card is reset by the driver.
|
|
|
|
wr104 (OP)
|
![](https://bitcointalk.org/Themes/custom1/images/post/xx.gif) |
January 04, 2015, 07:12:53 PM Last edit: January 04, 2015, 08:42:28 PM by wr104 |
|
I'm going to put my HD7970 in my Win7 system and see what happens.
Edit: Got the same results. It seems that with the 14.12 Omega drivers you can only overclock the GPU. Underclocking either GPU or Memory won't work.
|
|
|
|
WORE
|
![](https://bitcointalk.org/Themes/custom1/images/post/xx.gif) |
January 04, 2015, 10:24:52 PM |
|
> I'm going to put my HD7970 in my Win7 system and see what happens.
> Edit: Got the same results. It seems that with the 14.12 Omega drivers you can only overclock the GPU. Underclocking either GPU or Memory won't work.

Rrrrrr? That's an AMD-screwed-the-pooch thing; the core was designed for 850MHz on an XFX R9 280x DD and has a 1000MHz boost. Maybe they are looking to be overwhelmed by RMAs? Or are they just going to be giving out free 290x's?
|
|
|
|
wr104 (OP)
|
![](https://bitcointalk.org/Themes/custom1/images/post/xx.gif) |
January 04, 2015, 11:19:08 PM |
|
BTW, I also noticed that on Tahiti the best worksize value is 64. Using worksize 256 makes it drop to 190Kh/s.
|
|
|
|
WORE
|
![](https://bitcointalk.org/Themes/custom1/images/post/xx.gif) |
January 04, 2015, 11:29:24 PM Last edit: January 04, 2015, 11:50:37 PM by WORE |
|
> BTW, I also noticed that on Tahiti, the best worksize value is 64. Using worksize 256 will make it drop to 190Kh/s

Yes, I had previously been using that value, but you based your example table on 256 so I threw that in there. cgminer won't accept a value of 30 or 32 based on 2048 shaders; it will take I 16 max. With no entry the GUI reports 0, the gpu-engine clock is limited to 1GHz, it runs over temp on the VRM and hoses the driver, and the machine freezes. It doesn't brick the card though, but chkdsk gets old quick on a 2TB drive.
|
|
|
|
antonio8
Legendary
Offline
Activity: 1386
Merit: 1000
|
![](https://bitcointalk.org/Themes/custom1/images/post/xx.gif) |
January 05, 2015, 12:00:02 AM |
|
> BTW, I also noticed that on Tahiti, the best worksize value is 64. Using worksize 256 will make it drop to 190Kh/s

I just tried it on my 280X Tahiti but it won't go over 80 kh/s with worksize 64.
|
If you are going to leave your BTC on an exchange please send it to this address instead 1GH3ub3UUHbU5qDJW5u3E9jZ96ZEmzaXtG, I will at least use the money better than someone who steals it from the exchange. Thanks ![Wink](https://bitcointalk.org/Smileys/default/wink.gif)
|
|
|
cisahasa
Legendary
Offline
Activity: 910
Merit: 1000
|
![](https://bitcointalk.org/Themes/custom1/images/post/xx.gif) |
January 05, 2015, 12:03:25 AM Last edit: January 05, 2015, 12:18:56 AM by cisahasa |
|
Still here... Tested; the new miner is finding solo blocks now (still needs some fixing).
270kh/s per GPU.
About the algo: you can't fight ASICs, but it can still become Wolf0-proof.
|
|
|
|
wr104 (OP)
|
![](https://bitcointalk.org/Themes/custom1/images/post/xx.gif) |
January 05, 2015, 12:32:16 AM Last edit: January 05, 2015, 12:58:28 AM by wr104 |
|
> BTW, I also noticed that on Tahiti, the best worksize value is 64. Using worksize 256 will make it drop to 190Kh/s
> I just tried it on my 280X Tahiti but it won't go over 80 kh/s with worksize 64

If you are using a worksize of 64, Intensity needs to be 32 for the HD7970/R9-280x. But I limited it to 16. ![Undecided](https://bitcointalk.org/Smileys/default/undecided.gif) Try disabling Intensity, and if it gets too hot, reduce the shaders-mul. I guess I'm going to have to release beta4 ASAP to address this issue.
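The arithmetic behind that worksize/intensity pairing appears to be simply shaders divided by worksize; that is a guess from the numbers in this thread (2048-shader Tahiti: worksize 256 pairs with intensity 8, worksize 64 with intensity 32), not something taken from the cgminer-khc source:

```python
def full_occupancy_intensity(shaders: int, worksize: int) -> int:
    """Work-groups needed so worksize * intensity covers every shader
    once (hypothetical helper, not cgminer code)."""
    return shaders // worksize

# Tahiti (HD7970 / R9 280x) has 2048 shaders:
print(full_occupancy_intensity(2048, 256))  # 8
print(full_occupancy_intensity(2048, 64))   # 32
```

Which would explain why capping Intensity at 16 starves a worksize-64 kernel on Tahiti.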
|
|
|
|
rigging
Jr. Member
Offline
Activity: 59
Merit: 10
|
![](https://bitcointalk.org/Themes/custom1/images/post/xx.gif) |
January 05, 2015, 02:34:59 AM |
|
The wallet and GPU miners seem stable now. I think the next challenge for KHC is to create applications (iOS & Android) that users want to use.
|
|
|
|
antonio8
Legendary
Offline
Activity: 1386
Merit: 1000
|
![](https://bitcointalk.org/Themes/custom1/images/post/xx.gif) |
January 05, 2015, 02:49:45 AM |
|
> The wallet and gpu miners seem stable now. I think the next challenge for KHC is to create applications(ios&androind) that users want to use.

I am having an issue with beta3 now. For some reason my miner just stops; it has happened about 3 times now on my 280x, and I have to exit and restart it. I'll see if my log shows something the next time it happens, if I can remember to look at it before restarting.
|
|
|
|
|
|