Bitcoin Forum
Pages: « 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 [45] 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 »
Author Topic: [ANN] Kryptohash | Brand new PoW algo | 320bit hash | ed25519 | PID algo for dif  (Read 149396 times)
Riseman
Hero Member
*****
Offline Offline

Activity: 690
Merit: 500


View Profile
January 04, 2015, 04:18:43 AM
 #881

I guess when beta4 comes out, it will consume as much power as the scrypt algo. Angry

I think it's rather good because it means that the miner is fully utilizing the GPU. Unless you can't adjust the intensity at all.
WORE
Sr. Member
****
Offline Offline

Activity: 350
Merit: 250


View Profile
January 04, 2015, 07:59:06 AM
 #882

I guess when beta4 comes out, it will consume as much power as the scrypt algo. Angry

I think it's rather good because it means that the miner is fully utilizing the GPU. Unless you can't adjust the intensity at all.

You know you have full utilization of the card(s) when you no longer need to run the 1500-watt space heater; the trick is not to burn up the new "heater"...   Grin
cashclash
Member
**
Offline Offline

Activity: 86
Merit: 11


View Profile
January 04, 2015, 08:12:39 AM
 #883

Nice miner upgrade regarding hashrate.
My power consumption went from around 240 Watt to 315 Watt on my Sapphire r9 290.

Update: I forgot the ADL_SDK; after recompiling, consumption is now around 250 Watt.
berbip
Member
**
Offline Offline

Activity: 143
Merit: 10


View Profile
January 04, 2015, 01:15:17 PM
 #884

I'm getting around 240-247 kh/s per 280x,
clocked 1000/1050, vddc 1100.
Still having problems keeping the heat on my cards down; the secondary card hits 90C+ in a couple of minutes.
wr104 (OP)
Sr. Member
****
Offline Offline

Activity: 329
Merit: 250


View Profile WWW
January 04, 2015, 02:09:32 PM
Last edit: January 04, 2015, 03:52:44 PM by wr104
 #885

I've switched to beta2; I am worried about my power supply. Angry

There is no need to go back to beta2. The Kernel is the same.

You can remove the '--cl-opt-disable' option, delete the 'kshake320-546-uint2{GPUNAME}v1w256i4.bin' file and cgminer beta3 should run just like the beta2.  
 
Or, you can take advantage of the Intensity setting.  For your R270, you can use intensity 3 or 4 on each card.  Try specifying:  -I 3  or -I 4
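Concretely, the two steps above might look like this (a sketch only: the pool URL and worker credentials are placeholders, and I'm assuming cgminer-khc keeps the stock cgminer flag syntax):

```shell
# Delete the cached kernel binary so cgminer recompiles it
# without the --cl-opt-disable workaround ({GPUNAME} varies, so glob it):
rm -f kshake320-546-uint2*.bin

# Restart with a low per-card intensity (pool/worker are placeholders):
./cgminer -o stratum+tcp://pool.example.com:3333 -u worker -p x -I 3
```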


@wr104

I am also getting the HW errors again with the Nvidia cards.  Was there something besides the .bin files that needed to be deleted?

EDIT: It works fine for my 750ti's. Getting around 80 kh/s, up from about 50 kh/s. The 970 gets HW errors though. Not sure what is going on with that yet.

As I said, the Kernel hasn't changed.  If the '--cl-opt-disable' option isn't helping on the 970, just remove it and cgminer should pretty much work as before.  Don't forget to delete 'kshake320-546-uint2{GPUNAME}v1w256i4.bin' every time you make a change, so cgminer is forced to re-compile the Kernel the next time it is executed.

Also, try changing the Work Size using the '--worksize' option on your 970.  The default is now 256 and your nVidia GPU might not like that value.
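For the 970, the same recipe with a smaller worksize might look like this (again only a sketch, with placeholder pool details):

```shell
# Delete the cached kernel so it gets rebuilt with the new worksize:
rm -f kshake320-546-uint2*.bin

# Try worksize 64 instead of the new default of 256:
./cgminer -o stratum+tcp://pool.example.com:3333 -u worker -p x \
          --worksize 64
```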

 


A request for an increased value range of 'gpu-engine'.  Presently it will not accept a value below 1GHz; by default it tries to run at 1.1GHz.

Presently, using "cl-opt-disable": true with "Intensity": "8" on an R9 280x DD, I am getting a fair hash rate of about 235KH/s, but the driver locks up once the card reaches around 90C, which puts the VRMs near their max tolerance of 120C with the increased wattage draw under the new kernel.

Would it be possible to allow gpu-engine to run down to around 850MHz on Tahiti, to maintain full utilization of memory while providing the ability to lower running temps across all the hardware on the card?

I've been playing with all the GPU settings (Manual Engine clock, Autotune, Powertune, etc) on all my cards and this seems to be an issue with the drivers or with the ADL SDK.

On my R9 290x, only reducing Powertune reduces the GPU clock.  
On my HD7970, nothing makes the GPU clock change.
On my HD6950, everything works fine.


On a 280x I can directly specify the clock setting or give it a range (1000-1100+), which cgminer is supposed to adjust dynamically based on temp-overheat + temp-hysteresis, and it was observed doing so on beta2.  It just won't go below 1GHz even if told to.

It's a bummer that one cannot do the same based on VRM temps, as this would have saved a lot of cards throughout the history of BTC.  No biggy for me though; I'd just swap out burned VRMs with new ones, as I do a lot of component-level board repairs at my day gig.  Could be another interesting application of PID in there as well.   Smiley

I don't have the GPU temp swap thing going on unless it has GPU-Z spoofed.  Funny though: GPU0 does run hotter than GPU1, but the VRM on GPU1 runs hotter than on GPU0.  This has been observed mining under other algos too, so it may be a problem with that card.  Wacky: the temp-overheat + temp-hysteresis that triggers the clock-down seems to clock down the cooler GPU1 card.  But realistically, the '"cl-opt-disable": true' option is for water-cooled rigs.  I was running the clock at 1GHz, WS 256, I 6, still hit 90C, caught the VRM at 120C, and only saw a 24Kh/s gain; not worth it to me unless I can run fewer MHz on the clock and avoid having to unbrick my card.

Unless something got messed up with the ADL library when I built Beta3, the Autotune should work the same as Beta2.

The temperature mapping issue exists because there is no way to automatically correlate in code what OpenCL thinks is GPU0 with what ADL reports as GPU0.
The way to tell for sure is to disable one GPU in cgminer and watch the temperatures in the status bar.  If the disabled GPU doesn't cool down, then you know you have an incorrect mapping and need to use the --gpu-map option.
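As a recipe, that check might look like this (a sketch; stock cgminer takes --gpu-map as comma-separated OpenCL:ADL pairs, and the 0:1,1:0 mapping below is just an example for a two-card rig whose order is swapped):

```shell
# 1) Disable one GPU from the cgminer TUI and watch the status-bar temps.
# 2) If the disabled GPU's temperature does NOT fall, OpenCL and ADL
#    disagree on device order; remap manually, e.g. for two swapped cards:
./cgminer -o stratum+tcp://pool.example.com:3333 -u worker -p x \
          --gpu-map 0:1,1:0
```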

Edit: I just recompiled a new cgminer from a fresh sandbox, included the latest ADL SDK and I get the same results...  AutoTune isn't working on Tahiti or Hawaii.  Cayman works fine.

WORE
Sr. Member
****
Offline Offline

Activity: 350
Merit: 250


View Profile
January 04, 2015, 05:22:10 PM
 #886

I'm getting around 240-247 kh/s per 280x,
clocked 1000/1050, vddc 1100.
Still having problems keeping the heat on my cards down; the secondary card hits 90C+ in a couple of minutes.

At that temp you'd better watch your VRMs. 120C is the max tolerance, and if you let it go over that for any length of time you are doing damage and will brick your card(s).  GPU-Z will tell you your VRM temps.
Riseman
Hero Member
*****
Offline Offline

Activity: 690
Merit: 500


View Profile
January 04, 2015, 05:32:39 PM
 #887

120C is max tolerance

How do you know that? I mean is there a datasheet or something?
WORE
Sr. Member
****
Offline Offline

Activity: 350
Merit: 250


View Profile
January 04, 2015, 05:44:02 PM
 #888

Quote from: wr104 on January 04, 2015, 02:09:32 PM
[full Autotune / temperature-mapping exchange quoted from post #885 above]

If I disable GPU0, the temp drops on GPU0, but the other condition persists: if GPU0 goes to 88C, it throttles GPU1 down to 1GHz.  So maybe the Autotune mappings are crossed in your latest build; but then again, the cgminer source has had a lot of hands in it.
wr104 (OP)
Sr. Member
****
Offline Offline

Activity: 329
Merit: 250


View Profile WWW
January 04, 2015, 06:14:10 PM
Last edit: January 04, 2015, 07:03:38 PM by wr104
 #889

The ADL code in sgminer has a lot of changes.  I'm going to try to port them to cgminer-khc and see what happens.



Edit:

No change in behavior.  Autotune only works on the 6950. This rig has Win7 x64.

Perhaps there is an issue with the driver in Windows 8.1.  My two rigs where autotune won't work are Windows 8.1 Pro.
WORE
Sr. Member
****
Offline Offline

Activity: 350
Merit: 250


View Profile
January 04, 2015, 07:08:48 PM
Last edit: January 04, 2015, 07:19:24 PM by WORE
 #890

The ADL code in sgminer has a lot of changes.  I'm going to try to port them to cgminer-khc and see what happens.



Edit:

No change in behavior.  Autotune only works on the 6950. This rig has Win7 x64.

Perhaps there is an issue with the driver in Windows 8.1.  My two rigs where autotune won't work are Windows 8.1 Pro.

I've got the 14.12 Omega on Win7 x64.  The drivers seem pretty stable, other than when VRM temps reach the tolerance threshold and the driver starves the core voltage; that is pretty much what saves my second card from bricking.  Both cards are running the 15.43 BIOS, as that idles the cards at 500MHz, rather than the 15.44 BIOS that idles at 300MHz and allows the mouse-pointer corruption after a card is reset by the driver.
wr104 (OP)
Sr. Member
****
Offline Offline

Activity: 329
Merit: 250


View Profile WWW
January 04, 2015, 07:12:53 PM
Last edit: January 04, 2015, 08:42:28 PM by wr104
 #891

I'm going to put my HD7970 in my Win7 system and see what happens.


Edit: Got the same results.  It seems that with the 14.12 Omega drivers you can only overclock the GPU.  Underclocking either GPU or Memory won't work.
WORE
Sr. Member
****
Offline Offline

Activity: 350
Merit: 250


View Profile
January 04, 2015, 10:24:52 PM
 #892

I'm going to put my HD7970 in my Win7 system and see what happens.


Edit: Got the same results.  It seems that with the 14.12 Omega drivers you can only overclock the GPU.  Underclocking either GPU or Memory won't work.


Rrrrrr?  That's an AMD-screwed-the-pooch thing; the core was designed for 850MHz on an XFX R9 280x DD and has a 1000MHz boost.  Maybe they are looking to be overwhelmed by RMAs?  Or are they just going to give out free 290x's?
wr104 (OP)
Sr. Member
****
Offline Offline

Activity: 329
Merit: 250


View Profile WWW
January 04, 2015, 11:19:08 PM
 #893

BTW, I also noticed that on Tahiti, the best worksize value is 64.  Using worksize 256 will make it drop to 190Kh/s
WORE
Sr. Member
****
Offline Offline

Activity: 350
Merit: 250


View Profile
January 04, 2015, 11:29:24 PM
Last edit: January 04, 2015, 11:50:37 PM by WORE
 #894

BTW, I also noticed that on Tahiti, the best worksize value is 64.  Using worksize 256 will make it drop to 190Kh/s

Yes, I had previously been using that value, but you based your example table on 256 so I threw that in there.  cgminer won't accept a value of 30 or 32 based on 2048 shaders; it will take I 16 max.  With no entry the GUI reports 0, the gpu-engine clock is limited to 1GHz, it runs over temp on the VRM and hoses the driver, and the machine freezes.  It doesn't brick the card, though, but chkdsk gets old quick on a 2TB drive.
antonio8
Legendary
*
Offline Offline

Activity: 1386
Merit: 1000


View Profile
January 05, 2015, 12:00:02 AM
 #895

BTW, I also noticed that on Tahiti, the best worksize value is 64.  Using worksize 256 will make it drop to 190Kh/s

I just tried it on my 280X Tahiti but it won't go over 80 kh/s with worksize 64

cisahasa
Legendary
*
Offline Offline

Activity: 910
Merit: 1000


View Profile
January 05, 2015, 12:03:25 AM
Last edit: January 05, 2015, 12:18:56 AM by cisahasa
 #896

Still here..
Tested: the new miner is finding blocks solo now
(still needs some fixing..)

270 kh/s per GPU

About the algo: you can't fight ASICs, but it can still become Wolf0-proof.

wr104 (OP)
Sr. Member
****
Offline Offline

Activity: 329
Merit: 250


View Profile WWW
January 05, 2015, 12:32:16 AM
Last edit: January 05, 2015, 12:58:28 AM by wr104
 #897

BTW, I also noticed that on Tahiti, the best worksize value is 64.  Using worksize 256 will make it drop to 190Kh/s

I just tried it on my 280X Tahiti but it won't go over 80 kh/s with worksize 64

If you are using a worksize of 64, Intensity needs to be 32 for the HD7970/R9-280x. But I limited it to 16.  Undecided

Try disabling Intensity, and if it gets too hot, reduce the shaders-mul.

I guess I'm going to have to release beta4 ASAP to address this issue.
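Reading between the lines, the relationship seems to be intensity ≈ shaders / worksize: a 2048-shader Tahiti at worksize 64 wants intensity 32, while at 256 it only needs 8. That rule is my inference from the numbers in this thread, not anything documented:

```shell
# Inferred rule of thumb (not from any cgminer doc): for a 2048-shader
# Tahiti, intensity ~= shaders / worksize.
shaders=2048
for ws in 64 128 256; do
  echo "worksize $ws -> intensity $((shaders / ws))"
done
```

If that inference holds, it might also explain why the 280x stalls around 80 kh/s at worksize 64 while intensity is capped at 16.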
rigging
Jr. Member
*
Offline Offline

Activity: 59
Merit: 10


View Profile
January 05, 2015, 02:34:59 AM
 #898

The wallet and GPU miners seem stable now.
I think the next challenge for KHC is to create applications (iOS & Android) that users want to use.

antonio8
Legendary
*
Offline Offline

Activity: 1386
Merit: 1000


View Profile
January 05, 2015, 02:49:45 AM
 #899

The wallet and GPU miners seem stable now.
I think the next challenge for KHC is to create applications (iOS & Android) that users want to use.

I am having an issue with Beta 3 now.

For some reason my miner just stops. I had it happen about 3 times now on my 280x. I have to exit out and restart it.

I'll try and see if my log shows something the next time it happens if I can remember to look at it before restarting it.

wr104 (OP)
Sr. Member
****
Offline Offline

Activity: 329
Merit: 250


View Profile WWW
January 05, 2015, 02:58:07 AM
 #900

cgminer-khc 3.7.6 Beta4 available.

https://github.com/kryptohash/cgminer-khc/releases/tag/v3.7.6-Beta4

Changes since Beta3:

- Updated the ADL code using sgminer's.
- Removed the setting of worksize to 256 by default.
- Increased the Intensity range from 16 to 32.
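For the Tahiti owners above, that means the worksize-64 combination should now be usable, along these lines (a sketch only: pool URL and worker credentials are placeholders, and -I 32 assumes your card tolerates the new maximum):

```shell
./cgminer -o stratum+tcp://pool.example.com:3333 -u worker -p x \
          --worksize 64 -I 32
```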
