Author Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX]  (Read 3405804 times)
djm34
Legendary
*
Offline Offline

Activity: 1288
Merit: 1040


View Profile WWW
February 02, 2014, 08:33:47 PM
 #3541

I was playing a bit with the autotune code and I think I found what causes all the strange results we can get in scrypt.
It seems to be due to the time over which the average khash/s is calculated (50 ms).
At first I increased it to 500 ms and saw that the power target was gradually climbing from 75% to 120% (the limit I chose when overclocking) across each line of the config table, then dropping back to 75% and climbing to 120% again on the next line.
This made it impossible to compare the numbers with each other.

So then I decreased that time to 10 ms, and now everything stays at more or less the same power level (there are still a few spikes). I am now able to get meaningful numbers (or at least numbers that can be compared with each other), and the autotune seems to be more reliable (and about 5x faster).

djm34 facebook page
BTC: 1NENYmxwZGHsKFmyjTc5WferTn5VTFb7Ze
Pledge for neoscrypt ccminer to that address: 16UoC4DmTz2pvhFvcfTQrzkPTrXkWijzXw
cbuchner1
Hero Member
*****
Offline Offline

Activity: 756
Merit: 500


View Profile
February 02, 2014, 08:36:14 PM
 #3542

I was playing a bit with the autotune code and I think I found what causes all the strange results we can get in scrypt.
It seems to be due to the time over which the average khash/s is calculated (50 ms).
At first I increased it to 500 ms and saw that the power target was gradually climbing from 75% to 120% (the limit I chose when overclocking) across each line of the config table, then dropping back to 75% and climbing to 120% again on the next line.
This made it impossible to compare the numbers with each other.

So then I decreased that time to 10 ms, and now everything stays at more or less the same power level (there are still a few spikes). I am now able to get meaningful numbers (or at least numbers that can be compared with each other), and the autotune seems to be more reliable (and about 5x faster).

Was this on Windows or on Linux?
djm34
Legendary
*
Offline Offline

Activity: 1288
Merit: 1040


View Profile WWW
February 02, 2014, 08:41:17 PM
 #3543

It was on Windows.

djm34 facebook page
BTC: 1NENYmxwZGHsKFmyjTc5WferTn5VTFb7Ze
Pledge for neoscrypt ccminer to that address: 16UoC4DmTz2pvhFvcfTQrzkPTrXkWijzXw
Mole.
Newbie
*
Offline Offline

Activity: 27
Merit: 0


View Profile
February 02, 2014, 08:43:25 PM
 #3544

Got it up to 450 now :)

On the December release I get 630 on my EVGA Superclocked 780 Tis.
Madmick
Newbie
*
Offline Offline

Activity: 1
Merit: 0


View Profile
February 02, 2014, 08:46:08 PM
 #3545

Hi, sorry to butt in, but I'm new to mining. I've got a GTX 650 that I'm trying to mine with; I think it's working in GUIMiner, but I wanted to try cudaMiner. My problem is that in the command prompt I type cudaminer.exe -0 http://127.0.0.1:8332 -u madmick.1 -p x, but I get a message saying something about the worker name and code -1. Can you help?
cbuchner1
Hero Member
*****
Offline Offline

Activity: 756
Merit: 500


View Profile
February 02, 2014, 08:50:20 PM
 #3546

Hi, sorry to butt in, but I'm new to mining. I've got a GTX 650 that I'm trying to mine with; I think it's working in GUIMiner, but I wanted to try cudaMiner. My problem is that in the command prompt I type cudaminer.exe -0 http://127.0.0.1:8332 -u madmick.1 -p x, but I get a message saying something about the worker name and code -1. Can you help?


The IP address 127.0.0.1 is usable only for solo mining, unless you run your own pool and web server on your home PC...

The first option should be passed with a lower-case o:

-o http://127.0.0.1:8332

For solo mining with these settings, you would have to put rpcport=8332, rpcuser=madmick.1, rpcpassword=x and server=1 in the wallet's .conf file. That password is far too weak and unsafe, though.
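
A minimal sketch of what that setup could look like, assuming a Bitcoin-style wallet .conf file; the user and password are just the placeholders from the post above and should be replaced with something long and random:

Code:
# wallet .conf for solo mining over local RPC (placeholder values)
server=1
rpcport=8332
rpcuser=madmick.1
rpcpassword=x

The miner is then pointed at the local wallet with a lower-case -o:

Code:
cudaminer.exe -o http://127.0.0.1:8332 -u madmick.1 -p x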
cwizard
Member
**
Offline Offline

Activity: 98
Merit: 10


View Profile
February 02, 2014, 08:51:07 PM
 #3547

With build 114, using the 12/18 DLLs, I get lower performance, to the tune of a couple hundred khash/s.

Waiting for the release ;)
liomojo1
Hero Member
*****
Offline Offline

Activity: 668
Merit: 500


View Profile
February 02, 2014, 10:18:52 PM
 #3548


Edit: I'm up there in the VIP section with cbuchner1 now: 8)

Congratulations!

15 mio. here ;)

Are you solo mining or on a pool, to get this result?
cbuchner1
Hero Member
*****
Offline Offline

Activity: 756
Merit: 500


View Profile
February 02, 2014, 11:36:28 PM
 #3549

15 mio. here ;)

Are you solo mining or on a pool, to get this result?

Pool. No luck soloing this coin at all.
cbuchner1
Hero Member
*****
Offline Offline

Activity: 756
Merit: 500


View Profile
February 02, 2014, 11:37:19 PM
Last edit: February 02, 2014, 11:49:49 PM by cbuchner1
 #3550

I posted a 2014-02-02 release. I did not have a lot of time for testing, so in case anything is seriously broken I might post an update (hotfix).

For those using the github version so far, please note the change in the kernel letters:

upper case T, K, F -> scrypt and low N-factor scrypt-jane coins
lower case t, k, f -> high N-factor scrypt-jane coins

You can still use the previous kernel names X, Y, Z if you so please.

Autotune will use the lower-case kernels for scrypt-jane with a high N-factor automatically. However, the threshold may not be chosen optimally, so please experiment and override the autotune to find out which one is actually better.

Note that the upper-case letters T and K now select the high register count kernels submitted by nVidia. This matters if you have used T or K kernel configs in your Yacoin mining scripts so far -> switch to t and k.

Mining through N-factor changes should not lead to crashes or validation errors now, but the speed might not be optimal after the change. It is best to re-tune the kernel afterwards.

Christian
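
For illustration only, the new letters could be used on the command line like this; the launch dimensions are made-up placeholders rather than tuned values, and the pool URL/worker are dummies:

Code:
REM scrypt or low N-factor scrypt-jane coin: upper-case kernel letter
cudaminer.exe -l T27x3 -o http://pool:port -O worker:pass

REM high N-factor scrypt-jane coin (e.g. Yacoin): lower-case kernel letter
cudaminer.exe -l t14x8 --algo=scrypt-jane:YAC -o http://pool:port -O worker:pass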
cbuchner1
Hero Member
*****
Offline Offline

Activity: 756
Merit: 500


View Profile
February 03, 2014, 12:10:58 AM
 #3551

It was on Windows.

The problem is that my gettimeofday() does not have the best accuracy on Windows. This is why I chose to measure for 50 ms minimum.

Autotune is affected by nVidia's boost feature, unfortunately. I wish an application could turn it off momentarily.
ManIkWeet
Full Member
***
Offline Offline

Activity: 182
Merit: 100


View Profile
February 03, 2014, 12:20:59 AM
 #3552

I noticed that the autotune does not take -L into consideration anymore, am I correct?

BTC donations: 18fw6ZjYkN7xNxfVWbsRmBvD6jBAChRQVn (thanks!)
cbuchner1
Hero Member
*****
Offline Offline

Activity: 756
Merit: 500


View Profile
February 03, 2014, 12:22:43 AM
 #3553

I noticed that the autotune does not take -L into consideration anymore, am I correct?

I did not touch that part of the code today. Only the kernel selection was modified, and the Fermi kernel got lookup-gap support.
ManIkWeet
Full Member
***
Offline Offline

Activity: 182
Merit: 100


View Profile
February 03, 2014, 12:26:09 AM
Last edit: February 03, 2014, 12:37:08 AM by ManIkWeet
 #3554

I did not touch that part of the code today. Only the kernel selection was modified, and the Fermi kernel got lookup-gap support.
It seems to happen if I do not specify any -l parameter. If I specify "-l t", it does take my "-L 4", but it prints "[2014-02-03 01:22:21] GPU #0: Given launch config 't' does not validate."

Edit:
Happens with: -L 4 -m 1 -i 1 --algo=scrypt-jane:YAC -o http://url:port -O acc:pass
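
If the complaint really is about the bare letter, a fully specified launch config (kernel letter plus blocks x warps, in the style of the Z15x16 and k128x1 configs mentioned elsewhere in this thread) might validate; the dimensions below are placeholders, not tuned values:

Code:
cudaminer.exe -l t14x8 -L 4 -m 1 -i 1 --algo=scrypt-jane:YAC -o http://url:port -O acc:pass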

BTC donations: 18fw6ZjYkN7xNxfVWbsRmBvD6jBAChRQVn (thanks!)
Ultimist
Full Member
***
Offline Offline

Activity: 602
Merit: 102



View Profile
February 03, 2014, 01:30:37 AM
 #3555

Interesting. I'll have to try out this latest "official" version...

Cbuchner1, your work on this is astounding. You are a real developer! Great job!

Morgahl
Member
**
Offline Offline

Activity: 70
Merit: 10


View Profile
February 03, 2014, 01:32:45 AM
 #3556

GTX Titan

T kernel: slightly more overclock-friendly, but at the same clocks, no changes.

t kernel: worse now. With the same config as before, I've lost around 0.6 khash/s @ N-factor=14. There are still quite a few issues with autotune on Windows, so this may just need a different round of tuning, but it's missing a bit :(

k kernel: interestingly enough, I can allocate k128x1 with -L 1 and it fully allocates nearly 5 GB, so I'm not sure what it is doing to let me past the 3 GB limit, but it can be done. That said, performance is VERY bad compared to even the reduced t kernel.
MexiMelt
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile
February 03, 2014, 01:45:19 AM
 #3557

I posted a 2014-02-02 release. I did not have a lot of time for testing, so in case anything is seriously broken I might post an update (hotfix).

For those using the github version so far, please note the change in the kernel letters:

upper case T, K, F -> scrypt and low N-factor scrypt-jane coins
lower case t, k, f -> high N-factor scrypt-jane coins

You can still use the previous kernel names X, Y, Z if you so please.

Autotune will use the lower-case kernels for scrypt-jane with a high N-factor automatically. However, the threshold may not be chosen optimally, so please experiment and override the autotune to find out which one is actually better.

Note that the upper-case letters T and K now select the high register count kernels submitted by nVidia. This matters if you have used T or K kernel configs in your Yacoin mining scripts so far -> switch to t and k.

Mining through N-factor changes should not lead to crashes or validation errors now, but the speed might not be optimal after the change. It is best to re-tune the kernel afterwards.

Christian


Could you elaborate on this? When mining Dogecoin, it appears that autotune wants to select the lower-case kernels as if it were a scrypt-jane coin. Is it expected that I have to put -l K to override it for scrypt now?
cwizard
Member
**
Offline Offline

Activity: 98
Merit: 10


View Profile
February 03, 2014, 01:53:41 AM
 #3558

I lost performance, but it may be due to new switches that I don't understand yet. I am mining normal scrypt coins and usually see around 500 khash/s without messing with timings or anything, and with this build I'm having a hard time breaking 240 :(
MexiMelt
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile
February 03, 2014, 02:12:26 AM
 #3559

I lost performance, but it may be due to new switches that I don't understand yet. I am mining normal scrypt coins and usually see around 500 khash/s without messing with timings or anything, and with this build I'm having a hard time breaking 240 :(

Try specifying -l K (or whatever kernel letter fits your card).

I saw a 20 khash/s increase after doing so.

Autotune seems to want to pick the lower-case kernels even if your coin is scrypt, not scrypt-jane, and I was getting terrible performance.
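
As a concrete illustration (pool URL and worker are dummies), forcing an upper-case kernel while still letting autotune pick the dimensions would just pass the bare letter:

Code:
cudaminer.exe -l K -o http://pool:port -O worker:pass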
djm34
Legendary
*
Offline Offline

Activity: 1288
Merit: 1040


View Profile WWW
February 03, 2014, 02:17:26 AM
 #3560

It was on Windows.

The problem is that my gettimeofday() does not have the best accuracy on Windows. This is why I chose to measure for 50 ms minimum.

Autotune is affected by nVidia's boost feature, unfortunately. I wish an application could turn it off momentarily.
What I saw using 10 ms was that the card didn't have time to boost, and the power stayed at 75% during the whole autotune.
I was also able to recover some configs that were measured completely wrong with 50 ms.
For example, the config I use in scrypt, Z15x16, gives 135 khash/s with the time set to 50 ms, but with the time set to 10 ms it is found around its true value of 700 khash/s.
But yes, this is not necessarily very precise; the values are overestimated. However, they remain consistent with each other, which makes it easier for the autotune to choose the best config.

djm34 facebook page
BTC: 1NENYmxwZGHsKFmyjTc5WferTn5VTFb7Ze
Pledge for neoscrypt ccminer to that address: 16UoC4DmTz2pvhFvcfTQrzkPTrXkWijzXw