Author Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX]  (Read 3426866 times)
ManIkWeet
Full Member
***
Offline Offline

Activity: 182
Merit: 100


View Profile
February 03, 2014, 12:26:09 AM
Last edit: February 03, 2014, 12:37:08 AM by ManIkWeet
 #3541

Quote
I did not touch that part of the code today. Only kernel selection was modified. And the Fermi kernel got lookup gap support.

It seems to happen if I do not specify any -l parameter. If I specify "-l t", it does take my "-L 4", but it prints "[2014-02-03 01:22:21] GPU #0: Given launch config 't' does not validate."

Edit:
Happens with: -L 4 -m 1 -i 1 --algo=scrypt-jane:YAC -o http://url:port -O acc:pass
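Something like this might work as a workaround, though it is only a guess: fully specify the launch config instead of the bare kernel letter, so autotune is skipped entirely. The t4x16 numbers are just an illustration (tune for your own card), and the pool/account parts are the same placeholders as above:

cudaminer.exe --algo=scrypt-jane:YAC -l t4x16 -L 4 -m 1 -i 1 -o http://url:port -O acc:pass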

BTC donations: 18fw6ZjYkN7xNxfVWbsRmBvD6jBAChRQVn (thanks!)
Ultimist
Full Member
***
Offline Offline

Activity: 812
Merit: 102



View Profile
February 03, 2014, 01:30:37 AM
 #3542

Interesting. I'll have to try out this latest "official" version...

Cbuchner1, your work on this is astounding. You are a real developer! Great job!

Morgahl
Member
**
Offline Offline

Activity: 70
Merit: 10


View Profile
February 03, 2014, 01:32:45 AM
 #3543

GTX Titan

The T kernel is slightly more overclock-friendly, but at the same clocks there is no change.

The t kernel is worse now: with the same config as before, I've lost around 0.6 kH/s at N-factor 14. There are still quite a few issues with autotune on Windows, so this may just need different tuning, but it's missing a bit. :(

The k kernel, interestingly enough, lets me allocate k128x1 with -L 1, and it fully allocates nearly 5 GB, so I'm not sure what it is doing to get me past the 3 GB limit, but it can be done. That said, performance is VERY bad compared to even the reduced t kernel.
MexiMelt
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile
February 03, 2014, 01:45:19 AM
 #3544

Quote
I posted a 2014-02-02 release. I did not have a lot of time for testing. In case anything is
seriously broken, I might post an update (hotfix).

For those using the github version so far, please note the change in the kernel letters.

upper case T,K,F -> scrypt and low N-factor scrypt-jane coins
lower case t,k,f  -> high N-factor scrypt-jane coins

You can still use the previous kernel names X,Y,Z if you please.

Autotune will use the lower-case kernels for scrypt-jane with a high N-factor automatically.
However, the threshold may not be chosen optimally, so please experiment and override
the autotune to find which one is actually better.

Note that the upper-case letters T and K now select the high register count kernels
submitted by nVidia. This matters if you have used T or K kernel configs in your Yacoin mining
scripts so far -> switch to t and k.

Mining through N-factor changes should not lead to crashes or validation errors now, but
the speed might not be optimal after the change. It is best to re-tune the kernel afterwards.

Christian


Could you elaborate on this? When mining Dogecoin, it appears autotune wants to select lower-case kernels as if it were a scrypt-jane coin. Is it expected that we now have to put -l K to override it for scrypt?
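For illustration only, here is a hedged sketch of how the new letter scheme maps to command lines. The pool URLs and worker credentials are placeholders, and the bare letters leave the block x warp numbers to autotune:

scrypt coin (e.g. LTC/DOGE), upper-case kernel letter:
cudaminer.exe --algo=scrypt -l K -o stratum+tcp://pool:port -O worker:pass

high N-factor scrypt-jane coin (e.g. YAC), lower-case kernel letter:
cudaminer.exe --algo=scrypt-jane:YAC -l k -o stratum+tcp://pool:port -O worker:pass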
cwizard
Member
**
Offline Offline

Activity: 98
Merit: 10


View Profile
February 03, 2014, 01:53:41 AM
 #3545

I lost performance, but it may be due to new switches that I don't understand yet. I am mining normal scrypt coins and usually see around 500 kH/s without messing with timings or anything, and with this build I'm having a hard time breaking 240. :(
MexiMelt
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile
February 03, 2014, 02:12:26 AM
 #3546

I lost performance, but it may be due to new switches that I don't understand yet. I am mining normal scrypt coins and usually see around 500 kH/s without messing with timings or anything, and with this build I'm having a hard time breaking 240. :(

Try specifying -l K (or whatever kernel letter suits your card).

I saw a 20 kH/s increase after doing so.

Autotune seems to want to pick the small kernels even if your coin is scrypt, not scrypt-jane, and I was getting terrible performance.
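A hedged example of that approach, with placeholder pool details: start with just the bare letter so autotune only has to pick the block x warp numbers, then pin whatever config it reports (the K14x16 below is purely hypothetical) in your .bat for later runs.

cudaminer.exe -l K -o stratum+tcp://pool:port -O worker:pass
cudaminer.exe -l K14x16 -o stratum+tcp://pool:port -O worker:pass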
djm34
Legendary
*
Offline Offline

Activity: 1400
Merit: 1050


View Profile WWW
February 03, 2014, 02:17:26 AM
 #3547

It was on Windows.

The problem is that my gettimeofday() does not have the best accuracy on Windows. This is why I chose to measure for 50ms minimum.

Autotune is affected by nVidia's boost feature unfortunately. I wish an application could turn it off momentarily.
What I saw using 10ms, was that the card didn't have the time to boost and the power was at 75% during all the autotuning.
Also I was able to get back some config which was totally wrong with the 50ms.
For example the config I use in script Z15x16 gives 135khash/s with the time set at 50ms,
but with the time set to 10ms is found around its true value 700khash.
But yes this is not necessarily very precise the value are overestimated, however they remain consistant with each other which is easier for the autotuning to chose the best config.

djm34 facebook page
BTC: 1NENYmxwZGHsKFmyjTc5WferTn5VTFb7Ze
Pledge for neoscrypt ccminer to that address: 16UoC4DmTz2pvhFvcfTQrzkPTrXkWijzXw
lordaccess
Member
**
Offline Offline

Activity: 69
Merit: 10


View Profile
February 03, 2014, 02:23:35 AM
 #3548

Had to make the .bat from scratch, but I did see 540 go to 605 kH/s on my one GTX 780 and 475 go to 505 on the second. Again, no idea why I get different kH/s even though the cards are the same... The second starts at 605 and drops to 505, as do the GPU core clock and the voltage... (that was an issue in all cudaMiners, so no worries). Can anyone help? Btw, no overclock.

YinCoin YangCoin ☯☯First Ever POS/POW Alternator! Multipool! ☯ ☯ http://yinyangpool.com/ 
https://bitcointalk.org/index.php?topic=623937
Bwincoin - 100% Free POS. BRz1SNnSs6bGkJkG4kvw5ADSin5dBat3Cx
Mkilbride
Newbie
*
Offline Offline

Activity: 33
Merit: 0


View Profile
February 03, 2014, 02:47:56 AM
 #3549

How does one set up this new version? I keep getting "does not validate" errors.
Stanr010
Member
**
Offline Offline

Activity: 70
Merit: 10


View Profile
February 03, 2014, 02:54:12 AM
 #3550

What's a good difficulty to start with for a GTX 780 on LTC/DOGE? I tried -l auto, but then it gave a string of cudaError 30 messages.

On the previous version, I was using:

cudaminer.exe -i 0 -C 2 -l T12x16 -o stratum+tcp://useast.middlecoin.com:3333 -O 1DVgwcCLEhFb2HRGA2PZD3rnNU6w7xNJc7:x    

While this still works on the new version, it's not using any of the new coding goodness.


Stanr010
Member
**
Offline Offline

Activity: 70
Merit: 10


View Profile
February 03, 2014, 03:16:48 AM
 #3551

Had to make the .bat from scratch, but I did see 540 go to 605 kH/s on my one GTX 780 and 475 go to 505 on the second. Again, no idea why I get different kH/s even though the cards are the same... The second starts at 605 and drops to 505, as do the GPU core clock and the voltage... (that was an issue in all cudaMiners, so no worries). Can anyone help? Btw, no overclock.

What is the launch .bat command you're using for your 780s?
Mkilbride
Newbie
*
Offline Offline

Activity: 33
Merit: 0


View Profile
February 03, 2014, 03:40:37 AM
 #3552

This new one just doesn't work for me. =/
SystmHash
Newbie
*
Offline Offline

Activity: 37
Merit: 0


View Profile
February 03, 2014, 03:44:55 AM
 #3553

What's a good difficulty to start with for a GTX 780 on LTC/DOGE? I tried -l auto, but then it gave a string of cudaError 30 messages.

On the previous version, I was using:

cudaminer.exe -i 0 -C 2 -l T12x16 -o stratum+tcp://useast.middlecoin.com:3333 -O 1DVgwcCLEhFb2HRGA2PZD3rnNU6w7xNJc7:x    

While this still works on the new version, it's not using any of the new coding goodness.


http://s29.postimg.org/mcqa6zq7b/Untitled.png


I got the best results with the x86 version and:
cudaminer.exe -d 0 -H 2 -C 0 -m 1 -l Z12x24 -i 0
Stanr010
Member
**
Offline Offline

Activity: 70
Merit: 10


View Profile
February 03, 2014, 03:47:48 AM
 #3554

What's a good difficulty to start with for a GTX 780 on LTC/DOGE? I tried -l auto, but then it gave a string of cudaError 30 messages.

On the previous version, I was using:

cudaminer.exe -i 0 -C 2 -l T12x16 -o stratum+tcp://useast.middlecoin.com:3333 -O 1DVgwcCLEhFb2HRGA2PZD3rnNU6w7xNJc7:x    

While this still works on the new version, it's not using any of the new coding goodness.





I got the best results with the x86 version and:
cudaminer.exe -d 0 -H 2 -C 0 -m 1 -l Z12x24 -i 0

Is there a difference between "Z" and "T"? I thought Z was just an alias for T.
lordaccess
Member
**
Offline Offline

Activity: 69
Merit: 10


View Profile
February 03, 2014, 03:49:06 AM
 #3555

Had to make the .bat from scratch, but I did see 540 go to 605 kH/s on my one GTX 780 and 475 go to 505 on the second. Again, no idea why I get different kH/s even though the cards are the same... The second starts at 605 and drops to 505, as do the GPU core clock and the voltage... (that was an issue in all cudaMiners, so no worries). Can anyone help? Btw, no overclock.

What is the launch .bat command you're using for your 780s?

-i 0 -C 2 -H 1 -l T12x20

let me know if it works

YinCoin YangCoin ☯☯First Ever POS/POW Alternator! Multipool! ☯ ☯ http://yinyangpool.com/ 
https://bitcointalk.org/index.php?topic=623937
Bwincoin - 100% Free POS. BRz1SNnSs6bGkJkG4kvw5ADSin5dBat3Cx
SystmHash
Newbie
*
Offline Offline

Activity: 37
Merit: 0


View Profile
February 03, 2014, 03:50:55 AM
 #3556

Quote
Is there a difference between "Z" and "T"? I thought Z was just an alias for T.

Shouldn't be any different; I just used the same batch file that I used with the betas.
Mkilbride
Newbie
*
Offline Offline

Activity: 33
Merit: 0


View Profile
February 03, 2014, 03:51:53 AM
 #3557

Why does my config that worked with the last version not work with this version? I read the readme, but nothing stood out.

I just get "failed to validate" errors - a lot.
MexiMelt
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile
February 03, 2014, 03:55:45 AM
 #3558

Why does my config that worked with the last version not work with this version? I read the readme, but nothing stood out.

I just get "failed to validate" errors - a lot.

It's weird - the autotune keeps assuming you're using scrypt-jane. Try starting your miner with -l K (or whatever letter your config uses) and nothing else. Let autotune try that and you'll probably see good results.
Mkilbride
Newbie
*
Offline Offline

Activity: 33
Merit: 0


View Profile
February 03, 2014, 04:04:46 AM
 #3559

Got autotune going.

Still a lot of "failed to validate on CPU" errors.

=/


The 18th build can get me 570 kH/s stable. This latest one, when it does validate, hits about 550.

GTX 670 SLI, and it says "Launch configuration K does not validate".
Stanr010
Member
**
Offline Offline

Activity: 70
Merit: 10


View Profile
February 03, 2014, 04:05:58 AM
 #3560

Got autotune going.

Still a lot of "failed to validate on CPU" errors.

=/


The 18th build can get me 570 kH/s stable. This latest one, when it does validate, hits about 550.

GTX 670 SLI, and it says "Launch configuration K does not validate".

Me too - I'm getting a lot of "failed to validate on CPU" errors.

Had to make the .bat from scratch, but I did see 540 go to 605 kH/s on my one GTX 780 and 475 go to 505 on the second. Again, no idea why I get different kH/s even though the cards are the same... The second starts at 605 and drops to 505, as do the GPU core clock and the voltage... (that was an issue in all cudaMiners, so no worries). Can anyone help? Btw, no overclock.

What is the launch .bat command you're using for your 780s?

-i 0 -C 2 -H 1 -l T12x20

let me know if it works

This one gives me a nice bump from 600 to 650 kH/s per card. What's interesting is that it's using my 3960X at 100% now, but my overall power draw actually decreased by 100 W.

I am getting a "GeForce GTX 780 result does not validate on CPU <i=3945, s=0>!" error, though.