Author Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX]  (Read 3426932 times)
norpick (Newbie, Activity: 32, Merit: 0)
November 06, 2013, 05:21:19 PM  #1161

Quote:
I am getting this error using cudaMiner:
http://shrani.si/f/1J/zA/18PFvNsm/11.jpg
http://shrani.si/f/1N/jk/3m4BYXU4/2.jpg

I tried different pools and different instructions, but it seems it just won't connect to the server. I am sure it's not an issue with my network/port-forwarding/firewall, since all my other miners work without a problem... Does anyone have any idea what might be causing this?

I have this same problem. Is there a cure, or have I missed it somewhere?
cbuchner1 (OP) (Hero Member, Activity: 756, Merit: 502)
November 06, 2013, 08:01:45 PM  #1162

This image suggests using http:// and not stratum+tcp://:

http://imgur.com/r/all/cesAJhA

Maybe stratum requires a different port number?
Lacan82 (Sr. Member, Activity: 247, Merit: 250)
November 06, 2013, 08:38:20 PM  #1163

Quote from: cbuchner1
This image suggests using http:// and not stratum+tcp://:

http://imgur.com/r/all/cesAJhA

Maybe stratum requires a different port number?

They have it mistyped. If you look at the configuration on their page, it says it should be http://p2pool.org:9327.
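
For example, the launch line for that pool would then look something like this (the worker name and password below are placeholders, not values from their page):

cudaminer -o http://p2pool.org:9327 -O <workername>:<password>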

cbuchner1 (OP) (Hero Member, Activity: 756, Merit: 502)
November 06, 2013, 10:43:03 PM  #1164

Quote:
Autotune keeps picking the Titan kernel and gave me T575x1 and T576x1 so far, both of which only give 260 kh/s.

Okay, I made some improvements to the Titan kernel, which brought my hash rate up from 55 kHash/s (achieved with the Kepler kernel) to 62 kHash/s on the GT 640 (GK208 chip, Compute 3.5). Maybe you want to try this binary on the GTX 780? Let me know how I should send it to you.

Christian
spartan117x7 (Newbie, Activity: 13, Merit: 0)
November 06, 2013, 11:42:06 PM  #1165

Hey, I got cudaminer running perfectly on my desktop, except this error keeps showing up. The only parameter I have is --no-autotune, along with my URL, worker & pass. What is going on? I've seen others have this problem, but with no clear answers. I've fiddled with the -D values, but to no avail.
[2013-11-05 17:33:23] GPU #0: GeForce GTX 295 result does not validate on CPU!
[2013-11-05 17:33:27] GPU #1: GeForce GTX 295 result does not validate on CPU!
Lacan82 (Sr. Member, Activity: 247, Merit: 250)
November 06, 2013, 11:50:11 PM  #1166

Quote from: spartan117x7
Hey, I got cudaminer running perfectly on my desktop, except this error keeps showing up. The only parameter I have is --no-autotune, along with my URL, worker & pass. What is going on? I've seen others have this problem, but with no clear answers. I've fiddled with the -D values, but to no avail.
[2013-11-05 17:33:23] GPU #0: GeForce GTX 295 result does not validate on CPU!
[2013-11-05 17:33:27] GPU #1: GeForce GTX 295 result does not validate on CPU!

You must let it autotune, then use -l once it finds a valid configuration. There are plenty of answers to this in the thread; it means you've chosen an invalid launch configuration.
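
Roughly, the two-step workflow looks like this (the pool URL, worker, and password are placeholders, and K15x16 is only an illustration; use whatever configuration autotune actually reports on your cards):

rem First run: drop --no-autotune and let autotune pick a launch configuration
cudaminer -o stratum+tcp://<pool>:<port> -O <worker>:<pass>

rem Later runs: reuse the configuration autotune reported, e.g. K15x16
cudaminer -l K15x16 -o stratum+tcp://<pool>:<port> -O <worker>:<pass>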

spartan117x7 (Newbie, Activity: 13, Merit: 0)
November 07, 2013, 12:52:45 AM  #1167

Nope, doesn't work. I autotuned and now have the values -l L30x3,L30x3,L4x2. It still has the validation issue with the CPU.
Lacan82 (Sr. Member, Activity: 247, Merit: 250)
November 07, 2013, 02:28:50 AM  #1168

Quote from: spartan117x7
Nope, doesn't work. I autotuned and now have the values -l L30x3,L30x3,L4x2. It still has the validation issue with the CPU.

Make sure your drivers are up to date.

spartan117x7 (Newbie, Activity: 13, Merit: 0)
November 07, 2013, 04:36:02 AM  #1169

Nope, the drivers are all up to date on my 9400 GT and GTX 295. I've gotten it to work on my laptop (with a GT 520M) and on my brother's MacBook (GT 650M). I'm guessing it's something with the configuration value. Any more suggestions?
Lacan82 (Sr. Member, Activity: 247, Merit: 250)
November 07, 2013, 01:07:13 PM  #1170

Quote from: spartan117x7
Nope, the drivers are all up to date on my 9400 GT and GTX 295. I've gotten it to work on my laptop (with a GT 520M) and on my brother's MacBook (GT 650M). I'm guessing it's something with the configuration value. Any more suggestions?

There are new switches like -K and -F; Christian posted them a couple of pages back. I would suggest you try them. You could also disable the 9400 GT and try mining with just the GTX 295, then do the same with the other card, to determine whether one of them is the problem.
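
For instance, something along these lines runs each card on its own (pool, worker, and password are placeholders, and the device indices are an assumption - check which index each card actually gets):

rem Test one card alone
cudaminer -d 0 -o stratum+tcp://<pool>:<port> -O <worker>:<pass>

rem Then test the other card alone
cudaminer -d 1 -o stratum+tcp://<pool>:<port> -O <worker>:<pass>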

Odrec (Member, Activity: 112, Merit: 10)
November 07, 2013, 10:54:20 PM  #1171

Do you think I'll get any profit mining Litecoin with a GTX 260?

OneOfMany07 (Newbie, Activity: 3, Merit: 0)
November 08, 2013, 02:26:46 AM (last edit: November 08, 2013, 02:47:58 AM)  #1172

I'm trying to run cudaminer for the first time on my GTX 480 SLI cards. This is Windows 8 (not 8.1, to my knowledge) with the latest beta driver (331.65) from GeForce Experience. When I autotune, it crashes after picking a kernel, and I'm not sure how to get useful info. I do have some of the CUDA SDK installed on my computer, but I haven't touched it recently. It was years ago that I last programmed something in CUDA, and I installed the SDK more recently on a whim. The last few lines of a failing run with debug turned on are below...

>cudaminer -D -H 1 -i 1,0 -o stratum+tcp://<url> -O <user>:<pass>

...
[2013-11-06 20:38:56] 353:     |     |     |     |     |     |     |     |     |     |     |     |     |     |     |      kH/s
[2013-11-06 20:38:56] 354:     |     |     |     |     |     |     |     |     |     |     |     |     |     |     |      kH/s
[2013-11-06 20:38:56] GPU #0:   71.63 khash/s with configuration F7x8
[2013-11-06 20:38:56] GPU #0: using launch configuration F7x8


I tried the idea of limiting which kernels to use. When I picked Fermi (which my card is), it crashed again. When I picked Kepler I got further, but this still seems wrong, not least because this is not a Kepler-based card, and because of the errors from card 1.

>cudaminer -H 1 -i 1,0 -l K -o stratum+tcp://<url> -O <user>:<pass>
           *** CudaMiner for nVidia GPUs by Christian Buchner ***
                     This is version 2013-11-01 (alpha)
        based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
               Cuda additions Copyright 2013 Christian Buchner
           My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

[2013-11-07 18:18:25] 2 miner threads started, using 'scrypt' algorithm.
[2013-11-07 18:18:25] Starting Stratum on stratum+tcp://<url>
[2013-11-07 18:18:27] GPU #1: GeForce GTX 480 with compute capability 2.0
[2013-11-07 18:18:27] GPU #1: interactive: 0, tex-cache: 0 , single-alloc: 0
[2013-11-07 18:18:27] GPU #0: GeForce GTX 480 with compute capability 2.0
[2013-11-07 18:18:27] GPU #0: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-11-07 18:18:27] GPU #1: Performing auto-tuning (Patience...)
[2013-11-07 18:18:27] GPU #0: Given launch config 'K' does not validate.
[2013-11-07 18:18:27] GPU #0: Performing auto-tuning (Patience...)
[2013-11-07 18:19:07] GPU #0:  201.97 khash/s with configuration K15x16
[2013-11-07 18:19:07] GPU #0: using launch configuration K15x16
[2013-11-07 18:19:07] GPU #0: GeForce GTX 480, 7680 hashes, 0.19 khash/s
[2013-11-07 18:19:07] GPU #0: GeForce GTX 480, 15360 hashes, 128.99 khash/s
[2013-11-07 18:19:36] GPU #1:  213.19 khash/s with configuration F30x8
[2013-11-07 18:19:36] GPU #1: using launch configuration F30x8
[2013-11-07 18:19:36] GPU #1: GeForce GTX 480, 7680 hashes, 0.11 khash/s
[2013-11-07 18:19:36] GPU #1: GeForce GTX 480, 7680 hashes, 105.13 khash/s
[2013-11-07 18:19:36] GPU #1: GeForce GTX 480 result does not validate on CPU!
[2013-11-07 18:19:40] GPU #0: GeForce GTX 480, 6174720 hashes, 190.56 khash/s
[2013-11-07 18:19:40] accepted: 1/1 (100.00%), 295.70 khash/s (yay!!!)
[2013-11-07 18:19:43] GPU #0: GeForce GTX 480, 698880 hashes, 184.91 khash/s
[2013-11-07 18:19:43] accepted: 2/2 (100.00%), 290.04 khash/s (yay!!!)
[2013-11-07 18:19:46] GPU #1: GeForce GTX 480, 2181120 hashes, 209.68 khash/s
[2013-11-07 18:19:47] accepted: 3/3 (100.00%), 394.59 khash/s (yay!!!)
Ctrl-C
[2013-11-07 18:19:49] workio thread dead, waiting for workers...
[2013-11-07 18:19:49] worker threads all shut down, exiting.


Thanks for any help in advance.

----------------------------------

I just realized SLI was turned off in the driver. I'll try with it on in a bit...
cbuchner1 (OP) (Hero Member, Activity: 756, Merit: 502)
November 08, 2013, 10:37:42 AM  #1173

Use -l K,K to tell both cards to run Kepler kernels,

or

-l K15x16,K15x16

to skip autotune altogether.

Also try enabling

-C 2,2

to get roughly a 5% boost (from the texture cache).
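
Putting those together, the full command line might look something like this (the stratum URL and credentials are the same placeholders as in your post, not real values):

cudaminer -H 1 -i 1,0 -l K15x16,K15x16 -C 2,2 -o stratum+tcp://<url> -O <user>:<pass>
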
cbuchner1 (OP) (Hero Member, Activity: 756, Merit: 502)
November 08, 2013, 02:03:09 PM  #1174

Quote from: Odrec
Do you think I'll get any profit mining Litecoin with a GTX 260?

I scrapped both of my GTX 260s (SLI config) because I was getting only 40 kHash/s from each one under Windows 7.
They might have worked better on Linux or Windows XP (due to the different driver model used there).

I also scrapped a GTX 460 (too old, 96 kHash/s).

Instead I got myself a GTX 560 Ti 448-core edition (used), a GTX 560 Ti (new), a GTX 660 Ti (new), and a GT 640 (the new model with the GK208 chip).

Now the machine can do 600 kHash/s @ 800 watts. And it's good for driving MANY monitors and for gaming as well ;) It isn't actually profitable, considering the local electricity costs and the increased mining difficulty, but I am doing this for fun.

Christian
Devasive (Full Member, Activity: 161, Merit: 100)
November 09, 2013, 12:55:23 AM  #1175

I have been attempting to get cudaminer to work with both of my GPUs in SLI (780M), but each time it starts up it will only detect either GPU 0 or GPU 1, even with -d 0,1 or a similar configuration. A single GPU pushes out a hashrate of around 150 khash/s, but it would be nice to double this by utilizing both of them. Has anyone else experienced this issue and found a solution?
cbuchner1 (OP) (Hero Member, Activity: 756, Merit: 502)
November 09, 2013, 02:10:02 PM  #1176

Quote from: Devasive
I have been attempting to get cudaminer to work with both of my GPUs in SLI (780M), but each time it starts up it will only detect either GPU 0 or GPU 1.

Have you ever tried running two separate instances of cudaminer, one with -d 0 and one with -d 1, maybe?

Do tools like CUDA-Z and GPU-Z show both chips separately?
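
A minimal sketch of the two-instance approach, assuming a Windows batch file and placeholder pool details:

rem run_both.bat - one cudaminer instance per GPU (file name and pool values are illustrative only)
start cudaminer -d 0 -o stratum+tcp://<pool>:<port> -O <worker>:<pass>
start cudaminer -d 1 -o stratum+tcp://<pool>:<port> -O <worker>:<pass>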

Christian
Devasive (Full Member, Activity: 161, Merit: 100)
November 09, 2013, 07:08:41 PM  #1177

Quote from: cbuchner1
Quote from: Devasive
I have been attempting to get cudaminer to work with both of my GPUs in SLI (780M), but each time it starts up it will only detect either GPU 0 or GPU 1.

Have you ever tried running two separate instances of cudaminer, one with -d 0 and one with -d 1, maybe?

Do tools like CUDA-Z and GPU-Z show both chips separately?

Christian

They are detected independently in every other program. I did run two separate batch files of cudaminer for the same pool, one per GPU, and that does work, but one of the instances accepts blocks much quicker than the other. CGMiner utilizes both GPUs in the same instance and works quickly as well, albeit at a hashrate of about 65 khash/s per card. I would like to get both cards running within the same instance of cudaminer, but I cannot find a configuration that allows it.
MaxBTC1 (Newbie, Activity: 56, Merit: 0)
November 09, 2013, 08:12:43 PM  #1178

When I start the program, it just shuts down immediately.

I navigated to it via cmd, and when running it, it just tells me about the program, the developer, and how to donate - it never actually starts.

Help!
dgross0818 (Full Member, Activity: 308, Merit: 146)
November 10, 2013, 12:05:21 AM  #1179

Quote from: Devasive
Quote from: cbuchner1
Quote from: Devasive
I have been attempting to get cudaminer to work with both of my GPUs in SLI (780M), but each time it starts up it will only detect either GPU 0 or GPU 1.

Have you ever tried running two separate instances of cudaminer, one with -d 0 and one with -d 1, maybe?

Do tools like CUDA-Z and GPU-Z show both chips separately?

Christian

They are detected independently in every other program. I did run two separate batch files of cudaminer for the same pool, one per GPU, and that does work, but one of the instances accepts blocks much quicker than the other. CGMiner utilizes both GPUs in the same instance and works quickly as well, albeit at a hashrate of about 65 khash/s per card. I would like to get both cards running within the same instance of cudaminer, but I cannot find a configuration that allows it.

I'm running a laptop with two 680Ms and experiencing a similar issue. Autodetect was pretty lame in what it chose (it tried, but I was only getting around 80 kh/s), so right now I'm running two instances, one per GPU: one at K14x16 and the other at K70x2. The K14x16 one works well at around 117 kh/s; the K70x2 config is a little slower at around 104 kh/s.

I tried running both with K14x16; however, I have a feeling I was running out of VRAM (these are the 2 GB 680Ms), because my drivers would crash and both cards would drop to around 5 kh/s.

Memory utilization at the current settings is around 87% on both cards, according to HWiNFO.

I'm happy with around 215 kh/s from a laptop with NVIDIA GPUs :]

That said, if anyone knows a config that may work better on the 680M, I'm all ears :)
Devasive (Full Member, Activity: 161, Merit: 100)
November 10, 2013, 12:44:20 AM  #1180

I have both instances set to K16x16, and they both maintain around 145 khash/s independently, but one always seems to overtake the other in accepted blocks (by a decent amount). What I am doing now is splitting my mining between FTC and LTC: one GPU mines Litecoin while the other mines Feathercoin. It's actually kind of nice, but I would still like to find a way to put them together within one instance to ramp up the output on just one coin.