norpick
Newbie
Offline
Activity: 32
Merit: 0
November 06, 2013, 05:21:19 PM
I have this same problem. Is there a cure, or have I missed it somewhere?
Lacan82
November 06, 2013, 08:38:20 PM
They have it mistyped. If you look at the configuration on their page, it says it should be http://p2pool.org:9327
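For example, the -o flag would then read (worker and password are placeholders, as elsewhere in this thread):
>cudaminer -o http://p2pool.org:9327 -O <worker>:<pass>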
cbuchner1 (OP)
November 06, 2013, 10:43:03 PM
Quote
Autotune keeps picking the Titan kernel and gave me T575x1 and T576x1 so far, both of which only give 260 kh/s.
Okay, I made some improvements to the Titan kernel, which brought my hash rate up from 55 kHash/s (achieved with the Kepler kernel) to 62 kHash/s on the GT 640 (GK208 chip, Compute 3.5). Maybe you want to try this binary on the GTX 780? Let me know how I should send it to you.
Christian
spartan117x7
Newbie
Offline
Activity: 13
Merit: 0
November 06, 2013, 11:42:06 PM
Hey, I got cudaminer running perfectly on my desktop, except this error keeps showing up. The only parameters I have are --no-autotune along with my URL, worker & pass. What is going on? I've seen others have this problem but no clear answers. I've fiddled with the -D values but no use.
[2013-11-05 17:33:23] GPU #0: GeForce GTX 295 result does not validate on CPU!
[2013-11-05 17:33:27] GPU #1: GeForce GTX 295 result does not validate on CPU!
Lacan82
November 06, 2013, 11:50:11 PM
Quote from: spartan117x7 on November 06, 2013, 11:42:06 PM
Hey, I got cudaminer running perfectly on my desktop, except this error keeps showing up. The only parameters I have are --no-autotune along with my URL, worker & pass. What is going on?
You must let it autotune, then use -l once it finds a valid configuration. There are plenty of answers to this; it means you've chosen an invalid configuration.
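A minimal sketch of that workflow (pool URL, worker and password are placeholders):
>cudaminer -o stratum+tcp://<url> -O <worker>:<pass>
Let the auto-tune pass finish, note the configuration it settles on (say K15x16), then restart with it pinned:
>cudaminer -l K15x16 -o stratum+tcp://<url> -O <worker>:<pass>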
spartan117x7
Newbie
Offline
Activity: 13
Merit: 0
November 07, 2013, 12:52:45 AM
Nope, doesn't work. I autotuned and now have the values -l L30x3,L30x3,L4x2. Still has the validation issue with the CPU.
Lacan82
November 07, 2013, 02:28:50 AM
Quote from: spartan117x7 on November 07, 2013, 12:52:45 AM
Nope, doesn't work. I autotuned and now have the values -l L30x3,L30x3,L4x2. Still has the validation issue with the CPU.
Make sure your drivers are up to date.
spartan117x7
Newbie
Offline
Activity: 13
Merit: 0
November 07, 2013, 04:36:02 AM
Nope, drivers are all up to date on my 9400 GT and GTX 295. I've gotten it to work on my laptop (with a GT 520M) and my brother's MacBook (GT 650M). I'm guessing it's something with the configuration value. Any more suggestions?
Lacan82
November 07, 2013, 01:07:13 PM
Quote from: spartan117x7 on November 07, 2013, 04:36:02 AM
Nope, drivers are all up to date on my 9400 GT and GTX 295. I'm guessing it's something with the configuration value. Any more suggestions?
There are new switches like -K and -F; Christian posted them a couple of pages back. I would suggest you try them. You could also disable the 9400 GT and try mining with just the GTX 295, then do the same with the other card, to work out whether one of them is the problem.
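If you'd rather not disable a card outright, the -d switch should isolate one at a time (a sketch; device indices are as cudaminer reports them, pool details are placeholders):
>cudaminer -d 0 -o stratum+tcp://<url> -O <worker>:<pass>
>cudaminer -d 1 -o stratum+tcp://<url> -O <worker>:<pass>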
Odrec
Member
Offline
Activity: 112
Merit: 10
November 07, 2013, 10:54:20 PM
Do you think I'll get any profit mining litecoins with a GTX 260?
OneOfMany07
Newbie
Offline
Activity: 3
Merit: 0
November 08, 2013, 02:26:46 AM (Last edit: November 08, 2013, 02:47:58 AM by OneOfMany07)
I'm trying to run cudaminer for the first time on my GTX 480 SLI cards. This is Windows 8 (not 8.1, to my knowledge) with the latest beta driver (331.65) from GeForce Experience. When I autotune, it crashes after picking a kernel, and I'm not sure how to get useful info. I do have some of the CUDA SDK installed on my computer, but haven't touched it recently. It was years ago that I last programmed something in CUDA; I installed it more recently on a whim. The last few lines of a failing run with debug turned on are below...
>cudaminer -D -H 1 -i 1,0 -o stratum+tcp://<url> -O <user>:<pass>
... [autotune progress table trimmed: rows 353 and 354 printed only empty kH/s columns]
[2013-11-06 20:38:56] GPU #0: 71.63 khash/s with configuration F7x8
[2013-11-06 20:38:56] GPU #0: using launch configuration F7x8
I tried the idea of limiting which kernels to use. When I picked Fermi (which my card is), it crashed again. When I picked Kepler I got farther, but this still seems wrong, not least because this is not a Kepler-based card, and card 1 still throws errors.
>cudaminer -H 1 -i 1,0 -l K -o stratum+tcp://<url> -O <user>:<pass>
*** CudaMiner for nVidia GPUs by Christian Buchner ***
This is version 2013-11-01 (alpha) based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
Cuda additions Copyright 2013 Christian Buchner
My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm
[2013-11-07 18:18:25] 2 miner threads started, using 'scrypt' algorithm.
[2013-11-07 18:18:25] Starting Stratum on stratum+tcp://<url>
[2013-11-07 18:18:27] GPU #1: GeForce GTX 480 with compute capability 2.0
[2013-11-07 18:18:27] GPU #1: interactive: 0, tex-cache: 0 , single-alloc: 0
[2013-11-07 18:18:27] GPU #0: GeForce GTX 480 with compute capability 2.0
[2013-11-07 18:18:27] GPU #0: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-11-07 18:18:27] GPU #1: Performing auto-tuning (Patience...)
[2013-11-07 18:18:27] GPU #0: Given launch config 'K' does not validate.
[2013-11-07 18:18:27] GPU #0: Performing auto-tuning (Patience...)
[2013-11-07 18:19:07] GPU #0: 201.97 khash/s with configuration K15x16
[2013-11-07 18:19:07] GPU #0: using launch configuration K15x16
[2013-11-07 18:19:07] GPU #0: GeForce GTX 480, 7680 hashes, 0.19 khash/s
[2013-11-07 18:19:07] GPU #0: GeForce GTX 480, 15360 hashes, 128.99 khash/s
[2013-11-07 18:19:36] GPU #1: 213.19 khash/s with configuration F30x8
[2013-11-07 18:19:36] GPU #1: using launch configuration F30x8
[2013-11-07 18:19:36] GPU #1: GeForce GTX 480, 7680 hashes, 0.11 khash/s
[2013-11-07 18:19:36] GPU #1: GeForce GTX 480, 7680 hashes, 105.13 khash/s
[2013-11-07 18:19:36] GPU #1: GeForce GTX 480 result does not validate on CPU!
[2013-11-07 18:19:40] GPU #0: GeForce GTX 480, 6174720 hashes, 190.56 khash/s
[2013-11-07 18:19:40] accepted: 1/1 (100.00%), 295.70 khash/s (yay!!!)
[2013-11-07 18:19:43] GPU #0: GeForce GTX 480, 698880 hashes, 184.91 khash/s
[2013-11-07 18:19:43] accepted: 2/2 (100.00%), 290.04 khash/s (yay!!!)
[2013-11-07 18:19:46] GPU #1: GeForce GTX 480, 2181120 hashes, 209.68 khash/s
[2013-11-07 18:19:47] accepted: 3/3 (100.00%), 394.59 khash/s (yay!!!)
Ctrl-C
[2013-11-07 18:19:49] workio thread dead, waiting for workers...
[2013-11-07 18:19:49] worker threads all shut down, exiting.
Thanks for any help in advance.
----------------------------------
Just realized SLI was turned off in the driver. I'll try with it on in a bit...
cbuchner1 (OP)
November 08, 2013, 10:37:42 AM
Use -l K,K to tell both cards to run Kepler kernels, or -l K15x16,K15x16 to skip autotune altogether. Also try enabling -C 2,2 to get some 5% boost (from the texture cache).
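Put together for the two GTX 480s, the whole invocation might look like this (a sketch; pool details are placeholders, and K15x16 is just the config autotune reported above):
>cudaminer -d 0,1 -l K15x16,K15x16 -C 2,2 -o stratum+tcp://<url> -O <user>:<pass>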
cbuchner1 (OP)
November 08, 2013, 02:03:09 PM
Quote from: Odrec on November 07, 2013, 10:54:20 PM
Do you think I'll get any profit mining litecoins with a GTX 260?
I scrapped both my GTX 260s (SLI config) because I was getting only 40 kHash/s from each one under Windows 7. They might have worked better on Linux or Windows XP (due to the different driver model used there). I also scrapped a GTX 460 (too old, 96 kHash/s).
I got myself instead: a GTX 560 Ti 448-core edition (used), a GTX 560 Ti (new), a GTX 660 Ti (new) and a GT 640 (the new model with the GK208 chip). Now the machine can do 600 kHash/s @ 800 watts, and it's good for driving MANY monitors and for gaming as well.
It isn't actually profitable, considering the local electricity costs and the increased mining difficulty. But I am doing this for fun.
Christian
Devasive
November 09, 2013, 12:55:23 AM
I have been attempting to get CudaMiner to work with both of my GPUs in SLI (780M), but each time it starts up it will only detect either GPU 0 or GPU 1, even with -d 0,1 or a similar configuration. A single GPU pushes out a hash rate of around 150 khash/s, though it would be nice to double this by utilizing both of them. Has anyone else experienced this issue and found a solution?
cbuchner1 (OP)
November 09, 2013, 02:10:02 PM
Quote from: Devasive on November 09, 2013, 12:55:23 AM
I have been attempting to get CudaMiner to work with both of my GPUs in SLI (780M), but each time it starts up it will only detect either GPU 0 or GPU 1.
Ever tried running two separate instances of cudaminer? One with -d 0 and one with -d 1, maybe? Do tools like CUDA-Z and GPU-Z show both chips separately?
Christian
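For the two-instance route, a Windows batch file along these lines would start both at once (a sketch; pool details are placeholders):
start cudaminer -d 0 -o stratum+tcp://<url> -O <worker>:<pass>
start cudaminer -d 1 -o stratum+tcp://<url> -O <worker>:<pass>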
Devasive
November 09, 2013, 07:08:41 PM
Quote from: cbuchner1 on November 09, 2013, 02:10:02 PM
Ever tried running two separate instances of cudaminer? One with -d 0 and one with -d 1, maybe? Do tools like CUDA-Z and GPU-Z show both chips separately?
They are detected independently in any other program. I did run two separate batch files of cudaminer for the same pool, one per GPU, and that does work, but one of the instances accepts blocks much quicker than the other. CGMiner utilizes both GPUs in the same instance and works quickly as well, albeit with a hash rate of about 65 khash/s per card. I would like to get both cards running within the same instance in cudaminer, but I cannot find a configuration that allows it.
MaxBTC1
Newbie
Offline
Activity: 56
Merit: 0
November 09, 2013, 08:12:43 PM
When I start the program it just shuts down immediately. I navigated to it via cmd, and when running it, it just tells me about the program, the developer and how to donate; it never starts.
Help!
dgross0818
November 10, 2013, 12:05:21 AM
Quote from: Devasive on November 09, 2013, 07:08:41 PM
I did run two separate batch files of cudaminer for the same pool, one per GPU, and that does work, but one of the instances accepts blocks much quicker than the other. I would like to get both cards running within the same instance in cudaminer, but I cannot find a configuration that allows it.
I'm running a laptop with two 680Ms and experiencing a similar issue. Autotune was pretty lame in what it chose (it tried, but I was only getting about 80 kh/s), and right now I'm running two instances, one per GPU: one at K14x16, the other at K70x2. The K14x16 one works well at around 117 kh/s; the K70x2 config is a little slower at around 104 kh/s.
I tried running both with K14x16; however, I have a feeling I was running out of VRAM (these are the 2 GB 680Ms) because my drivers would crash and both cards would drop to around 5 kh/s. Memory utilization at the current settings is around 87% on both cards according to HWiNFO.
I'm happy with around 215 kh/s from a laptop with NVidia GPUs :] That said, if anyone knows a config that may work better on the 680M, I'm all ears.
Devasive
November 10, 2013, 12:44:20 AM
I have both instances set to K16x16 and they both maintain around 145 khash/s independently, but one always seems to overtake the other in accepted blocks (by a decent amount). What I am doing now is splitting my mining between FTC and LTC: one GPU mines litecoin while the other mines feathercoin. It's actually kind of nice, but I would still like to find a way to put them together within one instance and ramp up the output on just one coin.
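That split is just the two-batch-file approach pointed at different pools (a sketch; both pool URLs and credentials are placeholders):
start cudaminer -d 0 -l K16x16 -o stratum+tcp://<ltc-pool> -O <worker>:<pass>
start cudaminer -d 1 -l K16x16 -o stratum+tcp://<ftc-pool> -O <worker>:<pass>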