Eli0t
|
|
August 27, 2013, 05:09:29 AM |
|
Any chance this gets adapted for the Yacoin family, those scrypt coins with a higher N factor? AMD cards' performance takes a bigger hit from higher N than CPUs do. Would the performance gap between Nvidia and AMD cards also narrow? Or doesn't it work already? I was using multipool.us pretty well with it, which has a variety of coins. I don't know the N factor of those coins off the top of my head.. multipool doesn't do scrypt-jane coins, though.
|
LTC: LKpJf3uk7KsHU73kxq8iFJrP1AAKN7Yni7 DGC: DKXGvEbj3Rwgrm2QQbRyNPDDZDYoq4Y44d XPM: AWV5AKfLFyoBaMjg9C77rGUBhuFxz5DGGL
|
|
|
ilostcoins
|
|
August 27, 2013, 12:14:45 PM |
|
Quote from: Eli0t
Any chance this gets adapted for the Yacoin family, those scrypt coins with a higher N factor? AMD cards' performance takes a bigger hit from higher N than CPUs do. Would the performance gap between Nvidia and AMD cards also narrow? Or doesn't it work already? I was using multipool.us pretty well with it, which has a variety of coins. I don't know the N factor of those coins off the top of my head..

The yacoin family of coins uses a modified version of scrypt for hashing, so a normal miner that works with litecoin etc. won't work without adaptation. BTW, just tested the CUDA miner with litecoin and it works well - getting ~60 kh/s on a 650 Ti.
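(For a rough sense of the scaling: in standard scrypt the per-hash scratchpad is 128 x r x N bytes, so litecoin's N=1024, r=1 works out to 128 KB per hash, and every doubling of N doubles that. A GPU keeping thousands of hashes in flight runs out of fast memory long before a CPU does, which is presumably why high-N coins narrow the gap. Scrypt-jane coins use a different mixing function and a growing N, so the exact numbers differ, but the memory scaling argument is the same.)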
|
LTC: LSyqwk4YbhBRtkrUy8NRdKXFoUcgVpu8Qb NVC: 4HtynfYVyRYo6yM8BTAqyNYwqiucfoPqFW TAG id: 4313 CMC: CAHrzqveVm9UxGm7PZtT4uj6su4suxKzZv YAC: Y9m5S7M24sdkjdwxnA9GZpPez6k6EqUjUt
|
|
|
Stoneysilence
Member
Offline
Activity: 104
Merit: 10
|
|
August 28, 2013, 05:42:59 AM |
|
Quote:
Yes, I had the same issue. Make sure you set it up, get on a pool, etc. You don't have to make a .bat though if you don't know how, and I prefer shortcuts anyway. You can right-click, create a shortcut on the desktop, then right-click the shortcut, go to Properties, and in the Target field just type the command like Stoneysilence shows above.
Btw, thanks for the wemineltc pool suggestion, it's a night-and-day difference!
Also, is anyone still trying to fix that "stratum failed to parse" error? Would love a fix. Is it not that common? I would expect it to be fairly common..

I am old school; I just find .bat files easier since they are plain text and you can just cut and paste. Shortcuts can be a pain to set up with parameters. Either way works.
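For anyone who's never made one: a .bat is just the miner command saved in a text file. A minimal sketch (the pool URL and worker credentials below are placeholders - substitute your own):

cudaminer.exe -o stratum+tcp://pool.example.com:3333 -O myworker:mypassword
pause

The pause on the second line keeps the window open if the miner exits, so you can read any error message.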
|
|
|
|
nulo0b
|
|
August 31, 2013, 09:49:15 PM |
|
Uhm, 600+ kh/s? What is that all about? Not on a pool - solo on a coin. Gonna run it for a bit; maybe it's fake? Thx
|
|
|
|
cbuchner1 (OP)
|
|
September 01, 2013, 11:39:33 AM |
|
Mine on a pool for testing and see if the reported hits are validated by the CPU. ~600 kHash/s seems way too high for this type of card. With solo mining, you might only find out after months that your computed results were bogus.

Christian
|
|
|
|
jruben4
Newbie
Offline
Activity: 17
Merit: 0
|
|
September 04, 2013, 08:38:34 PM |
|
I now get the "stratum_recv_line failed to parse a newline-terminated string" error about once every 6 hours of operation (which freezes mining).
Would it be better if I went back to my own stratum proxy?
|
|
|
|
mhps
|
|
September 05, 2013, 12:55:45 AM |
|
Quote from: jruben4
I now get the "stratum_recv_line failed to parse a newline-terminated string" error about once every 6 hours of operation (which freezes mining).
Would it be better if I went back to my own stratum proxy?

That is what I did.
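For anyone wanting to try the same workaround: the usual tool is slush's stratum mining proxy, which takes old-style getwork connections locally and speaks Stratum to the pool, sidestepping cudaminer's stratum code entirely. Roughly like this - pool host/port and credentials are placeholders, and check the proxy's --help for the exact flags of your version:

./mining_proxy.py -o pool.example.com -p 3333
./cudaminer -o http://127.0.0.1:8332 -O myworker:mypassword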
|
|
|
|
BenTheRighteous
Newbie
Offline
Activity: 32
Merit: 0
|
|
September 07, 2013, 12:44:30 AM |
|
I compiled for linux (fedora 18) and it appears to have worked, but when I try to run the program, I get this:

LD_LIBRARY_PATH=/usr/local/cuda-5.5/lib64/ ./cudaminer -o stratum+tcp://ltc.give-me-coins.com:3333 -O [my credentials]

*** CudaMiner for nVidia GPUs by Christian Buchner ***
This is version 2013-07-13 (alpha)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
Cuda additions Copyright 2013 Christian Buchner
My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm
[2013-09-06 20:39:24] Starting Stratum on stratum+tcp://ltc.give-me-coins.com:3333
[2013-09-06 20:39:25] thread 31334 create failed

If I limit the number of threads with a -t option, I get something more like this:

LD_LIBRARY_PATH=/usr/local/cuda-5.5/lib64/ ./cudaminer -o stratum+tcp://ltc.give-me-coins.com:3333 -O [my credentials] -t 1024

[usual CudaMiner banner]
[2013-09-06 20:41:06] Starting Stratum on stratum+tcp://ltc.give-me-coins.com:3333
[2013-09-06 20:41:06] 1024 miner threads started, using 'scrypt' algorithm.
[2013-09-06 20:41:07] GPU #0: starting up...
[2013-09-06 20:41:07] GPU #1: starting up...
[2013-09-06 20:41:07] GPU #1: with compute capability 0.0
[2013-09-06 20:41:07] GPU #1: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-06 20:41:07] GPU #1: Performing auto-tuning (Patience...)
[2013-09-06 20:41:07] GPU #0: with compute capability 0.0
[2013-09-06 20:41:07] GPU #0: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-06 20:41:07] GPU #0: Performing auto-tuning (Patience...)
[2013-09-06 20:41:07] GPU #4: starting up...
[2013-09-06 20:41:07] GPU #4: with compute capability 0.0
[2013-09-06 20:41:07] GPU #4: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-06 20:41:07] GPU #4: Performing auto-tuning (Patience...)
[2013-09-06 20:41:07] GPU #4: 0.00 khash/s with configuration 0x0
[2013-09-06 20:41:07] GPU #4: using launch configuration 0x0
Floating point exception (core dumped)

I only have 1 GPU (a GTX 650 Ti), so I'm not sure the GPU #1, GPU #4 etc. make any sense. Is this because I'm on a 64-bit system?
|
|
|
|
Schleicher
|
|
September 07, 2013, 04:59:36 PM |
|
Quote from: BenTheRighteous
I compiled for linux (fedora 18) and it appears to have worked, but when I try to run the program, I get this:
[2013-09-06 20:39:25] thread 31334 create failed

cudaminer doesn't recognise your graphics card. One of the linux freaks here should be able to help you.
|
|
|
|
Lacan82
|
|
September 07, 2013, 05:20:49 PM |
|
Quote from: BenTheRighteous
LD_LIBRARY_PATH=/usr/local/cuda-5.5/lib64/ ./cudaminer -o stratum+tcp://ltc.give-me-coins.com:3333 -O [my credentials] -t 1024
[usual CudaMiner banner]
[2013-09-06 20:41:06] Starting Stratum on stratum+tcp://ltc.give-me-coins.com:3333
[2013-09-06 20:41:06] 1024 miner threads started, using 'scrypt' algorithm.
[2013-09-06 20:41:07] GPU #0: starting up...
[2013-09-06 20:41:07] GPU #1: starting up...
I only have 1 GPU (a GTX 650 Ti), so I'm not sure the GPU #1, GPU #4 etc. make any sense. Is this because I'm on a 64-bit system?

This is usually a driver issue. Make sure your drivers are up to date.
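A quick way to check which driver version is actually loaded (nvidia-smi ships with the proprietary driver):

nvidia-smi

The header of its output reports the installed driver version.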
|
|
|
|
BenTheRighteous
Newbie
Offline
Activity: 32
Merit: 0
|
|
September 07, 2013, 06:34:50 PM |
|
I'm using the proprietary driver - I can play games just fine, so I would think that the driver isn't the issue? I could be wrong, but:

glxinfo | grep OpenGL
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 650 Ti/PCIe/SSE2
OpenGL core profile version string: 4.3.0 NVIDIA 319.32
OpenGL core profile shading language version string: 4.30 NVIDIA via Cg compiler
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 4.3.0 NVIDIA 319.32
OpenGL shading language version string: 4.30 NVIDIA via Cg compiler
OpenGL context flags: (none)
OpenGL profile mask: (none)
OpenGL extensions:
|
|
|
|
Lacan82
|
|
September 07, 2013, 07:47:57 PM |
|
Quote from: BenTheRighteous
I'm using the proprietary driver - I can play games just fine, so I would think that the driver isn't the issue? I could be wrong, but:
glxinfo | grep OpenGL
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 650 Ti/PCIe/SSE2
OpenGL core profile version string: 4.3.0 NVIDIA 319.32
[rest of glxinfo output as above]

Those drivers should work, but there are newer ones available. Do you have the CUDA 5 dev kit installed?
|
|
|
|
BenTheRighteous
Newbie
Offline
Activity: 32
Merit: 0
|
|
September 07, 2013, 08:13:45 PM |
|
I have CUDA 5.5 installed; that's what I compiled cudaminer against.
|
|
|
|
|
Schleicher
|
|
September 08, 2013, 04:58:48 AM |
|
I'm not sure driver version 319.32 supports CUDA 5.5.
|
|
|
|
BenTheRighteous
Newbie
Offline
Activity: 32
Merit: 0
|
|
September 08, 2013, 02:03:07 PM |
|
I tried downgrading to CUDA 5.0 and recompiling. Pretty much the same result:

LD_LIBRARY_PATH=/usr/local/cuda-5.0/lib64/ ./cudaminer -d 0 -i 0 --benchmark

[usual CudaMiner banner]
[2013-09-08 08:59:59] 1 miner threads started, using 'scrypt' algorithm.
[2013-09-08 08:59:59] GPU #0: starting up...
[2013-09-08 08:59:59] GPU #0: with compute capability 131072.0
[2013-09-08 08:59:59] GPU #0: interactive: 0, tex-cache: 0 , single-alloc: 0
[2013-09-08 08:59:59] GPU #0: Performing auto-tuning (Patience...)
[2013-09-08 08:59:59] GPU #0: 0.00 khash/s with configuration 0x0
[2013-09-08 08:59:59] GPU #0: using launch configuration 0x0
Floating point exception (core dumped)

If I remove the -d 0 -i 0 arguments, here's what I get instead:

LD_LIBRARY_PATH=/usr/local/cuda-5.0/lib64/ ./cudaminer --benchmark

[usual CudaMiner banner]
[2013-09-08 09:58:51] GPU #6: starting up...
[2013-09-08 09:58:51] GPU #6: with compute capability 131072.0
[2013-09-08 09:58:51] GPU #6: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #6: Performing auto-tuning (Patience...)
[2013-09-08 09:58:51] GPU #0: starting up...
[2013-09-08 09:58:51] GPU #7: starting up...
[2013-09-08 09:58:51] GPU #0: with compute capability -67106336.32627
[2013-09-08 09:58:51] GPU #0: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #0: Performing auto-tuning (Patience...)
[2013-09-08 09:58:51] GPU #1: starting up...
[2013-09-08 09:58:51] GPU #-1: starting up...
[2013-09-08 09:58:51] GPU #7: with compute capability 131072.0
[2013-09-08 09:58:51] GPU #7: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #0: starting up...
[2013-09-08 09:58:51] GPU #0: 0.00 khash/s with configuration 0x0
[2013-09-08 09:58:51] GPU #0: using launch configuration 0x0
[2013-09-08 09:58:51] GPU #0: starting up...
[2013-09-08 09:58:51] GPU #6: 0.00 khash/s with configuration 0x0
[2013-09-08 09:58:51] GPU #6: using launch configuration 0x0
[2013-09-08 09:58:51] GPU #1: starting up...
[2013-09-08 09:58:51] GPU #0: with compute capability 131072.0
[2013-09-08 09:58:51] GPU #3: starting up...
[2013-09-08 09:58:51] GPU #4: starting up...
[2013-09-08 09:58:51] GPU #0: starting up...
[2013-09-08 09:58:51] GPU #0: with compute capability 131072.0
[2013-09-08 09:58:51] GPU #0: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #1: with compute capability 131072.0
Floating point exception (core dumped)

What on earth is it doing? GPU #7? GPU #-1? This can't be right. I agree that it looks like it's not detecting my video card correctly, but how can I figure out what the issue is? I have no idea at all what makes it think I have more than one GPU in my system.
|
|
|
|
BenTheRighteous
Newbie
Offline
Activity: 32
Merit: 0
|
|
September 08, 2013, 02:14:44 PM |
|
Ok, you guys were on the money - the issue was with the driver in the Fedora repositories. Once I installed the latest driver straight from Nvidia, it started working with my CUDA 5.0 binary. Thanks for the help.
|
|
|
|
BenTheRighteous
Newbie
Offline
Activity: 32
Merit: 0
|
|
September 08, 2013, 02:24:41 PM |
|
It also works with 5.5. In case anyone else needs it, here's my configure.sh:

./configure "CFLAGS=-O3" "CXXFLAGS=-O3" --with-cuda=/usr/local/cuda-5.5 --build=x86_64-unknown-linux-gnu --host=i686-unknown-linux-gnu

Various forum posts had given me the idea that building a 64-bit binary was a bad idea.

EDIT: I removed the --build and --host flags and recompiled just to see what would happen. It seems to work just fine without them, and now I'm getting lines that look like this:

[2013-09-08 10:27:19] accepted: 2/2 (100.00%), 80.80 khash/s (yay!!!)

So I guess the 64-bit binary works just fine?
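For completeness, starting from a fresh checkout the whole build is presumably the usual autotools sequence (pooler's cpuminer, which this is forked from, ships an autogen.sh; adjust the CUDA path to your install):

./autogen.sh
./configure "CFLAGS=-O3" "CXXFLAGS=-O3" --with-cuda=/usr/local/cuda-5.5
make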
|
|
|
|
mcgreed
Newbie
Offline
Activity: 28
Merit: 0
|
|
September 09, 2013, 03:58:59 AM |
|
ESET NOD32 sees this new version as a PUP (Potentially Unwanted Program) and warns against its installation. This didn't occur with the 4-22 or 4-30 versions.
This is also the case for other antivirus programs (and for other miners); it's probably because the miners have been either compiled into malware and/or dropped as a payload by viruses, and antivirus companies lazily mark the payload as dangerous instead of the program that delivers it. However, I've found what seem to be two effective, painless fixes for this problem, which I'd rather not reveal to the people who plant miners via trojans. Anyone who would like to know these workarounds is welcome to PM me and try to persuade me.
|
|
|
|
cbuchner1 (OP)
|
|
September 09, 2013, 09:18:43 AM Last edit: September 09, 2013, 03:56:17 PM by cbuchner1 |
|
The brave among you might want to try out this code repo under Linux (it's a straight fork of pooler's cpuminer with my CUDA additions): https://github.com/cbuchner1/cpuminer

There are 4 kernels, accessible with these monikers:
L - Legacy (for compute 1.x devices)
F - Fermi (for Fermi-class devices, compute 2.x)
K or S - Kepler kernel (using spinlocks to guard shared memory, compute 3.0)
T - Tesla (compute 3.5)

To autotune for a specific kernel, just pass the letter for that kernel to the -l option; otherwise it picks the kernel that matches your architecture. Some of the kernels (Fermi and Kepler) have been sped up a bit using optimizations I received from a nice guy named Alex from Greece; a 5-15% speed-up can be obtained. Note that sometimes a kernel for an older architecture may run at the same or better speed than the kernel for your hardware's architecture.

Currently getting 52 kHash/s on a GT 640M (compute 3.5) and a GT 750M (compute 3.0).

Open issues:
- the Fermi kernel currently doesn't run for me on 64-bit Linux on a GT 750M with compute 3.0
- still not enough error checking is done in the CUDA code
- that Stratum parse error and subsequent protocol freeze
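For example, to force autotuning of the Kepler kernel (pool and credentials are placeholders):

./cudaminer -l K -o stratum+tcp://pool.example.com:3333 -O myworker:mypassword

Leave out -l and it autotunes whichever kernel matches your GPU's compute capability.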
|
|
|
|
|