Bitcoin Forum
Page 52 of 1135
Author Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX]  (Read 3426963 times)
Eli0t
Sr. Member
****
Offline Offline

Activity: 252
Merit: 250


View Profile
August 27, 2013, 05:09:29 AM
 #1021

Any chance this gets adapted for the Yacoin family, the scrypt coins with a higher N factor? AMD card performance takes a bigger hit from higher N than CPUs do. Would the performance gap between Nvidia and AMD cards also narrow?  Huh

It doesn't work? I was using multipool.us pretty well with it, which has a variety of coins. I don't know the N factor of those coins off the top of my head.
Multipool doesn't do scrypt-jane coins.

LTC:  LKpJf3uk7KsHU73kxq8iFJrP1AAKN7Yni7  DGC:  DKXGvEbj3Rwgrm2QQbRyNPDDZDYoq4Y44d  XPM:  AWV5AKfLFyoBaMjg9C77rGUBhuFxz5DGGL
ilostcoins
Sr. Member
****
Offline Offline

Activity: 274
Merit: 250



View Profile
August 27, 2013, 12:14:45 PM
 #1022

Any chance this gets adapted for the Yacoin family, the scrypt coins with a higher N factor? AMD card performance takes a bigger hit from higher N than CPUs do. Would the performance gap between Nvidia and AMD cards also narrow?  Huh

It doesn't work? I was using multipool.us pretty well with it, which has a variety of coins. I don't know the N factor of those coins off the top of my head.

The Yacoin family of coins uses a modified version of scrypt (scrypt-jane) for hashing, so a normal miner that works with Litecoin etc. won't work without adaptation.

BTW, just tested the CUDA miner with Litecoin and it works well. Getting ~60 kH/s on a 650 Ti.
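For context, the N factor is scrypt's memory-cost parameter. A minimal Python sketch (illustrative only: Yacoin's scrypt-jane also changes the internal hash functions, so plain scrypt here is not its exact algorithm):

```python
import hashlib

def scrypt_digest(data: bytes, n: int) -> bytes:
    """Hash with scrypt at memory-cost factor n.

    The scratchpad a miner must hold is roughly 128 * r * n bytes,
    so doubling n doubles the memory cost; this is what hits
    GPU miners harder than CPUs at high N.
    """
    return hashlib.scrypt(data, salt=data, n=n, r=1, p=1,
                          maxmem=64 * 1024 * 1024, dklen=32)

litecoin_style = scrypt_digest(b"block header", 1024)   # Litecoin uses N=1024
higher_n = scrypt_digest(b"block header", 32768)        # a higher-N coin
```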

LTC: LSyqwk4YbhBRtkrUy8NRdKXFoUcgVpu8Qb   NVC: 4HtynfYVyRYo6yM8BTAqyNYwqiucfoPqFW   TAG id: 4313
CMC: CAHrzqveVm9UxGm7PZtT4uj6su4suxKzZv   YAC: Y9m5S7M24sdkjdwxnA9GZpPez6k6EqUjUt
Stoneysilence
Member
**
Offline Offline

Activity: 104
Merit: 10


View Profile
August 28, 2013, 05:42:59 AM
 #1023

Yes, I had the same issue. Make sure you set it up, get on a pool, etc. You don't have to make a .bat, though, if you don't know how; I prefer shortcuts anyway. You can right-click, create a shortcut on the desktop, then right-click the shortcut, go to Properties, and in the Target field just type the command like Stoney shows above.


Btw, thanks for the wemineltc pool suggestion; it's a night and day difference!

Also, is anyone still trying to fix that stratum failed-to-parse error? Would love a fix o.O. Is it not that common? I would expect it to be fairly common.

I am old school; I just find .bat files easier since they are plain text and you can just cut and paste. Shortcuts can be a pain to set up with parameters.

Either way works.
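For anyone choosing the .bat route, a minimal launcher sketch (the pool URL and worker credentials below are placeholders; substitute your own):

```bat
@echo off
rem Launch cudaminer against a stratum pool.
rem pool.example.com:3333 and worker:password are placeholders.
cudaminer.exe -o stratum+tcp://pool.example.com:3333 -O worker:password
rem Keep the window open if the miner exits, so errors stay visible.
pause
```

Save it next to cudaminer.exe and double-click it; the same command line pasted into a shortcut's Target field works too.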
nulo0b
Sr. Member
****
Offline Offline

Activity: 322
Merit: 250


View Profile
August 31, 2013, 09:49:15 PM
 #1024



Uhm, +600 kH/s?

What is that all about? Not on a pool; solo on a coin.

Gonna run it for a bit, maybe it's fake?

thx

cbuchner1 (OP)
Hero Member
*****
Offline Offline

Activity: 756
Merit: 502


View Profile
September 01, 2013, 11:39:33 AM
 #1025


Mine on a pool for testing and see whether the reported hits are validated by the CPU.
~600 kH/s seems way too high for this type of card.

When solo mining, you might only find out after months that your computed result was bogus Wink

Christian
jruben4
Newbie
*
Offline Offline

Activity: 17
Merit: 0


View Profile
September 04, 2013, 08:38:34 PM
 #1026

I now get the "stratum_recv_line failed to parse a newline-terminated string" error about once every 6 hours of operation (which freezes mining).

Would it be better if I went back to my own stratum proxy?
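For background on that error: stratum messages are newline-terminated JSON, and stratum_recv_line waits for a complete line. A minimal Python sketch of the framing (my own illustration, not cudaminer's actual C code) shows why a partial line stalls the parser:

```python
import json

def split_stratum_lines(buffer: bytes):
    """Split a receive buffer into complete newline-terminated JSON
    messages; anything after the last newline is an incomplete line
    that the client must keep buffering (or eventually time out on)."""
    messages = []
    while b"\n" in buffer:
        line, buffer = buffer.split(b"\n", 1)
        if line:
            messages.append(json.loads(line))
    return messages, buffer

msgs, leftover = split_stratum_lines(b'{"id":1,"result":true}\n{"id":2,')
# msgs holds the one complete message; leftover is the partial line
# the parser is still waiting to finish.
```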
mhps
Hero Member
*****
Offline Offline

Activity: 516
Merit: 500


CAT.EX Exchange


View Profile
September 05, 2013, 12:55:45 AM
 #1027

I now get the "stratum_recv_line failed to parse a newline-terminated string" error about once every 6 hours of operation (which freezes mining).

Would it be better if I went back to my own stratum proxy?

That is what I did.




BenTheRighteous
Newbie
*
Offline Offline

Activity: 32
Merit: 0


View Profile
September 07, 2013, 12:44:30 AM
 #1028

I compiled for linux (fedora 18) and it appears to have worked, but when I try and run the program, I get this:

Code:
LD_LIBRARY_PATH=/usr/local/cuda-5.5/lib64/ ./cudaminer -o stratum+tcp://ltc.give-me-coins.com:3333 -O [my credentials]
   *** CudaMiner for nVidia GPUs by Christian Buchner ***
             This is version 2013-07-13 (alpha)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
       Cuda additions Copyright 2013 Christian Buchner
   My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

[2013-09-06 20:39:24] Starting Stratum on stratum+tcp://ltc.give-me-coins.com:3333
[2013-09-06 20:39:25] thread 31334 create failed

If I limit the number of threads with a -t option, I get something more like this:

Code:
LD_LIBRARY_PATH=/usr/local/cuda-5.5/lib64/ ./cudaminer -o stratum+tcp://ltc.give-me-coins.com:3333 -O [my credentials] -t 1024
   *** CudaMiner for nVidia GPUs by Christian Buchner ***
             This is version 2013-07-13 (alpha)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
       Cuda additions Copyright 2013 Christian Buchner
   My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

[2013-09-06 20:41:06] Starting Stratum on stratum+tcp://ltc.give-me-coins.com:3333
[2013-09-06 20:41:06] 1024 miner threads started, using 'scrypt' algorithm.
[2013-09-06 20:41:07] GPU #0: starting up...

[2013-09-06 20:41:07] GPU #1: starting up...

[2013-09-06 20:41:07] GPU #1:  with compute capability 0.0
[2013-09-06 20:41:07] GPU #1: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-06 20:41:07] GPU #1: Performing auto-tuning (Patience...)
[2013-09-06 20:41:07] GPU #0: w�= with compute capability 0.0
[2013-09-06 20:41:07] GPU #0: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-06 20:41:07] GPU #0: Performing auto-tuning (Patience...)
[2013-09-06 20:41:07] GPU #4: starting up...

[2013-09-06 20:41:07] GPU #4:  with compute capability 0.0
[2013-09-06 20:41:07] GPU #4: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-06 20:41:07] GPU #4: Performing auto-tuning (Patience...)
[2013-09-06 20:41:07] GPU #4:    0.00 khash/s with configuration  0x0
[2013-09-06 20:41:07] GPU #4: using launch configuration  0x0
Floating point exception (core dumped)

I only have one GPU (a GTX 650 Ti), so I'm not sure GPU #1, GPU #4, etc. make any sense.
Is this because I'm on a 64-bit system?
Schleicher
Hero Member
*****
Offline Offline

Activity: 675
Merit: 514



View Profile
September 07, 2013, 04:59:36 PM
 #1029

I compiled for linux (fedora 18) and it appears to have worked, but when I try and run the program, I get this:

Code:
[2013-09-06 20:39:25] thread 31334 create failed
cudaminer doesn't recognise your graphics card.
One of the Linux freaks here should be able to help you.

Lacan82
Sr. Member
****
Offline Offline

Activity: 247
Merit: 250


View Profile
September 07, 2013, 05:20:49 PM
 #1030


Code:
LD_LIBRARY_PATH=/usr/local/cuda-5.5/lib64/ ./cudaminer -o stratum+tcp://ltc.give-me-coins.com:3333 -O [my credentials] -t 1024
  *** CudaMiner for nVidia GPUs by Christian Buchner ***
            This is version 2013-07-13 (alpha)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
      Cuda additions Copyright 2013 Christian Buchner
  My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

[2013-09-06 20:41:06] Starting Stratum on stratum+tcp://ltc.give-me-coins.com:3333
[2013-09-06 20:41:06] 1024 miner threads started, using 'scrypt' algorithm.
[2013-09-06 20:41:07] GPU #0: starting up...

[2013-09-06 20:41:07] GPU #1: starting up...

I only have one GPU (a GTX 650 Ti), so I'm not sure GPU #1, GPU #4, etc. make any sense.
Is this because I'm on a 64-bit system?

This is usually a driver issue. Make sure your drivers are up to date.

BenTheRighteous
Newbie
*
Offline Offline

Activity: 32
Merit: 0


View Profile
September 07, 2013, 06:34:50 PM
 #1031

I'm using the proprietary driver - I can play games just fine, so I would think that the driver isn't the issue? I could be wrong, but:

Code:
glxinfo | grep OpenGL
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 650 Ti/PCIe/SSE2
OpenGL core profile version string: 4.3.0 NVIDIA 319.32
OpenGL core profile shading language version string: 4.30 NVIDIA via Cg compiler
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 4.3.0 NVIDIA 319.32
OpenGL shading language version string: 4.30 NVIDIA via Cg compiler
OpenGL context flags: (none)
OpenGL profile mask: (none)
OpenGL extensions:
Lacan82
Sr. Member
****
Offline Offline

Activity: 247
Merit: 250


View Profile
September 07, 2013, 07:47:57 PM
 #1032

I'm using the proprietary driver - I can play games just fine, so I would think that the driver isn't the issue? I could be wrong, but:

Code:
glxinfo | grep OpenGL
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 650 Ti/PCIe/SSE2
OpenGL core profile version string: 4.3.0 NVIDIA 319.32
OpenGL core profile shading language version string: 4.30 NVIDIA via Cg compiler
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 4.3.0 NVIDIA 319.32
OpenGL shading language version string: 4.30 NVIDIA via Cg compiler
OpenGL context flags: (none)
OpenGL profile mask: (none)
OpenGL extensions:

Those drivers should work, but there are newer ones. Do you have the CUDA 5 dev kit installed?

BenTheRighteous
Newbie
*
Offline Offline

Activity: 32
Merit: 0


View Profile
September 07, 2013, 08:13:45 PM
 #1033

I have CUDA 5.5 installed, that's what I compiled cudaminer against.
Lacan82
Sr. Member
****
Offline Offline

Activity: 247
Merit: 250


View Profile
September 07, 2013, 09:03:38 PM
 #1034

I have CUDA 5.5 installed, that's what I compiled cudaminer against.

https://forum.litecoin.net/index.php?topic=3231.0 may help

Schleicher
Hero Member
*****
Offline Offline

Activity: 675
Merit: 514



View Profile
September 08, 2013, 04:58:48 AM
 #1035

I'm not sure whether driver version 319.32 supports CUDA 5.5.

BenTheRighteous
Newbie
*
Offline Offline

Activity: 32
Merit: 0


View Profile
September 08, 2013, 02:03:07 PM
 #1036

I tried downgrading to CUDA 5.0 and recompiling. Pretty much the same result:

Code:
LD_LIBRARY_PATH=/usr/local/cuda-5.0/lib64/ ./cudaminer -d 0 -i 0 --benchmark
   *** CudaMiner for nVidia GPUs by Christian Buchner ***
             This is version 2013-07-13 (alpha)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
       Cuda additions Copyright 2013 Christian Buchner
   My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

[2013-09-08 08:59:59] 1 miner threads started, using 'scrypt' algorithm.
[2013-09-08 08:59:59] GPU #0: starting up...

[2013-09-08 08:59:59] GPU #0:  with compute capability 131072.0
[2013-09-08 08:59:59] GPU #0: interactive: 0, tex-cache: 0 , single-alloc: 0
[2013-09-08 08:59:59] GPU #0: Performing auto-tuning (Patience...)
[2013-09-08 08:59:59] GPU #0:    0.00 khash/s with configuration  0x0
[2013-09-08 08:59:59] GPU #0: using launch configuration  0x0
Floating point exception (core dumped)

If I remove the -d 0 -i 0 arguments, here's what I get instead:

Code:
LD_LIBRARY_PATH=/usr/local/cuda-5.0/lib64/ ./cudaminer --benchmark
   *** CudaMiner for nVidia GPUs by Christian Buchner ***
             This is version 2013-07-13 (alpha)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
       Cuda additions Copyright 2013 Christian Buchner
   My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

[2013-09-08 09:58:51] GPU #6: starting up...

[2013-09-08 09:58:51] GPU #6:  with compute capability 131072.0
[2013-09-08 09:58:51] GPU #6: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #6: Performing auto-tuning (Patience...)
[2013-09-08 09:58:51] GPU #0: starting up...

[2013-09-08 09:58:51] GPU #7: starting up...

[2013-09-08 09:58:51] GPU #0:  with compute capability -67106336.32627
[2013-09-08 09:58:51] GPU #0: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #0: Performing auto-tuning (Patience...)
[2013-09-08 09:58:51] GPU #1: starting up...

[2013-09-08 09:58:51] GPU #-1: starting up...

[2013-09-08 09:58:51] GPU #7:  with compute capability 131072.0
[2013-09-08 09:58:51] GPU #7: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #0: starting up...

[2013-09-08 09:58:51] GPU #0:    0.00 khash/s with configuration  0x0
[2013-09-08 09:58:51] GPU #0: using launch configuration  0x0
[2013-09-08 09:58:51] GPU #0: starting up...

[2013-09-08 09:58:51] GPU #6:    0.00 khash/s with configuration  0x0
[2013-09-08 09:58:51] GPU #6: using launch configuration  0x0
[2013-09-08 09:58:51] GPU #1: starting up...

[2013-09-08 09:58:51] GPU #0:  with compute capability 131072.0
[2013-09-08 09:58:51] GPU #3: starting up...

[2013-09-08 09:58:51] GPU #4: starting up...

[2013-09-08 09:58:51] GPU #0: starting up...

[2013-09-08 09:58:51] GPU #0:  with compute capability 131072.0
[2013-09-08 09:58:51] GPU #0: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #1:  with compute capability 131072.0
Floating point exception (core dumped)

What on earth is it doing? GPU #7? GPU #-1? This can't be right. I agree that it looks like it's not detecting my video card correctly, but how can I figure out what the issue is? I have no idea at all what makes it think I have more than one GPU in my system.
BenTheRighteous
Newbie
*
Offline Offline

Activity: 32
Merit: 0


View Profile
September 08, 2013, 02:14:44 PM
 #1037

Ok, you guys were on the money - the issue was with the driver in the Fedora repositories. Once I installed the latest driver straight from Nvidia, it started working with my CUDA 5.0 binary. Thanks for the help.
BenTheRighteous
Newbie
*
Offline Offline

Activity: 32
Merit: 0


View Profile
September 08, 2013, 02:24:41 PM
 #1038

It also works with 5.5. In case anyone else needs it, here's my configure.sh:

Code:
./configure "CFLAGS=-O3" "CXXFLAGS=-O3" --with-cuda=/usr/local/cuda-5.5 --build=x86_64-unknown-linux-gnu --host=i686-unknown-linux-gnu

Various forum posts had given me the idea that building a 64-bit binary was a bad idea.

EDIT: I removed the --build and --host flags and recompiled just to see what would happen. It seems to work just fine without them, and now I'm getting lines that look like this:

[2013-09-08 10:27:19] accepted: 2/2 (100.00%), 80.80 khash/s (yay!!!)

So I guess the 64-bit binary works just fine?
mcgreed
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile
September 09, 2013, 03:58:59 AM
 #1039

ESET NOD32 sees this new version as a PUP (Potentially Unwanted Program) and warns against its installation.  This didn't occur with the 4-22 or 4-30 versions.

This is also the case for other antivirus programs (and for other miners); it's probably because the miners have been compiled into malware and/or dropped as payloads by viruses, and antivirus companies lazily mark the payload as dangerous instead of the program that delivers it. Angry

However, I've found what seem to be two effective, painless fixes for this problem, which I would rather not reveal to the people who plant miners via trojans. Anyone who would like to know these workarounds is welcome to PM me and try to persuade me.
cbuchner1 (OP)
Hero Member
*****
Offline Offline

Activity: 756
Merit: 502


View Profile
September 09, 2013, 09:18:43 AM
Last edit: September 09, 2013, 03:56:17 PM by cbuchner1
 #1040

The brave among you might want to try out this code repo under Linux (it's a straight fork from pooler's cpuminer with my CUDA additions).

https://github.com/cbuchner1/cpuminer

There are four kernels, accessible with these monikers:
L - Legacy (for compute 1.x devices)
F - Fermi (for Fermi-class devices, compute 2.x)
K or S - Kepler kernel (using spinlocks to guard shared memory, compute 3.0)
T - Tesla (compute 3.5)

To autotune for a specific kernel, pass the letter representing that kernel to the -l option; otherwise it just picks the kernel that matches your architecture.
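The default pick can be sketched as a small decision function (my own illustrative code, not the repo's actual logic):

```python
def pick_kernel(major: int, minor: int) -> str:
    """Pick the default kernel moniker for a CUDA compute capability,
    mirroring the matching rule described above (illustrative sketch)."""
    if (major, minor) >= (3, 5):
        return "T"  # Tesla kernel
    if major >= 3:
        return "K"  # Kepler kernel (spinlock variant; "S" is an alias)
    if major == 2:
        return "F"  # Fermi kernel
    return "L"      # Legacy kernel for compute 1.x devices

# e.g. a GTX 650 Ti is compute capability 3.0
assert pick_kernel(3, 0) == "K"
```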

Some of the kernels (Fermi and Kepler) have been sped up a bit using optimizations I received from a nice guy named Alex from Greece; a 5-15% speedup can be obtained. Note that sometimes a kernel for an older architecture may run as fast as, or faster than, the kernel for your hardware architecture. Wink

Currently getting 52 kHash/s on a GT 640M (compute 3.5) and a GT 750M (compute 3.0).

Open issues:
- the Fermi kernel currently doesn't run for me on 64-bit Linux on a GT 750M with compute 3.0
- still not enough error checking in the CUDA code
- the Stratum parse error and subsequent protocol freeze