Can't screen run a shell script that sets LD_LIBRARY_PATH so libcudart is found before starting cudaminer? Somehow I don't see how this incompatibility is a real issue with cudaminer itself...
You don't have to; this is how I do it:
cat miner.sh
LD_LIBRARY_PATH=/usr/local/cuda/lib64 ./CudaMiner/cudaminer.......
cat screen_miner.sh
screen -dmS cudaminer miner.sh
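A minimal sketch of that two-script pattern, with the elided miner flags replaced by a `"$@"` pass-through placeholder (paths are the poster's, adjust for your install):

```shell
# miner.sh -- point the dynamic linker at the CUDA runtime, then exec the miner.
# The flags after cudaminer are whatever you normally pass; "$@" just forwards them.
cat > miner.sh <<'EOF'
#!/bin/sh
LD_LIBRARY_PATH=/usr/local/cuda/lib64 exec ./CudaMiner/cudaminer "$@"
EOF
chmod +x miner.sh

# screen_miner.sh -- launch miner.sh in a detached screen session named "cudaminer".
# Reattach later with: screen -r cudaminer
cat > screen_miner.sh <<'EOF'
#!/bin/sh
screen -dmS cudaminer ./miner.sh
EOF
chmod +x screen_miner.sh
```

Because LD_LIBRARY_PATH is set inside miner.sh, the screen session needs no special environment of its own.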
|
|
|
Has anyone used CudaMiner with the Tesla K series cards? How do they perform compared to the regular GPU cards?
Yep, I have 4x K10 and 2x C2050.

I get 120 kh/s per K10, so 480 kh/s total on VTC:
./CudaMiner/cudaminer --algo=scrypt:2048 -u user -p secret -o pool -d 0,1,2,3 -i 0,0,0,0 -H 1 -l K64x4 --no-autotune -C 1

85 kh/s per C2050, so 170 kh/s total on VTC:
./CudaMiner/cudaminer --algo=scrypt:2048 -u user -p secret -o pool --no-autotune -d 0,1 -i 0,0 -l F14x16 -C 1 -H 1

Someone asked if you need to have something else installed to mine; the answer is no, you don't.
|
|
|
Oh my god, did you not look at the README? Autotune is completely non-functional for keccak. The keccak code was hacked up in just 24 hours and is barely functional. Autotune did NOT make it in before the coin's release.
Take the example values from the README. They start with upper-case F and K letters:
-L 256 -l F1024x16 for Fermi devices
-L 256 -l K1024x32 for Kepler devices
Don't bother with -l T... as this is slower and requires Compute 3.5 anyway.
I would also suggest running exactly one cudaminer instance per card, as otherwise the likelihood of crashing is higher. Only the latest code on github improves this.
Christian
Damn, sorry =/
|
|
|
3.0, and the hash rate is low, isn't it?
And I have two C2050s (capability 2.0) that do the same thing.
Share your launch configs, please. I would like to figure out what is wrong there, but from the cudaminer screenshot I cannot guess much.

K10 (cap 3.0):
./CudaMiner/cudaminer --algo=keccak -u worker -p x -o stratum+tcp://max.netcodepool.org:5555 -d 0,1,2,3 -i 0,0,0,0 -H 1 -C 2 -l k144x24

C2050 (cap 2.0):
./CudaMiner/cudaminer --algo=keccak -u worker -p x -o stratum+tcp://max.netcodepool.org:5555 -d 0,1 -i 0,0 -H 1 -C 2 -l f2571x1

The -l values are from auto-tune.
|
|
|
Those hashes that aren't validating on the CPU are not sent to the pool at all (that would be a guaranteed boo and wasted network bandwidth!), so they don't go into the 17/17 statistics you see below.
What compute capability is a Tesla K10.G1.8GB ?
Christian
3.0, and the hash rate is low, isn't it? And I have two C2050s (capability 2.0) that do the same thing.
|
|
|
Okay, I have everything pointed at the pool now. One Windows box, two Linux boxes. No CPU validation errors, good accept rates everywhere. Let's see how this plays (pays) out over time.
Did my latest commit fix your CPU validation issues?
Christian
No. Compiled from github (like 10 minutes ago) and getting a validation error on every share (but they are valid):

[2014-02-08 22:13:39] GPU #2: Tesla K10.G1.8GB result for nonce $aca702b9 does not validate on CPU!
[2014-02-08 22:13:39] GPU #2: Tesla K10.G1.8GB result for nonce $aca702b9 does not validate on CPU!
[2014-02-08 22:13:39] GPU #2: Tesla K10.G1.8GB result for nonce $aca702b9 does not validate on CPU!
[2014-02-08 22:13:39] GPU #2: Tesla K10.G1.8GB, 15550 khash/s
[2014-02-08 22:13:39] accepted: 17/17 (100.00%), 62180 khash/s (yay!!!)
ideas?
|
|
|
"PATH=/usr/local/cuda-5.5/bin:$PATH make"
Just add export PATH=$PATH:/usr/local/cuda/bin/
to the end of ~/.bashrc "LD_LIBRARY_PATH=/usr/local/cuda-5.5/lib64 ./cudaminer --algo=scrypt-jane:UltraCoin -o [pool-info] -O username:password -q -l 70x4 -H 2 -C 1 -S &"
and on terminal sudo ldconfig /usr/local/cuda/lib64
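Note the two snippets differ in where the CUDA directory lands on PATH: `PATH=/usr/local/cuda-5.5/bin:$PATH make` prepends it, so the CUDA 5.5 tools shadow anything else for that one make run, while the ~/.bashrc line appends it, so those tools are only found when nothing earlier on PATH provides them. A throwaway sketch of that difference (the fake `nvcc` is purely illustrative):

```shell
# Create a stand-in 'nvcc' in a temp dir just to show PATH ordering.
tmpdir=$(mktemp -d)
printf '#!/bin/sh\necho fake 5.5\n' > "$tmpdir/nvcc"
chmod +x "$tmpdir/nvcc"

# Prepended: the stand-in shadows any system nvcc for this one command.
PATH="$tmpdir:$PATH" nvcc

# Appended: the stand-in only runs if no other nvcc exists earlier on PATH.
PATH="$PATH:$tmpdir" nvcc

rm -rf "$tmpdir"
```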
|
|
|
Lol, a Chinese pool. Thanks for the info; I think I will go to Vertcoin.
|
|
|
Anyone mining Vertcoin? If so where (pool or solo)? And how many hash/s?
|
|
|
If I run Jane in an Ubuntu VM on a Windows host, would this still give me Windows' stupid memory management?
Yes, you can't pass the GPU through to a virtual machine (not easily).
|
|
|
Is it normal that cudaminer is not using 100% of GPU RAM? Using normal scrypt mining:

+------------------------------------------------------+
| NVIDIA-SMI 5.319.17   Driver Version: 319.17         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K10.G2.8GB    Off  | 0000:05:00.0     Off |                    0 |
| N/A   58C    P0   108W / 117W |  1085MB /  3583MB    |     99%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla K10.G2.8GB    Off  | 0000:06:00.0     Off |                    0 |
| N/A   53C    P0   108W / 117W |  1085MB /  3583MB    |     99%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla K10.G1.8GB    Off  | 0000:85:00.0     Off |                    0 |
| N/A   59C    P0   115W / 117W |  1085MB /  3583MB    |     99%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla K10.G1.8GB    Off  | 0000:86:00.0     Off |                    0 |
| N/A   68C    P0   117W / 117W |  1085MB /  3583MB    |     99%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Compute processes:                                               GPU Memory |
|  GPU       PID  Process name                                     Usage      |
|=============================================================================|
|    0     24937  ./CudaMiner/cudaminer                               4297MB  |
|    1     24937  ./CudaMiner/cudaminer                               4297MB  |
|    2     24937  ./CudaMiner/cudaminer                               4297MB  |
|    3     24937  ./CudaMiner/cudaminer                               4297MB  |
+-----------------------------------------------------------------------------+
|
|
|
The 12-18 version is having problems with my Tesla C2075s. I have two C2075s on a server. I do not use #0 for mining; I run cudaminer on #1. Before the 12-18 version came out, everything was fine. I use -d 1 -i 0 -C 2 -l F14x16 and get about 178 kh/s. The driver was 311.35 (Win8 64-bit, CUDA 5.0).

After 12-18 came out, the first thing I found was that cudaminer couldn't find the GPU card. So I upgraded to the latest driver, 331.82, and funny things began. I couldn't benchmark (-D) #0 because if I used -i 1 I got a lot of garbage on the screen; if I used -i 0 I got a lot of doesn't-validate-on-CPU errors. If I benchmark the #1 card, I get low performance at ~130 kh/s, but the strangest thing is that nvidia-smi reports the #0 card is drawing power and getting hot when I run cudaminer on #1 (-d 1) -- on the cudaminer console it does say "#1" as shown below. So somehow the miner is using the wrong card.

Then I figured maybe I should upgrade the CUDA toolkit to 5.5. After I did, the driver version went down to 320.57, but cudaminer can still see the cards. However, not only does the 12-18 cudaminer use the wrong card, the 12-10 version and older now use the wrong card too! This is the output of the 11-20 version; its performance is top notch compared with all versions:

[2013-12-30 16:52:15] Binding thread 0 to cpu 0
[2013-12-30 16:52:15] 1 miner threads started, using 'scrypt' algorithm.
[2013-12-30 16:52:15] Starting Stratum on stratum+tcp://us.wemineltc.com:80
[2013-12-30 16:52:16] Stratum detected new block
[2013-12-30 16:52:16] GPU #1: Tesla C2075 with compute capability 2.0
[2013-12-30 16:52:16] GPU #1: interactive: 0, tex-cache: 2D, single-alloc: 1
[2013-12-30 16:52:16] GPU #1: using launch configuration F14x16
[2013-12-30 16:52:16] GPU #1: Tesla C2075, 7168 hashes, 26.99 khash/s
[2013-12-30 16:52:25] GPU #1: Tesla C2075, 1619968 hashes, 181.88 khash/s
[2013-12-30 16:52:28] GPU #1: Tesla C2075, 616448 hashes, 180.97 khash/s
[2013-12-30 16:52:29] accepted: 1/1 (100.00%), 180.97 khash/s (yay!!!)

Please let me know what you can make of it, since I want to use only #1 for mining.

Seems kinda slow... I get:
[2013-12-30 19:29:24] GPU #0: Tesla C2050, 946176 hashes, 182.60 khash/s
[2013-12-30 19:29:26] GPU #0: Tesla C2050, 358400 hashes, 177.92 khash/s
with -i 0 -l F28x16 -C 1 -H 1, CUDA 5.5, driver 319.17, CudaMiner compiled from github on Debian Wheezy.
|
|
|
Can you give an example of how to use the -c option?
-c, --config=FILE     load a JSON-format configuration file
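This option comes from the cpuminer code base that cudaminer is derived from, where the JSON keys mirror the long option names. A hypothetical sketch (the key names, pool URL, and credentials are all assumptions/placeholders, so check against your cudaminer's --help output):

```shell
# Hypothetical cudaminer.conf -- keys are assumed to mirror the long option
# names, as in the cpuminer code base; values here are placeholders.
cat > cudaminer.conf <<'EOF'
{
  "algo": "scrypt",
  "url": "stratum+tcp://pool.example.com:3333",
  "user": "worker",
  "pass": "secret"
}
EOF
# then: ./cudaminer -c cudaminer.conf
```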
|
|
|
Btw, can you upload it to github (for example)? Every time I download I have to dos2unix all files.

Do a search for cudaminer on github and grab it from there. It even has a few (so far unreleased) improvements in it. Maybe the source tarball in the 2013-12-18 version doesn't yet have the necessary compilation fixes for Linux, so that's why your CUDA compiler still crashes on the spinlock kernel.

Compiling from github worked! Got almost a 20% increase, thanks!
|
|
|
Both the 32-bit and 64-bit versions crash on my Windows 7 system:

[2013-12-28 07:23:28] Starting Stratum on stratum+tcp://192.168.0.58:3333
[2013-12-28 07:23:28] 2 miner threads started, using 'scrypt' algorithm.
[2013-12-28 07:23:28] Stratum detected new block
[2013-12-28 07:23:29] GPU #1: GeForce 8800 GTX with compute capability 1.0
[2013-12-28 07:23:29] GPU #1: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-12-28 07:23:29] GPU #0: GeForce 8800 GTX with compute capability 1.0
[2013-12-28 07:23:29] GPU #0: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-12-28 07:23:30] GPU #1: Performing auto-tuning (Patience...)
[2013-12-28 07:23:30] GPU #1: maximum warps: 165
[2013-12-28 07:23:30] GPU #1: 0.00 khash/s with configuration L0x0
[2013-12-28 07:23:30] GPU #1: using launch configuration L0x0

Have you got the latest nvidia drivers? I had the same when on old drivers.

Just updated to the latest WHQL 331.82, rebooted, and it still crashed.

Try passing a launch configuration like -l L16x3. Would it still crash?
Christian

I had this problem before. Auto-tuning crashed when I ran with more than 1 card. Try running with -d 0 just to get the best config, then run with -d 0,1 -l <best config from auto-tuning>.
|
|
|
It was a problem with the 331.20 driver; downgraded to 319.17 and device query works. Just downloaded from the first page and same thing. Full log:

gcc -std=gnu99 -DHAVE_CONFIG_H -I. -msse2 -fopenmp -pthread -fno-strict-aliasing -O3 -MT cudaminer-cpu-miner.o -MD -MP -MF .deps/cudaminer-cpu-miner.Tpo -c -o cudaminer-cpu-miner.o `test -f 'cpu-miner.c' || echo './'`cpu-miner.c
gcc -std=gnu99 -DHAVE_CONFIG_H -I. -msse2 -fopenmp -pthread -fno-strict-aliasing -O3 -MT cudaminer-util.o -MD -MP -MF .deps/cudaminer-util.Tpo -c -o cudaminer-util.o `test -f 'util.c' || echo './'`util.c
gcc -std=gnu99 -DHAVE_CONFIG_H -I. -msse2 -fopenmp -pthread -fno-strict-aliasing -O3 -MT cudaminer-sha2.o -MD -MP -MF .deps/cudaminer-sha2.Tpo -c -o cudaminer-sha2.o `test -f 'sha2.c' || echo './'`sha2.c
g++ -DHAVE_CONFIG_H -I. -msse2 -fopenmp -pthread -fno-strict-aliasing -O3 -MT cudaminer-scrypt.o -MD -MP -MF .deps/cudaminer-scrypt.Tpo -c -o cudaminer-scrypt.o `test -f 'scrypt.cpp' || echo './'`scrypt.cpp
mv -f .deps/cudaminer-cpu-miner.Tpo .deps/cudaminer-cpu-miner.Po
/usr/local/cuda/bin/nvcc -O3 -arch=compute_10 --maxrregcount=124 --ptxas-options=-v -o salsa_kernel.o -c salsa_kernel.cu
mv -f .deps/cudaminer-util.Tpo .deps/cudaminer-util.Po
/usr/local/cuda/bin/nvcc -O3 -Xptxas "-abi=no -v" -arch=compute_30 --maxrregcount=63 -o spinlock_kernel.o -c spinlock_kernel.cu
mv -f .deps/cudaminer-sha2.Tpo .deps/cudaminer-sha2.Po
/usr/local/cuda/bin/nvcc -O3 -arch=compute_10 --maxrregcount=124 --ptxas-options=-v -o legacy_kernel.o -c legacy_kernel.cu
mv -f .deps/cudaminer-scrypt.Tpo .deps/cudaminer-scrypt.Po
/usr/local/cuda/bin/nvcc -O3 -Xptxas "-abi=no -v" -arch=compute_20 --maxrregcount=63 -o fermi_kernel.o -c fermi_kernel.cu
./legacy_kernel.cu(310): Warning: Cannot tell what pointer points to, assuming global memory space
................ a lot of pointer warnings .................
./legacy_kernel.cu(274): Warning: Cannot tell what pointer points to, assuming global memory space
/usr/local/cuda/bin/nvcc -O3 -Xptxas "-abi=no -v" -arch=compute_35 --maxrregcount=64 -o test_kernel.o -c test_kernel.cu
/usr/local/cuda/bin/nvcc -O3 -Xptxas "-abi=no -v" -arch=compute_35 --maxrregcount=64 -o titan_kernel.o -c titan_kernel.cu
UNREACHABLE executed!
Stack dump:
0. Running pass 'NVPTX DAG->DAG Pattern Instruction Selection' on function '@_Z28spinlock_scrypt_core_kernelBILi3EEvPj'
Aborted
make[2]: *** [spinlock_kernel.o] Error 134
make[2]: *** Waiting for unfinished jobs....

Btw, can you upload it to github (for example)? Every time I download I have to dos2unix all files.

Oh, and the output of autogen.sh:
./autogen.sh: line 1: aclocal: command not found
|
|
|
Weird. The 12-18 is known to compile on CUDA 5.5. For this to work, I changed the outdated spinlock kernel to use compute_12, and I had to lower the shared memory use (by limiting the max number of warps to 12, I believe). No idea why it still uses compute_30 for you.
Try running ./autogen.sh before configure and make, maybe?
Christian
It's something wrong with my system; it freezes when I run device query, for example. About the spinlock: maybe I messed up the sources. I will download again and retry.
|
|
|
Hey, first of all thanks for your effort in making cudaminer. When I had CUDA 5.0 everything worked great, but I updated to 5.5 and now I can't compile:

/usr/local/cuda/bin/nvcc -O3 -Xptxas "-abi=no -v" -arch=compute_30 --maxrregcount=63 -o spinlock_kernel.o -c spinlock_kernel.cu
UNREACHABLE executed!
Stack dump:
0. Running pass 'NVPTX DAG->DAG Pattern Instruction Selection' on function '@_Z28spinlock_scrypt_core_kernelBILi3EEvPj'
Aborted
make[2]: *** [spinlock_kernel.o] Error 134

It's the same error for 12-10 and 12-18. Any ideas? Driver version is 331.20.

I have 2 miners:
miner 1: 4x Tesla K10
miner 2: Tesla C1050 and Tesla C2050
Both are Debian Wheezy.
|
|
|
Hey, thanks for the great work! Can't wait to use it.
|
|
|