That's what I'm starting to suspect - the difference between "nominal" and "guaranteed" is a little wiggle room to allow for a dead chip.
Still, doesn't hurt to ask and know for sure!
|
So Canary, does that mean there's no way to troubleshoot the dead chip? What are the terms of warranty service in that case?
|
How do you switch between low- and high-power modes? Is it a hidden configuration setting, or do I have to jumper something?
I'm running the blades off a backplane that's being fed by one of those HP DPS800s that the backplane is designed to work with.
Out of the whole setup, I have just one "x" chip. It "might" be a power issue, but I'm more inclined to think it's just a funky chip. I'll happily troubleshoot anything, though.
EDIT: These are the non-overclockable blades. Does that mean there's no such thing as a low/high-power mode?
|
I got my blades today! Most of them are perfect... but I think one of them has a dead chip:

Chip: OOOOOOOOOOOOOOOOOOOxOOOOOOOOOOOO

Any way to fix that? If not, does that qualify for warranty replacement?
|
It also works with 5.5. In case anyone else needs it, here's my configure.sh:

./configure "CFLAGS=-O3" "CXXFLAGS=-O3" --with-cuda=/usr/local/cuda-5.5 --build=x86_64-unknown-linux-gnu --host=i686-unknown-linux-gnu

Various forum posts had given me the idea that building a 64-bit binary was a bad idea.

EDIT: I removed the --build and --host flags and recompiled just to see what would happen. It seems to be working just fine without them, and now I'm getting lines that look like this:

[2013-09-08 10:27:19] accepted: 2/2 (100.00%), 80.80 khash/s (yay!!!)

So I guess the 64-bit binary works just fine?
|
Ok, you guys were on the money - the issue was with the driver in the Fedora repositories. Once I installed the latest driver straight from Nvidia, it started working with my CUDA 5.0 binary. Thanks for the help.
|
I tried downgrading to CUDA 5.0 and recompiling. Pretty much the same result:

LD_LIBRARY_PATH=/usr/local/cuda-5.0/lib64/ ./cudaminer -d 0 -i 0 --benchmark

*** CudaMiner for nVidia GPUs by Christian Buchner ***
This is version 2013-07-13 (alpha)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
Cuda additions Copyright 2013 Christian Buchner
My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

[2013-09-08 08:59:59] 1 miner threads started, using 'scrypt' algorithm.
[2013-09-08 08:59:59] GPU #0: starting up...
[2013-09-08 08:59:59] GPU #0: with compute capability 131072.0
[2013-09-08 08:59:59] GPU #0: interactive: 0, tex-cache: 0 , single-alloc: 0
[2013-09-08 08:59:59] GPU #0: Performing auto-tuning (Patience...)
[2013-09-08 08:59:59] GPU #0: 0.00 khash/s with configuration 0x0
[2013-09-08 08:59:59] GPU #0: using launch configuration 0x0
Floating point exception (core dumped)

If I remove the -d 0 -i 0 arguments, here's what I get instead:

LD_LIBRARY_PATH=/usr/local/cuda-5.0/lib64/ ./cudaminer --benchmark

*** CudaMiner for nVidia GPUs by Christian Buchner ***
This is version 2013-07-13 (alpha)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
Cuda additions Copyright 2013 Christian Buchner
My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

[2013-09-08 09:58:51] GPU #6: starting up...
[2013-09-08 09:58:51] GPU #6: with compute capability 131072.0
[2013-09-08 09:58:51] GPU #6: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #6: Performing auto-tuning (Patience...)
[2013-09-08 09:58:51] GPU #0: starting up...
[2013-09-08 09:58:51] GPU #7: starting up...
[2013-09-08 09:58:51] GPU #0: with compute capability -67106336.32627
[2013-09-08 09:58:51] GPU #0: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #0: Performing auto-tuning (Patience...)
[2013-09-08 09:58:51] GPU #1: starting up...
[2013-09-08 09:58:51] GPU #-1: starting up...
[2013-09-08 09:58:51] GPU #7: with compute capability 131072.0
[2013-09-08 09:58:51] GPU #7: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #0: starting up...
[2013-09-08 09:58:51] GPU #0: 0.00 khash/s with configuration 0x0
[2013-09-08 09:58:51] GPU #0: using launch configuration 0x0
[2013-09-08 09:58:51] GPU #0: starting up...
[2013-09-08 09:58:51] GPU #6: 0.00 khash/s with configuration 0x0
[2013-09-08 09:58:51] GPU #6: using launch configuration 0x0
[2013-09-08 09:58:51] GPU #1: starting up...
[2013-09-08 09:58:51] GPU #0: with compute capability 131072.0
[2013-09-08 09:58:51] GPU #3: starting up...
[2013-09-08 09:58:51] GPU #4: starting up...
[2013-09-08 09:58:51] GPU #0: starting up...
[2013-09-08 09:58:51] GPU #0: with compute capability 131072.0
[2013-09-08 09:58:51] GPU #0: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-08 09:58:51] GPU #1: with compute capability 131072.0
Floating point exception (core dumped)

What on earth is it doing? GPU #7? GPU #-1? This can't be right. I agree that it looks like it's not detecting my video card correctly, but how can I figure out what the issue is? I have no idea at all what makes it think I have more than one GPU in my system.
|
I have CUDA 5.5 installed; that's what I compiled cudaminer against.
|
I'm using the proprietary driver - I can play games just fine, so I would think that the driver isn't the issue? I could be wrong, but:

glxinfo | grep OpenGL
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 650 Ti/PCIe/SSE2
OpenGL core profile version string: 4.3.0 NVIDIA 319.32
OpenGL core profile shading language version string: 4.30 NVIDIA via Cg compiler
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 4.3.0 NVIDIA 319.32
OpenGL shading language version string: 4.30 NVIDIA via Cg compiler
OpenGL context flags: (none)
OpenGL profile mask: (none)
OpenGL extensions:
|
I compiled for Linux (Fedora 18) and it appears to have worked, but when I try to run the program, I get this:

LD_LIBRARY_PATH=/usr/local/cuda-5.5/lib64/ ./cudaminer -o stratum+tcp://ltc.give-me-coins.com:3333 -O [my credentials]

*** CudaMiner for nVidia GPUs by Christian Buchner ***
This is version 2013-07-13 (alpha)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
Cuda additions Copyright 2013 Christian Buchner
My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

[2013-09-06 20:39:24] Starting Stratum on stratum+tcp://ltc.give-me-coins.com:3333
[2013-09-06 20:39:25] thread 31334 create failed

If I limit the number of threads with a -t option, I get something more like this:

LD_LIBRARY_PATH=/usr/local/cuda-5.5/lib64/ ./cudaminer -o stratum+tcp://ltc.give-me-coins.com:3333 -O [my credentials] -t 1024

*** CudaMiner for nVidia GPUs by Christian Buchner ***
This is version 2013-07-13 (alpha)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
Cuda additions Copyright 2013 Christian Buchner
My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

[2013-09-06 20:41:06] Starting Stratum on stratum+tcp://ltc.give-me-coins.com:3333
[2013-09-06 20:41:06] 1024 miner threads started, using 'scrypt' algorithm.
[2013-09-06 20:41:07] GPU #0: starting up...
[2013-09-06 20:41:07] GPU #1: starting up...
[2013-09-06 20:41:07] GPU #1: with compute capability 0.0
[2013-09-06 20:41:07] GPU #1: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-06 20:41:07] GPU #1: Performing auto-tuning (Patience...)
[2013-09-06 20:41:07] GPU #0: w�= with compute capability 0.0
[2013-09-06 20:41:07] GPU #0: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-06 20:41:07] GPU #0: Performing auto-tuning (Patience...)
[2013-09-06 20:41:07] GPU #4: starting up...
[2013-09-06 20:41:07] GPU #4: with compute capability 0.0
[2013-09-06 20:41:07] GPU #4: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-09-06 20:41:07] GPU #4: Performing auto-tuning (Patience...)
[2013-09-06 20:41:07] GPU #4: 0.00 khash/s with configuration 0x0
[2013-09-06 20:41:07] GPU #4: using launch configuration 0x0
Floating point exception (core dumped)

I only have one GPU (a GTX 650 Ti), so I'm not sure GPU #1, GPU #4, etc. make any sense. Is this because I'm on a 64-bit system?
|
Waiting on some bitcoins to arrive. When they come in I'll be buying a blade.
|
It might help if you specified where the program came from. Did you compile it from source yourself, or get it from a repository or somewhere else?
|