Antares88
Newbie
Offline
Activity: 10
Merit: 0
|
|
November 27, 2013, 05:04:56 PM |
|
maybe it's because they are considered legacy cards and the software is tested on Kepler and Fermi architectures :-/
|
|
|
|
zuludrag
|
|
November 27, 2013, 06:14:11 PM |
|
maybe it's because they are considered legacy cards and the software is tested on Kepler and Fermi architectures :-/
Anyway, with these cards I get best results with Fermi and Kepler kernel... Legacy kernel works much slower...
|
|
|
|
Bitice
Member
Offline
Activity: 66
Merit: 10
|
|
November 27, 2013, 07:33:16 PM |
|
A 2013-11-20 release was just posted:
I may have found the reason for the crashiness of the Legacy and Fermi kernels. These had identical CUDA kernel names (situated in different .cu files), and that name clash may have caused kernel launch issues on a lot of systems. I can now run and autotune the Legacy kernel again (on the 560 Ti cards at least) after fixing that naming issue. I also applied some optimizations found in the other kernels to the old Legacy kernel. Go fire up your old compute 1.x irons! They might surprise you with newfound agility. Some updated stats on cards like the 8800 GS/GT/GTS/GTX, 9600 GSO, GT 240, GTX 260, and GTX 280 would be nice (Linux and Windows, for comparison).
Just fired up my old GTX 260 Core 216 to try it out, on Windows 7 64-bit. Autotune gives me L54x3 and I get about 44-45 KHash. The card is a dual-fan OC version from MSI. The launch config I used was: cudaminer -d 1 -C 0 -i 0 -H 1 -l L54x3
I tried a few different ##x## combos, but the best ones so far were 54x3 and 162x1.
|
|
|
|
cbuchner1 (OP)
|
|
November 28, 2013, 12:49:24 AM |
|
Just fired up my old GTX 260 Core 216 to try it out, on Windows 7 64-bit. Autotune gives me L54x3 and I get about 44-45 KHash. The card is a dual-fan OC version from MSI. The launch config I used was: cudaminer -d 1 -C 0 -i 0 -H 1 -l L54x3
This is very much in line with what I was getting back in April. The GTX 260s were promptly replaced with something faster (a 560 Ti, in two editions). Maybe on Linux or WinXP the cards would mine faster.
Christian
|
|
|
|
TiWu
Newbie
Offline
Activity: 25
Merit: 0
|
|
November 28, 2013, 02:43:28 AM |
|
The latest version yields a new launch config for the GTX 570: it was 30x8 till now, but autotune chose F15x16 with this version. Seems to give a little increase in hashrate, to about 225 kH/s (it was 5-8 kH/s lower with 30x8).
Running stock core and memory clocks: 732/1900.
|
|
|
|
HeatSurge
Newbie
Offline
Activity: 8
Merit: 0
|
|
November 28, 2013, 04:38:42 AM |
|
To the poster above: autotune sometimes chooses DIFFERENT configs with certain -C settings for me, so make sure you run it several times to be sure.
Although I think 30x8 is generally about the same as 15x16 (it might have to do with them being multiples of each other? Divide by 2, multiply by 2? :-) I have no idea how this works or what I'm talking about, I just noticed a pattern.)...
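HeatSurge's hunch checks out arithmetically: 30x8 and 15x16 multiply out to the same total (240), which may be why they perform alike. Assuming the BxW notation means B blocks times W warps (an assumption about cudaminer's notation, not confirmed in this thread), a quick sketch:

```python
# Sketch: if a launch config "BxW" denotes B blocks x W warps (an
# assumption about cudaminer's notation), configs with the same product
# launch the same total number of warps, so similar hashrates are
# plausible.

def total_warps(config: str) -> int:
    """Parse a 'BxW' launch config and return blocks * warps."""
    blocks, warps = config.split("x")
    return int(blocks) * int(warps)

print(total_warps("30x8"))   # 240
print(total_warps("15x16"))  # 240
```

So halving one number and doubling the other keeps the total work identical, which matches the "divide by 2, multiply by 2" pattern noticed above.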
|
|
|
|
lemons
|
|
November 28, 2013, 08:10:08 AM |
|
My ASUS GTX 550 Ti: 77 KH/s
|
|
|
|
highwaychile
Newbie
Offline
Activity: 3
Merit: 0
|
|
November 28, 2013, 03:42:37 PM |
|
Hi, I'm trying to get cudaminer running on an Nvidia GeForce GT 750M, but when executing cudaminer I get this error:
ben@Ben:~/cudaminer-2013-11-15/cudaminer-src-2013.11.15$ optirun ./cudaminer -o stratum+tcp://europe.mine-litecoin.com -O foo.bar:pass
*** CudaMiner for nVidia GPUs by Christian Buchner ***
This is version 2013-11-15 (alpha) based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
Cuda additions Copyright 2013 Christian Buchner
My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm
[2013-11-28 00:08:21] Starting Stratum on stratum+tcp://europe.mine-litecoin.com
[2013-11-28 00:08:21] 1 miner threads started, using 'scrypt' algorithm.
[2013-11-28 00:08:21] Binding thread 0 to cpu 0
[2013-11-28 00:08:21] JSON decode failed(1): '[' or '{' expected near 'HTTP'
[2013-11-28 00:08:21] JSON decode failed(1): '[' or '{' expected near 'Server'
[2013-11-28 00:08:21] ...retry after 15 seconds
I've already searched for this problem, but I've found no solution. Does anybody know how to fix this? Thank you in advance!
|
|
|
|
Vanderi
|
|
November 28, 2013, 03:44:46 PM |
|
Lol, fail. I got greedy on the LTC and thought I'd put the two GTX 680 4GB cards from my gaming rig to work. Well, "auto,auto" it said, then it froze, and a hard reset broke the dual-SSD software RAID setup. Just a blinking cursor on a black screen.
Now: only the OS on C: - check. Backups on all other drives - check. Backup of my Skyrim lvl 41 character x 2 - check. Acronis True Image 13 installed, bootable from removable media to rebuild the OS disc from a somewhat fresh backup - check.
The fail actually was my lazy ass just dragging over to the kitchen, opening the MacBook Pro.
I'm getting old.
*edit: I distinctly remember, with flags flying high, vowing never to do any unstable sheisse on the desktop behemoth when I engaged the software RAID. Damn greed. It always comes back and bites you in the ass.
|
|
|
|
cbuchner1 (OP)
|
|
November 28, 2013, 05:32:35 PM |
|
ben@Ben:~/cudaminer-2013-11-15/cudaminer-src-2013.11.15$ optirun ./cudaminer <blah>
You're maybe not connecting to the stratum port, but to the pool's HTTP port (note the 'HTTP' in those JSON decode errors). Append a :3334 to the -o argument, maybe?
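As an invocation sketch only (the pool URL and credentials are the original poster's; the :3334 stratum port is cbuchner1's suggestion, not verified here), the corrected command would look like:

```shell
# Hypothetical corrected invocation: appending the stratum port (:3334,
# per cbuchner1's suggestion) keeps cudaminer off the pool's HTTP front
# end, which is what produced the "JSON decode failed ... near 'HTTP'"
# errors above.
optirun ./cudaminer \
    -o stratum+tcp://europe.mine-litecoin.com:3334 \
    -O foo.bar:pass
```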
|
|
|
|
highwaychile
Newbie
Offline
Activity: 3
Merit: 0
|
|
November 28, 2013, 06:59:04 PM |
|
Hi,
thank you for your reply! After trying 3334 I also tried 443, and that one works!
I was playing around a little bit, and by using "-l Tauto" I got a (probably unrealistic) 620 kH/s, but when running cudaminer without "--benchmark" I got lots of "result does not validate on CPU!" errors. Therefore I'm now using "-i 0 --no-autotune" at ~50 kH/s, which is probably the best I can expect from my card (GeForce GT 750M). Thanks for your help!
|
|
|
|
netfq
Newbie
Offline
Activity: 9
Merit: 0
|
|
November 28, 2013, 07:04:45 PM |
|
I'm getting the following error: "Unable to query CUDA driver version! Is an nVidia driver installed?"
Setup:
Cudaminer 2013-11-20, Windows 7 (64-bit), nVidia GeForce GTX 580 (running GeForce 331.83 drivers)
Does anyone have any idea how to get this working?
|
|
|
|
CryptoLego
Newbie
Offline
Activity: 7
Merit: 0
|
|
November 28, 2013, 07:44:44 PM |
|
I wanted to give my stats:
GTX 580 using 331.82 drivers
Using Cudaminer 11/20/2013
Win 7 X64
options: -H 1 -l F16x14
Auto-tuning chose F32x8, which averaged lower than what I am getting with F16x14.
Avg about 230 khash/s.
I was using the 64-bit version but was only averaging about 200 khash/s.
Using about 95% of the GPU, running at a temp of about 83 degrees Celsius.
I believe this is the best I can expect running a stock GTX 580 with no overclocking.
|
|
|
|
fruittool
Newbie
Offline
Activity: 19
Merit: 0
|
|
November 28, 2013, 08:16:00 PM |
|
I wanted to give my stats:
GTX 580 using 331.82 drivers
Using Cudaminer 11/20/2013
Win 7 X64
options: -H 1 -l F16x14
Auto-tuning was using F32x8 which was averaging lower than what I am getting using F16x14.
avg about 230 khash/s
I was using the 64 bit version but I was only averaging about 200 khash/s
Using about 95% of GPU running at a temp of about 83 degrees Celsius.
I believe this is the best I can expect running a stock GTX 580 with no overclocking.
I think you should get more than that, as I get 230 KH/s from a 560 Ti 448-core. Maybe try F16x16? Oh, and -i 0.
|
|
|
|
CryptoLego
Newbie
Offline
Activity: 7
Merit: 0
|
|
November 28, 2013, 08:36:58 PM |
|
I wanted to give my stats:
GTX 580 using 331.82 drivers
Using Cudaminer 11/20/2013
Win 7 X64
options: -H 1 -l F16x14
Auto-tuning was using F32x8 which was averaging lower than what I am getting using F16x14.
avg about 230 khash/s
I was using the 64 bit version but I was only averaging about 200 khash/s
Using about 95% of GPU running at a temp of about 83 degrees Celsius.
I believe this is the best I can expect running a stock GTX 580 with no overclocking.
I think you should get more than that, as I get 230 KH/s from a 560 Ti 448-core. Maybe try F16x16? Oh, and -i 0.
Hey fruittool: thanks for the tip. GPU usage has increased to 99% and I am now hitting a max of 240 khash/s, with the GPU core running at 85 degrees Celsius.
|
|
|
|
netfq
Newbie
Offline
Activity: 9
Merit: 0
|
|
November 28, 2013, 08:40:24 PM |
|
CryptoLego: I don't get it ... you have basically the same setup as me, yet I get the dreaded "Unable to query CUDA driver" error ... What have you installed that I have not? Any suggestions would be great!
|
|
|
|
CryptoLego
Newbie
Offline
Activity: 7
Merit: 0
|
|
November 28, 2013, 08:49:54 PM |
|
CryptoLego: I don't get it ... you have basically the same setup as me, yet I get the dreaded "Unable to query CUDA driver" error ... What have you installed that I have not? Any suggestions would be great!
I noticed that, but I'm such a noob at this that I was not sure what to suggest. I did receive a similar error when I first ran cudaminer; I realized it was due to a syntax error: I was not using the -u and -p flags for the username and password. I pulled an example from the server pool I am using and noticed the difference. I'm not sure what else to suggest. Hopefully, someone with more experience will be able to assist you.
|
|
|
|
Roger100
Newbie
Offline
Activity: 24
Merit: 0
|
|
November 29, 2013, 04:20:01 AM Last edit: November 29, 2013, 05:00:36 AM by Roger100 |
|
As a total newb, I damn nearly pulled my hair out with this thing, but I eventually got it to work with my 2x 780 EVGA FTWs. I pored over this entire thread TWICE to find a config that worked, and NOTHING WORKED until I came across someone who posted this extension: -H 1 -l K48x6 -C 2
That worked great, but only one card functioned, and with weird lag issues. That all went away after disabling SLI and adding: -d1,0
It did an autotune for the second card, and then both fired into life! YAAAAAY, I didn't just waste FOUR HOURS!!!! I'm so happy! Currently hashing away at a combined speed of 620 kh/s. If anyone has any other suggestions to tweak these bad boys, I'd be super grateful to hear them. Anyway, just thought I would share my findings.
EDIT: Dropped to 560 kh/s now. The next mission: MORE POWWWWEEERRRRR!
|
|
|
|
DeeBo
Newbie
Offline
Activity: 52
Merit: 0
|
|
November 29, 2013, 04:35:25 AM Last edit: November 29, 2013, 05:07:27 AM by DeeBo |
|
Can't get it to compile on Linux (running the Arch Linux distro). The chmod +x command worked, although it looks like autotools.sh is no longer part of the package.
$ ./configure.sh
./configure.sh: ./configure: /bin/sh^M: bad interpreter: No such file or directory
$ ./configure
bash: ./configure: /bin/sh^M: bad interpreter: No such file or directory
I'm dual-booting, so I'm about to try this in Windows 7 and post my results. Trying to see if I can do better than the ~46 KH/s I'm currently getting from cpuminer.
EDIT: Looks to be running well in Windows. Getting ~210 KH/s on my GTX 770 running 361.65 drivers. Would prefer to be on Linux, though.
|
|
|
|
atomton
Newbie
Offline
Activity: 4
Merit: 0
|
|
November 29, 2013, 05:34:29 AM |
|
Can't get it to compile on Linux (running the distro Arch Linux.) The chmod +x command worked although it looks like autotools.sh is no longer part of the package. $ ./configure.sh ./configure.sh: ./configure: /bin/sh^M: bad interpreter: No such file or directory
$ ./configure bash: ./configure: /bin/sh^M: bad interpreter: No such file or directory
I'm dual-booting so I'm about to try this in Windows 7 and post my results. Trying to see if I can do better than ~46KH/s which I'm currently getting from cpuminer. EDIT: Looks to be running well in Windows. Getting ~210KH/s on my GTX 770 running 361.65 drivers. Would prefer to be on Linux though.
You need to convert the line endings from DOS to UNIX.
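Concretely, the conversion atomton describes can be done with sed (or the dos2unix utility, if installed). A minimal sketch, using a throwaway demo file rather than the real cudaminer tree:

```shell
# The "/bin/sh^M: bad interpreter" error means the script has DOS (CRLF)
# line endings, so the kernel reads the interpreter path as "/bin/sh\r".
# Demo: create a file with CRLF endings, mimicking the shipped script.
printf '#!/bin/sh\r\necho ok\r\n' > configure.sh

# The fix: strip the trailing carriage return from every line, in place.
# (dos2unix configure.sh does the same, if that utility is installed.)
sed -i 's/\r$//' configure.sh

# The script now runs normally.
sh ./configure.sh   # prints: ok
```

Run the same sed line against configure (and any other shipped scripts) and the "./configure" step should get past the bad-interpreter error.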
|
|
|
|
|