dejahboi
Newbie
Offline
Activity: 6
Merit: 0
|
|
December 19, 2013, 11:19:30 AM |
|
With a couple of adjustments I got my 680 up to ~355 kH/s: -i 0 -C 2 -m 1 -H 1 -l K16x16 http://flic.kr/p/irGqEi
|
|
|
|
hilariousandco
Global Moderator
Legendary
Offline
Activity: 3990
Merit: 2717
Join the world-leading crypto sportsbook NOW!
|
|
December 19, 2013, 12:51:09 PM |
|
https://bitcointalk.org/index.php?topic=377174.msg4040564#msg4040564

Just posting the following for a newbie from the above thread, because I believe it's urgent: "I downloaded the latest version for use with my Fermi GTX 470 and thought, 'Hey, great, 2.5 kH/s more is better than nothing.' My fan runs at max, so I couldn't tell, but when I glanced at my GPU temp a few minutes later, it was 98-99 C! Far over the max of 95 C and my personal max of 90 C. Running too long at this rate would kill the GPU, and this new version needs strict labeling as KEPLER ONLY, excepting a few possible cases. Thanks!

EDIT: I keep getting 'duplicate post' or 'you just posted' errors when it is not a duplicate and I did not just post. Here is the link: https://bitcointalk.org/index.php?topic=167229.0"
|
|
|
|
manofcolombia
Member
Offline
Activity: 84
Merit: 10
SizzleBits
|
|
December 19, 2013, 02:08:10 PM |
|
Using -H 1 -i 0 -C 1 -D -l K14x16 with my 660 Ti. It's stable, but the numbers don't look right and I still can't get my max overclock with the updated version of cudaminer. It would be nice if someone knew anything about that, but apparently my buddy and I are the only ones with this issue. Here's proof...
|
|
|
|
f4t4l
Full Member
Offline
Activity: 200
Merit: 100
Presale Starting May 1st
|
|
December 19, 2013, 04:38:58 PM |
|
I have an error in cudaminer; please help.
|
|
|
|
MUBBLE86
Full Member
Offline
Activity: 126
Merit: 100
1
|
|
December 19, 2013, 04:57:03 PM |
|
I have an error in cudaminer; please help.

Your setting is too high.
|
|
|
|
manofcolombia
Member
Offline
Activity: 84
Merit: 10
SizzleBits
|
|
December 19, 2013, 05:00:54 PM |
|
Your numbers are better than my 670; everything looks gravy to me so long as you intend to push your card and sacrifice a little of its lifespan.

Well, that's what I was saying in an earlier post: since the 12/18 update my GPU now draws 103%+ power when it used to sit nicely at 95%, and if I raise my voltages to where they normally are for stable gaming, my core speed drops and so does my hash rate. So right now these settings will instantly crash with any game or anything intense. That's because my voltage is so low; I want to keep my clock speed high for mining, even though it doesn't get nearly as high as it used to. As for lifespan, this is an MSI Power Edition card, so I'm confident in its integrity. However, I don't understand how my power usage shot up by a full 10% after 12/18 with the exact same settings I was using on 12/10.

EDIT: Also, what pool are you using? I was seeing the same results all day yesterday. I was getting chunks of stales in runs of 5-10 when normally I'm at 99% valid; yesterday was more like 96%.
|
|
|
|
f4t4l
Full Member
Offline
Activity: 200
Merit: 100
Presale Starting May 1st
|
|
December 19, 2013, 05:02:46 PM |
|
Your setting is too high.

Do you know the settings for a GeForce 8400M?
|
|
|
|
cbuchner1 (OP)
|
|
December 19, 2013, 05:15:43 PM |
|
Do you know the settings for a GeForce 8400M?

It can't use x4. Stick to x3 launch configs.
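For readers following along, cudaminer takes its launch configuration via the -l flag, and leaving it off lets the built-in autotune pick one, which may be the safest route on an old mobile part like the 8400M. A minimal sketch of an invocation (the pool URL, port, and worker credentials below are placeholders, not real values):

```shell
# Hypothetical sketch; pool/worker details are placeholders.
# Omitting -l lets autotune pick a launch config the card can handle;
# -i 1 enables interactive mode so the desktop stays responsive.
cudaminer.exe -o stratum+tcp://pool.example.com:3333 -O worker:password -i 1
```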
|
|
|
|
f4t4l
Full Member
Offline
Activity: 200
Merit: 100
Presale Starting May 1st
|
|
December 19, 2013, 05:21:10 PM |
|
It can't use x4. Stick to x3 launch configs.

Thanks. So this, with the CPU up to 80%? I am using a laptop.
|
|
|
|
Bratinov
Newbie
Offline
Activity: 3
Merit: 0
|
|
December 19, 2013, 07:35:10 PM Last edit: December 19, 2013, 07:59:44 PM by Bratinov |
|
No luck with my 580 and the latest version; it's basically cooking itself with no performance increase. The GTX 660 got a nice performance bump, though.
|
|
|
|
mgogalu
Newbie
Offline
Activity: 32
Merit: 0
|
|
December 19, 2013, 07:54:52 PM |
|
Can you mine other altcoins besides Litecoin with cudaminer?
|
|
|
|
Icon
|
|
December 19, 2013, 07:56:49 PM |
|
All SHA-256 and scrypt types.
|
|
|
|
Ness
Newbie
Offline
Activity: 10
Merit: 0
|
|
December 19, 2013, 08:03:19 PM |
|
I don't understand how my power usage shot up by a full 10% after 12/18 with the exact same settings I was using on 12/10.

I'm assuming that changes made to the most recent version caused/allowed our cards to reach the full TDP we have set. Before, I could set my power target to 140% and would only use about 110% under 99% GPU load. Now that my hash rate has jumped up, so have the TDP usage and the temps. I just sort of assumed this was normal and that this build let our cards run at open throttle. I'd be pleasantly surprised if something could be changed that brought power consumption down while retaining the current performance.
|
|
|
|
mgogalu
Newbie
Offline
Activity: 32
Merit: 0
|
|
December 19, 2013, 08:29:06 PM |
|
All SHA-256 and scrypt types.

But at what hash rate, for a GTX 780?
|
|
|
|
trell0z
Newbie
Offline
Activity: 43
Merit: 0
|
|
December 19, 2013, 08:34:03 PM |
|
No luck with my 580 and the latest version; it's basically cooking itself with no performance increase.

Yeah, just use the 12-10 version until/unless he can fix the increased temps without a performance increase on older cards, or at least on the 580, since my result with the new version is the same ^^ I mean, if there were a performance increase to match the heat I'd be fine with it, but it's an absolute waste when it performs the same or slightly worse with all that extra heat.
|
|
|
|
dga
|
|
December 19, 2013, 08:50:02 PM |
|
Does anyone have settings for a 690 that don't absolutely melt the device? How can I throttle this thing to make it less intense?

Try running with -i -- it'll reduce the speed a little. I _just_ got my 690 up and running and am still having driver issues with it (I can't run on both devices at the same time, sigh). But with a single device running with -d1 -m1 -lK8x16 I'm seeing about 270-275 kH/s, and after a few minutes my card is at 78 C. It's freestanding (motherboard-on-a-table kind of thing). 78 is not something you want to stick your tongue on, but it shouldn't hurt the card. What kind of temperatures are you seeing from nvidia-smi? What hash rates, and what config? (And, for my own use: if you're running Linux, which driver are you using that works? *grins*)

I tried running "-i --" but that just set interactive mode to 0 and made my PC unresponsive. I'm seeing 400 kH/s in total with default settings on my 690 on Windows 8.1. Is there really no way to have my 690 mine at only 60%, for example?

Are you comfortable editing the source? There's an easy change to accomplish what you want, but it's a bit of a hack and requires recompiling. You could also try running with a kernel config like -lK1x16 and see if that slows it down and reduces the heat.

I never compiled anything on Windows before. I don't mind editing the source, though; can you guide me through it?

I don't think that's a great idea, and I'm confused about your performance. I now have my 690 working well: it's at 85 C on one half and 77 C on the other while doing 550 kH/s (with no display attached). That's with kernel config -l K8x32,K8x32. What temperature are you seeing? If you want to edit the source, go into kepler_kernel.cu, look for the line that says Sleep(1); and change it to Sleep(10); then recompile. See how much of a temperature reduction that gives you versus the performance drop, and play from there. But, as I said, your performance seems low. You might have a ventilation problem that's causing some thermal shutdown? -Dave
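The throttling hack dga describes amounts to lengthening the idle gap between kernel launches. A rough sketch of what the edit looks like (simplified; the real surrounding code in kepler_kernel.cu differs):

```cpp
// Sketch of the throttle hack described above, not the actual file contents.
// In kepler_kernel.cu, the interactive-mode path sleeps briefly between
// kernel launches; lengthening that sleep lowers the GPU duty cycle.

Sleep(1);   // original: ~1 ms pause between launches
// change to:
Sleep(10);  // ~10 ms pause: cooler card, proportionally lower hash rate
```

Since the sleep sits between every launch, the hash-rate cost scales roughly with the ratio of sleep time to kernel run time, so small increments are worth trying first.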
|
|
|
|
San1ty
|
|
December 19, 2013, 09:21:41 PM |
|
I'm confused about your performance. I now have my 690 working well: it's at 85 C on one half and 77 C on the other while doing 550 kH/s (with no display attached). That's with kernel config -l K8x32,K8x32.

I didn't do any tweaking, by the way; that was autotune. Now that I use your -l K8x32,K8x32 it performs a bit cooler AND faster: 511 kH/s, core one at 84 and core two at 85 degrees. Still a bit too hot for my taste for everyday use, though I have the card in a regular desktop case. My final settings are: cudaminer.exe -H 1 -C 2 -t 1 -i 1 -l K8x32,K8x32. Any other suggestions? Also, this is with interactive mode on, but I notice my PC becomes very laggy.
|
Found my posts helpful? Consider buying me a beer :-)!: BTC - 1San1tyUGhfWRNPYBF4b6Vaurq5SjFYWk NXT - 17063113680221230777
|
|
|
dga
|
|
December 19, 2013, 10:56:50 PM |
|
Now that I use your -l K8x32,K8x32 it performs a bit cooler AND faster: 511 kH/s, core one at 84 and core two at 85 degrees. Still a bit too hot for my taste for everyday use, though I have the card in a regular desktop case. My final settings are: cudaminer.exe -H 1 -C 2 -t 1 -i 1 -l K8x32,K8x32. Any other suggestions? Also, this is with interactive mode on, but I notice my PC becomes very laggy.

Not yet. I'm finally managing to reproduce your thermal overload problem on my own setup with 2x GTX 690s (nvidia-smi output):

| 82% 90C N/A N/A / N/A | 1087MiB / 2047MiB | N/A Default |
| 57% 78C N/A N/A / N/A | 1087MiB / 2047MiB | N/A Default |
| 57% 79C N/A N/A / N/A | 1087MiB / 2047MiB | N/A Default |
| 54% 75C N/A N/A / N/A | 1087MiB / 2047MiB | N/A Default |

Toasty. That 90 C isn't good unless you're planning on making tea with your computer. My kernel is going to make your display laggy even in interactive mode, unfortunately. The only thing I can think of to reduce both power and lagginess without changing the code is to try -l K2x32,K2x32 or something similar. Have you given that a shot? It should reduce the duration each kernel runs and increase the relative amount of time interactive mode spends telling the GPU not to mine. Could you let me know how that works? I'm tied up for a while, but now that I have my 690 running I'll see if I can find any efficiency gains for it. Don't hold your breath, though: the 690 is pretty similar to the Grid K2 that I was optimizing for before. I think there are gains to be had for GF110 devices, but maybe not GF104. -Dave
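Put together, the experiment dga suggests would look something like the following (the pool URL and worker credentials are placeholders; the comma-separated config applies one launch configuration per half of the 690, matching the -l K8x32,K8x32 form used earlier in the thread):

```shell
# Hypothetical sketch; pool/worker details are placeholders.
# Interactive mode (-i 1) plus a smaller per-launch config (K2x32 per GPU half)
# shortens each kernel run, giving the driver more idle time between launches.
cudaminer.exe -H 1 -C 2 -i 1 -l K2x32,K2x32 -o stratum+tcp://pool.example.com:3333 -O worker:password
```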
|
|
|
|
San1ty
|
|
December 19, 2013, 11:13:32 PM Last edit: December 19, 2013, 11:25:11 PM by San1ty |
|
Could you let me know how that works? I'm tied up for a while, but now that I have my 690 running I'll see if I can find any efficiency gains for it.

I wouldn't dare be disappointed; your help is much appreciated! I'm just trying to find out whether it's possible for me to mine with my 690 at all. The setting you provided indeed helped a lot with my display lag (performance dropped to 419 kH/s, but I don't mind that); unfortunately temps climb to 90 after two minutes or so, so it's still a no-go :-). I guess I'm out of luck (unless you find something). Thanks a lot for having a look!
|
Found my posts helpful? Consider buying me a beer :-)!: BTC - 1San1tyUGhfWRNPYBF4b6Vaurq5SjFYWk NXT - 17063113680221230777
|
|
|
Raitzi
Newbie
Offline
Activity: 19
Merit: 0
|
|
December 20, 2013, 04:57:08 AM |
|
My GTX 770 produces 330 kH/s at -i 0 -C 2 -m 1 -H 2 -l K16x16 (my shoddy CPU does not like -H 1). However, for 24-hour operation I had to lower the voltage and underclock to get temps to stabilize at 79 degrees Celsius (I will optimize this later). With the underclock I get 300 kH/s.
|
|
|
|
|