Force_field
Newbie
Offline
Activity: 14
Merit: 0
|
|
December 12, 2013, 08:01:18 PM |
|
Power settings are all OK (off or at max, where they should be). Hardware acceleration triggered by Chrome MAYBE is the cause, but how can I make Windows trigger the GPU's full potential for a console (DOS) window? My other question was already posted: why does it still run fine even AFTER I close Chrome (once cudaminer has started)? Does that trigger then remain in the "ON" state or what?... My other rig with a GTX 760 never does that (same nVidia driver, same Windows settings)?!
|
|
|
|
qwerty77
Newbie
Offline
Activity: 53
Merit: 0
|
|
December 12, 2013, 09:49:31 PM |
|
Power settings are all OK (off or at max, where they should be). Hardware acceleration triggered by Chrome MAYBE is the cause, but how can I make Windows trigger the GPU's full potential for a console (DOS) window? My other question was already posted: why does it still run fine even AFTER I close Chrome (once cudaminer has started)? Does that trigger then remain in the "ON" state or what?... My other rig with a GTX 760 never does that (same nVidia driver, same Windows settings)?!
I assume this is related to the different performance levels of your Nvidia card. I have a similar issue: if a pool I'm mining on is overloaded, the hashrate decreases and I get frame drops during animations in my Chrome browser, because the GPU load is no longer high enough to switch to the next higher clock profile. I guess this could be fixed using the tool Nvidia Inspector; it comes with a tool named "Multi Display Power Saver", which is meant to prevent the graphics card from running at higher profiles than necessary when using multiple displays, and thereby save energy. But there you can also configure applications that always trigger the highest available clock profile for your GPU, so you could add your miner there and the GPU should always run at full clock while your miner is running.
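If you want to verify which performance level the card actually sits in while mining, the NVML library (shipped with the nVidia driver) can query the current P-state; P0 is the highest-clock state. A minimal sketch, assuming NVML headers and libraries are available on your system:

Code:
// Query the current performance state (P-state) of GPU 0 via NVML.
// P0 = full clocks; higher numbers are power-saving states.
#include <stdio.h>
#include <nvml.h>

int main(void)
{
    nvmlDevice_t dev;
    nvmlPstates_t pstate;

    if (nvmlInit() != NVML_SUCCESS) {
        fprintf(stderr, "NVML init failed\n");
        return 1;
    }
    if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS &&
        nvmlDeviceGetPerformanceState(dev, &pstate) == NVML_SUCCESS)
        printf("GPU 0 is in P%d\n", (int)pstate);  // expect P0 under full mining load
    nvmlShutdown();
    return 0;
}

If the miner is running and this reports P8 or similar, the card never switched up to its full clock profile.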
|
|
|
|
fruittool
Newbie
Offline
Activity: 19
Merit: 0
|
|
December 13, 2013, 04:01:53 AM |
|
Hi Christian, me again. Dunno if I'm barking up the wrong tree here, but if I reduce the number of CUDA threads, i.e. 'case 16: fermi_scrypt_core_kernelA<16><<< grid, threads, 0, stream >>>(d_idata); break;', say I set threads to 256 (<512), there's a massive increase in speed... but quite a few errors. Why the errors? CUDA is still new to me.... http://s22.postimg.org/qdcboxcwh/Capture.jpg
|
|
|
|
cbuchner1 (OP)
|
|
December 13, 2013, 07:47:25 AM |
|
Hi Christian, me again. Dunno if I'm barking up the wrong tree here, but if I reduce the number of CUDA threads, i.e.:
'case 16: fermi_scrypt_core_kernelA<16><<< grid, threads, 0, stream >>>(d_idata); break;'
Say I set threads to 256 (<512), there's a massive increase in speed... but quite a few errors.
Why the errors?
You're asking, basically: if I break the program, there are errors; why are there errors? Short answer: because you broke it. Long answer: because with 256 threads you only compute half the requested results.
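To illustrate (this is not the actual cudaminer kernel, just a hypothetical sketch): a launch processes grid * threads work items in total, one per thread. Halving threads while keeping the same grid leaves half the output buffer untouched, and that stale half then fails the CPU-side validation, which shows up as errors:

Code:
// Hypothetical sketch: one work item per thread.
#include <stdint.h>

__global__ void scrypt_core_sketch(uint32_t *data, int total_items)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < total_items)
        data[idx] *= 2;  // stand-in for the real scrypt core work
}

// Host side: the launch must cover all requested items.
// With total_items = grid * 512 requested:
//   scrypt_core_sketch<<< grid, 512 >>>(d_idata, grid * 512);  // covers everything
//   scrypt_core_sketch<<< grid, 256 >>>(d_idata, grid * 512);  // only half the
//     threads exist, so half the buffer is never written -> validation errors
// It runs faster simply because it is doing half the work.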
|
|
|
|
Ness
Newbie
Offline
Activity: 10
Merit: 0
|
|
December 13, 2013, 06:50:27 PM Last edit: December 13, 2013, 07:04:02 PM by Ness |
|
I've been lurking this thread ever since I started mining a week ago. Thanks for all the hard work everyone has done. I've got 2 EVGA 670 FTWs, OC'ed, hitting 453 kh/s. Supporting hardware is an i7 3770K @ 4.6GHz, an EVGA Z77 FTW motherboard, and 8GB of RAM. I've tried as many combinations of arguments as possible, and these two get me the same results: -H 1 -i 0 -C 1 -m 1 -l K14x16 and -H 1 -i 0 -C 1 -m 1 -l K112x2 <- way more consistent. About 220-230 kh/s on each card. I've also noticed that if I put in an argument that cudaminer doesn't like, I have to reboot my computer, since it drops my GPU clock speed to 750MHz and seems to lock it there. I'm hoping for some more K20 optimizations in the future! Overclock and hash rate screenshot
|
|
|
|
icolsuineg
|
|
December 14, 2013, 08:58:39 AM |
|
I've also noticed that if I put in an argument that cudaminer doesn't like, I have to reboot my computer since it drops my GPU clock speed to 750mhz and seems to lock it there.
Try enabling and then disabling SLI in such cases. Not sure if it will help with newer nVidia cards, but it has worked just fine on my 480s, which I sometimes manage to lock at a lower clock after an error.
|
|
|
|
Ness
Newbie
Offline
Activity: 10
Merit: 0
|
|
December 14, 2013, 10:19:01 AM Last edit: December 14, 2013, 11:48:09 AM by Ness |
|
Misread your post at first. Thanks for the tip, I'll try that next time!
|
|
|
|
thejepper
|
|
December 15, 2013, 12:58:51 PM |
|
any progress on the optimizations the cloud miner promised to send us?
|
|
|
|
Tobiman5
Newbie
Offline
Activity: 3
Merit: 0
|
|
December 15, 2013, 05:19:07 PM |
|
Hi guys, I'm in a pinch atm. I keep getting this when I launch cudaminer: "HTTP request failed: Failed connect to 127.0.0.1:9332; No error" and "json_rpc_call failed, retry after 15 seconds". I already tried disabling the firewall but it doesn't help. I think it has to do with my batch file, but I can't figure out what exactly is wrong atm. http://i42.tinypic.com/o6jq4m.png
|
|
|
|
GoldBit89
|
|
December 15, 2013, 06:09:24 PM |
|
Hi guys, I'm in a pinch atm. I keep getting this when I launch cudaminer: "HTTP request failed: Failed connect to 127.0.0.1:9332; No error" and "json_rpc_call failed, retry after 15 seconds". I already tried disabling the firewall but it doesn't help. I think it has to do with my batch file, but I can't figure out what exactly is wrong atm.
I had the same issue and finally got it fixed by following the guide on this website. Sucks that you have to use a stratum proxy, but it did fix my issue and kept me from having to use the memory hog cgminer: http://www.lpshowboat0099.com/Blog/how-to-mining-ltc-with-cudaminer-on-a-stratum-server/ This should do it for you.
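For anyone else hitting this: "Failed connect to 127.0.0.1:9332" just means nothing was listening on that local port. The guide above fixes it by running a local stratum proxy and pointing cudaminer's batch file at it. A minimal batch sketch, assuming slush's stratum mining proxy with its default getwork listen port; the pool host, worker name, and password below are placeholders, and the exact flags and ports vary by proxy version, so check its --help:

Code:
REM start the proxy, pointing it at your pool (placeholder host/port)
start mining_proxy.exe -o your.pool.example -p 3333
REM point cudaminer at the proxy's local getwork port (8332 by default)
cudaminer.exe -o http://127.0.0.1:8332 -u yourworker -p yourpassword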
|
|
|
|
Tobiman5
Newbie
Offline
Activity: 3
Merit: 0
|
|
December 15, 2013, 06:28:58 PM |
|
I finally got it to work. It was the stratum proxy as you said. Thx guys! Now it's time for optimization.
|
|
|
|
qwerty77
Newbie
Offline
Activity: 53
Merit: 0
|
|
December 15, 2013, 06:40:16 PM |
|
any progress on the optimizations the cloud miner promised to send us?
wanted to ask about the same ...
|
|
|
|
cbuchner1 (OP)
|
|
December 15, 2013, 06:59:52 PM |
|
any progress on the optimizations the cloud miner promised to send us?
wanted to ask about the same ...
Keep checking his blog for updates.
|
|
|
|
peri
Newbie
Offline
Activity: 8
Merit: 0
|
|
December 15, 2013, 10:32:44 PM Last edit: December 15, 2013, 10:53:54 PM by peri |
|
Hi, I have two 680s, and letting cudaminer automatically find the best settings for each card individually, it tells me 16x14 is ideal, which it appears to be. If I run either card alone at that setting, it'll give 200+ kh/s. If I try to run both cards at the same time (with -H 2 -d 0,1 -l K16x14,K16x14), the rate drops to around 0.7 kh/s. What's going on? The only config I've found that gives anything other than either <1 kh/s or a crash is 8x24, and that tops out at about 220 kh/s with both cards running. Any ideas how it can be improved? Thank you
|
|
|
|
trell0z
Newbie
Offline
Activity: 43
Merit: 0
|
|
December 15, 2013, 11:47:52 PM |
|
Hi, I have two 680s, and letting cudaminer automatically find the best settings for each card individually, it tells me 16x14 is ideal, which it appears to be. If I run either card alone at that setting, it'll give 200+ kh/s. If I try to run both cards at the same time (with -H 2 -d 0,1 -l K16x14,K16x14), the rate drops to around 0.7 kh/s. What's going on? The only config I've found that gives anything other than either <1 kh/s or a crash is 8x24, and that tops out at about 220 kh/s with both cards running. Any ideas how it can be improved? Thank you
Have you tried it with -H 0/1 and different combinations of -C 0/1/2, and the 32- vs 64-bit exe? Also try disabling SLI. Since I don't have SLI/CrossFire I don't know that much about the problems myself, but everywhere I've read about it people seem to say it's a bad idea when mining; no idea if it's the same or not with cudaminer though.
|
|
|
|
Ness
Newbie
Offline
Activity: 10
Merit: 0
|
|
December 16, 2013, 12:42:27 AM |
|
Hi, I have two 680s, and letting cudaminer automatically find the best settings for each card individually, it tells me 16x14 is ideal, which it appears to be. If I run either card alone at that setting, it'll give 200+ kh/s. If I try to run both cards at the same time (with -H 2 -d 0,1 -l K16x14,K16x14), the rate drops to around 0.7 kh/s. What's going on? The only config I've found that gives anything other than either <1 kh/s or a crash is 8x24, and that tops out at about 220 kh/s with both cards running. Any ideas how it can be improved? Thank you
SLI needs to be disabled, from everything I've experienced. Since our cards are similar, give this a shot: -H 1 -i 0 -C 1 -m 1 -l K112x2. You don't need to specify arguments for each card if you want them all to run under the same settings.
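For reference, a complete invocation along those lines; the pool URL and credentials are placeholders, and a single -l value is shared by every device selected with -d:

Code:
REM both 680s, one shared launch config (placeholder pool/credentials)
cudaminer.exe -d 0,1 -H 1 -i 0 -C 1 -m 1 -l K112x2 -o stratum+tcp://your.pool.example:3333 -O yourworker:yourpassword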
|
|
|
|
|
leshow
Newbie
Offline
Activity: 48
Merit: 0
|
|
December 16, 2013, 02:17:55 AM |
|
*** CudaMiner for nVidia GPUs by Christian Buchner ***
This is version 2013-12-01 (beta)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
Cuda additions Copyright 2013 Christian Buchner
My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm
[2013-12-15 21:17:20] 1 miner threads started, using 'scrypt' algorithm.
[2013-12-15 21:17:20] Starting Stratum on stratum+tcp://stratum.gentoomen.org:3333
[2013-12-15 21:17:35] Stratum detected new block
[2013-12-15 21:17:35] Stratum detected new block
[2013-12-15 21:17:38] GPU #0: GeForce GTX 650 Ti with compute capability 3.0
[2013-12-15 21:17:38] GPU #0: interactive: 1, tex-cache: 1D, single-alloc: 1
[2013-12-15 21:17:38] GPU #0: using launch configuration K14x16
[2013-12-15 21:17:38] GPU #0: GeForce GTX 650 Ti, 7168 hashes, 3.25 khash/s
[2013-12-15 21:17:38] GPU #0: GeForce GTX 650 Ti, 200704 hashes, 833.73 khash/s
[2013-12-15 21:18:01] GPU #0: GeForce GTX 650 Ti result does not validate on CPU!
Anyone know how to fix this problem?
|
|
|
|
eagleeyez
Newbie
Offline
Activity: 12
Merit: 0
|
|
December 16, 2013, 05:00:58 AM |
|
Try a restart, or set -l auto and retry... you may need to experiment with the -l setting until you get the best performance without errors in the CPU check.
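For example, using the pool from the log above (the worker credentials are placeholders):

Code:
cudaminer.exe -l auto -o stratum+tcp://stratum.gentoomen.org:3333 -O yourworker:yourpassword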
|
|
|
|
Ness
Newbie
Offline
Activity: 10
Merit: 0
|
|
December 16, 2013, 05:18:32 AM |
|
Nice! My auto config settings usually get set to K14x16, but I've found that 112x2 gives me the best results. With that 680, your hash rates seem to be right on the mark. If your temperatures are cool enough, you could look into getting a few more kh/s with overclocking. I believe this applies to all Kepler cards, but the most important thing to keep in mind is that your GPU clock will begin to gradually decrease above 65 degrees Celsius. From some overclocking testing I've done with my cards, I've seen that a 50MHz change = roughly a 10 kh/s difference. There is a pretty good overclocking guide that you can find here, if you are feeling brave enough: http://www.overclock.net/t/1265110/the-gtx-670-overclocking-master-guide
|
|
|
|
|