UNOE
Sr. Member
Offline
Activity: 791
Merit: 273
This is personal
|
|
December 23, 2013, 03:42:57 AM |
|
Is there any way to do backup pools?
|
|
|
|
Antivanity
Newbie
Offline
Activity: 26
Merit: 0
|
|
December 23, 2013, 03:50:41 AM |
|
Is there any way to do backup pools?
This is a feature to come; read the bottom of the readme.txt file.
|
|
|
|
mareyx
Newbie
Offline
Activity: 3
Merit: 0
|
|
December 23, 2013, 04:46:01 AM |
|
Recently started mining. I've got two Gigabyte GTX 770 4GB cards, but I'm seeing some weird results.
My settings look like this: "-i 1 -l K16x16 -C 2"
I'm seeing a lot of "result does not validate on CPU"; basically every change I make to the settings will cause this error. Even restarting the client with the same settings will give me this error a bunch of times. I have to tweak the settings, try to launch with the new settings, and then switch back to my old settings to get it to work. Even launching it without any pre-set settings will cause an error.
Once I do get it to work, one of the GPUs is reporting a hashrate of ~2000 khash/s, which is obviously false, and the other is showing ~350 khash/s. However, if I change my settings to "-i 0", one of the GPUs will show a hashrate of ~90,000 khash/s while the other one remains at 350.
Also, looking at MSI Afterburner, only one card is being utilized: GPU #1 is at 100% usage while GPU #2 is sitting at 0% usage. I can't seem to get both GPUs to be utilized.
Any suggestions or ideas?
|
|
|
|
Lacan82
|
|
December 23, 2013, 04:55:51 AM |
|
Recently started mining. I've got two Gigabyte GTX 770 4GB cards, but I'm seeing some weird results.
My settings look like this: "-i 1 -l K16x16 -C 2"
I'm seeing a lot of "result does not validate on CPU"; basically every change I make to the settings will cause this error. Even restarting the client with the same settings will give me this error a bunch of times. I have to tweak the settings, try to launch with the new settings, and then switch back to my old settings to get it to work. Even launching it without any pre-set settings will cause an error.
Once I do get it to work, one of the GPUs is reporting a hashrate of ~2000 khash/s, which is obviously false, and the other is showing ~350 khash/s. However, if I change my settings to "-i 0", one of the GPUs will show a hashrate of ~90,000 khash/s while the other one remains at 350.
Also, looking at MSI Afterburner, only one card is being utilized: GPU #1 is at 100% usage while GPU #2 is sitting at 0% usage. I can't seem to get both GPUs to be utilized.
Any suggestions or ideas?
Is that configuration based off of autotune, or did you just select it?
|
|
|
|
mareyx
Newbie
Offline
Activity: 3
Merit: 0
|
|
December 23, 2013, 05:01:48 AM |
|
Is that configuration based off of autotune, or did you just select it?
The settings were originally "-D -H 1 -m 1 -d 0 -i 1 -l K16x16 -C 2", copied from here: https://litecoin.info/Mining_hardware_comparison. They were working fine at first for multiple hours of mining, but wouldn't work once I restarted the client, so I tweaked them a bit until I got it to work again. The autotune settings haven't really been working for me, just a bunch of "result does not validate on CPU".
|
|
|
|
Lacan82
|
|
December 23, 2013, 05:08:28 AM |
|
Is that configuration based off of autotune, or did you just select it?
The settings were originally "-D -H 1 -m 1 -d 0 -i 1 -l K16x16 -C 2", copied from here: https://litecoin.info/Mining_hardware_comparison. They were working fine at first for multiple hours of mining, but wouldn't work once I restarted the client, so I tweaked them a bit until I got it to work again. The autotune settings haven't really been working for me, just a bunch of "result does not validate on CPU".
Take out -l, because that could be an old config that is no longer valid. You need to autotune after every upgrade.
|
|
|
|
kernels10
Sr. Member
Offline
Activity: 408
Merit: 250
ded
|
|
December 23, 2013, 05:09:31 AM |
|
So after running my GTX 560 Ti for 2 days straight at a solid 280 khash/s on the new 12/18 software, cudaminer out of nowhere spiked the hashrate up to about 450 and the card started giving hardware errors, as it can't run that high.
I got the notification from my mining pool that a worker was down, so I RDP to the machine, close out cudaminer, and restart my script, with no changes made at all.
Now all of a sudden cudaminer is saying, "unable to query CUDA driver version. Is an nVidia driver installed." This of course isn't true.
Seeing as how this happened the very first time I ran cudaminer I simply tried to reinstall the driver. When that didn't work I tried downgrading the driver and still no luck. I even installed the CUDA development kit and that didn't work either. I can no longer get cudaminer to launch any of the 3 versions that I have previously used.
I'm very confused at the moment. The only thing crossing my mind is that maybe when I RDP to the machine the graphic settings are changing for remote desktop and the CUDA driver is being disabled and therefore cannot relaunch.
Anyone ever tried to restart cudaminer via RDP before? The bigger question is why cudaminer decided to randomly jump to 450 khash/s after 2 straight days of mining at 280.
Thoughts, comments, help, all appreciated. 5k doge to anyone that can help me find a solution.
Lots doge you rich coins wow cudaminer wow doge happy coin.
Driver crashed? Happens to me if I try to push my OC too high. Does it still happen after a reboot? I haven't used RDP, but I am using Chrome Remote Desktop and haven't had issues.
WOOOT!!!!! kernels10, you have been awarded 5k doge. My conclusion about RDP was 100% accurate, and I was able to verify that via Chrome Remote Desktop. I used RDP to install Chrome Remote Desktop, exited RDP, entered through Chrome Remote Desktop, and the scripts started up perfectly. What this verified is that, at least on the GTX 560 Ti, RDP does indeed kill the CUDA nVidia drivers upon connection, therefore making it impossible to restart cudaminer. I'm curious if this is the case with all Microsoft RDP sessions. Thx DL7Kf4tT1heq4E8NX41mSCKoaWnsySQEAt
Maybe MS RDP disables "unnecessary" drivers for performance reasons? I am not too familiar at all with RDP.
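For anyone who wants to verify whether a given session still has a usable NVIDIA driver, here is a minimal sketch (an illustration under assumptions, not cudaminer's actual startup code; it assumes the CUDA driver library and the toolkit's cuda.h header are installed) of the kind of query that fails with an "unable to query CUDA driver version" style message:

/* check_driver.c -- build with: gcc check_driver.c -lcuda */
#include <stdio.h>
#include <cuda.h>

int main(void)
{
    int version = 0;

    /* cuInit() fails when no usable NVIDIA driver is loaded in this
     * session, e.g. if a remote-desktop session has swapped out the
     * display driver. */
    if (cuInit(0) != CUDA_SUCCESS ||
        cuDriverGetVersion(&version) != CUDA_SUCCESS) {
        fprintf(stderr, "unable to query CUDA driver version\n");
        return 1;
    }
    printf("CUDA driver version: %d.%d\n", version / 1000, (version % 100) / 10);
    return 0;
}

Running this from a plain console session and then from inside an RDP session would show whether the driver really disappears when RDP connects.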
|
|
|
|
69charger
|
|
December 23, 2013, 05:17:15 AM |
|
Is that configuration based off of autotune, or did you just select it?
The settings were originally "-D -H 1 -m 1 -d 0 -i 1 -l K16x16 -C 2", copied from here: https://litecoin.info/Mining_hardware_comparison. They were working fine at first for multiple hours of mining, but wouldn't work once I restarted the client, so I tweaked them a bit until I got it to work again. The autotune settings haven't really been working for me, just a bunch of "result does not validate on CPU".
To get the other card working you need "-d 0,1", and then you can set the intensity to "-i 1,0" to use the 2nd card to its max. Ex. "-D -d 0,1 -i 1,0 -l auto,auto -H 1 -C 2"
|
|
|
|
Valnurat
|
|
December 23, 2013, 10:17:08 AM |
|
How do you use cudaminer through a proxy?
|
|
|
|
Tacticat
|
|
December 23, 2013, 12:14:18 PM |
|
I've been googling for ages and I have read the "README" file at least five times, but nowhere could I find what the intensity setting K28x8 or similar actually means.
I'm using the argument -l 28x8, but I might as well be burning my GPU right now, because I have no actual idea what any of those numbers mean and it is explained nowhere.
Could someone please explain how those numbers work so that I can use them properly on my GTX 660?
Thank you!
|
Tips and donations:
15nqQGfkgoxrBnsshD6vCuMWuz71MK51Ug
|
|
|
cbuchner1 (OP)
|
|
December 23, 2013, 01:08:20 PM Last edit: December 23, 2013, 01:33:11 PM by cbuchner1 |
|
I've been googling for ages and I have read the "README" file at least five times, but nowhere could I find what the intensity setting K28x8 or similar actually means.
I'm using the argument -l 28x8, but I might as well be burning my GPU right now, because I have no actual idea what any of those numbers mean and it is explained nowhere.
Could someone please explain how those numbers work so that I can use them properly on my GTX 660?
Thank you!
It would be a bit like explaining to a passenger how to land a plane. Wouldn't it be easier if I just showed him how to push the auto-land button? (Yes, newer models have that feature.)
To understand the terminology of launch configurations like -l K28x8 you would have to understand the CUDA programming model: what a launch grid is, what a thread block is, and how it consists of warps that are independently scheduled on your Kepler multiprocessor's four warp schedulers. And you would have to understand what parameters could make sense on your particular GPU architecture to achieve high occupancy. You would also have to know certain limits imposed by the shared memory and registers used by a given kernel.
Try auto-tuning first. Pass either -l auto, or no -l argument at all. If that doesn't find a satisfactory configuration, we can talk about blocks and warps and the memory requirements. The treatise linked to in my follow-up posting also has a bit of information.
Christian
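As a rough illustration only (this assumes that a configuration such as K28x8 denotes 28 thread blocks of 8 warps each, i.e. 8 x 32 = 256 threads per block, and dummy_kernel merely stands in for cudaminer's real scrypt kernel), the two numbers map onto an ordinary CUDA kernel launch like this:

// launch_sketch.cu -- build with: nvcc launch_sketch.cu
#include <cstdio>

__global__ void dummy_kernel(void)
{
    // A real scrypt kernel would do its hashing work here.
}

int main(void)
{
    dim3 grid(28);        // 28 thread blocks spread across the SMXes
    dim3 block(8 * 32);   // 8 warps of 32 threads = 256 threads per block
    dummy_kernel<<<grid, block>>>();
    cudaDeviceSynchronize();
    printf("launched %u blocks of %u threads (a \"28x8\" configuration)\n",
           grid.x, block.x);
    return 0;
}

Autotune simply benchmarks candidate pairs of these two numbers and keeps whichever hashes fastest on your particular card.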
|
|
|
|
cbuchner1 (OP)
|
|
December 23, 2013, 01:31:54 PM |
|
The GPU_MAX_ALLOC_PERCENT environment variable is snake oil for nVidia CUDA devices. I am pretty sure the driver won't care about this flag.
Christian
|
|
|
|
ak84
|
|
December 23, 2013, 01:44:49 PM |
|
I've been googling for ages and I have read the "README" file at least five times, but nowhere could I find what the intensity setting K28x8 or similar actually means.
I'm using the argument -l 28x8, but I might as well be burning my GPU right now, because I have no actual idea what any of those numbers mean and it is explained nowhere.
Could someone please explain how those numbers work so that I can use them properly on my GTX 660?
Thank you!
Your GPU isn't gonna burn up dude. Relax. Hundreds of 660TI owners here have been mining away on a whole host of Kxxxx configurations. I myself have used K7x32, K12x16, K31x6, K14x8, K14x16, and K12x8 (recommended by -l auto) and THEY BURNED MY GPU OH MY GOD
|
|
|
|
69charger
|
|
December 23, 2013, 04:12:34 PM Last edit: December 23, 2013, 04:41:12 PM by 69charger |
|
I've been googling for ages and I have read the "README" file at least five times, but nowhere could I find what the intensity setting K28x8 or similar actually means.
I'm using the argument -l 28x8, but I might as well be burning my GPU right now, because I have no actual idea what any of those numbers mean and it is explained nowhere.
Could someone please explain how those numbers work so that I can use them properly on my GTX 660?
Thank you!
It would be a bit like explaining to a passenger how to land a plane. Wouldn't it be easier if I just showed him how to push the auto-land button? (Yes, newer models have that feature.) To understand the terminology of launch configurations like -l K28x8 you would have to understand the CUDA programming model: what a launch grid is, what a thread block is, and how it consists of warps that are independently scheduled on your Kepler multiprocessor's four warp schedulers. And you would have to understand what parameters could make sense on your particular GPU architecture to achieve high occupancy. You would also have to know certain limits imposed by the shared memory and registers used by a given kernel. Try auto-tuning first. Pass either -l auto, or no -l argument at all. If that doesn't find a satisfactory configuration, we can talk about blocks and warps and the memory requirements. The treatise linked to in my follow-up posting also has a bit of information. Christian
Here's what I found as a noob... My card likes multiples of 160. It used to be 80x2, then 10x16, and now 5x32 is the best. So find your "magic number" by running autotune several times and looking at the first four-digit hash number it gives you (mine was 5120), then divide by 32 to get your magic number. Then experiment with multiples. This has always worked out best for me, and I have no idea why.
|
|
|
|
Valnurat
|
|
December 23, 2013, 06:45:30 PM |
|
How do you use cudaminer through a proxy?
Anyone? I can see there is a setting -x or --proxy. How do I use that?
|
|
|
|
dga
|
|
December 23, 2013, 06:56:19 PM |
|
I've been googling for ages and I have read the "README" file at least five times, but nowhere could I find what the intensity setting K28x8 or similar actually means.
I'm using the argument -l 28x8, but I might as well be burning my GPU right now, because I have no actual idea what any of those numbers mean and it is explained nowhere.
Could someone please explain how those numbers work so that I can use them properly on my GTX 660?
Thank you!
It would be a bit like explaining to a passenger how to land a plane. Wouldn't it be easier if I just showed him how to push the auto-land button? (Yes, newer models have that feature.) To understand the terminology of launch configurations like -l K28x8 you would have to understand the CUDA programming model: what a launch grid is, what a thread block is, and how it consists of warps that are independently scheduled on your Kepler multiprocessor's four warp schedulers. And you would have to understand what parameters could make sense on your particular GPU architecture to achieve high occupancy. You would also have to know certain limits imposed by the shared memory and registers used by a given kernel. Try auto-tuning first. Pass either -l auto, or no -l argument at all. If that doesn't find a satisfactory configuration, we can talk about blocks and warps and the memory requirements. The treatise linked to in my follow-up posting also has a bit of information. Christian
Here's what I found as a noob... My card likes multiples of 160. It used to be 80x2, then 10x16, and now 5x32 is the best. So find your "magic number" by running autotune several times and looking at the first four-digit hash number it gives you (mine was 5120), then divide by 32 to get your magic number. Then experiment with multiples. This has always worked out best for me, and I have no idea why.
Step 1: Figure out how many CUDA cores your device has by googling for it and looking at NVidia's page. Example: the GTX 660 (http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-660/specifications) has 960 CUDA cores.
Step 2: Figure out how many of those cores are physically present on each execution unit.
Step 2a: Figure out your compute capability: https://developer.nvidia.com/cuda-gpus Example: GTX 660, compute capability 3.0.
Step 2b: For compute capability 3.0, each execution unit has 192 CUDA cores.
Step 3: Divide the number of CUDA cores by the number per execution unit. Ex: 960 / 192 = 5. This tells you how many independent execution units you have ("SMXes" is the name for them in Kepler-based devices).
Setting the first number to the number of execution units is good. Therefore: 5 is a very good choice for the first number in your tuning. The second number depends on the amount of memory you have, but for compute capability 3.0 and 3.5 devices, 32 is a pretty good one.
So 5x32, as the follow-up poster suggested, is probably the right answer for you. But trusting autotune in general is probably simpler. :-)
-Dave
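For anyone who would rather ask the card than look the numbers up, here is a minimal sketch (using the standard CUDA runtime API; an illustration only, not part of cudaminer) that prints the multiprocessor count directly, which is the first number discussed above:

// smx_count.cu -- build with: nvcc smx_count.cu
#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "no CUDA devices found\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // On a GTX 660 (compute capability 3.0) this reports 5
        // multiprocessors, matching the 960 / 192 calculation above.
        printf("GPU #%d: %s, compute %d.%d, %d multiprocessors\n",
               dev, prop.name, prop.major, prop.minor,
               prop.multiProcessorCount);
    }
    return 0;
}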
|
|
|
|
ajax3592
Full Member
Offline
Activity: 210
Merit: 100
Crypto News & Tutorials - Coinramble.com
|
|
December 23, 2013, 07:38:47 PM Last edit: December 23, 2013, 08:38:58 PM by ajax3592 |
|
The latest "GeForce 331.82 Driver", which is required to run this release, hangs and BSODs my PC, so I'm not able to run the 18th Dec release
on a GTS 450.
Can you help out, guys? It's a bit urgent.
|
|
|
|
y2kcamaross
|
|
December 24, 2013, 12:02:12 AM |
|
The latest "GeForce 331.82 Driver", which is required to run this release, hangs and BSODs my PC, so I'm not able to run the 18th Dec release
on a GTS 450.
Can you help out, guys? It's a bit urgent. I'm having the same problem with two 780s.
|
|
|
|
Lacan82
|
|
December 24, 2013, 12:03:23 AM |
|
The latest "GeForce 331.82 Driver", which is required to run this release, hangs and BSODs my PC, so I'm not able to run the 18th Dec release
on a GTS 450.
Can you help out, guys? It's a bit urgent. I'm having the same problem with two 780s.
Clean install?
|
|
|
|
Treggar
|
|
December 24, 2013, 12:49:42 AM Last edit: December 24, 2013, 01:11:04 AM by Treggar |
|
I run the 331.93 driver with this release and it works just fine, but wow do my two cards ever get hot.
With the 12/10 release they run at 85°C and ~399-402 KH/s between the GTX 570 and GTX 465, and with the 12/18 release they run at 93°C and ~402-410 KH/s.
[2013-12-23 19:06:29] 2 miner threads started, using 'scrypt' algorithm.
[2013-12-23 19:06:30] Stratum detected new block
[2013-12-23 19:06:30] GPU #1: GeForce GTX 465 with compute capability 2.0
[2013-12-23 19:06:30] GPU #1: interactive: 0, tex-cache: 2D, single-alloc: 1
[2013-12-23 19:06:30] GPU #1: using launch configuration F11x16
[2013-12-23 19:06:30] GPU #0: GeForce GTX 570 with compute capability 2.0
[2013-12-23 19:06:30] GPU #0: interactive: 0, tex-cache: 2D, single-alloc: 1
[2013-12-23 19:06:30] GPU #0: using launch configuration F15x16
[2013-12-23 19:06:31] accepted: 1/1 (100.00%), 186.67 khash/s (yay!!!)
[2013-12-23 19:07:05] Stratum detected new block
[2013-12-23 19:07:07] accepted: 2/2 (100.00%), 404.98 khash/s (yay!!!)
[2013-12-23 19:07:08] accepted: 3/3 (100.00%), 407.63 khash/s (yay!!!)
[2013-12-23 19:07:09] accepted: 4/4 (100.00%), 403.10 khash/s (yay!!!)
[2013-12-23 19:07:20] accepted: 5/5 (100.00%), 402.91 khash/s (yay!!!)
[2013-12-23 19:07:27] accepted: 6/6 (100.00%), 410.95 khash/s (yay!!!)
[2013-12-23 19:07:33] accepted: 7/7 (100.00%), 410.43 khash/s (yay!!!)
[2013-12-23 19:07:35] accepted: 8/8 (100.00%), 408.76 khash/s (yay!!!)
|
|
|
|
|