Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX]
UNOE (Sr. Member)
December 23, 2013, 03:42:57 AM  #1741

Is there any way to do backup pools?

Antivanity (Newbie)
December 23, 2013, 03:50:41 AM  #1742

Quote from: UNOE
Is there any way to do backup pools?

That's a feature still to come; read the bottom of the readme.txt file.
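Until that lands, one workaround is an external watchdog that restarts the miner against a fallback pool whenever it exits. A minimal sketch in Python, assuming the usual cpuminer-style -o and -O options; the pool URLs, worker name and password below are placeholders:

Code:
import itertools
import subprocess
import time

# Hypothetical pool list: (url, user, password). Replace with your own pools.
POOLS = [
    ("stratum+tcp://primary.pool.example:3333", "worker1", "x"),
    ("stratum+tcp://backup.pool.example:3333", "worker1", "x"),
]

# Cycle through the pools forever; whenever cudaminer exits (pool down,
# network error, crash), wait a bit and start it against the next pool.
for url, user, password in itertools.cycle(POOLS):
    subprocess.call(["cudaminer", "-o", url, "-O", f"{user}:{password}"])
    time.sleep(10)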
mareyx (Newbie)
December 23, 2013, 04:46:01 AM  #1743

Recently started mining. I've got two Gigabyte GTX 770 4GB cards; however, I'm seeing some weird results.

My settings look like this: "-i 1 -l K16x16 -C 2"

I'm seeing a lot of "result does not validate on CPU"; basically every change I make to the settings will cause this error. Even restarting the client with the same settings will give me this error a bunch of times; I have to tweak the settings, try to launch with the new settings and then switch back to my old settings to get it to work. Even launching it without any preset settings will cause an error.

Once I do get it to work, one of the GPUs reports a hashrate of ~2000 khash/s, which is obviously false, and the other shows ~350 khash/s. However, if I change my settings to "-i 0", one of the GPUs will show a hashrate of ~90,000 khash/s while the other one remains at 350.

Also, looking at MSI Afterburner, only one card is being utilized: GPU #1 is at 100% usage while GPU #2 is sitting at 0%. I can't seem to get both GPUs to be utilized.

Any suggestions or ideas?
Lacan82 (Sr. Member)
December 23, 2013, 04:55:51 AM  #1744

Quote from: mareyx (#1743)

Is that configuration based off of autotune, or did you just select it?

mareyx (Newbie)
December 23, 2013, 05:01:48 AM  #1745

Quote from: Lacan82
Is that configuration based off of autotune, or did you just select it?

The settings were originally "-D -H 1 -m 1 -d 0 -i 1 -l K16x16 -C 2", copied from here: https://litecoin.info/Mining_hardware_comparison

They were working fine at first for multiple hours of mining, but wouldn't work once I restarted the client, so I tweaked them a bit until I got them to work again.
The autotune settings haven't really been working for me; I just get a bunch of "result does not validate on CPU".
Lacan82 (Sr. Member)
December 23, 2013, 05:08:28 AM  #1746

Quote from: mareyx (#1745)

Take out -l, because that could be an old config that's no longer valid. You need to autotune after every upgrade.
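For example, something like "-D -H 1 -m 1 -d 0 -i 1 -C 2 -l auto" (plus your usual pool options) will re-run the autotune at startup; leaving out -l entirely does the same.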

kernels10 (Sr. Member)
December 23, 2013, 05:09:31 AM  #1747

Quote:
So after running my GTX 560 Ti for 2 days straight at a solid 280 khash on the new 12/18 software, cudaminer out of nowhere spiked the hashrate up to about 450 and the card started giving hardware errors, as it can't run that high.

I got the notification from my mining pool that a worker was down, so I RDP'd to the machine, closed out cudaminer and restarted my script, no changes made at all.

Now all of a sudden cudaminer is saying, "unable to query CUDA driver version. Is an nVidia driver installed?"
This of course isn't true.

Seeing as how this happened the very first time I ran cudaminer, I simply tried to reinstall the driver. When that didn't work I tried downgrading the driver, and still no luck. I even installed the CUDA development kit and that didn't work either. I can no longer get cudaminer to launch in any of the 3 versions that I have previously used.

I'm very confused at the moment. The only thing crossing my mind is that maybe when I RDP to the machine, the graphics settings are changed for remote desktop and the CUDA driver is disabled, and therefore cudaminer cannot relaunch.

Has anyone ever tried to restart cudaminer via RDP before?
The bigger question is why cudaminer decided to randomly jump to 450 khash after 2 straight days of mining at 280.

Thoughts, comments, help, all appreciated. 5k doge to anyone that can help me find a solution.

Lots doge you rich coins wow cudaminer wow doge happy coin.

Quote from: kernels10
Driver crashed? Happens to me if I try to push my OC too high. Does it still happen after a reboot?

Haven't used RDP, but I am using Chrome Remote Desktop and haven't had issues.

Quote:
WOOOT!!!!! kernels10, you have been awarded 5k doge. My conclusion about RDP was 100% accurate, and I was able to verify that via Chrome Remote Desktop.

I used RDP to install Chrome Remote Desktop, exited RDP, entered through Chrome Remote Desktop, and the scripts started up perfectly. What this verified is that, at least on the GTX 560 Ti, RDP does indeed kill the CUDA nVidia drivers upon connection, therefore making it impossible to restart cudaminer.

I'm curious if this is the case with all Microsoft RDP sessions.

Thx  Cheesy
DL7Kf4tT1heq4E8NX41mSCKoaWnsySQEAt

Maybe MS RDP disables "unnecessary" features for performance reasons?
I am not too familiar with RDP at all.
69charger (Full Member)
December 23, 2013, 05:17:15 AM  #1748

Quote from: mareyx (#1745)

To get the other card working you need "-d 0,1", and then you can set intensity to "-i 1,0" to use the 2nd card to its max.

Ex. "-D -d 0,1 -i 1,0 -l auto,auto -H 1 -C 2"
Valnurat (Full Member)
December 23, 2013, 10:17:08 AM  #1749

How do you use cudaminer through a proxy?
Tacticat (Full Member)
December 23, 2013, 12:14:18 PM  #1750

I've been googling for ages and I have read the "README" file at least five times, but nowhere could I find what the K28x8 launch setting (or similar) actually means.

I'm using the argument -l 28x8, but I might as well be burning my GPU right now, because I have no actual idea what any of those numbers mean and it is nowhere explained.

Could someone please explain how those numbers work so that I can use them properly on my GTX 660?

Thank you!

cbuchner1 (OP, Hero Member)
December 23, 2013, 01:08:20 PM  #1751

Quote from: Tacticat (#1750)

It would be a bit like explaining to a passenger how to land a plane. Wouldn't it be easier if I just showed him how to push the auto-land button? (Yes, newer models have that feature.)

To understand the terminology of launch configurations like -l K28x8 you would have to understand the CUDA programming model: what a launch grid is, what a thread block is, and how it consists of warps that are independently scheduled on your Kepler multiprocessor's four warp schedulers. And you would have to understand what parameters could make sense on your particular GPU architecture to achieve high occupancy. You would also have to know certain limits imposed by the shared memory and registers used by a given kernel.

Try auto-tuning first. Pass either -l auto, or no -l argument at all.

If that doesn't find a satisfactory configuration, we can talk about blocks and warps and the memory requirements.
The treatise linked to in my follow-up posting also has a bit of information.
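For a rough sense of what those numbers imply, here is a minimal sketch, assuming one scrypt hash per thread, 32 threads per warp, and the standard ~128 KiB scrypt scratchpad per hash (kernels using a lookup gap need less):

Code:
# "-l K28x8" means 28 thread blocks, each holding 8 warps of 32 threads.
blocks = 28
warps_per_block = 8
threads_per_warp = 32

concurrent_hashes = blocks * warps_per_block * threads_per_warp  # 7168
scratchpad_mib = concurrent_hashes * 128 / 1024                  # ~896 MiB

print(f"{concurrent_hashes} concurrent hashes, ~{scratchpad_mib:.0f} MiB of scratchpad")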

Christian
cbuchner1 (OP, Hero Member)
December 23, 2013, 01:31:54 PM  #1752


Quote:
Hi everyone, I wrote a small Treatise on Cuda Miner, mind helping me check it over? (Much updated! wow!)
http://www.reddit.com/r/Dogecoinmining/comments/1tguse/a_treatise_on_cuda_miner/

The GPU_MAX_ALLOC_PERCENT variable is snake oil for nVidia CUDA devices. I am pretty sure the driver won't care about this flag.

Christian
ak84 (Full Member)
December 23, 2013, 01:44:49 PM  #1753

Quote from: Tacticat (#1750)

Your GPU isn't gonna burn up, dude. Relax. Hundreds of 660 Ti owners here have been mining away on a whole host of Kxxxx configurations. I myself have used K7x32, K12x16, K31x6, K14x8, K14x16, and K12x8 (recommended by -l auto)

and THEY BURNED MY GPU OH MY GOD


69charger (Full Member)
December 23, 2013, 04:12:34 PM  #1754

Quote from: cbuchner1 (#1751)


Here's what I found as a noob... My card likes multiples of 160. It used to be 80x2, then 10x16; now 5x32 is the best. So find your "magic number" by running autotune several times and looking at the first four-digit number it gives you (mine was 5120), then dividing by 32; that's your magic number. Then experiment with multiples. This has always worked out best for me and I have no idea why.  Grin
Valnurat (Full Member)
December 23, 2013, 06:45:30 PM  #1755

Quote from: Valnurat
How do you use cudaminer through a proxy?

Anyone? I can see there is a setting -x or --proxy. How do I use that?
dga (Hero Member)
December 23, 2013, 06:56:19 PM  #1756

Quote from: 69charger (#1754)

Step 1:  Figure out how many CUDA cores your device has by googling for it and looking at NVidia's page.  Example:  GTX 660
http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-660/specifications

Has 960 CUDA cores.

Step 2:  Figure out how many of those cores are physically present on each execution unit.
  Step 2a:  Figure out your compute capability
   https://developer.nvidia.com/cuda-gpus
    Example, GTX 660, compute capability 3.0

  Step 2b:  For compute capability 3.0, each execution unit has 192 CUDA cores.

Step 3:  Divide #CUDA cores by number per execution unit.  Ex:  960 / 192 = 5.

  This tells you how many independent execution units you have ("SMXes" is the name for them in Kepler-based devices).
Setting the first number to the number of execution units is good.

Therefore:  5 is a very good choice for the first number in your tuning.  The second number depends on the amount of memory you have, but for compute capability 3.0 and 3.5 devices, 32 is a pretty good one.  So 5x32, as the follow-up poster suggested, is probably the right answer for you.  But trusting autotune in general is probably simpler. :-)
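Putting the steps together, a minimal sketch of that arithmetic with the GTX 660 numbers from above plugged in:

Code:
# Numbers from NVIDIA's spec pages: total CUDA cores for the card and
# cores per execution unit (SMX) for its compute capability.
cuda_cores = 960        # GTX 660
cores_per_smx = 192     # compute capability 3.0
smx_count = cuda_cores // cores_per_smx          # 960 / 192 = 5

warps_per_block = 32    # a good starting point for compute 3.0/3.5 cards
print(f"suggested launch config: -l K{smx_count}x{warps_per_block}")  # K5x32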

  -Dave

ajax3592 (Full Member)
December 23, 2013, 07:38:47 PM  #1757

The latest "GeForce 331.82 Driver", which is required to run this release, hangs and BSODs my PC. Therefore I'm not able to run the 18th Dec release  Cry

On a GTS 450.

Can you guys help out? It's a bit urgent.

y2kcamaross (Full Member)
December 24, 2013, 12:02:12 AM  #1758

Quote from: ajax3592 (#1757)

I'm having the same problem with 2 780s.
Lacan82 (Sr. Member)
December 24, 2013, 12:03:23 AM  #1759

Quote from: y2kcamaross (#1758)

Clean install?

Treggar (Full Member)
December 24, 2013, 12:49:42 AM  #1760

I run the 331.93 driver with this release and it works just fine, but wow do my two cards ever get hot.

With the 12/10 release they run at 85C, ~399-402 khash/s between the GTX 570 and GTX 465, and with the 12/18 release they run at 93C, ~402-410 khash/s.

[2013-12-23 19:06:29] 2 miner threads started, using 'scrypt' algorithm.
[2013-12-23 19:06:30] Stratum detected new block
[2013-12-23 19:06:30] GPU #1: GeForce GTX 465 with compute capability 2.0
[2013-12-23 19:06:30] GPU #1: interactive: 0, tex-cache: 2D, single-alloc: 1
[2013-12-23 19:06:30] GPU #1: using launch configuration F11x16
[2013-12-23 19:06:30] GPU #0: GeForce GTX 570 with compute capability 2.0
[2013-12-23 19:06:30] GPU #0: interactive: 0, tex-cache: 2D, single-alloc: 1
[2013-12-23 19:06:30] GPU #0: using launch configuration F15x16
[2013-12-23 19:06:31] accepted: 1/1 (100.00%), 186.67 khash/s (yay!!!)
[2013-12-23 19:07:05] Stratum detected new block
[2013-12-23 19:07:07] accepted: 2/2 (100.00%), 404.98 khash/s (yay!!!)
[2013-12-23 19:07:08] accepted: 3/3 (100.00%), 407.63 khash/s (yay!!!)
[2013-12-23 19:07:09] accepted: 4/4 (100.00%), 403.10 khash/s (yay!!!)
[2013-12-23 19:07:20] accepted: 5/5 (100.00%), 402.91 khash/s (yay!!!)
[2013-12-23 19:07:27] accepted: 6/6 (100.00%), 410.95 khash/s (yay!!!)
[2013-12-23 19:07:33] accepted: 7/7 (100.00%), 410.43 khash/s (yay!!!)
[2013-12-23 19:07:35] accepted: 8/8 (100.00%), 408.76 khash/s (yay!!!)