Author Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX]  (Read 3314825 times)
Spiffy_1 (Full Member, Activity: 210)
December 22, 2013, 09:09:54 PM  #1741

If you're mining on a pool with vardiff, you're probably being assigned a difficulty your card can't process in the short time between blocks. It looks like everything is working right on your end. Diff should scale back down, but it can take up to 24 hours to dial in your difficulty. I tend to use pools where I can set my difficulty manually, so I don't run into this issue.
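For example, many pools let you pin the share difficulty through the stratum password field; the exact syntax (and the pool URL below) varies by pool, so treat this as an illustration only:

Code:
cudaminer -o stratum+tcp://pool.example.com:3333 -O myworker:d=64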

If you like what I've posted, mine for me on whatever algo you like on www.zpool.ca for a minute using my bitcoin address: 1BJJYPRcRPzTEfByCwkeJ8SCBcrnGD1nhL
cbuchner1 (Hero Member, Activity: 742)
December 22, 2013, 09:10:18 PM  #1742

Quote from: polarbear7217008
So I am just mining a bit for the experience, and it's mostly going well except for this weird thing: after a couple of accepted shares my hash rate goes out of control, and it never works again until I reboot my computer. If this is a known issue or someone else has seen this, could you let me know? Thanks, and sorry if this is newbish.

probably an unstable card due to excessive overclocking ...
cbuchner1 (Hero Member, Activity: 742)
December 22, 2013, 09:12:59 PM  #1743

Quote from: dga
Doubling up keys in a clever way might get that to 90, but at the cost of probably unacceptable register pressure. I tried it once and threw away the code, but there are a few other ways to imagine doing it.

It's really hard to beat the raw number of ALUs those AMD devices have when the code is as trivially parallel as brute-force hashing.

Have you heard of the lookup_gap feature of the OpenCL-based miner? It reduces the scratchpad size by a factor of 2 or 3 and replaces the lookups with some extra computation. Not sure if nVidia cards have the computational reserves, but we could try... the funnel shifter seems to help a bit in creating some breathing room. Compute 3.5 devices are definitely memory limited when hashing.
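Rough numbers for the tradeoff, assuming the usual Litecoin scrypt parameters (N = 1024, 128-byte scratchpad entries); back-of-the-envelope figures, not benchmarks:

Code:
full scratchpad: 1024 entries x 128 B = 128 KiB per hash
lookup_gap = 2 :  512 entries =  64 KiB, ~0.5 extra BlockMix recomputations per lookup (avg)
lookup_gap = 3 :  342 entries = ~43 KiB, ~1.0 extra BlockMix recomputations per lookup (avg)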
dga (Hero Member, Activity: 574)
December 22, 2013, 09:26:42 PM  #1744

Quote from: cbuchner1
Have you heard of the lookup_gap feature of the OpenCL-based miner? It reduces the scratchpad size by a factor of 2 or 3 and replaces the lookups with some extra computation. Not sure if nVidia cards have the computational reserves, but we could try... the funnel shifter seems to help a bit in creating some breathing room. Compute 3.5 devices are definitely memory limited when hashing.

Haven't seen it, but I'm guessing from your description that it stores only every other (or every third) scratchpad entry and dynamically recomputes when it needs to access an odd-numbered entry?

I've thought about it, but the nvidia kernels are so compute-bound that I never took it seriously.  Do you know any numbers for how much it speeds up the OpenCL miner?  It's not that hard to implement if it really seems worthwhile, but I'm skeptical for nvidia.

(It is, however, the obvious route to go for FPGA or ASIC.)

BTC: 17sb5mcCnnt4xH3eEkVi6kHvhzQRjPRBtS
polarbear7217008 (Newbie, Activity: 4)
December 22, 2013, 09:28:39 PM  #1745

Quote from: cbuchner1
probably an unstable card due to excessive overclocking ...

My card is not overclocked at all, but could the same result come from just overheating?
Spiffy_1 (Full Member, Activity: 210)
December 22, 2013, 09:43:50 PM  #1746

Doubtful. Try a pool that doesn't use vardiff, set it to its lowest difficulty, and see if you get solid returns. Multipool.us has set difficulty levels. You have the same hash rate as I do on my GTX 560M, and I get steady returns with it all the time.

madjules007 (Sr. Member, Activity: 402)
December 22, 2013, 09:55:15 PM  #1747

Quote from: Ponn
Hi everyone, I wrote a small Treatise on Cuda Miner, mind helping me check it over? (Much updated! wow!)
http://www.reddit.com/r/Dogecoinmining/comments/1tguse/a_treatise_on_cuda_miner/

Thanks for this. Great explanation. Got about 60 kh/s extra on my GTX 780!

Ponn (Newbie, Activity: 5)
December 22, 2013, 10:07:22 PM  #1748

Quote from: madjules007
Thanks for this. Great explanation. Got about 60 kh/s extra on my GTX 780!

Sweet! Glad I could help; this has been a great learning experience for me.
liermam (Newbie, Activity: 16)
December 23, 2013, 12:29:26 AM  #1749

Quote from: cbuchner1
compute 3.5 devices are definitely memory limited when hashing.

I've yet to see a shred of evidence of this in my 4+ weeks of testing the newest versions.
daddywarbucks (Newbie, Activity: 4)
December 23, 2013, 01:47:48 AM  #1750

Hey everyone,

I decided to start mining some alt coins and thought I'd fire up a device I had lying around. I'm using a Tesla S870 system, but only 1 of the 2 connections for now (2 cards). I'm having an issue finding a stable mining configuration. Essentially, the system appears to be busy and accepting work, but no coins are ever mined. I've done a lot of searching and trial & error with the -l command-line option. As I understand it, the -l option should be the multiprocessors x CUDA cores, correct? I.e. (from deviceQuery):

Code:
  (16) Multiprocessors x (  8) CUDA Cores/MP:    128 CUDA Cores

Therefore -l L16x8 ?

If I pump up the -l setting (e.g. L128x64, but it can be much lower), it will often say it's getting 300+ kh/s, which I believe is completely off. I noticed this in another post in this thread, and it seemed to be a one-off. I know that this is not rockstar hardware, but I would like to use it for some light mining. My questions are:

(1) What's the _right_ way to determine the -l settings? I have tried many options as well as 'auto' with -D and even -P (I am a web guy, after all ;) ), which often leads to L0x0 and crashes.

(2) Is there anything I can do to help with support for this hardware?

Here's my configuration:

OS:

Code:
$ uname -a
Linux hypercoil 3.5.0-23-generic #35~precise1-Ubuntu SMP Fri Jan 25 17:13:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

NVIDIA Driver:

Code:
$ cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module  304.54  Sat Sep 29 00:05:49 PDT 2012
GCC version:  gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)

NVCC/Cuda Tools:

Code:
$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2012 NVIDIA Corporation
Built on Fri_Sep_21_17:28:58_PDT_2012
Cuda compilation tools, release 5.0, V0.2.1221

CudaMiner:

Code:
$ ./cudaminer
   *** CudaMiner for nVidia GPUs by Christian Buchner ***
             This is version 2013-12-10 (beta)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
       Cuda additions Copyright 2013 Christian Buchner
   My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

Kind regards,

DW
UNOE (Sr. Member, Activity: 259)
December 23, 2013, 03:42:57 AM  #1751

Is there any way to do backup pools?

Antivanity (Newbie, Activity: 26)
December 23, 2013, 03:50:41 AM  #1752

Quote from: UNOE
Is there any way to do backup pools?

This is a feature to come; read the bottom of the readme.txt file.
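In the meantime, a simple wrapper loop works as a stopgap: when cudaminer exits because the pool died, launch it against a backup, then retry the primary. Pool URLs and credentials here are placeholders:

Code:
#!/bin/sh
# keep mining forever; fall back to the backup pool whenever the primary exits
while true; do
  ./cudaminer -o stratum+tcp://primary.example.com:3333 -O worker:pass
  ./cudaminer -o stratum+tcp://backup.example.com:3333  -O worker:pass
done

(On Windows, the same idea works as a batch file with a goto loop.)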
mareyx (Newbie, Activity: 3)
December 23, 2013, 04:46:01 AM  #1753

Recently started mining. I've got 2 Gigabyte GTX 770 4GB cards; however, I'm seeing some weird results.

My settings look like this: "-i 1 -l K16x16 -C 2"

I'm seeing a lot of "result does not validate on CPU"; basically every change I make to the settings will cause this error. Even restarting the client with the same settings will give me this error a bunch of times. I have to tweak the settings, try to launch with the new settings, and then switch back to my old settings to get it to work. Even launching it without any pre-set settings will cause an error.

Once I do get it to work, one of the GPUs reports a hashrate of ~2000 khash/s, which is obviously false, and the other shows ~350 khash/s. However, if I change my settings to "-i 0", one of the GPUs will show a hashrate of ~90,000 khash/s while the other one remains at 350.

Also, looking at MSI Afterburner, only one card is being utilized: GPU #1 is at 100% usage while GPU #2 is sitting at 0%. I can't seem to get both GPUs to be utilized.

Any suggestions or ideas?
Lacan82 (Sr. Member, Activity: 247)
December 23, 2013, 04:55:51 AM  #1754

Quote from: mareyx
(snip)

Is that configuration based on autotune, or did you just select it?

mareyx (Newbie, Activity: 3)
December 23, 2013, 05:01:48 AM  #1755

Quote from: Lacan82
Is that configuration based on autotune, or did you just select it?

The settings were originally "-D -H 1 -m 1 -d 0 -i 1 -l K16x16 -C 2", copied from here: https://litecoin.info/Mining_hardware_comparison

They were working fine at first for multiple hours of mining, but wouldn't work once I restarted the client, so I tweaked them a bit until I got it to work again.
The autotune settings haven't really been working for me; I just get a bunch of "result does not validate on CPU".
Lacan82 (Sr. Member, Activity: 247)
December 23, 2013, 05:08:28 AM  #1756

Quote from: mareyx
(snip)

Take out -l, because that could be an old config that's no longer valid. You need to re-autotune after every upgrade.
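For example, just leave -l out (or pass -l auto) and let it benchmark itself; keep your own pool details in place of the placeholders:

Code:
cudaminer -D -H 1 -C 2 -i 1 -o stratum+tcp://pool.example.com:3333 -O worker:pass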

kernels10 (Sr. Member, Activity: 408)
December 23, 2013, 05:09:31 AM  #1757

Quote
So after running my GTX 560 Ti for 2 days straight at a solid 280 khash on the new 12/18 software, cudaminer out of nowhere spiked the hashrate up to about 450, and the card started giving hardware errors, as it can't run that high.

I got the notification from my mining pool that a worker was down, so I RDP to the machine, close out cudaminer, and restart my script, no changes made at all.

Now all of a sudden cudaminer is saying, "unable to query CUDA driver version. Is an nVidia driver installed."
This of course isn't true.

Seeing as how this happened the very first time I ran cudaminer, I simply tried to reinstall the driver. When that didn't work, I tried downgrading the driver, and still no luck. I even installed the CUDA development kit, and that didn't work either. I can no longer get cudaminer to launch with any of the 3 versions that I have previously used.

I'm very confused at the moment. The only thing crossing my mind is that maybe when I RDP to the machine, the graphics settings are changed for remote desktop, and the CUDA driver is being disabled and therefore cannot relaunch.

Has anyone ever tried to restart cudaminer via RDP before?
The bigger question is why cudaminer decided to randomly jump to 450 khash after 2 straight days mining at 280.

Thoughts, comments, help, all appreciated. 5k doge to anyone who can help me find a solution.

Lots doge you rich coins wow cudaminer wow doge happy coin.

Quote from: kernels10
Driver crashed? Happens to me if I try to push my OC too high. Does it still happen after reboot?

Haven't used RDP, but I am using Chrome Remote Desktop and haven't had issues.

Quote
WOOOT!!!!! kernels10, you have been awarded 5k doge. My conclusion about RDP was 100% accurate, and I was able to verify that via Chrome Remote Desktop.

I used RDP to install Chrome Remote Desktop, exited RDP, entered through Chrome Remote Desktop, and the scripts started up perfectly. What this verified is that, at least on the GTX 560 Ti, RDP does indeed kill the CUDA nVidia drivers upon connection, therefore making it impossible to restart cudaminer.

I'm curious if this is the case with all Microsoft RDP sessions.

Thx :D
DL7Kf4tT1heq4E8NX41mSCKoaWnsySQEAt

Maybe MS RDP disables "unnecessary" devices for performance reasons?
I am not too familiar with RDP at all.
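Totally untested guess on my part: I've seen people suggest bouncing the session back to the physical console before disconnecting, so the real display driver stays active, something like:

Code:
tscon %SESSIONNAME% /dest:console

No idea whether that actually keeps CUDA alive for cudaminer, though.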
69charger (Full Member, Activity: 173)
December 23, 2013, 05:17:15 AM  #1758

Quote from: mareyx
(snip)

To get the other card working you need "-d 0,1", and then you can set intensity to "-i 1,0" to use the 2nd card to its max.

Ex. "-D -d 0,1 -i 1,0 -l auto,auto -H 1 -C 2"
Valnurat (Full Member, Activity: 129)
December 23, 2013, 10:17:08 AM  #1759

How do you use cudaminer through a proxy?
Tacticat (Full Member, Activity: 210)
December 23, 2013, 12:14:18 PM  #1760

I've been googling for ages, and I have read the "README" file at least five times, but nowhere could I find what the intensity setting K28x8 or similar actually means.

I'm using the argument -l 28x8, but I might as well be burning my GPU right now, because I have no actual idea what any of those numbers mean, and it is explained nowhere.

Could someone please explain how those numbers work, so that I can use them properly on my GTX 660?

Thank you!

Tips and donations:

15nqQGfkgoxrBnsshD6vCuMWuz71MK51Ug