Author Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX]  (Read 3426839 times)
DBG
Member
Activity: 119
Merit: 100
Digital Illustrator + Software/Hardware Developer
January 09, 2014, 01:26:43 AM
 #2201

DBG are you using the 12-18-2013 binary?

According to https://docs.google.com/spreadsheet/ccc?key=0Aj3vcsuY-JFNdHR4ZUN5alozQUNvU1pyd2NGeTNicGc&usp=sharing#gid=0 people are getting 300+ kH/s with the 660 Ti. Most of them are using K7x32. If you're using a more recent commit from github, try -C 1.

Thanks m8, I'm using the latest official release but I am setting things up for nightly builds. I changed my flags to "-H 1 -i 0 -l K14x16 -C 0 -m 1" and now I'm finally able to hit the 250 kH/s mentioned in the readme. The 250 MHz boost to the GPU is actually working now (previously the overclock was only applied during an interactive/auto start-up) and puts me up another ~30 kH/s. I have a lot more experimenting to do, but finally sitting down and fully RTFM'ing helped a lot (along with a bit of luck).
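For reference, a full command line combining those flags might look something like the sketch below; the pool URL and worker credentials are placeholders, and the ideal -l kernel config varies per card:

Code:
cudaminer.exe -H 1 -i 0 -l K14x16 -C 0 -m 1 -o stratum+tcp://your.pool.example:3333 -u yourworker -p yourpassword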

Also grazie cbuchner1, thanks for going open-source and being so active with the community Smiley.

Bitcoin - 3DTcMYT8SmRw4o4Lbq9cvm71YaUtVuNn29
Litecoin - MAoFYsBf7BzeK86gg6WRqzFncfwWnoYZet
/* Coins are never required but always appreciated if feeling generous! */
bigjme
Sr. Member
Activity: 350
Merit: 250
January 09, 2014, 01:56:50 AM
 #2202

So I'm now fine with compiling a lot of stuff. I've compiled a version of minerd which runs and gets 0.08 kH/s per thread instead of 0.07. But strangely, when I run my shell file to launch minerd in a terminal it starts fine, while cudaminer won't launch from the same shell file. Weird.
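For context, a typical launcher shell file for cudaminer might look like the sketch below; the install path, pool address, and credentials are placeholders, and whether this matches bigjme's actual setup isn't clear from the post:

Code:
#!/bin/bash
# Hypothetical launcher: adjust the path to wherever cudaminer was built.
cd /home/user/cudaminer || exit 1
./cudaminer --algo=scrypt -o stratum+tcp://your.pool.example:3333 -u worker -p password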

Owner of: cudamining.co.uk
coercion
Newbie
Activity: 34
Merit: 0
January 09, 2014, 02:20:10 AM
 #2203

Can someone make a short list of other scrypt-jane currencies please? I only heard about QQCoin so far.
scrypt-N parameters are included for anyone who wants to try them out (see the example invocation after the list).

YBCoin (YBC): start time 1372386273, minN: 4, maxN: 30
https://bitcointalk.org/index.php?topic=243046.0
Chinese YAC clone; NFactor just hit 14 today, same as YACoin. YBC has a much higher network hash rate. This is the only other jane coin I've mined, though I've slowly been looking into the others.

ZcCoin (ZCC): start time 1375817223, minN: 12, maxN: 30
https://bitcointalk.org/index.php?topic=268575.0

FreeCoin (FEC): start time 1375801200, minN: 6, maxN: 32
https://bitcointalk.org/index.php?topic=269669

OneCoin (ONC): start time 1371119462, minN: 6, maxN: 30
https://bitcointalk.org/index.php?topic=200177.0

QQCoin: start time 1387769316, minN: 4, maxN: 30
https://bitcointalk.org/index.php?topic=389238.0

Memory Coin
https://bitcointalk.org/index.php?topic=267522.0
This one apparently uses scrypt-jane but doesn't appear to use it the same way as the others. I couldn't find any start times or min/max parameters.
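For reference, a solo-mining invocation against one of these coins' wallets might look something like the sketch below; the RPC port, username, and password are placeholders that depend on your wallet's configuration, and the -l kernel config should be tuned for your card:

Code:
./cudaminer --algo=scrypt-jane -d 0 -i 0 -l K7x3 -o http://127.0.0.1:9332 -u rpcuser -p rpcpass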

On another note, I picked up a GT 640 today. The best I've gotten out of it on YACoin is 1.5 kH/s with K6x3. I thought I might be able to push that a little higher, having 4 GB and all, but anything more inevitably crashes the driver and halts my system.
bathrobehero
Legendary
Activity: 2002
Merit: 1050
ICO? Not even once.
January 09, 2014, 02:37:45 AM
 #2204

There's also CACHeCoin (CACH) (https://bitcointalk.org/index.php?topic=400389.0) and Radioactivecoin (RAD) (https://bitcointalk.org/index.php?topic=405481.0), but I don't know much about them. RAD is completely new and a bit of a weird one; I couldn't figure out much about it.

Not your keys, not your coins!
manofcolombia
Member
Activity: 84
Merit: 10
SizzleBits
January 09, 2014, 03:20:14 AM
 #2205

Quote

holy crap... you just gave me a link to the best, most sarcastic site ever.
I WILL use this daily...

PS. All this talk about scrypt-jane is making my Windows machines jealous...

jots
Newbie
Activity: 7
Merit: 0
January 09, 2014, 04:47:25 AM
 #2206

Hi-ya Christian,

My 660 Ti registers at ~270 khash/s peak (scrypt) for a single instance of cudaminer (which appears to start a single mining thread). By starting two instances, I'd have expected that rate to halve; instead, each instance reports ~200 khash/s peak.

Is there any efficiency to be gained by running multiple instances of cudaminer on the same card?  Or, am I reading these figures wrong?  Smiley
eduncan911
Member
Activity: 101
Merit: 10
Miner / Engineer
January 09, 2014, 05:11:57 AM
 #2207

The search on this forum software is horrendous... 

Sorry if this has been asked, but Google couldn't find this.

How do you set the difficulty factor manually with cudaminer like you can with cgminer?

I ask because I am solo mining directly against my wallet app and the difficulty dropped significantly today for the coin I was on, yet it still took nearly twice as long as it should have to find a block. But I have no way of knowing what difficulty cudaminer is running at.

cgminer detects new difficulty levels, prints them to the screen, and adjusts accordingly. Does cudaminer have this ability, perhaps in its debug output?




BTC: 131Zt92zoA7XUfkLhm1p2FwSP3tAxE43vf
cbuchner1 (OP)
Hero Member
Activity: 756
Merit: 502
January 09, 2014, 08:54:54 AM
 #2208

Hi-ya Christian,

My 660 Ti registers at ~270 khash/s peak (scrypt) for a single instance of cudaminer (which appears to start a single mining thread). By starting two instances, I'd have expected that rate to halve; instead, each instance reports ~200 khash/s peak.

This is surprising. What is your launch configuration?
What does GPU-Z show for GPU utilization when running just a single instance? And how do GPU utilization and memory usage compare with 1 vs. 2 instances?
cbuchner1 (OP)
Hero Member
Activity: 756
Merit: 502
January 09, 2014, 08:58:58 AM
 #2209

How do you set the difficulty factor manually with cudaminer like you can with cgminer?

I ask because I am solo mining directly against my wallet app and the difficulty dropped significantly today for the coin I was on, yet it still took nearly twice as long as it should have to find a block. But I have no way of knowing what difficulty cudaminer is running at.

With -D it prints the stratum difficulty whenever it changes. I am not aware of a print feature for getwork. When solo mining, it should ask the server for new work about every 5 seconds; wouldn't that always include a difficulty number?

Christian
CaptainBeck
Full Member
Activity: 168
Merit: 100
January 09, 2014, 09:51:11 AM
 #2210

Quote

holy crap... you just gave me a link to the best, most sarcastic site ever.
I WILL use this daily...

PS. All this talk about scrypt-jane is making my Windows machines jealous...

Why Windows machines???

I've got jane running on my Windows machine. My 660 Ti is doing about 2.5 kH/s and I'm still running the rest of my ATIs on scrypt because they hate jane.

cbuchner1 (OP)
Hero Member
Activity: 756
Merit: 502
January 09, 2014, 10:29:48 AM
 #2211

I've got jane running on my Windows machine. My 660 Ti is doing about 2.5 kH/s and I'm still running the rest of my ATIs on scrypt because they hate jane.

On Linux I've been getting 3.2 kH/s on a 660 Ti and I am heading for 3.6 kH/s once I get the -C 2 option going again.

I am running K7x3 -i 0 -m 1, and strangely Windows doesn't like this setting much (it's much slower there than e.g. K4x4).
CaptainBeck
Full Member
Activity: 168
Merit: 100
January 09, 2014, 10:38:54 AM
 #2212

I've got jane running on my Windows machine. My 660 Ti is doing about 2.5 kH/s and I'm still running the rest of my ATIs on scrypt because they hate jane.

On Linux I've been getting 3.2 kH/s on a 660 Ti and I am heading for 3.6 kH/s once I get the -C 2 option going again.

I am running K7x3 -i 0 -m 1, and strangely Windows doesn't like this setting much (it's much slower there than e.g. K4x4).

Anything more than K13 for me claims it requires too much memory and fails CPU validation.

So K13x1 seems to be the best for me.
Ultimist
Full Member
Activity: 812
Merit: 102
January 09, 2014, 12:12:57 PM
 #2213

Something is definitely not working right for me with the 12/18 version posted in the OP.

On my auto-tuned GTX 670, Nvidia Inspector sometimes shows GPU usage of only 25 to 35%, with a hashrate of about 85 kH/s; other times GPU usage goes to 95% and above. I don't understand why it isn't fully using the GPU. This version is very buggy and never seems to work the same way twice. The autotune never comes up with the same value twice either, even when run 2 minutes apart. Is it just picking things at random?

patoberli
Member
Activity: 106
Merit: 10
January 09, 2014, 12:45:45 PM
 #2214

You can try adding --benchmark -D to see the results it's getting (leave the -l parameter out). Sometimes the results are very close, and so it picks a different one each time.
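As a sketch, such an autotune/benchmark run (no -l, so the tuner picks the kernel config itself) could look like this; the algo flag depends on which coin you are tuning for:

Code:
./cudaminer --algo=scrypt --benchmark -D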

YAC: YA86YiWSvWEGSSSerPTMy4kwndabRUNftf
BTC: 16NqvkYbKMnonVEf7jHbuWURFsLeuTRidX
LTC: LTKCoiDwqEjaRCoNXfFhDm9EeWbGWouZjE
bathrobehero
Legendary
Activity: 2002
Merit: 1050
ICO? Not even once.
January 09, 2014, 12:53:56 PM
 #2215

You can try adding --benchmark -D to see the results it's getting (leave the -l parameter out). Sometimes the results are very close, and so it picks a different one each time.

That, and background apps stressing the card (even just a little bit) can affect the results. It's also worth noting that overclocking seems to confuse autotune fairly often as well.

I think K7x32 should be best for your card.

Not your keys, not your coins!
bigjme
Sr. Member
Activity: 350
Merit: 250
January 09, 2014, 02:09:50 PM
 #2216

OK, so my Linux system is mining now. Using the old setting of 16x1 I am getting 3.1 khash/s, and my CPU is hashing at 0.64 khash/s.

One thing to note: no driver crashes in Linux! So I may be able to get a higher hash rate than I am now.

Owner of: cudamining.co.uk
eduncan911
Member
Activity: 101
Merit: 10
Miner / Engineer
January 09, 2014, 02:18:39 PM
 #2217

How do you set the difficulty factor manually with cudaminer like you can with cgminer?

I ask because I am solo mining directly against my wallet app and the difficulty dropped significantly today for the coin I was on, yet it still took nearly twice as long as it should have to find a block. But I have no way of knowing what difficulty cudaminer is running at.

With -D it prints the stratum difficulty whenever it changes. I am not aware of a print feature for getwork. When solo mining, it should ask the server for new work about every 5 seconds; wouldn't that always include a difficulty number?

Christian


Excellent answer, thanks!  I will use -D from now on to see the changes.

As for requesting every 5 seconds, that sounds perfect.  But again, I'd want to see these changes.  -D sounds like the way to go.
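For example, a stratum session with difficulty reporting might be launched like the sketch below (pool address and credentials are placeholders); as Christian notes, getwork solo mining won't print difficulty the same way:

Code:
./cudaminer --algo=scrypt -D -o stratum+tcp://pool.example.com:3333 -u worker -p password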

BTC: 131Zt92zoA7XUfkLhm1p2FwSP3tAxE43vf
cbuchner1 (OP)
Hero Member
Activity: 756
Merit: 502
January 09, 2014, 02:37:10 PM
 #2218

Something is definitely not working right for me with the 12/18 version posted in the OP.

On my auto-tuned GTX 670, Nvidia Inspector sometimes shows GPU usage of only 25 to 35%, with a hashrate of about 85 kH/s; other times GPU usage goes to 95% and above. I don't understand why it isn't fully using the GPU. This version is very buggy and never seems to work the same way twice. The autotune never comes up with the same value twice either, even when run 2 minutes apart. Is it just picking things at random?


Hmm, too bad that this beta software has bugs. Here, have your money back: I award you 0 LTC.

The thing about autotune is that mid-range and high-end Kepler GPUs dynamically adjust their clock rates "as they see fit" to meet thermal and power targets, and hence there is a certain randomness to the autotuning.

There are Windows machines on which we cannot get 100% GPU utilization. This happened e.g. on a machine where I installed Windows Server 2012 R2 for evaluation purposes; it would never quite go above 80% GPU use.

Christian
bigjme
Sr. Member
Activity: 350
Merit: 250
January 09, 2014, 02:53:46 PM
 #2219

cbuchner1, this is my 780 running at over 3.7 khash/s!
It is just a straight screenshot, not cropped, sorry.

http://s29.postimg.org/62ttjbizb/Screenshot_from_2014_01_09_14_49_51.png

./cudaminer --algo=scrypt-jane -H 0 -i 0 -d 0 -l T20x1 -o http://127.0.0.1:3339 -u user -p pass

GPU memory usage is 2.86 GB, so my GPU and CPU now mine together at over 4.4 khash/s!!!!

Owner of: cudamining.co.uk
aliens
Newbie
Activity: 15
Merit: 0
January 09, 2014, 03:02:21 PM
 #2220

I can't seem to compile the latest git clone. I run ./autogen.sh (which doesn't output anything), then ./configure and make, and I get the following errors:

Code:
nvcc -g -O2 -Xptxas "-abi=no -v" -arch=compute_10 --maxrregcount=64 --ptxas-options=-v -I./compat/jansson -o salsa_kernel.o -c salsa_kernel.cu
salsa_kernel.cu(479): error: too few arguments in function call

salsa_kernel.cu(742): error: more than one instance of overloaded function "cuda_scrypt_core" has "C" linkage

salsa_kernel.cu(760): error: too few arguments in function call

3 errors detected in the compilation of "/tmp/tmpxft_00004701_00000000-6_salsa_kernel.cpp1.ii".

Any ideas on a fix?