DBG
Member
Offline
Activity: 119
Merit: 100
Digital Illustrator + Software/Hardware Developer
|
|
January 09, 2014, 01:26:43 AM |
|
Thanks m8, I'm using the latest official release but I am setting things up for nightly builds. I changed my flags to "-H 1 -i 0 -l K14x16 -C 0 -m 1" and now I'm finally able to hit the 250 kH/s talked about in the readme. The 250 MHz boost to the GPU is actually working now (overclocking was previously only applied during an interactive/auto start-up) and puts me up another ~30 kH/s. I have a lot more playing around to do, but finally sitting down and fully RTFM helped a lot (along with a bit of luck). Also, thanks cbuchner1 for going open-source and being so active with the community.
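For anyone who wants to copy this, the full launch line looks roughly like the sketch below. The pool URL and worker credentials are placeholders, and -o/-O are the cpuminer-style options cudaminer inherits, so double-check them against your build's --help.
Code:
#!/bin/sh
# Sketch of the launch described above - pool.example.com and WORKER:PASS are
# placeholders; -o/-O are assumed to follow the cpuminer-style syntax.
#   -H 1       CPU hashing mode
#   -i 0       non-interactive: dedicate the GPU fully to mining
#   -l K14x16  Kepler launch configuration (blocks x warps)
#   -C 0 -m 1  texture-cache and memory-allocation settings
./cudaminer -o stratum+tcp://pool.example.com:3333 -O WORKER:PASS \
    -H 1 -i 0 -l K14x16 -C 0 -m 1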
|
Bitcoin - 3DTcMYT8SmRw4o4Lbq9cvm71YaUtVuNn29 Litecoin - MAoFYsBf7BzeK86gg6WRqzFncfwWnoYZet /* Coins are never required but always appreciated if feeling generous! */
|
|
|
bigjme
|
|
January 09, 2014, 01:56:50 AM |
|
So I'm now fine with compiling a lot of stuff. I have compiled a version of minerd which runs and gets 0.08 kH/s per thread instead of 0.07. Strangely, when I run my shell file to launch that in a terminal it launches fine, but cudaminer launched from the same shell file won't. Weird.
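For reference, a stripped-down version of what the launcher does is below; the path, pool URL, and credentials here are placeholders rather than my exact script.
Code:
#!/bin/sh
# Placeholder launcher - the path, pool URL and WORKER:PASS are not the real values.
cd /path/to/CudaMiner || exit 1   # run from the build directory so relative paths resolve
./cudaminer -o stratum+tcp://pool.example.com:3333 -O WORKER:PASS -l K16x1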
|
Owner of: cudamining.co.uk
|
|
|
coercion
Newbie
Offline
Activity: 34
Merit: 0
|
|
January 09, 2014, 02:20:10 AM |
|
Can someone make a short list of other scrypt-jane currencies please? I only heard about QQCoin so far.
N-factor parameters included for anyone who wants to try them out.

YBCoin (YBC) - Start time: 1372386273, minN: 4, maxN: 30 - https://bitcointalk.org/index.php?topic=243046.0 - Chinese YAC clone; its NFactor just hit 14 today, same as YACoin, but YBC has a much higher network hash rate. This is the only other jane coin I've mined, though I've slowly been looking into the others.
ZcCoin (ZCC) - Start time: 1375817223, minN: 12, maxN: 30 - https://bitcointalk.org/index.php?topic=268575.0
FreeCoin (FEC) - Start time: 1375801200, minN: 6, maxN: 32 - https://bitcointalk.org/index.php?topic=269669
OneCoin (ONC) - Start time: 1371119462, minN: 6, maxN: 30 - https://bitcointalk.org/index.php?topic=200177.0
QQCoin - Start time: 1387769316, minN: 4, maxN: 30 - https://bitcointalk.org/index.php?topic=389238.0
Memory Coin - https://bitcointalk.org/index.php?topic=267522.0 - This one apparently uses scrypt-jane but doesn't appear to use it the same way as the others; I couldn't find any start times or min/max parameters.

On another note, I picked up a GT 640 today. The best I've gotten out of it on YACoin is 1.5 kH/s with K6x3. I thought I might be able to push that a little higher, having 4 GB and all, but anything more inevitably crashes the driver and halts my system.
|
|
|
|
|
manofcolombia
Member
Offline
Activity: 84
Merit: 10
SizzleBits
|
|
January 09, 2014, 03:20:14 AM |
|
Holy crap... you just gave me a link to the best, most sarcastic site ever. I WILL use this daily... PS. All this talk about scrypt-jane is making my Windows machines jealous...
|
|
|
|
jots
Newbie
Offline
Activity: 7
Merit: 0
|
|
January 09, 2014, 04:47:25 AM |
|
Hi-ya Christian, My 660 Ti registers ~270 kH/s peak (scrypt) for a single instance of cudaminer (which appears to start a single mining thread). By starting two instances, I'd have expected that rate to halve; instead, each instance reports ~200 kH/s peak. Is there any efficiency to be gained by running multiple instances of cudaminer on the same card? Or am I reading these figures wrong?
|
|
|
|
eduncan911
Member
Offline
Activity: 101
Merit: 10
Miner / Engineer
|
|
January 09, 2014, 05:11:57 AM |
|
The search on this forum software is horrendous...
Sorry if this has been asked, but Google couldn't find this.
How do you set the difficulty factor manually with cudaminer like you can with cgminer?
I ask because I am solo mining directly against my wallet app and the difficulty dropped significantly today for the coin I was on, yet it still took nearly twice as long as it should have to find a block. But I have no way of knowing what difficulty cudaminer is running at.
cgminer detects new difficulty levels, prints them to the screen, and adjusts accordingly. Does cudaminer have this ability, perhaps in its debug output?
|
BTC: 131Zt92zoA7XUfkLhm1p2FwSP3tAxE43vf
|
|
|
cbuchner1 (OP)
|
|
January 09, 2014, 08:54:54 AM |
|
Hi-ya Christian,
My 660 Ti registers ~270 kH/s peak (scrypt) for a single instance of cudaminer (which appears to start a single mining thread). By starting two instances, I'd have expected that rate to halve; instead, each instance reports ~200 kH/s peak.
This is surprising. What is your launch configuration? What does GPU-Z show for GPU utilization when running just a single instance, and how do GPU utilization and memory usage compare with one instance versus two?
|
|
|
|
cbuchner1 (OP)
|
|
January 09, 2014, 08:58:58 AM |
|
How do you set the difficulty factor manually with cudaminer like you can with cgminer?
I ask because I am solo mining directly against my wallet app and the difficulty dropped significantly today for the coin I was on, yet it still took nearly twice as long as it should have to find a block. But I have no way of knowing what difficulty cudaminer is running at.
With -D it prints the stratum difficulty whenever it changes. I am not aware of a similar print feature for getwork. When solo mining it should ask the server for new work about every 5 seconds; wouldn't that always include a difficulty number? Christian
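Something along these lines should show the difficulty messages when solo mining; the port and RPC credentials below are placeholders for whatever your wallet's .conf file uses, and -O is the usual user:pass option.
Code:
# Hypothetical solo-mining launch against a local wallet via getwork.
# 127.0.0.1:9332 and RPCUSER:RPCPASS are placeholders - use the rpcport,
# rpcuser and rpcpassword values from your coin's .conf file.
./cudaminer -D -o http://127.0.0.1:9332 -O RPCUSER:RPCPASS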
|
|
|
|
CaptainBeck
|
|
January 09, 2014, 09:51:11 AM |
|
Holy crap... you just gave me a link to the best, most sarcastic site ever. I WILL use this daily... PS. All this talk about scrypt-jane is making my Windows machines jealous... Why Windows machines??? I've got jane running on my Windows machine. My 660 Ti does about 2.5 kH/s and I'm still running the rest of my ATIs on scrypt because they hate jane.
|
|
|
|
cbuchner1 (OP)
|
|
January 09, 2014, 10:29:48 AM |
|
I've got jane running on my Windows machine. My 660 Ti does about 2.5 kH/s and I'm still running the rest of my ATIs on scrypt because they hate jane.
On Linux I've been getting 3.2 kH/s on a 660 Ti and I am heading for 3.6 kH/s once I get the -C 2 option going again. I am running K7x3 -i 0 -m 1, and strangely Windows does not like this setting much (it is much slower there than e.g. K4x4).
|
|
|
|
CaptainBeck
|
|
January 09, 2014, 10:38:54 AM |
|
I've got jane running on my Windows machine. My 660 Ti does about 2.5 kH/s and I'm still running the rest of my ATIs on scrypt because they hate jane.
On Linux I've been getting 3.2 kH/s on a 660 Ti and I am heading for 3.6 kH/s once I get the -C 2 option going again. I am running K7x3 -i 0 -m 1, and strangely Windows does not like this setting much (it is much slower there than e.g. K4x4). Anything more than K13 for me claims it requires too much memory and gives invalid CPU results, so K13x1 seems to be the best for me.
|
|
|
|
Ultimist
|
|
January 09, 2014, 12:12:57 PM |
|
Something is definitely not working right for me with the 12/18 version posted in the OP.
On my auto-tuned GTX 670, Nvidia Inspector sometimes shows GPU usage of only 25 to 35%, with a hash rate of about 85 kH/s; other times GPU usage goes to 95% and above. I don't understand why it isn't fully using the GPU. This version is very buggy and never seems to work the same way twice. The autotune never comes up with the same value twice either, even if run 2 minutes after the previous run. Is it just picking things at random?
|
|
|
|
patoberli
Member
Offline
Activity: 106
Merit: 10
|
|
January 09, 2014, 12:45:45 PM |
|
You can try adding --benchmark -D to see the results it's getting (leave the -l parameter out). Sometimes the results are very close, and so it picks a different one each time.
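In other words, a run like the one below: with no -l the autotuner is free to test configurations, and -D prints what it measures. As far as I know --benchmark needs no pool connection, but check --help for your build.
Code:
# Offline autotune/benchmark run - no -l, so cudaminer picks the launch config
# itself; -D shows what it measures. Repeat a few times and compare, since
# the top results can be very close to each other.
./cudaminer --benchmark -D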
|
YAC: YA86YiWSvWEGSSSerPTMy4kwndabRUNftf BTC: 16NqvkYbKMnonVEf7jHbuWURFsLeuTRidX LTC: LTKCoiDwqEjaRCoNXfFhDm9EeWbGWouZjE
|
|
|
bathrobehero
Legendary
Offline
Activity: 2002
Merit: 1051
ICO? Not even once.
|
|
January 09, 2014, 12:53:56 PM |
|
You can try adding --benchmark -D to see the results it's getting (leave the -l parameter out). Sometimes the results are very close, and so it picks a different one each time.
That, and background apps stressing the card (even just a little bit) can affect the results. It's also worth noting that overclocking seems to confuse autotune fairly often as well. I think K7x32 should be best for your card.
|
Not your keys, not your coins!
|
|
|
bigjme
|
|
January 09, 2014, 02:09:50 PM |
|
OK, so my Linux system is mining now. Using the old settings of 16x1 I am getting 3.1 kH/s and my CPU is hashing at 0.64 kH/s.
And one thing to note: no driver crashes in Linux! So I may be able to get a higher hash rate than I am getting now.
|
Owner of: cudamining.co.uk
|
|
|
eduncan911
Member
Offline
Activity: 101
Merit: 10
Miner / Engineer
|
|
January 09, 2014, 02:18:39 PM |
|
How do you set the difficulty factor manually with cudaminer like you can with cgminer?
I ask because I am solo mining directly against my wallet app and the difficulty dropped significantly today for the coin I was on, yet it still took nearly twice as long as it should have to find a block. But I have no way of knowing what difficulty cudaminer is running at.
With -D it prints the stratum difficulty whenever it changes. I am not aware of a similar print feature for getwork. When solo mining it should ask the server for new work about every 5 seconds; wouldn't that always include a difficulty number? Christian Excellent answer, thanks! I will use -D from now on to see the changes. As for requesting new work every 5 seconds, that sounds perfect, but again, I'd want to see those changes, so -D sounds like the way to go.
|
BTC: 131Zt92zoA7XUfkLhm1p2FwSP3tAxE43vf
|
|
|
cbuchner1 (OP)
|
|
January 09, 2014, 02:37:10 PM |
|
Something is definitely not working right for me with the 12/18 version posted in the OP.
On my auto-tuned GTX 670, Nvidia Inspector sometimes shows GPU usage of only 25 to 35%, with a hash rate of about 85 kH/s; other times GPU usage goes to 95% and above. I don't understand why it isn't fully using the GPU. This version is very buggy and never seems to work the same way twice. The autotune never comes up with the same value twice either, even if run 2 minutes after the previous run. Is it just picking things at random?
Hmm, too bad that this beta software has bugs. Here, have your money back. I award you 0 LTC. The thing about autotune is that mid-range and high-end Kepler GPUs dynamically adjust clock rates "as they see fit" to meet thermal and power requirements, and hence there is a certain randomness to the autotuning. There are also Windows machines on which we cannot get 100% GPU utilization. This happened e.g. on a machine on which I installed Windows Server 2012 R2 for evaluation purposes; it would never quite go above 80% GPU use. Christian
|
|
|
|
|
aliens
Newbie
Offline
Activity: 15
Merit: 0
|
|
January 09, 2014, 03:02:21 PM |
|
Can't seem to compile the latest git clone. I run ./autogen.sh (which doesn't output anything), then ./configure and make, and get the following errors:
nvcc -g -O2 -Xptxas "-abi=no -v" -arch=compute_10 --maxrregcount=64 --ptxas-options=-v -I./compat/jansson -o salsa_kernel.o -c salsa_kernel.cu
salsa_kernel.cu(479): error: too few arguments in function call
salsa_kernel.cu(742): error: more than one instance of overloaded function "cuda_scrypt_core" has "C" linkage
salsa_kernel.cu(760): error: too few arguments in function call
3 errors detected in the compilation of "/tmp/tmpxft_00004701_00000000-6_salsa_kernel.cpp1.ii".
Any ideas on a fix?
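For reference, this is roughly the sequence I'm running (a fresh clone, stock configure, no local edits):
Code:
# Build sequence as described above - fresh checkout, nothing modified.
git clone https://github.com/cbuchner1/CudaMiner.git
cd CudaMiner
./autogen.sh     # produces no output for me
./configure
make             # fails in salsa_kernel.cu with the three errors quoted above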
|
|
|
|
|