datguyian
|
|
April 12, 2013, 06:52:57 AM |
|
Another Quadro (4000) was getting around 24 on the previous driver; now it's at 48 configured at 68x2. When I ran without a specified kernel config, it configured itself at 68x7 but was hashing junk shares. So I started at 68x1, which did about 14 kh/s, then tried 68x2 and it got up to 48 kh/s. Anything higher produces junk shares or kills cudaminer and crashes the driver. I wish I knew more about this kernel setting... I would play around with it a bit to see what would work. Instead I just crash my system if I mess with it too much :| I would be alright with that if I were physically near the machine to restart it.
Averaging more like 54 kh/s now with 68x4. The driver still crashes when I end cudaminer for some reason. If I went to 68x5 or 68x6, higher rates were displayed but all the shares were bad (is there a proper term for this? Not sure what else to call the "booo!" shares...).
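For anyone else puzzling over these launch configs: as a rough illustration (not cudaminer's actual code), a BxW setting like 68x2 typically maps to B thread blocks of W warps (32 threads each), so pushing W too high can exhaust per-block resources or trip the display driver watchdog. A minimal sketch, with that mapping assumed:
Code:
// Sketch only: how a "68x2"-style launch config commonly maps to CUDA
// launch dimensions. Assumption: B = thread blocks, W = warps per block.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void dummy_scrypt_kernel() { /* hashing work would go here */ }

int main() {
    int B = 68, W = 2;            // e.g. from a 68x2 config
    dim3 grid(B);                 // 68 thread blocks
    dim3 block(W * 32);           // 2 warps = 64 threads per block
    dummy_scrypt_kernel<<<grid, block>>>();
    cudaError_t err = cudaDeviceSynchronize();
    // Overly aggressive configs can fail here or hit the Windows
    // display driver watchdog, which looks like a driver crash.
    printf("launch %s\n", err == cudaSuccess ? "ok" : cudaGetErrorString(err));
    return 0;
}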
|
|
|
|
cbuchner1 (OP)
|
|
April 12, 2013, 07:48:30 AM |
|
CudaMiner 10/04, GeForce 9800 GT OC: Cudaminer - 57 khash/sec, Pool - 48 khash/sec
That is really nice for a granny from 2008!
|
|
|
|
jimmy3dita
|
|
April 12, 2013, 10:31:30 AM |
|
Since you have an i7, why not switch to the internal Intel HD graphics for your primary display and let the nvidia cards run their thing in the background? This will eliminate any drops you see from moving the mouse, scrolling, moving windows, etc.
...unless of course it's not a dedicated mining machine and you use it for games or other 3D work. As for your loaded CPU dropping your GPU hashrate, I have no idea, as we have the same CPU and I don't experience that.
I've got the following and haven't experienced anything that you've mentioned (including a fully loaded CPU dropping the hashrate) - maybe it's because I'm using the Intel HD graphics as my primary display: i7-3770K @ 4.6 GHz, Z77 chipset, 24 GB DDR3-1800, 2x GTX 560 SE, 1x GT 430, Windows 7 (Aero disabled), driver 314.22.
I get a total of ~240 Kh/s from the GPUs and 43 Kh/s from the CPU (6 threads).
You are absolutely right, my i7 is just an everyday PC. Mining in my country isn't worth it with any computer configuration (or with any other cryptocurrency), due to insanely high power costs of about €0.30 per kWh. Nevertheless I'm going to plug in the internal graphics and see; maybe I can reach slightly better results. Thanks!
|
|
|
|
cbuchner1 (OP)
|
|
April 12, 2013, 11:11:03 AM |
|
Are you guys ready for another 10-20% speed boost? GTX 660 Ti now reporting 180 kHash/s (previously about 150).
Yes / No / Maybe
The nVidia cards have a texture cache, so why not use it?
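For readers wondering what "use the texture cache" means in practice, here is a sketch under assumptions (not necessarily the kernel's actual code): bind the read-only scratchpad buffer to a legacy 1D texture reference and fetch through it, so loads are served by the texture cache path. Names like texScratch are illustrative only.
Code:
// Illustrative sketch: routing read-only loads through the texture cache
// with the legacy texture-reference API. Not cudaminer's actual kernel.
#include <cuda_runtime.h>

texture<uint4, cudaTextureType1D, cudaReadModeElementType> texScratch;

__global__ void read_via_texture(uint4 *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = tex1Dfetch(texScratch, i);  // load served by the texture cache
}

void bind_scratchpad(uint4 *d_scratch, size_t bytes) {
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<uint4>();
    // Bind the whole buffer; fetches outside the bound range are undefined.
    cudaBindTexture(NULL, texScratch, d_scratch, desc, bytes);
}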
|
|
|
|
Lacan82
|
|
April 12, 2013, 11:14:20 AM |
|
Are you guys ready for another 20% speed boost?
Yes / No / Maybe
the nVidia cards have a texture cache, why not use it.
yes, please
|
|
|
|
nst6563
|
|
April 12, 2013, 02:42:04 PM |
|
Are you guys ready for another 10-20% speed boost? Gtx 660Ti now reporting 180kHash (previously about 150)
Yes / No / Maybe
the nVidia cards have a texture cache, why not use it.
Definitely ready... bring it and we'll test it!
|
|
|
|
|
nst6563
|
|
April 12, 2013, 02:47:31 PM |
|
Did another test. It was a bit better but still - 12 Khash/s. Proof of it in pics.
GTX295 *snip* Run them both on stock settings.
What happens if you run using only 1 GPU? My GTX 260 gets 40 kh/s by itself, so I would expect at least double that with a 295.
|
|
|
|
Testarossa
Newbie
Offline
Activity: 11
Merit: 0
|
|
April 12, 2013, 03:21:26 PM |
|
Cuda 10.04 version, GPU 0: http://13h.imghost.us/gl.jpg
Cuda 10.04 version, GPU 1: http://13h.imghost.us/h4.jpg
Couldn't do this with the 1st cudaminer version. Whether I run them both or not seemingly has no effect on the individual khash/s. If you add them up you get about 62 khash/s, as in the previous post. The 1st cudaminer version yields the best results. Btw, a GTX 295 is in fact two GTX 280s in one, but it doesn't scale the same as two GTX 280s in SLI... Thx nvidia.
|
|
|
|
cbuchner1 (OP)
|
|
April 12, 2013, 03:44:36 PM Last edit: April 12, 2013, 04:10:30 PM by cbuchner1 |
|
Hey, I will also be adding the texture cache to the kernel for the old hardware (compute 1.x cards), AND I am going back to the old method of memory allocation, using a single chunk of memory only. There is a good chance that the next version will run faster on the GTX 295 than the original cudaminer version after all.
Not sure what I will do about Titan support - if it is a compiler bug, there is really no way for me to tell (without owning such hardware) whether or not a kernel will run correctly on the Titan. Maybe I'll just go back to the working 04-09 version, only adding the texture cache.
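To illustrate the "single chunk" point, here is a sketch with made-up sizes (not the actual allocator): one big cudaMalloc for all scratchpads, indexed per hash, instead of one allocation per work unit.
Code:
// Sketch only: one contiguous allocation for all scratchpads. The
// per-hash size below is illustrative.
#include <cstdint>
#include <cuda_runtime.h>

#define SCRATCHPAD_BYTES (128 * 1024)   // hypothetical per-hash scratchpad

uint8_t *alloc_scratchpads(int num_hashes) {
    uint8_t *d_base = NULL;
    size_t total = (size_t)num_hashes * SCRATCHPAD_BYTES;
    // Single allocation: the kernel addresses scratchpad i as
    // d_base + (size_t)i * SCRATCHPAD_BYTES.
    if (cudaMalloc((void **)&d_base, total) != cudaSuccess)
        return NULL;    // cards with little free memory may fail here
    return d_base;
}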
|
|
|
|
Testarossa
Newbie
Offline
Activity: 11
Merit: 0
|
|
April 12, 2013, 03:46:51 PM |
|
If you hadn't tested it, we wouldn't know.
Knowledge is power!
Thx for all the effort Christian.
|
|
|
|
cbuchner1 (OP)
|
|
April 12, 2013, 03:48:52 PM Last edit: April 12, 2013, 08:10:58 PM by cbuchner1 |
|
Thx for all the effort Christian.
I am doing this for fun (and for the donations, lol). No, honestly: I am learning new stuff that I can use at my job, and I am bringing some knowledge gained at my job into this tool as well.
Update on the texture cache thing: there's no real improvement in the kHash rate on my GTX 560 Ti. It might even have lost 5 kHash. So the texture cache seems to benefit the Kepler architecture only, so far. But there it's in the range of 15-20%. So this new feature must be optional, or at least have an override switch. Christian
|
|
|
|
cbuchner1 (OP)
|
|
April 12, 2013, 10:09:35 PM |
|
First comprehensive results of the Texture Cache feature.
Second generation of CUDA devices (Compute 1.2 and 1.3): GTX 260 from 44 kHash to 77 kHash (UNEXPECTED, EXCELLENT!)
Fermi cards (Compute 2.x): GTX 460 from 94 kHash to 94 kHash (NO CHANGE); GTX 560 Ti from 135 kHash to 135 kHash (NO CHANGE)
Kepler cards (Compute 3.x): GTX 660 Ti from 150 kHash to 175-180 kHash (NICE!)
FURTHER NOTES:
I currently have no idea why Fermi devices do NOT profit.
The GTX 260 no longer prefers the S27x3 kernel, but some regular configuration (27x4). Maybe it's time to remove the S kernel, or try a different variant that benefits Fermi for a change.
The program will hence enable the texture cache by default on compute 1.x and 3.0 devices (see the sketch at the end of this post).
For Titan, I will stick with the exact same kernel from the working 04-09 release.
Christian
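A minimal sketch of the default policy just described - the surrounding option/override handling is hypothetical, not cudaminer's actual code: query the compute capability and turn the texture cache on for 1.x and 3.0 devices only.
Code:
// Sketch: choose the texture-cache default from the compute capability.
// Option parsing and override logic are assumed, not taken from cudaminer.
#include <cuda_runtime.h>

bool texture_cache_default(int device) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, device);
    bool compute_1x = (prop.major == 1);                     // GTX 260 class
    bool compute_30 = (prop.major == 3 && prop.minor == 0);  // GTX 660 Ti class
    return compute_1x || compute_30;   // Fermi (2.x) showed no benefit
}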
|
|
|
|
cbuchner1 (OP)
|
|
April 12, 2013, 10:24:18 PM |
|
The April 12th release is published!
Inspect the README file, it explains the new options.
Highlights:
1) interactive switch for desktop use (no more lag; see the sketch below)
2) speed gains through texture caches
3) fix for Titan (roll back to 04-09 state)
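On the interactive switch: here is a sketch of one common way such a mode keeps the desktop responsive - smaller kernel batches plus a polling wait that yields the CPU - offered as an illustration only, not as what cudaminer actually does internally.
Code:
// Illustration only: split work into small launches and poll for
// completion while sleeping, so the display driver gets serviced
// between batches. Not necessarily cudaminer's implementation.
#include <cuda_runtime.h>
#ifdef _WIN32
#include <windows.h>
#define yield_ms(ms) Sleep(ms)
#else
#include <unistd.h>
#define yield_ms(ms) usleep((ms) * 1000)
#endif

__global__ void small_batch_kernel() { /* a slice of the hashing work */ }

void run_interactive(int batches) {
    for (int i = 0; i < batches; ++i) {
        small_batch_kernel<<<68, 64>>>();
        // Poll instead of blocking in cudaDeviceSynchronize so the GPU
        // can interleave desktop rendering between our batches.
        while (cudaStreamQuery(0) == cudaErrorNotReady)
            yield_ms(1);
    }
}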
|
|
|
|
nst6563
|
|
April 12, 2013, 11:05:58 PM |
|
Nice! Can't wait to get home and try it. Maybe you already found the magic combo for Fermi... although it's hard to believe that a GTX 260 is almost at the performance of my GTX 560 SEs.
|
|
|
|
cbuchner1 (OP)
|
|
April 12, 2013, 11:23:36 PM |
|
If anyone else is getting a lot of "boo!"s as well, please let me know. If so, just turn off the texture cache with -C 0 and retry. Maybe the stratum proxy misled me into believing that everything was working even though it wasn't. I am now mining on kattare.com (Burnside) with a GT640 and the 04-12 version and having problems. Also, the PSU on my main rig just blew. Boo!
|
|
|
|
cbuchner1 (OP)
|
|
April 12, 2013, 11:34:46 PM Last edit: April 13, 2013, 12:06:47 AM by cbuchner1 |
|
And yea, that sux your psu blew. Was it a cheapy?
Let's say it was only able to handle 2 out of 3 GPUs. Everything was fine while I let only 2 crunch at a time, lol ;)
UPDATE: I think I only bound 1/4 of the required memory to the CUDA texture, which might lead to problems. I will post a fix ASAP (see the sketch below).
UPDATE 2: I replaced the download archive and hopefully things are better now. Whether the previously reported speed gains are real or just due to a programming error, hopefully you guys can tell me by tomorrow. I'll get some rest now. Bye! Christian
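For the curious, a hypothetical illustration of the binding bug described in the UPDATE above (names and sizing are made up, not the actual fix): if the texture is bound over only part of the scratchpad allocation, fetches past the bound range return undefined data, which matches the "hashes locally, rejected by the pool" symptom.
Code:
// Hypothetical sketch of a 1/4-size texture binding bug and its fix.
#include <cuda_runtime.h>

texture<uint4, cudaTextureType1D, cudaReadModeElementType> texScratch;

void bind_full_scratchpad(uint4 *d_scratch, size_t total_bytes) {
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<uint4>();
    // Buggy variant: only a quarter of the buffer is visible through the
    // texture, so reads beyond it are undefined:
    // cudaBindTexture(NULL, texScratch, d_scratch, desc, total_bytes / 4);

    // Fixed: bind the texture over the entire allocation.
    cudaBindTexture(NULL, texScratch, d_scratch, desc, total_bytes);
}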
|
|
|
|
Lacan82
|
|
April 13, 2013, 12:42:51 AM Last edit: April 13, 2013, 01:29:35 AM by Lacan82 |
|
GTX 570 with texture cache on went from 203 to 220 (according to autotune), but I'm getting 212.
However, cudaminer crashed when autotuning completed. Used -l 30x7 and it worked fine.
Faulting application cudaminer.exe, version 0.0.0.0, time stamp 0x51674e54, faulting module cudaminer.exe, version 0.0.0.0, time stamp 0x51674e54, exception code 0xc0000005, fault offset 0x00006cc6, process id 0xe60, application start time 0x01ce37dec0223de0.
650M: 32 > 35 with texture cache on.
Crashes as well after autotune.
UPDATE: It isn't submitting shares that show YAY! with textures on. The GTX ran for 10 minutes with no increase in unpaid shares. Turned off textures and shares started showing up (almost instantly).
The 650M has the same issue as above. Tried other pools as well.
|
|
|
|
nst6563
|
|
April 13, 2013, 01:11:32 AM |
|
GTX 260 went from 41 kh/s to 80 kh/s. The 9300 (nForce 730i) remained at 4.5 kh/s whether the texture cache was enabled or not.
Here are some interesting results with my Fermi cards:
GTX 560 SE, autotune + texcache = 77 kh/s
GTX 560 SE, autotune - texcache = 102 kh/s (46x2 config)
Enabling the texcache AFTER performing the autotune = 107 kh/s (46x2 config)
GT 430 went from 33-35 kh/s down to 31 kh/s no matter what (48x4 config).
|
|
|
|
jimmy3dita
|
|
April 13, 2013, 01:16:03 AM |
|
Looking back, I remember that the first tests gave me about 10% "boo!" (not valid) shares on the pool.
Moving to Stratum, the rate dropped to about 2%.
I'll try the new release in the next few days; for now the PC is back inside its "crate".
|
|
|
|
|