peri
Newbie
Offline
Activity: 8
Merit: 0
|
 |
December 16, 2013, 02:30:16 PM Last edit: December 16, 2013, 03:06:37 PM by peri |
|
Thanks Ness  Managed to get around 500 khash/s... could get a little more, but then it's not reliable enough to leave unattended for hours. http://s5.postimg.org/ljtgb5uw3/16_12_2013_14_25_30.jpg http://img703.imageshack.us/img703/8964/dhqi.jpg
|
|
|
|
cbuchner1 (OP)
|
 |
December 16, 2013, 03:30:24 PM Last edit: December 16, 2013, 03:41:39 PM by cbuchner1 |
|
*** CudaMiner for nVidia GPUs by Christian Buchner ***
This is version 2013-12-01 (beta) based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
Cuda additions Copyright 2013 Christian Buchner
My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm
[2013-12-15 21:17:20] 1 miner threads started, using 'scrypt' algorithm.
[2013-12-15 21:17:20] Starting Stratum on stratum+tcp://stratum.gentoomen.org:3333
[2013-12-15 21:17:35] Stratum detected new block
[2013-12-15 21:17:35] Stratum detected new block
[2013-12-15 21:17:38] GPU #0: GeForce GTX 650 Ti with compute capability 3.0
[2013-12-15 21:17:38] GPU #0: interactive: 1, tex-cache: 1D, single-alloc: 1
[2013-12-15 21:17:38] GPU #0: using launch configuration K14x16
[2013-12-15 21:17:38] GPU #0: GeForce GTX 650 Ti, 7168 hashes, 3.25 khash/s
[2013-12-15 21:17:38] GPU #0: GeForce GTX 650 Ti, 200704 hashes, 833.73 khash/s
[2013-12-15 21:18:01] GPU #0: GeForce GTX 650 Ti result does not validate on CPU!
Anyone know how to fix this problem?
cudaminer 2013-12-10 has a fix for validation issues on Kepler devices. Furthermore, I suspect K14x16 requires more memory than the WDDM graphics driver model will allow on your system; cudaminer 2013-12-10 will print a warning if this happens. Try one of these:
a) don't use -C 1, allowing the card to make a bunch of small memory allocations instead (i.e. single-alloc: 0)
b) use a smaller launch config than K14x16, e.g. K14x8
c) upgrade your system RAM (e.g. to 4 GB or 6 GB), allowing larger memory allocations
d) Linux and Windows XP don't use the WDDM driver model and don't have this trouble at all. Switch if you can.
e) autotune for the best launch config on your system instead of copy-pasting someone else's config
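A rough estimate shows why a config like K14x16 is memory-hungry. Assuming "K14x16" means 14 blocks of 16 warps (32 threads each, one scrypt hash per thread) and that scrypt with N=1024, r=1 needs a 128 KiB scratchpad per hash (assumptions for illustration, not taken from the cudaminer docs; the helper name is made up):

```python
# Back-of-the-envelope scratchpad estimate for a cudaminer launch config.
# Assumptions (not from the cudaminer docs): "K14x16" = 14 blocks x 16
# warps, 32 threads per warp, one scrypt hash per thread, and a
# 128 * r * N byte scratchpad per hash (N=1024, r=1 for Litecoin scrypt).
def scratchpad_mib(blocks, warps, n=1024, r=1):
    hashes = blocks * warps * 32
    per_hash_bytes = 128 * r * n          # 128 KiB for N=1024, r=1
    return hashes, hashes * per_hash_bytes / 1024 ** 2

print(scratchpad_mib(14, 16))   # (7168, 896.0): matches the "7168 hashes" line
print(scratchpad_mib(14, 8))    # (3584, 448.0): the suggested smaller K14x8
```

Under these assumptions K14x16 wants close to 900 MiB of scratchpad, which lines up with option b) above: halving the warps per block halves the allocation.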
|
|
|
|
cbuchner1 (OP)
|
 |
December 16, 2013, 04:20:47 PM |
|
any progress on the optimizations the cloud miner promised to send us?
I wanted to ask about the same thing... I keep checking his blog for updates. He's published his secret sauce, so let's race to integrate it - hopefully we can compensate for the last 3 difficulty increases 
|
|
|
|
Ness
Newbie
Offline
Activity: 10
Merit: 0
|
 |
December 16, 2013, 05:53:42 PM |
|
any progress on the optimizations the cloud miner promised to send us?
wanted to ask about the same ... keep checking his blog for updates. He's published his secret sauce. So let's race to integrate it - hopefully we can compensate for the last 3 difficulty increases  Nice! Hope to see a new build soon 
|
|
|
|
Laska_Forum
|
 |
December 16, 2013, 06:13:31 PM |
|
Has anybody had problems with a notebook and CUDA? I have reinstalled all the drivers, then installed a new one; I have also installed CUDA etc., and I can't mine - always a problem with CUDA. I have a GT 540M. 
|
|
|
|
leshow
Newbie
Offline
Activity: 48
Merit: 0
|
 |
December 17, 2013, 02:54:58 AM Last edit: December 17, 2013, 03:58:12 AM by leshow |
|
*** CudaMiner for nVidia GPUs by Christian Buchner ***
This is version 2013-12-01 (beta) based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
Cuda additions Copyright 2013 Christian Buchner
My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm
[2013-12-15 21:17:20] 1 miner threads started, using 'scrypt' algorithm.
[2013-12-15 21:17:20] Starting Stratum on stratum+tcp://stratum.gentoomen.org:3333
[2013-12-15 21:17:35] Stratum detected new block
[2013-12-15 21:17:35] Stratum detected new block
[2013-12-15 21:17:38] GPU #0: GeForce GTX 650 Ti with compute capability 3.0
[2013-12-15 21:17:38] GPU #0: interactive: 1, tex-cache: 1D, single-alloc: 1
[2013-12-15 21:17:38] GPU #0: using launch configuration K14x16
[2013-12-15 21:17:38] GPU #0: GeForce GTX 650 Ti, 7168 hashes, 3.25 khash/s
[2013-12-15 21:17:38] GPU #0: GeForce GTX 650 Ti, 200704 hashes, 833.73 khash/s
[2013-12-15 21:18:01] GPU #0: GeForce GTX 650 Ti result does not validate on CPU!
Anyone know how to fix this problem?
cudaminer 2013-12-10 has a fix for validation issues on Kepler devices. Furthermore, I suspect K14x16 requires more memory than the WDDM graphics driver model will allow on your system; cudaminer 2013-12-10 will print a warning if this happens. Try one of these:
a) don't use -C 1, allowing the card to make a bunch of small memory allocations instead (i.e. single-alloc: 0)
b) use a smaller launch config than K14x16, e.g. K14x8
c) upgrade your system RAM (e.g. to 4 GB or 6 GB), allowing larger memory allocations
d) Linux and Windows XP don't use the WDDM driver model and don't have this trouble at all. Switch if you can.
e) autotune for the best launch config on your system instead of copy-pasting someone else's config
I'm on Arch Linux, actually, and I have 8 GB of RAM... so that shouldn't be an issue. I'll give the new settings a shot.
|
|
|
|
Notanon
|
 |
December 17, 2013, 05:35:37 AM |
|
any progress on the optimizations the cloud miner promised to send us?
wanted to ask about the same ... keep checking his blog for updates. He's published his secret sauce. So let's race to integrate it - hopefully we can compensate for the last 3 difficulty increases  Nice. I can then finish mining on Coinotron and send you a 1LTC donation finally. 
|
|
|
|
cbuchner1 (OP)
|
 |
December 17, 2013, 09:10:31 AM Last edit: December 17, 2013, 10:26:02 AM by cbuchner1 |
|
He's published his secret sauce. So let's race to integrate it - hopefully we can compensate for the last 3 difficulty increases  Nice. I can then finish mining on Coinotron and send you a 1LTC donation finally. 
github has the new code in test_kernel.cu currently (kernel prefix is X). It requires the -m 1 option. Still experimenting with it... There's still some work to be done before it can be part of a cudaminer release (i.e. support for Compute 3.0 and also the -C option).
What seems apparent already is that there is no 20-30% speed gain. Weird.
On a non-overclocked 780 Ti I was going from ~440 kHash/s to 487 kHash/s (a 10% improvement). On a GT 750M I was going from 55 kHash/s to 59 kHash/s (no texture read caching implemented so far).
BTW: 1 LTC is too much of a donation to ask for with today's exchange rates. I need to update the readme file regarding this.
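For reference, the quoted numbers do work out to roughly the gains mentioned; a quick sanity check of the arithmetic (the helper name is made up, the figures are from the post):

```python
# Sanity-check of the quoted speedups (numbers from the post; the
# pct_gain helper is just for illustration).
def pct_gain(old_khash, new_khash):
    return round((new_khash - old_khash) / old_khash * 100, 1)

print(pct_gain(440, 487))   # 10.7 -- the ~10% seen on the 780 Ti
print(pct_gain(55, 59))     # 7.3  -- the GT 750M, without texture caching
```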
|
|
|
|
SavellM
|
 |
December 17, 2013, 12:04:24 PM |
|
What seems apparent already is that there is no 20-30% speed gain. Weird.
On a non overclocked 780Ti I was going from ~440 kHash to 487 kHash (a 10% improvement) On a GT 750M I was going from 55 kHash/s to 59 kHash/s (no texture read caching implemented so far)
Well, a 10% increase is better than a kick in the teeth  Just a question: would it not be worth contacting the author and asking him to take a quick look and see if there are any other improvements he could make for us? Lastly, could you release your latest update as a beta or something? I would like to test it but can't compile from GitHub. And keep it up 
|
|
|
|
GoldBit89
|
 |
December 17, 2013, 01:55:12 PM |
|
According to the spreadsheet, this is what I should be getting (I have the 1 GB version):
9600 GT 1GB | Win 7 x64 | 01.12.2013 | 30-300 | 99 | Non work | 1175 | 1550
MSI 9600GT 512mb | Win 7 x64 | 29/11/2013 | 290 +/- 20
What I'm getting is 12-13 khash/s average and 15 khash/s tops, using autotune and using the computer at the same time. Driver 331.65, CUDA 5.5/5.0, cudaminer versions 11/14/13, 11/20/13 and 12/1/13, with this config: \x64\cudaminer.exe -o stratum+tcp://stratum01.hashco.ws:8888 -u user.1 -p forumpost
It autotunes to max warps 239, L16x3:
[2013-12-17 07:53:10] Stratum detected new block
[2013-12-17 07:53:11] GPU #0: GeForce 9600 GT with compute capability 1.1
[2013-12-17 07:53:11] GPU #0: interactive: 1, tex-cache: 0, single-alloc: 0
[2013-12-17 07:53:11] Stratum detected new block
[2013-12-17 07:53:12] GPU #0: Performing auto-tuning (Patience...)
[2013-12-17 07:53:12] GPU #0: maximum warps: 239
[2013-12-17 07:53:17] GPU #0: 13.24 khash/s with configuration L16x3
[2013-12-17 07:53:17] GPU #0: using launch configuration L16x3
Where is this extra 200+ khash/s?
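Under the same blocks-x-warps reading of the launch config (32 threads per warp, 128 KiB scrypt scratchpad per hash; an assumption for illustration, not from the cudaminer docs), the autotuner's "maximum warps: 239" is about what a 1 GB card can hold in scratchpad, while L16x3 keeps far fewer hashes in flight:

```python
# Scratchpad budget vs. hashes in flight for the 9600 GT log in this post.
# Assumes BxW launch configs = B blocks x W warps, 32 threads/warp, and a
# 128 * r * N byte scrypt scratchpad per hash (N=1024, r=1) -- assumptions
# made for illustration; warp_budget is a made-up helper name.
def warp_budget(total_warps, n=1024, r=1):
    hashes = total_warps * 32
    mib = hashes * 128 * r * n / 1024 ** 2
    return hashes, mib

print(warp_budget(239))   # (7648, 956.0): ~956 MiB, near the 1 GB card's limit
print(16 * 3 * 32)        # 1536 hashes actually running under L16x3
```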
|
FTC 6nvzqqaCEizThvgMeC86MGzhAxGzKEtNH8 |WDC WckDxipCes2eBmxrUYEhrUfNNRZexKuYjR |BQC bSDm3XvauqWWnqrxfimw5wdHVDQDp2U8XU BOT EjcroqeMpZT4hphY4xYDzTQakwutpnufQR |BTG geLUGuJkhnvuft77ND6VrMvc8vxySKZBUz |LTC LhXbJMzCqLEzGBKgB2n73oce448BxX1dc4 BTC 1JPzHugtBtPwXgwMqt9rtdwRxxWyaZvk61 |ETH 0xA6cCD2Fb3AC2450646F8D8ebeb14f084F392ACFf
|
|
|
blackraven1425
Member

Offline
Activity: 98
Merit: 10
|
 |
December 17, 2013, 01:57:57 PM |
|
On a 9600 GT, I doubt you can get 200+ khash/s. Some of the newer, high-end cards don't even break that, and yours is a very old, mid-range card.
Also, someone completely wrecked the spreadsheet. I restored it to an old version (from only a couple of hours ago). Can you modify permissions so people can only add and update entries they have added themselves?
|
|
|
|
GoldBit89
|
 |
December 17, 2013, 02:11:57 PM |
|
On a 9600GT, I would doubt you can get 200+ khash. Some of the newer, high end cards don't even break that, and yours is a very old, mid range card.
Also, someone completely wrecked the spreadsheet. I restored it to an old version (only a couple hours ago). Can you modify permissions so people can only add and update things they have added themselves?
That's what I thought too - it has always been in the 15 khash/s range tops. Just wanted to make sure I wasn't missing something. Updated, thanks.
|
FTC 6nvzqqaCEizThvgMeC86MGzhAxGzKEtNH8 |WDC WckDxipCes2eBmxrUYEhrUfNNRZexKuYjR |BQC bSDm3XvauqWWnqrxfimw5wdHVDQDp2U8XU BOT EjcroqeMpZT4hphY4xYDzTQakwutpnufQR |BTG geLUGuJkhnvuft77ND6VrMvc8vxySKZBUz |LTC LhXbJMzCqLEzGBKgB2n73oce448BxX1dc4 BTC 1JPzHugtBtPwXgwMqt9rtdwRxxWyaZvk61 |ETH 0xA6cCD2Fb3AC2450646F8D8ebeb14f084F392ACFf
|
|
|
waterbit
Newbie
Offline
Activity: 2
Merit: 0
|
 |
December 17, 2013, 02:41:46 PM |
|
cbuchner1, you were running a GTX 260 before you replaced it. Did you ever find an ideal config for it? Your autotune runs it at 27x3 or 25x3, but I see you had it at 54x3 for some time. Just trying to dial this thing in for the time being, before I get a new GPU.
|
|
|
|
Notanon
|
 |
December 17, 2013, 03:30:17 PM |
|
He's published his secret sauce. So let's race to integrate it - hopefully we can compensate for the last 3 difficulty increases  BTW: 1 LTC is too much of a donation to ask for with today's exchange rates. I need to update the readme file regarding this. Fair enough.
|
|
|
|
dga
|
 |
December 17, 2013, 05:25:39 PM |
|
What seems apparent already is that there is no 20-30% speed gain. Weird.
On a non overclocked 780Ti I was going from ~440 kHash to 487 kHash (a 10% improvement) On a GT 750M I was going from 55 kHash/s to 59 kHash/s (no texture read caching implemented so far)
Well a 10% increase is better than a kick in the teeth  Just a question, would it not be worth trying to contact the author and ask him if he can take a quick look and see if there are any other improvements he could make for us? We're chatting. I suspect that what it's going to come down to is:
- OK speedups on some cards (5-10%) using CUDA 5.0
- Decent speedups on some other cards (20%) using CUDA 5.5
- Really good speedups (80%) on some low-CUDA-core-count mobile platforms using CUDA 5.5
It's a little tricky, though - there's something in my code that really likes CUDA 5.5, and the previous code targets CUDA 5.0 in order to be more widely usable. Likely nothing that can't be solved, but it may take some work.
|
|
|
|
69charger
|
 |
December 17, 2013, 06:05:16 PM |
|
What seems apparent already is that there is no 20-30% speed gain. Weird.
On a non overclocked 780Ti I was going from ~440 kHash to 487 kHash (a 10% improvement) On a GT 750M I was going from 55 kHash/s to 59 kHash/s (no texture read caching implemented so far)
Well a 10% increase is better than a kick in the teeth  Just a question, would it not be worth trying to contact the author and ask him if he can take a quick look and see if there are any other improvements he could make for us? We're chatting. I suspect that what it's going to come down to is:
- OK speedups on some cards (5-10%) using CUDA 5.0
- Decent speedups on some other cards (20%) using CUDA 5.5
- Really good speedups (80%) on some low-CUDA-core-count mobile platforms using CUDA 5.5
It's a little tricky, though - there's something in my code that really likes CUDA 5.5, and the previous code targets CUDA 5.0 in order to be more widely usable. Likely nothing that can't be solved, but it may take some work.
Thanks to both Christian and Dave for all the hard work! Anxiously awaiting your masterpiece on two GTX 660's 
|
|
|
|
juggs
Newbie
Offline
Activity: 28
Merit: 0
|
 |
December 18, 2013, 02:29:51 AM |
|
I know ~nothing~ about coding this stuff, so forgive me if this is a dumb question. How hard would it be to have cudaminer display the currently mined scrypt coin's difficulty, plus the GPU utilisation, temperature and fan speed percentage? I only ask because I used to get temperature and fan speed info through nVidia's X Server Settings application (on Linux). Now that I've installed the latest nVidia drivers and CUDA from source, moved my desktop display to the motherboard's onboard graphics, and run cudaminer with the non-interactive flag (works fine), that route no longer works to query the information. I've looked around but haven't been able to find any way to query the activity of the card. I probably missed something obvious - so all suggestions welcome 
|
|
|
|
juggs
Newbie
Offline
Activity: 28
Merit: 0
|
 |
December 18, 2013, 02:43:51 AM |
|
And having looked for hours, having posted the above I found a way that should query those things: /usr/bin/nvidia-smi -a. Sadly it is mostly N/A with my card (GTS 250).
==============NVSMI LOG==============
Timestamp : Wed Dec 18 02:40:11 2013
Driver Version : 319.37
Attached GPUs : 1
GPU 0000:02:00.0
    Product Name : GeForce GTS 250
    Display Mode : N/A
    Display Active : N/A
    Persistence Mode : Disabled
    Accounting Mode : N/A
    Accounting Mode Buffer Size : N/A
    Driver Model
        Current : N/A
        Pending : N/A
    Serial Number : N/A
    GPU UUID : GPU-83625be0-08b8-2a8c-44c8-82f560e2d9b7
    VBIOS Version : 62.92.7E.00.00
    Inforom Version
        Image Version : N/A
        OEM Object : N/A
        ECC Object : N/A
        Power Management Object : N/A
    GPU Operation Mode
        Current : N/A
        Pending : N/A
    PCI
        Bus : 0x02
        Device : 0x00
        Domain : 0x0000
        Device Id : 0x061510DE
        Bus Id : 0000:02:00.0
        Sub System Id : 0x110319DA
        GPU Link Info
            PCIe Generation
                Max : N/A
                Current : N/A
            Link Width
                Max : N/A
                Current : N/A
    Fan Speed : 43 %
    Performance State : N/A
    Clocks Throttle Reasons : N/A
    Memory Usage
        Total : 511 MB
        Used : 235 MB
        Free : 276 MB
    Compute Mode : Default
    Utilization
        Gpu : N/A
        Memory : N/A
    Ecc Mode
        Current : N/A
        Pending : N/A
    ECC Errors
        Volatile
            Single Bit
                Device Memory : N/A
                Register File : N/A
                L1 Cache : N/A
                L2 Cache : N/A
                Texture Memory : N/A
                Total : N/A
            Double Bit
                Device Memory : N/A
                Register File : N/A
                L1 Cache : N/A
                L2 Cache : N/A
                Texture Memory : N/A
                Total : N/A
        Aggregate
            Single Bit
                Device Memory : N/A
                Register File : N/A
                L1 Cache : N/A
                L2 Cache : N/A
                Texture Memory : N/A
                Total : N/A
            Double Bit
                Device Memory : N/A
                Register File : N/A
                L1 Cache : N/A
                L2 Cache : N/A
                Texture Memory : N/A
                Total : N/A
    Retired Pages
        Single Bit ECC : N/A
        Double Bit ECC : N/A
        Pending : N/A
    Temperature
        Gpu : 64 C
    Power Readings
        Power Management : N/A
        Power Draw : N/A
        Power Limit : N/A
        Default Power Limit : N/A
        Enforced Power Limit : N/A
        Min Power Limit : N/A
        Max Power Limit : N/A
    Clocks
        Graphics : N/A
        SM : N/A
        Memory : N/A
    Applications Clocks
        Graphics : N/A
        Memory : N/A
    Default Applications Clocks
        Graphics : N/A
        Memory : N/A
    Max Clocks
        Graphics : N/A
        SM : N/A
        Memory : N/A
    Compute Processes : N/A
That's with cudaminer running - all I have to go on is the temp and fan speed. It doesn't seem to be working that hard, really - it used to get hotter and spin the fans faster than that when gaming. Any pointers welcome 
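Since `nvidia-smi -a` returns plain "Key : Value" text, a small parser can pull out the few fields that aren't N/A on older cards (fan speed, GPU temperature) for logging next to cudaminer's output. A minimal sketch, assuming the log format shown in this post; `parse_nvsmi` is a made-up helper, not part of any nVidia tool:

```python
import re

# Pull the usable readings (fan speed in %, GPU temperature in C) out of
# `nvidia-smi -a` text output, as shown in the log in this post.
# Fields printed as "N/A" simply don't match and are skipped.
def parse_nvsmi(text):
    readings = {}
    fan = re.search(r"Fan Speed\s*:\s*(\d+)\s*%", text)
    if fan:
        readings["fan_percent"] = int(fan.group(1))
    temp = re.search(r"Gpu\s*:\s*(\d+)\s*C", text)
    if temp:
        readings["temp_c"] = int(temp.group(1))
    return readings

sample = "Fan Speed : 43 %\nTemperature\n    Gpu : 64 C\n"
print(parse_nvsmi(sample))   # {'fan_percent': 43, 'temp_c': 64}
```

On a live system the text could come from something like subprocess.check_output(["nvidia-smi", "-a"]); here it is exercised against a snippet matching the log in this post.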
|
|
|
|
69charger
|
 |
December 18, 2013, 04:57:44 AM |
|
Does the massive update on GitHub mean a release is close  ? 
|
|
|
|
cbuchner1 (OP)
|
 |
December 18, 2013, 05:34:53 AM |
|
Does the massive update on GitHub mean a release is close  ? 
I present the 2013-12-18 release, with all-new Kepler and Titan kernels. Huge thanks to David Andersen, who came up with a more efficient way to do scrypt mining on the Kepler architecture.
We now use CUDA 5.5, which has increased driver requirements - be sure to use a recent nVidia driver. When mining on Kepler devices, please autotune again and report your findings.
Christian
|
|
|
|
|