Pages: « 1 … [69] … 1135 »
Author Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX]  (Read 3426932 times)
Lacan82
Sr. Member
****
Offline Offline

Activity: 247
Merit: 250


View Profile
December 04, 2013, 09:16:42 PM
Last edit: December 04, 2013, 09:29:30 PM by Lacan82
 #1361

I'm up to 279 on my GTX 570 :)

How?? I'm on less than 160-233 with my 570 :-(

Check my previous posts; you'll see my config. :) However, I have the EVGA Classified version, which is factory OC'd. Autotune selects F15x16 for it.

Tansen
Member
**
Offline Offline

Activity: 70
Merit: 10


View Profile
December 05, 2013, 12:43:44 AM
 #1362

I'm up to 279 on my GTX 570 :)

How?? I'm on less than 160-233 with my 570 :-(

Be happy... I'm only getting 130 kH/s.

But I can't get my CUDA miner to go. Despite posting my conf, there doesn't seem to be anything identifiably wrong with it from what I can see, but I'm still getting JSON errors.
Lacan82
Sr. Member
****
Offline Offline

Activity: 247
Merit: 250


View Profile
December 05, 2013, 01:29:28 AM
 #1363

I'm up to 279 on my GTX 570 :)

How?? I'm on less than 160-233 with my 570 :-(

Be happy... I'm only getting 130 kH/s.

But I can't get my CUDA miner to go. Despite posting my conf, there doesn't seem to be anything identifiably wrong with it from what I can see, but I'm still getting JSON errors.

Change allowip=10.1.1.* to allowip=All and see if that helps.
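For reference, that line lives in the coin daemon's .conf file. A hypothetical fragment (the exact file name and the other keys depend on the wallet you run; the wildcard form only matches the local subnet, while All accepts any client and is only safe on a trusted LAN):

```
# Hypothetical RPC section of the coin daemon's .conf file
server=1
rpcuser=yourusername
rpcpassword=yourpassword
# allowip=10.1.1.*   <- only accepts miners on the 10.1.1.x subnet
allowip=All          # accepts any client; trusted networks only
```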

alexroz
Newbie
*
Offline Offline

Activity: 14
Merit: 0


View Profile
December 05, 2013, 08:20:58 AM
 #1364

I am trying to make cudaminer run during the Windows 7 screensaver by having the Windows scheduler trigger on a log-off event, according to this guide: http://www.reddit.com/r/BitcoinMining/comments/1mtx01/launching_closing_cgminer_with_screensaver_in/
I have succeeded in launching cudaminer, but I'm having trouble killing the cudaminer process properly.
When I kill it with
Code:
taskkill /F /IM cgminer-scr.exe
the Nvidia drivers crash.
On the other hand, taskkill without the /F parameter isn't able to kill cudaminer at all.
Please advise on the proper way to automatically terminate the cudaminer task.
Can anyone answer the above question?
cbuchner1 (OP)
Hero Member
*****
Offline Offline

Activity: 756
Merit: 502


View Profile
December 05, 2013, 09:20:29 AM
 #1365

Please advise on the proper way to automatically terminate the cudaminer task.
Can anyone answer the above question?

The tool linked in this Server Fault answer ("SendSignal") will send a Ctrl-Break signal to cudaminer, to which it will react and shut down cleanly.

http://serverfault.com/questions/371545/can-i-send-ctrl-break-and-or-ctrl-c-to-a-running-process-in-windows
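The same "soft kill" idea can be sketched in Python. This is a minimal illustration, not the real setup: the child process here is a hypothetical stand-in for cudaminer that traps the interrupt signal and exits cleanly, which is what a hard `taskkill /F` never gives the miner a chance to do.

```python
import signal
import subprocess
import sys
import time

# Hypothetical stand-in for cudaminer: a child that traps the interrupt
# signal and exits cleanly, the way cudaminer reacts to Ctrl-Break.
CHILD = (
    "import signal, sys, time\n"
    "signal.signal(signal.SIGINT, lambda *a: sys.exit(0))\n"
    "while True: time.sleep(0.1)\n"
)

def stop_gracefully(proc):
    """Send a soft interrupt instead of a hard kill (taskkill /F)."""
    if sys.platform == "win32":
        # On Windows, CTRL_BREAK_EVENT only reaches children started
        # with CREATE_NEW_PROCESS_GROUP; SendSignal exists to deliver
        # it to processes launched without that flag.
        proc.send_signal(signal.CTRL_BREAK_EVENT)
    else:
        proc.send_signal(signal.SIGINT)
    return proc.wait(timeout=5)

if __name__ == "__main__":
    flags = subprocess.CREATE_NEW_PROCESS_GROUP if sys.platform == "win32" else 0
    proc = subprocess.Popen([sys.executable, "-c", CHILD], creationflags=flags)
    time.sleep(0.5)  # let the child install its signal handler
    print("exit code:", stop_gracefully(proc))
```

The key point is the same as with SendSignal: the driver crash comes from tearing the process down mid-kernel, so you want the miner to see the signal and unwind its CUDA context itself.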
cbuchner1 (OP)
Hero Member
*****
Offline Offline

Activity: 756
Merit: 502


View Profile
December 05, 2013, 11:49:00 AM
 #1366

I am working towards a cudaminer release that is fit for building a high-performance miner around a weak CPU.

My proof-of-concept Nvidia monster miner will have 3x GTX 780 Ti and is powered by a low-power Core i3 CPU (35 W TDP max). For this, apparently the CPU can no longer do the SHA-256 hashing. The mainboard was selected such that it fits 3 double-width graphics cards without having to use risers.

I want this to be a 1.5 MHash/s machine. The mainboard, RAM and SSD have just arrived. So far I have received one of the 3 780 Ti cards; I got them for 560 euros apiece. This is still more expensive than AMD cards of equivalent hashing power, but this is a proof of concept.

This is a significant monetary investment and I hope it will at least return most of its hardware cost during its lifetime. Ideally it will output 1.5 MHash/s at 800 watts drawn from the wall.

Christian
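A quick sanity check of the numbers in the post above. The ~500 kHash/s per-card figure is an assumption that makes the stated target work out, not a benchmark:

```python
# Back-of-the-envelope check of the 1.5 MHash/s target.
cards = 3
khash_per_card = 500  # assumed per-card scrypt rate, not a measured figure
total_khash = cards * khash_per_card
watts = 800

print(total_khash, "kHash/s total")                       # 1500 = 1.5 MHash/s
print(round(total_khash / watts, 2), "kHash/s per watt")  # 1.88
```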
cbuchner1 (OP)
Hero Member
*****
Offline Offline

Activity: 756
Merit: 502


View Profile
December 05, 2013, 11:56:47 AM
Last edit: December 05, 2013, 01:03:55 PM by cbuchner1
 #1367

Need some help with an Nvidia GeForce GT 610 with 2 GB memory.
Please provide me with the right settings to get max kHash/s:
a specific command line, not just the auto functions.

That chip is just not worth it. Expect less than 15 kHash/s from it.

Why would you expect a complete optimized command line from someone else? How many people do you think have tried this before on a chip that's virtually unsuitable for mining? Please do your own research: use autotune and play with the -C cache settings.
trell0z
Newbie
*
Offline Offline

Activity: 43
Merit: 0


View Profile
December 05, 2013, 05:42:52 PM
 #1368

An early Christmas present for all 580 (and maybe 570?) users:
-H 1 -i 0 -l F16x14 -C 2 (x64 exe)
Currently hashing at ~320 kH/s! My 580 is running at 970 MHz core / 2200 MHz mem. Mem speed doesn't affect it much, if at all; the biggest increase comes from the core clock. So if you can clock the core higher by lowering your memory clock, make a mining profile for that and another profile for games if you need the mem speed, though usually a higher core clock is better there too.
DuckDodgers
Newbie
*
Offline Offline

Activity: 20
Merit: 0


View Profile
December 05, 2013, 06:58:07 PM
 #1369

An early Christmas present for all 580 (and maybe 570?) users:
-H 1 -i 0 -l F16x14 -C 2 (x64 exe)
Currently hashing at ~320 kH/s! My 580 is running at 970 MHz core / 2200 MHz mem. Mem speed doesn't affect it much, if at all; the biggest increase comes from the core clock. So if you can clock the core higher by lowering your memory clock, make a mining profile for that and another profile for games if you need the mem speed, though usually a higher core clock is better there too.

My GTX 580 peaks with the F16x16 config (~293 kH/s @ 825 MHz). And indeed, the mem clock can be slashed without impact on performance.
trell0z
Newbie
*
Offline Offline

Activity: 43
Merit: 0


View Profile
December 05, 2013, 07:04:35 PM
 #1370

An early Christmas present for all 580 (and maybe 570?) users:
-H 1 -i 0 -l F16x14 -C 2 (x64 exe)
Currently hashing at ~320 kH/s! My 580 is running at 970 MHz core / 2200 MHz mem.

My GTX 580 peaks with the F16x16 config (~293 kH/s @ 825 MHz). And indeed, the mem clock can be slashed without impact on performance.

Really? What other settings? F16x16 with my config gives me only ~200 kH/s.
DuckDodgers
Newbie
*
Offline Offline

Activity: 20
Merit: 0


View Profile
December 05, 2013, 07:29:20 PM
 #1371

Really? What other settings? F16x16 with my config gives me only ~200 kH/s.

cudaminer64.exe -H 1 -i 0 -d 0 -l F16x16 -C 2 --no-autotune

Device driver ver. 314.22 WHQL
trell0z
Newbie
*
Offline Offline

Activity: 43
Merit: 0


View Profile
December 05, 2013, 07:49:58 PM
 #1372

Really? What other settings? F16x16 with my config gives me only ~200 kH/s.

cudaminer64.exe -H 1 -i 0 -d 0 -l F16x16 -C 2 --no-autotune

Device driver ver. 314.22 WHQL

Strange... I guess it's the other hardware/driver version differences in our rigs then. Or our cards are just very different, haha.
Anyway, why the --no-autotune? As far as I understand it, if you specify a launch config like we do with (letter)NUMBERxNUMBER, any autotuning is disabled anyway.
Tansen
Member
**
Offline Offline

Activity: 70
Merit: 10


View Profile
December 06, 2013, 12:50:41 AM
 #1373

I'm up to 279 on my GTX 570 :)

How?? I'm on less than 160-233 with my 570 :-(

Be happy... I'm only getting 130 kH/s.

But I can't get my CUDA miner to go. Despite posting my conf, there doesn't seem to be anything identifiably wrong with it from what I can see, but I'm still getting JSON errors.

Change allowip=10.1.1.* to allowip=All and see if that helps.

I'm not sure if that's what fixed it, but thank you.

Also, the wallet had an update that night, so it might be that while it was updating the blockchain it disconnected and never knew which block to work on. Either way, thanks, it works now lol.
linzkl
Newbie
*
Offline Offline

Activity: 5
Merit: 0


View Profile
December 06, 2013, 12:58:56 AM
 #1374

Hi~ I have a problem with my GTX 460 using cudaminer. Before cudaminer I used miners like bfgminer and guiminer and had no problems. But now, after two days of running cudaminer, my card can no longer be used for gaming; it can only play movies. It has become unstable even when I'm not mining. When I try to open a game, the screen flashes and then the computer shuts down with a blue screen. The hashing speed at the beginning was only ~70 kH/s, but after a while it holds ~99 kH/s. I tested the card with tools like GPU-Z and they all show normal. Is there some special setting in cudaminer that changed the GPU? Can I set it back, or did this happen simply because the GPU malfunctioned and can no longer be used?

By the way, when I used autotune it chose F8x16; right now the card stops responding when I use that, so I use F28x4. It shows F1x16 and then stops responding.

Thank you.
cbuchner1 (OP)
Hero Member
*****
Offline Offline

Activity: 756
Merit: 502


View Profile
December 06, 2013, 01:07:20 AM
 #1375

Hi~ I have a problem with my GTX 460 using cudaminer. Before cudaminer I used miners like bfgminer and guiminer and had no problems. But now, after two days of running cudaminer, my card can no longer be used for gaming; it can only play movies. It has become unstable even when I'm not mining.

Do you have any overclocking tool running in the background? Tools like EVGA Precision X or MSI Afterburner may remember the clock rate across reboots. If an unstable clock rate was selected, the setting will persist.
linzkl
Newbie
*
Offline Offline

Activity: 5
Merit: 0


View Profile
December 06, 2013, 01:46:07 AM
 #1376

Hi~ I have a problem with my GTX 460 using cudaminer. Before cudaminer I used miners like bfgminer and guiminer and had no problems. But now, after two days of running cudaminer, my card can no longer be used for gaming; it can only play movies. It has become unstable even when I'm not mining.

Do you have any overclocking tool running in the background? Tools like EVGA Precision X or MSI Afterburner may remember the clock rate across reboots. If an unstable clock rate was selected, the setting will persist.

Hi~ I don't have any of those tools. For those two days I just wanted to try whether LTC is worthwhile.

I've used my card for more than two years, just for gaming and sometimes BOINC computing or mining some BTC. Is it possible that in those two days I triggered the death of some chips on the GPU?
anonkun
Newbie
*
Offline Offline

Activity: 3
Merit: 0


View Profile
December 06, 2013, 03:06:03 AM
 #1377

Hi,

I have this weird quirk with cudaminer on my GTX 670: if I play a video, cudaminer gets about 20% more kH/s and GPU utilization increases by 10%. I can't think of any reason why this would be the case, as the GPU clock never changes. Any idea why this happens?

cudaminer by itself:
http://s24.postimg.org/n5kpeg4np/image.png

cudaminer while playing a video:
http://s24.postimg.org/mu391oo7p/image.png
Lacan82
Sr. Member
****
Offline Offline

Activity: 247
Merit: 250


View Profile
December 06, 2013, 03:13:51 AM
 #1378

Hi,

I have this weird quirk with cudaminer on my GTX 670: if I play a video, cudaminer gets about 20% more kH/s and GPU utilization increases by 10%. I can't think of any reason why this would be the case, as the GPU clock never changes. Any idea why this happens?

You're using interactive mode (-i 1), which means it's reserving GPU headroom for your display. Try -i 0.

anonkun
Newbie
*
Offline Offline

Activity: 3
Merit: 0


View Profile
December 06, 2013, 04:08:00 AM
 #1379

Hi,

I have this weird quirk with cudaminer on my GTX 670: if I play a video, cudaminer gets about 20% more kH/s and GPU utilization increases by 10%. I can't think of any reason why this would be the case, as the GPU clock never changes. Any idea why this happens?

You're using interactive mode (-i 1), which means it's reserving GPU headroom for your display. Try -i 0.

That seems to have fixed the quirk.
Is there an explanation as to why it does that?
Lacan82
Sr. Member
****
Offline Offline

Activity: 247
Merit: 250


View Profile
December 06, 2013, 04:16:02 AM
 #1380

Hi,

I have this weird quirk with cudaminer on my GTX 670: if I play a video, cudaminer gets about 20% more kH/s and GPU utilization increases by 10%. I can't think of any reason why this would be the case, as the GPU clock never changes. Any idea why this happens?

You're using interactive mode (-i 1), which means it's reserving GPU headroom for your display. Try -i 0.

That seems to have fixed the quirk.
Is there an explanation as to why it does that?

The technical explanation would have to come from Christian, but interactive mode (-i 1) reserves GPU time for desktop usage, so the mining kernels yield to keep the display responsive; -i 0 lets them run flat out.
