Bitcoin Forum
April 27, 2024, 08:04:02 PM *
News: Latest Bitcoin Core release: 27.0 [Torrent]
 
Pages: [85] of 1135
Author Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX]  (Read 3426868 times)
frontier204
Newbie
*
Offline Offline

Activity: 19
Merit: 0


View Profile
December 21, 2013, 02:33:51 AM
 #1681

...
550Ti
Linux kernel-3.11 x86_64  cudatoolkit-5.5 90-92 kH
Win7x64 72-76 kH

Aha! It's all in the settings...

Using the following:
cudaminer -i 0 -C 1 -H 1 -o stratum.... -O ....

--- the result is
Code:
GPU #0: GeForce GTX 550 Ti, 313344 hashes, 94.62 khash/s
accepted: 35/35 (100.00%), 94.62 khash/s (yay!!!)

To get this result, I also had to stop mining on the CPU, since CPU usage is over 50% of 2 cores with that setup. My setup is a GV-N550OC-1GI at its factory overclock running on a lowly AMD 5200+ / Asus M2AVM / 6GB. (Ubuntu 13.04 with the kernel 3.12 .DEBs stolen from "trusty"'s archive)
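For anyone scripting this, the flags above can be collected into a small launcher. A minimal sketch assuming a POSIX shell; the pool URL and worker credentials are placeholders, not real endpoints:

```shell
#!/bin/sh
# Hypothetical launcher for the settings quoted above.
# POOL and WORKER are placeholders -- substitute your own pool and credentials.
POOL="stratum+tcp://your.pool.example:3333"
WORKER="username.workername:password"

# -i 0 : no interactive headroom (dedicated mining; the desktop may lag)
# -C 1 : texture cache mode 1
# -H 1 : do the SHA256 parts multi-threaded on the CPU (hence the CPU load noted above)
CMD="cudaminer -i 0 -C 1 -H 1 -o $POOL -O $WORKER"
echo "$CMD"
```

Running it prints the assembled command for inspection; swap `echo` for `exec` once the placeholders are filled in.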

Malstrond
Newbie
*
Offline Offline

Activity: 2
Merit: 0



View Profile
December 21, 2013, 04:46:32 AM
 #1682


Coolbits is not supported anymore for Fermi and Kepler. Despite Nvidia advertising their unified driver architecture with equal features on all platforms, it is actually impossible to software-overclock 4xx and newer cards on GNU/Linux. nvclock doesn't work either with the newer cards.
The only way is to flash a modified VBIOS. But most of the modified ones floating around on the net, for example the one on TechInferno by svl7, disable GPU Boost and set a low base clock, relying on the user to set the clock rate in software.
You could edit one yourself (Kepler BIOS Tweaker 1.26 (not 1.25) supports base clock, boost and TDP editing for recent cards), but I'd advise you to have a backup card for recovering if it goes wrong.
kernels10
Sr. Member
****
Offline Offline

Activity: 408
Merit: 250


ded


View Profile
December 21, 2013, 06:53:13 AM
 #1683


Coolbits is not supported anymore for Fermi and Kepler. Despite Nvidia advertising their unified driver architecture with equal features on all platforms, it is actually impossible to software-overclock 4xx and newer cards on GNU/Linux. nvclock doesn't work either with the newer cards.
The only way is to flash a modified VBIOS. But most of the modified ones floating around on the net, for example the one on TechInferno by svl7, disable GPU Boost and set a low base clock, relying on the user to set the clock rate in software.
You could edit one yourself (Kepler BIOS Tweaker 1.26 (not 1.25) supports base clock, boost and TDP editing for recent cards), but I'd advise you to have a backup card for recovering if it goes wrong.

yep, no coolbits.

I guess I'm gonna have to try this out sooner than I'd planned lol
ajax3592
Full Member
***
Offline Offline

Activity: 210
Merit: 100

Crypto News & Tutorials - Coinramble.com


View Profile
December 21, 2013, 07:04:18 AM
Last edit: December 21, 2013, 08:38:36 AM by ajax3592
 #1684

...
550Ti
Linux kernel-3.11 x86_64  cudatoolkit-5.5 90-92 kH
Win7x64 72-76 kH

Aha! It's all in the settings...

Using the following:
cudaminer -i 0 -C 1 -H 1 -o stratum.... -O ....

--- the result is
Code:
GPU #0: GeForce GTX 550 Ti, 313344 hashes, 94.62 khash/s
accepted: 35/35 (100.00%), 94.62 khash/s (yay!!!)

To get this result, I also had to stop mining on the CPU, since CPU usage is over 50% of 2 cores with that setup. My setup is a GV-N550OC-1GI at its factory overclock running on a lowly AMD 5200+ / Asus M2AVM / 6GB. (Ubuntu 13.04 with the kernel 3.12 .DEBs stolen from "trusty"'s archive)

On a GTS 450, the recent miner won't launch with this command:
Code:
cudaminer.exe -i 0 -C 1 -H 1
or with any other settings :( :'( Someone please give me a working command line for the 450.

Tried this too:
Code for 18 Dec release to run on GTS 450 please.

Change -l 32x4 to -l auto, remove -C (it's ignored now).
Not working

Crypto news/tutorials >>CoinRamble<<                            >>Netcodepool<<                >>My graphics<<
litecoinbeast
Full Member
***
Offline Offline

Activity: 136
Merit: 100


View Profile
December 21, 2013, 07:09:53 AM
Last edit: December 21, 2013, 08:31:43 PM by litecoinbeast
 #1685

Gigabyte GTX 770 OC 4 gig card :D ...few beers last night.

How does ~330 Kh/s sound with new cudaminer....4 gigs.
ak84
Full Member
***
Offline Offline

Activity: 126
Merit: 100


View Profile
December 21, 2013, 08:03:26 AM
Last edit: December 21, 2013, 08:13:50 AM by ak84
 #1686

Anyone have optimal settings for GTX660 Ti?
I'm just starting out and my settings aren't working. I've been reading the readme file for an hour now trying to make sense of the -H -C and all.

C:\cudaminer\x64\cudaminer.exe -H 2 -d 0 -l auto -i 1 K7x32 -o stratum+tcp://www.zzz.com:3333 -O worker:x

This is my first time so I must have the settings wrong. Thanks!


edit: ok I went back a a few pages and just copied other people's 660ti settings. I am testing out

cudaminer.exe -H 1 -i 0 -C 1 -D -l K14x16 -o stratum...   (AVG around 265 khash/s)

and

cudaminer.exe -H 1 -i 0 -C 1 -D -l K7x32 -o stratum   (will report later)

▬▬▬▬▬▬▬▬▬ Edutainment.Tech ▬▬▬▬▬▬▬▬▬
Double ICO: Games for smart and games for business
SmartGames    ◼ CorpEdu
davethetrousers
Full Member
***
Offline Offline

Activity: 196
Merit: 100



View Profile
December 21, 2013, 11:58:34 AM
Last edit: December 21, 2013, 12:29:09 PM by davethetrousers
 #1687

I get 224kH/s on my MSI GTX 660 (non-TI) with optimal settings and a good overclock with maximum powertune.

Settings: -l K10x16 -i 0 -m 1

While I use the machine, I can set the card to minimum powertune, slight overclock on engine clock, use -i 1 of course, and still get 170kH/s. Real power is at 77% of nominal, at a very efficient 0.962V figure for voltage.

Keep up the good work! I am considering a donation Smiley

Add: Just tried the -H 1 parallelization. Together with a more parallel K5x32 it got me to 180kH/s, with interactive powersave settings. My UI now is a tiny bit less responsive and CPU load is somewhat higher, though.

LoveCats
Newbie
*
Offline Offline

Activity: 4
Merit: 0


View Profile
December 21, 2013, 01:21:17 PM
 #1688

Hi,

A new miner here. I have a rig which is not meant for mining, but I thought I'd use it for a trial run to see how things work. I am mining Litecoins with this setup.

I have 4x nVidia Quadro K6000 cards and I am running cudaMiner in autotune mode (if that's the mode it runs in when no extra flags are specified).

One thing I've noticed is that one of the 4 GPUs gives a hash rate of 485 kH/s while the other 3 range from 280-370 kH/s. I am getting a total average of 1450 kH/s. Is this hash rate any good? How do I make the other 3 reach 485 kH/s as well?

Does anyone have a suggested manual config for better results? UI responsiveness is of no use to me; just suggest the settings that give maximum performance.

Lastly, I would like to thank the OP for cudaMiner. From what I've read, it seems to be the only thing that makes nVidia cards any use for this.

Regards.
Prima Primat
Member
**
Offline Offline

Activity: 117
Merit: 10


View Profile
December 21, 2013, 01:23:31 PM
 #1689


Coolbits is not supported anymore for Fermi and Kepler. Despite Nvidia advertising their unified driver architecture with equal features on all platforms, it is actually impossible to software-overclock 4xx and newer cards on GNU/Linux. nvclock doesn't work either with the newer cards.
The only way is to flash a modified VBIOS. But most of the modified ones floating around on the net, for example the one on TechInferno by svl7, disable GPU Boost and set a low base clock, relying on the user to set the clock rate in software.
You could edit one yourself (Kepler BIOS Tweaker 1.26 (not 1.25) supports base clock, boost and TDP editing for recent cards), but I'd advise you to have a backup card for recovering if it goes wrong.

I'm also curious about OC on Linux, but I don't even need to set any clocks, just raise the power target (like in MSI Afterburner). Is that impossible, too?
trell0z
Newbie
*
Offline Offline

Activity: 43
Merit: 0


View Profile
December 21, 2013, 02:37:13 PM
 #1690

Hi,

A new miner here. I have a rig which is not meant for mining, but I thought I'd use it for a trial run to see how things work. I am mining Litecoins with this setup.

I have 4x nVidia Quadro K6000 cards and I am running cudaMiner in autotune mode (if that's the mode it runs in when no extra flags are specified).

One thing I've noticed is that one of the 4 GPUs gives a hash rate of 485 kH/s while the other 3 range from 280-370 kH/s. I am getting a total average of 1450 kH/s. Is this hash rate any good? How do I make the other 3 reach 485 kH/s as well?

Does anyone have a suggested manual config for better results? UI responsiveness is of no use to me; just suggest the settings that give maximum performance.

Lastly, I would like to thank the OP for cudaMiner. From what I've read, it seems to be the only thing that makes nVidia cards any use for this.

Regards.

Try adding -D to see which config it picks for each card, then apply the one that gives the highest hashrate to all of them. After that you can play around with -H 0/1/2. Otherwise, going by previous posts, you could try a -l config of (number of SMX units on the card)x32, which for the K6000 should be -l T15x32, or maybe the Kepler kernel with K15x32.
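That SMX-based rule of thumb can be sketched in shell. The 15 here is an assumption that the K6000 exposes 15 SMX units (2880 CUDA cores / 192 cores per SMX); verify against your card's spec sheet before relying on it:

```shell
#!/bin/sh
# Sketch of the rule of thumb above: -l <kernel prefix><SMX count>x32.
smx=15        # assumed SMX count for a Quadro K6000 (2880 cores / 192)
prefix="T"    # Titan-class kernel per the suggestion; use "K" for the Kepler kernel
echo "-l ${prefix}${smx}x32"
```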
ajax3592
Full Member
***
Offline Offline

Activity: 210
Merit: 100

Crypto News & Tutorials - Coinramble.com


View Profile
December 21, 2013, 02:54:58 PM
 #1691

Okay guys I looked in the spreadsheet for GTS 450 code, still cannot run the miner (it runs for a microsecond and then closes)

Code:
cudaminer.exe -i 0 -D -H 1 -C 2 -l F24x8 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 


I can easily run the 7 Dec release on this code:

Code:
cudaminer.exe -i 0 -l auto -m 1 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 

davetheshrew
Full Member
***
Offline Offline

Activity: 140
Merit: 100


View Profile
December 21, 2013, 03:50:45 PM
 #1692

Okay guys I looked in the spreadsheet for GTS 450 code, still cannot run the miner (it runs for a microsecond and then closes)

Code:
cudaminer.exe -i 0 -D -H 1 -C 2 -l F24x8 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 


I can easily run the 7 Dec release on this code:

Code:
cudaminer.exe -i 0 -l auto -m 1 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 

cudaminer.exe -i 0 -l auto -m 1 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass

In bold is the intensity; you have set it to zero. Set it to 13 and see what happens. Also I'd drop the -l auto flag: just let it figure it out itself, then set it to what it autotuned to.

My [POT] address PKGbqyqnVLvMoxmYq1YAm7dKAxue52LeNj
ajax3592
Full Member
***
Offline Offline

Activity: 210
Merit: 100

Crypto News & Tutorials - Coinramble.com


View Profile
December 21, 2013, 04:03:29 PM
 #1693

Okay guys I looked in the spreadsheet for GTS 450 code, still cannot run the miner (it runs for a microsecond and then closes)

Code:
cudaminer.exe -i 0 -D -H 1 -C 2 -l F24x8 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 


I can easily run the 7 Dec release on this code:

Code:
cudaminer.exe -i 0 -l auto -m 1 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 

cudaminer.exe -i 0 -l auto -m 1 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass

In bold is the intensity; you have set it to zero. Set it to 13 and see what happens. Also I'd drop the -l auto flag: just let it figure it out itself, then set it to what it autotuned to.

Changed -i and -l auto codes as you said, doesn't start up (same microsecond startup)
Code:
cudaminer.exe -i 13 -D -H 1 -C 2 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 


ak84
Full Member
***
Offline Offline

Activity: 126
Merit: 100


View Profile
December 21, 2013, 04:09:11 PM
 #1694

I get 224kH/s on my MSI GTX 660 (non-TI) with optimal settings and a good overclock with maximum powertune.

Settings: -l K10x16 -i 0 -m 1

While I use the machine, I can set the card to minimum powertune, slight overclock on engine clock, use -i 1 of course, and still get 170kH/s. Real power is at 77% of nominal, at a very efficient 0.962V figure for voltage.

Keep up the good work! I am considering a donation Smiley

Add: Just tried the -H 1 parallelization. Together with a more parallel K5x32 it got me to 180kH/s, with interactive powersave settings. My UI now is a tiny bit less responsive and CPU load is somewhat higher, though.


hi 660 owner!

what does the -H 1 parallelization do?

and why do you have -i 0? What does that do vs. -i 1?


Prima Primat
Member
**
Offline Offline

Activity: 117
Merit: 10


View Profile
December 21, 2013, 04:31:32 PM
 #1695

I get 224kH/s on my MSI GTX 660 (non-TI) with optimal settings and a good overclock with maximum powertune.

Settings: -l K10x16 -i 0 -m 1

While I use the machine, I can set the card to minimum powertune, slight overclock on engine clock, use -i 1 of course, and still get 170kH/s. Real power is at 77% of nominal, at a very efficient 0.962V figure for voltage.

Keep up the good work! I am considering a donation Smiley

Add: Just tried the -H 1 parallelization. Together with a more parallel K5x32 it got me to 180kH/s, with interactive powersave settings. My UI now is a tiny bit less responsive and CPU load is somewhat higher, though.


hi 660 owner!

what does the -H 1 parallelization do?

and why do you have -i 0? What does that do vs. -i 1?



It's all in the readme.
davetheshrew
Full Member
***
Offline Offline

Activity: 140
Merit: 100


View Profile
December 21, 2013, 04:48:57 PM
 #1696

Okay guys I looked in the spreadsheet for GTS 450 code, still cannot run the miner (it runs for a microsecond and then closes)

Code:
cudaminer.exe -i 0 -D -H 1 -C 2 -l F24x8 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 


I can easily run the 7 Dec release on this code:

Code:
cudaminer.exe -i 0 -l auto -m 1 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 

cudaminer.exe -i 0 -l auto -m 1 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass

In bold is the intensity; you have set it to zero. Set it to 13 and see what happens. Also I'd drop the -l auto flag: just let it figure it out itself, then set it to what it autotuned to.

Changed -i and -l auto codes as you said, doesn't start up (same microsecond startup)
Code:
cudaminer.exe -i 13 -D -H 1 -C 2 -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass 



One thing a lot of people get wrong is the path.

Is your cudaminer folder in the C:\ root? If it is, your .bat would look something like this:

START C:\cudaminer\cudaminer.exe -o stratum+tcp://xxx.xxx.com:3333 -O id.worker:worker pass

If your file is in C:\ you could run that right now for autotuning.

Sheldor333
Sr. Member
****
Offline Offline

Activity: 406
Merit: 250


View Profile
December 21, 2013, 05:04:24 PM
 #1697

Ok. Windows 7 user here. Ran it using cudaminer.exe -i 13 -D -H 1 -C 2 -o stratum+tcp://pool.com:3333 -O id.worker:pass
All I got is:
https://i.imgur.com/qW0qoO2.png
Edit: Tried it with a 3335 port, since that is what my pool uses. No luck there either.

dejahboi
Newbie
*
Offline Offline

Activity: 6
Merit: 0


View Profile
December 21, 2013, 05:43:57 PM
 #1698

Ok. Windows 7 user here. Ran it using cudaminer.exe -i 13 -D -H 1 -C 2 -o stratum+tcp://pool.com:3333 -O id.worker:pass
All I got is:
https://i.imgur.com/qW0qoO2.png
Edit: Tried it with a 3335 port, since that is what my pool uses. No luck there either.

It's the pool, not the miner, in this case.
davethetrousers
Full Member
***
Offline Offline

Activity: 196
Merit: 100



View Profile
December 21, 2013, 06:37:03 PM
 #1699

what does the -H1 parallelizatoin do?

and why do you have -i 0  , what does that do? vs. -i 1

With the -H flag at 1, the SHAish parts of the app are done multi-threaded on the CPU.

The -i flag says whether the app may load the GPU as much as possible (0) or keep some headroom for the user interface (1).
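Put together, those two flags give two typical profiles. A sketch with placeholder pool details, not a definitive recommendation:

```shell
#!/bin/sh
# Two hypothetical profiles based on the flag descriptions above.
# Pool URL and worker credentials are placeholders.
POOL="stratum+tcp://your.pool.example:3333"

# Dedicated rig: load the GPU fully, CPU helps with the SHA parts.
DEDICATED="cudaminer -i 0 -H 1 -o $POOL -O worker:pass"

# Desktop in use: keep headroom so the UI stays responsive.
DESKTOP="cudaminer -i 1 -H 1 -o $POOL -O worker:pass"

echo "$DEDICATED"
```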

LoveCats
Newbie
*
Offline Offline

Activity: 4
Merit: 0


View Profile
December 21, 2013, 06:38:34 PM
 #1700

Hi,

A new miner here. I have a rig which is not meant for mining, but I thought I'd use it for a trial run to see how things work. I am mining Litecoins with this setup.

I have 4x nVidia Quadro K6000 cards and I am running cudaMiner in autotune mode (if that's the mode it runs in when no extra flags are specified).

One thing I've noticed is that one of the 4 GPUs gives a hash rate of 485 kH/s while the other 3 range from 280-370 kH/s. I am getting a total average of 1450 kH/s. Is this hash rate any good? How do I make the other 3 reach 485 kH/s as well?

Does anyone have a suggested manual config for better results? UI responsiveness is of no use to me; just suggest the settings that give maximum performance.

Lastly, I would like to thank the OP for cudaMiner. From what I've read, it seems to be the only thing that makes nVidia cards any use for this.

Regards.

Try adding -D to see which config it picks for each card, then apply the one that gives the highest hashrate to all of them. After that you can play around with -H 0/1/2. Otherwise, going by previous posts, you could try a -l config of (number of SMX units on the card)x32, which for the K6000 should be -l T15x32, or maybe the Kepler kernel with K15x32.
Thanks bro. The -l with T15x32 boosted the performance. Now getting about 1600 kH/s and all the GPUs are hashing at nearly the same rate. nVidia GPUs surely suck for mining: such a high-end GPU and such poor performance. Not sure I'll continue this; the trial run's results will decide if it's going any further. Thanks again for your help though.