Bitcoin Forum
Author Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX]  (Read 3426872 times)
szczyglo
Newbie
*
Offline Offline

Activity: 4
Merit: 0


View Profile
January 16, 2014, 09:30:05 PM
Last edit: January 19, 2014, 12:30:43 PM by szczyglo
 #2601

My GTX 590 gains 0.68-0.7 kH/s for each GF110 (2x1.5 GB memory) with -l X5x2 -C 1, so the total for the GTX 590 is about 1.38 kH/s. The low memory is a pain :/
Thanks to all developers, donation sent. Patoberli - thanks for the build!
bathrobehero
Legendary
*
Offline Offline

Activity: 2002
Merit: 1051


ICO? Not even once.


View Profile
January 17, 2014, 12:51:18 AM
 #2602

I think the card's (stock) BIOS would not push past the reliability voltage anyway. So even if one sets the slider extremely high, the clocks you can actually reach will be much lower. So one needs to overvolt, or install a seriously modded BIOS, to work around that restriction.

As far as I know, the stock BIOS does not stop you from increasing the core clock even after it hits the maximum stock voltage, and we would only need overvolting under normal circumstances to get such a clock stable; for scrypt-jane it could be stable without massive overvolting.

The poster (Acad) was kind enough to provide a screenshot, and while it could be considered a 286 MHz OC if we compare it to the boost clock, it's still quite amazing. He noted "Driver will crash if you do anything else that uses the GPU ex a website", which again means that it's only stable for scrypt-jane.


Not your keys, not your coins!
cbuchner1 (OP)
Hero Member
*****
Offline Offline

Activity: 756
Merit: 502


View Profile
January 17, 2014, 03:19:54 PM
Last edit: January 17, 2014, 03:33:18 PM by cbuchner1
 #2603

There was a breaking change today regarding the format of launch configs for David Andersen's kernels.

This has advantages because it allows more fine-grained control over launch configs and memory allocation. It is, however, a nightmare for the maintainers of the Google spreadsheets ;)

Code:
for scrypt-jane the equivalent config to B x W is B x 4*W and for scrypt it is 4*B x W
so e.g for Yacoin replace -l K2x8 with -l K2x32
and for Litecoin -l K2x32 becomes -l K8x32.

this affects K,T,X kernel configs only (these are derived from David's code) - and only when you run a
github version from today or later.

or you can simply autotune again to find a good config, saving you the hassle of converting...

The main advantage to this is that users of 1GB cards can now use up to 10% more memory
than before (memory is now allocated in increments of 32MB for Yacoin - previously it was 128MB).
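
(For illustration: a minimal Python sketch of the conversion rule above. Only the B x W -> B x 4*W / 4*B x W factors come from this post; the helper name and the regex are illustrative.)
Code:
import re

def convert_launch_config(config, algo):
    """Convert an old-style -l config (e.g. 'K2x8') to the new format:
    for scrypt-jane B x W becomes B x 4*W, for scrypt it becomes 4*B x W.
    Only K/T/X kernel configs are affected."""
    m = re.fullmatch(r'([KTX])(\d+)x(\d+)', config)
    if not m:
        return config  # other kernel prefixes keep their old meaning
    kernel, blocks, warps = m.group(1), int(m.group(2)), int(m.group(3))
    if algo == 'scrypt-jane':
        warps *= 4          # e.g. Yacoin: K2x8  -> K2x32
    elif algo == 'scrypt':
        blocks *= 4         # e.g. Litecoin: K2x32 -> K8x32
    return f'{kernel}{blocks}x{warps}'

print(convert_launch_config('K2x8', 'scrypt-jane'))  # K2x32
print(convert_launch_config('K2x32', 'scrypt'))      # K8x32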

bathrobehero
Legendary
*
Offline Offline

Activity: 2002
Merit: 1051


ICO? Not even once.


View Profile
January 17, 2014, 03:46:15 PM
 #2604

It is, however, a nightmare for the maintainers of the Google spreadsheets ;)

That would be only me, at least for the two combo sheets, but for the greater good I'll think of something!

This is great news though. Just yesterday, while toying around with N14+ benchmarks, I was wondering whether it would be possible to increase the resolution of the kernel configs so we could squeeze out more VRAM usage, and here it is :)

Not your keys, not your coins!
ManIkWeet
Full Member
***
Offline Offline

Activity: 182
Merit: 100


View Profile
January 17, 2014, 04:00:43 PM
 #2605

There was a breaking change today regarding the format of launch configs for David Andersen's kernels.

This has advantages because it allows more fine-grained control over launch configs and memory allocation. It is, however, a nightmare for the maintainers of the Google spreadsheets ;)

Code:
for scrypt-jane the equivalent config to B x W is B x 4*W and for scrypt it is 4*B x W
so e.g for Yacoin replace -l K2x8 with -l K2x32
and for Litecoin -l K2x32 becomes -l K8x32.

this affects K,T,X kernel configs only (these are derived from David's code) - and only when you run a
github version from today or later.

or you can simply autotune again to find a good config, saving you the hassle of converting...

The main advantage to this is that users of 1GB cards can now use up to 10% more memory
than before (memory is now allocated in increments of 32MB for Yacoin - previously it was 128MB).


So I assume if I run autotune with this build it'll use more memory than it does currently?
Currently my build uses 2619MB/3GB with T9x2.

Also, after 2 days of mining with no block found, I quickly found a YACoin block today :)

BTC donations: 18fw6ZjYkN7xNxfVWbsRmBvD6jBAChRQVn (thanks!)
ktf
Newbie
*
Offline Offline

Activity: 24
Merit: 0


View Profile
January 17, 2014, 05:00:03 PM
 #2606

Hi guys. Thank you all for your work on cudaminer, helping us with Nvidia cards a bit :) I've read quite a bit of the thread and I feel I am doing something wrong here. I am using cudaminer like this:

cudaminer.exe  -i 0 -H 1 -l K5x32,K5x32 -o stratum+tcp://europe.mine-litecoin.com:80 -u user

on two GTX 660 cards :

[2014-01-17 18:38:01] GPU #1: GeForce GTX 660, 40960 hashes, 2.88 khash/s
[2014-01-17 18:38:02] GPU #0: GeForce GTX 660, 51200 hashes, 3.16 khash/s
[2014-01-17 18:39:05] GPU #1: GeForce GTX 660, 174080 hashes, 2.84 khash/s
[2014-01-17 18:39:15] GPU #0: GeForce GTX 660, 194560 hashes, 2.75 khash/s
[2014-01-17 18:39:51] Stratum detected new block
[2014-01-17 18:39:55] GPU #0: GeForce GTX 660, 102400 hashes, 2.71 khash/s
[2014-01-17 18:39:55] GPU #1: GeForce GTX 660, 133120 hashes, 2.71 khash/s
[2014-01-17 18:40:56] GPU #0: GeForce GTX 660, 163840 hashes, 2.80 khash/s
[2014-01-17 18:40:56] GPU #1: GeForce GTX 660, 163840 hashes, 2.75 khash/s
[2014-01-17 18:41:45] GPU #0: GeForce GTX 660, 138240 hashes, 2.82 khash/s
[2014-01-17 18:41:48] accepted: 4/4 (100.00%), 5.58 khash/s (yay!!!)
[2014-01-17 18:41:59] GPU #1: GeForce GTX 660, 168960 hashes, 2.84 khash/s
[2014-01-17 18:42:49] GPU #0: GeForce GTX 660, 174080 hashes, 2.89 khash/s
[2014-01-17 18:43:00] GPU #1: GeForce GTX 660, 174080 hashes, 2.92 khash/s

I've seen people getting 70 khash/s on those, and others reaching 500+ on a 780. Am I missing something obvious here? At this rate I doubt I'd even recover the electricity costs.
cbuchner1 (OP)
Hero Member
*****
Offline Offline

Activity: 756
Merit: 502


View Profile
January 17, 2014, 05:07:08 PM
 #2607

Hi guys. Thank you all for your work on cudaminer, helping us with Nvidia cards a bit :) I've read quite a bit of the thread and I feel I am doing something wrong here. I am using cudaminer like this:

cudaminer.exe  -i 0 -H 1 -l K5x32,K5x32 -o stratum+tcp://europe.mine-litecoin.com:80 -u user

on two GTX 660 cards :

[2014-01-17 18:41:48] accepted: 4/4 (100.00%), 5.58 khash/s (yay!!!)

Yeah, those look like decent hash rates for scrypt-jane. But I don't see you enabling scrypt-jane support at all.

What version are you running and on what OS?

bathrobehero
Legendary
*
Offline Offline

Activity: 2002
Merit: 1051


ICO? Not even once.


View Profile
January 17, 2014, 05:12:59 PM
 #2608

That's weird: scrypt-jane hash rates while it's accepting shares from a Litecoin pool.

Edit: maybe it's the latest version, in which case it should be K20x32, not K5x32.

Not your keys, not your coins!
ktf
Newbie
*
Offline Offline

Activity: 24
Merit: 0


View Profile
January 17, 2014, 05:59:58 PM
 #2609

It was the latest version, but yes, without scrypt-jane support; I didn't compile a version with it.

I tried a version with scrypt-jane uploaded by someone else in the thread, and I get:

[2014-01-17 19:32:36] GPU #0: GeForce GTX 660, 26.44 khash/s
[2014-01-17 19:33:36] GPU #1: GeForce GTX 660, 26.35 khash/s
[2014-01-17 19:33:36] GPU #0: GeForce GTX 660, 26.32 khash/s
[2014-01-17 19:34:05] GPU #1: GeForce GTX 660 result does not validate on CPU (i=1981, s=0)!
[2014-01-17 19:34:06] GPU #1: GeForce GTX 660 result does not validate on CPU (i=1810, s=1)!
[2014-01-17 19:34:36] GPU #0: GeForce GTX 660, 26.39 khash/s
[2014-01-17 19:34:37] GPU #1: GeForce GTX 660, 26.35 khash/s
[2014-01-17 19:35:36] GPU #0: GeForce GTX 660, 26.52 khash/s
[2014-01-17 19:35:36] GPU #1: GeForce GTX 660, 26.49 khash/s
[2014-01-17 19:35:45] GPU #0: GeForce GTX 660 result does not validate on CPU (i=1692, s=1)!
[2014-01-17 19:35:50] GPU #0: GeForce GTX 660 result does not validate on CPU (i=1637, s=1)!
[2014-01-17 19:36:04] GPU #1: GeForce GTX 660 result does not validate on CPU (i=1119, s=1)!
[2014-01-17 19:36:22] Stratum detected new block
[2014-01-17 19:36:22] GPU #0: GeForce GTX 660, 26.47 khash/s
[2014-01-17 19:36:22] GPU #1: GeForce GTX 660, 26.44 khash/s
[2014-01-17 19:36:26] Stratum detected new block
[2014-01-17 19:36:26] GPU #1: GeForce GTX 660, 25.87 khash/s
[2014-01-17 19:36:26] GPU #0: GeForce GTX 660, 26.08 khash/s
[2014-01-17 19:37:25] GPU #1: GeForce GTX 660, 26.46 khash/s
[2014-01-17 19:37:25] GPU #0: GeForce GTX 660, 26.46 khash/s
[2014-01-17 19:38:06] Stratum detected new block

But as you can see, there are lots of errors. I just tried with K20x32:

[2014-01-17 19:47:58] GPU #0: GeForce GTX 660, 30556160 hashes, 630.38 khash/s
[2014-01-17 19:48:03] GPU #0: GeForce GTX 660 result does not validate on CPU (i=17502, s=1)!
[2014-01-17 19:48:04] GPU #1: GeForce GTX 660 result does not validate on CPU (i=4428, s=0)!
[2014-01-17 19:48:09] GPU #1: GeForce GTX 660 result does not validate on CPU (i=18108, s=1)!
[2014-01-17 19:48:10] Stratum detected new block
[2014-01-17 19:48:10] GPU #1: GeForce GTX 660, 20848640 hashes, 673.53 khash/s
[2014-01-17 19:48:10] GPU #0: GeForce GTX 660, 7475200 hashes, 618.09 khash/s
[2014-01-17 19:48:10] GPU #0: GeForce GTX 660 result does not validate on CPU (i=1735, s=1)!
[2014-01-17 19:48:12] GPU #1: GeForce GTX 660 result does not validate on CPU (i=14795, s=0)!
[2014-01-17 19:48:14] GPU #0: GeForce GTX 660 result does not validate on CPU (i=18516, s=0)!
[2014-01-17 19:48:22] GPU #0: GeForce GTX 660 result does not validate on CPU (i=18900, s=1)!
[2014-01-17 19:48:24] GPU #1: GeForce GTX 660 result does not validate on CPU (i=15159, s=0)!
[2014-01-17 19:48:26] GPU #1: GeForce GTX 660 result does not validate on CPU (i=19189, s=1)!
[2014-01-17 19:48:27] GPU #0: GeForce GTX 660 result does not validate on CPU (i=16918, s=0)!
[2014-01-17 19:48:31] GPU #1: GeForce GTX 660 result does not validate on CPU (i=8619, s=1)!
[2014-01-17 19:48:44] GPU #0: GeForce GTX 660 result does not validate on CPU (i=15977, s=1)!
[2014-01-17 19:48:45] GPU #1: GeForce GTX 660 result does not validate on CPU (i=8294, s=0)!
[2014-01-17 19:48:46] GPU #0: GeForce GTX 660 result does not validate on CPU (i=5393, s=0)!
[2014-01-17 19:48:49] GPU #0: GeForce GTX 660 result does not validate on CPU (i=17230, s=0)!
[2014-01-17 19:48:50] GPU #1: GeForce GTX 660 result does not validate on CPU (i=6325, s=1)!
[2014-01-17 19:48:57] GPU #1: GeForce GTX 660 result does not validate on CPU (i=856, s=0)!
[2014-01-17 19:48:58] GPU #0: GeForce GTX 660 result does not validate on CPU (i=1391, s=1)!
[2014-01-17 19:49:08] GPU #1: GeForce GTX 660 result does not validate on CPU (i=9682, s=0)!
[2014-01-17 19:49:09] GPU #0: GeForce GTX 660 result does not validate on CPU (i=5051, s=1)!
[2014-01-17 19:49:10] GPU #1: GeForce GTX 660, 40427520 hashes, 668.99 khash/s
[2014-01-17 19:49:11] GPU #0: GeForce GTX 660, 37089280 hashes, 606.77 khash/s
[2014-01-17 19:49:11] Stratum detected new block
[2014-01-17 19:49:11] GPU #0: GeForce GTX 660, 102400 hashes, 416.21 khash/s
[2014-01-17 19:49:11] GPU #1: GeForce GTX 660, 655360 hashes, 684.72 khash/s
bathrobehero
Legendary
*
Offline Offline

Activity: 2002
Merit: 1051


ICO? Not even once.


View Profile
January 17, 2014, 06:17:56 PM
 #2610

Scrypt autotune with latest github:
Code:
[2014-01-17 18:13:01] GPU #0:  106.47 khash/s with configuration K30x21
[2014-01-17 18:13:01] GPU #0: using launch configuration K30x21

But I really don't mind it since I'm up to 2.76kH/s with scrypt-jane!

Not your keys, not your coins!
cbuchner1 (OP)
Hero Member
*****
Offline Offline

Activity: 756
Merit: 502


View Profile
January 17, 2014, 07:03:52 PM
 #2611

Scrypt autotune with latest github:
Code:
[2014-01-17 18:13:01] GPU #0:  106.47 khash/s with configuration K30x21
[2014-01-17 18:13:01] GPU #0: using launch configuration K30x21

But I really don't mind it since I'm up to 2.76kH/s with scrypt-jane!

Yeah, getting scrypt performance back to its old levels (or better) would be one important TODO before making a new release.

At the moment my focus is still on scrypt-jane. A LOOKUP_GAP implementation is due this weekend. Maybe I'll also have a look at Keccak on the GPU.

Christian
cbuchner1 (OP)
Hero Member
*****
Offline Offline

Activity: 756
Merit: 502


View Profile
January 17, 2014, 07:04:57 PM
 #2612

Hi guys. Thank you all for your work on cudaminer, helping us with Nvidia cards a bit :) I've read quite a bit of the thread and I feel I am doing something wrong here. I am using cudaminer like this:

cudaminer.exe  -i 0 -H 1 -l K5x32,K5x32 -o stratum+tcp://europe.mine-litecoin.com:80 -u user

May I recommend starting individual cards in separate instances: one with -d 0 -l K5x32 -C 2 -H 1 and the other with -d 1 -l K5x32 -C 2 -H 1.

Ever since I migrated to CUDA 5.5 I have had inexplicable issues when trying to run multiple cards in a single miner instance.

Use the official 2013-12-18 version; that is definitely the fastest for scrypt.
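
(A minimal sketch of that two-instance setup, in Python rather than a batch file; the pool URL and user are just the placeholders from the quoted command, and cudaminer.exe is assumed to be in the working directory or on PATH.)
Code:
import subprocess

POOL = 'stratum+tcp://europe.mine-litecoin.com:80'  # pool from the quoted command
USER = 'user'                                       # placeholder worker name

# One cudaminer instance per GPU, each pinned to a single device via -d.
procs = [
    subprocess.Popen(['cudaminer.exe', '-d', str(gpu), '-l', 'K5x32',
                      '-C', '2', '-H', '1', '-o', POOL, '-u', USER])
    for gpu in (0, 1)
]

for p in procs:
    p.wait()  # keep both miners attached until they exit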

Christian
manofcolombia
Member
**
Offline Offline

Activity: 84
Merit: 10

SizzleBits


View Profile WWW
January 17, 2014, 07:41:02 PM
 #2613

Where are y'all trading your YACoins? I just sent a quick test deposit of 11 to bter.com, because I heard it was a bit sketchy, but I haven't been able to find any other place that takes YACoin.

cbuchner1 (OP)
Hero Member
*****
Offline Offline

Activity: 756
Merit: 502


View Profile
January 17, 2014, 08:12:54 PM
 #2614

Where are y'all trading your YACoins? I just sent a quick test deposit of 11 to bter.com, because I heard it was a bit sketchy, but I haven't been able to find any other place that takes YACoin.

bter.com for me too. That's my one lucky trade. The other sell orders are sitting unfulfilled at the moment, waiting for a buyer with fat fingers ;-)

Date                 Type  Pair     Price         Amount            Total
2014-01-14 04:44:49  Sell  YAC/BTC  0.000049 BTC  2,850.000000 YAC  0.1397 BTC

Now I really wish the YAC/BTC price hadn't fallen so much.

I heard cryptsy is planning to list YAC again, after some maintenance/upgrade work on their servers.

Christian
69charger
Full Member
***
Offline Offline

Activity: 173
Merit: 100


View Profile
January 17, 2014, 08:25:15 PM
 #2615

I'm gonna sit on mine. Once GPU mining is out of the picture, I think the price will rise, as it will still be an attractive currency for CPU purists. Hoping for .001 or better :D
cbuchner1 (OP)
Hero Member
*****
Offline Offline

Activity: 756
Merit: 502


View Profile
January 17, 2014, 08:27:03 PM
 #2616

I'm gonna sit on mine. Once GPU mining is out of the picture, I think the price will rise, as it will still be an attractive currency for CPU purists. Hoping for .001 or better :D

If you want this to improve, get involved in Yacoin client (wallet) development. They really need some upgrades/improvements to their client and to their Piece of Shit... pardon, Proof of Stake system ;)

Christian
djm34
Legendary
*
Offline Offline

Activity: 1400
Merit: 1050


View Profile WWW
January 17, 2014, 09:54:29 PM
 #2617

There was a breaking change today regarding the format of launch configs for David Andersen's kernels.

This has advantages because it allows more fine-grained control over launch configs and memory allocation. It is, however, a nightmare for the maintainers of the Google spreadsheets ;)

Code:
for scrypt-jane the equivalent config to B x W is B x 4*W and for scrypt it is 4*B x W
so e.g for Yacoin replace -l K2x8 with -l K2x32
and for Litecoin -l K2x32 becomes -l K8x32.

this affects K,T,X kernel configs only (these are derived from David's code) - and only when you run a
github version from today or later.

or you can simply autotune again to find a good config, saving you the hassle of converting...

The main advantage to this is that users of 1GB cards can now use up to 10% more memory
than before (memory is now allocated in increments of 32MB for Yacoin - previously it was 128MB).



I just did a few tests with the Windows version of the new config, and it does not seem to work exactly that way.
I have a 780 Ti; with the old config I use T15x32 (480 warps).
But with the new config I only get 962 warps (shouldn't I get 480x4?), meaning I can't go to 60x32 (which isn't reported as a working mode) but only to 60x16 (which gives a slightly better hash rate than the original T15x32).

Concerning scrypt-jane, the autotuning seems broken on Windows (I haven't tried it on Linux yet).
The autotuning takes a very long time with very unstable GPU usage (it seems to oscillate constantly between 0 and 100), giving very inconsistent results.

None of the results goes higher than 0.7 khash/s, while it should be around 1.65 khash/s for the best modes (on Windows).
Thanks for your help (and the good work)


djm34 facebook page
BTC: 1NENYmxwZGHsKFmyjTc5WferTn5VTFb7Ze
Pledge for neoscrypt ccminer to that address: 16UoC4DmTz2pvhFvcfTQrzkPTrXkWijzXw
bathrobehero
Legendary
*
Offline Offline

Activity: 2002
Merit: 1051


ICO? Not even once.


View Profile
January 17, 2014, 10:17:59 PM
 #2618

The autotuning takes a very long time with very unstable GPU usage (it seems to oscillate constantly between 0 and 100), giving very inconsistent results.

The resolution (or density, or whatever you want to call it) of the acceptable kernel configs is 4 times higher, therefore autotune has 4 times as many configs to test, so it will take a while.

In my case, autotune earlier only picked K13x1 (13 warps) while the manual K14x1 yielded much better results, so this time I also tried some manual configs to see whether it really found the best one, and apparently it did, because it came up with K59x1:
Code:
setup   kH/s   VRAM (MB)
K14x4   2.55   1820
K15x4   2.57   1948  jitters
K58x1   2.64   1884
K59x1   2.68   1916
K60x1   2.62   1948  jitters
K61x1   2.27   1948
K8x7    2.49   1820
K10x6   2.60   1948  jitters
K6x10   2.50   1916  jitters
K2x28   2.27   1820
K2x29   2.26   1884
K2x30   2.24   1948  jitters
*jitters: the hash rate fluctuates noticeably.

TL;DR: Autotune is slower, but better.

Not your keys, not your coins!
cbuchner1 (OP)
Hero Member
*****
Offline Offline

Activity: 756
Merit: 502


View Profile
January 17, 2014, 10:36:09 PM
Last edit: January 17, 2014, 11:06:43 PM by cbuchner1
 #2619

I just did a few tests with the Windows version of the new config, and it does not seem to work exactly that way.
I have a 780 Ti; with the old config I use T15x32 (480 warps).
But with the new config I only get 962 warps (shouldn't I get 480x4?)

Ah yes, I should have raised the total warp limit from 1024 to 4096... this mainly affects scrypt, because its memory limit effectively just shrank to 1 GB (from 4 GB previously). Oops ;-) Will fix tomorrow...

EDIT: the commit is in. It's 11:45 PM, so it isn't actually "tomorrow" yet.

Also, some of you might want to check whether it works for you to specify --algo=scrypt:2048 (or whatever "N" value it is currently at) to mine VertCoin. You can now give the N parameter directly if needed (not the N-factor, as with scrypt-jane).
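
(For reference, a small sketch of the distinction, assuming the usual scrypt-jane convention N = 2^(Nfactor + 1); the helper name is illustrative.)
Code:
# --algo=scrypt:2048 passes N directly (N = 2048).
# scrypt-jane coins are parameterised by an N-factor instead,
# and in scrypt-jane N = 2^(Nfactor + 1).
def scrypt_jane_n(nfactor):
    return 1 << (nfactor + 1)

print(scrypt_jane_n(11))  # 4096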

bathrobehero
Legendary
*
Offline Offline

Activity: 2002
Merit: 1051


ICO? Not even once.


View Profile
January 18, 2014, 12:07:52 AM
Last edit: January 18, 2014, 01:44:22 AM by bathrobehero
 #2620

I can't pass arguments to --algo=scrypt; it leads to show_usage_and_exit (Try `cudaminer --help' for more information.).

Edit: Applecoin just entered N-factor 11 territory, so it's still young for us, but it will enter:
N 12 on 2014.02.22;
N 13 on 2014.04.12;
N 14 on 2014.05.31;
N 15 on 2014.12.11

...if I didn't screw it up (chainstarttime is 1384720832; the derivation is sketched below).
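
(A minimal Python sketch of how such a schedule can be derived, assuming Applecoin uses a YACoin-style GetNfactor() with the chainstarttime quoted above; the min/max N-factor clamps and the search helper are illustrative.)
Code:
from datetime import datetime, timezone

CHAIN_START = 1384720832  # chainstarttime quoted above

def get_nfactor(timestamp, chain_start=CHAIN_START):
    # YACoin-style GetNfactor(): the N-factor grows with time since chain start.
    s = timestamp - chain_start
    if s <= 0:
        return 4
    l = 0
    while (s >> 1) > 3:
        l += 1
        s >>= 1
    s &= 3
    return max((l * 170 + s * 25 - 2320) // 100, 0)

def first_time_at(target, start=CHAIN_START):
    # First timestamp at which the N-factor reaches `target`.
    t, step = start, 1 << 25
    while step:
        while get_nfactor(t + step) < target:
            t += step
        step >>= 1
    return t + 1

for n in (12, 13, 14, 15):
    print(n, datetime.fromtimestamp(first_time_at(n), tz=timezone.utc).date())
# -> 12 2014-02-22, 13 2014-04-12, 14 2014-05-31, 15 2014-12-11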

Not your keys, not your coins!