Bitcoin Forum
Author Topic: nVidia listings on Mining Hardware Comparison  (Read 28300 times)
deadlizard (Member)
Activity: 112 | Merit: 11
March 31, 2011, 05:25:11 PM  #21

Quote
So is there any poclbm flag adjustment to be made for nVidia cards?

Yeah, -f 1.  :D

btc address:1MEyKbVbmMVzVxLdLmt4Zf1SZHFgj56aqg
gpg fingerprint:DD1AB28F8043D0837C86A4CA7D6367953C6FE9DC

nster (Full Member)
Activity: 126 | Merit: 100
March 31, 2011, 05:42:56 PM  #22
-f 0 is better IMO

167q1CHgVjzLCwQwQvJ3tRMUCrjfqvSznd - Donations are welcome :)  Please be kind if I helped
motherhumper (Newbie)
Activity: 16 | Merit: 0
March 31, 2011, 06:40:20 PM  #23

Quote from: nster
-f 0 is better IMO

lol, -f 0 does not work on nvidia  ;D
grbgout (Newbie)
Activity: 12 | Merit: 0
April 01, 2011, 04:05:57 PM  #24

Quote
I'm guessing these values are in fahrenheit.
You would be guessing wrong. I idle around 50 and can hit 90 under load if I push it hard enough.
Mining coins and boiling water at the same time, just 10 more to go  ;D

That should have been written "hoping" rather than "guessing".  My entire post was poorly authored, and is a prime example of why one shouldn't post whilst sleep deprived.  I certainly knew better; I've never seen PC temperature sensors reported in anything other than C.  I just didn't like the idea of something in one of my machines running so hot.

Quote
I don't know what the wattage draw is, but I have noticed the temperature of the card jump from 58 (idle) to 72 (mining).  I'm guessing these values are in fahrenheit.
I didn't know that applying power could cause a card to refrigerate itself to below room temperature!

Are you from opposite land? "Opposite land: crooks chase cops, cats have puppies... Hot snow falls up."

An increase in temperature (58 to 72) certainly does not imply refrigeration.  The temperature in my room is currently 70 F (21.11 C), but that's probably 'cause it's only 36 F outside (2.22 C)....  So, had those values actually represented F, it would have gone from below room temperature to above it. :)

Quote
nVidia specs this card to run at 69 W.  Also, try to determine if this card runs at nVidia reference clocks.  GPU-Z should do the trick, as well as CPU-Z (Graphics Tab, highest perf level).

Thanks for mentioning the specified wattage; I was procrastinating on looking it up on nVidia's site, and your data prompted me to confirm it.  This set my mind at ease: I was worried that running my GPU at a steady 74 C might damage it, but the specification lists a max temperature of 105 C.
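To keep an eye on it from the command line, nvidia-settings can also query the temperature directly; as far as I know the attribute on these drivers is GPUCoreTemp (treat the exact name as an assumption if you are on a very different driver version):
Code:
# prints the current core temperature in degrees Celsius
$ nvidia-settings --display :0 -q [gpu:0]/GPUCoreTemp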

The card is running at nVidia's reference clocks (per the linked spec); data obtained using the Linux nvidia-settings utility:
Code:
$ nvidia-settings --display :0 -q [gpu:0]/GPUCurrentProcessorClockFreqs

  Attribute 'GPUCurrentProcessorClockFreqs' (htpc:0[gpu:0]): 1340.
    The valid values for 'GPUCurrentProcessorClockFreqs' are in the range 335 - 2680 (inclusive).
    'GPUCurrentProcessorClockFreqs' can use the following target types: X Screen, GPU.

$ nvidia-settings --display :0 -q [gpu:0]/GPUCurrentClockFreqs

  Attribute 'GPUCurrentClockFreqs' (htpc:0[gpu:0]): 550,1800.
    'GPUCurrentClockFreqs' is a packed integer attribute.
    'GPUCurrentClockFreqs' is a read-only attribute.
    'GPUCurrentClockFreqs' can use the following target types: X Screen, GPU.

The 1340 is the processor clock and 550 is the graphics clock, but I'm not sure what the 1800 represents.  The "attribute" GPUDefault3DClockFreqs has the same 550,1800 value(s).

It should also be noted that I have the 512MB model.

In time I'll be setting up PowerTop to confirm the specified wattage, but it is low priority.

I noticed a GPUMemoryInterface "attribute", which has a value of 128.  Do you think it would be advisable to try setting the worksize flag to match this?

Quote
As far as your -v issue goes, you may have just had a run of bad luck.  Who knows?

Sorry, that was the prime example of poor writing.  When I ran the miner (poclbm-mod as well as poclbm) without the vectors option, I saw values between 21567 and 21579.  I did not let the test run long enough to determine whether I would still see multiple accepts for a single getwork.  Currently, using the vectors option, the most accepts I have ever seen on a single getwork is 5.

Quote from: motherhumper
lol -f 0 does not work on nvidia  ;D

Could you elaborate?  Do you mean that using -f 0 will see no improvement over -f 1?  I've been using -f 0 for quite a while, but didn't pay close enough attention when switching from -f 1 to notice.
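(For anyone tuning this: as I understand it, -f is poclbm's target frames per second, i.e. how often the kernel yields to the display, so lower values mean longer kernel runs and a busier GPU at the cost of desktop responsiveness.  A rough sketch of a full invocation is below; the -d/-w/-f/-v flags are the ones discussed in this thread, while the pool and credential options are placeholders and the exact connection flags depend on your poclbm version.)
Code:
# a minimal sketch; the connection flags are assumptions and vary between poclbm versions
$ python poclbm.py --host=pool.example.com --port=8332 --user=worker --pass=secret -d 0 -w 128 -f 1 -v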

Thanks to whoever updated the wiki with the information I provided; I'm guessing it was urizane.
zhalox (Full Member)
Activity: 176 | Merit: 106
April 20, 2011, 05:37:11 AM  #25

I'm mining with my XFX GeForce GTX 275 at a steady 59 to 61 MHash/s. I underclocked the graphics (core) clock down to 576 MHz and the memory clock down to 775 MHz, and overclocked the processor (shader) clock up to 1502 MHz.

Grinder (Legendary)
Activity: 1284 | Merit: 1001
April 20, 2011, 08:06:03 AM  #26

Quote from: zhalox
I'm mining with my XFX GeForce GTX 275 at a steady 59 to 61 MHash/s. I underclocked the graphics (core) clock down to 576 MHz and the memory clock down to 775 MHz, and overclocked the processor (shader) clock up to 1502 MHz.

I assume you have free electricity? Otherwise you're probably losing money.
jak0b (Newbie)
Activity: 6 | Merit: 0
May 17, 2011, 04:34:19 PM (last edit: May 22, 2011, 01:39:47 PM)  #27

Hi all :)

I am new to this mining stuff. Learned about it yesterday and thought that it could be fun to try out.

Now, after having fooled around a little with both the OpenCL miner and the RPC-CUDA miner (using GUIMiner v2011-05-01), I am able to get 83.5-84 MHash/s on a GTX 560 Ti (Gainward Phantom 2) with rpcminer-cuda and the flags -gpugrid=128 -gputhreads=768.

I am running Win7 x64 and Nvidia driver package v270.61.

The card is at factory default speeds: GPU 822.5 MHz / Mem 2004 MHz / Shader 1645 MHz (170 W TDP).


Running the CUDA miner with -gpugrid=64 -gputhreads=384, which matches the GPU's actual texture units/shader cores, yields about 80.5 MHash/s. Why doubling those numbers works better, I can't explain.
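For anyone wanting to reproduce this, the full command line looks roughly like the sketch below; the -gpugrid/-gputhreads flags are the ones quoted above, while the RPC connection flags (-url, -user, -password) are from memory and may differ between rpcminer builds:
Code:
rem a minimal sketch, assuming rpcminer-cuda.exe and a local bitcoind with RPC enabled
rpcminer-cuda.exe -url=http://127.0.0.1:8332 -user=rpcuser -password=rpcpass -gpugrid=128 -gputhreads=768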

The OpenCL miner has caused my graphics driver to stop responding every time the miner is stopped. It usually recovers itself, but once I had to hard-reset the system to get my screen back. It also runs slower than the CUDA miner.

EskimoBob (Legendary)
Activity: 910 | Merit: 1000
July 03, 2011, 07:17:56 PM (last edit: July 03, 2011, 08:48:13 PM)  #28

You can also add a 9800GT (no OC):

Code:
|-
| 9800GT || 26 ||   ||   || 1500  || 112 || poclbm-mod.py with -w 64 -f 200 -d 0

I do some work on this PC at the same time, so you can probably squeeze more MHash out of it.

I tested with the following settings: -w 64 -f 30 -d 0, and it can do 27.2 MHash/s.


While reading what I wrote, use the friendliest and most relaxing voice in your head.
BTW, things in BTC bubble universes are getting ugly....
PcChip (Sr. Member)
Activity: 418 | Merit: 250
July 04, 2011, 01:30:07 AM  #29

You can add my stats; I don't want to sign up:

8800GT: 25 MH/s
GTX570 at stock speeds: ~115 MH/s
GTX570 at ~860 MHz: ~130 MH/s

Legacy signature from 2011: 
All rates with Phoenix 1.50 / PhatK
5850 - 400 MH/s  |  5850 - 355 MH/s | 5830 - 310 MH/s  |  GTX570 - 115 MH/s | 5770 - 210 MH/s | 5770 - 200 MH/s