Author Topic: nVidia Workstation Card Conspiracy  (Read 1792 times)
Akilae (OP)
Newbie
Activity: 7
Merit: 0
January 04, 2011, 04:56:43 PM
#1

First post to the forum. I just found out about Bitcoin yesterday, and as of about 12 hours ago I've had my setup running at around 60 Mhash/s.
Very interesting community here; at a glance it looks like very helpful, generous, and greedy folks (simultaneously! :P).

According to Anandtech's article on the matter,
http://www.anandtech.com/show/2977/nvidia-s-geforce-gtx-480-and-gtx-470-6-months-late-was-it-worth-the-wait-/6,
it appears that Nvidia purposely crippled its 400-series line by making the chips skip cycles on GPGPU work, thereby justifying the existence of their more expensive workstation cards. A recent forum post (http://bitcointalk.org/index.php?topic=2338.0) carried rumors of a 680 Mhash/s card, roughly 11x what I can run. The Anandtech article notes that my card is limited to 1/12 of the FP64 performance of the equivalent workstation chip. Coincidence?

Is there any truth to the notion that this crippling accounts for the enormous performance difference between ATI and nVidia in bitcoin mining?
Or is it a simple matter of more stream processors -> higher FLOPS -> more Mhash/s?
As it stands now, a Radeon 5570 with 400 stream processors gets roughly proportionally higher throughput than my GTX460's 336 cores (quick check below).
5570 source: http://www.bitcoin.org/wiki/doku.php?id=bitcoin_miners
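
A quick sanity check, assuming the wiki's 5570 figure is roughly 60 Mhash/s (teknohog reports ~63 Mhash/s below, so that seems about right):

Code:
shader-count ratio:  400 / 336         ~= 1.19
hash-rate ratio:     ~60 / ~53 Mhash/s ~= 1.13

So per-shader throughput lands in the same ballpark despite the very different architectures.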

And if it IS simply more cores = more hashes, why doesn't that advantage translate to other GPGPU applications? (Or does it? I haven't done the research on that.)

Setup achieving ~60 Mhash/s:
AMD Phenom II X4 965 BE at 3.4 GHz, stock settings (contributes 5.8-6.0 Mhash/s itself in 4-core mode)
OC'd nVidia GTX460 1GB, running 840/1680/2050 MHz (core/shader/memory); contributes 52-54 Mhash/s on m0mchil's OpenCL miner with -w 128 -f 30
Cdecker
Hero Member
Activity: 489
Merit: 505
January 04, 2011, 06:02:53 PM
#2

As far as I know, FLOPS (floating-point operations per second) are not indicative of final performance here, since hashing is a purely integer operation.
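
To make that concrete: the core of SHA-256, which mining runs twice per attempt, is nothing but 32-bit integer adds, rotates, shifts and XORs. A minimal C sketch of the four mixing functions from the SHA-256 specification (FIPS 180-2):

Code:
#include <stdint.h>
#include <stdio.h>

/* 32-bit right-rotate: the workhorse of SHA-256. */
static uint32_t rotr(uint32_t x, unsigned n)
{
    return (x >> n) | (x << (32 - n));
}

/* The four SHA-256 mixing functions -- integer rotates, shifts
   and XORs only; no floating point anywhere. */
static uint32_t Sigma0(uint32_t x) { return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22); }
static uint32_t Sigma1(uint32_t x) { return rotr(x, 6) ^ rotr(x, 11) ^ rotr(x, 25); }
static uint32_t sigma0(uint32_t x) { return rotr(x, 7) ^ rotr(x, 18) ^ (x >> 3); }
static uint32_t sigma1(uint32_t x) { return rotr(x, 17) ^ rotr(x, 19) ^ (x >> 10); }

int main(void)
{
    uint32_t x = 0x6a09e667; /* first SHA-256 initialisation word */
    printf("%08x %08x %08x %08x\n", Sigma0(x), Sigma1(x), sigma0(x), sigma1(x));
    return 0;
}

A card's Mhash/s therefore tracks its integer throughput, not its FLOPS rating.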

teknohog
Sr. Member
Activity: 520
Merit: 253
January 04, 2011, 10:07:46 PM
#3

Aside from the purposeful crippling, there are simply different design choices behind AMD and Nvidia GPUs: AMD leans towards raw integer throughput, Nvidia towards floating point. Both work fine for graphics, but in general-purpose computation you naturally see the difference (a sketch below).
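
One concrete example of that difference, sketched under the assumption of an AMD card exposing the cl_amd_media_ops OpenCL extension: the 32-bit rotate that dominates SHA-256 maps to a single bit-align instruction on AMD's Evergreen chips, while hardware without a native rotate needs two shifts and an OR.

Code:
// OpenCL C sketch; assumes a device supporting cl_amd_media_ops.
#pragma OPENCL EXTENSION cl_amd_media_ops : enable

// Portable right-rotate: compiles to shift + shift + OR on
// hardware without a native rotate instruction.
uint rotr_generic(uint x, uint n)
{
    return (x >> n) | (x << (32u - n));
}

// amd_bitalign concatenates its first two arguments into a 64-bit
// value and shifts right; with both arguments set to x this is a
// right-rotate in a single instruction on Evergreen-class GPUs.
uint rotr_bitalign(uint x, uint n)
{
    return amd_bitalign(x, x, n);
}

Multiply that kind of per-instruction saving by the hundreds of rotates in a double SHA-256, and by AMD's larger ALU count, and the Mhash/s gap stops looking mysterious.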

Interestingly enough, the search for Mersenne primes is an integer venture, but in practice it is fastest with double-precision floating point, because the enormous multiplications involved are done with floating-point FFTs. Currently it is mostly done with CPUs, but the GPU efforts are mainly focused on Nvidia cards:
http://mersenneforum.org/forumdisplay.php?f=92
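
The underlying Lucas-Lehmer test really is pure integer arithmetic. A toy C version, assuming we only care about exponents up to 31 so everything fits in 64 bits (production software like Prime95 replaces the squaring step with floating-point FFTs on multi-million-bit numbers):

Code:
#include <stdint.h>
#include <stdio.h>

/* Lucas-Lehmer: for an odd prime p, 2^p - 1 is prime iff
   s_(p-2) == 0, where s_0 = 4 and s_(k+1) = s_k^2 - 2 (mod 2^p - 1). */
static int lucas_lehmer(unsigned p)
{
    uint64_t m = (1ULL << p) - 1;  /* the Mersenne number 2^p - 1 */
    uint64_t s = 4;
    for (unsigned k = 0; k < p - 2; k++)
        s = (s * s - 2) % m;       /* s*s fits in 64 bits for p <= 31 */
    return s == 0;
}

int main(void)
{
    /* Odd primes up to 31; the test requires a prime exponent. */
    const unsigned primes[] = { 3, 5, 7, 11, 13, 17, 19, 23, 29, 31 };
    for (unsigned i = 0; i < sizeof primes / sizeof primes[0]; i++)
        if (lucas_lehmer(primes[i]))
            printf("2^%u - 1 is prime\n", primes[i]);
    return 0;
}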

For me, one important reason for going with AMD cards is their support for open source. While the open-source drivers for their latest cards are far from complete (at least for OpenCL), AMD provides full hardware documentation, which shows a positive attitude towards open source. As a result, you can get accelerated OpenGL under Linux on PowerPC, for which no binary drivers have ever been released, and you are no longer dependent on the company's whims for future driver releases.

(The wiki could use a little updating: I now get about 63 Mhash/s from the HD5570 after playing with the -w and -v options. I also have an HD5770 that hashes at 164 Mhash/s.)
