Bitcoin Forum
Author Topic: nVidia Workstation Card Conspiracy  (Read 1633 times)

Activity: 7
Merit: 0

January 04, 2011, 04:56:43 PM

First post to the forum!
I just found out about bitcoin yesterday, and as of about 12 hours ago I have my setup running at ~60 Mhash/s.
Very interesting community here; at a glance it looks like very helpful, generous, and greedy folks (simultaneously! :P ).

According to Anandtech's article on the matter, it appears that nVidia purposely crippled their 400-series line by making it skip cycles in GPGPU workloads, thereby justifying the existence of their more expensive workstation cards. A recent forum post had rumor of a 680 Mhash/s card, roughly 11x what I can run. The Anandtech article says my card is limited to 1/12 of the fp64 performance of the equivalent workstation chip. Coincidence?

Is there any truth to the notion that this could account for the enormous performance difference between ATI and nVidia for bitcoin mining?
Or is it a simple matter of more stream processors -> higher FLOPS -> more Mhash/s?
As it stands now, a Radeon 5570 with 400 stream processors gets roughly proportionally higher productivity than my GTX 460's 336 cores.
5570 source:

And if it IS simply more cores = more hashes, why doesn't this performance translate to other GPGPU applications? (Or does it? I haven't done the research on that.)
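For context on what the workload actually is, here is a toy sketch (Python; the header bytes and the one-zero-byte "difficulty" are made up for illustration, not real block data) of the inner loop every miner runs: double SHA-256 over an 80-byte header while incrementing a nonce.

```python
import hashlib
import struct

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's hash function: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Toy mining loop over a dummy 76-byte header prefix (made-up data, not a
# real block header) with a toy difficulty of one leading zero byte.
header_prefix = b"\x00" * 76
nonce = 0
while True:
    header = header_prefix + struct.pack("<I", nonce)  # 80-byte header
    h = double_sha256(header)[::-1]  # bitcoin displays hashes byte-reversed
    if h[0] == 0:  # "meets target" in this toy setup
        break
    nonce += 1

print(nonce, h.hex())
```

The real network does exactly this, only the target requires far more leading zeros, which is why hash rate is everything.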

Setup achieving 60 Mhash/s:
AMD Phenom II X4 956 BE at 3.4 GHz, stock settings (contributes 5.8-6.0 Mhash/s itself in 4-core mode)
OC'd nVidia GTX 460 1GB, running 840/1680/2050 (contributes 52-54 Mhash/s with m0mchil's OpenCL miner, -w 128 -f 30)
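A back-of-envelope check on what 60 Mhash/s buys (the difficulty value here is my assumption, roughly the early-2011 network level, not a figure from the post):

```python
# Expected time for a 60 Mhash/s setup to solve a block on its own.
# Difficulty is an assumption (~14,000, about the early-2011 level).
difficulty = 14_000
hashrate = 60e6  # hashes per second

# Solving a block at difficulty D takes on average D * 2**32 hashes.
expected_seconds = difficulty * 2**32 / hashrate
print(f"{expected_seconds / 86400:.1f} days on average")  # → 11.6 days on average
```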
Hero Member
Activity: 490
Merit: 503
January 04, 2011, 06:02:53 PM

As far as I know, FLOPS (floating-point operations per second) are not indicative of the final performance, since hashing is a purely integer operation.
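To make that concrete, here is a minimal sketch (Python) of the primitives inside a SHA-256 round: everything is 32-bit integer rotates, shifts, XORs, and adds, with no floating point anywhere.

```python
# The SHA-256 compression function used in bitcoin mining is built entirely
# from 32-bit integer operations, so FLOPS ratings say little about hash rate.
MASK = 0xFFFFFFFF

def rotr(x: int, n: int) -> int:
    """32-bit right rotate, the core primitive of SHA-256."""
    return ((x >> n) | (x << (32 - n))) & MASK

# The two "big sigma" mixing functions from the SHA-256 round schedule:
def big_sigma0(x: int) -> int:
    return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22)

def big_sigma1(x: int) -> int:
    return rotr(x, 6) ^ rotr(x, 11) ^ rotr(x, 25)

print(hex(rotr(0x80000000, 1)))  # → 0x40000000, pure integer bit-twiddling
```

Hardware that can issue many integer rotates per cycle wins here, regardless of its floating-point rating.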

Sr. Member
Activity: 497
Merit: 252
January 04, 2011, 10:07:46 PM

Aside from the purposeful crippling, there are simply different design choices behind AMD and Nvidia GPUs. One focuses more on integer performance, the other on floating point. Both can be used for graphics, but in general purpose computation you obviously see some difference.

Interestingly enough, the search for Mersenne primes is an integer venture, but it is more practically realized using double-precision floating point. Currently it is mostly done with CPUs, but the GPU efforts are mainly focused on Nvidia cards:
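As a sketch of that point: the Lucas-Lehmer test below is pure integer arithmetic, even though production searchers (e.g. GIMPS) speed up the huge squarings with double-precision FFT multiplication.

```python
def lucas_lehmer(p: int) -> bool:
    """Lucas-Lehmer primality test for the Mersenne number 2**p - 1.
    Pure integer arithmetic; production code replaces the big squaring
    with a double-precision FFT for speed."""
    if p == 2:
        return True
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# 2**13 - 1 = 8191 is prime; 2**11 - 1 = 2047 = 23 * 89 is not.
print(lucas_lehmer(13), lucas_lehmer(11))  # → True False
```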

For me, one important reason for going with AMD cards is their support for open source. While the open-source drivers for their latest cards are far from complete (at least for OpenCL), they do provide full documentation, which shows a positive attitude towards open source. As a result, you can get accelerated OpenGL with Linux on PowerPC, for which no binary drivers have been released, and you are no longer dependent on the company's whims for future driver releases.

(The wiki could use a little updating, as I now get about 63 Mhash/s from the HD5570 after playing with the -w and -v options. I also have an HD5770 that hashes at 164 Mhash/s.)
