First post to the forum. I just found out about Bitcoin yesterday, and as of about 12 hours ago I have my setup running at around 60 Mhash/s. Very interesting community here; at a glance it looks like very helpful, generous, and greedy folks (simultaneously!).
According to Anandtech's article on the matter (http://www.anandtech.com/show/2977/nvidia-s-geforce-gtx-480-and-gtx-470-6-months-late-was-it-worth-the-wait-/6), it appears that nVidia deliberately crippled their 400-series line by making it skip cycles in GPGPU workloads, thereby justifying the existence of their more expensive workstation cards. A recent forum post (http://bitcointalk.org/index.php?topic=2338.0) mentioned a rumored 680 Mhash/s card, roughly 11x what I can run. The Anandtech article says my card is limited to 1/12 of the FP64 performance of the equivalent workstation chip. Coincidence?
Is there any truth to the notion that this crippling could account for the enormous performance difference between ATI & nVidia for Bitcoin mining?
Or would it be a simple matter of more stream processors -> higher FLOPS -> more Mhash/s?
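Just to put numbers on the coincidence I'm pointing at, here's a quick back-of-the-envelope check in Python. Every figure is one already quoted above; nothing here is measured beyond my own 60 Mhash/s:

[code]
# Back-of-the-envelope check on the ratios quoted above.
my_rate = 60.0         # Mhash/s -- my GTX 460 + Phenom II setup
rumored_rate = 680.0   # Mhash/s -- the card rumored in the linked forum post
fp64_cap = 1.0 / 12.0  # GeForce FP64 cap vs. the workstation chip (per Anandtech)

print(f"rumored / mine: {rumored_rate / my_rate:.1f}x")  # ~11.3x
print(f"uncapped FP64:  {1.0 / fp64_cap:.0f}x")          # 12x
# The two ratios land suspiciously close together -- hence "coincidence?"
[/code]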
As it stands now, a Radeon 5570 with 400 stream processors gets roughly proportionally higher productivity than my GTX 460 with its 336 CUDA cores.
5570 source: http://www.bitcoin.org/wiki/doku.php?id=bitcoin_miners

And if it IS more cores = more hashes, why doesn't this performance translate to other GPGPU applications? (Or does it? I haven't done the research on that.)
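To make the "proportional" claim concrete, here's a hedged sketch: if Mhash/s really scaled with shader count, my GTX 460 numbers would predict the 5570's rate. The 5570 figure below is derived from that assumption, not measured, so plug in the real number from the wiki page above to test it:

[code]
# Hypothetical per-shader scaling check. Only the GTX 460 figures are mine;
# the 5570 prediction assumes perfect scaling with shader count.
gtx460_shaders = 336
gtx460_mhash = 53.0      # midpoint of the 52-54 Mhash/s I see

radeon5570_shaders = 400
predicted_5570 = gtx460_mhash * radeon5570_shaders / gtx460_shaders
print(f"per-shader rate: {gtx460_mhash / gtx460_shaders:.3f} Mhash/s")
print(f"predicted 5570:  {predicted_5570:.0f} Mhash/s")  # ~63 if scaling holds
# Compare against the measured 5570 number on the wiki page linked above.
[/code]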
Setup achieving ~60 Mhash/s:
AMD Phenom II X4 956 BE at 3.4 GHz, stock settings (contributes 5.8-6.0 Mhash/s itself in 4-core mode)
OC'd nVidia GTX 460 1GB, running 840/1680/2050 (contributes 52-54 Mhash/s on m0mchil's OpenCL miner, -w 128 -f 30)
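For completeness, the component rates above do add up to the headline figure (using midpoints of the quoted ranges):

[code]
# Sanity check: CPU + GPU contributions vs. the ~60 Mhash/s headline figure.
cpu = (5.8 + 6.0) / 2   # Phenom II X4 in 4-core mode
gpu = (52 + 54) / 2     # GTX 460 on m0mchil's OpenCL miner (-w 128 -f 30)
print(f"total: {cpu + gpu:.1f} Mhash/s")  # ~58.9, i.e. "around 60"
[/code]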