Bitcoin Forum

Bitcoin => Mining => Topic started by: keten on June 11, 2011, 07:38:56 PM



Title: supercomputer GPUs vs consumer GPUs?
Post by: keten on June 11, 2011, 07:38:56 PM
So take a look at the M2050:

http://www.nvidia.com/docs/IO/105880/DS_Tesla-M2090_LR.pdf
1030 single precision GFLOPs
448 CUDA cores
3 GB GDDR5
148 GB/s memory bandwidth
cost: $2400

vs.

http://www.geeks3d.com/20110324/nvidia-geforce-gtx-590-officially-launched-specifications-and-reviews-hd-6990-is-still-faster/
GeForce GTX 590
2486 single precision GFLOPs
1024 CUDA cores
3 GB GDDR5
327 GB/s memory bandwidth
cost: $699

So why do people use supercomputer GPUs? They seem worse in every way, specification-wise. Is there any way they'd be better than the consumer card for Bitcoin mining?




Title: Re: supercomputer GPUs vs consumer GPUs?
Post by: Basiley on June 11, 2011, 07:53:13 PM
Single-precision shaders == fail.
Scalar architecture == fail.
So, in short, there's nothing interesting in NVIDIA except the easy-to-start/use SDK.
But that's built around a proprietary, slowly advancing API called CUDA, which isn't worth investing development effort around/for.


Title: Re: supercomputer GPUs vs consumer GPUs?
Post by: Rob P. on June 11, 2011, 07:53:30 PM
No one serious is using an nVidia to mine.  There's no point; nVidias blow at mining compared to ATI cards of the same cost.

SHA-256 is built on integer operations, which ATI cards excel at.
nVidia cards are better at floating-point operations, which mapping polygons onto a screen requires lots of...

Basically, neither of those cards is great at mining.
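To illustrate the point, here's a rough Python sketch of one SHA-256 compression round (illustrative only, not anything a miner actually runs): the core is nothing but 32-bit rotates, XORs, ANDs and adds.

Code:
MASK = 0xFFFFFFFF  # keep every value in 32-bit range

def rotr(x, n):
    # rotate a 32-bit word right by n bits
    return ((x >> n) | (x << (32 - n))) & MASK

def ch(e, f, g):
    return (e & f) ^ (((~e) & MASK) & g)

def maj(a, b, c):
    return (a & b) ^ (a & c) ^ (b & c)

def sha256_round(state, k, w):
    # one compression round; k = round constant, w = message schedule word
    a, b, c, d, e, f, g, h = state
    s1 = rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25)
    t1 = (h + s1 + ch(e, f, g) + k + w) & MASK
    s0 = rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22)
    t2 = (s0 + maj(a, b, c)) & MASK
    return ((t1 + t2) & MASK, a, b, c, (d + t1) & MASK, e, f, g)

Not a single floating-point operation in there, which is why the raw GFLOPS numbers in the first post don't tell you much about hash rate.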


Title: Re: supercomputer GPUs vs consumer GPUs?
Post by: bcpokey on June 11, 2011, 07:56:29 PM
Is OP asking about using these cards for mining or for general use? No one mines on nVidia cards, regardless of their specs.

In general, though, people use the supercomputer GPUs for calculations that need more than single precision.


Title: Re: supercomputer GPUs vs consumer GPUs?
Post by: Basiley on June 11, 2011, 08:00:20 PM
Using supercomputers for single-precision computation is a joke, sure. Look at how the usual LINPACK/Livermore libraries get used, for example ;)
Even double precision isn't enough anymore, and a new quad-precision FP format was introduced both for Fortran (in 2003) and for other languages.


Title: Re: supercomputer GPUs vs consumer GPUs?
Post by: keten on June 11, 2011, 08:07:02 PM
Hmmm, are nVidias really that bad?

I have the opportunity to use a setup with 22 GB RAM and 2 x NVIDIA Tesla “Fermi” M2050 GPUs. Is there really no way to get any kind of comparable mining performance out of that, compared to just any old ATI card?


Title: Re: supercomputer GPUs vs consumer GPUs?
Post by: Basiley on June 11, 2011, 08:10:04 PM
Hmmm, are nVidias really that bad?

I have the opportunity to use a setup with 22 GB RAM and 2 x NVIDIA Tesla “Fermi” M2050 GPUs. Is there really no way to get any kind of comparable mining performance out of that, compared to just any old ATI card?

They're good for rendering, i.e. in games,
especially in complex scenes with cool lighting and huge view distances.
But for GPGPU usage they suck,
and they only stay in this business because ATI/AMD is doing it wrong.


Title: Re: supercomputer GPUs vs consumer GPUs?
Post by: bcpokey on June 11, 2011, 08:11:53 PM
You might be looking at about 100 MHash/sec per card? Not sure. But that's about the same MHash as an old 4850 ($50 AMD card).
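Back-of-the-envelope, using only the numbers quoted in this thread (M2050 at ~$2400 and maybe ~100 MHash/s, 4850 at ~$50 and roughly the same), a quick Python comparison of hash rate per dollar:

Code:
# rough MHash/s per dollar from the figures quoted above (estimates, not benchmarks)
cards = {
    "Tesla M2050": (100, 2400),   # (MHash/s, USD)
    "Radeon 4850": (100, 50),
}
for name, (mhash, price) in cards.items():
    print(f"{name}: {mhash / price:.3f} MHash/s per dollar")

So the Tesla works out to roughly 0.04 MHash/s per dollar versus about 2 for the old ATI card, if those estimates hold.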