Bitcoin Forum
December 08, 2016, 04:07:54 PM
News: Latest stable version of Bitcoin Core: 0.13.1
 
Author Topic: Cray Unveils Its First GPU Supercomputer  (Read 2669 times)
Gameover (Member, Activity: 70)
May 25, 2011, 01:26:17 AM  #1

http://www.hpcwire.com/hpcwire/2011-05-24/cray_unveils_its_first_gpu_supercomputer.html

so what does the hashrate of 70 teraflops turn out to be?
Txyru (Member, Activity: 62)
May 25, 2011, 02:26:54 AM  #2

A 5850 does 2 teraflops (or more), so assuming 300 Mhash/s for each 5850, I estimate 10.5 Ghash/s...
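That estimate is simple enough to sanity-check in a few lines. The per-card numbers (roughly 2 TFLOPS peak and 300 Mhash/s for an HD 5850) are the poster's assumptions, and the whole scaling only holds if the machine mines as well per flop as a 5850 does, since FLOPS measure floating-point work while SHA-256 mining is integer work:

```python
# Back-of-envelope hashrate for a 70 TFLOPS machine, scaled from one
# Radeon HD 5850. Assumptions (from the post, not measured here):
#   - one HD 5850 ~ 2 TFLOPS peak
#   - one HD 5850 ~ 300 Mhash/s mining
machine_tflops = 70.0
card_tflops = 2.0
card_mhash = 300.0

equivalent_cards = machine_tflops / card_tflops        # 35 card-equivalents
estimate_ghash = equivalent_cards * card_mhash / 1000  # Mhash/s -> Ghash/s
print(f"~{estimate_ghash:.1f} Ghash/s")                # ~10.5 Ghash/s
```

The real number would likely be lower: Tesla parts trade integer throughput for double-precision floating point, which mining never uses.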
commlinx (Full Member, Activity: 126)
May 25, 2011, 03:12:06 AM  #3

I noticed in the article that the GPUs are Teslas, and while I'm not familiar with that model, presumably they are better at floating point than integer, the same as the consumer Nvidia cards.

Basiley (Jr. Member, Activity: 42)
May 25, 2011, 03:16:18 AM  #4

Nvidia Tesla inside?
No double-precision math support?
What kind of "super" would that be? :/
A home super? :| Like an Octane 3?
Meanwhile, the new Fortran 2003-adapted libraries enjoy quad-precision FP math :-/

p.s.
Cray finally jumps on the hybrid-supercomputer bandwagon :/
noticing the hybrid nature of the five most powerful supers.

p.p.s.
They will probably scrap their FPGA/vector module development soon.
w128 (Newbie, Activity: 14)
May 25, 2011, 03:20:41 AM  #5

Previously, the only difference between certain Tesla models and certain consumer GeForce cards was the lack of video connectors and a (presumably) more stringent binning process to ensure Tesla parts were stable within specific tolerances for 24/7 use.

Syke (Legendary, Activity: 2086)
May 25, 2011, 05:11:21 AM  #6

ROFL. Cray jumped the shark when they decided to use Windows on their HPCs.

Buy & Hold
IlbiStarz (Full Member, Activity: 224)
May 25, 2011, 05:39:03 AM  #7

I thought their "non-GPU" supercomputers were already hitting petaflops? So wouldn't those be faster?

It's better to be pissed off, than to be pissed on.
BTC : 1UgM1rqL9mFtH4PHF8TgvAaceymaKmhmP         LTC : LgCGw2WrRphr94RYS1qXHj2PUuYrTap4vk
FC : 6jc9PEmqxpMSxydfepHtshE4f2jMom1dAJ
allinvain (Legendary, Activity: 2002)
May 25, 2011, 06:31:51 AM  #8

Bah, why didn't they partner up with AMD? They're already a big purchaser of AMD Opteron chips, so why not poke AMD to develop a solution similar to the Teslas? Multiple Cayman cores on one PCB, anyone? Hmm, or is it because Nvidia's CUDA language is easier to code with?


commlinx (Full Member, Activity: 126)
May 25, 2011, 10:19:44 AM  #9

Quote from: allinvain
Hmm, or is it because Nvidia's CUDA language is easier to code with?

My guess is that it's because floating point is more useful for most simulations. I'm sure someone will be able to point out exceptions, but the only two mainstream research applications I've got that support GPUs are a GIS application and an RF (radio frequency) simulation package, where floating point is a natural fit. I'd guess most physics and "real world" simulations would fall into the same category.

The only two GPU applications I've got where integer is better are Bitcoin mining and a password cracker. Outside a few government agencies and security researchers, it's probably not the sort of thing that many people throw a lot of money at.

Gameover (Member, Activity: 70)
May 25, 2011, 02:01:23 PM  #10

Quote from: commlinx
My guess is that it's because floating point is more useful for most simulations. I'm sure someone will be able to point out exceptions, but the only two mainstream research applications I've got that support GPUs are a GIS application and an RF (radio frequency) simulation package, where floating point is a natural fit. I'd guess most physics and "real world" simulations would fall into the same category.

The only two GPU applications I've got where integer is better are Bitcoin mining and a password cracker. Outside a few government agencies and security researchers, it's probably not the sort of thing that many people throw a lot of money at.

Lol, good point. For both of the integer applications, if you have significant funds or influence you can simply use the much cheaper approach of interrogation to get the password, or hire a criminal to steal someone's bitcoins. For floating point, it's impossible to influence a weather pattern or cell tower via political power  Grin
Basiley (Jr. Member, Activity: 42)
May 25, 2011, 03:30:53 PM  #11

Basically because AMD has provided virtually NO support for GPGPU developers [both software and hardware] for years!!
Even finding datasheets is troublesome [and many things are still undisclosed :/].
Nvidia, meanwhile, actually cooperates with developers and even assists in the development itself Tongue
I wish AMD would invest a little more in building a healthy ecosystem around their GPGPU developers/enthusiasts :-|
starting with improving the OpenCL SDK, expanding the developer-support division, and helping people take their first GPGPU steps.
bulanula (Hero Member, Activity: 518)
May 25, 2011, 03:34:24 PM  #12

This has 50 petaflops of power.

More than the 40 petaflops of the whole Bitcoin network.

This could kill the network in a matter of seconds!

Nobody is invincible, including Bitcoin!
Basiley (Jr. Member, Activity: 42)
May 25, 2011, 03:39:26 PM  #13

Quote from: bulanula
This has 50 petaflops of power. More than the 40 petaflops of the whole Bitcoin network. This could kill the network in a matter of seconds!

The trick/lie lies in HOW those "petaflops" are measured Wink
Such a machine hardly reaches that rate on general-purpose math, and it's even less likely done with double precision.
Both of those make this "super" pointless compared to the Bitcoin network.
commlinx (Full Member, Activity: 126)
May 25, 2011, 03:56:52 PM  #14

Quote from: bulanula
This has 50 petaflops of power.

Considering they say it may scale up to that, you should ask for a quote on that configuration. I see a card at $3K per teraflop, which would make it $150M. That was a one-off price for a single PC card, but I'd still assume the full system would cost tens of millions, so if you had that much cash it would be better spent on GPUs or an ASIC for mining.
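A minimal sketch of that napkin math, assuming the quoted $3K-per-teraflop card price scales linearly to 50 petaflops (a real system quote would add interconnect, chassis, and support, so this only sets the order of magnitude):

```python
# Cost of 50 PFLOPS at an assumed $3,000 per TFLOP (single-card
# pricing from the post, not a vendor system quote).
target_pflops = 50
usd_per_tflop = 3_000
total_usd = target_pflops * 1_000 * usd_per_tflop  # PFLOPS -> TFLOPS
print(f"${total_usd:,}")                           # $150,000,000
```

Either way the conclusion stands: at that price, dedicated mining GPUs or an ASIC give far more hashes per dollar than a general-purpose supercomputer.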

Gameover (Member, Activity: 70)
May 25, 2011, 05:06:11 PM  #15

Quote from: commlinx
Considering they say it may scale up to that, you should ask for a quote on that configuration. I see a card at $3K per teraflop, which would make it $150M. That was a one-off price for a single PC card, but I'd still assume the full system would cost tens of millions, so if you had that much cash it would be better spent on GPUs or an ASIC for mining.

Again, it's much cheaper to simply hire a team to find the people with the most bitcoins and extort them.