Bitcoin Forum
Author Topic: [About CPU coins] Intel unveils 1 teraflop chip with 50-plus cores  (Read 1456 times)
Gabi (OP)
Legendary
*
Offline Offline

Activity: 1148
Merit: 1008


If you want to walk on water, get out of the boat


View Profile
November 18, 2011, 03:25:20 PM
 #1

http://seattletimes.nwsource.com/html/technologybrierdudleysblog/2016775145_wow_intel_unveils_1_teraflop_c.html

http://www.xbitlabs.com/news/cpu/display/20111115163857_Intel_Shows_Off_Knights_Corner_MIC_Compute_Accelerator.html

I want one, then mining CPU coins will get a BOOOOOOST

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
November 18, 2011, 03:33:39 PM
Last edit: November 18, 2011, 03:46:16 PM by DeathAndTaxes
 #2

Its architecture looks more "GPU-like". It isn't designed to run an operating system but rather to co-exist w/ a general-purpose CPU and act as a "math accelerator". Very parallel architecture.

Remember, there is no such thing as a "CPU coin"; that is a misnomer. There are simply some coins that require a lot of L1 cache and others that don't. A CPU w/ insufficient L1 cache will perform poorly; a GPU w/ sufficient L1 cache will perform well.
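To put a number on that: for the memory-hard scheme behind these coins (scrypt, which comes up later in the thread), the working set is easy to estimate, since the scratchpad is 128·N·r bytes. A quick sketch; the N=1024, r=1 parameters are the values used by the early scrypt coins, and the cache sizes are typical 2011 figures, not from the article:

```python
# Estimate the scrypt scratchpad ("working set") and check which cache level
# it fits in. 128 * N * r bytes is the size of scrypt's V array, which must
# sit in fast memory for full-speed hashing.

def scrypt_scratchpad_bytes(N, r):
    return 128 * N * r

pad = scrypt_scratchpad_bytes(N=1024, r=1)  # params of the early scrypt coins
print(f"scratchpad: {pad // 1024} KiB")     # 128 KiB

caches = {"L1 (32 KiB)": 32 * 1024, "L2 (256 KiB)": 256 * 1024}
for name, size in caches.items():
    print(f"fits in {name}: {pad <= size}")
```

So the per-hash working set spills out of a typical L1 into L2, which is exactly why per-core cache size, not raw core count, decides the winner.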

Lastly, there is nothing on its integer performance. Hell, it might not even have an integer ALU, instead being a dedicated double-precision floating-point engine. In that case it would be worthless for most crypto work.

Still very cool technology.  Insane power densities. Awesome tech porn.  I loved the watercooled supercomputing chassis.  Interesting that the internal piping looks like rigid copper, not flexible tubing.  
tacotime
Legendary
*
Offline Offline

Activity: 1484
Merit: 1005



View Profile
November 18, 2011, 03:36:01 PM
 #3

PCIe interface --> slow memory transfer

The only real advantage over GPUs is the x86 instruction set in the cores, which may allow for easy porting of code to the offboard "CPUs", but really, this is basically a GPU with a slightly different architecture.  Many AMD GPUs are already faster than this; for instance, the 5970 and 6990 can pull 1 TFLOPS.

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
November 18, 2011, 03:42:51 PM
Last edit: November 18, 2011, 03:56:55 PM by DeathAndTaxes
 #4

Quote
PCIe interface --> slow memory transfer

Well, it depends on how they configure it and how many lanes. It could be multiple chips & memory on a single card with an internal PCIe switch.  While not ultra fast, it would still be pretty fast.  PCIe 3.0 w/ 16 lanes is 16GB/s.  The chip could support more than 16 lanes.  
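A quick sanity check of that 16GB/s figure (the 8 GT/s rate and 128b/130b encoding are the PCIe 3.0 spec values):

```python
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b line coding, so each lane
# carries about 0.985 GB/s of payload in each direction.
def pcie3_gb_per_s(lanes):
    raw_gt_per_s = 8.0        # giga-transfers per second, per lane
    efficiency = 128 / 130    # 128b/130b encoding overhead
    return lanes * raw_gt_per_s * efficiency / 8  # 8 bits per byte

print(f"x16: {pcie3_gb_per_s(16):.2f} GB/s")  # ~15.75 GB/s, i.e. the quoted 16GB/s
```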

Quote
The only real advantage over GPUs is the x86 instruction set in the cores, which may allow for easy porting of code to the offboard "CPUs", but really, this is basically a GPU with a slightly different architecture.  

I agree.  The line between CPU & GPU is getting blurred: CPUs are becoming more GPU-like, hybrid APUs are on the rise, and GPUs are gaining general-purpose computing and moving to more complex "shaders" (more like processing engines).

This would have an advantage over a GPU in that all of the die is dedicated to computing.  The average GPU "wastes" about 30% of its die on non-shader functionality.  Of course GPUs are so powerful that they overcome this inefficiency w/ pure brute force, but a dedicated compute engine could be very interesting.  

Quote
Many AMD GPUs are already faster than this; for instance, the 5970 and 6990 can pull 1 TFLOPS.

Well, the 5970 is only 0.98 TFLOPS in DP.  The 6990 is 1.278 TFLOPS, but that is w/ 2 chips.
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
November 18, 2011, 03:51:32 PM
 #5



This is pretty cool.  In just 15 years: from 5KW for 1 TFLOP down to 20W.

This is one area where it will destroy GPUs (which are just starting to make their way into supercomputers).

A 6970 gets 0.675 TFLOPS (double precision).  That works out to 370W per TFLOP.  While a lot better than 5000W, it can't even compete w/ 20W.  Lower wattage not only means lower electrical cost, it means higher densities: you can put more chips in the same rack given the same cooling capacity.

In this case, almost 15x as much computing power in the same rack space.  Grin

On edit:  looks like I was wrong. The 20W in the article isn't the consumption of the Intel chip; it is the goal of 20W per TFLOP for an exascale supercomputer by 2018 (~100x the current fastest supercomputer).  20W per TFLOP is needed to avoid requiring a nuclear reactor to run your exascale computer.  Just for fun: AMD's top-of-the-line GPUs get about 370W per TFLOP.  That's 0.37 MW per PFLOP, or 370 MW per exaflop.  OK, so a small nuclear reactor.  Grin
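The arithmetic above, spelled out (a sketch; the ~250W board power for the 6970 is an assumption on my part, not a figure from the thread):

```python
# Scale watts-per-TFLOP up to an exaflop machine (1 EFLOP = 1e6 TFLOP).
def exaflop_power_megawatts(watts_per_tflop):
    total_watts = watts_per_tflop * 1_000_000  # 1e6 TFLOP in one exaflop
    return total_watts / 1_000_000             # watts -> megawatts

w_per_tflop = 250 / 0.675                      # ~370 W/TFLOP, assuming a 250W 6970
print(f"{w_per_tflop:.0f} W/TFLOP -> {exaflop_power_megawatts(w_per_tflop):.0f} MW per exaflop")
print(f"20 W/TFLOP target -> {exaflop_power_megawatts(20):.0f} MW per exaflop")
```

(Conveniently, W per TFLOP and MW per exaflop come out to the same number, since both ratios scale by 10^6.)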

So I wonder what wattage that chip needs.  
tacotime
Legendary
*
Offline Offline

Activity: 1484
Merit: 1005



View Profile
November 18, 2011, 04:19:49 PM
 #6


Quote
So I wonder what wattage that chip needs.

From the cooling on the card (high-speed intake fan, heat pipes) I would guess that it's probably in the range of 150-300W.

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
November 18, 2011, 04:53:44 PM
Last edit: November 18, 2011, 05:04:17 PM by DeathAndTaxes
 #7

I would hope less than that.  It is a 22nm process, which should give roughly 4x the performance per watt of a 45nm chip for the same architecture.
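That 4x figure is just ideal area scaling: transistor density grows with the square of the linear shrink. A sketch (real perf/watt gains also depend on voltage and frequency scaling, so treat this as an upper bound):

```python
# Ideal density scaling between process nodes: area per transistor shrinks
# with the square of the feature size, so 45nm -> 22nm is roughly 4x.
def density_scaling(old_nm, new_nm):
    return (old_nm / new_nm) ** 2

print(f"45nm -> 22nm: {density_scaling(45, 22):.2f}x")  # ~4.18x
```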

Granted, we have nothing to compare it to directly, but AMD's 45nm GPUs get ~350W per TFLOP, and that includes a lot of non-computational components like high-speed video RAM and render units, so one would hope a dedicated chip would have higher performance per watt, all things being equal.

If it is above 100W, that is disappointing.  AMD's 7000 series (28nm) chips will likely be in the ballpark of 150W per TFLOP (DP), and that isn't a "pure computing" optimized design.
Gabi (OP)
Legendary
*
Offline Offline

Activity: 1148
Merit: 1008


If you want to walk on water, get out of the boat


View Profile
November 18, 2011, 07:29:08 PM
 #8

Quote
Many AMD GPUs are already faster than this; for instance, the 5970 and 6990 can pull 1 TFLOPS.
Yes, but as DeathAndTaxes said, I think this Knights Corner will have more cache than a GPU. Sure, it will be slower, but by having more cache it will be useful for more things.

ElectricMucus
Legendary
*
Offline Offline

Activity: 1666
Merit: 1057


Marketing manager - GO MP


View Profile WWW
November 20, 2011, 12:54:15 PM
 #9

50 cores is nothing.

There are dirt-cheap chips with more than twice that. And as said, fast memory is the issue with scrypt. The time will come when parallel architectures will be usable for this, but not yet.

Memory takes a lot of silicon real estate, too much to pack a decent amount next to many small cores. But once we have sandwiched "3D chips" this will be feasible.