Bitcoin Forum
Author Topic: Possible leaked picture of Radeon 7990?  (Read 3891 times)
waterboyserver (OP)
Full Member
Activity: 126 | Merit: 100
April 16, 2012, 03:02:22 AM  #1

This is eye-catching: (Would there not be a PCI-E controller?)

http://www.tweakpc.de/forum/ati-grafikkarten/86500-erstes-bild-radeon-hd-7990-aufgetaucht.html

MrTeal
Legendary
Activity: 1274 | Merit: 1004
April 16, 2012, 03:25:34 AM  #2

That's the first thing I noticed too. I suppose a PCIe switch could be on the backside, but it really looks like all the diff pairs go to the first GPU. Either it's a really good fake, or one of the GPUs handles the PCIe switching.
drakahn
Hero Member
Activity: 504 | Merit: 500
April 16, 2012, 03:27:42 AM  #3

posted " 01.04.2012, 10:54"

april fools?

14ga8dJ6NGpiwQkNTXg7KzwozasfaXNfEU
waterboyserver (OP)
Full Member
Activity: 126 | Merit: 100
April 16, 2012, 03:28:09 AM  #4

One would think the memory chips would be fewer and denser. The VRM appears somewhat replicated, there is no CFX controller, and there are two SLI connectors... hmmm
MrTeal
Legendary
Activity: 1274 | Merit: 1004
April 16, 2012, 03:54:26 AM  #5

Quote from: waterboyserver on April 16, 2012, 03:28:09 AM
One would think the memory chips would be fewer and denser. The VRM appears somewhat replicated, there is no CFX controller, and there are two SLI connectors... hmmm

It's definitely fake in my mind. Look at the screw hole between the memory chips. For one, having it that close to the BGAs would be really poor design anyway, but in this case the hole is so close that it actually eats into part of the chip.
1l1l11ll1l
Legendary
Activity: 1274 | Merit: 1000
April 16, 2012, 04:39:36 PM  #6

Doesn't AMD always have one GPU upside down?

And is it just me, or is the pink capacitor below the right GPU a bit flat on top? It looks a little cut off via Photoshop.


rjk
Sr. Member
Activity: 448 | Merit: 250
1ngldh
April 16, 2012, 04:43:59 PM  #7

Quote from: MrTeal on April 16, 2012, 03:25:34 AM
That's the first thing I noticed too. I suppose a PCIe switch could be on the backside, but it really looks like all the diff pairs go to the first GPU. Either it's a really good fake, or one of the GPUs handles the PCIe switching.

I always wondered why they did that clumsy bodge with the PCIe switch on dual-GPU cards. It would only make sense for the GPUs to communicate with each other directly, and perhaps even to show up as a single device. I suppose the switch would be easier to design in, and wouldn't need special GPU support, but it just seems natural to delete it.

Mining Rig Extraordinaire - the Trenton BPX6806 18-slot PCIe backplane [PICS] Dead project is dead, all hail the coming of the mighty ASIC!
waterboyserver (OP)
Full Member
Activity: 126 | Merit: 100
April 16, 2012, 05:08:47 PM  #8

There are boards that communicate directly from PCI-E to each GPU without a CFX bridge. MSI (as well as Asus) made/designed a board or two a few years back using two mobile Radeons, each using 8 of the 16 PCI-E lanes through older MXM slots. Asus actually sold one (Trinity) with three MXM slots for triple CFX on one PCI-E board. Nevertheless, mobile GPUs may not be the best choice for many obvious reasons, cost being perhaps the major one.

MSI Germanium http://adler-pc.com/news/msi.mxm03b.jpg

rjk
Sr. Member
Activity: 448 | Merit: 250
1ngldh
April 16, 2012, 05:13:14 PM  #9

Quote from: waterboyserver on April 16, 2012, 05:08:47 PM
There are boards that communicate directly from PCI-E to each GPU without a CFX bridge. MSI (as well as Asus) made/designed a board or two a few years back using two mobile Radeons, each using 8 of the 16 PCI-E lanes through older MXM slots. Asus actually sold one with three MXM slots for triple CFX on one PCI-E board. Nevertheless, mobile GPUs may not be the best choice for many obvious reasons, cost being perhaps the major one.

MSI Germanium http://adler-pc.com/news/msi.mxm03b.jpg

But even then, that is still using PCIe lanes. I wonder if a custom protocol on a direct, short link would be even faster (think HyperTransport, or its equivalents).

Mining Rig Extraordinaire - the Trenton BPX6806 18-slot PCIe backplane [PICS] Dead project is dead, all hail the coming of the mighty ASIC!
DeathAndTaxes
Donator
Legendary
Gerald Davis
Activity: 1218 | Merit: 1063
April 16, 2012, 05:14:38 PM  #10

Quote from: rjk on April 16, 2012, 04:43:59 PM
I always wondered why they did that clumsy bodge with the PCIe switch on dual-GPU cards. It would only make sense for the GPUs to communicate with each other directly, and perhaps even to show up as a single device. I suppose the switch would be easier to design in, and wouldn't need special GPU support, but it just seems natural to delete it.

It simplifies drivers and OS interaction.

It is two complete, independent, and identical GPUs (from the OS's point of view).

If you had no PCIe switch, then either:
a) you need some proprietary switch to route I/O to the proper GPU, or

b) you have one GPU be the "master" and the other GPU the "slave": only one GPU is connected to the PCIe port and relays I/O to the second GPU.

Neither is really a good solution. I don't see the switch as clumsy. You have two independent components on a single board which need access to the CPU via a single PCIe port. 1 port -> switch -> 2 ports, each with a GPU connected.

I don't think it will ever be "deleted". Remember, 7990s are simply two 7970s. There would be no need for the 7970 to interface with a PCIe adapter/controller via a proprietary solution.
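
To illustrate the point about driver/OS simplicity, here is a toy model (plain Python, with invented names only, not any real PCI enumeration API) of what the host would see in the switched layout versus the master/slave alternative:

```python
# Toy model: what the host "sees" under the two layouts described above.
# Structure and names are invented for illustration; this is not a real PCI API.

def visible_endpoints(node, found=None):
    """Walk a tiny device tree and collect every endpoint the OS can enumerate."""
    if found is None:
        found = []
    if node["kind"] == "gpu":
        found.append(node["name"])
    for child in node.get("children", []):
        visible_endpoints(child, found)
    return found

# 1 port -> switch -> 2 ports, each with a GPU: the OS sees two ordinary, identical GPUs.
switched = {"kind": "switch", "name": "bridge",
            "children": [{"kind": "gpu", "name": "GPU0"},
                         {"kind": "gpu", "name": "GPU1"}]}

# Master/slave: only the master sits on the PCIe port; the slave hides behind a
# proprietary relay link, so the driver would have to know about it some other way.
master_slave = {"kind": "gpu", "name": "GPU0 (master)", "children": []}

print(visible_endpoints(switched))      # ['GPU0', 'GPU1']
print(visible_endpoints(master_slave))  # ['GPU0 (master)'] -- the slave is invisible to the OS
```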
DeathAndTaxes
Donator
Legendary
Gerald Davis
Activity: 1218 | Merit: 1063
April 16, 2012, 05:17:13 PM  #11

Quote from: rjk on April 16, 2012, 05:13:14 PM
But even then, that is still using PCIe lanes. I wonder if a custom protocol on a direct, short link would be even faster (think HyperTransport, or its equivalents).

PCIe 3.0 with 16 lanes is roughly 16GB/s in each direction, non-blocking. What scenarios would require >32GB/s? For that 0.0000000000000000000000000000001% of the time, does the cost of a proprietary solution outweigh the simplicity and economies of scale of using off-the-shelf components?
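
For reference, here is the arithmetic behind that figure as a quick back-of-envelope sketch in Python, using the nominal PCIe 3.0 line rate (8 GT/s per lane) and 128b/130b encoding:

```python
# Back-of-envelope PCIe 3.0 x16 bandwidth (nominal, ignoring packet overhead).
line_rate_gtps = 8.0      # PCIe 3.0: 8 GT/s per lane
encoding = 128 / 130      # 128b/130b encoding: ~1.5% overhead
lanes = 16

per_lane_gbit = line_rate_gtps * encoding        # ~7.88 Gbit/s usable per lane
per_direction_gbyte = per_lane_gbit * lanes / 8  # ~15.75 GB/s in each direction
aggregate_gbyte = 2 * per_direction_gbyte        # ~31.5 GB/s with both directions saturated

print(f"{per_direction_gbyte:.2f} GB/s per direction, {aggregate_gbyte:.2f} GB/s aggregate")
```

That is where the roughly 16 GB/s each way (just over 31 GB/s aggregate) figure comes from; real-world throughput lands a bit lower once packet overhead is counted.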
rjk
Sr. Member
Activity: 448 | Merit: 250
1ngldh
April 16, 2012, 05:20:00 PM  #12

Quote from: rjk on April 16, 2012, 05:13:14 PM
But even then, that is still using PCIe lanes. I wonder if a custom protocol on a direct, short link would be even faster (think HyperTransport, or its equivalents).

Quote from: DeathAndTaxes on April 16, 2012, 05:17:13 PM
PCIe 3.0 with 16 lanes is roughly 16GB/s in each direction, non-blocking. What scenarios would require >32GB/s? For that 0.0000000000000000000000000000001% of the time, does the cost of a proprietary solution outweigh the simplicity and economies of scale of using off-the-shelf components?

Good points. For sure, it wouldn't apply in the slightest to Bitcoin mining. But all kinds of odd stuff is needed for other GPU compute applications. What if someone decided to fit one SHA-256 core on each GPU and force them to communicate with each other in order to compute the double SHA-256 needed for Bitcoin? That would use a lot of bandwidth, although I don't know whether it would use the GPU resources better/faster or not.
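
To make the "double SHA-256" concrete: Bitcoin's proof-of-work hash is just SHA-256 applied twice to the 80-byte block header. A minimal sketch in Python's standard hashlib (the header bytes below are dummy values, purely for illustration):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's proof-of-work hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# A block header is 80 bytes: version, previous block hash, merkle root,
# timestamp, difficulty bits, nonce. Use a dummy all-zero header here.
header = bytes(80)
print(double_sha256(header)[::-1].hex())  # shown byte-reversed, as block explorers display hashes
```

If the 32-byte result of the first pass had to hop between GPUs for every nonce tried, even a modest 1 GH/s would already imply roughly 32 GB/s of link traffic (1e9 x 32 bytes), which is the sense in which splitting the two passes across GPUs would eat bandwidth.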

Mining Rig Extraordinaire - the Trenton BPX6806 18-slot PCIe backplane [PICS] Dead project is dead, all hail the coming of the mighty ASIC!
MrTeal
Legendary
Activity: 1274 | Merit: 1004
April 16, 2012, 05:22:11 PM  #13

If you actually look at it zoomed in, it is pretty obviously faked. For instance, look where the PCIe pairs enter the GPU on the left. That seems pretty normal. Now look at the bottom of the GPU on the right: the same traces enter the GPU as on the left, despite the fact that there are no PCIe lanes coming in, and they just abruptly end at those GDDR5 chips.
Also, on the top left side of the left GPU you can see the fanout of traces going to the two RAM modules. On the right GPU, the fanout is exactly the same, but it just ends at a wall of ceramic caps.

Having the GPU do the switching isn't a terrible idea, but it would mean a whole redesign of the Tahiti die unless a sideport is built in. I haven't heard of them bringing that back, so I assume that if they wanted to do something like this they'd have to redesign Tahiti, and that's just not going to happen for a low-volume part like this.
waterboyserver (OP)
Full Member
Activity: 126 | Merit: 100
April 16, 2012, 05:26:51 PM  #14

It is most definitely a fake image. As far as I know, the 7990 is supposed to have a PLX PEX 8747 (Gen3, 48 lanes, 5 ports) PCI-E switch on the board, which would probably be on the front side, no?

(48 lanes... just imagine 3 GPUs...)
DeathAndTaxes
Donator
Legendary
Gerald Davis
Activity: 1218 | Merit: 1063
April 16, 2012, 05:26:58 PM  #15

Quote from: rjk on April 16, 2012, 05:13:14 PM
But even then, that is still using PCIe lanes. I wonder if a custom protocol on a direct, short link would be even faster (think HyperTransport, or its equivalents).

Quote from: DeathAndTaxes on April 16, 2012, 05:17:13 PM
PCIe 3.0 with 16 lanes is roughly 16GB/s in each direction, non-blocking. What scenarios would require >32GB/s? For that 0.0000000000000000000000000000001% of the time, does the cost of a proprietary solution outweigh the simplicity and economies of scale of using off-the-shelf components?

Quote from: rjk on April 16, 2012, 05:20:00 PM
Good points. For sure, it wouldn't apply in the slightest to Bitcoin mining. But all kinds of odd stuff is needed for other GPU compute applications. What if someone decided to fit one SHA-256 core on each GPU and force them to communicate with each other in order to compute the double SHA-256 needed for Bitcoin? That would use a lot of bandwidth, although I don't know whether it would use the GPU resources better/faster or not.

It all comes down to cost vs. benefit.

The bridge chip is actually "not" required.

The PCIe spec differentiates ports from lanes. It is "possible" to have a PCIe controller route lanes to two ports on the same expansion slot, but support is essentially non-existent: most (all?) motherboards assume 1 slot = 1 port. If a dual-GPU card used that feature, it either wouldn't work or only one GPU would be usable on non-compliant motherboards.

If at some point in the future most motherboards supported routing 2+ ports to the same physical slot, then there would be no need for a bridge chip; each GPU would simply connect via 8 lanes. One irony is that such GPUs wouldn't be usable in PCIe 1x slots (i.e. extenders).
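
As a toy illustration of that last irony (invented structure, not a real enumeration API): a bridge chip negotiates whatever link width the slot offers and still fans out to both GPUs, whereas with a x8/x8 split a x1 extender only wires up lane 0, which belongs to the first GPU's port.

```python
# Toy model: which GPUs remain reachable when the slot/extender only wires up
# the first `wired_lanes` lanes. Purely illustrative, under the assumptions above.

def reachable_gpus(layout: str, wired_lanes: int) -> list:
    if layout == "bridge":
        # The bridge trains at whatever width is available (even x1) and still
        # connects both downstream GPUs.
        return ["GPU0", "GPU1"] if wired_lanes >= 1 else []
    if layout == "bifurcated_x8_x8":
        # Lanes 0-7 belong to GPU0's port, lanes 8-15 to GPU1's port, and the
        # motherboard must support bifurcation in the first place.
        gpus = []
        if wired_lanes >= 1:
            gpus.append("GPU0")
        if wired_lanes >= 9:
            gpus.append("GPU1")
        return gpus
    raise ValueError(layout)

print(reachable_gpus("bridge", 1))            # ['GPU0', 'GPU1'] -- extenders still work
print(reachable_gpus("bifurcated_x8_x8", 1))  # ['GPU0'] at best, and only with board support
```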