Here’s an early look at AMD’s dual-GPU Fiji card

At the PC Gaming Show this evening, AMD CEO Lisa Su took to the stage to show off the Radeon R9 Nano graphics card revealed earlier today. She also cracked open an intimidating-looking aluminum case that turned out to contain a prototype for the as-yet-unnamed dual-GPU Fiji uber-card. Have a look:

Su didn't share any further details of the card, but those Fiji dies with HBM onboard do seem to make for a more compact package overall. Cue rampant speculation until more official details surface for this apparent Fury X2.

Comments closed
    • ronch
    • 5 years ago

    Thinking about how cool my Voodoo3 3000 AGP was.

    Progress is good.

    • ronch
    • 5 years ago

    Judging by that photo of Lisa holding it, I can really tell that thing’s gonna be REALLY, REALLY FAST and REALLY, REALLY ENERGY EFFICIENT.

    /s

    • anotherengineer
    • 5 years ago

    Looks like AMD finally put up some official specs. Funny how they don’t list the GCN revision though.

    [url<]http://www.amd.com/en-us/products/graphics/desktop/r7#[/url<]
    [url<]http://www.amd.com/en-us/products/graphics/desktop/r9#[/url<]
    (at the bottom, under the specs tab)

    Edit: I'd like to see the R7 360 review if the memory really is going to run at 6.5 GHz (GDDR5 effective).

    • anotherengineer
    • 5 years ago

    And the Nano

    [url<]http://www.techpowerup.com/213556/amd-radeon-r9-nano-to-feature-a-single-pcie-power-connector.html[/url<]

    • ronch
    • 5 years ago

    Should’ve been called ‘Furry’.

      • chuckula
      • 5 years ago

      Actually, AMD’s radiator design should help prevent furry situations, since radiators are easier to clean than big on-card HSFs.

        • auxy
        • 5 years ago

        [url=http://www5.picturepush.com/photo/a/14721113/img/FFXIV/ffxiv-06172015-085152.png<]Why prevent furry situations? [/url<]

    • JMccovery
    • 5 years ago

    2015: Radeon R9 Fury Maxx
    2016: Radeon R9 Fury 2Fast2Furious (Mitsubishi edition)
    2017: Radeon R9 Fury III: Tokyo Compute (multi-colored neon load indicator)
    2018: Radeon R9 Fury IV: Faster and Furiouser
    2019: Radeon R9 Fury Fast 5 (penta-GPU Crossfire insanity)
    2020: Radeon R9 Fury VI: Yes, another one
    2021: Radeon R9 Fury VII: Now with CGI-rendered Lisa Su

      • ronch
      • 5 years ago

      Good to see they have a clear roadmap ahead of them.

    • tsk
    • 5 years ago

    Here’s a real good look guys.
    [url<]http://images.anandtech.com/doci/9385/DualFiji2.jpg[/url<]

      • vargis14
      • 5 years ago

      Nice find but you would think they would clean the darn fingerprints off the silicone:)

        • Airmantharp
        • 5 years ago

        I like my fingerprints on the silicone… 😀

          • TheMonkeyKing
          • 5 years ago

          Oh come on, you really prefer that over saline?

      • koaschten
      • 5 years ago

      Anyone besides me confused by the number of RAM chips? It looks like an odd number?!

        • EndlessWaves
        • 5 years ago

        Looks like four on each interposer to me. I’m guessing the white one is just the angle of the lighting rather than a missing chip.

    • DPete27
    • 5 years ago

    Hey Chrispy! Look! Power off the back of the card!

      • Chrispy_
      • 5 years ago

      Another classic example of AMD delays losing them sales.

      The GTX970 was so much smaller and more power-efficient than anything in the AMD portfolio that I got tired of waiting and dumped one into my HTPC. Runs like a dream, and was released almost 10 months ahead of AMD.

      Ten months in a market as competitive and fast-moving as the GPU market is catastrophic.

    • snook
    • 5 years ago

    Over the past three or four months, reading articles here and at PCper, I’ve come to one conclusion. I don’t get people any longer…

    I’ll blame it on being fifty.

      • Anovoca
      • 5 years ago

      We aren’t people, we’re geeks.

        • snook
        • 5 years ago

        I consider myself the same. I just imagine the tedium suffered to even engage in conversation with y’all. Like now…

    • vargis14
    • 5 years ago

    With all the awesome high-MP cameras and telephoto lenses out there, no one could get a better picture of the board?
    I am guessing cameras were not allowed, except maybe the ones on phones.

    Just would like to see a detailed picture of the configuration.

      • Risme
      • 5 years ago

      There’s a slightly better, but still low quality picture of it here: [url<]https://forum.beyond3d.com/threads/amd-pirate-islands-r-3-series-speculation-rumor-thread.55600/page-98#post-1853068[/url<]

      • puppetworx
      • 5 years ago

      [url=http://www.pcper.com/image/view/57800?return=node%2F63238<]PCPer got a pretty good one.[/url<]

    • shank15217
    • 5 years ago

    I would love to buy this thing, but I don’t even know how I would mount the radiator to my case. I wish the Fury X had an air-cooled edition coming out.

    • Anovoca
    • 5 years ago

    Someone put a heatsink on that dirty card, this is a family website.

      • l33t-g4m3r
      • 5 years ago

      That, or Urkel spectacles. – Lol, it kinda looks like a smiley face too.

    • Tristan
    • 5 years ago

    Hope it has DP1.3. Great for 5K @ 60FPS.

    • Meadows
    • 5 years ago

    Actually, I’ve just come to a second realisation. If the rumours are correct about the regular Fury being [i<]anywhere between[/i<] 200-300 W, then how does this prototype even function at all unless it's just a Nano X2?

      • Alexko
      • 5 years ago

      Why wouldn’t it function? The 295X2 has a TDP of about 500W and works just fine.

        • Melvar
        • 5 years ago

        I always wondered about that. The 295X2 would seem to only have 375W of input power between the motherboard connector and the two 8-pin PCIe connectors.

          • chuckula
          • 5 years ago

          The 375W figure is just what the slot plus two 8-pin connectors are rated to deliver under the PCIe specification.
          That’s the minimum you have to support to be considered compliant with the PCIe standard, but many power supplies can deliver well beyond that limit, especially over the 8-pin connectors.
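
          For a rough sense of the numbers, here's a minimal back-of-envelope sketch. It assumes the commonly cited 75W slot and 150W-per-8-pin ratings, and the ~500W board power is just an approximation for illustration, not an official figure:

          # Spec-rated budget for a card with two 8-pin PCIe connectors.
          PCIE_SLOT_W = 75                              # power the x16 slot is rated to supply
          EIGHT_PIN_W = 150                             # rated power per 8-pin PCIe connector

          spec_budget = PCIE_SLOT_W + 2 * EIGHT_PIN_W   # 375 W on paper
          board_power = 500                             # rough 295X2 figure (assumption)

          print("rated budget:", spec_budget, "W")                  # 375 W
          print("beyond rating:", board_power - spec_budget, "W")   # ~125 W, mostly pulled over the 8-pins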

          • ImSpartacus
          • 5 years ago

          The Anandtech review explains it.

          [url<]http://www.anandtech.com/show/7930/the-amd-radeon-r9-295x2-review/2[/url<]

          Basically, they just expect modern PSUs to handle the extra current. And they were right.

            • Meadows
            • 5 years ago

            I see. I was not aware that high-wattage GPU design relied on leaps of faith.

      • ImSpartacus
      • 5 years ago

      The same way the 500W 295X2 did?

    • Krogoth
    • 5 years ago

    Well we can see why marketing people decided to resurrect the “Fury” brand.

      • Voldenuit
      • 5 years ago

      They should call it 2 Fast 2 Fury.

        • Krogoth
        • 5 years ago

        $0 FA$T $0 FURI0U$! $O 1337 HAX0R!

          • YukaKun
          • 5 years ago

          Considering there was a “Fury MAXX” (caps and all), it would not surprise me one bit, to be honest.

          Cheers!

            • derFunkenstein
            • 5 years ago

            No way they bring that dual-card name back; the Fury MAXX had a troubled history to say the least…

            • Concupiscence
            • 5 years ago

            Wasn’t there some aspect of the design that was fundamentally incompatible with the NT 5.x HAL? I only remember a few die-hards clinging to it running Windows 98 in college, and even they gave up soon after XP came out.

            • derFunkenstein
            • 5 years ago

            Yeah, that’s my recollection as well. It wouldn’t work with Windows 2000 or later.

            They eventually got it to work as a single-GPU card, so half the performance, and it was pretty late to the party.

            [url<]http://www.rage3d.com/board/showthread.php?t=33594979[/url<]

            • swaaye
            • 5 years ago

            It was troublesome in general and NV thoroughly beat it (and everything else) with GeForce 256.

            It also used AFR and it did have some problems with frame times causing stutter.

            • 0x800300AF
            • 5 years ago

            Actually, it was due to the way PCI worked vs. AGP (going off 15+ year-old memory here.. and a bit tipsy), but under Win98 ATI was able to use PCI emulation with the Rage 128 Pro “Fury Maxx”. Under WinNT, AGP couldn’t do that, so support was nil and none. If you search through the earliest pages of Rage3D (and Rage128 in the Wayback Machine) you will find some very descriptive pages by myself, including numerous RMAs to ATI and concerns about Via’s AGP implementation (vs. Intel’s) and how it affected (Super) Socket 7 systems..

            ok back to drinking..

      • EndlessWaves
      • 5 years ago

      Do not go gentle into that good night.
      Rage, Rage against the dying of the light.

        • MastaVR6
        • 5 years ago

        Best words spoken by Rodney Dangerfield!

    • Meadows
    • 5 years ago

    If it’s still a 4 GiB card, then they have a problem.

      • KeillRandor
      • 5 years ago

      lol.

        • Meadows
        • 5 years ago

        Elaborate.

          • pranav0091
          • 5 years ago

          He did, he loled. Your argument is now invalid. Like, totally invalid.

            • auxy
            • 5 years ago

            Yep! That’s how it works! (∩´∀｀)∩

            • Milo Burke
            • 5 years ago

            Is that the emoticon for a compass playing croquet?

            • auxy
            • 5 years ago

            No, it’s a smiling person with both hands up. (*‘∀‘)

      • rems
      • 5 years ago

      It might just be the classic physical 8GB and effective 4GB.

        • Meadows
        • 5 years ago

        I know, my point exactly.

        • tipoo
        • 5 years ago

        DX12 would allow addressing those pools separately and having 8GB total to work with, but that’s up to developers… who tend to do the least they can.

          • auxy
          • 5 years ago

          Addressing the pools separately doesn’t change the fact that you still have to duplicate all of the assets. This is a widespread misconception and I wish people would quit repeating it.

            • tipoo
            • 5 years ago

          It depends on how they do frame rendering, but the difference would still be between having to keep 1:1 identical memory contents under DX11 and being able to do different things under DX12, even if some base level of memory has to be the same.

      • Laykun
      • 5 years ago

      Not sure why you’re getting downvoted. 4GB is going to be a problem for this card, since they claim it’s been developed specifically for 4K gaming. My GTX 670 4GB has a hard enough time with 1080p in GTA V; that game is thirsty for VRAM.

        • auxy
        • 5 years ago

        [url=http://i.imgur.com/bMMVnUo.png<]Eheh ... your 670 is having issues because it's an old, slower GPU, not because you're running out of VRAM.[/url<] Also NVIDIA has sorta thrown you under the bus as far as actual driver performance updates on newer software.

          • Laykun
          • 5 years ago

          I never said I was getting poor performance. It is, however, only just maintaining a stable, playable frame rate with a few of the more advanced settings turned down; it’s on the very cusp. Having said that, GTA V still consumes most of the GPU’s memory, despite it having 4GB. Scaling up to 4K is a 4x increase in pixels needing to be rendered, leading to a substantial increase in the memory the game requires, which is the point I was trying to make.

            • auxy
            • 5 years ago

            Yes, it quadruples the size of the buffers, which are trivially small compared to the textures, models, and other assets. ( *´艸｀) Resolution has much less effect on VRAM usage than you have been led to believe.

            • Laykun
            • 5 years ago

            I’m a graphics programmer; I’m well aware of the effect a resolution increase has on VRAM. Making a blanket statement that resolution has little effect on video memory usage is misleading, because it doesn’t account for the fact that each engine/game handles resolution changes differently. Many engines scale shadow map resolutions up with screen resolution, along with LOD levels (Arma 2’s engine) and view distances. Different games also use different numbers and combinations of off-screen buffers. Unreal Engine 4 uses something like 8 off-screen buffers of varying formats to store the information it needs for PBR and deferred lighting, some of those formats potentially storing R32G32B32A32 per pixel (I haven’t looked into the specific formats it uses, but having written deferred renderers it’s highly likely); that’s 280MB for a single buffer at 4K resolution (3840×2400), which has to live alongside shadow maps and gobs of texture maps. Other engines can scale liberally with access to more VRAM, giving you more content in one frame, like the Frostbite engine.

            You can play down the VRAM capacity as much as you like, but on a brand-new flagship card you should expect it to be somewhat future-proof. I understand that because of technical limitations they couldn’t simply put 8GB in there (which they plan on doing), but let’s not pretend 4GB is enough for a flagship card.
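
            For a rough sense of the per-buffer numbers in play, here's a minimal sketch; the resolutions and pixel formats below are illustrative assumptions, not measurements of any particular engine:

            # Rough render-target size: width * height * bytes_per_pixel.
            def buffer_mib(width, height, bytes_per_pixel):
                return width * height * bytes_per_pixel / 2**20

            # Compare a 32-bit (RGBA8) target with a 128-bit (R32G32B32A32) target.
            for w, h in [(1920, 1080), (3840, 2160)]:
                print(w, "x", h,
                      round(buffer_mib(w, h, 4), 1), "MiB at 32bpp,",
                      round(buffer_mib(w, h, 16), 1), "MiB at 128bpp")
            # 1080p: ~7.9 / ~31.6 MiB; 4K: ~31.6 / ~126.6 MiB per target.
            # A deferred G-buffer with several such targets, plus shadow maps, adds up quickly.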

            • sweatshopking
            • 5 years ago

            Sadly it’s a limit of their HBM implementation. You’re right, it should be 8, especially at that price.

            • auxy
            • 5 years ago

            Well! I see you aren’t as ignorant as I assumed! (´・ω・)

            However, let’s look at the facts:
            [list<]
            [*<]I play a lot of games at 4K@60+ on my single 290X and it handles it just fine with 4GB.[/*<]
            [*<]I went over 4GB of VRAM usage only once the entire time I had a GTX TITAN.[/*<]
            [*<]I do not think 4K monitors will see significant market penetration within the next 2 years (a typical GPU upgrade cycle, even for people buying halo products),[/*<]
            [*<]and I think 4GB is just fine for 4K apps.[/*<]
            [/list<]
            So I really don't see a problem with 4GB of VRAM on a flagship card. People want to make it an issue because they're buying things they can't really afford, so they want them to last a long, long time. The reality is that the problem is with their purchasing habits, not the card itself. Would it be [b<]BETTER[/b<] with more memory? Of course, obviously. But I don't think the Fury X having only 4GB of VRAM is reason enough to discount the fantastic performance* besides. The overwhelming majority of gamers are still on 1080p [b<]anyway[/b<].

            [i<][sub<]*(edit: pending benchmarks which show the card actually performing near its theoretical performance!)[/sub<][/i<]

            • Laykun
            • 5 years ago

            I don’t expect the Fury X to do poorly in benchmarks for current games, but I don’t expect the card to age very well either, and for a flagship card, that’s not desirable. I’ve been stung multiple times because of this exact problem, with the GeForce 6800nu 128MB, X1900 XT 256MB, and 4870 512MB; each and every time there’s been a successor card with double the amount of VRAM (in the 4870’s case it was the 4870 X2, as I had CrossFire 4870s). Each of these cards passed benchmarks with flying colours on the day of release, and not too far down the road the market changed and I was hamstrung not by the performance of the GPU but by the lack of VRAM. The Fury X doesn’t have much breathing room when it comes to VRAM, and I feel like that’ll be detrimental to its future.

            • auxy
            • 5 years ago

            [url<]http://www.techpowerup.com/reviews/Performance_Analysis/The_Witcher_3/3.html[/url<]

            I don't really doubt you when you say you had performance issues on those cards due to VRAM, but all of those cards had hilariously small local memory relative to their performance -- and more importantly, every single one of them came in a double-VRAM version, so you kinda cherry-picked the low-VRAM cards. If you had the issue on the 6800, why did you continue to buy the low-VRAM versions of those cards...? I bought an X1800XT and it was 512MB; later, I had a 4870 that was 1GB. Anyway, this discussion is dumb; 4GB is fine. [url=http://i.imgur.com/jHJ0S9G.png<]I'm tired of arguing.[/url<]

            • Laykun
            • 5 years ago

            I don’t think 4GB is a good fit for the performance profile of the Fury X, much like 256MB wasn’t a good fit for the X1900 XT, or 128MB for the GeForce 6800. Also, when I bought those cards I was a lot poorer, so I didn’t have the means to buy the larger version. Likewise, AMD has said they will release an 8GB Fury X in August (although I’m not sure if that’s just rumour), and that’ll be the card you want to get.

            • K-L-Waster
            • 5 years ago

            [quote<]Likewise AMD has said they will release an 8GB Fury X in August (although not sure if just rumour), and that'll be the card you want to get.[/quote<]

            Well, if you don't have a burning need to buy a GPU, you can count on the fact that if you wait, there will always be something better + faster + cheaper at a later date. Just because there is a new shiny thing doesn't obligate you to break out the credit card.

            • derFunkenstein
            • 5 years ago

            I dunno, my “3.5 + 0.5 GB” 970 does fine in GTA5 at 1440p. In my case, I found that the GeForce Experience turns ambient occlusion up to high. Turning it off entirely skyrockets framerates.

            • K-L-Waster
            • 5 years ago

            The 970 isn’t a flagship card, though.

            To put it another way, releasing the Fury and Nano as 4GB makes sense, but releasing the FuryX and the dual card at that level makes as little sense as if NV had made the TitanX or 980TI 4GB cards.

            • anotherengineer
            • 5 years ago

            Some games are just coded for memory use better than others.

            [url<]http://www.techpowerup.com/reviews/Performance_Analysis/The_Witcher_3/3.html[/url<]

            (scroll to the bottom)

        • ImSpartacus
        • 5 years ago

        Like most things in life, it’s never that simple.

        There’s a good discussion about it on the Beyond3D forums.

        [url<]https://forum.beyond3d.com/threads/spinoff-4gb-framebuffer-is-it-enough-for-the-top-end.56964/[/url<]

        I can't wait to see what happens next week when the rubber meets the road and the Fury X reviews are out.

        • Ninjitsu
        • 5 years ago

        There is a chance that this is an oversimplification; we should wait for benchmarks.

      • YukaKun
      • 5 years ago

      To be honest, I still don’t see it as a problem. Current 4K gaming uses a tad north of 3GB with everything turned on (see the investigations done for the 970), and given memory-use trends, it will be a good 2-3 years into DX12 before we see more than 4GB being used.

      Point is, for the “here and now” market, it’s fine. If you want “future proof”, it is very debatable 😛

      Cheers!

      EDIT: Fresh information regarding VRAM use from TweakTown:
      [url<]http://www.tweaktown.com/tweakipedia/90/much-vram-need-1080p-1440p-4k-aa-enabled/index.html[/url<]

      • PerroLito
      • 5 years ago

      Wasn’t this 2x4GB VRAM issue supposed to be obviated by DX12 and Vulkan? If so, then it would only be a problem for current games which do not support the new APIs, and this card would really shine under the new lower-level APIs.

        • ImSpartacus
        • 5 years ago

        Yeah, I think you’re right about that. Instead of frame alternating, they were supposed to use other techniques.

        • sweatshopking
        • 5 years ago

        Yes. Apparently you should have access to the full 8GB in DX12.

        • auxy
        • 5 years ago

        [b<]NO.[/b<] [url=http://i.imgur.com/OMZiHEQ.jpg<]Ugh.[/url<] So sick of this misinformation. DX12 will allow developers to use different techniques to divvy up work between GPUs (such as split-frame rendering) which will allow each GPU to use as little as half the memory for the buffers. The cards will still have to duplicate most scene assets; otherwise, they can't work on the same scene. The overall impact on GPU memory utilization will be small. It does not allow you to "use all the VRAM of both cards". This is being wildly misreported and blown out of proportion. This MIGHT have a significant impact on GPU [b<]bandwidth[/b<] requirements, but not VRAM usage.
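
        To put illustrative numbers on that, here's a minimal sketch; the asset and buffer sizes are made-up assumptions chosen only to show the proportions:

        # Illustrative 4GB-per-GPU breakdown under AFR vs. split-frame rendering (SFR).
        assets_gib = 3.0           # textures, models, etc. -- duplicated on both GPUs either way
        buffers_gib = 0.5          # frame buffers, G-buffer, shadow maps, etc.

        afr_per_gpu = assets_gib + buffers_gib        # each GPU holds everything: 3.5 GiB
        sfr_per_gpu = assets_gib + buffers_gib / 2    # only the buffer share shrinks: 3.25 GiB

        print("AFR per GPU:", afr_per_gpu, "GiB  SFR per GPU:", sfr_per_gpu, "GiB")
        # The duplicated assets dominate, so the per-GPU saving is small -- nothing like "8GB usable".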

      • derFunkenstein
      • 5 years ago

      Since nobody else has brought up Scott’s writeup on HBM, I guess I will

      [quote<]This first-gen HBM stack will impose at least one limitation of note: its total capacity will only be 4GB. At first blush, that sounds like a limited capacity for a high-end video card. After all, the Titan X packs a ridiculous 12GB, and the prior-gen R9 290X has the same 4GB amount. Now that GPU makers are selling high-end cards on the strength of their performance at 4K resolutions, one might expect more capacity from a brand-new flagship graphics card.

      When I asked Macri about this issue, he expressed confidence in AMD's ability to work around this capacity constraint. In fact, he said that current GPUs aren't terribly efficient with their memory capacity simply because GDDR5's architecture required ever-larger memory capacities in order to extract more bandwidth. As a result, AMD "never bothered to put a single engineer on using frame buffer memory better," because memory capacities kept growing. Essentially, that capacity was free, while engineers were not.

      Macri classified the utilization of memory capacity in current Radeon operation as "exceedingly poor" and said the "amount of data that gets touched sitting in there is embarrassing." Strong words, indeed. With HBM, he said, "we threw a couple of engineers at that problem," which will be addressed solely via the operating system and Radeon driver software. "We're not asking anybody to change their games."[/quote<]

      I'm not saying they've now suddenly got unlimited resources or anything, but I have a feeling that it'll be better than doom and gloom, at least.

        • Ninjitsu
        • 5 years ago

        Yup, this is what I’ve been thinking of, too. At 4K I think lack of enough shaders is more of a problem than memory at the moment, and is what will limit the 390 and 390X.

          • derFunkenstein
          • 5 years ago

          Righto. I’m sure that to get the perf-per-watt up, the voltage has been lowered, and probably the clock speed to match. So it’ll go at it with a wider set of compute units (4096×2 vs 2816×2 in the 295X2), but I’m guessing at slightly lower clock speeds. Especially on the Fury X rather than the X2, I think that’ll expose a compute challenge.

        • l33t-g4m3r
        • 5 years ago

        AMD could also include an option to force texture compression, like they did with tessellation. One of the biggest reasons you need so much VRAM is that devs are using uncompressed assets. Forcing texture compression would essentially solve the problem, or prevent it from becoming an issue, since it isn’t one right now.
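
        As a rough illustration of what block compression buys, here's a minimal sketch; the 4096x4096 texture is just an example, and the 8:1 / 4:1 figures are the standard BC1 / BC7 ratios versus uncompressed RGBA8:

        # Memory for a single 4096x4096 texture, uncompressed vs. block-compressed.
        pixels = 4096 * 4096

        rgba8_mib = pixels * 4 / 2**20      # 32 bits per pixel -> 64.0 MiB
        bc7_mib   = pixels * 1 / 2**20      # 8 bits per pixel  -> 16.0 MiB (4:1)
        bc1_mib   = pixels * 0.5 / 2**20    # 4 bits per pixel  -> 8.0 MiB  (8:1)

        print(rgba8_mib, bc7_mib, bc1_mib)  # 64.0 16.0 8.0
        # Multiply by the hundreds of textures in a scene and that ratio is most of the VRAM story.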

          • derFunkenstein
          • 5 years ago

          I assume that’s going to be the default and then maybe an option to disable it on a per-game basis if for some reason a game misbehaves.

        • K-L-Waster
        • 5 years ago

        As Scott says, “Strong words, indeed.”

        It remains to be seen how effective this will be, and if their assertion that it can “be addressed solely via the operating system and Radeon driver software” actually works in the real world with real games (which of course may have their own real bugs interfering.)

        I am by no means saying that it won’t work — in principle, it sounds like exactly what the gaming industry needs — but I want to see hard results demonstrating that it works before I trust that it *does* work. (For purposes of this discussion, hard results == a TR review where Damage says it did what AMD said it would.)

        Until then, I’m cautiously on the sidelines.

          • derFunkenstein
          • 5 years ago

          It does remain to be seen, you’re right. I remain hopeful. AMD needs to be right.

        • Kretschmer
        • 5 years ago

        This looks promising! I wonder if these changes will percolate down to 2XX cards as well.

          • derFunkenstein
          • 5 years ago

          I’m assuming the compression is all done on the CPU, in the driver, but decompression concerns me. They already have it in the 285 and it has not made its way to the rest of the series, and I have a feeling that’s due to some sort of dedicated decompression hardware.

      • Hattig
      • 5 years ago

      At 8K, yeah, 4GB won’t cut the mustard.
      At 4K, it seems it is fine, except maybe in some specific benchmarks.
      In Crossfire, with DX12 and its ability to split the workload more efficiently, it won’t be a problem either.

      • prb123
      • 5 years ago

      Has AMD released any info on Memory Compression yet? Did they buy up Quarterdeck for QEMM?

      • Mikael33
      • 5 years ago

      Yes, AMD is just releasing a 4GB high-end card because they don’t know any better.
      Hint: numerous sites have tested the benefit of 8GB cards over their 4GB brethren; aside from Shadow of Mordor and perhaps one or two more titles, there isn’t any benefit on current cards, and AMD has stated they have done zero work in their current drivers to optimize memory usage.

      Edit: Oh noes, 3 people disagree (at this time), AMD’s graphics division is DOOMED.

        • prb123
        • 5 years ago

        Downvotes are due to this site reporting on the need to optimize: [url<]https://techreport.com/review/28294/amd-high-bandwidth-memory-explained/2[/url<] Scroll to the bottom.

    • chuckula
    • 5 years ago

    If GPU scaling in DX12 actually pans out, then this is where Fiji may have an outright win over Nvidia.

      • ImSpartacus
      • 5 years ago

      They were thinking that dual GPUs are actually superior for VR, if I recall the rumors correctly.

      That’s why Quantum has two GPUs and was advertised as a VR machine.

      It was something about running each eye’s display at a separate variable frame rate or something.

      I don’t remember the details, but apparently dual GPUs are going to be useful very soon.

        • jts888
        • 5 years ago

        Multi-GPU systems started out with frame splitting (scan-line interleaving, actually) back when transform and lighting were done by the host CPU and you could get near 100% efficiency in parallelizing the triangle draws.

        After T&L (and later generic geometry shading) was moved to the GPU, doing that work redundantly lowered peak theoretical frame rates, so Nvidia and AMD moved to alternate frame rendering, which brought us issues like microstuttering but looked better on paper at least.

        With VR, you need to do the geometry separately for each eye anyway, so you can get very close to ideal scaling with 2 GPUs, since the complexity of each eye’s frame should be very close to identical almost all the time.
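
        Here's a minimal sketch of that idea; render_eye() and the GPU names are hypothetical stand-ins, not any real VR or driver API:

        # Hypothetical per-eye split across two GPUs: two near-identical workloads run in parallel.
        from concurrent.futures import ThreadPoolExecutor

        def render_eye(gpu, eye, scene):
            # Stand-in for submitting this eye's full geometry + shading pass to one GPU.
            return f"{eye} eye rendered on {gpu}"

        def render_vr_frame(scene):
            with ThreadPoolExecutor(max_workers=2) as pool:
                left = pool.submit(render_eye, "gpu0", "left", scene)
                right = pool.submit(render_eye, "gpu1", "right", scene)
                return left.result(), right.result()   # compose and present both eyes

        print(render_vr_frame({"triangles": "lots"}))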

          • auxy
          • 5 years ago

          [quote="jts888"<]With VR, you need to do the geometry separately for each eye anyway, so you can get very close to ideal scaling with 2 GPUs, since the complexity of each eye's frame should be very close to identical almost all the time.[/quote<]

          This is very interesting! Thanks for pointing that out!

          • K-L-Waster
          • 5 years ago

          Hadn’t thought of that, but makes sense – each eye will be looking at a very slightly different angle, so would need to be rendered independently.

          Also would make multi-card setups more interesting than in a standard monitor situation, but having both GPUs on one card may be better still (less likely for the GPUs to drift due to latency etc. if they are on the same PCB).

    • brucethemoose
    • 5 years ago

    This has to set some kind of compute density record, right? The PCB looks smaller than my 7950.

    Despite the small memory capacity, I bet some customers would love to cram a few dozen of these things into a single server rack… I wonder if a FirePro Fiji is coming.

      • auxy
      • 5 years ago

      ( [url=http://i.imgur.com/XreYnRq.jpg<][i<]auxy when 4GB is 'small memory capacity'[/i<][/url<] )

        • sweatshopking
        • 5 years ago

        YOU KEEP POSTING ANIME CAT CREATURES AND I DON’T UNDERSTAND IT. what do they mean?

          • derFunkenstein
          • 5 years ago

          this one has a question mark floating overhead, meaning she is puzzled by the idea that 4GB is small memory.

            • Deanjo
            • 5 years ago

            [quote<]this one has a question mark floating overhead, meaning she is puzzled by the idea that 4GB is small memory.[/quote<] This descriptive narration for TV for the visually impaired has been brought to you by the letters T and R.

            • derFunkenstein
            • 5 years ago

            I’m here to help.

          • NeelyCam
          • 5 years ago

          ssk, what’s with Microsoft?

          [url<]http://www.financialexpress.com/article/companies/former-nokia-chief-stephen-elop-out-in-microsoft-shakeup/86663/[/url<]

          I thought the leadership was happy with his successful trojan horse..

          • auxy
          • 5 years ago

          Here, look at this: [url=http://i.imgur.com/uZe41Lm.png<]it explains everything.[/url<]

          • TopHatKiller
          • 5 years ago

          Sorry to interject – you’re complaining about not understanding?
          You’ve been here forever. I get one out of five comments. Certain people have replied to me with pictures of hammers…? Why? Am I supposed to hit my head with them?
          I don’t know. Bless, etc, Evil-Overlord-of-Workers.

        • K-L-Waster
        • 5 years ago

        Pictogram communication doesn’t work any better here than it does in Ikea instructions.

          • auxy
          • 5 years ago

          I don’t understand. It works great in IKEA instructions? I love putting together IKEA furniture; it’s so easy! (´・ω・`)

            • K-L-Waster
            • 5 years ago

            Even easier if you throw the pictogram instructions away and just use spatial reasoning 🙂

            • auxy
            • 5 years ago

            [url=http://i.imgur.com/aKdBUhU.jpg<]That sounds like a typically hypermasculine thing to do, yep.[/url<] Try following instructions so you can build it right the first time! (*‘∀‘)

            • K-L-Waster
            • 5 years ago

            When the instructions are actually comprehensible I do follow them (I’m a tech writer, so I appreciate good documentation.)

            When the instructions are cryptic pictograms, OTOH — echh, toss’em and exercise some neurons.

            • auxy
            • 5 years ago

            So the problem is less that the instructions are bad and more that you have some kind of semantic aphasia. (´▽｀)

            • K-L-Waster
            • 5 years ago

            Or it’s just that I’m over 40 and prefer communication that, y’know, communicates…

            • auxy
            • 5 years ago

            LEGO has always used pictures for instructions, and children the world over can build LEGO thanks to that. If you’re arguing against visual communication, you are arguing against LEGO, and anyone who is against LEGO is my enemy. (ﾉ `Д´)ﾉ ~┻━┻

            • K-L-Waster
            • 5 years ago

            Lego’s pictures are clear. Unlike those goofy emoticons.

      • TopHatKiller
      • 5 years ago

      HBM apparently reduces the size of graphics cards… which is good. The density is another matter: 8.6 billion transistors? We need to know the chip area to see whether AMD can pull off an incredible transistor density. Based on their track record and their [quite obviously excellent] high-density libraries, I guess they can. And show everyone else how to do it.
      I’m a bit more optimistic than I was – as long as the rebrands eliminate ancient AMD GPUs. Like Pitcairn. … Tally-ho!

    • albundy
    • 5 years ago

    Fury X2: Balls of Fury! Now in a smaller, cramped package!

      • James296
      • 5 years ago

      the marketing department at AMD

      [url<]https://youtu.be/1ZXHsNqkDI4[/url<]

    • JustAnEngineer
    • 5 years ago

    How soon until a super-special [url=http://i230.photobucket.com/albums/ee120/JustAnEngineer/Games/R9-Quad-Damage.jpg<]quad-GPU version[/url<] arrives?

      • DrDominodog51
      • 5 years ago

      For optimal performance, give the 4 GPUs some steak every day.

        • Anovoca
        • 5 years ago

        And don’t forget to fill the radiator reservoir with HGH.

      • ozzuneoj
      • 5 years ago

      I was expecting to see a picture of a Voodoo 5 6000 when I clicked that.

      • anotherengineer
      • 5 years ago

      Twice as long as it takes the dual version to arrive 😉
