AMD and Falcon Northwest cram Gemini into a tiny tower at VRLA

Back when we reviewed the R9 Fury X, we commented on how much power it packs into a compact package. Now AMD and Falcon Northwest, purveyors of powerful (and expensive) prebuilt systems, have taken AMD's Fiji GPU to a new level. Falcon showed off a machine with an AMD "Gemini" dual-Fiji card at Virtual Reality Los Angeles, powering a demo of HTC's Vive headset.

The machine in question is one of Falcon's Tiki micro tower models, which measures a mere 13" tall by 4" wide (or 33 by 10 cm). Despite the diminutive size, Falcon's system seems capable of housing and cooling AMD's CrossFire-on-a-stick Gemini card.

The Fiji GPU is a pretty power-efficient part for its delivered performance, and everything that AMD has been saying about its upcoming Polaris GPU indicates that power efficiency will once more be the name of the game. It's not too hard to imagine Nvidia working along the same lines with its next-gen Pascal chips.

One of the main reasons the PC gaming collective likes mid- and full-tower cases is the space, power, and cooling requirements of graphics cards. With the next GPU families from AMD and Nvidia looking like they'll deliver big gains in power efficiency, it's not that far-fetched to imagine a future where the norm is more compact systems packing a ton of horsepower. Your thoughts?

Comments closed
    • tbone8ty
    • 4 years ago

    Damage, is that you in there? When is Damage coming back for a fireside chat with techreport?

      • ronch
      • 4 years ago

      AMD tricked Scott by pretending they’re offering him a nice job. Now they have him locked up deep within AMD HQ, tormenting him with the heat coming out of a system with an FX-9590 and 295X2 pointed straight at him, with Koduri laughing his evil laugh in the dark. This is what Scott gets for speaking the truth (not that political correctness BS) against AMD all these years.

      “What do you mean we didn’t want to give you a Nano for review? Why would we? You don’t give ‘fair’ reviews!”

    • TopHatKiller
    • 4 years ago

    Read the lot, and YUK!
    #1 i don’t want, have never wanted, never will want a tiny pc case.
    #2 any such tiny case will still suffer cooling/noise problems over bigger cases.
    #3 FNW is a great boutique builder, i have talked to them over the years, but their stipulations on pc builds when shipping to the uk broke my desire to buy from them.
    #4 – this is a killer –
    speculations about gpu designs from anyone, without any actual evidence, are foolish.
    Having said that, I’ve already decided that AMD’s new designs are crap – NO WAIT –
    That Nv’s are crap.
    I’m sure some of them will be crap. No wait, maybe all. No wait, maybe none.

    Not sure… idiots. Yet

      • Arbiter Odie
      • 4 years ago

      Well, I think I would like a small case. It would take up less leg-room under the desk. Of course, it would have to be really fast, and support lots of screens and lots of hard drives, so Thunderbolt (because of its speed and daisy-chaining) will have to make headway in adoption for this to happen.

        • TopHatKiller
        • 4 years ago

        I wanna big one! And I’m NOT compensating for anything!

    • UberGerbil
    • 4 years ago

    Every time we’ve had a [i<]real[/i<] new GPU generation (i.e., a process shrink), it has delivered big gains in power efficiency. And every time, enthusiasts have chosen the GPUs that employ those gains to get more performance, not to make smaller, cooler-running systems. So while the power efficiency of this new gen of GPUs should be extremely helpful in making actual "gaming laptops" that aren't huge, hot, and tied to a power outlet (especially because laptop panel resolution hasn't climbed to quite the degree the desktop's has), I don't imagine they'll do much to make true enthusiast systems smaller. The pocket-rocket designs are more practical than they were, and maybe the broad center of the market becomes even more focused on mATX, but all the other compromises that come with smaller systems aren't going away.

      • _ppi
      • 4 years ago

      This.

      Total power consumption is going up and will ultimately go up again once 14/16nm cards are finalized. AMD said they can increase performance by some 30% at the same power budget. But at the same time, the 14/16nm process allows them to pack in 4x more transistors …

      As a small reminder for everyone, there was a time when graphics cards could be powered entirely off the PCIe bus. And it was a shock when new cards required an additional power cable. And there was a time when a tiny little 5cm fan on a tiny little heatsink would suffice for a high-end card (this is what my 3dfx Banshee had; my S3 ViRGE did not even have a heatsink). And hey, now I have two power cables running into my “power efficient” GTX 970 with its two huge fans, and Skylake mobo makers are producing boards with reinforced PCIe slots …

      I would not be surprised if, by the time we are at 5-7nm, high-end GPUs consume 1kW and nobody blinks an eye. Maybe even sooner.

        • synthtel2
        • 4 years ago

        If you chart the TDP increases, they have kinda flattened out at ~250W. Nvidia hit 236W with the GTX 280 in 2008, AMD hit 215W with the 2900 XT in 2007 (only counting single-GPU cards). Nvidia still hasn’t released anything over 250W nominally, and AMD only went to 275W last June after sticking with 250W since 2010. [1] Of course this is just nominal TDPs, and many of the actual cards these days are a fair bit higher due to factory overclocks, but one way or another both AMD and Nvidia look pretty reluctant to push TDPs into the 300+W range.

        [1] This is consumer-space only. I don’t know so much about Firepros/Quadros, but the impression I’ve gotten is they tend to be almost universally clocked lower.

          • _ppi
          • 4 years ago

          Well, right. But with AIO water coolers on the 295X2 and Fury X, I cannot imagine them leaving something like a doubling of performance on the table, even if it meant a 500W card.

          Sure, it will be reserved for high-end models first. But that’s where you test the waters.

      • Pwnstar
      • 4 years ago

      Enthusiasts need massive amounts of GPU performance. Of course they are going to choose that over small and low watts.

      But you misunderstand power efficiency. We need power efficiency so we can fit more performance into 250W. We wouldn’t be able to double the number of transistors in that power envelope without it.

    • Bomber
    • 4 years ago

    I think the biggest surprise here is that Falcon is still around. I remember in my 20s (way too long ago) drooling over their machines in monthly Computer Shopper mags.

      • UberGerbil
      • 4 years ago

      They’re like one of the boutique custom car builders / customizers you see on various reality shows. They exist because there’s a small but profitable niche for that sort of thing, be it individuals with deep pockets who want more than “a Dell” or corporations needing a fast and bling-y machine for tradeshows or other publicity purposes. (And it probably doesn’t hurt they’re in Medford, so their costs are low)

        • Bomber
        • 4 years ago

        Fair points across the board, but Voodoo, Alienware, and many of the other small boutiques from the same period no longer exist in the same form. They either shuttered or were absorbed by one of the big-box companies (like the ones mentioned above). Just interesting that any company from the boutique era is still around.

          • ultima_trev
          • 4 years ago

          The boutique PC space is still quite alive. Aside from Falcon Northwest, there’s Maingear, Velocity Micro, Origin, iBuyPower, CyberPower, Puget Systems… And I’m sure there are others that I’m forgetting…

            • Bomber
            • 4 years ago

            Most of those didn’t exist 20 years ago. Actually, I’m guessing none did. That’s the point. Those came out after the tech bust of the ’90s. The fact that FNW survived that is impressive.

            • Krogoth
            • 4 years ago

            FNW is actually almost 25 years old.

            They were around when the 486 and Pentium I were the new hotness in town.

            • Bomber
            • 4 years ago

            Yes, that is what I was saying. LOL The examples above of modern boutique builders are the “didn’t exist 20 years ago” 🙂

      • tipoo
      • 4 years ago

      Pretty impressive that they’re still alive and independent. Voodoo was bought and killed by HP (sigh), Alienware was taken by Dell, a few others from early on just fell…FNW has been trucking since 1992.

      I remember it was the “if you were running this on a Falcon, you’d be done by now” messages in the early 3Dmark (when it was MadOnion?) that made me aware of them 😛

        • Krogoth
        • 4 years ago

        I remember their ads in really old computer magazines that used a T-800 motif on their monitors (Terminator 2 had just come out).

    • brucethemoose
    • 4 years ago

    When I hear AMD say they’re targeting power efficiency with Polaris, all I hear is that they’re choosing relatively low stock voltages/clockspeeds to conserve power.

    That’s good for overclockers, since it means we have more headroom to work with… but wouldn’t it be more advantageous to clock the part higher out of the box and ship it with higher stock performance?

      • odizzido
      • 4 years ago

      Design is a pretty huge part of power efficiency. This does not have to mean that they’re going with lower clocked parts, at least not on the performance oriented parts.

      • lmc5b
      • 4 years ago

      That is what they did with the 290/X, those cards are really power efficient at 850-900 MHz and you only lose 5-10% performance down from ~1000. But they wanted a Titan and got burned for it (no pun intended) as the cards got a really bad reputation. I think this is a smart move and they are clearly learning from the past. My underclocked 290 is awesome and runs nice and cool, but if you want to overclock the option is still there, and a lot more people will think to overclock a card that runs cool than to underclock a card that runs hot.
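
      (For a rough sense of why backing off the clock helps so much: dynamic power scales roughly with frequency times voltage squared. A minimal sketch in Python, with assumed, illustrative clock/voltage figures rather than measured ones:)

          # Dynamic power scales roughly as P ~ f * V^2 (all numbers assumed for illustration).
          f_stock, v_stock = 1000.0, 1.20   # MHz, volts -- hypothetical stock operating point
          f_low,   v_low   =  900.0, 1.05   # MHz, volts -- hypothetical underclock/undervolt
          power_ratio = (f_low * v_low ** 2) / (f_stock * v_stock ** 2)
          print(round((1 - power_ratio) * 100))       # ~31% less dynamic power...
          print(round((1 - f_low / f_stock) * 100))   # ...for a 10% clock reduction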

      • DPete27
      • 4 years ago

      With Hawaii (and to a lesser extent Fury) they pushed the voltage/performance to the max in order to keep up with Nvidia. That resulted in much higher power draws.

      Some of the TR gerbils have found significant power savings with minimal performance impact by tuning these cards back just slightly.
      [url<]https://techreport.com/forums/viewtopic.php?f=3&t=109936[/url<]

      I'm not so sure whether Hawaii was even designed with maximizing power efficiency in mind, or if their hand was forced by Nvidia's superior Maxwell architecture. Maybe a mix of both.

      While "power efficiency on the desktop doesn't matter much" is a valid argument, lower power draw does reduce the heat generated and makes for quieter and/or smaller systems. Not to mention these companies use the same architectures for mobile parts, where battery life is ALWAYS a big factor.

        • Anovoca
        • 4 years ago

        [quote<]I'm not so sure whether Hawaii was even designed with maximizing power efficiency in mind, or if their hand was forced by Nvidia's superior Maxwell architecture.[/quote<]

        The fact that the first Hawaii cards out of the gate were water cooled is a pretty good indication of their marketing strategy.

        Edit: the first "Fury" cards out of the gate, that is.

          • ImSpartacus
          • 4 years ago

          Wasn’t the first Hawaii card the air-cooled 290X?

          Granted, the 295×2 used an AIO cooler, but it also didn’t really throttle its cards at all. It was literally just the TDP of one card multiplied by two. I doubt we’ll ever see another dual-GPU card with a two-slot air cooler.

            • Anovoca
            • 4 years ago

            was it? my mistake. Thank you for correcting.

            • _ppi
            • 4 years ago

            Yes, it had an el-cheapo blower-style cooler that let it go up to 95°C (“by design”) and then throttled. And it was as loud as a bad vacuum cleaner.

            Funnily enough, open-air custom designs kept temperatures below 80°C and pretty much never throttled.

            Seems AMD took a lesson from it, at least looking at the Fury X and Nano, and letting vendors do their own designs on the Fury and 380X. I am wondering how they will reflect this with Polaris.

            • brucethemoose
            • 4 years ago

            They did the same with Tahiti… and Cayman. They also put an open-air-ish cooler on the reference 7950, but it was significantly worse than Tahiti’s already problematic blower.

            I think AMD decided that they don’t do reference designs well, hence they just let vendors handle it these days, or slap an AIO + Gentle Typhoon on the GPU for a premium design.

    • DragonDaddyBear
    • 4 years ago

    Crossfire Fury power with only 4GB of VRAM isn’t going to cut it, especially when you’re touting your next-gen GPU. Too little, too late, AMD.
    [url<]http://www.eteknix.com/amd-r9-fury-x-crossfirex-12k-eyefinity-review/9/[/url<]

      • Goty
      • 4 years ago

      Never mind the actual, y’know, [url=http://www.hardocp.com/article/2015/10/06/amd_radeon_r9_fury_x_crossfire_at_4k_review/10#.VqZJ9VLQjVI<]evidence[/url<] or anything.

        • DragonDaddyBear
        • 4 years ago

        TR’s own analysis showed that VRAM was an issue in “some” cases, and that was with one card. Putting something that can drive more than 4K resolution, with less than enough VRAM in “some” cases, on a flagship product is foolish. They admit as much by releasing a tier-lower card with 8GB of VRAM.

        I’m not an AMD hater (I have a 7950). But it’s not worth their time to produce that card.

          • Goty
          • 4 years ago

          It IS worth their time, because it gets people talking about the product and company. Nobody is really concerned with the handful of edge cases in which you can push the card beyond its limits when there are many other situations in which its raw pixel-pushing power enables it to (potentially) offer better performance than the best performance you can get from any two cards available from the competition.

      • Airmantharp
      • 4 years ago

      3.5GB on my GTX 970s is fine for 1600p.

      Yes, I’ll be looking to trade them out for something with more RAM, but I’ll need more grunt too, whether I go 4K or with one of those 21:9 1440p screens.

        • slowriot
        • 4 years ago

        Fine for what? I’ve hit the 3.5GB limit of my GTX970 in GTA V at 3440×1440. I wouldn’t be surprised if this starts becoming a lot more common.

          • Airmantharp
          • 4 years ago

          Sure, and I just noted that moving up in resolution would mean an upgrade beyond 4GB (or 3.5GB in my case). As will future games.

          But for now, BF4 and Fallout 4 are working as admirably as can be expected.

            • slowriot
            • 4 years ago

            Ok, but how is that a counter to what Losergamer04 said? There are plenty of people out there already with 4K or 21:9 monitors. Who is buying a potential Crossfire-on-a-stick card if not people who have also spent significant money on monitors? Or expect to use it for VR?

            Losergamer04 has a point. It may not affect you, but I was pointing out how that limit has affected me. It’s certainly something I would consider when buying a super high end card. Going forward I wouldn’t be happy with just 4GB of VRAM on a top tier card.

            And yet -11 and counting. Because why?

            • Airmantharp
            • 4 years ago

            I don’t know, I actually agree with his point: ~4GB is okay for current-gen games at unaggressive resolutions/refresh rates (overall fill rates). For more aggressive settings and future games, not so much.

          • Anovoca
          • 4 years ago

          He already stated it’s fine for 1600p, i.e. 2560x1600. And throwing more VRAM at a card is far less efficient than upping memory bandwidth, which is something the next-gen cards will work to address. Maxwell and Hawaii were never meant to be 4K GPUs (despite how they are advertised).

            • slowriot
            • 4 years ago

            GTA V is a specific case where you run out of VRAM before you run out of compute power. You’ll be absolutely fine with a steady FPS until you eclipse that VRAM limit and suddenly FPS will go from a steady 45FPS to 5-10.

            • Anovoca
            • 4 years ago

            Exactly, it is a “specific case.” That doesn’t mean the issue lies with the hardware. There are plenty of games with significantly higher texture limits that run on less vram. GTAV is just coded poorly.

            But all that is beside the point: memory capacity is not the biggest issue at the moment. Bandwidth between the GPU and the VRAM, and between the GPU and the CPU over the PCIe bus, are the biggest bottlenecks in modern graphics. If you don’t believe me, just wait for 2GB HBM2 cards to hit the market later this year and see how they bench compared to current-gen 4GB cards.

            • slowriot
            • 4 years ago

            There likely won’t be 2GB HBM2 cards. But whatever dude… GTA V… so “poorly” coded, yet it runs fantastically for the visual quality. There are arguably 3-5 other games out there that match it, and none of them run appreciably better. And then there are other games like Fallout 4 that look much worse and run worse. What’s Fallout 4? Garbage coding?

            • VincentHanna
            • 4 years ago

            Upping bandwidth vs. adding VRAM is not an efficiency trade-off; they address different problems. The reason adding more VRAM to a game that doesn’t need it is “inefficient” is that the game ignores what it doesn’t need… and you can’t replace large quantities of VRAM with HBM if the game in question needs large quantities of VRAM. It just doesn’t work.

          • ImSpartacus
          • 4 years ago

          That’s 20% more pixels than a 2560×1600 display, which is itself 11% more than 1440p. I wouldn’t try to use a 970 beyond the 1440p mark. It’s a damn nice GPU, but it has limits.

          You need something beefier if you want 3440×1440 or 4k gaming.
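
          (The pixel math, as a quick sketch; the resolutions are just the ones mentioned above:)

              ultrawide = 3440 * 1440   # 4,953,600 px
              wqxga     = 2560 * 1600   # 4,096,000 px
              qhd       = 2560 * 1440   # 3,686,400 px
              print(ultrawide / wqxga)  # ~1.21 -> roughly 20% more pixels than 2560x1600
              print(wqxga / qhd)        # ~1.11 -> and 2560x1600 is ~11% more than 1440p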

            • slowriot
            • 4 years ago

            A GTX 970 is definitely playable at 3440×1440 as long as you make sure not to eclipse the VRAM you have. GTA V runs fine as long as you’re mindful about not going over that limit. Witcher 3 runs fine as long as you lower the shadows/water a bit and don’t use Hairworks. And then pretty much every other game is fine.

            I don’t understand why people on TR are so fond of telling others what they should be experiencing. Do you own a GTX 970 and a 3440×1440 monitor? If not, why are you telling me about my experience? I’m using those components. I KNOW how they behave together.

            • ImSpartacus
            • 4 years ago

            I feel like a lot of people make blanket statements in situations like this and in order to do so, you need a bit of “margin”. You have to be slightly “above” the average requirements so that you can be more confident that you’re capturing more outliers.

            Maybe it’s possible to play something like 80% of modern games 3440×1440 with a 970, especially if overclocked. However, you mentioned that your experience has shown that several game-specific tweaks or considerations are required for a 970 to punch above its weight class. And in certain circumstances, the 970 simply fails to perform.

            It sounds like that high-touch, high-maintenance experience is perfectly satisfactory for you and that’s great. You might’ve saved yourself a couple bucks and that’s never a bad thing.

            However, those are a couple of unique situations under carefully chosen circumstances. That doesn’t make me confident enough to be able to make the blanket statement that a 970 can satisfactorily game at 3440×1440.

            And since we’re limited to blanket statements in these kinds of discussions, that’s it. It’s just an unfortunate reality.

      • slowriot
      • 4 years ago

      Typical TR. You’re not wrong; I have hit those limitations myself in current games and on currently available monitors.

      And yet you’re getting downvoted. Tremendous. And then I post my own experience and it gets downvoted because why? Because apparently we’re SOLELY talking about Airmantharp’s experience and his defined criteria. Which YOU didn’t use at all.

      But TR, so whatever….

        • DragonDaddyBear
        • 4 years ago

        I don’t get the downvotes either. What I am confused about is that he appears to be agreeing with my assessment. I made my claim based on the review that TR did.

        I don’t completely agree with your assessment of the TR readers. However, I have noticed a trend where some people are voted higher than others. It is actually an interesting trend I see on many sites. It would make an interesting social-science study.

          • slowriot
          • 4 years ago

          I would guess the majority of the down votes you have received are from people who didn’t even read your full comment. Let alone give any real consideration to your point. The voting system just encourages that poor behavior. It’s a problem TR could address (to a degree) fairly easily but the site staff seems to just ignore it.

          It would bother me deeply to run a site where worthwhile points get down voted and cast in a negative light for no reason. But hey, TR is also a place where posters who routinely make personal attacks get nothing more than time outs.

          Hmm… maybe it’s time to just move on…

        • auxy
        • 4 years ago

        He is wrong. You can absolutely make use of a pair of Fiji processors with only a 4GB buffer available. Trivially, even. I mean, you can’t even really argue the point.

        The question of whether you CAN make use of more than 4GB of VRAM is irrelevant. It’s not as if that means every game does, or even will. Remarks like “settings where you will need two Fiji chips need more than 4GB” don’t even make sense, because it’s not as if applying more post-processing takes more VRAM.

        I could surely use a pair of Fiji chips to run Dragon’s Dogma: Dark Arisen with ENB at better than 50 FPS.

          • DragonDaddyBear
          • 4 years ago

          4GB is enough today, sure, for 4K gaming. Anything above that and you start to see issues.
          [url<]https://techreport.com/blog/28800/how-much-video-memory-is-enough[/url<]

          Now, I get it, one Fury isn't going to be enough at resolutions above 4K. But that's why you buy another card. My point still stands that this 2x Fury is too late. This year they will produce a card that probably gets close to Gemini's performance level, and with more VRAM it will be less prone to the constraints the Fury will likely hit in the near future. Also, if 4GB of VRAM is "enough," why sell a card with 8GB?

          Let's settle this with science: run a Fury in Crossfire, a 390X in Crossfire, and a 290X in Crossfire with one, two, and three 4K monitors. Until that's been done, I don't think any of us really has a leg to stand on.

            • auxy
            • 4 years ago

            1) I agree that the Gemini is too late. However, your reasoning is wrong. Gemini is too late because we’re too close to Polaris and whatever NVIDIA has coming.

            2) Resolution is not the primary driver of VRAM usage; media is. (textures, mostly) What games you are playing matters much, much more than what resolution you are playing them in.

            3) I doubt we will see any GPU from AMD or NVIDIA this year which matches Gemini in raw performance.

            4) 390(X) has 8GB of VRAM to differentiate it from its competitors, namely the 290(X). It is not because it ‘needs’ 8GB of VRAM.

            [url=https://media.giphy.com/media/ARj2OMThsPoAw/giphy.gif<]Good day sir![/url<]

        • ImSpartacus
        • 4 years ago

        This shouldn’t surprise anyone.

        TR literally sells the ability to cast more comment votes. They obviously are invested in this particular system.

        But as long as it’s merely a vanity thing, then what’s the problem? It’s not like people get auto-banned or something as a result of vote behavior.

          • VincentHanna
          • 4 years ago

          It does tend to reflect poorly on the site as a whole, if the high voted comments are meaningless or low quality. I don’t think that downvotes matter much, but people will ignore low-ranked comments, and to their detriment if the ranking system isn’t subject to guidelines of any kind.

          And while you can’t get banned for being downvoted here in the comments, the forum mods are a whole other story. Once they have laid down the truth of the matter, discussion ends and you must either accept their unsupported word or leave. The forums are a fiefdom, and they are your lords.

      • synthtel2
      • 4 years ago

      Isn’t Gemini supposed to be all about VR? For VR, the framerate/pixel-count requirements are pretty steep, but the per-pixel work done tends to be pretty low. Also, you have (as I understand it) two GPUs, so one can be rendering from each POV. In that kind of multi-GPU use, they don’t have to be doing much framebuffer data duplication between the two memory pools (assuming the game engine people have a clue).

      While it’s entirely feasible to run out of 4GB of VRAM with a Fiji when 42 or 60 fps is your target framerate, it’s a whole lot tougher when 90 fps is your target. VRAM requirements don’t scale with the raw performance of the card; they scale with work per frame, and VR necessarily has to keep work per frame lower to hit the frametimes it needs.
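
      (For a rough sense of scale, here’s a back-of-the-envelope estimate of the render-target footprint, using assumed Vive-class numbers; purely an illustration, not figures from any vendor:)

          # Per-eye render targets, assuming 1080x1200 per eye, 1.4x supersampling,
          # and 4 bytes of color + 4 bytes of depth per pixel (all assumed figures).
          w, h, ss = 1080, 1200, 1.4
          bytes_per_pixel = 4 + 4
          per_eye_bytes = int(w * ss) * int(h * ss) * bytes_per_pixel
          print(2 * per_eye_bytes / 2 ** 20)   # ~39 MiB for both eyes -- tiny next to texture data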

        • _ppi
        • 4 years ago

        If 4GB is good enough to run at 60 fps, then it probably means 4GB is enough memory; otherwise you would not hit 60 fps without spikes and other issues. There is absolutely no reason why memory usage should go up when the card is churning frames out at a faster pace.

        The Fury X’s trouble hitting 90 fps would more likely lie in its relatively low number of ROPs and texture units.

          • synthtel2
          • 4 years ago

          ROPs, absolutely, but it’s got more texture filtering power than any other chip in existence (outside of R&D departments, anyway). 😉 [1]

          ROPs are one of the things you can scale back load on more easily to hit a frametime target (by turning down MSAA and shadows, particularly). I would be more concerned about the draw call and geometry processing cost. If you plot framerates of different cards at different resolutions, AMD is easily winning on per-pixel time, but Nvidia is easily winning on the constant per-frame factor. It’s a lot easier to scale back (with in-game settings) on costs associated with one pixel than those that aren’t.

          [1] correction: it has easily more int8 texture filtering power than any other chip you can buy. FP16 loses to most of Nvidia’s mid-high end. Most stuff that games (especially VR ones) will be throwing at TMUs is int8 tho. Most exceptions would be for fairly niche cases like running SMAA on an HDR buffer (SMAA tends to be after the conversion back to LDR).

          • synthtel2
          • 4 years ago

          Ah, I reread it and I think I get what you’re saying. I was saying 90 fps should be less stressful on VRAM than 60, but my phrasing was backwards from the usual, so that was me being unclear.

      • Mr Bill
      • 4 years ago

      This is not crossfire. Isn’t it 4GB (or more for HBM2) per GPU? That’s at least 8GB or more on the same video card. The Radeon R9 295×2 had plenty of performance with 4GB per GPU.

        • auxy
        • 4 years ago

        It is crossfire. How do you think the 295×2 worked? It was also crossfire. It’s 4GB per GPU, which means 4GB total.

          • Mr Bill
          • 4 years ago

            Maybe you’re right. I was under the impression that the PLX chip on these dual-GPU cards allowed both GPUs to reach into each other’s screen/memory space to make it seamless, and that 4GB per GPU is still 8GB total. In that first 295×2 review, each GPU (and its memory) only painted half the screen. But that is still 8GB for the whole screen space.

            • Goty
            • 4 years ago

            Each GPU must store all of the information required to draw the entire screen, so each pool of memory is an exact copy of the other, meaning there is effectively 4GB of memory available to store the assets required to draw the entire screen. Even in split-screen rendering, each GPU must store ALL of the data, not just the half that will be on the side of the screen that GPU is rendering. Crossfire is a doubling of the rendering resources only, not a doubling of all resources.
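
            (Put another way, mirrored memory pools don’t stack; a trivial sketch of the accounting, purely illustrative:)

                # With AFR/SFR-style CrossFire, each GPU keeps a full copy of the assets,
                # so usable capacity is the per-GPU amount, not the sum.
                def usable_vram_gb(per_gpu_gb, num_gpus, mirrored=True):
                    return per_gpu_gb if mirrored else per_gpu_gb * num_gpus

                print(usable_vram_gb(4, 2))   # 4 -- a dual-Fiji card still behaves like a 4GB card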

            • Airmantharp
            • 4 years ago

            To add: while the GPUs are connected via high-speed buses, those buses are still orders of magnitude slower than the speed at which each GPU can access its own memory pool.

            Put simply: one GPU could not operate anywhere near full speed while rendering with resources stored in another GPU’s memory.

            One possible future exception would be if GPU makers were to enlarge the interposers currently used for HBM so that two GPUs and their respective memory could be co-located on the same interposer, allowing for much higher-bandwidth buses between them.

            • Mr Bill
            • 4 years ago

            Thanks Goty, Airman, auxy, for explaining in detail.
