Rumor: Radeon RX Vega could sport 64 CUs at 1200 MHz

We're probably not all that far away from the Radeon RX Vega launch, which AMD has slated for the first half of this year. Supplies of Fiji-powered Radeon graphics cards appear to be drying up at online retailers, and a new set of information turned up by the inveterate leakers at VideoCardz suggests that certain implementations of Vega silicon could be in testing now.

The site noticed that a new AMD graphics chip has shown up in the CompuBench database of OpenCL and CUDA performance numbers. This device, called AMD 687F:C1, apparently shares an identifier with earlier Vega products that AMD has shown in the past. CompuBench collects a wealth of information about the device being benchmarked, including a reported maximum clock frequency and a maximum number of OpenCL compute units, among other details.

Source: CompuBench

In the case of this purported Vega silicon, CompuBench says the device reports 64 OpenCL compute units running at possible maximum clock frequencies of 1000 MHz or 1200 MHz. If a Vega NCU continues the GCN tradition of having 64 stream processors per unit, that would indicate a 4096-SP chip with 8.2 TFLOPS of raw computing performance at 1000 MHz, or 9.8 TFLOPS at 1200 MHz.
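
The arithmetic behind those figures is straightforward. Here's a quick back-of-the-envelope sketch; the helper name is ours, and the 64-SPs-per-CU figure is an assumption carried over from prior GCN parts:

```python
# Peak FP32 throughput for a GCN-style GPU: each stream processor retires one
# fused multiply-add (counted as two ops) per clock.
def fp32_tflops(compute_units, clock_mhz, sps_per_cu=64):
    ops_per_clock = compute_units * sps_per_cu * 2  # FMA = 2 ops
    return ops_per_clock * clock_mhz * 1e6 / 1e12

print(fp32_tflops(64, 1000))  # ~8.2 TFLOPS
print(fp32_tflops(64, 1200))  # ~9.8 TFLOPS
```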

By our calculations, those rough figures put the chip above the GTX 1070's 6.5 TFLOPS, and the 1200-MHz number would edge past the GTX 1080's 8.9 TFLOPS. While TFLOPS aren't a perfect approximation of delivered performance by any means, they do mesh with my impressions of how an RX Vega performs in the forms AMD has demonstrated thus far.

If these figures are accurate, we're also curious what they mean about the Radeon Instinct MI25 accelerator's performance potential. I guessed that card could deliver 25 TFLOPS at a 1500 MHz clock speed—2x its single-precision throughput—using Vega's new packed math support. Assuming my guess is correct, perhaps Instinct cards are getting the best of Vega silicon to hit those high clocks at reasonable power levels. Whatever the case may be, we won't know until the Radeon RX Vega actually hits the streets.
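
That back-of-the-envelope guess works out as follows, under the rumored (not confirmed) 4096-SP, 1500-MHz configuration:

```python
# Vega's packed math runs two FP16 operations in the slot of one FP32 operation,
# doubling peak half-precision throughput.
sps = 4096
clock_hz = 1.5e9                           # rumored ~1500 MHz
fp32_tflops = sps * 2 * clock_hz / 1e12    # FMA = 2 ops per SP per clock
fp16_tflops = fp32_tflops * 2              # packed math doubles the FP16 rate
print(fp32_tflops, fp16_tflops)            # ~12.3 and ~24.6 TFLOPS
```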

Comments closed
    • playtech1
    • 3 years ago

    I reckon a cheap-ish AMD card that can keep pace with a 1080 will look like really good value for those who might pair it with a Freesync monitor.

    The G-Sync equivalents have a major premium and quite limited inputs. I felt quite sore to fork out as much as I had to in order to pair a G-Sync monitor with my nVidia card.

    But if Ryzen and Vega end up as also-rans to Intel’s and nVidia’s best then this AMD renaissance might be short-lived.

    • Chrispy_
    • 3 years ago

    So Vega is 300MHz slower than rumoured and a good 150MHz slower than existing Polaris yields.

      • NoOne ButMe
      • 3 years ago

      MI25 is already confirmed… which means a minimum clock of 1525 MHz is confirmed for professional cards.

      This is likely an ES, like the Ryzen samples that were floating around at 2.8-3.2 GHz.

      Unless AMD was dumb enough to massively mislead investors about the capabilities they would provide cloud companies, which seems unlikely to me.
      Possible, but very unlikely.

        • Chrispy_
        • 3 years ago

        I hope this is an ES.
        Jeff needs to jump off the rumour bandwagon, especially if it contradicts already-confirmed product announcements.

    • BaronMatrix
    • 3 years ago

    Vega has to be the most secretive CPU or GPU I can remember… AMD has said nothing about CUs or SP counts… Even the rumor mill is grasping for straws… Just looking at the move to 14nm and the maturity of it, I see more than 4096/64… Maybe

    With the naming I can assume we’ll see Vega, Vega X and Vega Nano (Fury Nano is an Instinct card)…

    Those could even replace the Polaris and Nano cards (MI6, MI8) with perhaps double the FLOPS…

    I hope they have a water-cooled version since I’m holding off on my upgrade for Vega… Thought about just getting a Ti but I want to see Infinity, etc…

      • jts888
      • 3 years ago

      They’ve already released slides saying each NCU does 128 fp32 ops, 256 fp16 ops, or 512 int8 ops per clock.
      Given that it’s industry standard practice to count adds and multiplies separately in mul-add units like GPU ALUs, it’s safe to assume the implied 64 ALUs/CU.

      Similarly, the slides for the Instinct MI25 listed the card as having 64 CUs and ~25 TFLOPS for fp16, which again indirectly states 4096 ALUs and ~1525 MHz clocks for that part.
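
Those figures can be run backward to recover the implied clock; this sketch uses only the numbers from the comment above:

```python
# 64 CUs at 256 fp16 ops per CU per clock, so a ~25 TFLOPS fp16 rating
# pins down the clock speed.
fp16_ops_per_clock = 64 * 256                     # 16,384 ops per clock
implied_clock_mhz = 25e12 / fp16_ops_per_clock / 1e6
print(round(implied_clock_mhz))                   # ~1526 MHz
```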

      It’s all the other implementation details they’ve kept close to their chest, including the Primitive Shaders, HB Cache Controller, Binning Rasterizer, and who knows what else.

        • BaronMatrix
        • 3 years ago

        So, in your weird way you agree that for something due in May it has no specifics that will really determine FLOPS…

          • ImSpartacus
          • 3 years ago

          We have had flops rumors for quite a while.

          The rumored config is 4096 SPs running at ~1500 MHz, with 2 stacks of HBM2 (8 GB total) providing 512 GB/s of bandwidth. All in, it’s 12 TFLOPS.

          [url<]https://videocardz.com/65521/amd-vega-10-and-vega-20-slides-revealed[/url<] The leaks in this article only serve to confirm those existing rumors.

      • NoOne ButMe
      • 3 years ago

      They are spending lots of transistors/area on supporting packed math (more data path), FP64 at 1/2 rate I think (more data path), and INT8 (more data path), and on fixing weak areas of Fiji while doing a complete rework of parts of GCN.

      They’re also cutting two HBM2 memory controllers, but that’s <10 mm^2.

      Also, the title could use a little correction, as 1500 MHz+ operation has already been confirmed by the numbers on the Pro version.

        • ImSpartacus
        • 3 years ago

        Vega 10 is rumored to have 1/16 dp.

        You have to wait for the 7nm Vega 20 shrink to get 1/2.

        This is a months old rumor: [url<]https://videocardz.com/65521/amd-vega-10-and-vega-20-slides-revealed[/url<]

          • NoOne ButMe
          • 3 years ago

          My bad, thanks.

    • jts888
    • 3 years ago

    We already know that the fanless enterprise Instinct card will be ~12.5 TFLOPS in fp32, or ~1525 MHz, so 1200 MHz is about as real a final consumer clock as the 2.8 GHz Zen ESs were, IMO.

    • Forge
    • 3 years ago

    So I’ll be getting my high-end version of GTX 1080 traded out for the base model GTX 1080 Ti soonish, and I’m seriously thinking about chucking the NIB 1080 Ti onto eBay.

    If the Team-formerly-IDed-as-Red can get Vega out in the next few months, and it’s competitive, I’ll pay attention. If it’s actually good and continues the new and improved FOSS support, I’ll give them my money gladly.

    Get me to come back, AMD. I’ll meet you halfway. I haven’t run an ATI GPU since they were ATI GPUs, but I’m open to new experiences in my gaming PC.

      • DoomGuy64
      • 3 years ago

      Competitive vs what? I doubt it’ll beat the TI if it’s just an upgrade of the Fury. It could beat the 1080, but 64 ROPs vs 88 is an insurmountable bottleneck.

      Don’t get me wrong, 1080 level performance is pretty good, but it is no way a game changer unless you are specifically looking for a reasonable high end card from AMD. In that case, Vega will definitely be the best card AMD offers, because right now they are not competing in the high end at all.

      I doubt Vega will be a disappointment, since it has a lot of efficiency gains, as well as massively better drivers compared to older versions. It just isn’t going to beat the TI, especially since Nvidia finally made some improvements to their dx12/Vulkan support.

      I might be interested in the slower non-X version of Vega, since that would match my freesync monitor pretty well. Nvidia could have gotten a 1070 purchase out of me if they supported freesync, but since they don’t I’ll be waiting for Vega.

        • mesyn191
        • 3 years ago

        It won’t beat the 1080Ti but it might beat a 1080 while selling for less which is a big deal.

        The bang for buck as a game changer is what you’re not considering.

          • DoomGuy64
          • 3 years ago

            I did consider that; it’s the #1 reason to go AMD. What I don’t get is why anyone would buy the 1080 Ti, sell it, then switch to Vega. Is this some upgrade program where you get subsidized prices?

          I suppose you could make out doing that, under that scenario. Otherwise, Jackie Chan is confused.

            • mesyn191
            • 3 years ago

            I see no consideration of that in your post.

            And yeah it’d be stupid in nearly all cases for someone to buy a 1080Ti and sell it for a lower performing Vega but that isn’t the situation AMD needs to get money or market share.

            Tooooons of people would like 1080-like performance but can’t afford to spend $500+ for one. That is the market AMD needs to get and could get with Vega.

            Remember, the market past $200 shrinks rapidly, and past $400 is down right tiny. Everyone focuses on the top end and there is a halo effect on lower end card sales from having the “best” video card but realistically that doesn’t matter to most.

            • NoOne ButMe
            • 3 years ago

            It used to. The lower end of the market has been shrinking, and the higher end growing for a while.

            • Voldenuit
            • 3 years ago

            [quote<]Remember, the market past $200 shrinks rapidly, and past $400 is down right tiny. Everyone focuses on the top end and there is a halo effect on lower end card sales from having the "best" video card but realistically that doesn't matter to most.[/quote<]

            The 970 is the single most popular card on Steam, and the 1070, in fifth place, [url=http://store.steampowered.com/hwsurvey/videocard/<]is still 3.5x more popular than the RX 480[/url<]. Not only are these cards 'popular' (if still only a fraction of the integrated market), the margins are a lot better than on $100-200 cards. It's a segment that AMD literally can't afford to ignore, and Vega can't come soon enough for them or for consumers.

            While we were stuck on 1080p and last-gen console ports for the past decade, it was fine to ignore your GPU, because most games ran fine at 1080p60 even with a 7870 or 660Ti, but with 144+ Hz displays and 4K/2.5K coupled with the new consoles, I feel that the demand for higher-end cards has been pent up.

            • mcarson09
            • 3 years ago

            I’d rather not wait months for proper driver support when new games come out. Ryzen was a letdown (it can’t even beat Broadwell-E), AMD is late to the party again (just like it was with the 290X and Fury X), and Nvidia was even dragging its butt with the 1080 Ti. The 1080 Ti offers double the performance of my 980 Ti now.

    • albundy
    • 3 years ago

    fingerprints will kill that shiny shine!

    • chuckula
    • 3 years ago

    The numbers are definitely suspect but that picture of Raj holding Vega says this: If that’s actually Vega, it’s a big chip (definitely bigger than the GP102 that Nvidia just re-launched with the GTX-1080Ti).

    AMD didn’t produce a GPU with a die that big & HBM just to compete with last year’s GTX-1080.

    At least I hope they didn’t.

      • AnotherReader
      • 3 years ago

      I agree. Since the 2900 XT, AMD has shied away from bigger dies unless they are competitive from a performance perspective.

    • not@home
    • 3 years ago

    Nothing to see here. AMD intentionally released this info, in a backward yet assuredly intentional way, to increase hype and to draw some attention away from Nvidia’s recent release. The numbers will be very low, compared to release hardware, as this is an ES that is underclocked and may even have some SPs disabled due to defects and/or binning. The numbers will not relate to final shipping products in any way.

    • dharris
    • 3 years ago

    I just recently bought a Sapphire Fury Nitro. Is it necessary to upgrade to Vega later?

      • chuckula
      • 3 years ago

      Absolutely.

      Be sure to check the expiration date on the bottom of your card to make sure your Vega order is placed in plenty of time. We can grant you an extension if you preorder promptly, so please don’t think that it’s a good idea to wait for Vega to actually launch before your purchase is made.

      — AMD marketing.

      • DoomGuy64
      • 3 years ago

      Only if you want more than 4GB vram, which is probably the difference between ultra and high in games that support it.

      Performance wise, the Fury still has legs and potential for driver improvement. Well over double the capability of consoles, which use AMD chips. The Fury isn’t going anywhere. Vega is merely a luxury upgrade.

      Upgrading to Vega from a Fury is like upgrading to a TI from the 1080.

      • Kretschmer
      • 3 years ago

      It depends on the resolution of your monitor and target frame rate. A fury should be capable of hitting 2560x1440x75-100FPS in many games as long as you stick to reasonable settings. To comfortably run at 3440x1440x100FPS or attempt 4Kx60FPS, you’ll want a 1080Ti or possibly Vega.

      • christos_thski
      • 3 years ago

      I just bought the exact same card. You’re worrying too much, in my humble opinion; it’s a great card, and I expect it to last a long time. Not to mention it’s selling at some insane prices right now (I bought it from French Amazon on offer for 250 euros, about 266 dollars!). You might need a Vega for 4K later on, but the Fury was never the ideal card for that resolution anyway. Maybe I’m overoptimistic, but I expect to be playing triple-A titles without issue on this card until the PlayStation 5 or so.

    • Meadows
    • 3 years ago

    That chart is so green that for a second I thought it had come from NVidia.

    • cpucrust
    • 3 years ago

    I for one, am willing to try this Grand Vega, even though it will require some projection and bravia.

    • brucethemoose
    • 3 years ago

    “Equivalent” GCN cards have beaten the snot out of Nvidia cards in the OpenCL tests I’ve tried.

    So, matching a Maxwell Titan X in an OpenCL test is not good at all… Then again it’s an ES chip, so it doesn’t tell us anything about the performance of release Vega.

      • Klimax
      • 3 years ago

      That’s a fairly temporary situation. OpenCL support in Nvidia’s drivers is under active development (see AnandTech’s recent review). It won’t last long.

    • ermo
    • 3 years ago

    Vega’s hand better be a royal flush.

    • AnotherReader
    • 3 years ago

    Jeff, I would, respectfully, disagree with your forecast. This is probably similar to the engineering samples for [url=https://techreport.com/news/30514/rumor-amd-zen-engineering-samples-leaked-and-benchmarked<]Zen that were seen late last year[/url<]. Those were clocked as low as 2.8 GHz. The MI25 sets down a known figure for the fp16 rate of Vega's most potent variant. Keep in mind that the Pro cards are usually clocked lower than the consumer cards. As an example, the FirePro W9100 was clocked at [url=https://techreport.com/news/29993/amd-slaps-32gb-of-ram-on-its-firepro-w9100-graphics-card<]930 MHz[/url<] while the R9 290X reached 1000 MHz for reference cards.

    • cmrcmk
    • 3 years ago

    AMD hasn’t been this exciting in years!

    • Neutronbeam
    • 3 years ago

    I dunno; those details seem a little Vega to me.

    • Kretschmer
    • 3 years ago

    Rumor: AMD Vega will arrive in 2017, all ten high-end buyers without Pascal interested.

      • MrDweezil
      • 3 years ago

      I’m still here with my 970 and $300 waiting for my upgrade.

        • DancinJack
        • 3 years ago

        You might wanna save another 300 if you plan on buying Vega…

          • MrDweezil
          • 3 years ago

          Its fine, I’ll just keep waiting until something shows up.

        • Kretschmer
        • 3 years ago

        Hopefully used GTX 1070s fall into that range for you, soon. 🙂

    • Kougar
    • 3 years ago

    [quote<] If a Vega NCU continues the GCN tradition of having 64 stream processors per unit, that would indicate a 4096-SP chip[/quote<] This is what I don't understand. The Fury X is already a 4096-SP chip with a 1050 MHz base clock. If the only thing they did was die-shrink it and add 200 MHz, then it doesn't sound like much of an upgrade.

      • Jeff Kampman
      • 3 years ago

      Fiji and Vega are much different architectures.

        • cmrcmk
        • 3 years ago

        Do we have any reliable info on how much better the new arch is? I’m sure it’s not an Excavator->Ryzen kind of bump, but if it’s even 10-20% that’d be great. I also think I remember there being a bit of evidence that Fury X was more often memory constrained than CU constrained so they may have focused on balancing those better.

          • Jeff Kampman
          • 3 years ago

          No reliable performance info yet.

            • VincentHanna
            • 3 years ago

            We know that AMD was too embarrassed to launch it at their launch party last week.

            That’s really all you need to know.

          • Rza79
          • 3 years ago

          Anand has an overview:
          [url<]http://www.anandtech.com/show/11002/the-amd-vega-gpu-architecture-teaser[/url<]

          • the
          • 3 years ago

          From what has been disclosed so far, the more analogous generational change would be GTX 770 -> GTX 980 on the nVidia side of things. I say this because of the [url=https://techreport.com/review/31224/the-curtain-comes-up-on-amd-vega-architecture/3<]more tile-based approach[/url<] Vega is taking. nVidia took a [url=http://www.realworldtech.com/tile-based-rasterization-nvidia-gpus/<]similar leap with the Maxwell generation of GPUs[/url<].

          Just looking at raw changes in clock speed and ALU count, you'd expect a ~55% increase in performance, but the GTX 980 is often over twice as fast as the GTX 770 due to changes in the architecture. This can be seen in the benchmarks referenced in the article.

          The main thing AMD will have going this time around, per the above leak, is just an increase in architectural efficiency and a minor change in clock speed. The problem is that a side-by-side comparison with Fiji and its 4096 ALUs doesn't show much of a speed increase outside of raw clocks. That's not a good showing for AMD, considering the massive architectural changes being touted for Vega.

            • Airmantharp
            • 3 years ago

            Hopefully Vega is more than the sum of its synthetically measured parts- hopefully.

          • mesyn191
          • 3 years ago

          Nothing reliable. Some of the rumors suggest around 20% more performance at the same clock (edit: vs Polaris, not Fiji, against Fiji the improvements will be huge) for most things while also improving clock speed some and power efficiency a bit too.

          It will still be a GCN derivative so it’ll still use more power than Pascal in general.

            • NoOne ButMe
            • 3 years ago

            Polaris perf/watt is close behind Pascal at each architecture’s “sweet spot”. Pascal’s design allows it to clock much higher at this point, is all.

            Polaris 10/11 are just 8/4 CUs short of being able to keep their clocks in that spot while being performance-competitive.

        • Kougar
        • 3 years ago

        Okay that’s reassuring! Thanks for the reply

        It just seemed strange to die-shrink but not increase the SP’s to take advantage, so in that case it sounds like a major redesign. Looking forward to reading the details!

          • DrDominodog51
          • 3 years ago

          Vega probably is a more balanced chip design than the Fury which will boost performance regardless of architectural changes.

            • Kougar
            • 3 years ago

            No one’s disputing it’s a more balanced and refined chip. But the question is, is it enough to let it easily play against the cheaper 1080.

            Given that the execution hardware count hasn’t increased, I still can’t see it competing against the 1080 Ti, which would be a real shame given its HBM2 advantage over GDDR5X. What is the point of utilizing cutting-edge memory technologies (i.e., AMD co-investing in HBM 1-3 with SK Hynix) if they will never take the performance crown with them?

      • tipoo
      • 3 years ago

      GCN 1.2 (Fiji) to Polaris alone I think was worth something like 30-40% more performance per core per clock iirc, and then compound that with whatever Vega is over Polaris. Even adding a modest amount for Vega, the same core config would still be markedly faster.

        • AnotherReader
        • 3 years ago

          At 1080p, across ComputerBase’s benchmark compendium, [url=http://www.computerbase.de/2016-08/amd-radeon-polaris-architektur-performance/2/#diagramm-doom-1920-1080<]Polaris is 7% faster than a similar Tonga clock for clock[/url<]. The greatest difference was seen in The Witcher 3: 15%.

          • DancinJack
          • 3 years ago

          lol sorry, that made me laugh. tipoo says 30-40 and evidence shows 7-15. good laugh for a friday.

          • tipoo
          • 3 years ago

          Whomp. Guess I was remembering GCN 1.0 to Polaris, which in TW3 was that 40%.

          • BobbinThreadbare
          • 3 years ago

          So if they just made Polaris version of the Fury X with the clock rate boost, you’d expect 25-30% boost in speeds. Based on the TR 1080 TI review, that would put it pretty close to a 1080 vanilla.
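
Compounding the two estimates in the comments above can be sketched like this; the 7-15% clock-for-clock gain and the 1200 MHz clock are the commenters' and leak's figures, not measurements:

```python
# Combine the ~7-15% clock-for-clock gain (Tonga -> Polaris) with the clock
# bump from the Fury X's 1050 MHz to the leaked 1200 MHz.
ipc_gain = (1.07, 1.15)
clock_gain = 1200 / 1050                    # ~14% higher clock
combined = [round((g * clock_gain - 1) * 100) for g in ipc_gain]
print(combined)  # overall gain of roughly 22-31%
```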

      • DoomGuy64
      • 3 years ago

      Probably an accurate prediction. I think Vega will just be Fury 2.0, may beat the 1080 but not the TI, and most of the improvements will be in dx12, not dx11.

      That said, as long as nvidia keeps pushing gsync, it’ll still be a good option especially with the improvements over Fiji.

      • ImSpartacus
      • 3 years ago

      I agree that it’s moderately alarming at first, but remember:

      • New architecture with some excess performance there, all else being equal.
      • Fiji technically had 64 CUs, but it was a painfully unbalanced chip that couldn’t properly utilize all of those CUs for all they were worth. Vega is much more balanced.
      • GCN is limited to 4 shader engines ([url=http://www.anandtech.com/show/9390/the-amd-radeon-r9-fury-x-review/4<]confirmed for Fiji[/url<]), so this is actually as big as an AMD chip can get without changes to how GCN works.

      On the latter point, one thing to note is that AMD is rumored to be pursuing a multi-chip strategy in the near future. Therefore, it might not be as much of a disadvantage as it may seem.

      And actually, the 64 CU rumor is really, really old news. It’s been “confirmed” several times in several different ways.

    • Major-Failure
    • 3 years ago

    For anyone else wondering when Vega “hits the streets”:

    “are expected to ship in the second quarter of 2017.” That is, between April 1 and June 30.

      • chuckula
      • 3 years ago

      You just had to put April 1 in that post….

    • JosiahBradley
    • 3 years ago

    I highly doubt Vega will only match a Fury X in performance. That would be rather idiotic from a business standpoint and Lisa Su is a smart woman.

      • Jeff Kampman
      • 3 years ago

      Running all-out, it will almost certainly do better. But that’s the top of the lineup, not the middle or bottom.

      AMD’s Radeon strategy at this point appears to be more about bringing certain classes of performance to lower price points than about pushing the envelope. The RX 480 has been perfectly consistent with this strategy: you can now get R9 290/290X-class performance for about $200 or less, whereas we used to pay over $300 for that privilege until just recently. Vega will probably lay the groundwork for future performance advances and a move into the HPC market, but I doubt it’ll break too much from the Polaris mission this time around.

        • JosiahBradley
        • 3 years ago

        I have 2 290Xs that completely destroy 2 480s at the same price I paid 2 years ago. There has been zero uptick from the Radeon side of the house. A Fury X now costs $300 and a Fury is $250, meaning that if Vega performs the same, it would need to be priced the same as a 480, which makes no sense at all. None of this adds up after they showed the MI25 being a 12.5 TFLOP card.

        I think this is the same thing that happened to Zen. Zen had engineering samples leaked at 3 GHz, and when it shipped it could hit 4 GHz, and performance was night and day. Not saying you’re wrong in the analysis, just wondering why the CPU side of the house is pushing so hard while Radeon seems to be completely flattening out like Intel CPUs.

    • geniekid
    • 3 years ago

    I thought you were making a snarky comment about leakers being cowardly until I realized you wrote “inveterate” instead of “invertebrate”.

      • CuttinHobo
      • 3 years ago

      Kudos to Jeff for making me look up a word! 🙂

    • chuckula
    • 3 years ago

    I gotta fever. And the only Rx is more Compute Units!
