Rumor: Radeon RX Vega benched in 3DMark Fire Strike

To the best of our knowledge, AMD didn't send out any review units of its Vega Frontier Edition video cards. Performance figures still trickled out despite the card's four-figure price tag, and its numbers in gaming benchmarks aren't that impressive from a value standpoint next to the $500 GeForce GTX 1080 and the $700 GTX 1080 Ti.

AMD has stated that the Vega FE's gaming chops aren't necessarily comparable to those of the upcoming Radeon RX Vega cards, though, and some new information may show the company has a point. Earlier today, the rumor mongers over at Videocardz.com probed 3DMark's Fire Strike database and found figures the site says were submitted by AMD employee Jason Evangelho. The GPU scores in question would put the RX Vega ahead of the GeForce GTX 1070 and neck-and-neck with the GTX 1080—though the database entries have since been removed.

Three scores were submitted, two at Radeon RX Vega's rumored 1630 MHz and one at a lower 1536 MHz clock speed. All three entries display a clock rate of 945 MHz for the 8 GB of HBM2 memory. The reported graphics scores for the two higher-clocked entries were 22,330 and 22,291, just shy of an MSI GeForce GTX 1080 Gaming X's tally of 22,585. That MSI card is likely faster than Nvidia's reference GeForce GTX 1080 Founders Edition thanks to its higher factory clock speed and more capable cooler. The RX Vega clocked at 1536 MHz reportedly achieved a score of 20,949, a substantially higher figure than the sample GeForce GTX 1070 score of 18,561. The database entries include no data about the number of stream processors in the RX Vega GPU, though we imagine the card in question packs the same armada of 4096 SPs as the Vega FE.

These numbers bear more weight than other random engineering benchmark leaks because of their apparent source. Videocardz's screen captures indicate that the results were submitted by "TheGameTechnician." That handle has historically been used by Jason Evangelho, a former tech writer and now a marketing specialist at AMD.

If the figures above are correct, they'd corroborate our own predictions that RX Vega will land with performance comparable to Nvidia's now-14-month-old GeForce GTX 1080. Unfortunately, that performance parity is unlikely to extend to power consumption, seeing as the Vega Frontier cards have TDPs of 300 W and up. As always with CPUs and graphics cards, the value proposition of Radeon RX Vega will hinge on its price, and AMD has been quite tight-lipped on that subject.

Comments closed
    • Chrispy_
    • 2 years ago

    I’ll wait for proper reviews of tangible cards that I can buy, but I think people expecting a direct match for a 1080 are setting themselves up for disappointment.

    For a start, AMD haven’t managed to get their 14nm GloFo parts to clock as high as, or be as efficient as, the TSMC 16nm and Samsung 14nm parts Nvidia uses. Even if AMD made a direct competitor for the 1080, it would be slower and hotter based on the fabrication process alone.

    Secondly, and this is probably why AMD launched Vega as a compute card first, the balance of resources in Vega is shifted much further toward the general-purpose compute side than the gaming side. Nvidia’s gaming cards all completely suck at half-precision throughput, whereas Vega is:

    - 125x faster than a $1200 Titan XP for half-precision compute (machine learning)
    - 2x faster than a $1200 Titan XP for double-precision compute (dataset analysis, scientific simulations)
    - 10% faster than a $1200 Titan XP for single-precision compute (rapid prototyping, physics/weather/fluid simulation)

    I mean, yeah - it's going to be faster than a 1070 in games, but the Pascal architecture is really only of any value in games and single-precision compute, which is a limited list of scenarios. The NCU architecture of Vega and the chip's design sacrifice die area and resources that would otherwise be gaming-specific to make it a more useful general-purpose processor. If AMD had far more resources, perhaps they'd do what Nvidia does and make two very different product stacks: the GeForce/Titan gaming/rendering line and the Tesla compute line.
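
For a rough sense of where those ratios come from, here is a back-of-the-envelope sketch in Python. The shader counts, boost clocks, and per-precision rate fractions are approximate public figures rather than anything from this article, so treat the output as ballpark theoretical peaks, not measured results:

```python
# Rough theoretical peak-throughput comparison behind the ratios above.
# Assumed specs: Vega FE ~4096 SPs at ~1.6 GHz (FP16 = 2x FP32, FP64 = 1/16 FP32);
# Titan Xp ~3840 shaders at ~1.58 GHz (FP16 = 1/64 FP32, FP64 = 1/32 FP32).

def peak_tflops(shaders, clock_ghz, rate_vs_fp32):
    """Peak rate: 2 FLOPs (one FMA) per shader per clock, scaled by the precision rate."""
    return shaders * clock_ghz * 2 * rate_vs_fp32 / 1000

vega = {"fp16": peak_tflops(4096, 1.6, 2),
        "fp32": peak_tflops(4096, 1.6, 1),
        "fp64": peak_tflops(4096, 1.6, 1 / 16)}
titan_xp = {"fp16": peak_tflops(3840, 1.58, 1 / 64),
            "fp32": peak_tflops(3840, 1.58, 1),
            "fp64": peak_tflops(3840, 1.58, 1 / 32)}

for prec in ("fp16", "fp32", "fp64"):
    print(f"{prec}: Vega ~{vega[prec]:.1f} TFLOPS vs Titan Xp ~{titan_xp[prec]:.1f} TFLOPS "
          f"({vega[prec] / titan_xp[prec]:.1f}x)")
```

With those assumptions, the single-precision gap works out to roughly 10%, double precision to roughly 2x, and half precision to two orders of magnitude, which lands in the same ballpark as the comment's figures.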

    • rudimentary_lathe
    • 2 years ago

    Very disappointing power consumption figures if true. After all this time I would have expected AMD to have addressed this long-standing problem with their cards.

    • Star Brood
    • 2 years ago

    It’s too bad Vega is mega delayed and isn’t enough of an improvement over the Fury X. I’d like to hope Navi will be a complete overhaul for the positive.

    • ronch
    • 2 years ago

    I just want a more power-efficient midrange part. Say, 2048 stream processors at around 90 W. The RX 470 is a 120 W part, and AMD killed efficiency further with the RX 570 for not much performance gain.

      • Airmantharp
      • 2 years ago

      So buy Intel/Nvidia?

      😉

        • ronch
        • 2 years ago

        Nah. AMD or nuthin’!

        Seriously though, I might go NV for my next GPU. I’ve always stuck with AMD since ’04 but if they keep this up I’m going green.

      • Chrispy_
      • 2 years ago

      An RX 470 will do that at 1 GHz and 0.95 V. Just edit the clock and voltage sliders in the Crimson drivers.

      The RX 500 series is just the old RX 400 series, but overclocked and overvolted. Nobody is stopping you from underclocking – I ran a particularly good RX 480 at 1250 MHz/0.925 V for some time, with power draw likely under 100 W.
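
As a rough illustration of why that kind of undervolt can land under 100 W, here is a sketch that assumes dynamic power scales with frequency × voltage². The stock clock, voltage, and board-power figures are assumptions, and real cards also have static leakage, so this is ballpark only:

```python
# Back-of-the-envelope power scaling for an underclocked/undervolted card.
# Assumes dynamic power ~ frequency * voltage^2; ignores static leakage entirely.

def scaled_power(base_power_w, base_mhz, base_v, new_mhz, new_v):
    """Scale a known board power by the f * V^2 rule of thumb."""
    return base_power_w * (new_mhz / base_mhz) * (new_v / base_v) ** 2

# Assumed stock point for an RX 480: ~150 W board power at ~1266 MHz and ~1.15 V.
tuned = scaled_power(150, 1266, 1.15, 1250, 0.925)
print(f"Estimated board power at 1250 MHz / 0.925 V: ~{tuned:.0f} W")  # roughly 96 W
```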

      • Mat3
      • 2 years ago

      Why is 30 or so fewer watts so important?

    • mcarson09
    • 2 years ago

    This card should have been in the hands of end users months ago. I've already had my 1080 Ti for months, and Vega will not beat it. They can't even match its power efficiency. The GPU division needs to be reformed back into ATI. Worst of all, they seem to be able to release either a new CPU arch or a new GPU arch, but not handle both simultaneously. AMD is hurting competition on both the CPU and GPU fronts.

      • Krogoth
      • 2 years ago

      HBM2 supply issues, and it is possible that AMD waited until the whole Ethereum craze was over before launching it.

      Pascal seems good because Nvidia pushed what it could do on the 16nm process and built upon what made Maxwell an efficient architecture.

        • DavidC1
        • 2 years ago

        The Ethereum craze part has just one flaw.

        Vega FE only gets 37MH/s, which is only 30% better than RX 470/480/570/580. That’s piss poor for the rumored prices. This thing is entirely for gaming.

          • Krogoth
          • 2 years ago

          It still doesn’t stop miners from trying to grab them, though, or e-tailers from riding the Ethereum craze hype to drive up prices.

    • brucethemoose
    • 2 years ago

    Isn’t Vega like a 500mm^2 GPU?

    What the heck happened? That seems like a regression from Polaris.

      • Voldenuit
      • 2 years ago

      RX 480/580 is 232 mm^2 vs GP106 (1060) at 200 mm^2: +16% die size vs. the comparable Nvidia part.
      Vega is 484 mm^2 vs GP104 (1070/1080) at 314 mm^2: +54% die size, no doubt in part due to increased FP16 resources (2x FP32 rate) and the Infinity Fabric interconnect.
      GP102 (1080Ti) is 471 mm^2.

      Also, Vega needs an interposer and 2 stacks of HBM2, so I wouldn’t be surprised if it costs AMD more to manufacture than it costs nvidia to make a 1080 Ti.
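
For a rough sense of what those die sizes mean for cost, here is a sketch using the classic gross-dies-per-wafer approximation on a 300-mm wafer. It ignores yield, scribe lines, and edge exclusion, so the counts are illustrative only:

```python
import math

# Rough gross-die counts for the die sizes quoted above, assuming square dies
# on a 300 mm wafer. No yield model; purely illustrative.
WAFER_DIAMETER_MM = 300

def gross_dies_per_wafer(die_area_mm2):
    """Classic approximation: wafer area / die area, minus an edge-loss term."""
    side = math.sqrt(die_area_mm2)
    radius = WAFER_DIAMETER_MM / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * WAFER_DIAMETER_MM / (math.sqrt(2) * side))

for name, area in {"Polaris 10": 232, "GP106": 200, "Vega 10": 484,
                   "GP104": 314, "GP102": 471}.items():
    print(f"{name}: {area} mm^2 -> ~{gross_dies_per_wafer(area)} gross dies per wafer")

print(f"Vega 10 vs GP104 die area: +{(484 / 314 - 1) * 100:.0f}%")  # about +54%
```

Fewer candidate dies per wafer, plus the interposer and HBM2 stacks, is why the per-card cost comparison with GP102 above is plausible even before yield enters the picture.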

        • DavidC1
        • 2 years ago

        54% increase in die size can’t be explained by 2x FP16 and an Infinity Fabric interconnect. The latter should take minuscule space. Actually, both should take minimal space.

        The product is just messed up.

        Being more costly than the 1080 Ti is guaranteed with HBM2 stacks. More expensive to make, performs worse, and arrived a year later. It’s pretty much a disaster.

          • ronch
          • 2 years ago

          Can’t blame Raja. He’s been too caught up with other things (https://pbs.twimg.com/media/C4XrzmTVYAAjU5L.jpg) lately to focus on Vega.

    • the
    • 2 years ago

    Much like when Zen hit, benchmarks were all over the place in terms of what saw major improvements vs. mediocre improvements compared to Intel. I suspect that Vega will follow the same pattern, with some benchmarks showing very impressive gains that are far from universal. Of course, this is like predicting that the sun will rise in the east tomorrow. If not, and this is an indicator of how well Vega performs overall, then it is indeed a poor showing.

    AMD potentially has an ACE up their sleeve with Infinity Fabric support between Ryzen and Vega. If AMD can enable this link between the two chips, there is hope that CPU bound situations can be reduced in some scenarios (think 1080p and below resolutions).

      • chuckula
      • 2 years ago

      “AMD potentially has an ACE up their sleeve with Infinity Fabric support between Ryzen and Vega. If AMD can enable this link between the two chips, there is hope that CPU bound situations can be reduced in some scenarios (think 1080p and below resolutions).”

      Nice pun, but I’m not seeing anything magical about “infinity fabric,” considering it requires a big and very power-hungry Vega part to be slapped next to another big piece of Ryzen silicon (presumably 8 cores at best for a realistic setup) in a single package. That buys you some advantages over a PCIe link to a regular GPU... but not many.

      As for consumer-grade APUs, AMD was already using the memory controller to communicate on-die between the GPU and the CPU going all the way back to the first Llano chips, and Infinity Fabric just uses the memory controller to push data between CCX units on Ryzen... so it’s basically the same design they’ve used since 2011 with a marketing name.

        • jts888
        • 2 years ago

        Beyond just tiny dedicated GMI transceivers, Vega still has the potential to mux xGMI over its PCIe PHY lanes, which would let it speak in native IF coherency protocol over a PCIe slot to Ryzen/TR/Epyc, bypassing protocol translation and IOMMU overhead.

        It would only be ~20 GB/s at most (vs 16 GB/s for x16 PCIe 3.0), but it would allow reasonable utility as a physically split APU, heterogeneous memory access support and all.

          • the
          • 2 years ago

          What I’ve been seeing has been pointing toward a 25 GByte/s link using Infinity Fabric, which is approximately halfway between PCIe 3.0 and 4.0 in raw bandwidth. With the reduction in overhead, it should provide better performance than PCIe 4.0 unless the use case remains perpetually bandwidth-starved beyond 25 GByte/s.

            • jts888
            • 2 years ago

            The GMI transceivers (cross-package/inter-die) appear faster than the xGMI links (which share 12.5 Gb/s PHY lanes with PCIe), and the latter are the ones I was saying I expected would exist on Vega as well. I expect Zen2/3 to go to ~16.5 Gb/s PHYs for PCIe 4.0 and faster inter-socket xGMI, but GMI for TR/Epyc could clearly ramp up independently as well.

            I don’t expect that GMI or xGMI will ever get that close to theoretical bandwidth limits (e.g., 25 GB/s for 16 lanes * 12.5 Gb/s) since 64b/66b and 128b/130b physical codings add too much latency (5-10 ns at 12.5 Gb/s vs. 0.8 ns for 8b/10b).
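
A quick sanity check of the bandwidth figures in this sub-thread; the lane count and line codings here are assumptions for illustration, not confirmed Vega specs:

```python
# Usable link bandwidth after line-coding overhead, in GB/s per direction.
def link_gbytes_per_s(lanes, gbit_per_lane, payload_bits, total_bits):
    return lanes * gbit_per_lane * (payload_bits / total_bits) / 8

# PCIe 3.0 x16: 8 GT/s per lane with 128b/130b coding -> ~15.75 GB/s.
print(f"PCIe 3.0 x16:        {link_gbytes_per_s(16, 8.0, 128, 130):.2f} GB/s")

# Hypothetical xGMI over 16 lanes at 12.5 Gb/s with 64b/66b coding.
print(f"xGMI x16 (assumed):  {link_gbytes_per_s(16, 12.5, 64, 66):.2f} GB/s")

# Raw ceiling with no coding overhead -- the 25 GB/s figure quoted above.
print(f"Raw x16 @ 12.5 Gb/s: {link_gbytes_per_s(16, 12.5, 1, 1):.2f} GB/s")
```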

        • the
        • 2 years ago

        We’re mostly in agreement. Infinity Fabric is more about latency reduction than raw bandwidth, which is why I indicated that it’d be most helpful in CPU-bound scenarios. It won’t magically make the CPU faster, but it will make it easier to hit that peak mark.

      • psuedonymous
      • 2 years ago

      “AMD potentially has an ACE up their sleeve with Infinity Fabric support between Ryzen and Vega. If AMD can enable this link between the two chips, there is hope that CPU bound situations can be reduced in some scenarios (think 1080p and below resolutions).”

      Same issue as with NVLink: it requires a proprietary physical interconnect and support at both ends of the link. As Infinity Fabric uses the same driving electronics as PCIe (it can be thought of as a semi-custom extension over wider ganged PCIe lanes), if you try to maintain backward compatibility by using existing PCIe x16 slots, then you are limited to the same performance as PCIe x16 and gain nothing.

    • Kretschmer
    • 2 years ago

    A few years ago we were pillorying the AMD CPU division’s terrible Bulldozer parts while praising the competitive R9 2X0 GPUs. I would routinely express the belief that AMD should spin off or shut down its CPU anchor in an attempt to save the competitive GPU business.

    Now AMD’s Zen is competitive in limited scenarios and Vega appears to be on the ropes. Is this just a case of the company being too small to execute on both major product lines simultaneously? Did Nvidia leapfrog traditional GPU improvement “jumps” with Pascal? Is the current GlobalFoundries process poorly suited for GPUs? I’m curious about the root cause of these turnarounds.

      • chuckula
      • 2 years ago

      Good GPU or good CPU.

      Pick one!

        • ColeLT1
        • 2 years ago

        Both, but at 1-2 generations behind.

    • USAFTW
    • 2 years ago

    Really appreciate the thought and attention TR has been paying to their news posts recently. Keep it up!
    Vega seems to be another R600. Fermi, for all its faults, was the performance leader at the time, and in its fixed form (aka GTX 580/570) it was nearly as power efficient as Cayman. Vega is almost twice as power hungry as GP104 for the exact same performance.
    I’ll pass.

      • Krogoth
      • 2 years ago

      Fermi was not the performance leader until GF110 came out. The GF100 was a dud at launch that couldn’t beat an HD 5850 in gaming performance, let alone an HD 5870. It was only good at general compute.

        • USAFTW
        • 2 years ago

        That’s not what the TR review shows. Of course, there are some use cases where the 5870 came out ahead, but mostly the GTX 480 was solidly in the lead.
        https://techreport.com/review/18682/nvidia-geforce-gtx-480-and-470-graphics-processors/11
        http://www.anandtech.com/show/2977/nvidia-s-geforce-gtx-480-and-gtx-470-6-months-late-was-it-worth-the-wait-/9

        Overall though, it was quite terrible. I had a friend who got a 480 and got rid of it a few weeks later.

    • Jigar
    • 2 years ago

    It’s a turd, guys, no matter how you spin it.

      • DPete27
      • 2 years ago

      Pricing will determine that mostly.

      But yes, AMD is still producing caveman performance compared to Nvidia. If it’s anything like Polaris, Vega is probably clocked/volted higher than the silicon is comfortable at in order to match Nvidia performance tiers, and some simple undervolting can bring it much more in line with the competition in terms of power.

    • Unknown-Error
    • 2 years ago

    Looks like I have been way too harsh on the AMD CPU division. Zen may not have brought AMD back to Athlon 64 X2 status, but Zen HAS brought AMD back to respectability, and it did so with very limited resources. But the fracking GPU division? The performance of a GeForce GTX 1080 over a year after it launched? What is this, the new GPU version of Bulldozer?

    • ronch
    • 2 years ago

    I notice that AMD usually takes longer to come up with a product that almost matches up against the competition, and then makes up for it with pricing, thereby still making the product compelling. It’s giving 90% of the performance at ~50-70% of the price. While that doesn’t make their technical prowess the most respectable on Earth, it does make great business sense, and it’s good for us consumers. And don’t forget that they don’t have the R&D resources Intel and Nvidia do, so that’s saying something.

      • tipoo
      • 2 years ago

      I don’t know that it “makes great business sense” so much as it’s necessary given their necessarily lower R&D. It’s a self-sustaining cycle: launching later with similar performance at a lower price = lower margins = lower R&D.

        • frenchy2k1
        • 2 years ago

        Actually, it makes very little business sense, but they just can’t do better at the moment.
        Their latest quarter highlights how little business sense their strategy makes: they sell bigger pieces of silicon at a lower price, meaning they have lower margins and make less money.
        R&D is a one-time cost; margin is ongoing revenue. Trading a higher one-time cost for lower production costs lets you make more money if you can get the volume, and being first to market drives that volume.
        This is why Nvidia has very healthy margins and makes lots of money while AMD is barely surviving.

        Don’t get me wrong, their position is great for consumers, who get competition and great products at a lower price, but AMD is only a few failed products away from dying, and then that competition would disappear.

      • chuckula
      • 2 years ago

      “It’s giving 90% of the performance at ~50-70% of the price.”

      Wait, so you are saying that Vega, with all that HBM2 and all that silicon, is starting in a range of $285 to $399? What happened to the RX 580? Especially considering the real-world price of a tricked-out GTX 1080 is $570:
      https://www.amazon.com/ZOTAC-ZT-P10800C-10P-IceStorm-Wraparound-Ultra-wide/dp/B01GCAVRSU/ref=sr_1_5?ie=UTF8&qid=1501077811&sr=8-5&keywords=GTX-1080

    • Fonbu
    • 2 years ago

    I am wondering if their R&D budget will be recouped after this release. HBM2 seems like a pretty steep BOM item for a consumer card. As we know, RX Vega will come in XTX (CLC), XT (air-cooled), and XL (harvested die) variants, and possibly a 4GB variant of the XL. Many are hopeful for the day of the presentation!

    • JosiahBradley
    • 2 years ago

    Hype has already killed Vega. If it doesn’t create world peace at $99, people will cry failure.

      • flip-mode
      • 2 years ago

      Hey, I think people would be happy paying $599 for that.

      • chuckula
      • 2 years ago

      #PoorVolta

    • Krogoth
    • 2 years ago

    It sounds about right.

    Those who were expecting a 1080Ti killer are just plain delusional. A scaled-up Polaris design would have not done any better.

    I don’t expect that Volta will be that much of a leap in gaming performance over Pascal. Pascal already pushed the 16nm process to its limits. Volta is going back in a general-compute direction.

    The days of massive jumps in performance for each new generation of silicon have been over since GCN 1.0 and Kepler.

      • curtisb
      • 2 years ago

      “wouldn’t have not fare any more better either”

      That hurt my brain.

      • sreams
      • 2 years ago

      Nice edit, but still hurting brains everywhere.

      “would have not fare any more better either”

      Two more edits and you’ll be there.

        • DancinJack
        • 2 years ago

        lol

        • chuckula
        • 2 years ago

        Here’s the simplified version:

        Roses are red,
        Violets are blue.

        They don’t think it be like it is,
        But it do. (http://knowyourmeme.com/memes/they-don-t-think-it-be-like-it-is-but-it-do)

          • Mr Bill
          • 2 years ago

          +2

      • ronch
      • 2 years ago

      On the good side, it means GPU designs are very mature. Heck, even GCN 1.0 is pretty cool if you think about it.

      • ImSpartacus
      • 2 years ago

      Vega is supposed to have several architectural improvements, so it should be measurably better than a scaled up Polaris.

        • DPete27
        • 2 years ago

        And yet, even when AMD switches to tile-based rendering, they still can’t compete with Nvidia. I guess the reverse engineers at AMD still haven’t figured out the secret sauce recipe.

        • mcarson09
        • 2 years ago

        It’s still behind a 1080 ti.

      • BurntMyBacon
      • 2 years ago

      I was figuring on roughly 1080 performance (give or take), but hoping Vega would slot in just above it. Vega can’t really put much pricing pressure on the 1080Ti if it can’t beat the 1080. Depending on the state of the drivers for the cards that were tested, it is possible it could still beat the 1080, but I would be surprised if it beats it by much.

      That said, there is little reason to believe, given the specifications, that Vega will be a 1080 Ti killer. I also agree that a scaled-up Polaris design would not have done any better. Vega’s architectural improvements, combined with its ability to hit higher clocks than Polaris, make a theoretical “Big” Polaris unlikely to match Vega unless the chip were significantly larger.

      Edit: Clarity.

      • Leader952
      • 2 years ago

      “I don’t expect that Volta will be that much of a leap in gaming performance over Pascal. Pascal already pushed the 16nm process to its limits.”

      How soon we forget Maxwell. Maxwell was on the same 28nm process as Kepler, yet it was a vast improvement.

        • DavidC1
        • 2 years ago

        We should not forget that Maxwell’s enhancements may have been a one-time thing. The cache increases for power efficiency were a one-time thing; you can’t increase cache that much again in the next generation. The shader reorganization was a one-time thing.

        Traditionally, new process technologies basically took out the thinking required (very roughly speaking) to make a faster chip. You just put down more transistors, up to the limits of power and die area you set. New processes always saved power and were always faster, so you ended up with a superior product. You could have a somewhat sub-optimal architecture, but who cares? As long as you did better on the process side, it easily made up for architectural deficiencies.

        Processes are so late and difficult nowadays that about a decade ago architects started doing “transistor-level” optimizations. That’s basically a technical term for bringing some features of the yet-to-be-released next process to the current uarch. Look at the power gating introduced with Intel’s Nehalem: it made their 22nm process irrelevant, because Tri-Gate was supposed to reduce leakage significantly, but power gating means you’d be shutting off high-leakage circuits anyway.

      • mcarson09
      • 2 years ago

      Explain to me why the 1080 Ti does double the performance of the 980 Ti in The Witcher 3. I think the performance problem is just an AMD one, while both Intel and Nvidia wait for them to catch up on two fronts.

        • Krogoth
        • 2 years ago

        The Witcher 3 is shader-dependent in terms of performance (Hairworks and other eye-candy stuff). The 1080 Ti has a nice bump in shader units (3584 vs 2816), and it’s also clocked considerably higher than its 980 Ti predecessor (almost 1.5 GHz base/1.7 GHz boost versus 1 GHz base/1.2 GHz boost).

        Not much of a shock that the 1080 Ti ends up yielding a 50-100% gain in shader performance over the 980 Ti.
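
Using the shader counts and the rough boost clocks quoted in the comment above, the theoretical FP32 throughput gap works out close to that range:

```python
# Theoretical FP32 throughput from shader count and clock (2 FLOPs per shader per clock).
# The clocks are the approximate boost figures from the comment, not exact specifications.

def tflops(shaders, clock_ghz):
    return shaders * clock_ghz * 2 / 1000

gtx_980_ti = tflops(2816, 1.2)    # ~6.8 TFLOPS
gtx_1080_ti = tflops(3584, 1.7)   # ~12.2 TFLOPS
print(f"980 Ti: {gtx_980_ti:.1f} TFLOPS, 1080 Ti: {gtx_1080_ti:.1f} TFLOPS, "
      f"ratio: {gtx_1080_ti / gtx_980_ti:.2f}x")  # ~1.8x
```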

    • torquer
    • 2 years ago

    Honestly no one should really care as long as prices come down on the cards most people buy. Lower prices benefit all consumers regardless of which company you blindly follow and passionately defend against all logic and fact.

      • Klimax
      • 2 years ago

      If it can still fund R&D…

        • Chrispy_
        • 2 years ago

        I’d worry more if AMD wasn’t in the best financial position they’ve been in for a decade or more.

        People have been saying stuff like “no money for R&D” and “AMD can’t afford to keep competing” since 2007’s Phenom flopped and suffered the TLB bug. They’re still putting up a decent fight a decade later….

    • tsk
    • 2 years ago

    This isn’t looking too good. Maybe they should have just scaled up Polaris instead.

      • tay
      • 2 years ago

      14 months later and likely at 40% more power. Travesty.

        • Voldenuit
        • 2 years ago

        300W is 66% more power than 180W (GTX 1080).

          • djayjp
          • 2 years ago

          Yeah and 1.5 years later than when the 1080 launched and it has far worse power consumption. It really is pretty sad. ATI would’ve been better.

      • LostCat
      • 2 years ago

      Give me pricing, DX12 and Vulkan results and then I may or may not agree with you.

        • credible
        • 2 years ago

          This is where I stand: if it’s priced at or lower than a 1070, then it’s a no-brainer…for me, anyway.

          • DoomGuy64
          • 2 years ago

          It’s the only upgrade option for freesync users since Nvidia won’t support it, and 1080 level of performance is good enough for 99.9% of games out there. Hell, the 1070 is good enough, and I would have bought one myself except for the fact that Nvidia still won’t support freesync.

          Scenario 1: Nvidia doesn’t support freesync, I buy a Vega after price drops and cheaper model.
          Scenario 2: Nvidia supports freesync, I buy a 1070 and hope they’ll work on their dx12 drivers.

          It basically comes down to who supports freesync at the best price/perf point. Nvidia’s not even an option until they support it.

            • K-L-Waster
            • 2 years ago

            Would be interesting to see actual sales figures comparisons for G-Sync vs Freesync. I know there are more Freesync models available, but the monitor manufacturers keep introducing new G-Sync models too. They wouldn’t do that if they were losing money on the existing models, which suggests they are selling even with the premium price.

            Actual sales figures would make the picture clearer. I’ve looked, but so far my Google-fu isn’t up to the task.

            • DoomGuy64
            • 2 years ago

            G-Sync is selling because Nvidia has the most marketshare. However, the $200+ premium prices it out of the mid-range market, and the high end market only has so many users. That means manufacturers cannot afford to heavily compete in such a small niche, and the more G-Sync monitors are introduced, the more sales will drop as they steal marketshare from each other. There might only be one or two brands actually turning a decent profit with those things, considering how many people are available to buy them. This is why G-Sync doesn’t have the selection that freesync has.

            Freesync on the other hand has the opposite problem. You can sell lots of low end monitors, but the high end monitors won’t sell because AMD doesn’t have high end hardware. The reason why bigger monitors support it is more likely that it doesn’t hurt anything to include it, not that AMD has the market for it.

            • ChicagoDave
            • 2 years ago

            DoomGuy64 – You’ve got that exactly right.

            There’s probably a lot more Freesync monitors sold to date, but that’s just because there’s little to no cost increase for manufacturers to include it. In another year or two I bet 95% of monitors will support one of the two techs. All of the cheap monitors, and many of the mid and high end ones will have a Freesync option, since that’ll just be the default option. G-Sync will only be attached to medium to high end models due to the ~$200 additional expense.

            Plus I don’t think AMD benefits from a large user base of Freesync monitor owners the same way NVidia does. If someone gets Freesync thrown in on their monitor, they may or may not take advantage of it by buying an AMD GPU. Based on their total market share, I doubt that a large percentage do. However, if someone actually pays $150-$250 extra for a G-Sync monitor, they’ll almost certainly use NVidia cards for the rest of that display’s life.

            So AMD has the current and likely near future problem of being unable to compete with NVidia in raw performance (and likely price), and the possible longer term repercussion being losing a customer for a decade if they buy a G-Sync monitor.

            If Vega can’t comfortably do 4K @ 60Hz (let alone HDR, 120Hz, etc.), they’re in real trouble. I have a 3440x1440 monitor, roughly halfway between 2560x1440 and 3840x2160, and wish it had G-Sync (it wasn’t available on the first gen). If I were in the market today and buying a high-resolution monitor, I’d have a hard time looking seriously at AMD and Freesync. I’ll say it again: Vega needs to easily run 4K @ 60Hz or AMD is in big trouble. It needs to do that for games out today, and have headroom for future games. Anything less is going to be a hard (impossible?) pill for many to swallow at the price ranges we’re expecting ($550+?).

            If they sell it for like $400, which I find hard to believe given the chip size and HBM, then they’ll slide more into the middle enthusiast range where 3440/2560x1440 and below are the norm. They’ll have a lot better sales volume, but who knows if they’ll even be making money at $400, given their retailers need to make money too.

            • DoomGuy64
            • 2 years ago

            Well, if it’s anything like Fury, there will be an X and a non-X version, and the non-X version will be cheaper. That would be the model I would get, because it will only be moderately slower and still support all the latest features from AMD.

            As for 120hz 4K, that’s a pipe dream. You’d have to buy SLI or crossfire to actually get 120fps with high end titles. I’d say the more realistic application would be 1440p @ 144hz, which Vega should do quite easily.

            There isn’t any point in buying a super expensive monitor, considering tech still isn’t good enough to justify spending that kind of money. Maybe once they eliminate ghosting with VRR (coming soon), but even at that I wouldn’t spend more than $500 on a monitor.

            I already have an MG279 freesync monitor though, so I’m not in the market for a monitor. Vega non-X is pretty much the only upgrade I’m interested in at this point. I currently have a 390 and a Fury, and both have downsides. I can max out the graphics on the Fury, but RAM is limited (very few games actually need more) and there are compatibility issues with my monitor. The 390 has no problems other than speed in newer titles, but freesync makes that tolerable. I need to sell one, and it’s probably going to be the Fury due to hardware issues and higher resale value; then I’ll use that to offset Vega.

      • chuckula
      • 2 years ago

      When Polaris launched Raj showed off 2 Rx 480s as being comparable to the GTX 1080.
      Now they are showing off one Vega as being comparable to the GTX 1080.

    • DancinJack
    • 2 years ago

    Sad!

      • Srsly_Bro
      • 2 years ago

      $599 is my guess. I’ll of course quote myself on this at a later date should it be accurate.

        • DancinJack
        • 2 years ago

        That’s….pretty expensive for what seems to have roughly 1080 performance-ish. I guess we’ll see.

        • mcarson09
        • 2 years ago

        That might be MSRP. I say $650 will be the price.

        I’m sure a “shortage” upon release will hit making the price shoot up to $750.
