Report: Doom’s Vulkan renderer proves a boon for Radeons

Doom's Vulkan rendering path became available to the public this week, and its arrival appears to be a boon for Radeon owners. German site Computer Base ran some new benchmarks with Vulkan turned on for Doom, and the results are rather surprising. At 1920×1080 with graphics settings maxed out, the top-end Radeon R9 Fury X churned out 160.4 frames per second with Vulkan, a 66% increase over its 96.7 FPS showing with OpenGL. The Radeon RX 480 gets a 46% boost or so with Vulkan on, too. It runs the game at 122.6 FPS with the low-overhead API, compared to 83.8 FPS delivered with OpenGL. 

The Fury X gets a similar speed-up at 2560×1440 with Vulkan turned on. In Computer Base's testing, the card rockets from 72.6 FPS in OpenGL mode to 110.4 with Vulkan, a 52% boost. The RX 480 also delivers a rather nice boost, moving from 56.4 FPS to 79 FPS once Vulkan is enabled. At 4K, however, all of the cards Computer Base tested are taxed to the point where the performance increases from API changes are minor.

GeForce cards don't see much of a speed-up with Vulkan enabled, however. At 1920×1080, the Pascal GeForce GTX 1070 moved from 130.6 FPS to 135.7 FPS, or just 4% more FPS on average. The GeForce GTX 970 barely budges: 93.4 FPS with OpenGL and 93.8 FPS with Vulkan. The trend is similar at 2560×1440 and 4K. It's worth noting that Computer Base didn't repeat its tests with the GeForce GTX 1080, but that card delivered slightly lower average FPS rates in OpenGL mode in Computer Base's testing than the Fury X does with Vulkan. We'd be curious to see what Nvidia's top-end card can do with the new API, though.

Longtime TR readers will know what's coming next. We always caution against reading too much into average FPS numbers, since they don't say anything about the quality of the gaming experience like our frame-time benchmarking methods do. Still, going by the measure of potential performance that Computer Base's numbers show, it seems the R9 Fury X (and other Radeons) still have quite a bit of untapped performance to offer when next-gen APIs enter the picture. If you really like Doom and have a high-refresh-rate FreeSync monitor handy, that $400 R9 Fury X floating around Newegg right now might be a worthy upgrade.

Comments closed
    • jokinin
    • 3 years ago

    Even with my ancient Radeon HD7870 I have seen improvements from 45-60 fps to 50-70 fps most of the time.
    I can now play at 1920×1080 at medium detail (shadows off), and it very rarely dips below 40 fps. Quite an enjoyable experience for such an old GPU.

    • Klimax
    • 3 years ago

    Absolutely unsurprising. When you have a driver team that is overstretched and likely inexperienced, good game/engine devs can beat them. (But it means they have to waste a boatload of time and energy on it, and nobody outside benefits from it.) And they have to update the code constantly for new HW, since the previous version will at best have inferior performance.

    However, when you have a skilled, experienced driver team, game/engine devs cannot beat them at all. In fact I would argue there is simply no exception. The driver team has access to internal information about the HW and controls most of the stack.

    And that's what we are seeing here, again. AMD is once again leaving loads of performance on the table because their drivers are not well optimized and can't use the HW well. And Nvidia simply cannot see much benefit, since their drivers are already optimized and know how to push the HW to maximum performance.

    TL;DR: It's not a case of not supporting something well, but of how well the drivers are done.

      • AnotherReader
      • 3 years ago

      While I agree with your observation about unoptimized AMD drivers, were you expecting this to be a landslide win for AMD? Even Pascal is embarrassed (Fury X 26% faster than 1070 at 1440p; RX 480 at 90% of 1070, and 390 at 91% of 980 Ti).

      • BurntMyBacon
      • 3 years ago

      Quoting Klimax: "However, when you have a skilled, experienced driver team, game/engine devs cannot beat them at all. *In fact I would argue there is simply no exception.* The driver team has access to internal information about the HW and controls most of the stack."

      Disproof by counterexample (it only takes one): the 980 Ti gets a 7.1% boost under Vulkan at 1920x1080 in these same charts, and that card doesn't even have asynchronous compute. Assuming Pascal has the hardware to take advantage of it properly, we'll probably see a pretty consistent boost from those cards as well.

      I do agree in general with your observation about low-level APIs holding more benefit for cards that are less optimized at the driver level. However, I would hesitate to claim that as their only benefit. It's certainly too early to deal in absolutes.

        • sweatshopking
        • 3 years ago

        Pascal doesn’t have proper async hardware.

          • Voldenuit
          • 3 years ago

          Quoting sweatshopking: "Pascal doesn't have proper async hardware."

          From Ryan Smith at Anandtech (http://www.anandtech.com/comments/10486/futuremark-releases-3dmark-time-spy-directx12-benchmark/508014):

          "'Wait, isn't Nvidia doing async, just via pre-emption?' No. They are doing async - or rather, concurrency - just as AMD does. Work from multiple tasks is being executed on GTX 1070's various SMs at the same time. Pre-emption, though a function of async compute, is not concurrency, and is best not discussed in the same context. It's not what you use to get concurrency."

    • DPete27
    • 3 years ago

    So while the RX480 (for example) is being priced as a competitor to the GTX970, in future Vulkan/DX12 games, it could perform more like a GTX1060 or higher? That’s good news for AMD. It’s just too bad they needed this to unlock their cards’ full potential.

    • derFunkenstein
    • 3 years ago

    OK so we’re all pretty sure that the RX 480 is roughly as fast as (or a bit faster than, with Fallout 4 anyway) a GTX 970 most of the time. TR’s review bore that out, so just hang with me here.

    OpenGL
    GTX 970: 93.4 FPS
    RX 480: 72.6 FPS

    The GTX 970 is 28.65% (20.8/72.6) faster than the RX 480

    Vulkan
    GTX 970: 93.8 FPS
    RX 480: 110.4 FPS

    The RX 480 is 17.69% (16.6/93.8) faster than the GTX 970

    So is this about how great Vulkan is, or is it about how awful AMD’s OpenGL support (apparently) continues to be? My hunch is that it’s more the latter than the former, considering the enormous deficit handicapping the RX 480

    This is great either way, but really in more of a “dadgummit, AMD” kind of way.

      • Puiucs
      • 3 years ago

      it’s a mix of bad OpenGL drivers and Nvidia not having Async shader support.

        • DoomGuy64
        • 3 years ago

        I don't think async works outside of Vulkan / DX12 either, so there's that. GCN basically requires a new API to maximize its performance. The whole reason for Mantle was to showcase what the cards were capable of outside of DX11, as DX11 had scheduling limitations that AMD worked around in hardware with async.
        http://www.tomshardware.com/news/amd-dx12-asynchronous-shaders-gcn,28844.html

        Synchronous limitations: "The simplest way to describe how synchronous multi-threaded graphics works is that the command queues are merged by switching between one another on time intervals – one queue will go to the main command stream briefly, and then the next queue will go, and so on. Therefore, the gaps mentioned above remain in the central command queue, meaning that the GPU will never run at 100 percent actual load. In addition, if an urgent task comes along, it must merge with the command queue and wait for the rest of the command queues to finish executing. Another way of thinking of this is multiple sources at a traffic light merging into a single lane."

        Performance benefits of async: "To bring numbers to the table, AMD ran the LiquidVR SDK sample, which ran at 245 FPS with Asynchronous Shaders off and post-processing off. With post-processing enabled, it dipped to 158 FPS, but upon enabling both Asynchronous Shaders and post-processing, the framerate jumped to 230 FPS, nearly that of the original."

        DX11 by itself is pretty inefficient. Nvidia seems to have handled it by working around some of the inefficiencies in their driver, but at the end of the day DX11 is not efficient enough for modern graphics cards and needs features like async to overcome its weakness. Pre-emption is just a workaround. Unless Microsoft improves the multitasking capabilities of DX11, it's going to become obsolete, as newer games will run increasingly slower on an API not designed for efficiency.

        Doom is probably one of the better examples of the limitations of synchronous vs. asynchronous capabilities. Without async, AMD has to manually optimize the driver for the game and still won't achieve peak performance. With async, the game achieves maximum performance from day one, because scheduling is being handled directly by the hardware. There is only so much optimization that can be done in last-gen APIs, as async is just that much faster.
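
A minimal toy sketch of the scheduling difference described above, with made-up millisecond figures (nothing here is measured from Doom or any real GPU). The only point is that a synchronous merge serializes both the work and the gaps, while async compute lets independent compute jobs fill the gaps:

```python
# Toy model of the "traffic light merge" vs. async-compute scheduling
# described above. All durations are invented for illustration only.
graphics = [(4, 2), (3, 1), (5, 3)]   # (busy_ms, idle_gap_ms) per draw batch
compute = [2, 2, 1]                   # independent compute jobs, in ms

def synchronous_ms(graphics, compute):
    """Queues merged by time-slicing: every job, and every gap, lands on one timeline."""
    return sum(busy + gap for busy, gap in graphics) + sum(compute)

def asynchronous_ms(graphics, compute):
    """Async compute: compute jobs soak up the idle gaps before extending the frame."""
    gaps = sum(gap for _, gap in graphics)
    spill = max(0, sum(compute) - gaps)   # compute that doesn't fit into the gaps
    return sum(busy + gap for busy, gap in graphics) + spill

print(f"synchronous:  {synchronous_ms(graphics, compute)} ms/frame")
print(f"asynchronous: {asynchronous_ms(graphics, compute)} ms/frame")
```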

          • bfar
          • 3 years ago

          Nail on the head.

          I’m perplexed as to why people are expecting improved performance on Nvidia cards. They already have excellent scheduling and throughput efficiency via per-game driver optimization. Enabling async compute wouldn’t make a whole lot of difference, as there’s very little untapped potential left to unlock.

          Async compute isn't some magical pixie dust that offers free fps; it solves an efficiency problem for AMD that Nvidia have already been tackling in a completely different way. For the record, I much prefer AMD's solution, but there's no getting away from the downside – the scheduling problem remains unsolved with the older APIs on AMD hardware. Thankfully that will become a moot point as DX11 and OpenGL die out.

            • DoomGuy64
            • 3 years ago

            I think there is far more to it than just "AMD drivers bad". The reality is, AMD's drivers *and hardware* have vastly improved from years ago, and no longer exhibit the massive problems that existed when GCN 1.0 first arrived. IMO, AMD designed GCN 1.0 too far ahead of its time for its own good, as it wasn't efficient enough at what it should have been doing day one. Tessellation was an afterthought next to compute, for example. AMD realized its mistakes and partially corrected them with GCN 1.1, then further corrected them with 1.2+.

            I also wouldn't blame Gameworks performance on AMD's drivers either. That's deliberate sabotage on Nvidia's part, and shouldn't be held against AMD as long as they eventually fix it.

            As far as DX11 efficiency goes today, Polaris has fewer ROPs and shaders than a 390, yet somehow manages to perform almost exactly the same. That's pretty damn good for what it is. Previous architectures were just unbalanced and inefficient with their resources. DX11 is no longer a problem for AMD, but they don't have a high-performance DX11 card on the market either. Vega should be where AMD finally has a competitive DX11 card, but until then their existing hardware has potential mostly in next-gen APIs and driver updates.

            I love how GCN has so much potential for performance increases over time. Hawaii is probably the best example of this, which is why I'm using a 390. Superior to a 970 today, and the improvements aren't stopping either. I mean, just look at the Vulkan performance. Can't go wrong with AMD's mid-range hardware, especially Polaris. They just don't have good hardware at the high end yet.

          • BurntMyBacon
          • 3 years ago

          Quoting DoomGuy64: "I don't think async works outside of Vulkan / DX12 either, so there's that. GCN basically requires a new API to maximize its performance. The whole reason for Mantle was to showcase what the cards were capable of outside of DX11, as DX11 had scheduling limitations that AMD worked around in hardware with async."

          In the gaming world, I agree. Mantle was useful to demonstrate that there were benefits to be had in the gaming world should developers decide to use some of these features that didn't have a place in DX11. In the compute world, however, GCN features have been available to program for since day 1, and made easier with HSA iterations. After all, the hardware scheduler (among others) was a compute-oriented feature designed to further their aspirations with HSA. The full benefits will become more apparent as more software (or software threads) on the computer uses the GPU for compute.

    • Sammael
    • 3 years ago

    Thanks Wasson

    • Waco
    • 3 years ago

    TR, come on. Why are percentages always hard?

    Quoting the article as originally published: "At 1920x1080 with graphics settings maxed out, the top-end Radeon R9 Fury X churned out 160.4 frames per second with Vulkan, a 40% increase over its 96.7 FPS showing with OpenGL. The Radeon RX 480 gets a 32% boost or so with Vulkan on, too. It runs the game at 122.6 FPS with the low-overhead API, compared to 83.8 FPS delivered with OpenGL."

    That's a 66% increase and a 46% increase, respectively. How does this continually get mixed up? Every single metric stated was calculated backwards. 🙁

      • Ninjitsu
      • 3 years ago

      Haha yeah they should fix it soon…

      • Jeff Kampman
      • 3 years ago

      Whoops, sorry, I shouldn’t do math in the mornings.

        • Waco
        • 3 years ago

        No harm done, I just read it and did a double take. 🙂

        I’ve got a standing rule of math not being allowed pre-coffee or pre-noon, whichever comes first.

        • tipoo
        • 3 years ago

        You could say you should compute math asynchronously from mornings.

        I’ll see myself out.

        • Mr Bill
        • 3 years ago

        I thought those numbers were a little on the shader side.

        I’ll see myself out.

      • Mr Bill
      • 3 years ago

      He divided by the new frame rate to get the percentage rather than the old frame rate. His math was correct but solved the wrong problem.
      160.4-96.7=63.7 …….. 63.7/160.4=40% ……… 63.7/96.7=66%
      122.6-83.8=38.8 ……. 38.8/122.6=32% ……… 38.8/83.8=46%
      Perhaps computing math asynchronously (between yawns) in the morning got an out of order value in the queue.

      I’ll show myself out.

        • Waco
        • 3 years ago

        You're doing it the hard way. 😛

        160.4 / 96.7 = 166%
        122.6/83.8 = 146%

        But yea, percentages are a constant problem in tech articles all around the web. Early morning math introduces error. 🙂

          • Mr Bill
          • 3 years ago

          You don't get the original results by simply inverting the fraction. You would have to do it like this… (1 - (96.7/160.4)) = (1 - 0.603) = 0.40 or 40%. So, the 'hard way' is the easy way to see the problem.

            • Waco
            • 3 years ago

            Maybe I'm just borked from playing the percentage game so much that I'm careless about the actual steps. I see 0.603 in the result and go "hmm, 40%".
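
For anyone following along, here is a minimal sketch of both calculations, using the corrected figures from the article (Fury X 96.7 → 160.4 FPS, RX 480 83.8 → 122.6 FPS):

```python
# Percent speed-up from OpenGL to Vulkan, computed both ways.
def speedup_pct(old_fps, new_fps):
    """Correct: the gain measured against the old (OpenGL) number."""
    return (new_fps - old_fps) / old_fps * 100

def speedup_pct_backwards(old_fps, new_fps):
    """The slip-up: the same gain divided by the new (Vulkan) number."""
    return (new_fps - old_fps) / new_fps * 100

for card, old, new in [("Fury X", 96.7, 160.4), ("RX 480", 83.8, 122.6)]:
    print(f"{card}: {speedup_pct(old, new):.0f}% (not {speedup_pct_backwards(old, new):.0f}%)")
```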

    • swaaye
    • 3 years ago

    This would be more exciting if the engine had some licensees. I don’t foresee myself playing Doom again in the near future.

    One thing is for certain though. NV screwed up by making it more difficult to do this asynchronous graphics processing in their architecture. Endless bad press from a few games.

      • Mat3
      • 3 years ago

      The number of games will only get bigger.

    • Unknown-Error
    • 3 years ago

    Radeon be DOOM(ed)

      • Srsly_Bro
      • 3 years ago

      I’m too lazy to post the Futurama video.

    • JumpingJack
    • 3 years ago

    Great, this makes good sense now…. I tried OpenGL and Vulkan on my 980 and did not see much of a difference. Can’t say why this is the case … are Nvidia drivers just well optimized in both DX11 and OpenGL? Are AMD DX11/OpenGL drivers just so bad? Or is the Vulkan code path taking advantage of async compute features that Nvidia really sux at?

      • caconym
      • 3 years ago

      Async isn’t yet enabled in Nvidia’s drivers.

        • Chrispy_
        • 3 years ago

        More likely Async isn’t possible on 900-series cards because Nvidia lied about it.

          • biffzinker
          • 3 years ago

          What else are they going to do when Kepler/Maxwell are incapable of supporting Async Compute because the GPU lacks the necessary logic? You didn't expect Nvidia to be honest and admit that the only way to get Async Compute was by buying a Pascal-equipped card? Although async compute does exist in CUDA, so who knows, maybe Maxwell could.

            • mesyn191
            • 3 years ago

            Actually I would expect them to. The reasons why they wouldn’t want to are obvious but I don’t care about what they want since I don’t work for NV. Or AMD for that matter.

          • swaaye
          • 3 years ago

          From what I understand following Beyond3D, there is some catch with mixing compute and graphics processing. Everything Kepler and newer can do asynchronous processing of compute.

        • JumpingJack
        • 3 years ago

        I knew that 🙂 …. just wondering if async is used in the Vulkan drivers and how much Doom's Vulkan code path may utilize it; could this explain the huge Radeon improvement and mediocre GTX improvement?

        • JumpingJack
        • 3 years ago

        Yep, found the answer, and I think this is likely the culprit. Doom does take advantage of async compute, which is why we see a great performance gain on AMD GPUs and hardly any on Nvidia GPUs. Here is a quote from the Steam FAQ page on DOOM/Vulkan:

        "Does DOOM support asynchronous compute when running on the Vulkan API? Asynchronous compute is a feature that provides additional performance gains on top of the baseline id Tech 6 Vulkan feature set. Currently asynchronous compute is only supported on AMD GPUs and requires DOOM Vulkan supported drivers to run. We are working with NVIDIA to enable asynchronous compute in Vulkan on NVIDIA GPUs. We hope to have an update soon. Click here for additional information on asynchronous compute"

      • Klimax
      • 3 years ago

      Correct. Nvidia's drivers are in a completely different league than AMD's in this regard. Nvidia spent years optimizing them. (As was observable when AMD's Mantle was tested.)

    • erwendigo
    • 3 years ago

    OK, so AMD cards improve on their poor baseline OpenGL performance in this game, and… what?

    That’s the natural thing.

    One interesting review about that is:

    http://www.gamersnexus.net/game-bench/2510-doom-vulkan-vs-opengl-benchmark-rx-480-gtx-1080

    There, the RX 480 "improves" its performance to equalize with a GTX 970. So the fucxxx important thing about this is the original OpenGL performance, which was clearly bad in AMD's camp. Yes, Vulkan fixed it, but the headline of this news and the conclusions are... so WRONG! It isn't about Vulkan making the Radeons great (and more material useful for doing PR for AMD), it's about bad OpenGL performance as the point of departure.

      • AnotherReader
      • 3 years ago

      Perhaps you missed this tweet from id Software's Tiago Sousa: http://twitter.com/idSoftwareTiago/status/752590016988082180. Gamers Nexus's settings disabled asynchronous shading; the performance increase is actually even larger, as seen in the ComputerBase article linked in the second sentence of this article. You are right about bad OpenGL performance from AMD making Vulkan look even better.

      • Sammael
      • 3 years ago

      Like I posted to some other guy earlier on another post.

      A base RX 480, with async compute disabled in the settings, was forced to go up against a superclocked 970, and the 970 still got beat.

      The 970 got beat even with the AMD card's hand tied behind its back: no async enabled, and against a clock-boosted card.

    • NTMBK
    • 3 years ago

    Digital Foundry did some measurements too: http://www.eurogamer.net/articles/digitalfoundry-2016-doom-vulkan-patch-shows-game-changing-performance-gains

    Similar story: solid 5-10% gains for NVidia cards and whopping 20-30% gains for AMD cards. Good news for everyone!

      • tipoo
      • 3 years ago

      In the interest of fairness, it should be noted they only enabled async on AMD, while they're working with Nvidia to get it in there "soon" (and it's "proven more challenging"). So Nvidia could see further gains, but probably not as much as AMD, which simply has more hardware headroom to work with for async and hardware scheduling features like ACEs.

      I think this is playing out a bit like unified shaders: Xenos has unified shaders, it's no big deal, it's no big deal, aaaand now we're flipping completely to unified shaders (and before ATI on desktop, no less, thanks to the AMD buyout).

        • NTMBK
        • 3 years ago

        Yup 🙂 NVidia knows all about the importance of multiple queues to keep occupancy up; it's a key trick for getting maximum performance in CUDA programming - look for some of their guides about "streams". This is the same concept, but in a graphics context.
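
As a rough illustration of the "streams" idea (a PyCUDA sketch, not taken from any Nvidia guide; the kernel and array sizes here are invented): work submitted to separate streams only serializes with itself, so copies and kernel launches from different streams are free to overlap, which is the same occupancy trick async queues play on the graphics side.

```python
import numpy as np
import pycuda.autoinit          # creates a CUDA context on the default device
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# A trivial kernel; the point is the scheduling, not the math.
mod = SourceModule("""
__global__ void scale(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}
""")
scale = mod.get_function("scale")

n = 1 << 20
streams = [drv.Stream() for _ in range(2)]
host = [drv.pagelocked_empty(n, np.float32) for _ in range(2)]  # pinned memory for async copies
dev = [drv.mem_alloc(host[0].nbytes) for _ in range(2)]
for h in host:
    h[:] = np.random.rand(n).astype(np.float32)

# Each stream's copy-in, kernel launch, and copy-out serialize only with
# themselves; work queued in the other stream can overlap and fill the gaps.
for s, h, d in zip(streams, host, dev):
    drv.memcpy_htod_async(d, h, s)
    scale(d, np.int32(n), block=(256, 1, 1), grid=(n // 256, 1), stream=s)
    drv.memcpy_dtoh_async(h, d, s)

for s in streams:
    s.synchronize()
print("both streams done")
```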

    • chuckula
    • 3 years ago

    If you are using a 1920×1080 resolution as a benchmark for success on one of these cards you’ve already lost.

    Incidentally, go and look at the 4K benchmarks that they ran. The Fury X “wins” by a grand total of 3%, or about 1 FPS. Not over the GTX-1080 mind you — which they didn’t test curiously enough — but over the GTX-1070, which is still massively cheaper than the Fury X outside of fire sales.

    Remember when the Fury X was supposed to be the ultimate “4K” card and testing it at low resolutions was cheating in favor of Nvidia? Funny how things have changed.

      • Jeff Kampman
      • 3 years ago

      This is why I think it’s fair to point out that it’s really only a win for AMD when you consider high-refresh-rate 1920×1080 or 2560×1440 monitors.

        • LostCat
        • 3 years ago

        One wonders who can spend $500 on a vidcard but not $200 on a monitor.

          • Jeff Kampman
          • 3 years ago

          Going by the number of people who still have 1080p displays on Steam, it seems plenty do 😛

            • LostCat
            • 3 years ago

            I know people still convinced you can’t see more than 60. Sigh.

            • CaptTomato
            • 3 years ago

            I think it's more a question of "realism", i.e., does the real world move/look like 144 fps?

            • natedawg72
            • 3 years ago

            As far as I can tell, my real world has an uncapped frame rate.

            • CaptTomato
            • 3 years ago

            Oh, you can move faster than the speed of light?

            • natedawg72
            • 3 years ago

            I know Bethesda tries really hard to be an exception, but in any good game the physics engine is not tied to the framerate 🙂

            • Laykun
            • 3 years ago

            144hz, faster than the speed of light.

            • Chrispy_
            • 3 years ago

            * except 145Hz light, which is one more.

            • chuckula
            • 3 years ago

            n00b.
            You probably actually believe those trolls who claim that nobody can see movement faster than Planck time.

            • chµck
            • 3 years ago

            Biologically speaking, the “refresh rate” of your eyes is limited to the rate at which the G-protein coupled receptors in your retina can be recharged.
            I think this happens to be about 66 times a second on average.

            • Mystiq
            • 3 years ago

            Then why did I have to set my old CRT monitors to 72 Hz or I couldn't stand it? And I can tell the difference between 60 and 100.

            And US airmen have been clocked at 400 FPS under certain conditions. Look up the study.

            No, your eyes are an analog system and have multiple ways to run up their effectiveness. Also look up microtremors. https://techreport.com/news/27540/xbox-dev-explains-why-30-fps-isnt-enough

            • Liron
            • 3 years ago

            But if the recharge rate of all the receptors is not synchronized, then you would get more information from frame rates higher than a single receptor's recharge rate.

            • AnotherReader
            • 3 years ago

            The response time of our eyes is not gated by the renewal of the various opsins. BBC research demonstrated that 300 fps was needed for sports coverage. *Even 100 fps was insufficient for table tennis.* The link to the PDF is below:
            http://downloads.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP169.pdf

            • jihadjoe
            • 3 years ago

            60?! What extravagance. 24fps ought to be enough for everybody.

            • maxxcool
            • 3 years ago

            YUP~! http://store.steampowered.com/hwsurvey
            *ALL* other monitors: <5%
            1366 x 768: 25.92% (+0.20%)
            1920 x 1080: 36.50% (-0.31%)

            • djayjp
            • 3 years ago

            I gotta hook the fury x up to my 1366×768 monitor….

            • JustAnEngineer
            • 3 years ago

            I apologize. The latest steam survey came up on my Zenbook UX32VD. It has a 13.3″ 1920×1080 screen.

            • BurntMyBacon
            • 3 years ago

            Then don’t submit it. It’ll roll around again.

            • BurntMyBacon
            • 3 years ago

            Other “Big” hitters:
            1600 x 900 6.60% -0.14% (Seems you missed that one)
            1440 x 900 4.88% +0.11%
            1280 x 1024 4.52% -0.08%
            1680 x 1050 3.99% +0.01%
            Combination of all resolutions surveyed higher than 1920×1200 can be no greater than 3.9%.
            (Others category may include lower resolutions, but I’m including it here)

            • sweatshopking
            • 3 years ago

            Yeah, so test more games at 1080p!

            • LostCat
            • 3 years ago

            I’m stickin with 1080p, but goin for 144+Freesync if I can.

          • f0d
          • 3 years ago

          I spent $600 on a 1080p monitor (ultrawide 34″) and $200 on a graphics card (290).
          I don't have a single problem with 1080p; it looks pretty fantastic to me, and I get my 100+ fps, which I prefer over higher resolutions any day of the week.

          1080p doesn't mean cheap.

          • Pitabred
          • 3 years ago

          Because people don't respect the actual setup. They don't realize that they spend 100% of their time with their monitor, mouse, and keyboard, and they get crappy ones because they exist and they're cheaper. The digital-to-analog linkup is poorly understood and even less respected 😉 It's a similar reason they get badass video cards and use the cheapest motherboard and PSU they can find. They don't view the systems holistically; they view them as discrete parts, some of which just don't matter to the average user as much as they should.

          • Ninjitsu
          • 3 years ago

          What if you already have a decent 1080p monitor that you’re happy with, but only have 500 to spend?

          I don't think a good quality adaptive-sync/G-Sync IPS 1440p monitor, or a 16:10 monitor, comes in at $200.

          So suppose you spend all 500 on this monitor, if there exists one. Now what? You have your current GPU, which could just about perform well at 1080p, pushing out 20 fps or less on this new monitor.

          What now? Spend another $400 on a GPU that will give you 30-60 fps on the new monitor? Okay, suppose you do. That’s $900 gone.

          Alternatively, spend $300 to $500 on a GPU that, under any conditions won’t push out less than your 1080p monitor’s refresh rate, and enjoy fluid animation and saved money.

          Or at least, this is how I think. Probably few have both the time and money to justify that much expenditure on these things, which is why the steam numbers are what they are.

          And I know people who do have triple monitor setups and all, but they also like to invest in peripherals (for flight sims, racing sims, better sitting posture, etc.). So finally upgrading the monitor *may* not be top priority.

          For me personally unless it’s a work requirement, or the monitor dies, I’m unlikely to upgrade it.

            • Voldenuit
            • 3 years ago

            Quoting Ninjitsu: "What if you already have a decent 1080p monitor that you're happy with, but only have 500 to spend?"

            Assuming said monitor caps out at 60 Hz, it would be a bit silly to spend all of the budget on a graphics card that will be refresh-limited. Something like the upcoming 470 or hypothetical 1050 would be able to run nearly all current and most upcoming games at over 60 fps @ 1080p for under $200. I would pocket the remaining $300 and do something else with it, or put it towards a high-refresh or VRR monitor down the road. Especially since the monitor will last you 5 years, whereas a high-end GPU will be obsoleted in 12-18 months.

            EDIT: Said 1060 when I meant 1050. Brainfart.

            • sweatshopking
            • 3 years ago

            The 470 isn't going to run Warhammer at 60 fps maxed, nor even Far Cry 4.

            • DreadCthulhu
            • 3 years ago

            Maybe not maxed, but some more slides from their China event got revealed yesterday; AMD claims that the RX 470 will manage to run Farcry 4 & Warhammer (among other popular games) at 70+ FPS @ 1080p at high settings. http://wccftech.com/amd-radeon-rx-470-460-official-performance/

            • Laykun
            • 3 years ago

            Long story short, quality 1440p 144hz gsync monitor, 1.1k NZD, GTX 980 Ti 1.3K NZD, all up I had to spend 2.4k NZD (~1.7k USD) to get a good experience. Even now the 980 Ti doesn’t even come close to pushing 144fps in a lot of titles. You have to spend quite a bit more than $200 for a decent monitor.

            I can certainly understand why people stick to 1080p 60hz at the moment, it’s just very expensive to jump up a tier, and this is likely to make up the majority of users for a long time.

          • bfar
          • 3 years ago

          I use high-end cards on a 23″ 1080p monitor. Insane performance, decent dpi, and a low physical footprint. I'm happy with that for now.

          I might be tempted by a 2K 24″, but looking at anything more than a standard 60 Hz IPS, things actually start getting pretty expensive. I don't think I could ever go back to TN. It's not that I can't afford a new monitor, but if I'm going to spend the money, I'm inclined to wait another year for better tech.

          • alrey
          • 3 years ago

          It's because many value gaming performance more. Why would I play at 4K if I get more fps with reasonably good graphics at 1080p?

          • anotherengineer
          • 3 years ago

          Because for the past 8 years, $200 just gets you a basic 1920×1080 monitor that will last 10 years. There isn't much point spending another $200 on an equivalent monitor 2 years later.

          Whereas a video card will typically see very decent improvements every 2-3 years, so there is more incentive to spend on a video card. (where you can get more fps and eye candy)

          If you could get a 120-144Hz, 8-bit IPS, 24″-27″ 2560×1440, with fully H.A.S. with DP1.3, freesync, gsync, etc. etc. for $350 tax in, I’m sure in 2 years it would surpass 1920×1080 resolution on steam.

          Then there is the other part of it…………..if it ain’t broke……………….

        • Billstevens
        • 3 years ago

        Beating the 1070 at 1440p is the most important thing. 4k is still only marginally possible on either but 1440p and ultrawide are a bit more common now for people with those graphics cards.

        • maxxcool
        • 3 years ago

        Sooo massive throw-down with 1080/1070/980/970/480x/Fury-X confirmed/imminent? 🙂

          • djayjp
          • 3 years ago

          And 1060!

            • maxxcool
            • 3 years ago

            Damn NDA …

        • Concupiscence
        • 3 years ago

        I think that view underestimates the difference Vulkan makes on lower-end AMD hardware. The R7 370 in my i3-4170 HTPC went from comfortably managing 1600×900 at medium quality settings in OpenGL to the same resolution at high quality in Vulkan, *with* a 20 fps boost. If I were willing, I could shunt the detail settings down and play at 1080p with Vulkan, but TSSAA forgives a lot of sins, and high quality 900p looks better than detail-reduced 1080p.

        Everybody's keen to talk about what this renderer does for high-end kit because that's the hard-charging stuff that best represents what PC gaming can do, but mere mortals stand to gain a lot from Vulkan too.

        • Bensam123
        • 3 years ago

        And that’s what most gamers are playing on.

        • ptsant
        • 3 years ago

        As an owner of a high-refresh 1440p monitor, I would like to point out that 1440p @ 100 fps is very close to 4k @ 50 fps in terms of pixels/sec. Some people will prefer one over the other and many 4k screens are cheaper than high-end 1440p adaptive sync monitors.

        I get the excitement with 4k, but until 1440p becomes trivial, 4k remains exotic in my opinion, limited to people with at least a GTX 1080 or 2…
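
A quick back-of-the-envelope check of that pixels-per-second comparison (just arithmetic on the two resolutions and refresh rates mentioned):

```python
# Pixel throughput for the two scenarios being compared.
def megapixels_per_second(width, height, fps):
    return width * height * fps / 1e6

print(f"2560x1440 @ 100 fps: {megapixels_per_second(2560, 1440, 100):.0f} Mpx/s")
print(f"3840x2160 @  50 fps: {megapixels_per_second(3840, 2160, 50):.0f} Mpx/s")
```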

      • sweatshopking
      • 3 years ago

      These cards finally look fast enough for 1080p in my books. Not even 1440p, IMO. I want 100 fps in a shooter, not 40.

        • bittermann
        • 3 years ago

        Go get a Freesync monitor and STFU about 100 fps. 🙂

          • Firestarter
          • 3 years ago

          50fps + freesync is still not 100fps, not in DOOM anyway

          • f0d
          • 3 years ago

          Freesync and low refresh is not as good as high refresh and high fps.
          I have Freesync, and low fps is still low fps, just a bit smoother.

        • bfar
        • 3 years ago

        Right. Although I’d say we’re ok now for 1440p. But 4k is still years away. I laugh when I hear the words ‘4k ready’ in terms of gaming.

      • derFunkenstein
      • 3 years ago

      If it’s about CPU overhead and all that fun stuff, it makes more sense to have the cards not be a huge bottleneck.

      • Krogoth
      • 3 years ago

      They are testing to see how much the CPU is a bottleneck, which is where you can see the difference between the APIs. The best way to do this is to run at relatively low resolutions.

      Please remove those green-tinted shades, you are looking very silly.

      • VoodooIT
      • 3 years ago

      Did you not see the 1440p results? Those are pretty good too.
