Doom’s long-awaited Vulkan support is rolling out today

Doom's Vulkan implementation made waves at the GeForce GTX 1080 launch. Developer id Software showed the Vulkan version of the game running with its settings cranked at well over 100 FPS, and now we common folk can try it out for ourselves. A Doom update with the new rendering path is rolling out this morning. id Software says users with minimum-spec hardware will get "better performance at higher video settings," while folks with heavier-duty systems "will experience exceptional performance with Doom's advanced video settings cranked up to full effect." Players will be prompted to choose OpenGL or Vulkan mode upon starting the game with the new update installed.

The release does come with a few caveats. Users will want to update their video card drivers to the latest versions available from Nvidia and AMD. Windows 7 users who have Nvidia graphics cards with 2GB of RAM on board can't enable the new API. GeForce GTX 690 owners won't be able to run the game with Vulkan enabled at all. id Software says that Doom's asynchronous compute features are only working on AMD graphics cards at the moment, though the company says it's working with Nvidia to bring async compute features to the green team's graphics cards, too.

For its part, AMD seems pleased with the benefits of this release. The company says Doom takes advantage of asynchronous compute shaders, shader intrinsics, and "frame flip optimizations" (a feature that the company describes as "[passing] the frame directly to the display once it’s ready") on AMD graphics cards to deliver a substantial increase in performance. For example, the Radeon RX 480 can purportedly run the Vulkan version of Doom as much as 27% faster at 1920×1080 than it can with the OpenGL renderer. At 2560×1440, users might see as much as a 23% performance gain. We'll have to fire up the title soon and see just how much of a boost Vulkan offers.

Comments closed
    • End User
    • 3 years ago

    So both Pascal and Polaris see a performance bump when one switches to Vulkan. Polaris sees another bump when asynchronous compute is enabled. I look forward to Bethesda enabling asynchronous compute on NVIDIA GPUs.

    From Bethesda:
    [url=https://community.bethesda.net/thread/54585?start=0&tstart=0<][quote<]Currently asynchronous compute is only supported on AMD GPUs and requires DOOM Vulkan supported drivers to run. We are working with NVIDIA to enable asynchronous compute in Vulkan on NVIDIA GPUs. We hope to have an update soon.[/quote<][/url<]

      • tipoo
      • 3 years ago

      Waiting with bated breath for the latter part. Still want to see what the Pascal vs Maxwell differences will be here. They (Nvidia) said Maxwell would get it, but then advertised it as a selling point for Pascal, so it's hard to tell what's going to happen: whether both had it but Nvidia held it back from Maxwell to keep the advertising point for Pascal, whether Maxwell never had it and they meant a software implementation with the CPU taking the brunt of it, or something in between where both have it but Pascal's is just better.

      I don't think even Pascal has the seamless interleaving between clocks, but it may not matter given how much brute power it has.

    • anubis44
    • 3 years ago

    Here’s a wccftech video showing the RX 480 running Doom using OpenGL, Vulkan (async compute disabled), and Vulkan (with async compute enabled):

    [url<]https://www.youtube.com/watch?v=GlzKPBIjZPo&t=0s[/url<]

    If this is a harbinger of things to come, nVidia is going to be in serious trouble for leaving out hardware schedulers/asynchronous compute engines from their Pascal design.

      • Theolendras
      • 3 years ago

      Trouble would be a bit of an exaggeration. They indeed don't scale as well on the DX12 value/performance chart, but they remain largely equal on the DX11 value/perf chart, all the while being more efficient in performance per watt. It could eat into their expected margins, though.

    • chuckula
    • 3 years ago

    Having seen a few reviews, the Rx 480's "amazing" improvements are just like the "amazing" improvements in nuclear plant safety that occurred when [url=http://simpsons.wikia.com/wiki/Simpson_and_Delilah<]Homer Simpson got promoted out of his old job as safety inspector[/url<]. The "improvements" are mostly AMD's terrible OpenGL driver being taken out of the equation, and it's trivially easy to make a false impression of "amazing" performance on a relative basis when you can control both things being compared.

    As proof, here's the supposedly "gimped" and "obsolete" GTX-970 compared to the brand new Rx 480: [url<]http://media.gamersnexus.net/images/media/2016/game-bench/doom/vulkan/vulkan-doom-1080p.png[/url<]

    Notice how the Vulkan-enabled brand new Rx 480 beats the "gimped" GTX-970 by, uh, a little less than 5%? Remember the TR review where all those bad DX11 games that we ought to throw out gave the Rx 480 the same margin of victory over the GTX-970 as what it just pulled in the new magical Vulkan benchmark?

    Want to see Nvidia get performance improvements like that? Sure! Just have Nvidia do that thing that you always accuse them of: have them gimp their OpenGL drivers to be just as incompetent as AMD's. Then you can show the same massive "improvement" for the GTX-970.

      • Concupiscence
      • 3 years ago

      [url=http://i0.kym-cdn.com/photos/images/facebook/001/044/247/297.png<]Are we making Simpsons references now?[/url<] And to show that I'm not unrepentant snark made flesh, yeah, OpenGL's always been a second class citizen on AMD hardware in Windows, going back to the days when ATi still walked the earth.

      • AnotherReader
      • 3 years ago

      While I suspect that your point about a relatively unoptimized OpenGL driver from AMD magnifying the gains is correct, I would be remiss if I did not point out that you are comparing to an overclocked 970. The link also shows that Pascal benefits from Vulkan too, especially at lower resolutions. As the effect is most pronounced at lower resolutions, it may be due to better use of the CPU.

        • Concupiscence
        • 3 years ago

        With Vulkan the GPU’s likely spending less time waiting for unnecessary work to complete before asking for more work from the CPU. At higher resolutions the importance of CPU speed begins taking a back seat to the sheer demands of the graphical resources, which is why that window of improvement begins to close. I think that’s been true going back to the days of Quake III Arena benchmarking.
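
As a rough illustration of that effect, here's a toy model (not a measurement; every number in it is hypothetical) that treats frame time as bounded by whichever of the CPU's submission work or the GPU's pixel-bound rendering finishes last:

```python
# Toy model: the frame is done when both the CPU side (driver/submission work)
# and the GPU side (rendering, scaling with pixel count) are done.
# All timings are hypothetical, chosen only for illustration.

RESOLUTIONS = {
    "1920x1080": 1920 * 1080,
    "2560x1440": 2560 * 1440,
    "3840x2160": 3840 * 2160,
}

CPU_MS_OPENGL = 9.0      # assumed per-frame CPU cost issuing OpenGL calls
CPU_MS_VULKAN = 3.5      # assumed per-frame CPU cost with leaner Vulkan submission
GPU_MS_PER_MPIXEL = 3.5  # assumed GPU render cost per million pixels

def frame_time_ms(cpu_ms: float, pixels: int) -> float:
    """Frame time is limited by the slower of the CPU and GPU sides."""
    gpu_ms = GPU_MS_PER_MPIXEL * pixels / 1e6
    return max(cpu_ms, gpu_ms)

for name, pixels in RESOLUTIONS.items():
    fps_gl = 1000 / frame_time_ms(CPU_MS_OPENGL, pixels)
    fps_vk = 1000 / frame_time_ms(CPU_MS_VULKAN, pixels)
    gain_pct = (fps_vk / fps_gl - 1) * 100
    print(f"{name}: OpenGL {fps_gl:5.1f} fps, Vulkan {fps_vk:5.1f} fps ({gain_pct:+.0f}%)")
```

With these made-up numbers the model is CPU-bound under OpenGL at 1080p, so cutting submission cost pays off, while at 1440p and 4K the GPU term dominates and the gap closes. Real renderers overlap CPU and GPU work, so the window closes more gradually, but the shape matches the behavior described above.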

      • DoomGuy64
      • 3 years ago

      Vulkan only gave me a small boost on my 390, as it was already pushing close to 100 fps. What I did gain seemed to be slightly smoother fps. Vulkan still offers real improvements over OpenGL, optimizations or not.

      The 480 probably got such a huge boost being a newer card that AMD hasn’t optimized as much for. That doesn’t mean AMD can’t or won’t optimize OpenGL for the 480, it just means Vulkan currently runs better. It also shows us the true potential of the 480. Which of course, is kryptonite to a Nvidia fanboy. eg: *count cuckula reads benchmarks* “AHH! I’M MELTING..”

        • Concupiscence
        • 3 years ago

        What CPU’s feeding your 390? That seems to make a huge difference…

          • DoomGuy64
          • 3 years ago

          i7 Haswell

            • Concupiscence
            • 3 years ago

            Yep, that’ll do it. The i7’s high IPC and abundant threads were doing a good job of keeping the R9 390 fed, though I’m glad to hear latency’s improved under Vulkan at any rate. I’m kinda dying to see how benchmarks improve for people running capable GPUs with AMD kit and Core i3’s…

        • Sammael
        • 3 years ago

        Did you set the AA mode to TSSAA? It needs to be that or disabled to use async. If you had some other AA setting, you should test it again.

      • Sammael
      • 3 years ago

      You probably missed it, as did they at the time of reviewing, but they need to use TSSAA or no AA, else async is disabled. They used FXAA, so they lost the additional boost that async brings to the RX 480.

      [url<]https://twitter.com/idSoftwareTiago/status/752590016988082180[/url<]

      So no, the 970 is not at the same level; it's left behind.

      [url<]https://www.computerbase.de/2016-07/doom-vulkan-benchmarks-amd-nvidia/[/url<]

      P.S. In the GamersNexus test that was a superclocked EVGA 970 card vs a base-level RX 480, and even crippled without async as a boost it still got beat, with AMD's hands still partially tied behind their back. Nvidia loses here, badly. Repent.

        • AnotherReader
        • 3 years ago

        If this holds true until the 1060 launch, we know that there is at least one game where the 1060 will lose badly to the 480. The 480 is 90% of a 1070 in Doom. The Fury X also does very well: [b<]30% faster than a 980 Ti[/b<] and [u<]25% faster than a 1070[/u<]. I haven't seen a 1080 on that chart, but it seems like it wouldn't be any faster than the Fury X at 1440p.

        However, if we look at the OpenGL performance of the various Radeons, it seems clear that Chuckula was right about a mediocre OpenGL implementation from AMD being responsible for some of the gains. The gains are too large to be due to that alone, though.

        Finally, I am waiting impatiently for the custom versions of the RX 480; if they overclock well, the 1060 may be in for a rude shock.

          • BurntMyBacon
          • 3 years ago

          The Fury X improvements here border on the obscene. It is almost unbelievable that merely making use of low level APIs and Async compute would propel them well past the 1070 and possibly into 1080 territory.
          [quote<]Finally, I am waiting impatiently for the custom versions of the RX 480; if they overclock well, the 1060 may be in for a rude shock.[/quote<] The 1060 will no doubt overclock just as well, but I see your point. [b<]Except[/b<] Bethesda is still in the process of enabling Async compute for nVidia GPUs. I don't hold high hopes for Maxwell, but Pascal advertises the feature. I wonder how that will change the landscape.

            • AnotherReader
            • 3 years ago

            You are right. Let’s see how much performance is added by asynchronous shading to Pascal. Is there a toggle for switching off asynchronous shading in Doom? I recall that there was one in [url=http://www.anandtech.com/show/10067/ashes-of-the-singularity-revisited-beta/6<]Ashes of the Singularity[/url<].

            • Theolendras
            • 3 years ago

            I think the Fury X is the very reason why AMD introduced the hardware scheduler. They had a hard time feeding it in some circumstances. Async compute obviously improves hardware utilization, which in turn probably pays off more on an architecture with some idling units…

    • Bensam123
    • 3 years ago

    And looks like another win for AMD as far as older GPUs running newer software. AMD definitely seems to age more gracefully than Nvidia, regardless of initial performance numbers.

      • wingless
      • 3 years ago

      Not sure why this comment was down voted. This is a verifiable fact. Red Tech Gaming did a YouTube video on this subject. As time goes on, AMD GPUs have more longevity than their nVidia counterparts. One theory is that AMD improves their drivers over time (obviously) while nVidia either purposefully gimps performance on older GPUs or (probably more reasonably) ends driver optimizations for their older GPUs.

      Long term, an AMD card tends to just get faster.

        • tipoo
        • 3 years ago

        I’ll join in on this chain of downvotes and say I’m not sure why it’s being downvoted either – ask Fermi owners about long term support vs AMD from the same gen.

          • chuckula
          • 3 years ago

          [quote<]ask Fermi owners about long term support vs AMD from the same gen.[/quote<]

          OK, I'll ask:

          Q: Chuckula, you still use an old Fermi card; how is AMD's Vulkan support for a pre-GCN 6900 series card from the same time period?
          A: What support?

          [Edit: Here's a GTX-560Ti, which is actually also what I'm running in my secondary system, that actually runs Doom 2016: [url<]https://www.youtube.com/watch?v=kqtY4CvJFuk[/url<] Obviously a 5 year old card is not miraculously fast, but just try to compare a pre-GCN 6-series AMD card running at the same settings, and no, Vulkan won't help since those cards don't have any Vulkan support.]

            • swaaye
            • 3 years ago

            I haven’t even tried to run Doom on my machine with a 6970. The driver support essentially ended last year. I have actually played a few missions of Ashes of the Singularity on it and that was not pretty. Lots of strange stuttering pauses.

            • tipoo
            • 3 years ago

            The GTX-560Ti is Fermi 2.0; the 400 series was Fermi. What I was getting at was that per-game optimizations for Fermi ended when it was two generations back, while even your example of the 6900 was getting per-game optimizations until late 2015, when it was legacy'ed, meaning it still receives drivers but no architecture optimization.

            And every GCN part out there, an architecture that launched the following year, is still enjoying per-game optimizations.

            • chuckula
            • 3 years ago

            [quote<]GTX-560Ti is Fermi 2.0, the 400 series was Fermi.[/quote<]

            Yeah, I'm sure there's not one single quote from 2011 about how the 5 series Fermi is just a rebrand. Even if "Fermi 2.0" is some miraculous improvement over "Fermi 1.0", the fact is a Radeon 6-series card from the same time period as "Fermi 2.0" has literally zero Vulkan support and is saddled with AMD's broken OpenGL implementation.

            The whole "Ngreedia never gives support for old cards!" line was already contradicted before you made your post, since I posted a clear video of that Fermi "2.0" card from 2011 doing just fine with Doom 2016 in a way that we sure aren't seeing from AMD's cards of the same era.

            • DoomGuy64
            • 3 years ago

            I tested the Doom beta on my 470, being in between graphics cards at the time, and the performance was terrible. Fermi is too old to play modern games, and you have to lower too many settings to get acceptable performance. Not to mention resolution is limited to 1080p over HDMI, and the card doesn't support DisplayPort. Also, you specifically mentioned Vulkan, so I'll return the question: does Fermi support Vulkan?

            [url<]http://wccftech.com/nvidia-fermi-geforce-400-500-vulkan-api-support/[/url<]

            Nobody should be using Fermi or VLIW today. It's pure masochism. Whether or not Fermi still gets updates is irrelevant when the architecture is clearly too old to handle modern titles, and if you really want to argue future proofing, the 6970 had 2GB of RAM while the 470 had 1.25GB, giving AMD's VLIW cards an edge over Fermi in games that needed the VRAM. The majority of Fermi cards that had extra VRAM were usually the x60 Ti cards, which were already underpowered. The video you linked showed the user running Doom @ 720p.

            Another feature missing from Fermi is accelerated video recording, which is why you don't see too many of these cards being used in let's play videos. Time for Gramps to retire, I'd say.

            • chuckula
            • 3 years ago

            [quote<]Also, you specifically mentioned Vulkan, so I'll return the question: does Fermi support Vulkan?[/quote<]

            The original poster made the fallacious argument that AMD's support of old GPUs from the Fermi era is so much better than Nvidia's, which is flat wrong. I never once said that Nvidia had Vulkan support for Fermi. I merely pointed out that the "miracle" Vulkan support that seems to be a requirement for AMD hardware to run the game well doesn't exist for those old AMD GPUs, in spite of the original poster's incorrect implication that AMD provides this magically great level of support for old hardware.

            I also correctly pointed out that the existing OpenGL support for Fermi was good enough to let even a 5 year old GPU play Doom 2016, as evidenced by the YouTube video, which was further hampered by screen recording software.

            • DoomGuy64
            • 3 years ago

            Concupiscence just linked a video of someone with a Radeon 5830 playing Doom. The VLIW cards being EOL doesn't mean they can't run games; it just means they are no longer getting optimization, because there is no point. Just because your card [i<]can[/i<] play a new game doesn't mean you should be using something 5 years old. Both cards are running Doom at 720p below 60 fps. No thank you.

            GCN was designed for next-gen APIs. That's why it does better. GCN 1.0 was terribly inefficient at DX11, which improved with driver updates and newer hardware revisions; 1.1 is where GCN actually started getting good at DX11. Either way, GCN was designed for next-gen APIs, which is exactly why GCN gets such a large boost when using them. DUH.

            • swaaye
            • 3 years ago

            Yeah I would think that modern games aren’t being written for old chips. In other words, they don’t look at what techniques will run acceptably on Fermi or Kepler anymore.

            GCN has the benefit of chips being rebranded for years and the architecture being fairly similar between all of them. But we saw with some Mantle games that later GCN designs were problematic so the architectures certainly aren’t identical.

            On VLIW, even the VLIW4 hardware is probably hopeless now because it was never getting good utilization on non-graphics shader processing, and compute is much more prevalent these days.

            • Concupiscence
            • 3 years ago

            Upvoted to mitigate the downvote brigade. The faster Fermi cards really can grunt through Doom ’16. My GTS 450 wasn’t quite up to the job – it ran OK until I got to the Foundry, where performance dipped below 20 fps and it became unplayable – but as long as it’s a 460 or faster and it has at least a gig of RAM on it, Fermi’s capable. It’ll just heat your house while doing it, and as everyone’s said, it doesn’t support Vulkan.

            Here's a thing: Doom '16 [url=https://www.youtube.com/watch?v=7efy64gr4eA<]managing to run on some poor bastard's Radeon 5830.[/url<]

            I think AMD currently enjoys a support advantage because all their driver releases are for various, largely similar variations of GCN, while Nvidia still provides some level of support for [b<]four different, variably related graphics architectures[/b<]. And from personal experience I'll share how much it sucks when they decide to EOL hardware that's only guilty of not being sexy any more.

            • BurntMyBacon
            • 3 years ago

            I saw some of these comments floating around the internet and decided to do some testing.
            TLDR: I don’t think nVidia is in the business of sabotaging their own cards, but their older architectures have a harder (not impossible) time with modern games.

            [quote<]The faster Fermi cards really can grunt through Doom '16 ... as long as it's a 460 or faster and it has at least a gig of RAM on it, Fermi's capable.[/quote<]

            I have a GTX470 that does not do well here in my experience. The system is perhaps under-powered with a Phenom II 955BE @ 4.4GHz and 16GB DDR3-1600, but it is the same system I used to compare the following cards. I have a buddy who lent me an old 560Ti (448C) to check out. It did seem to work a fair bit better, though it really should not have been that much faster than the GTX470. A Radeon 5850 seemed to handle it better than these Fermi cards on this particular system, though not by an appreciable amount.

            --- Opinion based on personal experience ---

            While I don't subscribe to the nVidia-is-sabotaging-their-old-cards mantra, I do feel as if they stop optimizing for their old architectures (I'm not talking one generation out). To be fair, there is a practical limit to how long you can put resources into optimizing old architectures before you lose money on the product. Sometimes optimizations for the current generation architecture are prioritized such that the others receive them a bit later. I also doubt very much that modern games are targeted at their old architectures either. Finally, given how good nVidia is at driver optimizations, I can see how some people may get the feeling that they are being sabotaged when their drivers are no longer optimized for new games.

            ATi also stops optimizing for their old architectures (pre-GCN at the moment). They, however, have the benefit of modern games being targeted at the GCN architecture due to their console wins. While not all GCN is created equal, there are definitely commonalities that help them here. They are historically not as proficient at driver-level optimizations as nVidia, either, leading to initial sub-optimal performance that often gets better over time. The fact that DX12 and Vulkan don't rely nearly as much on driver-level optimization helps their day 1 performance. Because their cards are often not as well optimized as their nVidia counterparts and GCN is a targeted architecture, they don't have as large of a performance falloff when optimizations stop.

      • Krogoth
      • 3 years ago

      GCN and VLIW architectures tend to rely more on brute force than their Nvidia counterparts. Hawaii really exemplifies this. There's just more untapped software potential that gets exploited by future optimizations.

      Early 7970 and 290X adopters have been very fortunate that their silicon has lasted so long. The only thing that comes close on the Nvidia side is either 8800GTX or GTX 680.

        • tipoo
        • 3 years ago

        The 8800 launched the month after the PS3 and could run nearly everything nearly till the end of that generation, “wait for the generation after a console launch” seems like a good strategy for longevity 😛

        The iterative consoles may muck things up, though I expect the base will still be the optimization target…I’d hope, for the tens of millions of users.

        • AnotherReader
        • 3 years ago

        I would disagree with the 680; [url=http://tpucdn.com/reviews/AMD/RX_480/images/perfrel_1920_1080.png<]it is no match for the 7970 now[/url<]. I think it is a limitation of the optimization in drivers rather than of the architecture. These aren't massive out of order processors that can extract ILP from even poorly optimized code; these are in-order, massively threaded cores requiring just the right instruction mix. Nvidia is just better at optimizing; AMD does get there with time though.

          • BurntMyBacon
          • 3 years ago

          While I don't necessarily disagree with your initial point, seeing as your link has neither the 680 nor the 7970, I don't see how it helps make this point.
          [quote<]I would disagree with the 680; [url=http://tpucdn.com/reviews/AMD/RX_480/images/perfrel_1920_1080.png<]it is no match for the 7970 now.[/url<][/quote<] Wrong link?

            • AnotherReader
            • 3 years ago

            I chose this as it is the most recent one I could find: the 770 is a mildly overclocked 680 (3%) and the 280x is an overclocked 7970 (8%). The comparison between the 680 and the 7970 would be a bit kinder to the 680.

            • BurntMyBacon
            • 3 years ago

            I agree, but when talking about the longevity of particular cards (especially when driver optimizations that may or may not be available to older cards are part of the discussion) it is important to compare the actual cards and not newer near equivalents. Again, I don’t necessarily disagree with your initial point. It’s good that you clarified your comparison as well.

            • AnotherReader
            • 3 years ago

            If the differences were as substantial as between the GTX 480 and the GTX 580, I would have sought an older comparison, but given that these are the same GPUs with different memory and GPU clocks, I didn’t think it necessary.

      • Pancake
      • 3 years ago

      No. Interpretation fail. What it really signifies is lack of progress. They’ve been tweaking GCN forever while NVidia have been developing new, more efficient architectures.

      And this is unfortunate, as what will really hurt AMD is a lack of mobile design wins. With Pascal being so hugely, absurdly more efficient, who would put a Radeon in a laptop? You won't sell any.

    • OhYeah
    • 3 years ago

    Here is my config:

    Core i5-3570k
    16 GB RAM
    Radeon 380x 4 GB
    256 GB SSD
    Win 7 Pro SP1
    2560×1440

    With OpenGL and all medium settings the fps was around 50-60. Now with Vulkan I switched to "high" settings and the fps is stable around 60, occasionally going 70+. So in practice around a 20% performance gain seems very reasonable for a mid-range gaming rig. With FreeSync the game is very smooth at 60 fps. So far *very* impressed with Vulkan and how it has been implemented.

      • sweatshopking
      • 3 years ago

      You have two weeks to fix that OS

        • raddude9
        • 3 years ago

        Yup, he’s severely lacking in spyware 😉

          • Pancake
          • 3 years ago

          He’ll be severely lacking in DX12 too.

    • bfar
    • 3 years ago

    What a showcase for the API! Man I hope developers flock to Vulkan rather than DX12.

    While I admire what Microsoft have done with DirectX over the years, it’s not healthy for a company like Microsoft to have a virtual monopoly on PC games development APIs. There’s always the risk they’ll hamstring the experience in a clumsy attempt to monetize it. They came pretty damn close with the Windows store DX12 fiasco a few weeks back.

      • Zizy
      • 3 years ago

      While I agree it is better to have at least semi-neutral grounds here, I don’t think this is going to happen. DX has a nice head start, better tools and more devs proficient in it. Vulkan being almost everywhere isn’t going to help when the relevant other markets for most PC games are Xbox and PS, which don’t have Vulkan, and Xbox has DX.
      I believe we will maintain the status quo – DX on PC, OGL (and now Vulkan) on mobile.

        • bfar
        • 3 years ago

        I was going with aspiration rather than prediction! But yea, I don’t see DX going anywhere. The support that’s offered to developers is very valuable.

        That said, I think the ground has shifted a bit. There are new factors to consider. The rise of mobile devices and the success of PS4 over xbone are influences. If you want to target a big audience, DX is no longer a no brainer.

        DX got out of the traps early, but there have been hiccups, such as that silliness with the Windows store and vsync. And its uptake is still painfully slow.

          • Zizy
          • 3 years ago

          Mobile doesn’t really compete with PC gaming all that much. Flash games on sites like Kongregate moved to the Facebook and then phones, but those never really competed with AAA PC/console games. Very slight overlap between them right now.
          PS4 dominance is a large factor – it is often better to target PS4 first with DX port for XB1/PC afterwards. I still don’t see Vulkan gaining in this scenario though. DX port is a must to reach Xbox, which has way more gamers than Linux (and even W7->8.1), so Vulkan is just a 3rd – at which point most studios won’t bother.

          I don't think uptake is really slow. DX12 is less than a year old with a few games already and many more coming soon. That's better than even DX11, which had the large advantage of following a very poorly received DX10.

            • tipoo
            • 3 years ago

            That's the problem I have with phone games. I have this 200 GFLOPS GPU, wicked-CPU device in my pocket that would easily give the 7th-gen consoles at least a run for their money; with a controller it should be able to deliver some nice-looking, nice-playing games when away from home, but it's mostly meaningless little apps you play for 5 minutes on the toilet instead.

            There are a couple of problems with getting the level of game we want onto mobile. Very few people have mobile game controllers, so few games assume they're there, and touchscreen games limit scope. There's also the issue of app cost: if an average app is 5 dollars, AAA developers aren't going to sink 200 million dollars into making games as good as last-gen console titles for that market. And storage footprint, I think there are still limits there.

      • tipoo
      • 3 years ago

      Ideally I'd like to get back to a point where both APIs are highly competitive and have significant marketshare, to get to that old point we used to have where they would continually innovate, introduce new features, and leapfrog each other. Now that they're low-level, though, more of the optimization workload is on developers, but there are still other rendering features they could implement, I'm sure.

    • End User
    • 3 years ago

    In Doom at 2560×1440 using a GTX 1080:

    Ultra settings with FOV @ 130, AA @ SMAA 1TX

    Using the same checkpoint to test:

    OpenGL – hovering at 60 fps
    Vulkan – hovering above 130 fps

      • JosiahBradley
      • 3 years ago

      Did you have vsync on for opengl? Those results don’t sound possible at all. Especially seeing as other reviewers saw no increase in performance.

        • Meadows
        • 3 years ago

        According to TotalBiscuit on youtube, there is currently some sort of “60 fps” limiting bug with Doom that several people experience which is only fixed by repeatedly changing to windowed mode and back again. This sounds like it.

        • End User
        • 3 years ago

        V-Sync is off as I’m using a 165 Hz G-Sync display (set to 150 Hz).

        I'm just going by the FPS displayed once the checkpoint loads. Once I move around, the FPS goes up past 80 when using OpenGL. When moving to the same view under Vulkan I top out at 160 FPS.

        The only change I am making is the Graphics API switch between OpenGL and Vulkan.

        • End User
        • 3 years ago

        According to WCCFTech there is a [url=https://www.youtube.com/watch?v=GlzKPBIjZPo&t=0s<]massive performance jump[/url<] when using Vulkan.

    • kvndoom
    • 3 years ago

    Hmph, I updated drivers on my GTX970 to 368.69 and I don’t get the prompt to pick between OpenGL and Vulkan when I start the game.

      • End User
      • 3 years ago

        You have to go into the game settings and manually change the Graphics API option (under Advanced) from OpenGL to Vulkan.

        • kvndoom
        • 3 years ago

        Thanks! Will try this tonight. 🙂

    • Freon
    • 3 years ago

    How can I benchmark Doom? Is there a record/timedemo feature? A common demo loop that we can share?

      • biffzinker
      • 3 years ago

      No timedemo, you would need to pick an area in one of the maps, and record FPS/frame times over 2-3 iterations.
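
The bookkeeping for that kind of manual run is easy to script. Here's a minimal sketch, assuming each pass over your chosen area has been exported as a plain text file with one frame time in milliseconds per line (the capture tool and the file names are up to you and purely illustrative):

```python
# Summarize manually captured frame-time runs: average FPS plus a
# 99th-percentile frame time as a rough slow-frame indicator.
# Assumes one frame time (in ms) per line in each input file.
import statistics
import sys

def summarize(path: str) -> None:
    with open(path) as f:
        frame_ms = sorted(float(line) for line in f if line.strip())
    avg_fps = 1000 / statistics.mean(frame_ms)
    p99_ms = frame_ms[int(0.99 * (len(frame_ms) - 1))]
    print(f"{path}: {len(frame_ms)} frames, avg {avg_fps:.1f} fps, "
          f"99th percentile {p99_ms:.2f} ms")

if __name__ == "__main__":
    # e.g. python summarize_runs.py vulkan_run1.txt vulkan_run2.txt vulkan_run3.txt
    for run_file in sys.argv[1:]:
        summarize(run_file)
```

Comparing both the averages and the 99th-percentile figures across two or three passes per renderer keeps a single hitchy run from skewing the OpenGL-vs-Vulkan comparison.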

    • Kretschmer
    • 3 years ago

    Maybe this will fix the “crash on boot” issues I’ve had with Doom4 so I can finish the damn game.

    • swaaye
    • 3 years ago

    I’m wondering if it will only do anything for you if you’re CPU bound.

      • DPete27
      • 3 years ago

      Like Mantle and DX12, Vulkan is a "close to the metal" API, one benefit of which is generally lower CPU overhead. So yes, it will have more impact for systems that are CPU bound
      **if implemented properly**

      • Concupiscence
      • 3 years ago

      It’ll do more for you if you’re CPU-bound, but part of what Vulkan does is minimize chances for an API call to waste time performing unnecessary work. That’s a particularly big problem for a complicated, legacy-burdened API like OpenGL, which still presents a model of activity that doesn’t represent what’s really happening in the state and design of modern video hardware.

      edit: That’s not to say that OpenGL hasn’t taken steps to mitigate some of those problems. The hubbub around AZDO a little while back and the performance id was able to wring out of the API by sticking to 4.3 as a lower cutoff are testament to the hard work put in on all sides. But Vulkan is definitely more capable of taking advantage of modern hardware’s full capabilities… it just takes more work and micromanagement to get there, and the spec is still maturing.

    • LostCat
    • 3 years ago

    Guess I’ll have to pick up Doom soon. Hadn’t felt the need yet.

      • Laykun
      • 3 years ago

      Generally you'd pick it up for the gameplay, not the graphics API. I can wholeheartedly recommend it for the gameplay.

        • LostCat
        • 3 years ago

        Yeah, and if it was like Doom 3 I’d wait til it was $3 in a sale and play it for about an hour, but apparently it isn’t.

        • LostCat
        • 3 years ago

        Generally I pick games up for their entire package – gameplay and technology.

        • BurntMyBacon
        • 3 years ago

        [quote<]Generally you'd pick it up for the gameplay[/quote<] Get Out.

    • DPete27
    • 3 years ago

    Sooo, no word from NGreedia how much performance improvement the Vulkan API gives?

    Is there a way to turn OFF async compute using the RX480 to test how much improvement is coming from that alone?

      • tipoo
      • 3 years ago

      Last point would be interesting. Some devs say they save 3 to 5 ms with async, so that's pretty huge when you only have 16.6 ms to render a frame (or even 33.3). Saving up to 30% of render time is massive for a feature a certain company underplayed.

      [url<]http://wccftech.com/async-compute-praised-by-several-devs-was-key-to-hitting-performance-target-in-doom-on-consoles/[/url<]
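
For what it's worth, the "up to 30%" figure falls straight out of the numbers quoted there:

```python
# Fraction of the frame budget reclaimed if async compute saves 3-5 ms,
# against the 16.6 ms (60 fps) and 33.3 ms (30 fps) budgets mentioned above.
for budget_ms in (16.6, 33.3):
    for saved_ms in (3.0, 5.0):
        print(f"{saved_ms:.0f} ms saved of a {budget_ms} ms budget = "
              f"{saved_ms / budget_ms:.0%}")
```

Five milliseconds of a 16.6 ms budget is the 30% best case; the same saving against a 33.3 ms budget is a more modest 15%.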

      • Concupiscence
      • 3 years ago

      Yes, async compute can be turned off. 🙂 It’s a setting in Doom’s advanced video options that can be toggled, though you may need to exit the game and hop back in to compare.

      • Voldenuit
      • 3 years ago

      [quote<]Sooo, no word from NGreedia how much performance improvement the Vulkan API gives?[/quote<] Guru3D has benchies: [url<]http://www.guru3d.com/news-story/new-patch-brings-vulkan-support-to-doom.html[/url<] Early days still, so it's probably a bit too soon to say whether it's a case of Vulkan not being supported in the 368.69 drivers g3d tested, Vulkan not providing much benefit to nvidia, or nvidia's OpenGL path already being efficient, or a combination thereof.

        • Concupiscence
        • 3 years ago

        Vulkan’s definitely supported in that driver revision. My guess is the one-two punch of Nvidia’s GL implementation being more tightly optimized in combination with the (current) lack of support for async compute. We’ll see what subsequent driver work yields for Pascal.

      • Laykun
      • 3 years ago

      NGreedia, implying that nvidia is just out to make money and by contrast AMD isn’t. If you look at AMD’s yearly financial reports it really seems like they don’t want to make any money at all.

        • LostCat
        • 3 years ago

        I think it’s the tactics NV has used several times in the making of said money.

        That said, every time I see ‘nGreedia’ I think someone needs a baseball bat to the head.

          • kvndoom
          • 3 years ago

          No worse than M$, and that one’s been around since the last [i<]century![/i<]

            • fyo
            • 3 years ago

            Technically, “M$” has been around since the last [i<]millennium[/i<].

            • hansmuff
            • 3 years ago

            Both show a lack of maturity, so if that’s what one is going for, one is just as good as the other.

        • Concupiscence
        • 3 years ago

        I thought NGreedia was a joke and part of Tech Report lore at this point, referring back to [url=https://techreport.com/news/27261/assassin-creed-unity-pc-requires-6gb-of-ram-gtx-680<]a very, VERY special comment thread.[/url<] I upvoted DPete27 to keep him from suffering because he knew his history 'round these parts.

          • DPete27
          • 3 years ago

          Thank you. In this case, yes, I was just using the term “NGreedia” for historical reasons only.

          In comments about GSync though….I use “NGreedia” in its full intended disparagement.

            • Concupiscence
            • 3 years ago

            And rightly so.

    • Chrispy_
    • 3 years ago

    I wonder if they've fixed antialiasing with Vulkan:

    In [s<]DirectX[/s<] [i<]*OpenGL[/i<] mode, any of the non-temporal AA modes exhibited sparkle and dithering on reflections, whilst the temporal AA modes looked great but had a frame of extra lag. I'd love the non-temporal AA to look as good as the temporal stuff without the lag. [i<]* - derp edit: [/i<]

    • adampk17
    • 3 years ago

    Those purported performance improvements are exciting! I hope gains in the ~20% realm materialize for the green team as well.

    • maxxcool
    • 3 years ago

    Sweet, now we need a DX12 vs VULKAN 1080p "everything turned up" throwdown between the $200-250 cards on the market, and a 4K throwdown as well on $400-600 cards… let's get it on!

    edit coffee

    • maxxcool
    • 3 years ago

    “purportedly” that word! 😉

      • Stochastic
      • 3 years ago

      I always think of the Techreport now whenever I see it.

    • tipoo
    • 3 years ago

    “though the company says it’s working with Nvidia to bring async compute features to the green team’s graphics cards, too.”

    Now, Pascal only, or "we swear it's coming in the drivers HEY LOOK PASCAL" Maxwell? Wouldn't be the first time…

    The gains sound good over OpenGL, but I wonder what they would have been over DX11 had the game been made in that, since OpenGL was routinely 30% or more behind it.

      • shank15217
      • 3 years ago

      What you said doesn’t make any sense

        • tipoo
        • 3 years ago

        Which part? Nvidia did say async drivers were in the works for Maxwell, leading some to think the hardware had the capability but it wasn’t yet turned on. And then when Pascal struck they billed that as their first architecture with the capability.

        [url<]http://www.guru3d.com/news-story/nvidia-will-fully-implement-async-compute-via-driver-support.html%20[/url<]

        • nanoflower
        • 3 years ago

        I’ll translate for you

        Nvidia doesn't have async compute ready (at least not on their cards) so they try to distract users and discount the importance of async compute. There's also the question of whether async compute support will come to Maxwell and older cards or just Pascal.

      • Concupiscence
      • 3 years ago

      On your last point, an awful lot depends on how OpenGL was being used. If you were talking about a representative sample being a major title’s Linux support implemented via middleware that translated Direct3D 11 calls and resources to OpenGL, I can believe that kind of variance. But more than any other developer id has always done an excellent job harnessing OpenGL. It’s possible they could have squeezed some extra performance from a native Direct3D 11 renderer, but I wouldn’t predict it to be more than a single digit percentage. Doom’s OpenGL renderer was already a very well-optimized piece of work.

        • tipoo
        • 3 years ago

        I could believe that, no doubt id would be among the best with OpenGL compared to other efforts I’ve seen. Guess I’ll never know what this game would have performed like on DX.

          • stefem
          • 3 years ago

            Then why did you make such a comment about OpenGL performance?

            • tipoo
            • 3 years ago

            It's regularly lower in OpenGL games, which was my original comment, but I have no way of comparing id's OpenGL performance to id's DX performance because they don't make DX games, and I also believe they're one of the most talented technical bunches around and could do right by OpenGL.

            What’s the beef?

            • stefem
            • 3 years ago

            If you want to compare API performance you shouldn't base your conclusions on ports made with wrappers and middleware, that's all.
            OpenGL has its own strengths compared to D3D11, like lower draw-call overhead, for example.

      • bfar
      • 3 years ago

      Even if Nvidia implemented Async compute in the same way AMD did, I suspect we wouldn’t see huge performance gains. Nvidia solutions had already been getting excellent hardware throughput via per game driver optimizations, and Pascal’s new preemption feature is an additional workaround. AMD achieves throughput efficiency via async compute, and it’s arguably a neater solution, but the catch is they had to wait for the new APIs to realize the benefit. The other thing is that async compute is seemingly challenging to code for, so we might not see it as a feature in every single game.

        • tipoo
        • 3 years ago

        This is true; async's big gain is applying unused hardware to compute without slowing down render performance, and Nvidia seems to have higher utilization per GPU core than AMD. In that case it also means they have less headroom for doing GPU compute in games, though, or at least they would in a theoretical world where both had the same execution hardware (the 1070 and 1080 can of course brute-force past most/all AMD efforts).
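
One crude way to picture that trade-off is to model async compute as filling whatever idle capacity the graphics pass leaves on the shader cores, so a GPU that already keeps its cores busier has less slack for async to reclaim. A sketch with entirely hypothetical numbers:

```python
# Toy model of async compute: per-frame compute work overlaps into the idle
# fraction of the graphics pass instead of running serially after it.
# All numbers are hypothetical, purely to illustrate the utilization argument.

def frame_ms_with_async(graphics_ms: float, compute_ms: float,
                        gfx_utilization: float) -> float:
    """Serial cost minus however much compute fits into graphics-pass idle time."""
    idle_ms = graphics_ms * (1.0 - gfx_utilization)
    absorbed_ms = min(compute_ms, idle_ms)
    return graphics_ms + compute_ms - absorbed_ms

GRAPHICS_MS, COMPUTE_MS = 12.0, 4.0  # assumed per-frame workloads

for label, utilization in (("lower-utilization GPU", 0.65),
                           ("higher-utilization GPU", 0.90)):
    serial_ms = GRAPHICS_MS + COMPUTE_MS
    async_ms = frame_ms_with_async(GRAPHICS_MS, COMPUTE_MS, utilization)
    speedup_pct = (serial_ms / async_ms - 1) * 100
    print(f"{label}: {serial_ms:.1f} ms serial -> {async_ms:.1f} ms overlapped "
          f"({speedup_pct:.0f}% faster)")
```

Under that assumption the architecture with more idle shader time sees the bigger win from async compute, which is the intuition behind the comments above.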
