Examining early DirectX 12 performance in Deus Ex: Mankind Divided

Deus Ex: Mankind Divided is one of the hottest games on the block right now. It also happens to be one of AMD’s Gaming Evolved titles, so it’s no surprise that the company is using it as a showcase for its graphics cards’ performance—especially now that a preview version of the DirectX 12 render path is available on Steam as a beta release branch. DX12 has offered performance boosts for AMD graphics cards in several titles now, so I was curious to see how the new renderer would affect Radeons’ performance in DXMD, as well.

After playing around with DXMD’s DX12 preview last night, however, I wasn’t surprised to find some rough edges in this beta software. I noticed a lot of hitching and jerkiness in DX12 mode with GeForces and Radeons alike, so when AMD sent over some flattering average-FPS benchmark numbers as part of its press materials for the game, I knew it was time to pull out PresentMon and see just what the story behind that roughness was. You can see some of those numbers over at WCCFTech, for reference. Other sites, like ComputerBase, have also measured average-FPS results with Deus Ex’s DX12 renderer.
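For readers who want to poke at this kind of data themselves, PresentMon writes out a CSV log with one row per presented frame. Here’s a minimal Python sketch of pulling per-frame times from such a log; it assumes the MsBetweenPresents column that PresentMon emits, though the exact column set can vary between versions:

```python
import csv

def load_frame_times(csv_path):
    """Read per-frame present intervals (in ms) from a PresentMon CSV log.

    Assumes the log has an MsBetweenPresents column; the exact column set
    can vary between PresentMon versions.
    """
    with open(csv_path, newline="") as f:
        return [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]
```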

Before we share some test numbers of our own, I want to make it clear that this is a quick-and-dirty look at one game’s performance, not a full review. While we tried to maintain the same rigor we do for our graphics card reviews, we didn’t test the full range of graphics cards or games we would have for a full review. Our goal here is just to get a sense of how this beta version of Deus Ex performs in a way that average FPS numbers just don’t offer. These results will most likely change once Deus Ex’s DX12 renderer gets an official release on September 19.

Here are the specifications for our test system:

Processor Intel Core i7-6950X
Motherboard Gigabyte GA-X99-Designare EX
Chipset Intel X99
Memory size 64GB (4x 16GB DIMMs)
Memory type G.Skill Trident Z DDR4-3200
Memory timings 16-18-18-38
Chipset drivers Intel Management Engine 11.0.0.1155
Audio Integrated X99/Realtek ALC1150
Hard drive Intel 750 Series 400GB NVMe SSD
Power supply Seasonic Platinum SS660-XP2
OS Windows 10 Pro

And here are the specifications and driver versions for each graphics card we used in our testing:

Card                           | Driver revision        | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB)
MSI GeForce GTX 1070 Gaming Z  | GeForce 372.70         | 1632                      | 1835                  | 2025               | 8192
EVGA GeForce GTX 1060 6GB SC   | GeForce 372.70         | 1759                      | 1898                  | 2002               | 6144
AMD Radeon R9 Fury X           | Radeon Software 16.9.1 | 1050                      | —                     | 500                | 4096
AMD Radeon RX 480 8GB          | Radeon Software 16.9.1 | 1120                      | 1266                  | 2000               | 8192

Here are the graphics settings we used for both our DirectX 11 and DirectX 12 tests:

Without any further ado, let’s see how this quartet of cards performs in Deus Ex: Mankind Divided.


DirectX 11 results

Under DX11, all of the cards we tested deliver solid-looking frame-time plots with only a few major spikes. Going by average FPS, the R9 Fury X and the GTX 1070 are closely matched in their weight class, while the Radeon RX 480 opens a bit of a lead on the GTX 1060 6GB. Our 99th-percentile frame-time graphs tell the most interesting story, however. The GTX 1070 and the Fury X need more than 16.7 ms to deliver 99% of their frames, suggesting more variance than average FPS alone can tell us. Meanwhile, the RX 480 and GTX 1060 both deliver 99% of their frames in about 30 ms.

Recall that if a graphics card delivers every frame within 16.7 ms, it’s maintained a perfect 60 FPS throughout our tests with no hitches. In a similar vein, a card that delivered every frame on a 33.3-ms interval would have maintained a perfect 30 FPS. Any time spent beyond 16.7 ms or 33.3 ms is time during which the frame rate drops beneath 60 FPS or 30 FPS. Let’s see just how much time each card spends working on tough frames now.
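To make the arithmetic concrete: a frame-time budget is just 1,000 ms divided by the target frame rate, and a 99th-percentile frame time is simply the frame time that 99% of frames beat. Here’s a quick sketch of one simple way to compute both, reusing the load_frame_times() helper sketched earlier (a nearest-rank percentile is close enough for this purpose):

```python
def fps_threshold_ms(fps):
    """Frame-time budget for a target frame rate: 1000/60 ≈ 16.7 ms, 1000/30 ≈ 33.3 ms."""
    return 1000.0 / fps

def percentile_frame_time(frame_times, pct=99.0):
    """Frame time (ms) that `pct` percent of frames come in under; lower is better."""
    ordered = sorted(frame_times)
    # Nearest-rank percentile: first frame slower than pct% of the sample
    index = min(len(ordered) - 1, int(len(ordered) * pct / 100.0))
    return ordered[index]
```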


Our “time-spent-beyond-X” graphs are meant to show “badness,” or the amount of time in our one-minute test period where a card might have delivered less-than-fluid animation. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure that if you’re not rendering any faster than 20 FPS, even for a moment, the user is likely to perceive a slowdown. 33.3 ms correlates to 30 FPS, or a 30-Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. And 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.
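In code terms, the metric is straightforward: for every frame slower than a given threshold, count only the portion of its frame time that spills past that threshold, then sum those excesses over the run. A sketch under the same assumptions as the snippets above:

```python
def time_spent_beyond(frame_times, threshold_ms):
    """Total time (ms) spent past `threshold_ms` across all slow frames.

    A 70-ms frame contributes 20 ms to the 50-ms bucket and 36.7 ms to the
    33.3-ms bucket; frames faster than the threshold contribute nothing.
    """
    return sum(t - threshold_ms for t in frame_times if t > threshold_ms)

# Example: total "badness" past each threshold for a one-minute run
# for ms in (50.0, 33.3, 16.7):
#     print(ms, time_spent_beyond(frame_times, ms))
```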

Between the RX 480 and the GTX 1060, the most interesting graphs may be those of the time spent beyond the 50-ms and 33.3-ms thresholds. The RX 480 spends slightly longer working on frames that take more than 50 ms and 33.3 ms to complete, and that time translates into slight-but-noticeable hitches during gameplay. The GTX 1060 spends barely any time past 33.3 ms working on frames, and it spends no time past 50 ms. That means it offers a perceptibly smoother gameplay experience, even if it’s not turning out as many frames as the RX 480 does.

The GTX 1070 and the Fury X, on the other hand, are about as evenly matched as they come in this title. Neither card spends any time past 50 ms working on tough frames, and the GTX 1070 only has a vanishingly slight hiccup that takes it past 33.3 ms. Each card spends about three seconds of our one-minute test period working on frames that would drop frame rates below 60 FPS, too. Let’s see what switching over to Deus Ex’s DX12 renderer can tell us about each card’s performance.


DirectX 12 performance

So that’s a thing. Switching over to DXMD’s DirectX 12 renderer doesn’t improve performance on any of our cards, and it actually makes life much worse for the Radeons. The R9 Fury X turns in an average FPS result that might make you think its performance is on par with the GTX 1070 once again, but don’t be fooled—that card’s 99th-percentile frame-time number is no better than even the GTX 1060’s. Playing DXMD on the Fury X and RX 480 was a hitchy, stuttery experience, and our frame-time plots confirm that impression.

In the green corner, the GTX 1070 leads the 99th-percentile frame-time pack by a wide margin, and that translates into noticeably smoother gameplay than any other card here can provide while running under DirectX 12.


The Radeons’ lackluster 99th-percentile frame-time numbers in this test are corroborated by our “time-spent-beyond-X” graphs. In total, both of the red team’s cards spend half a second working on frames that take more than 50 ms to render, and that means an unpleasantly rough gameplay experience while DirectX 12 is enabled. If I hadn’t been collecting data for this article, I would have immediately unticked the DirectX 12 checkbox in DXMD’s options and gone back to the smooth sailing that the game’s DX11 mode offers on all of these cards.

Click over to the time-spent-beyond-33.3-ms graph, and we can see that both Radeons spend a full second working on frames that take longer than that to render. Again, that means we’re spending quite a bit of time below 30 FPS during our one-minute testing period, a noticeable blot on these cards’ smoothness. The Fury X’s monster shader array doesn’t seem to be much help here—it’s only a bit better off than the RX 480.

The GTX 1070 has a much better time of it in Deus Ex’s DX12 mode. The card spends only a hair’s breadth of time past the 33.3-ms mark, and it spends 34% less time than the Fury X working on frames that would drop the frame rate below 60 FPS. I had to double-check whether the GTX 1070 was actually performing this well compared to the Radeons, but the numbers don’t lie. It’s the performance champion in DXMD’s DX12 mode with these settings, and not by a small margin.

Conclusions

I hate to toot TR’s horn here, but tests like these demonstrate why one simply can’t take average FPS numbers at face value when measuring graphics-card performance. We’ve been saying so for years. From our results and our subjective experience, it’s clear that the developers behind Deus Ex: Mankind Divided have a lot of optimizing to do for Radeons before the game’s DirectX 12 mode goes gold in a week and change. AMD’s driver team may also have a few long nights ahead, though in theory, DX12 puts much more responsibility on the shoulders of the developer.

It’s also clear that it’s too early to call a winner between the green and red teams for DirectX 12 performance in this beta build of Deus Ex, even if AMD seems to feel confident in doing so. The Radeon cards we tested perform poorly in our latency-sensitive frame-time metrics in DX12 mode, meaning that the Fury X’s hitchy gameplay stands in stark contrast to its respectable average-FPS result. Even if Nvidia isn’t shouting from the rooftops about Pascal’s performance in DXMD’s DX12 mode right now, the green team has a clear smoothness advantage despite the game’s beta tag. To be fair, we used different settings than AMD did while gathering its performance numbers, but we don’t feel that the choices we made would be much different than those the average enthusiast would make with this hardware.

The one bright spot for AMD among these early numbers is that the R9 Fury X and GeForce GTX 1070 are so closely matched in Mankind Divided once DX12 is out of the picture. The similarly powerful GeForce GTX 980 Ti was the clear favorite in our latency-sensitive 99th-percentile FPS measure when we pitted it against the Fury X, but the Fury X has closed the gap in this one game, at least. It’s worth remembering that DXMD is a Gaming Evolved title, but even so, Fury X owners can enjoy a gaming experience that’s just as smooth as they would get with a Pascal-powered card, and that’s no small feat given AMD’s history with DirectX 11 performance. We’ll just have to wait and see whether similar performance improvements are possible for Radeons running under Deus Ex’s DX12 renderer once that software comes out of beta.

Comments closed
    • mganai
    • 3 years ago

    Wow. Hope they fix this.

    Are there any core comparison benchmarks (i.e., 6700K vs. 6800K/6850K, etc.)?

    • zzzzzzzzzzz
    • 3 years ago

    I guess you guys know that PresentMon is giving invalid results measuring FPS in DX12 in Deus Ex!

    • LostCat
    • 3 years ago

    DX12 support was updated in the game recently btw. It should be final tomorrow afaik.

      • LostCat
      • 3 years ago

      Was updated again on the 23rd but I guess still not final.

    • SOT3
    • 3 years ago

    AMD is very disappointing; the GTX 1070 is clearly the best choice.
    Forget AMD for now until Vega arrives.

      • kruky
      • 3 years ago

      The GTX 1060 and RX 480 are clearly the better choice for 1080p gaming than the GTX 1070.

    • Tristan
    • 3 years ago

    lol, AMD is back

      • SOT3
      • 3 years ago

      GTX1070 is the winner so no they are not “back”. Wait for Vega (RX490)

        • Tristan
        • 3 years ago

        Wait for what? I’ve been waiting for Hawaii, which was worse than the 970/980, then waited for Fiji, which was worse than the 980 Ti, then waited for Polaris, which is worse than the mid-range Pascals; Vega will be worse than the 1080 Ti, and Navi will be worse than Volta.
        AMD doesn’t have the money to make chips better than NV’s. Skilled engineers have their price, and that price is too high for AMD. Polaris and Vega are created by Chinese ‘cost-effective labour’, and their ‘skills’ are best visible on Polaris – low performance and high power draw.

          • LostCat
          • 3 years ago

          Hawaii October 2013
          970+980 September 2014

            • RAGEPRO
            • 3 years ago

            Nevermind that the 390X competes (often favorably) with the 980, and the 290 competes with the 970, with the 290X and 390 falling in between. Heh.

            • Tristan
            • 3 years ago

            but the same 28 nm, and the 390 wasn’t better

            • Krogoth
            • 3 years ago

            Hawaii was a brute-force architecture, but it ended up being faster than the GM204 chips it directly competed against. GM204 chips are just more power efficient.

            Polaris chips are competitive against GP106 in terms of die size, power efficiency, and performance.

          • Krogoth
          • 3 years ago

          Please drop the green-tinted glasses.

          You are embarrassing Jen-Hsun.

    • Krogoth
    • 3 years ago

    [url<]https://youtu.be/D9hqXLYBvnc[/url<]

    • semitope
    • 3 years ago

    what was your benchmark video recorded with? Video for all the cards would be nice.

      • Jeff Kampman
      • 3 years ago

      The GTX 1070, I think? It’s meant to show where we benchmarked, not the gameplay experience.

    • wingless
    • 3 years ago

    Games built on DX12 from the ground-up seem to work well, but adding it as a patch to a DX11 game is disastrous. Lazy developers probably shouldn’t try to work with DX12 now. There’s too much room to make things worse.

    This is all reminiscent of the atrocious early DX11 implementations that were shoehorned into DX10 games years ago. DX12 shouldn’t be a check box on the marketing department’s to-do list for a game. It has to be fundamental from the get-go.

      • Voldenuit
      • 3 years ago

      [quote<]Games built on DX12 from the ground-up seem to work well, but adding it as a patch to a DX11 game is disastrous. Lazy developers probably shouldn't try to work with DX12 now. There's too much room to make things worse.[/quote<]

      It should be said that low-level APIs are not a panacea. I might even go so far as to say that they are a potential headache and could even be disastrous in the long term, as GPU architectures change in the future beyond what current developers code for. There’s a reason why games are hardly ever written in assembly anymore. Sure, the potential gains from multithreading workloads in Mantle/DX12/Vulkan are nice, and they are a significant step forward around a previous bottleneck, but I have to ask, [i<]at what price progress?[/i<]

      My prediction is that as the DX12 landscape evolves, we will start seeing more middleware vendors, as most developers are more interested in (and skilled at) developing [i<]gameplay[/i<] experiences rather than wasting time coding and optimizing (and debugging) one-off low-level engines. It’s a shame that id seems to have lost interest in licensing engines, as that was one of their core skillsets.

        • Anonymous Coward
        • 3 years ago

        Hadn’t thought about it that way… I wonder if DX11 or DX12 games are more likely to be playable 10-20 years from now. (Or maybe neither will be, because it was all tied to short-lived online resources that were shut down.)

        • Airmantharp
        • 3 years ago

        They’re owned by Bethesda, which likely limits their prospects. Wouldn’t want a game using a licensed engine competing with another Bethesda game, etc.

        • Magic Hate Ball
        • 3 years ago

        Low-level APIs will be the only way to push forward with better graphics as process-node shrinks stop being the easiest way to make computers faster.

        Eventually, we’ll have to optimize and parallelize engines to squeeze more shiny out of hardware that has decreasing gains each iteration.

      • bfar
      • 3 years ago

      Right. It’s pure marketing. In fact, there’s absolutely no good reason for a game to be built on more than one API.

    • tipoo
    • 3 years ago

    DX12 and Vulkan kind of just go “Here’s a whole lot of rope, Developers! Do what you want!”

    I expect it’ll be around one game-development cycle (2-3 years) before DX12 is demonstrably better nearly universally, rather than as hit-or-miss as it is right now. Plus, even if devs kick and scream, middleware tools will at least take advantage, or become uncompetitive.

    Remember the PS3’s early development woes? Apart from the byzantine Cell, it was in part that the PS3 arguably had the first new-age low-level API in LibGCM, which it took developers time not to choke themselves with.

    • ltron
    • 3 years ago

    These results mirror my experience with an R9 390X, and I’m surprised I haven’t seen more complaints. If this is AMD’s idea of superior DX12 performance, in a Gaming Evolved title no less, then they need to wake up and stop dreaming, as they’ve got a lot of work to do.

      • Pwnstar
      • 3 years ago

      So you’re blaming a GPU company for the work the developer of the game has to do? Aim your criticism at the right target.

        • rahulahl
        • 3 years ago

        No. He is blaming the GPU company for using these results to show off how AMD GPUs are better than Nvidia’s.

      • beck2448
      • 3 years ago

      AMD’s claims of a premium VR experience and superior DX12 performance are debunked once again. It never stops.

        • I.S.T.
        • 3 years ago

        I would say in this case it’s the developer’s fault. DX12 seems to be bugged as hell in the DEMD implementation.

          • renz496
          • 3 years ago

          Sometimes I just think developers are not ready to use DX12, but for AMD every GE game from now on must be DX12. I still remember AMD touting that Hitman would show the best case of async compute usage, and yet we still see some Radeon performance regress instead of improve in Hitman.

          • K-L-Waster
          • 3 years ago

          And so it begins.

          If the performance is good, “AMD is awesum!!1!”

          If the performance is poor, “Stoopid developers can’t code straight. AMD is awesum!!1!”

            • BobbinThreadbare
            • 3 years ago

            AMD has less efficient drivers, so well written DX12 or Vulkan code does solve that problem. Most devs aren’t very good as shown by this game.

            So stoopid developers can’t code straight is not incompatible with AMD also not being awesum!!1!

            • psuedonymous
            • 3 years ago

            Or in other words: AMD cards run great as long as [i<]someone else[/i<] is willing (and able) to do the software work that AMD are unwilling to.

            • beck2448
            • 3 years ago

            Another AMD myth exposed: the VR premium (not) and DX12 superiority. Kudos to PC Perspective, HardOCP, and The Tech Report for pioneering frame-latency testing, which AMD bitterly opposed.

    • GreatGooglyMoogly
    • 3 years ago

    Not featuring the GTX 1080 is a big omission IMO. As a 1080 owner, I feel this article didn’t inform me at all. I can’t just extrapolate the 1070 numbers in good faith.

    I wouldn’t call DX11 “smooth sailing” either. DX:MD DX11 runs like a dog on my 1080 at times (this game is why I upgraded from my GTX 780 after all).

      • Mat3
      • 3 years ago

      The 1080 does seem to get better than expected gains on DX12 compared to the 1070. Makes me wonder if it has some extra scheduling hardware that is disabled on the 1070.

        • Meadows
        • 3 years ago

        I find that unlikely. If it did, NVidia would’ve already bragged about it at some point.

    • Meadows
    • 3 years ago

    If we’re back to where we used to be with regards to new stuff constantly requiring “driver updates” from the GPU guys, then what was the point of all those months of bluster leading up to this?

    Edit:
    Plot twist: what if the DX11 title-specific optimisations in the latest drivers are actually what cause the issues under DX12?

      • Klimax
      • 3 years ago

      Re ETA: That shouldn’t be happening, as DX12 for the most part uses different calls to the driver, and the kernel driver shouldn’t be affected by it.

    • Pancake
    • 3 years ago

    Surprising result. I had hoped the RX480 would be smoother as it has more memory and async rendering than the 1060.

      • Pwnstar
      • 3 years ago

      I don’t think VRAM is the problem here.

        • Pancake
        • 3 years ago

        VRAM can be helpful in reducing stuttering. It gives the game more space to cache resources (textures, geometry etc) and speculatively prefetch. Just like a CPU…

        This can be particularly helpful in open-world games like GTA V, which have huge amounts of data and where you’re moving through the environment in a predictable way. Of course, I know nothing about Deus Ex’s design. Or GTA V’s…

        • tipoo
        • 3 years ago

        It sometimes helps

        [url<]https://twitter.com/scottwasson/status/748162402265403392?lang=en[/url<]

          • arbiter9605
          • 3 years ago

          That is typical of AMD’s PR machine when doing comparisons: use a previous-gen card instead of the current one. If they’d used the GTX 970’s replacement, the GTX 1070, it wouldn’t look so good.

      • psuedonymous
      • 3 years ago

      “async rendering”

      Sadly, I think we need to retire “Async” to the Buzzword Bingo pile. It appears you don’t even need to know what a compute shader is or does to tout having a hardware vs. software scheduling system as a magical go-faster feature.

    • christos_thski
    • 3 years ago

    Has there ever been a new D3D API that degraded performance like this, without offering graphical improvements? What’s going on? Most new games’ DX12 rendering paths are shit…

      • Airmantharp
      • 3 years ago

      DX12 doesn’t degrade performance; it opens up the hardware to the software vendors.

      And it puts the onus on them to put it to good use: here with DEMD we see that Squeenix sucks at DX12, so badly that the performance is demonstrably worse than with D3D11.

        • christos_thski
        • 3 years ago

        Thank you for this clarification. Having said that, we’ve yet to see a programming studio develop a DX12 path that improves over DX11?

          As far as I’ve seen, only id’s Doom improves upon the DX11 rendering path, but that’s on Vulkan. Maybe it’s a matter of developers getting to grips with DX12?

          • Andrew Lauritzen
          • 3 years ago

          Ashes is better in DX12 than DX11. Beyond that there are some API test type scenarios that show the potential, but I would expect it to take a bit more time for folks to come to grips with the new API as doing DX12/Vulkan/etc. “properly” involves some fairly invasive changes into most engines.

          And to do it really well, you kind of want to drop the DX11 path as well, which isn’t going to be feasible for some time yet.

            • christos_thski
            • 3 years ago

            So, for a while at least, we may expect Microsoft’s Win10-exclusive Xbox One ports to be the games best utilizing DX12…

            • Airmantharp
            • 3 years ago

            You can’t count on the ports either.

            There is the potential for the ports to be better, but they’re still console ports and any game on the PC must still support DX11 as well as DX12 (or use OpenGL, and that’s literally just id).

            Basically, expect DX12 to get more or less parity in the near-ish future, but don’t expect to see a real advantage outside of special cases like AoS. Also, while DX12 looks like a godsend for AMD cards, this quick look on TR shows why that isn’t really the case; further, because DX12 requires the developers to get more intimate with the hardware, Nvidia’s ground game represents a significant advantage.

            • Klimax
            • 3 years ago

            If Ashes is anything like Star Swarm, then the only reason the DX12 code path looks any good is an atrocious DX11 code path.

            • Andrew Lauritzen
            • 3 years ago

            Ashes’ DX11 code path is just fine – they are just intentionally doing something that doesn’t work particularly well in older APIs. But – as the new APIs demonstrate – there’s nothing intrinsically bad about what they are doing; the old APIs were just poor at handling it… which is exactly why we have new APIs now 🙂

          • Rza79
          • 3 years ago

          Doom doesn’t use DX11.

            • sweatshopking
            • 3 years ago

            No, it uses the crappy (generally speaking for games) opengl.

            • End User
            • 3 years ago

            Doom looks and performs great when using OpenGL. The future is Vulkan, though, and Doom performs even better when using it.

        • Klyith
        • 3 years ago

        “F–king up closer to the metal”

      • BobbinThreadbare
      • 3 years ago

      Keep in mind DX12 is not intended to replace DX11. Microsoft labeled them really stupidly.

      There is a new DX11.3 on Win10, but DX12 is the low-level API and DX11 is the high-level API. So DX12 performance is only as good as the developers are.

        • Airmantharp
        • 3 years ago

        Oh, it should be a replacement.

        Just not anywhere close to immediately, for obvious reasons. It may take five years or more before companies stop shipping games with DX11 support.

          • Klimax
          • 3 years ago

            DX12 cannot be a replacement for anything, by its nature. It requires too much work for little or no gain.

            You are wrong. Companies will NEVER abandon DX11. It would never make any sense. How many failures do we have to see before you understand this?

            • Airmantharp
            • 3 years ago

            I’m wrong?

            Okay. DX12 mimics the relationship between game and OS on consoles. It requires more work (on the developer), but has more potential for performance.

            It’s the future, and it is also what APIs used to look like before Microsoft forced standardization using DirectX. That was needed when hardware was wildly divergent and future developments were unclear, but now we’ve arrived at GPUs (which didn’t exist when DirectX became the standard; the hardware was all fixed-function!) that are essentially just giant arrays of ALUs/FPUs.

            Also, failures: these aren’t ‘failures’, they’re half-assed efforts. AoS is an example of a focused effort for a specific task, but so is Doom on Vulkan, which makes use of the same principles.

            What we need is a few more decent developers to put some real effort into DX12 to show how it can be done, and the market inertia will start swinging away from DX11.

            • mkk
            • 3 years ago

            Yes, the doom and gloom was also similar when DX10 and DX11 first got out the door. Perhaps DX10 was more similar, as almost anything that could be done with it graphics-wise could be done under its predecessor.

            The new API opened up possibilities of doing things more efficiently, which in time led to higher fidelity as well. Early titles explored a fraction of the possibilities and put most of the work into the known old workhorse, DX9.

            Some game developers (Bethesda, cough) dragged their feet for many years, and new hardware built with the future in mind was hampered under their code as a result.

            • BobbinThreadbare
            • 3 years ago

            This is not similar to DX10 to DX11. Those were both high level APIs and DX11 was intended as a replacement. DX12 is not intended as a replacement, DX11 and 12 are meant to co-exist.

            • renz496
            • 3 years ago

            The problem is, the PC is not a console. If a GPU is simply a giant array of ALUs/FPUs, then why are there games favoring certain hardware? If GPUs were really that simple, we would not see Project Cars favoring Nvidia cards while Hitman favors AMD cards. Hitman’s devs mention that async compute is not easy, because you need to tweak it down to the card-specific level, not just architecture-specific, since each model has a different compute-to-bandwidth ratio.

            You mention we need more decent developers who can put some real effort into DX12, and that is another problem: how many will?

            • Airmantharp
            • 3 years ago

            I’m simplifying the description as an array of ALUs/FPUs because I’m comparing current GPUs to the first video cards that were out when DirectX became a thing.

            There are still obviously architectural differences at play here; and consider that AMD and Intel, while both producing x86 CPUs, still run into these same issues.

            • BobbinThreadbare
            • 3 years ago

            Many games don’t need increased performance, and will happily use a high level API that makes it easier to make instead.

            • psuedonymous
            • 3 years ago

            [quote<]I'm wrong?[/quote<]

            Yes. Even Microsoft themselves say that the DX11 codepath will stick around along with DX12 as the high-level alternative. Direct3D 11.4 was released to production only a month ago, so development is in no way abandoned.

            Unless you’re pushing the limits of GPU performance (and even then, only in ways that are limited by the API rather than hardware), there is absolutely no reason to waste a massive amount of effort to implement DX12 over DX11, AND there is a performance [i<]penalty[/i<] if you do not implement DX12 well.

            • Airmantharp
            • 3 years ago

            I did not say that DX11 will not stick around- quite the opposite. I said that DX12 will replace it eventually.

          • renz496
          • 3 years ago

          Even in five years I don’t think we will stop seeing DX11-based games. DX11 has been around since 2009, and there are still games releasing with DX9 this year. And I agree with the notion of DX12 not replacing DX11. DX12 is more like “optional” for those who want more control. If your game performs just fine with a high-level API, there is no reason to go low-level and open the opportunity for more issues.

      • Klimax
      • 3 years ago

      Problem is, low-level APIs push too much work onto devs. That is part of the problem. (The other is that it is fragile: once new HW comes out, the original code will suffer on it.)

      Reminder: not only do you have to optimize for each GPU variant from each GPU vendor, you also have to manage VRAM yourself. It requires immense investment, and for the most part just to match driver optimizations. To surpass them you have to fight for each % of increase. It is hard to maintain and requires extensive infrastructure in code, and that can quickly eat any of the gains.

      That’s why low-level APIs like DX12, Vulkan, and Mantle NEVER made sense on PCs. We, the skeptics, warned you all about this. Warned you about the hype and hopes. (Especially AMD fans’ hope that DX12/Vulkan would magically fix AMD’s driver situation.)

      Those are the reasons why you won’t see many DX12/Vulkan titles, nor will they keep their performance in the future.

        • Pancake
        • 3 years ago

        It requires immense effort to get DX12 working efficiently, but this is where game engines come into it. Unreal and other game-engine vendors *may* have the resources to abstract DX12 and present something a game developer could use efficiently. That’s the hope, anyway.

      • derFunkenstein
      • 3 years ago

      That was true of DX11 early on, too. Yes, DX11 came with an official tessellation API and did a lot to sanction GPGPU with DirectCompute, but those games didn’t LOOK any better, at least early on. Remember Dragon Age II?

        • LostCat
        • 3 years ago

        DA2 actually had a fairly impressive special effect boost from DX11.

    • chuckula
    • 3 years ago

    Thank you (again) TR for doing the hard work that 99% of the other sites out there refuse to do.
    These results once again show why the average frame rate graph is never the whole story.

    [Edit: Oh yeah, and props for slipping in a GTX 1060. Looking forward to its full review 😉 ]

    • Peldor
    • 3 years ago

    Please don’t change the scale on graphs you want people to flip between. It’s really misleading.

      • derFunkenstein
      • 3 years ago

      Depends on the graph. The “Time spent beyond X” graphs would be nearly useless if they all used the “Time spent beyond 8.3ms” scale.

      You could also argue that the DX11/DX12 graphs aren’t meant to show every single frame, but they do a great job showing the difference between the two even with different scales. All of those spikes vs. almost none. Doesn’t matter what the scale there is.

      OTOH, there are other reviews where the graphs are showing the same run on different cards, and in those cases I agree, it’d be nice if the scale didn’t jump around.

      • The Wanderer
      • 3 years ago

      Not only that. The two graphs at the top of the “DirectX 11 results” section aren’t “flip between” types, but they easily could be – that is, the only reason they’re separate graphs is to avoid clutter within an individual graph – and they have the same problem.

      The fact that they have different horizontal scales means you can’t directly compare the lines between the two graphs; that means, among possibly other things, that you can’t easily tell which cards produced more total frames during the testing period (something I’ve used as a quick proxy for overall frame rate in the past).

      If I had to try to pin down when this started happening, I’d have to guess at some time around the point when Damage left. Is it possible that the people who picked up doing reviews around that point didn’t realize that making sure these juxtaposed graphs use the same scale, for ease of comparison, was important?

    • derFunkenstein
    • 3 years ago

    Now that’s some serious turnaround time. Nicely done, Jeff!

    Beta or not, this game is out and one of the first big games of the holiday selling season. AMD and SqEnix are gonna have to get that solved in a hurry. Especially considering the Gaming Evolved label.

    • torquer
    • 3 years ago

    WTF no Glide results??

      • Kevsteele
      • 3 years ago

      Glide? Now there’s a name I’ve not heard in a long, long time…

    • Billstevens
    • 3 years ago

    Results appear to be missing.

      • torquer
      • 3 years ago

      It’s so fast you can’t see them
