Here’s an early look at DX12 “Inside the Second” benchmark data

We didn't publish DirectX 12 performance results in our recent reviews of the GeForce GTX 1080 and the Radeon RX 480. It's still relatively early days for Microsoft's low-overhead API, and we still think DX11 performance is the most relevant metric for most gamers. That omission doesn't mean we aren't able to collect performance data, though. We wanted to share some early results from our DirectX 12 benchmarking methods, using Rise of the Tomb Raider as our test platform. Let's dive in.

First, we used the same settings and benchmark run that we did for RoTR in our reviews. The only change was to enable the DirectX 12 rendering path in the game's graphics settings. We tested the game with two graphics cards: the Radeon RX 480 and the GeForce GTX 970. 

Well, that's disappointing. A major drop in average FPS and a major increase in 99th-percentile frame times don't bode well for RoTR's DirectX 12 implementation. The Radeon RX 480 does seem to suffer much less than the GTX 970 when DirectX 12 is enabled, but neither card is providing a good experience at 1920×1080 with the settings we chose.

Our measures of "badness" suggest gamers should leave DirectX 12 off in Rise of the Tomb Raider, as well. Neither card spends much time past 50 ms working on challenging frames with the DX11 API, but enabling DX12 causes both cards to spend almost two seconds working on tough frames—and those hitches will almost certainly be noticeable, since the corresponding frame rate will drop below 20 FPS during those times.

Neither card produced a solid 60 FPS in our original benchmark, but they struggle quite a bit more in DX12 mode when we look at the time spent past 33.3 ms and 16.7 ms. That's not surprising, given our average frame rate results for both cards. In any case, DX12 does not provide a more fluid gaming experience.
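
For the curious, here's a minimal sketch of how numbers like these can be computed from a log of frame times. This isn't our actual analysis code, just an illustration of the arithmetic: the 99th-percentile frame time is the time 99% of frames come in under, and "time spent beyond X ms" adds up, for each slow frame, the portion of its render time past the threshold (50 ms, 33.3 ms, and 16.7 ms correspond to 20, 30, and 60 FPS).

```cpp
// Minimal sketch: computing a 99th-percentile frame time and "time spent
// beyond X ms" from per-frame render times in milliseconds. Illustrative
// only; this is not the actual analysis code behind our graphs.
#include <algorithm>
#include <cstdio>
#include <vector>

// Frame time at the given percentile (0.99 = 99th percentile): sort the
// frame times and pick the value that 99% of frames come in under.
double percentile_ms(std::vector<double> frames, double pct) {
    std::sort(frames.begin(), frames.end());
    size_t idx = static_cast<size_t>(pct * (frames.size() - 1));
    return frames[idx];
}

// One common way to define "time spent beyond a threshold": for every frame
// slower than the threshold, accumulate the portion of its time past it.
double time_beyond_ms(const std::vector<double>& frames, double threshold) {
    double total = 0.0;
    for (double ft : frames)
        if (ft > threshold) total += ft - threshold;
    return total;
}

int main() {
    // Hypothetical frame-time log from a short benchmark run.
    std::vector<double> frames = {16.2, 17.1, 15.8, 52.4, 16.5, 33.9, 16.0};

    std::printf("99th percentile: %.1f ms\n", percentile_ms(frames, 0.99));
    // 50 ms, 33.3 ms, and 16.7 ms correspond to 20, 30, and 60 FPS.
    for (double t : {50.0, 33.3, 16.7})
        std::printf("time beyond %4.1f ms: %.1f ms\n", t, time_beyond_ms(frames, t));
}
```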

What can we take away from these numbers? DirectX 12 puts much more of a burden on the game developer to deliver a well-optimized experience, and Rise of the Tomb Raider's DX12 implementation seems to need more time in the oven. It's certainly irresponsible to call a winner using these numbers—both cards fail to deliver a playable experience by our rather high standards. Leave RoTR in DX11 mode and enjoy it that way for now.

Comments closed
    • snarfbot
    • 3 years ago

    imagine how horrible it must feel to be wearing those tight pants and running around the woods for hours. the disgusting feels in the butt region must be pretty unpleasant.

    • NTMBK
    • 3 years ago

    Yay, glad to see you’re going inside the second on DX12! 😀 This is why TR is one of the best tech sites out there.

    But yeah, wow, RotTR is a complete mess in DX12 mode D: Total Warhammer just released its DX12 mode in beta, hopefully that shapes up a little better. (Especially since DX11 is hammering my poor old Phenom II)

      • Tirk
      • 3 years ago

      Warhammer does make for a great CPU stress test. I run it mostly on an i5-4690K and it still takes every ounce it can from it. Despite DX 11 performing well on my computer, I still got a 15% increase in FPS when switching to the DX 12 beta. My guess is that with the DX 12 pipeline the GPU spends less time waiting for work to be queued and more time rendering frames. This takes some load off of the CPU and lets the GPU charge ahead, which is what most of us are probably hoping DX 12 is able to accomplish.

      It will probably also depend on what GPU you are using whether you see an FPS increase, but at least your CPU will run cooler when playing the game 😉 The 390X system I have seems less CPU-starved than the Fury X system, so it doesn't see the same 15% boost, but that makes complete sense when you look at how most DX 11 titles under-utilize the shader cores and bandwidth of the Fury X; it's nice to see we are moving into a future that will take more advantage of the Fury's design choices. I haven't yet tested it on the Nvidia card that's in my house, so I don't yet have any personal insights on how well it likes the DX 12 beta. If you have the game, did you try the benchmarking tool in the graphics options to see how your system performs with different settings?

      *edit* As for numbers, the screen I use has a resolution of 3440×1440, or 1440p widescreen. With the Fury X it went from 40 fps to 46 fps consistently in the benchmark, which comes to a 15% increase between DX 11 and DX 12. Frames seem noticeably smoother. Can't remember the 390X's fps numbers offhand as I use that computer less (spouse's computer), but it uses a 2560×1440 MG279Q screen with FreeSync on and renders well within the FreeSync range, so frames are smooth regardless of DX 11 or DX 12.

    • Pettytheft
    • 3 years ago

    This is the first time I’ve ever questioned something at TR. Why Tomb Raider? It’s the worst implementation of DX12 out there. Hitman, Total War, and even Ashes all give a bump to Nvidia and AMD with their DX12 implementations.

      • stefem
      • 3 years ago

      Hitman is a not-so-great console port, and with Ashes, the results that generated all the hype were inflated by AMD's poor DX11 driver; if you look at it now, there are even cases where DX11 is slightly faster than DX12 (with a very fast CPU, admittedly).
      This situation is exactly why TR decided to skip DX12 benchmarking, for now.

    • travbrad
    • 3 years ago

    Turns out there might be some cases where driver developers are better at optimizing for their graphics cards in games than game developers are. Who would have thought?

    • Philldoe
    • 3 years ago

    Why not use a game that was built for DX12 like Ashes? RoTR’s DX12 mode was really poorly done.

      • xeridea
      • 3 years ago

      Ashes was controversial because Nvidia’s drivers were broken and they blamed the developers. Now some sites don’t use it because they say the game isn’t popular enough… really, I think they just don’t want the controversy, which is sad, because it lets Nvidia continue to downplay DX12, which they know they are weak at. Now I guess we are testing on the worst DX12 implementation ever conceived and saying DX12 titles are immature.

        • travbrad
        • 3 years ago

        DX12 titles do seem to be immature so far though. I haven’t seen a single DX12 game so far that actually runs well considering the graphics quality it has. AoS runs “better” in DX12 than DX11 but still runs very poorly for the mediocre graphics it has.

        DX12 definitely has the POTENTIAL for better optimization than DX11, but it takes work and it’s not a magic bullet.

          • xeridea
          • 3 years ago

          AoS, Hitman, and Total War: Warhammer all seem to run fine on DX12. Not awesome on green cards, but that is expected since they have been downplaying new APIs, their drivers are consistently bad for DX12, and we now know pre-Pascal hardware doesn’t support async at all in hardware.

          Main point is that it seems odd they are testing the worst pile-of-junk DX12 implementation as their guinea pig, instead of one of the others that are better implemented and perform more or less as expected.

            • chuckula
            • 3 years ago

            AoTs and Total War: Maybe.

            Even Hitman shows some awfully weird results though, like the R9 Fury X winning by a whopping 1 FPS over the 390X at 4K resolutions: [url<]http://www.guru3d.com/index.php?ct=articles&action=file&id=23009[/url<] Admittedly the Fury X is a little further ahead at lower resolutions, but you'd think the gap would be larger. Additionally, it means that the R9 Fury X has [i<]worse[/i<] resolution scaling than its predecessor in a DX12 title.

            • xeridea
            • 3 years ago

            Not saying the other implementations are all perfect, but anything is better than RoTR. I am wondering why RoTR was chosen to test when it clearly has a bad implementation, and the others are at least decent.

      • ImSpartacus
      • 3 years ago

      I think the general consensus is that current AAA DX12 implementations are either half-broken or “biased” to one GPU vendor.

      I can’t help but agree. I feel like reviewers shouldn’t benchmark a game just because it uses DX12. They should review games that actual people will play.

      So I actually think TR’s approach is a pretty reasonable compromise.

        • rechicero
        • 3 years ago

        If they test things like ALU latency or bandwidth, there is no way you can justify not testing good implementations of DX12 just because they are not the most popular games. For better or worse, it’s a peek at future performance.

    • Shobai
    • 3 years ago

    Thanks Jeff, that Frame Number plot is looking much better than in the GTX 1080 and RX 480 reviews: same scale between tabs, you can clearly see that no dataset has been truncated, and the plots fill the majority of the space.

    I feel bad for picking another nit, but how far off the top of the chart do the GTX 970’s DX12 frame times spike? Do they actually hit 120ms with your filter in place? I guess the other question is whether that information really tells us much more than the plot as it stands.

    • bill94el
    • 3 years ago

    Win7 still ftw

    • Theolendras
    • 3 years ago

    Interestingly, Hardware Canucks got slightly better performance out of Rise of the Tomb Raider from DX12 over DX11. I wonder what knobs or environment difference is causing those different results.

      • UberGerbil
      • 3 years ago

      Colder environment in Canada allows everything to overclock more.

        • Captain Ned
        • 3 years ago

        Well, it’s not like the Maple Leafs are really using the rink these days (decades??).

        • tipoo
        • 3 years ago

        Maple syrup has an excellent heat extraction rate for our liquid cooled rigs

      • stefem
      • 3 years ago

      just a different benchmark scene probably

    • Mat3
    • 3 years ago

    How about testing on an AMD CPU to see if there’s an improvement there over DX11?

    The one thing that every DX12 implementation should bring is better CPU performance in the form of lower overhead and better threading. This should, in theory, help an AMD CPU a lot more than an Intel one.

      • Airmantharp
      • 3 years ago

      If this were the case, there wouldn’t be a difference between the two on an Intel setup.

        • derFunkenstein
        • 3 years ago

        I think he’s looking for the point at which CPU performance sucks so badly that there’s no real difference, and I can’t see why you’d want that at all.

        • Mat3
        • 3 years ago

        If it’s CPU-limited on an AMD CPU in DX11 due to weak single-threaded performance, then DX12 should fix that and boost performance. A top-performing Intel CPU is not going to be CPU-limited, especially in this game with those GPUs, so it’s not surprising there’s nothing to be gained (but obviously strange that it’s worse).

      • tipoo
      • 3 years ago

      Here’s a nice cross-comparison between AMD GPUs, Nvidia GPUs, AMD CPUs, and Intel CPUs. The FXs do see a nice gain.

      [url<]http://www.eurogamer.net/articles/digitalfoundry-2015-why-directx-12-is-a-gamechanger[/url<]

      • derFunkenstein
      • 3 years ago

      So based on what you wrote, we’d get overall worse performance but less delta. That doesn’t really seem desirable.

        • Tirk
        • 3 years ago

        I think this article explains more about what Mat3 is referring to:
        [url<]http://www.technologyx.com/featured/amd-vs-intel-our-8-core-cpu-gaming-performance-showdown/5/[/url<]

        What they found was that even in modern DX 11 games, optimizations in game code have closed the gap heavily between Intel and AMD CPU performance. It even alludes to the possibility of what Mat3 is describing happening in DX 12. Part of Bulldozer's IPC problem was that its architecture had to be optimized for in software differently than an Intel core, and that gap has closed over time. Intel cores still have an all-around advantage, but in games it's not as severe as it once was. Tipoo above also links an article to that effect.

    • Black Jacque
    • 3 years ago

    I’ve forgotten if the test rig is an i5 or an i7. How would hyperthreading affect the DX12 results?

      • Jeff Kampman
      • 3 years ago

      Our test rig is powered by a Core i7-5960X.

      EDIT: To actually answer your question, well-implemented DX12 games appear to like as many cores as you can throw at them, but I’m not sure I’ve seen tests that examine scaling beyond eight cores/threads.

        • UberGerbil
        • 3 years ago

        Once we have at least one known good implementation of a DX12 game… that would be a fascinating subject for a test: holding the GPU constant and varying the core and thread count. Of course some games will make better use of threading than others, and some kinds of games will by their nature be more amenable to being well-threaded (Civilization for example), but still, some day in the future when you don’t have any sexy hardware in the review queue…

      • derFunkenstein
      • 3 years ago

      Test rig: [url<]https://techreport.com/review/30328/amd-radeon-rx-480-graphics-card-reviewed/4[/url<] edit: ninja'd

      • EndlessWaves
      • 3 years ago

      On that subject, has anyone tested whether Polaris suffers as much on low end CPUs as previous GCN cards?

        • Concupiscence
        • 3 years ago

        Initial results suggest that it has similar overhead issues for DX11 and OpenGL, yes.

    • brucethemoose
    • 3 years ago

    Does ROTR look any different running DX12 vs DX11? If the DX12 path is actually doing things the DX11 path isn’t, that would help explain the performance difference.

    Come to think of it, I haven’t seen any image quality comparisons on different cards recently… It’s something I miss from older reviews.

    Like calling out GPU makers for boosting average FPS scores at the expense of frametime consistency, I feel like TR should be the one to expose any IQ differences these days. If AMD and/or Nvidia are cutting corners (which, with the recent texture compression tech, seems like a real possibility), many people would certainly want to know.

      • moriz
      • 3 years ago

      there’s no image quality difference between DX11 and DX12 for Rise of the Tomb Raider.

      • stefem
      • 3 years ago

      The compression used is delta-based and lossless; it’s meant to reduce traffic on the bus, not to save memory space.

    • USAFTW
    • 3 years ago

    It’s certainly disappointing to see the DX12 patch for ROTR turn out the way it did, but thankfully it is the only game so far (I think) that kind of leaves a sour taste in the mouth. Other games have had at least the same performance as DX11 while lowering overhead.
    Also, it is the only DX12 title that’s sponsored by Nvidia’s crack marketing team.

      • xeridea
      • 3 years ago

      Yeah, I just looked at comparisons for the other three DX12 titles, and they all seem to run fine in DX12 mode (generally better on AMD cards, perhaps due to async and their less-optimal DX11 drivers), though not always; it depends on resolution. Nvidia cards tend to lose a bit of performance, but nothing near the delta here (more like 10%, not 50-100+% like in ROTR). So clearly ROTR is terrible for DX12, but other games seem to have similar or better performance.

      So hopefully we can get more detailed comparisons for other games; good to see it is being looked into.

    • chuckula
    • 3 years ago

    Excellent as usual and we look forward to much more guys!

    Out of curiosity, did you think there was any positive/negative impact of the API on the actual quality of graphics being displayed when the settings were held constant across the APIs?

    From my limited understanding, DX12 isn’t turning on any new graphical effects per se, but merely provides different and potentially more efficient mechanisms to load assets, execute shader programs, and render the finished product, with a greater degree of control. I’d be curious if there is more information about potential image quality differences though.
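
    A rough illustration of the extra control being described, offered as a sketch only (device setup, resource barriers, and synchronization are omitted): in D3D12 the application records GPU work into command lists and explicitly decides when to submit them to a queue, rather than issuing calls into a driver-managed immediate context as in D3D11.

    ```cpp
    // Sketch of D3D12's explicit submission model: the application resets its
    // own command memory, records work up front, and chooses when the finished
    // command list is handed to the GPU queue. Setup and sync are omitted.
    #include <d3d12.h>

    void RecordAndSubmit(ID3D12CommandQueue* queue,
                         ID3D12CommandAllocator* allocator,
                         ID3D12GraphicsCommandList* cmdList,
                         ID3D12PipelineState* pso) {
        // The app, not the driver, decides when command memory can be reused.
        allocator->Reset();
        cmdList->Reset(allocator, pso);

        // ... record state changes, resource barriers, and draw calls here,
        // e.g. cmdList->ResourceBarrier(...) and cmdList->DrawInstanced(...).

        cmdList->Close();

        // Submission is explicit too: the app picks the queue and the moment.
        ID3D12CommandList* lists[] = { cmdList };
        queue->ExecuteCommandLists(1, lists);
    }
    ```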

    • odizzido
    • 3 years ago

    This is exactly what I wanted you guys to explore. Thanks for doing it 🙂

    • derFunkenstein
    • 3 years ago

    I was told yesterday that DX12 was the main reason to buy an RX 480 (with a custom cooler that will drive the price up) instead of the GTX 970. I demand a recount.

      • slowriot
      • 3 years ago

      I don’t think it’s something we can ignore though. Not just in the context of RX 480 vs GTX 970, but in terms of all the UWP titles that are going to be coming and are DX12-only. If someone is interested in the upcoming Forza, for example, it’s a big deal.

        • derFunkenstein
        • 3 years ago

        No, probably not, but by the time DX12 runs better than DX11, both cards are going to be awfully slow.

          • slowriot
          • 3 years ago

          But what about DX12-only titles? Those are already here, after all. While I don’t think anyone is deciding between two cards because of Gears of War Ultimate Edition performance… they might be making that decision based on the upcoming Forza titles. Maybe I’m being crazy here but I fully expect the Windows Store-exclusives Microsoft is going to be rolling out (GoW4, ReCore, Dead Rising 4, etc) to be DX12-only games.

            • derFunkenstein
            • 3 years ago

            Well, Gears UE is another prime example of a horrible port, and guess [url=http://www.forbes.com/sites/jasonevangelho/2016/03/01/gears-of-war-ultimate-edition-on-pc-is-a-disaster-for-amd-radeon-gamers/#76a9d1967e7e<]which hardware[/url<] it had a harder time on? I'm not saying AMD can't avoid this sort of release-day disaster, but they're going to have to actually do it before I plunk down money on one of their cards.

            • LostCat
            • 3 years ago

            QB was patched up nicely. I’ve heard Gears was too, though I don’t own it.

            • derFunkenstein
            • 3 years ago

            Gears did get a [url=https://techreport.com/news/29846/gears-of-war-ultimate-edition-developer-details-upcoming-patch<]patch[/url<] and UWP updates fix things like variable-refresh-rate displays, but the patch took a while. I don't own it (well, I do, but on Xbox).

            • Jeff Kampman
            • 3 years ago

            Gears of War UE still runs like garbage. Lots of hitching and audio errors. I wouldn’t recommend it at all.

            • LostCat
            • 3 years ago

            🙁 Well, still hoping they’ll spot me a PC copy eventually after Play Anywhere starts.

            • slowriot
            • 3 years ago

            [url<]http://www.pcper.com/reviews/Graphics-Cards/AMD-Radeon-RX-480-Review-Polaris-Promise/Gears-War-Ultimate-Edition[/url<] I mean, it looks to me the RX 480 is doing pretty well on it now.

            • derFunkenstein
            • 3 years ago

            Yes, now that Gears has been out for months, it’s doing better. My point is that release day is always a bad deal for AMD, and only sometimes a bad deal for Nvidia.

            I don’t have the time or money to rah-rah root for one company or the other. I just want my stuff to work. Until release day isn’t a disaster for AMD basically every time a game comes out, I just can’t be bothered.

            • slowriot
            • 3 years ago

            My point is simply… DX12-only titles are already here and will be arriving rapidly, because Microsoft is really pushing DX12 and Xbox One/Windows 10 cross-platform titles as one package. I think some of those titles, like Forza and Gears of War 4, will be very popular on PC. I don’t think it’s a “main” reason to get an RX 480 over a GTX 970, but it’s a nice additional reason, and so far it’s been backed up by every DX12 result I’ve seen.

            • derFunkenstein
            • 3 years ago

            I understand that, but it’s been proven (so far with GoW and RotTR anyway) that DX12 launch days are still a sad time for AMD owners.

            We’re talking past each other a bit, I fear. You’re saying DX12 is here, and it is, and I’m saying things still aren’t any better for consumers yet. We’ll see.

            • Tirk
            • 3 years ago

            For every bad implementation there are also games that have shined a light on the potential of DX 12. Sure, if you only look at GoW and RotTR, DX 12 looks like a disaster, but the same could be said for excluding those, looking only at AoS, Hitman, and Total War: Warhammer, and calling DX 12 a critical success without any mishaps. So the majority of DX 12 games run well on AMD, and now because of two flops (that have problems on Nvidia cards as well, I might add) it’s a disaster for them, really?

            Now of course you could argue we should exclude any Gaming Evolved titles from benchmarking, but then why is it that so many The Way It’s Meant to be Played games are consistently tested in GPU reviews? I can’t count the times I’ve heard how the Nvidia-optimized Batman games, of which I’ve never played any, are used to show why NOT to buy AMD. There are so many other games that people play, but Batman and the cool Nvidia FX bolted onto it is the only game that shows what GPU to buy, and AMD users shouldn’t complain because that’s the way the market works, games are sponsored, and we shouldn’t whine that Nvidia sponsors more. But wait, Nvidia users are completely fine to complain when the tables are turned and AMD starts filling the market with sponsored games? …….. (I’m sorry, the end is turning into a bit of a rant, but I hope you understand what I’m getting at.)

            No one can claim that DX 11 was spared horrendously optimized games that tanked performance. And yet, it’s as if people are reacting to DX 12 games as if their DX 11 time had been spent with near-flawless execution every single time. I can remember quite a number of flops and successes when DX 11 came out, and reviewers were still adding the well-optimized games to their charts. I hope Jeff merely started with RotTR and hasn’t left out the majority of DX 12 titles that have shown improvements for AMD and, in some cases, even Nvidia. I own Warhammer and was testing DX 12 and DX 11 over several runs, and I consistently got at least a 15% improvement in FPS and higher minimum frame rates. That’s not a sad time for me, and I play TW:WH (unlike RotTR), so I’m very happy DX 12 has improved my performance in it. And if the industry is going to be about cherry-picking games, then I’m very well going to cherry-pick the games I PLAY! I am fully willing to post my own results in TW:WH if people are interested, but last time I mentioned it in the forums I was figuratively asked to put on a scarlet letter. (Again, sorry for any perceived ranting, but things are not as gloomy for DX 12, or Vulkan for that matter, as people are portraying them here.)

            • sweatshopking
            • 3 years ago

            I stopped taking you seriously when you admitted to never playing batman. WTF MAN

            • Tirk
            • 3 years ago

            Hehe, never piqued my interest. Batman has never been a superhero I’ve particularly liked anyway. Rich man turns vigilante because his parents died; that doesn’t seem particularly enthralling to me, nor plausible, or we’d have Batmen bouncing around all over the place in real life 😉

            People will probably laugh, but if I had to play a DC hero in a PC game it’d be Aquaman. He’s an overlooked superhero who connects human mythology to the modern world and makes something that’s both mythical and still of this world. I’ve also lived by the ocean for most of my life, so I can appreciate that it is one of the most powerful forces on earth.

            • sweatshopking
            • 3 years ago

            No. Just play the batman games. They’re excellent metroidvania style action games regardless of the character.

            • Tirk
            • 3 years ago

            I don’t care about the type of game if I care nothing for the story or character. I play more tactical and strategy games anyway, although an intriguing character and story do pull me to play other games as well.

            Is there a mod that takes everything batman out of the game and puts in something more interesting? 😉

      • Billstevens
      • 3 years ago

      ROTR is known to have the worst DirectX 12 implementation. Granted, there are only, what, three games getting benchmarked on DX12? ROTR hurts performance on both cards with DX12 on, while other games like Ashes see a performance boost when switching to DX12.

      DX12 utilization is still kind of a novelty until major games start coming out with native support.

        • derFunkenstein
        • 3 years ago

        I prefer playing RoTT.

        • RAGEPRO
        • 3 years ago

        Unfortunately it’s likely to be this way for DX12 and Vulkan into eternity.

        These new APIs are a return to the old days, when game performance was about the game programmer’s ability first and the hardware second, with the driver a very distant third.

        I do think it’s a good thing overall, but frankly speaking anyone who doesn’t use an established engine with its own DX12/Vulkan support is in very uncharted territory unless they are seasoned graphics programmers. Even when using an engine package (such as Unreal) any custom shaders (which is to say, any effects that aren’t included as examples, so, most effects) will have to be written and optimized by hand.

        I expect games are about to start looking even more similar than they already do.

      • Srsly_Bro
      • 3 years ago

      I demand a recall!

      • Concupiscence
      • 3 years ago

      Think big picture: getting mileage out of it as momentum behind Vulkan and DX12 builds over the next few years. For now it’s more than competent at DX11 at a hell of a price point; they just need to iron out these launch problems.

      • smilingcrow
      • 3 years ago

      With the 970 overclocking much better than the 480, leading to it winning in the majority of games (which are DX11) and only losing in DX12, it does change the outlook.
      But people see 8 GB and DX12, and that’s a compelling pull, along with the potential gains from driver updates for the new card.

      • rechicero
      • 3 years ago

      That’s odd. More memory, FreeSync (= much cheaper in that scenario), performance improvements over time (a classic with AMD)… DX12 could be on the list, but never as the main reason. If it ends up being true, better future-proofing, yeah, but there are good reasons right now. For DX12 it’s too soon to say.

      • psuedonymous
      • 3 years ago

      The caveat that is rarely mentioned along with that is that DX12 performance is entirely dependent on the developer who implements it.
      If they drop the ball, or if they use one specific architecture feature that is not available elsewhere, or use a technique that works well on one architecture but not another (or, if you’re hilariously paranoid, ‘deliberately sabotage performance’), then the performance of one architecture over another – not just one vendor or another, but specific architectures – can vary wildly.

      DX12 provides no automatic bonus to any architecture. AoTS works extremely well in DX12 on GCN because it was initially coded for Mantle, which was GCN-specific. It takes full advantage of GCN’s capabilities, as it was designed to exploit those capabilities from the outset. This isn’t AoTS “sabotaging Nvidia performance”; it was just designed with one architecture in mind rather than another.
      Games that were originally written for engines designed for DX11 and retrofitted to use DX12 (e.g. the current crop of UWP releases) will perform broadly similarly to DX11 (or worse, depending on how much or how little effort was put into optimization).

        • NTMBK
        • 3 years ago

        Given that RotTR was an Xbox exclusive, I was kind of hoping it would be tuned for GCN and DX12, and mostly reuse the console code. Guess not!

          • LostCat
          • 3 years ago

          There were some areas in the game that performed horribly in DX11 and not as much in DX12, at least.

    • flip-mode
    • 3 years ago

    For about a decade now, TechReport has been posting these graphs with single-pixel-wide lines in the legend. IT IS DIFFICULT TO SEE THOSE! PLEASE MAKE THE LEGEND LINES THICKER. PLEASE. PLEASE. PLEASE.

      • odizzido
      • 3 years ago

      I have upvoted maybe once or twice…ever. I am upvoting this. This is a problem for me as well, and the fix is something extremely easy.

        • Srsly_Bro
        • 3 years ago

        My -1 defeated your +1. sup?

          • Srsly_Brotli
          • 3 years ago

          sup?

            • Srsly_Bro
            • 3 years ago

            sup? -1 so we can match for at least a little while!

          • odizzido
          • 3 years ago

          o noes!

      • EndlessWaves
      • 3 years ago

      What resolution, screen size and viewing distance are you running at?

      I don’t have any problem with them at 1920×1200@24″@90cm

      • Captain Ned
      • 3 years ago

      If you’re talking about the “which color is which card” lines, my aging eyes wholeheartedly concur.

      • chuckula
      • 3 years ago

      I would vote for outputting the line graphs as SVG files that can be rescaled more dynamically than standard jpeg/png raster images. It could help in making the same graph be legible to different people.

        • UberGerbil
        • 3 years ago

        Yeah, in the Brave New Widely Varying PPI Future that we’ve stumbled into, we really need to embrace vector graphics more. It’s understandable why we haven’t until now though, considering how much the implementations have sucked until recently (both of the file formats themselves and how the various browsers rendered them).

          • UberGerbil
          • 3 years ago

          And on that note, I’m not sure if TR can justify the investment (time and money), but I’d highly recommend [url=https://www.tableau.com/products/desktop<]Tableau[/url<] (I have no connection except that I've used it, and I've run into some of the guys who work there from time to time). I mean, imagine GPU result graphs that worked like [url=http://www.tableau.com/stories/gallery/tale-100-entrepreneurs<]this[/url<], or price-performance scatter plots that looked like [url=http://public.tableau.com/profile/technical.product.marketing#!/vizhome/World-bank_5/EaseofBusiness<]this[/url<]

      • ermo
      • 3 years ago

      This is the first time *ever* that I’ve used my gold status to add three (up)votes to a post.

      Adding wider lines in the legend would add value to the site benchmarks for me and therefore I’m literally voting with my wallet this time. =)

      • Mr Bill
      • 3 years ago

      Disagree, because, hear me out: it’s a drawback of Excel that the legend lines are the same width as the plot lines. If the plot lines are any thicker, you can’t see the details of the line plots. However, a little cut and paste might be in order. Do those legends with thick lines, then snip that bit and put it on top of the legend in the graphs with the skinny lines.

        • meerkt
        • 3 years ago

        Why snip when you have image editors.

      • torquer
      • 3 years ago

      Wait for the Xbox One Scorpio. It has the highest quality pixels.

      • Firestarter
      • 3 years ago

      yes please, these legends are hard enough to read for people with perfectly functioning eyes, and for us colorblind (1 in 20 of your target audience, TR!) it’s basically impossible

      • ClickClick5
      • 3 years ago

      I have no problems with a 640×480 monitor. I see the line just fine. You 4K users are the ones with problems!

    • tipoo
    • 3 years ago

    Low-level APIs are definitely a case of enough rope to hang yourself with, rather than a magic bullet that makes everything better. It’ll still be a game-development cycle’s worth of years (2-3?) before titles are consistently better on it than DX11, I would guess. And by then we’ll probably have new DX12 feature levels for GPU hardware and stuff.

    Where I might see it helping sooner is integrated graphics, where the CPU and GPU share the same max TDP: if the CPU can chill more due to lower overhead, the GPU can stay at boost clocks longer (or even base clocks, if throttling is really bad). Take the Skull Canyon NUC, for example; its 72-EU Iris Pro should have a fair bit of potential, but it can’t boost long before it hits 99°C.

    Nice output on the site in the last little while by the way!

      • chuckula
      • 3 years ago

      I fully agree.

      Does DX12 have the [i<]potential[/i<] to be better? Sure. Is it [i<]guaranteed[/i<] to be better? Heck no, and you can also screw it up. [Cue up Spiderman great-power-great-responsibility speech]

        • stefem
        • 3 years ago

        There’s actually more chance to screw it up compared to older APIs.

      • Anovoca
      • 3 years ago

      Let’s just all agree that we should blame Microsoft for screwing up DX12 for now, until we can prove it’s the studios’ fault.

      • bthylafh
      • 3 years ago

      I hope DX12 and Vulkan don’t turn out like Intel’s Itanium processor, which also needed a lot of low-level hand holding to get the best performance.

        • tipoo
        • 3 years ago

        Almost by definition, actually literally by definition it does need low level hand holding. But I think mid-large devs will just get better at it eventually to have the performance edge, and middleware tools will take advantage of it at any rate.

          • UberGerbil
          • 3 years ago

          Yeah, the difference is that the only people who really have to get it right are the game engine developers, and those guys (hopefully) are the least in need of hand-holding and the most likely to make maximum use of the power this hands them. Once the Unreal Engine (for example) is well-implemented in DX12, all the games that license it get the benefits for “free.” But it’s going to take the engine folks a fair bit of time and development resources to re-write from the ground up to do that.

          In the Itanium universe, the problem was that [i<]nobody[/i<] was able to do the low-level hand-holding. The theory was that the compiler guys would figure it out and everybody would just take advantage of their work, but the reality was that even the compiler guys couldn't get optimal usage out of such a wide design with so many caveats and limitations (instruction bundles, etc). It was a case where the hardware guys decided to handwave a problem by saying it would get handled in software, and the software guys turned out to be unable to do it. DX12 is the software guys saying that [i<]different[/i<] software guys can do what they've been doing, but in an application-specific way that's more optimal than a generic, high-level API allows. It is more like asking folks to write in assembly instead of C.

            • smilingcrow
            • 3 years ago

            “It is more like asking folks to write in assembly instead of C.”

            If it was that much of a leap then I doubt it would ever come to anything useful but I doubt it is!

      • stefem
      • 3 years ago

      Hell, just drawing a simple triangle requires so many lines of code; it’s really very verbose (I’m talking about Vulkan, but D3D12 is not much different).
      Big, top-tier graphics-engine developers may not have much of a problem dealing with that, but for a small developer it’s an entirely different story.
      There were other approaches to solving the overhead issue, but AMD has historically been “close to the metal”; do you guys remember CTM versus CUDA?
      I feel OpenGL will have a long life ahead; there were already “AZDO” techniques before DX12 and Vulkan were released.

      I’ve seen so many users enthusiastic about low-level APIs but very few graphics programmers; maybe with time we will get used to it.

    • Anovoca
    • 3 years ago

    [s<]low[/s<]API

    • chuckula
    • 3 years ago

    OK! UH… WHEN WE DEMANDED DX12 WE DIDN’T MEAN THAT ONE! THROW IT BACK IN!

      • davidbowser
      • 3 years ago

      Do we start counting down to the DX12 Anniversary Update?

        • tipoo
        • 3 years ago

        “Do you want to upgrade this game to DX12?”
        “Err, no?”
        “We heard yes, upgrading your library to DX12”
        “NOOOOO!”

    • YukaKun
    • 3 years ago

    I would be interested in seeing a case study of the settings for the game itself, to see if there is an apparent “culprit” for that performance degradation.

    I know it wouldn’t be sustainable to do this for every other DX12 game, but it would be interesting nonetheless. Maybe you’d find something interesting 🙂

    Cheers!

      • chuckula
      • 3 years ago

      I’ve got an initial guess based on some analysis that Endless Waves has done: Memory management.

        The RX 480 comes with 8 GB of RAM while the 970 only has 4 GB. The “low level” APIs force you to do your own memory housekeeping, and an 8 GB pool is a lot more forgiving of a less-than-perfect memory subsystem than a 4 GB pool is.
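
        To make that concrete, here's a rough sketch of the kind of housekeeping that now falls on the game rather than the driver: under DX12 the application is expected to track its video-memory budget and keep its working set under it. This just illustrates the general DXGI pattern and isn't tied to anything RoTR actually does.

        ```cpp
        // Rough sketch of DX12-era memory housekeeping: the application queries
        // its video-memory budget and is responsible for staying under it,
        // instead of relying on the driver to shuffle resources behind its back.
        #include <dxgi1_4.h>
        #include <cstdio>

        void CheckVideoMemoryBudget(IDXGIAdapter3* adapter) {
            DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
            // The "local" segment group is the card's own memory pool
            // (e.g. 4 GB on a GTX 970, 8 GB on an RX 480).
            adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);

            std::printf("budget: %llu MB, in use: %llu MB\n",
                        static_cast<unsigned long long>(info.Budget >> 20),
                        static_cast<unsigned long long>(info.CurrentUsage >> 20));

            if (info.CurrentUsage > info.Budget) {
                // Over budget: the app has to shrink its working set itself
                // (drop mip levels, free heaps, stream less) or expect stalls.
            }
        }
        ```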

        • Concupiscence
        • 3 years ago

        And don’t forget: the last 512 MB of the 970’s memory is both slow and can’t be accessed at the same time as the other 3.5 gigs…

        • YukaKun
        • 3 years ago

        I thought you always had to do memory management all the way from DX1.0 and Glide days?

        How you manage assets has always been on the Dev side, hasn’t it? Or you mean frameworks like UE, Unity or Source hiding that part of the implementation?

        Cheers!

    • slowriot
    • 3 years ago

    Interesting. I was just looking at PC Perspective’s results for the RX 480 vs GTX 970 with DX12 enabled in Rise of the Tomb Raider and their results show a much greater delta between the 970 and 480: [url<]http://www.pcper.com/reviews/Graphics-Cards/AMD-Radeon-RX-480-Review-Polaris-Promise/Rise-Tomb-Raider[/url<] There are some settings differences. You guys are using SMAA vs FXAA and you guys have some things like motion blur disabled too. Wonder if that explains it? Or maybe different parts of the game being tested?

      • Jeff Kampman
      • 3 years ago

      I think there’s still a major delta between the two cards, just in different places. Look at the 99th-percentile frame time for each card in DX12 mode, for example.

      In any case, PCPer did use different settings and their results are not directly comparable to ours as a result.

        • slowriot
        • 3 years ago

        Indeed. It’s too bad DX12 results in worse performance in the new Tomb Raider. Any plans to do a similar comparison with Hitman? As you point out, and I really do appreciate, being able to compare data with the same settings/setup is much more useful than trying to do fuzzy comparisons across sites.

        • cynan
        • 3 years ago

          To me, it just looks like the RX 480 is ever so slightly faster (i.e., average FPS), resulting in a few fewer frames beyond 8.3 and 16.7 ms, while the 970 has a bit less variation in frame-time delivery/pacing (resulting in the Radeon having a few frames over 33 ms). Doesn’t really look like a major delta from these results.

          • AnotherReader
          • 3 years ago

          Does [url=http://www.pcper.com/files/review/2016-06-28/RoTRDX12_1920x1080_PER.png<]that[/url<] look just slightly faster to you? That is PCPer's FPS by percentile graph for Rise of the Tomb Raider. By the time the RX 480 drops below 50 fps, the 970 is already well below 40 fps. At the end, the 480 is in the mid 40s while the 970 is barely above 30.

    • AnotherReader
    • 3 years ago

    Great work! Are you using PresentMon to log frame time data?

      • Jeff Kampman
      • 3 years ago

      Yep.
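
        (For anyone who wants to reproduce this kind of data at home, here's a rough sketch of pulling per-frame intervals out of a PresentMon CSV log. The "MsBetweenPresents" column name reflects PresentMon's standard output, but check the header row of the build you're using.)

        ```cpp
        // Rough sketch: read per-frame intervals (in ms) out of a PresentMon CSV.
        // Assumes a header row containing an "MsBetweenPresents" column; verify
        // the column name against the header of the PresentMon build in use.
        #include <fstream>
        #include <sstream>
        #include <string>
        #include <vector>

        std::vector<double> LoadFrameTimes(const std::string& csv_path) {
            std::ifstream file(csv_path);
            std::vector<double> frame_times_ms;
            std::string line;

            // Locate the MsBetweenPresents column in the header row.
            if (!std::getline(file, line)) return frame_times_ms;
            std::stringstream header(line);
            std::string col;
            int target = -1;
            for (int i = 0; std::getline(header, col, ','); ++i)
                if (col == "MsBetweenPresents") target = i;
            if (target < 0) return frame_times_ms;

            // Pull that column out of every data row.
            while (std::getline(file, line)) {
                std::stringstream row(line);
                std::string cell;
                for (int i = 0; std::getline(row, cell, ','); ++i)
                    if (i == target) frame_times_ms.push_back(std::stod(cell));
            }
            return frame_times_ms;
        }
        ```

        The resulting list can then be fed into percentile and time-beyond-threshold calculations like the ones sketched under the article above.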

        • stefem
        • 3 years ago

          Will you still use FCAT for multi-GPU testing? It’s a very advanced tool for measuring frame times.
