DirectX 11 more than doubles the GT 1030’s performance versus DX12 in Hitman

It's been an eventful week in the TR labs, to say the least, and today had one more surprise in store for us. Astute commenters on our review of the Ryzen 5 2400G and Ryzen 3 2200G took notice of the Nvidia GeForce GT 1030's lagging performance in Hitman compared to the Radeon IGPs and wondered just what was going on. On top of revisiting the value proposition of the Ryzen 5 2400G and exploring just how much our CPU choice advantaged the GT 1030 in our final standings, I wanted to dig into this performance disparity to see whether it was simply how the GT 1030 gets along with Hitman or an indication of a possible software problem.

With our simulated Core i3-8100 running the show, I fired up Hitman again to see what was going on. We've seen some performance disparities between Nvidia and AMD graphics processors under DirectX 12 in the past, so Hitman's rendering path seemed like the most obvious setting to tweak. To my horror, I hit the jackpot.

Hitman with DirectX 11 on the GT 1030 ran quite well with no other changes to our test settings. In its DirectX 11 mode, the GT 1030 turned in a 43-FPS average and a 28.4-ms 99th-percentile frame time, basically drawing dead-even with the Vega 11 IGP on board the Ryzen 5 2400G.

Contrast that with the slideshow-like 20-FPS average and 83.4-ms 99th-percentile frame time our original testing showed. While the GT 1030 occupies the lowest rung of the Pascal GeForce ladder, there was no way its performance should have been that bad in light of our other test results.
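
For those keeping score at home, those two metrics are just reciprocals of one another: a 99th-percentile frame time of X ms means the card stays above 1000/X FPS for 99% of the frames it renders. Here's a quick back-of-the-envelope check in Python, using only the figures quoted above rather than any new measurements:

```python
# Convert a 99th-percentile frame time in milliseconds into the frame rate
# the card sustains for 99% of frames rendered.
def frame_time_to_fps(ms):
    return 1000.0 / ms

print(frame_time_to_fps(28.4))  # DX11: ~35 FPS for 99% of frames
print(frame_time_to_fps(83.4))  # DX12: ~12 FPS for 99% of frames
```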

This new data puts the GT 1030 in a much better light in our final reckoning than our first round of tests did. Even if we use a geometric mean to lessen the effect of outliers on the data, a big performance drop like the one we observed with Hitman under DirectX 12 will have disproportionate effects on our final index. Swapping out the GT 1030's DirectX 12 result for its DirectX 11 one is only fair, since that's apparently the way gamers should play with the card for the moment. That move does require a major rethink of how the Ryzen 5 2400G and Ryzen 3 2200G compare to the entry-level Pascal card, though.
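
To illustrate the point, here's a quick sketch of how a single outlier moves a geometric-mean index. The numbers below are purely illustrative stand-ins for a five-game suite of 99th-percentile FPS results, not our actual test data:

```python
from math import prod

def geomean(values):
    """Geometric mean of a set of per-game results."""
    return prod(values) ** (1.0 / len(values))

# Four results stay the same; only the Hitman-like entry changes between runs.
dx12_suite = [40, 36, 44, 38, 12]
dx11_suite = [40, 36, 44, 38, 35]

print(round(geomean(dx12_suite), 1))  # about 31
print(round(geomean(dx11_suite), 1))  # about 38.5
# Swapping that single result moves the whole index by a factor of
# (35 / 12) ** (1 / 5), or roughly 24%, even though the other four
# games are untouched.
```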

With the parts lists we put together yesterday, the Ryzen 5 2400G system is about 15% less expensive than the Core i3-8100 system, and its 99th-percentile FPS figure is now about 11% lower than that of the Core i3-8100-and-GT-1030 box. That still works out to a better price-to-performance ratio for gaming, and it's still impressive. Before these APUs arrived, gamers on a shoestring had no options short of purchasing a discrete card like the GT 1030; the Ryzen 5 2400G and Ryzen 3 2200G now make entry-level gaming practical on integrated graphics alone.

Dropping a Ryzen 3 2200G into our build reduces 99th-percentile FPS performance by about another 16% compared to its beefier sibling, but it makes our entry-entry-level build another 17% cheaper. As a result, I still think the Ryzen 5 2400G and Ryzen 3 2200G are worthy of the TR Editor's Choice awards they've already garnered, but it's hard to deny that these new results take a bit of the shine off both chips' performance.
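
As a back-of-the-envelope check on those comparisons, here's how the relative price and performance deltas quoted above shake out as performance per dollar. The percentages are the ones cited in this article; the baseline is simply normalized to 1.0:

```python
# Relative price and 99th-percentile FPS versus the Core i3-8100 + GT 1030 box.
systems = {
    "Core i3-8100 + GT 1030": (1.00, 1.00),
    "Ryzen 5 2400G":          (1.00 - 0.15, 1.00 - 0.11),      # 15% cheaper, 11% slower
    "Ryzen 3 2200G":          ((1.00 - 0.15) * (1.00 - 0.17),  # another 17% cheaper...
                               (1.00 - 0.11) * (1.00 - 0.16)), # ...and another 16% slower
}

for name, (price, perf) in systems.items():
    print(f"{name}: {perf / price:.2f} relative 99th-percentile FPS per dollar")
```

Both APU builds still come out a few percent ahead of the GT 1030 box on that measure, which is the arithmetic behind the value argument above.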

To be clear, I don't think this result is an indictment of our original data or testing methods. We always set up every graphics card as identically as possible before we begin testing, and that includes gameplay settings like anti-aliasing, texture quality, and API. Our choice to use Hitman's DX12 renderer across all of our test subjects was no different. This is rudimentary stuff, to be sure, but it simply didn't occur to me that using Hitman's DirectX 12 renderer would pose a problem for the GT 1030.

We've long used Hitman for performance testing despite its reputation as a Radeon-friendly title, and its DirectX 12 mode hasn't caused large performance disparities between GeForces and Radeons even as recently as our GeForce GTX 1070 Ti review. Given that history, I felt it would be no problem to continue using Hitman's cutting-edge API support as we always have. Testing hardware is full of surprises, though, and putting the Ryzen APUs and the GT 1030 through the wringer has produced more than its fair share of them.

Comments closed
    • xrror
    • 2 years ago

    The sad thing to me is that we even care about GT 1030 performance.

    Hooray video card prices =(

    • LoneWolf15
    • 2 years ago

    Just curious…

    Remember CrossFireX, where you could pair an AMD A-series APU with a base AMD graphics card and get increased performance?

    Does anyone know if this is the case with Ryzen, and if there are options there? Maybe it has already been said and I’ve had my nose in a book for a few months, just checking.

    • Voldenuit
    • 2 years ago

    Has anyone tested the 2400G and 2200G with discrete graphics?

    I can imagine that they are attractive options right now for budget gamers wanting to wait out the current inflated dGPU prices. Buy a Ryzen 2400G now, get a 580 or 1070 later when prices drop.

    Is the 8x PCIE interface a bottleneck for modern games and cards? Does switching to a dGPU give the CPU more thermal headroom to boost? Inquiring minds want to know.

      • HERETIC
      • 2 years ago

      Not the best option for a discrete GPU.
      [url<]https://www.pcper.com/reviews/Processors/AMD-Ryzen-5-2400G-and-Ryzen-3-2200G-Review-Return-APU/Discrete-Gaming-Tests[/url<]

      In a few weeks we should be seeing a Ryzen refresh; rumors floating around suggest a 15% improvement...

    • anotherengineer
    • 2 years ago

    but but but I thought DX12 was supposed to be so much faster when I got the Pre-W10 memo?!?!

      • chuckula
      • 2 years ago

      That’s only on AMD GPUs because only AMD supports DX12!

    • HERETIC
    • 2 years ago

    Did you re-test the 1050 as well Jeff???????????

    • deruberhanyok
    • 2 years ago

    This is really odd. I ran some 1080p benchmarks on a few different budget cards and posted the results in the forum a few months ago:

    [url<]https://techreport.com/forums/viewtopic.php?f=3&t=120455[/url<]

    Given the overall higher performance of the RX 550 in the 3DMark tests I ran, the Hitman results seemed normal to me. They were also what I expected, with the knowledge that the AMD cards just generally do better in Hitman (and, I think, always have a little edge against competing NVIDIA parts in DX12/Vulkan titles). The Unigine tests used DX11 or OpenGL, and I think AMD’s performance has always lagged NVIDIA’s there, so that also made sense. Superposition was an outlier, but then, it’s a brand new benchmark and these were $70 video cards, so I wasn’t expecting anything spectacular from any of them anyways.

    Now I’m thinking, maybe the GT 1030 is actually a better deal than it at first seemed, making the RX 550 kind of... unnecessary, even as a cheap dual-slot card. Curious to see if NVIDIA will have any kind of response. Though since it’s a budget part that most people probably aren’t looking at, I expect it will just be fixed in a driver release and that will be the end of it.

      • MOSFET
      • 2 years ago

      [quote<]Now I'm thinking, maybe the GT 1030 is actually a better deal than it at first seemed, making the RX 550 kind of... unnecessary, even as a cheap dual-slot card.[/quote<]

      The GT 1030 is a really nice performer for its MSRP, and that’s all-around performance - desktop, video, light gaming (could be extreme gaming at light settings, you get the point).

      One scenario where AMD solutions like the RX 460, RX 550, and RX 560 may potentially make life easier was a bit surprising to me - triple monitors for Windows desktop usage, non-3D. I can’t speak for the 1030, but a year ago I replaced dual $79 GT 630 1GB cards with a $99 Asus RX 460 2GB. The Nvidia drivers (I tried several) would crash consistently enough (weekly? over the span of a year) to be problematic for workflow. The RX 460 has been absolutely perfect driving the 3x Asus 21.5" 1080p monitors, while simply letting Win10 handle the AMD driver.

      This is in a nice 990FX board (M5A99FX Pro R2) which is perfectly capable of handling two x16 cards, not to mention two x4 or x8 cards. The GT 630s have been split up and are perfectly happy to this day that way.

      • renz496
      • 2 years ago

      They’ll just release a new GT 2030 that only consumes as much power as the GT 1030 but performs at least like a GTX 1050 Ti?

        • deruberhanyok
        • 2 years ago

        I would have no complaints about that level of performance in a budget part, especially if they keep the low profile, single slot form factor.

    • msroadkill612
    • 2 years ago

    It’s a curious topic. It’s where discrete meets IGP in the market arena, but it’s ~never gonna happen in reality – it’s moot.

    Anyone who decides to go the messier discrete route would be mad to get such a basic GPU, or even CPU for that matter, when 6 cores are so ~cheap.

    It would be a false economy not to spend $100-150 more.

    There is no end to it. With the APU there is an end, and it’s a very satisfactory end.

    The Zen/Vega APU is very egalitarian. A plateau of millions of users with a “stock rig”. Such a common, standard SOC platform to code for becomes a strength in itself for the APU.

    There are many ways these APUs are superior to discrete cards also, but those treasures need to be mined by developers’ improved code.

    If a discrete card only gets you performance equivalent to an APU’s, better to take the simplicity of the APU.

    • ermo
    • 2 years ago

    Jeff,

    Do you have an informed view on why titles optimized for NVidia vs. titles optimized for AMD GPUs tend to not do so well on the other guys’ GPUs?

    From a marketshare POV, wouldn’t studios be better off targeting the largest possible user base?

    Also, do you think it is the case that DX12 is less likely to favor one GPU vendor over another, and that it also makes it more difficult to intentionally nerf the game on your competitors’ products?

      • EndlessWaves
      • 2 years ago

      From a studio perspective, it’s always best to encourage at least some diversity in your customers’ choices; otherwise you end up dealing with a monopoly and have less power to shape the way gaming goes.

      • psuedonymous
      • 2 years ago

      [quote<]Also, do you think it is the case that DX12 is less likely to favor one GPU vendor over another, and that it also makes it more difficult to intentionally nerf the game on your competitors' products?[/quote<]

      The other way around: to get DX12 (or Vulkan) performance up to and eventually above that of DX11/OGL, the game/engine developer needs to perform the optimisations that (for DX11/OGL) would otherwise have been done by the driver developer (i.e. the GPU vendor). If they do not, performance is left on the table.

      The cheapest way to get that architecture-specific optimisation done is to use pre-optimised libraries supplied by a vendor (e.g. GameWorks) where the vendor’s developers have already done that work for you.

      Remember, low-level APIs create [i<]more[/i<] work, not less, and shift that work from the GPU vendor to the game/engine developer.

        • Klimax
        • 2 years ago

        To add: not only do developers have to do these optimizations on their own, they also have to keep maintaining them against incoming new GPUs! (Each time a new GPU arrives, they have to add another set of rules or code to the engine.)

        That’s what I suspect happened here: the game’s engine doesn’t know how to work with the 1030 and thus falls back to a base version of its code.

        • DoomGuy64
        • 2 years ago

        I agree with everything except GameWorks being optimized. GameWorks may be made by Nvidia’s excellent software engineers, but every case where it is implemented causes slowdowns even on their own hardware. Like sub-pixel tessellation and HairWorks ignoring LOD culling, which mostly serves as a method to cripple AMD cards that don’t do tile-based rendering. It ruins game performance for everyone, and adds useless extra effects that often detract from the core experience. A lot of the features even have alternatives that are both visually superior and perform faster, which makes it all the more questionable.

        AMD might have compensated for some of these issues in Polaris and Vega, but I still don’t think you can fix GameWorks 100% of the time, especially when Nvidia developers are constantly updating it with more effects, and hardware is static. Overall, I consider it nothing more than AMD benchmark crippling, and not an honest effects library.

        edit: seems the bot program Chuckula is using to troll these forums has found my post.

      • tipoo
      • 2 years ago

      >Also, do you think it is the case that DX12 is less likely to favor one GPU vendor over another, and that it also makes it more difficult to intentionally nerf the game on your competitors’ products?

      I should imagine it would only accelerate this. DX11 keeps things abstracted away from the GPU; you can optimize for one vendor or the other, but it’s at a higher level. DX12 is lower level and lets you talk to the architecture more directly, so if anything it should allow things to get even more architecture-optimized.

      • stefem
      • 2 years ago

      Many games (most, actually) are developed exclusively on consoles and only then ported to PC (likely by a different studio). Console hardware limitations also force you to use every trick and corner-cut possible with that architecture, which may (and likely will) not work with others.

        • Voldenuit
        • 2 years ago

        The best console optimization is having convinced the userbase that 30 fps is acceptable.

    • meerkt
    • 2 years ago

    The article calls the 2200G Ryzen [b<]5[/b<] a few times, instead of 3.

      • Jeff Kampman
      • 2 years ago

      Sorry about that, fixed.

        • meerkt
        • 2 years ago

        That’s a mouthful anyway. Easier to just call it “the 2200G”. 🙂

          • meerkt
          • 2 years ago

          And apparently I’ve triggered a commenting system bug, with that trim-to-length happening after HTML entity escaping.

          • msroadkill612
          • 2 years ago

          Also, the “zen vega apu” says a lot unambiguously & is easily typed imo.

            • chuckula
            • 2 years ago

            Zega*!

            * Wait for the lawsuit from Sega.

      • Mr Bill
      • 2 years ago

      It’d be a good idea to label the DX12 graphs in addition to the text, which comes somewhat farther down the page. We all knew where you were going, but maybe not readers new to the site.

    • FubbHead
    • 2 years ago

    DX11 is higher level, isn’t it? Better look for image discrepancies, God knows what voodoo optimizations they’ve put in there.

      • Jeff Kampman
      • 2 years ago

      If you want to talk image-quality issues, then the UHD Graphics 630 had noticeable flaws compared to the Nvidia and AMD graphics cards I tested. If the GT 1030 had issues I would have noticed, but it looks identical to the other non-Intel GPUs on the bench.

      • Klimax
      • 2 years ago

      A simple consequence of optimizations done by the GPU vendor’s professional driver team versus amateurs trying to do the same. (Optimizations for GPUs, especially in games, are a very hard problem and require a damn lot of work over too many GPUs.)

    • Rza79
    • 2 years ago

    [quote<]With the parts lists we put together yesterday, the Ryzen 5 2400G system is about 15% less expensive than the Core i3-8100 system, and its 99th-percentile FPS figure is now 15% lower than that of the Core i3-8100-and-GT-1030 box.[/quote<]

    That's if you only take GPU performance into account. The 2400G's CPU performance is better than that of an i3 8100. You tested very few programs but looking at other reviews, the 2400G beats it across the board.

      • derFunkenstein
      • 2 years ago

      Yeah I think all around the 2400G is still more than just a slightly better buy. The extra threads help CPU performance over the 2200G more than the extra graphics resources help GPU performance and I think it’s still absolutely worth the money.

        • EndlessWaves
        • 2 years ago

        Yeah, you presumably wouldn’t pair something as powerful as an 8100 with a GT 1030 if you were only interested in gaming.

        • Chrispy_
        • 2 years ago

        If DDR4 wasn’t more than triple the price it used to be, I would disagree with you, since the 2200G is better perf/dollar than the 2400G even on CPU alone.

        However, in the current market where a 2x8GB kit costs $200, worrying about the difference between a $100 and a $170 processor seems trivial.

        I guess you could always build an 8GB machine with these, but then you’re cutting it fine because the IGP needs a good chunk of that as VRAM.

      • MrJP
      • 2 years ago

      Plus the Ryzen system will have a potentially longer upgrade life when/if GPU prices come back to sane levels. It will be a better partner for a mid-range GPU in 2-3 years time than the i3.

    • maroon1
    • 2 years ago

    I also think that when you do reviews, you should use the best API for each GPU: if GPU X is better in DX12, use that, and if GPU Y does better in DX11, use that instead. There’s no difference in image quality between DX11 and DX12. Just use the best API for the best possible fps.

      • Jeff Kampman
      • 2 years ago

      This hasn’t been necessary for months and months, IMO. [url=https://techreport.com/news/31565/geforce-378-78-drivers-supercharge-directx-12-and-vulkan<]Nvidia specifically shipped a driver to improve its cards' performance under DirectX 12[/url<] in March of last year, and as our [i<]Hitman[/i<] results from the GTX 1070 Ti review show, it's been effective in leveling the playing field.

    • Phartindust
    • 2 years ago

    Hmm, perhaps a look at the GT 1030’s performance comparing DX11 and DX12 across a range of games is in order. I wonder if this is repeatable in other titles.

      • Jeff Kampman
      • 2 years ago

      The converse is true in [i<]Doom[/i<], at least, since OpenGL more or less halves average frame rate on the GT 1030 in my very informal sanity check of that game this afternoon.

        • Phartindust
        • 2 years ago

        Sounds like this would be an interesting story to dig into.

    • Concupiscence
    • 2 years ago

    Are there any ideas why the discrepancy exists? It’s not a question of the 2GB of video memory, as the Ryzen IGP and vanilla GTX 1050 are in the same boat…

      • Jeff Kampman
      • 2 years ago

      The fact that an API change alone fixes it suggests some kind of bad juju in the way this game and the card/driver interact under DX12, but I really don’t know. Watch Nvidia’s driver release notes for [i<]Hitman[/i<]-related changes, I guess.

        • Klimax
        • 2 years ago

        It’s unlikely a driver can fix the DX12 case. After all, we were all sold these APIs (DX12 and Vulkan) as bypassing the driver, and this is the result: a case of the game not setting up optimizations correctly for the card.

        And it’s a warning for the future, because cases like this will become far more common. Each game that fails to get updated for new GPUs will have the same problem. A consequence of the stupidity of low-level APIs. (DX12 or Vulkan, doesn’t matter.)

        • mczak
        • 2 years ago

        I would say it’s likely a 2GB VRAM issue (you should test with some settings which require less VRAM to confirm), together with the chip only supporting 4x PCIe.
        So the card needs to swap out resources too often, and with just 4x PCIe (which is otherwise quite sufficient for such a chip), that causes a huge slowdown.

        • Mr Bill
        • 2 years ago

        Handwaving explanation… I thought the architectural difference was wider pipes in the Radeon line versus the single pipeline used in NVIDIA-optimized hardware. Maybe the 1030 has a more restricted pipeline than the 1050 and gets confused by having to reload its pipeline from the multithreaded packets DX12 puts out.

    • maroon1
    • 2 years ago

    In other words, the GT 1030 is better than the 2400G (even with expensive low-latency DDR4-3200).

    Hitman was the only reason the GT 1030 had a worse average fps than the 2400G. In most other games, including Doom under Vulkan, the GT 1030 performed better than the 2400G.

      • chuckula
      • 2 years ago

      Yeah, as I predicted a long time ago, the 2400G gets you very good [i<]for an IGP[/i<] graphics. But some people are getting just a little too excited about it being the end of regular GPUs.

        • derFunkenstein
        • 2 years ago

        No that’s cryptomining

      • watzupken
      • 2 years ago

      This is to be expected, since the GT 1030 has access to dedicated and faster GDDR5.

    • chuckula
    • 2 years ago

    Oh come on Kampman, you didn’t need to write an article about this!

    Everybody knows Ngreedia gimps the performance of the GTX-1030 to make RyzVega look better as part of its anti-Intel conspiracy.

    That’s like.. common knowledge man!

      • Phartindust
      • 2 years ago

      LOL

        • chuckula
        • 2 years ago

        This may be the first time ever that several people here have [b<]downthumbed[/b<] a post that accuses Nvidia of gimping its own products.

          • Phartindust
          • 2 years ago

          I just thought it was funny how you trolled all 3 in one sentence

          • mdkathon
          • 2 years ago

          If I were you, I’d treat that as a badge of honor.

      • albundy
      • 2 years ago

      everyone knows that if you tie electrodes on a hamster in a wheel to any 1030 card, you could easily boost the clock speeds to 1080 performance!

        • Wirko
        • 2 years ago

        Miners are reading this, too! They will go for gerbils when hamster supply can no longer keep up with demand!

      • NovusBogus
      • 2 years ago

      Poe’s Law is a beautiful thing.
