Ashes of the Singularity’s second beta gives GCN a big boost with DX12

Ashes of the Singularity is probably the first game that will come to most people's minds these days when we talk about DirectX 12. That title has been the source of a bunch of controversy in its pre-release lifetime, thanks to an ongoing furor about the game's reliance on DirectX 12's support for asynchronous compute on the GPU and that feature's impact on the performance of AMD and Nvidia graphics cards alike.

Ashes of the Singularity will be getting a second beta with a new benchmark ahead of its March 22 debut, and several outlets have been given early access to that software in advance of its official release. AnandTech, ExtremeTech, and Guru3D have all benched this new beta in both single and multi-card configurations, including the kinds of Frankenstein builds we saw when developer Oxide Games previewed support for DirectX 12's explicit multi-adapter mode. Explicit multi-GPU support is being added to the game's public builds for this beta release, too, so more of the press has been able to put Radeons and GeForces side-by-side in the same systems to see how they work together.

Performance highlights

While all three reviews are worth reading in depth, we want to highlight a couple of things. Across every review, Radeon cards tend to lead comparable GeForces every step of the way in single-card configurations, at least by the measure of potential performance that FPS averages give us. Those leads widen as resolution and graphics quality settings are turned up.

For example, with Ashes' High preset and the DX12 renderer, the Radeon R9 Fury X leads the GeForce GTX 980 Ti by 17.6% at 4K in AnandTech's testing, going by average FPS. That lead shrinks to about 15% at 2560×1440, and to about 4% at 1080p. Ashes does have even higher quality settings, and Guru3D put the game's most extreme one to the test. Using the Crazy preset at 2560×1440, that site's Fury X takes a whopping 31% lead over the GTX 980 Ti in average FPS. Surprisingly, even a Radeon R9 390X pulls ahead of Nvidia's top-end card with those settings.
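
For the curious, the percentage leads above are simple ratios of average FPS. Here's a quick worked example in C++ with made-up frame rates, not AnandTech's actual numbers:

    // Hedged illustration: how a percentage "lead" falls out of two average-FPS figures.
    // The inputs below are hypothetical, not measured results.
    #include <cstdio>

    int main()
    {
        double furyXFps    = 47.0;  // hypothetical Fury X average FPS
        double gtx980TiFps = 40.0;  // hypothetical GTX 980 Ti average FPS
        double leadPct = (furyXFps / gtx980TiFps - 1.0) * 100.0;
        std::printf("Fury X lead over GTX 980 Ti: %.1f%%\n", leadPct);  // 17.5% with these numbers
        return 0;
    }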

As we saw last year in an earlier Ashes benchmark series from PC Perspective, switching to DirectX 11 reverses the Radeons' fortunes. Using that rendering path causes a significant drop in performance: 20% or more, according to AnandTech's results.

The new Ashes benchmark lets testers flip asynchronous compute support on and off, too, so it's interesting to examine what effect that has on performance. AnandTech found that turning on the feature mildly harmed performance on GeForce cards. Radeons, on the other hand, got as much as a 10% boost in frame rates with async compute enabled. Nvidia says it hasn't enabled support for async compute in its public drivers yet, so that could explain part of the performance drop there.
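
For those wondering what that toggle corresponds to under the hood, asynchronous compute in DirectX 12 boils down to an application creating a compute-only command queue alongside its usual graphics queue and feeding both with work at once. The snippet below is a minimal C++ sketch of that setup, not code from Ashes itself; whether the two queues actually execute concurrently is up to the GPU and its driver.

    // Minimal D3D12 sketch: one direct (graphics) queue plus one compute-only queue.
    // Fences (ID3D12Fence) would coordinate any dependencies between the two.
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    void CreateQueues(ID3D12Device* device,
                      ComPtr<ID3D12CommandQueue>& graphicsQueue,
                      ComPtr<ID3D12CommandQueue>& computeQueue)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // accepts graphics, compute, and copy work
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue));

        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // accepts compute and copy work only
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
    }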

In the "things you can do, but probably shouldn't" department, the latest Ashes beta also lets testers turn on DX12's Explicit Multiadapter feature in unlinked mode, which we'll call EMU for short. As we saw the last time we reported on performance of cards in an EMU configuration, the feature does allow for some significant performance scaling over single-card setups. It also allows weirdos who want to throw a Fury X and a GTX 980 Ti in the same system let their freak flags fly.

AnandTech did just that with its Fury X and GTX 980 Ti. Using the red team's card as the primary adapter, the site got a considerable performance increase over a single card when running Ashes at 4K with its Extreme preset. The combo delivered about 39% more performance than a GTX 980 Ti alone and about 24% more than an R9 Fury X. With the GTX 980 Ti in the hot seat, the unlikely team delivered about 35% more frames per second on average.

EMU does come with one drawback, though. Guru3D measured frame times using FCAT for its EMU testing, and the site found that enabling the feature with a GTX 980-and-GTX-980-Ti pairing resulted in significant frame pacing issues, or "microstutter," an ugly problem that we examined back in 2013 with the Radeon HD 7990. If microstutter is a widespread issue when EMU is enabled, it could make the feature less appealing.
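
To make the term a bit more concrete, "microstutter" is what you get when individual frame times swing wildly even though the average frame rate looks fine. The tiny C++ illustration below uses made-up frame times, not Guru3D's FCAT data: both sequences average about 60 FPS, but one alternates between short and long frames.

    // Hedged illustration: quantify frame-to-frame swing from a list of frame times
    // (in milliseconds). The numbers are hypothetical, purely for demonstration.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    double AverageFrameToFrameSwing(const std::vector<double>& frameTimesMs)
    {
        if (frameTimesMs.size() < 2)
            return 0.0;
        double total = 0.0;
        for (size_t i = 1; i < frameTimesMs.size(); ++i)
            total += std::fabs(frameTimesMs[i] - frameTimesMs[i - 1]);
        return total / (frameTimesMs.size() - 1);  // average delta between consecutive frames
    }

    int main()
    {
        std::vector<double> smooth  = {16.6, 16.7, 16.5, 16.6, 16.7, 16.5};  // ~60 FPS, even pacing
        std::vector<double> stutter = {8.0, 25.0, 8.5, 24.5, 8.2, 25.4};     // ~60 FPS, microstutter
        std::printf("smooth:  %.2f ms average swing\n", AverageFrameToFrameSwing(smooth));
        std::printf("stutter: %.2f ms average swing\n", AverageFrameToFrameSwing(stutter));
        return 0;
    }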

Caveats

As with any purportedly earth-shattering numbers like these, we think there are a few caveats. For one, this is a beta build of Ashes of the Singularity. AnandTech cautions that it's "already seen the performance of Ashes shift significantly since our last look at the game, and while the game is much closer to completion now, it is not yet final."

For two, each site tested Ashes with the Radeon Software 16.1.1 hotfix, AMD's latest beta driver. After the results from each site were published, AMD released the Radeon Software 16.2 hotfix, which contains some Ashes-specific optimizations. We're curious to see what effect the updated driver has on the performance of the game on AMD hardware, and it's entirely possible that the effect could be positive.

For three, as we mentioned earlier, Nvidia says that it still hasn't enabled support for asynchronous compute in its public drivers. GeForce software product manager Sean Pelletier took to Twitter yesterday to point this out, and he also noted that Oxide Games' statement that asynchronous compute support was enabled in the green team's public drivers was incorrect. Given how heavily Ashes appears to rely on asynchronous compute, the fact that Nvidia's drivers apparently aren't exposing the feature to the game could partially explain why GeForce cards are lagging their Radeon competitors so much in this benchmark.

Still, if these numbers are any indication, gamers with AMD graphics cards could see a big boost in performance with DirectX 12, all else being equal. It's unclear whether other studios will take advantage of the same DX12 features that Oxide Games has with Ashes of the Singularity, and a game's claim of DirectX 12 support alone gives us no insight into which of the API's features a particular title actually uses. Even so, we could be on the threshold of some exciting times in the graphics performance space. We'll have to see how games using this new API take shape over the coming months.

Comments closed
    • southrncomfortjm
    • 3 years ago

    Given recent track records, I’m sure Nvidia will be competitive here. If not, that’s okay, I wasn’t interested in Ashes of the Singularity anyway.

    Hopefully Pascal supports asynchronous compute. Is it a big deal in DX12? Or is this game just choosing to use this feature?

    • Umbral
    • 3 years ago

    Unless this lets me utilize the wasted silicon Intel puts in their 'lesser' CPUs, I don't see this being particularly useful. Additionally, since all the GPUs presumably need to be DirectX 12 compliant, I'm not going to purchase new and disparate parts to accommodate some esoteric capability.

    • forb1d
    • 3 years ago

    I think it's great that Oxide is pointing out the lack of feature support by Nvidia. This will hopefully get them to fix current gen or at least not drop the ball in future tech. That said, it's a beta of a game I won't play until it's sub-$20 on Steam. Although I see it's $24 till Monday. It doesn't hurt that it looks like a new rendition of "Total Annihilation," so it should be fun.

    I think we need more games to show this great imbalance of performance. Not just more games, more games people will actually play. I buy games to play, I don't buy games because they'll look pretty on my system.

    Example: if a sure-to-be-popular game like GTA VI showed a similar performance hit, still lopsided in AMD's favor, perhaps Nvidia would give their users more responses on the matter.

    I'm not a fanboy of either. I'm stuck with Nvidia with a G-Sync 144Hz monitor. But if AMD and Nvidia release around the same time this year and AMD is actually better, it's not out of the question to buy a FreeSync monitor to pair with it.

    Although I doubt it'll close the gap, I'd like to see an OC'd Ti vs. Fury X, since overclocking headroom is more limited on the Fury X than on the Ti.

    • Rza79
    • 3 years ago

    [quote<]For three, as we mentioned earlier, Nvidia says that it still hasn't enabled support for asynchronous compute in its public drivers. GeForce software product manager Sean Pelletier took to Twitter yesterday to point this out, and he also noted that Oxide Games' statement that asynchronous compute support was enabled in the green team's public drivers was incorrect. Given how heavily Ashes appears to rely on asynchronous compute, the fact that Nvidia's drivers apparently aren't exposing the feature to the game could partially explain why GeForce cards are lagging their Radeon competitors so much in this benchmark.[/quote<]

    Such a big explanation and useless defending of nVidia here. Current nVidia hardware doesn't have DX12-compatible async compute in hardware. What they will implement in their drivers will be CPU-based. This is all that you should have said, which is the truth.

    Let's be honest: AMD's DX11 drivers are not as efficient as nVidia's, which has probably led to nVidia's current 80% market share. But from its initial launch in 2012, it was very clear that GCN was very forward looking and quite unbeatable in compute. In my mind, there was no doubt that with GCN AMD has had excellent hardware on which they didn't manage to materialize. So it's no surprise that it's so powerful in DX12. The raw power has always been there.

      • stefem
      • 3 years ago

      Of course NVIDIA doesn't manage async compute in hardware; after Fermi, their GPUs are fed by a software scheduler, so they don't need to modify the hardware (after all, one of the strengths of software scheduling is that you can "easily" modify it). But it's yet to be seen how much they will gain from async compute, considering this feature is meant to keep the GPU fed with work and NVIDIA's GPUs are already good at that (that's where part of their efficiency advantage comes from).
      Hardware and software scheduling both have pros and cons; just to give you an idea, hardware is good at taking small and simple decisions while software shines with big and complex ones.
      If you look at D3D11 performance, you see that while NVIDIA relies on software scheduling (which runs on the CPU), they clearly get more performance out of slow CPUs than AMD does; this means that despite the increased CPU load they pay, the gain in efficiency greatly outpaces that penalty.
      One of the drawbacks of low-level programming is that the developer is responsible for optimizing for each device, since the driver will no longer mediate for that, and while that may not be too complicated when you work with just a single target, it can become a nightmare when you have to deal with the whole diversified market.
      Honestly, you can't define GCN as "quite unbeatable in compute" unless you compare it to mid-range Kepler/Maxwell (GK104/GM204) or AMD's Cayman; AMD's previous architecture was really "touchy" on compute.

      • Meadows
      • 3 years ago

      Wrong suggestion. Even if what you say about async compute is true, TR should stick to reporting what spokespeople say about unreleased products, lest they be confused with questionable rumour sites.

      • ermo
      • 3 years ago

      Did you mean “capitalize”?

      [quote<]"In my mind, there was no doubt that with GCN AMD has had excellent hardware on which they didn't manage to [b<]materialize[/b<]."[/quote<]

    • MrJP
    • 3 years ago

    Any plans for TR to conduct their own testing on this, or are you waiting for the release version? I'd rather see some proper Tech Report testing than links to other, less thorough sites.

    • peaceandflowers
    • 3 years ago

    Can nVidia’s current cards even support asynchronous compute? I thought it was a hardware thing.

      • BehemothJackal
      • 3 years ago

      From what I've read that is my impression as well. This whole "enabled in drivers, disabled in drivers" argument becomes moot when the hardware isn't even capable of it. Apparently one of the devs for this game said that nVidia could enable async to work through the drivers, but that the load would shift to the CPU since the current cards aren't really built for it.

      One source: [url<]http://www.dsogaming.com/news/oxide-developer-nvidia-was-putting-pressure-on-us-to-disable-certain-settings-in-the-benchmark/[/url<]

      Also this discussion started back in August 2015, as you can see from the article. Meaning in 6 months they haven't been able to implement a solution? Seems fishy to me, fishy to the point where they're blaming it on the software, but know that the hardware just isn't capable of it and won't glean any benefits.

      • stefem
      • 3 years ago

      It's done in hardware if your scheduler is in hardware, as on AMD GPUs from GCN onward; it's done in software if you have a software scheduler, as NVIDIA has had since Kepler.
      Either way, both vendors' GPUs support concurrent execution of commands from different streams.

    • BehemothJackal
    • 3 years ago

    I would just like to point out that Guru3D did not use the Fury X at all in their review. They compared the 980 Ti to the regular Fury. Not sure what the point of that was, but that's what they did. So that 31% difference you see at "Crazy" settings isn't even using AMD's flagship.

    “The graphics cards used in this article are:

    GeForce GTX 950 (2GB)
    GeForce GTX 970
    GeForce GTX 980
    GeForce GTX 980 Ti
    Radeon R7 370
    Radeon R9 380 (2GB)
    Radeon R9 380X
    Radeon R9 390X
    Radeon R9 Fury”

    • Klimax
    • 3 years ago

    Doesn't matter. Just give it time. Then performance gains become virtual (aka nonexistent) at best. Mantle (and likely Vulkan too, since it is based on Mantle) already showed that; see Battlefield 4 on Tonga. DX12 itself is undecided for now, but short of extensive driver work, it's unlikely to avoid the same fate.

    Thus either there's no benefit while throwing a brutal amount of work on devs, or drivers will re-optimize it anyway and thus all alleged "benefits" are gone…

    All that arguing will be moot either way…

    • Kaotik
    • 3 years ago

    “Given how heavily Ashes appears to rely on asynchronous compute, the fact that Nvidia’s drivers apparently aren’t exposing the feature to the game could partially explain why GeForce cards are lagging their Radeon competitors so much in this benchmark.”
    I don’t see how you come to this conclusion.
    Sure, Radeons gain anywhere from sub-10% to around 20% with async shaders depending on settings, but they're performing faster than equivalent GeForces even when async shaders are set to off on both.

      • xeridea
      • 3 years ago

      I think the real reason is Nvidia gives the minimal effort required to get a working DX12 driver, so performance is bad, and the ACE hit is just a side effect. Their hardware does not support ACE though, so I'm not sure if their driver emulation of it (if it is ever implemented) will make much difference. Even with a hyper-optimized DX11 driver, they shouldn't be doing worse in DX12, which… doesn't need nearly as much driver optimization.

        • stefem
        • 3 years ago

        ACE is just an AMD marketing name; of course NVIDIA doesn't feature (you said support) that… it's like saying that AMD GPUs don't support the PolyMorph Engine (NVIDIA's geometry engine), which is simply nonsense.
        AMD GPUs' resources are organized in clusters that AMD calls ACEs, NVIDIA's are organized in GPCs and SMs, and both are able to issue commands from multiple queues at the same time. In fact, the very same technique is used to reduce latency in VR and is called, very imaginatively, async warp (which, to add to the confusion, relies on the same async shader concept).

          • Goty
          • 3 years ago

          AMD’s and NVIDIA’s asynchronous compute capabilities are actually quite different; see here:

          [url<]http://ext3h.makegames.de/DX12_Compute.html[/url<]

            • stefem
            • 3 years ago

            Yea, and with low-level APIs developers are now far more responsible for optimization; something that works well for AMD could ruin NVIDIA's performance and vice versa.

          • BehemothJackal
          • 3 years ago

          [url<]http://www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/[/url<]

          "Often we get asked about fairness, that is, usually if in regards to treating Nvidia and AMD equally? Are we working closer with one vendor then another? The answer is that we have an open access policy. Our goal is to make our game run as fast as possible on everyone's machine, regardless of what hardware our players have. To this end, we have made our source code available to Microsoft, Nvidia, AMD and Intel for over a year. We have received a huge amount of feedback. For example, when Nvidia noticed that a specific shader was taking a particularly long time on their hardware, they offered an optimized shader that made things faster which we integrated into our code. We only have two requirements for implementing vendor optimizations: We require that it not be a loss for other hardware implementations, and we require that it doesn't move the engine architecture backward (that is, we are not jeopardizing the future for the present)."

          Also: [url<]http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/1200#post_24356995[/url<]

          "Certainly I could see how one might see that we are working closer with one hardware vendor then the other, but the numbers don't really bare that out. Since we've started, I think we've had about 3 site visits from NVidia, 3 from AMD, and 2 from Intel ( and 0 from Microsoft, but they never come visit anyone ;(). Nvidia was actually a far more active collaborator over the summer then AMD was, If you judged from email traffic and code-checkins, you'd draw the conclusion we were working closer with Nvidia rather than AMD."

            • stefem
            • 3 years ago

            Devs don't need to be biased; there's not a single way of doing things in a graphics engine, and you end up with a solution that may work better on some hardware and worse on another.

    • torquer
    • 3 years ago

    The game will probably suck. That being said, it's great to see performance improvements from DX12. Seems like AMD made some good design decisions, either by luck or intention.

    Can anyone think of a game in recent memory that was effectively a “tech demo” for a new technology or DirectX version that wasn’t utter crap as an actual game?

      • TopHatKiller
      • 3 years ago

      “luck” only.
      who could possibly say that amd are not utterly worthless at everything?
      i’m shocked their engineers don’t head-but themselves to-death when they struggle, and fail, to tie their own shoelaces.

      • YellowOnion
      • 3 years ago

      I could be missing something, but first things that come to mind are:

      Half-Life 2, Newtonian Physics.

      Doom 3, Volumetric lighting.

      Possibly, HL1, with skeletal animations.

      • Platedslicer
      • 3 years ago

      Well, Crysis wasn’t so bad as a game. A bit lacking in personality, but pretty good gameplay. Damn good, I’d say.

    • swaaye
    • 3 years ago

    The game looks like SoaSE + Supcom. To be honest, I think the visuals look awfully similar to old D3D9 Supcom too. Diminishing returns in the fancy pixels, I guess. I hope it has solid co-op play vs. AI, and that it's actually an enjoyable RTS in general…

      • juzz86
      • 3 years ago

      “The game looks like SoaSE + Supcom.”

      – swaaye

      This is exactly what it is. It’s fun, though. Scale is epic. I liked SoaSE though, so YMMV.

        • swaaye
        • 3 years ago

        It's interesting that there's also that Homeworld land RTS now. Two games that took their gameplay concepts to land instead of space.

    • bfar
    • 3 years ago

    There are probably more questions than answers based on the figures we have so far, but wow, what an exciting time ahead!

    Multi-GPU has always had frame pacing issues in alternate frame rendering modes, but my understanding is that new multi-GPU rendering methods are a possibility with DX12. Imagine the configuration options available to enthusiasts!

      • xeridea
      • 3 years ago

      Yeah, it's weird they are doing AFR when they could be doing split-frame rendering with automatic load balancing to get the most out of dissimilar cards.

    • adampk17
    • 3 years ago

    Re: asynchronous compute

    Nvidia getting caught with its drawers down?

      • maxxcool
      • 3 years ago

      I am a Nvidia fan and it sure looks that way ..

      • morphine
      • 3 years ago

      Nvidia has frequent driver releases, so one is probably just around the corner.

        • adampk17
        • 3 years ago

        Yes but, unless I’m mistaken, isn’t asynchronous compute something Nvidia hasn’t built their hardware to do quite as well as AMD?

        Don’t get me wrong, I’m generally a green teamer as well.

          • morphine
          • 3 years ago

          In theory, yes, those are the rumors floating around. But keep in mind that (IMO) Nvidia's driver acumen is still superior, so there's no telling how well the final implementation will perform.

          Besides (and not to defend NV here), even if it is worse, it’s only one DX12 feature among several, all of which contribute to the final performance.

            • bfar
            • 3 years ago

            Here's the thing. DX12 hands control of the hardware back to the developer. Optimisation won't be happening at the driver level, so we might have to get used to NOT getting frequent vendor-level improvements. The ball will be in the devs' court.

            What we will expect is for vendors to support features at both the hardware and driver level. Maxwell may not have asynchronous compute support, but I'd argue that with low-level access to Nvidia hardware available through the new API, the onus is partly on developers like Oxide to get the most out of Maxwell.

            • xeridea
            • 3 years ago

            So…. Nvidia refuses to support one of the major features of DX12, and this is the fault of the developer? They don't optimize specifically for each vendor, because time is better spent on general optimizations. There is a reason Nvidia has been downplaying all three next-gen APIs for years now: their hardware is very narrow-mindedly built with single-threaded DX11 in mind, and nothing else matters. In the last round of Ashes benchmarks, Nvidia swore up and down that it was the game's fault for their abysmal performance, but it was proven to be a driver bug, since they haven't cared much about DX12.

            • stefem
            • 3 years ago

            [quote<]They don't optimize specifically for each vendor, because time is better spent on general optimizations.[/quote<]

            Lol, with low-level APIs like D3D12 and Vulkan, developers are actually responsible for optimization, since the driver will no longer mediate for that.

          • maxxcool
          • 3 years ago

          Depends on your definition of asynchronous compute… apparently there are a number of ways to accomplish it. NV's method does not like how Ashes is coded.

          Grain of salt, it's WCC:
          [url<]http://wccftech.com/asynchronous-compute-investigated-in-fable-legends-dx12-benchmark/[/url<]

            • Tirk
            • 3 years ago

            I thanked you below for linking that, but I’d also like to note that WCC is concluding that NV does not perform well when you take full advantage of asynchronous compute, not that Ashes is coded improperly to hurt NV’s performance.

            Fable's asynchronous compute is a milder use of it, so NV does not see as much of an impact when it's used. Although I agree with you that WCC is not the first go-to site for testing lol.

            • stefem
            • 3 years ago

            I think WCC is far worse. Sometimes they fail to cite a source, they've taken numbers from another website and failed to even understand what those numbers were* (it wasn't a typo, since the whole thought was based on that wrong assumption), and they published something they were brave enough to call an analysis where they were just reporting AMD PR's numbers (they were claiming G-Sync had a perf penalty and AMD's FreeSync a boost…).
            I don't think they are biased, they are just incompetent (as journalists first, and technically too).

            * those morons cited PCPer's numbers as fps, probably assuming so because they are 2-digit numbers, but that's because PCPer shows the numbers in millions!!! Look at that:
            [url<]http://www.pcper.com/files/imagecache/article_max_width/review/2015-03-25/dx12-980.png[/url<] [url<]http://wccftech.com/amd-r9-290x-fast-titan-dx12-enabled-3dmark-33-faster-gtx-[/url<]

        • Kaotik
        • 3 years ago

        It's been "around the corner" for some half a year already; Oxide has been complaining about it since the day they published their first DX12 build of Ashes.

      • Voldenuit
      • 3 years ago

      More like drivers down.

      Although as several hardware sites and developers have said, Nvidia doesn't have the scheduling hardware in Maxwell to handle async compute at the same level as AMD, even though it does have CUDA schedulers that we have been told are incompatible with DX12. Not trying to get anyone's hopes up, though: the most likely outcome is that NV won't be able to compete with AMD on async compute this round.

      But since only a single game is using ACE so far, and NV and AMD were neck and neck in the Fable DX12 benchmark, I also wouldn't count NV out of the running based on a single benchmark. Yet.

        • Tirk
        • 3 years ago

        I've read somewhere that Fable is also planning to implement async compute, so I wouldn't quite use that as an example of a DX12 game that will not use it. I'll edit this post again if I get off my lazy ass and search for the link where I read it, so you won't just take my word for it hehe 😉

        Ah, I didn't have to look far at all lol. Maxxcool above links to Fable using async compute. THANKS MAXXCOOL!

    • LostCat
    • 3 years ago

    Hmm. Don’t have $25. Don’t know that I need the game right now anyway.

      • chuckula
      • 3 years ago

      I think this game is turning into a modern equivalent of Quake 3 when it first launched: Everybody uses it as a benchmark, nobody actually plays it.

        • LostCat
        • 3 years ago

        I used to be an RTS fan, so I will get it. Eventually.

        • Topinio
        • 3 years ago

        Objection! I played a lot of Quake 3…

          • MOSFET
          • 3 years ago

          I pretty much had to extend my college education a year because of q3test. Q3A was slightly anticlimactic, but still it would be impossible to count the hours I [i<]invested / wasted...[/i<]

        • Wild Thing
        • 3 years ago

        NV got rekt.
        The game hasn’t been released yet…but “nobody actually plays it”?
        LOL….cry more!

          • maxxcool
          • 3 years ago

          /facepalm/

        • moshpit
        • 3 years ago

        Timedemo 1
        demo q3crush
        400FPS??? WTF? 😛

    • barich
    • 3 years ago

    I am being reminded of the relative DirectX 9 performance of R300 and NV30.

    • Voldenuit
    • 3 years ago

    You guys should also mention [url=http://www.guru3d.com/articles_pages/ashes_of_singularity_directx_12_benchmark_ii_review,1.html<]guru3d's tests of the same benchmark[/url<] for a second data point. They don't see as severe a drop for GeForces except at the Crazy setting, and also have FCAT numbers, where the GeForces do very well.

      • morphine
      • 3 years ago

      You mean, like the multiple mentions and links in the article? 🙂

        • Voldenuit
        • 3 years ago

        Ah, missed it in my first glance through, my bad.

      • Kaotik
      • 3 years ago

      The FCAT results are borked; the tool doesn't understand the results it's reading when the card isn't using DirectFlip.
      [url<]http://www.extremetech.com/extreme/223654-instrument-error-amd-fcat-and-ashes-of-the-singularity[/url<]

      AMD is following Microsoft's recommendation to use DWM instead of DirectFlip in DirectX 12, which FCAT can't understand:

      "AMD's driver follows Microsoft's recommendations for DX12 and composites using the Desktop Windows Manager to increase smoothness and reduce tearing. FCAT, in contrast, assumes that the GPU is using DirectFlip. According to Oxide, the problem is that FCAT assumes so-called intermediate frames make it into the data stream and depends on these frames for its data analysis. If V-Sync is implemented differently than FCAT expects, the FCAT tools cannot properly analyze the final output. The application's accuracy is only as reliable as its assumptions, after all."

        • Tirk
        • 3 years ago

        I saw that article, very nice detail on how FCAT is making incorrect assumptions.

        Now the question is, should sites use Event Tracing for Windows (ETW) for Windows 10 tests, or will FCAT be fixed to record Microsoft's new preferred rendering method?

        • psuedonymous
        • 3 years ago

          Guru3D has posted a follow-up in their article: even if you don't 'believe' FCAT, you can watch the video and see that the actual results reflect what FCAT is measuring in terms of actual screen updates.

          • Tirk
          • 3 years ago

          The problem is Guru3D came to a very flawed conclusion with the numbers they got from FCAT, which is what ExtremeTech is pointing out. They say to throw out the AMD FPS numbers, which is a bad conclusion. AMD's drivers are working correctly and do exactly what they are supposed to do. Nvidia's look normal only because they default to DirectFlip, hence the screen tearing, but as soon as you alt-tab in and out of the game it will start using DWM and WILL LOOK EXACTLY LIKE AMD's FCAT graph.

          FCAT is recording something, but not very well, due to the way Microsoft's preferred rendering method works in DX12. I would suggest reading Jeff's article [url<]https://techreport.com/blog/29791/microsoft-push-for-a-unified-cross-platform-gaming-experience-backfires[/url<] for a little more insight into the issue.

          It would be interesting, as derFunkenstein points out in the comments, to see if variable refresh rate monitors see any impact at all when switching away from DirectFlip. I'd test it myself on my FreeSync monitor but I don't have Ashes.

    • TopHatKiller
    • 3 years ago

    Hung-less dogma stuttering from his own mouth. How embarrasing.
    Is the future of dx12 perfomance? Nope.

      • chuckula
      • 3 years ago

      tree loiter to you
      I challenged and challenged more
      I taught honesty

      Suzuki admired
      until the gravel imagined
      and the crayons asked

      wet princes whispered.
      wet elbows agreed strangely
      green fingers pondered

      [url<]http://www.randomhaiku.com/[/url<]

        • maxxcool
        • 3 years ago

        Bahahahaha 🙂

        • n00b1e
        • 3 years ago

        Well done my good sir, well done indeed.

        • TopHatKiller
        • 3 years ago

        you’re still an idiot, and i’m still right. Although, I hope someone laughed.

          • chuckula
          • 3 years ago

          cheat chairs on clothing
          when papers take to plunder
          because elbows work

            • TopHatKiller
            • 3 years ago

            well, I think everyone would agree with that

          • torquer
          • 3 years ago

          So I know Scott is gone, but does anyone actually moderate this stuff anymore?

            • brucethemoose
            • 3 years ago

            I don’t think basic insults are worth censoring, IMHO.

            Not that I condone it… I just don’t like drawing the actual moderation line too close to anything resembling actual discussion. That’s what downvoting is for.

            • ImSpartacus
            • 3 years ago

            Comment voting does nothing to assist moderation. It’s just a vanity contest. It’s not like downvoted comments are hidden or anything like that. They are still right there. So I don’t see any functional result from comment voting.

            • TopHatKiller
            • 3 years ago

            i vote for you, just ‘cos you’re so reasonable and polite

          • maxxcool
          • 3 years ago

          /faceplam/

      • gerryg
      • 3 years ago

      Means this? Thinking will much tell it to me maybe. Looking forward eventually decode before understand. Ow.

      • TopHatKiller
      • 3 years ago

      Huhumph. Arrrggggg. [Clearning my throat]

      Correction:

      JEN-Hung-less [please refer to previous bollocks that fool has uttered.*]

      also:

      Is THIS the future of DX12 performance. Nope.

      *No I’m not going to give a list. [a] I don’t have the effing time [b] the list would be too long…
