Nvidia’s RTX Super and AMD’s Radeon RX 5700 series graphics cards reviewed

Radeon Anti-Lag

Radeon Anti-Lag (RAL) is the other big new technology that AMD announced on stage at E3. Put simply, Radeon Anti-Lag allows the GPU portion of the game’s workload to overlap to some degree with the CPU portion of the work, reducing the real-time delay from user input to screen response. AMD says that Radeon Anti-Lag “can in theory shrink input lag by almost a full frame.”
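
To make that claim concrete, below is a toy model of a GPU-bound frame pipeline. The timings are invented and this is emphatically not AMD’s driver logic; it just illustrates why starting the CPU’s frame work later, so that it overlaps the tail end of the GPU’s work, can shave off most of a frame’s worth of latency.

```python
# Toy model: input is sampled when the CPU begins preparing a frame. In a
# GPU-bound game, the CPU finishes early and the frame waits in a queue,
# so the sampled input "ages" before it ever reaches the screen. Delaying
# the CPU's work so it finishes just as the GPU frees up removes that wait.
# All numbers are hypothetical; this is not AMD's actual implementation.

CPU_MS = 4.0    # hypothetical CPU time to prepare a frame
GPU_MS = 16.0   # hypothetical GPU time to render it (GPU-bound, ~60 FPS)

def input_lag_ms(cpu_delay_ms: float) -> float:
    queue_wait_ms = max(GPU_MS - (cpu_delay_ms + CPU_MS), 0.0)
    return CPU_MS + queue_wait_ms + GPU_MS

print(f"Default:      {input_lag_ms(0.0):.0f} ms")              # 32 ms
print(f"Just-in-time: {input_lag_ms(GPU_MS - CPU_MS):.0f} ms")  # 20 ms
```

In this contrived case the saving is 12 ms, a bit less than the 16.7-ms frame time, which squares with AMD’s “almost a full frame” phrasing.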

Input latency is a topic near and dear to my heart, so I was pretty interested in RAL. Upon playing games with it for a couple-dozen hours, I can confidently say that I have determined absolutely nothing about Radeon Anti-Lag. You can toggle it with a hotkey, and frankly, even now, I cannot definitively tell if it’s on or off in any game I tried. Subjectively, it might as well be snake oil to me.

Objectively, the best way to test Radeon Anti-Lag would be using a high-speed camera along with a CRT display or modified input device. Unfortunately, I don’t have such a camera handy. (It’s really time for a new smartphone.) Since I don’t have the requisite equipment, I can’t really do the truly thorough investigation of the technology that I’d like to do.

The Open Capture and Analysis Tool.

Even though I don’t have a high-speed camera, I can prove that RAL is doing something by using the Open Capture and Analysis (OCAT) tool to measure input lag introduced in software. Recent versions of OCAT monitor the “estimated driver lag” as part of the standard game performance capture process. Before I share my data, I want to mention a few caveats to this whole process.

First and foremost is that so-called “driver lag” is only one part of the input lag in a typical PC gaming input-output cycle. I’m actually elated that AMD is taking steps to reduce it, but let’s try not to overstate the impact of doing so. Furthermore, let’s keep in mind that OCAT is software primarily maintained by AMD employees. I have full faith in the fairness of the fellows who fabricated this terrific tool, but it’s no coincidence that OCAT recently gained the ability to monitor software input lag.

More than either of those points, though, I think it’s important to note that this sort of input lag is caused by excessive GPU load. AMD even says as much; in its documentation, the company notes that the benefits of Radeon Anti-Lag are best illustrated in severely GPU-limited scenarios. The point is, the simplest way to reduce software input lag is to reduce the GPU’s share of the load, either by lowering graphics settings or simply by getting a faster GPU.
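
With those caveats on the table, here is roughly the shape of the post-processing involved. This is a minimal sketch: OCAT writes per-frame CSV captures, but the driver-lag column name used below is an assumption (check the header row of your own capture, since it can vary between versions), and the file name is just a placeholder.

```python
# Summarize the estimated driver lag from an OCAT capture CSV.
# "MsEstimatedDriverLag" is an assumed column name; adjust it to match
# whatever your version of OCAT actually writes in the header row.
import csv
import statistics

def summarize_driver_lag(path: str, column: str = "MsEstimatedDriverLag") -> None:
    with open(path, newline="") as f:
        lags = [float(row[column]) for row in csv.DictReader(f) if row.get(column)]
    print(f"frames: {len(lags)}")
    print(f"mean:   {statistics.mean(lags):.2f} ms")
    print(f"median: {statistics.median(lags):.2f} ms")

summarize_driver_lag("OCAT-capture.csv")  # placeholder file name
```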

In any case, let’s take a look at what OCAT has to say about Radeon Anti-Lag. I tested three DirectX 11 games on the Radeon RX 5700 XT, first with Radeon Anti-Lag disabled, then with it enabled, and then finally once more after switching out the Radeon for the GeForce RTX 2070 Super. I would have used the RTX 2060 Super, but I unfortunately no longer had access to it when performing this testing. As it turns out, the results were still pretty interesting.

The first game I tested was Hellblade: Senua’s Sacrifice. I tested at 3440×1440, a slightly higher resolution than we used earlier, to increase the GPU load. The display I used also supports AMD FreeSync at up to 100 Hz, so I enabled that to further reduce baseline input lag.

The GeForce card runs this game quite a bit faster than the Radeon RX 5700 XT, but the estimated driver lag is very similar for both cards. Turning on Radeon Anti-Lag apparently reduces input lag by some 11 milliseconds, but I certainly couldn’t tell the difference.

Deciding to test a much more fast-paced and reaction-time-oriented game, I loaded up Warframe. I would have used Devil May Cry V, but Radeon Anti-Lag only works in DirectX 11 games, while Radeon Image Sharpening doesn’t work in DirectX 11 titles. That’s right: you can’t use Anti-Lag and Image Sharpening at the same time, at least for now.

The effect wasn’t obvious in Warframe, either, but that’s not particularly surprising for a variety of reasons. Notably, Warframe runs at around 110 FPS on the Radeon RX 5700 XT at 3440×1440, so each frame lasts barely 9 ms to begin with. The input lag was purportedly reduced by nearly 8 ms, which is “almost a full frame,” just as AMD said.

Finally, deciding to really load up the video cards, I set up Grand Theft Auto V in 3440×1440, with 4x multi-sampling anti-aliasing (MSAA). This was quite a burden to bear for our Radeon, but the GeForce RTX 2070 Super handled it easily.

At these settings, according to OCAT, Radeon Anti-Lag was actually dropping input lag by over 15 milliseconds. Even then, it still wasn’t apparent to me, but I also don’t really have any reason to doubt OCAT’s data. My measured results are consistent with what AMD said to expect, so I’m willing to give the feature the benefit of the doubt.

My experiences with Radeon Anti-Lag left me with more questions than answers, though. AMD said that, by its nature, Radeon Anti-Lag might reduce framerates, and I indeed benchmarked the feature with my typical vigilance. I’m not publishing those numbers simply because they aren’t informative. In fact, in all three games, enabling Anti-Lag marginally reduced the games’ 99th-percentile frame times—implying a slight improvement in performance—but the difference was so small as to be irrelevant, and likely comes down to the imperfect repeatability of our tests.
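
As a refresher on that metric, the 99th-percentile frame time is simply the per-frame render time that 99% of frames come in under; lower is better, and it is far more sensitive to stutter than an average-FPS figure. A minimal sketch with made-up frame times shows the arithmetic:

```python
# Nearest-rank percentile over per-frame render times. The frame times
# here are made up for illustration; real ones would come from a capture.

def percentile(values: list[float], pct: float) -> float:
    ordered = sorted(values)
    rank = max(int(round(pct / 100.0 * len(ordered))) - 1, 0)
    return ordered[rank]

frame_times_ms = [8.3, 8.5, 9.1, 8.4, 21.7, 8.6, 8.2, 8.8, 8.5, 30.2]
print(f"99th-percentile frame time: {percentile(frame_times_ms, 99):.1f} ms")  # 30.2 ms
```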

All of that left me wondering why Anti-Lag can’t simply be enabled globally in Radeon Settings; instead, it has to be toggled on manually, on a per-game basis. Taking that same train of thought to the next station, I wondered why these apparently “free” latency reductions are left on the table at all. Put another way, why doesn’t the driver just work this way in the first place?

The point is, as a PC gamer, I obviously want the minimum possible input lag, and Radeon Anti-Lag supposedly improves things on that front. I sure can’t tell, though. I don’t mean to demean the hard work of AMD’s engineers—as I said above, I’m elated that the company is tackling the issue of input lag—but the extra frame rate afforded by the faster (admittedly bigger and hotter) graphics card made all three games feel more responsive than enabling Radeon Anti-Lag did.


Comments

Mr Bill

Zak, excellent review. You always have interesting observations for each game and how the cards deal with the game. That in-the-second commentary is what brings the nerds to the yard.

willmore

Did you crush that 5700? Or is that really how the shroud looks?

willmore

Sorry, it was the XT model that looks crushed.

K-L-Waster

That is in fact what the reference shroud looks like.

(As if peeps didn’t have enough reasons to hold out for 3rd party coolers…)

willmore

I wouldn’t call it ugly, but I really don’t like the look.

Kretschmer

What this review tells me is that my 1080Ti held up really, really well. I got 2070 Super performance for two extra years at a $150 premium.

Sure, no ray tracing, but I’d rather buy into that tech when it becomes better developed.

plonk420

thank you SOOOO much for this review! Time Spent Beyond x ms / average frametime now is the first thing i look at with reviews

LoneWolf15

One note I don’t think I recall being brought up (the lack of index drop-down in the pages listed makes me loath to go back and check): Everything I’ve seen says that if Navi is your card, do not buy a reference design. Some reference blowers are quite good (nVidia Pascal was; I was very happy with the noise level on two 1070 Founders Edition cards at full load, and a single one was whisper quiet). AMD’s blower is not. It is both loud and a poor cooler; Sapphire’s own dual-fan Pulse 5700XT runs twenty degrees cooler. Twenty degrees [i]Celsius[/i]… Read more »

LoneWolf15

I should add, this is not meant to denigrate the AMD’s Navi. Just AMD’s reference cooler design.

I think the Sapphire cards (specifically, the Pulse 5700XT or Pulse 5700), for ten bucks more than the reference design, are competitive and worth looking into. But they may be hard to find at the moment.

https://www.newegg.com/sapphire-radeon-rx-5700-xt-100416p8gl/p/N82E16814202349?Description=sapphire%20pulse%205700xt&cm_re=sapphire_pulse_5700xt-_-14-202-349-_-Product

https://www.newegg.com/sapphire-radeon-rx-5700-100417p8gl/p/N82E16814202350?Description=sapphire%20pulse%205700&cm_re=sapphire_pulse_5700-_-14-202-350-_-Product

anotherengineer

“For gamers like myself who use myriad monitors—I’m currently using five—”

got a matrox card? 😉

when are the $235 cards coming out?!?!

Mr Bill

I have a Matrox G650. It’s a fabulous office card for multi-monitor but far too slow for gaming.

StuG

I feel like a lot more cards should have been included in the conclusions graph (given that was already charted on previous reviews) so we could see where a larger range of cards would fall (even if the dots were marked as previous reviews or something).

Captain Ned

If those cards were not benchmarked on the same rigs as used for this review, the comparison would not be univariate. Multivariate is what proper reviews try to avoid.

Yomom

So sad that great content like this has to get fakked by this horrible horrible generic template.

DPete27

Nvidia’s new Ultra Low Latency setting description may shed some light on AMD’s implementation:
https://www.nvidia.com/en-us/geforce/news/gamescom-2019-game-ready-driver/

By reducing maximum pre-rendered frames to 1, you go to just-in-time frame scheduling. Sure, that would improve latency, but it would also leave you susceptible to frame-time spikes if a frame takes a little longer than expected. I suspect that using VRR can reduce this effect, but it still would be interesting to test. How many pre-rendered frames is optimal?

DPete27

Also of note:
“in DX12 and Vulkan titles, the game decides when to queue the frame”
would be nice to include in your review write-up.

Jesse

I usually set mine to 2 globally in the Nvidia control panel – enough for double-buffered vsync if I want it, and much less latency in game engines that pre-render like 5 frames by default.

Jason Deford

So… Are the Radeon Vega cards an evolutionary dead end?

I was disappointed to see the Nvidia 1080 ti card in testing, but not the Radeon RX Vega 64. If you’re making a generational comparison including the Nvidia 10-series and its follow-ons, I’d think you should include the Radeon Vega-series in comparison to the RX 5700-series. The RX 580 card shown in the comparison isn’t in the same price/performance range as the newer cards being benched.

Krogoth

Vega will continue on as general compute solutions while RDNA will focus more on graphical prowess.

StuG

This was exactly what I thought as well. No VII/64/56?

tfp

I was wondering the same; that said, a check on Newegg shows that VII availability is very limited. Is AMD running into production issues with the VII?

Krogoth

Radeon VII was a stopgap solution until Navi was ready. It was a way to clear out excessive Vega 20 stock that ate too much power for ISV customers.

Navi already bests Vega 20 at gaming performance when memory bandwidth isn’t a factor.

LoneWolf15

Yup. Radeon VII owners are sadly being left high and dry.

It was a lousy buy even for the most die-hard AMD fans, and its short market time is pretty disappointing for anyone who bought one.

jihadjoe

My guess is AMD doesn’t really want to make any more of the VII than is necessary. It’s relatively cheap for something that uses such expensive components, and it’s built on an expensive process.

They’d rather 90% of those chips go into the Mi50 accelerators.

Krogoth

Yep, the Radeon VII is a much better general compute/content creation card than a gaming card. There’s nothing close to it at its price point. You have to spend a lot more if you want to get that performance in either market.

It was a steal for general-compute hobbyists, like the Kepler-based Titans were back in the day.

Krogoth

It is likely that Zak simply doesn’t have any Vega hardware on hand, and his test system is different from previous Vega benches, making an apples-to-apples comparison difficult at best.

If you want a ballpark figure just take 5700XT results and reduce them by like ~2-10% to get Vega 64 stock/tuned Vega 56 results.

Colton Westrate (Editor)

^ This. Zak was working with what he had, or in some cases, what he could borrow for a couple days.

Jason DeFord

“If you want a ballpark figure just take 5700XT results and reduce them by like ~2-10% to get Vega 64 stock/tuned Vega 56 results.” I think you’re over-simplifying the comparison. I still think seeing the Vega GPUs on the ‘scatter charts’ would be valuable data points. In addition, there is the price dimension that needs to be accounted for. Right now, an ASRock Phantom Gaming X Radeon RX Vega 56 can be found for US$270, while an ASRock Radeon RX 5700 goes for US$330 @ TheEgg. Saving ~20% of GPU cost for a difference of “~2-10%” in performance is worth considering.… Read more »

Krogoth

Vega 56 is a bit of a wildcard because it is highly dependent on how well you can undervolt the unit, unlike the 5700 and 5700XT, which can operate at their performance levels without too much hassle. Vega 64 only pulls ahead if you are brave enough to overclock/undervolt to its limits and are willing to tolerate the power consumption.

Vega units are decisively better if you care more about general compute performance.

Oliv

Completely agree, especially since the model used was the 4GB version. One of these things is not like the other.

juzz86

Oh Zak, you’ve done it again mate. I know any of our staff who were tasked with carrying the site’s major drawcard articles would give it every bit of justice you had – as you all do with your own posts. But to see what we all crave seeing on the site hold the same format, same prose, same detail as it always has – means an awful lot to a sentimental fella like me. [Site] Formatting and [staff] introductory niggles around the ownership change aside, I’m heartened to see the stalwart content keep coming (Fish, Bruno, Ben, Josh, Nath)… Read more »

unknown-error

The “all-white” background is going to take a lot more getting used to. On my desktop, sorry to say, it looks really amateurish. Since there is no drop-down menu with the relevant page titles, it would help us a lot if you put the “Page listing:” at the bottom of each page, so we can skip to pages that interest us. What I do now is open the page with the “Page listing:” in one tab and open the pages I like in another tab.

The review itself is great as usual, and thanks a lot for that.

John

On some pages you mention about “Jeff’s write-up” and you link to another article, but that article has a different author name, without “Jeff” anywhere. You should correct it to not confuse the readers (yes, I know Jeff wanted his name removed from TR). Also, it is disappointing you did not use some other DX12 games like Metro Exodus. It is a perfect game to test performance with RTX enabled and also useful for comparing GPUs without RTX effects. It is also disappointing that you did not select some popular MMORPGs for benchmark, something like Black Desert Online or FFXIV.… Read more »

Fonbu

Thank you, Tech Report staff, for making this possible.
Many of us, I am sure, have waited for this review. And it was smart of the Tech Report to wait for all the GeForce Super cards and the new Radeons to arrive and showcase them against each other; that was the most productive choice.
I like how all the new driver features of the products were showcased.
Was this Zak’s first major video card review? It was well done.

Sam

Adding a page title listing to the content of the first page would help a lot, since we lost the convenient dropdown box after the refresh. Some people just want to see benchmarks for a certain game and don’t want to click through 14 pages to find that.

chuckula

Thanks for the review!

As for the product well, let’s say that 7nm has allowed AMD to avoid some of the worst issues with previous cards essentially being overclocked out of the box. But given how much guff Nvidia has gotten for dedicating silicon to RTX, it’s also pretty telling that their 16nm parts (2060 super in particular) are still competitive even when you never turn RTX on and even when you look at power consumption.

I think this calls for some market disruption by a third party (and of course I mean S3!)

Krogoth

No, a dark shadow from the distant past will emerge from its slumber…

[b]Bitboys[/b]

chuckula

Bitboys??!!

Oy!!

LoneWolf15

I upvoted a Chuckula post. Demons must be shivering in hell as we speak.

Waco

It makes me wonder just how much of Nvidia is currently propped up by datacenter sales – their die sizes compared to AMD are monstrous.

Also, Nvidia is at 12 nm, not 16.

chuckula

Die sizes are irrelevant since AMD is clearly paying a fortune for 7nm silicon or else they would have launched these chips for a small fraction of the price of the RTX parts to grab market share. Furthermore, TSMC’s “12nm” is the 16nm process with a couple of tweaks and a marketing name change.

As for the data center, you should have paid attention to Nvidia’s most recent earnings beat, where data center revenue was actually down a good bit but overall results still beat the street; unlike AMD, Nvidia just raised its outlook for the rest of 2019.

Neutronbeam

That is one hell of a review, Zak! Excellent work; well done!

Krogoth

It was worth the wait.

Waco

It’s good to see essentially price parity on the XT / 2060 Super. That’s good news for anyone in the $400-and-below market. The standard 5700 looks to stand on its own pretty handily between the 1660 Ti and 2060 Super; it’s a slightly better value than either if you look at 99th-percentile FPS per dollar.

Frenchy2k1

TR left out the RTX 2060, which will continue at $350 and is in direct competition with AMD’s 5700.
The other “SUPER” cards supersede the previous models (the RTX 2070 SUPER completely replaces the original 2070), but the original 2060 will continue.
It seems TR wasn’t sourced one and hence did not include it in the graph.
