Radeon Anti-Lag (RAL) is the other big new technology that AMD announced on stage at E3. Put simply, Radeon Anti-Lag allows the GPU portion of the game’s workload to overlap to some degree with the CPU portion of the work, reducing the real-time delay from user input to screen response. AMD says that Radeon Anti-Lag “can in theory shrink input lag by almost a full frame.”
Input latency is a topic near and dear to my heart, so I was pretty interested in RAL. Upon playing games with it for a couple-dozen hours, I can confidently say that I have determined absolutely nothing about Radeon Anti-Lag. You can toggle it with a hotkey, and frankly, even now, I cannot definitively tell if it’s on or off in any game I tried. Subjectively, it might as well be snake oil to me.
Objectively, the best way to test Radeon Anti-Lag would be with a high-speed camera along with a CRT display or a modified input device. Unfortunately, I don't have such a camera handy. (It's really time for a new smartphone.) Without the requisite equipment, I can't do the truly thorough investigation of the technology that I'd like to.
Even though I don't have a high-speed camera, I can prove that RAL is doing something by using the Open Capture and Analytics Tool (OCAT) to measure the input lag introduced in software. Recent versions of OCAT record the "estimated driver lag" as part of the standard game-performance capture process. Before I share my data, I want to mention a few caveats to this whole process.
First and foremost is that so-called "driver lag" is only one part of the input lag in a typical PC gaming input-output cycle. I'm actually elated that AMD is taking steps to reduce it, but let's try not to overstate the impact of doing so. Furthermore, let's keep in mind that OCAT is software primarily maintained by AMD employees. I have full faith in the fairness of the fellows who fabricated this terrific tool, but it's no coincidence that OCAT recently gained the ability to monitor software input lag.
More than either of those points, though, I think it's important to note that this sort of input lag is caused by excessive GPU load. AMD even says as much; in its documentation, the company notes that the benefits of Radeon Anti-Lag are best illustrated in severely GPU-limited scenarios. The point is, the simplest way to reduce software input lag is to reduce the GPU's share of the workload, either by lowering graphics settings or simply by getting a faster GPU.
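To see why that is, consider a toy model of the render queue: when the CPU can prepare frames faster than the GPU can draw them, finished frames (and the input state baked into them) pile up waiting for their turn on the GPU. The little sketch below is purely illustrative, with made-up timings and a hypothetical queue cap; it is not a model of AMD's actual driver, but it shows how latency balloons once the GPU becomes the bottleneck.

```python
# Toy model of input-to-display latency in CPU-bound vs. GPU-bound scenarios.
# Purely illustrative; the queue cap and timings are made-up assumptions,
# not a description of AMD's driver internals.
def queued_latency_ms(cpu_ms: float, gpu_ms: float, max_queued: int = 3) -> float:
    """Rough latency for a frame whose input was sampled at submission time."""
    if gpu_ms <= cpu_ms:
        # GPU keeps pace: the frame only waits on its own CPU and GPU work.
        return cpu_ms + gpu_ms
    # GPU-bound: the queue fills to its cap, so a new frame waits behind
    # every queued frame before its own GPU work even begins.
    return cpu_ms + max_queued * gpu_ms + gpu_ms

print(queued_latency_ms(cpu_ms=5, gpu_ms=4))   # CPU-bound: 9 ms
print(queued_latency_ms(cpu_ms=5, gpu_ms=16))  # GPU-bound: 69 ms
```

In this toy world, the GPU-bound case adds several whole GPU frame times of waiting, which is exactly why cutting settings or buying a faster card is the blunt-but-effective fix.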
In any case, let’s take a look at what OCAT has to say about Radeon Anti-Lag. I tested three DirectX 11 games on the Radeon RX 5700 XT, first with Radeon Anti-Lag disabled, then with it enabled, and then finally once more after switching out the Radeon for the GeForce RTX 2070 Super. I would have used the RTX 2060 Super, but I unfortunately no longer had access to it when performing this testing. As it turns out, the results were still pretty interesting.
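Before getting to the results, a quick note on how I pull the numbers out of each run. OCAT writes a CSV file per capture, and something like the sketch below averages the driver-lag estimates across a run. The column name MsEstimatedDriverLag and the file names here are assumptions on my part; check the header row of your OCAT version's output before relying on it.

```python
# Sketch: average the estimated driver lag across an OCAT capture.
# Assumes a column named "MsEstimatedDriverLag" (check your OCAT version's
# CSV header; the exact name may differ) and hypothetical file names.
import csv
import statistics

def average_driver_lag(capture_path: str) -> float:
    """Return the mean estimated driver lag (in ms) for one capture."""
    lags = []
    with open(capture_path, newline="") as f:
        for row in csv.DictReader(f):
            value = row.get("MsEstimatedDriverLag", "")
            if value:  # skip frames with no logged estimate
                lags.append(float(value))
    return statistics.mean(lags) if lags else float("nan")

print(f"Anti-Lag off: {average_driver_lag('capture_ral_off.csv'):.1f} ms")
print(f"Anti-Lag on:  {average_driver_lag('capture_ral_on.csv'):.1f} ms")
```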
The first game I tested was Hellblade: Senua's Sacrifice. I tested in 3440×1440, a slightly higher resolution than we used earlier, to increase the GPU load. The display I used also supports AMD FreeSync at up to 100 Hz, so I enabled that to further reduce baseline input lag.
The GeForce card runs this game quite a bit faster than the Radeon RX 5700 XT, but the estimated driver lag is very similar for both cards. Turning on Radeon Anti-Lag apparently reduces input lag by some 11 milliseconds, but I certainly couldn’t tell the difference.
Deciding to test a much more fast-paced, reaction-time-oriented game, I loaded up Warframe. I would have used Devil May Cry V, but Radeon Anti-Lag only works in DirectX 11 games, and Radeon Image Sharpening doesn't support DX11. That's right; you can't use Anti-Lag and Image Sharpening at the same time, at least for now.
The effect wasn't obvious in Warframe, either, but that's not particularly surprising for a variety of reasons. Notably, Warframe runs at around 110 FPS on the Radeon RX 5700 XT in 3440×1440, which works out to roughly 9 ms per frame (1000 ms ÷ 110). The input lag was purportedly reduced by nearly 8 ms, so that really is "almost a full frame," just as AMD said.
Finally, deciding to really load up the video cards, I set up Grand Theft Auto V in 3440×1440 with 4x multi-sample anti-aliasing (MSAA). This was quite a burden for our Radeon to bear, but the GeForce RTX 2070 Super handled it easily.
At these settings, according to OCAT, Radeon Anti-Lag was actually dropping input lag by over 15 milliseconds. Even then, it still wasn’t apparent to me, but I also don’t really have any reason to doubt OCAT’s data. My measured results are consistent with what AMD said to expect, so I’m willing to give the feature the benefit of the doubt.
My experiences with Radeon Anti-Lag left me with more questions than answers, though. AMD said that, by its nature, Radeon Anti-Lag might reduce framerates, and I indeed benchmarked the feature with my typical vigilance. I'm not publishing those numbers simply because they aren't informative. In fact, in all three games, enabling Anti-Lag marginally reduced the games' 99th-percentile frame times (implying a slight improvement in performance), but the difference was so small as to be irrelevant, and likely down to the imperfect repeatability of our tests.
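For the curious, the 99th-percentile figures come out of the same capture files. Here's a minimal sketch, assuming the MsBetweenPresents column that OCAT inherits from PresentMon (the file name is hypothetical):

```python
# Sketch: 99th-percentile frame time from an OCAT capture, using the
# MsBetweenPresents column that OCAT inherits from PresentMon.
import csv

def percentile_frame_time(capture_path: str, pct: float = 99.0) -> float:
    """Return the frame time (in ms) at the given percentile of a capture."""
    with open(capture_path, newline="") as f:
        times = sorted(float(row["MsBetweenPresents"]) for row in csv.DictReader(f))
    # Index of the first frame time at or beyond the requested percentile.
    index = min(len(times) - 1, int(len(times) * pct / 100))
    return times[index]

print(f"99th-percentile frame time: {percentile_frame_time('capture.csv'):.2f} ms")
```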
That left me wondering why Anti-Lag can't simply be toggled on globally in Radeon Settings; instead, it has to be enabled manually, on a per-game basis. Taking that same train of thought to the next station, I wondered why these apparently "free" latency reductions are sitting there for the taking. Put another way, if Anti-Lag has no real performance cost, why doesn't the driver just work this way in the first place?
The point is, as a PC gamer, I obviously want the minimum possible input lag, and Radeon Anti-Lag supposedly improves things on that front. I sure can't tell, though. I don't mean to demean the hard work of AMD's engineers (as I said above, I'm elated that the company is tackling the issue of input lag), but the extra frame rate afforded by the admittedly bigger and hotter, faster graphics card made all three games feel more responsive than enabling Radeon Anti-Lag did.