Testing setup and methodology
Here’s a condensed list of all the graphics cards that I used for testing in this review:
This article was originally going to be two reviews. When it became clear that I wouldn’t have either the original GeForce RTX cards or AMD’s Vega cards to test with, however, my only remaining option was to test all of these cards together, against each other. It’s a kludge, to be sure; the Navi Radeons don’t really belong in the same review as the GeForce GTX 1080 Ti, to say nothing of the RTX 2080 Super. Ultimately, I had to play with the cards I was dealt (no pun intended), and I think there’s still a lot of useful data here.
The GeForce GTX 1660 Ti and Radeon RX 580 cards also don’t really belong in this review, but we’ve never tested the former, and I thought the latter could provide an interesting and familiar performance baseline. Both were very brief loans from friends of the site. Unfortunately, the specific Radeon RX 580 card we used was a 4GB reference model, and it struggles badly at the settings we used in most of the tests. I included its data for completeness’ sake, but keep in mind that it would fare much better at more moderate settings.
As ever, we did our best to deliver clean benchmark numbers. Along with the above video cards, our test system used the following configuration:
| Component | Part |
| --- | --- |
| Processor | AMD Ryzen 9 3900X |
| Cooling | AMD Wraith Prism |
| Motherboard | ASRock X570 Taichi |
| Memory size | 16 GB |
| Memory type | G.Skill Trident Z Royal (2x 8GB) DDR4 SDRAM |
| Memory speed | 3600 MT/s (actual) |
| Memory timings | 16-16-16-36 1T |
| System drive | Gigabyte GP-ASM2NE6200TTTD 2TB PCIe4 |
If you’re new to The Tech Report, we don’t benchmark games like most other sites. Instead of throwing out a simple FPS average—a number that tells us only the broadest strokes of what it’s like to play a game on a particular graphics card—we go much deeper. We capture the amount of time it takes the graphics card to render each and every frame of animation before slicing and dicing those numbers with our own custom-built tools. We call this method Inside the Second, and we think it’s the industry standard for quantifying graphics performance. Accept no substitutes.
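To see why per-frame data matters, consider a minimal sketch of this kind of analysis in Python (a hypothetical illustration, not our actual tooling): two runs can post identical FPS averages while one of them stutters badly, and only frame-time metrics like the 99th-percentile frame time or "time spent beyond" a frame budget tell them apart.

```python
# Hypothetical frame-time analysis in the spirit of per-frame benchmarking.
# Frame times are in milliseconds, one entry per rendered frame.

def analyze(frame_times_ms):
    total_ms = sum(frame_times_ms)
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / total_ms

    # 99th-percentile frame time: roughly 1% of frames took at least this long.
    ordered = sorted(frame_times_ms)
    p99 = ordered[min(n - 1, int(0.99 * n))]

    # "Time spent beyond X": total milliseconds past a 60-Hz frame budget,
    # a rough measure of how much of the run was spent stuttering.
    budget = 1000.0 / 60.0
    beyond = sum(t - budget for t in frame_times_ms if t > budget)

    return {"avg_fps": avg_fps, "p99_ms": p99, "beyond_16_7_ms": beyond}

# Two runs with the same total time (and thus the same FPS average),
# but very different smoothness:
smooth = [16.7] * 100
jerky = [8.35] * 50 + [25.05] * 50  # half the frames blow past the budget
```

Both runs average just under 60 FPS, but the second one spends far more time beyond the 16.7-ms budget and has a much worse 99th-percentile frame time, which is exactly the kind of difference an FPS average hides.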
What’s more, we don’t rely on canned in-game benchmarks—routines that may not be representative of performance in actual gameplay—to gather our test data. Instead of clicking a button and getting a potentially misleading result from those pre-baked benches, we go through the laborious work of seeking out interesting test scenarios that one might actually encounter in a game. Thanks to our use of manual data-collection tools, we can go pretty much anywhere and test pretty much anything we want in a given title.
Most of the frame-time data you’ll see on the following pages were captured with OCAT, a software utility that uses data from the Event Tracing for Windows (ETW) API to tell us when critical events happen in the graphics pipeline. We perform each test run at least three times and take the median of those runs where applicable to arrive at a final result. Where OCAT didn’t suit our needs, we relied on the PresentMon utility.
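The aggregation step described above can be sketched in Python as well. This is a hedged illustration rather than our actual pipeline: the `MsBetweenPresents` column name follows PresentMon-style CSV output (adjust it if your capture tool labels frame intervals differently), and the function names are our own invention.

```python
import csv
import statistics

def mean_frame_time(csv_path, column="MsBetweenPresents"):
    """Average frame interval (ms) from one OCAT/PresentMon-style capture.

    The column name assumes PresentMon-style CSV output; change it if
    your capture tool uses a different label for frame intervals.
    """
    with open(csv_path, newline="") as f:
        times = [float(row[column]) for row in csv.DictReader(f)]
    return sum(times) / len(times)

def median_of_runs(per_run_values):
    # With three or more runs, the median damps one-off outliers
    # (background tasks, shader-cache warmup, and so on).
    return statistics.median(per_run_values)
```

For example, if three runs of the same scene summarize to 14.2, 13.9, and 19.8 ms, the median of 14.2 ms discards the outlier run rather than letting it drag the average up.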