
Our testing methods

If you're new to The Tech Report, we don't benchmark games like most other sites on the web. Instead of throwing out a simple FPS average (or even average and minimum FPS figures)—numbers that tell us only the broadest strokes of what it's like to play a game on a particular graphics card—we go much deeper. We capture the amount of time it takes the graphics card to render each and every frame of animation before slicing and dicing those numbers with our own custom-built tools. We call this method Inside the Second, and we think it sets the industry standard for quantifying graphics performance. Accept no substitutes.
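To give a flavor of the kind of slicing and dicing we mean, here's a minimal sketch of two frame-time metrics of the sort this approach enables: a high-percentile frame time and the total time spent past a threshold such as 16.7 ms (the 60-FPS mark). The frame-time values below are made up purely for illustration.

```python
# Minimal sketch of frame-time analysis; numbers are illustrative only.

def percentile(values, pct):
    """Frame time at the given percentile (nearest-rank method)."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def time_beyond(values, threshold_ms):
    """Total milliseconds spent past a threshold, e.g. 16.7 ms (60 FPS)."""
    return sum(t - threshold_ms for t in values if t > threshold_ms)

# Hypothetical per-frame render times in milliseconds.
frame_times = [14.2, 15.1, 33.4, 16.0, 15.5, 41.0, 15.8]

avg_fps = 1000 * len(frame_times) / sum(frame_times)
p99 = percentile(frame_times, 99)
stutter_ms = time_beyond(frame_times, 16.7)
```

A run can post a healthy average FPS while a handful of long frames like the 33.4-ms and 41-ms outliers above make play feel rough, which is exactly what percentile and time-beyond-threshold figures expose.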

What's more, we don't rely on canned in-game benchmarks—routines that may not be representative of performance in actual gameplay—to gather our test data. Instead of clicking a button and getting a potentially misleading result from those pre-baked benches, we go through the laborious work of seeking out test scenarios that are typical of what one might actually encounter in a game. Thanks to our use of manual data-collection tools, we can go pretty much anywhere and test pretty much anything we want in a given title.

Most of the frame-time data you'll see on the following pages were captured with OCAT, a software utility that uses data from the Event Tracing for Windows (ETW) API to tell us when critical events happen in the graphics pipeline. We perform each test run at least three times and take the median of those runs where applicable to arrive at a final result. Where OCAT didn't suit our needs, we relied on the PresentMon utility.
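The aggregation step above can be sketched briefly. PresentMon (and OCAT, which builds on it) logs one row per presented frame, including a MsBetweenPresents column; the snippet below reads such logs, computes each run's average FPS, and takes the median across three runs. The CSV contents here are toy data standing in for real captures.

```python
import csv
import io
import statistics

def run_fps(csv_text):
    """Average FPS for one run, from PresentMon/OCAT-style CSV output.

    Assumes a 'MsBetweenPresents' column, as PresentMon's logs provide.
    """
    frame_ms = [float(row["MsBetweenPresents"])
                for row in csv.DictReader(io.StringIO(csv_text))]
    return 1000.0 * len(frame_ms) / sum(frame_ms)

# Three toy "runs"; real captures would come from log files on disk.
runs = [
    "MsBetweenPresents\n16.0\n16.0\n",
    "MsBetweenPresents\n20.0\n20.0\n",
    "MsBetweenPresents\n10.0\n10.0\n",
]

final_fps = statistics.median(run_fps(r) for r in runs)
```

Taking the median of three runs, rather than the mean, keeps one anomalous run (a background task firing mid-test, say) from skewing the final figure.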

We tested Wolfenstein II: The New Colossus at 3840x2160 using its "Mein Leben!" preset. The game provides fine-grained control over what it calls "Nvidia Adaptive Shading," although we imagine most people will simply want to choose among the three presets on offer: "Balanced," "Performance," and "Quality." In fact, that's exactly what we did for our testing.

As ever, we did our best to deliver clean benchmark numbers. Our test system was configured like so:

Processor: Intel Core i9-9980XE
Motherboard: Asus Prime X299-Deluxe II
Chipset: Intel X299
Memory size: 32 GB (4x 8 GB)
Memory type: G.Skill Trident Z DDR4-3200
Memory timings: 14-14-14-34 2T
Storage: Intel 750 Series 400 GB NVMe SSD (OS), Corsair Force LE 960 GB SATA SSD (games)
Power supply: Seasonic Prime Platinum 1000 W
OS: Windows 10 Pro with October 2018 Update (ver. 1809)

We used the following graphics cards for our testing, as well:

Graphics card | Graphics driver | Boost clock speed (nominal) | Memory data rate (per pin)
Nvidia GeForce RTX 2080 Ti Founders Edition | GeForce Game Ready | 1635 MHz | 14 Gbps
Gigabyte GeForce RTX 2080 Gaming OC 8G | GeForce Game Ready | 1815 MHz | 14 Gbps
Asus ROG Strix GeForce RTX 2070 O8G | GeForce Game Ready | 1815 MHz | 14 Gbps

Thanks to Intel, Corsair, Gigabyte, G.Skill, and Asus for helping to outfit our test rigs with some of the finest hardware available. Nvidia, Gigabyte, and Asus supplied the graphics cards we used for testing, as well. Have a gander at our fine Asus motherboard before it got buried beneath a pile of graphics cards and a CPU cooler:

And a look at our spiffy Gigabyte GeForce RTX 2080, seen in the background here:

And our Asus ROG Strix GeForce RTX 2070, which just landed in the TR labs:

With those formalities out of the way, let's get to testing.