
Our testing methods

If you're new to The Tech Report, we don't benchmark games like most other sites on the web. Instead of throwing out a simple FPS average—a number that tells us only the broadest strokes of what it's like to play a game on a particular graphics card—we go much deeper. We capture the amount of time it takes the graphics card to render each and every frame of animation before slicing and dicing those numbers with our own custom-built tools. We call this method Inside the Second, and we think it's the industry standard for quantifying graphics performance. Accept no substitutes.
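To give a sense of what that slicing and dicing looks like, here's a minimal sketch of the kinds of summary statistics frame-time data lends itself to: the average FPS figure, the 99th-percentile frame time, and the time spent beyond a few frame-time thresholds. This is an illustrative example rather than our in-house tooling, and the thresholds shown are just common reference points (50 ms, 33.3 ms, 16.7 ms, and 8.3 ms correspond to 20, 30, 60, and 120 FPS).

```python
def frame_time_metrics(frame_times_ms, thresholds_ms=(50.0, 33.3, 16.7, 8.3)):
    """Summarize one run's per-frame render times (in milliseconds)."""
    total_ms = sum(frame_times_ms)
    # Average FPS follows from total frames over total time.
    avg_fps = 1000.0 * len(frame_times_ms) / total_ms

    # 99th-percentile frame time: 99% of frames finished at least this quickly.
    ordered = sorted(frame_times_ms)
    p99_ms = ordered[int(0.99 * (len(ordered) - 1))]

    # Time spent beyond each threshold: for every frame that took longer than
    # the threshold, count only the excess, then sum it over the whole run.
    ms_beyond = {t: sum(ft - t for ft in frame_times_ms if ft > t)
                 for t in thresholds_ms}

    return {"avg_fps": avg_fps, "p99_ms": p99_ms, "ms_beyond": ms_beyond}
```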

What's more, we don't typically rely on canned in-game benchmarks—routines that may not be representative of performance in actual gameplay—to gather our test data. Instead of clicking a button and getting a potentially misleading result from those pre-baked benches, we go through the laborious work of seeking out test scenarios that are typical of what one might actually encounter in a game. Thanks to our use of manual data-collection tools, we can go pretty much anywhere and test pretty much anything we want in a given title.

Most of the frame-time data you'll see on the following pages were captured with OCAT, a software utility that uses data from the Event Tracing for Windows (ETW) API to tell us when critical events happen in the graphics pipeline. We perform each test run at least three times and take the median of those runs, where applicable, to arrive at a final result. Where OCAT didn't suit our needs, we relied on the PresentMon utility.
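OCAT and PresentMon both write their captures out as CSV files of per-frame data. As a rough sketch of how those files can be boiled down to a single result, the snippet below loads each run's frame times and takes the median of a chosen metric across runs. The MsBetweenPresents column name follows PresentMon's output format, and the file names are placeholders.

```python
import csv
import statistics

def load_frame_times(csv_path, column="MsBetweenPresents"):
    """Read per-frame times (ms) from an OCAT/PresentMon-style capture.
    The column name matches PresentMon's frame-to-frame present interval;
    adjust it if your capture tool labels the field differently."""
    with open(csv_path, newline="") as f:
        return [float(row[column]) for row in csv.DictReader(f) if row.get(column)]

def median_across_runs(csv_paths, metric):
    """Apply a per-run metric to each capture and return the median result."""
    return statistics.median(metric(load_frame_times(p)) for p in csv_paths)

# Example: the median 99th-percentile frame time over three test runs.
# runs = ["run1.csv", "run2.csv", "run3.csv"]  # placeholder file names
# p99_ms = median_across_runs(runs, lambda ft: sorted(ft)[int(0.99 * (len(ft) - 1))])
```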

As ever, we did our best to deliver clean benchmark numbers. Our test system was configured like so:

Processor: Intel Core i9-9900K
Motherboard: MSI Z370 Gaming Pro Carbon
Chipset: Intel Z370
Memory size: 16 GB (2x 8 GB)
Memory type: G.Skill Flare X DDR4-3200
Memory timings: 14-14-14-34 2T
Storage: Samsung 960 Pro 512 GB NVMe SSD (OS), Corsair Force LE 960 GB SATA SSD (games)
Power supply: Seasonic Prime Platinum 1000 W
OS: Windows 10 Pro version 1809

Thanks to Intel, Corsair, G.Skill, and MSI for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and Gigabyte supplied the graphics cards we used for testing today, as well.

Graphics card | Boost clock (specified) | Graphics driver version
Nvidia GeForce GTX 1080 Ti Founders Edition | 1582 MHz | GeForce Game Ready 418.81
Gigabyte GeForce RTX 2070 Gaming OC 8G | 1725 MHz | GeForce Game Ready 418.81
Nvidia GeForce RTX 2080 Founders Edition | 1800 MHz | GeForce Game Ready 418.81
Nvidia GeForce RTX 2080 Ti Founders Edition | 1635 MHz | GeForce Game Ready 418.81
AMD Radeon RX Vega 64 | 1546 MHz | Radeon Software Adrenalin 2019 Edition Press
AMD Radeon VII | 1750 MHz | Radeon Software Adrenalin 2019 Edition Press

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests. We tested each graphics card at a resolution of 3840x2160, unless otherwise noted. We enabled HDR in games where it was available. Our HDR display is an LG OLED55B7A television.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.