
Our testing methods

If you're new to The Tech Report, we don't benchmark games like most other sites on the web. Instead of throwing out a simple FPS average—a number that tells us only the broadest strokes of what it's like to play a game on a particular graphics card—we go much deeper. We capture the amount of time it takes the graphics card to render each and every frame of animation before slicing and dicing those numbers with our own custom-built tools. We call this method Inside the Second, and we think it's the industry standard for quantifying graphics performance. Accept no substitutes.
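The "slicing and dicing" part is less mysterious than it might sound. Here's a minimal sketch, in Python and not our actual tooling, of how a run's worth of per-frame render times can be boiled down to the sort of figures frame-time analysis leans on: average FPS, the 99th-percentile frame time, and time spent beyond a threshold such as 16.7 ms (the 60-FPS mark).

```python
import numpy as np

def summarize_frame_times(frame_times_ms):
    """Reduce a run's per-frame render times (in milliseconds) to a few
    summary figures: average FPS, 99th-percentile frame time, and total
    time spent past the 16.7-ms (60-FPS) threshold."""
    frame_times_ms = np.asarray(frame_times_ms, dtype=float)

    avg_fps = 1000.0 / frame_times_ms.mean()
    p99_ms = np.percentile(frame_times_ms, 99)
    # Only the portion of each frame that spills past 16.7 ms counts
    # toward the "time beyond" figure.
    ms_beyond_16_7 = np.clip(frame_times_ms - 16.7, 0.0, None).sum()

    return {"avg_fps": avg_fps,
            "99th_percentile_ms": p99_ms,
            "ms_beyond_16.7": ms_beyond_16_7}
```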

What's more, we don't typically rely on canned in-game benchmarks—routines that may not be representative of performance in actual gameplay—to gather our test data. Instead of clicking a button and getting a potentially misleading result from those pre-baked benches, we go through the laborious work of seeking out test scenarios that are typical of what one might actually encounter in a game. Thanks to our use of manual data-collection tools, we can go pretty much anywhere and test pretty much anything we want in a given title.

Most of the frame-time data you'll see on the following pages were captured with OCAT, a software utility that uses data from the Event Tracing for Windows (ETW) API to tell us when critical events happen in the graphics pipeline. We perform each test run at least three times and take the median of those runs where applicable to arrive at a final result. Where OCAT didn't suit our needs, we relied on the PresentMon utility.
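For a sense of how those logs become a single result, here's a rough illustration, assuming PresentMon-style CSVs with an MsBetweenPresents column (which OCAT writes out) and placeholder file names, of loading three runs and keeping the one whose average frame time sits in the middle of the group:

```python
import csv
import statistics

def load_frame_times(path):
    """Read per-frame times (the MsBetweenPresents column, in ms) from an
    OCAT/PresentMon-style CSV log."""
    with open(path, newline="") as f:
        return [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]

def median_run(paths):
    """Given logs from three (or more) runs of the same test, return the
    path of the run whose average frame time is the median of the group."""
    runs = sorted(paths, key=lambda p: statistics.mean(load_frame_times(p)))
    return runs[len(runs) // 2]

# Example: median_run(["run1.csv", "run2.csv", "run3.csv"])
```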

As ever, we did our best to deliver clean benchmark numbers. Our test system was configured like so:

Processor: Intel Core i9-9900K
Motherboard: MSI Z370 Gaming Pro Carbon
Chipset: Intel Z370
Memory size: 16 GB (2x 8 GB)
Memory type: G.Skill Flare X DDR4-3200
Memory timings: 14-14-14-34 2T
Storage: Samsung 960 Pro 512 GB NVMe SSD (OS), Corsair Force LE 960 GB SATA SSD (games)
Power supply: Seasonic Prime Platinum 1000 W
OS: Windows 10 Pro version 1809

Thanks to Intel, Corsair, G.Skill, and MSI for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, Asus, Gigabyte, and EVGA supplied the graphics cards for testing, as well.

To serve as a foil for the Strix, we've fired up Gigabyte's RTX 2070 Gaming OC 8G card. Unlike the Strix, the Gaming OC 8G doesn't break out of the two-slot mold, so it can fit into smaller systems where both space and delivered performance are paramount. Gigabyte's 1725-MHz specified boost clock might seem significantly lower than the Strix's 1815 MHz in both cards' default "Gaming Mode" clock profiles, but we know from experience that Nvidia's GPU Boost dynamic voltage and frequency scaling logic tends to push clocks higher than nameplate specs. If there's a meaningful difference in delivered performance from these cards, our frame-time-focused benchmarking methods ought to tease it out. Further, this card sells for $530, giving us a look at what an RTX 2070 closer to its suggested price can deliver.

Graphics card                                  Boost clock (specified)   Graphics driver version
EVGA GeForce GTX 1070 SC2 Gaming               1784 MHz                  GeForce Game Ready 417.35
Nvidia GeForce GTX 1070 Ti Founders Edition    1683 MHz                  GeForce Game Ready 417.35
Nvidia GeForce GTX 1080 Founders Edition       1733 MHz                  GeForce Game Ready 417.35
Nvidia GeForce GTX 1080 Ti Founders Edition    1582 MHz                  GeForce Game Ready 417.35
Gigabyte GeForce RTX 2070 Gaming OC 8G         1725 MHz                  GeForce Game Ready 417.35
Asus ROG Strix GeForce RTX 2070 O8G Gaming     1815 MHz                  GeForce Game Ready 417.35
Nvidia GeForce RTX 2080 Founders Edition       1800 MHz                  GeForce Game Ready 417.35
AMD Radeon RX Vega 56                          1471 MHz                  Radeon Software Adrenalin 2019 Edition 19.1.1
AMD Radeon RX Vega 64                          1546 MHz                  Radeon Software Adrenalin 2019 Edition 19.1.1
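As noted above, GPU Boost typically runs cards well past the specified clocks in that table. If you'd like to watch that happen on your own card, one quick way, assuming an Nvidia GPU and the nvidia-smi utility that ships with the driver, is to log the graphics clock while a game is running and compare it against the nameplate figure. The sketch below is just an illustration; the boost-clock constant is the Gigabyte card's spec from the table.

```python
import subprocess
import time

# Gigabyte RTX 2070 Gaming OC 8G nameplate boost clock, for comparison.
SPECIFIED_BOOST_MHZ = 1725

# Sample the current graphics clock once per second for 30 seconds while
# a game or benchmark is running in the foreground.
for _ in range(30):
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=clocks.gr",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True)
    observed_mhz = int(result.stdout.strip().splitlines()[0])
    print(f"observed {observed_mhz} MHz vs. specified {SPECIFIED_BOOST_MHZ} MHz")
    time.sleep(1)
```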

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests. Unless otherwise noted, we tested each graphics card at a resolution of 2560x1440 with a 144-Hz refresh rate.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.