
AMD's Radeon RX 590 graphics card reviewed


Mind the gap

Morning, folks. I'm fighting intermittent power outages thanks to an ice storm that rolled through the area around the TR labs last night, but that hasn't stopped me from collecting and digesting data on AMD's latest graphics card: the Radeon RX 590. The company didn't share a ton of details about this card with us, so I'll keep this short. The RX 590 uses the same basic Polaris 10 GPU that powered the RX 480 and got a clock-speed boost in the RX 580. This time, the performance improvements come courtesy of a move to GlobalFoundries' 12LP process, an improved version of the basic 14-nm FinFET technology that has underpinned AMD's CPUs and GPUs for some time now.

The XFX RX 590 Fatboy card (and yes, that is its name) that we've had the privilege of playing with over the past few days carries a 1600-MHz boost clock, up a fair bit compared to the roughly 1400-MHz range that RX 580 partner cards could boast of. We'll be adding more to this article as we can, but all of our test data is present and accounted for. If you'd rather not page through reams of frame-time data, you can skip ahead to the conclusion at your leisure.

Our testing methods

If you're new to The Tech Report, we don't benchmark games like most other sites on the web. Instead of throwing out a simple FPS average—a number that tells us only the broadest strokes of what it's like to play a game on a particular graphics card—we go much deeper. We capture the amount of time it takes the graphics card to render each and every frame of animation before slicing and dicing those numbers with our own custom-built tools. We call this method Inside the Second, and we think it's the industry standard for quantifying graphics performance. Accept no substitutes.
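For the curious, here's a minimal sketch in Python of the sort of slicing and dicing involved. It's an illustration of the concept rather than our actual toolchain (which is custom-built and not published here), and the metric names and the 16.7-ms threshold are simply illustrative choices:

import statistics

def summarize_frame_times(frame_times_ms, threshold_ms=16.7):
    """Reduce a list of per-frame render times (in ms) to a few summary metrics."""
    total_time_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_time_s

    # 99th-percentile frame time: 99% of frames were delivered at least this quickly.
    pct_99_ms = statistics.quantiles(frame_times_ms, n=100)[98]

    # "Time spent beyond X": how many milliseconds the card spent working on
    # frames that took longer than the threshold (16.7 ms corresponds to 60 FPS).
    time_beyond_ms = sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

    return {
        "avg_fps": avg_fps,
        "99th_percentile_ms": pct_99_ms,
        "time_beyond_ms": time_beyond_ms,
    }

The point of going beyond the FPS average is visible in the last two numbers: two cards with identical average frame rates can feel very different to play if one of them routinely takes much longer than 16.7 ms to deliver a frame.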

What's more, we don't rely on canned in-game benchmarks—routines that may not be representative of performance in actual gameplay—to gather our test data. Instead of clicking a button and getting a potentially misleading result from those pre-baked benches, we go through the laborious work of seeking out test scenarios that are typical of what one might actually encounter in a game. Thanks to our use of manual data-collection tools, we can go pretty much anywhere and test pretty much anything we want in a given title.

Most of the frame-time data you'll see on the following pages were captured with OCAT, a software utility that uses data from the Event Tracing for Windows (ETW) API to tell us when critical events happen in the graphics pipeline. We perform each test run at least three times and take the median of those runs where applicable to arrive at a final result. Where OCAT didn't suit our needs, we relied on the PresentMon utility.
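As a rough sketch of what that post-processing looks like, the snippet below pulls frame times out of the CSV logs those tools write and reduces a metric to a median across runs. It assumes the per-frame interval lives in a MsBetweenPresents column, as it does in recent PresentMon-derived logs; treat it as an illustration rather than the exact scripts we use:

import csv
import statistics

def load_frame_times(csv_path):
    """Return the list of per-frame intervals (in ms) from one capture."""
    with open(csv_path, newline="") as f:
        return [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]

def median_across_runs(run_paths, metric):
    """Apply a metric to each run and take the median result across runs."""
    return statistics.median(metric(load_frame_times(p)) for p in run_paths)

# Example: median 99th-percentile frame time across three captures.
# runs = ["run1.csv", "run2.csv", "run3.csv"]
# result = median_across_runs(runs, lambda ts: statistics.quantiles(ts, n=100)[98])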

As ever, we did our best to deliver clean benchmark numbers. Our test system was configured like so:

Processor: Intel Core i9-9900K
Motherboard: Gigabyte Z390 Aorus Master
Chipset: Intel Z390
Memory size: 16 GB (2x 8 GB)
Memory type: G.Skill Flare X DDR4-3200
Memory timings: 14-14-14-34 2T
Storage: Samsung 960 Pro 512 GB NVMe SSD (OS)
         Corsair Force LE 960 GB SATA SSD (games)
Power supply: Seasonic Prime Platinum 1000 W
OS: Windows 10 Pro with April 2018 Update

Thanks to Intel, Corsair, G.Skill, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and EVGA supplied the graphics cards for testing, as well. Have a gander at our fine Gigabyte Z390 Aorus Master motherboard before it got buried beneath a pile of graphics cards and a CPU cooler.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests. We tested each graphics card at a resolution of 1920x1080 and 60 Hz, unless otherwise noted.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.