
Test notes
Ok, look, I just couldn't do it, all right? I couldn't bring myself to spend hours testing the 270X and 280X against their not-quite-identical counterparts in the Radeon HD 7000 series in order to show you the small-percentage performance differences involved. Testing like we do takes a lot of work, and we already know the stakes here are pretty darned low.

Rather than look at incredibly minor differences under the microscope, I chose to test the new Radeons against the direct competitors from Nvidia. I've also tested against a couple of much older Radeons, the HD 5870 and 6970, in order to show would-be upgraders what they're missing. I think that will make for a more interesting comparison.

| | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak shader arithmetic rate (tflops) | Peak rasterization rate (Gtris/s) | Memory bandwidth (GB/s) |
| --- | --- | --- | --- | --- | --- |
| Radeon HD 5870 | 27 | 68/34 | 2.7 | 0.9 | 154 |
| Radeon HD 6970 | 28 | 85/43 | 2.7 | 1.8 | 176 |
| Radeon R9 270X | 34 | 84/42 | 2.7 | 2.1 | 179 |
| Radeon R9 280X | 32 | 128/64 | 4.1 | 2.0 | 288 |

There's a case to be made that the Pitcairn chip in the R9 270X is pretty much just a newer, smaller version of two legendary Radeons of yore, the HD 5870 and HD 6970. You can see how closely they match up in nearly every key category except for rasterization rate. Thing is, the 270X's GCN architecture ought to be more efficient, allowing it to achieve higher performance despite similar theoretical peak specs. I'm curious to see how this contest plays out.
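
If you're wondering where those peak numbers come from, here's a quick back-of-the-envelope sketch for the 270X. The unit counts in it are my own assumptions about the Pitcairn chip rather than figures pulled from the table above, so treat the output as approximate.

    # Back-of-the-envelope peak rates for the R9 270X. The unit counts here
    # (1280 stream processors, 80 texture units, 32 ROPs, two raster engines,
    # and a 256-bit GDDR5 interface) are assumptions about Pitcairn, not
    # figures from the table above.
    boost_clock_ghz = 1.050   # GHz
    mem_clock_ghz = 1.400     # GHz; GDDR5 transfers four times per clock

    pixel_fill = 32 * boost_clock_ghz                # Gpixels/s
    texel_rate = 80 * boost_clock_ghz                # Gtexels/s (int8)
    shader_rate = 1280 * 2 * boost_clock_ghz / 1000  # tflops (one FMA = 2 flops)
    raster_rate = 2 * boost_clock_ghz                # Gtris/s
    mem_bw = 256 / 8 * mem_clock_ghz * 4             # GB/s

    print(pixel_fill, texel_rate, shader_rate, raster_rate, mem_bw)
    # ~33.6, 84, 2.7, 2.1, 179 -- right in line with the table above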

The performance results you'll see on the following pages come from capturing and analyzing the rendering times for every single frame of animation during each test run. For an intro to our frame-time-based testing methods and an explanation of why they're helpful, you can start here. Please note that, for this review, we're only reporting results from the FCAT tools developed by Nvidia. We usually also report results from Fraps, since both tools are needed to capture a full picture of animation smoothness. However, testing with both tools can be time-consuming, and our window for work on this review was fairly small. We think sharing just the data from FCAT should suffice for this review, which is generally about incremental differences between video cards based on familiar chips.
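
For the curious, the heart of that analysis is fairly simple once you have a list of per-frame rendering times. Here's a minimal sketch of the sorts of numbers we derive from that data; the input file name is a placeholder, and the 50-ms cutoff is just one example threshold, not the literal output format of the FCAT tools.

    # Minimal sketch of the frame-time math, assuming a plain list of
    # per-frame rendering times in milliseconds. "frametimes.txt" is a
    # placeholder name, not the literal output of the FCAT tools.
    import statistics

    with open("frametimes.txt") as f:
        frame_ms = [float(line) for line in f if line.strip()]

    avg_fps = 1000 / statistics.mean(frame_ms)            # traditional FPS average
    pct99_ms = statistics.quantiles(frame_ms, n=100)[98]  # 99th-percentile frame time
    beyond_50 = sum(t - 50 for t in frame_ms if t > 50)   # ms spent past a 50-ms threshold

    print(f"Average FPS: {avg_fps:.1f}")
    print(f"99th-percentile frame time: {pct99_ms:.1f} ms")
    print(f"Time spent beyond 50 ms: {beyond_50:.1f} ms")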

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

Processor: Core i7-3820
Motherboard: Gigabyte X79-UD3
Chipset: Intel X79 Express
Memory size: 16GB (4 DIMMs)
Memory type: Corsair Vengeance CMZ16GX3M4X1600C9 DDR3 SDRAM at 1600MHz
Memory timings: 9-9-9-24 1T
Chipset drivers: INF update 9.2.3.1023, Rapid Storage Technology Enterprise 3.5.1.1009
Audio: Integrated X79/ALC898 with Realtek 6.0.1.6662 drivers
Hard drive: OCZ Deneva 2 240GB SATA
Power supply: Corsair AX850
OS: Windows 7 Service Pack 1

| | Driver revision | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB) |
| --- | --- | --- | --- | --- | --- |
| GeForce GTX 660 | GeForce 331.40 beta | 980 | 1033 | 1502 | 2048 |
| GeForce GTX 760 | GeForce 331.40 beta | 980 | 1033 | 1502 | 2048 |
| GeForce GTX 770 | GeForce 331.40 beta | 1046 | 1085 | 1753 | 2048 |
| Radeon HD 5870 | Catalyst 13.11 beta | 850 | - | 1200 | 2048 |
| Radeon HD 6970 | Catalyst 13.11 beta | 890 | - | 1375 | 2048 |
| Radeon R9 270X | Catalyst 13.11 beta | ? | 1050 | 1400 | 2048 |
| Radeon R9 280X | Catalyst 13.11 beta | ? | 1070 | 1600 | 3072 |

Thanks to Intel, Corsair, Gigabyte, and OCZ for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Also, our FCAT video capture and analysis rig has some pretty demanding storage requirements. For it, Corsair has provided four 256GB Neutron SSDs, which we've assembled into a RAID 0 array for our primary capture storage device. When that array fills up, we copy the captured videos to our RAID 1 array, made up of a pair of 4TB Black hard drives provided by WD.
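
To give you a sense of why, a bit of napkin math helps. The numbers below assume uncompressed 24-bit capture at 2560x1440 and 60Hz; those figures are assumptions for illustration rather than the capture card's exact output, but they're in the right ballpark.

    # Napkin math for the capture stream. Resolution, refresh rate, and the
    # uncompressed 24-bit format are assumptions for illustration only.
    width, height = 2560, 1440
    bytes_per_pixel = 3   # 24-bit RGB, uncompressed
    fps = 60

    bytes_per_sec = width * height * bytes_per_pixel * fps
    print(f"{bytes_per_sec / 1e6:.0f} MB/s sustained")  # ~664 MB/s
    # More than a single SATA SSD can sustain for long, hence the RAID 0 array.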

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

In addition to the games, we used the following test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.