
AMD’s Radeon R9 290 graphics card reviewed

Scott Wasson, Former Editor-in-Chief

Sometimes, in this job, the task is rather complex. Delving deep into the guts of a new GPU architecture, summarizing a chip comprised of billions of transistors, understanding the subtleties of frame dispatch and delivery—these things can be hard to do well. At other times, though, things are actually rather straightforward. Happily, the Radeon R9 290 isn’t a difficult product to understand if you’re familiar with its big brother, the Radeon R9 290X. Heck, what you need to know is this: it’s nearly the same product but a way better deal. Allow me to explain.

The Radeon R9 290
You see, the Radeon R9 290 is almost the same thing as its elder sibling. The 290 shares the same basic card design and cooler, and it’s based on the same brand-new “Hawaii” graphics chip as the R9 290X. AMD knows not everyone is willing to fork over 550 bucks to have one of the fastest graphics cards in the known universe. To help ease the pain a bit, they’ve strategically trimmed the R9 290’s graphics performance and reduced the price accordingly.

| GPU | Boost clock (MHz) | ROP pixels/clock | Texels filtered/clock (int/fp16) | Shader processors | Rasterized triangles/clock | Memory transfer rate (Gbps) | Memory interface width (bits) | Starting price |
|---|---|---|---|---|---|---|---|---|
| Radeon R9 290 | 947 | 64 | 160/80 | 2560 | 4 | 5 | 512 | $399 |
| Radeon R9 290X | 1000 | 64 | 176/88 | 2816 | 4 | 5 | 512 | $549 |

Well, I say “accordingly,” but between you and me, I think they may have been a bit too generous. The table above tells the story. Versus the R9 290X, the 290 has had only two minor adjustments: the peak clock speed is down from 1000MHz to 947MHz, and the number of active compute units on the chip has been reduced from 44 to 40. That means the 290 has a truly enormous amount of shader arithmetic power, but not quite the borderline terrifying capacity of the R9 290X. Both should be more than sufficient.

Now look at the other specs. The 290 retains the Hawaii GPU’s full complement of 64 pixels per clock of ROP throughput, so it has loads of pixel filling and antialiasing power, and it can still rasterize four primitives per clock cycle for high-polygon tessellation goodness. Even better, the R9 290 has the exact same memory config as the 290X, with a 512-bit-wide path to four gigabytes of GDDR5 running at 5 GT/s. Memory bandwidth is often the limiting factor in graphics performance, so this choice is especially notable.

But yeah, AMD somehow dropped the price by $150 compared to the 290X. That’s a mighty big price break for not much change in specs. The 290 stacks up very well against the fastest graphics cards available today.

| GPU | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak shader arithmetic rate (tflops) | Peak rasterization rate (Gtris/s) | Memory bandwidth (GB/s) |
|---|---|---|---|---|---|
| Radeon HD 5870 | 27 | 68/34 | 2.7 | 0.9 | 154 |
| Radeon HD 6970 | 28 | 85/43 | 2.7 | 1.8 | 176 |
| Radeon HD 7970 | 30 | 118/59 | 3.8 | 1.9 | 264 |
| Radeon R9 280X | 32 | 128/64 | 4.1 | 2.0 | 288 |
| Radeon R9 290 | 61 | 152/76 | 4.8 | 3.8 | 320 |
| Radeon R9 290X | 64 | 176/88 | 5.6 | 4.0 | 320 |
| GeForce GTX 770 | 35 | 139/139 | 3.3 | 4.3 | 224 |
| GeForce GTX 780 | 43 | 173/173 | 4.2 | 3.6 or 4.5 | 288 |
| GeForce GTX Titan | 42 | 196/196 | 4.7 | 4.4 | 288 |

The R9 290 has higher theoretical peaks of ROP throughput, shader flops, and memory bandwidth than a thousand-dollar GeForce Titan. And it’s just not that far from the R9 290X in any of the key graphics rates.
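If you want to check those peaks yourself, they fall straight out of the spec table. Here's a quick sketch (the helper function and its name are mine, not AMD tooling):

```python
# Back-of-the-envelope peak-rate math for the R9 290 and 290X.
# All inputs come from the spec table earlier in the article.

def peak_rates(boost_ghz, rops, texels_int8, shaders, tris_per_clk,
               mem_gtps, bus_bits):
    return {
        # ROPs each emit one pixel per clock.
        "pixel_fill_gpix_s": rops * boost_ghz,
        # Texture units each filter one int8 texel per clock.
        "int8_filter_gtex_s": texels_int8 * boost_ghz,
        # Each shader ALU does one fused multiply-add (2 flops) per clock.
        "shader_tflops": shaders * 2 * boost_ghz / 1000,
        "raster_gtris_s": tris_per_clk * boost_ghz,
        # Bus width in bits / 8 gives bytes transferred per GT.
        "mem_bw_gb_s": mem_gtps * bus_bits / 8,
    }

r9_290 = peak_rates(0.947, 64, 160, 2560, 4, 5, 512)
r9_290x = peak_rates(1.000, 64, 176, 2816, 4, 5, 512)
```

Run the numbers and you'll land on the same figures as the table above: roughly 4.8 tflops and 320 GB/s for the 290, for instance.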

Of course, the numbers above are theoretical peaks. Especially in the case of Hawaii-based cards, the GPU won’t always be operating at those clock speeds. AMD’s PowerTune algorithm raises and lowers GPU clock speeds dynamically in response to various workloads, and it does so more aggressively than any other GPU we’ve seen before.
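The general shape of that behavior can be pictured as a feedback loop. Here's a toy model, emphatically not AMD's actual algorithm: the step size and floor clock are invented for illustration, while the 947MHz boost clock and the power and temperature limits reflect the figures discussed in this review.

```python
# Toy sketch of a PowerTune-style DVFS governor. Each tick, nudge the
# clock down if either the power or temperature limit is exceeded, and
# back up toward the boost clock when there is headroom.

BOOST_MHZ = 947      # R9 290 peak clock
POWER_LIMIT_W = 290  # approximate board power limit
TEMP_LIMIT_C = 94    # PowerTune temperature target
STEP_MHZ = 13        # hypothetical adjustment granularity
FLOOR_MHZ = 300      # hypothetical minimum clock

def next_clock(clock_mhz, power_w, temp_c):
    if power_w > POWER_LIMIT_W or temp_c >= TEMP_LIMIT_C:
        return max(clock_mhz - STEP_MHZ, FLOOR_MHZ)   # throttle down
    return min(clock_mhz + STEP_MHZ, BOOST_MHZ)       # recover headroom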

Now realize that both the R9 290 and 290X apparently have the same PowerTune limits for power draw (~290W, from what I gather, although AMD has been coy on this front) and temperature (94°C). You can imagine what that means for actual operating clock speeds. In fact, here’s a look at the operating clocks during our short (4-5 minute) warm-up period in Skyrim for power and noise testing.

Once both cards have heated up, near the end of the span in question, the 290X’s clocks drop down to nearly match the R9 290’s. During those moments when the clocks almost match, the only real performance difference between the two is a small amount of texture filtering and shader computing power. Now, this is just one scenario. You will definitely see both of these cards throttle more with different workloads, and changes in ambient conditions will cause GPU speeds to vary, too. Also, as you can see, raising the fan speed limit on the 290X by putting it into “uber” mode keeps its GPU clocks closer to 1GHz. Just know that we’re talking about some pretty small differences between the 290 and the 290X in its stock fan mode. We’ll show you more of the actual performance shortly.

First, let me tell you a little story about the 290’s early life and upbringing, which will help you understand how it became the card it is today.

Back in its formative days—that is, when it arrived in Damage Labs roughly two weeks ago—the R9 290 wasn’t quite the same. Although we didn’t know the price yet, AMD supplied us with info showing the R9 290 positioned against a specific competitor: the GeForce GTX 770, a $399 card from the green team. The 290 was well prepared to take on this foe, more than ready to embarrass the competition with its performance.

Then, just as the 290’s big day approached, the wily green team decided to slash prices rather dramatically in response to the new Radeons. Suddenly, the GTX 770 was out of the 290’s price range, down at $329, and the closest competition was the GeForce GTX 780. The GTX 780 was now priced at $499, but it was faster than the 290 and came bundled with three major games and a $100 discount on Nvidia’s Shield handheld Android game console. One could conceivably make a case for the GTX 780 over the R9 290—and the 290X, for that matter.

AMD’s product team sprang into action, delaying the 290’s release by a week and supplying us with a new driver intended to help the card match up better against the GeForce GTX 780. The one change contained in that driver was an increase in the card’s max fan speed. Originally, the 290 shared the same “40% of max speed” limit as the R9 290X in its default, or “quiet,” mode—and it was a little more subdued than the 290X, according to our decibel meter. The new driver raised the 290’s fan speed limit to 47%. That change alone endowed the 290 with a few percentage points of additional performance, with the obvious tradeoff that it was a little louder while gaming.

So that’s how the R9 290 as you’ll know it came to be. This card is a little more aggressive and noisier than originally expected, but its difficult upbringing hardened it against the knocks it’ll encounter in this cruel world. Totally like Eminem. Or some other rapper.

Any of them, I guess.

Anyhow, AMD tells us the Radeon R9 290 should be available at online stores starting today, thankfully in higher numbers than the so-far-hard-to-find R9 290X. We’ll have to wait and see how that supply picture meets the demand, of course.

Test notes
To generate the performance results you’re about to see, we captured and analyzed the rendering times of every single frame of animation during each test run. For an intro to our frame-time-based testing methods and an explanation of why they’re helpful, you can start here. Please note that, for this review, we’re only reporting results from the FCAT tools developed by Nvidia. We usually also report results from Fraps, since both tools are needed to capture a full picture of animation smoothness. However, testing with both tools can be time-consuming, and our window for work on this review was fairly small. We think sharing just the data from FCAT should suffice for now.
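For reference, the core metrics in frame-time-based testing are easy to state in code. A minimal sketch follows; the function names are mine, and the sample frame times are made up:

```python
# Frame-time metrics of the sort used in this review, computed from a
# list of per-frame render times in milliseconds.

def fps_average(frame_times_ms):
    # Total frames divided by total elapsed time, expressed in seconds.
    return 1000 * len(frame_times_ms) / sum(frame_times_ms)

def time_beyond(frame_times_ms, threshold_ms):
    # Sum of the portion of each frame time exceeding the threshold;
    # captures how badly slow frames hurt, not just how many there are.
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Hypothetical run: four quick frames and one nasty 52 ms spike.
frames = [16.0, 17.2, 15.8, 52.0, 16.4]
avg_fps = fps_average(frames)
spike_ms = time_beyond(frames, 50.0)
```

Note how the single 52 ms spike barely dents the FPS average but shows up plainly in the time-beyond-threshold figure—that asymmetry is exactly why we bother with frame-time analysis.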

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

| Component | Details |
|---|---|
| Processor | Core i7-3820 |
| Motherboard | Gigabyte X79-UD3 |
| Chipset | Intel X79 Express |
| Memory size | 16GB (4 DIMMs) |
| Memory type | Corsair Vengeance CMZ16GX3M4X1600C9 DDR3 SDRAM at 1600MHz |
| Memory timings | 9-9-9-24 1T |
| Chipset drivers | INF update 9.2.3.1023, Rapid Storage Technology Enterprise 3.5.1.1009 |
| Audio | Integrated X79/ALC898 with Realtek 6.0.1.6662 drivers |
| Hard drive | OCZ Deneva 2 240GB SATA |
| Power supply | Corsair AX850 |
| OS | Windows 7 Service Pack 1 |
| GPU | Driver revision | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB) |
|---|---|---|---|---|---|
| GeForce GTX 660 | GeForce 331.40 beta | 980 | 1033 | 1502 | 2048 |
| GeForce GTX 760 | GeForce 331.40 beta | 980 | 1033 | 1502 | 2048 |
| GeForce GTX 770 | GeForce 331.40 beta | 1046 | 1085 | 1753 | 2048 |
| GeForce GTX 780 | GeForce 331.40 beta | 863 | 902 | 1502 | 3072 |
| GeForce GTX Titan | GeForce 331.40 beta | 837 | 876 | 1502 | 6144 |
| Radeon HD 5870 | Catalyst 13.11 beta | 850 | — | 1200 | 2048 |
| Radeon HD 6970 | Catalyst 13.11 beta | 890 | — | 1375 | 2048 |
| Radeon R9 270X | Catalyst 13.11 beta | ? | 1050 | 1400 | 2048 |
| Radeon R9 280X | Catalyst 13.11 beta | ? | 1000 | 1500 | 3072 |
| Radeon R9 290 | Catalyst 13.11 beta 5 | — | 947 | 1250 | 4096 |
| Radeon R9 290X | Catalyst 13.11 beta 5 | — | 1000 | 1250 | 4096 |

Thanks to Intel, Corsair, Gigabyte, and OCZ for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Also, our FCAT video capture and analysis rig has some pretty demanding storage requirements. For it, Corsair has provided four 256GB Neutron SSDs, which we’ve assembled into a RAID 0 array for our primary capture storage device. When that array fills up, we copy the captured videos to our RAID 1 array, comprised of a pair of 4TB Black hard drives provided by WD.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

In addition to the games, we used the following test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Crysis 3


Click through the buttons above to see frame-by-frame results from a single test run for each of the graphics cards. You can see how there are occasional spikes on each of the cards. They tend to happen at the very beginning of each test run and a couple of times later, when I’m exploding dudes with dynamite arrows.



We’re at a nice place with our current selection of games, test scenarios, and the latest video card drivers. The FPS averages and our various latency-focused metrics tend to agree about which solution is best. In the case of Crysis 3, there are still some spikes in rendering times for each card, but those appear to be caused by some sort of CPU or system performance limitation. The cards from both brands are all affected similarly by those slowdowns, as our “time beyond 50 ms” metric demonstrates.

At the end of it all, the R9 290 outperforms the GeForce GTX 780 ever so slightly in this test sequence, and it’s just an eyelash behind the R9 290X.

Far Cry 3: Blood Dragon




We’re not moving around the level or doing anything too crazy in this test sequence. We’re mostly just sniping bad guys from a ways off. As a result, all of the cards produce relatively consistent frame times throughout this test. The R9 290 performs very well, rendering every single frame in 33 milliseconds or less. The 290X is measurably faster, but the difference would be very tough to perceive.

GRID 2


This looks like the same Codemasters engine we’ve seen in a string of DiRT games, back for one more round. We decided not to enable the special “forward+” lighting path developed by AMD, since the performance hit is pretty serious, inordinately so on GeForces. Other than that, we have nearly everything cranked to the highest quality level.




Just like the 290X, the R9 290 renders each and every frame in less than 16.7 milliseconds. That’s a perfect 60Hz rate of production. Notice that the FPS average is 50% higher than that. You can’t really get a feel for smoothness with FPS averages alone.

Then again, this game just isn’t much of a test for video cards this fast. Everything from the GeForce GTX 770 on up cranks out frames at a near-perfect 60 FPS rate. Interestingly, though, the latency curves show that none of the cards are producing frames quickly enough to match a 120Hz display. Even the fastest cards are above the required 8.3 milliseconds per frame.
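The refresh-rate math behind those thresholds is simple division; a one-liner makes it concrete (helper name is mine):

```python
# Frame-time budget required to saturate a display at a given refresh rate.
def frame_budget_ms(refresh_hz):
    return 1000 / refresh_hz

budget_60hz = frame_budget_ms(60)    # ~16.7 ms, the 60Hz target
budget_120hz = frame_budget_ms(120)  # ~8.3 ms, the 120Hz target
```

So a card sustaining 60 FPS must deliver every frame in under 16.7 ms, and a 120Hz display halves that budget to 8.3 ms.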

Tomb Raider





Although the 290’s average of 43 FPS might seem iffy, in truth its performance is stellar, with virtually no latency spikes and every frame produced in under 30 milliseconds. Those nice, smooth latency curves tell the tale. The 290 again comes out slightly ahead of the GeForce GTX 780, although the two are practically equivalent here.

Guild Wars 2




Don’t let those occasional frame time spikes on the faster cards bug you too much. This game has some kind of issue that causes the fastest solutions to run into periodic spikes, but the spikes themselves are fairly small in magnitude. We’d be better off without them, of course, but they’re not easy to notice while playing.

Here’s one game where the GTX 780 outperforms the R9 290. The 290 continues to acquit itself well, running just a smidgen behind the R9 290X.

Power consumption

Please note that our load test isn’t an absolute peak scenario. Instead, we have the cards running a real game, Skyrim, in order to show us power draw with a more typical workload.

The 290’s power draw at idle is a little higher than the 290X’s, possibly due to differences in voltage or chip binning. This small delta isn’t anything to worry about, though.

Under load, well, like I said, the power limits on the 290 and 290X appear to be the same. The 290 is often faster than a GeForce Titan, but it gets there by using more power.

Noise levels and GPU temperatures

AMD’s eleventh-hour decision to raise the 290’s fan speed bought a few percentage points worth of added performance, but it means the 290 is relatively loud for a high-end graphics card.

For what it’s worth, had AMD stuck with the original plan, the 290 would have been quieter than the R9 290X. The 290 originally registered 46.5 dBA on the meter under load.

Conclusions
Ok, you know the drill here. We’ll sum up our performance results and mash ’em up with the latest prices in a couple of our handy scatter plots. As ever, the best values will gravitate toward the top left corner of the plot, while the worst will be near the bottom right. The two legacy Radeons are shown at their introductory prices to make you dance with glee about progress—or, you know, not be impressed, if somehow that’s your reaction.


Remember how I said up front that my task was simple this time around? Here’s why. The R9 290 is just ever so slightly slower than the R9 290X and essentially matches the GeForce GTX 780. Yet it costs $150 less than the 290X and a hundred bucks less than the GTX 780. This card’s value proposition is outstanding. AMD clearly wants to win in this product category, and they’ve practically sacrificed the viability of the R9 290X in order to do so. The R9 290 is virtually the same thing as the 290X at a handsome discount—and it’s a way better value than the GeForce GTX 780, too.

Much has been made of the R9 290X’s relatively high power draw, operating temperatures, and noise levels. Obviously, the R9 290 shares these same characteristics, with a somewhat louder default fan profile. In my view, the only one of these properties that’s really worth fussing over is the noise, since it’s the thing you’ll notice in day-to-day use.

We’re apparently going to have to face this price/performance-versus-acoustics tradeoff for a while, so I spent some quality time with the R9 290 trying to get a handle on what I think of the noise, beyond the readings on the decibel meter. I’ve gotta say, there are some mitigating factors. For one, I like AMD’s choice to stick with a blower that exhausts hot air out of the case rather than going for a triple-fan cooler that doesn’t. I’ve seen those fan-based aftermarket coolers perform poorly in multi-GPU configs, and they often occupy quite a bit more space—maybe even a third expansion slot—in order to work their magic. I’m also not convinced AMD’s cooler is a poor performer and therefore noisy, as some folks seem to think. Remember, it has more heat to remove than any of the coolers on the other cards we tested. Finally, I don’t think this blower’s ~49 dBA reading is the worst of its type. The quality of the sound isn’t grating. Subjectively speaking, there are much more annoying coolers in this territory on the decibel meter. The impressively smooth, gradual ramp of fan speeds up and down in the new PowerTune algorithm helps make the noise less noticeable, too. This ain’t an FX-5800 Ultra, folks.

Before you run off and do some damage to your credit card, I would advise waiting just a few more days. I’ve been working on upgrading our GPU test rigs to Windows 8.1, attaching a 4K monitor, and installing new releases like Battlefield 4 and Arkham Origins. Fancy new game testing at 4K will soon commence. I really need to spend some quality time closely inspecting the behavior of AMD’s new XDMA-based CrossFire mechanism, too. As you might be aware, Nvidia plans to release the GeForce GTX 780 Ti in two days. You can imagine some activity in Damage Labs around that development. If you can sit tight, we’ll know a lot more very soon.

Then again, if you can’t wait and want to pull the trigger on an R9 290 today, I can’t say I’d blame you. It’s a great value, and nothing that happens later this week is likely to change that fact.

Follow me on Twitter like a boss.

Scott Wasson, Former Editor-in-Chief

Scott Wasson is a veteran in the tech industry and the former Editor-in-Chief at Tech Report. With a laser focus on tech product reviews, Wasson's expertise shines in evaluating CPUs and graphics cards, and much more.