The Fury X is here. At long last, after lots of hype, we can show you how AMD’s new high-end GPU performs aboard the firm’s snazzy new liquid-cooled graphics card. We’ve tested in a range of games using our famous frame-time-based metrics, and we have a full set of results to share with you. Let’s get to it.
A brief stop in Fiji
Over the past several weeks, almost everything most folks would want to know about the new Radeon GPU has become public knowledge—except for how it performs. If you’ve somehow missed out on this info, let’s take a moment to summarize. At the heart of the Radeon R9 Fury X are two new core technologies: the Fiji graphics processor and a new type of memory known as High Bandwidth Memory (HBM).
The Fiji GPU is AMD’s first new top-end GPU in nearly two years, and it’s the largest chip in a family of products based on the GCN architecture that stretches back to 2011. Even the Xbox One and PS4 are based on GCN, although Fiji is an evolved version of that technology built on a whole heck of a lot larger scale. Here are its vitals compared to the biggest PC GPUs, including the Hawaii chip from the Radeon R9 290X and the GM200 from the GeForce GTX 980 Ti.
Yep, Fiji’s shader array has a massive 4096 ALU lanes or “shader processors,” more than any other GPU to date. To give you some context for these numbers, once you factor in clock speeds, the Radeon R9 Fury X has seven times the shader processing power of the Xbox One and over seven times the memory bandwidth. Even a block diagram of Fiji looks daunting.
In many respects, Fiji is just what you see above: a larger implementation of the same GCN architecture that we’ve known for several years. AMD has made some important improvements under the covers, though. Notably, Fiji inherits a delta-based color compression facility from last year’s Tonga chip. This feature should allow the GPU to use its memory bandwidth and capacity more efficiently than older GPUs like Hawaii. Many of the other changes in Fiji are meant to reduce power consumption. A feature called voltage-adaptive operation, first used in AMD’s Kaveri and Carrizo APUs, should allow the chip to run at lower voltages, reducing power draw. New methods for selecting voltage and clock speed combinations and switching between those different modes should make Fiji more efficient than older GCN graphics chips, too. (For more info on Fiji’s graphics architecture, be sure to read my separate article on the subject.)
This combination of increased scale and reduced power consumption allows Fiji to cram about 45% more processing power into roughly the same power envelope as Hawaii before it. Yet even that fact isn’t Fiji’s most notable attribute. Instead, Fiji’s signature innovation is HBM, the first new type of high-end graphics memory introduced in seven years. HBM takes advantage of a technique in chip-building technology known as stacking, in which multiple silicon dies are piled on top of one another in order to improve the bit density. We’ve seen stacking deployed in the flash memory used in SSDs, but HBM is perhaps even more ambitious. And Fiji is the first commercial implementation of this tech.
The Fiji GPU sits atop a piece of silicon, known as an interposer, along with four stacks of HBM memory. The individual memory chips run at a relatively sedate speed of 500MHz in order to save power, but each stack has an extra-wide 1024-bit connection to the outside world in order to provide lots of bandwidth. This “wide and slow” setup works because the GPU and memory get to talk to one another over the silicon interposer, which is the next best thing to having memory integrated on-chip.
With four HBM stacks, Fiji effectively has a 4096-bit-wide path to memory. That memory transfers data at a rate of 1Gbps per pin, yielding a heart-stopping total of 512 GB/s of bandwidth. The Fury X’s closest competitor, the GeForce GTX 980 Ti, tops out at 336 GB/s, so the new Radeon represents a substantial advance.
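As a sanity check, the bandwidth math works out exactly as claimed. Here's a quick sketch (the 1 Gbps figure is the per-pin rate implied by the 500MHz double-data-rate clock):

```python
# Back-of-the-envelope check of Fiji's HBM bandwidth figures.
stacks = 4                      # HBM stacks on the interposer
width_per_stack_bits = 1024     # each stack's interface width, in bits
data_rate_gbps = 1.0            # per-pin transfer rate (500MHz, double data rate)

bus_width_bits = stacks * width_per_stack_bits          # 4096-bit aggregate bus
bandwidth_gbs = bus_width_bits * data_rate_gbps / 8     # bits -> bytes
print(bus_width_bits, bandwidth_gbs)                    # 4096 512.0
```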
HBM also saves power, both on the DRAM side and in the GPU’s memory control logic, and it enables an entire GPU-plus-memory solution to fit into a much smaller physical footprint. Fiji with HBM requires about one-third the space of Hawaii and its GDDR5, as illustrated above.
This very first implementation of HBM does come with one potential drawback: it’s limited to a total of 4GB of memory capacity. Today’s high-end cards, including the R9 Fury X, are marketed heavily for use with 4K displays. That 4GB capacity limit could perhaps lead to performance issues, especially at very high resolutions. AMD doesn’t seem to think it will be a problem, though, and, well, you’ll see our first round of performance numbers shortly.
The Radeon R9 Fury X card
Frankly, I think most discussions of the physical aspects of a graphics card are horribly boring compared to the GPU architecture stuff. I’ll make an exception for the Fury X, though, because this card truly is different from the usual fare in some pretty dramatic ways.
|Card||GPU clock||ALUs||Memory||Aux power||Board power||Price|
|R9 Fury X||1050 MHz||4096||4 GB HBM||2 x 8-pin||275W||$649.99|
The differences start with the card itself, which is a stubby 7.7″ long and has an umbilical cord running out of its belly toward an external water cooler. You can expect this distinctive layout from all Fury X cards, because AMD has imposed tight controls for this product. Board makers won’t be free to tweak clock speeds or to supply custom cooling for the Fury X.
Instead, custom cards will be the domain of the vanilla Radeon R9 Fury, due in mid-July at prices starting around $550. The R9 Fury’s GPU resources will be trimmed somewhat compared to the Fury X, and customized boards and cooling will be the norm for it. AMD tells us to expect some liquid-cooled versions of the Fury and others with conventional air coolers.
Few of those cards are likely to outshine the Fury X, though, because video card components don’t get much more premium than these. The cooling shroud’s frame is encased in nickel-plated chrome, and the black surfaces are aluminum plates with a soft-touch coating. The largest of these plates, covering the top of the card in the picture above, can be removed with the extraction of four small hex screws. AMD hopes end-users will experiment with creating custom tops via 3D printing.
I’m now wondering if that liquid cooler could also keep a beer chilled if I printed a cup-holder attachment. Hmm.
The Fury X’s array of outputs is relatively spartan, with three DisplayPort 1.2 outputs and only a single HDMI 1.4 port. HDMI 2.0 support is absent, which means the Fury X won’t be able to drive most cheap 4K TVs at 60Hz. You’re stuck with DisplayPort if you want to do proper 4K gaming. Also missing, though perhaps less notable, is a DVI port. That omission may sting a little for folks who own big DVI displays, but DisplayPort-to-DVI adapters are pretty widely available. AMD is sending a message with this choice of outputs: the Fury X is about gaming in 4K, with FreeSync at high refresh rates, and on multiple monitors. In fact, this card can drive as many as six displays with the help of a DisplayPort hub.
Here’s a look beneath the shroud. The Fury X’s liquid cooler is made by Cooler Master, as the logo atop the water block proclaims. This block sits above the GPU and the HBM stacks, pulling heat from all of the chips.
AMD’s decision to make liquid cooling the stock solution on the Fury X is intriguing. According to Graphics CTO Raja Koduri, the firm found that consumers want liquid cooling, as evidenced by the fact that they often wind up paying extra for aftermarket kits. This cooler does seem like a nice inclusion, something that enhances the Fury X’s value, provided that the end user has an open mount in his or her case for a 120-mm fan and radiator. Sadly, I don’t think the new Damagebox has room for another radiator, since I already have one installed for the CPU.
The cooler in the Fury X is tuned to keep the GPU at a frosty 52°C, well below the 80-90°C range we’re used to seeing from stock coolers. The card is still very quiet in active use despite the aggressive temperature tuning, probably because the cooler is rated to remove up to 500W of heat. Those chilly temps aren’t just for fun, though. At this lower operating temperature, the Fiji GPU’s transistors shouldn’t be as leaky. The chip should convert less power into heat, thus improving the card’s overall efficiency. The liquid cooler probably also helps alleviate power density issues, which may have been the cause of the R9 290X’s teething problems with AMD’s reference air coolers.
That beefy cooler should help with overclocking, of course, and the Fury X’s power delivery circuitry has plenty of built-in headroom, too. The card’s six-phase power can supply up to 400 amps, well above the 200-250 amps that the firm says is needed for regular operation. The hard limit in the BIOS for GPU power is 300W, which adds up to 375W of total board power draw. That’s 100W beyond the Fury X’s default limit of 275W.
To better facilitate overclocking, the Catalyst Control Center now exposes separate sliders for the GPU’s clock speed, power limit, temperature, and maximum fan speed. Users can direct AMD’s PowerTune algorithm to seek the mix of acoustics and performance they prefer.
Despite its many virtues, our Fury X review unit does have one rather obvious drawback. Whenever it’s powered on, whether busy or idle, the card emits a constant, high-pitched whine. It’s not the usual burble of pump noise, the whoosh of a fan, or the irregular chatter of coil whine—just an unceasing squeal like an old CRT display might emit. The noise isn’t loud enough to register on our sound level meter, but it is easy enough to hear. The sound comes from the card proper, not from the radiator or fan. An informal survey of other reviewers suggests our card may not be alone in emitting this noise. I asked AMD about this matter, and they issued this statement:
AMD received feedback that during open bench testing some cards emit a mild “whining” noise. This is normal for most high speed liquid cooling pumps; Usually the end user cannot hear the noise as the pumps are installed in the chassis, and the radiator fan is louder than the pump. Since the AMD Radeon R9 FuryX radiator fan is near silent, this pump noise is more noticeable.
The issue is limited to a very small batch of initial production samples and we have worked with the manufacturer to improve the acoustic profile of the pump. This problem has been resolved and a fix added to production parts and is not an issue.
That’s reassuring—I think. I’ve asked AMD to send us a production sample so we can verify that retail units don’t generate this noise.
Fury X cards have one more bit of bling that’s not apparent in the pictures above: die blinkenlights. Specifically, the Radeon logo atop the cooler glows deep red. (The picture above lies. It’s stoplight red, honest.) Also, a row of LEDs next to the power plugs serves as a GPU tachometer, indicating how busy the GPU happens to be.
These lights are red by default, but they can be adjusted via a pair of teeny-tiny DIP switches on the back of the card. The options are: red tach lights, blue tach lights, red and blue tach lights, and tach lights disabled. There’s also a green LED that indicates when the card has dropped into ZeroCore power mode, the power-saving mode activated when the display goes to sleep.
Speaking of going to sleep, that’s what I’m gonna do if we don’t move on to the performance results. Let’s do it.
Our testing methods
Most of the numbers you’ll see on the following pages were captured with Fraps, a software tool that can record the rendering time for each frame of animation. We sometimes use a tool called FCAT to capture exactly when each frame was delivered to the display, but that’s usually not necessary in order to get good data with single-GPU setups. We have, however, filtered our Fraps results using a three-frame moving average. This filter should account for the effect of the three-frame submission queue in Direct3D. If you see a frame time spike in our results, it’s likely a delay that would affect when the frame reaches the display.
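We don't publish the exact filter code, but a trailing three-frame moving average of the sort described above might look like this (a minimal sketch; the function and variable names are mine):

```python
def moving_average(frame_times_ms, window=3):
    """Trailing moving average over the last `window` frame times.

    Early frames use however many samples are available, so the output
    has the same length as the input.
    """
    smoothed = []
    for i in range(len(frame_times_ms)):
        start = max(0, i - window + 1)
        chunk = frame_times_ms[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# A lone spike gets spread across the depth of the submission queue:
print(moving_average([16.7, 16.7, 50.0, 16.7, 16.7]))
```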
We didn’t use Fraps with Civ: Beyond Earth or Battlefield 4. Instead, we captured frame times directly from the game engines using the games’ built-in tools. We didn’t use our low-pass filter on those results.
As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:
|Memory size||16GB (4 DIMMs)|
|Memory type||DDR4 SDRAM at 2133 MT/s|
|Chipset drivers||INF update, Rapid Storage Technology Enterprise 18.104.22.1688|
|Audio||With Realtek 22.214.171.12446 drivers|
|Storage||Kingston SSDNow 310 960GB SATA|
Radeon R9 295 X2
Radeon R9 Fury X
GeForce GTX 780 Ti
GeForce GTX 980
GeForce GTX 980 Ti
Thanks to Intel, Corsair, Kingston, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.
Also, our FCAT video capture and analysis rig has some pretty demanding storage requirements. For it, Corsair has provided four 256GB Neutron SSDs, which we’ve assembled into a RAID 0 array for our primary capture storage device. When that array fills up, we copy the captured videos to our RAID 1 array, comprised of a pair of 4TB Black hard drives provided by WD.
Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.
The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
Sizing ’em up
Do the math involving the clock speeds and per-clock potency of the latest high-end graphics cards, and you’ll end up with a comparative table that looks something like this:
|Card||Peak pixel fill rate (Gpixels/s)||Peak bilinear filtering int8/fp16 (Gtexels/s)||Peak rasterization rate (Gtris/s)||Peak shader arithmetic (tflops)||Memory bandwidth (GB/s)|
|Asus R9 290X||67||185/92||4.2||5.9||346|
|Radeon R9 Fury X||67||269/134||4.2||8.6||512|
|GeForce GTX 780 Ti||37||223/223||4.6||5.3||336|
|Gigabyte GTX 980||85||170/170||5.3||5.4||224|
|GeForce GTX 980 Ti||95||189/189||6.5||6.1||336|
|GeForce Titan X||103||206/206||6.5||6.6||336|
Those are the peak capabilities of each of these cards in theory. As I noted in my article on the Fiji GPU architecture, the Fury X is particularly strong in several departments, including memory bandwidth and shader rates, where it substantially outstrips both the R9 290X and the competing GeForce GTX 980 Ti. In other areas, the Fury X’s theoretical graphics rates haven’t budged compared to the 290X, including the pixel fill rate and rasterization. Those are also precisely the areas where the Fury X looks weakest compared to the competition. We are looking at a bit of asymmetrical warfare this time around, with AMD and Nvidia fielding vastly different mixes of GPU resources in similarly priced products.
Of course, those are just theoretical peak rates. Our fancy Beyond3D GPU architecture suite measures true delivered performance using a series of directed tests.
The Fiji GPU has the same 64 pixels per clock of ROP throughput as Hawaii before it, so these results shouldn’t come as a surprise. These numbers illustrate something noteworthy, though. Nvidia has grown the ROP counts substantially in its Maxwell-based GPUs, taking even the mid-range GM204 aboard the GTX 980 beyond what Hawaii and Fiji offer. Truth be told, both of the Radeons probably offer more than enough raw pixel fill rate. However, these results are a sort of proxy for other types of ROP power, like blending for multisampled anti-aliasing and Z/stencil work for shadowing, that can tax a GPU.
This bandwidth test measures GPU throughput using two different textures: an all-black surface that’s easily compressed and a random-colored texture that’s essentially incompressible. The Fury X’s results demonstrate several things of note.
The 16% delta between the black and random textures shows us that Fiji’s delta-based color compression does it some good, although evidently not as much good as the color compression does on the Maxwell-based GeForces.
Also, our understanding from past reviews was that the R9 290X was limited by ROP throughput in this test. Somehow, the Fury X speeds past the 290X despite having the same ROP count on paper. Hmm. Perhaps we were wrong about what limited the 290X. If so, then the 290X may have been bandwidth-limited, after all—and Hawaii apparently has no texture compression of note. The question then becomes whether the Fury X is also bandwidth limited in this test, or if its performance is limited by the render back-end. Whatever the case, the Fury X “only” achieves 387 GB/s of throughput here, well below the 512 GB/s theoretical max of its HBM-infused memory subsystem. Ominously, the Fury X only leads the GTX 980 Ti by the slimmest of margins with the compressible black texture.
Fiji has a ton of texture filtering capacity on tap, especially for simpler formats. The Fury X falls behind the GTX 980 Ti when filtering texture formats that are 16 bits per color channel, though. That fact will matter more or less depending on the texture formats used by the game being run.
The Fury X achieves something close to its maximum theoretical rate in our polygon throughput test, at least when the polygons are presented in a list format. However, it still trails even the Kepler-based GeForce GTX 780 Ti, let alone the newer GeForces. Adding tessellation to the mix doesn’t help matters. The Fury X still manages just over half the throughput of the GTX 980 Ti in TessMark.
Fiji’s massive shader array is not to be denied. The Fury X crunches through its full 8.6 teraflops of theoretical peak performance in our ALU throughput test.
At the end of the day, the results from these directed tests largely confirm the major contrasts between the Fury X and the GeForce GTX 980 Ti. These two solutions have sharply divergent mixes of resources on tap, not just on paper but in terms of measurable throughput.
Project Cars

Project Cars is beautiful. I could race around Road America in a Formula C car for hours and be thoroughly entertained, too. In fact, that’s pretty much what I did in order to test these graphics cards.
Click the buttons above to cycle through the plots. What you’ll see are frame times from one of the three test runs we conducted for each card. You’ll notice that PC graphics cards don’t always produce smoothly flowing progressions of succeeding frames of animation, as the term “frames per second” would seem to suggest. Instead, the frame time distribution is a hairy, fickle beast that may vary widely. That’s why we capture rendering times for every frame of animation—so we can better understand the experience offered by each solution.
The Fury X’s bright red plot indicates consistently lower frame times than the R9 290X’s purple plot. The dual-GPU R9 295 X2 often produces even lower frame times than the Fury X, but there’s a nasty spike near the middle of the test. That’s a slowdown that you can feel while gaming in the form of a stutter. The Fury X avoids that fate, and playing Project Cars on it generally feels smooth as a result.
Unfortunately for the red team, the Fury X doesn’t crank out frames as quickly as the GeForce GTX 980 Ti. The 980 Ti produces more frames over the course of the test run, so naturally, its FPS average is higher.
Higher averages aren’t always an indicator of smoother overall animation, though. Remember, we saw a big spike in the 295 X2’s plot. Even though its FPS average is higher than the Fury X’s, gameplay on the 295 X2 isn’t as consistently smooth. That’s why we prefer to supplement average FPS with another metric: 99th percentile frame time. This metric simply says “99% of all frames in this test run were produced in X milliseconds or less.” The lower that threshold, the better the general performance. In this frame-time-focused metric, the Fury X just matches the 295 X2, despite a lower FPS average.
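The 99th-percentile metric is easy to compute from a list of frame times. Here's a minimal sketch (the names are mine, and our exact percentile convention may differ slightly):

```python
import math

def percentile_frame_time(frame_times_ms, pct=99):
    """Return the frame time at or below which pct% of frames fall."""
    ordered = sorted(frame_times_ms)
    k = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[k]

# 99 smooth frames plus one 120-ms stutter: the FPS average barely
# moves, but the spike lives entirely in the worst 1% of frames.
times = [16.7] * 99 + [120.0]
print(percentile_frame_time(times))   # 16.7
```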
Almost all of the cards handle this challenge pretty well, considering that we’re asking them to render in 4K at fairly high image quality settings. All but one of them manage a 99th percentile frame time of below 33 milliseconds. That means, on a per-frame basis, they perform at or above 30 FPS 99% of the time.
We can understand in-game animation fluidity even better by looking at the entire “tail” of the frame time distribution for each card, which illustrates what happens with the most difficult frames.
These curves show generally consistent performance from nearly all of the cards, with the lone exception of the Radeon R9 295 X2. That card struggles with the toughest three percent of frames, and as a result, the line for this dual-Hawaii card curves up to meet the one for the single-Hawaii 290X. These are the dangers of multi-GPU solutions.
These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. And 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.
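The accumulation behind those graphs can be sketched like so (my reading of the metric: for each frame that overshoots the threshold, count only the excess time):

```python
def time_beyond(frame_times_ms, threshold_ms):
    """Total milliseconds by which individual frames exceed the threshold."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# One 70-ms stutter and one 55-ms frame against the 50-ms threshold
# contribute 20 ms and 5 ms of "badness," respectively:
print(time_beyond([16.7, 70.0, 33.3, 55.0], 50.0))   # 25.0
```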
The frame time plots have few big spikes in them, and the FPS averages here are well above 20. As a result, none of the cards spend any time beyond our 50-ms threshold. Even the 295 X2, which has one spike beyond 50 ms in the frame time plots, doesn’t repeat a slowdown of the same magnitude in the other three test runs. (These results are a median of three runs.)
The Fury X essentially spends no time beyond our 33-ms threshold, either. Like I said, it generally feels pretty good to play this game on it in 4K. Trouble is, the new Radeon falls behind three different GeForces, including the cheaper GTX 980, across a range of metrics. Perhaps the next game will be a different story.
The Witcher 3
Performance in this game has been the subject of some contention, so I tried to be judicious in selecting my test settings. I tested the older Radeons with the Catalyst 15.5 beta drivers here (and 15.15 on the Fury X), and all cards were tested with the latest 1.04 patch for the game. Following AMD’s recommendations for achieving good CrossFire performance, I set “EnableTemporalAA=false” in the game’s config file when testing the Radeon R9 295 X2. As you’ll see below, I also disabled Nvidia’s HairWorks entirely in order to avoid the associated performance pitfalls.
You can tell by the “fuzzy” frame-time plots that the Radeons struggle in this game. That’s particularly an issue when the frame time spikes get to be fairly high—into the 40-60-ms range. The Fury X trails the GTX 980 Ti in the FPS average results, but it falls even further behind in the 99th-percentile frame time metric. This outcome quantifies something you can feel during the test run: the animation hiccups and sputters much more than it should, especially in the early part of the test sequence.
The GeForce GTX 780 Ti struggles here, too. Since we tested, Nvidia has released new drivers that may improve the performance of Kepler-based cards like this one. My limited time with the Radeon Fury X has been very busy, however, so I wasn’t able to re-test the GTX 780 Ti with new drivers for this review.
In our “badness” metric, both the Fury X and the R9 290X spend about the same amount of time beyond the 50-ms threshold—not a ton, but enough that one can feel it. The fact that these two cards perform similarly here suggests the problem may be a software issue gated by CPU execution speed.
Despite those hiccups, the Fury X generally improves on the 290X’s performance, which is a reminder of the Fiji GPU’s tremendous potential.
Grand Theft Auto V

Forgive me for the massive number of screenshots below, but GTA V has a ton of image quality settings. I more or less cranked them all up in order to stress these high-end video cards. Truth be told, most or all of these cards can run GTA V quite fluidly at lower settings in 4K—and it still looks quite nice. You don’t need a $500+ graphics card to get solid performance from this game in 4K, not unless you push all the quality sliders to the right.
No matter how you slice it, the Fury X handles GTA V in 4K quite nicely. The 99th-percentile results track with the FPS results, which is what happens when the frame time plots are generally nice and flat. Again, though, the GTX 980 Ti proves to be measurably faster than the Fury X.
Far Cry 4
At last, we have a game where the Fury X beats the GTX 980 Ti in terms of average FPS. Frustratingly, though, a small collection of high frame times means the Fury X falls behind the GeForce in our 99th-percentile metric.
The similarities between the Fury X and the 290X in our “badness” metric might suggest some common limitation in that handful of most difficult frames.
Whatever the case, playing on the GeForce is smoother, although the Fury X’s higher FPS average suggests it has more potential.
Alien: Isolation

In every metric we have, the Fury X is situated just between the GTX 980 and the 980 Ti in this game. All of these cards are very much competent to play Alien Isolation fluidly in 4K.
Civilization: Beyond Earth
Since this game’s built-in benchmark simply spits out frame times, we were able to give it a full workup without having to resort to manual testing. That’s nice, since manual benchmarking of an RTS with zoom is kind of a nightmare.
Oh, and the Radeons were tested with the Mantle API instead of Direct3D. Only seemed fair, since the game supports it.
This is a tight one, but the GTX 980 Ti manages to come out on top overall by a whisker. For all intents, though, the Fury X and 980 Ti perform equivalently here.
Battlefield 4

Initially, I tested BF4 on the Radeons using the Mantle API, since it was available. Oddly enough, the Fury X’s performance was kind of lackluster with Mantle, so I tried switching over to Direct3D for that card. Doing so boosted performance from about 32 FPS to 40 FPS. The results below for the Fury X come from D3D.
The Fury X trades blows with the GeForce GTX 980 in BF4. The new Radeon’s performance is fairly solid, but it’s just not as quick as the GTX 980 Ti.
Crysis 3

Ack. The Fury X looks competitive with the GTX 980 Ti in the FPS sweeps, but it drops down the rankings in our 99th-percentile frame time measure. Why?
Take a look at the frame time plot, and that one spot in particular where frame times jump to over 120 milliseconds. This slowdown happens at the point in our test run where there’s an explosion with lots of particles on the screen. There are smaller spikes on the older Radeons, but nothing like we see from the Fury X. This problem is consistent across multiple test runs, and it’s not subtle. Here’s hoping AMD can fix this issue in its drivers.
Our “badness” metric at 50 ms picks up those slowdowns on the Fury X. This problem mars what would otherwise be a very competitive showing versus the 980 Ti.
Power consumption

Please note that our “under load” tests aren’t conducted in an absolute peak scenario. Instead, we have the cards running a real game, Crysis 3, in order to show us power draw with a more typical workload.
In the Fury X, AMD has managed to deliver a substantial upgrade in GPU performance over the R9 290X with lower power draw while gaming. That’s impressive, especially since the two GPUs are made on the same 28-nm process technology. The Fury X still requires about 50W more power than the GTX 980 Ti, but since its liquid cooler expels heat directly out of the PC case, I’m not especially hung up on that fact. GCN-based GPUs still aren’t as power-efficient as Nvidia’s Maxwell chips, but AMD has just made a big stride in the right direction.
Noise levels and GPU temperatures
These video card coolers are so good, they’re causing us testing problems. You see, the noise floor in Damage Labs is about 35-36 dBA. It varies depending on things I can’t quite pinpoint, but one notable contributor is the noise produced by the lone cooling fan always spinning on our test rig, the 120-mm fan on the CPU cooler. Anyhow, what you need to know is that any of the noise results that range below 36 dBA are running into the limits of what we can test accurately. Don’t make too much of differences below that level.
The Fury X’s liquid cooler lives up to its billing with a performance that’s unquestionably superior to anything else we tested. You will have to find room for the radiator in your case, though. In return, you will get incredibly effective cooling at whisper-quiet noise levels.
Conclusions

As usual, we’ll sum up our test results with a couple of value scatter plots. The best values tend toward the upper left corner of each plot, where performance is highest and prices are lowest. We’ve converted our 99th-percentile frame time results into FPS, so that higher is better, in order to make this layout work.
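That conversion is just the reciprocal relationship between frame time and frame rate:

```python
def frame_time_to_fps(frame_time_ms):
    """Convert a frame time in milliseconds to an equivalent FPS figure."""
    return 1000.0 / frame_time_ms

print(frame_time_to_fps(20.0))   # 50.0 -- a 20-ms 99th percentile maps to 50 FPS
print(frame_time_to_fps(33.3))   # roughly 30 FPS
```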
If you’ve been paying attention over the preceding pages, you pretty much know the story told by our FPS value scatter. The Radeon R9 Fury X is a big advance over the last-gen R9 290X, and it’s a close match overall for the GeForce GTX 980 Ti. However, the GeForce generally outperforms the Fury X across our suite of games, by under 10%, or four FPS, on average. Still, the Fury X represents massive progress for the red team, and it’s a shame its measurably superior shader array and prodigious memory bandwidth don’t have a bigger payoff in today’s games.
Speaking of which, if you dig deeper using our frame-time-focused performance metrics—or just flip over to the 99th-percentile scatter plot above—you’ll find that the Fury X struggles to live up to its considerable potential. Unfortunate slowdowns in games like The Witcher 3 and Far Cry 4 drag the Fury X’s overall score below that of the less expensive GeForce GTX 980. What’s important to note in this context is that these scores aren’t just numbers. They mean that you’ll generally experience smoother gameplay in 4K with a $499 GeForce GTX 980 than with a $649 Fury X. Our seat-of-the-pants impressions while play-testing confirm it. The good news is that we’ve seen AMD fix problems like these in the past with driver updates, and I don’t doubt that’s a possibility in this case. There’s much work to be done, though.
Assuming AMD can fix the problems we’ve identified with a driver update, and assuming it really has ironed out the issue with the whiny water pumps, there’s much to like about the Fury X. The GPU has the potential to enable truly smooth gaming in 4K. AMD has managed to keep power consumption in check. The card’s cooling and physical design are excellent; they’ve raised the standard for products of this class. Now that I’ve used the Fury X, I would have a hard time forking over nearly 700 bucks for a card with a standard air cooler. At this price, decent liquid cooling at least ought to be an option.
Also, although we have yet to perform tests intended to tease out any snags, we’ve seen no clear evidence that the Fury X’s 4GB memory capacity creates problems in typical use. We will have to push a little and see what happens, but our experience so far suggests this worry may be a non-issue.
The question now is whether AMD has done enough to win back the sort of customers who are willing to pony up $650 for a graphics card. My sense is that a lot of folks will find the Fury X’s basic proposition and design attractive, but right now, this product probably needs some TLC in the driver department before it becomes truly compelling. AMD doesn’t have to beat Nvidia outright in order to recover some market share, but it does need to make sure its customers can enjoy their games without unnecessary hiccups.