Several weeks ago, I received a slightly terrifying clandestine communique consisting only of a picture of myself in duplicate and the words, “Wouldn’t you agree that two is better than one?” I assume the question wasn’t truly focused on unflattering photographs or, say, tumors. In fact, I had an inkling that it probably was about GPUs, as I noted in a bemused news item.
A week or so after that, another package arrived at my door. Inside were two small cans of Pringles, the chips reduced to powder form in shipping, and a bottle of “Hawaiian volcanic water.” Also included were instructions for a clandestine meeting. Given what had happened to the chips, I feared someone was sending me a rather forceful signal. I figured I’d better comply with the sender’s demands.
So, some days later, I stood at a curbside in San Jose, California, awaiting the arrival of my contacts—or would-be captors or whatever. Promptly at the designated time, a sleek, black limo pulled up in front of me, and several “agents” in dark clothes and mirrored sunglasses spilled out of the door. I was handed a document to sign that frankly could have said anything, and I compliantly scribbled my signature on the dotted line. I was then whisked around town in the limo while getting a quick-but-thorough briefing on secrets meant for my eyes only—secrets of a graphical nature, I might add, if I weren’t bound to absolute secrecy.
Early the next week, back at home, a metal briefcase was dropped on my doorstep, as the agents had promised. It looked like so:
After entering the super-secret combination code of 0-0-0 on each latch, I was able to pop the lid open and reveal the contents.
Wot’s this? Maybe one of the worst-kept secrets anywhere, but then I’m fairly certain the game played out precisely as the agents in black wanted. Something about dark colors and mirrored sunglasses imparts unusual competence, it seems.
Pictured in the case above is a video card code-named Vesuvius, the most capable bit of graphics hardware in the history of the world. Not to put too fine a point on it. Alongside it, on the lower right, is the radiator portion of Project Hydra, a custom liquid-cooling system designed to make sure Vesuvius doesn’t turn into magma.
Mount Radeon: The R9 295 X2
Liberate it from the foam, and you can see Vesuvius—now known as the Radeon R9 295 X2—in all of its glory.
You may have been wondering how AMD was going to take a GPU infamous for heat issues with only one chip on a card and create a viable dual-GPU solution. Have a glance at that external 120-mm fan and radiator, and you’ll wonder no more.
If only Pompeii had been working with Asetek. Source: AMD.
The 295 X2 sports a custom cooling system created by Asetek for AMD. This system is pre-filled with liquid, operates in a closed loop, and is meant to be maintenance-free. As you can probably tell from the image above, the cooler pumps liquid across the surface of both GPUs and into the external radiator. The fan on the radiator then pushes the heat out of the case. That central red fan, meanwhile, cools the VRMs and DRAM on the card.
We’ve seen high-end video cards with water cooling in the past, but nothing official from AMD or Nvidia—until now. Obviously, having a big radiator appendage attached to a video card will complicate the build process somewhat. The 295 X2 will only fit into certain enclosures. Still, it’s hard to object too strongly to the inclusion of a quiet, capable cooling system like this one. We’ve seen way too many high-end video cards that hiss like a Dyson.
There’s also the matter of what this class of cooling enables. The R9 295 X2 has two Hawaii GPUs onboard, fully enabled and clocked at 1018MHz, slightly better than the 1GHz peak clock of the Radeon R9 290X. Each GPU has its own 4GB bank of GDDR5 memory hanging off of a 512-bit interface. Between the two GPUs is a PCIe 3.0 switch chip from PLX, interlinking the Radeons and connecting them to the rest of the system. Sprouting forth from the expansion slot cover are four mini-DisplayPort outputs and a single DL-DVI connector, ready to drive five displays simultaneously, if you so desire.
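For a sense of scale, the headline rates fall straight out of those clocks. Here’s a quick back-of-the-envelope calc; the per-GPU unit counts (2816 stream processors, 176 texture units, 64 ROPs) and the 5 GT/s memory transfer rate are Hawaii’s usual figures, assumed here rather than quoted from AMD’s materials.

```python
ghz = 1.018                            # peak GPU clock, per AMD
gpus = 2
shaders, tmus, rops = 2816, 176, 64    # per fully enabled Hawaii GPU (assumed)
mem_gtps, bus_bits = 5.0, 512          # GDDR5 transfer rate (GT/s) and bus width per GPU (assumed rate)

print("Shader arithmetic:", round(gpus * shaders * 2 * ghz / 1000, 1), "tflops")  # 2 flops per FMA
print("Bilinear filtering:", round(gpus * tmus * ghz), "Gtexels/s")
print("Pixel fill rate:", round(gpus * rops * ghz), "Gpixels/s")
print("Memory bandwidth:", round(gpus * mem_gtps * bus_bits / 8), "GB/s")
```

One quarter of the single-precision figure that pops out also squares with the roughly 2.8 teraflops of double-precision throughput mentioned below.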
So the 295 X2 is roughly the equivalent of two Radeon R9 290X cards crammed into one dual-slot card (plus an external radiator). That makes it the most capable single-card graphics solution that’s ever come through Damage Labs, as indicated by the bigness of the numbers attached to it in the table below.
| Card | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak shader arithmetic (tflops) | Peak rasterization rate (Gtris/s) | Memory bandwidth (GB/s) |
| Radeon R9 295 X2 | 130 | 358/179 | 11.5 | 8.1 | 640 |
| GeForce GTX 770 | 35 | 139/139 | 3.3 | 4.3 | 224 |
| GeForce GTX 780 | 43 | 173/173 | 4.2 | 3.6 or 4.5 | 288 |
Those are some large values. In fact, the only way you could match the bigness of those numbers would be to pair up a couple of Nvidia’s fastest cards, like the GeForce GTX 780 Ti. No current single GPU comes close.
There is a cost for achieving those large numbers, though. The 295 X2’s peak power rating is a jaw-dropping 500W. That’s quite a bit higher than some of our previous champs, such as the GeForce GTX 690 at 300W and the Radeon HD 7990 at 375W. Making this thing work without a new approach to cooling wasn’t gonna be practical.
Exotic cooling, steep requirements
AMD has gone out of its way to make sure the R9 295 X2 looks and feels like a top-of-the-line product. Gone are the shiny plastics of the Radeon HD 7990, replaced by stately and industrial metal finishes, from the aluminum cooling shroud up front to the black metal plate covering the back side of the card.
That’s not to say that the 295 X2 isn’t any fun. The bling is just elsewhere, in the form of illumination on the “Radeon” logo atop the shroud. Another set of LEDs makes the central cooling fan glow Radeon red.
I hope you’re taken by that glow—I know I kind of am—because it’s one of the little extras that completes the package. And this package is not cheap. The suggested price on this puppy is $1499.99 (or, in Europe, €1099 plus VAT). I believe that’s a new high-water mark for a consumer graphics card, although it ain’t the three frigging grand Nvidia intends to charge for its upcoming Titan Z with dual GK110b chips. And I believe the 295 X2’s double-precision math capabilities are fully enabled at one-quarter the single-precision rate, or roughly 2.8 teraflops. That makes the 295 X2 a veritable bargain by comparison, right?
Well, whatever the case, AMD expects the R9 295 X2 to hit online retailers during the week of April 21, and I wouldn’t be shocked to see them sell out shortly thereafter. You’ll have to decide for yourself whether 295 X2’s glowy lights, water cooling, and other accoutrements are worth well more than the $1200 you’d put down for a couple of R9 290X cards lashed together in a CrossFire config.
You know, some things about this card—its all-metal shroud, illuminated logo, secret agent-themed launch, metal briefcase enclosure, and exploration of new price territory—seem strangely familiar. Perhaps that’s because the GeForce GTX 690 was the first video card to debut an all-metal shroud and an illuminated logo; it was launched with a zombie apocalypse theme, came in a wooden crate with prybar, and was the first consumer graphics card to hit the $1K mark. Not that there’s anything wrong with that. The GTX 690’s playbook is a fine one to emulate. Just noticing.
Assuming the R9 295 X2 fits into your budget, you may have to make some lifestyle changes in order to accommodate it. The card is 12″ long, like the Radeon HD 7990 before it, but it also requires a mounting point for the 120-mm radiator/fan combo that sits above the board itself. Together, the radiator and fan are 25 mm deep. If you’re the kind of dude who pairs up two 295 X2s, AMD recommends leaving a one-slot gap between the two cards, so that airflow to that central cooling fan isn’t occluded. I suspect you’d also want to leave that space open in a single-card config rather than, say, nestling a big sound card right up next to that fan.
More urgently, your system’s power supply must be able to provide a combined 50 amps across the card’s two eight-pin PCIe power inputs (at 12V, that works out to 600W of delivery capacity, a sensible margin over the card’s 500W rating). That wasn’t a problem for the Corsair AX850 PSU in our GPU test rig, thanks to its single-rail design. Figuring out whether a multi-rail PSU offers enough amperage on the relevant 12V rails may require some careful reading, though.
Now for a whole mess of issues
The Radeon R9 295 X2 is a multi-GPU graphics solution, and that very fact has triggered a whole mess of issues with a really complicated backstory. The short version is that AMD has something of a checkered past when it comes to multi-GPU solutions. The last time they debuted a new dual-GPU graphics card, the Radeon HD 7990, it resulted in one of the most epic reviews we’ve ever produced, as we pretty much conclusively demonstrated that adding a second GPU didn’t make gameplay anywhere near twice as smooth as a single GPU. AMD has since added a frame-pacing algorithm to its drivers in order to address that problem, with good results. However, that fix didn’t apply to Eyefinity multi-display configs and didn’t cover even a single 4K panel. (The best current 4K panels use two “tiles” and are logically treated as dual displays.)
A partial fix for 4K came later, with the introduction of the Radeon R9 290X and the Hawaii GPU, in the form of a new data-transfer mechanism for CrossFire known as XDMA. Later still, AMD released a driver with updated frame pacing for older GPUs, like the Tahiti chip aboard the Radeon R9 280X and the HD 7990.
And, shamefully, we haven’t yet tested either XDMA CrossFire or the CrossFire + 4K/Eyefinity fix for older GPUs. I’ve been unusually preoccupied with other things, but that’s still borderline scandalous and sad. AMD may well have fixed its well-documented CrossFire issues with 4K and multiple displays, and son, testing needs to be done.
Happily, the R9 295 X2 review seemed like the perfect opportunity to spend some quality time vetting the performance of AMD’s current CrossFire solutions with 4K panels. After all, AMD emphasized repeatedly in its presentations that the 295 X2 is built for 4K gaming. What better excuse to go all out?
So I tried. Doing this test properly means using FCAT to measure how individual frames of in-game animation are delivered to a 4K panel. Our FCAT setup isn’t truly 4K capable, but we’re able to capture one of the two tiles on a 4K monitor, at a resolution of 1920×2160, and analyze performance that way. It’s a bit of a hack, but it should work.
Emphasis on should. Trouble is, I just haven’t been able to get entirely reliable results. It works for GeForces, but the images coming in over HDMI-to-DVI-to-splitter-to-capture-card from the Radeons have some visual corruption in them that makes frame counting difficult. After burning a big chunk of last week trying to make it work by swapping in shorter and higher-quality DVI cables, I had to bail on FCAT testing and fall back on the software-based Fraps tool in order to get reliable results. I will test XDMA CrossFire and the like with multiple monitors using FCAT soon. Just not today.
Fraps captures frame times relatively early in the production process, when they are presented as final to Direct3D, so it can’t show us exactly when frames are reaching the screen. As we’ve often noted, though, there is no single place where we can sample to get a perfect picture of frame timing. The frame pacing and metering methods used in multi-GPU solutions may provide regular, even frame delivery to the monitor, but as a result, the animation timing of those frames may not match their display times. Animation timing is perhaps better reflected in the Fraps numbers—depending on how the game engine tracks time internally, which varies from game to game.
This stuff is really complicated, folks.
Fortunately, although Fraps may not capture all the nuances of multi-GPU microstuttering and its mitigation, it is a fine tool for basic performance testing—and there are plenty of performance challenges for 4K gaming even without considering frame delivery to the display. I think that’ll be clear very soon.
One more note: I’ve run our Fraps results though a three-frame low-pass filter in order to compensate for the effects of the three-frame Direct3D submission queue used by most games. This filter eliminates the “heartbeat” pattern of high-and-then-low frame times sometimes seen in Fraps results that doesn’t translate into perceptible hitches in the animation. We’ve found that filtered Fraps data corresponds much more closely to the frame display times from FCAT. Interestingly, even with the filter, the distinctive every-other-frame pattern of multi-GPU microstuttering is evident in some of our Fraps results.
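For the curious, here’s a minimal sketch of what such a filter can look like, assuming a plain three-sample moving average over the per-frame times from a Fraps log; the exact filter behind our graphs isn’t reproduced here.

```python
def three_frame_filter(frame_times_ms):
    """Smooth Fraps frame times with a simple three-sample moving average."""
    filtered = []
    for i in range(len(frame_times_ms)):
        window = frame_times_ms[max(0, i - 1):i + 2]  # the frame plus its neighbors
        filtered.append(sum(window) / len(window))
    return filtered

# A 16.7/50.0 ms "heartbeat" flattens toward its ~33 ms average,
# while a genuine multi-frame spike would still stand out.
print(three_frame_filter([16.7, 50.0, 16.7, 50.0, 16.7, 50.0]))
```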
The 4K experience
We’ve had one of the finest 4K displays, the Asus PQ321Q, in Damage Labs for months now, and I’ve been tracking the progress of 4K support in Windows, in games, and in graphics drivers periodically during that time. This is our first formal look at a product geared specifically for 4K gaming, so I thought I’d offer some impressions of the overall experience. Besides, I think picking up a $3000 4K monitor ought to be a prerequisite for dropping $1500 on the Radeon R9 295 X2, so the 4K experience is very much a part of the overall picture.
The first thing that should be said is that this 31.5″ Asus panel with a 3840×2160 pixel grid is a thing of beauty, almost certainly the finest display I’ve ever laid eyes upon. The color reproduction, the uniformity, the incredible pixel density, the really-good-for-an-LCD black levels—practically everything about it is amazing and wondrous. The potential for productivity work, video consumption, or simply surfing the web is ample and undeniable. To see it is to want it.
The second thing to be said is that—although Microsoft has made progress and the situation isn’t bad under Windows 8.1 when you’re dealing with the file explorer, desktop, or Internet Explorer—the 4K support in Windows programs generally is still awful. That matters because you will want to use high-PPI settings and to have text sizes scaled up to match this display. Reading five-point text is not a good option. Right now, most applications do scale up their text size in response to the high-PPI control panel settings, but the text looks blurry. Frustrating, given everything, but usable.
The bigger issues have to do with the fact that today’s best 4K displays, those that support 60Hz refresh rates, usually present themselves to the PC as two “tiles” or separate logical displays. They do so because, when they were built, there wasn’t a display scaler ASIC capable of handling the full 4K resolution. The Asus PQ321Q can be connected via dual HDMI inputs or a single DisplayPort connector. In the case of DisplayPort, the monitor uses multi-stream transport mode to essentially act as two daisy-chained displays. You can imagine how this reality affects things like BIOS screens, utilities that run in pre-boot environments, and in-game menus the first time you run a game. Sometimes, everything is squished up on half of the display. Other times, the image is both squished and cloned on both halves. Occasionally, the display just goes black, and you’re stuck holding down the power button in an attempt to start over.
AMD and Nvidia have done good work making sure their drivers detect the most popular dual-tile 4K monitors and auto-configure them as a single large surface in Windows. Asus has issued multiple firmware updates for this monitor that seem to have helped matters, too. Still, it often seems like the tiling issues have moved around over time rather than being on a clear trajectory of overall improvement.
Here’s an example from Tomb Raider on the R9 295 X2. I had hoped to use this game for testing in this review, but the display goes off-center at 3840×2160. I can’t seem to make it recover, even by nuking the registry keys that govern its settings and starting over from scratch. Thus, Lara is offset to the left of the screen while playing, and many of the in-game menus are completely inaccessible.
AMD suggested specifying the aspect ratio for this game manually to work around this problem, but doing so gave me an entire game world that was twice as tall as it should have been for its width. Now, I’m not saying that’s not interesting and maybe an effective substitute for some of your less powerful recreational drugs, because wow. But it’s not great for real gaming.
Another problem that affects both AMD and Nvidia is a shortage of available resolutions. Any PC gamer worth his salt knows what to do when a game doesn’t quite run well enough at the given resolution, especially if you have really high pixel densities at your command: just pop down to a lower res and let the video card or monitor scale things up to fill the screen. Dropping to 2560×1440 or 1920×1080 would seem like an obvious strategy with a display like this one. Yet too often, you’re either stuck with 3840×2160 or bust. The video drivers from AMD and Nvidia don’t consistently expose even these two obvious resolutions that are subsets of 3840×2160 or anything else remotely close. I’m not sure whether this issue will be worked out in the context of these dual-tile displays or not. Seems like they’ve been around quite a while already without the right thing happening. We may have to wait until the displays themselves get better scaler ASICs.
There’s also some intermittent sluggishness in using a 4K system, even with the very fastest PC hardware. You’ll occasionally see cases of obvious slowness, where screen redraws are laborious for things like in-game menus. Such slowdowns have been all but banished at 2560×1600 and below these days, so it’s a surprise to see them returning in 4K. I’ve also encountered some apparent mouse precision issues in game options menus and while sniping in first-person shooters, although such things are hard to separate precisely from poor graphics performance.
In case I haven’t yet whinged enough about one of the coolest technologies of the past few years, let me add a few words about the actual experience of gaming in 4K. I’ve gotta say that I’m not blown away by it, when my comparison is a 27″ 2560×1440 Asus monitor, for several reasons.
For one, game content isn’t always 4K-ready. While trying to get FCAT going, I spent some time with this Asus monitor’s right tile in a weird mode, with only half the vertical resolution active. (Every other scanline was just repeated.) You’d think that would be really annoying, and on the desktop, it’s torture. Fire up a session of Borderlands 2, though, and I could play for hours without noticing the difference, or even being able to detect the split line, between the right and left tiles. Sure, Crysis 3 is a different story, but the reality is that many games won’t benefit much from the increased pixel density. Their textures and models and such just aren’t detailed enough.
Even when games do take advantage, I’m usually not blown away by the difference. During quick action, it’s often difficult to appreciate the additional fidelity packed into each square inch of screen space.
When I do notice the additional sharpness, it’s not always a positive. For example, I often perceive multiple small pixels changing quickly near each other as noise or flicker. The reflections in puddles in BF4 are one example of this phenomenon. I don’t think those shader effects have enough internal sampling, and somehow, that becomes an apparent problem at 4K’s high pixel densities. My sense is that, most of the time, lower pixel densities combined with supersampling (basically, rendering each pixel multiple times at an offset and blending) would probably be more pleasing overall than 4K is today. Of course, as with many things in graphics, there’s no arguing with the fact that 4K plus supersampling would be even better, if that were a choice. In fact, supersampling may prove to be an imperative for high-PPI gaming. 4K practically requires even more GPU power and will soak it up happily. Unfortunately, 4X or 8X supersampling at 4K is not generally feasible right now.
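As an aside on what that blending amounts to: once a frame is rendered with multiple samples per pixel, the resolve step is just an average over each pixel’s samples. Here’s a purely illustrative NumPy sketch of a 2x2 ordered-grid resolve, not anything a shipping game does verbatim.

```python
import numpy as np

def resolve_2x2(samples):
    """Average each 2x2 block of samples down to one output pixel (4x OGSS)."""
    h, w = samples.shape[:2]
    blocks = samples[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2, -1)
    return blocks.mean(axis=(1, 3))

# Render at 2x the target resolution in each dimension, then resolve.
rendered = np.random.rand(4, 4, 3)   # stand-in for a tiny supersampled tile
print(resolve_2x2(rendered).shape)   # (2, 2, 3)
```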
Don’t get me wrong. When everything works well and animation fluidity isn’t compromised, gaming at 4K can be a magical thing, just like gaming at 2560×1440, only a little nicer. The sharper images are great, and edge aliasing is much reduced at high PPIs.
I’m sure things will improve gradually as 4K monitors become more common, and I’m happy to see the state of the art advancing. High-PPI monitors are killer for productivity. Still, I think some other display technologies, like G-Sync/Freesync-style variable refresh intervals and high-dynamic-range panels, are likely to have a bigger positive impact on gaming. I hope we don’t burn the next few years on cramming in more pixels without improving their speed and quality.
Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:
| Memory size | 16GB (4 DIMMs) |
| Memory type | DDR3 SDRAM at 1600MHz |
| Chipset drivers | INF update, Rapid Storage Technology Enterprise 126.96.36.1993 |
| Audio | Realtek 188.8.131.5271 drivers |
| Storage | HyperX 480GB SATA |
| Graphics card | Driver | Base clock (MHz) | Boost clock (MHz) | Memory clock (MHz) | Memory size (MB) |
| GeForce GTX 780 Ti | GeForce 337.50 | 875 | 928 | 1750 | 3072 (x2) |
| Radeon HD 7990 | Catalyst 14.4 beta | 950 | 1000 | 1500 | 3072 |
| Radeon R9 290X | Catalyst 14.4 beta | – | 1000 | 1250 | 4096 |
| Radeon R9 295 X2 | Catalyst 14.4 beta | – | 1018 | 1250 | 4096 (x2) |
Thanks to Intel, Corsair, Kingston, Gigabyte, and OCZ for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.
Also, our FCAT video capture and analysis rig has some pretty demanding storage requirements. For it, Corsair has provided four 256GB Neutron SSDs, which we’ve assembled into a RAID 0 array for our primary capture storage device. When that array fills up, we copy the captured videos to our RAID 1 array, comprised of a pair of 4TB Black hard drives provided by WD.
Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.
In addition to the games, we used the following test applications:
- GPU-Z 0.7.7
The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
Crysis 3

Click on the buttons above to cycle through plots of the frame times from one of our three test runs for each graphics card. You’ll notice that the lines for the multi-GPU solutions like the R9 295 X2 and two GTX 780 Ti cards in SLI are “fuzzier” than those from the single-GPU solutions. That’s an example of multi-GPU micro-stuttering, where the two GPUs are slightly out of sync, so the frame-to-frame intervals tend to vary in an alternating pattern. Click on the buttons below to zoom in and see how that pattern looks up close.
The only really pronounced example of microstuttering in our zoomed-in plots is the GTX 780 Ti SLI config, and it’s not in terrible shape, with the peak frame times remaining under 25 ms or so. The thing is, although we can measure this pattern in Fraps, it’s likely that Nvidia’s frame metering algorithm will smooth out this saw-tooth pattern and ensure more consistent delivery of frames to the display.
Not only does the 295 X2 produce the highest average frame rate, but it backs that up by delivering the lowest rendering times across 99% of the frames in our test sequence, as the 99th percentile frame time indicates.
Here’s a broader look at the frame rendering time curve. You can see that the 295 X2 has trouble in the very last less-than-1% of frames. I can tell you where that happens in the test sequence, when my exploding arrow does its thing. We’ve seen frame time spikes on both brands of video cards at this precise spot before. Thing is, if you look at the frame time plots above, Nvidia appears to have reduced the size of that spike recently, perhaps during the work it’s done optimizing this new 337.50 driver.
These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. And 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.
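For reference, here’s a minimal sketch of how these badness totals (and the 99th-percentile figure mentioned earlier) can be pulled from a run’s frame times. It’s illustrative only, not the code behind our graphs, and it assumes the “beyond X” total counts just the portion of each frame that runs past the threshold.

```python
import numpy as np

def frame_time_metrics(frame_times_ms, thresholds_ms=(50.0, 33.3, 16.7)):
    """Summarize one test run's frame times."""
    times = np.asarray(frame_times_ms, dtype=float)
    metrics = {
        "avg_fps": 1000.0 * len(times) / times.sum(),
        "99th_percentile_ms": float(np.percentile(times, 99)),
    }
    for t in thresholds_ms:
        # Count only the portion of each frame that runs past the threshold.
        over = times[times > t] - t
        metrics["ms_beyond_%g" % t] = float(over.sum())
    return metrics

print(frame_time_metrics([16.0, 18.0, 55.0, 17.0, 35.0, 16.5]))
```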
Per our discussion above, the GTX 780 Ti SLI aces this test by never crossing the 50-ms threshold. The R9 295 X2 is close behind—and solidly ahead of a single Hawaii GPU aboard the Radeon R9 290X. That’s the kind of real-world improvement we want out of a multi-GPU solution. This is where I’d normally stop and say we’ll want to verify the proper frame delivery with FCAT, but in this particular case, I’ll skip that step and call it good. Subjectively speaking, Crysis 3 on the 295 X2 at 4K is amazingly fluid and smooth, and this game has the visual fidelity to make you appreciate the additional pixels.
Assassin’s Creed IV: Black Flag
Uh oh. Click through the plots above, and you’ll see occasional frame time spikes from AMD’s multi-GPU solutions, both the HD 7990 and the R9 295 X2. Those same spikes are absent from the plots of the R9 290X and the two GeForce configs. The spikes have a fairly modest impact on the 295 X2’s FPS average, which is still much higher than a single 290X card’s, but they’re reflected more clearly in the latency-sensitive 99th percentile metric.
The 295 X2 is still faster than a single R9 290X overall in Black Flag, but its multi-GPU scaling is marred by those intermittent slowdowns. Meanwhile, the GTX 780 Ti SLI setup never breaches the 33-ms barrier, not even once.
Battlefield 4

Thanks to the hard work put in by Johan Andersson and the BF4 team, this game is now an amazing playground for folks who want to understand performance. I was able to collect performance data from the game engine directly here, without the use of Fraps, and I grabbed much more of it than I can share in the context of this review, including information about the CPU time and GPU time required to render each frame. BF4 supports AMD’s Mantle, where Fraps cannot go, and the game now even includes an FCAT overlay rendering option, so we can measure frame delivery with Mantle.
I’m on board for all of that—and I even tried out two different frame-pacing options BF4 offers for multi-Radeon setups—but I didn’t have time to include it all in this review. In the interests of time, I’ve only included Direct3D results below. Trust me, the differences in performance between D3D and Mantle are slight at 4K resolutions, where the GPU limits performance more than the CPU and API overhead. Also, given the current state of multi-GPU support and frame pacing in BF4, I think Direct3D is unquestionably the best way to play this game on a 295 X2.
Still, we’ll dig into that scrumptious, detailed BF4 performance data before too long. There’s much to be learned.
Check each one of the metrics above, and it’s easy to see the score. The R9 295 X2 is pretty much exemplary here, regardless of which way you choose to measure.
Oddly enough, although its numbers look reasonably decent, the GTX 780 Ti SLI setup struggles, something you can tell by the seat of your pants when playing. My insta-theory was that the cards were perhaps running low on memory. After all, they “only” have 3GB each, and SLI adds some memory overhead. I looked into it by logging memory usage with GPU-Z while playing, and the primary card was using its RAM pretty much to the max. Whether or not that’s the source of the problem is tough to say, though, without further testing.
Batman: Arkham Origins
Well. We’re gliding across the rooftops in this test session, and the game must be constantly loading new portions of the city as we go. You’d never know that when playing on one of the GeForce configs, but there are little hiccups that you can feel all along the path when playing on the Radeons. For whatever reason, this problem is most pronounced on the 295 X2. Thus, the 295 X2 fares poorly in our latency-sensitive performance metrics. This is a consistent and repeatable issue that’s easy to notice subjectively.
Guild Wars 2
Uh oh. Somehow, the oldest game in our roster still doesn’t benefit from the addition of a second GPU. Heck, the single 290X is even a little faster than the X2. Not what I expected to see here, but this is one of the pitfalls of owning a multi-GPU solution. Without the appropriate profile for CrossFire or SLI, many games simply won’t take advantage of additional GPUs.
Call of Duty: Ghosts
Hm. Watch the video above, and you’ll see that the first part of our test session is a scripted sequence that looks as if it’s shown through a camera lens. This little scripted bit starts the level, and I chose to include it because Ghosts has so many fricking checkpoints riddled throughout it, there’s practically no way to test the same area repeatedly unless it’s at the start of a mission. By looking at the frame time plots, you can see that the Radeons really struggle with this portion of the test run—and, once again, the multi-GPU configs suffer the most. During that bit of the test, the 290X outperforms the 295 X2.
Beyond those opening seconds, the 295 X2 doesn’t perform too poorly, although the dual 780 Ti cards are faster. By then, though, the damage is done.
Thief

I decided to just use Thief’s built-in automated benchmark, since we can’t measure performance with AMD’s Mantle API using Fraps. Unfortunately, this benchmark is pretty simplistic, with only FPS average and minimum numbers (as well as a maximum, for all that’s worth).
Watch this test run, and you can see that it’s a struggle for most of these graphics cards. Unfortunately, Mantle isn’t any help, even on the single-GPU R9 290X. I had hoped for some gains from Mantle, even if the primary benefits are in CPU-bound scenarios. Doesn’t look like that’s the case.
As you can see, Thief‘s developers haven’t yet added multi-GPU support to their Mantle codepath, so the 295 X2 doesn’t perform at its best with Mantle. With Direct3D, though, the 295 X2 easily leads the pack.
Power consumption

Please note that our “under load” tests aren’t conducted in an absolute peak scenario. Instead, we have the cards running a real game, Crysis 3, in order to show us power draw with a more typical workload.
Yeah, so this is the same test rig in each case; only the graphics card changes. Dropping in the R9 295 X2 raises the total system power consumption at the wall outlet to an even 700W, over 130W higher than with dual GTX 780 Ti cards.
Noise levels and GPU temperatures
The good news here is that, despite its higher power draw and the presence of a water pump and an additional 120-mm fan, the Radeon R9 295 X2 isn’t terribly loud at all. This is progress. A couple of generations ago, the Radeon HD 6990 exceeded 58 dBA in the same basic test conditions. I’m not sure I want to see all future dual-GPU cards come with a radiator appendage hanging off of ’em, but I very much prefer that to 58 dBA of noise.
We couldn’t log the 295 X2’s temperatures directly because GPU-Z doesn’t yet support this card (and you need to log temps while in full-screen mode so both GPUs are busy). However, the card’s default PowerTune limit is 75°C. Given how effective PowerTune is at doing its job, I’d fully expect the 295 X2 to hit 75°C during our tests.
Notice, also, that our R9 290X card stays relatively cool at 71°C. That’s because it’s an XFX card with an excellent aftermarket cooler. The card not only remained below its thermal limit, but also ran consistently at its 1GHz peak clock during our warm-up period and as we took the readings. Using a bigger, beefier cooler, XFX has solved AMD’s problem with variable 290X clock speeds and has erased the performance difference between the 290X’s default and “uber” cooling modes in the process. The performance results for the 290X on the preceding pages reflect that fact.
Conclusions

Let’s sum up our performance results—and factor in price—using our world-famous scatter plots. These overall performance results are a geometric mean of the outcomes on the preceding pages. We left Thief out of the first couple of plots since we tested it differently, but we’ve added it to a third plot to see how it affects things.
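As a reminder of what the geometric mean does, here’s a tiny sketch with made-up scores; it takes the nth root of the product of n per-game results, so proportional swings in any one game count equally rather than letting one high-FPS title dominate.

```python
import math

def geomean(values):
    """Geometric mean: the nth root of the product of n values."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical 99th-percentile FPS scores for one card across five games
print(round(geomean([46.0, 31.0, 52.0, 24.0, 38.0]), 1))
```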
As usual, the best values will tend toward the top left of the plot, where performance is high and price is low, while the worst values will gravitate toward the bottom right.
As you can see, the 295 X2 doesn’t fare well in our latency-sensitive 99th percentile FPS metric (which is just frame times converted to higher-is-better FPS). You’ve seen the reasons why in the test results: frame time spikes in AC4 and Arkham Origins, struggles in a portion of our Call of Duty: Ghosts test session, and negative performance scaling for multi-GPU in Guild Wars 2. These problems push the R9 295 X2 below even a single GeForce GTX 780 Ti in the overall score.
AMD’s multi-GPU struggles aren’t confined to the 295 X2, either. The Radeon HD 7990 is, on paper, substantially more powerful than the R9 290X, but its 99th percentile FPS score is lower than a single 290X card’s.
The 295 X2 does somewhat better if you’re looking at the FPS average, and the addition of Thief makes the Radeons a little more competitive overall. Still, two GTX 780 Ti cards in SLI are substantially faster even in raw FPS terms. And we know that the 295 X2 struggles to produce consistently the sort of gaming experience that its hardware ought to provide.
I’ve gotta say, I find this outcome incredibly frustrating and disappointing. I believe AMD’s hardware engineers have produced probably the most powerful graphics card we’ve ever seen. The move to water cooling has granted it a massive 500W power envelope, and it has a 1GB-per-GPU advantage in memory capacity over the GeForce GTX 780 Ti SLI setup. Given that we tested exclusively in 4K, where memory size is most likely to be an issue, I fully expected the 295 X2 to assert its dominance. We saw flashes of its potential in Crysis 3 and BF4. Clearly the hardware is capable.
At the end of the day, though, a PC graphics card requires a combination of hardware and software in order to perform well—that’s especially true for a multi-GPU product. Looks to me like the R9 295 X2 has been let down by its software, and by AMD’s apparent (and, if true, bizarre) decision not to optimize for games that don’t wear the Gaming Evolved logo in their opening titles. You know, little franchises like Call of Duty and Assassin’s Creed. It’s possible AMD could fix these problems in time, but one has to ask how long, exactly, owners of the R9 295 X2 should expect to wait for software to unlock the performance of their hardware. Recently, Nvidia has accelerated its practice of having driver updates ready for major games before they launch, after all. That seems like the right way to do it. AMD is evidently a long way from that goal.
I dunno. Here’s hoping that our selection of games and test scenarios somehow just happened to be particularly difficult for the R9 295 X2, for whatever reason. Perhaps we can vary some of the test scenarios next time around and get a markedly better result. There’s certainly more work to be done to verify consistent frame delivery to the display, anyhow. Right now, though, the 295 X2 is difficult to recommend, even to those folks who would happily pony up $1500 for a graphics card.
I occasionally post pictures of expensive graphics cards on Twitter.