AMD’s Radeon HD 7990 graphics card reviewed

Well, this is quite the turn of events. We recently started using some new GPU testing tools from Nvidia that measure precisely when frames are being delivered to the display, and in the process, we found that Radeon-based multi-GPU solutions have some troubling problems. As I said in that article, our next task is to address the issue of multi-GPU microstuttering in more depth. Just as we published that article, we learned of AMD’s plans to introduce a killer new multi-GPU graphics product: the Radeon HD 7990.

So I guess our work is cut out for us.

By most measures, the introduction of a card like the Radeon HD 7990 should be simple, because it is unreservedly the most powerful graphics card the world has ever seen. The formula is straightforward enough: two Tahiti GPUs, like those driving the Radeon HD 7970, working together in CrossFire on one graphics card. That’s like… twin Clydesdales pulling your wagon, a pair of Ferrari V12s driving all four wheels, like various other poor analogies involving large-scale parallelism and testosterone. Point is, the 7990’s hardware is world-class, second-to-none stuff, capable of crunching more flops, bits, and texels than anything else you can plug into a PCIe slot.

Reviewing this thing should be, you know, fun.

But we have some difficult questions to ask about the 7990’s true performance along the way. I think that makes our task interesting, at least. Although we have loads of data collected by multiple tools, our goal is to take a very practical approach that should yield some definitive answers to the questions at hand. Let’s have a closer look at the 7990’s formidable hardware, and then we’ll dive into the performance results.

From Tahiti to New Zealand to Malta

The Radeon HD 7990’s introduction is more interesting than usual because it’s either really late, already over, or nearly didn’t happen. I’m not sure which, entirely. You see, back when AMD unveiled the Radeon HD 7970 at the end of 2011, the firm let slip a code name, New Zealand, for an upcoming dual-GPU graphics card and said it was “coming soon.” Since we in the media are given to fits of speculation, we pretty much expected to see a dual-Tahiti graphics card from AMD at some point in early 2012. That product didn’t arrive as anticipated, and we nearly gave up hope that it ever would.

Eventually, several board makers, including Asus and PowerColor, slapped two Tahiti chips onto a single card, but those products didn’t ship until late last year, in extremely limited volumes. We tried to get our hands on one of the water-cooled Asus ARES II cards for review but were told the cards were completely sold out practically as soon as the product was introduced.

We figured that was it for the 7990, but then the news broke that AMD would be extending the tenure of the Radeon HD 7000 series until the end of 2013. At that time, the company told us it had more 7000-series products on the way. Then came GDC last month, when we got our first peek at the 7990. Now here we are, well over a year since the Radeon HD 7970 was introduced, looking at an official Radeon HD 7990 reference card.

This thing even has its own code name, “Malta.” AMD tells us New Zealand is an umbrella code name that refers to all dual-Tahiti products from itself and its partners, including those in the FirePro lineup, while Malta refers specifically to this reference design, proving once and for all that codenames are almost infinitely malleable. The fact Malta exists as a reference design from AMD matters, though. AMD tells us this card will be widely available through all of its partners, a true mass-market product. Also, the level of refinement evident in this card and cooler goes well beyond what we’d expect out of a science project from a board maker.

The biggest revelation about this reference design comes courtesy of those three fans spread atop a massive, board-long array of heatpipes and fins. This card is much, much quieter than its predecessor, the Radeon HD 6990, which set some records on the Damage Labs decibel meter. Heck, the 7990 seems like a faint whisper next to the reference 7970’s cooler. AMD has clearly put some work into finding a quieter cooling solution. The firm claims that turbulence, rather than the fans themselves, is the dominant noise source in most coolers, and that this design reduces turbulence by pushing air straight down through the heatsink fins. The payoff will be obvious to your ears.

Pretty much anything else you can say about the 7990 requires big numbers. With dual Tahiti GPUs, it has a total of 4096 shader ALUs providing 8.2 teraflops of compute power. The board packs 6GB of GDDR5 memory. The true memory capacity is half that, with 3GB allocated to each GPU, but the memory interface is effectively 768 bits wide, with all the bandwidth that entails.

The twin GPUs are joined by a bridge chip from PLX with 48 lanes of PCIe bandwidth, 16 lanes to each GPU and 16 to the PCIe slot. AMD claims the board has “96 GB/s” of “inter-GPU bandwidth,” but do the math with me. Each PCIe x16 link can transfer 16 GB/s in one direction or 32 GB/s bidirectionally. That means GPU 1 can transfer, say, a big texture to GPU 2 at a peak rate of 16 GB/s. That’s, you know, considerably less than the claimed 96 GB/s. It should be more than sufficient, regardless.
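
If you’d like to check that math yourself, here’s a quick sketch of the arithmetic. The per-link figures come from the paragraph above; the assumption that AMD’s 96 GB/s number is simply the bidirectional totals of all three links added together is ours.

```python
# Back-of-the-envelope bandwidth math for the 7990's PLX bridge.
# Assumes PCIe 3.0 x16 links at roughly 16 GB/s per direction, per the text above.

PER_DIRECTION_GBS = 16                    # one x16 link, one direction
BIDIRECTIONAL_GBS = 2 * PER_DIRECTION_GBS
NUM_LINKS = 3                             # one to each GPU, one to the PCIe slot

# AMD's headline figure looks like the sum of all three links, counted both ways.
claimed_aggregate = NUM_LINKS * BIDIRECTIONAL_GBS   # 96 GB/s

# What a single bulk transfer (say, a texture) from GPU 1 to GPU 2 can actually use.
gpu_to_gpu_peak = PER_DIRECTION_GBS                 # 16 GB/s

print(f"Claimed aggregate: {claimed_aggregate} GB/s")
print(f"Peak one-way GPU-to-GPU transfer: {gpu_to_gpu_peak} GB/s")
```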

The card can drive up to five displays simultaneously, four via DisplayPort and one via DVI, though plug adapters of various types are available. As you can see above, the board has a CrossFire connector, so you can double up on 7990s if you’re feeling a little unsure whether just two Tahiti chips will suffice. AMD says a quad CrossFire config would be ideal for driving 4K display resolutions. (Note to self: test this claim ASAP.)

Poking up out of the top of the 7990 are two eight-pin auxiliary power plugs, to the surprise of no one. The board requires 375W of power, which is 25% less than the power requirements for two separate Radeon HD 7970 GHz Edition cards. Some of the power savings likely come from clock frequencies that are a smidgen lower. The 7970 GHz Edition has a 1GHz base clock and a 1050MHz “boost” frequency. The 7990 clocks in at 950MHz base and 1000MHz boost.

At 12″, the 7990 is a full inch longer than its most direct competitor, the GeForce GTX 690, and at this point, the jokes just write themselves. The Radeon’s additional endowment may prove to be inconvenient, though, if you’re trying to install the board into any sort of mid-sized PC enclosure. You’ll want to check to see whether your case has enough room before ordering up a 7990.

Speaking of which, perhaps the largest number of all associated with this card is its price: $999.99. Gulp.

Yep. Nvidia started it with the GTX 690 and Titan, and AMD is following suit by pricing its latest premium graphics card at one shiny penny shy of a grand. I was initially surprised by this move, since the 7990 doesn’t have the distinctive industrial design touches that the GTX 690 and Titan do, such as magnesium-and-aluminum cooling shrouds, LED-lit logos, and blowers with Crisco-and-gold-dust bearing lube. The 7990 is handsome—and I’ve already told you the cooler is quiet—but it mostly just looks like another Radeon covered in shiny plastic. Asking this much is also a bit of a risk because you can buy a Radeon HD 7970 for 400 bucks at Newegg right now, so two of them presumably would be 800 bucks, which I understand is less than a grand.

However, AMD has a couple of awfully decent justifications for charging as much as it does. First of all, it’s taking this whole Never Settle game bundle concept to its terrifyingly wondrous logical conclusion. The 7990 will come with a coupon—right there in the box, to prevent redemption hassles—for the following games: BioShock Infinite, Tomb Raider, Crysis 3, Far Cry 3, Far Cry 3 Blood Dragon, Hitman: Absolution, Sleeping Dogs, and Deus Ex: Human Revolution. That’s an eleventy billion dollar value, purchased separately. But the 7990 comes with all of ’em.

The other reason AMD can get away with asking a grand for this card is simply that the specs justify it. Have a look at the 7990 versus the competition:

                      Peak pixel    Peak bilinear   Peak bilinear    Peak shader     Peak rasterization   Memory
                      fill rate     filtering       fp16 filtering   arithmetic      rate                 bandwidth
                      (Gpixels/s)   (Gtexels/s)     (Gtexels/s)      rate (tflops)   (Gtris/s)            (GB/s)
GeForce GTX 680       34            135             135              3.3             4.2                  192
GeForce GTX Titan     42            196             196              4.7             4.4                  288
GeForce GTX 690       65            261             261              6.5             8.2                  385
Radeon HD 7970 GHz    34            134             67               4.3             2.1                  288
Radeon HD 7990        64            256             128              8.2             4.0                  576

As I’ve said, the GeForce GTX 690 is the 7990’s nearest competitor, and the 7990 has substantially higher peak rates in the two most critical categories for modern GPU hardware: shader flops and memory bandwidth. Granted, the GK104 chips driving the GTX 690 have proven to be formidably efficient performers, but Tahiti’s larger shader array and 384-bit memory interface cannot be denied. One can see why AMD would price the 7990 directly opposite the GTX 690 and its theoretically less powerful single-GPU sibling, the GTX Titan. Add in the value of that stupendously stuffed game bundle, and the 7990 practically looks like a bargain by comparison. Also, I think there’s some funky psychology at work at the ultra high end of the market: lower prices communicate inferiority, and any signal that carries that message is directly at odds with the 7990’s whole mission.

Here’s the deal

Last time out, by using the FCAT tools that measure how frames are delivered to the display, we found some troubling problems with Radeon CrossFire multi-GPU configs.

We’ve known for a while about multi-GPU microstuttering, a timing problem related to the alternate frame rendering method of load-balancing employed by both CrossFire and SLI. Frames are doled out to one GPU and then the other in interleaved fashion, but sometimes, the GPUs can get pretty far out of sync. The result is that frames are dispatched in an uneven manner, introducing a tight pattern of jitter into the animation. Here’s an example from my original Inside the second article.

We can detect such problems with software tools like Fraps, which can detect when the game engine signals to the DirectX API that it has handed off a completed frame for processing. That’s relatively early in the frame rendering process—at the orange line marked “Present()” in the simplified diagram below.

We learned using frame capture tools that the microstuttering patterns in CrossFire solutions can become exaggerated by the time frames reach the display. Instead of something like the mild case of jitter in the plot above, the true pattern of frames arriving onscreen could look more like this:

In this example, the “short” frames in the sequence arrive only a fraction of a millisecond behind the “long” frames. With vsync disabled, those short or “runt” frames may only occupy a handful of horizontal lines across the screen, adding virtually no additional visual information to the picture. Here’s a zoomed-in example from BF3 with the FCAT overlay on the left showing a different color for each individual frame rendered by the GPU:

Yes, a slice the height of that olive-colored bar is all you see of a fully rendered GPU frame. In other cases, we found that CrossFire simply dropped frames entirely, never showing even a portion of them onscreen. Yet those runt and dropped frames are counted by software benchmark tools as entirely valid, inflating FPS averages and the like. Nvidia’s SLI solutions don’t have this problem, interestingly enough, because they employ a frame-metering technique to even out the delivery of frames to the display.
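
To make the inflation effect concrete, here’s a rough illustration of how runts and drops can pad an FPS average. The scanline counts and the runt cutoff below are invented for the example; they aren’t the thresholds FCAT’s own analysis scripts use.

```python
# Rough illustration of how runt and dropped frames inflate an FPS average.
# 'scanlines' is how many visible scanlines each rendered frame occupies
# onscreen (0 = dropped entirely). Values and the cutoff are made up.

RUNT_THRESHOLD = 21                                    # illustrative cutoff only

scanlines = [730, 4, 715, 0, 690, 9, 705, 3, 720, 0]   # per rendered frame
capture_seconds = 0.25                                 # length of the snippet

naive_fps = len(scanlines) / capture_seconds           # counts every frame: 40 FPS
visible = [s for s in scanlines if s >= RUNT_THRESHOLD]
effective_fps = len(visible) / capture_seconds         # meaningful frames only: 20 FPS

print(f"FPS as a software tool would count it: {naive_fps:.0f}")
print(f"FPS counting only frames with real visual content: {effective_fps:.0f}")
```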

All of that seemed like quite an indictment of CrossFire, but we had lingering questions about the practical impact of microstuttering on real-world performance. Does it impact the smoothness of in-game animation in a meaningful way? We couldn’t tell conclusively from our first set of test results. In the example from Skyrim shown above, the “long” frame times are so quick—less than 15 milliseconds—that the display would be getting new frames even faster than its typical 60 Hz refresh cycle. In a situation like that, you’re getting plenty of new information each time the screen is painted, so there’s really not much of a practical issue. We needed more data.

So, for this article, we set out to test the Radeon HD 7990 and friends with a very practical question in mind: in a truly performance-constrained scenario, where one GPU struggles to get the job done, does adding a second GPU help? If so, how much does it help?

To answer that question, we had to find scenarios where two of today’s top GPUs would struggle to produce smooth animation—and we had to find them within the limits of our FCAT setup, which captures frames from a single monitor at up to 2560×1440 at 60Hz. (In theory, one could use the colored FCAT overlay with a multi-display config, but that complicates things quite a bit.) Fortunately, via a little creative tinkering with image quality settings, we were able to tune up five of the latest, most graphically advanced games to push the limits of these cards. All we need to do now is step through the results from each game and ask our very practical questions about the impact of adding a second GPU to the mix.

Test notes

Testing with FCAT at 2560×1440 and 60Hz requires capturing uncompressed video at a constant rate of 422 MB/s. Your storage array can’t miss a beat, or you’ll get dropped frames and invalidate the test run. As before, our solution to this problem was our RAID 0 array of four Corsair Neutron 256GB SSDs, which holds nearly a terabyte of data and writes at nearly a gigabyte per second. This array is held together with my patented hillbilly rigging:

Hey, it works.

Trouble is, the SSD array offers less than a terabyte of storage, and that just won’t do. A single 60-second test session produces a 25GB video. For this article, we planned to test six different configs in five games, with three test sessions per card and game. When the reality of the storage requirements began to dawn on us, we reached out to Western Digital, who kindly agreed to provide the class of storage we needed in the form of two WD Black 7,200-RPM 4TB hard drives.
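
If you’re wondering why the SSD array wasn’t going to cut it, the storage math is simple enough. This is just our own tally of the figures mentioned above.

```python
# Tallying the storage requirement from the numbers above: a 422 MB/s capture
# rate, ~60-second sessions, six configs, five games, three sessions per config.

capture_rate_mb_s = 422
session_seconds = 60
configs, games, runs = 6, 5, 3

per_session_gb = capture_rate_mb_s * session_seconds / 1024   # ~25 GB per session
total_tb = per_session_gb * configs * games * runs / 1024     # ~2.2 TB for the suite

print(f"One session: ~{per_session_gb:.0f} GB")
print(f"Whole test suite: ~{total_tb:.1f} TB of captured video")
```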

The Black is the fastest 4TB drive on the market, and thank goodness it exists. We put two of them into a RAID 1 array for additional data integrity, and we were able to store all of our video in one place. We’re already contemplating a RAID 10 array with four of these drives in order to improve transfer speeds and total capacity.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

Processor          Core i7-3820
Motherboard        Gigabyte X79-UD3
Chipset            Intel X79 Express
Memory size        16GB (4 DIMMs)
Memory type        Corsair Vengeance CMZ16GX3M4X1600C9 DDR3 SDRAM at 1600MHz
Memory timings     9-9-9-24 1T
Chipset drivers    INF update 9.2.3.1023
                   Rapid Storage Technology Enterprise 3.5.1.1009
Audio              Integrated X79/ALC898 with Realtek 6.0.1.6662 drivers
Hard drive         OCZ Deneva 2 240GB SATA
Power supply       Corsair AX850
OS                 Windows 7 Service Pack 1

                      Driver revision        GPU base     GPU boost    Memory       Memory
                                             core clock   clock        clock        size
                                             (MHz)        (MHz)        (MHz)        (MB)
GeForce GTX 680       GeForce 314.22 beta    1006         1059         1502         2048
GeForce GTX 690       GeForce 314.22 beta    915          1020         1502         2 x 2048
GeForce GTX Titan     GeForce 314.22 beta    837          876          1502         6144
Radeon HD 7970 GHz    Catalyst 13.5 beta 2   1000         1050         1500         3072
Radeon HD 7990        Catalyst 13.5 beta 2   950          1000         1500         2 x 3072

Thanks to Intel, Corsair, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

In addition to the games, we used Fraps and Nvidia’s FCAT frame-capture tools to gather our performance data.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Crysis 3
Crysis 3 easily stresses these video cards at its highest image quality settings with 4X MSAA. You can see a video of our test run and the exact settings we used below.

Here’s where things become a little bit (more?) complicated. You’ll see in the graphs below that I’ve included results from two different tools, Fraps and FCAT. As I noted last time out, both tools are essential to understanding animation smoothness, particularly in multi-GPU configs where the possibility of microstuttering looms in the background.

Fraps does its sampling early in the frame production pipeline, so its timing info should correspond pretty closely to what the game engine sees—and the game engine determines the content of the frames by advancing its physical simulation of the game world. An interruption in Fraps frame times, if it’s large enough, will yield a perceptible interruption in animation, even if buffering and frame metering later in the pipeline smooths out the delivery of frames to the display. We saw an example of this phenomenon in our last article, and we’ve seen others in our own testing.

Meanwhile, FCAT tells you exactly when a frame arrives at the display. An interruption there, if it’s large enough to be perceptible, will obviously cause a problem, as well. Because it monitors the very end of the frame production pipeline, FCAT may show us problems that software-based tools don’t see.

One complication we ran into is that, in most newer games, Fraps and the FCAT overlay will not play well together. You can’t use them both at the same time. That means we can’t provide you with neat, perfectly aligned data from the same set of test runs per card, captured with both Fraps and FCAT. What we’ve done instead is conduct three test runs with FCAT and three with Fraps. We’ve then mashed the two together. The plots you’ll see come from the second of the three test runs for each card and test type. Since the Fraps and FCAT plots come from different test sessions, conducted manually, the data in them won’t align perfectly. Both tools should be producing results that are correct for what they measure, but they are measuring different test runs and at different points in the pipeline.

Since we’re looking at the question of microstuttering, I’ve included zoomed-in snippets of our frame time plots, so that we can look at those jitter patterns up close. Note that you can switch between all three plots for each GPU by pressing one of the buttons below.

If you click through the different GPUs, several trends become apparent. Even though they’re from different test runs, the Fraps and FCAT data for the single-GPU products tend to be quite similar. As we’ve seen before, there’s a little more variability in the Fraps frame times, but the timing gets smoothed out by buffering by the time FCAT does its thing.

Frame time (ms)   FPS rate
8.3               120
16.7              60
20                50
25                40
33.3              30
50                20

For the multi-GPU cards like the GTX 690 and 7990, though, the Fraps and FCAT results diverge. That familiar microstuttering jitter is apparent in the Fraps results from the 7990, and the swing from frame to frame grows even larger by the time those frames reach the display. This is the sort of situation we’d hoped to avoid. The “long” frame times on the 7990 reach 70 ms and beyond—the equivalent of 14 FPS—while the shorter frame times in FCAT are well under 10 ms. When you’re waiting nearly 70 milliseconds for every other frame to come along, you’re not talking about creamy smooth animation.

The GTX 690 isn’t immune to microstuttering issues, either. A pronounced jitter shows up in its Fraps results, even though it’s smoothed out by SLI frame metering before the frames reach the display. The impact of frame metering here is pretty remarkable, but it’s not a cure-all. Frames are still being generated according to the game engine’s timing. As I understand it, the vast majority of game engines tend to sample the time from a Windows timer and use that to decide how to advance their simulations. As a result, the content of frames generated unevenly would advance unevenly, even if the frames hit the display at more consistent intervals. The effect would likely be subtle, since the stakes here are tens of milliseconds, but it’s still less than ideal.
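
A quick sketch may help illustrate that point. The timestamps below are invented, but they show how the simulation steps baked into each frame follow the uneven generation times even when delivery to the display is perfectly metered.

```python
# Toy illustration: uneven frame *generation* yields uneven animation content,
# even if frame *delivery* to the display is evened out by metering.
# All timestamps are invented.

generation_times_ms = [0, 8, 40, 48, 80, 88, 120]     # jittery 8 ms / 32 ms pattern
metered_delivery_ms = [0, 20, 40, 60, 80, 100, 120]   # what the display sees

# Most engines advance the simulation by the wall-clock time elapsed at generation...
sim_steps = [b - a for a, b in zip(generation_times_ms, generation_times_ms[1:])]
# ...so the motion encoded in successive frames takes uneven steps, no matter
# how evenly those frames eventually hit the screen.
delivery_gaps = [b - a for a, b in zip(metered_delivery_ms, metered_delivery_ms[1:])]

print("Simulation steps per frame (ms):", sim_steps)       # [8, 32, 8, 32, 8, 32]
print("Display delivery intervals (ms):", delivery_gaps)   # [20, 20, 20, 20, 20, 20]
```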

The results for both of the multi-GPU cards illustrate an undesirable trait of the microstuttering problem. As frame times grow, so does the opportunity for frame-to-frame jitter. The 7990 seems to be taking especially full advantage of that opportunity. This is one reason why we wanted to test SLI and CrossFire in truly tough conditions. Looks like, when you need additional performance the most, multi-GPU configs may fail most spectacularly to deliver.

The 7990 wins the FPS beauty contest, confirming its copious raw GPU power, if nothing else. The 99th percentile frame time focuses instead on frame latencies and, because it denotes the threshold below which all but 1% of the frames were produced, offers a tougher assessment of animation smoothness. All of the cards are over the 50-ms mark here, so they’re all producing that last 1% of frames at under 20 FPS. (Told you this was a tough scenario.)
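
For reference, here’s roughly how that 99th percentile number is computed from a run’s worth of per-frame times. This is a simplified sketch with made-up data, not our actual analysis scripts.

```python
# Simplified sketch of the 99th percentile frame time: the frame time below
# which 99% of the frames in a test run were produced. Example data is invented.

def percentile_99(frame_times_ms):
    ordered = sorted(frame_times_ms)
    index = max(int(0.99 * len(ordered)) - 1, 0)   # last frame inside the 99%
    return ordered[index]

# Invented run: mostly ~16.7 ms frames with a few slow ones mixed in.
frame_times = [16.7] * 980 + [45.0] * 15 + [70.0] * 5
print(f"99th percentile frame time: {percentile_99(frame_times):.1f} ms")
# A 50 ms result would work out to 20 FPS (1000 / 50) for that slowest 1%.
```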

Also, the 99th percentile numbers for Fraps and FCAT tend to differ, which makes sense in light of the variations in the two sets of plots above. What do we make of them? My sense is that a good solution should avoid slowdowns at both points in the pipeline, at frame generation and delivery. If so, then true performance will be determined by the slower of the two sets of results for each GPU.

Picking out the correct points of focus in the graph above made my eyes cross, though. We can tweak the colors to highlight the lower-performance number for each card:

I think that’s helpful. By this standard, the single-GPU GeForce GTX Titan is the best performer. The GTX 690 and Radeon HD 7990 aren’t far behind, but they’re not nearly as far ahead of the GTX 680 and 7970 as the FPS numbers suggest.


Here’s a look at the entire latency curve. You can see how the 7990 FCAT curve starts out strangely low, thanks to the presence of lots of unusually short frame times in that jitter sequence, and then ramps up aggressively. By the last few percentage points, the 7990’s FCAT frame times catch up to the 7970’s. Although the 7990 is generally faster than the 7970, it’s not much better when dealing with the most difficult portions of the test run. The GTX 690’s Fraps curve suffers a similar fate compared to the Titan’s, but by any measure, the GTX 690 still performs quite a bit better than the GTX 680.


This last set of results gives us a look at “badness,” at those worst-case scenarios when rendering is slowest. What we’re doing is adding up any time spent working on frames that take longer than 50 ms to render—so a 70-ms frame would contribute 20 ms to the total, while a 53-ms frame would only add 3 ms. We’ve picked 50 ms as our primary threshold because it seems to be something close to a good perceptual marker. A steady stream of 50-ms frames would add up to a 20 FPS frame rate. Go slower than that, and animation smoothness will likely be compromised.
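
In code, the metric looks something like the sketch below; the example frame times are invented.

```python
# "Badness" metric: total time spent working on frames beyond a 50 ms threshold.
# Only the excess over the threshold counts, so a 70 ms frame adds 20 ms and a
# 53 ms frame adds 3 ms.

THRESHOLD_MS = 50

def time_spent_beyond(frame_times_ms, threshold=THRESHOLD_MS):
    return sum(t - threshold for t in frame_times_ms if t > threshold)

frame_times = [16.7, 70.0, 33.3, 53.0, 48.0]   # invented example
print(f"Time spent beyond {THRESHOLD_MS} ms: {time_spent_beyond(frame_times)} ms")  # 23.0 ms
```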

I’ve taken the liberty of coloring the slower of the two results for each card here, as well, to draw our focus. The outcomes are intriguing. The 7990 spends about 40% less time above the 50-ms threshold than its single-GPU sibling, the 7970. That’s a respectable improvement, in spite of everything. The gains from a single GTX 680 to the 690 are even more dramatic, in part because the single 680 performs so poorly. The Titan again comes out on top.

You’re probably wondering what all of these numbers really mean to animation smoothness. We’ve captured every frame of animation during our FCAT testing, and I’ve spent some time watching the videos from the different cards back to back, trying to decide what I think. I quickly learned that being precise in subjective comparisons like these is incredibly difficult. To give you a sense of things, I’ve included short snippets from several of the video cards below. These are FCAT recordings slowed down to half speed (30 FPS) and compressed for YouTube. They won’t give you any real sense of image quality, but they should demonstrate the fluidity of the animation. We’ll start with the Radeon HD 7970:

And the 7990:

And now the GTX 690:

Finally, the Titan:

Like I said, making clear distinctions can be difficult, both with these half-speed online videos and with the source files (or while playing). I do think we can conclude that the FPS results suggesting the multi-GPU solutions are twice as fast as the single-GPU equivalents appear to be misleading. Watching the videos, you’d never guess you were seeing a “17 FPS” solution versus a “30 FPS” one; the 7990 is an improvement over the 7970, but the difference is much subtler. The same is true for the GTX 690 versus the 680. I do think the Titan comes out looking the smoothest overall. In fact, I’m more confident than ever that our two primary metrics track well with human perception after this little exercise. The basic sorting that they have done—with the Titan in the lead, the multi-GPU offerings next, and their single-GPU counterparts last—fits with my impressions.

So, are the dual-GPU cards better than their single-GPU versions? Yes, in this scenario, I believe they are. How much better? Not nearly enough to justify paying over twice the price of a 7970 for a 7990.

Tomb Raider

Getting the new Tomb Raider to stress these high-end graphics cards was easy once we found the option to invoke 4X supersampled antialiasing. Supersampling looks great, but it essentially requires rendering every pixel on the screen four times, so it’s fairly GPU-intensive.

Flip over to the Radeon HD 7970’s plots to start, if you will. This is a classic example of a noteworthy phenomenon that affects even single-GPU configs. The 7970’s Fraps plot shows a troubling series of frame latency spikes throughout, many above 50 ms and a few even above 60 ms. Whenever one of those upward spikes occurs, a downward spike for the next frame follows. What Fraps is detecting here is back-pressure in the frame production pipeline. Due to delay somewhere later in the process, the submission queue for frames is filling up, and the game has to wait to submit the next frame. Hence the upward spike. Once the delay is resolved, the submission queue quickly drains, since work has continued on queued frames during the wait. The game is able to submit one or two more frames in quick succession before the queue fills back up. Hence the downward spike. FCAT doesn’t even notice these hiccups—its plot is much smoother—since a few frames are buffered at the end of the pipeline, and none of the delays are large enough to exhaust the buffer and interrupt frame delivery.
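
Here’s a deliberately crude toy model of that back-pressure dynamic: a game presenting frames into a small, bounded queue while the back end retires them, with one long stall thrown in. The queue depth and all the timings are invented, but the spike-then-dip shape of the resulting Present() intervals is the same one Fraps picks up.

```python
# Toy model of back-pressure: the game presents frames as fast as a bounded
# submission queue allows, while the back end normally retires a frame every
# 40 ms but hits one long stall. All timings are invented.

QUEUE_DEPTH = 3
# Times at which the back end finishes a frame; the stalled frame completes at
# 200 ms, after which the cheap frames already queued retire in quick succession.
retire_times_ms = [40, 80, 200, 210, 220, 260, 300, 340]

present_times = []     # when each Present() call returns (what Fraps samples)
queued = 0
now = 0.0
drains = iter(retire_times_ms)
next_drain = next(drains)

for _ in range(len(retire_times_ms) + QUEUE_DEPTH):
    if queued == QUEUE_DEPTH:                     # queue full: Present() blocks
        now = max(now, next_drain)
        queued -= 1
        next_drain = next(drains, float("inf"))
    present_times.append(now)
    queued += 1

intervals = [b - a for a, b in zip(present_times, present_times[1:])]
# Skip the first couple of intervals, which just reflect filling the queue.
print("Fraps-style frame intervals (ms):", intervals[2:])
# -> [40, 40, 120, 10, 10, 40, 40, 40]: an upward spike, then a quick dip.
```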

We’ve seen this pattern many times while testing, and it can be accompanied by perceptible jerkiness in game animation, as in our example from Skyrim. The interruption in Fraps alone is sufficient to affect the timing of the game engine, and thus the content of the frames, disrupting the fluidity of motion. The distinctive thing about the 7970 in Tomb Raider is that these spikes in Fraps do not translate into perceptible stuttering. The 7970’s output isn’t exactly a model of fluidity, with frame times averaging around 40 milliseconds (or 25 FPS), but there aren’t any perceptible pauses or hitches. In fact, the 7970 cranks out clearly smoother animation than the GTX 680.

We’ve seen a similar situation with Hitman: Absolution where, with the Catalyst 13.2 drivers, there were spikes in Fraps and small but noticeable stutters in our test scenario. AMD addressed the issue with the Cat 13.3 beta driver. The spikes in Fraps were reduced in size—from ~60 ms to ~40 ms—but not eliminated. The game then seemed to run perfectly smoothly.

I’ve had some conversations with David Nalasco, technical marketing guru at AMD, who has cited scenarios like this one in a presentation critical of Fraps-based benchmarking. He has a point. Some games appear to tolerate ~40ms and longer delays in Fraps without issue. However, others do not. The difference probably has to do with how each game advances its internal simulation timing. Other factors, such as the way the camera is moving through the game world and what’s taking place onscreen, may affect how much the player perceives any slowdowns. Trouble is, those issues with stuttering in Hitman with Catalyst 13.2 are very real and don’t show up at all in FCAT results. That’s why I keep asserting that we need to use both Fraps and FCAT to see what’s really happening. In fact, we need to use every tool at our disposal, including subjective evaluations of gameplay and videos, to understand what the numbers mean. We also need to see more instances like this one, so we can get a better handle on what factors contribute to whether a delay is perceptible.

The 7990’s situation here isn’t nearly so complicated, since both Fraps and FCAT agree that it’s plagued with some serious microstuttering issues. The plots show that, in FCAT, frame times frequently approach zero, indicating the presence of “runt” frames that likely occupy only a small portion of the screen.

By contrast, the GTX 690’s plot doesn’t look too bad at all, with a small amount of jitter in Fraps—on the order of about 10 ms—that’s removed before the frames reach the display. This plot looks more like what we’ve come to expect from frame-metered SLI setups; the pronounced jitter we saw with Fraps in Crysis 3 is unusual. My crackpot theory is that, in many cases, the small delays inserted by frame metering must exert pressure that makes its way back up the pipeline, keeping the Fraps samples from developing too much of a see-saw pattern. Something certainly seems to be keeping them in check here compared to the huge swings in the 7990’s Fraps plot.



The 7990 once again cranks out the top FPS average, but it puts in a relatively poor showing in our latency-sensitive performance metrics. In fact, if you look at the latency curve, you’ll see the 7990’s FCAT curve rising past the 7970’s at about the 95th percentile. That’s not a problem for the GTX 690, whose curves both indicate markedly lower frame latencies than the GTX 680.

I’m going to pull out some half-speed videos again, because I think these illustrate things more plainly than the videos from Crysis 3. We’ll start with the Radeon HD 7970:

And the 7990:

And now the GTX 690:

I see little difference in terms of fluidity between the 7970 and 7990. They both seem to deliver about the same experience. The GTX 690, though, is easily the best. You can particularly see the added fluidity in the motion of Lara’s arms and at the edges of the screen, where the terrain is moving by rapidly.

Back to our questions. Is the 7990 better than its single-GPU counterpart? Not here. What about the GTX 690? Yes, in fact, it’s probably the best overall solution.

Far Cry 3

The thing to note about the Far Cry 3 results is that the Fraps and FCAT numbers should align perfectly, since we were able to use both tools simultaneously with this game. Click through the results from the different GPUs, and you can see how they correspond, generally with a little more variance from Fraps than FCAT. The 7990 is something of an exception to this rule, since it has a lot going on, including latency spikes in Fraps that don’t show up in FCAT and multi-GPU jitter visible to both tools. By contrast, the GTX 690 has microstuttering almost completely under control.



The low FPS numbers alone tell us this scenario is tough going for the 7970 and GTX 680. None of the other metrics are kind to those cards, either. The two top solutions in the latency-focused measurements are the GTX 690 and the Titan. The GTX 690 looks to be faster generally, but it suffers from a much larger hiccup about 40% of the way into the test run, which pushes up its time spent beyond 50 ms. The Titan handles that same speed bump much better.

After that, well, it’s complicated. By the numbers, the 7990 appears to split with the 7970; one or the other comes out on top, depending. What you don’t see in the numbers, though, is the herky-jerky motion that the 7990 produces, apparently as the result of some kind of CrossFire issue. We noted this problem with dual 7970s in our Titan review months ago, and AMD hasn’t fixed it yet. We don’t need to slow down the video in order to illustrate this problem. Just have a look at the 7970 versus the 7990 below. We’ll start with the 7970:

And now the 7990:

And the GTX 690:

Whatever’s wrong with the 7990, the 7970 isn’t affected, and neither is the dual-GPU GTX 690.

Does the 7990’s second GPU give it an advantage over the 7970? Not in Far Cry 3, not with that awful motion. I’d take a single 7970 over that anytime. The GTX 690, on the other hand, comes out looking pretty solid, definitely better than the GTX 680 or the 7970.

Sleeping Dogs

The plots for most of the cards look nice and clean in this case, with only a handful of occasional frame time spikes as we drive through the cityscape and the game streams in new data. Many of those spikes appear only in the Fraps results and are evened out in FCAT. The 7990’s plot is something else entirely.


Although the 7990 produces the most total frames of any of the cards, and thus wins the FPS sweeps, it cranks out those frames unevenly. As a result, the 7990’s latency curves push above the single 7970’s in the last five percent or so of frames produced. Also, notice how the 7990’s FCAT latency curve starts out, running along the axis near zero. That indicates the presence of some “runt” frames that aren’t likely to have much visual impact.


Does the 7990 provide a better gaming experience in this scenario than the 7970? The numbers, again, are mixed.

Let’s look at some videos. First, the 7970:

The 7990:

And the GTX 690:

When I watch the source videos at 60 FPS, I see something similar to what we saw in Tomb Raider, where the 7970 and 7990 are hard to differentiate, but the GTX 690 is visibly smoother. I’m not so sure that sense comes through in these 30-FPS YouTube videos, though.

Going back again to our practical question, I think we can say the Radeon HD 7990 isn’t appreciably superior to the 7970 in our Sleeping Dogs test scene.

BioShock Infinite

You might not think of BioShock Infinite as a game that would put much stress on the latest graphics cards, and generally speaking, you’d be right. However, I played through almost the entire game on the Radeon HD 7990 at the settings below, and I encountered intermittent stutters while moving around Columbia much more often than one would hope. I figured this game could offer a nice test of a different sort of performance question: how well does adding a second GPU mitigate those occasional hiccups and hitches?

You can see the intermittent spikes on the 7990’s plot, from both Fraps and FCAT, into the 60-80 ms range. Flip over to the Radeon HD 7970 plot and… uh oh. The 7970 only spikes beyond 60 ms once in the entire test run. Overall, the hitches on the single-GPU card are much smaller.



The 7990 manages the highest FPS average and the lowest 99th percentile frame time, but all of the cards are under 35 ms for that latter figure, so they’re all plenty quick, generally. The trouble is those intermittent slowdowns, and there’s a distinct pecking order in the time spent beyond 50 ms: the Radeon HD 7970 is the best of the bunch, followed by the GTX 690, the Titan, and the GTX 680. The 7990 takes up the rear, with the worst case of stuttering of the group.

As usual, we’ll begin the videos with the 7970:

The 7990:

And the GTX 690:

Watch right after the player has exited through the gate. That’s where the game tends to stutter the most, and the problem is worst on the 7990.

I expect this stuttering issue could perhaps be fixed via a driver update. However, at present, the Radeon HD 7990 actually offers a worse experience in this game than the 7970. A single Tahiti GPU outperforms any of the GeForces, but adding a second GPU spoils the soup.

A ray of hope: a prototype driver with frame pacing capability

As you’ve seen, the 7990’s multi-GPU microstuttering issues can translate into very real performance problems. The folks at AMD are aware of this fact, and since our initial FCAT article, have been working on a software driver with a potential remedy. AMD calls this feature “frame pacing,” and the principle is the same as Nvidia’s frame metering: the driver attempts to even out the pace of frame delivery by inserting a small amount of delay when needed.
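
AMD hasn’t published the details of its implementation, so take the following as nothing more than an illustration of the general principle: hold back any frame that would otherwise be shown much sooner than the recent average pace. This sketch is not AMD’s or Nvidia’s actual algorithm, and the jittery “ready” times are invented.

```python
# Illustrative sketch of the frame-pacing/metering principle: delay any frame
# that would otherwise be displayed much sooner than the recent average pace.

def pace_frames(ready_times_ms, window=4):
    """Return display times for frames that become ready at ready_times_ms."""
    displayed = [ready_times_ms[0]]
    for ready in ready_times_ms[1:]:
        recent = displayed[-window:]
        if len(recent) > 1:
            target = (recent[-1] - recent[0]) / (len(recent) - 1)   # recent pace
        else:
            target = 0.0
        earliest = displayed[-1] + target       # don't show this one any sooner
        displayed.append(max(ready, earliest))  # hold the frame briefly if it's early
    return displayed

# Invented AFR-style jitter: frames ready in alternating 5 ms / 35 ms gaps.
ready = [0, 5, 40, 45, 80, 85, 120, 125, 160]
paced = pace_frames(ready)
print("Unpaced gaps (ms):", [b - a for a, b in zip(ready, ready[1:])])
print("Paced gaps (ms):  ", [round(b - a, 1) for a, b in zip(paced, paced[1:])])
# The paced intervals settle around 20-23 ms instead of see-sawing between 5 and 35.
```

Note that the held frames reach the screen a few milliseconds later than they otherwise would, which is where the small input-lag penalty AMD mentions later on comes from.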

Happily, AMD already has a very early driver, which it classifies as a “prototype,” that it shared with us for use in 7990 testing. I should emphasize that this is still a developmental piece of software, and we don’t yet have any firm timetable from AMD about when to expect a final—or even a beta—driver with this feature present. Also, this prototype driver is based on an older version of the Catalyst code, so it doesn’t incorporate the latest optimizations. We had to switch our test system’s OS to Windows 8 in order to test this prototype driver, since it wouldn’t install on Windows 7.

Does it work? Well, have a look.

The multi-GPU jitter pattern is reduced substantially with the frame-pacing prototype. There are still some spikes to around 40 ms in Fraps, but as we noted with the 7970, those aren’t anything to be concerned about. The FCAT results still show something of a jitter pattern, but it’s largely kept in check. As a result, peak frame times in FCAT are much lower overall.

The latency curve for the prototype driver tells the tale. Although the regular driver’s FCAT line hugs the lower axis for the first seven or eight percent of frames, the new driver virtually eliminates those runts. Better yet, the frame pacing driver achieves lower latencies for roughly 50% of the frames rendered. The 7990’s 99th percentile frame time, as measured by FCAT, drops a full 15 ms, from 49.7 ms to 34.7 ms with the prototype.

I think the difference is fairly easy to perceive on video. The 7990 with the regular Catalyst driver:

And with the frame-pacing prototype:

Much better. Let’s see what this driver does for Crysis 3.

Microstuttering again is vastly reduced. There’s still some jitter present, especially in the Fraps results, as we saw with the GTX 690. However, with this driver, the 7990 looks to have less jitter than the GTX 690 does. Both the frame time plots and the latency curve for the 7990 with the prototype look similar to a single card result—only with quite a bit lower latencies than the 7970.

The 7990 with Catalyst 13.5 beta 2:

And with the frame-pacing prototype:

Again, I think we have a visible improvement in the fluidity of the in-game animation with the prototype.

On to Sleeping Dogs. The 7990:

The prototype driver:

The frame-pacing driver isn’t as much of an unequivocal win in our Sleeping Dogs test. The latency curve from FCAT has improved, but not the one from Fraps. I’m not seeing as much difference between the two videos, either.

Unfortunately, frame pacing does not appear to resolve the 7990’s issues with uneven animation in Far Cry 3 or the stuttering problems in BioShock Infinite. Still, these are early days for this still-in-the-works feature, and already, frame pacing appears to be effective in improving the 7990’s behavior in several games. Kudos to AMD’s driver guys for making this happen in software in such a short time window.

I would caution anyone looking to plunk down a grand for this video card not to consider the CrossFire microstuttering a solved problem on the basis of this handful of tests, though. The prototype driver we tested isn’t available to the public and likely won’t be in its current form. AMD still has work to do, and it has evidently been content to sell CrossFire-based solutions to its customers for years without addressing this problem until now.

One more thing. I understand AMD plans on making frame pacing an option in future Catalyst drivers but leaving it disabled by default. The rationale has to do with the fact that frame pacing injects a small amount of lag into the overall chain from user input to visual response. AMD says it wants to avoid adding even a few milliseconds to overall input lag. That seems to me like something of a pose, a way of avoiding the admission that Nvidia’s frame metering technology is preferable to AMD’s original approach. You’ve seen the results in the videos above. I think the vast majority of consumers would prefer to have frame pacing enabled, granting them perceptibly smoother animation. Disabling this feature should be an option reserved for extreme twitch gamers whose reflexes are borderline superhuman. Here’s hoping AMD does the right thing with this option when the time comes to release a public driver with frame pacing.

Power consumption

Oh, right. I also tested power, noise, and temperatures. The 7990’s numbers are new, but the rest of the results have been shamelessly poached from my GTX Titan review. They should suffice for a quick comparison. Have a look.

Noise levels and GPU temperatures

The 7990 comes out looking spiffy across all of these tests. Don’t get too hung up on its slightly higher noise levels at idle and with the display off. The card turns its fans completely off and makes zero noise when the display goes into power-save mode. What you’re seeing there is just a little fluctuation in the noise floor in Damage Labs.

Beyond that, everything about the 7990’s power consumption and acoustics is exemplary for a high-end card, especially the part where it registers lower on the decibel meter under load than the GeForce GTX 690. The 7990 is dissipating an additional 50+ watts of power versus the GTX 690 and is still quieter. This is huge progress for AMD, and it’s only fitting for a thousand-dollar graphics card to have a cooling solution this effective.

Conclusions

AMD has built a mighty fine piece of hardware in the Radeon HD 7990. Consistently high FPS averages attest to its potential as the single most powerful graphics card in the world. At the same time, the 7990 is exceptionally quiet under load. Throw in the fact that it ships with a ridiculous bundle packed with some of the most notable games of the past year, and it’s easy to see how the 7990 could grab the crown in the $1K graphics card market.

However, we’ve deployed some advanced tools and metrics to answer some very practical questions about the benefits of the 7990’s second GPU, and the answers haven’t turned out like one would hope. The card just doesn’t hold up well under the weight of really tough scenarios where smooth gameplay is threatened. The 7990 does perform a little bit better than its single-GPU counterpart, the Radeon HD 7970, in our Crysis 3 test, but not by a broad margin. The 7990 doesn’t offer an appreciable benefit over the 7970 in our Tomb Raider and Sleeping Dogs test scenarios. In each of these cases, the 7990’s FPS averages scale to nearly twice the 7970’s, but the uneven frame delivery caused by multi-GPU microstuttering blunts the impact of those additional frames. Worse still, the 7990 runs into some apparent CrossFire compatibility snafus in Far Cry 3 and BioShock Infinite, both AAA titles that AMD has co-marketed and bundled with the card itself. Yikes. In those two games, you’re literally better off playing with a Radeon HD 7970.

Sure, we’re only talking about five games, tested under specific conditions. But we created these conditions in order to answer a pressing question about the impact of multi-GPU microstuttering. We’ve had the tools to detect its presence for a little while, but does microstuttering really have a negative impact on gameplay? The answer appears to be yes. Also, microstuttering tends to grow worse as frame rates drop, calling into question the true value of multi-GPU schemes like CrossFire and SLI.

Nvidia has mitigated the effects of multi-GPU jitter via its frame metering capability, and that feature appears to work reasonably well most of the time. The GeForce GTX 690 is tangibly superior to the single-GPU GeForce GTX 680 in each of our test scenes, although the difference is pretty minor in Crysis 3. That case is a reminder that frame metering and pacing schemes aren’t perfect. The 690 has near-pristine frame delivery in Crysis 3, but the smoothness of the animation is compromised by a see-saw pattern of frame dispatch, as we measured with Fraps. Fortunately, such early-in-the-pipeline jitter is usually confined to small spans of time—just a handful of milliseconds—on the GTX 690 and similar frame-metered SLI solutions.

AMD is now following suit with the development of a frame-pacing feature in its drivers, and the early returns look promising. If the firm can follow through and deliver a production driver that includes this capability, the 7990 has the potential to become a more appealing product. The thing is, that driver isn’t here yet, and we don’t know when it’s coming. Radeon HD 7990 cards are due to hit store shelves a little more than a week from now. My advice to would-be buyers is to hold out until a final driver with frame pacing has been released, tested thoroughly, and found to be effective. In its current form, there’s no way the 7990 is deserving of its $1K price tag.


Maybe I’ll post this whole review on Twitter, bit by bit.

Comments closed
    • moose17145
    • 6 years ago

    Like a few others have mentioned, I think it would be interesting (and potentially more meaningful) to see a few tests done on these cards with games like Skyrim, except WITH v-sync enabled. As was mentioned… some of these games do not allow you to turn off v-sync in their options without editing files because it messes with the way the game simulation advances forward. Granted many people will claim that this might be pointless because you will likely see just a big flat line of frames being delivered with no variation… but isn’t that the whole point? Isn’t that what you WANT to see? Just because v-sync in enabled doesn’t mean that the game can’t still have little hiccups and stutters… in fact I have noticed in many games that whether v-sync is enabled or not often times makes very little difference in whether v-sync is on or off when it comes to little stutters or hiccups. Naturally some games support turning v-sync off in their options… and for those games you likely want to turn off v-sync for testing since the in game timing mechanisms likely aren’t dependent upon v-sync being enabled at all times. But for games like Skyrim that need v-sync to work properly… even if all we see is a perfectly smooth line of frame delivery… that to me just tells me that everything is working as it should be on that particular card. Which is just as valuable of information as knowing there is something wrong IMO.

    • TO11MTM
    • 6 years ago

    If I’m paying 1000$ for an AMD Card, I’d like all 6 of my monitor outputs, Please.

      • clone
      • 6 years ago

      as opposed to an Nvidia gfx card which would only require 2?… 3?

      • Krogoth
      • 6 years ago

      Due to the nature of SLI/CF this isn’t possible.

        • clone
        • 6 years ago

        I don’t believe that, not for a second.

        they could add an adapter to expand the display ports for up to 6… 12, 18, 24+ if need, limited only by the resolutions asked and the amount of onboard ram.

        that said the demand for 6 connections as the excuse not to buy the product is indeed weak but that doesn’t mean it can’t be done, it most certainly can.

          • Krogoth
          • 6 years ago

          You are not understanding the problem. SLI/CF require the use of TDMS/RAMDAC for each GPU in the chain in order to work correctly. This effectively reduces the ability bandwidth for each TDMS/RAMDAC in the chain. This places a damper on how many displays you can output on the SLI/CF chain. SLI/CF on-stick solutions are no exception.

            • clone
            • 6 years ago

            ok, from the image I just viewed HD 7990 comes with 5 native ports for connections, 1 DVI and 4 display ports.

            TDMS from what I just read is a conversion format but from what I was lead to believe display ports allow for adapters for additional displays so just by equipping them the option is there.

            as for RAMDAC while it may or may not be an issue all of this can be overcome via the gfx driver given the digital to analog converter isn’t required anyway.

            p.s. AMD claims the card can support 6 displays via the display ports.

    • moose17145
    • 6 years ago

    What I took away from this review. 7970 is the best top of the line card to get right now (save the GTX titan if you are looking in the thousand dollar bracket). NVidia’s SLI profiles are clearly more up to date than AMD’s CrossFire profiles, and they have some extra features to help with the issues involved with dual GPU setups that are a bit ahead of what AMD has (for the moment). But as many have mentioned… if a game comes out and there is not a SLI profile for it, or NVidia is late in being able to get one out… then you are just flat out better off with only a single GPU. If you are searching in $1k territory… I’d say pick up the titan, and have almost the performance of dual 680’s in SLI with none of the SLI downfalls. But if you are looking into more reasonable 400- 500 price ranges (still expensive, but otherwise top of the line market), I would say the 7970 is the way to go over the 680 based upon what I read in this review.

    I am not trying to sound fanboyish… but honestly I have been extremely impressed with AMD with the 7k series as a whole. My reasons for this are that their physical hardware on the the cards themselves is definitely a step up from what NVidia has for most of their 6xx series (although the 6 series are more efficient at using available resources typically, but this also comes at a cost of them having to be more heavily reliant upon up to date game profiles for each game in their drivers), but that also means that the 7k cards have more room to “grow” than the NVidia cards as AMD gets their drivers more and more efficient and learn to to eek out ever more performance from the GCN architecture. And get more efficient and eek out more performance they certainly have done! We have seen fairly steady and consistent performance improvements as new drivers are rolled out. 2% here, 5% there, another 3% here… it adds up fast. The 7k series today is hardly the same 7k series that was initially released just thanks to driver improvements. When the frame latency issue was brought to AMD’s attention, they seemed to be fairly responsive in addressing the issue, and seem to be continuing the effort on that front, as well as working to improve the issue in CrossFire. The 7k series most definitely had some growing pains… no doubt about that… but I would say that the cards they have out with the current drivers are easily very good competitors for anything NVidia has on the market, again, save for the Titan… that thing is just in a class of its own at this point… although even then, in a few games we see the 7970 biting at its heals, and in the case of BioShock, actually BEATING it… that one came as one heck of a shocker to me! If you REALLY want dual GPUs then probably go with SLI, but again… from what I took away is what most of us already know… It’s typically better to just stick with a single GPU and not have to worry about it.

    Normally if this were the ATI of old… I wouldn’t have found all of this THAT impressive… but this isn’t ATI from old… this is current AMD that has had, and still is struggling through some serious hardships. I cannot imagine that it’s very easy for AMD to be producing products like this, trying very hard to earn back consumers minds, while restructuring the company trying to make it profitable again while being massively in debt and having to lay off thousands of people at a time. The fact that AMD’s engineers have produced something of this quality I truly find stunning and amazing given the limited resources that I am sure they probably have available to them compared to their competitors (predominantly Intel and NVidia). Again… not a fanboy… just calling it as I see it… AMD has been fighting hard to make a competitive product while going through some hard times and having to make some hard decisions. If nothing else… bravo to the engineers who have pulled it off and continue to work hard to improve upon what they have made (driver updates)! The little chip maker who could indeed!

    Edit: And I am currently running a GeForce GTX 285 in my main rig… so I am not really for or against either company… Just saying… I am impressed with how much AMD can achieve with so little, relatively speaking. That and they are the under dog… who doesn’t like cheering for the under dog?!? Other than Yankee fans that is…

    • Bensam123
    • 6 years ago

    I didn’t add this to my giant page long rant partly because I forgot and because it deserves it’s own little place. Do you guys intend on testing 120/144hz configurations? Is it even possible? I don’t know of many capture cards that go over 60hz. It would still be relevant for frame times.

    I think that may introduce some very meaningful data as pretty much everything has been locked at 60hz for the last decade or so by majority rule.

    • Laykun
    • 6 years ago

    SLi and Crossfire need to be completely revamped in my opinion. The added latency from being 1 or 2 frames behind isn’t really worth the extra cost. I’m no expert in parallel computing but I’ve written my fair share of multi-threaded applications. It seems to me that they need a way of dividing up the work load in hardware to get two GPUs to work on a single frame as opposed to alternate frame rendering.

    Inside each GPU you have clusters of work units that are controlled by a thread scheduler/delegation unit, it seems to me that there needs to be a higher level scheduler external from the GPU that can delegate work to different GPUs as needed. Perhaps this means an external synchronization card to the GPUs that acts as a single GPU with a shared framebuffer with high bandwidth interconnects. I feel however that this solution gets more expensive and may be why AMD and nvidia have decided one a compromise. But their current compromise is kinda of shit.

      • clone
      • 6 years ago

      pls correct if I’m wrong but AFR is the most primitive of the techniques used in dual gpu rendering, inside the drivers of both AMD and Nvidia product is a load balancing component that attempts to share the rendering of each frame between the gpu’s by equally sharing the load on a per frame basis depending on detail levels of each frame by splitting it in half as measured by the workload.

      if AFR was the only game in town the latencies between each frame would penalize to such a degree that a 2nd GPU wouldn’t offer anything of value as it was forced to dump frames and work on new ones while waiting for the first frame to be released…. late.

        • MathMan
        • 6 years ago

        I’m pretty sure you’re wrong: AFR is the only load sharing technique used at the moment. You used to have SFR (for Nvidia) and checkerboard (for AMD) load sharing, but AFAIK those have been dropped long time ago.

        This is not surprising: when you use SFR/Checkerboard, the demands on the inter-GPU data bus is much higher than for AFR, because you need to do intra-frame sharing of intermediate rendered textures. You also have to deal with intra-frame dependencies. This makes scale much harder than for AFR, where you only need to deal with inter-frame dependencies (which is usually only present for reflections etc.)

        As long as the bandwidth between 2 GPUs is an order of magnitude lower than the bandwidth from a GPU to its local memory, don’t expect this to change.

        That said: when all is going well, your statement about the 2nd GPU being useless in AFR is not necessarily true. There are many games for which 1 frame of additional latency isn’t such a big deal, and were the increased frame rate is helpful.

      • MathMan
      • 6 years ago

      > Perhaps this means an external synchronization card to the GPUs that acts as a single GPU with a shared framebuffer with high bandwidth interconnects.

      This is exactly the problem. SLI was great in the nineties on Voodoo because they hadn’t invented stuff like render-to-texture yet. As a result, there were no intra-frame dependencies and each GPU didn’t need to worry about what the other was doing.

      That is unthinkable in current games. I think it’s a very hard problem to solve.

    • DrDillyBar
    • 6 years ago

    Now I have to go research the Workstation card, if it exists yet.
    Well done Damage, as always.

      • JustAnEngineer
      • 6 years ago

      Didn’t we already see a dual-chip server card?
      [url<]http://www.amd.com/us/products/workstation/graphics/firepro-remote-graphics/S10000/Pages/S10000.aspx[/url<]

    • Aistic
    • 6 years ago

    I seem to remember reviews stating that microstutter was (in some games at least) subjectively reduced with TriFire vs. two-way CrossFire. I don’t remember an Inside the Second article featuring TriFire.
    If you could find time for such niche topics, it would be… relevant to my interests… very relevant, indeed…

    Also, yay for the magic-sauce driver.

    • DPete27
    • 6 years ago

    Scott,
    Did you [url=http://www.tomshardware.com/reviews/radeon-hd-7990-review-benchmark,3486-15.html<]notice any [b<]coil noise[/b<][/url<] during your 7990 review? Tom's seemed to feel it was ridiculously annoying.

      • Damage
      • 6 years ago

      Only briefly, and not during the time when I did my acoustic testing. It does seem to happen some, but it wasn’t terribly loud on the card I tested.

        • derFunkenstein
        • 6 years ago

        That it exists at all is good to know. Not that I was going to drop $1k on this card, but if I can hear coil whine, the part in question goes back. That kind of high-pitched stuff doesn’t have to register high on the sound meter to be incredibly annoying.

    • albundy
    • 6 years ago

    Fantastic review of the 7990! And a great summing up of what it’s worth: “In its current form, there’s no way the 7990 is deserving of its $1K price tag.”
    True dat! They seem to be bundling in $500 worth of games to shadow its real value.

    “AMD says a quad CrossFire config would be ideal for driving 4K display resolutions.”

    I can see AMD taking many, many, many years to adopt this format if they believe it requires $4000 worth of GPUs to run it. Hard to justify when you can get a 4K TV for $1200.

    “We recently started using some new GPU testing tools from Nvidia that measure precisely when frames are being delivered to the display, and in the process, we found that Radeon-based multi-GPU solutions have some troubling problems.”

    Using Nvidia software to test AMD hardware might not fare well with the fanboys.

    • spiked_mistborn
    • 6 years ago

    I love the scientific approach that you’re using when doing your GPU reviews since I tend to be more of a “show me the proof” kind of person, so thank you. I would also like to see science applied to testing input latencies of the various cards, and maybe some VSYNC=ON results.

      • MrJP
      • 6 years ago

      I do wonder whether Vsync (or better yet the adaptive Vsync in RadeonPro) wouldn’t give much of the effect of the frame metering drivers. If the card is prevented from producing frames faster than the refresh rate, then any stutter would largely disappear, right?
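
      Roughly, a refresh-capped loop does something like this toy sketch (assuming a fixed 60Hz display; an illustration only, not how RadeonPro or either vendor’s driver actually implements it):

      [code<]
      import time

      REFRESH = 1.0 / 60.0  # assume a fixed 60Hz display

      def present_paced(render_frame, frames=10):
          # Hold each finished frame until the next refresh boundary, so fast
          # frames can't bunch up and then be followed by a long gap.
          next_vblank = time.perf_counter()
          for _ in range(frames):
              render_frame()                      # takes however long it takes
              next_vblank += REFRESH
              delay = next_vblank - time.perf_counter()
              if delay > 0:
                  time.sleep(delay)

      present_paced(lambda: None)  # stand-in "renderer" just to run the loop
      [/code<]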

        • Voldenuit
        • 6 years ago

        I believe pcper did some vsync testing recently and found that enforcing vsync was disastrous on AMD CrossFire setups and somewhat better behaved on SLI (especially with adaptive vsync). I believe the take-home message was that ‘vsync off’ gave more fluid results on multi-GPU setups in general, even with frame tearing.

        EDIT: [url=http://www.pcper.com/reviews/Graphics-Cards/Frame-Rating-Visual-Effects-Vsync-Gaming-Animation<]Found it[/url<]. Looks like my memory was a bit faulty; they only looked at AMD cards (so no Nvidia results), but found issues with stuttering with vsync enabled.

    • LukeCWM
    • 6 years ago

    Scott,

    What an excellent review. I really enjoyed it. You were completely thorough and comprehensive and unbiased.

    And I just know all the other tech sites are going to be giving this card a glowing review because they haven’t evolved their testing to the point that you folks here at TR have. Sometimes I read the other sites, but I increasingly find myself disregarding their conclusions on gaming performance since TR appears to be the only place we can go for reviews that show the whole story.

    P.S. I’m eagerly awaiting the creative, cleverly-worded troll in the user comments about how TR can only say good things about nVidia. I honestly find it the best part of any review even remotely involving AMD. =]

    • Disco
    • 6 years ago

    I’m still very happy with the 7970 I bought in the fall, which came with Hitman, Sleeping Dogs, and Far Cry 3, on sale for under $400. It still looks like the card to get.

    • flip-mode
    • 6 years ago

    I feel like I’m reading something wrong. These halo cards were tested in BioShock Infinite at 2560×1440? Seems pretty low. Last night I was playing BioShock Infinite at 4800×1200 with all settings maxed on a GTX 660 – playing that same Comstock House level (that mother ghost was tough to kill) – and while it wasn’t butter smooth, it was certainly playable.

    I guess I’m surprised all games weren’t tested in some kind of Eyefinity configuration.

      • Damage
      • 6 years ago

      I feel like you’re reading something wrong, too. By not reading it, most likely.

        • flip-mode
        • 6 years ago

        WTF, that’s so helpful. The configuration screenshot says 2560×1440.

        [url<]https://techreport.com/r.x/radeon-hd-7990/bsi-settings1.jpg[/url<] So I /am/ reading that.

          • Damage
          • 6 years ago

          There are also words on that page. Try them!

            • flip-mode
            • 6 years ago

            Ah, perhaps you’re suggesting I read page 2 instead of page 7. On page 2 you explain the resolution limitation. Thanks for so politely pointing that out!

            Edit: too bad FCAT forces you to test at such low resolutions. The higher Eyefinity resolutions are kinda what these halo cards are all about, and now you’re precluded from testing them that way…

            • Damage
            • 6 years ago

            Actually, page 7 is quite nice, as well. Not trying to be flippant, but you seriously could not have read even the first few sentences.

            • flip-mode
            • 6 years ago

            I read page 7. Perhaps you implied something there and I’m not picking up on it. Anyway, never mind, you tested the way you tested. Page 2 is as important as ever. I’ll deal with it.

            Edit: here’s the first paragraph:
            [quote<]You might not think of BioShock Infinite as a game that would put much stress on the latest graphics cards, and generally speaking, you'd be right. However, I played through almost the entire game on the Radeon HD 7990 at the settings below, and I encountered intermittent stutters while moving around Columbia much more often than one would hope. I figured this game could offer a nice test of a different sort of performance question: how well does adding a second GPU mitigate those occasional hiccups and hitches?[/quote<] Again, I can make some inferences from that paragraph, but it's not like you made any explicit statements. I hate to be pedantic about it but I guess I feel like you went straight to bitch slapping me when maybe the fact that I'm missing something isn't completely unreasonable.

            • Ryu Connor
            • 6 years ago

            [quote<][...] I encountered intermittent stutters while moving around Columbia much more often than one would hope.[/quote<] Those intermittent stutters are a bug with the engine. [url<]http://forums.2kgames.com/showthread.php?222666-Possible-solutions-for-known-issues[/url<] The engine doesn't properly identify or manage texture memory. Even cards with large amounts of VRAM (4GB models) will suffer texture stuttering. According to Chris Kline (Tech Director @Irrational), FRAPS can make the stuttering situation worse.

            • clone
            • 6 years ago

            Interesting… does Damage know about this?

            It should have been mentioned in the review.

            • NeelyCam
            • 6 years ago

            [quote<]I hate to be pedantic about it but I guess I feel like you went straight to bitch slapping me when maybe the fact that I'm missing something isn't completely unreasonable.[/quote<] Not the first time this has happened.

      • Meadows
      • 6 years ago

      In the review Damage mentioned that a later batch of tests using 4K resolutions is in the pipeline.

        • Damage
        • 6 years ago

        Nah, I just mentioned the possibility of trying out 4K. I’m not aware of a video capture card capable of handling 4K resolutions at 60Hz, so FCAT testing will not happen any time soon. Never meant to suggest it would, and there’s certainly nothing in any “pipeline.”
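
        For anyone wondering why, the raw capture-bandwidth arithmetic (assuming uncompressed 24-bit RGB) looks roughly like this:

        [code<]
        # Back-of-the-envelope capture bandwidth; uncompressed 24-bit RGB assumed.
        def capture_gbps(width, height, hz, bpp=24):
            return width * height * hz * bpp / 1e9

        print(capture_gbps(2560, 1440, 60))  # ~5.3 Gb/s  (the resolution used in this review)
        print(capture_gbps(3840, 2160, 60))  # ~11.9 Gb/s (4K at 60Hz -- more than double)
        [/code<]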

        Again, we saw real performance problems in each game we tested at the settings we used, as the article explains.

          • Farting Bob
          • 6 years ago

          Any chance you can give us a Fraps-only review for 7990, 690 and titan at stupidly big resolutions? 3 monitors, maybe even 6!

          Actually, it might be cool to do in the summer once the new drivers AMD sent you (as well as the usual improvements that NV make in that time) are more mature.

            • Damage
            • 6 years ago

            Yeah, that’d be fun, once the time comes.

          • Meadows
          • 6 years ago

          Oh, I see. I didn’t realise recording those resolutions would require that much muscle.

            • Damage
            • 6 years ago

            We can theoretically capture only the left-most display of a triple-monitor setup at up to 2560×1440 and get meaningful results from FCAT, but I understand that presents issues with the overlay. It’s a project for another day.

      • Bensam123
      • 6 years ago

      In this thread, Flip-mode brags about his ultra high definition monitor and then makes a jab at TR for not using similar settings.

        • flip-mode
        • 6 years ago

        Ben, you’re an ass. Too bad comments don’t allow drawing pictures so maybe you’d have a chance at comprehension. Let me do my best to explain: my question asked why the testing resolution for an extremely high-end configuration was lower than what was possible in the same game on my own vastly inferior system. It’s got nothing to do with me bragging, but that appears to be the lens through which you view everyone, so you have at it, troll boss.

          • Bensam123
          • 6 years ago

          [code<] ( \\ // ) \ \\ // / \_\\||||//_/ \/ _ _ \ \/|(O)(O)| \/ | | ___________________\/ \ / // // |____| // || / \ //| \| \ 0 0 / // \ ) V / \____/ // \ / ( / "" \ /_________| |_/ / /\ / | || / / / / \ || | | | | | || | | | | | || |_| |_| |_|| \_\ \_\ \_\\[/code<] That's what I got out of it and the first set of replies between you and damage.

            • flip-mode
            • 6 years ago

            . . . . . . . . . . . . . . . . . . . ________
            . . . . . .. . . . . . . . . . . ,.-‘”. . . . . . . . . .“~.,
            . . . . . . . .. . . . . .,.-”. . . . . . . . . . . . . . . . . .“-.,
            . . . . .. . . . . . ..,/. . . . . . . . . . . . . . . . . . . . . . . ”:,
            . . . . . . . .. .,?. . . . . . . . . . . . . . . . . . . . . . . . . . .\,
            . . . . . . . . . /. . . . . . . . . . . . . . . . . . . . . . . . . . . . ,}
            . . . . . . . . ./. . . . . . . . . . . . . . . . . . . . . . . . . . ,:`^`.}
            . . . . . . . ./. . . . . . . . . . . . . . . . . . . . . . . . . ,:”. . . ./
            . . . . . . .?. . . __. . . . . . . . . . . . . . . . . . . . :`. . . ./
            . . . . . . . /__.(. . .“~-,_. . . . . . . . . . . . . . ,:`. . . .. ./
            . . . . . . /(_. . ”~,_. . . ..“~,_. . . . . . . . . .,:`. . . . _/
            . . . .. .{.._$;_. . .”=,_. . . .“-,_. . . ,.-~-,}, .~”; /. .. .}
            . . .. . .((. . .*~_. . . .”=-._. . .“;,,./`. . /” . . . ./. .. ../
            . . . .. . .\`~,. . ..“~.,. . . . . . . . . ..`. . .}. . . . . . ../
            . . . . . .(. ..`=-,,. . . .`. . . . . . . . . . . ..(. . . ;_,,-”
            . . . . . ../.`~,. . ..`-.. . . . . . . . . . . . . . ..\. . /\
            . . . . . . \`~.*-,. . . . . . . . . . . . . . . . . ..|,./…..\,__
            ,,_. . . . . }.>-._\. . . . . . . . . . . . . . . . . .|. . . . . . ..`=~-,
            . .. `=~-,_\_. . . `\,. . . . . . . . . . . . . . . . .\
            . . . . . . . . . .`=~-,,.\,. . . . . . . . . . . . . . . .\
            . . . . . . . . . . . . . . . . `:,, . . . . . . . . . . . . . `\. . . . . . ..__
            . . . . . . . . . . . . . . . . . . .`=-,. . . . . . . . . .,%`>–==“
            . . . . . . . . . . . . . . . . . . . . _\. . . . . ._,-%. . . ..`\

            • ULYXX
            • 6 years ago

            This whole thread needs a brushy.

            [url<]http://nsfl.co/img/brushie-brushie.jpg[/url<]

            • flip-mode
            • 6 years ago

            Love that.

    • indeego
    • 6 years ago

    [quote<] apparent CrossFire compatibility snafus [/quote<] So, the same as its introduction in 2005, correct?

    • willmore
    • 6 years ago

    Being a happy owner of a slower card and not in the market for such a thing, my takeaway from the article is that the best buy is an HD 7970 for $400. It beats the GTX 680 by a good margin, and the only other card that delivers more frames smoothly is the Titan, and that’s $600 more, so probably not worth the small marginal improvement.

      • Farting Bob
      • 6 years ago

      The $250-400 zone from both companies is very good right now. That’s still a fair bit to throw down on a GPU, but those cards can play any game at high res and will continue to do well for years to come. I’ve never been a fan of multi-GPU setups, and recent articles from TR and others have highlighted the severe flaws in using them, so a single GPU is always going to be my first choice. Sure, if I had thousands to throw around I’d consider a Titan, but its price is silly.

        • superjawes
        • 6 years ago

        Even if I had thousands to throw around I probably wouldn’t get a Titan. I would much rather invest that cash into a few monitors and a pair of non-SLI/CrossFire cards in the $300-$400 range.

      • Airmantharp
      • 6 years ago

      If you’re considering performance alone, that’s the consensus. If you’re considering adding a second card later for more performance, Nvidia’s 660 Ti-670 4GB range offers better scaling, compatibility, and thermal and aural performance, which is why those cards get recommended so much.

      The real bargain is with AMD cards that use nice custom coolers to provide excellent performance and quiet operation at very competitive prices. Buyers just need to understand that they’ll need more airflow in their enclosures to accommodate open air coolers and that adding a second card down the line isn’t really an option.

    • jessterman21
    • 6 years ago

    I literally cannot believe this product made it to the market with these problems. Especially in light of the widespread FCAT testing done months ago. I could understand if it was released a year ago in this state, but not now – there’s no excuse, other than they didn’t want to be any later. The only saving grace is the prototype driver, but who knows when that will be ready. Long after this card becomes irrelevant, I’m afraid.

    • Ryhadar
    • 6 years ago

    [quote=”Article”<]At 12", the 7990 is a full inch longer than its most direct competitor, the GeForce GTX 690, and at this point, the jokes just write themselves.[/quote<] Phrasing!

      • Firestarter
      • 6 years ago

      LANNAAA

    • Scrotos
    • 6 years ago

    Does Lucid work with video cards in general, or is it just Intel integrated paired with a discrete?

    My thinking is, get two AMD or nvidia cards but instead of CF/SLI them, use Lucid to team them together and see how that affects the pipeline. Would it be stuttering during FRAPS as well as FCAT or would it smooth things out early on in the pipeline?

    For gamers wanting the best experience, would it make more sense to get two Titans and Lucid rather than a GTX 690?

      • kuraegomon
      • 6 years ago

      As long as they can stomach the extra ONE THOUSAND DOLLARS(!) 🙂

        • Scrotos
        • 6 years ago

        Or two 7970s or whatever. Hey man, I’m running a 6850 and it plays Bejewelled very well, thank you! 😀

        I’m still curious as to whether or not Lucid would do anything. Maybe Damage might find it interesting too? I dunno.

          • cynan
          • 6 years ago

          I thought Lucid Virtu only worked with Intel IGPs (and a single discrete GPU). Doesn’t it just basically couple the IGP and discrete GPU into one video output system? I.e., you can attach your monitor to the IGP output and get video from the 3D-accelerated application being run on the discrete GPU? It doesn’t actually use both GPUs to render the same 3D application, does it?

            • Deanjo
            • 6 years ago

            Older Lucid tech did allow rendering across multiple mismatched GPUs. Not sure if it ever worked with IGPs, however.

            • Scrotos
            • 6 years ago

            Yeah, all their stuff on their website now is quicksync this and quicksync that. I guess Intel bankrolled them or something? I hadn’t followed them closely enough to know if they could still do mismatched GPUs or if it allowed for identical GPUs as well.

    • drfish
    • 6 years ago

    [quote<]...AMD plans on making frame pacing an option in future Catalyst drivers but leaving it disabled by default. The rationale has to do with the fact that frame pacing injects a small amount of lag into the overall chain from user input to visual response. AMD says it wants to avoid adding even a few milliseconds to overall input lag. That seems to me like something of a pose, a way of avoiding admitting that Nvidia's frame metering technology is preferable to AMD's own approach.[/quote<] That and because FPS averages are likely to take a hit, right?

      • Damage
      • 6 years ago

      Nah, not really. Frame pacing shouldn’t reduce the total number of frames produced, so the FPS average should be unaffected.

        • drfish
        • 6 years ago

        Ahh, I misunderstood, I thought the runt frames were boosting AMD’s FPS average (even though I know FCAT can filter them out).

          • Damage
          • 6 years ago

          We did not use FCAT filtering for any of our results in this review. I toyed with it, but I don’t like FCAT’s default rule, which says:

          Frames < 20 scanlines & frames <25% the size of the frame prior = runts

          The percentage-based rule will count against regular-sized frames that come after long delays, which makes no sense. Once you disable the % rule, the 20-scanline filter is pretty weak, only filtering out a small portion of frames in most games. I tried raising it to 30 scanlines, but it still had very little effect on our 99th percentile frame times. The FPS average also only changed by 1-2 FPS in Sleeping Dogs; it might have done a little more in Tomb Raider.
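
          In code terms, the two tests described above amount to roughly this (a sketch of the rule as described, not FCAT's actual source; the names are made up, and how the two tests get combined is exactly the part under discussion):

          [code<]
          def too_short(scanlines, min_lines=20):
              # The fixed-size test: frames occupying fewer than ~20 scanlines
              # of the captured video.
              return scanlines < min_lines

          def small_vs_prior(scanlines, prev_scanlines, ratio=0.25):
              # The percentage test: frames under 25% the size of the prior frame.
              # This is the one that also flags normal frames after a long delay.
              return scanlines < ratio * prev_scanlines
          [/code<]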

          Anyhow, yeah, FPS averages just don’t tell you what you need to know, but they should be equally useless with and without metering. 😉

    • anotherengineer
    • 6 years ago

    ” In its current form, there’s no way the 7990 is deserving of its $1K price tag.”

    And neither is the Titan!!

    Edit – Did I skim too fast and miss the three-monitor Eyefinity tests, or were those omitted?

    Also, I noticed 2560×1600; were any tests run at 1920×1200 or 1920×1080?

      • travbrad
      • 6 years ago

      Yep these “flagship” cards have always been ridiculously overpriced. The Titan is still a much better card than the 7990 though, unless you care more about 3dmark scores than a smooth gaming experience.

      Any $1000 card is extreme overkill for the vast majority of gamers though (who still play at 1080p). A more interesting test would be a couple of $200 cards in SLI/CF against a $400 single-GPU card. The results would probably be pretty similar though. If even a 7990 has issues I can’t imagine dual-7850s faring much better.

    • Chrispy_
    • 6 years ago

    Thanks for confirming [i<]scientifically[/i<] what we've all suspected for years: [b<]CrossFire has been, still is (and, [i<]until properly fixed[/i<], will continue to be) COMPLETELY WORTHLESS.[/b<] Great article, Scott; even though the outcome was near-guaranteed, I still read every page, clicked every button, and watched every video. Let's hope AMD puts as much effort into gameplay smoothness as you do from this point onwards....

      • Krogoth
      • 6 years ago

      SLI suffers from the same thing outside of titles under the “The Way It’s Meant to be Played” moniker.

      Multi-GPU solutions cannot escape their inherent shortcomings. The new focus on latency in frame-rate testing is showing the masses what ex-SLI/CF users have known for years.

        • superjawes
        • 6 years ago

        Frankly, unless you can get something truly parallel, start piping frames to the same location, or completely rethink how you render frames across two GPUs, multi-GPU setups will always have an added latency that will ruin the value perspective.

        Sure, you can even things out by adding delays or something, but that basically means that one GPU is running “slower” than the other.

          • travbrad
          • 6 years ago

          [quote<]multi-GPU setups will always have an added latency that will ruin the value perspective.[/quote<] Yep and the value perspective is made even worse when you consider the need for a beefier power supply, more expensive motherboard, plus a lot of extra heat being generated in your PC.

        • entropy13
        • 6 years ago

        [quote<]SLI suffers from the same thing outside of titles aren't under the "The Way It's Meant to be Played" moniker. [/quote<] Exactly. Crossfire works properly in 'Gaming Evolved' titles. Oh, wait.

          • Chrispy_
          • 6 years ago

          So very [b<][i<]this[/i<][/b<]. SLI certainly isn't without major problems, but Crossfire just never, [b<][i<]ever[/i<][/b<] works much better than a single card....

        • Voldenuit
        • 6 years ago

        [quote<]SLI suffers from the same thing outside of titles aren't under the "The Way It's Meant to be Played" moniker. [/quote<] I guess Sleeping Dogs didn't get the memo, since it's a 'Gaming Evolved' Title and still smoothest on SLI? I don't advocate multi-GPU solutions as a rule but it does appear that nvidia has the more polished and robust multi-GPU product right now, and they should receive the due credit for prioritizing smoothness over framerate even before hardware sites started measuring that metric.

        • MathMan
        • 6 years ago

        I don’t think it’s unreasonable to say that CF suffers much worse than SLI, irrespective of the games being Nvidia or AMD sponsored.

        Just look at that CF behavior…

        Once AMD releases its new driver, I expect them to be in the same league as SLI. Not perfect, but acceptable.

    • rootheday3
    • 6 years ago

    I have a slightly different take: Have we hit the point of diminishing returns? These tests require running the most demanding games at extreme settings/resolutions to be able to create situations where the frame rates have perceptible stuttering issues. How many users have 25×16 monitors? Even if the fans are quiet, do you really want 400 watts warming up the room? Do you want to pay >$400 for graphics cards for the privilege?

    Seems like this is a pretty niche market, and it’s unclear what the rationale would be for the next graphics card upgrade… A couple of years ago the answer was that stereo 3D would drive the next wave, then multi-display/Eyefinity. Neither really seems to have moved into the mainstream.

      • Cataclysm_ZA
      • 6 years ago

      I think we’re still hitting good returns, and even the Titan is worth its asking price, all things considered. Both the GTX 690 and the HD 7990 should be capable of gaming at 4K resolutions with at least medium details, depending on the game. Both cards suck up less than 400W on their own.

      [url<]http://www.anandtech.com/show/2584/10[/url<] It wasn't too long ago that a system with the HD4870x2 consumed around 250W at idle and just over 750W at full load. Over time, we've dropped that requirement by a full 300W and increased the performance per watt ratio immensely.

        • swaaye
        • 6 years ago

        You’ve sold yourself on this hardware, apparently. I miss the days when 3D cards didn’t even need fans and performance was doubling yearly.

          • Scrotos
          • 6 years ago

          Then why didn’t you buy a Matrox Parhelia when they needed you?!? HEADCASTING WAS TO RULE THE WORLD!

          /me goes back to his Kyro II

            • swaaye
            • 6 years ago

            Because Parhelia cost >$300 and was buried by Radeon 9700 Pro about a month after release. Plus the guys on MURC were reporting all sorts of strange bugs.

            Headcasting was mainly advertised for G550 though.

            I did get myself a Parhelia for cheap recently though and tried it out. Doom3 is still broken. 😀

            • Scrotos
            • 6 years ago

            Heheh, it was also broken in DirectX 9 somehow too, wasn’t it? I used to <3 Matrox but they really lost it after the G400. And oh gawd, I haven’t thought of MURC in years.

            • swaaye
            • 6 years ago

            They initially advertised a strange combination of vertex shader 2.0 and pixel shader 1.3 support. Later that was revised to D3D8 all around.

            Oh and it was actually $400 at first.

            • Scrotos
            • 6 years ago

            Well, we’ll always have the Savage2000!

            (with hardware T&L which was broken)

      • sunner
      • 6 years ago

      “….Have we hit the point of diminishing returns? These Tests require running the most demanding games at extreme settings & resolutions….Seems like this is a pretty niche market…”

      Agreed. But Tech Report is a “Tech site” (and a darn good one) so they have to run them.

      Btw, I’d like to see TR run a “Review” comparing the PC games of the 1990s with the PC games of circa 2012.
      A review that compares the noble heroes of the early days of gaming (PC heroes were loyal, did noble deeds, rescued fair maidens, freed kingdoms, etc.) with today’s PC heroes (who often switch loyalties as casually as changing a shirt, slip a knife into a sleeping man’s back, or do grisly things like cannibalism).
      But such a ‘Review’ will probably never happen because it would hold a mirror up to us :).

      • Airmantharp
      • 6 years ago

      You’re right- it comes down to two problems, though, both outside the realm of graphics card developers. Essentially, game development is focused elsewhere, and the demand for higher resolution and multidimensional displays in the desktop space is going unrealized by the market. Also, consoles are still largely the center of game development. Developers cannot ignore the volume and market penetration that consoles have achieved and would be doing themselves a grave disservice by ignoring them in their development pipelines.

      The solutions to both of these problems are centered largely on the release of the next generation of consoles along with the impending push to 4k television. Expect game developers to start targeting much higher levels of detail across the board, and panel manufacturers to start focusing on higher density displays at every panel size.

      • Deanjo
      • 6 years ago

      Today’s “niche” is tomorrow’s mainstream. It wasn’t all that long ago that Steam stats showed 1024×768 as the mainstream and 1920×1080 as a “niche” market. Then, once 1080p monitor prices hit that sub-$300 mark… BOOOM… all of a sudden everyone was running 1080p panels.

        • flip-mode
        • 6 years ago

        This is the repeat answer to that repeat question. There are a number of previously niche products that are now outperformed by today’s mainstream products. In the meantime, it’s perfectly OK for niche products to serve niche markets.

      • brucek2
      • 6 years ago

      For purchasers of this card, I’d imagine the portion of them with total resolution of 25×16 or (much) greater is nearly 100%.

      I just saw a review of an off-brand 4K monitor that wasn’t much more than $1,000.

    • Silus
    • 6 years ago

    Oh? A “new” AMD video card that is hampered by poor drivers ? Nooooo….it can’t be!

    At least in the past, the price was better than the competition…now it’s the same, although the bundles do help the situation a bit.
    A “too late” product by AMD…This should’ve been released half a year ago.

      • NeelyCam
      • 6 years ago

      ^ This.

      This is exactly why I’ve been saying that if I were to build a gaming rig again, I’d go with NVidia.

        • Fighterpilot
        • 6 years ago

        Neeley….not buying AMD?
        Gee…didn’t see that coming.

        /eye-roll

          • NeelyCam
          • 6 years ago

          I prefer buying quality stuff since I can afford it

      • jessterman21
      • 6 years ago

      Aw, downvoted by the Radeonites. Totally agree with you.

      • MrJP
      • 6 years ago

      Yes, but if you flip that around then anyone who bought a 7970 a year or so ago has now had the last laugh over those that bought the more expensive and ultimately slower GTX 680.

      Whether it’s best to go with the best hardware or the best out-of-the-box drivers depends entirely on the length of your upgrade cycle.

        • superjawes
        • 6 years ago

        Cut it out JP. Silus is busy fanboying.

          • Silus
          • 6 years ago

          So is fanboying just commenting on the review and the facts it presents? Or maybe you read a different review, where the drivers were great and the 7990 cleaned house? And found that the 7990 is actually right on time and not late at all to the party?

          Just trying to understand how to get upvoted, because it’s clearly not by following reality.

            • superjawes
            • 6 years ago

            You weren’t posting facts, though. You were providing the same commentary you always provide: “Radeon bad. GeForce good.”

            If you really want upthumbs instead of down, you need to either be funny or constructive. Try these:

            “AMD is really getting hammered on the drivers front these days. Lots of fast hardware getting crippled by drivers, especially with these frame time metrics.”

            “If this were released six months ago, it might have been able to do something for AMD, but I don’t see it making a dent in the market now.”

            Those are basically the same points you were making, but they lack the fanboyism that gets panned by TR comment critics.

            Keep in mind that you’re not just getting downthumbed by “Radeonites.” I myself am very happy with my GeForce cards, but that doesn’t mean I want AMD to do poorly. In fact, I want them to do very well because most industries perform best when there is more competition.

            • flip-mode
            • 6 years ago

            Well said.

            • Silus
            • 6 years ago

            So… stating that this product is late and suffers from poor drivers (as per usual with AMD) means I want AMD to do poorly?
            See, that’s what makes you the fanboy (regardless of what you say on the interwebs about owning a GeForce), because stating the reality, clearly proved in the review, does not mean I want them to do poorly. That’s your extremist (and fanboy) PoV, not the reality. The wording I used is good enough and will only offend those in glass houses, such as yourself.

            I’ve been quite adamant about competition in EVERY area (unlike the VAST majority around here that love and openly support monopolies; an example being Steam vs. all other digital distribution systems), so I don’t want AMD to fail. But I would like to see AMD doing something on time, or at least complete. This is NOT the first time (not by a long shot) that AMD has released new hardware hindered by poor drivers. If you want to continue to support this, by all means. It’s your right! But wanting AMD to do what they should be doing to actually improve their customers’ experience with their hardware, instead of the usual mess of releasing good-enough hardware plus crappy software, is quite far from wanting them to do poorly. But again, your PoV is an extremist PoV. The reality is that no one should support this type of behavior by companies in general. By supporting it and not showing you are displeased with it, you’re sending the message to these companies that they can continue doing it again and again… it seems AMD got that exact same message and did just that once more with the release of this card…

            • superjawes
            • 6 years ago

            [quote<]Oh? A "new" AMD video card that is hampered by poor drivers ? Nooooo....it can't be![/quote<]That's [i<]your[/i<] fanboy commentary, not a fact, and that's why you got downthumbed. But seeing as you basically flipped logic on its head and called me the fanboy and yourself the truthiness warrior, I see no reason to be reasonable with you anymore. Seriously. All I did was give you a guide to posting constructive comments and, possibly, getting the upthumbs you seem to want, and somehow I'm an AMD fanboy now? I didn't even post anything praising AMD...

            • Silus
            • 6 years ago

            I understand that irony or sarcasm is often missed in the interwebs…I often fall for it myself…so I’ll give you a hint: I don’t care about up or down votes. If that mattered at all, then TR would be an AMD fanboy haven, since they are a clear majority around here.

            My comment is a fact and is based on the review: poor drivers hamper the performance of a card that theoretically should be much faster. Something that is also AMD’s usual: great in theory, but drivers pull it down quite a bit. My sarcastic remark at the end is just that, sarcasm, and is in fact a simple joke, but one that of course will be taken seriously only by those with very thick red glasses such as yourself.

            And yes, you are an AMD fanboy, or a fanboy in general, when you consider criticizing a hardware release hindered by poor software to be “busy fanboying.” You still fail to realize that this hurts AMD consumers who actually buy their hardware and who have to wait months, sometimes years, to get something working as it should have at release. This new card already started the same way: poor drivers with the “promise” of improvement.

            But hey you own a “GeForce” right ? You don’t really care about Radeons or Radeon users. Let them suffer with poor drivers, because if that’s what AMD does, them that’s what AMD does and no one can criticize them for it…unless they are fanboys…

            Oh and just to be clear…that last paragraph is me being sarcastic again.

            • superjawes
            • 6 years ago

            I caught your original sarcasm, but when done properly, sarcasm is funny (even on the internet). Your comments were not funny. Instead of being entertaining or offering some constructive criticism, heck, even condolences for AMD, you just parroted (and are parroting) the same tired talking point of Nvidia fanboys about AMD/ATI drivers. It’s old and not funny anymore, and I’m not sure it was funny to begin with.

            By comparison, MrJP offered a different (and more balanced) perspective to the GPU market. My response to him was a sarcastic one acknowledging the tired fanboyism of your OP, and I think it was funnier, if I do say so myself (and I do).

            • Silus
            • 6 years ago

            And once again, you just showed how much of an AMD fanboy you are when you consider the argument of AMD’s drivers being crappy as “tired talking”, when it’s pretty much in effect NOW and this review (as others in the past have) proves it!

            Disregarding reality doesn’t make it less real. I know it’s harsh sometimes, but you just have to suck it up and deal with it.

            I won’t talk about what is or isn’t funny. That’s a matter of opinion, unlike the results of this review which are indeed fact and support my OP, which you so lavishly consider to be “busy fanboying”…

            • superjawes
            • 6 years ago

            Oh? A fanboy ignoring the point being made and pulling out exactly what he wants to hear? Nooooooo…..it can’t be! Fanboys always pay attention to the key points people make, like the difference between a “fact” and a “commentary” and they NEVER confuse the two!

            /sarcasm (for clarity)

            • clone
            • 6 years ago

            I had a GTX 460 up until three months ago, when I replaced it with an HD 7850… if AMD’s drivers are “crappy” in your eyes, then Nvidia’s drivers are just as crappy, despite you never mentioning that “reality.”

            It’s not my bar I’m using; it’s yours.

            The difference between AMD and Nvidia amounts to method and little more. AMD publicly talks about the fixes they apply and offers them up promptly; it’s why the HD 7970 has surpassed Nvidia’s GTX 680 in benches over time and never fell far behind to begin with, despite having launched officially three (but effectively more like seven) months earlier than the GTX 680, which was an overpriced paper launch for four months.

            The downside of this method is that people can bitch about AMD’s drivers because AMD is admitting the issues publicly. Nvidia, on the other hand, does much the same while admitting nothing… they do it quietly. The new drivers do come out in a timely manner and fixes are offered, but… Nvidia won’t talk about what was fixed.

            Personally I prefer AMD’s method. It’s nice to see them not just fixing issues but also mentioning “if you are playing this game, the new driver will improve this, or fix this”… I appreciate that.

            With Nvidia, when I stumble into a problem I have to give up on the application for a few months or swap out the card and use an AMD part… if I choose to stick with the Nvidia box, I’ll have to do my own discovery process by revisiting with an updated driver or a game patch to find out if the issue has been fixed.

            That is annoying, especially when it’s not fixed after several revisits.

            To be clear, I wasn’t disgusted with my GTX 460. It did its job; it had several issues, but none of them egregious. That said, the HD 7850 has so far been issue-free despite the constant accusations of “crappy drivers.”

            • superjawes
            • 6 years ago

            I think the big picture is that Nvidia has the upper hand in GPU market share, so instead of convincing more people to buy Nvidia, it makes more sense to hype a few products to keep excitement high and reaffirm their position at the market leader.

            AMD, on the other hand, is definitely going through a rough patch. They are being beaten on the CPU and GPU fronts (again, market share), and the company is losing money. They [i<]need[/i<] the publicity to generate sales and pick up market share. Basically, the strategies are reflecting the state of each company.

            • clone
            • 6 years ago

            I’m not sure I agree. There are only two real players in this segment, Nvidia and AMD, and Nvidia has always been quiet about their drivers; it’s been this way for 15 years… it’s company policy. They do update them, and they do fix issues, but they don’t give anyone any directions for finding the fixes.

            AMD has been doing Catalyst updates for years and they’ve always provided release notes.

            AMD’s CPU division is losing money; the company as a whole is losing money because of its CPU division, but that has nothing to do with the GPU division, which is making money.

            the policy differences between the 2 are ideological based.

            Regarding market share, AMD’s position will always be problematic because, in building the AMD brand and killing off the ATI one, they alienated themselves. From an OEM’s perspective, “Intel CPU, AMD inside” sounds… wrong.

            At least with Intel and Nvidia, you know you have an Intel CPU and Nvidia graphics.

            • NeelyCam
            • 6 years ago

            [quote<]If that mattered at all, then TR would be an AMD fanboy haven, since they are a clear majority around here.[/quote<] If only S|A forums had up/downvoting...

            • clone
            • 6 years ago

            And yet the HD 7970 is spanking the GTX 680 in this review despite your claim of the HD 7970 suffering from “bad drivers”… and if it gets quicker still once the drivers are up to your “level” of competence, think of how bad the GTX 680 will look.

            Facts… inconvenient, maybe, but facts nonetheless.

        • HisDivineOrder
        • 6 years ago

        So you’d take six months of substandard drivers, three of which the 680 coexisted for, plus no option to realistically enjoy Crossfire by buying another card for a year now for a minuscule lead over the 670 that cost less at launch than that 7970 did?

        Because the 7970 doesn’t beat the 680. The 7970GHZ beats the 680. The 7970 beats the 670, which was even CHEAPER than the 7970 at launch.

        Bad drivers are bad drivers. I’d stick with the company that has a track record of anticipating what needs to be evaluated and successfully doing so with their drivers. One side is busy being told for years that they need to include frame metering because their solution is jerky (ie., HardOCP’s reviews) and the other side’s been DOING said solution for years. One company didn’t even know HOW to test it until the other team finally took pity on them after reviews came out that made it plain as day that there WAS a problem and it was undeniable. Finally, their competitor had to sit AMD down and show them how to test their own cards properly.

        That right there’s your reason for staying the hell away from AMD. Their driver team and their testing teams can’t even make proper drivers for their own hardware. And especially not at launch.

        So you think someone who paid $550 for a 7970 that quickly dropped over $100 after the 680 came out got a steal? Haha, no way.

          • clone
          • 6 years ago

          I would, especially given that the HD 7970 came out much earlier than the GTX 670, that Nvidia, much like AMD, has its own driver issues, and that over time the HD 7970 has gotten faster and outpaced not just the 670 but also the 680, both of which came out later, after having had the advantage of seeing where the bar was.

          So yes, paying $550 for the quickest graphics card at the time, one that only got better over time and outpaced its later-launched rivals, seems like the only logical choice.

          Even worse, the GTX 680 was a paper launch for the first 3-4 months and its street prices were ridiculous compared to AMD’s MSRP/street prices… and did I mention it’s falling farther behind over time?

          AMD came out first, came out quickest, and got quicker over time… awesome?

    • badnews
    • 6 years ago

    [quote<]At 12", the 7990 is a full inch longer than its most direct competitor, the GeForce GTX 690, and at this point, the jokes just write themselves. The Radeon's additional endowment may prove to be inconvenient, though, if you're trying to install the board into any sort of mid-sized PC enclosure.[/quote<] lol. You rock, Scott. Keep the metaphors fast and flowing.

    • Cataclysm_ZA
    • 6 years ago

    Good review as always, Scott.

    Unfortunately, I didn’t see what I was really hoping for – someone to actually go ahead and test Crossfired HD7970s with the same frame pacing driver to see if it gets the same benefits.

    I wonder when the damn thing will leak into the internet.

    • Bensam123
    • 6 years ago

    Perhaps I’m a bit weird, but I spent some time watching those bars on the left side of the screen in the prototype videos. With the non-prototype drivers, you can see slim frames actually migrate up the edge of the frame in a rotational-type pattern most of the time, almost like watching a film. Whereas with the prototype driver, they appear to wobble up and down in a split fashion (instead of there being three frames on the screen).

    It’s easiest to see this in the Lara Croft video… I’m not entirely sure what it means though and it could definitely be my eyes playing tricks on me, but it does look like there is a pattern in there.

    I do think we still need a way to measure overall latency from user input to output that would take into account any buffering that is happening along the way. It was mentioned that having extra latency is preferable to jittery frames. I think that is quite subjective, though, and it does add up. An IPS monitor hooked up to a card with frame metering, hooked up to a wireless peripheral, all connected to a wireless network, would have a very different feel from a fast TN monitor with a wired mouse, a wired connection, and little to no latency introduced.

    After you get down to a certain point, though, the ‘jerkiness,’ even if it’s operating without latency, can throw you off as much as the latency can. A good example of this is the cl_interp command in Source games; Counter-Strike and Team Fortress 2 are where it’s most relevant. cl_interp is a command for interpolation: basically, the game engine smooths animation between updates from the server. A cl_interp of 0.1 gives the engine 100ms to buffer and smooth out animation over those 100ms, whereas a cl_interp of 0.01 gives the engine 10ms to smooth out that animation. Synchronization between a client and the server happens on updates (which is also why the tick rate is very important). The tick rate sets how often a client updates; getting the most up-to-date information requires a faster tick rate and results in smoother gameplay if the server ticks faster.

    Most players don’t get into this sort of thing, but it’s very important when it comes down to tailoring the feel of the game to how you play. If you don’t understand interpolation, you may end up shooting at a non-existent player, meaning where you’re shooting and where the player actually is on the server don’t correlate, because the player has moved to a different location during the buffering and interpolation, and what the game engine is displaying to you on your computer is no longer relevant. That’s what happens if the interpolation is too strong: if you set it higher than normal, or if you leave it at the default while being right on the money with your shots in high-action scenarios. The scene will be completely smooth, though, and you won’t see any jerkiness.

    However, if you set interpolation to something like 0, the positions of characters will be absolute, based on the data you receive from the server, but this results in really jerky gameplay, as updates don’t happen fast enough to give extremely fluid motion. So you end up with something in between. I ended up settling on 0.01, whereas normal play is at 0.1; most professional players prefer 0.01 as well. While 10ms may not make much of a difference, 100ms for players with fast reaction speeds is quite noticeable, and in high-action scenarios it results in a loss of precision. A jerky animation is preferable to a fluid one in this case, as fluidity has relatively little meaning if it’s not realistically representative of where players actually are. You end up dead before you even know it.

    Every engine with multiplayer capabilities has interpolation built in in one form or another, they almost never expose it to the users though, which is a shame.
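
    For anyone curious, a toy version of what that interpolation does (an illustration only, not Valve’s actual code) looks something like this:

    [code<]
    # Toy client-side interpolation: render the world cl_interp seconds in the
    # past and blend between the two server snapshots that bracket that time.
    def interpolate(snapshots, now, cl_interp=0.1):
        # snapshots: list of (server_time, position), oldest first
        render_time = now - cl_interp
        for (t0, p0), (t1, p1) in zip(snapshots, snapshots[1:]):
            if t0 <= render_time <= t1:
                a = (render_time - t0) / (t1 - t0)
                return p0 + a * (p1 - p0)
        return snapshots[-1][1]  # nothing to bracket: fall back to the latest snapshot

    # A 20-tick server sends updates every 50ms; with cl_interp 0.1 you see the
    # world as it was ~100ms ago, which is exactly the "shooting at ghosts" lag.
    snaps = [(0.00, 0.0), (0.05, 1.0), (0.10, 2.0), (0.15, 3.0)]
    print(interpolate(snaps, now=0.15))  # 1.0 -- the position from 100ms back
    [/code<]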

    This may seem like it’s unrelated, but it very much relates to this review and how this affects perception. Giving someone an extremely smooth experience is definitely important, but when you need to interact with something there needs to be a good balance between the two or you end up with either a muddled picture which you can’t make anything out in (cl_interp 0) or one where you’re shooting at ghosts because players are not where you’re aiming (cl_interp 0.1).

    To this end, I really wish video card manufacturers, processor makers, and game developers would give us the option to tailor latency vs. clarity, where you trade off extremely low latency against pristine fluidity. That’s generally what Vsync has done over the years, only it’s never really described in a user-friendly way, and you usually have to dig for an explanation in order to find one.

    I actually think AMD is doing the right thing by making this an option. Perhaps having it disabled by default isn’t the right call, but having it as an option is a better direction than Nvidia simply having it on at all times. Ideally, I think the algorithm should be based on a slider, where you can better choose the threshold at which this takes place. It’s pretty obvious that you can buffer the shit out of all of this and it’ll fix all the problems, but then you run into problems of real time versus perceived time: where things are on your screen versus where they actually are.

    Supplemental reading if you want to learn about how game engines synchronize data over the internet (source engine):
    [url<]https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking[/url<] (There is actually a lot of cool information in this article concerning buffering, time dilation, and real life versus perception when dealing with a simulated world.)

    I am rather disappointed with the blower on this card. I’m sure it reduces noise levels, but heat levels perhaps not so much. All that heat simply gushes into your computer, and the case eventually ends up oversaturated unless you’re running a box fan through it. After my latest go at one of these coolers, on a Gigabyte 7870, I have no interest in buying a card without a cooler that blows air outside of the case. It does keep the card cooler at idle, but when the inside of my case starts getting warm, or I’m OCing, that heat builds up and sits down there in the bottom of my case, leading to higher card temps and faster fan speeds to compensate. I don’t think something like this is accurately reflected in temperature and noise measurements from reviews, as they’re done on an open workbench and never inside an enclosed case with a normal workload over a prolonged period of time. I always look for and recommend the stock blowers that discard heat outside of the case now.

    I’m sorta surprised that, given their rationale of improving acoustics by having air blow directly through the fins, they don’t adopt a squirrel-cage blower instead of their side spinner on rear-exhaust models. Ideally you’d want the fan to blow heat directly through the card and out the back side, but you can’t aim a fan like that without using something like a 40mm fan. A squirrel cage would fix that, and at low RPMs they’re pretty quiet.

    “Scott Wasson — 9:57 AM on April 23, 2013” Really? Where has this article been the last six days? :p Great review otherwise. Have you asked them when the driver with updated memory management will pop out? Have you also considered applying FCAT to processors and doing an update on a couple of models?

      • Scrotos
      • 6 years ago

      Some more info on network latency correction:

      [url<]http://www.ra.is/unlagged/[/url<]
      [url<]http://unlagged.com/[/url<] (links at the bottom for more info)

      What I’d like to see are some similar tests done with CRTs and PS/2 mice on older hardware, to actually see what the input latency of such a system is. Quake 3 is an example that’d work over the years. There was a fit when a modernization effort, ioquake3, switched from “direct input” to SDL. Most people didn’t notice anything different, but some people said it was “slow” and all that.

      Since the engine is open, you could make a test map with very little in it, just a box with patterns on the wall, and hack the engine to send each frame with a timestamp on it. Hook something into the keyboard API that records when a key is pressed and track it back to find out how long it takes to happen on-screen.

      There’s so much to take into account, though, as the game engine may run at its own internal timing. In Quake 3 I believe the default is 20 fps, so the time slices may always have a certain latency that you can’t get away from. And the engine is designed with certain timeslices in mind; even though you can change it, that may have unintended consequences. Doom 3’s internal slices were 30 fps, I think. Who knows what other game engines or modern games in general do?

      Would a CRT and PS/2 mouse (because you can overclock the port to 200 Hz or something) really matter compared to an LCD and USB or even wireless? Does anyone care enough to test it? That’s the real question.
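
      A bare-bones sketch of the kind of harness being described (hypothetical; a real test would hook the actual input API and the engine’s frame timestamps rather than these stand-in functions):

      [code<]
      import time

      # Stand-ins for the real hooks: press_key injects the input, and
      # frame_shows_result reports whether the presented frame reflects it yet.
      def measure_input_latency(press_key, frame_shows_result):
          pressed_at = time.perf_counter()
          press_key()                          # inject the input
          while not frame_shows_result():      # poll until the effect is on screen
              time.sleep(0.0005)
          return (time.perf_counter() - pressed_at) * 1000.0  # milliseconds
      [/code<]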

        • Bensam123
        • 6 years ago

        I’m not sure… I noticed in NS2 they actually expose frame-times to the user with r_stats. This is the first game I’ve seen that’s done something like this.

        Usually developers are pretty secretive about internal timings though. This is the first time I’ve heard about internal engine timings being so low. Sometimes they’re sync’d to the graphics, other times they run in a separate loop from graphics.

        It’d actually be pretty great if Scott got in touch with Epic, Valve, or Crytek to talk about this stuff and perhaps refine his graphics testing methods even more. He already talks with Nvidia and AMD, so I think it’s only natural he’d talk with engine developers too as it represents the entire higher end portion of what he’s testing.

        Maybe if it’s impossible talking from someone from a bigger company think about Unknown Worlds which makes NS2 and they’re a pretty small team. They’d probably be willing to talk and maybe give some updates.

          • auxy
          • 6 years ago

          The games on the Cryptic in-house engine (Champions Online, Star Trek Online, Neverwinter) can also display frametimes, with a graphic histogram.

            • Bensam123
            • 6 years ago

            Neat… Source engine also has net_graph 3 which displays render times in a histogram (I think). I’m still unsure of that though as it’s rather confusing.

        • moose17145
        • 6 years ago

        Kind of sort of a bit off topic… but I thought most “pro” gamers preferred PS/2 ports because they were interrupt-driven instead of using a polling system like USB, meaning there should be next to zero input lag: the instant input is… errr… inputted, it sends an interrupt to the CPU to deal with it immediately. That being said, I didn’t think it was possible to overclock something that is interrupt-driven in that manner…

          • Bensam123
          • 6 years ago

          I haven’t heard of that; it could be true, though. Maybe for pre-1000Hz mice. 1000Hz mice really have no trouble with input lag, that I know of.

          The whole input-latency chain is a rather new area in that it’s only starting to be completely explored. What Scott is doing is just a small part of the whole process. Perhaps TR will eventually try to cover the whole chain, but right now they’re just working on how graphics cards render.

      • flip-mode
      • 6 years ago

      Question for audience: is it worth the time to read this comment?

        • kuraegomon
        • 6 years ago

        Yes.

          • Scrotos
          • 6 years ago

          Yes but he moves from what are physical hardware effects to game-engine-specific effects. Which is fine and all but he’s talking multiplayer network code and variables that most casual gamers would never mess with and most engines don’t allow you to mess with. To really find the effect of input with a video card and game engine, you need to turn off interpolation and all predictive stuff. With something like the Quake engines and probably HL you often have that kind of control for testing.

          In my opinion, some of this testing could be done and may prove interesting, but you really REALLY have to strictly define how you’re testing and what you’re testing. Even then, at the end of the day there may be little practical application of what you find out.

          But yeah it’d be neat to see if “mouse lag” really matters in modern games or if “pro” gamers are just full of it. 🙂

            • kuraegomon
            • 6 years ago

            Note that pretty much all of what Scott et al. have been investigating over the last year+ is software issues in the rendering pipeline – i.e. driver-level software, not hardware. Also, it’s the interaction _between_ the game engine and the driver layer that’s at the heart of this entire discussion.

            Really, the primary value that I extracted from Bensam’s post was his counterpoint to the prevailing view that frame smoothness should be the only goal of this exercise. Many of us regularly complain that gameplay is more important than the pretty pictures, so his argument that he’d rather consistently hit what he’s shooting at than see it parade smoothly (but un-shot!) across his screen packs considerable weight with me.

            I’m quite sure that actual pro gamers aren’t full of it (lucky bastards), but they’re all basically fast-twitch mutants. If Bensam is any indication (not sure from his post whether he actually competes for pay), they certainly seem to have moved on to a much more sophisticated dialogue about, and approach to, the topic than just talking about “mouse lag”. This makes sense – competition for compensation incentivizes acquiring a more accurate understanding of the variables that affect your playing environment. Watching the rise of analytics in North American pro sports (particularly NBA basketball) is an excellent example of this phenomenon.

            • Scrotos
            • 6 years ago

            No YOU misunderstand to the maaxxx!@# 😀

            Here, this is where I’m pullin’ it from:

            [i<]I do think we still need a way to measure overall latency from user input to output that would take into account any buffering that is happening along the way. It was mentioned that it's preferable having extra latency over jittery frames. I think that is quite subjective though and does add up. Having a IPS monitor hooked up to a card with frame metering hooked up to a wireless peripheral all connected to a wireless network would have a very different feel to a fast TN monitor, with a wired mouse, wired connection, and little to no latency introduced.[/i<]

            From having dealt with "pro" gamers for more than a decade, on both the competition and the game-engine side, the types of things they whine about (and yeah, I'm being unfair to "pro" gaming people, but that's why they make the big bucks, right?) are very much related to fps and input lag: input lag being evil LCDs that are "slow" and USB mice that have low sampling rates. Yeah, I'm a little out of it, but just look this stuff up: [url<]http://www.overclock.net/t/173255/cs-s-mouse-optimization-guide[/url<]

            I guess USB can sample up to 1000 Hz if you overclock it. Again, this is what these "pro" gamers are looking at:

            [i<]One tricks that was tucked away up some sleeves until recently was how to change the USB polling rate to faster than the Windows default of 125hz. For all intents and purposes the default 125hz polling rate has a 8ms built in response time (lag) that cannot be overcome without changing the usbport.sys file. If you change the polling rate to 250hz your mouse response time drops to 4ms. At 500hz it drops to 2ms and 1000HZ it drops to 1ms. This is an obvious advantage in a gaming environment. Some Logitech and Microsoft mice also have a huge performance boost when you overclock the mouse port, because of an interface limit related to the 8-bit data bus. Specifically they are: Logitech's MX300, MX500 & MX510 and Microsoft's WMO, IE3.0 & Laser 6000. My recommendation is to set the USB rate to 500Hz for these mice. However increasing the reporting rate of any mouse will benefit from smoother tracking and faster response, however not all mice will dramatically reduce negative acceleration and improve perfect control like these do.[/i<]

            And man, is mouse acceleration a contentious issue, too. So IPS was mentioned versus TN. Some "pro" gamers still pine for CRTs. I've seen threads at TR about console gaming where people talk about how they can't game on TVs because the input lag is bad. Here's a section of an old Dell 2407 LCD review talking about input lag versus a CRT: [url<]http://www.tftcentral.co.uk/articles/content/dell_2xx7wfp_2.htm[/url<] That was from 2007, and it wasn't a new issue then for "pro" gamers.

            Bensam talks about the engine interpreting frames to smooth out latency issues. You ought to read some of the unlagged stuff if it packs considerable weight with you. Not being snide or nothin'; you'll probably find it very interesting considering the detail the ra.is link goes into when discussing how the technique works.

            My counterpoint is that all you can really test is some of the hardware, and you should try to isolate it as much as possible from the game engines. Quake 3 is open source, and it has engine variables exposed to the end user that allow you to mess with both the internal engine timing (sv_fps) and whether or not network prediction and interpolation are enabled, which I think is cl_predict. Google sv_fps and you'll see stuff like this (for Call of Duty):

            [i<]sv_Fps 250 wil make you shoot smoother. If you have problems with your m1 getting stuck will clicking, sv_fps 250 is the solution for you.[/i<]

            and

            [i<]Stop using 'sv_fps 30' on servers. The game (RTCW, ET) are hardcoded around a 50ms frame time. 30fps = 33.333ms frames. This causes all kinds of problems in the game -- from rounding errors to just plain breaking stuff totally.[/i<]

            Once you're able to test the effect of different hardware configurations (CRT vs. TN vs. IPS, USB vs. PS/2 vs. overclocking each) and see how each affects the overall experience, then you can start jacking around with the game engine variables. If you know that your CRT gives you a 5 ms advantage over an LCD and your 1000 Hz USB mouse will only add 1 ms of lag, then you have an educated base from which to mess with cl_interp, sv_fps, and that kind of stuff.

            From there you can maybe make assumptions about engines... if your minimum latency is always 50 ms between cause and effect, then perhaps the game engine is designed to only run at 20 fps. And if an engine is only designed to run at a certain number of time slices per second, that's a ceiling you can't break through no matter how many CPUs or GPUs you throw at it.

            But the first step really has to be isolating and testing the traditional "pro" gamer arenas of "common knowledge" for improving your gaming experience. Don't even begin to talk about the specific network code of a specific game; remove it entirely from the picture to see what your peripherals actually contribute to the overall experience.
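
            The arithmetic behind those polling numbers is just the reciprocal of the rate: a stage that samples N times per second can sit on an event for up to 1/N seconds, roughly half that on average, and the stages stack. A throwaway C sketch with illustrative rates (my example figures, not measurements):

            [code<]
            /* Back-of-the-envelope delay added by each fixed-rate stage in the
             * input chain: up to one period worst case, about half a period on
             * average. Rates are illustrative examples, not measurements. */
            #include <stdio.h>

            static void report(const char *stage, double hz)
            {
                double period_ms = 1000.0 / hz;
                printf("%-26s %7.1f Hz  worst %6.2f ms  avg %5.2f ms\n",
                       stage, hz, period_ms, period_ms / 2.0);
            }

            int main(void)
            {
                report("USB mouse, default poll",    125.0);  /* ~8 ms worst */
                report("USB mouse, overclocked",    1000.0);  /* ~1 ms worst */
                report("Q3 server tick (sv_fps 20)",  20.0);  /* 50 ms slices */
                report("Server tick (sv_fps 125)",   125.0);
                report("60 Hz LCD refresh",           60.0);
                report("85 Hz CRT refresh",           85.0);
                return 0;
            }
            [/code<]

            Run it and the engine tick dwarfs the mouse: sv_fps 20 alone can add up to 50 ms, which is why the 8 ms-versus-1 ms USB argument only starts to matter once the rest of the chain is already fast.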

            • Bensam123
            • 6 years ago

            In my experience, and I’m sure for a lot of gamers, I was never that objective about measuring latencies and whatnot. I simply do what makes my game experience feel smoother and, a lot of the time, more accurate (as long as quality isn’t compromised too far). I’m not one of those people who sets all the graphics settings on low so nothing gets in the way of my vision and I get the absolute fastest FPS; that sorta takes away from the point of playing the game. A 1000 Hz mouse does make a lot of difference, though, and that’s very tangible, as it directly affects how the mouse position tracks on your screen.

            Something else I’ve noticed is the raw input option some games have started adding. I’m unsure exactly what it does for each and every mouse, but it definitely makes things feel different (supposedly it bypasses Windows).

            I mentioned in reply to your other post that Scott should definitely get in touch with some bigger engine developers to start talking about this sort of thing. I don’t believe you can simply pull the hardware out of a software environment and then test it in any meaningful way; you lose the generalizability of it to a real-world environment. What you’re testing may be measured extremely accurately, but it just doesn’t apply to a real-world environment, and sometimes the hardware can change behavior based on what games you’re playing, how the games interact with the hardware, and maybe even driver-level optimizations that are implemented for different games.

            A game engine’s update rate wouldn’t be a hard limitation. A mouse, monitor, graphics card, processor, and game engine are all independent of each other to a certain degree and also interdependent. Some people believe fps over 60 doesn’t matter when you have a 60 Hz monitor (which isn’t true)… or when you have an 8 ms refresh or whatnot. The variables are, a lot of the time, unique in their own interaction with the user and with the scenario. Video games have a lot of variables that influence the result from input to output.

            I was offering an explanation of how software deals with time dilation, perception, and fluidity as a comparison to what is currently being discussed in hardware, not necessarily stipulating that TR test specific variables in an engine.

            • Scrotos
            • 6 years ago

            Yeah, but a lot of engines don’t let you mess around and tweak stuff at the level of CS or Quake. Game devs already ship broken games and patch them later; do you think they’re gonna devote resources to help people benchmark their games?

            The hardware can be objectively tested. Game engines are too variable. You tweak things that make the experience “feel” better. Ok, that’s fine and all, but for benchmarking you need to have some kind of objective measurement so you can make an educated guess as to what will improve the experience for other people too.

            I’m not really going on the tack that you’re suggesting TR do these engine tests, just talking about testing the effects of some of this stuff in general. I think some of the considerations/tests you allude to would be dubious without considering the hardware part of the user experience first and foremost for the latency they all add to the “chain” of the experience.

            At least you don’t do that stupid “pro” thing where you set texture detail to basically nothing for higher fps and to see your opponents more easily. Ugh. I’m with you, man, play the game in all its glory!

            • Bensam123
            • 6 years ago

            I don’t really think this is ‘devoting resources’; I hear that argument brought up a lot, and it’s usually quite frivolous. Adding a frame-time stamp to an FPS readout like they did in NS2 is a simple timer (when you have access to everything else on the inside). A lot of this could simply be cleared up in a conversation with the devs, though.

            I haven’t really alluded to any sort of testing using engine variables. That was a comparison, and it offers some interesting information relevant to what Scott was talking about concerning frame pacing. I did suggest he consider talking with the developers to help clear up some of the things he’s more or less guessing on, and maybe they’d be willing to help him out with his testing, since the engine is the upper end of the chain and these are the games he’s directly testing.

            “One more thing. I understand AMD plans on making frame pacing an option in future Catalyst drivers but leaving it disabled by default. The rationale has to do with the fact that frame pacing injects a small amount of lag into the overall chain from user input to visual response. AMD says it wants to avoid adding even a few milliseconds to overall input lag. That seems to me like something of a pose, a way of avoiding the admission that Nvidia’s frame metering technology is preferable to AMD’s original approach. You’ve seen the results in the videos above. I think the vast majority of consumers would prefer to have frame pacing enabled, granting them perceptibly smoother animation. Disabling this feature should be an option reserved for extreme twitch gamers whose reflexes are borderline superhuman. Here’s hoping AMD does the right thing with this option when the time comes to release a public driver with frame pacing.”
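
            For anyone wondering where frame pacing’s few extra milliseconds come from: the basic idea is to hold a finished frame until roughly one estimated frame interval has passed since the previous one was shown, so frames go out evenly instead of in the bursts typical of alternate-frame rendering. A rough, hypothetical C sketch of the concept (not AMD’s or Nvidia’s actual driver logic; the completion times and 16.5 ms target are invented):

            [code<]
            /* Conceptual sketch of frame metering: hold each finished frame
             * until the estimated frame interval has elapsed since the last
             * flip, trading a little latency for even delivery. Numbers are
             * invented for illustration. */
            #include <stdio.h>

            int main(void)
            {
                /* Completion times (ms) of six frames from a bursty AFR setup. */
                double done[] = { 0.0, 3.0, 33.0, 36.0, 66.0, 69.0 };
                int n = sizeof(done) / sizeof(done[0]);

                double target = 16.5;     /* estimated interval, e.g. a moving average */
                double last_flip = -target;

                for (int i = 0; i < n; i++) {
                    double flip = done[i];
                    if (flip < last_flip + target)
                        flip = last_flip + target;   /* hold the frame back */
                    printf("frame %d: ready %5.1f ms, shown %5.1f ms (held %4.1f ms)\n",
                           i, done[i], flip, flip - done[i]);
                    last_flip = flip;
                }
                return 0;
            }
            [/code<]

            The “held” milliseconds are the added input lag AMD says it wants to avoid; the evened-out spacing is what the review’s videos show you get in exchange.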

            • Bensam123
            • 6 years ago

            “Really, the primary value that I extracted from Bensam’s post was his counterpoint to the prevailing view that frame smoothness should be the only goal of this exercise.”

            Yup… I was arguing for a balance between the two, and I happen to lean more towards tolerating the occasional disruption in fluidity over a completely fluid picture. Usually those disruptions occur exactly when you need to be accurate, though: high-action scenes where split seconds count, like coming around a corner and running into someone, which can be hard for an engine to deal with in a number of ways.

            Split-second squatting right in the heat of battle is an easy way to throw off someone’s aim, as is jumping. I’m sure most people heard about dolphin diving in BF2, which made people almost impervious to bullets: you go from a straight-on run, jump, then go prone while in midair. You essentially become flat as a board while in the air, with the hit profile to match. Engine prediction on top of it further compounded the issue.

            You pretty concisely summed up my post though (at least the large portion of it).

            • Bensam123
            • 6 years ago

            It was a comparison, it’s easy to describe something like network code as it’s pretty well laid out (especially if you look at the link). They’ve already covered most of the issues dealing with time dilation and latency that are now being looked at in hardware (which is pretty close to real time).

        • willmore
        • 6 years ago

        If you want to understand, then yes. If you want to fanboy + or -, just move along.

      • AzureFrost
      • 6 years ago

      If you have a video card that doesn’t blow air out of the case, set your side fans as exhaust.
      I’ve tried it, and it works; my 660 Ti load temps went down by 2-3 degrees. Not a substantial amount, but it does help the hot air from the card get out.

        • Bensam123
        • 6 years ago

        I don’t have a side fan. Not to sound snotty, but I don’t buy cases with fans on the side either, as the case sits on my desk. Perforations allow sound to escape, and you can hear fans spin up and down (regardless of whether there is a fan mounted there).

        I agree, though; I’m sure it would fix the problem with air stagnation and the other problems I mentioned (to a certain extent). But that’s like using another solution to solve the problem created by the first solution (the original HSF on the card). It’s not always an option for everyone, either.

      • JustAnEngineer
      • 6 years ago

      [quote=”Bensam123″<]"Scott Wasson — 9:57 AM on April 23, 2013" Really? Where has this article been the last six days? :p[/quote<]

      You obviously have no appreciation for the ginormous amount of [b<]data[/b<] that Scott had to crunch for this review.

      [quote="Damage"<]I played through almost the entire game on the Radeon HD 7990 at the settings below, and I encountered intermittent stutters while moving around Columbia much more often than one would hope. I figured this game could offer a nice test of a different sort of performance question...[/quote<]

    • Meadows
    • 6 years ago

    I’m confused. The review says April 23?

      • Krogoth
      • 6 years ago

      A minor oversight. It is a vestige from one of the earlier drafts of the review.

        • Saribro
        • 6 years ago

        LIES!! LIES AND SLANDER!! 7990 MAKES YOU TRAVEL THROUGH TIME!!!!!!! (sort of like a -big- microstutter)

          • superjawes
          • 6 years ago

          If this were the case, the 7990 would be blue and bigger on the inside.

            • Scrotos
            • 6 years ago

            Well, slightly smaller after portions were jettisoned/consumed, ya?

        • Meadows
        • 6 years ago

        Surely you would know, having participated in the making of this review.

      • Damage
      • 6 years ago

      I’ve corrected the publication date.

    • raghu78
    • 6 years ago

    AMD should not have released the HD 7990 now. They should have waited until the frame pacing driver is ready in at least a stable beta form. Knowing full well the new approach to gameplay testing and the criticism AMD will get for the HD 7990, it’s just poor strategy and marketing. I fail to see what AMD will gain from this card launch. Enthusiasts are going to flock to the GTX 690 and Titan till AMD has the frame pacing driver ready.

      • JustAnEngineer
      • 6 years ago

      I don’t think many folks will buy a GeForce GTX 690 at this point, either. If you’re going to blow $1000+ on a ridiculously-overpriced e-peen graphics card, you’re going to get the GeForce GTX Titan.

      If you really look at the reviews of the $1000 luxury gaming graphics cards, it’s hard not to come away with the opinion that a single overclocked Radeon HD 7950 or GeForce GTX 670 offers a much better value.

      • Anonymous Coward
      • 6 years ago

      At this point, it seems the only people buying this kind of product are those that have more money than sense. AMD is happy to accept their money. Won’t make any difference with frame pacing.

      • beck2448
      • 6 years ago

      For this money Titan is still supreme in actual experience.

    • Krogoth
    • 6 years ago

    Very solid review.

    The 7990 falls outside what any sensible gamer would consider. It is the fastest GPU on paper, but it has to deal with all of the issues associated with multi-GPU solutions. AMD’s multi-GPU support is a league behind Nvidia’s. Not that it really matters much, since most of the world runs on single-GPU solutions.

    The 7990 uses a newer spin of Tahiti silicon, which is how it is able to consume considerably less power than a normal 7970 CrossFire setup.

    The 7990 is a monster at OpenCL-related stuff, though, and shows it in Sleeping Dogs (the physics in it utilizes OpenCL). I bet it screams through buttcoins, if that is your thing.

    • Arclight
    • 6 years ago

    That’s an excellent review, Mr. Wasson.

    OK, so the frame times are pretty bad, but it seems that AMD will fix them soon. That said, if I were in the market right now, I’d probably wait for Nvidia to launch their 700 series and maybe grab one of those super-clocked, custom-cooled Titan LE cards. I don’t think multi-GPU is worth it given the frustration caused by drivers (on both sides, even though Nvidia seems to be doing better).

    • sschaem
    • 6 years ago

    Auto multi-GPU is so, so messy. Nvidia does a great job at it, coming up with this and that profile and hack to make it automagically work… most of the time.

    But the concept stinks… like auto stereo drivers.

    To get either right, the game engine should implement the feature.

    Yes, it’s possible for Unreal, CryEngine, etc. to explicitly support multi-GPU rendering, and doing it at that level would eliminate all this nonsense and crazy driver hacking.

    I guess the SLI/CrossFire market is so small that it’s not worth it for Epic or Crytek to make their engines multi-GPU aware and stop relying on driver hacks 🙁

    And it seems even Microsoft doesn’t see any value in this; otherwise it would be done at the DX layer. Instead Nvidia needs to ‘invent’ the wheel, and AMD to re-invent it…

    Lame….

    Multi-GPU would be so powerful if it was done right, at the game engine level.

      • willg
      • 6 years ago

      I’m hoping the AMD GPU guys can borrow some know-how from the CPU guys and build an MCM multi-GPU product with inter-chip bandwidth high enough that the two chips can cooperate at such a level that they share dispatchers, caches, schedulers, and memory, and operate much more like a single GPU.

        • Antimatter
        • 6 years ago

        Would Hypertransport be able to provide enough bandwidth between the GPUs?

        • Bensam123
        • 6 years ago

        Infiniband?

        Outside of clusters that aren’t very latency-dependent, I don’t imagine anything like this ever being done. Both chips would have to be on the same die in order to keep latencies down, and then you’re pretty much talking about modern-day dual- and quad-core processors.

      • Bensam123
      • 6 years ago

      I don’t think an engine will ever implement this as a feature. That’s like programming directly to the hardware, and it’s not an engine’s job; if anything, a feature like this should and would be found in DirectX or OpenGL.

      This is actually right up the alley of DX or OGL. I can sorta understand why MS isn’t doing this, as it would impede on their console baby, but for OGL this sounds like something they should tackle to gain a leg up on MS.

      I agree, though, and this idea has already been approached by Lucid with their multi-GPU technology… Unfortunately they haven’t really shown anything new for a year or so in that regard. It’s all HyperFormance and Virtual Vsync stuff.

    • gamoniac
    • 6 years ago

    Could one of the two Tahiti GPUs be disabled? If the owner of such an expensive card runs into a serious driver issue for a specific game, this might be a desirable, instant option as opposed to having to wait a few months for a fix to come out.

    • Unknown-Error
    • 6 years ago

    Merci for the review, Scott.

    • tbone8ty
    • 6 years ago

    Solid review! Cheers Scott!

    • south side sammy
    • 6 years ago

    The driver they are working on: you forgot to mention it is a rework of a “12”-series driver. And did you mention it might take three months for this to get hammered out?

      • derFunkenstein
      • 6 years ago

      The article says it’s based on older Catalyst code. Sorry to hear about your illiteracy.

        • south side sammy
        • 6 years ago

        been up too many hours to completely read. thanks for the snide remark. you’re just like lots of people on this site. up yours turd!

          • Captain Ned
          • 6 years ago

          Scott did say quite clearly that it’s old code, that it’s completely internal at the moment, and that he has no idea if or when it might be a public release.

          • paulWTAMU
          • 6 years ago

          Your statement was totally incorrect and you got called on it. Put on your big boy panties and deal?

            • clone
            • 6 years ago

            do you really believe you’ve made a constructive comment or are you defending ppl assuming an entitlement to behave like turds whenever someone makes an honest mistake?

            when composing a reply, whether it’s constructive or destructive should be considered.

            • Scrotos
            • 6 years ago

            Why post on an article you didn’t read? Or why post on an article on which you only skimmed the pictures and gleaned the info that backs up your predetermined opinion of the brand/vendor/product?

            Given the tone of the post I think it’s a safe assumption it was only posted to rain turdburgers all over AMD’s product line.

            • clone
            • 6 years ago

            could have been but that wasn’t what I was commenting on.

            • derFunkenstein
            • 6 years ago

            It’s not a mistake to say “you forgot to mention” when one doesn’t read the article (as per his own admission). In fact, I’m not sure what purpose it serves. Correcting that misrepresentation of the facts, by contrast, is at least helpful to people who came along later.

            • clone
            • 6 years ago

            it is a mistake if he missed it in the article, the later caveat that he hadn’t fully read the article is not an admission that he didn’t read it at all….. quite the opposite actually and explains why the question was asked….. a simple mistake.

            in response to what seems like an honest mistake there was hostility, and someone else jumped in to defend the hostility as if that is the best way to handle questions about an article… any article.

            hence the reason why I asked the question I did.

            on a side note the upvoting and downvoting system is a joke, someone asks what may have been an honest question born from an insignificant mistake and gets attacked openly and downvoted as if they did something wrong by asking a question…. in a comments section where the whole idea of free speech is by design supposed to be encouraged.

          • MadManOriginal
          • 6 years ago

          [quote<]up yours turd![/quote<] The lack of a comma in this sentence makes it awesome.

          • derFunkenstein
          • 6 years ago

          I don’t really care what your excuse is; if you can’t read the article, then don’t comment on the article.

            • clone
            • 6 years ago

            the reason for the comments section is to have a free and open discussion about the article. if someone finds the article confusing or potentially flawed, where would you prefer comments / concerns / questions be posted?

            your position seems quite fascist in nature as is the belief in an entitlement to attack anyone who asks a question you arbitrarily decide you don’t like.

            p.s. down voted for defending free speech, hilarious.

            • superjawes
            • 6 years ago

            [quote<]p.s. down voted for being a whiny commenter and using the word "turd", hilarious.[/quote<]FTFY

            Seriously, not one, but TWO people using the word "turd" as an insult?

            • clone
            • 6 years ago

            lol, being polite and showing respect is now “whiny”, roflmao……………..

            takes a real genius, a real visionary, a real unique person with the age and maturity level of a 7-year-old to default to caustic behaviour……. I still love how badly it speaks of the voting system when ppl are quietly / passively endorsing attacks on free speech “cause it’s fun ‘n funny” and overall defending belligerent behavior as the best form of expression.

            p.s. didn’t use the word “turd”, revisit the thread and be more careful next time.

            • superjawes
            • 6 years ago

            [quote=”clone”<]do you really believe you've made a constructive comment or are you defending ppl assuming an entitlement to behave like turds whenever someone makes an honest mistake?[/quote<]

            Yeah... I did read the thread. Have you?

            And the voting system is not an "attack on free speech." In fact, it is free speech. Basically, some commenters could write out a response to you, or they can just dismiss it for having no value and downvote you. Saves a lot of sanity that way.

            • clone
            • 6 years ago

            yep, I had a part in writing it, did you ….. really?…. or did you just come in to shoot your mouth off (metaphorically of course) and defend ppl who jump all over anyone who asks a question they’ve decided they don’t like?

            since you read it so closely and more notably you fully understood what was being said I just know you can provide clear examples where I said the voting system is an open attack on free speech.

            I know I said the voting system is a joke, I know I said the voting system was hilarious but where did I say it was an open attack on free speech?

            I’m pretty certain… well, ok, I know for a fact my comments on the voting system were a side topic, handled on the side, given I segmented them off via P.S. and secondary comments. But I’m sure you are absolutely 100% certain you can provide details that will conclusively prove that I said openly that the voting system is an attack on free speech…. because you read so closely.

    • rhettigan
    • 6 years ago

    What’s its Bitcoin mining hashrate?

      • BoBzeBuilder
      • 6 years ago

      8999.

        • chuckula
        • 6 years ago

        [quote<]8999.[/quote<] UNDER NINE-THOUSAND?!?!?!!?!?

          • NeelyCam
          • 6 years ago

          Still over 8000

      • smilingcrow
      • 6 years ago

      2 doobies an evening in the week and up to 6 bongs per hour on weekends.

      • jossie
      • 6 years ago

      Pretty relevant question since that’s about all it’s good for at the moment. And you can bet it was miners who bought nearly all of last year’s 7990s.
