The GTX 700 series sure is growing fast. Over the past month alone, we’ve seen the arrival of the GeForce GTX 780, the GeForce GTX 770, and now, the GeForce GTX 760. We hardware reviewers have been working tirelessly to keep up, spending long days, nights, and weekends benchmarking the new arrivals.
When will it end?
Today, actually. Nvidia says it doesn’t plan to extend the GTX 700 series below the GTX 760. The company’s desktop graphics lineup will remain as it is through the fall. Phew!
Oh, don’t get us wrong. We love to see fresh meat in the GPU market. However, the GTX 700 series isn’t based on new silicon. In each case, Nvidia has simply taken an old GPU and turned a few knobs, moved a few dials, and flipped a few switches to keep it from getting stale. All that tweaking has yielded some decidedly welcome performance-per-dollar improvements, but next-gen parts these are not.
The GeForce GTX 760 continues this succession of value-conscious makeovers by replacing the old GeForce GTX 660 Ti at a slightly reduced price: $249. As you’re about to see, Nvidia has made some interesting changes to the way it hobbles the GK104 chip to make this thing. Some unit counts have been increased, others have been reduced, and clock speeds have gone up. The result is a card that may be slower in some tasks but could be a better performer in today’s games—all for less money than its predecessor. That’s perhaps not as swoon-worthy as a brand-new graphics architecture, but it’s definitely something.
The GeForce GTX 760
The star of this morning’s show hails from the depths of Nvidia’s secret underground bunker. Or, more likely, some kind of QA lab or something.
Give us a twirl, won’t you, sweetheart?
From the outside, the GeForce GTX 760 looks pretty much exactly like its older sister, the GeForce GTX 660 Ti. The stock cooler is the same. The circuit board is just as stubby, and there are still two 6-pin PCI Express connectors providing power to the card.
What’s going on under the hood is quite different, though. With the GTX 660 Ti, Nvidia lopped off one of the GK104 chip’s eight shader multiprocessors (SMXes), leaving 1344 shader ALUs and 112 texels per clock of texture filtering power. The company also disabled one of the four ROP partitions and one of the four memory controllers, which gave us, respectively, 24 pixels per clock of resolve power and a 192-bit path to memory.
| | Base clock (MHz) | Boost clock (MHz) | ALUs | Textures filtered/clock | ROP pixels/clock | Memory transfer rate | Memory bus width (bits) | TDP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GeForce GTX 660 Ti | 915 | 980 | 1344 | 112 | 24 | 6 GT/s | 192 | 150W |
| GeForce GTX 680 | 1006 | 1058 | 1536 | 128 | 32 | 6 GT/s | 256 | 195W |
| GeForce GTX 760 | 980 | 1033 | 1152 | 96 | 32 | 6 GT/s | 256 | 170W |
| GeForce GTX 770 | 1046 | 1085 | 1536 | 128 | 32 | 7 GT/s | 256 | 230W |
| GeForce GTX 780 | 863 | 900 | 2304 | 192 | 48 | 6 GT/s | 384 | 250W |
In the GeForce GTX 760, all of the ROP partitions are enabled, as are all of the memory controllers. That means we have 32 pixels per clock and a full-fat 256-bit memory interface. However, one additional shader multiprocessor has been culled, which means we’re down to 1152 ALUs and 96 texels per clock.
The way Nvidia disables the SMXes also means different GTX 760 cards will have different tessellation capabilities. Remember, the GK104 chip’s eight SMX units are paired up inside four GPCs, or graphics processing clusters, and each GPC has a raster engine that can rasterize one triangle per clock. To make a GTX 760, Nvidia can either disable one entire GPC or turn off two SMXes in two separate GPCs. In the former configuration, one of the raster engines goes dark, and the card rasterizes three triangles per clock. In the latter, all raster engines survive the culling, and the raster rate goes up to four per clock.
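The arithmetic behind those two configurations is simple. Here’s a quick sketch of the two possible raster rates at the GTX 760’s 1033MHz Boost clock (our own back-of-the-envelope calculation, not an Nvidia tool):

```python
# GTX 760 rasterization rate depends on which SMXes Nvidia disabled:
# either a whole GPC goes dark (3 raster engines survive) or two SMXes
# die in separate GPCs (all 4 raster engines survive).
boost_ghz = 1.033  # peak Boost clock in GHz

for raster_engines in (3, 4):
    gtris = raster_engines * boost_ghz  # one triangle per engine per clock
    print(f"{raster_engines} raster engines: {gtris:.1f} Gtris/s")
# 3 raster engines: 3.1 Gtris/s
# 4 raster engines: 4.1 Gtris/s
```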
This isn’t the first time Nvidia has played musical chairs with raster engines. The GeForce GTX 780 is similarly inconsistent, with either four or five triangles rasterized per clock depending on how the GK110 chip is pared down. Offering inconsistent specs in a single product may not be ideal, but the ability to prune any two SMXes (or any three in the GTX 780) gives Nvidia much more flexibility to repurpose defective GPUs. Also, as far as the GTX 760 is concerned, even the worst-case scenario beats the competition, since AMD’s Radeon HD 7950 Boost only rasterizes two triangles per clock.
Speaking of clocks, the GeForce GTX 760 also boasts a higher clock speed than the GTX 660 Ti. Nvidia has cranked up the base speed from 915MHz to 980MHz, and it’s pumped up the peak Boost speed from 980MHz to 1033MHz. Part of the gain comes courtesy of Nvidia’s GPU Boost 2.0 algorithm, which uses GPU temperatures, not power draw, as the main factor to determine maximum speeds. When temperatures are low enough, GPU Boost 2.0 can raise voltages to increase the amount of clock-speed headroom for a given chip. No doubt thanks to that algorithm, you can expect to see even higher-clocked versions of the GTX 760 from Nvidia’s partners. Both Gigabyte and MSI will have cards with 1085MHz base clocks and 1150MHz Boost clocks, and some vendors will offer even faster models.
| | Peak pixel fill rate (Gpixels/s) | Peak bilinear int8 filtering (Gtexels/s) | Peak bilinear fp16 filtering (Gtexels/s) | Peak shader arithmetic (TFLOPS) | Peak rasterization rate (Gtris/s) | Memory bandwidth (GB/s) |
| --- | --- | --- | --- | --- | --- | --- |
| GeForce GTX 660 Ti | 24 | 110 | 110 | 2.6 | 3.9 | 144 |
| GeForce GTX 680 | 34 | 135 | 135 | 3.3 | 4.2 | 192 |
| GeForce GTX 760 | 33 | 99 | 99 | 2.4 | 3.1 or 4.1 | 192 |
| GeForce GTX 770 | 35 | 139 | 139 | 3.3 | 4.3 | 224 |
| GeForce GTX 780 | 43 | 173 | 173 | 4.2 | 3.6 or 4.5 | 288 |
| Radeon HD 7870 GHz | 32 | 80 | 40 | 2.6 | 2.0 | 154 |
| Radeon HD 7950 Boost | 30 | 104 | 52 | 3.3 | 1.9 | 240 |
| Radeon HD 7970 GHz | 34 | 134 | 67 | 4.3 | 2.1 | 288 |
Here’s how the GTX 760’s combination of higher speeds and tweaked unit counts translates into peak theoretical rates. As you can see, the higher clocks don’t quite make up for the missing SMX—peak texture filtering and peak shader performance are both lower than on the GTX 660 Ti. At the same time, the peak pixel fill rate has gone up a fair bit, which should mean better antialiasing resolve performance. Memory bandwidth has increased substantially, as well.
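If you’d like to check our math, the table’s numbers fall straight out of the unit counts and clock speeds. Here’s a quick sketch (our own arithmetic, assuming Kepler’s two FLOPS per ALU per clock and full-rate int8 filtering; the helper function is ours, not anything from Nvidia):

```python
# Derive peak theoretical rates from unit counts and peak Boost clocks.
# Assumes two FLOPS per ALU per clock (fused multiply-add) and one
# bilinear int8 texel filtered per texture unit per clock, as on Kepler.

def peak_rates(rops, tex_units, alus, boost_mhz, bus_bits, mem_gtps):
    ghz = boost_mhz / 1000
    return {
        "pixel_fill_gpixels": rops * ghz,
        "int8_filtering_gtexels": tex_units * ghz,
        "shader_tflops": alus * 2 * ghz / 1000,
        "bandwidth_gbs": bus_bits / 8 * mem_gtps,  # bytes/transfer x GT/s
    }

gtx760 = peak_rates(rops=32, tex_units=96, alus=1152,
                    boost_mhz=1033, bus_bits=256, mem_gtps=6)
# Matches the table: ~33 Gpixels/s, ~99 Gtexels/s, ~2.4 TFLOPS, 192 GB/s
```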
Compared to the Radeon HD 7950 Boost, the GTX 760 looks superior or competitive in all but raw shader speed and memory bandwidth. Of course, the 7950 Boost also has a wider, 384-bit memory interface, and it’s a slightly more expensive card right now. (Prices start at $279.99, or $259.99 after a mail-in rebate.)
At 170W, the GeForce GTX 760’s power envelope is a little larger than the GeForce GTX 660 Ti’s 150W TDP. To keep noise levels in check, Nvidia has implemented a revised fan control algorithm that curbs fluctuations in speed. The result should be a more consistent noise profile—one that’s hopefully easier to tune out. A similar algorithm debuted in the GTX 780 last month, and it seemed to work wonders. Of course, the GTX 760 has a different heatsink and fan design, so we’ll have to do some hands-on noise testing to see how it fares.
But before that, let’s first have a look at another member of the GTX 700 series.
The GeForce GTX 770
The GeForce GTX 770 debuted a few weeks ago at $400. This higher-end offering is basically a juiced-up version of the GTX 680 for about $20 less. Today, we take our first look at how it performs.
Unlike the other members of the GeForce 700 series, the GTX 770 has a fully functional GPU. The GK104 chip is the same as what’s found in the GTX 760, but all four GPCs are intact, and so are the SMX units that lie within them. The same configuration was used in the GeForce GTX 680. This time around, however, the clock speeds have been turned up.
The GeForce GTX 770 has base and Boost clocks of 1046MHz and 1085MHz, respectively. Those are modest increases over the 1006/1058MHz clocks of the old GTX 680, and Nvidia’s new GPU Boost 2.0 algorithm deserves some of the credit. The clock-boosting tech is the same as that employed by the GTX 760 and other members of the 700 series.
On the memory front, the GeForce GTX 770 offers a 7 GT/s transfer rate, up 1 GT/s from the GTX 680. The memory interface is still 256 bits wide, so bandwidth has risen by a substantial 17%. Standard cards are available with 2GB of GDDR5 memory, and some vendors are offering 4GB variants for around $450. Card makers have also concocted hot-clocked models with Boost frequencies up to 1202MHz and memory transfer rates as high as 7.2 GT/s.
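That 17% figure comes straight from the transfer rates, since the bus width is unchanged. A quick sanity check (our own arithmetic):

```python
# Memory bandwidth = bus width in bytes x transfer rate.
bus_bytes = 256 // 8        # 256-bit interface -> 32 bytes per transfer
gtx680_gbs = bus_bytes * 6  # 6 GT/s -> 192 GB/s
gtx770_gbs = bus_bytes * 7  # 7 GT/s -> 224 GB/s
print(f"uplift: {gtx770_gbs / gtx680_gbs - 1:.0%}")  # uplift: 17%
```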
To accommodate higher clock speeds, the GeForce GTX 770 also has a higher thermal envelope. The 230W TDP is up 35W from the GTX 680’s, and the onboard power connectors have changed to help supply the additional power. Instead of the dual 6-pin PCIe connectors on the GTX 680, the GTX 770 has one six-pin connector and one eight-pin one.
The card we’ve tested is an Nvidia reference model that uses the same swanky cooler as the GeForce Titan. This cooler is beautifully crafted from magnesium and aluminum, and it’s whisper quiet. Unfortunately, the heatsink doesn’t seem to be available on GeForce GTX 770s out in the wild. Not one of the cards listed at Amazon or Newegg features the Titan cooler. Instead, they’re all equipped with custom solutions that quite likely aren’t as nice. Keep that in mind when looking at the noise and temperature results later in this review.
Other things of note
Nvidia has a handful of software bonuses for its recent GeForce cards. The first is GeForce Experience, which automates driver updates and game setting optimizations. Automating driver updates is fairly straightforward. Optimizing in-game detail settings based on the user’s hardware is a little more involved, though all the work is done on Nvidia’s end.
Games typically make their own settings recommendations based on system hardware. Those defaults tend to be fairly conservative, and they don’t always recognize new graphics cards. GeForce Experience is more aggressive, and it knows all about the latest GeForce models. It’s also capable of modifying game config files directly, making the optimization process a one-click affair for end users.
GeForce Experience’s optimization intelligence is based on profiling work conducted by Nvidia. The firm uses human testers to find demanding sections of games and benchmark the performance impact of various graphical settings. The performance impact of individual settings is weighted against their visual impact. Minimum frame rates are also defined based on the nature of the gameplay. All this information is fed into a software simulator that performs loads of iterative testing to determine the ideal settings for various hardware configurations.
For newbies who don’t know the difference between ambient occlusion and antialiasing, GeForce Experience takes the guesswork out of graphics tweaking—and explains how the various settings affect image quality. The settings recommendations aren’t just for the uninitiated, either. They can also be used as a starting point from which seasoned enthusiasts can proceed with further fiddling. The list of profiled games is already quite extensive.
On Kepler-based graphics cards, GeForce Experience will also serve as the server software for Nvidia’s Shield gaming handheld. Only a handful of games presently support streaming to the device, which is due to be released June 27.
Shield streaming relies on the H.264 encoding block incorporated in Kepler GPUs. Next month, that block will also be used by Nvidia’s ShadowPlay software. This application promises to record gaming sessions with much less of a performance penalty than existing game capture software. In fact, the performance overhead is so minimal that Nvidia expects gamers to have the feature enabled at all times. The always-on “shadow” mode allows users to allocate a chunk of system storage to recording the last few minutes of gameplay, ensuring there’s always evidence of epic feats. Let’s hope there’s an option for SSD users to point ShadowPlay to mechanical storage. There is an option for manual recording, and ShadowPlay may eventually support live broadcast via streaming services.
The performance results you’ll see on the following pages come from capturing and analyzing the rendering times for every single frame of animation during each test run. For an intro to our frame-time-based testing methods and an explanation of why they’re helpful, you can start here. Please note that, for this review, we’re only reporting results from the FCAT tools developed by Nvidia. We usually also report results from Fraps, since both tools are needed to capture a full picture of animation smoothness. However, we are building on a set of results from our GeForce GTX 780 review, and in that review, Fraps and FCAT generally seemed to agree on the nature and scope of any frame delivery problems. We think sharing just the data from FCAT should suffice for this review, which is generally about incremental differences between video cards based on familiar chips.
Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:
| Memory size | 16GB (4 DIMMs) |
| Memory type | DDR3 SDRAM at 1600MHz |
| Chipset drivers | INF update, Rapid Storage Technology Enterprise 18.104.22.1689 |
| Audio | Realtek 22.214.171.12462 drivers |
| Storage | Deneva 2 240GB SATA |
| OS | Service Pack 1 |
| | Driver | Base clock (MHz) | Boost clock (MHz) | Memory clock (MHz) | Memory size (MB) |
| --- | --- | --- | --- | --- | --- |
| GeForce GTX 660 Ti | GeForce 320.39 beta | 915 | 980 | 1502 | 2048 |
| GeForce GTX 680 | GeForce 320.18 beta | 1006 | 1059 | 1502 | 2048 |
| GeForce GTX 760 | GeForce 320.39 beta | 980 | 1033 | 1502 | 2048 |
| GeForce GTX 770 | GeForce 320.18 beta | 1046 | 1085 | 1753 | 2048 |
| GeForce GTX 780 | GeForce 320.18 beta | 863 | 902 | 1502 | 3072 |
| GeForce GTX Titan | GeForce 320.14 beta | 837 | 876 | 1502 | 6144 |
| Radeon HD 7950 Boost | Catalyst 13.5 beta 2 | 850 | 925 | 1250 | 3072 |
| Radeon HD 7970 GHz | Catalyst 13.5 beta 2 | | | | |
Thanks to Intel, Corsair, Gigabyte, and OCZ for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.
Also, our FCAT video capture and analysis rig has some pretty demanding storage requirements. For it, Corsair has provided four 256GB Neutron SSDs, which we’ve assembled into a RAID 0 array for our primary capture storage device. When that array fills up, we copy the captured videos to our RAID 1 array, made up of a pair of 4TB Black hard drives provided by WD.
Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.
In addition to the games, we used the following test applications:
The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
You can see in the raw frame time plots how the GeForce GTX 760 improves on the GTX 660 Ti and how the GTX 770 steps up performance from the GTX 680. In both cases, the newer 700-series cards produce more total frames than the 600-series products they replace, and the newer cards’ frame rendering times are ever-so-slightly lower, as well. The occasional spikes to higher frame times are also a bit less pronounced on the newer cards.
The traditional FPS average and our more latency-focused 99th percentile frame time agree that the new GeForces have edged ahead of the competing Radeons in their respective classes. The 99th percentile frame time is the threshold below which 99% of all frames were rendered, and the numbers for each of these cards look pretty good. All but the last 1% of frames were produced in about 35 milliseconds or less on even the slowest card. That should translate into generally smooth animation—and in our experience while play-testing, it does.
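For the curious, the 99th-percentile figure is computed just as it sounds: sort the frame times and find the value 99% of the way up the distribution. A minimal sketch with made-up frame times (not our captured data):

```python
import numpy as np

# Hypothetical frame times in milliseconds, including two "hitch" frames.
frame_times_ms = np.array([16.5, 17.1, 18.0, 16.8, 35.2, 17.3, 16.9,
                           18.4, 17.0, 52.7, 16.6, 17.8, 16.7, 17.2])

p99 = np.percentile(frame_times_ms, 99)  # 99% of frames finish faster
fps_avg = 1000 / frame_times_ms.mean()   # traditional FPS average
print(f"99th percentile: {p99:.1f} ms, average: {fps_avg:.0f} FPS")
```

Notice how the two hitch frames dominate the 99th-percentile number while barely denting the FPS average, which is exactly why we report both.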
The broader frame latency curve illustrates that the new GeForces are generally just a hair quicker than their competition, but there’s trouble in that last 1% of frames rendered. Those longer frame times come from the occasional spikes shown in the raw frame time plots above. They happen to different degrees on each card. We could feel these hitches while playing, right as we moved up the tunnel at the beginning of the session and later when we loosed the two explosive-tipped arrows and they did their thing. It appears those slowdowns are a little more severe on the GeForces than on the Radeons.
That fact is reflected in our “badness” metrics, which show the time spent working away on frames where more than X milliseconds have already passed. For instance, a 70-ms frame contributes 20 ms to our “time beyond 50 ms” metric. 50 ms is our primary threshold of “badness;” it corresponds to 20 FPS at a steady rate, and producing frames slower than that is likely to threaten the fluidity of the motion being portrayed. A little time spent beyond 50 ms may not be a big deal, but you wouldn’t want it to add up. 33.3 ms translates to 30 FPS and corresponds to two refresh intervals on a 60Hz display. Staying below that threshold should mean very good things for animation smoothness. And 16.7 ms translates to 60 FPS or 60Hz, which is as fast as most monitors can update the screen.
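To make that computation concrete, here’s a minimal sketch of the time-beyond-X metric using hypothetical frame times, not data from our test runs:

```python
# "Time beyond X": for every frame slower than the threshold, count only
# the excess. A 70 ms frame adds 20 ms to the time-beyond-50-ms total.

def time_beyond(frame_times_ms, threshold_ms):
    return sum(max(0.0, t - threshold_ms) for t in frame_times_ms)

frames = [16.7, 18.2, 70.0, 33.0, 41.5, 16.9]  # hypothetical samples
print(round(time_beyond(frames, 50), 1))    # 20.0 (only the 70 ms frame)
print(round(time_beyond(frames, 33.3), 1))  # 44.9 (36.7 + 8.2)
```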
In this case, our collection of analytical tools tells us this contest is almost a complete wash. The two new GeForces are generally faster than their Radeon rivals, but the Radeons do a bit better job of mitigating the occasional animation hiccup. Neither difference is huge, so take your pick.
Far Cry 3: Blood Dragon
The Radeon HD 7970 and 7950 have the advantage over the GTX 770 and 760 in a minor but consistent way across all of our metrics here. The more remarkable outcome is the fact that, even though we’re running at pretty high image quality settings at 2560×1440 resolution, none of the cards spend any time beyond our 50-ms “badness” threshold. Heck, only one of them spends much time at all above 33.3 ms. Yes, the higher-end GPUs are faster, and we have the numbers to prove it, but a $250 card like the GTX 760 handles this workload very well.
Our Tomb Raider scenario is a bit more challenging than our Blood Dragon test, but otherwise, the outcomes are almost exactly the same. I’m a little surprised at how modest the benefits are from the GTX 770’s faster GDDR5 memory. The 770 barely separates itself from the GTX 680 most of the time.
Guild Wars 2
So this is funky. Look at the frame time plots. The faster the GeForce card, the more latency spikes you’ll see from it. Must be some kind of quirk of this game engine or of Nvidia’s drivers, somehow. The thing is, those spikes are small, consistently below 35 ms, and just don’t add up to a problem in the grand scheme of things. The big story here, again, is the sheer competence of all of the cards. None of them breach our 50-ms threshold at all, and few surpass the 33-ms mark, either.
Oh, and Radeon versus GeForce, close contest, blah blah.
This is a pretty intensive game that looks good and has, in the past, been very helpful in pushing the fastest graphics cards to their limits. In this case, though, there are only two things not incredibly boring about these results. The first is the relatively strong showing of the Radeon HD 7970 GHz Edition compared to the GTX 780 and Titan. The second is the fact that the mid-range graphics cards don’t struggle at all here, even though we have this game’s quality options turned up pretty darn high. You may have to choose your image quality settings a little more carefully with a $250-ish graphics card, but pairing a GTX 760 or a Radeon HD 7950 with a 2560×1440 monitor in a new gaming rig is a viable option. Heck, we think you should do it as soon as possible.
The Radeons have a unique capability called ZeroCore power that allows them to spin down all of their fans and drop into a very low-power state whenever the display goes into power-save mode. That’s why they tend to draw less power with the display off. At idle on the Windows desktop, only a few watts separate the Radeons and GeForces in the same class. During our real-world gaming workload in Skyrim, we have a split result: the 7950 draws less power than the GTX 760, but the 7970 out-draws the GTX 770. Again, none of the differences are terribly dramatic.
Noise levels and GPU temperatures
All of these cards use the default cooler from the GPU maker, and that last graph illustrates how AMD’s coolers for the 7950 and 7970 aren’t all that spiffy. By contrast, that GTX Titan-style cooler on the GeForce GTX 770 is blissfully quiet. I wish the same could be said for the GTX 760’s puny reference cooler, which is just like the 660 Ti’s. That cooler doesn’t register too strongly on our decibel meter, but subjectively, it’s worse than the numbers would seem to indicate. The 760’s blower makes a rough sound that grates on my ears, and I find it hard to believe the smooth hiss of the Radeon HD 7990 somehow generates more decibels worth of noise.
Thing is, unless you’re getting a 7990, Titan, or GTX 780, you’re most likely not going to be buying a card with one of these reference coolers attached. Companies like Asus, Gigabyte, and Sapphire tend to use their own coolers instead, and the latest crop of heatpipe-laden heatsinks tends to perform very well, combining low noise levels with decent cooling. We may have to test a few of those next time around, but it just wasn’t, uh, in the cards for today.
Let’s summarize our performance results and mash ’em up with pricing using our famous price-performance scatter plots:
What you’re seeing, folks, is something very close to parity between Nvidia and AMD, which should be no great surprise if you’ve been following these things. There’s been loads of back-and-forth jockeying for position in the past 18 months. AMD introduced its 7000-series Radeons months before Nvidia followed with the GTX 600 series. AMD then countered with a mid-cycle refresh by slipping in the 7970 GHz Edition and the 7950 Boost. For a time, Nvidia still held an edge in our latency-focused tests, until AMD addressed some issues with its drivers and recaptured the lead. Now, Nvidia has done its own hardware refresh with the introduction of the GTX 700 series.
At the end of the day, despite all of the incremental changes, the performance gaps between Radeon and GeForce are minimal. The overall scores could swing a few points one way or another if we altered our selection of games used in testing. This contest is close enough to make little differences seem larger than they are.
Nvidia undoubtedly had the Radeon HD 7950 Boost in its sights as it set the clock speeds and price for the GeForce GTX 760. The result is a card that ties or slightly outperforms the 7950 Boost at a lower $249.99 starting price. That puts the GTX 760 in a better position on our value scatter plot.
Meanwhile, the GTX 770 is in a tougher spot. When it was introduced a couple of weeks ago, its $399.99 price tag undercut the Radeon HD 7970 GHz Edition. The price advantage was especially welcome since the 7970 GHz is apparently still the faster card. Now, AMD and board makers have cut 7970 GHz prices in response, and the Radeon occupies the better spot in our value plots. In fact, it looks like AMD has queued up some limited-time offers to drop below $399, likely in anticipation of this next round of reviews.
When things are this close, oh, the games they will play.
Not that there’s anything wrong with that. In fact, this sort of competition is a very good thing for consumers. We’re just not sure how to declare any definitive winners in this ongoing fight, under the circumstances.
AMD has sweetened the pot considerably by bundling several big-name games with its 7950 and 7970 cards through some retailers. As long as that deal is available, and assuming you don’t already own the games and would like to have them, the Radeons may be the more attractive option. Meanwhile, Nvidia has its own set of advantages to offer, including a clearly better track record of driver support for just-released games, the nifty auto-optimization features available via its GeForce Experience software, and markedly quieter coolers for cards based on its reference designs. If you’re considering multi-GPU solutions, Nvidia’s SLI is easily superior at present, too.
All of which leads us to the ultimate reviewer’s cop-out. Under the circumstances, we’re not gonna choose a winner. We’re just gonna say: take your pick. You really can’t lose either way.