Nvidia’s GeForce GTX 760 graphics card reviewed

The GTX 700 series sure is growing fast. Over the past month alone, we’ve seen the arrival of the GeForce GTX 780, the GeForce GTX 770, and now, the GeForce GTX 760. We hardware reviewers have been working tirelessly to keep up, spending long days, nights, and weekends benchmarking the new arrivals.

When will it end?

Today, actually. Nvidia says it doesn’t plan to extend the GTX 700 series below the GTX 760. The company’s desktop graphics lineup will remain as it is through the fall. Phew!

Oh, don’t get us wrong. We love to see fresh meat in the GPU market. However, the GTX 700 series isn’t based on new silicon. In each case, Nvidia has simply taken an old GPU and turned a few knobs, moved a few dials, and flipped a few switches to keep it from getting stale. All that tweaking has yielded some decidedly welcome performance-per-dollar improvements, but next-gen parts these are not.

The GeForce GTX 760 continues this succession of value-conscious makeovers by replacing the old GeForce GTX 660 Ti at a slightly reduced price: $249. As you’re about to see, Nvidia has made some interesting changes to the way it hobbles the GK104 chip to make this thing. Some unit counts have been increased, others have been reduced, and clock speeds have gone up. The result is a card that may be slower in some tasks but could be a better performer in today’s games—all for less money than its predecessor. That’s perhaps not as swoon-worthy as a brand-new graphics architecture, but it’s definitely something.

The GeForce GTX 760

The star of this morning’s show hails from the depths of Nvidia’s secret underground bunker. Or, more likely, some kind of QA lab or something.

Give us a twirl, won’t you, sweetheart?

From the outside, the GeForce GTX 760 looks pretty much exactly like its older sister, the GeForce GTX 660 Ti. The stock cooler is the same. The circuit board is just as stubby, and there are still two 6-pin PCI Express connectors providing power to the card.

What’s going on under the hood is quite different, though. With the GTX 660 Ti, Nvidia lopped off one of the GK104 chip’s eight shader multiprocessors (SMXes), leaving 1344 shader ALUs and 112 texels per clock of texture filtering power. The company also disabled one of the four ROP partitions and one of the four memory controllers, which gave us, respectively, 24 pixels per clock of resolve power and a 192-bit path to memory.

|  | GPU base clock (MHz) | GPU boost clock (MHz) | Shader ALUs | Textures filtered/clock | ROP pixels/clock | Memory transfer rate | Memory interface width (bits) | Peak power draw |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GeForce GTX 660 Ti | 915 | 980 | 1344 | 112 | 24 | 6 GT/s | 192 | 150W |
| GeForce GTX 680 | 1006 | 1058 | 1536 | 128 | 32 | 6 GT/s | 256 | 195W |
| GeForce GTX 760 | 980 | 1033 | 1152 | 96 | 32 | 6 GT/s | 256 | 170W |
| GeForce GTX 770 | 1046 | 1085 | 1536 | 128 | 32 | 7 GT/s | 256 | 230W |
| GeForce GTX 780 | 863 | 900 | 2304 | 192 | 48 | 6 GT/s | 384 | 250W |

In the GeForce GTX 760, all of the ROP partitions are enabled, as are all of the memory controllers. That means we have 32 pixels per clock and a full-fat 256-bit memory interface. However, one additional shader multiprocessor has been culled, which means we’re down to 1152 ALUs and 96 textures per clock.

The way Nvidia disables the SMXes also means different GTX 760 cards will have different tessellation capabilities. Remember, the GK104 chip’s eight SMX units are paired up inside four GPCs, or graphics processing clusters, and each GPC has a raster engine that can rasterize one triangle per clock. To make a GTX 760, Nvidia can either disable one entire GPC or turn off one SMX in each of two separate GPCs. In the former configuration, one of the raster engines goes dark, and the card rasterizes three triangles per clock. In the latter, all raster engines survive the culling, and the raster rate goes up to four per clock.

This isn’t the first time Nvidia has played musical chairs with raster engines. The GeForce GTX 780 is similarly inconsistent, with either four or five triangles rasterized per clock depending on how the GK110 chip is pared down. Offering inconsistent specs in a single product may not be ideal, but the ability to prune any two SMXes (or any three in the GTX 780) gives Nvidia much more flexibility to repurpose defective GPUs. Also, as far as the GTX 760 is concerned, even the worst-case scenario beats the competition, since AMD’s Radeon HD 7950 Boost only rasterizes two triangles per clock.
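If you want to convince yourself of that math, a quick brute-force enumeration of the pruning options does the trick. This little sketch assumes the GPC layout described above (four GPCs, two SMXes each, one raster engine per GPC) and is purely illustrative of the two possible outcomes, not a model of Nvidia’s binning process.

```python
# Counting how many of GK104's four raster engines survive when two of its
# eight SMXes are disabled to make a GTX 760. Assumes the layout described
# above: four GPCs, two SMXes per GPC, one raster engine per GPC.
from itertools import combinations

GPCS = 4
SMX_PER_GPC = 2
smxes = [(gpc, slot) for gpc in range(GPCS) for slot in range(SMX_PER_GPC)]

outcomes = set()
for disabled in combinations(smxes, 2):
    # A GPC's raster engine stays alive as long as at least one of its SMXes does.
    dead_gpcs = {g for g in range(GPCS)
                 if all((g, slot) in disabled for slot in range(SMX_PER_GPC))}
    outcomes.add(GPCS - len(dead_gpcs))

print(outcomes)  # {3, 4}: three triangles/clock if a whole GPC goes dark, four otherwise
```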

Speaking of clocks, the GeForce GTX 760 also boasts higher clock speeds than the GTX 660 Ti. Nvidia has cranked up the base speed from 915MHz to 980MHz, and it’s pumped up the peak Boost speed from 980MHz to 1033MHz. Part of the gain comes courtesy of Nvidia’s GPU Boost 2.0 algorithm, which uses GPU temperatures, not power draw, as the main factor in determining maximum speeds. When temperatures are low enough, GPU Boost 2.0 can raise voltages to increase the amount of clock-speed headroom for a given chip. No doubt thanks to that algorithm, you can expect to see even higher-clocked versions of the GTX 760 from Nvidia’s partners. Both Gigabyte and MSI will have cards with 1085MHz base clocks and 1150MHz Boost clocks, and some vendors will offer even faster models.
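Nvidia hasn’t published the internals of GPU Boost 2.0, but the temperature-driven behavior it describes can be sketched roughly like this. The temperature target, clock step, and loop below are placeholder values of our own, not Nvidia’s actual parameters, and real hardware also juggles voltage and power limits we’re glossing over.

```python
# Illustrative sketch of temperature-driven boost in the spirit of GPU Boost 2.0.
# The 80C target, 13MHz step, and cap below are placeholder values, not Nvidia's.
BASE_CLOCK_MHZ = 980    # GTX 760 base clock
MAX_BOOST_MHZ = 1033    # rated Boost clock (partner cards go higher)
TEMP_TARGET_C = 80      # assumed temperature target
CLOCK_STEP_MHZ = 13     # assumed clock bin size

def next_clock(current_mhz, gpu_temp_c):
    """Step the clock up while the GPU runs cooler than its target, back off
    when it runs hotter. Voltage and power limits are omitted for brevity."""
    if gpu_temp_c < TEMP_TARGET_C and current_mhz + CLOCK_STEP_MHZ <= MAX_BOOST_MHZ:
        return current_mhz + CLOCK_STEP_MHZ
    if gpu_temp_c > TEMP_TARGET_C and current_mhz - CLOCK_STEP_MHZ >= BASE_CLOCK_MHZ:
        return current_mhz - CLOCK_STEP_MHZ
    return current_mhz

clock = BASE_CLOCK_MHZ
for temp in [65, 66, 68, 70, 72, 75]:   # a cool-running card under load
    clock = next_clock(clock, temp)
print(clock)  # 1032: the clock walks up toward the Boost ceiling while temps stay low
```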

|  | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering (Gtexels/s) | Peak bilinear fp16 filtering (Gtexels/s) | Peak shader arithmetic rate (tflops) | Peak rasterization rate (Gtris/s) | Memory bandwidth (GB/s) |
| --- | --- | --- | --- | --- | --- | --- |
| GeForce GTX 660 Ti | 24 | 110 | 110 | 2.6 | 3.9 | 144 |
| GeForce GTX 680 | 34 | 135 | 135 | 3.3 | 4.2 | 192 |
| GeForce GTX 760 | 33 | 99 | 99 | 2.4 | 3.1 or 4.1 | 192 |
| GeForce GTX 770 | 35 | 139 | 139 | 3.3 | 4.3 | 224 |
| GeForce GTX 780 | 43 | 173 | 173 | 4.2 | 3.6 or 4.5 | 288 |
| Radeon HD 7870 GHz | 32 | 80 | 40 | 2.6 | 2.0 | 154 |
| Radeon HD 7950 Boost | 30 | 104 | 52 | 3.3 | 1.9 | 240 |
| Radeon HD 7970 GHz | 34 | 134 | 67 | 4.3 | 2.1 | 288 |

Here’s how the GTX 760’s combination of higher speeds and tweaked unit counts translates into peak theoretical rates. As you can see, the higher clocks don’t quite make up for the missing SMX—peak texture filtering and peak shader performance are both lower than on the GTX 660 Ti. At the same time, the peak pixel fill rate has gone up a fair bit, which should mean better antialiasing resolve performance. Memory bandwidth has increased substantially, as well.
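If you want to check our table, the peak numbers fall straight out of the unit counts and Boost clocks. Here’s the back-of-the-envelope arithmetic for the GTX 760 (just a calculation, not a measurement):

```python
# Back-of-the-envelope math behind the GTX 760 row in the table above.
boost_ghz = 1.033          # peak Boost clock, in GHz
rops = 32                  # pixels per clock
texels_per_clock = 96
alus = 1152
tris_per_clock = (3, 4)    # depends on whether a whole GPC is disabled
mem_rate_gtps = 6          # GDDR5 transfer rate, GT/s
mem_width_bits = 256

pixel_fill = rops * boost_ghz                          # ~33 Gpixels/s
filtering = texels_per_clock * boost_ghz               # ~99 Gtexels/s
shader_tflops = alus * 2 * boost_ghz / 1000            # ~2.4 tflops (one FMA = two flops)
raster = tuple(t * boost_ghz for t in tris_per_clock)  # ~3.1 or ~4.1 Gtris/s
bandwidth = mem_rate_gtps * mem_width_bits / 8         # 192 GB/s

print(pixel_fill, filtering, shader_tflops, raster, bandwidth)
```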

Compared to the Radeon HD 7950 Boost, the GTX 760 looks superior or competitive in all but raw shader speed and memory bandwidth. Of course, the 7950 Boost also has a wider, 384-bit memory interface, and it’s a slightly more expensive card right now. (Prices start at $279.99, or $259.99 after a mail-in rebate.)


Source: Nvidia

At 170W, the GeForce GTX 760’s power envelope is a little larger than the GeForce GTX 660 Ti’s 150W TDP. To keep noise levels in check, Nvidia has implemented a revised fan control algorithm that curbs fluctuations in speed. The result should be a more consistent noise profile—one that’s hopefully easier to tune out. A similar algorithm debuted in the GTX 780 last month, and it seemed to work wonders. Of course, the GTX 760 has a different heatsink and fan design, so we’ll have to do some hands-on noise testing to see how it fares.
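Nvidia hasn’t detailed the new fan algorithm, but the general idea of curbing fluctuations amounts to smoothing the fan’s response to temperature swings. Here’s an illustrative sketch; the fan curve and smoothing factor are invented for the example and aren’t Nvidia’s tuning.

```python
# Illustrative sketch of a fan controller that curbs speed fluctuations.
# The fan curve and smoothing factor are made up, not Nvidia's actual tuning.
def target_duty(temp_c):
    """Hypothetical fan curve: 30% duty below 50C, ramping linearly to 100% at 90C."""
    if temp_c <= 50:
        return 0.30
    if temp_c >= 90:
        return 1.00
    return 0.30 + 0.70 * (temp_c - 50) / 40

def smoothed_duty(previous, temp_c, alpha=0.05):
    """Move only a small fraction of the way toward the target each tick, so a
    brief temperature spike doesn't turn into an audible fan surge."""
    return previous + alpha * (target_duty(temp_c) - previous)

duty = 0.30
for temp in [60, 75, 75, 62, 60]:   # a short load spike, then back to cooler temps
    duty = smoothed_duty(duty, temp)
print(round(duty, 3))  # ~0.364: the fan drifts up gently instead of surging to ~74% and back
```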

But before that, let’s first have a look at another member of the GTX 700 series.

The GeForce GTX 770

The GeForce GTX 770 debuted a few weeks ago at $400. This higher-end offering is basically a juiced-up version of the GTX 680 for about $20 less. Today, we take our first look at how it performs.

Unlike the other members of the GeForce 700 series, the GTX 770 has a fully functional GPU. The GK104 chip is the same as what’s found in the GTX 760, but all four GPCs are intact, and so are the SMX units that lie within them. The same configuration was used in the GeForce GTX 680. This time around, however, the clock speeds have been turned up.

The GeForce GTX 770 has base and Boost clocks of 1046MHz and 1085MHz, respectively. Those are modest increases over the 1006/1058MHz clocks of the old GTX 680, and Nvidia’s new GPU Boost 2.0 algorithm deserves some of the credit. The clock-boosting tech is the same as that employed by the GTX 760 and other members of the 700 series.

On the memory front, the GeForce GTX 770 offers a 7 GT/s transfer rate, up 1 GT/s from the GTX 680. The memory interface is still 256 bits wide, so bandwidth has risen by a substantial 17%. Standard cards are available with 2GB of GDDR5 memory, and some vendors are offering 4GB variants for around $450. Card makers have also concocted hot-clocked models with Boost frequencies up to 1202MHz and memory transfer rates as high as 7.2 GT/s.
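The bandwidth claim is easy to verify with a line or two of arithmetic:

```python
# GTX 770 vs. GTX 680 memory bandwidth: same 256-bit interface, faster GDDR5.
width_bytes = 256 / 8                  # 32 bytes per transfer
gtx680_gbs = 6 * width_bytes           # 192 GB/s
gtx770_gbs = 7 * width_bytes           # 224 GB/s
print(gtx770_gbs / gtx680_gbs - 1)     # ~0.167, i.e. roughly a 17% increase
```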

To accommodate higher clock speeds, the GeForce GTX 770 also has a higher thermal envelope. The 230W TDP is up 35W from the GTX 680’s, and the onboard power connectors have changed to help supply the additional power. Instead of the dual 6-pin PCIe connectors on the GTX 680, the GTX 770 has one six-pin connector and one eight-pin one.

The card we’ve tested is an Nvidia reference model that uses the same swanky cooler as the GeForce Titan. This cooler is beautifully crafted from magnesium and aluminum, and it’s whisper quiet. Unfortunately, the heatsink doesn’t seem to be available on GeForce GTX 770s out in the wild. Not one of the cards listed at Amazon or Newegg features the Titan cooler. Instead, they’re all equipped with custom solutions that quite likely aren’t as nice. Keep that in mind when looking at the noise and temperature results later in this review.

Other things of note

Nvidia has a handful of software bonuses for its recent GeForce cards. The first is GeForce Experience, which automates driver updates and game setting optimizations. Automating driver updates is fairly straightforward. Optimizing in-game detail settings based on the user’s hardware is a little more involved, though all the work is done on Nvidia’s end.

Games typically make their own settings recommendations based on system hardware. Those defaults tend to be fairly conservative, and they don’t always recognize new graphics cards. GeForce Experience is more aggressive, and it knows all about the latest GeForce models. It’s also capable of modifying game config files directly, making the optimization process a one-click affair for end users.

GeForce Experience’s optimization intelligence is based on profiling work conducted by Nvidia. The firm uses human testers to find demanding sections of games and benchmark the performance impact of various graphical settings. The performance impact of individual settings is weighted against their visual impact. Minimum frame rates are also defined based on the nature of the gameplay. All this information is fed into a software simulator that performs loads of iterative testing to determine the ideal settings for various hardware configurations.
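Nvidia hasn’t spelled out exactly how its simulator weighs those factors, but the shape of the problem is familiar: pick the combination of settings with the biggest visual payoff that still keeps the predicted frame rate above a floor. The sketch below illustrates that idea with invented costs and impact scores; it’s not Nvidia’s profile data or algorithm.

```python
# Illustrative sketch of the optimization GeForce Experience performs: pick the
# detail settings with the most visual impact that still keep the predicted
# frame rate above a floor. The settings, fps costs, and impact scores below
# are invented for the example; they are not Nvidia's profile data.
from itertools import product

BASELINE_FPS = 90   # hypothetical frame rate with everything turned down
MIN_FPS = 40        # minimum acceptable frame rate for this kind of game

# setting name -> (fps cost when enabled, visual-impact score)
SETTINGS = {
    "high_textures":     (5, 8),
    "ambient_occlusion": (12, 6),
    "4x_msaa":           (25, 7),
    "soft_shadows":      (10, 4),
}

best_choice, best_quality = None, -1
for choice in product([False, True], repeat=len(SETTINGS)):
    fps = BASELINE_FPS - sum(cost for on, (cost, _) in zip(choice, SETTINGS.values()) if on)
    quality = sum(impact for on, (_, impact) in zip(choice, SETTINGS.values()) if on)
    if fps >= MIN_FPS and quality > best_quality:
        best_choice, best_quality = choice, quality

recommended = [name for name, on in zip(SETTINGS, best_choice) if on]
print(recommended)  # the highest-impact combination that stays above 40 FPS
```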

For newbies who don’t know the difference between ambient occlusion and antialiasing, GeForce Experience takes the guesswork out of graphics tweaking—and explains how the various settings affect image quality. The settings recommendations aren’t just for the uninitiated, either. They can also be used as a starting point from which seasoned enthusiasts can proceed with further fiddling. The list of profiled games is already quite extensive.

On Kepler-based graphics cards, GeForce Experience will also serve as the server software for Nvidia’s Shield gaming handheld. Only a handful of games presently support streaming to the device, which is due to be released June 27.

Shield streaming relies on the H.264 encoding block incorporated in Kepler GPUs. Next month, that block will also be used by Nvidia’s ShadowPlay software. This application promises to record gaming sessions with much less of a performance penalty than existing game capture software. In fact, the performance overhead is so minimal that Nvidia expects gamers to have the feature enabled at all times. The always-on “shadow” mode allows users to allocate a chunk of system storage to recording the last few minutes of gameplay, ensuring there’s always evidence of epic feats. Let’s hope there’s an option for SSD users to point ShadowPlay to mechanical storage. There is an option for manual recording, and ShadowPlay may eventually support live broadcast via streaming services.
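The always-on “shadow” mode boils down to a rolling buffer of recently encoded video. A minimal sketch of that idea might look like the following; the chunk length and retention window are arbitrary stand-ins, not ShadowPlay’s actual behavior.

```python
# Minimal sketch of an always-on "shadow" buffer: keep only the last few
# minutes of encoded video, discarding the oldest chunks as new ones arrive.
# The window length and chunk duration are arbitrary illustrative values.
from collections import deque

SHADOW_WINDOW_S = 10 * 60   # keep the last ten minutes (hypothetical)
CHUNK_SECONDS = 2           # each encoded chunk covers two seconds of gameplay

class ShadowBuffer:
    def __init__(self):
        max_chunks = SHADOW_WINDOW_S // CHUNK_SECONDS
        self.chunks = deque(maxlen=max_chunks)   # old chunks fall off automatically

    def push(self, encoded_chunk: bytes) -> None:
        """Called every CHUNK_SECONDS with H.264 data from the GPU's encoder block."""
        self.chunks.append(encoded_chunk)

    def save_epic_feat(self, path: str) -> None:
        """Dump whatever is currently buffered to disk when the user asks for it."""
        with open(path, "wb") as f:
            for chunk in self.chunks:
                f.write(chunk)
```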

Test notes

The performance results you’ll see on the following pages come from capturing and analyzing the rendering times for every single frame of animation during each test run. For an intro to our frame-time-based testing methods and an explanation of why they’re helpful, you can start here. Please note that, for this review, we’re only reporting results from the FCAT tools developed by Nvidia. We usually also report results from Fraps, since both tools are needed to capture a full picture of animation smoothness. However, we are building on a set of results from our GeForce GTX 780 review, and in that review, Fraps and FCAT generally seemed to agree on the nature and scope of any frame delivery problems. We think sharing just the data from FCAT should suffice for this review, which is generally about incremental differences between video cards based on familiar chips.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

| Processor | Core i7-3820 |
| Motherboard | Gigabyte X79-UD3 |
| Chipset | Intel X79 Express |
| Memory size | 16GB (4 DIMMs) |
| Memory type | Corsair Vengeance CMZ16GX3M4X1600C9 DDR3 SDRAM at 1600MHz |
| Memory timings | 9-9-9-24 1T |
| Chipset drivers | INF update 9.2.3.1023, Rapid Storage Technology Enterprise 3.5.1.1009 |
| Audio | Integrated X79/ALC898 with Realtek 6.0.1.6662 drivers |
| Hard drive | OCZ Deneva 2 240GB SATA |
| Power supply | Corsair AX850 |
| OS | Windows 7 Service Pack 1 |
|  | Driver revision | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB) |
| --- | --- | --- | --- | --- | --- |
| GeForce GTX 660 Ti | GeForce 320.39 beta | 915 | 980 | 1502 | 2048 |
| GeForce GTX 680 | GeForce 320.18 beta | 1006 | 1059 | 1502 | 2048 |
| GeForce GTX 760 | GeForce 320.39 beta | 980 | 1033 | 1502 | 2048 |
| GeForce GTX 770 | GeForce 320.18 beta | 1046 | 1085 | 1753 | 2048 |
| GeForce GTX 780 | GeForce 320.18 beta | 863 | 902 | 1502 | 3072 |
| GeForce GTX Titan | GeForce 320.14 beta | 837 | 876 | 1502 | 6144 |
| Radeon HD 7950 Boost | Catalyst 13.5 beta 2 | 850 | 925 | 1250 | 3072 |
| Radeon HD 7970 GHz | Catalyst 13.5 beta 2 | 1000 | 1050 | 1500 | 3072 |

Thanks to Intel, Corsair, Gigabyte, and OCZ for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Also, our FCAT video capture and analysis rig has some pretty demanding storage requirements. For it, Corsair has provided four 256GB Neutron SSDs, which we’ve assembled into a RAID 0 array for our primary capture storage device. When that array fills up, we copy the captured videos to our RAID 1 array, comprised of a pair of 4TB Black hard drives provided by WD.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

In addition to the games, we used the following test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Crysis 3


You can see in the raw frame time plots how the GeForce GTX 760 improves on the GTX 660 Ti and how the GTX 770 steps up performance from the GTX 680. In both cases, the newer 700-series cards produce more total frames than the 600-series products they replace, and the newer cards’ frame rendering times are ever-so-slightly lower, as well. The occasional spikes to higher frame times are also a bit less pronounced on the newer cards.

The traditional FPS average and our more latency-focused 99th percentile frame time agree that the new GeForces have edged ahead of the competing Radeons in their respective classes. The 99th percentile frame time is the threshold below which 99% of all frames were rendered, and the numbers for each of these cards look pretty good. All but the last 1% of frames were produced in about 35 milliseconds or less on even the slowest card. That should translate into generally smooth animation—and in our experience while play-testing, it does.


The broader frame latency curve illustrates that the new GeForces are generally just a hair quicker than their competition, but there’s trouble in that last 1% of frames rendered. Those longer frame times come from the occasional spikes shown in the raw frame time plots above. They happen to different degrees on each card. We could feel these hitches while playing, right as we moved up the tunnel at the beginning of the session and later when we loosed the two explosive-tipped arrows and they did their thing. It appears those slowdowns are a little more severe on the GeForces than on the Radeons.

That fact is reflected in our “badness” metrics, which show the time spent working away on frames where more than X milliseconds have already passed. For instance, a 70-ms frame contributes 20 ms to our “time beyond 50 ms” metric. 50 ms is our primary threshold of “badness;” it corresponds to 20 FPS at a steady rate, and producing frames slower than that is likely to threaten the fluidity of the motion being portrayed. A little time spent beyond 50 ms may not be a big deal, but you wouldn’t want it to add up. 33.3 ms translates to 30 FPS and corresponds to two refresh intervals on a 60Hz display. Staying below that threshold should mean very good things for animation smoothness. And 16.7 ms translates to 60 FPS or 60Hz, which is as fast as most monitors can update the screen.
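For reference, here’s how those numbers can be computed from a list of per-frame rendering times. This follows the definitions above rather than the internals of Fraps or FCAT.

```python
# Computing the review's latency metrics from per-frame rendering times (ms),
# following the definitions given above.
import math

def fps_average(frame_times_ms):
    """Traditional FPS average: total frames divided by total time."""
    return 1000 * len(frame_times_ms) / sum(frame_times_ms)

def percentile_99(frame_times_ms):
    """Threshold below which 99% of frames were rendered (nearest-rank method)."""
    ordered = sorted(frame_times_ms)
    rank = math.ceil(0.99 * len(ordered))
    return ordered[rank - 1]

def time_beyond(frame_times_ms, threshold_ms):
    """'Badness': a 70-ms frame contributes 20 ms to the time-beyond-50-ms total."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

frames = [16, 17, 18, 16, 70, 15, 33, 16]    # toy data, not results from this review
print(round(fps_average(frames), 1))          # 39.8 FPS
print(percentile_99(frames))                  # 70 ms for this tiny sample
print(time_beyond(frames, 50))                # 20 ms spent beyond the 50-ms threshold
```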

In this case, our collection of analytical tools tells us this contest is almost a complete wash. The two new GeForces are generally faster than their Radeon rivals, but the Radeons do a bit better job of mitigating the occasional animation hiccup. Neither difference is huge, so take your pick.

Far Cry 3: Blood Dragon



The Radeon HD 7970 and 7950 have the advantage over the GTX 770 and 760 in a minor but consistent way across all of our metrics here. The more remarkable outcome is the fact that, even though we’re running at pretty high image quality settings at 2560×1440 resolution, none of the cards spend any time beyond our 50-ms “badness” threshold. Heck, only one of them spends much time at all above 33.3 ms. Yes, the higher-end GPUs are faster and we have numbers to prove it, but a $250 card like the GTX 760 handles this workload very well.

Tomb Raider




Our Tomb Raider scenario is a bit more challenging than our Blood Dragon test, but otherwise, the outcomes are almost exactly the same. I’m a little surprised at how modest the benefits are from the GTX 770’s faster GDDR5 memory. The 770 barely separates itself from the GTX 680 most of the time.

Guild Wars 2



So this is funky. Look at the frame time plots. The faster the GeForce card, the more latency spikes you’ll see from it. Must be some kind of quirk of this game engine or of Nvidia’s drivers, somehow. The thing is, those spikes are small, consistently below 35 ms, and just don’t add up to a problem in the grand scheme of things. The big story here, again, is the sheer competence of all of the cards. None of them breach our 50-ms threshold at all, and few surpass the 33-ms mark, either.

Oh, and Radeon versus GeForce, close contest, blah blah.

Sleeping Dogs




This is a pretty intensive game that looks good and has, in the past, been very helpful in pushing the fastest graphics cards to their limits. In this case, though, there are only two things not incredibly boring about these results. The first is the relatively strong showing of the Radeon HD 7970 GHz Edition compared to the GTX 780 and Titan. The second is the fact that the mid-range graphics cards don’t struggle at all here, even though we have this game’s quality options turned up pretty darn high. You may have to choose your image quality settings a little more carefully with a $250-ish graphics card, but pairing a GTX 760 or a Radeon HD 7950 with a 2560×1440 monitor in a new gaming rig is a viable option. Heck, we think you should do it as soon as possible.

Power consumption

The Radeons have a unique capability called ZeroCore power that allows them to spin down all of their fans and drop into a very low-power state whenever the display goes into power-save mode. That’s why they tend to draw less power with the display off. At idle on the Windows desktop, only a few watts separate the Radeons and GeForces in the same class. During our real-world gaming workload in Skyrim, we have a split result: the 7950 draws less power than the GTX 760, but the 7970 out-draws the GTX 770. Again, none of the differences are terribly dramatic.

Noise levels and GPU temperatures

All of these cards use the default cooler from the GPU maker, and that last graph illustrates how AMD’s coolers for the 7950 and 7970 aren’t all that spiffy. By contrast, that GTX Titan-style cooler on the GeForce GTX 770 is blissfully quiet. I wish the same could be said for the GTX 760’s puny reference cooler, which is just like the 660 Ti’s. That cooler doesn’t register too strongly on our decibel meter, but subjectively, it’s worse than the numbers would seem to indicate. The 760’s blower makes a rough sound that grates on my ears, and I find it hard to believe the smooth hiss of the Radeon HD 7950 somehow generates more decibels’ worth of noise.

Thing is, unless you’re getting a 7990, Titan, or GTX 780, you’re most likely not going to be buying a card with one of these reference coolers attached. Companies like Asus, Gigabyte, and Sapphire tend to use their own coolers instead, and the latest crop of heatpipe-laden heatsinks tends to perform very well, combining low noise levels with decent cooling. We may have to test a few of those next time around, but it just wasn’t, uh, in the cards for today.

Conclusions

Let’s summarize our performance results and mash ’em up with pricing using our famous price-performance scatter plots:


What you’re seeing, folks, is something very close to parity between Nvidia and AMD, which should be no great surprise if you’ve been following these things. There’s been loads of back-and-forth jockeying for position in the past 18 months. AMD introduced its 7000-series Radeons months before Nvidia followed with the GTX 600 series. AMD then countered with a mid-cycle refresh by slipping in the 7970 GHz Edition and the 7950 Boost. For a time, Nvidia still held an edge in our latency-focused tests, until AMD addressed some issues with its drivers and recaptured the lead. Now, Nvidia has done its own hardware refresh with the introduction of the GTX 700 series.

At the end of the day, despite all of the incremental changes, the performance gaps between Radeon and GeForce are minimal. The overall scores could swing a few points one way or another if we altered our selection of games used in testing. This contest is close enough to make little differences seem larger than they are.

Nvidia undoubtedly had the Radeon HD 7950 Boost in its sights as it set the clock speeds and price for the GeForce GTX 760. The result is a card that ties or slightly outperforms the 7950 Boost at a lower $249.99 starting price. That puts the GTX 760 in a better position on our value scatter plot.

Meanwhile, the GTX 770 is in a tougher spot. When it was introduced a couple of weeks ago, its $399.99 price tag undercut the Radeon HD 7970 GHz Edition. The price advantage was especially welcome since the 7970 GHz is apparently still the faster card. Now, AMD and board makers have cut 7970 GHz prices in response, and the Radeon occupies the better spot in our value plots. In fact, it looks like AMD has queued up some limited-time offers to drop below $399, likely in anticipation of this next round of reviews.

When things are this close, oh, the games they will play.

Not that there’s anything wrong with that. In fact, this sort of competition is a very good thing for consumers. We’re just not sure how to declare any definitive winners in this ongoing fight, under the circumstances.

AMD has sweetened the pot considerably by bundling several big-name games with its 7950 and 7970 cards through some retailers. As long as that deal is available, and assuming you don’t already own the games and would like to have them, the Radeons may be the more attractive option. Meanwhile, Nvidia has its own set of advantages to offer, including a clearly better track record of driver support for just-released games, the nifty auto-optimization features available via its GeForce Experience software, and markedly quieter coolers for cards based on its reference designs. If you’re considering multi-GPU solutions, Nvidia’s SLI is easily superior at present, too.

All of which leads us to the ultimate reviewer’s cop-out. Under the circumstances, we’re not gonna choose a winner. We’re just gonna say: take your pick. You really can’t lose either way.

Comments closed
    • wingless
    • 6 years ago

    I just bought a GTX 760 and I have to say it is great bang for your buck. I upgraded from a Crossfire HD 4870 setup so my world has changed dramatically. I only have two concerns about Nvidia cards in general:

    1) Lack of DirectX11 11_1 feature set. I worry their lineup aren’t as future-ready as AMD’s.

    2) Lack of OpenCL 1.2 support. Nvidia OpenCL performance is PATHETIC! I may have been better served getting a Radeon HD 7950 Boost as I use Bitcoin and Folding@Home.

    Nvidia, please give me OpenCL 1.2 at least!

    • jonjonjon
    • 6 years ago

    lame. how much you want to bet every review site got a card that does four triangles per clock. people see the benchmarks and buy the card not realizing they could get an in inferior card for the same price. maybe i missed it but i didn’t see you mention what version of the card you tested for this article. seems like that should be something you should note.

    • rastaman
    • 6 years ago

    Disappointed there was no show of the 7870GHz edition in the review even though it was used in the spec comparison. Would have been nice to see how it fared against the bigger cards.

    • TO11MTM
    • 6 years ago

    Nvidia, get back to me when you can figure out what you are selling me.

    Still wish Matrix didn’t screw up so bad with Parhelia. =(

      • Airmantharp
      • 6 years ago

      That was the disappointment of that companies’ lifetime. They were dead to enthusiasts after that.

    • anotherengineer
    • 6 years ago

    I know the source engine may be old, however it is constantly updated and there are some decent games that run on it. It would be nice to see 1 source engine benchmark to see the gains over older cards or if the engine favors one brand over the other.

    Next review perhaps?

    • Bensam123
    • 6 years ago

    Meh… Why even bother adding a new number to the front of these cards? If they’re still competitive with AMD right now as soon as they release anything remotely better (say in the next four months), it’ll simply blow Nvidia out of the water. I can’t imagine they plan to stick with these cards for the next two years.

    Still the only thing I can possibly imagine to make up for this is they have a new series of cards they’re waiting to release with the 8xxx series, but that’d definitely make people who buy the 7xx series feel ripped off.

      • HisDivineOrder
      • 6 years ago

      Maxwell is going to come out later than the Radeon desktop line that will replace the 7xxx series. It was my understanding that AMD had used the 8xxx series as their rebadge for the 7xxx series (with not even a clockspeed increase or change in product) in the mobile line, so I’m wondering if they aren’t about to skip to the 9xxx series with the update that’s SUPPOSED to be coming at the end of this year.

      I’m not entirely convinced they’ll actually get it out this year, but I think nVidia’s next line of cards are supposed to be a die shrink AND a new tech, which means despite the fact they should be out in less than a year, it’s likely to be a year or more before they actually show up.

      Even when they do, this will be a case where drivers will be immature and in need of polish when they arrive. I think they’ll be great cards, but I think nVidia’s pricing with the Titan, then 780 are harbingers of where they imagine these future cards hitting in pricing. They’re setting our expectations ahead of time, so they can slap down a high end that’s between $650 and $999 and no one have a hissy fit. Instead of sinking a new line of cards with arguments about price, you’ll have people going, “Well, at least it’s a new card with new tech this time. Perhaps it makes sense at that high price!”

      nVidia are pretty damn good at manipulating expectations and Jedi-Mind-Tricking you into thinking suddenly, for example, that $650 for a video card is a deal when a year ago you thought $550 was not.

        • Bensam123
        • 6 years ago

        7xxx rebadge is only for mobile parts, they’ve been doing that for a few generations to keep them smaller and more concise.

        I doubt the 8xxx series for the desktops will be a rebadge. If that were the case they wouldn’t have held them back from this spring because they couldn’t figure out how to add a extra 80mhz to the GPU.

        I really don’t know what Nvidia is doing. They aren’t ‘dumb’ so they have to have something planned.

    • DeadOfKnight
    • 6 years ago

    LOL I just got an email from NVIDIA that says “GeForce GTX 670: The New Weapon of Choice”.

      • HisDivineOrder
      • 6 years ago

      Yeah, I laughed. It’s kinda clever when you think about it even if it was an accident. It gets it in their head, “This is a 670.” Then they look and it’s the 760, which should be weaker. Except they’re still thinking in the back of their mind, “This is a 670.”

      Next they’ll wave their hands and say, “These aren’t the droids you’re looking for…”

    • jabro
    • 6 years ago

    The bar chart on pg. 6 “Tomb Raider – Frame latencies by percentile” does not work for Radeons – it just shows a TR logo.

    Great article – tons of great info as usual!

      • jabro
      • 6 years ago

      Similar problem on pg. 7 with “Guild Wars 2 – Frame latencies by percentile” bar chart for Radeons.

      Also, it seems that the “System Noise Levels – Load” bar chart is repeated twice on pg. 9.

        • Cyril
        • 6 years ago

        Fixed pgs 6 and 7. Those buttons weren’t supposed to be there.

        Also fixed the repeated noise levels graph. Second one was supposed to show GPU temps.

          • willmore
          • 6 years ago

          Oh, crap, sorry, I saw that but forgot to mention it. My bad.

    • HisDivineOrder
    • 6 years ago

    Looks like an ideal way to get SLI to me.

    $500 for 690-level performance? Not bad. If only 670’s were $250, I’d buy another one to SLI. Alas. Maybe I’ll check ebay…

      • OmarCCX
      • 6 years ago

      Used 670s are in the low to mid $300.

        • Airmantharp
        • 6 years ago

        People that are selling perfectly good high-end cards so soon after release deserve to lose that much money on them…

    • themattman
    • 6 years ago

    Ahh, I’m getting dangerously close to having my video card get eclipsed by the numbering scheme. My last change was actually a downgrade when my 8800 GTS burnt out. 8800 GT forever!

      • MadManOriginal
      • 6 years ago

      Fortunately for you, NV is only using 3 digits now, so your card is at LEAST 10x better, right?!?

      • DrCR
      • 6 years ago

      I’m still waiting for another generation that over 9000.

        • Krogoth
        • 6 years ago

        [url<]http://www.youtube.com/watch?v=SiMHTK15Pik[/url<]

    • indeego
    • 6 years ago

    99th percentile/average FPS/$ almost 100% matched. I think we can put this controversy to rest now.

      • Firestarter
      • 6 years ago

      If we stop testing for this, how would we know when the drivers/game engines screw up again?

        • willmore
        • 6 years ago

        “Trust, but verify.”

    • tfp
    • 6 years ago

    So when do we see the GTX 760Ti with the 256bit bus and the same number of cores on the 660Ti?

      • Airmantharp
      • 6 years ago

      Why does it matter?

        • tfp
        • 6 years ago

        Why doesn’t it?

          • Airmantharp
          • 6 years ago

          You’re asking the question, but you’ve provided no frame of reference that can be used to formulate an answer. The bus width, by itself, is meaningless- they could have used a 384bit or 512bit bus and yet the part could have been slower, and they could have used a 128bit bus and the part could have been faster. Why is having, specifically, a 256bit bus on this theoretical GTX760Ti important to you?

            • travbrad
            • 6 years ago

            Yeah the bus width is interesting from a tech geek perspective, but as someone buying a card the only thing that should really matter is performance and price. How they achieved that performance is academic.

            EDIT: -2 because I recommend buying based on price/performance, not arbitrary metrics that don’t matter to consumers? Okay then..

            • derFunkenstein
            • 6 years ago

            I think he’s just looking to find out when they decide to plug the price gap.

            • tfp
            • 6 years ago

            And because they have increased the bus bandwidth it would be interesting to see who that impacts a similar configuration to the 660Ti.

            I bet it would provide higher performance at a higher price!

            • tfp
            • 6 years ago

            You’re over analyzing.

            I picked the middle between the 760 and 770. Nvidia has been doing the Ti in this GPU range for the last generation or two because of this i am expecting more of the same. If they do add a GTX7x0 Ti card I expect them to create a GTX760Ti with the configuration listed above.

            • Airmantharp
            • 6 years ago

            *You’re

            I’m not trying to analyze anything. I’m trying to get at what the hell you’re going on about; I shouldn’t have tried.

            • tfp
            • 6 years ago

            I’m not going on about anything I’m asking a simple question, I don’t see how it has to matter.

            • Airmantharp
            • 6 years ago

            Your question is too simple, and you’ve yet to fully state it. Still waiting on the actual question.

            • DeadOfKnight
            • 6 years ago

            256 FTW!

      • Krogoth
      • 6 years ago

      In other words, you want a 670 using the newer revision of GK104 silicon like 770 and 780?

        • tfp
        • 6 years ago

        No not really I just asking a question, when is the Ti version coming out. There is always a Ti or Maxxx or whatever version. What listed would be an obvious choice for the HW configuration.

          • Airmantharp
          • 6 years ago

          There is always a new version, and Nvidia could just as easily call it the ‘GTX760 Screw You’ to throw people off. All this ‘Pro’ ‘Ultra’ ‘Ti’ etc. stuff has been recycled at least once already.

            • tfp
            • 6 years ago

            And?

            • Airmantharp
            • 6 years ago

            And please keep posting so that your trolling gets noticed. Thanks!

            • tfp
            • 6 years ago

            Airmantharp your the best! Thanks for the feedback.

            Please take the time to check how long I have been at TR. Me asking this kind of question is not out of character and should not be completely unexpected. I don’t know the last time I was called a troll but it’s nice to know you care.

            • Airmantharp
            • 6 years ago

            The same to you!

            But your inability to articulate questions makes your posts here, which is what we’re talking about, quite useless. You’re not adding to the discussion. Further, your attitude borders on trolling; I sincerely responded wanting to understand your question, and got nothing useful back. Thanks again!

            • tfp
            • 6 years ago

            I asked a simple question and got an argument, as entertaining as this is I will try to explain the “goal” of the Nvidia Ti chips.

            The goal of the Ti chips is pretty simple in their product line. If you look at the scatter chart in the TR review you will see there is a performance/price hole in the GTX7X0 line up that a 760Ti would fill nicely. Based on this the simple question is when will that chip come out and what will the specs be. I asked that questions and made the assumption on the specs. I expected people would be able to understand the goal of top to bottom a product line and to go form there.

            Instead of a no response, an intelligent response, or a discussion on expected 760Ti chip specs you took the high road with “Why does it matter?”. None of this matters nor does any new GPU release matter, so your problem is solved.

            • Airmantharp
            • 6 years ago

            Ti is just another product segmentation. I don’t think Nvidia even ships full-fat GPUs anymore; they’re all cropped one way or another. I really don’t know what you’re going on about. It’s like you desperately want Nvidia to make another product between a host of already very close products.

            Do you want your own special trim on your car too? Sheesh.

            • tfp
            • 6 years ago

            Haha sure. I guess even when things are spelled out life is too hard. Next time I’ll type slower for you. 🙂

            • Airmantharp
            • 6 years ago

            You can attack me personally if you like; I’m aware that no good deed goes unpunished.

            • tfp
            • 6 years ago

            Can you explain your point? I am hardly obsessed with Nvidia’s product segmentation and I’m posting just to answer you. There is *always* someone that will post how they should have waiting for the X card to be released. So trying to understand the product line in full is reasonable. Your response was disingenuous.

            I can not understand what you are after, it’s not like this will inflate your post count in the forums.

            • Airmantharp
            • 6 years ago

            My point was to ask you to explain your point; it took a while, but I think you got it out there.

            • tfp
            • 6 years ago

            You think? Can you explain my point back to me? I want to make sure you understand it.

    • Wildchild
    • 6 years ago

    Glad to see they finally replaced that stupid memory bus that originally came on the 660 Ti. The price is very tempting as well.

    • willmore
    • 6 years ago

    So, if this is all we’re to expect form the 7xx series from nVidia for a while (and one assumes no new 6xx cards), then nVida pretty much has given up the <$250 segment of the market? Or would it be better to say <$200?

    Unless something has changed price and performance wise since their laungh, aren’t the GF650 and down horribly uncompetetive?

      • Airmantharp
      • 6 years ago

      These are the parts, yes, but the prices are always fluid. Nvidia’s GPUs are smaller and use less power, so they can easily beat AMD on the component costs, which means that they can always go lower than AMD, if they want. The price stratification you see today is a result of Nvidia not having to go lower, as they’re apparently selling enough GPUs to meet their quota at current prices.

        • willmore
        • 6 years ago

        Well, GT640 ($90-$100) at 65W gets well beaten by an HD7750 ($100-$110) at 55W.
        GTX650’s are $109 and I can’t find any good performance data for them but we can assume that the HD7770 is the competition. It also goes for $109 and there are a bunch with rebates.

        Next up would be the GTX660 which is just under $200 which puts it head to head with the HD7870 GHz Edition.
        For performance, the 7870 wins by a safe margin.

        The GTX660Ti is starting at $249 and goes up from there quite quickly–and it’s up against the HD7950 Boost.

        I don’t see a lot of sunlight in there for nVidia.

          • Airmantharp
          • 6 years ago

          You don’t see a lot of sunlight, personally, but I’m sure Nvidia’s shareholders do. They’d be crawling up management’s ass if they weren’t.

          Like I said, all Nvidia has to do is push their current tech (which is not on the GT640, I don’t believe) down the line and price lower than AMD. They’d price them out of the market, just like Intel could do if they wanted.

            • willmore
            • 6 years ago

            Yeah, that’s hard to tell. According to: [url<]https://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units,[/url<] there are a half dozen or so GT640's. One is based on GF116, two are based on GK107, one is based on GK208. Only one of them is not marked as OEM and it's GK107 based, so I would guess if you wander into a store and walk out with a GF640, it's going to be that guy. Though the GK208 part isn't labeled either way. The difference between the two seems to be DDR3 on the GK107 part and GDDR5 on the GK208. So, I guess that, either way, they're Kepler designs.

            The more telling factor is probably the die size--which is going to tell us more about the cost to manufacture and, hence the profit on the part (if we know the market price). Those GK107's are 118mm^2 while the GK208 is a tiny 79mm^2! The smallest 28nm AMD part is the Cape Verde part used in the HD77x0 family and it's bigger than both at 123mm^2.

            Die size doesn't tell the whole story as small die size differences can be swamped by yield on the design. We only need to go back a process step to 40nm to see AMD parts with much better yield than nVidia, so that's a possibility--no one but AMD and nVidia probably know that. The Cape Verde parts range from 45W up to 80W while the GT640's go from 49W to 75W. So, power is a wash.

            In the end, you're probably right, nVidia could fight harder on the $200 and under front, but they're not doing it, yet. The only place they have light currently is with this GTX 760. But, price/performance isn't the only thing that drives purchases. I've heard tell of this thing called a 'fanboy' which may have some influence. 😉

            Edited for spelling.

    • jossie
    • 6 years ago

    Whew, dodged a bullet there. I knew this thing was coming and bought a 660 on sale anyway. Looks like the 660 at $175 is still a better value.

    • jdaven
    • 6 years ago

    The PC component market is kind of like the Highlander (bad plot version) in reverse. Every year as fewer and fewer companies remain, we are experiencing the ‘Slowening’ of process tech and performance gains.

      • Airmantharp
      • 6 years ago

      You can blame yourself (the consumer) for part of that; if TDPs were still unrestricted, we could have much faster hardware. Instead, consumers have demanded more efficient electronics across the board, and companies like Intel and Nvidia have taken notice. Note that the GPU in everything from the GTX770 on down is a ‘half-Kepler’ which is tuned for performance per watt. Such GPUs used to be restricted to mid-range units at best.

      You can also blame TSMC (but not Intel) for fumbling lithography process steps. They fumbled 40nm and 28nm has been here far longer than planned.

        • travbrad
        • 6 years ago

        I agree consumers are partially to blame, but I doubt a lot of the people on this specific website are really to blame. Since 2004 I have spent $0 on laptops, and probably $3000+ on desktop PC parts. People always say “vote with your wallet” and I have, but it can’t counter-act the people willing to buy a new laptop every couple years.

        The amount of extra performance you get for a higher TDP doesn’t scale linearly either. Just look at AMDs new “5GHZ” CPU. IT has a 76% higher TDP than a FX8350, yet it’s only clocked 18% higher.

        [quote<]You can also blame TSMC (but not Intel) for fumbling lithography process steps. They fumbled 40nm and 28nm has been here far longer than planned.[/quote<] Not to be a TSMC apologist, but what they are doing is extremely difficult. They are basically in a fight against the laws of physics, and eventually the laws of physics are going to win. It appears even Intel may be starting to struggle with this, since they've delayed their 14nm chips. I know they don't have competition from AMD to push them, but surely 14nm would help them in their competition against ARM.

    • willmore
    • 6 years ago

    Looks like we should expect a price cut on those 7950Boost cards in response to the GTX 760–and maybe a bit of polishing of the driver to eak out some more performance.

    It’s good to see both companies keeping each others feet to the fire. Sure wish we had that in the CPU space.

      • Airmantharp
      • 6 years ago

      Yes!

      • eloj
      • 6 years ago

      Locally I see the cheapest 7950 boost (Sapphire) being about $20 (converted) cheaper than cheapest GTX760 (Zotac). A bit unfair given how new the 760 is, but if nVidia want to beat AMD on price they’re failing.

      Re: article saying “[..] but pairing a GTX 760 or a Radeon HD 7950 with a 2560×1440 monitor in a new gaming rig is a viable option. Heck, [i<]we think you should do it as soon as possible[/i<]." Thanks.. but no, I'm waiting for the next actual hardware generation. I've been ready to spend for a year now but these players don't want to play. Hopefully my 6950 will carry me into the next year and what I can only PRAY will be a fresh set of cards.

        • Airmantharp
        • 6 years ago

        If my two HD6950’s had worked like they should according to reviews and benchmarks at the time, I’d still be happily running them. If I’d stayed at 1200p, I’d still be happily running the GTX570 that the two HD6950’s replaced; the GTX570 didn’t have enough RAM for 1600p when I upgraded to a 30″ 2560×1600 primary monitor, or I’d have just gotten another. The AMD cards were technically plenty fast and much cheaper to get 2GB of memory on; I’d have had to sell the GTX570 with 1.25GB of RAM either way, and the GTX570’s with 2.5GB of RAM were quite overpriced (but worth it, apparently, as I’d still have them if I’d gone that route).

        But you’re right- there’s no reason to upgrade this time around. I didn’t even consider the HD7970’s- they were hotter, louder, and at release, they just weren’t that much of a performance jump. My HD6950’s were unlockable and had AMD’s decent stock blowers which I’d built my system around, so they ran very quiet; the HD7970’s blower was just too damn loud no matter what you did, and to make matters worse, you couldn’t get an HD7950 with a blower, which is what I would have done.

        • rxc6
        • 6 years ago

        I was in your situation. Just switched to an 1440p monitor and my 2+ year old unlocked 6950 wasn’t doing the job. I thought about waiting, but after seeing a Dual-X Sapphire 7970 for $300, I just bit the bullet. I thought about waiting for the next generation, but I think I got a good deal for the card.

          • Airmantharp
          • 6 years ago

          You did!

          And if AMD had release the HD7000-series with the performance they’re getting now, I would have figured out how to make one quiet and put that in the build. Their initial performance was just so underwhelming, and I didn’t want to put something louder than my HD6950’s in my build. They were just barely on the ‘quiet enough’ side of the noise threshold under load.

    • DeadOfKnight
    • 6 years ago

    On page 2 you mention that GeForce Experience is for Kepler, but it supports GPUs all the way down to the lowest Fermi cards.

      • Dissonance
      • 6 years ago

      Fixed. Got my lines crossed with Shield streaming support.

        • derFunkenstein
        • 6 years ago

        Never cross the streams.

          • JustAnEngineer
          • 6 years ago

          [url<]http://www.youtube.com/watch?v=jyaLZHiJJnE[/url<]

        • Silus
        • 6 years ago

        Speaking of which, will TR do a review of Shield ?

    • flip-mode
    • 6 years ago

    Much ado about nothing. The fan controller optimization is the most exciting feature of the card. Other than that, a bunch of “fine tuning” of the pruning of the GK104 that results in what looks like a 2% overall performance gain. The price cut is nice but you don’t need to launch a “new” card to cut prices.

    Wow. I’ve given up protesting against meaningless model number incrementation, but this latest round by both Nvidia and AMD really helps keep the cynicism alive. Take CPU and GPU events together and 2013 is turning into a bunch of hot air as far as hardware progress goes. Sheesh. It’s a good year to wait for next year.

      • CampinCarl
      • 6 years ago

      Yeah…this year has made me glad I updated my system last year.

      • Airmantharp
      • 6 years ago

      On the GTX760, yes- on the GTX770, the most exciting part is the new fan! Also, EVGA is shipping overclocked versions of the GTX770 with the excellent Nvidia fan; wish they’d gotten a mention.

      • HisDivineOrder
      • 6 years ago

      Remember, next year the only CPU upgrade that MIGHT be worthwhile from Intel is going to be Haswell-E (true octa-core? new chipset? fluxless solder? $500-900) and that’s if it’s not delayed. Certainly, Haswell Re-released for LGA/desktop is going to suuuuck.

      If nVidia and AMD don’t delay their GPU’s after Apple’s recent contract with TSMC, then MAYBE we’ll get some discrete GPU’s that’ll have tech worth being excited about, though.

      I suspect this year will be a good year to wait for two years from now. Which, I guess, makes it a good year to buy a next gen console since enthusiasts by now will have little more to buy once they’ve bought in with 600/700SLI/79xxCF, SB/IB/Haswell, and a SSD. Intel, AMD, and nVidia are currently letting the enthusiast rest for a good long while between giving us anything to buy.

      Maybe AMD’s next gen GPU’s will be different, but based on what they did with the 79xx series out the gate (two Decembers ago), I suspect more of the same with better performance per watt and much higher prices.

        • Farting Bob
        • 6 years ago

        Now has never been a better time to save your money and wait for better things next year in the PC world. About the only thing worth upgrading is to a new SSD, and that is only really if you are still on a HDD or early gen SSD. I plan on waiting till DDR4 is common and cheap enough to get 16GB of the stuff and slap it on whatever Intel CPU uses it. Even then i’ll probably keep my 7850 GPU.

      • travbrad
      • 6 years ago

      These new cards are probably more interesting to those who haven’t upgraded within the last couple years. I don’t think very many people with a 600 series card are going to get a 700 series card, just like not many people with Ivy/Sandy Bridge are going to get Haswell.

        • Airmantharp
        • 6 years ago

        Yeah, these aren’t for us. The ones for us will have 8GB or 12GB of DDR5 for compatibility with future console ports and 4k monitors.

        • Diplomacy42
        • 6 years ago

        well the 780 is the only 7 series card, and its 650+.

        everything else is just an extension of the 600 lineup, so yeah, trading in your 660 for a 760 makes about as much sense as trading into a 680.

          • Airmantharp
          • 6 years ago

          They’re all Kepler; same architecture that’s been tuned for different purposes at different price points.

        • flip-mode
        • 6 years ago

        Yeah, but, but, but, it’s barely any faster at all than a 660 Ti. I see the whole thing as a worthless exercise. Just drop the price of the 660 Ti and call it a day. But Nvidia’s reasons for doing this are very understandable and justifiable: to sell more cards. On those grounds, it’s very worth it, but as an informed consumer, it seems ridiculous.

          • PixelArmy
          • 6 years ago

          It is marginally faster than the 660Ti, but enough to shake up the value scatter plot (averaged it’s cheaper and better than the 7950 Boost), Just dropping the 660Ti price puts the 660Ti in a better spot on the plot (not as powerful but cheaper) when compared to the 7950 Boost. Given that, I think it’s existence is justified (a win-win for both the consumer and Nvidia), so now what to name it?

          Well it has different specs, so it needs at least a different name. To me, calling it a ‘7’ series is far less annoying than LE, Ti, Boost, or GHz Ed markings. Even worse would be to just call this the “new” 660Ti or 660Ti-xyz core. I’m sure you’ve complained about this…

          Edit: To clarify “better spot”, a lower priced 660Ti would move it’s plot position horizontally left, into a better spot compared to where it currently sits (directly under the 7950 Boost), not that that spot is necessarily better than the 7950 Boost.

    • brucethemoose
    • 6 years ago

    Still hate how TR never OCs cards. I’m in the market for a new card now, but this excellent review is nearly useless when it doesn’t represent how I’ll be running a card.

      • DeadOfKnight
      • 6 years ago

      Someone else will do it and you can see their numbers. I don’t expect to be shown an OC you may not even get on a value plot. These are reference design offerings. What percentage of readers do you really think is going to be overclocking them? I mean, if you’re really wanting to overclock cards, I highly suggest you go with a non-reference cooler especially with the new GPU boost technology. They may not be as pretty as the new GTX 690-style cooler, but they’ll give you better numbers any day of the week.

      For the record, I still thumbed you up after someone else thumbed you down because I don’t think it’s not a valid point, but we don’t need to be hating TR for it. You can decide AMD or NVIDIA based on features alone with them on top of each other like this. What more do you want?

        • Airmantharp
        • 6 years ago

        I’d actually like to see the overclocking results, but not for the performance or value- those are easy to extrapolate. I want to see how the reference coolers handle it, especially with respect to noise, as I only buy the reference blowers.*

        *When someone else makes a better blower for either flavor of card that is better than Nvidia’s best, I’ll reconsider. But I haven’t seen it.

      • danny e.
      • 6 years ago

      It’s like complaining that some auto magazine car review is useless because they run the cars stock.
      “But hey, I’m planning on pulling that stock 4 cylinder engine out and cramming a V8 in my fiesta!”

      You can’t review things that are not “stock” because there are too many variables.
      If the manufacturer ships the cards overclocked, that’s a different story. I believe, TR always leaves the clock speed alone on manufacturer overclocked cards when it reviews them.

      edit extra engine. tired.

      • travbrad
      • 6 years ago

      Maybe it’s just me but I’ve never had much luck with graphics card overclocking. I think the best OC I’ve ever achieved without artifacting is maybe 10% at best. I’ve owned both AMD and Nvidia cards from many different board partners, but none of them overclocked well.

      On the other hand CPU overclocking is super easy. I’m running a 40%+ overclock on my current CPU, and could run a 50% overclock if I wanted to put up with a lot of extra heat.

        • Airmantharp
        • 6 years ago

        I’m sold on EVGA Superclocked and FTW cards for now.

        • Firestarter
        • 6 years ago

        Depends very much on the GPU you got and what you consider overclocking. I got a HD7950 at launch and by the normal definition of overclocking, I have it overclocked 38%. It runs at 1100 MHz now, stock was 800 Mhz. Key here is that everybody found that the boards were capable of running the GPUs at much higher speeds than stock, in a sense AMD downclocked the cards for market segmentation, not because the cards couldn’t handle it.

        In contrast, if you buy a HD 7970 GHz edition right now and tried overclocking it, you’d end up with similar (probably a bit higher) clock speeds, but that would only translate to a ~15% overclock. Sounds less impressive, but it’s essentially the same result with more or less the same hardware, the only reason it’s not a overclocking monster in the traditional sense is because the hardware is already pushed quite hard to meet the marketing goals.

      • DrCR
      • 6 years ago

      I hate how TR never tests with phase change.

      😉

      • ULYXX
      • 6 years ago

      Oh come on now, that’s a bit harsh, man. TR set a lot of standards and started the percentile benchmarks. Not many people know how to OC their cards anyway, and there are niche sites specifically for that kind of review. 🙂

    • desertfox
    • 6 years ago

    I’m seeing lots of mentions of the GTX *770* on page 2, including in the header. Copy/paste error?

      • Dissonance
      • 6 years ago

      Nope. We’re talking about the 770 there 😉

      • DeadOfKnight
      • 6 years ago

      Yeah. We haven’t seen a 770 review due to Haswell, Computex, and the crap ton of AMD reveals. This is it.

    • eofpi
    • 6 years ago

    I’m curious why the GTX 670 isn’t included in the spec tables on page 1. It’s clearly a relevant comparison to the 770, being its direct predecessor.

      • Airmantharp
      • 6 years ago

      The GTX680 is the GTX770’s direct predecessor, and they likely didn’t include the 670 in the spec listing because they didn’t include it in the benchmarks. I would like to have seen it in the benchmarks, though, since I’m running two of them. The GTX670 should slot right between the GTX760 and the GTX680, I’d think.

    • Alexko
    • 6 years ago

    ” Meanwhile, Nvidia has its own set of advantages to offer, including a clearly better track record of driver support for just-released games”

    In light of recent developments with Gaming Evolved and next-generation consoles, that’s a rather dubious argument.

      • StuG
      • 6 years ago

      Not really. Those are just prospects that things might turn in AMD’s favour. Sadly, they cannot change the past, and his statement was right. A track record is a track record.

        • Alexko
        • 6 years ago

        And the Radeon 9700 Pro was much faster than the GeForce 4 Ti 4600, you cannot change the past.

        It’s also completely irrelevant to the current situation.

          • Airmantharp
          • 6 years ago

          It will be irrelevant when AMD un-borks their Crossfire drivers and TR/Anandtech verify the fix. Not everyone uses more than one GPU, but with 4K on the immediate horizon, I’m sure more people than ever will be interested.

          Till then, their drivers are technically broken. It’s hard to recommend a single card when you can’t recommend adding a second one in the future.

            • Krogoth
            • 6 years ago

            The problem is much more fundamental. It’s the nature of multi-GPU rendering, and SLI isn’t immune to it. Nvidia just invests more time in its driver profiles to make the experience more tolerable, but the performance and stability issues and micro-stuttering still exist in newly released titles or older games that are no longer supported.

            SLI/CF have always been about epenis scores and being a marketer’s wet dream.

            I’ve read about and seen too many ex-SLI/CF users going back to single cards after having had enough of the SLI/CF shenanigans.

            • Airmantharp
            • 6 years ago

            I didn’t buy two cards to enhance my epenis; I bought them to run the game I was most interested in at decent settings on a nice monitor (which was the real investment, and is used for things other than gaming).

            • Krogoth
            • 6 years ago

            A single Titan or 780 can achieve all of that with none of the drawbacks of SLI/CF. It comes at a cost, though: $650-$999, take your pick.

            • Airmantharp
            • 6 years ago

            A single Titan might, you’re right, but I didn’t originally plan on getting two cards (otherwise saving up for one would have been wise). I’m a two-card man now, though; my next upgrade will be to support a 4K monitor, roughly when both become affordable, which I’m pessimistically thinking will happen in 9-12 months.

            The reality, though, is that boards with dual 16x slots are basically the norm for gaming builds whether two cards are or not, and the same goes for PSUs in the 650W-750W range that have the connectors (four 6-pins, usually).

            So it’s not like a second card is much more than a drop-in for the average enthusiast.

            • Bensam123
            • 6 years ago

            Not everyone, but most people… say 98% of users have one card.

            I’ll believe 4K when it makes it mainstream. There is and always has been a better resolution on the horizon, but how long did it take us to get something, anything, remotely better than 1080p? 1440p is only now becoming semi-normal.

            • Airmantharp
            • 6 years ago

            1600p has been around far longer than 1080p, if you want to look at it that way. But it’s never been normal.

            I do, however, expect a quick race to the bottom with 4K; there’s really no reason it should be significantly more expensive aside from the ramp-up to good economies of scale.

            All that Chinese 55″ panel needed was the electronics to accept a 60Hz signal at full resolution. It was even IPS, and it could accept 1080p at 120Hz, which very few IPS monitors can do.

            • Krogoth
            • 6 years ago

            4K is going to take a long time to become mainstream, and by that point the current generation of GPUs will be completely out of date. It’s like expecting a 1900XT/7950GTX to pull off playable 4-megapixel gaming on current titles.

            • Bensam123
            • 6 years ago

            That’s good IMO. It gives hardware manufacturers something to strive for. A decade ago you sometimes had to reduce your resolution to get a playable game because GPUs couldn’t handle it. I see no reason why that shouldn’t exist today.

            • Airmantharp
            • 6 years ago

            In reality, the only thing that needs to be upgraded is the amount of GDDR5; we need 8GB-12GB to handle the incoming console ports as well as output to 4K displays (with the extra memory needed for advanced per-pixel AA, shaders, etc.).

            • Krogoth
            • 6 years ago

            You realize that the Xbox One and PS4 do not have the hardware to handle 4K gaming? It is going to be a while before performance discrete GPUs can handle such resolutions with AA/AF on top, even with SLI/CF solutions.

            4K is currently in the realm of video playback and 3D render farms.

            • Airmantharp
            • 6 years ago

            No, they don’t have the hardware, but they do have 8GB of RAM, which means they could easily be using >4GB for game graphics when rendering at 1080p.

            I’m saying you’d need 8GB at a minimum to be in the clear for upcoming console ports that make use of the memory the new consoles have, while also being able to render those games at up to 4K. 12GB would just be for the 384-bit memory controllers that currently support 6GB, since 6GB may not be enough.

            • Deanjo
            • 6 years ago

            That 8 GB is total system memory; it’s not just video RAM.

            • Airmantharp
            • 6 years ago

            I’m not arguing the obvious. But we have no idea, yet, exactly how that memory will be used by the operating systems and games that will run on these consoles. We could assume that PC users would be safe with 6GB, but we would *know* that they’d be safe with 8GB. RAM is cheap; that’s why the consoles have 8GB in the first place. Why skimp on the PC side?

      • jihadjoe
      • 6 years ago

      You know, it baffles me why we need so much tweaking just to get support for new games.

      The games are written with DirectX or OpenGL, the driver is supposed to support DirectX and OpenGL, and if things were working as intended then there shouldn’t be any need to tinker with it. I mean, that was the whole point behind having all these Hardware Abstraction Layers, wasn’t it?

      The only conclusion I can come to is that the standard DX and OGL implementations in both camps are broken to some degree, at least to the point where someone has to tweak the driver just to get things working when they should have been working in the first place.

        • Airmantharp
        • 6 years ago

        It’s mostly that the games are broken, I think; that doesn’t get fixed. Publishers are lazy as hell.

        The drivers, on the other hand, can be tweaked and there is economic incentive for the vendors to invest in those tweaks. I have to assume that the tweaks involve less ‘fixing’ and more ‘optimizing’ of datasets so that they flow from application to API to driver to hardware more smoothly and properly sync up with the other parts of the game, API, and driver.

        • Silus
        • 6 years ago

        And your conclusion is wrong.

        When you say “the driver is supposed to support DirectX and OpenGL”, it seems like you expect everything to work magically. But in the world of software and hardware interaction, that “magic” doesn’t exist.
        Hardware is certified for both DirectX and OpenGL, because both APIs require support in hardware for a pre-defined set of features. When you write a driver for a graphics card and take advantage of those features, you will need to translate all the DirectX and OpenGL API calls into proper instructions supported by the hardware. But this isn’t accomplished in a standard way. It’s done in the context of the given graphics architecture.

        Take GCN, for example. It excels at integer calculations but isn’t as good at everything else. So AMD has to tailor its drivers to take advantage of the architecture’s strengths just to perform a simple API call from DirectX or OpenGL, which will involve several operations on the GPU. Kepler is not as good at integer calculations as GCN, but it excels in other areas, so NVIDIA’s driver will likewise make good use of the Kepler architecture’s strengths to perform the exact same API call. This requires a lot of tinkering and testing to find the set of instructions that achieves the best possible performance at the best possible quality settings, in both drivers and sometimes game code too.

        This is why the direct-to-metal approach used in console development is fundamentally different from, and more efficient than, what happens on the PC. It’s also why the claim some people around here like to spread, that AMD will have an advantage on the PC because it will be in all the next-gen consoles, is nonsense. The truth is that AMD can be in all the next-gen consoles, but that will not make it better on the PC if its drivers continue to be shoddy. Until there is a common architecture for graphics, nothing will change in that regard. Since a common architecture is most likely never going to happen, the PC will remain as it is for a very long time.

          • jihadjoe
          • 6 years ago

          Err, that was the point I was making in the first place.

          That we need all this tweaking points to the fact that the drivers themselves are far from optimized. It’s not like games are allowed to make direct-to-metal calls outside of the API, so it was basically a matter of properly and efficiently implementing the entire API.

          If this were done correctly from the start, then things would Just Work™, or at worst require a minimum of fuss. It’s not like the API, or their own hardware, is a great mystery to AMD or Nvidia…

          …or maybe it is, and they’re continually getting surprised every time a publisher comes out with a new game?

            • Silus
            • 6 years ago

            No, your point was that something was fundamentally wrong and it isn’t, as I tried to explain. Abstraction is what creates the “problem”…it’s not really a problem, just how things are on the PC.

            Not sure what you mean by “That we need all this tweaking points to the fact that the drivers themselves are far from optimized,” since the tweaking IS what makes the drivers optimized, as I also tried to explain.
            Tweaking exists because each architecture is different in how it achieves the same goal. It’s all about the architecture and its specifics, be it the cache and how much of it there is, general shared resources, or how data is sent back and forth and how fast it gets where the GPU needs it in order to perform optimally. This is what I was generalizing when I mentioned “strengths.” Kepler may require 3 instructions (ADD, MUL, etc.) to achieve goal A, but GCN may require 4 or 5 to achieve that same goal, and vice versa. All of this is what drivers do: they translate an API call that, to us, basically just shows a bunch of pixels on screen into complex operations on the GPU that let it do that in the fastest way possible.
            Also, different game engines don’t achieve the same goal in the same way, even if the API used is the same. This is software, and you can achieve the same goal in a multitude of ways, which adds more complexity to driver optimizations and sometimes requires vendor-specific optimizations in the game’s own code.

            And this isn’t a problem; it’s just the way things work on a platform where the graphics architectures are entirely different. To change it, NVIDIA, AMD, and Intel would need to share the same underlying architecture for their graphics cards. Only then could developers use the direct-to-metal approach as they do with consoles, but that’s not going to happen anytime soon.
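
            To make that idea concrete, here’s a purely illustrative sketch; the class names, backends, and “instruction” strings below are made up for the example and don’t correspond to any real NVIDIA or AMD driver internals. The point is only that the same abstract API call goes in, and each architecture-specific backend emits a different sequence of operations:

            // Toy illustration (not real driver code): one abstract "draw" call,
            // translated differently by two hypothetical GPU backends.
            #include <iostream>
            #include <memory>
            #include <string>
            #include <vector>

            // The call the game sees through the API, e.g. "draw these triangles."
            struct DrawCall {
                int triangleCount;
            };

            // Each architecture translates the call into its own operation stream.
            class GpuBackend {
            public:
                virtual ~GpuBackend() = default;
                virtual std::vector<std::string> translate(const DrawCall& call) const = 0;
            };

            class HypotheticalKeplerBackend : public GpuBackend {
            public:
                std::vector<std::string> translate(const DrawCall& call) const override {
                    // Imagine this architecture needs three operations for the result.
                    return {"SETUP_STATE", "BIND_VERTEX_CACHE",
                            "DRAW x" + std::to_string(call.triangleCount)};
                }
            };

            class HypotheticalGcnBackend : public GpuBackend {
            public:
                std::vector<std::string> translate(const DrawCall& call) const override {
                    // ...while this one needs four, arranged differently.
                    return {"FLUSH_L2", "SETUP_STATE", "PREFETCH_VERTICES",
                            "DRAW x" + std::to_string(call.triangleCount)};
                }
            };

            int main() {
                const DrawCall call{1024};
                std::unique_ptr<GpuBackend> backends[] = {
                    std::make_unique<HypotheticalKeplerBackend>(),
                    std::make_unique<HypotheticalGcnBackend>()};

                // Same API call in, different operation streams out; tuning that
                // translation per game and per architecture is the "tweaking."
                for (const auto& backend : backends) {
                    for (const auto& op : backend->translate(call)) {
                        std::cout << op << '\n';
                    }
                    std::cout << "---\n";
                }
                return 0;
            }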

      • Klimax
      • 6 years ago

      No. GE games still often run better on Nvidia, and consoles don’t matter to PC gaming. Porting to PCs will take care of it.

      • TO11MTM
      • 6 years ago

      ATI has been promising better drivers for years. The last time I found them tolerable was with the old 9000 series.

        • Airmantharp
        • 6 years ago

        I’ve been using AMD for a very long time. The only time I found their drivers intolerable was when I used more than one card.

    • Airmantharp
    • 6 years ago

    Very good review, thanks!

    I was also not aware that major GTX770 vendors were not shipping the Titan cooler. That’s just every kind of reprehensible and wrong!

      • Airmantharp
      • 6 years ago

      Actually, this EVGA GTX770SC (http://www.newegg.com/Product/Product.aspx?Item=N82E16814130922) came up during my first search, and is probably the card I'd get if I were getting one (but I'd be saving up for a GTX780 if I were in the market).

        • kilkennycat
        • 6 years ago

        BEWARE… this GTX770SC cooler is NOT the Titan-design. It is a GTX680 blower.

        The nVidia GTX770 reference design with the Titan cooler is not available in the US. Maybe when the supply of excess GTX680 circuit boards (with the updated GK104 silicon, i.e., GTX770) is exhausted, you might see a Titan-cooler variant. Probably at a slightly higher price, but well worth the extra cash for the associated peace ‘n quiet.

          • Airmantharp
          • 6 years ago

          Not that the GTX680 cooler isn’t an excellent cooler for GK104 (that’s what my GTX670 FTW has), but I didn’t know that this was what EVGA was doing. So sad.

          The Titan blower is definitely a selling point; they should ship it as a premium SKU for every GK104-based model.

      • darryl
      • 6 years ago

      If you want to read a good review that addresses GTX770 cooling, see here: http://www.hardocp.com/article/2013/06/17/asus_geforce_gtx_770_directcu_ii_video_card_review/#.UcmtmNKsGPo The price is also very reasonable for the competitive performance of this particular card.

        • Airmantharp
        • 6 years ago

        Thanks, but I’ll stick to TR’s reviews; they’re written in English, and their benchmark methods are the ‘benchmark’.

        I’m also, as alluded to in my two previous posts, not fond of third-party coolers that forgo Nvidia’s excellent blower, as ASUS insists on doing. I’ll keep my EVGAs.

          • DeadOfKnight
          • 6 years ago

          Why get the blower version? They offer less cooling and more noise. Slightly higher ambient case temps aren’t really going to affect your CPU that much with closed-loop liquid cooling if you run it as an intake, as recommended. Of course, I don’t like the idea of running an intake through a radiator, as that really messes up the physics of how case cooling is designed to work. Still, I think you’d get more benefit from a twin-fan design on the GPU. Particularly for gaming, higher GPU Boost clocks will give you more than higher turbo frequencies on the CPU.

            • Airmantharp
            • 6 years ago

            I’m using more than one card, and I don’t want a cavernous case.

            Not everyone’s list of requirements, I know; I should post some shots of my setup. It’s potent for being as small as it is.

            • DeadOfKnight
            • 6 years ago

            A 1-slot gap between the cards is adequate to see the benefit of twin-fan designs over a blower. Or are you using a µATX motherboard?

            • Airmantharp
            • 6 years ago

            I’ve seen tests that conclusively show the contrary. And there’s common sense (not that it’s so common) that says you have to get that heat out somehow, which means a noisier and likely dustier/dirtier enclosure to boot.

            • DeadOfKnight
            • 6 years ago

            I dunno. I’ve heard multiple times that rear exhaust really doesn’t work as well as you’d think at keeping the hot air out of your case. I personally want to try my hand at watercooling on my next build, because otherwise it seems every cooling solution ends up conflicting with the others no matter what you do, and I don’t want to make compromises.

            • Airmantharp
            • 6 years ago

            I’ve got them all working together:

            All case fans are filtered intakes running at low speeds; the two video cards with blowers and the H60 are the exhausts. There’s some typical venting on the back of the case that isn’t the slots, PSU, or rear 120mm, which lets other air escape, so essentially the whole rear panel is exhaust.

            But that’s the key, after all. You have to design a ‘system’, not throw pieces together and hope that their individual magnificence makes it altogether awesome.

            • DeadOfKnight
            • 6 years ago

            Heh, it’s really just nitpicking, though. It’s still altogether awesome when you’ve got stuff like dual GTX 770s.

            I’d want those 770s to boost as much as possible, though. Watercooling is a no-compromise solution.

            • Airmantharp
            • 6 years ago

            A custom loop really isn’t too much to ask; most of the investment is in the extra build and testing time. Done right, there’s not much maintenance to be done, and they perform very, very well.

            I almost went with water, and I almost got the latest iteration of Silverstone’s FT-02, which might have worked better, but man, that thing’s huge. Fractal Design’s Define series cases are about as small as you can make ATX while stretching it very slightly for fans. It’s a real tight build with two cards, but it does work very well.

            • kilkennycat
            • 6 years ago

            “Why get the blower version? They offer less cooling and more noise.”

            Stress an eVGA GTX780SC with the Titan cooler in your favorite case some day and report back on the cooling effectiveness and noise level. Remember that none of the GTX770s currently on offer in the US have the Titan fan/cooler; the blowers are all the GTX 680 design. It seems as if the GTX770 partner manufacturers are using up excess GTX680 material inventory and just installing the new GK104 chip.

            • Airmantharp
            • 6 years ago

            +1

            I’d like to see an SPCR build review with a ‘quiet’ P180-style case and two or three cards with the Titan cooler, comparing them with these gaudy open-air behemoths that people seem to prefer.

    • indeego
    • 6 years ago

    I smell a price war fought with game bundles this holiday season.

      • Airmantharp
      • 6 years ago

      And I’m looking forward to it!

      Now to get that second job so I can afford new cards for 4k, the 4k monitor, the Ivy-E setup to back it all up, and new lenses for my 6D (alternate obsession).
