
Review: Nvidia's GeForce GTX 660 graphics card

Kepler drops a weight class

OK, wow, this is awkward. You know, we really can't keep meeting like this. Every few weeks, it seems like, we're back in this same place, and I'm telling the same story again. You know the one, where Nvidia has taken its potent Kepler GPU architecture, shaved off a few bits, and raised the performance ceiling for a lower price range. By now, you know how it ends: with me explaining that this new graphics card delivers enough performance for most people and questioning why anyone would spend more. You probably expect me to say something about how the competition from AMD is pretty decent, too, although the Radeon's power draw is higher. By now, the script is getting to be pretty stale. Heck, I can see your lips moving while I talk.

Well, listen up, buddy. I am nobody's fool, and I'm not going to keep playing this same record over and over again, like Joe Biden at the DNC. I can do things, you know. I should be, I dunno, explaining write coalescing in the Xeon Phi or editing a multicast IP routing table somewhere, not helping you lot decide between a video card with 10 Xbox 360s worth of rendering power and another with 14. This second-rate website can get a new spokesmonkey.

I'm totally not going to tell you about the video card shown above, the GeForce GTX 660. You can see from the picture that it's based on the same reference design as the GeForce GTX 660 Ti and GTX 670. And if you have half a working lobe in your skull, you know what's coming next: the price is lower, along with the performance. Look, it's as simple as a few key variables.

                 Base     Boost    Peak       Filtering    Shader      Raster.    Memory    Memory
                 clock    clock    ROP rate   int8/fp16    arithmetic  rate       transfer  bandwidth
                 (MHz)    (MHz)    (Gpix/s)   (Gtex/s)     (tflops)    (Gtris/s)  rate      (GB/s)    Price
GTX 660          980      1033     25         83/83        2.0         3.1        6.0 GT/s  144       $229.99
GTX 660 Ti       915      980      24         110/110      2.6         3.9        6.0 GT/s  144       $299.99
GTX 670          915      980      31         110/110      2.6         3.9        6.0 GT/s  192       $399.99
GTX 680          1006     1058     34         135/135      3.3         4.2        6.0 GT/s  192       $499.99

You really don't need me for this. Versus the GTX 660 Ti, this ever-so "new" product is a tad slower in texture filtering, rasterization, and shader flops. And yes, that really is a drop from 14 Xboxes worth of filtering power to 10. The ROP rate and memory bandwidth are essentially unchanged, and yet the price is down 70 bucks. This value proposition doesn't involve difficult math.
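See? Those peak numbers are nothing but unit counts multiplied by clock speed. A quick back-of-the-envelope sketch, using Nvidia's published unit counts for the GTX 660 and the boost clock from the table above (the rounding is mine):

```python
# Peak-rate arithmetic for the GTX 660, from published unit counts
# and the 1033MHz boost clock.
BOOST_MHZ = 1033

ROPS = 24            # pixels blended per clock
TEX_UNITS = 80       # texels filtered per clock (int8)
SHADER_ALUS = 960    # each does one fused multiply-add (2 flops) per clock
GPCS = 3             # one triangle rasterized per GPC per clock

rop_rate = ROPS * BOOST_MHZ / 1000          # Gpixels/s
filter_rate = TEX_UNITS * BOOST_MHZ / 1000  # Gtexels/s
flops = SHADER_ALUS * 2 * BOOST_MHZ / 1e6   # tflops
tri_rate = GPCS * BOOST_MHZ / 1000          # Gtris/s

print(f"{rop_rate:.0f} Gpixels/s, {filter_rate:.0f} Gtexels/s, "
      f"{flops:.1f} tflops, {tri_rate:.1f} Gtris/s")
# → 25 Gpixels/s, 83 Gtexels/s, 2.0 tflops, 3.1 Gtris/s
```

Run the same numbers with the GTX 660 Ti's units and 980MHz boost clock and you get its row of the table, too. Like I said: not difficult math.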

Heck, you probably don't even care that the card has a mixed-density memory config with three 64-bit interfaces driving 2GB of GDDR5 memory. Who needs to know about that when you're Calling your Duties or prancing around in your fancy hats in TF2? All you're likely to worry about are pedestrian concerns, like the fact that this card needs only 140W of power, so it requires just one six-pin power input. I could tell you about its high-end features—such as support for up to four displays across three different input types, PCI Express 3.0 transfer rates, or two-way SLI multi-GPU teaming—but you'll probably forget about them two paragraphs from now. Why even bother?
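Since you don't care, I'll spell out the mixed-density arithmetic anyway. A sketch, assuming the commonly reported layout in which one of the three 64-bit channels carries double-density chips (the exact interleaving is simplified here):

```python
# 2GB spread across three 64-bit GDDR5 channels: one channel carries
# double-density chips, so the config is asymmetric.
CHANNELS_MB = [512, 512, 1024]  # assumed per-channel capacities

total = sum(CHANNELS_MB)
# The first 1.5GB can interleave across all three channels at the full
# 192-bit width; the last 512MB is reachable through only one channel.
full_width = 3 * min(CHANNELS_MB)
print(total, "MB total,", full_width, "MB accessible at full width")
# → 2048 MB total, 1536 MB accessible at full width

# Peak bandwidth: 6.0 GT/s per pin * 192 pins / 8 bits per byte
bandwidth_gbs = 6.0 * 192 / 8
print(bandwidth_gbs, "GB/s")  # → 144.0 GB/s
```

In other words, the last half-gigabyte can't be read as quickly as the rest, which is the price of putting 2GB on a 192-bit interface. But you were busy with your fancy hats, so never mind.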

A different chip
You know what's rich? This apparently pedestrian branding exercise actually involves new GPU silicon. They're calling this thing "GeForce GTX 660," but it's not based on the same chip as its purported sibling, the GeForce GTX 660 Ti. That's right: the GTX 660 is based on the GK106 chip, not the GK104 part that we've been talking about for months.

Functional block diagram of the GK106 chip. Source: Nvidia.

This is a smaller, cut-down chip with fewer resources throughout, as depicted in the block diagram above. The unit counts in that diagram are correct for the GTX 660, right down to that third GPC, or graphics processing cluster, with only a single SMX engine inside of it. Is that really the GK106's full complement of units? Nvidia claims, and I quote, that the GTX 660 "uses the full chip implementation of GK106 silicon." But I remain skeptical. I mean, look at it. Really, a missing SMX? I know better than to trust Nvidia. I've talked to Charlie Demerjian, people.

          ROP       Texels       Shader  Rasterized  Memory     Estimated    Die     Fab
          pixels/   filtered/    ALUs    triangles/  interface  transistors  size    process
          clock     clock                clock       width      (millions)   (mm²)   node
                    (int8/fp16)                      (bits)
GF114     32        64/64        384     2           256        1950         360     40 nm
GF110     48        64/64        512     4           384        3000         520     40 nm
GK104     32        128/128      1536    4           256        3500         294     28 nm
GK106     24        80/80        960     3           192        2540         214     28 nm
Cypress   32        80/40        1600    1           256        2150         334     40 nm
Cayman    32        96/48        1536    2           256        2640         389     40 nm
Pitcairn  32        80/40        1280    2           256        2800         212     28 nm
Tahiti    32        128/64       2048    2           384        4310         365     28 nm

With its five SMX cores, the GK106 has a total of 960 shader ALUs (calling those ALUs "CUDA cores" is crazy marketing talk, like saying a V8 engine has "eight motors"). Beyond that, look, the specs are in the table, people. The only thing missing is the L2 cache amount, which is 384KB. (Note to self: consider adding L2 cache sizes to table in future.) You've probably noticed that the GK106 is just two square millimeters larger than the Pitcairn chip that powers the Radeon HD 7800 series. Seriously, with this kind of parity, how am I supposed to conjure up drama for these reviews?
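If you must see the ALU count justified: those totals fall straight out of Kepler's per-SMX unit counts, which Nvidia's Kepler whitepaper pegs at 192 shader ALUs and 16 texture units per SMX:

```python
# GK106 totals from Kepler's per-SMX unit counts (per Nvidia's
# Kepler architecture whitepaper).
SMX_COUNT = 5
ALUS_PER_SMX = 192   # "CUDA cores," if you must
TEX_PER_SMX = 16

print(SMX_COUNT * ALUS_PER_SMX, "shader ALUs")   # → 960 shader ALUs
print(SMX_COUNT * TEX_PER_SMX, "texture units")  # → 80 texture units
```

Five SMXes, 960 ALUs, 80 texture units: exactly the GK106 row of the table. Thrilling stuff.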

The GK104 (left) versus the GK106 (right)

I probably shouldn't tell you this, but since I've decided not to do a proper write-up, I'll let you in on a little secret: that quarter is frickin' famous. Been using the same one for years, and it's all over the Internet, since our pictures are regularly, uh, "borrowed" by content farms and such. I'm so proud of little George there.