OK, wow, this is awkward. You know, we really can't keep meeting like this. Every few weeks, it seems like, we're back in this same place, and I'm telling the same story again. You know the one, where Nvidia has taken its potent Kepler GPU architecture, shaved off a few bits, and raised the performance ceiling for a lower price range. By now, you know how it ends: with me explaining that this new graphics card delivers enough performance for most people and questioning why anyone would spend more. You probably expect me to say something about how the competition from AMD is pretty decent, too, although the Radeon's power draw is higher. By now, the script is getting to be pretty stale. Heck, I can see your lips moving while I talk.
Well, listen up, buddy. I am nobody's fool, and I'm not going to keep playing this same record over and over again, like Joe Biden at the DNC. I can do things, you know. I should be, I dunno, explaining write coalescing in the Xeon Phi or editing a multicast IP routing table somewhere, not helping you lot decide between a video card with 10 Xbox 360s worth of rendering power and another with 14. This second-rate website can get a new spokesmonkey.
I'm totally not going to tell you about the video card shown above, the GeForce GTX 660. You can see from the picture that it's based on the same reference design as the GeForce GTX 660 Ti and GTX 670. And if you have half a working lobe in your skull, you know what's coming next: the price is lower, along with the performance. Look, it's as simple as a few key variables.
| | Base clock (MHz) | Boost clock (MHz) | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak shader arithmetic (tflops) | Peak rasterization rate (Gtris/s) | Memory transfer rate | Memory bandwidth (GB/s) | Price |
|---|---|---|---|---|---|---|---|---|---|
| GTX 660 | 980 | 1033 | 25 | 83/83 | 2.0 | 3.1 | 6.0 GT/s | 144 | $229.99 |
| GTX 660 Ti | 915 | 980 | 24 | 110/110 | 2.6 | 3.9 | 6.0 GT/s | 144 | $299.99 |
| GTX 670 | 915 | 980 | 31 | 110/110 | 2.6 | 3.9 | 6.0 GT/s | 192 | $399.99 |
| GTX 680 | 1006 | 1058 | 34 | 135/135 | 3.3 | 4.2 | 6.0 GT/s | 192 | $499.99 |
You really don't need me for this. Versus the GTX 660 Ti, this ever-so-"new" product has slightly lower texture filtering, rasterization, and shader flops rates. And yes, that really is a drop from 14 Xboxes worth of filtering power to 10. The ROP rate and memory bandwidth haven't even changed, and yet the price is down 70 bucks. This value proposition doesn't involve difficult math.
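See? Didn't involve difficult math. Each peak rate in that table is just a unit count multiplied by the boost clock. A minimal back-of-the-envelope sketch, assuming the GTX 660's published unit counts (24 ROPs, 80 texture units, 960 shader ALUs, three GPCs, a 192-bit bus), reproduces its row:

```python
# Illustrative peak-rate arithmetic for the GTX 660, derived from
# published unit counts and the 1033 MHz boost clock. These are
# theoretical peaks, not measurements.
boost_ghz = 1.033

rops, tex_units, alus, gpcs = 24, 80, 960, 3
bus_bits, transfer_gts = 192, 6.0

pixel_fill = rops * boost_ghz            # Gpixels/s: 1 pixel/ROP/clock
texel_rate = tex_units * boost_ghz       # Gtexels/s: int8 or fp16 bilinear
tflops = alus * 2 * boost_ghz / 1000     # 2 FLOPs per FMA per clock
tri_rate = gpcs * boost_ghz              # Gtris/s: 1 triangle/GPC/clock
bandwidth = transfer_gts * bus_bits / 8  # GB/s across the whole bus

print(round(pixel_fill), round(texel_rate), round(tflops, 1),
      round(tri_rate, 1), round(bandwidth))
```

Run it and you get 25, 83, 2.0, 3.1, and 144: the GTX 660's line in the table, to the digit.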
Heck, you probably don't even care that the card has a mixed-density memory config with three 64-bit interfaces driving 2GB of GDDR5 memory. Who needs to know about that when you're Calling your Duties or prancing around in your fancy hats in TF2? All you're likely to worry about are pedestrian concerns, like the fact that this card needs only 140W of power, so it requires just one six-pin power input. I could tell you about its high-end features—such as support for up to four displays across three different input types, PCI Express 3.0 transfer rates, or two-way SLI multi-GPU teaming—but you'll probably forget about them two paragraphs from now. Why even bother?
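Fine, for the one reader who does care about that mixed-density config: a 192-bit bus means three 64-bit memory controllers, and getting to 2GB requires hanging double-density DRAM off one of them. The exact 1GB + 512MB + 512MB split below is my assumption, not something spelled out here, but the arithmetic works like this:

```python
# Hypothetical density split for 2GB on a 192-bit bus: three 64-bit
# controllers, one carrying double-density DRAM. The split is an
# assumption for illustration; the article only says "mixed-density."
controllers_mb = [1024, 512, 512]

total = sum(controllers_mb)            # 2048 MB in all
# Addresses can interleave across all three controllers only up to
# three times the smallest controller's capacity...
interleaved = 3 * min(controllers_mb)  # 1536 MB at full bus width
# ...after which the remainder lives on the dense controller alone,
# reachable at only a third of the peak bandwidth.
remainder = total - interleaved        # 512 MB
```

The upshot, if this split holds: the first 1.5GB enjoys the full bus, while the last 512MB is a slower stretch of memory. Now back to your fancy hats.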
A different chip
You know what's rich? This apparently pedestrian branding exercise actually involves new GPU silicon. They're calling this thing "GeForce GTX 660," but it's not based on the same chip as its purported sibling, the GeForce GTX 660 Ti. That's right: the GTX 660 is based on the GK106 chip, not the GK104 part that we've been talking about for months.
This is a smaller, cut-down chip with fewer resources throughout, as depicted in the block diagram above. The unit counts in that diagram are correct for the GTX 660, right down to that third GPC, or graphics processing cluster, with only a single SMX engine inside of it. Is that really the GK106's full complement of units? Nvidia claims, and I quote, that the GTX 660 "uses the full chip implementation of GK106 silicon." But I remain skeptical. I mean, look at it. Really, a missing SMX? I know better than to trust Nvidia. I've talked to Charlie Demerjian, people.
With its five SMX cores, the GK106 has a total of 960 shader ALUs (calling those ALUs "CUDA cores" is crazy marketing talk, like saying a V8 engine has "eight motors"). Beyond that, look, the specs are in the table, people. The only thing missing is the L2 cache amount, which is 384KB. You've probably noticed that the GK106 is just two square millimeters larger than the Pitcairn chip that powers the Radeon HD 7800 series. Seriously, with this kind of parity, how am I supposed to conjure up drama for these reviews?
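Where does 960 come from? Just the SMX count times Kepler's per-SMX ALU complement, which is 192 on every Kepler chip to date. A one-liner, if you insist:

```python
smx_count = 5        # GK106's full complement, per Nvidia
alus_per_smx = 192   # standard ALU count for a Kepler SMX engine
alus = smx_count * alus_per_smx
print(alus)  # 960 shader ALUs -- "CUDA cores," if you must
```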
I probably shouldn't tell you this, but since I've decided not to do a proper write-up, I'll let you in on a little secret: that quarter is frickin' famous. Been using the same one for years, and it's all over the Internet, since our pictures are regularly, uh, "borrowed" by content farms and such. I'm so proud of little George there.