2.6 billion. Six. The first figure is the number of transistors in AMD's new Cayman graphics processor. The second is the number of days we've had to spend with it prior to its release. Today's GPUs are incredibly complex beasts, and the companies that produce them don't waste any time in shoving 'em out the door once they're ready. Consequently, our task of getting a handle on these things and relaying our sense of them to you... isn't easy. We're gonna have to cut some corners, leave out a few vowels and consonants, and pare back some of the lame jokes in order to get you a review before these graphics cards go on sale.
"What's all the fuss?" you might be asking. "Isn't this just another rehashed version of AMD's existing GPU architecture, like the Radeon HD 6800 series?" Oh, but the answer to your question, so cynically posed, is: "Nope."
As you may recall, TSMC, the chip fabrication firm that produces GPUs for both of the major players, upset the apple cart last year by unexpectedly canceling its 32-nanometer fabrication process. Both AMD and Nvidia had to scramble to rebuild their plans for next-generation chips, which were intended for 32-nm. At that time, AMD had a choice: to push ahead with an ambitious new graphics architecture, re-targeting the chips for 40 nanometers, or to play it safe and settle for smaller, incremental changes while waiting for TSMC to work out its production issues.
Turns out AMD chose both options. The safer, more incremental improvements were incorporated into the GPU code-named Barts, which became the Radeon HD 6850 and 6870. That chip retained the same core architectural DNA as its predecessor, but it added tailored efficiency improvements and some new display and multimedia features. Barts was also downsized to hit a nice balance of price and performance. At the same time, work quietly continued—at what had to be a breakneck pace—on another, larger chip code-named Cayman.
Many of us in the outside world had heard the name, but AMD did a surprisingly good job (as these things go) of keeping a secret, at least for a while—Cayman ain't your daddy's Radeon. Or even your slightly older twin brother's, perhaps. Unlike Barts, Cayman is based on a fundamentally new GPU architecture, with improvements extending from its graphics front end through its shader core and into its render back-ends. The highlights include higher geometry throughput, more efficient shader execution, and smarter edge antialiasing. In other words, more goodness abounds throughout.
So when we say our task of cramming a review of Cayman into a few short days isn't easy, that's because this chip is the most distinctive member of the recent bumper crop of new GPUs.
Our hardware reviewer's license stipulates that we must include a block diagram on page one of any review of a new GPU, and so you have it above. This view from high altitude gives us a sense of the architecture's overall layout, although it has no doubt been retouched by AMD marketing to add whiter teeth and to remove any interesting wrinkles.
Cayman's basic layout will be familiar to anyone who knows recent Radeon GPUs like Barts and Cypress. The chip has a total of 24 SIMD engines in a dual-core configuration. (Both Cypress and Barts are dual-core, too, with dual dispatch processors as in the diagram above, although AMD didn't reveal this level of detail when it first rolled out Cypress.) Each SIMD engine has a texture unit associated with it, along with an L1 texture cache. Cayman sticks with the tried-and-true formula of four 64-bit memory interfaces, each with an L2 cache and dual ROP units attached. In short, although it's a little larger than Cypress, Cayman remains the same basic class of GPU, with no real changes to key differentiators like memory interface width or ROP count.
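For readers keeping score, the resource counts above multiply out as follows. This is just a back-of-the-envelope sketch of the layout we've described, with illustrative names of our own choosing, not anything out of AMD's documentation:

```python
# Rough tally of Cayman's high-level resources as described in the text:
# 24 SIMD engines in a dual-core configuration, one texture unit (with an
# L1 texture cache) per SIMD engine, and four 64-bit memory interfaces,
# each with an L2 cache and dual ROP units attached.

SIMD_ENGINES = 24                 # split evenly across two dispatch "cores"
TEXTURE_UNITS = SIMD_ENGINES      # one texture unit per SIMD engine
MEM_INTERFACES = 4
MEM_INTERFACE_WIDTH_BITS = 64
ROP_UNITS_PER_INTERFACE = 2

simds_per_core = SIMD_ENGINES // 2                            # 12 per core
total_bus_width = MEM_INTERFACES * MEM_INTERFACE_WIDTH_BITS   # 256-bit aggregate
total_rop_units = MEM_INTERFACES * ROP_UNITS_PER_INTERFACE    # 8 ROP units

print(simds_per_core, total_bus_width, total_rop_units)
```

The takeaway from the arithmetic is the same one the paragraph makes in prose: a 256-bit aggregate memory path and the same ROP partition count as Cypress, so Cayman stays in the same basic class of GPU despite its larger die.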
Above is a look at the Cayman chip itself, along with some key comparative specs. Cayman is a bit of a departure from recent AMD GPUs because it's decidedly larger, but it's not a reticle buster like some of Nvidia's bigger creations. In terms of transistor count and die area, Cayman appears to land somewhere between Nvidia's two closest would-be competitors, the GF104 and GF110.