The graphics game has been nothing if not interesting the past year or so. AMD's Radeon HD 4800 series upended expectations by using a mid-sized chip to serve the bulk of the market and pairing two of them in an X2 card to create a high-end product. This strategy has worked out pretty well, in no small part because the Radeon HD 4870 GPU has proven to be very efficient for its size. The result? Fast graphics cards have become very affordable, with prices dropping to almost-embarrassing lows over time.
Nvidia, meanwhile, has been relatively quiet in terms of truly new products. The last new GeForce we reviewed, back in March, was the GTS 250, a cost-reduced card based on a GPU that traces its roots back to the two-year-old GeForce 8800 GT. Nvidia has milked that G92 GPU as if it were a cow mainlining an experimental drug cocktail from Monsanto. The higher end of the GeForce lineup has been powered by the GT200 GPU, a much larger chip than anything AMD makes, with only somewhat higher performance than the Radeon HD 4870.
All the while, folks have been buzzing about what, exactly, comes next for GPUs. Intel's Larrabee project has been imminent for some time now, promising big things via the miracle medium of PowerPoint. In a sort of pre-emptive response, Nvidia employees have developed, en masse, a puzzling tick: speak to them, and they keep saying "PhysX and CUDA, CUDA and PhysX" after each normal sentence. Sometimes they throw in a reference to 3D Vision, as well, although they seem vaguely embarrassed to admit their chips do graphics anymore. For its part, AMD has been talking rather ambiguously about "Fusion," which once stood for a combination of CPU parts and GPU parts into a future uber-processor capable of amazing feats of simultaneous sequential and data-parallel processing but now seems to have morphed into "We'd like to sell you a CPU and an integrated graphics chipset, too."
In the midst of all of this craziness, thank goodness, work has continued on new and rather traditional graphics processors, which have become important enough to cause all of this fuss in the first place. Less than 18 months after the introduction of the Radeon HD 4800 series, AMD has produced a new chip that's roughly the same size yet promises to double its predecessor's power in nearly every respect, including shader processing, texturing, pixel throughput and, yes, GPU-compute capacity. The Radeon HD 5870 is more capable, too, in a hundred little ways, not least of which is its fidelity to the DirectX 11 spec. And in a solid bonus for its target market, the card based on it looks like the Batmobile.
What's under the Batmobile's hood
Where to start? Perhaps with codenames, since they're thoroughly confusing. The last-gen GPU that powered the Radeon HD 4870 was code-named RV770, a familiar number in a succession of Radeon chips. The rumor mill long ago began talking about its successor as the RV870, a logical step forward. Yet marketing types have hijacked codenames and proliferated them, just to make my life difficult, and thus the RV870 became known as "Cypress." The official name now is the Radeon HD 5870. We'll refer to it in various ways throughout this article, just to keep you on your toes.
Much like the RV770, the Cypress chip is the product of a three-year project conducted at multiple sites around the globe, directed from AMD's Orlando office by chief architect Clay Taylor.
The image above contains much of what you might want to know about the newest Radeon, if you squint right. What you're seeing truly is a doubling of resources versus the RV770. Cypress has twice as many SIMD arrays in its shader core, twice as many texture units aligned with those SIMD arrays, double the number of render back-ends, and even two rasterizers. The big-impact number may be 1600, as in the number of shader processors or whatever AMD is calling them this week. 1600 ALUs, at any rate, bring a prodigious amount of compute power to this puppy.
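To put that 1600-ALU figure in perspective, here's a back-of-the-envelope sketch of peak shader throughput. The ALU count comes from the text above; the 850MHz core clock and the convention of counting a multiply-add as two floating-point operations per ALU per cycle are assumptions for illustration, not figures from this section.

```python
# Rough peak-throughput arithmetic for Cypress (Radeon HD 5870).
# Assumptions: 850MHz core clock, one multiply-add (= 2 flops)
# per ALU per cycle. Only the 1600-ALU count is from the text.
alus = 1600            # stream processors in the shader core
clock_ghz = 0.85       # assumed core clock, in GHz
flops_per_alu = 2      # a fused multiply-add counts as two ops

peak_gflops = alus * clock_ghz * flops_per_alu
print(f"{peak_gflops:.0f} GFLOPS")  # prints "2720 GFLOPS"
```

Under those assumptions, the chip works out to roughly 2.7 TFLOPS of single-precision compute, which is the sort of number that explains all the "CUDA and PhysX" anxiety up the road.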
This GPU is more than just a doubling of what came before, though. If you could zoom in a little deeper, you'd find refinements made to nearly every functional area of the chip. In fact, we hope to do just that in the following pages. But first, we need to scare off anyone who randomly wandered in from Google trying to figure out which graphics card to buy by talking explicitly about chips.