Real-time 3D graphics has come a long, long way since it first arrived on microcomputers less than 20 years ago. We've moved from simple wireframes to texture-mapped polygons with per-vertex lighting, all handled in software. Then, in 1996, custom graphics chips hit the scene in the form of Rendition's Verite and 3dfx's Voodoo Graphics. These cards, especially the Voodoo, were an instant success among gamers, who naturally appreciated the additional graphics power afforded by dedicated hardware. The Voodoo's success in the PC market caught even 3dfx off guard; the chip was originally intended for video arcade machines, not consumer PCs.
PC platform custodians like Intel and Compaq never saw the 3D tide coming. Unlike nearly every new PC feature we've seen in the past ten years, 3D graphics hardware was not a "must have" feature integrated and promoted by PC OEMs as a means of driving demand for new systems. (Intel, of course, would have preferred to do graphics processing work on the CPU.) At first, Voodoo cards sold primarily as retail, aftermarket upgrades. PC builders caught on pretty quickly, but in truth, raw consumer demand pushed dedicated 3D graphics chips into the mainstream.
Since the Voodoo chip, graphics has skyrocketed to a position of prominence on the PC platform that rivals that of the CPU. During this time, graphics ASICs have moved from relatively simple pixel-filling devices into much more complex vertex and pixel processing engines. By nature, graphics lends itself to parallel processing, so graphics chips have been better able to take advantage of Moore's Law than even CPUs. Moore predicted exponential increases in transistor counts, and graphics chips have followed that progression like clockwork. Consider ATI's chips: there were about 30 million transistors in the original Radeon chip, roughly 60 million in the Radeon 8500, and about 110 million transistors in the new Radeon 9700. Desktop CPUs haven't advanced at anything near that pace. The resulting increases in graphics performance have been staggering.
In the early years, graphics hardware incorporated new features one by one, adding custom circuitry to support a particular graphics technique, like environmental bump mapping or cubic environment mapping. What's more, each one of these techniques was a hack, a shortcut used to approximate reality. But as time passed, graphics chips developed into GPUs, incorporating more programmability and allowing developers to replace some of their hacks with more elegant approximations of reality: much cooler hacks, or shortcuts that cut fewer corners.
The progress of consumer graphics ASICs has shattered the traditional order of the graphics world. Long-time high-end leader SGI nearly imploded a few years ago, and the ranks of companies like NVIDIA and ATI are populated heavily with ex-SGI engineers. Consumer "gaming" chips have developed the necessary performance and internal precision to compete, in rebadged forms as Quadros and FireGL cards, against workstation stalwarts like 3DLabs' Wildcat line. And heck, 3DLabs recently turned the tables, getting itself bought out by Creative in order to fund a move into the consumer market with its P10 chip.
Consumer graphics chips have come a long way, but they haven't yet supplanted general-purpose microprocessors and software renderers for the high-quality graphics now used commonly in cinematic production. The sheer complexity and precision of the rendering techniques used by professional production houses, not to mention the gorgeous quality of the resulting images, have kept the worlds of consumer graphics and high-end rendering apart.
Of course, the graphics chip companies have frequently pointed to cinematic-style rendering as an eventual goal. NVIDIA's Jen-Hsun Huang said at the launch of the GeForce2 that the chip was a "major step toward achieving" the goal of "Pixar-level animation in real-time". But partisans of high-end animation tools have derided the chip companies' ambitious plans, as Tom Duff of Pixar did in reaction to Huang's comments at the GeForce2 launch. Duff wrote:
`Pixar-level animation' runs about 8 hundred thousand times slower than real-time on our renderfarm cpus. (I'm guessing. There's about 1000 cpus in the renderfarm and I guess we could produce all the frames in TS2 in about 50 days of renderfarm time. That comes to 1.2 million cpu hours for a 1.5 hour movie. That lags real time by a factor of 800,000.)

Duff had a point, and he hammered it home by handicapping how long it would take NVIDIA to reach such a goal:
Do you really believe that their toy is a million times faster than one of the cpus on our Ultra Sparc servers? What's the chance that we wouldn't put one of these babies on every desk in the building? They cost a couple of hundred bucks, right? Why hasn't NVIDIA tried to give us a carton of these things? -- think of the publicity milage [sic] they could get out of it!
At Moore's Law-like rates (a factor of 10 in 5 years), even if the hardware they have today is 80 times more powerful than what we use now, it will take them 20 years before they can do the frames we do today in real time. And 20 years from now, Pixar won't be even remotely interested in TS2-level images, and I'll be retired, sitting on the front porch and picking my banjo, laughing at the same press release, recycled by NVIDIA's heirs and assigns.

Clearly Pixar-class rendering was out of the chip companies' reach, or at least that was the thinking at the time.
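For what it's worth, Duff's back-of-the-envelope figures hold together. A quick sketch of the arithmetic, using only the numbers he states in the quotes above:

```python
import math

# Duff's renderfarm estimate: ~1000 CPUs, ~50 days to render all of TS2.
cpus = 1000
days = 50
cpu_hours = cpus * days * 24          # total CPU time consumed
movie_hours = 1.5                     # running time of the finished film

slowdown = cpu_hours / movie_hours    # how far behind real time
print(f"{cpu_hours:,.0f} CPU hours -> {slowdown:,.0f}x slower than real time")

# His 20-year estimate: hardware gains a factor of 10 every 5 years.
# Granting NVIDIA's chip an 80x head start over a renderfarm CPU,
# the remaining gap is four factors of 10.
head_start = 80
remaining_gap = slowdown / head_start
years = 5 * math.log10(remaining_gap)
print(f"remaining gap: {remaining_gap:,.0f}x -> about {years:.0f} years")
```

Running it reproduces Duff's 1.2 million CPU hours, the 800,000x lag behind real time, and the roughly 20-year horizon.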