
A first look at Nvidia's GPU physics


Explosions, tornadoes, and tapioca soup
— 8:00 AM on August 6, 2008

If you'd told me a year ago that my PC would have hardware PhysX support today, I'd have been a little dubious. Last summer, running hardware game physics simulations involved shelling out $150-200 for a PhysX card, and all you got for your investment was limited support in a handful of titles. Not exactly a stocking-stuffer.

That will all change next week. On August 12, Nvidia will release new graphics drivers that will allow owners of most GeForce 8, GeForce 9, and GeForce GTX 200-series cards to use PhysX acceleration without spending a dime. Along with the drivers will come a downloadable PhysX software pack containing free Unreal Tournament 3 maps, the full version of NetDevil's Warmonger, a couple of Nvidia demos, and sneak peeks at Object Software's Metal Knight Zero and Nurien Software's Nurien social-networking service. Nvidia provided us with early access to the pack, and we've been testing it over the past couple of days.

Physics on the GPU
Before getting into our tests, we should probably talk a little bit about what PhysX is and how Nvidia came to implement it on its graphics processors. In early 2006, Ageia Technologies launched the PhysX "physics processing unit," a PCI card with a custom parallel-processing chip tuned for physics computations. Game developers could use Ageia's matching application programming interface to offload physics simulations to the PPU, enabling not only lower CPU utilization, but also more intensive physics simulations with many more objects.

We reviewed the PhysX PPU in June 2006, but we came away somewhat unimpressed by the hardware's intimidating price tag (around $250-300) and the dearth of actual game support. Ageia displayed some neat effects in its custom tech demos, but actual games like Ubisoft's Ghost Recon Advanced Warfighter used the PPU for little more than extra debris in explosions.

As PhysX PPUs seemed to be fading into obscurity, Nvidia announced plans to purchase Ageia in February of this year. Barely a week after the announcement, Nvidia said it would add PhysX support to GeForce 8-series graphics cards using its CUDA general-purpose GPU API. The idea looks great on paper. Running a physics API on a popular line of GPUs bypasses the need for expensive third-party accelerators, and it should spur the implementation of PhysX effects in games. Nvidia counts 70 million GeForce 8 and 9 users so far, which is probably quite a bit more than the installed base for PhysX cards.

The PhysX API is quite flexible, as well, since it can scale across different types of hardware and doesn't actually require hardware acceleration to work:

The PhysX calculation path. Source: Nvidia.

Nvidia's PhysX pipeline patches API calls through to different "solvers" depending on the host machine's hardware and settings. There are solvers for plain x86 CPUs, Nvidia GPUs, PhysX PPUs, and more exotic chips like the Cell processor in Sony's PlayStation 3. According to Nvidia, PhysX lets developers run small-scale effects on the CPU and larger-scale effects in hardware. "For example, a building that explodes into a hundred pieces on the CPU can explode into thousands of pieces on the GPU, while maintaining the same frame rate."
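To make that division of labor a little more concrete, here's a minimal sketch of how a developer targets those solvers in the PhysX 2.x-era SDK. The calls below come from our reading of the public SDK headers rather than from Nvidia's own materials, and exact names can vary between SDK versions, but the key point is that the solver is a per-scene setting, with the x86 software path available as a fallback when no accelerator is present.

    #include <NxPhysics.h>

    int main()
    {
        // Create the PhysX SDK object (PhysX 2.x-era API).
        NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
        if (!sdk)
            return 1;

        // Describe a scene; simType selects the solver backend.
        NxSceneDesc sceneDesc;
        sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
        sceneDesc.simType = NX_SIMULATION_HW;   // PPU/GPU solver, if available

        NxScene* scene = sdk->createScene(sceneDesc);
        if (!scene)
        {
            // No accelerator present: fall back to the x86 software solver.
            sceneDesc.simType = NX_SIMULATION_SW;
            scene = sdk->createScene(sceneDesc);
        }

        // Step the simulation once per frame and collect the results.
        scene->simulate(1.0f / 60.0f);
        scene->flushStream();
        scene->fetchResults(NX_RIGID_BODY_FINISHED, true);

        sdk->releaseScene(*scene);
        NxReleasePhysicsSDK(sdk);
        return 0;
    }

Because the solver is chosen when a scene is created, a game can keep its gameplay-critical physics in a CPU scene and spin up a separate hardware-accelerated scene for eye candy like debris, cloth, and fluids, which is exactly the split Nvidia describes above.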

To give you an idea of the performance gap between solvers, Nvidia claims its GeForce GTX 280 can handle fluid simulations up to 15 times faster than an Intel Core 2 Quad processor. Check out page four of our GeForce GTX 280 review for more details.

How does Nvidia's PhysX-on-GPU implementation actually affect graphics quality and performance, then? I used my GeForce 8800 GT-powered desktop system as a guinea pig to get a feel for PhysX's behavior on mainstream graphics hardware.