Earlier this week, in anticipation of the first real reviews of Ageia's PhysX card beginning to show up, the folks at rival physics software company Havok sent out a juicy e-mail to the press, including us, talking down Ageia's solution. Havok, you may recall, is working with graphics companies like NVIDIA and ATI on a product called Havok FX that will accelerate in-game physics using a GPU. The primary focus of the e-mail is the one major game title so far to ship with support for PhysX hardware, Ghost Recon: Advanced Warfighter.
Havok contends in the e-mail message that Ghost Recon uses Havok’s API for all gameplay-impacting physics, on the PC and in the various console releases. They argue Ageia's API is only used for eye candy-type particle effects and only on the game's PPU-accelerated code path. What's more, they claim, those particle effects are unimpressive, with volumes easily achievable in software, yet the game slows down observably when PPU acceleration is active. Havok says Ageia lays the blame for these slowdowns at the feet of the graphics processor, as if it were a vertex processing bottleneck or the like. The email then dismisses that possibility, saying "NVIDIA specifically can technically verify that the GPU is not the cause of the slowdown."
Like I said, juicy.
Havok's livelihood is no doubt threatened by Ageia's push into physics, but this hyper-aggressive approach has NVIDIA's sweaty fingerprints all over it, in my view.
Anyhow, the drama intensified when AnandTech's benchmarks of Ghost Recon and the PhysX card showed lower frame rates with PPU acceleration than without, substantiating Havok's assertion and fueling speculation about the technical reasons for the lower frame rates.
The folks at FiringSquad asked Ageia to respond to Havok's claims, and they have an interview online that gives Ageia's side of the story. Some of the back and forth involves minor point-scoring over how much the PhysX and Havok APIs are used in various versions of Ghost Recon, but Ageia then uncorks this revelation about low frame rates:
We appreciate feedback from the gamer community and based partly on comments like the one above, we have identified an area in our driver where fine tuning positively impacts frame rate. We made an adjustment quickly and delivered it in a new driver (2.4.3) which is available for download at ageia.com today.

Truly, they have learned from the masters.
Ageia also talks down the notion that CPU or GPU bottlenecks are responsible for the performance problems, asserting that PhysX doesn't require an absolutely high-end system config.
Obviously, these are the first salvos in a very long battle over physics acceleration on the PC. We will have to check out Ghost Recon performance with the new driver when our PhysX card arrives, but this one title won't necessarily tell us anything definitive about this first PPU's performance characteristics.
John Carmack expressed worries about this sort of problem (hardware physics acceleration causing input lag and slowdowns) in his address at last year's Quakecon. Early 3D graphics chips were guilty of the same, and it seemed like an obvious potential problem. We have since asked Ageia about this issue several times, including at CES. There, they showed us some developer tools with real-time, on-screen instrumentation for physics processing latencies, and the results were convincingly decent. As I wrote:
Ageia breaks physics problems down into frame-by-frame chunks, returning the required answers for each frame in some period of time that's hopefully less than the time required for the game engine and graphics card to process that same frame. They showed us a demo with on-screen counters reporting the number of milliseconds required to process each frame of a scene alongside counters showing the number of rigid bodies in action and the like. As the physical complexity of the scene grew rapidly, with lots of bodies moving around and bouncing off of one another, the time the PhysX chip required to process the physics of the frame grew impressively slowly and in a predictable fashion, without sudden spikes or drop-offs.

This demonstration assuaged my fears somewhat, and my conversations with Ageia's CEO and other technical types have been encouraging, as well. Many of Ageia's chip engineers have backgrounds in building network-processing chips for fiber optic switches, a world where managing packet processing latencies is crucial. Although first-generation hardware accelerators have a difficult history on this front, I remain optimistic Ageia can avoid facing a constant, intractable problem with performance in its first-gen PPU.
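To make the idea concrete, the per-frame budget logic behind those on-screen counters can be sketched roughly as follows. This is a toy model of my own, not Ageia's tooling: the linear cost-per-body figure and the function names are assumptions purely for illustration, and the point is simply that physics time must stay under the frame time and grow predictably with scene complexity.

```python
# Hypothetical sketch of a per-frame physics latency budget, in the spirit of
# the instrumentation Ageia demoed. All numbers here are made up.

FRAME_BUDGET_MS = 16.7  # frame time at ~60 fps; physics must finish well inside this

def physics_time_ms(num_bodies, cost_per_body_us=8.0):
    """Toy linear cost model: assumed for illustration, not measured PPU behavior."""
    return num_bodies * cost_per_body_us / 1000.0

def frame_report(num_bodies):
    """Return (body count, physics ms, whether the frame stayed within budget)."""
    ms = physics_time_ms(num_bodies)
    return num_bodies, ms, ms <= FRAME_BUDGET_MS

if __name__ == "__main__":
    # Ramp up scene complexity and watch the counters, as in the demo
    for bodies in (200, 1000, 2000, 3000):
        n, ms, ok = frame_report(bodies)
        print(f"{n:5d} rigid bodies: {ms:5.2f} ms  "
              f"{'within budget' if ok else 'over budget'}")
```

Under this (assumed) linear model, latency grows smoothly with body count and crosses the budget at a predictable point, which is the kind of behavior the demo's counters showed; real hardware could instead exhibit spikes or cliffs, which is exactly what such instrumentation exists to catch.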
That said, I came away from Ageia's pre-GDC press confab with the distinct impression that only certain parts of the PhysX API are currently accelerated in hardware by the PPU, mainly because Ageia said so point blank. The PPU itself is programmable, and the company has so far concentrated on accelerating specific parts of its API it considers especially good candidates for optimization. There is a long development process ahead in order to get the whole of the API accelerated, as well as a learning process that will involve give-and-take between Ageia and game developers, as the parties sort out the usage model for hardware physics acceleration. Ageia will have to learn how best to tune its drivers and hardware to deliver the mix of effects and performance game developers are requesting, and game developers will have to understand what to ask of Ageia's hardware, as well. I don't know whether or not Ageia will succeed at making all of this work, but I certainly think it's much too early to count them out. We'll be watching future developments here with interest.