
Havok FX physics go fully interactive
Boyd's talk of physics being successfully mapped to the GPU using DirectX was surely a reference to Havok's GPU-based physics engine, Havok FX. Jeff Yates, Havok's VP of Product Management, followed Boyd on stage with a demo of that physics engine. Havok has shown demos of basic rigid-body physics acceleration running on GPUs in the past, but Yates also showed off a nice demo of cloth or fabric simulation, which tends to require more computing power.
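Cloth is a good stress test because every particle in the fabric is coupled to its neighbors by spring constraints that must all be evaluated every frame. The sketch below is not Havok's code; it's a minimal mass-spring cloth step in Python/NumPy, with grid size, stiffness, and damping values assumed for illustration, just to show the kind of per-frame workload that a GPU's parallelism soaks up.

```python
# Minimal mass-spring cloth step (illustration only, not Havok FX code).
# A W x H grid of particles; structural springs link horizontal and
# vertical neighbors. Every frame touches every spring, which is why
# cloth scales poorly on a serial CPU and maps well to a parallel GPU.
import numpy as np

W, H = 64, 64                # grid resolution (assumed for the example)
REST = 1.0 / (W - 1)         # spring rest length matches grid spacing
K, DAMP, DT = 500.0, 0.98, 1.0 / 60.0

grid = np.meshgrid(np.linspace(0, 1, W), np.linspace(0, 1, H), indexing="ij")
pos = np.stack(grid, axis=-1).astype(np.float32)                       # (W, H, 2)
pos = np.concatenate([pos, np.zeros((W, H, 1), np.float32)], axis=-1)  # (W, H, 3)
vel = np.zeros_like(pos)
gravity = np.array([0.0, 0.0, -9.8], np.float32)

def spring_force(a, b):
    """Hooke force pulling particles `a` toward rest distance from `b`."""
    d = b - a
    length = np.linalg.norm(d, axis=-1, keepdims=True) + 1e-9
    return K * (length - REST) * d / length

def step(pos, vel):
    force = np.broadcast_to(gravity, pos.shape).copy()
    # Structural springs: right and down neighbors, applied to both ends.
    f = spring_force(pos[:-1, :], pos[1:, :])
    force[:-1, :] += f
    force[1:, :] -= f
    f = spring_force(pos[:, :-1], pos[:, 1:])
    force[:, :-1] += f
    force[:, 1:] -= f
    # Semi-implicit Euler: update velocity first, then position.
    vel = (vel + force * DT) * DAMP
    vel[0, :] = 0.0          # pin one edge so the cloth hangs from it
    return pos + vel * DT, vel

for _ in range(10):
    pos, vel = step(pos, vel)
```

Even this toy version does tens of thousands of spring evaluations per frame for a 64x64 sheet; a shipping engine adds shear and bend constraints plus collision handling on top of that.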

Then he produced a real surprise: Havok FX with "gameplay physics"—that is, physics interactions that affect gameplay rather than just being eye candy—running on the GPU. I wasn't even aware they had truly interactive GPU-based physics in the works, but here was a working demo.


Brick War shows Havok FX's gameplay physics in action

The demo game, Brick War, is based on a simple premise. Each side has a castle made out of Lego-like snap-together bricks, and the goal is to knock down all of the soldiers in the other guy's castle by hurling cannonballs into it.

The game includes 13,500 objects, with full rigid-body dynamics for each. Havok had the demo running on a dual-GPU system, with graphics being handled by one GPU and physics by the other.

As the player fired cannonballs into his opponent's castle, the bricks broke apart and portions of the structure crumbled to the ground realistically. Yates pointed out that the GPU-based physics simulation in Brick War is fully interactive, with the collision detection driving the rest of the rigid-body dynamics and also driving sound in the game.
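Yates didn't describe the programming interface, but the data flow he outlined—contacts produced by GPU collision detection feeding both the rigid-body solver and the game's audio—is commonly handled with per-frame collision events. A hypothetical sketch of that pattern, with made-up names standing in for whatever Havok FX actually exposes:

```python
# Hypothetical sketch of collision events driving both dynamics and audio.
# `Contact`, `audio.play()`, and the threshold are stand-ins, not the Havok
# FX API; the point is the flow Yates described: the same contact list that
# drives the rigid-body solver also triggers sounds in the same frame.
from dataclasses import dataclass

@dataclass
class Contact:
    body_a: int
    body_b: int
    impulse: float       # magnitude of the collision impulse

SOUND_THRESHOLD = 2.0    # assumed: ignore tiny contacts that shouldn't click

def on_frame(contacts: list[Contact], audio) -> None:
    """Consume one frame's contact list: loud impacts become brick sounds."""
    for c in contacts:
        if c.impulse > SOUND_THRESHOLD:
            # Volume scaled by impulse, so a direct cannonball hit sounds
            # different from a loose brick settling onto the pile.
            audio.play("brick_impact", volume=min(1.0, c.impulse / 50.0))
```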

Havok seems to have made quite a bit of progress on Havok FX in the past few months. According to Yates, the product is approaching beta and will soon be in the hands of game developers. When that happens, he said, game developers will need to change the way they think about physics, because the previous limits will be gone.

Yates's presentation was the last of the formal ones, and a quick Q&A session followed.

Conclusions
I came away from the ATI event most impressed with the quality and relative maturity of the applications shown by the presenters. Each of them emphasized in his own way that the GPU's much higher performance in stream computing applications opens up new possibilities for his field, and each one had a demonstration to back it up. Obviously, it's very early in the game, but ATI has identified an opportunity here and taken the first few steps to make the most of it. As they join up with AMD, the prospects for technology sharing between the two companies look bright.

ATI still faces quite a few hurdles in meeting the needs of non-graphics markets with its GPUs, though. Today's GPUs, for instance, don't fully support IEEE-compliant floating-point datatypes, so getting the same results users have come to expect from CPUs may sometimes be difficult or impossible. ATI also hasn't provided the full range of tools that developers might want—things like BLAS libraries or even GPU compilers for common high-level languages—and so will have to rely on partners like PeakStream to make those things happen. I'm just guessing here, but I'd bet a software provider that focuses on oil and gas companies doesn't license those tools for peanuts. If stream computing is to live up to its potential, ATI will eventually have to make some of these programming tools more accessible to the public, as it has done in graphics.
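The floating-point caveat is easy to demonstrate even without a GPU: single-precision arithmetic isn't associative, so a massively parallel device that reassociates a reduction across hundreds of threads—or skips denormals and strict rounding, as GPUs of this era do—won't generally match a CPU's serial result bit for bit. A small Python illustration of the reordering effect alone:

```python
# Summation order changes float32 results; a GPU reduction that groups
# operands differently than a CPU's left-to-right loop will typically
# disagree in the low-order bits, even before IEEE-compliance gaps.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000).astype(np.float32)

serial = np.float32(0.0)
for v in x:                          # CPU-style left-to-right accumulation
    serial += v

pairwise = x.sum(dtype=np.float32)   # NumPy uses pairwise (tree) summation
reference = x.sum(dtype=np.float64)  # higher-precision reference

print(serial, pairwise, reference)   # the two float32 sums typically differ
```

For graphics, those last-bit differences are invisible; for an oil-and-gas solver or a financial model, they're exactly the kind of thing customers will ask about.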

One other interesting footnote. On the eve of ATI's stream computing event, Nvidia's PR types arranged a phone conference for me with Andy Keane, one of Nvidia's GPGPU honchos. (Hard to believe, I know, but Nvidia was acting aggressively.) The purpose of the phone call was apparently just to plant a marker in the ground signaling Nvidia's intention to do big things in stream computing, as well. Keane talked opaquely about how the current approach to GPGPU is flawed, because people are trying to twist a device into doing something for which it wasn't designed. They're pressing graphics APIs and shading languages like OpenGL and Cg into service for things they weren't meant to do. Very soon, he claimed, Nvidia will be talking about new technology that will change the way people program the GPU, something that is "beyond the current approach."

That was apparently all he really wanted to say on the subject, but I stepped through several of the possibilities with him, from providing better low-level documentation on the GPU's internals to providing BLAS libraries and the like. Keane wasn't willing to divulge exactly what Nvidia is planning, but if I had to guess, I'd say they are working on a new compiler, perhaps a JIT compiler, that will translate programs from high-level languages into code that can run on Nvidia GPUs. If so, and if they deliver it soon, ATI's apparent lead in this field could evaporate.

For now, though, ATI is playing nice and simply letting its partners speak for it. Based on what those partners have said, the Radeon X1000 series seems better suited to non-graphics applications than Nvidia's GeForce 7 series for a range of technical reasons, from finer threading granularity to more register space on the chip. I expect we won't hear too much more from Nvidia on this front until after its next generation of GPUs arrives. TR
