
AMD
AMD chose GDC as the place to announce a new push in graphics for small mobile devices like cell phones. This push includes a new set of tools for developers and a new family of Imageon mobile GPUs with a unified shader architecture based on the one in the Xbox 360. These GPUs will support a couple of mobile standards: OpenGL ES 2.0 for 3D graphics and OpenVG 1.0 for vector graphics (think Flash-style animation). Among the development tools for these GPUs is a familiar face: ATI's RenderMonkey shader development tool, which can now compile pixel shader programs to OpenGL ES code.
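For those who haven't looked at OpenGL ES 2.0, its pixel shaders are small GLSL ES programs that an application compiles at runtime through the GL API, which is roughly the sort of code a tool like RenderMonkey would now emit. The snippet below is my own minimal sketch of that model, not RenderMonkey output or AMD code:

```cpp
// Minimal illustration of an OpenGL ES 2.0 pixel (fragment) shader and how an
// application compiles it at runtime. Purely a sketch, not RenderMonkey output.
#include <GLES2/gl2.h>
#include <cstdio>

// A trivial fragment shader: modulate a texture sample by a uniform tint.
static const char *fragment_src =
    "precision mediump float;\n"
    "uniform sampler2D u_texture;\n"
    "uniform vec4 u_tint;\n"
    "varying vec2 v_texcoord;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(u_texture, v_texcoord) * u_tint;\n"
    "}\n";

GLuint compile_fragment_shader()
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &fragment_src, NULL);
    glCompileShader(shader);

    GLint ok = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[512];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        std::fprintf(stderr, "Shader compile failed: %s\n", log);
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}
```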

AMD had a couple of mock-ups of the new Imageon cores up and running.

Both mock-ups were connected to PCs, and the Imageon cores were running in simulation on FPGAs at a fraction of their final target speeds. AMD expects products based on this technology to arrive in 2008.

After gawking at the new mobile wares, I had a chance to speak with AMD's Richard Huddy about the company's plans for its Fusion initiative. Huddy confirmed that the Fusion project, which looks to meld CPU technology from AMD and GPU technology from the former ATI, is still focused primarily on low-end parts. AMD hasn't yet revealed any public roadmap for high-end Fusion products. Huddy also said the first-gen Fusion parts will not include any logic or cache sharing between CPU and GPU elements. AMD has to learn to "cut and paste" first, he said.

What, I asked, is the advantage of an integrated Fusion CPU-GPU chip over a traditional chipset with integrated graphics? Huddy answered that low latency and data sharing are the two main advantages of the Fusion approach. Fusion, he said, will allow for a different class of interaction between CPUs and GPUs—real two-way interaction.

But aren't modern GPUs already designed largely to mask latency? Yes, Huddy admitted, and they do a good job of it. But he noted that modern GPUs don't transfer large amounts of data back to the CPU. With Fusion, the GPU can render to a texture and hand off the data to the CPU very quickly.
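For context, handing rendered data back to the CPU on a discrete GPU today generally means a blocking readback across the bus, which is the round trip Huddy has in mind. The sketch below is a generic illustration of that pattern, assuming a render-to-texture framebuffer object is already bound; it is not anything AMD demonstrated:

```cpp
// Rough sketch of today's GPU-to-CPU readback path on a discrete card.
// Assumes the scene has already been rendered into the currently bound
// framebuffer object (render-to-texture), set up elsewhere.
#include <GL/gl.h>
#include <vector>

std::vector<unsigned char> readback_rendered_frame(int width, int height)
{
    std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4);

    // This call stalls until rendering finishes and the pixels have crossed
    // the bus back into system memory. That round trip is the sort of
    // CPU-GPU handoff Fusion aims to shorten.
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    // CPU-side code (physics, image analysis, and so on) can now use the data.
    return pixels;
}
```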

So are Fusion's architectural advantages mainly helpful for graphics or for more unconventional applications like physics? Huddy responded that Fusion's advantages will come mainly in unconventional uses.

But are integrated graphics cores powerful enough to handle graphics alongside physics or other tasks? Huddy's answer: "We do need sufficient compute density."

I will be interested to see how all of this plays out. Huddy conceded that AMD and ATI are just learning how to do CPU-GPU integration, and he also pointed out that one of Fusion's big immediate payoffs will come in chip packaging. CPU and IGP packaging and pinouts have become a size constraint, particularly in laptops, and Fusion will allow CPU-to-GPU communications channels to become on-chip interconnects rather than external I/O links. It's hard to imagine that AMD bought ATI and initiated the Fusion project for the sake of packaging concerns, but I suppose every advantage counts.

Nvidia
Nvidia took the wraps off a new version of its development toolkit at GDC. Naturally, the new tools are geared toward DirectX 10 and GeForce 8-series GPUs. I met with Bill Rehbock, Nvidia's Senior Director of Developer Relations, to talk about these tools and various other issues. The most intriguing of the new tools may be the one that employs a GeForce 8 GPU, via the CUDA interface, to process texture compression much faster than a CPU alone could.

Rehbock characterized this tool as an example of Nvidia walking its own talk, and he said it could dramatically speed up build times for game developers.
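To give a sense of how texture compression maps onto a GPU, here is a heavily simplified sketch of a DXT1-style range-fit encoder written as a CUDA kernel, with one thread compressing one 4x4 pixel block. This is my own illustrative example and makes no claim to resemble Nvidia's actual tool:

```cpp
// Simplified DXT1-style compression as a CUDA kernel: one thread per 4x4
// block. Each block is encoded as two RGB565 endpoints plus sixteen 2-bit
// palette indices (8 bytes total). Illustrative only; a real encoder does
// much better endpoint selection and handles degenerate blocks carefully.
#include <cuda_runtime.h>
#include <cstdint>

__device__ uint16_t to_rgb565(int r, int g, int b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

__global__ void compress_dxt1(const uchar4 *pixels, uint2 *out,
                              int width, int height)
{
    int bx = blockIdx.x * blockDim.x + threadIdx.x;   // block column
    int by = blockIdx.y * blockDim.y + threadIdx.y;   // block row
    int blocks_x = width / 4, blocks_y = height / 4;
    if (bx >= blocks_x || by >= blocks_y) return;

    // Gather the block's 16 texels and find its color bounding box.
    uchar4 texel[16];
    int lo[3] = {255, 255, 255}, hi[3] = {0, 0, 0};
    for (int y = 0; y < 4; ++y)
        for (int x = 0; x < 4; ++x) {
            uchar4 c = pixels[(by * 4 + y) * width + (bx * 4 + x)];
            texel[y * 4 + x] = c;
            int rgb[3] = {c.x, c.y, c.z};
            for (int k = 0; k < 3; ++k) {
                lo[k] = min(lo[k], rgb[k]);
                hi[k] = max(hi[k], rgb[k]);
            }
        }

    // Endpoints are the box corners. Build the 4-entry palette the decoder
    // will reconstruct: the two endpoints plus two interpolated colors.
    int pal[4][3];
    for (int k = 0; k < 3; ++k) {
        pal[0][k] = hi[k];
        pal[1][k] = lo[k];
        pal[2][k] = (2 * hi[k] + lo[k]) / 3;
        pal[3][k] = (hi[k] + 2 * lo[k]) / 3;
    }

    // Pick the nearest palette entry for every texel (2 bits each).
    uint32_t indices = 0;
    for (int i = 0; i < 16; ++i) {
        int rgb[3] = {texel[i].x, texel[i].y, texel[i].z};
        int best = 0, best_dist = 1 << 30;
        for (int p = 0; p < 4; ++p) {
            int d = 0;
            for (int k = 0; k < 3; ++k) {
                int diff = rgb[k] - pal[p][k];
                d += diff * diff;
            }
            if (d < best_dist) { best_dist = d; best = p; }
        }
        indices |= (uint32_t)best << (2 * i);
    }

    // Pack the 8-byte block: c0 and c1 in RGB565, then the 32 index bits.
    uint32_t c0 = to_rgb565(hi[0], hi[1], hi[2]);
    uint32_t c1 = to_rgb565(lo[0], lo[1], lo[2]);
    out[by * blocks_x + bx] = make_uint2(c0 | (c1 << 16), indices);
}
```

On the host side, such a kernel would be launched over a 2D grid covering the image's width/4 by height/4 blocks; the speedup comes from encoding thousands of independent blocks concurrently, which is exactly the sort of data-parallel job CUDA exposes.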

As you might imagine, one of the first questions on my mind was the status of DirectX 10 game titles and what sort of graphical improvements they might offer over DirectX 9 games. Without getting into too many specifics, Rehbock seemed confident that the games are coming and that they will be very good once they arrive. Interestingly, he expected to see only a handful of games debut with native DX9 support and then get patched to support DX10 after release. Rehbock conceded that being first with DX10 hardware put Nvidia in the tough position of having to wait for software to take advantage of it, but he noted that, as a consequence of Nvidia being first, virtually all DX10 games now in the works are being developed on the GeForce 8800.

I also asked Rehbock about the state of hardware-accelerated physics, including GPU-accelerated physics. Why had the hype come so early and then trailed off, and where were the games that use it? Rehbock said he was pleased that Nvidia hadn't pushed too hard on the physics hype of late, preferring to wait for the games to arrive. However, he didn't expect to see too many titles with hardware physics available in the near future, in part because the DX10 transition has occupied the time and attention of the best and brightest game programmers. Once that transition is made, he expects DX10's new capabilities to free up those coders to look into physics.

Rehbock identified shader development, in particular, as an area where DX10 will free up programmers. DX9 was initially billed as enabling game designers and even artists to create their own pixel shaders via drag-and-drop tools, but in truth, creating shaders in DX9 typically required the efforts of a skilled programmer. Rehbock believes DX10 really does make GUI-based shader creation accessible, which may allow top-flight programmers to spend more time on physics acceleration via mechanisms like CUDA.

Speaking of CUDA, we also talked about the early complaints that CUDA is more complex and difficult to program than initially anticipated, with multiple memory spaces to maintain and the like. Rehbock admitted Nvidia may have oversold the ease-of-use angle for CUDA somewhat, but said he believes the level of abstraction in CUDA is appropriate, especially for a first-generation effort. Rehbock argued that Nvidia had to make a tradeoff between ease of use and flexibility, and that developers will benefit from better understanding the chip's architecture by seeing it exposed at a relatively low level. The most capable programmers will then build tools and APIs for applications like physics, which others will be able to use.
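To make the memory-space complaint concrete: even a trivial CUDA computation asks the programmer to juggle separate host and device allocations and to copy data across the bus explicitly in both directions. A minimal, hypothetical example looks like this:

```cpp
// Minimal CUDA example showing the separate host and device memory spaces a
// programmer must manage explicitly: allocate on the device, copy in, launch
// a kernel, copy the results back. Illustrative only.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    float *host = new float[n];                       // host memory space
    for (int i = 0; i < n; ++i) host[i] = (float)i;

    float *device = nullptr;
    cudaMalloc(&device, n * sizeof(float));           // device memory space
    cudaMemcpy(device, host, n * sizeof(float),
               cudaMemcpyHostToDevice);               // host to device copy

    scale<<<(n + 255) / 256, 256>>>(device, 2.0f, n); // run on the GPU

    cudaMemcpy(host, device, n * sizeof(float),
               cudaMemcpyDeviceToHost);               // device to host copy
    std::printf("host[42] = %f\n", host[42]);

    cudaFree(device);
    delete[] host;
    return 0;
}
```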

Such talk, of course, sounds much like AMD's Stream Computing approach for its Radeon GPUs, although AMD doesn't offer some of the first-party tools Nvidia does, such as a C compiler.

Rehbock expressed some surprise that Microsoft hadn't chosen GDC as the place to launch its rumored DirectPhysics initiative, given the recent job listings from Microsoft in this area. He also emphasized that Nvidia welcomes such efforts from Microsoft and doesn't see CUDA as a competitor to them. Instead, he cited the coexistence of Cg and HLSL as a model for how CUDA and any Microsoft GPGPU effort might coexist.

As we wrapped things up, Rehbock took a second to communicate his optimism about the current state of PC gaming. 2007, he said, is packed with an unprecedented number of releases from big-name game development teams. Titles like Supreme Commander, Command & Conquer 3, and Hellgate: London are part of the mix, as well as Crysis and Unreal Tournament 3. Rehbock was especially sweet on Hellgate: London; he said the guys at Flagship Studios had "really nailed it."