GDC 2007

THIS MAY COME AS a shock, but the Game Developers Conference is still very much, well, a conference for game developers. The agenda is full of sessions on topics like game design, using Microsoft’s XNA tools, and the immortal “PS3 Audio: More than Extra Channels.” The attendees look and dress like game developers, which is to say that wearing slacks and a shirt with a collar on it to GDC is akin to wearing a giant, flashing orange sign that says, “I do not belong here.” The last few days of GDC also include a more traditional trade show-style expo, complete with giant banners, booths, and even the occasional booth babe.

Since we irresponsibly skipped out on CES this year, I decided to attend the expo portion of GDC 2007 as part of my penance. So I packed my shirt and slacks and headed out to San Francisco to declare to all present that I was a first-time GDC attendee with absolutely no sense of how to dress. Fortunately, lots of folks on the expo floor seemed sympathetic, and were thus willing to talk at length with the guy with the funny clothes. What follows is my report from GDC, complete with what may be TR’s first booth babe picture in, heck, five years.

As with many of our trade show reports, I’ve divided my comments according to the companies involved, until that breaks down and I just start rambling.

Intel
Intel didn’t have much to announce at GDC, but they did pull out the roadmaps and give us a bit of a sneak peek. The next big event on their agenda for desktop hardware is the replacement for the 975X chipset, dubbed the X38. This new bit of core logic is due to arrive in the third quarter of this year, probably in July or August, and will incorporate a host of new features, including support for DDR3 memory at speeds up to 1333MHz, PCI Express 2.0, and the ICH9 south bridge. The ICH9 will come in standard, R, and DH varieties, and like any new chipset these days, it will have even more SATA ports. Another new feature of the X38 will be 1333MHz front-side bus speeds, which it will need in order to support Intel’s upcoming “Penryn” 45nm processors.

As for those processors, Intel’s 45nm parts remain on track for introduction in the first quarter of 2008. We already know these chips will essentially be a die shrink of the current 65nm Core 2 Duo, with only minor tweaks for features or performance. One of those tweaks will be the addition of SSE4 instructions aimed at improving performance in multimedia, graphics, and other applications. Interestingly, Penryn’s quad-core variants will not be monolithic, single-chip quad-core parts. Instead, they will have two dual-core chips placed together in a single package, like the current “Kentsfield” quad-core processors.

Intel won’t officially confirm it, but multiple sources have reported that the desktop variants of Penryn will be code-named Wolfdale and Yorkfield. Wolfdale will reportedly be the dual-core part, with 6MB of L2 cache and support for a 1333MHz FSB. Yorkfield, meanwhile, is the quad-core variant, expected to combine two of those dual-core chips for a total of four cores and 12MB of L2 cache. Probably because Yorkfield’s two chips would each place an electrical load on the bus, the quad-core part’s FSB is rumored to be limited to 1066MHz.

One thing Intel did seem eager to confirm was that Penryn derivatives will not feature Hyper-Threading, the simultaneous multithreading capability built into the Pentium 4, despite persistent rumors that the tech will make a reappearance soon.

Intel’s GDC booth was showing off a number of multithreaded games, including Crysis and Supreme Commander. Another eye-catcher was this impressive arcade-style setup for racing games:

HP and Intel are working on this setup together. The display uses two projectors, both aimed at the screen, and a camera assists in calibrating the system. The host PC then uses real-time image processing to align the overlapping images before sending them to the projectors. The end result is a visually seamless, wraparound wide-aspect display.
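Neither company went into the details of the algorithm, but camera-assisted alignment like this usually boils down to estimating a warp (a homography) between each projector’s pixels and where they actually land on the screen, then pre-distorting every frame to cancel that warp out before it leaves the PC. Here’s a minimal sketch of the idea using standard OpenCV calls; the test pattern, the numbers, and the overall pipeline are my assumptions, not HP or Intel’s actual code.

    import cv2
    import numpy as np

    def estimate_warp(projected_pts, observed_pts):
        # Map projector pixel coordinates to where the calibration camera
        # actually saw them on the screen.
        H, _ = cv2.findHomography(np.float32(projected_pts),
                                  np.float32(observed_pts), cv2.RANSAC)
        return H

    def prewarp_frame(frame, H, out_size):
        # Warp the frame with the inverse mapping so that, after the projector
        # distorts it on the way out, it lands where it should on the screen.
        return cv2.warpPerspective(frame, np.linalg.inv(H), out_size)

    # Four known test-pattern corners for one projector and where the camera
    # observed them (hypothetical numbers):
    proj_corners = [(0, 0), (1023, 0), (1023, 767), (0, 767)]
    seen_corners = [(12, 8), (1030, 15), (1018, 770), (5, 760)]
    H = estimate_warp(proj_corners, seen_corners)
    frame = np.zeros((768, 1024, 3), np.uint8)
    aligned = prewarp_frame(frame, H, (1024, 768))

Blending the brightness of the overlap region is a separate step, but the warping above is the part that makes the two images line up.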

The chair is from a company called D-Box, and it moves about on three electric actuators. I was told the chair currently costs about fifteen grand, at which point my heart skipped a beat. D-Box is apparently working on bringing the price down, though, so hold off on the major cardiac events.

After talking with folks at Intel’s expo booth about high-end gaming hardware, I headed across the street to a hotel suite to talk integrated graphics with some other Intel reps. They were showing off the graphics core of the G965 chipset, especially its video playback and acceleration capabilities, which Intel has branded with the Clear Video moniker.

In order to show off how, er, clear Clear Video can be, Intel had set up a little bake-off between the G965 and a Radeon X1600 discrete graphics card. The bake-off consisted of a single scene from the HQV video quality benchmark CD, and in this scene, the G965 appeared to be markedly superior at avoiding a certain type of blocky artifact. Of course, trade-show bake-offs are not always perfect beacons of light and truth, but we’ll be sure to watch this test scene in our own upcoming reviews of integrated graphics solutions. The good news here is that Intel does seem to be paying attention to video playback quality and acceleration.

The G965 graphics core is intriguing for other reasons, as well. The GPU has a unified shader architecture that consists of eight programmable, scalar processing units. This shader core can accelerate both video and graphics operations, and Intel claims it was built for DirectX 10’s Shader Model 4.0. That said, the G965’s prospects are somewhat clouded for several reasons.

For one, current owners of the G965 aren’t getting hardware-accelerated vertex shading out of it, even in DirectX 9. Intel is, however, addressing this issue with a new driver revision that adds hardware vertex processing, and I saw a demo of Half-Life 2 running at reasonably acceptable frame rates with an early version of the new driver. Intel plans to make a beta version of the driver available online soon.

Beyond that, Intel openly admits the G965 may never receive a DirectX 10-capable graphics driver, even though the architecture could support it. Part of the problem, they admitted to me, is processing power. Remember, an eight-core unified shader is the equivalent of a two-pixel-wide pipeline, and that doesn’t even account for vertex or geometry shader processing, which are shared on the same processing elements.
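To unpack that comparison, here is the back-of-the-envelope arithmetic (my math, assuming each scalar unit handles one color component per clock and a pixel is four RGBA components):

    scalar_units = 8
    components_per_pixel = 4                 # RGBA
    pixels_per_clock = scalar_units / components_per_pixel
    print(pixels_per_clock)                  # 2.0 -- hence "two-pixel-wide"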

Intel claims the G965’s unified architecture is a solid foundation for future DX10-capable parts, regardless of what happens with this generation’s graphics driver. To that end, we can expect a new chipset from Intel with more graphics processing power later this year. Before that happens, the G965 will find a larger audience when its mobile version makes a debut as part of the upcoming Santa Rosa mobile platform.

 

AMD
AMD chose GDC as the place to announce a new push in graphics for small mobile devices like cell phones. This push includes a new set of tools for developers and a new family of Imageon mobile GPUs with a unified shader architecture based on the one in the Xbox 360. These GPUs will support a couple of mobile standards: OpenGL ES 2.0 for 3D graphics and OpenVG 1.0 for vector graphics (think Flash-style animation). Among the development tools for these GPUs is a familiar face: ATI’s RenderMonkey shader development tool, which can now compile pixel shader programs to OpenGL ES code.

AMD had a couple of mock-ups of the new Imageon cores running that looked like so:

Both mock-ups were connected to PCs, and the Imageon cores were running in simulation on FPGAs at a fraction of their final target speeds. AMD expects products based on this technology to arrive in 2008.

After gawking at the new mobile wares, I had a chance to speak with AMD’s Richard Huddy about the company’s plans for its Fusion initiative. Huddy confirmed for us that the Fusion project, which looks to meld CPU technology from AMD and GPU technology from the former ATI, is still focused primarily on low-end parts. AMD hasn’t yet revealed any public roadmap for high-end Fusion products. Huddy also said the first-gen Fusion parts will not include any logic or cache sharing between CPU and GPU elements. AMD has to learn to “cut and paste” first, he said.

What, I asked, is the advantage of an integrated Fusion CPU-GPU chip over a traditional chipset with integrated graphics? Huddy answered that low latency and data sharing are the two main advantages of the Fusion approach. Fusion, he said, will allow for a different class of interaction between CPUs and GPUs—real two-way interaction.

But aren’t modern GPUs already designed largely to mask latency? Yes, Huddy admitted, and modern GPUs do a good job of masking latency. But he noted that modern GPUs don’t transfer large amounts of data back to the CPU. With Fusion, the GPU can render to a texture and hand off the data to the CPU very quickly.

So are Fusion’s architectural advantages mainly helpful for graphics or for more unconventional applications like physics? Huddy responded that Fusion’s advantages will come mainly in unconventional uses.

But are integrated graphics cores powerful enough to handle graphics alongside physics or other tasks? Huddy’s answer: “We do need sufficient compute density.”

I will be interested to see how all of this plays out. Huddy conceded that AMD and ATI are just learning how to do CPU-GPU integration, and he also pointed out that one of Fusion’s big immediate payoffs will come in chip packaging. CPU and IGP packaging and pinouts have become a size constraint, particularly in laptops, and Fusion will allow CPU-to-GPU communications channels to become on-chip interconnects rather than external I/O links. It’s hard to imagine that AMD bought ATI and initiated the Fusion project for the sake of packaging concerns, but I suppose every advantage counts.

Nvidia
Nvidia took the wraps off of a new version of its development toolkit at GDC. Naturally, the new tools are geared toward DirectX 10 and GeForce 8-series GPUs. I met with Bill Rehbock, Nvidia’s Senior Director of Developer Relations, to talk about these tools and various other issues. The most intriguing of the new tools may be the one that employs a GeForce 8 GPU, via the CUDA interface, to process texture compression much faster than a CPU alone could do:

Rehbock characterized this tool as an example of Nvidia walking its own talk, and he said it could dramatically speed up build times for game developers.
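Nvidia didn’t detail the tool’s internals, but DXT-style block compression is a natural fit for a GPU: every 4x4 block of texels can be compressed independently, so thousands of blocks can be crunched in parallel. For reference, here’s a heavily simplified, CPU-side Python sketch of what compressing a single block involves. It is not Nvidia’s code, and a real compressor searches for its endpoint colors far more carefully.

    import numpy as np

    def compress_dxt1_block(block):
        # Very simplified DXT1-style compression of one 4x4 RGB block: pick two
        # endpoint colors, build a four-color palette from them, and store a
        # 2-bit palette index per pixel (8 bytes total instead of 48).
        pixels = block.reshape(-1, 3).astype(np.float32)        # 16 RGB texels
        c0, c1 = pixels.max(axis=0), pixels.min(axis=0)         # crude endpoints
        palette = np.stack([c0, c1, (2 * c0 + c1) / 3, (c0 + 2 * c1) / 3])
        dists = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
        indices = dists.argmin(axis=1)                          # 2-bit index each
        return c0.astype(np.uint8), c1.astype(np.uint8), indices

    block = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
    endpoint0, endpoint1, idx = compress_dxt1_block(block)

Because each block is independent, a GPU can hand blocks out to its many threads and compress an entire texture in one pass, which is presumably where the big speedup over a lone CPU comes from.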

As you might imagine, one of the first questions on my mind was the status of DirectX 10 game titles and what sort of graphical improvements they might offer over DirectX 9 games. Without getting into too many specifics, Rehbock seemed confident that the games are coming and that they will be very good once they arrive. Interestingly, he expected to see only a handful of games debut with native DX9 support and then get patched to support DX10 after their release. Rehbock conceded that being first with DX10 hardware put Nvidia in the tough position of having to wait for software to take advantage of it, but he noted that as a consequence of Nvidia being first, virtually all DX10 games now in the works are being developed on the GeForce 8800.

I also asked Rehbock about the state of hardware-accelerated physics, including GPU-accelerated physics. Why had the hype come so early and then trailed off, and where were the games that use it? Rehbock said he was pleased that Nvidia hadn’t pushed too hard on the physics hype of late, preferring to wait for the games to arrive. However, he didn’t expect to see too many titles with hardware physics available in the near future, in part because the DX10 transition has occupied the time and attention of the best and brightest game programmers. Once that transition is made, he expects DX10’s new capabilities to free up those coders to look into physics.

Rehbock identified shader development, in particular, as an area where DX10 will free up programmers. DX9 was initially billed as enabling game designers and even artists to create their own pixel shaders via drag-and-drop tools, but in truth, creating shaders in DX9 typically required the efforts of a skilled programmer. Rehbock believes DX10 really does make GUI-based shader creation accessible, which may allow top-flight programmers to spend more time on physics acceleration via mechanisms like CUDA.

Speaking of CUDA, we also talked about the early complaints that CUDA is more complex and difficult to program than initially anticipated, with multiple memory spaces to maintain and the like. Rehbock admitted Nvidia may have oversold the ease-of-use angle for CUDA somewhat, but said he believes the level of abstraction in CUDA is appropriate, especially for a first-generation effort. Rehbock argued that Nvidia had to make a tradeoff between ease of use and flexibility, and that developers will benefit from better understanding the chip’s architecture by seeing it exposed at a relatively low level. The most capable programmers will then build tools and APIs for applications like physics, which others will be able to use.
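For the curious, the “multiple memory spaces” complaint mostly comes down to bookkeeping like the sketch below, shown through the PyCUDA bindings purely to keep it short (the same steps exist in plain CUDA C): the host buffer and the device buffer are separate allocations, and the programmer shuttles data between them explicitly.

    import numpy as np
    import pycuda.autoinit                    # creates a CUDA context
    import pycuda.driver as cuda
    from pycuda.compiler import SourceModule

    mod = SourceModule("""
    __global__ void scale(float *data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }
    """)
    scale = mod.get_function("scale")

    n = 1 << 20
    host_buf = np.random.randn(n).astype(np.float32)  # lives in system RAM
    dev_buf = cuda.mem_alloc(host_buf.nbytes)         # separate allocation in GPU memory

    cuda.memcpy_htod(dev_buf, host_buf)               # explicit host-to-device copy
    scale(dev_buf, np.float32(2.0), np.int32(n),
          block=(256, 1, 1), grid=((n + 255) // 256, 1))
    cuda.memcpy_dtoh(host_buf, dev_buf)               # explicit device-to-host copy

None of this is conceptually hard, but keeping the two copies of the data in sync is exactly the sort of housekeeping a higher-level physics or math library could hide from the programmer.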

Such talk, of course, sounds much like AMD’s Stream Computing approach for its Radeon GPUs, although AMD doesn’t offer some of the first-party tools Nvidia does, such as a C compiler.

Rehbock expressed some surprise that Microsoft hadn’t chosen GDC as the place to launch its rumored DirectPhysics initiative, given the recent job listings from Microsoft in this area. He also emphasized that Nvidia welcomes such efforts from Microsoft and doesn’t see CUDA as a competitor to them. Instead, he cited the coexistence of Cg and HLSL as a model for how CUDA and any Microsoft GPGPU effort might coexist.

As we wrapped things up, Rehbock took a second to communicate his optimism about the current state of PC gaming. 2007, he said, is packed with an unprecedented number of releases from big-name game development teams. Titles like Supreme Commander, Command and Conquer 3, and Hellgate: London are part of the mix, as well as Crysis and Unreal Tournament 3. Rehbock was especially sweet on Hellgate: London; he said the guys at Flagship Studios had “really nailed it.”

 

Ageia
Ageia didn’t have a huge presence at GDC this year, but they continue pushing for development of games that use their PhysX SDK and hardware. The two new titles they had to show off were CellFactor: Revolution and Warmonger.

CellFactor: Revolution is a multiplayer game based on the original (and entertaining) CellFactor demo that Ageia commissioned as a showcase for PhysX hardware. The game is well along in development, and Ageia expects it to be released as soon as the end of this month or the beginning of next. The original plan was for the game to be sold as a budget title, but now Ageia plans to release it, with five levels, free of charge.

The game will run with or without PhysX hardware, although obviously it will run faster with hardware acceleration. In our initial review of the PhysX card, we found that the CellFactor demo would run just fine on a dual-core CPU with tens or hundreds of rigid body objects bouncing around the screen, but hardware acceleration was necessary for decent frame rates with effects like fluids and cloth. My experience playing through a level of CellFactor: Revolution was consistent with this assessment. Fortunately, the final game will allow players without PhysX cards to disable these advanced physics features in order to keep frame rates up. This concession should open up the possibility that a robust community might take to playing CellFactor: Revolution online. If so, Ageia hopes many of them will decide to buy a PhysX card in order to see how it can accelerate the game.

Warmonger, pictured above, is a gorgeous-looking FPS with loads of potential for environmental interaction, judging by the small portion of it that I got to try. The Ageia rep walked me through moving up some stairs to a well-placed landing inside of a building—an ideal sniper post. We then blew up the stairs below us to make sure we wouldn’t be followed. Clever. The picture above isn’t the greatest, but it shows “metal cloth” in action, buckling as the bus takes damage.

Like CellFactor: Revolution, Warmonger will be released for free, as a title bundled with PhysX cards. In fact, this one can’t be played without a PhysX card. Ageia expects it to arrive in May or June of this year.

Bigfoot Networks
My main reason for meeting with the guys from Bigfoot was to apologize for utterly failing to get my review of the Killer NIC out the door after committing to do one way back when the product was first hitting the market. The Killer NIC lingered in Damage Labs for longer than I care to admit as I focused my attention instead on the GeForce 8800, quad-core CPUs, and the like. Fortunately, the Bigfoot guys were forgiving, and we’ve made arrangements for a Killer NIC review here at TR before too long. That review should include the new, less expensive Killer K1, as well as the original Killer NIC.

For the uninitiated, the K1 is based on similar hardware but lacks the flashy heatsink of the original Killer NIC. That leads to slightly slower CPU clock speeds, but nothing that should affect gaming performance too dramatically, according to Bigfoot. The K1 also lacks out-of-the-box support for FNApps, applications that run independently of the host PC on the CPU and memory of the Killer NIC. Right now, however, Bigfoot is selling the K1 with FNApp support as part of a promotion.

Bigfoot had two bits of news to announce at the show. The first was the release of a new FNApp that may be of interest to some folks: a BitTorrent client, now available for download in beta form right here. Bigfoot has essentially ported an open-source BitTorrent client to the Killer NIC’s embedded version of Linux. Bigfoot “CEO & Mad Scientist” Harlan Beverly told me the client can sustain up to 40 torrents at once, with zero CPU utilization on the host PC. The client writes downloaded files to an external hard drive connected to the USB port on the back of the Killer NIC, and Bigfoot provides a GUI for copying the files over to the host PC later. The client itself provides a GUI, bandwidth controls, and a search button; it also automatically checks with Bigfoot for the availability of downloadable updates. This app complements the Killer NIC’s original FNApp, a firewall with iptables-based stateful packet inspection.

Bigfoot’s other news was the release of a white paper about Windows Vista networking performance written by Harlan himself. The Vista question obviously loomed large for a company whose products rely on bypassing the Windows networking stack. The paper’s basic claim is that Microsoft’s optimizations of the Vista network stack can be helpful for data throughput, especially with TCP/IP, but do little to help UDP or packet latency, the two factors of most concern in online gaming. Bigfoot expects to see even more performance benefits for the Killer NIC in Windows Vista than in XP. We shall have to see about that. Bigfoot does have drivers for both the 32-bit and 64-bit versions of Vista right now.

When I first talked to the Bigfoot guys, we discussed the possibility of other firms licensing the Killer NIC technology for use on motherboards and the like. I asked Harlan for an update on that front, and he said one of his current side projects is looking at the mobile space. He envisions one of those large, luggable gaming laptops with a couple of blood-red ports on the back, one for the Killer NIC’s Ethernet connection and another for USB. I wouldn’t be shocked to see it happen, if a company like Alienware or Voodoo PC becomes interested.

 

S3
S3 had nothing new brewing for GDC, but that didn’t stop them from hosting a first-class booth, complete with a team of booth babes and an array of MultiChrome systems running UT2004 for tourney play.

S3 also had this Gateway laptop on display that had been fitted with an S3 Chrome MXM module.

Unfortunately, if you were to turn it over, you’d see this sticking out of the bottom…

Doh! They don’t seem to have the cooling solution on this particular MXM module perfected just yet. I understand some of the S3 GPUs on MXM modules already fit perfectly into existing systems.

S3 is, incidentally, working on a DX10-class GPU with support for Shader Model 4.0, which they expect to position at the low end of the market like their current Chrome-series offerings.

Philips
Philips had a large booth in the West expo hall dedicated to its unique amBX ambient, err, environmental stuff. Rather than trying to describe it, let’s start with a picture.

Here’s an amBX setup for PC gaming. Behind the monitor is the amBX hub, which plugs into the PC’s USB port. On top of the hub is a colored light, much like the ones atop the speakers. You can also see a pair of fans for blowing wind in your face and a keyboard wrist rest that provides console-style rumble feedback. Not pictured is a beefy subwoofer that’s also part of the package. Philips provides an API game developers can use to control all of this equipment, making it react to action in the game. The idea is to get sound, ambient light, air, and vibration feedback to work together to enhance the immersive experience. Obviously, game support for amBX will be an issue unless and until it really takes hold, but Philips does support a subset of amBX functionality in a catalog of older games.

I tried a demo of a racing game, as pictured. As speeds grew, the wind in my face ramped up, and when I hit a seam in the pavement, the wrist rest made me feel it. I wish I could say that the whole experience really enhanced the gameplay, but it was mostly subtle and kinda cheesy. I suppose I could grow to appreciate it more with time, or in a different type of game. The one piece of the kit that I could really enjoy using is the rumble wrist rest, which just seems like a good idea. Unfortunately, Philips doesn’t sell this component separately; it comes as a part of the pricier amBX packages, which will range in price from $99 to $399.

Here’s a look at a larger home theater setup Philips put together to really show off the concept. In this room, the entire floor moved and vibrated in response to the action, and the fans were powerful enough to dry out your contact lenses. Personally, I’d prefer a big-ass HDTV, but I’ve gotta credit Philips for trying something different.

 

Matrox
Matrox soldiers on, beating the drum for its dual- and triple-screen video splitter products. Here’s a look at the TripleHead2Go doing its thing:

Although it’s not on display here, Matrox has added a new feature to its drivers for 3D games that accounts for the offset caused by the bezels on LCD monitors. The goal is to make the display more realistic, like one would see out of the cockpit of an airplane or the like.
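Matrox hasn’t published the math behind the feature, but the basic trick is presumably to treat the bezel gap as screen real estate that gets rendered and then thrown away, so straight lines stay straight as they cross from one panel to the next. A quick, hypothetical sketch of the bookkeeping (the panel size, bezel width, and pixel pitch below are made-up numbers):

    def bezel_compensated_width(panel_px, panels, bezel_mm, pixel_pitch_mm):
        # Total virtual desktop width, counting the pixels "hidden" behind
        # each pair of adjoining bezels.
        hidden_px_per_gap = round(2 * bezel_mm / pixel_pitch_mm)  # two bezels meet at each gap
        gaps = panels - 1
        return panels * panel_px + gaps * hidden_px_per_gap

    # Three 1280-pixel-wide panels, ~15 mm bezels, ~0.29 mm pixel pitch:
    print(bezel_compensated_width(1280, 3, 15, 0.29))  # ~4046 pixels
    # The game renders to this wider view, and the driver simply never
    # displays the columns that fall behind the bezels.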

Here’s the new digital edition of the TripleHead2Go that was announced at the show. This box takes a dual-link DVI input and breaks it out into three regular DVI outputs, so that laptops or other systems with a single DL-DVI output can drive three displays.

Sony
Sony’s booth included a working demo of Unreal Engine 3 running what was obviously a level from Unreal Tournament 3. I took a, uh, “screenshot” with my camera.

The demo looked pretty good, although maybe a little too much like past versions of UT. Frame rates were decent but not great, perhaps in the high 20s or low 30s, and the PS3 wasn’t doing any antialiasing or anisotropic filtering. The Sony rep I talked with said such things would be added later, after the game had been optimized for performance.

…and more
I saw some other innovative things out on the show floor as I wandered about. One of them was this so-called “3D mouse” from 3Dconnexion.

This thing allows you to move about in virtual 3D space with ease. The faster you push in any direction, the faster you move. You can twist and turn it to change your orientation or the orientation of on-screen objects, as well. As an FPS gamer, I’m not sure this kind of thing is really much better than a keyboard/mouse combo for movement in 3D space, but it does have its advantages.

Here’s something different. The One Laptop Per Child effort had a booth at GDC, where it was showing off a version of SimCity ported to the device. The most startling thing about the OLPC machine, when you see it in person, is how truly small it is. Only a child could use the keyboard comfortably, which I suppose is kind of the point.

And here is the most awesome booth babe I’ve ever seen at any trade show. The guys at +7 Systems have developed a back-end system that tracks and automatically enforces balance in MMO games by watching what users are doing. As I understand it, if an object of a certain type gets used too often, the system assumes it’s too powerful and shaves a few percentage points off of its impact in order to maintain balance. In that vein, they might want to consider that their own Level 20 booth babe gives them an unfair advantage in getting press attention.
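Based purely on that description, the enforcement logic might look something like the hypothetical sketch below; the item names, thresholds, and nerf step are entirely my own invention, not +7 Systems’ actual rules.

    from collections import Counter

    class UsageBalancer:
        # If one item accounts for far more than its fair share of uses,
        # shave a few percent off its effectiveness.
        def __init__(self, items, fair_share_factor=1.5, nerf_step=0.03):
            self.items = items
            self.uses = Counter()
            self.multiplier = {item: 1.0 for item in items}
            self.fair_share_factor = fair_share_factor
            self.nerf_step = nerf_step

        def record_use(self, item):
            self.uses[item] += 1

        def rebalance(self):
            total = sum(self.uses.values())
            if total == 0:
                return
            fair_share = total / len(self.items)
            for item in self.items:
                if self.uses[item] > self.fair_share_factor * fair_share:
                    self.multiplier[item] = max(0.5, self.multiplier[item] - self.nerf_step)

    balancer = UsageBalancer(["flame_sword", "ice_bow", "rusty_dagger"])
    for _ in range(900):
        balancer.record_use("flame_sword")
    for _ in range(100):
        balancer.record_use("ice_bow")
    balancer.rebalance()
    print(balancer.multiplier)   # flame_sword gets trimmed a bit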

Comments closed
    • MadManOriginal
    • 13 years ago

    q[

      • UberGerbil
      • 13 years ago

      Well, it’s not like that’s news — it’s been widely reported for a while. Given Intel’s “tick-tock” strategy you wouldn’t expect them to do anything too radical while moving to a new process node. Despite the handful of new instructions and the increased cache this really is just a die shrink. Given that they plan to sell both dual and quad core versions of the design it makes sense to have it be purely a packaging issue. And of course it also means they’ll get higher yields because a defect just kills half a quad, which can be replaced — and that may be important too in the early days of a new process.

      As you said, there’s no evidence the lack of integration is hurting them with Clovertown, and by the time it does they’re planning on having their other interconnect ducks in a row. Besides, it makes for a nice contrast with Barcelona.

        • MadManOriginal
        • 13 years ago

        Oh, well I hadn’t seen that Penryn quads would be two-die packages before so it was news to me.

    • crabjokeman
    • 13 years ago

    You owe me a booth babe. I mean, I literally want one delivered to my doorstep (preferably in a cage).

      • Damage
      • 13 years ago

      You owe us all a crab joke.

        • blitzy
        • 13 years ago

        why did the crab cross the road

        o[< to get to the other tide <]o ahhh cheese burger

        • eitje
        • 13 years ago

        post of +1 damage.

      • ludi
      • 13 years ago

      I believe the number you wanted was 1-900-RUSSIAN.

      • Flying Fox
      • 13 years ago

      Well, he did mention S3’s full complement of booth babes; don’t know if he has taken pictures of those?

      • Krogoth
      • 13 years ago

      I thought it would be because it can’t stand the subpar audio quality.

      o[< Referring to the infamous Realtek AC’97 CODEC series <]o

    • elmopuddy
    • 13 years ago

    I’d love to see a TR review of the Killer NIC.. bring it on!

    EP

    • Shark
    • 13 years ago

    While reading the Killer NIC white paper, I was bothered by the style in which it was done, until I realized that even though it is called a white paper, it’s not.

    It’s an article, as written in the style seen everyday on most enthusiast websites.

    And then it made sense. It didn’t make me like the article anymore, but at least it made sense.

    • Dposcorp
    • 13 years ago

    Wow. That was a great article. Well written, witty, and not your normal just-show-off-all-the-coolest-stuff type of write-up.

    Job well done Scott.

    • whitneycruise
    • 13 years ago

    Good one, Damage 😛 When I read that booth babe line, I was like, “Oh no! The dude’s lost it!” Good to see you didn’t resort to showing flesh for more page hits.

      • DrCR
      • 13 years ago

      Yeah, no kidding. Thinking “alas, TR going down as well..” only to get a good laugh at the end. 🙂

    • Bensam123
    • 13 years ago

    OMG you guys actually attended! I was waiting for a blog post with a few photos. 😛

    Nice write up.

    The Philips system looks largely useless in many ways (especially for the price). I’m guessing rumble mice died out for a reason…

    I don’t know about everyone else, but I would rather see the introduction of better physics into games than better graphics at this point. 🙁

      • Gerbil Jedidiah
      • 13 years ago

      It is dug=)

    • BobbinThreadbare
    • 13 years ago

    With AMD’s fusion proc, I wonder if it could be used to accelerate physics while a dedicated card does the graphics.

    • blitzy
    • 13 years ago

    LOL @ the booth babe, have to admit I was disappointed not getting a real one 😉

      • Flying Fox
      • 13 years ago

      Level 20 booth babe 0wned me. 🙂

    • Jive
    • 13 years ago

    Welcome to my part of California – Damage, the weather couldn’t be any better, and the gasoline couldn’t be any more expensive either.

      • JustAnEngineer
      • 13 years ago

      q[
