Nvidia goes stereoscopic with GeForce 3D Vision

I’m going to assume you have, at some point in your life, tried out some sort of stereoscopic 3D glasses scheme. They come in all forms, from the early blue-and-red specs used in Jaws 3D to the virtual reality goggles with individual LCDs built-in. (Wow, I am old.) If you’ve seen a 3D movie in the theater more recently, you probably wore some simple cardboard glasses. And let’s be honest: all of these glasses have one thing in common: the cheese factor.

Yeah, they’re cheesy. You feel like a fool putting them on, and even if the 3D effect works pretty well, you can’t escape the sensation entirely. If the people closest to you, the ones you care about most, could see you now, they would point and laugh.

If you’re at all familiar with the history of stereoscopic 3D schemes in computer games, you may have further reservations about these things. In my years covering this industry, I’ve managed to experience quite a few such products. I’ve seen the flicker, and I’ve come away with the vague headaches and sense of vertigo to match.

So when presented with these new GeForce 3D Vision glasses from Nvidia, you can imagine why my initial reaction was skepticism. I mean, what’s really changed between the last time we tried this and now?

A couple of things, it turns out. For one, incremental improvements in technology over the last little while have served to make stereoscopic schemes more viable. With each generation, GPUs have grown by leaps and bounds in terms of visual fidelity and performance. They can generate much more convincing images than they could just a few short years ago. At the same time, LCDs have supplanted CRTs as the display technology of choice. Display makers have adopted wider aspect ratios better suited to the human visual field and, crucially, the very newest LCDs are quick enough to refresh onscreen images at a rate of 120Hz—often enough to display a 60Hz image for each eye in a stereo vision scheme.

The second change is who’s making this latest push for stereoscopic visuals. This isn’t some start-up whose booth is tucked into the back corner of the convention hall at GDC. Nvidia is one of the leading graphics chip makers in the world, and it has strong relationships and substantial clout with game developers. Nvidia is also a mighty hard-headed—err, shall we say, determined?—company. If anybody can make this go, it’s probably them.

Not that the whole dorky 3D glasses thing is any sort of a slam-dunk proposition, of course. But I’ve spent some time playing around with Nvidia’s GeForce 3D Vision, and I’m at least intrigued by its potential.

What it takes

The GeForce 3D Vision package comes with two main components: a pair of glasses and a little, black, pyramid-shaped doohickey that happens to be an IR transmitter. Both of these devices plug into the computer via a USB cable. The glasses plug in only for the purpose of charging. Otherwise, they’re wireless and won’t keep you tethered to your PC, thank goodness. The IR transmitter must remain connected to the PC, of course, because it’s what sends signals to the glasses.

The 3D Vision scheme creates the impression of depth by showing different images to the right and left eyes, with the image for each eye adjusted for perspective. In order to be sure each eye sees a separate image, Nvidia uses a shuttering scheme similar to a number of past attempts at 3D. The display shows two different images in rapid succession, one for the left eye and another for the right. The glasses use active polarization to block the light going to the eye whose image isn’t being shown, and then the display and glasses switch to the other eye in sync.

Without glasses, both images are visible—and confusing
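
Nvidia won’t discuss the guts of its driver, but the basic idea behind that alternation is simple enough to sketch in code. Below is a minimal, purely illustrative C++ mock-up of per-eye rendering; the camera layout, the 0.065 m separation figure, the depth setting, and the renderScene() stub are my own stand-ins, not anything Nvidia has confirmed.

```cpp
// Minimal sketch of the per-eye rendering idea described above, not Nvidia's
// actual driver code. Every value and name here is hypothetical; a real
// renderer would issue Direct3D or OpenGL draw calls instead of printing.
#include <cstdio>

struct Vec3 { float x, y, z; };

struct Camera {
    Vec3 position;   // eye position in world space
    Vec3 target;     // point the camera looks at
};

// Stand-in for a real scene render from the given viewpoint.
void renderScene(const Camera& cam, const char* eye) {
    std::printf("Rendering %s-eye view from (%.3f, %.3f, %.3f)\n",
                eye, cam.position.x, cam.position.y, cam.position.z);
}

int main() {
    Camera center = { {0.0f, 1.7f, -5.0f}, {0.0f, 1.7f, 0.0f} };

    // The "depth" wheel, roughly: 0.0 = mono, 1.0 = maximum separation.
    // 0.065 m is a typical human interpupillary distance, used as the 100% point.
    float depthSetting  = 0.75f;
    float maxSeparation = 0.065f;
    float halfOffset    = 0.5f * depthSetting * maxSeparation;

    // Each displayed frame is really two renders: slide the camera left and
    // right along its horizontal axis, then the display and glasses alternate
    // between the two images at 120Hz, or 60Hz per eye.
    Camera leftEye  = center;
    Camera rightEye = center;
    leftEye.position.x  -= halfOffset;
    rightEye.position.x += halfOffset;

    renderScene(leftEye,  "left");
    renderScene(rightEye, "right");
    return 0;
}
```

The hard part, of course, is that Nvidia’s driver has to do this transparently by intercepting a game’s rendering calls, which is exactly where the compatibility headaches discussed later come from.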

This switch happens very rapidly—at up to 120 times per second with this new breed of LCD—and if all goes well, the resulting effect won’t make you blow chunks all over your living room floor. Instead, your eyes should struggle to focus for a second and then—BAM!—you have a splitting headache.

No, not really. At least, probably not. Even if you do, though, you should also have the distinct impression of depth inside of your computer monitor.

Interchangeable nosepieces offer a custom fit for your schnoz

The big, black, plastic glasses are key to making this magic happen, and Nvidia has obviously devoted considerable effort to their design. Despite their thick temples, which presumably house control electronics and a battery, the glasses are relatively lightweight, and the rubber nosepiece grips and rests comfortably on my outsized beak. Since a high proportion of 3D Vision’s potential customer base probably wears glasses, Nvidia has designed its magic shades to fit on your head while you’re already wearing regular glasses. To my great surprise, they seem to have achieved success in this department. At least, the 3D shades fit over and around my smallish, wire-rim glasses with no trouble whatsoever, without squishing them or compressing them into the sides of my head. If you wear gigantic horn rims, though, your mileage may vary.

As you may be able to tell from the pictures, the glasses manage to fit over a regular pair and stay on the face by, essentially, pinching your head. The green, plastic ends of the arms on the glasses are contoured and curve inward. On me, they grip right above the ears, compressing my enormous noggin tightly enough to keep the 3D specs secure. Problem is, I have chronic, low-level TMJ syndrome. Pinching my head above the ears will not endear you to me. I really wish the glasses had better padding and a larger surface area where they meet the head. Most folks probably won’t run into the same problem, but you’d definitely want to try these things on before buying them.

Nvidia claims the glasses are good for 40 hours of gaming on a single battery charge. I didn’t quite test those limits, but based on my casual use and neglect for charging, I’d say the glasses do have a considerable run time.

What it takes — continued

The other bit of hardware needed for GeForce 3D Vision is this IR transmitter, which helps synchronize and activate the glasses’ shuttering mechanism. The picture above shows the back side of the transmitter, which houses a rather important control: that wheel adjusts the amount of depth in 3D games. Turning it up provides more stereo separation and greater depth, while dialing it back does the opposite. Although 3D Vision also supports 3D movies and videos, the depth wheel only works in games, because only games generate images on the fly.

The Samsung SyncMaster 2233RZ display. (Source: Nvidia)

In order for any of this to work, you’ll need a display capable of 120Hz refresh rates. In our case, we tried out a pre-production version of a 120Hz LCD, the Samsung 2233RZ. This 22″ display has a native resolution of 1680×1050, with a 16:10 aspect ratio and a 5ms rated response time. The $399 MSRP is mighty pricey for a 22″ monitor. Samsung’s own SyncMaster 2233BW sells for about 220 bucks at online retailers. List and street prices don’t always match up, of course, but you’re still likely to be paying quite a premium for a 120Hz display.

Beyond its 120Hz capability, our pre-production sample of the SyncMaster 2233RZ isn’t anything special, either. To make an entirely unfair comparison, next to the Dell 3007WFP-HC we usually have on our GPU test bench, the 2233RZ has noticeably inferior color reproduction, with visible loss of contrast and slight color shift at acute viewing angles. Perhaps production models will be improved somewhat when they arrive—they’re slated for release in April—but I doubt Samsung will be able to achieve the color reproduction of the best LCDs in combination with 120Hz quickness.

ViewSonic has also announced a 120Hz 22″ LCD with a funny name, the FuHzion VX2265wm. Happily, although its specs are similar to the Samsung, the FuHzion has a bad spelling discount of 50 bucks, bringing its suggested retail price to $349. That’s it for LCDs at present, although there’s promise on the horizon. Nvidia informs us that LG has a 23″ 120Hz panel planned for later this year, and that display will have a 1920×1080 native resolution. That’s more my speed, considering that the 3D Vision kit itself costs $199. Seems to me like this is something of a premium product, and 1680×1050 isn’t really a premium resolution.

If you’re into rather larger displays, 3D Vision is also compatible with a host of 1080p DLP HDTVs from Mitsubishi. And, if you have real money lying around, it’ll also work with an HD 3D projector from a company called LightSpeed Design. I have a hunch that puppy will cost you more than, say, a nice Audi.

The final piece of the 3D Vision puzzle is a PC with a suitable GeForce graphics card. Nvidia has a list of compatible GPUs, most of which are at the higher end of the product spectrum. The oldest graphics card on the list is the GeForce 8800 GTX, and the cheapest is the GeForce 9600 GT. Anything newer or more powerful than those cards, including the GTX 200 series, ought to work. SLI is supported, as well, but only in two-way configurations.

The experience

So… does it work? In a word, yep. Nvidia’s decision to limit 3D Vision to displays with very high refresh rates makes this technology easily superior to most past attempts at stereoscopic 3D on the PC. There’s noticeably less flicker, and the illusion of depth works better as a result.

The biggest catch, at present, is spotty game compatibility. Most games aren’t designed with stereoscopic 3D in mind, and to cope with a variety of potential issues, Nvidia has created a host of game-specific profiles, much like the profiles it uses for SLI. 3D Vision profiles are a little more complicated, though. If SLI doesn’t work, the fall-back behavior is pretty simple: lower performance—frustrating, maybe, but not devastating. If 3D Vision has a compatibility problem, well, all manner of funky things might happen visually, many of which can ruin the sense of depth in the display or send your visual system into a tizzy. What’s more, 3D Vision’s incompatibilities tend to involve certain rendering techniques, so Nvidia will oftentimes ask you to disable some features of a game for the sake of compatibility. In fact, the game profiles will show compatibility information directly onscreen when a game starts up, like so:

Most of the games I tried (all of them relatively new releases) required a few adjustments, many of which meant compromising on visual fidelity somewhat. The most common trouble spot seems to be shadowing algorithms. The profiles frequently recommend dialing back the quality of shadowing in the game’s options, if not disabling shadows entirely.

I tried to get specifics out of Nvidia about what the issues are. Is it one approach, like stencil shadow volumes, that causes problems? But Nvidia has taken the “vague PR blob” approach to answering any and all questions about the technical specifics of GeForce 3D Vision. As a result, we have few tools for handicapping the prospects for future game compatibility with this technology. Instead, Nvidia offers only the reassurance that 3D Vision compatibility is a problem very much like SLI compatibility, and claims that it will take the same approach to surmounting any obstacles: a combination of collaboration with game developers and vigilant profile development. That sounds good, I suppose, but we’re left having to trust that Nvidia will be able to herd cats well enough to make this work.

These issues are relevant because… well, let me tell you about my experiences with using 3D Vision. I first tried Far Cry 2, because that’s a game I’ve spent quite a bit of time playing, so I’m familiar with its (very nice) visuals. When you run the game, the profile overlay pops up recommending that you adjust a couple of image quality options. I had trouble finding these options, until I realized that I had to switch the game into DirectX 9 mode rather than DX10. Once I’d done that, I was able to make the recommended tweaks, including reducing shadow quality.

I also had to turn off the game’s crosshair indicator. This is a common problem with 3D Vision and FPS games: a single crosshair in the middle of the screen will wreak havoc on the 3D mojo. Either the crosshair looks like it’s floating very close to your face, or you’ll see two of ’em. Both are disorienting. In its stead, Nvidia’s drivers have their own crosshair built in, easily enabled with a key combo. The drivers appear to auto-sense the correct depth for the crosshair in response to what’s happening in the game, and it serves its function pretty well, for the most part.

There are spots where this band-aid approach causes problems, though. For instance, some games use the crosshair graphic as an indicator, making it change color or shape depending on what’s happening. The Nvidia crosshair is just an add-on and doesn’t replicate that behavior, so you may lose out by using it. I also had some moments when I was zoomed in, looking at my target either through iron sights or a scope, when the Nvidia crosshair just didn’t seem right. In the grand scheme, those issues aren’t very common, however, and the Nvidia crosshair is generally a decent substitute.
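
Nvidia hasn’t said how its driver picks the crosshair depth, so the sketch below is pure speculation about the mechanism rather than a description of it. One plausible approach is to read the scene depth under the center of the screen and then shift the crosshair in each eye’s image by the parallax corresponding to that depth; the formula and every number here are illustrative assumptions.

```cpp
// Speculative illustration of a depth-aware crosshair, not Nvidia's method.
// The parallax formula below (separation * (1 - convergence / depth)) is a
// common textbook formulation for screen-space parallax, and all of the
// numbers are made up for the example.
#include <cstdio>

// Horizontal screen-space shift, in pixels, of a point at the given depth.
// Points at the convergence distance land on the screen plane (zero shift);
// farther points separate, closer points cross over (negative shift).
float parallaxPixels(float depth, float convergence, float separationPixels) {
    return separationPixels * (1.0f - convergence / depth);
}

int main() {
    float depthUnderCrosshair = 40.0f;  // hypothetical distance to the aimed-at surface
    float convergence         = 10.0f;  // hypothetical screen-plane distance
    float separationPixels    = 12.0f;  // hypothetical full eye separation on this display

    float shift = parallaxPixels(depthUnderCrosshair, convergence, separationPixels);

    // Draw the crosshair shifted half left in the left-eye image and half right
    // in the right-eye image, so it appears to sit on the surface being aimed at.
    std::printf("Left-eye crosshair x offset:  %+.1f px\n", -0.5f * shift);
    std::printf("Right-eye crosshair x offset: %+.1f px\n", +0.5f * shift);
    return 0;
}
```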

With those changes in place, Far Cry 2 looked pretty good. The illusion of depth was real and obvious enough, and like I said, it’s better than past attempts at stereoscopic 3D, generally speaking.

One of the first things I noticed is that the amount of depth shown, by default, is very much on the low side, something like 16% of the total possible. At this default setting, one can perceive differences in the third dimension, but objects themselves look flat—as if one were seeing a cardboard cut-out of a gun placed clearly in front of a cardboard cut-out of a bad guy, with a cardboard cut-out of a tree placed well behind him. I understand the reasons Nvidia chose to give the default setting a relatively small amount of stereo separation; it’s easier on the eyes and presents fewer problems when the effect isn’t working perfectly. With lots of depth, your eyes can take a few seconds to focus when you first put on the glasses—like staring at one of those funky 3D poster things that were popular in the ’90s. Still, the more I played with 3D Vision and the better the game’s compatibility was, the more separation I found I wanted. At 70-80% of the total possible depth, the cardboard cutout problem is largely banished, and most objects in a game take on a perceptible form. In some cases, I found myself reaching 100% of the available separation and wanting even more. That’s not really practical most of the time, though.

Although 3D Vision added tangible depth to Far Cry 2, the experience wasn’t perfect. Certain things, like on-screen indicators (the hand icon that appears before you open a door, for example), particle-based smoke effects, and water reflections just didn’t work right. They’re not aware of 3D Vision and don’t have proper stereo separation, which confused my eyes. I found that in frantic action, in the midst of a firefight, 3D Vision became disorienting, as my visual system worked overtime trying to process what it was seeing. I’d “lose focus” on the stereo image when an icon popped up and struggle momentarily to regain it. More than once, my character died in a routine skirmish while I was disoriented. This is a tough standard to meet, particularly when you’re dealing with games not expressly designed with 3D Vision compatibility in mind, but anything less than perfection can spoil the added value of a 3D display scheme like this one—especially in a fast action game.

Undaunted, I moved on to the game I most looked forward to trying in 3D: Race Driver GRID, a visually stunning title that just cries out for an added dimension. I figured I’d spend hours racing around the track in 3D with this game. I was shocked, though, when I saw Nvidia’s compatibility recommendations: you’re supposed to disable motion blur, which GRID uses to good effect, and to turn off shadowing entirely. On the face of it, losing shadowing seemed like a bad idea. In reality, it was even worse than I’d thought. Without shadows under the race cars—particularly your own—the game loses its sense of depth, even with a 3D display. Deeply disappointing.

As I was testing GRID, my kids walked into the room, and I decided to have them try out 3D Vision. They were puzzled by the muddy, doubled images they saw onscreen without the glasses. When I asked my nine-year-old son what he saw when he put on the shades, he happily reported that he could see things correctly again, as if he were wearing some kind of secret decoder glasses. Beyond that, I couldn’t quite get him to articulate that he saw depth in the display. When I asked him what he saw that was different, he said it looked like the cars were floating above the track without any wheels—which is precisely how GRID looks without any shadows beneath the cars. Both my seven-year-old daughter and my son took turns trying on the glasses for a while, but neither of them seemed especially wowed by the effect.

I moved on to other recent games with varying degrees of success. Call of Duty: World at War worked about as well as Far Cry 2, with few visual compromises required except for the Nvidia crosshair, but less-than-perfect results. Crysis Warhead looked quite good at first, until I got into the heat of battle, where this game’s intensive particle effects were completely broken. Plumes of smoke seemed to float way out in front of the screen, far from the objects burning. Water reflections didn’t work, sending my eyes and brain into a twisted tug-of-war, for which there could be no winner. And, well, check out the taillights on this jeep. I snapped this picture of the display with a camera, but you can see the problem without the aid of the glasses:

Two jeeps, one set of taillights

The jeep’s taillights are floating out in space, unconnected to either the right- or left-eye image of the jeep itself. With the glasses on, what you see is two sets of taillights, one off to the right and another to the left.

Mirror’s Edge worked better, and I was able to combine 3D Vision with PhysX effects for a perfect storm of Nvidia marketing hype. Moving quickly through this game’s virtual obstacle courses in 3D is a real treat. The high-contrast color palette of this game brought out another quirk, though: ghosting. When looking at a dark skyscraper juxtaposed against the bright sky, I could see a second, faint copy of the building, somewhat offset. Obviously, the left- and right-eye images were bleeding together, as if the glasses’ shuttering wasn’t quite up to blocking out everything. Once I noticed the ghosting, I later spotted it in other games, but nowhere was it quite as obvious, or distracting, as in Mirror’s Edge.

In a little ray of hope, Fallout 3 was almost perfect, with all quality options cranked, marred only by the need for the Nvidia crosshair and the unfortunate fact that the sky textures were often scrambled somehow. I could see the potential for 3D Vision in this game, but I was beginning to get the impression that this tech would never fully realize its potential.

And then I tried Left 4 Dead.

Valve has obviously been working with Nvidia. For one thing, L4D has its own, depth-aware crosshair that behaves perfectly with 3D Vision, adjusting to depth more quickly than the one in Nvidia’s drivers.

Beyond that, though, here’s the thing: everything works just as it should. Nothing shatters the illusion of depth, even in the smallest detail. It just works. Like gangbusters. Well enough, in fact, to change my outlook about 3D Vision. Perhaps some of it is the fact that this game uses an older 3D engine and a darker, more subdued color palette, but killin’ zombies has never been more fun. I could actually see the possibility of people, quite willingly, wearing funny glasses in order to have this experience when gaming.

I made this discovery about Left 4 Dead‘s 3D excellence shortly before my buddy Andy Brown, of old-school TR fame, arrived in Damage Labs to serve as one of our 3D Vision test subjects. I resisted the urge to share my thoughts with Andy initially. Instead, I played things close to the vest in the hopes of getting an honest reaction out of him that was totally his own. And I started him out with Left 4 Dead, to see how 3D Vision in its purest form would appeal to him. I should say here that Andy is a smart, open-minded guy, but he’s a prototypical hard-core gamer who doesn’t tolerate gimmicks that get in the way of gameplay. I suspected the sheer novelty of 3D glasses wouldn’t count for much in his book.

Somewhat to my surprise, Andy’s initial reaction was pretty darned positive, thanks to the magic of Left 4 Dead. He didn’t laugh, didn’t call the whole scheme cheesy, and seemed to see the appeal of it quite clearly. Andy spent a fair amount of time running around in L4D with the glasses on and came away fairly impressed. His positive reaction to the whole scheme faded as I walked him through other games, though, winding up in bemusement over the state of particle effects and water reflections in Crysis Warhead.

My next victim was my buddy Mike, who joined Andy and me for a little Left 4 Dead co-op action. We took other machines and let Mike sit at the 3D Vision system. Mike was immediately impressed with the 3D effect, and he wound up playing through the entire first co-op campaign in Left 4 Dead while wearing the glasses. Every so often, between levels or during breaks, he’d stop and say, “You know, it really does give you a sense of depth,” or something like that, in slight wonderment. Eventually, he gave that up and just started hitting on Zoey, who we all agreed looks unexpectedly hot in true 3D. Although that might have been the Leapin’ Leprechaun speaking.

After more than, heck, at least 90 minutes of solid gaming with the 3D Vision glasses on (and a few pints of the Leprechaun), Mike didn’t have any complaints about headaches, irritated eyes, or anything of the sort. I expect that next time we all get together for some L4D co-op, he’ll be asking for the 3D Vision system again.

About that performance hit

If you follow these things at all, you probably know that it doesn’t take a terribly expensive video card to drive a 1680×1050 display in the latest games. Of course, stereoscopic 3D will necessarily involve some sort of performance hit, because you’ve basically got to render each frame twice, once for each eye, in order to achieve a given frame rate. Handicapping the magnitude of this performance hit is difficult. Nvidia claims it does some nifty things in its drivers, including “smart culling,” in order to keep performance up, but it’s very light on the details. Going on what little information I had, I decided to play it safe, or so I thought, and test 3D Vision with a GeForce GTX 260 graphics card (the version with 216 SPs). The GTX 260 is pretty fast, after all, and not a bad value at present.

Well, that didn’t work out too well. Some games felt sluggish with stereoscopic 3D enabled, even at 1680×1050, and even with features like high-quality shadows sometimes disabled. I really didn’t want slow frame rates to spoil the effect for my test subjects, so I tried swapping in a GeForce GTX 285 instead. When that wasn’t enough, I just went whole hog and plugged in a second GTX 285. That did the trick, but it ought to have—we’re talking about a pair of $350 graphics cards.
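
The rough arithmetic behind that hit is easy enough to sketch, even without knowing how much Nvidia’s “smart culling” claws back. Here’s a purely illustrative example; the 12 ms figure is invented, not measured.

```cpp
// Back-of-the-envelope math only. The 12 ms render time is invented for
// illustration, and it ignores whatever per-frame work the driver can share
// between the two eyes.
#include <cstdio>

int main() {
    double monoFrameMs   = 12.0;               // hypothetical single-view render time
    double stereoFrameMs = 2.0 * monoFrameMs;  // two views per displayed stereo pair

    std::printf("Mono rendering:   %.0f FPS\n", 1000.0 / monoFrameMs);           // ~83 FPS
    std::printf("Stereo rendering: %.0f FPS per eye\n", 1000.0 / stereoFrameMs); // ~42 FPS

    // A 120Hz panel can show each eye a new image at most 60 times per second,
    // so the whole stereo pair needs to finish in about 16.7 ms to sustain that.
    std::printf("Budget for 60 FPS per eye: %.1f ms per pair\n", 1000.0 / 60.0);
    return 0;
}
```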

Since this is a hardware review site, I’m required by OSHA and the FDA to supply you with some benchmark numbers to prove my point. (Well, OK, the numbers don’t have to prove a point, but they’re required anyhow.) I tested on the same basic system config documented here, using a single GeForce GTX 285 graphics card, both with and without stereoscopic 3D enabled. I then tested with two GTX 285s in SLI and 3D Vision enabled, as well. Here’s what I found:

As you can see, the performance hit is sizeable—maybe even bigger than the hit Michael Phelps took off of that bong. In Left 4 Dead, the extra work required for stereoscopic 3D doesn’t present much of a problem for the GTX 285; it still averages nearly 60 FPS. The performance drag is considerable, though: one GTX 285 without 3D Vision is faster than two GTX 285s with it.

Fallout 3 is similar in this respect. A single GTX 285 without stereoscopy is quite a bit faster than two GTX 285 cards with it. Even more unfortunately, SLI is no help at all in Crysis Warhead. Once you turn on 3D Vision, frame rates take a nosedive, regardless.

Of course, the performance hit will vary from one game to the next, and Nvidia claims it’s working on refining its 3D Vision profiles for improved performance as well as better compatibility. Still, right now, the stakes are pretty easy to see: if you want stereoscopic 3D, you’re going to have to fork out for a pretty beefy graphics subsystem, as well. This isn’t an issue one can ignore, because smooth frame rates are an incredibly vital component of perceived image quality in a game. As fundamental as depth is to our visual systems, the illusion of motion is even more crucial.

Conclusions

With all of the qualifications, caveats, gotchas, and frustrations I’ve expressed over GeForce 3D Vision, you probably have a good sense already that this technology just isn’t ready for prime time yet. Few games work well enough to make it worth buying, and from a purely value-oriented standpoint, the math is brutal. Not only do you have to buy the glasses for $199 and a display for $349 or more, but you’ll also need quite a bit more GPU power in order to keep 3D Vision performance up to snuff. Then, whenever you’re not playing games, you’ll be stuck with a relatively low-resolution, low-quality monitor. For somewhere in the same basic price neighborhood, you could instead pick up a 27″ or 30″ LCD, with an IPS or VA panel and much better color reproduction, and get higher frame rates with a cheaper graphics card. That’s easily a better deal than a 3D Vision setup, no question about it.

Yet I can’t help but feel sympathetic to what Nvidia is doing here. When I first saw this technology working properly at its full potential, I was struck by the fact that the GPU is already doing the math necessary to create truly 3D virtual worlds. Yeah, sure, we all know that, I suppose. But seeing it in action, in the third dimension, really drives the point home. The fact that a GPU maker would want to foster the development of 3D display technology makes perfect sense. The visual computing ecosystem would benefit greatly if this sort of thing became universally available and broadly compatible with existing applications.

So I hope Nvidia sticks with this. I’m still not sure whether or not they could get a large segment of the PC gamer population to embrace the prospect of wearing enormous plastic glasses when they play games, even if it worked perfectly. But if that’s ever going to happen, Nvidia will have to persist in working with game developers on 3D Vision compatibility for the next year or so, at least. With luck, perhaps we can revisit this technology, say, next Christmas and find a host of new games that offer as compelling an experience as Left 4 Dead does now. In the interim, we’ll have to settle for 3D Zoey, which is quite a bit better than nothing.

Comments closed
    • alphaGulp
    • 10 years ago

    The games I’m most curious to see in 3D are RTS games: having something like the Total War series with 3D terrain topography, or Supreme Commander with the planes and the huge experimental units towering above the rest…

    RPG could be cool too. It would be neat if Diablo-3(D) was coded to take advantage of this tech… I wonder how Dragon Age & such come out, as well?

    Muahaha! We have some good things to look forward to 🙂

    • roop452
    • 10 years ago

    … … … …

    • roop452
    • 10 years ago

    Ashu Rege is coming this February to India’s first and independent annual summit for the game development ecosystem – India Game Developer Summit (http://www.gamedevelopersummit.com/) to talk about the novel uses of GPU computing for solving a variety of problems in game computing including game physics, artificial intelligence, animation, post-processing effects and others.

    • kvndoom
    • 11 years ago

    Funny you mention TMJ… one of my ex’s from several years ago had that, and I remember the first time she brought it up. Me, being smartass on autopilot, once again let the mouth get in front of the brain. “What’s that, Too Much Jawbone?”

    It wound up being a bad night.

    • Krogoth
    • 11 years ago

    Nintendo’s Virtual Boy already tried this gimmick and it had failed.

    Nvidia is trying to find another way to justify the cost of SLI solutions and high-end GPUs. While the mid-range stuff is more than sufficient if you do not need to feed a 24″ or larger monitor at native resolution.

      • Entroper
      • 11 years ago

      Nintendo’s Virtual Boy didn’t try *[

        • Meadows
        • 11 years ago

        On the other hand the principles Virtual Boy used were, and are, still best for virtual reality in the truest sense. It’s just that it looked like a product from 10 years before it actually came out. In the mid 90’s, it would’ve been a lot more successful had it come with a light design, a head strap, and twin colour displays (then again, colour LED displays would’ve made it expensive in that time).

        • Krogoth
        • 11 years ago

        Did you read the article?

        Stereoscopic 3D inflicts a significant performance hit on modern games. The Source engine isn’t really that demanding on GPUs these days. It is more CPU-bound, especially when there are tons of actors on the screen.

          • Meadows
          • 11 years ago

          I’m inclined to disagree, with the latest real-time shadowing additions running under 4x or better antialias it can easily exhaust many low- and mid-range videocards at proper resolutions, 1680×1050 or above. And a lot of people have midrange cards, although they might not use antialias at all, based on the way I know the average player.

          • Entroper
          • 11 years ago

          Of course there’s a performance hit, it has to render the scene twice. I never claimed otherwise. Those who want to play Crysis with every switch on and every knob turned up to 11 will have to pay for the best hardware, regardless. At the same time, us mere mortals can still enjoy stereoscopic 3D on very modest hardware setups.

      • derFunkenstein
      • 11 years ago

      Open your visual cortex, man! This is The Wave of the Future!

        • Meadows
        • 11 years ago

        Lol @ strange language reference.

    • derFunkenstein
    • 11 years ago

    It looks like you can see…BOTH points of view…

    YAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHYYYYY!

    /CSIMiami

    • Grigory
    • 11 years ago

    So, would any <=8ms display do?

      • Meadows
      • 11 years ago

      In theory, yes, but in practice, 8 ms monitors will give you ghosting like there’s no tomorrow. Your best bet is to look at models from 2 to 5, and I believe the reviewed monitor was a 5’er.

        • Grigory
        • 11 years ago

        Thank you! 🙂

      • Waco
      • 11 years ago

      You’ll end up with extreme flicker related to the 60 Hz refresh rate and you’ll be capped at 30 FPS as well.

    • CapnBiggles
    • 11 years ago

    I nearly shot milk out of my nose when I saw the performance chart hit, and I wasn’t even drinking milk.

    • pepys
    • 11 years ago

    What fresh hell is this?

    • Pachyuromys
    • 11 years ago

    I’m waiting for the screen technology that can produce such stereoscopic depths of field on its own.

    Oh wait, it already exists.
    http://en.wikipedia.org/wiki/Window

    • Entroper
    • 11 years ago

    This looks like a better technology than the old ELSA shutter glasses from 2000, but IMO, give me the goggles with an independent LCD display for each eye. The old school goggles of the 80s and early 90s worked, they just ran at a low resolution and a low framerate, with flat-shaded polygons on an expensive SGI machine. Do this on a modern graphics card with much more realistic graphics, at 1024×768 in each eye at 60 fps. Sony, this is a console I would line up outside of a store in the cold to pay $600 for on launch day.

    • mako
    • 11 years ago

    Great writing, as usual.

    • Bauxite
    • 11 years ago

    Gimmick resurfacing for the Nth time.

    Will flop like it did many times before, call me when there are real 3D displays at conventional prices…maybe I’ll be on social security by then. (or not, since it’ll go bankrupt first but I digress!)

      • lycium
      • 11 years ago

      lol, aren’t we a ray of sunshine 😉

      • Meadows
      • 11 years ago

      That’s cheap but wrong and incorrect in more ways than one.

      • Entroper
      • 11 years ago

      I’ve already done this for Portal, with good results, but that was when I was still on a 32-bit OS. Finally nVidia has 64-bit Vista drivers! (As of Monday.) Thanks for the link, I had all but given up on this ever happening.

      BTW: If you have a pair of 3D glasses that are different colors from the Super Bowl glasses (mine are red/cyan for example), it should still work. You just have to configure the driver for your colors.

    • donkeycrock
    • 11 years ago

    when we play L4D its always a competition NOT to be zoey.. because if you are, you are your girlfriend/wife… And heaven forbid you are not that person, and then you are perceived to be that person the rest of the night by girlfriend/wife’s first name. its a hoot. and it sucks to be zoey.

      • 5150
      • 11 years ago

      I am ALWAYS Zoey. Mainly because a bot Zoey sucks with the hunting rifle she always picks up.

      It also makes me feel pretty.

    • Steel
    • 11 years ago

    Heh. Takes me back to my Asus GeForce2 card with the LCD shutter glasses that plugged into the back of the card. It’s interesting that in nearly a decade they haven’t fixed some of the problems I had back then, mainly the floaty lights and “reflections”.

      • Freon
      • 11 years ago

      It’s the display limit. You need a display that has zero lag, or where all the pixels can change state instantly.

      This will probably never go away with shutter glasses. I agree it is still somewhat broken technology. Polarized light makes so much more sense, it just takes more exotic display tech. To be fair, price is comparable right now, but I imagine the 120hz LCDs should come down way faster than polarized displays.

      • fantastic
      • 11 years ago

      I just threw that thing away last year. It was sitting on my shelf collecting dust, but I kept it because it had the glasses. I never had a display to use it with and it was AGP, so into the can it went. I couldn’t give it away.

      • Aphasia
      • 11 years ago

      It goes back way earlier. Its at least the second time they dig these out of the ground to look at. My Asus TNT2 Ultra also used those boxy shutter glasses. Worked quite alright for what it was. But they should really have gotten everything down in 10 years time.

    • DrDillyBar
    • 11 years ago

    Friends, booze and 3D glasses. Good idea there.

      • UberGerbil
      • 11 years ago

      Yeah, mix alcohol with vertigo/nausea-inducing technology, and then have enough friends around so you can’t vomit in any direction without hitting one.

    • Usacomp2k3
    • 11 years ago

    A couple of questions:
    1/ Any word of this being used by the film-industry? With all the movies coming out in 3d, I could see something like the PS3 being able to use this via USB (it has an nVidia GPU, right?)

    2/ Multiple glasses? Since it’s a simple IR port, I can’t think of any reason why you wouldn’t be able to have multiple glasses for viewing. ie, if you were playing a console game, then you could have multiple people on the couch watching. Again, PS3 games or movies could be fun, even if they’re simple. I could see them being reasonable if they were priced at $50 for additional glasses, or roughly the price of a controller.

    3/ Has nVidia mentioned how many games they’re going to add to the catalog? I can see this being very nice for slightly older games because they wouldn’t be quite as processing intensive. It would also give good reason to go back and play some older games, like Far Cry. I could almost see Steam incorporating this someway.

      • Hattig
      • 11 years ago

      1/ Sony demonstrated Wipeout HD using 3D glasses at CES I believe. They said that they needed a single standard for 3D television however before it was worth going forward with.

      2/ I can’t see this being a problem, maybe a bit odd if you aren’t sitting straight on to the display

      3/ I’m sure that both Sony and NVIDIA will be working on this technology for 2012 mass-adoption. I’m sure it will be patented up the wazzoo as well, which could mean it’s a desktop PC and PS4 only technology.

        • Usacomp2k3
        • 11 years ago

        W.R.T. #2, it wouldn’t be any different than watching an IMAX or other 3d video in a theater. You’re certainly not head-on in that kind of capacity.

      • Freon
      • 11 years ago

      Disney uses polarized light in their 3D theater in Epcot, Florida. It is really impressive from what I remember (it’s been years…). I do not expect theaters would ever use shutter glasses due to cost of the glasses. You don’t want to hand even a $25 pair of glasses to mobs of people in theaters.

      For a theater, if you have two projectors (one for each eye) all you have to do is line them up (I suppose they don’t even have to be exactly on) on the screen and put a polarizer filter in front of each projector, one 90 degrees out of phase of the other. Play the left eye reel through projector A, right eye reel through projector B. Then the viewers wear cheap plastic polarizer filter glasses. No flicker, no need for high refresh, no headaches. It’s incredible, and that’s why I’m WAY more excited about something like the iZ3D system than anything with shutter glasses. You still pay a brightness penalty, but with two projectors it kinda takes care of that.

      I really think polarized light is the Right Thing (TM). But the displays are more specialized. It takes way more than a faster refresh rate with an otherwise “standard” display to make it work in the home. Dual front projection is not practical. But iZ3D figured out how to make a reasonably standard form-factor LCD display to do this.

      • PRIME1
      • 11 years ago
      • zqw
      • 11 years ago

      3D DLP (rear proj) TVs have been out for a while. They use shutter glasses, and the input signal is 60hz “interlaced” but like a grid. The output is 120hz at half vert and horiz resolution.

      Unfortunately, rear proj DLP is almost gone – replaced by LCD which doesn’t have a standard solution, and doesn’t have anywhere near the per-pixel response time.

      • Damage
      • 11 years ago

      1) As I mentioned in the review, 3D movies are possible with this tech. Nvidia supplied us with a couple of demo videos, but they’re still early stage stuff, requiring a custom player and introducing some format headaches. The trouble with them, in my view, is the fixed depth, which led to a lot of the cardboard cutout effect given how they’d chosen to do the separation in the sample videos.

      The PS3 has an older G7x-derived GPU and wouldn’t be compatible with this specific technology for games. Although surely it could be adapted. The PS3 is too slow, more than likely, for all games to work. But it could surely work for movies.

      2) Yep, with the IR scheme, multiple glasses per system are possible, and Nvidia’s docs even talk about provisions for dealing with a LAN party type environment with lots of 3D Vision setups in close proximity.

      3) I probably should have made this clearer, but they do have profiles for quite a few existing games now. There was a list linked in the reviewer’s guide, but the link is dead. The trouble is the imperfect compatibility with existing games, which I think is a deal killer.

      As some folks have mentioned, with a patch, WoW supports these glasses natively, and I’d assume it’ll work well as a result. Nvidia has vague but grandiose promises about support for future games, as well. Like I said in the article, I think that’s the real key to success here. We’ll have to see how things develop in the coming months.

        • indeego
        • 11 years ago

        “The trouble is the imperfect compatibility with existing games, which I think is a deal killer.” Reminds me of SLI/Crossfire.

        • Usacomp2k3
        • 11 years ago

        Cool, thanks for the replies.

    • HurgyMcGurgyGurg
    • 11 years ago

    So, it’s pretty much still an expensive pseudo-gimmick that’s better than it was last time around but still not “There”.

    Basically, wait 3-5 years until support is built into pretty much every new title, 120 Hz monitors are standard, and the current round of “Console-itis” with graphics requirements becomes even more obvious.

    Now how about AMD/TI get their version of this out and speed things up with some good old competition.

      • Bauxite
      • 11 years ago

      This is one those perpetual “3-5 years” things, just like it was in the 90s.

        • HurgyMcGurgyGurg
        • 11 years ago

        Except it actually has a chance to fulfill the 3-5 year requirement, instead of past versions, at least according to the review.

          • Freon
          • 11 years ago

          These shutter glasses seem to be no innovation over what was available 10 years ago. It’s just that this newer product expects to use 120hz LCDs instead of 120hz CRTs. And it is 6x the cost due to the expensive LCD, and they doubled the price of the glasses (I paid $80 for my Elsa Revelators).

          I think there is a bit of the ole’ TR turd polish going on calling this anything more than what we had 10 years ago. Requiring 120hz outright doesn’t really make it better. Using an LCD doesn’t seem to have solved the ghosting issues since LCDs also have lag time, even if it doesn’t have the exact same characteristics.

          There is nothing to do with shutter glasses to make them better other than improve the display you use in conjunction with them. Same issues as before.

            • JustAnEngineer
            • 11 years ago

            Haitex Resources X-specs 3D glasses were available in 1988. That was 21 years ago, not 10.

    • CasbahBoy
    • 11 years ago

    Yeah, Zoey is pretty hot.

    • boing
    • 11 years ago

    Didn’t Nvidia offer very similar glasses back in 1999 together with the original Geforce?

    • gtoulouzas
    • 11 years ago

    There are simply too many problems, far too few specifically tailored titles, and too high a performance hit to justify the asking price of 200 dollars.

    nVidia might need to bite the bullet with this one and subsidize sales of the doohickey, or it will not take off at all. After all, the thing forces you to buy faster gpu cards, and, specifically, *nvidia* cards at that. That has to be worth the financial hit incurred from selling at a low price.

    100 dollars is barely acceptable, for what currently amounts to an interesting novelty and has a whole host of additional requirements with it (100hz displays, fast nvidia gpus). 200 dollars are way off the mark.

    • Meadows
    • 11 years ago

    Horrible. Looking at the first screenshot of Crysis, this thing gives all-wrong “depth”, which creates the cardboard effect you mentioned. It shouldn’t display cardboard cutouts, at any level of the knob.

    Where’s the perspective? Where’s the _[

      • Freon
      • 11 years ago

      “Horrible. Looking at the first screenshot of Crysis, this thing gives all-wrong “depth”, which creates the cardboard effect you mentioned. It shouldn’t display cardboard cutouts, at any level of the knob.”

      Well, it is rendering the scene twice, from two different points (eyeballs) in space. There’s no hokie BS going on with rendering objects onto 2D then compositing them in a 3D space after the fact. It’s not that the objects don’t actually have depth effect. There is no “copying” or anything AFAIK, it really is rendering the entire scene twice. It is “real” 3D, not some sort of normal map or bump map type hack to make things appear 3D. The scene really is 3D and rendered twice.

      I think the effect reported here is purely because the depth slider was set so shallow that 3D objects did not have enough perceivable depth. The effect would exist in “real life” if your pupils were only 1/2″ apart as well. You’d have a hard time seeing an apple as a round object, but you could tell the tree 10 feet behind it was further away. As pointed out in the article it goes away when depth settings are set close to 100%. It’s like moving your eyeballs farther apart or closer. There is no change in the way it is rendered, just the interocular distance and possibly angle.

      I think I disagree about your view on “point of focus”. Optimally you want it set so objects near infinity are displayed on the screen so the left and right eye images are close to your interocular distance (the distance between the center of your pupils). Then it really will be perceived as near infinite depth because your eyes will both be aimed almost straight ahead, perpendicular to your face. I imagine the “100%” setting and using a standard DPI monitor should do this. You could measure it with a ruler and compare to your eyes. Objects that are supposed to be the same distance away as your monitor would be displayed the same (I believe what you are considering “point of focus”) . Objects closer than your monitor would reverse, so you have to cross eyes to focus on it.

      The crosshair is still an issue. In real life, you cannot properly aim a weapon with both eyes at the same time, nor floating outside your body in 3rd person. You have to stare down the sights or scope with one eye (preferably your “dominant eye”). It would be awesome to have a game where it did this correctly, lining up the iron sights or scope with the eye of your choice, and either blacking out the other eye (for a scope) or keeping a real view of the weapon misaligned to the second eye.

        • Meadows
        • 11 years ago

        You’re right, I got really mixed up with focus there, but all I wanted to tell was that from the miniscule screenshots nVidia’s mojo seemed to just copy scenes or do only minimal perspective, as opposed to something more accurate.

    • Sevrast
    • 11 years ago

    Why is Valve always the only company to do something and do it right?

      • The Dark One
      • 11 years ago

      I think they just have an enthusiasm for new ways their customers can interact with their games. From what I read, they did a pretty good job with that novint falcon thing.

      I remember reading about a problem IMAX films would have when filming in 3D- it’s hard to get two lenses close enough to each other to properly mimic the amount of parallax a standard human type gets with their eyes.

      Does the system have any way of showing, besides some arbitrary percentage, how far apart each ‘camera’ is from the other in-game?

      • Silus
      • 11 years ago

      You obviously haven’t used Steam and/or played HL2 Episodes…

        • willyolio
        • 11 years ago

        obviously you haven’t, either.

      • eitje
      • 11 years ago

      They have the money that gives them the time to try new and different things.

      • Grigory
      • 11 years ago

      Because they’re awesome? Besides, there is always room for improvements: They could hurry the eff up with their episodes. 🙂

    • Nitrodist
    • 11 years ago

    “As you can see, the performance hit is sizeable—maybe even bigger than the hit Michael Phelps took off of that bong.”

    Aaaaaaaaaaahhahahaa.

      • DancinJack
      • 11 years ago

      I was just about to post nearly the same thing. Awesome.

      • GreatGooglyMoogly
      • 11 years ago

      Again with the Robert “Apache” Howarth witticisms.

        • eitje
        • 11 years ago

        You obviously haven’t been here very long.

          • GreatGooglyMoogly
          • 11 years ago

          Try 8-9 years.

      • SpotTheCat
      • 11 years ago

      😆 That’s pretty good.

    • FireGryphon
    • 11 years ago

    I’d like to see how these things perform with MAME games, or even old school games like DOOM, Wolfenstein 3D, or Jazz Jackrabbit, if that’s even possible. It might require a new monitor, but the lower GPU requirements would effectively cut the price of admission in half.

    I’m also a little bit curious about the technology behind how the goggles work, both from a curious technical standpoint, and what kind of EMI they produce.

    Perhaps by next holiday season, there’ll be an adaptation of this technology for a portable gaming system. Imagine if the next Nintendo, Sony, or Apple portable system is built with a nice GPU and fast screen. All you’d need is a nifty pair of glasses, and you can sit on the train and game in 3D. You know it’s coming, admit it! 😉

      • JustAnEngineer
      • 11 years ago

      They appear to work exactly as Haitex Resources X-specs 3D did: by alternately blanking the LCD panel in one lens then the other to synchronize with screen refreshes with alternating points of view.

        • FireGryphon
        • 11 years ago

        How do the glasses use polarization to “blank” alternate lenses?

          • Freon
          • 11 years ago

          Ghosting is likely always going to be an issue. There are only two “fixes.”

          One is a display with 0 lag, 0 latency. The display would have to hold a steady image for 8.3ms (1s/120), then instantly switching to another image for 8.3ms. Any lag will cause some holdover in the image. This was even an issue with CRTs with the Revelator glasses, but limited to bright white lights against a dark backdrop (since the phosphors when excited to “white” do not diminish fast enough). It seems latency is a multi-dimensional issue beyond just brightness with LCDs.

          The other fix would be to have a crossover period in the shutter glasses where BOTH eyes are blocked to give the LCD time to switch images. This reduces effective or perceived brightness even more.

          Also I agree with the conclusion. I really hope Nvidia (and AMD) stick with 3D this time and try to push it for a few years. Last time they tried the industry was moving from relatively fast CRTs to LCDs, which completely broke shutterglass 3D due to the lag of early LCDs. Now LCDs are faster, and more importantly some players are working on other tech which seems more promising (namely, polarized light).

      • MrJP
      • 11 years ago

      Really old stuff like Doom would look odd because the characters were 2D sprites scaled for depth, rather than proper 3D objects. Hence the cardboard cut-out effect would be even more distracting, assuming the game engine would even support this technology (since back then it was a pure software renderer).

      You’d probably be looking at the initial batch of OpenGL and Direct3D games as being the earliest that would be likely to work with this. That said, you probably don’t need to go that far back since current mid-range cards can render the likes of HL2 at over 100 fps at 1680×1050 if you’re prepared to go without AA. Even allowing for the 3D performance hit, this would still be playable. But similar to Scott’s conclusion, why spend extra money to play older games at lower resolutions?

        • cygnus1
        • 11 years ago

        You ever seen the back the back of a $20 bill? … On 3d? Oh, there’s some crazy shit, man. There’s a dude in the bushes. Has he got a gun? I dunno! RED TEAM GO, RED TEAM GO.

        • FireGryphon
        • 11 years ago

        Don’t forget that DOOM was ported to OpenGL a long time ago. Also, 2D only games, like side scrollers, would probably look really cool with the “3D cutout” look.

      • eitje
      • 11 years ago

      i’m behind you on the old-school games idea. Man, it would be AWESOME to play SNES games in 3D!

      • d0g_p00p
      • 11 years ago

      From my understanding they will not work. When I asked nVidia about this they told me that the glasses take the 3D info already programmed into the game. If the game has no 3D data then there is no way to make 2D 3D. At least that’s how it was explained to me.

      Also, could we get an update to this when Blizzard updates WoW with the upcoming patch for these glasses?

        • Freon
        • 11 years ago

        3D stereoscopic technology is just a scene rendered twice, at a slightly different camera angle, that’s all. It’s like switching camera modes, except you are constantly rendering both angles, one for each eye. It is then multiplexed (if you will) to a human using an alternating display pattern.

        The core idea to render a scene twice is nothing amazingly complex. The driver grabs the camera position (inherent in all 3D applications) and adds [+x,+y,+z] to it and renders the scene again to a separate frame buffer.

        So you have to have an x,y,z world to do this. A 2D game would have to probably be rewritten to draw the 2D sprites onto polygons in 3D space. I think some 3D consoles do this for 2D games because they don’t actually have the ability to write sprites to the frame buffer directly.

      • zqw
      • 11 years ago

      It shouldn’t work at all. The driver intercepts “conventional” Direct3D calls, and fails on certain shaders and 2D/post-processing like glow, motion blur, stencil, HUD, etc.

      Maybe TWIMTBP will help for upcoming titles.

      And, FWIW OpenGL support is due this spring.

      • no51
      • 11 years ago

      Virtual Boy says hi.

    • UberGerbil
    • 11 years ago

    They really need to build a Bluetooth headset into the thing so the Borgdork look can be complete.

      • The Stench
      • 11 years ago

      That actually might be a good idea. Not the Borgdork look, but integrating a headset.

      I imagine wearing a headset over that might not be the most comfortable. So, might as well put everything together – glasses, speakers, and microphone. Not necessarily cans for the speakers – that has the potential to look really dorky – although it could be done. Make it with surround sound for headsets and that could make the 3D-ness more convincing.

      Not sure if Bluetooth would be the best platform for it though, but I have no idea what its strengths and weaknesses are.

      Now, that would be complete Borgdork-ness, and possibly a compelling product.

        • JustAnEngineer
        • 11 years ago

        “Surround” headphones are a useless gimmick.

          • Meadows
          • 11 years ago

          Not completely useless, but they’ll never even approach a generic 5.1 system in terms of spaciousness.

            • JustAnEngineer
            • 11 years ago

            *[

            • The Stench
            • 11 years ago

            Why is it that we have two ears, but need six – or more – speakers to emulate life’s sounds?

            I am not talking about headphones that have a bunch of little-bitty speakers; just one driver per ear.

            Done right, I think the headphone surround is a better idea than a full 5.1 setup.

            • Meadows
            • 11 years ago

            You clearly don’t understand hearing. Your brain can distinguish a lot of different directions due to the function of your outer ears and your own head itself – they take sound waves and conduct them differently, or block them entirely.

            Your brain detects every slight deviance from how it believes the sound is “correct”. For instance, if something emits a sound behind you, those waves can’t all enter your ear directly, they can only hit your outer ears that will muffle the sound slightly and your brain will tell you that it came from behind. In addition, your hearing takes into account the most minute differences in moment of hearing (one ear gets the sound before the other, almost always) and the shadowing your head provides, with these two effects often working together, the former being important for stereo only, while the latter is important for surround sound.

            If you don’t think surround sound is important at all, imagine losing that ability: I’ll invite you to a blindfold game where I make noises all around the place for you to follow, and you’d walk into a wall sooner rather than later, even if I were trying to make you steer or turn around.

            This is why a correctly placed surround setup will always be superior to whatever you can accomplish with headphones.

            • The Stench
            • 11 years ago

            My understanding of how the brain analyzes the delays and slight differences in the audio that each ear hears is echoed almost perfectly by your explanation of it.

            Before I begin, I must say that I do not own or have access to a proper surround sound setup – all of this is my own “theory crafting”. Please forgive me, and tell me if this is unacceptable. Also, let me know if this is getting too far off topic and whether I need to take it to a more appropriate place in the forums – should it be that someone cares to continue this conversation.

            With a 5.1 system, there are six speakers placed around the room, with the listener in the center. That means only six places that a particular sound could come from. Yes, we can mix two or more speakers with the same sound and even add delay, but that is quite a bit of approximation in-between speakers, especially if the room is large.

            Could there not be just two microphones picking up the sound (placed as far apart as the average pair of ears, and perhaps attached to a human head analog), and two speakers reproducing that sound? After all, we never really have more than stereo; our brains deduce direction from the delays and slight differences in the sound reaching each ear.

            Now, if the sound that is in a game is given the correct delays and effects to emulate the ear, the same effect that we get from our natural head shape and ear should be accomplished with a headset.

            For example, there is a gunshot from the 2 o’clock position in the game. You are facing 12 o’clock, and for the sake of the example, we will not take any echo or objects that would alter the sound into consideration. Head and ear shape will be considered, though. The distance between you and the gun is 100 feet.

            The attenuation and delay from being 100 feet away are calculated. The sound shadow the head casts over each ear is computed, as are the slight alterations created by the shape of the head and ears. The delay from the right ear being a little closer to the gun than the left ear is also taken into account. With the audio processing done, the game sends the completed audio streams out to the two speakers and into your ears. Your ears then hear the gunshot with the same delays and alterations that the head and ears would naturally make, and your brain can analyze those effects and tell that the gunshot came from the 2 o’clock position, about 100 feet away.
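
            Something like this, as a very rough numeric sketch (the head radius, the 1 ft reference distance, and the Woodworth ITD formula are my own assumptions, not how any real engine does it):

                // Back-of-the-envelope version of those two steps: overall travel delay
                // plus attenuation with distance, and the interaural time difference (ITD)
                // for a source 60 degrees right of straight ahead (the 2 o'clock position).
                #include <cstdio>
                #include <cmath>

                int main() {
                    const double pi           = 3.14159265358979;
                    const double speedOfSound = 1125.0;            // ft/s in air, roughly
                    const double headRadius   = 0.29;              // ft (~8.8 cm), assumed average
                    const double distance     = 100.0;             // ft to the gunshot
                    const double azimuth      = 60.0 * pi / 180.0; // 2 o'clock, in radians

                    // How long the sound takes to reach the listener at all.
                    double travelTimeMs = distance / speedOfSound * 1000.0;  // ~89 ms

                    // Woodworth's approximation: ITD = (r / c) * (theta + sin(theta)).
                    double itdMs = headRadius / speedOfSound * (azimuth + std::sin(azimuth)) * 1000.0;

                    // Simple inverse-distance level drop relative to a 1 ft reference.
                    double levelDb = 20.0 * std::log10(1.0 / distance);      // -40 dB

                    std::printf("travel time: %.1f ms\n", travelTimeMs);
                    std::printf("right ear leads the left by: %.3f ms\n", itdMs);
                    std::printf("level relative to 1 ft away: %.1f dB\n", levelDb);
                    return 0;
                }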

            Now, I am not an audio engineer – by no means – and I’m sure I left out some effects or calculations. But from my knowledge of the matter, that should be a rough example of what would happen in processing audio for headphones.

            If you or someone else would correct me if I am wrong and why I am wrong, I would appreciate it.

            • Meadows
            • 11 years ago

            Your idea is not inherently wrong, but you’d need a lot of standardisation to make this work at any level.

            First and foremost, this requires extra computation, in every case. There may be a noticeable overhead compared to simply mixing things using a surround setup and being done with it.

            Secondly, the extra computation should be uniform between programs. This means you need to develop a common layer, much the way Havok exists for physics, for example.

            Thirdly, even if you have the other two fleshed out, you need to use (or bundle, depending on your point of view) canalphones to sufficiently fool your brain with little to no outside interference. Outside loudspeakers would present problems: what if you turn your head or change your sitting position while playing? The program can’t account for that, and your sense of where a sound came from may be compromised – unless you write another layer and bundle a camera to attach to your monitor to keep track of you, but that would be too much now. 😉

            • SonicSilicon
            • 11 years ago

            Too much what?
            Certainly not expense:
            http://www.free-track.net/english/

            • The Stench
            • 11 years ago

            @ SonicSilicon:
            I believe Meadows was referring to the fact that tracking head movement would be too much hassle. Furthermore, we would then be trying to do more than what an audio layer should be doing.

            @ Meadows:
            Yes, there most likely will be more calculations to perform. However, a lot of games aren’t using all the resources of a dual-core processor. I don’t know how heavily the sound card gets taxed in games these days, or whether it could handle this, but that is another processor this work could be put on as well.

            Depending on whether some programs have more filters and effects enabled than others, there may be some difference in how much calculation must be done between programs. Seeing as developers are probably already piping sound through a sound API layer, this wouldn’t be a large departure from current common practice, and it would also standardize things across all programs.

            My idea is geared towards using headphones, ear buds, or, as you suggested, canalphones. That would make it very attractive to people – like me – who don’t have the proper room shape or size for a 5.1 system.

            • Meadows
            • 11 years ago

            I understand room problems, heck, I could barely install my 5.1 too, but it’s still here. I love it, it was worth it (not expensive either).

            Canalphones are an absolute must for your idea, and even then, your brain will “be surprised” if you turn your head – for any reason – and a sound source still comes “from behind”. If you used outside speakers, this would sober up your brain completely, and it would realise that every sound from then on may be coming from the same spot.

            Not everyone has a dedicated sound card, let alone one with sufficient processing capability – it takes some effort to make this playable on integrated sound while still offering something to users who spent $200 or more on sound alone. Simulating everything on the CPU for integrated users is a possibility, but it should be kept optional; I don’t know how badly serious audio calculations can bog down a processor.

            • SonicSilicon
            • 11 years ago

            Most USB sound solutions offload the processing to the CPU. Some even use licensed psychoacoustic (HRTF) “virtual” surround audio software.

            While you still have to put up with sound positioning being relative to your head, I found I got used to it relatively fast – though I had been using headphones for stereo gaming before getting an SB Live with CL’s implementation of virtual surround.

            • The Stench
            • 11 years ago

            I was playing Halo 3 the other day, which has pseudo-surround sound for stereo. If I listened carefully – and kept my head still – I could tell where some sounds were coming from. But if I was in the middle of a firefight, I could only tell left from right, with the very rare occasion of something behind me. This is with just two normal speakers sitting to the left and right of the computer monitor I use for the 360, so head positioning would make a lot of difference with delays and head shadowing. Several months ago, when I played more competitively, I would almost always use ear buds because I could make out sound direction from more places.

            So, yeah, head movement is a definite problem, although I don’t think it can be overcome without some sort of head tracking.

            • Tamale
            • 11 years ago

            You all realize this is exactly what EAX and A3D have been doing for over a decade now, right?

          • eitje
          • 11 years ago

          just like stereoscopic 3D!
          oh, wait…

            • JustAnEngineer
            • 11 years ago

            There is at least a technical reason that 3D video could work. There is zero technical basis for the “surround” headphone gimmick. The delay and damping calculations are better done in software before the sound is sent to a good pair of stereo headphones.

            You’ve got two ears, and the headphones move when you turn your head, so “surround” headphones provide no useful information for 3D positioning. 5.1 speakers are a different story, since you can perceive a change in positioning when you move your head. In fact, “surround” headphones provide a messed-up audio image, since the signal delays to the different channels were calculated for a 5.1 speaker arrangement, not for headphones mounted on your head.

            Setting your game to calculate audio effects for stereo headphones and wearing a good set of stereo headphones is the best way to go.

    • BoBzeBuilder
    • 11 years ago

    Sooo, unless I have a high-end SLI machine, this thing is useless.

      • marvelous
      • 11 years ago

      Don’t forget a 120Hz LCD, which costs a whole lot more than a 60-75Hz LCD.

      • Hattig
      • 11 years ago

      Today, yes. However, its presence will mean that within a couple of years all games will get the depth right on cursors and particle effects. That is also when 120Hz displays will start getting cheap and decent in quality. When this reappears in 2012, all the games will work correctly, and hopefully the cost of entry will be far, far less.

      It is a shame the review didn’t say whether the games got a lot better at lower resolutions. 1680×1050 might be too much, but what about 1280×768?
