GeForce 8 graphics processors to gain PhysX support

During its fourth-quarter financial results conference call, Nvidia shed a little more light on its acquisition of Ageia and its plans for the firm’s PhysX technology. CEO Jen-Hsun Huang made no announcements regarding the deal until asked during the question-and-answer session, but he was happy to divulge a decent number of details.

Huang revealed that Nvidia’s strategy is to take the PhysX engine and port it to CUDA. For those not in the know, CUDA stands for Compute Unified Device Architecture, and it’s a C-like application programming interface Nvidia developed to let programmers write general-purpose applications that can run on GPUs. All of Nvidia’s existing GeForce 8 graphics processors already support CUDA, and Huang confirmed that those cards will be able to run PhysX.
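
For a sense of what that C-like API looks like, here is a minimal CUDA sketch of the canonical parallel vector-add kernel. It is purely illustrative and has nothing to do with the PhysX port itself; the function names are our own.

```cuda
#include <cuda_runtime.h>

// Illustrative CUDA kernel: each GPU thread handles one array element.
// This is the canonical CUDA "hello world," not PhysX code.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        c[i] = a[i] + b[i];
}

// Host-side launch: 256 threads per block, enough blocks to cover n elements.
void launchVectorAdd(const float *d_a, const float *d_b, float *d_c, int n)
{
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);
}
```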

We’re working toward the physics-engine-to-CUDA port as we speak. And we intend to throw a lot of resources at it. You know, I wouldn’t be surprised if it helps our GPU sales even in advance of [the port’s completion]. The reason is, [it’s] just gonna be a software download. Every single GPU that is CUDA-enabled will be able to run the physics engine when it comes. . . . Every one of our GeForce 8-series GPUs runs CUDA.

Huang thinks the integration will encourage people to spend more on graphics processing hardware, as well:

Our expectation is that this is gonna encourage people to buy even better GPUs. It might—and probably will—encourage people to buy a second GPU for their SLI slot. And for the highest-end gamer, it will encourage them to buy three GPUs. Potentially two for graphics and one for physics, or one for graphics and two for physics.

Last, but not least, Huang said developers are “really excited” about the PhysX-to-CUDA port. “Finally they’re able to get a physics engine accelerated into a very large population of gamers,” he explained. Huang was unwilling to get into a time frame for the release of the first PhysX port. However, considering this will be purely a software implementation and Nvidia now has Ageia engineers on its payroll, the port may not take too long to complete.

Comments closed
    • WaltC
    • 12 years ago

    What has happened so far is that nVidia has purchased Ageia.

    Implementation of any existing Ageia tech into CUDA is yet to come. When that happens, if that happens, it will be optional for game developers, as opposed to something game developers will *have to* support (game devs have to support an API, for instance). Should nVidia implement such support as an nVidia-only feature, then game developers who want to support all DX-capable cards in the market probably will not choose to use it. Of course, this won’t stop nVidia from marketing it as if everything already *does* use it…;) Judging by some posts in this thread–like the UT3 post, for instance–some people think it’s already happened, and the advantage to nVidia marketing in picking up the Ageia name becomes obvious…;)

    • Cannyone
    • 12 years ago

    Ok! My question is: does this mean I can use two similar, but not identical, Nvidia graphics cards in an X38 chipset board – where one of the graphics cards is handling the graphics and the second is handling the physics?

    See, I have an 8800GTS/640 (yeah, I know it’s “old”!), and I have an Asus P5E motherboard with two PCIe 2.0 slots. This means I can’t do “SLi”; then again, I’ve never been terribly impressed with SLi. But I might be interested in buying, say, an 8600GTS, either to run a second display or to perform physics processing.

      • pogsnet
      • 12 years ago
    • Anomymous Gerbil
    • 12 years ago

    Interesting that some people are apparently willing to spend $ on a second (slower) graphics card to run physics, but not to spend similar money on a physics card – is the thinking that the 2nd graphics card could help with graphics for non-physics games?

      • ludi
      • 12 years ago

      That, or maybe the extra display outputs for multiple-monitor setups (dualies off the main card, LCD television off the auxiliary card…?), and in any case, a graphics card is still useful as a graphics card if the user decides to upgrade or needs to move it to a different system. The versatility is valuable, and probably easier for many people to justify shucking out the green for, as compared to expensive one-trick-wonder toys.

      • willyolio
      • 12 years ago

      you mean people will be buying NEW, slower video cards instead of simply upgrading to a faster video card and using the older one for physics?

      • MadManOriginal
      • 12 years ago

      Someone else already noted that a second video card could do more than a PhysX card. But with NV backing it, it’s more likely to be adopted than when Ageia was pushing it.

    • liquidsquid
    • 12 years ago

    I think some of you are missing a few things here: there is the potential to merge these features on a SINGLE DIE. I cannot imagine nVidia requiring another entire video card to be dedicated simply to physics. This is how I see it:
    nVidia already has optimized math paths within the die that can be used for either graphics or general-purpose accelerated math. Now slap in the Ageia IP in conjunction with what nVidia already has, and the GP math is complemented with physics-specific math accelerator functions. As far as design flow goes, it would make more sense for a single piece of silicon to contain this functionality. nVidia can then leverage some of the Ageia code already written to complement the video drivers and their dev package.

    Now, however, since each die can perform both tasks, you can optimize your configuration by adding another card and “almost” double the performance of both graphics and physics.

    • MadManOriginal
    • 12 years ago

    Finally, a use for my second PCIe x16 slot! I’m pretty content with single-card solutions for graphics, but I would likely buy a mid-to-low-range graphics card at ~$100 for physics, especially if I could then use it for additional displays on the desktop. Even better would be picking up a used GPU on the cheap.

    • Prospero424
    • 12 years ago

    Until they can actually demonstrate a top-tier game where this tech adds something substantial that can’t be accomplished in software, this is gonna remain a niche only filled by fanboys and people who can afford to drop $1200 for three graphics cards at the same time.

    I simply can’t imagine a gaming scenario where using a second card for physics instead of faster rendering (SLI) would make sense.

    I still say it’s mostly for their GPU Computing stuff…

      • Krogoth
      • 12 years ago

      Ironic that the only genres that would show the true benefits of accurate physics are the comatose flight and space simulators.

    • danazar
    • 12 years ago

    I wonder how much this plays into having an integrated GPU built into all future chipsets… even if you add a discrete GPU, you still have a GPU on the motherboard that could be used for physics calculations. Before too long everyone will suddenly have hardware physics capability even if they didn’t mean to buy it.

    Brilliant.

    • AMDguy
    • 12 years ago

    I’m glad Nvidia bought Ageia because they are the ideal company to get GPU physics acceleration into mainstream gaming. I’m also glad they are working toward open standards for GPU physics, because that will remove any reservations on the part of game developers.

    Lastly, I’m looking forward to the possibility that one day soon I may use my motherboard’s IGP chip to run physics. 🙂

    • continuum
    • 12 years ago

    I could care less about physics… but if I can get some nice OGR, RC5, or even F@H performance on an nVidia card, THAT would be compelling enough to get me to buy one!!!

      • BobbinThreadbare
      • 12 years ago

      Just buy a PS3?

    • shank15217
    • 12 years ago

    You guys are making an assumption that the GeForce cards will be able to accelerate physics better than Ageia’s custom processor. All this talk about the 8800GT’s screaming performance in physics, a card that was primarily designed for graphics processing; I’m gonna laugh when performance falls well below the hype. Enjoy your SLI setups… People have bashed Ageia a lot about their hardware, but it’s an extremely optimized engine made specifically for one purpose. GeForce cards will effectively be “emulating” this architecture; something tells me it will turn out a LOT slower.

      • SPOOFE
      • 12 years ago

      “You guys are making an assumption that the GeForce cards will be able to accelerate physics better than Ageia’s custom processor.”

      It’s not like the PhysX processor has been a resounding success when it comes to “accelerating”.

        • shank15217
        • 12 years ago

        That was really up to the developers; they introduced half-assed support, and that card never reached its full potential.

          • SPOOFE
          • 12 years ago

          … So? That changes its utter failure to accelerate anything significant… how?

            • shank15217
            • 12 years ago

            it doesn’t, but GeForce 8 cards aren’t gonna change that. That hardware is most probably even SLOWER than the PhysX custom chip.

            • Meadows
            • 11 years ago

            Think again. Every modern videocard is a palm-sized supercomputer (whether it’s one or two palms is another matter). AGEIA’s attempt at a device was half-assed and too expensive and received developer support accordingly.

        • poulpy
        • 12 years ago

        To be fair, most games that ran slower with hardware acceleration than without were poorly implemented, and IMO were doing more physics processing than the default non-accelerated version anyway.

        What I’d like to see is full physics engines and not just particles; the latter is easily doable on CPUs and GPUs, but the former would bring any quad core to its knees and certainly kill your framerate on any GPU.

          • SPOOFE
          • 12 years ago

          Yes, there’s a very good reason for PhysX’s failure to launch. That doesn’t change the fact that it was, indeed, a failure.

          This isn’t to say that the boys at Ageia didn’t create an interesting piece of hardware. They just didn’t create anything that was actually…

            • Shining Arcanine
            • 12 years ago

            If it failed to launch, it would be vaporware.

            • SPOOFE
            • 12 years ago

            And if it were vaporware, I wouldn’t be talking about the product Ageia created, would I? Hmm… I must’ve been using “launch” in a different sense than the literal… but that would be crazy, wouldn’t it?

      • derFunkenstein
      • 12 years ago

      Who cares if it does? All it has to do is be faster than the CPU doing it.

      • ludi
      • 12 years ago

      Still smarting over the $250, eh?

      Anyway, you’re misrepresenting what a modern GPU really is. It is a fully programmable processor with enormous computing power in a specific set of operations. Although the design target was graphics, there are a few other types of applications that run in the same vein, and can put that computing power to good use provided someone writes the code.

        • shank15217
        • 12 years ago

        GPUs aren’t fully programmable chips that run everything faster. GPUs are optimized to do one thing, render graphics fast, and that optimization itself kills their potential as fast GPGPUs. Current programs running on GPUs are still limited to a specific workload; they require heavy optimization, and basically the problem has to fit. I don’t own a PhysX processor, if that’s what you mean by your initial comment.

      • UberGerbil
      • 12 years ago

      GPUs with universal shaders are optimized to perform ops on a 4×4 matrix of floating point numbers. Physics calculations are operations on a 4×4 matrix of floating point numbers. Conceptually, and numerically, there’s really no difference between transforming a polygon edge or projecting a motion vector. A physics processor only has an advantage because it doesn’t need some of the downstream circuits to do the actual framebuffer work so it can be made smaller. On the other hand, nVidia has mature circuit libraries and a lot of experience and volume advantages with its fab partners. Moreover, precisely because the GPU is useful as a GPU, they offer customers a reason to buy their product even before games adopt their physics engine. That, combined with a huge existing installed base and a mature developer relations program, all matter far more for developers who might consider adopting a physics API for their games.
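
      UberGerbil’s point is easy to see in code. Below is a hedged sketch in CUDA (the kernel name and data layout are our own illustrative assumptions, not Nvidia’s or Ageia’s code): one and the same 4x4 matrix-times-vector kernel serves both jobs, depending only on what you feed it.

      ```cuda
      // One kernel, two uses: feed it a projection matrix and vertex
      // positions and it is graphics; feed it a time-step matrix and
      // motion states and it is physics. Illustrative sketch only.
      __global__ void transform4(const float *m,    // 4x4 matrix, row-major
                                 const float4 *in,  // input 4-vectors (x,y,z,w)
                                 float4 *out, int n)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;
          float4 v = in[i];
          out[i] = make_float4(
              m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w,
              m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w,
              m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w,
              m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w);
      }
      ```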

        • LordVTP
        • 12 years ago

        ^- BING! Give the man a prize.

        • shank15217
        • 12 years ago

          If that’s the case, then nvidia would have at least given a tech demo of their cards doing physics operations. You are simplifying the problem far too much. Physics acceleration could be as simple as balls colliding or as complicated as fluid dynamics.

          • Meadows
          • 11 years ago

          Fluid dynamics is as simple as any other problem. You only need to disintegrate the problem into its elements – even if there are an assload of elements, a graphics card has the means to execute it rapidly and gracefully.

          Think about computers. They do everything by adding numbers. Even subtraction has been disintegrated into addition. Yes, you can look that up if you want. And videocards are optimized to work with such small instructions, but lots of them at the same time.

          There’s no reason why a graphics card couldn’t do something like that if people wanted it to. Compared to graphical loads, physics is actually light in most cases.
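
          Meadows’ subtraction aside checks out: two’s-complement hardware computes a - b as a + (~b + 1), i.e. pure addition plus a bitwise flip. A tiny illustrative snippet (plain host-side C, nothing GPU-specific):

          ```c
          #include <stdio.h>
          #include <stdint.h>

          int main(void)
          {
              int32_t a = 42, b = 17;
              // Subtraction "disintegrated" into addition: a - b == a + (~b + 1)
              int32_t diff = a + (~b + 1);
              printf("%d\n", diff);  // prints 25, same as 42 - 17
              return 0;
          }
          ```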

      • kilkennycat
      • 12 years ago

      Er, wait for the next GPU family from nVidia, currently in full design. There may be a surprise or two in store for both the graphics junkies and the physics junkies….

        • Fearless Leader
        • 12 years ago

        Remember, the article said they’d be getting PhysX support in GeForce 8 cards due to CUDA. So it’s kind of deflecting to say “wait until the next family of GPUs” when PhysX support is promised for the current family.

        Here’s the kicker, though. Folding@home already runs on AMD GPUs, doing more than just graphics. The project leader has already said CUDA is what has been holding back a version for nVidia cards, with not a single nVidia card currently supported. To me, this indicates that CUDA has issues that have not been resolved in the current generation of cards.

        To me, that also spells potentially crappy PhysX support in GeForce 8 cards.

        In which case, CUDA would have to be changed for the next family of GeForce cards. Will it be backward compatible? Maybe so. Or maybe not.

    • Majiir Paktu
    • 12 years ago

    My guess (and objection) is that this will be pretty negative for performance and that there will be few options for disabling physics processing like there are for graphics options. With that said, I hope Nvidia works this so that a single GPU can handle both physics and graphics processing as the application requires, so that PhysX-enabled applications can (hopefully) throttle their physics processing demands to maintain a level of graphics performance. I do not want a situation where I buy a second graphics card for SLI and I’m forced to make that a ‘physics’ card while my first one is the only one doing ‘graphics’ work.

    • asdsa
    • 12 years ago

    PhysX is yesterday; Microsoft’s DirectX “Direct Physics” API, which would allow universal physics acceleration, is yet to come. At least I hope it’ll come someday and nullify this PhysX monstrosity.

    • pogsnet
    • 12 years ago
      • BobbinThreadbare
      • 12 years ago

      It’s not about more fps; it’s about better, more immersive physics.

    • derFunkenstein
    • 12 years ago

    at least they’re not going to be like Creative and force you to buy it, like ALchemy for Audigy cards.

    • slot_one
    • 12 years ago

    This is good news. I’ve already got two GTS 640s clocked at 630/1500/2000, but my S939 Opteron 185 @ 3.0GHz is bottlenecking the crap out of them (except in Crysis). Tasking one of them with physics seems like a really good idea.

      • leor
      • 12 years ago

      a 3ghz opty is bottlenecking a 640GTS?

      that doesn’t sound right at all.

        • wingless
        • 12 years ago

        I have an Opteron 185 and used to run it at 3GHz before the TEC cooler caught fire. ANYWAYS, compared to modern (Intel) CPUs, it’s a bit of a bottleneck. I can actually get 10200 3DMarks out of this system, even with a 2900XT, but the CPU score is only ~2200. Trust me, that’s a bottleneck compared to a C2D.

        • slot_one
        • 12 years ago

        I have two of them running in SLI. It wouldn’t bottleneck a single card, but two of them…definitely. Which is why I said that I’d like to dedicate one of them to physics.

    • ssidbroadcast
    • 12 years ago

    File this under obvious.

    That said, the implementation through CUDA is pretty smart stuff. Maybe physics engines will get much more sophisticated in the coming years.

    • Usacomp2k3
    • 12 years ago

    So buying something like an 8600GT, which you can get for under $100, would be useful as a secondary card.

    I wonder how physics performance scales with faster (I mean, more expensive) cards, though.

      • echo_seven
      • 12 years ago

      At least for rigid bodies, I would guess this amounts to computing F=ma for each of the rigid bodies acting on each other. That sounds like it could be split into one thread for each rigid body, with dependencies between threads only occurring at each time step. Sounds pretty parallelizable…

      Oops: the implication being, performance should scale linearly with the number of SPs on the card.
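
      As a rough sketch of that one-thread-per-body idea (the Body struct and names below are illustrative assumptions, not Ageia’s or Nvidia’s API), each CUDA thread could run an explicit Euler step of F = ma for its own rigid body:

      ```cuda
      struct Body { float3 pos, vel, force; float mass; };

      // One thread per rigid body: integrate F = ma with explicit Euler.
      // Collision handling is omitted; with no inter-thread dependencies
      // within a step, throughput should scale roughly with SP count.
      __global__ void eulerStep(Body *bodies, int n, float dt)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;
          Body b = bodies[i];
          float invMass = 1.0f / b.mass;       // a = F / m
          b.vel.x += b.force.x * invMass * dt;
          b.vel.y += b.force.y * invMass * dt;
          b.vel.z += b.force.z * invMass * dt;
          b.pos.x += b.vel.x * dt;             // x += v * dt
          b.pos.y += b.vel.y * dt;
          b.pos.z += b.vel.z * dt;
          bodies[i] = b;
      }
      ```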

    • herothezero
    • 12 years ago

    “#3, It is Nvidia’s trying attempt to indirectly pushed for SLI = more video card sells.”

    I’ve read this four times now and still have no idea what you’re trying to say.

      • BKA
      • 12 years ago

      Go out right now and buy another video card to increase our profits!!!

      *Did I just say that out loud?*

    • eitje
    • 12 years ago

    i was right! i was right! la lala la laaaaa!

    • DrDillyBar
    • 12 years ago

    *poof* Ageia is accepted by the masses. *sigh*

    • Jigar
    • 12 years ago

    I am so happy with this news; my two 8800GTs should be more than enough to let me enjoy UT3 in full glory now.

      • Krogoth
      • 12 years ago

      To tell you the truth, the PhysX PPU only eliminated the CPU overhead for physics in UT3. The results are pretty much the same. I expect the same for Nvidia’s implementation.

        • Jigar
        • 12 years ago

        So you mean to say that my Quad was already doing this??? And this new driver upgrade is just going to transfer the load from CPU to GPU?

          • DrDillyBar
          • 12 years ago

          Basically, and a checkbox for HW Physics.

            • shank15217
            • 12 years ago

            fallacy; you don’t know what the hardware PhysX chip was capable of.

            • DrDillyBar
            • 12 years ago

            indeed. Missed the sarcastic tags on the checkbox. 😉

          • Krogoth
          • 12 years ago

          Part of UT3’s installation was a framework for the AGEIA API, which can work in either hardware or software mode.

          If you want to toy with some of its demos, go to your Control Panel and look for the AGEIA API.

    • Sargent Duck
    • 12 years ago

    Although free PhysX is always nice, I gotta disagree with Huang about his “encourage people to buy a second GPU” comment. It’s still gimmicky, and I’m not willing to give up other expansion slots (TV tuner, sound card, Wi-Fi) for some physics.

      • Krogoth
      • 12 years ago

      It is Nvidia’s trying attempt to indirectly pushed for SLI = more video card sells.

      I cannot say that it would exactly help adoption of hardware-accelerated physics. It would be better just implemented in future GPU designs by dedicating some of those streaming processors.

      • BobbinThreadbare
      • 12 years ago

      I could see buying a second cheaper card to get PhysX, like an 8400, or…

        • Forge
        • 12 years ago

        Actually, depending on how NV implements it, you could have an ATI GPU doing just rendering and a second NV GPU doing just physics.

        I wonder how an 8600GT would compare to an original PhysX PPU?

    • Walkintarget
    • 12 years ago

    Hey, there’s a slick way to up the video card’s price another $50+!!! That may even pay off the acquisition in a couple of years. Gotta hand it to Nv for making it plainly obvious to us gamers what we…

    • Gerbil Jedidiah
    • 12 years ago

    Wonder how this will impact performance…
