Physics on display in Havok, Ageia catfight

I had been waiting to address these things until our PhysX card showed up and we were able to do our own testing, but our card has apparently been caught up in a shipping snafu and won’t arrive this weekend. So, here goes nothing…


Earlier this week, in anticipation of the first real reviews of Ageia’s PhysX card beginning to show up, the folks at rival physics software company Havok sent out a juicy e-mail to the press, including us, talking down Ageia’s solution. Havok, you may recall, is working with graphics companies like NVIDIA and ATI on a product called Havok FX that will accelerate in-game physics using a GPU. The primary focus of the e-mail is the one major game title so far to ship with support for PhysX hardware, Ghost Recon: Advanced Warfighter.

Havok contends in the e-mail message that Ghost Recon uses Havok’s API for all gameplay-impacting physics, on the PC and in the various console releases. They argue Ageia’s API is only used for eye candy-type particle effects and only on the game’s PPU-accelerated code path. What’s more, they claim, those particle effects are unimpressive, with volumes easily achievable in software, yet the game slows down observably when PPU acceleration is active. Havok says Ageia lays the blame for these slowdowns at the feet of the graphics processor, as if it were a vertex processing bottleneck or the like. The e-mail then dismisses that possibility, saying “NVIDIA specifically can technically verify that the GPU is not the cause of the slowdown.”

Like I said, juicy.


Havok’s livelihood is no doubt threatened by Ageia’s push into physics, but this hyper-aggressive approach has NVIDIA’s sweaty fingerprints all over it, in my view.

Anyhow, the drama intensified when AnandTech’s benchmarks of Ghost Recon and the PhysX card showed lower frame rates with PPU acceleration than without, substantiating Havok’s assertion and throwing discussions of physics acceleration into overdrive with speculation about the technical reasons for the lower frame rates.


The folks at FiringSquad asked Ageia to respond to Havok’s claims, and they have an interview online that gives Ageia’s side of the story. Some of the back and forth involves minor point scoring over how much the PhysX and Havok APIs are used in various versions of Ghost Recon, but Ageia then uncorks this revelation about low frame rates:

We appreciate feedback from the gamer community and based partly on comments like the one above, we have identified an area in our driver where fine tuning positively impacts frame rate. We made an adjustment quickly and delivered it in a new driver (2.4.3) which is available for download at ageia.com today.

Truly, they have learned from the masters.


Ageia also talks down the notion that CPU or GPU bottlenecks are responsible for performance problems, asserting that PhysX doesn’t require an absolutely high-end system config.

Obviously, these are the first salvos in a very long battle over physics acceleration on the PC. We will have to check out Ghost Recon performance with the new driver when our PhysX card arrives, but this one title won’t necessarily tell us anything definitive about this first PPU’s performance characteristics.


John Carmack expressed worries about this sort of problem—hardware physics acceleration causing input lag and slowdowns—in his address at last year’s Quakecon. Early 3D graphics chips were guilty of the same, and it seemed like an obvious potential problem. We have since asked Ageia about this issue several times, including at CES. There, they showed us some developer tools with real-time, on-screen instrumentation for physics processing latencies, and the results were convincingly decent. As I wrote:

Ageia breaks physics problems down into frame-by-frame chunks, returning the required answers for each frame in some period of time that’s hopefully less than the time required for the game engine and graphics card to process that same frame. They showed us a demo with on-screen counters reporting the number of milliseconds required to process each frame of a scene alongside counters showing the number of rigid bodies in action and the like. As the physical complexity of the scene grew rapidly, with lots of bodies moving around and bouncing off of one another, the time the PhysX chip required to process the physics of the frame grew impressively slowly and in a predictable fashion, without sudden spikes or drop-offs.

This demonstration assuaged my fears somewhat, and my conversations with Ageia’s CEO and other technical types have been encouraging, as well. Many of Ageia’s chip engineers have backgrounds in building network-processing chips for fiber optic switches, a world where managing packet processing latencies is crucial. Although first-generation hardware accelerators have a difficult history on this front, I remain optimistic Ageia can avoid facing a constant, intractable problem with performance in its first-gen PPU.
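
For the curious, here's a rough sketch of what that kind of per-frame instrumentation might look like inside a game loop. To be clear, this isn't Ageia's SDK; the PhysicsScene and stepSimulation names are stand-ins made up purely for illustration.

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical stand-ins for whatever physics middleware is in use;
// these are not Ageia's actual API, just placeholders for illustration.
struct PhysicsScene {
    int rigidBodyCount = 0;
    void stepSimulation(float /*dtSeconds*/) { /* solve this frame's physics */ }
};

int main() {
    PhysicsScene scene;
    const float frameDt = 1.0f / 60.0f;   // target 60 Hz simulation step

    for (int frame = 0; frame < 600; ++frame) {
        auto start = std::chrono::steady_clock::now();
        scene.stepSimulation(frameDt);    // hand this frame's physics to the PPU (or the software path)
        auto end = std::chrono::steady_clock::now();

        double ms = std::chrono::duration<double, std::milli>(end - start).count();

        // The demo's on-screen counters reported exactly this kind of data:
        // physics time per frame alongside the number of active rigid bodies.
        std::printf("frame %4d: %6.2f ms physics, %d rigid bodies\n",
                    frame, ms, scene.rigidBodyCount);
    }
    return 0;
}
```

The point of watching numbers like these is the one the demo made: as scene complexity grows, physics time per frame should grow smoothly and predictably rather than spiking past the frame budget.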

That said, I came away from Ageia’s pre-GDC press confab with the distinct impression that only certain parts of the PhysX API are currently accelerated in hardware by the PPU—mainly because Ageia said so point blank. The PPU itself is programmable, and the company has so far concentrated on accelerating specific parts of its API it considers especially good candidates for optimization. There is a long development process ahead in order to get the whole of the API accelerated, as well as a learning process that will involve give-and-take between Ageia and game developers, as the parties sort out the usage model for hardware physics acceleration. Ageia will have to learn how best to tune its drivers and hardware to deliver the mix of effects and performance game developers are requesting, and game developers will have to understand what to ask of Ageia’s hardware, as well. I don’t know whether or not Ageia will succeed at making all of this work, but I certainly think it’s much too early to count them out. We’ll be watching future developments here with interest.
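
To make that idea concrete, the upshot is a per-feature dispatch decision: accelerate this part of the API in hardware, fall back to software for the rest. The sketch below is entirely hypothetical (the feature names and functions are invented for illustration, not taken from Ageia's SDK), but it captures the shape of that arrangement.

```cpp
#include <cstdio>

// Purely illustrative dispatch between a hardware-accelerated path and a
// software fallback. These names are invented for the example and do not
// reflect Ageia's actual SDK.
enum class Feature { RigidBodies, Fluids, Cloth };

bool ppuAccelerates(Feature f) {
    // In this sketch, only rigid bodies have a hardware path so far;
    // everything else runs on the CPU implementation.
    return f == Feature::RigidBodies;
}

void simulate(Feature f, float dt) {
    if (ppuAccelerates(f)) {
        std::printf("dispatching to PPU (dt=%.4f)\n", dt);   // hardware path
    } else {
        std::printf("running on CPU (dt=%.4f)\n", dt);       // software fallback
    }
}

int main() {
    simulate(Feature::RigidBodies, 1.0f / 60.0f);
    simulate(Feature::Cloth, 1.0f / 60.0f);
    return 0;
}
```

Driver updates that move more of that table onto the hardware side are exactly the kind of tuning described above.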

Comments closed
    • DrDillyBar
    • 14 years ago

    Nicely done Damage!
    I must admit I’ve basically made up my mind to get one when either a game I actually want comes out, or they port the folding@home engine to support it. (Just imagine…). Until then, I get a kick out of all the conversations back and forth, while educating my friends to the correct details. 😀
    Edit: $389.95 CAD (BFG version); so not just yet then…

    • Freon
    • 14 years ago

    Clearly this is an immature technology. I don’t feel like being an early adopter.

    My only critical point would be that this won’t make people spend another $250 every time they refresh their PC. If it does take hold, it will have to carve money away from CPU and GPU sales, or count on memory, hard drives, etc. getting continually cheaper.

    • Anomymous Gerbil
    • 14 years ago

    Geez, why is everyone just so fricking negative?

    y[<"But its too teh expensive!!!".<]y Yeah yeah, and it will get cheaper. y[<"Who will buy a game that requires a PPU for gameplay until lots of people have PPUs?".<]y Only everyone who ever plays their games in single-player mode at least some of the time, i.e. virtually every gamer on the planet. And the more this happens, the more the buzz will go out, and the more they will sell, and the cheaper it will get, and so on. y[<"But why not just run it on my CPU or GPU?".<]y Two reasons - (i) because those chips aren't *[

      • Prospero424
      • 14 years ago

      Speaking for myself, my criticisms are levied because I WANT this technology to be a success, not out of being “eager to knock” it, not out of “stupidity”, and I can assure you, not out of immaturity; I’ve been in this business for a /[

        • lyc
        • 14 years ago

        there is a TON of anger floating around unfortunately (manifesting itself primarily as caps lock use and swearing at random strangers), and i really can't even /[

        • Anomymous Gerbil
        • 14 years ago

        Interesting that you, as a sometime-analyst, didn’t actually reply to any of my points. Do you agree or disagree?

        Asking "why is everyone being so fricken' negative" is pretty obvious – it's because they b[

          • Prospero424
          • 14 years ago

          /[

            • Anomymous Gerbil
            • 14 years ago

            As I said, it's exasperation that the lack of thought evidenced in some people's posts might adversely affect the thinking of others who read the comments seeking information. My post (which could have been worded better, as you point out) was simply trying to point out the other side of some of the unnecessarily negative posts here.

    • Prospero424
    • 14 years ago

    Edit: Dammit, this was supposed to be a reply to #52. Feel free to delete this post.

    /[

    • blastdoor
    • 14 years ago

    If I were running Intel (yes, yes, we’re all glad that’s not the case, etc), I would want to make sure that I didn’t lose yet another high-margin market to a third party chip developer.

    One of the reasons Intel got out of the memory business back in the 80s was to focus on higher margin logic chips. Yet they completely missed the boat on the GPU, and now they might miss the boat on the PPU. If they want to continue to grow as a company in areas with high profit margins, they can’t be missing these boats.

    The PPU might be much, much faster than the second (or second, third, and fourth) core(s) of a multi-core chip, but right now that doesn’t matter, because nobody owns a PPU and there aren’t many games that support them.

    If I were running Intel, I would move aggressively right now to work with developers to do the best possible physics on multi-core CPUs, and I would be designing my next generation of CPUs to include specialized cores that are better at this sort of thing than current, more general-purpose, cores.

    Heck, I’d even consider buying Ageia.

    • SLI_Fallen
    • 14 years ago

    We can debate the merits all day long, but at the end of the day (for now) the G.R.A.W. demo is entirely unimpressive…for a $250 premium for the privilege. And I agree with Havok's comments that the *SPECIFIC* implementation of the PPU seen in G.R.A.W. is entirely feasible with software/second core/GPU. As Damage said, the API is in its infancy and the implementation in G.R.A.W. is limited at best. But again, as it stands at this time, there is simply not enough justification for such an investment in a hardware implementation of physics acceleration. Considering we have not even seen results of other (software-based) methods. E3 maybe?

      • Beer Moon
      • 14 years ago

      It’s one thing to say they tacked on support for AGEIA’s physics, which they clearly DID, but it’s another altogether to claim the API is in its infancy.

      There is no evidence to support that.

      It’s not AGEIA’s fault that the GRAW team chose to use Havok for physics. It probably IS their fault for pointing out that GRAW “supports” AGEIA. This isn’t the type of implementation that’s going to win any gamers over.

      I suspect some of the Unreal 3 effects will, though.

    • cheesyking
    • 14 years ago

    Sounds like HyperTransport would be the really ideal interconnect. I’ve been doing a little reading on those DRC modules (which do sort of the same job) and DRC believe PCI can be an issue.

    Will us mortals ever get HT addon cards? I doubt it, but it would be pretty cool if we did!

    • Prospero424
    • 14 years ago

    I don't know about the technical feasibility of this, but I think it would have been a lot smarter if Ageia had designed their product as a co-processor to be marketed to ATI and nVidia and/or graphics card manufacturers rather than as a stand-alone product. It would be a more attractive product, at least to my mind, if it were built onto the same PCB as the GPU. Also, I would imagine it would be at least notably cheaper than purchasing two separate cards with both manufacturers maintaining a comparable profit margin, because of the lower manufacturing cost.

    One thing that might be a bit of a hurdle would be the added latency on the PCI-E interconnect that would be shared with the GPU. But since it seems to be the case that the PhysX card can operate properly on the comparatively limited 33MHz PCI bus, I would imagine that this would be a small problem.

    The big thing, I fear, would be the driver issue. Could it be done this way with two separate drivers needed, or would a single driver have to control both the PPU and the GPU?

    Meh, just a thought I had.

      • Bensam123
      • 14 years ago

      Two different things… Graphics != Physics; it's just that with early implementations it is easy to make them seem one and the same (more eye candy). For instance, sound cards relate to graphics cards about as much as physics cards relate to graphics cards.

      I personally would prefer upgrading my video card when it needs to be upgraded, without the extra costs associated with having an onboard PPU. You don't see many, if any, video cards with onboard sound.

        • Prospero424
        • 14 years ago

        That's because, most importantly, sound processors are built onto most mobos, for some of the same reasons I listed above.

        The vast majority of people only buy separate sound cards these days for two reasons: games or audio workstation duty (home studio stuff).

        Only people who play games or create CG content are willing to spend the money on high-end video cards.

        The only use for a PPU, as currently envisioned, is for games and perhaps some scientific calculation acceleration. It performs a very specific function and doesn't fulfill /[

          • dmitriylm
          • 14 years ago

          If they integrated the PPU onto nVidia and ATI GPUs, it would mean that all current GPU owners are left with an obsolete product. If I just built a well-powered computer, I would rather just get the add-in card.

            • Prospero424
            • 14 years ago

            Just because something isn’t as capable as the top-of-the-line product doesn’t make it obsolete.

            Besides, if the average person had $300 to upgrade the performance of their gaming rig, do you think they’d spend it on a standalone “niche” product or put it towards a more capable graphics card or even CPU? Remember, this is the average gamer we’re talking about here, not the early adopters and “performance enthusiasts” (read: /[

            • Beer Moon
            • 14 years ago

            They’ll succeed just fine. They have developer support, and that’s what is going to get them into the next-gen games.

            The problem is that they’re TOO eager to get publicity. They should have told the GRAW developers just to forget it when they saw what was being done. It’s useless, and everyone can see that. Hopefully, the same won’t be true of its integration (real, actual integration not just tacked on) with the Unreal 3 Engine.

            And you can’t put it on the same card as the GPU, because they will BOTH need PCIe-quality bandwidth in order to function properly.

            • Prospero424
            • 14 years ago

            /[

            • Bensam123
            • 14 years ago

            You can't just shove a card onto the market and expect everyone to go out and upgrade or replace the current hardware in their computer. It won't happen. I think GRAW is a good stepping stone in the right direction.

            An extra $100 on top of a video card is a little excessive, IMO. I'd rather save the extra 300 dollars over the course of the next six years (they said the card will last 4-6 years before it will be fully utilized) than save $150 now by buying one that's integrated onto a graphics card.

            Anyone who would ever consider buying more than a mid-grade video card also wouldn't use their onboard sound; they would buy a totally separate sound card. This is no different.

            The only reason to get more than an integrated video card is the same as for anyone who wants more than integrated sound, and it falls into two categories:

            games or professionals, and either one sees that money as well spent if they're willing to buy an extra component.

            What you're talking about is an option to get the PPU into as many computers as possible, no different from integrated sound/graphics on motherboards right now. It isn't so essential that every computer needs it; really the only ones who need it are the two you mentioned, and they're willing to go a step farther and buy an add-on.

            • Prospero424
            • 14 years ago

            /[

            • Bensam123
            • 14 years ago

            You tack it onto a video card, you upgrade it as soon as you next upgrade your video card, and then you get to re-tack it every time you upgrade your video card. You fail to mention how long the life cycle of a video card is (for just about every gamer who cares about his games, in other words the people this product is targeting, it's just two years).

            Who says the PCI-E version won't be out next month, or whenever you're going to upgrade again? You notice how long parallel, serial and FDD controllers have been around? The tech won't be dead within the year or maybe even the next; I'm pretty sure that if it did die they would upgrade their bus immediately, as would every other manufacturer of PC cards.

            A sound card is still just an extra card. For fifty dollars or less it would be more likely that they'd tack on the little cheap extras than what you propose by tacking on a PPU. The market doesn't demand this, though, and it's not about the cost here. It's about flexibility, upgradeability and what the market wants.

            Sound cards generally last four years, some a lot longer depending on your listening preferences. Video cards last about two. PPUs, as proposed, will last about five-ish. Why would you want to tack any of those together so that you need to upgrade them at the same point in time, when it will cost you considerably more in the long run?

            You can of course spend your money on any portion of your computer you want. This is a completely new component and will demand its own attention. If you'd rather spend the extra $250 on a kick-ass graphics card and play games like crap because you didn't want to buy a PPU, then that's your prerogative. $250 is right around the price of a mid-range video card, maybe a high-end mid-range card. Either way it has its own price bracket because it is completely new.

            $250 will hardly break the bank unless you're in poverty, but at that point I would question what you're doing with a mid-range video card or a dedicated sound card in the first place. The world is a tough place, and god forbid you have to spend a little extra money to make things 10x better. This really comes down to how much your entertainment is worth. I spend quite an excessive amount of time on my computer and playing games, and I think it is well worth spending a little extra to make the thing I do most that much better.

            • Beer Moon
            • 14 years ago

            Speculation smeckulation.

            They have Unreal 3 Engine. That’s at least 40 games in the next few years, given the popularity of UE licensing in the past. That doesn’t include any of the other developers who are using their technology. The list is on their site, in case you were curious.

            They also already have an API for the Cell, and one in place for Xbox 360 too. So that puts them on both the PS3 and the Xbox 360.

            Not sure where you’re getting your information from, but most of what you say is speculation. Developer support for their technology is not speculation; it’s fact.

            • Prospero424
            • 14 years ago

            /[

            • Bensam123
            • 14 years ago

            Trends are started with just getting a few names on the train.

            When DX9 was released there weren't any games ready to ship with the tech, nor were there cards already on the market; in that respect Ageia is already one step ahead by having games in development before their hardware even hit the market.

            More will follow as soon as they see what physics can do for gaming.

            You sure agree a lot more than you debate. I question why exactly you don't like the tech if you speak so highly of it.

            • Prospero424
            • 14 years ago

            /[

            • Bensam123
            • 14 years ago

            Why not just get it standard on motherboards, same as sound, then? Go right to the source.

            • Prospero424
            • 14 years ago

            Because:

            As I already explained, a sound card fulfills a hell of a lot more roles on the average PC than a PPU would, because a PPU is essentially only useful for games (at this point). A motherboard is just not (usually) a "gaming product", whereas high-end graphics cards are. It just wouldn't make sense to put it on motherboards, except maybe for the "Fatal1ty"-type specialty boards, but such a limited run seems like it would be a waste of resources.

      • ludi
      • 14 years ago

      "don't know about the technical feasibility of this"

      If "feasible" means either a 30-layer PCB or a card the size of a micro-ATX mainboard that chews up two or three bay slots and boasts a pricepoint north of Ye Olde Kilobuck, then sure, it's feasible. Otherwise, it's cheaper and easier (and still won't take up more than three bay slots, unless you do SLI/Crossfire) to put the PPU on a second board.

      Why would you want the PPU upgrade cycle tied to your GPU upgrade cycle, anyway? Are folks so short on expansion slots and so flush with cash as all that? I really gotta figure me a way into your wallet, I do...

        • Prospero424
        • 14 years ago

        /[

          • Beer Moon
          • 14 years ago

          Nowhere does it say it doesn't saturate the 33MHz bus. For all we know, the lag is CAUSED by it.

          In fact, most of what I’ve seen indicates they want to make this available to as many computers as possible, even if it quite obviously requires a high end rig to take advantage of it. They’re simply trying not to limit their potential market, being the first iteration, precisely because of the chicken/egg arguments. If they limited themselves to PCIe chipsets only (less than half of the current PC install base?), they probably wouldn’t have been able to get funding.

          Also, others have mentioned that most of the hardware details are under wraps, yet you confidently state that it COULD fit on a GPU?

          Frankly, you’re speaking as if it were fact, but all you’ve got is speculation.

            • Prospero424
            • 14 years ago

            I never claimed to know its bandwidth requirements, and I said it didn’t seem to /[

    • albundy
    • 14 years ago

    Can I assume that, for future games, cool things blowing up with more particles is more important than the actual gameplay? Trading fun factor for a second of eye candy seems to be the trend these days. Just waiting for another FPS game that has been copied a million times over.

      • sigher
      • 14 years ago

      If you have killer game ideas then by all means contact game studios and make us all happy 🙂
      Not that you aren't right with your little jab at Doom 3, but like books and movies, you can take an old familiar concept and still make it great, or not; it depends not on the concept but on the way it's worked out. A good FPS isn't the same as any other FPS when you play it, and the same goes for RPGs etc.

      • nonegatives
      • 14 years ago

      I bet we will see re-releases with the API added, such as Half-Life w/ Source. Any and all WWII games could add bigger explosions. I wonder if Crytek will buy into it; the physics demos from Far Cry could sure use some help with multiple objects flying around.

    • eitje
    • 14 years ago

    First, let’s look at GPU pricing when accelerated graphics first started hitting the scene. An example:

    http://www.wave-report.com/archives/1998/98030501.htm

    http://pc.ign.com/articles/066/066665p1.html

    How many of those games were really affected by the Voodoo2, do you think? Starcraft? Worms 2? Sure, Unreal and Half-Life probably saw some improvement, but I don't think Railroad Tycoon 2 saw anything spectacular performance-wise. In fact, if you take a look at the games that were available in 1997 (www.classicpcgames.com), you'll notice a significant lack of titles that would have benefited from 3D acceleration. It wasn't until after the technology existed that developers started to use it!

    Now, everyone just needs to take a DEEP BREATH and stop yammering about how useless a PPU is, and remember that - 10 years ago - someone was saying that GPUs weren't necessary either.

    Edit: one last link before bed. http://www.gamechoiceawards.com/archive.htm Notice how it wasn't until 1999 that PC games really took most of the awards. Prior to that, the N64 seemed to be rocking socks, as a discrete platform. I guess that's why consoles used to be so popular.

      • sigher
      • 14 years ago

      It's convenient to compare the PPU to a product that was successful in the end and seemingly win an argument, but what if we compared it to a product that was not successful, eh?
      Over the years people came up with stuff that failed, like 'vector displays' for instance; they were mocked at first and now... well, now nobody remembers that failed novelty.

      • lyc
      • 14 years ago

      the point has been made many times that glquake+voodoo1 truly rocked socks (like that expression ;), whereas the few extra bits of debris rock… well, they couldn’t rock a rocky rock with a rock rocker.

      until ageia find some way to justify the /[

        • sigher
        • 14 years ago

        The article says that Ageia says the drop in framerate was a bug, so hold off on finalizing judgement about that.
        You make some excellent points though, but it's not Ageia that makes the games, so they don't really control how it's used. Perhaps they should do what other projects and companies did in the past and make a 'certified for PPU' label that games can use, but only if they REALLY use a PPU, as in real physics and not some extra sparks.

          • lyc
          • 14 years ago

          they may improve the frame rate drop (possibly by lowering quality? debris count? i hope these things will be checked out) but bus latency is bus latency, and it's much more of a problem for physx than it was for the first 3d cards because of the /[

          http://www.firingsquad.com/features/nvidiahistory/page2.asp

            • sigher
            • 14 years ago

            I agree, it must be tested.
            And thanks for the compliment 🙂

          • R2P2
          • 14 years ago

          I’m thinking the “bug” was that the release drivers didn’t accelerate all the parts of the API that GRAW uses, so they got ones with a more complete implementation packaged up in a hurry. PC gamers are used to dealing with beta drivers, right?

      • indeego
      • 14 years ago

      Duke Nukem, bitboys.

      Nuff said. 😀

        • lyc
        • 14 years ago

        ah yes, bitboys is a particularly painful example.

          • sigher
          • 14 years ago

          bitboys just got sold to one of the big names didn’t they, so I guess you could interpret that as a success in the end 😉

            • lyc
            • 14 years ago

            more like they got rescued by people who know how to ship products 😉 i’ll be the first to say bitboys had some awesome tech (they’re ex demosceners, from the “biggest” group there ever was: future crew) but when it comes to the world of 6 month release cycles… hmm…

            anyway i guess they did succeed in the end; they could’ve been bought out by s3 😉

            • sigher
            • 14 years ago

            eheh, s3

      • muntjac
      • 14 years ago

      After GLquake and Half-life and Quake2 and the like started showing up I don’t think anyone was saying software video was good enough. If we start adding a piece of hardware for every aspect of gaming then we’re going to make PC gaming more prohibitively expensive as a hobby than it already is.

    • sigher
    • 14 years ago

    I find the statement:
    /[<"There is a long development process ahead in order to get the whole of the API accelerated, as well as a learning process that will involve give-and-take between Ageia and game developers, as the parties sort out the usage model for hardware physics acceleration."<]/ a bit curious, for since the release of the card has been delayed for months, and they already had time before the official release obviously, I'd expect them to have done the whole api already, but here we are hearing that even if you buy the 250 dollar card their own driver doesn't even use it fully, and how are gamedeveloper going to help them tweak it when it's not implemented? you can test nor tweak a non-existant product can you. And how can we trust they talk to gamedevelopers in a way that help development when it's a reviewer that is the one that has to find out the framerate drops because of a bug? if they were so closely in contact with the gamedevelopers would they not have noticed it themselves? Well at least they admitted that their api doesn't fully use the card, glad that much honesty is still maintained. And as for time needed for developing pcie versions, we all know their 'breadboard' version already had a dual pcie/pci connectors setup, so what gives scott? don't go guessing too much if you lack information.

    • madlemming
    • 14 years ago

    What irritates me is that they're charging $300 for the silicon equivalent of a low to mid-range graphics card. 125 million transistors, on a PCI interface, using 28 watts, blah. This thing should cost $150 to $200 tops.

    Yeah, R&D costs money, but if ATI and nvidia start offering something similar as part of the GPU, they are in serious trouble.

      • sigher
      • 14 years ago

      Well they now said the retail price is 250, but you are right that it’s still a bundle of cash.
      It better be worth it and work as advertised else people are going to be very unhappy.

      • ludi
      • 14 years ago

      ATi and Nvidia have the advantage of established R&D centers, dominant control of the present market, a tiered sales structure, and a product that every computer needs. Also, even when an entirely new design is released, much of it is invariably assembled on a foundation of time-proven technology.

      With the PhysX card, Ageia is doing something similar to 3dfx breaking out the VoodooGraphics. They have one product to sell, it is an add-on to an existing, otherwise-fully-functional PC, it doesn't have much of a history to build upon, and at present it has a very limited range of uses.

      The pure silicon and PCB cost (likely higher due to the reduced volumes, incidentally) doesn't begin to cover what Ageia's real risks are in releasing the product, or what they might have to sell it for in order to continue the business.

    • PerfectCr
    • 14 years ago

    Excellent post!

    I, like flip-mode, am still fuzzy on what exactly the benefits of having a PPU are.

      • axeman
      • 14 years ago

      Especially when, a few years from now, most desktops will have at least one extra CPU core at their disposal.

        • sigher
        • 14 years ago

        Not really; since both Intel and AMD have switched to dual core, I expect that in a few years everything will be multithreaded and we'll be relying on all the cores available, with none of the 'spare' power that exists now because of single-threaded apps.

    • flip-mode
    • 14 years ago

    Nice post Damage. Heh, trying to push the FNT out of sight? He he.

    Anywho, I haven’t put a lot of thought into the PPU scene. If the tech can be fully implemented over the next couple of years, where we see FULLY interactive worlds, then this stuff is potentially badass. I’m still fuzzy on the mechanics of physics processing, what can be done on the GPU, what needs to go to the CPU, etcetera.

    Still, some questions:

    Why doesn’t the add-in card handle *all* of the physics instead of just effects?

    I’ve heard some forum talk of the PCI rather than PCIE implementation as a bad choice. Any thoughts? Does that make a difference?

    Any reason Ageia couldn't design a chip to reside directly on the video card?

    Have any game developers talked about tapping the second CPU core for physics? I know a lot of forum posts have mentioned it. If so, and with multicore only a matter of time, is this chip going to be stillborn?

      • Damage
      • 14 years ago

      y[

        • Shintai
        • 14 years ago

        The PCIe 2.5GHz vs PCI 33MHz question should be an easy one. Depending on the datapath latency, a 2.5GHz CPU, say, has to waste a lot fewer cycles waiting on the PCIe edition.

        The PCIe card can send in the same cycle (insert datapath issues here), whereas the PCI card would have to wait 76 CPU cycles before sending anything back (insert datapath issues yet again).

        So as long as it isn't one-way traffic (as GFX and sound basically are), and latency isn't as irrelevant as it is for networking, it's a massive problem that the PPU is so far away.

          • Damage
          • 14 years ago

          Have you examined the bandwidth and latency requirements of the PhysX chip in operation? Hmm. I think there are too many unknowns here still.

            • Shintai
            • 14 years ago

            It should be rather logical: if you want any interactive physics that the CPU is dependent on, then it will be a problem, unless you have a lot of free resources you can waste.

            • Damage
            • 14 years ago

            Not sure I understand what you’re saying. The CPU is dependent on the PPU, and…?

            I thought we were considering whether the use of PCI for the PhysX card was likely to be a problem. I then said I don’t think we know enough about how the PPU needs to communicate to the rest of the world to draw any strong conclusions. But you think we can, based on logic and… something about the CPU waiting for…. something?

            • Shintai
            • 14 years ago

            No, a lot of people already have their eyes on this issue. And it's easy to generate a lot of buzzwords and such using GFX improvements that aren't latency-dependent. But Ageia's only advantage over Havok would be the interactive part. And if they can't do that, they will depend on CPU code for the interactive part and buzzwords for the PPU card vs the extra GFX card, or whatever the solution there will be.

            If interactive physics is so massively complex that the CPU would waste thousands of cycles, sure, the PPU card would be nice. But you will never be able to code for it with that latency. There are simply too many things that need that information before they can be executed in a game. It's the realtime, interactive nature of it. Same reason dual core and such ain't really loved.

            • Damage
            • 14 years ago

            Hmm. I’d like to see some numbers before drawing conclusions on PCI vs. PCI-E for the PPU. As for the rest of it, I believe you’ve still lost me.

            • PerfectCr
            • 14 years ago

            Shintel strikes again!

            • crabjokeman
            • 14 years ago

            Ok, you don’t like the guy. We get it.

            • PerfectCr
            • 14 years ago

            CrabMan strikes again!

            • Saribro
            • 14 years ago

            At 100fps, as below, even your example CPU can waste 25000 cycles waiting for the PPU and still only drop to 99fps.

            Why are you in such a hurry to make up your mind on the PPU with so little data, anyway?

          • Saribro
          • 14 years ago

          Even at 100fps, the PCI bus would've done 300,000 clocks per frame. Additionally, no CPU has the PCI-E bus directly attached, so the data would still have to pass multiple clock boundaries (PCI-E -> chipset -> chipset-to-CPU bus -> CPU bus to core) and thus take a latency hit. Also, PCI-E traffic takes a latency hit in the serial<->parallel conversion and packetising parts of the interface. And on top of that, the PPU is not running at 2.5GHz, and the data doesn't pass through the PPU in 1 cycle. To finish, I didn't hear anyone complaining about latency with the 66MHz AGP clock.
          In conclusion, I find the clockspeed argument to be total bogus, because the pure clockspeed-related latency falls deeply into the margin of error. It does leave the bandwidth question, which I do find important, but there's absolutely no data on that yet, unfortunately. Also, I don't think there is any data on how much the whole physics-processing job can benefit from on-card memory, so other improvements may be added to the 128MB GDDR3 configuration the cards have now.
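
          (For concreteness, here is a quick back-of-the-envelope sketch of the cycle arithmetic being argued over, using the round figures from this thread rather than any measured PhysX data.)

          ```cpp
          #include <cstdio>

          // Rough cycle-budget arithmetic with the figures tossed around in this thread
          // (2.5GHz CPU, 33MHz PCI, 100fps). Illustrative only, not measured PhysX data.
          int main() {
              const double cpuHz = 2.5e9;   // hypothetical CPU clock
              const double pciHz = 33e6;    // PCI bus clock
              const double fps   = 100.0;

              double cpuCyclesPerFrame = cpuHz / fps;   // ~25,000,000 CPU cycles per frame
              double pciClocksPerFrame = pciHz / fps;   // ~330,000 PCI clocks per frame

              // If the CPU stalls for some number of cycles per frame waiting on the bus,
              // this is the fraction of the frame budget that the stall actually costs.
              double stallCycles     = 25000.0;
              double fractionOfFrame = stallCycles / cpuCyclesPerFrame;  // ~0.1% of the frame

              std::printf("CPU cycles per frame: %.0f\n", cpuCyclesPerFrame);
              std::printf("PCI clocks per frame: %.0f\n", pciClocksPerFrame);
              std::printf("A %.0f-cycle stall costs %.3f%% of the frame budget\n",
                          stallCycles, fractionOfFrame * 100.0);
              return 0;
          }
          ```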

            • Shintai
            • 14 years ago

            Read what I wrote about the datapath. AGP is not an issue and never has been. GFX is a one-way trip: the CPU sends information to the graphics card and doesn't want anything back that it depends on. All of the graphics card's results are dumped to the screen and therefore never need to be returned. And the latency there is for a human to notice, not a CPU.

            So, your AGP statement is bogus 😉

            If you want physics to be non-interactive, so that AI and user threads etc. don't have to be updated with it, sure.

        • Ryszard
        • 14 years ago

        Some developer/bring-up boards have both PCI and PCIe interfaces, where you just flip the board around. Not 100% sure the PCIe interface is usable on all mainboards though, given the early nature of those samples.

        I think they're up against bus latency on that interconnect, too, aggravating the problem of getting data onto and off of the board in a timely manner.

        Great post.

    • PRIME1
    • 14 years ago

    I really don’t want to buy a separate card for physics acceleration.

    I would much prefer an API that uses the GPU or CPU. Even if it required a dual core CPU or SLI that would be better than a $250 card that only works in a few games.

      • muntjac
      • 14 years ago

      I agree. Why not just write physics code for a second CPU core? They're definitely going to be more popular than a $300 add-on card that only works for a few games.

        • sigher
        • 14 years ago

        Their rationale for a separate card is that:
        a) a CPU isn't as good at matrix calculation, worse by several magnitudes
        b) it requires fast, dedicated, two-way RAM to store and alter the data.

        Until now that made sense to me, but now that we're hearing real-life results I'm starting to think that yes, a dual core could do it.
        Still, this first game isn't a good testbed for the capabilities, nor is their 'partly done' API going to show what is possible, I guess.

      • Beer Moon
      • 14 years ago

      By this logic, then why buy a separate card for graphics? You can just get 2 CPU cores and do software emulation! Problem solved!!

      Emulating a graphics card on your CPU is about as fast as having a graphics card, right?

      I agree that there’s nothing out there now to make you need a PPU, but by the time there are, you can bet an entry level PPU won’t cost you $300. More like $100. Maybe less.

    • Hattig
    • 14 years ago

    “They argue Ageia’s API is only used for eye candy-type particle effects and only on the game’s PPU-accelerated code path. ”

    Chicken and Egg. Who will buy a game that requires a PPU for gameplay until lots of people have PPUs?

    However I remember reading that Ageia’s API allowed for real-world physics, i.e., physics that actually affect the gameplay and world, whilst the Havok API only does eye candy effects. If this is the case, then the above quote is quite laughable.

    I still think that Ageia will be the S3 Virge of the physics world in the end, i.e., too early, won't be very big…

      • eloj
      • 14 years ago

      Of course either API allows for “real-world physics”. Eye-candy only or not is up to how they’re ./[
