Crysis 2 tessellation: too much of a good thing?

By now, if you follow these things, you probably know the sordid story of DirectX 11 support in Crysis 2. Developer Crytek, a PC favorite, decided to expand into consoles with this latest release, which caused PC gamers to fear that the system-punishing glory of Crytek’s prior, PC-only games might be watered down to fit the inferior hardware in the console market. Crytek assured its fans that no such thing would happen and, in a tale told countless times in recent years, proceeded to drop the ball dramatically. Upon release, Crysis 2 supported only DirectX 9, with very limited user adjustments, like so many other games cross-developed for the consoles. The promised DX11 support was nowhere to be found. Although the game’s visuals weren’t bad looking as released, they fell far short of fulfilling hopes that Crytek’s latest would be one of those rare games capable of taking full advantage of the processing power packed into a state-of-the-art PC. Instead, the game became known as another sad example of a dumbed-down console port.

Months passed, and rumors about a possible DirectX 11 update for the game rose and fell like the tide, with little official word from Crytek. Then, in late June, Crytek and publisher EA unloaded a massive, PC-specific update to Crysis 2 in three parts: a patch to version 1.9 of the game weighing in at 136MB, a 545MB patch adding DirectX 11 support, and a hulking 1.65GB archive containing high-res textures for the game. An awful lot of time had passed since the game’s release back in March, but the size and scope of the update sure felt like a good-faith effort at making things up to PC gamers.

With the DX11 update installed, Crysis 2 becomes one of the most visually striking and technically advanced video games in the world. The features list includes a host of techniques that represent the cutting edge in real-time graphics, including objects tessellated via displacement mapping, dynamically simulated and tessellated water, parallax occlusion mapping (where tessellation isn’t the best fit), shadow edges with variable softness, a variant of screen-space ambient occlusion that takes the direction of light into account, and real-time reflections.

The highest profile of those features is probably tessellation, which seems to be the signature new capability of DX11 in the minds of many. Tessellation allows the GPU to employ its vast computing power to transform the low-polygon models used in most games into much higher-detail representations, with a relatively minimal performance cost. Used well, tessellation promises to improve the look of real-time graphics in some pleasant and impactful ways, eliminating—at long last—the pointy heads on so many in-game characters and giving difficult-to-render objects like trees much more organic external structures.


Lost Planet 2 without tessellation


Lost Planet 2 with tessellation

One of the major benefits of DX11’s tessellation capability is its dynamic and programmable nature: the game developer can ramp up the polygon detail only where needed and scale back the number of polygons in places where they wouldn’t be perceived—say, in the interior of objects composed of flat surfaces or in objects situated farther from the camera. Such dynamic algorithms can maintain the illusion of complexity without overburdening the GPU’s geometry processing capacity.
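To make that idea concrete, here is a rough sketch of the kind of distance-based scaling we mean. In a real DX11 renderer, this math would live in the hull shader’s patch-constant function and feed the fixed-function tessellator; the C++ below is purely illustrative, and the names, distance thresholds, and factor cap are our own assumptions rather than anything Crytek ships.

    // Illustrative sketch only (not Crytek's code): scale a patch's DX11
    // tessellation factor with its distance from the camera, so nearby
    // surfaces get dense geometry while distant ones stay cheap.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    static float Distance(const Vec3& a, const Vec3& b) {
        const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // DX11 allows tessellation factors from 1 to 64; ramp ours down linearly
    // between a near and a far distance, with an arbitrary cap of 16.
    float TessFactorForPatch(const Vec3& patchCenter, const Vec3& cameraPos,
                             float nearDist = 5.0f, float farDist = 100.0f,
                             float maxFactor = 16.0f) {
        const float d = Distance(patchCenter, cameraPos);
        const float t = std::clamp((farDist - d) / (farDist - nearDist), 0.0f, 1.0f);
        return std::max(1.0f, t * maxFactor);  // factor 1 means "leave the patch alone"
    }

    int main() {
        const Vec3 camera{0.0f, 1.8f, 0.0f};
        const float distances[] = {2.0f, 20.0f, 80.0f, 200.0f};
        for (float z : distances) {
            std::printf("patch at z=%5.1f -> tessellation factor %.1f\n",
                        z, TessFactorForPatch({0.0f, 0.0f, z}, camera));
        }
        return 0;
    }

The point is simply that the factor can fall all the way back to 1 for distant or flat patches, so geometry that won’t be perceived costs the GPU almost nothing extra.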

Unfortunately, we have so far seen few examples of tessellation used well in a video game. That’s true in part because of the aforementioned scourge of console-itis (though the Xbox 360 does have limited tessellation hardware) and in part because game developers must create higher-detail versions of their underlying 3D models in order to insert them into games—not exactly a cost-free proposition.

With its DX11 update, Crysis 2 had the potential to be one of the first games to offer truly appreciable increases in image quality via tessellation. A busy summer has kept us from spending as much time with the DX11 update as we’d like, but we saw some intriguing things in Damien Triolet’s coverage (in French) over at Hardware.fr. (English translation here.) We won’t duplicate all of his hard work, but we do want to take a look at one aspect of it: a breakdown of Crysis 2’s use of tessellation using a developer tool from AMD called GPU PerfStudio.

GPU PerfStudio is a freely available tool for Radeon graphics cards, and its integrated debugger can analyze individual frames in a game to see where GPU time is going. The work needed to construct each frame can be broken down by individual draw calls to the DirectX 11 3D API, and a visual timeline across the bottom of the screen shows which of those draw calls are taking longest to complete. PerfStudio will even let you take a look at the DX11 shaders being used in each draw call, to see exactly how the developer has structured his code.

When we fired up Crysis 2 in its DirectX 11 “ultra” quality mode, we saw that some obvious peaks were related to the creation of tessellated objects. Not only could we see the hull shaders used in the first stage of tessellation—proof that tessellation was in use—but we were also able to see the polygon meshes output by the tessellation process. We noticed some of the same things Damien pointed out, along with a few new ones, including one of the true wonders of this game’s virtual world.

The world’s greatest virtual concrete slab

Yes, we’re talking about a concrete barrier of the sort that you’ll find lining highways all across the country at this time of the year. Also known as Jersey barriers, these simple, stark structures are strewn liberally throughout the mid-apocalyptic New York cityscape in Crysis 2, providing cover and fortifying certain areas. You might not know it, and you almost surely didn’t expect it, but these flat concrete blocks are one of the major targets for enhancement in the DirectX 11 upgrade to the game.

Jersey barrier in DirectX 9

Enhanced, bionic Jersey barrier in DirectX 11

Here’s a look at the DX9 and DX11 versions of the Jersey barrier. Rather than resize these first few screen shots, I’ve cropped them to provide you with a pixel-for-pixel capture of the game’s imagery.

You can see that there’s not much visual difference between the two. The biggest change is the little “handles” atop the slabs. In the DX9 version, they’re just flat textures. In DX11, they appear to be real structures protruding from the top of the barrier. I think there may be higher-quality textures in use in DX11, but some of the difference may simply be because I haven’t duplicated the camera position precisely between the two shots. Whatever the case, the visual improvement when moving from DX9 to DX11 is subtle at best.

However, in the DX11 “ultra” mode, the handling of this particular object takes up a pretty good chunk of GPU time during the creation of this frame. Why? Well, have a look at the output of one of the most time-intensive draw calls:

The tessellated polygon mesh created in the DX11 version of Crysis 2

Yep, this flat, hard-edged slab is one of the most highly tessellated objects in the scene. The polygons closest to the camera cover just a few pixels each. Farther from the camera, the wireframe mesh becomes a solid block of red; there, we’re probably looking at more than one polygon per pixel. Those are some very small polygons indeed.

Let’s move around and have another look at the same barrier from the side, so we can get a cleaner look at its geometry.

These barriers are strewn all over the streets. Let’s take a look at another one from a different part of the game.

Yes, folks, this is some truly inspiring geometric detail, well beyond what one might expect to see in an object that could easily be constructed from a few hundred polygons. This model may well be the most complex representation of a concrete traffic barrier ever used in any video game, movie, or any other computer graphics-related enterprise.

The question is: Why?

Why did Crytek decide to tessellate the heck out of this object that has no apparent need for it?

Yes, there are some rounded corners that require a little bit of polygon detail, but recall that the DX9 version of the same object without any tessellation at all appears to have the exact same contours. The only difference is those little metal “handles” along the top surface. Yet the flat interior surfaces of this concrete slab, which could be represented with just a handful of large triangles, are instead subdivided into thousands of tiny polygons.

Venice in Gotham

Another of Crysis 2’s DX11-exclusive features is, as we’ve mentioned, dynamically simulated and tessellated water.

Gazing out from the shoreline, that simulated water looks quite nice, and the waves roll and flow in realistic fashion.

GPU PerfStudio gives us a look at the tessellated polygon mesh for the water, which is quite complex. It’s hard to say for sure, but the tessellation routine doesn’t appear to be scaling back the number of polygons dynamically based on their distance from the camera. As a result, the mesh dissolves into a solid purple band of pixels near the horizon. Still, the complexity is used impressively; the water is some of the most convincing we’ve seen in any game.

From the same basic vantage point, we can whirl around to take a look at the terra firma of Manhattan. In this frame, there’s no water at all, only some federally mandated crates (this is an FPS game), a park, trees, and buildings. Yet when we analyze this frame in the debugger, we see a relatively large GPU usage spike for a certain draw call, just as we saw for the coastline scene above. Here is its output:

That’s right. The tessellated water mesh remains in the scene, apparently ebbing and flowing beneath the land throughout, even though it’s not visible. The GPU is doing the work of creating the mesh, despite the fact that the water will be completely occluded by other objects in the final, rendered frame. That’s true here, and we’ve found that it’s also the case in other outdoor areas of the game with a coastline nearby.

Obviously, that’s quite a bit of needless GPU geometry processing load. We’d have expected the game engine to include a simple optimization that would set a boundary for the water at or near the coastline, so the GPU isn’t doing this tessellation work unnecessarily.
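To be clear about the sort of check we have in mind, it doesn’t need to be elaborate. Here’s a minimal sketch, under our own assumptions that the engine tiles the ocean into patches and knows the lowest terrain height above each one; none of this reflects how CryEngine actually organizes its water.

    // A minimal sketch of the coastline bound we'd have expected (our
    // assumption, not CryEngine code): tile the ocean into patches and skip
    // any patch whose water surface sits entirely below the terrain above it.
    #include <cstdio>
    #include <vector>

    struct WaterPatch {
        float minTerrainHeight;  // lowest terrain height anywhere over this patch
        float surfaceHeight;     // sea level for this patch
    };

    // The water can only show through where the terrain dips below the surface;
    // everywhere else, tessellating the patch is pure wasted work.
    static bool ShouldTessellate(const WaterPatch& p) {
        return p.minTerrainHeight < p.surfaceHeight;
    }

    int main() {
        // Two inland patches (terrain well above sea level) and one coastal patch.
        const std::vector<WaterPatch> patches = {
            {12.0f, 0.0f}, {7.5f, 0.0f}, {-3.0f, 0.0f}
        };
        int submitted = 0;
        for (const WaterPatch& p : patches) {
            if (ShouldTessellate(p)) {
                ++submitted;  // a real engine would issue the water draw call here
            }
        }
        std::printf("submitted %d of %zu water patches\n", submitted, patches.size());
        return 0;
    }

Even a coarse test like this would let the engine skip the water draw calls for patches buried under Manhattan while leaving the visible coastline untouched.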

A happier example

Not all of the additional polygons in the DX11 version of Crysis 2 are wasted—far from it. For instance, we were looking for tessellated objects in the scene above when we saw this:

Those bricks along the side of the window are incredibly detailed and are almost surely the result of tessellation with displacement mapping. In the DX9 version of the game, those bricks don’t have individual contours or variation like that. Make no mistake: this is a lot of polygons, again approaching one poly per pixel in places. Still, this is an addition to the game that genuinely improves image quality.
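For reference, displacement mapping is conceptually simple: each vertex produced by the tessellator is pushed out along its surface normal by a height read from a texture, which is why these bricks gain real silhouettes instead of painted-on shading. Here’s a minimal sketch of that step; the names and the placeholder height function are ours, standing in for the game’s actual displacement textures and domain shader.

    // Illustrative sketch of displacement mapping (not the game's shader code;
    // in the DX11 pipeline this step runs per tessellated vertex in the domain shader).
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // Hypothetical height lookup: a real renderer samples a displacement texture
    // at the vertex's (u, v) coordinates. This placeholder just returns a pattern.
    static float SampleHeight(float u, float v) {
        return 0.5f * (u + v);
    }

    // Push the vertex out along its surface normal by the sampled height,
    // scaled by the material's displacement strength.
    static Vec3 Displace(const Vec3& pos, const Vec3& normal,
                         float u, float v, float scale) {
        const float h = SampleHeight(u, v) * scale;
        return {pos.x + normal.x * h, pos.y + normal.y * h, pos.z + normal.z * h};
    }

    int main() {
        const Vec3 p = Displace({0.0f, 0.0f, 0.0f}, {0.0f, 1.0f, 0.0f},
                                0.25f, 0.75f, 0.1f);
        std::printf("displaced vertex: (%.3f, %.3f, %.3f)\n", p.x, p.y, p.z);
        return 0;
    }

Multiply that by thousands of tessellated vertices, and you get the lumpy, individually contoured bricks in the shot above.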

Unfortunately, because there’s a little patch of water out on the horizon, this entire area has a tessellated water mesh beneath it.

One complex scene

DirectX 11 tessellation and some of the other effects have been added into Crysis 2 somewhat haphazardly after the fact, and some scenes have little or no such “enhanced” content. Others, though, are chock full of it. We’ve seen a few examples of specific tessellated objects; now let’s take a closer look at a frame that ties a bunch of these elements together.

We’re back in the first level of the game, having just emerged into the city for the first time. The building we were in is a tangled wreck of debris. Again, these are some pretty nice visuals for a modern game. The splintered wood and rubble are complex enough to look relatively natural and realistic, for the most part.

The wood on the floor isn’t just a flat texture mapped to a flat surface. Instead, the boards are angled and warped—and tessellated heavily. Some of the boards really are just long stretches of flat surfaces, though, and even those are composed of many thousands of tiny polygons.

The displacement-mapped bricks that we saw in the prior scene are back here, covering a partially destroyed wall. Once again, they look very nice, and the polygon counts are huge.

The splintered wood supports in the scene are also heavily tessellated, so much so that the mesh begins to look like a solid surface.

A closer look at some of that tessellated wood reveals some straight, rectangular planks. The only really apparent complexity is at the splintered ends of one plank, but even those spikes aren’t terribly jagged. They’re just a handful of sharp points.

Amazingly, the flat window frame, the splintered plank, and the interior wall are all made up of incredibly dense polygon meshes.

Pulling back to our full scene once again, we note that beneath it all lies a blanket of wavy water, completely obscured but still present.

So what do we make of this?

Crytek’s decision to deploy gratuitous amounts of tessellation in places where it doesn’t make sense is frustrating, because they’re essentially wasting GPU power—and they’re doing so in a high-profile game that we’d hoped would be a killer showcase for the benefits of DirectX 11. Now, don’t get me wrong. Crysis 2 still looks great and, in some ways at least, is still something of a showcase for both DX11 and the capabilities of today’s high-end PCs. Some parts of the DX11 upgrade, such as higher-res textures and those displacement-mapped brick walls, appreciably improve the game’s visuals. But the strange inefficiencies create problems. Why are largely flat surfaces, such as that Jersey barrier, subdivided into so many thousands of polygons, with no apparent visual benefit? Why does tessellated water roil constantly beneath the dry streets of the city, invisible to all?

One potential answer is developer laziness or lack of time. We already know the history here, with the delay of the DX11 upgrade and the half-baked nature of the initial PC release of this game. We’ve heard whispers that pressure from the game’s publisher, EA, forced Crytek to release this game before the PC version was truly ready. If true, we could easily see the time and budget left to add PC-exclusive DX11 features after the fact being rather limited.

There is another possible explanation. Let’s connect the dots on that one. As you may know, the two major GPU vendors tend to identify the most promising upcoming PC games and partner up with the publishers and developers of those games in various ways, including offering engineering support and striking co-marketing agreements. As a very high-profile title, Crysis 2 has gotten lots of support from Nvidia in various forms. In and of itself, such support is generally a good thing for PC gaming. In fact, we doubt the DX11 patch for this game would even exist without Nvidia’s urging. We know for a fact that folks at Nvidia were disappointed about how the initial Crysis 2 release played out, just as many PC gamers were. The trouble comes when, as sometimes happens, the game developer and GPU maker conspire to add a little special sauce to a game in a way that doesn’t benefit the larger PC gaming community. There is precedent for this sort of thing in the DX11 era. Both the Unigine Heaven demo and Tom Clancy’s HAWX 2 cranked up the polygon counts in questionable ways that seemed to inflate the geometry processing load without providing a proportionate increase in visual quality.

Unnecessary geometric detail slows down all GPUs, of course, but it just so happens to have a much larger effect on DX11-capable AMD Radeons than it does on DX11-capable Nvidia GeForces. The Fermi architecture underlying all DX11-class GeForce GPUs dedicates more attention (and transistors) to achieving high geometry processing throughput than the competing Radeon GPU architectures. We’ve seen the effect quite clearly in synthetic tessellation benchmarks. Few games have shown a similar effect, simply because they don’t push enough polygons to strain the Radeons’ geometry processing rates. However, with all of its geometric detail, the DX11 upgraded version of Crysis 2 now manages to push that envelope. The guys at Hardware.fr found that enabling tessellation dropped the frame rates on recent Radeons by 31-38%. The competing GeForces only suffered slowdowns of 17-21%.

Radeon owners do have some recourse, thanks to the slider in newer Catalyst drivers that allows the user to cap the tessellation factor used by games. Damien advises users to choose a limit of 16 or 32, well below the peak of 64.

As a publication that reviews GPUs, we have some recourse, as well. One of our options is to cap the tessellation factor on Radeon cards in future testing. Another is simply to skip Crysis 2 and focus on testing other games. Yet another is to exclude Crysis 2 results from our overall calculation of performance for our value scatter plots, as we’ve done with HAWX 2 in the past. We haven’t decided exactly what we’ll do going forward, and we may take things on a case-by-case basis. Whatever we choose, though, we’ll be sure to point folks to this little article as we present our results, so they can understand why Crysis 2 may not be the most reliable indicator of comparative GPU performance.

Comments closed
    • luisnhamue
    • 8 years ago

    I just built my new gaming rig, which includes a Radeon HD 6950 2GB, and I was going to play Crysis 2. After seeing this Tech Report investigation, I’m going to play anyway, but I will cap the tessellation levels as recommended.
    For the cheating that Crytek did, I feel shame.

    • Kaleid
    • 8 years ago

    Video:
    http://www.youtube.com/watch?v=IYL07c74Jr4

      • l33t-g4m3r
      • 8 years ago

      Watching it in motion has a lot more impact than screenshots.

    • Novum
    • 8 years ago

    There is a good reason why the water is always “visible”.

    It’s derived from the water rendering algorithm of Crysis 1/CryEngine 2. It’s a camera-aligned mesh that is rendered after all opaque geometry, and therefore invisible parts of it are very quickly rejected by the GPU’s Hier-Z mechanism. This is very efficient without tessellation, because the geometry load is minimal without it. So always rendering it was actually the fastest way.

    With tessellation it’s another story, because the geometry load actually is quite substantial with all the domain/vertex/hull shader work to do. But doing the visibility calculations on the CPU instead or using occlusion queries wouldn’t have been trivial.

    As with the other D3D11 stuff it seems to have been implemented in a rush or as an afterthought.

    Nothing wrong with that from my perspective, because it’s a free add on. I also don’t think there is some NVIDIA conspiracy behind it, but you never know 😉

      • l33t-g4m3r
      • 8 years ago

      Between this and what wtfbbqlol says, it does seem that Crysis 2’s tessellation was somewhat of a rush job. The part that seems malicious is the concrete slabs. Still, you’d have thought Crytek would have known about the water and implemented a LOD or something. Crytek, kings of bloat since Crysis 1.

        • Novum
        • 8 years ago

        There is some kind of LOD. Otherwise it would turn all pink much earlier.

        Also I do still believe that Crysis is just heavy on computations and was actually pretty well optimized.

        For subjective “linear” better graphics you need exponential more computing power. That’s the problem.

    • wtfbbqlol
    • 8 years ago

    I wonder if tessellation is just tricky to apply optimally and these ridiculous amounts of polygons are a byproduct of that.

    I did a quick search for this and found a developer’s blog that highlights his beef with DX11’s inherent (unless very smartly controlled) tessellation wastefulness:

    http://sebastiansylvan.wordpress.com/2010/04/18/the-problem-with-tessellation-in-directx-11/

    edit: maybe someone who knows DX11 tessellation programming could comment. I don’t know enough about these things to explain.

    • davidm71
    • 8 years ago

    Who would have thought the business of gfx tech would be so cut-throat?! So Nvidia conspired with Crytek to develop a more tessellated game that favors Nvidia’s more powerful 500 series cards over ATI 6000 series cards in head-to-head benchmarks! I almost don’t have a problem with that if you’re using Crysis 2 to sort out and benchmark the best of the best, which just happens to be Nvidia. Sorry ATI, but that’s just how that horse race played out, and Nvidia’s ahead by more than a nose here. The only problem I see here is that it also hurts people with weaker Nvidia cards and shows Crytek doesn’t give squat about well-optimized code!

    Personally I really don’t care as I have best of both worlds with two 480s in Sli and two 6970s in crossfire. Just wish that damn dual gpu flicker in Crysis would go away!

      • Bensam123
      • 8 years ago

      Mispost

    • spigzone
    • 8 years ago

    The smoking gun is that all the other added effects were applied with some precision and attention to detail, to extract the greatest graphics-quality improvement with minimum GPU loading. ONLY tessellation was applied to maximize GPU loading without significant graphics-quality gain.

    I would not be at all surprised if:

    1. Nvidia programmers were directly involved with the tesselation coding.

    2. Retail salespersons at Best Buy et al. are being ‘guided’ to inquire of prospective GPU buyers whether Crytek games in general, and Crysis 2 in particular, were games they had or intended to buy, and to strongly push Nvidia GPUs in general, and high-end Nvidia GPUs in particular, if Crysis 2 was going to be played, based on ‘objective’ performance scores.

    The crux of the problem with this TWIMTBP approach is it WORSENS the playability for AMD gamers instead of being NEUTRAL.

    It’s one thing for Nvidia gamers to have a better gaming experience because Nvidia added Nvidia specific optimizations that didn’t affect AMD gameplay, it’s another entirely for AMD players to have a worse gaming experience because Nvidia’s optimizations made the AMD gameplay worse than if those optimizations had not been implemented.

    That meets the literal definition of sabotage.

      • clone
      • 8 years ago

      in this case TWIMTBP worsens the gaming experience for everyone and not just AMD users.

      AMD users see a 30% drop because of TWIMTBP “optimisations” while Nvidia users also see a 20% drop in performance.

      everyone loses but Nvidia users lose a little less which is all good for Nvidia apparently.

    • Fighterpilot
    • 8 years ago

    Great, a fully occluded ocean scene with enough tessellation to exceed the geometry processing power of AMD DX11 cards…..but well within that of the NVidia competition.

    Good game NVidia.

    Hopefully BF3 will include high speed # hashing in order to reload or enter/exit a vehicle…..

    • swaaye
    • 8 years ago

    Possibilities –

    1) The guys who worked on this DX11 patch didn’t really know what they were doing. Seems unlikely because the patch does function and has some fancy stuff in it.
    2) They just wanted to put something out quick to appease the bitching around the ‘net and tessellated stupid stuff as a result.
    3) NVIDIA paid Crytek/EA to make sure tessellation was overboard because it will hurt ATI much more with their lower geometry throughput. Nice way to tilt the benchies.
    4) Crytek hates AMD and wanted to punish them for some evil reason. Maybe AMD/ATI devrel made them angry. I know some developers definitely prefer NV hardware.

    I’m leaning towards #2, myself.

    • beck2448
    • 8 years ago

    Nvidia for the pros, whining fanbois for AMD.

    • willyolio
    • 8 years ago

    the scene with the splintered wood was the most obvious “tesselation for the sake of tesselation” to me. Isn’t this exactly what tesselation is made for? to add extra polygons and splinters when you zoom in close?

    nope. 100,000 triangles added to make an incredibly smooth cylinder while the broken wood still looks like bart simpson’s hairdo.

    • NikK
    • 8 years ago

    This guy even deletes comments, amazing…..

      • Cyril
      • 8 years ago

      Do you see a post missing? Our comment system doesn’t let us make comment posts disappear; when a post is actually deleted, it remains in the comment hierarchy and its contents are simply replaced with the words “post deleted.” I don’t see anything like that in this discussion thread.

      • l33t-g4m3r
      • 8 years ago

      Cyril’s right. However, it sounds like you feel guilty about your posting, knowing you’re doing something that should get your posts deleted, therefore blaming a missing post glitch on it being deleted. Perhaps if you weren’t shilling for nvidia, you’d have a clear conscience. Just my 2c.

    • The Dark One
    • 8 years ago

    I think Carmack’s take on current-gen tessellation was pretty interesting:

    http://www.youtube.com/watch?v=hapCuhAs1nA#t=21m50s

    • NikK
    • 8 years ago

    This is incredibly silly even to read…

    On one hand we have people SCREAMING for more detail and better implementation of DX11 and Tessellation and when a company Finally does that some people start whining…

    That aside it’s very “obvious” that certain websites and game titles favor NVIDIA and others AMD/ATI….

    Half Life 1&2 favored ATI solutions, so did pretty much every game Valve ever released….The same applies with many titles by Codemasters…..The same applies to many other titles in the market…..3DMARK used to support PhysX but as soon as NVIDIA bought the entire company behind it and ATI pushed they discontinued support…..We also saw how AMD/ATI have reduced the IQ of their drivers when they launched series 6xxx in order to improve their performance….Even more recently AMD/ATI stopped supporting Sysmark 2012 because it didn’t favor their products…..

    Strangely enough none of the above is mentioned in this discovery….Yet we hear about the bad HAWX II and the “conspiracy” over at Crytek….

    So from now on we will say to every developer out there to use worse graphics just so cards made by AMD/ATI can compete with the ones from NVIDIA ? Are you people for real ?

      • Bensam123
      • 8 years ago

      You didn’t even read the article or what the article highlighted that people are poopooing on.

      • Chrispy_
      • 8 years ago

      “On one hand we have people SCREAMING for more detail”

      They’ve at least quadrupled the polygon count per frame, and the only visible improvement is some lumps on top of a concrete barrier and some lumpier bricks in the occasional window.

      We want more detail, not more polygons. Once upon a time, there was a relationship between the two…

      • eitje
      • 8 years ago

      Welcome to Tech Report!

      I notice that you registered just prior to posting this, and I wanted to make sure you felt welcome!

      • sschaem
      • 8 years ago

      We are not talking about optimization focus on one platform, but about sabotage.

      It’s one thing to leverage AMD or nVidia dev relations to your advantage; it’s another to release poison to end users.

      The ‘charity’ DX11 pack from Crytek is a joke. Crytek’s reputation on the PC with Crysis 2 went down the drain.

      • ElMoIsEviL
      • 8 years ago

      Your post was incredibly silly to read fyi.

      People want a “better” implementation of DX11 and thus tessellation. People do not want such features used to unreasonable/irrational degrees just to get a one-up on the competition, as is the case here.

      As for these “Titles” favoring AMD (ATi) or nVIDIA… well on the surface it may look like an equal playing field but in reality it is not the same. You have to look at the details in order to understand the differences… aka the means to an end.

      Half Life 2 (Source) ran better on ATi hardware because ATi followed the Microsoft DX9 specifications down to a T whereas nVIDIA had gone down its own path in retaliation to Microsoft (GeForce FX 5800 Ultra days).

      The Codemasters games are just games built to use many DX9/10/11 features. They are not compiled for AMD or any AMD specific features. Codemaster titles generally use many simple shaders rather than few complex shaders which tends to favor AMDs architecture. This has changed recently with AMDs switch from 5D to 4D. So no they are not in cahoots.

      3DMark discontinued support for PhysX because it would have given the nVIDIA cards an unrealistically larger score (not many games use PhysX so the scores would not be reflective of gaming in general). Yes rather logical reasoning on Futuremarks part.

      I have not witnessed a degradation of IQ and have not seen any websites highlighting this. Even if it were true… to use that as some sort of attack on AMD is hypocritical as nVIDIA ALWAYS reduce IQ and have since the days of the Detonator 2 drivers.

      AMD stopped supporting Sysmark 2012 because they reckoned it didn’t fairly represent their products. I can’t speak for their reasoning, and you appear to simply be grasping at straws, trying to find any dirt you can on AMD in order to further your own agenda and justify your obviously biased and irrational world view.

      And the argument has nothing to do with “worse graphics”… can you not read? The reasoning behind this entire article is as follows (I’ll repeat it rather slowly for you simple minded folks)… there appears to be an unreasonably large amount of tessellation used in certain titles that result in a performance hit while not improving image quality one bit. That is to say that there appears to be a deliberate attempt at increasing the usage of tessellation in order to favor nVIDIA products while resulting in no net benefit image quality wise.

      Got it?

    • clone
    • 8 years ago

    very ballsy article, impressed how you managed to keep it honest, the summation potent yet balanced and politically correct.

    incredible respect for the site.

    thank you.

    • j1o2h3n4
    • 8 years ago

    Without Scott’s hard work we wouldn’t know what went wrong behind the scenes; needless to say, it’s an article worth forming an opinion around. I think Scott was very generous to credit Damien Triolet and his intelligent use of GPU PerfStudio on Crysis, as this article has so many more good, elaborate details. Incredible exploit, I love it.

    By the way, I’m ignorant: “GPU PerfStudio” is from AMD, and I wonder why Nvidia didn’t have similar software?

    • Rikki-Tikki-Tavi
    • 8 years ago

    I’m a little bit of an Nvidia and Crytek fanboy, but something has gone seriously awry here. Yet, it’s a bit early to jump to the conclusion that there is malicious intent here. It smells of the “fundamental attribution error”, a well known effect in psychology.

    http://en.wikipedia.org/wiki/Fundamental_attribution_error

    Those of you who have a job, ask yourselves: have you screwed up this badly or worse at work at some point in time? With the boss breathing down your neck, asking you to finish it quickly so you can do something that actually brings in money for the company?

    That is the essence of the fundamental attribution error: we are reluctant to see the pressured newly-hired code monkey charged with way more than he can handle (situational) and are very happy to see the evil corporate conspiracy (dispositional).

    Not that I’m saying there couldn’t be such a conspiracy, I’m just saying I, for my part, am still withholding judgment on this one. And any way you turn this, it reflects badly on Crytek.

      • Kaleid
      • 8 years ago

      Don’t need a conspiracy to know that they are after more money. That’s how most businesses operate.

        • Rikki-Tikki-Tavi
        • 8 years ago

        If two companies decide to make a secret deal that is good for them and bad for their customers, that is a conspiracy. It’s not how most businesses operate, because if you suggest something like this to a company, there is always the risk that someone on the other side will go to the press.

          • Kaleid
          • 8 years ago

          Along with clergymen, businessmen are amongst the least honest ones. Some things are kept quiet about because everyone knows how things operate. Mutual silence benefits both sides. Watch Inside Job on the 2008 collapse, for instance; many knew exactly what was going on.

            • Rikki-Tikki-Tavi
            • 8 years ago

            Different companies. People at financial institutions got there because they want to make as much money as possible. Most game makers are driven by their desire to make the best possible product. That would be a lot of motivation to blow the whistle if they found out that their company was doing the opposite.

      • cegras
      • 8 years ago

      “We are reluctant to see the pressured newly-hired code monkey charged with way more than he can handle”

      So his solution to this stress was to just over-tessellate flat objects?

        • Rikki-Tikki-Tavi
        • 8 years ago

        Yes. He screwed up. It happens, and it’s not proof of a secret deal.
        Maybe they had a spreadsheet that detailed what level of tessellation they were going to apply, and someone skipped a row or something else. Maybe the spreadsheet was right, but the indexing in the code is off. That one happened to me personally before. More than once.

        It all comes down to how Crytek reacts now: if they correct it, then we can pretty conclusively say they just screwed up. If they don’t, they are probably evil.

          • cegras
          • 8 years ago

            Occam’s razor. Nvidia funded Crytek heavily, they optimized for Nvidia GPUs, etc.

            • l33t-g4m3r
            • 8 years ago

            Exactly. The easiest way to tell if a game might be compromised is if you see the TWIMTBP/nvidia logo upon starting the game.

            • Rikki-Tikki-Tavi
            • 8 years ago

            Occam’s razor? You don’t even know what that means. It’s screw-up versus corporate conspiracy. Occam’s razor cuts the conspiracy to very fine shreds.

            • cegras
            • 8 years ago

            Nvidia pays Crytek lots of money to optimize. Programmers who work at Crytek are most likely not retarded enough to tesselate flat objects more than 1 polygon per pixel.

    • PixelArmy
    • 8 years ago

    Disclaimer: nVidiot here, so mod away…

    How many ultra-smooth sidewalks or roads do you see? Sure, your concrete or whatever is a basic flat shape, but there’s lots of little variations/bumps/texture everywhere. The author accepts this as the reason why there’s crazy tessellation for the bricks, but fails to extend this to the jersey wall, wood planks, etc.

    I am not a graphics programmer... Anyone have technical details on how the water is implemented? Is it one mesh? How’s the LOD work? My guess is if it’s one object and their LOD is distance based, you’re getting the higher LOD for the entire object which just so happens to span way back to the horizon. Similarly, the underground water: it’s added to the scene geometry or it isn’t.

    Want a few flat triangles (and presumably bump maps?), well that’s DX9 console stuff... You know the stuff everyone complains about... But add DX11 stuff and everyone pines for the DX9 stuff. This is why we can’t have nice things.

      • Aphasia
      • 8 years ago

      “How many ultra-smooth sidewalks or roads do you see? Sure, your concrete or whatever is a basic flat shape, but there’s lots of little variations/bumps/texture everywhere. The author accepts this as the reason why there’s crazy tessellation for the bricks, but fails to extend this to the jersey wall, wood planks, etc.”

      If they actually had used the triangles for deformation, that would’ve been true, but the point was that they didn’t. Take a second look at the wireframe images. In the DX11 patch you get geometry deformation effects according to textures, to actually get depth in the road tracks, etc. But with the tessellation, there isn’t any geometry deformation. It’s still a flat surface with virtual depth done in the texture. Look at the extension at the side of the concrete barriers; same with the planks and the windowsill. Except with the bricks, where you actually add deformation with the tessellation.

      So whether it’s one triangle or a thousand, it’s still a flat surface. If they had used the extra triangles for true detail, that would have been a whole other thing.

    • aug
    • 8 years ago

    Great article. I was certainly disappointed once the trio of patches was released. As a 5870 CF user, I was hoping for a HF or profile update from AMD. While some were released for general performance, I still cannot enjoy the Ultra setting on HW that should normally chew through any game. I have another PC running two 470s in SLI, and the performance difference between the two is laughable. I will try adjusting the tess. setting in the CCC, though I’m pretty much resigned to running on the extreme preset.

    • l33t-g4m3r
    • 8 years ago

    Tessellation is the new Physx. Physx tanked because nobody with an AMD card could use it, it was pointless, and it was occasionally too slow for nvidia users. Plus, nvidia was hyping it only when they had no dx11 cards out. Tessellation, on the other hand is something that is standard for dx11, so nvidia can directly manipulate benchmarks by sponsoring dirty tricks like this. However, amd now includes a tessellation option in their control panel, so users won’t be greatly affected. The real problem will be benchmarkers not using the options.

    Another side effect of this is blowback to Nvidia users. We don’t have any tessellation control, and those with slower cards like the 460 will be unable to use tessellation because of slow performance. This is a lose-lose scenario, as the only side that benefits is Nvidia, from 580/SLI sales. It’s planned obsolescence.

      • Kaleid
      • 8 years ago

      Tessellation isn’t like PhysX, which fails not only because ATI users cannot use it but also because it needs two graphics cards. That means extra problems with drivers, heat, power, etc.

      Tessellation isn’t important because consoles do not use it.

        • l33t-g4m3r
        • 8 years ago

        Tessellation is part of what has driven people to upgrade from dx10, so I’d say it is important enough. Whether or not consoles use it is irrelevant, but they probably could implement a downscaled version if they wanted.

        It is totally being used like Physx right now. Have you not read the article? How many polygons are in that flat concrete slab?

        • BobbinThreadbare
        • 8 years ago

        I don’t think Physx requires 2 cards anymore

          • Kaleid
          • 8 years ago

          Hasn’t for a long time, but running graphics and PhysX on the same GPU is, from what I’ve seen, very slow.

            • l33t-g4m3r
            • 8 years ago

            Correct, or at least until Fermi came out that was true. Physx can still slow down a single card, but the older titles are now playable. Just don’t use the overkill settings.

      • Bensam123
      • 8 years ago

      PhysX was owned by an independent company. It ‘tanked’ before Nvidia ever picked it up, let alone before it could be accelerated on an Nvidia card. Ageia sold their own PPU… it has nothing to do with what you’re talking about, and even if you didn’t have a PPU, it could run in full software mode on all the cores you had available.

        • l33t-g4m3r
        • 8 years ago

        Wrong. PhysX did not initially run on multiple cores or have SSE optimization. It was designed to run slow on the CPU, and there were several articles about it, quite similar to this one about tessellation.
        http://www.realworldtech.com/page.cfm?ArticleID=RWT070510142143&p=5

        Also, PhysX certainly does have a connection with tessellation, in the sense of Nvidia sabotaging game performance on AMD’s cards. Nvidia is constantly trying to find ways to screw AMD, be it Batman: AA, Assassin’s Creed, HAWX, or Crysis 2. The only thing you were right about is that Ageia has nothing to do with anything. It’s defunct.

          • Stargazer
          • 8 years ago

          The article you link to discusses Nvidia-managed PhysX. When Bensam123 says “it could run in full software mode on all the cores you had available” I believe that he’s talking about the Ageia-managed PhysX.

            • DarkUltra
            • 8 years ago

            In the nvidia control panel you can make PhysX execute on the CPU only instead of the GPU.

            • l33t-g4m3r
            • 8 years ago

            You’re not telling us anything we don’t already know. The issue is that Physx has been deliberately crippled on the cpu. Using the cpu gets you under 10 fps in a game, even if there are no visible effects on screen. Meanwhile, physx runs fine on consoles and smart phones. It’s a joke.

            • l33t-g4m3r
            • 8 years ago

            He’s not talking about anything because that was never possible even with Ageia. It always was 1 cpu thread with no optimizations. Nvidia ported the PPU code to GPU, and the CPU code stayed the same. Only recently has nvidia started to talk about allowing cpu optimizations, and I have to wonder how serious they really are. If you look at what else runs Physx: consoles and smartphones, then it’s fairly obvious nvidia is playing dirty.

            You can’t use Ageia’s software with Nvidia games anyway; it’s incompatible, and the games force-update PhysX. Gotta love installing bloat to play your games. Never had to do this with Havok. I know a little bit about the Ageia PPU because I bought one off eBay. It doesn’t support much of anything, and doesn’t work at all with the new versions. Nvidia stopped supporting it, and the older versions that did disabled acceleration when detecting an ATI card. You all are grasping for imaginary, nonexistent straws. PhysX was a scam from day 1; it never was legit.

            Even if physics was a huge drain on the CPU, which it’s not under normal circumstances, PhysX was created back when people were using single- and dual-core CPUs. Now we have 6+ cores and hyperthreading. I think that’s more than enough horsepower to obsolete physics acceleration. This is exactly what happened to sound cards, and video cards may not even be immune if CPUs start taking them on.

            • Stargazer
            • 8 years ago

            [quote<]He's not talking about anything because that was never possible even with Ageia. It always was 1 cpu thread with no optimizations.[/quote<] That seems to depend on who you ask. Around the time of the article you linked to, there were some claims that software PhysX *used to be* multithreaded, and that Nvidia removed this capability. Nvidia supposedly then responded: [quote<]I am writing here to address it directly and call it for what it is, completely false. Nvidia PhysX fully supports multi-core CPUs and multithreaded applications, period. Our developer tools allow developers to design their use of PhysX in PC games to take full advantage of multi-core CPUs and to fully use the multithreaded capabilities.[/quote<] (http://www.geeks3d.com/20100121/nvidia-multi-core-cpu-support-is-not-disabled-in-physx/)
            i.e. their claim was supposedly that software PhysX did (and does) support multithreading, but that not all developers took advantage of that.
            Some site later claimed to have run tests confirming that this actually was the case (http://www.geeks3d.com/20100222/preview-multi-core-cpu-physx-on-a-quad-core-cpu/).

            I haven’t really looked closer at this, so I can’t say for sure what’s actually going on. However, it seems rather clear that there are multiple opposing claims floating around.

            “Only recently has nvidia started to talk about allowing cpu optimizations, and I have to wonder how serious they really are. If you look at what else runs Physx: consoles and smartphones, then it’s fairly obvious nvidia is playing dirty.”

            I have my doubts too. I’d be pleasantly surprised if they end up supporting AVX, but I kinda doubt that they will. I’m expecting PhysX 3 to take *better* advantage of CPUs, but I don’t expect it to be anything near “optimal”. I hope I’m wrong about that though.

            “Even if physics was a huge drain on the cpu, which it’s not under normal circumstances, physx was created back when people were using single and dual core cpu’s. Now we have 6+ cores and hyperthreading. I think that’s more than enough horsepower to obsolete physics accelleration. This is exactly what happened to sound cards, and video cards may not even be immune if cpu’s start taking them on.”

            I’m not exactly sold on GPU (game) physics. Sure, GPUs are very well suited to the calculations that take place, but GPUs also tend to be the bottleneck in games, while CPUs tend to sit largely underutilized. To me, that’d seem to be a good reason to use *CPUs* for physics, even if GPUs are better suited for it (when physics are run in isolation).

            • l33t-g4m3r
            • 8 years ago

            “That seems to depend on who you ask.”

            It sure does, and Nvidia seems to be claiming they aren’t responsible for non-multithreading, which is a load. Metro 2033 says they use multithreading, but that game is already so slow, you don’t want to bother. Anyways, you’re mixing up the past and the present. Nvidia is presently moving toward multithreading, but in the past that was not the case. The older PhysX games were not COMPILED for multiple cores.

            Also, did you read the article I linked to? Probably not. Multithreading is only half of the equation. PhysX uses x87 for FP math almost exclusively, not SSE. Here, I’ll post it again. Make sure to read it.
            http://www.realworldtech.com/page.cfm?ArticleID=RWT070510142143

            Also, Charlie seems to think PhysX could run faster on the CPU than the GPU.

            “Why is this important? SSE has scalar and vector variants for instructions. Scalar basically means one piece of data per instruction, and that is where you get the ~20% speedup over x87. Vector allows you to do the math on 1 128 bit instruction, 2 64 bit instructions, or 4 32 bit instructions simultaneously. Since PhysX uses 32 bit numbers, you can do 4 of them in one SSE instruction, so four per clock. Plus 20%. Lets be nice to Nvidia and assume only a 4x speedup with the use of vector SSE. What does this mean? Well, to not use SSE on any modern compiler, you have to explicitly tell the compiler to avoid it. The fact that it has been in every Intel chip released for a decade means it is assumed everywhere. Nvidia had to go out of their way to make it x87 only, and that wasn’t by accident, it could not have been.”

            “In the end, there is one thing that is unquestionably clear, if you remove the de-optimizations that Nvidia inflicts only on the PC CPU version of PhysX, the GPU version would unquestionably be slower than a modern CPU. If you de-optimize the GPU version in the same way that Nvidia hobbles the CPU code, it would likely be 1000x slower than the CPU. If Nvidia didn’t cripple CPU PhysX, they would lose to the CPU every time.”
            http://semiaccurate.com/2010/07/07/nvidia-purposefully-hobbles-physx-cpu/

            That said, why is Nvidia finally moving toward multithreading? Because they don’t care about it anymore, and if it’s too slow nobody will want to use it. It’s also not good for benchmark numbers. Tessellation is the new gimmick, plus Nvidia can easily manipulate the benchmark numbers with it.

            • Stargazer
            • 8 years ago

            “Anyways, you’re mixing up the past and the present. Nvidia is presently moving toward multithreading, but in the past that was not the case.”

            No I’m not. I gave a timeframe (“around the time of the article you linked to”), and it is correct. Heck, even the RWT article itself says:

            “A review at the Tech Report already demonstrated that in some cases (e.g. Sacred II), PhysX will only use one of several available cores in a multi-core processor. Nvidia has clarified that CPU PhysX is by default single threaded and multi-threading is left to the developer.”

            and:

            “Moreover, PhysX code is automatically multi-threaded on Nvidia GPUs by the PhysX and device drivers, whereas there is no automatic multi-threading for CPUs.”

            (note: *automatic*)

            They’re saying the same thing I said, during the same time frame that I mentioned (I’m assuming that the article was posted around the same time the article was posted). Your “you’re mixing up the past and the present” simply doesn’t make sense.

            The RWT article also clearly shows that (Nvidia) PhysX was (heavily) multithreaded, but not *efficiently* multithreaded (please note this part, and don’t go claiming that I’ve said otherwise). The clear majority of the work in the examined examples was made by one thread, with another thread doing a significant amount of work, and a few more threads doing some smaller amount of work. Other investigations (the second link I posted above) suggest that there are examples where multithreading is more efficiently used, but it seems clear that even if it was not very efficient, PhysX at least supported multithreading. You most certainly can’t use the RWT article as a basis for claiming otherwise.

            Note also that Nvidia have not said that they’re going to *add* multithreading with PhysX 3; they’re talking about “More Effective Multithreading”. This fits perfectly with what I’ve said above. Yes, current (edit: well, then-current. I suppose PhysX 3 might count as “current” now, and I hope the same limitation doesn’t apply there) Nvidia PhysX CPU multithreading is not enabled by default, and even when it is enabled, it seems to be horribly inefficient in many (most?) cases. I’ve never claimed otherwise, and it doesn’t change anything I’ve said (it *is* what I’ve said).

            “Also, did you read the article I linked to? Probably not. Multithreading is only half of the equation. PhysX uses x87 for FP math almost exclusively, not SSE. Here, I’ll post it again. Make sure to read it.”

            I read it at the time it was published, and I’ve read it a couple of times since then. Note that it supports what I said (and also seems to contradict some of your claims). And yes, I’m aware that (the tested version of) PhysX uses x87. That’s horrible (also, it has nothing to do with what I was talking about, so I’m not sure why you’re bringing it up as a potential reason for why I might not have read the article). They’ve claimed that PhysX 3 will improve in this regard, but, like I said earlier, I’m kinda doubting that they’ll go as far as supporting AVX. I hope they will though...

            Oh. You might not want to make the mistake of assuming that I’m a “supporter” of PhysX.

            • l33t-g4m3r
            • 8 years ago

            Where’s your proof Physx is being multithreaded? You’re claiming it is, but offer no proof to back it up.
            Let’s look at the facts:
            Facts:
            Physx can be multithreaded, obviously because that is the nature of the program.
            Physx is normally NOT multithreaded on the CPU. This has nothing to do with the capability.
            Physx is multithreaded on consoles, and runs fine, contrary to the PC.
            Game developers have to manually enable multithreading.
            (This may be true, but the gpu code is obviously multithreaded automatically.)
            Metro 2033 is multithreaded. This is the first and only game I know of that does this, and it’s recent.
            Up until recently, game developers have not used multithreading with physx.
            (Why should they when nvidia is sponsoring it?)
            The RWT article does not prove physx is multithreaded, on the contrary, there is only one physx thread with 90+ cpu usage. The game is what’s multithreaded, not physx.

            “neither workload is multithreaded in a meaningful way. In each case, one thread is doing 80-90% of the work, rather than being split evenly across two or four threads – or as is done in an Nvidia GPU, hundreds of threads.”

            I still don’t think you’ve actually read the article, or at best skimmed it to twist facts. The multithreading capability has been present since day ONE with PhysX. How else can you run the software on a video card? That’s not the issue; the issue is that multithreading was not being used on the PC. When you get proof to the contrary, you can say so. But I don’t know of any older software that did it, and the vast majority (99%) of PhysX games were not multithreaded. Ageia didn’t support it either. PhysX was designed to be crippled on the PC. The capability may have been there from day 1, but it was never used, just like SSE. That’s the point. PhysX is artificially limited.

            When I said you’re mixing up the past and present, I meant how older games were all compiled to use a single thread, which has nothing to do with the capability to use multiple threads, and Metro 2033 is the only game I know of that uses multithreading, and it’s recent. See? Past vs. present. How hard is that to comprehend?

            • Stargazer
            • 8 years ago

            “Where’s your proof Physx has been multithreaded? You’re claiming it is, but offer no proof to back it up.”

            Look at chart 2 (on the “Profiling Results” page). It shows the various threads used by the two examined programs. Then look at charts 3&4. They show the top two threads for each examined program. For both programs, both the top threads perform PhysX calculations (and that’s only examining 2 of the *many* (up to ~100 apparently) threads for each program). Are the calculations efficiently distributed? No. I specifically said that they were not. The link (http://www.geeks3d.com/20100222/preview-multi-core-cpu-physx-on-a-quad-core-cpu/) I’ve referred to above claims to see much more effective multithreading (up to at least 4 cores apparently).

            “Physx can be multithreaded, obviously because that is the nature of the program. Physx is normally NOT multithreaded on the CPU. This has nothing to do with the capability. Physx is multithreaded on consoles, and runs fine, contrary to the PC. Game developers have to enable multithreading.”

            Interestingly enough, now *you* seem to be claiming that PhysX can be multithreaded, but that it is not enabled by default on the CPU. This is exactly what I’ve been saying. It’s quite possible that very few PC games have efficiently *used* PhysX’s multithreading capability. I don’t know. I also never claimed anything else. I referred to a claim by Nvidia that PhysX supported multithreading, but that it had to be enabled by developers (something you appear to agree with). Whether people *take advantage* of that support or not does not change the (claimed) fact that the support *exists*.

            I’m also entirely in agreement that PhysX was (essentially) designed to suck on CPUs. It was a mess (in more ways than one), and I really hope that PhysX 3 improved things. Time will tell, I suppose...

            • Bensam123
            • 8 years ago

            “Interestingly enough, now *you* seem to be claiming that PhysX can be multithreaded, but that it is not enabled by default on the CPU. This is exactly what I’ve been saying.”

            He read my post where I linked to TR’s review years ago, as well as yours, and now he’s trying to twist his argument around.

            PhysX was initially designed to scale with as many cores as you had. Nvidia removed or disabled that functionality, then re-enabled it a few months ago. I’ll have to dig up that news post after work.

            • Stargazer
            • 8 years ago

            Oh for crying out loud. If you’re going to continue to insult me, can you please at least do it in the first couple of edits, so that I don’t have to go back and make a new post?
            (also, please make some sense)

            “(from the article:) ‘neither workload is multithreaded in a meaningful way. In each case, one thread is doing 80-90% of the work, rather than being split evenly across two or four threads – or as is done in an Nvidia GPU, hundreds of threads.’ (you:) I still don’t think you’ve actually read the article, or at best skimmed it to twist facts.”

            Yes, obviously I’m twisting facts when I refer to the same thing as the above quote with:

            “The RWT article also clearly shows that (Nvidia) PhysX was (heavily) multithreaded, but not *efficiently* multithreaded (please note this part, and don’t go claiming that I’ve said otherwise). The clear majority of the work in the examined examples was made by one thread, with another thread doing a significant amount of work, and a few more threads doing some smaller amount of work.”

            I mean, it’s not like 80-90% is a “clear majority” or anything like that.

            “When I said you’re mixing up the past and present, I meant how older games were all compiled to use a single thread, which has nothing to do with the capability to use multiple threads, and Metro 2033 is the only game I know of that uses multithreading.”

            Right. Of course. When you accused me of mixing up the past and the present, you obviously meant that I was temporally confused about something I hadn’t even talked about. That makes sense. After all, how could I have gotten it right if I hadn’t even tried yet? I also like how you clarify that the capability to use multiple threads has nothing to do with how older games were all compiled to use a single thread, since (Nvidia’s claims about) the capability was what I was talking about.

            We’re in agreement then that when I said that Nvidia claimed that there was support for multithreading but that it had to be enabled by the developers, I wasn’t talking about how many games actually took advantage of this? Excellent.

            “See? Past vs Present. How hard is that to comprehend?”

            I’m starting to wonder the same thing.

            • l33t-g4m3r
            • 8 years ago

            [quote<]I wasn't talking about how many games actually took advantage of this? Excellent.[/quote<]

            Semantics. You are arguing freaking semantics and logical fallacies. Fanboyism at its worst.

            • Stargazer
            • 8 years ago

            [quote<]Semantics. You are arguing freaking semantics and logical fallacies.[/quote<]

            I beg your pardon? You accuse me (twice!) of not reading an article that corroborates what I'm saying, and doesn't contradict what I'm saying. You accuse me of mixing up the past and the present, when what I said was given a time-frame that is corroborated by the article you cite (and by other sources cited by me).

            This ludicrous side-discussion only exists because you keep on attributing to me positions that I've never claimed, and that are inconsistent with the things I have claimed. While doing so, you've managed to keep your own arguments internally inconsistent, and lacking in logic. If you want to continue this charade, please go back to post #121 and see what I *actually* said when you accused me of mixing up past and present, rather than making up your own reality and responding based on that.

            [quote<]Fanboyism at it's worst.[/quote<]

            What exactly am I supposed to be a fanboy of? PhysX? The thing I've called "a mess", "horribly inefficient", and "designed to suck"? Or maybe Nvidia? The people that designed/maintained the "horribly inefficient" "mess" that was "designed to suck"? I must be the best fanboy ever.

            • l33t-g4m3r
            • 8 years ago

            Really? Let’s take a look at your previous statement.
            [quote<]I wasn't talking about how many games actually took advantage of this? Excellent.[/quote<]

            Do you know what this says? It says you don't care about facts, you only care about making things up and imaginary scenarios, and you admit it. Why should you bother saying anything else? You're admittedly not dealing in reality, however you want to describe it.

            Physx uses x87 for FP math and is single threaded. That's how it's compiled in reality, but you are arguing that since the possibility exists to optimize it, we should imagine it is so, across the board. No, that's not how it works. All that having the capability proves, is that Physx is rigged, and has been rigged from day one.

            I wouldn't support such a scam, even if someone offered to pay me to come on forums and shill for it like Brian_S. Wink, wink, nudge, nudge. (I couldn't possibly be insinuating something here about you, now could I?)

            • Stargazer
            • 8 years ago

            [quote<]Really? Let's take a look at your previous statement. "I wasn't talking about how many games actually took advantage of this? Excellent." Do you know what this says? It says you don't care about facts, you only care about making things up and imaginary scenarios, and you admit it. Why should you bother saying anything else? You're admittedly not dealing in reality, however you want to describe it.[/quote<]

            Uhm... In post #121 I claim the following:

            1) At the time of the article, there were some claims that PhysX used to be multithreaded, and that Nvidia removed this capability. This (that these claims existed) is true.

            2) Nvidia then made a statement that these claims were false, and that PhysX supports multithreading, but that developers have to take advantage of it. This statement is echoed in the RWT article (http://www.realworldtech.com/page.cfm?ArticleID=RWT070510142143&p=5)

            (In this post I also claim to have doubts about: a) PhysX3 taking advantage of AVX and b) GPU physics being preferable over CPU physics, but both those statements didn’t lead anywhere)

            In your response to #121 (#124 for those who care), you accuse me of mixing up the past and the present.
            In #133, you then clarify that:
            [quote<]When I said you're mixing up the past and present, I meant how older games were all compiled to use a single thread, which has nothing to do with the capability to use multiple threads, and metro 2033 is the only game I know of that uses multithreading, and it's recent.[/quote<]

            Since I hadn't actually talked about how many games took advantage of the multithreading, it seemed safe enough to say (in #136) that:

            [quote<]We're in agreement then that when I said that Nvidia claimed that there was support for multithreading but that it had to be enabled by the developers, I wasn't talking about how many games actually took advantage of this? Excellent.[/quote<]

            This is what you (selectively) quote in #147 when you say that this shows that I "don't care about facts", "only care about making things up", and that I'm "not dealing in reality".

            Or, the tl;dr version: You accused me of mixing up the past and the present *based on something I never said*, and when I call you on this, you say that this is proof that I don't care about facts.

            [quote<]Physx uses x87 for FP math and is single threaded. That's how it's compiled in reality, but you are arguing that since the possibility exists to optimize it, we should imagine it is so, across the board.[/quote<]

            No I'm not. I've mentioned that *Nvidia has claimed* that it could be used in a multithreaded fashion (as did the RWT article). I've also called the actual multithreading usage (shown in the RWT article) "horribly inefficient", and the implementation "designed to suck". I've never once said that we should imagine that it's always used in a(n effectively) multithreaded fashion.

            [quote<]All that having the capability proves, is that Physx is rigged, and has been rigged from day one.[/quote<]

            Something like... "Being designed to suck on CPUs"? By golly, I wish I'd thought of that. Oh, wait...

            [quote<]I wouldn't support such a scam, even if someone offered to pay me to come on forums and shill for it like Brian_S. Wink, wink, nudge, nudge. (I couldn't possibly be insinuating something here about you, now could I?)[/quote<]

            Well, I don't think that many companies would want to pay people to go around saying that their products are "a mess", "horribly inefficient", and "designed to suck". However, on the off chance that you're on to something, I would like to take this opportunity to announce that if any companies are interested in hiring me to trash-talk your own products, I'm available at very reasonable rates.

            • l33t-g4m3r
            • 8 years ago

            [quote<]I'm available at very reasonable rates.[/quote<]

            Of course you are, that's pretty obvious. But why pay you for something that you'll do poorly for free?

            [quote<]I've also called the actual multithreading usage (shown in the RWT article) "horribly inefficient"[/quote<]

            If by horribly inefficient, you mean single threaded, then yes. But you are actually inferring that it is multithreaded with poor performance. No. The game is multithreaded, and physx is utilizing one main thread for its calculations. It's not sharing the load. Just because nvidia throws out some pr statement about multithreading doesn't mean they want us to have multithreading. I think they could easily compile a version to automatically support any number of cpu cores, since it scales perfectly with the gpu. But they won't. The slower physx runs, the more video card sales they get.

            The only solution is to not use the crippleware in the first place. The majority of people now get this, and don't care, even if they have a nvidia card. I have one, and I don't care to use some stupid gimmick that lowers my framerate, so I turn it off if it runs slow. Overall, this is why nvidia is now focusing on tessellation, bringing me back to the original subject. Tessellation is not a gimmick, and it is easier for them to manipulate benchmarks with, because all dx11 video cards run it.

            So, if for any reason nvidia improves physx in the future, it's because they no longer need to cripple it, or simply must improve it to keep it from dying off. Then once we get hooked, they'll cripple it again. We're better off without it. The only way I'll ever trust Physx is if nvidia makes it open source, or microsoft buys it out for dx12.

            • Stargazer
            • 8 years ago

            [quote<]If by horribly inefficient, you mean single threaded, then yes. But you are actually inferring that it is multithreaded with poor performance. No. The game is multithreaded, and physx is utilizing one main thread for it's calculations. It's not sharing the load. Just because nvidia throws out some pr statement about multithreading doesn't mean they want us to have multithreading.[/quote<]

            It's using more than one thread to perform physics calculations (see charts 3&4). That makes the physics multithreaded. The vast majority of those calculations are performed in one of the threads (presumably a non-multithreaded solver). That makes the multithreading horribly inefficient. Of course, if you only want to see that the physics *can be* multithreaded.

            I referred you to an article that shows (good actually, possibly including specifically multithreaded solvers) multithreading of physics (pre-dating the RWT article). You yourself say that Metro 2033 uses multithreaded physics (it might be "recent", but it's still running PhysX 2.x). It seems rather clear that CPU PhysX *can be* multithreaded, but that it's not enabled by default.

            At any rate, at the time you accused me of mixing up past and present, and of not having read the RWT article, all I'd said was that *Nvidia claimed* that it could be multithreaded, but that this was not enabled by default. This statement has been corroborated by multiple sources, *including the RWT article you accused me of not having read*, where it was also specifically stated in the text. So... Are you willing to take back those accusations yet, or did you accuse me of mixing up past and present based on something I hadn't said yet?

            Anyway... I don't know if it's because you're trolling or because you just don't know any better, but it's become abundantly clear that your responses are based on some fantasy-land reality, and not what I'm actually saying. This makes you a huge waste of time. Goodbye. Looking forward to some amusing retort, possibly one involving me being called a fanboy.

            • Bensam123
            • 8 years ago

            I referred him to an article on TR showing the same thing concerning multithreading.

            A lot of what you've written is what I've written before I even read your posts. I also concluded that he is some sort of troll, or is being blatantly ignorant and/or delusional.

            • l33t-g4m3r
            • 8 years ago

            [quote<]Good bye.[/quote<] Cya. Can't say I won't miss you pointlessly sticking up for a con. Useless fact of the day: The perpetrator of a confidence trick is often referred to as a confidence man/woman, con man/woman, con artist or grifter. When accomplices are employed, they are known as shills.

            • Bensam123
            • 8 years ago

            d00d, you so summed up his entire argument into a confidence trick… awesome.

            • Bensam123
            • 8 years ago

            Yup, he’s doing the same thing to me in the tangent below yours.

            • Bensam123
            • 8 years ago

            Yes sir. No one seems to remember PhysX before it was acquired by Nvidia, and how it subsequently fell as well.

          • Bensam123
          • 8 years ago

          PhysX was owned by Ageia before they were bought out by NVidia. NVidia crippled PhysX.

          [url<]https://techreport.com/articles.x/10223/8[/url<]

          Look for articles before NVidia bought PhysX. Hell, there are a few original ones from [b<]2006[/b<] on this site. You linked to an article from 2010, after the merger and driver changes. My original point being that it tanked before NVidia ever acquired it, and not for the reasons you're giving. Your view is wrong and misinformed, based on information obtained after the fact.

            • l33t-g4m3r
            • 8 years ago

            Yes, let's talk about physx all day with the Physx rdf fanboys. That article doesn't say diddly squat. All it says is that a faster cpu runs the game faster, nothing more. It is not an extensive analysis, nor does it matter. Ageia is defunct.

            [quote<]This test was about rigid bodies[/quote<]

            [quote<]No liquids were present onscreen during this test, and no cloth[/quote<]

            [quote<]Testing PhysX performance in this way may be mostly bogus[/quote<]

            Biggest gotcha here:

            [quote<]AGEIA: So you mentioned a “software mode” – is it possible to run CellFactor without hardware? If so, how does it run? JS: Only if you want to miss out on a lot of what makes the game fun! I read on a forum somewhere that a player had done a command line switch and disabled support for the PhysX card. Of course he benchmarked it and it came back with a decent result. The reason for that is pretty simple – we never really intended for players to actually play the game like that, so we stripped the more advanced features out of the software mode (such as fluid and cloth); let’s not forget that AGIEA makes a very powerful physics software engine as well, so doing rigid body collisions (where boxes get tossed around) isn’t too much for the highest-end CPU's to handle. With that said, in software mode, you’ll still notice a significant slow-down at moments of peak physics interaction on even the latest and greatest multi-core machines. That’s why we have the PhysX card listed as a requirement. [/quote<]

            This article does not prove Ageia supported multithreading. It only says that they stripped the advanced features out of the software mode to improve performance. The numbers are invalid.

            Also, keep in mind this was on a single/dual core. Back then computers were so limited by a single core that even one extra processor helped tremendously, whether or not games were optimized for it. It's very possible that the game got a small boost merely from moving the physx thread to the spare cpu. The performance isn't doubled, so that's most likely what happened. Like I said, you're all pointlessly grasping at imaginary, nonexistent straws.

            Even if Ageia had optimized for the cpu, which they never did, it doesn't matter. Nvidia is who's running physx now. Another thing you need to keep in mind about these incorrect theories is that I actually owned a PPU. There was no CPU optimization with Ageia's software. I know this to be a cold hard fact from physically using the hardware and software. Software mode was useless.

            The only thing I agree with you on is that Physx tanked before nvidia bought it. The only reason it got to where it is now is because nvidia has shoved it down our collective throats through their twimtbp program. Even with that, physx is still going nowhere, since nvidia's focus is now on tessellation.

            • Bensam123
            • 8 years ago

            Ageia made PhysX… what happened when they were around still happened. You’re just looking for reasons to hate on PhysX and wrongly state that it’s always been crippled on CPUs. That was not the case.

            [quote<]Tessellation is the new Physx. Physx tanked because nobody with an AMD card could use it, it was pointless, and it was occasionally too slow for nvidia users.[/quote<]

            What I proved was that the premise of that whole statement was false. It never 'tanked' either on an AMD or Nvidia GPU. As a matter of fact, it originally couldn't even run on either GPU and had a PPU made for it, which ran just as fast as a dual-core processor.

            You misread the graph as well. It was showing it being tested against a dual core with and without a PPU installed, not with and without physics on. The dual-core runs faster all around.

            I never said it was 'optimized' for CPUs. I said that it would utilize all available cores and scale to them. There is a big difference, and at the time the big thing was that the PPU was only sometimes a bit faster than simply having an extra core to run PhysX on. That and it removed some eye candy, such as tearable fabric, which really didn't matter that much.

            Nvidia, after acquiring PhysX, artificially limited it in software and as such destroyed the potential benefits on a CPU. They re-enabled that functionality a few months ago and you can actually find that in the news history. TR hasn't tested that though.

            You can own however many PPUs you want, that doesn't make you right about anything, just the same as owning a computer doesn't make me right about everything computer related. That's a fallacy.

            • l33t-g4m3r
            • 8 years ago

            [quote<]You misread the graph as well. It was showing it being tested against a dual core with and without a PPU installed. Not with and without physics on. The dual-core runs faster all around.[/quote<]

            Wrong.

            [quote<]Of course he benchmarked it and it came back with a decent result. The reason for that is pretty simple – we never really intended for players to actually play the game like that, so we stripped the more advanced features out of the software mode[/quote<]

            The program automatically stripped the advanced features out of software mode. The comparison is not legit. Why don't you load up a freaking PhysX game that doesn't strip features, and compare the results? The reality is that using the cpu gets you under 10 fps, like in Batman AA.

            [quote<]I said that it would utilize all available cores and scale to it. [/quote<]

            No it doesn't. Not on a CPU. It only does so on a PPU/GPU. Developers have to manually enable multithreading, AND NVIDIA ADMITS THIS.

            [quote<]Our PhysX SDK API is designed such that thread control is done explicitly by the application developer, not by the SDK functions themselves[/quote<]

            • Bensam123
            • 8 years ago

            …because it can’t do tearable cloth, yet you can have completely destructible buildings, it’s not legit?

            Batman AA wasn’t around before Nvidia butchered the PhysX driver.

            [quote<]Wrong.[/quote<]

            [url<]https://techreport.com/articles.x/10223/8[/url<]

            It's right there on page 8... It shows the dual core processor running just as fast as the single core at 2.4GHz with a PPU. That means there is a difference between the dual core and the single core processor. The extra core offsets having a dedicated PPU; that was a big deal at the time, because a PPU was essentially worthless or close to it except for your fabled tearable cloth.

            [quote<]No it doesn't. Not on a CPU. It only does so on a PPU/GPU. Developers have to manually enable multithreading, AND NVIDIA ADMITS THIS.[/quote<]

            I think you're mixing the past and the present together and ending up backwards.

            [quote<]Tessellation is the new Physx. Physx tanked because nobody with an AMD card could use it, it was pointless, and it was occasionally too slow for nvidia users. Plus, nvidia was hyping it only when they had no dx11 cards out.[/quote<]

            That essentially says that PhysX did not exist before Nvidia bought them out. You're discarding every piece of history and even went so far as to say it flunked because Nvidia locked out AMD. This brings us back to it running just as well with an extra core compared to not having a PPU; that is something Nvidia removed and Ageia had not originally intended. Once again, the difference between the past and the present.

            Stop quoting stuff from 2010 and 2011 when the argument isn't about what PhysX is now; it's about what it was and how you completely mislabeled everything and were misinformed when you made your original post.

            • l33t-g4m3r
            • 8 years ago

            Jeez, you’re thick. Load up the old ageia software and try it in software mode. You won’t get anywhere.

            Ooh WOW, a dual core is faster than a single core. Do you know how ridiculous you sound? The SMALL speed increase is from the game and physx threads running on separate cores, as well as clockspeed, and don't forget it's running a stripped version. That doesn't mean Ageia's physx was scalable to a quad core, or that it could run smoothly on a cpu, since the PPU was doing additional work. Another thing to consider is that the PPU was bus limited, and never did achieve high framerates, but it was capable of processing physics that the cpu couldn't at an acceptable framerate. Although that was all a scam, since the cpu path was crippled.

            [quote<]Of course he benchmarked it and it came back with a decent result. The reason for that is pretty simple – we never really intended for players to actually play the game like that, so we stripped the more advanced features out of the software mode[/quote<]

            I will repeat this EVERY SINGLE TIME you bring this up. The cpu is not running full physx, the ppu is. The numbers are INVALID. Grasp your imaginary straws elsewhere. The modern equivalent would be running Batman AA on maximum physx on a gpu vs minimum physx on a cpu. Or running 640x480 vs 1920x1200. It's not a valid test.

            [quote<]PhysX did not exist before Nvidia bought them out.[/quote<]

            Nvidia is the only reason physx got anywhere, because they forced it on us. Had nvidia not done so, PhysX would have died off a long time ago. Physx was raised from the dead and shoved down our throats, all while screwing the competition. That behavior caused such a large backlash that nvidia is now encouraging its partners in crime to finally enable multithreading. Too little, too late.

            Nobody wants to use it because of the negative history, plus enough investigative journalism has proven the whole thing to be a scam. Ageia was not any more honest than nvidia, either. Anyone with two brain cells to rub together can see it's not something worth messing with. We, the users, have no control over how well physx performs. Nvidia causes framerates to rise and fall at its will.

            Overall, the only thing you're accomplishing here is proving how far a physx fanboy will go to distort reality. You need to stop being such a physx fanboy, and move on to supporting a more legit physics software, like havok or bullet.

            • Bensam123
            • 8 years ago

            Stripped version, what are you talking about? That review was from 2006… five years ago. That was before Nvidia bought Ageia and before they crippled their driver. At the time, when quad cores weren't even available outside of servers, that meant a lot. It's different than benchmarks now, especially when you're talking about 20-30fps. That's the difference between playable and unplayable.

            You're not reading the graph right. You don't compare the 2.4 single vs dual with the PPU enabled, you compare the ones without it enabled, which shows a 10fps difference. Then you compare that against a single with the PPU enabled, which shows the second core offloads it just about as well as the PPU itself (a difference of 3 fps). Either way it DOES show that there is multi-threading taking place, which Nvidia removed so that it didn't use more than one core.

            PhysX titles now are made so you can’t run the older versions of PhysX, it simply doesn’t work with it. Nvidia made sure they crippled the drivers.

            [quote<]I will repeat this EVERY SINGLE TIME you bring this up. The cpu is not running full physx, the ppu is. The numbers are INVALID. Grasp your imaginary straws elsewhere.[/quote<]

            Like I said, you're only negating it because of tearable cloth. Destructible environments aside, you think that invalidates everything? If the PPU made that much of a difference there wouldn't be a 10fps difference between the single and dual core without the PPU and a 4 fps difference between the single and dual core with the PPU. If it was really the PPU making all the difference then there would be the same difference between the single and dual cores.

            Simply stating that something will be different and giving an extreme example does not validate your point. I have actual test results I pointed you to. You're just blowing hot air with nothing to back it up. If you want to prove such a point, pull results from 2006, not random articles from 2010-2011 that don't even relate.

            [quote<]Nvidia is the only reason physx got anywhere, because they forced it on us. Had nvidia not done so, PhysX would have died off a long time ago.[/quote<]

            I believe PhysX is just as used now as it was back when Ageia owned it. Batman isn't really even a mainstream title that you keep bringing up, nor are the physics it uses more than frilly eye candy, the same sort that was around when Ageia was working things. It is a port. Name a mainstream title that actually uses PhysX. I'm not talking about a console port, I'm talking about a mainstream multiplayer game. The only one I know of is UT3, and Ageia wrangled that, not Nvidia.

            [quote<]That behavior caused such a large backlash that nvidia is now encouraging its partners in crime to finally enable multithreading.[/quote<]

            Funny considering you didn't even think PhysX had multithreading a few posts up, yet now you're preaching about it.

            I never said that PhysX couldn't have been more. I'm a firm believer that Ageia definitely had something going, but the reason that it died had nothing to do with Nvidia; developers simply did not pick it up. Developers fell into consolization, which is still going on now. It was free and had a scalable software mode that scaled to all your cores. There was no downside to using it back when Ageia was running it. Right from the article that you said you read:

            [quote<]In the absence of PPU hardware, the PhysX API will fall back to software processing. In fact, the PhysX software is multithreaded in order to take advantage of multi-core CPUs. This software fallback is key to Ageia's world domination plans. The company is licensing its entire PhysX API and software development kit, complete with tools, to PC game developers free of charge. The only catch: those games must take advantage of a PhysX PPU if present. Ageia has also shepherded the PhysX API's migration on to next-gen game consoles. On the Xbox 360, game development houses can license the SDK for about $50,000, and it will use all three of the cores on the Xbox 360 CPU. Sony simply bought out the rights to the PhysX SDK for the PlayStation 3 so all developers can use it for free, and Sony engineers have ported the physics processing routines to the Cell processor. These efforts have made the PhysX API a reasonably complete, low-cost, cross-platform physics engine, and Ageia has had some success persuading game developers, game engine companies, and tool makers to use it. The list of upcoming PhysX-fortified titles is long, but easily most prominent among them are Unreal Engine 3 and Unreal Tournament 2007. Unfortunately, the list of current titles with PhysX support is depressingly short. We'll test the card with a couple of the most prominent titles shortly.[/quote<]

            Instead of calling people names you should really look into what you're talking about. You didn't even read the article, so even though you were misinformed at first, now you're just being blatantly ignorant.

            [quote<]Nobody wants to use it because of the negative history, plus enough investigative journalism has proven the whole thing to be a scam.[/quote<]

            Nothing about PhysX has been a scam... where did this even come from? What is a scam? This is starting to edge on delusional. Like I said, I think you're mixing up the past and the present and have only known PhysX since Nvidia took them over.

            I never said I supported Nvidia and what they did to PhysX. I never said that I was supporting PhysX right now; what I did say was that the premise of your initial point was completely wrong. Everything I've stated since then was to prove that point. I enjoyed the idea of PhysX when Ageia ran things, but that's gone and Nvidia butchered it. Nothing more, nothing less.

            • l33t-g4m3r
            • 8 years ago

            [quote<]Stripped version, what are you talking about? That review was from 2006... five years ago. That was before Nvidia bought Ageia and before they crippled their driver.[/quote<]

            "Ageia has also tackled the CellFactor software mode issue by publishing an interview with the demo's developer."

            [quote<]I have actual test results I pointed you to.[/quote<]

            "we stripped the more advanced features out of the software mode"

            Invalid test results. Bunk. Nada. Are contradictory facts one of those things that you can't help ignoring? Maybe you're one of the Three Wise Monkeys.

            [quote<]PhysX titles now are made so you can't run the older versions of PhysX, it simply doesn't work with it. Nvidia made sure they crippled the drivers.[/quote<]

            Two things:

            1: You are admitting the software is crippled. Whoops.

            2: You can load up the old Ageia stuff without using nvidia's drivers. Not that anybody cares either way, since it's a dead end, and newer games will overwrite Ageia's software. Not that I think this is necessary, since nvidia claims they don't disable multithreading in their driver. Theoretically, you can still run the older Ageia stuff with multithreading, using nvidia's driver. IMO, nvidia is using physx exactly how Ageia intended and designed it to be used, but with an iron fist.

            [quote<]developers simply did not pick it up.[/quote<]

            Because it was a scam. Every instance of a post-Ageia Physx game was nvidia sponsored. Everyone else used havok or an in-house concoction. Physx was an unnecessary solution in search of a problem.

            [quote<]It was free[/quote<]

            It was not free for the users, in any way, shape, or form.

            [quote<]had a scalable software mode that scaled to all your cores.[/quote<]

            Nvidia has similar claims, except it has to be manually enabled by the nvidia sponsored devs. Ageia's software probably ran exactly the same way. They could enable and disable at will. Performance would still suck, because of using x87 for FP math. I believe nvidia did claim that part hasn't changed from Ageia.

            [quote<]you didn't even think PhysX had multithreading a few posts up[/quote<]

            No, it obviously does since it scales with a video card. However, the CPU version is artificially limited, and I said the only game I knew of that supported multi-threading was Metro 2033.

            [quote<]the article that you said you read[/quote<]

            I never said I read that article, only that you were ignoring contradictory information from it. Even if Ageia did somewhat support multi-threading, so what? It doesn't make Physx any less of a scam. You're nitpicking trivial minutiae that don't affect the overall conclusion.

            • Bensam123
            • 8 years ago

            lol, I said the software was crippled now… I said that Nvidia did that and that it wasn’t always like that. You’re combining two different things.

            d00d… I quoted direct text from the article and you’re disputing it with fairy tales and lies?

            Physics is easily scalable; that's why it can be easily offloaded to GPUs… the same can be said about CPUs. There is no 'magic multithreading code' developers need to implement. It was originally implemented in the PhysX driver. It's always been this way. Physics calculations in themselves are easily parallelizable, since they amount to calculating the positions of objects step by step and making multiple projections of multiple objects. In essence each object is its own calculation… Nvidia just went out of their way to cripple it…
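
            To make the per-object point concrete, here is a minimal, hypothetical sketch (plain C++, not the PhysX SDK, and only the trivially parallel integration step; contacts and constraints between objects are the part that actually needs a clever solver): each body's update depends only on its own state, so the work splits naturally across however many cores are available.

            [code<]
            #include <algorithm>
            #include <cstddef>
            #include <functional>
            #include <thread>
            #include <vector>

            struct Body { float px, py, pz, vx, vy, vz; };

            // Advance one slice of the body array by dt under constant gravity.
            static void integrateSlice(std::vector<Body>& bodies,
                                       std::size_t begin, std::size_t end, float dt) {
                const float g = -9.81f;
                for (std::size_t i = begin; i < end; ++i) {
                    Body& b = bodies[i];
                    b.vy += g * dt;      // gravity
                    b.px += b.vx * dt;   // integrate position
                    b.py += b.vy * dt;
                    b.pz += b.vz * dt;
                }
            }

            // Split the per-object work across all available hardware threads.
            void integrateAll(std::vector<Body>& bodies, float dt) {
                const unsigned n = std::max(1u, std::thread::hardware_concurrency());
                const std::size_t chunk = (bodies.size() + n - 1) / n;
                std::vector<std::thread> workers;
                for (unsigned t = 0; t < n; ++t) {
                    const std::size_t begin = t * chunk;
                    const std::size_t end = std::min(bodies.size(), begin + chunk);
                    if (begin >= end) break;
                    workers.emplace_back(integrateSlice, std::ref(bodies), begin, end, dt);
                }
                for (std::thread& w : workers) w.join();
            }
            [/code<]

            Whether a physics SDK actually schedules its solver across cores like this is, of course, exactly what the argument above is about.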

            lol… your definition of a scam is ridiculous. It competed with Havok, and whereas Havok was purely cosmetic, PhysX could be applied to the entire environment, therefore PhysX is a scam…

            leet you’re fricking ridiculous

            • l33t-g4m3r
            • 8 years ago

            [quote<]There is no 'magic multithreading code' developers need to implement.[/quote<]

            I share the same sentiment, but that's not how it works in practice. Devs do need to "enable" multithreading.

            [url<]http://www.geeks3d.com/20100121/nvidia-multi-core-cpu-support-is-not-disabled-in-physx/[/url<]

            [quote<]I have been a member of the PhysX team, first with AEGIA, and then with Nvidia, and I can honestly say that since the merger with Nvidia there have been no changes to the SDK code which purposely reduces the software performance of PhysX or its use of CPU multi-cores. Our PhysX SDK API is designed such that thread control is done explicitly by the application developer, not by the SDK functions themselves. One of the best examples is 3DMarkVantage which can use 12 threads while running in software-only PhysX. This can easily be tested by anyone with a multi-core CPU system and a PhysX-capable GeForce GPU. This level of multi-core support and programming methodology has not changed since day one. And to anticipate another ridiculous claim, it would be nonsense to say we “tuned” PhysX multi-core support for this case. PhysX is a cross platform solution. Our SDKs and tools are available for the Wii, PS3, Xbox 360, the PC and even the iPhone through one of our partners. We continue to invest substantial resources into improving PhysX support on ALL platforms–not just for those supporting GPU acceleration. As is par for the course, this is yet another completely unsubstantiated accusation made by an employee of one of our competitors. I am writing here to address it directly and call it for what it is, completely false. Nvidia PhysX fully supports multi-core CPUs and multithreaded applications, period. Our developer tools allow developers to design their use of PhysX in PC games to take full advantage of multi-core CPUs and to fully use the multithreaded capabilities.[/quote<]

            This is how the TWIMTBP program works. Nvidia pays the devs to not multithread physx, shifting the blame. The end result is the same: cpu physx is crippled. But there might be some software out there that does support multithreading, so then we get people like you saying, "see, see, physx supports multithreading," when in practice it does not. Physx IS a scam.
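
            For what that "thread control is done explicitly by the application developer" line looks like in practice, here is a rough sketch assuming the public PhysX 3.x API (which postdates the 2.x builds being argued about, so treat it as illustration only): the game explicitly hands the scene a CPU dispatcher, and the worker-thread count is whatever the developer chooses to pass in.

            [code<]
            // Sketch only: assumes the public PhysX 3.x API, not the 2.x SDK above.
            #include <PxPhysicsAPI.h>

            using namespace physx;

            static PxDefaultAllocator     gAllocator;
            static PxDefaultErrorCallback gErrorCallback;

            // The explicit choice the statement refers to: the game decides how many
            // CPU worker threads the SDK may use. 0 = simulate on the calling thread.
            PxScene* createSceneWithWorkers(PxU32 numWorkerThreads) {
                PxFoundation* foundation =
                    PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
                PxPhysics* physics =
                    PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

                PxSceneDesc sceneDesc(physics->getTolerancesScale());
                sceneDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
                sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(numWorkerThreads);
                sceneDesc.filterShader  = PxDefaultSimulationFilterShader;
                return physics->createScene(sceneDesc);
            }
            [/code<]

            Whether a given TWIMTBP title ever passes in more than zero worker threads is the whole argument above.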

            • Bensam123
            • 8 years ago

            On a side note, you edited the above post [b<]18[/b<], EIGHTEEN(!), times, and each time you make a new post you incorporate new things that your opposition has used in other posts, but not the person you're talking to until further posts down the way. Essentially you try a turnabout on your opponent because you can't formulate your own offense, so you're just trying to muddy the waters and drive things away from the original argument while attempting to infuriate them. In essence you're no different than the average troll online.

            • l33t-g4m3r
            • 8 years ago

            Trolls trolling trolls. If you don't like my post editing, don't argue with me. I'm just clarifying my views and fixing grammar. Neither of you have anything good to say anyway, since your points depend on shaky evidence, or don't matter since Ageia is defunct. Once you look at the whole picture, the arguments fall apart. All I gotta do is point out the hole in the boat, and voila, it sinks.

            Physx is a joke, and a detriment to the community, so I don’t get why you are bothering to defend it. The cat’s been out of the bag for a while too, so arguing about it now is like beating a dead horse.

            Whether or not Physx had potential doesn’t matter as long as it’s being manipulated to sell video cards. You should support open alternatives instead.

            • Chrispy_
            • 8 years ago

            Twenty-five edits? I don't normally take sides in petty bickering, but in this instance:

            GIVE IT A GODDAMNED REST AND GROW THE HELL UP.

            You’ve filled my entire 1600 pixels of vertical screen with your nested tree of garbage and whining.

            • l33t-g4m3r
            • 8 years ago

            You're an idiot. I did that on purpose. Bensam and Stargazer are the ones who "filled my entire 1600 pixels of vertical screen" shilling for nvidia. Simply reply to a different post to bump this one down, provided they quit advertising physx as the greatest thing since sliced bread.

            • Bensam123
            • 8 years ago

            One of the reasons I pointed it out is that perhaps you don't think things through clearly the first time? …perhaps you should read and comprehend a bit more, then edit after…

            Be more like a toad than a frog that spazzes out at anything it reads and then shoots for the first knee-jerk reaction; even if that means deleting your three paragraphs and starting over.

      • sschaem
      • 8 years ago

      Tanked? All games made using engines like Unreal Engine or Unity 3D use PhysX…
      PhysX is not about cloth flapping in the wind, it's about even the most basic object dynamics, like collisions.

      Now if you wanted more shattering glass and cloth flopping in the wind, yes, PhysX is a fail. But for making games better it succeeded, as each developer doesn't have to re-invent the wheel. "How do I do an efficient sweep test using a bounding volume?"… wait, it's all already done in PhysX.
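
      As an aside, the sweep test mentioned above is simple enough to sketch generically (hypothetical plain C++, not the PhysX API): slide box A along its velocity for one step and report the first time it would touch a static box B.

      [code<]
      #include <algorithm>
      #include <cstdio>
      #include <utility>

      struct Aabb { float min[3]; float max[3]; };

      // Returns true on hit and writes the normalized time of impact (0..1).
      bool sweepAabb(const Aabb& a, const Aabb& b, const float v[3], float& toi) {
          float tEnter = 0.0f, tExit = 1.0f;
          for (int i = 0; i < 3; ++i) {
              if (v[i] == 0.0f) {
                  // Not moving on this axis: the boxes must already overlap on it.
                  if (a.max[i] < b.min[i] || a.min[i] > b.max[i]) return false;
              } else {
                  // Entry/exit times along this axis.
                  float t0 = (b.min[i] - a.max[i]) / v[i];
                  float t1 = (b.max[i] - a.min[i]) / v[i];
                  if (t0 > t1) std::swap(t0, t1);
                  tEnter = std::max(tEnter, t0);
                  tExit  = std::min(tExit, t1);
                  if (tEnter > tExit) return false;   // the axes never overlap together
              }
          }
          toi = tEnter;
          return true;
      }

      int main() {
          Aabb a{{0, 0, 0}, {1, 1, 1}};
          Aabb b{{3, 0, 0}, {4, 1, 1}};
          float v[3] = {4.0f, 0.0f, 0.0f};
          float toi;
          if (sweepAabb(a, b, v, toi))
              std::printf("hit at t = %.2f\n", toi);  // expected: t = 0.50
          return 0;
      }
      [/code<]

      An engine's broad phase does essentially this, just over many object pairs at once, which is why it's convenient to have it already written for you.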

        • Bensam123
        • 8 years ago

        Yup, the physics race is over, and what PhysX was, it is no more; Nvidia ruined that. Not saying physics aren't amazing, I'm just saying the emphasis is gone now and the little that is being put into games now is nothing more than frilly eye candy.

      • Novum
      • 8 years ago

      It's not. AMD's GCN will have comparable geometry performance to Fermi.

    • tejas84
    • 8 years ago

    Wow I never thought I would see TR join [H]ardOCP as AMD GPU Shills.

    How much did AMD pay you to encourage TR to write this gratuitous attack on NV hardware?

    Smacks of an axe that AMD have been grinding for a while, i.e. their inferior tessellation hardware.

      • Meadows
      • 8 years ago

      Read the article, moron.

        • DeadOfKnight
        • 8 years ago

        Please don’t feed the trolls.

          • Meadows
          • 8 years ago

          Aww mooom, don’t take away my fun!

      • sschaem
      • 8 years ago

      nothing to do with nVidia, this is all on cRytek

        • danny e.
        • 8 years ago

        it might have something to do with nVidia; we just don't know for sure. The important part is that, no matter the case, this isn't pro-AMD. It's anti-stupid.

      • ElMoIsEviL
      • 8 years ago

      AMD Shills?

      When increasing the computational load for a given feature results in no noticeable increase in the end goal (image quality in this case) then why increase the load? Just because you can?

      Just because you can is not a logical/rational argument. I see no shilling here… what I do see is an individual (yourself) emotionally attacking various institutions for daring to hold differing opinions to your own.

    • Coruscant
    • 8 years ago

    My suspicion going back to Crysis 1 was that the developers were likely more lazy than sophisticated. The fact that Crysis remained unplayable at max settings and high resolutions through more than three generations of video cards should have been a hint. The graphical presentation is top notch, but was it 3x better? In relative terms, it's easy to roll out highly detailed, poorly performing graphics. The difficulty is in providing the graphical detail while maintaining performance. There's pushing the envelope, and then there's complete ignorance of where the envelope is at all.

    • ohtastic
    • 8 years ago

    Meh. Crysis 3 will be set in Arizona. Problem solved. Guess this proves graphics absolutely don’t sell a game, considering the ill-will and general negativity generated around Crysis 2. I do wonder how Battlefield 3 will play out as far as performance and differences between DX10 and DX11 go. BF3 has PC as lead platform, fingers crossed. EA should learn something from the Crysis 2 shenanigans.

      • sreams
      • 8 years ago

      Arizona? All of those flat deserts will require trillions of polygons. You just made the problem worse.

    • Spotpuff
    • 8 years ago

    That water thing would be hilarious if it weren’t so wasteful.

      • Stargazer
      • 8 years ago

      I think that similar waste (rendering large amounts of polygons that can never be seen) is actually more common than one might think. Unfortunately…

    • DarkUltra
    • 8 years ago

    This is outrageous. I'm glad I haven't bought Crysis 2 yet. If they fix this I will buy it. It would be cool if the leaves on the ground were tessellated and their control points had a geometry shader to make them flap in the wind… hmm

    • ModernPrimitive
    • 8 years ago

    Great article. This is the kind of reviews and information the community needs. I don’t even own a gaming desktop now but I still appreciate the effort. Thanks

    • anotherengineer
    • 8 years ago

    Well, I used to work at a concrete plant, outside in the sun, and I can say that I have NEVER seen the top of a jersey barrier glow like that in the sun. And what's the mess on the ground beside the barrier? Are those supposed to be leaves? I have seen better detail from the Source engine.

    Just fail

    Turn off that HDR!!!

    • Chrispy_
    • 8 years ago

    Very nice exposé.

    Crysis ran like crap on hardware of its day because the engine wasn’t optimised very well.
    Crysis 2 runs like crap on hardware of today, because the engine isn’t optimised very well.

    How incredibly surprising.

    Nvidia bribing Crytek to pointlessly push tessellation to the max, for a noticeable performance advantage over their major competitor?

    I am, yet again, shocked by this ‘discovery’

    I miss the good old days of Carmack when you were genuinely amazed at the efficiency and speed of an engine. Today’s solution seems to be “we can’t be bothered to code properly, just throw more GPU horsepower at it.”

    If developers are capable of making optimisations to get good graphics on limited console hardware, they’re equally capable of optimising the new DX11 features to improve, rather than worsen the user experience. Personally, I’d take “very nice graphics” at 60fps over “slightly better graphics” at 40fps any day of the week.

    • drfish
    • 8 years ago

    Got a chance to actually read this. Wow, just wow.

    Reminds me of the Civ5 thing I mentioned to you awhile back… With dual 5870s (@ 5540×1050) my frame-rate goes from 18-20fps to 45-55fps just by turning tessellation to low – with no difference in graphics quality that I can detect. Would be curious to see the polygon count there… Might have to try that tool myself…

    • Pax-UX
    • 8 years ago

    Can't wait for the SETI@home integration patch to make those graphics look more awesomer while searching for real Aliens!

    It's great fun to watch stupid people being stupid in new and interesting ways… stupid people FTW!

    The sea issue seems to me like a single infinite plane with deformation added; an easy thing to code, but adding clipping would be costly. The problem is the order of the clipping: Tessel -> Clip or Clip -> Tessel. Assuming an infinite plane, it would always have the possibility of being seen, so you need to add some customization to the engine to turn Tessel on & off by object and visibility… i.e. bounding boxes based on character location + height + direction as a cheap Line of Sight implementation before hitting the rendering engine… then you have to worry about stuff like reflections being correct; not hard to fix, just set up everything as a camera with a LoS and render accordingly.
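
    A minimal sketch of that "turn Tessel on & off by object and visibility" idea (hypothetical plain C++, not CryEngine code): test each patch's bounding box against the six view-frustum planes and hand the renderer a tessellation factor of zero when the patch can never be seen. A reflection camera would simply run the same test with its own set of planes.

    [code<]
    #include <array>

    struct Plane { float nx, ny, nz, d; };   // nx*x + ny*y + nz*z + d >= 0 is inside
    struct Aabb  { float min[3]; float max[3]; };

    // True if the box is entirely on the negative side of any frustum plane.
    bool outsideFrustum(const Aabb& box, const std::array<Plane, 6>& frustum) {
        for (const Plane& p : frustum) {
            // Pick the box corner farthest along the plane normal (the "p-vertex").
            float x = (p.nx >= 0.0f) ? box.max[0] : box.min[0];
            float y = (p.ny >= 0.0f) ? box.max[1] : box.min[1];
            float z = (p.nz >= 0.0f) ? box.max[2] : box.min[2];
            if (p.nx * x + p.ny * y + p.nz * z + p.d < 0.0f)
                return true;                 // even the closest corner is outside
        }
        return false;
    }

    // Tessellation factor the renderer would use for this patch.
    float patchTessFactor(const Aabb& box, const std::array<Plane, 6>& frustum,
                          float maxFactor) {
        return outsideFrustum(box, frustum) ? 0.0f : maxFactor;
    }
    [/code<]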

    • Meadows
    • 8 years ago

    In a dash of ironic comedy, the last picture in the article (the "complex scene") shows flat wooden planks having three thousand times more polygons than, wait for it, the steel I-beams around the middle of the picture, which still look like we've never left 2007.

    I’m unsure how to describe this. At the very least, I would’ve expected them to bolster the polygon count everywhere and be done with it, but this way it looks [i<]malicious[/i<], like it's [b<]designed to suck.[/b<]

      • derFunkenstein
      • 8 years ago

      Maybe they should quit their jobs and go work for Oreck.

      • ronch
      • 8 years ago

      [quote<]it's designed to suck[/quote<] Yeah. Suck money into Nvidia's pockets, that's what. Who knows, maybe Nvidia told these guys to use lots of tessellation (and make AMD look bad in the process) and use up GPU power unnecessarily. Guess that's the only way to make faster-than-necessary GPUs start to look old and make you want to buy a new video card.

        • stdRaichu
        • 8 years ago

        Don't tell anyone I told you this, but nVidia's next line of GPUs utilise thinking aluminium and denimite mem-shards, which will automatically optimise the graphics.

        nVidia: this way we’re meant to get paid!

          • ronch
          • 8 years ago

          For a second there I thought you said ‘dynamite’. Now that’s explosive graphics if I ever heard of one! 🙂

          • ronch
          • 8 years ago

          [quote<]Don't tell anyone I told you this[/quote<] And you post it here in the forums for everyone to read?

      • sreams
      • 8 years ago

      “So you’re acting now, you’re in a vampire movie, yes? That’s good. Finally, a role that requires you to suck.” – Triumph the Insult Comic Dog

    • holophrastic
    • 8 years ago

    It's a stupidly well-written article; I'm really impressed in every way. Looks like it took a lot of tedious work too.

    But my conclusions are somewhat different. I’m an AMD Radeon customer, very satisfied since my 3dfx days ended. So I should be on the angry side of this fence.

    But I’m not. I’m actually impressed with the technology demonstration shown here.

    Instead of seeing a poorly-executed attempt, or a one-sided competitive advantage, I'm seeing the other side.

    Hey look Ma! If I push these sixteen buttons, I can get it to look even better!

    Forget about the added detail versus GPU effort, and look at the added detail versus developer/designer effort. I’ll bet it took little incremental effort to produce the added detail. That’s impressive as far as technology goes.

    Sure, with more effort and more time and more fine polish, we can take less of a performance hit. But that’s not the idea at this point. The idea is to push the boundaries of detail, not the boundaries of efficiency.

    I remember in 1997, I got a personal tour of Alias Research, aka Alias Wavefront, aka the guys behind Maya, aka the guys behind the software that rendered The Perfect Storm, aka the guys who had a sizable room with a refrigerated computer with, wait for it, 16 gigs of RAM!

    The impressive part was not how few CPU cycles it took to render things. They used to speak of rendering time in years, not hours, let alone real time. They were working on the running yeti, with hair.

    But up on the wall was a poster of a Lamborghini. It was rendered. It was awesome. It was two years of computer time for the single frame. Nobody cared about the computer time. The impressive part was the raytraced detail. It was very much photo-realistic, and every phong counts.

    The same is true here. I don't care about proportionality here. I don't think that was the idea. If you want higher detail, we have higher detail. So sorry it's not efficient today. If you want it to be efficient, you can wait a year. But here's the detail today. Enjoy.

      • APWNH
      • 8 years ago

      I really like the things you mentioned here. It's quite amazing to see just how incredibly tessellated these models are, and it really says something about what our hardware is capable of achieving these days. Yes, it is difficult to overstate just how wasteful it is to be generating geometry for an entire ocean when it is invisible, and subdividing FLAT surfaces, but when you consider from another point of view that all these millions of polygons are in fact being rendered in real time, it isn't so hard to get back that nice warm fuzzy feeling. Because one day all those polygons are gonna be put to good use by some good devs, whether it is crytek or not. And my GTX 560Ti or Radeon 6950 will render it fluidly and interactively. And it will be glorious.

      • ThorAxe
      • 8 years ago

      Agreed. It is a very well written article.

      However, I still thoroughly enjoy the detail when it’s visible in Crysis 2

      • Meadows
      • 8 years ago

      I have no idea how to give you more than 1 minus vote.

      • GrimDanfango
      • 8 years ago

      I work in the effects industry. Rendering time is certainly a major consideration, but I can say with utmost certainty that the work that goes into creating a photo-realistic image far far outstrips the rendering time. Photo-realism doesn’t come from pressing the right button and waiting for a year, it comes from experience, understanding, artistry. Just flicking on “raytracing” and upping the ray-recursion limit will only yield a very accurately calculated unrealistic image. Raytracing is just a tool, and like any tool, it only gives the best results when used by someone with experience and skill.

      That is why this is such a preposterous demonstration of a new technology. Tessellation is a hugely powerful tool, but few developers seem to have actually realised that it will still require some very carefully considered and disciplined research and application to get anything out of it. It is not an automatic “make beautiful” flag… I know this for certain, because *nothing* in computer graphics comes for free.

      Take the Battlefield 3 videos – they are absolutely stunning, really breathtaking in places. I can guarantee this isn’t because they’ve got the latest DX11 bells and whistles switched on. The Frostbite 2 engine is no doubt highly cutting-edge, but it doesn’t mean jack without a team of incredible artists who have a cutting-edge understanding of how to model and texture and light the game photo-realistically.

      I’ve said it for a long time… GPU technology really isn’t as important as people like to imagine. If you put the right team of artists and programmers together, they could build a breathtaking and photoreal game on DX8. They could do more with DX11, sure… but DX11 alone, nor any new technology, will ever magically improve graphics without someone highly skilled putting in the work to get the most out of it.

      Tessellation will only show its true potential when developers realise they have to build their engines and create their 3D assets specifically tailored to tessellation.

        • poulpy
        • 8 years ago

        I have no idea how to give you more than 1 plus vote.

    • r00t61
    • 8 years ago

    Actually, I think those slabs could use some more polygons.

    But seriously, I could read this article over and over. Great work, TR.

    • ThorAxe
    • 8 years ago

    I know it’s fashionable to bash Crysis 2 but I’m going to buck the trend and post some good news for Crytek:

    [i<]The big winner of the evening was undoubtedly “Crysis 2” by Crytek. Europe's gamers have rewarded the FPS not only with the awards for “Best European Action Game,” “Best European Sound,” “Best European Advertisement” and “Best European Art Direction”, but the game was also voted “Best European Game” overall. Crytek’s development HQ was awarded the prize of “Best European Studio”. Because Crytek won so many categories, the host nation, Germany, ended up just before the United Kingdom in the category “Best European Game Country”.[/i<] [url<]http://www.gamasutra.com/view/pressreleases/75977/European_Games_Awards_2011_Winners_Announced.php[/url<]

      • ThorAxe
      • 8 years ago

      Wow, marked down for posting some related news…must be the under 40 set at it again. 🙂

        • wierdo
        • 8 years ago

        Probably because it looks like an ad billboard and is barely related to the actual meat of the topic.

        • Chrispy_
        • 8 years ago

        You sound like a PR marketeer for EA/Crysis. People don’t like that.

        However you try to spin it, Crysis 2 was a rushed, dumbed-down console port by Crytek that represented everything bad about sloppy console ports, and this is in the face of Crytek's promises that they wouldn't abandon their roots and sacrifice PC development to pander to the console market.

        They lied.
        They failed.
        They disappointed.

        This patch, the olive branch they offer in peace, is a shoddy, inefficient abuse of a highly-contested DX11 'feature' which, when abused as badly as it is here, provides negligible IQ gain outside of the improved texture pack, yet has a bigger impact on AMD hardware. Is it significant that this is a TWIMTBP game? Probably, but we can only speculate. Either Crytek were lazy and rushed, or they were bribed by nvidia. Neither possibility makes Crytek look like the good guys.

          • Stargazer
          • 8 years ago

          [quote<]Either Crytek were lazy and rushed, or they were bribed by nvidia. Either possibility doesn't make Crytek look like the good guys.[/quote<] Now, now, let's not rush to conclusions. It's also quite possible that they're simply incompetent.

            • Chrispy_
            • 8 years ago

            Ah, there’s me trying to apportion blame on someone, when in fact the blame is on me for expecting Crytek to do their jobs properly.

            I am so naive sometimes 😉

          • ThorAxe
          • 8 years ago

          I liked the game. It ran like butter for me with DX11 and the HRT patch at 2560×1440 maxed (minus AA) while still looking better than any other game I have seen recently. I still have wow moments that I haven't had since Crysis, despite losing much of the sandbox freedom.

          While I appreciate the article and hope that Crytek will use tessellation more efficiently in the future, perhaps even patch it again (unlikely I know), it didn’t make the game any less enjoyable for me.

        • sweatshopking
        • 8 years ago

        welcome to neckbeard land. people aren’t rational. your post is Fine. we’re discussing crysis two, and whether or not it sucked, and it seems the awards think it’s a good game. personally i hated it, but can appreciate your post. watch out for these nerds. they’re rabid.

          • ThorAxe
          • 8 years ago

          Thanks mate.

          If Duke Nukem was DX11 and had used tessellation efficiently would the game have sucked any less?

      • yogibbear
      • 8 years ago

      Just because someone wins awards doesn’t mean they’re allowed to produce code that lab monkeys would laugh at.

      • rndmuser
      • 8 years ago

      1) Everyone can go to Wikipedia or Metacritic or whatever and find all the relevant reviews/awards for any particular game. Why repost all of such easily accessible info?
      2) Awards don’t mean shit to many people. If you want to rely on someone’s purely subjective opinion – go ahead and do that. I personally prefer to play the game myself (or observe other people actually playing it) and then make my own conclusion, as do many other people.

        • ThorAxe
        • 8 years ago

        Given that this was posted on the day the awards were announced it seemed relevant.

        These awards are voted for, so yes, like just about everything said in this thread, it’s subjective. However, in this instance, it’s the opinion of the majority of people that voted.

        I could ask why post that geometry throughput isn't great on AMD cards. Didn't we know that already? It's not as if the game is unplayable on AMD hardware. I played it all the way through with a 6870 CrossFire setup maxed out at 2560×1440 except for AA and it was fine.

        People complain that it’s not DX11 and then when it’s delivered they complain that it’s not efficient, and when it’s efficient they complain that it’s not detailed enough because it runs too well. Just get over it.

          • DarkUltra
          • 8 years ago

          Hi Nikk!

          I haven't seen a proper response to your post yet. Half-Life 2 was actually optimized with render paths for both GeForce and Radeon. But the GeForce FX architecture was best at Doom 3 type graphics with stencil shadows and per-pixel lighting. The then-current Radeons had better pixel shader 2.0 performance and were faster at Half-Life 2 types of graphics.
          In addition, the GeForce FX was clocked way too high and was dreadfully loud.

          In this case with Crysis 2 we have horribly implemented tessellation, with tons of polygons wasted on flat surfaces. If the level designers had taken their customers seriously and done a proper implementation, then we could boast about how nice and detailed the Fermi architecture (and upcoming Radeons) make games look.

          Edit: sorry reply was meant for Nikk

      • Bensam123
      • 8 years ago

      Brand loyalty went out with the information age. Now people can learn about what they talk about rather than garishly shouting ‘mine’s the best’… although some people still do.

    • Jigar
    • 8 years ago

    Nice find, Damage. Looks like the comments about bad code will keep coming for Crytek…

    • mako
    • 8 years ago

    The water cracks me up for some reason. Water, water, everywhere, nor any drop to drink.

      • jihadjoe
      • 8 years ago

      I’m sure the developers are just being considerate of the guy who might want to do a ‘deep well drilling’ mod.

        • Meadows
        • 8 years ago

        Maybe they want to one-up Minecraft at some point.

    • Bensam123
    • 8 years ago

    I’d bet more on this all being done after the game was released, with none of the planning done beforehand. Someone at the top tasked them with tessellating the game and, being the analytical engineers they are, they started tessellating everything from the ground up, the way all busy bees work. Then someone at the top said ‘we’ve spent enough time on this, release the damn thing,’ and so they released what they had finished.

    The water was just a side effect of the game being made for a console.

    This sort of thing results from neither the people at the top nor the people producing the work caring about what they make.

    • jihadjoe
    • 8 years ago

    So all that hidden geometry is there to make sure anyone with anything less than a top-end GPU basically chokes to death rendering unseen details.

    Great way to move those premium class GPUs!

      • NarwhaleAu
      • 8 years ago

      Anyone with less than a top-end Nvidia GPU.

    • LoneWolf15
    • 8 years ago

    “We’ve heard whispers that pressure from the game’s publisher, EA, forced Crytek to release this game before the PC version was truly ready.”

    Does Electronic Arts ever wonder why the gaming community isn’t very fond of them, or do they just not care?

    I’ve seen too many shenanigans from EA to want to buy games published by them. I can’t recall the last time I bought or played an EA-published game; I actually find myself looking for fun games that aren’t published by EA, and finding plenty to choose from.

      • kc77
      • 8 years ago

      I really wouldn’t place too much credence in the above statement. C2 was a console port, pure and simple. Back in the day you ALWAYS started with the PC version first, with all of the high-poly assets in, and then scaled them down for the console. It’s pretty damn obvious, when the PC version is no better than the console version, that the old-school approach wasn’t followed here. That’s precisely how you get tessellation in areas where you don’t need it, superfluous animation in areas where you can’t see it, and so on. The DX11 path was an afterthought: they grabbed the high-poly/high-res assets and added a few things here and there to appease the masses, instead of developing the game from the start with DX11 in mind.

      • CasbahBoy
      • 8 years ago

      As long as they continue making bank, they simply don’t care.

      • designerfx
      • 8 years ago

      Among games not mentioned: don’t forget Batman: Arkham Asylum, where they did the same thing. Nvidia+EA have continued to do this for a long time.

      Isn’t it ironic that an AMD tool, by comparison, is helping dig up this trend?

    • squeeb
    • 8 years ago

    Wow, talk about shady.

    • sschaem
    • 8 years ago

    Sabotage… I bet the Xbox 360 doesn’t have any of this nonsense.
    We don’t need this DX11 ‘charity’ laced with poison.

    Funny how Crytek does a half-baked, buggy, after-the-fact DX11 hack job, while DICE requires DX10 as a minimum for BF3 and embraces DX11.

    Looking at the tech slides, DICE is one to two years ahead of Crytek.

      • `oz
      • 8 years ago

      Try playing COD: Black Ops on PS3/PC and then on Xbox 360… there is a clear difference favoring the Xbox.

    • Sencapri
    • 8 years ago

    Fail post!! I accept defeat!

      • Sencapri
      • 8 years ago

      Oh wow, my paragraph below is not there… EPIC FAIL.

    • Krogoth
    • 8 years ago

    Tessellation = new FSAA

    enough said

    • axeman
    • 8 years ago

    They found a way to flex all that unused Fermi muscle! On things you can’t see!

    • someuid
    • 8 years ago

    This should become a standard part of your video game and video card reviews, for several reasons:
    – it shines a light on such poor coding (really, an underground ocean? wtf, Crytek)
    – it shines a light on possible dirty punches by AMD and Nvidia
    – it gives us more information to consider when we’re about to plunk down $$$ on these games and video cards.

    As others have said, it is really sad that Crytek did such a piss-poor job of this. Can you imagine how much detail they could have added to the game if they had done their 3D scene generation properly? Can you imagine how much better the frame rates could be? If these folks built cars, we’d all find cast-iron anchors hidden under the back seat. “No wonder my gas mileage is so bad. Screw you, Crytek!!!”

    It almost makes me wish for 2D sprite gaming again.

      • CasbahBoy
      • 8 years ago

      The car analogy (while always a bad idea to bring up) reminded me of the modding community. I haven’t been following whether there are worthwhile mod tools available for the game, but I wonder if a few bored tech-heads will figure out a way to rip that anchor out of the back seat, so to speak.

    • ChangWang
    • 8 years ago

    Wow… talk about sloppy coding. Hmmm… you know, this makes me wonder if they did something similar with the shaders in the first Crysis…

    • CasbahBoy
    • 8 years ago

    Oh my god, I was laughing uncontrollably when I started the second page. After that it just got kind of… sad.

    • I.S.T.
    • 8 years ago

    Sad.

    I agree with removing this from the benchmark suite. It’s just not worth it.

    • BobbinThreadbare
    • 8 years ago

    Good work going through all this. Looks like this game went from “wait for a huge sale and see if it’s not as bad as people say” to “don’t buy unless they fix this” in my book.

      • NeXus 6
      • 8 years ago

      Gameplay is pretty good, but it does lag in spots and the graphics with the DX11 patch, while better, aren’t going to blow you away. Definitely wait for a $20 or less sale on this game.

        • sweatshopking
        • 8 years ago

        i don’t know if i agree the gameplay was pretty good. i’d say for the most part, it’s a step back from crysis 1.

          • NeXus 6
          • 8 years ago

          Parts of it were good, but the AI is definitely worse. You can tell they spent more time on level design, which is pretty awesome compared to Crysis. It’s consolized, but I didn’t think it was that bad compared to other games. The shooting mechanics are basically pointless due to the flaky AI. Lots of options, but they don’t really matter.

            • sweatshopking
            • 8 years ago

            the AI is really bad, much worse than Farcry even. The artwork was better, but i prefer the open world design of crysis vs the corridors of 2. it has more attention to detail this way, but i’d rather run through a cut and paste jungle, than down a straight path, no matter how pretty.

            the physics are much worse in 2, and i’ve posted a video before comparing them. a lot of the graphics are actually worse, including things like light flares, etc.

        • wira020
        • 8 years ago

        I didn’t really enjoy it. It just felt like a generic FPS with better graphics than most. Most of the time I just meleed the Ceph creatures, lol. They die easier that way.

          • sweatshopking
          • 8 years ago

          i melee’d them the entire game. the 4 invis ones, that you can see with cryvision, just got meleed to f too. lame ending.

    • TardOnPC
    • 8 years ago

    Wow. Excellent findings, Scott. No wonder my 5870 chokes up on tessellation in this game.
    D!ck move by Crytek/Nvidia.

      • swaaye
      • 8 years ago

      Well, AMD’s Cypress chips were pretty slow at tessellation anyway. Wasteful use of it isn’t helping anything though, that’s for sure.

    • codedivine
    • 8 years ago

    For now, I am attributing this one to stupidity (or lack of time) rather than malice. The sheer dumbness is rather appalling though.

      • Hattig
      • 8 years ago

      Except that, apart from the sea, the bricks, and possibly some of the wood, it shows a fundamental misunderstanding of tessellation, which is meant to add detail and contours to a flat object, not to tessellate a flat object into a bazillion flat objects.

      And tessellation is meant to use distance-based LOD, so the sea is again unoptimised (rough sketch at the end of this comment).

      I cannot help but think that the nVidia engineers who went to help implement DX11 into the game simply have a mandate to apply stupid levels of tessellation so that their products look better. And that just harms everyone.
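
      Roughly, the kind of distance-based factor a hull shader’s patch-constant function ought to compute looks something like this C++-style sketch (illustrative only; the distance range and the 1–64 factor cap are made-up numbers, not anything from CryEngine):

          // Sketch: pick a tessellation factor per patch from its distance to the camera,
          // so faraway water/terrain patches get few extra triangles and nearby ones get many.
          #include <algorithm>
          #include <cmath>

          struct Vec3 { float x, y, z; };

          static float Distance(const Vec3& a, const Vec3& b) {
              const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
              return std::sqrt(dx * dx + dy * dy + dz * dz);
          }

          // minDist/maxDist and the 1..64 factor range are illustrative values.
          float TessFactorForPatch(const Vec3& patchCenter, const Vec3& cameraPos,
                                   float minDist = 10.0f, float maxDist = 200.0f,
                                   float maxFactor = 64.0f) {
              const float d = Distance(patchCenter, cameraPos);
              // 0 when close (at minDist), 1 when far (at maxDist), clamped in between.
              const float t = std::clamp((d - minDist) / (maxDist - minDist), 0.0f, 1.0f);
              // Close patches get the full factor; distant ones fall back to 1 (no subdivision).
              return 1.0f + (1.0f - t) * (maxFactor - 1.0f);
          }

      With something like that in place, the sea in the distance would cost a fraction of what a fixed factor costs.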

      • Alexko
      • 8 years ago

      No one would be dumb enough to use thousands of triangles for a strictly flat surface. This has NVIDIA written all over it.

        • Peldor
        • 8 years ago

        Well, at least one other possibility has occurred to me. Crytek could just be screwing with us. “You wanna whine about DX11 and console-itis? Okay, here’s 250,000 tessellated, 100%-certified, Grade A DX11 polygons. On the floor!”

      • Kaleid
      • 8 years ago

      I think you’re wrong. They have done things like this before, like the 64-bit Far Cry patch for high-res content (when 64-bit wasn’t needed) and requiring DX10 to unlock the highest settings in Crysis (again, not needed).

      DX11 sells graphics cards just like 64-bit sold AMD64 CPUs.

    • flip-mode
    • 8 years ago

    This is a situation that is too stupid to adequately express. I really can’t muster a comment that packs enough ridicule of Crytek into it. I can only say that I’m completely disgusted and I’m glad I don’t have the game as I would feel bad for giving Crytek one cent for such garbage.

    Thanks for the investigative reporting. For what it’s worth, I’d vote you totally exclude such garbage software from your benchmark suite.

      • Farting Bob
      • 8 years ago

      Or you could just play the game anyway, without the DX11 ultra settings on if your GPU can’t handle them. It’ll still look as impressive as this article shows. Unless you are a massive fan of metal caps on top of concrete blocks, it would seem the DX11 path doesn’t really add much; if it’s hard to tell the difference in a screenshot, then you have no hope of seeing the difference in-game.

        • flip-mode
        • 8 years ago

        Well, that would be missing the point. The game looks plenty good without DX11 – who knows, perhaps with tessellation done right the DX11 settings would have made a meaningful difference. But the point of refusing to buy the game is to refuse to buy a product that treats the PC gamer as a second class citizen.

        I’d rather EA and Crytek exit the PC market entirely than stay in it just to offer buggy, poorly coded, poorly ported crapware games.

        However, if none of that bothers you that is not my business and I won’t begrudge you buying the game.

          • NarwhaleAu
          • 8 years ago

          I own a Radeon. Seems like the game was intended to make my card look bad. I’m considering pirating this game out of spite… but I don’t really have any urge to play it (or I would have bought it already).

      • Meadows
      • 8 years ago

      My thoughts exactly. I’m so appalled at this (and I’m an NVidia user/fan) that I’m at a loss for appropriate words.

      • dragosmp
      • 8 years ago

      While these coding errors, or the lack of optimization, may be interpreted as “designed to hurt Radeons”, they can also be seen as something that was hurried out the door to tick the “now we have DX11 and tessellation” box.
      The game launched on the PC while unfinished, so it wouldn’t be a stretch to say patch 1.9 is unfinished DX11. Crysis 2 should be tested because (some) people play it (this discussion is proof there’s interest in the game), but it shouldn’t be included in the overall value comparison, since it’s obviously buggy, intentionally or not.

        • ThorAxe
        • 8 years ago

        There’s no point being objective here. No one will hear you.

          • flip-mode
          • 8 years ago

           So no objective criticisms can be made here? It seems pretty objective to question the use of tessellation on a bunch of flat surfaces, the persistence of tessellated water geometry in the rendering pipeline even after the water has passed out of view, and the fact that there is no scaling of detail in the tessellated water geometry as it recedes into the distance.
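
           For reference, D3D11 will cull a patch outright when an edge tessellation factor is set to zero, so skipping the out-of-view water only takes a visibility check in the patch-constant function. A rough C++-style sketch of that idea (illustrative only, not CryEngine code; the underground ocean hidden behind terrain would additionally need an occlusion test):

               #include <array>
               #include <cmath>

               struct Vec3  { float x, y, z; };
               struct Plane { float a, b, c, d; };   // ax + by + cz + d = 0, normal pointing into the frustum

               // Signed distance from a point to a frustum plane; >= 0 means inside or on the plane.
               static float SignedDistance(const Plane& p, const Vec3& v) {
                   return p.a * v.x + p.b * v.y + p.c * v.z + p.d;
               }

               // Conservative test: a patch counts as visible unless its bounding sphere
               // lies entirely outside one of the six frustum planes.
               static bool PatchVisible(const std::array<Plane, 6>& frustum,
                                        const Vec3& patchCenter, float patchRadius) {
                   for (const Plane& p : frustum) {
                       if (SignedDistance(p, patchCenter) < -patchRadius)
                           return false;             // wholly behind this plane: not visible
                   }
                   return true;
               }

               // What the patch-constant function would return: zero culls the patch before
               // any tessellation happens; otherwise use a distance-scaled factor like the
               // one sketched earlier in the thread.
               float WaterPatchTessFactor(const std::array<Plane, 6>& frustum,
                                          const Vec3& patchCenter, float patchRadius,
                                          float distanceScaledFactor) {
                   if (!PatchVisible(frustum, patchCenter, patchRadius))
                       return 0.0f;                  // don't spend a single triangle on it
                   return distanceScaledFactor;
               }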

            • ThorAxe
            • 8 years ago

            The article is clearly objective, Scott has done a fantastic job.

            However, some of the comments verge on the hysterical.

            • derFunkenstein
            • 8 years ago

            Scott also didn’t spend a lot of time saying it’s caused by TWIMTBP, which seems to be the overwhelming conclusion in the comments.

            • derFunkenstein
            • 8 years ago

            It’s easy to tessellate flat surfaces and not have everything look like ATi’s TruForm-patented balloon characters, which is probably why they’re doing it here in the first place. I believe there’s more “lazy” than “malice” here, but it’s ridiculous nonetheless.

        • siberx
        • 8 years ago

        It doesn’t matter if “some people play it” or not; if the game isn’t representative of the kind of performance you can expect from your hardware in most applications, there’s no point including it. I could make a game that does a physics simulation down to the atomic level and thus requires a couple orders of magnitude more CPU power than any other game out there; if (some) people were to play that game, would it be sensible to include it in a review and subsequently bash a CPU for failing to keep up when it performs poorly, or to laud a processor that does better, when neither has any effect in any other game on the market?

    • drfish
    • 8 years ago

    Oh oh! I remember talking with you about this over brats! 😀
