CryEngine will add DX12 multi-GPU and Vulkan in future versions

Whether you bleed GeForce green or Radeon red, we can all agree that the potential performance gains to be found using the lower-level DirectX 12 and Vulkan APIs are significant. Crytek, creator and curator of the ever-popular CryEngine, is doing its part to further adoption of the new APIs. The folks at OC3D noticed that Crytek has updated its online roadmap for the engine. That document shows DX12 multi-GPU and Vulkan support as "on target" for upcoming releases.

Chinese-exclusive martial arts MOBA King of Wushu was the first DirectX 12-enabled CryEngine title when it debuted earlier this year, but CryEngine still doesn't support multi-GPU systems in DX12 mode. That will be changing with the projected release of CryEngine 5.4 in late February 2017. Crytek may be showing off the new feature at the next Game Developers Conference, which starts February 27.

Perhaps even more exciting is the "on target" listing for Vulkan support across all platforms, including Windows, Mac, Linux, and mobile. That support is coming even sooner: Crytek's roadmap lists an anticipated launch window of "mid-November 2016" for version 5.3. CryEngine's greatest rival engine package, Unreal Engine 4, already has basic Vulkan support. However, Unreal limits applications using the new API to the mobile feature set of the renderer for now.

Comments closed
    • kamikaziechameleon
    • 3 years ago

    Their game engine is so cool… but they don’t have a darn game in the pipe that is particularly good looking.

      • Farting Bob
      • 3 years ago

      Crytek seems to be becoming more of an engine supplier than a game dev.

      • tipoo
      • 3 years ago

      Their financials aren’t doing too hot

      [url<]http://vrworld.com/2014/08/04/ryse-fall-closer-look-cryteks-dwindling-industry-presence/[/url<]

    • brucethemoose
    • 3 years ago

    Every middleware game I can think of uses either Unity or Unreal.

    Why don’t more devs use CryEngine anyway? It seems like a good engine.

      • danielravennest
      • 3 years ago

      As a former CryEngine developer, I would say it’s uneven support by Crytek. The engine *is* good, and has always had nice features. But sometimes trying to find out what a feature *does* was like pulling teeth. I should point out my experience was with CryEngine 2 in 2009-2010.

    • Voldenuit
    • 3 years ago

    [quote<]we can all agree that the potential performance gains to be found using the lower-level DirectX 12 and Vulkan APIs are significant.[/quote<] (Looks at all the SNAFUs in recent DX12 branch performance in the past six months). Bwahahahahaha. Oh, wait, you were serious. Let me laugh even harder. BWAHAHAHAHA.

      • Theolendras
      • 3 years ago

      You know the meaning of “potential,” right?

        • Voldenuit
        • 3 years ago

        Lots of things have potential, but probable outcomes are what we should be looking at.

        Nearly every DX12/Vulkan title, bar a few (AOTS, DOOM), has only managed to trip itself up with the extra rope it’s been given.

          • Theolendras
          • 3 years ago

          It’s definitely not a magic bullet with a guaranteed free performance boost, I agree, but there’s enough in first-gen titles with relatively modest DX12/Vulkan path optimisation to be optimistic.

    • Freon
    • 3 years ago

    Wondering which version of optimized-for-DX12 it will feature… The Rise of the Tomb Raider version where NV is out ahead, or the Hitman version where AMD reigns?

    I’m dying to get comments from developers in different camps on why we see such strong vendor preference either way. Of course, they seem unlikely to say much, given that the GE/TWIMTBP programs probably don’t much like them telling all, but perhaps TR can investigate?

    Is it better to write code that runs well on AMD vs. NV? Why? What are the real differences in terms of the software engineering effort, the trade-offs?

      • RAGEPRO
      • 3 years ago

      That’s not really a new thing, to be fair. We’ve always seen software which favored one vendor over another.

      The reason the effect seems more pronounced with DirectX 12 is the same reason it is hotly anticipated and exciting when it does get implemented in a game: it is a low-level API that promises the opportunity for tighter optimization by working closer to the “metal”, or the underlying GPU hardware.

      You can probably guess where this is going already, but just in case: when you’re optimizing software, past a certain point, you start optimizing it for specific hardware. This is doubly true when working with software that runs “close to the metal”. By a traditional definition, DX12 applications are still running quite “far” from the “metal”, but compared to DirectX 11, developers might as well be writing in machine language. And like software written in machine language, it doesn’t work all that well when you move it to a completely new processor architecture.

      [b<]TL;DR[/b<], low-level programming has to be done on at least an architectural basis, and NV and AMD graphics chips are [b<]radically[/b<] different at every level.
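
      To make that concrete: the clearest everyday example is that a DX12 application has to synchronize the CPU and GPU by hand with fences, bookkeeping the DX11 driver did invisibly. Here's a minimal sketch of the standard wait-for-the-GPU idiom, assuming an already-created device and command queue; it's an illustration, not any particular engine's code.

      [code<]
      // Explicit CPU/GPU synchronization in D3D12 -- work the DX11
      // driver performed behind the application's back.
      #include <d3d12.h>
      #include <windows.h>

      // Assumes 'device' and 'queue' were created during renderer init.
      void WaitForGpu(ID3D12Device* device, ID3D12CommandQueue* queue)
      {
          ID3D12Fence* fence = nullptr;
          const UINT64 fenceValue = 1;
          device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

          // Ask the GPU to signal the fence once all submitted work completes.
          queue->Signal(fence, fenceValue);

          // Block the CPU until the GPU reaches that signal.
          if (fence->GetCompletedValue() < fenceValue)
          {
              HANDLE done = CreateEvent(nullptr, FALSE, FALSE, nullptr);
              fence->SetEventOnCompletion(fenceValue, done);
              WaitForSingleObject(done, INFINITE);
              CloseHandle(done);
          }
          fence->Release();
      }
      [/code<]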

        • Voldenuit
        • 3 years ago

        [quote<]TL;DR, low-level programming has to be done on at least an architectural basis, and NV and AMD graphics chips are radically different at every level.[/quote<] Yeah, and future AMD and Nvidia architectures will be different from their current-gen (and previous-gen) products, too. DX12 is a lot of rope to hang oneself with, not just for developers, but potentially for users as well. EDIT: Also, let's not forget Intel. Intel's EU architecture is even more likely to change over time than NV's or AMD's, and they are a large chunk of the market. They also neglect and abandon driver development very early. What a mess.

      • Voldenuit
      • 3 years ago

      [quote<]Is it better to write code that runs well on AMD vs. NV? Why? What are the real differences in terms of the software engineering effort, the trade-offs?[/quote<] From reading various developer comments and blogs, it seems that most "good" developers have a different code path for Nvidia and for AMD hardware, as the GPU architectures behave and handle differently. How primitives are stored, how memory is accessed, how loops are handled, how processes are spawned, etc. Even then, because of said architectural differences, the games are faster or slower on one depending on the underlying engine/rendering techniques.
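
      The fork itself often starts with something as blunt as reading the adapter's PCI vendor ID and branching from there. A minimal sketch using DXGI; the enum and function names are made up for illustration, not taken from any shipping engine.

      [code<]
      // Choosing a vendor-specific render path from the PCI vendor ID
      // that DXGI reports. The real differences live inside each path.
      #include <dxgi.h>

      enum class GpuPath { Nvidia, Amd, Generic };

      GpuPath PickRenderPath(IDXGIAdapter* adapter)
      {
          DXGI_ADAPTER_DESC desc = {};
          adapter->GetDesc(&desc);

          switch (desc.VendorId)
          {
              case 0x10DE: return GpuPath::Nvidia;  // NVIDIA
              case 0x1002: return GpuPath::Amd;     // AMD/ATI
              default:     return GpuPath::Generic; // Intel (0x8086) and others
          }
      }
      [/code<]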

      • brucethemoose
      • 3 years ago

      Those are DX11 games with DX12 support tacked on post-release.

      I believe games programmed for DX12 (and with a DX11 backport for the mass of humanity on Windows 7) or Vulkan from the start will behave differently.

        • Voldenuit
        • 3 years ago

        [quote<]I believe games programmed for DX12 (and with a DX11 backport for the mass of humanity on Windows 7) or Vulkan from the start will behave differently.[/quote<] Yeah, those ones will break the instant a new architecture is introduced. So far, Polaris has been an evolution of GCN (it's GCN 4, aka GCN 1.3), and Pascal is essentially Maxwell on 16 nm with some improved scheduling. If and when the next paradigm shift like VLIW --> GCN happens, I fully expect natively DX12/Mantle games to break. Ironically, some form of middleware shim would be introduced, getting rid of the benefits of going 'close to the metal'. EDIT: Worth mentioning that my Radeon 9700 Pro lasted me through several GPU generations, as did my trusty 4870. I don't think my current 1070 will last me anywhere near that long.

      • synthtel2
      • 3 years ago

      I am in game dev (newly, but I’m here to work on this kind of stuff) and would like to answer this kind of question, but it’s not something that lends itself to short answers. I do want to do some real writing on this at some point, though.

      In the end, it’s mostly about the hardware differences we all already know about. Games that spend a lot of time doing rasterization tend to do better on Nvidia and games that spend more time on generalized number crunching tend to do better on AMD, plus or minus a few special tricks each side has. However, the reasons that a game might spend more or less time on a type of work are so varied that I can’t answer it concisely. 🙁

      It’s usually not better in any kind of absolute terms to write code that’s better on one or the other. Bias like that is more often a side-effect of other decisions made for other reasons. Conscious choices to prioritize one or the other are definitely a thing, though. AMD gets a lot more optimization work when consoles are a high priority, and GameWorks is appropriately infamous for bias.

      There is a factor for how much effort is put into the finer points of the code itself, not just the algorithms and architecture. This is highly situational, but occasionally a game just has a bad code path for some hardware.

      I don’t really have enough to go on to say anything about modern CryEngine. There have been a lot of changes since the Crysis 3 days, and I haven’t seen any presentations from them on their tech since Ryse. The documentation also looks pretty sparse for version 5.

      [quote<]they seem unlikely to say much given the GE/TWIMTBP programs probably don't much like them giving the tell-all[/quote<] I don't like those programs, I'm happy to be in a position where I don't have to deal with them, and I'm happy to keep talking as often as I have relevant things to say (aside from obvious NDA stuff, of course). 🙂

    • Concupiscence
    • 3 years ago

    Woohoo!
