Mantle to power 15 Frostbite games; DICE calls for multi-vendor support

Johan Andersson, the man behind DICE’s Frostbite game engine, spoke today at the APU13 conference in San Jose. After getting into the nitty-gritty details of AMD’s Mantle API (more on that as soon as I wrap my head around it), Andersson shared an update about the upcoming Mantle version of Battlefield 4. He also brought up other Frostbite games that will support the API, and he shared his own wish list for Mantle’s future.

The Mantle version of Battlefield 4 is on track to be released as an update in late December. Andersson said creating a Mantle version of the Frostbite 3 engine took about two months of work. The Mantle release’s core renderer is closer to the PlayStation 4 version than to the existing DirectX 11 one, and it includes both CPU and GPU optimizations. Andersson didn’t bring up performance estimates, but other developers who discussed Mantle at APU13 did. Jurjen Katsman of Nixxes, the firm porting Thief to the PC, mentioned a reduction in API overhead from 40% with DirectX 11 to around 8% with Mantle. He added that it’s “not unrealistic that you’d get 20% additional GPU performance” with Mantle.
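
For a rough sense of what those percentages mean in practice, here's a back-of-the-envelope sketch. The per-call CPU costs below are illustrative assumptions chosen to mirror the quoted figures, not measured D3D11 or Mantle numbers:

```cpp
// Share of a 60Hz frame budget spent on draw-call submission, for an
// assumed call count and an assumed per-call CPU cost on each API.
#include <cstdio>

int main() {
    const double frame_ms       = 16.7;  // 60Hz frame budget
    const int    draw_calls     = 5000;  // assumed draw calls per frame
    const double d3d11_call_us  = 1.3;   // assumed CPU cost per call (us)
    const double mantle_call_us = 0.25;  // assumed CPU cost per call (us)

    const double d3d11_ms  = draw_calls * d3d11_call_us  / 1000.0;
    const double mantle_ms = draw_calls * mantle_call_us / 1000.0;
    std::printf("D3D11 : %.2f ms (%.0f%% of frame)\n", d3d11_ms,  100.0 * d3d11_ms  / frame_ms);
    std::printf("Mantle: %.2f ms (%.0f%% of frame)\n", mantle_ms, 100.0 * mantle_ms / frame_ms);
}
```

Under those assumptions, submission eats about 39% of the frame on one API and under 8% on the other, which is the shape of the claim Katsman was making.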

Andersson also revealed another Frostbite 3 game that should have Mantle support “out of the box”: Plants vs. Zombies: Garden Warfare. But it’s not the only one. A whopping 15 other Frostbite 3-powered games currently in development will support Mantle. Based on the slide above, it looks like the Mirror’s Edge prequel and Dragon Age: Inquisition will support Mantle, as will future Mass Effect, Need for Speed, and Star Wars games. That’s in addition to titles from other studios, such as Thief and Star Citizen.

Right now, Mantle only supports GCN-based Radeon GPUs in Windows. Andersson acknowledged that shortcoming, but he was very vocal about his desire for broader support for the API.

For starters, Andersson would like to see Mantle on Linux and OS X. It’s “significantly easier” to build an efficient renderer with Mantle than with OpenGL, he explained, and coupling Mantle with Valve’s SteamOS in particular would make for a “powerful combination.” He also sees potential for Mantle on mobile devices—including those from Google and Apple—on which the API would purportedly allow games to “fully utilize the hardware.”

But the “pink elephant in the room,” as he called it, is multi-vendor support. Andersson made it clear that, while it only supports GCN-based GPUs right now, Mantle provides enough abstraction to support other hardware—i.e. future AMD GPUs and competing offerings. In fact, Andersson said that most Mantle functionality can work on most modern GPUs out today. I presume he meant Nvidia ones, though Nvidia’s name wasn’t explicitly mentioned. In any event, he repeated multiple times that he’d like to see Mantle become a cross-vendor API supported on “all modern GPUs.”

I’ve gleaned more details about Mantle, and I’ll share those with you guys when I’m not scurrying between keynotes and meeting rooms. The sense I get from the developers AMD invited to APU13, though, is that Mantle yields considerable benefits in terms of development flexibility and performance, and it’s worth implementing even in its current, vendor-locked state. Andersson wasn’t the only developer to express a desire for multi-vendor support.

There’s no telling yet whether Mantle will ever become a cross-vendor, cross-platform standard, or whether the future holds something different, such as a competing Nvidia API or a future version of DirectX with some of the same perks. One thing is clear, though: Mantle looks set to shake up the industry in a very real way.

Comments closed
    • maxxcool
    • 9 years ago

    bloat == software vendor issue, not a cpu or gpu issue. sure, dx is slower due to generalized support. i support getting more efficient dx code. but splitting the market between amd and nvidia+intel hurts everyone.

    lastly, the dx-coded console already runs a more optimized driver in a closed environment. mantle in an ”open” environment with vista + win7 + win8 with THREE DIFFERENT WDDMs = CHAOS and lower performance for all. there will never be a ”super api” to replace dx like you’re hoping for.

    • Klimax
    • 9 years ago

    Addendum:
    There is a CUDA rasterizer created under NVidia research:
    https://code.google.com/p/cudaraster/

    I don't have time to read about it in full, so I do not know how complete it is, but it could be used in a CUDA renderer and thus provide the missing link. Also, NVidia may in future include it in their library. (Google also shows a few more projects and papers for "CUDA rasterizer.")

    I don't think anything else is missing, as the entire pipeline of whatever graphical architecture you choose can now be implemented.

    • Klimax
    • 9 years ago

    I know what DXGI is.

    It won’t put me back on square one, because there is no evidence whatsoever that Mantle can even solve the problem AMD claims exists. There are loads of claims, but zero evidence. They didn’t demonstrate anything; they didn’t prove anything. All they have is marketing and PR, nothing technical. (Not even the bloody code they used for their benchmark, which speaks loudly about the validity of their claims.)
    All the profiling I did on simple DX code shows that Draw has most of its overhead in synchronization, which you cannot get around with Mantle, because it is necessary in any multithreaded code; thus not even Mantle will get you much. (I’ll post it if wanted; it is not useful for much more, because the classes for a proper engine never got finished.)

    [s]I don't currently have any CUDA code (didn't get to it yet), but you and AMD and some devs wanted the lowest overhead, so I offer it. You can go nuts with CUDA; you are no longer beholden to any preselected rasterization algorithm, you are then free...[/s]

    On second thought, maybe too radical and direct. Also, I completely forgot what point I originally tried to make and realized I expressed it badly. Fun stuff.

    On the other hand, I don't expect any gains; there are simply other problems to solve first before the number of draw calls becomes important again. (Also, such a metric is usually useless without details about the measurement.)

    ETA2: Forgot to post the actual point I had in mind: the structure of CUDA and its objectives are similar to what AMD presented. Yes, AMD included bloody rasterization and a few more things, but the idea is not new, and I doubt it will work any better than it did previously (especially since we moved on from the fixed-function pipeline and got rid of the supporting code for that hardware). Just another API out there, solving a problem nobody has proven currently exists, which existed in the old system.

    • Klimax
    • 9 years ago

    Well, I might still have some reading to do, but you’ve got whole boatloads to cover to even get close.

    • Klimax
    • 9 years ago

    [s]Note: About CUDA I already wrote elsewhere, so in short: just provide the result to the presentation layer, DXGI, and be done with it.[/s]

    ETA: Scratch that. Just use bloody compute shaders if you don't like interop, think it has some overhead, and think that DX 11 is an overhead monster.
    http://www.alexstjohn.com/WP/2013/07/22/the-evolution-of-direct3d/

    "In fact, GPU architecture became so generalized that a new pipeline stage was added in DirectX 11 called DirectCompute that simply allowed the CPU to bypass the entire convoluted Direct3D graphics pipeline in favor of programming the GPU directly. With the introduction of DirectCompute the promise of simple 3D programming returned in an unexpected form. Modern GPU's have become so powerful and flexible that the possibility of writing cross GPU 3D engines directly for the GPU without making any use of the traditional 3D pipeline is an increasingly practical and appealing programming option. From my perspective here in the present day, I would anticipate that within a few short generations the need for the traditional Direct3D and OpenGL API will vanish in favor of new game engines with much richer and more diverse feature sets that are written entirely in device independent shader languages like Nvidia's CUDA and Microsoft's AMP API's."
    /ETA

    ===

    You take claims by AMD without any reality check, which is not good. Just a hint: I did a VTune analysis on simple DX 11 code, and there isn't much overhead in a draw call besides synchronization, which you cannot work around. At that point, any multithreaded code won't see much improvement from Mantle, because whatever overhead Draw has will be present in Mantle (or in code using Mantle). There is no inherent overhead that Mantle's Draw won't have, and if by chance it doesn't have that overhead, it will simply exist in the user's code instead, i.e. moved a layer up, not really fixing anything or providing anything new.

    And if you want to code it yourself, you could already knock yourself out and use DirectCompute, aka compute shaders (a different branch from the regular pipeline), plus the presentation layer. And you are not beholden to a GPU vendor. (And before you say HLSL: if I want to use HLSL, I can outright forget anything nonstandard, because the key component is then the compiler, and I can just as well use DX/OGL.)

    BTW: This discussion is extremely old, dating back to the beginnings of computer graphics, and the outcome is usually the same. The best devs can use more power, but the lower one goes, the harder it is for the average coder to do the job, and thus new wrappers get written and the wheel gets reinvented. And we are back at square one. It is a cycle; it is not new and it won't be the last, because ultimately complex stuff gets encapsulated again, be it by the best devs themselves or by middleware providers, and the whole advantage is lost again in layers of supporting code.

    ETA: Removed a personal opinion of another poster. I don't have evidence to back it, so I removed it, and in any case I don't think it belongs in this post. (Aka, maybe don't post in a rush... and I had time.)

    =

    Frankly, why doesn't AMD publish the whole benchmark with code? Why don't we get many more details? Because their claim is bunk against DX11 and at best is the result of a comparison against DX9 (the same bloody PR trick as Valve's), so they can't be open about it; otherwise we would immediately know how much PR BS we are being fed. Whatever overhead was in DirectX got lost with DX 10 and 11.

    • Klimax
    • 9 years ago

    NVidia: Trade Mantle for CUDA, or nothing. (At which point we are back at the original situation, where NVidia offered to license CUDA to AMD, but AMD refused.)

    • Dysthymia
    • 9 years ago

    Nvidia,

    Trade you a license to use Mantle for a license to use G-Sync & PhysX.

    -AMD

    • marraco
    • 9 years ago

    It’s not just a 20%. It unlocks future paths.

    Do you realize that a decade-old console GPU performs about as well as a 10x more powerful PC GPU?

    That’s because the PC GPU wastes 90% of its power on call bloat (and also on the poor optimization that DirectX allows).

    • clone
    • 9 years ago

    glide was closed, Mantle is open, glide is not mantle.

    • sschaem
    • 9 years ago

    Take a break, you are digging yourself deeper…

    CUDA does not support rasterizing; in contrast, that is one of Mantle’s core design goals: to efficiently execute draw calls (the execution of a vertex shader on streams, associated with pixel shaders). Again, something CUDA does not support; you HAVE to use interop with another API (OpenGL/D3D)… and back to square one.

    Read this from Dan Baker (a man who spent many years of his life around Direct3D at Microsoft):
    http://www.oxidegames.com/mantle

    • sschaem
    • 9 years ago

    ??? How can you talk with that much confidence when you don’t even understand CUDA?

    CUDA doesn’t support direct rasterizing (the execution of vertex streams associated with pixel shaders).

    Just spend a moment reading the links you find; you might learn something.

    • sweatshopking
    • 9 years ago

    LRN2PST
    HERE’S HOW ITS DONE: AMD IS FOR POOFACES AND THEY STUPID.
    UNINSTALL YOUR BROWSER
    NEVER PLAY AGAIN, SON

    • sweatshopking
    • 9 years ago

    I plused you for saying “flinging yourselves in traffic”

    • sschaem
    • 9 years ago

    Please, at least read the links you are posting. DXGI is not what you think it is.

    Again, using interop to leverage D3D or OpenGL with CUDA puts you back at square one in terms of API efficiency.

    Care to post a code snippet that shows how you render a triangle mesh using CUDA & DXGI?

    • jihadjoe
    • 9 years ago

    You gotta admit that’s a pretty crummy deal if you did indeed buy it in the last 3 weeks considering you can get the GTX660, HD7870 or R9-270 for similar money.

    • jihadjoe
    • 9 years ago

    I wish Valve would follow EA in just ONE step and release a game with the number “3” on it…

    • jihadjoe
    • 9 years ago

    ahh but 50 fps gets you 10 more frames, and puts you over the magic 60fps barrier.

    • Pwnstar
    • 9 years ago

    I don’t mind some tearing. I’m used to it.

    • MarioJP
    • 9 years ago

    I honestly don’t care about these API wars. I see this as a new emerging technology. Let it go, dude; no need to get hot and heavy over a single API only because it’s native to “Linux.” Games don’t care what platform they run on; they just run.

    • jihadjoe
    • 9 years ago

    wait, you actually read all that?

    • Klimax
    • 9 years ago

    You’re wrong on PhysX. CUDA can be used on other hardware, as it was designed to be portable. NVidia even offered to license it to AMD. (And with no talk about changing architecture; AMD refused on the grounds that they don’t support proprietary technology. Ironic, considering Mantle.)

    • Klimax
    • 9 years ago

    Apparently not to the full extent. You don’t see NVidia talking about issues like stagnation of technology. (They got some of their tech accepted into DX 11.2.)

    And you know why Microsoft takes its time? Because they want to ensure that no GPU vendor can get vendor lock-in tech in there. This has been more or less the overriding principle since the beginning of DirectX: it must allow all GPU vendors an equal opportunity to implement DirectX functionality.
    Details at:
    http://www.alexstjohn.com/WP/2013/07/22/the-evolution-of-direct3d/

    Mantle is just another money sink, not providing much of use, just another proprietary API. (And if I want to go low-level, I already have too many means in DirectX; no need for Mantle.) CUDA lacks just a presentation layer, but for that you already have a layer in Windows:
    http://msdn.microsoft.com/en-us/library/windows/desktop/bb205075(v=vs.85).aspx

    • Klimax
    • 9 years ago

    You don’t need OpenGL or Direct3D anymore. From DirectX 10 onward, the presentation layer was separated from the DirectX runtime, so you don’t have to care that much about any supposed overhead.
    http://msdn.microsoft.com/en-us/library/windows/desktop/bb205075(v=vs.85).aspx

    You can’t go any lower... (because by that point the user/kernel mode transition is already the more significant problem)

    BTW:
    http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#direct3d-interoperability

    So you don’t lose that much there either, but if you go outside of Direct3D, then just skip to DXGI.

    • Klimax
    • 9 years ago

    You know, evidence that an NVidia engineer said it? As for console developers, they want frozen technology, not low overhead. They don’t want to deal with variables; they want to code once and forget. They want to make the PC like the consoles, nothing more. (And nothing better, because they want to remove the primary advantage of PCs.)

    How do you know DX has large overhead? Have you analyzed it yourself? Where is your proof? Limitations… there are none. You’d have to actually know how the API looks, and what it does and how, but you don’t appear to know; you just parrot what others say, hoping they might be correct. Problem is, they are not.

    Trailing behind HW is easy to explain: this is a years-old debate between GPU vendors, who want to throw as many proprietary features as possible into any standard so they can gain an advantage. We have been there already; it was called NPatches and the first version of tessellation. Forgotten tech, because AMD and NVidia had incompatible ways of doing it, so only a few games supported it, and a year later it was all removed from the drivers. You want a proper procedure for the inclusion of features to prevent fragmentation, but you wouldn’t know that, because you never had to deal with CAPS or the bloody hell known as OpenGL extensions. (Or you work for somebody who profits from such a mess and thus loves such fragmenting.)

    Your reality is an alternate universe and thus no more valid than 2+2=9, because you don’t even know what GDI is at its core, what its purpose was, and why it was replaced, and I doubt you know much about its replacement. Also, your statement is wrong regardless of the DirectX version, which BTW is currently 11; your statement cannot apply to it, but you would have to know and understand that API.

    Lastly: if you mean “x86 virtual memory spaces,” then unless I missed something:
    http://msdn.microsoft.com/en-us/library/windows/hardware/ff570136(v=vs.85).aspx
    http://msdn.microsoft.com/en-us/library/windows/hardware/ff568296(v=vs.85).aspx

    If you mean caches, those should be GPU-private and managed by the driver. (See Crystalwell and how well it worked for Intel.)

    And TrueAudio? There is a library for such cases called OpenAL. (It used to be open source...) As for the API in Windows, originally it would be handled through DirectSound as HW acceleration, but either Core Audio could be used (a low-level API) or maybe XAudio2. (Can't find info on HW acceleration and low-level there either.)

    ===

    Frankly, your post is ignorant, parroting, and wrong.

    • maxxcool
    • 9 years ago

    you sir are the 1:10000 who can.

    • maxxcool
    • 9 years ago

    but seriously.. why do people hate tomatoes ?

    • maxxcool
    • 9 years ago

    hahahahahahahahha +1111

    • maxxcool
    • 9 years ago

    ?? o.0 … umm no? Oregon 🙂

    • michael_d
    • 9 years ago

    Frostbite is not a cutting-edge engine by any means.

    Has any other leading engine developer (id, 4A, Crytek, Epic) expressed interest in Mantle?

    Other than being close to the metal, can Mantle deliver superior visual fidelity as opposed to DX11?

    • alienstorexxx
    • 9 years ago

    this looks promising for the future of gaming… but even more for amd.
    now it has been announced that ps4 will include amd’s true audio chip.

    http://www.youtube.com/watch?v=Z5tBolPan8U

    now let's see if the xbox one has it too.

    • sschaem
    • 9 years ago

    It’s not just AMD making those claims about Direct3D… it’s also an nvidia engineer, and an army of independent console/game developers.

    Direct3D on Windows carries large overhead and limitations, in both features and design.
    It’s also trailing the HW feature set by 2+ years.

    The reality of today… D3D is the new GDI.

    So on a 7790… what driver function does AMD write to expose its virtual memory architecture to D3D?
    As a game developer, what DirectX API do I use for full TrueAudio support?

    • sschaem
    • 9 years ago

    How do you use CUDA to generate draw calls?

    Answer: you need OpenGL interop (or Direct3D)… and we are back to square one.

    • sschaem
    • 9 years ago

    That’s assuming the engine cannot perform some tasks more efficiently using Mantle.

    It’s like comparing some blur kernel implemented with a pixel shader vs. a compute shader.
    A compute shader, on the same card, producing the same end result, can do things far more efficiently.
    So you reduce power usage and execution time, leaving more for the rest of the pipeline.

    So it’s not a given that even an overclocked i7-4930K with a water-cooled R9 290X would see no benefit.

    The biggest gain might actually be related to frame time variance…

    • sschaem
    • 9 years ago

    ??? You don’t think AMD (and ATI) have worked with Microsoft for the past decade-plus?

    The issue is that Microsoft sees D3D on Windows as a high-level generic graphics API, and it’s like pulling teeth to get anything done.

    As more details emerge, Mantle will start to make more and more sense to the geek and the gamer.

    BTW, CUDA != Mantle. You can’t interchange the APIs in terms of functionality.

    • Klimax
    • 9 years ago

    Addendum:
    Hereby I upgrade their idea from terrible to just bad.
    Reason: although they can avoid most of the future performance-compatibility problems, they will need multiple extra teams for that. Considering their financial situation, I don’t think it was a wise idea. The best bet, IMHO, was working with Microsoft and Khronos to improve their own APIs and associated libraries.

    • Klimax
    • 9 years ago

    Frankly, AMD’s PR department still sucks and hasn’t learned from Bulldozer.

    It was supposed to be “close to metal” and thus a low-level API. It ain’t so. It is a copy of CUDA and its model. That, however, will increase the importance of software engineering, and I don’t think they are up to the task yet.

    They need a perfect HLSL compiler to beat DirectX and driver-level optimizations. Their library will have to be developed absolutely tightly to avoid any inefficiency; otherwise they lose. Their driver will still need a lot of attention (what happened to the rewrite of memory management?), so if they don’t keep it well written, they’ll lose again.

    And I don’t see how it can help with OpenCL (for high-margin applications) or other CUDA-adjacent fields, so there is a question of whether it is worth the cost.

    As for NVidia and Intel, they are likely to ignore it. NVidia has its CUDA and a fully developed ecosystem, seconded by vendor-neutral APIs like DirectCompute and OpenCL, while Intel focuses on DirectCompute and OpenCL, skipping vendor-specific APIs. Neither company will help AMD gain a massive home-turf advantage by adopting its API. (The extra costs alone kill the idea.)

    ETA: Link to presentation:
    http://www.geeks3d.com/20131113/amd-mantle-first-interesting-slides-and-details-target-100k-draw-calls-per-frame/

    • Klimax
    • 9 years ago

    You accuse others of FUD, yet you spread it yourself. Nice…

    Just one thing: saying Intel’s IGPs will always be second to AMD’s is ignorant. Hint: the Iris IGPs are a hint of how things are shaping up.

    Dual graphics won’t annihilate anything, because the APU won’t have a sufficiently strong CPU, the GPU part will be throttled to oblivion, and your battery life won’t be pretty. (Oh, and I’d be careful about heat too.)

    And trying to compare Iris against dedicated chips is a very bad idea, because such a comparison invalidates your argument.

    As for NVidia, they have their CUDA + PhysX, their lead in tessellation is great, and you shouldn’t forget that NVidia’s driver team is extremely good at optimization, so you can’t claim they cannot match Mantle. (As it is not a low-level, low-overhead API, but just another CUDA copy, and thus very dependent on the driver and library teams’ expertise.)
    And considering the majority of NVidia’s income comes from the professional sphere, you FUD valiantly but futilely.

    • Klimax
    • 9 years ago

    How often do you have a low thread count?

    Consider that the scheduler would need to factor in all of the following:
    -one thread per module, to avoid resource contention
    -yet as few active modules as possible, for turbo boost
    -yet you want threads sharing a working set on a common module, to avoid cross-L2 traffic and maximize utilization of the L2 caches
    -yet you want the threads on a module to have the right instruction mix, to avoid various contentions

    Not solvable in general, and even with tight control it is extremely hard; a single interrupt can destroy the entire careful orchestration.

    • Klimax
    • 9 years ago

    You treat redesigning an architecture as trivial??? That is not even wrong.

    • Klimax
    • 9 years ago

    Well, as it turns out, AMD’s PR is once again a poor excuse for its intended function.

    It is, in the end, just a copy of CUDA, and thus a high-level API.
    http://www.geeks3d.com/20131113/amd-mantle-first-interesting-slides-and-details-target-100k-draw-calls-per-frame/

    Compare:
    http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#programming-model

    (Just hope they at least got one thing right this time.)

    • Klimax
    • 9 years ago

    Too visible, especially in particular genres.

    • Klimax
    • 9 years ago

    If nobody from a neutral party worked on it, don’t complain that a vendor in the related field does the work for its own advantage.

    Ask those on the standards bodies, or the display vendors, why they let NVidia get there first.

    • Klimax
    • 9 years ago

    I said some companies just can’t learn from the past and repeat their mistakes again. The bad thing is that it is done by companies who can’t afford such stupidity…

    • Klimax
    • 9 years ago

    What burn? There is no burn, just the ignorance of others.

    HLSL is a high-level language derived from C, created by Microsoft, and absolutely dependent on the compiler; thus you can just as well ignore Mantle, as you won’t gain anything. (Just another set of bloody bugs to solve, in an AMD-provided compiler.)

    You want to see the sort of issue there is to solve? Look back at the GeForce FX and the original compiler provided by Microsoft, which was based upon work by AMD…

    • Klimax
    • 9 years ago

    There was an article on this recently. (ETA: http://hexus.net/gaming/news/pc/60729-amd-spent-much-8-million-eadice-battlefield-4-deal/ ; I bet most of it went to Mantle.)

    Sorry, but HLSL will not get you far enough without massive investment in the compiler. (Because you are still stuck in the high-level world, and thus there is no advantage for Mantle.) Or you skip HLSL and go ASM, and then goodbye, portability. Good examples were Global Illumination in Dirt 3 Showdown, or TressFX.

    Sorry, but I know HLSL, and if you stay there, then you can just as well forget the bloody idiocy that is Mantle, because there will be zero gains for all that trouble.

    • Klimax
    • 9 years ago

    Finally got ahold of more information, and it is just a pure copy of CUDA.

    http://www.geeks3d.com/20131113/amd-mantle-first-interesting-slides-and-details-target-100k-draw-calls-per-frame/

    Meaning their PR was, from a technical standpoint, again badly done. Like always. Also, don't expect much, because it is not a low-level API, and thus you don't get rid of much overhead.

    And so nothing interesting appeared, and Mantle can be relegated to the copies... (Apparent successor of Stream...)

    • Klimax
    • 9 years ago

    Considering this is just a copy of CUDA, I don’t see why NVidia would waste any effort on it.
    http://www.geeks3d.com/20131113/amd-mantle-first-interesting-slides-and-details-target-100k-draw-calls-per-frame/

    • Pwnstar
    • 9 years ago

    Way of the future: http://www.youtube.com/watch?v=4_Pbx9mvWPY

    • Pwnstar
    • 9 years ago

    No, it is Frostbite that works with Mantle, not the individual games. So once they added that, all the new games have it. Battlefield 4 was simply the first to come out.

    • Pwnstar
    • 9 years ago

    If you care enough about 20% more GPU performance, you would have bought a recent card anyway, so this isn’t an issue.

    [quote]That also raises the issue of pre-GCN AMD cards[/quote]

    • Pwnstar
    • 9 years ago

    Not sure what you are talking about. I can easily see the difference between 60 and 120 FPS.

    • LostCat
    • 9 years ago

    Erm, no? People can collaborate on something without it being open source.

    • Bensam123
    • 9 years ago

    Yes. Optimized for GCN doesn’t mean Nvidia is out of the game, or that they can’t make their architecture work well with Mantle (or even just work with it). AMD even implied that Mantle would work with ‘other vendors’ as it is… I think someone didn’t read the news post very well.

    Whether they’d have to redesign their architecture or simply make some driver changes to support it is up in the air, but they still can support it. It’s not like PhysX, which is vendor-locked to Nvidia cards only.

    • auxy
    • 9 years ago

    As far as I have ever been able to tell, ImgTec act like a bunch of giant hipsters when it comes to talking about other people’s graphics technology. “Well, I mean, I guess the performance is [i]okay[/i], for an [i]immediate-mode renderer[/i].” As if tile-based deferred rendering were the second coming of graphics technology.

    Or maybe that's just their fanboys. (¬‿¬)

    • renz496
    • 9 years ago

    so that means the slide above is just wishful thinking from Andersson?

    • l33t-g4m3r
    • 9 years ago

    Looks interesting. I’m not a fan of COD or BF, but PvZ might get a curiosity purchase, provided it’s not f2p/p2w or something.

    • NeelyCam
    • 9 years ago

    I don’t hate New Jersey. In fact, most of New Jersey is quite beautiful.

    • superjawes
    • 9 years ago

    In case you haven’t picked up on it yet, complaining about downvotes is a perfect way to get more downvotes.

    • NovusBogus
    • 9 years ago

    Nvidia adoption is absolutely going to determine whether Mantle is the way of the future or just another bullet-point DX supplement that nobody cares about.

    • Chrispy_
    • 9 years ago

    Are you from Norway?

    • mesyn191
    • 9 years ago

    Same way they support DirectX, which is proprietary and is ultimately a standard only because MS pushed it so hard and has an OS monopoly.

    • maxxcool
    • 9 years ago

    ROFL -4 …

    • maxxcool
    • 9 years ago

    ew…. EAos ? nooo….

    • the
    • 9 years ago

    Knowing EA, they’ll just attempt to follow in Valve’s footsteps and create an Origin OS packed with EA games. I shudder at the thought.

    • the
    • 9 years ago

    Actually, it’s quite simple for schedulers with a low thread count: you assign one active task per module before going to two threads per module (see the sketch below). That’s most of what the Win 7 to Win 8 scheduling change does.
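
    A minimal sketch of that placement policy, for a hypothetical 4-module, 8-thread Bulldozer-style CPU (the core numbering, with module m owning logical cores 2m and 2m+1, is an assumption for illustration):

```cpp
// "One task per module first" placement: fill each module's primary core
// before doubling up on any module's sibling core.
#include <cstdio>

int pick_core(int task_index, int num_modules) {
    const int slot = task_index % (num_modules * 2);
    return (slot < num_modules)
        ? slot * 2                       // first pass: cores 0, 2, 4, 6
        : (slot - num_modules) * 2 + 1;  // second pass: cores 1, 3, 5, 7
}

int main() {
    const int num_modules = 4; // hypothetical 4-module / 8-thread CPU
    for (int task = 0; task < 8; ++task)
        std::printf("task %d -> core %d\n", task, pick_core(task, num_modules));
}
```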

    • LostCat
    • 9 years ago

    They said it isn’t open source, in a tweet somewhere.

    • LostCat
    • 9 years ago

    It’s important for APU systems as mentioned, since they don’t tend to have as much horsepower to begin with.

    • ShadowTiger
    • 9 years ago

    I think this article gives some good insight into Mantle from a developer's point of view.

    http://www.gamesindustry.biz/articles/2013-11-01-escaping-the-shooter-mold-how-oxide-plans-to-revive-the-rts

    • geekl33tgamer
    • 9 years ago

    I’m actually excited for this, and I really wish AMD well in pulling it off. It would be nice to have games target this more console-like API. If AMD sticks to what the slides say, then it looks like it will work on nVidia too, and I guess Intel graphics.

    It would also be OS-independent, I assume? That would be neat…

    • LostCat
    • 9 years ago

    BF4 was just patched.

    Also, it isn’t tied to AMD’s GCN architecture; that’s simply what it works on now. “Tied to” would be like PhysX, which will never run on anything but NV hardware because NV were and are being dinks.

    • Billstevens
    • 9 years ago

    Fucking Origin is one of those sites that won’t accept my credit card or PayPal payments… go crawl back to Steam and make it easy to buy your games again…

    • maxxcool
    • 9 years ago

    no, im facepalming..

    • Billstevens
    • 9 years ago

    Is anyone else laughing at Plants vs. Zombies running on this engine? It’s going to look like a freakin’ awesome claymation-quality garden. That screenshot could easily be rendered in-game.

    • Modivated1
    • 9 years ago

    Obviously some of us are missing the big picture here. When the Mantle announcement was made, DICE had only agreed to support one game. It’s been a little while since then (about a month or two), and DICE has had time to play with the Mantle project.

    Think about that.

    After a month or two of experimenting with the Mantle project, they have signed on to produce well more than 15 games with it. They have already gotten an additional 20% performance out of it, and that does not mean it’s the limit of Mantle; it means that without going through a lot of red tape, working with something new, they have already gotten tangible gains.

    The point I am trying to make is this: they began by only sticking a big toe in the water (i.e., trying it on one game, Battlefield 4); then, after they had sampled it, they decided to jump in. Meaning, whether they are saying everything here or not, there is something they find extremely worthwhile about what Mantle can do. I don’t care how much someone is willing to pay you; you never bet the whole house (business) on something unless you think it’s going to pay off for years to come.

    • Blink
    • 9 years ago

    I won’t comment on BF, as I don’t play. On the Mantle/GCN side: these are DICE slides; I think they are saying that Mantle was designed in a way that it “could” be used cross-architecture, not hard-coded to work with GCN only. All of which ties into the very next bullet on that slide.

    I’m really looking forward to seeing what becomes of Mantle and G-Sync. Both of these technologies have the potential to benefit consumers. I doubt the corporate leadership can get out of its own way, though. I want the suits and ties to show us consumers why we should care about these things. I’m not interested in talking points and promises.

    • d0g_p00p
    • 9 years ago

    That’s cool and all, but they need to focus on the PC version’s numerous bugs, which cause the game to crash every 20 mins or so, or leave QTEs untriggered so you are running around the map wondering what to do.

    I had a blast with BF3 online; with BF4 it’s not even possible, since every time you crash you lose progress. Fix the game first, then worry about optimizing it.

    Also, LOL at the slide that states “Not tied to AMD’s GCN architecture” while Mantle only works on AMD hardware.

    • maxxcool
    • 9 years ago

    I like tomatoes ? (-1??)

    • superjawes
    • 9 years ago

    Not true... do people still not understand how reconstruction works?

    Okay, let’s say you are running 120 FPS (consistent) @ 120 Hz with V-Sync off. That sounds great, but if the refresh isn’t synchronized, you get image tearing, which is bad. So you turn V-Sync on. Now, when you drop closer to 90 FPS, you’re pulled all the way down to 60 FPS (or lower). Even though your system is doing better than 60 FPS, you don’t get that performance, because V-Sync locks to fractions of the maximum refresh rate.

    Now compare to G-Sync. Since the refresh doesn’t happen until the frame is ready, you get no tearing. And unlike with V-Sync, when your rate drops from 120 to 90 frames per second, you keep all 90: a 30 FPS advantage over V-Sync.

    G-Sync can improve the end results across the board, not just in “low frame rate” situations.
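
    To make the arithmetic concrete, here's a tiny sketch of the effect described above: with V-Sync, each frame is held until the next refresh boundary, while a variable-refresh display shows it as soon as it's ready. (This ignores triple buffering and assumes a perfectly steady render rate; the numbers are illustrative.)

```cpp
// Effective display rate at a fixed 120Hz refresh: V-Sync rounds each
// frame time up to a whole number of refresh intervals.
#include <cmath>
#include <cstdio>

int main() {
    const double refresh_hz   = 120.0;
    const double render_fps[] = {120.0, 90.0, 60.0, 45.0};

    for (double fps : render_fps) {
        const double frame_s = 1.0 / fps;
        const double tick_s  = 1.0 / refresh_hz;
        const double vsync_s = std::ceil(frame_s / tick_s - 1e-9) * tick_s;
        std::printf("render %5.1f fps -> V-Sync %5.1f fps, G-Sync %5.1f fps\n",
                    fps, 1.0 / vsync_s, fps);
    }
}
```

    Rendering at 90 FPS on a 120 Hz panel displays at 60 FPS with V-Sync but at the full 90 FPS with G-Sync, which is where the "30 FPS advantage" above comes from.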

    • maxxcool
    • 9 years ago

    if it was opensource I would donate money to $%^& MS..

    • maxxcool
    • 9 years ago

    Not bitter, but we *have been here before*: glide. this is dividing resources, segregating the market, hurting gamers who don’t have GCN cores, and upping dev costs, all for a lousy 12% fps increase that will only have dramatic effects for people who don’t even need it.

    If this had been open source from the beginning and not some BS proprietary stealth project, i would be all for it, as NV/intel could also write code for it (ok, intel would likely not… but you see what i’m saying).

    Gsync = BAD for the market
    Mantle = Bad for the market

    anything that causes a buyer or oem to be vendor-locked is just bad.

    i mean, how’d you feel if airbags were illegal in every car BUT a damned audi? you’d be pissed. (yes, a bit excessive as an example)

    • maxxcool
    • 9 years ago

    -1, nice job fanbois. so i cant hate EITHER company ? HPc’s…

    • maxxcool
    • 9 years ago

    wow, -34 and no bet takers. fanbois hedging their bets? why is that… oh yeah, amd/ati under-delivering over and over… shocker.

    look, i have 3 amd rigs, all of them 1100Ts running at 3.9ghz… **i like (previous) amd**. but this is the $%^&ing dumbest move they have made to date.

    And after blowing giant wads of cash in a year of MASSIVE LOSSES AND LAYOFFS, their solution is to vendor-lock their customers? … jebus hamsandwiches, people… if NV or intel did this, you fanboy nutballs would be jabbering from the rooftops and flinging yourselves into traffic.

    • maxxcool
    • 9 years ago

    gsync is yet another crap standard segregating the market. i despise that as well..

    • maxxcool
    • 9 years ago

    i’m referring to eye registration.. we’re not cats. but yeah, you are technically right that with sync off and a 120hz lcd you would still register the increase in a benchmark.

    however, unless the framerate drops under 40, only the super-poweruser will notice.

    • maxxcool
    • 9 years ago

    glide … again ..

    • maxxcool
    • 9 years ago

    I do not want to cheat or load the deck.. even though everyone is just being a pile of $%^&’s and haters.

    despite the amd fanboi froth and stupidity, winning with the second-best cpu possible (150$), right where the “”mainstream”” buyer lives, feels ‘right’..

    • maxxcool
    • 9 years ago

    the frostbite update is a patch, NOT a full built-from-scratch build. EVERY GAME IS A PORT.

    • maxxcool
    • 9 years ago

    so you’re saying that to use mantle you need to spend 500$? yeah, that’s going to help amd’s cause…

    if this is not going to help 99% of the market, it is the dumbest move amd has ever made.

    • maxxcool
    • 9 years ago

    hahaha, -4 for owning an asus-560? nj haters…

    • superjawes
    • 9 years ago

    Unless, of course, EA wants to do the “me too” strategy and release OriginOS…

    • lilbuddhaman
    • 9 years ago

    Good companies release sequels/DLC/expansions that exceed the quality of what the average modder does on a long weekend.

    I dare say the tools *are* easy to work with (barring some technical hurdles for the average joe, which they are hinging on to call it “undoable”), and that we’d have an Elder Scrolls / Quake type situation where people would be putting out so much free (and good) content that their DLC/expansions/season passes would fail to sell.

    Bethesda made it work by putting new functionality/scripting into each expansion they put out, which could then be used for new types of mods. Even the mod junkies bought the expansions, because they provided new functions for making crazy new stuff.

    Think what that talent could do with some custom BF3/4 maps.

    (I want to see a “miniaturized” map with a silly Toy Story art style, destructible Lego / K’nex / Lincoln Log buildings, jets flying between the bars of a baby crib, and tanks rumbling past a floor full of misc toys.)

    • superjawes
    • 9 years ago

    Basically this. Steam runs on Windows, OSX, and Linux, and will run on SteamOS (obviously), but SteamOS is still separate, and as long as it’s as open as Valve says it is, anyone can release software for it outside of Steam’s distribution.

    EA should consider releasing an Origin client for it because the OS is gaming-focused. [i]Not[/i] releasing for SteamOS would limit their potential market.

    • Blink
    • 9 years ago

    I haven’t combed through everything Mantle yet. Has AMD given any indication that THEY are interested in it being Open Source? I’ve heard it from devs and users but I personally haven’t garnered anything from AMD itself.

    • SCR250
    • 9 years ago

    Good questions; now where are those answers?

    • lilbuddhaman
    • 9 years ago

    Had ME stayed RPG-heavy and shooter-light, I’d agree with you. I had to stop about 10 hrs into ME2. It’s a cool series, but simply not what I wanted to be playing. And yes, there will be another title in the universe.

    • SCR250
    • 9 years ago

    I must have missed it, but when did AMD release Mantle as an open standard?

    [quote]But the "pink elephant in the room," as he called it, is multi-vendor support. Andersson made it clear that, while it only supports GCN-based GPUs right now, Mantle provides enough abstraction to support other hardware—i.e. future AMD GPUs and competing offerings. In fact, Andersson said that most Mantle functionality can work on most modern GPUs out today. I presume he meant Nvidia ones, though Nvidia's name wasn't explicitly mentioned. In any event, he repeated multiple times that he'd like to see Mantle become a cross-vendor API supported on "all modern GPUs."[/quote]

    How else can other vendors support Mantle unless it is open?

    • nanoflower
    • 9 years ago

    What they haven’t said is what sort of performance gains you would see with Mantle on another platform. Given that Mantle was designed and optimized for GCN, I suspect the gains wouldn’t be nearly as great as the possible 20% that GCN sees (why do so many take it as gospel that it will always be 20%?). That raises the question of whether it is worth the effort if it only gives a 5% gain on Nvidia, or perhaps -5% on Intel. I’m not against the idea of Mantle if it were truly open, used by all, and gave real benefits over DirectX. I’m just very skeptical about the likelihood of it being adopted by Intel and Nvidia and giving the sort of performance boost that DICE is talking about.

    (That also raises the issue of pre-GCN AMD cards and whether they will ever see Mantle support. Providing that support would be the easiest way for AMD to sell people on the idea that Mantle isn’t tied that closely to the GCN platform.)

    • renz496
    • 9 years ago

    if amd really wants Mantle to be open source, why haven’t they sent the API to Khronos yet? why open it only later, not now? also, if Mantle is not tied to GCN, as the slide above mentions, why not make Mantle available on their older radeons (at least down to the 5k series)? it should be easier for them to do, since it is their own past architecture. if Mantle is really what they say it is, it should help performance on older radeons too.

    • Blink
    • 9 years ago

    Riiight. Speaking of splitting focus, doesn’t NVidia do some ARM stuff?

    • GENiEBEN
    • 9 years ago

    Many didn’t notice, but Origin is built with the Qt framework, so it was always meant to be cross-platform. EA is just waiting for the right time, and SteamOS’ launch is going to be it.

    • homerdog
    • 9 years ago

    AMD and NVIDIA make a deal. AMD gets G-sync, NVIDIA gets Mantle. We win!

    • tanker27
    • 9 years ago

    So does that slide deck prove there will be another Mass Effect title???? I just got chills thinking about it. (For all of its shortcomings, I still think the ME series was THE game of the past decade. There were so many things right about it that I don’t care about the wrongs. All I need to say is: http://www.youtube.com/watch?v=IitCQCaKi3E)

    Anyways, those are some heavy hitters as far as games go.

    • RDFSteve
    • 9 years ago

    My God, man, do you get paid by the word??

    • Blink
    • 9 years ago

    Can’t help but notice your, what, 30 “OMG AMD SUX, NVIDIA DRIVERS ROCK FOREVER” posts on this article. Did AMD/ATI force your family into bankruptcy? You’re making a ton of ‘convenient’ assumptions about something none of us have seen yet.

    • Pwnstar
    • 9 years ago

    He’s going to need ice for that burn…

    [quote]first two letter of the acronym are "High Level"[/quote]

    • Pwnstar
    • 9 years ago

    This is exactly right. Nobody is saying it is a low-level API. It’s high-level, using HLSL. It just happens to perform like a low-level API, which I think is where the confusion comes from.

    • Goty
    • 9 years ago

    He’s a little too invested, isn’t he…

    • ET3D
    • 9 years ago

    A lot of people here are speculating (wrongly) about Mantle when they could understand more about it by watching the deep-dive video (for example, here: http://www.pcper.com/news/Graphics-Cards/AMD-Mantle-Deep-Dive-Video-AMD-APU13-Event).

    • chuckula
    • 9 years ago

    Yeah, but Mantle is supposed to help the most in CPU-bound situations (remember that whole business about “draw calls”? well, the CPU executes those calls, then the polygons get handed off to the GPU).

    If you want to win the bet, you should be asking for a lower-end CPU, not a higher-end CPU where the GPU would be running at near 100% regardless of the API.

    • Pwnstar
    • 9 years ago

    You don’t need G-Sync if you are running at 120 FPS. G-Sync is for low frame rate situations, like below 50 FPS.

    • squeeb
    • 9 years ago

    Wall of text crits me for 10,000

    • nanoflower
    • 9 years ago

    Not AMD cards. AMD cards that use GCN. The rest of the AMD/ATI users can go pound sand. 😉

    • l33t-g4m3r
    • 9 years ago

    I did, but I also like the idea of Mantle, so long as it’s vendor-neutral. As for the 780’s price, well, that’s what I had to pay for good hardware with working drivers, and AMD didn’t have competitive hardware or working drivers at the time. Not going to cry over it.

    • l33t-g4m3r
    • 9 years ago

    Remember the FX?

    • peartart
    • 9 years ago

    If the Nvidia driver team is as good as you are implying, their GPU team must really suck, because empirically their good-driver GPUs are only as good as AMD’s bad-driver GPUs.

    • Hattig
    • 9 years ago

    Dude, I don’t know what’s up your jacksy about Mantle, but your rage about it is quite out of proportion to reality. Right now it is looking like a real win in the games that will support it, and adding support seems to be fairly easy. At any GPU price point, 20% extra performance is a major selling point. (Expect some sites to refuse to use Mantle renderers in GPU reviews because it is somehow unfair to nVidia, who will have prominent adverts on those sites.)

    It gains performance by removing layers of abstraction, having an API that is close to the most modern GPU architectures, and fixing parallelism within the API so that more cores actually help during rendering.

    However, AMD will need to hand the API off to Khronos if they want other vendors to support it in the long run.

    • Krogoth
    • 9 years ago

    Shield’s biggest problem is that it’s nothing more than a proof of concept.

    It is too expensive, and the software library is too limited for the portable gaming crowd.

    • Chrispy_
    • 9 years ago

    Good, it looks like AMD are continuing their tradition of releasing and investing in technologies that are good for everyone.

    Where Nvidia locks their stuff down and licenses it at cost, AMD seems to be striving for open standards that focus on improving graphics for the end user, rather than exploiting exclusivity to earn a quick buck.

    I’m pretty sure G-Sync will fail if it requires a GeForce, and I’m also convinced that Shield’s lack of success is partially caused by its GTX-only streaming lock-in.

    • Cataclysm_ZA
    • 9 years ago

    Mantle is Andersson’s baby, just like the Frostbite engine. DICE and EA will use it in Frostbite because supporting it massively benefits them. I’m not sure why people are calling DICE shills, given that they support AMD, Nvidia, and Intel hardware equally well.

    • ronch
    • 9 years ago

    It never ceases to amaze me how these big companies can throw millions around like they’re chump change. Wish I had a few million bucks myself.

    • sschaem
    • 9 years ago

    Your SSE analogy is not applicable; Mantle uses HLSL.

    So, according to your definition that it can’t be in between, Mantle is a high-level API.

    .. The reality is that Direct3D/OpenGL inefficiency doesn’t come from the programming language, but from all the gunk around it for managing the device itself.

    We will know more soon enough.

    • sschaem
    • 9 years ago

    Do you have proof that AMD paid DICE 8 million solely for them to use Mantle?

    Mantle is optimized for GCN, yes, that’s a fact. But DICE (one of the core participants in Mantle) just indicated that it’s not that HW-specific… GCN cores, you say? Programmed via HLSL.. the same language used on nvidia Kepler GPUs…

    Try to google what HLSL stands for.. clue: the first two letters of the acronym are “High Level”

    • stupido
    • 9 years ago

    … and control is the shortest route to money… 🙂
    and the world turns around money as the earth around the sun…

    peace…

    • M3gatron
    • 9 years ago

    So what AMD marketing has said on blogs should be proof enough to contradict what the DICE guy claimed, right??? You must be very intelligent.
    AMD doesn’t lie, and neither does DICE. The fact that Mantle is optimized for GCN at first doesn’t dismiss the idea that it can run on other architectures at all. Simple logic. That is what DICE said, and he is one of the few people who really knows how Mantle works and what it can or can’t do.
    So yeah, I will go with his opinion.

    • stupido
    • 9 years ago

    Here is a quote:

    [quote]Q. How do you convince developers to use Mantle when other APIs are vendor neutral?
    A. We haven't had to convince them. Every single one of them has come to us and asked for it without prompting![/quote]

    from the interview @ Tom's Hardware... the link I already supplied with my previous post (the reply to the long rant of HisDevineOrder)

    • M3gatron
    • 9 years ago

    Yeah, nothing really to say, just spreading biased FUD.
    15 Frostbite games will use Mantle (many of them really huge titles), plus other games. Thief is an example, and maybe the next Tomb Raider. No trouble for nvidia indeed.
    Intel’s iGPUs will always be second best to AMD’s solutions, plus AMD has the alternative of CrossFire. A dual-graphics AMD laptop will humiliate the much, much more expensive i7 + Iris Pro in games.

    • Klimax
    • 9 years ago

    It cannot, by the definition of a low-level API. See Glide. So, who’s lying? DICE or AMD?

    http://community.amd.com/community/amd-blogs/amd-gaming/blog/2013/10/17/the-four-core-principles-of-amd-s-mantle
    "While Mantle is uniquely optimized for PCs containing the Graphics Core Next architecture"

    • Klimax
    • 9 years ago

    Who are DICE? Just a studio paid by AMD itself to use Mantle (8 mil USD).

    And AMD itself:
    http://community.amd.com/community/amd-blogs/amd-gaming/blog/2013/10/17/the-four-core-principles-of-amd-s-mantle
    "While Mantle is uniquely optimized for PCs containing the Graphics Core Next architecture"

    • Klimax
    • 9 years ago

    Wrong on many counts.

    Trouble for NVidia? Only in red dreams. (And there is still the matter of CUDA.)
    Intel just laughs. They are closing in fast, and once their tech gets adopted, AMD will have a problem. (Ordering of shaders can have nice effects on an architecture.) And you forgot that mobile gaming lasting only minutes is not a great idea…

    You can forget all the things you said, because they won’t be happening in this reality…

    • M3gatron
    • 9 years ago

    Yes, you are the big expert. Why should we believe you???
    If you have serious proof to contradict what the DICE guy (who is a real expert) is saying, come forward with it.

    • Klimax
    • 9 years ago

    Heh. I think Mantle is the worst idea in the GPU area in a long time. There was a reason why DirectX won and all that vendor-specific crap died. There is a reason why I can play old, current, and future games. (aka future compatibility: game->hardware and hardware->game)

    It is the second time AMD has failed to learn from history (the first was Bulldozer, where they managed to add extra layers of fail).

    BTW: I would say the same thing no matter who did it; it’s just that Intel and NVidia know why it would be a bad idea.

    Note: My position is influenced by knowing old technology (I’m partly a collector), by having studied graphics somewhat (programming against DirectX 9 and 11), and by debugging oddities in games.

    • stupido
    • 9 years ago

    FYI, from the AMD AMA @ Tom's Hardware:
    http://www.tomshardware.co.uk/amd-ama-toms-hardware,review-32825.html
    and Mantle in more detail:
    http://www.tomshardware.co.uk/amd-ama-toms-hardware,review-32825-4.html

    • odizzido
    • 9 years ago

    I noticed that they put CPU optimizations under the mantle heading.

    • M3gatron
    • 9 years ago

    No, it’s not. AMD is trying to promote and sell their video cards, so you bet they said that, but as a concept Mantle can work with any video card; it’s not limited to GCN only. The DICE manager said as much, and I tend to think he knows what he is talking about.

    • M3gatron
    • 9 years ago

    Long and useless comment.
    AMD doesn’t need Intel’s and Nvidia’s support; they already have EA’s and that of other game developers. That is all they really need. 15+ games have already been announced to use Mantle in the future. That smells like trouble for Nvidia, not to mention Intel, as AMD’s mobile APUs will massacre Intel’s integrated graphics in gaming at a much lower price. Nvidia’s mobile GPUs will also be in big trouble.
    AMD is doing great. Oracle’s support for HSA is a huge thing, and Mantle is also getting a lot of support.

    • Melvar
    • 9 years ago

    Speaking of ranting on and on, I’m starting to get the impression you don’t think Mantle’s all that great.

    • Klimax
    • 9 years ago

    Frankly, correct.

    • Klimax
    • 9 years ago

    Further updating of DirectX is likely. Forget about NVidia, though. You forget how good the driver teams at NVidia are; all they’ll do is continue as-is.

    NVidia will likely just show that their drivers don’t need such help, while devising graphical techniques that are very hard on the GCN architecture, forcing AMD to change GCN so much that it breaks performance compatibility with previous code. (An example would be Intel’s tech in the Iris IGP, as shown here on TechReport.)

    • Klimax
    • 9 years ago

    Agreed. Same goes for Intel. (In fact, how about NVidia GPUs on Intel’s process? Intel would be a massive winner there…)

    • Klimax
    • 9 years ago

    Sorry to burst your bubble, but it is locked to GCN. (AMD itself said so, or will you claim AMD lied?)

    And then either it is hardware-locked (to be low-level) or it is high-level and thus a GPU-vendor-proprietary API. Also, it doesn’t have any advantage over CUDA, which was designed to be adaptable.

    You never want a GPU company at the forefront of development; otherwise they will lock it down to gain an advantage.

    Sorry, I don’t see any good intentions. All I see is just a good old corporation. (Otherwise you have double standards regarding NVidia.)

    • Klimax
    • 9 years ago

    Better results with G-Sync than Mantle.

    • Klimax
    • 9 years ago

    Shatter the planet, because either way that’s the likely result for Mantle. (Aka failure in the end.)

    • Klimax
    • 9 years ago

    Core i7-3930K @ 4.4GHz…

    • Klimax
    • 9 years ago

    Won’t help that much. But knowing that would require knowing how DX 11 works and what the API looks like…

    • Klimax
    • 9 years ago

    So, vendor lock-in? Great idea. We’ve been there. It was called Glide. A nice failure in the end; it was very helpful to 3dfx…

    • Klimax
    • 9 years ago

    For a simple reason: the BD architecture is impossible for schedulers. Too many variables, each contradicting the others. (It would require full prediction of the future and knowledge of the code.)

    • Klimax
    • 9 years ago

    Wrong. The basics didn’t change, because we are still using the same general architecture as always, and not even the basics of GPU tech have changed that much. If anything, it is getting harder as it is to achieve acceptable performance on every major GPU architecture.

    Nope, there are not many levels. Either you provide full abstraction, so you don’t take a dependency on the hardware implementation, or you go under it and take a full hardware dependency. There is nothing in between, because once one thing in your code base takes such a dependency, it is NO MORE portable.

    It is the same thing as using a single SSE instruction on a Pentium II, or using a MOV instruction on ARM.

    • Klimax
    • 9 years ago

    Strongly doubtful we’ll see anything like that. The overhead is simply not there. But I guess nobody has looked at the DirectX 11 API, otherwise AMD’s PR wouldn’t be so successful.
    HINT: You are already quite close to the metal without breaking abstraction.

    • Klimax
    • 9 years ago

    Sorry, but it was never in the cards outside of absolute wishful thinking. AMD is trying to overplay any and all supposed overhead in DirectX, but it is just PR BS.

    Those 20% won’t last longer than a few years, and then you’ll be lucky to have even 0%. (Also assuming such an increase exists at all.)

    • Melvar
    • 9 years ago

    [quote<]I call BS on this. Either it is low-level or it has abstraction for other architectures. But it cannot be both. Full stop.[/quote<] This isn't 1994. There are a lot of levels software can run at, and one can be lower than another without requiring you to program directly to the registers in assembler.

    • Klimax
    • 9 years ago

    ETA, long after: Looks like PR got it wrong. Based on new information, I have changed my rating from “terrible” to “bad idea.”

    WRONG! AMD is anything but thinking outside of the box. In fact, this is no better than CUDA/PhysX; it doesn’t even have CUDA’s advantages to begin with. And they are already following the path of Glide.

    Reminder: Beware of double standards.

    As for devs, it will do well initially and then crash hard, or kick off a new era of massive fragmentation. We have been there already. The 90s called and want their proprietary GPU tech back.

    Sorry, but there is nothing to cheer for, except to wish Mantle a quick death. (And if you as a buyer won’t ensure its failure, other companies will, and AMD may not survive that given their financials.)

    • Klimax
    • 9 years ago

    “Andersson made it clear that, while it only supports GCN-based GPUs right now, Mantle provides enough abstraction to support other hardware—i.e. future AMD GPUs and competing offerings. In fact, Andersson said that most Mantle functionality can work on most modern GPUs out today.”

    I call BS on this. Either it is low-level or it has abstraction for other architectures. It cannot be both. Full stop. This is an attempt at masking the fact that Mantle is vendor- and hardware-specific, and at BSing their way out of the corner they got themselves into.

    Frankly, when you don’t need to take care of abstraction, it is trivial to bypass CPU overhead, but there is NEVER a free lunch. So it’s one or the other: either it is a low-level, low-overhead API, or it can support other GPUs.

    BTW: Why doesn’t AMD support CUDA? Because it is proprietary to NVidia. What is Mantle? Proprietary to AMD. Irony and double standards, people. Either CUDA (and by extension PhysX) is bad and thus Mantle is bad, or Mantle is good and thus CUDA/PhysX is good too.
    Your choice.

    • Srsly_Bro
    • 9 years ago

    I often wonder why people spend so much time ranting on and on and on and on when few read the post. TL;DR is implied

    • Firestarter
    • 9 years ago

    If they had released mod tools for BF3 then BF4 would have been pretty pointless, don’t you think?

    • xeridea
    • 9 years ago

    Why so negative? AMD has been thinking outside the box for a while; others have been doing the same old same old. They are getting huge support, so why must you hate on them? For years, developers have been saying it would be nice if there weren’t such a heavy API to go through for games, so obviously it will do well.

    • Pwnstar
    • 9 years ago

    Jesus Christ, dude….

    • Pwnstar
    • 9 years ago

    That’s right. The largest benefit will be seen in CPU-limited situations. But I bet we’ll still see a small benefit in GPU-limited situations, because Mantle will be more efficient for the GCN architecture. It just won’t be as large as in CPU-limited ones.

    • Antimatter
    • 9 years ago

    Remember that running two threads on a single module carries about a 5%-10% performance penalty due to resource sharing. So the Windows 8 scheduler update for Bulldozer was only ever expected to give a 5%-10% performance improvement.
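
    A quick back-of-the-envelope sketch of that arithmetic in Python; the per-thread sharing factors below are assumptions for illustration, not measurements:

        # Two threads co-scheduled on one Bulldozer module each run at some
        # fraction of full speed due to shared front-end/FPU resources.
        # The factors here are illustrative assumptions only.
        for sharing_factor in (0.95, 0.90):
            # Per-thread speed goes from sharing_factor back to 1.0 when the
            # scheduler spreads the threads across separate modules.
            gain = 1.0 / sharing_factor - 1.0
            print(f"penalty {1 - sharing_factor:.0%} -> best-case gain {gain:.1%}")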

    • sschaem
    • 9 years ago

    Me singing a gentle lullaby…

    • Pwnstar
    • 9 years ago

    Not true if you have a 120Hz monitor.

    [quote<]60 fps will get *12* more frames. And after that it's pointless[/quote<]

    • Pwnstar
    • 9 years ago

    I’d assume DICE is lying. “Yeah, we thought of it first”, etc.

    • Bensam123
    • 9 years ago

    Well snap, son, there goes the ‘Mantle’s locked to GCN’ bit. I’m definitely looking forward to this the more I hear. I wonder if Nvidia will swallow their pride and adopt it, or make some sort of dumb vendor-locked version and try to coerce developers into using it.

    I really hope this works out and we see a shift in power from DirectX to Mantle. Having a graphics card company at the forefront of the API on which all things graphical run just makes sense. They don’t have a console to lock it down to, and they don’t seem dead set on locking out the competition (quite the opposite). It all seems like good intentions here, although that should be taken with a grain of salt considering it is a company…

    • sschaem
    • 9 years ago

    Then I don’t get why you sound so bitter in that post.

    The $6 million you mentioned is how much EA charged AMD for the BF4 game keys bundled with the R9-series cards.
    When you buy in bulk, the price is expected to be lower than for individual unit sales, and you pass that value on to customers. You can bet that AMD didn’t pay $60 per BF4 key.

    NVidia does the same. Value-added bundles have been around for decades.

    Also, it was DICE that was interested in pushing the PC to its full capability and approached AMD about offering a console-like API for PCs.
    Should AMD have said, “NO! Keep using OpenGL”…?

    And what should happen to TrueAudio? Should AMD sit around waiting for someone else to make an API for it for another two years?

    If you don’t know: all the features in the upcoming DirectX 11.2 already exist in hardware in the 2+ year old AMD 7 series…

    Should NVidia have waited for Microsoft and never developed CUDA?

    Leave API development to the pros, right… but in this case, AMD and NVidia are the pros.

    Etc., etc…

    • Pwnstar
    • 9 years ago

    Wow, that’s a pretty good summary of your huge post down below. I’m shocked!

    • HisDivineOrder
    • 9 years ago

    It’s in nVidia’s best interest to do something (anything, really) different from Mantle, just to keep AMD throwing money at making Mantle work now that they’ve made such a big deal of it.

    In fact, this also dovetails nicely with Intel’s desires, because Intel has no reason at all to aid AMD’s attempts to encourage HSA/hUMA usage, and one of Mantle’s biggest advantages will be for APU users. Why would Intel want to improve the appeal of AMD’s already superior integrated GPUs? They’d rather slowly upgrade their own iGPUs and let AMD run out of money struggling with what are now three separate standards, while nVidia and Intel only have to support two.

    In a way, AMD has dug themselves into a hole, and now they’ve got no way out except by convincing two companies that have every reason in the world not to help them. This was not a great plan, really. It was actually a horrible plan. They’re counting on software developers to compel Intel and nVidia to do something they won’t do.

    Especially when AMD starts with home-field advantage on the API. The worst part, though, is DICE saying that Mantle could theoretically be an “open standard” without admitting that, while one could build a custom version of Mantle for other GPUs (or CPUs?), getting gains out of a “close to the metal” API would mean building custom versions of each game for each architecture they wanted to support.

    So that’d be a version for nVidia, a version for AMD, a version for Intel. That’s the bare minimum. Plus, you’d have to have versions of Mantle for older AMD GPUs (pre-7xxx series, pre-Kaveri APUs) and then the current one for the 7xxx series and Kaveri or later. Is Maxwell a huge change? New Mantle code for it. Intel changes its GPUs again? New Mantle for that. Toss in versions for Qualcomm, Samsung, and ARM’s GPUs if you want to include tablets like they did. Because when you go that close to the metal, by definition, you’re going to have to build your software to expect certain architectural requirements. The theory that it could be open is great, but Mantle being open would just mean more support work across different architectures.

    Or they can make a version for OpenGL and hit all the major players. No extra work required, extensions easily added instead.

    Hmmm… not hard to choose, really. Two months to add Mantle to the Frostbite engine, but no word on how long it took them to QA the Mantle code specific to just BF4. No comment on the support costs to keep it patched up. No word on much of anything, really.

    This is a huge smokescreen. The real purpose of Mantle is to sow panic in the streets and get people to say, “Well, I don’t know how it’ll play out, but I’ll hedge my bets and go with the one that supports Mantle.” It might work, but AMD did such a shoddy job of promoting Mantle at the beginning, with so little info and a lot of obfuscation: first it was on the next-gen consoles, then suddenly it wasn’t even remotely on them, nor was it apparently ever part of them… waiting over a month to talk about it and letting the industry run rampant with rumors and criticisms… letting nVidia score a huge smackdown on the whole concept by NOT saying a word, and instead letting Tim Sweeney and John Carmack run roughshod all over their biggest Mantle supporter with obvious criticisms, complaints, and shortcomings…

    It defanged this moment. No one seems to really think Mantle can do much more than motivate OpenGL and DirectX to improve faster than they otherwise would. Mostly, this comes up in the context of the SteamOS improvements to OpenGL, and someone will sometimes toss in a “Mantle could really go well with SteamOS.” Except the major backer of Mantle is EA, and EA has about as much to do with anything Steam these days as nVidia does with any standards API made by AMD.

    Next to nothing. AMD played their hand too hard and too soon because they’re desperate, but they came up short. They didn’t play it smart, and even though they may have console penetration with their APUs, that means next to nothing when the API they’re touting only gives a 20% performance improvement and requires more support costs and man-hours for that extra 20%. The companies they’d need to support it aren’t going to be even remotely interested in helping them, so it won’t be a standard. Moreover, AMD themselves said that it wouldn’t be opened up as a standard until POSSIBLY the end of 2014, or 2015.

    So not even AMD is going to do it anytime soon. The only companies saying there’s any advantage to Mantle are the ones at AMD’s conference. Everyone else is shaking their heads because they know the obvious: AMD makes tons of standards, and few of them ever take off, because few of them overcome the fact that the “standard” would benefit AMD more than its competitors.

    Intel and nVidia really have no reason to support this, and they have many reasons not to. Developers have no reason to go to the extra trouble, even months of extra trouble, to support it. Publishers have no reason to pay the support, patching, and QA costs to support it. 20% extra performance is just not enough. Mantle isn’t even promised for SteamOS, and EA, Mantle’s biggest supporter, doesn’t go near Steam these days.

    So… yeah. TruForm, DX 10.1, Stream, Havok GPU acceleration, Mantle. AMD makes lots of standards, but their competitors are going to sit by and let AMD burn tons of money trying to push them (along with also trying to support DirectX and OpenGL) while they get to support just the latter two. AMD is going to spend themselves out of business trying to make this fly.

    It’s so much easier when your chief competitor is busy splitting its focus between two giant corporations with far more money, while also trying to support more APIs than you, all for a 20% performance gain that most games won’t ever bother trying for. It might as well be corporate seppuku.

    • HisDivineOrder
    • 9 years ago

    Unfortunately, nVidia has no reason to accept a standard that starts out giving AMD the lion’s share of the advantage when they can stick with the status quo and bleed AMD dry of money by forcing them to try and keep pace with DirectX and OpenGL development instead.

    • LostCat
    • 9 years ago

    He mentions that, but we’re led to believe that’s for ports rather than builds that take further advantage of what Mantle offers.

    • ronch
    • 9 years ago

    Glass half-empty.

    • mcnabney
    • 9 years ago

    Pretty sure using that 8320 will CPU-limit BF4. In the real world, if you want to test GPU impact, you eliminate the CPU from influencing the results.

    • Ryu Connor
    • 9 years ago

    Isn’t getting [url=http://en.wikipedia.org/wiki/Imagination_Technologies<]Imagination Technologies[/url<] on board a more important goal than NVIDIA?

    • maxxcool
    • 9 years ago

    -12? Hate all you want; here’s my PayPal bet: one charity of your choice will receive one $25 PayPal donation if BF4 shows an *overall* 30% increase in fps on a generic 8320 + AMD R280 setup.

    Note: overall, not just a “peak” increase, but a full-on average increase of 30%.

    I will even invite the TR staff to ban my account if I do not pay up.

    • Deanjo
    • 9 years ago

    [quote<]I have a feeling that the 20% number is going to be variable on a few factors.[/quote<] I can't honestly take such claims seriously in an industry that has a history of making wild claims only to fall flat on its face. Remember the improved scheduler in Windows 8 that was supposed to suddenly make Bulldozer sing? A very few select scenarios were able to pull out a 5% gain, but more often than not it had no appreciable impact on performance.

    • maxxcool
    • 9 years ago

    -1 for being right… guess the Apple folks have someone stealing their distortion bongs… if the CPU is loaded out (bad or not-so-good engine coding), Mantle will not help. You, sir, are right.

    • maxxcool
    • 9 years ago

    That is NOT what was posted in the direct quote from Jurjen Katsman of Nixxes. He specifically mentions gaining only 20% from porting to Mantle for AMD cards.

    • maxxcool
    • 9 years ago

    Asus 560 2GB for $180

    • Melvar
    • 9 years ago

    My guess is that if you are 100% GPU-limited (e.g., running a game at 4K with high settings), you will get a 0% framerate improvement. The maximum benefit would come from a situation where the rendering thread on the CPU is holding things up, but the GPU and the other CPU threads are not maxed out.
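
    A minimal sketch of that bottleneck logic, assuming frame time is simply the longer of the CPU submission time and the GPU render time; all of the millisecond figures are invented for illustration:

        # Frame-time model: the frame is gated by whichever of the CPU
        # (API/driver submission) or the GPU (rendering) takes longer.
        def fps(cpu_ms, gpu_ms):
            return 1000.0 / max(cpu_ms, gpu_ms)

        # CPU-limited: cutting submission overhead raises the frame rate.
        print(fps(cpu_ms=12.0, gpu_ms=8.0))        # ~83 fps
        print(fps(cpu_ms=12.0 * 0.6, gpu_ms=8.0))  # 125 fps with 40% less CPU cost

        # GPU-limited (e.g. 4K, high settings): the same CPU savings do nothing.
        print(fps(cpu_ms=12.0, gpu_ms=25.0))       # 40 fps
        print(fps(cpu_ms=12.0 * 0.6, gpu_ms=25.0)) # still 40 fps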

    • chuckula
    • 9 years ago

    If 20% real-world is true, then Mantle is actually worth it. Despite the hype, Direct3D and OpenGL aren’t *that* bad, so a 20% jump is quite decent.

    I have a feeling the 20% number is going to vary with a few factors. If you are already running an R9 at the top of its thermal envelope with a high-end CPU, then I really doubt you’ll see much of a jump. Lower-end CPUs or APUs would likely see a bigger relative jump.

    • ronch
    • 9 years ago

    Considering the performance gain we’re gonna get comes from merely switching to Mantle, at no cost on our part, I guess we can’t complain about 20%.

    Still, I was hoping for around 50%, but I guess that would be asking too much.

    Also, considering how AMD GPUs generally offer better performance per dollar at present, a 20% boost means choosing Nvidia just got harder.

    • Melvar
    • 9 years ago

    That only applies to games that were not designed for Mantle and then ported to it. Games could be designed with an ultra-high-detail mode that would be fully playable on Mantle but would push that overhead unacceptably high on DX11.

    • MadManOriginal
    • 9 years ago

    Hmm, that’s an interesting take on it. While nerds loudly and irrationally rage against evil M$ and Intel, and cheer for the newcomers who will ‘save the day,’ it’s all about control one way or another.

    • MadManOriginal
    • 9 years ago

    Wake me up when it’s 0.50 lines of code per GFLOP.

    • sschaem
    • 9 years ago

    This post makes me wonder if you bought a GTX 780 for $649 three weeks ago?

    How much do you want to bet that Plants vs. Zombies will run much better using Mantle on Puma/GCN tablets than using Direct3D?

    Longer battery life and/or smoother gameplay.

    Your PC doesn’t run Mantle? You’ll get a Direct3D or OpenGL fallback; same old, same old.

    • Voldenuit
    • 9 years ago

    If MS sees it as a shot across their bow and reduces DX overhead, and if Nvidia releases a compatibility wrapper or a similar low-level API, then it would be a win-win-win situation.

    Sadly, I don’t see any of these things happening, except maybe the last…

    • chuckula
    • 9 years ago

    Actually… while it looks like the PS4 won’t have its games written in OpenGL, the hardware most certainly can support it, and Direct3D too* (GCN being a standard GPU architecture and all that).

    I guess Sony is sticking with a new version of its proprietary shader language, then.

    * And theoretically Mantle, although AMD claims Mantle isn’t being used on the PS4… yet.

    • Deanjo
    • 9 years ago

    Wake me up when it’s posted here: [url<]http://www.khronos.org/[/url<]

    • lilbuddhaman
    • 9 years ago

    And here I thought I’d be the first poster with a smart-assed response.

    WHERE ARE THE DAMNED MOD TOOLS?

    • maxxcool
    • 9 years ago

    Omgzor1!!1 9x moar calls than dx11!!!! 40% less overhead!¡1. woot!

    Wait… only an optimistic 20% frame boost?? … for almost $6 million?

    And to further damn this… that means a game getting:
    20 fps will get *4* more frames,
    40 fps will get *8* more frames,
    60 fps will get *12* more frames,
    and after that it’s pointless.

    Sooooo $6 million buys an average of 12 fps… and it costs more development time, more debug time, and doubles the patch coding…

    To top this off… AMD just took a #$/^ing half-BILLION-DOLLAR LOAN to help their reorg…??? Splitting the gaming market, going proprietary, extra dev costs, bribing vendors, excluding vendors, punishing users without GCN, bleeding money…

    Not impressed at all. I hated it to begin with for the damage it is going to do… now I hate it even more after all the “9x!!!! more calls!!!” only gets 20%…

    AMD, stick to making the good cards you do; I love the R9s… 🙂
    AMD, $%^&ing get out of the API business unless you plan on giving away coding support to EVERY game dev in the market… which is wholly unsustainable.

    Sorry for typos… phone post
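
    For what it’s worth, the table above is just a flat 20% multiplier. A few lines of Python reproduce it, treating the quoted Nixxes estimate as an assumption rather than a measured result:

        # A flat 20% uplift applied to a few baseline frame rates. The 20%
        # figure is the hedged Nixxes estimate from the article, not a
        # measured result.
        UPLIFT = 0.20
        for base_fps in (20, 40, 60):
            gain = base_fps * UPLIFT
            print(f"{base_fps} fps -> +{gain:.0f} frames ({base_fps + gain:.0f} total)")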

    • maxxcool
    • 9 years ago

    Through a hypervisor, to boot.

    • maxxcool
    • 9 years ago

    Closer to 6..

    • kalelovil
    • 9 years ago

    The PS4 is not using OpenGL, and neither did the PS3 (a small number of titles used a heavily modified Sony-Nvidia variant of OpenGL ES, but most used lower-level APIs).
    [url<]http://scalibq.wordpress.com/2010/05/15/sony%E2%80%99s-playstation-3%E2%80%99s-main-graphics-api-is-not-opengl/[/url<]

    • sschaem
    • 9 years ago

    Funny how Mantle is said to have been a DICE request, not the other way around.

    Is AMD lying to us all?

    • keltor
    • 9 years ago

    Wake me up when it’s posted: [url<]http://developer.amd.com/tools-and-sdks/graphics-development/[/url<]

    • chuckula
    • 9 years ago

    If it goes beyond AMD’s own GPUs running on Windows, then it certainly could. Otherwise it’s not really much of a standard.

    • BoBzeBuilder
    • 9 years ago

    $50 million!

    • chuckula
    • 9 years ago

    [quote<]For starters, Andersson would like to see Mantle on Linux and OS X.[/quote<] [quote<]In any event, he repeated multiple times that he'd like to see Mantle become a cross-vendor API supported on "all modern GPUs."[/quote<] These are all good things, and my interest in and opinion of Mantle would increase massively if they came true... we'll see what actually happens over the next couple of years.

    • Star Brood
    • 9 years ago

    Looks like AMD’s $5 million investment in EA to make Mantle work was worth the effort.

    • chuckula
    • 9 years ago

    [quote<]The Mantle release's core renderer is closer to the PlayStation 4 version than to the existing DirectX 11 one, and it includes both CPU and GPU optimizations.[/quote<] Well, since the PS4 is using OpenGL, it looks like we are finally getting a picture of Mantle: it's a pipelined rendering API that's closer to OpenGL (with AMD-proprietary tricks thrown in) but uses the HLSL shader language from Direct3D instead of the GLSL shader language from OpenGL...

    • slowriot
    • 9 years ago

    SteamOS isn’t a closed platform. EA will be able to release an Origin client for it. I think it’s becoming much clearer that hardware vendors and game publishers want to take control of the PC platform.

    • James296
    • 9 years ago

    Would love it if Origin went the way of the dodo bird or GFWL 😛

    • Amazing Mr. X
    • 9 years ago

    You don’t consider this an indication?

    • tviceman
    • 9 years ago

    “Coupling Mantle with Valve’s SteamOS in particular would make for a ‘powerful combination.’”

    Why does Andersson even bring up SteamOS when EA hasn’t released a game on Steam for over a year and has given no indication of changing course?

    • MadManOriginal
    • 9 years ago

    Will this be the first AMD standard since AMD64 to really take hold?
