Delving deeper into AMD’s Mantle API

AMD unveiled its Mantle graphics programming layer during a press event in Hawaii two months ago. The announcement immediately sent waves through the PC gaming community, and in its wake, we heard about a number of games adopting the API—games from Battlefield 4 to Thief to Star Citizen. However, AMD divulged comparatively little about how Mantle works or about the benefits we can expect from it. The lack of concrete information about Mantle spawned lots of speculation and debate, but most of it wasn’t very enlightening.

Fortunately, we now know some specifics. At its APU13 developer conference in San Jose, California, AMD invited journalists and developers to listen to hours’ worth of keynotes and sessions by Mantle luminaries. We didn’t just hear from the API’s architects; we also listened to some of its more illustrious early adopters, a few of whom helped develop Mantle in collaboration with AMD.

Among the speakers were Guennadi Riguer and Brian Bennett, the two AMD staffers who created Mantle; Johan Andersson of EA DICE, the man behind the Frostbite engines that power the Battlefield series; and Jurjen Katsman, the CEO of Nixxes, a Dutch studio that’s porting the next Thief game to the PC. (Nixxes can also be credited with porting Deus Ex: Human Revolution, Hitman Absolution, and Tomb Raider to Windows.)

Altogether, the Mantle presentations and talks at APU13 amounted to well over three hours of material. Much of that material was laden with game programming jargon and cryptic PowerPoint diagrams, and almost all of it was presented by developers with a knack for talking really, really fast. What follows is some of the information we managed to glean from those sessions—and from talking with a few of those folks one-on-one.

What on earth is Mantle?

Before we get started, we should probably talk a little bit about what Mantle is.

Mantle is a new application programming interface, or API, for real-time graphics that’s intended to be a substitute for Direct3D and OpenGL. Mantle is designed to cut much of the overhead associated with those APIs, and in many respects, it’s meant to operate at a lower level, closer to the metal, than they do. In that sense, Mantle is similar—but not identical—to the low-level APIs used to develop games on consoles like the Xbox One and the PlayStation 4.

At present, Mantle support is limited to Windows systems with graphics processors based on AMD’s Graphics Core Next architecture. Games written using Mantle will run on discrete Radeons from the HD 7000 series onward, and they’ll work on upcoming Kaveri APUs, too. (The GCN graphics architecture has made its way into some other notable silicon, including the SoCs inside of both the PlayStation 4 and the Xbox One, but Mantle does not, to our knowledge, currently support those.)

All of this talk of being “close to the metal” refers to a classic tradeoff in programming interfaces, especially in real-time graphics. A programming interface may choose to go very low level by exposing control over the smallest details of the hardware, giving the developer access to exact buffer sizes and the like. Doing so can allow programmers to extract the best possible performance out of a particular piece of silicon. However, applications written for low-level APIs can become dependent on the presence of specific hardware. When a new chip architecture comes along, a “close to the metal” application may run poorly or even refuse to run on the new silicon. In order to maintain broader compatibility and flexibility, higher-level APIs restrict access to hardware-specific features and expose a simpler set of capabilities that presumably will be available across multiple chip architectures.

Console APIs can afford to be fairly low-level, since console hardware doesn’t change for years at a stretch. By contrast, the high-level nature of Direct3D is the bit of magic that allows us to run decade-old PC games on brand-new graphics cards without issue.

In Mantle’s case, according to Riguer, AMD has lowered the abstraction level in some areas but “not across the board.” DICE’s Johan Andersson described the traditional approach as “middle-ground abstraction,” where a compromise is struck between performance and usability. Mantle, by comparison, offers “thin low-level abstraction” that exposes how the underlying hardware works. Riguer boiled it down further by comparing Mantle to driving a car with a manual transmission—more responsibility, but also more fun.

Also, while Graphics Core Next is the “hardware foundation” for Mantle, AMD’s Guennadi Riguer and some of the other Mantle luminaries at APU13 made it clear that the API is by no means tied down to GCN hardware. Some of Mantle’s features are targeted at GCN, but others are generic. “We don’t want to paint ourselves in a corner,” Riguer explained. “What we would like to do with Mantle is to have [the] ability to innovate on future graphics architectures for years to come, and possibly even enable our competitors to run Mantle.” Jurjen Katsman of Nixxes was even bolder in his assessment, stating, “There’s nothing that I can see from my perspective that stops [Mantle] from running on pretty much any hardware out there that is somewhat recent.”

Of course, technical feasibility isn’t the only obstacle in the way of Nvidia’s hypothetical adoption of Mantle. We’ll discuss this again in a little more detail at the end of the article. But first…

The problem with Direct3D

To understand why AMD created Mantle, it helps to know about some of the pitfalls of development with current, vendor-agnostic APIs. That model involves a substantial amount of overhead, and it apparently puts much of the optimization burden on driver developers, leaving game developers with limited control over how the hardware runs their software.

Katsman was particularly critical, calling Direct3D “extremely unpredictable” and complaining that, in some titles, “50% of your CPU time is spent by the driver and by Direct3D doing something that you’re not quite sure about.” AMD’s Riguer blamed that high overhead partly on the fact that graphics drivers have “no straightforward way to translate API commands to GPU commands” and are “not all that lean and mean.” In consoles, where the APIs are closer to the metal, Katsman said overhead amounts to something like “a few percent” of total CPU time.

The slide above, taken from the Nixxes presentation, outlines some of Katsman’s grievances with Direct3D in more detail.

Among those grievances is the performance hit caused by the driver compiling shaders at “undefined times” in the background. Katsman noted that, in Deus Ex: Human Revolution, one of Nixxes’ PC ports, shader compilation caused the game to stutter—which, in turn, led players to complain online. For what it’s worth, we did notice some ugly stuttering in our own testing of that game, although it’s not clear if those slowdowns were caused by this specific problem.

Another issue with Direct3D is the developer’s lack of control over GPU memory. Riguer explained that consoles let developers achieve “much greater visuals than on [the] PC with comparable or greater memory configs.” Katsman provided some background information about why that is. “In general, [with] Direct3D, if you destroy and recreate resources all the time, the API is too slow to do that, so you’re stuck having a fixed amount of resources that you cache and you keep around,” Katsman said. “Memory usage on PC is actually far higher, and we’re not really getting anything in return.”

There’s also the overhead associated with draw calls, Direct3D’s basic commands for placing and manipulating objects on the screen. Achieving the level of detail expected in today’s games requires lots of draw calls in each frame, and that leads to what developers call the small-batch problem. In Riguer’s words, “You hit a wall after so many draw calls per frame.” The limit is usually around 3,000-5,000 draw calls per frame, although very skilled developers can purportedly manage 10,000 or more. According to Katsman, developers must “jump through a lot of hoops” and come up with “new and clever ways to have fewer draw calls.” The barrier to increasing the number of draw calls per frame lies not with the hardware, Katsman added, but with the API.
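
To make the small-batch problem concrete, here is a minimal sketch of a naive Direct3D 11 render loop. The Object structure and the scene data are hypothetical, and real engines batch and instance aggressively to reduce the call count, but the underlying cost model is the same: every object pays for a handful of state changes plus a draw call, and each of those calls carries the API and driver overhead described above.

```cpp
#include <d3d11.h>
#include <vector>

// Hypothetical per-object data for illustration only.
struct Object
{
    ID3D11Buffer*             vertexBuffer;
    ID3D11Buffer*             indexBuffer;
    ID3D11Buffer*             perObjectConstants;
    ID3D11ShaderResourceView* diffuseTexture;
    UINT                      vertexStride;
    UINT                      indexCount;
};

void RenderScene(ID3D11DeviceContext* ctx, const std::vector<Object>& objects)
{
    for (const Object& obj : objects)   // potentially thousands of objects per frame
    {
        // Every object pays for its own state changes...
        UINT offset = 0;
        ctx->IASetVertexBuffers(0, 1, &obj.vertexBuffer, &obj.vertexStride, &offset);
        ctx->IASetIndexBuffer(obj.indexBuffer, DXGI_FORMAT_R32_UINT, 0);
        ctx->VSSetConstantBuffers(0, 1, &obj.perObjectConstants);
        ctx->PSSetShaderResources(0, 1, &obj.diffuseTexture);

        // ...plus one draw call, each of which goes through API and driver
        // validation before any GPU command is generated.
        ctx->DrawIndexed(obj.indexCount, 0, 0);
    }
}
```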

Katsman then decried the fact that driver optimizations are “almost required” for new games. Anyone who’s ever had to download multiple beta driver updates to support a new PC game will be all too familiar with that problem. Developers are, in effect, unable to make their games work well by themselves. “I think that’s actually very harmful and doesn’t really contribute to users getting a good experience from the games they buy,” said Katsman.

Finally, PC games underutilize multi-core processors. Four-, six-, and eight-core chips aren’t uncommon in modern gaming PCs, but AMD’s Riguer said that “very few of those cores are available for driving graphics today.” Katsman elaborated on this point, noting that developers must expect drivers to spawn extra threads. He brought up this hypothetical scenario: “If the system has eight cores, then as an app, we should probably only use five, because who knows, the driver may still use another three or so.” That truly is a hypothetical scenario, though—in practice, Katsman pointed out that most games “flatten off at one core.”

What Mantle does

Mantle takes a number of steps to alleviate the issues outlined on the previous page. By giving developers more direct control of the GPU and putting them, in Riguer’s words, in the “driver developer’s seat,” Mantle can cut overhead and allow for more efficient use of both the graphics hardware and the CPU.

Mantle’s most fundamental and innovative feature, according to AMD’s Brian Bennett, is its execution model. Here’s how he described it:

These days, a modern GPU typically has a number of engines that execute commands in parallel to do work for you. You have a graphics or compute engine, DMA, multimedia . . . whatever. The basic building block for work for those engines is a command buffer. In [the diagram above], a command buffer is a colored rectangle. A driver builds commands targeting one of the engines; it puts the command buffer into an execution queue, and the engine, when it’s ready to do some work, goes to the execution queue, grabs the work, and performs it.
[That’s] as opposed to a context-based execution model, where it’s up to the driver to choose which engine we want to target; it’s up to the driver to figure out where we break command buffers apart, and manage the synchronization between those engines. Mantle exposes all that, abstracted, to you. So, you have the ability to build a command buffer, insert commands into it, submit it to the queue, and then synchronize the work between it. This lets you take full advantage of the entire GPU.

More fundamentally to Mantle’s goals is the fact that you can create these command buffers from multiple application threads in parallel. . . . That is the key to opening up the potential of our multi-core CPUs these days. There is no synchronization at the API level in Mantle; there is no state that persists between command buffers. It is up to you to do the synchronization of your command building and of your command submission; and if you want to do work on multiple engines, we give you constructs to synchronize work between those engines. You have all the power.
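
To put Bennett's description in rough code form, here is a small C++ sketch of parallel command-buffer recording followed by explicit submission to an engine's queue. The CmdBuffer and Queue types and the recording function are hypothetical stand-ins for illustration, not actual Mantle entry points, which AMD has not published.

```cpp
#include <cstdio>
#include <thread>
#include <vector>

struct CmdBuffer { std::vector<int> commands; };        // stand-in for a GPU command buffer
struct Queue     { std::vector<CmdBuffer*> pending; };  // stand-in for an engine's execution queue

// Each worker thread records its slice of the scene into its own command buffer.
// The API does no synchronization behind the application's back, so no locks are needed here.
void RecordDrawCommands(CmdBuffer* cb, int firstObject, int count)
{
    for (int i = 0; i < count; ++i)
        cb->commands.push_back(firstObject + i);        // pretend "draw object i"
}

int main()
{
    const int objectCount = 100000, workerThreads = 4;
    Queue gfxQueue;
    std::vector<CmdBuffer>   buffers(workerThreads);
    std::vector<std::thread> workers;

    // Build command buffers from multiple application threads in parallel.
    const int perThread = objectCount / workerThreads;
    for (int t = 0; t < workerThreads; ++t)
        workers.emplace_back(RecordDrawCommands, &buffers[t], t * perThread, perThread);
    for (auto& w : workers) w.join();

    // The application chooses the submission order and handles synchronization itself.
    for (auto& cb : buffers)
        gfxQueue.pending.push_back(&cb);

    std::printf("submitted %zu command buffers\n", gfxQueue.pending.size());
}
```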

Mantle’s execution model extends to multiple GPUs. Developers have access to all of the engines on all of a system’s Mantle-compatible GPUs, and they can control those GPUs and handle synchronization themselves. “Synchronization between the GPUs,” Riguer explained, “becomes a natural extension to the mechanism we exposed . . . on synchronization between multiple queues. In fact, we make [the] multi-GPU model exactly like a single-GPU model scaled up to multiple devices.”

As a result, developers have much more flexibility in the way they split up workloads between GPUs, and they can “try to make [their games] scale a lot better” than what’s possible with CrossFire right now. Techniques superior to today’s alternate frame rendering (AFR), whereby each GPU renders a different frame in the animation, can be developed, and asymmetric configurations—such as those with slow integrated graphics and fast discrete graphics—can be more readily exploited.

Moving beyond AFR is particularly important. While that technique works reasonably well with current games, Riguer said future titles will run more workloads with lots of frame-to-frame dependencies, such as compute-based effects. To handle those, “You would need to either duplicate the workload across GPUs or serialize across the GPUs. In either case, your scaling suffers.”

Mantle manages memory in a very different way than Direct3D, too. Here is Bennett’s explanation of that feature:

In traditional APIs, when you create an object like an image or a buffer, the driver implicitly allocates memory for you. [That] seems okay, but it has a number of problems. It’s difficult to efficiently recycle memory; you’re going to have bigger memory footprints because of that; creating the object itself is more expensive, because you have to go to the OS to get the GPU memory; and the driver becomes inefficient, because it spends a lot of time managing these OS video memory handles to work with the display driver model.
In Mantle, API objects are simple CPU-side info that have no memory explicitly attached. Instead, you as the app developer allocate GPU memory explicitly and bind it to the object.

Again, higher efficiency and flexibility are the name of the game.
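
As a rough illustration of the difference, here is a conceptual C++ sketch of the allocate-then-bind model Bennett describes. The types and helper functions are hypothetical stand-ins rather than real Mantle calls; the point is only that the application owns the GPU memory, so it can sub-allocate and recycle it instead of paying for a driver-managed allocation per object.

```cpp
#include <cstddef>

// Hypothetical stand-ins: the image object is cheap CPU-side metadata with no memory attached.
struct GpuMemory { std::size_t size; };
struct Image     { int width, height; GpuMemory* memory; std::size_t offset; };

GpuMemory* AllocateGpuMemory(std::size_t bytes) { return new GpuMemory{bytes}; }
Image*     CreateImage(int w, int h)            { return new Image{w, h, nullptr, 0}; }
void       BindMemory(Image* img, GpuMemory* mem, std::size_t offset)
{
    img->memory = mem;      // the app decides where the image lives...
    img->offset = offset;   // ...and can rebind or reuse that memory later
}

int main()
{
    // One explicit allocation, sub-allocated by the application as it sees fit,
    // instead of one implicit driver allocation per object.
    GpuMemory* pool        = AllocateGpuMemory(64u * 1024 * 1024);
    Image*     shadowMap   = CreateImage(2048, 2048);
    Image*     bloomBuffer = CreateImage(960, 540);

    BindMemory(shadowMap,   pool, 0);
    BindMemory(bloomBuffer, pool, 32u * 1024 * 1024);   // reuses the same pool

    delete shadowMap; delete bloomBuffer; delete pool;
}
```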

That brings us to monolithic pipelines. To paraphrase Johan Andersson, Mantle rolls all of the various shader stages that make up the graphics pipeline into a single object. Above, I’ve added the slide from Andersson’s keynote, since it’s somewhat more enlightening than the one used by Riguer and Bennett in their presentation.

In short, monolithic pipelines help avoid draw-time shader compilation—a problem that, as I mentioned earlier, can make games stutter. Here’s how Bennett sums it up:

In the current implementations, draw-time validation that the driver does is super expensive. Since you can vary all your shaders in state independently, we spend a lot of time at draw deciding what hardware commands we should write. By compiling the pipeline up front, binding the pipeline is lightning fast in comparison.
Second, by compiling this up front, you give us the opportunity to spend some cycles to improve the GPU performance. If we know everything you’re doing in the whole pipeline, we can optimize that. And . . . with the draw-time validation models, sometimes you’ll bind a new state, call draw, and that draw will have an inexplicably high CPU cost. Maybe the driver had to kick off a shader compile in the background, and that’s going to impact you. [There are] no surprises with Mantle.

Mantle doesn’t just help prevent shader compilation from occurring mid-game. It can also prevent shaders from being recompiled each time the game is launched. According to Riguer, recompilation can account for a “lot of the startup time,” but with Mantle, “the shader compilation is a lot more predictable, and we give you the ability to save and load very quickly and easily a complete compiled shader pipeline, which should virtually eliminate all the loading time that stems from shader compilation.”
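
Here is a small, self-contained sketch of that idea: all of a pipeline's shaders and state are compiled into one object at load time, and the compiled result is written to disk so the next launch can reload it instead of recompiling. The types, fields, and file format are hypothetical; Mantle's actual pipeline and serialization interfaces weren't spelled out at APU13.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical description of a monolithic pipeline: every shader stage plus fixed-function state.
struct PipelineDesc { std::string vs, hs, ds, gs, ps, rasterState, blendState; };
struct Pipeline     { std::vector<unsigned char> compiledBlob; };

// The expensive step, paid once at load time rather than at draw time.
Pipeline CompilePipeline(const PipelineDesc& d)
{
    Pipeline p;
    // Real compilation and optimization would happen here; we just fake a blob.
    p.compiledBlob.assign(d.vs.begin(), d.vs.end());
    return p;
}

bool SavePipeline(const Pipeline& p, const char* path)
{
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    std::fwrite(p.compiledBlob.data(), 1, p.compiledBlob.size(), f);
    std::fclose(f);
    return true;
}

int main()
{
    PipelineDesc desc{ "forward.vs", "", "", "", "forward.ps", "cull_back", "opaque" };
    Pipeline pipeline = CompilePipeline(desc);          // compile the whole pipeline up front
    SavePipeline(pipeline, "forward.pipeline.bin");     // reload next launch, skip recompilation
}
```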

Incidentally, Bennett said he expects pipelines to look “different in the future.” He suggested that Mantle’s graphics pipeline abstraction will help the API adapt to these future changes—enabling “some stuff that we can’t do in real time now.”

Mantle introduces a new way to bind resources to the graphics pipeline, as well. According to Bennett, the traditional binding technique is a “pretty big performance hog,” and the currently popular alternative, which he calls “bindless,” has downsides of its own, including higher shader complexity, reduced stability, and being “less GPU cache friendly.”

Mantle’s binding model involves simplified resource semantics compared to Direct3D, and it works like so:

In Mantle, when you create your pipeline, you define a layout for how the resources will be accessed from the pipeline, and you bind that descriptor set. The descriptor set is an array of slots that you bind resources to. Notably, you can bind another descriptor set to a slot—and this lets you set hierarchical descriptions of your resources.

If your eyes just glazed over, that’s okay—mine did a little, too. In any event, Bennett said that the ability to build descriptor sets and to generate command buffers in parallel is “very good for CPU performance.” During his presentation, Johan Andersson brought up a descriptor set use case that reduced both CPU overhead and memory usage.
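
For readers who want something more concrete, here is a bare-bones C++ sketch of that binding model, using hypothetical types: a descriptor set is just an array of slots, and a slot can hold either a resource or another descriptor set, which is what makes hierarchical descriptions possible.

```cpp
#include <variant>
#include <vector>

// Hypothetical resource handles for illustration.
struct Texture { int id; };
struct Buffer  { int id; };
struct DescriptorSet;

// A slot can reference a resource or a nested descriptor set.
using Slot = std::variant<Texture*, Buffer*, DescriptorSet*>;

struct DescriptorSet { std::vector<Slot> slots; };

int main()
{
    Texture albedo{1}, normal{2};
    Buffer  perObjectConstants{3};

    // Per-material set, shared by every object that uses this material.
    DescriptorSet material;
    material.slots = { &albedo, &normal };

    // The per-object set nests the material set in one of its slots,
    // giving a hierarchical description of the object's resources.
    DescriptorSet perObject;
    perObject.slots = { &perObjectConstants, &material };
}
```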

Bennett went over one more way in which Mantle can reduce CPU overhead: resource tracking. Right now, drivers spend a “lot of time” keeping track of resources. With Mantle, tracking resources is up to the application. Bennett said he expects apps to do a better job of it than the graphics drivers, and he hinted that developers won’t have to do much extra work to make that happen: “Your game engine is probably doing that sort of tracking already, because you’re supporting consoles that require it.”
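
One common engine-side scheme, sketched below, is deferred deletion keyed to frame completion: each resource records the frame in which it was last referenced, and the engine frees it only after the GPU has signaled that it finished that frame's work. This is a generic illustration of the kind of tracking Bennett alludes to, not anything AMD specified.

```cpp
#include <cstdint>
#include <deque>

struct Resource { std::uint64_t lastUsedFrame = 0; };

struct ResourceTracker
{
    std::uint64_t currentFrame      = 0;
    std::uint64_t gpuCompletedFrame = 0;   // advanced when a frame's fence signals

    std::deque<Resource*> pendingFrees;

    void MarkUsed(Resource* r)     { r->lastUsedFrame = currentFrame; }
    void ScheduleFree(Resource* r) { pendingFrees.push_back(r); }

    void EndFrame()
    {
        ++currentFrame;
        // Free anything the GPU can no longer be touching.
        while (!pendingFrees.empty() &&
               pendingFrees.front()->lastUsedFrame <= gpuCompletedFrame)
        {
            delete pendingFrees.front();
            pendingFrees.pop_front();
        }
    }
};

int main()
{
    ResourceTracker tracker;
    Resource* texture = new Resource;
    tracker.MarkUsed(texture);
    tracker.ScheduleFree(texture);
    tracker.gpuCompletedFrame = 1;   // pretend the GPU has finished frame 0
    tracker.EndFrame();              // frame 0's resources can now be freed safely
}
```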

Last, but not least, Mantle has some debugging and validation tools built into the API and the accompanying driver. AMD didn’t share a ton of specifics about those, but there was mention of “lots of extra controls for stress testing applications and forcing very specific debug scenarios.” Riguer added, “In fact, I would say that writing [debug] tools on top of Mantle, in many cases, would not be much harder than slapping on a fancy UI on top of capabilities we are putting right into Mantle.” Both Johan Andersson of DICE and Jurjen Katsman of Nixxes called Mantle’s debugging and validation tools “really powerful.”

More on how Mantle helps performance

Mantle’s closer-to-the-metal development model, coupled with a more lightweight driver, seems to pay some very real performance dividends. The game developers in attendance at APU13 were reluctant to quote actual performance figures from their games, partly because their work still isn’t quite finished. However, some figures were quoted that shed light on Mantle’s performance benefits.

For starters, Nixxes’ Katsman revealed that “very early figures from Thief” (which is “not fully running on Mantle yet”) showed a big reduction in draw call overhead. “Before, we would often see about 40% of the CPU time stuck in the driver, in D3D, or in various threads,” he said. “The early measurements we did, right now we have that down to about a fifth of that.”

The guys from Oxide offered a more visual representation of Mantle’s CPU overhead in their talk. Mantle is the yellow rectangle, the game engine is the blue one, and unused CPU time is shown in green.

DICE’s Andersson expanded on that same notion in his keynote, saying that, with Mantle, the CPU “should never really be a bottleneck for the GPU anymore.” In a separate demonstration, Oxide showed their Mantle-enabled space game suffering no frame rate hit when the FX-8350 processor on which it ran was underclocked to 2GHz, or half its base speed. (Graphics processing in that demo was handled by a Radeon R9 290X.)

The reduction in draw call overhead also means more draw calls can be issued per frame. Riguer said Mantle raises the draw call limit by an order of magnitude to “at least” 100,000 draw calls per frame “at reasonable frame rates.” This isn’t just theoretical—Oxide showed their space game demo actually hitting 100,000 draw calls per frame. Andersson, who was in the audience for that presentation, was impressed enough to tweet about the demo.

Mantle will allow game developers to use more CPU cores, too, as these two slides from Andersson’s presentation show. According to Andersson, the Mantle model outlined in the second slide is “the exact model that we’re using on all of the consoles”—both current and next-gen ones. In his talk, Katsman explained that, if a system has eight cores, Mantle allows developers to use all of those cores for their game. “So, we can have four to do rendering, a few more to do physics and some other things. We can make games that are far more complicated. We can increase the draw distance to significant distances, have far denser worlds.”

According to Katsman, “The density of everything in the world is something that’s being held back, and I think Mantle will help alleviate that.” That said, “Just because we can draw more things doesn’t mean we have the CPU resources to simulate them all.” For example, while Mantle might make it possible to draw many more characters in a given scene, developers will have to consider the cost of running AI simulations for all of those characters.

In addition to making more effective and efficient use of the CPU, Mantle will allow GPU resources to be used more efficiently. Katsman brought up the Radeon R9 290X, which has 5.6 tflops of compute power, and said that an “awful lot” of that compute power is “lying there dormant.” With current APIs, some of the compute power might be used for some parts of a frame, but other parts “will be bottlenecked by something else,” such as “getting things from memory, by fetching textures through the texture fetch units, [and] the rasterization units.” He went on:

The APIs we have right now, they just allow us to queue synchronous workloads. We say, “draw some triangles,” and then, “do some compute,” and the driver can try to be a little smart, and maybe it’ll overlap some of that. But for the most part, it’s serial, and where we’re doing one thing, it’s not doing other things.
With Mantle . . . we can schedule compute work in parallel with the normal graphics work. That allows for some really interesting optimizations that will really help your overall frame rate and how . . . with less power, you can achieve higher frame rates.

What we’d see, for example—say we’re rendering shadow maps. There’s really not much compute going on. . . . Compute units are basically sitting there being idle. If, at the same time, we are able to do post-processing effects—say maybe even the post-processing from a previous frame, or what we could do in Tomb Raider, [where] we have TressFX hair simulations, which can be quite expensive—we can do that in parallel, in compute, with these other graphics tasks, and effectively, they can become close to zero cost.

If we guessed that maybe only 50% of that compute power was utilized, the theoretical number—and we won’t reach that, but in theory, we might be able to get up to 50% better GPU performance from overlapping compute work, if you would be able to find enough compute work to really fill it up.

The 50% figure is a theoretical best-case scenario, but Katsman added, “It seems quite realistic that you would get maybe 20% additional GPU performance out of optimizations like that.”
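
Here is a sketch of the scheduling idea Katsman describes, with hypothetical types and calls rather than real Mantle entry points: the frame's rasterization-heavy work goes to the graphics engine's queue, the compute-heavy work goes to the compute engine's queue, and the GPU can chew through both at once, with the application inserting a single sync point where the results are actually needed.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical stand-ins for command buffers and per-engine queues.
struct CmdBuffer { std::string work; };
struct Queue     { std::vector<CmdBuffer> submitted; };

void Submit(Queue& q, CmdBuffer cb)  { q.submitted.push_back(std::move(cb)); }
void WaitForQueue(const Queue&)      { /* in a real frame, block on this queue's fence */ }

int main()
{
    Queue graphicsQueue, computeQueue;

    // Shadow-map rendering uses the rasterizer heavily but leaves compute units idle.
    Submit(graphicsQueue, { "render shadow maps" });

    // Compute work (last frame's post-processing, TressFX-style hair simulation)
    // runs on the compute engine in parallel with the shadow-map pass.
    Submit(computeQueue, { "post-processing / hair simulation" });

    // One explicit sync point, placed where the main pass actually needs both results.
    WaitForQueue(graphicsQueue);
    WaitForQueue(computeQueue);

    std::printf("graphics: %zu, compute: %zu command buffers submitted\n",
                graphicsQueue.submitted.size(), computeQueue.submitted.size());
}
```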

Also, because Mantle lets developers use GPU memory more efficiently, the new API could allow for the use of higher-resolution textures in a given game, according to Katsman.

Some caveats

Mantle’s advantages are many, but a few downsides were mentioned in the various presentations at APU13.

One of those is that, unsurprisingly, supporting an additional API incurs added development time and cost. Mantle currently works only on GCN-based Radeon graphics processors, which means that developers who adopt it must also use either Direct3D or OpenGL to support other graphics hardware. Andersson said DICE spent about two months porting Battlefield 4’s Frostbite 3 game engine to Mantle. Asked for a ballpark cost figure, Katsman told me that, for a simple PC project like Nixxes’ Thief port, adding Mantle support might amount to roughly a 10% increase in development cost. He was quick to add, however, that such an increase is a drop in the bucket compared to the total development cost of the entire game for all platforms, which might add up to something like $50 million.

The lack of multi-vendor and multi-platform support is another one of Mantle’s notable downsides. Microsoft and Sony use different APIs for the Xbox One and PlayStation 4, and Mantle doesn’t yet support Linux, OS X, or Valve’s upcoming SteamOS. There are some mitigating factors here, though. Katsman noted that Mantle optimizations are “conceptually similar” to the ones developers write for next-gen consoles. That tells us developers won’t be starting from scratch when adding Mantle support to their games. Also, Katsman believes Mantle’s performance improvements make its implementation worthwhile even if only a fraction of users benefit. As he pointed out, developers already spend time writing support for features like Eyefinity and HD3D into their games, and those features have even smaller user bases.

Finally, adding Mantle support to current game engines, as Nixxes did with the version of Unreal Engine 3 used by Thief, can be a challenge. “Native D3D ports will not magically get much higher performance,” explained Katsman. “If you emulate the same system on top of Mantle, you will not get much better performance.” Fully optimizing an existing engine for Mantle seems to involve breaking and rewriting some chunks of that engine to take advantage of the new development model. But here again, Katsman believes the performance improvements make the effort worthwhile.

Mantle’s future

Mantle’s immediate future, at least, seems clear enough.

AMD has already worked with a number of game developers on Mantle support. The most notable and productive of those collaborations is probably the one with Johan Andersson of EA DICE. Andersson told us a Mantle patch for Battlefield 4 will be released in late December, and the same Mantle-enabled Frostbite 3 engine will go on to power 15 other EA titles. The slide below, which Andersson showed in his presentation, hints that more than a few of those 15 games will belong to major franchises—Dragon Age, Star Wars, Mirror’s Edge, Need for Speed, and Mass Effect.

Mantle support is also coming to Eidos Montreal’s Thief, Cloud Imperium’s Star Citizen, and Rebellion Entertainment’s Sniper Elite 3. In the next couple of months, AMD will kick off a closed beta program that will allow even more developers to join. Given the efficiency gains Mantle seems to enable, the conceptual similarities it shares with console APIs, and the enthusiasm of the game developers who spoke at APU13, I wouldn’t be surprised to see Mantle support land in many more games over the next year or two. Guennadi Riguer also told me that, as with consoles, developers should be able to squeeze more performance out of Mantle over time. That could make adopting the new API an even more attractive proposition.

What comes after that is a little harder to predict.

During his keynote, Andersson expressed a strong desire to see Mantle support expanded beyond Windows and AMD GPUs. In a roundtable talk later that day, he added that Mantle support coming to third-party GPUs would be “really important for [DICE] in the future,” and the studio would like to use the API “everywhere and on everything.” However, he admitted that it would be “very difficult to get there”—not for technical reasons, but for political ones.

For its part, AMD isn’t opposed to addressing some of those political hurdles. Guennadi Riguer said the company is “fairly open to working with other [independent hardware vendors],” and he reiterated that Mantle has been “purposely structured . . . in such a way that it’s as clean as possible, as transferable to other vendors as possible.” When asked if AMD would be amenable to making Mantle an open API overseen by the Khronos Group—the same folks who look after OpenGL—he replied, “I don’t see why not.” At this point, Jurjen Katsman chimed in, saying that AMD shouldn’t hand Mantle to Khronos right away, because it’s “not done.”

Whether Mantle will succeed in its current form, become an open industry standard, or join Glide in the graveyard of vendor-exclusive APIs, we can’t know for sure. Whatever happens, Katsman was adamant that Mantle has done a good thing by showing that “there’s something wrong with current APIs.” He and others seemed excited about the prospect of Mantle shaking up the industry and spurring change, regardless of how that change ultimately takes place.

Comments closed
    • Gadgety
    • 6 years ago

    An excellent, very clearly written piece. Thank you, Cyril Kowaliski.

    • CBHvi7t
    • 6 years ago

    I don’t get the vocabulary. What is
    state (scene?)
    draw-time validation model (no idea)
    binding (resource allocation?)
    descriptor set (bunch of pointer arrays?)
    pipeline (to me that is a series of algorithms (SW) and their resources (HW))
    I might get the usage of pipeline now. It is not at all a pipeline but actually a loop with serial steps. At every time there is just one picture being prepared, not one for every step in the pipeline, thus the dataset changes with every stage that is invoked.

    • Kreshna Aryaguna Nurzaman
    • 6 years ago

    We’re living in the era of multiple cores, yet games seem to rely on single-core performance. Correct me if I’m wrong, but the way I see it, Mantle makes games rely less on single-core performance, and enables them to utilize more cores.

    Isn’t that a good thing?

      • Hz so good
      • 6 years ago

      I’m wondering if this can be ported to steamOS. Less software overhead in the OS in general, plus a low-level graphics API, could be a really good thing. We’d get more bang out of our hardware buck, so to speak. It’s been a long time since I’ve tinkered with linux, but driver support, X-Window, and MesaGL (?) were issues back then.

      This is assuming steamOS picks up any major developer support.

    • Klimax
    • 6 years ago

    Small exercise:
    [b<]ETA: I have by mistake omitted in the second calculation what number I used for improvement![/b<]

    I took numbers from a recent Hexus review where BF4 was used and did some simple calculations.
    [url<]http://hexus.net/tech/reviews/graphics/63057-gigabyte-geforce-gtx-780-ti-ghz-edition/?page=4[/url<]

    Stock 780 Ti does 77.2FPS and 50.9FPS; R9 290X (assuming uber mode) does 67FPS and 44.7FPS.

    If we assume that Mantle will grant on average a 10% increase (the safer and more likely number), then we get 73.7 and 49.17FPS, which would end up just below the 780 Ti. (I cannot estimate frame times, because the distribution will likely change depending on how their approach differs from the driver team's and whether there is any visible effect from Mantle, and they are missing from the review.)

    Now if we take the more fantastic result, more in line with AMD's expectations (20%), then we get 80.4 and 53.64, putting the Radeon above the stock 780 Ti.

    Now this is too simple, because there are a number of factors I cannot predict, like the probability of the 290X hitting thermal limits (better coolers may fix that to some degree); then there is the question of how much the CPU will play a role and how much time DICE will spend on the CPU side (remember that the BD architecture has a number of often contradicting parameters); and another consideration is how much room NVidia has left in their drivers. Of course, the unstated assumption is that developers can get on par with the baseline in the form of the DirectX/driver combo.

    BTW: For historical perspective: [url<]http://www.activewin.com/reviews/software/drivers/detonator.shtml[/url<]
    That is a comparison to show whether or not there really is a gain from CPU optimization in drivers. (Reminder that it dates from the time of fixed-pipeline graphics.)

      • Pwnstar
      • 6 years ago

      The numbers I saw estimated were an average of 20% improvement, not 10.

        • Klimax
        • 6 years ago

        As you can see, I am using both numbers. I use the first 10% as my estimate, but then I use 20% as their estimate, aka the second set of results.

        Grrr, I now noticed I forgot to write what improvement I took in the second case! (Yes, the second calculation used 20%, which should have been possible to recheck on your own…)

    • DeadOfKnight
    • 6 years ago

    I’m all for change. Looks like it may be coming with the next version of DirectX, but that may not be enough. We need Microsoft to wake up and start giving a crap. Maybe that is happening with the change in leadership; maybe it isn’t.

      • Airmantharp
      • 6 years ago

      MS acknowledged during the first round of hubbub over Mantle that they have stuff in the works to alleviate the same basic issues that Mantle attacks. If they’re successful, it’ll be up to the developers to ensure that Mantle’s remaining advantages are properly showcased.

      The real news is, the industry is finally taking game engine optimization on the PC seriously! Your average enthusiasts’ system starts out with an order of magnitude more performance than the average console, yet the experience rarely fully reflects that gap with respect to fidelity and fluidity of gameplay.

        • Modivated1
        • 6 years ago

        The real news is that AMD has already spent its time and money to produce an exciting new product that game developers can use NOW, and that programmers are already very excited about it!

        Microsoft and whoever else did not take the game-developing community seriously until AMD completed a polished product (something they needed badly); now everybody is scrambling to make their own version. AMD’s product is FINISHED and it’s HERE in the hands of developers; no one wants to wait another year or two for Microsoft’s response to Mantle. By that time Mantle will have become a standard and Microsoft will be trying to get its foot back in the door.

        Should have grabbed it while it was yours, Microsoft; now it’s AMD’s piece of the pie 🙂.

    • TwoEars
    • 6 years ago

    Now – this is why I like the TechReport!

    First rate article about the kind of stuff that’s been rattling my brain for the last couple of weeks.

    • chuckula
    • 6 years ago

    One thing that bothers me is the whole “we can do more drawing calls!” schtick.

    I hate to break it to everybody, but in modern graphics, at least with OpenGL, it’s not like you have to make a drawing call to put each & every polygon up on the screen. VBOs and tessellation shaders are the modern ways of making the whole “drawing calls are slow” problem kind of irrelevant.

    Here’s a fun video showing tessellation in OpenGL 4.0 using Qt: [url<]http://www.youtube.com/watch?v=v2b4yQ61lKI[/url<] Note what shows up at 0:34 ("All from one OpenGL draw call!")

    Incidentally, I've been playing around with QML and the particle system in the new Qt 5.1/5.2 libraries, and you can do some pretty awesome effects with it.

      • sschaem
      • 6 years ago

      Great find, you should send that link to DICE and Crytek and all the other fools who think that draw calls are a limitation…

      Let’s imagine for a second that the whole game engine industry really is complaining about a draw call limitation and it’s not a conspiracy theory… Why do you think they see ‘unlimited’ draw calls as a boon?

      I guess to make it simple… Why do you think games have to make more than one draw call per frame?
      Answer that and you will have your answer.

        • WiseInvestor
        • 6 years ago

        Great find. You should open your own game studio and license your game engine or something.

        Let’s imagine gaming companies are run by people who “find things” on YouTube instead of making a game or a demo to prove their point.

      • CBHvi7t
      • 6 years ago

      My guess is that it is not about the calls but about the primitives.
      They just called them “calls”

    • Diplomacy42
    • 6 years ago

    The lack of concrete information about Mantle spawned lots of speculation and debate, but most of it wasn’t very enlightening.–

    like articles on news sites bemoaning how Mantle was going to destroy PC gaming? eh?

    • Arclight
    • 6 years ago

    Mantle is a good thing since it will shake things up in terms of how current popular APIs work, or rather don’t, but if AMD invested a lot in it and it’s a gamble for them then they will probably regret it.

    What worries me is that they said that you can make a game engine work with Mantle but you won’t see the benefits unless you rewrite some part of the code from scratch to actually get better performance.

    What this means is that some titles with Mantle support, but not many optimizations, won’t show enough of a performance boost to make it look worthwhile for consumers, and even though the ideas behind it are brilliant, failing to execute them properly will create a bad rep.

    As soon as Mantle fails to deliver noticeable fps gains in popular titles with few optimizations, the whole internet will explode and attack every aspect of the API, even though, at its core, it is good.

      • Antimatter
      • 6 years ago

      If it’s true that part of the code would have to be rewritten in order to see a performance benefit then it wouldn’t make sense to release a Mantle version of the game if the developers don’t intend to make the optimisations. They might as well stick to DirectX or OpenGL

        • Arclight
        • 6 years ago

        [quote<]Finally, adding Mantle support to current game engines, as Nixxes did with the version of Unreal Engine 3 used by Thief, can be a challenge. "Native D3D ports will not magically get much higher performance," explained Katsman. "If you emulate the same system on top of Mantle, you will not get much better performance." Fully optimizing an existing engine for Mantle seems to involve breaking and rewriting some chunks of that engine to take advantage of the new development model. But here again, Katsman believes the performance improvements make the effort worthwhile.[/quote<]

        I am convinced that various developers will make a Mantle version even if they don't properly optimize for it. For one, they could be sponsored by AMD to make the port; that doesn't mean they'll do it properly, especially when they are constantly pressed for time. Look at DICE games published through EA: historically they have always been pushed to launch games before they are reasonably bug-free. Look what happened with BF3 and BF4... major bugs at launch that took a lot of time to get fixed, and it's not exactly something new for the franchise.

        I'm 100% convinced that many of them will have botched Mantle support, and given that even AMD does not claim that the API is finished, there is no doubt, in my opinion, that naysayers will have plenty of material to use against the adoption of this API. That's not a bad thing though; if it does manage to have even one big game title that shows huge improvements, it will cause consumers to ask for reform.

        BTW, any news regarding G-Sync? I'm quite interested in knowing if a dynamically changing refresh rate causes headaches and nausea, since I vaguely remember that not all people are comfortable at any refresh rate, most preferring a certain fixed value. Maybe I remember it wrong though; it wouldn't be the first time.

          • puppetworx
          • 6 years ago

          Mantle has to deliver better performance than DirectX out of the box, that’s the only way they can make any headway into the market. According to AMD it does, they claim that using Mantle over DirectX lowers CPU overhead, so in that way simply using Mantle [i<]is[/i<] optimizing. The ability to optimize more thoroughly for various configurations being the icing on the cake(or should that be crust on the mantle).

            • Klimax
            • 6 years ago

            Not only does it have to outperform DirectX, it has to do so by 10% or more to be significant. Otherwise they are too susceptible to driver optimizations by NVidia and Intel, and it might also be lost in performance noise.

    • DPete27
    • 6 years ago

    So, my 3570K will last an eternity now!?!?!? With no/minimal CPU limitations, maybe I’ll strap a R9-290 onto a Bay Trail Atom for the next gaming rig I build.

      • Airmantharp
      • 6 years ago

      Or you could just wait for Nvidia’s next project, which puts an ARM CPU on the GPU itself :).

      • Pwnstar
      • 6 years ago

      That would work if every game you wanted to play had a Mantle version. I doubt every game will convert, so gaming on an Atom is a bad idea.

      • Theolendras
      • 6 years ago

      You still need good single-thread performance for the main game engine. And, if history is any indication, it could enable lazy ports to be lazier still (though I doubt someone would be enticed to use Mantle to that end; it wouldn’t make sense to port to Mantle something you did not optimize in general).

      Or, if well received by the market, it could just push the boundaries for the CPU in other stuff: AI, physics, etc.

    • Klimax
    • 6 years ago

    Well, what have we here. FUD, FUD and more FUD. Their claims against DirectX lack any evidence or proof at all. There is nothing verifiable about them. Their results cannot be replicated. In short, their word is as good as a salesman’s. Anybody taking their claims for granted is an outright fool.

    They failed to detail even how they tested for bloody inefficiency and why they think their API will solve anything.

    That there are developers who suck at using an API doesn’t mean that it is the API which sucks. The primary candidate for this is their claim about compilation, which is an outright lie. The developer has full control over when compilation happens, because you can either precompile before the game even starts or compile when you think it is best.
    Original D3DX function:
    [url<]http://msdn.microsoft.com/en-us/library/windows/desktop/ff476261(v=vs.85).aspx[/url<]
    Note the following paragraph:
    [quote<]Instead of using this function, we recommend that you compile offline by using the Fxc.exe command-line compiler or use one of the HLSL compile APIs, like the D3DCompileFromFile API.[/quote<]

    They mislead on such a simple topic; why should anybody who has a clue believe them?

    Their claims about memory management are not quite true either. First, drivers are supposed to know the best location and manage it, as they are assumed to have full knowledge of hardware and state. Second, there is no evidence for their claims, but I'll note that Microsoft intentionally changed how DirectX operates, so devs no longer specify the location of memory but denote intention, so DirectX and the driver can place it optimally, because frankly it otherwise required quite a bit of extra code to ensure nothing went wrong, like running out of memory.

    Another claim was about draw calls. That's simply bullshit without proper evidence, and their claim about "hoops" lacks credibility too. Not only do they try to claim that one has to understand an API to use it best (which, surprisingly, will apply to Mantle too), but also that there is some overhead. But if you flatten and lower the pipeline/API, where do you think that overhead will go? Back into the programmer's code, of course, because there is no free lunch. Also not proven.

    And the last example of badness I saw there is:
    [quote<]Katsman then decried the fact that driver optimizations are "almost required" for new games. Anyone who's ever had to download multiple beta driver updates to support a new PC game will be all too familiar with that problem. Developers are, in effect, unable to make their games work well by themselves. "I think that's actually very harmful and doesn't really contribute to users getting a good experience from the games they buy," said Katsman.[/quote<]
    Yeah, sure thing, we want developers to spend even more time on GPU-specific optimizations instead of fixing bloody bugs, only to see their work eliminated by the next GPU series. There is a reason why this job is best left to driver teams, but AMD can't afford them now, can they? So their solution is to go back to the '90s and have developers spend extra time on a job that driver teams are (or should be) more qualified for. The end result is more bugs for your money.

    I could spend days debunking their claims where possible (some things are simply too general to debunk); sadly, I don't have that time.

    ===

    And lastly, all their presentations are useless. Give us documentation and an SDK, or shut up. Why can't we download it and see for ourselves in our own code? Why is it only for select developers? Maybe because we would find out that all those claims are invalid? That all of this was just a marketing and money-wasting exercise? Maybe because it is in reality a weak form of vendor lock-in? (Or did everybody already forget Global Illumination in Dirt 3: Showdown?)

    And if you really wanted to go off the rails and do it on your own, there is DirectCompute; you'd just hand off your result to the rasterizer and pass it to presentation. Nearly no overhead. Or use CUDA! There is even a rasterizer in CUDA for your own use. (Courtesy of NVidia Research.)

    ===

    TL;DR: A solution looking for a problem (no such thing proven yet), provided by a vendor, that just further decreases the quality of games. And it will likely fail as soon as the competition upgrades their drivers. Don't be surprised if you don't see much gain... [/Klimax off]

    • HisDivineOrder
    • 6 years ago

    So AMD promises they’ll make this an open standard and we’re supposed to just believe them because…? Is this even just the fifth time AMD’s come up with something that they promised was going to become an open standard and then they forgot about it six months later?

    Plus, it seems like they’re in the “sell” phase right now where they’ll tell anyone anything to get people on board with it. I see lots of, “Developers seem to love it!” but few developers quoted or mentioned except the ones AMD’s already advertised and marketed. AMD’s basically had a long term partnership with Square-Enix long before Mantle existed. They had a Gaming Evolved deal with EA for over a year now, too.

    Star Citizen is using everything high end, apparently, including nVidia and AMD what’s-it’s. Where is the big engine win that’s going to sell the thing? Sweeney’s come out in outright hate of it with the now-irrelevant-to-PC-gaming former engine god, Carmack, also seeming against it.

    So yeah, I think this is one of the scenarios where we don’t assume this is the next coming until the day AMD proves it is. Not with Battlefield 4, but with multiple games across multiple developers. Mantle’s got decent promise, but unfortunately they didn’t introduce it as a standard for all to join. They introduced it as a competitive advantage at the start of the big reveal for R9 290 series. If they wanted an open standard, they sure did it in the most piss-poor way imaginable.

    Plus, coding to the metal for Mantle is going to lead to multiple codebases for different GPU’s (and CPU’s?), accounting for nVidia, AMD, and Intel’s differences in architectures. Even if Mantle is “clean” in allowing other architectures to integrate in, that doesn’t mean that it’s not going to require optimization effort across different architectures. Anything that goes “to the metal” is going to require fundamental optimizing effort to really use it well.

    And that would require a monumental effort on the part of nVidia and Intel to integrate. Why wouldn’t they just introduce their own variants tailor made for their own architectures instead? That’s assuming they want to upset the natural balance we already have where OpenGL can do most everything Mantle is promising without many of the limitations…

    Yeah, I’m going to wait until AMD proves this isn’t another Truform, DX 10.1, Stream, Havoc GPU integration, etc. AMD’s got a known track record of talking big and showing up with little.

      • nanoflower
      • 6 years ago

      Or since they are coding close “to the metal” it may be that the effort required to work on various architectures is actually put upon the game developer. How many game developers would want to optimize their Mantle engine for pre-GCN AMD, GCN AMD, Kepler, Fermi, and whatever Intel provides? Plus whatever comes next from each of the GPU/IGP providers.

      That’s a lot of work to be done whether it’s done by the developers of the games or the architecture/GPU providers, but at least if it comes with the GPU then we only have three companies involved in developing/optimizing for each architecture.

        • Klimax
        • 6 years ago

        Correct. Just debugging a widely used API is hard; debugging a new API with a nonexistent toolset is insanity. (Been there, had fun and lost a number of hours because I lacked a toolset. Pre-Visual Studio 2012 case, but saw a lot of disassembly.)

      • Pwnstar
      • 6 years ago

        [quote<]Where is the big engine win that's going to sell the thing?[/quote<]

        Frostbite is pretty big. Oxide has the potential to take over the RTS engine market. If they get Unity, Crytek and Unreal on board, it’s game over.

        • Visigoth
        • 6 years ago

        If.

        • WiseInvestor
        • 6 years ago

        Crytek already supports the Mantle API; that was confirmed in Tom’s Hardware’s Ask Me Anything community forum.

        Activision and Epic could be next to adopt Mantle out of sheer rivalry: CoD could use an engine overhaul, and Unreal doesn’t want to see game developers going to CryEngine for its closer-to-the-metal approach.

        And Rockstar really needs the Mantle API badly; all GTA titles are draw-distance and CPU bound.

          • Klimax
          • 6 years ago

          When IW talked about Ghosts they specifically stated they will update necessary, but no big rewrites.
          [url<]http://vr-zone.com/articles/call-of-duty-ghosts-new-engine-is-an-upgrade-but-not-all-new/32141.html[/url<] [quote<] In a bit of a case of "if it ain't broke," Volker definitely is aware of how big the franchise is and how important the beginning of a new series is. He spoke briefly about the internal politics of Infinity Ward and how they always want to improve on the engine rather than starting from scratch. Volker notes that Infinity Ward always makes "a really clear point to make sure we only upgrade the things that are necessary for driving the things and the gameplay experience we want to push for that project." [/quote<] As for GTA, is there any analysis on this topic? Because I'd think that their CPU boundary comes from simulation, not graphics.

      • El_MUERkO
      • 6 years ago

      My next GPU is going to be AMD so I’m happy either way, but I’d love to hear nVidia say ‘make it an open standard and we’ll seriously consider supporting it’, nothing carved in stone, just a recognition that D3D is a bit crap.

      • Theolendras
      • 6 years ago

      If I’m not wrong, many of the elements you name up there failed because AMD was either ignored by a strategic partner or supplanted by a more resourceful competitor. Not that this is a good sign for their delivery capacity, but it does not necessarily cast doubt on their good faith. Truform was proposed to Microsoft and never included. Havoc was probably onto something, because Intel bought the company outright and… well, did nothing relevant with it. Stream, well, I can’t agree more on that one…

      Mantle vs. OpenGL seems to be a debate analogous to C vs. Java: they both have strengths and weaknesses, but aren’t targeted at the very same market. For those who want to publish to the greatest number, where pushing every bit of performance is secondary to interoperability, OpenGL will prevail, even if Mantle takes off and becomes an open standard. If it does become an open standard, though. I’d really like to see that happen, because PC graphics improvements won’t scale quite as much if we stick with the current model. Multi-core scaling seems to be painful with D3D and OpenGL, and single-thread performance is improving only incrementally these days and may stay that way for some time. The other positive side effect is that if Mantle takes off, nothing would necessarily tie it to Microsoft. So not only could it make the PC more efficient, but long term it could also let you build a gaming box more cheaply, as in no Windows tax as well… But there are a lot of problems before getting there, and it may never happen.

    • WaltC
    • 6 years ago

    Thanks for the article. Explains much!

    So, yea, I guess Mantle is designed to usurp D3d, after all, then. That’s good, evidently, because Microsoft has sort of dropped the ball with respect to pushing ahead with much-needed improvements to D3d, and Mantle represents that much-needed improvement. Microsoft these days is rather like the gnarled, schizoid relative that we all keep locked away in the attic–or chained in the belfry–the company definitely is of two distinct minds. On one hand it’s jealous of D3d’s Windows integration and protects it; on the other the company doesn’t seem motivated to improve D3d except in tiny little increments when the mood strikes. It seems remarkable how little has changed in the last eleven *years*, as I first ran D3d 9.0c on my 9700 Pro in 2002! I had no idea D3d had this much room for improvement and it’s easy to see why developers would use Mantle even if it meant using it in addition to D3d compatibility.

    This new “ARB” committee for OpenGL seems a lot more dynamic than the glacial ARB used to be, but I’m still a bit skeptical. At the moment, this should remain AMD’s baby until the smoke clears and we can all see a bit better.

    Some guy made the silly comparison (again) with GLIDE. When 3dfx came out with GLIDE, there was no D3d and ARB was just getting off the ground with OpenGL. To support the features 3dfx wanted to bring to market there was no choice apart from rolling their own API. That’s the story behind GLIDE. D3d was a late-comer and an imitator of GLIDE; in fact, D3d did not equal GLIDE feature support until D3d 7.x, roughly. For many years, 3dfx was way out in front and leading the way–GLIDE was a necessity because there was nothing that could take its place. So to call GLIDE something “bad” is ridiculous. If not for 3dfx and GLIDE, who knows what might exist today, if anything? Criticizing GLIDE is as foolish as comparing GLIDE to Mantle, imo. D3d is “cross-platform” with….nothing.

      • HisDivineOrder
      • 6 years ago

      Glad to see you’re finally getting what Mantle is. Why did you wait so long to read up on it?

      Also, the comparison to Glide is a bad thing in the same way as saying “It’s time to take off the training wheels” is a good thing. Training wheels, in and of themselves, are good things. They help you learn to ride a bike, they keep you from really hurting yourself. Eventually, though, they limit you more than they help despite all the help early on.

      That’s what Glide’s problem is. It was necessary in the early days and it served its purpose well. That said, in the end, supporting Glide instead of high level API’s (DX, OpenGL) started to limit more than it helped.

      That’s why Glide is a bad thing. It became a bad thing once solutions arrived that served most of the performance without the strict requirements of ONLY 3dfx hardware.

      In this way, Mantle is actually inferior to 3dfx’s Glide. Not only did Glide at least support all 3dfx hardware, but it was on multiple OS’s. And it was based off of an early version of OpenGL.

      Mantle, on the other hand, is entirely proprietary with limited support to only 7xxx series cards (no 4xxx, 5xxx, 6xxx series) and Kaveri APU’s (no Llano, Trinity, or Richland). Also, Mantle supports only Windows currently.

      So yeah, any comparison to Glide is really unfair..

      …to Glide. Mantle’s a pale imitation.

        • nanoflower
        • 6 years ago

        Yep.. I would feel much better about the portability of Mantle to other vendors if AMD made Mantle work on more than GCN. They have a LOT of customers out there that don’t have GCN but do have AMD cards. Why not provide them some support for Mantle and thus show that Mantle isn’t that closely tied to GCN. Because while it may be possible to port Mantle to other platforms it is also possible that the API is set up in such a way that it would be inefficient on those platforms in comparison to GCN. I’m not saying that is the case but until we hear about Mantle running on other platforms we can’t be sure.

          • sschaem
          • 6 years ago

          For older architectures you have OpenGL and Direct3D.

        • WaltC
        • 6 years ago

        Heck, I even watched the developer presentations…the whole thing. They kind of fooled me with the “low-level”/”high-level” definitions used in Hawaii–but in the Mantle developer conference later, they said that those definitions really weren’t very good–so I kind of began to get an inkling, there.

        Well, what really murdered OpenGL years ago was the introduction of custom IHV extensions…before you knew it, ATi and nVidia OpenGL drivers started looking very proprietary, and OpenGL developers didn’t know “whose OpenGL” to build their game engines around. So naturally everything graduated to D3d as it was “neutral” in that regard, and even 3dfx dropped GLIDE and moved to D3d. (GLIDE didn’t kill 3dfx, the STB factory in Mexico killed them, but that’s another story.) Carmack–the greatest OpenGL proponent–eventually dropped OpenGL as his target API, citing custom extensions as the problem. Even he moved to D3d.

        I made the point last time, I believe, about how Mantle was restricted to the 7xxx series and up. But you see, that’s just forward-looking. Eventually, the 7000 series will be sitting where the 3000/4000-series GPUs sit today, and by then Mantle will cover everything AMD makes. By then, Mantle could cover both consoles, too. And by then, even nVidia could get in on the game if that’s in the cards.

        You’ve got to be forward-looking instead of backwards-looking if API progress is what you want. Just like Rome wasn’t built in a day, it took Microsoft years, literally, to exceed GLIDE functionality–not until DX8, actually. If not for the existence of nVidia, you could have argued persuasively that GLIDE was the future. Back before Microsoft announced D3d, there was tons of widespread speculation that GLIDE would become Windows’ official 3d API. Microsoft had nothing–far less than AMD is starting with today–but the company pressed ahead, anyway. That’s what you have to do come hell or high water if you want to get anywhere…;)

        Mantle is so far in advance of GLIDE that I’m surprised you haven’t read up enough on that…;) The stuff that Mantle will do, GLIDE never did, D3d doesn’t do, and OpenGL can’t do, either. Enter Mantle. Makes sense to me. There is really no other justification for Mantle, save that it provides developers with tools to address functionality that none of the other APIs support, but which developers will actively embrace because [i]they want it.[/i] Edits: typos

          • Narishma
          • 6 years ago

          Carmack hasn’t dropped OpenGL, even if he has said on a few occasions that modern D3D has become better. His last game, Rage, still used OpenGL. That was the major reason for all the launch problems on AMD hardware: AMD has crappy OpenGL drivers.

        • Theolendras
        • 6 years ago

        Ironic: if the training-wheels-and-bike analogy were applied to 3D, guess which kind of API would deserve it, the low-level or the high-level one? As for any meaningful judgement on Mantle, I’ll wait for the BF4 port.

      • Klimax
      • 6 years ago

      Your claim about Microsoft dropping the ball on DirectX lacks any evidence whatsoever.

      BTW: According to HardOCP, Battlefield 4 runs better on Windows 8.1 than on 7. (Both AMD and NVidia cards, but NVidia saw the bigger jump.)

        • Airmantharp
        • 6 years ago

        Mantle *IS* the evidence.

          • Klimax
          • 6 years ago

          No, it is not. AT ALL. Mantle is supposedly a solution, but first we need evidence, and that doesn’t come from cheap talk by the vendor that has the most to gain from getting its own (aka vendor-specific) API accepted.

          Sorry, but you have it backwards. First, prove that DirectX and OpenGL have significant overhead, and then propose a solution; otherwise you have just successfully argued in a circle.

            • Firestarter
            • 6 years ago

            So it isn’t enough that developers and CEOs from several (significant) companies are already aboard and touting the benefits of Mantle vs DirectX and OpenGL? Are you suggesting that we dismiss their statements because they’ve all been bought by AMD or something?

            • Klimax
            • 6 years ago

            No, it is not enough at all. Talk is cheap, and often they will support vendors in exchange for something (hardware, tech support, or early access). Some surely play it on both sides anyway. (Or all three…)

            Yes, I suggest dismissing baseless claims lacking any evidence whatsoever, because they are just hearsay; they lack foundation. Talk is cheap, action is much harder, yet we see no action. And any supposed behind-the-scenes action is only assumed and thus invalid; there is no way to verify anything.

            And if there is a bloody problem, why can’t those developers demonstrate it? Why can’t they showcase it, if AMD can’t? What prevents that? The answer is once again missing, like many other important things.

            Also, you get other people who are not impressed by it, like Carmack. So what then? Just talk and slides…

            ETA: If you want to see how it should be done, look at NVidia with FCAT showing stutter and getting AMD to fix it. (Or PCPerspective and TechReport working with FRAPS.)

            • sschaem
            • 6 years ago

            You have put your credibility at ZERO by saying this:
            “baseless claims lacking any evidence whatsoever, because they are just hearsay; they lack foundation”

            • Klimax
            • 6 years ago

            So where is that evidence? Where are the articles on it? Where is the data on this subject? There is nothing.

            Talk about credibility when you can’t prove me wrong on that. (It would be trivial, just some links to good articles; hell, it would have shut me up long ago, but to this date, still nothing.)

            Contrast this whole thing with stutter and the like, where there were complaints, some publications did the work to confirm it, and later NVidia provided hardware and software to measure it further and confirm it all, while also confirming that they were working on solving it.

            Contrast this to G-Sync, which solves real problem and was already demoed as a mostly finished thing.

            Contrast this to Intel’s PixelSync (https://techreport.com/news/24600/intel-gets-serious-about-graphics-for-gaming), where they described the problem, described the solution, showed how it changes the output, and got it into a game engine quickly, too.

            • sschaem
            • 6 years ago

            What sources? Clearly links to DICE presentations, AMD, and countless other developers don’t matter; it’s all lies, right?

            If you are in denial, simply do a search on nvidia & batching… and ask yourself why nvidia would write so much about batching if draw calls were not a performance issue. (There’s a rough sketch of the idea just below.)

            Also, Mantle’s draw call efficiency is just part of the reason why it’s considered a next-gen API.
            Resource management, dispatching (multi-core / multi-GPU), context caching, etc. are what make it so attractive to AAA game engines.

            Clearly you don’t trust the gaming industry & AMD, so let’s put this to rest until we can compare Battlefield 4 on Windows 7 + Kaveri running with Mantle vs. DirectX 11.
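
            For readers curious what that batching point looks like in practice, here is a minimal sketch using stock D3D11 calls; the buffer setup and the names (ctx, PerObjectData) are illustrative assumptions, not anything taken from DICE, nvidia, or AMD material.

            [code]
            #include <d3d11.h>
            #include <vector>

            struct PerObjectData { float world[16]; };  // per-instance transform (illustrative)

            // Naive path: one DrawIndexed per object. Thousands of these per frame means
            // thousands of trips through the API and driver, which is the CPU overhead
            // the batching guidance is about.
            void DrawNaive(ID3D11DeviceContext* ctx,
                           const std::vector<PerObjectData>& objects, UINT indexCount) {
                for (size_t i = 0; i < objects.size(); ++i) {
                    // ...update a per-object constant buffer for objects[i] here...
                    ctx->DrawIndexed(indexCount, 0, 0);
                }
            }

            // Batched path: all transforms are assumed to have been uploaded once
            // (e.g. to an instance buffer bound earlier), so identical meshes collapse
            // into a single instanced draw call.
            void DrawBatched(ID3D11DeviceContext* ctx, UINT indexCount, UINT instanceCount) {
                ctx->DrawIndexedInstanced(indexCount, instanceCount, 0, 0, 0);
            }
            [/code]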

            • Firestarter
            • 6 years ago

            I’m seeing plenty of action with Frostbite getting Mantle support. That’s not the kind of thing that DICE would just whip up in a week, and it’s scheduled to release in 2013.

            • Klimax
            • 6 years ago

            Slight problem: you never want to target a moving API/specification, or the result will suck. Ask Microsoft how well that worked for pre-reset Vista.

            So how about at least a bloody preview of the SDK? If it is good enough for DICE to include already, then it must be quite far along; but if it is still volatile, then how can it be used well in BF4 without just causing problems?

            • Fighterpilot
            • 6 years ago

            No, just your talk is cheap.
            Could you spam this thread with more replies, please?
            How about some anti-Mantle, “I am a genius coder and AMD is clueless” rants as well…. oh wait.
            Why aren’t you leading Nvidia’s GPU architecture design team, by the way?
            Just can’t spare the time, or is it a bit too low-level for you?

            • Klimax
            • 6 years ago

            I love strawman arguments; I love nonsensical replies that lack any substance whatsoever.

            If you don’t have anything better than nonsense to write, then just skip it altogether; nobody forces you to read my posts.

            • sschaem
            • 6 years ago

            The vendor and half a dozen veteran game developers, including Microsoft engineers who worked on and developed Direct3D.

            Take off the blinders!

        • sschaem
        • 6 years ago

        I have a feeling Mantle will work on Windows 7.

        And it won’t be a surprise when BF4 on Windows 7 + Mantle beats Windows 8.1 + Direct3D 11.

        So Windows 7 users especially should give a big thanks to AMD & DICE for what they have done for the platform.

    • Pax-UX
    • 6 years ago

    Great article, been waiting to see a good write up on Mantle.

    This could force Microsoft to figure out what it’s doing with PC gaming as they’ve taken their eye off the ball here. nVidia will fall in line very fast if we start seeing 25% better performance in benchmarks.

      • Pwnstar
      • 6 years ago

      I’m not so sure. It seems like Microsoft no longer cares about PCs. They have a console to sell you.

      • Klimax
      • 6 years ago

      Incorrect. See DirectX 11.2. And there are actually comparisons between 11.0 and 11.2 in Battlefield 4.
      HardOCP had an article on this.

      So Microsoft does do optimization, but most of the work already happened between DirectX 9 and 10. (Not only did the API change quite completely, but whole parts of the runtime were eliminated.)

    • nanoflower
    • 6 years ago

    One question I’ve had with Mantle is how well will apps play with each other? Now I can bring up two games at the same time and have a movie up and switch between them and DirectX handles it all for me. With Mantle and the game being so close to the metal I’m wondering if the games will have to take control of the computer. Will you even be able to tab out of a game if it uses Mantle or will you have to quit first to do something else on your computer?

      • Spunjji
      • 6 years ago

      Good question, although given how well the current “next-gen” consoles handle multitasking I’d say there’s good reason to believe that this won’t be a problem 🙂

        • Klimax
        • 6 years ago

        Because on the Xbox One it is done by Microsoft, through DirectX, with no AMD involvement. (It is rarely a matter of hardware.)

        Mantle is a completely separate effort, so I wouldn’t assume anything. And it might actually be up to the devs how well they support it. (Aka DirectX 9 style.)

          • sschaem
          • 6 years ago

          Mantle talks to the same layer that the OpenGL driver and DirectX driver talk to.
          It’s just another API that leverages the low-level driver; it’s just better designed for modern computers (multi-core & GPGPU).

          Microsoft could probably copy a few things from AMD’s software engineers to make Direct3D better;
          let’s hope they do, for Intel’s and Nvidia’s benefit.

            • Klimax
            • 6 years ago

            That doesn’t guarantee anything. It will most likely push quite a bit of burden onto the driver team to keep it working with DWM. It will be interesting to see how well they do in this regard.

            BTW: I am not convinced there is much for Microsoft to copy. Firstly, squashing the entire pipeline isn’t, in my opinion, great from quite a few perspectives, including software engineering (separation of concerns, for example).

    • Jigar
    • 6 years ago

    4 Pages of pure awesomeness, great info and great article, thanks Cyril.

    • Anonymous Hamster
    • 6 years ago

    Mantle sounds a bit like Gallium3D from Linux.

    Although the purpose of Gallium3D is to subdivide “thick” drivers such as OpenGL into a device-independent “state tracker” part and a device-dependent, hardware-specific part (there is also an OS-dependent part), it sounds like the APIs may have a bit in common. I wonder how Mantle’s APIs compare to the in-between APIs of Gallium3D.

    • UnfriendlyFire
    • 6 years ago

    Let’s see if AMD keeps their promise.

    If they do, I wonder what Nvidia and Intel have planned?

      • Pwnstar
      • 6 years ago

      Which promise?

        • UnfriendlyFire
        • 6 years ago

        That Mantle won’t be a vendor lock-in and will instead be open source.

          • slowriot
          • 6 years ago

          The lack of vendor lock-in does not equal open source.

          • jihadjoe
          • 6 years ago

          They never promised it would be open-source.

            • Narishma
            • 6 years ago

            Indeed, even OpenGL isn’t open-source.

    • puppetworx
    • 6 years ago

    [quote]DICE’s Andersson extrapolated... saying that, with Mantle, the CPU “should never really be a bottleneck for the GPU anymore.”[/quote]

    I can’t wait to see if this comes true in the next edition of [i][url=https://techreport.com/review/23246/inside-the-second-gaming-performance-with-today-cpus]Inside the second[/url][/i]. [sub]hint.[/sub]

      • Airmantharp
      • 6 years ago

      While I won’t take his word for it, that was probably the most exciting quote from the article.

        • Pwnstar
        • 6 years ago

        It certainly gave me a boner.

          • sweatshopking
          • 6 years ago

          CIV VI!!!

        • LastQuestion
        • 6 years ago

        Indeed.

        • Klimax
        • 6 years ago

        Doubtful. So far there is no evidence for their claims, so I would currently put Mantle into the category of “solution to an unproven problem.”

        If only they had at least released that SDK already, so some of us could begin analysis and such.

      • NeoForever
      • 6 years ago

      I was just about to bring that quote up.

      AMD’s strategy is brilliant!

      AMD is not competing very well against Intel CPUs. In terms of price/performance, how do you catch up with this huge company with TONS of R&D money and capabilities?
      You leverage your GPU side to make CPU performance less relevant. Then people won’t hesitate to buy lower-cost yet lower-performing AMD CPU lines.

        • Pwnstar
        • 6 years ago

        You put your finger on the clever bit. CPU performance has been “good enough” for a while now, except gaming. If AMD has solved that, it really is checkmate for Intel.

          • Airmantharp
          • 6 years ago

          Well, gaming and content creation :D. That’s the only place where my 2500k just doesn’t seem to cut it. Of course, we were running spreadsheets and surfing the web on 486s once too!

            • cynan
            • 6 years ago

            The original Pentium came out in 1993. While I’m sure some people slogged through web pages with a 486, I kind of doubt it was all that common. Heck, the first PC I regularly went on the net with had a Pentium II.

            Now if you’re talking BBS, that’s a different story.

            • Spunjji
            • 6 years ago

            I have distinct memories of my first AOL and Compuserve encounters on 486-based machines running Windows 95. Fun times. Pentium processors were still pretty pricey back then!

          • nanoflower
          • 6 years ago

          Checkmate? It’s a good move by AMD, but to suggest that this is the end for Intel is just crazy talk. There’s already been mention of Intel bringing Iris Pro graphics to more CPUs in the future, which will make Intel much more competitive with AMD’s APUs. I suspect that if Mantle does turn out to be popular, you could see Intel supporting it fairly quickly, especially since Intel is clearly putting a lot of effort into improving their IGP performance.

            • Pwnstar
            • 6 years ago

            Iris Pro costs a lot, though. Intel will have to drop the price if they really want to be competitive.

            • nanoflower
            • 6 years ago

            That seems to be part of the plan. Reading through what Intel hopes to accomplish with Atom by 2016 makes me think that we will see EDRAM on Atoms in that time frame. So Intel will be bringing down the cost over time. Which should be expected since they’ve proven that this Embedded DRAM can be a great boon to both the CPU and the GPU.

            • Klimax
            • 6 years ago

            I doubt we’d see Intel support Mantle, I’d say they’d push their own solution. (Likely DirectCompute/OpenCL based)

            Also, they still have that option of GPUs from NVidia…

            • the
            • 6 years ago

            We’ve already seen Intel try to push their own solution: x86 everywhere with Larrabee. The expectation was that programmers would simply move over to straight C and use software rendering. They then realized that a pure software solution for raster graphics wasn’t ideal. The performance and power advantages of certain fixed-function units are too much to overcome. There was also the issue of Intel providing drivers for legacy APIs that relied on fixed-function hardware.

            Still, I do wish that Intel had kept pursuing this market with Larrabee. The idea of using a standard CPU architecture for shader work is rather novel and has some strong advantages. I just don’t think that x86 was the architecture to try this idea with. ROP and TMU work is best done with dedicated hardware, and to quickly utilize those units the x86 architecture would need some additional extensions to hand off this workload. (Larrabee did add some implementation-specific extensions, but they were for its 512-bit vector unit, not for addressing coprocessors.) ARM, on the other hand, was designed explicitly to use coprocessors. I think a Larrabee-style project using ARM would be able to go further and hit the right mix of fully programmable shader/CPU cores and fixed-function units. There is nVidia’s Project Denver, and I fathom it could be nVidia’s response to Larrabee.

            • Klimax
            • 6 years ago

            Maybe in the future, after the next Xeon Phi. (It is supposed to go into a regular socket…)

            • NeoForever
            • 6 years ago

            I’m not sure I follow. Implementing Mantle on Iris Pro would make i7 IGPs more desirable and may help Intel stay competitive with APUs.

            But it does nothing for Intel when it comes to selling “CPUs” to people who aren’t interested in the IGPs.

            • anubis44
            • 6 years ago

            “I suspect if Mantle does turn out to be popular you can see Intel supporting it fairly quickly. Especially since Intel is clearly putting a lot of effort into improving their IGP performance.”

            I smell nice fat licensing fees for AMD!

      It would be lovely to watch Intel and Nvidia bleeding money to AMD for a change.

    • WillBach
    • 6 years ago

    Boo. This was easier to comment on when it was all idle speculation 🙁
    (My opinion, not my employer’s)

    • koziolek-matolek
    • 6 years ago

    They complained a lot about DX compiling shaders at its convenience and causing stuttering. Why not just use OpenGL 3+, where you explicitly invoke compilation? No need to create a new API.

      • mnemonick
      • 6 years ago

      With Carmack gone, I fear the days of OpenGL PC gaming are numbered.

        • Pwnstar
        • 6 years ago

        That’s right.

        • Canageek
        • 6 years ago

        On the other hand, SteamOS will probably use OpenGL, which might give it some life again.

          • Airmantharp
          • 6 years ago

          Hell, SteamOS might be the catalyst that gets Nvidia support for Mantle, if Valve pushes for it. Might even get Intel support too- would definitely give Intel’s ‘APUs’ a needed boost!

        • Theolendras
        • 6 years ago

        With Android, iOS, Nvidia Logan, SteamOS, I doubt OpenGL is going anywhere…

          • sschaem
          • 6 years ago

          Also OS X and web browsers via WebGL. It’s also the preferred API to use with OpenCL interop, and CUDA.

          Opengl is not going anywhere, that’s for sure…

      • Heighnub
      • 6 years ago

      How do you explicitly invoke compilation with OpenGL 3+?

      The shader compilation they are talking about happens at the driver layer. D3D compile functions compile the shader down to GPU-independent bytecode. The driver then takes this and compiles (and optimises) it for whatever GPU architecture is running on the system, and it is this process that the application has no control over.
      I’d imagine OpenGL works exactly the same way.
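
      For reference, a hedged sketch of the application-visible half of that two-stage process, using standard D3D11 calls; the entry point and variable names are made up for illustration:

      [code]
      #include <d3d11.h>
      #include <d3dcompiler.h>   // link against d3dcompiler.lib

      ID3D11PixelShader* CompilePixelShader(ID3D11Device* device,
                                            const char* hlslSource, size_t sourceLen) {
          ID3DBlob* bytecode = nullptr;
          ID3DBlob* errors = nullptr;

          // Stage 1 (application side): HLSL source -> GPU-independent bytecode.
          HRESULT hr = D3DCompile(hlslSource, sourceLen, nullptr, nullptr, nullptr,
                                  "main", "ps_5_0", 0, 0, &bytecode, &errors);
          if (FAILED(hr)) {
              if (errors) errors->Release();
              return nullptr;
          }

          // Stage 2 (driver side): the bytecode handed over here gets translated to
          // the GPU's native ISA by the driver at a time of its choosing, possibly
          // not until the shader is first used; the application can't control when.
          ID3D11PixelShader* ps = nullptr;
          device->CreatePixelShader(bytecode->GetBufferPointer(),
                                    bytecode->GetBufferSize(), nullptr, &ps);
          bytecode->Release();
          return ps;
      }
      [/code]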

        • Narishma
        • 6 years ago

        Nope. In OpenGL, there is no intermediate layer (GPU-independent bytecode). You give the shader source code to OpenGL and it calls the driver to compile it for your GPU. While this is an advantage in this case, the fact that each driver has its own GLSL compiler leads to many incompatibilities between vendors which don’t exist in D3D, where Microsoft supplies the HLSL compiler.
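
        A hedged sketch of the OpenGL path described here, assuming a context and a loader (GLEW, for example) that exposes the GL 2.0+ shader entry points; the driver receives the GLSL source directly and runs its own compiler:

        [code]
        #include <GL/glew.h>   // or any loader exposing the GL 2.0+ shader functions

        GLuint CompileFragmentShader(const char* glslSource) {
            GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
            glShaderSource(shader, 1, &glslSource, nullptr);  // source goes straight to the driver
            glCompileShader(shader);                          // the driver's own GLSL compiler runs here

            GLint ok = GL_FALSE;
            glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);    // vendor compilers may disagree on edge cases
            if (ok != GL_TRUE) {
                glDeleteShader(shader);
                return 0;
            }
            return shader;
        }
        [/code]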

          • Heighnub
          • 6 years ago

          Aha! Thanks for the info.

          • Klimax
          • 6 years ago

          Correct for DirectX.

        • Klimax
        • 6 years ago

        Not true. Compilation to bytecode was in D3DX, but that whole library was removed, and the recommendation is to use the standalone compiler.

        As far as the bytecode is concerned, it is treated by DirectX and the OS as an opaque binary, and only the driver will do anything with it, but that ain’t compilation.

          • Heighnub
          • 6 years ago

          The standalone compiler works in exactly the same way – it produces byte code/intermediate code/”shader object” files that you send to the driver via a D3D call when your application is running.

          The driver does not necessarily compile and optimise this byte code as soon as it gets it (via above call) – it could wait until just before it’s actually needed by the GPU, which could be either later in the frame or many frames later. This is what developers at the summit were complaining about.

            • Klimax
            • 6 years ago

            I don’t think that is correct, because it violates several principles in graphics, like no unnecessary expensive calls in the core logic.

            And if that is true for a driver, then the team is clueless on this. (Also, I haven’t seen such an effect.)

            Frankly, evidence for this claim and problem is missing, and thus it should be considered irrelevant until proven. Any other treatment will likely send you down the path of premature optimization, most likely solving the wrong or a nonexistent problem.

            • Heighnub
            • 6 years ago

            “I don’t think that is correct, because it violates several principles in graphics, like no unnecessary expensive calls in the core logic.”

            What is this core logic in the driver you speak of?

            “And if that is true for a driver, then the team is clueless on this. (Also, I haven’t seen such an effect.)”

            So, because you personally haven’t seen this problem affect your game engine with similar functionality to Frostbite 3, there is no problem?

            Are you really in a position to claim that the driver teams from every IHV are clueless when you don’t even understand what is happening at the driver level(s)?

            • Klimax
            • 6 years ago

            Maybe I should have called it properly (I was too quick to reply): a hotspot, that is, inside the performance-critical code path (aka the core logic for rendering). There are hotspots in game code, and then there are usually hotspots in DirectX and the drivers, as each works on its part; the driver ultimately has the job of finishing the processing of the workload sent by the game and pushing it to the GPU.

            Not my engine. I haven’t seen any such slowdowns in the usual engines and games. Until now, nobody mentioned this “problem,” nor was it observed. Nobody talked about it. There is no evidence it exists.
            (Funnily enough, I do happen to work on my own engine, but it is far from finished and wouldn’t show this effect anyway, due to the low complexity of the shaders I’ve written.)

            Their single example (DE:HR) doesn’t have this confirmed. There is no published analysis on it.

            So far no other game/engine out there suffers from it…

            ETA: BTW: if it were a thing, then we would see different limitations and CPU effects in games/engines like Crysis 3 or both Metro games.

            • Heighnub
            • 6 years ago

            Why would or should there be major analysis of this? Who is going to do this analysis, and why would they do it?
            This is something that affects at least some very sophisticated game engines, according to… the developers of those engines. The general public, and even hobbyist developers like yourself, don’t care about this sort of stuff; it is intended for advanced graphics/engine programmers who are writing games that get released to the public. Your DirectX isn’t going anywhere, so why do you care whether or not Mantle is picked up by whoever wants to pick it up?

            How/why would you see such slowdowns?
            This is not something that can be measured unless you have an instrumented version of the game/engine with advanced profiling tools, so even if some random person wanted to look at this, they couldn’t.
            Sure, you can perhaps look at the CPU usage of the DX UMD’s threads, but you wouldn’t be able to distinguish the different functions that are being performed at any given time.

            Here’s a thought for your weekend: look at the CPU benchmarks of BF4, Crysis 3, and Metro Last Light. With slower-clocked/fewer-core CPUs, the game is limited by the CPU. Don’t you think it would be better if the application developer could look at how his frame is structured, find a time in the frame with little CPU use, and force the driver to compile the shader there, thereby improving CPU efficiency? (A rough sketch of that idea follows below.)

            Btw, some info on this JIT compilation from AMD:
            [url]http://amddevcentral.com/afds/assets/presentations/2902_2_final.pdf[/url]
            And if you’re wondering whether it’s only AMD that does this:
            [url]http://www.linkedin.com/pub/rahul-joshi/3/978/960[/url]
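
            As a purely illustrative aside, the “compile it at a quiet point” idea above is sometimes approximated today with a warm-up pass: bind each shader set once during loading or a low-load part of the frame and issue a throwaway draw, in the hope that the driver does its deferred translation then. Whether a given driver actually front-loads the work this way is not guaranteed, and the names below (WarmUpShaders, ShaderSet) are hypothetical.

            [code]
            #include <d3d11.h>
            #include <vector>

            struct ShaderSet { ID3D11VertexShader* vs; ID3D11PixelShader* ps; };

            // Bind each shader combination and issue a trivial draw at a time of the
            // application's choosing, so any driver-side JIT work hopefully happens
            // here rather than the first time the effect appears mid-game.
            void WarmUpShaders(ID3D11DeviceContext* ctx, const std::vector<ShaderSet>& sets) {
                for (const ShaderSet& s : sets) {
                    ctx->VSSetShader(s.vs, nullptr, 0);
                    ctx->PSSetShader(s.ps, nullptr, 0);
                    // Render target, input layout, and a dummy vertex buffer are assumed
                    // to have been set up for this warm-up pass beforehand.
                    ctx->Draw(3, 0);
                }
            }
            [/code]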

            • Klimax
            • 6 years ago

            How about looking at recent history: stuttering and corruption (FRAPS/FCAT, analysis by TechReport and PCPer + NVidia). There was a mountain of evidence; there was talk for years about the problem.

            Now, with this supposed overhead and mistiming of shader adaptation, we have zero. There is no analysis, only unbacked claims by AMD. There is literally no confirmation of any of it.

            Where there is nothing, maximal skepticism is required.

            There is no shred of evidence; there is no documentation of the claimed things and slowdowns. There is a BIG BLOODY VOID.

            [quote]This is not something that can be measured unless you have an instrumented version of the game/engine with advanced profiling tools, so even if some random person wanted to look at this, they couldn’t.
            Sure, you can perhaps look at the CPU usage of the DX UMD’s threads, but you wouldn’t be able to distinguish the different functions that are being performed at any given time.[/quote]

            Bad news: you don’t need an instrumented version of the engine. Intel VTune can get you there. Well, it is an expensive tool, but so is the setup for FCAT. And the funny thing is, I already have runs analyzed, and it doesn’t look like there is a problem. (Though I didn’t measure Deus Ex, but Crysis 3.)

            And there is my whole point. Nobody has yet proved there is a problem, thus nobody can even say they have a solution to it, as it is neither understood nor verifiable. Look again at stutter and how it was handled, and tell me again why we don’t have that for this.

      • Klimax
      • 6 years ago

      Err, you can do it in DirectX too. In fact, I don’t even know what the hell they are describing, because it was always an explicit API call to the D3DX library or the standalone compiler, and if an idiotic developer did it in the middle of the rendering loop, then it is hardly the fault of the API.

      In fact, there is no compiler embedded in new versions of DirectX anymore (except in older runtimes, which are included for compatibility).

      So that claim is nonsense from the start.

        • Heighnub
        • 6 years ago

        “I don’t even know what the hell they are describing”

        See my reply above – there are two stages of compilation, one by the application and one by the driver.

    • Airmantharp
    • 6 years ago

    Thanks Cyril, we’ve been waiting for your report on Mantle!

    And I’m definitely more excited about it now and am looking forward to seeing the comparisons in BF4, hard as that is to properly benchmark.

    Still, there’s one thing that worries me: AMD’s new GPUs are running at their thermal ragged edge. Might we see more throttling if Mantle enables even more strenuous workloads to be shuttled to the GPU?

      • Bensam123
      • 6 years ago

      lol, you had to bring up the 290 cooler again. “thermal ragged edge” (TM), AKA its normal operating temperature.

      Furmark or OCCTPT would probably give you a pretty good glimpse of what running your GPU at 100% would be like.

        • Airmantharp
        • 6 years ago

        It’s not hard to imagine the scenario, though. Mantle may well make games smoother by reducing CPU load and allowing developers to properly control the rendering pipeline, and that’s the stuff we’ve already heard about. But the comment about ‘there’s all this compute stuff just sitting idle while we’re rendering that we could use’ seems to discount the problem that AMD is running their cards as hot as they can go. So sure, you could do compute stuff while rendering with Mantle, but extra work doesn’t come for free: you’ll have to give up rendering performance if you schedule compute or other parallel processing into the workload, just by virtue of the GPU having to downclock to keep up with the greater power draw and heat output of the now heavier workload.

        And I didn’t say anything about the cooler :).

          • Pwnstar
          • 6 years ago

          I get your point, but it’s wrong. 95c is the 290x’s operating temperature. If you put a better fan on it, the GPU will increase its performance until it is heated up to 95c again.

          Also, Mantle won’t cause it to reduce its performance due to throttling. Mantle will allow you to get 20% more performance out of it, so performance goes up, not down.

            • Airmantharp
            • 6 years ago

            You’ve missed it :).

            My point is that AMD *IS* operating at their thermal limit- meaning that their top-end GPUs can’t get any hotter. 95c is the limit, just from ‘lighter’ workloads that are single-minded. Using optimizations found in Mantle will increase the workload, meaning that to stay at 95c, the cards will have to down-clock.

            • Pwnstar
            • 6 years ago

            OK, it can’t get any hotter, but you can switch to a more efficient API and get more FPS out of it.

            • Airmantharp
            • 6 years ago

            Mantle will surely be more efficient per clock; that is the point, after all. But will it be more efficient per watt?

            That’s the issue I’m looking at. If Mantle increases the thermal load for the same performance, well, performance has to drop to stay in line. And hell, even if it does, it might still be faster; that’s what we’re waiting to see!

            • Voldenuit
            • 6 years ago

            [quote]My point is that AMD *IS* operating at their thermal limit- meaning that their top-end GPUs can’t get any hotter. 95c is the limit, just from ‘lighter’ workloads that are single-minded. Using optimizations found in Mantle will increase the workload, meaning that to stay at 95c, the cards will have to down-clock.[/quote]

            Pretty much the way I see it. A more efficient API will let the GPU do more work per clock, especially in the case of being able to spawn more processes, like compute. However, performing computational work costs energy and produces heat, so Mantle will not help much, if at all, on a GPU that is thermally constrained (be it Hawaii or a hypothetical port to, say, Fermi).

            Mantle sounds like it will be a great tool for extracting more performance out of GPUs, but it won’t help improve performance/watt, except in non-GPU-limited scenarios. For example, a vsynced application may be able to run the same number of operations, or more, in fewer clocks, allowing the GPU to idle for longer.

            • Airmantharp
            • 6 years ago

            From your example perspective, AMD adopting G-Sync makes even more sense: if you control the framerates, you can get the best performance out of the GPU by balancing the load, etc.

        • grndzro
        • 6 years ago

        Bah, a bit of high end TG and some creative fan shroud cutting can solve the heat issue.

        • USAFTW
        • 6 years ago

        Knocking a particularly good value card could mean one got a bad deal. 780 for six hundred fifty bucks.
        But seriously (bro), if mantle is going to offset some serious CPU bottlenecking, the 290s are gonna melt.

      • Srsly_Bro
      • 6 years ago

      wait for 3rd party options with better coolers or turn your fan up. what’s all the qq for bro? if you want something quiet get a ps4.

        • Airmantharp
        • 6 years ago

        Just raising the point- no ‘qq’ here :).

        I’m actually quite excited about Mantle, but I try to temper that with a realistic approach to understanding its limitations. And yeah, we’re still all waiting for the ‘real’ 290 series to stand up!

    • Bensam123
    • 6 years ago

    Wow, really good write up, although some parts of it went too far in depth where I didn’t know what they were talking about.

    This is definitely good news, and it seems like Mantle is starting to live up to the hype, not by AMD’s hand, but by the developers there talking about it.

    The next couple of years will be shaky ground where we don’t know exactly what will happen. We’re in the territory where developers have to start supporting it or it’ll die out and we won’t see any adoption. If developers get stuck in their ways of simply spitting out software for consoles, it won’t be a good thing. However, if they start adopting this, it may be another hit to consoles (and a reason not to buy one in the first place or develop games for them), as there wouldn’t be the magical argument of console titles being closer to the hardware and able to extract more performance for less.

    In general this all sounds really good. Of particular interest is the ability for GPUs to execute parallel tasks. Although they were talking particularly about executing GPU tasks, if game developers start using a bit of compute code it may be entirely possible for part of the CPU workload to be offloaded to the GPU (like physics). So if you’re stuck on a old processor with a new video card, part of the workload could be balanced with that new video card. Although this muddles the water as far as system requirements goes, it would overall be great for everyone.

    The cost of adopting the API will only improve, too. Developers have been using DX for over a decade now. An increase of 10% in development time to implement an API that they’re extremely enthusiastic about will only get better. As developers become more familiar with the API and all its quirks, it’ll definitely be easier for them to deal with it. That doesn’t even take into account game developers who simply license engines (a lot of the work would already be done for them) and straight ports, which are simply designed to be compatible with Mantle. Presumably a straight port would take almost no development time.

    Hopefully Nvidia will hop on this train and cease its selfish ways, if only momentarily. I could see Nvidia shaking its head at this, spitting out its own API in a year, and then trying to force it on developers in order to increase its adoption to ‘compete’ with AMD’s solution, even though there is no reason to compete in this case. That is really a big part of getting this thing off the ground. While people may start buying AMD if enough software developers support Mantle and it becomes apparent how much of a performance improvement that will yield, it may also lead to software developers not supporting Mantle because the install base is small.

    This also goes for Sony and Microsoft, who should also support this, but since they essentially ARE consoles, they can do whatever they want and throw their weight around. I could see Sony supporting it and MS begrudgingly adding support 2-3 years down the line when it becomes apparent (IF) Mantle won’t be going away. Since MS would effectively help kill off its own API, this turns into an ego thing for them, which we all know MS is bad with. They would probably be less likely to support it than Nvidia is, simply because they’re so stuck in their ways and image means so much to them. A half-assed version of DirectX 12 will probably be coming in a year or two.

    All of this is really exciting, and I definitely look forward to what is to come! The next year in computers and gaming in general will be really great. We may start to see game developers shed their old shell of simply writing poop games for consoles and really start to make amazing titles if they get caught up in this. Steam Machines will help ease this transition (I’m sure Valve will get on this), and eventually we’ll end up with everyone back on PCs, with new consoles only being HTPCs, which is something I definitely also look forward to.

      • mnemonick
      • 6 years ago

      A lot of what happens next is up to AMD. Nvidia certainly hasn’t shunned open-source APIs in the past, e.g. they’ve had good to excellent OpenGL support for a very long time (since the Riva TNT, anyway).

      If AMD can do two things: one, demonstrate the improvement in performance in a shipping title (which it seems will happen soon); and two, release Mantle as an open-source API allowing Nvidia, game developers (and perhaps even Intel and Microsoft) to help build it out and extend it, then we could be looking at the dawn of a whole new level of performance and visual quality.

      And if the performance gains are as good as they claim, it also means that PC gamers with merely ‘decent’ systems could benefit the most – imagine being able to game at 1920 x 1080 with near ‘ultra’ settings on a sub-$200 CPU and a $150 graphics card. 😀

        • Bensam123
        • 6 years ago

        Yes, it will be essential that BF4 performs well, for first impressions. I can only imagine part of the issues with BF4 right now are due to Mantle being worked into it. They’re probably working out tons of bugs in the background, which is why it seems so buggy. We just can’t tell exactly what is happening because they aren’t really talking about it.

        Yeah, a lot of people don’t think about it, but this also goes down as well as up, meaning it will breathe life into older or low-end hardware that may not otherwise be usable. All of which is good.

      • Pwnstar
      • 6 years ago

      Already exists. NVAPI:
      [url]https://developer.nvidia.com/nvapi[/url]

      [quote]I could see Nvidia shaking its head at this, spitting out its own API in a year[/quote]

        • Narishma
        • 6 years ago

        NVAPI isn’t comparable to Mantle in any way.

          • Pwnstar
          • 6 years ago

          In a general sense, it is. A lower level API optimized for certain graphics cards. I get your point that Mantle is way better at what it does, but that wasn’t the comparison I was going for.

            • Bensam123
            • 6 years ago

            Aye, it’s not remotely close to Mantle. There is a reason game developers haven’t adopted it. NVAPI seems like something more for compute tasks or the like.

          • Klimax
          • 6 years ago

          Actually correct. It is mostly for managing stuff. (From public SDK)

          CUDA is the closest thing to Mantle. (That it doesn’t have its own rasterizer is only a small inconvenience, as there is a published open-source CUDA rasterizer under NVidia Research.)

          And IMHO it is a bit better, because it is already proven technology, quite well debugged, with a significant toolset (+ 3rd party), and we know it can scale across various GPU series. (Adapting CUDA code, mainly scheduling-wise.)

            • Pwnstar
            • 6 years ago

            So 15 game companies using Mantle doesn’t make it proven?

            • nanoflower
            • 6 years ago

            We don’t know just how good/bad their experiences are. Only a few people are talking at this time. Also we don’t know just how much of a difference it will make in performance or what the drawbacks of using Mantle might be.

            • Klimax
            • 6 years ago

            Just to add context.

            This is an ages-old debate, low level against high level, and high level has always won, because it made it easier for less skilled programmers to enter, while low-level APIs were favoured by those who knew what they were doing and were able to get the maximum out of the hardware.

            What always happens is that even those highly skilled developers will eventually introduce frameworks to eliminate mundane and repetitive tasks, and thus a layer gets re-added. Meanwhile, frameworks get created for the less skilled too, to make the API easier, and we are back at the beginning, because you no longer have any advantage from the low-level API. That is why consoles got away with low-level programming: they were static, so even engine providers could do significant optimizations.

            An infinite cycle… it cannot be avoided.

            • Klimax
            • 6 years ago

            No. We haven’t seen a single output; we haven’t seen any backing evidence, and without evidence it doesn’t count for a thing. For example, I too am interested in the SDK, but do I think it is solving a real problem?

            They might think so, but there is no evidence. And I recall Carmack not being impressed, so…

      • Narishma
      • 6 years ago

      There’s no reason for Sony or Microsoft to support Mantle on their consoles. They already have all its benefits and many more through their proprietary APIs.

        • Bensam123
        • 6 years ago

        Support a wider range of games, improving performance, allowing easy ports. Sometimes it’s not always about ‘me me me’. Sometimes a bit of ‘you helps me’. Other times it’s just about doing what is right.

      • HisDivineOrder
      • 6 years ago

      I suspect nVidia will double down on OpenGL (for SteamOS) and DirectX. They don’t seem to think having multiple proprietary standards really makes sense.

      And they, like anyone with sense, doubt a corporation that promises it “may someday” go open-standard with its currently highly proprietary technology, which supports only unreleased APUs and only the last two years’ worth of discrete GPUs from that same company.

      Nothing in what they said in that article is a promise of open standards. There are vague statements of “I don’t see why not.” That may be true, yet they may “see why not” tomorrow, and that wouldn’t make it a lie, because they didn’t see it today.

      Until they attach a timeline to when 3rd party support is a priority, you’re doing yourself a disservice to think it’s going open standard because AMD themselves haven’t even promised it.

      “I don’t see why not” is corporate for, /shrug.

        • Pwnstar
        • 6 years ago

        I’d accept proprietary if it meant I was getting 25% more performance.

          • nanoflower
          • 6 years ago

          Except that means you only get that benefit so long as you use AMD hardware. Should you move to Nvidia or Intel, you lose that benefit. Which is a reason AMD might not make it easy for Intel or Nvidia to take up Mantle (should they decide they wish to do so).

        • Bensam123
        • 6 years ago

        It didn’t sound like ‘I don’t see why not’ as much as ‘if they actually decide to develop their products for this standard’. AMD doesn’t control Nvidia, and vice versa. If Nvidia decides they don’t want to support Mantle, it’s not going to happen. It doesn’t need to be an open standard to be supported, either. DirectX isn’t an open standard.

        As for a proprietary standard such as Mantle making sense, well, that was in the review.

      • Klimax
      • 6 years ago

      [quote]This is definitely good news, and it seems like Mantle is starting to live up to the hype, not by AMD’s hand, but by the developers there talking about it.[/quote]

      Err, how is it living up to anything? Mantle is still only slides, nothing more. Neither the SDK nor the docs are published, so their claims are without any evidence, even about their own Mantle. They didn’t even publish a bloody tech demo with it, showcasing it. (Or if they did, I can’t find it.)

      Nothing exciting, just another API...

        • Pwnstar
        • 6 years ago

        I think Bensam is referring to all the games that will be using Mantle. A month ago, it was just claims on slides. Now, it’s a bunch of games that have added 10% to the development time/budget for these companies. They must have good reason for doing so.

        • Bensam123
        • 6 years ago

        Slides with a lot of enthusiastic developers talking about it while working on it in the background. Just because all you see is slides doesn’t mean that’s all there is to it.

          • Klimax
          • 6 years ago

          Reportedly, the earliest public stable version is scheduled for 2014. They are talking about unfinished stuff, which is unproven and whose promise lies entirely in promises made by AMD (whose track record is weak).
          That means they are investigating, and maybe coding against, a moving target. Which is obviously a great idea.

          Frankly, they are irrelevant, because there is nothing there; that they have convinced some devs means nothing, because it doesn’t say anything about validity.

      • anubis44
      • 6 years ago

      “I could see Nvidia shaking it’s head at this and spitting out it’s own API in a year and then trying to force it on developers in order to increase its adoption to ‘compete’ with AMDs solution, even though there is no reason to compete in this case.”

      Nvidia would utterly fail if they tried this. One of the main reasons developers are salivating over Mantle is that it very dramatically reduces the difficulties in porting games between consoles and the PC. Since all three game consoles are now powered by AMD exclusively, no such Nvidia API would fly. The developers would look at it and see no benefit, except increased development costs. Nvidia would have to pay them very generously to develop for such an API, and the developers would use as few of its features as they could get away with.

      Look, AMD has simply checkmated Nvidia when it comes to gaming, and it’s only a short matter of time before this becomes obvious. Jen-Hsun had his chance when AMD offered to merge with Nvidia back in 2005, but of course, Captain Ego insisted on being the Chairman of the combined AMD-Nvidia, which completely scuttled the deal. AMD bought ATI instead, and now Nvidia is up $hit’s Creek without a paddle.

      Serves them right. I haven’t got a stitch of sympathy for them. This is a taste of their own medicine, only it’s a whole distillery’s worth of medicine compared to PhysX’s proprietary teaspoonful. I’ll simply enjoy watching AMD turn the screws on them.

    • dme123
    • 6 years ago

    So basically it’s like GLIDE, RRedline and various other proprietary technologies that weren’t cross platform, except without anything like the market share that 3DFX had when GLIDE was still a thing.

    This sort of crap is exactly the problem OpenGL and Direct3D were developed to solve. I sincerely hope that cash strapped AMD haven’t spunked too much of their borrowed cash into this.

      • Narishma
      • 6 years ago

      Glide was cross-platform. It ran on both Windows and Linux.

      Edit: And even DOS!

        • nanoflower
        • 6 years ago

        But Glide was proprietary. For now Mantle essentially is proprietary too since AMD says they aren’t going to open it up to other vendors until later. Plus there’s no buy-in from any other vendor. That may change once we start seeing games come out that are using Mantle (if it really does make a significant difference in performance without giving anything else up) but that hasn’t happened yet.

          • Firestarter
          • 6 years ago

          [quote]For now Mantle essentially is proprietary too since AMD says they aren’t going to open it up to other vendors until later.[/quote]

          But at least they have acknowledged that for Mantle to thrive, they need to encourage adoption not only across OSes but also across vendors. I bet they want to charge Nvidia a pretty penny for it, though. The best we can hope for, I guess, is that Nvidia and AMD cross-license Mantle and G-Sync.

            • Klimax
            • 6 years ago

            No. NVidia’s condition would be CUDA for Mantle, if they want it at all, and there is so far no indication they’d be interested at all.

            • Airmantharp
            • 6 years ago

            A cross-license of Mantle and G-Sync would be perfect; both technologies appear to be essential for the future development of GPUs.

      • WaltC
      • 6 years ago

      Before you go off hog wild on your “cash strapped” scenarios (which are amusing, btw) please realize AMD has $1B cash on hand *and* $.5B of other people’s money to play with–not counting all of the company’s other sources of income (like gross & net profits, investments, etc.) But I guess you were too busy to read the last financial statement….

      You and I are “cash strapped.” By contrast, AMD is as rich as Croesus.

      Apparently, as well, you also are ignorant of the fact that AMD isn’t *charging* anyone any money for this. But I’m glad you’re so concerned about AMD’s financial welfare…trust me, if they need your expertise they’ll call you. Best thing you can do now is just sit there by your phone and …wait for the call!….;)

        • HisDivineOrder
        • 6 years ago

        They are hemorrhaging money. A lot of the money you speak of is tied up in contracts they can’t easily get out of (ie., deals with GloFo to buy a certain number of chips that they haven’t met yet) and in leasing options they’re increasingly getting to make themselves look more solvent than they really are.

        You should brace yourself for a rude awakening. 3dfx didn’t look like it was going under until, poof, one day it did.

        • sschaem
        • 6 years ago

        Well, their are 2+ billion in debt. and pay about 250 million a year in interest, and is flooded with ‘one time charges’, quarter after quarter.. next one. a 200 million fine to pay Global Foundry.

        I’m glad my finances don’t look like that.

          • dodozoid
          • 6 years ago

          Where are you from, sschaem? I hope not from an English-speaking country… ’cause only that could excuse: “Well, their are”

          [url]http://9gag.com/gag/5815722[/url]

          Otherwise you are probably right... unfortunately

    • Chrispy_
    • 6 years ago

    [quote]Mantle has done a good thing by showing that “there’s something wrong with current APIs.”[/quote]

    ...and coupled with AMD’s hinting that Mantle isn’t artificially vendor-locked and may well end up as an open standard when it’s done, that makes me glad. AMD really is the underdog to fight for. I don’t care who wins the performance crown, as long as projects like this that aim to benefit everyone keep coming.

    It was AMD who made Intel adopt 64-bit, it was AMD who proved that an IGP didn’t have to be completely awful, and it is AMD who historically keeps Nvidia’s pricing in check for us consumers.

    It’s just such a crying shame that AMD have given up on CPU performance, because Intel are as useless as damp socks when there’s no competition.

      • Bensam123
      • 6 years ago

      Welcome to the club.

        • Chrispy_
        • 6 years ago

        Heh, I joined the club 13 years ago with Socket A. I just wish the underdog would stop being so far under.

          • Airmantharp
          • 6 years ago

          Hell, now that Intel has opened up their foundries, how hard would it be for AMD to make something that isn’t necessarily better, but different?

          Steamroller on Intel 14nm anyone?

          (I’d sign up now if there was a list :D)

      • grndzro
      • 6 years ago

      AMD hasn’t given up the performance crown. It has retaken it.

      With the PS4 and XB1 based on low-power 8-core x86 chips, games will not be reliant on IPC and will be tailored for multithreading. AMD’s Mantle moves the bar even further with better multitasking.

      AMD’s 8-core chips will outperform Intel’s 4-core ones in future titles.

      Add an AMD GPU to the mix and it is truly game changing.

        • Klimax
        • 6 years ago

        They won’t. Jaguar cores are significantly weaker than the cores in Core. (:D)

        I am sure a Core i5 won’t have any trouble with code written for Jaguar, and furthermore I doubt we will see Jaguar-optimized code on the PC side. More likely we will see a general recompile (alongside changed calls to OS APIs), which will likely eliminate or dwarf any Jaguar-specific code.

      • Yeats
      • 6 years ago

      The problem is the underdog eventually grows into a bully.

        • Meadows
        • 6 years ago

        Let them. Then we’ll root for intel.

          • Airmantharp
          • 6 years ago

          I wonder if the AMD fans realize that we started rooting for Intel when the Core 2 became a reality, and haven’t stopped yet?

        • anubis44
        • 6 years ago

        I’d LOVE to see AMD bullying nVidia and Intel at the same time. It would serve them both right, arrogant a$$e$. And it would be the kind of dramatic role reversal worthy of stage and screen!

      • Modivated1
      • 6 years ago

      All these AMD benefits we, the consumer community, have received are great! However, if AMD themselves don’t start receiving some real benefit from their developments, then pretty soon there will be no AMD left to take the brave actions that challenge the tech world.

      I want AMD to get a head start with this one before the rivals jump on the boat. Get the founder’s royalty they deserve.

    • anotherengineer
    • 6 years ago

    “although very skilled developers can purportedly manage 10,000 or more.”

    Big article and only 1 ‘purportedly’ ?!?!?!

      • Airmantharp
      • 6 years ago

      There can [s]only be[/s] [i]be only[/i] one. -Corrected, because nothing less would do.

        • Pwnstar
        • 6 years ago

        Wrong. The phrase is “There can be only one”. It’s almost like you aren’t a geek at all.

        • Srsly_Bro
        • 6 years ago

        You got nerd owned!! How does it feel? Take the walk of eternal shame!

        • entropy13
        • 6 years ago

        u wot m8?

    • ssidbroadcast
    • 6 years ago

    [quote]At present, Mantle support is limited to Windows systems with graphics processors based on AMD’s Graphics Core Next architecture.[/quote]

    Boy, that is a razor-thin slice of the market to be making such a large R&D investment on a new API. I’d say it’s about as (relatively) thin as the Earth’s crust.

      • ClickClick5
      • 6 years ago

      Sounds like support for another proprietary software from a competing company…you know, the facts about Mantle are as solid as the [i]physics[/i] of the earth. No need to claim otherwise.

        • ssidbroadcast
        • 6 years ago

        I’m not sure what you’re implying.

          • Pwnstar
          • 6 years ago

          He’s talking about nVidia’s PhysX.

            • ssidbroadcast
            • 6 years ago

            … but I’m not an nVidia fanboy?

            • Pwnstar
            • 6 years ago

            Click didn’t say you were. I think his point was your description applies to nVidia as much as it does to AMD.

            • ClickClick5
            • 6 years ago

            Bingo! Both sides are throwing rocks at each other now.

      • derFunkenstein
      • 6 years ago

      This is addressed in the article. And it’s huge compared to other niches like stereoscopic 3D.

        • dme123
        • 6 years ago

        Yeah and look how that caught on….

      • Bensam123
      • 6 years ago

      Hai, I’m chicken and egg. If you don’t start somewhere, nothing ever happens. The GCN install base isn’t razor-thin, but putting that aside, if people don’t start doing things that may not be beneficial in the short term for the sake of a long-term outlook, things never get better. That’s just outright being selfish to pad your bottom line.

      Battlefield 4 is definitely a good start to all of this. If it’s a success I can imagine the impact on adoption will be huge.

        • ssidbroadcast
        • 6 years ago

        Yeah that’s a good point. I’m certainly all for a more efficient API than D3D or OGL. If they want adoption to really take off they should start supporting other platforms/OS’s ASAP.

      • Srsly_Bro
      • 6 years ago

      The Earth’s crust relative to what, bro? I’m not sure what you’re implying.

        • Pwnstar
        • 6 years ago

        Yeah, I think the Earth’s crust is pretty big. It’s relative to what you are comparing it to, I guess.

          • ssidbroadcast
          • 6 years ago

          (The Earth’s crust, relative to the rest of its mass, is about as thin as the skin of an apple.)

      • grndzro
      • 6 years ago

      With the number of developers jumping on board, Nvidia will by now be working on tying into Mantle. They have no choice. There’s not enough time to come up with an alternate solution.

      With Activision, Blizzard, EA, Cloud Imperium, Square Enix, Oxide, and others on board, neither Nvidia nor Microsoft can ignore Mantle. It will eventually work with ALL hardware.

        • ClickClick5
        • 6 years ago

        Or the perfect storm! If you want your cape in Batman to be all wavy, go Nvidia. If you want to game with Mantle, go AMD. How hard is this?

      • Meadows
      • 6 years ago

      How are you able to make a comment that gets 50 thumbs up only to say something so silly a few days later?

        • Airmantharp
        • 6 years ago

        He works hard at keeping his score balanced 😀

        • ssidbroadcast
        • 6 years ago

        It’s all in the eye of the beholder. People think the comment is spreading FUD, but I was just making an observation. Believe me, I *want* Mantle to succeed. It just has a lot of hurdles to jump.

      • Theolendras
      • 6 years ago

      They probably cross-developed it with the Xbox API, so the R&D cost might not be that high.
