First look: AMD Mantle CPU performance in Battlefield 4

Since it was first announced last fall, we’ve been waiting with anticipation for our first up-close look at Mantle, the new graphics programming layer being produced by AMD in collaboration with the folks at DICE. Mantle’s aim is to offer game developers access to graphics hardware in a way that fits with how today’s GPUs really work. Mantle promises to reduce the overhead involved in producing each frame of animation, potentially unlocking higher performance and smoother gameplay than the standard PC graphics API, Microsoft’s Direct3D. If you’re unfamiliar with Mantle and the hype around it, the best starting place is Cyril’s intro to the topic.

Taking up the Mantle

I think it’s safe to say Mantle has incited an unusual amount of interest for a programming interface. PC gamers and game developers alike are intrigued to see progress on this front, probably because we all have a sense that glassy smooth animation in PC games seems to require beefier hardware than it should—and that PC gaming performance hasn’t improved as rapidly as PC hardware has.

Naturally, then, we set out to test Mantle performance as soon as DICE released a patch late last week for Battlefield 4 that adds support for the new API. We had to wait a little longer for AMD to release the first Mantle-enabled graphics drivers, but the Catalyst 14.1 betas are now available to anyone who wants to try them.

Because these are beta drivers for an all-new API, they come with a host of caveats. AMD tells us the best performance is limited to some of its newest GPUs, particularly the Radeon R9 290 and 290X. The firm expects similar performance gains from any Mantle-capable GPU eventually—that is, anything based on the Graphics Core Next architecture, dating back to the Radeon HD 7000 series—but we’re not quite there yet. Also, although Mantle generally performs well in BF4, there is a known issue with occasional stuttering and potential instability. Furthermore, a number of hardware configurations aren’t supported with Mantle just yet, including the Enduro and Dual Graphics configs for laptops and Eyefinity multi-display setups with monitors in portrait mode. In other words, AMD has a lot of work left to do, and this first beta driver is just an early preview of what’s to come.

Mantle isn’t the only new capability in this driver, either. Catalyst 14.1 also includes frame pacing for multi-GPU solutions based on older GPUs, like the Radeon HD 7000 series and the R9 280X, when connected to Eyefinity multi-monitor setups and 4K displays. That feature remains limited to DirectX 10 and 11, not DX9, but we’re happy to see it. We’ll have to test it separately.

You may have gathered from the list of caveats for this driver that Mantle is an all-new thing. A lot of things we take for granted from Direct3D and OpenGL won’t yet work with Mantle. AMD says it’s been rebuilding all sorts of functionality, like multi-GPU and multi-display support, from scratch. One casualty of Mantle’s novelty is our usual suite of graphics performance tools. Programs like Fraps and the FCAT overlay are built for Direct3D, so when we switch BF4 into Mantle mode, we lose the ability to monitor performance using those tools.

Happily, the folks at DICE have built some good instrumentation into the latest version of BF4, as Johan Andersson explains in this blog post. The screenshot above shows the plot the game can display in the corner of the screen depicting frame rendering times as they happen. Frame production times for the CPU and GPU are tracked separately in different colors, so it’s easy to tell whether performance is mainly CPU-bound or GPU-bound. For instance, in the few seconds represented above, the CPU and GPU are pretty evenly matched, but the CPU is responsible for a few slower frames.

You can tell the game to log performance to a file, in which case it writes out a series of values for each frame: the CPU time, the GPU time, and the overall frame time. That’s it. We’re getting exactly the right raw data, and there’s not an FPS average in sight. This is progress, folks. When I commended Andersson for these tools on Twitter, he said FCAT support will be coming in a later update. That should allow us to do proper multi-GPU testing. For now, we have what we need to do frame-time-based testing with single GPUs and Mantle right out of the gate.
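
For the curious, here’s a minimal sketch of how one might crunch such a log. The exact file format is an assumption on our part: we treat it as one comma-separated line per frame (CPU time, GPU time, and frame time, all in milliseconds), and the file name is hypothetical.

    import csv

    def load_frame_log(path):
        """Return per-frame CPU, GPU, and overall frame times in milliseconds."""
        cpu, gpu, frame = [], [], []
        with open(path, newline="") as f:
            for row in csv.reader(f):
                if len(row) < 3:
                    continue  # skip blank or malformed lines
                cpu.append(float(row[0]))
                gpu.append(float(row[1]))
                frame.append(float(row[2]))
        return cpu, gpu, frame

    cpu_ms, gpu_ms, frame_ms = load_frame_log("bf4_frametimes.csv")
    # A frame is CPU-bound when the CPU took longer to produce it than the GPU.
    cpu_bound = sum(c > g for c, g in zip(cpu_ms, gpu_ms))
    print(f"{len(frame_ms)} frames, {cpu_bound} of them CPU-bound")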

Enabling Mantle is as simple as setting an option in the BF4 video menu and then restarting the game. Once it’s up and running, Mantle seems to be almost entirely seamless. We were able to switch between windowed and full-screen modes without issue, so the API appears to play well with the rest of Windows, even as it’s bypassing the Direct3D rendering path.

Although we haven’t yet done any sort of detailed comparison, at least superficially, image quality in BF4 isn’t altered substantially by switching between the Direct3D and Mantle renderers. Everything looks solid, well filtered, and properly rendered with Mantle, just as it does in D3D. This isn’t a bad start for a new API, especially with an application as complex as Battlefield 4.

We’re beginning our coverage of Mantle by focusing on CPU performance rather than GPU performance because we expect the biggest gains on the CPU side of things. AMD says most of the benefits should come in CPU-limited scenarios, mostly thanks to a reduction in the number of draw calls needed to render a scene. (Direct3D has become rather infamous for the number of draw calls it requires, and those generally translate into additional CPU overhead.) We do anticipate some more modest GPU performance gains from Mantle, as well, and we plan to explore those in another article.
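
To make the draw-call argument concrete, here is a toy model. The per-call costs below are invented for illustration, not measurements of Direct3D or Mantle: the point is simply that CPU time per frame grows linearly with the number of draw calls, so an API with lower per-call overhead frees up CPU headroom.

    # Toy model only: the overhead figures are assumptions for illustration.
    def frame_cpu_ms(draw_calls, overhead_us_per_call, other_work_ms=5.0):
        """CPU time to prepare one frame: fixed game work plus per-call cost."""
        return other_work_ms + draw_calls * overhead_us_per_call / 1000.0

    for api, overhead_us in [("high-overhead API", 10.0), ("low-overhead API", 2.0)]:
        ms = frame_cpu_ms(draw_calls=5000, overhead_us_per_call=overhead_us)
        print(f"{api}: {ms:.1f} ms of CPU time per frame")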

Battlefield 4

We tested Mantle performance versus Direct3D in Windows 8.1 using a couple of different processors, a Kaveri-based AMD A10-7850K APU and an Intel Haswell-based Core i7-4770K. The idea was to test in a CPU-constrained performance scenario using two processors with different levels of performance. In fact, I had hoped to show a lower level of CPU performance by including another AMD APU, a 65W Richland-based A10-6700. However, its performance turned out to be almost identical to that of the 95W Kaveri 7850K, so I held it out of our final results in order to keep things simple.

The main video card we used was a Radeon R9 290X card from XFX. This 290X comes with a custom cooler and sustains its peak Turbo clock almost constantly, even under the heaviest of loads. It essentially eliminates the clock speed and thermal variance issues we’ve seen with stock-cooled 290X cards. (I’ll be writing more about this card soon.) To ensure the GPU wasn’t the performance constraint, we tested BF4 at 1920×1080 on the “high” image quality presets, which is fairly easy work for video cards of this power. We also tested at these same settings using a GeForce GTX 780 Ti, in order to see how Nvidia’s Direct3D driver fares compared to AMD’s D3D and Mantle implementations.

We captured performance info while playing through a two-minute-long section of BF4 three times on each config. You can click the series of buttons below to see frame-time plots from one of the test runs for each config we tested.


Even the raw plots readily show Mantle producing lower frame times and more total frames than Direct3D does with the same R9 290X card.

The known issue with occasional stuttering rears its head in one plot, for the 4770K with Mantle. You can’t see the full size of the frame time spike on the plot, but it’s 295 milliseconds—nearly a third of a second. We didn’t see this sort of hiccup all that often, but it did happen during some test runs, including the one we plotted for the 4770K.

AMD has made some big claims for performance improvements from Direct3D to Mantle, and the numbers from the A10-7850K appear to back them up. The leap from an average of 69 FPS to 110 FPS is considerable by any standard, particularly for an API change that apparently produces the same visuals. Even better, our latency-focused metric, the 99th percentile frame time, tends to agree that Mantle is substantially faster than D3D in this case. Mantle also outperforms Direct3D in combination with the Core i7-4770K, but the differences aren’t quite as dramatic.
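
As a reference for how we read those numbers, here’s a sketch of both metrics computed from the same list of per-frame times (reusing the hypothetical frame_ms list from the earlier snippet):

    import math

    def average_fps(frame_ms):
        """Traditional average: total frames divided by total seconds."""
        return len(frame_ms) / (sum(frame_ms) / 1000.0)

    def percentile_frame_time(frame_ms, pct=99):
        """Nearest-rank percentile: pct% of frames finished this fast or faster."""
        ordered = sorted(frame_ms)
        rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
        return ordered[rank - 1]

    print(f"average FPS: {average_fps(frame_ms):.1f}")
    print(f"99th percentile frame time: {percentile_frame_time(frame_ms):.1f} ms")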

One thing we didn’t expect to see was Nvidia’s Direct3D driver performing so much better than AMD’s. We don’t often test different GPU brands in CPU-constrained scenarios, but perhaps we should. Looks like Nvidia has done quite a bit of work polishing its D3D driver for low CPU overhead.

Of course, Nvidia has known for months, like the rest of us, that a Mantle-enabled version of BF4 was on the way. You can imagine that this game became a pretty important target of optimization for them during that span. Looks like their work has paid off handsomely. Heck, on the 4770K, the GTX 780 Ti with D3D outperforms the R9 290X with Mantle. (For what it’s worth, although frame times are very low generally for the 4770K/780 Ti setup, the BF4 data says it’s still mainly CPU-limited.)



The “time spent beyond X” graphs are our indicator of “badness,” measuring how much time is spent on frames beyond several key thresholds. Those intermittent stuttering episodes with the early Mantle driver show up in the beyond-50-ms results for the A10-7850K, even though we didn’t see a hiccup of this size in every run. Since we’re showing the median result from three runs, the spike we plotted for the 4770K doesn’t show up at all here. (There were no such spikes in the other two test sessions.)

The big takeaway here comes from the “time spent beyond 16.7 ms” plot. You need to produce a frame every 16.7 milliseconds to achieve a smooth 60-FPS rate of animation. Mantle moves the A10-7850K much, much closer to that goal, even with that one big latency spike in the picture. If AMD can eliminate those hiccups, then slower CPUs like the 7850K should be capable of delivering a much smoother gaming experience than they can with Direct3D.
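
Here is a sketch of the metric itself, with made-up frame times, which also shows why the median-of-three-runs reporting can hide a spike seen in only one run:

    import statistics

    def time_beyond(frame_ms, threshold_ms=16.7):
        """Total milliseconds spent past the threshold, summed over all frames."""
        return sum(t - threshold_ms for t in frame_ms if t > threshold_ms)

    # Three hypothetical test runs' frame times (ms); run_a has one big spike.
    run_a = [14.1, 15.9, 18.2, 16.0, 295.0]
    run_b = [14.5, 15.2, 17.1, 16.3, 15.8]
    run_c = [14.8, 16.1, 17.4, 15.5, 16.2]

    per_run = [time_beyond(r) for r in (run_a, run_b, run_c)]
    # Reporting the median run is why a spike seen only once can vanish.
    print(f"median time beyond 16.7 ms: {statistics.median(per_run):.1f} ms")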

Conclusions

These are still early days for Mantle, but we can already see its ability to reduce CPU overhead rather dramatically compared to Direct3D. That’s exactly the sort of innovation folks have wanted to see in PC gaming, and AMD and DICE are already delivering. One would hope this demonstration of a more modern approach to graphics programming would spur others (ahem, Redmond) to innovate in a way that can benefit the entire PC ecosystem.

There’s lots of work yet to be done on Mantle. AMD needs to refine its drivers, add some key features, and improve performance scaling for its older GCN-based graphics chips. Meanwhile, in order for Mantle to really gain traction, EA and DICE will have to follow through on their promise to bring the Mantle rendering path to a host of other games based on the Frostbite 3 engine.

Based on these first results, the big beneficiaries of Mantle’s proliferation will probably be folks who, for one reason or another, have a PC that isn’t built to perform especially well in many of today’s games. PCs with slower processors stand to gain the most.

That said, there are already some well-worn paths to very good gaming experiences on the PC today. The Haswell-based Core i7-4770K is faster than the A10-7850K regardless of the graphics API. Switching from AMD’s Direct3D driver to Nvidia’s will get you more than halfway to Mantle’s performance on an A10-7850K, too. AMD would do well to work on improving its Direct3D drivers and CPUs, as well as pursuing Mantle development—but I’m sure they already know that. I’m happy to see AMD pushing innovation in graphics APIs at the same time.

We’ll surely test Mantle’s performance on a broader range of CPUs as it matures. I’m curious to play around with different core counts and to see whether low-power chips like Kabini can provide good gaming experiences with Mantle. Our next task, though, will be to see what performance benefits Mantle can deliver in GPU-limited scenarios. Stay tuned for that.

I tweet things on Twitter sometimes.

Comments closed
    • USAFTW
    • 7 years ago

    Maybe AMD shouldn’t have bothered with a new API and instead focused on optimizing its drivers for lower overhead, like Nvidia. It wouldn’t have cost as much as writing a new API from scratch. Such a disappointment.
    And who’s gonna stick a 290X into an FM2+ mobo for the only game that supports Mantle? And get inferior performance (CPU bottleneck) in the rest of their PC games? Anybody willing to invest in a 290X would likely cough up some more bucks for a 4770K, right?

      • Meadows
      • 7 years ago

      I said the same thing but people seemed to disagree for no reason.

    • itachi
    • 7 years ago

    Nice little benchmark.

    Still far from the advertised 45% benefit, except maybe for the A10/R9 290X combo, I guess.

    Anyway, remember guys, it’s only a beta, pretty much... we’ll see better results soon, I’m sure.

    And we’ll see the real difference when one day they release a game that was developed from A to Z with Mantle; then we can start talking about Nvidia/AMD...

    After I heard about the price increase in the USA, I checked and wow! It’s insane...

    I’m from Europe personally... I’d probably get the 290 if I were to invest in a good mid-to-high-end graphics card; if budget weren’t a problem... a 780 Ti Superclocked without a doubt!

    I don’t know, but right now it seems like AMD is a bit out of the equation for you guys in the USA.

    Not sure how they’re gonna sell their cards at this price, lol.

    • wof
    • 7 years ago

    This shows that Mantle can do most of what D3D does, and apparently it does it faster most of the time, which is good. But it’s the other way around that’s truly interesting, as it could allow games to be written differently.

    Like having 1,000 different characters running around instead of 200 instances of 5 models…

    So who, if anyone, will write that game first? Or will it be a console-to-PC port that does it first?

      • Klimax
      • 7 years ago

      Doesn’t matter that much, as that can be and is done in DirectX.

      First problem: the same restrictions will always apply, no matter the API, because the main reason is available resources (mostly memory), not some hypothetical overhead of the API.

      When will we see it, alias the second problem? When there is a good way to reduce the cost of creating those models in the first place during development. Current games are already getting way too expensive…

    • ronch
    • 7 years ago

    If anything, I think AMD’s DX drivers need more work. The hardware is willing, but the drivers are weak.

    • SCR250
    • 7 years ago

    Looks like using Mantle can cause problems for game developers having to support two APIs.

    “Dice has confirmed that there’s an issue that’ll be fixed with an upcoming patch, and that the bug is on its side of the code.” http://www.extremetech.com/gaming/175998-battlefield-4-amds-mantle-causes-washed-out-foggy-graphics-compared-to-directx

    So who has to PAY to get this fixed (AMD or DICE)?

      • maxxcool
      • 7 years ago

      Normally the DEV for the game will pay to debug, pay to code, pay to beta, then publish.

      This being AMD’s Hail Mary to make Kaveri competitive... I’m sure they will lend whatever is needed to get it done.

      • PixelArmy
      • 7 years ago

      Last I checked, AMD had < 50% market share. Of those, an even smaller % are GCN-based. And of those, this only really helps if you have mismatched your CPU. What % of users is this really helping?

      I’m still not sure why they’d divert resources for this optimization that helps so few, and adds extra debugging in the process…

      Given that developing the Mantle version took resources (and it did, otherwise they wouldn’t have had to delay a Mantle patch to fix BF4 bugs), one can’t help but think they could have launched a much more stable game…

      On one hand, squash bugs for everyone; on the other hand, get 10% for 10% of your users?

      Now, a 10% gain in the 4770K scenario isn’t nothing, but you’d have to try not to eke that out being much “closer to the metal”.

    • Arclight
    • 7 years ago

    The GTX 780 Ti really stole their thunder. If it had been the GTX Titan, Mantle would have looked a bit better.

    • Sabresiberian
    • 7 years ago

    “A lot of things we take for granted from Direct3D and OpenGL won’t yet work with Mantle.”

    Unfortunately, we can’t really judge Mantle until we get all the functions available that are in DirectX. More complex functions may well slow down Mantle such that the performance difference isn’t as good as it shows here. (I remember, years ago, some people playing World of Warcraft claimed they got “better performance” running it on their Linux machines. Well, OpenGL did not at the time do some of the things DX did, so it might give them better frame rates, but not a better visual experience.)

    Still, I’m excited, and I’m very glad AMD is showing the desire to step up and lead here. For once the push for a better gaming experience is coming from them, instead of them just reacting to what Nvidia did. Microsoft certainly needs the competition to keep DX as good as it should be. OpenGL hasn’t kept up enough to really put pressure on them, and they’ve been a bit lax in giving game developers what they wanted.

    Good news is that MS has already said they plan to implement some of Mantle’s capabilities; if that happens, then we will all benefit even more from AMD’s push here. Kudos to AMD! 🙂

      • chuckula
      • 7 years ago

      See kids…. that’s a positive post about Mantle that I upthumbed because it at least had a dose of rationality built into it.

    • adam1378
    • 7 years ago

    Do you think the whole design of Mantle is to integrate the consoles with PC gaming and prolong the console cycle?

    • ronch
    • 7 years ago

    I would be interested to know what Mantle can do for AMD’s 8-core FX processors. Wouldn’t it be cool if it finally pushes say, an FX-8350 up to the level of a Core i7?

    • sschaem
    • 7 years ago

    From the delta alone we can deduce that Mantle is more efficient than Nvidia’s D3D.
    7850K: Mantle 290X vs. D3D 780 Ti: 110 FPS vs. 93 FPS

    The 290X’s FPS win doesn’t come from the 290X bone-crushing the 780 Ti; on the contrary,
    when we are GPU-limited, we see that it’s the 780 Ti that has the edge in this game.
    130 FPS vs. 145 FPS… the 780 Ti is the faster GPU here.

    So how can a faster GPU (~10%) be almost ~20% slower when CPU-limited?
    D3D overhead.

    So what now, AMD haters? Is Nvidia also to blame for this? Crappy drivers?
    If AMD and Nvidia can’t write optimized drivers… who is really to blame?

    Anyway, that 30% delta is all Mantle. This seems to indicate to me that if Nvidia were to provide a modern API for their cards, they could gain a 30% FPS boost with mid-range processors. (And SoCs like Tegra.)

    Yet, having said this, it seems possible that AMD could boost their D3D drivers.

    In future tests, it would be nice to see power usage per frame, D3D vs. Mantle.

    I have a feeling that the 60% boost we see with a 4-core Kaveri is only half the story.

      • Ringofett
      • 7 years ago

      I wouldn’t disagree that there’s D3D overhead, but my point would be that I’d never (given their current competitive position) throttle myself with an AMD CPU. I’d have an Intel CPU, as I have for as long as I can remember, and continue to be GPU-limited instead of CPU-limited in gaming, as your average enthusiast machine typically is. Therefore, Mantle’s gains would never materialize for me.

        • Pwnstar
        • 7 years ago

        That’s not entirely true. There are games that are CPU limited, even on enthusiast machines, like Civ5.

        Mantle would help with those games (if they were written for it).

          • DancingWind
          • 7 years ago

          Honestly, Civ5 is a bad example. The catch is that Mantle improves cases that are CPU-limited by GPU operations. Civ5 is usually limited not by the graphics subsystem but by the game simulation.
          What consistently kills D3D performance is many distinct objects (I think the technical term is “draw calls”). Devs have a couple of thousand calls available for each frame, and if you go over that, it doesn’t really matter how fancy or primitive the objects are; you will tank the CPU.
          And that is why you might not get that much benefit from Mantle in games designed for D3D, but please consider that Mantle actually opens design avenues that are simply infeasible under D3D. Strategy games like SupCom1, Total War, and Star Swarm are obvious examples that would really benefit from it, but I don’t doubt that there are ways other genres could benefit.

            • Pwnstar
            • 7 years ago

            If the game simulation ran on the GPU, it wouldn’t need to be run on the CPU. Thus, no longer CPU limited.

            • Theolendras
            • 7 years ago

            I semi-agree with you. It can still be relevant if an application is CPU-bound for reasons other than graphics; that does not mean it has headroom for GPU calls, so the CPU overhead of drivers and the API might still influence the results. Civ 5 has quite a fair GPU load as well, so it might be a good example even if it’s not a staple best-case scenario.

            • Klimax
            • 7 years ago

            No, Mantle doesn’t allow anything we didn’t have already. We can have massive object numbers; we can have all the things Mantle supposedly enables. There is no benefit to it. Unless developers suck, alongside a certain GPU vendor.

            In fact, Civ V is a great showcase for DX multithreading, because you can control how they use DCLs and how big the batches are… And the effect can be quite interesting even with default DCL usage.

            And I suggest you forget draw calls, as they are a mostly meaningless number used to confuse less knowledgeable people, because the number of draw calls can be affected in massively more ways than you can count bugs in BF4. (The same would go for the number of batches.)

        • Firestarter
        • 7 years ago

        “Therefore, Mantle’s gains would never materialize for me.”

        There is more to gain with Mantle than just more efficient CPU utilization. I’m skeptical as to whether we’ll actually see any other benefits in the short term, though (short of a slight bump in GPU-limited situations).

        • puppetworx
        • 7 years ago

        “Mantle’s gains would never materialize for me.”

        Unfortunately, strictly scientific performance analysis isn’t (easily) possible in multiplayer, but ALL of the data I have seen published on Mantle’s performance in multiplayer has been incredibly one-sided, with Mantle a long way ahead: http://www.pcper.com/reviews/Graphics-Cards/Battlefield-4-Mantle-Early-Performance-Testing/More-Mantle-Results

        This shouldn’t be a surprise. BF4 singleplayer is already mostly GPU-bound, even without Mantle. When playing multiplayer, however, the situation is different, and the game is easily CPU-bound. So Mantle is relevant to enthusiasts who play multiplayer.

        Note that it should also benefit enthusiast CrossFire setups, which tend to always be CPU-bound.

        edit: fixed link

      • Pwnstar
      • 7 years ago

      Well said, sir. Really strips the issue down to the meat.

      Also, the 780 Ti costs $200 more than the 290x. It is the faster GPU. For $200 more.

        • slowriot
        • 7 years ago

        The price difference between a 290X and 780 Ti is often less than $100. At this point you can’t use MSRP, given the unfortunately crazy retail pricing on AMD’s 290(X) cards.

        http://pcpartpicker.com/parts/video-card/#c=153,146&sort=a7

          • Pwnstar
          • 7 years ago

          That is a temporary situation. AMD did not raise their prices, the merchants did.

            • SCR250
            • 7 years ago

            First off it is not “temporary” as it has been going on for 3-4 months.

            Second, since you seem to imply that this “temporary” price increase will soon be gone, please enlighten us to exactly when the prices will be back to normal MSRP levels?

            Third, it doesn’t matter why the prices are high all that matters is that they are high.

        • puppetworx
        • 7 years ago

        Agreed. I completely failed to recognize it when I read the article but that data clearly shows Mantle’s efficiency.

    • brucethemoose
    • 7 years ago

    Wow, 223 comments and plenty of downthumbs by lunchtime. This article stirred up more controversy than usual.

      • LostCat
      • 7 years ago

      I’ve seen enough BSers to want to steer clear of the discussion.

        • brucethemoose
        • 7 years ago

        Sometimes it’s fun to read, but it’s getting pretty nasty down there.

    • ptsant
    • 7 years ago

    Thanks for the review.

    Conclusion #1: If you have an old/weak processor, R290X is much better than 780Ti in BF4.
    Conclusion #2: If you already have a R290X, you just got a free 10-50% upgrade in BF4.

    False Conclusion #1: 780Ti is much better than R290X (no, because the settings are not GPU-limited!)

    What I would like to know is the difference in CPU frame production time (obtainable from the in-game tool) in different scenarios, such as with a Kabini/Jaguar, Kaveri, FX, Haswell and Nvidia, AMD D3D, AMD Mantle.

    I know it’s a lot to ask, but I think this will allow us to better understand the merits of this new API.

      • maxxcool
      • 7 years ago

      Or you can have an i3 and not need Mantle at all.

        • ptsant
        • 7 years ago

        Nobody needs mantle. I’ll take the free 20% improvement and say thanks.

        • rhammer
        • 7 years ago

        http://www.golem.de/news/amds-mantle-api-im-test-der-prozessor-katalysator-1402-104261-3.html

        Battlefield 4 with Mantle + 64 players + Core i7-3770K

          • Airmantharp
          • 7 years ago

          Still gaining ~8% at 4k. Would love to see their frametimes for that test!

      • kamikaziechameleon
      • 7 years ago

      “False Conclusion #1: 780Ti is much better than R290X (no, because the settings are not GPU-limited!)”

      I’m confused. If the GPU outperforms simply on a driver basis, then wouldn’t that mean that, on the whole, any Nvidia card will outperform any similarly powered Mantle AMD card, thanks to Nvidia’s superior driver implementation???

      So my false conclusion is that Nvidia drivers with the DX handicap are still superior to Mantle…

        • ptsant
        • 7 years ago

        You can say that the 780 Ti is better in this specific scenario, meaning with relatively low settings on a 4770K. People buying a 4770K and 780 Ti are not very likely to play at 1920×1080 on high. I have less hardware than that and choose high-ultra.

        The difference between the 780 Ti and 290X is smaller in GPU-limited scenarios. In the 780 Ti review you can see 53 vs. 59 FPS, for example.

        The 780 Ti is the faster card overall, but not by a very big margin and not in all situations.

      • Ringofett
      • 7 years ago

      “Conclusion #1: If you have an old/weak processor, R290X is much better than 780Ti in BF4.”

      Last time I’ll post a variation on this, but that doesn’t make sense to me as an enthusiast. Who would have such a glorious GPU mated to an anemic CPU? Why plump for such a premium product and not toss just a little more money at the build and get a CPU that will never hold that GPU back, plus perform better on every possible task not related to gaming as well?

      Mantle, aside from AMD APUs held back by their CPU side, looks like a solution in search of a problem.

        • Pwnstar
        • 7 years ago

        That scenario won’t be common, but Mantle still helps the cheaper GPUs. What’s wrong with that?

        You’d also have to buy a new $150 motherboard with that CPU.

        • ptsant
        • 7 years ago

        “Who would have such a glorious GPU mated to an anemic CPU? Why plump for such a premium product and not toss just a little more money at the build and get a CPU that will never hold that GPU back, plus perform better on every possible task not related to gaming as well?”

        This is only true if you build a whole computer at once every time. Otherwise, it’s much easier and cheaper to add a new GPU than a new CPU. Also, note that you don’t have to go all the way to the 290X to get some benefit. My 280X also got a nice boost and, at approx. $300, is not that expensive.

        To give you an obvious scenario, my three best gamer friends have 1. a Nehalem, 2. a Phenom I, 3. a Phenom II. Despite their strained budgets, they could pay for a 270X or maybe a 280X. If the BF4 findings apply to other games, the difference with Nvidia cards will grow, not shrink, for their slow CPUs.

        As an example of mid-range GPUs with Mantle, see Legit Reviews (http://www.legitreviews.com/amd-mantle-api-real-world-bf4-benchmark-performance-catalyst-141_134959/3). This is a healthy change with a 260 card…

      • Firestarter
      • 7 years ago

      “free 10% upgrade”

      *in average FPS*

      The 99th percentile figures are more impressive for the more typical 4770K/290X configuration, with a 25% reduction in 99th percentile frame times (or a 30% improvement, if you’re a glass-half-full kind of guy). The 4770K/290X/D3D graph looks a lot worse than the 4770K/780Ti/D3D graph, which is a testament to Nvidia’s D3D drivers, but the 4770K/290X/Mantle graph looks even better, even though the average FPS figure is slightly worse.

      • chuckula
      • 7 years ago

      Conclusion #3: Even though AMD fanboys have spent over 6 months calling Haswell a failure, when it comes down to it Haswell makes the R9-290X look even better using the “obsolete” DX11 path than Kaveri does using Mantle. Mantle just adds some icing to the cake for R9-290X on Haswell’s performance.

      Oh, and the 4670K… which is typically considered better for playing games than the 4770K… is practically the same price as the A10-7850K.

    • unclesharkey
    • 7 years ago

    I wonder if Mantle will have a positive impact on the AMD E-350 in regards to performance?

    • psyph3r
    • 7 years ago

    It would have been better to include an actual Mantle program that is an actual benchmark. I got down to 4 FPS with DX on both an i7-980X and a Phenom 965, both with a 7970 with 3GB of memory. With Mantle I averaged 40+ FPS. It greatly depends on the setup at the moment. Having more than 16GB of RAM is a major plus, and having 8 physical cores also greatly increases performance. Judging Mantle during this initial beta release is incredibly short-sighted.

      • Pwnstar
      • 7 years ago

      In all fairness, the article is called “first look” for a reason.

    • Tar Palantir
    • 7 years ago

    “We’re getting exactly the right raw data, and there’s not an FPS average in sight. This is progress, folks.”

    So why don’t you put the minimum/max FPS in your graphs, right next to the average? You’d think that would be pretty relevant information, or am I missing something? Is there a way to see how FPS varies throughout the test on an X/Y graph? The average alone tells me that a card ranging from 1 FPS to 200 is as good as one going from 60 to 140. (It’s an honest question; I actually don’t know if it’s possible to do something like that.)

    I also can’t wait to see a GPU-bound setting. There might be 1080p/120Hz people out there, but still, beastly cards like these working at less than 1440p are wasted in my eyes.

    I’m now going to refill my popcorn tub; the green/red nonsense in the comments is as entertaining as it is stupid.

      • Pwnstar
      • 7 years ago

      I think frame times are a better measurement for smoothness. This site is known for it.

    • Pwnstar
    • 7 years ago

    Why are there so many butthurt nVidia fans here? The article clearly shows the 780 Ti doing well on the high end in DirectX.

      • Klimax
      • 7 years ago

      As others don’t like GPU PhysX, we don’t like Mantle, because Mantle exists only to fix AMD’s drivers, nothing else, while GPU PhysX is a robust solution where no other exists at all.

        • bittermann
        • 7 years ago

        LOL…wut?

    • Klimax
    • 7 years ago

    Dear author, do not repeat this false claim:

    “(Direct3D has become rather infamous for the number of draw calls it requires, and those generally translate into additional CPU overhead.)”

    It may have been true once, but it has not been true for a long time. AT ALL. The only reasons it still exists are the incompetence of programmers in learning the new way of using the API and the absence of a similar API on consoles. (There are a number of new ways to reduce the number of draw calls, and even the draw call itself has low overhead unless somebody botches the entire function calling it; also, even a standard draw call can apparently operate as a multidraw command.)

    Note: that assertion hasn’t been true for at least five years. Maybe longer, but I didn’t study DX10 in depth.

    Frankly, so much for Mantle. The only reason it is needed is the incompetence of a certain company. That is all. Remember, it was Nvidia who first understood that drivers need to be optimized, back during the Riva TNT days: http://www.activewin.com/reviews/software/drivers/detonator.shtml A driver has to be able to optimize shaders if the architecture needs it (GeForce FX: http://alt.3dcenter.org/artikel/cinefx/index_e.php). Jen-Hsun Huang said that Nvidia is more a SW company than a HW one. Here you can see why.

    Well, I was wrong on two counts. I thought that most of the gains were from GCN-specific optimizations, which is incorrect based on current evidence, and that most of the gains come from a DCL-like approach in Mantle. But we don’t need Mantle for that. We just need to kick AMD in the ass to get them to support it in DirectX, and we can forget this whole vendor-specific lock-in business. My second error and mistake was underestimating Nvidia in regards to optimizations. I thought they’d need at least one more driver to get there, but apparently they are there already. So much for the need for Mantle…

    The end result is that Mantle will stay AMD-only, because no other corporation is going to support it. (Not Nvidia and definitely not Intel.)

    Last thing: I am going through my older captures for some games (Batman: Arkham City, BioShock Infinite, Crysis 2 and 3, Dirt 3 Showdown), refreshing them with correct debugging symbols. The funny thing is that in all those captures DirectX always came in below 5% of all samples and usually sat around ~2.5%, meaning that even if one were to get rid of it entirely, the gains would be lost in the noise and the permanent overhead of any API you can create.

    TL;DR: AMD’s driver sucks, and they need a proprietary API to not suck so much. And still only to match a vendor-neutral API with good drivers…

      • NeoForever
      • 7 years ago

      Seriously? AMD’s drivers suck, so to not suck as bad they decided it’s easier to make a new API than to fix the drivers?
      That’s your logic?

      (slow.. clap..)

        • chuckula
        • 7 years ago

        “Seriously? AMD’s drivers suck so instead of putting time in the drivers they decided it’s better use of time and money to make a new API?”

        You basically summed up AMD’s game plan there.

        “That’s your logic?”

        No, that’s AMD’s logic. Klimax didn’t write Mantle.

        “(slow.. clap..)”

        I thought you liked AMD?

          • NeoForever
          • 7 years ago

          Thx. Please do share if you have some more inside information from AMD management.

        • Klimax
        • 7 years ago

        Yes. Then there is that vendor lock-in… (Sorry, but at minimum de facto, because we know Nvidia will not support it, and I am pretty sure Intel won’t either.)

        We know MT in DirectX works and is usable, thanks to a sample by Microsoft, Civilization V (which has no fewer than four options for just how to do DCLs, and you can set even a bit more, like batch size), and a couple of other games. Nvidia implemented it, DCLs showed a great effect on performance, but AMD doesn’t support it… almost seems deliberate considering their statements.

          • LostCat
          • 7 years ago

          I assume you realize the guy who programmed the graphics engine for Civ 5 is championing Mantle right now.

            • chuckula
            • 7 years ago

            Oh, you mean Dan Baker?

            Here’s what he had to say about Intel: http://software.intel.com/en-us/articles/sid-meier-s-civilization-v-finds-the-graphics-sweet-spot

            So if you don’t trust that article, why do you trust him now?

            • LostCat
            • 7 years ago

            What exactly am I supposed to be trusting/not trusting? You lost me.

        • l33t-g4m3r
        • 7 years ago

        You been an AMD/ATI user long? ATI drivers have sucked since the Rage 128, with the exception of DX9, late DX10(.1), and early DX11, and yes, that sounds just about like what they’d resort to instead of fixing broken drivers, especially if the driver code is too complex and bloated to just “fix”.

        I am not a professional programmer, but I do know a few coders, and sometimes code from a monolithic project can get so out of control that your best bet is to throw the whole thing away and start fresh, although the status quo will never let that happen. Mantle IS AMD’s attempt to start fresh without making waves, albeit as a lower-level API, like Glide.

        It’s good that Mantle works, and it does prove a point, but it also proves that AMD’s DX drivers are unoptimized, buggy crap, and that they could have done the same thing with OpenGL. If anything, AMD could easily get away with throwing out their old OpenGL driver and making an optimized replacement, but they instead opted for plan C, which was: don’t fix DX or GL, and make Mantle. I suppose from a marketing standpoint that actually works better than fixing OpenGL, because few developers outside of id actually care to use it.

      • rhammer
      • 7 years ago

      AMD and Intel don’t have access to NVIDIA GameWorks’ source code.

        • Klimax
        • 7 years ago

        Doesn’t matter in the least. It is as irrelevant as possible.

        All that matters is support for Driver Command Lists, which is in no way in or related to GameWorks.

        I have no idea why you are pulling this misdirection by introducing an absolutely different, unrelated thing. Maybe you could explain why you brought it up?

        ETA: Fixed stupid typo.

      • iamgoingonaholiday
      • 7 years ago

      But a lot of developers, I mean the majority of developers, agree that DirectX isn’t really an API for efficient development. That leads me to thinking that they are all incompetent…

        • Klimax
        • 7 years ago

        Agree? All their agreement is useless unless they’ve got bloody data and evidence.

        So far, I’ve seen DICE just prove Nvidia right on DCLs, and Star Swarm is either incompetence or bribery to damage DirectX. (The benefit of the doubt went out the window when I heard some of their stupid assertions, especially about the importance of the “number of batches” in the FAQ. Same idiocy as the number of draw calls.)

      • ermo
      • 7 years ago

      [quote<]"most of gains come from DCL-like approach in Mantle"[/quote<] [url=https://techreport.com/news/25986/mantle-patch-released-for-battlefield-4-amd-drivers-mia?post=797477<]I told you so.[/url<]

        • Klimax
        • 7 years ago

        However, we don’t need Mantle for that. That’s the thing. Mantle is simply a thing to fix particularly atrocious drivers, nothing more. It doesn’t enable or unlock anything.

        There was never a bottleneck not caused by drivers. And instead of fixing their atrocious drivers, they waste time on this. (And furthermore waste devs’ time on doing the job of the driver teams.)

          • Heighnub
          • 7 years ago

          Holy shit, man, do you not realise that some devs WANT to be doing the job of the driver team (like they do on console platforms)? That is the whole point of Mantle!!!

          The fact of the matter is that every component of the driver is designed to support different applications with vastly different requirements: a one-size-fits-all piece of software.
          When this approach does not work as efficiently as it could, the driver teams jump in and create custom solutions for a specific game; this is the whole driver optimisation thing that results in gamers getting an X% increase in performance after a driver update.

          Now – doesn’t it make a lot more sense for the developers of the game, who have been working on the game for a long time, who know every detail of their data processing requirements, to have the ability to create these solutions?

          This is what Mantle allows.

          And the great thing is that this will eventually be open to the developing public, so that you don’t have to be a big player or have a popular game to get support from Nvidia and AMD on driver optimisation – you can just do it yourself!

          Doesn’t that sound like a good idea?

      • pogsnet1
      • 7 years ago

      I have AMD GPUs, and I don’t encounter what you are talking about. Probably you have a bloated OS. My system and all the rest of my computers run fine on AMD GPUs.

    • My Johnson
    • 7 years ago

    *Crap*

    Willing to trade HD7870 for 660GTX. I have an AMD CPU.

    Edit: Did I miss something? “Switching from AMD’s Direct3D driver to Nvidia’s will get you more than halfway to Mantle’s performance on an A10-7850K,..” Or is it because I flubbed GTX 660?

      • auxy
      • 7 years ago

      Why would you trade a 7870 for a GTX 660? The 7870 is already faster …

        • My Johnson
        • 7 years ago

        Because the GTX would perform better in AMD CPU limited situations? Mind that the GTX 660 is only slightly slower overall. I consider the cards to be equivalent.

        • MiG754
        • 7 years ago

        And it’s better for litecoins, which are still worth mining on GPUs, especially in winter. I’d gladly trade my 660ti for a 7950 and go from 200 to 700 khash/s, but no one in my country would make that trade :D.

    • Airmantharp
    • 7 years ago

    While there seems to be a ton of excitement concerning the use of Mantle-capable high-end GPUs with lower-end CPUs, and that’s warranted, I’m much more interested in how Mantle improves the performance of games that are CPU-limited regardless of the GPU used.

      • Pwnstar
      • 7 years ago

      You can simulate that by lowering the resolution to 480p and turning down the texture quality. FPS goes to 200 and gets limited by the CPU. This especially works with quad-SLI setups.

      Now you can compare how DirectX handles this situation versus what Mantle does.

        • Airmantharp
        • 7 years ago

        Simulating gets you pretty close, I agree, but it’s not ‘the real deal’ as there are other known factors to consider such as memory capacity, memory bandwidth, and bus bandwidth, to name a few.

    • puppetworx
    • 7 years ago

    It’s been discussed in the forum but now more than ever seems like the time that multiplayer testing has to come of age. Obviously it hasn’t been done so far because of the difficulties in getting repeatable results but now might be the ideal time to get something done about it.

    DICE is known to have a multiplayer simulation that they run multiplayer benchmarks on. AMD and DICE want to prove that Mantle is a success. A little pressure on DICE and AMD, and maybe they will make that test accessible to the press.

    That would be a great step forward. It might not work but it’s certainly worth a try.

    Failing that it seems like only a community effort could make multiplayer testing become a reality. That’s a lot harder to do.

      • Airmantharp
      • 7 years ago

      Anything that’s not repeatable by the community is going to arouse suspicion, especially when it’s provided by a developer. DICE would have to release the MP benchmark to the public. This isn’t FCAT, after all.

    • soryuuha
    • 7 years ago

    Nvidia has better frame rates than AMD in D3D/Mantle… on a Gaming Evolved title. Argh, the irony!

      • Pwnstar
      • 7 years ago

      I’d like to know how much money nVidia blew to make that happen. It’s a beautiful thing!

        • Airmantharp
        • 7 years ago

        Probably not much, really. Nvidia has been (lately) more of a ‘do it right the first time’ company when it comes to the quality and stability of its products, while AMD has been in continuous rolling beta. Given each company’s respective history, today’s results are about the best we could have hoped for from either of them. Mantle shows rather promising gains across the spectrum of CPU and GPU hardware, while Nvidia’s refined drivers expectedly hold their own.

        The results are just enough of a kick in the pants for AMD to need to stay focused with DICE on getting the bugs in Frostbite 3’s Mantle implementation ironed out, while at the same time giving Nvidia and Microsoft ample motivation to reinvigorate their DirectX development teams!

          • Pwnstar
          • 7 years ago

          nVidia’s drivers clearly show improvement from 4 months ago. They must have spent a decent amount of time on them and time = money.

            • Klimax
            • 7 years ago

            They charge premium for their cards and people are willing to pay it. Here you see why.

            • jihadjoe
            • 7 years ago

            I imagine the driver development team is already on their payroll.

            Money being “spent” sort of implies NV is bringing in more people to work on the drivers, or having their guys work overtime.

            • Pwnstar
            • 7 years ago

            Not really. It just means they spent all their time on Battlefield 4, instead of optimizing other games like they should.

            • renz496
            • 7 years ago

            Is that so? If Nvidia had been focusing solely on BF4 in the past month, you wouldn’t see new SLI profiles in the latest driver.

            http://www.geforce.com/whats-new/articles/nvidia-geforce-334-67-beta-drivers-released

            • Pwnstar
            • 7 years ago

            Ok, so they spent some time on SLI profiles. Doesn’t really refute my point.

            • Airmantharp
            • 7 years ago

            What other games on the market should they be optimizing for? Did they miss something in order to optimize for Battlefield?

            • Pwnstar
            • 7 years ago

            What other games do people play? I sure wish The Elder Scrolls Online ran better on my nVidia card.

            • Airmantharp
            • 7 years ago

            The one that’s in Beta for two more months?

            I wish all Betas ran perfectly!

            • Pwnstar
            • 7 years ago

            The point is nVidia didn’t release beta drivers for TESO because they were spending all their time on BF4. TESO comes out in less than 2 months. They aren’t ready.

            • maxxcool
            • 7 years ago

            Oh, just like AMD with Mantle for ONLY BF4.

            • Pwnstar
            • 7 years ago

            No, not only for BF4 but for any game that uses Frostbite and there are like 15 of those.

            • chuckula
            • 7 years ago

            So wait… AMD gets off scot-free for flailing around since Mantle == INNOVATION, but Nvidia is cheating in some sort of evil backroom conspiracy because they have the gall to pay people to maintain their drivers and provide good performance for different games?

            Oh, and Nvidia paying people to write drivers is NOT innovation… or something.

            What sort of bizarro world have I landed in?

        • maxxcool
        • 7 years ago

        Which is exactly why AMD’s drivers are garbage. No drive to succeed or reinvestment.

      • Theolendras
      • 7 years ago

      CPU usage from Nvidia’s drivers has been lower for quite some time. StarCraft II and Civ 5, both largely CPU-limited titles, perform better with Nvidia. I don’t know why AMD still does not have driver command list support… I guess they have too much on their plate already. Still, Mantle seems to have the capacity to give a bit more longevity to old systems and better efficiency overall, so I’m not gonna complain, unless their Direct3D drivers suffer too much from lack of focus…

      • rhammer
      • 7 years ago

      Note that GK110’s ~551 mm^2 die size is larger than Hawaii’s ~438 mm^2.

      • rhammer
      • 7 years ago

      http://www.golem.de/news/amds-mantle-api-im-test-der-prozessor-katalysator-1402-104261-3.html

      Battlefield 4 with Mantle and 64 players

    • Voldenuit
    • 7 years ago

    My takeaway message from reading the Mantle reviews:

    1. AMD’s CPUs suck.
    2. AMD’s D3D drivers have high CPU overhead.

    I’m running an Athlon II X4 CPU with a GTX 660, and my last 3 GPUs have been AMD (the 3 before that were nvidia, and the one before that was AMD), so I don’t consider myself a fanboi (but then, how many fanbois do?).

    However, Mantle seems like a complicated answer to fundamental problems that are best solved at the root. Improving their CPU performance and reducing CPU overhead (whether it be by optimizing drivers, moving to OpenGL or having Microsoft overhaul D3D) all seem like more fruitful solutions than a new proprietary low level API (Glide3D anyone?) that developers need a separate render path for.

    If AMD were serious about Mantle, they should have optimized it for their APUs first and foremost – that might actually make laptops and AIOs with AMD inside attractive alternatives to intel IGP-equipped systems (assuming BF4 performance is a priority).

    Excavator needs to blow everyone away, or AMD is toast.

      • Airmantharp
      • 7 years ago

      I hope that AMD isn’t toast- but they’re trying their darndest to kill their desktop CPU business.

        • ermo
        • 7 years ago

        Or they’re trying their hardest NOT to kill their desktop and server CPU business. And failing pretty miserably, too.

        And to add insult to injury, their driver developer was told not to touch their D3D driver and instead focus on Mantle?

        Dear AMD, please hire an extra (accomplished) graphics driver developer to help out the poor soul currently tasked with the job. Two heads are better than one and all that.

      • Pwnstar
      • 7 years ago

      Why does lower performance mean it “sucks”? They are still quite playable, just lower performance and lower price.

        • Airmantharp
        • 7 years ago

        You’re right, but if you compare where they are today with where they were over five years ago, you see that the top performance bar hasn’t budged, and is falling behind Intel’s top-end consumer line in every measurable metric beyond being able to turn on.

          • JustAnEngineer
          • 7 years ago

          Where’s Global Foundries’ fabrication process technology at these days compared to Intel’s?

            • Airmantharp
            • 7 years ago

            Working on 28nm versus working on 14nm?

            • Theolendras
            • 7 years ago

            Yep, people can fire at AMD all they want; their engineers are working at a significant process disadvantage, and that disadvantage is growing. AMD would have to have a revolutionary architecture in order to take the lead.

            Couple that with an architecture that only improves total integer throughput at the expense of single-thread and floating-point performance, which didn’t pan out very well. For gamers this is very disappointing. Many applications are only beginning to multithread properly, and the SDK for HSA was just released. Disastrous timing, I would say.

      • psyph3r
      • 7 years ago

      Mantle is completely compatible with Nvidia if they simply write a driver. Open source.

        • swaaye
        • 7 years ago

        AMD itself has said that Mantle is fundamentally designed for their GCN hardware.

          • LostCat
          • 7 years ago

          The Frostbite team said it could work on most modern GPUs with minor changes (at an AMD event iirc.)

            • swaaye
            • 7 years ago

            On the other hand I’ve seen AMD people say stuff like “… the entire API needs to be written for the architecture.”

          • Pwnstar
          • 7 years ago

          They were talking about AMD’s Mantle driver, not the API itself. nVidia could make a driver if AMD let them and supposedly they will open it up for everyone later this year.

            • swaaye
            • 7 years ago

            Yeah I am skeptical about how that will go down. Typically you don’t see competitors work together like that and NV has some of their own stuff in the works.

            • PixelArmy
            • 7 years ago

            x86 on Itaniums anyone?

        • nanoflower
        • 7 years ago

        It is not open source. AMD says that others can use Mantle, but they don’t say when, or what other vendors will have to do to get the rights to develop their own Mantle driver.

      • rhammer
      • 7 years ago

      1. NVIDIA’s x86 CPU sucks. http://www.nvidia.com/page/uli_m6117c.html i.e. MIA.
      2. You didn’t factor in NVIDIA’s TCC driver, i.e. the low-dispatch/low-overhead CUDA driver, when comparing against the DirectX/WDDM version.

      • M3gatron
      • 7 years ago

      Mantle improves performance for Intel CPUs as well.
      AMD’s D3D drivers are fine. If they had such “high CPU overhead,” AMD GPUs shouldn’t be able to match Nvidia GPUs, but that is not the case.

      • bittermann
      • 7 years ago

      “AMD’s CPUs suck”… compared to Nvidia’s CPUs, how are they?

      Yeah, thought so…

      • maxxcool
      • 7 years ago

      Mean.. but accurate.

      • Theolendras
      • 7 years ago

      I’ve been wishing for an AMD buyout since Piledriver. AMD does not have the resources on its own to compete anymore.

    • the Lionheart
    • 7 years ago

    The 7850K goes from 50% to ~94% of the performance of the 4770K under Mantle. That’s pretty impressive! Having said that, I would like to see Mantle used to utilize those HSA APUs for heavy physics and AI computation. I’d love to see a game that uses decent, immersive physics simulation.

      • kc77
      • 7 years ago

      I’d love to see what this does for other models, like the higher-clocked Piledriver chips such as the 8350. Will they experience the same gains?

        • Pwnstar
        • 7 years ago

        Probably not the same percentage improvement, no.

      • Klimax
      • 7 years ago

      Funny thing is, AMD could have had that already by supporting DCLs (one of the basic features), but instead they ignored it… (and advertised DX 11.2 support, which frankly is irrelevant when you don’t support a basic feature of DX 11.0).

      But no, they needed to go with a vendor-specific API to get it. (Just more wasted money, IMO.)

    • Krogoth
    • 7 years ago

    Not impressed.

    Software tricks aren’t going to make up the difference in hardware prowess.

      • Airmantharp
      • 7 years ago

      You can’t fault AMD for trying to make an end-run around operating system bloat, even if the only industry-wide effect is the trimming of said bloat!

        • Klimax
        • 7 years ago

        Don’t think so. There’s not much bloat remaining since Win7 and DX 11.0, from the POV of games.

      • Pwnstar
      • 7 years ago

      How is a new API a “trick” in your book?

      • pandemonium
      • 7 years ago

      How is optimizing software to the hardware available a trick? That’s just being smart…

      Why anyone would complain about improvements in any aspect is well beyond me.

      • psyph3r
      • 7 years ago

      You realize Intel’s and Nvidia’s advantages are almost all due to software tricks and Intel-biased compilers. Optimization is the game. This is AMD doing it.

        • Klimax
        • 7 years ago

        As for the ICC compiler, evidence is missing that it has much of a bias. (Considering it officially targets mostly Intel CPUs, one would expect it to use all its knowledge of the architectures as documented by Intel…)
        Anyway, there is evidence that code produced by ICC is often the best even for Bulldozer.

        So maybe once, ages ago, ICC did something, but that hasn’t been true for years…

          • M3gatron
          • 7 years ago

          “ICC did something”
          It was proved that Intel’s ICC was crippling AMD CPUs. Intel even paid a fine. Was it like 1 billion dollars?
          Even now Intel’s ICC cripples AMD CPUs; it’s only that Intel now issues a notice clarifying this aspect.
          For example, Cinebench 11.5 uses AVX instructions for Intel CPUs and SSE2 for AMD CPUs, even if the AMD CPUs can do AVX.

          If you have proof that ICC uses the same instructions for Intel and AMD CPUs, please show it.

            • tcubed
            • 7 years ago

            Arguing with Klimax is useless; his understanding of CPU architecture and knowledge of the industry is rivaled by a chicken’s intellect.

            “Evidence is missing” is something only an Intel imbecile can say when there is plenty. There is even a special page on Intel’s site that Intel was obliged to put up (I see the images are missing, which will attract a fine from the FTC, as they are bound by the agreement with AMD to keep it up) here:

            [url<]http://software.intel.com/en-us/articles/optimization-notice#opt-en[/url<]

            For more about it (you will find the FTC complaint, AMD’s complaint, and also the settlement agreement there):

            [url<]http://www.agner.org/optimize/blog/read.php?i=49[/url<]

            And the fine was $1+ bn, and the FTC wants to reopen the case, as it feels that more parties were affected than just AMD (i.e. clients and all the other companies). Intel faces yet another huge fine from the FTC directly, and by the looks of it they will lose again. They also paid a huge fine imposed by the European trade commission.

            • JumpingJack
            • 7 years ago

            Actually, the FTC did not impose any fine; they had Intel set aside $10 million to fund any company that wanted to recompile with a non-ICC compiler. They also required Intel to publish a notice that their compiler would not optimize the same for non-Intel CPUs.

      • mikato
      • 7 years ago

      This isn’t “software tricks”. This is trying to reduce the artificial penalty that DirectX and unneeded CPU usage impose on gaming with their GPU hardware. They aren’t getting the GPU performance they should, and they’re doing something about it.

    • f0d
    • 7 years ago

    It seems that Mantle has lower image quality compared to DirectX in BF4:

    [url<]http://www.extremetech.com/gaming/175998-battlefield-4-amds-mantle-causes-washed-out-foggy-graphics-compared-to-directx[/url<]

    DICE says it will be fixable, though, so I guess it’s DICE’s fault, not AMD’s. Who is slower with fixes, DICE or AMD?

      • the Lionheart
      • 7 years ago

      Never heard of the site, and none of the well-known sites have reported any issues with image quality under Mantle. But that aside, Mantle is a step forward towards a more elastic approach to game design.

      I think Mantle will enable developers to utilize AMD’s APUs which could enable massive improvements in physics simulation and AI in games.

        • Airmantharp
        • 7 years ago

        You haven’t been around long, have you? Haven’t heard of ExtremeTech?

        Of further note, differences in image quality have been reported nearly universally, and DICE has recognized the problem.

    • Bensam123
    • 7 years ago

    Guess this is short notice, but I expected more testing scenarios, like GPU-constrained ones, to test whether the improvements also affect the GPU end of things the way they do the CPU (as there are a lot of people calling AMD out on this). I assume there is going to be a followup article on this…?

    Overall I’m pretty happy with Mantle. Having encountered tons of CPU spikes while trying to stream BF4 with an 8350 pre-Mantle, it has pretty much eliminated all of them. I originally had to turn down the resolution to 540p/encoding settings/OpenCL in order to get semi-decent performance, and even then I’d still get spikes. One thing I noticed is that ambient occlusion adds quite a bit of variation to frame times. Turning it off almost makes for a rock-solid line, whereas if you turn it on you get spiking. I’m guessing it’s not fully optimized yet. Everything else you can leave on Ultra (tested on an R9 290) with little to no variance in frame times.

    The spike noted above does appear with semi-regularity at about 15-minute intervals, with increasing duration (the length of the spike), till the game finally crashes. It takes about a map to a map and a half before it happens, but it eventually does.

    • DPete27
    • 7 years ago

    [quote<]In fact, I had hoped to show a lower level of CPU performance by including another AMD APU, a 65W Richland-based A10-6700. However, its performance turned out to be almost identical to that of the 95W Kaveri 7850K[/quote<]

    [url=http://www.anandtech.com/show/7677/amd-kaveri-review-a8-7600-a10-7850k<]Shocker.[/url<]

    Why not test this with a Pentium / Celeron duallie? If core count is a limiting factor, maybe a C2Q or Athlon X4. Let's see what Mantle can really do. Can I build a Bay Trail Atom quad-core gaming PC?

      • vargis14
      • 7 years ago

      Probably, but you only have 2 games to play with Mantle right now.

      Mantle is in its infancy. Once more game devs jump on board, and if Kaveri’s IGP with Mantle performs well, I think AMD is going to have a huge hit on their hands. Also remember AMD will come out with even better IGPs/CPUs in the next 2 years as Mantle matures…we might even get that GDDR5 memory on a stick in the next year, with a die shrink coming also.

      Imagine 60 FPS @ 1080p with no discrete graphics card. I think it would bring a ton of people who don’t have the money for full-blown gaming rigs into the PC gaming world, with premium games to boot.

        • chuckula
        • 7 years ago

        [quote<] Imagine 60 fps @ 1080p with no discrete graphics card.[/quote<]

        Maybe if the (likely overly) optimistic rumors about Broadwell (with a 2-TFLOP GT4 IGP) end up being true, but Kaveri's IGP has already been tested in Mantle and it's not performing any miracles:

        [url<]http://hothardware.com/News/AMD-Mantle-vs-DirectX-Benchmarks-with-Battlefield-4-and-Star-Swarm/[/url<]

        Short answer: Mantle buys you ~10% when using the IGP. Given what we've seen of the issues with AMD's DirectX driver, it's frankly debatable whether AMD couldn't get 10% extra performance just by implementing all the features that are available in D3D.

          • vargis14
          • 7 years ago

          Heck, a 36 FPS average @ 1080p is not bad for an IGP in BF4 on Ultra.

            • maxxcool
            • 7 years ago

            Too bad that is not true: medium settings, with dips into the teens.

    • kamikaziechameleon
    • 7 years ago

    Layman here, so please help me. Mantle is good for old systems with inferior CPUs???

    How old and how inferior? The cost of a decent CPU is pretty fair these days and has been since the Q6600 came out, lol.

    Side note: Nvidia seems to win here despite not being in Mantle??? So what is the fuss? Can someone explain to me what is happening and the value presented? As I see it, this is not a product for a AAA gamer who has a modern setup, but rather that the gamer in question will fare better with an Nvidia solution that won’t have to be powered by some goofy proprietary tech. If I’m wrong, please correct me.

    I also don’t get why we need “Mantle”; between OpenGL and the different versions of Direct3D out there, couldn’t you find one that isn’t overly draw-call intensive?

    Perhaps the best hope I get out of this is that Direct3D gets an overhaul to address the issues compelling AMD’s development of Mantle. Or we continue to see Nvidia beat out AMD regardless of graphics API…

    Huh…. the whole thing seems like a farce. Please correct me and tell me what’s up; I didn’t get it from the article.

      • dmitriylm
      • 7 years ago

      It’s very simple. Let’s say you have a vehicle that does the quarter mile in X seconds. You then realize that this vehicle’s trunk and backseat are filled with cement. You now have two choices to achieve a performance gain. The brute-force method would be to build a more powerful engine. The intelligent route would be to remove the cement.

        • superjawes
        • 7 years ago

        DON’T CONFUSE CEMENT AND CONCRETE!

        …please.

        And if you’re going to use a vehicle metaphor, isn’t Mantle more like replacing the road and tires?

          • dmitriylm
          • 7 years ago

          No, for both portions of your statement, the answer is still cement.

            • superjawes
            • 7 years ago

            [url<]http://en.wikipedia.org/wiki/Cement[/url<]
            [url<]http://en.wikipedia.org/wiki/Concrete[/url<]

            And on the note of metaphors, Mantle is supposed to improve the quality of life by streamlining the process. This is why I think anything improving the vehicle (like [i<]concrete[/i<] in the trunk) would be an improvement in GPU power, while implementing a better protocol would more closely align with improvements to the road (and the car's interface with said road, the tires).

      • maxxcool
      • 7 years ago

      Good for inferior CPUs? No, not really. In fact, it just highlights the i3’s superior FPU (or the vastly weaker AMD FPU… take your pick) even more.

      Sure, it ‘helps’, but as we have seen, the more powerful the CPU, the better the results. This is somewhat dependent on the game/app, so crappy code optimized by AMD would obviously do better… take DICE for example (intentional shot, yes…).

      But overall, the true beneficiaries will be people running i7 / i5 quad cores at 3.5GHz and 8350s at 4GHz. Everyone else will see some benefit… but not the 50% the “heavy gaming” boxes will see.

      As I said in another heavily downvoted post: put an A10 in a laptop with a 1080p screen, and the only way you will play BF4 is at a 720p scaled rez, on low to medium settings, and it will run like ass on 32+ player MP maps.

      Anything Intel @ 3GHz or faster will see ‘better’ gains than its AMD counterparts… (at this time).

      As for why we need Mantle? Well, Ars nailed it in an interview with AMD. They can’t code multi-threaded lookups/calls for beans. So they did not do it, where Nvidia did, as best they could based on MS’s whacky standards.

      If OGL were not hobbled by the suspender crowd, it could do a much better job… but their money is in big apps and big iron, so they move more slowly, and it is not a target for game devs save Mac and Linux…

        • MrJP
        • 7 years ago

        Did you not read the article?

        “We’re beginning our coverage of Mantle by focusing on CPU performance rather than GPU performance because we expect the biggest gains on the CPU side of things. AMD says most of the benefits should come in CPU-limited scenarios, mostly thanks to a reduction in the number of draw calls needed to render a scene.”

        and

        “…the numbers from the A10-7850K appear to back them up. The leap from an average of 69 FPS to 110 FPS is considerable by any standard…. Mantle also outperforms Direct3D in combination with the Core i7-4770K, but the differences aren’t quite as dramatic.”

        Mantle helps most in CPU-limited situations, so will always give bigger gains with weaker CPUs when using the same discrete GPU. I think you’re getting confused with IGP performance, which is not going to be increased much by Mantle because in that scenario you’re far more likely to be GPU-limited in the first place.

          • willmore
          • 7 years ago

          You asked a fanboi if he read that article? Welcome to Tech Report!

            • Pwnstar
            • 7 years ago

            Hehe…

            • maxxcool
            • 7 years ago

            Mantle is Glide. It is terrible but “neat”. The real crime is that now, instead of coding drivers for all D3D games, they will be dividing their resources… and games that won’t use Mantle will suck even more.

          • Klimax
          • 7 years ago

          Mantle outperforms only because it does things AMD doesn’t support in DirectX. Nothing more.
          (Also, the number of draw calls is such a spurious metric, so heavily dependent on context and engine implementation, that it is not funny.)

            • maxxcool
            • 7 years ago

            ^ This… as confirmed by Ars… lazy coding of drivers.

          • maxxcool
          • 7 years ago

          Not confused. Confused is expecting people to buy an A10 + 290X. AMD is cutting its own throat here.

          An i3 with an NV card is better and requires NO special coding.
          An i3 and a 290X is better than an A10 combo.

          An i7 and an NV card wins hands down.
          An i7 and a 290X is damn impressive.

          Who the hell is going to buy an A10 knowing this???

          I’m not confused… AMD is.

        • kamikaziechameleon
        • 7 years ago

        But will this help with bitcoin???

          • mikato
          • 7 years ago

          I doubt it. Cryptocoin mining on a GPU uses barely any CPU… nothing like gaming where the GPU may have to be waiting on the CPU all the time.

      • psyph3r
      • 7 years ago

      You are drawing conclusions about the tech before the tech is even finished. It’s hardly optimized at all. It allows for massive, detailed, draw-call-heavy scenes with little to no CPU overhead. This allows games to have more realistic and detailed environments. This tech will eventually be the driving force behind console (the biggest beneficiary in the long run) and PC game development. Nvidia can use this API by simply writing a driver; Mantle is essentially open source. It is in its infant stages right now.

    • esterhasz
    • 7 years ago

    Well, this explains why the latest generation of consoles does just fine with Jaguar cores.

      • Meadows
      • 7 years ago

      Consoles don’t use Mantle.

        • esterhasz
        • 7 years ago

        They use very similar techniques to reduce driver overhead.

          • Meadows
          • 7 years ago

          I know, but that’s beside the point. By that logic they might as well have used [i<]anything[/i<] for a processor in consoles.

        • psyph3r
        • 7 years ago

        Yet

        • rhammer
        • 7 years ago

        LOL… Consoles don’t need Mantle since they already have “console APIs”.

      • maxxcool
      • 7 years ago

      Not only do they not use Mantle… one uses DX11.X, and the other runs ‘2’ hypervisors, and the GPU is not directly accessed the way Mantle does it.

        • esterhasz
        • 7 years ago

        Mantle, DX, etc. are just names of APIs, i.e. a set of functionalities available to programmers to build on. They are specifications, not implementations (at least in theory; specifications obviously constrain implementations). Behind a method like CreateTexture2D() that a programmer addresses, many different implementations – some slower, some faster – can (and do) exist. MS uses an implementation of DX11 in their consoles that eliminates overhead in similar ways to Mantle.

        Also, modern hypervisors have little to no computational overhead.

        But that’s not the point. The point is that people have wondered why console graphics are substantially better than PC graphics on equivalent hardware. Mantle shows – and very clearly I would say – that a) the CPU has to do much more work in DX than necessary and b) that this overhead can be reduced both by reducing the preparation and communication work the CPU is doing and by using multiple cores more effectively.

        That’s why current gen consoles and their anemic CPU cores are not the bottleneck people were anticipating when they heard about the use of Jaguar cores.
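
        To illustrate the specification-versus-implementation point, here is a toy C++ sketch (hypothetical types invented for this example; real drivers are vastly more complex):

            // A toy illustration (hypothetical types, nothing like real
            // D3D code): the API is the contract; implementations behind it
            // can differ wildly in how much work each call does.
            #include <cstdio>

            struct ITexture {};

            struct IDevice {                       // the "specification"
                virtual ITexture* CreateTexture2D(int w, int h) = 0;
                virtual ~IDevice() = default;
            };

            struct ConsoleDevice : IDevice {       // lean implementation
                ITexture* CreateTexture2D(int, int) override {
                    return new ITexture{};         // minimal overhead
                }
            };

            struct GenericPCDevice : IDevice {     // heavier implementation
                ITexture* CreateTexture2D(int, int) override {
                    std::puts("validate, track, translate..."); // stand-in for overhead
                    return new ITexture{};
                }
            };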

          • maxxcool
          • 7 years ago

          I disagree. It is more about the closed environment than the API.

            • maxxcool
            • 7 years ago

            ………………………………wrong post

          • maxxcool
          • 7 years ago

          But you’re right that DX could destroy Mantle in one update.

        • rhammer
        • 7 years ago

        Xbox One’s Direct3D 11.X is a superset of the PC’s DirectX 11.2, and it’s lightweight.

        Read [url<]http://blogs.windows.com/windows/b/appbuilder/archive/2013/10/14/raising-the-bar-with-direct3d.aspx[/url<]

        Xbox One's Direct3D 11.X superset is already running on an AMD GCN-based solution.

        --------------

        Xbox One's hypervisor is not Windows Server 2012's Hyper-V. Read [url<]http://gamingbolt.com/xbox-one-the-one-feature-that-nobody-seems-to-be-talking-about[/url<]

        "However, the hypervisor on the Xbox One differs greatly from others in that it is interfacing with the hardware directly. What does this mean? It means that the different operating systems installed within the console will each have their own set of resources, and can thus run on a dedicated basis. It means that they can access the system’s memory and CPU without any middleware"

          • maxxcool
          • 7 years ago

          True, but again, the code can be specific because it is one GPU and one set of cores. If it were an open box, the gains would cease to be as amazing.

    • odizzido
    • 7 years ago

    I’d be interested to see the test done for the original 780 Ti review run again. The settings are different, but back in this article

    [url<]https://techreport.com/review/25611/nvidia-geforce-gtx-780-ti-graphics-card-reviewed/7[/url<]

    the two cards were pretty close, with the 780 Ti 6% ahead in the 99th-percentile frame time metric. In this new review, the 780 Ti seems to be 45% ahead of at least the D3D path for the 290X. That's a pretty huge difference.

    If it maintains the 45% lead with the upped settings, I'd like to see whether these improvements exist only in BF4. I imagine that they do.

      • fhohj
      • 7 years ago

      Interesting, thanks.

      I wonder if this has to do more with the Catalyst Beta driver being gimped in D3D over actual card performance. Perhaps D3D results would be closer with the stable driver.

      • Damage
      • 7 years ago

      You won’t see a big lead for the 780 Ti over the 290X in a GPU-bound test. This test was not GPU-bound and was thus more dependent on driver execution speed on the CPU. Very different sort of thing.

        • tviceman
        • 7 years ago

        For poops and giggles, I’d love to see your results at 1440p. I’d like to think that a large portion of 780 Ti / 290X owners have better-than-1080p screens.

        • chuckula
        • 7 years ago

        The real trick is in the way you just phrased that response… you said “no big lead” for the 780 Ti… which is perfectly logical given the results we’ve already seen.

        However, *any lead of any kind for an Nvidia part* also contradicts the advertising we’ve heard, which would indicate the 290X should end up with a big lead if Mantle truly enabled AMD to unleash the fury…

          • Bensam123
          • 7 years ago

          Dude, seriously?

          You should follow the above comment up with ‘…and thus HL3 confirmed’.

            • chuckula
            • 7 years ago

            So you finally responded.

            Are you going to be man enough to take back some of those hateful comments you spewed at Nvidia when the Titan/780/780 Ti debuted, since even with Mantle, Nvidia isn’t going away? Or are you going to just pretend that Mantle is perfect and that TR’s validated results, showing the only way to get frame times > 50 ms is by using Mantle, are some evil conspiracy?

            I’m getting *really* tired of the hype worship and kool-aid drinking, and I’m getting less and less impressed by vendor-locked, undocumented “standards” by the minute, now that we are seeing they can’t even beat last year’s models from Nvidia… who aren’t exactly perfect to begin with.

            • psyph3r
            • 7 years ago

            Mantle is open source, and Nvidia can write their own driver. Instead of being tired, read about the project.

            • Klimax
            • 7 years ago

            Openness is so far only an assertion, not an established fact. And regardless, it will be de facto proprietary, as nobody else will implement it. Nvidia is already on record that they won’t use it, and I doubt Intel will go for it; they have more basic problems to deal with in the first place. And the mobile GPU vendors are glad to just have OpenGL and sometimes DirectX…

            • Bensam123
            • 7 years ago

            The strawmen and red herrings never end!

      • Pwnstar
      • 7 years ago

      The rumor I read said nVidia spent a lot of time and money optimizing their drivers for Battlefield 4 before Mantle came out.

        • chuckula
        • 7 years ago

        Uh… you mean Nvidia basically did its job and optimized its drivers?
        And Nvidia didn’t stage a huge press conference, bribe DICE with several million dollars, and force us to wait several months in the process, just to do the job it was supposed to do?

        Are you going to accuse Seattle of cheating to win the Super Bowl because they practiced really hard before the game?

        And you call me a troll?

      • mutantmagnet
      • 7 years ago

      This year will definitely be an interesting case study of how quickly Nvidia can target optimization for games as more and more games adopt Mantle, but it’s not the only factor at play.

      [url<]http://abload.de/img/starswarm-mantlerjl8g.png[/url<]

      By Oxide's own admission, they spent a lot more time optimizing the DirectX engine. So there is the case of devs focusing on what's currently popular and only shifting so many resources to a new API that seems to require less time to get similar results.

    • Pantsu
    • 7 years ago

    Nvidia has long been better in more CPU-bound games and at lower settings. I’d say the green team is a better choice for 120Hz 1080p, at least for now. Radeon, on the other hand, has the advantage at 2560×1440 and up. Eyefinity performance especially is far better than Surround.

    There are quite a few different numbers floating around about BF4 performance, and it looks like there’s plenty of variance depending on the test scenario. Ultimately people play this game for its MP, but testing that reliably is a problem. From what results I’ve seen, though, Mantle certainly helps in many cases.

    Ultimately Mantle’s success isn’t tied to BF4. The current state of it is half-baked at best, with a closed beta API and a really buggy game. Mantle isn’t supposed to be out with a public SDK until next year, so it’ll take a long time before we’ll see if it’ll catch on, and what you see now is just a small taste of what’s possible.

    Star Swarm shows us what’s technically possible already at this point, and it’s only going to improve. It remains to be seen what kind of implementations we’ll see in the future, and whether DX or OpenGL can match it.

      • sweatshopking
      • 7 years ago

      Star Swarm was a HORRIBLE demo. They totally rigged it, and their demo is basically a lie. I’d love Mantle to succeed, but at this stage we have typical AMD inflated numbers, typical bad AMD drivers that don’t support the features in DX that would solve the problem, and rampant fanboi insanity. Nothing to see here guys, move on.

        • Pwnstar
        • 7 years ago

        What a load of bullcrap.

        They answer your fallacious allegations here:
        [url<]http://www.oxidegames.com/2014/01/31/star-swarm-faq/[/url<]

          • Klimax
          • 7 years ago

          No, he is right. The code they push is so horrible that it is beyond belief. It is a small miracle it works even barely, because they are issuing a number of pipeline calls per object! It has very weak multithreading of its own (even for the simulation).

          They are doing everything wrong. Either incompetence, too early an alpha, or gimping the DirectX code path versus Mantle. Well, so far I’d say incompetence, based upon the FAQ…

          Star Swarm is the worst engine I have seen so far…

          Screenshots soon from VTune.

          • sweatshopking
          • 7 years ago

          They answer them wrong. They ignore the facts that they code crazy badly for DX (making no use of the right calls) and that AMD doesn’t support the DX calls that would make DX run properly.

        • psyph3r
        • 7 years ago

        What the f are you talking about? 90,000 draw-call ops per second exceeds the limits of DX. Simple as that. That is the reason to offload these to the GPU. Nvidia can even participate in the open-source nature of the API and write their own driver. This basically makes those $1000 CPUs a waste of money… which they were in the first place anyway. It seems to do exactly what they set out to do. I’m happy with my 20% increase in performance for free…

          • Klimax
          • 7 years ago

          This is so incredibly wrong it’s sad. Sorry, but not only is the maximum number of draw calls for properly written code undetermined, and not independently reviewed by anyone without a stake in the proprietary API, it also misses the fact that draw calls are a very insufficient metric, as they are heavily dependent on context and engine.

          You can have a few draw calls for thousands of units already in DX, or you can have a draw call for every single bloody unit. All dependent on requirements and usage.

          And frankly, the problem is not DirectX, but AMD. As Nvidia just showed again…

          BTW: I would dispute the alleged inability to do 90,000 draw calls. Just a meaningless number lacking context and proper data.

          • sweatshopking
          • 7 years ago

          Yeah, Klimax is right. You should do more research.

          • chuckula
          • 7 years ago

          When you can explain what instancing is (hint: it’s the years-old solution to the “draw call” problem that addresses the issues in Star Swarm), and then explain why Star Swarm mysteriously fails to use instancing in its D3D render path, then I might take something you say more seriously.

          Until then, you are just regurgitating marketing PowerPoints instead of educating yourself and discussing technical issues in an informed manner.
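
          For readers who want the concrete shape of it, a minimal instancing sketch (illustrative only; it assumes the standard d3d11.h API, with vertex/index/instance buffers already created and bound):

              // A sketch of D3D11 instancing (C++, d3d11.h). One call draws
              // shipCount copies of a mesh; per-instance data (transform,
              // color, ...) comes from a second vertex buffer stepped once
              // per instance instead of once per vertex.
              #include <d3d11.h>

              void DrawFleet(ID3D11DeviceContext* ctx, UINT indexCount, UINT shipCount)
              {
                  // Naive path: one draw call per ship, i.e. shipCount trips
                  // through the API and driver:
                  //   for (UINT i = 0; i < shipCount; ++i)
                  //       ctx->DrawIndexed(indexCount, 0, 0);

                  // Instanced path: a single draw call for the whole fleet.
                  ctx->DrawIndexedInstanced(indexCount, shipCount, 0, 0, 0);
              }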

    • Neutronbeam
    • 7 years ago

    “Now to tell you the truth I forgot myself in all this excitement. But being this is a new API, just being tested, you’ve gotta ask yourself a question: “Does it play Crysis?” Well, does it, punk?”

    • maxxcool
    • 7 years ago

    Not to anger the AMD crowd… but I am also quite amused that the Intel CPUs seem to benefit from Mantle much more than their AMD counterparts, based on this plus a few other posts around the web…

      • fhohj
      • 7 years ago

      I can see how this comment would garner resentment, but that was largely to be expected. Obviously an Intel CPU would gain more than an AMD CPU from any CPU-boosting initiative.

      • jihadjoe
      • 7 years ago

      What are you on about?

      The 4770K gained 10% from Mantle; the A10-7850K gained close to 60%, which was even enough to reverse the advantage of the 780 Ti.

        • maxxcool
        • 7 years ago

        The full review will explain this a lot more.

      • Bensam123
      • 7 years ago

      Why would this anger AMD people? This just shows it’s vendor agnostic and benefits everyone, more to the point that it’s working and actually doing what it should. That’s a sign of a successful product.

      AFAIK Mantle wasn’t just made to benefit AMD.

      • Joerdgs
      • 7 years ago

      Did I miss something? AMD’s CPU made the biggest performance leap relatively. It tilts the price / performance balance in AMD’s favour in this case.

        • maxxcool
        • 7 years ago

        The full review will explain this a lot more.

      • Vaughn
      • 7 years ago

      Maxxcool, I’m not surprised at all.

      A CPU bottleneck is just that; it doesn’t care who makes the processor!

      • SCR250
      • 7 years ago

      [quote<]Not to anger the AMD crowd[/quote<]

      If you say or show anything NOT positive about any AMD product or software, even if it is factually based, then they get very angry. Hence all the RED minuses you see here, even on your post.

      Edit: and mine.

        • maxxcool
        • 7 years ago

        but, red is pretty!

    • maxxcool
    • 7 years ago

    Am I the only one looking at the green bar and wondering, “Wow, if Nvidia made their own render engine, they’d utterly destroy every other GPU on the market…”?

    Seriously, 10 or so million dollars for that result.

    YES, it needs to come out of beta… YES, we need a much bigger breakdown: Phenom II + 2500K + 8320s and 8350s, A10s, A8s. But this… does not impress me.

    And for those with 260s/270s?

    Need more data… but… this, meh.

      • Pwnstar
      • 7 years ago

      That’s not what they paid for:
      [url<]http://bf4central.com/2013/10/amdamd-paid-ea-5-million-battlefield-4-deal/[/url<]

      [quote<]Seriously, 10 or so million dollars for that result?[/quote<]

        • maxxcool
        • 7 years ago

        dev + RD + testing + Dice == easily 10+ mill

          • Pwnstar
          • 7 years ago

          Read the article I linked. DICE asked for Mantle, AMD didn’t pay them to ask for it. They paid them for other things, like game codes for Never Settle.

            • renz496
            • 7 years ago

            And you believe that 100% without question? When Origin PC dropped Radeon from their builds, do you really believe that was purely an issue between Origin and AMD, and not Nvidia paying Origin to drop Radeon?

            • Pwnstar
            • 7 years ago

            100%? No, but that’s what anonymous “sources” said the deal was about. You’d think some other spy would say otherwise if it wasn’t.

            I agree with you on Origin.

            • maxxcool
            • 7 years ago

            You think R&D and coding and testing prior to EA < 5 mill? You’re nuts.

    • derFunkenstein
    • 7 years ago

    [quote<]AMD tells us the best performance is limited to some of its newest GPUs, particularly the Radeon R9 290 and 290X. The firm expects similar performance gains from any Mantle-capable GPU eventually—that is, anything based on the Graphics Core Next architecture, dating back to the Radeon HD 7000 series—but we're not quite there yet. [/quote<]

    So the interesting stuff still isn't ready. That's a shame - I'd be more interested in what Mantle does for integrated graphics. I appreciate that they have to reinvent the wheel to bring forward features they already support in DX, but man… it was definitely not ready to announce when they announced it.

      • vargis14
      • 7 years ago

      I am looking forward to seeing what kind of performance it can bring out of lower-end cards like the HD 7750, HD 7790, HD 7870/270, 270X, etc.

      Also, Kaveri IGP performance has me on the edge of my seat to see what kind of gains can be had on an IGP.

        • chuckula
        • 7 years ago

        As I posted above, your Kaveri IGP gets 10% (and that's rounding up, too):
        [url<]http://hothardware.com/News/AMD-Mantle-vs-DirectX-Benchmarks-with-Battlefield-4-and-Star-Swarm/[/url<]

    • Meadows
    • 7 years ago

    The takeaway is that AMD’s DirectX drivers are crap.

      • auxy
      • 7 years ago

      [quote<]The takeaway is that AMD's drivers are crap.[/quote<]FTFY. (´ー`;)

        • Meadows
        • 7 years ago

        Well, yes, but anyway.

        • Firestarter
        • 7 years ago

        Well, would you say that AMD’s Mantle drivers are crap? They seem pretty good for a first outing.

          • Meadows
          • 7 years ago

          Yes. Had they spent that same effort on their regular drivers, they wouldn’t need new ones. The newest versions of DirectX support the same things Mantle touts so much regarding draw calls. Notice Nvidia’s performance, for example.

            • Firestarter
            • 7 years ago

            Nvidia’s D3D performance is very good, yes, but I would argue that with the 4770K/Mantle/290X combination, the BF4 Mantle renderer with AMD’s drivers is doing a better, more consistent job than the D3D renderer with Nvidia’s drivers. If that is going to be the trend for other Mantle releases as well, I’d call it a success.

            • Meadows
            • 7 years ago

            Objectively, a “success”, yes, but still just catching up to others. Just as AMD always has been for years.

      • Meadows
      • 7 years ago

      Wooo! Front page comment.

      • sschaem
      • 7 years ago

      They could be better, but not that much better.

      So AMD could invest effort to tweak D3D to the max, but it would plateau at a 20% gain in the best-case scenario. (And my guess is that this would involve a lot of game-engine-specific driver ‘hacking’: analyzing a game’s API usage pattern and tweaking the driver to work the way this or that game uses D3D.)
      This is where you see “Driver update: 20% boost in this or that game”.
      Nvidia gives guidelines on how to use D3D so it works well with their drivers, and it’s not uncommon that those recommendations are counterproductive with AMD drivers.

      Having said that, AMD in their first EVER Mantle driver gets a 60% boost… And the goal is to make the API clean, so driver hacks are not necessary (especially around resource usage).

      So 60%, with the potential to go higher and a more stable, modern API, vs. hacking to death to plateau at 20%… the choice was clear, and well thought out.

      And calling the D3D driver crap when it reaches 90% of D3D’s potential when GPU-limited is going a bit far.
      I would say the reverse: Nvidia’s D3D drivers are stellar.

        • Firestarter
        • 7 years ago

        [quote<]Nvidia gives guidelines on how to use D3D so it works well with their drivers, and it’s not uncommon that those recommendations are counterproductive with AMD drivers[/quote<]

        Yeah, you may want to quote a reputable source on that.

          • Meadows
          • 7 years ago

          This is the first time I agree with you.

          • sschaem
          • 7 years ago

          I guess all those games that are Nvidia “meant to be played” titles, that are kind of slow on AMD HW at release but gain 20+% after a driver update, are a myth?

          BTW, AMD does the same. They optimize/profile/guide based on their driver/HW architecture.
          It’s not like Nvidia is going to say, “Oh, BTW, be careful: even though we handle that A-OK, on the current AMD drivers, if you free/alloc a lot of render targets, it will blow up the driver memory manager.”

          The proof is in the developer guidelines. But mainly in how many ‘Nvidia’ titles get way faster after AMD has had a chance to optimize their driver for them… and vice versa.
          Plenty of TR news items about new AMD/Nvidia drivers showing those 20%-or-more gains after a game was released with the backing of the red or green team.

          You want a third-party claim? Maybe it’s worth investigating: what was the last Nvidia-optimized game that showed a 20+% performance gain after AMD released an updated driver?

    • fhohj
    • 7 years ago

    Surprised by those D3D numbers vs. Mantle. Must be three cheers at Nvidia. If it is true, as you say, that they optimized hard in preparation for this launch (and why wouldn’t they?), they must certainly be pleased with the results and satisfied with any money spent or resources diverted from elsewhere. Hopefully AMD can push theirs still further.

      • Pwnstar
      • 7 years ago

      Er, you’re really surprised that a card that costs $200 more than its competition actually does better? I know this is nVidia but come on!

        • fhohj
        • 7 years ago

        What does the price matter in this? Since when do Nvidia’s premiums mean anything at all, performance-wise? The 290X in uber mode, which I just consider normal mode now with the third-party-cooler 290Xs, is just behind the 780 Ti. Now, those benches are at higher resolutions, and I am aware of the performance-equalization effect of high res, as well as the fact that GCN is better at the high end than the low end, and also the fact that the 290X has been designed to continue in that direction with its memory configuration. But the performance difference seems larger here than it should be with just plain old D3D. When you add in Mantle on top of that, and the benefit it is supposed to have, it’s still somewhat surprising to see the 780 Ti do as well as it did.

        This is, as Scott said, a result of focused optimization on the part of Nvidia, and it has largely paid off. The only thing left is whether AMD still has room to optimize its new driver, or if this is as high as it can go, given that they have been using this software a while, having built this thing up on BF4.

          • Pwnstar
          • 7 years ago

          That’s why I said “I know this is nVidia”.

          [quote<]since when do nvidia's premiums mean anything at all, performance-wise?[/quote<]

        • Amazing Mr. X
        • 7 years ago

        It’s a 200-dollar-more-expensive card, [i<]and[/i<] it's NVAPI-accelerated in Battlefield 4. Sorry to say it, but the 780 Ti doesn't represent a pure D3D setup here. An Iris Pro would be more accurate in that regard.

        • SCR250
        • 7 years ago

        It is NOT $200 more; it is only $90 more.
        [url<]http://pcpartpicker.com/parts/video-card/#c=153,146&sort=a7[/url<]

        780 Ti = $670 vs. $580 for the 290X; that is only $90 more.

        It is also NOT a "temporary" price on the 290Xs, as it has been going on for 3-4 months. But you already know this, and yet you continue to spout outright misinformation on the prices.

          • Pwnstar
          • 7 years ago

          It is temporary. Once the Litecoin craze is over, the price will go down near the MSRP.

            • SCR250
            • 7 years ago

            Actually, no smart buyer will buy a new 290X when the prices on eBay crater as all those former Litecoiners dump their stock of cards at nearly the same time.

            • Pwnstar
            • 7 years ago

            I said the MSRP, not the used price (that will be very low, yes).

    • PHANTOMROCKS
    • 7 years ago

    I MUST SAY THAT ATI HAS REVOLUTIONIZED THE GAMEPLAY AFTER THE DOMINANCE OF DIRECT X SINCE THE VERY BEGINNING….LUCKILY I HAVE MY 280X SUPPORTING MANTLE ON BF4 AND I MUST SAY THAT IT IS A GAME CHANGER IN EVERY ASPECT. AFTER ACTIVATING MY VERY 1ST EXPRESSION WAS wOw! CRISP N CLEAN WORLD WITH THE SHARPEST DETAILS I’VE EVER WITNESSED…COLOR SATURATION AND FPS EXPERIENCE WAS OUT OF THIS WORLD…I CONCLUDE IT’S TIME TO SAY RIP DIRECT X …….ONLY 1 AREA OF CONCERN…IT’S HEATING UP THE CPU A LOT SO LOTSA NOISE WAS WITNESSED FOR THE SAME SETTINGS THAT WERE MUCH QUIET IN DIRECT X.

      • chuckula
      • 7 years ago

      I give it an A for effort, but it didn’t score very high on the subtlety scale.

      • NeelyCam
      • 7 years ago

      [quote<]IT'S HEATING UP THE CPU A LOT SO LOTSA NOISE WAS WITNESSED FOR THE SAME SETTINGS THAT WERE MUCH QUIET IN DIRECT X.[/quote<] Hmm... this sounds interesting. Anyone have any corroborating observations?

        • fhohj
        • 7 years ago

        I know. seems strange. I wonder if he’s using Intel and AMD intentionally designed it to cause problems with their arch?

        • tanker27
        • 7 years ago

        Yes, I do. I actually made a post about it here: [url<]https://techreport.com/forums/viewtopic.php?f=3&t=91634[/url<]

        After an hour’s worth of gameplay with Mantle enabled, I observed that both the CPU and GPU were 7-10 degrees higher than over the same time frame with DX11 enabled. Thus, with Mantle, all of my fans kick up to high.

        Ninja Edit: My CPU is an i7-4770K and the vid card is an R9 280X.

          • NeelyCam
          • 7 years ago

          Hmm… I guess that would potentially limit the gains in the non-uber mode that’s temperature-limited.

      • tanker27
      • 7 years ago

      Caps lock is cruise control for the cool.

      • superjawes
      • 7 years ago

      Are you using Caps Lock so you can be heard over your system’s noise?

        • fhohj
        • 7 years ago

        hahaha very nice.

        What was previously thought, in error, to be an obnoxious use of the shout button is now revealed to be an embattled journalist reporting from a hurricane, or the top deck of an aircraft carrier.

        I would +2 this

      • maxxcool
      • 7 years ago

      Oh look, he joined TR today… paid post. Ban.

        • NeelyCam
        • 7 years ago

        In contrast with regular paid posts, which actually try to convince people of something, this pure-troll post was more entertaining to me.

      • sweatshopking
      • 7 years ago

      NICE POST! I LIKE IT!

    • Stargazer
    • 7 years ago

    [quote<]One thing we didn't expect to see was Nvidia's Direct3D driver performing so much better than AMD's. We don't often test different GPU brands in CPU-constrained scenarios, but perhaps we should. Looks like Nvidia has done quite a bit of work polishing its D3D driver for low CPU overhead.[/quote<]

    According to Ars Technica ([url<]http://arstechnica.com/gaming/2014/01/amd-almost-rolls-out-mantle-its-high-performance-alternative-to-direct3d-and-opengl/[/url<]), Direct3D 11 allows certain things to be done in parallel, but AMD has chosen not to implement these features in their drivers.

    If this is indeed the case, it would seem that at least part of the issues AMD sees with Direct3D could be mitigated by… well… actually using the features made available by Direct3D. Supposedly Nvidia does use (some of?) these features ([url<]http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review/24[/url<]), so that might explain some of what you’re seeing.

      • chuckula
      • 7 years ago

      I would add that AMD still hasn’t released a driver with OpenGL 4.4 support, even though the standard was publicly released last July.

      One of the big features published alongside OGL 4.4 is bindless texture rendering (the ARB_bindless_texture extension), which makes it much cheaper to issue a large number of draw calls… sound familiar in light of the Mantle advertising we’ve heard?
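
      For the curious, the bindless path looks roughly like this (a sketch assuming the ARB_bindless_texture entry points exposed by a loader such as GLEW; not code from the article):

          // A sketch of bindless texturing via ARB_bindless_texture
          // (C++, OpenGL). Instead of binding a texture to a slot before
          // every draw, the app fetches a 64-bit handle once and hands it
          // to the shader, trimming per-draw binding overhead.
          #include <GL/glew.h> // any loader exposing the ARB entry points

          void UseBindlessTexture(GLuint texture, GLint handleUniform)
          {
              // One-time setup per texture:
              GLuint64 handle = glGetTextureHandleARB(texture);
              glMakeTextureHandleResidentARB(handle);

              // Per draw: pass the handle; no glBindTexture required.
              glUniformHandleui64ARB(handleUniform, handle);
              // ... glDrawElements(...) etc. ...
          }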

      • Ryu Connor
      • 7 years ago

      Thanks for sharing this.

      • maxxcool
      • 7 years ago

      Nice catch in the article. Not very deep, but a good read. Not an NV fan by any means (I run a 7850 in my gaming rig)… but it appears the green team does do a much better job with the D3D API than team red.

      From the article: “We asked AMD if the techniques could be used to provide gains to existing OpenGL and Direct3D programs. For example, Direct3D 11 permits command buffer generation to be done in parallel, with a feature called deferred contexts and multithreaded rendering. However, some video driver developers—including AMD—have not implemented multithreaded rendering support, so while the API supports parallelism, the work is done serially anyway, and sometimes more slowly than if no multithreading was used. As a result of this poor driver support, some game developers have removed multithreaded rendering support from their game engines.

      In response, AMD told us that developers have tried to do it but haven’t had much success with using such techniques with existing APIs, and that it requires Mantle to do the job properly. It’s not immediately clear to us if this is because of AMD’s refusal to implement the support in its drivers, or if there really is a problem with using Direct3D in this way; there is clearly something of the chicken and the egg at work here.”
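
      The pattern the quote describes, sketched in code (illustrative only; it assumes the standard d3d11.h API, with each worker thread recording its share of the scene):

          // A sketch of D3D11 multithreaded rendering (C++11, d3d11.h):
          // each worker records its slice of the scene on its own deferred
          // context; the main thread replays the command lists in order.
          #include <d3d11.h>
          #include <thread>
          #include <vector>

          void RenderSceneMT(ID3D11Device* device,
                             ID3D11DeviceContext* immediateCtx, int workerCount)
          {
              std::vector<ID3D11CommandList*> lists(workerCount, nullptr);
              std::vector<std::thread> workers;

              for (int i = 0; i < workerCount; ++i) {
                  workers.emplace_back([&, i] {
                      ID3D11DeviceContext* ctx = nullptr;
                      if (FAILED(device->CreateDeferredContext(0, &ctx)))
                          return;
                      // ... record this thread's share of the draw calls ...
                      ctx->FinishCommandList(FALSE, &lists[i]);
                      ctx->Release();
                  });
              }
              for (auto& t : workers) t.join();

              // Only the replay is serialized; with native driver command
              // list (DCL) support, most per-draw CPU cost was already paid
              // on the worker threads.
              for (ID3D11CommandList* cl : lists) {
                  if (cl) { immediateCtx->ExecuteCommandList(cl, FALSE); cl->Release(); }
              }
          }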

      • l33t-g4m3r
      • 7 years ago

      This problem has been going on for years, and it started when AMD canned its monthly driver updates for quality reasons. AMD’s driver performance is completely dependent on game-specific tweaks, as opposed to general-purpose optimization.

      I tried using a 7970 for several weeks, and I noticed massive performance problems in any non-AMD-supported AAA title: 30 FPS in Magrunner, but 60+ in Metro: LL. Hard Reset was another game that performed poorly. Between that and a card build quality that seemed designed to hinder overclocking and not exhaust heat out of your case, I returned the card and went with a 780. Still not regretting it.

      Microsoft is supposed to be adding [url=http://blogs.windows.com/windows/b/appbuilder/archive/2013/10/14/raising-the-bar-with-direct3d.aspx<]further performance enhancements[/url<] to DX11, and if AMD continues to ignore this while Nvidia takes advantage, the only scenario we're going to see in the future is AMD forcing its users to resort to Mantle/Glide while Nvidia users can run anything without issue. This scenario didn't turn out well for 3dfx, and they had a much bigger game library. Glide's not making a comeback, folks, so AMD needs to instead start fixing/optimizing their horrible, and I mean horrible, DX11 code.

        • maxxcool
        • 7 years ago

        +1111… mantle is glide.

          • alientorni
          • 7 years ago

          You mean that the mantle is a glide?
          Glados + lie = glide?

            • Klimax
            • 7 years ago

            Considering AMD tried to lie to the Ars writer about driver command lists, more than likely.
            (It was linked somewhere in this thread. They claim that said DX feature never worked properly and programmers abandoned it. They omitted the small fact that it wasn’t working because of AMD’s drivers. It is also not true that it was abandoned, as it is used in quite a few titles, including Civilization V. And there is a whole SDK sample dedicated to it, which demonstrates it.)

            • rhammer
            • 7 years ago

            For a DX11.0 MT + CmdList demo on AMD GPUs,

            read [url<]http://www.behardware.com/articles/770-2/dossier-amd-radeon-hd-5870-5850.html[/url<]

            We tested this on a Radeon HD 4890 with a demo supplied by Microsoft in its SDK, which works on all Direct3D 10 cards. This represents a scene limited by the CPU:
            - Rendering in immediate context mode: 55 fps
            - Rendering in deferred context mode, not multithreaded: 48 fps
            - Rendering in deferred context mode, multithreaded: 78 fps

            • maxxcool
            • 7 years ago

            Nice! 🙂

            • maxxcool
            • 7 years ago

            Oh look, you destroyed my morality module.. now I can call anything a compute unit..

        • psyph3r
        • 7 years ago

        By horrible you mean indiscernible to the user? Hyperbole much? Typical Intel/Nvidia “knowledgeable guy”.

          • l33t-g4m3r
          • 7 years ago

          No. No I don't. I specifically mentioned extreme slowdowns in less popular games / indie titles. 30 FPS in an Unreal Engine game is not hyperbole, nor is it [i<]indiscernible[/i<] to the user, unless you happen to be an extreme fanboy looking at the screen with rose-colored glasses who never plays any games outside of "Gaming Evolved" titles.

          When I tried the 7970, all AMD had to offer was constant low-quality beta drivers, and they had supposedly just fixed the frame-latency issues in Borderlands. (Wow, one game. / More non-productive targeted optimization.) Borderlands may have been playable normally, but it wasn't playable using TriDef, as I have a passive 3D monitor. My 780 had no such problems playing games in 3D, aside from the normal slowdowns associated with doing 3D in graphics-overkill DX11 games, which Borderlands is not one of. Hell, if I wanted to just play Borderlands normally, my old 470 could handle that game fine, so good job, AMD, in finally fixing it; it was pretty ridiculous that a $400 card with 3GB couldn't handle it without stuttering.

          Also, speaking of these driver problems, does anyone here remember RAGE? I think the name pretty much speaks for itself, since AMD didn't have decent OpenGL drivers for months after that game's release, and they distributed a completely BROKEN driver as the "Rage Hotfix". I'd say that stuff is pretty noticeable, and these issues are the norm, rather than the exception, for AMD.

          Oh, and I used to use AMD exclusively through DX8/DX9/DX10, so when I say AMD has broken drivers, they have BROKEN drivers, as I never experienced such massive problems on their older hardware; these issues have mostly popped up under AMD's current leadership and the new GCN architecture. Makes me wonder if there is something fundamentally wrong with their D3D implementation on that hardware.

        • rhammer
        • 7 years ago

        The DirectX 11.X superset is already running on an AMD GCN-based solution, a.k.a. the Xbox One.

          • Klimax
          • 7 years ago

          What good is DX 11.1 when they don’t support the basics?

      • Bensam123
      • 7 years ago
        • Pwnstar
        • 7 years ago

        ?

      • swaaye
      • 7 years ago

      I’ve seen some developers post that DirectX 11’s parallel features are not so great and are difficult to benefit from. I think Civ 5 uses them on Nvidia cards to some degree.

    • Firestarter
    • 7 years ago

    Wow, that comparison with the 780 Ti is pretty damning; I didn’t expect the 780 Ti in D3D to still beat the 290X in Mantle. It’s still promising to see how much smoother the graph for the 4770K/Mantle is vs. the 4770K/780 Ti; it goes to show that although Nvidia definitely seems to have the better-optimized D3D driver, there’s still something to be gained by reinventing the graphics API.

    • Chaseme
    • 7 years ago

    Good writeup, but more importantly, is that Firestorm in the screenshot?! WANT!

    • swaaye
    • 7 years ago

    It is a bit disappointing to see the 780 Ti under D3D beating fancy Mantle if you aren’t on a gimpy CPU.

      • Pwnstar
      • 7 years ago

      It’s a pretty good card. Of course, you pay for that privilege.

        • swaaye
        • 7 years ago

        About the same you pay for a 290X right now thanks to the coin nonsense. At least in the USA.

          • Pwnstar
          • 7 years ago

          That is temporary and not really AMD’s fault.

            • swaaye
            • 7 years ago

            I’m curious as to how temporary it will be. Either the coin thing will need to move away from GPUs or AMD will have to risk bumping up production hoping it’s not short term. A production increase will take months to impact prices.

            • fhohj
            • 7 years ago

            Yup. Newegg and other retailers aren’t just gonna let their BS margin go. If Litecoin crashes, they aren’t going to be happy about charging less and just happily put the prices down. They will only do that if this harms their sales, drives a bunch of people away, and they see a big drop month to month, for example. Which I really hope it does and they do, bastards. Right now an opportunity to fix prices just dropped into their lap, and they aren’t going to let it go.

            And the thing is, for now (and who knows, we may be past it already) it’s just on the retailers’ end, on things they have already bought. But when the manufacturers see the money Newegg is getting, they’re going to up the price themselves. Newegg will pay more as long as they can charge more, and rather than accept a margin closer to what they used to get before all this nonsense, they will hike again (not as much, but still). Then eventually AMD will hike, and the manufacturers will pass that along to Newegg, and at that point, if Newegg can still get away with hiking again, guess what they’re gonna do?

            If Litecoin crashes and leaves a vacuum, then the prices will fall. And I sincerely hope it does. And I also hope it harms Newegg a little.

            This is money that could be going to AMD, Newegg, and the manufacturers, in increased production and orders. Instead it’s going to Newegg first, then the manufacturers, and then AMD, in that order. So yeah, the only thing that is certain is that right now it’s all just nonsense.

    • chuckula
    • 7 years ago

    I’m sorry: if a supposedly obsolete card from Nvidia, using a supposedly obsolete API like DX11, is able to beat a Mantle-enabled, highest-end AMD R9 290X in any conceivable configuration [the 4770K], then I’m calling the Mantle Miracle a bust.

    Oh, and even on Kaveri, Mantle support is *required* just to get AMD’s own GPU running faster than Nvidia’s GPU on AMD’s latest platform… and the difference between 93 FPS and 110 FPS isn’t some miraculous game-changer either (especially when you look at the frame-time variations more carefully).

    Once again: if AMD had top-notch D3D support and was getting these purported “miraculous” improvements on top of what was already a competitive solution with Nvidia, then maybe… maybe… Mantle would be worth the trouble.

    Instead, we are just seeing that AMD has given up on mainstream gaming and wants to leverage its console monopoly into some sort of market lockdown.

    Color me Krogothed.

      • NeoForever
      • 7 years ago

      I didn’t know GTX 780 Ti was an obsolete card. I guess I need to follow this graphics stuff more closely.

        • chuckula
        • 7 years ago

        I’m just agreeing with the position of the AMD fans who put in their 2 cents when Nvidia launched the card.

        You can’t get away with insulting Nvidia up and down and calling their products crap, and then turn around 4 months later and declare victory when AMD spent several million dollars rewriting a game just for its own cards, only to lose when using its highest-end GPU (which is apparently decades ahead of Nvidia, if they are to be believed).

        Go look at the frame times again: If you want to spike beyond 50 milliseconds using an A10-7850K, then you’d better get Mantle!

          • Jon1984
          • 7 years ago

          It’s just a beta sir. I think they can improve in time.

            • chuckula
            • 7 years ago

            I like the whole notion that only AMD is capable of improving over time.

            Oh, and why is it that Nvidia’s card performs so much better with AMD’s Kaveri using the obsolete D3D than AMD’s own card does? I mean, Nvidia intentionally sabotages the performance of its cards on AMD platforms.. right?

            Could it be that maybe AMD has been intentionally giving people who bought their highest end cards the short end of the stick to develop Mantle? Oh wait, but you say that the long-term benefits of Mantle are worth it? Really? Do those benchmarks in a game re-written and re-optimized by AMD’s own employees make you think that Nvidia is completely doomed?

            • Jon1984
            • 7 years ago

            I didn’t say anything about Nvidia being doomed. The GTX 780 Ti is a better card than the 290X, but it’s also a lot more expensive.

            I went from a 560 Ti to the 280X and not the 770 because of the price. I don’t really care about the driver history of AMD; my 560 Ti had crappy drivers in its last year.

            I don’t care if it’s red or green, I want price/performance. AMD has it.

            And they are doing great things, trying to innovate. Innovation is always good.

            • l33t-g4m3r
            • 7 years ago

            I wouldn’t say your card had crappy drivers; you instead had a crappy card. The x60-series cards were all gimped GPUs with cut-back capabilities, and some even had a crippled memory bus. TR should have done a better job exposing that fact, but they didn’t use any games or settings that would have exposed it, like testing Batman in DX9 or Metro with medium shaders. Previous-gen uncrippled Fermi cards like the 470 could outperform the 560s in graphically intensive games, whereas the x60s only did well under moderate workloads. Of course, today’s modern DX11 games aren’t going to run well on such a gimped chip, but you probably could have extended its life by turning off the advanced graphics options.

            • pandemonium
            • 7 years ago

            [quote<]I don't care if it's red or green, I want price/performance. AMD has it. And they are doing great things, trying to innovate. Innovation is always good.[/quote<] This. Only. Ever. This.

          • Pwnstar
          • 7 years ago

          No, you aren’t. You are just straight up trolling.

          [quote<]I'm just agreeing with the position of the AMD fans[/quote<]

            • chuckula
            • 7 years ago

            Oh rlly?

            Once upon a time Bensam123 sed:

            [quote<]Most disappointing new GPU generation launch ever. This is like getting hand me downs to replace your current clothes.[/quote<]

            You can read the full rant here: [url<]https://techreport.com/discussion/24832/nvidia-geforce-gtx-780-graphics-card-reviewed?post=733061[/url<]

            That was posted about the GTX 780... which appeared a full 5 months before the R9s launched and is actually available online for a semi-sane price right now, unlike the R9 290.

            You want to come out with a proprietary replacement for well-known graphics APIs? FINE... but the burden of proof is on YOU to show that it is mind-blowingly amazing in ways that practically defy imagination, because you are taking away the freedom of developers and users when you play the lockin game. You STILL want to do it while you also manage to outright lose to the exact same cards from your competitors that you have been insulting non-stop for the better part of a year?

            Sorry, I'm not cutting AMD any slack. You live by the sword, you die by the sword. AMD had better learn that it has a much higher bar to clear when it decides to show hubris instead of real results and to arrogantly assume that a transient blip in cashflow from some game consoles has given it a perpetual monopoly on graphics.

            • Pwnstar
            • 7 years ago

            [quote<]you are taking away the freedom of developers and users when you play the lockin game[/quote<]

            Who is locking in? Every game I've seen announced for Mantle also has a DirectX version, including the engine built especially for Mantle (Nitrous with Star Swarm).

            [quote<]You STILL want to do it while you also manage to outright lose to the exact same cards from your competitors that you have been insulting non-stop[/quote<]

            He was insulting the 780, but the 780 isn't in this test, so I'm not sure what you are talking about. But even if what you say were true, you are still seriously overreacting to one person who isn't representative of any group. You need to stop trolling.

            • psyph3r
            • 7 years ago

            Mantle is NOT proprietary. Jesus… *Facepalm*

            • chuckula
            • 7 years ago

            Oh Rlly?

            Link me to the website where AMD fully documents Mantle [b<]AND[/b<] where there is a legally binding licensing statement by AMD stating that anyone and everyone is free to implement Mantle on a royalty free basis and with a binding promise that AMD won't sue third parties... go ahead... I'm waiting....

            • NeoForever
            • 7 years ago

            Um.. no. How about YOU link a website where AMD mentions that Mantle is proprietary and/or they will sue if a third party uses it without a license.

            • chuckula
            • 7 years ago

            See Bensam123, that idiotic post is what a REAL strawman argument looks like.

            I know you get confused when you think that Strawman == any real-world fact that contradicts fantasies constructed from marketing powerpoints, but NeoF4evar just gave us an actual real-life example of what a strawman argument actually is.

            • ermo
            • 7 years ago

            Meanwhile in the real world, I would be surprised if [i<][b<]any[/b<][/i<] legal department would not err on the side of caution and assume that a technology is proprietary [i<][b<]unless[/b<][/i<] legally watertight evidence to the contrary is made available. In other words, even if I fear that good sir chuckula is deriving far too much pleasure from rubbing it in, he does have a fair point.

        • renz496
        • 7 years ago

        New card, no doubt, but some people called it obsolete because it is still the same Kepler architecture that was introduced with the GTX 680, while the 290X uses the updated GCN.

      • fhohj
      • 7 years ago

      I wonder what DX11 support looked like during the beta days?

      and real quick, could somebody tell me what’s better: replying to a guy directly, or continuing a thread and using quote if necessary? just trying to get a handle on what to do about that

      • Blink
      • 7 years ago

      Sounds like sour grapes to me. This method of ‘Fair and Balanced’ seems reminiscent of Fox News. The balance comes at the expense of overwhelming negativity involving everything AMD in order to counteract all the AMD fanboys’ sunshine and rainbows. However, bias is bias.

      Even the author points out the dramatic improvement in CPU overhead, among other positives.

        • chuckula
        • 7 years ago

        Uh…. for the last 4 months all I’ve been hearing is MANTLE MANTLE MANTLE non-stop.

        Now we see real results in the real world and the best you can do is try to talk about a cable news network instead of actually addressing the real numbers that TR actually got doing actual tests?

        Yeah. Thanks for proving my point.

        Say, did you also whine that Seattle was griping about “sour grapes” after the Superbowl since Denver really won or something? Kuz that’s what you sound like.

          • Vaughn
          • 7 years ago

          Hey chuckula,

          Do you understand what a beta driver is?

          You are not seeing the final performance of mantel currently it will improve.

          Granted, I don’t really care to get involved in the fanboy drama, but you should try being a little more open-minded, son.

            • chuckula
            • 7 years ago

            [quote<]Do you understand what a beta driver is?[/quote<]

            Yes, AMD puts them out all the time for a variety of products.

            [quote<]You are not seeing the final performance of [b<][i<]mantel[/b<][/i<] currently it will improve.[/quote<]

            I agree that we aren't seeing any performance info about [i<]mantel[/i<].. whatever that is...

            I also agree that this is the most trite refrain I've ever heard from the AMD camp: IT'S NOT FINISHED YET! YOU CAN'T JUDGE IT YET! UNFAIR!!! This is typically followed up by: AMD DIDN'T DELAY MANTLE IT'S ON TIME YOU SHILL!!

            Oh, and for some reason, I've never once seen an AMD supporter make loud and repeated posts during Intel/Nvidia reviews that it is unfair to test those products after they are publicly released and that we have to wait 6 months to be fair or some nonsense like that. Where is your post in the original Haswell review saying we have to wait for Intel's platform to come out of beta? Where is your post in the Nvidia reviews saying we have to wait for their drivers to come out of beta? Why is it that AMD and only AMD in the entire world of computing has a right to be congratulated for pushing powerpoint slides but is above any rational criticism because it's just not their fault that they can't get their promised miracles put together on time?

            • snook
            • 7 years ago

            I agree that it will improve. That’s almost certain.
            A greater folly would have been to delay it more. As badly as people speak of it now, they would have been even more vocal and speculative.

            Note also that BF4 was not developed with Mantle; rather, it was patched with it. I don’t believe that is hairsplitting, either. A game that implements Mantle from the ground up will show us a truer picture.

            I’m a rabid AMD fanboi. But my GPU upgrade money is ready. Mantle is all fine and whatnot, but the pressing matter is whether I eat my words and go green. AMD is not moving fast enough in my book either, and that is the real issue, across the board.

        • maxxcool
        • 7 years ago

        hahaha… yaaa.. nope. AMDZOR!! MANTLE RAWWWRRR DESTROY>>> NIVIDIA SUXXOR… for the last 4 $%^&ing months…

        Mantle is the messiah !!!!!!!
        Mantle is great on pizza!!!!!
        Mantle no habla pantz!!!!

        mantle…oh wait we still lost to a gpu we called outdated and slow and hot in the last release despite our own $%^&ing cards overheating and not even running stock speeds at release..

        I agree with chuck. game over. it’s ‘neat’… but it is not a win. now take your 4 months of MANTLE screaming rooftop crap we have all endured from the rabid stupid crowd and turn it sideways and insert.

      • the Lionheart
      • 7 years ago

      D3D is obsolete? Hope you know what you’re talking about… You don’t seem to know much beyond how to troll an AMD thread…

        • superjawes
        • 7 years ago

        Shills of all colors get banned. It’s just that AMD shills have been more prevalent and obnoxious in the last several months.

        • chuckula
        • 7 years ago

        Anyone who bothers to read my posts sees that I’m more than happy to be critical of any company… Intel and Nvidia included… that is clearly blowing crap and is promising way more than it can deliver.

        I’ve had it up to about 6′ above my head with MANTLE talk since October. With the exception of a very small minority of readers on this site who have actually written software in their lives, every single technical point I’ve made about Mantle… like how most of its “miracles” have been done by other APIs already… has been ignored or shouted down by people whose main technical credentials include copying & pasting drivel from powerpoint slides.

        By the way you have personally made several of those ignorant posts and I’d appreciate an apology after TR has produced hard numbers showing you were wrong.

        TR has had the VERY FIRST numbers I’ve seen anywhere on the Internet that actually compare Mantle to the real competition… both AMD’s own D3D and Nvidia… and the Miracle has already vanished. Notice how it took all of two seconds to go from “MANTLE WINS” to “Uh.. it’s just a Beta, you can’t benchmark it! Unfair! Wait 6 months, but only shills would accuse Mantle of being late!” or “Look it’s kinda faster than our poorly-written D3D drivers! ULTIMATE VICTORY!”

        If AMD wants to make a new Windows-Only AMD-Only Undocumented lockin-API for its own products, then it had better absolutely annihilate every competing graphics solution in every benchmark [b<][i<]no exceptions[/i<][/b<]. I already have drivers that implement OpenGL 4.4 running right now on my Linux system. I ALREADY HAVE bindless texture rendering for "draw calls!" What I don't have is patience for vapid powerpoint slides and phony "miracles".
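
        If you've never seen what that looks like, here's a minimal sketch, assuming a GL 4.4 driver that exposes ARB_bindless_texture (the helper names are mine, not from any shipping engine): fetch a 64-bit handle once, make it resident, and stop paying for glBindTexture churn on every draw.

        [code<]
        #include <stddef.h>
        #include <GL/glew.h> /* assumes GLEW loads ARB_bindless_texture */

        /* One-time setup per texture: no per-draw binds after this. */
        GLuint64 make_resident_handle(GLuint texture)
        {
            GLuint64 handle = glGetTextureHandleARB(texture);
            glMakeTextureHandleResidentARB(handle); /* shaders may now sample it */
            return handle;
        }

        /* Per frame: push the whole array of handles into a UBO once,
         * then issue draws with zero texture-bind calls in between. */
        void upload_handles(GLuint ubo, const GLuint64 *handles, size_t count)
        {
            glBindBuffer(GL_UNIFORM_BUFFER, ubo);
            glBufferSubData(GL_UNIFORM_BUFFER, 0,
                            (GLsizeiptr)(count * sizeof(GLuint64)), handles);
        }
        [/code<]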

          • LostCat
          • 7 years ago

          Do you have any games running on those OpenGL 4.x drivers? I have yet to see one.

          • Pwnstar
          • 7 years ago

          Jesus, dude.

          If you actually looked at those slides, you would have seen that they claimed Mantle only really helps in CPU-constrained workloads, the kind of workload you wouldn’t have with a top-of-the-line CPU like the 4770K unless it is driving quad-SLI’d GPUs.

          There is no need to set up these straw men to knock down, unless you really do have a problem with AMD.

          [quote<]What I don't have is patience for vapid powerpoint slides and phony "miracles".[/quote<]
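
          To put toy numbers on that (made up purely for illustration), a frame takes roughly as long as the slower of CPU submission and GPU rendering, so cutting CPU overhead only shortens frames that were CPU-bound to begin with:

          [code<]
          #include <stdio.h>

          /* Toy model: frame time is the slower of CPU and GPU work. */
          static double frame_ms(double cpu_ms, double gpu_ms)
          {
              return cpu_ms > gpu_ms ? cpu_ms : gpu_ms;
          }

          int main(void)
          {
              /* Slow-CPU case (think A10-7850K): CPU-bound, so halving
               * CPU time halves the frame time. */
              printf("slow CPU: D3D %.0f ms -> Mantle %.0f ms\n",
                     frame_ms(24.0, 12.0), frame_ms(12.0, 12.0));
              /* Fast-CPU case (think 4770K): GPU-bound, so the same cut
               * changes nothing. */
              printf("fast CPU: D3D %.0f ms -> Mantle %.0f ms\n",
                     frame_ms(8.0, 12.0), frame_ms(4.0, 12.0));
              return 0;
          }
          [/code<]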

        • puppetworx
        • 7 years ago

        And have you ever wondered why the thumbs up button is green but the thumbs down button is red…

        I kid.

        If you’re going to ban shills and trolls he should be at the top of the list. Every post is complaining about AMD fanbois ruining everything when the biggest problem is his ****-posting.

        And here I am feeding the troll, I guess he gets points for effort.

      • shank15217
      • 7 years ago

      Didn’t it say right in the article that Nvidia probably spent a whole lot of time improving their BF4 performance in DX11? Not to mention, the 780 Ti is the fastest GPU Nvidia makes, with significantly more GPU resources than the R9 290X.

        • nanoflower
        • 7 years ago

        So we are seeing both companies putting their best foot forward. Seems like a fair fight.

          • shank15217
          • 7 years ago

          Well, one is putting forward a beta driver while the other is polishing theirs to a spit shine. I do think Mantle has real potential; it is more than tweaks and optimizations.

      • psyph3r
      • 7 years ago

      Holy crap that is a lot of ignorance

      • maxxcool
      • 7 years ago

      -20 .. much butthurt the fanbois have…

    • brucethemoose
    • 7 years ago

    BF4 multiplayer is more CPU intensive than singleplayer, but is impossible to benchmark consistently. The gains should be even greater there.

      • Prestige Worldwide
      • 7 years ago

      It would be nice if they could give the press access to their “64-man pseudo-player” multiplayer benchmark for a better, yet controlled and comparable, test environment.

        • lilbuddhaman
        • 7 years ago

        It’s probably being run on that “impossible hardware setup that no consumer could use,” along with the mod tools / map maker for the game.

      • NeoForever
      • 7 years ago

      Wait, wasn’t this a multiplayer test?

        • brucethemoose
        • 7 years ago

        Is it? I didn’t see anything in there about it.

          • superjawes
          • 7 years ago

          [quote<]We captured performance info while playing through a two-minute-long section of BF4 three times on each config. You can click the series of buttons below to see frame-time plots from one of the test runs for each config we tested.[/quote<]

          Considering multiplayer segments are nearly impossible to replicate between tests, I imagine this, like all TR benchmarks, was taken in single player.

            • NeoForever
            • 7 years ago

            Oh, I see. The screenshot does look like it’s from singleplayer gameplay.

            I hope it’s not singleplayer. Just because multiplayer is hard to measure doesn’t make singleplayer data any more useful (I doubt many care about SP at this point, and I also doubt MP performance gains have much correlation with SP).

            There has to be a way. Here’s a suggestion:
            Why not rehearse a fake (yet unbiased and fair) 10-minute gameplay run, do it about 5 times, and aggregate the frame-time data?
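
            The number crunching is the easy part; here's a minimal sketch of the aggregation (the CSV layout and file names are assumptions, and the metrics just mimic TR's usual ones):

            [code<]
            #include <stdio.h>
            #include <stdlib.h>

            #define MAX_FRAMES 500000 /* assumed upper bound on pooled frames */

            static double times[MAX_FRAMES]; /* per-frame render times in ms */

            static int cmp(const void *a, const void *b)
            {
                double d = *(const double *)a - *(const double *)b;
                return (d > 0) - (d < 0);
            }

            int main(void)
            {
                size_t n = 0;
                double sum = 0.0, beyond50 = 0.0;
                char path[32];

                /* Pool frame times (one value per line) from five runs. */
                for (int run = 1; run <= 5; run++) {
                    snprintf(path, sizeof path, "run%d.csv", run);
                    FILE *f = fopen(path, "r");
                    if (!f) { perror(path); return 1; }
                    while (n < MAX_FRAMES && fscanf(f, "%lf", &times[n]) == 1) {
                        sum += times[n];
                        if (times[n] > 50.0)
                            beyond50 += times[n] - 50.0; /* "time beyond 50 ms" */
                        n++;
                    }
                    fclose(f);
                }
                if (n == 0) { fputs("no frames read\n", stderr); return 1; }
                qsort(times, n, sizeof times[0], cmp);
                printf("frames %zu, avg %.1f FPS, 99th pct %.1f ms, beyond 50 ms %.0f ms\n",
                       n, 1000.0 * n / sum, times[(size_t)(n * 0.99)], beyond50);
                return 0;
            }
            [/code<]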

      • Waco
      • 7 years ago

      CPU intensive for things other than rendering though, no?

    • ssidbroadcast
    • 7 years ago

    So basically, if you’re using an AMD CPU + GPU this is great… for 1 game that isn’t as good as BF2.

    Here’s hoping that other more compelling games come out supporting this.

      • brucethemoose
      • 7 years ago

      The Nitrous engine looks promising; 1 or 2 Mantle-driven massive RTS games are in the pipe right now.

        • Klimax
        • 7 years ago

        Promising? Maybe, if they invest a few dozen days in fixing their DirectX code. It looks interesting and does things I like, but the code is a disaster (both the simulation and the interaction with DX).

      • SecretMaster
      • 7 years ago

      That generalization sort of misses the whole point as to what this is about. Obviously starting out Mantle has a very limited/niche application (i.e. confined to a single game). This is a good first stepping stone towards innovating and developing something with huge upside.

        • ssidbroadcast
        • 7 years ago

        Sure. I’m mainly concerned with the small segment of gamers that would benefit from this. If enough games come out supporting this framework then I could see it taking off.

        Of course, there’s the issue of whether SteamOS will support this framework. I think it’s on AMD to make a Linux-compatible version of this API. If I were them, I’d make that top priority over optimizing the Windows version, particularly making it easy (or easier) for smaller indie developers to develop on that platform. I think AAA developers might be too focused on the PS4/XBone platforms to bother with Mantle.

          • MrJP
          • 7 years ago

          Do you not see the contradiction in stating concern that this only benefits a small segment of gamers, then suggesting that AMD prioritise the Linux version of the API over Windows?

            • ssidbroadcast
            • 7 years ago

            … you don’t think SteamOS/Debian will do well?

            • l33t-g4m3r
            • 7 years ago

            I do, but OpenGL works just fine on Linux, and that’s the standard. The real issue is that AMD has always had horrible OpenGL drivers and horrible Linux support, and the combination makes it unusable. Mantle might bypass their OpenGL problems, but it’s still a problem inherent to AMD, not to OpenGL itself. Who’s to say Mantle won’t become like their DX11 or OpenGL years down the road? AMD would be better off scrapping its existing code and rewriting the OpenGL driver from scratch than pinning its Linux hopes on Mantle. You’d still have the problem of dealing with all the existing OpenGL games, so they’re really better off putting resources into OpenGL, like they should have done from the beginning. Better yet, if that’s too difficult, just make the code open source and the community will fix their driver for them.

            The only real benefit I see from Mantle is that it will force existing standards to increase their efficiency, and when they do there won’t be any reason left to use Mantle.

            • MrJP
            • 7 years ago

            Not relative to Windows in general.

            I think you’re making the mistake of confusing the enthusiast gaming market (small) with the overall PC market that plays some games (big). Mantle will have much more impact at the low end of the mass market, among relatively casual gamers, than at the high end. SteamOS might be a player in 5 years’ time, but AMD would have to be crazy to push much of their limited development budget in that direction at this point.

            • Pwnstar
            • 7 years ago

            Oh, burn!

      • moog
      • 7 years ago

      Let’s not. Here’s hoping game developers use their time more wisely.

      • maxxcool
      • 7 years ago

      Mean, but also true. BF3, BF4 = meh. Played it at 3 LANs. Done.. glad it was free…

    • Prestige Worldwide
    • 7 years ago

    Thirdly, why test at “High” and with a single GPU only?

    Wouldn’t you be more CPU-bound with multi-GPU? I doubt anybody is realistically going to be running a 7850K and a 290X in the same rig, although I understand why this test case should be used for demonstration purposes.

      • swaaye
      • 7 years ago

      Mantle apparently doesn’t support CF yet.

        • Pwnstar
        • 7 years ago

        CrossFire would make Mantle look better but as most people don’t use it, it makes sense that TechReport would show the typical scenario.

    • Jon1984
    • 7 years ago

    Typo in the second line of last page.
    Great article as usual.

      • Pwnstar
      • 7 years ago

      Oh noes!

    • NeelyCam
    • 7 years ago

    [quote<]First look[/quote<]

    How are we supposed to compete, when the First word in the article title is “First”?

      • dpaus
      • 7 years ago

      Well done, my young Padawan!

        • NeelyCam
        • 7 years ago

        When I left you, I was but a learner; now [i<]I[/i<] am the master.

          • dpaus
          • 7 years ago

          Your identity is a secret no more, [url=http://banthapedia.wikia.com/wiki/Cam_Neely<]Darth Cam Neely[/url<]! Just as the prophecy foretold, you are twisted and evil, more GPU than CPU now. At least you are finally coming to accept that AMD products [i<]are[/i<] superior to Intel - from a certain point of view...

            • RDFSteve
            • 7 years ago

            Darth Cam Neely? Darth [i<]Sweetums[/i<]?!? Oh. My. God....
