A quick look at Mantle on AMD’s Kaveri APU

It was on the morning of Monday, November 11 that AMD showed Battlefield 4 running on its Kaveri APU for the first time. The chip was just over two months from release, and AMD promised that its integrated graphics would deliver playable performance in the new shooter. In addition, the company told of future performance improvements that Mantle, its close-to-the-metal graphics programming interface, would bring to Kaveri in BF4.

Now, three months later, Kaveri is here. After multiple delays and setbacks, so are the Mantle version of Battlefield 4 and AMD’s first Mantle-enabling Catalyst graphics driver.

Scott has already tested Mantle with AMD’s top-of-the-line Radeon R9 290X, and Geoff has already seen how Kaveri handles the Direct3D version of Battlefield 4. Over the past few days, I’ve been doing some additional testing to fill in the last piece of the puzzle: how does Mantle impact graphics performance on Kaveri?

I’ve now compiled the results, and I’m ready to share them with you guys.

Our testing methods

Instead of procuring my own Kaveri test rig, I pilfered Geoff’s. The results you’ll see on the next page were obtained with the same A8-7600 chip, Gigabyte motherboard, and DDR3-2133 AMD memory that Geoff used to run the numbers for his original Kaveri review.

In this instance, testing was done with the chip at its highest wattage preset: 65W. The A8-7600 isn’t the fastest or the highest-wattage member of the Kaveri lineup, of course—that honor belongs to the 95W A10-7850K. However, the A8-7600 happens to be the Kaveri chip I had on hand to test, and its 65W TDP makes it a better fit than the A10-7850K for all kinds of small-form-factor builds, including those meant to complement a home theater setup.

Here’s a more or less exhaustive list of the parts and drivers used:

Processor: AMD A8-7600
Motherboard: Gigabyte F2A88XN-WIFI
Platform hub: AMD A88X
Memory size: 16 GB (2 DIMMs)
Memory type: AMD Gamer Series DDR3 SDRAM at 2133 MT/s
Memory timings: 10-11-11-30 2T
Platform drivers: AMD Catalyst 13.25
Audio: Realtek ALC889 with default Windows drivers
Integrated graphics: Radeon R7 with Catalyst 14.1 beta 1.6
Solid-state drive: Crucial m4 256GB
Power supply: PC Power & Cooling Silencer 760W
OS: Windows 8.1 Pro

Thanks to AMD, Crucial, and PC Power & Cooling for helping to outfit this test rig.

Image quality settings were left at the control panel defaults, with two exceptions: surface format optimizations were disabled, and tessellation was set to “use application settings.” Vertical refresh sync (vsync) was disabled for all tests, as well.

I used Battlefield 4‘s built-in benchmarking tool to log frame times. Capturing frame times while playing isn’t precisely repeatable, so I tried to make each run as similar as possible to the others. I tested each sequence five times per rendering mode (i.e. five times for Direct3D, five times for Mantle) in order to compensate for variability. In the frame-by-frame plot at the start of the next page, you’ll see the results from a single, representative pass through the test sequence.
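To make the metrics on this and the following pages concrete, here’s a minimal Python sketch of the sort of math involved. The file name and one-frame-time-per-line format are hypothetical stand-ins, not the actual output of BF4’s benchmarking tool:

```python
import numpy as np

def summarize(frame_times_ms):
    """Boil a run's frame times (in milliseconds) down to two headline numbers."""
    ft = np.asarray(frame_times_ms, dtype=float)
    avg_fps = 1000.0 / ft.mean()   # average FPS over the whole run
    p99 = np.percentile(ft, 99)    # 99% of frames finished within this many ms
    return avg_fps, p99

# Hypothetical log file: one frame time in milliseconds per line.
times = np.loadtxt("bf4_mantle_run1.txt")
fps, p99 = summarize(times)
print(f"avg FPS: {fps:.1f}, 99th-percentile frame time: {p99:.1f} ms")
```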

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to discuss them with us.

Battlefield 4

Testing multiplayer games involves a lot of finicky variability, so I stuck with Battlefield 4‘s single-player campaign. Benchmarking was done at the start of the Singapore beach landing, which features cataclysmic environmental effects as well as explosions, gunfire, and… well, see for yourself:

I tested at 1366×768 using the “Low” detail preset. The A8-7600 did fare reasonably well at 1080p with the “Medium” preset in Geoff’s testing, but that was in a different section of the game. The Singapore beach landing takes place outdoors, and it’s quite demanding from a graphical standpoint. I used this same stretch of the game to test the Radeon R9 270 last November.

These results look an awful lot like those from Scott’s first look at Mantle. On Kaveri, just as on Scott’s R9 290X-powered system, the Mantle renderer yields a somewhat flatter plot that sits lower along the Y axis, indicating faster, more consistent performance. At the same time, the Mantle renderer produces very high latency spikes on occasion. The graph above is clipped at 160 ms, but the spikes you see reach just over 200 ms. At least one such spike occurred in each of the five test runs conducted.

Let’s see what we can learn from slicing and dicing this data into some different graphs.

The average FPS and 99th-percentile graphs highlight one of my observations from the frame-by-frame plot: the Mantle renderer does yield better performance on average than the Direct3D one. The difference isn’t very big on Kaveri, but it’s there. We’ll talk some more about subjective impressions in a minute.


The latency curve and “badness” thresholds show, in detail, the flip side of the Mantle renderer’s performance increase. On the A8-7600, occasional latency spikes cause the Mantle renderer to spend more time above 50 ms than the Direct3D one. In spite of that, Mantle yields enough of a performance and smoothness increase that it spends less time over 33.3 ms. That’s a double-edged sword if I ever saw one.

(We also have a graph showing differences above 16.7 ms, but in this case, the data are more academic than anything. The A8-7600 spent more than half of its 60-second run above the 16.7-ms threshold regardless of the renderer used.)
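For the curious, the “time spent beyond X ms” figures can be computed along these lines. This is my own illustrative sketch of the idea, not our actual tooling: each frame contributes only the portion of its render time that exceeds the threshold.

```python
def time_beyond(frame_times_ms, threshold_ms):
    """Total milliseconds accumulated past the threshold across a run.

    A 70-ms frame against a 50-ms threshold contributes 20 ms of
    "badness"; frames at or under the threshold contribute nothing.
    """
    return sum(max(0.0, ft - threshold_ms) for ft in frame_times_ms)

# Hypothetical frame times (ms), including one big Mantle-style spike:
run = [18.2, 16.9, 203.5, 17.4, 55.0]
for threshold in (50.0, 33.3, 16.7):
    print(f"time beyond {threshold} ms: {time_beyond(run, threshold):.1f} ms")
```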

All right. Now that we’ve looked at the cold, hard numbers, let’s talk about the subjective, seat-of-the-pants impact of the Mantle renderer on Kaveri—and discuss whether or not you should use it just yet.

Subjective impressions and conclusions

As I said on the previous page, Battlefield 4‘s Mantle renderer is kind of a double-edged sword. That’s true on Kaveri much as it is on the high-end, R9 290X-powered system that Scott tested. The renderer’s effect, in a nutshell, is generally smoother gameplay with occasional but quite noticeable skips in the action.

We at TR have a certain distaste for latency spikes and gameplay skips. One reason is that, after what must be hundreds of hours of inside-the-second testing, our senses have become finely attuned to them. Another reason is that hitching breaks the illusion of motion and can pull you out of the action—or, worse, get you killed in a heated multiplayer skirmish.

While playing Battlefield 4 on Kaveri, though, I found it hard to decide whether the negative of the latency spikes really outweighed the positive of the overall performance gain. Part of the problem is that, when you run a demanding game like BF4 on integrated graphics, you’re often right on the threshold of playability. That’s definitely true with BF4‘s Direct3D renderer on Kaveri. Even at 1366×768 using the “Low” detail preset, the game is choppy, the controls respond with a slight lag, and aiming at a moving target can be challenging. Small as it may be, Mantle’s overall performance gain (which comes in the form of lower and more even frame times) has a very palpable impact on playability and responsiveness. The difference is large enough that, after testing the D3D renderer, I had to take a little time to adjust to the Mantle renderer’s extra smoothness in order to keep the test runs similar.

And yet those latency spikes are there—and they’re noticeable, and they mar the experience somewhat. It’s a shame. I’d hoped that the problem would be fixed in the latest BF4 patch, whose change log mentions a fix for a “memory system leak that could cause stalls” in the Mantle renderer. However, the numbers you saw on the previous page were run with the patch installed, and they’re just as spiky as those I got before. Clearly, more work needs to be done, either by DICE or AMD, to make Mantle less of a mixed blessing.

For now, I’d still recommend that Kaveri users give the Mantle renderer a shot. There’s no harm in trying. Ill-placed latency spikes might get you killed in multiplayer, but at the end of the day, I think the Direct3D renderer’s general choppiness and lag might actually lead to more lost battles.

Comments closed
    • maxxcool
    • 9 years ago

    What makes that comment funny… is the number of people now googling astroglide for the 1st time..

    • Forge
    • 9 years ago

    GameGlide?

    No way. Not epic enough.
    AstroGLide!

    • NeoForever
    • 9 years ago

    LOL you’d be surprised.

    • NeoForever
    • 9 years ago

    I wouldn’t say “designed benefit” as much as this test doesn’t include the scenario that most people care about.

    • CeeGee
    • 9 years ago

    Indubitably.

    • stdRaichu
    • 9 years ago

    It’s a perfectly cromulent form of embiggenising.

    • CeeGee
    • 9 years ago

    Thanks for replying Cyril.

    I think I’d look for a cheaper solution for a budget CPU though.

    • UnfriendlyFire
    • 9 years ago

    Sweet idea. I’ll just put my life savings into shorting AMD’s stock and buying Intel’s and Nvidia’s.

    Oh, and I’ll buy some high-end PC components before Intel turns their $1000 “extreme edition” chips into $3000 chips and Nvidia starts charging $1500 for the GTX 780/880.

    It’ll take a few years for the FTC to do something about it while fighting Intel’s and Nvidia’s lawyers.

    • interbet
    • 9 years ago

    “Testing multiplayer games involves a lot of finicky variability, so I stuck with Battlefield 4’s single-player campaign.”

    Aren’t you then ignoring the primary designed benefit of using it then?

    • sweatshopking
    • 9 years ago

    SOUNDS LIKE EXCUSES DEFENDING A LOSER. AMD SHOULD SHUT DOWN AND GIVE THE MONEY BACK TO THE SHAREHOLDERS.

    • sweatshopking
    • 9 years ago

    no you don’t. that can’t be true. nobody’s that dumb.

    • UnfriendlyFire
    • 9 years ago

    They have PhysX.

    And I know two people that bought Nividia GPUs just because of PhysX, in 2013.

    • ermo
    • 9 years ago

    “… betterer …”

    I approve.

    • derFunkenstein
    • 9 years ago

    At higher resolutions in D3D, Anand still shows a difference in average framerate between the two APUs. It’s not super-huge at any resolution, but there’s still something to be gained.

    • Rza79
    • 9 years ago

    Just because you’re fabless doesn’t mean you don’t have to design and validate your silicon. Did you think TSMC and GF design the silicon for their customers?

    • MadManOriginal
    • 9 years ago

    Not any more. All three are fabless now; ARM and nVidia always have been, and AMD divested its fabs a few years ago, although I think that was after 2010, which is the year the data that started this thread covers.

    • the
    • 9 years ago

    The thing with R&D is that you have to put into perspective just how many markets a company is doing R&D in.

    nVidia is mainly graphics, plus some wireless technology, with their first in-house CPU being released this year.
    AMD has graphics, three CPU lines (two x86 + ARM), chipsets, and server interconnects (from SeaMicro).

    When it comes down to it, AMD is producing far more silicon parts than nVidia and that takes a lot of R&D to bring to market.

    Compare this to ARM. They don’t design or manufacture entire chips themselves; they let their license holders do that for them. Thus the multi-million-dollar masks used in lithography do not fall under ARM’s budget, even though the results get fed back to ARM.

    • UnfriendlyFire
    • 9 years ago

    ARM develops the base architectures and allows its licensees (such as Samsung, Apple, AMD, Nvidia, etc.) either to use the base models or to get assistance developing customized architectures.

    ARM essentially licenses their architecture and leaves a large amount of the modifications and fabbing to their customers.

    • HisDivineOrder
    • 9 years ago

    Clearly, nVidia must create their own API to compensate for this huge advantage. Fortunately, just like when they went to make two cards work in one system, they already have a name on hand from when they bought up 3dfx, who tanked their company around the whole concept of focusing on their own API to the exclusion of the two other APIs that were relevant at the time (DirectX and OpenGL).

    nVidia can call their API Glide. Perhaps GameGlide, keeping consistency with Gamestream and Gameworks.

    Then they can email AMD with just two letters:

    GG

    • HisDivineOrder
    • 9 years ago

    Except that “most game developers who target the PC market” are going to want to target the entire PC market instead of just GCN-capable users, since the latter consists entirely of 7xxx-series or later discrete GPUs and only Kaveri APUs. That’s a small subset of an audience that’s already limited. Before you go quoting AMD’s percentages of the market, remember that a great many of those include non-GCN products: 6xxx series or older, rebadges of 6xxx series or older, Llano, Trinity, and Richland. That’s essentially every APU AMD has ever released except the one in limited release right now, plus any GPU older than a couple of years.

    So it’s a subset (GCN) of a subset (AMD users) of a subset (PC gamers) that can use Mantle.

    Until either Intel or nVidia announces support for Mantle, this will continue to be true. Given that nVidia and Intel gain a lot from not supporting Mantle (AMD gets stuck supporting three APIs, since OpenGL is important again now because of SteamOS, Android, iOS, OSX, and Linux, instead of the two that nVidia and Intel have to support), it seems likely this will not change in the future, either. That’s assuming AMD ever makes good on its vaguely stated acceptance of turning Mantle into a standard in the future.

    Not that they have made it a standard. Not that they have even taken a single step toward making it one.

    • Rza79
    • 9 years ago

    A huge chunk of AMD’s & nVidia’s R&D goes to silicon development!
    ARM doesn’t have to deal with that, so it’s not really directly comparable.

    • jihadjoe
    • 9 years ago

    ARM spends a piddly ~88M a quarter on R&D. A much smaller company, on a much smaller budget, is a “tech giant” because of its success.

    ARMH Research and Development Expense (Quarterly) Benchmarks:

    Intel: 2.826B
    ARM Holdings: 88.68M
    Advanced Micro Devices: 293.00M

    Sauce: http://ycharts.com/companies/ARMH/r_and_d_expense

    Combine ARM and Nvidia’s budgets, and you still won’t even have half of AMD’s. There is far more to this story than just budgets, like top-level decisions on where their architecture should be headed, for instance. Ironically, if AMD had spent much less on CPU development the last 5 years and stayed with a process-shrunk K10, they’d be better off than they are now and possibly further ahead in the APU/GPU game.

    • swaaye
    • 9 years ago

    I dunno. The memory bandwidth bottleneck is extreme at this point…

    • UnfriendlyFire
    • 9 years ago

    Nvidia is focused on GPUs. It hasn’t done well in the mobile sector, but at least its laptop/desktop/server market is secure.

    AMD has to deal with Mr. Intel and Mr. ARM’s buddies.

    If AMD had cut back their R&D to Nvidia’s budget while facing those two tech giants, it would’ve been like Kodak trying to play catch-up with digital cameras and then getting smashed in the face by smartphones.

    • LostCat
    • 9 years ago

    You mentioned HSA. I just don’t see it helping in this scenario.

    • the Lionheart
    • 9 years ago

    Where did I say that? There is a significant reduction in CPU overhead with Mantle. That’s a great advantage, but the final stable product isn’t yet ready. Porting games to Mantle is easier and faster than porting them to D3D, which makes Mantle an adequate API for most game developers who target the PC market.

    • LostCat
    • 9 years ago

    So your solution to a weak GPU is to tax the GPU more? wot

    • the Lionheart
    • 9 years ago

    There are two points that can be concluded here that couldn’t be in the first Mantle review, where the 290X used wasn’t set to Turbo mode and its recorded performance wasn’t fully reliable.

    Mantle still needs some work. It’s not finished yet, and it will likely take months before Mantle matures into a usable product.

    The second thing here is where x86 performance matters and where it should matter. Is Mantle meant only to enable the CPU to serve the GPU better, or is it a new programming model intended to utilize the latent HSA power of AMD’s APUs in the future?

    • chuckula
    • 9 years ago

    APUs aren’t very good for mining, so no, Newegg won’t have an insane markup since the supplies are plentiful and lots of vendors have them available.

    • LostCat
    • 9 years ago

    The A10-7850K is only about $10 higher than MSRP, so no. The low end is relatively unaffected.

    • SCR250
    • 9 years ago

    mc6809e, I like your ID. That was one impressive CPU for its day and a vast improvement over the 6502 and the 6800.

    http://en.wikipedia.org/wiki/Motorola_6809

    I ran FLEX OS on it at first but then went with Microware’s OS9. I even co-developed a Pascal compiler for the OS9 OS; it sold through Microware and was also available for the Radio Shack Color Computer. Ah, those were the days.

    • jihadjoe
    • 9 years ago

    Guess that means draw calls weigh in at 10% of the graphics pipeline.

    • Rza79
    • 9 years ago

    Apparently, according to this review (http://www.techspot.com/review/781-amd-a10-7850k-graphics-performance/) it doesn’t. Too bad.

    • Rza79
    • 9 years ago

    PC Perspective has tested hybrid crossfire and it seems fixed.

    http://www.pcper.com/reviews/Graphics-Cards/AMD-A8-7600-Kaveri-APU-and-R7-250-Dual-Graphics-Testing-Pacing-Fixed

    I’m interested to know if Kaveri’s XDMA unit is enabled. Any chance that TR would test if Kaveri works together with an R7 260?

    • Pwnstar
    • 9 years ago

    And when it does come out on Newegg, the price will be marked up 400%.

    • LostCat
    • 9 years ago

    I play once or twice a month heh.

    • Bensam123
    • 9 years ago

    Have you played since the 13th? There were quite a few people stopping in my channel asking about crashes. The game changes with each patch.

    • maxxcool
    • 9 years ago

    Beautifully… when real x64 software finally came out, Intel did the needful.

    • maxxcool
    • 9 years ago

    Mantle will **NEVER** be used for the PS4 or Xbone.

    • LostCat
    • 9 years ago

    Everyone? Only time the game crashed on me was the first month I bought it.

    • Bensam123
    • 9 years ago

    Stutters are gone (for me), but the crashing still persists… But apparently everyone is crashing now, not just Mantle users, so it may not be a Mantle problem. These problems also seem to pertain more to BF4 being a buggy PoS than to Mantle… but since we only have one data point, it could be either/or.

    • sschaem
    • 9 years ago

    AMD 10-K:
    “Our research and development expenses for 2010, 2009, and 2008 were approximately $1.4 billion, $1.7 billion and $1.8 billion.”

    Nvidia 10-K:
    “During fiscal years 2011, 2010 and 2009, we incurred research and development expense of $848.8 million, $908.9 million and $855.9 million, respectively.”

    If AMD had had the same R&D budget as Nvidia during just those 3 years, AMD would have ZERO debt and close to half a billion in cash.

    Also note that during those 3 years, AMD paid over 1 billion in interest expenses, while Nvidia MADE money on its cash and investments. That 2-billion delta is all in how the company executes.

    Nvidia is run by a professional and a visionary. AMD? Monkeys?

    AMD still spends close to $400 million a quarter on R&D, and that’s without counting the millions in management expenses. I think that was close to 200 million in 2010.

    AMD is one of those companies that grease their VPs well, and it’s not uncommon to see million-dollar option bonuses, even when the company was tanking toward bankruptcy from bad decisions by the same people.

    So absolutely not: AMD has a HUGE, HUGE R&D budget for what it delivers.

    • LostCat
    • 9 years ago

    …definitely the players.

    • xeridea
    • 9 years ago

    The APUs dynamically vary the clockspeeds on the CPU and GPU depending on the load on each to balance performance within power envelope, so less CPU overhead could lead to higher GPU clocks on the APU.

    • kamikaziechameleon
    • 9 years ago

    Well put, but the strange thing is that sometimes the main reason AMD exists seems to be to prevent complete market monopolies by Intel and Nvidia.

    That might be their greatest function.

    • derFunkenstein
    • 9 years ago

    It looks like the 7850K has 1/3 more CUs. Probably all the other parts of the iGPU are identical, so I’d be hoping for 20-25% better performance. Probably not enough to crank up the details more, but enough to make the game feel more comfortable while playing.

    • derFunkenstein
    • 9 years ago

    are we talking about Mantle and the drivers, or the players? /zing!

    • chuckula
    • 9 years ago

    It should be out later this month or in March. TR apparently got the higher-end A10-7850K but hasn’t posted full benchmarks yet.

    • mikato
    • 9 years ago

    Can you buy this APU? I haven’t seen it on newegg yet.

    • Cataclysm_ZA
    • 9 years ago

    Yeah, I know that the scenes have changed but that’s still very close. I would have expected 40fps with those kinds of settings because it’s not too demanding a game when you have everything turned off. Do you have any results for the same scene but in HD with low or medium settings? I can’t find anything on the net that’s using the same scene or with Mantle mixed in and the few places that do have comparisons with the A8-7600 don’t all use the same settings. I couldn’t even find figures for the A10-5800K or A10-6800K in BF4.

    Also, as mentioned elsewhere in the comments, it would be nice to see how CPU usage changes under Mantle. PC Per recorded some big drops in utilisation which may have had an effect on performance.

    • mikato
    • 9 years ago

    And don’t forget about the BF4 problems. Lots of immaturity to go around before we can really see the potential.

    • Xenolith
    • 9 years ago

    Ditto. Tired of being teased with this chip. MUST BUY NOW!

    • Cyril
    • 9 years ago

    Again, the beach landing bit is pretty demanding, likely more so than the indoor skyscraper one. I think Geoff and I both picked whatever image settings got us just over the playability threshold in the sequence we tested, but Geoff’s sequence just happened to be a little easier on Kaveri’s IGP. (He’s away on vacation right now, so I’ll ask him when he gets back.)

    • Cataclysm_ZA
    • 9 years ago

    Those graphs look like you’re playing a section of the game that’s GPU-limited (most of it is GPU-limited), but when I look at the A8-7600 review, you’re getting almost exactly the same FPS and frame times as Geoff’s testing at much lower settings.

    Something is definitely wonky with that setup there, or it’s something that DICE has changed. Has anyone asked why the results are so close?

    • brucethemoose
    • 9 years ago

    They paid OEMs to shun Athlons, so pretty darn well!

    • LostCat
    • 9 years ago

    Intel shunned AMD64 once…how’d that work out for them?

    • ratborg
    • 9 years ago

    Any release date on the A8-7600? Newegg and Amazon don’t have it listed.

    • brucethemoose
    • 9 years ago

    With a sizable amount of eDRAM cache backing a unified memory controller, an APU should be faster than an equivalent CPU/GPU (as you see in consoles). It’s also cheaper to cool a single chip than it is to cool 2.

    There are also some significant gains if you take advantage of a unified memory architecture.

    Unfortunately, AMD chose to put a low-end GPU on the same die as a mid-range CPU and feed it with a typical 128-bit DDR3 bus with no cache… And Intel shunned their heterogeneous compute effort for whatever reason, so getting that adopted will be tough.

    • enzia35
    • 9 years ago

    Now I wonder how my AXP-100 would stand up to your scrutiny. But the specs of the Noctua are impressive. The sink itself is barely an inch tall!

    • ronch
    • 9 years ago

    10x the draw calls for 10% better performance. Brilliant!!

    • ronch
    • 9 years ago

    I think combining CPUs and graphics in one piece of silicon is beginning to look like a bad idea. First, are the benefits so big? Second, how about power? Separating them would give you more flexibility in terms of cooling and raise each chip portion’s possible TDP ceiling. Third, how about yields? Fourth, AMD has found out that the process tech for CPUs and GPUs really doesn’t mix well. CPUs need fast switching times first and density second, while GPUs need density first, with switching speed second.

    Look at Kaveri. AMD couldn’t clock the CPU cores high enough because they gotta watch the APU’s total TDP, and the process tech is apparently optimized for GPUs and compromises those Steamroller cores. And the fact that AMD doesn’t own its own fabs anymore only makes it more difficult and expensive to tune the manufacturing process for Kaveri, not to mention GF’s propensity to stab AMD with those one-time charges while not really giving AMD the process tech they need to make their products very competitive (we know how GF’s 32nm could be better and how their 28nm SHP is killing Kaveri as well).

    • ronch
    • 9 years ago

    And even comparing Intel’s and AMD’s budgets, we don’t know how much of their respective budgets are/were allocated to CPU development. AMD’s graphics division no doubt takes a big chunk of that $1.2B figure but Intel also does lots of R&D on so many other things, not least of which is their fabrication process development. I wonder how much they have spent on their current processors.

    • ronch
    • 9 years ago

    AMD has had many interesting ideas over the years but most of them weren’t executed as well as they could’ve been. Examples include Bulldozer, GCN – specifically the HD7000 series (frame pacing issues late in its product life cycle), their long road to realizing a proper APU that really revolutionizes computing (even now Kaveri isn’t the blockbuster some had hoped APUs would be), inferior SATA, AHCI and USB implementations, inferior DX drivers, etc. AMD is proud to be the only company in the world that makes everything from BIG core CPUs to leading edge graphics to chipsets and everything, but it would seem that they’re the very definition of ‘Jack of all trades, master of none.’

    Still, honestly, I’m amazed they’ve done all these things considering their minuscule R&D budget compared to other big tech companies. This is rocket science, kids. And AMD is doing all of it with a modest budget. You gotta give them big points for that.

    • chuckula
    • 9 years ago

    “My challenge to AMD is, EXECUTE!”

    DON’T SAY THAT AROUND AMD EMPLOYEES!

    • kamikaziechameleon
    • 9 years ago

    Sounds like AMD APU’s are just bad. Mantle seems to have interesting results here but ultimately inconclusive.

    I’m confident Mantle isn’t polished yet and has tons of room for improvement; I’m not confident that AMD will deliver on its potential, though, seeing that Nvidia seems to be able to negate Mantle’s gains with DX drivers (something AMD would tell you is impossible).

    My challenge to AMD is, EXECUTE! Improve your DX drivers! You’ve never, as a company, consistently been on top of the darn driver side of your business. When you have your DX driver business sorted, THEN you can execute on Mantle. Blaming two decades of bad driver development on DX does not reflect well on you.

    • PixelArmy
    • 9 years ago

    That was the conclusion of the Kaveri review:

    “APUs occupy this awkward middle ground for so-called casual gamers who want something better than an Intel IGP but not as good as a halfway-decent graphics card. As Jerry Seinfeld would say, ‘who are these people?’ Seriously, I’ve never met one.”

    I feel like the investment in Mantle doesn’t match its returns (maybe eventually, but currently no), since it targets a niche anyway (GCN users who are highly CPU-bound). This is just a niche of a niche…

    • LostCat
    • 9 years ago

    And yet still not the same thing.

    • Cyril
    • 9 years ago

    I haven’t had time to look at temps, but it does seem pretty quiet. The loudest thing in that test build was the power supply.

    • jdaven
    • 9 years ago

    Here’s one from today’s shortbread:

    http://www.techspot.com/review/781-amd-a10-7850k-graphics-performance/

    • maxxcool
    • 9 years ago

    oh my… +1

    • HisDivineOrder
    • 9 years ago

    This is DICE and Battlefield 4 we’re talking about. They were already pretty bad about fixing performance issues, but when you’re talking about BF4, it goes to a whole new level. Then toss in Mantle as a different API, and AMD with their own bundle of issues related to a brand-new API…

    I don’t think they’re ever going to fix the issues completely before all parties involved move on to the next big thing.

    • chuckula
    • 9 years ago

    My best “trolls” are the ones that quote the story accurately.

    • derFunkenstein
    • 9 years ago

    No time for your facts, man!

    • chuckula
    • 9 years ago

    OVER 7% MEGA OPTIMIZATION PERFORMANCE FORCE!

    • Deanjo
    • 9 years ago

    That’s almost 8% faster!!!!!!! 😉 (don’t you just love percentages for marketing)

    • sschaem
    • 9 years ago

    Have you contacted DICE to get to the bottom of this?
    (You noticing zero changes between pre- and post-patch, that is.)

    Have you noticed any power draw changes between the two modes?

    • stdRaichu
    • 9 years ago

    Agreed, Noctua are pretty much the Rolls-Royce of CPU coolers; they’re very nicely engineered and presented. There’s a betterer review here: http://www.silentpcreview.com/Noctua_NH-L9i

    • chuckula
    • 9 years ago

    “I’d hoped that the problem would be fixed in the latest BF4 patch, whose change log mentions a fix for a ‘memory system leak that could cause stalls’ in the Mantle renderer. However, the numbers you saw on the previous page were run with the patch installed, and they’re just as spiky as those I got before. Clearly, more work needs to be done, either by DICE or AMD, to make Mantle less of a mixed blessing.” — TFA

    • chuckula
    • 9 years ago

    In an APU setup, if CPU usage gets reduced by any appreciable amount, the IGP gets to use the extra available power headroom to boost performance.

    So if CPU usage dropped appreciably, you just saw the effects of the IGP having more power in those results.

    • smilingcrow
    • 9 years ago

    Regarding OpenOffice/LibreOffice, I personally wish they’d focus on basic features, compatibility, and everyday performance before things like HSA.

    • CeeGee
    • 9 years ago

    Thanks, I’ll check it out.

    Edit: Holy hell, it’s expensive!

    • stmok
    • 9 years ago

    To give those some context in relation to R&D budgets…

    In 2013…
    Intel’s R&D Budget: US$10.1 billion
    AMD’s R&D Budget: US$1.2 billion

    Compared to other big companies…

    Samsung’s R&D Budget: US$10.4 billion
    Microsoft’s R&D Budget: US$9.8 billion
    Google’s R&D Budget: US$6.8 billion
    IBM’s R&D Budget: US$6.3 billion

    • Bensam123
    • 9 years ago

    According to the latest BF4 patch notes, the spikes may be fixed with Mantle. I’ll be streaming it shortly and will report back if they’re completely gone.

    Feb 13 PC Game Update Notes:
    - General stability improvements
    - Fix for an issue where spawning into, or switching to, a gunner seat in an IFV/MBT sometimes could cause the game to crash
    - Fix for missing sound in Team/Squad Deathmatch
    - Fix for an issue in the Defuse game mode, where a bomb carrier would be permanently spotted
    - Decreased the rate at which the kill card would incorrectly display 0 health, while the enemy was clearly alive
    - Fixed an invisible wall that was incorrectly positioned in one of the fallen concrete pipes on Zavod 311
    - Fix for an issue where bullet impact sounds weren’t properly matching the actual number of impacts
    - Fix for an issue where the “Draw” message would not display on-screen once a Conquest round ended with both teams having the same amount of tickets
    - Fix for an issue where long IDs wouldn’t scroll on dog tags
    - Fix for missing grass physics in terrain

    Mantle:
    - Fix for a crash that would occur when activating full screen in portrait mode
    - Fix for stuttering that could appear during video sequences on multi-GPU PCs
    - Fix for a memory system leak that could cause stalls, which would result in frames taking longer to process
    - Reduced the amount of stalls that occurred when running with high graphics setting that require more GPU memory than is currently available
    - Fixed screenshots on multi-GPU PCs

    • Narishma
    • 9 years ago

    That’s not a given. The GPU on the APU is pretty crappy.

    • Klimax
    • 9 years ago

    Could people please stop spouting nonsense using the number of draw calls as if it had any relevancy or were the be-all and end-all? The same, BTW, goes for the number of batches. It is so engine- and context-specific, and usually not indicative of anything at all, that it is simply useless.

    BTW: a very high number of draw calls more than likely indicates a less efficient engine.

    Anyway, there is no limitation in DX11, but a reliance on drivers to do the job, which some drivers don’t. Like AMD’s drivers. And Mantle is nothing more than a vendor-specific fix for their drivers, forcing developers to do the job for them. All things you attribute as possible only with Mantle are possible with DirectX, and thus there is zero reason for Mantle’s existence.

    Sorry, but your post is wrong, takes many claims by AMD for granted without proper evidence, and misses too many things.

    Frankly, Mantle offers nothing but lock-in, and I hope it disappears sooner rather than later.

    BTW: “Minimum Core i3+Mantle+Radeon or Core i7+DX11+NVidia required” won’t ever be correct. For NVidia cards it will be Core i3 all the same, regardless of the existence of the idiocy known as Mantle.

    • Maff
    • 9 years ago

    One thing that you forget in your post, however, is the fact that this isn’t a PC-only gaming world. Lots of developers bring out their stuff primarily for consoles first (from which the code could apparently easily be ported), and consoles don’t suffer these same problems.

    Then again, it’s questionable whether that (relatively) tiny 8-core could actually do more with “Mantle” (read: closer-to-the-metal programming) in consoles than a high-end PC CPU without Mantle could on the PC.

    As always, time will tell!

    • dragosmp
    • 9 years ago

    Not really, just more GPU power to make the APU CPU-limited. Mantle relieves CPU bottlenecks, the A8 doesn’t seem to suffer from this.

    …I would like to see Mantle on a Jaguar-like core.

    • LostCat
    • 9 years ago

    It’ll still probably game…up at the TV or something. Just won’t be my main box.

    • stdRaichu
    • 9 years ago

    It looks the same as the one I have on my Haswell Xeon (in a U-NAS 800 case, so I needed a low-profile cooler): a Noctua NH-L9i. On my setup at least, with the PWM smart fan, it’s only actually running for about 30s every minute and is blissfully silent at ~700rpm. It’s not recommended for heavy use, though; the store I bought it from warns against using it with anything above a 65W-TDP processor. My Xeon supposedly has a ceiling of 80W, but I haven’t done anything with it to use anywhere near that amount of power yet.

    Review here: http://www.dragonsteelmods.com/noctua-nh-l9i-low-profile-cpu-cooler-review/

    • LostCat
    • 9 years ago

    Hybrid Crossfire is relevant to D3D/OGL, not Mantle.

    Mantle has async queueing, so can push different workloads to different GPUs.

    • brucethemoose
    • 9 years ago

    There are hybrid Crossfire reviews everywhere… And that makes sense, but out of genuine curiosity, why do you need a bigger GPU if you aren’t gaming? The HD 4600 actually handles MadVR surprisingly well, and I think it can handle MuseMage (I haven’t tried yet, but Llano ran it well enough).

    • LostCat
    • 9 years ago

    I will add that having an Intel GPU that would see no use whatsoever has never appealed to me.

    When I’m done using this as my main gaming box, it’ll still have a useful GPU in it…dedicated or no.

    • Meadows
    • 9 years ago

    You beat me to it.

    • brucethemoose
    • 9 years ago

    They need a higher-power desktop APU. Maybe 3 Steamroller modules + a 7870-class GPU with quad-channel DDR3, or dual-channel with eDRAM cache.

    • LostCat
    • 9 years ago

    Since we have no examples of its hybrid GPU support yet, it can’t really be over or under rated.

    • brucethemoose
    • 9 years ago

    Same with an R7 260X!

    Hybrid GPU support is over-rated IMO, you can get a more powerful 7850 or better + FX 6300 for the price of a 7850k + equivalent GPU.

    • ronch
    • 9 years ago

    So what you’re saying is Mantle really can’t make a very significant impact on AMD’s CPU/GPU/APU bottom line in today’s world. Can’t say I disagree with you. I don’t really expect it to save AMD. It’s nice, that’s for sure, and it helps the industry move forward, but what AMD really needs to do is create silicon that can compete without these new gimmicks that rely on developers to give their products a modest boost, à la 3DNow!.

    • maxxcool
    • 9 years ago

    They support OpenCL… which also includes HSA but is not directly ‘optimized’ FOR HSA. Benefit, yes, but if HSA did not exist, it would still get a boost from the GCN core’s OpenCL compliance.

    • ronch
    • 9 years ago

    Intel’s memory controllers are really quite a lot more efficient than AMD’s. Remember the Core 2 vs. Phenom II days? Core 2’s microarchitecture is obviously more advanced than the K10 core, but its memory controller could easily feed the beast well enough that it would still be faster clock for clock than a Phenom II with its much-bragged-about IMC, and it was only with the Nehalem generation that we saw what putting the memory controller on-die would do for Intel’s basic Core 2/Nehalem architecture. It did, shall we say, really open the gates wide.

    Intel’s been making memory controllers for the longest time, going back to the Pentium era when they started creating chipsets for their CPUs. They’ve had a lot of time (and money) to pour into making their memory controllers more and more efficient. And although AMD has had enough time (since 2003) to perfect their on-die memory controllers, they don’t seem to have taken theirs to the level of Intel’s, although with their R&D budget that’s not surprising.

    • LostCat
    • 9 years ago

    Mantle’s hybrid GPU support, TrueAudio, HSA… why not?

    • mc6809e
    • 9 years ago

    I’ve read that, too, that Mantle is better for improving performance in CPU limited situations, but I don’t think that’s entirely correct.

    What Mantle REALLY improves is the number of draw calls per second that can be made. That’s not quite the same thing as being CPU-bound. It’s entirely possible for a game to be CPU-bound but not draw-call-bound if, for example, the CPU spends much of its time on the game’s AI. Mantle isn’t going to help that much, except to free up somewhat more time for the AI.

    I expect Mantle to help mostly with games that have very dynamic and complex scenes with many objects.

    DX11 has a hard time handling such scenes, even with a Core-i7. As a result, programmers simplify things, preferring to include a few large textures over many smaller ones and using fewer large objects and a more static environment over many small objects and a dynamic environment.

    But this creates a bit of a chicken-or-the-egg problem for Mantle.

    Any game made with Mantle in mind is going to suffer on a DX11 system and any game written for DX11 isn’t going to fully take advantage of Mantle. As a result games will still be crafted for the more limited DX11 and the FPS increase will be modest.

    This is different from the problem of making a game that scales with the power of the installed GPU. The same scene can be used across many different GPUs with different numbers of CUs or shaders, etc. The FPS will differ for each card, but the programmers don’t have to do as much work to see that the game works with many different GPUs. Simply rendering fewer pixels is one way to maintain a minimum number of FPS. That’s relatively easy.

    This strategy doesn’t work with Mantle. It’s hard to create a rendering engine that will properly scale draw calls.

    Consider: if you have a very dynamic scene, for example, what should stop moving on a slower DX11 system? How does a programmer allow a system to gracefully degrade? Remove objects? Which ones? Rendering fewer pixels doesn’t work. Removing objects or making them static is going to seriously affect game play.

    That is Mantle’s problem. Because the design of the game itself is subject to the draw calls/second limitation, any game written to work with DX11 isn’t going to thoroughly take advantage of Mantle without making the DX11 version very slow.

    If Mantle isn’t adopted by Nvidia, we might see something like:

    “Minimum Core i3+Mantle+Radeon or Core i7+DX11+NVidia required”

    I do hope Mantle succeeds. There are probably good game ideas out there that were shelved because DX11 just couldn’t handle them.

    • brucethemoose
    • 9 years ago

    Don’t OpenOffice and GIMP support HSA, or are those just planned features?

    I think GCN 1.1+ GPUs are technically HSA compliant. You obviously don’t get the benefits of unified memory, but you also get a much more powerful GPU.

    • GeForce6200
    • 9 years ago

    Interesting to see Mantle on some of the lower Kaveri chips. I do think it would be nice, though, to see how the top-end A10-7850K performs in that test setup. In Hothardware’s simple review of Mantle and the A10-7850K, they benched it at 36.16 avg FPS in BF4 (Ultra, 1080p) on its R7 graphics. Hopefully TR can test the flagship APU when Mantle matures some.

    • UnfriendlyFire
    • 9 years ago

    HSA? (missing in action until at least late 2014, or 2015. Gaming support unknown.)

    • UnfriendlyFire
    • 9 years ago

    And even if you OC the GPU, there’s the memory bandwidth.

    From what I’ve read, Kaveri’s memory controller makes less efficient use of bandwidth than Intel’s memory controllers from SB onward, with the RAM set at the same speeds.

    But the A8 is only a little over $100, so what can you expect?

    • Cyril
    • 9 years ago

    Like I said in the conclusion, the numbers in this article are from today’s update.

    • brucethemoose
    • 9 years ago

    It just dawned on me that the 7850k is $180!

    An FX-6300 + cheap dGPU and slower DDR3 would be vastly more powerful for maybe $10-$20 more. A Haswell i3 + dGPU would be more power-efficient for around that.

    Sure, there’s the “I need an HTPC CPU with more GPU power than Haswell for EXACTLY $180” niche, but outside of that, why on earth would anyone buy a desktop APU for gaming at this price?

    • Voldenuit
    • 9 years ago

    100% CPU in both scenarios?

    This is at 768p and Low settings, after all.

    • Airmantharp
    • 9 years ago

    The update for BF4 released earlier today is supposed to fix that issue amongst others; obviously too late for this review, of course, but it’s pretty clear that TR isn’t close to being done with Mantle, so no big worry. We should also start seeing reports from users here or in the forums as people get a chance to see if stuff’s improved.

    • ahmedabdo
    • 9 years ago

    Good point.

    • Jigar
    • 9 years ago

    9X …

    • Herem
    • 9 years ago

    The current stalls look fairly significant. When the stalls are (hopefully) resolved, will the average fps improve noticeably?

    • maxxcool
    • 9 years ago

    10X the draw calls!!1! 1! 1! 11

    • CeeGee
    • 9 years ago

    That low-profile cooler looks cute. How does it perform, Cyril?

    • kalelovil
    • 9 years ago

    nvm

    • Wildchild
    • 9 years ago

    I want to remain optimistic and just give AMD the benefit of the doubt since the Mantle driver is still in beta, but I don’t know. We’ll see.

    • Takeshi7
    • 9 years ago

    I would have liked to see CPU usage charts to see how well Mantle reduces it.

    • ronch
    • 9 years ago

    Mantle is a nice effort from AMD, but I’ve read that Mantle is better at improving CPU-limited scenarios than GPU-limited ones. Now, the A8-7600 is really an underpowered APU for serious gaming, CPU-wise and GPU-wise, but I think the CPU portion could actually be capable of running BF4, more so than the GPU. So in this case, while Mantle does help the CPU a bit, the GPU just ain’t willing to give it a little more push.

    • Voldenuit
    • 9 years ago

    Obviously, Mantle *crushes* D3D. I mean, those 2 fps there could be the difference between life and death, don’tcha know?
