AMD's take and the future of Mantle
From the day Microsoft announced DirectX 12, AMD has made it clear that it's fully behind the new API. Its message is simple: Direct3D 12 "supports and celebrates" the push toward lower-level abstraction that AMD began with Mantle last year—but D3D12 won't be ready right away, and in the meantime, developers can use Mantle to get some of the same gains out of AMD hardware.
At GDC, AMD's Corpus elaborated a little bit on that message. He told me Direct3D 12's arrival won't spell the end of Mantle. D3D12 doesn't get quite as close to the metal of AMD's Graphics Core Next GPUs as Mantle does, he claimed, and Mantle "will do some things faster." Mantle may also be quicker to take advantage of new hardware, since AMD will be able to update the API independently without waiting on Microsoft to release a new version of Direct3D. Finally, AMD is talking to developers about bringing Mantle to Linux, where it would have no competition from Microsoft.
Corpus was adamant that developers will see value in adopting Mantle even today, with D3D12 on the horizon and no explicit support for Linux or future AMD GPUs. Because the API is similar to D3D12, it will give developers a "big head start," he said, and we may see D3D12 launch titles "very early" as a result.
Naturally, AMD can motivate developers in other ways, too. While Corpus didn't address that side of the equation, VG247 reported last year that Battlefield 4's inclusion in the Gaming Evolved program—and its support for Mantle—involved a $5-8 million payment from AMD. That figure was never confirmed officially, but it's no secret that AMD's and Nvidia's developer relations and co-marketing programs often involve financial incentives. Supporting Mantle may be a lucrative proposition for some game studios.
Nvidia seems to see lower-level graphics APIs as less of a panacea than AMD does. Tamasi told us that, while such APIs are "great," they're "not the only answer" because they're "not necessarily great for everyone." This statement goes back to what we said earlier about developers having manual control over things currently handled by the API and driver, such as GPU memory management. Engine programming gurus like DICE's Johan Andersson and Epic's Tim Sweeney might be perfectly happy to manage resources manually, but according to Tamasi, "a lot of folks wouldn't."
Nvidia also believes there's still some untapped potential for efficiency improvements and overhead reduction in D3D11. Since Mantle's debut six months ago, Nvidia has "redoubled" its efforts to curb CPU overhead, improve multi-core scaling, and use shader caching to address stuttering problems. (Tamasi freely admitted that Mantle's release spurred the initiative. "AMD and Mantle should get credit for revitalizing . . . and getting people fired up," he said.)
We saw first-hand the results of Nvidia's work two months ago. In a CPU-limited Battlefield 4 test, Nvidia's Direct3D driver clearly performed better than AMD's. That optimization work is still ongoing:
The performance data above, supplied to us by Nvidia, shows performance improvements over successive GeForce driver releases in Oxide Games' Star Swarm stress test. That test also supports Mantle, which helps put Nvidia's D3D11 optimizations in context. Tamasi conceded that AMD's Mantle version "still has less slow frames" and that D3D11 "still [has] some limiting factors," but he reiterated his overarching point, which is that it's possible to "do a much better job" with D3D11. Even going by our own, perhaps less flattering numbers, we'd say that's a fair assessment.
What about OpenGL?
Direct3D 12 holds a lot of promise, but it won't help folks running Linux-based operating systems like SteamOS. Game developers seeking to write native ports for those OSes will need to use OpenGL and extract whatever optimizations they can from that API.
Tamasi told us Nvidia, AMD, and Intel have all been "working hard" to help developers achieve "super high efficiency" with OpenGL. In a GDC session entitled "Approaching Zero Driver Overhead in OpenGL," folks from all three companies demonstrated best practices for OpenGL optimization. The techniques they outlined can be exploited on today's hardware with the current version of the API and existing drivers, and they can yield large performance gains.
During the session, we saw performance numbers obtained with APItest, an open-source benchmark developed by Blizzard's Patrick Doane. In Nvidia's words, APItest is "designed to showcase and compare between different approaches to common problems encountered in real-time rendering applications." The results showed order-of-magnitude performance differences between a "naive" approach, which Tamasi described as "writing OpenGL like Direct3D," and the best practices advocated by GPU manufacturers.
In the graph above, the baseline "naive" approach is the top bar, while the last bar is what Tamasi describes as "writing good code." The difference amounts to an 18X speedup. Obviously, this is an isolated test case rather than a comprehensive, game-like scenario. But I'd say the difference is large enough to make at least some OpenGL developers rethink the way they optimize their code.
The important takeaway here, I think, is that despite their involvement with D3D12, the big three makers of PC graphics hardware—AMD, Intel, and Nvidia—all have a stake in keeping OpenGL competitive. That's good news for Linux users, and it's especially good news for those of us hoping to see SteamOS become a real competitor to Windows in the realm of PC gaming.
Of course, SteamOS isn't due out until the summer, and the first D3D12 titles aren't expected until the 2015 holiday season. We'll have to revisit these matters in the future, when we can see for ourselves how next-gen games really perform on the two platforms.