A closer look at DirectX 12

At the Game Developers Conference this year, Microsoft pulled back the curtain on Direct3D 12, the first major update to its graphics APIs since 2009. The company announced some pretty big changes, including support for a lower level of abstraction and compatibility with not just Windows, but also Windows Phone and the Xbox One. This will be the first version of Direct3D to unify graphics programming across all of Microsoft’s gaming platforms. It may also be the first version of Direct3D to eke significant performance gains out of current hardware.

I already covered some of those developments in a couple of news posts during the GDC frenzy. Now that I’m back home with all my notes from various sessions and meetings with Microsoft and GPU vendors, I can go into a little more detail. In this article, I’ll try to explore Direct3D 12’s inception, the key ways in which it differs from Direct3D 11, and what AMD and Nvidia think about it.

First, though, let’s straighten something out. We and others have occasionally referred to Microsoft’s new API as DirectX 12, but what premiered at GDC was technically Direct3D 12, the graphics component of DirectX 12. As Microsoft’s Matt Sandy wrote on the official DirectX blog, DirectX 12 will also encompass “other technologies” that “may be previewed at a later date,” including “cutting-edge tools for developers.” That’s probably why we haven’t heard anything about, say, the next version of DirectCompute. The news so far has centered solely on Direct3D.

Now that that’s all cleared up, let’s take a closer look at Microsoft’s new graphics API—starting with a little history.

Direct3D 12’s inception

Given the way Microsoft has presented Direct3D 12, it’s hard not to draw parallels with AMD’s Mantle API. Mantle was introduced last September, and much like D3D12, it provides a lower level of abstraction that lets developers write code closer to the metal. The result, at least in theory, is lower CPU overhead and better overall performance—the same perks Microsoft promises for D3D12.

The question, then, almost asks itself. Did AMD’s work on Mantle motivate Microsoft to introduce a lower-level graphics API?

When I spoke to AMD people a few hours after the D3D12 reveal, I got a strong sense that that wasn’t the case—and that it was developers, not AMD, who had spearheaded the push for a lower-level graphics API on Windows. Indeed, at the keynote, Microsoft’s Development Manager for Graphics, Anuj Gosalia, made no mention of Mantle. He stated that “engineers at Microsoft and GPU manufacturers have been working at this for some time,” and he added that D3D12 was “designed closely with game developers.”

I then talked with Ritche Corpus, AMD’s Software Alliances and Developer Relations Director. Corpus told me that AMD shared its work on Mantle with Microsoft “from day one” and that parts of Direct3D 12 are “very similar” to AMD’s API. I asked if D3D12’s development had begun before Mantle’s. Corpus’ answer: “Not that we know.” Corpus explained that, when AMD was developing Mantle, it received no feedback from game developers that would suggest AMD was wasting its time because a similar project was underway at Microsoft. I recalled that, at AMD’s APU13 event in November 2013, EA DICE’s Johan Andersson expressed a desire to use Mantle “everywhere and on everything.” Those are perhaps not the words I would have used if I had known D3D12 was right around the corner.

The day after the D3D12 keynote, I got on the phone with Tony Tamasi, Nvidia’s Senior VP of Content and Technology. Tamasi painted a rather different picture than Corpus. He told me D3D12 had been in the works for “more than three years” (longer than Mantle) and that “everyone” had been involved in its development. As he pointed out, people from AMD, Nvidia, Intel, and even Qualcomm stood on stage at the D3D12 reveal keynote. Those four companies’ logos are also featured prominently on the current landing page for the official DirectX blog.

Tamasi went on to note that, since development cycles for new GPUs span “many years,” there was “no possible way” Microsoft could have slapped together a new API within six months of Mantle’s public debut.

Seen from that angle, it does seem quite far-fetched that Microsoft could have sprung a new graphics API on a major GPU vendor without giving it years to prepare—or, for that matter, requesting its input throughout the development process. AMD is hardly a bit player in the GPU market, and its silicon powers Microsoft’s own Xbox One console, which will be one of the platforms supporting D3D12 next year. I’m not sure what Microsoft would stand to gain by keeping AMD out of the loop.

I think it’s entirely possible AMD has known about D3D12 from the beginning, that it pushed ahead with Mantle anyhow in order to secure a temporary advantage over the competition, and that it’s now seeking to embellish its part in D3D12’s creation. It’s equally possible AMD was entirely forthright with us, and that Nvidia is simply trying to downplay the extent of its competitor’s influence.

In any event, as we’re about to see, D3D12 indeed shares some notable similarities with Mantle. More importantly, it delivers something developers seem to have wanted for some time: a multi-vendor Windows graphics API that offers a console-like level of abstraction. Whatever part AMD played, it seems developers and gamers alike stand to benefit.

Direct3D 12 on existing hardware

Direct3D 12’s lower abstraction level takes the form of a new programming model, and that programming model will be supported on a broad swath of current hardware. AMD has pledged support for all of its current offerings based on the Graphics Core Next architecture, while Nvidia did the same for all of its DirectX 11-class chips (spanning the Fermi, Kepler, and Maxwell architectures). Intel, meanwhile, pledged support for the integrated graphics in its existing Haswell processors (a.k.a. 4th-generation Core).

Beyond the PC, Direct3D 12’s new programming model will also be exploitable on the Xbox One console and on Windows Phone handsets. Microsoft hasn’t yet said which versions of Windows on the desktop will support Direct3D 12, but it dropped some hints. During the Q&A following the reveal keynote, Microsoft’s Gosalia ruled out Windows XP support, but he declined to give a categorical answer about Windows 7.

Sandy’s blog post identified four key changes that D3D12 makes to the Direct3D programming model: pipeline state objects, command lists, bundles, and descriptor heaps and tables. These are all about lowering the abstraction level and giving developers better control over the hardware. Those of you well-acquainted with Mantle may find that some of those constructs have a familiar ring to them. That familiarity may be partly due to AMD’s role (whether direct or indirect) in Direct3D 12’s development, but I suspect it’s explainable to a large degree by the fact that both D3D12 and Mantle are low-level graphics APIs closely tailored to the behavior of modern GPUs.

For instance, Mantle’s monolithic pipelines roll the graphics pipeline into a single object. Direct3D 12 groups the graphics pipeline into “pipeline state objects,” or PSOs. Here’s how those PSOs work, according to Sandy:

Direct3D 12 . . . [unifies] much of the pipeline state into immutable pipeline state objects (PSOs), which are finalized on creation. This allows hardware and drivers to immediately convert the PSO into whatever hardware native instructions and state are required to execute GPU work. Which PSO is in use can still be changed dynamically, but to do so the hardware only needs to copy the minimal amount of pre-computed state directly to the hardware registers, rather than computing the hardware state on the fly. This means significantly reduced draw call overhead, and many more draw calls per frame.

Gosalia says PSOs “wrap very efficiently to actual GPU hardware.” That’s in contrast to Direct3D 11’s higher-level representation of the graphics pipeline, which induces higher overhead. “For example,” Sandy explains, “many GPUs combine pixel shader and output merger state into a single hardware representation, but because the Direct3D 11 API allows these to be set separately, the driver cannot resolve things until it knows the state is finalized, which isn’t until draw time.” D3D11’s approach increases overhead and limits the number of draw calls that can be issued per frame.
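
To make that a bit more concrete, here's a rough sketch of what building a PSO might look like in application code. It's a minimal illustration using the preliminary structure and function names Microsoft has shown (D3D12_GRAPHICS_PIPELINE_STATE_DESC, CreateGraphicsPipelineState), with the root signature, compiled shader bytecode, and input layout taken as given; the final API could still change.

```cpp
// Hedged sketch: baking shaders, blend, rasterizer, and depth/stencil state
// into one immutable pipeline state object. Error handling is omitted, and
// the root signature, shader bytecode, and input layout are assumed to exist.
#include <d3d12.h>
#include <wrl/client.h>
#include <climits>

using Microsoft::WRL::ComPtr;

ComPtr<ID3D12PipelineState> CreateOpaquePso(ID3D12Device* device,
                                            ID3D12RootSignature* rootSig,
                                            D3D12_SHADER_BYTECODE vs,
                                            D3D12_SHADER_BYTECODE ps,
                                            D3D12_INPUT_LAYOUT_DESC inputLayout)
{
    D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
    desc.pRootSignature              = rootSig;
    desc.VS                          = vs;
    desc.PS                          = ps;
    desc.InputLayout                 = inputLayout;
    desc.RasterizerState.FillMode    = D3D12_FILL_MODE_SOLID;
    desc.RasterizerState.CullMode    = D3D12_CULL_MODE_BACK;
    desc.BlendState.RenderTarget[0].RenderTargetWriteMask =
        D3D12_COLOR_WRITE_ENABLE_ALL;
    desc.DepthStencilState.DepthEnable   = FALSE;
    desc.DepthStencilState.StencilEnable = FALSE;
    desc.SampleMask                  = UINT_MAX;
    desc.PrimitiveTopologyType       = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
    desc.NumRenderTargets            = 1;
    desc.RTVFormats[0]               = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count            = 1;

    // The driver can translate all of this into native GPU state right here,
    // at creation time, rather than resolving it at draw time as D3D11 must.
    ComPtr<ID3D12PipelineState> pso;
    device->CreateGraphicsPipelineState(&desc, IID_PPV_ARGS(&pso));
    return pso;
}
```

Switching pipelines mid-frame then boils down to a single SetPipelineState call on a command list, with the pre-computed state copied straight to the hardware registers.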

D3D12 also replaces D3D11’s context-based execution model with something called command lists, which sound pretty comparable to Mantle’s command buffers. Here’s Sandy’s explanation again:

Direct3D 12 introduces a new model for work submission based on command lists that contain the entirety of information needed to execute a particular workload on the GPU. Each new command list contains information such as which PSO to use, what texture and buffer resources are needed, and the arguments to all draw calls. Because each command list is self-contained and inherits no state, the driver can pre-compute all necessary GPU commands up-front and in a free-threaded manner. The only serial process necessary is the final submission of command lists to the GPU via the command queue, which is a highly efficient process.
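
Here's a rough idea of what that looks like from the application's side, again as a hedged sketch using preliminary API names rather than final code. The list is recorded with everything it needs, and only the submission step touches the shared queue; a real renderer would also bind render targets, viewports, and resources, and fence the submission.

```cpp
// Hedged sketch: recording a self-contained command list (possibly on a
// worker thread) and submitting it through the command queue. The device,
// command allocator, queue, PSO, and root signature are assumed to exist;
// fencing, render-target setup, and error handling are omitted.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void RecordAndSubmit(ID3D12Device* device,
                     ID3D12CommandAllocator* allocator,
                     ID3D12CommandQueue* queue,
                     ID3D12PipelineState* pso,
                     ID3D12RootSignature* rootSig)
{
    // The list inherits no state; everything it needs is recorded into it.
    ComPtr<ID3D12GraphicsCommandList> list;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              allocator, pso, IID_PPV_ARGS(&list));
    list->SetGraphicsRootSignature(rootSig);
    list->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    list->DrawInstanced(3, 1, 0, 0);   // one triangle, purely illustrative
    list->Close();                     // recording can happen on any thread

    // Submission to the queue is the only serialized step.
    ID3D12CommandList* lists[] = { list.Get() };
    queue->ExecuteCommandLists(1, lists);
}
```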

D3D12 takes things a step further with a construct called bundles, which lets developers re-use commands in order to further reduce driver overhead:

In addition to command lists, Direct3D 12 also introduces a second level of work pre-computation, bundles. Unlike command lists which are completely self-contained and typically constructed, submitted once, and discarded, bundles provide a form of state inheritance which permits reuse. For example, if a game wants to draw two character models with different textures, one approach is to record a command list with two sets of identical draw calls. But another approach is to “record” one bundle that draws a single character model, then “play back” the bundle twice on the command list using different resources. In the latter case, the driver only has to compute the appropriate instructions once, and creating the command list essentially amounts to two low-cost function calls.
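
Sandy's two-character example might translate into something like the sketch below. The names again follow the preliminary D3D12 API, and the bundle allocator, parent command list, PSO, root signature, and descriptor handles are all assumed to exist; the exact state-inheritance rules weren't public at press time, so treat this as illustrative.

```cpp
// Hedged sketch: record a bundle once, then "play it back" twice on the
// parent command list with a different descriptor table bound each time.
// Assumes the allocator was created with D3D12_COMMAND_LIST_TYPE_BUNDLE,
// the parent list already has its root signature and descriptor heaps set,
// and both root signatures match so the bindings are inherited.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void DrawTwoCharacters(ID3D12Device* device,
                       ID3D12CommandAllocator* bundleAllocator,
                       ID3D12GraphicsCommandList* commandList,
                       ID3D12PipelineState* pso,
                       ID3D12RootSignature* rootSig,
                       D3D12_GPU_DESCRIPTOR_HANDLE textureA,
                       D3D12_GPU_DESCRIPTOR_HANDLE textureB,
                       UINT characterVertexCount)
{
    // Record the shared draw work once; the driver computes the GPU
    // instructions for it a single time.
    ComPtr<ID3D12GraphicsCommandList> bundle;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_BUNDLE,
                              bundleAllocator, pso, IID_PPV_ARGS(&bundle));
    bundle->SetGraphicsRootSignature(rootSig);
    bundle->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    bundle->DrawInstanced(characterVertexCount, 1, 0, 0);
    bundle->Close();

    // Replaying it is just two cheap calls with different resources bound.
    commandList->SetGraphicsRootDescriptorTable(0, textureA);
    commandList->ExecuteBundle(bundle.Get());
    commandList->SetGraphicsRootDescriptorTable(0, textureB);
    commandList->ExecuteBundle(bundle.Get());
}
```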

Thanks to all of this shader and pipeline state caching, Gosalia says there should be “no more compiles in the middle of gameplay.” Draw-time shader compilation can cause hitches (or frame latency spikes) during gameplay—and developers bemoaned it at AMD’s APU13 event last year. Dan Baker of Oxide Games says that, in D3D12, we “shouldn’t have frame hitches caused by driver at all.”

Both Mantle and D3D12 introduce new ways to bind resources to the graphics pipeline, as well. D3D12’s model involves descriptor heaps, which don’t sound all that dissimilar to Mantle’s descriptor sets. Sandy explains:

Instead of requiring standalone resource views and explicit mapping to slots, Direct3D 12 provides a descriptor heap into which games create their various resource views. This provides a mechanism for the GPU to directly write the hardware-native resource description (descriptor) to memory up-front. To declare which resources are to be used by the pipeline for a particular draw call, games specify one or more descriptor tables which represent sub-ranges of the full descriptor heap. As the descriptor heap has already been populated with the appropriate hardware-specific descriptor data, changing descriptor tables is an extremely low-cost operation.
In addition to the improved performance offered by descriptor heaps and tables, Direct3D 12 also allows resources to be dynamically indexed in shaders, providing unprecedented flexibility and unlocking new rendering techniques. As an example, modern deferred rendering engines typically encode a material or object identifier of some kind to the intermediate g-buffer. In Direct3D 11, these engines must be careful to avoid using too many materials, as including too many in one g-buffer can significantly slow down the final render pass. With dynamically indexable resources, a scene with a thousand materials can be finalized just as quickly as one with only ten.

According to Sandy, descriptor heaps “match modern hardware and significantly improve performance.” The D3D11 approach is “highly abstracted and convenient,” he says, but it requires games to issue additional draw calls when resources need to be changed, which leads to higher overhead.
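
Here's a minimal sketch of that binding model, using preliminary API names: the application creates a shader-visible descriptor heap, writes a hardware-native descriptor into it once, and later just points a descriptor table at the heap. The heap size and root-parameter layout below are arbitrary, and the root signature is assumed to expose a descriptor table at parameter 0.

```cpp
// Hedged sketch: one shader-visible descriptor heap, one SRV written into
// it, and a descriptor table bound at draw time. Error handling omitted.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID3D12DescriptorHeap> BuildHeap(ID3D12Device* device,
                                       ID3D12Resource* texture)
{
    D3D12_DESCRIPTOR_HEAP_DESC heapDesc = {};
    heapDesc.Type           = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
    heapDesc.NumDescriptors = 256;   // arbitrary: one slot per view the frame needs
    heapDesc.Flags          = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;

    ComPtr<ID3D12DescriptorHeap> heap;
    device->CreateDescriptorHeap(&heapDesc, IID_PPV_ARGS(&heap));

    // Write the hardware-native descriptor for the texture into slot 0 up front.
    device->CreateShaderResourceView(texture, nullptr,
                                     heap->GetCPUDescriptorHandleForHeapStart());
    return heap;
}

// At draw time, changing what the pipeline sees is a cheap table swap rather
// than a rebinding of individual resource views.
void BindTable(ID3D12GraphicsCommandList* list, ID3D12DescriptorHeap* heap)
{
    ID3D12DescriptorHeap* heaps[] = { heap };
    list->SetDescriptorHeaps(1, heaps);
    list->SetGraphicsRootDescriptorTable(0, heap->GetGPUDescriptorHandleForHeapStart());
}
```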

According to Yuri Shtil, Senior Infrastructure Architect at Nvidia, the introduction of descriptor heaps transfers the responsibility of managing resources in memory from the driver to the application. In other words, it’s up to the developer to manage memory. This arrangement is again reminiscent of Mantle. AMD hailed Mantle’s manual memory allocation as a major improvement and as a means to make more efficient use of GPU memory.

Now, of course, lower-level abstraction of that sort can be a double-edged sword. Because developers get more control over what happens on the hardware, the driver and API have less work to do—but there are also more opportunities for things to go wrong. Here’s an example from Nvidia’s Tamasi:

Think about memory management, for example. The way DirectX 11 works is, if you want to allocate a texture, before you can use it, the driver basically pre-validates that that memory is resident on the GPU. So, there’s work going on in the driver and on the CPU to validate that that memory is resident. In a world where the developer controls memory allocation, they will already know whether they’ve allocated or de-allocated that memory. There’s no check that has to happen. Now, if the developer screws up and tries to render from a texture that isn’t resident, it’s gonna break, right? But because they have control of that, there’s no validation step that will need to take place in the driver, and so you save that CPU work.
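
As a rough illustration of what that responsibility looks like in code, here's a hedged sketch using names from the preliminary D3D12 API: the application carves out its own GPU heap, places a texture in it at an offset of its choosing, and explicitly decides when that memory is resident. Nothing below is validated per draw; rendering from the texture while the heap is evicted is the application's bug to avoid.

```cpp
// Hedged sketch of application-managed GPU memory: an explicit heap, a
// placed resource, and explicit residency control. Fencing, uploads, and
// error handling are omitted; textureDesc is assumed to describe a texture.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void ManageTextureMemory(ID3D12Device* device,
                         const D3D12_RESOURCE_DESC& textureDesc)
{
    // The application allocates a 64MB slab of GPU memory itself.
    D3D12_HEAP_DESC heapDesc = {};
    heapDesc.SizeInBytes     = 64 * 1024 * 1024;
    heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
    heapDesc.Flags           = D3D12_HEAP_FLAG_ALLOW_ONLY_NON_RT_DS_TEXTURES;

    ComPtr<ID3D12Heap> heap;
    device->CreateHeap(&heapDesc, IID_PPV_ARGS(&heap));

    // The texture lives wherever the app puts it; the driver no longer
    // tracks or re-validates that placement for every draw.
    ComPtr<ID3D12Resource> texture;
    device->CreatePlacedResource(heap.Get(), 0, &textureDesc,
                                 D3D12_RESOURCE_STATE_COPY_DEST, nullptr,
                                 IID_PPV_ARGS(&texture));

    // The app, not the driver, decides when the heap is resident.
    ID3D12Pageable* pageables[] = { heap.Get() };
    device->Evict(1, pageables);         // e.g. when a level streams out
    device->MakeResident(1, pageables);  // ...and back in before reuse
}
```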

Developers who would rather not deal with such risks won’t have to. According to Max McMullen, Microsoft’s Development Lead for Windows Graphics, D3D12 will give developers the option to use the more abstracted programming model from D3D11. “Every single algorithm that you can build on 11 right now, you can build on 12,” he said.


But getting one’s hands dirty with the lower-level programming model should pay some very real dividends. One of the demos shown at GDC was a custom D3D12 version of Futuremark’s 3DMark running on a quad-core Intel processor. The D3D12 demo used 50% less CPU time than the D3D11 version, and instead of dumping most of the workload on one CPU core, it spread the load fairly evenly across all four cores. The screenshots above show the differences in CPU utilization at the top left.

Oxide’s Baker mentioned other potential upsides to D3D12, including a “vast reduction in driver complexity” and “generally more responsive games . . . even at a lower frame rate.” D3D12 may not just extract additional performance and rendering complexity out of today’s hardware. It may also make games feel better in subtle but important ways. Also, if what Baker said about driver robustness checks out, PC gamers may waste less time waiting on game-specific driver fixes and optimizations from GPU manufacturers.

Direct3D 12 on future hardware

Developers will be able to exploit D3D12’s new programming model on a wide range of existing graphics processors. In addition to that programming model, however, D3D12 will introduce some new rendering features that will require new GPUs. Microsoft teased a couple of those rendering features at GDC: new blend modes and conservative rasterization.

I’m not entirely clear on what the new blend modes are supposed to do, but as I understand it, conservative rasterization will help with object culling (that is, skipping the rendering of geometry that can’t be seen, such as objects hidden behind a wall) as well as hit detection.

Nvidia’s Tamasi told us D3D12 includes a “whole bunch more” new rendering features beyond those Microsoft has already discussed. I expect we’ll hear more about them when Microsoft delivers the preview release of the new API to developers, which is scheduled to happen later this year.

Which next-gen GPUs will support those new features? We don’t know yet. Since the first D3D12 titles are expected in the 2015 holiday season, I would be surprised if Nvidia and AMD didn’t have new hardware with complete D3D12 support ready by then. Then again, neither AMD nor Nvidia has announced anything of the sort yet. We’ll have to wait and see what those companies have to say when Microsoft reveals D3D12’s full array of new rendering features.

AMD’s take and the future of Mantle

From the day Microsoft announced DirectX 12, AMD has made it clear that it’s fully behind the new API. Its message is simple: Direct3D 12 “supports and celebrates” the push toward lower-level abstraction that AMD began with Mantle last year—but D3D12 won’t be ready right away, and in the meantime, developers can use Mantle in order to get some of the same gains out of AMD hardware.

At GDC, AMD’s Corpus elaborated a little bit on that message. He told me Direct3D 12’s arrival won’t spell the end of Mantle. D3D12 doesn’t get quite as close to the metal of AMD’s Graphics Core Next GPUs as Mantle does, he claimed, and Mantle “will do some things faster.” Mantle may also be quicker to take advantage of new hardware, since AMD will be able to update the API independently without waiting on Microsoft to release a new version of Direct3D. Finally, AMD is talking to developers about bringing Mantle to Linux, where it would have no competition from Microsoft.

Corpus was adamant that developers will see value in adopting Mantle even today, with D3D12 on the horizon and no explicit support for Linux or future AMD GPUs. Because the API is similar to D3D12, it will give developers a “big head start,” he said, and we may see D3D12 launch titles “very early” as a result.

Naturally, AMD can motivate developers in other ways, too. While Corpus didn’t address that side of the equation, VG247 reported last year that Battlefield 4’s inclusion in the Gaming Evolved program—and its support for Mantle—involved a $5-8 million payment from AMD. That figure was never confirmed officially, but it’s no secret AMD’s and Nvidia’s developer relations and co-marketing programs often involve financial incentives. Supporting Mantle may be a lucrative proposition for some game studios.

Nvidia’s take

Nvidia seems to see lower-level graphics APIs as less of a panacea than AMD does. Tamasi told us that, while such APIs are “great,” they’re “not the only answer” because they’re “not necessarily great for everyone.” This statement goes back to what we said earlier about developers having manual control over things currently handled by the API and driver, such as GPU memory management. Engine programming gurus like DICE’s Johan Andersson and Epic’s Tim Sweeney might be perfectly happy to manage resources manually, but according to Tamasi, “a lot of folks wouldn’t.”

Nvidia also believes there’s still some untapped potential for efficiency improvements and overhead reduction in D3D11. Since Mantle’s debut six months ago, Nvidia has “redoubled” its efforts to curb CPU overhead, improve multi-core scaling, and use shader caching to address stuttering problems. (Tamasi freely admitted that Mantle’s release spurred the initiative. “AMD and Mantle should get credit for revitalizing . . . and getting people fired up,” he said.)

We saw first-hand the results of Nvidia’s work two months ago. In a CPU-limited Battlefield 4 test, Nvidia’s Direct3D driver clearly performed better than AMD’s. That optimization work is still ongoing:

Source: Nvidia.

The performance data above, supplied to us by Nvidia, shows performance improvements over successive GeForce driver releases in Oxide Games’ Star Swarm stress test. That test also supports Mantle, which helps put Nvidia’s D3D11 optimizations in context. Tamasi conceded AMD’s Mantle version “still has less slow frames” and that D3D11 “still [has] some limiting factors,” but he reiterated his overarching point, which is that it’s possible to “do a much better job” with D3D11. Even going by our own, perhaps less flattering numbers, we’d say that’s a fair assessment.

What about OpenGL?

Direct3D 12 holds a lot of promise, but it won’t help folks running Linux-based operating systems like SteamOS. Game developers seeking to write native ports for those OSes will need to use OpenGL, and they will have to extract whatever optimizations they can out of that API.

Tamasi told us Nvidia, AMD, and Intel have all been “working hard” to help developers achieve “super high efficiency” with OpenGL. In a GDC session entitled “Approaching Zero Driver Overhead in OpenGL,” folks from all three companies demonstrated best practices for OpenGL optimizations. The techniques they outlined can be exploited with the current version of the API on today’s hardware with existing drivers, and they can result in large performance gains.

During the session, we saw performance numbers obtained with APItest, an open-source benchmark developed by Blizzard’s Patrick Doane. In Nvidia’s words, APItest is “designed to showcase and compare between different approaches to common problems encountered in real-time rendering applications.” The results showed order-of-magnitude performance differences between a “naive” approach, which Tamasi described as “writing OpenGL like Direct3D,” and the best practices advocated by GPU manufacturers.
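
To give a flavor of what those best practices look like, here's a hedged sketch combining two of the techniques discussed around that session: a persistently mapped buffer (ARB_buffer_storage, core in OpenGL 4.4) and batched submission through glMultiDrawElementsIndirect. It assumes an OpenGL 4.4 context, a function loader such as GLEW, and an already-bound vertex array object with index data, and it leaves out the fencing a real renderer would need.

```cpp
// Hedged sketch: keep an indirect-command buffer persistently mapped and
// submit a whole batch of draws with one API call, instead of paying
// per-draw validation for thousands of individual glDrawElements calls.
#include <GL/glew.h>
#include <cstring>

// Layout mandated by the GL spec for indirect draw commands.
struct DrawElementsIndirectCommand {
    GLuint count, instanceCount, firstIndex, baseVertex, baseInstance;
};

void SubmitBatch(const DrawElementsIndirectCommand* cmds, GLsizei numDraws)
{
    const GLsizeiptr bytes = numDraws * sizeof(DrawElementsIndirectCommand);
    const GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                             GL_MAP_COHERENT_BIT;

    // Immutable storage the CPU can keep mapped across frames.
    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, buf);
    glBufferStorage(GL_DRAW_INDIRECT_BUFFER, bytes, nullptr, flags);
    void* mapped = glMapBufferRange(GL_DRAW_INDIRECT_BUFFER, 0, bytes, flags);
    std::memcpy(mapped, cmds, bytes);

    // One call issues the entire batch of draws.
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                nullptr, numDraws, 0);
}
```

In a real engine the buffer would be created once and reused, with fences to keep the CPU from scribbling over commands the GPU is still reading; the point is simply that most of the per-draw driver work disappears.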

Source: Nvidia.

In the graph above, the baseline “naive” approach is the top bar, while the last bar is what Tamasi describes as “writing good code.” The difference amounts to an 18X speedup. Obviously, this is an isolated test case rather than a comprehensive, game-like scenario. But I’d say the difference is large enough to make at least some OpenGL developers rethink the way they optimize their code.

The important takeaway here, I think, is that despite their involvement with D3D12, the big three makers of PC graphics hardware—AMD, Intel, and Nvidia—all have a stake in keeping OpenGL competitive. That’s good news for Linux users, and it’s especially good news for those of us hoping to see SteamOS become a real competitor to Windows in the realm of PC gaming.

Of course, SteamOS isn’t due out until the summer, and the first D3D12 titles aren’t expected until the 2015 holiday season. We’ll have to revisit these matters in the future, when we can see for ourselves how next-gen games really perform on the two platforms.

Comments closed
    • grndzro
    • 6 years ago

    DX12 is DOA
    By the time it’s released all major game engines will have made the switch to OGL 4.4.
    Game over.

      • sschaem
      • 6 years ago

      The Xbox 1 and 360 will keep d3d alive for another decade…

    • southrncomfortjm
    • 6 years ago

    So, will AMD processors be more competitive against Intel chips for gaming if D3D12 actually does a good job of spreading the workload over more cores?

      • chuckula
      • 6 years ago

      Considering that AMD has effectively abandoned 8-core designs, it’s highly unclear how “moar coars” would help them at this point.

      I don’t care if games are 100% perfectly multithreaded, there’s no way an FX chip from 2012 is beating a mature Broadwell part (with a massive L4 cache) in late 2015.

        • Meadows
        • 6 years ago

        Regardless, I will hold on to my FX and if it does last that long, then it’s already worth it.

    • Sabresiberian
    • 6 years ago

    DirectX was originally created based on 2 basic intentions. The first was to make the Windows platform viable for gaming, because Microsoft recognized that the Windows OS itself bogged things down too much. The “Direct” in DirectX comes from that philosophy – give the games as direct a relationship to hardware as possible while providing APIs that worked for the vast majority of hardware.

    The second was “Ask the game devs what they want, and give it to them. Give them everything they want so they are thrilled to use DirectX to make their games.”

    Unfortunately MS lost its way at some point, and forgot both of these guidelines and commitments to what DirectX was supposed to be. To be fair it was and is the most complete full-featured API suite available, but has itself become bulkier and less developer friendly than it should be.

    It is heartening to see Microsoft recognize its mistakes and move to correct them.

    One thing I’ll say – MS’ release of DX12 will certainly be more full-featured than Mantle. I was not impressed with the fact that Mantle did not even allow for all the capabilities of DX9, much less DX11, and strongly suspect that it performed better in some games simply because it didn’t do as much as DX does. I’ll certainly give AMD the kudos the company deserves for making such a strong push for improvements in what game devs use, but if they are going to claim a better gaming experience they need to do it by providing more than a frame rate increase.

    The most exciting part, in my opinion, is not the lower-level tools, but the multi-threading capabilities. DX has actually held game performance back by being poor at using all of the CPU cores. DX12 changes that for even more room for game devs to produce beautiful games that respond much quicker to your inputs. (Now if we can just get Blizzard to clean up their Diablo 3 servers so they aren’t slugs much of the time. 🙂 )

    • DukenukemX
    • 6 years ago

    I believe the method that’s used to reduce driver overhead is done on the GPU now instead of the CPU. So instead of having the CPU convert Direct3D or OpenGL calls for the GPU, it’s done entirely on the GPU instead. It’s quicker and causes no overhead on the CPU.

    This is what I believe is happening, but feel free to correct me if I’m wrong. This is probably why both OpenGL and D3D can do it.

    • ddarko
    • 6 years ago

    I’m confused about the relationship of DirectX 12 to consoles. Microsoft said DX12 will increase performance on the Xbox One but didn’t DX11 for consoles already have these “close to the metal” tweaks? Wasn’t that the advantage of programming on consoles – set hardware combined with lower-level software access allowed developers to get more performance and wasn’t the point of APIs like Mantle and DX12 to make PC instructions more console-like? So now there’s all this untapped performance on consoles too that will be unlocked by DX12? There’s always puffery with these numbers – “increased performance up to….” almost always means you never see those numbers translate to the real world – so I take Microsoft’s claimed improvements on the Xbox One with a large grain of salt but I assume it’s not complete BS. I’m curious to hear how DX12 is going to improve performance on consoles. Apparently DX11 for consoles isn’t as close to the hardware as it’s been made out to be?

      • Puiucs
      • 6 years ago

      dx12 won’t help xbox much in terms of resolution, it’s a hardware problem (at least for current game engines). but as far as we know they still haven’t finished optimizing the API inside xbox and it’s drivers. there also isn’t a game engine that is fully made around xbox hardware.
      when they finish their software development (api+drivers+game engine) that’s when we’ll see a rise in resolution or graphics quality. similar to how games evolved on last gen consoles.
      although i must admit that microsoft will never reach the PS4’s potential. what were they thinking putting only 16 ROPs and 32Mb of SRAM? i blame the billions they invested in kinect. putting all your eggs in one basket usually spells doom.

    • fhohj
    • 6 years ago

    I honestly wonder, exactly how much of this is Mantle.

    I mean, it’s practically come out that it is Mantle. AMD has specifically said they’ve given it all to Microsoft from the start. If AMD is developing this thing with the input of developers, Microsoft isn’t going to go and change the whole thing up so it doesn’t look like what they’ve been working on with AMD.

    Thing is though, with this, AMD now has made the entire developer ecosystem friendly to its arch. All the consoles have its hardware meaning games will focus on AMD’s GCN architecture. There will be a custom API on the PC to encourage adoption of excellently optimized PC ports, to take more advantage of development styles on console. And now, with this, the last piece of the puzzle fits into place as the de facto API now is essentially built upon AMD’s Mantle, something designed on and, at least in part for their hardware. It’s not unreasonable to assume DirectX12 or at least parts of it will operate very strongly on GCN.

    They’ve basically now got support for and from everything. Developers, the hardware on consoles that the developers use, and the APIs that the PC uses.

    nVidia had it comin a little bit though. They’ve been up to this crap for years with the way it’s meant to be played, and now gameworks, so now it’s going to them. Future games are less likely to come forth with Arkham Origins and Assassin’s Creeds in terms of hardware preference.

    • balanarahul
    • 6 years ago

    Considering that Microsoft was busy releasing D3D11.1, D3D11.2, and a variant of D3D11 especially tailored for the Xbone, I say AMD is telling the truth that Microsoft wasn’t working on D3D12 before Mantle.

      • nanoflower
      • 6 years ago

      I don’t see why you would believe AMD over Nvidia and Microsoft. Something like D3D12 is going to take a long time so having interim updates to D3D11 makes sense. Even getting a common API that everyone can agree on could take months. There’s no way Microsoft just put this together since AMD announced Mantle. I’m sure the Mantle announcement helped Microsoft get a greater sense of urgency to get D3D12 out but I don’t believe Mantle was the cause of Microsoft developing D3D12.

      • Klimax
      • 6 years ago

      Since NVidia stated otherwise and something which matches general development schedules then AMD at best only mislead public or outright lied.

      You don’t just make new API in few months…

        • Antimatter
        • 6 years ago

        It’s likely that Microsoft knew about Mantle well before the Mantle announcement. Considering that DX12 is still more than a year away while Mantle games are already available, DX12 development most likely started after AMD started working on Mantle.

          • Ninjitsu
          • 6 years ago

          "It's likely that [s]Microsoft[/s] AMD knew about [s]Mantle[/s] D3D12 well before the [s]Mantle[/s] D3D12 announcement." FTFY.

          • Klimax
          • 6 years ago

          Not so fast. They got already 3DMark and a game there, so nope. So at best AMD started to work on Mantle at the time of DX 12, if not later, because they outright claimed DX 12 doesn’t exist.

          You can’t twist it to be favorable to AMD.

      • Meadows
      • 6 years ago

      Can you imagine developing, testing, and releasing a major cross-platform API in half a year on a whim just because someone else announced one?

      • maxxcool
      • 6 years ago

      The xbone DX is the basis for dx12. Just pushed further.

    • Ninjitsu
    • 6 years ago

    So 2016 will be the year of the PC’s complete domination…I mean, Skylake, Pascal, AMD’s Next Islands, mainstream DDR4, affordable 1TB SSDs, far more capable IGPs and SFF parts for HTPCs, low-power media streaming on Intel platforms, SteamOS and Windows 9 with lower overhead versions of OpenGL and D3D.

    The One True Platform shall rise FOREVER! 😀

      • Pbryanw
      • 6 years ago

      "The One True Platform shall rise FOREVER! :D"
      2016 - Year of the Linux Desktop???

    • Tech Savy
    • 6 years ago

    If what I have been reading throughout the net is right then D3D 12 (or DX12) won’t be released for another year and a half. That would suggest that Mantle and DX 12 were not in development at the same time. I could see AMD trying to grab the spot light if DX 12 were being released in a relatively short period of time.

    Considering that Mantle currently is proprietary I would think that would cause Microsoft to release it asap to maintain API dominance and uniformity.

    • xeridea
    • 6 years ago

    Nvidia:
    Who cares about all this massive CPU reduction and efficient threading, we can spend thousands of hours tweaking every game in the driver, and get a partial improvement. Let’s also show FPS rather than frame latencies so we don’t look as bad. Let’s just keep trying to fix a model that is clearly inefficient because we don’t want anyone to notice the competition’s method is just better for efficiency.

    Microsoft:
    Yeah, we have totally been working on this for years, and just forgot to tell anyone. Just because AMD already has working games, is a more refined API, and we won’t have games for another year and a half doesn’t mean anything.

      • chuckula
      • 6 years ago

      Your derision of Nvidia’s driver tactics would be justified in a vacuum.

      However, we don’t live in a vacuum and when AMD is putting on the Mantle show… complete with some rather scary frame latency spikes that TR showed in its objective reviews… you kind of come off sounding like a sour-grapes fanboy who wants to trash everyone else since you know deep down inside that Mantle didn’t turn out to be the miracle you wanted.

        • xeridea
        • 6 years ago

        Even with occasional latency spikes, the games play better because they are more consistent overall. It is a brand new API, and a game engine being ported to it, something that hasn’t been done in over 10 years, I wouldn’t expect it to be perfect. Mantle turned out about how I was thinking, a lot less CPU overhead, more consistency, and generally better frame times (dependent upon how good CPU is vs GPU). I knew it wouldn’t magically double framerate or anything.

        https://techreport.com/news/26253/battlefield-4-patch-fixes-balance-bugs-and-mantle-issues

          • derFunkenstein
          • 6 years ago

          yes, even with the latency spikes games run better with Mantle than they do with AMD’s D3D11 drivers. And that’s basically all you need to know about why I stick to nVidia.

        • fhohj
        • 6 years ago

        while I’m not defending nvidia’s nonsense that they do, strong driver optimization is not a bad thing.

        but that’s not to say anything good about how they position it sometimes and certainly not relating to their other crap they do with interfering in the games software itself.

        didn’t nvidia spearhead latency reduction, spend god knows how much money on building tools to troubleshoot it, and make reviewers have a bad time by sending the stuff out to them so they had to bother with it? 😛

      • MathMan
      • 6 years ago

      Does it even matter who came up with this first?

      The end result is going to be that everybody wins once DX12 is on the market.

      According to this article, Nvidia isn’t even denying that AMD gave everybody a kick in the butt.

      • Ninjitsu
      • 6 years ago

      All nvidia’s said is that:

      1. DX12/D3D12 has been in development for quite a while.

      2. AMD’s Mantle efforts made them redouble their efforts on cutting their driver’s overhead (which was already lower).

      3. Even with D3D11, a lot of optimization is possible, which AMD doesn’t seem too concerned with, since they have Mantle now.

        • xeridea
        • 6 years ago

        Talk to game developers trying to effectively optimize for D3D. Lower overhead is welcomed by game developers, Nvidia is saying it doesn’t matter.

          • chuckula
          • 6 years ago

          "Nvidia is saying it doesn't matter."
          Oh Rlly??? Here's a nice talk from two NVidia developers about lowering overhead in OpenGL:
          http://www.ozone3d.net/dl/201401/NVIDIA_OpenGL_beyond_porting.pdf
          Note how there is more publicly-available technical content in this talk about OpenGL than there is anywhere on the entire Internet for the supposedly "open" Mantle API.

            • xeridea
            • 6 years ago

            So there is more information about optimizing an API that has been around for decades than there is about a brand new, closer to metal API that requires less effort to optimize?

            OpenGL is a great library, I use it, and many still will. There are benefits of an abstracted API. I am just saying Nvidia is just getting defensive playing it down like there is 0 benefit of less abstraction. I am sure they spent thousands of hours of in depth profiling and tuning for Battlefield 4 because they knew Mantle was coming out. If they want to spend thousands of hours custom tuning every game rather than letting the developers do much of it that’s their choice.

            • Ninjitsu
            • 6 years ago

            So we have to wait a decade for Mantle to be fully documented and optimised? And we shouldn’t use OpenGL and D3D till then? Kay.

          • MathMan
          • 6 years ago

          You’re imagining the things that nvidia is saying.

          They say there are still things to be squeezed out in DX11. They’re not saying at all that lower overhead doesn’t matter.

          • jihadjoe
          • 6 years ago

          Nvidia is saying you don’t need to make a completely new API just to reduce overhead, not that reducing overhead doesn’t matter.

          • Klimax
          • 6 years ago

          NVidia stated, that there is still load of work to do even for existing DX 11, which incidentally improves it for existing titles.

          And then devs still don’t use DX 11 fully, so doubt that DX 12 will fare that much better.

          • Ninjitsu
          • 6 years ago

          What? No, they're not. Quotes, please?

      • HisDivineOrder
      • 6 years ago

      AMD:

      Who cares about relying upon developers traditionally known for being lazy and not updating their software to be compatible with the majority of GPU’s that people use all the time? Lowest common denominator means every game is going to be running like a dream on Intel hardware if they had their own API. Maybe nVidia as the next largest would get some boon if they too had their own API. And then AMD running last could get some benefit. Who cares if making a vendor specific API as the way to go for the future meant every game had to have at least three separate API’s in use to ensure broad compatibility? Who cares if that vendor specific API future meant that compatibility in the future when new architectures arrive would make life a hell on earth for people who decide in five years they want to play a game on their new AMD post-GCN architecture, right? Who cares if it takes the developer a lot longer to code to the metal and ensure greater performance gains than just going DirectX because they don’t have to code to the metal once, they have to code to the metal (best case scenario) with three different API’s, right? Who cares if that takes the burden off of our laid off driver making teams and puts it off on developers who are under such a time crunch they rarely like to go back and update their software after six months, so asking them to be patching their game two years later because architectures changed and coding to the metal left those games unable to run well on the architecture-specific API’s that have suddenly become the standard because AMD made it so everyone wants to have their own API?

      TLDR; you act like nVidia tailoring drivers to specific games is a horrible idea. I will act like AMD wanting API’s tailored to each GPU maker’s architecture specifically is a horrible idea. nVidia or any GPU maker tailoring drivers to specific games at least allows you to install different drivers to make certain games run better if you have to. Most often, it means profiles enable superior compatibility without that. AMD’s way means that you basically will need the same card you used to run the game on if for some reason a GPU maker decides to vastly change their architecture to such degree the game won’t run right on the newer hardware. Especially when you’ve taken the responsibility of fixing game compatibility issues away from the driver teams of the GPU makers and put them squarely in the hands of developers who rarely have the time or money or even manpower to go back and fix a game that came out more than three months ago.

      AMD’s way seems more fraught with peril to me.

      • sschaem
      • 6 years ago

      Sarcastically said, but on target.

      Dx12 would not exist if no problem existed.
      Dx12-style optimization would have been out 4+ years ago if Microsoft was pushing the industry forward (instead they were holding back AMD / nvidia and developers, shielding them from HW features).

      So thumbs up for your clarity.

    • Bensam123
    • 6 years ago

    No update on allowing Mantle on NVidia or Intel hardware? Or Nvidia or Intel adding Mantle support?

    “I’m not entirely clear on what the new blend modes are supposed to do, but as I understand it, conservative rasterization will help with object culling (that is, hiding geometry that shouldn’t be seen, such as objects behind a wall) as well as hit detection.”

    I still don’t understand how something like that would affect hit detection. They seem to have nothing to do with each other and this didn’t help clear it up at all either. :l

      • Cyril
      • 6 years ago

      "No update on allowing Mantle on NVidia or Intel hardware? Or Nvidia or Intel adding Mantle support?"
      AMD told me it probably ain't gonna happen. With DX12 coming so soon, it's hard to imagine why Nvidia or Intel would spend the extra resources getting Mantle working on their hardware and supporting it.
      "I still don't understand how something like that would affect hit detection. They seem to have nothing to do with each other and this didn't help clear it up at all either. :l"
      Here's a whitepaper about that: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter42.html

        • MathMan
        • 6 years ago

        That paper describes how to do conservative rasterization, but it doesn’t say how you’d use it for collision detection.
        A google search turned up this paper from Microsoft research:
        http://research.microsoft.com/en-us/um/people/cloop/collision.pdf
        (I've only read the abstract though.) When you google "conservative rasterization collision detection" there's also an interesting chapter through google books.

          • Andrew Lauritzen
          • 6 years ago

          Conservative rasterization is mainly a tool to construct data structures – uniform grids mostly. As such it can be used to construct a variety of acceleration structures, voxels, conservative depth buffers, etc. These structures in turn can be used to accelerate things like ray queries (which are used in physics, collision detection, culling, etc). Note that things like occlusion culling basically are collision detection problems at their core.

          I don’t think folks are necessarily going to use this for – for instance – the hit detection from your main weapon in an FPS or similar. Those checks are pretty “rare” from a computer’s point of view and don’t really need to be super-fast. But for stuff like particle/geometry collisions, occlusion culling, voxelization, and more, conservative rasterization is very useful.

            • MathMan
            • 6 years ago

            Would the following be a good, simplified example:
            – you render the objects in a 2D or 3D space, but without any actual shading. Just a fixed color per object
            – if all pixels or voxels end up with a ‘pure’ color then there was no collision
            – if some of the pixels are not pure (a blend of one or more pure colors), then there is the possibility of a collision. Not 100% guaranteed, because it’s conservative, but it warrants more investigation.

            In the latter case, you then do deeper analysis, either by using a finer grid and re-raster again locally, or by non-discrete, symbolic analysis?

            • Andrew Lauritzen
            • 6 years ago

            That sort of thing, but there’s no need to pretend it’s all “colors” and such these days as it was with old-school GPGPU. You can happily store lists, trees, etc. and traverse them directly.

            But yes, at a high level conservative rasterization can get you the “grid cell touched at all by polygon” and “grid cell fully covered by polygon” information efficiently, which is a basic building block for a lot of these algorithms. Usually you follow up with a more accurate test as you note.

            • MathMan
            • 6 years ago

            But you’re going to be using the ROPs of the GPU to raster the thing. Doesn’t that make it much harder then to use lists etc? That’s the part I don’t understand…

            • Andrew Lauritzen
            • 6 years ago

            Nope, since DX11 you can do arbitrary memory read/write/atomics from pixel shaders. That’s how we’ve been making linked lists, generating voxels and other such fun 🙂

            Together with pixel shader ordering (as seen in Haswell), you can even do arbitrary things with ROP-style primitive ordered semantics. Thus you can see how these two DX12 features can work together quite nicely.

            • MathMan
            • 6 years ago

            Oh, I see: so you can use the pixel rasterizer without having to use the ROP. So you get the coordinates of the pixel, use that to calculate the address of whichever memory location you’re interested and do regular read/write operations on those.

            That’s really very cool.

            Thanks for explaining.

            • Andrew Lauritzen
            • 6 years ago

            Yup, exactly.

            > Thanks for explaining.
            My pleasure!

            • Bensam123
            • 6 years ago

            How would you use it for hit detection? AFAIK all of the above are basically graphics oriented, they have nothing to do with collision (besides like light rays colliding with objects if you’re talking about ray tracing). Maybe voxels as they are the structure themselves…

            Hit detection wasn’t referring to hit detection from a weapon then? Or even unit collisions?

            • MathMan
            • 6 years ago

            It’s referring to the collision of complex shapes against each other.

            If you have multiple complex shapes that each consist of tons of triangles, it is computationally very hard to determine whether or not they hit each other. The first-order detection can be done with bounding boxes, but that only goes so far.

            Once you’ve determined that the bounding boxes intersect, you can use rasterization to check if they intersect, the way I suggested a bit earlier. It works both for 2D and 3D, but let’s use 2D:

            Say you have 2 complex 2D objects made of multiple triangles, you’re going to render both without applying any complex pixel shading: all you care about is that the same pixel gets occupied by more than 1 object.

            A dumb way to do this would be to assign a color of green to object 1 and a color of pure red to object 2, and you use additive blending.

            If after rasterizing both objects, none of the pixels have a color other than green or red, then there was no intersection. If the rasterization was conservative, then you can be 100% sure that the objects didn’t intersect. If, however, some pixels have both a green and a red component, then there may be an intersection at that pixel.

            In a game, if the rasterization is of high enough resolution, it may as well decide that there was a collision. For other cases, it may go then go triangle by triangle and check if there is a real intersection.

            Note that the rasterization resolution doesn’t have to be same as the rendering resolution: it can be quite a bit coarser to speed things up…

            • Bensam123
            • 6 years ago

            Hmmm… here I was under the impression that games don’t actually check skins, they check hit boxes. So what you see doesn’t necessarily represent where they will actually collide with eachother.

            Hence the term ‘hit boxes’ and why it’s raved about a lot in game communities when females have different hit boxes than males; oftentimes they share the same hit box despite having different models/textures on top of it. AFAIK there is no absolute check based on pixels themselves, that would be something new and very computationally expensive… especially the more detailed the models get.

            I looked at the white paper and you’re essentially describing it, but something like that is never used (I’ve never heard of something like that being used). I would agree it would be better from a optimization standpoint, but thats about it.

            Even if models or textures collide, they don’t ‘blend’ or ‘smoosh’ into each other as they would in real life, they clip through each other.

            • MathMan
            • 6 years ago

            I’m not saying they check at the pixel level. They check at the raster level, which can be much coarser than that. I’m also not saying games are currently doing it this way. I’m saying that conservative rasterization allows them to do it like this.

            Whether or not that kind of accuracy makes sense needs to be determined on a case by case basis. I’m sure it could be useful when doing hard body physics collisions, where a bounding box is most definitely not going do be sufficient.

            • Andrew Lauritzen
            • 6 years ago

            As I said, it’s not likely that many games are going to switch all their gameplay hit detection checks to the GPU or anything; that’s not a very big load anyways and not worth decoupling from the core game loop/logic.

            The point is just that conservative rasterization is a data structure building tool. The sorts of data structures you can build with it are useful for “collision checks” of all sorts… pixels vs. raster grid, objects vs. rays, light volumes vs. tile frusta (see tiled deferred shading or forward+), etc. etc. Despite having different semantics, these are all just geometry collision detection problems and they can use the same techniques to solve them. Which technique makes sense in a specific case will often depend on the nature of the data, but in almost every case you’ll use some sort of acceleration structure. Conservative rasterization is a tool that helps to build those structures efficiently.

            Hopefully that sort of makes sense to folks…

        • Bensam123
        • 6 years ago

        It isn’t going to happen as in they aren’t going to allow Nvidia or Intel use Mantle or Nvidia and Intel will support Mantle?

          • Cyril
          • 6 years ago

          The latter. I don’t think AMD is opposed to third-party vendors supporting Mantle, but there’s little incentive for those other vendors to implement support at this point.

            • MFergus
            • 6 years ago

            It probably wouldn’t work well on non GCN gpus even if Nvidia/Intel tried to support it anyways. Would be just a waste of time.

            • Bensam123
            • 6 years ago

            What reasoning do you have for that? Because it isn’t AMD hardware?

            • Ninjitsu
            • 6 years ago

            Well, yes. That’s always been the reasoning behind people saying this.

            Look at it this way, three different GPU vendors have similar hardware at a high level, but at a low level, it’s different.

            Mantle likely contains more direct assembly calls tailored to GCN. Assembly calls aren’t just “MOV” or “ADD”…the names represent hex codes (called op-codes). Those hex codes will be different for different architectures. Why? Because they represent certain bit patterns that are input to the hardware.

            To convert Mantle into something useful on Intel or Nvidia’s stuff, they’ll have to take AMD’s ISA and sit and map it to theirs, or vice versa. Of course, they’ll lose features that GCN doesn’t support.

            Now, tell me:
            1. How will the driver be maintained? Who writes it? Does AMD write it for everyone? Do all of them handle it like D3D and OpenGL? If you say “yes” to the latter, then why not just use optimised versions of OpenGL and D3D?

            2. What happens when an instruction is supported by GCN and not supported by the rest? Will the image quality be affected? Will stuff crash (unlikely, because Intel/Nvidia will implement exception handling)?

            It would be a nightmare for the end user. No incentive for Intel/Nvidia. Why this seems like a good idea to you, i have no idea.

            EDIT: I haven’t actually ever written a driver, so I’m not going to pretend to be an expert on this. Just applying whatever i do know about software and hardware.

            • Andrew Lauritzen
            • 6 years ago

            So far AMD has refused to even release a spec to other IHVs (let alone discuss modifications to make it more portable) and shown no intention of allowing others to support it beyond their vague PR statements. We (Intel) have asked several times (from day one) and they have always refused.

            But as you say, with DX12 on the horizon it’s basically a solved problem on Windows now at least.

          • HisDivineOrder
          • 6 years ago

          What? You actually bought the line that AMD was going to turn Mantle into an open standard? Really? Don’t you think they’d have already made a single move toward doing that if were their real intention? That was their way of countering anyone who said, “Hey, why do we want a standard for a single vendor?” They could just say, “Hai, mebbe it’ll be a standard others will come to. Mebbe.”

          What do you say to that? “Nah uh, no way mebbe.” It’s difficult to fight a “maybe we’ll” argument to people that then take their “mebbe’s” to be gospel fact or real expectation.

          This is AMD’s new PR strategy. Really, it’s a new strategy used by all major corporations, so it’s not an AMD exclusive. AMD uses it a lot. Like when they said there was no DirectX 12 coming and that Mantle would be the only way to get low level access for Windows gaming. Like when they said fixes for frame latency were coming, then kept pushing the deadline out to the latest possible edge of their dates, so that “soon” (according to their special section at anandtech) early last year became “June, maybe August,” became August, became September 1st. Or Mantle’s delays from being revealed, but with few details in September and an R9 290X that was revealed without price or metrics with the promise of a preorder coming the next week… only for… NOTHING to be said. For weeks.

          AMD likes to keep people in a constant state of, “Maybe.” If they don’t hammer down specifics, then the competition doesn’t have anything to dispute, do they? 😉 They can just wave their hands around and say, “These aren’t the droids you’re looking for,” every single time you try to point out a discrepancy. That leads to people finally piecing together all their half-truth and maybe’s into one long, concerted attempt at deceptive stalling.

          I think you can go back to their handling of the Sea Islands delay last year when publications went to AMD thinking the old roadmap meant an update to the 79xx and 78xx series was coming, only to have AMD use a Jedi Mind Trick and say, “There’s no delay. We never said that roadmap we gave everyone was actually what we were going to do. You just thought that because we gave it to you. No updates to the mid, mid to high, or high end now.”

          So yeah. You can’t believe a single thing AMD says unless they are very, very specific with specific details about something that is coming out right the moment they are telling you about it. Assume nothing. Take nothing for granted. They prey upon that instinct to want to believe them at their word and to give them the benefit of the doubt.

          With AMD, one in the hand is worth a thousand in the bush.

    • kamikaziechameleon
    • 6 years ago

    Excited to see this scramble. It’s good for all consumers. I wonder how a DX12 revision might affect xbone. I wonder if any of this is really helping streamline cross-platform engine development.

    • Musafir_86
    • 6 years ago

    -Also, regarding Direct3D/DirectX 12 availability on Windows 7, I’m thinking it would [b<][u<]not[/u<][/b<] happen, as Windows 7’s Mainstream Support will end on 13 January 2015 (its General Availability was in October 2009). The same reason was used to exclude Internet Explorer 9 from Windows XP (i.e. it was past its Mainstream Support period), regardless of Windows XP’s popularity at that time.

    Regards,
    -Musafir_86.

      • xeridea
      • 6 years ago

      Yeah, funny how OpenGL and other browsers run on a variety of OSes and not just the one MS is trying to throw at people that don’t want it. There is no technical reason for intentionally making things work only on the latest OS.

        • Klimax
        • 6 years ago

        Incorrect. Kernel driver models have to be changed to support new things, and those don’t get backported, for stability reasons (among other reasons).

        OpenGL implementations use a special API and can use separate vendor-specific APIs. Vendor support is all over the map for OpenGL, showing how big a mess vendors can make.

        And frankly, OpenGL itself is a mess too. (Including the bloody extensions, which are the biggest problem.)

        • Ringofett
        • 6 years ago

        Yeah, those corporate dogs, I’m angry my Android from 2008 doesn’t get KitKat! Jerks. It’s only 6 years old, with a CPU slower than a smart watch!

    • USAFTW
    • 6 years ago

    Something you wouldn’t expect to see is NVidia complimenting AMD’s efforts with Mantle, admitting that AMD, with Mantle, encouraged NVidia to ramp up its driver optimization. I hope AMD doesn’t forget about optimizing for DX11 (only two games support Mantle right now).

      • nanoflower
      • 6 years ago

      I don’t know if they will. AMD actually has an incentive not to optimize their DX11 drivers since they can point to Mantle as a great alternative. Hopefully they won’t do that, but given their limited resources (in comparison to Intel/Nvidia) it may happen.

        • USAFTW
        • 6 years ago

        That’s exactly what I’m afraid of as a 6970 user. Unless they pledge to make every single game mantle compatible, that is. And given that they’re paying people 5-8 million to adopt mantle, I don’t think they have the financials to back it up. Oh, what a tragic oversight. Just think what could be done if that money was spent on something practical. They could probably re-write their entire driver for that and have some change left for marketing. Darn it, AMD… What the heck is wrong with you?

        • Klimax
        • 6 years ago

        Sort of a reverse 3DFx, which started with Glide and then tried to switch to DirectX/OpenGL.

    • Meadows
    • 6 years ago

    The D3D 11-12 comparison image is phony. The “new” one has missing shadows.

      • Aquilino
      • 6 years ago

      You don’t have to take that into account; just watch for the 0s at the top. The computational load is lower and more evenly spread among the threads.

        • xeridea
        • 6 years ago

        Ok, so you are saying it doesn’t matter if the tests are comparing equal workloads? Yes, it does a much better job of splitting work between cores, but the lower total CPU time could be heavily influenced by a simpler workload.

          • MathMan
          • 6 years ago

          The number of draw calls and the driver overhead are not going to be heavily impacted by a shader that darkens more or fewer pixels. The geometric complexity of the scene looks almost identical, and that’s what matters.

          (It is a bit clumsy not to show exactly the same thing, if only because it provides fodder for those who don’t quite know what to look for.)

            • Meadows
            • 6 years ago

            Okay, be that way, but it still looks weird. Who knows what else they might have culled in the distance?

            • Ninjitsu
            • 6 years ago

            If it’s legitimate culling, then that’s actually a good thing.

            • Terra_Nocuus
            • 6 years ago

            it could also be two different screenshots taken at different points in time during the demo (i.e., not exactly the same second). I’m assuming the environment is in motion, altering the shadows in the scene. That seems more likely than a lower-graphical-fidelity conspiracy claim (for lack of a better phrase).

    • shaurz
    • 6 years ago

    Meh. Direct3D is irrelevant to the mobile industry, all non-Microsoft consoles and anyone not running Windows. Microsoft need to stop pushing proprietary standards, it’s not 1995 any more.

      • Meadows
      • 6 years ago

      What else should they do?

        • shaurz
        • 6 years ago

        I’m just observing that pushing these standards that only work on Microsoft platforms is counter-productive, especially since the mobile industry is dominated by OpenGL. I guess it’s in their DNA to continue pushing their own platform; they’ve been doing it for three decades, after all. But the world is moving on…

          • nanoflower
          • 6 years ago

          Nothing that Microsoft is doing prevents developers from utilizing OpenGL. Instead, it allows someone to develop a product for one platform (Xbox One, Windows PC, or Windows Phone) and bring it to the other platforms much more easily. It makes sense from a business standpoint for Microsoft to provide the option.

      • Ninjitsu
      • 6 years ago

      Well, it’s still potentially relevant to the ~1 billion Windows PC users…

      (it’s not like all mobile users can/would benefit from OpenGL ES 3.0, either)

    • Klimax
    • 6 years ago

    [quote<] think it's entirely possible AMD has known about D3D12 from the beginning, that it pushed ahead with Mantle anyhow in order to secure a temporary advantage over the competition, and that it's now seeking to embellish its part in D3D12's creation. It's equally possible AMD was entirely forthright with us, and that Nvidia is simply trying to downplay the extent of its competitor's influence. [/quote<]

    There is no dichotomy to resolve. AMD claimed DX 12 didn’t exist. That was a lie, and Mantle was an attempt at forcing devs to use their proprietary crap to fix and avoid problems in their drivers. So the first option is the true one; all known evidence and data provide ample proof of it. Too bad NVidia knows how to write drivers. It was, after all, they who started CPU optimizations back in the ’90s.

    Anyway, console-like abstraction is frankly nonsense and an oxymoron. Consoles have nearly no abstraction precisely to extract maximum performance, going as low as possible in exchange for flexibility. And as for devs pushing for something: they still don’t use existing DX 11 abilities to their full extent, so how long will it take them to use DX 12 properly? Another six or more years? And it doesn’t matter who you look at, be it engine makers like Epic or AMD’s own partners like DICE.

    And lastly, I don’t like how they are tailoring it to current HW. The reason DX exists and was successful was abstraction of the HW: you weren’t forced into a particular scheme or set of technologies, but could do things quite differently. This looks more like regression than anything else. (See the original tile-based rendering.)

    No, I don’t like it so far, but hopefully docs and API specs will be here soon. (Will they be here sooner than Mantle’s? 😀 )

    • SnowboardingTobi
    • 6 years ago

    We used to have bare metal programming. Funny how we’re trying to get back there again. Here, let me start them off with this

    [code<]
    mov ax, 13h
    int 10h
    [/code<] 😉

      • Klimax
      • 6 years ago

      On CPUs we sort of accepted this (though we let the compiler and instruction-set extensions handle it), but on GPUs we abandoned it long ago, yet we return there as if we all forgot how much it sucked.

      • shaurz
      • 6 years ago

      That’s not bare-metal, you’re calling a BIOS API.

      • just brew it!
      • 6 years ago

      That’s not bare metal, you’re still relying on a BIOS interrupt to do most of the work for you. It’s more like metal that has been sanded and primed, but not painted.

      If you were truly optimizing this code for performance you would be executing a block copy (movsw instruction or equivalent) of the data and attributes directly into the text mode frame buffer, instead of using int 10h function 13h. Taking the interrupt overhead out of the equation is a big win.
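
      Something like this minimal sketch, for instance (NASM-style real-mode assembly; this assumes a DOS .COM program so DS already points at the data, and the msg label, msg_len, and the string itself are made up purely for illustration):

      [code<]
      ; Sketch only: copy character/attribute pairs straight into the color
      ; text-mode frame buffer at B800:0000, with no BIOS interrupt involved.
              org 100h            ; .COM programs load at offset 100h

              mov ax, 0B800h      ; segment of the color text-mode frame buffer
              mov es, ax
              xor di, di          ; ES:DI -> row 0, column 0
              mov si, msg         ; DS:SI -> char/attribute pairs (DS = CS in a .COM)
              mov cx, msg_len / 2 ; number of 16-bit words to copy
              cld                 ; copy forward
              rep movsw           ; the block copy described above
              ret                 ; back to DOS (a .COM starts with 0000h on the stack,
                                  ; which points at the INT 20h in the PSP)

      msg:    db 'H', 07h, 'i', 07h, '!', 07h   ; character byte, then attribute byte
      msg_len equ $ - msg                       ; (07h = light grey on black)
      [/code<]

      (Real mode only, of course; a modern protected-mode OS won’t let you poke B800:0000 directly.)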

      😉

        • morphine
        • 6 years ago

        Optimizing a graphics card mode switch? What do you want to do, switch graphics modes a few thousand times per second and, who knows, kill the CRT monitor? 😀

          • just brew it!
          • 6 years ago

          It’s not a mode switch, it is displaying a text string on a display that is presumed to already be in text mode.

            • morphine
            • 6 years ago

            Nuh-huh. Sorry, but I cut my teeth on “proper programming” doing ASM. I remember those calls, and a few dozen more, like the back of my hand.

            [url<]https://courses.engr.illinois.edu/ece390/books/labmanual/graphics-mode13h.html[/url<]

            • flip-mode
            • 6 years ago

            Smart is so sexy. I’m not gay but you turn me on.

            • morphine
            • 6 years ago

            Thank you, but I should take this opportunity to note that JBI has probably forgotten more about programming (especially low-level) than I will ever know in my lifetime 🙂

            • flip-mode
            • 6 years ago

            I was trying to make him jealous.

            • SnowboardingTobi
            • 6 years ago

            Sorry, JBI, but morphine is right – this is code to change VGA modes. I think what you’re thinking of is if we had put 13h into the upper 8 bits of the AX register (AH) instead of the lower bits. I’d have to look that up since it’s been years, but I think that’s what you’re confusing this with.

            But to be fair, yeah, I’d be doing direct memory writes to fill the memory array if I wanted to output to the screen.

            • just brew it!
            • 6 years ago

            Oops. My bad.

            • SnowboardingTobi
            • 6 years ago

            No harm, no foul. <3 you

            • morphine
            • 6 years ago

            <3 you too honey.

            And I’m not even going to pretend you can’t run rings around me when it comes to this low-level stuff 🙂

    • codedivine
    • 6 years ago

    The two things I am interested in knowing about DX12 from the point of view of a developer:

    1. How much work will it be to port DX11 code to DX12?
    2. What’s new for compute?

    I guess we might hear some answers at the BUILD conference coming up (April 2nd to 4th), but we probably won’t get a public SDK till later this year.

      • Ninjitsu
      • 6 years ago

      With regards to 1, there are likely two answers:

      A) Not much. DX12 will have a D3D11-like mode, where the abstraction is similar to D3D11’s. This will end up as something like DX12 feature level 11, so perhaps the names of some calls change, plus other minor stuff.

      B) A lot, if devs want lower abstraction and more control, because they’ll have to distribute the workload more evenly by hand.

      Basically, for games that are already out, B isn’t likely to happen. Too much effort, lost time and revenue.

      With regards to 2, they haven’t announced the full spec yet so we’ll have to see…

    • Musafir_86
    • 6 years ago

    -Regarding the “new” unreleased nVidia driver: when it does come out, I hope every reviewer (especially Tech Report) will scrutinize the IQ (image quality) down to every single last frame, so the quality isn’t degraded in any way (missing objects, different textures, etc.) when compared to previous driver versions.

    Regards,
    -Musafir_86.

      • MathMan
      • 6 years ago

      Driver overhead is not about making the GPU do less, it’s about making the CPU get out of the way so the GPU doesn’t have to wait. That’s not really something you do by lowering image quality.
      E.g. Starswarm is something that explicitly stresses draw calls. You don’t wish those away by messing with some minor image quality settings.

      Those graphs by Nvidia are without a doubt best-case scenarios: the sweet spot of GPU type, CPU type, and workload that exposes the CPU as the weak spot the most. (Just like AMD’s most optimistic Mantle numbers.) I think that’s fair game.

      It may be that those CPU optimizations are, one way or another, very specific to this benchmark. But to think that this is a pure cheat done by messing around with image quality? Doubtful. If you’re willing to disregard AMD’s FP16 optimizations from 4 years ago (which were not a big deal, since they didn’t have a major visible impact), I can’t think of any major cheating in the last 6 or 7 or more years.

      I think it’s time to start giving either party the benefit of the doubt.

        • Mr. Eco
        • 6 years ago

        Intel still provides worse image quality than its competitors. There was a recent article here on TR with an image comparison.

          • DPete27
          • 6 years ago

          You sure you’re not thinking about [url=https://techreport.com/review/23324/a-look-at-hardware-video-transcoding-on-the-pc/3<]video transcoding?[/url<]

            • Terra_Nocuus
            • 6 years ago

            it was in the Brix review, re: [url=https://techreport.com/review/26166/gigabyte-brix-pro-reviewed/3<]Iris Pro[/url<] gaming performance.

            • chuckula
            • 6 years ago

            [quote<]Perhaps this is a bug specific to Borderlands 2. Other titles, as far as I could tell, didn't suffer from similar artifacting or filtering issues. [/quote<] So one game... using beta drivers... showed a few graphical glitches. What do you say when AMD's beta drivers have an issue here or there?

            • Terra_Nocuus
            • 6 years ago

            I chuckle to myself in amusement 🙂

            • Andrew Lauritzen
            • 6 years ago

            It is a bug in Borderlands – they treat Intel GPUs differently (for some unnecessary reason) and clearly don’t test that path. I imagine spoofing the device ID so that it runs the same path as NVIDIA/AMD would fix the “issue”.

          • chuckula
          • 6 years ago

          TR did a review where one game out of several showed a few graphical glitches when using an Intel IGP.

          If you use that “standard” to say that Intel has inferior image quality, then I sincerely hope you never read the footnotes that come out with AMD’s Mantle updates because you will be in for a rude awakening in the image quality department.

          All three major vendors have graphical glitches with some of their drivers and some modern games. It happens. Intel definitely isn’t perfect, but neither are AMD nor Nvidia, and the progress that Intel has shown over the last 2 years shows that they are taking graphics seriously even if they don’t make $1000 discrete GPUs.

          • MathMan
          • 6 years ago

          I was honestly not even considering Intel. And I’m willing to blame incompetence instead of malice. 🙂

    • LostCat
    • 6 years ago

    Comp gaming is getting exciting again! Excellent.

    • Jigar
    • 6 years ago

    Big thanks to AMD; things are now changing for good. I also see my Q6600 not retiring any time soon if this is implemented successfully.

      • albundy
      • 6 years ago

      I feel the same way about my Phenom II 965 BE unit. It’s been running nice and fast for many years. The only thing I’ve ever upgraded was adding more DDR3-1600 RAM and an SSD.

      • LostCat
      • 6 years ago

      I used to think my i7 920 kept up decently with modern stuff. Hell, it didn’t.

        • shaurz
        • 6 years ago

        The Q6600 is 2-3x slower than an i7 4770K.

      • Klimax
      • 6 years ago

      Thanks to AMD? I don’t think so. Did you forget they claimed DX 12 didn’t exist?
