DirectX 12 will also add new features for next-gen GPUs

When Microsoft announced DirectX 12 yesterday, we learned that a broad swath of existing hardware—including all of Nvidia’s DirectX 11 GPUs and all of AMD’s GCN-based offerings—would support the new API. That wasn’t the whole story, however, as Nvidia’s Tony Tamasi clarified in an interview with us today.

DirectX 12 will indeed make lower-level abstraction available (but not mandatory—there will be backward-compatibility with DX11) on existing hardware. However, Tamasi explained that DirectX 12 will introduce a set of new features in addition to the lower-level abstraction, and those features will require new hardware. In his words, Microsoft “only teased” at some of those additions this week, and a “whole bunch more” are coming.

In that respect, the release of DirectX 12 should echo that of previous major DirectX versions: full support for the new API will only be available with a new generation of graphics hardware. That said, the lower-level abstraction seems to be a pretty huge part of what makes DirectX 12 what it is, and many of us will be able to reap the fruits of it on current-gen GPUs. As I outlined yesterday, the lower-level abstraction should translate into performance improvements and CPU overhead reductions.

So far, Microsoft has mentioned only two DX12 features that will need new hardware: new blend modes and something called conservative rasterization, which can apparently help with object culling and hit detection. Neither of those additions sounds hugely groundbreaking, but as Tamasi hinted, they may just be the tip of the iceberg. We’ll probably have to wait for another announcement from Microsoft to find out more.

Comments closed
    • TheMonkeyKing
    • 7 years ago

    With DX 12, I get MOAR COINS!
    <wraoh!>

    • marraco
    • 7 years ago

    I said this on another article. I was joking, but each day it looks more like it will be for real:

    -It will require new video cards to get the full features, which will not really be used.
    -It will require Windows 9. MS always does that.
    -As is MS's tendency, Windows 9 will force the cloud, MS accounts, paid subscriptions, and ads.
    -No game will fully support it, due to lowest-common-denominator compatibility, which will be MS tablets and phones.
    -It will come many years in the future. They are promising end of 2015, but it's normal to get delays.

    • moose17145
    • 7 years ago

    Personally I would like to see G-sync / freeSync / whatever you want to call it for dx12 be a part of the standard. I think it would help push more variable refresh monitors into the market as well as setting a single standard that the market can use and easily implement. I suppose idk if something like that is even related or possible to do using directx, but i think it would be nice to just throw in that in order for the card to be able to claim full directx 12 support it has to support variable refresh rates.

      • the
      • 7 years ago

      I’ll second this notion and also extend it. It’d be nice to have some official mechanism for large virtual surfaces like Eyefinity and Surround View in conjunction with DirectX. This would enable things like one-GPU-per-display split-screen rendering at an independent refresh rate. The effect is that each physical monitor would run at the highest frame rate a GPU could produce. The downside is that there could be a vertical tear at the bezels, since the monitors wouldn’t be showing the same synchronized frame. The thing is, would anyone notice, since there would be a bezel in the way?

      The other benefit is that this could be used to scale beyond 4 GPUs in a system. Currently DirectX limits the number of concurrent frames rendering for AFR to 6. It has been possible to connect up to 10 GPUs (five dual-GPU cards) in a system, but that extra compute would go to waste. This would enable better utilization in these extreme system configurations, as scaling past 4 GPUs is typically unheard of.

      • UnfriendlyFire
      • 7 years ago

      Nvidia used an upcoming DisplayPort standard to create G-Sync.

      They knew that DisplayPort would take at least several months to finalize the new standard, and decided to act first.

    • alientorni
    • 7 years ago

    It seems that AMD has done a LOT of good business besides just putting their APUs/GPUs in the next-gen consoles.

    First it was Mark Cerny on the GCN architecture, and the TrueAudio chip in the PS4.
    Now it seems that they have worked with Microsoft on better graphics library support for their cards, both for DX12 and some code sharing to make Mantle.

    Now it's pretty clear why Nvidia talks so much about mobile…

      • HisDivineOrder
      • 7 years ago

      Is that why Microsoft used an nVidia GPU to demo Forza Fiveza on PC to show the true power of DirectX 12 features? 😉 Is that why nVidia’s been working on DirectX 12 with Microsoft as long as AMD? Is that why–

      Silly fanboy. Have you ever seen a company so apparently excluded by Microsoft from the DirectX party have such a huge happy-joy-joy dance as nVidia over DirectX 12?

      The truth is Mantle is dead. nVidia seems to be there at ground level to be used as the demo hardware. I’m sure that has at least something to do with the fact that most of the demos last year at E3 were secretly running on nVidia GPU’s despite AMD’s apparent “good business” decisions that should have led to them being the demo hardware.

      Nevermind that, developers preferred to use nVidia hardware. Curious that, no?

      AMD hasn’t made many good decisions as of late. They rushed Mantle out to get some positive headwind in front of DirectX 12–which it now comes out has been in the making for over a year–and that explains Microsoft’s confused, “Wait, wha–?” reaction to its reveal in September. Who bothers to release an API right before the general API gets all the features it’s bringing? A company with no response to Gameworks, that’s who. This is also why nVidia kept quiet on making its own API. It knew AMD’s smokescreen would blow away by March.

      Even in the reveal, they promised to give more info they didn’t give, then promised a release date they didn’t remotely keep. They promised returns they couldn’t hit and the rush made the whole enterprise far buggier than one would like from a custom API. Moreover, all promises of ease of development and low time to implement disappeared when it came time to release games with it because so far every game that’s come has seen its Mantle version delayed until after the “real” (or main) release.

      Throw in their (lack of) reaction to the Cryptocoin scalping going on that basically removed the cards from the very users they needed to make hardware capable of TrueAudio and Mantle more available and prevalent, well, it’s hard to see very much except a stalled plane crashing back to Earth…

      And remind me? Who had two panelists at the OpenGL meeting of the minds between nVidia, AMD, and Intel? Seems like nVidia is the one that’s wisely sticking to supporting just two API’s and AMD’s the one trying to do that plus add another?

      And whose GPU’s were in the Steam Machine prototypes again? Whose GPU’s were demoing Xbox One software? Whose GPU’s were used to demo DirectX 12 last week?

      If AMD is so ahead of the game, why doesn’t the rest of the industry seem to know it? Perhaps you should go and tell them. 😉

        • encia
        • 7 years ago

        It depends on which developers, i.e. both NVIDIA and AMD have their happy campers.

        As for AMD Mantle, there's another console besides the Xbox One, i.e. Sony's PS4.

        AMD's work on the Mantle driver can be recycled for DirectX 12, and AMD needs to replace AMD IL with HSA IL in their drivers anyway.

        • encia
        • 7 years ago

        Did you know of TrueAudio’s existence during the 7790’s product release?

        The 7790 already included TrueAudio on the silicon, and it seems that no one noticed it until the R9 290X’s reveal.

      • UnfriendlyFire
      • 7 years ago

      But AMD is bleeding cash…

    • End User
    • 7 years ago

    Until games ship with DX12 (late 2015) I’m not going to give it much thought. By that time my GTX 770s will be over two years old and I’ll be ready for an upgrade.

    • Saber Cherry
    • 7 years ago

    If DX12 does not come out for Windows 7, then it is irrelevant to me.

      • sschaem
      • 7 years ago

      Considering that Mantle works on Windows 7 and includes even more features….

        • Saber Cherry
        • 7 years ago

        I have an Nvidia card (560 GTX) currently, but I’d much rather upgrade to a new ATI card than install a new OS. Even if the new OS was better. Which it isn’t.

      • UnfriendlyFire
      • 7 years ago

      Unless MS pulls another Windows 7 with Windows 9.

      I think we all know that Vista was never as popular as Win7.

        • Saber Cherry
        • 7 years ago

        The only reason I went from XP to 7 was 64-bit support. Why would I switch to 9, spend a bunch of money, and spend a bunch of time reorganizing and relearning things? If you use Windows, there’s no point in getting a new OS unless you get a new motherboard, because Microsoft makes the upgrading process too annoying. If it were an identical OS with under-the-hood improvements like better security, more speed, and lower RAM usage, fine, I’d pay for it. But Microsoft doesn’t do that. And until they do, I won’t buy anything from them.

    • Andrew Lauritzen
    • 7 years ago

    I realize this sounds confusing guys, but it’s not really that complicated nor different from how it works in DX11 today. There are two separate things – the API, and the hardware feature level. The DX12 API (with all the low overhead goodness) will work on current GPUs (lists of the architectures have been given elsewhere already). The new feature level 12 stuff will likely require new hardware, just as feature level 11, 11.1, etc. did.
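
    To make the API-versus-feature-level split concrete, here's a minimal sketch against today's D3D11 entry point (illustrative only, not anyone's shipping code): the runtime you call into is one thing, and the feature level the driver reports back for the hardware is another. One caveat: on pre-Windows 8 runtimes, including 11_1 in the request list can fail outright, so a real app would retry without it.

    #include <windows.h>
    #include <d3d11.h>
    #include <cstdio>
    #pragma comment(lib, "d3d11.lib")

    int main()
    {
        // Ask for the highest feature levels first; the runtime returns the
        // best one the hardware/driver combination actually supports.
        const D3D_FEATURE_LEVEL requested[] = {
            D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0,
            D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0,
        };

        ID3D11Device* device = nullptr;
        ID3D11DeviceContext* context = nullptr;
        D3D_FEATURE_LEVEL obtained = D3D_FEATURE_LEVEL_10_0;

        // Same API call on every DX11-class GPU; only 'obtained' differs.
        HRESULT hr = D3D11CreateDevice(
            nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
            requested, sizeof(requested) / sizeof(requested[0]),
            D3D11_SDK_VERSION, &device, &obtained, &context);

        if (SUCCEEDED(hr))
            std::printf("Highest supported feature level: 0x%x\n", (unsigned)obtained);

        if (context) context->Release();
        if (device)  device->Release();
        return 0;
    }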

    Games are not going to require feature level 12 any time soon. Do any games even require feature level 11 today? I’m not sure I know of any offhand and that’s several years old already vs. not even being released until next year. That’s not to say no games will make use of the features in certain quality modes, but would you prefer that innovation in GPU hardware stopped entirely?

      • encia
      • 7 years ago

      One problem, AMD claimed “FULL DirectX 12 compatibility” for their GCNs.

      From [url<]http://www.amd.com/us/press-releases/Pages/amd-demonstrates-2014mar20.aspx[/url<]

      [i<]"Full DirectX 12 compatibility promised for the award-winning Graphics Core Next architecture".[/i<]

      AMD contradicted NVIDIA's statements. The reason for the contradiction, from [url<]http://www.eurogamer.net/articles/digitalfoundry-2014-directx-12-revealed-coming-to-xbox-one[/url<]

      [i<]"Microsoft confirmed that DX12 was on the roadmap for the console, but "beyond that, we have nothing more to share.""[/i<]

      AMD has the Xbox One's built-in DX12/DirectX Next road map factor. AMD's Graphics Core Next relates to Microsoft's DirectX NEXT, aka DirectX 12.

        • Andrew Lauritzen
        • 7 years ago

        Stop repeating the same nonsense and read what I wrote. AMD’s current hardware will be “fully compatible” with the DX12 *API* in the same way that NVIDIA’s and Intel’s is. No current hardware supports conservative rasterization natively (let alone the other features) so nothing out right now is going to be a feature level 12 part.

        Just trust me, I know what I’m talking about.

          • HisDivineOrder
          • 7 years ago

          You should know better than to ask a fanboy to trust you when you’re killing his hopes and dreams of a dominant AMD future where having AMD in your PC commands respect and envy from all around you, leading to more mating opportunities with the more choice and select members of their preferred gender.

          In this world of yours where he trusts you, AMD is just another GPU with virtually no conferred advantage for all the trouble and expense of selling all those APU’s to Sony and Microsoft for a song. This would mean that all AMD’s blather about having an advantage because they’re in both consoles is just BS. It would mean Mantle is truly going to wither on the vine. It would mean–

          No. You cannot expect an AMD fanboy to accept this. You must know it is impossible. Londo Mollari would tell you what that is.

          “IT’S INTOLERABLE!” 😉

            • encia
            • 7 years ago

            DirectX doesn’t usually dictate the actual implementation i.e. it’s up to the GPU vendor to implement the said feature e.g. specialised hardware function or ALU/stream processor approach. The key is to avoid the CPU fall back mode and having a GPU design with enough flexibility to support the said feature.

            • Klimax
            • 7 years ago

            At best AMD won’t differ from NVidia…

            • encia
            • 7 years ago

            DirectX12’s “Programmable blend and efficient OIT with pixel ordered UAV” sounds like Intel Haswell IGP’s features.

            It depends on whether NVIDIA or AMD hardware can match the following features from
            [url<]http://software.intel.com/en-us/blogs/2013/07/18/order-independent-transparency-approximation-with-pixel-synchronization[/url<]

            Intel's Haswell IGP can delay its in-flight pixel shader execution, thus modifying its order. AMD GCN has out-of-order wavefront processing, which can change the order of execution.

          • encia
          • 7 years ago

          “Conservative rasterization” can be done with a geometry shader for current GPUs.

          DirectX doesn’t usually dictate the actual implementation i.e. it’s up to the GPU vendor to implement the said feature e.g. specialised hardware function or ALU/stream processor approach. The key is to avoid the CPU fall back mode and having a GPU design with enough flexibility to support the said feature.

          If the GPU’s flexibility isn’t sufficient, then the said feature would not be GPU accelerated. A CPU can run any version of Direct3D, but they are just slow.

          If the GPU runs the said feature slower than the host CPU then it’s “de-accelerated”.

          Have you asked AMD’s view on D3D12’s “conservative rasterization” and the new blending modes? What’s their response? Is it the same “Full DirectX 12 Compatibility For GCN” PR?

          You’re a “journalist”, ask the question.

            • Andrew Lauritzen
            • 7 years ago

            Geometry shader implementations of conservative rasterization typically over-estimate by quite a lot, in addition to performing poorly with sliver triangles and such. To get the desired level of accuracy would be pretty expensive, and stuff like underestimated conservative rasterization is usually infeasible. Frankly a software implementation on the CPU is probably better than that sort of GPU implementation…
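
            For readers who haven't met the term, here's a rough CPU-side sketch (function and struct names made up for illustration) of what over-estimating conservative rasterization computes: a pixel counts as covered if the triangle touches any part of the pixel square, not just its center.

            #include <algorithm>
            #include <cmath>
            #include <vector>

            struct Vec2 { float x, y; };
            struct Edge { float a, b, c; }; // a*x + b*y + c >= 0 on the triangle's interior side

            static Edge MakeEdge(Vec2 v0, Vec2 v1)
            {
                Edge e;
                e.a = v0.y - v1.y;
                e.b = v1.x - v0.x;
                e.c = -(e.a * v0.x + e.b * v0.y);
                return e;
            }

            // Returns x + y*width indices of pixels whose 1x1 square overlaps the
            // triangle. Assumes a vertex winding for which the edge functions are
            // non-negative inside the triangle.
            std::vector<int> ConservativeRasterize(Vec2 v0, Vec2 v1, Vec2 v2, int width, int height)
            {
                const Edge edges[3] = { MakeEdge(v0, v1), MakeEdge(v1, v2), MakeEdge(v2, v0) };

                const int minX = std::max(0,          (int)std::floor(std::min({v0.x, v1.x, v2.x})));
                const int maxX = std::min(width - 1,  (int)std::ceil (std::max({v0.x, v1.x, v2.x})));
                const int minY = std::max(0,          (int)std::floor(std::min({v0.y, v1.y, v2.y})));
                const int maxY = std::min(height - 1, (int)std::ceil (std::max({v0.y, v1.y, v2.y})));

                std::vector<int> covered;
                for (int y = minY; y <= maxY; ++y)
                    for (int x = minX; x <= maxX; ++x)
                    {
                        const float cx = x + 0.5f, cy = y + 0.5f; // pixel center
                        bool overlaps = true;
                        for (const Edge& e : edges)
                        {
                            // Test the pixel corner furthest toward this edge's interior side;
                            // if even that corner is outside the edge, the square can't overlap.
                            const float px = cx + (e.a >= 0 ? 0.5f : -0.5f);
                            const float py = cy + (e.b >= 0 ? 0.5f : -0.5f);
                            if (e.a * px + e.b * py + e.c < 0) { overlaps = false; break; }
                        }
                        if (overlaps) covered.push_back(y * width + x);
                    }
                return covered;
            }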

            > You’re a “journalist”, ask the question.
            Where on earth did you get that idea? Are you a journalist? Do you work for AMD? You seem to be implying you know more here than what AMD has said, so feel free to back that up with some credentials if you want anyone to take you seriously.

            If AMD really thinks such an implementation (which can already be done by apps today on any GPU obviously) is acceptable to claim “full” support of conservative rasterization then they’re in a sad place. Thus I’m gonna assume that’s just your speculation and leave them with a little respect.

            • encia
            • 7 years ago

            Why do it on the CPU when GpGPU can do it?

            From [url<]http://gearnuke.com/mantle-to-bring-ps4-asynchronous-fine-grain-compute-to-pc/[/url<]

            Sony's PS4 can do "fine grain compute" with its 8 ACE units, and the R9-290/R9-290X has similar 8 ACE unit improvements. According to Sony, its ACE units act like CELL's SPUs. GCN also has a scalar processor for each CU which is not directly exposed to the DX and GL APIs. I have already posted an example of this scalar function.

            ------------------

            >Where on earth did you get that idea? Are you a journalist?

            You created this article and it wouldn't be complete without asking the questions of the matching vendors.

            >Do you work for AMD?

            Nope. I'm just relaying AMD's simple claims.

            >You seem to be implying you know more here than what AMD has said.

            What is AMD's claim? It's pretty simple.

            >If AMD really thinks such an implementation (which can already be done by apps today on any GPU obviously) is acceptable to claim "full" support of conservative rasterization then they're in a sad place

            For "conservative rasterization", have you programmed at GCN metal? Geometry shader is nowhere near GCN's metal. GCN metal is lower than Mantle, i.e. going down to ISA-level specifics.

            >Thus I'm gonna assume that's just your speculation and leave them with a little respect

            Asking AMD for clarification on the word "FULL" wouldn't be a bad start.

            • Andrew Lauritzen
            • 7 years ago

            Well, with the start of your post you’ve basically parked yourself in full internet fanboy territory and given up any pretense that you know what you’re talking about. Thus I guess the technical conversation is over.

            > You created this article
            Uhh… no I didn’t… why do you think I work for TR? You getting confused with the subscriber icons?

            • encia
            • 7 years ago

            For AMD GCN, read [url<]http://www.slideshare.net/zlatan4177/gpgpu-algorithms-in-games[/url<]

            AMD talks about the following points:
            1. Increasing culling details via HSA compute instead of the CPU.
            2. "Software rasterization" via HSA compute instead of the CPU. This is similar to the PS3's CELL-style approach.

            Remember, a CPU can run any DirectX version. AMD has the basis for its "FULL DirectX 12 compatibility" claim.

            • ogrdnv
            • 7 years ago

            > “Conservative rasterization” can be done with a geometry shader for current GPUs.
            The whole point of some features is perf (we've been able to do tessellation, conservative rasterization, megatextures, and many other currently hardware-accelerated features since DX9 hardware). Current geometry shader implementations of conservative rasterization would be bounded by geometry shader perf, and geometry shader perf is already one of GCN's weak points, so there's no sense in emulation if the end perf is 10-20 times slower than a competitor's hardware implementation.

            • encia
            • 7 years ago

            [url<]http://www.pcper.com/reviews/General-Tech/New-3DMark-Benchmark-Testing-Smartphones-Multi-GPU-Gaming-PCs/Ice-Storm-and-Clo[/url<]

            [i<]Graphics Test 2

            3DMark Fire Strike Graphics test 2 focuses on particles and GPU simulations. Particles are drawn at full resolution and dynamic particle illumination is enabled. There are two smoke fields simulated on GPU. Six shadow casting spot lights and 65 non-shadow casting point lights are present. On average, 2.6 million vertices containing 240,000 input patches for tessellation are processed and 1.4 million primitives are generated with [b<]geometry shaders[/b<]. That results in 5.8 million triangles being rasterized per frame on average. Compute shaders are invoked 8.1 million times per frame for particle and fluid simulations and for post processing steps. On average, 170 million pixels are processed per frame. Post processing includes a depth of field effect.[/i<]

            From [url<]http://www.pcper.com/reviews/General-Tech/New-3DMark-Benchmark-Testing-Smartphones-Multi-GPU-Gaming-PCs/Fire-Strike-Stand[/url<]

            On Graphics Test 2, the 7970 outscores the 680.

            • ogrdnv
            • 7 years ago

            [url<]http://www.ixbt.com/video3/gk110-2-part2.shtml[/url<]

            Search for "геометрические шейдеры" ("geometry shaders"). The N-body compute kernel also uses geometry shaders for visualization.

            Some geometry shaders are limited by stream-out bandwidth; in those cases the 680 will be slower than the 7970 for obvious reasons.


        • Ninjitsu
        • 7 years ago

        You’ve repeated this so many times I’ve been wondering whether you work for AMD. But OK, we’ll see next year what compatibility means to AMD.

          • encia
          • 7 years ago

          I’m also interested in AMD’s “FULL compatibility with DirectX12”.

          • encia
          • 7 years ago

          AMD has used “FULL” for their DirectX 11.1 support i.e. DirectX 11.1 Feature Level 11_1.

          [i<]"Yes, AMD has FULL DirectX® 11.1 support"[/i<] [url<]http://community.amd.com/community/amd-blogs/amd-gaming/blog/2012/12/14/yes-amd-has-full-directx-111-support[/url<] For DirectX 11.2 and AMD GCN. “The AMD Radeon™ HD 7000 series hardware architecture is [b<]fully DirectX® 11.2-capable [/b<]when used with a driver that enables this feature. AMD is planning to enable DirectX 11.2 with a driver update in the Windows® 8.1 launch timeframe in October, when DirectX® 11.2 ships. Today, [i<]AMD is the only GPU manufacturer to offer [b<]fully-compatible DirectX 11.1 support[/b<][/i<], and the only manufacturer to support Tiled Resources Tier-2 within a shipping product stack.” [url<]http://wccftech.com/amd-catalyst-14-1-beta-launching-q1-2014-supports-directx-112-mantle-trueaudio-frame-pacing-fix-gpus-apus/[/url<] [url<]http://cdn4.wccftech.com/wp-content/uploads/2014/01/AMD-14.1-Windows-8.1-Support-635x422.jpg[/url<] Again, AMD claims 1. "fully-compatible DirectX 11.1". 2. "fully DirectX® 11.2-capable". 3. "full DirectX 11.2 support". The keyword is FULL and AMD is consistent in that regard.

    • Ninjitsu
    • 7 years ago

    [quote<]DirectX 12 will indeed make lower-level abstraction available (but not mandatory—there will be backward-compatibility with DX11) on existing hardware.[/quote<]

    [url=https://techreport.com/news/26199/directx-12-to-support-existing-hardware-first-games-due-in-late-2015?post=809291<]CALLED IT![/url<]

    Well, at least got the Nvidia side right.

    • Amazing Mr. X
    • 7 years ago

    Ouch. Well, that information just killed sales of the GTX 700, GTX Titan, and GTX 800m units.

    If this information is accurate, I can see why Nvidia held back on the more expensive Maxwell series flagships and 800 series desktop parts. I mean honestly, who would buy that kind of super high-end gear with DirectX 12 guaranteeing it won't be able to max games in 1.5 years thanks to mysterious and otherwise TBA features? You know, other than the guys who just buy this stuff [i<]every[/i<] year regardless?

    A GPU which will only take you 1.5 years before falling out of date doesn't sound like a great investment, especially in the GTX 770 area and up. At that point it'd just be less painful to wait, save your money, and buy whatever fully compatible cards Nvidia releases for DX12. That's assuming, of course, that the new features requiring new hardware are actually worth waiting for. I guess only time will tell for sure on that front.

      • encia
      • 7 years ago

      The lack of Feature Level 11_1 didn’t stop NVIDIA’s Feature Level 11_0 GPU sales.

      I’m guessing DirectX 12 Feature Level 11_0.

        • Amazing Mr. X
        • 7 years ago

        To be fair, 11_1 and even 11_2 were not high profile releases. They added very few features and were exceptionally small updates only intended for Windows 8 and 8.1 respectively. 11_1 only made it to Windows 7 because of complaints regarding segmentation designed to force sales of the new OS, and even then I can’t find any games that support it apart from Battlefield 4 or Star Trek Online. As far as 11_2 goes, I can’t find anything at all that claims support for it. Based solely on the poor adoption by developers and the relative lack of outcry related to it on the part of gamers, I think it’s safe to say those updates just weren’t important to the vast majority of the gaming crowd.

        Though, I think it’s safe to say that a release with as much media coverage as that of DirectX 12 is a bit more important, especially since it’s literally being pitted against AMD’s Mantle which has a ton of coverage out there already. For Microsoft to announce a competing standard this far in advance of a release, either they’re deeply afraid of AMD’s tech or they know what they’re doing is worth a year plus of hype. Personally, it inspires the thought that the features they have yet to discuss are likely the most compelling cards in their proverbial hand, and that thought alone certainly breeds excitement for me as a consumer. In fact, for me it breeds a fear that if I upgrade too early I might miss out on the most impressive parts of DirectX 12, the parts best saved for a last-minute reveal and last-ditch revitalization of marketing hype.

        Though, judging strictly by the high number of downvotes on my original post, I think it’s safe to say that my thoughts are merely my own, and do not reflect any of the other good people that call this comment section home. So, as to not insult the community any further, I withdraw my position. Clearly, thinking that a $500+ purchase focused around a much beloved hobby should remain at or near the very literal top of the litter for more than 1.5 years is strictly a thought that should remain outside of the realm of a graphics card purchase. Indeed, it is simply not an appropriate thing to think in [i<]any[/i<] context related to this situation. Very well, Internet. You win.

          • yogibbear
          • 7 years ago

          I haven’t thumbed you either way cause I find it mostly a silly system when people are actually making reasonable discussion (whether I agree or disagree with their point of view), but I don’t think the argument you’re trying to make is one I would agree with. I’ve had my 770GTX for quite a while now, and by the time a game is released with significant DX12 features I daresay it will have been more than 2 yrs. So I don’t see what your point is in your third paragraph in the original comment? The second paragraph about Nvidia being coy is a much more logical and agreeable one that they don’t want to scare away customers, which is fine. But again, I think the thing that sells the DX12 GPUs is DX12 software, not DX12 hardware. i.e. DX12 release date != DX12 software availability.

            • Amazing Mr. X
            • 7 years ago

            Ah, well that might take a bit of explanation to clarify, but the short end of it is that Microsoft appears to be implying that DirectX 12 software will be releasing alongside DirectX 12 as a standard. As such, in 1.5 years time when we all receive some sort of DirectX 12 related software patch on Windows, there will be games for us to try with it available within that launch window. To that end, Microsoft showed us a DirectX 11.2 optimized game in Forza 5 running in DirectX 12 after only a few months of porting work by a small team. To be fair, it’s a small game, but the implication is that a larger team could port anything to DirectX 12 and get a game up and running considerably faster. If we can expect development times to remain that small on DirectX 12, as is certainly implied, then the fact that Microsoft is making their SDK available so quickly is heavily implying that developers will have plenty of time to update their in-progress projects to the new standards in time for launch. Thus, my thought is that DirectX 12 will release alongside a handful of indie and AAA titles designed to push it early. We may even, from what they showed us, have updates for all of 3D Mark’s graphics benchmarking tools as well.

            It should also be noted that I was discussing future sales of graphics cards. People like you and I who have had cards for quite a while are hardly going to return our GPUs now. However, a new consumer, fresh off of the heels of this announcement, might think twice about blowing too much money on a GPU with so many promises so very close on the horizon. Even if a person was going to buy a Titan Black or two tomorrow, they may read an announcement like this and decide against it. At least, those were my thoughts. Clearly other people did not agree with me in the slightest.

            To each their own.

            • nanoflower
            • 7 years ago

            Yes, they did imply there would be SOMETHING available Christmas 2015, but they didn’t say what. Given past experience I expect we may see a game or two that support DirectX 12 features at that time, but they will still run fine if you don’t have DirectX 12 capable hardware. It will take a year or so after that before we start to see a number of games coming in that use DirectX 12 (based on what we saw with DirectX 10/11). Plus we know that GPUs can support later DirectX versions (as in the 700 series supporting DirectX 12) even if they don’t do it as well as if they supported the features in hardware.

            So I don’t see a major issue in buying a GPU today because of DirectX 12 coming out in almost 2 years. Now, next summer that might be an issue, but not today.

          • encia
          • 7 years ago

          The Forza 5 port runs pretty well on a Feature Level 11_0 GPU.

    • Bensam123
    • 7 years ago

    Hit reg with DirectX? I didn’t know the two had anything to do with each other. Hit reg is usually an engine thing. oO That could be fairly interesting depending on what it turns out to be.

    So much for the added hardware physics acceleration people were talking about back when the whole physics push was going on in 2009. Honestly if MS was trying to do something bleeding edge, they’d still be working on something like that.

    Still waiting for a good old accelerated physics API to pave the way.

      • willmore
      • 7 years ago

      A polygon intersection engine of some sort?

    • joyzbuzz
    • 7 years ago

    How does DX12 fit into Microsoft’s long-term strategy, namely a fully locked-down ecosystem?

    Possibly using the carrot of Xbox exclusive games available to a much more locked down Windows 9 buyer? Microsoft has a heck of a lot of first party developer irons in the fire, why not use all those coming XB1 games to leverage acceptance of their walled garden ecosystem? Say Halo 5 being released concurrent with Windows 9 and playable only on XB1 and Windows 9 PCs, or maybe only available on DRMed to a fare thee well Microsoft Surface branded phones, tablets, laptops and desktops.

    Reasonable to assume those new DX12 features and subsequent GPU hardware modifications will derive from XB1 hardware capabilities?

    Be that as it may it’s clear there are a whole lot of developers eager to get on the Mantle bandwagon, and many of those would be only too happy to see a highly efficient universal API NOT under the thumb of Microsoft.

      • Voldenuit
      • 7 years ago

      Are you advocating that MS expand their walled garden ecosystem? Because I don’t see how that’s good for consumers.

        • joyzbuzz
        • 7 years ago

        Lol, I like open and free.

        Sinofsky’s vision was to create MicroApple, and he apparently sold that vision to Ballmer and the board; hence Gabe and Steam OS. Sinofsky overreached on Windows 8, which blew up in his face and got him fired, but there’s no evidence Microsoft has changed its mind about creating that Apple-like ecosystem, so I assume DX12 is configured to help implement that over-arching strategy.

        Microsoft has a very steep hill to climb to successfully migrate to a walled-garden OS; it’s going to need some serious carrots to lure customers there. XB1-exclusive games available in their store for Windows RT are one such carrot. Thus the idea that the intention of DX12 is to migrate the XB1 API to the PC, particularly for upcoming GPUs, making it very easy for developers to port their XB1 games to Xbox (or whatever) branded hardware running Windows X. Looking ahead a bit, an XB1 port to 20nm XB1-centric GPUs would provide an excellent gaming experience across Microsoft’s hardware ecosystem.

        Maybe, maybe not, just playing with the possibilities.

      • Grape Flavor
      • 7 years ago

      You don’t really know a thing about Microsoft’s long term strategy. That’s all just speculation.

      And for what it’s worth all the rumors that are out point to Windows 9 pivoting back TOWARDS the desktop, not cutting it away to create a Metro walled garden only OS.

        • joyzbuzz
        • 7 years ago

        You’re going to call me out for ‘speculation’ that Microsoft’s strategic goal is a closed ecosystem (*cough* Steam OS *cough*), while speaking of ‘all the rumors pointing to pivoting back towards the desktop’ when THAT is coming with Windows 8.1 Update 1.

        Windows 9 is all about a unified OS and Metro 2.0, oh clueless one.

      • Sahrin
      • 7 years ago

      Microsoft is subject to an anti-trust settlement on x86 hardware; the ‘long term strategy of a fully locked down system’ is not in the cards because of the settlement.

    • Srsly_Bro
    • 7 years ago

    So I’m totally confused. There are rumors that DX12 is coming to XB1. XB1 is built on DX11 hardware AFAIK. Are the features of DX12 just compatible with DX11 hardware unlike DX9 to DX10?

      • Billstevens
      • 7 years ago

      Well, the Xbone is their baby, so maybe it has some of the new hardware in it, such as how the CPU and GPU interact. Or this confirms that the Xbone won’t take advantage of everything new in DX12. It’s certainly possible to use the new API without using features not relevant to older hardware.

      • encia
      • 7 years ago

      Mr. Tamasi is from Nvidia, NOT from AMD or Intel.

      Tamasi’s statement should NOT be applied to non-NVIDIA products.

      From [url<]https://www.amd.com/us/press-releases/Pages/amd-demonstrates-2014mar20.aspx[/url<]

      "AMD Demonstrates Full Support for New DirectX 12"

        • nanoflower
        • 7 years ago

        Except that is not what the actual article says. The blurb by the writer says that but AMD only says that they will support DX12 on GCN, not that they will support all DX12 features on GCN.

          • encia
          • 7 years ago

          Tamasi’s statement should NOT be applied to non-NVIDIA products, i.e. NVIDIA doesn’t have authority over AMD, Intel and Qualcomm products.

          NVIDIA’s hardware feature level remains at 11_0.

          ————–

          From [url<]http://blogs.windows.com/windows/b/appbuilder/archive/2013/10/14/raising-the-bar-with-direct3d.aspx[/url<]

          "The Xbox One graphics API is “Direct3D 11.x” and the Xbox One hardware provides a superset of Direct3D 11.2 functionality"

          Xbox One's GCN hardware and DirectX 11.X graphics API already exceed the PC's DirectX 11.2 functionality.

          --------------

          If you read [url<]https://www.amd.com/us/press-releases/Pages/amd-demonstrates-2014mar20.aspx[/url<]

          "Full DirectX 12 compatibility promised for the award-winning Graphics Core Next architecture".

          Notice "FULL DirectX 12 compatibility" for their Graphics Core Next architecture.

      • tipoo
      • 7 years ago

      DirectX 12 is also compatible with current GPUs, so no hardware change is required (though new chips will get new features with it too). That’s why it can work on the XBO GPU.

        • encia
        • 7 years ago

        You didn’t read [url<]http://blogs.windows.com/windows/b/appbuilder/archive/2013/10/14/raising-the-bar-with-direct3d.aspx[/url<]

        "The Xbox One graphics API is “Direct3D 11.x” and the Xbox One hardware provides a superset of Direct3D 11.2 functionality"

        Note "Xbox One hardware provides a superset of Direct3D 11.2 functionality". Xbox One's GCN exceeds the PC's feature level 11_0 and feature level 11_1 (required for full DirectX 11.2 support).

        If you read [url<]http://timothylottes.blogspot.com.au/2013/08/notes-on-amd-gcn-isa.html[/url<]

        "DX and GL are years behind in API design compared to what is possible on GCN. For instance there is no need for the CPU to do any binding for a traditional material system with unique shaders/textures/samplers/buffers associated with geometry. Going to the metal on GCN, it would be trivial to pass a 32-bit index from the vertex shader to the pixel shader, then use the 32-bit index and S_BUFFER_LOAD_DWORDX16 to get constants, samplers, textures, buffers, and shaders associated with the material. Do a S_SETPC to branch to the proper shader"

        AMD's GCN ISA exceeds the current DX and GL.

          • renz496
          • 7 years ago

          MS still isn’t talking about the full DX12 spec. How are you so sure that current GCN hardware already has everything needed for every DX12 feature?

            • encia
            • 7 years ago

            From [url<]https://www.amd.com/us/press-releases/Pages/amd-demonstrates-2014mar20.aspx[/url<]

            "Full DirectX 12 compatibility promised for the award-winning Graphics Core Next architecture" - AMD.

            AMD's PR has claimed "FULL DirectX 12 compatibility" for their current GCNs. NVIDIA has yet to claim "FULL DirectX 12 compatibility".

            • renz496
            • 7 years ago

            So if AMD's GCN hardware already covers all the hardware needed for DX12, then why didn't MS use an AMD GPU to demonstrate the Forza 5 demo using DX12?

            • encia
            • 7 years ago

            Perhaps the same reasons for the last DirectX 11.2’s Tiled Resource Tier 1 demo + NVIDIA GeForce GTX 770 hardware, while AMD’s PC GCN supports 11.2 Tiled Resource Tier 2/Full DirectX 11.2.

            Microsoft has to show their abstraction layer is good enough outside of Xbox One hardware and using AMD PC GCN wouldn’t show this case.

            Anyway,

            1. AMD’s PR claims “FULL DirectX 12 compatibility promised for … Graphics Core Next architecture.”

            2. Nvidia’s Tony Tamasi doesn’t have any authority over AMD, Intel and Qualcomm hardware.

            • sweatshopking
            • 7 years ago

            compatibility doesn’t mean support for all features. those aren’t the same words. OMG I CAN’T BELIEVE AMD WOULD EVER USE PR AND MARKETING TO PRESENT SOMETHING DIFFERENT THAN REALITY!!
            i’m not saying they DON’T support it all. I’ll just be skeptical till the whitepapers are out.

            • encia
            • 7 years ago

            Using the AMD64/X86-64 CPU example, “full compatibility” means full features.

            You are forgetting the Xbox One factor.

            There are faster X86-64 CPUs than the first-gen AMD64/X86-64 CPUs.

            From
            [url<]http://www.eurogamer.net/articles/digitalfoundry-2014-directx-12-revealed-coming-to-xbox-one[/url<]

            [i<]Microsoft confirmed that DX12 was on the roadmap for the console, but "beyond that, we have nothing more to share."[/i<]

            The GCN designed for the Xbox One was on the DX12 road map, and NVIDIA missed out on the Xbox One design. I have posted an example that shows AMD's GCN ISA is years ahead of DX and GL.

            • sweatshopking
            • 7 years ago

            That doesn’t mean anything. I think it might be likely, but I still don’t trust PR.

            • encia
            • 7 years ago

            The main difference is AMD used the word “FULL” for their compatibility with “DirectX 12”.

            NVIDIA haven’t claimed “FULL” for their DirectX 12 compatibility.

            ———-

            AMD has used “FULL” for their DirectX 11.1 support i.e. DirectX 11.1 Feature Level 11_1.

            [i<]"Yes, AMD has FULL DirectX® 11.1 support"[/i<] [url<]http://community.amd.com/community/amd-blogs/amd-gaming/blog/2012/12/14/yes-amd-has-full-directx-111-support[/url<] For DirectX 11.2 and AMD GCN. “The AMD Radeon™ HD 7000 series hardware architecture is [b<]fully DirectX® 11.2-capable [/b<]when used with a driver that enables this feature. AMD is planning to enable DirectX 11.2 with a driver update in the Windows® 8.1 launch timeframe in October, when DirectX® 11.2 ships. Today, [i<]AMD is the only GPU manufacturer to offer [b<]fully-compatible DirectX 11.1 support[/b<][/i<], and the only manufacturer to support Tiled Resources Tier-2 within a shipping product stack.” [url<]http://cdn4.wccftech.com/wp-content/uploads/2014/01/AMD-14.1-Windows-8.1-Support-635x422.jpg[/url<] Again, AMD claims 1. "fully-compatible DirectX 11.1". 2. "fully DirectX® 11.2-capable". 3. "full DirectX 11.2 support". The keyword is FULL and AMD is consistent in that regard.

            • sweatshopking
            • 7 years ago

            It means nothing. Repeating it again will not change that. Pr is lies and dishonesty. Saying they lied about it this way vs another doesn’t mean a thing. I trust AMD about as much as I trust NVIDIA.

            • encia
            • 7 years ago

            Your statement is just rhetoric and unproven.

            • sweatshopking
            • 7 years ago

            as is yours. WHICH IS EXACTLY MY POINT.

            • encia
            • 7 years ago

            The difference is that one is from an official source.

            • sweatshopking
            • 7 years ago

            [url<]http://www.neowin.net/news/nvidia-directx-12-will-support-unannounced-features-on-future-graphics-chips[/url<]

            so.... NVIDIA says new features will only work on NEW GPUS. who's right?

            • encia
            • 7 years ago

            Again, NVIDIA doesn’t have any authority on non-NVIDIA GPUs.

            For AMD GCN, read [url<]http://www.slideshare.net/zlatan4177/gpgpu-algorithms-in-games[/url<]

            AMD talks about the following points:
            1. Increasing culling details via HSA compute instead of the CPU.
            2. "Software rasterization" via HSA compute instead of the CPU. This is similar to the PS3's CELL-style approach.

            Remember, a CPU can run any DirectX version. AMD has the basis for its "FULL DirectX 12 compatibility" claim.

            • sweatshopking
            • 7 years ago

            Why? Because they mention 2 of the features? Nowhere are the other, unreleased ones mentioned, so who knows. Man, you’re being really weird about this. You’ve been a lurker for a decade, then you go full on like crazy.

            • encia
            • 7 years ago

            “Programmable blend and efficient OIT with pixel ordered UAV” is an Intel API extension i.e. an example of NVIDIA having zero authority over non-NVIDIA products.

            • sweatshopking
            • 7 years ago

            when amd announces the 9k series with new extensions are you going to be this active on the story saying “OMG GUYZ! I WAS TOTES WRONG!”? there will be new features that amd doesn’t support.

            • encia
            • 7 years ago

            Again, AMD promises “FULL DirectX 12 compatibility” for their current GCN.

            Your statement doesn’t address the issues involved and focuses on personal debate.

            The article’s Intel-based DX12 API has already proven NVIDIA’s lack of authority over Intel products.

            • sweatshopking
            • 7 years ago

            OK MAN. SEE YOU AT THE 9K LAUNCH!

            • encia
            • 7 years ago

            It depends who can deliver the best bang for the buck at $399 USD.

      • Bensam123
      • 7 years ago

      Guessing that’s why they’re making DX12 backwards compatible. It sorta seems like this is just another revision of DX11.

        • HisDivineOrder
        • 7 years ago

        Seems like the majority of what’s going to be amaaaazing about DirectX 12 is backported from DirectX 11.x from Xbox One. That’s the low level access.

        I wonder why AMD thought it necessary to make Mantle since they were working on DirectX 12 (or 11.x’s port) for over a year now…?

          • encia
          • 7 years ago

          You’re forgetting Sony’s PS4, and EA DICE has stated that their Battlefield 4 Mantle path was similar to the PS4’s render path.

          AMD has two game consoles with GCNs and will have two low level APIs on the PC.

          It would be LOL if Intel Haswell has the more complete DirectX 12 over NVIDIA and AMD.

          • sschaem
          • 7 years ago

          Because DX12 will be out in late 2015?
          Because it took 4+ years for Microsoft to support GCN features that were designed in hardware in 2011?

          With Mantle, AMD is free from Microsoft's schedule. MS only does things that fit their agenda.

          So if AMD decides to release a raytracing accelerator, it can be used with Mantle the day the hardware is released. It would take 4 to 5 years for Microsoft to support it.
          All the CryEngine, Frostbite, Unreal, etc. games could leverage the engine's better lighting model on day one. Moving the industry forward. Not stagnating, like Microsoft likes it to be.

          And look at the GPU options they had for the XB1… they castrated that product to the bone.

          And look at CUDA vs. DirectCompute. Mantle exists for the same reason CUDA was created, and still exists. It took Microsoft 3 years.. 3 years.
          Can you imagine if Nvidia had relied on Microsoft? None of that hardware could have been used for 3 years.
          The same happened to GCN features, like the virtual texture memory manager. And it still can't be used on Windows 7!!! All of this is OUTRAGEOUS.

          Microsoft has also become 'irrelevant' in the API business… DirectX is now just a pain in the arse for developers.

          • Bensam123
          • 7 years ago

          MS hasn’t exactly been open about DX12… Unless you were privy to insider information, none of us were. I didn’t hear anything about this, and AFAIK MS hasn’t released a development timeline for DX12.

    • southrncomfortjm
    • 7 years ago

    Guess I’ll stick with my 7850 a bit longer instead of upgrading to a GTX 770.

      • LostCat
      • 7 years ago

      I still haven’t decided what to go to from my 660, but it’ll definitely support Mantle.

      • encia
      • 7 years ago

      From [url<]http://blogs.windows.com/windows/b/appbuilder/archive/2013/10/14/raising-the-bar-with-direct3d.aspx[/url<]

      "The Xbox One graphics API is “Direct3D 11.x” and the Xbox One hardware provides a superset of Direct3D 11.2 functionality"

      Xbox One's DirectX 11.X graphics API and GPU hardware exceed the PC's DirectX 11.2.

      Tamasi's statement should NOT be applied to non-NVIDIA products, i.e. NVIDIA doesn't have authority over AMD, Intel and Qualcomm products.

        • BIF
        • 7 years ago

        [quote<]"Tamasi's statement should NOT be applied on non-NVIDIA products i.e. NVIDIA doesn't have the authority over AMD, Intel and Qualcomm products." [/quote<] What are you, a robot? We get it, so stop it already.

          • encia
          • 7 years ago

          Some users still apply NVIDIA’s DX12 issues to non-NVIDIA hardware.

            • HisDivineOrder
            • 7 years ago

            You’ve already made the leap from the seemingly arbitrary emphasis on the word “full” versus lacking full (only done by AMD, not even done by Microsoft) and gone to “nVidia’s DirectX 12 issues?”

            They have issues? Already? For an unreleased spec? Based on what? They’re admitting there are features that will be included in DirectX 12 that won’t be part of existing cards? That’s because all DirectX 11.2 cards, full compatibility or otherwise, are going to be missing certain features of the new spec.

            This feels like someone just aching to scream to the world, “WHY ARE YOU BUYING NVIDIA CARDS WHEN THEY ONLY SUPPORT DIRECTX 11.2 IN SOFTWARE!? DAMN MICROSOFT FOR GIVING INTO NVIDIA AND MAKING IT SO THEY’RE STILL LISTED AS DIRECTX 11 COMPATIBLE!? WE ALL KNOW THE TRUTH! NVIDIA IS A CHEATER! RAWR! LOUD NOISES!”

            Dude. The features that AMD supports that make its GPU’s DirectX 11.2 versus nVidia’s more pedestrian DirectX 11 are not gaming related. That’s why they don’t matter. You know what WOULD make more difference? If AMD supported multithreaded rendering in DirectX…

            [url<]http://forums.anandtech.com/showpost.php?p=31520674[/url<]

            Instead, they:

            [quote<]Coincidentally, last month's interview with AMD's Richard Huddy at Bit-Tech also has a lot in common with this. AMD says DX11 multi-threaded rendering can double object/draw-call throughput, and they want to go well beyond that by bypassing the DX11 API.[/quote<]

            That's right. They were planning to go it alone rather than take the time to update their DirectX drivers to do something that could conceivably have made DirectX far more capable when both GPU makers supported the same amazing feature. Acting like Apple, AMD held back vital features so they could better sell their locked-down, closed-down future product instead.

            Did I just blow your mind? DirectX 11.2 support by AMD isn't supporting EVERY OPTIONAL FEATURE OF DIRECTX!? Even some truly important ones that could have improved gaming by leaps and bounds and kept AMD from "needing" Mantle? Didn't it ever make you wonder when nVidia drivers seemed to be mostly keeping up with Mantle when push came to shove in normal scenarios? Dun dun dun.

            Psssst... do you think AMD will get around to adding in multithreaded rendering when/if it's a part of the "full" DirectX feature set? 😉
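
            (For anyone wondering what the "DX11 multi-threaded rendering" in that quote refers to: it's the deferred-context/command-list path that has been in the D3D11 API since 11.0; how well the driver handles it is the part that varies by vendor. A minimal sketch, assuming an already-created device and immediate context; the helper name and placeholder draw are purely illustrative.)

            #include <d3d11.h>

            // Record work on a (potentially worker-thread) deferred context, then
            // replay it on the immediate context from the submitting thread.
            void RecordAndReplay(ID3D11Device* device, ID3D11DeviceContext* immediateContext)
            {
                ID3D11DeviceContext* deferredContext = nullptr;
                if (FAILED(device->CreateDeferredContext(0, &deferredContext)))
                    return;

                // Record state and draw calls here (placeholder only).
                deferredContext->ClearState();
                // deferredContext->Draw(...);

                ID3D11CommandList* commandList = nullptr;
                if (SUCCEEDED(deferredContext->FinishCommandList(FALSE, &commandList)))
                {
                    immediateContext->ExecuteCommandList(commandList, TRUE);
                    commandList->Release();
                }
                deferredContext->Release();
            }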

            • encia
            • 7 years ago

            YOU haven’t read the article i.e. NVIDIA already acknowledged their issues with DirectX 12.

            At similar time period, AMD used the word “FULL” for their compatibility with “DirectX 12”.

            Also, “Conservative rasterization” can be done with a geometry shader for current GPUs i.e. it can avoid the host CPU fall back when done correctly.

            DirectX doesn’t usually dictate the actual implementation i.e. it’s up to the GPU vendor to implement the said feature e.g. specialised hardware function or ALU/stream processor approach.

            ———-

            AMD has used “FULL” for their DirectX 11.1 support i.e. DirectX 11.1 Feature Level 11_1.

            [i<]"Yes, AMD has FULL DirectX® 11.1 support"[/i<] [url<]http://community.amd.com/community/amd-blogs/amd-gaming/blog/2012/12/14/yes-amd-has-full-directx-111-support[/url<] For DirectX 11.2 and AMD GCN. “The AMD Radeon™ HD 7000 series hardware architecture is [b<]fully DirectX® 11.2-capable [/b<]when used with a driver that enables this feature. AMD is planning to enable DirectX 11.2 with a driver update in the Windows® 8.1 launch timeframe in October, when DirectX® 11.2 ships. Today, [i<]AMD is the only GPU manufacturer to offer [b<]fully-compatible DirectX 11.1 support[/b<][/i<], and the only manufacturer to support Tiled Resources Tier-2 within a shipping product stack.” [url<]http://wccftech.com/amd-catalyst-14-1-beta-launching-q1-2014-supports-directx-112-mantle-trueaudio-frame-pacing-fix-gpus-apus/[/url<] [url<]http://cdn4.wccftech.com/wp-content/uploads/2014/01/AMD-14.1-Windows-8.1-Support-635x422.jpg[/url<] Again, AMD claims 1. "fully-compatible DirectX 11.1". 2. "fully DirectX® 11.2-capable". 3. "full DirectX 11.2 support". The keyword is FULL and AMD is consistent in that regard. ----- Xbox One's DirectX 11.X superset, PS4's GNMX/GNM and AMD Mantle APIs already proven that AMD's GCN hardware can handle multi-threading command submission, while NVIDIA hasn't proved they have 64 slot UVAs for all shader types which is required for Feature Level 11_1. Please show NVIDIA Kepler/Maxwell being hardware capable for 64 slot UVAs for all shader types. From [url<]http://www.eurogamer.net/articles/digitalfoundry-2014-directx-12-revealed-coming-to-xbox-one[/url<] [i<]"Microsoft confirmed that DX12 was on the roadmap for the console, but "beyond that, we have nothing more to share.""[/i<] You are forgetting the Xbox One factor for AMD GCNs. "Multithreaded rendering through command lists and multiple contexts" came with DirectX 11.0 not DirectX 11.1 or DirectX 11.2. You're wrong in that regard. Xbox One's DirectX 11.X superset has it's own "Multithreaded rendering through command lists and multiple contexts" and this version runs on AMD GCN.

            • sweatshopking
            • 7 years ago

            You still missed the multithreading. What AMD GPUS CAN TECHNICALLY DO and what they actually do aren’t the same. Amd drivers suck, and they’re lying when they claim full dx11.2 support when they don’t have multithreading, nm full dx 12 support.

            • encia
            • 7 years ago

            MTR at the driver level was introduced at DirectX 11.0 not at DirectX 11.1 and 11.2.

            AMD wasn’t lying for DirectX 11.1 and 11.2 and the GCN hardware is MTR capable as proven by Xbox One, Mantle, PS4 and soon DirectX12.

            • sweatshopking
            • 7 years ago

            WUT!? THEY FULLY SUPPORT 11.2, BUT NOT 11? sorry, that makes no sense. you’re REALLY reaching here. just let it go. amd has crap software support. their hardware is great, but they have garbage drivers, and an awful pr team.

            • encia
            • 7 years ago

            Who cares when AMD GCN has Xbox One’s DirectX 11.X level drivers.

          • encia
          • 7 years ago

          Well, some people don’t get it. The cited API is not new i.e. Intel’s API extension.

    • encia
    • 7 years ago

    The article’s Conservative Rasterization link runs on GeForce FX 5600. LOL.

    From [url<]http://timothylottes.blogspot.com.au/2013/08/notes-on-amd-gcn-isa.html[/url<]

    "DX and GL are years behind in API design compared to what is possible on GCN. For instance there is no need for the CPU to do any binding for a traditional material system with unique shaders/textures/samplers/buffers associated with geometry. Going to the metal on GCN, it would be trivial to pass a 32-bit index from the vertex shader to the pixel shader, then use the 32-bit index and S_BUFFER_LOAD_DWORDX16 to get constants, samplers, textures, buffers, and shaders associated with the material. Do a S_SETPC to branch to the proper shader"

    From EA DICE (2012)'s [url<]http://www.slideshare.net/DICEStudio/new-13912730[/url<]

    Slide 32: Graphics Pipeline is fast but fixed
    - No conservative rasterization
    - No programmable blending
    - No flexible texture filtering (min/max/derivative)

    Programmable blending is already available in other APIs, e.g. [url<]http://renderingpipeline.com/2012/09/opengl-programmable-blending-apple_shader_framebuffer_fetch/[/url<]

    • UnfriendlyFire
    • 7 years ago

    So… Current gen GPUs would only support some of DX12’s features?

      • mesyn191
      • 7 years ago

      That would appear so.

        • encia
        • 7 years ago

        From [url<]https://www.amd.com/us/press-releases/Pages/amd-demonstrates-2014mar20.aspx[/url<]

        "Full DirectX 12 compatibility promised for the award-winning Graphics Core Next architecture"

        AMD's PR claimed "FULL DirectX 12 compatibility" for their current GCNs.

          • Klimax
          • 7 years ago

          AMD’s PR department is known for accuracy and no overpromising…

            • encia
            • 7 years ago

            There are links that show AMD’s GCN ISA being years ahead of DX and GL, for some reason.

            • Klimax
            • 7 years ago

            Evidence?

            • the
            • 7 years ago

            While I would not cite this as direct support for encia, the [url=http://developer.amd.com/wordpress/media/2012/12/AMD_Southern_Islands_Instruction_Set_Architecture.pdf<]assembler reference for Southern Islands is publicly available (PDF).[/url<] Any sort of ISA-level comparison should cite such low-level documentation, as it describes exactly how the instructions work in the architecture.

            • encia
            • 7 years ago

            An example, from [url<]http://timothylottes.blogspot.com.au/2013/08/notes-on-amd-gcn-isa.html[/url<]

            "DX and GL are years behind in API design compared to what is possible on GCN. For instance there is no need for the CPU to do any binding for a traditional material system with unique shaders/textures/samplers/buffers associated with geometry. Going to the metal on GCN, it would be trivial to pass a 32-bit index from the vertex shader to the pixel shader, then use the 32-bit index and S_BUFFER_LOAD_DWORDX16 to get constants, samplers, textures, buffers, and shaders associated with the material. Do a S_SETPC to branch to the proper shader"

            • the
            • 7 years ago

            You do realize that DX and OpenGL are software abstraction layers, and that assembly will always be more efficient at the extreme cost of supporting only a single platform, right? That says very little about the architecture itself and how it actually compares to others.

            If nVidia released similar low level documentation on how their shader architectures work, it’d be more efficient than using either DX or OpenGL as well.

            • Klimax
            • 7 years ago

            GCN didn’t exist at all in 2009 or before, when DX was developed.

            The comparison is totally wrong; the assertion compares apples and oranges and pretends to show something that doesn’t exist.

            Weak…

            • alientorni
            • 7 years ago

            I think you people are forgetting to read the word compatibility… and it has always been like that.
            Compatible doesn’t mean it can run all of DX12’s features; that word dates back to the stone age. You’ve been caught out just because Microsoft wanted to make some noise by saying DX12 will be compatible with this gen’s GPUs
            -hey look LOOK, we can make all these improvements to the DirectX libraries on existing hardware across all platforms, (not like Mantle)-
            
            silly boys..
            
            (edit: maybe I’m wrong, but I’m sure this will become pretty clear in the months to come; it’s either this or the other post I’ve written)

            • encia
            • 7 years ago

            No, it doesn’t work like that for CPUs, e.g. AMD64/x86-64.
            
            The main difference is that AMD used the word “FULL” for their compatibility with “DirectX 12”.
            
            NVIDIA hasn’t claimed “FULL DirectX 12 compatibility” for their GPUs.
            
            ----------
            From http://www.eurogamer.net/articles/digitalfoundry-2014-directx-12-revealed-coming-to-xbox-one
            
            "When asked about specific enhancements for Xbox One, Microsoft confirmed that DX12 was on the roadmap for the console, but "beyond that, we have nothing more to share.""
            
            AMD has the Xbox One factor with Microsoft.

            • HisDivineOrder
            • 7 years ago

            I don’t think DirectX 12 is finalized yet, is it? In which case, it wouldn’t be wise to claim FULL compatibility… yet.

            • encia
            • 7 years ago

            Yet, AMD has used “FULL” for their DirectX 11.1 support i.e. DirectX 11.1 Feature Level 11_1.

            [i<]"Yes, AMD has FULL DirectX® 11.1 support"[/i<] [url<]http://community.amd.com/community/amd-blogs/amd-gaming/blog/2012/12/14/yes-amd-has-full-directx-111-support[/url<] For DirectX 11.2 and AMD GCN. “The AMD Radeon™ HD 7000 series hardware architecture is [b<]fully DirectX® 11.2-capable [/b<]when used with a driver that enables this feature. AMD is planning to enable DirectX 11.2 with a driver update in the Windows® 8.1 launch timeframe in October, when DirectX® 11.2 ships. Today, [i<]AMD is the only GPU manufacturer to offer [b<]fully-compatible DirectX 11.1 support[/b<][/i<], and the only manufacturer to support Tiled Resources Tier-2 within a shipping product stack.” [url<]http://wccftech.com/amd-catalyst-14-1-beta-launching-q1-2014-supports-directx-112-mantle-trueaudio-frame-pacing-fix-gpus-apus/[/url<] [url<]http://cdn4.wccftech.com/wp-content/uploads/2014/01/AMD-14.1-Windows-8.1-Support-635x422.jpg[/url<] Again, AMD claims 1. "fully-compatible DirectX 11.1". 2. "fully DirectX® 11.2-capable". 3. "full DirectX 11.2 support". The keyword is FULL and AMD is consistent in that regard.

            • Klimax
            • 7 years ago

            First, AMD’s PR is on par with politicians’, so I strongly suggest you stop repeating their assertions as facts. The word “full” describes more than that.
            
            Second, one can be fully API-compatible yet not support all features.
            
            Or had you conveniently forgotten the whole CAPS-bits idiocy from DX9 and below?
            
            ==
            
            Are you paid by AMD? Anyway, stop spamming!

            • encia
            • 7 years ago

            First, your post is on par with the partisans’, so I strongly suggest you stop manufacturing unproven assertions.

            Second, have you conveniently forgotten the whole Feature Level idiocy from DX11.1 and higher?

            Are you paid by NVIDIA? Anyway stop spamming!

            • Klimax
            • 7 years ago

            You have no standing for your baseless ironic accusation. So far there is only one person manufacturing assertions and misdirection, and that is you.
            
            Attempting to compare feature levels with CAPS bits is the height of ignorance and stupidity, because they are as different as one can get. False comparisons are not a good idea.
            
            As for your last idiotic statement: ad hominem, false equivalency, and without any shred of evidence. I’d have far more evidence that it is you who is paid by AMD to promote and spam forums, and by not disclosing that affiliation you would be in direct breach of the site’s rules, which is a bannable offense. (There have already been such bans.)

            • encia
            • 7 years ago

            You have no standing for your baseless ironic accusation, since you don’t have proof that AMD’s claim of “FULL DirectX 12 compatibility” is false.

            • Klimax
            • 7 years ago

            Side note: in the same way, NVidia can say their DX11 cards will FULLY support DX12. AMD has no advantage and is no better a choice for GPUs.

            • encia
            • 7 years ago

            You’re just making your $hit up. Post NVIDIA’s official PR source.

            • Klimax
            • 7 years ago

            http://blogs.nvidia.com/blog/2014/03/20/directx-12/
            
            BTW: so far it is you who is manufacturing stuff, not me... Your assertions about me are more than a bit full of irony...

            • encia
            • 7 years ago

            Can’t find “FULL compatibility for DirectX 12” claim in your link.

            AMD is either stupid or bold enough to claim their GCNs are “fully compatible” with DirectX 12.

            AMD is consistent with the FULL keyword.

            Again, AMD claims
            1. “fully-compatible DirectX 11.1”.
            2. “fully DirectX® 11.2-capable”.
            3. “full DirectX 11.2 support”.

            AMD’s statements officially contradict NVidia’s statements.

            • UnfriendlyFire
            • 7 years ago

            We’ll have to see how they support DX12 when MS launches it with Win8 or Win9.

            • Klimax
            • 7 years ago

            http://blogs.nvidia.com/blog/2014/03/20/directx-12/
            
            They said Fermi and up, as far as the API goes.

            • encia
            • 7 years ago

            Who cares when AMD is talking about “software rasterization” via HSA compute.

            Read http://www.slideshare.net/zlatan4177/gpgpu-algorithms-in-games
            
            AMD talks about the following points:
            1. Increasing culling detail via HSA compute instead of the CPU.
            2. "Software rasterization" via HSA compute instead of the CPU.
            
            This is similar to the PS3's CELL-style approach (a rough compute-culling sketch follows below). Remember, a CPU can run any DirectX version. Anyone for a 5.6 TFLOPS AMD Larrabee**? AMD has the basis for its "FULL DirectX 12 compatibility" claim.
            
            **Unlike Intel's Larrabee, AMD still has its heavyweight rasterization hardware.
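            
            As a rough illustration of point 1 above (culling done in GPU compute rather than on the CPU), here is a minimal CUDA sketch of bounding-sphere-vs-frustum culling with a compacted output list. The struct, kernel name, and plane layout are assumptions for illustration, not anything taken from AMD's slides.
            
            ```cuda
            // Hedged sketch: GPU-side frustum culling, one thread per object.
            // Survivors are appended to a compacted list of visible object IDs.
            #include <cuda_runtime.h>
            
            struct Sphere { float3 center; float radius; };
            
            __global__ void frustum_cull(const Sphere* bounds, int object_count,
                                         const float4* planes,   // 6 frustum planes: xyz = normal, w = d
                                         int* visible_ids, int* visible_count)
            {
                int i = blockIdx.x * blockDim.x + threadIdx.x;
                if (i >= object_count) return;
            
                Sphere s = bounds[i];
                for (int p = 0; p < 6; ++p) {
                    float4 pl = planes[p];
                    float dist = pl.x * s.center.x + pl.y * s.center.y
                               + pl.z * s.center.z + pl.w;
                    if (dist < -s.radius) return;   // fully outside this plane: culled
                }
                // Survived all six planes: append to the visible list.
                int slot = atomicAdd(visible_count, 1);
                visible_ids[slot] = i;
            }
            ```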

            • the
            • 7 years ago

            The thing is that modern GPUs are very programmable. Full DirectX support could come with select functions being emulated as shader programs instead of being executed directly by a hardware implementation.

            • mesyn191
            • 7 years ago

            That would probably be far too slow, though.
            
            Kinda like how, technically, GPUs have been able to emulate an x86 CPU for years now, but their performance when doing so would be put to shame by a 1GHz (old, in-order) Intel Atom.
            
            While far more programmable than old DX7-level hardware, today's GPUs still rely heavily on lots of specialized and fixed-function hardware for a reason.

            • encia
            • 7 years ago

            If AMD’s 8 ACE units are like CELL’s SPUs as stated by Sony, then there’s a workaround.

            • mesyn191
            • 7 years ago

            I think you’re reading too much into Sony’s statements.

            They were speaking broadly in terms of usage not exact capability.

            You have to be super careful taking anything these companies say publicly these days since everything is so marketing driven to pump up the hype.

            • encia
            • 7 years ago

            For AMD GCN, read http://www.slideshare.net/zlatan4177/gpgpu-algorithms-in-games
            
            AMD talks about the following points:
            1. Increasing culling detail via HSA compute instead of the CPU.
            2. "Software rasterization" via HSA compute instead of the CPU.
            
            This is similar to the PS3's CELL-style approach. Remember, a CPU can run any DirectX version. AMD has the basis for its "FULL DirectX 12 compatibility" claim.

            • mesyn191
            • 7 years ago

            Without explicit hardware details, all you have is marketing fluff, which for some reason you’re taking as gospel truth. And that fluff, BTW, comes from sources (i.e. Sony and AMD marketing) that are notorious for everything from shading the truth to flat-out lying.

            • encia
            • 7 years ago

            AMD is pretty consistent with the keyword FULL.

            Again…

            [i<]"Yes, AMD has FULL DirectX® 11.1 support"[/i<] From [url<]http://community.amd.com/community/amd-blogs/amd-gaming/blog/2012/12/14/yes-amd-has-full-directx-111-support[/url<] For DirectX 11.2 and AMD GCN. “The AMD Radeon™ HD 7000 series hardware architecture is [b<]fully DirectX® 11.2-capable [/b<]when used with a driver that enables this feature. AMD is planning to enable DirectX 11.2 with a driver update in the Windows® 8.1 launch timeframe in October, when DirectX® 11.2 ships. Today, [i<]AMD is the only GPU manufacturer to offer [b<]fully-compatible DirectX 11.1 support[/b<][/i<], and the only manufacturer to support Tiled Resources Tier-2 within a shipping product stack.” [url<]http://wccftech.com/amd-catalyst-14-1-beta-launching-q1-2014-supports-directx-112-mantle-trueaudio-frame-pacing-fix-gpus-apus/[/url<] [url<]http://cdn4.wccftech.com/wp-content/uploads/2014/01/AMD-14.1-Windows-8.1-Support-635x422.jpg[/url<] AMD claims 1. "fully-compatible DirectX 11.1". 2. "fully DirectX® 11.2-capable". 3. "full DirectX 11.2 support". This thread would be short if NVIDIA's statements are kept to it's own products. "Programmable blend and efficient OIT with pixel ordered UAV" API was from Intel and it proves that Nvidia has no authority over non-NVIDIA products.

            • mesyn191
            • 7 years ago

            They’ve been explicit before, doesn’t matter.

            • encia
            • 7 years ago

            “Software rasterization” and increased culling detail via AMD’s HSA compute say hi.
            
            Read http://www.slideshare.net/zlatan4177/gpgpu-algorithms-in-games

            • mesyn191
            • 7 years ago

            That presentation has nothing to do with DX12 and doesn’t go into any detail on the ACE units either. It’s just a general presentation on the capabilities of HSA.

            • encia
            • 7 years ago

            My point was AMD claimed “FULL DirectX 12 compatibility” for their current GCNs. AMD is either stupid or bold for making this claim.

            My link has information on moving the CPU’s low-detail culling to higher-detail culling via HSA compute.

            • mesyn191
            • 7 years ago

            I know that is your point, but you aren’t proving it.
            
            That AMD would be either stupid or bold enough to claim something they can’t deliver is nothing new at this point. They have failed to deliver many times over the last few years, so cynicism is perfectly reasonable.
            
            Culling methods also still don’t address what I was talking about (parallel vs. serial and branchy workloads).

            • encia
            • 7 years ago
            • the
            • 7 years ago

            Honestly, I don’t think GPUs could reliably emulate a full x86 CPU. Mechanisms to handle interrupts like an x86 CPU simply don’t exist on GPUs. Outside of that, though, most user-space instructions could likely be emulated, albeit slowly, as you point out.
            
            The wonderful thing about GPUs is that they’re massively parallel at the high end. So even if there is a big performance hit (say 10 times slower, all other things being equal) for emulating new DX12 functions, there is a chance it’d still be useful. For example, a Radeon R9 290X emulating new DX12 features would likely be able to outrun the low-end R7 class of DX12-capable GPUs in those select functions. Outside of the new DX12 functions, the R9 290X would be way faster.
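            
            For a sense of what "emulating a new DX12 feature in compute" could look like, here is a minimal, hedged CUDA sketch of overestimated conservative rasterization of a single triangle, one thread per pixel. It is purely illustrative; no driver or DX12 runtime is claimed to work this way, and all names are made up.
            
            ```cuda
            // Hedged sketch: conservative rasterization done in software on the GPU.
            // A pixel is marked covered if its 1x1 square touches the triangle at all.
            #include <cuda_runtime.h>
            #include <math.h>
            
            __device__ float edge_max_over_pixel(float2 v0, float2 v1, float cx, float cy)
            {
                // Edge function E(x,y) = a*x + b*y + c, positive on the triangle's inside.
                float a = v1.y - v0.y;
                float b = -(v1.x - v0.x);
                float c = -(a * v0.x + b * v0.y);
                // Max of E over a 1x1 pixel centered at (cx,cy) is E(center) + 0.5*(|a|+|b|).
                return a * cx + b * cy + c + 0.5f * (fabsf(a) + fabsf(b));
            }
            
            __global__ void conservative_raster(float2 v0, float2 v1, float2 v2,
                                                unsigned char* coverage, int width, int height)
            {
                int x = blockIdx.x * blockDim.x + threadIdx.x;
                int y = blockIdx.y * blockDim.y + threadIdx.y;
                if (x >= width || y >= height) return;
            
                float cx = x + 0.5f, cy = y + 0.5f;   // pixel center
            
                // Expanded edge functions must all be non-negative (CCW winding assumed).
                bool covered = edge_max_over_pixel(v0, v1, cx, cy) >= 0.0f &&
                               edge_max_over_pixel(v1, v2, cx, cy) >= 0.0f &&
                               edge_max_over_pixel(v2, v0, cx, cy) >= 0.0f;
            
                coverage[y * width + x] = covered ? 1 : 0;
            }
            ```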

            • mesyn191
            • 7 years ago

            Nvidia said years ago, back in the DX9 days, that their GPUs could do it, but performance was horrid. Something like 486DX 66-100MHz levels of performance back then, IIRC.

            Can’t find the link now so feel free to call BS.

            • encia
            • 7 years ago

            AMD GCN has interrupt features, e.g. “GPU sends interrupt requests to CPU on various events (such as page faults)”.
            
            GCN’s memory pages (4KB) and cache lines are aligned to x86-64’s sizes.
            
            ————
            
            NVIDIA Kepler’s memory page size, from http://docs.nvidia.com/cuda/gpudirect-rdma/index.html:
            "The CPU pages are usually 4KB and the GPU pages on Kepler-class GPUs are 64KB."

            • encia
            • 7 years ago

            Traditional x86 CPUs weren’t efficient at FMA math, lacked the GPU’s very large register files, and lacked scatter/gather instructions. GPUs have very capable load/store units (aka TMUs), and they are greater in number.
            
            AMD GCN’s CU has 64KB x 4 of register storage, which is a massive difference against an Intel Haswell-type CPU core.
            
            Intel Haswell’s CPU core gains gather instructions and FMA, but its core count is nowhere near a fat GPGPU’s.
            
            Intel Atom has better integer/branch performance than in-order-processing GPUs, but AMD GCN has out-of-order wavefront processing. Intel Atom would get murdered on vector math. GPGPUs are not designed to handle CISC-type instructions.
            
            On integer-heavy Litecoin, Intel Atom gets murdered by AMD GCN.
            
            GCN’s wavefront is basically a MIMD instruction-issue bundle, while AVX is a SIMD instruction bundle.
            
            You only do “software rasterization” if the GPU’s rasterization hardware is not flexible enough for a particular type of work.
            
            Due to DirectX 11.2’s latency issue in returning real-time GPGPU results to the CPU, patching an API function via GPGPU is a non-starter. With HSA compute, software rasterization “patches” are feasible.

            • mesyn191
            • 7 years ago

            GPUs are good at embarrassingly parallel workloads and suck horribly at branch-intensive serial workloads, whether they be integer or FP in nature, which is exactly the sort of thing x86 is. That goes double for emulation workloads, BTW.
            
            This is also why pretty much all emulators run on the CPU, why the performance requirements to emulate even old systems like the PS2 are still fairly high, and why some things still won’t run even today after years of development effort. Emulation is hard; there is no easy or cheap way to do it. Certainly not on a GPU, which is far less flexible than almost any CPU.

            • encia
            • 7 years ago

            We are talking about graphics workloads which are “embarrassingly parallel workloads”.

            It’s silly to emulate X86 CPU workloads on GPUs.

            • mesyn191
            • 7 years ago

            Emulating embarrassingly parallel workloads is a highly serial and branch-intensive operation. Again, it’s generally done on the CPU for a reason.

            • encia
            • 7 years ago

            Again, read http://www.slideshare.net/zlatan4177/gpgpu-algorithms-in-games
            
            AMD talks about software rasterization and increasing culling detail via HSA compute, and these are highly vectorised workloads. You're not factoring in the PS3's experience with vectorised graphics workloads on SPUs. "Software rasterization" can be broken down into tiles, which are embarrassingly parallel. The tiled deferred lighting pass for BF3 on PS3 was done on SPUs; on PC, BF3's tiled deferred lighting pass was done in compute shaders (a rough sketch follows below).
            
            Are you claiming modern GPU drivers don't have LLVM? AMD also has LLVM methods to JIT-recompile DX ASM into VLIW5/VLIW4 or GCN ISA. The problem is the GPU's flexibility. AMD GCN has the hardware capability to support C++ constructs, i.e. more flexibility than the VLIW5/VLIW4 Radeon HDs. Note that SwiftShader also uses LLVM methods to JIT-recompile DX ASM into x86 ISA (including SSEx support).
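            
            As a sketch of the tiled deferred idea mentioned above, here is a minimal CUDA kernel in the same spirit: one thread block per screen tile, cooperatively building a per-tile light list. The screen-space circle test, tile size, and names are simplifications for illustration; BF3's actual implementation is not claimed to look like this.
            
            ```cuda
            // Hedged sketch: tile-based light culling in compute.
            // Each block owns a TILE x TILE pixel region and collects the lights
            // whose screen-space circles touch it.
            #include <cuda_runtime.h>
            
            #define TILE 16
            #define MAX_LIGHTS_PER_TILE 64
            
            struct Light { float2 screen_pos; float screen_radius; };
            
            __global__ void cull_lights(const Light* lights, int light_count,
                                        int* tile_counts, int* tile_lists, int tiles_x)
            {
                __shared__ int count;
                if (threadIdx.x == 0 && threadIdx.y == 0) count = 0;
                __syncthreads();
            
                // Screen-space bounds of this block's tile.
                float min_x = blockIdx.x * TILE, max_x = min_x + TILE;
                float min_y = blockIdx.y * TILE, max_y = min_y + TILE;
            
                // Threads cooperatively test a strided slice of the light list.
                int tid = threadIdx.y * blockDim.x + threadIdx.x;
                int stride = blockDim.x * blockDim.y;
                int tile = blockIdx.y * tiles_x + blockIdx.x;
            
                for (int i = tid; i < light_count; i += stride) {
                    Light l = lights[i];
                    // Closest point on the tile to the light's center.
                    float cx = fminf(fmaxf(l.screen_pos.x, min_x), max_x);
                    float cy = fminf(fmaxf(l.screen_pos.y, min_y), max_y);
                    float dx = cx - l.screen_pos.x, dy = cy - l.screen_pos.y;
                    if (dx * dx + dy * dy <= l.screen_radius * l.screen_radius) {
                        int slot = atomicAdd(&count, 1);
                        if (slot < MAX_LIGHTS_PER_TILE)
                            tile_lists[tile * MAX_LIGHTS_PER_TILE + slot] = i;
                    }
                }
                __syncthreads();
                if (tid == 0) tile_counts[tile] = min(count, MAX_LIGHTS_PER_TILE);
            }
            ```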

            • mesyn191
            • 7 years ago

            Your link doesn’t give the information you think it does, and all the experience in the world on vectorizing problems will not solve fundamentally unsolvable problems.
            
            Everyone has all sorts of means of JIT’ing whatever into this or that, and has had them for years*, but what is the performance? If you’re so sure of the performance then you must have real-world benchmarks… if you don’t have them, then your claims are purely marketing-based and useless.
            
            *They’re used almost exclusively for debugging purposes, BTW; JIT is notoriously slow compared to a ‘native’ program.

            • encia
            • 7 years ago

            JIT LLVM is built into the current GPU drivers.

            For example
            http://i328.photobucket.com/albums/l327/encia/PC_hardware/AMD_IL_APP.jpg

            • mesyn191
            • 7 years ago

            Which doesn’t matter, and certainly isn’t a benchmark either. It’s also mostly there for debugging purposes.

            Stop accepting PR as gospel, you and everyone else will be a lot happier.

            • encia
            • 7 years ago

            LOL, without JIT LLVM your benchmark wouldn’t run on AMD GPUs, i.e. it’s part of the driver.
            
            Your CPU has page-fault features, and GCN has its own page-fault features.

            • mesyn191
            • 7 years ago

            Lots of things are part of the driver; that doesn’t mean they play an active role in game or benchmark performance.

            Post some performance numbers, otherwise you’ve got nothing to add.

    • HisDivineOrder
    • 7 years ago

    If nVidia and AMD don’t have new hardware WORTH buying by Holiday 2015, they’re doing something very, very wrong.

    I like it, really. It means I’m going to get a great lifespan out of my existing system and know almost to the month when I need to have money for a new upgrade.

      • cynan
      • 7 years ago

      I think the software generally dictates when new hardware is required. And how demanding “next-gen” PC games will be at the end of 2015 is largely still up in the air. (And if current consoles have anything to do with it…)

      If your current hardware is good enough for everything you’re interested in doing with it, why risk paying early-adopter premiums for improvements in efficiency you don’t really need, or for new features that don’t yet exist in software (i.e., games), or that do exist, but only in titles you have no interest in?

    • Helmore
    • 7 years ago

    I was already a little confused about how a lot of NVIDIA GPUs would be compatible with DX12, but weren’t compatible with DX11.1 and DX11.2. Well, maybe confused isn’t the right word, but it would make the numbering convention a little confusing if DX11.1 and DX11.2 features aren’t going to be part of DX12.

      • nanoflower
      • 7 years ago

      It’s because DX12 is two things. One is new features that will require new hardware; that’s the part Microsoft hasn’t really talked about in much detail yet. The second is a rewrite of, or addition to, DirectX that lets developers get more control over existing hardware, so they can use more cores easily and lower the CPU overhead. That’s the part that doesn’t require an upgrade. It’s something Microsoft could have done in the past, but for some reason they didn’t bother until now.

        • yammerpickle2
        • 7 years ago

        M$ could have done it in the past, but didn’t bother until Mantle showed up.

          • Klimax
          • 7 years ago

          Wrong on all counts. DirectX 10 was a full rewrite and changed the structure of DirectX and its associated libraries to the core.
          DirectX 11 contained a number of upgrades, including performance improvements. Too bad you can’t see quite a few of them on AMD’s cards, because AMD’s drivers suck. Including multithreading support. Funnily enough, they did implement it. For their proprietary Mantle.
          
          So you are wrong about the absence of optimizations, you are wrong about the timeline, and furthermore you are wrong about DX12, as it was in development long before Mantle was even hinted at.
          
          And I suspect that AMD either lied/misdirected or didn’t participate, because they claimed DX12 didn’t exist at a time when DX12 was already under way, most likely to prepare the world for Mantle. If they had told us DX12 was already being worked on, they would have had a hard time convincing us to use Mantle.
          
          So, no, Mantle was never an influence affecting DirectX development. Frankly, it seems Intel had more to do with it. (Hints came during the Iris Pro unveiling and from some technology Intel developed, including PixelSync.)

            • encia
            • 7 years ago

            Read http://www.eurogamer.net/articles/digitalfoundry-2014-directx-12-revealed-coming-to-xbox-one
            
            "Microsoft confirmed that DX12 was on the roadmap for the console, but "beyond that, we have nothing more to share.""
            
            You are forgetting Xbox One's DX12 roadmap, and this is why AMD can claim "FULL DirectX 12 compatibility" for their GCNs. The number 12 may not have been attached to it yet, e.g. DirectX Next.

            • HisDivineOrder
            • 7 years ago

            You’re forgetting that you’re making a mountain out of a molehill from the use of a single word, “FULL”, and not applying the same standard to what Microsoft’s saying, right?

            Because do you have a quote where Microsoft claims Xbox One will get FULL DirectX 12 compatibility? Otherwise, the rest of your argument seems irrelevant. If even Microsoft isn’t claiming “FULL DirectX 12 compatibility” for Xbox One, then your arguments that 1) not saying you have FULL DirectX 12 compatibility means you don’t have FULL compatibility seems to imply the Xbox One also doesn’t, which then guts your other argument that 2) AMD has to have full compatibility because Xbox One has full compatibility.

            If on the other hand, 1) Xbox One is going to get full compatibility without stating they have full compatibility for DirectX 12, then that means 2) nVidia’s not saying they have “full compatibility” means nothing because even Microsoft doesn’t hold themselves up to having to state that. That means you’re just jibber-jabberin’ out yer butt.

            In other words, you’re wrong either way. This is one of those rare times when someone backs themselves into the perfect loop of wrong logic that makes them wrong no matter what. Congrats. I’m impressed.

            • encia
            • 7 years ago

            Microsoft wouldn’t comment on PC GCNs, which is AMD’s domain.
            
            NVIDIA’s (Tamasi’s) comments should NOT be applied to non-NVIDIA hardware, i.e. NVIDIA doesn’t have that authority.
            
            You’re wrong in that regard.
            
            I have used Eurogamer’s link as supporting information, NOT as the primary source.
            
            AMD has used “FULL” for their DirectX 11.1 support, i.e. DirectX 11.1 Feature Level 11_1.
            
            "Yes, AMD has FULL DirectX® 11.1 support"
            http://community.amd.com/community/amd-blogs/amd-gaming/blog/2012/12/14/yes-amd-has-full-directx-111-support
            
            For DirectX 11.2 and AMD GCN:
            
            “The AMD Radeon™ HD 7000 series hardware architecture is fully DirectX® 11.2-capable when used with a driver that enables this feature. AMD is planning to enable DirectX 11.2 with a driver update in the Windows® 8.1 launch timeframe in October, when DirectX® 11.2 ships. Today, AMD is the only GPU manufacturer to offer fully-compatible DirectX 11.1 support, and the only manufacturer to support Tiled Resources Tier-2 within a shipping product stack.”
            http://wccftech.com/amd-catalyst-14-1-beta-launching-q1-2014-supports-directx-112-mantle-trueaudio-frame-pacing-fix-gpus-apus/
            http://cdn4.wccftech.com/wp-content/uploads/2014/01/AMD-14.1-Windows-8.1-Support-635x422.jpg
            
            Again, AMD claims:
            1. "fully-compatible DirectX 11.1"
            2. "fully DirectX® 11.2-capable"
            3. "full DirectX 11.2 support"
            
            The keyword is FULL, and AMD is consistent in that regard.

            • Klimax
            • 7 years ago

            Spam. And the same support as NVidia’s. Well, NVidia has it better, as all Fermi cards will support DX12. FULLY…

            • encia
            • 7 years ago

            The full names of DirectX 12’s new rendering features are:

            1. Programmable blend and efficient OIT with pixel ordered UAV.

            2. Better collision and culling with Conservative Rasterization

            Point 1 seems to be similar to
            [url<]http://software.intel.com/en-us/blogs/2013/07/18/order-independent-transparency-approximation-with-pixel-synchronization[/url<] "Programmable blend and efficient OIT with pixel ordered UAV" diagram [url<]http://i.imgur.com/1UMiWno.jpg[/url<] It has "Well defined shade fragment from triangle X" with "wait" features. I wonder if GCN's out-of-order wavefront processing can duplicate Intel's pixel shader ordering function e.g. delay a wavefront via out-of-order wavefront processing.

            • Klimax
            • 7 years ago

            Most likely it will be done in drivers for current HW, be it GCN or Kepler/Maxwell. (GCN was already far into development when Intel published info on it for inclusion in DX12.)

            • encia
            • 7 years ago

            According to the “Programmable blend and efficient OIT with pixel ordered UAV” diagram, it needs a stall/wait feature for the pixel shader program.
            
            I wonder if AMD GCN’s scheduler can stall a pixel shader program, i.e. AMD GCN has out-of-order wavefront processing.

            • sweatshopking
            • 7 years ago

            Those aren’t all the new rendering features.

            • Klimax
            • 7 years ago

            Assertions by AMD PR department are not facts. We know how they operate.

            And considering AMD misled about DX12, their “full” support is in serious question. Frankly, I’ll get more out of Fermi cards than you will out of AMD’s cards of the same age.

            • encia
            • 7 years ago

            The number 12 may not have been attached to DirectX Next, hence “DirectX 12” didn’t exist at that point in time.
            
            From http://www.eurogamer.net/articles/digitalfoundry-2014-directx-12-revealed-coming-to-xbox-one
            
            "When asked about specific enhancements for Xbox One, Microsoft confirmed that DX12 was on the roadmap for the console, but 'beyond that, we have nothing more to share.'"
            
            You're forgetting DX12 was on the roadmap for Xbox One.

            • Klimax
            • 7 years ago

            Yet AMD claimed it didn’t exist. So either AMD lied, or was mostly left out, or they don’t yet support the new HW features anyway.
            
            It still doesn’t mean the new HW features are in GCN… (And more than likely NVidia would have them too.)

            • encia
            • 7 years ago

            The number 12 may not have been attached to the “DirectX Next” project during the AMD/MS Xbox One project.
            
            Only GK110 and GM107 have HyperQ, while AMD has ACE units across the entire GCN product line.
            
            Asynchronous compute hardware would be important when patching a GPU feature during synchronous compute.
            
            How can you be certain that NVIDIA Kepler/Maxwell has the same features as AMD GCN?
            
            What’s NVIDIA Kepler/Maxwell’s memory page size? Does it match x86-64’s memory page size?
            What’s NVIDIA Kepler/Maxwell’s cache line size? Does it match x86-64’s cache line size?
            
            Can NVIDIA match the following GCN ISA example?
            http://timothylottes.blogspot.com.au/2013/08/notes-on-amd-gcn-isa.html

    • JustAnEngineer
    • 7 years ago

    20 nm GPUs are coming at the end of THIS year. Direct X 12 is coming at the end of NEXT year.
    It’s almost as if there is a conspiracy to encourage gaming enthusiasts to buy a new graphics card every year.

      • the
      • 7 years ago

      Tick tock?

        • Voldenuit
        • 7 years ago

        More like ‘cha-ching!’.

      • nanoflower
      • 7 years ago

        But games that NEED DirectX 12 won’t be coming until the end of 2016 at the earliest.

        • yogibbear
        • 7 years ago

        When they’ll arbitrarily need more VRAM so we can buy another GPU?

      • Ninjitsu
      • 7 years ago

      20nm GPUs will be DX12 capable, man. Chill.

        • HisDivineOrder
        • 7 years ago

        …but not fully DX12 featured.

        /whistles

        The truth is out there.

        Trust no one.

        She never sleeps.

        Who watches the watchers?

        If you build it, they will come.

        We’d better get back. It’ll be dark soon and they mostly come at night. …Mostly.

    • nanoflower
    • 7 years ago

    I’m not surprised they are adding some new features in DX12. I don’t see this as being a real issue since we know it will take quite a while before game developers take up those new features (that require new GPUs) so it’s likely you won’t need a new GPU to play DX12 games for at least 2-3 years.
