Nvidia finally lets Fermi GPU owners enjoy DirectX 12

Way back in 2014 when Microsoft was still finalizing DirectX 12, Nvidia pledged that all of its then-current hardware would support the new API. At the time, that meant GPUs from the Maxwell, Kepler, and Fermi families. Kepler and Maxwell DX12 support came around on day one of Windows 10's release, but Fermi languished on DirectX 11, meaning it never completely supported the OS's WDDM 2.2 standard. That is, until now: the latest 384.76 drivers quietly added DirectX 12 support for Fermi GPUs.

The new feature wasn't listed in the patch notes, possibly because users have been complaining for a while about Nvidia's apparent failure to live up to its promise. The change was first spotted by a couple of sharp-eyed folks on the Guru3D forums. User "maur0" noted that the DirectX diagnostic tool was now reporting Direct3D version 12 on his GeForce GTX 570. Then "user1" promptly ran the extremely intensive 3DMark Time Spy DX12 test on his GTX 560M and achieved a staggering 373 points.

For comparison's sake, a GeForce GTX 660 or GTX 750 Ti can put up around 1,200 points in the same test. Overclocked Maxwell parts and high-end Pascal cards like the GeForce GTX 1080 can breach 6,000. The GTX 560M is a mobile part based on GF116—a six-year-old GPU at this point—and frankly, the fact that it can complete the benchmark at all is impressive. Hats off to Nvidia for continuing to support truly ancient hardware.

The poor performance in Time Spy is likely attributable (at least to some degree) to the fact that almost none of the DirectX 12 API's features are supported by the old Fermi hardware. Some folks coaxed user1 into running the DX12 Feature Checker on his laptop's GPU, and the resulting list is an excellent exercise in the many ways of reading the word "no." Still, even if the performance isn't great, DX12 support for these old cards means they can stay in service for years to come. It also means that simpler DX12 titles that aren't as demanding as benchmarks won't simply bomb out on otherwise-capable cards, the way Vulkan apps forever will, since Fermi never got Vulkan support.
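
For the curious, here's roughly what a checker tool like that does under the hood: create a D3D12 device and then interrogate it with ID3D12Device::CheckFeatureSupport. The snippet below is our own minimal sketch rather than the actual tool's code; it skips adapter enumeration and most error handling, and the handful of fields it prints is just a sample of the D3D12_FEATURE_DATA_D3D12_OPTIONS structure.

    // Minimal sketch of a DX12 feature check (illustrative, not the actual tool).
    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <cstdio>
    #pragma comment(lib, "d3d12.lib")

    using Microsoft::WRL::ComPtr;

    int main()
    {
        ComPtr<ID3D12Device> device;
        // On a Fermi card, this call presumably only succeeds with the new
        // 384.76 driver or later; older drivers expose no D3D12 runtime at all.
        if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device)))) {
            std::puts("No DirectX 12 device available.");
            return 1;
        }

        D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
        device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                    &opts, sizeof(opts));

        // These are the sorts of fields that read "no" (or tier 1) on old hardware.
        std::printf("Resource binding tier: %d\n", opts.ResourceBindingTier);
        std::printf("Tiled resources tier:  %d\n", opts.TiledResourcesTier);
        std::printf("Conservative raster:   %d\n", opts.ConservativeRasterizationTier);
        std::printf("Standard swizzle:      %s\n",
                    opts.StandardSwizzle64KBSupported ? "yes" : "no");
        std::printf("ROVs supported:        %s\n",
                    opts.ROVsSupported ? "yes" : "no");
        return 0;
    }

The printout from a program like this is essentially where all those "no" answers in user1's list come from.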

If you've still got a space-heater Fermi GPU pressed into service and your favorite DX12 game is crashing out to the Windows desktop, head on over to Nvidia's website to grab the latest driver.

Comments closed
    • Drachasor
    • 2 years ago

    I’d be more excited if they were adding 10-bit OpenGL support to all their cards. I think this is just an artificial driver limitation to push the pro cards, but I admit I’m not 100% sure. It’s annoying though.

    • jackbomb
    • 2 years ago

    I’d imagine that the desktop GTX 580 is around 4 times more powerful than the GTX 560M.

    Assuming that’s kinda-sorta true, a GTX 580 would score ~1,492 points, faster than the Maxwell-based GTX 750 Ti.
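
    (Back-of-the-envelope, under that assumed 4x factor: 373 points × 4 = 1,492, which would indeed edge past the ~1,200 points quoted above for the GTX 750 Ti.)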

    • tipoo
    • 2 years ago

    So they nerfed parts of DX12’s spec so it would work well on Fermi back in 2014, and then deferred supporting it for three years, after most Fermi users had probably moved on, partly for lack of support.

    That ain’t cool, Jen.

    • ImSpartacus
    • 2 years ago

    Just wanted to say I appreciate the extra level of journalism that went into this article.

    I had heard of this release, but not the back story about Nvidia promising DX12 support on Fermi.

      • RAGEPRO
      • 2 years ago

      Thanks man. I appreciate your appreciation. 🙂

    • ET3D
    • 2 years ago

    Fine Wine.

    • NoOne ButMe
    • 2 years ago

    Worst bit: some DX12 features were “gimped” to make sure Fermi could fit within the spec.

    Why did Nvidia insist DX12 accommodate Fermi, only to lie about the driver ETA and then deliver super-late?

    Sigh.

    MS never should have gone along with Fermi.

    • slaimus
    • 2 years ago

    Just in time for Dolphin to remove DX12 support in the latest build and switch to Vulkan instead. Nvidia definitely cancelled Vulkan support for Fermi, last I heard. My GTX 460 768MB ran Dolphin's DX12 backend okay with the one beta driver that had DX12.

    • xeridea
    • 2 years ago

    [quote<]Hats off to Nvidia for continuing to support truly ancient hardware.[/quote<]

    More like they just slapped together laughable support years after they said they would, and it arrived so late that it isn't even worth having, because anything that could benefit won't run above 5 FPS on the ancient and horrid Fermi architecture, which has zero hardware support for it. At this point it doesn't matter if the performance is horrid; people figure their six-year-old, worthless GPUs are just too slow.

    Performance is so bad because it isn't really supported. Nvidia wanted to stick with DX11 for another 10 years, so its uarchs are hyper-optimized for the ancient code. AFAIK even Pascal still doesn't have stellar support for important DX12 features. I am pretty sure GCN 1.0 cards have better support.

      • chuckula
      • 2 years ago

      So there’s DX12 support for a card that TR reviewed in March of 2010:
      [url<]https://techreport.com/review_full/18682/nvidia-geforce-gtx-480-and-470-graphics-processors[/url<]

      Maybe it's not perfect uber support, but can you name a single AMD product that was widely available in 2010... or 2011 that has even marginal DX12 support? Because TR didn't even get its hands on the early review sample of the HD 7970 until 2012:
      [url<]https://techreport.com/review/22192/amd-radeon-hd-7970-graphics-processor[/url<]

      [quote<]Performance is so bad because it isn't really supported. Nvidia wanted to stick with DX11 for another 10 years, so its uarchs are hyper-optimized for the ancient code. AFAIK even Pascal still doesn't have stellar support for important DX12 features. I am pretty sure GCN 1.0 cards have better support.[/quote<]

      Good, then Vega, which is unquestionably AMD's fastest architecture on a chip that's noticeably larger than the GTX-1080Ti (much less the GTX-1080), will blow the doors off the 1080Ti in literally every DX12 benchmark, no excuses... right?

      Tell ya what: at the end of the month, if Vega isn't annihilating the GTX-1080Ti in every DX12 benchmark you can find, then I'll let you take back your ill-advised statement.

        • homerdog
        • 2 years ago

        PCPer has already put out their Vega Frontier Edition review. It is an utter failure, sitting between the 1070 and the 1080, more often closer to the former than the latter.

          • USAFTW
          • 2 years ago

          What’s interesting is that the AMD loyalists always seem to move the goalposts. Every benchmark leak to come out this year was met with doubt. Recent leaks were also discredited. Now, a review of a launched product is not sufficient evidence because it’s a professional card (even though it’s in the same class, price-wise, as a Titan Xp, which has no problem at all with gaming drivers), and AMD themselves (according to Ryan Shrout) say there is no difference between the gaming and Pro modes in the driver. Also, apparently seven months of an unending parade of dog-and-pony shows is still not enough time to get the drivers for a new architecture to work as intended, and the fine wine still needs to mature.
          I wanted Vega to be solid and be a counter to Nvidia’s GP102, but I was bitterly disappointed. A 300-watt, expensively designed and sold, cut-down GP104 competitor is simply not enough, especially with Volta in sight.

          • southrncomfortjm
          • 2 years ago

          I think that’s still a bit premature. AMD has repeatedly said it is not “game” ready, so PCPer’s race to put out gaming benchmarks is dubious and not that helpful. I’ll wait for a more extensive review using “game ready” drivers from a site that doesn’t seem just hell-bent on being first with the review. They joke about being FIRST in the early parts of the review.

          In the few productivity-based tests that they ran, the FE Vega was doing as well as or better than the Titan Xp and many Quadro cards. I think Vega will do a lot better in games once it has proper drivers and even better once some non-reference card designs come out with better coolers. AMD’s reference designs always seem terrible.

            • K-L-Waster
            • 2 years ago

            [quote<]I think Vega will do a lot better in games once it has proper drivers and even better once some non-reference card designs come out with better coolers.[/quote<]

            But how much better? If the current FE card is splitting the difference between the 1070 and 1080, where does a driver-optimized card end up? Ahead of the 1080 but behind the 1080 Ti? If so, it's going to get creamed when Volta launches.

            • DoomGuy64
            • 2 years ago

            Vega reminds me of the 4870’s DX10.1. Fully supported games will run well; otherwise, it will perform between the 1070 and 1080. I don’t see a problem with that, since they are currently offering nothing in that range, and you absolutely need a card of that level to run newer games on 1440p freesync monitors. It gives freesync owners [i<]something[/i<] to upgrade to, instead of throwing out their monitor for gsync/Nvidia.

            Complaining that it doesn't hit Ti levels is nonsense, since it still has 64 ROPs, and AMD has been selling their cards on general-purpose perf/$ instead of maximum 3DMark scores. Complaining only makes sense if they had re-engineered their cards with Ti-level specs, traded off compute for graphics, and still couldn't cut it, but currently that's not the case. 1080-level performance is acceptable for what it is, as long as prices properly reflect that performance.

            • Voldenuit
            • 2 years ago

            [quote<]1080-level performance is acceptable for what it is, as long as prices properly reflect that performance.[/quote<]

            It's a huge die with expensive RAM, an interposer, and high power draw (at least in FE form). If it has to be priced competitively against nvidia parts that are cheaper to manufacture, that's not great news for AMD.

            I still think it's going to be a killer compute card, and I think RX Vega will probably be about 10% faster than the FE at gaming, but I don't think it will be a Pascal-killer for gamers.

            • DoomGuy64
            • 2 years ago

            The only people who think this is bad are chuckula and like-minded individuals. There is nothing wrong with Vega performing at 1080 levels and being priced accordingly. It has the same ROP count as the 1080, so it is ridiculous to expect Ti levels of performance. AMD is not going to compete with Nvidia’s Ti line as long as they make general-purpose chips and ignore the uber-high-end gaming market, which is pretty niche in and of itself. No console port needs Ti-level hardware anyway, as that hardware is exclusively catering to audiences who use 4K and VR. Vega is fine for everyone else.

            The FE is already priced for a decent profit, and HBM2 should be cheaper than HBM1. Vega being in line with the 1080 is a perfectly acceptable target, both for consumers and AMD. Complaining about Vega not being a Ti killer is both unreasonable and nonsensical. Save the drama for a scenario that would actually make sense to go full retard over, because this ain’t it, and you all look like chuckula fanboys for doing so.

            Vega is fine. Complain about not having a Vega Ti if you want, but there is nothing wrong with Vega as-is. It’s a good enough upgrade for 1440p freesync users, and it shouldn’t be unreasonably expensive. I’m fine with that. If you still want a Ti, then go buy a Ti, because you’re obviously not the target audience, nor do you understand the target audience.

            AMD thinks it’s a worthwhile product or they wouldn’t be making it. Their R&D budget doesn’t allow them to diversify product lines like Nvidia, so Vega is what it is: a gaming/compute compromise with a flexible architecture. I don’t see that changing until AMD can afford to diversify compute from gaming. Long-term performance, general-purpose features, and support should be better than a 1080’s because of that. Big whoop that it doesn’t beat the Ti or isn’t as profitable as a 1080; it still accomplishes the goal AMD was trying to meet. AMD deserves some credit for pulling it off with their limited resources, because they can’t afford another failure at this point.

            • chuckula
            • 2 years ago

            Nice wall of text with no content.

            Anybody who thinks there isn’t a virulent AMD fanboy squad around here should consider the number of upvotes that xeridea got for what is nothing more than a copy-n-pasted anti-Nvidia hate-rant that couldn’t even be bothered to be factually accurate. Meanwhile, my accurate post went from a +9 and being in the top comments section back to zero when the usual koolaid drinkers logged in and turned off their brains.

            Once again: since, according to you, AMD is the holy-of-holies single source of innovation on the entire planet and the only company that has ever spent a single minute supporting DX12, Vega damn well better be leagues faster than a smaller GTX-1080Ti in literally every DX12 benchmark, no excuses.

            I’m getting a little tired of the slobbering adulation of anything AMD. The new lowered expectation, in which any benchmark where a late-2017 Vega kinda-sorta beats a mid-2016 GTX-1080 while consuming more power than a GTX-1080Ti counts as a “miracle” and as proof that Ngreedia has purposely held back the entire industry by a decade, was trite last year, let alone now.

            • DoomGuy64
            • 2 years ago

            Sup thar. I see you haven’t changed since last I saw you skulking around.

            [quote<]since, according to you, AMD is the holy-of-holies[/quote<]

            Nope. I just think AMD has a legitimate product, and I'm calling out people like you for trashing the comment section with toxic FUD and abusing your gold-subscriber voting privileges to troll further.

            [quote<]damn well better be leagues faster than a smaller GTX-1080Ti[/quote<]

            GTFO. Read my actual post before writing such garbage. I said Vega can't possibly be faster than the Ti with only 64 ROPs, and anyone with two brain cells to rub together should have already expected that. Second, it is PERFECTLY ACCEPTABLE for AMD to sell a product with 1080-class performance, since they currently offer [i<]nothing[/i<] in that range. Vega is by no means a Ti killer, but that doesn't make it a bad product either. Unless Nvidia starts supporting freesync, Vega is the only card available that drives those monitors at 1080 levels of performance. It is a legitimate product, whether you like it or not. You don't have an argument until Nvidia supports freesync.

            • Klimax
            • 2 years ago

            Reminder: Gaming drivers by Nvidia do not have optimizations for professional apps.

            So your conclusion is based on an incorrect comparison.

          • Krogoth
          • 2 years ago

          It is a workstation-tier card. Quadros aren’t exactly that great at gaming performance when compared to their GeForce brethren either.

          There are massive driver issues with Vega FE, and it appears that they are using the “Fiji” pathway right now to meet the launch date. It isn’t using the hardware to its full potential yet.

          The second crop of Maxwell chips had similar delays back in the day; the only difference is that Nvidia could afford the delay, since Kepler chips were still powerful and selling well.

            • MathMan
            • 2 years ago

            P6000 review with a bunch of non-professional benchmarks:
            [url<]https://www.servethehome.com/nvidia-quadro-p6000-high-end-workstation-graphics-card-review/[/url<]

            And some more here:
            [url<]https://hothardware.com/reviews/nvidia-quadro-p6000-and-p5000-workstation-gpu-reviews?page=6[/url<]

            There's no reason whatsoever why a PRO GPU should perform worse at games, and that's even more so for prosumer GPUs like the FE.

            • Krogoth
            • 2 years ago

            The difference comes from firmware and drivers.

            Workstation cards are coded for precision and accuracy in their renderings. They don’t take shortcuts or use optimization tricks for extra performance.

          • brucethemoose
          • 2 years ago

          I don’t buy that. Vega is performing EXACTLY like an overclocked Fiji, down to like a percentage point in some instances.

          For a GPU with considerably more transistors (it's 500 mm^2+) and features than Fiji, that makes no sense. It can't just be AMD's historically bad launch drivers; I suspect Vega FE is just a card AMD chucked out the door so they could tell investors they met a launch target.

            • Voldenuit
            • 2 years ago

            [quote<]For a GPU with considerably more transistors (it's 500 mm^2+) and features than Fiji, that makes no sense.[/quote<]

            Vega has twice the FP16 FLOPS per clock of Fiji, so the execution units and registers would have to have gotten bigger. It also has the High Bandwidth Cache Controller and associated wiring. Also, it's unclear to me how much extra on-GPU cache Vega has, but AMD was talking it up, so I'm guessing more than Fiji had. Those things together could take up a lot of silicon die space.

            Sounds like AMD made a card that is going to be killer at compute (a theory hopefully testable soon if ppl can get mining apps to run on Vega without crashing), but there's no guarantee that any of these new features would be any use in gaming or even traditional GPU workloads.

            • RAGEPRO
            • 2 years ago

            No major execution unit or register changes were needed for FP16 support. It simply supports packed math (allowing it to run two FP16 ops in place of one FP32 op), which Fiji didn’t.

        • NoOne ButMe
        • 2 years ago

        Uh, yay? DX12 has to support an older, less refined architecture?

        Not good.

        AMD should have kept making VLIW5, or at minimum VLIW4, updates, but part of advancement is moving forwards.

        Fermi shouldn’t ever have been pushed by Nvidia as able to support DX12 (the spec was changed/made to allow Fermi support), and, aside from that, MS never should have let it happen.

        • ImSpartacus
        • 2 years ago

        Not sure how you worked Vega in there, but good for you. That was impressive.

      • USAFTW
      • 2 years ago

      And yet we’re still waiting for AMD to support DX12 on its Cayman series of GPUs, which all launched after not just the GTX 480, but the Fermi-refresh GTX 580.
      DX12 support aside, AMD stopped releasing ANY drivers for its Cayman and Cypress GPUs in December of 2015.
      In fact, there are new driver releases for Nvidia’s original Tesla GPUs (8800 GTX), which date all the way back to 2007. So Nvidia supported a 10-year-old GPU one whole year longer than AMD supported its six-year-old GPUs.

        • swaaye
        • 2 years ago

        How about those Cayman-based APUs from 2013 that also have no support? Granted, those are best forgotten. 🙂

        None of it is unprecedented for AMD. You don’t get support from them for the same length of time as from NV. It also causes game developers to drop support sooner.

          • juampa_valve_rde
          • 2 years ago

          Fermi parts probably support a subset of DX12 in hardware and do the rest at the software level (Tahiti does something like that, I think), but I seriously doubt any VLIW5 or VLIW4 GPUs or APUs from AMD/ATI could handle most of the DX12 stuff at the hardware level. Even if it were possible, I doubt anyone would put in the immense amount of development time required to make those GPUs do something meaningful with DX12 code; it’s just not worth it.

          Anyway, I would like to see what a Fermi can do with DX12 against modern GPUs in current games.

            • Krogoth
            • 2 years ago

            Take GTX 680, 760, and 960 results and reduce them by about 5-15%; that’s about what a GTX 580 should be hovering around.

            • juampa_valve_rde
            • 2 years ago

            I beg to differ:
            [url<]http://wccftech.com/nvidia-gtx-580-dx-12-benchmarks-geforce-384-76-geforce-400-500-series-direct-x-12/[/url<]

            • swaaye
            • 2 years ago

            Oh I’m not really referring to D3D12. I doubt that the VLIW AMD parts are capable of D3D12.

            What I’m talking about is the lack of any and all support. Those AMD APUs and GPUs have not had any driver work done for any games released after 2015.

            GCN has had the longest support AMD has ever bothered with. I suppose it’s because they still sell GCN 1 GPUs.

        • Krogoth
        • 2 years ago

        It doesn’t really matter though, as the GTX 580 and 570 (the 2560MiB-3072MiB versions) have barely enough VRAM to handle DX12 titles today, and certainly not enough for the DX12 titles of tomorrow. It is the same deal in the AMD/ATI camp.

        It is just Nvidia continuing to spend enough money to test and slap WHQL on its pre-DX11 hardware.

        • NoOne ButMe
        • 2 years ago

        AMD never promised to support them.

        Nvidia promised to support Fermi, which meant the DX12 spec had to be made to accommodate it.

        Then Nvidia took way longer than it promised to get it working on them.

        If Nvidia hadn’t promised/pushed for Fermi support for DX12, the API probably would be better.

          • Andrew Lauritzen
          • 2 years ago

          Yeah, exactly…

          Indeed few would be whining if NVIDIA just never claimed they were going to support DX12 on Fermi at all. But promising people they were going to do it (and thus forcing the API to support it) and then delivering it so late as to be effectively useless doesn’t win you many friends 🙂

          I imagine a lot of the reason was so that they and Microsoft could make bold claims about the amount of “DX12-compatible hardware” out there, and thus about why developers should jump onto the new API immediately, etc. Certainly this was a very relevant part of the marketing around the time of the Windows 10 release, if you recall.

          Still, I’m glad they did end up eventually doing it. Better to be super-late than just liars in the end 🙂

      • DoomGuy64
      • 2 years ago

      Fermi aged better than Kepler, aside from RAM limitations. Kepler moved a bunch of features that Fermi supported in hardware into software, and as soon as Nvidia stopped optimizing for Kepler, performance stagnated in newer titles. Older games ran fine, and what new games did get legacy support ran fine, but otherwise performance was unpredictable, while Fermi kept chugging along.

      Fermi certainly is outdated, and it lacks a large number of modern features, but anyone who got a 3GB 580 and stuck with it over a similar Kepler was better off. Saying a mobile Fermi part performs poorly is disingenuous, because most of those low-end mobile parts, Kepler included, cannot run modern games and barely ran games even at release. Only the high-end desktop cards ever had real long-term value, and some of those cards could have held up until Maxwell / GCN 1.2.

      Running one of those cards today is incredibly silly though. Just pick up a 960/970 on the cheap. Good to see the support, although it probably came more from unified driver consolidation: it’s easier to support the newer driver model than to keep supporting the older driver model just for legacy cards. No new features were added other than DX12, so that is most likely what happened.

    • NTMBK
    • 2 years ago

    [url=https://forums.macrumors.com/attachments/fermi-grill-jpg.224129/<]Obligatory[/url<]

      • morphine
      • 2 years ago

      That’s interesting because I never thought that my 570 was particularly hot, bothered, or loud. vOv

        • Krogoth
        • 2 years ago

        That was really only applicable to GF100 chips, most notably the GTX 480, because Nvidia pushed the clockspeed to its limits in the hope it could beat the HD 5870 at the time.

        It was unable to catch up to the Evergreen stuff in gaming performance and only really shone at general compute stuff.

        It ended up being Nvidia’s R600, a.k.a. the 2900 XT.

        GF104 and the GF110 respin addressed the architecture’s issues with gaming performance; that’s why the GTX 570 and 580 fared better. Similar to how ATI was able to respin the R600 and fix its issues, producing the more respectable RV670, a.k.a. the HD 3850 and HD 3870.

        • USAFTW
        • 2 years ago

          The only GPUs that suffered a curious case of grillitis were the GTX 480 and, to some extent, the GTX 470. The rest of the Fermi GPUs had no such problem.

          • Krogoth
          • 2 years ago

          The GTX 580 was pretty toasty, but it had the performance to back it up.

        • Kougar
        • 2 years ago

        Probably because the 500s were the re-spun, power-efficient versions of the 400-series cards. There’s a reason the only “FTW” model 480 that EVGA ever made was a watercooled part.

      • Kougar
      • 2 years ago

      Why do you think I keep my GTX 480 FTW in the kitchen?
