Nvidia will fix excessive GPU idle power use at high refresh rates

A couple of weeks ago, PC Perspective came across some intriguing behavior while testing Asus' PG279Q monitor. While driving that monitor with a GeForce GTX 980 Ti, the site found that after selecting refresh rates higher than 120 Hz, system power draw at the Windows desktop would climb dramatically for no apparent reason.

According to PCPer's analysis, the site's test system normally idles at about 75W. With the PG279Q attached, however, idle power draw climbed in tandem with refresh rates at the Windows desktop, hitting a peak of 200 watts. Eventually, the site tracked down the source of the problem. It turned out that the GTX 980 Ti's clock speed began climbing once the refresh rate exceeded 120 Hz.

The site brought the issue to Nvidia's attention, and the GPU maker has responded with a statement acknowledging the problem. A software fix will arrive in a future driver release. In Nvidia's own words:

We checked into the observation you highlighted with the newest 165Hz G-SYNC monitors. Guess what? You were right! That new monitor (or you) exposed a bug in the way our GPU was managing clocks for GSYNC and very high refresh rates. As a result of your findings, we are fixing the bug which will lower the operating point of our GPUs back to the same power level for other displays. We’ll have this fixed in an upcoming driver.

Nvidia didn't reveal the precise cause of the issue. PCPer theorizes that the problem might be related to GPU and pixel clocks being in lockstep, but since a fix is coming, the reason is mostly moot. Gamers with fancy monitors and GeForce cards, rejoice, for your GPU clocks and power bill will soon return to normal levels.

Comments closed
    • ZGradt
    • 5 years ago

    PCPer also said on their livestream that the obvious workaround is to run your desktop at 120hz or less and let games use the full refresh rate, since the power draw shoots up during games anyway. That way, you’d only be missing out on smooth window dragging.

      • qasdfdsaq
      • 5 years ago

      Window dragging is pretty smooth at 120Hz anyway. I doubt there’s as big a difference between 120Hz and 144Hz as there is between 60Hz and 120Hz.

    • NoOne ButMe
    • 5 years ago

    Good, very good. But why did Nvidia, with its “far better drivers” and way more money, fail to miss this when AMD didn’t?

    Still, very good. I am still a bit shocked this was true. I thought that, of all the driver issues to come up, Nvidia would have made extra sure that their high refresh rates (especially the overclocking to 165Hz) got very thorough testing with all variables.

    Probably testing was done on Kepler and some sub-block changed in a minor way in Maxwell, I suppose.

      • derFunkenstein
      • 5 years ago

      They actually didn’t fail to miss this, which is why it was discovered in the first place. 😉

      • ronch
      • 5 years ago

      Not sure anyone has done this test using AMD graphics cards. And if anyone ever did and the same thing happened, I’m sure AMD would just sweep it under the rug and say these are isolated cases.

        • AnotherReader
        • 5 years ago

        The linked article at PCPer tested the R9 Fury and found no change in clocks at higher refresh rates. The power draw increased by 2 W at the highest end, which seems low enough to fall within the error bar.

        On a side note, I don’t know of any review site that reports error bars for their measurements.

      • f0d
      • 5 years ago

      AMD’s “far better drivers” also have issues of their own:
      https://www.techpowerup.com/reviews/Gigabyte/GTX_980_Ti_G1_Gaming/28.html

      Notice the multi-monitor power usage and video (Blu-ray) power usage? It isn’t exactly a new thing:
      https://www.techpowerup.com/reviews/Sapphire/R9_290_Vapor-X/23.html

      Where are the announcements from AMD saying they are looking into the issue?

    • TruthSerum
    • 5 years ago

    So you’re saying drivers have bugs sometimes?

    } Ducks, old lady behind me gets punched in the nose {

    Honestly they’ll fix it and people will find new things to complain about…
    as billions of transistors tick and tock ever-smaller complaints onto the internet
    for ever-smaller people to get ever more upset about, forever…

    Bores Law?

    • Chrispy_
    • 5 years ago

    It’s nice that they’re fixing a bug, but I doubt people with 250W, $650 graphics cards and $800 monitors are worried about the power usage.

    As always, it never pays to be an early adopter; you pay through the nose to be a beta tester, and by the time all the problems are fixed, the expensive product you were testing is now an affordable commodity… :\

      • Firestarter
      • 5 years ago

      High power usage also means high heat, and maybe high noise. Even with quiet fans you’ll notice when your little man cave heats up without you gaming.

        • Airmantharp
        • 5 years ago

        The heat is definitely what I notice most…

      • Andrew Lauritzen
      • 5 years ago

      If it was load power usage, agreed that no one really cares. But idle/desktop power usage is pretty relevant to everyone, especially when we’re talking about >100W + fans. Personally despite having a high end machine, if they don’t fix this then I simply wouldn’t use the relevant >120Hz modes or similar… the power draw is more relevant than the additional refresh.

      Glad to hear they are looking into it and here’s hoping for a fix!

    • derFunkenstein
    • 5 years ago

    I know the site’s name, but PCPer sounds like somebody who’s smoking the Angel Dust.

      • willmore
      • 5 years ago

      Wow, keeping it classy, huh, Ben?

        • derFunkenstein
        • 5 years ago

        Just the era of when I grew up.

    • kerwin
    • 5 years ago

    I noticed this back in May when I first got my Acer XB270HU. It was talked about on the Geforce forums as early as October of last year. I’m glad it’s finally being fixed.

    • xeridea
    • 5 years ago

    It seems totally silly for your refresh rate to have anything to do with your GPU clock.

      • MathMan
      • 5 years ago

      Is it that unreasonable to assume that a higher refresh rate requires some part of the GPU to work harder as well?

        • xeridea
        • 5 years ago

        No, desktop use should have any modern GPU at extremely low utilization, so the cost of increasing refresh rate by, say, 50% would be so absurdly negligible it would never matter. This isn’t gaming we’re talking about, it’s an idle desktop, which would be easy for a circa-1999 video chip.

        This is nothing more than a botched GSync driver in their rush to get their overrated, overpriced proprietary garbage out the door. Some users have reported this for over a year, so their QA dept must be asleep.

          • qasdfdsaq
          • 5 years ago

          Except running high refresh rates or multiple monitors on both nVidia *AND* AMD cards causes increased power usage. Both exhibit increased idle clocks, making the lowest power-saving modes unavailable.

          Oh wait, you can’t blame G-Sync on AMD cards or 10-year-old VGA monitors.

            • xeridea
            • 5 years ago

            Point me to another card that uses 200W at idle in any situation, and I will send you a pizza. It would be plausible for there to be a small increase in power used, but this isn’t a small thing, it is astronomical. How can a laptop run 3 monitors with very little power usage, while this card has a crazy high idle power draw running one with refresh rates above 120Hz? If it is so hard for a modern GPU to run a display at idle… how can a lowly cell phone GPU run a high-res screen + HDMI output?

            • MathMan
            • 5 years ago

            Read the reply from Nvidia again: it was a bug. They admit it. It’s no big deal. It will be fixed.

            • Klimax
            • 5 years ago

            None should. That’s why it is called a bug and is due to be fixed… and after that, none will.

          • MathMan
          • 5 years ago

          Correct, it should. That’s why they say it’s a bug.

          But my mistake was assuming you were trying to have a technical discussion when all you really wanted was to look for an excuse to get into an old tired rant.

          My bad.

            • xeridea
            • 5 years ago

            I was looking for a technical reason to why this is, but there doesn’t seem to be one.

            • MathMan
            • 5 years ago

            The technical reason could be as simple as: ‘we need to do stuff when switching to a lower power mode, this wasn’t specifically tested for these refresh rates, and for one reason or another these refresh rates failed to trigger the lower power mode. Now that we know about this, we’ll fix that.’

            Something like: “if BW<X then lower clocks” where X was defined for 120Hz instead of 144Hz.
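
            To make that concrete, here is a minimal Python sketch of the kind of cutoff MathMan describes; every name and number in it is hypothetical, chosen for illustration rather than taken from Nvidia's actual driver:

                # Toy model of an idle-clock cutoff tuned with 120Hz panels in mind.
                # Hypothetical names and thresholds, not Nvidia's driver logic.

                LOW_POWER_PIXEL_RATE = 2560 * 1440 * 120  # highest rate the idle state was validated for

                def pick_power_state(width, height, refresh_hz):
                    pixel_rate = width * height * refresh_hz
                    if pixel_rate <= LOW_POWER_PIXEL_RATE:
                        return "idle"      # lowest clocks and voltage
                    return "3d_boost"      # falls through to a full-performance state

                print(pick_power_state(2560, 1440, 120))  # idle, as expected
                print(pick_power_state(2560, 1440, 144))  # 3d_boost: the 200W-at-the-desktop symptom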

            • qasdfdsaq
            • 5 years ago

            I suspect it could be some old/legacy code that did some scaling of some internal variable with refresh rates in an unbounded manner, and didn’t account for future monitors going above what was available at the time.

            Hopefully the fix will also help my power consumption, as my GTX 970 refuses to drop below 800MHz clocks with a mixed-refresh 3-monitor setup. Mine’s recording idle power consumption of 34% of TDP as well, around 50W.

            • Visigoth
            • 5 years ago

            I’m ROFL at xeridea’s armchair GPU architect comments! Ever heard of a software bug? Go back to school, boy, and learn something useful. Then come back and “educate” us on your extensive GPU driver stack experience.

            • xeridea
            • 5 years ago

            I am a programmer with about 7 years of experience in different areas of computing. I can code in PHP, JS, MySQL, Java, and C++. Generally I am one of the lead developers wherever I work. I know bugs by nature aren’t known at first.

            It just seems strange that refresh rate could magically make your clocks spike really high as if you are in an intense game, when your GPU utilization would still be < 1% (because you are idling on the desktop, on a high end card).

            Also, this bug has been around for over a year. One would think they would test >120Hz if they are advertising it with Gsync, and there are high-refresh-rate monitors on the market. I mentioned Gsync as an issue because the problem seems to only happen with Gsync enabled, and it is known that they rushed Gsync to market, which is why the monitors cost an extra $100 for an expensive FPGA rather than an ASIC.

            • MathMan
            • 5 years ago

            Ah, yes: the fact that you know PHP (really?) makes you qualified to declare a product rushed because it uses an FPGA.

            We’re now 2 years after the introduction of GSync, and AFAIK, they’re still using an FPGA. Are they still in a rush?

            2 years should have been more than enough to develop an ASIC. Yet they didn’t. Has it crossed your mind that an FPGA works just fine as a solution and doesn’t require millions in investments for something that’s not really a core business of Nvidia? Have you ever considered the opportunity cost of assigning a chip development team on such a side project when that same team could work on high volume GPUs instead? Have you considered that using an FPGA allowed them to go out and grab a market quicker than an ASIC ever could?

            Meanwhile, new monitors with that rushed solution are getting top reviews all around.

            PHP…

            • Klimax
            • 5 years ago

            FPGA’s problem is cost for this type of use.

            • MathMan
            • 5 years ago

            The cost is only an issue when it prevents Nvidia from making money on it. And, once again, for something that’s only a side business, the NRE and opportunity cost of a full ASIC probably make going full ASIC more expensive than an FPGA. The scaler business is notoriously low margin. It makes sense to corner a minor high-value part of the market.

            What really doesn’t make sense is to claim that it’s rushed…

            • xeridea
            • 5 years ago

            The problem with the FPGA is it makes monitors cost $100 or so more than they should. If they wanted Gsync to be relevant in a few years, they would make an ASIC, but they are too lazy, and FreeSync will win out in the end. They rushed their proprietary (as always) tech in the most expensive way possible.

            PHP is my main language, but I know C++ and Java, and have an understanding of a vast number of computing concepts. I am not an expert in the GPU field, but it doesn’t take an expert to see that an FPGA for such a task is lazy, and they should have done more testing, or realized this a year ago when the issue started being noticed by users. Are you a master GPU architect, or are you just accepting whatever Nvidia says? Is there something about me being a programmer that makes me inferior at spotting the issue?

            • chuckula
            • 5 years ago

            Yeah, not impressed.
            You [might] understand some relatively abstracted software development techniques, but when interacting with highly complex HARDWARE a bunch of those simplistic assumptions that might hold true for a PHP blog go flying out the window.

            Coding at this level is not merely analyzing a few lines of code, but understanding the complex and often non-obvious side effects that happen to the hardware when the code is executed. That’s not to mention the complexities of potential race conditions when the hardware may not execute the code in exactly the way you think it will.

            P.S. –> How come I never see you posting these screeds when AMD has one of its numerous driver bugs cause issues?

            • xeridea
            • 5 years ago

            I don’t dispute the fact that AMD’s drivers are generally inferior. I have never had an issue that affected me, though. I am commenting on this because they were flaunting their overpriced Gsync, which was rushed and apparently has a major driver issue that has been known about for a long time and is not fixed.

            • qasdfdsaq
            • 5 years ago

            Perhaps you chose to ignore where I said it happens without anything G-Sync capable even connected.

            • xeridea
            • 5 years ago

            I read some forum posts on other sites that said it was only with Gsync. I don’t have a $700 Gsync monitor so I can not confirm.

            • NoOne ButMe
            • 5 years ago

            A software bug that was reported about 1.5 years ago?

            Finding a clock profile which is over the minimum clocks and can drive such a refresh without drastically raising power takes 1.5 years?

            AMD, with far fewer resources, fixes things far faster than this. Their frame pacing fixes for the terrible 7990 took about 4 months. I’m sure some things have taken AMD longer, but I highly doubt most things, especially things aimed specifically at high-end gamers, have taken over 6 months to fix. That frame-pacing driver led to the 7990 “topping” the latency charts: https://techreport.com/review/25167/frame-pacing-driver-aims-to-revive-the-radeon-hd-7990/1 In fact, the frametimes of the 7990 came in under single cards from Nvidia.

            Now, it did take AMD years past Nvidia to implement frame pacing, certainly. The reviews pushing it started in late 2011 for the most part, and AMD wasn’t terrible until the 7990. It should have been fixed earlier. Once Nvidia put FCAT out it was quickly fixed; thanks, Nvidia, for once.

            Tech sites have had over 1.5 years to write articles on this power usage and didn’t. It should have been fixed over a year ago. Nvidia has had nearly, if not over, 4x the time since this was diagnosed and talked about, and it hasn’t happened. I imagine the GPU can only clock so low before the voltage for operations gets troubled, but I’m sure they could at least release a BIOS you can flash to add another clock state in the 600 or 700MHz range with lower RAM clocks, and cut a big portion off of that power increase.

            But, hey. Nvidia has way better drivers. http://www.overclock.net/t/1497172/did-you-know-that-running-144-hz-causes-ridiculously-high-idle-temperatures-and-power-draw-on-your-nvidia-gpu (Last edit for that OP was 20th June 2014.)

      • Sargent Duck
      • 5 years ago

      Funny thing about bugs, they don’t always work like you expect them to.

        • morphine
        • 5 years ago

        99 bugs in the code, 99 bugs.

        Debug one down, commit the fix around, 100 bugs in the code.

          • DoomGuy64
          • 5 years ago

          Fixed: ..... (http://www.descent2.de/d2x-history.html)

          • willmore
          • 5 years ago

          I believe that ends with “101 bugs in the code”.

            • qasdfdsaq
            • 5 years ago

            Not in my world 🙁

          • qasdfdsaq
          • 5 years ago

          http://rlv.zcache.co.uk/99_bugs_in_the_code_tees-r8e86f12f9c144608a9aa9f8e798dc5e9_va6l9_512.jpg

      • just brew it!
      • 5 years ago

      Not at all. High refresh rates, when combined with high resolutions, imply really high pixel clock rates. Even with umpty-bazillion parallel pipelines to do the rendering, at some point all those pixels have to be gathered up from the frame buffer and merged together into a single output stream — an inherently serial process. At least some parts of the GPU will need to run at the pixel clock rate.

      So it isn’t hard to imagine a relationship between refresh rate and GPU clock, depending on how they’ve implemented the logic that derives the various clocks used inside the GPU. Hopefully this is something they can fix with a driver patch; if it’s a limitation inherent in the implementation of the DPLLs used for internal clock generation (i.e. hard-coded into the silicon), people with existing cards may be SOL.
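
      As a rough back-of-the-envelope check on those numbers, here is a small Python sketch estimating the pixel clock for a 2560x1440 panel at various refresh rates; the blanking figures are assumptions in the spirit of reduced-blanking timings, not the PG279Q's actual ones:

          # Estimate the pixel clock a 2560x1440 panel needs at different refresh rates.
          # Blanking overheads below are rough assumptions, not the monitor's real timings.

          H_BLANK = 160   # assumed horizontal blanking, in pixels
          V_BLANK = 40    # assumed vertical blanking, in lines

          def pixel_clock_mhz(width, height, refresh_hz):
              total_pixels_per_frame = (width + H_BLANK) * (height + V_BLANK)
              return total_pixels_per_frame * refresh_hz / 1e6

          for hz in (60, 120, 144, 165):
              print(f"{hz:3d} Hz -> ~{pixel_clock_mhz(2560, 1440, hz):.0f} MHz pixel clock")

      By this estimate the scan-out path is being asked for well over 600 MHz at 165 Hz, which is why a link between refresh rate and at least some clock domains isn't absurd; the bug is in how far the rest of the GPU clocked up along with it.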

      • NoOne ButMe
      • 5 years ago

      No, it isn’t. What was and still is surprising is that the GPU clocked up so high. If the GPU clock had risen by roughly [new refresh/old refresh], it would have made perfect sense.

      You need more power to push more pixels faster, I would imagine. But the fact that it clocks up to 980MHz, if I remember right, is very surprising. I’m quite sure there are clock states in between that, too. Even stranger that the GPU didn’t think to latch onto one of those.

      Oh well, weird driver bugs happen. Nvidia’s drivers rose above AMD’s after the initial W10 disasters and now appear to be about equal overall.

      • anotherengineer
      • 5 years ago

      No it doesn’t: more frames, more bandwidth, more power.

      Example: I have a 120Hz screen.

      @ 60Hz the GPU is at 100MHz

      @ 120Hz the GPU is at 500MHz

      That’s the next step in the VBIOS; the same thing happens with dual screens over a single screen.

      Now the jump could probably be refined a bit, as it seems like a big jump to me, but that’s the way it is.
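
      A tiny Python sketch of that stepped behavior, using the two data points in the example above and otherwise made-up rules (an illustration, not a real VBIOS table):

          # Illustrative model of stepped idle clocks: the driver/VBIOS jumps to the
          # next predefined state instead of scaling smoothly with refresh rate.
          # The state list and the rule of thumb are hypothetical.

          IDLE_STATES_MHZ = [100, 500]   # low idle, high idle (per the example above)

          def idle_clock_mhz(refresh_hz, num_displays=1):
              # Assumed rule: a single display at 60Hz or less fits the lowest state;
              # higher refresh rates or extra displays bump to the next step.
              if num_displays == 1 and refresh_hz <= 60:
                  return IDLE_STATES_MHZ[0]
              return IDLE_STATES_MHZ[1]

          print(idle_clock_mhz(60))      # 100 MHz
          print(idle_clock_mhz(120))     # 500 MHz: the whole next step, not a proportional bump
          print(idle_clock_mhz(60, 2))   # 500 MHz: dual screens land on the same step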

    • Shambles
    • 5 years ago

    Perhaps it would be better for TR to post this story when the problem has actually been fixed, rather than when Nvidia’s PR machine says they totally have it solved and will release a fix at some unknown time in the future.

      • just brew it!
      • 5 years ago

      Has TR posted about this issue before? If not, then it is still useful, to let people who don’t read PC Perspective know that the issue exists.

      • DoomGuy64
      • 5 years ago

      Kinda sounds like you’re advocating a media blackout on nvidia’s driver issues until after they’re fixed, which isn’t very ethical.

      This is especially nefarious since nvidia sells their hardware on power efficiency and Gsync. Considering that this bug has gone unfixed for years, everyone who bought a card under that pretense was misled, and IMO that opens nvidia up to a class action lawsuit or worse. This is a Volkswagen-emissions-level scandal, as it invalidates nvidia’s claimed power efficiency ratings under these operating conditions.

        • Klimax
        • 5 years ago

        Actually, this didn’t go on for years; at least there is no data on that. We know the driver/Maxwell combination caused higher consumption with fairly rare high-refresh-rate monitors. We don’t know if other cards get hit similarly. And we can’t go too far back, as support for such refresh rates will IIRC disappear.

        So you are going a bit overboard there. Not only has it not been going on for years, there is so far no basis to claim a VW-level scandal or that a class action is possible. You simply have too many errors there for such massive claims. You jumped way too far without backing…

        I suggest you check what actually happened with VW’s cars, because there is no similarity at all.

        • MathMan
        • 5 years ago

        It’s kind of hard for this bug to have gone unfixed for years when the first 2560×1440 144Hz G-Sync monitor was only introduced 14 months ago…

        Aside from that: Volkswagen emission level scandal? Really?

          • derFunkenstein
          • 5 years ago

            There were fixed-rate 144Hz monitors before that, though. Still, I think DoomGuy64 is mostly right in that there should not be a blackout of sorts over Nvidia’s driver issues. For Shambles to suggest otherwise is irresponsible.

            • sweatshopking
            • 5 years ago

            “irresponsible”

            STOP STEALING MY JOB, SHAMBLES

        • TruthSerum
        • 5 years ago

        I think he misworded what he was suggesting.

      • ImSpartacus
      • 5 years ago

      I think it’s important to share these kinds of announcements.

      Granted, the headline probably should’ve been something like, “Nvidia promises to fix excessive GPU idle power use at high refresh rates” since it’s just a “promise” at this point.

      But regardless, the article is a fine article.

      • torquer
      • 5 years ago

      Yeah we should definitely never hear about a problem until it’s solved. I also don’t want to hear the ends of any sentences!

      • TruthSerum
      • 5 years ago

      Or… maybe a follow-up? That’s probably a better suggestion. Disclosure -> conclusion.

    • drfish
    • 5 years ago

    I’m still confused by this… When it came up again I thought, wait a sec, didn’t this already get fixed? Check out this thread from well over a year ago: http://www.overclock.net/t/1497172/did-you-know-that-running-144-hz-causes-ridiculously-high-idle-temperatures-and-power-draw-on-your-nvidia-gpu

      • GTVic
      • 5 years ago

      I read the first few and last pages; there was no indication that it was fixed or even reported to Nvidia. Sounds like PC Perspective was the first to report the problem.

      • egon
      • 5 years ago

      Heck, looks like it was raised in Nvidia’s own forum back in 2013:

      https://forums.geforce.com/default/topic/648304/1080p-144hz-excessive-gpu-usage-while-idle/

      and again last year:

      https://forums.geforce.com/default/topic/779363/144hz-monitor-your-gpu-will-not-down-clock-high-power-usage/

        • Shambles
        • 5 years ago

        Shhhh, don’t disturb the nVidia circle jerk.

        • Klimax
        • 5 years ago

        Based on PCPer’s results, it looks like incremental fixes, because 144Hz behaved correctly but 165Hz didn’t.

        ETA: I misremembered the PCPer article; there is some jump at 144Hz. However, your links are about Kepler, not Maxwell. Different parts of the code. (Though Nvidia needs to keep an eye on it more…)
