Nvidia denies rumored Kepler performance degradation

Earlier today, we saw a rumor claiming that Nvidia’s Kepler GPUs suffer from a flaw that results in "serious performance degradation over long periods of heavy load." The story was posted at Pnosker, a site we hadn’t heard of before, and claimed the issue might prompt a recall. Xbit Labs picked up the story, which has since been repeated elsewhere on the web.

There’s just one problem: it’s entirely false, according to Nvidia. Company spokesman Bryan Del Rizzo told us "there is no truth to this rumor." We asked specifically about not only the supposed recall, but also the claims of degraded performance under sustained, heavy loads. Neither is true, Del Rizzo said.

Despite Nvidia’s unambiguous denial, some doubt lingers over whether the rumor, credited to a TSMC leak, might still be true. That strikes me as a little odd, considering the original story remains short on details despite several updates. Kepler cards are out in the wild, in the hands of both reviewers and end users, and we’ve yet to see any credible reports of degraded performance.

Comments closed
    • crsh1976
    • 7 years ago

    Not saying that Nvidia is lying, but did they ever acknowledge the thermal issues with the GeForce 8600M from 2008 (used in the MacBook Pro and other notebooks)?

      • Yeats
      • 7 years ago

      They reached a settlement, which included repairs and replacements of defective notebooks.

      • Silus
      • 7 years ago

      They were the FIRST to acknowledge the problem in their quarterly results…

      [url<]http://www.nvidia.com/object/io_1215037160521.html[/url<]

    • PrincipalSkinner
    • 7 years ago

    Look on the bright side. At least there won’t be that many cards to recall!

    • DragonDaddyBear
    • 7 years ago

    I think the fans of Diablo III would have noticed by now if it were an issue.

      • internetsandman
      • 7 years ago

      Diablo isn’t that demanding, though, is it? I’m on a MacBook Pro and I can play at medium-to-high settings on my external monitor at 1920×1080, so I can’t see any performance degradation affecting people unless it’s incredibly severe.

      • BobbinThreadbare
      • 7 years ago

      Why? Is the login screen really intense? /rimshot

    • Silus
    • 7 years ago

    This is the state of affairs in here… some unknown site says “X company has problems” and everyone and their dog follows up on it… it’s pretty pathetic to see something like this in the front-page news… We already have one site that’s pathetic enough to constantly make things up about NVIDIA, but at least that one is known, if not outright infamous.

    Next in pathetic e-journalism will be some guy/girl posting on Facebook about how NVIDIA sucks, and TR will chime in with “Report: NVIDIA sucks, says random guy @ Facebook.”

    Sad, sad indeed…

      • l33t-g4m3r
      • 7 years ago

      Sadder than a fanboy shilling for his favorite company? Probably not.

      News is news, whether or not you agree with it. Also, note that the article here is skeptical.
      [quote<]we've yet to see any credible reports of degraded performance[/quote<] So take your foot out of your mouth, and stop your whining.

        • Silus
        • 7 years ago

        And yet again you show how detached you are from reality. This is not “news”. If anything it’s a “rumor”, and from an unknown site at that. It’s actually funny how you defend it as “news”… it’s exactly as I said: the new low will be some guy on Facebook claiming something and that being reported as “news”…

        Being skeptical makes it a forum post at best, not front-page news. Or material for a dedicated rumor section (something I’ve suggested here for a while)…

          • l33t-g4m3r
          • 7 years ago

          You don’t get to decide what news is, and your whining doesn’t lend you credibility. If you wrote the articles, it would all be advertisements, lies, spin, and cover-ups. Bumpgate wouldn’t exist; buy a new rehashed Nvidia card. Basically mind-numbing tripe that nobody would care to read. Some of the best news comes from rumors, and if a rumor is big enough, it’s at minimum newsworthy enough to be debunked. Ignoring a rumor sometimes makes it worse, so you should actually be appreciative. In the end, you can always go elsewhere for news if you feel that strongly about it. Can’t say you’ll be missed.

            • Silus
            • 7 years ago

            LOL, of course I do. It’s MY opinion, just like it’s YOUR opinion that this is news. It’s called freedom of speech, and it’s why people discuss and argue about things. You don’t like it? Tough luck! You can also go elsewhere if you don’t like it. You surely won’t be missed.

            • l33t-g4m3r
            • 7 years ago

            I don’t have a beef with the article or even with you. Whatever I read on TR’s front page is news to me. I may not like the news, but that doesn’t make it not news. Where you got this idea that you can dictate what news is, I dunno, because you don’t have the authority to do so. Go make your own website if you want to control the news. Aside from that, you can talk about the articles here all you like, but claiming news is not news when, voilà, it’s right there is pretty incredible.

      • Krogoth
      • 7 years ago

      Status:

      DOLAN MAD

    • can-a-tuna
    • 7 years ago

    Denial is the first step to acceptance. Good luck rma:ing those people.

      • Yeats
      • 7 years ago

      [quote<]Denial is the first step to acceptance. Good luck rma:ing those people.[/quote<] You can RMA people? Are the defective people used to make Soylent Green?

        • willmore
        • 7 years ago

        Wait, soylent green is made from people? Aahhhhhhhhh!!!!

    • DeadOfKnight
    • 7 years ago

    This just goes to show how much credibility NVIDIA has if we’re still discussing this silly rumor.

    • l33t-g4m3r
    • 7 years ago

    Driver bugs/memory leaks/poor QC? I know for a fact that there is a serious performance-degradation bug that affects my 470. After a random period of time, the card will glitch out for no reason and permanently set the clock speed to 50MHz until I restart. I just reinstalled the driver today attempting to fix this. It has nothing to do with heat, because it happens while the card is sitting idle. I think there may be a serious issue with adaptive power management, because maximum-performance mode is a lot more stable. That TDR stuff may be related too, dunno.
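
    FWIW, here is the kind of quick-and-dirty watchdog that would catch it in the act (a rough sketch only, assuming a driver whose nvidia-smi supports the --query-gpu interface; field names vary by version):

    # Rough sketch: poll the SM clock and flag the stuck-downclock state
    # (GPU busy, yet pinned near idle clocks). Assumes nvidia-smi supports
    # --query-gpu; older drivers may not.
    import subprocess
    import time

    QUERY = ["nvidia-smi",
             "--query-gpu=clocks.sm,temperature.gpu,utilization.gpu",
             "--format=csv,noheader,nounits"]

    while True:
        sm_mhz, temp_c, util = (int(v) for v in
                                subprocess.check_output(QUERY).decode().split(","))
        if util > 50 and sm_mhz <= 100:  # loaded, but stuck at ~idle clocks
            print(f"stuck downclock: {sm_mhz} MHz at {util}% load, {temp_c} C")
        time.sleep(5)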

      • DeadOfKnight
      • 7 years ago

      Why would anyone still use a GF100? That’s just as bad as continuing to deal with AMD drivers in Linux.

      Yeah that’s right, I just took a jab at both sides…come at me thumbs!

        • JustAnEngineer
        • 7 years ago

        [quote=”A troll”<] Why would anyone still use a GF100? [/quote<] Because they don't have $300+ to spend on a GF110 GPU just to replace a working GF100?

          • DeadOfKnight
          • 7 years ago

          That much is obvious; however, isn’t it going to cost you a lot in the long run, depending on how long you hold onto the card, given how horribly inefficient it is?

          I’m not rich by any means, but I do have a disposable income as a single guy so maybe my priorities don’t match up with everyone else’s. If I had a 470 I would definitely have sold it and bought a 560 Ti by now.

            • [+Duracell-]
            • 7 years ago

            The whole $2/month you’re saving is fairly negligible compared to buying a new video card for efficiency’s sake.
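
            Back-of-the-envelope, assuming roughly a 45 W gap in board power between a GTX 470 and a 560 Ti and electricity at ~$0.12/kWh: 45 W × 4 hours/day × 30 days ≈ 5.4 kWh, or about $0.65 a month. Even gaming twice that much stays under $2.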

            • DeadOfKnight
            • 7 years ago

            There’s also noise and heat to consider. He’s already said he’s having problems with the card.

            • l33t-g4m3r
            • 7 years ago

            Don’t make things up. It’s an issue with adaptive power management. The card plays games fine, but sitting idle for extended periods on adaptive makes it stick at 50MHz. There could theoretically be a bug in the power-management code that is hardware-independent, or my 470 just doesn’t like sitting idle at low clock speeds.

            I dunno exactly what causes it, but I’ve seen other people having similar problems with newer cards, and if Nvidia can’t fix it there’s no reason to stick with the brand. Performance is nice, but so is stability.

          • l33t-g4m3r
          • 7 years ago

          Exactly. I don’t feel any need to “upgrade” to a slightly faster GPU with the same or less memory. My next upgrade will probably be a 7870, not another GeForce. Buying an x60 card would actually be a downgrade in various areas like tessellation, and I don’t feel like spending $300+ on a 570 when they’re already outdated and aren’t that much more efficient. 670s are too expensive, and ATI’s last gen is too slow. My brother has a 6950, and it’s slower, at least in the Heaven benchmark: that card gets 10-20 fps, whereas my 470 gets 30-40 fps. So the 7870 is it, but the price has to come down to $300 before I’ll make a purchase.

      • Krogoth
      • 7 years ago

      AMD had similar issues with the 57xx and 48xx series with GDDR5. The vast majority of the cases happened with cards running at stock; the overclocking/overvolting crowd didn’t experience the problems.

      I suspect the problem is that Nvidia/AMD are just being too aggressive with power savings on their high-end chips. The GPU and memory don’t like bouncing between different voltages and clock speeds going from load to idle.

        • l33t-g4m3r
        • 7 years ago

        That would be my guess as well, only I don’t know how to permanently fix it without flashing the BIOS. Max-performance mode works, but it’s not as efficient. Nvidia could add a medium power profile too, but so far nobody has attempted to address the problem.

    • Deanjo
    • 7 years ago

    As someone who has been loading these cards with single-precision CUDA work 24/7 since the 27th and 30th of last month, without so much as a crash on openSUSE, I can only call BS: performance has remained consistent and shows no sign of letting up.
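
    The sort of consistency check I mean looks something like this (a minimal sketch, using CuPy as a stand-in for the actual CUDA workload and assuming a working CUDA stack; any fixed kernel would do):

    # Minimal sketch: re-time an identical GPU workload around the clock,
    # so any slow performance degradation shows up as a downward trend.
    # Assumes CuPy on top of a working CUDA install (illustrative only).
    import time
    import cupy as cp

    N, ITERS = 4096, 50
    a = cp.random.rand(N, N, dtype=cp.float32)  # fixed SP workload

    def gflops():
        cp.cuda.Device(0).synchronize()
        t0 = time.time()
        for _ in range(ITERS):
            c = a @ a  # the exact same matrix multiply every time
        cp.cuda.Device(0).synchronize()
        return 2.0 * N ** 3 * ITERS / (time.time() - t0) / 1e9

    baseline = gflops()
    while True:
        now = gflops()
        print(f"{now:.0f} GFLOPS ({100 * now / baseline:.1f}% of baseline)")
        time.sleep(600)  # re-measure every ten minutes, 24/7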

      • Buzzard44
      • 7 years ago

      Not that I buy either side’s story, but I wouldn’t necessarily call 3-4 weeks “long periods”. All the people pointing out that the card hasn’t been out long enough to prove this one way or the other have a point.

      If there actually is an issue, though, NVidia didn’t leave themselves any wiggle room, which means (in my humble opinion) that if there ever was a problem, it has probably already been fixed and will only apply to very early production cards.

      Might be a lie, might be an internal TSMC memo blown out of proportion, might be based on something yet to rear its ugly head. Time will tell.

        • Deanjo
        • 7 years ago

        That 3-4 weeks of 100% load probably equates to a gamer’s load over a year. Loads like these have killed weak VRMs, caused thermal throttling, and exposed weak VRAM on consumer cards in the past.

        • cygnus1
        • 7 years ago

        Yeah, because 3 and a half weeks of continuous usage is normal for a gamer. I think it’s safe to say that’s a long period.

    • TurtlePerson2
    • 7 years ago

    What could possibly cause this degraded performance after extended use? Only temperature as far as I can tell. I don’t recall the temperatures being especially high in the review, so I have to assume that the rumor is false.

    Perhaps someone else can lend an explanation of how extended use could degrade performance?

      • TurtlePerson2
      • 7 years ago

      On second thought, it could be a drivers problem. If that were the case, then it would be easy to fix.

      • cynan
      • 7 years ago

      Not sure if this applies to the tech used in recent TSMC processes, but the second page of [url=https://engineering.purdue.edu/ece477/Homework/CommonRefs/CMOS_failure_modes.pdf<]this document[/url<] provides a basic overview of how semiconductor circuit paths can degrade. I suppose architecture variation could explain why Nvidia's transistor circuits might be susceptible to premature degradation while, for example, AMD's are not, but it doesn't seem likely to me. Though I'm pretty ignorant about this stuff.
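
      To give a flavor of the mechanisms that document covers: electromigration, for one, is classically modeled by Black’s equation (a textbook form, not anything specific to TSMC’s 28nm process):

      \mathrm{MTTF} = A \, J^{-n} \, e^{E_a / (k T)}

      where A is a process-dependent constant, J is current density, n is typically around 2, E_a is an activation energy, and k is Boltzmann’s constant. Higher sustained current density and temperature shorten expected lifetime, which is at least consistent with the rumor’s “heavy load” framing.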

    • Visigoth
    • 7 years ago

    This is a bullsh!t lie, most likely created by some rabid fanboy:

    [url<]http://www.xbitlabs.com/news/graphics/display/20120521120817_Nvidia_Denies_Plans_to_Recall_GeForce_GTX_600_Due_to_Performance_Degradation.html[/url<] The rumor has already been squashed by NVIDIA themselves.

      • alwayssts
      • 7 years ago

      Not to completely write off what you’re saying, since a response from Nvidia is important, but we have less than zero reason to believe anything they say.

      NVIDIA is a company seemingly run primarily by marketing- and PR-inclined people who are the best in the biz at plausible deniability and at sweeping things under the rug unless they absolutely have to deal with them publicly.

      I don’t say that with Charlie-like disdain, but because those are the facts; almost anyone familiar with the company could cite numerous examples.

      • l33t-g4m3r
      • 7 years ago

      Remember Bumpgate? Nvidia denied that for how long?

        • Silus
        • 7 years ago

        Don’t be an idiot. NVIDIA was the FIRST to talk about it in their own quarterly results meeting. Only then did your “friend” Charlie start with the rabid inventions that it affected every chip NVIDIA ever made, when it actually only affected the mobile chips.

          • derFunkenstein
          • 7 years ago

          Charlie is nobody’s friend. If Charlie was an ice cream flavor, he’d be pralines and dick.

            • UberGerbil
            • 7 years ago

            …and you’re allergic to pralines.

    • LoneWolf15
    • 7 years ago

    “Kepler cards are out in the wild, in the hands of both reviewers and end users, and we’ve yet to see any credible reports of degraded performance.”

    Not to lend this story much in the way of credibility, but to quote you quoting it:

    “we saw a rumor claiming that Nvidia’s Kepler GPUs suffer from a flaw that results in ‘serious performance degradation over long periods of heavy load.’”

    Who among us has seen a Kepler GPU operate over long periods? The card hasn’t been out that long.

    There’s probably no data to back up either side. However, we’ve seen more than one example where Nvidia silicon didn’t live up to the hype (the nForce3/4’s broken hardware firewall, GeForce 68xx GPUs with broken PureVideo, mobile GPUs with failing solder joints). I hope for their sake this doesn’t turn out to be another one of them; I’d like to hope they’ve gotten more careful over time.

      • alwayssts
      • 7 years ago

      +1. That is EXACTLY what I was thinking. How could anyone actually know yet? But there is precedent, like you mentioned… not to mention the recall of those EVGA cards.

      TBH, while it could certainly all be bull, given how Kepler works (constantly switching clocks and voltages, even pushing to the bleeding edge of 28nm capability under light loads, causing alternating thermal stresses throughout the chip), I’d have to believe this at least falls under what the Mythbusters would call FEASIBLE. The tech is cool, but this is absolutely a potential drawback.

      For example, Bumpgate: because the chips’ underfill could not handle repeated cycling between off, low, and high loads, failures occurred. If that wasn’t 100% rectified, Kepler would exacerbate the problem exponentially.

      • Airmantharp
      • 7 years ago

      I haven’t been able to use my GTX670 much (yet), but it definitely doesn’t have the issue mentioned in this article.

      Hell, I’ve been testing my overclocking with BF3 multiplayer, which seems to be a better indicator of stability than EVGA’s stress tester, for whatever reason. Overclocked, the card doesn’t have this issue: the clocks stay above stock, and the fan keeps the card cool on auto.

        • LoneWolf15
        • 7 years ago

        Key words = “long periods”. Nobody has these issues yet. Maybe they will, maybe they won’t, but we’re not there to find out yet.

          • NeelyCam
          • 7 years ago

          Maybe TSMC noticed something in burn-in tests – they’ve had Kepler silicon longer than anybody.

            • phileasfogg
            • 7 years ago

            I find that *extremely* hard to believe. The only companies outside of Nvidia who would have the software tools and equipment required to carry out a full-fledged burn-in are its inner circle of card manufacturers, and TSMC certainly isn’t in that category. TSMC is a wafer fab first and foremost, and it prioritizes process development and manufacturing to the exclusion of nearly everything else. The only “burn-in” they would be performing is to ensure the reliability of the 28nm transistors they build on their 300mm wafers, and given that this process is also used by several of their other customers, *all* of those customers would be susceptible to said “performance deterioration”. TSMC is the sole authority when it comes to the integrity of their device models and design rules, but you can’t expect them to conduct ‘burn-in tests’ on a customer’s design; that is completely outside their bailiwick.

            • NeelyCam
            • 7 years ago

            The reliability of the transistor depends on how you use it. Maybe TSMC discovered something in their own transistor burn-in that could point to reliability issues in some designs (like NVidia’s) but not in some others (low-power cell phone chip).

            On the other hand, maybe NVidia “bent” the design rules. TSMC found out, and leaked this to cover their own butts from inevitable blame from NVidia when parts start failing after a year.

      • Rza79
      • 7 years ago

      That failing-solder saga cost me a lot of money, because I used the Gigabyte 73PVM-S2H motherboard a lot 3-4 years ago. Every single one of them failed, and the ones that failed within warranty Gigabyte would repair; the repair would fail again within 3-4 weeks.
      Clients were very angry because their computers didn’t even last 4 years without a serious malfunction. Luckily I could convince them that their problem was isolated (which I obviously knew it wasn’t). To this day I still have to deal with this issue. The Asus P5N73 motherboards that I used also failed, but always out of warranty. I also get a lot of Acer computers and laptops with the same issue.
      In that same period, 2-year-old 8800GTS and 8800GTX cards started to fail. Even my personal 8800GTS started to crash; oddly, manually increasing the fan speed solved it.
      So yeah, I wouldn’t be too surprised if this story turns out to be true.

    • jdaven
    • 7 years ago

    This is why we need more Kepler based Tesla supercomputers to accelerate web rumors like never before.

    • JustAnEngineer
    • 7 years ago

    There aren’t many credible reports of the GeForce GTX680 and GTX670 actually being available in stock in significant volumes. Maybe someone is just looking for an explanation for why NVidia isn’t producing enough GPUs.

      • chuckula
      • 7 years ago

      Kepler availability on Newegg as of today (05/21/2012):
      GTX-680: the aptly named Sir Not-Available-on-this-Website.
      GTX-670: three models out of eight are actually in stock… not bad by Nvidia standards.

      To put it in perspective, Newegg had Ivy Bridge chips in plentiful supply at launch, sold out of the 3770K, and has already received a second shipment so they are back in stock.

        • JustAnEngineer
        • 7 years ago

        When I checked over the weekend there were none at all.

          • Deanjo
          • 7 years ago

          Have my choice of 5 different ones available locally (and has been that way since their release).

        • cynan
        • 7 years ago

        +1 for the Holy Grail reference.

        Not sure that the availability of cards has anything to do with this rumor. It’s probably just that Nvidia was getting antsy about not having a 28nm part with which to compete with AMD, and perhaps jumped the gun a bit.

      • entropy13
      • 7 years ago

      [quote<]There aren't many credible reports of the GeForce GTX680 and GTX670 actually being available in stock in significant volumes in the United States.[/quote<] ftfy

      Looks like Nvidia purposely avoided the weak (though still big) US market and focused on resurgent markets like over here in Asia. Unless you’re saying that all the GTX 680s I saw in the Philippines, Singapore, and Malaysia are figments of my imagination, and that the pictures of several people buying 680s/670s over here in the Philippines are fake…

        • JustAnEngineer
        • 7 years ago

        I’m saying that I’ve checked stock of GeForce GTX680 cards at Newegg over 200 times since NVidia’s so-called “product launch” [b<]two months ago[/b<] and Newegg has [b<]NEVER[/b<] had a GeForce GTX680 available that I could purchase. I've also checked Amazon over a dozen times and they haven't had any GeForce GTX680 cards in stock, either. Furthermore, I'm saying that for at least two days this weekend, there were no GeForce GTX670 cards available at Newegg.

          • Deanjo
          • 7 years ago

          Newegg isn’t the end-all, be-all, seriously. Newegg and the larger e-tailers will not accept small partial order fills, while many smaller independent retailers will get their smaller orders filled precisely because they are not ordering 10 pallets of XYZ supercard. Sure, you may pay 20-50 dollars more than Newegg’s listed price before shipping, but you also save by not paying for shipping when you buy from the local small guy.

            • clone
            • 7 years ago

            Given that Newegg is an Nvidia launch partner, which gains them preference, I disagree completely.

            Newegg is not obligated to stock “10 skids” of anything; they get what they get and take money in exchange for it. In this case, Nvidia is having trouble filling the channel, and Newegg is selling out what little inventory it receives, likely long before it even arrives.

          • entropy13
          • 7 years ago

          [quote<]I'm saying that I've checked stock of GeForce GTX680 cards at Newegg over 200 times since NVidia's so-called "product launch" two months ago and [b<]Newegg[/b<] has NEVER had a GeForce GTX680 available [b<]that I could purchase[/b<]. I've also checked [b<]Amazon[/b<] over a dozen times and they haven't had any GeForce GTX680 cards in stock, either. Furthermore, I'm saying that for at least two days this weekend, there were no GeForce GTX670 cards available at [b<]Newegg[/b<].[/quote<] I bolded the text that should have been bolded.

            • travbrad
            • 7 years ago

            The problem is that even the places that do have 680s in stock are priced way above the MSRP. The GTX680 is a fairly good bang-for-buck at $500, but at $580+ not so much (especially compared to the 670).

          • NeelyCam
          • 7 years ago

          [url<]http://www.amazon.com/PNY-GeForce-Graphics-Cards-VCGGTX680XPB/dp/B007KC961K/[/url<] "Only" six in stock... EDIT: wow, they are going fast. Only two left..

          • Silus
          • 7 years ago

          Yeah, it was all a lie! The GTX 680 and GTX 670 were never actually launched! It’s just a ploy by NVIDIA to deceive you and others like you, and they even have Newegg, Amazon, and the other e-tailers in on the whole gag! It’s hilarious! They were also able to pull in hundreds/thousands of people who say they own 1 or 2 GTX 680s, just for the hell of it! Tech Report didn’t actually review the card either (nor did any other review site). They all just had an old card with a new sticker on it and faked the numbers. NVIDIA even provided fake architecture slides and all that to make it look more real!

          Not even the devil can pull that one off! A product that many have, but in reality doesn’t exist! NVIDIA…evil business just to have fun!

            • l33t-g4m3r
            • 7 years ago

            Don’t forget they’re also held together with wood screws.
