Nvidia admits, explains GeForce GTX 970 memory allocation issue

We've been tracking an issue with GeForce GTX 970 memory use for a little while now, most notably via this thread in our forums. Some GeForce GTX 970 owners have noticed unusual behavior from these cards compared to the GTX 980. Specifically, the GTX 970 sometimes appears to allocate less than its full 4GB of memory in cases where the GTX 980 does. Also, when asked to go beyond 3.5GB using a directed test, GTX 970 memory bandwidth appears to drop. We even discussed the issue on the Alt+Tab Show last night.

Nvidia has been looking into the issue and has now issued the following statement:

The GeForce GTX 970 is equipped with 4GB of dedicated graphics memory.  However the 970 has a different configuration of SMs than the 980, and fewer crossbar resources to the memory system. To optimally manage memory traffic in this configuration, we segment graphics memory into a 3.5GB section and a 0.5GB section.  The GPU has higher priority access to the 3.5GB section.  When a game needs less than 3.5GB of video memory per draw command then it will only access the first partition, and 3rd party applications that measure memory usage will report 3.5GB of memory in use on GTX 970, but may report more for GTX 980 if there is more memory used by other commands.  When a game requires more than 3.5GB of memory then we use both segments.

We understand there have been some questions about how the GTX 970 will perform when it accesses the 0.5GB memory segment.  The best way to test that is to look at game performance.  Compare a GTX 980 to a 970 on a game that uses less than 3.5GB.  Then turn up the settings so the game needs more than 3.5GB and compare 980 and 970 performance again.

Here’s an example of some performance data:

 

                                                             GTX 980           GTX 970

Shadows of Mordor
  <3.5GB setting = 2688×1512 Very High                       72fps             60fps
  >3.5GB setting = 3456×1944                                 55fps (-24%)      45fps (-25%)

Battlefield 4
  <3.5GB setting = 3840×2160 2xMSAA                          36fps             30fps
  >3.5GB setting = 3840×2160 135% res                        19fps (-47%)      15fps (-50%)

Call of Duty: Advanced Warfare
  <3.5GB setting = 3840×2160 FSMAA T2x, Supersampling off    82fps             71fps
  >3.5GB setting = 3840×2160 FSMAA T2x, Supersampling on     48fps (-41%)      40fps (-44%)

On Shadows of Mordor, the drop is about 24% on GTX 980 and 25% on GTX 970, a 1% difference.  On Battlefield 4, the drop is 47% on GTX 980 and 50% on GTX 970, a 3% difference.  On CoD: AW, the drop is 41% on GTX 980 and 44% on GTX 970, a 3% difference.  As you can see, there is very little change in the performance of the GTX 970 relative to GTX 980 on these games when it is using the 0.5GB segment.

Interesting. We explored a similar datapath issue related to the GTX 970's disabled SMs in this blog post. In that case, we looked at why the GTX 970 can't make full use of its 64 ROP partitions at once when drawing pixels. Sounds to me like this issue is pretty closely related.

Beyond satisfying our curiosity, though, I'm not sure what else to make of this information. Like the ROP issue, this limitation is already baked into the GTX 970's measured performance. Perhaps folks will find some instances where the GTX 970's memory allocation limits affect performance more dramatically than in Nvidia's examples above. If so, maybe we should worry about this limitation. If not, well, then it's all kind of academic.
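
For the curious, the sort of directed test folks have been running can be approximated in a few dozen lines of CUDA. This is only a rough sketch of the idea, not Nai's actual benchmark: the 128MB chunk size, the use of device-to-device copies as a bandwidth proxy, and the assumption that allocation order roughly tracks where the driver places each chunk are all our own simplifications, and a card that's driving a display will keep some VRAM for itself.

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

int main() {
    const size_t chunk = 128ull << 20;  // 128 MiB per allocation (arbitrary choice)

    // Timing events are created up front, before we eat all the free VRAM.
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Grab device memory in 128 MiB pieces until allocation fails.
    std::vector<char*> chunks;
    for (;;) {
        char* p = nullptr;
        if (cudaMalloc((void**)&p, chunk) != cudaSuccess) break;
        chunks.push_back(p);
    }
    if (chunks.size() < 2) return 1;  // nothing useful to measure

    // Reuse the last chunk as the copy source for every measurement.
    char* src = chunks.back();
    chunks.pop_back();

    // Time a burst of device-to-device copies into each chunk in turn.
    for (size_t i = 0; i < chunks.size(); ++i) {
        cudaEventRecord(start);
        for (int rep = 0; rep < 10; ++rep)
            cudaMemcpy(chunks[i], src, chunk, cudaMemcpyDeviceToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        // Each copy reads one chunk and writes one chunk, so count the bytes twice.
        double gbps = (10.0 * 2.0 * chunk / 1e9) / (ms / 1e3);
        printf("chunk %3zu (~%4zu MiB into the pool): %6.1f GB/s\n",
               i, i * (chunk >> 20), gbps);
    }

    for (char* p : chunks) cudaFree(p);
    cudaFree(src);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}
```

On a card with a fast 3.5GB segment and a slower 0.5GB segment, you'd expect the last few chunks this prints to report noticeably lower numbers than the rest, assuming the driver hands out the slow segment last.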

Update: Nvidia Senior VP of Hardware Engineering Jonah Alben shared more information with us on how the GTX 970's unusual memory config works and how it affects performance. The chip is working as intended, he said, but Nvidia "screwed up" communicating the GPU's specifications to reviewers.

Comments closed
    • GhostBOT
    • 5 years ago

    This isn't the first time Nvidia has done something shady…

    Check this out: http://gamenab.net/2015/01/26/truth-about-the-g-sync-marketing-module-nvidia-using-vesa-adaptive-sync-technology-freesync/

      • VincentHanna
      • 5 years ago

      Why do people keep posting this completely irrelevant, poorly written, poorly coded, highly biased, grammatically indecipherable article?

    • NeelyCam
    • 5 years ago

    Wow. Almost 250 comments as of now.

    Every time something unusual is reported about a graphics card, all the fringe crazies come out of the woodwork. It’s a little scary.

      • sweatshopking
      • 5 years ago

      OMG YOU JUST DON'T UNDERSTAND. THIS IS LIFE

    • TaBoVilla
    • 5 years ago

    Awesome card and performance but LOL, this whole issue is pretty simple:
    • Engineering: card has physical 4gb, behaves more like 3.5gb.
    • Marketing: 3.5gb looks weird, can we get away with saying it's 4gb, fully enabled?
    • Engineering: just don't get caught!

    Now they are saying it was technical marketing (http://www.anandtech.com/show/8935/geforce-gtx-970-correcting-the-specs-exploring-memory-allocation) who made wrong assumptions regarding card specifications and no one said anything? They knew all along what they had on die and what they had on paper.

    • sp33dk1ngp1n
    • 5 years ago

    Hey ripped off nvidia pplz, you no longer need to be a slave to the crApple of the pc graphics industry, come join the dark side! — cookies and your cheap R9 290 awaits 😉

    • f0d
    • 5 years ago

    this issue is getting blown way out of proportion too early

    there just isn't enough data out there (yes i have read the forum thread on here and other websites – it's not enough information) to arrive at a definite conclusion yet

    wait until a reputable tech site like TR or someone else does an analysis before raising the pitchforks and firing up the torches

    • LoneWolf15
    • 5 years ago

    I guess the one question I forgot to ask was “When will TR or another site test to see if nVidia’s shown tests are really accurate and representative of what happens in a >3-3.5GB VRAM usage scenario”?

    Also (so I guess it’s two questions), has nVidia made their testing methods that produced the above data public so they can be replicated or verified for accuracy?

    I trust third-party testing from a known reliable source more than I do the vendor – no disrespect, nVidia.

    • Wildchild
    • 5 years ago

    The way it’s meant to be played.

    • Meadows
    • 5 years ago

    Based on the performance examples, I don’t see the issue. High memory usage slows down the GTX 970 equally as much as the GTX 980.

    So what’s up, exactly?

      • HisDivineOrder
      • 5 years ago

      Not quite “equally.” They’re pretty closely matched, but clearly the 970 slows down a slight bit more than the 980. That said, you have to realize that these are nVidia’s benchmarks they’re providing. nVidia (and AMD and Intel and most any other company) have a long history of providing benchmarks that say whatever they want their benchmarks to say.

      The problem is the threads that suggest the drop is far more egregious and horrific to performance than the ones provided by nVidia. I mean, it’s possible that nVidia tailor-built drivers to put the least-used parts of a single game and benchmark on the slow segment of memory, then sent out word of mouth to say, “See? No prob.”

      But in general usage with most games without a patch or special driver, you might find this kind of tailor-made performance wouldn’t happen. Imagine if they had to special-patch every game to work properly within a specific set of memory constraints on the 970 for the rest of time to get the performance you expect, right? If so, then what happens when the 970 is no longer the second highest tier card they’re selling? What if it’s late 2015, and they’ve got the brand new GM200-based Geforce X80 and the GM204-based X70 and X60s, none of which have this particular memory peculiarity? Suddenly the 970 is the only card in the whole product stack that requires this “optimization.”

      How long do you think nVidia will focus on making sure every game, every application, anything and everything that uses beyond 3.2/3.5GB VRAM is going to use that memory properly for each and every part of said game/application before they get bored and wander off?

      People who want their GPU’s to last want to feel like they bought a GPU that isn’t going to have a sudden and noticeable drop in performance from when they were new not because they’re old, but simply because they require (beyond the norm) extraordinary driver tuning they were not aware was required when they originally purchased said card.

      Because I imagine there are people who would have just bought the pure 4GB card if they’d known the deal. Being open and forthright about what’s what is important.

        • Meadows
        • 5 years ago

        Those 1-3 percentage points are nothing. Even if the difference were twice as big it would still be too minor to care about. Not to mention I (we?) can’t tell how much of the delta is explained by differences in the number of processing units on board.

          • auxy
          • 5 years ago

          You’re taking NVIDIA-supplied benchmarks at face value?

            • Meadows
            • 5 years ago

            No, which is why I said “even if the difference were twice as big”.

            • auxy
            • 5 years ago

            More to the point, these are simple average FPS benchmarks. The problem causes stutter; that’s not going to show up in these kinds of tests. As a regular reader of TR, I would hope you’d understand…? ( `ー´)

            • Meadows
            • 5 years ago

            Away with your strange face signs. Of course I understand, but since that’s the case, the issue may require a classic TR investigation to become clear.

      • DancinJack
      • 5 years ago

      Ehhh, you’re right. Fanbois just freakin’

    • sschaem
    • 5 years ago

    I see many mentioning 3.5GB, but the issue happens just past 3.2GB
    (as documented by the test that is linked in the thread)

    And if you take the SM ratio between the two cards, 64SM/52SM,
    and apply it to the 4GB addressable, this gives you 3.25GB,
    which seems to match the benchmark results perfectly.

    So we are really looking at .75GB that is affected by the 7x to 15x slowdown on the 970.

    • ultima_trev
    • 5 years ago

    So, what I take from this is:

    + GTX 970 can only use 52 ROPs at any given time before bottleneck ensues.
    + GTX 970 can only use 3.5 GB of memory before bottleneck ensues.
    + All of these bottlenecks would occur where the FPS is well below 60 anyhow.
    + Despite the bottlenecks, GTX 970 is still as fast(er) and more power efficient compared to R9 290X at the same price point.

    So, what’s the problem?

      • RdVi
      • 5 years ago

      Not sure where that last point came from. More power efficient? Yes. Faster and same price point? No.

      I went with a 290X personally as it was a little cheaper and faster and came with games I wanted more than the others. Yes, it emits a lot of heat when gaming that I can actually feel at my feet which is a small downside as it’s summer here, but in my case it is not loud and the performance per dollar is what I cared about most.

        • Pwnstar
        • 5 years ago

        What about during the winter?

      • sschaem
      • 5 years ago

      – The issue happens past 3.2GB
      – The GTX 970 is on average 20% slower than a 290x (and costs 10% more)

      20% faster:
      http://www.guru3d.com/articles_pages/call_of_duty_advanced_warfare_vga_graphics_performance_benchmark_review,7.html

      25% faster:
      http://www.pcper.com/reviews/Graphics-Cards/Middle-earth-Shadow-Mordor-Performance-Testing/4K-Testing-and-Closing-Thought

      290x compared to GTX 980, and the GTX loses. The GTX 970 would be about 20% slower:
      http://www.pcper.com/reviews/Graphics-Cards/Civilization-Beyond-Earth-Performance-Maxwell-vs-Hawaii-DX11-vs-Mantle/2560x1

        • ultima_trev
        • 5 years ago

        As I said, it only occurs where the frame rate is already below 60 therefore it’s of no consequence.

        + Almost no one games at 4k and not very many game at 1440P. Those higher resolutions provide minimal increase in visual fidelity versus 1080P, certainly not enough to justify the massive frame rate hit.
        + No one uses Mantle and that API will be irrelevant when Direct3D 12 hits.

        1920*1080/1200 is the most widely used resolution even by PC gamers and that is where it will remain until games actually have detailed enough textures (which they currently don’t) to justify the higher res displays. 1080P is also where the GTX 9xx series excels in benchmarks.

        AMD needs to focus less on the super high res displays of the future and focus more on the pedestrian displays of now. A fully enabled Tonga at 1.1 GHz with 7GT/s GDDR5 for $180 is what they need. Until then, nVidia has them beat at all price points above $150.

          • Pwnstar
          • 5 years ago

          The benefits of a 4k resolution are not just in texture quality. It also allows you to have a larger view into the world you are playing in, with more room for UI stuff.

          You seem to think people only care about image quality.

        • BlackDove
        • 5 years ago

        Are those benchmarks done with retail cards or the special ones AMD sent to reviewers?

        I still wouldnt buy anything AMD lol.

        • MadManOriginal
        • 5 years ago

        On average? I don’t believe you know what ‘average’ means; you picked 3 specific test scenarios. Here’s an ‘on average’ for you: http://www.techpowerup.com/reviews/MSI/GTX_970_Gaming/27.html

          • beck2448
          • 5 years ago

          Here’s a recent test from HardOCP. Oops:
          “ASUS ROG GTX 980 MATRIX-P to the ASUS ROG R9 290X MATRIX-P, it almost seems unfair for the R9 290X. The GTX 980 GPU is far superior to the R9 290X as it stands now. It performs faster, remains cooler, and uses less power than the R9 290X.”

            • sschaem
            • 5 years ago

            Funny that the review starts with Far Cry 4, where even one of the most costly GTX 980s loses to the R9 290X.
            And then we also have Tomb Raider, where the GTX 980 scores 44fps and the R9 290X scores 57fps.

            And that’s a $660 card… seriously?

            • beck2448
            • 5 years ago

            Sorry but nobody thinks the 290x is as good as the 980.

            • auxy
            • 5 years ago

            I do. Lots of others do too. Give me a 290X any day.

            • Pwnstar
            • 5 years ago

            Depends on the game you play.

            • sweatshopking
            • 5 years ago

            IT’S NOT FOR FPS/WATT, BUT IN RAW FRAMES IT CAN BE, AND CAN EVEN BE FASTER IN SOME GAMES. DEPENDS WHAT YOU PLAY.

        • MadManOriginal
        • 5 years ago

        Also, the links you provided are just stupid red herrings, although it’s easy for someone who is lazy to just look at pretty pictures (graphs) and not read. But let’s take some quotes from the conclusions:

        1st link: “As stated in the intro, there are many things odd and off. As such we recommend you to look at the performance benchmarks you’ve just read with a grain of salt as I have been at the verge and threshold of asking myself the question whether or not to post these results.”

        2nd link: “In reality though, 4K gaming on Shadow of Mordor would be pretty tough with any of these single card solutions at Very High settings and until we have the multi-GPU fix from the developer you are going to running in sub-45 FPS experiences.”

        3rd link: “First, let’s look at Beyond Earth and its DX11 implementations. It seems pretty clear that NVIDIA has the advantage here and the GTX 980 is faster than the R9 290X in both single GPU and multi-GPU testing.” And: “But AMD has the advantage with Firaxis decision to implement a Mantle code path for Beyond Earth and the result is a very solid product; maybe more complete than any other Mantle game to date.”

        Yup, totally ‘average’ typical results!

          • sweatshopking
          • 5 years ago

          to be fair, nvidia has “no interest” in mantle, but amd cards support it. I don’t have a problem including it in benchmarks, as games often include other things like physx (only they negatively impact benchmarks)

          290’s aren’t bad. I’m pretty happy with mine. They’re certainly WAY cheaper than the 970, and perform similarly or better.

            • MadManOriginal
            • 5 years ago

            I don’t mind either, it’s just not an ‘average’ result, and from what I know Civ:BE is one of the most optimized-for-Mantle games there is.

    • south side sammy
    • 5 years ago

    since it’s probably the most known……… why don’t I trust nvidia on this……. remember the messed up ram on the 8800 series?…. and they just kept pumping the cards out………..

    I’m sure they knew about this little “problem” for a lot longer than the cards were available to us.

    • ThorAxe
    • 5 years ago

    So all those who purchased a 970 based their decisions on memory usage and not on in-game performance? Strange.

    I wonder if AMD is going to get sued because the R9 290X advertises 4GB but can’t keep up with the GTX 980 which also uses 4GB?

      • MadManOriginal
      • 5 years ago

      GET OUT WITH YOUR LOGIC. FANBOIS WON’T HAVE ANY OF IT.

        • auxy
        • 5 years ago

        There is no logic here. Don’t be ridiculous. (ノー`;) Nevermind that the 290X is as fast as or faster than the 980 (and obviously faster than the 970), but what does them both being 4GB cards have to do with anything?

        Just reading this stupid post makes me nauseous. If you purchased a car, and you were told it is capable of holding 40 liters of fuel, but later found that when you loaded more than 33 liters the acceleration fell through the floor, would you be pleased? Of course not. Even if 33 liters of fuel is enough to get you anywhere you need to be, it’s still not what you were told.

        This is literally the same situation. This is logic.

          • Voldenuit
          • 5 years ago

          It sounds more like ppl who bought a Civic Si complaining that it’s not as fast as a Ferrari. Well, maybe not Ferrari, a 335i might be a better example (GM200 is the Ferrari).

          Entitled much?

          The data has already been out since release that the 970s fall behind in performance faster than the 980 at 4K resolutions. Now we know why. But the raw numbers have been out for a long time before then, so there’s really no rationale for crucifying nvidia over a known performance metric.

          One thing that does strike me as interesting is when Nai’s benchmark is run headless, there doesn’t seem to be much drop in memory bandwidth past 3.25 GB (EDIT: At least, I *think* that’s what the numbers are saying. It might help if the screenshots in the forum thread were better labelled). If nvidia rewrote their drivers to remove the 3.5 GB allocation issue, it sounds like it would be a simple fix to boost performance of the 970 in memory-limited scenarios. It still won’t be as fast as a 980, but then you knew that when you bought it, right?

            • auxy
            • 5 years ago

            Voldenuit: “It sounds more like ppl who bought a Civic Si complaining that it’s not as fast as a Ferrari. Well, maybe not Ferrari, a 335i might be a better example (GM200 is the Ferrari).”

            No. How does that even make sense? How does it relate? Map the analogy for me.

            “The data has already been out since release that the 970s fall behind in performance faster than the 980 at 4K resolutions. Now we know why.”

            No. We already knew why -- reduced pixel and texture fill. You can render a 4K scene (very easily) without using >3GB of VRAM.

            “One thing that does strike me as interesting is when Nai’s benchmark is run headless, there doesn’t seem to be much drop in memory bandwidth past 3.25 GB.”

            No. You are wrong -- testing headless is the only correct way to run the benchmark, and it shows the sharp drop-off in memory performance.

            I don’t believe you understand the issue at hand here. Please stop commenting until you do.

            • Voldenuit
            • 5 years ago

            “No. How does that even make sense? How does it relate? Map the analogy for me.”

            As Scott already said, the performance delta of the 970’s specific memory configurations are already baked into benchmarks, so is pretty much academic.

            “No. You are wrong -- testing headless is the only correct way to run the benchmark, and it shows the sharp drop-off in memory performance.”

            You’re right, I admit to misreading the screenshots in the forum threads, but in my defence, many of them are poorly labeled wrt to what hardware they were run on and how they were run (several cases of poorly run benchmarks are also up there).

            “I don’t believe you understand the issue at hand here. Please stop commenting until you do.”

            Wow. Just wow.

            • auxy
            • 5 years ago

            Voldenuit: “As Scott already said, the performance delta of the 970’s specific memory configurations are already baked into benchmarks, so is pretty much academic.”

            Do you actually understand what this means? Or are you just parroting Damage? Whatever the case, his statement is wrong anyway -- targeted testing to bring out the 970’s memory flaws shows horrific stutter past ~3.3GB of VRAM usage. Check the overclock.net thread for specific details.

            • Voldenuit
            • 5 years ago

            Exactly. Targeted testing. You’re looking at corner cases. 4k is in a transitional state, and it is unrealistic to expect early hardware to handle it without any drawbacks.

            Personally, I find it ironic that you insult forum members for being “plebeian” if their hardware specs don’t meet your exacting standards, yet are complaining about limitations on a cut-down card.

            The GTX 970 is not the best single GPU card for 4k gaming. We’ve all known that for a while, from the available testing (even if said tests may or may not have reached the corner cases). This new information just reinforces that. That doesn’t make the card any less appropriate for ppl on 2.5k (where it is faster than a 290), even if it unfortunately means the card is probably less future-proof than it could have been (although let’s face it, all current high end cards will be slow in 18 months, that’s just how the industry goes).

            The practical upshot of this is that game developers will hopefully pay more attention to asset allocation, although I don’t see it happening. Meanwhile, if a game’s settings are making your game too slow, then you just have to dial them back a bit or buy better hardware. That’s also how it’s always been.

            • auxy
            • 5 years ago

            We’re looking at corner cases because those are the best way to verify the problem. That’s how you test something -- with targeted testing. It wouldn’t be much help in determining if a problem exists if we didn’t specifically try to make sure it exists, would it?

            I don’t know why you’re talking about 4K resolution; that has nothing to do with this. The issue is with VRAM; game ASSETS use vastly more memory than the framebuffers. The GTX 970’s issues will be only exacerbated by future games with larger assets; it doesn’t take 4K or even 2560x to bring out this issue.

            Voldenuit: “Meanwhile, if a game’s settings are making your game too slow, then you just have to dial them back a bit or buy better hardware. That’s also how it’s always been.”

            This is the first reasonable thing you’ve said in four posts, and you’re not wrong here. However, the issue is that people with 970s should never HAVE to dial back their texture settings when someone with a 980 or 290 does not have to, and that’s not the case we’re seeing. That’s the bottom line here -- the card is marketed as a 4GB card and it is not, practically speaking, able to live up to that.

            edit:

            Voldenuit: “Personally, I find it ironic that you insult forum members for being ‘plebeian’ if their hardware specs don’t meet your exacting standards, yet are complaining about limitations on a cut-down card.”

            I was being facetious in that other post. I edited it to add a kaomoji so hopefully that’s more clear. Mostly my purpose commenting here has been to correct people’s misconceptions, like your own about the headless benchmarking on Nai’s benchmark. Admittedly that’s most of the reason I post anywhere, really. I don’t personally care about the GTX 970; you’d never convince me that with current pricing it’s a good value, 3.3/4GB VRAM issue or not. I just don’t want people repeating and spreading misinformation.

          • MadManOriginal
          • 5 years ago

          It is not literally the situation whatsoever, and all the stupid analogies people are coming up with are wrong, including yours. The GTX 970 has 4GB of VRAM and can use 4GB of VRAM when needed, and will perform at the levels which independent game benchmarks have established in those cases.

            • sschaem
            • 5 years ago

            The fact is that nvidia played all of us with non-disclosure.

            No, this is not a full 4GB card. It’s a 3.2GB card with an extra .8GB of ‘cache’.

            The .8GB is not sitting on the ‘same’ 256-bit bus and is not addressed in the same manner,
            making that extra .8GB up to 15x slower, nearly unusable.

            That’s why nvidia elected in their driver to limit allocation to ~3.2GB.

            I’m almost certain, when this is 100% verified, that this will land in a class action lawsuit
            and nvidia will lose.

            • MadManOriginal
            • 5 years ago

            Well, you may be right, this being America and all, that some jackass will sue them. That doesn’t make such a lawsuit any less stupid, because video cards are sold based on performance, not underlying architectural details themselves.

            Now, I would agree that if NVidia had dictated the terms of independent benchmarks, or otherwise tried to hide the issue, then they were being deceptive. However, unlike another company that did dictate benchmarking for a CPU release, I don’t believe NVidia did that. Clearly, they are willing to discuss it, so they are not trying to hide anything. Any deceptiveness is just in yours and every other NV hater’s head.

            • travbrad
            • 5 years ago

            I agree video cards are primarily sold based on performance, but it’s still misleading at the very least to call something “4GB” when it’s actually more like a 3.2GB card. Their partners even have that “4GB” printed in big bold letters on most of the boxes.

            Whether that extra 800MB is going to matter is up for debate and depends on the monitor/resolution being used, but it’s something people should know when buying a 970.

            • September
            • 5 years ago

            What would you have them do? Manufacture a card with only 3.2GB of RAM? How would that work (chip count/die count/bus width)? Obviously they had to put 4GB of VRAM on the board, so why not use it all? I’m sure they spent a significant amount of time coding the drivers to enable the use of this extra memory, which is surely better than letting it go to waste!

            Personally I think this is a non-issue, if you need gaming performance at 4K you are going to either go 980 with GSYNC or go 980SLI or higher when the big chip becomes available and has 6/12GB.

            • swaaye
            • 5 years ago

            I suspect the bitching about 970 will vaporize once the big Maxwell card comes along and makes a new target for the hive mind’s AAA-eyecandy-gaming@4K desires. It will undoubtedly create a new whirlwind of angry energy with its likely $1300 price too. 🙂

            AMD is also expected to poop out some new stuff soon-ish too.

    • south side sammy
    • 5 years ago

    Now that we all got it out of our systems…….. lets see how it handles gpu physx. That’s one thing that’s always omitted from benchmarks. I bet it’s unplayable at the tested resolutions and settings.

    • rpjkw11
    • 5 years ago

    I don’t wish problems on nVidia or, especially, its customers. I’ve felt like I wasted money buying a 980 instead of a 970. Not feeling that way now, but I sincerely hope there’s an easy fix (assuming there really is a problem) like with Samsung’s 840 EVO SSDs. Only time will tell.

    • LoneWolf15
    • 5 years ago

    I remember the nForce 3/4 hardware firewall, broken in silicon, and brushed aside.

    I remember the Geforce 6800 issue where PureVideo was broken in hardware…brushed aside.

    I remember the nForce 5xx SATA issues. At least those were eventually fixed; by then, I’d gone away from nForce, but they were a real mess.

    I think the only time nVidia has actually paid for broken stuff was the Geforce/Quadro mobile business, which was too big to go away.

    Perhaps this time it was by design; however, one has to ask how many GeForce GTX 970 buyers would have purchased this particular card if they had known about this information beforehand. Once again, I’m disappointed.

      • Krogoth
      • 5 years ago

      “I think the only time nVidia has actually paid for broken stuff was the Geforce/Quadro mobile business, which was too big to go away.”

      You are referring to bumpgate, which was actually a manufacturing problem with board vendors and new RoHS standards. The new lead-free soldering alloys had thermal expansion/contraction issues with BGA tracings. It affected first-generation components made with it. Laptops saw most of it because of their thermal cycle (rapid hot/cold sessions), but it also affected desktop parts (8800GTs and such). There were enthusiasts who got their stuff working by “baking” their cards in ovens.

        • LoneWolf15
        • 5 years ago

        Correct.

      • Deanjo
      • 5 years ago

      Do you remember as well for AMD/ATI….

      1) Broken AHCI and RAID in their early chipsets…. brushed aside
      2) Deliberately claiming GPU accelerated video encoding only for it to be proven later it was a pure software solution with just really crappy but fast encoding parameters
      3) Radeons going thermal nuclear when furmark ran
      4) Low speed instability on HT3 capable processors
      5) Sata soft resets causing devices to drop out
      6) Broken HPET
      7) USB freeze when multiple devices going through a hub
      8) Of course the infamous TLB bug

      I could go on but you get the gist (intel isn’t much better either with doozies like having broken USB 3 on their own boards that would cause a no boot situation if a USB device was plugged into the USB port and many others).

        • Krogoth
        • 5 years ago

        Truth be told, Furmark went nuclear on any GPU that didn’t have proper cooling for an ultra-heavy load. Nvidia GPUs were also affected. It is mostly a problem with card vendors having inadequate HSF solutions on their cards for the worst case.

        AMD/ATI had issues with GDDR5 that weren’t addressed until HD 6xxx family.

        • LoneWolf15
        • 5 years ago

        I remember them too. This article isn’t about them. I don’t root for a side; I go with what works for me.

        And I remember dropping ATI for quite some time when my Rage Fury Pro had serious driver issues they couldn’t fix; my first ATI card was a VGA Wonder 512k and my first nVidia card was a Riva 128, so I’ve had time to have plenty of both.

          • HisDivineOrder
          • 5 years ago

          I think the point is that choosing a side based on which side hasn’t screwed the pooch with customers in a way they didn’t outright fix or replace is like choosing a cola in the 80’s without high fructose corn syrup.

          Good luck.

        • beck2448
        • 5 years ago

        Don’t forget Crossfire, which actually didn’t work for over two years while it was sold as their high-end solution.

          • BlackDove
          • 5 years ago

          Still doesnt work in DX9 either.

      • maxxcool
      • 5 years ago

      Amd destroying raid arrays is what I remember…

    • Krogoth
    • 5 years ago

    Ouch, it looks like the problem was caught by Nvidia after taping out the chip. They tried to quietly brush it under the rug so nobody would notice it. Unfortunately, curious gamers/enthusiasts who wanted to see if 4GiB was *needed* found the bug by chance.

    I’m going to bet there’s going to be a class action suit over this, since Nvidia did advertise that the 970 has 4GiB of memory capacity but in practice it only *uses* 3.5GiB.

    This is just like a car manufacturer advertising that your sports car has an 8-cylinder engine block when it only uses seven cylinders under load, and attempting to use the eighth cylinder (bypassing the governor/electrical sensors) causes your car to stall out.

    In the grand scheme of things, it isn’t that big of a deal, since the 970 does work almost all of the time, and by the time the bug becomes a problem the 970/980 are already at their breaking point.

    The strange part is that Nvidia did market their products by effective capacity with the entire Fermi line, which is why they had unusual memory capacities. They could have just marketed the 970 as a 3.5GiB unit and this entire controversy could have been avoided. It would not have affected the 970’s value and place in the market.

      • MathMan
      • 5 years ago

      What makes you think they only discovered this after taping out?

      This is the kind of stuff you simulate through and through.

      And it’s not that it can’t use the 4GB, it clearly does. Just that it sometimes prioritizes not to. I assume that different kinds of memory usage influence whether it’s used or not.

      This whole thing will go down the way the bendable iPhone went down. A fun diversion, but in the end nothing more than a tempest in a teapot.

        • Krogoth
        • 5 years ago

        Software simulations do not reflect the real world. That’s why CPU erratas and such are done.

          • MathMan
          • 5 years ago

          Simulations very much reflect the real world.

          It can happen that a corner case doesn’t get simulated, that’s how CPU bugs fall through the crack. But this is far from being a corner case: it’s a HW configuration that’s for sale. Not as if Nvidia decided long after the fact “how ’bout we make a GPU with only 13 SMs”.

            • Krogoth
            • 5 years ago

            No, simulations rarely reflect the real-world. They typically represent “ideal” situations.

            That’s the whole point of running trials with pre-production units and scaled models.

            This is engineering 101 stuff.

            • MathMan
            • 5 years ago

            I run digital simulations day in day out. They absolutely do exactly what the real world does.

            That is: whatever you simulate, will happen exactly the same on a correctly working digital circuit. Yes, the ‘correctly working’ seems like a huge cop out, but it’s not unless you think that the digital logic in the case is not doing what it’s designed to do due to glitches, noise, and so forth.

            The only thing you could argue is that they had a verification hole on their test plan that didn’t cover this particular case: they didn’t simulate what would happen with memory throughput on a 13 SM part when accessing high memory. That’s something only they can answer.

            The reason you run preproduction trials is to make sure the digital logic isn’t misbehaving due to electrical problems, and to test in the real world that coverage holes don’t result in mayhem. But, again, not testing a production configuration is not some minor coverage miss.

            Engineering 101.

            • mesyn191
            • 5 years ago

            If simulation was perfect no product would ever be released with bugs or errors and yet both happen constantly even in cases where the manufacturer has done extensive testing.

            • MathMan
            • 5 years ago

            No, you didn’t understand what I wrote: if you’d simulate all cases that could ever happen, and your digital hardware executed them correctly at all time (that is: no low level electrical issues due to noise etc), then it’d be bug free.

            The problem is: it’s impossible to foresee all cases that could ever happen. Your only options are: imagining all possible cases and hoping that you didn’t miss anything, random stimulus generation, and mathematical formal proof. The latter can be done for a subset of cases such as arithmetic units and arbitration blocks and the like, but the majority of logic relies on the imagination of the engineers, and on constrained random generation. (Constrained, because you have to supply random but valid stimuli that target a particular piece of functionality.)

            This is why you have tons of errata for CPUs (and certainly also for GPUs, but we are not aware of those.)

            But a very common use case of not using all the available units is not exactly a corner case. So the original argument that this is an unforeseen bug is not plausible.

            • sschaem
            • 5 years ago

            This was a known design limitation.
            nvidia is not oblivious to how their memory controller works.

            Also, it seems nvidia knew about this before launch, because people started to wonder why this card would often max out at 3.5GB allocated. (driver limitation)

            So nvidia had put in their 970 driver a workaround for the 7x/15x access slowdown past 3.2GB.

            So why didn’t nvidia publish an errata if this was known? (as early as the design phase)

            Hmm, to protect customers and review sites? Because “you can’t handle the truth”?

            • l33t-g4m3r
            • 5 years ago

            Why? Nv has experience mitigating these memory issues, eg 660. IMO the 970’s issues aren’t a bug, but NV trying to foist off a newer 660 as a fully enabled x70. After all, it’s “256-bit”, and not 192-bit.

            Knowing this, if I was in the market for a new card, I’d probably buy a 290x instead of anything NV has out right now, especially since I like to use DSR. Marketing shenanigans like this are really low. Not to mention, I don’t know if NV supports adaptive sync, and I’d like to buy one of those monitors once they start making them at reasonable prices.

            • mesyn191
            • 5 years ago

            No I understood.

            You’re giving a different argument here, shifting those goal posts around.

            The problem is that, as good as our understanding of physics is, it’s not perfect, and therefore none of the simulations are perfect, particularly for CPUs. Even if you had the budget and time to do all the testing you ever wanted, there would still be bugs and problems in the final product.

            • Ninjitsu
            • 5 years ago

            What MathMan has written makes perfect sense.

            What you can’t simulate perfectly is physical phenomena. Even so, you can simulate a large amount of it. You can simulate logic perfectly though. The 970’s problem is a design issue, not related to physics as such. They would have tested data access from the entire memory space; I’d assume that’s standard. They clearly have a firmware (and/or hardware) level workaround in place, which means they knew before they shipped it.

            • mesyn191
            • 5 years ago

            Except he did claim they perfectly simulated physical phenomena. Which is nonsense.

            Also, you can’t simulate logic perfectly either, since none of the models for testing and engineering the logic are perfect.

            On top of that all of our math is fundamentally flawed.

            http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems

            This doesn’t mean we don’t or can’t know facts now or in the future or that math and science are useless and terrible, but you have to acknowledge the flaws, and part of that means admitting anything we make right now and for the foreseeable future is going to have issues no matter what.

            nV’s designs are limited by the physics and logic models they have to work with, same as anyone, so you can never ignore the physics. Especially when their devices push the limits of TSMC’s mass manufacturing process.

            • Bensam123
            • 5 years ago

            Simulations are only as good as you program them and you are not omniscient regardless of what your degree tells you.

            • Ninjitsu
            • 5 years ago

            Your degree tells you standard operating and testing procedures, which work well– at least mine does.

            Anyway, when you write HDL code for a circuit, you would normally write a virtual “test bench” to test the whole/part of the circuit. So unless they made the mistake of not simulating I/O from the entire memory range (which is unlikely) they couldn’t have not known about it.

        • sschaem
        • 5 years ago

        Apple didn’t advertise that the phone would not bend under heavy pressure; they never sold the phone as “unbendable”.

        With nvidia, they knew that any memory above the 3.2GB ceiling is SUPER slow,
        so it’s not really usable and needs driver workarounds to prioritize access.
        Actually we seem to see that the driver will even limit allocation to 3.5GB.

        But nvidia did sell people a 3.2GB + .8GB slow-cache card as a full-fledged 4GB card.

        So I don’t see this issue going away for nvidia… The GTX 970 was/is a very popular card,
        and this is going to spread like wildfire across the web.

          • HisDivineOrder
          • 5 years ago

          nVidia said the card has 4GB total memory. It does. It said peak memory bandwidth was a given amount. Again, true. I’m not defending the practice of not providing open and clear data about what is going on specifically with their memory layout, but to argue that they outright lied when you use the Apple example of “Well, they didn’t PROMISE it was unbendable so when it bent more easily in people’s back pocket, that’s not their fault” is really disingenuous.

          I mean, by that logic, nVidia not promising that every bit of that memory was at the same bandwidth shouldn’t have been assumed to mean that every bit of it was at full bandwidth. That is, if Apple not saying the phone was unbendable meant it was okay for it to be more malleable and deformable than most phones conventionally are assumed to be.

          The truth is nVidia AND Apple should have done better. They should have known how their products were going to be used in the modern age and should have anticipated, gotten out in front of said problems, and been clear about what is and is not going to happen when people use them the way they’re bound to be used in late 2014 and onward.

          People stick smartphones in their back pocket. It happens. Do I? No. But there are plenty of idiots who do. Perhaps a warning about it might have saved some face, but it might also have drawn attention, right? So they’d rather not draw attention needlessly to something that is… totally relevant to a certain crowd. It’s the same as nVidia.

          There is literally no difference. Apple did not guarantee an unbendable phone and nVidia did not guarantee full bandwidth for the entirety of its memory subsystem. To be angry at one is to be angry at the other because you’d be angry about the thing that you didn’t think to ask that you weren’t guaranteed but thought you could assume you would be guaranteed.

          You thought wrong. Corporations are corporations. Don’t trust them. Question everything. This is the problem with taking a corporation’s word on anything whether it’s their response with a single game’s benchmarks with obviously tailored drivers (that were just released btw) or promises of how they’re going to open up their custom SDK and that dozens upon dozens of games are going to support it. Or when said corporation suggests that said low level access SDK is going to be “virtually identical” to the one used by Xbox One and/or PS4. Or when a corporation promises that their new TIM material between their CPU and metal shim is going to bring back the glory days of old (or when they said before that the difference between the fluxless solder versus their new alternative is virtually nil)…

          There’s one thing you just have to assume with corporations. They lie and when they aren’t lying to you directly, they’re waving their hands around so fast you’re missing something you should know.

          And you just have to keep looking until you start to see the pattern of what they’re trying hard for you not to see. The real question is not, “Which corporation do you trust?” The question you ask instead is, “Which corporation’s lies impact me the least in the long run?”

          I ask myself if I were in the market for a GPU, knowing the 970 is the way it is, versus the R9 290/290X and I have to say I’d still probably get the 970 mostly for the same reason I’d got the 670 way back instead of the 7970. Because when I went to replace my 8800 Ultra (yeah, old), I was still getting active driver updates from nVidia that worked without a problem or hiccup. The card was perfectly supported until midway through this year.

          AMD stopped supporting their cards officially from the same era years ago. I prefer to support the company that sees fit to support its cards for years and years to come. Sure, performance gains were fewer and farther between, but I didn’t have to worry about drivers just not working and the company clapping their hands together and saying, “Eh, we don’t support that product.”

          AMD’s lack of long range support is the single most important thing to me because when I buy something, I want it supported a LONG time even if I don’t need it to be. Just seems logical to me.

          So AMD’s LACK of comment on the subject of long term support driver-wise with new updates for new games, well that’s the worst hand-waving of all the companies atm.

      • MadManOriginal
      • 5 years ago

      Oh stfu with your conspiracies and lawsuits. The card performs how it performs, and it’s a high-end consumer card so game performance is all that matters.

        • l33t-g4m3r
        • 5 years ago

        Correct, it is marketed as a “high-end” consumer card. But will it really perform as one, under all high-end scenarios, like 4k and HD textures? I think not. This card will be obsolete as soon as games start taking advantage of its full memory capacity.

          • MadManOriginal
          • 5 years ago

          As opposed to which other card currently available that won’t suffer the same fate?

            • auxy
            • 5 years ago

            GTX 980? R9 290X?

            • MadManOriginal
            • 5 years ago

            lol, no. They are only slightly faster than the GTX 970, and certainly not faster enough to make them great for 4k if the 970 would be terrible at it.

            • auxy
            • 5 years ago

            l33t-g4m3r: “Correct, it is marketed as a ‘high-end’ consumer card. But will it really perform as one, under all high-end scenarios, like 4k and HD textures? I think not. This card will be obsolete as soon as games start taking advantage of its full memory capacity.”

            MadManOriginal: “As opposed to which other card currently available that won’t suffer the same fate?”

            auxy: “GTX 980? R9 290X?”

            Now look again at your reply and tell me it is relevant. Go on, do it. You are literally telling me that 980s and 290Xes have the same problem discussed in this post. That is actually what you are saying. (ノー`)

            • MadManOriginal
            • 5 years ago

            No, why are you so stupid? I am simply saying that no card is truly suitable for 4k. Whether other cards are slightly less unsuitable, but still clearly so, means it doesn’t matter. There have always been cases of games that brought all available cards to their knees, it just used to be specific games, now it is many games at extreme resolutions and settings.

            • auxy
            • 5 years ago

            As I said in another comment thread: “I don’t know why you’re talking about 4K resolution; that has nothing to do with this. The issue is with VRAM; game ASSETS use vastly more memory than the framebuffers. The GTX 970’s issues will be only exacerbated by future games with larger assets; it doesn’t take 4K or even 2560x to bring out this issue.”

            So you, like Voldenuit before you, don’t actually understand the issue. Awesome! What a waste of time. (ノー`)

            This whole argument of “it doesn’t matter because when you’re using 4GB of RAM the card will be running slow anyway” is just nonsense; literal nonsense. It’s a non-sequitur, it does not make logical sense; ergo, nonsense. You could use 4GB of VRAM on a 1024x768 scene. You could use 4GB of VRAM offscreen on a compute task (which is what Nai’s Benchmark does). There’s no meat to this argument; it’s made of ignorance and fanboyishness. Please stop repeating it.

            • MadManOriginal
            • 5 years ago

            4k is just shorthand for any high VRAM requirement situation, which is precisely the issue everyone is getting their panties in a twist about. But if you want to say I’m using stupid examples, please don’t post about VRAM mattering at a resolution from 1992. Yes, that ‘could’ happen, but it would be completely dumb and is not a realistic scenario.

            But please link actual game benchmarks where the GTX 980 and R290X significantly outperform the GTX 970 because of the last 500MB of VRAM speed, while being close in performance otherwise.

            FYI, this is a consumer chip for gaming, anyone seriously using it for CUDA compute tasks that matter is doing compute tasks wrong.

            • auxy
            • 5 years ago

            No, “4K” is shorthand for display resolutions with a width of around 4000 pixels. You don’t seem to understand that resolution is not the most important factor in VRAM usage. Game assets take up much more space in video memory than the buffers do. Most VRAM is there to hold the assets, not to hold the buffers; if it were just that, we could get away with 512MB cards. Easily. Rendering a game in 4K is not necessarily “a high VRAM usage situation”.

            Also, “completely dumb” is very scientific language! I’m glad you understand the issue so well you can explain to me why the situation is “completely dumb”. Except you don’t, so you can’t, which is why you resort to the vernacular of a ten-year-old.

            Your request for benchmarks highlights your failure to understand that the issue is about the card’s stated-versus-actual capabilities, not about how it performs on game-of-the-week-X. (ー_ー;)

            If you’re fine with being sold a graphics card that can’t do what it claims to do, then I guess more power to you. I’m not okay with it. And attacking me in other comment threads on other articles is just juvenile. This isn’t personal for me; I hope you could extend the same maturity.

            • Waco
            • 5 years ago

            Please, if you’re so intelligent, please point out where the bandwidth of that last 500 MB really hurts in a game.

            Nvidia has done a pretty good job of explaining how it works and what the (very slight) drawbacks are. What’s your evidence to back up your (apparently superior) position?

            • MadManOriginal
            • 5 years ago

            Pedantry. But ok, instead of 4k I will now take the time to write out ‘high VRAM use scenarios’ if that makes you happy because you can’t understand the meaning of the point otherwise.

            Also, lots of text, 0 links. Well done.

            • VincentHanna
            • 5 years ago

            Actually, I’m pretty sure he’s saying that there is no problem.

            • auxy
            • 5 years ago

            That’s what he THINKS he was saying, but because he doesn’t understand the problem (or perhaps VRAM in general), that’s not what he said. (ノー`)

            • DancinJack
            • 5 years ago

            Why do you always put those little marks in parens in your posts? What point do they serve? I’m legit asking, I really don’t know.

            • dragontamer5788
            • 5 years ago

            They are Japanese Style Emoticons.

            http://www.sherv.net/text/emoticons/japanese/

            You don’t need to look “sideways” to see Japanese-style emoticons. The parenthesis usually represents the sides of a face.

            • sweatshopking
            • 5 years ago

            MY FAVORITE IS THE SCROTUM ONE.

            • auxy
            • 5 years ago

            (;´・ω・)

            • puppetworx
            • 5 years ago

            It’s more escroti than emoji.

            Am I right guys??? Up top!!

            • auxy
            • 5 years ago

            _| ̄|○

            • dragontamer5788
            • 5 years ago

            Wrong direction.

            ○| ̄|_

            And its smaller equivalent is “orz” (http://www.urbandictionary.com/define.php?term=orz). The “o” is the head. The “r” is the arms and the beginning of the body. “z” is the legs, folded up because the man is bowing.

            orz orz orz

            As in “I’m not worthy”. Also can mean “banging head against the ground in frustration”.

            • auxy
            • 5 years ago

            Do … do you think I don’t know that? And do you think it matters what direction it is…? (´Д⊂ヽ

            • VincentHanna
            • 5 years ago

            yes, because the head must always be facing mecca. Its like the flag.

            • derFunkenstein
            • 5 years ago

            Kinda looks like an exhausted man puking his guts out to me.

            • sweatshopking
            • 5 years ago

            that’s what i thought it was.

            • Pwnstar
            • 5 years ago

            WHY AM I NOT SURPRISED?

            • sweatshopking
            • 5 years ago

            BECAUSE YOU ALSO HAVE GOOD TASTE?

            • MadManOriginal
            • 5 years ago

            Because he/she/it is an annoying twit who needs to use childish looking faces as punctuation.

            • Krogoth
            • 5 years ago

            > there’s no reason to be upset.gif

          • sschaem
          • 5 years ago

          Not sure about obsolete. It’s just a 3.2GB card vs. 4GB

          nvidia made plenty of 3GB cards before the GTX 970 🙂

        • Krogoth
        • 5 years ago

        Nvidia got caught shooting themselves in the foot, and the Nvidia Defense Force is out in force playing damage control. 🙄

        This is bumpgate and NV40 (Purevideo) redux.

          • NTMBK
          • 5 years ago

          Calling people who disagree with you the NDF, classy.

            • Krogoth
            • 5 years ago

            Anyone who attempts to defend such sleazy-marketing tricks is a die-hard fanboy or a shameless shill.

            If AMD or Intel tried to pull the same shenanigans, I would call them out on it.

            • MadManOriginal
            • 5 years ago

            What marketing trick? Please show me NV marketing that says ‘The GTX 970 will use all 4GB of VRAM when possible, even when it is not necessary for the best performance.’ Everyone is getting way too obsessed with what’s going on under the hood when it doesn’t matter. This is getting as bad as audiophile arguments when people geek out about a specific part that is only one piece of an entire component, and they don’t take the whole into account.

            • sschaem
            • 5 years ago

            ?? How can you accept a company selling you a 4GB card and then, because they knew that memory acts like a dog, restricting allocation to ~3.2GB?

            People didn’t pay for a 3GB card, this is the behavior of crooks.

            • MadManOriginal
            • 5 years ago

            Because, in the end, all that matters is the performance. Also, there have been zero links so far to real-world benchmarks that show the GTX 970 is at a significant disadvantage because of this. And if there are such examples, they can't be at uselessly low framerates… if the GTX 970 drops 20% more than the GTX 980 but they are both sub-35 FPS anyway, fucks given should equal zero for anyone who is halfway rational about it.

            • Krogoth
            • 5 years ago

            So you are okay with mislabeled products being sold and the manufacturer tries to cover it up?

            The only victims of this are current 970 users who weren’t informed of the memory issues until recently.

            • MadManOriginal
            • 5 years ago

            How is it mislabeled? Do the cards not have 4GB of VRAM on them?

            • Krogoth
            • 5 years ago

            Yes, but the card only uses 3.5GiB of memory in almost every circumstance unless it is forced to use the extra 0.5GiB. When it does use that memory, it incurs a performance penalty that normally shouldn't be happening.

            This tiny detail was omitted at launch and Nvidia knew about it and only answered when pressured.

            • juzz86
            • 5 years ago

            Got a sweet ring to it, though.

            “Keyboards down gents, we’re with the NDF. Stand aside.”

      • yogibbear
      • 5 years ago

      TLDR: Unimpressed.

      • torquer
      • 5 years ago

      Wait what? I think you need to re-read the article a few dozen times. The behavior is expected and part of the measured performance. The card is fully capable of addressing 4GB, just in two separate partitions of memory.

      The PS4 and Xbone are both advertised as having 8 core CPUs, yet no game or application can *ever* use all 8 cores at once. Should they be sued as well?

      I’d advise trying to understand the issue before flaming. I’m unimpressed with your response.

    • jihadjoe
    • 5 years ago

    2-3% less scaling in actual games is a lot less horrible than the synthetic bandwidth tests would have suggested.

      • Pwnstar
      • 5 years ago

      That’s due to good drivers.

    • green
    • 5 years ago

    3.5GB memory limit. brings back memories of xp:
    [url<]http://blog.codinghorror.com/dude-wheres-my-4-gigabytes-of-ram/[/url<]

    different. but overall it reminds me that while something may say 1TB storage, 1GBit/s connection, or some other kind of advertised "maximum", you probably won't get the full maximum out of it.

    glad to see that people have noticed this issue and are looking into it (both the company and the community)

      • Pwnstar
      • 5 years ago

      There is no 3.5GB memory limit here, so not the same thing.

    • Chrispy_
    • 5 years ago

    Good news then, the synthetic test result is skewed by something Nvidia is perfectly within their rights to do, and to which they’ve openly admitted.

    If there are negligible, 1% performance-scaling differences between the GTX 980 and GTX 970, then I don't see the problem – people are still getting a 4GB card that works in real-world usage as it should, and at $250 cheaper than the GTX 980.

    I see it as no different to phone vendors selling you a 32GB phone, only to then lose 2GB from the binary/decimal conversion and another 5GB is reserved for the manufacturer’s OS, giving you 25GB of usable space from the 32GB advertised. It’s documented, explainable, and generally accepted by the well-informed.

      • exilon
      • 5 years ago

      Actually this synthetic test may have found a bug introduced by the VRAM partitioning.

      [url<]http://www.computerbase.de/forum/showthread.php?t=1435408&page=7&p=16912375[/url<]

      [quote<]The benchmark tries to reduce this effect by repeatedly requesting the data in each memory area in turn. The first request "should" cause a page fault, and the page fault "should" copy the GPU page from CPU DRAM into GPU DRAM. The subsequent global memory accesses would then run at DRAM bandwidth; at least that was my assumption. Interestingly, the GPU does not behave that way. It does not appear to upload the corresponding data into its own DRAM; instead, each memory access triggers another page fault in CUDA and requests the data directly from CPU DRAM. In such cases the benchmark therefore measures more or less the swapping behavior of CUDA rather than DRAM bandwidth. This is easy to verify by running applications in the background that consume a lot of the GPU's DRAM, so that more swapping is needed; the benchmark collapses then as well.[/quote<]

      The last 500MB of virtual memory isn't being paged into the VRAM. The benchmark craters because it's hitting a page fault every time it tries to read that portion of virtual memory.
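
      For anyone curious what this kind of probe looks like, here is a minimal CUDA sketch in the same spirit (this is not Nai's actual tool; the chunk size, launch configuration, and allocation cap are illustrative assumptions). It shows why the numbers collapse as described above: if a chunk isn't resident in VRAM, the timing reflects WDDM paging over PCIe rather than local DRAM bandwidth.

      // Minimal chunked VRAM read-bandwidth probe (illustrative sketch, not Nai's tool).
      #include <cstdio>
      #include <vector>
      #include <cuda_runtime.h>

      __global__ void readChunk(const float* data, size_t n, float* sink) {
          // Strided read over the whole chunk; the sum is written out so the
          // loads cannot be optimized away.
          float acc = 0.0f;
          for (size_t i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
               i += (size_t)gridDim.x * blockDim.x)
              acc += data[i];
          if (threadIdx.x == 0) sink[blockIdx.x] = acc;
      }

      int main() {
          const size_t chunkBytes = 128ull << 20;              // 128 MiB per chunk (assumption)
          const size_t n = chunkBytes / sizeof(float);
          float* sink = nullptr;
          cudaMalloc((void**)&sink, 4096 * sizeof(float));

          // Grab VRAM chunk by chunk until allocation fails (capped at 8 GiB).
          std::vector<float*> chunks;
          float* p = nullptr;
          while (chunks.size() < 64 && cudaMalloc((void**)&p, chunkBytes) == cudaSuccess)
              chunks.push_back(p);
          cudaGetLastError();                                  // clear the final out-of-memory error

          // Time a full read of each chunk and report effective bandwidth.
          for (size_t c = 0; c < chunks.size(); ++c) {
              cudaEvent_t start, stop;
              cudaEventCreate(&start);
              cudaEventCreate(&stop);
              cudaEventRecord(start);
              readChunk<<<4096, 256>>>(chunks[c], n, sink);
              cudaEventRecord(stop);
              cudaEventSynchronize(stop);
              float ms = 0.0f;
              cudaEventElapsedTime(&ms, start, stop);
              printf("chunk %3zu: %6.1f GB/s\n", c, (chunkBytes / 1e9) / (ms / 1e3));
              cudaEventDestroy(start);
              cudaEventDestroy(stop);
          }

          for (float* q : chunks) cudaFree(q);
          cudaFree(sink);
          return 0;
      }

      On a headless card every chunk should report roughly the same figure; with a desktop compositor (or anything else) holding part of VRAM, the top chunks fall off a cliff for the paging reason quoted above.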

        • Ryu Connor
        • 5 years ago

        [quote=”Nai”<]The problem, however, is that the benchmark does not have the GPU's DRAM entirely to itself. Windows and the various programs running in the background also claim some of the GPU's DRAM.[/quote<]

        This part, preceding your quoted statement from Nai, is critical. The reason the last 500MB typically shows page faults is that the OS has already allocated its assets there (your desktop).

        We've been talking about this in the forum thread here on TR. His benchmark tool must be used in a very precise way or it will give bad results. It must be run in a headless configuration.

          • exilon
          • 5 years ago

          You aren’t understanding his quote. He’s well aware of that. The issue is that the last 500MB isn’t being paged properly so it’s constantly streaming from the system RAM.

            • Ryu Connor
            • 5 years ago

            I disagree, but I am having to use Google Translate – so perhaps some nuance is being lost.

            I just looked at it again and from my perspective he spent that post talking about the fact of how the program behaves when it runs into contention.

            • exilon
            • 5 years ago

            But DWM isn’t claiming 700MB+. We would see maybe 500 MB’s worth of real addresses being used by DWM. When the CUDA benchmark needs to use that, there will be thrashing and lots of swapping. This is why all runs with DWM active have the last two chunks with low score.

            For the GTX 970, there is DWM and something else that’s causing thrash in a 1 GB chunk of memory, almost as if the 500 MB partition isn’t handling swaps correctly. That shouldn’t be happening and Nvidia needs to fix it.

            • Ryu Connor
            • 5 years ago

            You're absolutely right. The DWM results vary from user to user based upon their desktop configuration. The highest I've seen is around 384MB (1600p) (plus the very top 128MB isn't tested, as Nai reserved that for the tool). I'm presuming a 4K user might see over 500MB. I've caused the tool to show problems even beyond that range by running a game, for example, while testing.

            The 970 when properly tested does show lower bandwidth in that upper range.

            One idea I’ve seen floated to that effect – bring salt – is that Nai’s small little CUDA tool isn’t able to access the second segment. He even states in his recent update post that the tool gets handed global VMem addresses.

            An interpretation of all this is as follows:

            Perhaps what NVIDIA means by their statement of needing to use gaming to test this is: the driver is only able to use both segments when fullscreen applications are active.

      • Krogoth
      • 5 years ago

      The problem is that Nvidia didn't mention this issue and hoped that nobody would notice. (There's a fair amount of circumstantial evidence for this.)

      If Nvidia had openly admitted the problem from the start and marketed the units as 3.5GiB, nobody would have blinked an eye. They did this back with the entire Fermi line-up. I blame Nvidia's marketing division for this, not the engineers.

        • Voldenuit
        • 5 years ago

        Doesn’t Nai’s benchmark show completely uniform access speeds from 0-4 GB once it’s run headless?

        It seems to me a more scientific and repeatable test than running a game where you don't know exactly how much asset data is being loaded into VRAM (and 512MB on a 4 GB card is a pretty small window).

        I get that some SoM users are having issues, but I don’t think we’re done finding the smoking gun yet. Or if there even is one.

          • Ryu Connor
          • 5 years ago

          Yeah, the entire 4GB of the VRAM is addressable and useable.

          In fact the OS frame buffer and assets live in the 3.5GB to 4.0GB range.

        • tsk
        • 5 years ago

        What problem?
        If nvidia’s statement is true there’s not a problem per se.

        The card also has 4GB, it just accesses 500MB of it differently.

          • Krogoth
          • 5 years ago

          It is a problem.

          The issue is how Nvidia's marketing is trying to spin it. This is where Nvidia is getting most of the heat from. They could have avoided all of this by admitting the problem from the start and marketing the 970 as a 3.5GiB card.

          Just imagine if Intel had released and marketed the “4670K” as a quad-core chip, but using the fourth core caused massive performance issues due to a hardware flaw, and Intel never bothered to mention this “little” detail until after the fact, when some enthusiast found it by chance. Enthusiasts who got the 4670K would want Intel's blood.

            • Voldenuit
            • 5 years ago

            [quote<]The issue is how Nvidia's marketing is trying to spin it. This is where Nvidia is getting most of the heat from. They could have avoided all of this by admitting the problem from the start and marketing the 970 as a 3.5GiB card.[/quote<]

            The GTX 660 had an asymmetric memory path and I don't recall anyone complaining. Nobody called for it to be re-branded as a "1.5 GB" card or somesuch.

            I'm not saying nvidia couldn't have done better from an engineering viewpoint, but from an economic viewpoint, I think it's perfectly understandable for a card that costs $200 less than its big brother to have some caveats and compromises attached. We already saw this with the ROP allocation issue, and that didn't stop TR from recommending the 970, or users from buying it.

            • Krogoth
            • 5 years ago

            Not in the eyes of lawyers. 😉

            That's why Intel is so aggressive about recalling stuff that has a hardware defect, no matter how silly and rare it is. (The FDIV bug is the most notable example.)

            • tsk
            • 5 years ago

            Anyone with slight insight into the legal system knows that even in the highly unlikely event there would be a lawsuit, there’s no case against nvidia here.

            • Krogoth
            • 5 years ago

            You underestimate the impact of false advertisement.

            [url<]http://www.mediaite.com/online/red-bull-settles-lawsuit-for-13m-because-the-drink-doesnt-really-give-you-wings/[/url<]

            • MadManOriginal
            • 5 years ago

            [quote<]A Red Bull spokesperson has said that the settlement was made [i<]to avoid a potentially costly and drawn-out lawsuit.[/i<][/quote<] But hey, thanks for pointing me to an easy $10 😀

    • HisDivineOrder
    • 5 years ago

    It’s a shame, but it’s not a surprise to me. Even selling 4GB cards today as “the high end” is really not going to play out very well by next year when that limit will be hit easily, so selling 3.5GB cards with a much slower .5GB segment is worse imo.

    I don’t think 3.5GB vs 4GB is going to matter as much because I don’t think the 970 (or the 980 for that matter) were ever going to be great 4K cards. I think they’re 4k cards right now because they’re the high end nVidia has atm, but it’s not hard to see that a GM200 card is going to come along and save the day in terms of giving us 4K performance we all want.

    And it’s almost guaranteed to have 6GB VRAM. Well, probably 12GB at first in the Titan and 6GB in the replacement consumer high end part, which seems like a better fit for a card that is meant to run 1080p games with 5-6GB VRAM on consoles at 4K with higher resolution textures on PC ports.

    3-4 GB cards are going to look really rather pathetic (at Ultra settings) for 4K as next-gen console games that more capably use the RAM in the PS4 and Xbox One start to be ported to the PC.

    So 3-4GB, it’s not going to be a huge deal. They’re all going to be starving for actual memory. Still, it’s no excuse for selling the cards as having 4GB as if the entire memory allotment is at the same speed when… it’s not.

    I just don’t see it making much difference in the long run and I don’t see it making much of a difference right now (in the short run), either. Seems like the choice between this and going up to the 980 or down to the AMD alternatives (at a somewhat lower cost) is still the same choice with a proviso that you could run into a problem as games go from using 3GB-ish to 6GB-ish over the next year at Ultra settings.

    But then, so will all the cards currently for sale that are being called “high end” besides a few insanely priced cards with 6 or more GB of VRAM.

    • kishish
    • 5 years ago

    How about some in depth FCAT examination of games using more than 3.5GB on the GTX 970?

      • Prestige Worldwide
      • 5 years ago

      Yes please!

    • wierdo
    • 5 years ago

    (delete – wrong article)

      • puppetworx
      • 5 years ago

      Posting in a wrong article thread.

        • September
        • 5 years ago

        Can you believe my premium smartphone in 2015 only comes with 16GB of NAND?

        Pretty soon my video cardz will have more VRAM than my phone has storage!

        /gripe-in-any-article

    • fellix
    • 5 years ago

    The problem could possibly be mitigated by more careful memory allocation management in the graphics driver. For instance, the driver could allocate high-locality types of data (like static texture arrays) to the "slow" segment of the video memory, where the low frequency of random accesses won't reflect too much on overall performance. But this is yet another burden to take into account on top of already heavy driver profiling.
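
    A purely hypothetical sketch of that placement idea (this is not Nvidia's driver logic; the segment sizes, the AccessHint flag, and the spill policy are all illustrative assumptions). It just shows the bookkeeping a driver could do: steer rarely-touched resources into the slow upper segment first, so frequently-sampled resources stay in the fast 3.5GB region.

    // Hypothetical two-segment placement heuristic (host-side sketch, not real driver code).
    #include <cstdint>
    #include <cstdio>

    enum class AccessHint { Hot, Cold };  // Hot = touched every frame, Cold = rarely touched

    struct TwoSegmentAllocator {
        static constexpr uint64_t kFastCap = 3584ull << 20;  // 3.5 GiB fast segment
        static constexpr uint64_t kSlowCap = 512ull << 20;   // 0.5 GiB slow segment
        uint64_t fastUsed = 0, slowUsed = 0;

        // Returns a pretend VRAM offset; a real allocator would hand back an address.
        uint64_t place(uint64_t bytes, AccessHint hint) {
            // Cold, high-locality data goes to the slow segment while it has room.
            if (hint == AccessHint::Cold && slowUsed + bytes <= kSlowCap) {
                uint64_t off = kFastCap + slowUsed;
                slowUsed += bytes;
                return off;
            }
            // Everything else (and cold overflow) lands in the fast segment.
            if (fastUsed + bytes <= kFastCap) {
                uint64_t off = fastUsed;
                fastUsed += bytes;
                return off;
            }
            // Fast segment full: spill into the slow segment as a last resort.
            uint64_t off = kFastCap + slowUsed;
            slowUsed += bytes;
            return off;
        }
    };

    int main() {
        TwoSegmentAllocator alloc;
        // A static texture array tolerates the slow segment; a render target does not.
        printf("static textures -> offset %llu MiB\n",
               (unsigned long long)(alloc.place(512ull << 20, AccessHint::Cold) >> 20));
        printf("render target   -> offset %llu MiB\n",
               (unsigned long long)(alloc.place(64ull << 20, AccessHint::Hot) >> 20));
        return 0;
    }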

      • sschaem
      • 5 years ago

      I think they do already. That's why we need to look at min frame times.

      nvidia should have sold/advertised this as a 3GB card and leveraged the extra 1GB in the driver as a high-speed cache.

      The fact that nvidia tries to sell this as a full-fledged 4GB card doesn't sit well with me…

        • yokem55
        • 5 years ago

        Do you protest over buying a phone with ’16 GB’ storage that with the OS and overhead leaves you with 12 accessible?

      • UnfriendlyFire
      • 5 years ago

      I don’t trust game developers to code their games to ensure that the performance doesn’t drop excessively.

      Look at the recent Assassin's Creed game that ended up dropping Ubisoft's stock price.

    • sschaem
    • 5 years ago

    The thread shows a 7x and 15x drop in performance for memory > 3.2GB.

    That's HUGE, so the reason games only show a minor drop in performance is that little of that memory is used, even when the game shows a >3.5GB load.

    And then the driver prioritizes the memory to reduce any possible references.

    We might also have to look at minimum frame times, not just average FPS; some surprises might be revealed there too…

    To be honest, nvidia should relabel the card as a 3.2GB card + .8GB of support storage.

    And BTW, nvidia KNEW this from day one, because it was coded into the driver from day one.

    All that time I thought the GTX 970's poor performance at 4K was due to its 256-bit bus (vs. the 290X)… Nice to know the truth.

      • Pwnstar
      • 5 years ago

      Is it really support storage though? It just sounds like RAM segmentation to me.

    • fellix
    • 5 years ago

    Ahh, memory address segmentation — the long forgotten days of DOS and extended memory drivers.

      • sschaem
      • 5 years ago

      This issue has nothing to do with any extended memory model.

      It's about .8GB of the card's memory being crippled, and the driver prioritizing allocation to avoid this super slow part of the VRAM until forced to.

        • Krogoth
        • 5 years ago

        Nah, it has more to do with how Nvidia mismarketed the 970.

        This is similar to Creative's little legal trouble back when they claimed that the Audigy 1/2 did 24-bit/96KHz for everything when in practice it downsampled inputs to 16-bit/48KHz.

        Nvidia placed themselves in a similar position by claiming that the 970 can utilize 4GiB of memory when in practice it utilizes 3.5GiB unless forced to, and when it does utilize that upper space it incurs a performance penalty. They didn't openly admit this or at the very least make disclaimers.

        They just opened up a legal can of worms, and it will be no shock if in future GPU releases there are new disclaimers to the effect of "due to disabling of components on the chip, your memory usage, user experience and performance may vary" in fine print in the warranty/EULA.

          • UnfriendlyFire
          • 5 years ago

          Or, they'll say: “Real-world performance is not guaranteed to match Nvidia's benchmark performance.”

    • f0d
    • 5 years ago

    nvidia's results seem to suggest this is a non-issue (around the same performance drop for the 970 and 980 at over 3.5gb usage),
    yet actual users' results contradict them.

    i have seen the gpu memory benchmark test and how it affects 970 owners, and there does seem to be a big drop after 3.5gb. i have also seen how some people have trouble getting their 970 to use over 3.5gb of memory – yet these nvidia results suggest that the cards are using the memory above 3.5gb and there doesn't seem to be any performance drop while using it.
    so many contradictions!

    there needs to be further testing from someone like TR on the matter imo

    i wonder if it might also affect some users more than others depending on how they attacked the full gm204 with the mini chainsaw, which i think they have done before with some previous gpus (different performance depending on how they disabled parts of the gpu)

    if it really is an issue as real-world users suggest then nvidia made a double booboo by saying it's a non-issue

    but either way i think scott needs to do some testing of his own!

      • sschaem
      • 5 years ago

      I agree. The numbers do not reflect the HW problem.

      The thread clearly shows the HW performance, and shows a 7x and 15x drop in performance for the higher .8GB of the addressable space.

      The reason those games show so little drop is that the resources are prioritized by the driver, trying to hide the HW problem, so during a frame render the higher .8GB might only be accessed sparsely. So maybe only 5% of your accesses drop by 7 to 15x, resulting in an overall ~3% measured drop in expected FPS.

      The situation could get worse or better depending on the game/app.
      Best case, only 3.2GB is referenced; worst case, during a single frame only the top .8GB is referenced, and for that frame the perf could go from 50FPS to 5FPS.
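
      A back-of-the-envelope version of those numbers, assuming (purely for illustration) that memory traffic accounts for about 10% of frame time, and that 5% of accesses land in the slow segment at 1/7th speed:

      \begin{aligned}
      t_{\text{mem}}' &= 0.95\,t_{\text{mem}} + 0.05 \times 7\,t_{\text{mem}} = 1.30\,t_{\text{mem}} = 0.13\,t_{\text{frame}} \\
      t_{\text{frame}}' &= 0.90\,t_{\text{frame}} + 0.13\,t_{\text{frame}} = 1.03\,t_{\text{frame}}
      \end{aligned}

      That works out to roughly a 3% FPS drop even though the affected accesses are several times slower; shift more of a frame's traffic into the slow segment and the penalty grows accordingly, which is the worst-case point above.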

        • MadManOriginal
        • 5 years ago

        So you’re saying the drivers are written very well to minimize this problem in actual use.

          • Krogoth
          • 5 years ago

          True, but the crux of the problem is that Nvidia misrepresented the 970's capabilities and tried to "quietly" brush it aside.

          They opened themselves up to potential legal action. Creative tried the same stunt with the early Audigy family.

            • MadManOriginal
            • 5 years ago

            Pray tell, what exactly was misrepresented?

            • l33t-g4m3r
            • 5 years ago

            The fact that the 970 is crippled. Don't play dumb. It's a 660 redux, but wasn't marketed as such. Nv would never have owned up to it if users hadn't noticed the weird memory usage.

            • Krogoth
            • 5 years ago

            *sigh*

            Nvidia was aware of the problem since the beginning and coded their drivers around it. Despite this, they still marketed the card as having 4GiB when in practice it uses 3.5GiB unless you force it via synthetics or by bypassing game profiles.

            It is akin to marketing a piece of software that pitches "feature x" while omitting the fact that "feature x" was disabled at official release because it can cause some "potential issues", and never notifying your customers until after the fact.

            Nvidia could have avoided all of this stupid BS if they had simply marketed the 970 as having a capacity of 3.5GiB from the start and explained why it was this way. It would not have affected its position and value in the marketplace.

            • MadManOriginal
            • 5 years ago

            The card has 4 GB of physical RAM installed and will use it when needed. It’s really that simple, there’s no misrepresentation unless you can find a place where NVidia said ‘Your card will use all 4GB of RAM when it is not necessary.’

            • VincentHanna
            • 5 years ago

            but it has 4GB and it uses 4 GB… so feature X isn’t disabled.

            • Krogoth
            • 5 years ago

            It doesn't use it in practice because of gaming profiles and drivers, with no mention of this, or of why, prior to its discovery.

            • VincentHanna
            • 5 years ago

            you mean just like the 290x and the 980 don’t use it in practice either, because of the same gaming profiles?

            • VincentHanna
            • 5 years ago

            well, if it doesn’t matter… then does it really matter?

            • Krogoth
            • 5 years ago

            It is because if customers don't force Nvidia to acknowledge little details like this and keep buying the product despite such issues, it gives Nvidia and other vendors the green light to do further binning without explaining its possible impacts. They can even try to pitch it as a special "feature".

            Can you imagine choosing the 970 primarily because of its 4GiB of VRAM, only to find out that isn't quite the case? This little detail wasn't noted or mentioned until after the fact. You would be rather annoyed by it. It is like picking up a case of your favorite beverage only to find that it is missing a few units from the packaging, despite being marked as 12 bottles/cans. You would try to return it.

            It is a good thing that third-party enthusiasts are investigating the hardware to find such little problems and their potential impact.

            • ThorAxe
            • 5 years ago

            Can I return my 4870×2 because crossfire never worked or my two 6870s that didn’t work properly until about 3 years later?

            The issue with the 970 is almost a joke when compared to effectively losing 50% of my 4870×2 due to non-working crossfire.

      • jihadjoe
      • 5 years ago

      IMO testing with actual games gives far more realistic results than synthetic bandwidth measurements.

        • f0d
        • 5 years ago

        i 100% agree
        but should we just rely on nvidia's results? i still believe that some testing independent of nvidia is justified

        if this isn't an issue then great
        if it is then it's pretty deceptive of nvidia

        we need someone like scott to test this issue out to know for sure what's going on

        i certainly hope it's going to be good news and that it isn't an issue, but we won't know until further testing is done

          • Voldenuit
          • 5 years ago

          [quote<]but should we just rely on nvidia's results? i still believe that some testing independent of nvidia is justified[/quote<]

          There's been plenty of testing going on in our forums and on overclock.net, both with synthetic tools and real-world games, but I agree, it would be nice to have TR or some other reputable site perform methodical testing of this issue.

          I'm not trusting nvidia's tests prima facie because a. they have a vested interest, and b. they change so many settings between their sub-3.5GB test and >3.5GB test that it's hard to tease out the actual contribution of any memory bandwidth discrepancies.

          Having said that, the practice of disabling SMX units goes back a long way, and this is probably not the first card in history whose memory performance has taken a hit as a result. We were all happy with our 9500 Pros, 4850s, 6850s, 660GTXes, and 770GTXes in the past, so the "outrage" against this issue really seems out of proportion. What part of "pay less money to get less performance but (hopefully) more bang for your buck" aren't people getting?

            • l33t-g4m3r
            • 5 years ago

            You're the one not getting it. Nvidia didn't market this as a crippled 660, but as a 970. The 970 wouldn't have sold as well if they had been up front with the memory issues.

            Another thing that irritates me is the 960. That's not an x60 product, but an x50.

            Nv is playing games with their model numbers so they can charge higher prices, and AMD has somewhat enabled this by having uncompetitive products. IMO, that's dirty business, regardless of AMD's performance. Nv isn't hurting AMD by doing this; they're hurting their reputation with their base, who expected a true x70 and not an x60.

            • Voldenuit
            • 5 years ago

            Meanwhile, intel is marketing Atoms as “Pentiums”, AMD has been rebadging their system chipsets for literally years, and nvidia has also been rebadging chips.

            Consumers should pay attention to the price/performance of any given product and [i<]never, ever[/i<] rely on a model number or branding designation. Of course, they don't do that in the real world.

            But even before this issue with the 970s was discovered, most tech sites came to the conclusion that the 970 is and was not a credible single-GPU 4K solution, so I don't see a major loss to existing users. Now, if it spills over into making the card unsuitable for 4K SLI configurations, that would be a more serious issue, so I'd certainly like to see TR investigate both single-GPU and SLI configurations in their testing.

            You are testing this, right, Scott? Right? :p

            • MadManOriginal
            • 5 years ago

            Who the hell cares what model number a card has? Oh yeah, haters who need an irrational reason to hate something. All that matters is the performance, a model number is completely arbitrary.

    • Kougar
    • 5 years ago

    Are there any games that will use 4GB of RAM but at settings that would not severely bottleneck the GPU?

      • NovusBogus
      • 5 years ago

      Heavily modded Bethesda games come to mind: textures are notorious VRAM hogs and modders love to crank up the texture detail.

        • Kougar
        • 5 years ago

        Has anyone tested with these yet?

        I’m wondering if the severely GPU-bottlenecked settings are masking the last 500MB RAM use penalty.

          • auxy
          • 5 years ago

          Yes, we have tested. Check the forum thread, and the one on overclock.net.

        • jessterman21
        • 5 years ago

        MY DAGRONS NEED 8K TEXXXTUREZ TO GO WITH MY ENB!!!1!111!

          • auxy
          • 5 years ago

          Perhaps a plebeian playing in 1920×1080 is fine with the default textures. Those of us enjoying higher resolutions will require greater texture detail. ( *´艸`)

          • swaaye
          • 5 years ago

          Hehehehheheheh 🙂

          I recall the glorious days of modding Oblivion and having a 256MB X800XT trying to push 1920×1200. The 8800GTX’s 768MB was a blissful upgrade.

          I haven’t really gotten into Skyrim though.

    • puppetworx
    • 5 years ago

    [s<]Those are some massive performance drops. Luckily it's only affecting 4K users (a minority) at this point. Still, this is really important information for 4K users. Halving your frame rate at 4K is a tremendous setback.[/s<]

    Is there any easier way to address this than by constantly monitoring VRAM usage with GPU-Z or MSI Afterburner? I know you can limit VRAM usage in many games' config files, but really what you'd want is a way to limit it globally in the driver.

    I just noticed they changed more than one variable: memory use [i<]and[/i<] resolution/sampling. (Hence the strikeout.) It would be interesting to see the magnitude of the effect when resolution and all else are held the same, but textures are varied to fill the memory.

      • yokem55
      • 5 years ago

      For both the 970 (with the memory limitation) and the 980 (without this memory limitation) there is a big drop when moving to settings that cause increased memory load. But the 970 only drops 1-3% more than the 980 does. Granted this isn’t a nice Inside The Second™ frame time graph, but it seems the impact of this design decision is nominal.

    • l33t-g4m3r
    • 5 years ago

    Yup. It’s a 660 repeat, but they’re charging 670 prices. Typical NV crippling. Why am I not surprised?

      • puppetworx
      • 5 years ago

        Can you explain what you mean by this, please? I'm curious, as I had a GTX 660 for a while, and though I could never prove it, I was sure it had memory problems.

        • biffzinker
        • 5 years ago

        [quote=”derFunkenstein from PCPerspective”<]It's also not the first time they've done something bone-headed like this. The GTX 550Ti and GTX 660/650Ti with 192-bit memory controllers are affected by this weird asymmetrical memory configuration for a much larger percentage of their VRAM. A full half of their RAM is jammed into a single 64-bit channel, and the other 50% is halved and each given their own. And the performance of the 660 is what I would classify as "fine". It might have been slightly better with 3GB instead of 2, but I wouldn't feel cheated if I had that card.[/quote<]
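
        Roughly, the layout being described works out like this, assuming the commonly reported 2GB GTX 660 arrangement of two 64-bit channels carrying 512MB each plus one carrying 1GB (treat the exact split as an assumption):

        \begin{aligned}
        \underbrace{3 \times 512\ \text{MB}}_{\text{interleaved across all three 64-bit channels}} &= 1.5\ \text{GB at the full 192-bit rate} \\
        \underbrace{512\ \text{MB}}_{\text{remainder on the 1GB channel alone}} &\approx \tfrac{1}{3}\ \text{of that rate}
        \end{aligned}

        So half the RAM sits on a single 64-bit channel, as the quote says, but only the last 0.5GB is actually confined to it.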

          • derFunkenstein
          • 5 years ago

          Yah, after I wrote that there, I was going to write this here but my phone rang and I forgot.

          Ultimately, the 660 was a fast card for its time (keeping up with the 660Ti, generally, and using a lot less power in the process), and the 970 is a fast card now. My larger point was, look at the end result and judge, and don’t let what you find out afterwards taint what was already a pretty good result.

            • puppetworx
            • 5 years ago

            I remember now what my problem had been: I could never use more than 1400MB of the 2GB of memory. I was only running a single 1080p monitor, so I doubt WDDM (Win 7) was using 600MB, but then again I'm not well educated in such matters. I tried many games and they all went into the 1300MB range but not beyond. There were no performance issues; I just thought it very odd that 30% of the memory was 'unused'.

            Like you say, the GTX 660 is a great card despite the peculiarity I experienced; in fact it's one of the best for price/performance.

            • jessterman21
            • 5 years ago

            Mine goes up to 1530MB before I get a short stutter, and VRAM either goes over or bounces back down below 1500. Mostly tested with Skyrim and texture mods – though Ryse and Battlefield 4 use over 1600MB and I haven’t really noticed any bottlenecking.

            The problem is well managed; I agree it really is a great card, but it really should’ve been sold in 1.5 or 3GB capacities.

        • south side sammy
        • 5 years ago

        My 660 was the second-worst card I ever owned. It had problems I couldn't quite get a handle on. Memory/memory interface… the latter, I think, was the culprit.
        Seems like they are cheapening things up, making micro advances and touting something new without giving any real-world advantage over 2-year-old hardware. I couldn't care less about the energy usage. I want a new card that costs $100 less than the last great card with the same performance, and I think that is something we all look for and expect. Red and GREEN are just putting cards on the market to make money. I want something new but won't/don't buy. Fell for that stuff too much over the years. If they want my money, I'm adult enough to know I want something for it.

          • BlackDove
          • 5 years ago

          Why else would they do it other than to make money?

          Why is a company making money a bad thing?

          Ignorant consumers falling for marketing and not doing research allow companies to sell something worse next year instead of making better things.

          That was supposed to be a reply to l33t-g4m3r

      • JustAnEngineer
      • 5 years ago

      What are you whining about? Look at the performance of the card to judge the value.

      A hot-clocked version of the GeForce GTX970 performs about 5% worse than a Radeon R9-290X and costs only 10-18% more than a Radeon R9-290X (depending on whether or not you believe in MIRs). GeForce GTX980 performs about 10% better than a Radeon R9-290X and costs a whopping 84-96% more than a Radeon R9-290X. That makes the “crippled” GeForce GTX970 a heck of a bargain compared to the price of the less-crippled version of the same NVidia GM204 GPU in the GeForce GTX980.

        • sweatshopking
        • 5 years ago

          LoL. While you're right, I think it's annoying that they advertise it as a 4GB card when really it's a 3.5GB one.

          • Chrispy_
          • 5 years ago

          Read the quote:

          [quote<]When a game requires more than 3.5GB of memory then we use both segments[/quote<]

          It's really not 3.5GB; it's really still 4GB, but partitioned as part of a memory crossbar management system. Just because a synthetic test or monitoring tool doesn't activate the 2nd partition doesn't mean that it's not there or not usable by games.

          Their test backs that up, and if they're full of crap I'm sure people will call them out on it very vocally and in very short order.

            • sweatshopking
            • 5 years ago

            It isn’t “usable” because it is slow as f. TECHNICALLY it exists, but it’s so slow as to be barely useful WHEN compared to the other ram.

            • Chrispy_
            • 5 years ago

            If 1/8th of the memory is [i<]slow as f[/i<] then performance would drop massively in games using >3.5GB as tested, not the insignificant difference actually measured.

            The forum thread is interesting; it seems to indicate that the tool used to raise this issue wasn't running in exclusive or headless mode, meaning that 128MB was reserved for the adapter and the remaining "missing" 384MB was in use by the DWM desktop composition. That hasn't been conclusively proven yet, but that's definitely what it looks like.

            edit: MB, not GB.

            • sweatshopking
            • 5 years ago

            IF it's tester error, why has Nvidia CONFIRMED and then EXPLAINED that there is an “issue with memory allocation”?

            • Pwnstar
            • 5 years ago

            Because there is a small issue with the way they designed the 970. It’s very small though, like 3% performance hit. I can see why they didn’t spend a bunch of money to fix it.

            • Ryu Connor
            • 5 years ago

            Chrispy is talking about an additional issue.

            The testing tool reports erroneous results if the benchmark is not executed under precise conditions. The Windows OS (and all operating systems) places its frame buffer and assets at the top end of the card's memory range. In the case of the 970, those assets are in the final 500MB.

            This is amongst the reasons saying the card is 3.5GB isn’t right. The OS is actually using the 3.5GB to 4GB range. The popular tool for pointing out the problem with the 970 was called Nai’s Benchmark. That tool does not obtain fullscreen exclusive access, meaning it must contend with any other programs – including the operating system – that are presently running on the card when executed.

            If you had a windowed mode copy of Plants vs Zombies going and you ran Nai’s Benchmark it’s going to tell you that a significant portion of your upper memory range has crap performance. In some extreme cases running the test that way will even cause the driver to suffer a TDR.

            You must run the benchmark in a headless state to get proper results.

            So he’s not saying that the issue isn’t real: after all NVIDIA has confirmed that the 970 has a dual segment design. We’re just saying that there are secondary issues that have really muddied the waters with understanding what’s going on. That the popular tool for testing this can give bad data if not run under precise conditions.

            Nai has talked about some of these issues himself [url=http://www.computerbase.de/forum/showthread.php?t=1435408&p=16912375#post16912375<]here[/url<].
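
            As a small illustration of the precondition being described (not part of Nai's tool; the 200 MiB threshold is an arbitrary assumption), a few lines of CUDA host code can tell you whether the card is actually close to headless before you trust a probe of the upper memory range:

            // Check how much of the device is already claimed before benchmarking.
            #include <cstdio>
            #include <cuda_runtime.h>

            int main() {
                size_t freeB = 0, totalB = 0;
                if (cudaMemGetInfo(&freeB, &totalB) != cudaSuccess) {
                    fprintf(stderr, "cudaMemGetInfo failed\n");
                    return 1;
                }
                double usedMiB = (totalB - freeB) / 1048576.0;
                printf("total %.0f MiB, free %.0f MiB, already in use %.0f MiB\n",
                       totalB / 1048576.0, freeB / 1048576.0, usedMiB);
                // If the desktop, a browser, or a windowed game already holds a few
                // hundred MiB, the top of the address range is contended and the
                // probe will measure that contention, not DRAM bandwidth.
                if (usedMiB > 200.0)
                    printf("warning: not headless; results for the upper range are suspect\n");
                return 0;
            }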

            • jessterman21
            • 5 years ago

            I’m fairly positive they cherry-picked those tests…

            And who knows what kind of microstutter is occurring within those average framerates?

            • l33t-g4m3r
            • 5 years ago

            Sure, but when a game DOES use more than 3.5GB of memory, performance drops off. This is the PERFECT scam against your own customers, making cards obsolete quicker than normal.

            As it's been said, nobody will notice this under most CURRENT games that don't use >3.5GB of RAM, but 6 months down the road Nvidia will have fixed that issue by sponsoring game devs to use uncompressed hi-res textures, and you'll have to cut down your settings a notch, making you want to buy another card.

            Nvidia is playing dirty tricks on its customer base. Just look at how long the 660 stayed relevant. That's how long the 970 will stay relevant. People are wasting their money on these cards, as they will regret their purchases after they run games that use >3.5GB of VRAM.

            You certainly won't be able to use the 970 for 4K either, that's for damn sure. Perhaps NV was actually crippling their cards against that exact scenario! Bought a 4K monitor? Buy a 980. The 970 is for 1080p peasants, even if you bought into SLI. This is the scam NV is playing with its customer base, and it's disgusting.

            • Krogoth
            • 5 years ago

            Nah, it is more like Nvidia found the problem after making some units and testing them internally. They realized that by the time the silicon starts to use 3.5GiB or more of video memory, it is already at its breaking point whether there is a bug or not. The majority of its intended buyers aren't going to notice or experience it.

            Just look how long it took for third-parties to discover this problem. It was only by chance since they wanted to see if 4GiB of VRAM was actually *needed*.

            • l33t-g4m3r
            • 5 years ago

            No way. They did it with the 660 on purpose. IMO they pulled the same stunt but didn’t admit it so they could raise prices.

        • geekl33tgamer
        • 5 years ago

        It also makes it poor value compared to the 290X.

          • yokem55
          • 5 years ago

          Unless you care about Linux support. Or power and thermals. Or PhysX. Or you’ve bought a gsync monitor. Or…..

            • Westbrook348
            • 5 years ago

            Or 3d vision

            • sweatshopking
            • 5 years ago

            NOBODY CARES ABOUT PHYSX. AND GSYNC IS DEAD.

            • l33t-g4m3r
            • 5 years ago

            Not 4k though. That would hit the memory wall quicker, and perhaps that’s the exact scenario NV had in mind when they crippled this card.

            • Pwnstar
            • 5 years ago

            You say that like games can only access 3.5GB, when GPU-z clearly shows otherwise.

            • l33t-g4m3r
            • 5 years ago

            Maybe in Bizarro World. I'm clearly talking about situations where it DOES use the full 4GB, which would affect performance.

            If anything, it makes sense for NV to optimize memory use under 3.x, so it doesn’t cause stuttering. That doesn’t mean you don’t have access to the full 4gb, it means you lose performance when you hit full memory usage.

            • Pwnstar
            • 5 years ago

            Lots of people don’t care one whit for those things (Linux, TDP, PhysX or G-Sink). Or 3D Vision, as Westbrook adds. Really? People still care about 3D?

            • l33t-g4m3r
            • 5 years ago

            3d will be really important once 120+hz monitors and VR headsets catch on. AMD has nothing on 3d vision, aside from tridef which costs extra, requires manual setup, and doesn’t perform all that well. 3d vision on the other hand is fairly efficient and works out of the box for most games. It may not be big [i<]now[/i<], but down the road...

    • AMN3S1AC
    • 5 years ago

    What is the source of this statement? Which person or department issued this?

    I was speaking to Nvidia tech support moments ago and they say the issue is still under investigation and they are trying to find a solution.

    They also said directly that they know nothing of this statement and that any information would also be posted on their site/forum.

      • Damage
      • 5 years ago

      Nvidia PR emailed it to me.

        • AMN3S1AC
        • 5 years ago

        Nvidia tech support say that the matter is still very much open and under investigation.

          • sweatshopking
          • 5 years ago

          They’re clearly wrong.

          • MadManOriginal
          • 5 years ago

          You’ve never worked in a company with more than a handful of employees, have you.

          • HisDivineOrder
          • 5 years ago

          Tech support tells you the party line right up until the memo goes out, long after it went to the press, informing them that yes, they have confirmed the issues are real and no, they are not liable. And no, do not volunteer this info unless the customer is already aware. And be careful not to accept blame, liability, or in any way mention "lawyers," "class," "action," "lawsuits," "deceit," "lies," or "responsibility." "Be as generic as possible," they would say, "and refocus the customer on the reasons this product more than matches what was promised, even if said promises were not explicit or specific. If the customer starts to veer away from the obvious benefits our strategy has garnered for them and goes to the dark place where the dark things happen in their heads, do not continue to engage. Simply provide them a $20 gift card for a shiny new Tegra Tablet.

          "If a $20 gift card is not enough to soothe their sore soul, you are also authorized to provide a $25 gift card. Anything greater requires prior approval by a supervisor, but in a few rare cases a $50 gift card may be authorized."

          • VincentHanna
          • 5 years ago

          LOL, tech support. Help desk level 1, that’s cute.

            • travbrad
            • 5 years ago

            Have you tried power cycling your GPU?

    • south side sammy
    • 5 years ago

    where did they show that all 4gigs could be accessed and utilized?
    I take from this: yes, we advertised 4 gigs, but even though it's on the cards you can't use it. Kinda like when you SLI we say you have 8 gigs but only 4 are usable…… but in reality instead of 4 gigs + 4 gigs = 8 gigs it's still only 4 gigs…(7)… unless you use one of our 8 gig cards, but in reality you only have 7…??? Talk about professionals blowing smoke… and naive people buying it.
    I wonder how long they knew this and whether they thought it would fly under the radar. And why would they concoct this way of partitioning the memory? Yeah, I read the article.

    ever hear “tie two birds together, though they have four wings they cannot fly”?

      • Pwnstar
      • 5 years ago

      They show the FPS after 3.5GB right in the chart. So, yes, they showed that the last 500MB could be accessed.

    • ronch
    • 5 years ago

    So wouldn’t the 970 be better off with just 3.5 GB? Or perhaps 3GB?

      • Ryu Connor
      • 5 years ago

      Given the results of the benchmarks up above, I’d say no.

      Running out of memory and having to hammer the PCIe bus stands to be worse performance.
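
      The rough figures usually quoted line up with that, treating them as assumptions rather than measured numbers: the 970's full 256-bit bus is rated around 224 GB/s, a single 32-bit controller serving the 0.5GB segment works out to about 28 GB/s, and PCIe 3.0 x16 tops out near 16 GB/s in theory:

      224\ \tfrac{\text{GB}}{\text{s}}\ (\text{full bus}) \;>\; 28\ \tfrac{\text{GB}}{\text{s}}\ (\text{0.5GB segment}) \;>\; {\approx}16\ \tfrac{\text{GB}}{\text{s}}\ (\text{PCIe 3.0 x16})

      Even the slow segment would be well ahead of spilling to system memory over the PCIe bus.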

        • Kougar
        • 5 years ago

          So basically what NVIDIA is showing with these results is that we have to use settings where the GPU is already such a severe bottleneck that the last ~500MB of RAM has just a minor effect on the results by comparison. I'm not sure if that's a good thing or a bad thing.

          • Ryu Connor
          • 5 years ago

          [url=http://hardocp.com/article/2014/11/19/nvidia_geforce_gtx_970_sli_4k_nv_surround_review/1#.VMPq7u8o7uo<]HardOCP: NVIDIA GeForce GTX 970 SLI 4K and NV Surround Review[/url<]

          I'd [i<]presume[/i<] that this review from HardOCP, given the resolutions and settings used, would have been using that last 500MB. Brent came away from the results pretty happy for consumers.

          So perhaps it's best summed up as merely a technical curiosity.

            • MadManOriginal
            • 5 years ago

            Noooooooo, facts and information from actual real-world use. SCREW THAT, what matters is artificial corner cases that don’t actually affect real-world use because they mean NV IS UNDENIABLY EVIL AND LIED AND SHOULD BE SUED.

            CPUs throttle under Intel Burn Test load, therefore the advertised speed for real-world use IS A LIE! TIME TO SUE INTEL!

            • auxy
            • 5 years ago

            I don’t know why you would presume that.

            • Ryu Connor
            • 5 years ago

            [url=http://www.digitalstormonline.com/unlocked/images/articles/Nav/vramusage4k/crysis34kvram.jpg<]Mostly because of reported results like this.[/url<]

            People reporting memory usage isn't super common in reviews, but it does happen. The reported values that are available do show that titles like BF4 and Crysis 3 at very high resolutions and settings will skirt past 3.5GB of VRAM usage.

            • auxy
            • 5 years ago

            Games cache assets in available VRAM. Most games are designed to stream their data in due to the 512MB limit on the last-gen consoles, or to reduce loading time (or both).

            Crysis 3 will use a little over 3.5GB of VRAM when it’s available, but we have no way to know that the NVIDIA driver isn’t informing the game of ~3.25GB being available and then the game unloading old assets to avoid that usage.

            I don’t think this is something we can assume isn’t happening.

          • Jason181
          • 5 years ago

          I suppose it reveals that it’s a rather well-balanced design.

    • limitedaccess
    • 5 years ago

    Will you be following up with Nvidia and inquiring about behavior with other cards in the Maxwell family (eg. GTX 980m, 970m, 965m, GTX 750)? Possibly Kepler as well?

      • Ryu Connor
      • 5 years ago

      [url=https://techreport.com/forums/viewtopic.php?f=3&t=101965<]Our forums[/url<] have results from other Maxwell (GM107 2GB) and Kepler (GK104 4GB) cards. Laptops can't really be benched as Optimus causes them to generate incorrect results. The easy benchmark that has become so popular for testing has exacting conditions to be run under and can result in erroneous results if used wrong (which is easy).

    • EzioAs
    • 5 years ago

    If the performance drops are roughly similar between the GTX 970 and GTX 980 once memory usage goes above 3.5GB, this isn't really much of an issue, as I understand it. I agree, though; I think we need more samples (benchmark programs/games).

      • sschaem
      • 5 years ago

      The HW drops by 7X with uncached access for any address >3.2GB,
      and by 15X with cached access for any address >3.2GB.

      This will affect games/apps in different ways.

      For games, this issue is not too important. I think they can expect a ~5% drop in card performance in general.

      The issue is more a moral one on the part of nvidia for not disclosing this known flaw.

    • sweatshopking
    • 5 years ago

    I think this may mean that it’s not a 32bit addressing limit? does scott need to hug some people?

      • ronch
      • 5 years ago

      I think people suffering this issue should give Jen Hsun a hug… a bear hug.

    • MadManOriginal
    • 5 years ago

    So the ‘real results’ are what really matter, as fun as the geek investigation is. Too many people seem to have lost sight of that. Anyway, my question is whether the 0.5GB memory portion is *actually* slower when games need to use it, and if so, is it not a bottleneck anyway? (The difference in performance drop would seem to suggest that is the case.)

    • swaaye
    • 5 years ago

    This sounds similar to the GTX 550's heterogeneous memory bus width. Or whatever that was called.

    • yokem55
    • 5 years ago

    Does this mean that 8 GB 970’s won’t happen?

      • Damage
      • 5 years ago

      No, I think it means they’d be more like ~7GB cards in effect.

        • Airmantharp
        • 5 years ago

        Nothing wrong with that…

          • Firestarter
          • 5 years ago

          they’d be advertised as 8GB cards though

          • sschaem
          • 5 years ago

          The wrong part is: advertising and selling them as 8GB parts when really only 6.4GB is 'usable'.

            • Pwnstar
            • 5 years ago

            sschaem: right as usual.
