How much graphics memory do you really need?

When shopping for computer hardware, consumers tend to favor bigger numbers. I don’t blame them, either. The average consumer knows about as much about hardware as I know about needlepoint, which is to say very little. Most, I would suspect, have no idea exactly what a megabyte is, let alone a gigahertz. But they can count, so when faced with the choice between 2GHz or 2.5GHz, they’re going to go with the higher number. Because it’s better.

Except when it’s not, as was the case with the Pentium 4. Intel architected the P4 to scale to higher clock speeds than we’d ever seen before, birthing a megahertz myth that conveniently exploited consumer ignorance. Why would Joe Sixpack buy an Athlon at a mere 1.4GHz when he could get a Pentium 4 at 2GHz? Enthusiasts knew the score, but for years, mainstream consumers were easily persuaded—if they hadn’t assumed already—that the Pentium 4 was a far better processor because it had a higher clock speed.

More recently, we’ve seen a much smaller but no less absurd memory myth take hold in the graphics card industry. Budget cards equipped with ridiculous amounts of memory are the culprit here. For enthusiasts, a gig of memory on a sub-$100 graphics card makes about as much sense as putting a spoiler on a Yugo. Budget GPUs lack the horsepower necessary to run games at the kinds of resolutions and detail levels that would require such a copious amount of video memory. But what about the latest crop of mid-range graphics cards? Nvidia’s GeForce 8800 GT has considerable pixel-pushing power on its own, and when two are teamed in SLI, that power is effectively doubled. Perhaps a gigabyte of memory on this class of card isn’t so unreasonable.

Conveniently, derivatives of the GeForce 8800 GT are available with 256MB, 512MB, or 1GB of memory, making it easy for us to probe the impact of graphics memory size on performance. We’ve tested a collection of single cards and SLI configurations in a selection of new games, across multiple resolutions, to see where memory size matters, if it does at all. Keep reading for the enlightening results.

Bigger than yours

When Nvidia first introduced the GeForce 8800 GT, the card came equipped with 512MB of memory. Shortly thereafter, in response to AMD’s launch of the Radeon HD 3850 256MB, the green team filled out the low end of the 8800 GT lineup with a cheaper 256MB variant. The 8800 GT’s memory size didn’t scale upward until graphics board makers started fiddling with the design on their own. Palit put 1GB of memory on its 8800GT Super+, releasing it to market alongside some of the most subversively phallic promotional shirts we’ve ever seen.

To the casual observer, the Super+ doesn’t look all that dissimilar to other 8800 GT cards. Sure, it has a custom dual-slot cooler, and even three-phase power for the GPU, but nothing really screams out that this card packs twice the memory of your average GT—that is, until you turn it over.

GeForce 8800 GT cards with 512MB of memory have no problems squeezing the RAM chips on one side of the card. However, to accommodate 1GB of memory, the Super+ fills out both sides. The memory chips on the underside of the card normally lurk behind a bright blue heatspreader, but they’ve agreed to come out just this once for a picture.

Test notes

Before getting started, we should probably take a moment to frame the issue at hand. Today we’re looking for two things: whether memory size has a tangible impact on in-game performance, and if it does, whether that impact comes at resolutions and detail levels that offer playable frame rates. We’re not interested in comparing the performance of one slideshow to another.

With the obvious exception of Crysis, we were actually able to get decent frame rates in all of our games with their highest in-game detail levels, and with antialiasing and anisotropic filtering to boot. That makes resolution the most obvious candidate for scaling. I should apologize in advance for not having one of those swanky 30″ monitors that goes up to 2560×1600—a fact that pains me on an almost daily basis. The best my test systems can do is 1920×1440 on an old-school CRT, which is still a higher pixel count than common 24″ displays with a 1920×1200 display resolution. If you can afford a 30″ display, chances are you can do better than a GeForce 8800 GT, anyway.
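
For a rough sense of how much memory the framebuffer alone consumes at the resolutions we’re testing, here’s a back-of-the-envelope sketch. It’s a simplification that assumes 32-bit color, a 32-bit Z/stencil buffer, and 4X multisampling, and it ignores driver overhead and compression; the point is that the framebuffer accounts for a fairly small slice of a card’s memory, with textures and other assets making up the rest.

```python
# Rough framebuffer footprint: a 32-bit front buffer plus a 4X-multisampled
# 32-bit back buffer and 32-bit Z/stencil buffer. Simplified; real drivers
# add overhead and use various forms of compression.
def framebuffer_mb(width, height, aa_samples=4, bytes_per_pixel=4):
    front = width * height * bytes_per_pixel
    back = width * height * bytes_per_pixel * aa_samples
    depth = width * height * bytes_per_pixel * aa_samples
    return (front + back + depth) / (1024 ** 2)

for width, height in [(1280, 1024), (1600, 1200), (1920, 1440)]:
    print(f"{width}x{height}: ~{framebuffer_mb(width, height):.0f}MB")
```

Even at 1920×1440 with 4X antialiasing, that works out to something on the order of 95MB, which is why texture and asset sizes, not the framebuffer itself, are what push a card past 256MB or 512MB.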

We’ve tested single-card GeForce 8800 GT configurations with 256MB, 512MB, and 1GB of memory. The 512MB and 1GB cards were also tested in SLI. Since some of the cards we used are “factory overclocked,” we used nTune to normalize core, shader, and memory clocks to the GeForce 8800 GT’s reference speeds of 600MHz, 1.5GHz, and an effective 1.8GHz, respectively.

Our testing methods

All tests were run three times, and their results were averaged.

Processor: Core 2 Duo E6750 2.66GHz
System bus: 1333MHz (333MHz quad-pumped)
Motherboard: XFX MB-N780-ISH9
BIOS revision: 2.053.B0
North bridge: nForce 780i SLI SPP
South bridge: nForce 780i SLI MCP
Chipset drivers: ForceWare 9.46
Memory size: 2GB (2 DIMMs)
Memory type: Corsair TWIN2X2048-8500C5 DDR2 SDRAM at 800MHz
CAS latency (CL): 4
RAS to CAS delay (tRCD): 4
RAS precharge (tRP): 4
Cycle time (tRAS): 12
Audio codec: Integrated nForce 780i SLI/ALC888S with Realtek 1.86 drivers
Graphics cards: XFX GeForce 8800 GT Alpha Dog 256MB; Gigabyte GV-NX88T512HP 512MB; Palit 8800GT Super+ 1GB
Graphics drivers: ForceWare 169.25
Hard drive: Western Digital Caviar RE2 400GB SATA
OS: Windows Vista Ultimate x86
OS updates: KB936710, KB938194, KB938979, KB940105
We used the following versions of our test applications:

The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at a 60Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
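
Since each test was run three times and the results averaged, here’s a minimal sketch of that bookkeeping. The run_timedemo function is a hypothetical placeholder, not a real tool; each game’s timedemo was actually driven through its own console.

```python
import statistics

def run_timedemo(game, resolution):
    """Hypothetical placeholder: launch the given game's timedemo at the
    given resolution and return the frames-per-second figure it reports."""
    raise NotImplementedError

def benchmark(game, resolution, runs=3):
    # Run the timedemo the requested number of times and report the mean FPS.
    results = [run_timedemo(game, resolution) for _ in range(runs)]
    return statistics.mean(results)
```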

Call of Duty 4: Modern Warfare

We tested Call of Duty 4 by recording a custom demo of a multiplayer gaming session and playing it back using the game’s timedemo capability. We cranked the in-game detail levels, including texture filtering, and set antialiasing to 4X.

Our Call of Duty 4 test teases out a tangible difference in performance depending on graphics memory size, but it’s not between configurations with 512MB and 1GB. Instead, it’s the 256MB configuration running behind the curve. At a relatively modest 1280×1024, the 256MB card lags behind by a little more than 10 frames per second. That gap only grows as we crank the resolution, with the 256MB card dropping well below a playable frame rate threshold at 1920×1440.

There’s essentially no difference in performance between our 512MB and 1GB cards here, even when they’re paired in SLI. As one might expect, overall performance drops as we scale the resolution up, but it does so at a slower rate than with the 256MB card.

Crysis

Crysis is easily the most demanding PC game on the market, yet we were still able to get reasonably playable frame rates with the game’s high-quality detail settings. We started at a relatively modest 1024×768 display resolution without antialiasing and scaled up from there. The scores below come from a custom timedemo recorded in the game’s first level.

Again, 256MB of memory proves to be a clear handicap for the GeForce 8800 GT. Even at 1024×768, you’re looking at a significant drop in performance. Things only get worse as the resolution goes up.

Between our single-card 512MB and 1GB configurations, we see no meaningful difference in performance up to 1600×1200. At that resolution, we’re under 25 frames per second, which is a little too choppy to be considered playable, at least for a first-person shooter.

Our SLI configurations provide a little more drama as the 1GB cards inexplicably deliver lower frame rates than their 512MB counterparts at 1280×1024. We re-ran the tests numerous times and after multiple reboots, but got the same results. Things return to normal at 1600×1200, where only one frame per second separates our 512MB and 1GB SLI configurations, with the latter holding the slight lead.

Enemy Territory: Quake Wars

We tested Quake Wars with its highest in-game detail level settings and both 4X antialiasing and 16X aniso. We used the game’s timedemo functionality with a custom-recorded demo. Unfortunately, our 256MB card repeatedly crashed when running the game at 1920×1440, so we don’t have results for it at that resolution.

Even 256MB of graphics memory looks adequate for Quake Wars, at least until you want to run at resolutions higher than 1600×1200. Our 512MB and 1GB cards are a little faster, but not by nearly the margins we saw in CoD 4 and Crysis.

It’s no surprise, then, that there’s essentially no difference in performance between our 512MB and 1GB cards. There’s a bit of a gap when we start pairing cards in SLI, but surprisingly, it’s the 512MB configuration that comes out on top by a few frames per second.

Half-Life 2: Episode 2

Episode 2 brings higher quality textures than previous versions of Half-Life 2, and we were able to run the game with all its detail level settings maxed in addition to 4X antialiasing and 16X aniso. We used a custom demo in conjunction with the Source engine’s timedemo functionality.

Yet again, we found no difference in performance between GeForce 8800 GT configurations with 512MB and 1GB of memory. Not even SLI could coax a meaningful margin between those memory sizes at the resolutions we used for testing. It is worth noting, however, that as with Quake Wars, this system can probably run Episode 2 with playable frame rates at resolutions higher than 1920×1440.

Our 256MB 8800 GT admirably hangs on at 1280×1024, nearly matching the performance offered by 512MB and 1GB cards. This victory is short-lived, though; the 256MB card stumbles at 1600×1200 and drops well below the playable threshold at 1920×1440.

Conclusions

If you’re looking at running a single GeForce 8800 GT, the card’s default 512MB memory size is easily the best choice. Doubling the onboard memory to 1GB may make for interesting marketing, but it doesn’t improve performance a lick with the games and resolutions we tested. What’s more, a single 8800 GT runs out of steam at 1920×1440—if not at lower resolutions—indicating that higher resolutions that might benefit from additional video memory wouldn’t yield playable frame rates with single-card configs.

The benefits of 1GB of video memory are also a bust for GeForce 8800 GT SLI configurations, at least at resolutions up to 1920×1440. However, we’ve seen the 8800 GT 512MB in SLI deliver playable frame rates at 2560×1600 in Quake Wars and Episode 2, so a 1GB SLI config may yield performance benefits there. We’ve also observed the performance of a 512MB SLI config tank spectacularly in Call of Duty 4 when moving to 2560×1600, suggesting that additional video memory could be of help there, as well.

As for the 256MB variant of the GeForce 8800 GT, well, there’s little hope. The 256MB card fared well enough in Quake Wars, but it couldn’t keep up at even 1280×1024 in CoD 4 and started to drop off at 1600×1200 in Episode 2. And Crysis? Forget about it.

Given the relatively minor price gap between 256MB and 512MB versions of the GeForce 8800 GT, we see little reason to settle for the 256MB card. You really do need 512MB of memory to make the most of today’s games, especially if you want to crank up the eye candy. That said, today’s games aren’t so demanding that they’ll make good use of 1GB of video memory, at least not with the GeForce 8800 GT. Not even Crysis saw a meaningful performance increase with the 1GB cards, suggesting that tomorrow’s games may do just fine with 512MB, as well.

Comments closed
    • idgarad
    • 12 years ago

    One factor that you may not realize is that regardless of how much data you have, what matters is how fast you can get to it AND how much you can grab at a time.

    Here is what I call a simplified example (not accurate, just for illustration purposes)

    You have 512 blocks vs. 1024 blocks
    You can grab up to 1024 blocks at a time IF they are all the same color
    If you want to grab blocks that are different colors you must skip a turn to switch colors
    You can grab as many blocks as you like, up to 1024, so long as they are the same color.

    Now, given the above, can people see how having more memory may not give better performance if there are only, say, 340 blocks? And even if there are 1024 blocks and there are 12 colors, you are going to burn at least 12 turns to get all of them.

    Memory bandwidth is part of the equation (number of blocks at a time)
    Total memory (obviously)
    number of different contexts that the memory is used for (there is a difference between say four 1MB textures and one 4 MB texture)
    The speed of the processor (how many turns we get a second)

    If we burn 12 cycles switching between contexts at 1 cycle per second you can see we piss away 12 seconds.

    If we burn 12 but we run twice as fast we only burn 6 seconds.

    More memory can also mean more overhead in addressing it depending on how it is allocated.

    The tests are marginalized as we do not have a breakdown from the timedemo telling us how many textures were used, what sizes those textures were, how many shaders were active, etc…

    I’d love to see:
    A histogram of texture sizes
    average number of pixel shaders in use
    average total volume of ram used
    total number of memory Reads\Write

    as part of a timedemo summary…. (hint hint ID, Valve, Epic, etc..)
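
    Here’s a quick toy script of the turn-counting model described in this comment. It’s purely illustrative, as the commenter says; the numbers and rules come from the analogy, not from any real memory controller.

    ```python
    # Toy version of the blocks-and-colors analogy: capacity is how many blocks
    # can be grabbed per turn (if they share a color); switching colors burns a
    # turn. Illustrative only, not a model of real graphics memory.
    def turns_needed(total_blocks, colors, capacity):
        per_color = total_blocks // colors           # blocks of each color
        turns = 0
        for _ in range(colors):
            turns += -(-per_color // capacity)       # grabs needed (ceiling division)
            turns += 1                               # turn burned switching colors
        return turns - 1                             # no switch after the last color

    # With only 340 blocks, or with 12 color switches dominating the count,
    # a 1024-block capacity finishes in exactly as many turns as a 512-block one.
    for capacity in (512, 1024):
        print(capacity, turns_needed(340, 12, capacity), turns_needed(1024, 12, capacity))
    ```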

    • Bensam123
    • 12 years ago

    Could it be the drivers and the GPU itself aren’t tuned for this much memory and the manufacturers haven’t changed anything else besides the chips?

    • passionne
    • 12 years ago

    Very disappointing article since it is incomplete (lack 2560×1600 resolution and memory-hungry games).

    In 2560×1600, with my 2900 Pro 1 GB, the following games use more than 512 MB :

    – Oblivion : QTP3 + 4096×4096 normal maps, AA 0 X, AF 16 X, everything maxed : VRAM=1084 MB

    – Ghost Recon Advanced Warfighter 2, AA 0 X, AF 16 X, everything maxed : VRAM=650 MB

    – Stalker, AA in-game, AF 16 X, everything maxed : VRAM=700 MB

    – Lock On Flaming Cliffs, AA 0 X, AF 16 X, everything maxed : VRAM=748 MB

    – Call of Juarez, AA 0 X, AF 16 X, everything maxed : 560 MB

    And in all these games, the performance is good : between 30 and 40 fps.

      • indeego
      • 12 years ago

      Queue somebody to (pointlessly) say /[

    • swaaye
    • 12 years ago

    If you mod out Oblivion with, say, Qarl’s Texture Pack 3, it’ll use ~700MB texture RAM according to the in-game debug data. You can crank that higher by adding 4096×4096 terrain normal maps and distant landscape LOD textures, too. I’m guessing that having a card with 768MB+ RAM would help there. But really, I think that 512MB is this year’s sweet spot. Especially if the new games are all console ports with barely-changed assets.
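
    For a sense of scale, the per-texture arithmetic behind numbers like these is straightforward. This is a rough sketch that assumes uncompressed 32-bit texels versus an 8:1 DXT1 compression ratio, with mipmaps adding roughly a third; actual usage depends on the formats a game ships and how it streams them.

    ```python
    # Rough size of a single 4096x4096 texture, with and without DXT1
    # compression. Mipmaps add roughly a third. Assumed ratios; illustrative only.
    def texture_mb(width, height, bytes_per_texel=4, mipmaps=True):
        size = width * height * bytes_per_texel
        if mipmaps:
            size *= 4 / 3
        return size / (1024 ** 2)

    print(f"uncompressed: ~{texture_mb(4096, 4096):.0f}MB")      # ~85MB
    print(f"DXT1 (8:1):   ~{texture_mb(4096, 4096) / 8:.0f}MB")  # ~11MB
    ```

    A few dozen uncompressed 4096×4096 maps is all it takes to blow past 512MB, which is why heavily modded Oblivion is one of the few places the extra memory plausibly matters.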

    • SGT Lindy
    • 12 years ago

    How much video RAM do I need? Whatever the X3100 can use of my system RAM on my MacBook.

    This kind of crazy expensive video card crap is one of many reasons I quit gaming on a PC. I guess I am in good company looking at the sales numbers for 2007.

    • FireGryphon
    • 12 years ago

    I don’t see why the absence of 2560×1600 testing is such a big deal. This isn’t supposed to be a test for super-high-end graphics cards, and I therefore think it’s perfectly reasonable to test in a more moderate way. Heck, if such a respected hardware reviewer doesn’t even have a 30″ monitor, that says something about how many laymen must have one. The only problem is that TR is usually perfectly thorough in its methods, whereas this time it was not.

    • matnath1
    • 12 years ago

    Would minimum framerates be affected at all by memory size?

    When the 8600s came out, 512MB was overkill for them due to a lack of horsepower, but would 512MB of video RAM help 8600 GT “class” cards in any of today’s games, or are we still just talking slide shows at resolutions that would make a difference anyway?

      • Meadows
      • 12 years ago

      No, going from 256 to 512 matters on some entry-level and *[

    • FireGryphon
    • 12 years ago

    I want that t-shirt.

      • indeego
      • 12 years ago

      Again, a low resolution was used. The whole point of a 1 gig card is for the largest resolutions a monitor can support.

    • qbert_444
    • 12 years ago

    I have read somewhere from one of their developers that UT3 will run better with lots of VRAM. I would have liked it if they had tested that game.

    Otherwise, good review.

    • Vaughn
    • 12 years ago

    I thought the article was good; quite surprised at the whiners. He didn’t have a 30″ monitor, get over it! It doesn’t make the article invalid. 1GB on a GT is a waste, plain and simple. Nothing else needs to be said.

    P.S. Love the T-shirt.

    • SecretMaster
    • 12 years ago

    The only useful thing I can think of for 1GB VRAM are all those re-texture Oblivion mods. But even then I’d think 512 would be fine.

    • El_MUERkO
    • 12 years ago

    I’d be interested to see MMO performance numbers for those cards.

    A busy city/castle/space-port in an MMO may well test graphics memory better than a single-player game.

      • Krogoth
      • 12 years ago

      I think network connectivity matters far more in that case.

        • Meadows
        • 12 years ago

        And how wrong you are.
        However, this depends on the game’s engine and methods.

        I play World of Warcraft, and Blizzard themselves have revealed that their game uses the system RAM the most. Texture data are ever so often streamed, and whatnot. It’s easy to see how system RAM will be THE bottleneck, even up against videomemory.
        I don’t know about competing games, but they can’t really stress secondary parts (such as the videocard) too much because that has the potential to decrease the player base. This is why WoW caters for the lowest common denominator in terms of system speed, for example.

          • Stranger
          • 12 years ago

          you sure? I’m pretty sure that 95% of the lag that you get when you gather a bunch of people together is server side.

            • Meadows
            • 12 years ago

            Yeah, right.
            Your framerate starts dropping because the *[

    • PrincipalSkinner
    • 12 years ago

    Spoiler on a Yugo, ROFLMAO. I live in the country that makes them, so ROFLMAO X2.

      • Vrock
      • 12 years ago

      They still make Yugos? Wow.

    • just brew it!
    • 12 years ago

    Great article. I think I’d sum it up thusly: 256MB is dead, don’t even bother. 1GB is overkill unless you want to run insane resolutions, or tend to hang on to your video card for a long time and want a bit of future-proofing.

    Off topic: Does anyone else find it silly that PriceGrabber lists both TigerDirect and “all-new CompUSA.com”? The CompUSA name is now owned by TigerDirect, and it seems like TigerDirect.com and CompUSA.com prices are always identical. It wouldn’t surprise me if the product even ships from the same warehouses. Seems rather redundant to me. (And yes, I understand TR probably doesn’t have control over the vendors PriceGrabber displays, but I thought I’d point it out anyway for those who didn’t get the memo about TigerDirect acquiring the rights to the CompUSA name.)

    • slackshoe
    • 12 years ago

    Whilst I agree that 1GB RAM is all but useless on mainstream graphics cards, the testing methods in this article were also a complete waste of time. The reason why no difference was found between 512MB and 1GB is because you didn’t use high enough resolutions! You should have been going up to 2560×1600, and if that wasn’t possible then cranking the antialiasing up to 8x or 16x with transparency supersampling. THEN you would have seen a difference.

    • moose103
    • 12 years ago

    It’s obvious.

    Radeon HD is better. 😛

    • Vrock
    • 12 years ago

    Great article! This is definitely the type of thing I like to see. It’s made me realize that my 7800GT is handicapped in more than one way in modern games. Guess it really is time for a replacement *sigh*.

      • Meadows
      • 12 years ago

      Why don’t you get a 9600 GT?

        • Vrock
        • 12 years ago

        I’ve thought about it, believe me.

    • jackaroon
    • 12 years ago

    Just a nitpick, but the graphs are 513 px wide, and have img tags squeezing them to 512. The text went a bit fuzzy in my browser.

    • Fighterpilot
    • 12 years ago

    One GB of vram seems to work well for the ATI 3870 X2 🙂

      • Meadows
      • 12 years ago

      Sure, since it effectively has 512 megs.

    • crazybus
    • 12 years ago

    One thing you have to keep in mind is that when frame rates start dropping due to a lack of video RAM, it’s not a nice experience. The average frame rate may look decent enough to be playable, but it is usually accompanied by annoying amounts of hitching and stuttering as data gets paged in and out of the card’s memory.

    The fact of the matter is that current games are designed with 512MB cards in mind, making more than that generally a waste. As the amount of onboard memory rises, the impact of resolution on memory usage becomes less of an issue, since the framebuffer takes up a lower percentage of total RAM.

    • Flying Fox
    • 12 years ago

    Where can you get that t-shirt?

      • UberGerbil
      • 12 years ago

      If you qualified, you would already have it. 😉

        • Mourmain
        • 12 years ago

        Good one.

        • Flying Fox
        • 12 years ago

        So the t-shirts come with those 1GB cards for the e-peen? OMG I need to save up to get those über cards plus the t-shirt! 😀

    • Shining Arcanine
    • 12 years ago

    I think graphics cards need gigabytes of memory and double-precision floating point. That will make them perfect for scientific computation.

    Damage, Nvidia’s graphics cards support CUDA, which means that you can run C programs on them, with certain limitations. Have you thought of exploring how greater amounts of memory affect the performance of custom C programs, specifically ones that are particularly resource intensive?

      • Raster
      • 12 years ago

      In response to Shining Arcanine, I am about to buy one of the 1 Gig boards even though I am reading all this. I will be using the 1 Gig for data storage along with an experimental effort to program the Stream Processors, etc. The 1 Gig sure is great when you think of non-gaming uses for this card.
      So one further observation is that the extra RAM is justified for these other reasons, but I also will play around with some gaming at times (not often, it’s too addicting), and so then I’m still going to enjoy the fact that I’ll never run out of on-board texture storage while playing these various games. I just like the idea of having a cushion.

      If I didn’t have the special use for the 1 Gig, and if the price difference were relatively low, I would normally go ahead and buy the card with more RAM just to get this “future proof” side benefit for texture storage.

      I remember Doom 3’s settings that let me select the finest resolution texture maps, and in the future, games could have this kind of option. Yes it’s true that games will be written to expect 512 MB, but some may allow you to crank up the quality even further. I’d like to see this. Heck some of the time when I play a game I do this just to experience the graphics.
      Ha! I work on graphics chips and I love to see the effects.

    • odizzido
    • 12 years ago

    so much AA 🙁

      • nerdrage
      • 12 years ago

      Agreed — I’m curious if using 2X AA (or turning it off altogether) would significantly improve the 256MB performance at 1280×1024. If so, the 256MB card might still be a good choice for 19″ monitor users who don’t use AA.

        • odizzido
        • 12 years ago

        thats me right there

    • kvndoom
    • 12 years ago

    It’s the same-ol same-ol… by the time video cards really do need a gig of RAM for mainstream, these video cards won’t be powerful enough for the games anyway. Not quite as bad as the 256MB Geforce FX5200’s (cards that were worthless no matter how much memory you put on them), but still in that vein. The t-shirt says it all, nothing but e-peen points and easy sales to fools who can’t understand that more simply does not always equal better.

    • TO11MTM
    • 12 years ago

    One thought… I find it a little silly to have the system RAM equalling the graphics RAM in the test machine. However, from what I understand, it may not do any good because two 1GB graphics cards would have their memory shadowed in that range anyway.

    … Come to think of it, maybe that has something to do with the SLI performance issues of two 1GB video cards. Those two alone would put the “shadow” right at 2GB, to say nothing of your other components.

    Unless Vista magically fixed this in 32-bit versions, but I don’t think it did.

      • Meadows
      • 12 years ago

      Tested videomemory amounts fall into the “256”, “512” and “1024” MiB brackets. The system memory was at 2048 MiB.

      It is well within the addressing limit.

        • titan
        • 12 years ago

        Actually, the amounts are 256MB, 512MB, 1024MB, 1024MB and 2048MB. Regardless, the cards are still within the 2GB limit. I don’t think bumping up system RAM would have made a difference.

          • Meadows
          • 12 years ago

          I don’t recall the last time when SLI meant adding up VRAM.

            • cobalt
            • 12 years ago

            You’re right — it doesn’t add up to more effective RAM for the GPU to utilize in general. However, I think it can cut out of system RAM due to driver shadowing or even simple addressing limits.

            • TO11MTM
            • 12 years ago

            Correct, it can, although I’m not 100% sure how that all works in Vista.

            And yes, although some point out that 2×1024=2048, and thus the two video cards alone wouldn’t push into the actual 2048MB of RAM, those coupled with almost every other piece of hardware will. As for whether this causes the performance numbers we’re seeing here or not, I cannot state for certain because I lack the hardware to perform such testing.

            Geoff:

            Send the two 512MB and two 1GB Cards my way. When I’m done testing I’ll send the 1GB cards and one of the 512MB back, and keep the other as a payment for my services. 😉
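
    For readers following the address-space arithmetic in this sub-thread, here’s a rough sketch of the worst-case math being described. It assumes a 32-bit OS with a flat 4GB address space and a fixed per-card aperture, and the “other devices” figure is a placeholder; actual reservations vary by driver, BIOS, and chipset, so treat the numbers as illustrative.

    ```python
    # Back-of-the-envelope 32-bit address-space math. Everything below 4GB must
    # hold system RAM plus whatever the video cards and other devices reserve;
    # the per-card aperture and other-device figures are assumptions.
    ADDRESS_SPACE_MB = 4096   # 32-bit limit
    SYSTEM_RAM_MB = 2048      # as tested

    def usable_system_ram(per_card_aperture_mb, num_cards, other_mmio_mb=512):
        reserved = per_card_aperture_mb * num_cards + other_mmio_mb
        return min(SYSTEM_RAM_MB, ADDRESS_SPACE_MB - reserved)

    # Worst case: two 1GB cards each mapping their full memory would eat into
    # the 2GB of system RAM. With a typical 256MB aperture per card, it wouldn't.
    print(usable_system_ram(1024, 2))  # 1536
    print(usable_system_ram(256, 2))   # 2048
    ```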

    • lethal
    • 12 years ago

    While the Source engine is fine and has some good games under its belt, it’s also a relatively pansy engine compared to others out there, notably Oblivion’s, which went MIA from the reviews. With every setting manually dialed up (not with the “ultra high” preset), it’s still a very hungry game, even if the shadows can look weird at times :P.

    Edit: Looking back at the reviews, I don’t think those two games were compared head to head, so I’m probably off, since every HL expansion has some improvements over the previous version.

    • Dposcorp
    • 12 years ago

    Nice and quick review.
    It has been said more than once, so I hope people are listening.
    No reason to pay for the extra RAM if the GPU doesn’t have enough horsepower to use it.

    Interesting that a SLI setup does make the ram actually useful.

    P.S. Nice shirt; one only a geek could love.

    • cygnus1
    • 12 years ago

    Jeebus, in an examination of video card memory sizes, you didn’t benchmark 2560×1600.

    You guys tested the 9600 at that resolution, yet it seems a far more obvious test for this article.
