64MB vs. 128MB graphics memory


Is 64MB enough?
— 12:00 AM on September 23, 2002

I REMEMBER JUST HOW badass I felt when I picked up a Voodoo 2 card with 12MB of memory instead of only 8MB. Even 8MB seemed opulent to me at the time, considering I was pairing the card with my Matrox Millennium II. Today, however, it's common to see mainstream graphics cards with 64 and 128MB of memory—even 32MB of memory has become passé.

But do we actually need 128MB of graphics memory, or is this one of those cases where the marketing folks are trying to sell consumers on anything with a higher number? My curiosity got the better of me, and I just had to find out. I grabbed 64 and 128MB cards from both ATI and NVIDIA and put them through the wringer to see just where extra graphics memory can benefit performance.

Do you really need 128MB of graphics memory, or is it all just a lot of hype and hot air? Let's find out.

The cards
For a comparison of graphics memory size to work, we need cards that feature the same GPU, running at the same speed, with the only difference being the amount of video RAM on board. Fortunately, with a little overclocking and underclocking, I was able to come up with two pairs of cards to compare. We're limited to comparing cards with 64 and 128MB of graphics memory, because that's really all that's available with common graphics cores.

In the NVIDIA corner, we have a couple of GeForce4 Ti 4200s running at 250/250MHz for their core and memory clocks. Running a GeForce4 Ti 4200 128MB with a 250MHz memory bus requires a little overclocking, since the GeForce4 Ti 4200 128MB's memory bus is supposed to run at 222MHz. The 64MB version, meanwhile, runs stock with 250MHz memory. Overclocking is necessary here to isolate memory size as a variable for the purposes of this article. This isn't a review of 64 vs 128MB cards, but more of a generalized comparison of different graphics memory sizes. You can check out our initial GeForce4 Ti 4200 review or our subsequent round-up of different GeForce4 Ti 4200 cards for benchmarks and analysis of cards running at stock speeds.

ATI's entries in the mix are a couple of Radeon 8500LE cards that run at 250/250MHz by default.

With both sets of cards running at 250/250MHz, it's worth taking a look at some theoretical fill rates and memory bandwidth. The following numbers aren't affected by graphics memory size, but they will help us explain some of the results of our testing.

                         Core clock  Pixel      Peak fill rate  Texture units  Peak fill rate  Memory clock  Memory bus    Peak memory
                         (MHz)       pipelines  (Mpixels/s)     per pipeline   (Mtexels/s)     (MHz)         width (bits)  bandwidth (GB/s)
  GeForce4 Ti 4200 64MB  250         4          1000            2              2000            500           128           8.0
  GeForce4 Ti 4200 128MB 250         4          1000            2              2000            500           128           8.0
  Radeon 8500LE 64MB     250         4          1000            2              2000            500           128           8.0
  Radeon 8500LE 128MB    250         4          1000            2              2000            500           128           8.0

As you can see, all the cards we're testing have identical fill rates and available memory bandwidth. However, don't expect the performance of the NVIDIA and ATI cards to be equal. There's a lot more to actual performance than theoretical fill rates and memory bandwidth, and the subtleties of each chip architecture will come into play heavily when we start generating actual frame rates.
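The figures in the table fall out of some simple arithmetic. Here's a quick sketch (in Python, purely for illustration) of how they're derived; note that the 500MHz memory clock is the effective DDR rate of the 250MHz bus:

```python
# Theoretical throughput math behind the table above. The inputs match
# the GeForce4 Ti 4200 and Radeon 8500LE as configured for this article.

def peak_fill_rates(core_mhz, pipelines, texture_units_per_pipe):
    """Return (Mpixels/s, Mtexels/s): one pixel per pipeline per clock."""
    pixel_rate = core_mhz * pipelines
    texel_rate = pixel_rate * texture_units_per_pipe
    return pixel_rate, texel_rate

def peak_memory_bandwidth_gbs(mem_clock_mhz, bus_width_bits, ddr=True):
    """Return GB/s: effective clock times bus width in bytes."""
    effective_mhz = mem_clock_mhz * (2 if ddr else 1)
    return effective_mhz * (bus_width_bits / 8) / 1000.0

print(peak_fill_rates(250, 4, 2))            # (1000, 2000)
print(peak_memory_bandwidth_gbs(250, 128))   # 8.0
```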

Keep in mind that this article intends to compare graphics memory size, not the merits of different GPUs. We've done enough articles here at TR that explore the Radeon 8500 and GeForce4 Ti 4200 in great detail, and graphics memory size deserves its own spotlight.

Competing for resources
The memory on a graphics card stores a whole lot more than just textures. In fact, there are a number of different players all vying for a piece of the action. Here are a few of them:

  • Frame buffer - The frame buffer holds a bitmap of what you eventually see on the screen, which makes the amount of memory it takes up dependent on your screen resolution and color depth. The formula for determining the size of the frame buffer is:
    Frame buffer size = X-size * Y-size * color depth
    X and Y sizes are measured in pixels, and the color depth refers to the number of bytes per pixel required to store color information. So, for a 1600x1200 screen resolution with 32-bit color, the total memory requirement for the frame buffer works out to 7.68MB.

  • Back buffer - While the frame buffer is feeding the monitor, the contents of the next frame are being assembled in the back buffer. Since the back buffer holds what is to become the new frame buffer, we use the same formula to determine how much memory it requires.

  • Z-buffer - The Z-buffer stores depth information about the pixels in a scene, and you can use the same formula as our frame buffer formula to determine its size; just substitute the Z-buffer depth for the color depth. Most manufacturers keep the Z-buffer depth in line with the color depth, which means for the purposes of this review we'll be considering a 32-bit Z-buffer.

    Unlike the frame and back buffers, the Z-buffer can be compressed, so our formula isn't going to give us the exact memory footprint for all cards and conditions.

  • Vertex and pixel shader programs - DirectX 8.1 hardware supports vertex and pixel shaders, and those programs end up being stored in a graphics card's memory. DirectX 8.1-class shader programs aren't really complex enough to take up a lot of room, so we don't have to worry about them too much.

  • Geometry data - 3D scenes are made up of polygons, and the geometry data for a scene takes up graphics memory. The more complex a scene, the more memory is needed to describe its geometry.

  • Textures - Finally, we have textures, whose memory footprint depends on the size, color depth, detail level, and total number of actual textures in a scene.
As you can see, there are plenty of competing elements ready to steal large portions of graphics memory, while others could nickel and dime you to death. Death, in this case, is having to pull textures across the AGP bus' grossly limited bandwidth. AGP 4X, whose bandwidth tops out at 1GB/sec, is far slower than the 8GB/sec of memory bandwidth that the graphics cards have available internally. New graphics cards and motherboard chipsets supporting AGP 8X are now showing up on store shelves, but even AGP 8X is limited to 2GB/s of memory bandwidth. Bottleneck, anyone?
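To put some numbers on the buffer formulas above, here's a small sketch (in Python, for illustration) that tallies the front, back, and Z-buffers at a couple of common resolutions, assuming 32-bit color, an uncompressed 32-bit Z-buffer, and double buffering:

```python
# Buffer-footprint math from the formulas above: each buffer is
# X-size * Y-size * bytes per pixel. The article quotes decimal
# megabytes (7.68MB for 1600x1200 at 32 bits), so we divide by 10^6.

MB = 1_000_000

def buffer_mb(width, height, bytes_per_pixel=4):
    """Size of one buffer in (decimal) megabytes."""
    return width * height * bytes_per_pixel / MB

def display_buffers_mb(width, height):
    """Front buffer + back buffer + Z-buffer, all at 4 bytes/pixel."""
    return 3 * buffer_mb(width, height)

for w, h in [(1024, 768), (1600, 1200)]:
    print(f"{w}x{h}: frame buffer {buffer_mb(w, h):.2f}MB, "
          f"front+back+Z {display_buffers_mb(w, h):.2f}MB")
```

At 1600x1200 with 32-bit color, the three buffers alone claim about 23MB before a single texture is loaded, which is a meaningful slice of a 64MB card.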