How much video memory is enough?

4GB versus the world
— 10:57 AM on August 12, 2015

One question we haven't answered decisively in our recent series of graphics card reviews is: how much video memory is enough? More pressingly, given the 4GB limit for Radeon R9 Fury cards: how much is too little? Will a 4GB video card run into performance problems in current games, and if so, when?

In some ways, this question is harder to answer than one might expect. Some enthusiasts have taken to using monitoring tools in order to see how much video memory is in use while gaming, and that would seem to be a sensible route to understanding these matters. Trouble is, most of the available tools track video memory allocation at the operating system level, and that's not necessarily a good indicator of what's going on beneath the covers. In reality, the GPU driver decides how video memory is used in Direct3D games.
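For the curious, here's roughly the kind of number those tools report. This minimal sketch (my own illustration, using Windows 10's DXGI 1.4 interface rather than any particular monitoring utility) asks the OS video memory manager for each adapter's current usage and budget. It can only see OS-level allocations; the driver's actual placement decisions stay hidden, which is exactly the problem.

```cpp
// Minimal sketch: query OS-level VRAM usage via DXGI 1.4 (Windows 10).
// This is the sort of figure most monitoring tools report -- allocations
// tracked by the OS video memory manager, not the driver's internal placement.
#include <cstdio>
#include <dxgi1_4.h>
#include <wrl/client.h>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        ComPtr<IDXGIAdapter3> adapter3;
        if (FAILED(adapter.As(&adapter3)))
            continue;

        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);

        DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
        if (SUCCEEDED(adapter3->QueryVideoMemoryInfo(
                0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info))) {
            printf("%ls: %llu MB in use of a %llu MB budget\n",
                   desc.Description,
                   info.CurrentUsage / (1024ull * 1024ull),
                   info.Budget / (1024ull * 1024ull));
        }
    }
    return 0;
}
```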

We might be able to approach this problem better by using vendor-specific development tools from AMD and Nvidia—and we may yet do so—but we can always fall back on the simplest thing: testing the hardware to see how it performs. We now have a number of video cards based on similar GPU architectures with different amounts of VRAM, from 4GB through 12GB. Why not run a quick test in order to get a sense of how different GPU memory configurations hold up under pressure?

My weapon of choice for this mission was a single game, Shadow of Mordor, which I chose for several reasons. For one, it's pretty widely regarded as one of the most VRAM-hungry games around right now. I installed the free HD assets pack available for it and cranked up all of the image quality settings in order to consume as much video memory as possible. Mordor has a built-in benchmark that allowed me to test at multiple resolutions in repeatable fashion with ease. The results won't be as fine-grained as those from our frame-time-based game tests, but a big drop in the FPS average should still serve as a clear indicator of a memory capacity problem.

Crucially, Mordor also has a nifty feature that will let us push these video cards to their breaking points. The game's settings allow one to choose a much higher virtual resolution than the native resolution of the attached display. The game renders everything at this higher virtual resolution and then downsamples the output to the display's native res, much like Nvidia's DSR and AMD's VSR features. Downsampling is basically just a form of full-scene anti-aliasing, and it can produce some dramatic improvements in image quality.
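If you want a mental model of why downsampling acts like anti-aliasing, here's a toy box-filter downsample of my own devising (real DSR and VSR use fancier filters): each display pixel becomes the average of a block of rendered samples, so aliased edges get blended smooth.

```cpp
// Toy illustration of downsampling-as-antialiasing: render at
// (factor * width) x (factor * height), then average each factor x factor
// block down to one display pixel. Real DSR/VSR filtering is more
// sophisticated; this just shows the idea.
#include <cstdint>
#include <vector>

struct Pixel { uint8_t r, g, b; };

std::vector<Pixel> box_downsample(const std::vector<Pixel>& src,
                                  int srcW, int srcH, int factor)
{
    const int dstW = srcW / factor, dstH = srcH / factor;
    std::vector<Pixel> dst(dstW * dstH);

    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            unsigned r = 0, g = 0, b = 0;
            // Average the factor x factor block of high-res samples.
            for (int sy = 0; sy < factor; ++sy) {
                for (int sx = 0; sx < factor; ++sx) {
                    const Pixel& p = src[(y * factor + sy) * srcW + (x * factor + sx)];
                    r += p.r; g += p.g; b += p.b;
                }
            }
            const unsigned n = factor * factor;
            dst[y * dstW + x] = { uint8_t(r / n), uint8_t(g / n), uint8_t(b / n) };
        }
    }
    return dst;
}
```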

Using Mordor's settings menus, I was able to test at 2560x1440, 3840x2160 (aka 4K) and the higher virtual resolutions of 5760x3240 and 7680x4320. That last one is a staggering 33 megapixels, well beyond the pixel count of even a triple-4K monitor setup. I figured pushing that far should be enough to tease out any memory capacity limitations.
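For reference, the pixel math works out like so (a quick back-of-the-envelope check, not anything pulled from the game):

```cpp
// Quick pixel-count check for the tested resolutions.
#include <cstdio>

int main()
{
    const int res[][2] = { {2560, 1440}, {3840, 2160}, {5760, 3240}, {7680, 4320} };
    for (auto& r : res)
        printf("%dx%d = %.1f megapixels\n", r[0], r[1], r[0] * (double)r[1] / 1e6);

    // Triple 4K for comparison: three 3840x2160 panels side by side.
    printf("Triple 4K = %.1f megapixels\n", 3 * 3840 * 2160 / 1e6);
    return 0;
}
```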

My first two victims were the Radeon R9 290X 4GB and the Radeon R9 390X 8GB. Both cards are based on the same AMD Hawaii GPU, and they have similar clock frequencies. The 390X has a 20MHz faster base clock and a tweaked PowerTune algorithm that could give it somewhat higher clock speeds in regular operation. It also has a somewhat higher memory clock. These differences are relatively modest in the grand scheme, and they shouldn't be a problem for our purposes. What we're looking for is relative performance scaling. Where does the 4GB card's performance fail to scale up as well as the 8GB card's?
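In other words, the idea is to normalize each card's FPS to its own result at the lowest resolution and watch for the point where the curves part ways. Here's a quick sketch of that comparison; the FPS figures in it are placeholders, not measured results.

```cpp
// Sketch of the "relative scaling" comparison: normalize each card's FPS
// to its own result at the lowest resolution. Where the 4GB card's curve
// drops away from the 8GB card's, something other than raw GPU speed --
// most likely memory capacity -- is getting in the way.
// The FPS numbers below are placeholders, not measured results.
#include <cstdio>
#include <string>
#include <vector>

struct CardResult {
    std::string name;
    std::vector<double> fps;   // one entry per tested resolution
};

int main()
{
    const std::vector<std::string> resolutions =
        { "2560x1440", "3840x2160", "5760x3240", "7680x4320" };

    const std::vector<CardResult> cards = {
        { "4GB card (placeholder)", { 90.0, 45.0, 20.0, 8.0 } },
        { "8GB card (placeholder)", { 91.0, 46.0, 24.0, 16.0 } },
    };

    for (const auto& card : cards) {
        printf("%s\n", card.name.c_str());
        for (size_t i = 0; i < card.fps.size(); ++i)
            printf("  %-10s %5.1f FPS  (%.0f%% of its own 2560x1440 result)\n",
                   resolutions[i].c_str(), card.fps[i],
                   100.0 * card.fps[i] / card.fps[0]);
    }
    return 0;
}
```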

The 290X's 4GB of memory doesn't put it at a relative disadvantage at 4K, but the cracks start to show at 5760x3240, where the gap between the two cards grows to four FPS. At 7680x4320, the 4GB card is clearly struggling, and the deficit widens to eight FPS. So we can see the impact of the 390X's added VRAM if we push hard enough.

From a purely practical standpoint, these performance differences don't really matter much. With averages of 16 and 20 FPS, respectively, neither the 290X nor the 390X produces playable frame rates at 5760x3240, and the highest resolution is a slideshow on both cards.

What about the Radeon R9 Fury X, with its faster Fiji GPU paired with only 4GB of HBM-type VRAM?

The Fury X handles 3840x2160 without issue, but its performance drops off enough at 5760x3240 that it's slightly slower than the 390X. The Fury X falls further behind the 390X at 33 megapixels, despite the fact that the Fury X has substantially more memory bandwidth thanks to HBM. Almost surely, the Fury X is bumping up against a memory capacity limitation at the two higher resolutions.

What about the GeForce side of things, you ask? Here it all is in one graph, from the GTX 970 to the Titan X 12GB.

Hmph. There's essentially no difference between the performance of the GTX 980 Ti 6GB and the Titan X 12GB, even at the very highest resolution we can test. Looks like 6GB is sufficient for this work. Heck, look closer, and the GTX 980's performance scales very similarly even though it only has 4GB of VRAM.

The only GeForce card whose performance doesn't follow the trend is the GTX 970, whose memory capacity and bandwidth are both, well, kind of weird due to a 3.5GB/0.5GB split in which the 0.5GB partition is much slower to access. We covered the details of this peculiar setup here. The GTX 970 appears to suffer a larger-than-expected performance drop-off at 5760x3240, likely due to its funky VRAM setup.
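Here's a toy model of why spilling past 3.5GB can hurt. This is my own simplification, not Nvidia's documented behavior: it assumes accesses are spread evenly across the working set, and the per-segment bandwidth figures are approximate. In practice, the driver works to keep hot data in the fast partition, so the real-world penalty is usually smaller.

```cpp
// Toy model (my own simplification, not Nvidia's documented behavior) of why
// spilling past 3.5GB hurts on the GTX 970: accesses that land in the slow
// 0.5GB segment see roughly 1/7th the bandwidth of the fast segment, and the
// weighted cost is dominated by the slow part. The driver tries hard to keep
// hot data in the fast partition, so the real-world impact is usually smaller.
#include <cstdio>

int main()
{
    const double fast_gb = 3.5;
    const double fast_bw = 196.0, slow_bw = 28.0;   // approx. GB/s per segment

    for (double working_set = 3.0; working_set <= 4.0; working_set += 0.25) {
        const double in_slow = working_set > fast_gb ? working_set - fast_gb : 0.0;
        const double f_slow = in_slow / working_set;   // fraction of data in slow segment
        const double f_fast = 1.0 - f_slow;
        // Harmonic weighting: time per byte is the weighted sum of each segment's cost.
        const double eff_bw = 1.0 / (f_fast / fast_bw + f_slow / slow_bw);
        printf("working set %.2f GB -> effective bandwidth ~%.0f GB/s\n",
               working_set, eff_bw);
    }
    return 0;
}
```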

Now that we've seen the results from both camps, have a look at this match-up between the R9 Fury X and a couple of GeForces.

For whatever reason, a 4GB memory capacity limit appears to create more problems for the Fury X than it does for the GTX 980. As a result, the GTX 980 matches the performance of the much pricier Fury X at 5760x3240 and outdoes it at 33 megapixels.

We've seen this kind of thing before—in the only results from our Radeon R9 Fury review that showed a definitive difference between the 4GB and 8GB Radeons. The Radeons with 4GB had some frame time hiccups in Far Cry 4 at 4K that the 8GB models avoided:

As you can see, the 8GB Radeons avoid these frame-time spikes above 50 ms. So do all of the GeForces. Even the GeForce GTX 780 Ti with 3GB manages to sidestep this problem.
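That kind of observation falls straight out of a frame-time log. Here's a minimal sketch of the accounting, in the spirit of our "time spent beyond 50 ms" graphs; the frame times in it are placeholders, not captured data.

```cpp
// Sketch: flag frame-time spikes with a "time spent beyond 50 ms" style of
// accounting. Any frame that takes longer than 50 ms (a momentary dip below
// 20 FPS) counts as a visible hitch, and only the excess time is summed.
// The frame times below are placeholders, not captured data.
#include <cstdio>
#include <vector>

int main()
{
    const double threshold_ms = 50.0;
    const std::vector<double> frame_times_ms =
        { 16.5, 17.1, 16.8, 72.3, 16.9, 18.0, 55.4, 16.7 };   // placeholder data

    int spikes = 0;
    double time_beyond = 0.0;
    for (double ft : frame_times_ms) {
        if (ft > threshold_ms) {
            ++spikes;
            time_beyond += ft - threshold_ms;   // only the excess counts
        }
    }
    printf("%d frames over %.0f ms, %.1f ms spent beyond the threshold\n",
           spikes, threshold_ms, time_beyond);
    return 0;
}
```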

Why do the 4GB Radeons suffer when GeForce cards with 4GB don't? The answer probably comes down to the way GPU memory is managed in the graphics driver software. Quite possibly, AMD could improve the performance of the 4GB Radeons in both Mordor and Far Cry 4 with a change to the way it manages video memory.

There is one other factor to consider. Have a look at the results of this bandwidth test from our Fury X review. This test runs two ways: using a black texture that's easily compressible, and using a randomly colored texture that can't be compressed. The delta between these two scores tells us how effective the GPU's color compression scheme is.
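The arithmetic behind that delta is straightforward. Here's a quick sketch with placeholder numbers, not the measured scores: divide the compressible-texture result by the incompressible one to approximate the effective bandwidth gain from compression.

```cpp
// Sketch of how the compression-effectiveness "delta" reads: the black
// (compressible) texture score divided by the random (incompressible) one
// approximates how much the GPU's color compression multiplies its
// effective bandwidth on easy content. Numbers are placeholders.
#include <cstdio>

int main()
{
    struct { const char* gpu; double black_gbps; double random_gbps; } scores[] = {
        { "GPU A (placeholder)", 400.0, 250.0 },
        { "GPU B (placeholder)", 450.0, 420.0 },
    };

    for (const auto& s : scores) {
        const double ratio = s.black_gbps / s.random_gbps;
        printf("%s: %.0f vs %.0f GB/s -> ~%.2fx effective gain on compressible data\n",
               s.gpu, s.black_gbps, s.random_gbps, ratio);
    }
    return 0;
}
```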

As you can see, the color compression in Nvidia's Maxwell chips looks to be quite a bit more effective than the compression in Fury X. The Fury X still has a tremendous amount of memory bandwidth, of course, but we're more concerned about capacity. Assuming these GPUs store compressed data in a packed format that saves capacity as well as bandwidth, it's possible the Maxwell GPUs could be getting more out of each megabyte by using stronger compression.

So that's interesting.

Of course, much of what we've just demonstrated about memory capacity constraints is kind of academic for reasons we've noted. On a practical level, these results match what we saw in our initial reviews of the R9 Fury and Fury X: at resolutions of 4K and below, cards with 4GB of video memory can generally get by just fine, even with relatively high image quality settings. Similarly, the GeForce GTX 970 seems to handle 4K gaming quite well in spite of its funky partitioned memory. Meanwhile, at higher resolutions, no current single-GPU graphics card is fast enough for fluid gaming, no matter how much memory it might have. Even with 12GB, the Titan X averages less than 30 FPS in Shadow of Mordor at 5760x3240.

We'll have to see how this memory capacity story plays out over time. The 4GB Radeon Fury cards appear to be close enough to the edge—with a measurable problem in Far Cry 4 at 4K—to cause some worry about slightly more difficult cases we haven't tested, like 5K monitors, for example, or triple-4K setups. Multi-GPU schemes also impose some memory capacity overhead that could cause problems in places where single-GPU Radeons might not struggle. The biggest concern, though, is future games that simply require more memory due to the use of higher-quality textures and other assets. AMD has a bit of a challenge to manage, and it will likely need to tune its driver software carefully during the Fury's lifetime in order to prevent occasional issues. Here's hoping that work is effective.
