auxy, your math may be off.
- A single 1920x1080 frame is ~8MB. Multi-sampled 4x, it's ~126MB.
- A single 2560x1600 frame is ~16MB. Multi-sampled 4x, it's ~250MB ((2560 x 4) x (1600 x 4) x 4 bytes).
My math wasn't really off. I said 24MB for 1920x1080x32bpp triple-buffered (three 8MB frames) and 45MB for triple-buffered 2560x1440x32bpp. That's the size of the framebuffer before you start adding tricks like MSAA into the equation. I think your MSAA math might be wrong, though.
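To make the disagreement concrete, here's a quick back-of-the-envelope sketch (Python, purely illustrative; the function name is mine). 4x MSAA stores four samples per pixel, while multiplying each *dimension* by four -- which is where the ~126MB and ~250MB figures come from -- gives sixteen samples per pixel, i.e. 4x4 supersampling:

```python
def frame_mib(width, height, bytes_per_pixel=4, samples=1):
    """Size of one 32bpp color buffer in MiB at a given sample count."""
    return width * height * bytes_per_pixel * samples / 2**20

# Plain frames and triple buffers:
print(frame_mib(1920, 1080))             # ~7.9 MiB (the "~8MB" frame)
print(3 * frame_mib(1920, 1080))         # ~24 MiB (triple-buffered)
print(3 * frame_mib(2560, 1440))         # ~42 MiB (~45MB in decimal megabytes)

# 4x MSAA = four samples per pixel:
print(frame_mib(1920, 1080, samples=4))  # ~32 MiB

# Four times EACH dimension = sixteen samples per pixel (4x4 SSAA):
print(frame_mib(1920 * 4, 1080 * 4))     # ~126.6 MiB (the "~126MB" figure)
print(frame_mib(2560 * 4, 1600 * 4))     # 250 MiB exactly (the "~250MB" figure)
```

So 4x MSAA costs roughly 4x the color buffer, not 16x -- and modern GPUs compress multisampled surfaces, so real usage is usually even lower.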
Take into account the extra VRAM requirements for double- and triple-buffering, texture mipmaps (which will increase), depth buffers, etc., and -- via Mark I eyeball -- you could easily be looking at an extra half-gig to a gig over the requirement for 1920x1080, if not more.
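As a rough illustration of that tally (my own sketch; the exact set of buffers varies by engine and API, so the numbers are only ballpark):

```python
def render_target_budget_mib(width, height, msaa=1, frames=3,
                             color_bpp=4, depth_bpp=4):
    """Ballpark render-target VRAM in MiB: a triple-buffered swap chain
    plus one multisampled color target and a depth/stencil target.
    Textures, mipmaps, and driver overhead are all on top of this."""
    pixels = width * height
    swap_chain = frames * pixels * color_bpp   # resolved frames
    msaa_color = pixels * color_bpp * msaa     # multisampled color target
    depth      = pixels * depth_bpp * msaa     # e.g. D24S8 depth/stencil
    return (swap_chain + msaa_color + depth) / 2**20

print(render_target_budget_mib(1920, 1080))          # ~40 MiB, no MSAA
print(render_target_budget_mib(2560, 1600, msaa=4))  # ~172 MiB at 4x MSAA
```

Real engines pile G-buffers, shadow maps, and post-processing targets on top of this, so treat it as a floor rather than a ceiling.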
The textures won't increase with the size of the render target; or, to put it more clearly, screen resolution has little to nothing to do with the amount of video memory the textures use.
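For reference (my own aside, with an illustrative function): a full mip chain only adds about a third on top of the base texture, and none of it depends on what resolution you render at:

```python
def texture_mib(width, height, bytes_per_texel=4, mipmapped=True):
    """Uncompressed texture size in MiB; a full mip chain sums to
    roughly 4/3 of the base level, independent of screen resolution."""
    total = 0
    while True:
        total += width * height * bytes_per_texel
        if (width == 1 and height == 1) or not mipmapped:
            break
        width, height = max(width // 2, 1), max(height // 2, 1)
    return total / 2**20

print(texture_mib(2048, 2048))  # ~21.3 MiB with mips (base level is 16 MiB)
```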
Of course, someone with a 3GB graphics card could easily use GPU-Z to monitor VRAM usage and put this debate to rest.
You would think, but GPU-Z only tells you total GPU RAM usage, not which part is textures and which part is buffers.
I'd be really curious to see a tool that can break this down too; it'd help a lot with game performance troubleshooting (which I spend many hours doing for friends, family, and total strangers on the internet).
Unless you enable non-post-processing anti-aliasing like MSAA or SSAA; then the gap widens quickly. Those extra 64 bits of memory bus width -- and the extra ROPs that go along with them -- help a LOT. Look here: Tom's Hardware: Seven GeForce GTX 660 Ti Cards: Exploring Memory Bandwidth
The 660 Ti loses out to a lowly Radeon 7870 at 8x MSAA.
Of course, Kepler with a 256-bit memory bus still sucks at AA, but it sucks a lot less...
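For concreteness (my numbers, taken from the public GDDR5 specs): peak memory bandwidth is just bus width times effective data rate, so those 64 extra bits are worth about a third more bandwidth at the same memory clock:

```python
def bandwidth_gbs(bus_width_bits, effective_mhz):
    """Peak memory bandwidth in GB/s: bytes per transfer x transfer rate."""
    return bus_width_bits / 8 * effective_mhz * 1e6 / 1e9

# GTX 660 Ti: 192-bit bus, 6008 MHz effective GDDR5
print(bandwidth_gbs(192, 6008))  # ~144 GB/s
# GTX 670/680: 256-bit bus at the same memory clock
print(bandwidth_gbs(256, 6008))  # ~192 GB/s
```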