Taking a crack at the great CPU-GPU balance question

We get some interesting questions and article suggestions via email from time to time, but we just don’t have time to address them all with a proper article, unfortunately. I received one such message recently that asks a burning question we’ve seen posed in many ways over the years. Let me reprint this reader’s question for you:

I’m just throwing out a suggestion for an article if you ever get bored/run out of ideas. I always hear about older CPUs bottlenecking newer video cards, so it would be cool to actually see this tested. Especially since I always hear people say “an Athlon X2 will heavily bottleneck an HD 48xx series GPU…” and lines along those statements. However, I don’t think I’ve ever seen anyone substantiate those claims with actual data. I can’t distinguish BS answers from factual answers, since everyone seems to have their own opinions and views. I think it would be great to see an article investigating this. Of course, I understand you’re probably quite busy a lot of the time, but I figured it’s worth a shot to provide a suggestion or give you an idea.

This topic never goes away, but it is a rather difficult question to answer definitively, because it’s endlessly complex (or something close to it). Here’s my attempt at a quick answer, which I figured some folks might find interesting.

---

Yeah, that is an interesting question. Complicated, too. Much depends on the workload you’re using, both for the CPU and the GPU. We haven’t focused an article on just this question, but we have looked at performance scaling in various ways.

Here’s an example with one GPU and multiple CPUs at different display resolutions:

And here’s another with multiple GPUs on one fast CPU at different resolutions:

The reality is that you need the right balance of CPU and GPU power for the resolution and display settings chosen in a particular game. But, as the first graph above shows, all of the CPUs we tested average nearly 60 FPS in Far Cry 2, so the GPU is the primary bottleneck.

You have to drop down to the very slowest PC processors in order for the CPU to become any kind of bottleneck in a recent game, especially if it’s a console port, since console CPUs are dreadfully slow. Even the Pentium E6300 can sustain 30+ FPS in Far Cry 2:

Of course, this whole equation will change with a different game or different visual quality settings (or switching from DX10 to DX9) in this same game. But generally speaking, these days, even a $90 Athlon dual-core is likely to run most games well, with the possible exception of more complex PC-native RTS games and the Great Exceptions, Crysis and Crysis Warhead. Note the frame rates consistently in excess of 120 FPS for Left 4 Dead 2 and Wolfenstein in our Lynnfield review, for instance. I do advise gamers to avoid quad-core CPUs with really low clock speeds. A higher-frequency dual-core is a better bet when the going gets rough.
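
For anyone who wants the logic spelled out, here’s a minimal back-of-the-envelope sketch of that bottleneck reasoning. The numbers are hypothetical, not measurements from any of our reviews; the point is simply that whichever component would deliver the lower frame rate on its own is the one holding the system back.

```python
# A minimal sketch of the CPU/GPU balance idea, using made-up numbers.
# These are illustrative figures, not benchmark results.

def estimated_fps(cpu_limited_fps: float, gpu_limited_fps: float) -> float:
    """The slower side of the system sets the frame rate."""
    return min(cpu_limited_fps, gpu_limited_fps)

# Hypothetical figures for one game at one resolution:
cpu_fps = 60.0   # what the CPU could sustain with an infinitely fast GPU
gpu_fps = 45.0   # what the GPU could sustain with an infinitely fast CPU

fps = estimated_fps(cpu_fps, gpu_fps)
limiter = "GPU" if gpu_fps < cpu_fps else "CPU"
print(f"Expect roughly {fps:.0f} FPS; the {limiter} is the primary bottleneck.")
```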

I’m not sure we can dedicate an article to this issue soon—this is a very tough question to answer definitively—but we do try to provide the information you need in our reviews to make a smart buying decision. I hope this helps a little!

Comments closed
    • Fragnificent
    • 10 years ago

    Thanks Damage! Now I no longer have to look down wonderingly at my e-peen and question its usefulness. So there really isn’t a reason for me to upgrade to an i7 or Phenom from an E8400 @ 3600MHz. 😉

    • oldDummy
    • 10 years ago

    Yes, it helps.

    GPU matters more than CPU…within reason.

    Thanks.

    • wagsbags
    • 10 years ago

    Nice article, though it would have been nice if you’d mentioned which GPU was used for the first graph and which CPU for the second.

    Some amount of CPU scaling in GPU articles would be nice. I understand using two processors all the way through would double the work, but running one game at a couple of resolutions with maybe another two processors wouldn’t be that bad, I think. Maybe a current midrange processor and a last-generation lower-end processor, in addition to an overclocked i7 that no one actually has.

    • rUmX
    • 10 years ago

    Kinda reminds me of an upgrade I did a few years ago.

    At the time, I had an Athlon 64 X2 3800+ overclocked to 2.2GHz (yeah, it wasn’t stable at any frequency past that) and a GeForce 8800 GTS 640MB (yep, paid $600 CAD for it when it came out). The mobo was an NF4 Ultra. Playing WoW at 1600×1200 on my Dell 2007FP, the system really choked, especially in a raid. Framerates were really inconsistent: sometimes you’d be running at 30 fps, other times it would crawl to 10 fps. Mind you, I did run the game with all the quality settings maxed out and 4X AA. Playing WoW at a high resolution with max quality and AA still looks very good, even compared to the latest PS3/360 games, IMO.

    But some time later, I upgraded the system to a Q6600 G0 overclocked to 3.3GHz, while still using the same video card, and my framerates literally doubled!

    But yeah… I already know that WoW is extremely CPU-limited. Just wanted to add my own experience.

    • cheesyking
    • 10 years ago

    I once tried an FX 5700 on a Pentium II 333; I seem to remember it scored more in 3DMark than an Athlon XP 2000+ with an nForce1 IGP.

    It was a silly test really, I’ll never get those 5 minutes of life back again 🙁

      • Joel H.
      • 10 years ago

      All of the 3DMarks, save 2001 (and maybe 1999), were explicitly designed to be GPU-centric. The nForce IGP was built around the GeForce 2; the FX 5700 was a significantly more advanced design and much, much faster. Unless you were testing 3DMark2001, I’m not surprised.

    • WaltC
    • 10 years ago

    Thank you for that…;) You’ve just restored some of my faith in tech journalism! What a really nice, factual, level-headed presentation. I really wish some sites would do more reviews with the CPUs most people are running, which are *not* Core i7s. And, as you have put it so well, what would be the point of an i7 965 Extreme system, anyway, aside from bragging rights?

    When you’re already running your games smoothly, at 30-60fps on average, what are a few more fps going to add to the experience? Nothing, unless you are more interested in running frame-rate benchmarks than you are in playing your games. I’m certainly not.

    But looking at your GPU chart, where we change up GPUs but not CPUs, even at 2560×1600 with 4X FSAA/16X AF, there’s a low frame rate of ~20 fps and a high of just over ~80 fps. The difference a GPU can make at that resolution is on the order of 400%. At lower resolutions, the spread between the lowest and highest frame rates is even greater.

    But in the CPU chart, where we change up CPUs but not GPUs, the lowest frame-rate average is 43 fps and the highest is 66 fps, or roughly a 53% difference between lowest and highest. IIRC, this is at 1024×768, which is a low enough resolution to favor a difference in CPUs much more than if we looked at all the CPUs running the same GPU at 2560×1600.

    Clearly, the GPU employed makes a lot more difference in frame rates than the CPU employed, and it should be among the first things upgraded, according to my own preferences. Always nice to see this point made. Next, I’d upgrade the amount of RAM, if you haven’t already got at least 4 gigs (you might actually get by with less). Only last in the upgrade cycle would I go for CPU and, if applicable, motherboard changes, which might also necessitate an upgrade in RAM speed/type as well.

      • reactorfuel
      • 10 years ago

      The reason tech sites do GPU reviews with top-of-the-line processors, and CPU reviews with top-of-the-line video cards, is evident in the article itself. PC performance is determined by the slowest component. It’s all about the bottleneck.

      An example might be useful. Let’s say I want to know how a midrange system, with an i5 and 4890, will run Alien Blaster 4000. I can look up how the i5 performs in the game, and see that, when the GPU isn’t a limiting factor (because the review used a crazy fast video card and turned the GPU-dependent settings low), it can run the game at a median low of 70 fps.

      I can also look up and see how the 4890 runs the game, and see that with the CPU not affecting performance (because the review used a crazy fast CPU), it’ll do a median low of 45 fps at my chosen resolution.

      Putting those numbers together, I can figure that my chosen system will run Alien Blaster 4000 at 45 fps or so, because that’s the location of the bottleneck. I also know that, if I want to increase performance, I’d do better to upgrade the video card than the CPU.

      Keep in mind that if the CPU review used a “realistic” test setup with a midrange graphics card and GPU-heavy settings, rather than a system built for benchmarking CPUs, I wouldn’t be able to see this. I’d see the speed of the graphics card, and wouldn’t have any useful information about bottlenecks if I decide to upgrade in the future.

      Benchmark systems are set to stress the component under test, not necessarily be a perfectly realistic depiction of how it might be used in the field. With a bit of knowledge about how everything comes together, knowing how a part behaves under stress is very useful. It’s the same reason airplane wings are tested on gigantic jigs until they break, rather than just seeing whether the airplane flies (which is equivalent to your “more realistic” test). It’s important to understand how a part will respond to a wide variety of operating conditions.

      • sigher
      • 10 years ago

      Nice of them to mention that the game changes (pardon the pun) when you play an RTS; then you need 20 cores and fast RAM, and all that balance goes out of the window.

    • marvelous
    • 10 years ago

    The GPU is key for gaming. Everyone should just get the best GPU they can afford and stop wasting money upgrading their CPU, mobo, and RAM just to browse the internet and watch videos.

    Of course, you don’t want a 5+ year old CPU with the latest GPU. You’re going to bottleneck it, but in most situations you would be GPU-limited anyway at the resolution and AA settings you’re playing, and that far outweighs any CPU limitation.

    I have a 3+ year old CPU: the original E6300 at 1.86GHz, but overclocked to 3+GHz. I have yet to find anything CPU-limited with this processor except one bad console port, GTA4, which I still finished at a 20-30 fps average. Besides that game, I can play anything with my GTX 260+.

    I’m playing Borderlands currently. I originally thought the CPU was bottlenecking somewhat, because I was getting slowdowns in some sections, but that’s not the case. As soon as I turned off dynamic shadows, I was flying at 80 fps.

      • NeXus 6
      • 10 years ago

      I’m running an overclocked E6400 and GTX 260, and I concur. I am upgrading to a socket 775 quad-core CPU to hold me over for a few more years as game devs start to program for more than two cores. Dragon Age: Origins is a good example.

    • wingless
    • 10 years ago

    Why do the Core i5s and Core i7s take a dive at 1600×1200 against the Core 2 Quads and Phenom II X4s?

      • Dposcorp
      • 10 years ago

      Edit:
      Which graph are you asking about? Not against all chips, just some.
      Clock speed is my guess in that graph, and the fact that no more bandwidth is needed.

      In the first graph, it looks like it is video-card limited.

    • OneArmedScissor
    • 10 years ago

    Sure, they all manage respectable average framerates, but what about the minimum FPS? If there’s a “bottleneck” in the CPU’s architecture, that’s what is actually going to hurt you.

    One thing that has always baffled me is that no one checks this at high resolutions. Look at what happened in the Lynnfield article when the resolution was upped. Suddenly the “fastest” CPUs were the slowest, and the outright architecturally “inferior” Core 2s were even beating the Nehalems across the board.

    And no one questions this?!?

    Their cache designs and memory controllers are radically different. That should have a pretty significant impact on just how low your minimum FPS drops when the CPU has to pull a lot from memory. Much more so than “per clock performance,” anyways.

    Now I’m going to sound like I’m blowing things out of proportion, but I almost feel like it’s a conspiracy. The standard benchmarks on PC sites across the board are incredibly manipulative and selective of the information they present.

    Every site and their dog could be telling everyone to go buy a $300-1,000 CPU to get games running the “fastest,” when in reality, some very cheap and simplified CPU could be less of a bottleneck.

    I absolutely do not care, at all, to see how fast a CPU runs a game without the GPU being pushed as hard as it’s realistically going to.

    That doesn’t even tell you if the CPU is faster in some general way that you can extrapolate to other usage scenarios. CPUs aren’t designed with the idea of making games go fast.
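
The minimum-FPS point above is easy to illustrate with a toy example. The frame times below are invented for the sake of illustration, not taken from any review; the idea is just that a healthy-looking average frame rate can coexist with dips the player clearly feels.

```python
# A toy illustration of how an average frame rate can hide big dips.
# The per-frame render times below are invented, not taken from any review.

frame_times_ms = [16, 17, 16, 18, 16, 55, 16, 17, 60, 16]  # hypothetical frame-time log

avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
worst_fps = 1000 / max(frame_times_ms)  # frame rate at the single slowest frame

print(f"Average: {avg_fps:.0f} FPS")        # ~40 FPS, looks perfectly playable
print(f"Worst frame: {worst_fps:.0f} FPS")  # ~17 FPS, the stutter you actually feel
```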

    • September
    • 10 years ago

    Still, these are all current- or previous-generation processors. What about a 3.0 GHz Prescott? It’s got PCIe x16, so can I drop a 4870 in and still get decent fps? Or is it really the platform, the memory controller, and RAM that are the bottleneck?

    • flip-mode
    • 10 years ago

    Scott, my love for you is now complete. Off to read… and I’m back. OK, “complete” may be a bit strong. I’m glad that you at least officially addressed this issue.

    While the question may, as you have stated, have a complex answer, it seems to be one of the most important ones to address. Few “average Joe” programs tax the CPU today; gaming is one of the few. For many people, CPU scaling can answer the question of whether or not a new CPU is needed.

    CPU articles always bench with games configured to minimize the effect of the GPU and maximize the effect of the CPU. This is a useful distortion, but a distortion nonetheless. It accentuates the differences between CPUs when, in real life, the difference is very small. This is a shame, because we buy newer parts to get faster real-life performance based on results from scenarios that are not real. And yet, without accentuating the differences between the CPUs, reviewers might question why they even bother benchmarking with games, because the differences wouldn’t be anything to get excited about.

    At the same time, how is a reviewer supposed to choose what hardware configuration and what software settings constitute “real life”? It would never be possible to please everyone.

    Nonetheless, it would be nice, for CPU articles, if scaling were tested on even just one game.

    The bright side of benching CPUs with GPU bottlenecks minimized is that it will at least tell you what the performance ceiling for a game is. I’m just not sure if that is even useful to know.

    One final note: I have absolutely no idea how much work it is and how much time it takes to generate a review. I have a hunch that the answer to both of those is approximately “a lot.” I fully respect that, and I fully respect all the work that the TR staff does. While CPU scaling is icing on the cake, not having it does not diminish the truly fantastic quality of the articles that are published here. As far as I am concerned, on the whole internet there is only one other site that produces articles of similar quality on the subject of computer hardware, and that is AnandTech. All other sites are a substantial distance behind TR and AT in this regard.

    Thanks, Mr. Wasson.

      • shank15217
      • 10 years ago

      Good clear article, nice to see Phenoms hold their own.

        • Flying Fox
        • 10 years ago

        When it comes to games, they always do.

      • derFunkenstein
      • 10 years ago

      Now complete as in finished? You don’t love him anymore? I’m sure he’s heartbroken.

      • sigher
      • 10 years ago

      Wow, what a long-winded post; such writing should be limited to actual sites and articles, not a comment, IMHO.

        • dearharlequin
        • 10 years ago

        You know, nobody’s forcing you to read his comment, no matter how long-winded it is 🙂

    • MadManOriginal
    • 10 years ago

    Obligatory “but horrid console ports like GTA 4 use a quad core.” Actually, I’ve seen some reviews that show an advantage for quad cores in certain other games, but yeah, a *low-speed* quad core isn’t great; that’s what overclocking is for 🙂

    Scott, some info is missing from the last graph that might be nice to know: which graphics card is it, and what resolution and AA/AF settings?
