Inside the second: Gaming performance with today’s CPUs

As you may know, a while back, we came to some difficult realizations about the validity of our methods for testing PC gaming performance. In my article Inside the second: A new look at game benchmarking, we explained why the widely used frames-per-second averages tend to obscure some of the most important information about how smoothly a game plays on a given system. In a nutshell, the problem is that FPS averages summarize performance over a relatively long span of time. It’s quite possible to have lots of slowdowns and performance hiccups during the period in question and still end up with an average frame rate that seems quite good. In other words, the FPS averages we (and everyone else) had been dishing out to readers for years weren’t very helpful—and were potentially misleading.
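To make that concrete, here's a tiny, hypothetical illustration (the numbers are invented, not from our testing): two one-second stretches of gameplay can share the same FPS average while feeling completely different.

```python
# Hypothetical frame times in milliseconds for two one-second slices of gameplay.
smooth = [16.7] * 60                 # 60 frames, each ~16.7 ms
choppy = [10.0] * 57 + [143.3] * 3   # mostly quick frames, plus three long stalls

for name, times in (("smooth", smooth), ("choppy", choppy)):
    avg_fps = len(times) / (sum(times) / 1000.0)
    print(f"{name}: {avg_fps:.0f} FPS average, worst frame {max(times):.0f} ms")

# Both runs average roughly 60 FPS, but the second one hitches visibly three times.
```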

To sidestep this shortcoming, we proposed a new approach, borrowed from the world of server benchmarking, that focuses on the actual problem at hand: frame latencies. By considering the time required to render each and every frame of a gameplay session and finding ways to quantify the slowdowns, we figured we could provide a more accurate sense of true gaming performance—not just the ability to crank out lots of frames for high averages, but the more crucial ability to deliver frames on time consistently.

Some good things have happened since we proposed our new methods. We’ve deployed them in a host of graphics card reviews, and they have proved their worth, helping to uncover some performance deficiencies that would have otherwise remained hidden. In response to your feedback, we’ve refined our means of quantifying the latency picture and presenting the info visually. A few other publications have noticed what we’re doing and adjusted their own testing methods; even more have quietly inquired behind the scenes about doing the same.

Most importantly, you, our readers, have responded very positively to the changes, even though we’ve produced some articles that are much more demanding reading than your average scan-and-skip-to-the-conclusion PC hardware review.

We have largely missed one important consequence of our insights, though. Latency-focused game testing doesn’t just apply to graphics cards; it’s just as helpful for considering CPU performance. We made a quick down payment on exploring this matter in our Ivy Bridge review, but we haven’t done enough to pursue it. Happily, that oversight ends today. Over the summer, we’ve tested 18 different PC processors from multiple generations in a range of games, and we can now share the results with you.

Before your eyes glaze over from the prospect of data overload, listen up. The results we’ve compiled confront a popular myth: that PC processors are now so fast that just about any CPU will suffice for today’s games, especially since so many titles are console ports. I’ve said something to that effect myself more than once. But is it true? We now have the tools at our disposal to find out. You may be surprised by what we’ve discovered.

The contenders

Yes, we really have tested 18 different desktop CPUs for this article. They break down into several different classes, delineated mainly by price. We have a full complement of the latest chips on hand, including several members of Intel’s Ivy Bridge lineup and a trio of AMD FX processors. We’ve tested them against their predecessors in the past generation or two, to cover a pretty big swath of the CPUs sold in the past several years. Allow me to make some brief introductions.

Quite a few PC enthusiasts will be interested in the first class of CPUs tested, which is headlined by the Core i5-3470 at $184. This Ivy Bridge-based quad-core replaces the Sandy Bridge-derived Core i5-2400 at the same price. The newer chip has slightly faster clocks and a lower power envelope—77W instead of 95W—versus the model it supplants. Two generations back, this price range was served by the Core i5-655K, a dual-core chip. The closest competing offering from AMD is the FX-6200 at $155, a six-core part based on the Bulldozer architecture. The FX-6200’s precursor was the Phenom II X4 980, which we’ve also invited to the festivities.

For a little more money, the next class of CPUs promises even higher performance. Intel’s Ivy Bridge offering in this range is the Core i5-3570K for $216, with a fully unlocked multiplier to ease overclocking. The 3570K replaces an enthusiast favorite, the Core i5-2500K, again with slightly higher clock speeds and a lower thermal design power (or TDP). This is also the space where AMD’s top Bulldozer chip, the FX-8150, contends. The legacy options here are a couple of 45-nm chips, the Core i5-760 and the Phenom II X6 1100T.

More relevant for many of us mere mortals, perhaps, are the lower-end chips that sell for closer to a hundred bucks. AMD’s FX-4170 at $135 gets top billing here, since our selection of Intel chips skews to the high end. We think the FX-4170 is a somewhat notable entry in the FX lineup because it boasts the highest base and Turbo clock speeds, even though it has fewer cores. The FX-4170 supplants a lineup of chips known for their strong value, the Athlon II X4 series. Our legacy representative from that series actually bears the Phenom name, but under the covers, the Phenom II X4 850 employs the same silicon with slightly higher clocks.

Finally, we have the high-end chips, a segment dominated by Intel in recent years. We’ve already reviewed the Ivy-derived Core i7-3770K, a $332 part that inherits the spot previously occupied by the Core i7-2600K and, before that, by the Core i7-875K. Also kicking around in the same price range is the Core i7-3820, a fairly affordable Sandy Bridge-E-based part that drops into Intel’s pricey X79 platform. The Core i7-3820’s big brother is a thousand-dollar killer, the Core i7-3960X, the fastest desktop CPU ever.

This selection isn’t perfect, but we think it provides a good cross-section of the market. Face it: the CPU makers offer way too many models these days. The sheer volume of parts is difficult to track without an online reference. If you’re having trouble keeping them sorted, fear not. We’ve broken down the results by class in the following pages, and we’ll summarize the overall picture with one of our famous price-performance scatter plots.

Our testing methods

Our test systems were configured to create as equal a playing field as possible for the CPUs. They all shared the same software, graphics cards, storage, and memory types. Here’s a look at one of the test rigs, mounted in a swanky open-air case.

The system configurations we used were:

990FX platform
Processors: AMD Phenom II X4 850, Phenom II X4 980, Phenom II X6 1100T, FX-4170, FX-6200, FX-8150
Motherboard: Asus Crosshair V Formula
North bridge: 990FX
South bridge: SB950
Memory: 8 GB (2 DIMMs) AMD Entertainment Edition DDR3 SDRAM, 1600 MT/s, 9-9-9-24 1T
Chipset drivers: AMD chipset 12.3
Audio: Integrated SB950/ALC889 with Realtek 6.0.1.6602 drivers

Z77 Express platform
Processors: Intel Core i5-2400, Core i5-2500K, Core i7-2600K, Core i5-3470, Core i5-3570K, Core i7-3770K
Motherboard: MSI Z77A-GD65
North bridge: Z77 Express
Memory: 8 GB (2 DIMMs) Corsair Vengeance DDR3 SDRAM, 1600 MT/s, 9-9-9-24 1T
Chipset drivers: INF update 9.3.0.1020, iRST 11.1.0.1006
Audio: Integrated Z77/ALC898 with Realtek 6.0.1.6602 drivers

X79 Express platform
Processors: Intel Core i7-3820, Core i7-3960X
Motherboard: Intel DX79SI
North bridge: X79 Express
Memory: 16 GB (4 DIMMs) Corsair Vengeance DDR3 SDRAM, 1600 MT/s, 9-9-9-24 1T
Chipset drivers: INF update 9.2.3.1022, RSTe 3.0.0.3020
Audio: Integrated X79/ALC892 with Realtek 6.0.1.6602 drivers

A75 platform
Processor: AMD A8-3850
Motherboard: Gigabyte A75M-UD2H
North bridge: A75 FCH
Memory: 8 GB (2 DIMMs) Corsair Vengeance DDR3 SDRAM, 1600 MT/s, 9-9-9-24 1T
Chipset drivers: AMD chipset 12.3
Audio: Integrated A75/ALC889 with Realtek 6.0.1.6602 drivers

P55 platform
Processors: Intel Core i5-655K, Core i5-760, Core i7-875K
Motherboard: Asus P7P55D-E Pro
North bridge: P55 PCH
Memory: 8 GB (2 DIMMs) Corsair Vengeance DDR3 SDRAM, 1333 MT/s, 8-8-8-20 1T
Chipset drivers: INF update 9.3.0.1020, iRST 11.1.0.1006
Audio: Integrated P55/VIA VT1828S with Microsoft drivers

 They all shared the following common elements:

Hard drive: Kingston HyperX SH100S3B 120GB SSD
Discrete graphics: XFX Radeon HD 7950 Double Dissipation 3GB with Catalyst 12.3 drivers
OS: Windows 7 Ultimate x64 Edition, Service Pack 1 (AMD systems only: KB2646060 and KB2645594 hotfixes)
Power supply: Corsair AX650

Thanks to Corsair, XFX, Kingston, MSI, Asus, Gigabyte, Intel, and AMD for helping to outfit our test rigs with some of the finest hardware available. Thanks to Intel and AMD for providing most of the processors, as well, of course.

We used the following test applications: the Fraps utility for frame-time capture, The Elder Scrolls V: Skyrim, Batman: Arkham City, Crysis 2, Battlefield 3, and Windows Live Movie Maker for our multitasking test.

Some further notes on our testing methods:

  • We used the Fraps utility to record frame rates while playing either a 60- or 90-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We tested each Fraps sequence five times per processor in order to counteract any variability. We’ve included frame-by-frame results from Fraps for each game, and in those plots, you’re seeing the results from a single, representative pass through the test sequence. (See the short sketch after this list for how a log of per-frame timestamps turns into the frame times we discuss.)
  • The test systems’ Windows desktops were set at 1920×1080 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.
  • After consulting with our readers, we’ve decided to enable Windows’ “Balanced” power profile for the bulk of our desktop processor tests, which means power-saving features like SpeedStep and Cool’n’Quiet are operating. (In the past, we only enabled these features for power consumption testing.) Our spot checks demonstrated to us that, typically, there’s no performance penalty for enabling these features on today’s CPUs. If there is a real-world penalty to enabling these features, well, we think that’s worthy of inclusion in our measurements, since the vast majority of desktop processors these days will spend their lives with these features enabled.
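Here’s a rough sketch of the sort of processing involved, assuming a Fraps-style frametimes log that records a cumulative timestamp (in milliseconds) for each frame; the file layout and function name here are illustrative, not a description of our exact tooling.

```python
import csv

def frame_times_ms(frametimes_csv: str) -> list[float]:
    """Convert cumulative per-frame timestamps into individual frame times (ms)."""
    with open(frametimes_csv, newline="") as f:
        rows = list(csv.reader(f))
    stamps = [float(row[1]) for row in rows[1:]]          # skip the header row
    return [b - a for a, b in zip(stamps, stamps[1:])]    # deltas between frames

# One list of frame times per test run; the plots show a single representative
# pass out of the five runs captured per CPU.
```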

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

The Elder Scrolls V: Skyrim

We tested performance using Fraps while taking a stroll around the town of Whiterun in Skyrim. The game was set to the graphical quality settings shown above. Note that we’re using fairly high quality visual settings, basically the “ultra” presets at 1920×1080 but with FXAA instead of MSAA. Our test sessions lasted 90 seconds each, and we repeated them five times per CPU.


Frame time (ms)    FPS rate
8.3                120
16.7               60
20                 50
25                 40
33.3               30
50                 20

The plots above show the time required to render each frame from a single test run. You can click on the buttons to switch to results for different brands and classes of processors. Notice that, because we’re reporting frame times, lower numbers are preferable to higher ones. You can even see some spikes representing long frame times in each plot. For the confused, we’ve included the table above, which converts some key frame time thresholds into their FPS equivalents. Also note that faster solutions tend to produce more total frames than the slower ones during the test period.
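The conversion in that table is just the reciprocal of the frame time, as in this quick sketch:

```python
def fps_equivalent(frame_time_ms: float) -> float:
    """FPS rate that corresponds to rendering every frame in the given time."""
    return 1000.0 / frame_time_ms

for threshold in (8.3, 16.7, 20.0, 25.0, 33.3, 50.0):
    print(f"{threshold:5.1f} ms  ~  {fps_equivalent(threshold):5.1f} FPS")
```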

To get a sense of the performance range we’re dealing with, flip between the far-left “AMD budget” and far-right “Intel extreme” buttons a few times. The fastest Intel processors produce very few frame times above 16.7 milliseconds—that is, they churn out a nearly steady stream of frames at roughly 60 FPS or better. Meanwhile, the slowest budget processors see regular spikes into 30 or 40 millisecond territory. That’s not a devastating outcome, but there are substantially more slowdowns than we see with the fastest processors.

All of the CPUs achieve FPS averages near or above the supposedly golden 60 FPS mark. Based purely on common FPS expectations, one might argue that any of them should be more than sufficient for playing Skyrim well.

There are some warning signs here, though. The Intel processors line up as one might expect, roughly in order of age and then model number, with only the slowest legacy dual-core trailing anything from AMD. The three Ivy Bridge parts, in lighter blue, fare well. However, the AMD processors don’t quite behave as expected. A prior-gen CPU, the Phenom II X4 980, takes the top spot among them. The three FX processors, in lighter green, don’t finish in the order expected, either. The low-end FX-4170 is the fastest of the three, although by a slim margin.

In the past, we might have dismissed these results as part of the noise and as not terribly relevant given the fairly high FPS averages. Look what happens when we turn the focus to frame latencies, though.

This is a snapshot of the frame latency picture; it’s the point below which 99% of all frames have been rendered. We’re simply excluding the last 1% of frames, many of them potential outliers, to get a sense of overall smoothness.
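In code, a simple nearest-rank version of that metric looks something like the sketch below; treat it as an approximation of the idea rather than our precise implementation, since the exact details aren’t spelled out here.

```python
def percentile_frame_time(frame_times_ms: list[float], pct: float = 99.0) -> float:
    """Frame time below which pct percent of the frames in a run were rendered."""
    ordered = sorted(frame_times_ms)
    rank = max(int(len(ordered) * pct / 100.0) - 1, 0)   # nearest-rank index
    return ordered[rank]

# percentile_frame_time(times) -> e.g. 22.4 means 99% of frames took 22.4 ms or less.
```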

Again, the Intel processors perform well. All but one of them render the great majority of frames in under 23 milliseconds, which translates to a steady-state frame rate north of 43 FPS. There is some reshuffling in the move from FPS average to a latency-sensitive metric—the “big iron” Core i7-3820 with its large cache and quad memory channels moves up the ranks, for instance—but the changes are what one might expect, given the hardware in question.

Meanwhile, the AMD FX processors suffer in this comparison. The FX-8150, which is ostensibly AMD’s top-of-the-line desktop processor, trails two older Phenom IIs and the FX-4170. The FX-6200 falls behind the A8-3850, a budget APU based on AMD’s prior CPU microarchitecture. The absolute numbers aren’t stellar, either. The FX processors are cranking out 99% of the frames in 33 milliseconds or so, which translates to a steady rate of 30 FPS—much lower than even the slower Intel processors.

What’s the problem? The broader latency curve suggests some answers.


The “tail” of the curve for the AMD processors is telling. Although the FX chips keep pace with the Phenom II X6 1100T in the first 95% or so of frames rendered, their frame times spike upward to meet the slower A8-3850 budget APU and Phenom II X4 850 in the last ~5% of frames. In the most crucial function of gaming performance, latency avoidance, the more expensive FX processors essentially perform like low-end CPUs.
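The curves themselves are simply frame times plotted against the percentage of frames completed within that time; here’s a hedged sketch of how such a curve can be built from one run’s frame times (function and variable names are ours):

```python
def latency_curve(frame_times_ms: list[float], points: int = 100) -> list[tuple[float, float]]:
    """(percentile, frame time in ms) pairs; the last few points form the 'tail'."""
    ordered = sorted(frame_times_ms)
    curve = []
    for p in range(1, points + 1):
        rank = max(int(len(ordered) * p / points) - 1, 0)
        curve.append((100.0 * p / points, ordered[rank]))
    return curve

# tail = [(pct, ms) for pct, ms in latency_curve(times) if pct >= 95]  # where the FX chips spike
```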

Why? I think the answer is suggested by the relatively strong performance of the FX-4170 compared to the FX-6200 and FX-8150. As we noted, the FX-4170 actually has the highest base and Turbo clock speeds of the FX lineup. That means it likely has the highest per-thread performance of any FX chip, and that appears to translate into better latency mitigation. (I also suspect the FX-4170 spends more time operating near its peak Turbo speed, since it only has to fit two “modules” and four cores into the same 125W thermal envelope as the higher-end FX chips.)

Looks to me like the FX CPUs have an Amdahl’s Law problem. Even though they have a relatively large number of cores for their respective product segments, their per-thread performance is fairly weak. The Bulldozer architecture combines relatively low instruction throughput per cycle with clock speeds that aren’t as high as AMD probably anticipated. That adds up to modest per-thread performance—and even with lots of cores on hand, the execution speed of a single thread can limit an application’s throughput.
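Amdahl’s Law captures that limit: if only a fraction p of the per-frame work can spread across n cores, the best-case speedup over a single core is 1 / ((1 - p) + p / n). The numbers below are purely illustrative, not measurements of any FX chip:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Upper bound on speedup when only part of the work scales with core count."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# If, hypothetically, 60% of a game's per-frame work can run in parallel, eight
# slow cores top out around 2.1x one of those cores; per-thread speed still rules.
print(round(amdahl_speedup(0.6, 8), 2))   # 2.11
```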

Thus, the FX-6200 and FX-8150 processors aren’t as well-suited to our Skyrim test scenario as their predecessors in the Phenom II lineup. Only the FX-4170 outperforms the CPU it replaces, the Phenom II X4 850, whose lack of L3 cache and modest 3.3GHz clock frequency aren’t doing it any favors.

Do the results from our new methods mean that some AMD processors are inadequate for Skyrim? Not quite. One of our key metrics for frame latency problems involves adding up all of the time spent working on frames above a certain time threshold. We consider it a measure of “badness,” giving us a sense of how severe the slowdowns are. We typically start at a threshold of 50 milliseconds, which translates to 20 FPS, since taking longer than that to produce a frame is likely to interrupt the illusion of motion. The thing is, none of the CPUs we tested spends any real time above the 50 ms threshold. They’re all adequate enough to deliver relatively decent gameplay. In fact, we’ve omitted the graph for this threshold, since it doesn’t show much.
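One plausible way to compute that “badness” measure, sketched under the assumption that it simply accumulates the portion of each long frame beyond the chosen threshold:

```python
def time_spent_beyond(frame_times_ms: list[float], threshold_ms: float = 50.0) -> float:
    """Total milliseconds spent working on frames past the given threshold."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# time_spent_beyond(times, 50.0)   # essentially zero for every CPU in this test
# time_spent_beyond(times, 16.7)   # separates the consistently smooth CPUs from the rest
```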

However, if we crank down the tolerance to 16.7 milliseconds, the equivalent of 60 FPS, then the differences become apparent. The FX processors again fare poorly, relatively speaking. If you covet glassy smoothness, where the system pumps out frames consistently at low latencies close to your display’s refresh rate, then you’ll want a newer Intel processor. In this scenario, no entry in the FX lineup comes as close to delivering that experience as a Phenom II X4 980 or a Core i5-655K.

Batman: Arkham City

Now that we’ve established our evil methods, we can deploy them against Batman. Again, we tested in 90-second sessions, this time while grappling and gliding across the rooftops of Gotham in a bit of Bat-parkour. Again, we’re using pretty decent image quality settings at two megapixels; we’re just avoiding this game’s rather pokey DirectX 11 mode.

In our test session, we’re moving rapidly through a big swath of the city, so the game engine has to stream in more detail periodically. You can see the impact in the frame time plots: every CPU shows occasional spikes throughout the test run.


The severity of the spikes is lessened by having a faster CPU, though. Once more, the contrast between plots exposed by the far-left and far-right buttons is instructive.

Although they are very different ways of counting, the FPS average and the 99th percentile frame time largely appear to agree here. One distinction worth making is that the latency-focused metric is a tougher judge. Although the FPS averages range up to almost 90 FPS, the 99th percentile frame times don’t reach down to 16.7 milliseconds, so none of the processors provide a near-steady stream of frames at 60 FPS.


The broader latency picture in this test scenario is a good one. That is, frame times remain nice and low up until the last few percentage points, and none of the processors show a “tail” that spikes upward suddenly before the others. Yes, the Intel CPUs are generally quicker, but the differences are fairly minor overall.

In this case, our measure of “badness” provides the real distinction between the faster and slower CPUs. None of the Intel CPUs from the Ivy or Sandy Bridge generations spends any substantial amount of time working on long-latency frames. Even the Core i5-760 avoids crossing the 50-ms threshold for long. The AMD processors, however, all spend at least a tenth of a second in that space—not long in the context of a 90-second test run, to be sure, but enough that one might feel a hitch here or there. We’re left to ponder the fact that the flagship FX-8150 doesn’t avoid slowdowns as well as a legacy Intel dual-core, ye olde Core i5-655K.

Crysis 2

Our test session in Crysis 2 was only 60 seconds long, mostly for the sake of ensuring a precisely repeatable sequence.


Notice the spike at the beginning of the test run; it happens on each and every CPU. You can feel the hitch while playing. Apparently the game is loading some data for the area we’re about to enter or something along those lines.

Here’s a closer look at the spike on a subset of the processors. The duration of the pause appears to be at least somewhat CPU dependent. The A8-3850 APU takes nearly a third of a second to complete the longest frame, while the Ivy-based 3770K needs less than half of that time.

The FPS averages and 99th percentile results nearly mirror each other once again, and we appear to be running into a potential GPU bottleneck on the fastest CPUs, which are bunched together pretty closely in both metrics. Fortunately for AMD, the FX processors don’t seem to have any trouble outperforming their predecessors in this test scenario. All of them remain slower than the Intel chips from two generations back, though.


Whoa. Check out the tails in those Intel latency curves. Notice how the Core i5-2400’s tail spikes upward just a little before the 2500K’s, which spikes a little before the 2600K’s. The same pattern is evident for the three Ivy-based CPUs, too. I’m sorry, but that is awesome. Remember, I played through these test sessions manually, five per CPU, attempting but never quite managing to play exactly the same way. To see our data line up like that, by CPU speed grade, is ridiculously gratifying.

What it tells us is that there are measurable differences between the Intel CPUs’ performance in the last 5-7% of frames rendered. The faster processors do a better job of keeping frame latencies low—and thus gameplay smooth.

Some proportion of the frames in this scenario present difficulty for each of the CPUs, whether it’s the final ~3% on the fastest processors, the final ~15% on the Core i5-760, or the final ~35% on the FX-6200. The tails for the different chips vary in shape quite a bit, and if you look at the frame time plots above, you can see the intermittent spikes that represent those frames. The spikes are smaller and less frequent on the faster processors. To keep things in perspective, though, even the slowest AMD chips deliver 99% of their frames in under 33 milliseconds, or over 30 FPS.

When we focus directly on the severity of slowdowns, the two top FX processors again fall behind their Phenom II counterparts, although only by the slimmest of margins. Again, the AMD processors are up to the task of running this game, but they perform similarly to Intel’s older, low-end parts.

Meanwhile, we should point out a trend on the Intel side of the aisle, which is the ongoing strong performances in our latency-related metrics for the Ivy Bridge processors. Here, the relatively affordable Core i5-3470 wastes less time on long-latency frames than the $1K Core i7-3960X does. Yeah, we’re splitting eyelashes, but it’s true. The tweaked microarchitecture in Ivy Bridge counts for something, and I suspect the 22-nm chips also spend a little more time resident at their peak Turbo clock frequencies.

Battlefield 3

As with Crysis 2, our BF3 test sessions were 60 seconds long to keep them easily repeatable. We tested at BF3’s high-quality presets, again at 1920×1080.

Click the buttons under each screenshot to toggle between the different solutions. You might have to wait a second or two for a new image to load after each click.


Yikes. Here’s an example where the commonly held belief about PC games and CPU performance looks to be correct. None of the processors appear to struggle much at all in delivering nice, low frame times throughout the test run.

Wow. Every processor down to the A8-3850 delivers 99% of all frames in 16.7 milliseconds or less. That adds up to a nearly uninterrupted stream of frames at 60 FPS.


Yes, we can still discern fine-grained differences between the CPUs with a really tight threshold, but there’s really very little “badness” to be sifted out. Also, in a ray of light for AMD, the FX-8150 performs relatively well here. This is one of those cases, though, when nearly any modern CPU will do.

Multitasking: Gaming while transcoding video

A number of readers over the years have suggested that some sort of real-time multitasking test would be a nice benchmark for multi-core CPUs. That goal has proven to be rather elusive, but we think our new game testing methods may allow us to pull it off. What we did is play some Skyrim, with a 60-second tour around Whiterun, using the same settings as our earlier gaming test. In the background, we had Windows Live Movie Maker transcoding a video from MPEG2 to H.264. Here’s a look at the quality of our Skyrim experience while encoding.


Several things happen when we add a background video encoding task to the mix. For one, the Core i7-3960X, with its six cores and 12 threads, reasserts its place at the top of the charts. Although Skyrim alone may not need all of its power, the 3960X better maintains low frame latencies when multitasking. The FX-8150’s additional cores come in handy here, as well, as it surpasses the lower-end FX parts. Unfortunately, the 8150 still can’t quite match two of the Phenom IIs that preceded it.


The 3960X’s latency curve is clearly differentiated from the 3770K’s here, while the dual-core i5-655K struggles mightily, falling behind all of the AMD processors, every one of which has more than two cores.

With most of these CPUs, you can play Skyrim and encode video in the background with relatively little penalty in terms of animation fluidity. We’ve dialed back our threshold to 50 ms, and as you can see, all of the newer Intel processors avoid serious slowdowns entirely. The AMD chips aren’t bad, either, overall. Somewhat surprisingly, the Phenom II X4 980 outperforms the X6 1100T, despite having two fewer cores, presumably thanks to its higher clock speed.

Conclusions

As promised, we’ll sum up our results with one of our nifty value scatter plots. Our overall performance number comes from the geometric mean of all tests. We’ve converted the 99th percentile frame times into their FPS equivalents for the sake of readability. The pricing for the current CPUs comes from official Intel and AMD price lists. The legacy chips—specifically the Core i5-655K, i5-760, and i7-875K processors—are shown at their original introductory prices.
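For those curious about the mechanics, the sketch below shows the gist of assembling such an overall score; the helper names and sample numbers are ours, and the real calculation may differ in its details.

```python
from math import prod

def geometric_mean(values: list[float]) -> float:
    """Geometric mean, which keeps one unusually good or bad game from dominating."""
    return prod(values) ** (1.0 / len(values))

# Hypothetical 99th-percentile frame times (ms) for one CPU across the test games:
per_game_99th_ms = [22.0, 25.0, 30.0, 15.0]
overall_fps_equivalent = 1000.0 / geometric_mean(per_game_99th_ms)
print(round(overall_fps_equivalent, 1))   # the value plotted on the scatter's y-axis
```

Because the geometric mean commutes with taking reciprocals, converting to FPS before or after averaging yields the same figure.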

As usual, the most desirable positions on the plot are closer to its top left corner, where the best combinations of price and performance can be found.

As you probably expected, the Ivy Bridge-derived processors are near the top in overall gaming performance. Intel has made incremental improvements over the Sandy Bridge equivalents in each price range (the i5-2400, i5-2500K, and i7-2600K). The Core i5-3470 offers perhaps the best combination of price and performance on the plot, and the Core i5-3570K offers a little more speed for a bit more money. The value curve turns harsh from there, though. The i7-3770K doesn’t offer much of an improvement over the 3570K, yet it costs over a hundred bucks more. The Core i7-3960X offers another minuscule gain over the 3770K, but the premium to get there is over $500.

Ivy Bridge moves the ball forward, but Intel made even more performance progress in the transition from the prior-generation Lynnfield 45-nm processors—such as the Core i5-760 and i7-875K—to the 32-nm Sandy Bridge chips. From Sandy to Ivy, some of the potential speed benefits of the die shrink were absorbed by the reduction of the desktop processor power envelope from 95W to 77W.

Sadly, with Bulldozer, AMD has moved in the opposite direction. The Phenom II X4 980, with four “Stars” cores at 3.7GHz, remains AMD’s best gaming processor to date. The FX-8150 is slower than the Phenom II X6 1100T, and the FX-6200 trails the X4 980 by a pretty wide margin. Only the FX-4170 represents an improvement from one generation to the next, and it costs more than the Phenom II X4 850 that it outperforms. Meanwhile, all of the FX processors remain 125W parts.

We don’t like pointing out AMD’s struggles any more than many of you like reading about them. It’s worth reiterating here that the FX processors aren’t hopeless for gaming—they just perform similarly to mid-range Intel processors from two generations ago. If you want competence, they may suffice, but if you desire glassy smooth frame delivery, you’d best look elsewhere. Our sense is that AMD desperately needs to improve its per-thread performance—through IPC gains, higher clock speeds, or both—before it will have a truly desirable CPU to offer PC gamers.

Fortunately, there are some glimmers of hope emanating from AMD. The Trinity APU, which combines higher-IPC Piledriver cores with an integrated Radeon, beat out an Ivy-based mobile CPU in our gaming tests. Trinity is slated to make its way to the desktop this fall, and it may provide some relief when it arrives. After that, we expect an eight-core chip based on Piledriver and then an APU and CPU refresh based on Steamroller, another architectural revamp. We think the firm is moving in the right direction, which is a change from recent years. Whether it can do so quickly enough to catch up with Intel, though, is the truly vexing question.

Follow me on Twitter for less of the same.

Comments closed
    • tbone8ty
    • 7 years ago

    why are you using 12.3 drivers?

    12.7 beta/12.8 offers a 25% increase for Skyrim, especially when using a GCN card like the 7950.

    I wonder if this would change anything? I dunno, maybe not. Just curious why you used older drivers.

    • Mr Bill
    • 7 years ago

    I’d like to chime in and say this was an excellent and informative review and obviously a massive amount of work. I have to note that I don’t play any of these games (I play mostly WOW) so I’m just looking at the data.

    Seeing how smooth Battlefield 3 was compared to The Elder Scrolls V: Skyrim makes me wonder if there might be a platform-related issue (e.g. chipset) and not just the CPUs causing the bumpy increases in render times. My impression is that Intel does seem to make quicker chipsets and that may factor in for feeding these CPUs. However, this might rather be game rendering engine related or simply the design of the game. So, nuff said.

    I think it would be really informative to have a second view of the “frame number” traces that stretches all the traces to the same length, with the X-axis labeled something appropriate like ‘rendered sequence’. I find myself constantly trying to compare the shapes of these traces and I see a lot of correlations. It would be informative to see if some CPUs (edit: not cards) have less trouble with a particular stretch than another and see them stacked.

    I would like to be able to see some of the Intel CPUs stacked with their best rivals instead of just with themselves.

    Kudos for a great review and you have convinced me (a hardcore AMD fan) to recommend to a couple friends that they should build around an Intel Core i5-3470 3.3GHz for a reasonably priced gaming box.

    Edit: CPU’s (not cards)

    • d0g_p00p
    • 7 years ago

    “It’s worth reiterating here that the FX processors aren’t hopeless for gaming—they just perform similarly to mid-range Intel processors from two generations ago”

    ouch!

      • mcnabney
      • 7 years ago

      And the games themselves are 8 CPU generations old unless you game at resolutions higher than 1080p.
      I have a Core2 and a 7770 and play all games at high detail smoothly at 1080p.

      I feel like some of this is trying to drum-up support for fast CPUs when there is no demand for them when used at typical resolutions.

        • Kaleid
        • 7 years ago

        Exactly. Most could probably do well still on an overclocked e6600 to say 3.3Ghz. From 2006.

        • halbhh2
        • 7 years ago

        I agree. One of the points the authors make in their podcast today is that it was a lot of work, and then people say “what about…”.

        But I’m not advocating they review 25 CPUs. Just 15 or so (drop a few, and include a few), including 2-3 of the older/cheaper ones, like one popular Core 2 and the TR-recommended i3-2120.

    • tfp
    • 7 years ago

    Well Bensam123 as I was saying in the other thread this is WHY the new AMD processors are just not a good buy at any price for the desktop for most people. If you want to take power usage into account it is even more limiting.

    • halbhh2
    • 7 years ago

    The question of how the i3 compares to the PhenomII x4 965 is getting even more pointed, as TR reports that AMD is cutting its price even lower:

    “The X4 965 BE’s price cut makes the $81 chip even cheaper than AMD’s Phenom II X2 models.”

    I saw that Newegg had the 965 on sale the other day for $90 (a 1-day kind of sale). Microcenter has had the 965 at $90 for a while now and still gives a $40 combo discount with a motherboard. For instance, a true “econo” build, with significant gaming speed, could start on the framework of a x4 965 and an ASRock 970DE3/U3S3 770 AM3+ (up to 32GB ram, USB 3, etc.), and these together with tax are a scant $130.

    Compare this to Microcenter’s price on the i3 2120 at $99 (Microcenter does give a combo discount for Intel combos too, but for i5 and higher). The cart price with a comparable motherboard ASRock Z77 Pro3 1155 ATX is about $210.

    Is the i3 setup worth the extra $80?

    Well, $80 will buy you a nice SSD boot drive. Or… $80 can upgrade your graphics card from the adequate 7770 (now the GHz edition at that price) to something much more powerful, like an MSI 7850!

    For me the answer is clear. I really do care about price, and I really can use $80 elsewhere.

      • nanoflower
      • 7 years ago

      Microcenter will give the same $50 discount for I3 CPUs. I just added in an Intel 3225 and a Gigabyte GA-B75M-D3H with 8 GB of Crucial Ballistix sport memory for a combined total of $200.31. Going for the same ASrock MB that you mention and a 3225 it shows a price of $185.48. So it looks like their $50 savings starts with the 3225 and not the 3220. They don’t even list a 2120 anymore as I guess that is going away now.

    • Pantsu
    • 7 years ago

    Great article, keep it up Scott!

    It’s sad to see AMD performing so poorly when a game is CPU limited. Perhaps it is an optimization issue, but at the end of the day these are the games that are out there and played, so we need CPUs that run them best. If AMD really wants the 8-core FX CPU to be fully utilized, it’s going to have to do a lot of lobbying to game developers, and even then I highly doubt it’s actually all that easy to utilize the extra cores in a gaming scenario.

    Perhaps things would be better for AMD if Intel had gone the 8-core route too, since it does have a near monopoly, but as long as Intel rules, AMD needs to follow suit, or it just won’t fit into a gamer’s CPU socket. While HSA seems promising for the future, I really doubt AMD can pull off the support side of things. And meanwhile Intel will have come up with a similar solution of its own that ends up being the standard. IMO as a gamer, it’s more likely that gaming will go all cloud before the situation changes, and after that it’s a moot point which one is faster, since all you need is to stream video.

    • Shobai
    • 7 years ago

    I may be a little late to the party on this one, but one aspect of the AMD processors that intrigues me is the HT and NB overclocking. Most places around the ‘net that I’ve seen suggest that overclocking the HT and NB doesn’t do much to increase FPS but does improve other benchmark scores. I wonder what effect it could have on frametime.

    Any ideas?

    • TDIdriver
    • 7 years ago

    The only thing that could possibly make this article better would be the inclusion of something like a Q9550 or Q9400 in the results. It’d be interesting to see how the Yorkfields hold up.

    • Arclight
    • 7 years ago

    Dang, look at that. I really do need a new CPU. Too bad that all Ivy Bridge quads are overpriced in my country.

    • ronch
    • 7 years ago

    I just found out that Ivy Bridge’s PCU (Power Control Unit) consists of an embedded i486-class CPU. A CPU within a CPU. Not sure if TR has already reported it. Here:

    http://www.youtube.com/watch?v=kJzf7jRqsAw

    I would assume Sandy Bridge also uses the same i486 PCU. Very interesting. Thought I’d share it.

    • Xenolith
    • 7 years ago

    In summary, the CPU doesn’t define a gaming PC, the graphics card does. This article really had to work hard to find subtle differences in CPUs.

    • ronch
    • 7 years ago

    I’ve been wanting to grab an FX-8150 ever since it was announced but I’m strongly considering the i5-3470 now that it looks like it’s the best bang for the buck (even before I saw this article), esp. since I don’t OC. But the thing is, I just couldn’t justify spending about $600 on a CPU+Mobo+RAM+graphics card today when my 2.5-year old Phenom II is still very adequate. Perhaps I’ll buy next year or 2014. By then all the CPUs here are either phased out or selling for a lot less.

      • vargis14
      • 7 years ago

      I am pretty sure you can overclock it and get at least a few hundred more MHz out of it, since Ivy’s bclk goes to 106 easily. Plus, I believe you can up the Turbo speeds to, say, a 38x multiplier, maybe more, for all 4 cores at the same time in the BIOS. I am not positive on this, but I recall it was possible.
      If it’s only using 2 cores you can turbo them to 4.0GHz, 3 cores 3.9GHz, and 4 cores 3.8GHz. Plus that’s not messing with the bclk at all. Say you set the bclk to 105MHz x 38 for 4 cores and you get 3990MHz, so the chip does have a little bit of overclocking ability; say if you maxed the turbo settings and set the bclk to 105.5, when using 2 cores it would run at 4220MHz and 4 cores at 4009MHz.
      Besides, it’s plenty fast stock, but without adding any voltage you can get an extra 500-600MHz.

        • ronch
        • 7 years ago

        Yep. The i5-3470 allows overclocking 4 notches above its default multiplier. So folks should probably be able to easily clock it at 3.6Ghz to 4.0Ghz turbo, or something like that. But even if I could, I probably wouldn’t touch the default settings unless I’m doing something where every ounce of performance counts.

    • vargis14
    • 7 years ago

    After reading all these posts from everyone it looks like TR has to do a revised gaming performance review with today’s CPUs :)

    1st and most importantly you have to add the i3-2120 and a Sandy-based Pentium for Intel’s low end.

    2nd you need to add another 7970, or a GTX 690, or a CrossFire or SLI setup of some kind, to show the extra CPU horsepower needed to run dual GPUs over a single GPU and the effect it has on microstuttering/latency if you use 2 cards with a poor CPU. Example: an i3-2120 (not in this review) might just give you great latency with 1 GPU but horrible results with 2 GPUs.

    I know I am not alone with this request to the TR brainiacs. Hopefully a REV 2 of Inside the second: Gaming performance with today’s CPUs
    Does the processor you choose still matter?
    Please show the world again why this site is the best :)

    • JuniperLE
    • 7 years ago

    Good article, BUT
    you missed some of the most important CPUs…
    i3 2100, i7 920, Q6600, E8400, E5200, Phenom II X4 955

    all of them are still very popular CPUs with gamers,
    also, MOST techreport readers probably have a clue about overclocking, and some CPUs are made for overclocking, like the 920, e5200 and 955…

    I based my choice on overclocking for several CPUs I have owned.

      • travbrad
      • 7 years ago

      Of course more CPUs tested and overclocking would be nice, but they do have to draw the line somewhere. If they added all those CPUs you listed AND overclocked all of them it would have basically tripled the amount of testing time they would have to do, which is already quite a time-intensive process.

      Maybe just overclocking a couple of them would have been a nice addition to the results though. SB/IB can easily achieve 40-50% overclocks which is a hugely significant jump in CPU performance. I do think you’d start to run into a GPU bottleneck in most games with “only” a 7950 though, since it was already happening at stock speeds in Crysis2/BF3.

        • JuniperLE
        • 7 years ago

        I fully understand BUT, I think some CPUs cannot be missed in a test like this, like the i3 2100 and at least one Core 2…
        as for the 655k, or the excess of i5s I’m not sure they are needed (I mean, don’t expect any big difference from the 2400 to the 2500k when running at stock speed)…

        I’m not really wanting to criticize for nothing; I just think it’s something they could consider for future tests…

        as for overclocking, a single sandy bridge a single FX and a single PII overclocked would be enough just to give an idea…

          • Zoomer
          • 7 years ago

          It will also be revealing as to what exactly is the limiting factor:

          Frequency, number of real cores/scheduling delays due to HT/other threads. A test to determine the effect of memory latency/bandwidth would be nice as well.

          So, addition of the 2600k/ivb equiv at stock and overclocked to 4.6 or so, with and without HT. 🙂

          • Jason181
          • 7 years ago

          Why would you include a Core 2? It’s not like there’s a latent market of enthusiasts just waiting to pull the trigger on a cutting-edge Core 2 duo system.

          The 655k at least gives an idea of how the i3-2100 would perform. The i5s were included because they already had the data, so why not include them?

          Overclocking is asking for problems from a benchmarking standpoint. People already are claiming biased reporting where there is none. Imagine the furor when the Intel cpu achieved a lower than average overclock, and the AMD cpu achieved an exceptionally high overclock, or vice versa.

          Overclocking can vary from cpu to cpu within the same family and even stepping. I think for that information you’d be better served by going to the launch review of the applicable cpu where usually some sort of overclocking is performed to get an idea of headroom, but using those in a comparison review like this is probably just asking for a headache and lots of angry emails.

            • Kaleid
            • 7 years ago

            I’d be interested to see how an E8400 @ 4GHz performs. Have all the upgrades really been necessary?

            • JuniperLE
            • 7 years ago

            because people are still using Core 2, and it should be interesting to compare, see the gains, put things in perspective, how much it progressed or not for gaming.

            655k and i3 2100 are very different CPUs…
            the 655k is based on Nehalem but with an external (and slow) memory controller, and a higher clock, with Turbo…
            i3 2100 is far more relevant, and performance is probably not the same.

            if it’s really a problem, they could be conservative with their OC config, no need to seek the absolute highest possible OC, just set a reasonable and average target… 4.5GHz or whatever for ivy/sandy OR the FX is easy in most cases…

            • Jason181
            • 7 years ago

            If you look at the gaming benchmarks on this page (http://www.anandtech.com/bench/Product/144?vs=289), they aren’t all that different. As I said, the 2100 is bound to be a little faster due to architecture. Having owned a Lynnfield-based i3 and then moving to Sandy Bridge, the clock-for-clock performance difference is not that great in games.

            They said they’d probably add the 2100 later, but in the meantime I think the architectural differences between the two are being overblown a bit. If you’re actually looking at the transistors, there’s a huge difference. But if you’re looking at performance, in most games they perform somewhat similarly (although with an IPC bump for the 2100). There are instances where the newer architecture distinguishes the 2100 as superior, but the differences still aren’t nearly as comprehensive as the design changes between Lynnfield and Sandy would lead you to believe they might be.

            • JuniperLE
            • 7 years ago

            well, the link you provided is OK but they only tested older games, and they are using a different, less detailed methodology

            but it makes me think that is even more important to test more CPUs, in one of the games the 2100 was 15% ahead the 650 (which is the same as the 655k at stock clocks), and I remember seeing a test in other site showing even bigger margins in some games… this tells me that it would be really interesting to have the i3 (and other CPUs), as I said I think it’s a pretty good article, but it would greatly benefit from a bigger variety of CPUs and settings (overclocking, some Core i3s/lower end sandy bridges, Core 2s and a few more Denebs)

            i5 655k = Clarkdale, not Lynnfield,
            Clarkdale = pretty much a lynnfield with half the L3 and an external (separate die on the same package) PCIE/memory controller (with much worse latency), while the 2100 is just a regular sandy bridge with all the same architecture apart from obviously the reduced number of cores/l3.

            • Jason181
            • 7 years ago

            The page I linked is just their benchmark comparison… the benchmarks come from full articles. Anandtech actually predates the techreport, although I kind of prefer the techreport. I only say that because you can look at their methodology if you want. I noticed that there are some games where the architecture does make a significant difference (other than just the improved ipc).

            You can compare any cpus they’ve benchmarked for a ways back, or do a comparison by benchmark that shows all cpus. They don’t give the fine-grained analysis that this article does, but they do give a general idea.

            • Zoomer
            • 7 years ago

            “Overclocking is asking for problems from a benchmarking standpoint. People already are claiming biased reporting where there is none. Imagine the furor when the Intel cpu achieved a lower than average overclock, and the AMD cpu achieved an exceptionally high overclock, or vice versa.”

            No, no, the increase in frequency is basically to see if the game engines with spikes benefit from a higher operating frequency. Basically, it’s to see how much frequency still counts.

            Well-coded engines probably took a hard realtime approach to the problem, which is evident from the BF3 graphs, which exhibit very nice frame times, with the frames all meeting the requirements and with little jitter across many different-speed CPUs.

            • Jason181
            • 7 years ago

            I wasn’t talking about the mechanics of it, but the “politics,” so to speak. How long do you tweak each machine to get the best oc, or if you don’t go with the best, how do you pick your target? Which cpu(s) do you choose to test oced and why? I fully understand the interest in it, and as an ocer myself, I am interested too. I just don’t think it’s feasible from publication perspective.

    • Bensam123
    • 7 years ago

    I’m going to double post here, but the direction is a bit different from my original. So, I’ve generally been on board with the whole frame time marketing thing TR is doing… I still don’t believe that the amount of time spent beyond 50ms should be used instead of the amount of frames drawn below 20FPS. It’s still confusing and you have to explain it each time in each review so people understand that you’re simply changing the metric used to express FPS below a certain threshold.

    That aside, results are now all based around the 99th percentile. I’m going to say the majority of the readers don’t understand this metric and I still am having problems fully grasping this. I’ve reread the original inside the second article quite a few times over the past months when I actually try to figure it out again and I still have a problem grasping the 99th percentile frame time.

    This is the original explanation given of the 99th percentile frame time:

    “One way to address that question is to rip a page from the world of server benchmarking. In that world, we often measure performance for systems processing lots of transactions. Oftentimes the absolute transaction rate is less important than delivering consistently low transaction latencies. For instance, here is an example where the cheaper Xeons average more requests per second, but the pricey ‘big iron’ Xeons maintain lower response times under load. We can quantify that reality by looking at the 99th percentile response time, which sounds like a lot of big words but is a fundamentally simple concept: for each system, 99% of all requests were processed within X milliseconds. The lower that number is, the quicker the system is overall. (Ruling out that last 1% allows us to filter out any weird outliers.)”

    I understand average FPS and I understand frames drawn below a certain threshold. But as best I can tell the 99th percentile is a moving baseline that shows the amount of time that the graphics card spends drawing 99 percent of the frames. Wouldn’t the length of the benchmark impact this metric? The play-through benchmarks don’t end at an absolute time. What is the formula for getting the 99th percentile frame time as well? What is the 1% of frames that are excluded? How is this figured out? What constitutes a ‘frame’ for the frame number?

    If you’re attempting to figure out how long it takes to render the same amount of frames, you would use the same amount of frames (the graphs don’t have the same number of frames). If the ‘frame number’ graphs are based on running for X number of minutes, then the axes are flipped (as is the whole frame time thing), since frame time isn’t averaged. If you’re simply running to a location then you would need to average ‘frame time’ or you would end up with different results.

    Lower is better, yet in the conclusion, higher is better, which makes relatively little sense. That would mean there has to be an absolute maximum to convert 99th percentile frame times to. There is no baseline. Either that or the conclusion at the end is not the 99th percentile frames, but something else. There is no 99th percentile frames a second as listed on one of the axes. As frame time itself is derived from the amount of time the graphics card spends rendering frames, it seems like you’re mixing two different types of scales, which yields results which aren’t quantifiable. http://en.wikipedia.org/wiki/Level_of_measurement

    (Yeah, I’m confused on this. I really wish I still knew some of my stats professors from college so I could have them take a look at this. Something here really doesn’t seem right, but I can’t put my finger on it.)

      • Zoomer
      • 7 years ago

      Let’s say we have a series of frame times, in ms:

      960, 10, 10, … (total of 990 10 ms times) 50, 200, … (total of 8 200 ms times) 200, 500.

      Total frame time: 13000 ms = 13 seconds
      Total frames = 1000
      Avg frame time: 13 ms
      Avg frame rate: 1 / (13 milliseconds) = 76.9 hertz (fps)

      Now, this avg frame rate will seem like everything is perfectly smooth.

      However, the 200, 500 and 960 ms frame times will mean that you will get stutters. That will be evident because the instantaneous frame rate drops to 5, 2, 1 fps, respectively.

      To get 99th percentile frame time, sort the times:
      10, 10, … (total of 990 10 ms times), 50, 200, … (total of 8 200 ms times) 200, 500, 960

      Remove the top 1% (top 10 list items):
      10, 10, … (total of 990 10 ms times), 50

      The 99th percentile frame time is thus 50 ms.

      Now for real data, it likely won’t be as extreme as this example, and there will likely be clusters of frame times at 50 ms and other points.

      The benchmark is run from one point in the game to another, and frame times are async with that. Therefore, different runs will yield a different number of frames, with higher-performing hardware yielding higher frame rates and thus a higher number of frames.

      Edit: I realized I should have included a cluster of 50 ms times in the example. The thing for real data is that there will likely be a good amount of frames near the 50 ms lull as well, which translates to ~20 fps. This 99th percentile data is meant to show that lull. The worst time is not used as that could be noise, ie. from badly coded engines, external events, or other random freakouts.

        • Bensam123
        • 7 years ago

        I understand that lower FPS will give you stutters. You don’t list a formula for finding frame time either and while I appreciate the ambiguous example you gave, I don’t see a relationship between the numbers.

        I understand removing outliers at 1%, that’s not an issue, but how do you end up at 50 ms after listing times at 10 ms a piece? Would you put your work into a formula?

        Removing 1% outliers would not make up for the difference in results from average FPS to average frame time (frame time has to be averaged or results would be much higher). There would be a direct relationship. Sorting doesn’t do anything when analyzing results besides increasing readability.

        (Refresh rate or hertz isn’t the same thing as FPS)

        • Bensam123
        • 7 years ago

        .

      • Bensam123
      • 7 years ago

      I’ve been going over this over and over in my head trying to figure it out myself, but I can’t. Best I can tell no one here actually knows how to get frame time besides TR, yet we’re basing buying decisions off of a formula we can’t actually see. We’re discussing results we ourselves can’t actually get to and taking TRs word for it. Conclusions to articles are now completely summed up by these mystical results. Testing methodology was supposedly publicly available and repeatable (or used to be here).

      If frametime was derived from FPS (which it suggests as 1000/frames) then there would be a direct link between average FPS and frame time as the results look averaged. But there isn’t as they’re getting different results with 99th percentile frame time, then with average FPS.

      So what I do know is 99th percentile removes 1% outliers… That would slightly adjust the results, but not enough to show as big of shifts as we’re seeing. Frametime is supposed to be derived from FPS, but it’s not. 1000/divided by average FPS should get us close to frame time without removing the 1%.

      Possibly frame time is simply adding up the MS of all the frames (which is basically all the FPms added together), removing the 1% outliers, dividing it by MS to give us the average? That gives a more likely number. But still doesn’t take into account all of the ‘spikes’ that frame time is supposed to account for. Removing the 1% would be counterintuitive to that end and this method would be close to average FPS.

      Frame time is supposed to be a measure of these spikes, only it’s presented in a time like format. This is deceptive though as frame time is simply FPS or FPms (the speed at which it’s measured is different) that is converted to a time format. Milliseconds per frame instead of frames per milliseconds.

      Ideally frame time would measure the latency for each frame (which it’s purported of doing), but we are incapable of doing this. There is no way to know how long it takes a frame to be rendered after it hits the video card till when it comes out the other end. It is inferred through FPS or FPms, but FPS is simply a measure of bandwidth. How much a video card can crap out, how fast. That’s why this whole thing is deceptive.

      Unless TR actually has a method of measuring latency and not FPS or FPms that they haven’t told us about, this doesn’t make sense. Variance and standard deviation would capture everything that frame time was supposed to be purported of doing (except removing the 1%), but the results of frame time are not in a format that is conducive to std. dev. or variance.

      • BobbinThreadbare
      • 7 years ago

      “What is the formula for getting the 99th percentile frame time as well? What is the 1% of frames that are excluded? How is this figured out? What constitutes a ‘frame’ for the frame number”

      Measure how long each frame takes to render, exclude the 1% slowest frames, report the largest number left. I find it hard to believe you don’t know what a frame is, those are the images the graphics card produces and sends to the monitor.

      “Lower is better, yet in the conclusion, higher is better, which makes relatively little sense. That would mean there has to be an absolute maximum to convert 99th percentile frame times to. There is no baseline. Either that or the conclusion at the end is not the 99th percentile frames, but something else.”

      The graph in the conclusion is clearly labelled as 99th percentile frames per second. I don’t find this confusing at all. Take the time the 99th percentile frame takes to render and count how many it would produce in 1 second.

      “But as best I can tell the 99th percentile is a moving baseline that shows the amount of time that the graphics card spends drawing 99 percent of the frames.”

      It’s measuring the length of time of a single frame only, the frame that is the 99th percentile longest frame to produce. Does that help?

        • Bensam123
        • 7 years ago

        I do know what a frame is. The 99th percentile frame time graph isn’t just one frame though. How does he arrive at that number?

        If that is the complete method you should be able to arrive exactly at his results simply by plugging in the numbers.

        If there is a direct relationship between frame time and FPS then the average FPS would simply mirror the 99th percentile graphs, but it doesn’t. Taking the 99th percentile frames and counting how many would be produced in 1 second is FPS.

        ’99th percentile frame time’ is a range of 99% of all frames excluding 1%; it specifies more than one frame. I could most definitely be confused here, as I've always assumed from the wording and how it was originally described (via the original quote) that the 99th percentile frame time is a distribution of values representing the operation of a graphics card, not a single value.

        It’s starting to, thanks.

          • BobbinThreadbare
          • 7 years ago

          [quote<]The 99th percentile frame time graph isn't just one frame though. How does he arrive at that number? [/quote<]

          No. It is a single frame. Or, more accurately, 99% of all frames are produced in that amount of time or less.

          [quote<]If there is a direct relationship between frame time and FPS then the average FPS would simply mirror the 99th percentile graphs, but it doesn't. Taking the 99th percentile frames and counting how many would be produced in 1 second is FPS.[/quote<]

          You keep claiming that 1% of frames couldn't skew the results very much, but I don't see you bringing any evidence. Imagine this example: 100 frames are produced, 98 of them take exactly 20 ms, and the other 2 each take 100 ms. The FPS would be 46, and the 99th percentile would be 100 ms. Another card takes exactly 40 ms for all 100 frames. Its FPS would be 25 (seems much worse), but its 99th percentile would be 40 ms (much better). That's the point of measuring this way. No one cares much about the average frame; it's the slow frames that interrupt your gaming session.
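
          If it helps, that example is easy to check with a few lines of Python. This is just my sketch of the arithmetic, not anything from TR:

[code<]
card_a = [20.0] * 98 + [100.0] * 2   # 98 quick frames plus 2 long stalls
card_b = [40.0] * 100                # perfectly steady, but slower on average

def avg_fps(times_ms):
    return 1000.0 * len(times_ms) / sum(times_ms)

def p99_ms(times_ms):
    ordered = sorted(times_ms)
    return ordered[int(0.99 * len(ordered)) - 1]   # slowest frame left after dropping the worst 1%

print(avg_fps(card_a), p99_ms(card_a))   # ~46.3 FPS average, 100 ms 99th percentile
print(avg_fps(card_b), p99_ms(card_b))   # 25.0 FPS average, 40 ms 99th percentile
[/code<]

          The steady card looks worse by the FPS average and better by the 99th percentile, which is exactly the point being made here.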

            • NeelyCam
            • 7 years ago

            [quote<]No. It is a single frame.[/quote<]

            Not this.

            [quote<]99% of all frames are produced in that amount of time or less.[/quote<]

            This. The 99th percentile is a measure of how fast the card can render frames [i<]consistently[/i<]. Taking the 99th as the figure of merit is somewhat arbitrary but reasonable IMO.

            • Bensam123
            • 7 years ago

            So the 99th percentile frame time IS based off a distribution?

            How is this number found? The other posters are asserting it's found by looking at all the frames, disregarding the slowest 1% of frames, and then taking the next slowest frame that pops up.

            That seems consistent, but it would be a terrible way of measuring the overall proposition of a CPU or GPU, because it's not generalizable to the entire processor. One point in a plot doesn't represent the whole plot; that's why we use things like mean, median, mode, min, max, standard deviation, and possibly variance. You definitely couldn't use it for a conclusion or summary.

            Maybe this isn’t getting enough attention because my initial post was too long.

            • BobbinThreadbare
            • 7 years ago

            I’m not sure what you mean by generalizable, but the fact here is that TR is not interested in average performance, but worst case performance (minus a very small fudge factor).

            Read my example again, it doesn’t matter if 98% of the frames are produced in a single millisecond, if the 2% worst ones cause microstutter.

            • Bensam123
            • 7 years ago

            Average isn’t some mean old ultra bad statistic that is the doom of benchmarks everywhere. It’s possible to get the mean from a more specific distribution (which is what I thought the 99th percentile frame time was) and show that.

            [url<]http://en.wikipedia.org/wiki/Descriptive_statistics[/url<]

            99th percentile frame time excludes 1% of the 2% that you mentioned. I thought that was weird as well. Benchmarks are run multiple times in order to take that 1% into account. One frame isn't the same as multiple frames in that 2%, either. Such as multiple smaller hiccups with greater frequency, which would be quite noticeable compared to one big stutter that only happens once. If 99th percentile frame time is only one frame, then it's missing the entire frequency aspect.

            • BobbinThreadbare
            • 7 years ago

            [quote<]Such as multiple smaller hiccups with greater frequency, which would be quite noticeable compared to one big stutter that only happens once.[/quote<]

            That's why they also report total time spent above 50 ms.
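
            For anyone wondering what a metric like that looks like in practice, here's a rough sketch. I'm going from memory on whether TR counts only the excess beyond the cutoff or the whole frame, so treat this as an approximation rather than their exact formula:

[code<]
def time_beyond(times_ms, threshold_ms=50.0):
    # Add up the portion of each frame time that exceeds the threshold.
    # Many small hiccups and one huge stutter can both rack up a large
    # total here, which is what makes it a useful frequency-aware number.
    return sum(t - threshold_ms for t in times_ms if t > threshold_ms)

print(time_beyond([16.7] * 200 + [80.0] * 5))   # 150.0 ms spent beyond the 50 ms mark
[/code<]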

            • Bensam123
            • 7 years ago

            But they only use 99th percentile frame time in the conclusion. That’s where you put graphs with the most objective meaning. Conclusions sum up everything.

            While they also report total time spent beyond a certain threshold, it doesn't roll everything into one logical number (that's also why variance and standard deviation are commonly used for this in addition to the mean).

            If 99th percentile frame time is just one frame, this could be a very arbitrary number, as there are a lot of frames that fall below a threshold, and it wouldn't be representative of a GPU's overall performance, especially when you're kicking out 1% of outliers. This could easily exclude consistently repeatable results simply because a GPU doesn't have a hiccup on a certain run.

            • halbhh2
            • 7 years ago

            Good question.

            • Bensam123
            • 7 years ago

            I know how outliers affect an average. I've taken quite a few statistics courses (that was part of my major).

            I was under the impression that frame time was a distribution though. As a single frame cannot give a full perspective, that is something you wouldn’t want to use to find a conclusion or summary. Would you make your entire buying decision based off of [i<]one[/i<] frame?

            • BobbinThreadbare
            • 7 years ago

            As Neely pointed out, I misconstrued what was going on. The 99th percentile covers 99 percent of the frames.

            [quote<]Would you make your entire buying decision based off of one frame?[/quote<]

            If a single frame took something ridiculous like a full second to render, would you still buy the product? Assuming this is repeatable and consistent?

            • Jason181
            • 7 years ago

            Another thing to keep in mind is that it’s very unlikely that you’re going to run into a situation where the 99th percentile frame is an outlier since they run each test 5 times, averaging the result (I assume they average the 99th percentile result too).

            • Bensam123
            • 7 years ago

            So you believe ’99th percentile frame time’ is just one frame and not a distribution of frames?

            Neely believes it’s a distribution, Bob originally thought it was one frame then switched to two, I originally thought it was a distribution, but I’m still up in the air and that’s why I’m asking these questions.

            • Jason181
            • 7 years ago

            Take all of the frames, sort by frametime. Take the slowest 1% of frames and discard them. Take the slowest remaining frame.

            So yes, it’s a single frame from a distribution selected by a percentage. But since the test is done 5 times, it’s very unlikely that the slowest remaining frame is going to differ by 100% from the next slowest frame since they’re averaging. If you didn’t average, you could discover a “cliff” where if there was one more or less frame in the distribution it could cause the 99th percentile frame to be much slower or faster.

            Say you have 1000 frames. Discard 1% (10 frames) and you end up with the 990 fastest frames (which all rendered in 15 ms), so you end up with a 99th percentile frametime of 15 ms.

            Now imagine there are 1000 frames and 989 render in 15 ms and 11 render in 100 ms. Now discarding 10 frames leaves 989 15 ms frames and one 100 ms frame. Your slowest frame is 100 ms, so your 99th percentile frame is 100 ms.

            An extreme example, but averaging five runs will soften the blow of the outlier.
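
            Since the procedure is short, here is that example worked through in Python. The averaging step is Jason's assumption (and mine) about what TR does across the five runs, so take it as a sketch:

[code<]
run_a = [15.0] * 990 + [100.0] * 10   # 99th percentile works out to 15 ms
run_b = [15.0] * 989 + [100.0] * 11   # one extra slow frame pushes it to 100 ms

def p99_ms(times_ms):
    ordered = sorted(times_ms)
    return ordered[int(0.99 * len(ordered)) - 1]   # slowest frame left after dropping the worst 1%

print(p99_ms(run_a), p99_ms(run_b))   # 15.0 vs. 100.0 -- the "cliff" described above

# Averaging the per-run results softens a one-off cliff like run_b:
runs = [run_a, run_b, run_a, run_a, run_a]
print(sum(p99_ms(r) for r in runs) / len(runs))    # 32.0 ms
[/code<]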

            • halbhh2
            • 7 years ago

            Jason, kudos on the clear and well written explanation of the 99% thingy. I understood it intuitively, but it is nice to see the straightforward description.

            • Jason181
            • 7 years ago

            Why thank you sir…

            • Bensam123
            • 7 years ago

            This can't consist of 'one' frame and 99 percent of frames at the same time. They're two completely different things and impossible to merge; that is why this is confusing.

            It’s either average of a distribution or one frame.

            If this is 99% of all frames, then this is no different than average FPS with the 1% withheld. That would be representative of neither 'spikes' nor a distribution of those spikes.

            • PixelArmy
            • 7 years ago

            I think the confusion is because the language is similar. It is a single value out of a distribution.

            Frame time = Inverse of instantaneous FPS. 100 FPS -> 10 ms frame time. Probably derived from FPS graph.
            99 percent of frames = The 99% of frames sorted by lowest frame time. [b<]This is many frames.[/b<]

            99th percentile frame time = Highest frame time out of the 99%, i.e. 99% of frames are rendered at this speed or faster. [b<]This is a single value.[/b<]

            Now of course the 99th percentile frame time graph obviously is many of the singular values from different cards compared to each other... (Hopefully, [i<]any[/i<] [u<]graph[/u<] has more than one value, otherwise it would be meaningless.) This is different from the plain old graph of the 99% of frame times.

            If a card is running "smooth", the average FPS and the 99th percentile frame time should correlate well. If a card is stuttering in more than 1% of the frames, this will detect those stuttering frames, since those frames now show up in the set of 99% and increase the singular 99th percentile frame time. Now if you were to invert this value back to FPS, it'll make that lower. This is graphed vs. $$$ on the conclusion page.

            • BobbinThreadbare
            • 7 years ago

            Great explanation; you did a better job than me.

            • Bensam123
            • 7 years ago

            Yes, part of the confusion is from how it’s worded. Originally I thought 99th percentile frame time was a buzz word.

            Yup, I know frame time is simply 1000/FPS (or supposedly). One of the things I mentioned in the above posts is how frame time is supposed to represent latency, but it instead represents bandwidth… if this is truly how they measure it. TR never said how they actually derive frame time (even though we can surmise this).

            So you believe that 99th percentile frame time is simply one frame after excluding 1% of the lowest performing frames out of the distribution?

            You’re good till here.

            [quote<]Now of course the 99th percentile frame time graph obviously is many of the singular values from different cards compared to each other... (Hopefully, any graph has more than one value otherwise it would be meaningless.) This is different from the plain old graph of the 99% of frame times[/quote<]

            So you're saying that you'd hope the 99th percentile frame time is representative of a distribution? In other words, representative of a range of values or an average (even though it's not). Where is the plain graph of 99% of frame times? Did you just word it that way to show a difference between the two?

            Yes, I agree... If frame time is simply one frame, and that's all it is, taken after 1% is removed, then it's a relatively worthless number. The average of all frames below 16.7 ms would be a good place to start. Or overall variance or standard deviation, which I've mentioned before.

            I may be incorrect here, but you keep mixing 99% of all frames and 99th percentile frame time together as if they represent the same thing. They're very different things. One point out of a distribution isn't the same as the whole distribution. Simply excluding 1% of a distribution won't help accentuate the lower end of that distribution. Typically, excluding outliers is very much frowned upon at that. They'd be better off taking, say, a 75-100 percentile distribution and finding an average of it.

            That's another confusing aspect. When 'percentile' is used in statistics it talks about a quarter chunk of a distribution. Here, we assume, it simply represents the % in "99th percentile." Breaking down the wording:

            99th = The percent of the overall distribution that is used (assuming the better-performing 99% is taken)
            percentile = Used to represent that it's a portion of the distribution
            frame = Showing that it is just one frame
            time = Showing what unit it is in: time

            I read this as: 99th percentile frame time = one term that represents a benchmarking statistic, such as 1% of a distribution that is averaged.

            It's sort of interesting that none of this was explained to us and we're basically piecing it together ourselves. There was no formal explanation of what 99th percentile frame time is besides what I quoted in the opening post.

            • BobbinThreadbare
            • 7 years ago

            They're not trying to represent the entire distribution, just the worst-case scenario of each distribution, because no one cares about the time the GPU is pumping out frames at 100 per second.

            • PixelArmy
            • 7 years ago

            [quote<]There was no formal explanation of what 99th percentile frame time is besides what I quoted in the opening post.[/quote<]

            No, it was explained fairly straightforwardly in the original piece, and most of the people talking have said the same thing over and over, just in different ways, 'cause it's not sticking...

            [quote<]I may be incorrect here, but you keep mixing 99% of all frames and 99th percentile frame time together as if they represent the same thing.[/quote<]

            No no no! When I say 99% of all frames, I'm referring to the entire population (minus the 1% we're excluding). When I use the word "percentile", that is a singular value from that population.

            [quote="PixelArmy"<]Now of course the [b<]99th percentile frame time graph[/b<] obviously is many of the singular values from different cards compared to each other... (Hopefully, any graph has more than one value, otherwise it would be meaningless.) This is different from the [u<]plain old graph[/u<] of the 99% of frame times[/quote<]

            I'm actually not sure where the confusion lies... Example of the bold = [url<]https://techreport.com/r.x/cpu-gaming-2012/skyim-99th.gif[/url<] (the single time from many cards). Example of the underline = [url<]https://techreport.com/r.x/cpu-gaming-2012/skyrim-intel-low.gif[/url<] (OK, this includes all 100% of frames: all the times from a single card). And I was being a smart ass, saying a graph with one data point is clearly not useful and shouldn't be graphed. Ignore the original sentence's parenthesis...

            [quote<]So you're saying that you'd hope the 99th percentile frame time is representative of a distribution? In other words, representative of a range of values or an average (even though it's not).[/quote<]

            You're right, the percentile is not the average. It's a percentile! I assume you know what the "median" is? That's the 50th percentile, i.e. half better, half worse. [url<]http://en.wikipedia.org/wiki/Percentile[/url<]

            The percentile is [i<]representative[/i<] of the max time in the remaining 99%. As people have said, it represents the worst time (keep in mind the worst time may be repeated multiple times)... You might have different feelings on the usefulness. You seem more concerned about average performance vs. TR's concern with worst-case performance. Fair enough; you still have the FPS values...

            Honestly, the only change I'd favor is making the graphs of this style (ex. [url<]https://techreport.com/r.x/cpu-gaming-2012/skyrim-beyond-16.gif),[/url<] show the percentage of the benchmark runtime rather than absolute time. They sometimes tell you in the text what the benchmark length is, but this would be more explicit.

            • BobbinThreadbare
            • 7 years ago

            It’s not a mean, it’s the worst value out of the 99%.

    • Xenolith
    • 7 years ago

    Do you have the procedures published somewhere so others can replicate your study?

    • mboza
    • 7 years ago

    Can you plot the frame times against cumulative time, rather than frame number? It will hide the faster processors producing more frames, but that is restated in the average FPS graph anyway. It will also make events in the test happen at the same point in each graph, and make it more obvious if some processors cope better than others.
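
    For what it's worth, that plot is easy to produce from a per-frame log. A rough sketch follows; the log file name and its one-value-per-line format are made up for illustration:

[code<]
import matplotlib.pyplot as plt

# Hypothetical log: one frame time in milliseconds per line.
with open("fx8150_skyrim_run1.txt") as f:
    frame_times = [float(line) for line in f]

elapsed_s = []
total_ms = 0.0
for t in frame_times:
    total_ms += t
    elapsed_s.append(total_ms / 1000.0)   # seconds into the test run

plt.plot(elapsed_s, frame_times)
plt.xlabel("Time into test run (s)")      # in-game events now line up across CPUs
plt.ylabel("Frame time (ms)")
plt.show()
[/code<]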

    • nafhan
    • 7 years ago

    Not sure if someone’s already mentioned this, but could we get a compressed version of the “value” graph that only goes up to $500 on the price axis? That would make it much more readable…

      • indeego
      • 7 years ago

      Close your right eye.

    • Meyaow
    • 7 years ago

    Brilliant follow-up to the initial latency test. Two suggestions to make your graphs more readable.

    1. Consider using logarithmic axes, which may visually separate some graphics cards in the graphs. One can try it on one axis (x or y) or both (x and y). Test it on [url<]https://techreport.com/r.x/cpu-gaming-2012/crysis2-latency-intel.gif[/url<] to see if you get an improvement.

    2. Try to combine two graphs, like FPS and 99th-percentile latency. Candidate graphs could be [url<]https://techreport.com/r.x/cpu-gaming-2012/crysis2-fps.gif[/url<] and [url<]https://techreport.com/r.x/cpu-gaming-2012/crysis2-99th.gif[/url<] Let "fps" be the x-axis and "99th" the y-axis, and let each card be represented by a dot (with its name attached). This would yield a graph identical in style to [url<]https://techreport.com/r.x/gpuvalue/crysis_plot16.png[/url<]

    It would be nice just to hear whether any of these graphs worked better or not.
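
    A rough sketch of what suggestion 2 could look like; the CPU names and numbers below are placeholders rather than TR's data:

[code<]
import matplotlib.pyplot as plt

# Placeholder results purely for illustration: (average FPS, 99th percentile frame time in ms).
results = {
    "CPU A": (72.0, 21.0),
    "CPU B": (55.0, 34.0),
    "CPU C": (68.0, 45.0),   # decent average, but stuttery
}

for name, (fps, p99) in results.items():
    plt.scatter(fps, p99)
    plt.annotate(name, (fps, p99))

plt.xlabel("Average FPS")
plt.ylabel("99th percentile frame time (ms)")
plt.yscale("log")   # suggestion 1: a log axis spreads out tightly clustered cards
plt.show()
[/code<]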

    • sidtai
    • 7 years ago

    I have liked this method of measuring frames since it came out on Tech Report, but there are some suggestions I would like to make. First, why did you not compare overclocked results? Overclocking is free performance for AMD processors; I'm sure they won't be as bad overclocked. I can't see anyone not overclocking their CPU/GPU unless they have a power constraint.
    Secondly, I think you should use CPU-intensive games for comparison; e.g., other than Skyrim there is Far Cry 2, and also not the single-player but the multiplayer of Battlefield 3. Hop on a server with a B2K map like Strike at Karkand with 64 players, explosions and buildings being destroyed everywhere, and you will know how threaded BF3 multiplayer is. I think just comparing single-player is a bit biased in favor of high-IPC Intel processors.

    edit:typos

    • LastQuestion
    • 7 years ago

    You guys must not be aware of how Skyrim utilizes CPUs. I'm honestly surprised you didn't find this out while testing. Skyrim doesn't know how to utilize more than 2 cores on a processor. Windows will distribute that load across all of the cores, but it's still only two cores' worth of work. As such, using it as a benchmark is okay… except when doing it in a multi-tasking scenario.

    You see, back when people were figuring out Skyrim's performance issues, I ran some tests of my own: namely, checking CPU load and then running an encode in the background. Running the encode in the background did very little to Skyrim performance; however, running the encode in the background while playing BF3 made it all too noticeable that BF3 was fighting over resources.

    What everyone came to find out is that performance gains in Skyrim occurred due to more modern CPU architectures and higher clocks. Core count did practically nothing for Skyrim.

    I believe you guys should edit this article and replace the Skyrim+encode with BF3+encode. Skyrim is a terrible game to benchmark CPUs.

    • chuckula
    • 7 years ago

    Damage: This was brought up by Goty below, but it might have gotten buried, so I wanted to re-post here: in future benchmarks of this kind for multi-tasking, would it be possible to show the performance of both the transcoding and the game? Goty had an excellent point that the relative performance differences in pure gaming and multi-tasking are important, and seeing the actual transcoding performance during the game run would help to paint a more complete picture.

      • Damage
      • 7 years ago

      We’d have to change how we test in order to do what you ask. I think it’s feasible with a different transcoding scenario, perhaps, but the problem is that encoding times vary, and if they end before our gameplay session is complete, our gaming results are messed up.

      I also somewhat disagree with Goty and others about two things. First, I don’t think the gaming performance delta between with-encoding and without-encoding is more important than, you know, the actual gameplay smoothness delivered. So what if a CPU has a nice delta between the two, if it’s slower than other options in absolute terms?

      Second, I don’t think our test is *flawed* because we chose just to report gaming performance with a background task and not gaming + transcoding numbers (which would complicate the task quite a bit). We may not be delivering the test he prefers, but we are showing something of worth, with numbers that matter in a practical sense. I don’t believe only showing one side of the scenario, the gaming performance, constitutes a problem. Of course, folks are free to run SysMark if they want to see many tasks flying around at once and spitting out an overall performance number.

        • derFunkenstein
        • 7 years ago

        The big thing is that the transcode happens without dropping frames. As long as you can see that the encoder isn’t losing anything, I agree.

        • Goty
        • 7 years ago

        To clarify the statements chuckula is referring to, I’m not at all suggesting that the raw performance numbers aren’t relevant (they most certainly are), but rather that the performance hit taken by each CPU is another part of the same picture and is still relevant to the reader/consumer. The same goes for showing the encoding times while gaming; no, the test isn’t worthless without them, but it is another interesting part of the “multitasking” metric. Again, as I mentioned below, how effective a CPU is at multitasking is a difficult thing to measure; I’m just suggesting ways in which such a test could be made more informative.

        • kc77
        • 7 years ago

        I would have to agree with Chuck on this one. Games usually use no more than two cores. Video encoding isn't branch-heavy, but it is very sensitive to IPC AND the number of cores you have. When you are encoding, the game really isn't going to be affected too much, but your video encode definitely will be, by orders of magnitude, if you are using those other cores for something else.

        I understand the premise of the article, but if you're multitasking with another program, it would be nice to see those results too.

    • Chrispy_
    • 7 years ago

    Scott, kudos and thanks.

    One thing that would make this great article an [i<]incredible[/i<] article is more focus on the lower end. We can now show from your latency data that almost every modern CPU provides [u<]acceptable[/u<] gaming performance, with even the lowly A8 in this article providing decent fluidity for all but fractions of a second over the course of a minute. What we don't know is how cheap you can go before gaming is unacceptable, and of course, whether it's worth upgrading really old hardware like s939 and s478 platforms.

    Additionally, so many forum threads (here and elsewhere) focus on low-budget builds: whole gaming PCs for well under $500. I assume this is from kids on low income working maybe one or two days a week between college lectures.

    As a polite request, [i<][b<]if and when you ever get the time[/b<][/i<], I think there is an entire interweb's worth of upgraders and beginners who are interested in seeing these tests with entry-level Intel chips such as the G540 or G620, and the similarly priced A4-3300 or dual-core Athlon II. When you can buy a 7750 and a G540, motherboard, and 4GB of RAM for around $200, [b<]the single most important question for those on a tight budget[/b<] is [i<]should you?[/i<]

    • grantmeaname
    • 7 years ago

    [quote<]From [b<]Ivy to Sandy[/b<], some of the potential speed benefits of the die shrink were absorbed by the reduction of the desktop processor power envelope from 95W to 77W.[/quote<] Last page, halfway down. Should be "Sandy to Ivy".

      • Damage
      • 7 years ago

      Fixed, thanks.

    • Dodger
    • 7 years ago

    Typo at the bottom of the Crysis 2 page: “Here, the relatively affordable Core i5-3740” should be i5-3470 right?

      • Damage
      • 7 years ago

      Fixed. Thanks.

    • flip-mode
    • 7 years ago

    This is a very good article. Only thing missing is an i3. It doesn’t make me happy to say it, but my guess is an i3 would have likewise blown the doors off all the AMD processors.

      • Damage
      • 7 years ago

      Amazing how we can test 18 CPUs and folks cry out: if only you’d tested 19! But I see the reason the Core i3 feels like it’s missing.

      Several things about that:

      First, we can certainly add it later. These game tests are just taken from our CPU test suite, and the intention always was to fill out our selection over time.

      Second, this article was busted out of a larger CPU review, which was framed as Ivy vs. FX. I have full CPU suite results for all 18 processors and will be following up with those shortly. There’s no low-end Ivy, so the omission made sense as we conceived of things.

      Third, we can test up to four CPUs at once, but there was tremendous pressure on our LGA1155 socket during testing. You have to think of it that way when considering our larger and more scripted CPU test suite overall. Omitting an X79 config wouldn't have saved us much time in that context, but adding another LGA1155 CPU would have meant another couple of days' worth of work, easy.

      For what it’s worth, that’s the deal. We’ll look into the Core i3 later, I suspect, although we may have to start over for Win8 first. Ack!

        • flip-mode
        • 7 years ago

        Acknowledged. And if you’d had an i3 in there it would inevitably be some other complaint that would rise to the top! It’s a very good article, plain and simple, i3 or not. TR deserves most of the enthusiast traffic on the Internet as far as I’m concerned.

    • tanker27
    • 7 years ago

    Well once again congratulations on getting slash-dotted.

    I would have loved it if you guys had thrown in the i7-920 (1366). I am curious as to how it holds up. (I think pretty darn well.)

    Great groundbreaking work. I just wish that other site *cough* hardocp *cough* would follow your guys’ lead. But then again there’s a reason I don’t frequent them much anymore.

    • bfar
    • 7 years ago

    This is a very informative article. Can I make a suggestion? A similar article with a focus on overclocking would be absolutely wonderful.

    I’d imagine unlocked cpus would look a lot better on the value charts, but who knows?

    • xeridea
    • 7 years ago

    I would also like to see what effect gaming had on the multitasking, and whether the Windows encoder is well threaded (I just use Handbrake, which is threaded very well). The 8150 has 8 nearly true cores, while the Intels have 4 cores and 4 poor excuses for cores. I think the performance hit the encoding took was larger on the Intels, just saying. A note as to the relative gaming performance of each CPU while encoding would also be welcome.

      • Jason181
      • 7 years ago

      When it comes to fpu performance, it’s AMD that has the poor excuses for the other 4 cores, so it really depends on your usage scenario. You did read the article, didn’t you?

      This is why Bulldozer is not a gaming cpu.

        • xeridea
        • 7 years ago

        The 4170 is as fast or faster than the 8150 in most games, with only 2 FPUs vs. 4, so that doesn't seem to be the issue. For most tasks, being limited in FPU isn't a huge issue, though in some it is very apparent; gaming doesn't seem to be one of them. There is FPU work to be done, mostly on the GPU; much of the game logic uses the integer cores. Bulldozer isn't the best gaming CPU, though it will do OK, just not stellar. I was going to get a Bulldozer but am holding out with my Athlon II X2 until Piledriver; it should close the gap some and give a decent boost to IPC, helping in those non-threaded games.

        I did read the article. I read the TR articles start to finish, because they are written well, and generally do a good job of showing what is going on. Other review sites I sometimes do, sometimes skim.

          • Jason181
          • 7 years ago

          It really depends on the game. Check out the disparity in Crysis 2. The 4170 has a higher base clock, and yet the 8150 comes out on top, about 10% faster.

          My point is that if you need FPU power, which some games need in spades, BD is not the right choice. That’s what makes its performance so uneven, and not a good choice. Of course massively more FPU work is done on the GPU, but that’s regardless of the CPU architecture.

          Any game that shows strong performance for the 8150 in comparison to Intel’s chips, but weak performance for the 4170 is either heavily threaded (unlikely, since hardly any games use more than 4 threads), or bottlenecked by FPU performance. The 8150 has stellar integer capabilities, so games that [i<]are[/i<] good performers on BD probably don't rely on the FPU as much.

      • chuckula
      • 7 years ago

      Please go read Neelycam’s post for an example of proper trolling.

    • daedricgeek
    • 7 years ago

    Great article Scott, really enjoyed reading it.
    I'm an AMD fanboy, but I have to agree that the top end is pretty bad for gaming atm…
    However, there are a few things I'd like to point out:
    1. Where are the Core i3s in the charts? I guess time is a major limiting factor, but try at least one i3 instead of the i5-655, maybe… If not IB, then an SB i3, since there's <5% improvement on avg…
    2. For budget users, pairing a $300 GPU with a $100 CPU is not likely.
    A rough ratio of 1:1 seems useful (correct me if I'm wrong). In that case, a 7770 would show far smaller frame latency differences between CPUs under $150…

    An idea for the future: a fixed budget, say $500, for CPU + GPU + MB!!!
    Because that's how a budget build will be configured. Thanks

      • Jason181
      • 7 years ago

      The i5-655K is basically an i3 (two hyperthreaded physical cores).

      • halbhh2
      • 7 years ago

      “For budget users, fitting a 300$gpu to 100$cpu is not likely.”

      That is a very good point. Although the best choice now for gaming is definitely to spend more on the gpu than cpu, I would not myself go quite 3 to 1 (except if you are just upgrading your gpu for now, while waiting on a new build for later). More like 2 to 1 I think for an entirely new build, would be my own tendency. Not so much because it is actually better outright for some games — 2.5 to 1 or even 3 to 1 would be better for a big monitor, like the new $300 IPS apple-like panels. But because I wouldn’t feel like I was buying a bang-for-the-buck at 3 to 1. I would rather spend $200 or so on the graphics myself, and then spend $200 again in 18 months, and have a better card at that time than the current $300 level.

    • can-a-tuna-returns
    • 7 years ago
      • derFunkenstein
      • 7 years ago

      You need re-banned.

      • flip-mode
      • 7 years ago

      shameless jackwagon

    • Ushio01
    • 7 years ago

    “We’re left to ponder the fact that the flagship FX-8150 doesn’t avoid slowdowns as well as a legacy Intel dual-core, ye olde Core i5-655K.”

    Why is this surprising? Everything I've seen on different sites shows that the per-clock/thread performance of Bulldozer is lower than Phenom II's, which at its best is equal to 65nm Core 2, let alone 45nm Core 2, Nehalem, Sandy, or Ivy Bridge.

      • Jason181
      • 7 years ago

      The reason it's surprising is that the i5-655K has [b<]two[/b<] cores with hyperthreading, whilst the FX-8150 has [b<]eight[/b<] "cores" (albeit effectively four with simultaneous multithreading as far as FPU work is concerned).

    • Bensam123
    • 7 years ago

    Love it; some things worth noting…

    I would suggest testing the Bulldozer units with their virtual cores disabled (or whatever they're called). So you only test the full processors. I really think this might make a difference, and perhaps one run-through should be made with, say, an 8150 to test it and see what happens in an update to this article.

    I would also recommend disabling hyperthreading and testing one or two of the Intel processors with and without it. In my experience it causes micro-stuttering in games, especially in ones that are more demanding of the processor (edging in close to 100% utilization).

    Something I would also like to point out: while trying to overclock my i7-860, I've found higher voltages and frequencies seem to adversely affect the 'smoothness' of gaming performance in ways I can't fully make out. It almost seems like micro-stuttering, only on an even smaller scale, like micro-micro-stutters. I pushed it over 4 GHz, and while it was perfectly stable, had plenty of headroom cooling-wise, and gave a pretty big boost to my FPS, it produced an almost 'film grain' experience.

    It was so terrible I had to downclock it to 3.8 before this effect became tolerable. I don’t know what was doing it, but it most definitely was counterproductive even though the quantifiable results (such as FPS) were showing a very good positive boost.

    I'm going to mirror Acha's post about overclocking a couple of these processors with this methodology to see what happens.

    [quote<]Multitasking: Gaming while transcoding video[/quote<]

    This didn't make sense to me until you consider the prospect of streaming to sites like twitch.tv, which I currently do. I would consider trying this by running XSplit with a preset profile while a game is running in the foreground. This can make for some pretty interesting results as it tries to load-balance the x264 encoding happening in the background as well as the game in the foreground. I ran into problems I never knew I had with my current gaming rig, which is adequate for everything on its highest settings while not streaming.

    This category in and of itself presents a whole lot of hurdles that aren't traditionally tested, like what sort of impact this has on memory, and whether memory speed matters more or less... Really, everything changes when you try multitasking, even though that's what multi-threaded processors are made for. Maybe this is in part due to Windows' shoddy thread scheduler, maybe it's the processors... either way it presents some interesting hurdles.

    [quote<]We don't like pointing out AMD's struggles any more than many of you like reading about them. It's worth reiterating here that the FX processors aren't hopeless for gaming—they just perform similarly to mid-range Intel processors from two generations ago. If you want competence, they may suffice, but if you desire glassy smooth frame delivery, you'd best look elsewhere.[/quote<]

    This is worth reiterating for people who will tear these results completely out of context. The FX processors aren't looking nearly as good anymore, but they're still up there for the price. An FX-8150 costs $180 on Newegg as well; I don't know why it's $220 in the last graph.

      • Jason181
      • 7 years ago

      [quote<]I would also recommend disabling hyperthreading[/quote<]

      Just compare the i5-2500K and the i7-2600K. The clock speed difference is negligible.

      [quote<]The FX processors aren't looking nearly as good anymore[/quote<]

      The fact is that BD [i<]never[/i<] looked good for gaming. It was, at best, a very uneven performer. It really is a server chip sold as a desktop chip.

        • Bensam123
        • 7 years ago

        From the original review of the BD the 8150 looked competitive with the 2500, given price/performance. Perhaps Neely was correct in saying that the magnifying glass is being held too close and makes these hiccups look like glaring defects.

          • Jason181
          • 7 years ago

          Actually, the 2500k is:
          16% faster in BF3
          34% faster in Civ Late Game View
          5% faster in F1
          2% faster in Metro 2033
          41% faster in Valve’s Particle Simulation

          See what I mean when I say “uneven” at best? Yeah, it competes in some benches, but in others it really doesn’t. Its low L1 cache bandwidth and much weaker FPUs really cost performance in gaming. There are places where BD shines, but games isn’t one of those places.

            • Bensam123
            • 7 years ago

            You know, the neat thing about using percentages is that they give you no context for the size of the differences behind them. Let's do an exercise where I change those percentages into FPS differences instead.

            [quote<]Actually, the 2500k is:
            18 FPS faster in BC2 (BC2, not BF3. BF3 wasn't in the original article)
            11 FPS faster in Civ Late Game View (the unit bench has a difference of 4 FPS)
            3 FPS faster in F1 (he also failed to mention the 8150 beats the 2500K at higher resolutions)
            2 FPS faster in Metro 2033 (also a neat note here: the minimum FPS for the 8150 is higher than the 2500K's)
            41% faster in Valve's Particle Simulation

            See what I mean when I say "uneven" at best? Yeah, it competes in some benches, but in others it really doesn't. Its low L1 cache bandwidth and much weaker FPUs really cost performance in gaming. There are places where BD shines, but games isn't one of those places.[/quote<]

            Kinda cool what a little context does, huh? I'm not even going to comment on the particle simulation, 'cause that's one of those synthetic benchmarks you seem to be fighting against.

            Edit: Oh yeah, paying for that couple extra FPS costs you $40.

            • Jason181
            • 7 years ago

            You're correct that it's BC2, but your numbers are wrong; the average FPS for the 2500K is 87, and for the 8150 it's 75. That's a 12 FPS difference, and it matters to a lot of people, especially since you're talking averages.

            Why would I cite the no render benchmark??????????????????????????????????????? I cited the Late Game View. 43 vs 32 fps. Don’t start throwing accusations around when you can’t even do the basic math.

            The fact that BD does alright in Metro and F1 bolsters my point that its performance is uneven. I included it to show that it [b<]can[/b<] be competitive in certain situations, as well as for completeness and full disclosure. How you took that to be hiding something is beyond me. The context is that BD gets its proverbial butt handed to it in some games and is reasonably competitive in others. That's not a gaming CPU I'd want.

            In addition to the real, tangible and measurable performance differences, percentages give us a rough idea of what to expect for future games. I really don't see how you could say that BD "looked competitive" with such uneven performance. Performance is downright dismal in some games. Check out the performance of Civ V using different settings (but still full render): the 2500k is [url=http://www.anandtech.com/bench/Product/434?vs=288<]176%[/url<] faster. That's the outlier, but there are lots of games where the 2500k is 40-50% faster.

            I would have large reservations recommending a BD processor for a gaming rig. It's too much of a crapshoot. What if the next game you are really into performs really poorly (not an unlikely scenario)? I'd say 10+ fps (it's more like 40+ in a lot of those games that I linked) is more than "a couple extra."
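
            For what it's worth, converting between the two framings is just arithmetic; here's a quick check using the BC2 averages quoted above in this thread:

[code<]
fps_2500k, fps_8150 = 87.0, 75.0   # BC2 averages cited earlier in the discussion
pct_faster = (fps_2500k / fps_8150 - 1.0) * 100.0
print("%.0f%% faster, %.0f FPS more" % (pct_faster, fps_2500k - fps_8150))   # 16% faster, 12 FPS more
[/code<]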

            • Bensam123
            • 7 years ago

            When you start getting up past 60, FPS matters less and less. I can throw around all the accusations I want and still maintain a point. No, I didn't take time to figure out the percent differences between all of the benchmarks; I just picked the ones that sounded close. The Civ 5 number was a misread on my part, but it still qualifies as a smaller impact, just as you claim, at a difference of 11.

            The BD doesn’t do better in F1 because performance is uneven, but rather because the processor doesn’t matter that much compared to a good graphics card. It’s not a bottleneck. Not that it performs that much better. That was my point and the point of all my prior posts.

            No, percents don’t have anything to do with future games. They don’t magically translate between games or generations of games. There really is no relationship there.

            Like I said, if you want 10 FPS more when you're shooting around 70, compared to $40 in your wallet or the next step up in a video card, that's your choice. It's a horse apiece, but I'd wager a better video card would give you more than 10 FPS more in most cases. That's being competitive on price/performance.

            List a game from that review that has the 2500K doing 40+ FPS more than the 8150. That was the whole point of the last post, to demonstrate that, which I did properly except for the Civ 5 benchmark, which wasn't 34 FPS but 11. I went back and edited it so it reflects this properly.

            • Jason181
            • 7 years ago

            [quote<]List a game from that review that has the 2500K doing 40+ FPS more than the 8150.[/quote<]

            You really don't read closely, and it looks like you don't check your facts closely at all. Look [url=http://www.anandtech.com/bench/Product/434?vs=288<]here[/url<]. They used different settings (obviously), but the late-game render of Civ is 80 fps slower on the BD.

            [quote<]No, percents don't have anything to do with future games.[/quote<]

            Relative performance is the [b<]only[/b<] measure we have to predict future game performance. Taking your reasoning literally, that would mean a Pentium 4 might be faster than a 2500k in a future game. Sure, there's a massive performance delta in current games, but that doesn't magically translate between games or generations of games, does it?

            [quote<]When you start getting up past 60, FPS matters less and less.[/quote<]

            If you think an [u<]average[/u<] of 60 fps is acceptable, so be it. The entire point of measuring frame times is that the average only tells part of the performance picture. Minimum framerates are where it's at. Note in the link up above that World of Warcraft performs rather poorly on BD. Do you really think that's because the game is so graphically demanding that the CPU isn't the bottleneck? And BD's performance isn't just uneven in gaming; it also does well in some applications and abysmally in others.

            [quote<]The Civ 5 number was a misread on my part, but it still qualifies as a smaller impact, just as you claim, at a difference of 11.[/quote<]

            No, the difference is remarkably similar. The 2500k is 34.38% faster in the late-game render, and 34.78% faster in the late-game no-render. The fact is that not only did you pick the wrong graph, but you then proceeded to use the wrong numbers on that graph. I'm not sure how that happened, but when you call someone out on an error and insinuate that they're being biased as a result, it's a good idea to make sure you're right. I'm pretty sure that lack of attention is the reason you thought that BD was "competitive" with the 2500k in the first place.

            • Bensam123
            • 7 years ago

            I'm not going to a different website in order to justify your claims. I browse other tech websites, but the only ones I take as fact are the reviews on TR, simply because they are so stringent.

            Looking at the link you posted, it's even more suspect, and may even be an anomaly, as the 'render' benchmark below it reflects TR's results and the 'no render' benchmark is almost identical to the render one further below that. No-render tests CPUs. Why would the render benchmark do worse than the no-render without there being a difference in video cards?

            I disagree completely. Performance is relative, but percentages don't give us any meaningful information when they aren't put into context. They don't tell us anything about the future, as a game like Mega-Super-Crysis could be released and simply tank any and all available graphics cards. A 10% lead for the 2500K in one game can mean jack diddly in another, and most definitely doesn't mean it will give that sort of lead in another game.

            Saying 'the 2500K is faster than the 8150 in games, so it will more than likely be faster in future games' is something I WOULD agree with. But that comes at a $40 premium, and my entire argument was based around $/performance, which it still has. Nowhere in any of these tangents did I say that the 8150 is faster than the 2500K; what I said was that the $/performance is still there.

            I don't think 60 FPS is an absolute, unlike a lot of people here. But what I said was that as you get higher and higher, a gap in FPS matters less and less as it becomes less perceptible. Once again, the page you linked doesn't list their methodology or even when that benchmark was taken.

            Let me outline how picking out the wrong graph results in using the wrong numbers. You pick the wrong graph and you pick the numbers from it. Now picking the wrong graph and using the numbers from a completely different graph seems pretty ridiculous in comparison.

            That aside, you are being extremely biased here! Reread what I'm saying: I'm not saying the 8150 is outright competitive with the 2500K, I'm saying the $/performance IS competitive. There is a really big difference, the difference of $40. I'm not the one who seems to need to pay more attention; you seem to think I'm directly comparing an 8150 to a 2500K, and I'm not. I've said this a few times now in my posts. As a matter of fact, the article that these comments are in response to says the very same thing that I've been saying in these posts from the parent down.

            • Jason181
            • 7 years ago

            [quote<]I'm not going to a different website in order to justify your claims... Looking at the link you posted, it's even more suspect...[/quote<]

            So, which is it? I only have a few websites whose information I trust, and I've been visiting Anand's site since '97 (yeah, on Geocities). They have been consistently accurate, and for a while they were basically one of two sites to visit if you wanted good information. Different settings can result in very different results, but that does not necessarily invalidate those results.

            [quote<]That aside, you are being extremely biased here![/quote<]

            I disagree. If it were a steady 15% slower, that would be one thing. But there are games that are 40% or more slower, and these days not a lot of games are CPU-limited. That's very concerning; you may think that performance in current games doesn't have any bearing on performance in future games, but it's the best measure we have, and it's borne itself out to be a fairly reliable predictor.

            It's worst-case performance that concerns me. I'd rather have a slightly slower processor overall that's consistent than a processor that is screaming fast in some games and very slow in others. I really don't see saving $40 on an entire system (or even $100, given the cheaper AMD boards) if I sacrifice consistency. Maybe it's just a difference in priorities.

            • Bensam123
            • 7 years ago

            $/performance. The 2500K is $40 more than the 8150, possibly more now that AMD dropped the price on the 8150 (they were discounted before).

            There aren't games that are 40% or more slower on the 8150. The above post detailed this. There are two 'render' benchmarks that are labeled exactly the same thing on the Anand site, with two completely different results. The no-render has something like 20 FPS less.

            You yourself said that games aren't very CPU-constrained. If you spend $40 less on a CPU which has slightly worse performance than the 2500K, you can spend $40 more on a video card. What you're basically sacrificing with the 8150 is efficiency more than anything.

            • Jason181
            • 7 years ago

            WoW is more than 40% slower. You don’t have to look very hard for games that are much slower.

            Additionally, the FX-8150 is around $180 with the 2500 around $220. The 2500 is only 22% more expensive. For that 22%, you get up to 34% better performance in just the games that were tested here. There are lots of games that get a lot better bump, and some that are pretty even with the 2500.

            I personally would not spend $1,000 or so on a gaming system and save $40 or even $100 by going with a cpu that has such an erratic performance profile. You see it as saving $40 and not giving up much performance, but as someone who plays a lot of games, I don’t like the risk/reward profile at all.

            Maybe you disagree. That's fine; we don't have to buy each other's choice. 🙂

            • NeelyCam
            • 7 years ago

            [quote<] I personally would not spend $1,000 or so on a gaming system and save $40 or even $100 by going with a cpu that has such an erratic performance profile.[/quote<]

            This. People tend to focus a bit too much on the price of an individual component. If spending 10% more on your SYSTEM price gives you 30% more performance, a quieter system, longer battery life, stability/reliability/consistency, or whatever your quality metric may be, it could be a net win.

      • FuturePastNow
      • 7 years ago

      I’d also like to see a truly low-end Intel processor like the Celeron G530 added to the test. The i5-655 isn’t quite an analogue to that, since it’s older and has HT.

        • Jason181
        • 7 years ago

        I’d like to see that just for giggles too. I suspect Intel and AMD are not shipping a lot of “celeron” class review samples though. 🙂

      • MooPi
      • 7 years ago

      Informative article, BUT: the report misses the fact that the reason the FX line from AMD remains viable for gaming is its enormous headroom to overclock. With overclocking taken into account, the latency issue described should be nullified. Both Intel and AMD chips can obviously be overclocked, but the effect should be, and is, resolved when AMD chips are. AMD has marketed these chips expressly as overclocking-accessible CPUs, and the article comes off as FUD to me because it doesn't measure this aspect. Sure, the gamer looking for the latest and greatest gaming rig should buy Intel's offering, but your average Joe with limited cash can build and use an AMD rig cheaper and still obtain smooth gaming on every level. An FX CPU that isn't overclocked is simply wasted silicon.

      • Zoomer
      • 7 years ago

      [quote<] I would suggest testing the Bulldozer units with their virtual cores disabled (or whatever they're called). So you only test the full processors. I really think this might make a difference, and perhaps one run-through should be made with, say, an 8150 to test it and see what happens in an update to this article. [/quote<]

      I wouldn't expect much of an improvement, since the decode stage, which is rather narrow, can only push to one of the cores at a time. Disabling one core out of the unit would really only help in some FPU-limited cases, and game engines aren't typically that. AMD made the decision to go with higher maximum instruction throughput and higher instruction latency.

        • Bensam123
        • 7 years ago

        But you can’t be completely sure. I disabled HT on my processor due to it causing microstuttering. The whole point of 99th percentile is to detect and draw attention to these microstutters. Just because something is supposed to work well in theory doesn’t necessarily mean it translates to that when you put it into real world tests. That’s why I suggested they try it with four of the eight half cores disabled.

    • NeelyCam
    • 7 years ago

    Bah! It’s so disappointing to read such biased, pro-Intel anti-AMD drivel on TR. This article doesn’t fairly tell the whole story about the elegance of AMD’s solution. In real world use, AMD CPUs just feel faster; something these synthetic benchmarks never catch.

    Besides, 99th percentile is a bad test point – real gamers are more than willing to handle a few slower frames as long as average frame rates are good. The 70th or 80th percentile is a much more useful metric, in which the FX-8150 is quite competitive on a performance-per-dollar basis. And overclocked, it trumps any other chip here

    Shame! It sounds like you guys were just paid off by Intel’s marketing dept. I guess I have to lump you together with other Intel marketing sites like Anandtech and Tom’s, and wait for a review by some truly unbiased sites such as SemiAccurate

      • entropy13
      • 7 years ago

      Hear, hear! AMD is the best and everyone with Intel CPUs are just plain idiots.

      • Jason181
      • 7 years ago

      BD’s uneven (at best) gaming performance is no secret. What rock have you been hiding under?

        • JustAnEngineer
        • 7 years ago

        [url<]http://www.youtube.com/watch?v=CyzyOnOh9A8[/url<]

        • NeelyCam
        • 7 years ago

        Not a rock, but something people use to cross a river

          • Jason181
          • 7 years ago

          Would it have a certain kind of decorative, climbing foliage?

            • NeelyCam
            • 7 years ago

            Sometimes, yes; those are the purtiest. Skyrim has lots of nice ones

      • dragosmp
      • 7 years ago

      In the C2Q vs. Phenom II days I would have agreed about the "feels faster" impression with regard to AMD's processors; this difference is shown by the glorified-C2D 655K vs. Ph II 980 comparison. Since Lynnfield this isn't true any more; Intel has advanced quite a bit. In games the "fast" feeling wasn't as obvious even then, IMHO.

      • chuckula
      • 7 years ago

      Neely! It’s good to have you back! Now excuse me while I go enjoy some megatasking platformance that only AMD can give me.

      • derFunkenstein
      • 7 years ago

      the people doing the -‘ing must be new here. I thought it was a funny take on AMD fanboys.

        • Beelzebubba9
        • 7 years ago

        I was amused!

        • NeelyCam
        • 7 years ago

        Thumbdowns can mean all sorts of things. Maybe they are thinking I’m shooting for a thumbdown record..?

      • Unknown-Error
      • 7 years ago

      I am giving you a +1 for the entertaining post. By the way, the insecure morons at [url=http://www.amdzone.com/phpbb3/viewtopic.php?f=532&t=139355<]AMDZone[/url<] are calling TechReport 'IntelReport' 😮 For AMDZone babies, even SemiAccurate is biased. 😮

        • chuckula
        • 7 years ago

        Hey look Unknown-Error: AMDZone is right! I mean, look at the GPU that TR used in this review… it’s an AMD GPU. Now, everybody knows that AMD’s GPU department has been in cahoots with Intel for years giving Intel an unfair advantage in benchmarks! Oh and don’t even get me started on Nvidia’s slobbering love of anything Intel!

        I don’t think we can trust any gaming benchmarks that use unfairly biased so-called “GPUs” from so-called companies like “AMD” and “Nvidia” that are just puppets for Intel.

      • Chrispy_
      • 7 years ago

      Sarcasm doesn’t seem to work on TR readers;
      I don’t know why but so often, people here [i<]just don't get it.[/i<]

        • superjawes
        • 7 years ago

        Yeah, TR readers need the BBCode tags to catch it.

        • Oberon
        • 7 years ago

        I think they get it, they just don’t appreciate it.

          • Chrispy_
          • 7 years ago

          No, just read some of the replies to this exact post from Neely. Some TR readers [i<]don't actually recognise sarcasm[/i<]. It's hardly a worrying thing, but Neely was really quite blatant in his use so I was surprised to see that it was still missed by a few. I would expect many of the people thumbing down his comment are also completely missing the point of his post.... Anyway, this is offtopic. I'm so easily baited 😛

            • Darkmage
            • 7 years ago

            [i]No, just read some of the replies to this exact post from Neely. Some TR readers don't actually recognise sarcasm.[/i]

            A sarcasm detector? [snort] Like [i]that[/i] would be useful...

      • dashbarron
      • 7 years ago

      He even used an NVIDIA card because he’s so biased. Oh wait….

      • superjawes
      • 7 years ago

      Also…HEY! You’re not can-o-tuna!!!

      • liquidsquid
      • 7 years ago

      I am a real driver, and I don’t mind my car occasionally stalling at an intersection either, or backfiring while I am on the highway.

      The AMD car and the Intel car both have 500 HP, but the AMD stutters and backfires while you are driving compared to a nice, smooth ride. Both get you there; you get to choose the ride.

      OK so I don’t game much anymore, but claiming that real gamers don’t mind uneven frame rates is flat out dumb.

      • anotherengineer
      • 7 years ago

      Trollolololo

      was successful

      lol

      I remember resolution affecting frame rate whether it’s CPU- or GPU-limited.

      I think a few runs at, say, 1280×1024 would have been interesting to see how resolution affects the CPU-side frame latency. (More than likely it would decrease, but by how much?)

      Other than that, a decent article, and if one looks at Intel’s mid-range, high-end and extreme CPUs there is very little difference in performance when it comes to frame latency, reinforcing the fact that one does not really require an extreme or even high-performance CPU for PC gaming.

      • torquer
      • 7 years ago

      krogoth is not impressed

      • I.S.T.
      • 7 years ago

      I love how people aren’t getting how sarcastic this is.

      • Bensam123
      • 7 years ago

      Is this supposed to be a mock can-a-tuna post?

      Some of it seems legitimate, other parts do not.

        • NeelyCam
        • 7 years ago

        None of it is legitimate

      • swaaye
      • 7 years ago

      Nice post. 😉

      • dmjifn
      • 7 years ago

      I would give this a +1 but I wouldn’t want to detract from your glory. 🙂

        • NeelyCam
        • 7 years ago

        Didn’t break thumbdown records but at least I got post #300

        EDIT: or so I thought. It says now that there are 300 posts, with mine being the last one, but when I switch to reverse chronological order, it says Arclight’s post is #300 and mine is nowhere to be found… bug?

    • pedro
    • 7 years ago

    Impressive review as usual chaps.

    This bit from the last page really hits home:

    [quote]It's worth reiterating here that the FX processors aren't hopeless for gaming—they just perform similarly to mid-range Intel processors from two generations ago.[/quote]

    Ouch!

      • daedricgeek
      • 7 years ago

      Considering most of the FX chips are priced below the i5s, it’s better to compare them with the i3s or low-end i5s…
      Overall Bulldozer sucks, yes, but like they say:
      “There is no bad product, only a product at a bad price point.”

    • halbhh2
    • 7 years ago

    Missing from the analysis is the most pressing question: the performance of the TR-recommended CPU for the Econobox, the i3 (vs. something like the $100 Phenom II 965).

    But the Phenom II X4 965 or 975 BE (both overclockable), which can be bought for about $90-$100, do have a pleasantly low price for their performance. They are of interest for those on a tight budget who would like the advantages of 4 cores, and less costly motherboards (for the same feature sets).

      • daedricgeek
      • 7 years ago

      Yeah, too bad they EOL’d the Phenom …
      Although you can still buy them in some places in INDIA (the same place where an nVidia 8600 gaming card is sold for $150) ;D

        • derFunkenstein
        • 7 years ago

        Phenom II’s can be bought on Newegg, too.

          • halbhh2
          • 7 years ago

          Yes, the x4 965 is very widely available online for around $100.

            • Jeff Grant
            • 7 years ago

            Got this from Microcenter a couple of days ago. Maybe someone can take advantage.
            AMD Phenom II X4 B93 Processor $39.99…AMD Phenom II X6 1045T Processor $89.99…120GB SSD $79.99…Save on ALL Cases, Power Supplies, Video Cards & More

            • halbhh2
            • 7 years ago

            Yeah, that x6 is a very good bargain. It has “turbo”, and also can be clocked higher if needed. The true-OEM B93 x4 has such an amazing price. The x4 965 is a good bargain too, since it comes with $40 off any compatible motherboard.

      • ET3D
      • 7 years ago

      I agree, the low-end Intel CPUs are a glaring omission in this article.

    • torquer
    • 7 years ago

    *sigh* AMD, what happened to you? I almost exclusively ran AMD CPUs from my first Athlon 700MHz up through my Phenom II x6. I recommended them consistently to friends as the best bang for the buck – many still have Phenom IIs. But, for the last year I’ve been Intel only on the CPU front and it doesn’t seem to be changing anytime soon. AMD’s recent history is borderline tragic – like watching your former best friend get involved with a bad girlfriend and then you never hang out with him anymore.

    To watch them repeat the Netburst mistake in their own way with Bulldozer is just heartbreaking. Intel has done amazing things, but I weep for the AMD I once knew. All of us should be grateful to them for their contributions on the CPU front, from a competitive standpoint, and hope that they can bring the fight back to Intel someday, unless of course they decide to exit that market entirely.

    RIP AMD performance CPUs. 2000-2010

      • vargis14
      • 7 years ago

      I too exclusively ran AMD CPUs, from a 900 T-bird to a 1400 T-bird to a 2400XP, all on the same motherboard too: an ECS K7S5A that strangely took DDR 266 or SDRAM; it had 2 slots for each kind of RAM. Then I went big-money and grabbed a 940-pin FX-51. My last AMD build was a 4800X2 with X1800XTs in CrossFire, with the master card and slave card connected by a goofy dongle on the video-output side of the cards.
      I took off from gaming for about 5 years but ended up buying 2 more AMD products from Dell: the little Dell HD Zinos with 1.5GHz AM3 dual-core 3250e CPUs along with laptop HD 4330 512MB cards. They worked well as HTPCs, and no complaints on price either, since I picked them both up with 4GB RAM and 500GB HDs at the Dell outlet as refurbs for $350.
      So I wanted to build a gaming rig, the first one since the 4800X2, so I went with a 2600K, and I have replaced the Zinos with 2 i3 2100-series CPUs for my HTPC setups. I have been super happy with the SB CPUs and love the little i3 2120s.

    • oldDummy
    • 7 years ago

    First of all let me compliment your article, Awesome!

    This was one time consuming sucker to get together. Well done.

    Now the not-so-good [for me]:

    I own three of the chips rated, none of which seems to be the “best” in FPS per dollar.

    i7-3770, i7-875k, i5-655k.

    Still this is almost the exact type of article I was searching for today.

    I have older CPU’s which I would like to pair with various GPU’s around my work bench.

    My thoughts were: anything remotely current will be good enough to power say a 560 Ti 448.

    What this article implies is: Not so, at least with a higher end GPU.

    Another myth bites the dust.

    ps- the absence of a 980x doesn’t surprise me. It would embarrass both Intel and AMD. It would also be a rather awkward duck at 32nm. IMO. again, great article.

    • jbraslins
    • 7 years ago

    Excellent review.

    Recently I was looking into Hyperthreading and how it affects performance of OLAP and OLTP SQL servers. Would be nice to see a review like this that tests each applicable processor with HT on and off.

    Perhaps it would show that turning HT off on modern quad core processors is not a bad idea when it comes to gaming.

      • Jason181
      • 7 years ago

      That can easily be ascertained by comparing the performance of the 2500k and the 2600k in this article. Even though the 2600k has a slightly higher clockspeed and larger cache, it lags (almost immeasurably) behind the 2500k in most gaming benchmarks, including the ones in this article.

    • 12cores
    • 7 years ago

    Great article!!!!!!!!!!!!!!!!!

    I wonder if we would see different results for the AMD FX chips if they were overclocked. I went from a 1055T @ 4GHz/3GHz NB (RAM 1600) to an FX-8120 @ 4.7GHz (RAM 2133) pulling 2 overclocked 6870’s. My gameplay is much smoother with the FX-8120 at those speeds, much smoother. The 1055T was a beast, but with the FX-8120 I am able to run my RAM @ 2133, which makes a big difference in my build. Once again awesome article, keep up the good work; AMD has a lot of work to do.

      • achaycock
      • 7 years ago

      Really? Are you sure you’re not experiencing a phantom effect? My partner’s FX-8120 @ 4.4GHz is a lot slower in many significant respects than my X6 1090T @ 3.8GHz/3.0GHz. I know these clockspeeds are different, but I find it hard to imagine 300MHz making that much difference to an FX-8120.

      I’m not saying the FX-8120 is terrible, but for what was expected of it and compared to its predecessor it’s a horrible disappointment.

      • speedfreak365
      • 7 years ago

      >>>>>PLACEBO<<<<<

        • achaycock
        • 7 years ago

        Yeah, I couldn’t think of the right word at the time. The essence of what I said remains valid though I believe.

        • Waco
        • 7 years ago

        Definitely. My 8120 at 4.5 GHz was slower in games and felt a lot LESS smooth than my X3 720 @ 3.5 GHz w/ 4 cores.

        I sent the 8120 back…best decision ever.

        • Bensam123
        • 7 years ago

        Placebo doesn’t mean what you think it means. The term you’re looking for is ‘confirmation bias’.

          • achaycock
          • 7 years ago

          You are absolutely right. I feel a bit of a pillock for not remembering that.

            • BobbinThreadbare
            • 7 years ago

            It could also be straight up delusion.

            • Bensam123
            • 7 years ago

            People usually use “delusion” in a derogatory sense, but it has conditions which must be met per the DSM-IV (the diagnostic manual for mental disorders). This is not being delusional. A bias isn’t the same thing as a delusion.

            • BobbinThreadbare
            • 7 years ago

            I mean an imagined gain that isn’t real at all. Isn’t that a delusion?

    • marraco
    • 7 years ago

    TechReport. One step ahead.

    (ahead of the other review web sites).

    • UltimateImperative
    • 7 years ago

    Too bad you couldn’t include an i3-2100 — to me, there’s a doable budget gaming machine with a Phenom II 965 ($110) + a cheap AMD motherboard and a 7770; add some RAM, a Seasonic S12ii or one of its rebadges, whatever decent cheap case is on sale and a Samsung 1TB F3, and you’re set.

    I wonder how it would compare to an i3-based system.

    • clone
    • 7 years ago

    First off, thank you Tech Report for the review; it’s a very rare kind of article and puts things into perspective.

    That said, I’m glad I went AMD this time, focusing on being as cheap as possible, because the graphs show that once you hit the minimum you move on to graphics….. I’m in no way endorsing AMD for anyone but me, as my goal was to keep my last build under a certain price point while still being able to game, knowing in advance that I would be replacing said build within 2 years.

    Nothing in this review swayed that choice, and no, I see no valid reason for gaming with an i5, in case anyone wants to attack my choice, because I’d put and did put the $200 saved into more RAM, graphics and the best SSD I could find at the time, all of which made far more sense.

    Other users will do more than what little I do and to them …. GO FOR IT!!!!! But for me and my current interests (or lack thereof), AMD was the way to go.

    UPDATED: I’ve been wondering why I never went with an 1155 motherboard, given it seems all of the latest Intel CPUs are compatible with it. I had to take a look and discovered that when I bought my setup there were no 1155 motherboards; instead I was stuck facing limited compatibility across 3 different sockets….. While I don’t list that as the main reason why I went AMD, I do know that when I did the research I had gotten annoyed that if I went with an inexpensive Intel CPU I had no significant upgrade option.

      • plonk420
      • 7 years ago

      I have no idea how you see this. A $130 i3 Sandy will trounce my $120 Phenom II X3 (or the only $110 Phenom II >X2 on Newegg)… and SHOULD give you 10, if not 20, more fps in the several games that pull far, far ahead of AMD, and even Nehalem.

        • clone
        • 7 years ago

        The AMD X3 currently costs $78.99, not $120, and Newegg has never once been the end-all, be-all of websites; I paid $67.99 for my X3 455 6 months ago at NCIX.com with free shipping…… Newegg wanted quite a bit more.

        An additional 10 – 20 frames matters in the synthetic world; once above 45 frames I don’t care, let alone the 50+ I’m getting.

          • Meadows
          • 7 years ago

          You raise some decent points, then you go on to brag about how your eyes are broken.

            • clone
            • 7 years ago

            Neg; by 30 frames a user can only just see the transitions, by 45 they have to be actively looking for them instead of gaming, & by 60 anyone complaining is doing it for the sake of it.

            the lone caveat being of course that I don’t bother with racing games.

            • travbrad
            • 7 years ago

            It’s interesting you mention racing games, because I’ve tried watching F1 races in 30FPS and 60FPS and you get so much more sense of the speed when watching at 60FPS.

            In games 30FPS is bordering on unplayable for me (unless it’s a Turn-based strategy game or something). Unlike videos, games don’t run at a completely consistent framerate (as the “Inside the Second” testing has shown), so even if you are averaging 30FPS, there are going to be some seriously long frame times in there. You are also constantly interacting with games, so the choppiness is a lot more noticeable.

            I can’t imagine playing a First-Person-Shooter at 30FPS. It’s just an unpleasant choppy experience that really kills the immersion. It’s also a big competitive disadvantage in online games running at low framerates. I purposely lowered my settings a bit in BF3, and I play noticeably better when averaging 55FPS than when averaging 40FPS, not to mention it’s a much more pleasant experience.

            • clone
            • 7 years ago

            I find racing games to be the most obvious, but in general games with expansive FOVs tend to require more frames.

            On a side note, I disagree on needing more than 30 frames in order to perform… more than 30 frames to be fun, yes, but not more than 30 to perform. Many moons ago I sold off my high-end card in anticipation of getting the next generation, and was limited to using an ancient, absolutely ancient ATI Radeon 7000 32MB DX7 card for a month to play Unreal 2004.

            With no details enabled, and only keeping the resolution at 1024x768 for the sake of sanity, my match averages went up 20%, going from 70+% to 90+%. The common consensus when discussed with friends was that I wasn’t distracted from gameplay; the experience was borderline agony with the fps banging between 20 & 30 causing me to squint occasionally while playing.

            • travbrad
            • 7 years ago

            [quote]On a side note, I disagree on needing more than 30 frames in order to perform… more than 30 frames to be fun, yes, but not more than 30 to perform.[/quote]

            Your experience is completely different from mine, then. At 35-40FPS in BF3 I was usually about 1/3rd of the way down the player “scores”, but at 50-55FPS I am almost always in the top 1-3 players. I had a similar experience with the original Counter-Strike “back in the day” when I built a new PC (going from about 25FPS in CS to something like 200FPS).

            • clone
            • 7 years ago

            To each his own; as mentioned, my whole post was regarding my personal choice and experience, and it should not be taken as an endorsement of what others should do.

            • Spunjji
            • 7 years ago

            Gaming at 30fps isn’t the best experience, but I’ve managed it and been competitive in various types of game. It depends heavily on your playing style and personal tolerances.

            Anyway, what I’m actually objecting to here is the hammering on somebody who’s quite reasonably pointed out that once you hit the *bare minimum*, it’s really price that counts more than speed. Sure, you can get better deals by spending just a little bit more, but not everybody has that money. That’s a fact always worth considering.

            • Meadows
            • 7 years ago

            “Transitions” (differences between frames) are easily noticeable at 60 fps and they’re jarring at 30 fps, as long as we’re talking about videogame graphics that is.

            Adding motion blur makes it a little more like movies, in that it smoothes the frame transitions and improves realism.

            • clone
            • 7 years ago

            I don’t entirely disagree, but here we are in a discussion about CPUs, the lowliest of which can manage 50+ frames, and no matter how much coin you throw at the problem the best you’ll get is ….. 80?…. consistently maybe 70?…. One lonely game managed 100+, but then again that same game saw the lowliest CPU hitting 58, which is so very playable.

            the difference in price of course going from in my case $69.99 to $1100.99+.

            Frame dips well below 30 aren’t exclusive to low-end processors, btw; glitches happen…. No matter how much consumers throw at the problem they can’t buy their way out of them, they can only raise the average framerate.

            To me, going from 50+ frames to 70 is worth only a little coin.

            • travbrad
            • 7 years ago

            [quote]the difference in price of course going from in my case $69.99 to $1100.99+.[/quote]

            That’s some nice AMD marketing spin you have there. A $200-220 Ivy/Sandy CPU performs almost identically in games to that $1030 part, so using that as a comparison is pretty silly. If you are only interested in a “bargain bin” part then the X3 is a decent deal I suppose, but a [url=http://www.tomshardware.com/reviews/gaming-fx-pentium-apu-benchmark,3120-10.html]G630 ($65-70) is still faster in gaming for the price[/url]. If you step up to a dual-core Sandy Bridge (for $100-120) you are looking at significantly better gaming performance than a Phenom X3.

            Buying a CPU/GPU isn’t just about how current games perform either, but also setting yourself up for the future. A little bit more money spent in the short term can often save you a lot of money in the long term. For example, I bought an E8400 instead of a Q6600 and saved $40 or so, but then a couple years later the E8400 started to struggle in some games where the Q6600 was doing fine. If I had spent that extra $40 I probably would have gotten another year or more out of my CPU.

            • clone
            • 7 years ago

            No AMD spin at all; go back to my original post, where I claimed $200 saved by choosing AMD. When I said you can spend anywhere from $69.99 to $1100.99+, I was merely quoting the spectrum; don’t take the comment out of context.

            2nd you claim spending $100 on cpu will significantly improve your gaming… great, the budget is $600 after tax and shipping, show me your build with $190 (additional 100+ tax allocated) allocated to CPU and after I’ll take the same budget and allocate it to better video / SSD or both and have a snappier system better for gaming.

            as for your last paragraph again go back to my first post… as stated 2 year lifespan….as for your personal experience regarding your E8400 I’d have overclocked it.

            • travbrad
            • 7 years ago

            [quote]2nd you claim spending $100 on cpu will significantly improve your gaming... great, the budget is $600 after tax and shipping, show me your build with $190 (additional 100+ tax allocated)[/quote]

            You completely misread what I posted. I referred to the dual-core SB CPUs, which are $100-120 TOTAL cost, not additional cost. Even the dual-core SB chips are significantly faster than an X3 (or even X4) in gaming. You’re right about a $200+ CPU not making much sense for a $600 build, but I wasn’t talking about a $200 CPU. Even so, a $200-230 CPU would still give the best gaming experience, whether you can afford it or not.

            [quote]as for your personal experience regarding your E8400 I’d have overclocked it.[/quote]

            I had it at 4.1GHz (up from its 3GHz stock), and it was still struggling in some games. Buying some LN2 was about the only way I was going to get it higher than that, but that’s not very practical/cheap for daily use. 😉

            • halbhh2
            • 7 years ago

            Mmm… actually, it’s unclear to me that the TR-recommended (Econobox) i3 2120 is faster in gaming than the similarly priced Phenom II X4 965. And…. it’s even less likely for me, since I will definitely have things going on in the background, like email coming in and an antivirus check running, etc.

            I’m not making a claim here. I’m pointing out exactly that the article doesn’t let us gauge this one, at that $90-$100 cpu price level.

            • clone
            • 7 years ago

            There’s no overclocking ability at all with the G630; it trades the lead with the X3 at stock, and once overclocked the X3 ties it and then pulls ahead.

            As for the $120 CPU budget alone: spending the additional $50, plus another $20 for the Intel mobo, would have killed the SSD I got, and that ain’t happening.

            you can take my gun but you’ll have to pry my SSD from my cold dead hands 🙂

            still all in all I have to admit I didn’t look at that particular cpu at the time.

            • Chrispy_
            • 7 years ago

            Holy crap man, your eyes [i]are[/i] broken. I reckon I could tell you (within 10%) what framerate something was running at all the way up to about 100fps. On top of that, I definitely noticed a difference between 120Hz and 160Hz on my old CRT.

            • clone
            • 7 years ago

            My eyes are fine, and please don’t be absurd; if I wanted to talk about a dead product tech long out of the market I’d be specific….. Who still uses a CRT?…. Why would you insert that silliness into this discussion?

            Regardless, you well know CRT tech doesn’t work like LCD or plasma; CRTs actively flicker no matter what the speed, whereas LCDs and plasmas replace an existing frame with another.

            • Chrispy_
            • 7 years ago

            You’re completely missing the point again; You’re saying that people struggle to notice transitions above 30fps, which is a pretty outlandish claim (unless your eyes really are broken).

            I’m saying that I can still see transitions up to about 100fps, and that whilst I struggle to see transitions beyond that, there’s a definite difference to me between 120fps and 160fps. In other words, your “struggling point” is four times lower than mine, and based on the comments to your post, much lower than everyone else’s too.

            CRTs have [b]absolutely nothing to do with my argument[/b]; it’s merely an anecdote about the last time I was able to get vsync at 160 frames a second.

            • clone
            • 7 years ago

            I never once said people struggle to see transitions at 30 frames….. I said that about 60, and tbh I don’t believe your claim of distinguishable transitions at 100 fps, save for when you aren’t playing and instead are looking for issues to criticize.

            My point has been about playability: by 30 it’s ok, by 45 it’s no longer distracting at all, by 60 you have to look instead of play, and by 100 I don’t care. And if you believe it’s my eyes that are broken because I’m fully immersed and not affected by any distraction by 50+ fps, then my eyes are, THANK GOD, broken in the best possible way ever.

            p.s. don’t mention CRTs, and I personally have never enabled V-sync for prolonged periods due to visible screen “tearing”.

            • Chrispy_
            • 7 years ago

            You seem to contradict yourself even in the same thread. Here you’re saying 30fps is okay, yet a few replies up we have:

            [quote]...borderline agony with the fps banging between 20 & 30 causing me to squint.[/quote]

            I know the feeling: Diablo II had a 25fps limit and that was horrible. I would get bloodshot eyes and headaches playing that for more than a couple of hours. 25fps is even worse than 20 because it doesn’t work well with the 60Hz that almost every screen on the planet runs at. 20fps is an update every 50ms, the important word being [i]every[/i]; 30fps is every 33ms. 25fps is a nightmare though - the screen updates after 50ms, then after 33ms, then after 50ms again. It’s not smooth, and your brain/eyes perceive the judder. It’s the same principle as 59fps with a 60Hz screen; every second there is a single frame repeated twice, and this “skip” is far more disturbing than if the whole thing were running at a constant 30fps (where every single frame is repeated twice).

            Where we can agree is that [b]60fps feels smooth, and 30 is playable, but not pleasant.[/b] Where we disagree is that you say anything over 60fps is pointless. If I were forced to play everything on a 60Hz screen again, I wouldn’t be that upset, but there is an incredible visual and control-responsiveness improvement between 60Hz and 120Hz screens, assuming your PC can churn out frames fast enough to keep the game in vsync with the screen.
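
            For anyone who wants to check that cadence arithmetic, here is a minimal sketch (Python, purely illustrative; not from the article or the comment above) that counts how many 60Hz refreshes each source frame occupies at a given frame rate, assuming ideal vsync-style presentation:

            # Illustrative only: refreshes occupied by each source frame on a 60 Hz display.
            def refreshes_per_frame(fps, hz=60, frames=10):
                return [int((i + 1) * hz / fps) - int(i * hz / fps) for i in range(frames)]

            print(refreshes_per_frame(20))  # [3, 3, 3, ...] -> a new frame every ~50 ms
            print(refreshes_per_frame(30))  # [2, 2, 2, ...] -> every ~33 ms
            print(refreshes_per_frame(25))  # mix of 2s and 3s -> the 33 ms / 50 ms judder described above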

            • clone
            • 7 years ago

            Neg; mentioning that 20-30 (25 avg) is unacceptable does not contradict a 30 minimum being acceptable, unless you’re claiming a 3 – 7 frame difference also applies…. The minimum threshold, as mentioned ad nauseam, is 30.

            I’ve always found V-sync caused visual tearing especially during rapid panning…. horrid feature offering limited benefit at significant expense.

            nothing is pointless… well some things are (V-sync) but I try not to deal in absolutes as a rule.

            • Meadows
            • 7 years ago

            Vsync causes tearing? This is rich.

            • JuniperLE
            • 7 years ago

            Tearing in the sense of the video card sending out a frame with parts of different frames in it is, I’m pretty sure, impossible with vsync,

            but I’ve read that at a low refresh rate (like 60Hz) there is still some noticeable distortion because of how progressive scan works.

            I tend to prefer vsync off in most cases, because of the performance advantage and lower input latency.
            I’ve been playing “Skyrim” for many hours locked at 40fps (using Dxtory) with vsync off and haven’t noticed any significant tearing.

            • clone
            • 7 years ago

            hmmm….. trying to figure how to describe it….. you pan quickly and it’s like you have 1 frame overlapping another with a visible ….. not a line but you can see the image shifting, it must be happening across multiple frames because when it happens it can take up to a 1/4 second to finish but then can be easily replicated by simply panning quickly again during certain scenes.

            Is there a different name for this occurrence?

            note, JuniperLE described it accurately and my display is 60 refresh.

            also watched what looks like 2 frames a quarter inch offset from one another during rapid pans most notably during FPS.

            if those occurrences aren’t tearing then whatever, it happens and only when V-sync was enabled….. every time.

            just by disabling V-sync problem goes away.

            • Chrispy_
            • 7 years ago

            That’s a perfect description for the ‘tearing’ you get when vsync is off:

            One frame sent to the screen contains 1/4 of a new frame, and 3/4 of an old frame.
            The next frame sent contains 1/2 a new frame and 1/2 an old frame.
            The next frame sent contains 3/4 of a new frame, and 1/4 of an old frame.
            Over the course of those three frames, it looks like a tear running across the screen

            The closer your framerate is to 60 (on a 60Hz screen) the slower this tear happens – at 59 fps you will get one tear a second, at 58fps you will get two tears/second, etc. Conversely, at 59.5fps you will get a tear that takes two seconds to cross the screen. It’s most distracting when you’re getting [b]ALMOST[/b] exactly the same FPS as your monitor’s refresh rate.

            There’s a good article on Anand about triple-buffering and vsync if you want to read more about it.
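
            A back-of-the-envelope sketch of that tear-rate claim (illustrative only, not from the article; the approximation only holds when the frame rate is close to the refresh rate):

            # Illustrative only: with vsync off, the tear line drifts at a rate set by the
            # mismatch between frame rate and refresh rate (valid for fps near the refresh rate).
            def tears_per_second(fps, hz=60):
                return abs(fps - hz)

            for fps in (59, 58, 59.5):
                rate = tears_per_second(fps)
                print(f"{fps} fps -> {rate} tear(s)/s, one tear takes {1 / rate:.1f} s to cross the screen")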

            • clone
            • 7 years ago

            Interesting, seriously….. now why does it happen with vsync enabled rather than disabled, when it’s supposed to be the exact opposite?

            given it’s a check box “enable V-sync” it’s not complicated and it’s happened with ATI back in the day, AMD and Nvidia, not exclusive to any card.

            in truth I’ve not used it in a year after the last attempt with the GTX 460 I went “bahh f$*k this feature.”

            p.s. I had assumed that was what it was for.

            p.p.s. On a side note, this 26″ Acer is several years old… I wonder if it’s the display; I may try the feature on the 24″ I have on the other system, it’s only a year old.

            now I’m wondering.

            • Beomagi
            • 7 years ago

            Vsync prevents that tear, by syncing with the screen.

            The catch is that if a frame isn’t ready at the sync point, it waits for the NEXT refresh instead of showing a tear.
            All frame times are therefore a multiple of 1/60th of a second – i.e. 1/60 (60fps), 2/60 (30fps), 3/60 (20fps), etc.
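
            A tiny sketch of that quantization (illustrative only, assuming a 60Hz display with double buffering and no triple buffering):

            import math

            # Illustrative only: with vsync on, a frame that misses a refresh waits for the
            # next one, so its on-screen time is rounded up to a multiple of 1/60 s.
            def vsync_frame_time_ms(render_ms, hz=60):
                interval_ms = 1000 / hz
                return math.ceil(render_ms / interval_ms) * interval_ms

            for render_ms in (15, 17, 20, 34):
                shown = vsync_frame_time_ms(render_ms)
                print(f"{render_ms} ms render -> shown for {shown:.1f} ms (~{1000 / shown:.0f} fps)")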

            • Meadows
            • 7 years ago

            [quote]"who still uses CRT?"[/quote]

            I do.

            • clone
            • 7 years ago

            sorry about your luck.

            • Meadows
            • 7 years ago

            I have yet to find an LCD that makes me switch. It’s not luck, it’s determination.

            • clone
            • 7 years ago

            intransigent seems more accurate.

            The nice thing being that fully functional used CRTs are being given away, left beside the road and offered up for free by previous owners, so at least there is a steady supply.

            • Meadows
            • 7 years ago

            Oh no, mine wasn’t free. This level of quality isn’t left beside the road.

            • clone
            • 7 years ago

            I’m not dogging the choice but when I made the move to LCD it was to get my desk back….. the price of a 26″ LCD vs CRT became compelling and no longer needing active room cooling just to surf the web was nice in spring, summer, & fall.

            • jihadjoe
            • 7 years ago

            Sony FW900?

            • Meadows
            • 7 years ago

            IBM C220P.

            • rrr
            • 7 years ago

            In having your eyes burned out? Viva la masochists!

            • Meadows
            • 7 years ago

            My eyes have never hurt or felt fatigued from using CRTs, ever, starting from the age of 4-5 or so. I’ve used 60 Hz screens for over 10 years (usually daily) and never felt a problem.

            My current screen shows the 2048×1536 pixel desktop at 80 Hz.

            • clone
            • 7 years ago

            60hz CRT = impending headache, constant squinting and notably low visual quality.

            to each his own, 85hz was ok but LCD got rid of it altogether.

            • rrr
            • 7 years ago

            Confirmed, I have experienced those at 75 Hz (Samsung 550s)

            • Meadows
            • 7 years ago

            You boys are [i]weak[/i]. 🙂

          • Beomagi
          • 7 years ago

          The major takeaway from this article is to not get suckered in simply by fps, but to look at which processors can deliver more consistent smoothness in gaming.

          YOUR bottom line should be – “I do more than game, and for what I do, I was able to do it cheaply with my proc”.

        • derFunkenstein
        • 7 years ago

        The x4 965 is $109.99.

        [url]http://www.newegg.com/Product/Product.aspx?Item=N82E16819103727[/url]

        Although it’s probably true that the i3 2100 will win, it’s worth checking out.

          • clone
          • 7 years ago

          the X4 965 is $89.99 at NCIX, Newegg isn’t everything.

        • halbhh2
        • 7 years ago

        Why compare the current price of the i3 to the old price for a “$120 X3”? Why not buy the $99 X4? (the x4 965 BE is widely available at about $100).

          • clone
          • 7 years ago

          X4 965 is $89.99 at NCIX U.S. website.

        • rrr
        • 7 years ago

        Except you cannot OC it, and you can OC Phenom or Nehalem for that matter.

        I’d still prefer looking for good deals on an i3-5xx; after an OC you get a better-performing CPU for cheaper.

          • brucethemoose
          • 7 years ago

          this ^

          The i3-5xx is still the best budget CPU IMO.

      • halbhh2
      • 7 years ago

      It’s interesting that everything you’ve said is sensible within the limits you stated, and you still got some compulsive thumbs-down votes for it.

      There are plenty of people with AM3 or AM3+ board that could today choose to save money if they just have one of the 4 (or 6 core) Phenom IIs, overclock if needed, buy a SSD that they could take with them to the next build later, and according to the results in this article, have a good gaming experience, depending on their graphics card (which can also be upgraded and later used in a future build, etc.).

      Why the thumbs down? I don’t really know. I think of this site as better than a partisan site.

        • Chrispy_
        • 7 years ago

        I think the downvotes are for the flawed argument:

        He is justifying his AMD choice for gaming because they were cheaper, but you can buy dual-core Sandy Bridge Celerons and Pentiums for significantly less than the cheapest AMD alternatives.

        The article clearly shows Sandy/Ivy Bridge completely outclassing [b][u]similarly-priced[/u][/b] AMD processors by such a huge margin that you'd be forgiven for wondering if AMD's finest can even match Intel's budget offerings. For Clone to say

        [quote]I’m glad I went AMD this time, focusing on being as cheap as possible[/quote]

        when AMD [i]aren't[/i] the cheapest isn't earning him any upvotes. Not only are they not the cheapest now, they haven't been the cheapest for a while - ever since Bulldozer launched, in fact.

        Outside the context of this article, his arguments are factually wrong, but within the context of this article, being pleased about his choice of CPU is just... I don't even know what word to use! Something between [i]hubris[/i] and [i]misunderstanding[/i]....

        I guess his anecdotal pro-AMD comments really don't match up with Scott's scientific and carefully measured verdict:

        [quote]they just perform similarly to mid-range Intel processors from two generations ago. If you want competence, they may suffice, but if you desire glassy smooth frame delivery, you'd best look elsewhere. Our sense is that AMD desperately needs to improve its per-thread performance[/quote]

          • halbhh2
          • 7 years ago

          Ok, that makes sense. [edit: I mean there was a logic to it, even though based on a questionable presumption about the low-end comparison, which may be false]
          Like many others, though, I’m still hoping to see the processor that would matter tested in this exact way: the i3 recommended in the TR guide. Why? Because it is in price competition with the Phenom II X4 965 at $100. That’s a “similarly priced” CPU (vs. the i3) for which I’m not so certain about the outcome. See the problem with your conclusion in that regard? If it had been tested so, and that was the result, then sure. But I don’t see it there.

            • Chrispy_
            • 7 years ago

            Agreed 🙂

            The key matchup would be the i3 2100 against the X4 965BE – i.e., Intel’s cheapest i3 against one of AMD’s best gaming quad-cores.

            Personally I suspect the i3 would trounce all over the AMD, since the i3s used to hold their own against the i5s in gaming, but until we see that matchup with Scott’s ‘inside-the-second’ metrics, I’d be wrong to extrapolate down beyond the i5s.

            Hopefully Scott will add a couple more popular processors to the test over time, but for now I think the guy has earned himself a break; this article is already epic and was originally only part of an Ivy Bridge review.

          • clone
          • 7 years ago

          moved below.

        • swaaye
        • 7 years ago

        I think there are some mute impulsive people who come for the voting.

        • clone
        • 7 years ago

        There is a cult of personality in this forum regarding some topics; consider the number of comments that treat an i5 or better as the only CPU choice.

        While not untrue in some cases, the view is limited; then you have a large number who’ve made purchases that don’t agree with a low-end CPU choice… it becomes personal to them, lest they question their own choices.

        When TR did its recommended system builds, I mentioned a budget computer didn’t need 10% of its budget allocated to the case when that could be reduced to 3% by buying a sub-$20 case with front USB and audio….. I got harshly criticized by the vast majority for daring to mention money could be saved on the case, and there were cries that more than $50 had to be spent, despite the very high customer-experience reviews on the inexpensive cases.

      • 12cores
      • 7 years ago

      The FX 8-core chips are not that bad; you have to get them over 4.7GHz stable. I am running an FX-8120 at 4.75GHz and it destroys my 1055T @ 4GHz/3GHz NB. It’s a lot smoother while gaming and only picked up 40 more watts from the wall; my rig is still under 600 watts with my overclocked 6870’s. In my opinion, for $159 it’s not a bad purchase if you know what you are doing. The i7-2700K costs almost $150 more and it’s certainly not twice as fast. Given the huge gap in price between the FX-8150 and the i7-2700K right now, it would be interesting to do a comparison between an overclocked FX-8150 and a stock i7-2700K to see if the i7 is twice as fast. Bottom line: AMD still provides a good value for the hardcore overclocker.

      Once again great article, bring on the overclocked comparison article.

        • Chrispy_
        • 7 years ago

        Why do people think overclocking adds value? It’s only relevant if the alternative isn’t overclockable.

        The 2700K you’re comparing to frequently clocks to over 5GHz, and someone who has never seen a BIOS before can make it run at 4.5GHz, almost by accident….

        [url=http://www.kitguru.net/components/cpu/zardon/intel-core-i7-2700k-review/11/]Here is a 5GHz i7-2700K absolutely trouncing a 4.8GHz FX-8150, as you'd expect it to.[/url]

          • clone
          • 7 years ago

          Non-K Intels don’t overclock; compared to anything without it, the feature adds value, whether it be a K-series Intel or an AMD.

    • bthylafh
    • 7 years ago

    IMO you should have either thrown out the i7-3960X or made a second chart that excluded it so we could get some more space between the other processors.

    I think adding an i3 would be a good idea as well.

    Excellent review otherwise.

      • chuckula
      • 7 years ago

      I’m with you on the i3, but you can click the buttons to group different CPUs that exclude the 3960x.

        • bthylafh
        • 7 years ago

        I was referring specifically to the graph on the last page, and I don’t see a button to change there.

    • Krogoth
    • 7 years ago

    Interesting article; it demonstrates the differences between CPU architectures in single- and dual-threaded performance.

    The newer stuff shines in a few titles where the GPU isn’t the bottleneck. Clockspeed is still king for the most part. I suspect that the Bulldozers would catch up if you crank up the MEGAHURTZ, but there’s nothing that prevents their competitors from doing the same thing.

    Anyway, the sweet spot is the $200-249 range for a potent gaming CPU, and you can allocate the difference towards a better GPU and I/O.

      • Krogoth
      • 7 years ago

      AMD-tards are now butt-devastated that almost four-year old Nehalem generation CPUs are still ahead in single and dual-thread performance arena?

        • Deanjo
        • 7 years ago

        Hell they have a hard time catching up to their old architecture.

          • torquer
          • 7 years ago

          Management incompetence strikes again. Really, they should be teaching AMD’s story to students wanting to be engineers and executives both, as a case study in everything from how to exploit your competition’s malaise (Athlon) to how to copy its worst mistake (Netburst/Bulldozer).

          I think AMD was ultimately a company that was surprised by its own success and then lost its way immediately after capitalizing on it. It’s sad, unnecessary, and ultimately the fault of the managers who made decisions based on god knows what.

            • BobbinThreadbare
            • 7 years ago

            Also, their competitor spends an amount of money equal to their entire company on R&D.

            AMD had to have perfect execution at pretty much every step to be successful.

            This doesn’t excuse Bulldozer, but it does explain why their Phenoms were unable to keep up with Intel.

        • flip-mode
        • 7 years ago

        You sure love the butts.

          • superjawes
          • 7 years ago

          Who doesn’t?

        • speedfreak365
        • 7 years ago

        AMD-tards are now butt-devastated that almost six-year old Conroe generation CPUs are still ahead in single and dual-thread performance arena?

        There….. Fixed it for ya

          • Krogoth
          • 7 years ago

          Istanbul-based Phenoms (Phenom IIs) were able to pull ahead of Conroe-based chips and traded blows with Penryn-based units.

          It didn’t matter much to Intel at the time, since Bloomfields were around the corner.

      • Meadows
      • 7 years ago

      It’s amazing how you continue to try to look cool by purposefully typing “MEGAHURTZ” instead of “clock speed” or “MHz”. To recite an oft-repeated but nonetheless true line, [i]the 90's called, they want their hipster aura back[/i].

        • flip-mode
        • 7 years ago

        U can haz teh BUTTHURTZ.

          • Meadows
          • 7 years ago

          Tell me I’m wrong.

            • derFunkenstein
            • 7 years ago

            You’re wrong*

            ………………………………………………………………………………………………………………………….

            *You’re not wrong, but hey, you asked.

        • Krogoth
        • 7 years ago

        It is called an allusion.

        It has little to do with looking “cool”.

        Again, you are still pressing your silly vendetta and looking for anything to pounce on, no matter how trivial and inane it is.

    • irvinenomore
    • 7 years ago

    Excellent review, would love to see a flight simulator type of game in there though.

    Lock On in particular, although old, is a good stress test and has brought many a machine to its knees. More people probably play Forgotten Battles et al., though.

    Also if something is needed to stress and separate the high end cards, how about The Witcher II with Ubersampling enabled?

    The only other thing I’d ask for is a recommended resolution and detail level for each card and game combination. It has been done a little less professionally elsewhere, and it helps in picking out the best bang for the buck for those who have less time to research.

    • Bauxite
    • 7 years ago

    Awesome review, and it really drives home a rule of thumb that I’ve stuck to for awhile now, I’m sure many others have as well.

    Buying a high-bin CPU is a giant waste of money (especially the last few grades), but buying “good enough” is an incredible bang per buck. (IMO that has been at $200 or less for years now, and you still end up with something from the top [i]bracket[/i], especially with stuff like the Microcenter combos.)

    AMD surely feels the pain from this article, but I doubt Intel is all that happy either, because it pulls away the curtain from what I'm sure is one of their high-margin product segments 🙂

    The last $800 of a $1000 CPU is nearly worthless in [i]real world usage[/i], even to a performance-focused enthusiast.

      • plonk420
      • 7 years ago

      it’s probably a “1%er thing” … if you have the money, why not? better heatsink (sometimes), and no chance of mucking something up (let alone a warranty) by OCing.

      You don’t go to an Overeaters Anonymous meeting or a gay club and say “I only date thin people” or “as a male, I only date women.” Why complain about something you’re not in the market for? [or insert car analogy here]

        • Bauxite
        • 7 years ago

        Who said I was complaining?

        You can be flush with cash and still smart about spending it.

        On the other hand, I’ve seen my share of extreme edition rigs hooked up to 5 year old junky bargain basement lcds, neither flush nor bright…

        As for the cars, the fastest ones on the street are not the most expensive. Spending for a pure status symbol is all fine and dandy in the free world with your own money (go ahead), but the souped-up [insert massive internet argument here] will still blow your doors off if that’s the goal 🙂

        I’d imagine the status-symbol computer folks all bought his-and-hers gold-plated Macs anyways.

          • plonk420
          • 7 years ago

          I’m sure there are uses. I just can’t think of any. Maybe amateur setups where per-seat (CPU) licenses cost enough to warrant speed vs. multiple cheaper machines (the VM world? certain 3D rendering apps?), an insignificant but still existent group — ignoring “extreme edition gaming rig” people :S

      • Krogoth
      • 7 years ago

      $800-1000 CPUs from Intel are worthwhile for content creators, number crunchers and 3D modelers; they use applications that actually take advantage of their power.

      Most mainstream applications only utilize up to two threads, with a few more that go the extra mile with four threads. The extra power in six-core and eight-core CPUs is wasted.

        • Jason181
        • 7 years ago

        I think this is exactly why we’re not seeing any 6-core Ivy Bridge processors. Even in the scenarios that utilize more than 4 cores, SMT (aka Hyper-Threading) often helps quite a bit.

        It’s pretty obvious that $800+ CPUs are and probably always will be a niche product, but once you get under $500, I think you run into a lot of enthusiasts that are willing to pay a premium for higher performance in games, where, as you correctly point out, four threads is pretty much the max. It’s all a matter of priorities.

        What might be a giant waste of money to one person might be just the ticket, even for gaming (more money than time, dual-purpose machine, thread-intensive background tasks, etc.)

    • Madman
    • 7 years ago

    Great review!

    • drfish
    • 7 years ago

    Awesome stuff! I know I’ve been upset in the past saying, “my frame rate is low but my CPU/GPU aren’t at 100% usage, what gives?” – this is helping me wrap my head around that issue. I would love to see CPU usage numbers during the time span the latencies were measured.

      • Jason181
      • 7 years ago

      I suspect that would have more to do with the number of CPU cores than anything else. Most games use 3-4 cores, so any more cores would simply reduce CPU usage by 33-50% for 6 cores and 50-63% for 8 logical processors.
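
      The arithmetic behind those percentages, as a quick sketch (illustrative only, assuming the game fully loads 3-4 threads and everything else sits idle):

      # Illustrative only: overall CPU usage when a game saturates `used` threads out of `total`.
      def overall_usage(used, total):
          return 100 * used / total

      for total in (6, 8):
          low, high = overall_usage(3, total), overall_usage(4, total)
          print(f"{total} threads: {low:.1f}-{high:.1f}% busy, i.e. usage drops by {100 - high:.1f}-{100 - low:.1f}%")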

        • drfish
        • 7 years ago

        Certainly a factor, but I’ve done testing where I disable 3 of 4 cores and still have issues like that. Of course it’s also a game-engine thing to a degree.

    • Vivaldi
    • 7 years ago

    Bravo.

    The review quality on TR is freakishly excellent. You’ve ruined all other review sites for me. They just don’t compare.

    • Ashbringer
    • 7 years ago

    Just shows it really sucks to own an AMD processor. Oh crap, I own those…

      • derFunkenstein
      • 7 years ago

      I totally feel you, man. I’ve been there. Then again, I’m running an i3 2100 now, so I guess I’m OK. lol

      • brucethemoose
      • 7 years ago

      With the unlocked multiplier, my Phenom II X4 still holds itself together pretty well. It’s probably close to the stock i5 760 at 4.1Ghz (which is kinda sad), but I have to push it hard to get there.

      Now if I owned an FX… ya, that’s why I don’t own one.

    • achaycock
    • 7 years ago

    I found this review to be highly illuminating, but I am left pondering a question. What effect does overclocking have? I bought my Phenom II X6 1090T for running a large number of virtual machines at once and – I think rightly so – figured that 6 slower cores with lots of RAM would let me do this well. I have since clocked it up to 3.8GHz and, more importantly, got the northbridge/L3 cache running at 3.0GHz, and have found it to be a perfectly competent gaming CPU, with framerates in all games I play, from Skyrim to Starcraft, being glassy smooth (using a 2GB Radeon HD6970). So yes, an i7 2600K overclocked would doubtless be better, but then I’d have had no PCI passthrough support thanks to Intel’s strange feature policies. Now I wonder what would have been a better choice: an i7 2600 that I’d have had trouble overclocking, or my Phenom II X6 that is overclocked.

    My second thought is what effect using a 990FX had on the Phenom II performance. I recently had to replace my partner’s motherboard with a 970-based part, and her Phenom II 720BE could no longer have its 4th core enabled and could also not maintain a decent overclock. In the end we bought an FX-8120 using a cashback offer, and I was able to have it reach 4.4GHz with no overvolting, though it has to be noted that even at this speed, it’s generally slower than my Phenom II.

      • derFunkenstein
      • 7 years ago

      I have a feeling that OC’ing would be a largely linear increase up until the point where the GPU becomes a bottleneck. 15% OC might be 15% lower latencies overall, but when do you cease to see it on your 60Hz monitor and/or when does your video card start to block it?

      It seems like “budget” Intel CPUs (which are missing, I’ll note) would probably be where it’s at for the “sweet spot” in terms of lowest latency per component $$. Aside from the BF3 results, there’s no way AMD would catch up, and even in BF3 it seems the video card is the limiting factor.

        • Bensam123
        • 7 years ago

        It’s not… Remember how IB bleeds voltage out the wazoo? Nothing says there are linear gains here if the chips aren’t designed to operate at those higher frequencies… Or maybe some chips, such as BD are designed to operate at higher frequencies, they just can’t get there normally…

          • Jason181
          • 7 years ago

          Increased heat will affect overclocking headroom, but not performance.

          Performance is usually pretty linear up to the point where the CPU is no longer a significant bottleneck.

          • derFunkenstein
          • 7 years ago

          I’m talking about performance.

        • dragosmp
        • 7 years ago

        Imho IMC overclocking must be taken into account. Were the 15% OC applied to both the CPU cores and the IMC, maybe there would be a 15% reduction in frame time (barring a GPU bottleneck). If an OCing test were performed, it would be nice to see what smooths out the higher latency times: CPU core or IMC overclocking.

        My 2 cents: reducing memory latency or L3 cache latency would reduce most long frame times. Maybe one reason budget Intel CPUs would be good from a frame time perspective is that they have the same L3/memory latency as the mid-range.

    • StuG
    • 7 years ago

    Quite embarrassing for AMD. The exact reason I now have an Intel CPU in my machine.

    • brucethemoose
    • 7 years ago

    The sandy bridge i3 is more than just essential: it should literally be a cornerstone of this review. Including some Core2 CPUs and some overclocks would’ve been nice, but I had to double check when I didn’t see an i3 2100 in there.

    The debate between the Phenom II series and the i3 2xxx series as the best budget gaming processor is the most important debate that’s still unanswered. We all know the 2500k/3570k dominate their segment, that BD is terrible for gaming, that ivy is slightly faster than sandy, and that the choice between a Pentium, Athlon, or Llano processor depends on your setup and needs.

    But the difference between an OC’d Phenom II X4 and an i3 2120 is too close to call without some of TR’s latency magic. I also think it would be interesting to see how a dual-core SB stacks up to its quad-core brothers.

    Otherwise, awesome review.

      • derFunkenstein
      • 7 years ago

      Yeah, I agree. There’s a huge pile of CPUs here, and I totally appreciate it. Just wish there had been one more.

      • cegras
      • 7 years ago

      I’m also very surprised the i3-2120 was not included – the darling of all budget builds.

      • plonk420
      • 7 years ago

      it also competes with some of AMD’s offerings’ prices.

      • Jason181
      • 7 years ago

      The i5-655k is essentially an i3, though last generation, of course.

        • dragosmp
        • 7 years ago

        Actually no. The 655K doesn’t have an integrated memory controller (it is on the same package, not die) and the cores talk to the NB thru off-die QPI. This is what makes the i5-655K midway between a C2D and a fully integrated Lynnfield, but closer to the FSB based C2D from the memory latency standpoint. The i5-760 is a better comparison to a SB core i3/i5.

          • Jason181
          • 7 years ago

          I realize that, but it’s a fairly decent comparison. The point was that the 655k is competitive with the BD CPUs, so a SB i3 would almost certainly be slightly faster. The i5-760 has 4 physical and 4 logical cores, which is a vast difference from the i3’s two physical, 4 logical setup. I’m pretty sure that’s what the original poster was referring to (core count affects performance a lot more than the difference between Lynnfield and Sandy Bridge).

      • shaurz
      • 7 years ago

      Would have liked to see some Core 2s in there too. Some of us still use them…

        • ostiguy
        • 7 years ago

        Yup. All the PC hardware bucks I have spent in the last 2-3 years have been on disk IO.

        Intel should pay TR $100k to do a bakeoff: a C2D dual-core @ 1.83GHz overclocked vs. an overclocked C2Q quad-core (whatever the incredibly popular C2Q was – the Q6600?) vs. the current sub-$200 budget champ, the i5-3470.

        I’d be looking at probably $600-700 to replace my box – C2D, 12GB ram, mobo, and it would probably be time to replace the original Lian Li 60.

        • halbhh2
        • 7 years ago

        Quite right. Many people have CPUs from 4-5 years ago and don’t know exactly what the difference would be if they upgraded. This is true on the AMD side also, where AM3 motherboards are common, and at least the review does show the PII X4 doing okay in gaming, so someone with an AM3 board and a dual-core or low-end Athlon II can consider that $100 option (and overclock some, etc.). But on the Intel side, you are in the dark unless you do the legwork of reading other articles and interpolating (or extrapolating back)….uh… and some guesswork!

    • BobbinThreadbare
    • 7 years ago

    Bulldozer makes me sad.

      • vargis14
      • 7 years ago

      Makes you sad and me MAD!
      They should have just shrunk the Thuban 6-core Phenoms, massaged the memory controller and added some instructions! This should have been done over a year ago.

        • brucethemoose
        • 7 years ago

        This. BD has more transistors, and a similar die size on a much better 32nm process… but it’s worse than Thuban.

        Even if BD is better for server/hpc work, a massaged, die shrunk thuban would’ve been a much better consumer CPU.

          • derFunkenstein
          • 7 years ago

          They more-or-less gave you that with Llano but the clocks are so low (under 3GHz) as to be not competitive. A 3.5-3.6GHz Llano would make FX look bad. The conspiracy theorist in me wants to think Llano’s bad OC results (if you’re lucky you hit 3.4GHz) are damn-near intentional.

            • brucethemoose
            • 7 years ago

            With no L3, Llano’s more of a low-power, low-end chip. My A8-3500M actually has an absurd amount of headroom: I’m running it at 2.1GHz (1.5GHz stock) with a big undervolt too, but I haven’t even bothered to find the chip’s limits yet. My theory is that Llano was designed to be a low-power mobile CPU, hence its desktop incarnation OCs terribly.

            On a separate note, I have my own conspiracy theory. There’s an obscene amount of OC headroom in Llano, and the stock voltage at idle is way too high. Mobile Trinity, on the other hand, doesn’t have any headroom, and has a very low, very efficient idle that leaves little room for improvement.

            I think AMD knew that Llano was good… too good, as in better-than-Trinity good. I think they clocked (mobile) Llano way below its potential just so that Trinity would end up being better than its predecessor, and not worse like BD. If people saw that Piledriver was worse than Stars, AMD would be in deep trouble.

            • derFunkenstein
            • 7 years ago

            Since the article is talking about desktops, and a desktop A8-3500 was tested (at 2.9GHz), that’s my point of reference. At 2.9GHz it’s not fast enough to keep up, and they don’t generally go beyond 3.3 or 3.4 if you’re lucky, based on OC/review data around the web. At the same 4GHz that 45nm Phenom IIs hit, it’d be pretty great.

            Also, Llano has 1MB of L2 per core compared to 512k/core of the X4s. That goes a long way towards negating the slow L3 that got removed.

            • brucethemoose
            • 7 years ago

            Hmm, you’re right.

            http://en.inpai.com.cn/doc/enshowcont.asp?id=7978&pageid=8068 Very interesting. Clock for clock, Llano is virtually the same as Deneb. If they added L3 and tweaked it to reach higher clocks like BD and Thuban do, it might just match an i5-760.

        • juampa_valve_rde
        • 7 years ago

        I’ve had this thought for a long time: the Stars core was aged, but it aged well and still had room for new tricks, instructions, better IPC, and clock speed. I estimate that an eight-core Stars+ with a tweaked memory controller, new tricks, and 8MB of L3 would have been smaller than BD, with better thermals and performance (easily beating 4GHz). Also, their first 32nm processor (BD) was huge and never fabbed before, which doesn’t make sense in the fab industry; on a new process, the smart and conservative move is to make something known and small to learn how it performs (to clean the pipe, some say…).

          • achaycock
          • 7 years ago

          I think Stars was at the end of its life. It’s a lot of work transferring to a new process node, and AMD needed a new architecture for the future. I suspect that the concept behind Bulldozer is fundamentally sound, but thanks to a number of restrictions, the engineers at AMD have had to make thousands of small but vital cuts that have severely hurt the overall performance of the CPU. I agree that a 32nm Stars processor would have been better, but how much could be gained from doing so? It would have still been a budget CPU and a bleak outlook for the future. The Bulldozer gamble would have had to be made regardless.

          I am hopeful that there is a lot of room for improvement, and still more hopeful that the next iteration of this architecture will perform substantially better. But I am hopeful, not certain, and that means that unfortunately, for the first time in 13 years, my next main build will be designed around an Intel processor. I want to support the underdog, but I’m a power user, and it’s become increasingly apparent that Intel is the only game in town.

    • vargis14
    • 7 years ago

    How could you NOT include Intel’s low-budget i3-2120, a 3.3GHz dual core with HT?

    It’s a great review, but with the i3-2120 missing it definitely does not show the power, or lack thereof, of a budget Sandy dual core with HT.

    Pretty please test a $120 i3-2120 and add the results. I think it would surprise a lot of people.

      • tesmar
      • 7 years ago

      I have a 2350 (the i3) in my laptop, and it is great. It can do light gaming with no problem: Driver: SF, ME3, and others.

      • `oz
      • 7 years ago

      Wouldn’t surprise me one bit, as I am currently running an i3-2100 with a 660 Ti. No problems playing any games here.

        • rxc6
        • 7 years ago

        I have my i3 2100 paired with a 6950 2GB. At 1080p, I can push graphics all the way.

        • tesmar
        • 7 years ago

        Actually, I have a T520 with just the CPU and HD 3000. It is good enough for intense games at 1200-by-whatever at low settings, and for some games at 1680-by-whatever.

    • Meadows
    • 7 years ago

    The review looks good. The green results look bad.

    • DPete27
    • 7 years ago

    Great article, love the buttons for the graphs!!! Why doesn’t an i3-21xx count as “Budget Intel?” After all, you are suggesting that in the Econobox Build in the TR System Guide.

      • BoilerGamer
      • 7 years ago

      So Gamers should just grab i5-3570K? I am glad since that’s what I am going to do.

    • chuckula
    • 7 years ago

    First of all: This is an excellent review and I really appreciate that TR goes out and does unusual reviews like this instead of only cookie-cutter benchmarks on launch day of a new part. The work you have done with frame latency benchmarking is light-years ahead of practically all the other reviews on the Internet.

    Second of all: AMD can’t be too happy with these results since you have shown that:

    1. Brand new top-of-the-line AMD CPUs from late 2012 are still trailing Nehalem parts from 2009.

    2. Bulldozer’s main claim to fame was supposedly much better multi-tasking even if single-threaded performance wouldn’t be amazing. Well, looking at your multi-tasking benchmark seems to refute that hypothesis pretty clearly.

    3. I hope that your use of an AMD GPU in this benchmark dissuades any notion that one CPU platform is “cheating” in any way.

    Third of all: This review has shown that the CPU does definitely still matter to game performance. Now obviously you are going to need a GPU with some real horsepower to get great performance out of modern games, but the whole “CPU doesn’t matter!” argument has been very effectively trounced by this article, especially with your frame timing benchmarks.

      • BoilerGamer
      • 7 years ago

      So gamers should just get i5-3570K and be happy? I am glad that’s what I did.

      P.S. Sorry about the double post.

        • Bauxite
        • 7 years ago

        Pretty much, yes.

        • ColeLT1
        • 7 years ago

        Me too, very happy with my 3570K; 4.7GHz runs great.
        http://i.imgur.com/7Qocl.png

        • cynan
        • 7 years ago

        Or probably an i5-3470 if not overclocking to save yourself the extra $30 or so.

      • Goty
      • 7 years ago

      The multitasking test is actually a bit flawed, in my opinion. TR took a test where the FX CPUs already performed poorly and, wonder of wonders, they still performed poorly. The real problem here is that, in addition to looking at the overall performance, you should also look at the impact the multitasking has relative to the performance when only running the game. I haven’t taken the time to actually calculate the differences (maybe I’ll get on that if I get bored later), but it looks like the FX-series processors (the 8150 in particular) may have taken a less significant hit than the other processors in the test. If that’s the case, then they would in fact be “better” at multi-tasking than the other CPUs.

        • chuckula
        • 7 years ago

        [EDIT: See below for Goty’s point; it makes sense, and it turns out we really need to see both the performance of the game and the simultaneous performance of the transcoder to get a 100% clear picture of what is going on.]

        Dude, really?

        1. Video encoding is a multi-tasking test that is actually representative of what people do in the real world; I’m sure there are people who run those jobs in the background while playing games all the time.

        2. Complaining that Bulldozer isn’t particularly good at a common task that is actually supposed to be pretty well matched to Bulldozer’s supposed architectural advantages does not reflect poorly on TR’s review, but on Bulldozer.

          • Goty
          • 7 years ago

          Clearly you did not understand the point I was making. The idea is that it’s not a revelation in any way that the FX CPUs continue to turn in bad performance when multitasking; if the performance was abysmal to start, it’s going to continue to be abysmal. You yourself said that one of BD’s selling points was supposed to be that it handles multitasking better than other CPUs, but looking at the raw performance is NOT the way to measure that particular metric; the proper way to measure it would be to calculate the percent difference in the various values measured for each CPU and see which is taking a larger hit in the test. If, for example, the IB CPUs lost 20% of their performance whereas BD CPUs only lost 15%, that would validate the claim that the BD CPUs are more efficient multitaskers.

          I’ll write up the numbers when I get back home later tonight and show you what I mean.

            • Arag0n
            • 7 years ago

            I got your point. The FX-8150 shows that it needs 16% more time while the PII 980 needs 38% more, so yeah, I would say FX’s multitasking capability is improved over PII’s.

            Looking at the Intel side of things, just for comparison, the Core i7-3770K needed 32% more time per frame on average and the i5-3570K 31%; both have a very similar penalty. So I would say that AMD’s multitasking is better than Intel’s too: it suffers less of a penalty from doing so, and, as you pointed out, the problem is that the performance was bad to start with. If AMD had good performance, the results would be different. I would say that Tech Report should have used Battlefield 3 to check multitasking, since it’s a test that performs similarly for AMD and Intel, instead of picking the weakest game for AMD.

            The funny thing is that the AMD chips only spent 52 ms beyond 50 ms over 60 seconds (I can’t see anyone noticing that, and Skyrim was the worst game for AMD CPUs). And the Core i5-655K is a good example of poor multitasking: it used to be faster than every AMD chip but the PII 980, yet while multitasking it is slower than all the AMD CPUs.

            • Goty
            • 7 years ago

            I was going to suggest using BF3 as well, but that might not be the best choice, since it doesn’t appear to be particularly CPU-intensive in the first place. As for the differences in Skyrim, I pegged the “best” Intel processor in the 99th-percentile frame time test to be the 3960X, at around 15% slower while multitasking, and the “best” AMD processor as the 8150, at around 19% slower while multitasking. (This is just a calculation of percent difference.) These two processors also happen to be 1 and 2 in the overall rankings, respectively.

            Most interesting, I think, is that the IB and SB processors are firmly middle- to back-of-the-pack, averaging around a 30% increase in 99th-percentile frame time while multitasking. Another interesting fact is that the 8150 seems to fare much worse in the average FPS category, with framerates falling about 20%, compared to about 15-20% for the more recent Intel processors.

            • Arag0n
            • 7 years ago

            You are right: if we want to measure the slowdown of the CPU, we need a CPU-intensive game such as Skyrim or Starcraft II. However, the results shouldn’t be absolute, or at least not just absolute numbers of average frame rates or time spent over a threshold. As you pointed out, we should be comparing how much worse each architecture, at each core count, performs while multitasking, so people can extrapolate what to expect in other CPU-intensive games.

            • chuckula
            • 7 years ago

            OK, I see your point now; that’s a good catch. Looking at the relative scaling numbers, the 3960X is obviously the overall winner, but the 8150 comes in ahead of any quad-core CPU, with the 3770K being a close third in scaling.

            [EDIT]: I just realized that even the overall scaling percentages are still not enough to make a final determination: we also need to see the performance of the transcoder at the same time that the game is running! I should have noticed that earlier, since we have no idea how much transcoding each CPU was actually doing during the game benchmark. That would be another important factor to consider in determining how well these CPUs multitask, since it would obviously be possible to have the transcoder run incredibly slowly while assigning much higher priority to the game (or vice versa).

            • Goty
            • 7 years ago

            I agree about the transcode time, too. The whole “multitasking performance” issue is a thorny one for sure.

            • Arag0n
            • 7 years ago

            Touché. Task priority may have a big effect on the game frame rate; with no data about the actual work the transcoder does, it’s hard to know if some priority tricks are being played by AMD, Intel, or both.

            • dragosmp
            • 7 years ago

            “If, for example, the IB CPUs lost 20% of their performance whereas BD CPUs only lost 15%, that would validate the claim that the BD CPUs are more efficient multitaskers.”

            …but you can see that trend indirectly from the review data. In the multitasking test, though BD didn’t gain any honors, it was closer to the performance of its betters. It follows that BD lost less to the second task, as long as it had sufficient cores (the FX-4xxx being the exception).

            I would simply compare the single-tasking Skyrim numbers with the multitasking numbers and compute the 99th-percentile frame time loss due to the second task: % = (MT - ST) * 100 / ST
            3960: +19.1%
            3770: +31.9%
            3570: +31.1% (probably rounding error vs 3770)
            2500: +28.3%
            655K: +84.0%
            980: +37.8%
            1100: +26.2%
            4170: +51.6%
            8150: +18.9%

            …so yes, the 8150 is a better multitasker than the 3770 since it loses less to the second task. The problem is that the 3770 is so much faster that, even by losing almost twice as much, it still runs faster than the FX.
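
            For anyone who wants to redo this arithmetic against the article’s charts, here is a minimal Python sketch of the calculation above. The frame-time values in the example are placeholders for illustration, not figures from the review; substitute the single-task (ST) and multitask (MT) 99th-percentile frame times read off the graphs.

            # Multitasking penalty: percent increase in 99th-percentile frame time
            # when a background transcode runs alongside the game.
            # penalty % = (MT - ST) * 100 / ST

            def multitask_penalty(st_ms: float, mt_ms: float) -> float:
                """Percent frame-time increase caused by the background task."""
                return (mt_ms - st_ms) * 100.0 / st_ms

            # Placeholder (ST, MT) 99th-percentile frame times in milliseconds --
            # not the review's numbers; plug in the values from the charts.
            frame_times_ms = {
                "i7-3960X": (20.0, 23.8),
                "FX-8150":  (30.0, 35.7),
            }

            for cpu, (st, mt) in frame_times_ms.items():
                print(f"{cpu}: +{multitask_penalty(st, mt):.1f}% with the transcode running")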

            • Goty
            • 7 years ago

            Not sure why you’re being voted down, since I’m being voted up above and your results are right in line with mine, but yes, I agree. I’d probably change your last paragraph to say that the 8150 is a more efficient multitasker, not necessarily better simply due to the performance delta.

        • Arag0n
        • 7 years ago

        Doing the math, the FX-6100 and FX-8150 seem to be the winners, with 16% and 25% penalties versus the 32% the top-tier Ivy Bridge parts get.

        • Jason181
        • 7 years ago

        It’s only flawed if you’re interested in theoretical multitasking penalties. It’s very valid, as are the conclusions, if you’re actually just running a game and encoding.

          • Arag0n
          • 7 years ago

          The point is that different games, now and in the future, will have different CPU needs, so the only things the data tells us are the performance level and the multitasking penalty. The first pages test the CPUs’ performance in games, and the last one is expected to measure the penalty from multitasking. With the data as given, I can’t predict whether a game will be playable or how meaningful the penalty is; I only know what happened while playing Skyrim. I need to use the data and calculate some more parameters to have a reference for other games.

            • Jason181
            • 7 years ago

            The requirements of games aren’t set to change significantly in the foreseeable future of that platform. The instruction mix hasn’t really changed all that much for games in a long time. It would be impossible to test all of the combinations of different games and background tasks.

            The test was meant to give a general idea of performance given a gaming workload and a typical CPU-intensive background task. Extrapolating that to other games and workloads, or using that data to derive percentage penalties, is generalizing anyway.

            So all you’re saying is that you can’t use that data to generalize, and you can’t predict the future. That really doesn’t have anything to do with Goty’s comment, except for the fact that he is generalizing. The author of the article knew that it was a single test, and generating percentage penalties for one test and expecting them to apply to all situations intimates that more data is available than actually was. I was simply pointing out that the test isn’t definitive of all workloads, and generalizing in percentages doesn’t really add more value than what was already provided.

            • Arag0n
            • 7 years ago

            I agree that results from a single game can’t be extrapolated and that we need more tests to get a general idea, but not with everything else. Skyrim is a very CPU-demanding game, and there are currently plenty of less CPU-demanding games, so I would say Skyrim represents the future (2-3 years from now) better than the current market.

            Still, as pointed out before, I think it’s important to know how much work the background task gets done, whether priority tricks are in play, etc. You can’t just pick a game where AMD already performs poorly and say its multitasking is poor, because it can’t magically be faster while multitasking than it was when it wasn’t multitasking.

            • Jason181
            • 7 years ago

            To be fair, it wasn’t like they picked the only game that AMD’s CPUs were weak in; they were weak in most of the games, so I’d say it’s fair to say that you probably don’t want to multitask while gaming on those CPUs.
