Weighing the value of today’s processors

PROCESSOR REVIEWS TEND TO FOCUS on performance, and rightly so. We all want to see how well the latest silicon plays games, encodes media files, renders scenes, and performs various other tasks. But performance isn’t the only metric that’s important for CPUs. For instance, power consumption and energy efficiency are also important pieces of the puzzle, as is the actual cost of the chip. We’ve considered all of these things in our CPU reviews for years, but we’ve never before set out to quantify the value proposition—to show exactly how much bang you’ll get from dropping your hard-earned bucks on a particular CPU.

Part of the reason we’ve avoided doing so is that, let’s face it, it has the potential to be kind of cheesy. There’s much more to a CPU’s value proposition than a cold cost-benefit analysis can capture, and in truth, doing such an analysis well can prove rather tricky. That’s why you should read our CPU reviews and our system building guides to see what they recommend.

However…

A vocal contingent of our readers has long been asking for a closer look at price-performance issues, and we think we’ve cooked up some novel ways of expressing that data that may make it feasible. So we’ve decided to give it a shot.

Fortuitously, AMD and Intel both took an axe to their prices last month, and we recently added Intel’s $113 Core 2 Duo E4300 to our constellation of test results, so now seems like a particularly appropriate time to consider performance per dollar. Join us as we look at the value proposition of 16 CPUs, from the Athlon 64 X2 3600+ all the way up to the Core 2 Extreme QX6800, across a wide range of games, applications, and even energy efficiency tests. Some of what we found surprised us, and it may change the way you think about CPU value.

Quantifying CPU value
In theory, quantifying value is easy. We can measure performance quite well, prices are easy to check, and dividing the former by the latter gives you performance per dollar—except it’s not quite that simple, for a number of reasons.

First, processors aren’t always the only factor affecting performance in a given task or benchmark. Games, for example, tend to favor GPU pixel-pushing horsepower over CPU computational grunt. Memory bandwidth can limit performance, software and operating systems can be wildly inefficient, and don’t get us started on the bottlenecking potential of hard disk drives. Then there’s the issue of whether software makes the most of a processor’s abilities. This one is a particular problem for quad-core systems, since few applications are multithreaded well enough to exploit them fully.

And, of course, there’s the question of cost. Sure, it’s easy to pull from official price lists, but bulk pricing doesn’t always track with street prices. Bare processor prices don’t take into account overall platform costs, either, or the cost of power consumption on your utility bill.

We’ve attempted to mitigate some of these issues by providing value analysis for a wide range of applications and processors, so we can draw conclusions based on overall trends rather than just a handful of numbers. We can illustrate which processors offer better value than others and under which circumstances.

Here’s a quick run-down of the specifications of the Intel processors we’ll be looking at today. We took these prices from Intel’s official processor price list, since street prices tend to vary from vendor to vendor and fluctuate without warning.

Model Clock speed Cores L2 cache (total) Fab process TDP Price
Core 2 Duo E4300 1.8GHz 2 2MB 65nm 65W $113
Core 2 Duo E6300 1.86GHz 2 2MB 65nm 65W $163
Core 2 Duo E6400 2.13GHz 2 2MB 65nm 65W $183
Core 2 Duo E6600 2.4GHz 2 4MB 65nm 65W $224
Core 2 Duo E6700 2.66GHz 2 4MB 65nm 65W $316
Core 2 Extreme X6800 2.93GHz 2 4MB 65nm 75W $999
Core 2 Quad Q6600 2.4GHz 4 8MB 65nm 105W $530
Core 2 Extreme QX6700 2.66GHz 4 8MB 65nm 130W $999
Core 2 Extreme QX6800 2.93GHz 4 8MB 65nm 130W $1199

This is a classic example of CPU price structure. Intel’s Core 2 prices ramp up much more quickly than key specs like clock speed, cache size, or the number of cores. For instance, the E6300 may have half the number of cores, one quarter the cache, and a roughly 37% lower clock frequency than the flagship QX6800, but it costs 86% less. Or consider the E6600 and Q6600, both of which run at 2.4GHz. The latter is essentially twice the former, but its price is close to 2.4 times as high. Spending more doesn’t necessarily get you an equitable boost in computational power, a trend we see continue with AMD’s offerings.
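
Before we move on to AMD’s lineup, here’s a minimal sketch of that spec-versus-price arithmetic, using the list prices from the table above and the comparison pair called out in the paragraph:

```python
# Rough spec-vs-price check using the list prices from the table above.
chips = {
    "Core 2 Duo E6300":      {"clock_ghz": 1.86, "cores": 2, "cache_mb": 2, "price": 163},
    "Core 2 Extreme QX6800": {"clock_ghz": 2.93, "cores": 4, "cache_mb": 8, "price": 1199},
}

low, high = chips["Core 2 Duo E6300"], chips["Core 2 Extreme QX6800"]

clock_deficit = 1 - low["clock_ghz"] / high["clock_ghz"]   # ~0.37: roughly 37% lower clock
price_savings = 1 - low["price"] / high["price"]           # ~0.86: roughly 86% cheaper

print(f"Clock deficit: {clock_deficit:.0%}, "
      f"cores: {low['cores']} vs {high['cores']}, "
      f"cache: {low['cache_mb']}MB vs {high['cache_mb']}MB, "
      f"price savings: {price_savings:.0%}")
```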

Model Clock speed Cores L2 cache (total) Fab process TDP Price
Athlon 64 X2 3600+ 1.9GHz 2 1MB 65nm 65W $73
Athlon 64 X2 4400+ 2.3GHz 2 1MB 65nm 65W $121
Athlon 64 X2 5000+ 2.6GHz 2 1MB 65nm 65W $167
Athlon 64 X2 5600+ 2.8GHz 2 2MB 90nm 89W $188
Athlon 64 X2 6000+ 3.0GHz 2 2MB 90nm 125W $241
Athlon 64 FX-72 2.8GHz 4 4MB 90nm 125W x 2 $599
Athlon 64 FX-74 3.0GHz 4 4MB 90nm 125W x 2 $799

AMD’s price range dips lower than Intel’s, but it also doesn’t reach beyond $799. Then again, the Athlon 64 FX-72 and FX-74 require a dual-socket motherboard that currently sells for more than $325, so there’s a considerable additional cost associated with that platform.

Here, too, prices ramp up faster than key specs. An FX-72 setup gets you the same clock speed and per-chip cache as the X2 5600+ with twice the cores, but at $599 for the pair, it costs more than three times as much as a single 5600+. Similarly, the Athlon 64 X2 3600+ gives up 37% of the clock speed and 50% of the cache of the 6000+, but sells for just 30% of the cost. Based on their specs alone, budget chips certainly look to have the best value propositions.

Charting relative value is a little new for us, so we’ve come up with a couple of ways to express performance per dollar. The first is the easiest: a simple graph depicting a processor’s score in a given test—be it frames per second, an encoding rate, or even an arbitrary benchmark score—divided by that processor’s price. In some cases, such as with media encoding, we’ve had to do a little multiplication to avoid generating value scores with too many decimal places to express succinctly. This doesn’t taint our results, though; it just makes them easier to read.
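
As a concrete illustration, a bar in one of those value charts boils down to something like the following sketch; the scale factor is a purely cosmetic choice for readability, not necessarily the exact one we used, and the second result is hypothetical:

```python
def value_score(result, price_usd, scale=1.0):
    """Performance per dollar: a benchmark result divided by the chip's list price.

    `scale` is a cosmetic multiplier so that small numbers (encoding rates,
    for example) don't turn into long strings of decimal places.
    """
    return result * scale / price_usd

# Frames per second per dollar for a $73 chip averaging 80 FPS...
print(value_score(80.0, 73))      # ~1.10 FPS per dollar
# ...versus a hypothetical $1,199 chip averaging 93 FPS.
print(value_score(93.0, 1199))    # ~0.08 FPS per dollar
```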

Our second tool for evaluating processor value comes in the form of a scatter plot, which looks like so:

Performance is tracked along the Y axis, and price along the X. Since we’re interested in chips that offer the best value, we’ll be looking for vertical progression on the performance axis with as little progression on the price axis as possible. Hypothetically, the best possible processor would sit at the top left of the plot, offering very high performance for free. Conversely, you wouldn’t want to buy a chip sitting at the bottom right of the plot, where price is high and performance is low.

Of course, picking a processor isn’t typically about what’s best as much as what sits in the mythical price-performance sweet spot. To determine that using our scatter plots, you’ll want to find the cutoff where either a) performance keeps increasing but starts to cost more and more, or b) performance stops going up significantly—or at all—with price. In a scatter plot like the one above, for instance, the latter would apply. There are exceptions to this rule, though, as we’ll see in the next few pages.

The scatter plot might look a little daunting, but it has the advantage of providing an instantaneous look at how price scales with performance. Most of us can afford a processor that costs a little more than the $73 Athlon 64 X2 3600+, so it’s helpful to be able to spot the best performing CPU within a given budget.
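
For readers who want to reproduce this kind of chart at home, a scatter plot like the ones in this article takes only a few lines of Python with matplotlib. The prices below come from the tables above, but the scores are placeholders rather than our actual results:

```python
import matplotlib.pyplot as plt

# Placeholder data: (list price in dollars, benchmark score) per chip.
chips = {
    "Athlon 64 X2 3600+": (73, 100),
    "Core 2 Duo E4300":   (113, 115),
    "Core 2 Duo E6600":   (224, 140),
    "Core 2 Quad Q6600":  (530, 210),
}

prices = [price for price, _ in chips.values()]
scores = [score for _, score in chips.values()]

plt.scatter(prices, scores)
for name, (price, score) in chips.items():
    plt.annotate(name, (price, score), fontsize=8)

plt.xlabel("Price (USD)")          # value improves toward the left...
plt.ylabel("Performance (score)")  # ...and toward the top of the plot
plt.title("Performance vs. price")
plt.show()
```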

 

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

In some cases, getting the results meant simulating a slower chip with a faster one. For instance, our Core 2 Duo E6600 and E6700 processors are actually a Core 2 Extreme X6800 processor clocked down to the appropriate speeds. Their performance should be identical to that of the real thing. Similarly, our Athlon 64 FX-72 results come from an underclocked pair of Athlon 64 FX-74s, our Athlon 64 X2 4400+ is an underclocked X2 5000+ (both 65nm), and our Athlon 64 X2 5600+ is an underclocked Athlon 64 X2 6000+.

Our test systems were configured like so:

Core 2 test system:
Processors: Core 2 Duo E4300 1.8GHz, Core 2 Duo E6300 1.86GHz, Core 2 Duo E6400 2.13GHz, Core 2 Duo E6600 2.4GHz, Core 2 Duo E6700 2.66GHz, Core 2 Extreme X6800 2.93GHz, Core 2 Quad Q6600 2.4GHz, Core 2 Extreme QX6700 2.66GHz, Core 2 Extreme QX6800 2.93GHz
System bus: 1066MHz (266MHz quad-pumped)
Motherboard: Intel D975XBX2
BIOS revision: BX97520J.86A.2618.2007.0212.0954
North bridge: 975X MCH
South bridge: ICH7R
Chipset drivers: INF Update 8.1.1.1010, Intel Matrix Storage Manager 6.21
Memory size: 2GB (2 DIMMs)
Memory type: Corsair TWIN2X2048-6400C4 DDR2 SDRAM at 800MHz
Audio: Integrated ICH7R/STAC9274D5 with SigmaTel 6.10.0.5274 drivers

Athlon 64 X2 test system:
Processors: Athlon 64 X2 3600+ 1.9GHz (65nm), Athlon 64 X2 4400+ 2.3GHz (65nm), Athlon 64 X2 5000+ 2.6GHz (65nm), Athlon 64 X2 5600+ 2.8GHz (90nm), Athlon 64 X2 6000+ 3.0GHz (90nm)
System bus: 1GHz HyperTransport
Motherboard: Asus M2N32-SLI Deluxe
BIOS revision: 0903
North bridge: nForce 590 SLI SPP
South bridge: nForce 590 SLI MCP
Chipset drivers: ForceWare 15.00
Memory size: 2GB (2 DIMMs)
Memory type: Corsair TWIN2X2048-8500C5 DDR2 SDRAM at 800MHz
Audio: Integrated nForce 590 MCP/AD1988B with SoundMax 6.10.2.6100 drivers

Quad FX test system (dual-socket):
Processors: Athlon 64 FX-72 2.8GHz, Athlon 64 FX-74 3.0GHz
System bus: 1GHz HyperTransport
Motherboard: Asus L1N64-SLI WS
BIOS revision: 0205
North bridge: nForce 680a SLI
South bridge: nForce 680a SLI
Chipset drivers: ForceWare 15.00
Memory size: 2GB (4 DIMMs)
Memory type: Crucial Ballistix PC6400 DDR2 SDRAM at 800MHz
Audio: Integrated nForce 680a SLI/AD1988B with SoundMax 6.10.2.6100 drivers

Common to all systems:
Memory timings: CAS latency (CL) 4, RAS-to-CAS delay (tRCD) 4, RAS precharge (tRP) 4, cycle time (tRAS) 12
Hard drive: Maxtor DiamondMax 10 250GB SATA 150
Graphics: GeForce 7900 GTX 512MB PCIe with ForceWare 100.64 drivers
OS: Windows Vista Ultimate x64 Edition

Our Core 2 Duo E6400 processor came to us courtesy of the fine folks up north at NCIX. Those of you who are up in Canada will definitely want to check them out as a potential source of PC hardware and related goodies.

Thanks to Corsair for providing us with memory for our testing. Their products and support are far and away superior to generic, no-name memory.

Also, all of our test systems were powered by OCZ GameXStream 700W power supply units. Thanks to OCZ for providing these units for our use in testing.

The test systems’ Windows desktops were set at 1280×1024 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

The Elder Scrolls IV: Oblivion
We tested Oblivion by manually playing through a specific point in the game five times while recording frame rates using the FRAPS utility. Each gameplay sequence lasted 60 seconds. This method has the advantage of simulating real gameplay quite closely, but it comes at the expense of precise repeatability. We believe five sample sessions are sufficient to get reasonably consistent results. In addition to average frame rates, we’ve included the low frame rates, because those tend to reflect the user experience in performance-critical situations. In order to diminish the effect of outliers, we’ve reported the median of the five low frame rates we encountered.
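
Here is a minimal sketch of how that kind of data gets reduced, assuming one average and one low frame rate recorded per 60-second FRAPS run; the numbers are made up for illustration:

```python
import statistics

# Hypothetical per-run results from five 60-second FRAPS sessions:
# (average FPS, lowest FPS) for each playthrough.
runs = [(82.1, 58), (79.4, 55), (81.0, 57), (78.8, 54), (80.2, 60)]

avg_fps = statistics.mean(avg for avg, _ in runs)

# Reporting the median of the five per-run lows damps the effect of a
# single outlier run on the "low FPS" figure.
low_fps = statistics.median(low for _, low in runs)

print(f"Average FPS: {avg_fps:.1f}, median low FPS: {low_fps}")
```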

For this test, we set Oblivion’s graphical quality to “Medium” but with HDR lighting enabled and vsync disabled, at 800×600 resolution. We’ve chosen this relatively low display resolution in order to prevent the graphics card from becoming a bottleneck, so differences between the CPUs can shine through.

Notice the little green plot with four lines above the benchmark results. That’s a snapshot of the CPU utilization indicator in Windows Task Manager, which helps illustrate how much the application takes advantage of up to four CPU cores, when they’re available. I’ve included these Task Manager graphics whenever possible throughout our results. In this case, Oblivion really only takes full advantage of a single CPU core, although Nvidia’s graphics drivers use multithreading to offload some vertex processing chores.

In raw performance per dollar terms, the Athlon 64 X2 3600+ runs away with this one, followed at a distance by the Core 2 Duo E4300 and Athlon 64 X2 4400+. Quad-core performance per dollar looks pretty dismal in Oblivion, in part due to the fact that the game doesn’t actually take advantage of four cores.

Looking over to our scatter plot helps clarify a few things, though. Even at a display resolution of 800×600 with the detail turned down—settings that should highlight differences in CPU performance—variations in frame rate between the chips here are fairly small. We expect those differences to shrink even further once you turn up the detail, at which point performance will really be dependent on the graphics card more than anything.

Considering even the Athlon 64 X2 3600+ can get you an average of 80 FPS (and a minimum of 57 FPS, close to the 60Hz cap on many LCD monitors) in this scenario where the GPU isn’t a bottleneck, we wouldn’t recommend spending much more on a processor to run this game.

Rainbow Six: Vegas
Rainbow Six: Vegas is based on Unreal Engine 3 and is a port from the Xbox 360. For both of these reasons, it’s one of the first PC games that’s multithreaded, and it ought to provide an illuminating look at CPU gaming performance.

For this test, we set the game to run at 800×600 resolution with high dynamic range lighting disabled. “Hardware skinning” (via the GPU) was disabled, leaving that burden to fall on the CPU. Shadow quality was set to very low, and motion blur was enabled at medium quality. I played through a 90-second sequence of the game’s Terrorist Hunt mode on the “Dante’s” level five times, capturing frame rates with FRAPS, as we did with Oblivion.

What we’ve just said for Oblivion rings even truer for Rainbow Six: Vegas. Here at 800×600, there’s a difference of just 13.2 FPS between the $113 Core 2 Duo E4300 and the $1,199 Core 2 Extreme QX6800. Once you turn up the detail, that gap will likely narrow even more. Again, therefore, we wouldn’t recommend spending all that much on a processor to run this game. Our FPS-per-dollar chart echoes this: the best value clearly lies with the Athlon 64 X2 3600+ and the Core 2 Duo E4300. (The X2 4400+ can be counted out, since the E4300 offers higher performance for less.)

 

Valve Source engine particle simulation
Next up are a couple of tests we picked up during a visit to Valve Software, the developers of the Half-Life games. They’ve been working to incorporate support for multi-core processors into their Source game engine, and they’ve cooked up a couple of benchmarks to demonstrate the benefits of multithreading.

The first of those tests runs a particle simulation inside of the Source engine. Most games today use particle systems to create effects like smoke, steam, and fire, but the realism and interactivity of those effects is limited by the available computing horsepower. Valve’s particle system distributes the load across multiple CPU cores.

More CPU-bound applications like Valve’s particle simulation benchmark also give the X2 3600+ the top spot in value terms, but they nonetheless see very significant performance gains from faster chips. The Core 2 Quad Q6600’s score is nearly four times that of the X2 3600+, for instance. Still, it’s hard to argue with cold numbers: the Q6600’s price tag is over seven times that of the 3600+’s, and it sits in the bottom half of our score point/dollar chart, clearly bested by dual-core chips on the value scale despite their significantly lower performance.

Within the dual-core realm, a glance at our value chart and scatter plot suggests the Core 2 Duo E4300, the Athlon 64 X2 5600+, and the Core 2 Duo E6600 are the way to go if you’d like a little extra performance over the Athlon 64 X2 3600+. The X2 4400+ doesn’t have a bad value proposition, but it’s no match for the C2D E4300, as our scatter plot shows.

Valve VRAD map compilation
This next test processes a map from Half-Life 2 using Valve’s VRAD lighting tool. Valve uses VRAD to precompute lighting that goes into its games. This isn’t a real-time process, and it doesn’t reflect the performance one would experience while playing a game. It does, however, show how multiple CPU cores can speed up game development.

This second Valve test mirrors the first one somewhat. There’s a drastic gap between dual-core and quad-core solutions on the raw performance scale, but high prices prevent quad-core chips from faring all that well on the value scale. Quad-core offerings are obviously your best bet if money is no object (or if you need to compile maps on a regular basis, as the time savings from faster compiles really do add up), but the rest of us will be more interested in chips like the Athlon 64 X2 3600+, the Core 2 Duo E4300, the Core 2 Duo E6400, and the Core 2 Duo E6600. From best to worst value, those appear to be the best deals in the dual-core segment.

 

The Panorama Factory
The Panorama Factory handles an increasingly popular image processing task: joining together multiple images to create a wide-aspect panorama. This task can require lots of memory and can be computationally intensive, so The Panorama Factory comes in a 64-bit version that’s multithreaded. We asked it to join four pictures, each eight megapixels, into a glorious panorama of the interior of Damage Labs. The program’s timer function captures the amount of time needed to perform each stage of the panorama creation process. We’ve also added up the total operation time to give us an overall measure of performance.

The scatter plot from our panorama stitching benchmark shows a pattern reminiscent of the previous two tests, but here the gap between dual- and quad-core chips is much narrower. As a result, the Core 2 Quad Q6600 is even worse off in our value chart.

On the dual-core front, where the value lies once again, we see an interesting pattern. AMD’s dual-core offerings clearly outpace the competition from Intel, as highlighted by their greater proximity to the Y axis on our scatter plot. Take your pick here, but remember value still goes down the higher you climb on the price ladder. Barring the X2 3600+, we’d probably pick the X2 4400+ or the X2 5600+ ourselves.

This is as good a time as any for a brief intermission to point out some other trends we’re seeing. So far, AMD’s Athlon 64 FX-72 and FX-74 offerings fare quite poorly—and they’d fare even worse if we factored in the price premium for the only Quad FX motherboard out today, which costs around $330. We also see Intel’s Core 2 Extreme X6800 processor consistently dipping toward the bottom right of our scatter plot, where value is worst. It really doesn’t pay to be a premium dual-core chip in this day and age.

picCOLOR
picCOLOR was created by Dr. Reinert H. G. Müller of the FIBUS Institute. This isn’t Photoshop; picCOLOR’s image analysis capabilities can be used for scientific applications like particle flow analysis. Dr. Müller has supplied us with new revisions of his program for some time now, all the while optimizing picCOLOR for new advances in CPU technology, including MMX, SSE2, and Hyper-Threading. Naturally, he’s ported picCOLOR to 64 bits, so we can test performance with the x86-64 ISA. Eight of the 12 functions in the test are multithreaded, and in this latest revision, five of those eight functions use four threads.

Scores in picCOLOR, by the way, are indexed against a single-processor Pentium III 1 GHz system, so that a score of 4.14 works out to 4.14 times the performance of the reference machine.

Things are a little tighter in our picCOLOR benchmark. Here, the 3600+, E4300, 5000+, 5600+, and E6600 appear to be the best deals among dual-core offerings. The 4400+ isn’t a bad choice by any means, but the E4300 outperforms it by a hair while costing slightly less.

Up on the quad-core front, the Q6600 is clearly the best choice here, although we shouldn’t need to remind you that it still sits in the bottom half of our performance per dollar chart. In other words, it’s a great chip if you can afford it and need the extra performance, but it’s not a K-Mart blue-light special.

 

Windows Media Encoder x64 Edition
Windows Media Encoder is one of the few popular video encoding tools that uses four threads to take advantage of quad-core systems, and it comes in a 64-bit version. For this test, I asked Windows Media Encoder to transcode a 153MB 1080-line widescreen video into a 720-line WMV using its built-in DVD/Hardware profile. Because the default “High definition quality audio” codec threw some errors in Windows Vista, I instead used the “Multichannel audio” codec. Both audio codecs have a variable bitrate peak of 192Kbps.

In Windows Media encoding, the very same chips come out on top as in our picCOLOR benchmark: the 3600+, E4300, 5000+, 5600+, and E6600, in order from the highest performance per dollar to the lowest.

The Q6600 is in the same spot as before, too, although this test gives us a more concrete example to illustrate its position. If you look at our encoding numbers, the Q6600 encodes our file 302.3 seconds (just over five minutes) faster than the E6600. If you don’t encode movies very often then that’s probably not a big deal, but if this is a task you carry out regularly, then those chunks of five minutes may turn into hours. It’s up to you whether you think that time saving is worth the $306 difference between the E6600 and Q6600.
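
To put that trade-off in perspective, here’s a quick back-of-the-envelope calculation; the encode count is a hypothetical workload, not something we measured:

```python
premium = 530 - 224              # Q6600 vs. E6600 list price, in dollars
seconds_saved_per_encode = 302.3

# If you encode, say, ten videos of this size per week:
encodes_per_week = 10            # hypothetical workload
hours_saved_per_year = seconds_saved_per_encode * encodes_per_week * 52 / 3600

print(f"${premium} premium buys roughly {hours_saved_per_year:.0f} hours per year "
      f"at that workload")
```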

LAME MP3 encoding
LAME MT is a multithreaded version of the LAME MP3 encoder. LAME MT was created as a demonstration of the benefits of multithreading specifically on a Hyper-Threaded CPU like the Pentium 4. Of course, multithreading works even better on multi-core processors. You can download a paper (in Word format) describing the programming effort.

Rather than run multiple parallel threads, LAME MT runs the MP3 encoder’s psycho-acoustic analysis function on a separate thread from the rest of the encoder using simple linear pipelining. That is, the psycho-acoustic analysis happens one frame ahead of everything else, and its results are buffered for later use by the second thread. That means this test won’t really use more than two CPU cores.
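
To illustrate the general pattern (not LAME MT’s actual code), here’s a toy two-stage pipeline in which an analysis thread runs one frame ahead of the consumer and buffers its results:

```python
import queue
import threading

frames = range(8)                  # stand-ins for audio frames
results = queue.Queue(maxsize=1)   # holds analysis output, at most one frame ahead

def analysis_stage():
    for frame in frames:
        # The psycho-acoustic analysis for this frame would happen here.
        results.put(("analysis", frame))
    results.put(None)              # sentinel: no more frames

threading.Thread(target=analysis_stage, daemon=True).start()

# The second stage (the rest of the encoder) consumes the buffered results.
while True:
    item = results.get()
    if item is None:
        break
    _, frame = item
    print(f"encoding frame {frame}")
```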

We have results for two different 64-bit versions of LAME MT from different compilers, one from Microsoft and one from Intel, doing two different types of encoding, variable bit rate and constant bit rate. We are encoding a massive 10-minute, 6-second 101MB WAV file here, as we have done in many of our previous CPU reviews.

Surprise, surprise. The 3600+, E4300, 5000+, 5600+, and E6600 offer the five best value propositions in this test, as well. The Athlon 64 X2 6000+ nonetheless sits fairly close to the E6600 in terms of performance per dollar, and its raw performance is a wee bit higher.

Coughing up the extra dough for the Q6600 here is quite counterproductive, since this application can only use two cores at once.

 

Cinebench
Graphics is a classic example of a computing problem that’s easily parallelizable, so it’s no surprise that we can exploit a multi-core processor with a 3D rendering app. Cinebench is the first of those we’ll try, a benchmark based on Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. This test runs with just a single thread and then with as many threads as CPU cores are available.

Cinebench breaks the flow of gentle alternation between AMD and Intel chips by giving AMD’s offerings the clear upper hand pretty much across the board. Even the ill-fated Athlon 64 FX-72 and FX-74 chips outperform the Q6600, although Intel’s quad-core contender gets a better score in our performance per dollar chart. Considering the costs associated with AMD’s Quad FX platform, the Q6600 is what we’d recommend for this application if your budget allows room for a quad-core solution.

POV-Ray rendering
We’ve finally caved in and moved to the beta version of POV-Ray 3.7 that includes native multithreading. The latest beta 64-bit executable is still quite a bit slower than the 3.6 release, but it should give us a decent look at comparative performance, regardless. Performance per dollar values were generated using the performance of each CPU with four threads.

The situation we just saw in Cinebench is both mirrored and amplified in POV-Ray. AMD’s lead is so pronounced here that the FX-72 manages to overtake the Q6600 in our performance per dollar chart. Looking at the scatter plot shows why: the AMD four-core offering has a huge performance lead over its Intel competitor, but the price difference between the two is very slight. Quad FX may yet be worth the outrageously expensive motherboard and increased power bills in POV-Ray.

 

MyriMatch
Our benchmarks sometimes come from unexpected places, and such is the case with this one. David Tabb is a friend of mine from high school and a long-time TR reader. He recently offered to provide us with an intriguing new benchmark based on an application he’s developed for use in his research work. The application is called MyriMatch, and it’s intended for use in proteomics, or the large-scale study of proteins. I’ll stop right here and let him explain what MyriMatch does:

In shotgun proteomics, researchers digest complex mixtures of proteins into peptides, separate them by liquid chromatography, and analyze them by tandem mass spectrometers. This creates data sets containing tens of thousands of spectra that can be identified to peptide sequences drawn from the known genomes for most lab organisms. The first software for this purpose was Sequest, created by John Yates and Jimmy Eng at the University of Washington. Recently, David Tabb and Matthew Chambers at Vanderbilt University developed MyriMatch, an algorithm that can exploit multiple cores and multiple computers for this matching. Source code and binaries of MyriMatch are publicly available.

In this test, 5555 tandem mass spectra from a Thermo LTQ mass spectrometer are identified to peptides generated from the 6714 proteins of S. cerevisiae (baker’s yeast). The data set was provided by Andy Link at Vanderbilt University. The FASTA protein sequence database was provided by the Saccharomyces Genome Database.

MyriMatch uses threading to accelerate the handling of protein sequences. The database (read into memory) is separated into a number of jobs, typically the number of threads multiplied by 10. If four threads are used in the above database, for example, each job consists of 168 protein sequences (1/40th of the database). When a thread finishes handling all proteins in the current job, it accepts another job from the queue. This technique is intended to minimize synchronization overhead between threads and minimize CPU idle time.
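
To make that scheme concrete, here’s a rough sketch of the job-queue pattern described above; the job handling is simplified, and this isn’t MyriMatch’s actual code:

```python
import queue
import threading

NUM_THREADS = 4
proteins = [f"protein_{i}" for i in range(6714)]   # stand-in for the yeast database

# Split the database into (threads * 10) jobs, as described above.
num_jobs = NUM_THREADS * 10
job_size = len(proteins) // num_jobs               # roughly 168 sequences per job
jobs = queue.Queue()
for i in range(num_jobs):
    start = i * job_size
    end = len(proteins) if i == num_jobs - 1 else start + job_size
    jobs.put(proteins[start:end])                  # last job absorbs the remainder

def worker():
    while True:
        try:
            job = jobs.get_nowait()
        except queue.Empty:
            return                                 # queue drained; thread exits
        for sequence in job:
            pass                                   # spectrum matching would go here

threads = [threading.Thread(target=worker) for _ in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```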

The most important news for us is that MyriMatch is a widely multithreaded real-world application that we can use with a relevant data set. MyriMatch also offers control over the number of threads used, so we’ve tested with one to four threads. Also, this is a newer version of the MyriMatch code than we’ve used in the past, with a larger spectral collection, so these results aren’t comparable to those in some of our past articles.

Value scores were generated based on the performance of each chip with four threads.

AMD’s victory is short-lived. With 3D rendering tests behind us, we return to our progression of Athlon 64 X2 3600+, Core 2 Duo E4300, Athlon 64 X2 5000+, Athlon 64 X2 5600+, and Core 2 Duo E6600 as the five chips that offer the best value as we climb up the performance scale. There’s inarguably a pattern here.

Naturally, being a thoroughly multithreaded application, MyriMatch gives a sizeable performance advantage to the Core 2 Quad Q6600. The chip towers above its dual-core siblings and AMD’s Quad FX chips, only bested (and slightly so) by Intel’s own QX6700 and QX6800. It’s clear which chip to get if you’re more concerned about performance than value.

STARS Euler3d computational fluid dynamics
Our next benchmark is also a relatively new one for us. Charles O’Neill works in the Computational Aeroservoelasticity Laboratory at Oklahoma State University, and he contacted us recently to suggest we try the computational fluid dynamics (CFD) benchmark based on the STARS Euler3D structural analysis routines developed at CASELab. This benchmark has been available to the public for some time in single-threaded form, but Charles was kind enough to put together a multithreaded version of the benchmark for us with a larger data set. He has also put a web page online with a downloadable version of the multithreaded benchmark, a description, and some results here. (I believe the score you see there at almost 3Hz comes from our eight-core Clovertown test system.)

In this test, the application is basically doing analysis of airflow over an aircraft wing. I will step out of the way and let Charles explain the rest:

The benchmark testcase is the AGARD 445.6 aeroelastic test wing. The wing uses a NACA 65A004 airfoil section and has a panel aspect ratio of 1.65, taper ratio of 0.66, and a quarter-chord sweep angle of 45°. This AGARD wing was tested at the NASA Langley Research Center in the 16-foot Transonic Dynamics Tunnel and is a standard aeroelastic test case used for validation of unsteady, compressible CFD codes.

The CFD grid contains 1.23 million tetrahedral elements and 223 thousand nodes . . . . The benchmark executable advances the Mach 0.50 AGARD flow solution. A benchmark score is reported as a CFD cycle frequency in Hertz.

So the higher the score, the faster the computer. I understand the STARS Euler3D routines are both very floating-point intensive and oftentimes limited by memory bandwidth. Charles has updated the benchmark for us to enable control over the number of threads used. Here’s how our contenders handled the test with different thread counts.

Value scores were generated based on the performance of each chip with four threads.

Much like our 3D rendering tests strongly favored AMD chips, our computational fluid dynamics benchmark visibly gives the advantage to Intel’s lineup. The 3600+, as always, tops the performance per dollar chart, but beyond that, the E4300, E6300, E6400, and E6600 beat their AMD counterparts. Despite the fact that this app clearly sees large benefits from quad-core processors—the Q6600’s score is a testament to that—AMD’s Quad FX offerings are both quashed by the lowly Core 2 Duo E6600.

 

Folding@Home
Next, we have another relatively new addition to our benchmark suite: a slick little Folding@Home benchmark CD created by notfred, one of the members of Team TR, our excellent Folding team. For the unfamiliar, Folding@Home is a distributed computing project created by folks at Stanford University that investigates how proteins work in the human body, in an attempt to better understand diseases like Parkinson’s, Alzheimer’s, and cystic fibrosis. It’s a great way to use your PC’s spare CPU cycles to help advance medical research. I’d encourage you to visit our distributed computing forum and consider joining our team if you haven’t already joined one.

The Folding@Home project uses a number of highly optimized routines to process different types of work units from Stanford’s research projects. The Gromacs core, for instance, uses SSE on Intel processors, 3DNow! on AMD processors, and Altivec on PowerPCs. Overall, Folding@Home should be a great example of real-world scientific computing.

notfred’s Folding Benchmark CD tests the most common work unit types and estimates performance in terms of the points per day that a CPU could earn for a Folding team member. The CD itself is a bootable ISO. The CD boots into Linux, detects the system’s processors and Ethernet adapters, picks up an IP address, and downloads the latest versions of the Folding execution cores from Stanford. It then processes a sample work unit of each type.

On a system with two CPU cores, for instance, the CD spins off a Tinker WU on core 1 and an Amber WU on core 2. When either of those WUs is finished, the benchmark moves on to additional WU types, always keeping both cores occupied with some sort of calculation. Should the benchmark run out of new WUs to test, it simply processes another WU in order to prevent any of the cores from going idle as the others finish. Once all four of the WU types have been tested, the benchmark averages the points per day among them. That points-per-day average is then multiplied by the number of cores on the CPU in order to estimate the total number of points per day that CPU might achieve.
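
In rough pseudo-form, with placeholder points-per-day numbers and the fourth work unit type left generic, the estimate works out like this:

```python
# Hypothetical points-per-day results for each work unit type on one test CPU.
wu_points_per_day = {
    "Tinker": 180.0,
    "Amber": 210.0,
    "Gromacs": 260.0,
    "fourth WU type": 240.0,   # placeholder name and value
}
cores = 2

average_ppd = sum(wu_points_per_day.values()) / len(wu_points_per_day)

# The CD multiplies the per-WU average by the core count to project what the
# whole CPU might earn in a day.
projected_ppd = average_ppd * cores
print(f"Projected points per day: {projected_ppd:.0f}")
```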

This may be a somewhat quirky method of estimating overall performance, but my sense is that it generally ought to work. We’ve discussed some potential reservations about how it works here, for those who are interested. I have included results for each of the individual WU types below, so you can see how the different CPUs perform on each.

The Athlons fare significantly better than their Core 2 counterparts with Tinker and Amber work units, but Intel claws its way back into the picture when we switch to Gromacs. Performance per dollar varies, then, and that makes the value proposition of these CPUs for Folding@Home very much contingent on the kinds of work units Stanford will be issuing in the future.

Our scatter plot considers only total projected points per day, which folds in performance across all of the work unit types. By that metric, AMD’s chips look to be the better options. Things look a little better for Intel on the quad-core front, with the Q6600 just edging out the FX-72 in our performance per dollar chart.

 

Power consumption and efficiency
Our Extech 380803 power meter has the ability to log data, so we can capture power use over a span of time. The meter reads power use at the wall socket, so it incorporates power use from the entire system—the CPU, motherboard, memory, video card, hard drives, and anything else plugged into the power supply unit. (We plugged the computer monitor and speakers into a separate outlet, though.) We measured how each of our test systems used power during a roughly one-minute period, during which time we executed Cinebench’s multithreaded rendering test. All of the systems had their power management features (such as SpeedStep and Cool’n’Quiet) enabled during these tests.

Complete results are available in our Core 2 Extreme QX6800 review here, but for this article, we’re looking at the amount of energy used by each system to render the scene. This method should account for both power use and, to some degree, performance, because shorter render times may lead to less energy consumption.

You’ll notice that we’ve not included the Athlon 64 FX-72 here. That’s because our “simulated” FX-72 CPUs are underclocked versions of faster processors, and we’ve not been able to get Cool’n’Quiet power-saving tech to work when CPU multiplier control is in use. We have included our simulated Core 2 Duo E6600 and E6700, because SpeedStep works fine on the D975XBX2 motherboard alongside underclocking. The simulated processors’ voltage may not be exactly the same as what you’d find on many retail E6600s and E6700s. However, voltage and power use can vary from one chip to the next, since Intel sets voltage individually on each chip at the factory.

The 1/microjoules value in our power efficiency per dollar graph is really 1/(watt-seconds/1,000,000), or 1,000,000 m⁻² kg⁻¹ s². That’s a little obscure, but it quantifies power efficiency in a readable fashion based on the source data, which is in joules. We’re looking at power efficiency per dollar.
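
Spelled out, the calculation behind those bars looks roughly like this; the wattage, render time, and price are placeholders:

```python
# Placeholder numbers: average wall power during the render and render time.
avg_power_watts = 180.0      # hypothetical system draw under load
render_time_s = 220.0        # hypothetical time to finish the Cinebench scene
cpu_price_usd = 224          # e.g., a $224 chip

render_energy_j = avg_power_watts * render_time_s     # watt-seconds are joules

# The efficiency figure described above: 1 / (watt-seconds / 1,000,000),
# so shorter, lower-power renders score higher.
efficiency = 1_000_000 / render_energy_j

efficiency_per_dollar = efficiency / cpu_price_usd
print(f"Render energy: {render_energy_j:.0f} J, "
      f"efficiency: {efficiency:.1f}, per dollar: {efficiency_per_dollar:.3f}")
```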

Despite having nearly the highest render energy of the lot, the X2 3600+’s bargain basement price keeps it atop the power efficiency per dollar standings. The E4300 claims second place, offering higher power efficiency and a lower price than the 4400+. Our bronze medal winner is the E6400, whose power efficiency is quite substantially above the E6300’s, despite the small pricing gap between the two chips.

That said, we couldn’t get away without mentioning the Core 2 Quad Q6600, which tops the power efficiency scale despite its fairly reasonable price tag. If you do 3D rendering work for Al Gore, this is the chip to get. Still, it’s worth pointing out that the Q6600 doesn’t have the lowest idle power consumption (see our full results here), so the X2 3600+ may yet be the friendliest to your power bill if you don’t run compute-intensive tasks very often.

 
Putting it all together
So far, we’ve quantified performance per dollar and relative value across a wide range of applications. We also wanted to come up with an aggregate score that distilled those results into a single scatter plot for easy reference. This aggregate score is by no means the be-all and end-all of processor value propositions, but it summarizes the results we’ve stepped through on the preceding pages. To generate this value score, we averaged the percentage performances of each CPU against our Athlon 64 X2 3600+ baseline. Each application’s performance was weighted equally. We also left the render energy results out of this calculation, since that measure of energy efficiency is quite different from a benchmark score.
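
In rough terms, each chip’s aggregate score is assembled like the sketch below; the results dictionary is a made-up stand-in for our real data, and time-based results would be converted into rates before being fed in:

```python
# Hypothetical results: test name -> {chip: score}, where higher is better.
results = {
    "Oblivion FPS":    {"Athlon 64 X2 3600+": 80.0, "Core 2 Duo E6600": 92.0},
    "WME encode rate": {"Athlon 64 X2 3600+": 1.00, "Core 2 Duo E6600": 1.45},
    "Cinebench score": {"Athlon 64 X2 3600+": 520,  "Core 2 Duo E6600": 700},
}
BASELINE = "Athlon 64 X2 3600+"

def aggregate_score(chip):
    # Express each result as a percentage of the baseline chip's result, then
    # weight every application equally with a plain average.
    percentages = [tests[chip] / tests[BASELINE] * 100 for tests in results.values()]
    return sum(percentages) / len(percentages)

for chip in ("Athlon 64 X2 3600+", "Core 2 Duo E6600"):
    print(chip, round(aggregate_score(chip), 1))
```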

It would be unwise to draw too many conclusions based on this aggregate score alone. (You didn’t skip ahead to this page, did you?) This sort of accumulation doesn’t give us a singularly authoritative number to quantify CPU performance, but it does give us a good idea of how the chips stack up overall across our test suite.

Unsurprisingly, we see the same five chips come out on top as in many of our isolated tests: the 3600+, E4300, 5000+, 5600+, and E6600. We should, however, point out that the 4400+, E6400, and 6000+ sit very close to their respective competitors on our scatter plot, so they’re not necessarily bad choices—they’re just not the best. If you do a lot of 3D rendering and select the 6000+ for your own PC, for instance, you shouldn’t sacrifice much in terms of overall performance compared to the E6600.

Among quad-core processors, the picture is much clearer. The Q6600 is quite obviously the most sensible choice compared to both AMD’s underperforming Quad FX chips and Intel’s overpriced Core 2 Extreme offerings.

Conclusions
After running 16 processors through a lengthy round of testing and analysis, we finally succeeded in unmasking the “final five”—the five chips that offer the best overall performance per dollar across our tests. If you haven’t been paying attention, those are the Athlon 64 X2 3600+, the Core 2 Duo E4300, the Athlon 64 X2 5000+, the Athlon 64 X2 5600+, and the Core 2 Duo E6600.

This discovery shows that repeated price cuts have actually helped AMD’s processor lineup remain competitive on the value front, even in the face of Intel’s own price reductions and the launch of the $113 Core 2 Duo E4300. Whether AMD can keep this up until the rumored November-December release time frame of its Phenom chips remains to be seen, however.

Aside from our final five, Intel’s Core 2 Quad Q6600 receives an honorable mention for being the best overall choice for users more focused on performance than on saving money—though still concerned about both. $530 is a fair amount to spend on a CPU, but the performance divide between the Q6600 and dual-core offerings is large enough to justify that premium for pretty much any performance-conscious user. On the same playing field, AMD’s Quad FX platform fails to impress, and Intel’s quad-core Core 2 Extreme chips are just too expensive. A word of caution, though: a number of applications still don’t benefit from the Q6600’s extra cores at all—games and LAME MP3 encoding come to mind—so this chip isn’t a performance panacea for everyone.

So there you have our first look at the performance per dollar of today’s CPUs. Considering the heated price war currently raging between AMD and Intel, we expect this subject to be one we’ll revisit in the future. 

Comments closed
    • halbhh
    • 12 years ago

    Of dozens of articles about computer value I’ve read over the years, and many dozens more about performance, this is easily the best. Many kudos.

    When someone is choosing a cpu they would best consider likely uses. For games for example, only certain current games need cpu speed above the low end dual cores. Low end dual cores also handle common heavy multitasking of “power” users at businesses except for certain (less common) applications where speed can be all-important. There’s no point in having a 5600+ (also my personal favorite) if you just don’t need that level of speed for what you do. There’s no point in settling for only a q6600 when all possible speed is critically needed and fundamental to immediate return (profits or productivity) for one’s business. Therefore the idea of value winners in segments is useful for those able to figure out how much cpu speed they need and without special needs, and who can also guess how much cpu speed they’ll need for 2-3 more years.

    But often just plain price and performance needs trump all else. Realizing what you can spend is often important for home users. Further complexities have to do with upgrading the cpu in your current motherboard, etc., and it’s all grist for the mill.

    I also enjoyed comments below. Again Kudos to TechRep for this excellent article.

    • d0g_p00p
    • 12 years ago

    Nice work gang!

    • Dposcorp
    • 12 years ago

    I still want to know if this was Cyril’s first big review here.
    He knocked it out of the park.

      • derFunkenstein
      • 12 years ago

      He was obviously under the influence of snake blood. Makes you smarter, that. 😆

    • deepthought86
    • 12 years ago

    EXCELLENT article. This is the kind of analysis you’ll never see at sites like Anandtech. They’ll spend pages and pages going over 3-5% differences in memory/motherboard but won’t use a multitasking test when measuring multi-core CPU’s. And quantifying value? Never

    Great job, TR

    • BobbinThreadbare
    • 12 years ago

    Are we going to get a graphics card version now?

    That would be awesome.

    • green
    • 12 years ago

    i would have liked to have seen a pentium d in there
    mainly because it’s intel’s cheapest offering
    but also just to see how far we’ve come in a year
    (and how much current processors are leaving netburst in the dust)

    • indeego
    • 12 years ago

    yay! exactly the kind of article I love on TR.

    next up, memory, graphics, and MB’s.

I could care less about best performance, I care more about bang/buck/value (when purchasing for personal reasons)

    • Forge
    • 12 years ago

    Dude, you’re getting Sloshdatted!

      • flip-mode
      • 12 years ago

      [sarcasm]why do pointless articles get /.?[/sarcasm]

    • wierdo
    • 12 years ago

    Great article, doing something a little different this time around was nice, and reading the charts was interesting.

    However I didn’t understand how the overall platform cost issue was handled, did I miss the explanation of how differences in cost there were factored in?

    • NotParker
    • 12 years ago

    Google doesn’t buy fancy servers. They buy the cheapest workstation that fits their needs and clusters them together.

    I think if your software is “clusterable”, then the 3600+ is KING!!!

    If not, 50$ or 100$ more for a processor is a good trade-off.

    • leor
    • 12 years ago

    see, that’s why i like this site. different takes from what everyone else is doing.

    • PerfectCr
    • 12 years ago

    I have nothing to add other than to say thank you for this article. Great info, great effort. yet another reason why TR is the best tech site on the net. 🙂

    • HammerSandwich
    • 12 years ago

    The final graph should display price as a percentage of the 3600+’s, just as it does performance.

    • AKing
    • 12 years ago

Very good and relevant article. Some people here seem to believe that overclocking is common practice; on the contrary, it’s very uncommon for people to take that risk. Even among tech-savvy people it’s not that usual to overclock.

    • flip-mode
    • 12 years ago

    Thanks Cyril. Applause for TR. You guys have been burning the candle at both ends lately.

My only big question about not only this review but the coverage of the CPU world since the launch of the AM2 platform: why no coverage of AM2 Opterons? They all come with 1meg cache and their prices dip down to $120ish. 939 Opterons were known for overclocking and for being a great value – is the picture the same for AM2 Opterons? If AM2 Opterons had been included in this test, is there a chance that one of them would have ended up being one of the highest value CPUs?

    Just wondering because I’ve seen zero coverage of AM2 Opterons and yet they’re readily available and very attractively priced.

    Respectfully,
    flip-mode

    • eitje
    • 12 years ago

    i liked how, in the final chart, the processors between intel & amd were kind of bunched together – you could see which was competitive with which other CPU.

    it made for an interesting second read, checking those “pairs” against one another throughout the previous section of the article.

      • eitje
      • 12 years ago

      ps – our foreign friends might not know the size of an american dollar! that completely invalidates these tests! XD

    • deathBOB
    • 12 years ago

    Great article but the gaming section needs some RTS tests. Can we see some Company of Heroes or SupCom info?

      • DreadCthulhu
      • 12 years ago

      TR has included Supreme Commander in other CPU tests, and there really isn’t much spread between processors. At the 1024×768 & default graphics they tested with, the difference in average FPS between the fastest CPU (X6800) and slowest (E6300) was less than 3 FPS, making it a pointless benchmark.

    • Ruiner
    • 12 years ago

    I posted this before, and it’s applicable here again:

    http://www.xbitlabs.com/articles/cpu/display/dualcore-roundup_9.html#sect0

    • jobodaho
    • 12 years ago

    Extremely well written article; it’s nice to see a different style of article as well.

    • snowdog
    • 12 years ago

    Another factor is that I don’t care about saving $20 to drop to an inferior architecture.

    I think at this point everyone knows you can buy a bottom end Intel Core 2 Duo and easily overclock it to obliterate the entire AMD lineup.

    It is about getting the biggest bang for a reasonable buck.

      • Ruiner
      • 12 years ago

      It’s not 20 bucks, but $54 (or almost 2x the cost) just for the cpu alone, 3600×2 vs. 4300. I haven’t even researched the mobo spread.

        • snowdog
        • 12 years ago

        I don’t care if $113 vs Free for the AMD. At that point it is a trivial amount of money in the price of the whole PC. I will pay that incremental amount to have better technology.

        That is $113 spread over the likely 4 or 5 years I will use the computer. That is nothing.

        This seems like a pointless article for an enthusiast site. Naturally the cheapest will give you the best bang for the buck, after that it is all diminishing returns.

        Hey next we can compare bang/buck of dell $299 PC vs enthusiast $1500 build…

          • Ruiner
          • 12 years ago

          Most ‘enthusiasts’ turn over hardware yearly or more. The $1500 platforms are very rare, listening to buzz on most forums.

          For a gamer at least, that extra 60 bucks put into a vid card (something that can’t be as easily overclocked, given the practice of pipeline cutting) could allow a jump from low range to mid range.

            • SPOOFE
            • 12 years ago

            I would disagree about “most”. Some people have a bigger urge to tinker, and they’re always giggling with glee buying increments. Some enthusiasts, judging by the number of forum posts going along the lines of “I’ve been waiting for X to upgrade, but blah I guess I’ll wait longer”, constantly wait for an oh-so-irresistable Sweet Spot before making a dive into hardware investment.

          • flip-mode
          • 12 years ago

          It seems a little overboard and self centered to call the article pointless. And unless you overclock, the whole discussion of which architecture is “better” is moot, because the better chip is the one that performs better for the same money or performs the same for less money. This article isn’t targeted at overclockers, and that’s actually fine since not even a majority of enthusiasts overclock. It’s an excellent guide for people who are building systems for friends/family/clients. In fact, in the introduction to the article it was stated that the article is a response to readers’ requests. So perhaps your opinion is out of sync with the larger enthusiast community? Regardless, no one is forcing anyone to read anything.

          Respectfully
          flip-mode

            • snowdog
            • 12 years ago

            According to previous polls, about 50% here have overclocked their CPUs. Now there could be some self-selection factors in the polling that don’t make it strictly scientific, but I think that there is likely a hefty portion of people who overclock here.

            Even taking overclocking out of the picture, I still consider the article somewhat pointless. All that is needed is performance, as performance/dollar is obvious for anyone who made it past elementary school. Also, prices fluctuate like crazy and don’t even apply to people outside the USA, where pricing structures often differ.

            I am always looking at bang/buck by checking performance reviews against pricing at outlets in my country (usually my local shop which I prefer to mail order).

            And even less is forcing you to read my posts.

          • jinjuku
          • 12 years ago

          Hello McFly… Performance for the $$ spent is performance for the $$ spent, is performance for the $$ spent.

          What don’t you get about that? I don’t give a shit if the processor is built out of toilet paper as long as it performs…

          • tigen
          • 12 years ago

          4 or 5 years? If you care about performance then you won’t be using it for anything serious that far out. And if you don’t, then you might as well go for the cheap thing (and the mobos are cheaper also, a double whammy of cheapness.)

        • Ruiner
        • 12 years ago

        Replying to myself, after doing some cursory pricewatching at newegg, the c2d mobos are around 20USD pricier than the AM2s.

      • mesyn191
      • 12 years ago

      LOL. For most people all that matters is the bang vs. buck ratio, and this review does a pretty good job of showing the X2 3600 as the best all around. Also, why would you compare stock vs. OC’d? You do know those X2 3600’s can be OC’d to around 3GHz with stock cooling, right? Some people are getting quite a bit higher too.

        • snowdog
        • 12 years ago

        “You do know those X2 3600’s can be OC’d to around 3Ghz with stock cooling right? Some people are getting quite a bit higher too.”

        From what I have read, most run into the wall before 3GHz.

        You do know that C2Ds both clock higher and are faster per clock as well? The combo results in a double whammy of extra performance.

          • Forge
          • 12 years ago

          Agreed. I haven’t had any AM2s to play with, but both of the C2Ds I’ve abused hit 3GHz with ease, and my e6600 is 24/7 stable at 3.2 with very poor cooling (room has no AC). The AM2s I’ve heard of tend to hit 2.6-2.8, with outliers reaching 3GHz but not a lot further. C2Ds seem to average 3.0-3.3, with outliers reaching 3.6-3.8.

            • flip-mode
            • 12 years ago

            This article isn’t targeting the overclocking crowd. Why would those who don’t overclock – 9X% of users – base their purchase on overclocking results? When you buy a car do you make the purchase based on how fast it’ll do a 1/4 mile? Personally I look for fuel efficiency and whether or not it has the LATCH system, among several other things.

            • Forge
            • 12 years ago

            I wasn’t commenting on the article. I wasn’t suggesting that Damage should overclock or account for overclockability. I was simply commenting on someone else’s observation on overclocking headroom.

            If we’re going to account for OCing, suddenly the E4300, the E6600 and the X2 3600+ will shoot out way ahead of all the others, since the multipliers/cache/etc suddenly matter in a whole other way.

            • Ruiner
            • 12 years ago

            My own 3600×2 just missed the 3GHz barrier with watercooling, but I might hit it with some tweaking.

            I just ordered another similar setup for my sis….OEM 3600×2, mATX 6100 mobo, 2gigs crucial, and a coolermaster HSF for $198 (including a $10 MIR).
            Awesome deal even if I only get 2.6 out of it.

          • mesyn191
          • 12 years ago

          Sure those C2D’s do tend to OC a little better, but those X2’s aren’t nearly as crappy as you seem to make them out to be and are in fact more than enough for your avg. Joe Sixpack or for that matter many gamers. When it comes to bang vs. buck they can’t be beat. Quite a few people at XS.org seem to be getting around 3Ghz out of those X2’s, YMMV when OC’ing of course…

    • Hattig
    • 12 years ago

    Good article.

    I think it confirms what most people have suspected – that AMD and Intel are both good value at the lower end, and “Extreme” CPUs from both companies are a waste of money unless they can literally save you minutes, and the sum of those minutes is worth $$$ to you. I.e., not for a general purpose computer.

    I think it is fair to exclude overclocking right now – there’s enough article as it is, we don’t want to overcomplicate it. Overclocking would probably benefit the low-end Intel products more though, but don’t forget that not everyone overclocks the same… and lots more people prefer to silence their machines than overclock these days.

    • Ruiner
    • 12 years ago

    In recent experience the Intel ‘street’ prices hold closer to the ‘list’ than the AMD’s, at least for the lowest tiers.

    The cheapest I’ve seen the 4300 is $114 but the 3600×2 is around $59 (it’s an OEM, but most overclockers don’t use stock cooling).

    • snowdog
    • 12 years ago

    The only bang for the buck interesting to me is after over-clocking.

    I have been building my computers since the 80486 days (when I switched from Amigas to PC’s) and I have NEVER run a CPU at stock speed, but I don’t tend to invest in super cooling solutions. Though my next build I will probably get a slightly better air solution than stock.

    So that is the only thing I am really interested in. What is the simple overclock for most of these procs? What is the dead simple, drop-in overclock speed without needing liquid cooling or super expensive RAM?

    Because bang/buck at stock speed is pretty much dead obvious and bloody useless to a large amount of us who will never run stock speed.

      • blockhead
      • 12 years ago

      Overclocking is hard to quantify since:
      1) It would have to be highly statistical, based on many users’ results with the same chips
      2) It would be highly related to cooling (as you said, you don’t invest in super cooling but others do)
      3) It would be related to chipset/motherboard

      It would be interesting, however, to see a running poll/database of each cpu model and the successful overclocking obtained. The middle of the bell curve could then be used as a typical overclock to add to bang/buck graphs like in this review. Maybe something like this exists?

      EDIT: Great review as is though!

    • Jigar
    • 12 years ago

    Nice article Sir.. Good work 🙂

    • beny
    • 12 years ago

    Great article! Keep up the good work.

    • baobab
    • 12 years ago

    This was an interesting read and a well written article.

    However, I think there is a flaw in the analysis.
    We cannot compare performance per dollar by looking only at the cost of the CPU. We need to take into account the other components of the system (leaving aside the monitor, speakers, and keyboard); we have to include the cost of the motherboard and the memory at a minimum. For games’ performance we also have to include the cost of the GPU, though if we benchmark them at low resolutions we should perhaps use a cheap/mainstream card to avoid skewing the numbers in the other direction.

    The problem is that, as performance/$ is defined in this article, it corresponds to the slope of the line through the origin and each of the scatter points. By considering only the cost of the processor, the cheapest CPUs receive an unrealistically high score. Performance is a function of more than just the CPU, and we simply cannot have a CPU running benchmarks without a motherboard, some RAM, and a power supply at a minimum.

    In these tests you used similarly priced components with all CPUs. By adding this fixed cost to all systems, all your scatter graphs would slide somewhat to the right on the x axis. The value results would be a bit more interesting in that case, and I think the X2 3600+ would not win that many tests anymore. If we assume that our system sans CPU costs $400 and we can add either a $100 or a $200 processor, then the cost of the second system is only 20% higher, not double the price of the first system. If it gets more than 20% better performance, then it is the better value.

    Of course, you could use cheaper components with a cheaper CPU to reflect how they are used in practice. Therefore, I think this analysis works best to analyze full systems. And perhaps a very good use of this analysis would be to compare the performance/$ value of the systems recommended by TR at various price points.

      • FireGryphon
      • 12 years ago

      If you read the article, you'd know that TR considered more than just the cost of the processor. In fact, they mentioned the cost of the entire system many times throughout the article. Perhaps rereading the article will address your concerns?

      We don't need to address different GPUs, since GPU performance is not the point of the article; the CPUs' performance in games is!

      In the article, TR analyzes the value of the CPUs. System costs like motherboards were considered, however, since no matter what you're doing with the processor, you'll need the mobo. That should have been clear from the text.

        • baobab
        • 12 years ago

        @64
        I’ve read the article. I know the author commented on how performance depends on more than just the CPU.
        There is little variance in the cost of the other components used in the test systems (which was also my point), but if you divide the performance score by only the cost of the CPU, you get skewed results.

        Assume you have two systems that differ only by the CPU cost, let’s say P1 and P2, while all the other components have a fixed cost F.
        These two systems get scores S1 and S2 in some benchmark.

        If you compare S1/P1 and S2/P2, as in the article, you may get a different ordering than if you compare S1/(F+P1) and S2/(F+P2). It's simple math; I hope it's obvious to you.

        By removing the fixed cost of a system, the results are skewed in favor of the cheapest CPUs, and they do not necessarily show the true value.
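
        A minimal sketch of that point, assuming a hypothetical fixed platform cost F and made-up CPU prices and scores: ranking by S/P can differ from ranking by S/(F+P).

        ```python
        # Sketch of the comment above: ranking by score/CPU-price can differ
        # from ranking by score/(platform + CPU) once a fixed platform cost
        # is included. Prices and scores are illustrative placeholders only.
        F = 400  # hypothetical cost of motherboard, RAM, PSU, etc. shared by both builds

        builds = [
            ("cheap CPU", 100, 100),   # (label, CPU price P, benchmark score S)
            ("faster CPU", 200, 130),
        ]

        for label, P, S in builds:
            print(f"{label}: S/P = {S / P:.2f}, S/(F+P) = {S / (F + P):.3f}")

        # cheap CPU : S/P = 1.00, S/(F+P) = 0.200
        # faster CPU: S/P = 0.65, S/(F+P) = 0.217  -> the ordering flips once F is included
        ```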

          • FireGryphon
          • 12 years ago


            • Marsolin
            • 12 years ago

            I didn’t make the original comment, but I agree with it. When only CPU price is considered it becomes a factor that outweighs everything else, but it isn’t consistent with what a user will be spending.

            A $1000 processor must be 10X faster than a $100 processor to have the same performance per dollar, but if I were to buy a $1000 system and then add the processor, the results change dramatically.

            We would now be comparing an $1100 system with a $2000 system, and a 2X performance advantage for the more expensive processor also makes it the most price-efficient. Performance then becomes a more realistic part of the equation and matches what a user would really be spending.

            Power draw in the article was appropriately measured at a system level. Pricing should be done that way too.

            Chad
            http://linuxappfinder.com

      • continuum
      • 12 years ago

      I think 'artificially limited' rather than 'flawed' would be a better term, simply because people have very personal definitions of what THEY need for their individual systems.

      I personally have very little use for such a value processor, so the best bang/buck per these charts is pretty much useless to me, but I'm not willing to spend huge piles of money either, so it gets a little more fuzzy. Especially for overclockers, too…

      e.g. quad-core for most is limited in utility and actual value, but especially on July 22 when it goes to $266… well, that’s a price cheap enough to be hard to refuse, period.

        • FireGryphon
        • 12 years ago

        I thought that about the 3600+, too. The E6600 or 6000+ seem like the best choices to me. They’re as fast as you can get before you start burning money on very little relative performance gain.

    • Whybee
    • 12 years ago

    An excellent article, but as an economist I would like to make a few remarks to put it in a larger context.

    Technically speaking, you offer a cost-benefit analysis of using a computer, with CPU price as a proxy for cost and benchmark performance as a proxy for benefits.

    As far as the benefits are concerned, benchmark performance is probably the best proxy you can have, since benefits vary significantly from one user to another. For the cost, however, there are some better choices.

    First of all, let's say that your analysis would be perfectly applicable to an upgrade situation where you replace only the CPU. If, however, you buy a new system, you should definitely consider the system cost. As suggested in #39, you could, for example, evaluate price-performance for three types of systems: value, mainstream, and performance. In this case you will most likely have different winners in each category, which makes perfect sense.

    If you wanted to find an even better measure of cost, you would have to consider the life-cycle cost (LCC), and big companies will often do just that when they evaluate their IT purchasing options. But in our case LCC would probably be overkill, so the system's initial cost is the best option for the cost-benefit analysis.

    Once again, this is not to criticize the choices you made in this article, but to suggest how your future articles can be made even better.

      • FireGryphon
      • 12 years ago

      The value proposition for different segments of the system market could be found by looking at different regions of the graph, couldn't it? For example, we can observe groupings of certain processors across multiple tests and divide them into regions. We can assume that the 3600+ is the economy sector, the E4300 to E6700 is the midrange, and the quad cores are the high end. We split up the graph that way and are then able to choose the best value proposition in each region.

      Each region, too, will have different system requirements. High-end systems will be paired with more expensive hardware than economy systems, but since we can assume that any high-end user will use more expensive parts and any economy user will use less expensive parts, we can disregard other system components, since all processors in a particular region of the graph will be put into comparable systems.

      Is that what you were going for, or did I miss the point?

        • Whybee
        • 12 years ago

        No, actually the idea is to use the cost of typical systems (however defined) for the price-performance calculation. For example, you take a value Socket AM2 system with a price tag of about $300 (without CPU) and put in different CPUs. Then you measure the price-performance using the whole system cost (including the CPU), like this:

        (model / performance / cost / price-performance)

        3600+ / 100 (points) / $373 / 3.73 ($ per point)
        4400+ / 120 / 421 / 3.50
        5000+ / 135 / 467 / 3.46
        5600+ / 145 / 488 / 3.37
        6000+ / 155 / 541 / 3.60

        This table shows that for a value AM2 system, the 5600+ offers the best price-performance for the whole system. Of course, in this table the performance is based on the clock speed. In a real value system, performance will not scale as well, because it will be bottlenecked by other components. So if you plug in actual benchmark data, you will be able to show (almost scientifically) which CPU gives the best system price-performance and can therefore be called the best 'value' CPU.
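
        To make the "plug in actual benchmark data" step concrete, here is a minimal sketch that computes the same whole-system cost-per-point figure; the $300 platform cost, CPU prices, and scores are placeholders chosen to be consistent with the illustrative table above, not measurements:

        ```python
        # Sketch of the whole-system price-performance table: a fixed platform
        # cost plus each CPU's price, divided by its benchmark score.
        # Platform cost, CPU prices, and scores are placeholder figures.
        PLATFORM_COST = 300  # assumed cost of a value AM2 system without the CPU

        cpus = [              # (model, CPU price, benchmark score)
            ("X2 3600+", 73, 100),
            ("X2 5000+", 167, 135),
            ("X2 5600+", 188, 145),
        ]

        # Print from best to worst system-level value (lowest $ per point first).
        for model, price, score in sorted(cpus, key=lambda c: (PLATFORM_COST + c[1]) / c[2]):
            total = PLATFORM_COST + price
            print(f"{model}: ${total} / {score} pts = ${total / score:.2f} per point")
        ```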

          • flip-mode
          • 12 years ago

          That just makes a hell of a lot of sense to me.

          • Bensam123
          • 12 years ago

          Thought this article was just on the CPUs and not the overall systems?

          I would just take Price / FPS = Dollar(s) per FPS.

          It’s what I do for hard drives… Maybe I’m just a simple guy.

          • halbhh
          • 12 years ago

          Of dozens of articles about computer value I’ve read over the years, and many dozens more about performance, this is easily the best. Many kudos.

          When someone is choosing a CPU, they would do best to consider their likely uses. For games, for example, only certain current games need CPU speed above the low-end dual cores. Low-end dual cores also handle the common heavy multitasking of "power" users at businesses, except for certain (less common) applications where speed can be all-important. There's no point in having a 5600+ (also my personal favorite) if you just don't need that level of speed for what you do. There's no point in settling for only a Q6600 when all possible speed is critically needed and fundamental to immediate return (profits or productivity) for one's business. So the idea of value winners in segments is useful for those who can figure out how much CPU speed they need, have no special requirements, and can guess how much they'll need for 2-3 more years. But often plain price and performance needs trump all else. Realizing what you can spend is often important for home users. Further complexities have to do with upgrading the CPU in your current motherboard, etc., and it's all grist for the mill.

          I also enjoyed comments below. Again Kudos to TechRep for this excellent article.

    • TrptJim
    • 12 years ago

    I’d like to see how the chart would look if overclocking (safe, with default voltage) were taken into account.

      • Spotpuff
      • 12 years ago

      Ditto. The E4300 can be clocked to over 3GHz, and at that point it usually beats the X6800.

      • oldDummy
      • 12 years ago

      By moving each CPU's point vertically up to its highest safe OC, you would get a viable view of an OC chart. The price remains the same, but each point would shift by a different amount depending on that CPU's OC headroom. The EE CPUs would fare the worst in such a chart, IMO.
      OD

      • derFunkenstein
      • 12 years ago

      Probably unrealistic because OC’ing varies so heavily based on the samples of the RAM, CPU, and motherboard on hand.

    • Mr Bill
    • 12 years ago

    That was just, excellent.

    • thecoldanddarkone
    • 12 years ago

    I don’t mind this review, but what about
    e2140
    e2160
    e4400
    e6320
    e6420

    for completeness.
    Some of these are newer, and some have been out for a while (so why use the E6300 or E6400?).

      • eitje
      • 12 years ago

      money, time, and availability.

      • Shinare
      • 12 years ago

      I asked the same question, but it no longer seems to be here for some reason. I just bought an E6420; it would have been nice to see if that extra cache made any difference over the E6400.

        • Damage
        • 12 years ago

        I answered your question, but it’s now on page 2. Thanks for the goofy accusation, though! It’s blearing.

          • Shinare
          • 12 years ago

          *lol* sorry Damage, never seen (or just not noticed) that there could be multiple pages on the front page posts. I bow and beg forgiveness. 🙂

            • mesyn191
            • 12 years ago

            The extra cache hardly makes any difference, BTW; it's something like a ~5% average increase in performance.

            • Forge
            • 12 years ago

            In my experience, when I switched from my e4300 at 3GHz to my e6600 at 3GHz, it was towards the 10% end of things far more often than not. The L2 cache was the only thing that changed.

            • mesyn191
            • 12 years ago

            Performance varies a bit depending on what you run, some apps do benefit more from more cache vs. other apps, but on avg. even 5% is an optimistic performance increase for doubling the cache on C2D.

            http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2795&p=4

            • wierdo
            • 12 years ago

            Also higher clock speeds can affect the usefulness of having more cache, because the relative disparity between the processor speeds and memory subsystem’s speeds grows as processor speeds go up.

          • lyc
          • 12 years ago

          accusation?!

      • Forge
      • 12 years ago

      Feel free to purchase any or all of the above and post them off to Damage. I’m sure he’ll get them added if you do.

        • thecoldanddarkone
        • 12 years ago

        This was a review about $ per performance. If it's incomplete, it's incomplete. Don't get me wrong, I think it's a good idea. As for the E6320/E6420, those could have been approximated with a notation. I understand them not having the dual-core Pentiums, as that was explained.

        It’s a review, it’s decent, but as always they can get better.

    • Tarx
    • 12 years ago

    I'd also like to see comparisons that include system cost (CPU, MB, RAM, GPU, etc.).
    System cost could be split into the usual three categories, i.e. value, mainstream, and performance.
    I suspect that would probably push CPUs like the E6600 and 5600+ to the top…

    • 5150
    • 12 years ago

    Brilliant!

    • FireGryphon
    • 12 years ago

    I thoroughly enjoyed this informative article, and when I got to the end, I thought to myself, “this article fits right in with the innovative, high quality articles I’ve come to expect to find here at Tech Report.”

    Whenever I buy processors, I do a little math and build tables with score/dollar results on them (used to be MHz per dollar). These graphs are much more thorough, and I’m a little surprised at some of the results. The AMD chips are much better values than I thought.

    • Convert
    • 12 years ago

    Well done.

    I also have to say the Q6600 is really where it’s at if you can take advantage of it. Not only is it decent simply based on the CPU price but you save quite a bit of money as you don’t have to buy a second system to house a second dual core CPU. That is what it really comes down to for quads.

    • gyrfalcon1
    • 12 years ago

    Awesome read. This is why I love TR.

    • DrDillyBar
    • 12 years ago

    An interesting way to look at things.
    My E6420 is looking pretty good in all of that.

    • blastdoor
    • 12 years ago

    I think this article is a very good idea. It’s nice to see an original approach to reviewing processors.

    I cannot think of any better way to do this article. But I do have a comment —

    There is another way to look at the value of CPUs, and that is in terms of the value of the time saved by using a faster CPU. It would probably be very hard to reflect this in an article, since everyone values their time differently. However, it is something for people to keep in mind when choosing a processor. If you value your time at $50/hour, and buying a $500 CPU saves you 10 hours relative to a $100 CPU, then the more expensive CPU is worth it.
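
    A minimal sketch of that break-even idea, using the hypothetical numbers from the comment above ($50/hour, 10 hours saved, a $100 vs. a $500 CPU):

    ```python
    # Sketch of the time-value argument: a pricier CPU pays off once the value
    # of the hours it saves exceeds its price premium over the cheaper chip.
    def upgrade_pays_off(cheap_price, fast_price, hours_saved, hourly_rate):
        premium = fast_price - cheap_price
        return hours_saved * hourly_rate >= premium

    # $100 vs. $500 CPU, 10 hours saved, time valued at $50/hour:
    print(upgrade_pays_off(100, 500, 10, 50))  # True: $500 of time saved vs. a $400 premium
    ```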

    • herothezero
    • 12 years ago

    Great article. This is why I don’t bother with other hardware enthusiast sites anymore–that, and it’s one of the few sites that’s actually still readable.

    • dragmor
    • 12 years ago

    Great review; it's nice to see TR still comes up with the goods.

    For the next one, could you do this for an "office-style PC" review, i.e. a mobo with integrated graphics, etc., maybe 690G vs. 965G, as this is more likely to be what the cheaper CPUs are used for.

    • Drewstre
    • 12 years ago

    THE Tech Report has bubbled up to the top spot in my list of respectable tech websites, and efforts like this reassure me that that opinion is well deserved. This is precisely the kind of information I've been looking for, but have been too…

    • provoko
    • 12 years ago

    Nice article, thanks.

    • Dposcorp
    • 12 years ago

    Was this Cyril's first full review?
    Awesome job to him and the entire TR crew.

    Well done indeed.

    Something that doesn't even have to be graphed is overclocking.

    Buy your 3600+ / E4300 and a little better board and ram.

    Overclock when extra juice is needed.

    “Kick it up a notch………….BAAM!!!!!!!!!!!!!!” (Emeril FTW)

    Heck, with software-based overclocking, no reboot is even needed.

    It's funny, 'cause I have an AM2 3600+ here and was looking at some Intel stuff today.

    I think I'll stick to my low-cost plan and pick up a Biostar mATX board with HDMI and run it stock unless I need to overclock, which Biostar mATX boards do well.

    Now, do I get the AMD 690G or Nvidia 7050 based model?

    • Fearless Leader
    • 12 years ago

    I think this type of review is the start of a good trend in CPU and possibly GPU reviews.

    Keep it up.

      • ew
      • 12 years ago

      I agree! Those scatter plots are great.

    • IntelMole
    • 12 years ago

    I think the value proposition of the Q6600 is understated. I understand that conjecture is bad in a conclusion, but surely with the increase in multi-threaded code that should happen in the next few years, this is possibly just as wise an investment as the better dual cores?

    My argument goes something like this:
    – Right now, you get better performance, just not quite as much value.
    – For a lot of business applications, system downtime / instability / even the risk of instability is money. Lots of it. Not having to move to a whole new computer (upgrades are so pre-millennium) for a year or two longer is possibly worth a lot of money to some people.
    – This suggests to me that a lot of people will buy a fast dual core but will stick with it for that year or two longer anyway, and miss out on all the quad-threaded performance they could have had.

    If I had the money now, that seems like good odds that I’d come out ahead. To me at least.

      • HighTech4US
      • 12 years ago

      Well Stated. In July the Q6600 will be priced at $273 which is quite a value. I will be purchasing one as soon as that price appears.

        • IntelMole
        • 12 years ago

        It’d have been stated even better if I could use the right words first time and didn’t have to edit it (understand vs. understated). I blame lack of sleep 😀

      • Nelliesboo
      • 12 years ago

      What computer are you on right now? (specs) Just wondering if that upgrade ever happened…

        • IntelMole
        • 12 years ago

        This is my parents' laptop, a Celeron 1.5 with 512MB of RAM.

        My laptop is what you’re probably referring to, which is still a P4 2.8GHz, 512MB RAM.

        Upgrade cycles never really happen in the real world anyway, let alone now, when motherboard sockets and compatibility change so often, which is why I believe the value of a quad-core processor for applications very sensitive to downtime (basically any "work PC", then) is understated.

      • clone
      • 12 years ago

      “I think the value proposition of the Q6600 is understated. I understand that conjecture is bad in a conclusion, but surely with the increase in multi-threaded code that should happen in the next few years, this is possibly just as wise an investment as the better dual cores?”

      You could be right, but if history is any indicator, the Q6600 will likely be worthless in a few years. Case in point: jump back four years in technology at any point in the history of computing. The Pentium 4 and anything on Socket 939 are garbage today. Don't get me wrong, in some ways those CPUs are still OK, but they're nothing compared to Conroe. Multithreading will be the death knell of Intel's current quad cores due to limitations within the architecture, right at the time when multithreaded applications will be killing CPUs. As has always been the case, future-proofing simply isn't an option for enthusiasts.

      For general computing, sure, a Socket A or P4 system with 2GB of RAM will still be fine, but gaming is a different animal, and in 3-5 years multithreading, if they get it to work, is going to be killing today's dual- and quad-core CPUs.

      If you're on a budget, the 3600 X2 can't be beat; if you want performance, overclock the crap out of it. It's a $73 CPU, so if it craps out in three years, who cares? The budget Intel Conroes don't clock as well as the E6xxx chips, but while they can reach 5600+ AMD performance, so can the 3600 X2, and for half the price. I'm currently using an E6600, which cost just shy of $400 at the time, but had I known I could get a 3600 X2 for $73, I would have simply clocked the crap out of it with my water-cooling setup.

        • IntelMole
        • 12 years ago

        Your argument that I may be right, but am most likely wrong is based on the premise of a massive multithreading explosion in the 3-5 year timeframe.

        I simply do not believe this will happen to the extent you are suggesting. Witness how long dual-core processors have been around, and how long it has taken us to make good use of two cores for programs that are not embarrassingly parallel. Making good use of further cores takes exponentially more effort.

        (As an anecdote to support this theory, a friend of mine recently purchased a dual-core laptop, and upon browsing Windows’ Task Manager was surprised to find that a program had crashed and was taking up 100% CPU time on one core. He hadn’t even noticed until then – which shows how few dual-threaded programs he runs.)

        I believe someone else here suggested that the practical limit for most code in the foreseeable future is around 4-8 threads, which means that the quad-core chip here is probably still quite viable for some time.

        As to your point about the difference five years makes, I implore you to consider the difference between those older chips and one made a decade ago. The difference there (in just about any metric you wish to consider) is pretty massive in relation to the progress made in the previous five years.

        I believe this progress will continue to slow, and that the list of applications where today's processors will remain “good enough” is such that the progress made in the next 5 years will be insignificant for most people. I myself completed a university degree recently on little more than a Pentium 4M 2.8GHz processor, which is now approaching 3 1/2 years old. Very rarely do I run into its limits even now – RAM is only just now becoming a concern, and perhaps the occasional long wait doing image processing or some other task in Matlab.

        It will remain “good enough” for me, for at least another six months to a year, and would probably be so for any office worker until we start viewing HD movies. I think even some quicktime HD clips can be decoded in real time on it though.

          • clone
          • 12 years ago

          I'm not arguing for or against; I suspect as well that development is going to slow on all fronts, as the idea of dual cores, however nice and thrifty at multitasking, is simply an admission that they couldn't go much faster with a single core.

          Intel chose that race and only walked away when it couldn't get any faster without facing expensive issues.

          That said, the move to 45nm and then to 32nm seems to be an aggressive one on Intel's side, and given that almost all of the current Conroes showed promise of hitting 4GHz from the day they were released, with little effort and no need for exotic cooling, I doubt anything out today will be within 50% of the next-gen offerings.

          Intel seems to be playing harder ball than usual and is attempting to bury AMD back in the old 10-15% range of market share, which it may very well achieve, if it doesn't kill them outright.

          All that said, $73 for a dual core that an enthusiast can run like a $300 chip is a more convincing proposition than the hope of performance 3-5 years from now from a current-gen CPU.

          Additionally, I'm not sure future-proofing is even required when a computer with 2 or 4GB of RAM and a mid-range CPU and motherboard can be had for $250… the question becomes, why bother?

            • IntelMole
            • 12 years ago

            Because I’m approaching this from the business workstation ideology. Someone who needs a high-performance workstation, whose applications range from completely serial, to lightly multithreaded, possibly even a couple that might scale (just about) to 8 threads effectively, at which point Amdahl’s law starts kicking in.

            These people have slightly looser budgets compared to your average enthusiast, because the computer pays for itself with increased productivity. Therefore, they may justify the added expense for less than proportional gain now.

            Conversely, a week's downtime – a day or two to build it, and then some further testing over the next week, plus a bit of buffer for any problems (I don't think that's unreasonable for a work machine) – to ensure that their new machine is stable enough for work is probably worth more to them than to the average enthusiast. Probably that couple of hundred dollars 🙂

            So they can put off that loss for another year maybe, and end up with a faster machine the next year for the same money.

            • Delta9
            • 12 years ago

            Given the time frame it will take for software to really take advantage of a quad-core CPU, wouldn't this make the AMD or dual-core Intel chip an ideal solution for today? In 12-18 months you can drop a nice cheap quad-core chip into the same box with the same thermal envelope (AMD claims), then bump up the amount of system memory if needed. This would minimize your downtime (the machine has a much longer shelf life, so no reinstall and setup) and allow you to double up on cores. If $279 is your CPU budget, get a cheap dual core now, because at the rate manufacturers are giving away silicon, your 6600 will be a $100 chip twelve to 18 months from now.

      • NotParker
      • 12 years ago

      If you can buy 2 or even 3 X2 3600+ PC’s for the price of a quad equipped PC, you can spend the savings on other things that will also improve productivity — such as 22″ monitors.

        • IntelMole
        • 12 years ago

        Ah, point. Though this is bringing other factors into consideration outside of the pure value for money of the processor in question.

        Of course, there’s a diminished return on that second monitor if you can’t find enough processor power to utilise it well.

        Damn multi-variable equations.

          • flip-mode
          • 12 years ago

          As a long-time dual-monitor user, CPU power or the lack thereof has never been a factor when it came to putting both monitors to good use.

    • willyolio
    • 12 years ago

    It's pretty clear now… while AMD is pretty far from the performance crown, they're definitely staying competitive in value.

    • gratuitous
    • 12 years ago
      • mesyn191
      • 12 years ago

      Bear in mind that even dual-core chips aren't being properly utilized at this point and probably won't be for a while yet; by the time games come out that start to do so, you'll be looking to upgrade to 8/16-core chips anyway…

        • packfan_dave
        • 12 years ago

        I’ve said before and I’ll say again that quad-core is probably where things top out for mainstream desktops and notebooks for quite some time (quite possibly as long as single-core ruled the roost). Very, very few general-purpose apps scale effectively beyond two cores, let alone four, and going beyond that is going to require radical changes in programming techniques.

          • mesyn191
          • 12 years ago

          There was a headline a while back on the front page that mentioned a breakthrough in parallel computing for compiling general-purpose code. Obviously lots of things aren't going to benefit from more than 2 or 4 cores, or even more than 1, but for the few things in the consumer marketplace that do (i.e. games…) you will see some practical benefit and thus have a group to sell to.

      • Forge
      • 12 years ago


      • Peldor
      • 12 years ago

      Honestly, the gaming tests are overly CPU heavy. It makes the article more interesting, but less realistic. People aren’t building or buying systems to play at 800×600. Nobody turns off hardware skinning. That’s why you buy dedicated hardware (the video card) in the first place.

      Cheap CPU + as much GPU as you can afford. It’s as simple as that.

    • Forge
    • 12 years ago

    Now that I think about it, another money/mouth convergence is possible… Just do it all yourself!

    You have the performance figures and the model info from TR’s existing reviews, all you need to do is go hit pricewatch or Newegg or whatever and do some division.

    Of course, TR still has a monopoly on the high-end graphing skills (thank you again for NOT going to those stupid animated Flash charts some sites have started (ab)using), but the raw numbers aren’t horribly difficult to compute.

    I'd imagine most of us have been doing a limited form of this in our heads for quite some time anyway. I know I always look at the CPU pricing, see what defines the top end of what I can afford, and then compare that subset. Weighting based on performance/$ is just a bit farther along that line of logic.

      • mesyn191
      • 12 years ago

      But..but I like it when you do all my work for me!!

      Good review guys.

    • Forge
    • 12 years ago

    Money, mouth, converge.

    If you want to see more CPUs in a roundup, see more dollars donated to TR. Scott isn't in this just to lose money and gain karma; there are expenses to consider and defray.

    If nothing else, I’d imagine that input from regular contributors figures heavily, and kibbitzing from us groundlings doesn’t matter much to Scott and the rest of the management.

    • Plinth
    • 12 years ago

    Yay! I like this. This is positive reinforcement so more of these will come from you guys in the future 🙂

    • nonegatives
    • 12 years ago

    Nice job in showing real “bang for the buck” for such a wide matrix of options. I felt bad starting off with a lowly 3600+ to get into an AM2 platform, but at $60 and running at 2.28GHz, it's not such a bad deal after all. Overclocking and good sales would turn those pinpoints into skewed ovals with lots of overlap. Now THAT would be an interesting plot! (I'm sure you have plenty of free time to slap that together.)

    • Hotdog
    • 12 years ago

    I too would have loved to see the E2xxx series included, especially since they're well under 100 bucks now and close to the X2 3600+.

    That being said, though, the results were interesting, if not predictable: Cost scales more than performance.

    Edit: Should have replied to the first post, but I’m too mentally slow to use a forum.

      • coldpower27
      • 12 years ago

      I would love to see how well the Pentium E21xx series would do with this value/dollar plotting method; no doubt it would likely fare close to the 3600+ and E4300.

        • Damage
        • 12 years ago

        Guys, FWIW, we asked Intel for an E2160 review sample, but they refused. I even tried to buy one a while back, but they weren’t in stock anywhere yet.

          • Hotdog
          • 12 years ago

          Oh, I understand completely. I have no idea what kind of time frame you guys are working with, how long a review actually takes, etc., etc. The effort you put into this review was obviously huge, and doing some mental math for a “slightly slower E4300,” we can draw our own conclusions about what an E2xxx would be like, hehe.

    • Shinare
    • 12 years ago

    The lack of any Exx20’s seems a little blearing… Thanks for the effort tho, looks great! I just wish the E6420 I just bought was somewhere on the graph.

      • Damage
      • 12 years ago

      I don’t know what blearing is, but the absence of E6x20 processors is simply an artifact of the fact we haven’t yet tested those CPUs. I would love to have the resources to do that, and I would also like a Ferrari 599GTB. I don’t have either.

      If this little experiment turns out to be a success, perhaps we can do full-body searches on the AMD and Intel lineups at some point in the future. For this first effort, the examples we’ve tested will have to suffice.

        • nonegatives
        • 12 years ago

        “It is so choice. If you have the means, I highly recommend picking one up.”
        -FBDO

        • Shinare
        • 12 years ago

          Oops, misspelled it; it's blaring, as in completely obvious. I guess what I really should have said was that it was a vacuum in the statistics that needs to be filled. No "goofy" accusation implied. Like I said, it's a good article; it fills a gap that has hardly been touched on, and, as a complete cheapass, one I have called for in the past. 🙂

          • Forge
          • 12 years ago

          I think the word you’re searching for is ‘glaring’. While blaring could technically be used and be valid, glaring typically implies obviousness while blaring typically implies excessive volume.

          So unless you ‘read’ the review via text-to-speech and the volume of your speakers was an issue….
