
AMD’s Socket AM3 Phenom II processors

It seems like only last month, we were reviewing the first Socket AM2+ versions of the Phenom II processor.

Perhaps because, well, we were.

Yet here we are a month later, and AMD has produced a new revision of the Phenom II capable of working with Socket AM3-style motherboards and DDR3 memory. Hard to keep up sometimes, innit?

Fortunately, although the change is no small accomplishment for AMD, it is relatively simple in the grand scheme of things. The main modification is to the Phenom II’s memory controller, which now supports DDR3 memory. Another happy consequence of the new silicon revision is additional clock speed headroom for the “uncore” (as Intel might call it) portions of the Phenom II—the memory controller, L3 cache, and HyperTransport—whose clocks run at 2GHz in this wave of new Socket AM3 processors.

Beyond that, little has changed in a month. The chips are still manufactured using AMD’s 45nm SOI fab process, and AMD hasn’t even modified its die size or transistor count estimates: they’re still 258 mm² and 758 million, just like previous Phenom IIs. The new chips are still compatible with existing 7-series chipsets from AMD, as well.

A Socket AM2+ processor (left) next to a Socket AM3 CPU (right)

The move to a new memory type requires a new pinout configuration, though, and that’s where Socket AM3 comes into the picture. You’ll need a Socket AM3 motherboard in order to use these Phenom II processors with DDR3 memory. This new socket type looks an awful lot like the prior Socket AM2+, but it has two fewer pins, for a total of 938. As a result, Socket AM2+ processors can’t fit into Socket AM3 motherboards. But in a clever twist, Socket AM3 processors will happily drop into Socket AM2+ motherboards and work with DDR2 memory.

Socket AM3 retains the same lever-style ZIF socket as Socket AM2+ and should be compatible with the same coolers

As you probably know by now, DDR3 memory enables higher clock speeds (and thus bandwidth) than DDR2-type memory, and it can operate at lower voltages, leading to reduced power consumption, as well. As with many such transitions, DDR3 isn’t magically better than DDR2 in every way; it’s just an incremental improvement. And, although it’s been around for a while now in Intel systems, DDR3 still costs more per megabyte than DDR2. Most folks expect shipment volumes to tip in favor of DDR3 at some point this year, though, and when that happens, prices should become more even. Heck, DDR3 is already pretty stinkin’ cheap, even if it does cost more than DDR2.

The new Phenom IIs officially support DDR3 memory at up to 1333MHz, but the multipliers are present for 1600MHz operation, as well, as they are in high-end Core 2 and Core i7 systems. Unlike the Core i7, the Phenom II still has “only” two memory channels onboard, not three. I say “only” because each channel of DDR3-1333 memory can transfer up to 10.7 GB/s. Combined with the 2GHz HyperTransport 3 link on each CPU, the total bandwidth available via Socket AM3 is roughly 37.3 GB/s, considerably more than the peak data rate of 10.7 GB/s available via a Core 2 processor’s front-side bus (even if it is less than the staggering 64 GB/s possible with a Core i7-965 Extreme and three channels of DDR3 at 1600MHz.) One caveat: the Phenom II only supports 1333MHz DDR3—at least, officially—with a single DIMM in each memory channel. With four DDR3 DIMMs, 1066MHz is the standard. Such limitations are nothing new, of course. Previous Phenoms have long supported 1066MHz DDR2 memory, but only with a single DIMM per channel.
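For the curious, those peak numbers can be reproduced with a little arithmetic: each 64-bit DDR3 channel moves eight bytes per transfer, and HyperTransport 3 at a 2GHz base clock is double-pumped over 16-bit links in each direction. Here’s a quick sketch of the math, assuming decimal gigabytes:

```python
# Back-of-the-envelope check of the bandwidth figures above.
# Assumptions: 64-bit (8-byte) DDR3 channels, a 16-bit HyperTransport 3
# link per direction, and decimal gigabytes (1 GB = 1e9 bytes).

def ddr3_channel_gbps(effective_mhz):
    """Peak transfer rate of one 64-bit DDR3 channel in GB/s."""
    return effective_mhz * 1e6 * 8 / 1e9

dual_channel = 2 * ddr3_channel_gbps(1333)  # ~21.3 GB/s

# HT3 at a 2GHz base clock is double-pumped (4 GT/s) over 16-bit links,
# giving 8 GB/s per direction, 16 GB/s aggregate.
ht3 = 2 * (2e9 * 2 * 2) / 1e9               # 16.0 GB/s

print(round(ddr3_channel_gbps(1333), 1))    # 10.7
print(round(dual_channel + ht3, 1))         # 37.3
```

That 37.3 GB/s figure counts the HyperTransport link in both directions, which is how aggregate platform bandwidth is usually quoted.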

An unnecessarily large close-up of the Phenom II X4 810

Oddly enough, the newest Phenom II chips aren’t the fastest ones. The first wave of Socket AM2+-only processors, including the X4 920 and 940, are higher-end products with faster core clock speeds. The first Socket AM3 parts are cut-down versions of the Phenom II with lower speeds and less cache. Here’s a list of ’em all.

Model                           Clock speed  North bridge speed  L3 cache  Cores  TDP  Price
Phenom II X3 710                2.6GHz       2.0GHz              6MB       3      95W  $125
Phenom II X3 720 Black Edition  2.8GHz       2.0GHz              6MB       3      95W  $145
Phenom II X4 805                2.5GHz       2.0GHz              4MB       4      95W  N/A
Phenom II X4 810                2.6GHz       2.0GHz              4MB       4      95W  $175
Phenom II X4 910                2.6GHz       2.0GHz              6MB       4      95W  N/A

I haven’t listed it above, but as with all Phenoms, these Socket AM3 processors have 512KB of L2 cache per core. Also, notice that there’s no pricing for the Phenom II X4 805 and the X4 910. Both of these processors are only intended for large PC makers, so AMD hasn’t set any retail pricing for these products.

We have two of the retail products in hand today. The X4 810 is a quad-core processor with 4MB of L3 cache (the remaining 2MB in silicon has been disabled), and AMD has positioned it roughly opposite the Core 2 Quad Q8200, given its price tag of 175 bucks. Like the Phenom II X4 810, the Q8200 has a 95W TDP rating, so the matchup between these two rivals should be fairly straightforward.

Less so is the case of the Phenom II X3 720, which has a higher clock speed of 2.8GHz, a full 6MB L3 cache—and one core disabled. AMD cites the Core 2 Duo E8400 as the 720’s most direct competitor, and that’s a bold statement indeed, since the E8400 has been an enthusiast value favorite for some time now. The E8400 has two higher performance cores, against the 720’s three lower performance ones. We’ll have to see how that dynamic works itself out in the performance sweeps, but the answer is likely to be complicated. Another complication: the E8400 is a 65W part, while the X3 720 has a 95W TDP, so you may pay in added power consumption for the additional core. That downside may be offset by the fact that the X3 720 is a Black Edition processor with an unlocked upper multiplier for dead-simple overclocking. All in all, an intriguing matchup.

Asus’ M4A79T Deluxe mobo

The first Socket AM3 motherboard to make it into Damage Labs is the Asus M4A79T Deluxe, pictured above. This is a relatively high-end board based on the 790FX chipset, and it includes a total of 32 PCIe 2.0 lanes for graphics, which can be configured in various ways across its four physical PCIe x16 slots, including dual x16 and quad x8 arrangements. As you can see, this mobo packs the customary complement of high-end features, with more ports than Oakland (and probably a better football team, too.) The M4A79T Deluxe is already listed at a couple of online vendors for around 200 bucks. In my limited use of this board during CPU testing, I found it to be in pretty good shape for such an early product, with exemplary stability during normal use and decent overclocking headroom, as well. We’ll see about subjecting it to a full review soon.

Test notes
In order to gauge the impact of memory type on performance and power use, we’ve tested the Phenom II X4 810 both with DDR2 memory on a Socket AM2+ board and with DDR3 memory on a Socket AM3 board. You’ll find the results in the following pages, labeled appropriately.

The Core 2 Quad Q8300

Here’s a look at the Core 2 Quad Q8300 processor we used for testing. This processor came to us courtesy of the good folks at NCIX and NCIXUS. Thanks to them for making this comparison possible. We haven’t yet had a Core 2 Quad Q8000-series processor in house for testing, a situation we’re happy to remedy. This quad-core processor is based on a pair of 45nm dual-core Penryn chips, like other new Core 2 Quads, but the chips on the Q8300 have had their onboard L2 caches reduced from 6MB to 2MB, so the Q8300 has a total of 4MB L2 cache. That’s a big reduction, but these are value quad-cores. The Q8300 has a 1333MHz front-side bus and a core clock of 2.5GHz, and it sells for as little as $190 right now.

The more direct competition for the Phenom II X4 810 is the Core 2 Quad Q8200, which runs at 2.33GHz, so we’ve underclocked our Q8300 to simulate a Q8200 for this review. I’m sure we’ll get around to testing the Q8300 at its stock speed, as well, eventually.

We’ve simulated several other speed grades via underclocking, too. Specifically, the Phenom II X4 920 is an underclocked 940, and the Core 2 Quad Q9550 is an underclocked Core 2 Extreme QX9650. We expect the performance of these “simulated” speed grades to be identical to the real things, but we generally omit these processors from our power consumption testing because we do anticipate power use would vary slightly from the actual products.

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

Processor: Core 2 Quad Q6600 (2.4GHz), Core 2 Duo E8400 (3.0GHz), Core 2 Duo E8600 (3.33GHz), Core 2 Quad Q8200 (2.33GHz), Core 2 Quad Q9300 (2.5GHz), Core 2 Quad Q9400 (2.66GHz), Core 2 Quad Q9550 (2.83GHz), Core 2 Extreme QX9770 (3.2GHz), Core 2 Extreme QX9775 (3.2GHz), Core i7-920 (2.66GHz), Core i7-940 (2.93GHz), Core i7-965 Extreme (3.2GHz), Athlon 64 X2 6400+ (3.2GHz), Phenom X3 8750 (2.4GHz), Phenom X4 9950 Black Edition (2.6GHz), Phenom II X3 720 (2.8GHz), Phenom II X4 920 (2.8GHz), Phenom II X4 940 (3.0GHz), and Phenom II X4 810 (2.6GHz, tested with both DDR2 and DDR3)

System bus: 1066MT/s (266MHz), 1333MT/s (333MHz), or 1600MT/s (400MHz) front-side bus on the Core 2 systems; 4.8 GT/s (2.4GHz) or 6.4 GT/s (3.2GHz) QPI on the Core i7 systems; 2.0 GT/s (1.0GHz), 3.6 GT/s (1.8GHz), or 4.0 GT/s (2.0GHz) HyperTransport on the AMD systems

Motherboards: Asus P5E3 Premium (X48 Express), Asus M3A79-T Deluxe (790FX), MSI DKA790GX Platinum (790GX), and Asus M4A79T Deluxe (790FX), with ICH9R, 6321ESB, ICH10R, and SB750 south bridges across the various platforms

Chipset drivers: INF update and Matrix Storage Manager on the Intel platforms

Memory: 4GB per system (two or three DIMMs) of Corsair memory, with timings set appropriately for each platform

Audio: Integrated, with SoundMAX, SigmaTel, or Realtek drivers as appropriate for each board

Hard drive: WD Caviar SE16 320GB SATA

Graphics: Radeon HD 4870 512MB PCIe with Catalyst 8.55.4-081009a-070794E-ATI drivers

OS: Windows Vista Ultimate x64 Edition

OS updates: Service Pack 1, DirectX redist update August 2008

Thanks to Corsair for providing us with memory for our testing. Their products and support are far and away superior to generic, no-name memory.

Our single-socket test systems were powered by OCZ GameXStream 700W power supply units. The dual-socket system was powered by a PC Power & Cooling Turbo-Cool 1KW-SR power supply. Thanks to OCZ for providing these units for our use in testing.

Also, the folks at NCIX hooked us up with a nice deal on the WD Caviar SE16 drives used in our test rigs. NCIX now sells to U.S. customers, so check them out.

The test systems’ Windows desktops were set at 1600×1200 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled.

We used the following versions of our test applications:

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Memory subsystem performance

These most excellent squiggly lines show bandwidth at various stages of the cache and memory hierarchy. Stunningly confusing, innit? The new Phenom IIs perform more or less as expected here, for what it’s worth.

Since it’s difficult to see the results once we get into main memory, let’s take a closer look at the 256MB block size:

We get a bit of a look at DDR3 in action here, as the Phenom II X4 810 on the Socket AM3 mobo transfers more data in this test than any other desktop processor save for the Core i7. The bandwidth boost when going from 1066MHz DDR2 to 1333MHz DDR3 isn’t huge, but it’s real and measurable.

The transition from DDR2 to DDR3 doesn’t exact a big penalty in terms of memory access latencies—just a single nanosecond on the X4 810. Notably, the Socket AM3 processors are a couple of nanoseconds quicker at getting to memory than the older Phenom IIs, likely due to the 200MHz higher L3 cache speeds of the Socket AM3 chips.

Crysis Warhead
We measured Warhead performance using the FRAPS frame-rate recording tool and playing over the same 60-second section of the game five times on each processor. This method has the advantage of simulating real gameplay quite closely, but it comes at the expense of precise repeatability. We believe five sample sessions are sufficient to get reasonably consistent results. In addition to average frame rates, we’ve included the low frame rates, because those tend to reflect the user experience in performance-critical situations. In order to diminish the effect of outliers, we’ve reported the median of the five low frame rates we encountered.
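The low-frame-rate reduction works like so: take the minimum frame rate from each session, then report the median of those five lows. A sketch with made-up sample numbers:

```python
# Sketch of the low-frame-rate reduction described above: take the
# minimum frame rate from each of five FRAPS sessions, then report the
# median of those lows to blunt the effect of outliers.
# The per-second frame rates below are invented for illustration.
from statistics import median

sessions = [
    [48, 52, 31, 60, 55],
    [50, 49, 28, 58, 57],
    [47, 51, 35, 59, 54],
    [49, 50, 22, 61, 56],
    [46, 53, 33, 57, 55],
]

lows = [min(s) for s in sessions]   # [31, 28, 35, 22, 33]
reported_low = median(lows)         # 31: the 22 fps outlier is ignored
print(reported_low)
```

Note how the single 22 fps outlier session doesn’t drag the reported low down, which is the whole point of taking the median rather than the minimum of minimums.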

We tested at relatively modest graphics settings, 1024×768 resolution with the game’s “Mainstream” quality settings, because we didn’t want our graphics card to be the performance-limiting factor. This is, after all, a CPU test.

In Warhead, as in most of today’s games, a pair of fast cores translates into better performance than three or more slower cores. As a result, the Core 2 Duo E8400 maintains higher frame rates than the Phenom II X3 720, and both of those processors are faster than the Phenom II X4 810. Among the quad cores, the X4 810 outperforms the Q8200. And, in a photo finish, the DDR2 and DDR3 configurations of the X4 810 perform almost identically.

Far Cry 2
After playing around with Far Cry 2, I decided to test it a little bit differently by recording frame rates during the jeep ride sequence at the very beginning of the game. I found that frame rates during this sequence were generally similar to those when running around elsewhere in the game, and after all, playing Far Cry 2 involves quite a bit of driving around. Since this sequence was repeatable, I just captured results from three 90-second sessions.

Again, I didn’t want the graphics card to be our primary performance constraint, so although I tested at fairly high visual quality levels, I used a relatively low 1024×768 display resolution and DirectX 9.

The new Phenoms edge out their ostensible rivals from Intel by very small margins here. Again, fewer, faster cores prove to be the best choice for this game, and DDR2 continues to match DDR3 on the X4 810.

Incidentally, some of the scores for Core 2 processors here are higher than you may have seen in our other recent reviews. After running into some strange results, I wound up re-testing the Core 2 processors in Far Cry 2, and several of them came out faster. I’m not sure what the cause of the problem was, but I’m confident these scores are now correct. I’ll be going back to the older reviews and updating those scores, as well.

Unreal Tournament 3
As you saw on the preceding page, I did manage to find a couple of CPU-limited games to use in testing. I decided to try to concoct another interesting scenario by setting up a 24-player CTF game on UT3’s epic Facing Worlds map, in which I was the only human player. The rest? Bots controlled by the CPU. I racked up frags like mad while capturing five 60-second gameplay sessions for each processor.

Oh, and the screen resolution was set to 1280×1024 for testing, with UT3’s default quality options and “framerate smoothing” disabled.

There’s undoubtedly some variance built into these results, since I was playing a pretty darned random botmatch, but I’m not convinced the X3 720’s strong showing is a fluke. Instead, I think its faster L3 cache may be what gives it the advantage over, say, the Phenom II X4 940. Also, hey, our little botmatch idea has produced a real gaming scenario where having more than two cores seems to matter. The Core 2 Quad Q8200 beats out the Core 2 Duo E8400 here, for instance. Along those same lines, the X3 720 seems to have the ideal balance of core count and clock speed for this scenario.

The Phenom II X4 810, meanwhile, churns out frames faster than the Q8200, and it gets a nice little boost from DDR3, as well.

Half Life 2: Episode Two
Our next test is a good, old custom-recorded in-game timedemo, precisely repeatable.

All of these frame rates are ridiculously high, of course. If we consider relative performance, the Phenom II X3 720 just trails the E8400 by a hair, while the X4 810 easily surpasses the Q8200.

Source engine particle simulation
Next up is a test we picked up during a visit to Valve Software, the developers of the Half-Life games. They had been working to incorporate support for multi-core processors into their Source game engine, and they cooked up some benchmarks to demonstrate the benefits of multithreading.

This test runs a particle simulation inside of the Source engine. Most games today use particle systems to create effects like smoke, steam, and fire, but the realism and interactivity of those effects are limited by the available computing horsepower. Valve’s particle system distributes the load across multiple CPU cores.

The X3 720’s third core makes itself known again here, as the X3 just beats out the Core 2 Duo E8400. The quad-core processors are very evenly matched, the X4 810 in a virtual tie with the Q8200.

WorldBench’s overall score is a pretty decent indication of general-use performance for desktop computers. This benchmark uses scripting to step through a series of tasks in common Windows applications and then produces an overall score for comparison. WorldBench also records individual results for its component application tests, allowing us to compare performance in each. We’ll look at the overall score, and then we’ll show individual application results alongside the results from some of our own application tests.

The Socket AM3 processors prove to be a little disappointing in WorldBench compared to their rivals from Intel. The E8400 opens up a big lead over any AMD product, in fact.

Productivity and general use software

MS Office productivity

Firefox web browsing

Multitasking – Firefox and Windows Media Encoder

WinZip file compression

Nero CD authoring

Through the MS Office, Firefox, and multitasking tests, the Socket AM3 processors look to be very competitive. In fact, the Core 2 Quad Q8200 has a much rougher time, finishing dead last in two of the three tests. However, the Phenoms suffer when we get to the WinZip and Nero tests, both of which tend to rely on disk controller performance to a degree.

Those are the breaks in these days of “platformization.” AMD’s entire lineup of south bridge chips for several years has had trouble with a key performance feature, Native Command Queuing for Serial ATA. Turning on NCQ can improve performance in these tests, but it comes at the cost of higher CPU utilization, which hurts performance in other tests—most notably, in WorldBench’s Photoshop test.

For this review, we’ve included results with AHCI (and thus NCQ and SATA hot-swapping) disabled, at AMD’s request, for all Phenom II processors. (The Athlon 64 and original Phenoms were tested with AHCI enabled.) When we disabled AHCI, we found that performance in Photoshop rose and performance in Nero and other tests dropped by offsetting amounts; the overall WorldBench score was unchanged.

Image processing


The Phenom IIs perform better here than the older Phenoms, as expected. Yet even the fastest Phenom II trails the slowest Intel processor, the Q8200, by over 30 seconds.

The Panorama Factory photo stitching
The Panorama Factory handles an increasingly popular image processing task: joining together multiple images to create a wide-aspect panorama. This task can require lots of memory and can be computationally intensive, so The Panorama Factory comes in a 64-bit version that’s widely multithreaded. I asked it to join four pictures, each eight megapixels, into a glorious panorama of the interior of Damage Labs. The program’s timer function captures the amount of time needed to perform each stage of the panorama creation process. I’ve also added up the total operation time to give us an overall measure of performance.

The Q8200 finishes just a little sooner than the X4 810 here. Despite the relatively strong performance of the Intel processors in this application, though, the Phenom II X3 720’s additional core puts it ahead of the E8400.

Below is a look at the individual operations required to create a panorama, if you care to see that sort of detail.

picCOLOR image analysis
picCOLOR was created by Dr. Reinert H. G. Müller of the FIBUS Institute. This isn’t Photoshop; picCOLOR’s image analysis capabilities can be used for scientific applications like particle flow analysis. Dr. Müller has supplied us with new revisions of his program for some time now, all the while optimizing picCOLOR for new advances in CPU technology, including MMX, SSE2, and Hyper-Threading. Naturally, he’s ported picCOLOR to 64 bits, so we can test performance with the x86-64 ISA. Many of the individual functions that make up the test are multithreaded.

The 720’s third core isn’t sufficient to give it the advantage over the E8400 in this application, even though it is multithreaded. The Q8200 and X4 810 are in a familiar place, meanwhile: a dead heat.

Media encoding and editing

x264 HD benchmark
This benchmark tests performance with one of the most popular H.264 video encoders, the open-source x264. The results come in two parts, for the two passes the encoder makes through the video file. I’ve chosen to report them separately, since that’s typically how the results are reported in the public database of results for this benchmark. These scores come from the newer, faster version 0.59.819 of the x264 executable.

We’ll give the X4 810 the win over the Q8200 on the strength of its performance in pass one of the encoding process. The X3 720 proves faster than the E8400 in both passes, another triumph of its triple-core config.

Windows Media Encoder x64 Edition video encoding
Windows Media Encoder is one of the few popular video encoding tools that uses four threads to take advantage of quad-core systems, and it comes in a 64-bit version. Unfortunately, it doesn’t appear to use more than four threads, even on an eight-core system. For this test, I asked Windows Media Encoder to transcode a 153MB 1080-line widescreen video into a 720-line WMV using its built-in DVD/Hardware profile. Because the default “High definition quality audio” codec threw some errors in Windows Vista, I instead used the “Multichannel audio” codec. Both audio codecs have a variable bitrate peak of 192Kbps.

Windows Media Encoder doesn’t know what to do with a triple-core CPU, so it only spins off two threads, and the X3 720’s encode times suffer as a result. At least the X4 810 does well, finishing ahead of the Q8200.

Windows Media Encoder video encoding

Roxio VideoWave Movie Creator

Sadly, neither of WorldBench’s video manipulation benchmarks appears to use more than two threads, and thus cores, to any good effect. Put simply, I prefer our other video encoding tests, which are good examples of multithreaded real-world applications.

LAME MT audio encoding
LAME MT is a multithreaded version of the LAME MP3 encoder. LAME MT was created as a demonstration of the benefits of multithreading specifically on a Hyper-Threaded CPU like the Pentium 4. Of course, multithreading works even better on multi-core processors. You can download a paper (in Word format) describing the programming effort.

Rather than run multiple parallel threads, LAME MT runs the MP3 encoder’s psycho-acoustic analysis function on a separate thread from the rest of the encoder using simple linear pipelining. That is, the psycho-acoustic analysis happens one frame ahead of everything else, and its results are buffered for later use by the second thread. That means this test won’t really use more than two CPU cores.
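That two-thread pipeline is simple enough to sketch. The stand-in functions below are illustrative placeholders, not LAME’s actual internals; the point is the one-frame-ahead buffering between the analysis thread and the encoder thread:

```python
# A minimal sketch of LAME MT's two-stage pipeline as described above:
# psychoacoustic analysis runs one frame ahead on its own thread, and
# its results are buffered for the encoder thread. analyze() and
# encode() are trivial stand-ins for the real work.
import queue
import threading

def analyze(frame):
    return frame * 2          # stand-in for psychoacoustic analysis

def encode(frame, analysis):
    return (frame, analysis)  # stand-in for the actual MP3 encoding

frames = list(range(5))
buf = queue.Queue(maxsize=1)  # keeps analysis just one frame ahead
encoded = []

def analysis_thread():
    for f in frames:
        buf.put(analyze(f))   # blocks if the encoder falls behind

def encoder_thread():
    for f in frames:
        encoded.append(encode(f, buf.get()))

t1 = threading.Thread(target=analysis_thread)
t2 = threading.Thread(target=encoder_thread)
t1.start(); t2.start()
t1.join(); t2.join()
print(encoded)  # [(0, 0), (1, 2), (2, 4), (3, 6), (4, 8)]
```

Because the queue holds only one frame of results, the analysis stage can never run far ahead of the encoder, which is exactly why this scheme maxes out at two busy cores.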

We have results for two different 64-bit versions of LAME MT from different compilers, one from Microsoft and one from Intel, doing two different types of encoding, variable bit rate and constant bit rate. We are encoding a massive 10-minute, 6-second 101MB WAV file here.

This app favors fewer, faster cores, and as a result, some of our cheaper processors outperform their more expensive counterparts. At the same time, among the products we’re comparing today, the Intel CPUs finish encoding before their AMD rivals, regardless of the compiler used.

3D modeling and rendering

Cinebench rendering
Graphics is a classic example of a computing problem that’s easily parallelizable, so it’s no surprise that we can exploit a multi-core processor with a 3D rendering app. Cinebench is the first of those we’ll try, a benchmark based on Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. This test runs with just a single thread and then with as many threads as CPU cores (or threads, in CPUs with multiple hardware threads per core) are available.

Chalk up another win for AMD’s triple-core wonder. As expected, it’s faster than the dual-core E8400 in this rendering app. The X4 810 is quicker than the Q8200, as well.

POV-Ray rendering
We’re using the latest beta version of POV-Ray 3.7 that includes native multithreading and 64-bit support. Some of the beta 64-bit executables have been quite a bit slower than the 3.6 release, but this should give us a decent look at comparative performance, regardless.

The chess scene shows us the sort of multicore performance scaling to which we’re accustomed with rendering applications, and the Socket AM3 processors perform well, as a result. The benchmark scene, on the other hand, involves a long calculation that’s not multithreaded, so the Core 2 Duo E8400 outruns the Phenom II X4 810.

3ds max modeling and rendering

Valve VRAD map compilation
This next test processes a map from Half-Life 2 using Valve’s VRAD lighting tool. Valve uses VRAD to pre-compute lighting that goes into games like Half-Life 2.

In our last two rendering tests, the Q8200 is just a few seconds faster than the X4 810, while the X3 720 rides its third core to wide victories over the E8400.

Folding@home
Next, we have a slick little Folding@home benchmark CD created by notfred, one of the members of Team TR, our excellent Folding team. For the unfamiliar, Folding@home is a distributed computing project created by folks at Stanford University that investigates how proteins work in the human body, in an attempt to better understand diseases like Parkinson’s, Alzheimer’s, and cystic fibrosis. It’s a great way to use your PC’s spare CPU cycles to help advance medical research. I’d encourage you to visit our distributed computing forum and consider joining our team if you haven’t already joined one.

The Folding@home project uses a number of highly optimized routines to process different types of work units from Stanford’s research projects. The Gromacs core, for instance, uses SSE on Intel processors, 3DNow! on AMD processors, and Altivec on PowerPCs. Overall, Folding@home should be a great example of real-world scientific computing.

notfred’s Folding Benchmark CD tests the most common work unit types and estimates performance in terms of the points per day that a CPU could earn for a Folding team member. The CD itself is a bootable ISO. The CD boots into Linux, detects the system’s processors and Ethernet adapters, picks up an IP address, and downloads the latest versions of the Folding execution cores from Stanford. It then processes a sample work unit of each type.

On a system with two CPU cores, for instance, the CD spins off a Tinker WU on core 1 and an Amber WU on core 2. When either of those WUs are finished, the benchmark moves on to additional WU types, always keeping both cores occupied with some sort of calculation. Should the benchmark run out of new WUs to test, it simply processes another WU in order to prevent any of the cores from going idle as the others finish. Once all four of the WU types have been tested, the benchmark averages the points per day among them. That points-per-day average is then multiplied by the number of cores on the CPU in order to estimate the total number of points per day that CPU might achieve.
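If that description is right, the final estimate boils down to a simple average-and-multiply. Here’s a sketch with invented per-WU scores (the WU names and point values are illustrative, not actual benchmark output):

```python
# Sketch of the points-per-day estimate described above: average the
# per-WU-type scores, then multiply by the number of CPU cores.
# The WU names and point values below are invented for illustration.
from statistics import mean

wu_points_per_day = {
    "Tinker": 220.0,
    "Amber": 180.0,
    "Gromacs": 410.0,
    "Gromacs variant": 550.0,  # hypothetical fourth WU type
}

def estimated_ppd(per_wu_scores, cores):
    """Average PPD across WU types, scaled by core count."""
    return mean(per_wu_scores.values()) * cores

print(estimated_ppd(wu_points_per_day, 4))  # 1360.0 for a quad-core
```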

This may be a somewhat quirky method of estimating overall performance, but my sense is that it generally ought to work. We’ve discussed some potential reservations about how it works here, for those who are interested. I have included results for each of the individual WU types below, so you can see how the different CPUs perform on each.

Yeah, so: the Folding parity between the X4 810 and Q8200 couldn’t be much clearer. And, once more with feeling, the X3’s multiplicity trumps the E8400.

MyriMatch proteomics
Our benchmarks sometimes come from unexpected places, and such is the case with this one. David Tabb is a friend of mine from high school and a long-time TR reader. He has provided us with an intriguing new benchmark based on an application he’s developed for use in his research work. The application is called MyriMatch, and it’s intended for use in proteomics, or the large-scale study of protein. I’ll stop right here and let him explain what MyriMatch does:

In shotgun proteomics, researchers digest complex mixtures of proteins into peptides, separate them by liquid chromatography, and analyze them by tandem mass spectrometers. This creates data sets containing tens of thousands of spectra that can be identified to peptide sequences drawn from the known genomes for most lab organisms. The first software for this purpose was Sequest, created by John Yates and Jimmy Eng at the University of Washington. Recently, David Tabb and Matthew Chambers at Vanderbilt University developed MyriMatch, an algorithm that can exploit multiple cores and multiple computers for this matching. Source code and binaries of MyriMatch are publicly available.

In this test, 5555 tandem mass spectra from a Thermo LTQ mass spectrometer are identified to peptides generated from the 6714 proteins of S. cerevisiae (baker’s yeast). The data set was provided by Andy Link at Vanderbilt University. The FASTA protein sequence database was provided by the Saccharomyces Genome Database.

MyriMatch uses threading to accelerate the handling of protein sequences. The database (read into memory) is separated into a number of jobs, typically the number of threads multiplied by 10. If four threads are used in the above database, for example, each job consists of 168 protein sequences (1/40th of the database). When a thread finishes handling all proteins in the current job, it accepts another job from the queue. This technique is intended to minimize synchronization overhead between threads and minimize CPU idle time.
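The partitioning scheme above is easy to sketch, and with four threads and the 6714-protein yeast database, the arithmetic does land on the 168-sequence jobs quoted. The names and structure here are illustrative, not MyriMatch’s actual code:

```python
# Sketch of the job partitioning described above: split the protein
# database into (threads * 10) jobs, then let worker threads pull jobs
# from a shared queue until it runs dry.
import math
import queue
import threading

def make_jobs(n_proteins, n_threads, jobs_per_thread=10):
    """Partition protein indices into roughly equal-sized jobs."""
    n_jobs = n_threads * jobs_per_thread
    size = math.ceil(n_proteins / n_jobs)
    return [range(i, min(i + size, n_proteins))
            for i in range(0, n_proteins, size)]

jobs = make_jobs(6714, 4)
print(len(jobs[0]))   # 168 proteins per job, as in the text

work = queue.Queue()
for j in jobs:
    work.put(j)

processed = 0
lock = threading.Lock()

def worker():
    global processed
    while True:
        try:
            job = work.get_nowait()
        except queue.Empty:
            return            # queue empty: this thread is done
        n = len(job)          # stand-in for the real matching work
        with lock:
            processed += n

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(processed)  # 6714: every protein handled exactly once
```

The shared queue means a thread that draws easy jobs simply takes more of them, which is how the scheme keeps all cores busy until the very end of the run.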

The most important news for us is that MyriMatch is a widely multithreaded real-world application that we can use with a relevant data set. MyriMatch also offers control over the number of threads used, so we’ve tested with one to eight threads.

I should mention that performance scaling in MyriMatch tends to be limited by several factors, including memory bandwidth, as David explains:

Inefficiencies in scaling occur from a variety of sources. First, each thread is comparing to a common collection of tandem mass spectra in memory. Although most peptides will be compared to different spectra within the collection, sometimes multiple threads attempt to compare to the same spectra simultaneously, necessitating a mutex mechanism for each spectrum. Second, the number of spectra in memory far exceeds the capacity of processor caches, and so the memory controller gets a fair workout during execution.

Here’s how the processors performed.

I had really hoped to see DDR3 make a big difference in this bandwidth-intensive application, but things just didn’t work out that way. Regardless, both of the Socket AM3 processors fare relatively well.

STARS Euler3D computational fluid dynamics
Charles O’Neill works in the Computational Aeroservoelasticity Laboratory at Oklahoma State University, and he contacted us to suggest we try the computational fluid dynamics (CFD) benchmark based on the STARS Euler3D structural analysis routines developed at CASELab. This benchmark has been available to the public for some time in single-threaded form, but Charles was kind enough to put together a multithreaded version of the benchmark for us with a larger data set. He has also put a web page online with a downloadable version of the multithreaded benchmark, a description, and some results here.

In this test, the application is basically doing analysis of airflow over an aircraft wing. I will step out of the way and let Charles explain the rest:

The benchmark testcase is the AGARD 445.6 aeroelastic test wing. The wing uses a NACA 65A004 airfoil section and has a panel aspect ratio of 1.65, taper ratio of 0.66, and a quarter-chord sweep angle of 45º. This AGARD wing was tested at the NASA Langley Research Center in the 16-foot Transonic Dynamics Tunnel and is a standard aeroelastic test case used for validation of unsteady, compressible CFD codes.

The CFD grid contains 1.23 million tetrahedral elements and 223 thousand nodes . . . . The benchmark executable advances the Mach 0.50 AGARD flow solution. A benchmark score is reported as a CFD cycle frequency in Hertz.
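To make that scoring concrete: the solver advances the flow solution through CFD cycles, and the score is simply cycles completed per second of wall time. A trivial Python sketch with invented numbers (this is our restatement, not code from the benchmark itself):

```python
def cfd_score_hz(cycles_completed, elapsed_seconds):
    """Euler3D-style score: CFD cycle frequency in Hertz (higher is faster)."""
    return cycles_completed / elapsed_seconds

# A machine that finishes 1,000 solver cycles in 250 seconds scores 4.0 Hz;
# one that needs 500 seconds for the same work scores 2.0 Hz.
print(cfd_score_hz(1000, 250.0))
print(cfd_score_hz(1000, 500.0))
```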

So the higher the score, the faster the computer. Charles tells me these CFD solvers are very floating-point intensive, but oftentimes limited primarily by memory bandwidth. He has modified the benchmark for us in order to enable control over the number of threads used. Here’s how our contenders handled the test with different thread counts.

The switch to DDR3 doesn’t contribute much to Phenom II performance here, either, but it is enough to lift the X4 810 past the Q8200.

Power consumption and efficiency
Our Extech 380803 power meter has the ability to log data, so we can capture power use over a span of time. The meter reads power use at the wall socket, so it incorporates power use from the entire system—the CPU, motherboard, memory, graphics solution, hard drives, and anything else plugged into the power supply unit. (We plugged the computer monitor into a separate outlet, though.) We measured how each of our test systems used power across a set time period, during which time we ran Cinebench’s multithreaded rendering test.

All of the systems had their power management features (such as SpeedStep and Cool’n’Quiet) enabled during these tests via Windows Vista’s “Balanced” power options profile.

Although we don’t usually include “simulated” CPU speed grades in our power results, I’ve made an exception for the Q8200 out of sheer curiosity.

Let’s slice up the data in various ways in order to better understand them. We’ll start with a look at idle power, taken from the trailing edge of our test period, after all CPUs have completed the render.

The Phenom II is a major, major improvement in idle power draw over the original Phenom. As with the past generation, though, the deactivation of one of the cores on the X3 product has no measurable benefit to power consumption at idle. However, I’d say the power draw of the DDR3 system is pretty decent, considering that our Asus Socket AM3 board is a high-end mobo.

Next, we can look at peak power draw by taking an average from the ten-second span from 15 to 25 seconds into our test period, during which the processors were rendering.

Looks like DDR3 saves a few watts in peak power draw, at least. Even so, the X4 810 consumes a little more power than Intel’s comparable quad-core processors. And the X3 720 draws just as much power as the X4 810. Looks like the 720’s higher clock speed and larger cache are making up the difference.

Another way to gauge power efficiency is to look at total energy use over our time span. This method takes into account power use both during the render and during the idle time. We can express the result in terms of watt-seconds, also known as joules.

We can quantify efficiency even better by considering specifically the amount of energy used to render the scene. Since the different systems completed the render at different speeds, we’ve isolated the render period for each system. We’ve then computed the amount of energy used by each system to render the scene. This method should account for both power use and, to some degree, performance, because shorter render times may lead to less energy consumption.
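The arithmetic behind these last two measures is straightforward; here's a hedged Python sketch with made-up one-second wall-socket samples (our actual Extech log and per-system render windows differ):

```python
def total_joules(watt_samples, interval_s=1.0):
    """Energy over the whole test period: watts x seconds = joules."""
    return sum(watt_samples) * interval_s

def render_joules(watt_samples, start, end, interval_s=1.0):
    """Energy over just the render window, isolated per system."""
    return sum(watt_samples[start:end]) * interval_s

# Made-up wall-socket readings, one per second: idle, render, idle again.
power = [95, 96, 160, 162, 161, 160, 97, 95]
print(total_joules(power))         # 1026.0 J over the full period
print(render_joules(power, 2, 6))  # 643.0 J for the render alone
```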

Boy, are these last two measures close. Our simulated Q8200 uses just a little less energy to render the scene than the X4 810. Intel may have a slight edge in power efficiency, but in this product category, the Phenom II is very, very similar.

Because it’s a Black Edition, overclocking the Phenom II X3 720 is just a matter of turning up the CPU multiplier in the BIOS. For my run at glory with the 720, I used the Asus Socket AM3 board, just for fun, along with a ridiculously huge Cooler Master heatsink/fan combo. I tested stability by booting into Windows and running a multithreaded Prime95 stress test. The log of my attempts is below; the process was fairly simple, and I started at 3.4GHz based on my prior experience with Phenom IIs.

-3.4GHz, 1.325V – BSOD on boot
-3.4GHz, 1.35V – Boots Windows, reboot in P95
-3.4GHz, 1.375V – BSOD in P95
-3.4GHz, 1.4V – Seems OK
-3.5GHz, 1.425V – Boots Windows, reboot in P95
-3.5GHz, 1.45V – Seems OK
-3.6GHz, 1.45V – Reboot in P95
-3.6GHz, 1.475V – BSOD in P95
-3.6GHz, 1.5V – BSOD in P95

With a relatively modest number of attempts, I’d determined that this X3 720 can run at 3.5GHz, at 1.45V, without much trouble. Not too shabby.

Overclocking the Phenom II X4 810 was a more complicated process since I had to turn up the base HyperTransport clock in order to raise the CPU speed. Making this work involved more voltage tweaks, reductions of the HyperTransport multiplier, adjustments to memory clocks, and several forms of psychotropic drugs. Eventually, I settled on the following stable configuration: a 3.458GHz core clock at 1.45V, with a 266MHz base clock, a 1596MHz HyperTransport link, and 1418MHz memory. I’d given additional juice to the north bridge and RAM, along with a slight increase in HyperTransport voltage, when all was said and done. Again, not bad, but getting there was a bit more work than with the X3 720, and I was less confident in the overall stability of the system at the end of the process.
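For those keeping score, the final clocks fall straight out of that 266MHz base clock. A quick Python sanity check (the multiplier values are our inference from the resulting frequencies, not BIOS readouts):

```python
def derived_clock(base_mhz, multiplier):
    """Effective clock = reference (base) clock x multiplier."""
    return base_mhz * multiplier

base = 266  # MHz, up from the stock 200MHz reference
print(derived_clock(base, 13))  # 3458 MHz -- the 3.458GHz core clock
print(derived_clock(base, 6))   # 1596 MHz -- the HyperTransport link
# The 1418MHz memory speed comes from a separate, fractional memory divider
# rather than a simple integer multiple of the base clock.
```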

What about performance?

At 3.5GHz, the X3 720 is really stinkin’ fast. The X4 810 is less impressive at 3.46GHz, probably held back here by its smaller L3 cache and slower HT/L3 “uncore” frequency.

Well, jeez, it’s hard not to like the Phenom II X3 720, which is just a bundle of gimpy goodness. Thanks to its higher clock speed and larger cache, the X3 720 quite frequently outperforms its bigger brother, the Phenom II X4 810, even though it costs less. And, at 2.8GHz, the 720 is fast enough to match up pretty well against the Core 2 Duo E8400 in many applications—including games—that tend to run best with fewer and faster cores. In more widely multithreaded apps where the 720’s third core kicks in, the Phenom II X3 almost always outruns the E8400, sometimes dramatically. Oddly enough, the 720’s combination of three cores and relatively high clock speeds may be the ideal trade-off for the current state of PC software. Who knew?

Add in the X3 720’s fairly tame power consumption, its apparently excellent overclocking proposition, and the fact that—regardless of memory type—the Phenom II has a superior system architecture to the Core 2, and the E8400 starts to look rather weak by comparison. The Phenom II X3 720 is our new favorite among mid-range PC processors. Look for it to secure a place in one of the builds in our upcoming system guide refresh.

The Phenom II X4 810 is also generally faster and more attractive overall than the Core 2 Quad Q8200, but I can’t say I like the value proposition of either of these processors all that well. Because of their reduced cache sizes and clock speeds, these value quad-cores rely almost entirely on multithreaded applications to achieve strong performance. When software doesn’t oblige (and it often doesn’t), they stumble, as illustrated by the Q8200’s poor showings in several of our benchmarks, including MS Office, Firefox, and the gaming tests. For the vast majority of users, the Phenom II X3 720 will be a better choice, and it costs less.

Oh, and we didn’t see much in the way of performance gains when moving the Phenom II X4 810 from DDR2 memory to DDR3 memory. That’s no great shock, all things considered, and no knock on AMD’s implementation of Socket AM3. I suspect we may see more benefits from DDR3 once we get our hands on a non-neutered Socket AM3 quad-core, like a Phenom II X4 940 or something even faster, especially if AMD builds in support for higher memory frequencies. Until then, Socket AM3 is a fine upgrade path waiting for a reason to exist.

0 responses to “AMD’s Socket AM3 Phenom II processors”

  1. so basically you’re both saying that both chips can garner an extra 800MHz or so off the overclock?

  2. Well thanks for confirming what I already knew about you.

    If you really wanted to be a true grammar nazi you would have pointed out I missed an “it” in that sentence. But I guess you’re really not THAT good.

  3. That explains why you mistype “too” or why your name is so easy to remember. Having trouble with long words?

  4. The tests that are run and the way those tests are run make perfect sense. But I disagree when you say there are no gaps. You can read post 117 to see the specific gap I thought was worth mentioning. None the less, hearts are pretty hardened regarding this matter, so I’ll bow out here.

  5. My point is there are really no gaps if people take a small amount of time and actually think. However for the few people that can’t they can always just look around for that specific info.

    This is all of the reviews (well maybe not HDs and the very old platform they are tested on…). CPU reviews are run with the best or at least close to best graphics card around, most people don’t own that. It’s done so the GPU isn’t bottlenecking the CPU for the game tests. The GPU tests are run with about the best CPU around so that the CPU isn’t a bottleneck in the GPU tests. Both are not realistic steps up for almost anyone but are the most effective ways of isolating the performance of the items actually being reviewed.

  6. The issue I have with your remarks is that you are basing what benchmarks TR should run based on whether or not someone else is running them, where as I am wondering about what benchmarks should be run based on where I perceive there to be gaps in the information provided.

  7. Exactly it’s like the people that run the site think the readers here can think.

    It’s not like they have time to run ever benchmark in the way everyone would want them to run it.

  8. Considering I actually looked at Min and Max frames per second I could tell that both CPUs would work fine. Also its not like anyone would consider those in comparison because of the price difference.

    For me those graphs don’t apply anyways but I only have a 9600GT and I wouldn’t get those FPS even at 1280×1024. Also 1280×1024 is the max res I can run and really is a good overall res to compare. It’s not like they are running 640×480 in all of the tests or actually any… To me it would be nice if they ran something not FPS but hey I understand why they run what they do.

    I might consider one of the cheaper quads over the E8400 because they are a little faster in some games, (not that I care about FPS in the first place) and I could get 2 more cores just for a little more money.

  9. So you’re ok with a cpu review not being able to tell you whether you should buy one cpu over another if you intend it solely for gaming?

  10. No not really but thanks for going off into the weeds.

    The thing is they are testing the cpu, not GPU or GPU limitations or specific games. The point is if you want playable res. tests look at the [H] where you can’t tell what is going on because they are always changing the settings.

    Frankly the same can be said about the GPU test very very few people play at the res. That is tested in a number of cases but whatever most people can guess how thing will run on a smaller monitor or they can look elsewhere. The fact people only believe TR’s results is what is really silly…

  11. The last 2 page of thread just got silly with you guys arguing with Krogoth.

    Its the never ending soap opera I get my kicks from when bored and having a beer reading.

  12. Maybe TR shouldn’t do reviews at all then – that is the logical extension of such an argument. Seriously, that is just silliness to me.

  13. Because TR is generally the most comprehensive review site around, and I’m just pointing out that they are being bested by other sites (which I don’t trust nearly as much) in certain, specific areas.

    That and the fact that I feel that the gaming tests are pretty much useless at low resolutions.

  14. A Star Trek character actually, but I wouldn’t know any names because I never watched a single episode of that.

  15. I didn’t get it at first either, I checked back and was sitting back further. I think it’s an alien doing a forehead slap :p

  16. Why should TR change when there are other reviewers out there that are legit and give you the info you want?

  17. Some people have that fixed belief that “GPU-bound” means “I can stick in a Pentium and it wouldn’t make a difference”. It’s just that the maximising bottom-line goes lower a bit.

  18. Funny that most of nonsense comes from users who seemed to have an ax to grind.

    Opinions are serious business!

  19. Well I’d recommend some people just go with a ancient A64X2 cuz they are so effective for the money. So if the Q6600 is cheap enough, it certainly is a good value still. All it comes down to is price.

  20. FWIW, Krogoth’s last comment was in reply to Meadows, not to Usacomp2k3.

    That said, this entire thread is unbelievably stupid.

  21. Yeah Q6600 is still pretty sweet. I have had one myself since August 07 when everyone was excited about the G0 stepping. It does 3.0 GHz while undervolted from stock! My mobo has the wonderful vdroop “issue” that results in the CPU receiving about 1.17v when loaded down.

    I also got a Phenom II 940 for fun to mess with. It’s a decent chip, but it needs more voltage (stock is 1.35v but my mobo is giving it 1.40 apparently) and probably dumps at least as much heat as the Q6600. It’s definitely not much faster than the Q6600 @ 3.0, if faster at all. It’ll do about 3.4 GHz on that 1.40v.

    As to the best value, well I still tell people to go with Wolfdale if they game, otherwise just grab a cheap A64X2. But I do a lot of work with video encoding so quad cores are for me.

  22. Awesome, a new trick from Krogoth, ASCII pictures!

    I won’t speak for anyone else (see that?) but I’m not out of touch. I’ve built a few AMD IGP systems over the past year or so for ‘average joes’ because that’s all they need and will be likely to need for some time.

  23. That is far from the truth. If my *old* CPU is still spending most of its potential on folding or idling along. It is still bloody overkill for the majority of PC users. Some of these users still get by with Athlon XP and P4 rigs!

    There is a big reason why Intel and AMD are pimping out binned down parts to their OEM customers. They are dirt-cheap and powerful enough to do practically anything where time is not money. They also help keep the platform’s cost down to a minimal. Remember, average Joe does not have quite the generous budget that some enthusiasts enjoy.

    It seems that some enthusiast are so caught-up with high-end parts that they never give lower-end stuff a chance. They also seemed to be out of touch with the mainstream market’s needs and assume that average joe *needs* a performance GPU and quad-core CPU.

  24. Oh lay off man. He was just saying that he is impressed how well the processor he bought 2 years ago is performing. Just like how people who bought the 640GB SE16 comment on how well it still performs. It isn’t at all for someone to purchase one at this time, just a comment on the lifespan that the purchase has shown.

  25. I believe the majority, ergo >50%, want a processor that uses less power than yours and does more – or better – work than yours (optional but preferable), for less money than yours, and the same applies on the videocard front.

  26. My X2-3600 also remains quite useful and powerful, and I just ordered a quite useful and powerful Sempron 3000.

    There is nothing special about the Q6600. At the right price, it remains a decent purchase even today. But so does the Sempron 3000 I just bought.

  27. ……………………………………..________……………………

    You clearly did not bother to read the entire thread and now jump into baseless conclusions. I never said anywhere where my system was powerful enough for anybody. Anybody = 100% of PC users. It is actually closer to massive overkill for majority of them. FYI, majority = greater than 50% of the population. It is not 100% you mind.

    The point that I had always been trying to make is that PC technology has been progressing much more slowly in areas where we were used to huge leaps. The overall difference between the mid-range and high-end is diminishing. It is becoming more difficult to justify spending the 2x-3x premium that the high-end commands. That is unless you use the PC for serious work.

  28. Where do you exactly get this distorted image from?

    I only said that my Q6600 manages to be quite useful and powerful despite that the design is two years old. I would imagine that most sensible buyers like to have some longevity in their product?

    It is not that much different in the GPU spectrum. High-end GPUs from two years ago are still quite capable of handling anything that does not need to feed a 30″ monitor at native resolution. Even the current generation of budget and mid-range GPUs can do similar or greater feats. My X1900XT may be old, but it still handles itself at 1280×1024 with most modern games. I have yet to find a demanding, worthwhile title that brings to X1900XT to its knees.

    In the end, it has never been a better time for budget-minded PC gamers. You can get so much power for under a grand.

  29. That’s good, because I never said whether people need high-end parts or not either. I just don’t base it on what I’ve got. It’s just typical of you to poopoo parts that are faster than yours, examples are your ongoing ‘q6600 ftw’ here or what you post in every video card article. If you’re happy with what you’ve got and it works for you, great, go ahead and say that, but don’t use it as a baseline to tell others what they might need or want.

  30. That’s the point, there are users who care but can’t afford everything. Not every computer enthusiast takes home twelve thousand dollars every month.

  31. 1-3% is worse case, ~20% is best, while 5-10% is expected. I honestly doubt you can see a 5-10% difference outside of benchmarks that record the results. The users who *care* about such trivial differences will never settle for any of the budget and mid-range parts in the first place unless their budget is tight.

  32. You are a blind or spoiled fool.

    I never say anywhere where CPUs under $299 were good enough for *[

  33. And here I thought Krogoth’s painful “I’m out of ammo, let’s paste something” replies were low.

  34. except Intel’s sockets have the pins. This makes so little sense I have to think it was an aborted joke attempt.

  35. Guys don’t bother, it’s just Krogoth trying to make himself feel good about his personal PC by just using generalities rather than coming out and saying it. It’s just like what he posts in video card story comments, the tipoff was this q[

  36. The list of OSes superior to XP is a quite long one, incorporating several Linux distros, MacOS X, and Windows Vista and 7.

  37. If they cut those pins off like intel, it will shorten the length the data has to travel from one part of the system to the other, thus, resulting in a faster data rate, which will in turn lead to increased performance of the overall computer.

  38. When a game runs at 120 fps instead of 90, that shows a definite performance advantage in percentages as well. Besides, you quickly shifted from “1-3%” to “5-10%”, talk about being wrong. Not to mention that 5% already matters to a few people, and 10% is about enough for almost anyone to notice.

  39. Exactly.

    Yet you’d be pretty damn amazed by how many still think its confusing (or are trying and failing to be funny).

  40. Post-XP era *nix kernels already beat XP in many areas.

    Vista is also superior to XP in several ways, despite some aching issues. 7 and Vista SP2 will no doubt resolve these issues.

    XP does show its age in several areas. Device interrupts hang the OS (optical drives are infamous for this). 2GiB memory limit for applications without boot.ini hacks. The quirks with its older audio engine. XP is not tuned to deal with SSDs.

    Did I forget to mention that XP is made of failure in terms of security? You always wonder why XP gets pwned by malware by doing a simple, direct connection to the internet? XP was always the biggest victim of countless waves of malware as well?

  41. The benches show just how well Q6600 holds up despite being two years old. I never said anyway where I would recommend anyone to get one these days.

  42. I don’t see how the benchmarks say anything good about the Q6600. The Q9400 and X4 810 match or beat the Q6600 (however little) while costing LESS, using less power, and having more up to date features (did I mention they cost less?).

    The fact that the Q6600 loses by less than 10% doesn’t change the fact that there’s little reason to buy it. Recommending a Q6600 is telling someone to buy a more expensive chip that only matches or is inferior to cheaper and more up to date chips; it is never superior to them in anything (so you’re paying more for nothing).

    Well, that is unless you buy into the retarded ‘more cache == better’ hype. In which case, you shouldn’t be DIYing your own computer to begin with.

  43. Dude, you need to check those numbers again.

    There is nowhere in the bench suite where the Q9400 had a 20%+ delta over the Q6600. The closest is almost 20%. That is only in a few multimedia benches, but for the most part it is like 5-10%.

    Again, it is not an earth-shattering difference. The raw numbers in some of the benches make the difference even less impressive. OMFG, you are saving ~30-40 seconds! You are getting 5 more FPS out of 130 FPS!

    Core i7 stomps them both in these areas.

  44. The Q9400 sometimes has a performance delta of over +20% over the Q6600, depending on the test.

  45. 1-3% difference is hardly ground-breaking. The performance freaks who care about these differences are not going to be interested in these chips. They will be looking towards the i7.

  46. Well, Intel has their Core i7s which are significantly faster than any of the Core 2 parts. The i7 platform also commands a hefty premium. AMD does not have nothing to go against the i7. However, it does not need to have one for a while.

    CPUs under $299 are powerful enough for majority of PC users. It is kinda hard to justify spending more unless you are doing serious work and time is money.

  47. I see Lynnfield as Amd’s big competition rather than Westmere. When you get to the real budget space, success is determined by OEM design wins and the ability to supply rather than performance. And with its anemic 5-year old IGP derivative, Westmere isn’t going to be lighting up budget gaming rigs.

    Lynnfield, with 4 cores and 2 channels of DDR3 memory (not to mention IMC and integrated PCIEx16), is probably going to stomp Phenom II, and at a cheaper pricepoint than Nehalem, is going to put even more of a price squeeze on amd.

  48. I think you are on to something there. I would like to see the uncore running at like 2.5Ghz, I bet it will have a nice improvement on performance.

    The AM3 cpu’s seemed to get a decent boost with it at 2Ghz.

  49. Ah, but using a slower CPU will give you lower GPU framerates than the maximum GPU framerates. And as flip-mode alluded to, you can’t simply interpolate the results.

    It gets even more complicated when we look at varying game resolutions, IQ settings, etc.

  50. You can use the data together. The CPU tests show the maximum framerates the CPU will get you, while the GPU tests show the maximum rates the GPU will get you. So you don’t want to buy one that is significantly faster than the other.

  51. Because it shows how little has changed in the CPU front in the last two years? Because it took AMD that long to deliver a product that barely beats it for a slightly lower price point?

  52. Scott, I respect all the work you do, so please never suspect that I am “attacking” you or being snotty or thinking the world has to revolve around me. None of those are the case. If we were face to face right now, I’d insist on buying all the beer !:-D

    Your CPU benches show the highest frame rates we can expect from a given processor. It seems to me that what they don’t show is this: at what point does the CPU make a difference as game detail scales? Correct me if I am wrong, but I don’t think I can look between your GPU reviews and your CPU reviews and somehow interpolate that information, can I? I find that to be a fairly important question given the price curve for CPUs.

    Regardless, TR reviews are still the best written and, as far as I am concerned, most trustworthy reviews out there.

  53. It does not have to be about shutting people up. Having a dialog is an ok thing. I don’t think that anyone is trying to be a jerk about this, even if it comes across that way over teh intranets.

  54. So, I understand what you guys are saying about the “initial release and support” phase of a product, but that doesn’t really bite to the heart of my question –

    I meant to ask *how* exactly a BIOS might improve *in* that period to improve memory performance. Here are my thoughts:

    So, when it comes to memory, you’ve got a bunch of things to worry about, and I’ll talk about 4 of them: (note, I have no clue how exactly other people do their stuff, this is just my experience)

    – Silicon, host-side: the memory controller obviously isn’t going to change unless you go buy a later stepping processor.
    – Board electricals: this is perhaps the most common thing to improve that allows more headroom and better noise mitigation for higher clock speeds in nascent platforms. I’d place my vote on _this_ being the source of any future improvements.
    – DIMM “quality” – it’s a little known fact that some DIMM vendors adhere better to DDR spec than others. It makes designing memory controllers (and memory reference codes) very tough sometimes, but suffice it to say that a 1066 DIMM from one vendor may run very stable on one controller, but completely fail to train on a less refined piece of host silicon. YMMV, but if there are training problems, swapping vendors has a pretty good shot at resolving them (at least as good a shot as loosening timings).
    – And now the interesting subject: BIOS. So, let me just point out that you really should listen to my interview on the podcast last week. I talk a bit about this, but I’ll restate here. BIOS has no interaction with the OS (for all intents and purposes) after OS load nowadays. The only time when the BIOS can affect memory subsystem performance is when determining training parameters and algorithms. Frankly, most of this data is pulled from SPD (serial presence detect, provided as an SMBUS-like interface on the DIMM itself) and used directly. Oftentimes the BIOS will impose “min/max limits” on these timings, and THAT might change from BIOS to BIOS, but otherwise, it’s very unlikely that any BIOS changes will result in performance changes – after all, what determines memory performance? Timings and clock rate. BIOS sets and forgets these. Now, let me add one more thing: BIOS also has a bunch of population detection algorithms which may also be affecting how DIMMs are enumerated, and, sure, those may be made more robust as time goes on: my whole point is, you shouldn’t expect *BIOS* to make memory performance better in and of itself. You might be setting yourself up for disappointment.

    My 2¢. -Matt

  55. This seems to be the first time in the past two years I’ve looked at actual benchmarks and thought an AMD platform made some sense, and rightfully so the conclusion was positive. Even then, the Intel 775 solutions still aren’t getting blown out of the water. You still should probably get a C2D if you are more worried about games, for instance. It’s still a pick-and-choose across the benchmarks. Heck, I’ve always thought TR tended to be overly apologetic in their conclusions based on the benchmarks they present.

    Still not going to make me trade in the E8500 I already bought a few months back. AMD already missed my window. They need a strong followup now to compete with the i7.

  56. You could probably shut everybody up with a graph showing that they get (basically) identical FPS for each CPU (or a sampling of them) by cranking everything up. Not saying it’s a good idea or good use of resources (because it’s not) but it would shut them up. 😀

  57. You DO get those numbers, in a roundabout way. If the GPU tests have framerates lower than the CPU test, then the CPU is sufficient for that game. Honestly, it’s not that hard to figure out.

  58. This actually makes me really sad, I didn’t know what I expected from AMD, but I thought whatever they released in the AM3 socket would beat anything they had in the AM2 socket. It makes me sad to see that the Phenom II X4 940 is still the leader of the pack for AMD. Obviously the name is higher (owwww ahhh) but I thought maybe they would release Phenom II X4 950 and above for the AM3 socket that would make a good release onto the market touting DDR3 support and whatnot. Now I’m just disappointed.
    Oh well, go AMD with your tri-core, still a success.

  59. Is it just me or does it look like the IMC “un-core” clock might be prohibiting increased memory performance? Possibly a reason for clocking up the un-core for the AM3 Phenom II’s, they only needed 1800MHz for DDR2 1066 but needed a bit more for DDR3 1333.

  60. What it doesn’t show you is whether it is worth it to spend additional $$ on a better processor in order to get better framerates.

    Basically, I want some numbers to back up this type of recommendation:
    So is the difference between the E7400 and the E8400 greater than the difference between the 4850 and the 4870 when you are actually playing at a realistic resolution?

    After thinking about this for a little while, I think I finally understand your reasoning in providing these numbers. You are saying that if the Phenom II X3 720 gets 57.4 fps at low resolution, that is pretty much the fastest it is going to get. Any increase in resolution will decrease that number. It's a best case, if you will. Thus, upgrading to a Core 2 Quad Q9550 will increase that to 62.8 as a best case.

    If that is your reasoning, let me lay out my issues with it. That 5.4 fps gap may seem meaningful at 1024x768, but at 1920x1200, that gap will likely fade to close to zero. A better example would be the HL2: Ep2 numbers with different CPUs. A Q8200 gets 128 fps while the Q9550 gets 173 fps. That's a sizeable increase (35%). One might mistakenly think that it would be worth it to get a Q9550 over a Q8200. However, if you look at the 4870 1GB review, you see that at 1920x1200 (on a Q9650) a GTX 260 gets 92 fps while a 9800 GTX+ gets 69. This is roughly the same 35% increase in performance. What would the difference between the Q8200 and Q9550 be at 1920x1200? We'll never know, because it is never shown.

    The reason this question is a big deal is that one has to look at the bang for the buck of the various parts of a computer you can purchase. If you look at pricing, the 9800 GTX is $145, the GTX 260 is $200, the Q8200 is $170, and the Q9550 is $280. The $55 between the two video cards will give you a 35% boost in frame rates at 1920x1200. The $110 difference between the CPUs will give you an unknown boost in frame rates at 1920x1200. If we could find a figure for what that increase would be, then it would be much easier to determine how much is best to allocate to a CPU upgrade compared to a GPU upgrade.

    Consider a base system with an E5200 ($73) and a 4670 ($70). If you can upgrade to a Q8200 for $100 more or a 9800 GTX for $75 more, which is the better upgrade? In my head, I would think the 9800 GTX would be, since it has a known performance increase in gaming at non-low resolutions versus the 4670. What I really want (and have been asking for, for quite some time now) is numbers to back up that recommendation. If it turns out that the E5200 really is that much slower than a Q8200 at gaming at non-low resolutions, then it might be worth that $100 for the quad-core.

    I apologize for seeming harsh and/or antagonistic. I just feel that I am not alone in my quest for knowledge. Others may have the same opinions as me, and we all want to see TR give us the tools we need to make wise purchases. The CPU-GPU trade-off is one of the biggest when it comes to deciding on the major components of a system, or on incremental upgrades. Having the data to properly weigh these two selections is imperative to getting the best performance for our investment.
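    The bang-for-the-buck comparison above boils down to simple arithmetic: extra frames per second per extra dollar. A minimal sketch, using only the fps and price figures quoted in the comment (the helper name is made up for illustration):

```python
# Hypothetical helper: extra frames per second gained per extra dollar spent.
def fps_per_dollar(base_fps, new_fps, base_price, new_price):
    return (new_fps - base_fps) / (new_price - base_price)

# GPU upgrade quoted above: 9800 GTX+ ($145, 69 fps) -> GTX 260 ($200, 92 fps)
gpu_value = fps_per_dollar(69, 92, 145, 200)
print(round(gpu_value, 2))  # 0.42 extra fps per dollar at 1920x1200
```

    The commenter's point is exactly that the matching CPU figure (Q8200 -> Q9550 at 1920x1200) can't be computed, because that fps delta is never published.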

  61. The AM2+ vs AM3 thing was based upon you saying that the ultimate price difference would come down to DDR2 vs DDR3 pricing toward the end of 2009. As you say AM2+ = DDR2, AM3 = DDR3 so talking about one memory tech automatically means talking about one socket.

  62. Not true, of course. The frame rates achieved in our tests are very relevant. If you can live with those frame rates, the CPU won’t be a problem for you in those games–crank up the graphics settings until your particular choice of GPU runs out of oomph and don’t worry. We’ve also, separately, tested a wide range of graphics solutions at multiple resolutions, and you can refer to those results for more insight.

    Then again, in a couple of our gaming tests, the slower CPUs aren’t all that whippy even with the GPU bottleneck out of the way. Something to consider.

    Since our tests also show you relative performance, the frame rates achieved (which are sometimes quite a bit higher than necessary) can still enlighten, if you accept that current games are the best available proxies for future games that will do more of everything (game engine work, AI, physics, GPU setup, etc.) than current games do now.

    flip-mode, you may wish to see us test differently, but I disagree strongly with the notion that our results are not helpful or relevant just because they aren’t exactly what you want. They do take a little bit of thought to interpret, but not much, and you get a nice return on investment. 😉

  63. Nice review.

    Market cap of 1.43B and no P/E. Yikes. I personally wouldn’t recommend, but I look at more than benchmarks.

  64. Awesome review! Thanks.

    Now I know I can stick with my 9500 and 8GB of DDR2, and pop in a X3 720 for cheap when I am ready.

  65. True, but you have to remember that the resolution for most of those games was 1024×768. Most people who take gaming seriously don’t game at that resolution; if you’ve got a good enough graphics card and set the resolution higher, they’ll all play at about the same FPS.

    Though if the game only takes advantage of 2 cores, with a 720 you’ve got an extra core to run something else in the background without impacting the experience.

  66. Thanks, but the 720 has a third core, so it beats the 8400 in several cases. Also, as has already been pointed out, TR’s gaming tests…

  67. thanks for sharing?

    you know that that’s useless unless you game at those resolutions with that exact configuration, right? Either of those processors is GPU limited at any reasonable resolution.

  68. Gaming comparison: E8400 vs Phenom X3 720

    The E8400 outperformed the Phenom X3 720 in Crysis and HL2.
    In Far Cry 2, the E8400 got only 0.1 FPS less, but it had a higher minimum FPS.

    The Phenom X3 720 only won in UT3.

    The E8400 costs $164.99 at Newegg, while the 720 costs $169.99.

  69. That’s what I hear as well, but again I’m pretty sure the 700 series south bridge chips were touted as being better than the 600 series in this area. So I’m not holding out too much hope but I’ve still got my fingers crossed.

  70. Agreeing. There is practical information and there is theoretical information. The low res game tests provide theoretical information, but there is an absence of practical information. If TR would just run ONE “real life scenario” gaming test in each CPU review, that would at least be something. As it is, TR is basically showing us which CPU excels at a situation we all try to avoid: playing games at sh1tty detail levels.

  71. Yeah but the E8400 starts at almost 3ghz. That 810 went from 2.6 to 3.4, which is pretty awesome and really did provide a huge boost in performance.

  72. Then why test games at all? To get relative performance increases, that’s why. The specific numbers in this test aren’t applicable to anyone (because no one games at those resolutions with this hardware), but the numbers in a test at higher resolutions would still show the relative performance increases and yet would also be useful as specific values for those who have the same class of card. What I’m getting at is that I’d rather trade a test that is good at showing relative performance but useless for specific performance for a test that is still good at showing relative performance while also being able to show specific performance to at least a subset of the readership.

  73. Welcome to Tech Report!

    I know, I know… you joined a couple of weeks ago. But this is the first time I’ve seen you post since then!

  74. Maybe the cold bug is preventing them, and they needed to find something a little warmer than the outdoors.

  75. Losing most of the tests to cheaper and more sensible products doesn’t make it “significant” in my eyes. The Q6600 is like Windows XP, it should just die already.

  76. A two-year old CPU still remains significant. That is pretty impressive if you ask me. Q9400 is not really that much faster and not that much easier on power than a G0 Q6600. Phenom II X4 810 and Q6600 are trading blows.

  77. Speak of the devil!

    They just announced a dual-core with 6MB of L3 cache named Callisto…so you can just look at other Phenom IIs and that’s about what you’ll get. :p

  78. I understand what you’re saying. I just don’t get the AM2+ vs. AM3 thing.

    AM3 = DDR3. The end. COULD AM2+ support be abandoned and newer AM3 CPUs beyond Phenom IIs be made only for AM3 motherboards? Yes, but there’s no reason for that coming. If AM3 CPUs can be AM2+ compatible now, then there’s no reason for that to change, as it comes down to nothing but DDR3 support. AMD is sticking to Phenom IIs for a good while to come.

    Bulldozer is supposed to be a complete redesign…and it’s not planned to be released until 2011. I wouldn’t count on it using AM3.

  79. Finland? Why do they even need liquid nitrogen? Just run the benchmark with the system outdoors! 😀

  80. budget pc’s are the best 😀

    i <3 my e5200. i would have waited for something like this except i was unable to play games on my a64 3500 + 6800gt + 1gb ram 🙁


    btw that is my factual opinion.

  81. Yes, it may be a while before Westmere launches, but it is AMD’s biggest threat.

    HTT should give it the edge in multi-threaded apps.
    Turbo Boost will give it the edge in single-threaded apps.

    If it OCs like Yorkfield, then AMD is in serious trouble. The P55 design is even simpler than P35.

    AM3 is a good deal now, but it looks like Intel is lining up i5 to compete directly with it. AM3 looks to give AMD only a short relief, and if they cannot follow up Phenom II with a successor quickly, then the future of AMD looks very bleak.

  82. Page 12, on the power consumption graph for the Core 2 product (the first graph), what is the dark purple line for? It isn’t in the legend, or maybe I’m just losing it.

  83. Well ok, I kind of take it back. It appears that CPU isn’t as cache-starved as the quad-core Phenom I CPUs: it’s got 2MB of L3 cache for only 2 cores, not 2MB for 4 cores. The biggest change in Phenom II was the L3 cache size. Some quick scans of reviews show a ~10-20% improvement per clock over K8 A64 X2s, which isn’t bad but not incredibly compelling. It doesn’t seem like a great upgrade from an A64 X2, dual-core to dual-core; for significantly more CPU power, an AM3 quad-core would make more sense, imo. Perhaps fans who only buy AMD might build a new dual-core with it. A test would be worthwhile just to see how it does, I suppose; other sites have tested it, of course.

  84. Yeah, I had the same problem on my 1024×768 monitor at work. I kinda gave up trying to read all the configs, I figured I’d get the gist from the graphs. You had to like, scroll to the bottom to scroll sideways and then back to the top to read the setup.

  85. dude was like a walking meme of complaints against TR’s reviews. It was almost funny how little effort he put into it.

  86. Hey danny e. when you get the extra time, can you update your Price:Perf forum thread to include the e8400 and PhII 720 X3 ?

  87. Great review, with that many CPUs tested I’m surprised the article still got out so fast.

    Sorry, AMD, as much as I would love to support you and get that X3 720 or whatever high-end X4 is out by the end of the year. The switch-over cost for a new mobo is too much. I’ll just wait for the Q9550 or Q9650 to be around $200, whenever that is, even if I have to wait until after it’s discontinued. An E8400 still rules the roost (for gaming) when it’s at 3.9 GHz. (With the exception of UT3.)

    I rest my conscience on the fact that I bought an HD 4870.

    Oh, and what’s the abbreviation for these guys?

    P II or P2 or what?

  88. The Q6600 gets beaten in all benchmarks by the Q9400 and in most benchmarks by the X4 810. It also costs more than either, uses more power, and has fewer features (better VM switching, SSE4, etc.).

  89. Dear Tech Report,
    The “Test Configurations” chart on page 2 is a real killer. It is both too wide *and* too tall for the page (which is really squeezed by ads, links and price checks; can’t they make room for a big table?).

    To read this table far enough to the right (offscreen), you have to side-scroll it. But the row headers disappear when you scroll it sideways.

    In order to read the specs at the bottom of the chart, you have to scroll your browser down so the column headers disappear, too. You can always scroll up and down to read them, but this is a pain.

    Thank you.

  90. Before anyone gives me any crap about it being Wikipedia, the future-processor lists there have always been completely accurate, as far as I've seen. The point here is that Deneb, Heka, Propus, Rana, and Regor have been well known for quite a while. I've read stuff all over the place about them for months and months.

    This is an overall list, but one thing is missing: there's no name for dual-cores with L3. It may very well never exist, unless they end up with enough REALLY screwed-up CPUs that started life as quad-cores, as eventually did happen with the original Phenoms. If they manufacture discrete dual-cores, that could also give them the option of single-cores, again being most appropriate for laptops, so considering the differences, it makes plenty of sense that they'd separately manufacture a different dual-core.

  91. Really what OneArmedScissor said. The AM3 platform needs a bit more tweaking and bug removal. Updatable stuff I hope.

    And kudos for being a BIOS Engineer! I looked into that…not my thing. lol
    Massive Kudos though! I tip my hat!!!

  92. Tri-cores were supposed to be OEM only, and yet, here we are, two revisions later, and now they’re intentionally pushing them in retail. Things can change pretty quickly, and the reasons for it to happen exist, as UberGerbil noted.

    I imagine the biggest reason at this exact moment is that they don’t want to undercut themselves. All of the retail CPUs are presently unique, and they have the entire price range covered from $125 to $230, in $20-30 steps. If they give us what is effectively just a cheaper 810 right away, they’ll just be losing tons of money by selling them.

  93. Eh? In the event that they make dual-cores similar to the full Phenom IIs, even a 920 would be a better estimate than an old Phenom. It’s not as if most things are really using more than 2 cores.

    However, the dual-cores may be Athlon-branded, non-L3 cache only, which also have a larger L2 cache, making them completely different from anything else. It makes sense that they would specifically manufacture those separately in that way, as they will probably use those for a mobile Phenom line, as well.

  94. Not that I can answer for him, but I imagine he’s suggesting that the AM2+ boards have all the kinks worked out, while AM3 boards/CPUs are brand spankin’ new and still have their teething phase to go through.

    This has already been made apparent, as they have issues running more than one stick of RAM per channel at DDR2 1,066 MHz and DDR3 1,333 MHz, which they have said is a software issue that can be resolved.

    Will it mean anything in the end? No telling. Certainly not an invalid point, though.

  95. Phenom II and Phenom aren’t that different architecturally, and it could give a good estimate of the Phenom II dual core derivatives’ performance.

  96. So I’m sure HP will be happy to hear you’ll be buying a complete system from them. 😉

    AMD may be worried that they will only be able to bin enough to meet OEM demand and so don’t plan on selling any through to the retail channel. Alternatively, the OEMs may have demanded an exclusive and AMD has so little market clout (and is so desperate for OEM business) that they acquiesced. Though note that Intel has produced “specialty” CPUs that are only found in Apple or Dell machines (at least temporarily).

    Either way, those chips may show up in the retail channel eventually (given the economy, OEM orders will be lower than AMD expected, and any exclusivity likely sunsets at some point).

    The real mistake AMD made here was even admitting those chips existed. They should just be selling them to the OEMs, not talking about them. Word gets out eventually, of course, but there wouldn’t be complaints like this from Day 1.

  97. So I am going to idiotically ask a question of you as a BIOS engineer for Intel:

    How do you figure a BIOS release will improve memory performance?

    …I’m not saying you’re incorrect, I just wonder why you say that. I’ll pontificate after/if you respond.

  98. Why in god’s name is the Phenom II X4 910 limited to “large PC makers” only? This looks like it would be the exact combination of cache size, core count, and price that I’d be looking for in a CPU!

  99. Ok, nevermind, I still don't think you understood my initial reply. Lynnfield seems to be a bit of a moving target for launch date. The latest I can find now is 'Q3'; the launch really depends upon the correct socket mobos and chipsets. The only relevance of Lynnfield is to place the timeframe.

    So AM2+ or AM3 comes down to DDR2 vs. DDR3 price. Supposedly, prices will be in rough parity near the end of the year, but who knows. The reason to talk about sockets AND memory types is that they're tied together: AM2+ = DDR2, AM3 = DDR3.

    I do pity da fool who buys an AM2+ toward the end of this year, unless they wouldn't want to upgrade the CPU at all before getting a new mobo. Unless they get a pretty budget CPU to start, at least AM3 would allow that going forward; AM2+ wouldn't. Maybe the same people who thought s939 was a good buy 6-9 months after AM2 was out will get an AM2+ later this year. :shrug: I recall a lot of those folks crying about their socket being abandoned, though…derdeder.

  100. E0 stepping 8400s supposedly hit 3.8+ with ease and low to mid 4’s with a little work.
    3.5 on the p2 is disappointing, but it is early.
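    For what it's worth, those overclocks work out to similar percentage headroom. A quick sketch (stock clocks are the published ones, 3.0GHz for the E8400 and 2.8GHz for the Phenom II X3 720; the helper name is made up):

```python
def oc_headroom_pct(stock_ghz, oc_ghz):
    """Percentage clock-speed gain from an overclock."""
    return 100 * (oc_ghz - stock_ghz) / stock_ghz

print(round(oc_headroom_pct(3.0, 3.8), 1))  # E8400: 3.0 -> 3.8 GHz, 26.7%
print(round(oc_headroom_pct(2.8, 3.5), 1))  # X3 720: 2.8 -> 3.5 GHz, 25.0%
```

    So a 3.5GHz Phenom II is roughly the same relative headroom as a 3.8GHz E8400, even if the absolute clock is lower.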

  101. Nah I coulda sworn there was a thread where you and Krogoth (yet again) got into a fight with it ending in Krogoth drawing a picture of a Metroid in ASCII…

  102. I’m not retracting anything. I’ve never heard that Lynnfield CPUs are coming out “late 2009.” Q2 to Q3 was my understanding, which would make sense. Why would it take them an ENTIRE year to ramp up production enough with a manufacturing process they’ve already been using for a long time? Westmere is what’s coming at the end of this year. We will just have to wait and see how it all pans out, though.

    Regardless, we know what’s coming out ahead of time, and that affects our decisions. I’m not going to buy what makes sense in my price range right this second, if I know something new is going to kill it within a few months.

    But it’s potentially the opposite here. If you buy a Phenom II with DDR2 right now, you’re probably getting a cheaper computer than you’re going to be able to get at the end of 2009. Nothing is really changing, except that DDR2 will undoubtedly cost more, and DDR3 won’t match DDR2 prices now.

    And what’s the big deal about socket AM2+? AM3 CPUs are backwards compatible with it.

    Motherboard manufacturers intend to continue selling primarily AM2+ boards, so that’s not likely to change, as there’s not going to be a replacement for the 790GX until the end of the year. Far and away most boards out there will be AM2+.

  103. Sorry, not suggesting you don’t understand. I was just trying to point out how drastically overboard it has gone. And yet, even with all that bandwidth, 2,000 MHz RAM doesn’t help anything.

    I think it’s the same case with graphics cards. Dumping ever more processors on them seemed to stop mattering, so they’ve turned to advertising significantly increased bandwidth. But does a 4870 really need double the memory bandwidth of a 4850, given its small clock-speed boost? Yeah right, lol. I’m not singling out AMD, though. Nvidia and their 448-512 bit, over $9,000 to manufacture, cards can piss off as well. In both cases, you have to pay a significant price premium, and for what advantage that we’re actually seeing in reality?

    Too many gimmicks as of late, across the board.


  104. Actually, I was talking about Lynnfield when I brought up the price difference with DDR3 and DDR2. The point was that it may be cheaper than Core i7, which would give reason to believe it will be a better buy than a Phenom II, but the only price difference is X58 boards, and that’s just about been thrown out the window, as they’re getting down into the $180 range and may go lower.

  105. Btw, applause to Damage. No wonder this review took so long; just look at how many different CPUs/Socket-types are represented here!!

  106. ………………..,-~*’`¯lllllll`*~,

  107. Why, it’s a Phenom I isn’t it? The only people who should be vaguely interested in it are those looking to upgrade for the sake of it from an A64 x2 which quite frankly would be a waste of money. Maybe someone who has an AM2 board with a single core would find it useful but that’s an odd niche. Otherwise it’s irrelevant at this point with Phenom II out.

  108. Chill and understand what I said in relation to your post, which concludes that AMD will be a good value proposition. You’re talking the end of 2009 or early 2010 timeframe. The DDR3 tax will be much lower by then, and AM2+ will be on the way out, if not totally gone. Therefore, DDR2 vs. DDR3 won’t matter price-wise, so choosing AM2+ DDR2 rather than AM3 DDR3…

  109. It looks like the next XP-M 2500+ to me. Especially if the chip proves itself to be a cheap but easily overclockable component.

  110. No, the poster complained that DDR3 was inadequate and called for faster modules, while the problem lies with the CPUs; rather, they do not *need* the speed yet.

    AMD is just making their platform ready for DDR3’s eventual market penetration like they did with DDR2 and Socket AM2.

    Besides, nice ad hominem at the end. It really strengthens your position.

  111. That is what I am getting at. Like the whole SLI/Crossfire x16 <=> x16 slots: just a single x16 and an x8 do the job.
    All this bandwidth stuff is silly to me. It is essential for chip communication, yes, but when my 1066MHz FSB keeps up with the new 1333MHz and QPI, meh. So what about these little differences; I still get things done within ample time, and I have yet to see a game say:
    Minimum RAM specs: 1GB DDR2-1066.
    Recommended RAM specs: 2GB DDR3-1600.

    lol, 10.7GB/s or even 200,000GB/s, I personally don’t lose sleep over this for a rig. (Actually, 200,000GB/s… would that even be stable? That is some seriously pricey RAM!!!)

  112. "(even if it is less than the staggering 102.4 GB/s possible with a Core i7-965 Extreme and three channels of DDR3 at 1600MHz.)"

    Last I checked, 1600 × 8 × 3 was 38.4GB/s, not 102.4GB/s. i7-965s do have prodigious bandwidth, but they'd need 8(!) channels of DDR3-1600 to match the stated figure.
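    The correction checks out: peak DDR bandwidth is just transfer rate × bus width × channel count. A minimal sketch (64-bit, i.e. 8-byte, channels; decimal GB):

```python
def peak_bandwidth_gbs(mt_per_s, channels, bus_bytes=8):
    """Theoretical peak bandwidth in GB/s for 64-bit (8-byte) memory channels."""
    return mt_per_s * bus_bytes * channels / 1000

print(peak_bandwidth_gbs(1600, 3))  # triple-channel DDR3-1600: 38.4
print(peak_bandwidth_gbs(1600, 8))  # eight channels would be needed for 102.4
```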

  113. The point isn’t to find the “exact results” of what your computer would achieve. The point is to see how DDR3 affects a game being run in a way that represents reality, where the GPU is undoubtedly the limit.

    There can still be variations, as seen in CPU comparisons, but there’s simply not one with DDR2 vs. DDR3.

  114. All that bandwidth means little to nothing with desktop applications of pretty much any sort.

    Ever seen single, dual, and triple channel comparisons? Even with single channel, the difference is negligible to non-existent in almost any case.

    It’s exactly why Intel stuck with the FSB and Lynnfields are “only” dual channel. They’d undoubtedly still be using the FSB if it hadn’t screwed them over in certain parts of the server market.

  115. LOL you used different words to say the exact same thing and simultaneously tell him how wrong he was. Moran.

  116. what he sees in real life won’t matter unless he has the exact same GPU anyway, so I don’t see how those numbers would help anyone.

  117. "…the total bandwidth available via Socket AM3 is roughly 37.3 GB/s, considerably more than the peak data rate of 10.7 GB/s available via a Core 2 processor's front-side bus (even if it is less than the staggering 102.4 GB/s possible with a Core i7-965 Extreme and three channels of DDR3 at 1600MHz.)"

    And here is where price comes in. $999.99 vs. about $249? Mmmm. Yeah. Plus, when the BIOSes get refined to work better with the AM3/DDR3 setup, performance gains will be seen there. I'm thinking of an AM3 rig in July. A little 4th of July present for myself! And at AMD's prices, I'll have enough money left over to buy some fireworks too!

  118. Translation:

    This is an absolutely horrible test, in my opinion, and is one of the worst I’ve ever seen from a reputable site. I believe the games testing is not applicable to many users because of the extremely low resolution at which the games are tested. The motherboard is not very good, as it is made by Asus. Asus has been known to have very poor stability, among other issues, so results will be skewed, specifically within the overclocking section and others.

  119. Why? Nothing is really changing from what we’ve already seen for quite a while. There will be 6 core Core i7s over a year from now, which will undoubtedly cost a fortune and still make up only 1-2% of the CPUs Intel sells. Whoopdeefreakinda.

    It’s not like Lynnfield is going to be radically cheaper. It’s stuck with DDR3, and look how high Intel kept Core 2 prices for so long. It will be priced the same. You can pretty much already put together a Core i7 computer for about the same price as a higher-end Core 2 one, as there are fewer insanely priced X58s now, with more affordable ones coming. P55 boards may be a bit cheaper, but they’re still going to be two x8 PCIe 2.0 slots, so considering that the real draw of Nehalems is how much better they work with multiple GPUs, that’s not terribly attractive.

  120. Very good work guys.

    Keep it up as usual.

    I am still surprised at how well the venerable Q6600 at stock manages to hold the line. Never mind the fact that a mildly overclocked Q6600 can outrun the Phenoms and keep up with its newer Yorkfield counterparts.

  121. No, the problem is that Phenom doesn’t need any more bandwidth or tighter latency than what the higher-end DDR2 DIMMs can provide. The Core 2 family has the same darn problem. Core i7 only benefits from the greater bandwidth that DDR3 provides in applications where it matters.

    The primary advantage of DDR3 has always been the potential to have cheap higher density modules. 4GiB modules for <$99.

  122. Uh…yes, it is, for a good while.

    Even when the Westmere dual-cores do come out…they’re dual-cores, and intended for the $50-150 range, which will be dirt cheap computers where value and power efficiency is a much bigger factor than outright performance, hence the on-die GPU. AMD will have Phenom II-based dual-cores by then, including the Athlon branded ones with no L3 cache, which will undoubtedly be extremely cheap. With their chipsets, they probably don’t have anything to worry about, aside from the fact that Intel’s profit margins will be higher.

    The immediate competitor to Phenom II would be Lynnfield, but if those are clocked lower than i7s, I don’t think AMD is going to have a huge problem there. Phenom IIs will go up in speed over time, but Intel has a habit of keeping everything below the one or two “extreme edition” parts. Unless you’re using multiple graphics cards (in which case, you’re probably not paying much attention to price/bang for your buck), Phenom IIs will be just as good. People will still gravitate to Phenom IIs because of DDR2 prices.

  123. Now for the test of how well they will perform against a dual-core 32nm i5 with HTT. It’s good that AMD is finally catching up, but 45nm Penryn isn’t going to be AMD’s main competition.

  124. The Phenom II 940 Black Edition with DDR3 and a faster uncore clock will give the Q9550 and Q9650 a good run for their money and hopefully reach IPC parity in several apps. That’s really good news for AMD.

  125. Every time I see an AMD review, I want to go and yell at them for not getting their south bridges in order. Honestly, I’d love to see you guys compare, say, an nForce chipset with AHCI enabled to see if there’s a performance improvement in productivity tasks (maybe during one of those slow tech news weeks).

    All in all, another fine review to wrap up, at least for my reading, the AM3 processors’ introduction.

  126. I can’t tell if you’re just an idiot or you’re trolling, but the games are run at low resolutions in order to reduce the influence of the GPU on the overall framerate, as this is a test of CPUs.

  127. Absolutely HORIBLE test i ever seen. Low resolution in games WTF who play in this res in this days (you maybe). And that mobo is CRAAAAPP crp-[

  128. Wow, the 720 gets a Damaging endorsement. Well then it will get a serious look from me when the time comes. Man, I hope the time comes soon.

  129. I assume the simulated CPUs have no issue with various clock domains being incorrect like the simulated i7 940 (or was it 920) did?

    It will also be interesting to see how the X3 710 is incorporated into recommended builds. Is it better to recommend a cheaper AM2+ DDR2 platform that’s on the way out, or a more expensive AM3 DDR3 one that is new?

  130. I don’t think you could possibly show them in a better light. They’re slightly beating competitively-priced Intel processors. If you want to spend $180-200 on a CPU for a PC, you would not be going wrong to pick AMD.

    Even the X3 720 is a pretty nice processor for $145.

    The only knock on AMD is that they’re not able to build the fastest CPUs around, and therefore they’re at Intel’s mercy when it comes to pricing. The chips themselves (according to the AnandTech review) are still quite large and would therefore have lower yields, so dropping prices might not be feasible. If Intel drops their prices or otherwise introduces a full-sized quad (full-sized cache, anyway) in this price range, everything I wrote above becomes null.

  131. He’s well acquainted with the arguments. He’s just on the side of thinking that he wants to see results that will predict what he’ll see in real life, even if they’re all exactly the same because even the slowest CPU is spending a chunk of its time idling while waiting for the GPU.

  132. The justification is in the article. If you doubt TR’s methods – what is it you know that the rest of us don’t?

  133. *grumble*
    </obligatory grumble about the uselessness of ‘testing’ gaming at low resolutions>

    Good review, though. The X3 certainly looks more compelling than I would have previously thought.

  134. Truth of the matter though, is that the AM3 socket doesn’t really have a processor potent enough to properly utilize DDR3. Also, they need support for faster DDR3 memory.

    It appears that 1333 is really the break-even point for DDR3; beyond that, the frequencies start to pull away, but you really don’t see much of a benefit until then.

  135. Finally. Now all the “TR has been bought by Intel because there’s no AM3 review!” whiners can STFU.