AMD’s Athlon 64 X2 3800+ processor

WHEN WE FIRST reviewed the Athlon 64 X2 processor a few months back, we said that it was an outstanding CPU, but we wished out loud for AMD to start selling a 2GHz version of the X2 at a lower price. After all, we argued, Intel’s Pentium D 820 is a killer deal at just under $250, while the least expensive Athlon 64 X2 costs over twice that. Sure, the X2’s performance might well justify the price premium, but we’ll take more for our money when we can get it.

Today, our wish is fulfilled in the form of the Athlon 64 X2 3800+, a dual-core processor running at 2GHz with 512K of L2 cache per core. AMD has priced this baby at $354—significantly less than any of its other dual-core products. It doesn’t take a Ph.D. in computer engineering to figure out that the X2 3800+ ought to offer a very potent combo of price and performance.

As is our custom, we compared the X2 3800+ against over a dozen single- and dual-core competitors to see just how it fits into the big picture. Then we overclocked the living daylights out of the thing, and everything went soft and fuzzy. Our heads are still spinning. Keep reading to see why.

Code name: Manchester
Our first exposure to the Athlon 64 X2 came in the form of the 4800+ model. That chip is code-named “Toledo,” and it packs 1MB of L2 cache per processor core, as do the dual-core Opterons. Toledo-core chips sport a transistor count of about 230 million, all crammed into a die size of 199 mm2.

AMD also makes several models of Athlon 64 X2 that have only 512K of L2 cache. In the past, CPUs with smaller caches have sometimes been based on the exact same chip as the ones with more cache, but they’d have half of the L2 cache disabled for one reason or another. That’s not the case with the X2 3800+. AMD says this “Manchester”-core part has about 154 million transistors and a die size of 147 mm2, so it’s clearly a different chip. AMD rates the max thermal power needed to cool the X2 3800+ at 89W—well below the 110W rating of the 4800+—and they’ve revised down the max thermal power of the X2 4200+ to 89W, as well. The Manchester core is obviously a smaller, cooler, and cheaper-to-manufacture chip than Toledo.
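Those die sizes translate directly into manufacturing economics. As a rough sketch (assuming a 200mm wafer, which AMD was using at the time, and ignoring defect yield entirely), the classic gross-die estimate shows why Manchester is the cheaper chip to build:

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic gross-die estimate: wafer area over die area, minus an
    edge-loss term for the partial dies lost around the wafer's rim."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# Manchester (147 mm^2) vs. Toledo (199 mm^2) on a 200 mm wafer
manchester = gross_dies_per_wafer(200, 147)
toledo = gross_dies_per_wafer(200, 199)
print(manchester, toledo)  # 177 126
```

Roughly 40% more candidate dies per wafer, before yield differences even enter the picture.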

Cosmetically, though, you’d never know it, because the X2 3800+ looks like pretty much any other Socket 939 processor. The X2 3800+ is intended to work with AMD’s existing Socket 939 infrastructure, and it may well be an upgrade option for current owners of Athlon 64 systems. You’ll want to check with your motherboard maker to see whether your board will support an X2 before making the leap, though. Some boards need only a BIOS update, but we’re finding that some others just can’t handle X2 processors. Most newer motherboards should be fine.

Before we dive into the benchmark numbers, let’s have a quick look at where the X2 3800+ fits into the bigger picture. With its introduction, the Athlon 64 X2 family now looks like so:

CPU Clock speed L2 cache size Price
Athlon 64 X2 3800+ 2.0GHz 512KB $354
Athlon 64 X2 4200+ 2.2GHz 512KB $482
Athlon 64 X2 4400+ 2.2GHz 1024KB $537
Athlon 64 X2 4600+ 2.4GHz 512KB $704
Athlon 64 X2 4800+ 2.4GHz 1024KB $902

At $354, the X2 3800+ isn’t exactly cheap, but it does extend the X2 line into more affordable territory. You’ve probably noticed the apparent hole in the X2 models at 4000+. Logic would dictate that the X2 4000+ would run at 2GHz and have 1MB of L2 cache. So where is it? I asked AMD this very question, and they told me that they won’t comment on unannounced products—and besides there aren’t any plans for an X2 4000+ right now.

I’m not too broken up about that, because I’m not convinced the additional L2 cache is worth paying more money to get. We’ll address that issue in more detail when we look at the benchmark results.

Now for something really confusing. How does the Athlon 64 X2 3800+ stack up against the competition? Figuring out such things has become horribly puzzling as Model Number Mania has taken hold of the CPU market. Here’s my attempt at lining up the various AMD and Intel CPU models according to rough price parity:

Pentium 4 5xx | Pentium 4 6xx | Pentium D / XE | Athlon 64 / FX | Athlon 64 X2
Pentium 4 541 $218 | Pentium 4 630 $224 | | Athlon 64 3200+ $194 |
Pentium 4 551 $278 | Pentium 4 640 $237 | Pentium D 820 $241 | Athlon 64 3500+ $223 |
 | | Pentium D 830 $316 | Athlon 64 3800+ $329 | Athlon 64 X2 3800+ $354
Pentium 4 561 $417 | Pentium 4 650 $401 | | Athlon 64 4000+ $375 |
 | | | | Athlon 64 X2 4200+ $482
 | | Pentium D 840 $530 | | Athlon 64 X2 4400+ $537
Pentium 4 571 $637 | Pentium 4 660 $605 | | | Athlon 64 X2 4600+ $704
 | Pentium 4 670 $851 | | Athlon 64 FX-55 $827 |
Pentium 4 XE 3.73GHz $999 | | Pentium XE 840 $999 | Athlon 64 FX-57 $1031 | Athlon 64 X2 4800+ $902

From this handy table, we learn that the X2 3800+’s dual-core competition from Intel is probably the Pentium D 830. You can have a single-core Pentium 4 551 for about 75 bucks less than the price of the X2 3800+, or you could pick up AMD’s single-core Athlon 64 3800+ in the same basic price range as the “equivalent” X2. The Athlon 64 3800+ runs at 2.4GHz and has a 512K L2 cache, so you lose 400MHz and pick up a whole second CPU core by going for the X2 3800+ instead. I’d say that’s an easy tradeoff to make, but the benchmark results will tell us more about the shape of that choice.

Why didn’t you…?
I wish I could have included results here for a number of interesting CPU models, including the X2 3800+’s most direct competitor, the Pentium D 830. The reason I didn’t include them is simple: lousy multiplier control. I couldn’t get the motherboards in my test rigs to clock down some of these CPUs to lower speeds in order to simulate lower-speed-grade processors. That’s why you won’t see results here for the Pentium D 830, and that’s mostly why there are only three of the five Athlon X2 models represented. Sorry about that. We will try again next time around with different motherboards.

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

System 1:
Processor: Pentium D 820 2.8GHz
System bus: 800MHz (200MHz quad-pumped)
Motherboard: Intel D945GTP
BIOS revision: NT94510J.86A.0897
North bridge: 945G MCH
South bridge: ICH7R
Chipset drivers: INF Update
Memory: 1GB (2 DIMMs) Corsair XMS2 5400UL DDR2 SDRAM at 533MHz, timings 3-2-2-8 (CL-tRCD-tRP-tRAS)
Audio: Integrated ICH7R/STAC9221D5 with SigmaTel 5.10.4456.0 drivers

System 2:
Processors: Pentium 4 660 3.6GHz, Pentium D 840 3.2GHz, Pentium Extreme Edition 840 3.2GHz
System bus: 800MHz (200MHz quad-pumped)
Motherboard: Intel D955XBK
BIOS revision: BK95510J.86A.1152
North bridge: 955X MCH
South bridge: ICH7R
Chipset drivers: INF Update
Memory: 1GB (2 DIMMs) Corsair XMS2 5400UL DDR2 SDRAM at 533MHz, timings 3-2-2-8
Audio: Integrated ICH7R/STAC9221D5 with SigmaTel 5.10.4456.0 drivers

System 3:
Processors: Pentium 4 Extreme Edition 3.73GHz, Pentium 4 670 3.8GHz
System bus: 1066MHz (266MHz quad-pumped)
Motherboard: Intel D955XBK
BIOS revision: BK95510J.86A.1234
North bridge: 955X MCH
South bridge: ICH7R
Chipset drivers: INF Update
Memory: 1GB (2 DIMMs) Corsair XMS2 5400UL DDR2 SDRAM at 667MHz, timings 4-2-2-8
Audio: Integrated ICH7R/STAC9221D5 with SigmaTel 5.10.4456.0 drivers

System 4:
Processors: Athlon 64 3500+ 2.2GHz (Venice), Athlon 64 3800+ 2.4GHz (Venice), Athlon 64 4000+ 2.4GHz (130nm), Athlon 64 FX-55 2.6GHz (130nm), Athlon 64 FX-57 2.8GHz, Athlon 64 X2 3800+ 2.0GHz, Athlon 64 X2 4200+ 2.2GHz, Athlon 64 X2 4800+ 2.4GHz
System bus: 1GHz HyperTransport
Motherboard: Asus A8N-SLI Deluxe
BIOS revision: MCT2/dualcore
Chipset: nForce4 SLI
Chipset drivers: SMBus driver 4.45, IDE driver 4.75
Memory: 1GB (2 DIMMs) Corsair XMS Pro 3200XL DDR SDRAM at 400MHz, timings 2-2-2-5
Audio: Integrated nForce4/ALC850 with Realtek drivers

Common to all systems:
Hard drive: Maxtor DiamondMax 10 250GB SATA 150
Graphics: GeForce 6800 Ultra 256MB PCI-E with ForceWare 71.84 drivers
OS: Windows XP Professional x64 Edition

All tests on the Pentium systems were run with Hyper-Threading enabled, except where otherwise noted.

We have included results for the Pentium D 840 in the following pages. We obtained these results by disabling Hyper-Threading on our Extreme Edition 840. Since the Pentium D 840 is just an Extreme Edition 840 sans HT, the numbers should be valid. Similarly, the Athlon 64 3500+ scores you’ll see in the following pages were obtained by underclocking an Athlon 64 3800+ (with the new “Venice” core) to 2.2GHz. The performance should be identical to a “real” 3500+.

Thanks to Corsair for providing us with memory for our testing. Their products and support are both far and away superior to generic, no-name memory.

Also, all of our test systems were powered by OCZ PowerStream power supply units. The PowerStream was one of our Editor’s Choice winners in our latest PSU round-up.

The test systems’ Windows desktops were set at 1152×864 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Memory performance
Up first are some simple memory performance tests. These results won’t tell us much about real-world performance by themselves, but the bandwidth and latency they measure do influence it.

The X2 3800+ is the only Athlon 64 among the bunch, dual-core or single, that runs at 2GHz. As a result, its bandwidth scores are a little bit lower than the rest, but they’re still quite good. With dual channels of DDR400 memory and a built-in memory controller, the X2 3800+ has a very fast memory subsystem.

Linpack shows us, among other things, the basic performance of the cache hierarchies on these CPUs. A second CPU core is no help in this single-threaded test, and the X2 3800+ is again the lowest-clocked Athlon 64 in the group. You can see, also, how its performance drops off once Linpack starts crunching on matrix sizes above about 576K. That’s where we hit the limits of either core’s 64K L1 data cache combined with its 512K L2 cache. The Athlon 64 processors with 1MB of L2 cache perform better with larger data matrices, as does the Pentium D, which packs a 1MB L2 cache per core.
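That 576K figure is just the sum of one core’s 64K L1 data cache and its 512K L2. The effect Linpack exposes can be sketched with a toy working-set walk: time how long it takes to touch every cache line in buffers of growing size. Note this is a methodology sketch only; in Python, interpreter overhead will swamp the cache-capacity step that a low-level language would show cleanly:

```python
import time

def time_per_pass(n_bytes: int, passes: int = 5) -> float:
    """Walk a buffer of n_bytes repeatedly and return seconds per pass.
    Once the working set outgrows L1 + L2 (64K + 512K = 576K on the
    X2 3800+), every pass must fetch lines from main memory instead."""
    data = bytearray(n_bytes)
    start = time.perf_counter()
    for _ in range(passes):
        # touch one byte per 64-byte cache line
        for i in range(0, n_bytes, 64):
            data[i] = (data[i] + 1) & 0xFF
    return (time.perf_counter() - start) / passes

for kb in (256, 512, 576, 1024, 2048):
    print(f"{kb:5d} KB: {time_per_pass(kb * 1024):.6f} s/pass")
```

On hardware like these chips, the per-pass cost would step up noticeably between the 576KB and 1024KB rows for the 512K-L2 parts, and one size class later for the 1MB-L2 parts.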

When it comes time to grab data from main memory, the X2 3800+ is very quick. That’s the big advantage of its integrated memory controller.

Gaming performance
Up next are some gaming tests. Notice that we’ve included above each result a little graph generated by the Windows Task Manager as the benchmark ran on a dual Opteron 275 system (with four CPU cores in total). This should give you some indication of the amount of threading in the application. With single-threaded apps like the games below, the task will sometimes oscillate back and forth between one CPU and the next, but total utilization generally won’t go above 50% on a dual-core system or 25% on a quad-core (or quad-front-end, in the case of the XE 840 with Hyper-Threading) system.
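That utilization ceiling is simple arithmetic: each busy thread can saturate at most one logical CPU, so Task Manager’s total tops out at the busy-thread count divided by the logical CPU count. A quick sketch:

```python
def max_utilization_pct(busy_threads: int, logical_cpus: int) -> float:
    """Peak total utilization a workload can show in Task Manager:
    each compute-bound thread saturates at most one logical CPU."""
    return 100.0 * min(busy_threads, logical_cpus) / logical_cpus

print(max_utilization_pct(1, 2))  # single-threaded app, dual-core: 50.0
print(max_utilization_pct(1, 4))  # single-threaded app, quad-core: 25.0
print(max_utilization_pct(4, 4))  # fully threaded app: 100.0
```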

Doom 3
We tested performance by playing back a custom-recorded demo that should be fairly representative of most of the single-player gameplay in Doom 3.

Far Cry
Our Far Cry demo takes place on the Pier level, in one of those massive, open outdoor areas so common in this game. Vegetation is dense, and view distances can be very long.

Unreal Tournament 2004
Our UT2004 demo shows yours truly putting the smack down on some bots in an Onslaught game.

The gaming performance of the X2 3800+ isn’t stellar compared to higher-clocked Athlon 64 models, but it’s still better than any Pentium D or Pentium 4 in the bunch, even the Extreme Edition 3.73GHz. The X2 3800+ surprisingly manages to best the Athlon 64 3500+ in a couple of tests, even though the X2 3800+ runs at a lower clock speed.


3DMark’s main test is obviously graphics-bound, since the CPU doesn’t seem to matter much to the overall score. The CPU test, however, uses multiple software threads to handle vertex processing, and the dual-core processors get to strut their stuff. The X2 3800+ finishes just behind the Pentium Extreme Edition 840 and just ahead of AMD’s single-core monster, the Athlon 64 FX-57.

POV-Ray rendering
POV-Ray just recently made the move to 64-bit binaries, and thanks to the nifty SMPOV distributed rendering utility, we’ve been able to make it multithreaded, as well. SMPOV spins off any number of instances of the POV-Ray renderer, and it can split up the scene in several different ways. For this scene, the best choice was to divide the screen up horizontally between the threads, which provides a fairly even workload.

The X2 3800+ rips through this POV-Ray scene faster than a Pentium Extreme Edition 840, and it trounces any single-core would-be competition. When tasks are easily parallelizable like rendering, dual-core processors reign supreme.

Cinema 4D rendering
Cinema 4D’s rendering engine does a very nice job of distributing the load across multiple processors, as the Task Manager graph shows.

Here again the X2 3800+ puts in a strong showing. It’s a little quicker than the Pentium D 840, a processor that costs quite a bit more than the X2 3800+. Once more, the single-core CPUs are left in the dust.

The tables turn in Cinebench’s single-threaded shading tests. The X2 3800+ runs near the bottom of the pack here.

LAME audio encoding
LAME MT is, as you might have guessed, a multithreaded version of the LAME MP3 encoder. LAME MT was created as a demonstration of the benefits of multithreading specifically on a Hyper-Threaded CPU like the Pentium 4. You can even download a paper (in Word format) describing the programming effort.

Rather than run multiple parallel threads, LAME MT runs the MP3 encoder’s psycho-acoustic analysis function on a separate thread from the rest of the encoder using simple linear pipelining. That is, the psycho-acoustic analysis happens one frame ahead of everything else, and its results are buffered for later use by the second thread. The author notes, “In general, this approach is highly recommended, for it is exponentially harder to debug a parallel application than a linear one.”
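The structure the author describes can be sketched as a two-stage linear pipeline: the analysis stage runs one frame ahead and drops its results into a one-slot buffer, which the encode stage drains. The `analyze` and `encode` functions below are hypothetical stand-ins, not LAME’s actual API:

```python
import queue
import threading

def analyze(frame):
    # stand-in for LAME's psycho-acoustic analysis of one frame
    return frame * 2

def encode(frame, analysis):
    # stand-in for the actual MP3 encoding of one frame
    return (frame, analysis)

def pipelined_encode(frames):
    """Two-stage linear pipeline: stage 1 analyzes frame n while
    stage 2 encodes frame n-1, with a bounded queue as the buffer."""
    buffered = queue.Queue(maxsize=1)   # analysis runs one frame ahead
    results = []

    def analyzer():
        for f in frames:
            buffered.put((f, analyze(f)))
        buffered.put(None)              # end-of-stream sentinel

    t = threading.Thread(target=analyzer)
    t.start()
    while (item := buffered.get()) is not None:
        frame, analysis = item
        results.append(encode(frame, analysis))
    t.join()
    return results

print(pipelined_encode([1, 2, 3]))  # [(1, 2), (2, 4), (3, 6)]
```

Because each thread’s logic stays sequential, the author’s point about debuggability holds: either stage can be reasoned about on its own, unlike a fully parallel encoder.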

We have results for two different 64-bit versions of LAME MT from different compilers, one from Microsoft and one from Intel, doing two different types of encoding, variable bit rate and constant bit rate. We are encoding a massive 10-minute, 6-second 101MB WAV file here, as we have done in our previous CPU reviews.

The mighty Athlon 64 FX-57 struggles to keep pace with the X2 3800+ in the multithreaded MP3 encoding tests, falling behind in three of the four instances. The Pentium D does relatively well here, though, with the 840 topping the X2 3800+ most of the time.

Xmpeg/DivX video encoding
We used the Xmpeg/DivX combo to convert a DVD .VOB file of a movie trailer into DivX format. Like LAME MT, this application is only dual threaded.

Windows Media Encoder video encoding
We asked Windows Media Encoder to convert a gorgeous 1080-line WMV HD video clip into a 640×460 streaming format using the Windows Media Video 8 Advanced Profile codec.

Despite its relatively low clock speed, the X2 3800+ makes a very decent media encoding processor.


We’re using the 64-bit beta version of ScienceMark for these tests, and several of its components are multithreaded. ScienceMark author Alexander Goodrich says this about the Molecular Dynamics simulation:

Molecular Dynamics is lightly multithreaded – one thread takes care of U/I aspects, and the other thread takes care of the computation. The computation itself is not multithreaded, though Tim and I were looking into ways of changing the algorithm to support multi-threading programming a couple years ago – it’s a lot of effort, unfortunately. When MD [is] running there [is] a total of 2 threads for the process.

Here are the results:

The Primordia test “calculates the Quantum Mechanical Hartree-Fock Orbitals for each electron in any element of the periodic table.” Alex says this about it:

Primordia is multithreaded. Two main tasks occur which allow this to happen. Essentially, we identified 2 parallel tasks that could be done. We could probably take this a step further and optimize it even more. There is an issue, however, with the Pentium Extreme Edition that we’ve identified. The second computation thread gets executed on the logical HT thread rather than the 2nd core, so performance isn’t as good as it could be. This will be fixed in the next revision. This doesn’t effect [sic] the regular Pentium D. A workaround could include disabling HT on Pentium EE. There are 3 threads for primordia – 2 threads for computation, 1 thread for U/I.

Yet again, the X2 3800+ is running closely with the Athlon 64 FX-57, oddly enough. The X2 processors congregate at the top of the pack in the molecular dynamics simulation, while the X2 3800+ falls to the middle of the bunch in Primordia.

SiSoft Sandra
Next up is SiSoft’s Sandra system diagnosis program, which includes a number of different benchmarks. The one of interest to us is the “multimedia” benchmark, intended to show off the benefits of “multimedia” extensions like MMX and SSE/2. According to SiSoft’s FAQ, the benchmark actually does a fractal computation:

This benchmark generates a picture (640×480) of the well-known Mandelbrot fractal, using 255 iterations for each data pixel, in 32 colours. It is a real-life benchmark rather than a synthetic benchmark, designed to show the improvements MMX/Enhanced, 3DNow!/Enhanced, SSE(2) bring to such an algorithm. The benchmark is multi-threaded for up to 64 CPUs maximum on SMP systems. This works by interlacing, i.e. each thread computes the next column not being worked on by other threads. Sandra creates as many threads as there are CPUs in the system and assignes [sic] each thread to a different CPU.

We’re using the 64-bit port of Sandra. The “Integer x16” version of this test uses integer numbers to simulate floating-point math. The floating-point version of the benchmark takes advantage of SSE2 to process up to eight Mandelbrot iterations at once.
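Sandra’s column-interlaced decomposition is easy to sketch. In this toy version (the image bounds and sizes are our own, not Sandra’s), thread t computes columns t, t + N, t + 2N, and so on, which gives every thread an even mix of cheap and expensive columns. Python’s GIL means this won’t actually run faster with more threads; it only demonstrates the work-splitting scheme:

```python
import threading

WIDTH, HEIGHT, MAX_ITER = 64, 48, 255

def mandelbrot_iters(cx, cy):
    """Iterations before escape for the point c = cx + cy*i."""
    z = 0j
    c = complex(cx, cy)
    for i in range(MAX_ITER):
        z = z * z + c
        if abs(z) > 2.0:
            return i
    return MAX_ITER

def render(n_threads):
    """Sandra-style interlacing: thread t computes columns
    t, t + n_threads, t + 2*n_threads, ..."""
    image = [[0] * WIDTH for _ in range(HEIGHT)]

    def worker(t):
        for x in range(t, WIDTH, n_threads):
            cx = -2.0 + 3.0 * x / WIDTH
            for y in range(HEIGHT):
                cy = -1.2 + 2.4 * y / HEIGHT
                image[y][x] = mandelbrot_iters(cx, cy)

    threads = [threading.Thread(target=worker, args=(t,)) for t in range(n_threads)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return image

assert render(1) == render(4)  # same picture regardless of thread count
```

Since no two threads ever touch the same column, the decomposition needs no locking at all, which is part of why it scales so cleanly to the 64 threads Sandra supports.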

The Pentiums rock and roll in this test, thanks to their prowess with vector math. If you are doing vector math, though, it’s nice to have a second core to help out. The X2 3800+ beats any single-core Athlon 64 by a wide margin.

Sphinx speech recognition
Ricky Houghton first brought us the Sphinx benchmark through his association with speech recognition efforts at Carnegie Mellon University. Sphinx is a high-quality speech recognition routine. We use two different versions, built with two different compilers, in an attempt to ensure we’re getting the best possible performance. However, the versions of Sphinx we’re using are only single-threaded.

You will compromise some single-threaded performance by going with the X2 3800+, as these results illustrate. The X2 3800+’s relatively low clock speed catches up with it here.

picCOLOR

picCOLOR was created by Dr. Reinert H. G. Müller of the FIBUS Institute. This isn’t Photoshop; picCOLOR’s image analysis capabilities can be used for scientific applications like particle flow analysis. Dr. Müller has supplied us with new revisions of his program for some time now, all the while optimizing picCOLOR for new advances in CPU technology, including MMX, SSE2, and Hyper-Threading. Naturally, he’s ported picCOLOR to 64 bits, so we can test performance with the x86-64 ISA.

At our request, Dr. Müller, the program’s author, added larger image sizes to this latest build of picCOLOR. We were concerned that thread creation overhead on the test’s rather small default image size would overshadow the benefits of threading. Dr. Müller has also made picCOLOR’s multithreading more extensive: eight of the 12 functions in the test are now multithreaded.

Scores in picCOLOR, by the way, are indexed against a single-processor Pentium III 1GHz system, so that a score of 4.14 works out to 4.14 times the performance of the reference machine.
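In other words, the score is just the reference machine’s runtime divided by the test machine’s. A minimal sketch with hypothetical runtimes:

```python
def piccolor_score(reference_seconds: float, test_seconds: float) -> float:
    """Index a runtime against the 1GHz Pentium III reference box:
    a score of 4.14 means 4.14x the reference machine's performance."""
    return reference_seconds / test_seconds

# hypothetical runtimes: reference box takes 41.4 s, test rig takes 10.0 s
print(round(piccolor_score(41.4, 10.0), 2))  # 4.14
```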

Another strong finish for the X2 3800+, ahead of some processors that cost nearly three times as much.

Power consumption
We measured the power consumption of our entire test systems, except for the monitor, at the wall outlet using a Watts Up PRO watt meter. The test rigs were all equipped with OCZ PowerStream 520W power supply units. The idle results were measured at the Windows desktop, and we used SMPOV and the 64-bit version of the POV-Ray renderer to load up the CPUs. In all cases, we asked SMPOV to use the same number of threads as there were CPU front ends in Task Manager—so four for the Pentium XE 840, two for the Athlon 64 X2, and so on.

The graphs below have results for “power management” and “no power management.” That deserves some explanation. By “power management,” we mean SpeedStep or Cool’n’Quiet. In the case of the Pentium 4 600-series processors and the Pentium D 840 and Pentium XE 840 CPUs, the C1E halt state is always active, even in the “no power management” tests. The Pentium D 820 and P4 Extreme Edition 3.73GHz don’t support the C1E halt state or SpeedStep.

The beta BIOS for our Asus A8N-SLI Deluxe mobo wouldn’t support Cool’n’Quiet on the X2 processors. I was able to update to Asus’ 1011 BIOS rev and get Cool’n’Quiet support for the FX-57, and using BIOS version 1013-002 allowed me to enable Cool’n’Quiet on the X2 models 3800+ and 4800+. Oddly, I couldn’t get Cool’n’Quiet working on the X2 4200+ with any of these BIOS revisions.

I’ve seen ’em before when we’ve reviewed other X2 processors, but these results continue to astonish. The system based on the X2 3800+ draws less power at idle and under load than anything here but the single-core A64 3800+. Under load, the Pentium D 840-based rig draws 292W at the wall socket, while the X2 3800+ system draws 166W. And the X2 3800+ outperforms the Pentium D 840 more often than not. The performance-per-watt picture on the X2 3800+ is impressive indeed.
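If two rigs finish a render in roughly the same time, as these two nearly do, the performance-per-watt comparison reduces to the ratio of their wall-socket draws. A quick sketch using the measured load numbers (and the simplifying assumption of equal render times):

```python
def relative_perf_per_watt(perf_a, watts_a, perf_b, watts_b):
    """How many times more work per watt system A delivers than system B."""
    return (perf_a / watts_a) / (perf_b / watts_b)

# X2 3800+ rig: 166 W under load; Pentium D 840 rig: 292 W under load.
# Assuming equal performance, the advantage is just the power ratio.
print(round(relative_perf_per_watt(1.0, 166, 1.0, 292), 2))  # 1.76
```

And since the X2 3800+ actually outperforms the Pentium D 840 more often than not, 1.76x understates its real efficiency edge.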

Overclocking

With very little effort and even less drama, I was able to get the X2 3800+ running stable at 2.4GHz by setting the HyperTransport clock to 240MHz. The Asus A8N-SLI Deluxe mobo on our test system was giving the X2 3800+ about 1.31V by default. I turned that up to 1.3375V, backed the HyperTransport multiplier down to 3X, and the X2 3800+ seemed quite happy.

Now, that’s a sweet overclock all by itself, but hitting 2.4GHz has the added benefit of bringing everything into line. When the memory clock is set to the proper divider for DDR333 operation and the HyperTransport clock is raised to 240MHz, the memory actually runs at 400MHz even. Lock down the PCI and PCI-E bus speeds using the motherboard’s BIOS, and you’re running virtually everything but the CPU and HyperTransport link at stock speeds. I was able to leave the RAM timings at 2-2-2-5, nice and tight. This is the sort of overclock I could live with for everyday use.
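The divider arithmetic behind that trick can be sketched like so. On the Athlon 64 (per common descriptions of the K8 memory controller, which we take as an assumption here), the memory divider is fixed from the CPU multiplier and the selected DDR speed at the stock 200MHz reference clock, then applied to the actual, overclocked CPU clock:

```python
from fractions import Fraction
from math import ceil

def a64_memory_clock(ht_ref_mhz, cpu_mult, ddr_setting_mhz):
    """Athlon 64 memory clock: divider = ceil(mult * 200 / DDR setting),
    computed at the stock 200MHz reference, then applied to the real
    (possibly overclocked) CPU clock of ht_ref_mhz * cpu_mult."""
    divider = ceil(cpu_mult * Fraction(200) / Fraction(ddr_setting_mhz))
    return ht_ref_mhz * cpu_mult / divider

# stock: 200MHz ref, 10x mult, DDR400 setting -> divider 10 -> 200MHz
print(a64_memory_clock(200, 10, 200))                # 200.0
# overclocked: 240MHz ref, 10x mult, DDR333 setting -> divider 12 -> 200MHz
print(a64_memory_clock(240, 10, Fraction(500, 3)))   # 200.0
```

So with the DDR333 divider selected and the reference clock at 240MHz, the memory lands right back at 200MHz, i.e., DDR400 even, exactly as described above.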

With a little more coaxing, I managed to get the X2 3800+ running at 2.5GHz long enough to record benchmark scores, but I had to back off of the memory timings a little bit in order to do it. Here’s how it performed.

The extra clock speed headroom translates into quite a bit more performance, as one might expect. The smaller cache doesn’t hold it back much, either; the X2 3800+ challenges the X2 4800+ pretty well when they’re both at 2.4GHz.

Conclusions

Well, we asked for a cheaper Athlon 64 X2, and AMD delivered. As expected, the Athlon 64 X2 3800+ performs quite well in our test suite, which is heavy on multithreaded applications and 64-bit binaries—the types of programs that an X2 purchased today should spend much of its life running. In fact, in multithreaded applications, the X2 beats out AMD’s single-core flagship, the Athlon 64 FX-57, more often than not.

There is a tradeoff involved in the X2 3800+, because its 2GHz clock speed is relatively low, and as a result, its performance in single-threaded applications is decent, but not stellar. Still, the X2 3800+ plays today’s single-threaded games better than any form of Pentium 4 or D.

The Pentium D 820 is still a good value at $241, but I suspect most enthusiasts will think the extra hundred bucks or so is worth it to step up to the X2 3800+. AMD’s cheapest dual-core processor generally outruns the Pentium D 840, and in some cases, the Pentium Extreme Edition 840, as well. I’d still like to see AMD compete at the $250 range with a dual-core offering, but I suppose that will come with time. The X2 3800+ is a step in that direction.

In fact, now that the entry point for dual-core Athlon 64 processors has dropped to $354, I am almost ready to stop recommending single-core processors for anything but budget PCs. Unless you absolutely cannot afford it, I’d suggest picking a dual-core CPU for your next system. Even for gamers, there’s little point in passing on a second CPU core just to get a somewhat higher clock speed, in my view. The X2 3800+ is more than passable for today’s games, and multithreaded game engines and graphics drivers are already on the horizon. For anything but games, having a second CPU around, even if it’s just to handle antivirus and antispyware chores, makes perfect sense.

Now, if you’ll excuse me, I’m going to step out of the way. AMD says these chips should be available for purchase right now. If most X2 3800+ chips overclock like our review sample did, then PC enthusiasts are going to stampede toward this thing en masse.

Comments closed
    • Hector
    • 14 years ago

    Scott, isn’t it a bit unrealistic to expect a $250 AMD dually? I mean, I think AMD would love to have a $250 dually like Intel has. They meet or beat them on price at every other performance point, so why can’t they or won’t they with dual core? They just can’t deliver at that price point!! I wouldn’t bet on those X2’s being all that profitable even at $350 because of low yields. AMD’s dual core is one complete die with crossbar, cache, and two CPUs, all of which must be flawless or the chip gets trashed—meaning 144-233 million transistors which must be manufactured perfectly. That has to mean lower yield rates vs. an AMD single core at ~70-113 million transistors depending on core, or Intel’s solution of putting two singles together for their dual core.

    • Hector
    • 14 years ago

    Ouch. The $350 3800+ beats the $1100 840XE in the majority of apps. How embarrassing is that?

    • Mr Bill
    • 14 years ago

    SiSoft Sandra’s Mandelbrot benchmark has amazingly poor SSE2 performance for AMD chips. I wonder if they are using the Intel compiler and the AMD chips are being instructed to run non SSE2 code?

    • Nelliesboo
    • 14 years ago

    Damage, Damage, Damage….. Just yesterday I was looking for a review of this CPU and couldn’t find one…Only to wake up this morning and find one on the front page….you rock….

    • ol blue
    • 14 years ago

    Great article! What a processor, I can’t wait to get one. Just gotta save some cash for the PCI Express video card and 939 motherboard.

      • continuum
      • 14 years ago

      Exactly. $600 or so at a bare minimum to migrate from single core Socket A/AGP to dual core socket 939/PCI-e, $360ish for CPU, $100-150 for motherboard, then a Geforce 6600GT video card. Mmmmmhmmm. Tempting!

      Not too bad at all, considering you get dual cores for that price!

        • indeego
        • 14 years ago

        Don’t forget newer memory, unless you want to be saddled with crappy memory with nice processors, etc. <.<

        • ElderDruid
        • 14 years ago

        I have a 6800GT AGP card, Athlon 2500, 1GB Corsair XMS. If I’m going to splurge for a new mobo and proc, it would probably make sense to just go for the 7800 GTX card.

        • Pettytheft
        • 14 years ago

        Still looking at around a $700-$1000 upgrade. Most people running Socket A’s will need a new PS, and memory as well.

    • Logan[TeamX]
    • 14 years ago

    In that case, I’m looking forward to meeting that OC with a favourable chip with my TT Venus 12.

    Ack, twas a reply to Damage – post #4. My bad.

    • Logan[TeamX]
    • 14 years ago

    Ahem – SOLD. I’m ordering one next week.

    Holy crap.

    Even if I get to 2.2GHz, nevermind 2.4 or 2.5GHz… I’ll be happy.

    • elmopuddy
    • 14 years ago

    Great review as always..

    I’d love to see a test with all 4 DIMM slots full… I have 4x 1G on my desk waiting for my new AN8-SLI, manual says mem speed steps down to DDR333 when using 4 double-sided DIMMS.. which would suck, if true even with “venice” core cpu.


      • flip-mode
      • 14 years ago

      With Venice you should get DDR400 without any trouble, but it would be nice to see TR’s reviews exhibiting that. Look in yesterday’s shortbread for the terribly written x-bit article.

    • liquidsquid
    • 14 years ago

    Well, my shiny new 4200+ holds pretty well… and yes, they run VERY cool, which I am still amazed about. My UPS reports this new machine as requiring 157W total with TWO 19″ LCD monitors on at the same time during normal work use. Each monitor requires ~20 to 25W, so the entire box on my system is around 110W. Amazing considering the power supply at best is 85% efficient, so the rest of the computer is under 100W. I think that may be the lowest power requirement machine I have owned to date. Gaming I am sure takes some more power as my 6800 spins up, but I haven’t logged it yet. My older P4 2.4G Compaq $350 box took 225W idle, not including the monitor. It is amazing what a good die shrink can do for power consumption.

    What is really neat is watching the CPU fan on the new box barely move and that is enough to keep it cool. In fact the CPU is the coolest thing in the box. Bravo AMD for making a true engineering marvel. I am thinking the fan included with the CPU is overkill, that is one nice fan assembly.

    To have a modern mature desktop processor at 2GHz churn through data at two times the speed of another processor makes no sense. It seems Intel has the ability to crush this gap, but yet doesn’t. I guess that is up to Intel to make a new core.


    • Vrock
    • 14 years ago

    I feel conflicted by this review and the whole dual core thing.

    On the one hand, it’s pretty obvious that dual core is The Next Big Thing for processors. Which means that future software is going to be designed to take advantage of this.

    On the other hand, haven’t we all been burned by The Next Big Thing before? Remember when the GF3 came out? It was actually slower than the GF2 Ultra in many benchmarks, but it was a DX 8 part, and DX 8 was The Next Big Thing for graphics. So people went out and bought GF3s and waited for good DX 8 games to come out….and they waited….and they waited….and by the time decent games came out, GF 4 was here, and suddenly DX 9 was on the horizon. People would have been better off with GF 2 ultras.

    How about 64 bit? Still waiting for that Next Big Thing to show significant dividends.

    No sense in forking down big bucks for a CPU that’s slower than the ones it is supposed to replace using today’s software.

    Heck, I still have a socket A setup with a *gasp* PATA hard drive and a *gasp* AGP video card.

      • liquidsquid
      • 14 years ago

      The big advantages come when you use your computer for more than one thing at once, and actually have multithreaded software like RF simulators and compilers. Also the cool-running factor and the blessed silence of lower-speed fans, and one fan rather than two. I have no near-future plans to go 64-bit, though it has the capability, but the dual-core makes a lot of sense.

      As far as 64-bit vs. 32-bit, most software these days still runs much faster than we can feed data to it, e.g., MS Word. When is the last time you saw a blip on the Task Manager while using any office software besides the dip in available memory? It isn’t often you need 100% of the CPU, so 64-bit offers even less during those short times. Only games (and of course large-scale simulations like folding@home) can ask more of a CPU than it can deliver 100% of the time, and it is that software that will benefit the most from 64-bit. You will probably get more from a video card investment than a CPU investment for the eye candy desired, so ignore the 64-bit and pay attention to the dual-core benefits. 64-bit support is just a bonus.

      • blastdoor
      • 14 years ago

      The big flaw in the GF3 analogy is that the GF3’s extra functionality was completely useless until DX8 came out. Not so with dual-core — if ever you need to run two CPU-intensive apps at once, you benefit from dual core right away.

      Of course, if you only use your PC for gaming, then your point is valid.

        • Vrock
        • 14 years ago

        I mostly use my PC for gaming. I do a bit of office work on it here and there, and I’ve been THINKING about getting into video capture and editing to transfer my Star Wars laserdiscs to DVD for my personal use, but that’s about it.

        I can’t see a situation where I’d be running apps like the ones you mentioned. Of course I know the computer world doesn’t revolve around me... but still. I’m an enthusiast, and I don’t need it. Joe Sixpack surely doesn’t need it.

        Dual core is a step backwards, not forwards, in my opinion. If they could make single-core CPUs faster with less heat, they would. Unfortunately, they broke Moore’s law a while back and had to do something... hence inventing a product that fills a need that doesn’t exist for most computer users.

        The people who truly need or want SMP have it already.

          • blastdoor
          • 14 years ago

          Apparently the computer world does revolve around me, because I can use dual core quite nicely 🙂

          And I did not have a dual processor system before, because that is *much* more expensive.

          So, I agree that if you are a gamer, dual core doesn’t help you much right now. But take my word for it, there are people out there who really can benefit from it today.

          As for Joe Sixpack, he’s irrelevant to any conversation regarding CPUs that run faster than 1 GHz. E-mail, web browsing, and MS Office don’t need anything faster than 1 GHz (arguably they don’t need anything faster than 500 MHz).

            • flip-mode
            • 14 years ago

            Agreed. Joe Six does not need a faster gaming chip. More than anything, Mr. Six needs some Lysol for the keyboard and an air can to blow out the PSU.

            • rgreen83
            • 14 years ago

            And if the industry moved at the pace of what Joe Sixpack needed, we would all still be using 300MHz CPUs and 128MB of RAM; that’s all Microsoft recommends for running XP, after all.

      • Logan[TeamX]
      • 14 years ago

      I’m overjoyed – one core for F@H, and one core for United Devices. Add in very solid gaming performance and you have an undeniable winner.

      • UberGerbil
      • 14 years ago

      I remember when 3D graphics was the Next Big Thing, and the early cards were “decelerators” and yeah, we never did get much benefit from 3D.

      I remember when 32-bit was the next big thing, and none of my existing 16-bit software could take advantage of it, and I had to wait for 32-bit software. That was sure a waste of time and effort, because there wasn’t much payoff there.

      I remember when GUIs were the next big thing, and none of my DOS games or programs could take advantage of it. That really sucked.

      Yeah, there’s never any progress in computers.

      Sure, for a lot of people who aren’t compute-bound in anything except games (which are single-threaded) this is overkill. There’s no rush to upgrade. BUT:

      There is software today that is multithreaded (rendering apps for example, and photoshop filters, and lots of other workstation software). These chips make creating a dualie workstation MUCH cheaper and easier, and all in a case that will comfortably fit under a desk. I have server software today that exploits 64bit. I’m writing software that will take advantage of 64bit. And I’m writing multi-threaded code. These processors are great news for me; this makes it much easier for me to scale up clients — give them a single-core today and swap it for a dual core tomorrow, without having to replace everything else (though I’ll probably be building a dual proc / dual core Opteron machine for my own use later in the year).

      And there’s a little more multithreading going on than you might think. The OS is multithreaded, and so is the net code in DirectX. I expect to see more threading in DirectX in the future, so you get some benefit even if the games themselves don’t use the second core.
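[Editor's note] The multi-threaded workloads the commenters describe (compilers, renderers, DC projects) share one shape: independent chunks of CPU-bound work that a second core can execute in parallel. Here is a minimal, hypothetical sketch of that shape in Python; all names are invented for illustration, and a process pool stands in for the two cores.

```python
# Hypothetical sketch: split an independent, CPU-bound workload into
# chunks so a pool of workers can run one chunk per core. This is the
# kind of job a dual-core CPU speeds up immediately.
from multiprocessing import Pool

def chunk_bounds(n, workers):
    """Split the range [0, n) into one (lo, hi) chunk per worker."""
    step = n // workers
    return [(w * step, n if w == workers - 1 else (w + 1) * step)
            for w in range(workers)]

def partial_sum(bounds):
    """CPU-bound stand-in for real work: sum of squares over [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def sum_squares(n, workers=2):
    # Each chunk is independent of the others, so the pool can run them
    # concurrently; on a dual-core chip, one chunk lands on each core.
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunk_bounds(n, workers)))

if __name__ == "__main__":
    assert sum_squares(100_000) == sum(i * i for i in range(100_000))
```

On a dual core the two chunks genuinely overlap in time; on a single core they merely time-slice, which is exactly the difference being debated in this thread.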

        • Vrock
        • 14 years ago

        No need to be so sarcastic. I’m not knocking anybody, and I’m sorry if you lack the ability to see the distinction between my comments and “there’s never any progress in computers”.

        My statement is that it doesn’t make sense for the majority of computer users to spend more money on something they don’t really need, or can’t use to the fullest extent, right now anyway. People are running around acting like dual core is the second coming of Christ for computer users, and I’m sorry, but it just isn’t so right now.

          • Logan[TeamX]
          • 14 years ago

          It’s not the second coming of Christ for end users, but it IS the second coming of Christ for DC projects and SMBs that want to double their processing horsepower using existing Opteron servers in stock.

          It’s the most cost-effective way to build a dually rig, bar none. And for that alone it IS remarkable.

          • PerfectCr
          • 14 years ago

          We mock what we don’t understand….

          • rgreen83
          • 14 years ago

          Of course, you don’t actually need any more single-core performance than what you have right now anyway, since you are still on Socket A, so why do you want more of it? I would say you’re a perfect candidate for dual core when you decide to upgrade. Why? Because you obviously buck the trend of upgrading every time something new comes out (not that that’s a bad thing), but if your next upgrade is to last you three years, dual cores will be in much wider use by then and will probably give you substantially more life out of it than a single core.

          Right now I have 330 threads running just sitting at the XP desktop, not counting Firefox, and every one of those threads can run on its own processor. Are you telling me you don’t multitask? If you run XP, you do.

      • Buub
      • 14 years ago


      • Hector
      • 14 years ago

      I’m telling you cheap guys (in a good way :)): a $44 Duron 1800 at 2.5GHz on the best chipset of all time, the nForce2 with MCP-T, and you’ve got it made for cheap for a very long time!

        • Hattig
        • 14 years ago

        Yeah, I’m still on my 30 month old system, and even now I can’t see a worthwhile upgrade for a non-games-player like me.

        My system? 1700+ overclocked to 2GHz. ABit NF7-S. Radeon 9500.

        I’ve added a DVDRW and various hard drives over time, but in the end it is still fast enough for most of what I do.

        Now a dual-core 3800+ is getting there – same speed, two processors, 64-bit and architectural improvements. Maybe in a year or so the X2 4600+ will be $200 on 65nm, the new motherboards will have come out for the next AMD socket, and something will die on my current system.

        I think it is worth upgrading for >3x the performance in general. If I had an older Athlon running at <1.4GHz, I’d seriously be considering this new X2.

          • Hector
          • 14 years ago

          LOL. You’re a wise man. I used to be like that too, but something happened: I had my Celeron 300 for about four years before getting an Athlon 1.2; since then I’ve had about 12 computers, all way overpowered for normal tasks. Hey, it’s cheaper than drag racing.

          PS: If you’ve got a Tbred-B 1700+, that chip should easily hit 2.4 :D

            • Hattig
            • 14 years ago

            I used to do that. It comes and goes, the urge to spend money on toys!

            Still, I spent a bunch last week on a 12″ iBook to replace my ancient PII 266 laptop. 5x faster processor, 8x the graphics memory, 100x the graphical power (NeoMagic vs. Radeon 9550, heh), 10x the memory, 10x the hard drive. I’ll notice that upgrade, I’m sure, but I doubt I’ll get more than three years out of it.

    • flip-mode
    • 14 years ago


    Thus begins my crusade in favor of reviews that populate all available RAM slots. The revision E A64 cores are supposed to have an optimized memory controller allowing four double-sided DIMMs at default DDR400 speed. Please, TR, start populating all of the memory slots. You’re testing $1000 CPUs and $1000 GPUs while cheaping out on the memory, and you can’t say you’re doing it to stay in line with your readership, because your readership isn’t buying X2 4800+s and GF 7800s either.

    Please don’t make me continue going over to X-bit labs for their terribly written reviews of such matters (linked in yesterday’s Shortbread).

    Oh, and by the way, thank you AMD.

      • rgreen83
      • 14 years ago

      It’s already been gone over in other reviews: yes, four slots will run at DDR400 (they have for a while), but they still won’t do so at a 1T command rate.

        • flip-mode
        • 14 years ago

        Yes, I’m aware. I still think that if a board comes with four slots, any proper review of that board will populate all four, even if it’s with 4x256MB. The point is to test every aspect of the board, looking for any weakness or limitation. Surprises happen all the time. And then, if you have to, take out two sticks for your overclocking tests.

          • Dissonance
          • 14 years ago


            • BobbinThreadbare
            • 14 years ago

            But the memory controller is on the processor, so testing it should include testing the memory controller.

            • flip-mode
            • 14 years ago

            Woah, true indeed.

            • flip-mode
            • 14 years ago

            Thanks, Diss, I missed that (sarcasm). If I wait until the next motherboard review to mention this, it will be kind of too late. I also looked at the last five motherboard reviews recently, and none of them used more than two sticks of RAM.

            Please know that I hold TR in the highest esteem, and I find you guys in a league of your own when it comes to reviewing hardware. Besides whining for more variety in the hardware that gets reviewed, the RAM thing is the only shortcoming I can come up with.

      • My Johnson
      • 14 years ago

      Good points.

    • blastdoor
    • 14 years ago

    I would just like to point out that my overclocking experience with the X2 4200+ did not go nearly as smoothly as suggested in this article or other articles on the web.

    I managed to get into Windows at 2.4GHz, but once under load I BSOD’d. I did not OC the RAM at all; it was purely a memory-divider OC, so very conservative.

    Could be that I was just unlucky, but I thought I’d throw that data point out there for anyone considering buying this with the expectation of OC. My advice: don’t assume you can OC!

      • crose
      • 14 years ago

      What was your max OC and which motherboard did you use?

        • blastdoor
        • 14 years ago

        MSI Neo 4f [Nforce 4, but not SLI]

        Since I couldn’t get it to run under load at 2.4 GHz, I gave up. I suppose the max stable OC must be somewhere between 2.2 and 2.4, but what’s the point of that?

        I managed to get it to boot into Windows at up to about 2.5 GHz, with voltage set to a little over 1.5v. But it died pretty quickly after booting.

          • Xylker
          • 14 years ago

          I know that there are a lot of MSI boards out there that work fine for people, but I will never buy another MSI product. Three motherboards and a GF2 GTS all died before their time. BOO MSI.

      • Krogoth
      • 14 years ago

      Dual-core CPUs are far more difficult to OC, since you have to deal with twice the variables. The SMP crowd has experienced OC difficulties with dual-chip systems for years; the primary problem was that some of the dual-chip boards weren’t exactly OC-friendly. At least there’s a far greater range of enthusiast-class boards for dual-core, single-chip CPUs.

    • FubbHead
    • 14 years ago

    Yep. Nice review as always.

    But I think it is still too expensive compared to the competition. It shouldn’t be $100+ over the Pentium D 820. And I wonder why they don’t bother releasing a 1.8GHz version as well? Oh well... guess I’ll just wait for the price to drop.

    • Hattig
    • 14 years ago


      • Logan[TeamX]
      • 14 years ago

      Never mind the cost of the new board for the Pentium D. Whoops, there goes the immediate AND the long-term savings. 😀

    • LiamC
    • 14 years ago

    Great review as usual.

    I would love to see the 4400+ included in the mix, if only to see how much the extra cache matters. I have yet to find a review that compares the 4200+/4400+ or 4600+/4800+.

    • BooTs
    • 14 years ago

    When I built my current PC around my 3500+, I knew that I would be upgrading to an X2-family chip very soon. This CPU makes me want to go and do that right now, but for most of the applications I run, the 3500+ is still faster. However, dual cores would probably make everything run so much smoother when multitasking... it’s very tempting, but I can wait for a price drop.

    • highlandsun
    • 14 years ago

    Since it’s now public knowledge that Intel’s compilers generate skewed code that executes more slowly on non-Intel processors… The comparisons with the MS compiler are a good thing, but I’m curious about what results you get with gcc too. Any chance of throwing that into the mix?

    • Tuanies
    • 14 years ago

    As soon as it drops below $300, I’m picking one up.

    • Vrock
    • 14 years ago

    I didn’t see the X2 3800 in any of the graphs on page 5. Is there a typo there, or am I missing something?

    edit: Never mind. It’s there now, on page 4. Weird.

    • totoro
    • 14 years ago

    Yay for 3800 X2!
    Reading now.

    Edit: Great review as always. Those wattage numbers are crazy! Adding a second core doesn’t require any extra juice? Sign me up, please.
    Unfortunately, Monarch doesn’t have them in until 8/8 :(

      • R2P2
      • 14 years ago

      I especially like how the X2 3800+ under load uses slightly more juice than any dual P4 does when idle.

    • Jon
    • 14 years ago

    Another well written article that gets straight to the point. Enjoyed reading that Scott, thanks. (Even at 2am :))

    • dragmor
    • 14 years ago

    Importantly, the X2 3800+ seems to be in stock everywhere as of last week. In fact, all five computer stores I walk past on the way home from work had them up in the window for AU$540.

    • Dposcorp
    • 14 years ago

