
AMD’s Athlon 64 X2 6000+ processor

Scott Wasson, Former Editor-in-Chief
IF YOU FOLLOW CPUs at all, you already know that Intel has been pretty much cleaning up with its Core 2 processors. Since this past summer, Intel has had the drop on AMD thanks to an excellent new microarchitecture that delivers high performance with low power consumption, a killer combo. AMD, meanwhile, has been making slow, steady progress toward a 65nm manufacturing process and a new microarchitecture of its own, due in the middle of this year. In the meantime, though, the Athlon 64 has fallen out of favor with enthusiasts somewhat, making only one of the four primary configs in our latest system guide, and the low-end one, at that.

AMD is looking to stem the tide with the only tools available to it in the short term, and they’re both very old-school: a price cut and a clock speed increase. Can these time-worn techniques put the Athlon 64 back on the map in this age of fancy-pants architectural tweaks and CPU cores multiplying like guppies? The Athlon 64 X2 6000+ is a great test case. It now costs less than its natural competitor, the Core 2 Duo E6700, and it has quietly become the first Athlon 64 X2 processor to reach the 3GHz milestone.

The X2 6000+ debuts amid a changing landscape, as well. Windows Vista is here, and the conversion to Vista will likely mean increased uptake for 64-bit software on the desktop. Accordingly, we’ve moved our CPU testing to Windows Vista x64, with a healthy mix of 64-bit and multithreaded code to run on it. Keep reading to see how the X2 6000+ fares in this new environment.

The X2 6000+ up close
The Athlon 64 X2 is a known quantity by now, so I’ll spare you the details. The vitals on the X2 6000+ are simply these: two cores at 3GHz with 1MB of L2 cache per core, primed for Socket AM2. AMD has begun shipping some 65nm processors, but this isn’t one of them; it’s still made using a 90nm fab process. The combination of a high clock speed and a 90nm process brings the X2 6000+ one less desirable trait: a max thermal power rating of 125W, well above the 65W and 89W ratings of the lower rungs of the Athlon 64 lineup.

Here, for your viewing pleasure, are some unnecessarily large close-ups of our X2 6000+ review sample.

Yep. Uh huh.

As I said before, this processor’s most natural competitor is the Core 2 Duo E6700, which runs at 2.66GHz but has the higher clock-for-clock performance of Intel’s Core microarchitecture going for it. Concomitant with its lower clock speed and 65nm fab process, the E6700 has a much nicer thermal design power rating of only 65W, as well. AMD seems to have built a discount into the X2 6000+ in order to make up for that shortcoming. The E6700 currently lists for $530, and the X2 6000+ undercuts it with an initial price of $459. Below this price point, things align more closely, with the X2 5600+ at $326 facing off against the E6600 at $316. Above this price point, you’re into quad-core territory.

Thus, to keep things simple, we’ve decided to clear off the table and match up the X2 6000+ against the Core 2 Duo E6700 for a head-to-head comparison. (Well, that and we’ve spent too much time figuring out how to sidestep the quirks of Windows Vista to provide you with more results today. We’ll be following up with results from additional CPUs—including the less expensive quad-core processors—fairly soon.) Can the Athlon 64 X2 6000+ make a case for itself in the face of formidable competition from Intel? Let’s take a look.

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Please note that our Core 2 Duo E6700 is actually a Core 2 Extreme X6800 processor clocked down to the appropriate speed. Its performance should be identical to that of the real thing, although its power consumption may vary slightly. Then again, power consumption tends to vary from older chips to newer and from one chip to the next.

Our test systems were configured like so:

Processor                Core 2 Duo E6700 2.66GHz            Athlon 64 X2 6000+ 3.0GHz (90nm)
System bus               1066MHz (266MHz quad-pumped)        1GHz HyperTransport
Motherboard              Intel D975XBX2                      Asus M2N32-SLI Deluxe
BIOS revision            BX97520J.86A.2618.2007.0212.0954    0903
North bridge             975X MCH                            nForce 590 SLI SPP
South bridge             ICH7R                               nForce 590 SLI MCP
Chipset drivers          INF Update,                         ForceWare 15.00
                         Intel Matrix Storage Manager 6.21
Memory size              2GB (2 DIMMs)                       2GB (2 DIMMs)
Memory type              Corsair TWIN2X2048-6400C4           Corsair TWIN2X2048-8500C5
                         at 800MHz                           at 800MHz
CAS latency (CL)         4                                   4
RAS to CAS delay (tRCD)  4                                   4
RAS precharge (tRP)      4                                   4
Cycle time (tRAS)        12                                  12
Audio                    Integrated ICH7R/STAC9274D5         Integrated nForce 590 MCP/AD1988B
                         with Sigmatel drivers               with Soundmax drivers
Hard drive               Maxtor DiamondMax 10 250GB SATA 150 (both systems)
Graphics                 GeForce 7900 GTX 512MB PCIe with ForceWare 100.64 drivers (both systems)
OS                       Windows Vista Ultimate x64 Edition (both systems)
OS updates

Thanks to Corsair for providing us with memory for our testing. Their products and support are far and away superior to generic, no-name memory.

Also, all of our test systems were powered by OCZ GameXStream 700W power supply units. Thanks to OCZ for providing these units for our use in testing.

The test systems’ Windows desktops were set at 1280×1024 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Memory performance
We’ll begin by measuring the memory subsystem performance of these solutions. These synthetic tests don’t track closely with real-world application performance, but are enlightening anyhow.

Notice that I’ve included a graphic above the benchmark results. That’s a snapshot of the CPU utilization indicator in Windows Task Manager, which helps illustrate how much the application takes advantage of up to four CPU cores, when they’re available. I’ve included these Task Manager graphics whenever possible throughout our results.

The X2 6000+ achieves higher memory throughput with lower latencies thanks to its built-in, on-die memory controller and dual channels of DDR2 800MHz memory. The Core 2 Duo E6700, meanwhile, hits a bottleneck in the form of its 1066MHz front-side bus; its chipset-based memory controller lies beyond that bus. That limits the E6700 to about 5.6GB/s of memory bandwidth, though it, too, has a dual-channel DDR2 800MHz memory subsystem.
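For a rough sense of where that 5.6GB/s ceiling comes from, here’s a back-of-the-envelope calculation of the theoretical peaks involved. These are textbook maximums implied by the bus specs discussed above, not our measured numbers:

```python
# Theoretical peak bandwidth figures for the buses discussed above.

def bus_bandwidth_gbs(transfers_mt_s: float, width_bytes: int) -> float:
    """Peak bandwidth in GB/s: millions of transfers/sec x bytes per transfer."""
    return transfers_mt_s * width_bytes / 1000.0

# Core 2's front-side bus: a 266MHz clock, quad-pumped to 1066 MT/s, 64 bits wide.
fsb_peak = bus_bandwidth_gbs(1066, 8)       # ~8.5 GB/s theoretical

# Dual-channel DDR2-800 on either platform: 2 channels x 800 MT/s x 64 bits.
dram_peak = 2 * bus_bandwidth_gbs(800, 8)   # 12.8 GB/s theoretical

print(f"FSB peak:  {fsb_peak:.1f} GB/s")
print(f"DRAM peak: {dram_peak:.1f} GB/s")
```

The roughly 5.6GB/s the E6700 actually achieves sits well below even the front-side bus’s theoretical peak, so the chipset-based memory controller and its bus, not the DIMMs, are the limiting factor.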

These system architecture differences contribute to the E6700’s higher memory access latencies compared to the X2 6000+, although the gap is bridged somewhat by the Core 2 Duo’s so-called memory disambiguation feature, which moves memory loads ahead of stores in certain situations.

As our use of the term “memory disambiguation” shows, we like to scare off newbies here at TR in order to drive advertising revenues down. (It’s all about the tax write-off.) As part of that effort, here’s a look at some 3D graphs showing access latencies when going to L1 cache (yellow), L2 cache (orange), and main memory (er, burnt sienna).

The advantage in access latencies for the X2 6000+ is consistent and pronounced, and it grows at larger step sizes. This advantage is offset, however, by the E6700’s larger 4MB L2 cache. Whether the E6700 or the X2 6000+ has the upper hand in memory-constrained scenarios will depend on the application and its memory access patterns.

These are interesting things to know, but their bearing on overall performance is mixed. Let’s move now to some real-world tests to see how these two very different CPU and system architectures compete.

The Elder Scrolls IV: Oblivion
We tested Oblivion by manually playing through a specific point in the game five times while recording frame rates using the FRAPS utility. Each gameplay sequence lasted 60 seconds. This method has the advantage of simulating real gameplay quite closely, but it comes at the expense of precise repeatability. We believe five sample sessions are sufficient to get reasonably consistent and trustworthy results. In addition to average frame rates, we’ve included the low frame rates, because those tend to reflect the user experience in performance-critical situations. In order to diminish the effect of outliers, we’ve reported the median of the five low frame rates we encountered.
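The reduction described above, an overall average plus the median of the per-run lows, looks something like this sketch. The per-run numbers are made up for illustration:

```python
from statistics import mean, median

# Hypothetical per-run FRAPS summaries for five 60-second gameplay sessions:
# (average fps, lowest fps) per run. The numbers are invented for illustration.
runs = [(92.4, 58), (88.1, 61), (95.0, 44), (90.7, 60), (89.3, 57)]

avg_fps = mean(avg for avg, _ in runs)

# Reporting the median of the per-run lows damps the effect of a single
# outlier run (the 44 fps dip here) better than averaging the lows would.
median_low = median(low for _, low in runs)

print(f"average fps: {avg_fps:.1f}, median low: {median_low}")
```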

For this test, we set Oblivion’s graphical quality to “Medium,” but with HDR lighting enabled and vsync disabled, at 800×600 resolution. We’ve chosen this relatively low display resolution in order to prevent the graphics card from becoming a bottleneck, so differences between the CPUs can shine through.

Obviously, both CPUs deliver more-than-playable frame rates when the graphics card isn’t a bottleneck, but you may be surprised by how close the contest is. There’s only three frames per second’s worth of difference between the lowest frame rate from the E6700 and the lowest from the X2 6000+.

Rainbow Six: Vegas
Rainbow Six: Vegas is based on Unreal Engine 3 and is a port from the Xbox 360. For both of these reasons, it’s one of the first PC games that’s widely multithreaded, and ought to provide an illuminating look at CPU gaming performance.

For this test, we set the game to run at 1024×768 resolution with high dynamic range lighting enabled. “Hardware skinning” (via the GPU) was disabled, leaving that burden to fall on the CPU. Shadow quality was set to low, and motion blur was enabled at medium quality. I played through a 90-second sequence of the game’s Terrorist Hunt mode on the “Dante’s” level five times, capturing frame rates with FRAPS, as we did with Oblivion.

This one is even closer than Oblivion, with the X2 6000+ delivering the highest average frame rate and the E6700 avoiding the lows a little better.

Valve Source engine particle simulation
Next up are a couple of tests we picked up during a visit to Valve Software, the developers of the Half-Life games. They’ve been working to incorporate support for multi-core processors into their Source game engine, and they’ve cooked up a couple of benchmarks to demonstrate the benefits of multithreading.

The first of those tests runs a particle simulation inside of the Source engine. Most games today use particle systems to create effects like smoke, steam, and fire, but the realism and interactivity of those effects is limited by the available computing horsepower. Valve’s particle system distributes the load across multiple CPU cores.

The E6700 achieves a decisive edge in Valve’s particle simulation.

Valve VRAD map compilation
This next test processes a map from Half-Life 2 using Valve’s VRAD lighting tool. Valve uses VRAD to precompute lighting that goes into its games. This isn’t a real-time process, and it doesn’t reflect the performance one would experience while playing a game. It does, however, show how multiple CPU cores can speed up game development.

The Athlon 64 can’t quite keep up with the Core 2 Duo when computing lighting in VRAD, either.

3DMark06
3DMark06 combines the results from its graphics and CPU tests in order to reach an overall score. Here’s how the processors did overall and in each of those tests.

This is another very close one, with a slight yet consistent lead for the E6700. We’ve become accustomed to close outcomes in 3DMark06, since it tends to be GPU-bound rather than CPU-bound. The exceptions to that rule are 3DMark’s CPU tests.

Even in the CPU tests, though, the X2 6000+ doesn’t trail the E6700 by much at all.

The Panorama Factory
The Panorama Factory handles an increasingly popular image processing task: joining together multiple images to create a wide-aspect panorama. This task can require lots of memory and can be computationally intensive, so The Panorama Factory comes in a 64-bit version that’s multithreaded. I asked it to join four pictures, each eight megapixels, into a glorious panorama of the interior of Damage Labs. The program’s timer function captures the amount of time needed to perform each stage of the panorama creation process. I’ve also added up the total operation time to give us an overall measure of performance.

The X2 6000+ proves quicker overall here—by a sliver.

picCOLOR image analysis
picCOLOR was created by Dr. Reinert H. G. Müller of the FIBUS Institute. This isn’t Photoshop; picCOLOR’s image analysis capabilities can be used for scientific applications like particle flow analysis. Dr. Müller has supplied us with new revisions of his program for some time now, all the while optimizing picCOLOR for new advances in CPU technology, including MMX, SSE2, and Hyper-Threading. Naturally, he’s ported picCOLOR to 64 bits, so we can test performance with the x86-64 ISA. Eight of the 12 functions in the test are multithreaded, and in this latest revision, five of those eight functions use four threads.

Scores in picCOLOR, by the way, are indexed against a single-processor Pentium III 1 GHz system, so that a score of 4.14 works out to 4.14 times the performance of the reference machine.
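The indexing works out to a simple ratio of completion times. The times below are hypothetical, chosen only to reproduce the 4.14 example from the text:

```python
# picCOLOR-style indexed scoring: each result is normalized against a
# single-processor 1GHz Pentium III reference machine.
ref_time_s = 41.4    # hypothetical reference-machine time for a function
test_time_s = 10.0   # hypothetical time for the CPU under test

score = ref_time_s / test_time_s   # 4.14x the reference machine's performance
print(f"indexed score: {score:.2f}")
```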

The E6700 has the edge again, and this time, it’s one function making up the bulk of the difference between the two CPUs: picCOLOR’s rotation function. The X2 6000+ is slower overall than the E6700, but it is by no means outclassed.

Windows Media Encoder x64 Edition
Windows Media Encoder is one of the few popular video encoding tools that uses four threads to take advantage of quad-core systems, and it comes in a 64-bit version. For this test, I asked Windows Media Encoder to transcode a 153MB 1080-line widescreen video into a 720-line WMV using its built-in DVD/Hardware profile. Because the default “High definition quality audio” codec threw some errors in Windows Vista, I instead used the “Multichannel audio” codec. Both audio codecs have a peak bitrate of 192Kbps.

The margin of difference may not seem large in the graph, but the E6700 finishes encoding this video clip nearly 90 seconds before the X2 6000+.

SiSoft Sandra Mandelbrot
Next up is SiSoft’s Sandra system diagnosis program, which includes a number of different benchmarks. The one of interest to us is the “multimedia” benchmark, intended to show off the benefits of “multimedia” extensions like MMX, SSE, and SSE2. According to SiSoft’s FAQ, the benchmark actually does a fractal computation:

This benchmark generates a picture (640×480) of the well-known Mandelbrot fractal, using 255 iterations for each data pixel, in 32 colours. It is a real-life benchmark rather than a synthetic benchmark, designed to show the improvements MMX/Enhanced, 3DNow!/Enhanced, SSE(2) bring to such an algorithm.

The benchmark is multi-threaded for up to 64 CPUs maximum on SMP systems. This works by interlacing, i.e. each thread computes the next column not being worked on by other threads. Sandra creates as many threads as there are CPUs in the system and assignes [sic] each thread to a different CPU.

We’re using the 64-bit version of Sandra. The “Integer x16” version of this test uses integer numbers to simulate floating-point math. The floating-point version of the benchmark takes advantage of SSE2 to process up to eight Mandelbrot iterations in parallel.
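For the curious, the column-interlaced threading scheme Sandra’s FAQ describes can be sketched roughly like so. This is a toy illustration, not Sandra’s code; the resolution, mapping, and claim-a-column bookkeeping are ours:

```python
import threading

# Each thread claims the next column not yet being worked on and renders it
# top to bottom, so no two threads ever touch the same column.
WIDTH, HEIGHT, MAX_ITER = 64, 48, 255
image = [[0] * WIDTH for _ in range(HEIGHT)]

lock = threading.Lock()
state = {"next_col": 0}

def claim_column() -> int:
    """Hand out the next unclaimed column index."""
    with lock:
        col = state["next_col"]
        state["next_col"] += 1
        return col

def mandelbrot_iters(cx: float, cy: float) -> int:
    """Iteration count before z escapes, capped at MAX_ITER."""
    zx = zy = 0.0
    for i in range(MAX_ITER):
        if zx * zx + zy * zy > 4.0:
            return i
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
    return MAX_ITER

def worker() -> None:
    while True:
        x = claim_column()
        if x >= WIDTH:
            return
        cx = -2.0 + 3.0 * x / WIDTH           # map column onto the real axis
        for y in range(HEIGHT):
            cy = -1.2 + 2.4 * y / HEIGHT      # map row onto the imaginary axis
            image[y][x] = mandelbrot_iters(cx, cy)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(row.count(MAX_ITER) for row in image), "in-set pixels")
```

Because columns of the fractal vary wildly in cost, handing them out one at a time keeps all the cores busy without any up-front load balancing.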

This test is pretty much a best-case scenario for the vector math capabilities of today’s CPUs, and the Core 2 Duo’s superior SSE2 capabilities, including the ability to process a 128-bit SSE instruction in a single cycle, are on display here.

Cinebench rendering
Graphics is a classic example of a computing problem that’s easily parallelizable, so it’s no surprise that we can exploit a multi-core processor with a 3D rendering app. Cinebench is the first of those we’ll try, a benchmark based on Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. This test runs with just a single thread and then with as many threads as CPU cores are available.

The X2 6000+ gets a little relief here, taking a win from the E6700. Let’s try another rendering app and see if the advantage holds.

POV-Ray rendering
We’ve finally caved in and moved to the beta version of POV-Ray 3.7 that includes native multithreading. The latest beta 64-bit executable is still quite a bit slower than the 3.6 release, but it should give us a decent look at comparative performance, regardless.

The X2 6000+ again beats out the E6700 in a rendering app. We’ve used this chess2.pov scene for ages, but it doesn’t employ the latest features of POV-Ray like the app’s recommended benchmark scene does. Let’s have a look at performance with that scene, as well. For this test, I’ve just used the optimal number of threads on each CPU, rather than testing scaling with one, two, and four threads. That means two threads for these two dual-core processors.

The Athlon 64 hangs on to win, but the gap closes somewhat with the benchmark.pov scene.

I had hoped to use 3dsmax 9 to test rendering performance, too, but it appears to have some issues with Windows Vista x64. We’ll have to revisit it once its developers get the problems resolved.

MyriMatch proteomics
Our benchmarks sometimes come from unexpected places, and such is the case with this one. David Tabb is a friend of mine from high school and a long-time TR reader. He recently offered to provide us with an intriguing new benchmark based on an application he’s developed for use in his research work. The application is called MyriMatch, and it’s intended for use in proteomics, or the large-scale study of protein. I’ll stop right here and let him explain what MyriMatch does:

In shotgun proteomics, researchers digest complex mixtures of proteins into peptides, separate them by liquid chromatography, and analyze them by tandem mass spectrometers. This creates data sets containing tens of thousands of spectra that can be identified to peptide sequences drawn from the known genomes for most lab organisms. The first software for this purpose was Sequest, created by John Yates and Jimmy Eng at the University of Washington. Recently, David Tabb and Matthew Chambers at Vanderbilt University developed MyriMatch, an algorithm that can exploit multiple cores and multiple computers for this matching. Source code and binaries of MyriMatch are publicly available.

In this test, 5555 tandem mass spectra from a Thermo LTQ mass spectrometer are identified to peptides generated from the 6714 proteins of S. cerevisiae (baker’s yeast). The data set was provided by Andy Link at Vanderbilt University. The FASTA protein sequence database was provided by the Saccharomyces Genome Database.

MyriMatch uses threading to accelerate the handling of protein sequences. The database (read into memory) is separated into a number of jobs, typically the number of threads multiplied by 10. If four threads are used in the above database, for example, each job consists of 168 protein sequences (1/40th of the database). When a thread finishes handling all proteins in the current job, it accepts another job from the queue. This technique is intended to minimize synchronization overhead between threads and minimize CPU idle time.
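The job-queue scheme described above can be sketched as follows. The stand-in “proteins” and the worker body are ours, but the jobs-equals-threads-times-ten partitioning follows the description:

```python
import queue
import threading

N_PROTEINS, N_THREADS = 6714, 4
proteins = list(range(N_PROTEINS))   # stand-ins for protein sequences

# MyriMatch-style partitioning: number of jobs = threads x 10, so each job
# covers roughly 1/40th of the database (~168 proteins with these numbers).
n_jobs = N_THREADS * 10
job_size = -(-N_PROTEINS // n_jobs)  # ceiling division

jobs: "queue.Queue[list[int]]" = queue.Queue()
for start in range(0, N_PROTEINS, job_size):
    jobs.put(proteins[start:start + job_size])

processed = []
done_lock = threading.Lock()

def worker() -> None:
    # Threads pull whole jobs from the queue; coarse-grained work items keep
    # synchronization overhead low while still balancing the load at the end.
    while True:
        try:
            job = jobs.get_nowait()
        except queue.Empty:
            return
        with done_lock:
            processed.extend(job)    # stand-in for matching spectra to peptides

threads = [threading.Thread(target=worker) for _ in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(processed), "proteins handled in jobs of up to", job_size)
```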

The most important news for us is that MyriMatch is a widely multithreaded real-world application that we can use with a relevant data set. MyriMatch also offers control over the number of threads used, so we’ve tested with one to four threads. Also, this is a newer version of the MyriMatch code than we’ve used in the past, with a larger spectral collection, so these results aren’t comparable to those in our past articles.

The two CPUs scale similarly from one thread to two, but the E6700 has the advantage in all cases. Again, however, the X2 6000+ shadows it pretty closely.

STARS Euler3d computational fluid dynamics
Our next benchmark is also a relatively new one for us. Charles O’Neill works in the Computational Aeroservoelasticity Laboratory at Oklahoma State University, and he contacted us recently to suggest we try the computational fluid dynamics (CFD) benchmark based on the STARS Euler3D structural analysis routines developed at CASELab. This benchmark has been available to the public for some time in single-threaded form, but Charles was kind enough to put together a multithreaded version of the benchmark for us with a larger data set. He has also put a web page online with a downloadable version of the multithreaded benchmark, a description, and some results here. (I believe the score you see there at almost 3Hz comes from our eight-core Clovertown test system.)

In this test, the application is basically doing analysis of airflow over an aircraft wing. I will step out of the way and let Charles explain the rest:

The benchmark testcase is the AGARD 445.6 aeroelastic test wing. The wing uses a NACA 65A004 airfoil section and has a panel aspect ratio of 1.65, taper ratio of 0.66, and a quarter-chord sweep angle of 45º. This AGARD wing was tested at the NASA Langley Research Center in the 16-foot Transonic Dynamics Tunnel and is a standard aeroelastic test case used for validation of unsteady, compressible CFD codes.

The CFD grid contains 1.23 million tetrahedral elements and 223 thousand nodes . . . . The benchmark executable advances the Mach 0.50 AGARD flow solution. A benchmark score is reported as a CFD cycle frequency in Hertz.

So the higher the score, the faster the computer. I understand the STARS Euler3D routines are both very floating-point intensive and oftentimes limited by memory bandwidth. Charles has updated the benchmark for us to enable control over the number of threads used. Here’s how our contenders handled the test with different thread counts.

Chalk up this one for the E6700, which has roughly a 38% advantage over the X2 6000+.

Folding@Home
Next, we have another relatively new addition to our benchmark suite: a slick little Folding@Home benchmark CD created by notfred, one of the members of Team TR, our excellent Folding team. For the unfamiliar, Folding@Home is a distributed computing project created by folks at Stanford University that investigates how proteins work in the human body, in an attempt to better understand diseases like Parkinson’s, Alzheimer’s, and cystic fibrosis. It’s a great way to use your PC’s spare CPU cycles to help advance medical research. I’d encourage you to visit our distributed computing forum and consider joining our team if you haven’t already joined one.

The Folding@Home project uses a number of highly optimized routines to process different types of work units from Stanford’s research projects. The Gromacs core, for instance, uses SSE on Intel processors, 3DNow! on AMD processors, and Altivec on PowerPCs. Overall, Folding@Home should be a great example of real-world scientific computing.

notfred’s Folding Benchmark CD tests the most common work unit types and estimates performance in terms of the points per day that a CPU could earn for a Folding team member. The CD itself is a bootable ISO. The CD boots into Linux, detects the system’s processors and Ethernet adapters, picks up an IP address, and downloads the latest versions of the Folding execution cores from Stanford. It then processes a sample work unit of each type.

On a system with two CPU cores, for instance, the CD spins off a Tinker WU on core 1 and an Amber WU on core 2. When either of those WUs are finished, the benchmark moves on to additional WU types, always keeping both cores occupied with some sort of calculation. Should the benchmark run out of new WUs to test, it simply processes another WU in order to prevent any of the cores from going idle as the others finish. Once all four of the WU types have been tested, the benchmark averages the points per day among them. That points-per-day average is then multiplied by the number of cores on the CPU in order to estimate the total number of points per day that CPU might achieve.
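notfred’s points-per-day estimate boils down to a simple average-and-scale. The per-WU scores below are hypothetical, just to show the arithmetic:

```python
from statistics import mean

# Hypothetical per-core points-per-day for the four WU types the CD tests;
# the labels follow the WU types named in the text, the numbers are invented.
ppd_by_wu = {"Tinker": 210.0, "Amber": 195.0,
             "Gromacs": 168.0, "Double Gromacs": 240.0}
cores = 2

# The CD's estimate: average the per-WU scores, then scale by core count.
estimated_ppd = mean(ppd_by_wu.values()) * cores

print(f"estimated total: {estimated_ppd:.1f} points per day")
```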

This may be a somewhat quirky method of estimating overall performance, but my sense is that it generally ought to work. We’ve discussed some potential reservations about how it works here, for those who are interested. I have included results for each of the individual WU types below, so you can see how the different CPUs perform on each.

These results are consistent with what we’ve seen before from these two CPU architectures. The Athlon 64 crunches through Tinker and Amber work units much quicker than the Core 2 Duo, but the tables turn with the two Gromacs WU types. If you average the four types, the X2 6000+ comes out ahead of the E6700 on the strength of its performance with Amber and Tinker WUs.

Power consumption and efficiency
We’re trying something a little different with power consumption. Our Extech 380803 power meter has the ability to log data, so we can capture power use over a span of time. As always, the meter reads power use at the wall socket, so it incorporates power use from the entire system—the CPU, motherboard, memory, video card, hard drives, and anything else plugged into the power supply unit. (We plugged the computer monitor and speakers into a separate outlet, though.) We measured how each of our test systems used power during a roughly one-minute period, during which time we executed Cinebench’s rendering test. All of the systems had their power management features (such as SpeedStep and Cool’n’Quiet) enabled during these tests.

Right away, you can see that the two CPUs live up to their thermal/power ratings—125W in the case of the X2 6000+ and 65W for the E6700. The two systems’ idle power consumption levels are similar, but under load, the Athlon 64 X2 draws quite a bit more power.

We can break down the power consumption data in various useful ways. We’ll start with a look at idle power, taken from the trailing edge of our test period, after all CPUs have completed the render.

There’s only a 9W difference between the two systems at idle, despite our AMD system’s use of the nForce 590 SLI chipset, which has earned a reputation for being power-hungry.

Next, we can look at peak power draw by taking an average from the five-second span from 10 to 15 seconds into our test period, during which the processors were rendering.

Here is where the differences between the two CPUs become readily apparent. The system based on the X2 6000+ draws 78W more at peak than the Core 2 Duo E6700-based system.

Another way to gauge power efficiency is to look at total energy use over our time span. This method takes into account power use both during the render and during the idle time. We can express the result in terms of watt-seconds, equivalent to joules.

The E6700 uses quite a bit less energy during our test period, regardless of whether the single-threaded or multithreaded test is used. In general, though, multithreading pays benefits in terms of energy use.

Finally, we can consider the amount of energy used to render the scene. Since the different systems completed the render at different speeds, we’ve isolated the render period for each system. We then computed the amount of energy used by each system to render the scene, expressed in watt-seconds. This method should account for both power use and, to some degree, performance, because shorter render times may lead to less energy consumption.
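Computing watt-seconds from a logged power trace amounts to summing the samples over the render window. The trace below is invented for illustration; our meter’s actual readings are in the graphs:

```python
# One-second whole-system power samples from a hypothetical logged trace (W).
# The first and last samples bracket the render; the middle ones are the load.
samples_w = [135, 210, 212, 211, 209, 213, 210, 208, 140, 136]
render = slice(1, 8)        # samples taken while the render was running

interval_s = 1.0            # one sample per second
# Energy = sum of (power x interval) over the isolated render period,
# expressed in watt-seconds, which are equivalent to joules.
energy_ws = sum(samples_w[render]) * interval_s

print(f"render energy: {energy_ws:.0f} watt-seconds (joules)")
```

A faster CPU that draws more power can still come out ahead on this metric if it finishes the render enough sooner, which is why this measure folds performance into the efficiency picture.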

The E6700’s combination of strong performance and lower peak power draw gives it a convincing lead in energy efficiency over the Athlon 64 X2 6000+, as this last and best indicator of power-efficient performance demonstrates vividly.

Conclusions
AMD’s two old-school tricks, the price cut and the clock speed bump, have combined to give the Athlon 64 X2 6000+ a pretty good value proposition. Performance-wise, the X2 6000+ is slower overall than the Core 2 Duo E6700, but not by much. That may be a surprising outcome to those accustomed to seeing Core 2 Duo processors convincingly trounce the competition, but these two architectures were never that different in terms of clock-for-clock performance. It stands to reason that a 3GHz Athlon 64 X2 could nearly pull even with a Core 2 Duo at 2.66GHz. The X2 6000+ is also about 70 bucks cheaper than the E6700, making it a pretty sweet deal in the grand scheme of things. Of course, as always, there are better deals to be had at lower price points than this one, but the X2 6000+ offers a compelling alternative to the E6700.

That said, there’s a reason the old-school clock speed bump has become much less fashionable of late, and the power consumption numbers for the X2 6000+ are a testament to it. Power draw rises proportionately with clock frequency and with the square of core voltage. Those power curves tend to get hairy at the higher clock frequencies possible with CPUs made on a given fab process. That’s why the X2 6000+ has a 125W thermal/power rating, while the X2 5600+ needs only 89W to run at 2.8GHz. Both are 90nm chips. (AMD has made some progress on this front, incidentally; the Athlon 64 FX-62 was also a 2.8GHz part, but had a 125W rating. And all of AMD’s 65nm CPUs to date have a rating of 65W or less. The progress at 90nm probably helped open up the possibility of 3GHz parts like the X2 6000+.)
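A quick worked example of that frequency-and-voltage-squared relationship, with hypothetical core voltages chosen only to illustrate the shape of the curve:

```python
# Dynamic CPU power scales roughly with frequency times voltage squared:
#   P2 / P1 ~= (f2 / f1) * (v2 / v1) ** 2
def relative_power(f1: float, v1: float, f2: float, v2: float) -> float:
    return (f2 / f1) * (v2 / v1) ** 2

# Going from 2.8GHz to 3.0GHz with a modest voltage bump (the 1.35V and
# 1.40V figures here are hypothetical, not AMD's actual specs):
scale = relative_power(2.8, 1.35, 3.0, 1.40)
print(f"~{scale:.2f}x dynamic power for a ~1.07x clock increase")
```

The actual jump from the X2 5600+’s 89W rating to the 6000+’s 125W is steeper than this dynamic-power estimate alone would suggest; thermal ratings are binned conservatively, and leakage current also grows with voltage.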

If you decide to save the 70 bucks to get yourself an X2 6000+ rather than an E6700, you will pay for it with much higher peak power consumption. That may translate into higher system temperatures, more fan noise, or (though we haven’t had time to test it) less overclocking headroom. Some folks, I expect, will be willing to take that bargain.

The X2 6000+ also signals a broader realignment in the Athlon 64 X2 lineup, with price cuts across the board that make AMD’s offerings more attractive as alternatives to the Core 2 Duo. Some of those Athlon 64 X2s, at lower clock speeds, have power consumption ratings of 65W or less. We’ll soon expand our Windows Vista x64 performance and power results to encompass more price points, so stay tuned.
