Intel’s Core 2 Duo and Extreme processors

INTEL’S DESKTOP PROCESSORS HAVE NOT been in a good place for the past two and a half years. Pentium 4 and Pentium D CPUs have run at relatively high clock speeds but delivered relatively low performance compared to their competition from AMD. They’ve also drawn a tremendous amount of power, which they’ve generously expended as heat. In other words, they’ve been hotter than Jessica Simpson and slower than, well, Jessica Simpson. Despite heroic efforts by Intel’s engineering and manufacturing types, these chips based on the Netburst microarchitecture haven’t been able to overcome their inherent limitations well enough to keep up with the Athlon 64. As a result, Intel decided to scrap Netburst and bet the farm on a new high-performance, low-power design from the Israel-based design team responsible for the Pentium M.

The product of that team’s efforts is a new CPU microarchitecture known as Core, of which the Core 2 Duo and Core 2 Extreme are among the first implementations intended for desktop PCs. We’ve been knee-deep in hype about the Core architecture for months now, with a stream of juicy technical details, semi-official benchmark previews, and clandestine reviews of pre-release products feeding the anticipation. Clearly, when a player as big as Intel stumbles as badly as it has, PC enthusiasts and most others in the industry are keen to see it get back up and start delivering exciting products once again.

Fortunately, the wait for Core 2 processors is almost over. Intel has decided to take the wraps off final reviews of its new CPUs today, in anticipation of the chips’ release to the public in a couple of weeks. Fish have gotta swim, politicians have gotta dissemble, and TR has gotta test hardware, so of course we’ve had the Core 2 processors on the test bench here in Damage Labs for a thorough workout against AMD’s finest—including the new Energy Efficient versions of the Athlon 64 X2. After many hours of testing, we’re pleased to report that the Core 2 chips live up to the hype. Intel has recovered its stride, returned to its winning ways, gotten its groove back, and put the izzle back in its shizzle. Read on for our full review.

Conroe up close
We first previewed the chip code-named Conroe back in March, and now we finally have our hands on one within the confines of our own labs. In spite of all the hype, the Core 2 Duo processor itself is a rather unassuming bloke that looks no different from the Pentium CPUs that preceded it. Like them, it resides in an LGA775-style socket and runs on a 1066MHz front-side bus.

The Core 2 Duo E6700 processor

Also like its most immediate predecessors, the Core 2 Duo is manufactured on Intel’s 65nm fab process. Unlike them, however, the Core 2 Duo is not two chips crammed together on one package; it’s a native dual-core design with a total of roughly 291 million transistors arranged in an area of 143 mm². By contrast, each of the Pentium Extreme Edition 965’s two chips has an estimated 188 million transistors in an 81 mm² die. Add the two chips together, and the Pentium Extreme Edition 965 has more total transistors and a larger total die area than the Core 2 Duo.
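
For reference, the arithmetic on those figures works out like so: 2 × 188 million ≈ 376 million transistors and 2 × 81 mm² = 162 mm² of silicon for the Pentium Extreme Edition 965, versus roughly 291 million transistors in 143 mm² for the Core 2 Duo.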

Intel plans to offer five flavors of Core 2 processors initially, with prices and features like so:

Model                  Clock speed  Bus speed  L2 cache  TDP   Price
Core 2 Extreme X6800   2.93GHz      1066MHz    4MB       75W   $999
Core 2 Duo E6700       2.67GHz      1066MHz    4MB       65W   $530
Core 2 Duo E6600       2.4GHz       1066MHz    4MB       65W   $316
Core 2 Duo E6400       2.13GHz      1066MHz    2MB       65W   $224
Core 2 Duo E6300       1.86GHz      1066MHz    2MB       65W   $183

The prices on the mid-range models are quite reasonable once you consider performance, as we’ll do shortly. What you’ll really want to notice about the Core 2 chips, though, is the column labeled TDP. This parameter—thermal design power—specifies the amount of cooling the chip requires, and the numbers are down dramatically from the Pentium Extreme Edition 965’s rating of 130W. Clock speeds are down, as well, since the Core microarchitecture focuses on achieving high performance per clock rather than stratospheric clock frequencies. The fastest Core 2 processor is the X6800 Extreme, which is separated from the regular Core 2 Duos only by its 2.93GHz clock speed and a 10W higher TDP—oh, and by almost half a grand.

Intel says complete PC systems based on the Core 2 Extreme X6800 and individually boxed products will both begin selling on July 27th, while Core 2 Duo processors with 4MB of L2 cache should show up on August 7th. Intel will be transitioning its CPU production gradually away from Pentiums to Core 2 Duos, and that transition might not happen as quickly as the market would like. I wouldn’t be surprised to see strong demand and short supply of these processors for the next couple of months, until Intel is able to ramp up production volumes. The less expensive versions of the Core 2 Duo with 2MB of L2 cache are initial casualties of this controlled ramp. They aren’t expected to be available until the fourth quarter of this year.

On a brighter note, the supporting infrastructure for Core 2 chips is already fairly well established. The processors should be compatible with a number of chipsets, including the enthusiast-class 975X and the upcoming 965-series mainstream chipsets from Intel. NVIDIA’s nForce4 SLI X16 Intel Edition should work, too, as well as the yet-to-be-released nForce 500 series for Intel. In fact, the Core 2 can act as a drop-in replacement for a Pentium D or Pentium Extreme Edition, provided that the motherboard is capable of supplying the lower voltages that Core 2 processors require. Only the most recent motherboards seem to have Core 2 support, so you’ll want to check carefully with the motherboard maker before assuming a board is compatible. Our Core 2 Duo and Extreme review samples, for example, came from Intel with an updated version of the D975XBX motherboard, since older revisions couldn’t supply the proper voltage.

Speaking of which, the upgrade path for those who buy motherboards for Core 2 processors in the next few months isn’t entirely clear. The server/workstation version of the Core microarchitecture, the Woodcrest Xeon, already rides on a faster 1333MHz front-side bus. The Core 2 Duo may move to this faster bus frequency at some point, but Intel hasn’t revealed a schedule for this move. Intel has revealed plans to deliver “Kentsfield,” a quad-core processor with two Conroe chips in a single package, in early 2007, but we don’t yet know whether current motherboards will be able to support it. Investing in a Core 2-capable motherboard right now might be a recipe for longevity, but it might also be a dead end as far as CPU upgrades are concerned.

What’s with the name?
Before we go on, we should probably take a moment to talk about the Core 2 Duo product name. It’s dreadful, of course, but for deeper reasons than you might think. You see, microprocessors tend to be known by several names throughout their lives, and usually those names aren’t really related. For example, the chip code-named Willamette, based on a microarchitecture called Netburst, became the first product known as Pentium 4. The multiple names may be a little difficult to keep straight, but they’re distinctive and follow a coherent logic.

This chip, however, is different. The microarchitecture is called Core, the chip is code-named Conroe, and the product is called Core 2 Duo. By that logic, the chip code-named Willamette would have been based on the Willette microarchitecture, and the first product might have been the Willette 4 Quadro, which everyone knows is actually a disposable razor.

The Core 2 Duo’s name does make sense from a certain perspective, though, because Intel has been shipping the original Core Duo as a dual-core mobile processor since the beginning of the year. There’s also a single-core version of that processor known as the Core Solo, which explains the whole Duo suffix. And the mobile version of the Core 2 Duo, based on the chip code-named Merom, will be the follow-up to the Core Duo.

See? Ahh.

So why name the microarchitecture Core? You’ve got me. The Core microarchitecture is a descendant of the one found in the current Core Duo, but it’s been pretty extensively reworked and certainly deserves a new name. The fact that its name matches up with the previous-gen product’s name is confounding. We’ll simply have to, as one Intel employee admonished at the Spring ’06 IDF, “Deal with it.”

 

The Core microarchitecture
The heritage of the Core microarchitecture can be traced back through the Core Duo and Pentium M, through the Pentium II and III, all the way to the original Pentium Pro. That original design has undergone some serious evolutionary changes, plus a few radical mutations, along the way, and the Core microarchitecture may be the most sweeping set of changes yet. Even compared to its direct forebear, the Core Duo, the Core design can be considered substantially new.

Core’s genesis was a project known internally at Intel as Merom, whose mission was to build a replacement for the Pentium M and Core Duo mobile processors. The Israel-based design team responsible for Intel’s mobile CPUs followed a distinctive design philosophy focused intently on energy efficiency, which helped make the Pentium M a resounding success as part of the Centrino platform. When power and heat became problems for Netburst-based desktop and server processors, Intel turned to Merom as the source of a new, common microarchitecture for its mobile, desktop, and server CPUs.

Because of its orientation toward power efficiency, the Core architecture is a very different design from Netburst. From the very first Pentium 4, Netburst was a “speed demon” type of architecture, a chip designed not for clock-for-clock performance, but to be comfortable running at high clock frequencies. To this end, the original Netburst processors had a relatively long 20-stage main pipeline. For a time, this design achieved good results at the 130nm process node, but all of that changed when Intel introduced a vastly reworked Netburst at 90nm. With its pipeline stretched to 31 stages and its transistor count up significantly, the Pentium 4 “Prescott” still had trouble delivering high clock speeds without getting too hot, and performance suffered as a result.

The Core architecture, meanwhile, is the opposite of a speed demon; it’s a “brainiac” instead. Core has a relatively short 14-stage pipeline, but it’s very “wide,” with ample execution resources aimed at handling lots of instructions at once. Core is unique among x86-compatible processors in its ability to fetch, decode, issue and retire up to four instructions in a single clock cycle. Core can even execute 128-bit SSE instructions in a single clock cycle, rather than the two cycles required by previous architectures. In order to keep all of its out-of-order execution resources occupied, Core has deeper buffers and more slots for instructions in flight.


A block diagram of one Core execution, err, core. Source: Intel.

Like other contemporary PC processors, Core translates x86 instructions into a different set of instructions that its internal, RISC-like core can execute. Intel calls these internal instructions micro-ops. Core inherits the Pentium M and Core Duo’s ability to fuse certain micro-op pairs and send them down the pipeline for execution together, a provision that can make the CPU’s execution resources seem even wider than they are. To this ability, Core adds the capability to fuse some pairs of x86 “macro-ops,” such as compare and jump, that commonly occur together. Not only can these provisions enhance performance, but they can also reduce the amount of energy expended in order to execute an instruction sequence.
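
To make that concrete, here is a minimal C sketch (our own illustration, not Intel’s) of the kind of compare-and-branch sequences macro-op fusion targets. A loop like this typically compiles down to a cmp followed by a conditional jump, the sort of x86 pair Core can fuse and track as a single micro-op:

    /* Each test below compiles to a compare plus a conditional jump,
       the x86 "macro-op" pair that Core can fuse into one micro-op. */
    int count_matches(const int *data, int n, int key)
    {
        int matches = 0;
        for (int i = 0; i < n; i++) {   /* loop test: cmp + jl, a fusable pair  */
            if (data[i] == key)         /* body test: cmp + jne, another pair   */
                matches++;
        }
        return matches;
    }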

Another innovation in Core is a feature Intel has somewhat cryptically named memory disambiguation. Most modern CPUs speculatively execute instructions out of order and then reorder them later to create the illusion of sequential execution. Memory disambiguation extends out-of-order principles to the memory system, allowing loads to be moved ahead of stores in certain situations. That may sound like risky business, but that’s where the disambiguation comes in. The memory system uses an algorithm to predict which loads can safely be moved ahead of stores, removing the ambiguity.

See? Ahh.

This optimization can pay big performance dividends.
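
Here’s a rough C illustration of the situation (our own sketch, not Intel’s example). In a loop like this, the load for the next iteration sits behind a store whose address the hardware may not have resolved yet; a conservative memory pipeline makes the load wait, while Core’s predictor lets it go ahead when the addresses are unlikely to overlap and recovers if the guess turns out to be wrong:

    /* Unless the hardware knows dst and src never overlap, each load of
       src[i] sits behind a store to dst[i-1] whose address may not be
       resolved yet.  Memory disambiguation predicts when such a load can
       safely be issued ahead of the pending store. */
    void scale(float *dst, const float *src, int n, float k)
    {
        for (int i = 0; i < n; i++)
            dst[i] = src[i] * k;
    }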


A picture of the Core 2 die. Source: Intel.

In contrast to the various “dual-core” implementations of Netburst, the Core microarchitecture is a natively dual-core design. The chip’s two execution cores each have their own separate, 32K L1 instruction and data caches, but they share a common L2 cache that can be either 2MB or 4MB in size. (The execution trace cache from Netburst is not carried over here.) The chip can allocate space in this L2 cache dynamically on an as-needed basis, dedicating more space to one core than the other in periods of asymmetrical activity. The common cache also eliminates the need for coherency protocol traffic on the system’s front-side bus, and one core can pass data to another simply by transferring ownership of that data in the cache. This arrangement is easily superior to the Pentium D’s approach, where the two cores can communicate and share data only via the front-side bus.

As Intel’s brand-new common microarchitecture, Core is of course equipped with all of the latest features. String ’em together, and you get something like this: MMX, SSE, SSE2, SSE3, SSE4, EM64T, EIST, C1E, XD, and VT, to name a subset of the complete list. The most notable addition here is probably EM64T—Intel’s name for x86-64 compatibility—because the Core Duo didn’t have it. In order to make its way into desktops and servers, Core needed to be a 64-bit capable processor, and so it is.

The scope and depth of the changes from the “Yonah” Core Duo to the Core microarchitecture are too great to cover in a review like this one, but hopefully you have a sense of things. For further reading on the details of the Core architecture, let me recommend David Kanter’s excellent overview of the design.

AMD answers with Energy Efficient Athlons
Anticipating better power efficiency from Intel’s new desktop processors, AMD has begun offering Energy Efficient versions of many of its CPUs for the new Socket AM2 infrastructure. Much like the Turion 64 mobile processor and the HE versions of the Opteron server chips, these Energy Efficient Athlon 64s have been manufactured using a tweaked fabrication process intended to produce chips capable of operating at lower voltages. Making these more efficient chips isn’t easy, so AMD charges a price premium for the Energy Efficient models that averages about 40 bucks over the non-EE versions.


The Athlon 64 X2 3800+ Energy Efficient Small Form Factor (left) and Athlon 64 X2 4600+ Energy Efficient (right)

Just as we wrapped up our testing of the Core 2 Duo, a pair of these new Energy Efficient processors arrived from AMD. On the right above is the EE version of the Athlon 64 X2 4600+. AMD rates its max thermal power at 65 W, down from 89W in the stock version. Currently, the X2 4600+ EE commands a $43 price premium over the regular X2 4600+.

The processor on the left above may have the longest product name of any desktop CPU ever: “Athlon 64 X2 3800+ Energy Efficient Small Form Factor.” This long-winded name, though, signals a very frugal personality; AMD rates this processor’s max thermal power at only 35W. Making the leap from the stock version to the EE SFF model will set you back roughly 60 bucks, or you can stop halfway and get the X2 3800+ EE with a 65W TDP for 20 bucks more than the basic 89W version.

By the way, you may be tempted to compare the TDP numbers for the Core 2 Duo with these processors, but there is some risk in doing so. AMD generates its TDP ratings using a simple maximum value, while Intel uses a more complex method that produces numbers that may be less than the processor’s actual peak power use. As a result, direct comparisons between AMD and Intel TDP numbers may not reflect the realities involved.

For all intents and purposes beyond power consumption and the related heat production, the EE versions of the Athlon 64 X2 ought to be identical to the originals. They run at the same clock speeds, have the same feature sets, and should deliver equivalent performance. Because that’s so, and due to limited testing time, we’ve restricted our testing of these Energy Efficient chips to power consumption.

 

Our testing methods
Please note that the two Pentium D 900-series processors in our test are actually a Pentium Extreme Edition 965 chip that’s been set to the appropriate core and bus speeds and had Hyper-Threading disabled in order to simulate the actual products. Similarly, our Socket AM2 versions of the Athlon 64 X2 4800+, 4600+, and 4200+ are actually the Athlon 64 FX-62 and X2 5000+ clocked down to the appropriate speeds, and the Core 2 Duo E6600 is actually an underclocked Core 2 Extreme X6800. The performance of our “simulated” processor models should be identical to the actual products.

Also, I’ve placed asterisks next to the memory clock speeds of the Socket AM2 test systems in the table below. Due to limitations in AMD’s memory clocking scheme, a couple of these systems couldn’t set their memory clocks to exactly 800MHz.

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

Pentium D test system
Processor: Pentium D 950 3.4GHz, Pentium D 960 3.6GHz
System bus: 800MHz (200MHz quad-pumped)
Motherboard: Intel D975XBX
BIOS revision: BX97510J.86A.1073.2006.0427.1210
North bridge: 975X MCH
South bridge: ICH7R
Chipset drivers: INF Update 7.2.2.1007, Intel Matrix Storage Manager 5.5.0.1035
Memory size: 2GB (2 DIMMs)
Memory type: Crucial Ballistix PC2-8000 DDR2 SDRAM at 800MHz
CAS latency (CL): 4
RAS to CAS delay (tRCD): 4
RAS precharge (tRP): 4
Cycle time (tRAS): 15
Audio: Integrated ICH7R/STAC9221D5 with SigmaTel 5.10.4991.0 drivers

Pentium Extreme Edition test system
Processor: Pentium Extreme Edition 965 3.73GHz
System bus: 1066MHz (266MHz quad-pumped)
Motherboard: Intel D975XBX
BIOS revision: BX97510J.86A.1073.2006.0427.1210
North bridge: 975X MCH
South bridge: ICH7R
Chipset drivers: INF Update 7.2.2.1007, Intel Matrix Storage Manager 5.5.0.1035
Memory size: 2GB (2 DIMMs)
Memory type: Crucial Ballistix PC2-8000 DDR2 SDRAM at 800MHz
CAS latency (CL): 4
RAS to CAS delay (tRCD): 4
RAS precharge (tRP): 4
Cycle time (tRAS): 15
Audio: Integrated ICH7R/STAC9221D5 with SigmaTel 5.10.4991.0 drivers

Core 2 test system
Processor: Core 2 Duo E6600 2.4GHz, Core 2 Duo E6700 2.66GHz, Core 2 Extreme X6800 2.93GHz
System bus: 1066MHz (266MHz quad-pumped)
Motherboard: Intel D975XBX
BIOS revision: BX97510J.86A.1209.2006.0601.1340
North bridge: 975X MCH
South bridge: ICH7R
Chipset drivers: INF Update 7.2.2.1007, Intel Matrix Storage Manager 5.5.0.1035
Memory size: 2GB (2 DIMMs)
Memory type: Corsair TWIN2X2048-8500C5 DDR2 SDRAM at 800MHz
CAS latency (CL): 4
RAS to CAS delay (tRCD): 4
RAS precharge (tRP): 4
Cycle time (tRAS): 15
Audio: Integrated ICH7R/STAC9221D5 with SigmaTel 5.10.4991.0 drivers

Socket AM2 test system
Processor: Athlon 64 X2 4200+ 2.2GHz, Athlon 64 X2 4800+ 2.4GHz, Athlon 64 X2 4600+ 2.4GHz, Athlon 64 X2 5000+ 2.6GHz, Athlon 64 FX-62 2.8GHz, Athlon 64 X2 3800+ Energy Efficient, Athlon 64 X2 4600+ Energy Efficient
System bus: 1GHz HyperTransport
Motherboard: Asus M2N32-SLI Deluxe
BIOS revision: 0402
North bridge: nForce 590 SLI SPP
South bridge: nForce 590 SLI MCP
Chipset drivers: SMBus driver 4.52, IDE/SATA driver 6.67
Memory size: 2GB (2 DIMMs)
Memory type: Corsair TWIN2X2048-8500C5 DDR2 SDRAM at 800MHz*
CAS latency (CL): 4
RAS to CAS delay (tRCD): 4
RAS precharge (tRP): 4
Cycle time (tRAS): 12
Audio: Integrated nForce 590 MCP/AD1988B with SoundMAX 5.10.2.4490 drivers

Common to all systems
Hard drive: Maxtor DiamondMax 10 250GB SATA 150
Graphics: GeForce 7900 GTX 512MB PCI-E with ForceWare 84.25 drivers; ForceWare 84.21 drivers (WorldBench only)
OS: Windows XP Professional x64 Edition; Windows XP Professional with Service Pack 2 (WorldBench only)

Thanks to Corsair and Crucial for providing us with memory for our testing. Both of them provide products and support that are far and away superior to generic, no-name memory.

Also, all of our test systems were powered by OCZ GameXStream 700W power supply units. Thanks to OCZ for providing these units for our use in testing.

The test systems’ Windows desktops were set at 1280×1024 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Memory performance
We’ll begin our tests with a customary look at memory subsystem performance. These results won’t track with performance in most real-world applications, but they can teach us a thing or two about these processors and how they compare.

Although our Intel motherboard has dual channels of DDR2 memory running at 800MHz, the Core 2 Duo’s path to main memory is limited by its 1066MHz front-side bus. With their on-chip memory controllers, the Athlon 64 processors can take better advantage of the peak bandwidth offered by two channels of DDR2 memory. That said, the Core 2 Duo doesn’t achieve the same throughput as the Extreme Edition 965, which also rides on a 1066MHz bus. The gap between these two Intel CPU architectures may stem from the algorithms they each use to govern pre-fetching of data from main memory into the L2 cache. The Netburst processor may be more aggressive here in a way that benefits it in this synthetic test.
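
A little back-of-the-envelope arithmetic shows why the bus, not the DIMMs, is the ceiling here:

    1066MHz front-side bus: 266MHz × 4 transfers per clock × 8 bytes ≈ 8.5 GB/s
    Dual-channel DDR2 at 800MHz: 2 channels × 800 million transfers per second × 8 bytes ≈ 12.8 GB/s

The memory can supply more peak bandwidth than the front-side bus can carry, while the Athlon 64’s integrated controller talks to the DIMMs directly.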

Next up is our ancient version of Linpack. This classic benchmark is traditionally used to measure floating-point math performance, but we use this unoptimized version simply to get a look at the “shape” of the memory subsystem. Unfortunately, this rendition of Linpack has a fixed maximum matrix size of 2MB, so we can’t really see how the Core 2’s entire L2 cache or main memory performs. I would have cut these results out of the review entirely, were they not so dramatic.

The Core 2 processors look to have one heck of a fast cache subsystem, at least in the first 2MB. Neither the Pentiums nor the Athlons come close.

Memory bandwidth is important, but memory access latencies are arguably more important, though the two are interrelated. This result is intriguing, because the Core 2 processors manage to achieve much lower access latencies than the Netburst-based Pentiums, despite using the same memory timings on the same type of motherboard. These numbers, however, are just one sample point in a range of possibilities. Let’s look at representatives of the three different microarchitectures in more detail.

The graphs below show results from multiple step and block sizes. I’ve color-coded the graphs to make them easier to read. For each processor, the yellow areas represent block sizes that fit into the L1 data cache, the light orange areas represent L2 cache, and the dark orange areas represent main memory.

The Athlon 64’s built-in memory controller gives it a pronounced and consistent advantage in getting out to main memory quickly, but the Core 2 really does shave 15 to 20 nanoseconds off of main memory access times versus the Pentium Extreme Edition. I hate to speculate too much about the reasons, but they may include the Core 2’s lower latency caches (which we see illustrated here), potentially less aggressive pre-fetching (and thus a less saturated bus), and possibly even its ability to move loads ahead of stores via memory disambiguation.

Oh, and CPU geeks may be interested to note that our latency test app reports the Core 2’s L1 cache latency is three cycles, for what it’s worth.

 

Gaming performance

Quake 4
We tested Quake 4 by running our own custom timedemo with and without its multiprocessor optimizations enabled. These can be switched on in the game console by setting the “r_usesmp” variable to “1”.

Above the following benchmark graph, and throughout most of the tests in this review, we’ve included Task Manager plots showing CPU utilization. These plots were captured on the Pentium Extreme Edition 965, and they should offer some indication of how much impact multithreading has on the operation of each application. Single-threaded apps may sometimes show up as spread across multiple processors in Task Manager, but the total amount of space below all four lines shouldn’t equal more than the total area of one square if the test is truly single-threaded. Anything significantly more than that is probably an indication of some multithreaded component in the execution of the test. Because WorldBench’s tests are entirely scripted, however, we weren’t able to capture Task Manager plots for them, as you’ll notice later.

NVIDIA’s video drivers are now multithreaded, so we should see some amount of multithreading action happening in any application that uses the GPU for 3D graphics, even if the game is only single-threaded.


With “r_usesmp 0”


With “r_usesmp 1”

Just like that, we see a new order being established. The three Core 2 Duo chips capture the top three spots, with even the E6600—at 2.4GHz and $316—outperforming the Athlon 64 FX-62. The Core 2’s advantage over the Athlon 64 X2 is similar to the one the AMD chips have held over the Pentiums for so long. Obviously, there’s utterly no contest between the new Intel processors and their predecessors. Will this pattern hold in other games?

The Elder Scrolls IV: Oblivion
We tested Oblivion by manually playing through a specific point in the game five times for each CPU while recording frame rates using the FRAPS utility. Each gameplay sequence lasted 60 seconds. This method has the advantage of simulating real gameplay quite closely, but it comes at the expense of precise repeatability. We believe five sample sessions are sufficient to get reasonably consistent and trustworthy results. In addition to average frame rates, we’ve included the low frame rates, because those tend to reflect the user experience in performance-critical situations. In order to diminish the effect of outliers, we’ve reported the median of the five low frame rates we encountered.
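
For the curious, taking that median works exactly as you’d expect; a quick C sketch (our own illustration, not the actual tooling we used) looks like this:

    #include <stdlib.h>

    /* Compare two floats for qsort. */
    static int cmp_float(const void *a, const void *b)
    {
        float fa = *(const float *)a, fb = *(const float *)b;
        return (fa > fb) - (fa < fb);
    }

    /* Median of the five per-session low frame rates: sort a copy and take
       the middle value, which blunts the effect of a single outlier run. */
    float median_of_five(const float lows[5])
    {
        float sorted[5];
        for (int i = 0; i < 5; i++)
            sorted[i] = lows[i];
        qsort(sorted, 5, sizeof(float), cmp_float);
        return sorted[2];
    }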

We set Oblivion’s graphical quality settings to “Medium,” 800×600 resolution, with HDR lighting enabled. Our Oblivion test is a quick run around the Imperial City Arboretum.

The Core 2 processors show up strong again here. Only the E6600 falls behind the FX-62, and the Core 2 Extreme X6800 cranks out roughly twice the average and minimum frame rates of the Pentium Extreme Edition 965.

 

F.E.A.R.
We used F.E.A.R.’s built-in “test settings” benchmark to get these results. The game’s “Computer” and “Graphics” performance options were both set to “High.”

The Core 2 processors come up big in F.E.A.R., as well, by posting solid gains over the Athlon 64s in both average and minimum frame rates. To give you some sense of how much more effective this microarchitecture is for gaming than Netburst, consider that the Core 2 E6600’s low is 52 frames per second—double that of the Pentium D 950 and an indicator of much smoother gameplay.

Battlefield 2
We used FRAPS to capture BF2 frame rates just as we did with Oblivion. Graphics quality options were set to BF2’s canned “High” quality profile. This game has a built-in cap at 100 frames per second, and we intentionally left that cap enabled so we could offer a faithful look at real-world performance.

BF2 has been considered something of a system hog in the past, but all of these CPUs are fast enough to run BF2 acceptably. The Core 2 Extreme keeps the game practically pegged at its 100 FPS cap.

Unreal Tournament 2004
We used a more traditional recorded timedemo for testing UT2004, but we tried out two versions of the game, the original 32-bit flavor and the 64-bit version.

UT2004 has been a thorn in Intel’s side for ages, but no longer. Core 2 changes the equation entirely. As for the question of 32-bit code versus 64-bit, it looks to me like none of the processors gains significantly more than the others by going to the 64-bit executable.

 

3DMark06
3DMark06 combines the results from its graphics and CPU tests in order to reach an overall score. Here’s how the processors did overall and in each of those tests.

3DMark’s four graphics tests are almost entirely GPU bound, even with our test systems’ GeForce 7900 GTX graphics cards. The Core 2 chips gain their advantage in the overall 3DMark score by doing very well in the two CPU tests.

 

WorldBench overall performance
WorldBench’s overall score is a pretty decent indication of general-use performance for desktop computers. This benchmark uses scripting to step through a series of tasks in common Windows applications and then produces an overall score for comparison. WorldBench also records individual results for its component application tests, allowing us to compare performance in each. We’ll look at the overall score, and then we’ll show individual application results alongside the results from some of our own application tests.

Well, Core 2 processors don’t just excel at 3D gaming. They’ve also taken the top three spots in WorldBench, and their margins of victory are impressive. The Core 2 Extreme opens up a huge lead on the Athlon 64 FX-62, setting a new WorldBench record (at least for us) in the process. Oh, and the Pentium Extreme Edition is a staggering 43 points behind the Core 2 Extreme X6800.

Audio editing and encoding

LAME MP3 encoding
LAME MT is, as you might have guessed, a multithreaded version of the LAME MP3 encoder. LAME MT was created as a demonstration of the benefits of multithreading specifically on a Hyper-Threaded CPU like the Pentium 4. (Of course, multithreading works even better on dual-core processors.) You can download a paper (in Word format) describing the programming effort.

Rather than run multiple parallel threads, LAME MT runs the MP3 encoder’s psycho-acoustic analysis function on a separate thread from the rest of the encoder using simple linear pipelining. That is, the psycho-acoustic analysis happens one frame ahead of everything else, and its results are buffered for later use by the second thread. The author notes, “In general, this approach is highly recommended, for it is exponentially harder to debug a parallel application than a linear one.”
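
For the curious, here is a bare-bones C sketch of that sort of two-stage pipeline, with one thread staying a frame ahead and handing its results to the encoding thread through a one-slot buffer. This is purely our illustration of the approach described above, not LAME MT’s actual code:

    #include <pthread.h>
    #include <stdbool.h>

    /* Per-frame psycho-acoustic analysis results (details omitted). */
    typedef struct { int frame; /* ... masking thresholds, etc. ... */ } analysis_t;

    static analysis_t slot;                 /* one-slot buffer between the stages */
    static bool slot_full = false;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

    /* Stage 1: psycho-acoustic analysis, running one frame ahead. */
    static void *analysis_thread(void *arg)
    {
        int nframes = *(int *)arg;
        for (int f = 0; f < nframes; f++) {
            analysis_t a = { .frame = f };  /* do the analysis for frame f here */
            pthread_mutex_lock(&lock);
            while (slot_full)
                pthread_cond_wait(&cond, &lock);
            slot = a;
            slot_full = true;
            pthread_cond_signal(&cond);
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    /* Stage 2: the rest of the encoder, consuming the buffered analysis. */
    static void *encoder_thread(void *arg)
    {
        int nframes = *(int *)arg;
        for (int f = 0; f < nframes; f++) {
            pthread_mutex_lock(&lock);
            while (!slot_full)
                pthread_cond_wait(&cond, &lock);
            analysis_t a = slot;
            slot_full = false;
            pthread_cond_signal(&cond);
            pthread_mutex_unlock(&lock);
            (void)a;                        /* quantize, Huffman-code, and write frame f */
        }
        return NULL;
    }

The one-slot buffer is what keeps the analysis exactly one frame ahead of the encoder, which matches the simple linear pipelining the author describes.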

We have results for two different 64-bit versions of LAME MT from different compilers, one from Microsoft and one from Intel, doing two different types of encoding, variable bit rate and constant bit rate. We are encoding a massive 10-minute, 6-second 101MB WAV file here, as we have done in many of our previous CPU reviews.

MusicMatch Jukebox

There are no real surprises here. The Core 2 processors excel at audio encoding and editing, just as they seem to everywhere else.

 

Video editing and encoding

Windows Media Encoder x64 Edition Advanced Profile
We asked Windows Media Encoder to convert a gorgeous 1080-line WMV HD video clip into a 320×240 streaming format using the Windows Media Video 8 Advanced Profile codec.

Windows Media Encoder

Adobe Premiere

VideoWave Movie Creator

Intel’s new processors have the edge in video encoding, as well.

 

Image processing

Adobe Photoshop

ACDSee PowerPack

picCOLOR
picCOLOR was created by Dr. Reinert H. G. Müller of the FIBUS Institute. This isn’t Photoshop; picCOLOR’s image analysis capabilities can be used for scientific applications like particle flow analysis. Dr. Müller has supplied us with new revisions of his program for some time now, all the while optimizing picCOLOR for new advances in CPU technology, including MMX, SSE2, and Hyper-Threading. Naturally, he’s ported picCOLOR to 64 bits, so we can test performance with the x86-64 ISA. Eight of the 12 functions in the test are multithreaded.

Scores in picCOLOR, by the way, are indexed against a single-processor Pentium III 1 GHz system, so that a score of 4.14 works out to 4.14 times the performance of the reference machine.

Add image processing to the list of categories that the Core 2 processors handle well. Notice that in picCOLOR, the Core 2 Extreme X6800 works out to be about 11 times the speed of a Pentium III 1GHz. How’s that for progress?

 

Multitasking and office applications

MS Office

Mozilla

Mozilla and Windows Media Encoder

Two of these three tests, the MS Office and Mozilla plus Windows Media Encoder ones, attempt to simulate real-world user multitasking. The Core 2 processors handle them both very well, unsurprisingly.

 

Other applications

Sphinx speech recognition
Ricky Houghton first brought us the Sphinx benchmark through his association with speech recognition efforts at Carnegie Mellon University. Sphinx is a high-quality speech recognition routine. We use two different versions, built with two different compilers, in an attempt to ensure we’re getting the best possible performance.

The Core 2 Extreme busts out a new record in Sphinx, needing only about 30% of its processing power to run this high-quality speech recognition routine in real time.

WinZip

Nero

WinZip is another impressive victory for the Core 2 CPUs, while the field is very tight (and probably largely I/O bound) in the Nero test.

 

3D modeling and rendering

Cinebench 2003
Cinebench measures performance in Maxon’s Cinema 4D modeling and rendering app. This is the 64-bit version of Cinebench, primed and ready for these 64-bit processors.

This one is another victory for the Core 2, but the contest is a little closer this time, with Athlon 64 processors taking second and fourth places.

Cinebench’s shading tests are single-threaded, and they allow us to compare the performance of shading with the Cinema 4D engine and software OpenGL with GPU-accelerated OpenGL. Our three Core 2 processors excel in all cases, whether they are doing the shading themselves or feeding a GPU.

 

POV-Ray rendering
POV-Ray just recently made the move to 64-bit binaries, and thanks to the nifty SMPOV distributed rendering utility, we’ve been able to make it multithreaded, as well. SMPOV spins off any number of instances of the POV-Ray renderer, and it will divvy up the scene in several different ways. For this scene, the best choice was to divide the screen horizontally between the different threads, which provides a fairly even workload.
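
Dividing the screen that way is simple enough; a C sketch of the row split (our illustration of the general idea, not SMPOV’s code) might look like this:

    /* Split image_height rows into nthreads contiguous horizontal bands,
       as evenly as possible; each band goes to one renderer instance. */
    typedef struct { int start_row; int end_row; } band_t;

    void split_rows(int image_height, int nthreads, band_t *bands)
    {
        int base = image_height / nthreads;
        int extra = image_height % nthreads;   /* first 'extra' bands get one more row */
        int row = 0;
        for (int t = 0; t < nthreads; t++) {
            int rows = base + (t < extra ? 1 : 0);
            bands[t].start_row = row;
            bands[t].end_row = row + rows - 1; /* inclusive */
            row += rows;
        }
    }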

We considered using the new beta of POV-Ray with native support for SMP, but it proved to be very, very slow. We’ll have to try it again once development has progressed further.

POV-Ray rendering has been difficult turf for Netburst-based CPUs to defend, but the Core 2 is much more competitive. Note that the Core 2 processors don’t seem to scale as well when going from one thread to two as the Athlon 64s. I’m not entirely confident that’s not the fault of a quirk in the latest version of SMPOV, so I wouldn’t read too much into it. Using an external program to call the renderer has its perils.

3dsmax 8 rendering
For our 3ds max test, we used the “architecture” scene from SPECapc for 3ds max 7. This scene is very complex and should be a nice exercise for these CPUs. Using 3ds max’s default scanline renderer, we first rendered frames 0 to 10 of the scene at 500×300 resolution. The renderer’s “Use SSE” option was enabled.

Next, we rendered just the first frame of the scene in 3ds max’s mental ray renderer. Notice that we’ve changed our time scale from seconds to minutes for this one.

Check out those render times with mental ray. Yeowtch. The Core 2 Extreme finishes nine minutes before the FX-62.

 

SiSoft Sandra Mandelbrot
Next up is SiSoft’s Sandra system diagnosis program, which includes a number of different benchmarks. The one of interest to us is the “multimedia” benchmark, intended to show off the benefits of “multimedia” extensions like MMX and SSE/2. According to SiSoft’s FAQ, the benchmark actually does a fractal computation:

This benchmark generates a picture (640×480) of the well-known Mandelbrot fractal, using 255 iterations for each data pixel, in 32 colours. It is a real-life benchmark rather than a synthetic benchmark, designed to show the improvements MMX/Enhanced, 3DNow!/Enhanced, SSE(2) bring to such an algorithm.

The benchmark is multi-threaded for up to 64 CPUs maximum on SMP systems. This works by interlacing, i.e. each thread computes the next column not being worked on by other threads. Sandra creates as many threads as there are CPUs in the system and assigns [sic] each thread to a different CPU.

We’re using the 64-bit port of Sandra. The “Integer x16” version of this test uses integer numbers to simulate floating-point math. The floating-point version of the benchmark takes advantage of SSE2 to process up to eight Mandelbrot iterations at once.
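
For reference, a scalar C sketch of the computation being timed (our own illustration; Sandra’s actual code is SIMD-optimized) looks something like this:

    /* Each thread handles every Nth column (interlacing), and each pixel
       iterates z = z*z + c up to 255 times; the SIMD versions simply run
       several of these iteration loops in parallel per instruction. */
    void mandel_columns(unsigned char *img, int width, int height,
                        int thread_id, int nthreads)
    {
        for (int x = thread_id; x < width; x += nthreads) {
            for (int y = 0; y < height; y++) {
                float cr = -2.0f + 3.0f * x / width;    /* map pixel to the complex plane */
                float ci = -1.5f + 3.0f * y / height;
                float zr = 0.0f, zi = 0.0f;
                int iter = 0;
                while (iter < 255 && zr * zr + zi * zi < 4.0f) {
                    float t = zr * zr - zi * zi + cr;
                    zi = 2.0f * zr * zi + ci;
                    zr = t;
                    iter++;
                }
                img[y * width + x] = (unsigned char)iter;
            }
        }
    }

Because every pixel’s iteration loop is independent, the interlaced-column split keeps all the threads busy without any locking.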

The Core microarchitecture’s rich execution resources are on display here. Netburst-based chips perform relatively well in these tests, probably because they are executing what I believe is a fairly simple, well-optimized program loop at a very high clock speed. The Core 2 processors, however, can complete a 128-bit packed SSE multiply and a 128-bit packed SSE add in each clock cycle, which works out to as many as eight single-precision or four double-precision floating-point operations per clock and considerably more work per cycle than competing microarchitectures can manage. The E6600 at 2.4GHz doesn’t quite double the performance of the X2 4600+ at 2.4GHz, but it’s close.
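
To put a rough number on that theoretical peak (our back-of-the-envelope figure, not a measured result): 2.4GHz × 8 single-precision operations per cycle ≈ 19.2 GFLOPS per core, or about 38.4 GFLOPS for both cores of the E6600. Real code never sustains the peak, but it gives a sense of how much SSE muscle Core can bring to a loop like this one.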

 

Power consumption
We took our power readings at the wall outlet using an Extech 380803 power meter. Only the PC was plugged into the watt meter; the system’s monitor and speakers, for instance, were not. The “idle” readings were taken at the Windows desktop, while the “load” readings were taken using SMPOV and the 64-bit version of the POV-Ray renderer to load up the CPUs. In all cases, we asked SMPOV to use the same number of threads as there were CPU front ends in Task Manager—so four for the Extreme Edition 965, two for the Core 2 and Athlon 64 X2 processors. The test rigs were all equipped with OCZ GameXStream 700W power supply units.

The graph below for idle power use has results with and without “power management.” By “power management,” we mean the dynamic clock speed and voltage throttling technologies from Intel and AMD, known as SpeedStep and Cool’n’Quiet, respectively. The Intel processors also have an enhanced halt state known as C1E. A processor’s halt state is invoked by the OS whenever the system is able to sit idle for a moment. The C1E halt state in the Intel processors ramps down the CPU clock speed and voltage in order to save power, so even without SpeedStep, the CPU’s idle power use is reduced. Keep that in mind when considering the “No power management” results for the Intel processors at idle.

Interestingly, we found that the Core 2’s C1E state doesn’t lower CPU voltage. The CPU multiplier drops to 6.0, bringing the clock speed down to 1.6GHz, but voltage appears to remain unchanged. Turning on SpeedStep, however, drops the CPU’s core voltage, allowing for even lower idle power use.

Another tricky part about power consumption testing is getting good numbers for our “simulated” CPU speed grades. In order to make it work, you have to set the proper CPU core voltage, not just the right clock speeds. I made an attempt at simulating the Athlon 64 X2 models 4800+, 4600+, and 4200+ and the Pentium D 950/960 by setting the CPU voltages manually, but I’ve put an asterisk next to those CPUs in our results as a reminder that they’re simulated. I didn’t even bother including some simulated CPU models because of the difficulty involved and a few questionable results.

For the Athlon 64 X2 4800+, I set the voltage at 1.35V. The X2 4600+ and 4200+ were set to 1.3V. The “power management” idle scores were simply taken from chips with the same cache size (the FX-62 and 5000+, respectively), because all of these processors share the same 1 GHz/1.1V idle with Cool’n’Quiet.

The Pentium D 950 and 960 were trickier, since each Pentium D’s voltage needs are programmed at the factory. In this case, I stuck with the default of 1.312V for both speed grades. On an 800MHz bus, the Pentium D 950 and 960 both clocked down to 2.4 GHz at idle via the C1E halt mechanism. The Extreme Edition 965 clocked down to 3.2 GHz at idle.

You’ll notice that the results below include numbers for the Energy Efficient versions of the Athlon 64 X2 3800+ and 4600+. AMD sent these CPUs out to us along with a more power-efficient motherboard than our Asus M2N32-SLI Deluxe test platform, whose nForce 590 SLI chipset seems to be something of a power hog. The board AMD sent, however, is not an enthusiast-class mobo with dual graphics slots, so we elected not to include it in our tests. We wanted to test the EE chips opposite the Core 2 Duo on an enthusiast-class board, so we stuck with the M2N32-SLI Deluxe. It’s possible that enthusiast-class boards based on the Radeon Xpress 3200 or the nForce 570 SLI chipsets could lower power consumption for all of the Athlon 64 processors here without compromising performance.

Whoa. Performance is way up with the Core 2 processors, and power draw is way down. The Core 2’s mythical “performance per watt”—which is actually a rather slippery thing to quantify—has gotta be the best on the market. The Core 2 Duo E6700 outperforms the Athlon 64 FX-62 more often than not, yet the E6700-based system draws 74 fewer Watts under load.

AMD has made substantial progress on this front with its new Energy Efficient processors. Under load, the Athlon 64 X2 4600+ EE system pulls about 20W less at the wall socket than the stock X2 4600+ system, and the 35W-rated X2 3800+ system draws less power than anything else we tested. Still, even these new CPUs can’t match the performance of the Core 2 processors that are in the same neighborhood in terms of power draw.

 

Overclocking
Like many of the thousand-dollar, high-end processors of late, the Core 2 Extreme X6800 has an unlocked multiplier, so overclocking this beast is simply a matter of turning up that value in the BIOS—even on an Intel motherboard. With very little effort, I had the X6800 running at 3.46GHz on a 1066MHz bus. This was with air cooling—a Zalman CNPS9500 LED. That’s very good air cooling, yes, but nothing terribly exotic.

I had to raise the voltage from the stock 1.2V to 1.375V in order to get it to be completely stable, but at those settings, the CPU ran a pair of Prime95 torture tests for a good while without throwing any errors. With the cooler fan running at its top speed, the CPU leveled out at about 70°C while running those tests.

I tried to coax the X6800 into running at the next multiplier up the ladder, for a top speed of 3.73GHz, but it wouldn’t quite go there. Even with 1.45V flowing into it, the X6800 would POST but wouldn’t boot into Windows. I suspect this CPU could go well past 3.46GHz with some bus overclocking, but I haven’t tried it yet. Even at “only” 3.46GHz, performance was astounding.
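
The multiplier math, for reference (the X6800 runs on a 266MHz base clock, quad-pumped to 1066MHz):

    11 × 266MHz ≈ 2.93GHz (stock)
    13 × 266MHz ≈ 3.46GHz (our stable overclock)
    14 × 266MHz ≈ 3.73GHz (POSTs, but won’t boot into Windows)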

This architecture has more headroom than a Ford Excursion. Performance scales up very well with clock speed, too. In fact, these scores may give us some insight into why Intel chose not to move to a 1333MHz front-side bus for the Core 2: the 1066MHz bus just doesn’t look like a serious performance bottleneck.

 
Conclusions
After years of wandering in the wilderness, Intel has recaptured the desktop CPU performance title in dramatic fashion. Both the Core 2 Extreme X6800 and the Core 2 Duo E6700 easily outperform the Athlon 64 FX-62 across a range of applications—and the E6600 is right in the hunt, as well. Not only that, but the Core 2 processors showed no real weaknesses in our performance tests. (I would say that Core looks like a more balanced architecture than Netburst, but at this stage of the game, Netburst just seems slow almost across the board.) No matter what you’re hoping to do with your PC, a Core 2 processor should be a very solid choice.

The PC industry can also breathe a collective sigh of relief about power and thermal issues now that Core 2 has arrived. Intel finally has a firm handle on those problems. These processors consume less power—and thus produce less heat—than desktop Pentiums have for quite a while. The E6700 system’s total power draw when fully loaded was 156 W, only 14W more than the Pentium Extreme Edition system drew while sitting idle. What’s more, even the high-end Core 2 processors’ power use was in line with that of the Energy Efficient versions of the Athlon 64 X2. That leaves room for many good things to happen, from less expensive cooling systems to quieter, smaller enclosures and even some righteous overclocking. Combine the low power draw with the performance we’ve seen, and the Core 2 is clearly the most energy-efficient desktop processor around.

As much as I appreciate the performance and efficiency of these new CPUs, though, I can’t endorse forking out a cool grand (minus one) for a Core 2 Extreme X6800. These top-end CPUs are always iffy values, even if they’re insane performers. Meanwhile, the prices on the first two Core 2 Duos are very reasonable for what you get. At $316, the Core 2 Duo E6600 looks like a tremendous deal, provided you can get your hands on one. The E6700 is pricier at $530, but it’ll beat the much more expensive FX-62 at almost every turn.

In fact, after seeing the Core 2 in action, many folks may be wondering how AMD is going to keep up. The Athlon 64 X2 4200+ currently lists for more than the Core 2 Duo E6600, and that’s just not gonna cut it. Fortunately, AMD has confirmed to us that a major price move is coming in July. We don’t have the specifics just yet, but they say they intend to maintain a competitive price-performance ratio. That may mean we’ll see the dramatic price cuts rumored to be coming, which would be a good start.

For its next trick, AMD needs to get its 65nm fab process going ASAP. I’ve heard prognostications that AMD won’t be able to compete against Core 2 chips with its current AMD64 microarchitecture. That may be the case, but I’m not entirely convinced. The contest we’ve seen in the preceding pages pitted CPUs manufactured on AMD’s 90nm process against CPUs made on Intel’s 65nm process. The Netburst fiasco at 90nm has made us forgetful about the benefits of process shrinks, but they can be substantial. AMD could be in a much stronger position if it gets to 65nm quickly.

Regardless of what happens with its competition, though, the big story here is that Intel has replaced its troubled Netburst microarchitecture with a world-beater. The Core microarchitecture and the chips based on it are a huge improvement, and a fitting end to the era of the Pentium. 

Comments closed
    • Z-Gradt
    • 13 years ago

    This is refreshing for a change. A new architecture where the midrange (E6600) is about the same performance or faster than the previous top of the line (FX-62). Just look at how many different generations of processors were rated at the “3200” mark or only marginally higher. On top of that, it’s about $800 less than what the FX-62 was selling for last month!

    My upgrade strategy from my 486SX-25 up until my Athlon XP 1600+ was to upgrade to a CPU which was 2X fast as my current one, as long as it cost between $150 – $200. This meant an upgrade about every 2 years, and was always a new architecture each time until it stalled out because of the K8 (both on timing and price), so I ended up with 3 different generations of K7 (Thunderbird, Palomino, and Barton). I’d have to say that my most exciting and longest-lived upgrade was the Palomino, which I might’ve had longer had it not died a mysterious death after over 3 years of dependable service.

    Now that Intel is competitive both performance-wise and price-wise, hopefully AMD will become competitive price-wise. When you look at the big picture, the performance leap is noticable, but not huge over what is currently available, especially considering the disturbing lack of 2MB L2 cache benchmarks. The real story is the down-to-earth price they’re charging for a well-designed chip, which puts AMD firmly back into the role of the underdog. Well, that and what I like to refer to as the end of the “Netburst ordeal”.

    I think I’ll have to revise my pricing guideline and go for the E6600. The thought of having a chip with the performance of an FX-62 is just too tempting.

    • Tommyxx516
    • 13 years ago

    The CPU’s are competitively priced, however people should take note that the motherboards for the Conroe costs nearly $300.

    • HotToddy
    • 13 years ago

    This sucks i just got an athlon 64 oh well guess its already time to go back to intel 🙁 ……….I won’t let them see me cry

    • ej4love
    • 13 years ago

    time for every one to come up for air, this is my favorite site, but i do go to {h} every day, i save the best for last and that tech report. but every one is entitled to their opinion, so let kyle have his.

    • APWNH
    • 13 years ago

    70 degrees C?? Aren’t these chips supposed to be more power efficient, and the TDP for the X6800 is 75W, is it not? How does this correspond to 70 degrees on a monster HSF running its fan at full blast?? I’ve only heard of Prescotts reaching those temperatures, and this chip is supposed to have half the heat dissipation. From the CNPS9500 cooler review:

      • accord1999
      • 13 years ago

      Miscalibrated sensor.

      Techreport’s review showed the X6800 system matching the power consumption of a X2 4600EE for example.

      https://techreport.com/reviews/2006q3/core2/index.x?pg=16

      • Shintai
      • 13 years ago

      Its a known problem with badaxe boards showing wrong temperatures with C2D and some P4s with beta bioses.

    • Soul Colossus
    • 13 years ago

    What kind of prices can we expect on the E6600s when they hit the street? Certainly not $316 😛

      • Shintai
      • 13 years ago

      Around 330-350$ I think.

      316$ is if you buy 1000.

        • Krogoth
        • 13 years ago

        $316 is in 1000s and assumes that supply and demand would be steady, which I doubt is going to be the case in the next several months.

        I expect $350-400 to the norm until next year when Intel’s fab capacity fully shifts to from Netburst to ICM parts.

    • deathBOB
    • 13 years ago

    I just read the [H] review. More like [L] for limp. I’m just glad TR does such a good job with this kind of stuff.

      • Soul Colossus
      • 13 years ago

      limp?

        • FireGryphon
        • 13 years ago

        As opposed to [H] for Hard, I presume. I haven’t read the [H] review. [H] is ussally good, but TR is still best, as this article is amazing.

    • blastdoor
    • 13 years ago

    I’ve been trying to think of any scenario by which AMD could possibly respond to this in the next year. Here’s the best idea I can come up with–

    If you look at the transistor counts of the 4MB Core 2 and the 1 MB (512k per core) X2, you’ll notice the Core2 is approximately twice as “big”. This suggests that when AMD moves to 65 nm, the X2 will be half the die size of the Core 2.

    We also see that with socket AM2, AMD has quite a bit of surplus memory bandwidth.

    So, doesn’t this mean that AMD could build a quad-core A64 as soon as they make the shrink to 65nm, that would be approximately the same cost to manufacture as Intel’s Core 2 Duo? Granted, it’s easier to make cache than logic circuits, so maybe a quad core K8 would be somewhat more expensive to make than the dual core Core2, but probably not too much more.

    So, for just a little bit more than the price of a Core 2 Duo, could AMD sell an A64 X4? I would think that spending those transistors on extra cores rather than cache would improve performance quite a bit in cases where all four cores can be used.

      • coldpower27
      • 13 years ago

      There are some problems with what you said.

      Conroe with it’s die size with a 4MB cache is 143mm2 however Allendale with it’s 2MB cache is 111mm2.

      Now remember the Newcastle to Winchester transistion for AMD. 144mm2 to 84mm2.

      Theoretically you should have a die size 47% on 90nm of the 130nm, in theory, the actual transition produced a die size of 58% the size of the 130nm one. So you save less then the theoretical amount.

      A theoretical 90nm to 65nm would produce a die that is 52% as large as the 90nm one. Though this is only in theory. So let’s use a more practical example, 60% would be reasonable.

      The 2x512Kb model of Windsor currently has a die size of 183mm2.
      So on 65nm the 2x512Kb model Brisbane should have a die size of 110mm2 for the 2x512kb model.

      With the 2x1Mb model, which they will keep for products over 500US, they currently have a die size of 230mm2 with the Windsor 2x1Mb model, if that shifts over to 65nm, it’s die size will become 138mm2, not much smaller then Conroe, more like on par.

      Though you have to keep in mind AMD 65nm process will have SOI plus be a layer thicker,so it isn’t an apples to apples comparison.

      65nm only makes Quad Cores feasible rather then economical, it will take the 45nm process to make Quad Cores economical.

    • leor
    • 13 years ago

    i think i can explain the disappearance of porkster. it’s the conversation of fanboyism and critical mass principle. if you’ve noticed certain posters on here either showed up or started posting a lot more when porkster disappeared . . .

    only so much fanboyism can be contained in one place, after it reaches a certain level it acheives critical mass and shatters. now due to the conservation of fanboyism law, that rabid intel fanboyism could not be destroyed, so it splintered into shintai, fighterpilot and proesterchen.

    however the force of the original explosion was so great that an anti-porkster was created – kitty.

      • A_Pickle
      • 13 years ago

      Wow. That was a really mature post.

        • leor
        • 13 years ago

        zounds! i forgot A_pickle in the list! :-))

    • indeego
    • 13 years ago

    OK, I was kidding before. Intel did win. It’s obvious. Kitty is indeed just rolling with the punches.

    My only gripes:
    1. They led reviewers along for a ride.

      • Usacomp2k3
      • 13 years ago

      To match the price/performance, AMD is going to have to scale the Fx-62 to around $300, and the rest of them much lower than that.

        • BabelHuber
        • 13 years ago

        I expect AMD to bundle 2 AMD FX CPUs for a 4×4 setup to stay competitive in the high-end. Everything else won’t match the Core 2 EE in the short run, I don’t assume that the 90nm A64s have much headroom for further clockspeed enhancements.

        In Europe you can already get OEM AM2 X2 3800+ for ~190 Euros, I don’t know how it is elsewhere. This seems like the first sign of a massive price reduction by AMD.

          • indeego
          • 13 years ago

          “This seems like the first sign of a massive price reduction by AMD.” Not really. The price of that processor would have to be about 30-50% of that to remain competitive with Intel.

            • BabelHuber
            • 13 years ago

            30-50%? You must be joking. The E6300 is rated by Intel at $183. Even *[

    • quarantined
    • 13 years ago

    That was a great read, TR.

    Finally, it’s time to start a pc piggy bank.

    • Vrock
    • 13 years ago

    I think the kitty is an Intel employee who posts here to get us to say that Intel’s Core2Duo is better than any current AMD offering. It’s like reverse psychology or something.

      • Flying Fox
      • 13 years ago

      So Porkster is an AMD employee then? 😉

        • Vrock
        • 13 years ago

        Hmm, that would explain why he’s been quiet lately…..I’d expected him to be oinking all over the place in joy because of Core2Duo.

          • jobodaho
          • 13 years ago

          It all starts to make sense now…

    • Shining Arcanine
    • 13 years ago

    Damage, Anand has the pricing for the E6400 and E6300 at $224 and $183 respectively instead of $183 and $163:

    http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2795&p=2

    His numbers are what I have been seeing everywhere else for months. You might want to change the prices you listed.

      • Damage
      • 13 years ago

      Doh. I biffed on that. Fixed. Sorry.

    • Fighterpilot
    • 13 years ago

    “but right now it looks like there may be no advantge at all.”

    Advantage (Intel) 22.8% 55.0% 43.9% 30.9%

    Nice try Kitty…..best not to deny the obvious tho or no ones gonna reply to you anymore….

      • Flying Fox
      • 13 years ago

      Er… what benchmark/games were you referring to? I really don’t like the so-called “across the board” percentages.

        • thecoldanddarkone
        • 13 years ago

        They are from anand, the used 800×600 for complete cpu performance on crossfire

    • Fighterpilot
    • 13 years ago

    CPU Quake 4 HL2 Ep 1 F.E.A.R. BF2

    AMD Athlon 64 FX-62 156.7 170.0 164.0 108.7

    Intel Core 2 E6800 192.5 263.5 236.0 142.3

    Advantage (Intel) 22.8% 55.0% 43.9% 30.9%

    http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2795&p=17

    In case you missed this Kitty...... :)
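
    (For anyone checking the arithmetic: the "Advantage (Intel)" row is just each Core 2 frame rate divided by the matching FX-62 frame rate, minus one. A quick illustrative Python sketch, not from the article, using only the numbers quoted above:)

fps = {
    "Quake 4":  (156.7, 192.5),   # (Athlon 64 FX-62, Core 2) average FPS as quoted above
    "HL2 Ep 1": (170.0, 263.5),
    "F.E.A.R.": (164.0, 236.0),
    "BF2":      (108.7, 142.3),
}
for game, (amd, intel) in fps.items():
    print(f"{game}: Intel ahead by {(intel / amd - 1) * 100:.1f}%")
# -> 22.8%, 55.0%, 43.9%, 30.9%, matching the "Advantage (Intel)" row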

      • hellokitty
      • 13 years ago

      what? more theoretical tests that are useless in the real world, from the site that already screwed up the first Intel demo ( admitted so, and changed figures at a later date ).

      Still can’t buy any of these,

      When they do show up and are faced with AMD's Opterons and FX line at 1/2 of what they cost now ( because AMD will have to lower the price by that much ), then we can talk price vs performance. I expected a 5-15% advantage on Intel's side ( far from what they claimed ), but right now it looks like there may be no advantage at all. It's in AMD's court. Lower prices drastically ( the FX line is way overpriced and they kept it that way because it had no competition ) and Intel has nothing.

        • PLASTIC SURGEON
        • 13 years ago

        You are truly the AMD fanboy of this site.

        • Flying Fox
        • 13 years ago

        I am not sure how you can say Quake 4, Half-Life 2 Episode 1, F.E.A.R. and Battlefield 2 are "theoretical and useless".

        While I don't know if AMD will pull that one off, the price cuts that have been reported affect only X2's and single-core A64's. No mention of FX and Opterons. It would be nice if they cut the S939 Opterons by a big margin, but that would be suicide for them. :-S

        • Shintai
        • 13 years ago

        Yes you can buy them kitty.

        http://80.167.217.210/pics/e6800.jpg

        Look and cry, B2 stepping packed and shipped July 5th :P

    • SGT Lindy
    • 13 years ago

    A review using real CPU-heavy games :)

    http://www.simhq.com/_technology2/technology_090b.html

      • indeego
      • 13 years ago

      Why do very few of the benches have labels on their graphs? We have no idea what we are really looking at here.

      • DrCR
      • 13 years ago

      Right with you, SimHQ is the best. Haven't been there for a while. Thanks for the link; I wouldn't have thought to check there. I know FiringSquad included Lomac in a few of their benches at one point…

      DrCR

      • Flying Fox
      • 13 years ago

      Ouch, you are going to hurt kitty: the E6700 beats the FX-62 by 50% at 640x480 (the lowest resolution you can get, to eliminate GPU influence). FS2004 is 40% for the E6700 over the FX-62 as well. This single-cycle SSE thingy really showed its money's worth.

        • Shintai
        • 13 years ago

        And Rise of Legends at 1600x1200

        http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2795&p=15

        120.5 FPS vs 78.4 FPS. Seems to kick hard in more demanding games.

        • SGT Lindy
        • 13 years ago

        Flight sims and RTS games need way more CPU than an FPS…..and less GPU.

        Lomac is a perfect example. It looks good, but NOT F.E.A.R. or even BF2 good. As it is, even with the best video setup it's not super fast, because the CPU has to keep track of everything that is going on in the game.

    • Forge
    • 13 years ago

    Wow. Nice to see that Intel had a second coup d’etat, with the engineers retaking the company from the market weenies.

    Sign me up for some C2D loving. I'll keep my dual Opterons, cause they lift heavy loads like nobody else, but the C2D is overwhelming, unstoppable. I'll be watching 975X/NF4 Intel/NF590 Intel mobos closely now, and I think there's a C2D on my desk in 2 or 3 months, tops.

    Now if Nvidia would just enable SLI on non-NV chipsets, I’d have a 975X and be a very happy d00d.

    • Damage
    • 13 years ago

    Friday night topic: Core 2 Duo. Discuss.

      • Jive
      • 13 years ago

      lol, what was the biggest front page thread we ever had at TR?

      FP!

        • Krogoth
        • 13 years ago

        Nope, nothing beats the original thread that refuses to die.


          • IntelMole
          • 13 years ago

          I hope that thing is backed up.

          On a 75GXP πŸ˜›

          There are some damn important memories and stories there,
          -Mole

          • jobodaho
          • 13 years ago

          I can’t seem to find the link to that one, though I know I’ve seen it before.

            • Jive
            • 13 years ago

            Would this: https://techreport.com/onearticle.x/3035 be it?

            • jobodaho
            • 13 years ago

            That’s the one I pulled up, but I thought there was one that had like 1000’s of posts.

            • lethal
            • 13 years ago

            AFAIK, I'm the only one with a 1337 post :P… although you had to post as an anonymous gerbil in the old comments. I'm almost sure that was the thread, but apparently it got the axe real bad :(.

            • Anonymous Gerbll
            • 13 years ago

            Is that news thread locked because of spam? I tried posting but it won’t work.

            Anyway, I still use my nearly 7-year-old DTLA 307030 as my main drive.

            • PerfectCr
            • 13 years ago

            No, it's the old comment system; you'll need to register for an account (yes, registration still works for old comments)

            https://techreport.com/manage_account.x

            • Anonymous Gerbll
            • 13 years ago

            thanks, it worked.. i don’t like that system though… can’t edit it

      • My Johnson
      • 13 years ago

      262 comments? Intel FTW!

      • DrDillyBar
      • 13 years ago

      Finally (#40).

      Back when my two computers were a PIII-800EB and a dual PIII-S 1.266GHz, I watched the arrival of the Pentium 4. After the first reviews I decided to wait for 2.5GHz+, since anything less wouldn't even be as fast as my dually was; that is, until I noticed my FSB was saturated. At that point, I knew the Athlon XP and such were doing well for gamers, but gaming is not the only reason I own my computer(s)

      Then my dual (Tyan) broke a socket and I dropped ~$600CDN on a P4C 3.0GHz (HT) w/ 512MB and an ASUS P4P800 Dx (MP's cost a left nut at the time). S939 mobos were not yet available, and I (wrongly) chose not to go with S754. Tyan ultimately replaced the board for me, for an arbitrary $75US; kudos.

      The reign of the Athlon64 is worthy. Hammer took ForeveR to arrive, but who remembers that bit Now? And other than an AMD 486DX4-100MHz of my own (with PCI slots, which was the biggest thing selling the Pentium 60/66), the K5 and K6 were 'Les Misérables'.

      Then Quantum Physics prevented Intel from scaling to an ever smaller process with an ever increasing clock rate; Netburst had a turbo failure and The Pentium ceased.

      They’ve taken the P-III, made it the P-M, and then increased the number of cylinders and given it gobs of memory.

      Sweet! or Finally.

      • Flying Fox
      • 13 years ago

      And there I was on the front page looking for the separate entry for this FNT. :)

    • PerfectCr
    • 13 years ago

    Can't you guys see hellokitty is nothing more than a troll (and a poor one at that) and an attention whore? Just ignore him and he'll go away. Geez.

      • thecoldanddarkone
      • 13 years ago

      my new slogan: if you can't link it, shut it (this actually only applies to kitty)

    • zgirl
    • 13 years ago

    I’d hate to say this but the numbers don’t match up.

    The part that was previewed back in March was supposedly the 6700.

    And it was crushing the FX-62.

    Yet with the TR review, the 6700 isn't putting up those numbers and the 6800 is just barely reaching them.

    Granted, Intel has a very good CPU here. No one doubts that, but honestly, based on those preview numbers the 6600 should have been where the 6700 is, the 6700 where the 6800 is, and the 6800 should have left us slack-jawed, mouths agape in awe.

    That didn't happen, in my opinion. And that makes those early preview numbers look fishy.

    But if the prices come in where they are supposed to be, the 6600 might be the sweet spot. I certainly am not paying $1000 for the 6800. I should have been getting that performance with a lesser chip.

      • thecoldanddarkone
      • 13 years ago

      I really don't think it's fishy; if you look at the AM2 transition you realize that there was a gain.

      Looking through multiple reviews seems to reinforce that fact.

      • Shintai
      • 13 years ago

      They didn't compare it to an FX-62 back then, but an OCed FX-60 on 939.

      TechReport uses a single 7900GTX. Back then they used CrossFire 1900XTs.

      So the numbers are missing crossfire/sli, the AM2 transition + chipset, and maybe a few other things.

      Oh, and not to forget, a 64-bit OS among other things was used here. Core 2 Duo's macro fusion doesn't work so well with 64-bit, so pure 32-bit would be a little faster.

      • Flying Fox
      • 13 years ago

      LOL, Shintai is basically quoting what I have said. :D

      I would add that Anand screwed up the first preview tests, so if you go read his retest on the black box the difference has already shrunk.

      TR uses DDR2-800 for the top-end FX-62, so you would expect it to close the gap a bit. Otherwise AMD would be seriously in trouble.

    • indeego
    • 13 years ago

    Why did you choose to only O/C the least bang/buck processor (6800) over the 6600? If the 6600 ramps up as much as the 6800, you have an absolutely incredible bargain.

      • Jive
      • 13 years ago

      Over at Anand, they got both the 6800 and the 6600 up to ~4GHz actually, but that was with an aftermarket cooler, I believe.

        • 5150
        • 13 years ago

        Damage used a Zalman on his for overclocking, that’s aftermarket.

        • IntelMole
        • 13 years ago

        Possibly because:
        1) Unlocked multiplier on EE chip = easy
        2) Highest clocked chip is most likely to have most headroom, which ultimately gives you more information about a core than a speed-binned lower speed part (IMO) i.e. I can say now “this core can get up to 3.9GHz, but my part doesn’t…”

        If yields are good, overclocks are good, and you’re more likely to get a decent performing lower-bin chip.
        -Mole

          • Jive
          • 13 years ago

          The E6600 hit 4GHz on air as well, not only the E6800. The E6700 hit 3.9GHz, while the E6800 hit 4GHz. They used the "Tuniq Tower" on all three chips. So unlocked multiplier or not, they were all able to reach 4GHz or close to it, but only the X6800 was able to do it with the other components at stock speed.

            • IntelMole
            • 13 years ago

            Which tells you one of two things:

            – That Intel’s yields on this chip are pretty damn tasty, they’re basically speed-binning because otherwise everyone would have an X6800 EE for nothing.

            – That everyone’s getting hand picked selections marked for certain speeds, and so all of these overclocks are worthless anyways and just a test of motherboard stability.

            Personally, I’d guess that neither is right. But I still value overclocking the highest speed processor because it lets me know how far I might be able to get :-D,
            -Mole

    • hellokitty
    • 13 years ago

    So, that's it??? Athlon beats it in a few tests, including some of the more important ones like Cinema 4D rendering and LAME encoding; in others it's almost right up there. ( I'm ignoring the overclocked fake Conroe part )

    Intel clearly stated at least 30% across the board on Core duo 6700, and that chip actually either loses or is barely faster than the athlon.

    I told you so.

    All AMD has to do is cut prices of those overpriced top parts in half and I'm getting one ( and they will ). Core Duo = even more disappointing than expected.

      • 5150
      • 13 years ago

      …and so it begins…

        • kvndoom
        • 13 years ago

        We have to have Kitty, to balance out Porkster. Otherwise the universe would spiral out of control.

          • 5150
          • 13 years ago

          Porkster still come around here? I guess I haven’t seen him for a while. In fact, when I first read this post I thought this was Porkster, but then I realized that in fact it was his polar opposite.

            • danny e.
            • 13 years ago

            Shintai = Porkster

            • Shintai
            • 13 years ago

            A what?

            danny e.=hellokitty?

            • danny e.
            • 13 years ago

            lol if only i could delude myself like hellokitty.

      • FroBozz_Inc
      • 13 years ago

      Dude, you really don't know WTF you are talking about, do you? Wait, don't answer that ;)

      • Jive
      • 13 years ago

      And so it starts… Oh boy…

      • Krogoth
      • 13 years ago

      We need some bacon to counter this pile of kitty litter.

        • hellokitty
        • 13 years ago

        Well, it couldn't be more obvious. The 6700 is the part that Intel supposedly demoed originally. And that part lost in LAME encoding and Cinema 4D, and in most others is barely 1% – 10% faster.

        A far cry from what Intel promised.

        So, what exactly is unclear to you guys?

          • 5150
          • 13 years ago

          You make it sound so simple. Want to take care of that whole Middle East problem going on? You should be able to wrap it up in about a paragraph.

          • Smurfer2
          • 13 years ago

          The E6700 normally beats the FX-62. The E6800 is the chip in direct competition with the FX-62 (both cost roughly 1000 dollars). The E6700 is 530 dollars, much cheaper. Yes, AMD can compete with price cuts, but that will hurt AMD's bottom line.

          How are those the most important benches? Gaming benches are just as important, and the E6600 beats the FX-62 in most gaming benches.

          Now for the big one, why am I wasting my time with this?

            • hellokitty
            • 13 years ago

            6700 chip is not competing with anything. You still can’t buy it and we’ll see what the price cuts will be.

            Gaming benchmarks are the most important? I guess that proves I'm dealing with children here.

            • 5150
            • 13 years ago

            Children!? I know you are but what am I?

            • IntelMole
            • 13 years ago

            You missed out "…. you said you are …." in the middle :P

          • Chryx
          • 13 years ago

          it was demoed with a pair of X1900’s in crossfire.. against a DDR1 based Athlon64…

          look how intellectually dishonest you are.

      • Flying Fox
      • 13 years ago


        • DaveJB
        • 13 years ago

        It’s worth noting that Intel claimed that Core 2 would be 40% faster than /[

        • hellokitty
        • 13 years ago

        excuses, excuses, nothing else.

        The whole argument started with the initial Intel demo, can’t go back and adjust claims after the fact.

        I said it wouldn't be even close to what those initial tests claimed, and any adjustment, as you say, just proves my point. Too late to backtrack.

          • thecoldanddarkone
          • 13 years ago

          Link or shut it

          Intel told us to expect an average performance advantage of around 20% across all benchmarks, some will obviously be higher and some will be lower. Honestly it doesn’t make sense for Intel to rig anything here since we’ll be able to test it ourselves in a handful of months. We won’t say it’s impossible as anything can happen, but we couldn’t find anything suspicious about the setups.

          http://www.anandtech.com/tradeshows/showdoc.aspx?i=2713 (first preview)

            • Flying Fox
            • 13 years ago


            • thecoldanddarkone
            • 13 years ago

            I linked the original article, the one that they made the mistake in. They seem to have edited the picture; nevertheless, they say that they did edit them.

      • derFunkenstein
      • 13 years ago

      mmm…no, I do believe I see the X6800 on top of all the Athlons in Cinema4D. Get your eyes checked.

        • Flying Fox
        • 13 years ago

        Nah, he’s still hung up on Anand’s PUI’ed claim of 40% lead of the E6700 over the FX-60 @ 2.8GHz (FX-62 speeds minus DDR2). And somehow he stretched it to “Intel claimed the E6700 can beat the FX-62 40% across the board”, which no one (even Anand’s PUI) ever claimed.

          • totoro
          • 13 years ago


            • Flying Fox
            • 13 years ago

            If he wasn’t PUI, then why did he mess up so bad on the tests back at IDF?

            Running the AMD platform @ 1280×1024 vs the Intel platform @ 1024×768 is the kind of newbie mistake that you would not expect him to make.

            • totoro
            • 13 years ago

            I forgot that people couldn’t detect sarcasm over the Internet.
            At any rate, maybe he was just too excited? I don’t know.
            His (initial) results were poorly obtained, to be sure, but I just wait for TR anyway.

            • Flying Fox
            • 13 years ago

            No emoticons, no sarcasm tag. It will be easy to get confused. :)

        • hellokitty
        • 13 years ago

        The 6700 is the real competition for the Athlon FX. That is the one ( supposedly ) used in the initial demos, and once AMD cuts prices, that will be the battle: 6700 vs FX-62. Nobody will buy these extreme chips for $1000, just like nobody was buying FX chips at that price. But that's about to change.

          • Flying Fox
          • 13 years ago

          I will remember this. No way is the FX going down to ~$500 like the 6700. If it is not fast enough, AMD usually just releases a higher speed grade and EOLs the previous FX. Marketing-wise the FX's competitor is the X6800, and for the same price there is really no competition. Keep dreaming.

          They can have a similarly clocked X2 for around the same price though.

          • green
          • 13 years ago

          true
          but thanks to intel's speed grading the market competitor to the e6700 will be the x2 5000+
          both are top-end desktop dual-core chips from their respective camps
          though the e6700 is clocked 66.6MHz higher than the x2 5000+
          at their premium end the x6800 will compete with the fx-62 until the fx-64 comes out
          the x6800 is clocked 133.3MHz higher than the fx-62 which would give it some advantage

          in terms of price though it's a different story
          waiting on the price cuts to be official before making comparisons
          simply saying up to 50% off doesn't tell you crap
          from what i can tell though the sempron and x2 lines are cut
          fx and opteron lines will reduce in price as per usual with seasonal change and new processor introduction timelines

            • Fighterpilot
            • 13 years ago


      • Freon
      • 13 years ago

      Stop feeding the troll…

    • stdPikachu
    • 13 years ago

    Has anyone seen any power consumption figures for the low-end Conroes? I'm interested in upgrading the HTPC under the telly to a unit that consumes less power than my 3500 Winnie… at the moment I'm looking at a 479 mobo with a T2300E, which seems to be the best bang per power buck.

    • willyolio
    • 13 years ago

    what happened to those 30% performance gains from the intel black boxes?

    all in all, though, very impressive. i don’t need a new comp any time soon though, so i might even be able to hold out until K8L and see how things go from there.

      • Shintai
      • 13 years ago

      Might be more with faster GFX. A single 7900GTX is used here. The Intel benches at IDF etc. were with a CrossFire setup. And AM2 gave a boost to games with the higher bandwidth.

      • Flying Fox
      • 13 years ago

      There are some tests that are 30%. If you look at the SSE benches then there is really no contest.

      Also, TR’s tests are mostly done on XP64. There is supposed to be a bigger difference if it is run on XP32.

    • vdreadz
    • 13 years ago

    What about HT ( Hyper-Threading ) ???? Just curious. I heard that they'll include it later on or something like that? Why not now? What is the reason for this?

      • just brew it!
      • 13 years ago

      HT was essentially a stopgap measure to deal with the negative performance impacts of the long pipelines which were dictated by the Netburst architecture. The potential benefits of HT for a design like Core (which is inherently better at keeping its execution units busy) are much smaller.

      I do not expect to see HT reappear any time soon. Maybe in a few years when Intel and AMD start trying to push clock speeds into the 10GHz range. ;)

        • vdreadz
        • 13 years ago

        Thanks a lot for clearing that up :)

        • packfan_dave
        • 13 years ago

        I've read in other places that simultaneous multi-threading (of which HT was an implementation on the P4) actually makes a lot of sense for a very wide architecture like ICM. IBM has a different version of SMT in Power5 (which is also a pretty wide architecture) that works pretty well, and yet another variant in the Xbox 360 CPU (though that one's a narrow/high-clock-speed core). So it wouldn't be a big surprise if HT returns in Core 3.

          • Flying Fox
          • 13 years ago

          They will go to quad cores first before even entertaining the idea of bringing HT back.

          • IntelMole
          • 13 years ago

          It does make a lot of sense if the core is massively wide. But it only gives you another processor in Windows on which to execute, whose performance is dependent on other threads not stealing its resources.

          If, instead, you can utilise one core that's half the width at 100% (or nearly) most of the time, then why not just build two identical cores like that? You get budget-bin rewards, you're not building such a massive chip in the first place so your overall yields for the dual core will be higher too, and you don't need to support something that potentially impacts performance in unpredictable ways like HT did.

          It could, however, make sense in a massively threaded application, such as server stuff – every few percent could matter there.

          But HT is dead. Long live SMP.
          -Mole

            • intangir
            • 13 years ago

            http://news.yahoo.com/s/pcworld/20060713/tc_pcworld/126403;_ylt=AsmzYaJPW.RUM7kV.QpNMtWs0NUE;_ylu=X3oDMTA3cjE0b2MwBHNlYwM3Mzg-
            http://www.xbitlabs.com/articles/cpu/display/replay_15.html

            • Flying Fox
            • 13 years ago

            Very few apps have long enough instruction sequences that have no load/store dependencies. Building super wide is a waste of resources because most of the resources will be idling.

            • IntelMole
            • 13 years ago

            "http://www.xbitlabs.com/articles/cpu/display/replay_15.html"

            I don't think I can dispute this theory, partly because I haven't looked into it enough, and partly because it sounds reasonable enough. The P4's branch predictor was pretty good, though. Other reasons it hurt performance included things like necessarily having to halve some of your execution resources (which were shared but exclusive); IIRC this included the trace cache and the storage of instructions in flight.
            -Mole

            • intangir
            • 13 years ago


    • Gungir
    • 13 years ago

    Anyone else feel the pillars of the Earth shaking? I’m pretty new to the computer world, with under 4 years’ experience, so this is the first time I’ve seen the performance crown passed in such a dramatic way. Ever since I started building machines, it was AMD, AMD, AMD, but Intel’s just smashed their way into the very top.

    As amazing as Core 2 is, let's not forget the lower end of the market, and AMD's (rumored, but likely) upcoming price drops. Depending on how low things fall, AMD X2s, especially the 3800+, 4200+, and 4400+, may take up the post of the average-user CPU. But that will take a very aggressive price drop on AMD's part. Core 2 does seem to make single-core Athlon 64s, especially low- and middle-range versions, almost defunct. I don't see this as a complete and total domination of the market by Intel, though it is a very bold move. I see this as forcing AMD to rely on the average end user, not the enthusiast, to make their revenue for a while. That said, they have their work cut out for them; both because AMD is not nearly as widespread as Intel in the consumer's mind, and because AMD will need one bloody fine architecture in K8L to trump Core 2.

    Honestly, when the smoke settles, I hope AMD takes the crown back. If my perceptions are correct, they brought precision and well-designed architectures to the CPU world, an alternative to Intel’s hack-and-slash, high clock speed approach.

    Please correct any errata in what I've said. I'm sure there are a few. ;)

      • PerfectCr
      • 13 years ago

      The price cuts are not rumored, they are happening.

      • bthylafh
      • 13 years ago

      It was pretty dramatic when the Athlon came out in ’99. That was the first time in my memory that Intel wasn’t top dog in x86 performance.

      • swaaye
      • 13 years ago

      Pentium4 was really Intel’s only bad design IMO. P3, P2, PMMX, Pclassic, 486, 386, etc were all pretty much the best on the market. The competition just copied until the 5th gen. Literally copied. They would scan the new chips from Intel and put them on the floor all blown up and copy them.

      I had an Am5x86 though, a nice 486 clone on 350nm that could clock really, really high for a 486. 160MHz out of that sucker when Intel's best was 100MHz. The writeback L1 was sweet too, lol.

      Everybody started to actually build their own designs once Intel got this copying outlawed. AMD, Cyrix, Nexgen, IDT started to make their own stuff. AMD K5 and K6 sucked pretty bad.

      AMD only stopped sucking when they got Athlon out in ’99. At least for gamers. K6 was a sweet integer chip, but the platform and FPU sucked badly.

        • SGWB
        • 13 years ago

        There have been a few times when Intel’s new chips were slower at introduction than the previous generation. Intel hasn’t pulled off a performance jump like this since they introduced the 486.

        The early Pentiums were slower than the 486 DX4 100MHz. Also, the Pentium MMX extensions were problematic. MMX could improve multimedia performance, but there was a potential performance penalty when using MMX code because it used the same registers as the floating-point execution units. Applications would have to switch between two modes to run either MMX or floating-point operations. Intel addressed these problems when they released SSE.

        I personally thought the PII was a very compromised design, due to the external, half-speed L2 cache. The on-die L2 cache in the P-Pro was too expensive to manufacture for a consumer-level CPU. So Intel moved the L2 off-chip and created the Slot 1 interface to accommodate it. The P-Pro routinely bested the PII in many benchmarks and applications.

        The early P4 did not distinguish itself as better than the Athlon, although it was much better than the PIII. It wasn't until the Northwood P4s that Netburst came into its own.

          • Fire Child
          • 13 years ago

          If memory serves me correctly, the 486 DX4-100 came out after the Pentium 60/66 and it didn't handle floating point anywhere near as well. For most real-world purposes the DX4-100 was on par with the performance of a Pentium 60, except when games like Quake 1 came along. The DX4-100 did cost a hell of a lot less than the Pentium 60, though, so at the time a lot of people considered going the Pentium route a waste of time unless you could get the Pentium 90/100. Even the later Pentium 75 wasn't that amazing really.

          However, the Pentium 60 was in no way slower than the DX4-100. That said, in all my time using computers since the humble 486 SX-25, I have never seen Core 2 style domination across the board until, well…Core 2 :\

          Intel previously had been going up against pretty slow competition, so it's kind of fitting that they get all their aces when the competition just happens to be playing at its best.

          • swaaye
          • 13 years ago

          “There have been a few times when Intel’s new chips were slower at introduction than the previous generation. Intel hasn’t pulled off a performance jump like this since they introduced the 486.”

          Yeah that’s true.

          “The early Pentiums were slower than the 486 DX4 100MHz. Also, the Pentium MMX extensions were problematic. MMX could improve multimedia performance, but there was a potential performance penalty when using MMX code because it used the same registers as the floating point execution units. Applications wouolod have to switch between two modes to run either MMX or Floating point operations. Intel addressed these problems when they released SSE.”

          Pentium MMX was a big boost not so much because of MMX, but because it got a 32 KB L1 cache (over the 16KB L1 before). MMX was useless for games, but it did get use with audio and 2D image manipulation.

          “I personally thought the PII was a very compromised design, due to the external, half speed L2 cache. The on-die L2 cache in the P-Pro was too expensive to manufacture for a consumer level CPU. So Intel moved the L2 off-chip and created the Slot 1 interface to accomodate it. The P-Pro routinely bested the PII in many benchmarks and applications.”

          Yeah but the P2 bested everything from the competition. PII had double PPro’s L1, a faster L2 eventually anyway, and MMX (even if useless). It’s also faster on 16-bit code than Pentium Pro.

          Sure the cache was cheaper in design, but I think it was a necessity for clock speed ramping (PPro never went above 200 MHz). PPro’s cache was not on-die, it was on-package and connected to the CPU core. If the cache was defective, the whole thing, including the installed CPU core, had to be tossed. Testing couldn’t take place until it was assembled.

          “The early P4 did not distinguish itself as better than the Athlon, although it was much better than the PIII. It wasn’t untill the Northwood P4’s that Netburst came into it’s own.”

          P4 was bested by P3 for months after launch. Willamette was never competitive with much. Especially if you compared it to Tualatin P3. Northwood was the peak of the design, because K8 hadn’t shown up yet and AthlonXP was getting old.

      • hellokitty
      • 13 years ago

      So, that's it. Athlon beats it in a few tests, including some of the more important ones like Cinema 4D rendering and LAME encoding; in others it's almost right up there. ( I'm ignoring the overclocked Conroe part )

      Intel clearly stated at least 30% across the board on Core duo 6700, and that chip actually either loses or is barely faster than the athlon.

      I told you so.

        • thecoldanddarkone
        • 13 years ago

        Where do you get this 30 percent stuff? Find me the quote, because on 2 occasions I quoted 15-20 percent from Intel vs the FX-60. Prove it or shut it, seriously.

        • SGT Lindy
        • 13 years ago

        The putty can dish it out but can't take it……thanks for the laughs, baby!

      • Crayon Shin Chan
      • 13 years ago

      Well, if you include gfx cards, you can go look at the old days when the ATi Radeon 9700 Pro ruled the roost. Nowadays I think even a 6600GT can beat it, but back then it handily paid the NV25 back for the injustices it committed against the Rage, Rage6C and R200.

    • Freon
    • 13 years ago

    Wow. I’m stunned. I was expecting this to challenge AMD, but Intel whipped out the lead gauntlet this time.

    The E6600 is looking killer. Only about $20 more than the current X2 3800+ (non-EE) street price and it is just about trading punches with the FX-62. I’m a motherboard and RAM swap (to DDR2) away. I think they’re going to have to put the X2 4600 in the low $300 range to keep me from jumping ship.

    This is a huge relief. Maybe AMD will get with it and drop the X2 prices in half. It’s way overdue–they’ve really been milking them.

    • Mr Bill
    • 13 years ago

    ———-

    • Spotpuff
    • 13 years ago

    I like the overclocking numbers, but running a processor at 70C worries me… that’s incredibly hot.

    My A64 (if the readings are right) goes to full load of around 42C at 2.4GHz.

      • tfp
      • 13 years ago

      How does it do temp-wise up at 3.4? ;)

      • just brew it!
      • 13 years ago

      Hey, 70C is no hotter than the Athlon T-Bird core was back in the day. And the only reason it was a significant issue for the early Athlons was because AMD didn’t have a proper on-die thermal diode back then.

    • cf18
    • 13 years ago

    TR can you guys update your CPU ring for all the newest Core, Core2 and AM2 CPUs?
    https://techreport.com/cpu/

    • PerfectCr
    • 13 years ago

    From Kyle at [H]:

      • thecoldanddarkone
      • 13 years ago

      I think he proved himself wrong; in the Anandtech article at 16×12 there were differences.

      A lot of people were using FRAPS. So ya, whatever. Look at the Anand article, it even has FRAPS being used for some of the tests.

      So in the end he made himself look like an ass, and then proceeded to prove himself wrong.

      • 5150
      • 13 years ago

      [H]oser.

        • mirkin
        • 13 years ago

        HAHAHA eh

      • Damage
      • 13 years ago

      Display resolutions, graphics subsystems, game engine rendering loads, and end-user image quality settings vary widely in the real world. Kyle’s numbers are correct only for the settings at which he tested, which happen not to be especially revealing about CPU performance since they are largely GPU-bound. We intentionally choose settings that are not primarily GPU-bound when possible in our CPU game tests in order to demonstrate clearly the differences between processors. Those differences may be muted by other system bottlenecks, including graphics, but the relative performance merits of the CPUs involved tend to remain consistent. In other words, you may see a 20% advantage for Core 2 over Athlon 64 in our low-res Oblivion tests and only a 5% advantage at a higher display resolution, but it’s unlikely the Core 2 will fall behind the Athlon 64 when the GPU bottleneck hits.

      Kyle seems to be making a narrow and specific point, which is that CPU performance alone doesn’t determine gaming performance in totality when other bottlenecks come into play. I wouldn’t dispute that sentiment. It is an important point for the novice to understand.

      But to call our test results virtually worthless or misrepresentations is simply unfair. We have been fairly meticulous about documenting our testing methods and letting readers know what display resolution we used and the like. Our intention is to offer the reader good and useful information in a proper, helpful, and robust context. Yes, some understanding of computers may be required in order to fully appreciate the meaning of our results, but eh, life is complex. TR has never aimed only for the computing novice audience. We do offer system building guides geared more toward beginners in which we tend to recommend relatively powerful graphics cards and less expensive CPUs, especially for gamers.

      To claim that what we are doing is misleading or worthless is unfair at best. We could spend quite a few words criticizing the methods of competing publications, too, but we’ve tried to lead by example instead. Fortunately, all of this silliness is mostly harmless, thank goodness, because you guys do tend to understand these issues.
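
      One way to picture that bottleneck effect is with a deliberately simplified model (the numbers below are made up purely for illustration, not measurements): treat each frame as taking the longer of the CPU's and the GPU's per-frame work, and a healthy CPU advantage fades as soon as the GPU becomes the slower of the two.

def frame_rate(cpu_ms, gpu_ms):
    """Toy model: frame rate is limited by whichever unit takes longer per frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_a, cpu_b = 10.0, 8.0  # hypothetical per-frame CPU times in ms; CPU B is 25% faster
for label, gpu_ms in [("low res, low detail", 5.0), ("high res, max detail", 20.0)]:
    a, b = frame_rate(cpu_a, gpu_ms), frame_rate(cpu_b, gpu_ms)
    print(f"{label}: CPU A {a:.0f} fps, CPU B {b:.0f} fps ({(b / a - 1) * 100:.0f}% apart)")
# low res:  100 vs 125 fps -> the CPU difference is plainly visible
# high res:  50 vs  50 fps -> both are GPU-bound and the difference vanishes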

        • jobodaho
        • 13 years ago

        Well said, Damage. It's also nice that you look beyond games to show a CPU's performance. I'm particularly interested in the 3ds Max rendering graphs because I use that program a lot for my work. Gaming isn't the only thing out there, guys.

        • thecoldanddarkone
        • 13 years ago

        Professionally answered, TechReport FTW. No attacks, just simple answers with your perspective.

        • Shintai
        • 13 years ago

        I can't take Kyle's preview seriously. Especially since he disabled all power-saving features of the Core 2 Duo. And who does a CPU review based on bottlenecked GFX?

        Kyle's review is just gonna hit him in the rear with G80/R600 and upwards. And it is, if not misleading, then misguiding the consumer on a future-proof buy.

          • Jigar
          • 13 years ago

          I thought Kyle was a girl.. :)

        • packfan_dave
        • 13 years ago

        Yeah. Kyle’s methodology — which is to try and figure out the highest playable combination of resolution and features, and then get the framerate — is great for video card evaluations. But it’s awful for CPU evaluations — at least with the games and video card he used — because current cutting-edge games are GPU-bound at high resolutions with a single high-end video card.

        • PerfectCr
        • 13 years ago

        Thanks Damage. Good response. I find it extremely unprofessional of him to attack other publications. Instead of just presenting his review, he takes the first few sentences to attack others. That's low IMO. Good job, keep up the good work.

        • muyuubyou
        • 13 years ago

        I think it's fine to have different websites with different schools of thought. I can only disagree with Kyle's holier-than-thou attitude in this case.

        The point of the TR review: compare the new Intel processors to AMD's counterparts.

        The point of [H]: focus on your graphics card. Well, right, but what was this review about again?

        Tangentially, if he really wanted to make a useful review for the gamer, he should have focused on a GeForce 6800-ish card – that's what the average enthusiast runs these days – and the lowest new Core 2s. Let the reader know how much improvement he would get with that.

        • toxent
        • 13 years ago

        This is why I come to TR. Intelligent, well-thought-out response. Good one, Damage.

        • DrDillyBar
        • 13 years ago

        '...but eh, life is complex.' ROFL

      • Freon
      • 13 years ago

      I've proven to myself that the guys at [H] have very little grounding in scientific method or in presenting things with any sort of neutral point of view.

      Get your reviews elsewhere. Most of us moved on long ago.

      • vdreadz
      • 13 years ago

      Just trust TR ;)

      • indeego
      • 13 years ago

      Wow, so people really do still go to [H] and Tom's, this isn't just rumor.

        • Krogoth
        • 13 years ago

        [H] is alright for hardware reviews, but you have to bear in mind it is geared towards higher-end gamers.

        THG is always good for a laugh or two for the blatant-to-subtle slant toward whoever pays.

        • PerfectCr
        • 13 years ago

        I don’t “go there”. Someone mentioned it in the comments so I went to look at it and I found his comments.

          • 5150
          • 13 years ago

          Whatever, you closet [H] visitor! Why don't you just accept it?

            • thecoldanddarkone
            • 13 years ago

            I used to visit [H] for reviews, but now that he is taking pot shots at other reviewers, I am going to have to review this in my head.

      • mirkin
      • 13 years ago

      Kyle pole-vaults cleanly into the arena of opinion by fixing the gaming experience at maxed-out resolutions. Whose gaming experience would this be? How is this relevant across different games…

      He is really testing the limitations of a few arbitrary game engines – consider outdoor maps, etc., where it doesn't take much onscreen action to cause performance to bottom out and your experience to go slideshow. What does it take to make a game become GPU limited? And how can we really say a GPU twice as fast would push the frames in a way that scales according to the hardware? It's my opinion that Kyle wouldn't know the difference. His vast knowledge of hardware is mocked by his understanding of code.

      A good review needs to be useful.

        • Vhalidictes
        • 13 years ago

        HardOCP is no panacea, and Kyle’s prose could use some cleanup, but I just checked the article, and his take on “Does this new CPU matter for GPU-limited gaming” is both relevant and useful.

        Both of my current gaming PCs would experience zero benefit from a Conroe upgrade, as the article shows.

        Speaking of the article itself, it specifically mentions its goal, and seems very targeted toward testing a specific (if common) situation.

        Also, keep in mind that even budget PCs are going to be GPU limited before they are CPU limited – lower end video cards will peak sooner, so in no case is the CPU used going to be a giant framerate-fix.

        I don’t know anyone that runs PC games at 800×600 any more, and the most likely person to do so (a new gamer) is the exact type of person that won’t have a Conroe.

        I’m just not getting the [H]ate, especially considering that HOCP has run genuinely controversial articles in the past…

          • mirkin
          • 13 years ago

          It asks a non-question – can I render this benchmark GPU limited and then declare the value of this CPU?
          It's just his opinion – it's not useful. That's why I always [H]arp on him : )

            • Stranger
            • 13 years ago

            It's a perfectly valid question. Even in GPU-limited situations the CPU still plays a large part in performance.

      • Spotpuff
      • 13 years ago

      I’ll go out on my own here and say that while the core2 looks great at 10×7, I don’t play at 10×7 so the benchmarks are not indicative of performance that matters to me.

      Running games at 19×12, so if the core2 doesn’t help with that it’s a pointless upgrade for me. It would be nice if TR did two sets of benches, one at low res for CPU bottlenecking and one at high res to check if it even improves framerate or if the GPU is the bottleneck.

      While it's nice to see numbers that are such a huge jump over the previous generation, if there is no benefit in real-world performance at the resolutions I actually play at, then that is good knowledge to have.

        • swaaye
        • 13 years ago

        And that would belong in a video card review. You are just profiling where the video card is maxed out and CPU impact becomes a limit approaching zero. heh. That has never changed in the entire lifetime of the consumer 3D graphics card market.

        For gamers, the undeniable thing to do is to get the most powerful GPU you can afford, especially if you want to run super-high resolution. After that, grab a nice mid/low-end dual core.

          • Anomymous Gerbil
          • 13 years ago


            • pureevilmatt
            • 13 years ago

            2 years from now, when there are GPUs on the market that are 2-3 times as powerful as today's GPUs, are today's games still going to be GPU limited? No. Will a 30% performance difference between the two current top-end offerings have an impact on gaming at that point? Yes.

            Say you've bought into hype and think you might need an upgrade: is Kyle's "CPU review" useful in determining if you need a new CPU today? Yes (you probably don't). But say you've already decided to upgrade: is Kyle's review useful in determining which CPU you want to buy? No… It's a shitty CPU review.

            Tech Report's review answers both of the above questions (should I upgrade, and what should I upgrade to), and is a fair, complete, scientific, and well-written article. Can't say the same for the [H]'s.

          • Usacomp2k3
          • 13 years ago

          IMHO, that would be better in a CPU-scaling type of article, rather than an introductory article about the CPU as Damage's was. That said, one of the great things about TR is that they present (what I believe to be) the set of benchmarks that I need to formulate my own opinions (example: the memory latency tests). It could just be that my and Scott's desires are similar in that respect ;)

          • Z-Gradt
          • 13 years ago

          Holy crap. You mean the video card has more impact on gaming than the CPU??? That’s shocking and amazing at the same time… This Kyle guy must be a genius.

          I would try to run some benchmarks on my setup to see how it compares, but my monitor strains to get 1280 X 1024 resolution. Besides, isn’t 40 fps kinda on the low side?

        • ej4love
        • 13 years ago

        I agree, who plays games at low resolutions? LET EVERY ONE HERE
        TELL US

        • ej4love
        • 13 years ago

        I agree, who plays games at low resolution? LET EVERY ONE HERE
        TELL US what resolution they play at. HEY DAMAGE, HAVE U HAD A POLL ABOUT RESOLUTION AND WHAT WERE THE RESULTS? I'M SEARCHING FOR THEM NOW.

          • ej4love
          • 13 years ago

          https://techreport.com/sympoll/polllist.php?dispid=48&vo=48
          The above poll from this site shows that about 60% of us use 1280x960 or higher, and the test was run at 1280x1024, which 54 percent of responders use, so the test looks very relevant to me. good test, im waiting for my x2 to come down so i can buy 5 or more 4200 or higher, thanks intel, my brothers and sisters are going to love you for that. AMD, WE'RE WAITING FOR A RESPONSE. IF YOU DON'T HAVE ONE IN THE NEXT FEW MONTHS, HOW DID YOU SURVIVE THIS LONG? HOW COULD YOU SIT THERE AND THINK THAT WHAT YOU HAVE NOW IS MORE THAN GOOD ENOUGH? I'M NOT GOING BACK TO INTEL, I WILL GIVE UP COMPUTERS ALTOGETHER BEFORE I GO BACK.

            • kvndoom
            • 13 years ago

            While yer waiting to buy those X2’s, get a new keyboard too. Your Caps Lock is broken.

            • ej4love
            • 13 years ago

            no that is intentional.

            • PerfectCr
            • 13 years ago

            Get a life dude, seriously.

            • ej4love
            • 13 years ago

            thanks for the free advice, what other advice do you have.

    • Ruiner
    • 13 years ago

    Grab an e6300 and OC it and you’ll get my attention.

    • wierdo
    • 13 years ago

    Nice, but what about something in the $100 range? Are the Core-based Celerons OK, or are they planning to put them on a separate platform and cripple their caches too much, as with the old P4 Celerons? These could be good products if done right too.

    I got me an A64 3000+ for $89 and an A64 3500+ for $109 for my friend two weeks ago. With prices that low it's hard not to upgrade a few old PCs lying around here and there ;)

      • Shintai
      • 13 years ago

      Core-based Celerons are the E4x00 series with 2MB cache, or 1MB with an 800MHz FSB.

        • Flying Fox
        • 13 years ago

        Not really, I would think the single core Conroe-L will be more worthy of the “Celeron” moniker. Only in 2007 though.

    • Chrispy_
    • 13 years ago

    That's a shame. AMD have had the lead for 3 years, and because of industry scepticism (and possibly sales/competition malpractice by Intel) AMD have not had the money to accelerate their R&D, being too busy playing "survive and catch up".

    Intel have sat on their arse with one helluva ugly runt, yet the netburst architecture has continued to outsell the massively superior (in almost every respect) A64 architecture right up until very recently.

    I, like most people, will be buying whatever offers the best performance for my money, so AMD will no longer have me as a customer. Intel deserve my money once again; it's just sad that AMD didn't get the money (and the software support) they deserved for their even riskier and more daring switches when they moved from K7 to K8.

    /me sheds a tear for AMD and will watch with bated breath to see how AMD retaliate in 2008/2009. We all know that the performance crown isn’t going to switch back to AMD again until then, and that’s ASSUMING that Intel sit idly by for another 2-3 years.

      • Shintai
      • 13 years ago

      2008 brings Core 3, if it is named so, and 45nm. Core 2 Duo and Core 2 Quadro are relatively short-lived.

      • blastdoor
      • 13 years ago

      Yeah, I also feel bad for AMD. And in the longer run, it will be a shame for the consumer, too, if AMD gets squashed (as they very well might).

      Already we can see that Intel is intentionally holding back the speed of these chips (given their apparently incredible ability to overclock), just because they know they can. Without AMD offering competitive chips, we won’t get the best that Intel can deliver.

      Good news for overclockers, though!

      • just brew it!
      • 13 years ago

      I don’t think it’s “a shame” at all.

      Having two competent players helps everyone (OK, everyone except Intel and AMD’s shareholders) by keeping prices down. If Intel had not come up with a credible answer to the K8, then a couple of years from now AMD would be the 500 lb gorilla, charging extortionate prices for their CPUs — just like Intel did back in the 486 and early (pre-Celeron) Pentium days.

      Hey… now I can continue buying AMD processors with a clear conscience, because I can tell myself I'm still supporting the underdog. ;)

    • derFunkenstein
    • 13 years ago

    Brutal. Absolutely brutal.

    They’re coming out just in time for me to covet one, sigh…

    • LicketySplit
    • 13 years ago

    Great review, and it's a win/win for consumers. Gotta give Intel credit for finally getting their shizzle together. Time for AMD to get off the pot and get to work :)

      • Flying Fox
      • 13 years ago

      Actually time to spank them for waiting too long. A year (or more) later to get their new stuff out is just going to hurt so much.

    • Stijn
    • 13 years ago

    I already own a system with an AMD Athlon X2 3800+, and I really like it. However, I need to buy a second PC because I'm going to study in another town. Should I go for a cheap AMD (price cuts are coming) or buy a low- or mid-range Core 2 Duo?
    I saw Core 2 performs better, but I'll mainly use it for programming and everyday use…

      • Flying Fox
      • 13 years ago

      Wait a month further and get a Merom notebook?

        • Krogoth
        • 13 years ago

        If power figures and performance are any indication, it seems Merom will be the ultimate DTR chip. The older Yonah is still the better laptop chip, because it consumes less power and provides enough performance for 99.9% of laptop users' needs.

        It is not surprising though, as Yonah was designed as an SMP solution with the power consumption of the previous Dothan. Merom is a completely different animal that was designed to defeat the K8s, and power consumption dropped back to being a secondary objective.

          • Flying Fox
          • 13 years ago

          Oh really? I think average power consumption is actually down from Yonah? And of course, the complete package is actually Santa Rosa next year, but I have less hope of it being lower power with the 800MHz FSB.

            • Krogoth
            • 13 years ago

            https://techreport.com/reviews/2006q2/core-duo/index.x?pg=15
            BTW, the test system was using the more power-hungry 7800GTX 512 versus the Core 2 review's 7900GTX.

            • Flying Fox
            • 13 years ago

            It says nothing about Merom. I meant in my last post that Merom *should have* a lower average power consumption than Yonah. Without any numbers how can you claim Merom will use more power than Yonah? And Merom is definitely not going to be DTR only. Actually some ODMs are going to use C2D in their DTR units.

            • Shintai
            • 13 years ago

            Average power consumption on Merom is lower than Yonah. Also under 100% load as well. It's only short absolute peaks that are a tiny bit higher.

            Intel TDP is a whole new ballgame: 65W TDP is around 35-40W, 80W TDP was 59W, etc., under 100% load. So a Core 2 Duo is already the perfect DTR, Merom being the perfect laptop chip, and Core Duo is only left for ultraportables for a short time.

            But even if you compare completely raw, it's <5W Merom vs 31W Yonah.

            Or 5W for a dual-core Merom ULV vs 5.5W for a single-core Yonah ULV or 9W for dual-core..

            • Flying Fox
            • 13 years ago

            Nah, there is no "perfect chip", only a better chip. And 533FSB Merom isn't it in my opinion. I am saving my pennies for a ThinkPad Txx with 800FSB Merom on Santa Rosa. :)

            • Shintai
            • 13 years ago

            Was close to putting "perfect" in quotes :P

            With 533, you do mean 667? ;)

            • Flying Fox
            • 13 years ago

            Yeah, whatever it is that is not 800. :D

    • green
    • 13 years ago

    good review. my next upgrade is a conroe
    the video encoding benchmarks are what sold me
    i encode and batch up around 6 streams a day
    i’d be happy to get that number up to 10 per day

    • Fighterpilot
    • 13 years ago

    lol you’re kidding right?
    Unless Kitty logged in as Walt somehow…

    • WaltC
    • 13 years ago


      • Shintai
      • 13 years ago

      2+3+4, well actually 1+2+3+4. This just looks like a whine.
      1. http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2795&p=4
      And 8x? 4MB vs 2x1MB or 4MB vs 2x512KB? Next thing, is it Intel's fault AMD is going back to 2x512KB instead of 2x1MB?

      2. Retail X6800 B2 stepping clocks fine to 3.4GHz as well. Yes, these have shipped.

      3. Wrong, it already shipped some places. So that part is obsolete to use. Also, want a retail pic of the CPU? http://80.167.217.210/pics/e6800.jpg

      4. But K8 vs P4 was ok? Next we will have K8L vs Core 2 45nm. And if K8L loses you can just say K8L is a minor K8 and still old tech. It's just not valid...

      • Proesterchen
      • 13 years ago

      Walt, at some point one has to wonder why you still open your mouth, as pretty much every post you write just shows how little you understand the things you comment on. (and also how biased you are)

      With pretty much anyone else, I’d offer them a little bet right now, regarding AMD’s “near-term regaining of the performance crown”, but with you, I know that you’d do just about everything to wiggle your way out of it come pay up time.

      • IntelMole
      • 13 years ago

      I think I’m about to disprove every one of your “points”:

      "(1) The 4-mb L2 cache versions of Core2 you tested each have a significant multiple of the L2 cache found in the Athlons you tested. I am sure that in no small measure Core2's performance is intimately tied with this 4mb L2 cache--which is either 2x, 4x or 8x the size of the 90nm Athlon's L2 cache, depending on which Athlon you are comparing. If you had been given a 2mb L2 Core2 sample to test, I have a feeling that it would have been very revealing, and that that's exactly why Intel wants to start shipping its obviously lower-yielding 4mb Core2's first...;)"

      OK, first things first, AMD have stated that their architecture is not limited by L2 cache speed. Second, the increase from 512KB to 1MB does practically nothing. What makes you think that a move to a 4MB cache will do anything but destroy AMD's yields? Bottom line: this is not about the cache. They were not given a 2MB sample to test because they ain't out yet. QED. No conspiracy here, move along. Even if they had, the Linpack numbers show that the smackdown in terms of performance (including performance per watt) per dollar would have been just as great, if not greater.

      "(2) As you were forced to restrict your Core2 testing to an engineering sample as opposed to a production-grade sample (all the Athlons you tested were production products), this indicates to me that yields may well be an issue moving ahead for the Core2 4-mb L2 variants you tested. We'll see."

      Well d'uh. The Athlons have been out for three years. Try grabbing a production-grade sample of Conroe yourself, before or shortly after the NDA. You won't. Yes, with a giant L2 cache yields may be something of an issue, but for two things. One, Intel has experience with large caches from the Netburst days; if anything this cache is less demanding than that one since it doesn't prefetch so aggressively. Two, the die size of this is actually

        • WaltC
        • 13 years ago

        [quote truncated]

          • Flying Fox
          • 13 years ago

          Some of what you said are definitely valid points, but there are some things I would like to clarify.

          [quote truncated]

            • Shintai
            • 13 years ago

            You called! πŸ˜‰

            2MB vs 4MB

            http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2795&p=4
            http://www.pconline.com.cn/diy/evalue/evalue/cpu/0605/791941_13.html
            There are a lot of 2MB ES samples; they are just 1.86GHz mainly and “uninteresting” for the sites where we mainly see them.

            • IntelMole
            • 13 years ago

            Average improvement at anandtech – 3.5% (minimum 0%, maximum 10%, so there’s a lot more at the lower end than the higher end).

            WaltC, I think that proves my point that the 2MB cache versions won’t be that much slower. Thanks Shintai :-D,
            -Mole

            • Shintai
            • 13 years ago

            You’re welcome 🙂

            300!

            Anandtech’s 2MB vs. 4MB comparison also shows the bigger cache primarily only affects non-realtime applications. Only Quake and the Oblivion dungeon test show any real gains. And Quake is just due to pure memory speed. At a 1333MHz FSB, a 2MB cache would run away from a 4MB version on a 1066MHz FSB any day in Quake.

          • IntelMole
          • 13 years ago

          [quote truncated]

            • Shintai
            • 13 years ago

            Just wanted to say, the 2MB cache Core 2 Duo is/will be widely available at launch on the 27th 😛

        • Prototyped
        • 13 years ago

        Where are the Allendale benchmarks? Those are out at the same time as Conroe, correct?

        (Allendale == 2MB L2 Conroe at a lower clock, specifically the E6300 and E6400 now, and in the next quarter, E4200 and E6200.)

          • Flying Fox
          • 13 years ago

          Check #295

      • Flying Fox
      • 13 years ago

      Those were ok comments, however, if you read past the confrontational styles of Shintai and Proesterchen, then you can see that most of your concerns are not quite valid anymore. Here is my additional 2 cents.

      Sure, in a tech race this is what we should expect, the newer stuff from one company leapfrogging the other one, and so on. So this should come as no surprise.

      And lower latencies and memory bandwidth. Those are fine and dandy theoretical numbers. The actual results with real applications speak for themselves. The K8 arch still has an advantage for 4S server systems, so that’s why Dell will be selling those. But the 4S market is not that big to begin with.

      As for the 65nm scene, well, this is where little AMD seems to be having trouble. Current roadmaps put AMD’s first 65nm in 2Q07, a full 3 quarters behind the first Woodcrests and more than a year behind the first 65nm Intel chips (P4 6xx/9xx). Even more worrisome is that the initial 65nm chips from AMD will be lower speed grades, and that just does not bode well for that process, at least initially. And you know these semiconductor process things, they tend to slip more than be ahead. πŸ™ Then you have the capacity problem: there is only so much AMD can do given the comparatively limited fab space that they have. Couple that with the profits they have to maintain in order to meet the debt payments they are obligated to, etc.

      Let’s hope things will be smooth for AMD with the current roadmap, and you are right in that AMD is lucky that Intel will still be stuck with the Netburst past for the rest of the year with low supply of C2D’s. But any more delays on AMD’s part will not be good.

    • IntelMole
    • 13 years ago

    Anyone else here thinking that, as badly as the A64 slapped the P4 line about, it’s gotten just the same treatment here?

    I’ve also been telling people to wait for this to come out if they can; the price drops will hopefully be worth it. I’ll link them to this when I see them online next.

    Excellent review btw.
    -Mole

    • Jigar
    • 13 years ago

    I can’t see Kitty around, and I want my K8L as soon as possible now.

      • Flying Fox
      • 13 years ago

      You give them a couple billion and that should help. πŸ™‚

      And yes, money does make the world go round.

        • Jigar
        • 13 years ago

        Ya i called up AMD they said they are making special K8L for me … that would kill Conroe or Core2duo in the corner… Rusaph phasph…

          • ScythedBlade
          • 13 years ago

          Except AMD’s got pwnt like crazy now

    • cheesyking
    • 13 years ago

    I’m just amazed by how bad this makes Netburst look. K8 beat it in a variety of benchmarks by a healthy margin, but now Core is beating K8 by a similar margin… well!

    • Fighterpilot
    • 13 years ago

    Nice review.
    A new standard for CPU performance per watt, and at $300-$550 it looks very good against the AMD chips with similar prices.
    Sure, I’d love an Extreme Edition; it totally dominated those tests, but for $999 or whatever… forget it.
    The two new E6600 and E6700 are really fast, easy on power and heat, and look to have some headroom for fun overclocking.
    Good one TR.

    • Severus
    • 13 years ago

    Not much to say that hasn’t already been said but I will say this – very, very nice review, guys. Well done.

    • radioactive21
    • 13 years ago

    Thank goodness the hype lived up to its name. I had lots of friends asking me about upgrading or building a new PC; I’ve been telling them for months to hold off until official results for Conroe came out.

    I myself will most likely upgrade to the E6600 or the E6700 (depending on my wallet size at the time) in the coming month or so. My nice P4 2.8GHz had a great run; time to kick it up a notch.

    Who knows, I might go crazy and get the Extreme Edition. I am due for a major upgrade anyway, or so I tell myself.

    • Shintai
    • 13 years ago

    Damn the power consumption looks good. The performance we knew they could achieve πŸ™‚

    My E6600 will arrive this month along with my Asus P5B Deluxe πŸ˜€

    So a 65W TDP Intel chip is now way below that. Same as the 80W 3GHz Woodcrest, which used 59W too. Now if only GFX makers would do the same…

    And nice to see the OC is so easily reached. Anandtech also reached 4GHz on air, quite nice. 🙂

    Anyway, where is kitty?

      • Proesterchen
      • 13 years ago

      ‘kitty was prolly overrun by an imaginary truck bringing Core 2 Duos that don’t exist to the OEMs/distris.

      • JustAnEngineer
      • 13 years ago

      Damage’s power consumption figures were skewed because he used the extremely power-hungry NVidia chipset. Anand’s power consumption test with an energy-efficient ATI chipset yields an entirely different set of results:
      http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2795&p=7

        • Shintai
        • 13 years ago

        It does? The 2.4GHz Core 2 Duo vs. the 2.4GHz AM2 EE shows what I’m talking about.

        And for idle they only used half of the SpeedStep options. So regular CPU vs. special EE CPU. About the same 🙂

        • Flying Fox
        • 13 years ago

        I did mention elsewhere that EE vs. C2D are about the same, and the EE SFF wins out a bit in lower power usage (but may not lead in performance per watt). So AMD can still pitch the overall power consumption with those EE SFF ones. However, those SKUs are lower volume and higher price.

    • absinthexl
    • 13 years ago

    Holy crap, look at those 3ds max rendering times – thank you for updating both the software and scene.

    This is probably the single biggest jump I’ve ever seen in one generation.

    • leor
    • 13 years ago

    the cool thing about this is intel took a great CPU (the a64) and leapfrogged it to the same degree AMD used to own netburst. hopefully now AMD will answer in kind with 65nm and K8L, and so on . . .

    in the end it’s all better for us.

      • Flying Fox
      • 13 years ago

      Let’s hope AMD can hang on in the upcoming year before they can bring out the new stuff, or else it won’t be pretty.

        • tcunning1
        • 13 years ago

        Will AMD survive? Let’s remember that TR readers are hardly the average consumer; average consumers buy pre-built boxes at Best Buy and Costco and look more at the price on the shelf than raw performance. AMD will likely have a price advantage after the cuts, and if Intel marketing doesn’t convince everyone to buy Core 2, AMD will likely continue to do quite well in the marketplace.

          • leor
          • 13 years ago

          all this talk of will amd survive is kind of silly. they just signed a deal with dell, they still have a very robust server presence (and intel can’t touch them in 4way and up systems) and they just released the turion x2.

          nevermind the fact that businesses have their ups and downs, particularly in the tech industry. did nvidia survive the 5800fx debacle? AMD was in worse shape when they were stuck on the palomino, they’ll be fine, don’t lose any sleep over em.

    • Chaos-Storm
    • 13 years ago

    sorry double post

    • Chaos-Storm
    • 13 years ago

    I’ve got to hand it to Intel. I was sceptical. I would not have trusted any other review site until I saw it on TR. It looks like Intel really has about a 15-25% clock-for-clock advantage.

    On the one hand this does not look good for AMD. However, I am convinced AMD will catch intel when they also hit 65 nm.

    Even if AMD did not change their architecture one bit besides the move to 65nm, I am convinced they will reach parity with Intel when that switch occurs. Lower-latency memory combined with higher clock speeds and lower thermals at 65nm would allow this. AMD is making some changes, though, that should hopefully increase performance.

    What I’m really looking for, though, is a cheap X2 to replace my Barton 2500. I still don’t think I’ll go with Intel this time unless AMD keeps their X2 prices as high as they have been.

      • Proesterchen
      • 13 years ago

      [quote truncated]

        • blastdoor
        • 13 years ago

        Agreed. Just do the math. At the same clock speed, Core2 beats A64 by about 20%. The FX62 is 2.8 GHz. 1.2*2.8 = 3.36. We have no evidence of any A64 chips reaching such a high clock speed in the next year. All of the roadmaps actually indicate that 65nm parts will enter at low, not high, clock speeds. Meanwhile, we have just seen evidence that Core2 overclocks to 3.4 GHz relatively easily (which AMD could only match with a 4 GHz A64).

        K8 is dead in the water relative to Core 2. AMD will survive the rest of this year due to poor availability of Core2. But in 2007 it’s going to be ugly for AMD.

    • just brew it!
    • 13 years ago

    Does anyone else find it humorous that Dell — after many years of being a fanatical Intel-only shop — finally decides to start using AMD CPUs just as AMD cedes the performance crown back to Intel?

      • thecoldanddarkone
      • 13 years ago

      I know, it doesn’t make any sense to me either, and yes, it’s ironic in a way.

      • Flying Fox
      • 13 years ago

      Lots of theories being thrown around, let me list a few:
      – 4S market is still Opteron’s territory
      – Intel not doing single preferential pricing to them. This is Dell’s payback move.
      – Intel is sleeping with Apple+Lenovo this time. They may have a supply problem and need something else to supplement the stock.

        • DrDillyBar
        • 13 years ago

        Lawsuit….

        • Mr Bill
        • 13 years ago

        “4S market is still Opteron’s territory” is the one that I hear stands out from a performance standpoint.

      • duffy
      • 13 years ago

      Profits, not performance. Isn’t that Dell’s mission statement?

      • SGT Lindy
      • 13 years ago

      I will go with: Dell finally offered AMD just to shut up the AMD crowd… all the while knowing that Conroe was going to bi@tch-slap it down.

      Now they offer everything… and no one complains anymore… and their overall consumer status goes up.

      Now the demand for Dell/AMD products will go down… before it ever got ramped up.

      • StashTheVampede
      • 13 years ago

      Margin.

      Remember that one word; that’s all that is needed to answer why Dell went with AMD. Intel is moving to “more” flat-rate pricing between OEMs (to keep them competitive), creeping into Dell’s margins.

      AMD, in Dell machines, keeps the margins the way they like them: high. Dell is a company that likes to make money. Once a supplier cuts into their ability to make the money they like to make, they either raise prices or change suppliers.

      • Freon
      • 13 years ago

      AMD could’ve sold me a second CPU while they still had the crown had they not been gouging on X2 prices for the past 10 months. So I only bought one, now back to Intel.

      • Crayon Shin Chan
      • 13 years ago

      The Inq had an article on that. It mentioned yields, but I don’t think I quite believed that one. I also thought about it too, but the existence of such an article (even though I thought it was far-fetched) banished the thought from my head very quickly. I wonder why it was that quick.

      • poulpy
      • 13 years ago

      It could actually make sense that the firm without the most powerful tech around is more likely to give you a good deal/rebates on their CPUs!
      Intel was very generous with Dell when they had to keep selling their crappy P4s/Celerons.
      I think AMD would be pleased to leave the tech crown to Intel for a few months and instead get the hot spot with Dell to gain market share and make money.

        • SGT Lindy
        • 13 years ago

        I bet Dell will sell more Conroes than AMD CPUs in 2006.

        I bet Dell will get more Conroes than anyone else as well.

          • JustAnEngineer
          • 13 years ago

          I’ll take that bet.

          Dell is not about the high-end. If they were, they would not have become the largest computer OEM in the world selling only the lame Pentium4 and its crippled Celeron derivatives. If Dell can get AMD processors and compatible motherboards cheaper than it can get Intel parts, that’s what it will sell the most of, not the high-end Conroe that Intel will price out of the budget market.

            • packfan_dave
            • 13 years ago

            Err… they became the largest OEM selling Pentium IIIs, and competitive-to-world-beating Pentium 4s (oh, and some Willamettes in between). They held on to the lead despite Intel’s problems, but I don’t think they would have gotten it in the first place with the performance landscape as it was prior to Conroe/Woodcrest.

    • just brew it!
    • 13 years ago

    An absolutely top-notch review of a kick-ass CPU. Glad to see Intel has finally gotten their act together, ensuring that the CPU market does not become a one-horse race.

    Guess I picked the wrong time to upgrade to Socket 939 — I should’ve waited for the price cuts!

      • Flying Fox
      • 13 years ago

      Consider the extra price you paid for the 2-month headstart. I hope you max out the CPU usage to at least make the money worthwhile.

      That said, I will be hunting for cheap S939 X2’s as well, for my 3200+, hopefully they will still be available by end of this year or early next year.

      • Flying Fox
      • 13 years ago

      LOL, most kids have gone to sleep probably. Expect the digg floods to start showing when Europe wakes up in a couple of hours.

      • SNM
      • 13 years ago

      I just got linked here off of Google News too.

      • DrDillyBar
      • 13 years ago

      ‘DU’ double double ‘G’.

    • SNM
    • 13 years ago

    holy cow. That’s all I can say.

    Of course, the only benefit this is going to offer me personally is a really cheap s939 X2. πŸ˜‰

    • Vaughn
    • 13 years ago

    Great review. And I agree on the Sandra part: keep the memory benchmark scores, get rid of the rest of it.

      • Vaughn
      • 13 years ago

      Yep, me too. Now I can get a nice price drop on that 170 Opteron I’m looking to purchase! πŸ™‚ Yay Conroe.

      lol, reply is to #51, not myself, oops

        • Flying Fox
        • 13 years ago

        Opty prices don’t seem to be on the chopping block, I’m afraid. πŸ™

    • sn_85
    • 13 years ago

    Definitely a fast processor, but I hope to see some more in-depth reviews come out. Most reviews have games set at 800×600 or 1600×1200 with AA/aniso. It’s easy to see the power of Conroe at the lower resolutions like 800×600 and 1024×768, but I’d like to see some 1280×1024 with some AA and aniso in there too. Either way it’s pretty impressive.

      • Flying Fox
      • 13 years ago

      At higher res the GPU will influence the results, just like the [H]’s article. It will not be a CPU comparison anymore. I actually think TR should run everything at 800×600 myself. πŸ™‚

      • UberGerbil
      • 13 years ago

      “Easy to see the power of conroe at the lower resolutions…” Right. And that’s exactly the point. This is a CPU review, so you want to see the power of the CPU. Higher resolutions just hide it: you end up measuring how good it is at twiddling its thumbs while the GPU works flat out. Turning on AA just tells you how good your GPU is at AA; it tells you nothing about the CPU.

      There’s a place for “rounded” gaming reviews. That is not what this is. H apparently did one and called it a CPU review. TR did an actual CPU review.
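
      To make that concrete, here is a minimal toy model, not from the review or this thread, with stage times invented purely for illustration, of why a GPU-bound setting hides CPU differences:

      /* Toy model: per-frame time is roughly the slower of the CPU and GPU
         stages, so FPS ~= 1000 / max(t_cpu, t_gpu) with times in milliseconds.
         All numbers below are made up for illustration only. */
      #include <stdio.h>

      static double fps(double t_cpu_ms, double t_gpu_ms)
      {
          double frame_ms = (t_cpu_ms > t_gpu_ms) ? t_cpu_ms : t_gpu_ms;
          return 1000.0 / frame_ms;
      }

      int main(void)
      {
          double slow_cpu = 5.0, fast_cpu = 4.0;   /* a CPU that is 20% faster */

          /* Low resolution: GPU work is cheap, so the CPU gap shows up. */
          printf("low res:  %.0f fps vs %.0f fps\n",
                 fps(slow_cpu, 2.0), fps(fast_cpu, 2.0));

          /* High resolution + AA: the GPU dominates and both CPUs tie. */
          printf("high res: %.0f fps vs %.0f fps\n",
                 fps(slow_cpu, 20.0), fps(fast_cpu, 20.0));
          return 0;
      }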

        • [SDG]Mantis
        • 13 years ago

        I think that the [H] review was really looking at something very specific: what does this mean to high-end gaming systems? And the answer was: not that much, they are still GPU limited. The [H] review was definitely not attempting to be rounded, but was looking at an aspect that some other reviews have neglected.

        I’ve built AMD for years. But that was price for performance and then flat out gaming performance with the A64’s. I’ve held off on dual core so far. But I am very impressed.

        Since I tend to agree with the assessment about pricing on the E6600, I still think that I might wait for another round of video card upgrades to go with the processor. Might even have Vista at that point…then I can decide if I jump now or later when the DX10 games start shipping in a year or so.

        FYI, I am running a S754 A64 3200+ and an AGP 6800GT right now, so any way you cut it I am looking at a total system replacement. The question is timing for me. None of my LANing buddies has cracked above an X850XT and most aren’t even in the same class as the 6800, so I am safe for a while.

    • flying hippo
    • 13 years ago

    Nice!

    Thank god for AMD forcing Intel into making something worth buying, and thank god for Conroe forcing AMD into making the forthcoming price cuts.

    Low wattage probably is the nail in the coffin for BTX too.

      • Krogoth
      • 13 years ago

      BTX has been dead since higher-end case manufacturers developed “thermally advantaged” ATX cases that provide most of BTX’s thermal characteristics without completely breaking the form factor.

    • Flying Fox
    • 13 years ago

    Too bad we do not have enough 64-bit apps to do a comprehensive review on XP64/Win2K3/*nix quite yet. With Vista close, we should be looking into 64-bit performance as well, although rumours have it that the difference will not be that dramatic.

      • Damage
      • 13 years ago

      Huh?? Wha?? /me checks OS and apps used once more.

      • UberGerbil
      • 13 years ago

      Uh, Fox, you might take a look at the “Testing Methods” page again. If I’m not mistaken, all the tests were run on x64 except for WorldBench, and many of the tests themselves (Sandra, POV-Ray, Cinebench, LAME, Windows Media Encoder, picColor, and Unreal) were 64bit as well.

      Now, if you’re proposing a 64bit vs 32bit, Windows vs Linux shootout… yeah, that would be interesting I guess. It also would be a hell of a lot of work.

        • Flying Fox
        • 13 years ago

        I thought only UT is 64-bit? FEAR, Oblivion, BF2, and Q4 should be running under WOW64, right?

        Of course it is only fair for the game devs, since the majority are still on 32-bit.

        Then again, I wonder if the general 32-bit performance difference will be more pronounced, approaching the 40% mythical feline mark (sorry, I am not creative)? :rolleyes:

    • DrDillyBar
    • 13 years ago

    Finally.

    • redpriest
    • 13 years ago

    The Integer SSE4 tests use pmulhrsw.

    This instruction sacrifices fidelity for speed.

    • thecoldanddarkone
    • 13 years ago

    FiringSquad has theirs up; it’s pretty good as well. It has a few aspects Tech Report’s doesn’t, but tells pretty much the same thing (Conroe is fast).

    • danny e.
    • 13 years ago

    It’s very nice to see two competitors in the market… because for a while there really was only one.
    Now AMD will be forced to drop prices quite dramatically… or come up with some good improvements on their line.

    Since I got an X2 4400+, I probably won’t be upgrading till sometime next year… but it’s still great to see Intel finally make themselves actually worth considering.

    • indeego
    • 13 years ago

    Silly. They are unavailable. Oh well, I’ll check this review out again in Jan. 😉

        • SGT Lindy
        • 13 years ago

        That was funny!!!!!!!!!!!!

          • thecoldanddarkone
          • 13 years ago

          I have to admit thinking, that was good before I went to bed

            • indeego
            • 13 years ago

            And when you wake up and can’t get it, it was good for me and proved my point further.

        • Krogoth
        • 13 years ago

        Ouch, $1.3K to get a retail Conroe when you can wait a month for the rest of the line to show up.

        I would not be surprised if the X6800 goes up to $1.5K due to the sheer demand over the next couple of months, though.

        • indeego
        • 13 years ago

        OH LOOKY THERE, OUT OF STOCK

          • Severus
          • 13 years ago

          Yet they weren’t six hours ago when the link was posted.

            • indeego
            • 13 years ago

            Sweet, so what you are saying is I can’t get this processor after, say, the hours of midnight to 11 p.m.? I can camp out at Newegg, have a little campfire, and an alarm clock. That reminds me of when ATI released some of their products and I couldn’t buy any of those either: I looked elsewhere. Cool that we have a review we can twiddle our thumbs to, or buy a Dell with it in a few weeks with [truncated]

            • SGT Lindy
            • 13 years ago

            Someone got those that newegg had….and if you buy one now from newegg I bet you would have one in two weeks.

            A far cry from Jan 2007.

            • indeego
            • 13 years ago

            [comment truncated]

            • SGT Lindy
            • 13 years ago

            Yeah, I feel so bad.

            Honestly, you will be able to get your hands on one easily within 60 days.

    • krazyredboy
    • 13 years ago

    Well, I think the 6 hours of paperwork that I postponed for the refresh button was worth it. Awesome review.

    • dragmor
    • 13 years ago

    Nice to see Intel back in the game. I’m curious about how they got the memory performance.

    1) Any chance of you running TCaseMax on the two EE A64s and posting the results? I want to see the chips’ details.
    2) Could we get some undervolting results for the Core 2 Duos?

    • Krogoth
    • 13 years ago

    Conroe is definitely fast, but not worth the upgrade for dual-core S939 users. You have to spend at least $500 to gain a ~20% increase per clock, and the overclockability of most existing Conroe motherboards is unknown.

    The only people that would be interested are P4, K7, and single-core K8 users that are looking for a fast dual-core upgrade.

      • Flying Fox
      • 13 years ago

      [quote truncated]

        • Krogoth
        • 13 years ago

        I said “15-25% faster per clock”, which means ICM has a higher IPC than K8 by that amount. Intel chipsets are certainly nice and stable, but they are not exactly cheap. It seems that the nForce 4/5 woes are from excessive marketing features like ActiveArmor, pseudo-hardware RAID, and SLI. nForce 4 works just about as well as any of the other chipset solutions out there in the market in terms of stability and performance if you do not use ActiveArmor and the other nonsense. IMO, SiS probably has the best chipset for cost, stability, and performance, since they lack some of the extra features [truncated]

          • d0g_p00p
          • 13 years ago

          Agreed. I have been using nForce chipsets since the very first one. I have been through the 1/2/3/4 series without any of the issues people have had. I use all the standard drivers and never used the “extras” that come with the chipset. Never had an issue.

          Off that topic, Core 2 Duo looks awesome, and thanks for the awesome review, guys.

            • Flying Fox
            • 13 years ago

            The nForce (esp. the 5xx) is still worse in the power consumption department. πŸ™

            • imtheunknown176
            • 13 years ago

            I remember reading that the Intel 965s are quite power hungry too. But I guess that a power-hungry Intel chipset is still better than any nForce. If it’s passively cooled, I don’t care.

    • NeRve
    • 13 years ago

    Damage, don’t forget to add that Core 2 Duos also have SSE4!

      • UberGerbil
      • 13 years ago

      There are just a handful of additional SSE instructions. Nothing exciting — mostly just rounding out and filling in a few holes. They won’t make a significant difference and they’re not really worth mentioning. AFAIK the “SSE4” moniker was invented by the media — I haven’t seen any Intel docs that use the term.

        • redpriest
        • 13 years ago

        They make a huge difference in Sandra. When you factor in that you can process 8 16-bit multiplies at a time versus 4 (multiply-add, 16-bit with 32-bit intermediates), and you factor in that Conroe has a 128-bit SSE unit, that’s a baseline 4 times faster than an Athlon.
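
        For reference, that instruction is exposed to C code through the _mm_mulhrs_epi16 intrinsic. A minimal sketch, not from the review and assuming an SSSE3-capable CPU and compiler (e.g. built with gcc -mssse3):

        /* pmulhrsw via its intrinsic: eight signed 16-bit fixed-point multiplies
           per instruction, with 32-bit intermediates rounded and scaled back to
           16 bits (the fidelity trade-off mentioned above). */
        #include <stdio.h>
        #include <tmmintrin.h>   /* SSSE3 intrinsics */

        int main(void)
        {
            /* Q1.15 fixed point: 0.5 -> 16384, 0.25 -> 8192. */
            __m128i a = _mm_set1_epi16(16384);
            __m128i b = _mm_set1_epi16(8192);

            /* One instruction performs all eight 16-bit multiplies. */
            __m128i p = _mm_mulhrs_epi16(a, b);

            short out[8];
            _mm_storeu_si128((__m128i *)out, p);
            printf("0.5 * 0.25 -> %d/32768 = %f\n", out[0], out[0] / 32768.0);
            return 0;
        }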

          • Krogoth
          • 13 years ago

          When real-world applications start to take advantage of SSE4, or get anywhere near what synthetic benches like Sandra get, I will be more than interested.

          Otherwise, the P4 dynasty proves that data from synthetic benches like Sandra are utterly meaningless.

            • TSchniede
            • 13 years ago

            SSE mostly mattered for multimedia stuff, but unlike the Netburst architecture, this time there is some use for it; this architecture doesn’t have as much dumb brute force to throw at the job (relatively speaking).

    • droopy1592
    • 13 years ago

    DAMNNNNNNNNNNNNNNNNNNN

    seriously, that’s ass whoopage. It’s gotta be 5-35% of an advantage (5% is an outlier) across the board, top end vs top end, low end vs low end.

    Might ebay my 3800 X2

      • Krogoth
      • 13 years ago

      “Ass-whoopage” is a bit of an overstatement, as Conroe is only 15-25% faster per clock than the K8. It was expected that Intel could develop an architecture that can beat something that is already three years old.

      It just reminds me of how the Northwood Cs were clearly faster than the Bartons/T-bred-Bs in their heyday. They were also more expensive, to boot.

        • Flying Fox
        • 13 years ago

        The almost 2x (that’s 200% :o) jump in the SSE benchmarks is nothing to sneeze at. Provided more developers turn on the SSE switch, this could be huge.

          • TSchniede
          • 13 years ago

          Since the SSE unit is now twice as fast, that is exactly as expected.

        • thecoldanddarkone
        • 13 years ago

        Hmm, I remember when AMD came out with Hammer, and everybody said they were rocking the P4s.

        And yet it was slower than the P4 in some apps, and 10-25 percent faster in games.

        • Proesterchen
        • 13 years ago

        Intel’s $316 mid-range E6600 competing with, and on many occasions beating, AMD’s $1032 FX-62: that’s ass-whoopage all right.

        • blastdoor
        • 13 years ago

        I don’t think it’s an overstatement at all. Core won every single benchmark, and almost all of them by a good margin. AND Core can clock higher than the A64. Better IPC, higher clock speed, lower power. That’s ass whoopin, my friend. (Oh, and they’re cheaper!)

          • Krogoth
          • 13 years ago

          The performance delta almost looks like what the K8 did to the Northwood-based P4s at launch, per clock. It is not as bad as what the later revisions of the K8 did to the Prescott-based P4s. 😉

          BTW, Core 2 consumes almost as much juice as a K8 when loaded and a little more when idle. It is certainly better than the previous Netburst dynasty, though.

            • Flying Fox
            • 13 years ago

            [quote truncated]

            • Shintai
            • 13 years ago

            Also, when you step back and compare it performance-wise, you can get a lower-clocked Core 2 Duo. Maybe even LV types 😑

            • blastdoor
            • 13 years ago

            In terms of IPC, the P4 never led the K7/K8 by design. However, in terms of total performance, the K8 did not dominate the P4 to the extent that Core 2 dominates K8. Check this out:

            https://techreport.com/reviews/2003q3/athlon64/index.x?pg=11 The K8 won in games, but the P4 was still faster in mp3 and video encoding. The X2 dominated the Pentium D when those both came out, but not by the margins we’re seeing here. I think we have to go back to pre-K7 days to see one processor so totally dominate another (perhaps the Pentium III versus the K6? I wasn’t really paying attention back then, so I don’t know for sure). The only question mark here is availability. But Core2 shortages will only last so long. Come 2007, AMD had better be ready to compete.

    • SpotTheCat
    • 13 years ago

    there we are, eastern time.

    • True Xploder
    • 13 years ago

    Sold. Now I just have to learn some about Intel mobos and see how the cheapest Conroe does vs. the 3800+ X2.

    • duffy
    • 13 years ago

    It’s amazing what a little healthy competition can do.

      • True Xploder
      • 13 years ago

      I hope that AMD & Intel keep this cycle going.
      Especially the price cuts. The one thing I hated about AMD was how they never made a cheapo desktop dual-core part. Now, with Intel in the lead, they will, and many people will upgrade easily.

    • slavik6
    • 13 years ago

    I was unsure, but this review made up my mind. Conroe here I come.

    • Vrock
    • 13 years ago

    Meow?

      • Wajo
      • 13 years ago

      anyone seen kitty around?

        • jobodaho
        • 13 years ago

        This article took his 9th life.

        RIP hellokitty

          • Krogoth
          • 13 years ago

          ROFL………..

          good, kitty play dead.

      • DrDillyBar
      • 13 years ago

      *sound of electric-can-opener*

    • droopy1592
    • 13 years ago

    lol

    seriously hardocp’s review was the most awful crap I’ve seen in years.

    Damn, test it at the lowest res possible to get a CPU bench.

      • lugnplu
      • 13 years ago

      They showed a great video card review. It was well rounded and showed that the best video card works on all cpus.

      *gets note it was a cpu review*

      oh dear lord.

        • thecoldanddarkone
        • 13 years ago

        lol, well accordingly it was a Gaming Review, πŸ˜›

    • redpriest
    • 13 years ago

    Glad I’m not the only one unable to bust this processor past 3.466 stably.

      • jobodaho
      • 13 years ago

      Are you running an air cooled setup as well?

        • redpriest
        • 13 years ago

        Yep, mine is air cooled as well.

      • thecoldanddarkone
      • 13 years ago

      well at least 3.466 isn’t slow πŸ˜›

    • UberGerbil
    • 13 years ago

    An interesting follow-up would be to underclock the Core 2 Duo to 2.13GHz so we can make a direct clock-for-clock comparison with the Yonah featured in your “Core Duo on the Desktop” article, just to see how Core 2 compares with its most direct predecessor.

    • jehurey
    • 13 years ago

    Comparing Jessica Simpson with Jessica Simpson.

    You’ve been saving this one in the jewelbox for quite some time, haven’t you? Well, you couldn’t have picked a better review. This is really informative and thorough.

    Great review.

    • Convert
    • 13 years ago

    Welcome back, Intel. Although in some ways you never left. /strokes his PD 805 with an oven mitt

    • thecoldanddarkone
    • 13 years ago

    Nice review; lifted my spirits after the HardOCP tests. Yah.

    One more thing: it doesn’t seem, well, half done.

    • jobodaho
    • 13 years ago

    What a freaking review! I’m officially convinced the e6600 is my new processor, I just hope it’s available at the $316 asking price.

    FP!

      • sativa
      • 13 years ago

      there is no freaking way you read that review in <1 minute.

      SP

        • jobodaho
        • 13 years ago

        There are ways, you should be able to figure out how…

        • Flying Fox
        • 13 years ago

        SBR?

          • vdreadz
          • 13 years ago

          I 2nd as well!

        • DrDillyBar
        • 13 years ago

        it’s called ‘last%20page.htm’.

      • Krogoth
      • 13 years ago

      I doubt that the E6600 will be anywhere near the $300 mark for at least six months, considering the initial demand and limited supplies. $350 is probably a more realistic price until demand and supply stabilize.

        • jobodaho
        • 13 years ago

        Even if it is $350, that’s still reasonable for the kind of performance it provides. The main benches I was focused on were the 3D rendering, and it’s clear that it is the processor for my next build.

          • Bet
          • 13 years ago

          Ideal for my build too, which makes me doubt we’ll be getting it for less than $500 in August. Crossing my fingers all the same, though. August will be one hot month if I can get one of those processors into a system.
