Intel’s Core i7-980X Extreme processor


I have to tell you, sometimes, being a critical reviewer in the realm of technology is not an easy task. The problem comes down to the sheer rate of improvement among the products we review. If we were Car and Driver, it would look something like this. One year, we’d be reviewing a car that could accelerate from zero to 60 in eight seconds. A year later, we’d be testing a car in the same price range with a six-second 0-60 time. Another year after that, the standard would be down to four seconds. The next year? Three. Soon, pressing the accelerator would subject the driver to forces strong enough to be lethal in the right amounts.

Which is, for a car guy, a barrel of fun.

We have that sort of dynamic going on with computer chips, and it’s also quite entertaining, if you’re so inclined. I’ve gone from listening to short programs load in from tape on an Atari 800 to 12-megapixel monitor arrays playing amazing-looking games in full motion. This is not normal in any other walk of life.

Now, don’t get me wrong. I can pick nits with the best of ’em. But some days, I’m still amazed that I don’t have to listen to a series of bleeps and bloops for 30 minutes before I get to play Borderlands. At times like that, this new processor Intel will be officially introducing soon is almost incomprehensible. The Core i7-980X builds on the foundation established by the first Core i7 processors back in late 2008, but it raises the core count from four to six and adds a bundle of performance in the process. Given this thing’s performance and other qualities, I’m having a difficult time finding reasons to complain. Keep reading, and you’ll see what I mean.

Gulftown chips on a wafer. Source: Intel.

Introducing Gulftown

If you’ve been following Intel CPUs lately, you’re probably well-versed in code names. Knowing them is helpful because the complexity of Intel’s product portfolio is surpassed only by that of its naming scheme. Consequently, we’ve started referring to Clarkdale, Lynnfield, and Bloomfield rather than attempting to enumerate all possible products based on those bits of silicon. The Core i7-980X adds a new code name to that constellation: Gulftown.

Like the dual-core Clarkdale Core i3/i5 processors introduced earlier this year, Gulftown is a part of the Westmere family of 32-nm chips. This six-core processor is primarily known, in its server/workstation guise, as Westmere-EP; Gulftown is the code name for the desktop variants of the chip. Gulftown is intended to be a drop-in replacement for the existing members of the Core i7-900 series, all of which are based on the quad-core chip code-named Bloomfield.

If your head hasn’t exploded yet from code-name overload, I congratulate you. The main things you need to know about Gulftown are reproduced in the table below, which should act as something of a code-name decoder.

| Code name | Key products | Cores | Threads | Last-level cache size | Process node (nm) | Estimated transistors (millions) | Die area (mm²) |
|---|---|---|---|---|---|---|---|
| Penryn | Core 2 Duo | 2 | 2 | 6 MB | 45 | 410 | 107 |
| Bloomfield | Core i7 | 4 | 8 | 8 MB | 45 | 731 | 263 |
| Lynnfield | Core i5, i7 | 4 | 8 | 8 MB | 45 | 774 | 296 |
| Westmere | Core i3, i5 | 2 | 4 | 4 MB | 32 | 383 | 81 |
| Gulftown | Core i7-980X | 6 | 12 | 12 MB | 32 | 1170 | 248 |
| Deneb | Phenom II | 4 | 4 | 6 MB | 45 | 758 | 258 |
| Propus/Rana | Athlon II X4/X3 | 4 | 4 | 512 KB x 4 | 45 | 300 | 169 |
| Regor | Athlon II X2 | 2 | 2 | 1 MB x 2 | 45 | 234 | 118 |

Compared to Bloomfield, Gulftown has 50% more cores and cache, yet it fits into the same basic power envelope at the same clock speed. Gulftown packs substantially more transistors into a smaller die area than Bloomfield, too. All of this magic comes courtesy of Intel’s new 32-nm chip fabrication process, which combines second-generation high-k + metal gate transistors with first-generation immersion lithography.

A map of the Gulftown die. Source: Intel.

The image above shows Gulftown’s layout nicely. As a drop-in replacement for Bloomfield, Gulftown has no integrated PCI Express connectivity (a la Lynnfield) and no integrated graphics (a la Clarkdale). Instead, it relies on a QuickPath Interconnect to link it to the X58 chipset.

Interestingly, Intel’s architects call the uncore area running up the center of the chip “the tube.” (Well, I thought it was interesting, anyway.) Your eye may also be drawn to the top left corner of the chip, where there’s a pretty big area with not much going on. In a briefing, Dave Hill, Westmere’s lead architect, acknowledged this “white space” and noted only that he wasn’t going to talk about the reasons for it. Presumably, Intel would want to minimize wasted space on a design like this one, so I’m intrigued. It almost looks to me like one could eliminate the apparent white space on both sides of the memory controller, and the I/O, uncore, and memory controller would wrap pretty snugly around four cores and their associated L3 cache. As far as we know, though, Intel has no plans to release a native quad-core derivative of Westmere. Instead, the firm will press ahead with a quad-core version of Sandy Bridge, the upcoming architectural refresh slated for the 32-nm process.

Speaking of which, the chips in the Westmere family are a “tick” in Intel’s vaunted tick-tock cadence. They’re a refinement of the quad-core Nehalem architecture introduced at 45 nanometers, with a relatively conservative set of enhancements outside of the obvious changes in core counts and cache sizes. Sandy Bridge will be a “tock” with more radical architectural remodeling. Still, the same Oregon-based team that created Nehalem also did Westmere, so the ins and outs of the processor were already familiar to them. They couldn’t resist making a few tweaks along the way. Most notable among them is the addition of seven new instructions tailored to accelerate the most common data encryption algorithms.

Another improvement, carried over from the Lynnfield Core i5/i7 chips, is the addition of a gate that can cut off power to most elements of the “uncore” when the chip is idling in its lowest sleep states, substantially reducing power consumption and even leakage power. This provision extends the power gate concept first implemented in Nehalem processors. Gulftown has seven power gates, one for each core and one for the uncore. Not all elements of the uncore are affected by the power gate. Notably, the chip’s built-in power management processor isn’t shut off, for obvious reasons. Meanwhile, the memory controller, QuickPath Interconnect, and L3 cache have their voltage reduced to “retention levels.” The chip’s architects say there’s no substantial increase in the time required for the CPU to wake up from its deeper sleep states.

Other Westmere changes are perhaps even more esoteric. The APIC timer now remains running all of the time, even during sleep. Large pages, up to 1GB in size, are now supported, and some improvements have been made for the sake of virtualization performance. Despite the presence of more and larger caches, the data pre-fetch algorithms for the caches remain the same.

One other modification in Gulftown will please folks trying to achieve higher memory clocks. With Bloomfield, the maximum memory speed is half the uncore frequency. As a result, Bloomfield’s uncore must run at 4GHz in order to accommodate 2GHz DIMMs. Like Lynnfield, Gulftown’s uncore only needs to run at 1.5X the max memory speed, so 2GHz memory frequencies are possible with the uncore at 3GHz.
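The arithmetic behind those ratios is simple enough to sketch; the 2X and 1.5X multipliers come from the paragraph above, and the helper function is purely illustrative:

```python
def min_uncore_clock(mem_clock_ghz, ratio):
    """Minimum uncore frequency (GHz) needed to support a given memory
    clock, given the platform's required uncore:memory ratio."""
    return mem_clock_ghz * ratio

# Bloomfield requires the uncore to run at twice the memory clock...
print(min_uncore_clock(2.0, 2.0))  # 4.0 GHz for 2GHz DIMMs
# ...while Gulftown, like Lynnfield, only needs 1.5X.
print(min_uncore_clock(2.0, 1.5))  # 3.0 GHz for the same DIMMs
```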

The Core i7-980X Extreme gets a fancy cooler

Gulftown processors will drop into an LGA1366-style socket, like those used on all X58 motherboards, and should generally be compatible with current boards with the help of a BIOS update. Intel’s own DX58S0 “Smackover” board can handle a Core i7-980X after a quick BIOS flash, as did the Gigabyte X58A-UD5 in our test system. As is often the case, though, the move to a smaller fab process has prompted some voltage changes, so you’ll want to check with your motherboard maker to verify compatibility. Like Bloomfield, the i7-980X supports three channels of DDR3 memory at up to 1066MHz. Oddly, Intel has withheld its official endorsement of higher memory frequencies, although the chip’s memory controller will easily run at higher speeds.

| Model | Cores | Threads | Base core clock speed | Peak Turbo clock speed | L3 cache size | Memory channels | TDP | Price |
|---|---|---|---|---|---|---|---|---|
| Core i5-750 | 4 | 4 | 2.66 GHz | 3.20 GHz | 8 MB | 2 | 95W | $196 |
| Core i7-860 | 4 | 8 | 2.80 GHz | 3.46 GHz | 8 MB | 2 | 95W | $284 |
| Core i7-870 | 4 | 8 | 2.93 GHz | 3.60 GHz | 8 MB | 2 | 95W | $562 |
| Core i7-920 | 4 | 8 | 2.66 GHz | 2.93 GHz | 8 MB | 3 | 130W | $284 |
| Core i7-930 | 4 | 8 | 2.80 GHz | 3.06 GHz | 8 MB | 3 | 130W | $294 |
| Core i7-960 | 4 | 8 | 3.20 GHz | 3.46 GHz | 8 MB | 3 | 130W | $562 |
| Core i7-975 Extreme | 4 | 8 | 3.33 GHz | 3.60 GHz | 8 MB | 3 | 130W | $999 |
| Core i7-980X Extreme | 6 | 12 | 3.33 GHz | 3.60 GHz | 12 MB | 3 | 130W | $999 |

The table above shows Intel’s current Core i7 lineup. The Core i7-980X is the first—and so far only—Gulftown-based product to come to market. As an Extreme edition, the 980X has an unlocked multiplier to facilitate overclocking. If you’re willing to cough up a grand for its best processor, Intel won’t stand in the way of you having a little fun with it. As you can see, the 980X essentially supplants the Core i7-975 Extreme at the same price and frequency, with more cores and cache.

That’s about it for the Core i7-980X’s competition. We have included the fastest desktop processor from AMD, the Phenom II X4 965, in our testing, of course, but it lists for only $185 and simply can’t match the performance of the fastest Intel CPUs. AMD does have a six-core version of its Opteron processor that fared pretty well in our last round of server/workstation CPU tests, but the firm has so far elected not to bring it to the desktop.

Pictured above is the Core i7-980X (trust me, it’s under there) installed in our Gigabyte X58A-UD5 mobo, along with Intel’s nifty stock cooler for this CPU. That’s 12GB of Corsair Dominator DIMMs in the picture, by the way—a new arrival in Damage Labs—although we tested with just three DIMMs and 6GB for the sake of continuity with our existing results.

The new stock cooler will come with retail boxed versions of the Core i7-980X, and thank goodness, it has a screw-based installation mechanism with a retention bracket that goes on the underside of the motherboard. Intel claims the retention mech has been tested with shock forces up to 50 Gs, which should prevent it from breaking off and bouncing around inside the case of a pre-built PC—like the tab-based Intel cooler that I installed in my brother-in-law’s PC did, killing a GeForce GTX 260 in the process.

The cooler has both Quiet and Performance modes, which can be set with a switch on the heatsink. We found it to be fairly hushed in quiet mode and pretty darned effective in performance mode, as you’ll soon see.

And now, we have an incredibly large set of CPU test results to navigate, comparing the Core i7-980X to everything from a Core i7-870 to a five-year-old Pentium 4. I’m going to keep the commentary to a minimum since we’re still fresh off of our last massive CPU roundup, and the only big change here is the addition of the i7-980X. Let’s get started.

Test notes

We’ve underclocked the Core i5-661 to 2.8GHz in order to simulate the Core i3-540. Although we did change the core clock to the proper speed, the processor’s uncore clock remained at the i5-661’s stock frequency. We believe shipping Core i3-540 processors have a 2.13GHz uncore clock, while the i5-661 has a 2.4GHz uncore clock, so our simulated processor may perform slightly better than the real item due to a higher L3 cache speed. The differences are likely to be very minor, based on our experience with Lynnfield parts—the L3 cache is incredibly fast, regardless—but we thought you should know about that possibility.

Additionally, our Core i7-960 is an underclocked Core i7-975 Extreme, but in that case, we’re fairly certain all of the clocks match what they should, since Bloomfield gives us a little more control over such things. In order to run the Core i7-960’s memory at 1333MHz, we raised its uncore clock to 2.66GHz. That comes with the territory, and I expect many Core i7-960 owners have done the same.

As is our custom, we’ve omitted the simulated processor speed grades from our power consumption testing.

After consulting with our readers, we’ve decided to enable Windows’ “Balanced” power profile for the bulk of our desktop processor tests, which means power-saving features like SpeedStep and Cool’n’Quiet are operating. (In the past, we only enabled these features for power consumption testing.) Our spot checks demonstrated to us that, typically, there’s no performance penalty for enabling these features on today’s CPUs. If there is a real-world penalty to enabling these features, well, we think that’s worthy of inclusion in our measurements, since the vast majority of desktop processors these days will spend their lives with these features enabled. We did disable these power management features to measure cache latencies, but otherwise, it was unnecessary to do so.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we reported the median of the scores produced.
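Computing that per-benchmark median takes nothing more than the standard library; the run scores below are invented, purely for illustration:

```python
import statistics

# Three hypothetical runs of one benchmark; the median discards a single
# outlier run in either direction, which the mean would not.
runs = [41.2, 43.9, 41.5]
print(statistics.median(runs))  # 41.5
```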

Our test systems were configured like so:

| Processor | Athlon II X2 255 3.1GHz; Athlon II X3 440 3.0GHz; Athlon II X4 630 2.8GHz; Athlon II X4 635 2.9GHz; Phenom II X2 550 3.1GHz; Phenom II X4 910e 2.6GHz; Phenom II X4 965 3.4GHz | Pentium E6500 2.93GHz; Core 2 Duo E7600 3.06GHz; Core 2 Quad Q6600 2.4GHz | Pentium 4 670 3.8GHz | Core 2 Duo E8600 3.33GHz; Core 2 Quad Q9400 2.66GHz |
| Motherboard | Gigabyte MA785G-UD2H | Asus P5G43T-M Pro | Asus P5G43T-M Pro | Asus P5G43T-M Pro |
| North bridge | 785GX | G43 MCH | G43 MCH | G43 MCH |
| South bridge | SB750 | ICH10R | ICH10R | ICH10R |
| Memory size | 4GB (2 DIMMs) | 4GB (2 DIMMs) | 4GB (2 DIMMs) | 4GB (2 DIMMs) |
| Memory type | Corsair CM3X2G1600C9DHXNV DDR3 SDRAM | Corsair CM3X2G1800C8D DDR3 SDRAM | Corsair CM3X2G1800C8D DDR3 SDRAM | Corsair CM3X2G1800C8D DDR3 SDRAM |
| Memory speed | 1333 MHz | 1066 MHz | 800 MHz | 1333 MHz |
| Memory timings | 8-8-8-20 2T | 7-7-7-20 2T | 7-7-7-20 2T | 8-8-8-20 2T |
| Chipset drivers | | INF update 9.1.1.1020; Rapid Storage Technology 9.5.0.1037 | INF update 9.1.1.1020; Rapid Storage Technology 9.5.0.1037 | INF update 9.1.1.1020; Rapid Storage Technology 9.5.0.1037 |
| Audio | Integrated SB750/ALC889A with Realtek 6.0.1.5995 drivers | Integrated ICH10R/ALC887 with Realtek 6.0.1.5995 drivers | Integrated ICH10R/ALC887 with Realtek 6.0.1.5995 drivers | Integrated ICH10R/ALC887 with Realtek 6.0.1.5995 drivers |

| Processor | Core i5-750 2.66GHz; Core i7-870 2.93GHz | Core i3-530 2.93GHz; Core i3-540 3.06GHz; Core i5-661 3.33GHz | Core i7-920 2.66GHz | Core i7-960 3.2GHz; Core i7-975 Extreme 3.33GHz; Core i7-980X Extreme 3.33GHz |
| Motherboard | Gigabyte P55A-UD6 | Asus P7H57D-V EVO | Gigabyte EX58-UD3R | Gigabyte X58A-UD5R |
| North bridge | P55 PCH | H57 PCH | X58 IOH | X58 IOH |
| South bridge | | | ICH10R | ICH10R |
| Memory size | 4GB (2 DIMMs) | 4GB (2 DIMMs) | 6GB (3 DIMMs) | 6GB (3 DIMMs) |
| Memory type | Corsair CM3X2G1600C8D DDR3 SDRAM | Corsair CMD4GX3M2A1600C8 DDR3 SDRAM | OCZ OCZ3B2133LV2G DDR3 SDRAM | Corsair TR3X6G1600C8D DDR3 SDRAM |
| Memory speed | 1333 MHz | 1333 MHz | 1066 MHz | 1333 MHz |
| Memory timings | 8-8-8-20 2T | 8-8-8-20 2T | 7-7-7-20 2T | 8-8-8-20 2T |
| Chipset drivers | INF update 9.1.1.1020; Rapid Storage Technology 9.5.0.1037 | INF update 9.1.1.1020; Rapid Storage Technology 9.5.0.1037 | INF update 9.1.1.1020; Rapid Storage Technology 9.5.0.1037 | INF update 9.1.1.1020; Rapid Storage Technology 9.5.0.1037 |
| Audio | Integrated P55 PCH/ALC889 with Realtek 6.0.1.5995 drivers | Integrated H57 PCH/ALC889 with Realtek 6.0.1.5995 drivers | Integrated ICH10R/ALC888 with Realtek 6.0.1.5995 drivers | Integrated ICH10R/ALC889 with Realtek 6.0.1.5995 drivers |

They all shared the following common elements:

| Hard drive | WD RE3 WD1002FBYS 1TB SATA |
| Discrete graphics | Asus ENGTX260 TOP SP216 (GeForce GTX 260) with ForceWare 195.62 drivers |
| OS | Windows 7 Ultimate x64 Edition RTM |
| OS updates | DirectX August 2009 update |
| Power supply | PC Power & Cooling Silencer 610 Watt |

I’d like to thank Asus, Corsair, Gigabyte, OCZ, and WD for helping to outfit our test rigs with some of the finest hardware available. Thanks to Intel and AMD for providing the processors, as well, of course.

The test systems’ Windows desktops were set at 1600×1200 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.

We used the following versions of our test applications:

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Power consumption and efficiency

We have reams of test results to wade through, but we’ll begin with our power consumption tests, since they’re especially relevant to a new 32-nm processor like the Core i7-980X.

For these tests, we used an Extech 380803 power meter to capture power use over a span of time. The meter reads power use at the wall socket, so it incorporates power use from the entire system—the CPU, motherboard, memory, graphics solution, hard drives, and anything else plugged into the power supply unit. (The monitor was plugged into a separate outlet.) We measured how each of our test systems used power across a set time period, during which time we ran Cinebench’s multithreaded rendering test.

We’ll start with the show-your-work stuff, plots of the raw power consumption readings. We’ve broken things down by socket type in order to keep them manageable. Please note that, because our Asus H57 motherboard tends to draw more power than we’d like, we’ve tested power consumption for the Core i3-530 and the Core i5-661 on our P55 mobo, instead.

We can slice up these raw data in various ways in order to better understand them. We’ll start with a look at idle power, taken from the trailing edge of our test period, after all CPUs have completed the render.

Next, we can look at peak power draw by taking an average from the ten-second span from 15 to 25 seconds into our test period, when the processors were rendering.

The Core i7-980X’s power draw, both at max and idle, mirrors that of the Core i7-975 quite closely. Heck, it’s a few watts lower at peak, despite the addition of two more cores and extra cache.

We can highlight power efficiency by looking at total energy use over our time span. This method takes into account power use both during the render and during the idle time. We can express the result in terms of watt-seconds, also known as joules. (In this case, to keep things manageable, we’re using kilojoules.)

The X58 platform’s relatively high power use at idle keeps the i7-980X from performing well by this measure.

We can pinpoint efficiency more effectively by considering the amount of energy used for the task. Since the different systems completed the render at different speeds, we’ve isolated the render period for each system. We’ve then computed the amount of energy used by each system to render the scene. This method should account for both power use and, to some degree, performance, because shorter render times may lead to less energy consumption.
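As a sketch of that energy calculation (the sample values here are invented; the real numbers come from the Extech meter’s log):

```python
def task_energy_kj(samples_watts, interval_s, start_s, end_s):
    """Integrate wall-power readings over the render window to get energy
    in kilojoules (1 joule = 1 watt-second)."""
    i0, i1 = int(start_s / interval_s), int(end_s / interval_s)
    joules = sum(w * interval_s for w in samples_watts[i0:i1])
    return joules / 1000.0

# A hypothetical system drawing a steady 200W for a 60-second render,
# sampled twice per second:
samples = [200.0] * 120
print(task_energy_kj(samples, 0.5, 0.0, 60.0))  # 12.0 kJ
```

A system that finishes the render sooner simply accumulates fewer of those watt-interval products, which is why speed counts toward efficiency here.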

In our most direct measurement of power efficiency, the Core i7-980X takes top honors. With six cores and, thanks to Hyper-Threading, 12 hardware threads, the 980X makes short work of Cinebench’s test render. By finishing so quickly, the 980X-based system requires the least energy to render this scene. Holding the line on clock speeds and raising the core count is a very effective strategy for attaining energy-efficient performance in multi-threaded applications, and Intel has followed that template almost perfectly with Gulftown.

Memory subsystem performance

Now that we’ve considered power efficiency, we’ll move on to our performance results, beginning with some synthetic tests of the CPUs’ memory subsystems. These results don’t track directly with real-world performance, but they do give us some insights into the CPU and system architectures involved. For this first test, the graph is pretty crowded. I’ve tried to be selective, generally only choosing one representative from each architecture. This test is multithreaded, so more cores—with associated L1 and L2 caches—can lead to higher throughput.

With six L1 data caches, six L2 caches, and a massive 12MB L3 cache, the Core i7-980X is the fastest solution at nearly every data point.

This graph becomes almost impossible to read once we get to the larger block sizes, where we’re really measuring main memory bandwidth. Stream is a better test of that particular attribute.

Gulftown essentially matches Bloomfield here, with near-identical bandwidth scores.

The 980X’s very low memory access latencies are even more impressive given the fact that its L3 cache is 50% bigger than Bloomfield’s. (Larger caches typically have longer latencies.) Intel informs us that Gulftown’s L3 cache runs at the same speed as Bloomfield’s, so there’s no improvement due to higher frequencies. I do think, however, that we may have to adjust our sample to the 32MB block size soon. Latencies for Gulftown at the 16MB size may be getting partially cushioned by the 12MB L3 cache. At the 32MB sample size, latencies for the i7-975 and i7-980X are almost identical and work out to about 51 ns.
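A back-of-envelope model shows why the 16MB block size can flatter a 12MB cache. Assume, as a simplification, that the L3 hit rate for a random walk equals the fraction of the working set the cache covers; the latency figures below are illustrative stand-ins, not measurements:

```python
def blended_latency(block_mb, l3_mb, l3_ns, mem_ns):
    """Expected access latency when a working set only partially fits in
    L3, under the simplifying assumption hit rate = covered fraction."""
    hit = min(l3_mb / block_mb, 1.0)
    return hit * l3_ns + (1.0 - hit) * mem_ns

# A 16MB walk is 75% covered by a 12MB L3, so it looks much faster...
print(blended_latency(16, 12, 12.0, 51.0))  # 21.75 ns
# ...than a 32MB walk, which mostly misses.
print(blended_latency(32, 12, 12.0, 51.0))  # 36.375 ns
```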

For what it’s worth, this benchmark reports that the latency for the Core i7-975’s L3 cache is 36 cycles (of the CPU core), while the i7-980X’s is 43 cycles.

Borderlands

This is my favorite game in a long, long time, so I had to use it in our latest CPU test suite. Borderlands is based on Unreal Engine technology and includes a built-in speed test, which we used here. We tested with the game set to its highest quality settings at a range of resolutions. The results from the lowest resolutions will highlight the separation between the CPUs best, so I’d pay the most attention to them. The higher resolution results demonstrate what happens when the GeForce GTX 260 graphics card begins to restrict frame rates.

Well, yeah. Borderlands runs quickly enough on a sub-$100 processor like the Athlon II X2 255, so the Core i7-980X shouldn’t find it a challenge. There’s little improvement from the i7-975 to the i7-980X, but this game engine doesn’t use enough threads to take full advantage of Gulftown—not that it needs to.

DiRT 2

This excellent new racer packs a nicely scriptable performance test. We tested at the game’s “high” quality presets with 4X antialiasing.

So continues our object lesson in how most of today’s games don’t really require the fastest CPUs. The 980X does well; so does everything but the Pentium 4.

Modern Warfare 2

With Modern Warfare 2, we used FRAPS to record frame rates over the course of a 60-second gameplay session. We conducted this gameplay session five times on each CPU and have reported the median score from each processor. We’ve also graphed the frame rates from a single, representative session for each. We tested this game at a relatively low 1024×768 resolution, with no AA, but otherwise using the highest in-game visual quality settings.

Look, folks. Those IBM CPUs in the Xbox 360 aren’t gonna set any land speed records. For now, the largest-budget games and biggest hits are likely to have relatively modest processor needs.

Left 4 Dead 2

We tested Left 4 Dead 2 by playing back a custom demo using the game’s timedemo function. Again, we had all of the image quality options cranked, and we tested with 16X anisotropic filtering and 4X antialiasing. The game’s multi-core rendering option was, of course, enabled.

Valve’s Source engine is no challenge to any modern processor, either. The 980X again shows that it’s good for gaming—but so is the Core i3-530.

Source engine particle simulation

Next up is a test we picked up during a visit to Valve Software, the developers of the Half-Life games. They had been working to incorporate support for multi-core processors into their Source game engine, and they cooked up some benchmarks to demonstrate the benefits of multithreading.

This test runs a particle simulation inside of the Source engine. Most games today use particle systems to create effects like smoke, steam, and fire, but the realism and interactivity of those effects are limited by the available computing horsepower. Valve’s particle system distributes the load across multiple CPU cores.

At last, a more targeted test where Gulftown gets to show us what it can do. If game developers make heavy use of these effects in games—and if they don’t accelerate them via the GPU, which even Valve now seems to be doing—then newer Intel processors with Hyper-Threading should handle them especially well. Older ones with Hyper-Threading, not so much.

Productivity

We have, for quite some time now, used WorldBench in our CPU tests. Over that time, we’ve found that some of WorldBench’s tests can be rather temperamental and may refuse to run periodically. We’ve also found that some of the same tests tend to have inconsistent results that aren’t always influenced much by processor performance. Other applications in WorldBench 6, like the Windows Media Encoder 9 test, make little or no use of multithreading, despite the fact that such applications are typically nicely multithreaded these days. As a result, we’ve decided to limit our use of WorldBench to a selection of its applications, rather than the full suite.

MS Office productivity

Firefox web browsing

Multitasking – Firefox and Windows Media Encoder

Both the Office and Firefox/Windows Media Encoder tests have an element of multitasking built into them, but Gulftown’s extra cores and hardware threads aren’t much help when the applications themselves involve mostly serial operations and (in the case of this older version of Windows Media Encoder) only a few threads. The 980X performs well here, but no better than its predecessors.

File compression and encryption

7-Zip file compression and decompression

Whoa. 7-Zip puts Gulftown’s six cores to use, with stunning results—over 10X the performance of a Pentium 4 and just under twice the performance of the Core i7-920.

WinZip file compression

This old version of WinZip in the WorldBench suite uses maybe one or two threads, and the results are predictable. Again, the 980X comes out looking pretty good, but it’s not really any faster than Ye Olde Core 2 Duo E8600.

TrueCrypt disk encryption

Here’s a new addition at our readers’ request. This full-disk encryption suite includes a performance test, for obvious reasons. We tested with a 50MB buffer size and, because the benchmark spits out a lot of data, averaged and summarized the results in a couple of different ways.

This, folks, is without any help from Gulftown’s new instructions that accelerate encryption. My understanding is that a version of TrueCrypt with support for Westmere’s new instructions is forthcoming, and we’ll try to test it once it’s available. Still, Gulftown is fast enough on its own to encrypt data more than quickly enough for most storage subsystems.

Yeah, this little data dump is for those of you who are really, really interested in a particular encryption routine. Enjoy.

Image processing

The Panorama Factory photo stitching
The Panorama Factory handles an increasingly popular image processing task: joining together multiple images to create a wide-aspect panorama. This task can require lots of memory and can be computationally intensive, so The Panorama Factory comes in a 64-bit version that’s widely multithreaded. I asked it to join four pictures, each eight megapixels, into a glorious panorama of the interior of Damage Labs.

In the past, we’ve added up the time taken by all of the different elements of the panorama creation wizard and reported that number, along with detailed results for each operation. However, doing so is incredibly data-input-intensive, and the process tends to be dominated by a single, long operation: the stitch. So this time around, we’ve simply decided to report the stitch time, which saves us a lot of work and still gets at the heart of the matter.

The 980X will stitch together your panorama in half the time it takes a Core 2 Quad Q6600 or an Athlon II X4 635.

picCOLOR image processing and analysis

picCOLOR was created by Dr. Reinert H. G. Müller of the FIBUS Institute. This isn’t Photoshop; picCOLOR’s image analysis capabilities can be used for scientific applications like particle flow analysis. Dr. Müller has supplied us with new revisions of his program for some time now, all the while optimizing picCOLOR for new advances in CPU technology, including SSE extensions, multiple cores, and Hyper-Threading. Many of its individual functions are multithreaded.

Recently, at our request, Dr. Müller graciously agreed to re-tool his picCOLOR benchmark to incorporate some real-world usage scenarios. As a result, we now have four new tests that employ picCOLOR for image analysis. I’ve included explanations of each test from Dr. Müller below.

Particle Image Velocimetry (PIV) is being used for flow measurement in air and water.
The medium (air or water) is seeded with tiny particles (1..5um diameter, smoke or oil fog in air,
titanium dioxide in water). The tiny particles will follow the flow more or less exactly, except may be
in very strong sonic shocks or extremely strong vortices. Now, two images are taken within a very
short time interval, for instance 1us. Illumination is a very thin laser light sheet. Image resolution is
1280×1024 pixels. The particles will have moved a little with the flow in the short time interval and
the resulting displacement of each particle gives information on the local flow speed and direction.
The calculation is done with cross-correlation in small sub-windows (32×32, or 64×64 pixel) with some
overlap. Each sub-window will produce a displacement vector that tells us everything about flow speed
and direction. The calculation can easily be done multithreaded and is implemented in picCOLOR with
up to 8 threads and more on request.
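As a toy illustration of that cross-correlation search, here is a one-dimensional, brute-force version; picCOLOR’s real implementation works on 2D sub-windows and is multithreaded, so nothing below reflects its actual code:

```python
def displacement_1d(win_a, win_b, max_shift):
    """Find the shift of win_b relative to win_a that maximizes their
    cross-correlation -- a 1D analogue of PIV's sub-window search."""
    best_shift, best_score = 0, float("-inf")
    n = len(win_a)
    for shift in range(-max_shift, max_shift + 1):
        score = sum(win_a[i] * win_b[i + shift]
                    for i in range(n) if 0 <= i + shift < n)
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

# A bright "particle" at index 10 in the first frame shows up at index 13
# in the second, i.e. it moved 3 pixels with the flow:
frame_a = [0.0] * 32; frame_a[10] = 1.0
frame_b = [0.0] * 32; frame_b[13] = 1.0
print(displacement_1d(frame_a, frame_b, 8))  # 3
```

Each sub-window's search is independent of the others, which is what makes the real thing so amenable to multithreading.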

To give you some context for these results, picCOLOR’s scores are indexed against a Pentium III 1GHz system; a score of 1.0 represents its performance. The Core i7-980X is 54 times that fast in this test.

Real Time 3D Object Tracking is used for tracking of airplane wing and helicopter blade deflection and deformation in wind tunnel tests. Especially for comparison with numerical simulations, the exact deformation
of a wing has to be known. An important application for high speed tracking is the testing of wing flutter, a
very dangerous phenomenon. Here, a measurement frequency of 1000Hz and more is required to solve the
complex and possibly disastrous motion of an aircraft wing. The function first tracks the objects in 2 images
using small recognizable markers on the wing and a stereo camera set-up. Then, a 3D-reconstruction
follows in real time using matrix conversions. . . . This test is single threaded, but will be converted to 3 threads in the future.

Multi Barcodes: With this test, several different bar codes are searched on a large image (3200×4400 pixel).
These codes are simple 2D codes, EAN13 (=UPC) and 2 of 5. They can be in any rotation and can be extremely fine
(down to 1.5 pixel for the thinnest lines). To find the bar codes, the test uses several filters (some of them multithreaded). The bar code edge processing is single threaded, though.

Label Recognition/Rotation is being used as an important pre-processing step for character reading (OCR).
For this test in the large bar code image all possible labels are detected and rotated to zero degree text rotation.
In a real application, these rotated labels would now be transferred to an OCR-program – there are several good programs
available on the market. But all these programs can only accept text in zero degree position. The test uses morphology
and different filters (some of them multithreaded) to detect the labels and simple character detection functions to locate the text and to determine the rotational angle of the text. . . . This test uses Rotation in the last important step, which is fully multithreaded with up to 8 threads.

The 980X’s strong performance continues, but the newcomer isn’t able to distinguish itself from its predecessors in operations with fewer threads. I’m not sure what happened in the Multi Barcodes test, where it was even a little slower.

picCOLOR’s synthetic tests measure a number of the program’s individual functions, and the program then computes an average score, again indexed versus a 1GHz Pentium III. The 980X grabs the top spot here by just a bit.

Media encoding and editing

x264 HD benchmark

This benchmark tests one of the most popular H.264 video encoders, the open-source x264. The results come in two parts, for the two passes the encoder makes through the video file. I’ve chosen to report them separately, since that’s typically how the results are reported in the public database of results for this benchmark.
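For reference, a two-pass encode with the x264 command-line tool follows this general shape (file names and bitrate here are hypothetical; the first pass writes only a statistics file, which the second pass uses to allocate bits where they help most):

```shell
# Pass 1: analyze complexity, write stats, discard the encoded output
x264 --pass 1 --bitrate 1500 --stats stats.log -o /dev/null input.y4m

# Pass 2: the real encode, distributing bits according to the pass-1 stats
x264 --pass 2 --bitrate 1500 --stats stats.log -o output.mkv input.y4m
```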

Remember, kids: pass two is where the magic happens. That’s where the six cores of Gulftown truly get exercised, with happy results—nearly a 50% increase in encoding rate over the Core i7-975.

Windows Live Movie Maker 14 video encoding

For this test, I used Windows Live Movie Maker to transcode a 30-minute TV show, recorded in 720p .wtv format on my Windows 7 Media Center system, into a 320×240 WMV format appropriate for mobile devices.

Wow, Microsoft. You really couldn’t see this coming? This is a video encoding app, pretty easily parallelized, and you thought, “Hey, eight threads ought to be enough for anybody.” Really?

LAME MT audio encoding

LAME MT is a multithreaded version of the LAME MP3 encoder. LAME MT was created as a demonstration of the benefits of multithreading specifically on a Hyper-Threaded CPU like the Pentium 4. Of course, multithreading works even better on multi-core processors.

Rather than run multiple parallel threads, LAME MT runs the MP3 encoder’s psycho-acoustic analysis function on a separate thread from the rest of the encoder using simple linear pipelining. That is, the psycho-acoustic analysis happens one frame ahead of everything else, and its results are buffered for later use by the second thread. That means this test won’t really use more than two CPU cores.
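The pipelining scheme described above can be sketched with two threads and a one-slot buffer; the `analyze` and `encode` stand-ins below are hypothetical, not LAME MT's actual functions:

```python
import queue
import threading

def analyze(frame):
    # Hypothetical stand-in for the psycho-acoustic analysis stage.
    return frame * 2

def encode(frame, analysis):
    # Hypothetical stand-in for the rest of the encoder.
    return (frame, analysis)

def pipeline(frames):
    """Two-stage linear pipeline: analysis runs one frame ahead of encoding,
    with its results buffered for the second thread."""
    buf = queue.Queue(maxsize=1)  # one-slot buffer keeps analysis one frame ahead
    results = []

    def analyzer():
        for f in frames:
            buf.put((f, analyze(f)))
        buf.put(None)  # sentinel: no more frames

    t = threading.Thread(target=analyzer)
    t.start()
    while (item := buf.get()) is not None:
        frame, analysis = item
        results.append(encode(frame, analysis))
    t.join()
    return results
```

However many cores are present, only the two stages ever run concurrently, which is why the test tops out at two busy cores.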

We have results for two different 64-bit versions of LAME MT from different compilers, one from Microsoft and one from Intel, doing two different types of encoding, variable bit rate and constant bit rate. We are encoding a massive 10-minute, 6-second 101MB WAV file here.

LAME MT remains in our test suite after many years as an example of the limits of multithreaded software—and, by extension, multi-core processors. Yes, you can encode multiple files at the same time faster on a six-core, 12-thread machine like a Gulftown, but we’re not aware of an encoder that uses more than two threads well while encoding a single audio file. Hence, the i7-980X is no faster than the dual-core Core i5-661.

3D modeling and rendering

Cinebench rendering

The Cinebench benchmark is based on Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. This test runs with just a single thread and then with as many threads as CPU cores (or threads, in CPUs with multiple hardware threads per core) are available.

Ah, rendering. The embarrassingly parallel task that spawned the GPU. This is happy territory for the Core i7-980X and unhappy territory for any of its competitors. It’s nearly twice the speed of the Phenom II X4 965 here.

By the way, there is a newer version of Cinebench out, release 11.5, that hopefully resolves some problems with performance scaling at higher core and thread counts. That’s not much of a problem for the Core i7-980X here, obviously, but we have seen issues with dual-socket, Nehalem-based systems. Unfortunately, we didn’t have the stomach for re-testing twenty-some processors with Cinebench 11.5 for the sake of this review.

POV-Ray rendering

We’re using the latest beta version of POV-Ray 3.7 that includes native multithreading and 64-bit support.

In the chess2 scene, Gulftown accomplishes in 37 seconds what the Pentium 4 670 does in 10 minutes. POV-Ray's benchmark scene, by contrast, depends largely on a long, single-threaded operation, like some sort of strange testament to Amdahl's Law. That's why the i7-980X can't improve performance there as much.
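Amdahl's Law makes the limit concrete. If, hypothetically, 40% of a workload's run time is a single-threaded operation, no number of cores can push the overall speedup past 2.5x:

```python
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Upper bound on speedup when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

print(amdahl_speedup(0.6, 6))      # ~2.0x on six cores
print(amdahl_speedup(0.6, 10**9))  # approaches the 2.5x ceiling
```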

3ds max modeling and rendering

The first 3ds max test measures 3D modeling speed, not rendering, which is why it shows no gain with the 980X.

Valve VRAD map compilation

This next test processes a map from Half-Life 2 using Valve’s VRAD lighting tool. Valve uses VRAD to pre-compute lighting that goes into games like Half-Life 2.

The presence of the Pentium 4 kind of distorts the scale of this bar chart, but the 980X again delivers a nice reduction in compile time.

Scientific computing

Folding@Home

Next, we have a slick little Folding@Home benchmark CD created by notfred, one of the members of Team TR, our excellent Folding team. For the unfamiliar, Folding@Home is a distributed computing project created by folks at Stanford University that investigates how proteins work in the human body, in an attempt to better understand diseases like Parkinson’s, Alzheimer’s, and cystic fibrosis. It’s a great way to use your PC’s spare CPU cycles to help advance medical research. I’d encourage you to visit our distributed computing forum and consider joining our team if you haven’t already joined one.

The Folding@Home project uses a number of highly optimized routines to process different types of work units from Stanford’s research projects. The Gromacs core, for instance, uses SSE on Intel processors, 3DNow! on AMD processors, and Altivec on PowerPCs. Overall, Folding@Home should be a great example of real-world scientific computing.

notfred’s Folding Benchmark CD tests the most common work unit types and estimates the number of points per day that a CPU could earn for a Folding team member. The CD itself is a bootable ISO. The CD boots into Linux, detects the system’s processors and Ethernet adapters, picks up an IP address, and downloads the latest versions of the Folding execution cores from Stanford. It then processes a sample work unit of each type.

On a system with two CPU cores, for instance, the CD spins off a Tinker WU on core 1 and an Amber WU on core 2. When either of those WUs are finished, the benchmark moves on to additional WU types, always keeping both cores occupied with some sort of calculation. Should the benchmark run out of new WUs to test, it simply processes another WU in order to prevent any of the cores from going idle as the others finish. Once all four of the WU types have been tested, the benchmark averages the points per day among them. That points-per-day average is then multiplied by the number of cores on the CPU in order to estimate the total number of points per day that CPU might achieve.
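The scoring scheme described above reduces to simple arithmetic; a sketch with hypothetical per-WU points-per-day figures:

```python
def estimated_ppd(per_wu_ppd, n_cores: int) -> float:
    """Average points per day across the WU types, then scale by core count."""
    return sum(per_wu_ppd) / len(per_wu_ppd) * n_cores

# Hypothetical single-core scores for the four WU types:
print(estimated_ppd([300.0, 500.0, 400.0, 600.0], 6))  # 2700.0
```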

This may be a somewhat quirky method of estimating overall performance, but my sense is that it generally ought to work. We’ve discussed some potential reservations about how it works here, for those who are interested.

We have, in the past, included results for multiple WU types, but given the fact that per-core performance results are distorted when Hyper-Threading allows multiple threads to be run simultaneously, we’ve decided simply to report the overall score this time.

A nice result, but I should note that you can probably expect to accumulate many more points per day if you use the SMP client for Folding. I’m hoping notfred will succumb and change the benchmark to use the SMP client soon. If not, we may have to retire this test, since the SMP client seems to be what everyone is using these days.

MyriMatch proteomics

Our benchmarks sometimes come from unexpected places, and such is the case with this one. David Tabb is a friend of mine from high school and a long-time TR reader. He has provided us with an intriguing new benchmark based on an application he’s developed for use in his research work. The application is called MyriMatch, and it’s intended for use in proteomics, or the large-scale study of protein. I’ll stop right here and let him explain what MyriMatch does:

In shotgun proteomics, researchers digest complex mixtures of proteins into peptides, separate them by liquid chromatography, and analyze them by tandem mass spectrometers. This creates data sets containing tens of thousands of spectra that can be identified to peptide sequences drawn from the known genomes for most lab organisms. The first software for this purpose was Sequest, created by John Yates and Jimmy Eng at the University of Washington. Recently, David Tabb and Matthew Chambers at Vanderbilt University developed MyriMatch, an algorithm that can exploit multiple cores and multiple computers for this matching. Source code and binaries of MyriMatch are publicly available.
In this test, 5555 tandem mass spectra from a Thermo LTQ mass spectrometer are identified to peptides generated from the 6714 proteins of S. cerevisiae (baker’s yeast). The data set was provided by Andy Link at Vanderbilt University. The FASTA protein sequence database was provided by the Saccharomyces Genome Database.

MyriMatch uses threading to accelerate the handling of protein sequences. The database (read into memory) is separated into a number of jobs, typically the number of threads multiplied by 10. If four threads are used in the above database, for example, each job consists of 168 protein sequences (1/40th of the database). When a thread finishes handling all proteins in the current job, it accepts another job from the queue. This technique is intended to minimize synchronization overhead between threads and minimize CPU idle time.

The most important news for us is that MyriMatch is a widely multithreaded real-world application that we can use with a relevant data set. MyriMatch also offers control over the number of threads used, so we’ve tested with one to eight threads.
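The job-splitting scheme David describes is a classic work queue. Here's a rough sketch; the chunking math matches his example (6714 proteins on four threads gives 40 jobs of 168 sequences apiece), while the `work` callable is a hypothetical stand-in for the spectrum comparison:

```python
import math
import queue
import threading

def make_jobs(n_proteins: int, n_threads: int, jobs_per_thread: int = 10):
    """Split the protein database into n_threads * jobs_per_thread jobs."""
    n_jobs = n_threads * jobs_per_thread
    size = math.ceil(n_proteins / n_jobs)
    return [range(i, min(i + size, n_proteins))
            for i in range(0, n_proteins, size)]

def run(n_proteins: int, n_threads: int, work):
    """Workers pull whole jobs from a shared queue until it runs dry,
    keeping synchronization overhead low and CPU idle time short."""
    jobs = queue.Queue()
    for job in make_jobs(n_proteins, n_threads):
        jobs.put(job)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                job = jobs.get_nowait()
            except queue.Empty:
                return
            processed = [work(p) for p in job]
            with lock:
                results.extend(processed)

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# 6714 proteins on four threads: 40 jobs of 168 sequences apiece.
print(len(make_jobs(6714, 4)[0]))  # 168
```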

I should mention that performance scaling in MyriMatch tends to be limited by several factors, including memory bandwidth, as David explains:

Inefficiencies in scaling occur from a variety of sources. First, each thread is comparing to a common collection of tandem mass spectra in memory. Although most peptides will be compared to different spectra within the collection, sometimes multiple threads attempt to compare to the same spectra simultaneously, necessitating a mutex mechanism for each spectrum. Second, the number of spectra in memory far exceeds the capacity of processor caches, and so the memory controller gets a fair workout during execution.

Here’s how the processors performed.

The drop from 59 seconds with the Core i7-975 to 41 seconds with the 980X is pretty darned good for a benchmark that purports to be largely bound by memory bandwidth. Gulftown is efficient enough with its memory accesses, perhaps in part due to its larger cache, to extract more performance from its additional cores.

STARS Euler3d computational fluid dynamics

Charles O’Neill works in the Computational Aeroservoelasticity Laboratory at Oklahoma State University, and he contacted us to suggest we try the computational fluid dynamics (CFD) benchmark based on the STARS Euler3D structural analysis routines developed at CASELab. This benchmark has been available to the public for some time in single-threaded form, but Charles was kind enough to put together a multithreaded version of the benchmark for us with a larger data set. He has also put a web page online with a downloadable version of the multithreaded benchmark, a description, and some results here.

In this test, the application is basically doing analysis of airflow over an aircraft wing. I will step out of the way and let Charles explain the rest:

The benchmark testcase is the AGARD 445.6 aeroelastic test wing. The wing uses a NACA 65A004 airfoil section and has a panel aspect ratio of 1.65, taper ratio of 0.66, and a quarter-chord sweep angle of 45º. This AGARD wing was tested at the NASA Langley Research Center in the 16-foot Transonic Dynamics Tunnel and is a standard aeroelastic test case used for validation of unsteady, compressible CFD codes.
The CFD grid contains 1.23 million tetrahedral elements and 223 thousand nodes . . . . The benchmark executable advances the Mach 0.50 AGARD flow solution. A benchmark score is reported as a CFD cycle frequency in Hertz.

So the higher the score, the faster the computer. Charles tells me these CFD solvers are very floating-point intensive, but oftentimes limited primarily by memory bandwidth. He has modified the benchmark for us in order to enable control over the number of threads used. Here’s how our contenders handled the test with different thread counts.

Hmm, what was I saying above about memory bandwidth, efficiency, and caches? The same must apply here, where the 980X’s performance again scales up awfully well. AMD’s fastest Phenom II achieves well under half the computational rate of the i7-980X.

Overclocking

After our Clarkdale overclocking exploits yielded very healthy overclocks—speeds of 4.4 and 4.5GHz for the two chips we tried—I had high expectations for their Gulftown cousin. Since the 980X has an unlocked multiplier, I simply turned up the multiplier and CPU core voltage in order to overclock it.

At its stock 1.25V, our Gulftown didn’t take well to higher frequencies—a humble 3.6GHz was all it would do. Fortunately, taking the voltage up to 1.41V did the trick, and our 980X was stable with a 31X multiplier, which should yield 4.13GHz. In fact, I left Turbo Boost enabled during my overclocking attempts, and once the system had booted into Windows, the 980X simply ran at 4.26GHz pretty much all of the time, with one thread or 12, even during our Prime95 torture test. That’s a bit better than the 4GHz we coaxed out of our Core i7-975 a while back.
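For reference, those clock speeds fall straight out of the multiplier times the LGA1366 platform's nominal 133.33MHz base clock:

```python
BCLK_MHZ = 133.33  # nominal LGA1366 base clock

def core_clock_ghz(multiplier: int) -> float:
    return multiplier * BCLK_MHZ / 1000.0

print(core_clock_ghz(31))  # ~4.13 GHz at our stable multiplier
print(core_clock_ghz(32))  # ~4.27 GHz with one Turbo Boost bin on top
```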

Here’s how the 980X performs at that speed. (Note that I’ve also included a few other overclocked CPUs from here. The “H57” notations are explained there.)

You’re really not going to extract much more out of DiRT 2 with a faster CPU, I’m afraid, but Cinebench is clearly another story entirely. Good grief.

What about power consumption at this speed and voltage?

Now you can see why Intel chose to hold the line on clock frequencies for Gulftown. There's room in the 32-nm process, obviously, to reach higher speeds, but you'll need to increase the voltage to get there. Higher voltage means sharply higher power draw, since dynamic power scales with the square of voltage, taking the Core i7-980X well outside of the established power and heat boundaries.
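A first-order CMOS dynamic power model (power proportional to frequency times voltage squared) shows why. Plugging in our overclock's operating points as a rough estimate, ignoring leakage, which rises even faster with voltage:

```python
def relative_dynamic_power(f_ghz: float, volts: float,
                           f0_ghz: float, v0: float) -> float:
    """Dynamic power relative to a baseline: P ~ f * V^2 (first-order model)."""
    return (f_ghz / f0_ghz) * (volts / v0) ** 2

# Stock 3.33GHz at 1.25V versus our 4.26GHz at 1.41V:
print(relative_dynamic_power(4.26, 1.41, 3.33, 1.25))
```

That works out to roughly 63% more dynamic power for a 28% clock speed increase.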

Fortunately, though, Intel’s cooler for the Core i7-980X is up to the task of cooling the CPU when it’s overclocked to this degree. In our torture tests, using the cooler’s Performance mode, CPU temperatures were in the high fifties Celsius and steady. The fan wasn’t exactly silent at that point, but it was a good deal quieter than the worst CPU and GPU coolers I’ve heard.

The value proposition

Now that we’ve buried you under mounds of information, what can we make of it all? One way to filter the information is to consider the value proposition for each CPU model. Exercises like this one are inherently fraught with various, scary dangers—giving the wrong impression, committing bad math, overemphasizing price, coming off as irredeemably cheesy—but our value comparisons have proven to be popular over time, so with the capable assistance of TR System Guide guru Cyril Kowaliski, I’ve taken another crack at it.

What we’ve done is mash up all of our performance data into one big summary value for each processor. The performance data for each benchmark was converted to a percentage using the Pentium 4 670 as the baseline. We’ve included nearly every benchmark we used in our overall index, with the exception of purely synthetic tests like Stream. We excluded MyriMatch and Euler3D, since not all processors were tested in those benchmarks. In cases where the benchmarks had multiple components, we used an overall mean rather than including every component score individually. Each benchmark should thus be represented and weighted equally in the final tally. (The one case where we didn’t average together a single application’s output was WorldBench’s two 3ds max tests, since one measures 3D modeling performance and the other rendering.)
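Mechanically, the index is just an equal-weight average of baseline-relative scores; a sketch with made-up numbers, not our actual results:

```python
def overall_index(benchmark_percentages: dict) -> float:
    """Equal-weight mean of per-benchmark scores, each expressed as a
    percentage of the Pentium 4 670 baseline (100 = baseline)."""
    return sum(benchmark_percentages.values()) / len(benchmark_percentages)

# Hypothetical scores: each benchmark contributes once, weighted equally.
scores = {"encoding": 480.0, "rendering": 520.0, "gaming": 200.0}
print(overall_index(scores))  # 400.0
```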

This overall performance index makes me a little bit wary, because it’s simply a mash-up of results from various tests, rather than an index carefully weighted to express a certain set of priorities. Still, our test suite itself is intended to cover the general desktop PC’s usage model, so the index ought to suffice for this exercise.

We then took prices for each CPU from the official Intel and AMD price lists. Note that AMD’s prices include a small cut since our last CPU roundup. For our historical comparison, we’ve also included the Core 2 Quad Q6600 and the Pentium 4 670 in a couple of places at their initial launch prices.

If we simply take overall performance and divide by price, we get results that look like this:

By this measure, you should almost always buy one of the cheapest CPUs on the market. This bar chart gives us a strong sense of value, but it may focus our attention a little too exclusively on CPU prices alone. For many of us, time is money, and faster computer hardware is relatively inexpensive. What we really want to know is where we can find the best combination of price and performance for our needs. To give us a better visual sense of that, we’ve devised our nefarious scatter plots.
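The shift from CPU-only to whole-system pricing is easy to see in miniature. With made-up numbers, a cheap, slow chip versus a fast, expensive one:

```python
def value(perf_index: float, cpu_price: float, platform_price: float = 0.0) -> float:
    """Performance per dollar; optionally fold in the rest of the system."""
    return perf_index / (cpu_price + platform_price)

# Against CPU price alone, the cheap chip wins in a landslide...
print(value(200.0, 100.0), value(400.0, 999.0))
# ...but once a hypothetical $1,800 platform is added, the fast chip pulls ahead.
print(value(200.0, 100.0, 1800.0), value(400.0, 999.0, 1800.0))
```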

The faster a processor is, the higher on the chart it will be. The cheaper it is, the closer to the left edge. The better values, then, tend to be closer to the top-left corner of the plot. If you wish, you can find your price range and look for the best performer in that area.

For our purposes today, the most noteworthy result is how the Core i7-980X Extreme delivers a major performance boost over the Core i7-975 Extreme at the exact same price, which gives it a much nicer position on the scatter plot, in spite of its relatively high price.

That gets us closer to the heart of the matter, but in reality, the price of a processor is just one component of a PC’s total cost, and the various platforms do have some price disparities between them. To give some context, we’ve selected a series of components for each processor and platform that might go into a fairly high-end PC of the sort a Core i7-980X might inhabit. The specs were largely based on the Double-Stuff config from our recent system guide. Our goal was to achieve rough parity by selecting full-featured ATX motherboards with dual PCIe x16 slots, each with a full 16-lanes of connectivity if possible. Here are the components we picked for the different platforms, along with system prices:

Platform    Total price   Motherboard                          Memory
AMD 790FX   $1839.89      Gigabyte GA-790FXTA-UD5 ($184.99)    8GB Corsair DDR3-1600 ($219.98)
Intel X48   $1784.89      Asus P5E3 Pro ($129.99)              8GB Corsair DDR3-1600 ($219.98)
Intel P55   $1839.89      Gigabyte GA-P55A-UD4P ($184.99)      8GB Corsair DDR3-1600 ($219.98)
Intel X58   $2034.89      Gigabyte GA-X58A-UD5 ($279.99)       12GB OCZ DDR3-1600 ($319.98)

Common components (all platforms): XFX Radeon HD 5870 1GB graphics card ($399.99), Intel X25-M G2 160GB ($499.00), 2x Western Digital Caviar Green 2TB ($359.98), LG WH08LS20 Blu-ray burner ($179.99), Asus Xonar DX ($89.99), Cooler Master Cosmos 1000 ($179.99), Corsair HX750W ($149.99)

What happens when we factor these rather considerable system prices into our value equation?

Voila! You’re pretty much compelled to buy a Gulftown now, folks. Fire up the credit card and go for broke. The numbers don’t lie! In the context of a high-end system like this one, the additional performance offered by the Core i7-980X is actually worth the price of entry.

The scatter plots tell the same story in nearly as compelling a manner. You’re getting a major increase in overall performance by stepping up to the Core i7-980X, and the added cost isn’t a huge part of the total expenditure.

Of course, the AMD fans in the house—who have a deep and abiding affection for cheap processors—and some other value mavens would have me remind you that less expensive CPUs will look like better values in the context of cheaper systems. See our last roundup or our even lower-cost analysis for some examples. Our focus today is on more expensive CPUs and systems since we’re reviewing the 980X.

Performance per dollar isn’t the whole story these days, though. The power efficiency of a processor increasingly helps determine its value proposition for a host of reasons, from total system costs to noise levels to the size of your electric bill. We measured full system power draw and considered efficiency earlier in this article; now, we can factor in system prices to give us a sense of power-efficient performance per dollar.

The 980X’s combination of a high price and excellent power efficiency puts it in the top third of the pack on our bar charts. Both the bar chart and the scatter plot tell us that the 980X represents a major improvement over the Core i7-975 Extreme at the same price. If you were running a render farm, a Gulftown (or more likely, a Westmere-EP Xeon) system could be a very rational purchase once energy costs were taken into account. Still, some of Intel’s less expensive processors offer pretty good power efficiency for much less money, so they’re at the top of the bar chart.

Conclusions

Ok, so maybe being a technology critic these days isn’t so hard. The drawbacks of the Core i7-980X are pretty straightforward. If you’re using programs that don’t take advantage of more than four cores—and ideally more than eight threads—then you’re not going to see much performance advantage from a six-core, 12-thread processor like this one. Heck, we’ve seen virtually no performance improvement at all in single- or dual-threaded applications versus the prior generation, as represented by the Core i7-975. Also, wow: this thing costs a grand. That’s an awful lot of money for a CPU these days, especially since they keep getting faster—and going obsolete—at a pretty good clip.

And last but not least, most of today’s popular games really don’t need any more CPU power than what you’d find in a good pocket calculator. If you’re hoping the move from something with four cores to the Core i7-980X with six cores will improve your gaming experience in tangible ways, you can probably give that up right now. One could surely measure improved gaming performance in the right PC games—Supreme Commander 2, we are looking at you and your RTS buddies. I wish we’d tested one of you. But given our past experiences, I have my doubts that even a measurable increase in frame rates would translate into a noticeable, seat-of-the-pants improvement. We want games that take advantage of a processor like this one—games with A.I. that doesn’t get stuck in doors and say the same three lines repeatedly in random order—to exist. We just don’t think they do yet.

All of which means the Core i7-980X isn’t for everyone. But then you probably had that figured out by now.

For the sorts of folks who buy high-end PCs, this six-core monster might just make some sense, as our performance-per-dollar value analysis made abundantly clear. If you’re into some of the things we tested that did show performance gains—video editing, image processing, file compression and encryption, 3D rendering, Folding, scientific computing—then getting 50% more cores and cache at the same price, speed, and power draw as the previous model could be a heck of a deal. In fact, the deal may be good enough to prompt you to step up from one of our prior favorites, like the Core i5-750 or i7-860. We rarely recommend that folks go whole hog and purchase an Extreme edition processor like this one, but a case can be made for the Core i7-980X. Heck, I even like the 980X’s stock Intel cooler pretty well, and it seems to offer some overclocking headroom.

The fact that we’re actually saying nice things about a thousand-dollar CPU’s value proposition speaks volumes about the potency of the six-core Gulftown chip. The prospect that really has me excited, though, is what comes next. The 980X is slated to become available for purchase in the next few weeks. Surely, at some point after that, Intel will start repopulating the entire Core i7-900 lineup with Gulftown-based parts. That range right now extends to under 300 bucks. If Gulftown processors get to be that cheap, they could very well dominate our value charts for the rest of the year.

Comments closed
    • indeego
    • 9 years ago

    Just got this. Holy feces! So fast!

    • ronch
    • 10 years ago

    Dear AMD,

    Please make sure that Bulldozer is really, really, REALLY FAST!!!

    Thanks.

    ronch

    • NeelyCam
    • 10 years ago

    /[

      • OneArmedScissor
      • 10 years ago

      “6-core with a reasonable TDP is probably only doable on 32nm, though”

      Eh…AMD don’t seem to have an issue with 12 cores. They’ve been doing 6 cores for a while now. Intel themselves are doing Nehalem EX at 45nm, which is an absolutely enormous chip.

      I imagine the explanation for a lack of 32nm quad-cores is much less technical.

      There really cannot be that many people buying Nehalem quad-cores for desktops. Look how long they waited to get to Lynnfield.

        • NeelyCam
        • 10 years ago

        Yeah, I guess you missed the part about severe clock frequency reduction…?

        Magny-Cours isn’t out yet. I’m sure when the benchmarks come out, Gulftown will have a healthy performance margin over M-C… after all, Gulftown has 12 threads and is clocked MUCH higher. Even at same clock speeds, Nehalem architecture is faster.. what do you think will happen when frequency is increased by 50%? Overall, comparing M-C and Gulftown is apples and oranges.

        And what do you think everyone is buying for their desktops…? Athlons? Dear sir – keep in mind that Intel still owns some 80% of the market. My guess is that Nehalem quad-cores are selling quite well.

          • OneArmedScissor
          • 10 years ago

          AMD have had 2.8 GHz 45nm six core CPUs for nearly a year. Intel’s own Nehalem EX will be eight cores, and with a cache larger than most entire CPUs, but still with a standard TDP.

          This has nothing to do with AMD vs. Intel. What you said about six core CPUs not being possible at 45nm because of TDP is clearly wrong.

          Intel just has no reason to make 32nm native quad-cores. They are no good for servers. They would not help their desktop sales.

          And yes, I do think people are buying Athlons. They’re also buying Pentium dual-cores, which are far and away the majority of Intel’s sales.

          Desktop Lynnfields and Bloomfields combined are only a tiny percentage of Intel’s sales, and at no point did Intel ever expect them to be any more than that.

          That’s reality. There is absolutely nothing to argue.

            • NeelyCam
            • 10 years ago

            Care to show links to numbers to support your “reality”?

            And I never said it wasn’t possible. I said it would require cutting frequency (and performance) or increasing TDP to the point where it doesn’t make sense anymore.

            I’m sure even you agree that a 3GHz Nehalem core is faster than a 3GHz Athlon/Phenom, in almost every benchmark. That’s reality. There is absolutely nothing to argue about this. So, with all due respect, maybe we shouldn’t be comparing AMD’s 2.8GHz six-cores to Intel’s 3.3GHz six-cores… completely different beasts, in different performance classes.

            • flip-mode
            • 10 years ago

            You’re saying we shouldn’t compare one 6 core x86-64 processor to another 6 core x86-64 processor because they’re apples and oranges?

            Even if Intel’s 6 core was an apple (it is not an apple) and AMD’s 6 core was an orange (it is not an orange), they could be compared quite convincingly, and in fact, someone has already done the work for me:

            http://www.theamericanview.com/index.php?id=802

            Anyway, back to your original post, #190, I don't know if I even know what you're trying to say. It seems like you are trying to figure out why Intel is running different processors at different nodes. I'm pretty sure you're at least half right - Intel is doing their 6 core at 32nm, not because 45nm is not possible, but probably because 32nm is better in terms of power consumption. Maybe their 45nm lines are pretty fully utilized already too; they're still making Penryn and derivatives in addition to Nehalem and derivatives. I think the bottom line is that Intel has several, several chip fabrication plants and they are trying to keep them all utilized to the fullest extent possible, plus the fact that 32nm might give their 6 core CPU better power consumption than 45nm.

            • NeelyCam
            • 10 years ago

            Yeah, I’m pretty much ready to take back my analysis… I’m sure Intel has reasons to not make 4-core 32nm CPUs… I just can’t figure out what they are.

            I’m confused, though, why the AMD fanbois felt compelled to attack me and start listing reasons why AMD is so great… in an Intel thread. I guess I reap what I sow…

            On another note, Intel just released a 32nm 2.26GHz 6-core Xeon with a 60W TDP. I can’t wait to see what OneArmedScissor is going to say about that.

            • flip-mode
            • 10 years ago

            Heh, that’s cool. We’re all a bunch of experts at speculation and counterspeculation around here.

            • NeelyCam
            • 10 years ago

            Yeah, it was the picture of an apple and an orange that did me in.

      • yuhong
      • 10 years ago

      And the unfortunate thing about it is the mess in CPU features this creates, particularly relating to AES and PCLMULQDQ (and that is only the user-mode visible new features of Westmere!). For the same price range, you can get a dual-core Clarkdale-based Core i5 600 series CPU with AES, PCLMULQDQ, VT-d, TXT, and integrated graphics, or a quad-core Lynnfield-based Core i5 750 with only SSE4.2 and no IGP. What is even more unfortunate is that they stripped AES and PCLMULQDQ out of lower-end Core i3 Clarkdales, and the lowest-end Clarkdale, the Pentium G6950, has even SSE4.x stripped, like the Core 2-based Pentiums. So in the end, you have to get either an expensive 6-core LGA1366 Gulftown or a dual-core LGA1156 Core i5 600 series to get Westmere’s new features like AES and PCLMULQDQ! The good news is that Sandy Bridge will be even better, being the first processor to implement the 256-bit AVX instructions, which will require OS support BTW to save and restore the wider YMM registers, just like a decade ago when SSE was introduced and required OS support to save and restore the XMM registers.

    • Bensam123
    • 10 years ago

    Now if only games would use more then two cores…

    Come on… you can do it!

      • Krogoth
      • 10 years ago

      There are a few that do, but I doubt we will ever see mainstream programs utilizing more than eight threads. You’d be lucky to see any that even use more than four threads.

    • AMDisDEAD
    • 10 years ago

    Indeed, Intel’s position of superiority is complete, dooming what is left of AMD to bargain-basement CPU hell.

    This new Intel Processor is Sweet!
    Gonna grab two for a 12c workstation as soon as the price starts to drop.

      • NeelyCam
      • 10 years ago

      It was complete several months ago. This is overpriced junk that only appeals to render masters and fanbois.

        • AMDisDEAD
        • 10 years ago

        Nope.
        These processors are headed to their true target markets;
        Servers, HPC, and High Performance embedded. This is where the big dollars are.

          • grantmeaname
          • 10 years ago

          Xeons are the real processors destined for servers.

          Likewise, anyone with a clue in the HPC world is using Optys and CBEs.

            • AMDisDEAD
            • 10 years ago

            Think again. Servers are much more commodity nowadays. Many of the hex-core devices will fit server apps quite well.
            Opterons in server apps still fall well behind sales of Intel devices. The MAJORITY of HPC systems are based on Intel, not AMD, devices. Anyone with an ounce of common sense would readily recognise this fact.

            However, IBM's integration of Opterons and Cell is a powerful system, but IBM has since moved on from this architecture.

            • grantmeaname
            • 10 years ago

            l[

            • AMDisDEAD
            • 10 years ago

            Of course I understood your analysis, based on 15-year-old data and realities.
            The fact is, the Xeon BRAND was created at a time when server sales were specialized, unlike today's server market, which is far more commoditized. Today, ISPs are much more cost-sensitive than those in the past. Also, there are certainly far fewer differentials between commodity processors and those branded as "server" processors.
            LOL, today many ISP servers no longer even deploy ECC memory.

            IBM sees this change and is slowly phasing out their server design division as they did with desktop when it also became a non-differentiated market.

            • grantmeaname
            • 10 years ago

            Nothing you just said addressed anything in my post. Do you want to try again?

            • Meadows
            • 10 years ago

            Give up, he’s stupid. I enjoyed your comments, however. Thanks.

            • JumpingJack
            • 10 years ago

            It is beginning to appear that Beckton will do an effective job of displacing Opty's in 4P and up, so I would suspect HPC will begin swinging more toward Intel.

    • Freon
    • 10 years ago

    Pretty cool. About what you’d expect with adding two more cores to an i7-975, give or take. The multithreaded apps scale very nicely.

    • Kunikos
    • 10 years ago

    Can you please use Dragon Age for benchmarks for gaming on >4 core CPUs? That game scales VERY well.

    • indeego
    • 10 years ago

    Amazing how much this thing blows away a server we spent ~$25K on ~2 years ago. le sigh <.< http://indeego.com/sigh.png

    • brucect
    • 10 years ago

    Now let's see AMD's 6-core parts.
    Intel's costs $1000 @#$@!%%#%; in this economy it's impossible to upgrade (no credit card option) :(( fawkeem
    and the 12-core AMD Magny-Cours set selling on eBay for 8000 dollars (gulp)
    here at the link to eBay:
    http://cgi.ebay.com/MAGNY-COURS-set-of-four-G34-socket-AMD-12core-OPTERON_W0QQitemZ280464735612QQcmdZViewItemQQptZCOMP_EN_Networking_Components?hash=item414d003d7c

      • Shining Arcanine
      • 10 years ago

      I can still use credit cards. :/

      How would you upgrade in a different economy?

        • brucect
        • 10 years ago

        I've been out of a job almost six months. Dunno. Tell me how it would be in a different economy.

    • KGA_ATT
    • 10 years ago

    Rats, I was late to the 'review' today. Reading through the posts, it appears my thoughts have been echoed again. The concept of pairing a $100 CPU with a $500 SSD, $400 GPU, etc. when attempting to reflect a real-world(?) system performance/price comparison still leaves a disconnect. The method used will always conclude that the $1K CPU is a 'value' buy, and it just cannot be unless compared to other CPUs near the same cost.

    I would be stunned if CPUs above $200 are found in more than 5% of workplace PCs in the US alone. Very well could be, but I would be stunned. I'd really like to see this workstation CPU (and very much an enthusiast CPU) vs. other $1000 CPUs. That would be a more respectful battle.

    • NeelyCam
    • 10 years ago

    Is it just me, or does this look like two tri-cores stapled together…?

    Are we getting an “i-6” triple-core clarkdale soon…?

      • UberGerbil
      • 10 years ago

      It does look like it wants to break in half, doesn’t it?

      They certainly could do that, but it's unclear if they see a need — particularly with Sandy Bridge (and the shrunken quad replacement for Lynnfield) on the way. Pairing a tri-core with a GPU in a Clarkdale-style MCM is an interesting idea, but then they don't need a bunch of the uncore either (since the GPU holds the memory controller). More importantly, it's unclear how marketable it would be given the diminishing returns for core counts above two, particularly in the value segment. Not to mention it would only differ from the current Clarkdale (which of course already has the other benefits of Westmere) by one core and a higher price, while Intel's i3/i5/i7 consumer naming scheme has been intent on obscuring the core count (and almost everything else).

    • NeelyCam
    • 10 years ago

    +5.

    Friggin’ reply fail.

      • RumpleForeSkin72
      • 10 years ago

      awesome freaking name dude..

      /go B’s

        • NeelyCam
        • 10 years ago

        77 forever!!!

    • PeterD
    • 10 years ago

    The opening paragraph says:
    “The problem comes down to the sheer rate of improvement among the products we review.”
    But I don’t agree.
    Yes, if you look at the figures: things are so many times faster, bigger, etc…
    But you still can’t get a decent computer which does not hang up, which does not need a reset button, and so on.
    So: progress? Not really.
    Impressive figures? Yes.
    But obviously not impressive enough to make for a real improvement in the user’s experience.

      • SubSeven
      • 10 years ago

      I smell a rant.... go get an Apple and some Vaseline and, well, I'll let you figure it out.

      • UberGerbil
      • 10 years ago

      My elderly mother hasn’t had her computer hang up in years. Her new one doesn’t even have a reset button, nor has she needed it. And her new computer is much more responsive than her old one, so her “user experience” is much improved. (She also enjoys Win7 a lot more than XP, but that’s a separate argument)

      • Meadows
      • 10 years ago

      So, your computer hangs up, you’re using Windows XP, and Internet Explorer 6. Every day you tell a new tidbit about your sad conditions.

      Hey, guess what, my computer doesn’t hang. And every piece of electronics needs a resetting function/button, one way or another.

      • MadManOriginal
      • 10 years ago

      It’s probably your OS.

      • PeterD
      • 10 years ago

      Point is: he’s comparing the wrong figures.
      If you compare two CPUs, and one has 2x the GHz of the other, then, yes, there is an improvement.
      However, that does not mean that your user experience is twice as good.
      To actually /[

        • flip-mode
        • 10 years ago

        Seriously with this? He's not reviewing a "computer experience". He's reviewing a processor in terms of its performance. If you want a system review, then head over to CNet or PC World or wherever.

        • Meadows
        • 10 years ago

        There needs to be a 10-20% improvement before most users will readily notice that something has just improved, not a 500% one like you erroneously claim.

    • protomech
    • 10 years ago

    https://techreport.com/articles.x/18581/12

    "Wow, Microsoft. You really couldn't see this coming? This is a video encoding app, pretty easily parallelized, and you thought, 'Hey, eight threads ought to be enough for anybody.' Really?"

    Should be four threads: the six-core 980X shows no benefit over the four-core 975X, both of which are barely faster than a quad-core Phenom II.

    Props for the Big Bang Theory shot : )

      • Ushio01
      • 10 years ago

      It is 8 threads, as the 975 has hyperthreading. Try reading next time.

        • Waco
        • 10 years ago

        *sigh* The program itself uses 4 as evidenced by the performance results when comparing to AMD quads. Learn to read.
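Either way, the underlying complaint is an application with a hard-coded worker cap: cores beyond the cap are simply ignored, no matter how many the chip offers. A minimal sketch of that behavior (the cap of 4 is hypothetical, echoing the debate above):

```python
HARD_CAP = 4  # hypothetical per-app thread limit, as debated above

def worker_count(logical_cpus: int, cap: int = HARD_CAP) -> int:
    """An app with a hard-coded cap spawns min(cores, cap) workers."""
    return min(logical_cpus, cap)

# A 12-thread Gulftown gets no more workers than a quad once capped:
print(worker_count(12), worker_count(4))  # both 4
```

Which is exactly why the six-core part would show no benefit over the quad in such a benchmark.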

    • WaltC
    • 10 years ago

    /[

      • UberGerbil
      • 10 years ago

      He didn’t write it with a straight face… and that seemed pretty obvious to me, but apparently not. As for the rest… tl;dr.

      • Meadows
      • 10 years ago

      For freak’s sake, be concise.

    • Dizik
    • 10 years ago

    Bazinga!

      • NeelyCam
      • 10 years ago

      Now, with 50% faster second pass.

      • Buzzard44
      • 10 years ago

      I caught that reference.

      Many would say that post would best be saved for Fermi.

    • ssidbroadcast
    • 10 years ago

    q[

      • LawrenceofArabia
      • 10 years ago

      I'm thinking you're right

      • Damage
      • 10 years ago

      That’s not how Intel counts its ticks and tocks. I checked their press materials before writing!

      • UberGerbil
      • 10 years ago

      https://techreport.com/r.x/fall-idf/tick-tock.jpg

      New architectures are "tock"; shrinks of existing architectures are "tick." Merom was a "tock," Penryn was a "tick," Nehalem was a "tock," Westmere was a "tick," and Sandy Bridge will be a "tock."

      http://www.legitreviews.com/images/reviews/899/32nm_westmere_slide11.jpg

      I know that seems backwards, but Intel is a fab-oriented company. The shrinks are the things that cost the most, because they involve retooling billion-dollar fabs; a new architecture just requires paying a relative handful of very smart guys to sit in a room for a while. (Yes, I'm kidding... sort of.) The shrinks are what make the next architecture possible, not the other way around. That's how Intel looks at it, and it makes sense to them.

        • oldDummy
        • 10 years ago

        Thank you.
        This is the first time I've read a reasonable explanation.
        Doesn't matter if it's true; at least I can understand it.

        • ssidbroadcast
        • 10 years ago

        Ah, I see where I made the mistake. I just had the “ticks” and “tocks” backwards.

        • MadManOriginal
        • 10 years ago

        “Our clocks don’t make sounds like your clocks.”

        (tock-tick)

      • NeelyCam
      • 10 years ago

      It’s been four years, and this tick tock thingy still confuses the hell out of everyone.

      Let’s just forget about what “tick” means and what “tock” means.

      Conroe: New architecture.
      Penryn: New process.
      Nehalem: New architecture.
      Clarkdale/Gulftown: New process.

      Clear enough?

      • dpaus
      • 10 years ago

      Ve haf ways to make you tock….

    • End User
    • 10 years ago

    I'd love to pop the i7-980X into our render boxes at work. That makes total sense to me. Anything to speed up the workflow.

    The enthusiast in me says no way. i5/i7 quads in the $200-$300 range offer some serious bang for the buck if you factor in OC'ing. My i7-920 D0 @ 4.2 (on air), using Cinebench R10 as a guide (compared to the stock i7-980X), scores 17% higher (6050) in the single-CPU test and only 12% lower (24,400) in the multi-CPU test. Even if you OC the i7-980X, you are only going to match the cheaper OC'ed i5/i7 quads in all but the most multiprocessor-intensive tasks.

    It would be very cool if TR would add OC results from a wider selection of CPUs.

    I bought my ASUS P6X58D Premium with the thought of upgrading to Gulftown in the future. Even if the i7-980X dropped to $300 tomorrow I could not be persuaded to upgrade. Intel needs to improve the single core performance by a substantial amount before I buy a new CPU.

    Edit: Aside from my minor quibble about the lack of OC results, I have to say that it was a fracking awesome review. 🙂

      • Clint Torres
      • 10 years ago

      Hey End User, what kind of rendering do you guys do? We do broadcast graphics and our renders are mainly V-ray via 3DS Max.

      An OC'd i7-860 @ 3.5GHz is nearly as fast as our dual quad-core 3.0GHz Xeons (Core 2 class). Depends on the complexity of the render. More complex renders seem to slow the i7 down, but with simple renders it actually outperforms the 8-core machines... weird.

        • End User
        • 10 years ago

        Our render boxes run Mental Ray (Maya 2010 on the client side).

    • OneArmedScissor
    • 10 years ago

    It would be very interesting to see how they could rig up the turbo modes on one of these to work like the Xeon L3426 Lynnfield.

    While this has limited application for most of the world as a high-power CPU, the potential for low-power multi-core CPUs in general use is very interesting.

    There could be SIX separate turbo boost profiles! That’s enough to tailor each one to just about any given use scenario, without it actually requiring the development of an application-aware profile switching system.

    It would be possible to use even as a low TDP mobile CPU that’s capable of handling pretty much any situation.

    • flip-mode
    • 10 years ago

    Code names pile up so fast we’re going to need a Bulldozer to clear them all away… he he he

      • Krogoth
      • 10 years ago

      Nice and on-point! +1

      • Mr Bill
      • 10 years ago

      Speaking of Bulldozers, in memory subsystem performance, does Istanbul’s 6-core Opteron already win?
      http://www.techreport.com/articles.x/17005/4

        • NeelyCam
        • 10 years ago

        I can’t take anything you say that seriously – I keep getting flashbacks from the awesome, care-free times I had back then when Mr Bill was on MTV some 20 years ago.

        Distracted – yes. I wish I could go back in time, though.

          • Mr Bill
          • 10 years ago

          Yeah, I took the user name Mr Bill because all through high school (70-74) my friends and classmates would keep saying "Ohhhhh!! Nooooo! Mr Bill." I figured I might as well just use the name, since I was practically branded with it.
          @flip-mode I guess it chaps my hide a bit that AMD never seems to win against Intel, FUD or not. AMD is a bit like IBM. IBM could not market OS/2; I doubt either IBM or AMD could market immortal life. But AMD is directly responsible for dropping the cost of very fast CPUs (AMD's and Intel's) into the price range where I can buy them. For that, I am AMD's fan.

            • UberGerbil
            • 10 years ago

            How is that possible? Mr Bill was a bit on the original SNL, which started in 1975 (the first Mr Bill short was in the second season, in 1976; IIRC, SNL solicited "home movies" in the first season and it was one of the submissions). Unless your friends were the actual original creators of Mr Bill, or there was a tremendous coincidence, it would seem you're misremembering your high school years. (Though I'd guess a lot of people who were in high school in the early 70s have at best a hazy or somewhat hallucinatory memory of it.)

            • Mr Bill
            • 10 years ago

            Come to think of it, what they actually used to sing was "My Girl Bill," as in "She's my girl, Bill." And also about that time everybody liked to hum "Oh Bungalow Bill, what did you kill...?" I guess Mr Bill did come along a couple years later, while I was in college. I was more or less in college for the next 19 years. I think I did not start using it as a handle in online forums till I joined Ars Technica, Tech Report, Lost Circuits, and 2CPU at roughly the same time. Come to look, it was 2002.

        • flip-mode
        • 10 years ago

        My honest response is "who cares." I personally don't buy processors for their Sandra scores.

    • rootbear
    • 10 years ago

    Looks like a great CPU for 3D rendering. I look forward to the cheaper versions to come. I love the code-name decoder table on page one. Could you perhaps add a line for Nehalem, right under Penryn?

      • Damage
      • 10 years ago

      Nehalem = Bloomfield
      Westmere = Gulftown

      Westmere also = Clarkdale’s CPU, but with only two cores.

      That kinda should clear it up, best we can given Intel’s schizophrenia + naming disorder. 😉

        • rootbear
        • 10 years ago

        Ah, thanks. It really is confusing…

        • UberGerbil
        • 10 years ago

        Part of the problem is that Intel’s codenames have changed their meaning somewhat over time — depending on who is talking and when, “Nehalem” and “Westmere” have been applied to individual chips, to an architecture, and to a combination of architecture and process node.

        I think it’s easiest to think of it this way:
        Nehalem — new architecture (“tock”) on the 45nm node
        . . Bloomfield (socket 1366, 4C, QPI, 3xDDR3, no PCIe)
        . . Lynnfield (socket 1156, 4C, DMI, 2xDDR3, PCIe)
        . . Clarksfield (same as Lynnfield but for mobile)

        Westmere — existing arch on a new (32nm) node (“tick”)
        . . Gulftown (socket 1366, 6C, QPI, 3xDDR3, no PCIe)
        . . Clarkdale (socket 1156, 2C, 2xDDR3, no PCIe)
        . . Arrandale (same as Clarkdale but for mobile)

        Both Arrandale and Clarkdale also have a (45nm) GPU sitting alongside in the socket, so (again depending on who is talking and when) sometimes that gets included in the features of “the chip” (which goes into the socket) vs the CPU (which sits alongside the GPU in the socket)

        Notable in all this is the lack of a Westmere 4C equivalent to Lynnfield. As Intel has stated all along, we’ll be waiting for the Sandy Bridge generation to get a 32nm quad core. (Unless that gets delayed and/or Intel has a lot of Gulftowns with a bad core or two to salvage….)

        As confusing as all of that is, it’s still much clearer than trying to figure out what you’re getting based on whether it is an i3, i5, i7, or i9 — all of which overlap different processor designs and are intended to address the supposed “good / better / best / extreme!!!” purchasing strategy of non-technical customers…which Intel somehow managed to do while geeking it up in a way that is both cryptic to those customers and infuriating to the more technical ones.

    • wira020
    • 10 years ago

    I think the price of the system should only include the motherboard and the CPU... maybe the memory also... everything else is cross-compatible, so why bother? I think that kinda kills the value comparison...

      • NeelyCam
      • 10 years ago

      Well, you do need the rest as well… if calculating performance/price ratios, the “baseline” cost needs to be included.

      The component selection for low-end CPUs should’ve been scaled accordingly, though, to have a fair value comparison.

        • UberGerbil
        • 10 years ago

        Those two scatter plots would make better use of the available space if the horizontal axis just started at $2K. Of course it’s bad practice to offset an axis like that, but that was my first thought when I saw them.

        Of course you could achieve something close to the same effect by reducing the comparison to just the CPU+mobo+RAM combinations. But, as you say, total system cost is an interesting real-world number that people will be comparing, so it is worth having.
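The effect being described here is easy to quantify: adding a large fixed baseline cost compresses the relative price gap between CPUs, flattering the expensive one. A toy example (all numbers hypothetical, not taken from the review's data):

```python
def perf_per_dollar(perf: float, cpu_price: float, baseline: float = 0.0) -> float:
    """Performance per dollar, optionally including a fixed system baseline cost."""
    return perf / (cpu_price + baseline)

# Hypothetical: a $1000 chip scoring 30% higher than a $200 chip.
for baseline in (0, 2000):  # CPU-only vs. whole-system cost
    cheap = perf_per_dollar(100, 200, baseline)
    extreme = perf_per_dollar(130, 1000, baseline)
    print(f"baseline ${baseline}: extreme/cheap value ratio = {extreme / cheap:.2f}")
```

On CPU price alone, the $1000 part delivers about a quarter of the cheap chip's value per dollar; fold in a $2000 system baseline and the ratio climbs to roughly 0.95, which is why a whole-system scatter plot can make a $1K CPU look like a reasonable buy.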

    • Krogoth
    • 10 years ago

    I give props to Damage.

    In a nutshell, Gulftown is a workstation- and server-class chip. There is no real point in getting one if you are gaming or running other common mainstream tasks. The more affordable Clarkdales, Lynnfields, and Phenom IIs are more than up to that task.

    FYI, the game benchmarks are there to prove that GPUs matter far more than CPUs for performance. I am sure that a 5970 or 285 SLI would show the i7-980X's edge at low resolutions, but what's the point? You would still be getting an absurdly high framerate with a single mid-range GPU under the same conditions.

      • sigher
      • 10 years ago

      I think more power is followed by usage patterns too, not just the other way round.
      In other words, if CPUs become 10 times as fast, then people will find ways to utilize that new standard.
      Look at multi-cores, at first you had a situation where one core could do one thing and you’d have power left to do another thing simultaneously, then they started to multi-thread more and more software and soon your 2 cores were locked doing stuff and you needed more of them to simultaneously do stuff.

    • PRIME1
    • 10 years ago

    can I have the heatsink when you are done with it?

    Pleeeeeeeeeeease.

    🙂

    • StashTheVampede
    • 10 years ago

    We really need to update the list of gaming benches. L4D2 may be a recent game, but it's clearly not heavily CPU-bound. The particle benchmark is a better tool from Valve, and it should stay.

    Modern Warfare 2 also should go: it's not a heavy CPU game either, and it's not significantly different from the previous title for benchmarking purposes. It may be a popular game, but it probably should get removed from a CPU bench.

    What we should have is Battlefield: Bad Company 2 (BC2). Several other sites have shown what 2/4/8 cores give you for this game and I’d love to see additional gains with this 6 core bad boy.

      • Krogoth
      • 10 years ago

      Sorry to disappoint you, but games are not going to push more than four threads. The performance benefit of going beyond four threads is dubious at best, while it is a nightmare to code for.

      Isn't the law of diminishing returns a pain? 😉

        • StashTheVampede
        • 10 years ago

        BC2 benchmarks:
        http://www.nvnews.net/vbulletin/showthread.php?t=148199

        Scroll to the second post, with CPU benches showing more cores doing many more FPS. I'm positive not all games will do this, but I haven't yet found a BC2 benchmark that was strictly an FPS:core comparison.

        My point about those two games is simple: they aren't pushing any "modern" CPU and should be removed from a CPU benchmark.

          • jdaven
          • 10 years ago

          As long as Intel markets its CPUs as extreme gaming chips in its advertisements, they should be benchmarked in games to see if the advertising is all FUD or not.

          • Firestarter
          • 10 years ago

          Wow, the difference between the Phenom X2 and X4 is nothing short of staggering! The HT on vs HT off graph is also disturbing.

      • NeelyCam
      • 10 years ago

      The more I think about it, the less I want this chip… I don’t need that kind of processing power. If I was a gamer, I’d be far better off with i5-750, HD-5870 and an SSD, and it would cost me less than this chip alone.

      Did I just blurt out something obvious that everybody already knows?

        • UberGerbil
        • 10 years ago

        Yes.

        You are admitting you’re not using your computer as a lek token, however.
        (I don’t think anyone is, successfully, but there seem to be plenty who try).

        • neoanderthal
        • 10 years ago

        I can’t help but keep coming back to the fact that, all things being equal, I could go with dual 5870s in Crossfire for just about what that processor costs.

        It’s really impressive, technologically, and for a developer’s workstation, or some other task where more cores = more work done, I can really see a case being made for it.

        For your average everyday enthusiast setup, not so much.

    • crsh1976
    • 10 years ago

    I'm not impressed with the 2MB of cache per core for something touted as "Extreme".

      • maroon1
      • 10 years ago

      It is not 2MB per core.

      It is a 12MB shared L3 cache, which means that each core can use the full 12MB.

        • sigher
        • 10 years ago

        Oversimplified, but a good counter to the other simplification you reply to.

    • elmopuddy
    • 10 years ago

    The price will come down; it always does... this is my next CPU, and it will certainly be an upgrade from my 9450@3.2. My VMware thanks you, Intel 😛

      • shank15217
      • 10 years ago

      Really? You seem pretty sure about that. Let's see.

    • Fighterpilot
    • 10 years ago

    Nice article Scott.
    That thing doesn't make much of a difference with most of the lightly threaded stuff people use regularly, but I must say it's very cool seeing its performance in 7-Zip.
    Hardware solution looking for good software it seems.
    I think the new Intel heatsink/cooler is snazzy.
    Long overdue... but neat.

    • Ryhadar
    • 10 years ago

    Believe me, Scott, I know what you’re saying about getting excited over Gulftown and its performance improvements.

    That being said, I think this CPU as a product is rather boring. It's hella fast, but in its ridiculous price range it performed like a $1000 CPU should perform. If Intel releases a Gulftown chip at about half that price (which I'm sure they will), then I'll be impressed.

    • dpaus
    • 10 years ago

    Is it just me, or do the CPU reviews seem to be using more games (proportional to the total number of tests) even as their relevance plummets? Granted, finding software that a “regular” person would use on a day-to-day basis that can actually make use of the sheer power of these CPUs is getting tougher and tougher.

      • toyota
      • 10 years ago

      Their relevance hasn't plummeted at all. TR is just not using any CPU-intensive games.

        • dpaus
        • 10 years ago

        OK, is it just me, or is it really strange that I make a comment about an unnecessary acceleration and it’s responded to by “toyota”?

          • khands
          • 10 years ago

          Irony is the word you’re looking for.

      • PeterD
      • 10 years ago

      That’s because more normal applications, like office applications, can’t show the difference in speed anymore. Games, which have high demands on video and audio, can.

        • Meadows
        • 10 years ago

        “Office” applications are not “normal” for people who buy Extreme (or even quad) processors. They use other stuff. I agree though, not necessarily games either.

      • sigher
      • 10 years ago

      I’d say working with HD video and (un)packing huge rar files certainly makes CPU power usable for average users.

        • indeego
        • 10 years ago

        Average users don't do that for a significant enough period of their day that it makes a difference.

        I unzip/zip/7z and use a lot of file manipulation, but it’s usually never a time rush, and I just put it all aside and do other things while it’s going on. If you Render, or compile, or your time is tied close to your money, then this easily adds up as worthwhile.

        I'm impressed at how well the E8x00 series does. That cache really helps with most tasks that the common office worker does. <.<

    • jokinin
    • 10 years ago

    So this will be like 900 to 950€, VAT included, in Europe.
    And I say no thank you; it's so freaking expensive, I will build a computer with a higher-performing CPU in 2011.
    I really hope AMD gives Intel some real competition so these insanely high prices come down fast.

    • green
    • 10 years ago

    Cue the "test @ higher rez for more realistic results" in 5...4...3...

      • flip-mode
      • 10 years ago

      Um, he did test at higher res.

    • spworley
    • 10 years ago

    I wonder if there was a problem (or early beta) with the BIOS used for the motherboard. It doesn't seem to be supporting the greenest new feature of the 980X, which is the per-core power shutdown feature. The idle power of the 980X system in the test is *[

      • OneArmedScissor
      • 10 years ago

      They’re both “terrible” because they have all of those high power QPI links. Shutting off a handful of transistors that likely don’t do a whole lot to begin with isn’t going to make a world of difference.

      It’s not a desktop platform. Those things need to be there and work the way they do.

        • NeelyCam
        • 10 years ago

        And 50% more DDR. Maybe there’s also something else in X58 that just sucks power like nobody’s business.

        Awful idle power. Another acre of trees dead.

      • bcronce
      • 10 years ago

      They used Windows 2008, not 2008 R2. R1 does NOT support core parking, so all 6 cores would stay on.

    • Anonymous Coward
    • 10 years ago

    Arggg! You said something about SupCom2, so I am compelled to rant.

    Someone can clear this up for me, but since SupCom2 will run on the Xbox 360 (if you don't mind, I'd like to hurl an insult at it), which is powered by what is more or less three Atoms at twice the netbook clock speed connected to 512MB of RAM, can't we expect that game to be pretty crappy?

    I played SupCom kind of a lot for a while (though TA was way better) and if someone had only 1GB of RAM it would become a huge problem. The Athlon 64 X2’s around 2.4ghz (with 2x 512k L2) ran out of balls pretty quick, but the C2D’s at 2.4ghz (4MB L2) were good. I can’t see how an Xbox would survive.

    Maybe they gutted the game.

      • Mentawl
      • 10 years ago

      Yup, SupCom2 pretty much fails, unfortunately, as they gutted most of the good parts of the game (economy, map scale, the assist system, experimentals). But apparently it’s more accessible? Bah, whatever. SupCom1 is still far more of a challenge to your PC than the sequel.

        • Anonymous Coward
        • 10 years ago

        I don’t understand how they could fail so completely to make a decent TA2. They got the engine down pretty well (although I actually liked the old graphics better, except for explosions and smoke) but they totally failed when it came to game play, and now… they sell out entirely and port to Xbox. What a crime.

          • NarwhaleAu
          • 10 years ago

          True – SC2 could have been so much more expansive, with some real cutting edge graphics instead of the “optimized for Xbox360” engine they served up. Don’t get me started on the economy or trying to keep 5 engineers queued with pay as you go resources…

          That said, I bet a Gulftown would work great with SC:FA. 😀

      • tfp
      • 10 years ago

      SupCom1 would drag a Q9400 @ 3.36 to slower-than-0 sim speed with only 2 AIs, a 2K unit limit, and 2 human players.

      They had large problems with the AIs and CPU usage.

      Also, SupCom1 was on the Xbox 360 as well, so I'm not sure what the point of your argument is.

      Lastly, as has been said multiple times on TR, the SupCom benchmark was not a true test of the game's CPU usage. A real difference can only be seen in a real game with multiple AIs. However, that is almost impossible to bench.

        • Waco
        • 10 years ago

        Supcom 2 has trouble bringing down a dual core /[

          • derFunkenstein
          • 10 years ago

          It’s not just coded for performance to scale better? It has to be dumbed down?

            • OneArmedScissor
            • 10 years ago

            SupCom: FA is to CPUs what Crysis is to GPUs.

            The new one sure as hell better run on a dual-core. There’s no reason the original shouldn’t have.

        • Anonymous Coward
        • 10 years ago

        Heh, I never realized SupCom was on Xbox (was it crappy?), but I definitely saw the SupCom TR benchmarks were not helpful, back when they did them. No connection at all to what I was seeing in the real world.

        Anyway I continue to scorn SupCom2, if for no other reason than their demo movie showed an army driving through wreckage like a ghost goes through doors. Even TA did that much correctly, and it was awesome.

        Also I could never understand why their sub-commanders could also be walking nuclear power plants (which were small, armed, and hard to kill, naturally). Been years since I’ve looked at that game, I was so disappointed.

    • ClickClick5
    • 10 years ago

    Dear Lord….redundant much?

    Intel…wait until the software world can USE that before you ship out an 8 core beast…or even a 16 core (32 thread) monster. Wait.

      • Meadows
      • 10 years ago

      The world can use more than this.

        • djgandy
        • 10 years ago

Indeed. ClickClick5, I don’t remember you being voted in as representative for the world.

        Also most people are using and need dual cores now. In 2005 we didn’t need them. Should we not have bothered with all the research until now? Do you think dual core apps would exist if the hardware was never there to test and develop with?

      • stdRaichu
      • 10 years ago

      The world of virtualisation is obviously new to you 🙂

We’ve got dual quads in the blades for our VMware cluster, and seeing as VMware licensing allows up to six cores per processor, you could install these as drop-in replacements if CPU power was your bottleneck.

Similarly, on a more workstation-oriented taskset, anyone who does lots of video encoding will love these. I do a lot of H.264 stuff with x264, and whilst I’ll generally limit encodes to two threads to keep the quality optimal, I’d be able to do three runs in parallel instead of two.

      • OneArmedScissor
      • 10 years ago

      Every socket 1366 CPU is a server chip.

      Every one of them.

        • derFunkenstein
        • 10 years ago

        no, no, these are being marketed as desktop CPUs. Call me when the name is “Xeon”.

          • djgandy
          • 10 years ago

Well, there is obviously demand. The line between desktop and server is thin these days in many cases. If you do not need unbelievable reliability, then why bother with more expensive, usually slower server components?

I’m sure people with render farms will snap these up. Same for build machines and similar tasks.

          • OneArmedScissor
          • 10 years ago

It’s not being marketed as a “desktop” CPU. It’s being marketed as an “extreme” CPU, and being sold at the same price premium the Xeon version will have, because it’s still intended for the same sort of use.

          There’s plenty of application for it in workstations that may handle server-like loads.

          Every single Bloomfield and Gulftown has or will have an identical “Xeon” branded counterpart, at about the same price, but with all of the QPI links enabled.

          They don’t change anything to make them “desktop friendly.” They get the same minimal turbo boost setups as the “Xeon” branded ones and everything.

            • derFunkenstein
            • 10 years ago

            Yes, you’re going to put so many “Extreme” CPUs in servers. Nobody (well, a very low percentage) bought any Extreme Edition or Athlon 64 FX to put in a server. You’re kidding yourself if you think otherwise. If these are server chips, why aren’t we seeing server benchmarks?

            • UberGerbil
            • 10 years ago

You two need to stop splitting semantic hairs just for argument’s sake (well, unless the argument is the whole point). OAS is making a technical distinction, derFunk is making a marketing one.

            These are server silicon being marketed to a non-server market. They have the same socket, the two QPI links (with one disabled), and every other accoutrement of the 6 core Xeon EP chips. They are just being put into different boxes and sold to different customers.

            Call them Xeons in drag.

And for the customers who actually have a need for a six-core workstation processor, there’s nothing wrong with them (not even the price, since these are generally going to be purchased by businesses that see the benefit to throwing extra cores at sufficiently parallelizable tasks). If Gulftown didn’t exist, those customers likely would be using Xeons to get more cores working on the problem — it would just cost them more, since they’d be buying all the other server-class hardware to go around the chips. Being substitutable for each other doesn’t make them […]

            • OneArmedScissor
            • 10 years ago

            This is the internets. Of course it’s arguing for arguing’s sake!

            I agree that we’re splitting hairs, but I do not agree that Intel are actually marketing this as a straight up “desktop” CPU.

            For the record derFunkenstein, I never said the “extreme” ones are going into servers. They’re still going to see their primary application for the same sorts of work loads as servers may handle, just on a more limited scale.

            How many computers with one of these do you really envision being able to sit on someone’s desk? :p

            • UberGerbil
            • 10 years ago

http://fc03.deviantart.net/fs25/f/2008/178/6/f/INTERNET_FIGHT_by_691.png
http://xkcd.com/386/

Ok. Obviously this overlaps almost exactly with the Xeon 36xx 1P line, but those are […]

    • maroon1
    • 10 years ago

No one is expecting to see an improvement in gaming. No game can utilize 6 cores/12 threads.

However, the improvement in multi-threading is impressive. The 980X is up to 50% faster than the 975 in multi-threaded applications.

    • cal_guy
    • 10 years ago

For the WMV test you could consider using Microsoft Expression Encoder 3 rather than Windows Live Movie Maker 14.

    • Pax-UX
    • 10 years ago

Great review, and I have to agree. While I like that these chips are coming out, I just don’t see any practical need for them beyond 3D and highly parallelized work. This chip would be awesome in servers but doesn’t offer much over a quad for anything I’d be doing with it.

    • Mystic-G
    • 10 years ago

    I know I’m late, but is it safe to say the Cell processor isn’t as powerful for gaming as Sony makes it out to be now?

    • Meadows
    • 10 years ago

That’s the most insane stock cooler I have […]

      • flip-mode
      • 10 years ago

      Yes. That thing looks Boss.

        • no51
        • 10 years ago

I’m kinda curious as to how it compares to aftermarket coolers. Roundup please, Damage?

          • derFunkenstein
          • 10 years ago

          Just based on scale, it looks to have a 92mm fan on it. My GUESS is that it’s competitive with coolers like the Arctic Cooler Freezer 7 Pro or Xigmatek HDT-S983. It’s not going to run with the 120mm big boys, I don’t think.

      • Krogoth
      • 10 years ago

      Insane? Hardly.

      It looks like a run of the mill, tower-style aftermarket cooler.

      Not exactly suitable for a 1U and 2U chassis, but I am sure there are custom coolers for those chassis. 😉

        • derFunkenstein
        • 10 years ago

        For a stock cooler, it is insane. That’s really just a different way to say what the comment you replied to said. I know it’s hard, but do try not to be stupid.

          • Damage
          • 10 years ago

          Or rude…

            • derFunkenstein
            • 10 years ago

            I’ll see what I can do!

          • Krogoth
          • 10 years ago

          I guess you don’t remember these guys. 😉

http://www.btxformfactor.com/files/31/promo2.jpg

            • derFunkenstein
            • 10 years ago

            That’s no different than any number of Shuttle XPCs with ICE coolers.

      • Delphis
      • 10 years ago

      I was thinking it’s pretty awesome for a stock cooler. Looks like the heatpipe thingie I have on my current quad-core. Very nice.

    • AlvinTheNerd
    • 10 years ago

    I think there is a good bit of unfairness with the platform price analysis.

    Intel has several socket types and forces different mobo prices with different processors.

    AMD has the same socket across the line, an advantage to the consumer, but your price system throws it out the window. You use one of the most expensive mobos available with an AM3 socket for the analysis across the AMD product line. While this might make sense with a Phenom II 965 and might be how you ran the benchmarks, very few people would put a $50 Athlon II X2 with a high end 790FX mobo. You need to have at least two tiers for AMD with the other option being the 770 (which meets the criteria) or the more popular but IGP based 785G for the Athlons in the cost analysis.

      • JumpingJack
      • 10 years ago

      You should check again, they generate two plots, one on just processor price — assuming you are just looking at the processor costs, and a second one for building a total system from scratch, using common components.

      The article addresses your concern.

EDIT: Though you have a point, in their total system cost plots they include some very expensive drives (SSDs, 2TB mechanical, and a Blu-ray). When you figure in total system costs, it dilutes the value advantage AMD builds into their pricing, but much of that dilution is over-amplified by the choice of drives.

        • MadManOriginal
        • 10 years ago

        Really. People need to stop crying about the ‘whole system build’ chart which, btw, was added specifically because people asked for it since ‘CPU only’ was ‘unfair.’ Anyone who is going to DIY a computer ought to be competent enough to use the CPU-only chart to decide where their particular complete build choices fall.

        Stop being so freaking lazy. If you’re genuinely looking to build a system put in a little effort, if not just ignore it because the whole ‘value’ section doesn’t really matter too much.

          • JumpingJack
          • 10 years ago

          Meh … I don’t disagree, many should probably be smart enough to figure out what they want to spend and what performance they really want anyhow.

          I am simply trying to be objective to see other’s points of view. I am not one that is bothered much dropping $1K on a processor.

      • Buzzard44
      • 10 years ago

      See #10.

    • mongoosesRawesome
    • 10 years ago

    cpu decoder table says westmere when it should read clarkdale

      • Damage
      • 10 years ago

      The CPU portion of the Clarkdale two-chip package is a 32nm, dual-core Westmere chip. That chip is what’s specified in the table. The 45nm IGP/north bridge is not. Hence…..

    • Kent_dieGo
    • 10 years ago

Wow. Very impressive. I hope that stock cooler makes it to other processors.

    • Buzzard44
    • 10 years ago

    Eh, although I can see you’d want to use the same common components for comparison of total system performance/dollar, doing so by giving all systems ridiculously expensive components just reverses the natural bias from best value being cheapest CPU to best value being the most expensive CPU.

    Who in their right mind would get an X2 255 processor to throw it in with a 5870, a 160GB X-25M, a 750W PSU, 2 2TB HDDs, blu-ray burner, etc? A $2300 X2 255 system?!

    I’m not attacking this, and I think the value section is wonderful, but perhaps having 2 or 3 tiers of common components to match with processor performance would make for less bottle-necked and more realistic computers and computer values.

      • JumpingJack
      • 10 years ago

      Yeah, that is what I just thought through….

      Jack

      • Lans
      • 10 years ago

      I do agree and most people wouldn’t but… March 9, 2010 shortbread had:

      Legion Hardware on Radeon HD 5870 CrossFire CPU scaling performance

And the CPU isn’t too much of a bottleneck at higher res, and at 2560×1600 the difference isn’t that large. Too bad there were no Athlon II X2… Probably doesn’t make a whole lot of sense, but a case can be made for graphics card choice.

      Then again, I built my Phenom II X4 955 machine for around $500 (IGP, no video card… waiting…, no new PSU).

      • Bombadil
      • 10 years ago

      I agree. Both of my inexpensive 785G motherboards have no trouble running my Phenom II X4 940 at ~3.6 GHz–my Xigmatek 1283 seems to be the weak link.

      • SubSeven
      • 10 years ago

+5. The weaker nature of AMD processors greatly skews their perf/dollar on a system basis, making them look like HORRIBLE processors when they really are not. Getting top-notch hardware to accompany a top-notch CPU makes sense because the system is bound by its slowest component. However, when you have top-notch hardware that is bound by the CPU….. well, you get the drift. This is on top of the fact that it makes no sense to pair $2000 of hardware with a $100 CPU.

Again, I don’t mean to be a critic here, and I certainly understand the consistency requirement that needed to be met. However, perhaps a caveat is in order?

        • NeelyCam
        • 10 years ago

        Of course I have to disagree: you were a bit of a critic, and the criticism was well deserved.

        The value comparison doesn’t really make sense.

      • juampa_valve_rde
      • 10 years ago

Agree. In this review they should have included only the performance/mainstream CPUs from AMD (P2 965, 955, and A2 630 as reference), because nobody puts an A2 250 in a $180 mobo… Personally, for such a CPU I don’t spend more than $80-90 on a mobo, yet still acquire a decent one with 785G + sideport, ACC, etc.

        • Damage
        • 10 years ago

We’ve done value comparisons with cheaper systems in the very recent past, and we will do so again in the future. We chose to focus on high-end systems in this article because it is a review of the Core i7-980X. Yes, it is true that cheaper CPUs with lower performance will look relatively better in cheaper systems. That just wasn’t the focus of our attention today.

      • Damage
      • 10 years ago

      We’ve done value comparisons with cheaper systems in the very recent past, and we will do so again in the future. We chose to focus on high-end systems in this article because it is a review of the Core i7-980X. Yes, it is true that cheaper CPUs with lower performance will look relatively better in cheaper systems. That just wasn’t the focus of our attention today.

      I’ll add an extra disclaimer to the value section to help make that clear.

        • Damage
        • 10 years ago

        Disclaimer is in:

[…]

          • flip-mode
          • 10 years ago

LOL. I was going to say before that it was so damn obvious that it’s a shame you need to add a disclaimer. But sometimes you get to have a little fun with the disclaimer!

          • SubSeven
          • 10 years ago

          Blah… I’m going to go hide in my corner now. That’s what happens when you skim stuff. I shouldn’t read these things at work; but it was just tempting!!!!

          • d0g_p00p
          • 10 years ago

          ell oh ell….

      • djgandy
      • 10 years ago

      I agree, it would require quite a lot of added complexity.

      You’d need to pick 10 or so graphics cards to do this ‘fairly’ I think. That’s a lot of benchmarking.

      You could use estimates with lower end graphics cards, just to get a general idea where they fall.

      Maybe it’s time to split off the GPU workloads (games) from the CPU workloads?

    • alphacheez
    • 10 years ago

    This looks like it’d make my video transcoding a lot less painful than my Pentium Dual Core E2180 OC’d to 3 GHz. Too bad the processor itself costs almost 3 times what I paid for my entire (decidedly budget) computer in H2 2007. It’s times like this I wish AMD had a CPU competitive at the high end to make intel push these 6-core beasts down to prices that might be able to be enjoyed by the masses.

Also, it really does look like the processor was initially laid out with 4 cores and then someone said, “We’re not even close to our transistor/die area budget yet,” so they slapped a couple more cores on. Are there any plans for intel to make an MCM of this a la AMD’s Magny-Cours (it’d probably take a new socket since it’d be such big hunks of silicon, and I could also see thermal issues), or will they just be pushing forward towards Sandy Bridge at this point?

    Too bad about the stagnating single/dual core performance we’ve been seeing since Nehalem was first released but I think we all saw it coming. The writing’s been on the wall since intel hit the MHz/thermal wall with the late-gen P4; I’m glad that forced them down the path they went with core2 and now Nehalem/Westmere to wring as much performance as possible out of a core/clockspeed.

    Looking forward to the Truecrypt results on Westmere chips once that’s available.

      • eternalmatt
      • 10 years ago

      Did you see that there is empty space on the diagram? Look to the top left corner of the schematic. They STILL haven’t fit that “transistor/die area budget”

        • alphacheez
        • 10 years ago

        That’s sort of what I mean; it looks like they literally copy/pasted two more cores at the edge and said “screw it” when they then had unused die space. Maybe they’re hoping defects will end up in that portion of the chip 😛

        Too bad they couldn’t/didn’t put more memory channels (this would make the chip need all new motherboards so that has to be out) or something in that space. They should toss an Atom core in the empty space, then you’ve got 7 cores (joking).

    • eternalmatt
    • 10 years ago

    This thing will knock this shit out of your ass.

    • bdwilcox
    • 10 years ago

    I’m a bit of a noob, but would this processor make Doom run faster?

      • JumpingJack
      • 10 years ago

You are probably better off getting that game in the MS arcade for Xbox 360. The 6-core evolution is not going to do much for gaming, as games remain mostly GPU-bound anyway. Serious computer users who need workstation-class performance will enjoy it, though, I suppose.

      • MadManOriginal
      • 10 years ago

      Successful noob is successful.

      • SecretMaster
      • 10 years ago

      No, but it allows for CPU based physics calculations for bullet trajectories when playing Duck Hunt.

      • Meadows
      • 10 years ago

      It would make Crysis run faster than Doom.

        • hermanshermit
        • 10 years ago

        Actually, switching to software rendering, maybe.

          • l33t-g4m3r
          • 10 years ago

          wasn’t doom’s software renderer limited to 30 fps?
          The only way to get above that is with 3rd party ports.

    • MadManOriginal
    • 10 years ago

    So, would a 2P Gulftown system have a…series of tubes?

      • w00tstock
      • 10 years ago

Didn’t you know the […]

    • yuhong
    • 10 years ago

    “Large pages, up to 1GB in size, are now supported”
    Which was copied from the AMD K10. In fact, Intel copied AMD’s RDTSCP instruction from the DDR2 K8 processors with the original Nehalem.

      • Meadows
      • 10 years ago

      Spreading good things is not a bad thing.

      • derFunkenstein
      • 10 years ago

      I hear there’s an x86 cross-licensing agreement between the two.

      • UberGerbil
      • 10 years ago

      And Intel copied the x64 design from AMD. And AMD copied MMX and SSE.
      The horror!

      FWIW, large pages have been part of the architecture since the Pentium; extending them from 2MB/4MB to 1GB isn’t a particularly notable innovation. (Whereas having the same feature in both processor families makes it more likely the OS will actually make use of it — as the contrast between x64 and 3DNow! demonstrates, features often only matter to the extent they’re “copied”)

      And there’s no such thing as a “K10” — AMD calls it the “10h Family”

      • ManAtVista
      • 10 years ago

You “intel copied AMD!” guys amuse me. Isn’t x86 intel’s design in the first place? Didn’t intel make the first CPU? When AMD copies, it’s interoperability and a win for consumers; when intel does it, it’s a big bad capitalist corporation stealing. I get it.

        • yuhong
        • 10 years ago

        Except that that is not what I mean. I didn’t say “stealing” in my comment at all, in fact.

          • Shining Arcanine
          • 10 years ago

          I think his comment is directed at a much broader audience and not you in particular.

        • grantmeaname
        • 10 years ago

        No, intel did not make the first CPU. Fail.

          • derFunkenstein
          • 10 years ago

The first x86 CPU is how I read it. “Isn’t x86 intel’s design in the first place” coming before the question of making the first CPU made it read that way for me. “Didn’t intel make the first (x86) CPU?” I believe the answer there is an unequivocal yes.

          • ManAtVista
          • 10 years ago

http://inventors.about.com/od/mstartinventions/a/microprocessor.htm

Maybe the term CPU is ambiguous, but I believe the 4004 was the first of the modern CPUs of the type we use today.

        • AMDisDEAD
        • 10 years ago

        No, Intel did not invent the CPU.
        They invented the first commercially available “fixed” instruction set micro-coded CPU.
        At the time, AMD developed and sold a series of custom instruction set CPUs, Bit-slices. They were not the first or sole vendor selling slices.

        It is safe to say that since the U.S. military (not IBM) forced Intel to second source the x86 CPU, AMD has been following 2 steps behind Intel in all of their development except for the brief period several years ago when they managed to take the lead for a few quarters.

      • NeelyCam
      • 10 years ago

      Burn the TROLL!!

        • JumpingJack
        • 10 years ago

I don’t think he is being a troll, necessarily; Intel has followed AMD in those regards….

        @ yuhong — these “Intel copied AMD” type posts are usually flame bait material, you can point out that AMD led the charge without a condescending tone.

AMD and Intel often copy one another or follow one another when the innovation needs to be somewhat standard across the x86 space or there is a clear trend in that direction. Examples include new instructions (AMD copies Intel), the x86 64-bit extension (Intel copies AMD), and at 32 nm AMD will copy Intel with high-k/metal gate on the process side.

          • NeelyCam
          • 10 years ago

Yes, and everybody knows that. His comment was an attempt to get a response by making a one-sided statement; hence, get the gasoline and a box of matches.

          I can’t believe I’m saying this stuff… Did I just get tired of trolling, or am I just tired?

        • AMDisDEAD
        • 10 years ago

        Palin, is it?

          • NeelyCam
          • 10 years ago

          I wouldn’t even spit in her direction, although I probably should.
