Intel’s ‘Sandy Bridge’ Core processors


She’s finally here. At last, Intel is taking the wraps off of one of the most anticipated bits of silicon we’ve seen in years: Sandy Bridge. We’ve known the architectural details of the processor code-named Sandy Bridge for months—they are formidable, new, and different—but we haven’t known exactly how the changes would translate into performance and power efficiency, which is the big question about any product overhauled this extensively. Fortunately, Damage Labs has been churning away for weeks in anticipation of this moment, and we have a pleasantly extensive look at Sandy Bridge’s—ahem, I mean “the second-generation Core microprocessors'”—performance ready for your perusal.

Sandy takes the stage

Sandy Bridge is, essentially, a next-generation replacement for Intel’s primary CPUs for desktops and laptops, including those based on quad-core Lynnfield and dual-core Clarkdale silicon. Because so much information about Sandy Bridge has been available for months, we’re going to skip the architectural deep dive in this review, give you a quick overview of Sandy’s key features, and then focus on our test results. The thing is, even a quick overview of this new chip will take some time, simply because so very much has changed.

At the heart of Sandy Bridge is an essentially new processor microarchitecture, the most sweeping architectural transition from Intel since the introduction of the star-crossed Pentium 4. Nearly everything has changed, from the branch predictors through the out-of-order execution engine and into the memory subsystem. The goal: to achieve higher performance and power efficiency, even on single-threaded tasks, where the integration of multiple CPU cores hasn’t been much help. Additionally, each of those cores holds a revamped floating-point unit that supports a new instruction set called AVX. These instructions allow the processing of vectors up to 256 bits in width, and the hardware supports them quite fully. The result should be much higher sustained rates of throughput for floating-point math, giving new life to media processing applications and other sorts of data-parallel computation.
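To make the AVX change concrete, here is a minimal sketch in C of the sort of loop that benefits: the same multiply-and-add applied to eight single-precision floats at a time instead of four. This is our own illustrative code, not anything from Intel, and note that Sandy Bridge's AVX implementation has no fused multiply-add, so the multiply and add remain separate instructions.

```c
/* Minimal AVX sketch: y[i] = a * x[i] + y[i] over 256-bit vectors.
   Compile with, e.g., gcc -mavx -O2. Illustrative only. */
#include <immintrin.h>

void saxpy_avx(float a, const float *x, float *y, int n)
{
    __m256 va = _mm256_set1_ps(a);              /* broadcast a into all 8 lanes */
    int i;
    for (i = 0; i + 8 <= n; i += 8) {
        __m256 vx = _mm256_loadu_ps(x + i);      /* load 8 floats (256 bits)     */
        __m256 vy = _mm256_loadu_ps(y + i);
        /* Sandy Bridge AVX has no FMA, so multiply and add are separate ops */
        vy = _mm256_add_ps(_mm256_mul_ps(va, vx), vy);
        _mm256_storeu_ps(y + i, vy);
    }
    for (; i < n; i++)                           /* scalar cleanup for the tail  */
        y[i] = a * x[i] + y[i];
}
```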

An overview of Sandy Bridge’s logical layout and features. Source: Intel.

Beyond its potent new cores, Sandy Bridge incorporates more of a PC’s basic functions on a single square of silicon than any prior CPU in its class. Not only does it have the memory controller and PCIe links (in addition to two to four CPU cores), but it also brings a graphics processor onboard. This creeping integration of system components has resulted in higher performance, lower platform power consumption, and more compact packaging, which is why both Intel and AMD are moving deliberately toward further integration.

At the same time, integrated graphics processors (IGPs) are growing more capable, relatively speaking. Sandy Bridge’s IGP bears little resemblance to Intel’s past attempts at graphics; its execution units are capable of substantially more work per instruction and per clock cycle. What’s more, the IGP’s video processing block can both decode and, somewhat distinctively (if you don’t count, say, the iPhone 4), encode H.264 high-definition video streams, opening up the possibility of fully hardware-accelerated video transcoding that barely burdens the CPU cores.

To facilitate better integration, Intel’s architects gave Sandy Bridge a high-bandwidth, ring-style interconnect between the cores, with their associated L3 cache partitions, and the IGP. This fast (up to 384 GB/s in a 3GHz quad-core chip) interconnect has a number of purported benefits, including easing data sharing between cores, providing the throughput needed for the processor’s revamped floating-point units, and allowing the onboard graphics component to expand its available bandwidth by making use of the L3 cache.
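Intel doesn't show its work on that figure, but it is consistent with a simple back-of-the-envelope calculation, assuming the oft-cited 32 bytes per clock for each of the chip's four L3 cache slices:

4 slices × 32 bytes/clock × 3 GHz = 384 GB/s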

The quad-core Sandy Bridge die with major sections labeled. Source: Intel.

Better integration has created new possibilities for power management, as well. Sandy Bridge extends Intel's Turbo Boost feature in several ways. Turbo Boost takes advantage of available headroom in the CPU power delivery and cooling mechanisms to deliver higher clock frequencies at lower load levels. The first change is simply more clock speed headroom generally. Although Turbo behavior varies from model to model, Sandy Bridge reaches higher clock speeds and ramps up more aggressively than older processors. The revised Turbo algorithm also does something that may seem a little counterintuitive at first, allowing the CPU to ramp beyond its maximum rated power use (thermal design power, or TDP) for brief periods of time. As I understand it, Intel is taking advantage of the lag between when a relatively cool, idle chip begins to warm up its environment and when temperatures have risen to levels where full cooling capacity is needed. During this span of time, the chip may opportunistically push beyond its rated thermal peak by running at higher-than-usual frequencies within its Turbo Boost range. Once the surrounding system has warmed up or enough time has passed (the algorithm is complex, and Intel hasn't shared all of the details with us), the chip will drop back to operating within its TDP max. Intel claims this feature has an important usability benefit for common usage patterns, where periods of high utilization are "bursty" by nature—think of opening a program or running a Photoshop filter. Furthermore, Sandy Bridge's Turbo Boost algorithm incorporates not just the CPU cores but the IGP, as well; it can raise the operating frequency of the graphics processor when the CPU cores aren't at full utilization.
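Intel hasn't published the algorithm, but the general idea can be modeled as an energy budget that drains while the chip runs above its sustained limit and refills while it runs below it. Here is a rough, hypothetical sketch in C of such a scheme; the power limits, budget size, and workload below are made-up numbers of ours, not Intel's.

```c
/* Hypothetical model of a "burst above TDP" Turbo budget. Not Intel's algorithm;
   the limits and time constants below are made-up illustrative numbers. */
#include <stdio.h>

int main(void)
{
    const double tdp_w      = 95.0;   /* sustained power limit (W), assumed       */
    const double burst_w    = 120.0;  /* short-term power limit (W), assumed      */
    const double budget_max = 500.0;  /* thermal "slack" in joules, assumed       */
    double budget = budget_max;       /* a cool, idle chip starts with full slack */

    for (int t = 0; t < 120; t++) {   /* one-second steps of a bursty workload    */
        double want_w = (t < 30) ? burst_w : (t < 60) ? tdp_w : 10.0;
        double draw_w = want_w;

        if (budget <= 0.0 && want_w > tdp_w)
            draw_w = tdp_w;           /* slack exhausted: fall back to TDP        */

        budget += tdp_w - draw_w;     /* drain above TDP, refill below it         */
        if (budget > budget_max) budget = budget_max;
        if (budget < 0.0)        budget = 0.0;

        printf("t=%3d s  draw=%6.1f W  budget=%6.1f J\n", t, draw_w, budget);
    }
    return 0;
}
```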

Code name | Key products | Cores | Threads | Last-level cache size | Process node (nm) | Estimated transistors (millions) | Die area (mm²)
Penryn | Core 2 Duo | 2 | 2 | 6 MB | 45 | 410 | 107
Bloomfield | Core i7 | 4 | 8 | 8 MB | 45 | 731 | 263
Lynnfield | Core i5, i7 | 4 | 8 | 8 MB | 45 | 774 | 296
Westmere | Core i3, i5 | 2 | 4 | 4 MB | 32 | 383 | 81
Gulftown | Core i7-980X | 6 | 12 | 12 MB | 32 | 1168 | 248
Sandy Bridge | Core i5, i7 | 4 | 8 | 8 MB | 32 | 995 | 216
Sandy Bridge | Core i3, i5 | 2 | 4 | 4 MB | 32 | 624 | 149
Deneb | Phenom II | 4 | 4 | 6 MB | 45 | 758 | 258
Propus/Rana | Athlon II X4/X3 | 4 | 4 | 512 KB x 4 | 45 | 300 | 169
Regor | Athlon II X2 | 2 | 2 | 1 MB x 2 | 45 | 234 | 118
Thuban | Phenom II X6 | 6 | 6 | 6 MB | 45 | 751 | 346

The table above shows the key specs for the quad- and dual-core versions of Sandy Bridge alongside other recent chips. Thanks to Intel's 32-nm, high-K metal gate fabrication process, the nearly one billion transistors in the quad-core version of Sandy Bridge fit into a die area smaller than either the Lynnfield chip it replaces or the "Deneb" Phenom II with which it competes—and neither of those other chips has integrated graphics. If you're counting along at home, Intel tells us each CPU core is made up of roughly 55 million transistors, while the graphics core is 114 million. We suspect a great many of the remaining transistors are packed tightly into the chip's 8MB of L3 cache.

Now that you have a sense of the scope of the thing, it should come as no surprise that there’s much, much more to Sandy Bridge than we can cover in this context. If you’d like more detail, please have a look at our Sandy Bridge primer, which considers the microarchitectural changes in more depth. If you want even more detail, we suggest reading David Kanter’s overview of the Sandy Bridge microarchitecture, too.

The products

We should pause briefly to let you know that Intel may attempt to confuse you by referring to Sandy Bridge-based products as "second-generation Core microprocessors." Do not be taken in by this strange attempt at marketing. What they are talking about is not the Merom/Conroe generation of chips, sold under the "Core 2" banner and based on the microarchitecture known as Core. Nor are they talking about Nehalem chips based on the second-generation Core microarchitecture and sold as Core i3, i5, and i7. No, that would be entirely too simple. They're also not talking about the later Westmere/Gulftown chips, which comprised the second generation of processors to fall under the Core i3/i5/i7 naming scheme. Nope! What Intel means when it uses this funny turn of phrase is, strangely enough, Sandy Bridge-based processors. How they are the second generation of anything with a Core name on it is beyond me, but you may want to file away this information in case the phrase comes up. There's probably no good way to map it out logically, so memorization may be key.

While I’m complaining about confusing marketing, let’s have a look at the Sandy Bridge product stack. Normally, we’d make up a nifty, blue table with all of the various models and their key specifications, but this time, I’ve decided to relay it to you just as it came to us from Intel. Apologies for the small lettering, but there’s a lot to cram into the space.

The Sandy Bridge-based desktop product stack. Source: Intel.

Listed above are just the desktop variants of Sandy Bridge being announced today, and only those within the traditional lineup. There are also low-power S models and even lower-power LV variants intended for small-form-factor PCs, along with “supplemental” models aimed only at large PC manufacturers—not to mention the mobile versions for laptops and such, which are increasingly important. In all, Intel is announcing 29 new CPU models and 10 different chipset variants today.

The sheer variety itself isn't a huge problem, but you'll want to watch carefully before buying a Core ix-2000-series processor because, given the great number of possible knobs and dials it could tune in order to differentiate its various CPU models, Intel has chosen to twiddle with virtually all of them—base and Turbo clock speeds, core count, thread count, L3 cache size, IGP type, you name it. If there's a feature you want or need, there's no guarantee that just buying a new processor will get it for you. (The difference between the HD 2000 and 3000 IGPs, incidentally, is the number of execution units; the 2000 has only six enabled, while the 3000 has 12.) Intel has no doubt been wildly successful over the years and will likely continue to be with these new products. Still, we can't help but wonder whether that success has come in spite of its product segmentation practices, which are surely incredibly confusing to most consumers, rather than because of them.

With that complaint out of the way, the wonderful thing you'll want to notice about Sandy Bridge is that she's really quite an affordable date. The most expensive, unlocked K-series quad-core, the i7-2600K, rings up at only $317. The rest of the lineup costs less and extends down to the $117 Core i3-2100, a 3.1GHz dual-core. These really are mid-range and lower parts. Intel is leaving the high end to its venerable LGA1366 socket and Gulftown-based six-core processors. Whether the Sandy Bridge quads will challenge the Gulftown parts on performance, though, is quite another story, as we'll soon see.

The quad-core versions of Sandy Bridge listed above should be available for purchase as soon as you read this text. The dual-cores, in both desktop and mobile form, are slated to reach the market in four to five weeks.

New socket, new chipsets

The vast changes to Intel’s CPU silicon bring with them big changes to the surrounding infrastructure, including a new socket type and new chipsets. You can’t plug a Sandy Bridge processor into an older LGA1156-type motherboard, and trying might leave you with a mangled socket, since the new LGA1155 socket is a different animal with the retention notches in another location. Believe me, I came close to closing the lid on a socket with the wrong chip type inadvertently in place, which would have been catastrophic. Look closely at the pictures below, and you’ll see the difference.

A Clarkdale-based LGA1156 Core i5-560 (left) versus an LGA1155 Sandy Bridge (right)

A Sandy Bridge processor sits in our Asus P8P67 Deluxe motherboard’s socket

1155 pins, or so they say. Start counting?

In short, you’re going to need a new motherboard in order to build a system around a Sandy Bridge CPU. Although Intel has a host of different chipsets targeted at different markets, that mobo will likely be based on one of two offerings: the performance-oriented P67 or the more pedestrian H67.

Logical block diagram of the P67 chipset. Source: Intel.

These new chipsets have a handful of nice improvements over the prior-gen P55 and friends, including the fact that the eight PCI Express lanes branching off of the chipset now transfer data at 5 Gbps, the full rate supported by PCIe 2.0, rather than half that. The increase should help for auxiliary I/O chips like USB 3.0 and SATA 6Gbps controllers. Speaking of which, the chipset itself now has a pair of SATA 6Gbps ports built right in, along with four more SATA 3Gbps ports. The only really glaring omission here is USB 3.0 support, which most motherboard makers have overcome by using third-party USB controllers.
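For a sense of what that change means in practice, PCIe 2.0 signals at 5 GT/s with 8b/10b encoding, so each full-rate lane carries roughly 500 MB/s of payload in each direction, twice what the half-rate lanes on the P55 could manage:

5 GT/s × 8/10 = 4 Gbps ≈ 500 MB/s per lane, per direction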

One thing that you'll want to watch for in Sandy Bridge mobos: with the new Turbo algorithm, our understanding is that VRM design can potentially influence the amount of time the processor spends resident at Turbo clock speeds. It's possible motherboard designs may once again influence overall performance in a way that they haven't for several years. Could make things interesting. Stay tuned for Geoff's take on four of the first such boards based on the P67, which we should be posting soon.

The deal with overclocking

In Sandy Bridge processors, one base clock running at 100MHz governs just about everything in the CPU. Changing that base clock in order to raise the core clock speed will cause nearly everything else in the system—PCIe connections, I/O links, and such—to run at the wrong speed, potentially causing all kinds of Very Bad Things to happen. As a result, the bulk of Sandy Bridge overclocking efforts, at least initially, will likely be limited to adjustments of the CPU multiplier. I don’t think the CPU was designed this way in order to put the clamps on big, bad overclockers. Instead, I get the impression Sandy Bridge was designed by a team who primarily had mobile applications, like laptops, in mind.

Regardless of what happened there, though, Intel does have a couple of solutions to this dilemma to offer to enthusiasts. The first, obviously, is its K-series processors with unlocked multipliers, including the nicely priced Core i5-2500K at $216—just 11 bucks more than the locked version—and the Core i7-2600K at $317. If you have serious overclocking ambitions in mind and good cooling at your disposal, a K-series CPU will probably be the way to go.

Source: Intel.

Intel is leaving some headroom available to those who opt for a non-K model, as well, in the form of four steps of the multiplier—or 400MHz—above the stock Turbo clock speeds. The example above looks to be a Core i5-2500, with a base clock of 3.3GHz, which has Turbo Boost enabled. With the Turbo speed tweaked, it's possible that a single, occupied core could run as fast as 4.1GHz—and all four could run at 3.8GHz. That's a pretty decent amount of headroom left open for "free," I suppose.
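The arithmetic is easy to follow if you think in 100MHz multiplier bins. Assuming the i5-2500's stock Turbo multipliers are 37 with one core active and 34 with all four active (our assumption, consistent with the speeds quoted above), the four extra bins work out to:

(37 + 4) × 100 MHz = 4.1 GHz with one core active
(34 + 4) × 100 MHz = 3.8 GHz with all four cores active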

I’d expect most users who want a modest amount of overclocking on fairly pedestrian air or water cooling to be satisfied with these arrangements, especially given the pricing of the 2500K. We may be past the days of a $99 bonanza of overclocking headroom, though, as we’ve seen in some cheap chips from the past. That’s kind of a shame. The handful of extreme overclockers out there probably won’t be happy at all with the base clock limitations, but they never did represent the average enthusiast’s aspirations in such matters.

Sandy and her rivals square off

We have fallen behind a bit in the past six months or so on the CPU reviewing front. We’ve been so busy covering new technology announcements and such that we’ve missed out on testing several new speed grades of existing processors, and now is our chance to catch up in a big way, with spiffy new test rigs, a refreshed suite of tests, and a vast array of current and past CPUs to compare. Here’s a look at what we’ve tested.

Model | Cores | Threads | Base core clock speed | Peak Turbo clock speed | L3 cache size | Memory channels | TDP | Price
Core i3-2100 | 2 | 4 | 3.1 GHz | -- | 3 MB | 2 | 65 W | $117
Core i5-2400 | 4 | 4 | 3.1 GHz | 3.4 GHz | 6 MB | 2 | 95 W | $184
Core i5-2500K | 4 | 4 | 3.3 GHz | 3.7 GHz | 6 MB | 2 | 95 W | $216
Core i7-2600K | 4 | 8 | 3.4 GHz | 3.8 GHz | 8 MB | 2 | 95 W | $317

First, we have a nice selection of Sandy Bridge processors spanning the range of introductory prices, including both of the unlocked K-series chips. Notice that the quad-core parts have 95W TDP ratings, just like most current Lynnfield-based products, yet they incorporate graphics into the mix, as well.

Model | Cores | Threads | Base core clock speed | Peak Turbo clock speed | L3 cache size | Memory channels | TDP | Price
Pentium G6950 | 2 | 2 | 2.8 GHz | -- | 3 MB | 2 | 73 W | $87
Core i3-560 | 2 | 4 | 3.33 GHz | -- | 4 MB | 2 | 73 W | $138
Core i5-655K | 2 | 4 | 3.2 GHz | 3.46 GHz | 4 MB | 2 | 73 W | $216
Core i5-760 | 4 | 4 | 2.8 GHz | 3.33 GHz | 8 MB | 2 | 95 W | $205
Core i7-875K | 4 | 8 | 2.93 GHz | 3.60 GHz | 8 MB | 2 | 95 W | $342
Core i7-950 | 4 | 8 | 3.06 GHz | 3.33 GHz | 8 MB | 3 | 130 W | $294
Core i7-970 | 6 | 12 | 3.2 GHz | 3.46 GHz | 12 MB | 3 | 130 W | $885
Core i7-980X Extreme | 6 | 12 | 3.33 GHz | 3.60 GHz | 12 MB | 3 | 130 W | $999

Next, we’ve tested a pretty broad range of the prior-gen Intel processors spanning from $87 to $999. Many of these chips are newer speed grades, such as the Core i3-560 and Core i5-760, that represent the latest top bin offered in their respective ranges.

We thought we had a beautiful test design when we were putting this plan together, but I have to admit to being snookered by Intel's naming conventions on one front: the Core i3-560 and i5-655K are nearly the same thing, with very similar clock frequencies. In fact, the i3-560's base speed is straddled by the i5-655K's base and Turbo speeds. All I can say is that I wanted to represent the most desirable i5-600-series part, and that was definitely the unlocked 655K, in my view. The fact that we had already included nearly the same thing didn't hit me until testing was well underway. There's really no harm done, but that's what happened.

Model | Cores | Threads | Base core clock speed | Peak Turbo clock speed | L3 cache size | Memory channels | TDP | Price
Athlon II X3 455 | 3 | 3 | 3.3 GHz | -- | -- | 2 | 95 W | $87
Phenom II X4 840 | 4 | 4 | 3.2 GHz | -- | -- | 2 | 95 W | $102
Phenom II X2 565 Black | 2 | 2 | 3.4 GHz | -- | 6 MB | 2 | 80 W | $115
Phenom II X4 975 Black | 4 | 4 | 3.6 GHz | -- | 6 MB | 2 | 125 W | $195
Phenom II X6 1075T | 6 | 6 | 3.0 GHz | 3.5 GHz | 6 MB | 2 | 125 W | $199
Phenom II X6 1100T Black | 6 | 6 | 3.3 GHz | 3.7 GHz | 6 MB | 2 | 125 W | $265

AMD offers a lot of different CPU models, but its product stack’s pricing has been compressed quite a bit by competitive pressures from Intel. After all, you usually can’t charge much more for a chip than what your competitor asks for one with similar performance. AMD has still been very active in the context of those limitations, offering 100MHz speed bumps regularly. Three of these CPUs debuted early last month, including the new flagship, the Phenom II X6 1100T, the high-frequency dual-core Phenom II X2 565, and the $87 triple-core Athlon II X3 455. Two more of them are brand-new products being announced today (or very soon) as competition for Sandy Bridge: the Phenom II X4 975, which is a speed bump up to 3.6GHz, and the Phenom II X4 840.

AMD’s new Phenoms are here to greet Sandy Bridge

Proving that Intel doesn’t have a lock on confusing marketing moves, the Phenom II X4 840 is actually a new entry in AMD’s lineup of quad-core chips based on Propus silicon, which lacks an L3 cache. This is not a higher speed version of the Phenom II X4 810, which had a 4MB L3 cache. The logic of AMD’s naming scheme to date would dictate that this product would be called the Athlon II X4 650, but marketing has triumphed over logic and given us the newly minted Phenom II X4 840.

Nevertheless, with four cores at 3.2GHz, the X4 840 could prove to be a worthy rival to the new Core i3-2100, and the AMD chip costs less. That logic is a little easier to follow. At around the same price point, the i3-2100 will also face off against the Phenom II X2 565 and its own older sibling, the Core i3-560. Among those, only the Phenom II X2 565 has an unlocked multiplier, as its Black Edition name suggests.

Stepping up a class, we have a six-car pile-up at around $200, with the Core i5-2400 and i5-2500K at the, err, front and rear, to extend the analogy well beyond any reasonable bounds. AMD’s brand-new Phenom II X4 975 is smack-dab in the middle at $195, and it’s an unlocked Black Edition, making it a pretty direct rival to the also-unlocked Core i5-2500K. If you wish, you could shed a few ticks of clock frequency and opt for more cores at virtually the same price in the form of the Phenom II X6 1075T, whose peak Turbo Core speed is 3.5GHz. Among the older Intel CPUs, the Core i5-760 is the latest entry in its lineup, having supplanted one of our long-time value favorites, the Core i5-750. The two new Sandy Bridge processors will have to contend with all of these rivals in order to prove their worth.

Finally, the Core i7-2600K is as close as the Sandy Bridge processors come to a flagship offering. The Extreme Editions and such will remain the Gulftown six-core parts on the X58 chipset. At $317, the 2600K will essentially replace the Core i7-875K, and it may do unkind things to the quad-core Core i7-950, the cheapest current LGA1366 chip. The only kinda-sorta competition from AMD will be the Phenom II X6 1100T Black Edition, which is somewhat less expensive but similarly unlocked.

All of that covers the current landscape quite well, I believe. For those folks looking to upgrade from an older processor, we have tested three CPUs of historical interest. The first of these is a fun one: the Pentium Extreme Edition 840, one of the very first dual-core PC processors ever. The Pentium EE 840 was the result of the Pentium 4’s power and heat problems at higher clock frequencies; it was Intel’s first attempt to take advantage of the power efficiency advantages of limiting clock frequency and increasing thread-level parallelism via multiple cores. Thanks to Hyper-Threading, the EE 840 exposes four threads to the operating system and, thanks to its Pentium 4 roots, still runs at a healthy 3.2GHz. Amazingly, the EE 840 installed and ran happily in our Intel X48 chipset-based motherboard. The only accommodation we had to give it was, of course, a larger cooler.

The other two are more recent: a Core 2 Duo E6400, one of the first mid-range variants of the Merom/Conroe architecture and an early value favorite, and a Core 2 Quad Q9400, a reasonably priced quad-core based on 45nm Penryn chips. If your box is rocking one of these processors or something similar, it may be time to upgrade. We’ll put these in the context of a wide range of today’s CPUs, so you can see what you might get out of taking a step up.

Test notes

As I’ve mentioned, our CPU test rigs are all-new this time around, and we’ve taken the opportunity to freshen up the hardware used in them.

Asus’ ENGTX460 TOP 1GB graphics card

For graphics cards, Asus was kind enough to supply us with its version of the GeForce GTX 460 1GB with a swanky DirectCU cooler. This cooler is pleasantly quiet, and it has helped reduce the noise levels in Damage Labs when we have a number of systems running tests concurrently.

Corsair’s Nova V128 SSD

We've also made the move to SSDs courtesy of Corsair, which supplied its Nova V128 SSDs for our test rigs. We chose these drives for a couple of reasons. The Nova V128 has been one of our editorial picks for a while, and the TRIM implementation in its Indilinx controller is pretty aggressive about clearing unused pages, which should mean disk write performance doesn't vary greatly from run to run while we're benchmarking. We're really pleased with the move to SSDs for our test rigs. They're constantly being rebooted, and the SSDs shorten that process noticeably. Also, the silence is golden.

Our Sandy Bridge test rig with 8GB of Corsair RAM and an Asus P8P67 Deluxe mobo

After consulting with our readers, we’ve decided to enable Windows’ “Balanced” power profile for the bulk of our desktop processor tests, which means power-saving features like SpeedStep and Cool’n’Quiet are operating. (In the past, we only enabled these features for power consumption testing.) Our spot checks demonstrated to us that, typically, there’s no performance penalty for enabling these features on today’s CPUs. If there is a real-world penalty to enabling these features, well, we think that’s worthy of inclusion in our measurements, since the vast majority of desktop processors these days will spend their lives with these features enabled. We did disable these power management features to measure cache latencies, but otherwise, it was unnecessary to do so.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we reported the median of the scores produced.

Our test systems were configured like so:

Processors: Athlon II X3 455 3.3GHz, Phenom II X2 565 3.4GHz, Phenom II X4 840 3.2GHz, Phenom II X4 975 3.6GHz, Phenom II X6 1075T 3.0GHz, Phenom II X6 1100T 3.3GHz
Motherboard: Gigabyte 890GPA-UD3H
North bridge: 890GX
South bridge: SB850
Memory: 8GB (4 DIMMs) Corsair CMD8GX3M4A1333C7 DDR3 SDRAM at 1333 MHz, 8-8-8-20 2T
Chipset drivers: AMD AHCI 1.2.1.263
Audio: Integrated SB850/ALC892 with Realtek 6.0.1.6235 drivers

Processors: Pentium Extreme Edition 840 3.2GHz, Core 2 Duo E6400 2.13GHz, Core 2 Quad Q9400 2.67GHz
Motherboard: Asus P5E3 Premium
North bridge: X48
South bridge: ICH9R
Memory: 8GB (4 DIMMs) Corsair CMD8GX3M4A1600C8 DDR3 SDRAM at 800 MHz or 1066 MHz (depending on the CPU), 7-7-7-20 2T
Chipset drivers: INF update 9.1.1.1025, Rapid Storage Technology 9.6.0.1014
Audio: Integrated ICH9R/AD1988B with Microsoft drivers

Processors: Pentium G6950 2.8GHz, Core i3-560 3.33 GHz, Core i5-655K 3.2GHz, Core i5-760 2.8GHz, Core i7-875K 2.93GHz
Motherboard: Asus P7P55D-E Pro
North bridge: P55
Memory: 8GB (4 DIMMs) Corsair CMD8GX3M4A1600C8 DDR3 SDRAM at 1333 MHz, 8-8-8-20 2T
Chipset drivers: INF update 9.1.1.1025, Rapid Storage Technology 9.6.0.1014
Audio: Integrated P55/RTL8111B with Realtek 6.0.1.6235 drivers

Processors: Core i7-950 3.06 GHz, Core i7-970 3.2 GHz, Core i7-980X Extreme 3.33 GHz
Motherboard: Gigabyte X58A-UD5
North bridge: X58
South bridge: ICH10R
Memory: 12GB (6 DIMMs) Corsair CMP12GX3M6A1600C8 DDR3 SDRAM at 1333 MHz, 8-8-8-20 2T
Chipset drivers: INF update 9.1.1.1020, Rapid Storage Technology 9.5.0.1037
Audio: Integrated ICH10R/ALC889 with Realtek 6.0.1.6235 drivers

Processors: Core i3-2100 3.1 GHz, Core i5-2400 3.1 GHz, Core i5-2500K 3.3 GHz, Core i7-2600K 3.4 GHz
Motherboard: Asus P8P67 Deluxe
North bridge: P67
Memory: 8GB (4 DIMMs) Corsair CMD8GX3M4A1600C8 DDR3 SDRAM at 1333 MHz, 8-8-8-20 2T
Chipset drivers: INF update 9.2.0.1016, Rapid Storage Technology 10.0.0.1046
Audio: Integrated P67/ALC889 with Microsoft drivers

They all shared the following common elements:

Hard drive: Corsair Nova V128 SATA SSD
Discrete graphics: Asus ENGTX460 TOP 1GB (GeForce GTX 460) with ForceWare 260.99 drivers
OS: Windows 7 Ultimate x64 Edition
Power supply: PC Power & Cooling Silencer 610 Watt

Our test systems for integrated graphics looked a little bit different. They were configured like this:

Processor: Phenom II X6 1075T 3.0GHz
Motherboard: Gigabyte 890GPA-UD3H
North bridge: 890GX
South bridge: SB850
Memory: 8GB (4 DIMMs) Corsair CMD8GX3M4A1333C7 DDR3 SDRAM at 1333 MHz, 8-8-8-20 2T
Chipset drivers: AMD AHCI 1.2.1.263
Audio: Integrated SB850/ALC892 with Realtek 6.0.1.6235 drivers
Graphics: Integrated Radeon HD 4290 with Catalyst 10.12 drivers

Processors: Core i3-2100 3.1 GHz, Core i5-2500K 3.3 GHz
Motherboard: Intel DH67BL
North bridge: H67
Memory: 8GB (4 DIMMs) Corsair CMD8GX3M4A1600C8 DDR3 SDRAM at 1333 MHz, 9-9-9-24 2T
Chipset drivers: INF update 9.2.0.1016, Rapid Storage Technology 10.0.0.1046
Audio: Integrated H67/ALC892 with Microsoft drivers
Graphics: Integrated Intel HD Graphics with 8.15.10.2266 drivers

Processor: Core i5-655K 3.2GHz
Motherboard: Gigabyte H57M-USB3
North bridge: H57
Memory: 8GB (4 DIMMs) Corsair CMD8GX3M4A1600C8 DDR3 SDRAM at 1333 MHz, 8-8-8-20 2T
Chipset drivers: INF update 9.1.1.1025, Rapid Storage Technology 9.6.0.1014
Audio: Integrated H57/ALC889 with Realtek 6.0.1.6235 drivers
Graphics: Integrated Intel HD Graphics with 8.15.10.2246 drivers

They shared the following common elements:

Hard drive: Corsair Nova V128 SATA SSD
OS: Windows 7 Ultimate x64 Edition
Power supply: PC Power & Cooling Silencer 610 Watt

We’d like to thank Asus, Corsair, Gigabyte, and OCZ for helping to outfit our test rigs with some of the finest hardware available. Thanks to Intel and AMD for providing the processors, as well, of course.

The test systems' Windows desktops were set at 1920×1200 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.

We used the following versions of our test applications:

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Memory subsystem performance

We typically start with some synthetic tests of the CPUs’ memory subsystems, just to weed out any new readers who might be intimidated by such things. Can’t become too successful, you know. These results don’t track directly with real-world performance, but they do give us some insights into the CPU and system architectures involved. For this first test, the graph is pretty crowded. We’ve tried to be selective, only choosing a subset of the CPUs tested. This test is multithreaded, so more cores—with associated L1 and L2 caches—can lead to higher throughput.

Lean into your monitor, squint, and try to isolate the Core i7-2600K and several of its key competitors…

Ah, there. The 2600K achieves higher throughput in its L1 and L2 caches, as expected given its clock speed edge over the Core i7-875K. The Phenom II X6 1100T hangs with the 2600K pretty closely at the smaller block sizes, though it is drawing from the L1 and L2 caches spread across 50% more cores. An interesting thing happens at the 4MB block size, though. We’re firmly into the L3 caches on all of these chips, and the Sandy Bridge processor manages more than twice the throughput of the Phenom II X6—and roughly 3X that of the Core i7-875K. This very nice increase over prior architectures may be the result of Sandy Bridge’s high-speed ring interconnect.

With the exception of the triple-channel Core i7-900-series CPUs, nearly all of these processors are using dual channels of DDR3 memory, and the majority are running at 1333MHz with identical timings. Nonetheless, the Sandy Bridge processors extract quite a bit more bandwidth out of their memory subsystems than the other dual-channel configs from Intel and AMD. Each core in Sandy Bridge has symmetric load/store units capable of two 128-bit loads per cycle, whereas older Nehalem-derived processors in the Core lineup are limited to a single load per cycle. Those architectural enhancements look to be paying off here.
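As a rough illustration of what the extra load port is worth, at the 2600K's 3.4GHz base clock the theoretical peak L1 load bandwidth per core works out to:

2 loads/cycle × 16 bytes × 3.4 GHz ≈ 108.8 GB/s per core, versus about 54.4 GB/s with a single load per cycle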

Memory access latencies are tricky because they’re measured in CPU cycles, and with technologies like Turbo Boost doing their thing, the clock speed isn’t something we know for sure. We’ve assumed, in this case, that the Intel processors with Turbo Boost are running at their peak Turbo frequencies for this test. We’ve also guessed that the Phenom II X6 processors with Turbo Core don’t respond quickly enough to run this test at their Turbo Core peaks, which appears to be a valid assumption given the way the results match up.
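The conversion itself is trivial; the uncertainty lies entirely in which clock speed you divide by. As a purely hypothetical example (the cycle count below is made up for illustration, not a measurement), a 190-cycle memory access translates quite differently depending on the assumed frequency:

190 cycles / 3.8 GHz = 50 ns
190 cycles / 3.4 GHz ≈ 56 ns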

Anyhow, if we're right about the clock speeds, then Sandy Bridge more or less holds the line on memory access latencies at right about where Lynnfield was before it. The dual-core Core i3-2100, though, accesses memory much more quickly than the Clarkdale-based Core i5-655K and friends, which are hobbled somewhat by the fact that the memory controller is on a separate chip in the same package.

StarCraft II

We tested StarCraft II by playing back a recording of an epic 30-minute, eight-player match that we found online and capturing frame rates with Fraps. Thanks to the relatively long time window involved, we decided not to repeat this test multiple times, like we usually do when testing games in this fashion.

Well, I’d say this is a pretty auspicious beginning, since the slowest Sandy Bridge is faster than the Core i7-875K.

We can show you the frame-by-frame performance results, if you’d like to see them. We actually took the average above starting from about 400 seconds in; the frame rates before that were a bit inflated because there weren’t many units populating the map. Here’s how the whole period looks plotted out.

That’s frickin’ cool looking, but it’s also pretty difficult to read. If we zero in on the later portion of the game, where frame rates really started to slow down, and separate the CPUs by class, we get something much more readable.

There is a slight temporal shift in some cases because we started our recordings manually, but you get the picture. The Sandy Bridge processors are at the top of their respective classes, and the 2600K is at least the equal of the fastest six-core, the Core i7-980X Extreme.

Battlefield: Bad Company 2

Most of our performance tests are scripted and repeatable, but for Battlefield: Bad Company 2, we used the Fraps utility to record frame rates while playing a 60-second sequence from the game. Although capturing frame rates while playing isn't precisely repeatable, we tried to make each run as similar as possible to all of the others. We raised our sample size, testing each Fraps sequence five times per CPU in order to counteract any variability. We've included second-by-second frame rate results from Fraps, and in that case, you're seeing the results from a single, representative pass through the test sequence.

Looks like frame rates are topping out here around 90 FPS no matter what, perhaps due to our video card hitting its performance limits, but three of the four Sandy Bridge processors reach that threshold, and again, the lowly Core i5-2400 is outdoing much more expensive siblings such as the Core i7-875K. Realistically, in this game as in many others, you can get by with much less processor than a Sandy Bridge—even the Core 2 Quad Q9400 has no trouble pushing a 60 FPS average. Then again, the very fastest Phenom II X6 isn’t any faster than the Core i3-2100—and the i3-2100 has a higher minimum FPS.

Civilization V

The developers of Civ V have cooked up a number of interesting benchmarks, two of which we used here. The first one tests a late-game scenario where the map is richly populated and there's lots happening at once. As you can see by the settings screen below, we didn't skimp on the image quality settings for graphics, either. Doing so wasn't necessary to tease out clear differences between the CPUs.

Apparently the results for the last two games were not a fluke, because we’re seeing similar dominance from the Sandy Bridge processors yet again here. The dually Core i3-2100 is almost embarrassingly fast, especially compared to AMD’s finest.

Civ V also runs the same test without updating the screen, so we can eliminate any overhead or bottlenecks introduced by the video card and its driver software. Removing those things from the equation reshuffles the order slightly. Apparently, the game has better threading than the video driver and/or Direct3D 11, because the Phenom II X6 is able to catch and surpass the Core i3-2100.

The next test populates the screen with a large number of units and animates them all in parallel.

Once more the Sandy Bridge processors overachieve, much as we’ve seen before.

Eliminating the rendering portion of the task focuses more fully on the CPU alone, and in that context, Intel’s new chips still look very strong. Only the dual-core i3-2100 falls victim to an actual in-class defeat—well, it’s a tie, at least—at the hands of the quad-core Phenom II X4 840.

F1 2010

CodeMasters has done a nice job of building benchmarks into its recent games, and F1 2010 is no exception. We scripted up test runs at three different display resolutions, with some very high visual quality settings, to get a sense of how much difference a CPU might make in a real-world gaming scenario where GPU bottlenecks can come into play.

We also went to some lengths to fiddle with the game’s multithreaded CPU support in order to get it to make the most of each CPU type. That effort eventually involved grabbing a couple of updated config files posted on the CodeMasters forum, one from the developers and another from a user, to get an optimal threading map for the Phenom II X6. What you see below should be the best possible performance out of each processor.

Perhaps we were a little too aggressive with the image quality settings for our GeForce GTX 460, since the fastest CPUs are running into its limits even at 1280×800. Still, the top three spots at that resolution go to Sandy Bridge processors, and the i3-2100 continues to terrorize the Phenom II X6 1100T. Really, the Core 2 Quad Q9400 remains pretty competent, with a minimum frame rate of 35 FPS.

At higher resolutions, you’d obviously be better off sticking with a lower-priced CPU and upgrading your video card, if you want to run this game at its highest image quality settings. You might also want to, you know, just turn down the IQ levels a bit.

Metro 2033
Metro 2033 also offers a nicely scriptable benchmark, and we took advantage by testing at four different combinations of resolution and visual quality.

Obviously, the performance differences between the CPUs shrink, to the point of becoming negligible, as the display resolution and graphical quality level rise. At the lowest resolution, though, the Sandy Bridge chips continue to show that they're just plain faster than their predecessors or the would-be competition from AMD.

Given the strange variability of the minimum frame rates reported here, I wouldn't make too much of those numbers. Beyond that puzzle, the reality appears to be that you don't need a very fast CPU to run this game. The Athlon II X3 455 nearly hits 60 FPS and should be more than adequate.

Source engine particle simulation

Next up is a test we picked up during a visit to Valve Software, the developers of the Half-Life games. They had been working to incorporate support for multi-core processors into their Source game engine, and they cooked up some benchmarks to demonstrate the benefits of multithreading.

This test runs a particle simulation inside of the Source engine. Most games today use particle systems to create effects like smoke, steam, and fire, but the realism and interactivity of those effects are limited by the available computing horsepower. Valve’s particle system distributes the load across multiple CPU cores.
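Valve's implementation is its own, but the basic pattern, splitting a big array of independent particles across all available cores each frame, looks something like this hypothetical sketch in C with OpenMP:

```c
/* Hypothetical sketch of a data-parallel particle update; not Valve's code.
   Compile with, e.g., gcc -fopenmp -O2. */
#include <stddef.h>

typedef struct { float x, y, z, vx, vy, vz; } particle;

void step_particles(particle *p, size_t count, float dt)
{
    /* Each particle is independent, so the loop splits cleanly across cores. */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < (long)count; i++) {
        p[i].vy -= 9.8f * dt;          /* gravity                     */
        p[i].x  += p[i].vx * dt;       /* integrate position          */
        p[i].y  += p[i].vy * dt;
        p[i].z  += p[i].vz * dt;
        if (p[i].y < 0.0f) {           /* bounce off the ground plane */
            p[i].y  = 0.0f;
            p[i].vy = -0.5f * p[i].vy;
        }
    }
}
```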

This test is widely multithreaded and clearly makes use of all 12 threads on the Gulftown Core i7-970 and 980X. Even so, check out the contest between the Phenom II X6 1100T and the (lower-priced) Core i5-2400. The 1100T has six cores running at speeds similar to those of the i5-2400's four cores; neither chip has Hyper-Threading; yet the i5-2400 is measurably faster. Similarly, the dual-core, 3.1GHz Core i3-2100 is faster than the quad-core, 3.6GHz Phenom II X4 975—although it's a little slower than the Core i3-560.

Productivity

SunSpider JavaScript performance

If you were expecting Sandy Bridge’s dominance to diminish as we moved from gaming into productivity-style applications, you may want to revise your theories.

7-Zip file compression and decompression

The Sandy Bridge chips come back to earth a little bit in this test, where the lack of Hyper-Threading on the two middle models really hurts. The competing Phenoms land a few solid punches for once, as a result.

TrueCrypt disk encryption

This full-disk encryption suite includes a performance test, for obvious reasons. We tested with a 500MB buffer size and, because the benchmark spits out a lot of data, averaged and summarized the results in a couple of different ways.

TrueCrypt has added support for Intel's custom-tailored AES-NI instructions since we last visited it, so encryption with the AES algorithm, in particular, should be very fast on the Intel CPUs that support those instructions. Those CPUs include the six-core Gulftowns, the dual-core Clarkdales, and Sandy Bridge.
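The reason AES-NI helps so much is that an entire round of the cipher collapses into a single instruction. Here is a minimal sketch in C of encrypting one 16-byte block with an already-expanded AES-128 key; it's an illustration of the instructions themselves, not TrueCrypt's code, and the key expansion is omitted for brevity.

```c
/* Minimal AES-128 block encryption using AES-NI intrinsics; the key schedule
   (11 round keys) is assumed to be prepared elsewhere. Not TrueCrypt's code. */
#include <wmmintrin.h>   /* AES-NI intrinsics; compile with -maes */

void aes128_encrypt_block(const __m128i rk[11], const void *in, void *out)
{
    __m128i b = _mm_loadu_si128((const __m128i *)in);
    b = _mm_xor_si128(b, rk[0]);             /* initial AddRoundKey            */
    for (int i = 1; i < 10; i++)
        b = _mm_aesenc_si128(b, rk[i]);      /* one full round per instruction */
    b = _mm_aesenclast_si128(b, rk[10]);     /* final round (no MixColumns)    */
    _mm_storeu_si128((__m128i *)out, b);
}
```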

The CPUs with AES-NI support place well here, obviously. That average is affected inordinately, I fear, by the AES results. Here’s what happens with AES, in particular, thanks to Intel’s hardware encryption support.

Not even close, eh? Notice the huge performance gap between the Core i5-655K and the Core i3-560, two very similar Clarkdale-based products. The difference is that Intel has disabled AES-NI support on the i3-560 for the sake of product segmentation. The same is true for the Core i3-2100. Lame.

If you want to see how the processors perform across the full range of algorithms tested by TrueCrypt, have a look at the massive graph below. Generally speaking, the AMD chips perform pretty well here, outside of AES. The Sandy Bridge processors aren’t bad, either, though.

Image processing

The Panorama Factory photo stitching
The Panorama Factory handles an increasingly popular image processing task: joining together multiple images to create a wide-aspect panorama. This task can require lots of memory and can be computationally intensive, so The Panorama Factory comes in a 64-bit version that’s widely multithreaded. I asked it to join four pictures, each eight megapixels, into a glorious panorama of the interior of Damage Labs.

In the past, we’ve added up the time taken by all of the different elements of the panorama creation wizard and reported that number, along with detailed results for each operation. However, doing so is incredibly data-input-intensive, and the process tends to be dominated by a single, long operation: the stitch. Thus, we’ve simply decided to report the stitch time, which saves us a lot of work and still gets at the heart of the matter.

The Sandy Bridge processors’ dominance here is quite impressive, given that this application makes good use of all the threads available on the Gulftown processors, yet even the Hyper-Threading-free Core i5-2500K is quicker. This same operation takes nearly six times as long on our poor Pentium EE 840.

picCOLOR image processing and analysis

picCOLOR was created by Dr. Reinert H. G. Müller of the FIBUS Institute. This isn’t Photoshop; picCOLOR’s image analysis capabilities can be used for scientific applications like particle flow analysis. Dr. Müller has supplied us with new revisions of his program for some time now, all the while optimizing picCOLOR for new advances in CPU technology, including SSE extensions, multiple cores, and Hyper-Threading. Many of its individual functions are multithreaded.

At our request, Dr. Müller graciously agreed to re-tool his picCOLOR benchmark to incorporate some real-world usage scenarios. As a result, we now have four tests that employ picCOLOR for image analysis. I’ve included explanations of each test from Dr. Müller below.

Particle Image Velocimetry (PIV) is being used for flow measurement in air and water. The medium (air or water) is seeded with tiny particles (1..5um diameter, smoke or oil fog in air, titanium dioxide in water). The tiny particles will follow the flow more or less exactly, except may be in very strong sonic shocks or extremely strong vortices. Now, two images are taken within a very short time interval, for instance 1us. Illumination is a very thin laser light sheet. Image resolution is 1280×1024 pixels. The particles will have moved a little with the flow in the short time interval and the resulting displacement of each particle gives information on the local flow speed and direction. The calculation is done with cross-correlation in small sub-windows (32×32, or 64×64 pixel) with some overlap. Each sub-window will produce a displacement vector that tells us everything about flow speed and direction. The calculation can easily be done multithreaded and is implemented in picCOLOR with up to 8 threads and more on request.

Real Time 3D Object Tracking is used for tracking of airplane wing and helicopter blade deflection and deformation in wind tunnel tests. Especially for comparison with numerical simulations, the exact deformation of a wing has to be known. An important application for high speed tracking is the testing of wing flutter, a very dangerous phenomenon. Here, a measurement frequency of 1000Hz and more is required to solve the complex and possibly disastrous motion of an aircraft wing. The function first tracks the objects in 2 images using small recognizable markers on the wing and a stereo camera set-up. Then, a 3D-reconstruction follows in real time using matrix conversions. . . . This test is single threaded, but will be converted to 3 threads in the future.

Multi Barcodes: With this test, several different bar codes are searched on a large image (3200×4400 pixel). These codes are simple 2D codes, EAN13 (=UPC) and 2 of 5. They can be in any rotation and can be extremely fine (down to 1.5 pixel for the thinnest lines). To find the bar codes, the test uses several filters (some of them multithreaded). The bar code edge processing is single threaded, though.

Label Recognition/Rotation is being used as an important pre-processing step for character reading (OCR). For this test in the large bar code image all possible labels are detected and rotated to zero degree text rotation. In a real application, these rotated labels would now be transferred to an OCR-program – there are several good programs available on the market. But all these programs can only accept text in zero degree position. The test uses morphology and different filters (some of them multithreaded) to detect the labels and simple character detection functions to locate the text and to determine the rotational angle of the text. . . . This test uses Rotation in the last important step, which is fully multithreaded with up to 8 threads.

The Sandy Bridge processors’ strong showing continues. I’m not sure what more to say.

picCOLOR also includes some synthetic tests of common image processing functions, and those results hold few surprises for us.

Video encoding

x264 HD benchmark

This benchmark tests one of the most popular H.264 video encoders, the open-source x264. The results come in two parts, for the two passes the encoder makes through the video file. I’ve chosen to report them separately, since that’s typically how the results are reported in the public database of results for this benchmark.

Note that we are not using hardware-accelerated QuickSync video here. This is all on the CPU. The first pass of the conversion process isn't as widely multithreaded as the second, obviously, and so the standings change quite a bit from one pass to the next. The Sandy Bridge chips perform well overall, but the Phenom II X4 840 is faster than the i3-2100 in pass two.

Windows Live Movie Maker 14 video encoding

For this test, we used Windows Live Movie Maker to transcode a 30-minute TV show, recorded in 720p .wtv format on my Windows 7 Media Center system, into a 320×240 WMV format appropriate for mobile devices.

I’m running out of ways to say “continued dominance,” folks. Perhaps in Fake Spanish, which I studied for four years in high school? Continedola dominancia! Or something.

3D modeling and rendering

Cinebench rendering

The Cinebench benchmark is based on Maxon's Cinema 4D rendering engine. It's multithreaded and comes with a 64-bit executable. This test runs once with just a single thread and then again with one thread for each hardware thread the CPU makes available.

The showing of the full-fat Core i7-2600K tells you much of what you need to know about the Sandy Bridge architecture. Not only are its multi-threaded results easily the fastest for a quad-core processor and, indeed, superior to the Phenom II X6 1100T’s, but the single-threaded results show it to have easily the fastest single core around.

POV-Ray rendering

We’re using the latest beta version of POV-Ray 3.7 that includes native multithreading and 64-bit support.

Valve VRAD map compilation

This next test processes a map from Half-Life 2 using Valve’s VRAD lighting tool. Valve uses VRAD to pre-compute lighting that goes into games like Half-Life 2.

Our last few rendering tests don’t produce any real surprises. The Phenom II X4 840 is able to outdo the Core i3-2100 in the two POV-Ray tests thanks to the presence of two additional CPU cores, but otherwise, it’s the Sandy Bridge show.

Scientific computing

MyriMatch proteomics

Our benchmarks sometimes come from unexpected places, and such is the case with this one. David Tabb is a friend of mine from high school and a long-time TR reader. He has provided us with an intriguing new benchmark based on an application he's developed for use in his research work. The application is called MyriMatch, and it's intended for use in proteomics, or the large-scale study of proteins. I'll stop right here and let him explain what MyriMatch does:

In shotgun proteomics, researchers digest complex mixtures of proteins into peptides, separate them by liquid chromatography, and analyze them by tandem mass spectrometers. This creates data sets containing tens of thousands of spectra that can be identified to peptide sequences drawn from the known genomes for most lab organisms. The first software for this purpose was Sequest, created by John Yates and Jimmy Eng at the University of Washington. Recently, David Tabb and Matthew Chambers at Vanderbilt University developed MyriMatch, an algorithm that can exploit multiple cores and multiple computers for this matching. Source code and binaries of MyriMatch are publicly available.
In this test, 5555 tandem mass spectra from a Thermo LTQ mass spectrometer are identified to peptides generated from the 6714 proteins of S. cerevisiae (baker’s yeast). The data set was provided by Andy Link at Vanderbilt University. The FASTA protein sequence database was provided by the Saccharomyces Genome Database.

MyriMatch uses threading to accelerate the handling of protein sequences. The database (read into memory) is separated into a number of jobs, typically the number of threads multiplied by 10. If four threads are used in the above database, for example, each job consists of 168 protein sequences (1/40th of the database). When a thread finishes handling all proteins in the current job, it accepts another job from the queue. This technique is intended to minimize synchronization overhead between threads and minimize CPU idle time.
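That job-queue arrangement maps naturally onto dynamic work scheduling. As a generic illustration of the pattern being described (not MyriMatch's actual source), a sketch in C with OpenMP might look like this: the database is chopped into roughly ten times as many jobs as threads, and an idle thread simply grabs the next unclaimed job.

```c
/* Generic work-queue pattern along the lines MyriMatch describes; illustrative
   only. Each "job" is a contiguous slice of the protein database. */
void score_proteins(int n_proteins, int n_threads,
                    void (*score_job)(int first, int last))
{
    int n_jobs   = n_threads * 10;                 /* ~10 jobs per thread      */
    int job_size = (n_proteins + n_jobs - 1) / n_jobs;

    /* schedule(dynamic, 1): a thread takes the next unclaimed job when idle, */
    /* which keeps faster threads busy and minimizes end-of-run idle time.    */
    #pragma omp parallel for schedule(dynamic, 1) num_threads(n_threads)
    for (int j = 0; j < n_jobs; j++) {
        int first = j * job_size;
        int last  = first + job_size;
        if (last > n_proteins) last = n_proteins;
        if (first < last)
            score_job(first, last);                /* compare slice to spectra */
    }
}
```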

The most important news for us is that MyriMatch is a widely multithreaded real-world application that we can use with a relevant data set. I should mention that performance scaling in MyriMatch tends to be limited by several factors, including memory bandwidth, as David explains:

Inefficiencies in scaling occur from a variety of sources. First, each thread is comparing to a common collection of tandem mass spectra in memory. Although most peptides will be compared to different spectra within the collection, sometimes multiple threads attempt to compare to the same spectra simultaneously, necessitating a mutex mechanism for each spectrum. Second, the number of spectra in memory far exceeds the capacity of processor caches, and so the memory controller gets a fair workout during execution.

Here’s how the processors performed.

Interestingly, AMD’s six-core Phenom IIs snatch a victory over the two non-Hyper-Threaded Sandy Bridge quads here. The Core i7-2600K is a little bit faster than the i7-875K, but not by much more than the clock speed differences between the chips might dictate, given similar per-clock performance.

STARS Euler3d computational fluid dynamics

Charles O’Neill works in the Computational Aeroservoelasticity Laboratory at Oklahoma State University, and he contacted us to suggest we try the computational fluid dynamics (CFD) benchmark based on the STARS Euler3D structural analysis routines developed at CASELab. This benchmark has been available to the public for some time in single-threaded form, but Charles was kind enough to put together a multithreaded version of the benchmark for us with a larger data set. He has also put a web page online with a downloadable version of the multithreaded benchmark, a description, and some results here.

In this test, the application is basically doing analysis of airflow over an aircraft wing. I will step out of the way and let Charles explain the rest:

The benchmark testcase is the AGARD 445.6 aeroelastic test wing. The wing uses a NACA 65A004 airfoil section and has a panel aspect ratio of 1.65, taper ratio of 0.66, and a quarter-chord sweep angle of 45º. This AGARD wing was tested at the NASA Langley Research Center in the 16-foot Transonic Dynamics Tunnel and is a standard aeroelastic test case used for validation of unsteady, compressible CFD codes.
The CFD grid contains 1.23 million tetrahedral elements and 223 thousand nodes . . . . The benchmark executable advances the Mach 0.50 AGARD flow solution. A benchmark score is reported as a CFD cycle frequency in Hertz.

So the higher the score, the faster the computer. Charles tells me these CFD solvers are very floating-point intensive, but they’re oftentimes limited primarily by memory bandwidth. He has modified the benchmark for us in order to enable control over the number of threads used. Here’s how our contenders handled the test with optimal thread counts for each processor.

The improvements to Sandy Bridge’s memory subsystem are likely the source of its solid gains over the prior generation in this application. Just look at how far we’ve come on this front from the days of the Pentium EE 840. Amazing.

Power consumption and efficiency

We used a Yokogawa WT210 digital power meter to capture power use over a span of time. The meter reads power use at the wall socket, so it incorporates power use from the entire system—the CPU, motherboard, memory, graphics solution, hard drives, and anything else plugged into the power supply unit. (The monitor was plugged into a separate outlet.) We measured how each of our test systems used power across a set time period, during which time we ran Cinebench’s multithreaded rendering test.

We’ll start with the show-your-work stuff, plots of the raw power consumption readings. We’ve broken things down by socket type in order to keep them manageable.

We can slice up these raw data in various ways in order to better understand them. We’ll start with a look at idle power, taken from the trailing edge of our test period, after all CPUs have completed the render. Next, we can look at peak power draw by taking an average from the ten-second span from 15 to 25 seconds into our test period, when the processors were rendering.

You already know that the Core i7-2600K is often substantially faster than the i7-875K across many of the applications in our test suite. Now consider that the 2600K-based system has lower idle power draw and requires 26W less under load than the 875K. The gains in power efficiency seem mind-boggling—so let's quantify them.

We can highlight power efficiency by looking at total energy use over our time span. This method takes into account power use both during the render and during the idle time. We can express the result in terms of watt-seconds, also known as joules. (In this case, to keep things manageable, we’re using kilojoules.) Note that since we had to expand the duration of the test periods for the Pentium EE 840 and Core 2 Duo E6400, we’re including data from a longer period of time for those two.

We can pinpoint efficiency more effectively by considering the amount of energy used for the task alone. Since the different systems completed the render at different speeds, we’ve isolated the render period for each system. We’ve then computed the amount of energy used by each system to render the scene. This method should account for both power use and, to some degree, performance, because shorter render times may lead to less energy consumption.
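The bookkeeping behind that task-energy number is simple: sum the power samples across just the render window and multiply by the sampling interval. Here is a small sketch in C of the calculation we're describing, using placeholder numbers rather than our measured data.

```c
/* Task energy = integral of power over the render window.
   The sample values below are placeholders, not our measured data. */
#include <stdio.h>

int main(void)
{
    double samples_w[] = { 182.0, 185.5, 184.0, 186.2, 183.8 }; /* watts, 1 Hz   */
    double interval_s  = 1.0;                                   /* sample period */
    int    n           = sizeof(samples_w) / sizeof(samples_w[0]);

    double joules = 0.0;
    for (int i = 0; i < n; i++)
        joules += samples_w[i] * interval_s;     /* W x s = J */

    printf("Render-window energy: %.1f J (%.3f kJ)\n", joules, joules / 1000.0);
    return 0;
}
```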

The Core i5-2500K system requires roughly 25% less energy to render our test scene than the Core i7-875K system does. That is a staggering improvement, even if our bar chart is a bit thrown off by the presence of the Pentium EE 840. To put a really fine point on the comparison between Intel’s quad cores and AMD’s, consider that the Core i7-2600K config needs under half the energy that the Phenom II X4 975 system does to accomplish the same work.

Integrated graphics and QuickSync video processing

After accomplishing what we did with our core CPU testing, we kinda ran out of time for really extensive testing of Sandy Bridge’s integrated graphics and video processing capabilities. We did manage to run a few quick tests, though. First up is a look at QuickSync video transcoding, in which we’re doing something very similar to what we did in our Windows Live Movie Maker test—transcoding the same 30-minute, 720P, MPEG2-format video into a 320×240 H.264 format. For this, we used a pre-release version of CyberLink MediaEspresso with QuickSync support.

The relevant number to see here is the drop in encoding time for the Core i5-2500K when QuickSync is enabled. The Core i3-2100 is similarly fast with QuickSync, even though its HD 2000 IGP has half the number of execution units (and those EUs are used for encoding, offering flexibility you wouldn’t get with dedicated hardware alone).

Some other things: The Core i5-655K’s IGP can’t encode video, but it does have decode assist, and that seems to work nicely in unburdening this dual-core processor and speeding up the transcoding process—more so than I expected, I will admit. Also, although this program purports to take advantage of AMD’s Stream hardware decoding and GPU encoding capabilities, only the decode ability was exposed in the interface as an option with the 890GX IGP, and when we enabled it, performance dropped. Furthermore, we had hoped to use Nvidia’s CUDA and a GeForce GTX 460 to compare transcoding performance versus a discrete GPU, but that was obviously broken in this preview version of MediaEspresso, as well. We will have to revisit QuickSync transcoding in the future in more detail, but at least the nice drop in encode times on the Sandy Bridge chips should give you a sense of the possibilities.

We had considered using a range of casual and older games for our IGP tests, perhaps alongside those that we’d used in our earlier CPU tests involving a discrete graphics card, but then Intel decided to pull in the Sandy Bridge launch by two days, ruining those ambitions. Instead, we just punted and ran whatever we already had installed on the test systems. We expected various levels of failure from these IGPs, but Sandy Bridge had a few more surprises in store for us.

Amazingly, Bad Company 2 actually runs pretty well on the HD 3000 IGP. We had to turn down all of the game’s IQ settings and drop the resolution to 1280×800, but it was darn nearly playable on the Core i5-2500K. We’d kind of expected the game to detect Intel’s drivers, roll over, and die.

Heck, the Sandy Bridge IGP is substantially more capable than the Radeon HD 4290 in AMD’s 890 GX chipset, based on these results.

From there, we got more ambitious, firing up StarCraft II and giving it a shot at 1280×800 with the game’s “Medium” quality presets. Again, we used the same recorded game and time frames as in our big-boy CPU tests with discrete graphics.

OK, so maybe we pushed a little too hard. Any of these IGPs would almost certainly handle SC2 just fine at its lowest IQ settings, but man, does the game look awful then. If you want something a little prettier, the Sandy Bridge IGP, at least in its HD 3000 form, gets you close to competency.

We also tried F1 2010 and Civilization V, and dialing their quality levels down to the lowest possible settings produced rather different results. F1 2010 doesn’t look great, but it runs quite nicely on the HD 3000. Civ V is pretty much hopeless, regardless of which IGP you use.

This is a very small sample size, but the fact that the Intel drivers handled all of these relatively new games without crashing or producing obvious visual corruption feels like a step forward to me. That impression is underscored by the fact that the HD 3000 IGP is nearly twice as fast as the Radeon IGP in AMD’s 890GX chipset. We’re not getting our hopes too high, but all of the mobile variants of Sandy Bridge are slated to have HD 3000 graphics. Could it be that we’ll see somewhat competent mobile gaming capabilities in the average laptop in 2011? That would be quite the development.

Overclocking

We didn’t have much luck with raising the base clock on the Core i7-2600K with the Asus P8P67 Pro mobo. Even going to 103MHz was iffy. Using the unlocked multiplier, though, was a snap, and we eventually got the chip stable at 4.5GHz. Asus’ “auto” CPU voltage setting performed well enough for us that manual tweaking didn’t add any additional headroom. (CPU-Z reported the CPU core voltage at 1.304V.) We simply left Turbo Boost enabled during these attempts, and the clock frequency remained steady at 4.5GHz with the CPU loaded up with an eight-way Prime95 load. Interestingly, we did see occasional dips to 4.3GHz during a single-threaded rendering test in Cinebench, for whatever reason.
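
To put numbers on why the unlocked multiplier matters here, a quick back-of-the-envelope calculation in Python. The 34X stock multiplier simply reflects the 2600K’s 3.4GHz base clock; everything else follows from our settings above.

    # Core clock = base clock (BCLK) x multiplier.
    bclk_mhz = 100          # raising the base clock much past ~103MHz wasn't stable for us
    stock_multiplier = 34   # the i7-2600K ships at 3.4GHz (34 x 100MHz)
    oc_multiplier = 45      # the unlocked multiplier setting we ended up with

    print(f"Stock clock:       {bclk_mhz * stock_multiplier / 1000:.1f} GHz")  # 3.4 GHz
    print(f"Overclocked clock: {bclk_mhz * oc_multiplier / 1000:.1f} GHz")     # 4.5 GHz

    # Reaching 4.5GHz on a locked 34X multiplier would require a ~132MHz base clock,
    # far beyond the ~103MHz ceiling we observed.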

As you might expect, Sandy Bridge is stupid fast at 4.5GHz.

Conclusions

Sandy Bridge may be the first completely new Intel processor architecture since the Pentium 4, but it is undoubtedly a very, very different sort of beast. Sandy Bridge is an improvement over the previous generation on almost every front. Performance is up across the board, regardless of the metric: clock-for-clock, single-core/single-threaded, multithreaded, you name it. Power consumption is down at the same time, and therefore power efficiency is up substantially over the Lynnfield quad-core chips. Sandy Bridge comes by it honestly, with both process tech and microarchitecture improvements contributing to the cause. Shockingly, even the integrated graphics processor in Sandy Bridge is the best of its kind, from what we’ve seen.

Core i5-2400, Core i5-2500K, Core i7-2600K (January 2011)

We haven’t yet had time to put together one of our famous value scatter plots for the new Core i3-2100, i5-2400, i5-2500K, and i7-2600K processors versus the world, but happily, this is math we can do in our heads: the last three are unequivocally the performance leaders in their price classes. The dual-core Core i3-2100 is sometimes challenged by the quad-core Phenom II X4 840, but really, the i3-2100 is probably going to be a better choice for most desktop users, especially those who want strong gaming performance.

If you’re looking to upgrade—and if you have something like the Core 2 Duo E6400 in your system, our test results suggest you should be—then the only question may be which version of Sandy Bridge you ought to buy. I’m not sure how coldly analytical I can be about this, because the Core i7-2600K with Hyper-Threading is frickin’ awesome and you should totally get one, but you’re free to spend less if you want to wuss out like that. Surely the Core i5-2500K will be a hit with a great many enthusiasts, since it’s unlocked and slots in at the popular $200-ish price point. And heck, the gaming performance we saw from the i5-2400 suggests most folks won’t need anything more for some time to come.

I’ve said it before, but we truly are living in a golden age of processor design. Now that we’ve seen what Sandy Bridge can do, we have the next six or so months to fill out the other half of the picture: how AMD’s trilogy of Zacate, Llano, and Bulldozer stacks up to this incredibly formidable competitor. Here’s hoping 2011 holds more surprises as pleasant as Sandy Bridge.

Comments closed
    • zorglub
    • 8 years ago

    I have programmed H.264 encoding acceleration using Intel Media SDK for Sandy Bridge.

    An i3-2100T using hardware acceleration can H.264-encode a 1920×1080 frame within 13ms.
    The same holds for an i7-2600. That works out to about 76 img/s.

    Interestingly, hardware H.264 acceleration requires the image size to be a multiple of 16 in the x and y dimensions and the frames to be in MV420 format. 1080 is NOT a multiple of 16… And 1920×1088 is actually the maximum image size the GPU will handle.

    With Media SDK, it is easy to switch between software H.264 (CPU) and hardware (GPU). The i3-2100T CPU requires 39ms per frame, while the i7-2600 requires 20ms to encode.
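
    (Illustrative arithmetic on the per-frame times quoted above, in Python:)

        # Throughput implied by the per-frame encode times for 1080p frames.
        for label, ms_per_frame in [("i3-2100T / i7-2600, hardware (GPU)", 13),
                                    ("i7-2600, software (CPU)", 20),
                                    ("i3-2100T, software (CPU)", 39)]:
            print(f"{label}: {1000 / ms_per_frame:.0f} frames/s")
        # Hardware encode: ~77 frames/s, in line with the ~76 img/s figure above.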

    • indeego
    • 8 years ago

    Just got an i7-2600K. Not bad at all. I score about 300 MIPS less on 7-zip than Scott, for some reason.

    • Xcamas
    • 9 years ago

    It’s impressive the way they managed to compete with themselves. Intel is doing great at every price point. For guys like me using an E8400, now we can definitely upgrade.

    • Clint Torres
    • 9 years ago

    Some may say “meh” but I must retort “OMG!”

    I am getting 40-50% greater performance on my 3ds Max 2010 renders with my OC’d 2600K (4.5GHz) than with my OC’d i7-860 (3.5GHz).

      • marvelous
      • 9 years ago

      Looks more like clock speed difference to me.

      Surely you could get the 860 a bit faster than 3.5GHz.

        • Clint Torres
        • 8 years ago

        Yeah, I’m a low-hanging-fruit type of overclocker. 3.5 is easy and stable, and the bclk is a nice round 166MHz.

        I’ve worked it out to be about 15-20% faster clock for clock. The real beauty is the ease with which that 4.5GHz is achieved.

        The thing that makes me wonder is how I can get that idle GPU to do some stuff.

    • insulin_junkie72
    • 9 years ago

    Seven of the CPUs* are up at NewEgg:
    http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007671%20600095610&IsNodeId=1&name=LGA%201155

    * i5 2300, i5 2400, i5 2400S (lower-power variant), i5 2500, i5 2500K, i7 2600, and i7 2600K; no i3 listings yet

      • Dashak
      • 9 years ago

      The i3 lineup is due out on Feb 20th according to Intel.

    • steddy
    • 9 years ago

    I can’t help but think that the relatively low memory speed might be a bottleneck, especially in IGP benchmarks. Why do you downclock your memory, which is rated for DDR3-1600?

    • Bensam123
    • 9 years ago

    I’m curious why you guys still test chips with Turbo Core and Turbo Boost enabled. They both add an erroneous variable to any statistical testing you do. Since neither of the above is ‘guaranteed’ and largely depends on your cooling solution and the chip itself, it’s like overclocking a chip and then stating that it performs that well. Turbo results should be treated the same way you treat manual overclocking results, not as something that is going to be the same for everyone.

    They could simply be sending you cherry-picked units that perform at peak operation when everyone else is getting less than optimal units.

    Putting that aside, consider adding Supreme Commander 2 to your testing suites. Especially later on in games with 8 players and hovering around unit caps, it can grind any computer to a halt (not as bad coding wise as SC1 though).

      • wibeasley
      • 9 years ago

      Disabling turbo might reduce the results’ variability, but it would make the results less representative of the meaningful dimension. In other words, it adds another source/variable, but not an “erroneous” one.

      And most people wouldn’t consider reporting the median to be ‘statistical testing’.

        • Bensam123
        • 9 years ago

        Using TurboX is no different than reporting overclocking results as what everyone will get.

    • jstern
    • 9 years ago

    As a person who’s into CPUs, but not to the point where I can tell all the details just from the model name, this has been really hard to follow. It’s hard for a person like me to compare the CPUs in a chart when you can’t tell how many cores the CPUs you’re looking at have, or how many MHz the cores run at. I had to Google them individually, until I decided it was wasting my time. Again, I’m no expert, but since these CPUs are totally different from the ground up, why didn’t they name them something other than Core i5 and i7? Before this article I just assumed it was a minor upgrade.

      • PixelArmy
      • 9 years ago

      https://techreport.com/articles.x/20188/2 has a big table with all the specs…

      i3 = 2 cores w/ hyperthreading
      i5 = 4 cores w/o hyperthreading
      i7 = 4+ cores w/ hyperthreading
      K suffix = unlocked + slightly less crappy graphics
      The “2” (in i7-2600, for example) = Gen 2 of the Core i family.

      The only thing that kinda sucks is not knowing the clock speed easily. I blame model numbers in general. Thanks, Barton.

      As to why they reused the Core i branding… Don’t know, but I think of it like car series (think BMW).
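
      (A tiny, purely illustrative Python lookup capturing that decoder ring; the core/thread counts are just the ones listed above:)

          # Rough decoder ring for 2nd-gen Core model numbers, per the comment above.
          lineup = {
              "i3": {"cores": 2, "hyperthreading": True},
              "i5": {"cores": 4, "hyperthreading": False},
              "i7": {"cores": 4, "hyperthreading": True},  # "4+" at the very high end
          }

          def decode(model):                  # e.g. "i7-2600K"
              family, number = model.split("-")
              info = dict(lineup[family])
              info["second_gen"] = number.startswith("2")
              info["unlocked"] = number.endswith("K")
              return info

          print(decode("i7-2600K"))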

        • Flying Fox
        • 9 years ago

        “I blame model numbers in general. Thanks Barton.”

        Actually, blame the Pentium 4 for hyper-inflating the worth of GHz with really low IPC and an inefficient architecture.

          • PixelArmy
          • 9 years ago

          Well, you do want more GHz… You just don’t want to use them to compare different architectures. There are two basic factors, IPC and Hz. Who’s to say how much one should be focused on over the other? Hell, look at Bulldozer’s “speed demon” approach: http://www.realworldtech.com/page.cfm?ArticleID=RWT082610181333&p=2 Sound familiar? That’s because pursuing clock speed is a valid strategy. Both are fair game. That’s why we look forward to new architectures but at the same time cheer on die shrinks and overclocking.

          While the P4 architecture’s goal of GHz made AMD introduce PR, the reason I don’t blame the P4 is that Intel was still reporting the actual specs… In the end, it was with the introduction of Barton that we saw PR numbers used. And looking back, I feel they didn’t solve anything, just added another hurdle between me and the specs.

          *EDIT* Fixing URL BBCode.

    • michael_d
    • 9 years ago

    How come the Core i7-960 is not included?

      • Flying Fox
      • 9 years ago

      Isn’t the 950 enough for you to extrapolate? It is almost impossible time-wise to test every CPU under the sun, no?

    • Farting Bob
    • 9 years ago

    I was getting ready for an upgrade when reading these reviews, but I’m still not convinced I really need one. Got a Q9300 already which is pretty good (and quad goodness already), although I’d like DDR3 RAM and a motherboard that isn’t pants. But if I upgrade the whole lot to a 2500K I’m looking at a lot of money, and I’m still thinking “how often will I use the extra speed?”
    The GPU also needs an upgrade, so I’d be looking at getting rid of everything except the HDD and case, really. Damn, this just got expensive.
    Oh well.

    The only sad face about SB I’ve seen is that QuickSync, or whatever the video transcoder thing is called, only works when you’re running on the IGP. If you’ve got a discrete card in your system, you’ll have to stay with standard x86 encoding. That’s annoying, since that was one of the big selling points for me when looking at the speedups it offered.

    • ub3r
    • 9 years ago

    Damage great review. BIG Thanks!!

    • WaltC
    • 9 years ago

    After reading this splendid TR review, I was all set to *sincerely* accord the Sandy Bridge launch the highest accolades I’ve ever given an Intel cpu launch–and then I read AnandTech’s review, and now I am scratching my head and frankly am not nearly as impressed.

    Intel has gotten much more savvy over the years since AMD thrashed it royally with the k7 and up, thrashed them pretty good right up until Core 2 hit the bricks, when the thrashing proceeded to operate in reverse…;)

    I think the point is that in the old days Intel would have priced SB wherever it wanted to–into the stratosphere, probably–and never have worried at all about the Total Cost of Upgrade to SB, and assumed that most people would migrate to SB without much regard for the TCU.

    Today, however, it seems that Intel is much smarter, and has priced SB all the way around to *factor in* the TCU–ie, motherboards selling for between $150 and up have to be added into the SB TCU (’11 will be the Year of the Acronym, I see it now!…;)) Thus, the actual cost of a high-end SB upgrade will be on the order of $450-$500.

    Still, I’m baffled at precisely why Intel has decided to take this ostensibly aggressive route as this advance news will no doubt impact its current and near future non-SB cpu sales very seriously–as seriously, if not more seriously, than it will affect AMD (since Intel is selling more cpus and stands to lose more sales in the interim before SB ships in quantity–depending on the marketability of both the SB cpus *and* the mobo’s they require.) So…

    I’m pondering why Intel would *announce* that it is prepared to aggressively undercut *itself* in the short term, never mind AMD, with SB–this will clobber Intel’s cpu ASPs. This is completely out of character for Intel–when was the last time you recall Intel releasing a brand-new, very high-performance cpu that outperformed the previous generation of its cpus by a fair margin in many cases, but launching the new cpu at up to 60% *less* than their current high-performance cpu flagship? OK, it is true I haven’t bought Intel since 1999, so maybe I’ve missed a few things along the way…;) Still, it just seems a very odd thing for Intel to do.

    About the strange 48-hour acceleration of the NDA… This is commented on by everyone but no one posits an explanation, and I’ll guess that is because Intel did not provide one. Let’s speculate…

    * Intel has it on good authority that Bulldozer isn’t much to fret about and has decided to move up the SB NDA by a couple of days in order to launch the most devastating PR campaign it has launched against AMD in years, if ever. This would explain Intel’s preemptive SB pricing.

    * Intel has it on good authority that Bulldozer will in all likelihood just raze SB to the ground in terms of multithreaded performance, and so has moved up the SB launch a couple of days in the hopes of landing a solid PR punch on AMD prior to the Bulldozer launch in order to gain the most positive publicity possible for SB. This would also explain SB’s very un-Intel-like, bargain-basement pricing.

    * Intel is not internally anticipating a volume shipment ramp for SB for *several months* but is launching now in order to pre-empt by publicity anything AMD will do in that time frame as pertains to its upcoming cpus. In this case, SB’s value pricing would serve as anti-AMD publicity until such time that Intel could in fact ship SB in quantities enough to displace the selling of its current-generation cpus–which SB is apt to do at these prices.

    No matter which scenario winds up closer to the truth, Intel is going to take an ASP beating at its own hands because of this cpu announcement, and that fact demands an explanation that so far I haven’t seen anyone offer to date. My opinion would be that if Intel fully expects Sandy Bridge to be available in mass quantity, then *if it could* Intel would be asking a premium for the top-end SB very similar to the premiums it has asked for years for its highest-performing cpus. The fact that this is not true with Sandy Bridge is certainly indicative of something significant happening behind the scenes–it would just be nice to know what it was…;) This is an interesting topic that screams out for discussion, and I hope a few people will comment.

    Edit: Also oddly interesting is the fact that Intel has on several occasions publicly referred to Sandy Bridge as “risky,” which, in light of these reviews, I find a fascinating statement. So how’s it “risky”? Is it x86 compatible? Yeppers, no risk there whatever. Does it need special compilers or special code to derive the full or even competitive performance out of the Sandy Bridge architecture? Not at all, as these reviews illustrate. Running ordinary current x86 code, SB seems to excel. So no risk there, either. Is SB going to cost so much that Intel fears it will have few takers even though it offers greater performance? Nope, the pricing as we’ve all seen is bargain-basement. No risks there. Is it “risky” because Intel is shipping an on-die gpu? That hardly seems the case, as people will buy SB regardless of the on-board gpu–indeed, Intel offers SB without the on-board gpu. (Besides, few people give a crap about *any* of Intel’s very sub-par gpus to date: they all stink, including SB’s, and in fact SB’s stinks a bit less…;)) Surely, no risk there…

    So what *is* this “risk” Intel keeps talking about? Is it the manufacturing process? That seems about the only unknown quantity at this point that I can think of–nothing else about SB seems a risk. If Intel is worried it won’t be able to ramp up production of the 32nm cpu then, yea, I think Intel is taking a big risk in announcing it as they’ve done. But if Intel has 32nm down cold and is preparing to launch SB in quantity–then there’s no risk there, either. So if there’s no obvious risk to SB, then WTF is Intel talking about?

      • BoBzeBuilder
      • 9 years ago

      Well said Walt.

        • bimmerlovere39
        • 9 years ago

        Well, I’m genuinely interested in this schedule change now, too, after reading your take.

        As for the risk: if SB really is a major architectural shift, that’s pretty significant risk. There’s always a chance that SB could have turned into a Larrabee. The thought of Sandy Bridge being such a failure that the battle this summer – and for the back to school season – became Lynnfield vs Bulldozer, well, if I was a beancounter at Intel, that thought would scare the crap out of me.

        Obviously, that’s being a bit melodramatic. But a non-negligible departure would be a risk.

      • JumpingJack
      • 9 years ago

      Could you link up one or two of the occasions that Intel called SB ‘risky’ publicly? I am curious about what context they would make that statement.

      In terms of your post, you start out
      “After reading this splendid TR review, I was all set to *sincerely* accord the Sandy Bridge launch with the highest accolades I’ve ever given an Intel cpu launch–and then I read AnandTech’s review, and now I am scratching my head and frankly am not nearly as impressed. ”

      But nothing following that explains why you became less impressed with the CPU; it reads more like you are upset that Intel would pull in a launch and lower prices.

      • Anomymous Gerbil
      • 9 years ago

      Walt, you need an editor.

      • Meadows
      • 9 years ago

      Aside from your nauseating post size, you also overuse the wink emoticon, which had me disgusted by about the third time I saw it. Sadly, that wasn’t the last time either.

      I’ve been contemplating making a YouTube video where a guy reads out one of your comments and does corny impressions of *every single* emoticon you had in it, too. I doubt it would be successful, but then again I doubt I could easily fit it into YouTube’s 10-minute limit. 😉

      • yogibbear
      • 9 years ago

      I thought none of the SB’s that are reviewed here are the premium SB’s. And usually Intel releases the premiums and one other chip first to market along with the premium mobo (hence the HUGE upgrade cost), then 3 months later they release the awesome cpu’s like the e6600 and q9450 etc. that sit at value points. However with this launch they’ve done it the other way around…

      Also it’s pretty obvious that they can’t make SB backwards compatible with older mobo’s. So… the upgrade cost was always going to be there. Personally $400-500 dollars is not very much money considering it’s at least 2x-4x my current CPUs performance minimum in everything. (which is a lot better than GPU upgrades these days which cost anywhere from $250-500.)

      Or at least that was my interpretation… maybe I’m wrong, as I don’t/haven’t been following SB at all.

      • Krogoth
      • 9 years ago

      Intel calls it a risk because they know the next round is all about who gets the first successful mainstream system-on-a-chip platform. They are hoping SB has enough performance and value to ward off AMD’s own attempt (Fusion). That is precisely why Intel is releasing SB before AMD gets its own foothold. Notice how SB’s NDA lift coincides with AMD’s own NDA lift on Fusion? Not a coincidence.

      All-in-one systems are going to replace the good, old tower desktops as we know them.

    • marvelous
    • 9 years ago

    I see less than 15% faster clock for clock.

    Check sandy bridge turbo speeds. It’s clocked much higher than older i7’s

      • axeman
      • 9 years ago

      I, too, am wondering why the consensus seems to be that it is so incredibly fast. For something that’s supposed to be such a big change architecturally, it doesn’t seem to be a big leap over Lynnfield. The first i7 seemed to push the bar a lot higher, relatively speaking, than this does.

        • Flying Fox
        • 9 years ago

        Add -20% power consumption?

        • ThorAxe
        • 9 years ago

        I said something similar.

        Be careful or you will get marked down for not being a part of the herd. 🙂

          • Game_boy
          • 9 years ago

          The improved power consumption mostly comes from the die shrink vs. Lynnfield. I think a 32nm Lynnfield would have received this praise.

            • ThorAxe
            • 9 years ago

            Agreed, which is why the improved power consumption doesn’t really impress me.

            I’m not saying it’s a bad thing though.

      • flip-mode
      • 9 years ago

      15% clock for clock is somehow extremely impressive to me. AMD is outdoing itself to deliver 5%, if not actually losing some (Brisbane). AMD is only just matching the 2006 Conroe core for per-clock performance. And we’re poo-pooing 15% over Bloomfield?

        • abw
        • 9 years ago

        Not true.
        Just check the benches.

          • JumpingJack
          • 9 years ago

          Rather than eyeball it, someone has taken the data and compiled a core for core, clock for clock (turbo off, SMT off and both on variants) comparison:

          http://www.computerbase.de/artikel/prozessoren/2011/test-intel-sandy-bridge/47/#abschnitt_turboskalierungsrating

          Translated: http://translate.google.com/translate?js=n&prev=_t&hl=en&ie=UTF-8&layout=2&eotf=1&sl=de&tl=en&u=http%3A%2F%2Fwww.computerbase.de%2Fartikel%2Fprozessoren%2F2011%2Ftest-intel-sandy-bridge%2F47%2F%23abschnitt_turboskalierungsrating

          It is 15 to 17% without hyperthreading on average. It’s 30 to 36% with hyperthreading on average. (EDIT: mis-read, the w/HT is not clock for clock)

            • ThorAxe
            • 9 years ago

            “It’s 30 to 36% with hyperthreading on average.”

            From what I can see on the link when hyperthreading is enabled they compare the 2600K@3.4GHz to an i7 930@2.8GHz, so you can’t really draw a clock for clock comparison given that the i7 930 is at a 600MHz disadvantage.
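
            (Illustrative arithmetic, in Python with placeholder scores, of how much that clock deficit can inflate an apparent per-clock gain:)

                # Placeholder scores, purely to show the effect of normalizing by clock speed.
                score_2600k, ghz_2600k = 130.0, 3.4   # Core i7-2600K
                score_930, ghz_930 = 100.0, 2.8       # Core i7-930

                raw_gain = score_2600k / score_930 - 1
                per_clock_gain = (score_2600k / ghz_2600k) / (score_930 / ghz_930) - 1

                print(f"Raw gain: {raw_gain:.0%}   Per-clock gain: {per_clock_gain:.0%}")
                # With these made-up numbers, a 30% raw gain shrinks to about 7% per clock.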

            • JumpingJack
            • 9 years ago

            Oooops, thanks…. that’s true.

            • Sunburn74
            • 9 years ago

            Actually, inpai.com has a review comparing an 875K with a 2600K, both locked in at 3.4GHz. The overall difference is about 5-6%.

      • r00t61
      • 9 years ago

      I’m more annoyed by the inscrutable feature segmentation.

      1. Intel elects to put the best integrated graphics cores (HD 3000) only on the K-series chips. But K-series chips will undoubtedly be purchased by enthusiasts who will run discrete cards. Why not put the best graphics on the lower-end chips that will undoubtedly be used in the vast majority of mainstream OEM systems?
      2. Of course, this point is moot, since the enthusiast motherboard chipset (P67) doesn’t support the integrated graphics core. So you’ve got a couple hundred million transistors worth of apparently decent integrated graphics doing NOTHING on your otherwise shiny, new, and pretty darn fast P67 SB system that you just built.
      3. Of course, you could buy an H67 motherboard to pair with your K-series processor, to take advantage of the integrated graphics, but then you lose the ability to overclock your K-series processor, which is ostensibly the primary reason you bought a K-series processor in the first place. Cripes.
      4. And the coolest new “feature” of the chip, separate from the evolutionary improvements in general-purpose-processing, Quicksync (which, by the way, is a completely misleading label for this transcoding technology) only works on H67 boards, AND only if a monitor is physically connected to the integrated graphics core. Supposedly all these lame issues will be corrected by an upcoming chipset named Z68. I don’t see why Intel couldn’t get this sorted on launch.
      5. I haven’t even mentioned how the K-series processors strangely lose VT-d virtualization capability, yet the non-K-series retains it.

        • Flying Fox
        • 9 years ago

        “3. Of course, you could buy an H67 motherboard to pair with your K-series processor, to take advantage of the integrated graphics, but then you lose the ability to overclock your K-series processor, which is ostensibly the primary reason you bought a K-series processor in the first place. Cripes.”

        From what I’ve read, the current H67 tests out there were using Intel’s own board, and those were never designed for overclocking anyway, no?

      • maroon1
      • 9 years ago

      Clock for clock comparison is useless

      SB has higher default clock speed. So, if you are not an overclocker then clock for clock comparison is useless for you.

      And if you are an overclocker, then clock for clock is still useless for you because SB can reach higher clock speed than previous generation when you use similar air cooling solution.

      So, in real life SB will almost always have higher clock speed.

    • NeronetFi
    • 9 years ago

    As someone who was about to purchase an i7-950, plus a mobo and RAM, to upgrade my old Socket 939 AMD Athlon 64 X2, I want to thank TR for including the Core i7-950 in the benchmarks. This was an excellent comparison, and I think I will be upgrading to the i7-2600K instead of the i7-950. That’s if the mobo prices are decent. 🙂

    Thank you again

      • axeman
      • 9 years ago

      As someone who was going to get duped into buying into the LGA1366 platform, you were going to overpay for the motherboard anyhow.

        • NeronetFi
        • 9 years ago

        I was actually looking at the GIGABYTE GA-X58A-UD3R. It’s only $200.

          • axeman
          • 9 years ago

          And this would be better than an i7-8xx and a motherboard that’s 70 dollars cheaper because… you need SLI? I can’t think of a single reason to go LGA1366 other than SLI or six memory sockets. It’s not really any faster; get an 875K and overclock it, and save your money. The way I see it, the day LGA1156 came out, going 1366 made no sense for anyone other than people with enough cash to buy the extreme high end. The i7-920 made some sense until LGA1156 came out. The i7-870 is faster than the i7-950 half of the time while using less power, on a far less expensive platform.

            • flip-mode
            • 9 years ago

            Well, it kinda depends on what one is looking for: dual gig-e or extra SATA channels or CF/SLI or amount of RAM or whatever. It’s hard to say he’s overpaying if he’s paying for an additional feature that he actually wanted.

            • axeman
            • 9 years ago

            True, but it seems nowadays that even lowly mATX boards have more ports and whatnot than most people are going to use. Like dual Ethernet… really? Are enthusiast motherboards being used as servers or firewalls? I was definitely in the category of building a full-tower system with a full-size ATX board until I realized all those slots and drive bays were never, ever going to be used. I have one machine in use as a server, and even on it I’m only using 4 out of 6 SATA ports. What on earth do you need 10 SATA ports for (on the OP’s aforementioned GA-X58A-UD3R) in a system that is likely being used as a desktop? I guess I’m not a very enthusiastic enthusiast anymore.

            • flip-mode
            • 9 years ago

            I wasn’t saying anything about your enthusiasm, rather that we don’t know anything about his reasons for choosing an X58 mobo. Of course, his reasons must not be all that elite if he’s going to trade down to an P67 motherboard, so maybe he was just paying for epeen anyway.

            • bimmerlovere39
            • 9 years ago

            6 SATA Ports:
            Boot SSD
            Optical
            2xRAID 0 Speed Drives
            2xRAID 1 Backups

            (No, I don’t own that setup. Yes, I want it.)

            Yeah, I don’t see much use in more drives than that for a desktop. But still… I don’t like the thought of having all of my SATAs filled. Kinda puts you up a tree if you migrate big drives over.

            That said, motherboards are getting there, feature-wise. I just don’t necessarily like how they distribute said features. (DIE, PCI, DIE!)

            • JustAnEngineer
            • 9 years ago

            6 SATA ports on my old X48 motherboard:

            1 SATA Blu-ray drive
            4 RAID 0+1 system volume
            1 storage drive

      • NeronetFi
      • 9 years ago

      I am holding out to see what Bulldozer brings to the table. Then I will make my final decision b/c by then more Mobo’s for the SB’s will be on the market.

        • NeelyCam
        • 9 years ago

        Good decision. By then, next-gen SSDs are going to be out in numbers – you’ll be able to build a fantastic rig at a good price.

    • Hattig
    • 9 years ago

    It looks like a decent CPU to me, at a decent price. AMD and NVIDIA have dropped the ball on hardware video encoding acceleration in my opinion, despite having decent video decode functionality.

    The on-board graphics are irrelevant to any gamer however, making the HD 3000 graphics in the K series pointless, and doubly-so that the hardware accelerated video transcode is not available with discrete graphics attached. Let’s see how AMD respond.

    (I didn’t even notice this review on the new TR front page though – I looked underneath at Featured Articles and it wasn’t listed there in plain text, so I didn’t see it!)

      • derFunkenstein
      • 9 years ago

      How could you miss the giant graphic? 😆

      Although I get what you mean. Maybe they should put Featured Articles at the top and the OMGXBOXHUEG graphic right below that.

        • Hattig
        • 9 years ago

        Yeah, I know… I just didn’t see it, I guess the image fits into the page layout really seamlessly :-S

    • basket687
    • 9 years ago

    Scott, how do you explain the low score for Athlon II X3 455 in the 7-zip benchmark? Do you think that it is using only 2 cores? Because its score is much lower than Phenom II X4 840.
    Thank you.

    • ub3r
    • 9 years ago

    AMD has no hope.
    Unless…. their next cpu is codenamed Emma Watson. Hmmm

      • Flying Fox
      • 9 years ago

      That has been said so many times and AMD is still standing, albeit barely.

        • sigher
        • 9 years ago

        Now that the DRM controversy has exploded, along with the somewhat dubious remote-nuking functionality, AMD might be rescued a bit. I see many people saying they will go AMD over that, although people say those things in the spur of the moment and often don’t follow through.

          • Flying Fox
          • 9 years ago

          What DRM controversy? Just don’t sign up for Intel Insider or buy anything online using that service, and you should be fine? At this point, what I am reading is that II is going to be on big-box systems like those you can buy from Best Buy. The general TR populace doesn’t buy from BB anyways, no?

          Remote nuking I have not read too much about yet, but I suspect it is tied to the business oriented vPro/TXT thing? So buy the models that don’t have those (unfortunately it seems all mobile models will have that, but then again it needs software installed to work IIRC).

    • JJCDAD
    • 9 years ago

    I’m seeing headlines around the ‘net talking about built-in DRM in Sandy Bridge chips. Can someone boil it down for me? Is it anything to be concerned about and would it prevent any of you from purchasing one of these parts?

    Edit: Disregard. Looks like it’s just some hardware DRM that enables some new HD video streaming service. False alarm.

    • Mr Bill
    • 9 years ago

    It would be nice if you could get some Phenom II Black Edition CPUs and then clock their memory at 1600MHz and their NB at 2400MHz (which any Black Edition CPU is supposed to support), so we can see how the unlocked Black Edition versions come out against the locked CPUs.

    • smilingcrow
    • 9 years ago

    My only disappointment is that seemingly you can’t overclock using the H67 boards at all, not even with the K versions. Whoever came up with that idea must have been on Special K.
    I can see why they wouldn’t support higher memory speeds or the limited 4-bin overclocking of the standard chips, but to disable overclocking with the K series seems perverse.
    This isn’t 100% confirmed, but I’ve seen nothing but anecdotal evidence suggesting that it isn’t true, and at least one review where an Intel H67 board was tested and didn’t support any O/C. I’m grasping at straws and hoping that was just a BIOS issue.

      • TREE
      • 9 years ago

      I get the feeling that if you had access to CPU overclocking on the H67 boards, then you’d also have access to GPU overclocking. Isn’t there going to be a later ‘Z’ variant of the ’67’ chipset which is supposed to allow an enabled GPU as well as CPU overclocking access?

      • Flying Fox
      • 9 years ago

      Intel boards are not usually known for their overclocking features. I would wait for proper reviews from the usual hitters before passing judgement on the H67.

    • d0g_p00p
    • 9 years ago

    I am confused about the conclusion of the article. Scott you state that Sandy Bridge is a new Intel processor architecture. I thought SB is still a “P6” microarchitecture. Also how is it totally new vs Conroe and Nehalem. Those are both new Intel PA’s right? I thought SB was just a heavily tweaked Nehalem.

    I am obviously confused, please help me out. Great article as usual BTW.

      • Damage
      • 9 years ago

      See here for a discussion of roots and such:

      https://techreport.com/articles.x/19670

      Sandy Bridge really is something new.

      • Flying Fox
      • 9 years ago

      You will have to read the architecture preview article linked from this review because Damage has said he would not delve into that discussion in the context of that review.

      Strictly speaking, everything is P6 (or even x86 if you really want), but there are lots of architectural elements that make these qualify as a “new generation”. Conroe brought us 4-issue wide pipelines, memory disambiguation and a new focus (for desktop processor anyways) to performance per watt. Nehalem brought us the IMC and modular building block approach. This time, we have the new ring bus, on-chip GPU, and new a lot of things (I remember Damage said everything from the branch predictor to most other aspects of the processor has been changed).

      • UberGerbil
      • 9 years ago

      Or see Kanter’s article at RWT: http://www.realworldtech.com/page.cfm?ArticleID=RWT091810191937

      One observer’s “new architecture” is another’s “pile of tweaks,” and you can go round and round arguing it, usually arriving back at your starting place with everyone’s preconceived ideas still firmly intact. I’m not really interested in comparative featurology, so I frankly don’t care if this generation is more or less of a departure from its predecessor(s) than any of them were from theirs. However, there are enough changes here that I feel comfortable acknowledging it to indeed be a new generation.

      Like Nehalem, it (further) consolidates system-level functionality onto the die and incorporates a new (and different) internal interconnect to tie that together; like the P3 and P4, it adds a significant (and extensive) set of new vector-oriented instructions (and like those generations and the original Opteron, it extends the architectural registers); like the P4, it adds or refines many features for OoO execution. Taken together, I don’t think it’s an exaggeration or an ingestion of too much Intel Marketing koolaid to call it a new architecture.

    • yogibbear
    • 9 years ago

    Awww… my Q9450 didn’t like reading this. It started weeping and asked me to bring it chocolates. It must be trying to tell me something… 🙂

      • anotherengineer
      • 9 years ago

      You sure it wasn’t a nice SSD instead of chocolates??

      I had to go out of town for Christmas, so travel costs did in my monies, another Christmas filled with underwear and socks 😐

        • NeelyCam
        • 9 years ago

        “underwear and socks ”

        Lingerie and stockings?

    • Nutmeg
    • 9 years ago

    Looks really sweet. Now for the pricing, and to see by how much I can’t afford them! 😀

    • PenGun
    • 9 years ago

    Does anyone know about the built in DRM? Is that going to be a problem for us “stainless steel rat” types.

      • UberGerbil
      • 9 years ago

      It’s going to require chipset support, just like TXT / TPM does currently. And just as with those, the only way it will be active is if you purchase a motherboard using the appropriate chipset (and enable it in the BIOS). They can’t get under your tinfoil hat today unless you’ve gone out of your way to buy a Q chipset; that will be true this time around as well.

        • Flying Fox
        • 9 years ago

        AFAIK Q is for business. I think what PenGun was talking about is the built-in “Intel Insider” support where you can buy movies off the studios. They will be on those BestBuy boxes but who knows what chipset they will use.

    • DancinJack
    • 9 years ago

    My i7-860 @ 3.6GHz, 4GiB of DDR3-1600, an SSD, and TBs of storage feel pretty good. No need for this. Some cool improvements, though. I’m glad I don’t do any heavy video-related work or I’d really want to upgrade something.

    • Peldor
    • 9 years ago

    i3-2100 specs…

    Page 2 says it is 2 cores/4 threads while page 3 says 2 cores/2 threads. Pretty sure it’s the former.

      • Damage
      • 9 years ago

      Yep, it’s 2/4. Fixed. Thanks.

    • c0d1f1ed
    • 9 years ago

    I wonder how this CPU scores with SwiftShader. With Nehalem it was already able to outperform some IGPs and low-end GPUs. Sandy Bridge’s wider vector support could close the gap even further. And to really make the CPU efficient at graphics they could add gather/scatter instruction support. Then the IGP could go away and more generic CPU cores can be added instead. They already share caches and memory controllers anyway, so it’s the next logical step in the convergence between CPU and GPU.

    • axeman
    • 9 years ago

    H.264 encoding has dedicated logic on the CPU? What a baffling feature when GPUs are getting more and more flexible. I dub thee, MMX2.

      • Krogoth
      • 9 years ago

      Perfect for HTPC-types.

      • insulin_junkie72
      • 9 years ago

      The onboard hardware encoder uses less power than using GPU-based transcoding.

      (Not that I’d use any of them for anything other than transcoding to a phone or something, quality-wise)

        • Flying Fox
        • 9 years ago

        Since the app in use is still pre-release, I would not say for sure at this point that using the CPU hardware encoder is going to suck in terms of quality.

          • JumpingJack
          • 9 years ago

          TR did a great job, but glossed over the transcode features and had problems with the CUDA variant. Anand has done the more thorough job of comparing transcode times as well as image quality:
          http://www.anandtech.com/show/4083/the-sandy-bridge-review-intel-core-i5-2600k-i5-2500k-and-core-i3-2100-tested/9

          Frankly, the transcode feature of SB is probably the most impressive of the lot, even if it is restricted to the most common formats. It looks to be as much as 2x faster than a dedicated GPU transcode (6970) and even faster than a CUDA-based transcode (460), and with much higher quality as well (over the 460); the quality on the 6970 is quite impressive too. What is more interesting is that this is only using something like 3 mm^2 of die space for the fixed-function units, yet it beats GPUs that throw hundreds of mm^2 more silicon at the problem. (http://www.anandtech.com/show/3922/intels-sandy-bridge-architecture-exposed/6)

          In terms of efficiency, this is simply obliterating GPGPU transcoding.

          I would be interested to see how this compares to a more GPU centric/specific transcoder like Badaboom.

            • Da_Boss
            • 9 years ago

            Keep in mind that Anand also noted that modern GPUs use fixed-function hardware decoders, rather than using the shaders to decode.

            So the question really is: How did Intel do it so much better?

            • insulin_junkie72
            • 9 years ago

            At the risk of stating the obvious, while video cards have had dedicated hardware DEcoder ASICs for several generations, they don’t have any dedicated hardware for ENcoding – which these Intel CPUs have. It’s not apples to apples.

            • JumpingJack
            • 9 years ago

            GPUs (AMD/nVidia) have some fixed-function hardware for decoding; this is what AnandTech states, and this is true. But decoding is only half of the transcoding equation. The encoding we see from nVidia (CUDA) and ATI (STREAM, or now APP) is done on the GPU shaders.

            I find it quite remarkable that Intel can almost double, in some cases, the throughput with only 3 mm^2 of silicon compared to, say, a GTX 460, which has >150X more silicon real estate.

            Though this is oversimplifying the observation, Intel’s approach is only good for well-established and popular video formats. Fixed function, naturally, lacks the flexibility to adapt to new video codecs as they become available.

            In short, GPUs will still be the best option for workstation-level video transcoding via prosumer and professional-level software, whereas SB is really nice for consumer-oriented transcoding utilities, since that market has settled on a standard (much like JPG for images).

            It’s a trade-off. For me — meh, I am more of a videophile myself, so I don’t find this applicable to my usage specifically, though to simply get home movies onto my portable devices I would take it over a GPU at this point given the current data. I am more astounded by the efficiency in terms of throughput/mm^2. I haven’t seen much in terms of energy efficiency, but it’s gonna be stellar.

            The answer to your question is in some of the slide decks that have been published over the past few months… decode and encode are dedicated outside of the EUs. (See http://www.anandtech.com/show/4083/the-sandy-bridge-review-intel-core-i5-2600k-i5-2500k-and-core-i3-2100-tested/8; decode has been moved completely off the EUs, but more importantly the encoding is done in fixed-function HW as well.) EDIT: My bad, some of the motion estimation is done in the EU array.

            • sigher
            • 9 years ago

            You are not correct. The latest AMD cards, for starters, have a dedicated video section that does encoding too, and I think nVidia has that as well in their separate video-processing section. The decoding has also evolved: it now doesn’t just ‘help’ but does the whole thing. You are basing yourself on the old days.
            Not that many people don’t use old cards though.

            • JumpingJack
            • 9 years ago

            Nope. Not true. AMD (formerly ATI before AMD bought them) had some ASICs for encoding if I recall, and an ADC for analog input for VIVO via the Theatre-series chip, but it was also GPU-assisted (again, as I recall, but I could be wrong there). However, AMD has since re-introduced their Catalyst-level transcoder and ported it into Stream, which encodes via the programmable shader units inside the GPUs.

            Dedicating functional HW to such a parallel task as encoding defeats the entire concept of GPGPU (which AMD is betting the farm is the future). In 2008, in fact, AMD released their utility ported over to STREAM. Anand did a low-down on the first attempt (as did many other reviewers, as it was AMD’s first foray into GPGPU for the consumer to challenge nVidia’s push)… it was not good:
            http://www.anandtech.com/show/2685

            Prior to this, ATI (before AMD bought them out) provided a transcoder in Catalyst (I recall utilizing it some with an All-in-Wonder 9700), but I am fuzzier on the older attempts by ATI in this regard; they may have had a separate ASIC to do the encoding or some fixed function there as well. Make no mistake, both nVidia and ATI are using the programmable SIMD units in their respective architectures to do the encoding phase. Heck, even the major consumer-level transcoder ISVs are programming to Stream (MediaEspresso, for example, specifically calls out Stream as a supported API): http://www.cyberlink.com/products/mediaespresso/gpu-optimization_en_US.html

            On another note… I misspoke above, through a very poor choice of words: decoding for all 3 major players is now all done in HW blocks dedicated to video processing in the GPU die/area. I should not have used the word ‘some’.

            • sigher
            • 9 years ago

            Linking to an Anand article from 2008 hardly proves anything about the current HD 69xx series of cards, does it now?
            And your second link is a company that links to an outdated AMD ‘Avivo’ page which still lists the HD 4000 series as the most advanced they had.
            However, I checked AMD’s site and it does indeed say (mentioning the 68xx series) that the encode process is UVD3 hardware decode, then pre-processing and scaling, then ‘compute encode’, which would indeed mean shader stuff, I guess.
            Mind you, the shaders of today are not quite comparable to those of 2008, in number or speed or especially capabilities.

            Edit: I hope I didn’t sound too dismissive, you sound like a guy who knows about these things to me, and that’s why I did a search because I figured there was a good chance you were right after all.

            • djgandy
            • 9 years ago

            Let us not forget that Moorestown can decode 1080p streams on battery power 🙂 Video decoding on a GPU is a flexible solution, but neither an energy-efficient nor a die-space-economical one.

            Personally I feel that for Intel, good video decoding is more important than a good 3D GPU. Most of the users of Intel integrated hardware are going to do a lot more video than gaming.

            Take a low end notebook for surfing the internet and accelerate all the flash videos on a low powered decoder and you will save a lot of battery life.

    • canmnanone
    • 9 years ago

    Is there a way to compare an overclocked Q6600 to the new Sandy Bridge at the same speed? I would like to see how comparable it is with that CPU. I have my Q6600 OC’d to 3.6. I don’t really want to upgrade just yet. TIA

      • Krogoth
      • 9 years ago

      Unless you are doing some hardcore content creation or number crunching, or simply want to cut down on power consumption without killing performance, Sandy Bridge isn’t worth it.

        • canmnanone
        • 9 years ago

        Hey Krog, thanks for the reply. Yeah, I don’t really plan on upgrading anytime soon, but I would like to see benchmark numbers just to see where my setup stacks up against this new CPU. Anand did that comparison, but he didn’t OC the Q6600; he just left it stock.

          • NeelyCam
          • 9 years ago

          What you want is an SSD.

    • Krogoth
    • 9 years ago

    Sandy Bridge is getting too many accolades. It is just the final product of Intel’s goal of integrating core logic onto the CPU. It all started with Nehalem. Calling it the biggest architectural change since Netburst is a bit too much. It is really just an extremely evolved Conroe. IMO, the last architectural change from Intel was Conroe. Ivy Bridge is looking like the next generation.

    Is it fast? Of course. Does it do it with better power efficiency? Indeed. Is it enough of a gap to be worth upgrading over the previous generation? Depends on your needs. IMO, the integrated graphics is Sandy Bridge’s greatest asset. It yields enough performance for mainstream gamers. It is Intel’s preemptive strike at AMD’s Fusion.

    Anyway, the upcoming Haswells will be more impressive. FYI, Haswell is the workstation version of Sandy Bridge, which means another socket platform. I expect it to rock at rendering and number crunching. It will come at a cost though ($400+).

    It looks like the fears of the K series commanding a hefty premium were unfounded. It is just a minor tax that overclockers can easily stomach.

      • Flying Fox
      • 9 years ago

        “It is really just an extremely evolved Conroe.”

        You can say the Conroe is an extremely evolved Pentium III too. And this thing still runs x86. That’s progress for you, I suppose.

        “Anyway, the upcoming Haswells will be more impressive. FYI, Haswell is the workstation version of Sandy Bridge, which means another socket platform. I expect it to rock at rendering and number crunching. It will come at a cost though ($400+).”

        AFAIK Haswell is after the Ivy Bridge shrink, so it will be the “tock” generational change like Nehalem and now Sandy Bridge. The workstation version of SB is SB-E.

        • Krogoth
        • 9 years ago

        IMO, Conroe is more like the synthesis of the P6 and Netburst architectures. It incorporates their strengths and overcomes their shortcomings. It further improved on those strengths. That is why Conroe was such a monster at launch. It destroyed its Netburst predecessors and stomped its K8-based rivals.

          • Flying Fox
          • 9 years ago

          I have not gone through Kanter’s article yet, but just the “ring bus” alone is a big enough change for me to call it generational. Is it worthy of the label “biggest change akin to the original Pentium” like Intel marketing would want us to believe? Probably not. I like Damage’s positioning of “biggest change since Pentium 4” though.

          • JumpingJack
          • 9 years ago

          Could you enumerate what architectural details of Netburst was part of Conroe and how exactly you see this as a synthesis?

          SB seems to be more of what you are describing — SB has a PRF, as did Netburst; Conroe through Nehalem did not. SB has a uop cache, where uops are stored during the decode phase; this is similar to a trace cache (Netburst), whereas Conroe through Nehalem had nothing of the sort.

          Conroe, even up through Nehalem, could be traced back as kin to Banias, Dothan, and Yonah, which were the brethren of P6 that diverged from Netburst specifically to go after lower-power mobile in their time.

          It would be educational to really take a close look at Kanter’s article:
          http://www.realworldtech.com/page.cfm?ArticleID=RWT091810191937&p=1

            • Flying Fox
            • 9 years ago

            AFAIK at least the branch predictor (which had to be pretty good in order to hide Netburst’s longer pipeline) and the quad-pumped QDR FSB were taken from Netburst. Together with the 4-issue width and one of the “op fusion” techniques (I forget whether it was micro-op or macro-op that came from Banias), these made Conroe look quite different from the P-M/III and P4, worthy of being labeled a “new architecture”.

            • JumpingJack
            • 9 years ago

            The QDR FSB I would not associate as a Netburst specific feature that made it to Conroe. The FSB was architected with the Pentium Pro, and major attention was made to implement a cache coherent protocol for multisocket as well as serve as the bus to the NB. NB simply reused this bus from the P6 generation with little or no really changes (perhaps some pipelining depth or something, not completely remembering many details really).

            I do agree, though not entirely certain, that the branch predictor key components were probably borrowed heavily from Netburst. Netburst’s incredibly long pipeline probably forced some significant design work around the branch predictors at the time.

            Netburst was a 3-issue design, going wider was simply a design choice specific to Conroe. Macro-op fusion was also specific to Conroe, micro op fusion was in place in Banias as I recall (and as do you).

            Frankly, macro-op fusion was not well implemented in Conroe, in my opinion: it only really fused certain complementary instruction pairs and, because of the resulting instruction length, did not work in 64-bit mode. Macro-op fusion, in my opinion, was only properly done in Nehalem.
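
            To illustrate what gets fused (a minimal sketch of my own, not something from the article): for loop tests like the ones below, compilers typically emit a CMP immediately followed by a conditional jump, and that cmp/jcc pair is what Conroe can fuse into one macro-op (32-bit mode only; Nehalem extended fusion to 64-bit mode and to more pairings).

            /* Illustrative only: the sort of compare-and-branch pairs macro-op fusion targets. */
            int count_below(const int *values, int n, int limit)
            {
                int count = 0;
                for (int i = 0; i < n; i++)      /* i < n             -> cmp + jcc */
                    if (values[i] < limit)       /* values[i] < limit -> cmp + jcc */
                        count++;
                return count;
            }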

      • NeelyCam
      • 9 years ago

      Ivy Bridge is a Sandy Bridge “shrink” (to 22nm) – not a new architecture.
      Haswell is not a Sandy Bridge workstation version – Haswell is the new architecture after Sandy/Ivy Bridge.

        • NeelyCam
        • 9 years ago

        *sleeping*

      • morphine
      • 9 years ago

      So let me see: you’ve got a CPU that is incredibly fast at every task type, takes up less idle power, takes up less load power, and hits most pricepoints available. In short, it excels and is unmatched in every angle, and it’s “getting too many accolades”?

      I mean, what would it take for Intel to impress you, ship you a CPU and a cake with a stripper in it?

        • Flying Fox
        • 9 years ago

        Plus 2x performance I suppose. The days of that are long gone I’m afraid.

        • Krogoth
        • 9 years ago

        It isn’t really that “faster” than Bloomfields that came out two years ago in most applications. The lower power consumption is very nice, but that mainly benefits smaller form factors. The most impressive part of Sandy Bridge is the integrated graphics. It rivals the current budget discrete solutions which is good enough for mainstream gamers.

        IMO, Sandy Bridge is only attractive to enthusiasts who didn’t invest in Bloomfield, Lynnfield and Phenom II platforms.

        From Intel’s standpoint, Sandy Bridge is their first serious attempt at a System-On-A-Chip. It looks like it will succeed in that regard.

          • morphine
          • 9 years ago

          It’s all about value, man.

          To put it simply, within their price brackets, they are a tactical nuke. No two ways around that.

            • Krogoth
            • 9 years ago

            That’s always been the case for years. Generation one product for the platform x always commands a hefty premium and usually has “issues”. Generation two fixes the “issues” and is cheaper/faster Generation three is cheaper and faster than generation two.

            Sandy Bridge wasn’t the first CPU architecture to do it nor will it be the last.

          • Flying Fox
          • 9 years ago

          It is already doing “unkind things” to the newer 950. Don’t think the initial 920 is going to hold up much. Of course, incrementally buying generation N+1 is never going to net you huge gains. That is why the smart buyers will always skip a generation or 2 (or even 3).

      • sigher
      • 9 years ago

      “It yields enough performance for mainstream gamers.”
      Uhm, modern games really only run at the lowest settings and lowest resolution; that's not what 'mainstream gamers' expect, unless you define mainstream as people who play solitaire.
      You do understand that in this techpowerup test the games were first tested with a GeForce 460, right? It's a CPU test; only later do we read "Amazingly, Bad Company 2 actually runs pretty well on the HD 3000 IGP. We had to [b<]turn down all of the game's IQ settings and drop the resolution to 1280x800[/b<], bit was darn [b<]nearly[/b<] playable on the Core i5-2500K." And although that's the i5-2500, the higher models also are not much use for gamers, as other sites can tell you. To appeal to 'mainstream gamers' it would have to be competitive with an Xbox 360, and since that too is a (dated) CPU with a closely tied GPU you have to wonder why it can't

        • kamikaziechameleon
        • 9 years ago

        Good point. How can Intel not produce an integrated solution today that competes with an integrated solution from five years ago?

        • Krogoth
        • 9 years ago

        Mainstream gamers use OEM rigs from the likes of Dell, HP, and Gateway. Those usually have IGPs or bargain-basement GPU solutions with similar or worse performance than the IGP in Sandy Bridge. Mainstream gamers typically play console ports or MMORPGs, which do not require the processing power of modern discrete GPUs to deliver an enjoyable gaming experience.

          • Meadows
          • 9 years ago

          Typically wrong on the first point, very wrong on the second point, and vastly inaccurate on the third point (both halves of it).

          Congratulations, you win 1 Trophy of Fail.

          Whether they use OEM, family-member-built, or custom but store-built computer setups is very debatable, but exactly for that reason you can't say flat-out that pure OEM machines dominate what you call the "mainstream". Almost all but the most basic office machines have SOME sort of discrete videocard in them, further disproving your point – which I wager you just pulled out of your ass as usual, anyway.

          And the notion that integrated videocards can play console ports enjoyably? Dude, in what universe do you still hold yourself as “level-headed”? That’s the most ridiculous statement I’ve seen in years. Even the cheaper discrete cards suffer under actual mainstream gaming with today’s popular widescreens, let alone some integrated crap, be it from nVidia, AMD, or god forbid intel (especially intel).

            • Krogoth
            • 9 years ago

            ROFL, your assessment would have been correct ten years ago.

            However, it isn’t quite the case anymore. The only OEM build stuff that has performance-grade GPUs of some sort is stuff “geared” towards hardcore gamers (Northwest Falcon, Alienware, Voodoo PC, Dell’s XPS line). Discrete GPUs are a rartiy in mainstream OEM rigs. If you doubt it, go to any brick and motor store and any of the major OEM online website. Just check their default options on their mainstream line. The results will surprise you.

            Sandy Bridge's IGP does yield sufficient performance for mainstream gaming. Mainstream gamers aren't playing their favorite titles at 2 megapixels+ with AA and AF on top, nor do they care to. They usually play their games at whatever settings auto-configuration assigns them; as long as the game isn't running like a slideshow, they simply don't care enough to get something more powerful.

      • Anomymous Gerbil
      • 9 years ago

      “Anyway, the upcoming Haswells will be more impressive.”

      This just in: new hardware expected to be better than old hardware.

        • Krogoth
        • 9 years ago

        Not always the case.

        Remember the Prescott at its launch? ;)

          • sweatshopking
          • 9 years ago

          God knows I do….

    • ApockofFork
    • 9 years ago

    I just wanted to note with some concern that the description of the more aggressive turbo boost implies that it may be inflating these benchmarks if they are run after a period of idling in which the computer was able to cool off. I also suspect that if temperature is being monitored to control the speed, an open-air test bench with good cooling could also be boosting the results. It would be interesting to see if you guys can define a method of showing the effects of turbo boost. Perhaps heat the thing up running Prime95 and then jump into a benchmark, or use a weaker cooling setup, and see if the numbers change. I would be interested to see if these kinds of things make a difference.

    That being said, the performance, even if it is perhaps slightly boosted from what one might get in the real world, is very good.

    • codedivine
    • 9 years ago

    Would like to see some clock-for-clock comparisons of Sandy Bridge and Lynnfield with turbo turned off and the processors at the same frequencies. I realize that would be a purely academic exercise from a consumer point of view, but it would still give a lot of insight into the architecture.

    • kamikaziechameleon
    • 9 years ago

    I would have liked to see an OC comparison between all the Black Edition and K processors.

    • kamikaziechameleon
    • 9 years ago

    I wonder if AMD will continue their differing approach; the six-core X6 for under $300 was a very good answer to Intel's CPUs leading up to Sandy Bridge. I wonder if AMD will push 8 or more cores for under 300 bucks this coming year?

      • SoulSlave
      • 9 years ago

      Hummmm…

      Seems to me that AMD might wanna change its strategy. I can see Bulldozer leading in multi-threaded performance in an 8-Intel-threads (4 Hyper-Threaded cores) versus 8-AMD-threads (4 "modules") scenario. But there's no prediction as to single-threaded performance. On that front AMD could be leaps and heaps (am I using that right?) better than it is today and still not match Intel's Sandy Bridge…

      Still, I’ve got a pretty nasty value out of my current Phenom (X3 modded into an X4 plus overclock) so I got a little spoiled… And it will take some serious competition for that to happen again, until then I’m sticking to my current config…

        • kamikaziechameleon
        • 9 years ago

        The fact that AMD still offers pretty competitive multi-core performance at their current prices speaks to their strengths. I wouldn't buy an Intel under 200 dollars, and I wouldn't buy an AMD over 200. Intel could cut all their prices more aggressively to take an outright victory on all fronts, but they have never been that aggressive.

          • travbrad
          • 9 years ago

          Intel’s CPU prices are pretty good in the sub $200 range actually IMO. Their weakness is when you look at total system cost. Usually you can find cheaper mobos on AMD side, and the sockets tend to last longer as well (long enough that you may actually get an upgrade out of them). If someone is on a budget, they likely will be in the future too, so that upgrade option is nice to have.

    • grantmeaname
    • 9 years ago

    frist pots!!1!

      • Meadows
      • 9 years ago

      Shucks, I must be the only guy who gave this poor man a thumbs up.

        • Flying Fox
        • 9 years ago

        His post was #83, not exactly “frist”.

          • Meadows
          • 9 years ago

          How was I to know? It’s not like [i<]Metal[/i<] wants me to know, and I'm not switching to Flat because it's terrible. I might as well call every post first post and be done with it.

            • Flying Fox
            • 9 years ago

            You just need to be here to read the real first post. ;)

        • sweatshopking
        • 9 years ago

        i did……

      • ssidbroadcast
      • 9 years ago

      ^^At over 15 [color = red]Thumbs Down[/color], this post [i<]really[/i<] needs to be in Comic Sans.

        • Mr Bill
        • 9 years ago

        This may be an illustration of why thumbs up / thumbs down should both be tabulated instead of just the net sum. :D

          • wibeasley
          • 9 years ago

          How about (a) net sum and (b) total votes. I don’t want to do division unnecessarily.

            • TaBoVilla
            • 9 years ago

            no no no, how about: [b<]NO THUMBS DOWN[/b<]. Thumbs up = votes. Your post has no thumbs up = you suck.

        • ssidbroadcast
        • 9 years ago

        … well I give up on trying to make that word red.

          • Meadows
          • 9 years ago

          They haven’t managed reintroducing colours into Metal yet, along with clubbing the date stamp to death and spreading the vote buttons horizontally, instead of stacking them and thus inflating the box height.

    • flip-mode
    • 9 years ago

    Nice central processing units, there, Intel.

    I have a very hard time going over $200 for any single computer component; such a hard time, in fact, that I only have done it once in 10 years – for DDR 400 way back in 2003. So if I was buying I’d buy the i5-2500K.

    These CPUs are bad news for AMD who won’t have anything to fight with for the next 6 months. AMD still offers terrific value with their low end Athlon II X4s, but that’s all that’s worth buying from AMD now.

    My opinion is that a Yorkfield or Deneb or better is not worth upgrading from unless you are doing sick amounts of video encoding.

    As for the review, the use of a GTX 460 is a bit unfortunate.

    But, yeah, Sandy Bridge is a very exciting CPU. Very impressive. I definitely want one, but, that poor thing would just sit largely idle in my machine that is only really used for gaming.

      • derFunkenstein
      • 9 years ago

      How much RAM did you get for $200 in 2003? I'm thinking back then even a 1GB DIMM was like $65-70. You'd get a ton of RAM in 2003 for $200. :D

        • flip-mode
        • 9 years ago

        It was [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16820146950[/url<] for $245 on 5/24/2004 to be exact :) Newegg's invoice history for the win. Well, it was DDR 433, not DDR 400. Oh, and that RAM sucked. I think I paid $20 or $40 more than "value" RAM to get OCZ copper-heatsink overclockability, but they didn't overclock worth squat. Never again. Value RAM 4ever. But, yeah, that's the only time I paid more than $200 for any "single" thing. And it wasn't long after that DDR prices plummeted. They cost half as much some months later, and I remember feeling bummed that the price dropped so quickly after my purchase.

          • pedro
          • 9 years ago

          That’s some serious on-pointness there Mr. Flip!

          • derFunkenstein
          • 9 years ago

          Holy crap, bad memory. Err, my memory was bad. And the memory wasn’t what it was billed to be (but at that time, wasn’t OCZ still a shady dealer?)

      • axeman
      • 9 years ago

      If you can find a good deal on a Phenom II X4, the value from AMD is very good, considering you can find one for less than an i3, and even reuse your DDR2. Although buying a new motherboard that supports DDR2 might be questionable, it’s an option.

        • flip-mode
        • 9 years ago

        Being able to upgrade a DDR2 board by dropping in a Phenom II is indeed awesome, and all with negligible performance difference compared to DDR3. Anyone out there still rocking a Phenom 1 or an Athlon X2 on an AM2+ board and even select AM2 boards would be very well served by that option. But I wouldn’t buy an AM2+ mobo just cause I had some DDR2 laying around. It seems to me that at that point, it’d be time to reach just a little deeper into the pocket lint to jump to DDR3 and an ix 2×00 if at all possible.

          • axeman
          • 9 years ago

          That’s why I said it was questionable, hehe. I did it, but it was more fluke… AMD(ATi) chipset issues have been even worse in Linux, not to mention the superiority of Nvidia Linux graphics drivers (NVidia IGP in use in this case), so I managed to find one of the few remaining Nvidia chipset based motherboards (Geforce 8200, aka NForce 730a I think they call it), new enough to support Phenom II, HT 3.0, etc, that just happened to be DDR2. Cha-ching.

          That said, if you’re on a tight budget, you’re not going to get more from scraping the bottom of the barrel on the Intel side – you’re really going to have to shell out to get any better performance. The article’s price quotes the i3-2xxx as more expensive than the Phenom II X4 840. True, it’s lower power, a dual almost keeps up with quads, so it’s technically impressive. But spending more to get less ? The motherboard is likely to be 30-50ish more as well. Meh.

          That said, i3/i5 dual cores were already not great value, so as long as Intel’s comfortable technology and performance lead continues, this is unlikely to change.

          • Flying Fox
          • 9 years ago

          Well, I did that a year ago, but then I started off with a cheap 785G board and a cheap Athlon II 250 at the time. Now the system is rocking a Phenom II 965 BE. :)

      • Krogoth
      • 9 years ago

      The GTX 460 demonstrates how the GPU is the largest factor in gaming performance once you go to 2 megapixels or greater. I doubt that a GTX 580 would make things any better, because the Sandy Bridge chips were already the fastest in CPU-bound games like StarCraft 2.

        • flip-mode
        • 9 years ago

        That’s actually a good point.

    • derFunkenstein
    • 9 years ago

    I have one complaint: they changed the socket AGAIN. That makes the cost of upgrading anywhere from $100-$200 (depending on features and form factor desired) more than just the CPU. Not that I have an 1156 system or anything, but it’d be nice if Intel rewarded loyalty.

      • flip-mode
      • 9 years ago

      I know exactly where you’re coming from and can see why you’d feel that way, but I can’t see room to complain. It’s the price of progress. If maintaining socket compatibility would in any way hamper the advancements Intel can make to the CPU, then I see it as a losing proposition. And as an enthusiast, it’s always pretty exciting to get a new mobo anyway.

        • axeman
        • 9 years ago

        While it’s possible that the move is necessitated by new features, it’s more probable Intel, as the only vendor now for Intel compatible chipsets, is primarily motivated by making sure there is nice profit from a nice Intel MCH (or whatever they call it) sold to accompany each new CPU sold.

        This would be why they have a new socket *every single time*. Even going back to Socket 370, 478, and 775, they made sure new CPUs had slightly different pinouts or voltage requirements to *prevent* them from working on older motherboards even when the physical socket didn't change at all. That ploy was a little too obvious (when disconnecting or jumpering a few pins together makes a new CPU work magically fine in an old motherboard, as it did for me), so now they just release a new socket every 6 months.

        Power consumption is going *down*, so that’s not a reason to change sockets at all. LGA 1156 already supported both CPUs sporting graphics as well, so I don’t believe for a second this was necessary.

        It’s obviously possible for CPUs to have the extra transistors to work in old or multiple sockets, albeit with reduced performance or features (example: newer AM2+/AM3 CPUs support faster HT speeds, but can work in motherboards that don’t support them), even including DDR3 and DDR2 compatible memory controllers on the same CPU. This of course, probably eats into the transistor budget, but Intel with their large advantage in manufacturing would scarcely notice.

        Not that it’s worth raging over. Intel is a business, and they are motivated by profits. It’s good to be king, because you can maximize profits without losing sales. AMD is going to go further to make the sale, whether it nets them a sale of a new accompanying chipset or not. To expect anything else from Intel puts this topic in R&P territory.

        Yes, being an enthusiast, getting a motherboard might be exciting, but from my standpoint Intel can *insert insult here*

          • derFunkenstein
          • 9 years ago

          Well, after reading more about it, it seems like the “master” bus might have different signalling. Whether it *has* to have been that way is another story, but they’ve apparently made some electrical changes.

      • kamikaziechameleon
      • 9 years ago

      Totally agree, I went AMD last gen because of the musical sockets.

      At least this time they combined 3 sockets into one.

        • Ushio01
        • 9 years ago

        Not really. 1156, with or without compatibility with the integrated graphics, is replaced by 1155, again with two options, while 1366 will be replaced by 2011.

      • Krogoth
      • 9 years ago

      Get used to it.

      That's the cost of integrating core logic onto the CPU.

      Any time AMD or Intel decides to change the layout of the chip, the socket needs to change. The days of socket compatibility between new- and old-generation CPUs are long gone.

        • Flying Fox
        • 9 years ago

        Isn’t that what I have been saying too? πŸ˜‰

        • Meadows
        • 9 years ago

        [i<]The days of socket compatibility between new- and old-generation CPUs are long gone.[/i<] Don't forget to remind AMD of that sometime.

          • Flying Fox
          • 9 years ago

          They just did with AM3+. At some point the integration of more components onto the CPU will necessitate new signal pins, or re-arrangement of some. Since you cannot get easy performance gains by just bumping the clock or adding more cores, the 2 CPU guys are now looking at overall platform stuff to eke out any gains they can.

          Have been saying that for years.

            • NarwhaleAu
            • 9 years ago

            AMD maintains backward compatibility though (AM3+ will run older CPUs), which is a nod to a mindset I approve of. It’s an evolution of AM3, not a completely new socket. That’s a world apart from Intel, where it seems that new CPU = new socket. Intel customers (90%+) have to purchase a new motherboard to switch to AMD. As the market leader, Intel is betting that given the option, customers will decide to purchase a new Intel motherboard instead. AMD maintains as much compatibility across sockets as possible so that customers factor in the cost of a new mainboard when considering a switch to Intel. In short, their behavior is determined by their respective market positions.

            • Flying Fox
            • 9 years ago

            Choice is good, right? So that is why I don’t see why we have to bitch so hard at Intel for that.

            • JumpingJack
            • 9 years ago

            I wonder how they will handle Llano then? Do you think that will generate a new socket? That is ultimately the chip that will fit up against what was shown today.

            However, that is moot… it is clear AMD does a much better job at synergy and extending socket lifetime, but expect new sockets from AMD in the future as well. All 3 major product categories will generate new sockets: Brazos will likely be soldered to the PCB directly, Llano will need a new socket (no AM3 or AM3+ there), and BD for desktop will not drop in. In all cases I would need to buy a new MB.

            • Krogoth
            • 9 years ago

            It is because AMD hasn’t done any major change to the core logic since the release of Phenom IIs. The upcoming Fusion chips will have entirely new sockets for them.

    • TREE
    • 9 years ago

    Anyone else thinking that if Intel were to release a full-blown GPU based on this current architecture, they'd have a solution capable of competing with Nvidia's and AMD's latest GPUs?

    Intel's graphics probably wouldn't have the best feature set (apart from GPU H.264) or performance from the get-go, but it would certainly stand its ground against the well-entrenched GPUs currently out.

    • TaBoVilla
    • 9 years ago

    Excellent and timely review, Scott; hope it didn't take away too much from the holidays. I was, moreover, impressed by the Core i3-2100's performance; amazing for a dual-core/HT part.

    [i<]I'm running out of ways to say "continued dominance," folks. Perhaps in Fake Spanish, which I studied for four years in high school? Continedola dominancia! [b<]Or something[/b<].[/i<] This made me laugh =)

    • chischis
    • 9 years ago

    So I just bought a Phenom X6 1055T setup for content creation. I was concerned that SB might blow it out of the water, but what with VAT rising in the UK tomorrow, I’d convinced myself it was a good move financially.

    And after reading TR’s excellent review… I still am. Okay, SB brings massive improvements in efficiency but considering what it is likely to cost over here at launch (damned VAT), I’ll take my new system, thanks. I just don’t see enough of a performance gap in the Cinebench and Povray benchmarks to make SB worth considering. Also, Intel need to stop messing around and bring 6-core SB out at REASONABLE prices!

    My take: For gaming, a Q6600 or Q9400 is still rocking when paired with a suitable GPU. So I wouldn’t worry guys, if you still own such a CPU. But for content creation, an upgrade might be worth considering.

      • derFunkenstein
      • 9 years ago

      Yeah, I took two things away from the review:

      1.) Sandy Bridge is OMGFAST
      2.) My current setup is still plenty fast enough for my needs (Phenom X4)

      But yeah, wow. Stupid fast. Excessively fast. “AMD isn’t going to catch up this year” fast.

    • crsh1976
    • 9 years ago

    I’m very exciting about SB as an upgrade to my aging C2D E8400 setup, it looks like it was well worth waiting and skipping over the LGA1156 Core i3/5/7 generation given there’s yet another socket change. Haven’t decided which chip to pick up yet, I’m definitely going with a quad-core this time, but I’m torn between the non-HT and HT versions.

      • d0g_p00p
      • 9 years ago

      I’m in the same boat as you with the exact same CPU and issue.

      • BoBzeBuilder
      • 9 years ago

      There you go:
      [url<]http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/39555-intel-sandy-bridge-core-i5-2500k-core-i7-2600k-processors-review-6.html[/url<] Personally, I'll go with HT enabled processors.

        • Flying Fox
        • 9 years ago

        Those -bigadv WUs are very tasty at 20K ppd. ;)

    • blastdoor
    • 9 years ago

    Nice review.

    The thing that’s impressive about SB is the balance — everything is better with SB. There are no tradeoffs to be made. This is a huge contrast to the P4, which involved big tradeoffs relative to the P3.

    I suspect that Bulldozer will take a different approach, bringing more targeted performance increases that will be of great interest to specific markets.

    It’s an interesting contrast between different philosophies. Intel is still pushing the idea of the general purpose processor that is used for everything from laptops to server farms (and SB is a pinnacle achievement for general purpose processors). AMD (and IBM and ARM et al) is moving away from the general purpose processor, trying to provide different products to different market segments. In part this is just a function of where these companies are in the marketplace (AMD can’t really afford to compete with Intel for general purpose processors even if they wanted to), but I think it’s also a function of how computing needs are evolving. With SB, Intel is arguably improving a lot of things that don’t really need to be improved at the expense of improving some things even more that would benefit from even more improvements. AMD’s counter will be to say “ok, we’re not improving X anymore because it’s good enough already, instead we’re going to put all our efforts into improving Y”.

    My own view is that AMD et al probably are more in tune with where computing needs are heading. But of course AMD’s success is predicated on the assumption that bulldozer will totally dominate SB at some things while lagging behind in others. If Bulldozer lags behind in some things while only matching SB in others, then AMD will be in a tough spot.

    • sweatshopking
    • 9 years ago

    Looking good. I appreciate the increase in performance, and reduction in power, but at this point, I can’t help feeling cpu’s are fast enough. yes, you get an extra 10 fps in sc2, but if you’re doing video encoding, it’s MUCH faster on an nvidia or ati card. I’m not sure that I’m seeing any reason to upgrade my 65nm q6600, even at this point. It’s still fast enough.

    also, man. big bang theory? really? I would have expected some kind of culture from you gentlemen. :( I guess you like your prole shows…

      • StuG
      • 9 years ago

      I feel more and more people are going to agree with you. A good quad-core will keep most people afloat in today's computing world, with nice snappy response times still. Also, the 10 fps increase is at low rez; at higher rez the graphics card makes the difference even more minimal.

      Oh well, still cool to read and see.

      • travbrad
      • 9 years ago

      I feel the same way with an E8400 (@ 4GHz). Yes, I would get more performance by upgrading, but it's very hard to justify spending $400+ (CPU/mobo/RAM) when every game I've tried runs pretty well. (The E6400 in the charts isn't really representative at all, since I'm at double the clock speed plus more cache.)

      Gaming isn’t everything of course, but when you factor the GPU-assisted encoding of videos, what else do most people use their CPUs for (where the CPU is actually the limiting factor)? I do appreciate the scientific, synthetic, 3d rendering, etc tests just to see what the CPUs are really capable of. But those aren’t things most people do.

        • NeelyCam
        • 9 years ago

        Sure, but let’s see how you feel when your grandmother, your unemployed neighbor and your boss’ dog have faster computers than you.

          • sweatshopking
          • 9 years ago

          i’m not worried about what other people have. that’s ridiculous. the jones aren’t a concern around my house. my real peen is big enough, I don’t need to brag with my e one. my computer does what I need it to do. why spend money on nothing?

    • pedro
    • 9 years ago

    Awesome article, folks. We all appreciate your hard work. The power consumption improvements alongside the sometimes shocking performance increases just amaze me. My reference point throughout this article was the i5-760 (which I own). It got trounced quite severely on quite a few occasions.

    I very much look forward to reading the respective motherboard reviews.

    Intel have done it again, as have TR!

    BTW, that test rig pic on page 4 is pure porn for geeks.

    • Silus
    • 9 years ago

    Good article as always. Very thorough!

    There’s a typo in the “Integrated graphics and QuickSync video processing” page:

    “We had to turn down all of the game’s IQ settings and drop the resolution to 1280×800, bit was darn nearly playable on the Core i5-2500K”

    Should be “but it was darn nearly playable”

    • Meadows
    • 9 years ago

    [i<]We'd kind of expected the game to detect Intel's drivers, roll over, and die.[/i<] You crack me up, little buddy.

    • kravo
    • 9 years ago

    Does (or rather, can) the discrete GPU power down completely if it isn't being used by any programs (for example, if one's only browsing the net), with the IGP taking over? Will Intel support such a feature (or does it already)?

    Maybe I missed the line that answers my question; if so, I apologize.

      • Voldenuit
      • 9 years ago

      There is no existing solution on the desktop (yet). Lucid is working on [url=https://techreport.com/discussions.x/20182<]Virtu[/url<], which promises to bring Optimus-like switching to desktops, but we're not sure what type of business model they will pursue - licensing fees from mobo makers, or pay software for end-users?

    • abw
    • 9 years ago

    As impressive as they might be, these scores
    will be Bulldozed in the multithreaded department.

      • travbrad
      • 9 years ago

      Maybe, but beating $200-$300 CPUs from 6 months ago (assuming that's how far away it is) wouldn't exactly be an enormous achievement. If it can compete with whatever Intel has on offer by then, then I'll be seriously impressed.

      Don’t get me wrong I’d love for AMD to pull off another Athlon64 and keep Intel honest, but it’s hard to imagine Bulldozer beating Intel’s offerings (keeping in mind these are still “mid-range” parts with only 2-4 cores).

        • SoulSlave
        • 9 years ago

        I dunno…

        I’m not sure they would want to pull something like that of again… Intel is too dangerous when threatened… They would probably do some nasty Ilegal stuff long enough for they to come up with something that could obliterate the competition… And then AMD would be up to the neck in debt again trying to survive…

        If they find some profitable niche (like IBM and SUN did with their RISC CPUs), they could maintain their margins, and slowly, incrementally increase their market share.

        Like that frog and hot water story…

        • abw
        • 9 years ago

        Fairly right, although 8C BD is promised for April's end.
        Compared to the X6, it will have more than 50% increased processing power,
        while getting the usual instructions Intel always "invents" to make its processors
        look better than they really are.
        Hats off to TR for including Cinebench 11.5, as Anand didn't use it since
        it would have shown the X6 in a favourable light.
        Oh, yes, he did use it in his first preview, but he removed the X6……
        This time he didn't use it, but he mentions it in his current article
        to say that the P2 X4 is behind SB, yet no graph…
        Hey, I thought that SB was a processor for modern software……

          • Flying Fox
          • 9 years ago

          [quote<]instructions Intel always "invent" to make its processors look better than they really are.[/quote<] I call that progress, otherwise we would still be stuck on x87. BTW what's with the self-linebreaking? 😐

            • abw
            • 9 years ago

            X87 is 80-bit FP precision while SSE2 is 64-bit FP precision.
            Yeah, that's progress.

            • Flying Fox
            • 9 years ago

            More bits does not necessarily mean better, depending on context of course.

            This new vector stuff looks nice though, and will enable more speedups in certain routines. The days of blanket improvement from a bump in GHz are over, so they have to do tricks now.

            • abw
            • 9 years ago

            “The days of blanket improvement with a bump in GHz is over”

            You did forgot the turbo boosts….

            That said, the context for X87 being “inadequate” was the athlon
            trouncing intel s cpus in floating point computations..

            • djgandy
            • 9 years ago

            …but the end results in x87 are a maximum of 64 bits; the 80 bits are used internally to reduce rounding errors. This brings its own problems, though, as it can lead to non-deterministic results depending on what code the compiler generates.

            64-bit SSE is a step forward in nearly all scenarios, especially performance. The next step will be 128-bit. Who in general-purpose computing really needs that precision, though?
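
            A tiny illustration of that non-determinism (my own sketch, assuming GCC on x86; the exact outcome depends on the compiler and on flags such as -mfpmath=387 vs. -mfpmath=sse):

            #include <stdio.h>

            int main(void)
            {
                volatile double a = 1e16, b = 1.0, c = -1e16;
                /* With 64-bit SSE doubles, a + b rounds back to 1e16 and the 1.0
                   is lost, so r ends up 0.  If the intermediate is kept in an
                   80-bit x87 register, the 1.0 survives and r may come out as 1. */
                double r = (a + b) + c;
                printf("%g\n", r);
                return 0;
            }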

          • PixelArmy
          • 9 years ago

          Did you look at the Cinebench 11.5 results vs the X6? Sandy Bridge wins… at best it’s a tie [i<]if[/i<] you want to go by price/performance [i<]and[/i<] use the -K version pricing. [quote="https://techreport.com/articles.x/20188/14"<]its multi-threaded results easily the fastest for a quad-core processor and, indeed, superior to the Phenom II X6 1100T's[/quote<] The "lowly" i5-2400 beats the X6 1100T a majority of the time across the entire TR benchmark lineup...

            • abw
            • 9 years ago

            Intel's superiority is mainly the 32nm process.
            A theoretical 32nm P2 X8 die would be 10% smaller
            than a 4C/8T SB, with competitive perfs.

            • PixelArmy
            • 9 years ago

            All of that X8 theory is nice and all but…
            what does that have to do with your claim that Cinebench 11.5 shows the [b<]X6[/b<] in a favorable light? [quote="abw"<]Hats off to TR for including Cinebench 11.5, as Anand didn't use it since it would have shown the X6 in a favourable light.[/quote<] I'm merely asking you to elaborate on what you mean, since based on TR's graph, I don't see your silver lining.

            • abw
            • 9 years ago

            Well, I'm extrapolating, as there will be no 32nm P2 X8, but since AMD
            promises 50% better perfs going from 12 MC cores to 8 BD modules (16C),
            we can expect the 8C BD to scale even better compared to the P2 X6.

            As for the X6's perf in Cinebench 11.5, I find it quite competitive considering
            the age of the architecture, as it's as efficient as SB if compared in terms of
            die efficiency. SB has the upper hand because of the process node,
            but if the P2 were shrunk to 32nm, it would be as efficient
            as Intel's best offering if not more, as an 8C P2 would easily battle it,
            even in terms of power efficiency.

            • NeelyCam
            • 9 years ago

            If you’re doing this kind of silly area normalization, maybe you should try comparing X6 1100T (346mm^2) to, let’s say, a scaled-up i7-950 (263mm^2) from the same Techreport review. By blindly scaling the area, i7 would gain an extra 1.25 cores (= 2.5 threads), or 1.32x more performance.

            5.31 x 1.32 = 7.01. In other words, Intel’s previous-gen CPU completely whoops AMD’s top-of-the-line CPU (5.86) in Cinebench when “area-normalized”. Note that this wasn’t even the top-of-the-line CPU; i7-965 is clocked higher than i7-950… but it wasn’t used in the techreport review, and even i7-950 is more than capable of wiping the floor with 1100T.
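
            Spelled out (assuming Cinebench throughput scales roughly linearly with the extra cores gained):
            \[ \frac{346\,\mathrm{mm}^2}{263\,\mathrm{mm}^2} \approx 1.32, \qquad 5.31 \times 1.32 \approx 7.01 > 5.86 \]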

            And how on earth could you think that 1100T would do better against Intel’s newer architecture that’s clocked higher?

            • abw
            • 9 years ago

            Didn't you read that I gave the qualifier "considering its age"?….
            The only thing that is lacking is some new instructions; that's not to
            say that Intel doesn't have some performance advantages.
            When the K8 defeated the P4, it wasn't because of big innovations,
            but mainly because it acquired the SSE2 that the Athlon XP
            was lacking.

            • NeelyCam
            • 9 years ago

            Didn’t you read that I suggested comparing X6 to Nehalem (you know, old architeture from two years ago), using the same “node”, so all this extrapolation crap could be skipped.

            From that comparison – both old architectures, both at 45nm – X6 is NOT competitive. That is all.

            • abw
            • 9 years ago

            Some benches………..

            [url<]http://www.hardware.fr/articles/815-16/intel-core-i7-core-i5-lga-1155-sandy-bridge.html[/url<]

            • NeelyCam
            • 9 years ago

            So, what’s your point? i7-950 – again – beats X6’s in performance/area. Both are in 45nm, both could be considered “old architectures” (although X6 is still based on the latest AMD architecture).

            • abw
            • 9 years ago

            The point is to help you understand where this is all heading…
            Zambezi will have more than a 50% perf improvement compared to the X6.
            That puts things in perspective, and it will necessitate a 6-core SB
            to compete adequately..

            • maroon1
            • 9 years ago

            There is no evidence that Zambezi will be 50% faster than X6

            Those are just empty claims. Let us wait for the actual benchmarks from independent (not AMD) reviewers.

            • abw
            • 9 years ago

            The writing is on the wall… Desktop BD has 16 integer ALUs, SB
            can count on a maximum of 12, while the X6 also relies on 12 but with poor bandwidth and scheduling…
            As for FP execution resources, it will be no contest…..

            • djgandy
            • 9 years ago

            FP? No contest? Did you not see AVX?

            Please don’t say you are talking about x87.

            • abw
            • 9 years ago

            BD will provide an even bigger power increase in FP than in integer.
            Moreover, these gains will already be visible in current software.
            Intel will no longer benefit from its instruction-discrimination policy;
            that is, BD has all the possible instructions that matter, not
            counting its fused multiply/add capability, which Intel will implement
            only in its next processors.
            Btw, BD has AVX….

            • djgandy
            • 9 years ago

            yawn, you don’t even make sense any more.

            • abw
            • 9 years ago

            The non sense is to brand something nonsensical without the slightest argument…

            • djgandy
            • 9 years ago

            or in your case just going on and on using wild un-sourced claims as gospel arguments.

            • abw
            • 9 years ago

            Still pessimistic?… These are AMD's numbers…
            Indeed, they have claimed for years that BD will be more powerful than SB.
            Expect a 50% INT performance increase from a P2 X6 to an 8C Zambezi,
            and as much as 85% for FP computation capability…

        • derFunkenstein
        • 9 years ago

        well in 6 months you’re not going to see another replacement for SB – you might see clockspeed improvements but that’d be it. So if Bulldozer can show strong against the CPUs available in 6 months AMD will be fine.

        AMD should also be doing very well in the server virtualization world (not sure if they are or not, though) because many “slower” cores would be better utilized than a few (relatively speaking) “faster” cores.

        • abw
        • 9 years ago

        BD 8C is meant to compete with 4C/8T SB,
        BD 6C against 4C/4T SB and BD 4C vs 2C/4T SB.

      AMD will have an adequate counterpart, as they will have
      a better offering at the top and bottom..

          • kamikaziechameleon
          • 9 years ago

          Just looking at AMD’s current price scheme and assuming they build turbo core tech into BD I think we could assume the their processors will at most be 300 dollars for 8 cores, and then the price will probably drop 70-100 dollars in the first 6 months similar to the x6 core launch and successive price cut.

            • djgandy
            • 9 years ago

            Yeah they will be like they are currently. Selling huge die monsters for nothing. 350mm2 competing with 130mm2. It’s not good for profit.

            • abw
            • 9 years ago

            8C BD will be about 200mm2.
            Indeed, at 32nm, the current X4 or X6 would be quite competitive.

            • djgandy
            • 9 years ago

            How do you figure that? 45nm -> 32nm, shrinking the current 350mm^2 X6 to 150mm^2 (which I figure is competitive, considering you claim the 8-core BD is 200mm^2), is a shrinkage of 60%.

            I think they will get the X4’s in at around 160mm^2, and the 6 cores will be around the 220mm^2 mark.

            You are way off I’m afraid.

            • abw
            • 9 years ago

            A 45 to 32nm shrink doubles the density, so that puts the X6 at 180mm2 at most,
            and no more than 150 for the P2 X4.

            4C/8T SB is 215mm2…
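
            Under ideal area scaling with the node shrink (an assumption; real layouts rarely scale perfectly), that works out to roughly:
            \[ \left(\frac{32}{45}\right)^2 \approx 0.51, \qquad 346\,\mathrm{mm}^2 \times 0.51 \approx 176\,\mathrm{mm}^2 \]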

            • djgandy
            • 9 years ago

            You never achieve perfect scaling when moving an old arch to a new process. You are very ambitious; plus, the numbers you were throwing around were beyond what even doubled density could achieve.

            8 Cores at 200mm^2, and now X6 is going to be 180mm^2. I think you need to sit down with a sheet of paper and a calculator and figure out what your story is.

            • abw
            • 9 years ago

            180mm2 is for a current P2 X6 shrunk to 32nm.

            200mm2 is for 8 BD cores, that is, 4 modules.
            A module is 31mm2 with its 2MB of cache included; add 52mm2 for the 8MB L3 cache
            and that makes 176mm2; the remaining 24mm2 are more than enough for the rest of the uncore.

            • djgandy
            • 9 years ago

            31mm^2 is the size of 2 cores and 2MB of L2, and it will be the size of exactly that. None of the critical logic that is required to use the modules is included.

            What about the memory controller? What about cache control, HyperTransport, power gating, and all the interfaces each core requires to communicate with other cores? 24mm^2 for all of that on an 8-core processor?

            The thing is going to be more like 300mm^2.

            • abw
            • 9 years ago

            You're completely wrong.
            One SB core with HT is 30mm2; that is as much as one module (2C).
            Why should AMD's uncore be significantly bigger at the same node?

            • kamikaziechameleon
            • 9 years ago

            WOW, I feel stupid. You guys are really smart. Electrical engineers by any chance?

            • abw
            • 9 years ago

            no, just average joe like most of us…..at least for my case….

            • djgandy
            • 9 years ago

            Not an EE. I do work for a chip company though.

            • djgandy
            • 9 years ago

            doublepost

            • NeelyCam
            • 9 years ago

            I’m an EE

            • abw
            • 9 years ago

            An Intel EE ?…..

            • NeelyCam
            • 9 years ago

            Don’t you think I should be smarter if I was an Intel EE…?

            • djgandy
            • 9 years ago

            Are you forgetting SB has 256K of L2 per core? I.e., 4x 256K = 1MB, plus 8MB of L3.

            This is why BD can't be so close to SB.

            • abw
            • 9 years ago

            You should do the maths: a BD module WITH its 2MB L2 included is the same
            size as ONE SB core, no matter the 256K cache.
            Short memory?….

            Also, you simply forgot SB's IGP, and BD has none…
            Probably it will be less than 10% smaller, but it will outperform it…

            A guy asked why Intel moved the launch up two days.
            It's simply that OEMs and manufacturers already have BD samples
            and that its performance is already known in restricted circles,
            including, of course, Intel…

            • JumpingJack
            • 9 years ago

            “A guy asked why Intel prompted the launch two days earlier.
            It simply that OEM and manufacturers already have BD samples
            and that its performances are already known in restricted circles,
            including of course Intel..”

            Odd, how does pulling the NDA off two days earlier make any difference then??

            • abw
            • 9 years ago

            The sooner people get hammered by the usual Intel hype, the better,
            as inertia will help when the BD baseball bat strikes……

            • JumpingJack
            • 9 years ago

            Ohh, ok. thanks

            • djgandy
            • 9 years ago

            Sorry, last post before bed but I was trying to provoke thought on your part.

            You just said that BD and SB have roughly the same sized cores with L2 cache included at around 30mm^2. Think about that for a second. AMD with 1.5MB more L2 cache per core and arguably a core with more integer resources fits into the same space as a SB core?

            SB cores are less than 20mm^2. A quad-core SB with no IGP is 216mm^2 – 46mm^2 = 170mm^2. If you take a look at the die shot on Anandtech, even with the IGP gone the 4 cores still take up less than half of the die space.

            The AMD die shot posted below does not show a core that is as tightly integrated as SB. Substantially less than 50% of the die is taken up by the cores or “modules” and they come in at 31mm^2 each. So at 31x4x2 (assuming half the die is cores) BD is going to be at least 248mm^2.

            If BD were to hit 200mm^2 which is only 30mm^2 more than SB, it would be packing at least 44mm^2 more core into that space. At what cost would this come?

            • abw
            • 9 years ago

            It is said in this thread that one SB core is 30mm2.
            I don't know where your 20mm2 comes from.

            So we have two BD cores + 2MB L2 in the same area;
            both processors have 8MB L3 and more or less the same uncore, with SB having
            an added GPU.
            So why would an 8C BD be significantly bigger than a 4C/8T SB?..

            That said, BD can be a little bigger than SB provided the performance/mm2
            is as good, and it will be better according to the few infos already known.

            • djgandy
            • 9 years ago

            Where in this thread? And who is the authority?

            Look at the article. [url<]https://techreport.com/articles.x/20188.[/url<] If the SB Cores are 30mm^2 the die must be about 350mm^2! You can squeeze approximately 12 SB cores into that 216mm^2 space. That puts a core at around 18 mm^2, substantially less than BD. Core + L3 cache is probably about 30mm^2.

            • abw
            • 9 years ago

            My bad, an SB core is 20mm2 including 256KB of L2.
            A BD module is 31mm2 including 2MB of L2, so its total cores+L2
            area is 44mm2 bigger than SB's, but the latter has a 45mm2 GPU,
            so the surfaces are of the same order, with BD being more powerful;
            by what extent, well, 20% better in a multithreaded environment is a
            cautious guess.

            • NeelyCam
            • 9 years ago

            Djgandy: a decent way to estimate the SB core+L3 cache area is to look at the 4C+HD3000 and 2C+HD3000 area numbers – Tom’s Hardware has them:

            [url<]http://www.tomshardware.com/reviews/sandy-bridge-core-i7-2600k-core-i5-2500k,2833-2.html[/url<] 4C=216mm^2. 2C=149mm^2. (216-149)/2=33.5mm^2. As you can see from the die photo, there is some empty space on the left side of the memory controller - this also disappears when removing two cores, so the 33.5mm^2 includes some of that too. Putting a ruler on the die photo, that empty space is 1/8 of the die height, bringing the total core+cache area to about 7/8*33.5mm^2=29mm^2... can be rounded up to 30mm^2. Again, using the ruler on the die photo, core height is about 1/2 of the die height, so the core itself is about 0.5*33.5mm^2=17mm^2.
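
            Condensed, that estimate works out to:
            \[ \frac{216 - 149}{2} = 33.5\,\mathrm{mm}^2, \qquad \tfrac{7}{8} \times 33.5 \approx 29\,\mathrm{mm}^2 \ (\text{core + L3 slice}), \qquad \tfrac{1}{2} \times 33.5 \approx 17\,\mathrm{mm}^2 \ (\text{core only}) \]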

            • NeelyCam
            • 9 years ago

            Never mind, you already got it (blindness is an unfortunate side effect of hunger).

            • maroon1
            • 9 years ago

            Let us not forget that SB has integrated PCIe lanes, not just an integrated GPU.

            The older Lynnfield CPU has a larger die than Bloomfield because Bloomfield doesn't have integrated PCIe lanes.

            • NeelyCam
            • 9 years ago

            [quote<]Also, you simply forgot SB s IGP and BD has none...[/quote<] Yeah, and if you want a fair comparison, you should deduct that IGP area from SB calculations. Or, would you rather add the low-end graphics card chip area to your BD 'total system chip area' calculation?

            • abw
            • 9 years ago

            Problem is that this GPU is more of a cost adder than a really useful feature.
            Who would want such a poor-performing GPU with top-bin CPUs?
            Intel should have segmented the market better, as the GPU-less SB parts
            will just be recycled dies with faulty GPUs..

            • NeelyCam
            • 9 years ago

            Those who use CPUs for something else than 3D gaming. I.e., most of the market.

            • abw
            • 9 years ago

            The CPU lifecycle is longer than the GPU one.
            Old notebook CPUs are enough for many tasks, but browsing
            the web is just awful thanks to outdated graphics that barely manage
            to deal with Flash-heavy pages, and this trend is here to stay since a CPU+GPU like SB
            exhibits outdated GFX perfs from the start.
            Lucky are those who will choose one with a decent mobile GPU..

            • NeelyCam
            • 9 years ago

            Good point.

            • Flying Fox
            • 9 years ago

            The problem is that that crowd probably does not need the K processors either, which makes the decision to put the higher-end GPU on the K line strange, to say the least.

            • djgandy
            • 9 years ago

            The GPU is 46mm^2, so you could just call SB a 170mm^2 chip without it.

            Each BD module is ~13mm^2 larger than an SB core, so even if all else is equal BD will be 52mm^2 larger, at 222mm^2.

            I don't think all things will be equal, though. I think AMD will have more HT links than Intel has QPI links, and if AMD's past layouts are anything to go by, they will not be anywhere near as tightly packed as Sandy Bridge is.

            • NeelyCam
            • 9 years ago

            [quote<]Why should AMD's uncore be significantly bigger at the same node?[/quote<] Same node doesn't mean same process. Logic and analog circuits have different "densities" in different processes, even at the same node. Moreover, you were talking about scaling from 45nm->32nm to estimate the size of your imaginary 32nm X6. Uncore, particularly (mostly analog) I/O circuitry, doesn't scale as well as pure digital logic. You can't just blindly assume 50% scaling from 45nm to 32nm for either digital or analog, and you most certainly can't take two completely different 'cores' on different processes, say that they are the same size, and somehow conclude that everything else will be the same size too.

            • JumpingJack
            • 9 years ago

            [url<]http://aceshardware.freeforums.org/amd-bulldozer-preview-t1042-165.html[/url<] Hans DeVries (die size guru) took the photoshopped die shot, found a common block and scaled to the similar block of Llano (die size known). He estimated BD 4 module die at 320 mm^2. So I think you are close.

            • abw
            • 9 years ago

            Hans is a respected author, but on this one, I think he got it wrong.
            320mm2 implies that the die is as big as 10.5x a 2-core module + L2, which
            seems exaggerated when looking at this die.
            Besides, that shot was publicly known to be heavily photoshopped
            before publication, so my point of view is as valuable as Hans's…

            • JumpingJack
            • 9 years ago

            Hans has a pretty good and established record of estimating die sizes from obscure die shots. BD is very cache heavy, so just estimating on core alone seems to be a bit off. There is 8 meg of L3 cache as well, and the northbridge which will probably be well overhauled.

            • djgandy
            • 9 years ago

            Well, there we go. Hans may be off a bit with 320mm^2, but he's sure a long shot from 200mm^2.

            By comparison, Sandy Bridge is far more tightly integrated than the Bulldozer die. [url<]http://www.pcper.com/article.php?aid=608[/url<] Of course Hans is making an estimate, but even with a new layout I fail to see how they are going to get to 200mm^2. I believe the original die shot was from AMD, and while it has been photoshopped to hide specific details, it is quite easy to see what a core + L2 is :)

            • NeelyCam
            • 9 years ago

            Your linky points to a Nehalem layout… :P

            But I agree – Sandy Bridge is an exceptionally well-floorplanned chip. That said, I don't trust those pictures on that aceshardware page… the photoshopping is just too extreme :)

            • djgandy
            • 9 years ago

            Whoops, this is what happens when you have 20 tabs open. Well, there is a die shot in the TR article. Not hard to find :)

            • NeelyCam
            • 9 years ago

            I wouldn’t trust the “current price scheme” in figuring out what BD is gonna cost. Current pricing scheme is dominated by AMD’s current performance weakness in the market place. Since BD is likely to be more competitive in terms of performance, it will most likely be priced also higher than the current stuff.

            Honestly, AMD needs the money, and if they can charge more for BD and still sell every part they make, they WILL charge more.

        • blastdoor
        • 9 years ago

        If things go well for AMD, then BD will beat SB by a healthy margin in some things while losing badly in others. AMD is no longer pursuing a “one chip to rule them all” strategy — it’s all about tailored products. For example, SB may very well crush BD in video transcoding, but video transcoding isn’t very relevant for a lot of server applications.

      • ub3r
      • 9 years ago

      Wait till sandy bridge gets 8 and 12 cores.

        • Flying Fox
        • 9 years ago

        And socket 2011 will cost an extra arm…

          • NeelyCam
          • 9 years ago

          …but it makes your epeen longer than your arm was – it's a net win.

    • Anonymous Coward
    • 9 years ago

    I’m most impressed by the i3-2100. Not bad for two cores stuck at 3.1ghz!

    • ssidbroadcast
    • 9 years ago

    [quote<]Normally, we'd make up a nifty, blue table with all of the various models and their key specifications, but this time, I've decided to relay it to you just as it came to us from Intel.[/quote<] Where apparently Intel is still using display adaptors with only 256 colors! Holy dotted dithering, Batman!

      • LiamC
      • 9 years ago

      TVGA 8900 FTW!

      • Meadows
      • 9 years ago

      It's called GIF optimisation; not everyone wants to clog the intertubes.

        • ssidbroadcast
        • 9 years ago

        Yeah, I know, Mister No-sense-of-humor. Intel doesn't have to be chintzy with their marketing slides. The server could handle another 4KB for a decent JPEG.

          • derFunkenstein
          • 9 years ago

          Well, I was thinking TR did something to shrink the file, but I might be wrong.

    • Pettytheft
    • 9 years ago

    Need to upgrade my overclocked E4400. Wow it’s looking sad.

    • StuG
    • 9 years ago

    I have to say that these look good, but it's not blowing me out of the water. This is exactly what I was expecting, and how it should look for the next generation of processors, in all reality. I imagine Bulldozer will come out and do what AMD did this last generation: their top end will snuggle in between Intel's mid-range and high end, while their low end battles Intel's low end, offering better price/performance but less raw performance. Hopefully AMD can do more than this, but I see them at least being able to pull off such a strategy with Bulldozer.

    Nonetheless, great review as always, TR staff.

      • NeelyCam
      • 9 years ago

      This seems to be the current trend. Back when 'new stuff' came out every 2-3 years, it always seemed like such a leap. Now stuff comes out every 9-12 months, and the evolutionary improvements possible at such a rapid pace don't seem too exciting.

        • StuG
        • 9 years ago

        There is truth in your statement. The biggest issue, though, as these markets turn over more rapidly, will be how the smaller or more niche companies keep up with the big powerhouse giants.

          • NeelyCam
          • 9 years ago

          See, I have to disagree with that a bit. It's the biggest company in the market (Intel) that was able to increase the rate of new product introductions (as part of its Tick-Tock model). Smaller/niche companies don't have the R&D muscle to do that, especially if they have to rely on third-party process development (GloFo, TSMC, etc.).

    • Forge
    • 9 years ago

    Heh. From the end of the review, I think it summarizes nicely as "stupid fast." The top SB chip only trailed the hex-core monsters in most tests, and approached even those heights when overclocked, so it's hard to find any fault here.

    QuickSync also makes me feel funny in my pants. Do want.

      • Flying Fox
      • 9 years ago

      This truly deserves Kicking Pat’s famous “bitchin’ fast” label.

      And yes, the 2600K and even the 2500K are doing unkind things to the i7-950.

      It is good, though, that the 875K still has some legs and I can still overclock it. Of course you have to pay the power bills, but for those who bought into Nehalem/Lynnfield, those chips are still hanging in there (within around 20%?).

      It will be interesting to see if QuickSync gives the same quality, or if it is another BadaBoom with somewhat poor quality.

    • NarwhaleAu
    • 9 years ago

    Unless Bulldozer is a huge advance over the Phenom II there is no way it is going to compete with Sandy Bridge. I was going to wait for Bulldozer (to support AMD), but I might have to go with an i5 or an i7 after all.

      • NeelyCam
      • 9 years ago

      It’s supposed to be a huge advance. Depending on what you have now, you might still want to wait…

        • NarwhaleAu
        • 9 years ago

        Understood, but I didn't expect Sandy Bridge to be so much of a leap forward. Bulldozer is going to have to deliver on its promises just to be competitive. I'm running one of those Core dual-core CPUs that Scott is suggesting should be upgraded… 🙂

        Edit: I didn't expect Sandy Bridge to have such strong single-core performance and to be able to bring the fight to the 6-core i7s.

          • Crayon Shin Chan
          • 9 years ago

          Kanter already said that Bulldozer will not win on the single threaded performance front. That much is obvious.

    • pedro
    • 9 years ago

    Huge call from Charlie @ semiaccurate.com:

    [quote<]SANDY BRIDGE WAS shaping up to be the killer CPU of the year, a huge step forward in the 'uncore', decent graphics and big gains in the core as well. Instead, we got broken graphics, non-working feature sets, and a showstopper bug. What a shattering disappointment.[/quote<]

      • NarwhaleAu
      • 9 years ago

      He’s upset that there aren’t Linux drivers available for the IGP and USB3 ports. Not exactly the end of the world (unless you are a Linux user).

        • yuhong
        • 9 years ago

        For more info on the topic:
        [url<]http://www.phoronix.com/scan.php?page=news_item&px=ODk2OA[/url<] And BTW, on USB3, there is a driver built into most recent Linux kernels, but in most distros you have to manually modprobe it due to lack of suspend support.

      • Forge
      • 9 years ago

      Wow. I normally try to defend Charlie, but he’s smoking a lot of industrial strength crack today. He rants about a bug, a very small bug, when trying to boot a USB3 thumb drive in a USB3 port. Whaaambulance time.

      And his “OMG Sandy grafx be broke” bit? No Linux drivers. Ok, that’s not good, but it reflects on Intel’s Linux support, not the hardware itself.

      Charlie is on crack.

      Intel! If you send me a nice shiny i7-2600K, I’ll be happy to dig in and crank out Linux numbers on it, too! Begz0r! Your product is hot!

      • NeelyCam
      • 9 years ago

      Bad out-of-context quote. Charlie tried to install Linux without drivers and it blew up on him.

      Much like his Sony laptop.

        • pedro
        • 9 years ago

        The quote was [i<]his lead-off[/i<] on a page entitled "Sandy Bridge is the biggest disapointment of the year" on [i<]his website[/i<], so I'm hardly taking him out of context here. The context is spot on, in fact. Two red thumbs down... brutal. Don't shoot the messenger. 🙂

          • TaBoVilla
          • 9 years ago

          I do not agree with thumbing down comments. Thumbs up if you agree! /notices he's not on YouTube right now

        • NeelyCam
        • 9 years ago

        I think I might be the combined thumbs-down champ over here (although grantmeaname up there is making a race out of it)… even my reasonable posts get thumbed down with a vengeance.

        AMD fanbois still pissed off that I’ve called everything right for almost a year now…?

          • poulpy
          • 9 years ago

          [quote<]AMD fanbois still pissed off that I've called everything right for almost a year now...?[/quote<] That would clearly be the only rational explanation, you -a true Intel messiah- being persecuted by an army of fanbois.

      • JumpingJack
      • 9 years ago

      Charlie is upset that he can't install Linux the way he wants, and he blames the CPU for this failure.

      He has a bit of a history of exaggerated sensationalism.

      You see, Charlie hates Microsoft, and Windows specifically.

        • shank15217
        • 9 years ago

        An entire OS fails to run on a processor that's supposed to be fully x86 compliant... yeah, I would think that's a big deal. Not everyone's gonna wanna play games n rip pr0n on their Windows box.

          • derFunkenstein
          • 9 years ago

          I'm pretty sure that by the time January is out you'll see Macs with these things, too. If OS X and Windows can run on it, then my guess is that Linux is the problem. 😉

            • poulpy
            • 9 years ago

            [quote<]If OS X and Windows can run on it, then my guess is that Linux is the problem[/quote<] Unless you were being sarcastic I'd suggest you check the number of architectures supported by Linux and then the ones supported by Windows/OSX before you go down this road.

          • JumpingJack
          • 9 years ago

          “That’s when the crippling bug surfaced. It seems the USB3 ports on the Intel DH67BL don’t want to work. Ubuntu 10.10 installs fail during the install, no fix was found. Plug the same stick into a USB2 port, and it works fine. Alternately, install from a USB2 stick on a USB3 port, and things work fine.”

          The entire rant is really about how he had problems using Linux with old drivers:

          “No drivers were provided for any flavor of Linux, and none were available short of building from source on our own. We do not feel this meets any reasonable standard for ‘available’. We await appropriate drivers from Intel for re-testing, but as of press time, none were available.”

          He thus blames the CPU rather than the lack of adequate drivers.

          Drivers for Linux often lag at release; ATI (AMD) is notorious for putting out updated Linux drivers well after hardware launch.

          Charlie has a habit of ranting over tiny details. A lack of proper Linux drivers is not necessarily tiny, but Charlie has it in for various companies: he, naturally, does not like Nvidia, and he despises Windows. Thus, in his eyes, if he can't use Linux, then everything in the universe is out of kilter.

      • green
      • 9 years ago

      Click-baiting. He's targeting Intel and AMD fans to get page views.

      • StuG
      • 9 years ago

      No reason for this to be downrated, so I got you back to 0. Sometimes Charlie is interesting, and other times he is not. This is one of those times when he is not. I am personally someone who roots for AMD, but the awesomeness of these processors is hard to ignore in any light. Especially since they aren't out yet, we could see Linux drivers pop up in 2-3 days from a company such as Intel.

      • TaBoVilla
      • 9 years ago

      I burst out laughing in the middle of the office when I caught the difference on the “One uses Sandy Bridge GPUs, the other discrete. Can you spot the difference?” section in the original article:

      [url<]http://semiaccurate.com/2011/01/02/sandy-bridge-biggest-disapointment-year/[/url<]

      • Silus
      • 9 years ago

      Yes, because “Charlie” is such an accomplished reviewer…I’m still in awe at how many people follow that tabloid and even take his words for anything other than sensationalism…

      • Flying Fox
      • 9 years ago

      Man, and I was trying to see what the fuss is all about, and the SA site goes down again! What kind of narrow window of uptime does that site have?

      • kc77
      • 9 years ago

      If you have the time, read this.

      [url<]http://libv.livejournal.com/22502.html[/url<]

    • VILLAIN_xx
    • 9 years ago

    I should have worn a diaper before seeing this roundup of Intel's new Sandy Bridge.

    • themattman
    • 9 years ago

    Great article and the first to be posted on TR V3.0!

    AMD will have their hands full in 2011, and I hope they can still compete on some level.

    …and I can now say that I camped out at my computer until midnight to read the Sandy Bridge review as soon as the NDA lifted.

    • BoBzeBuilder
    • 9 years ago

    Awesome giant review. Love it. Hopefully AMD can strike back with Bulldozer, but these things are hard to beat. Must get me one.

    • Goty
    • 9 years ago

    Aaaaaand… meh. I don’t really see a reason to upgrade from my nicely overclocked 920.

      • Forge
      • 9 years ago

      I suggest to you the same two words that have made me dissatisfied with my i7-920:

      Unlocked Multiplier

      That really is the one thing I want and can’t easily get. Building an i7-875K rig for a friend recently was torture.

        • herothezero
        • 9 years ago

        What’s torture about building an 875K system? I bought the CPU to replace my 860 (scored a $200 deal at Microcenter), stuck it in the socket and turned on the system. All done. I have to agree with Goty; I’m not seeing the value proposition for 875K owners, especially if they overclock, even mildly, like 4GHz.

        I’m not saying the Sandy Bridge stuff isn’t great, but the delta in performance isn’t what I was expecting for the higher-end CPUs.

        Great review, by the way, Scott; this is what sets TR apart from the other enthusiast poser sites.

          • Forge
          • 9 years ago

          It was torture because I got to spend two weeks tweaking and fiddling with an unlocked multiplier, then handed it over to the buyer and was left discontented with my formerly awesome i7-920. Having a fully unlocked multiplier doesn't necessarily mean more GHz, but it does mean you can dial in the base clock (BCLK), memory, and CPU clock completely independently. That made tuning his machine for maximum performance a real treat. Also, once I'd seen that a 166MHz BCLK was completely solid, his 1600MHz RAM liked 1666, and his CPU was fond of 4.166GHz, all the multipliers, clocks, and settings lined up very nicely.
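
          To make that relationship concrete, here is a small sketch of how the clocks on that platform derive from BCLK and the various multipliers. The multiplier values are assumptions picked to roughly match the numbers above, not a record of the actual BIOS settings used.

```python
# Nehalem/Lynnfield-era clocks all derive from the base clock (BCLK).
# The multiplier values here are assumed for illustration only.

def clocks(bclk_mhz: float, cpu_mult: int, mem_mult: int, uncore_mult: int) -> dict:
    return {
        "cpu_ghz": bclk_mhz * cpu_mult / 1000.0,  # CPU core clock
        "ddr3_mts": bclk_mhz * mem_mult,          # DDR3 data rate (MT/s)
        "uncore_mhz": bclk_mhz * uncore_mult,     # uncore/L3 clock
    }

# A 166.6 MHz BCLK with a 25x CPU multiplier and a 10x memory multiplier:
print(clocks(166.6, cpu_mult=25, mem_mult=10, uncore_mult=20))
# -> roughly 4.17 GHz core, DDR3-1666, and a 3332 MHz uncore
```

          On an unlocked part like the 875K the CPU multiplier can be moved independently of BCLK, which is what makes lining all of these up so much easier.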

          • sigher
          • 9 years ago

          Much less power use, new extensions that aren't even used yet and that will improve speed even more, reasonable cost... I see some advantages in those, especially the power use. It's already quite impressive compared to a normal i7, but compared to an overclocked one, the power numbers really shine.
          The only thing that's slower is RAM access (compared to triple channel), and that doesn't matter in 90% of cases, only if you need to shuffle around large blocks.

          • Flying Fox
          • 9 years ago

          [quote<]I'm not saying the Sandy Bridge stuff isn't great, but the delta in performance isn't what I was expecting for the higher-end CPUs.[/quote<]Keep in mind they are literally crossing market segments in the comparison this time: the i5/i7 2xxx Sandys are meant to eventually replace the i5/i7 6xx/8xx processors. The i7-9xx is the real high end here, and as Damage put it so nicely, both the 2500K and the 2600K are doing "unkind things" to the entry-level high end of the 950. It basically means there is no point buying an entry-level 9xx anymore if you are building a new box [i<]today[/i<].

          We also need to put this into the context of power. Sure, performance-wise we are looking at a 10-20% improvement over the last generation, but we are also looking at a 10-20% [i<]lowering[/i<] of power consumption. These two aspects add up to quite a significant difference (a rough perf-per-watt calculation is sketched below). For most enthusiasts around here the power aspect may not be of utmost importance, but as far as evaluating the new microarchitecture is concerned, the outright performance and the power consumption together form a great improvement.
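
          A minimal sketch of how those two deltas compound into performance per watt, using 15% as an assumed midpoint of the ranges quoted above:

```python
# Performance-per-watt gain when performance rises and power falls at the same time.
# The 15% figures are assumed midpoints of the 10-20% ranges discussed above.
perf_gain = 0.15   # new part is ~15% faster
power_drop = 0.15  # and draws ~15% less power

perf_per_watt_gain = (1 + perf_gain) / (1 - power_drop) - 1
print(f"~{perf_per_watt_gain:.0%} better performance per watt")  # ~35%
```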

            • ThorAxe
            • 9 years ago

            If you are a hardcore gamer or overclocker, then buying Sandy Bridge is relatively pointless. The i7-9xx is more than competitive in most tests, and even a mild overclock of any 9xx leaves it in the dust.

            The venerable Core i7-920 can easily reach 3.2GHz (the stock clock of the i7-970) at default voltage, leaving the overclocked 2600K at 4.5GHz in its wake despite being at a 1.3GHz disadvantage.

            Then you have the issue of being limited to 16 PCIe 2.0 graphics lanes instead of the 40 lanes on the X58 (though for graphics cards it's effectively 32 lanes).

            You make a valid point on power consumption, but as you say, enthusiasts couldn't care less.

            I'm not saying that anyone considering an upgrade should get a 1366 CPU; it would be wiser to wait for the Patsburg platform and its LGA 2011 socket, which will support four DDR3 memory channels and 40 PCIe 3.0 lanes.

            • Flying Fox
            • 9 years ago

            If I am buying new today why would I spend similar money on the i7-950 but need to suck up additional costs in at least 3 sticks of RAM and a more expensive motherboard? For people looking to upgrade from the 9xx, you are absolutely right. No need to look for upgrades, not that their options are plentiful to begin with (970? 980X?).

            You do have a point about the X58 chipset and the potential for 2×16 SLI/CF. However, with mainstream display resolutions now settling in on 1680×1050, 1920×1080(1200) and our current generation of video cards, is there really a need to go dual cards for most gamers?

            • Flying Fox
            • 9 years ago

            Oops, first reply fail with double post. 😳

            • Voldenuit
            • 9 years ago

            Most games are not CPU bound. But for the ones which are, SB kicks ass.

            15 fps faster than the $300 875K from a $200 CPU on Starcraft II? The same performance as a $1,000 980X? Which, at its stock clock of 3.33 GHz, is already running faster than your putative “3.2 GHz OC’ed 920”.

            For the professional renderer or media encoder, 6 real cores and more system interconnect is better than 4 cores and a crippled southbridge, but the hardcore gamer and overclocker are not well served by Gulftown or Bloomfield in place of SB.

            • ThorAxe
            • 9 years ago

            The 2600K loses in the Productivity benches by up to 40%, hardly an auspicious start for a brand-new architecture.

            As you say, most games are not CPU bound. We also need to bear in mind that an old i7-920 can easily do 4GHz with little effort, which produces tangible benefits, while the 2600K doesn't seem to benefit greatly even at 4.5GHz. Going by the benches in this article, even with a 1.3GHz advantage in clock speed it loses to the i7-970 by over 10% in some cases, or is even or only slightly better in others. It is clear that a 4GHz i7-920 would utterly destroy an overclocked 2600K at 4.5GHz in these tests.

            I guess I feel somewhat underwhelmed by this launch.

            • Voldenuit
            • 9 years ago

            That’s purely a function of having 50% more cores (and picking apps that scale linearly with cores). Very few apps are able to take advantage of more than 4 cores, so the ‘productivity’ benches are very academic.

            You specifically mentioned gamers and hardcore overclockers in your reply to FF, [i<]neither of which benefit from more cores[/i<]. [url=http://www.anandtech.com/show/4083/the-sandy-bridge-review-intel-core-i5-2600k-i5-2500k-and-core-i3-2100-tested/15<]Anand's SYSmark and Photoshop benches[/url<] are a lot more representative of real-life workloads than 7-Zip and TrueCrypt if you're going to be talking about typical application performance. And in PS CS4, the 2600K actually beats the 980X, which has 50% more cores.

            • ThorAxe
            • 9 years ago

            Sorry, I was thinking of an overclocked 920.

            Some games do benefit from more cores (though they are few and far between):

            Metro 2033: up to 20%*
            Prince of Persia: up to 10%*
            Arma II: up to 5%*
            Battlefield Bad Company 2: up to 10%*
            Grand Theft Auto 4: up to 10%*
            Dirt 2: up to 10%*
            Resident Evil 5: up to 15%*
            Splinter Cell Conviction: up to 10%*
            Medal of Honor: up to 10%*
            Civilization 5: up to 40%*
            Ruse: up to 20%*
            Dead Rising 2: up to 20%*
            Dragon Age Origins: up to 5%*
            Arcania Gothic 4: up to 30%*
            F1 2010: up to 10%*
            Lost Planet 2: up to 15%*
            Anno 1404: up to 30%*
            *compared to a quad core*

            [url<]http://www.pcgameshardware.com/aid,794274/From-Medal-of-Honor-to-Civ-5-17-Games-that-already-benefit-from-six-cores-CPUs/Practice/[/url<]

            • Flying Fox
            • 9 years ago

            970 has 4 cores? What are you smoking?

            • ThorAxe
            • 9 years ago

            Nothing. I’m just taking stupid pills.

            • Flying Fox
            • 9 years ago

            The 920 was not tested in this review, and the 950 was consistently beaten by the 2600K. So I ask again, what are you smoking?

            • ThorAxe
            • 9 years ago

            Let's look at the i7-950 and the 2600K then, to keep you happy.

            A 3.06GHz (3.33GHz Turbo) chip barely beaten by a 3.4GHz (3.8GHz Turbo) chip? By "barely beaten" I am referring to the % performance gained versus the % increase in clock speed (rough arithmetic on that is sketched below).

            The gaming tests resulted in minimal performance increases (except Civ 5), and in some cases, such as Metro 2033 @ 1680×1050, the minimum FPS was 40% less (14fps to 10fps).

            If this impresses you, then can I please have what you are smoking?
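
            For what it's worth, here is the back-of-the-envelope version of that clock-for-clock comparison; the 20% benchmark delta is only a placeholder to plug real results into.

```python
# "% performance gained vs % clock added", using the stock clocks quoted above.
base_950, turbo_950 = 3.06, 3.33        # i7-950 base / Turbo (GHz)
base_2600k, turbo_2600k = 3.40, 3.80    # i7-2600K base / Turbo (GHz)

clock_gain = base_2600k / base_950 - 1      # ~11% more base clock
turbo_gain = turbo_2600k / turbo_950 - 1    # ~14% more Turbo clock

perf_gain = 0.20  # placeholder benchmark delta; substitute a measured result
per_clock_gain = (1 + perf_gain) / (1 + clock_gain) - 1

print(f"clock +{clock_gain:.0%}, turbo +{turbo_gain:.0%}, "
      f"implied per-clock gain +{per_clock_gain:.0%}")
```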

            • Flying Fox
            • 9 years ago

            [quote<]A 3.06GHz (3.33GHz Turbo) chip barely beaten by a 3.4GHz (3.8GHz Turbo) chip?[/quote<]Have you been ignoring what I said about the different segments these CPUs are in? The fact that a new midrange part is stepping onto the former high-end bracket is not something to be counted?

            [quote<]Metro 2033 @ 1680x1050 the minimum FPS was 40% less (14fps to 10fps).[/quote<]At the same resolution with high detail, the 950 is at 10fps vs 12fps on the 2600K for minimum FPS. So what does that tell us? The fact that the 2500K scored 19 vs the 2600K scoring 10 is odd; I think it may just be test variance. And why are you clinging so much to the minimum FPS? Granted, the average FPS does not show the complete picture, but going to the other extreme and using just the minimum FPS does not give you the complete picture either.

            Really hardcore gamers should be focusing on the GPU anyway, and the fact that I don't have to spend as much on the motherboard, plus one less stick of RAM, gives good value for people [i<]looking to buy new[/i<]. If they already have a 9xx system, of course it is not worth it to upgrade. Heck, even my 875K has some legs left. Did I ever say the 9xx is not worth keeping and one should go scrap it for a 2500/2600K? My point has always been that if you are buying new, the 950 does not seem to be a good buy at this point. Prices will change, so in a few weeks' time the picture will be different.

            • ThorAxe
            • 9 years ago

            I agree. I would certainly not advocate buying any 1156 or 1366 CPU at the moment.

            I also agree that the GPU is far more important for gaming.

            I am not sure that the one-less-stick-of-RAM argument holds, as you do not have to use three sticks in a 1366 board if you choose not to. However, it is handy having the additional RAM for multitasking.

            I am just disappointed by the gaming performance increase, given that this is a brand-new architecture. It doesn't feel as large a leap as Nehalem did.

            • NIKOLAS
            • 9 years ago

            How can you judge gaming performance when the reviewer uses a ridiculous choice of GPU (i.e., a GTX 460)?

            • Voldenuit
            • 9 years ago

            That was a [i<]horrible[/i<] article. No test system data, methodology or even... benchmarks! We don't know what CPUs they ran the tests on, at what resolutions and detail settings. I'm not saying that there are no legitimate reasons to get a Gulftown, but this is not it. And the payoff for spending $900-1000 on a 6-core CPU is pretty slim when you get much better performance/price from other components.

            • ThorAxe
            • 9 years ago

            Agreed.

            • sigher
            • 9 years ago

            Comparing the i7-920, which doesn't have the AES extensions, in the TrueCrypt test is not completely fair, since it's only because TrueCrypt uses encryption heavily and supports the extensions that the 2600K wins in that one special circumstance.

      • NeelyCam
      • 9 years ago

      I'm perfectly happy with my passively cooled i5-670. Sure, this thing would transcode Blu-rays faster, but running stuff overnight gives plenty of time.

      • r00t61
      • 9 years ago

      The fixed-function video transcoder is very cool, but according to Anand it only works if the on-die GPU is enabled, which won't be the case with every P67 motherboard. A lame state of affairs, in my opinion.

      Given the months of massive hype, I was also expecting Sandy to be much faster than it was on this initial rollout. My primary CPU-bound activity is video encoding with x264. On my Q9550, a typical encode at 1080p might take 24-48 hours (for the second pass only). On the i7-2600K it looks like it'll take 12-24 hours, which means I still will, in all likelihood, have to leave the machine on overnight. It's kind of strange to have a product on an older, EOL platform (X58/980X) that's faster than the top chip on this new platform (P67/2600K) in this regard. Only with a massive overclock to 4.5GHz does the 2600K manage to equal, more or less, the 980X.

      Plus I’d have to dump my current rig which is working just fine at the moment. I suppose I could wish that the CPU or motherboard would spontaneously die to justify an upgrade but that smacks of bad karma to me. The silicon gods would surely be displeased.

      No price/performance graphs this time but I guess that’s what happens when Intel changes the release schedule at the 11th hour. Hasn’t Intel learned that in the business world, schedules are never supposed to slide to the left? Schedules only have one place to go – to the right.

      I still get the impression that Intel is letting all this tech trickle out slowly, on their terms, since they know that they have little to no competition in the mid-high end segments. They have no incentive to lower prices nor release monumentally faster chips. We get doled out little bits of CPU evolution every cycle, but that’s so unsatisfying. I want Star Trek in my living room and I want it yesterday. I’m cravenly spoiled and I want a giant burst of CPU revolution to blow our minds. I want to feel like I did when I first used an SSD, and couldn’t get over how much faster perceived performance improved over a mechanical disk. I guess I’m being unreasonable. But still, here’s hoping that AMD’s Bulldozer comes out to light a fire under Intel’s collective corporate rear end.

        • NeelyCam
        • 9 years ago

        “They have no incentive to lower prices nor release monumentally faster chips. We get doled out little bits of CPU evolution every cycle, but that’s so unsatisfying.”

        Actually, I was a bit surprised by the 2500K pricing; some $200 for that performance is much better than Intel's previous offerings. To me it seemed like a pre-emptive strike against… something? Llano, maybe?

          • Voldenuit
          • 9 years ago

          Can’t be. Llano can’t (and won’t) hold a candle to SB.

          Even Bulldozer is going to have stiff opposition from what is ostensibly Intel's consumer part, and I'm not confident it can beat SB, let alone the LGA 1356 variants.

          $200+ is not bad for an unlocked part, but it is a bit sucky that there are no cheaper overclocking options. And it's silly that QuickSync (which is an inordinately dumb name, completely uninformative and nondescriptive) is disabled if you run a discrete GPU. Anand also nailed it when he said that AMD is still viable for bottom-end buyers.

          But if you’re an enthusiast, there really is no smarter choice than Sandy Bridge right now.

            • NeelyCam
            • 9 years ago

            Let’s not forget that Llano will be 32nm SOI, with an insane IGP. We don’t yet know how magical that 32nm SOI will be…

            • Flying Fox
            • 9 years ago

            [quote<]We don't yet know how magical that 32nm SOI will be...[/quote<]And we don't even know if they will make it. Of all the players that still have fabs, it seems only Intel is now able to pull off a process shrink successfully and on schedule.

            • Voldenuit
            • 9 years ago

            Llano is going to be a Stars CPU plus (most likely) an Evergreen-derived GPU with somewhere between 320 and 480 SPs. The performance potential of Llano comes down to two relatively well-known quantities and has had much educated guesswork and speculation devoted to it already. The key factor for Llano is that it is going to be wholly reliant on software optimizations to make use of any synergies between the GPU and CPU.

            Whereas Sandy Bridge has shot out of the gate with available CPU performance that pretty much knocks everything else out of the ballpark (with the exception of $1000 EE CPUs). Yes, SB’s GPU is not great for GPGPU, but Anand’s accelerated encoding tests have shown that current GPU encoding results are being bottlenecked (presumably by CPU, or linear code) anyway, so Llano’s theoretical GPGPU advantage is whittled down on this common consumer usage model at least.

            Depending on how well Bulldozer matches up, Trinity (BD Core + on-die GPU) will be one to watch, although it will be competing with Ivy Bridge by that time. AMD will still continue to compete (and survive) in the marketplace, but anyone expecting a KO like they did with the K8 vs P4 is probably going to be disappointed.

        • UberGerbil
        • 9 years ago

        [quote<] I guess I'm being unreasonable.[/quote<]Ya think?

        • Flying Fox
        • 9 years ago

        Ever since Intel declared the Tick-Tock approach, you should have known this "trickle out" was going to happen. Didn't you get the memo back in 2006?

        • JumpingJack
        • 9 years ago

        There are a few errors in your assessment.

        The most obvious one is: "It's kind of strange to have a product on an older, EOL platform (X58/980X) that's faster than the top chip on this new platform (P67/2600K) in this regard. Only with a massive overclock to 4.5GHz does the 2600K manage to equal, more or less, the 980X."

        The top-end Sandy Bridge will release later this year. Intel's flagship 980X, and some rumored speed bump, will be the high-end desktop chip for the time being.

        The i7-2600K is a consumer, high-volume part. For a quad-core chip to get close to the six-core 980X is pretty remarkable, and speaks to some of the architectural improvements.

        You find it odd; I find it odd that someone would complain about getting 980X-like performance in the $200-300 price bracket.

    • anotherengineer
    • 9 years ago

    Great review, Scott!

    Just one discrepancy I have found, though (not so much with the review as with the motherboards).

    The testing on the Intel side was done with Asus motherboards, while the AMD testing was done with a Gigabyte motherboard.

    In my experience, Gigabyte motherboards tend to overvolt the CPU (I guess for easier overclocking) when the BIOS is left on auto. However, this can skew power comparisons.

    I have built AMD 890GX systems with Asus and Gigabyte motherboards, both with the exact same CPU. On the Asus board it runs at 1.300 to 1.350V, while the Gigabyte board will pump 1.380 to 1.410V into the CPU if left on auto (I have to manually set it down to 1.350V on Gigabyte boards).

    Did you check the BIOS or run CPU-Z to see what voltage the Gigabyte board was feeding the AMD CPUs?

    Edit: [url<]http://www.xbitlabs.com/articles/mainboards/display/amd-890gx_10.html[/url<]

    Edit 2: [url<]http://www.xbitlabs.com/articles/mainboards/display/amd-890gx_11.html[/url<] Down near the bottom you can see how the Gigabyte auto settings overvolting the CPU add 22.3W and 32.3W to the power consumption while running CPU Burn! (A rough sketch of why a small voltage bump matters that much is below.) I believe that, for a review with fewer variables, the same motherboard manufacturer should be used across platforms.
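
    As a rough illustration of why a small auto-overvolt shows up so clearly at the wall socket: dynamic CPU power scales roughly with V² × f. The sketch below assumes a 110 W dynamic load purely for illustration; real deltas also depend on leakage and whatever else the board's auto profile changes.

```python
# Approximate dynamic power scaling with voltage and frequency: P ~ C * V^2 * f.
# The 110 W baseline is an assumed load, not a measurement from the review.

def scaled_power(base_watts: float, v_old: float, v_new: float,
                 f_old: float = 1.0, f_new: float = 1.0) -> float:
    """Dynamic power after a voltage/frequency change, all else equal."""
    return base_watts * (v_new / v_old) ** 2 * (f_new / f_old)

# CPU assumed to draw 110 W of dynamic power at 1.350 V; the board bumps it to 1.410 V:
print(f"{scaled_power(110.0, 1.350, 1.410):.1f} W")  # ~120 W, i.e. roughly +10 W
```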

    • Forge
    • 9 years ago

    Intel! If you send me one of these i5-2500Ks and a mobo, I will tell EVERY PERSON I SEE about how awesome it is, for however long you'd like. SRSLY.

    • Crayon Shin Chan
    • 9 years ago

    I thought the NDA was Jan 7?

      • Ronald
      • 9 years ago

      Read the first sentence here from S|A:

      [url<]http://www.semiaccurate.com/2011/01/02/introduction-sandy-bridge/[/url<]

      • JumpingJack
      • 9 years ago

      They obviously pulled it in. Probably because they were worried AMD might launch their Fusion brand a day earlier. Who knows.

    • Damage
    • 9 years ago

    If you’re reading this right after its 11PM CT posting, what you’re seeing is a raw, unedited preview. We’ll clean it up, promise.

      • NeelyCam
      • 9 years ago

      It’s all good – thanks for putting it up asap!

    • JustAnEngineer
    • 9 years ago

    At last! Now how long before Newegg starts selling them?

      • Berek
      • 9 years ago

        Excellent question, JustAnEngineer. Usually I see them on sale a day after reviews go up, but who knows here. What's exciting is that it appears we won't need to upgrade our memory at all… CAS 9 1333MHz should be just fine, I think. Even overclocking potential won't be seriously hindered, from what I understand about these CPUs.

        • JustAnEngineer
        • 9 years ago

        Seriously… [b<]When[/b<] can I buy one of these?

          • bobboobles
          • 9 years ago

          My Micro Center email says they’ll be here on the 9th.

            • wibeasley
            • 9 years ago

            Too bad the in-store discounts aren't as deep initially as I remember them being for the i7-920 and i7-860.

            [url<]http://www.microcenter.com/storefronts/powerspec/index.html?utm_medium=email&utm_campaign=E0971%20Computer%20Parts%2020110105&utm_source=ACT_BYO&[/url<]

            • insulin_junkie72
            • 9 years ago

            I saw it when you posted it overnight, but the page seems to have been taken down (actually changed to something else) in the interim.

            Once you factored in the in-store discounts (which they put in smaller print below the online price), even those prices were surely going to blow away Newegg and friends.

            Alas, from a selfish perspective, no i3 prices were listed. Or a nice i3/cheapo mobo combo.

            • wibeasley
            • 9 years ago

            It's funny how they changed it to a page with Lynnfield CPUs. Maybe they got confused about the NDA date.

            It's also funny how the new page indicates that you need an i7 (instead of an i3 or i5) to take advantage of Blu-ray.
