Review: Intel’s Core i7-3770K ‘Ivy Bridge’ processor

Yes, folks, today Intel is introducing the long-anticipated new CPU code-named Ivy Bridge. The release of any new microprocessor comes with a tremendous amount of complex information, and Ivy is certainly no exception. Intel has handed over vast amounts of detail about its new chip, the products based on it, and their dizzying arrays of features. To that, we’ve added a boatload of test results comparing Ivy Bridge to her contemporaries. We’re practically bursting with info to share with you.

However, I’ve been reviewing CPUs for quite a while now, and I’ll let you in on a little secret. Sometimes, beneath all of the complexity, the scuttlebutt on a new chip is pretty simple. And, truth be told, that’s pretty much the case with Ivy Bridge. This new CPU is an incremental refinement of Sandy Bridge; its benefits are a slight jump in performance and a somewhat larger reduction in power consumption.

I’m oversimplifying, of course. The changes to Ivy Bridge’s integrated graphics are sweeping, for example, and there are a multitude of other tweaks worthy of note. Still, I see no reason not to give you the simple answer up front. Much of what follows is our attempt to distill a vast amount of information about Ivy Bridge down to the most relevant details and then to poke and prod this new chip to see how it compares. Because, you know, that’s what we do. We’ve had Ivy Bridge on the test bench here in Damage Labs for some time now, and we’ve focused most of our efforts on devising—and executing—new methods of testing CPUs. Ivy has been an intriguing test subject, and we hope to show you some things about her that you won’t learn anywhere else.

Into Ivy

Ivy Bridge comes into the world with every advantage, because it is derived from Intel’s excellent Sandy Bridge processor. Ivy is a “tick” in Intel’s vaunted tick-tock development cadence, a familiar architecture ported to a new, smaller chip fabrication process. Thus, Ivy looks very much like her older sister in terms of overall layout; they both have quad CPU cores, 8MB of last-level cache, integrated graphics, and built-in PCI Express connectivity, all tied together by a high-speed communications ring. Ivy Bridge processors will drop into the same LGA1155 socket as Sandy Bridge CPUs, in fact.

The biggest change here is the transition from a 32-nm fab process with Sandy to a 22-nm process for Ivy. Don’t zone out when you hear those words, folks. This conversion is not at all trivial, even though Intel has made a regular habit of transitioning to new fabrication processes every couple of years. The drumbeat of Moore’s Law has continued apace only because Intel and others have sunk billions into the development of new chipmaking techniques, and these transitions are getting harder to achieve each time. Those pesky laws of physics are becoming ever more difficult to navigate at the nanometer level, which is why companies like GlobalFoundries (which makes AMD CPUs) and TSMC (which makes GPUs for both AMD and Nvidia) are still struggling to produce enough chips with the right characteristics at the 32/28-nm level. Meanwhile, Intel is at least one full generation ahead by shipping 22-nm Ivy Bridge chips in volume today.

In order to make that happen, the firm has fundamentally rebuilt the transistor using a three-dimensional structure that it calls the tri-gate transistor. Intel has claimed these new transistors offer “up to 37 percent performance increase at low voltage versus Intel’s 32nm planar transistors,” a property it has said will prove especially useful for “small handheld devices” like smartphones, whose low-power chips should be able to operate at considerably higher clock speeds. There’s another way to capitalize on the process improvements, too. The new transistors can deliver even larger power savings at the same operating speed as 32-nm chips; the firm has claimed power reductions of over 50% in that case.

These things are well and good, of course, but the trick is how they’ll translate into desktop processors like the Core i7-3770K we’re reviewing today. The claims cited above were expressly made about operation at relatively low voltages and clock speeds compared to those of current desktop CPUs. At clock speeds approaching 4GHz and their accompanying voltage levels, the advantages offered by Intel’s 22-nm process are more modest. Desktop CPUs are probably approaching the hairy end of the frequency-voltage curve for 22-nm chips, where exponential growth in power consumption really begins to ramp up. That reality, perhaps combined with the changing dynamics of the PC market, appears to have driven Intel to make an unusual decision with Ivy Bridge: to realize 22-nm process tech improvements in the form of power reductions, not speed increases, for its desktop processors. Rather than rolling out a bunch of new CPU models with higher clock speeds in the traditional power bands, Intel has elected to reduce desktop power envelopes and hold clock speeds more or less steady.

In fact, the Core i7-3770K’s basic CPU clocks and specs are very close to those of the Core i7-2600K introduced in January 2011. The 3770K’s base and Turbo clocks are 100MHz higher, matching the 2700K model released last October. Prices have largely held steady, too. The 2600K’s introductory price was $317, and it hasn’t dropped over the course of the past 16 months. The 3770K supplants it for $5 less. The only truly dramatic change is the reduction in TDP, from 95W for the top Sandy Bridge chips to 77W for their Ivy-based replacements.

This move will have some positive impacts, of course, but they’re not exactly the sort of price-performance gains that have made PC enthusiasts swoon in the past. Will folks be excited by claims like “reduced cubic volumes for desktop enclosures” or “easier integration into all-in-one systems?” I’m having a hard time imagining the banner ads. Of course, everything Intel is doing with Ivy Bridge makes a tremendous amount of sense for laptops and other types of mobile devices, which is where much of the PC market is headed.

Code name       Key products       Cores  Threads  Last-level   Process node  Estimated     Die area
                                                   cache size   (nm)          transistors   (mm²)
                                                                              (millions)
---------------------------------------------------------------------------------------------------
Lynnfield       Core i5, i7        4      8        8 MB         45            774           296
Gulftown        Core i7-970, 990X  6      12       12 MB        32            1168          248
Sandy Bridge    Core i5, i7        4      8        8 MB         32            995           216
Sandy Bridge-E  Core i7-39xx       8      16       20 MB        32            2270          435
Ivy Bridge      Core i5, i7        4      8        8 MB         22            1400          160
Deneb           Phenom II          4      4        6 MB         45            758           258
Thuban          Phenom II X6       6      6        6 MB         45            904           346
Llano           A8, A6, A4         4      4        1 MB x 4     32            1450          228
Orochi/Zambezi  FX                 8      8        8 MB         32            1200          315

One bit of good news for somebody, whether it’s Intel shareholders or eventually consumers, is that Ivy Bridge should be very affordable to produce once Intel’s 22-nm process matures. At 160 mm² for the quad-core variant with the beefiest HD 4000 graphics, Ivy Bridge is easily one of the smallest desktop processors in recent years.

A closer look at the numbers above will give you a sense of how far ahead of the competition Intel truly is. It’s no secret that you can expect to see Ivy Bridge outperforming the FX-8150 processor, yet Ivy occupies almost half the die area of Zambezi—and Zambezi lacks integrated graphics and PCIe connectivity. The gap in TDPs between the two would be laughable, if it weren’t kind of dire. AMD’s true competitor in Ivy’s weight class is Llano, which has four cores and almost the same transistor budget, yet Llano is a larger chip because it’s fabbed on a 32-nm process. Llano’s prospects for matching Ivy in CPU performance are similar to my hometown Royals’ prospects for winning the A.L. Central.

Call it a tick-plus?

In spite of holding such a commanding lead, Intel hasn’t simply shrunk Sandy Bridge and left it at that. The ~40% increase in Ivy Bridge’s transistor count should be a clue that there’s much more going on here.

As often happens with “ticks,” Ivy’s CPU core microarchitecture has been tweaked in a host of small ways in order to improve per-clock performance. Intel architect Stephen Fischer told us he estimates the cumulative effect of those improvements to be a 4-6% gain in IPC, or instructions per clock. Among the changes is deeper pipelining in the divider unit, which should result in double the throughput for both integer and floating-point math. The cache prefetcher has gotten smarter and is able to cross page boundaries, allowing it to better track and anticipate complex access patterns. The prefetcher also has an adaptive mechanism to avoid hogging memory bandwidth; when queues grow too deep, it will throttle back its activity. There are other tweaks to improve Hyper-Threading (a few queues are now partitioned dynamically between two threads, rather than shared statically at 50-50) and AVX performance (more registers to help deal with memory accesses that cross cache lines).

The neatest trick is probably the virtualization of move operations; rather than moving data through the ALU, such operations can be accomplished via register renaming, so long as the source and destination datatypes are the same. Fischer told us this feature alone results in an IPC gain of roughly 1.5%.

Ivy has even added several new instructions. Some are related to a new feature intended to prevent privilege-escalation exploits. Another accesses a new on-chip digital random number generator, which will act as a high-quality entropy source for encryption algorithms of all types, not just the AES algorithm that’s already accelerated explicitly. Ivy also adds AVX instructions to convert quickly between 32-bit and 16-bit floating-point datatypes, allowing for high-precision 32-bit computation to be combined with more compact 16-bit storage.
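
That compute-in-32-bits, store-in-16-bits pattern is easy to picture in code. Here’s a minimal sketch using NumPy; whether NumPy actually emits Ivy’s new conversion instructions depends on the build, so treat this as the concept rather than the instructions themselves.

```python
import numpy as np

data = np.linspace(0.0, 1.0, 8, dtype=np.float32)  # compute at 32-bit precision
stored = data.astype(np.float16)                   # pack to 16 bits for compact storage
restored = stored.astype(np.float32)               # unpack before doing more math

print(np.abs(data - restored).max())               # only a small rounding error remains
```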

All in all, the microarchitectural changes are fairly extensive for a “tick,” but they are just the tip of the iceberg. This chip has a number of new power-saving features, too numerous to recount in any detail here. One of the big ones is the power-gating of the DDR interface at idle, which should help notebook battery life quite a bit. Also, interestingly enough, Intel now tests the optimal voltage for each chip at multiple frequencies and stores that information on the die, where it can be used by the power management controller. Previously, only two frequency points were tested, and the power controller would interpolate between them. Products with Turbo Boost enabled should presumably operate more efficiently, and Intel has some related tricks up its sleeve, such as products with configurable TDPs. A laptop chip could, for instance, operate at one TDP while on battery power and switch to a higher TDP when snapped into a docking station.

Meanwhile, Intel graphics architect Tom Piazza isn’t content to call Ivy Bridge a “tick” at all. He calls it a “tick+” because the graphics architecture has been extensively overhauled, more along the lines of what happens with a “tock” refresh on the CPU side. At IDF last fall, Piazza acknowledged some risk in introducing an “unknown” new graphics core in concert with a process shrink, especially because “the last thing you want to do at Intel is hold up a factory,” but the move was apparently a success. In fact, he said he saw no reason not to continue with major graphics architectural improvements like this one, particularly since “graphics move fast.”

Ivy’s new graphics core adds a broad range of new capabilities, in many ways bringing Intel up to feature parity with AMD’s Llano (and forthcoming Trinity) IGPs. The headliner is support for the DirectX 11 graphics API, with all that implies, including hardware tessellation capabilities and a broader selection of texture formats. Additionally, like most DX11 GPUs, Ivy’s IGP supports a range of compute-focused features, making it compatible with both Microsoft’s DirectCompute and the OpenCL 1.1 standard. As we understand it, all of the major compute-focused capabilities are truly present in the hardware, not just emulated in software, including double-precision FP datatypes, denorms, and support for atomic transactions.

The IGP’s execution unit count is up from Sandy Bridge—from 12 EUs to 16—but don’t let that number lead you astray. The EUs have been totally restructured in what amounts to a doubling of almost all resources versus Sandy Bridge, with the exception of memory bandwidth. Another interesting change is the addition of a 256KB L3 cache in the graphics core, a feature Piazza said was originally intended for Sandy Bridge but was “retracted” because it didn’t offer much performance benefit. Piazza claims this cache delivers an “amazing” reduction of bandwidth utilization between the graphics core and the 8MB last-level cache. Those reductions in ring traffic translate directly into power savings, which turns out to be the cache’s primary benefit.

Overall, it sounds like Intel is cleaning up quite a few loose ends with this IGP refresh. Although the firm has miles to go in catching AMD and Nvidia in terms of software support and game compatibility, we do expect changes like the expansion of texture formats to go a long way toward improving compatibility with existing games. Another issue Piazza says they’re cleaning up this time around is the anisotropic filtering algorithm, which in Sandy was highly variable depending on the surface’s angle of inclination. Now, he tells us, the IGP will “draw circles instead of flowers” in the aniso tunnel test. In part thanks to the doubling of texture samplers, the IGP’s media processing capabilities should be substantially faster, too, including QuickSync video encoding.

In a bit of a surprise to us, Intel has upped the number of discrete displays the IGP can support from two to three, of any major output type, including DVI, HDMI, and DisplayPort.

Piazza told us the IGP has been laid out in five physically distinct “slices” composed of different resource types. Most notably on that front, the EU/texture sampler slice can be scaled up and down. The first use of that capability will surely be the lower-end versions of Ivy with Intel HD 2500 graphics, which should have 8 EUs and half the texturing capacity of the HD 4000. However, Piazza explicitly mentioned future “scale-up opportunities” in this context, as well. Hmmm. We’re unsure whether he was thinking of the next “tock” code-named Haswell or something more imminent.

A new-ish platform, too

Although Ivy Bridge fits into the same LGA1155 socket as Sandy, hardware compatibility will depend on the motherboard maker and chipset type. At the very least, motherboards based on Intel’s older 6-series chipsets will require a BIOS update to ensure compatibility with Ivy. Some of Intel’s business-focused chipsets officially won’t support Ivy Bridge at all.

Instead, Intel has introduced a range of new 7-series chipsets to go along with its new CPU. The one of most interest to enthusiasts will surely be the Z77 Express. We’ve already published a nice round-up of Z77 boards right here, for those who are interested. The only major update in the 7-series platform controller hub (PCH) silicon is the addition of support for USB 3.0. There are a few software enhancements, though, including the addition of a suspend-to-SSD feature inherited from Intel’s mobile offerings.

Above are a couple of pictures of the Core i7-3770K alongside the MSI Z77A-GD65 motherboard in our test system. As you can see, Ivy’s packaging will be difficult to distinguish from Sandy’s by looks alone.

Test notes

We’ve completely overhauled the portion of Damage Labs dedicated to desktop CPU testing for this review, and we’ve added a number of new tests and methods along the way, as well. Here’s a look at one of our new CPU test rigs, the one destined for Ivy Bridge:

Yep, we’ve mounted it in one of those slick open-air cases from MSI, which is just about ideal for our purposes. Sadly, this MSI case isn’t a commercial product, but stay tuned: we plan to give one away to a lucky reader shortly.

The rest of the hardware involved was provided by several companies who were kind enough to support our efforts. For this system, we used MSI’s Z77A-GD65 motherboard, as we’ve noted. Although they’re kind of hard to see in the pictures above, Corsair provided the Vengeance DIMMs, which are 4GB each and capable of 1600MHz operation at 1.5V. Corsair also supplied the AX650 power supply, which is very efficient at low loads and is incredibly quiet, particularly because it switches off its cooling fan under low loads.

That handsome graphics card is a Radeon HD 7950 DD Edition from XFX. These cards have granted our test systems a much higher ceiling, so we can test CPU performance in recent games at common resolutions without running into GPU bottlenecks. These cards draw very little power when idle, so they don’t contribute too much when we test system power draw and CPU efficiency. Last but not least, these Radeons are PCI Express 3.0-compatible, so they should be able to talk to Ivy Bridge at full speed.

Also kind of hidden in the first couple of pictures is the Kingston HyperX 120GB SSD. Based on the latest SandForce controller with synchronous NAND, this drive is one of the best SSD configs available. It’s also completely silent, very power efficient, and cuts our boot times between tests dramatically versus a hard disk drive.

We’ve built four of these test systems for the different CPU socket types out there, so we’re able to test multiple processors concurrently. Our Ivy review here has “only” seven different processor types included, but we expect to be able to expand that number over time and to include a range of different CPU vintages and socket types, just as we’ve done in the past. Just bear with us as we accumulate results with our new methods and test rigs. Fuller specifications for the individual test systems are available below.

Our testing methods

We ran every test at least three times and reported the median of the scores produced.

The test systems were configured like so:

Processor        AMD FX-8150,               Core i7-2600K,           Core i7-3960X,           AMD A8-3850
                 Phenom II X6 1100T         Core i7-3770K            Core i7-3820
Motherboard      Asus Crosshair V Formula   MSI Z77A-GD65            Intel DX79SI             Gigabyte A75M-UD2H
North bridge     990FX                      Z77 Express              X79 Express              A75 FCH
South bridge     SB950                      --                       --                       --
Memory size      8 GB (2 DIMMs)             8 GB (2 DIMMs)           16 GB (4 DIMMs)          8 GB (2 DIMMs)
Memory type      AMD Entertainment          Corsair Vengeance        Corsair Vengeance        Corsair Vengeance
                 Edition DDR3 SDRAM         DDR3 SDRAM               DDR3 SDRAM               DDR3 SDRAM
Memory speed     1600 MT/s                  1600 MT/s                1600 MT/s                1600 MT/s
Memory timings   9-9-9-24 1T                9-9-9-24 1T              9-9-9-24 1T              9-9-9-24 1T
Chipset drivers  AMD chipset 12.3           INF update 9.3.0.1020,   INF update 9.2.3.1022,   AMD chipset 12.3
                                            iRST 11.1.0.1006         RSTe 3.0.0.3020
IGP drivers      --                         8.15.10.2696             --                       Catalyst 12.3
Audio            Integrated SB950/ALC889    Integrated Z77/ALC898    Integrated X79/ALC892    Integrated A75/ALC889
                 with Realtek 6.0.1.6602    with Realtek 6.0.1.6602  with Realtek 6.0.1.6602  with Realtek 6.0.1.6602
                 drivers                    drivers                  drivers                  drivers

They all shared the following common elements:

Hard drive         Kingston HyperX SH100S3B 120GB SSD
Discrete graphics  XFX Radeon HD 7950 Double Dissipation 3GB with Catalyst 12.3 drivers
OS                 Windows 7 Ultimate x64 Edition, Service Pack 1
                   (AMD systems only: KB2646060, KB2645594 hotfixes)
Power supply       Corsair AX650

Thanks to Corsair, XFX, Kingston, MSI, Asus, Gigabyte, Intel, and AMD for helping to outfit our test rigs with some of the finest hardware available. Thanks to Intel and AMD for providing the processors, as well, of course.

We used the following versions of our test applications:

Some further notes on our testing methods:

  • The test systems’ Windows desktops were set at 1920×1080 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.
  • We used a Yokogawa WT210 digital power meter to capture power use over a span of time. The meter reads power use at the wall socket, so it incorporates power use from the entire system—the CPU, motherboard, memory, graphics solution, hard drives, and anything else plugged into the power supply unit. (The monitor was plugged into a separate outlet.) We measured how each of our test systems used power across a set time period, during which time we encoded a video with x264.
  • After consulting with our readers, we’ve decided to enable Windows’ “Balanced” power profile for the bulk of our desktop processor tests, which means power-saving features like SpeedStep and Cool’n’Quiet are operating. (In the past, we only enabled these features for power consumption testing.) Our spot checks demonstrated to us that, typically, there’s no performance penalty for enabling these features on today’s CPUs. If there is a real-world penalty to enabling these features, well, we think that’s worthy of inclusion in our measurements, since the vast majority of desktop processors these days will spend their lives with these features enabled. We did disable these power management features to measure cache latencies, but otherwise, it was unnecessary to do so.

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Memory subsystem performance

We typically kick off our CPU test results with a look at the performance of the memory subsystems, and I figure we might as well continue that tradition. These synthetic tests are intended to measure specific properties of the system and may not end up tracking all that closely with real-world application performance. Still, they can be enlightening.

No real surprises here. We’ve clocked the memory for all of these systems at 1600MHz with an aggressive 1T command rate, and the 3770K does as much with its dual memory channels as any of the other two-channel solutions—though not much more than its predecessor does.

This would be a good time to introduce the various contenders, I think. The 2600K is the Sandy Bridge incumbent, and it’s very similar to the Ivy-based Core i7-3770K in most regards, with the slight exception that its base and Turbo clock speeds are 3.4GHz and 3.8GHz, 100MHz slower than the 3770K’s respective speeds. The 2600K was the fastest Sandy Bridge derivative when that chip was first introduced, and it should be a nice foil to the 3770K throughout our tests. Yes, had I used a Core i7-2700K instead, we’d have had a true clock-for-clock comparison. What we have here is more of a price-parity comparison, since these two CPUs are priced only $5 apart.

Another interesting contender from Intel is the Core i7-3820, which we have not properly reviewed up to this point. The 3820 is a quad-core Sandy Bridge-E part; it shares the same 3.8GHz Turbo peak with the 2600K, but its base clock is 3.6GHz, 100MHz higher than the 3770K’s. The 3820 also has quad memory channels and a 10MB last-level cache. We’re curious to see how often its additional platform bandwidth can grant it an advantage over the regular Sandy and Ivy parts. At $294, the 3820 is moderately priced, probably to offset its higher platform costs.

The Core i7-3960X is the 3820’s big brother, a six-core Sandy Bridge-E monstrosity with a 3.3GHz base clock, a 3.9GHz Turbo peak, and a 15MB LLC. At $999, it is Intel’s fastest desktop processor to date—unless Ivy takes that crown in a bit of an upset. Obviously, the four memory channels on these processors give them substantially more bandwidth, as our test results indicate.

Finally, we have the three contenders from AMD. The FX-8150 is AMD’s fastest desktop processor, based on the new “Bulldozer” microarchitecture. Although it’s a large, eight-core chip, the FX-8150 lists at $245, substantially cheaper than the 3770K. The FX also has a much higher TDP, or thermal envelope, of 130W, like the Sandy-E parts. With dual channels of 1600MHz memory, it extracts nearly as much throughput as Ivy. The A8-3850 is more like Sandy and Ivy’s spiritual competitor, a smaller chip with quad cores and integrated graphics. However, the A8-3850 is based on an older CPU core with less aggressive prefetchers and no L3 cache, so it doesn’t do as much with its dual memory channels. Below that is the Phenom II X6, AMD’s prior desktop leader, before Bulldozer arrived. The X6 takes even less advantage of this relatively fast RAM, but you may be surprised by how well it keeps pace with the FX-8150 overall.

This test is multithreaded, so it captures the bandwidth of all caches on all cores concurrently. The different test block sizes step us down from the L1 and L2 caches into L3 and main memory. I think the short answer here is that Ivy’s internal caches are no faster or slower than Sandy’s. Most likely, the 100MHz difference in clock speeds explains the differences between the 3770K and the 2600K here.

This is a new latency testing tool. SiSoft has a nice write-up on it, for those who are interested. We used the “in-page random” access pattern to reduce the impact of prefetchers on our measurements. We’ve also taken to reporting the results in terms of CPU cycles, which is how this tool returns them. The problem with translating these results into nanoseconds, as we’ve done in the past with latency measurements, is that we don’t always know the clock speed of the CPU, which can vary depending on Turbo responses.
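
To illustrate why cycles are the safer unit, here’s the conversion with hypothetical numbers; the nanosecond figure is only as trustworthy as the clock-speed guess baked into it.

```python
cycles = 26          # latency reported by the tool, in CPU cycles (hypothetical)
assumed_ghz = 3.9    # our guess at the Turbo clock during the measurement

latency_ns = cycles / assumed_ghz
print(f"{latency_ns:.1f} ns, assuming {assumed_ghz} GHz")  # ~6.7 ns, if the guess holds
```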

The only real divergence between Sandy and Ivy is at the 8MB data point, when we’re right at the edge of the last-level cache. I’d wager the difference there is due to Ivy’s improved cache prefetchers, which can cross page boundaries. Perhaps they’re not fooled by the in-page randomization in Sandra’s access pattern.

I’ve omitted a lot of the other CPUs for the sake of readability. The Intel chips all deliver very similar results. The FX-8150’s latencies don’t look so bad here, especially when you consider that its peak clock speed is 4.2GHz and that the entire architecture was apparently intended to run at even higher frequencies.

Some quick synthetic math tests

We don’t have a proper SPEC rate test in our suite (yet!), but I wanted to take a quick look at some synthetic computational benchmarks, to see how the different architectures compare, before we move on to more varied and robust application-based workloads. These simple tests in AIDA64 are nicely multithreaded and make use of the latest instructions, including Bulldozer’s XOP in the CPU Hash test and FMA4 in the FPU Julia and Mandel tests. The latter two tests also use AVX on the Intel chips.

Looks to me like those estimates of 4-6% IPC gains from Sandy to Ivy are probably about right, although the 3770K also has a 100MHz advantage on the 2600K. I warn you, the question of IPC gains versus clock speed differences is going to haunt you in the following pages. My apologies in advance, folks.

The FX-8150 is competitive only in the CPU Hash test, where its eight integer cores and XOP instruction give it the advantage. Otherwise, in the two FPU-focused tests, the FX’s four AVX-capable floating-point units are distinctly disappointing. In theory, we’d expect them to match Sandy and Ivy clock for clock, but nothing of the sort happens.

Power consumption and efficiency

Well, why wait, right? Let’s take a look at Ivy’s finest attribute, her increased power efficiency, in our first real-world test. Note that we’ve measured total system power consumption at the wall socket, so our results are taking account of the whole platform picture.

Our workload for this test was encoding a video with x264, based on a command ripped straight from the x264 benchmark you’ll see later. This encoding job is a two-pass process. The first pass is lightly multithreaded and will give us the chance to see how power consumption looks when mechanisms like SpeedStep and core power gating are in use. The second pass is more widely multithreaded.

There’s the story for Ivy Bridge, right there in those two plots, if you read ’em right. The 3770K draws less power than the 2600K, yet it finishes the job about five seconds faster. Let’s see what specifics we can derive from these data.

Ivy’s power draw at idle is very similar to Sandy’s, despite Ivy’s ~40% higher transistor count. Of the other solutions, only AMD’s A8-3850 comes close.

We measured the 2600K’s peak power draw at 17W higher than the 3770K’s. The gap between their TDP ratings? 18W. With Turbo Boost’s dynamic power management, both chips are likely reaching something close to their peaks and remaining there. And Ivy is clearly more efficient.

Here’s a look at energy consumed over our entire test period, where both active and idle time are taken into account. During the entire span, the 3770K’s combination of low peak power, quick execution, and relatively low idle power allows it to take the top spot.
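
Energy here is simply power integrated over time. A minimal sketch, assuming the meter logs one reading per second (the sample values below are made up):

```python
power_watts = [80, 81, 145, 150, 148, 146, 90, 82]  # one reading per second

energy_joules = sum(power_watts)   # one-second intervals, so watts sum to joules
print(f"{energy_joules} J, or {energy_joules / 3600:.3f} Wh")
```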

Now we’re looking at just the energy consumed while the video was being encoded. Here, the 3770K’s lead on the other processors grows. Thanks to the benefits of Intel’s 22-nm process and some modest improvements in per-clock performance, Ivy Bridge is the most energy-efficient chip of this bunch by a considerable margin.

The Elder Scrolls V: Skyrim

Now for something completely different.

Yep, it’s time for some game benchmarking, but not, perhaps, as you know it. We tested performance using Fraps while taking a stroll around the town of Whiterun in Skyrim. The game was set to the graphical quality settings shown above. Note that we’re using fairly high quality visual settings, basically the “ultra” presets at 1920×1080 but with FXAA instead of MSAA. Our test sessions lasted 90 seconds each, and we repeated them five times per CPU.

The thing is, as we tested, we were recording the time required to produce every single frame of the animation in the game. Our reasoning behind this madness is explained in my article, Inside the second: A new look at game benchmarking. Much of what we said in that article was oriented toward GPU testing, but the same methods of game benchmarking can apply to CPUs, as well. This is our first chance to give those methods a try in the context of a CPU review, so we’re excited to see what happens.
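
Fraps can log a timestamp for every frame as it’s rendered; the frame times we analyze are just the deltas between consecutive timestamps. Here’s a minimal sketch of that step, assuming a simple two-column CSV of frame number and elapsed milliseconds (the file layout is illustrative, not necessarily Fraps’ exact format):

```python
import csv

def frame_times_ms(path):
    with open(path, newline="") as f:
        rows = list(csv.reader(f))[1:]                  # skip the header row
    stamps = [float(row[1]) for row in rows]            # elapsed ms at each frame
    return [b - a for a, b in zip(stamps, stamps[1:])]  # deltas = frame times
```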

Frame time (ms)   FPS equivalent
8.3               120
16.7              60
20                50
25                40
33.3              30
50                20

Here’s a crack at explaining the reasons behind our new testing methods. The constant stream of images produced by a game engine as you play creates the illusion of motion. We often talk about gaming performance in terms of FPS, or frames per second, but most of the tools that measure gaming performance actually average out frame production over an entire second in order to give you a result. That’s not terribly helpful. If you encounter a delay of half a second, or 500 ms, for a single frame surrounded by a stream of lightning-quick 16.7 ms frames, that entire second will average out to about 35 FPS. Most folks will look at that FPS number and think the performance was reasonably acceptable, if not stellar. (Because, hey, a stream of frames at a constant 35 FPS wouldn’t be half bad.) They will, of course, be very wrong. Even a shorter interruption of, say, 200 ms or less while playing a game will feel like an eternity, destroying the illusion of motion and any sense of immersion—and possibly getting your character killed.
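
Here’s that arithmetic made concrete with hypothetical frame times: one 500-ms hitch buried among quick 16.7-ms frames still averages out to an acceptable-sounding number.

```python
frame_times_ms = [500.0] + [16.7] * 30   # ~1 second: one huge hitch, 30 quick frames

seconds = sum(frame_times_ms) / 1000.0
avg_fps = len(frame_times_ms) / seconds
print(f"{avg_fps:.0f} FPS")              # ~31 FPS, which sounds perfectly playable
```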

Fortunately, we have the tools to measure and quantify gaming performance in much greater detail, and we can bring those to bear in considering CPU performance. Let’s start by looking at plots of the time required to produce individual frames during one of our test runs. (We’ve used just one run for the visualizations, but the rest of our results take all five runs into account.) Remember, since we’re looking at frame times, lower is better in these plots. Also, if you want to convert frame times into the more familiar FPS, you can simply refer to the table above.

As you can see, the raw data show some clear differences in performance between the CPUs. The faster processors tend to produce more frames, of course. There are spikes in frame times for all of the processors, but the sizes of the spikes tend to be larger in certain cases. Some frames take quite a bit of time to produce, which isn’t good. The AMD chips especially seem to struggle during the opening moments of our test run, where we’re up by the Jarl’s castle, looking out over Whiterun and the mountains beyond.

The traditional FPS average gives us a sense of the performance differences. Obviously, the Core i7-3770K acquits itself well in this test, as do all of the Intel CPUs. The AMD processors are all quite a bit slower. However, even the slowest one averages over 60 FPS. Doesn’t that mean all of the processors are more than adequate for this task?

Not necessarily, as those spikes in frame times tend to show.

Another way of thinking about gaming performance is in terms of real-time frame latencies. That is, after all, what smooth animation relies upon. We’ve borrowed a bit from the transaction latency measurements in the server benchmarking world and suggested that a look at the 99th percentile frame latency might be a good starting point for this approach. This metric simply offers a bit of information, telling you that 99% of all frames were produced in x milliseconds or less. It’s a simple way of thinking about overall frame delivery.
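
Computing that metric is straightforward. A minimal sketch, reusing the hypothetical frame times from the earlier example:

```python
import math

frame_times_ms = [500.0] + [16.7] * 30

ordered = sorted(frame_times_ms)
p99 = ordered[math.ceil(0.99 * len(ordered)) - 1]   # 99% of frames finish in <= p99 ms
print(f"99th percentile: {p99:.0f} ms")             # the 500 ms hitch shows up here
```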

Here, all of the Intel processors again perform very well. They’re cranking out 99% of all frames in the 17-18 ms range, not far from the 16.7-ms frame time that equates to a steady 60 FPS. The 99th percentile frame latencies for the AMD chips are nearly double that.

Then again, this metric only considers one select point where 99% of all frames have been produced. We can look at the entire latency picture for each CPU by plotting the latency curve from the 50th percentile up.

The contest between the Intel processors is incredibly tight. For most intents and purposes, they are all evenly matched.

Things become more interesting when we look at the AMD CPUs. The Phenom II X6 and the FX-8150 are essentially tied in both the average FPS and 99th percentile results. However, a funny thing happens to the FX-8150 while it’s rendering the toughest 5% of the frames, on the right edge of the plot: its frame times shoot up above the Phenom II X6’s. That outcome is likely the result of a unique characteristic of the Bulldozer architecture: its relatively low per-thread performance in many cases. When this real-time system, the Skyrim game engine, runs into a trouble spot, the FX-8150 doesn’t have the per-thread oomph to power through. I’d say the Phenom II X6 is a better Skyrim companion than the FX-8150, as a result. (Although Lydia is still the best.)

We are, of course, splitting hairs a bit here, just because we can. Even frame latencies in the 30-plus millisecond range are relatively decent. One reality check we can give ourselves is to consider the worst-case scenarios, those long-latency frames that are most likely to ruin the sense of smooth motion. We’ve done that in the past, with GPUs, by looking at the amount of time spent rendering frames beyond a threshold of 50 milliseconds. 50 ms equates to 20 FPS, and we figure if you dip below 20 FPS, most folks are going to notice. However, none of these CPUs deliver frames that slowly. Our next obvious step down is 33.3 milliseconds, or 30Hz. If you have vsync enabled while gaming on a 60Hz monitor, frames that take longer than 33.3 milliseconds won’t be shown until two full display refresh cycles have passed.
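
Here’s a minimal sketch of that “time spent beyond X” accounting as we apply it, counting only the portion of each frame past the cutoff (the sample values are hypothetical):

```python
def time_beyond_ms(frame_times_ms, threshold_ms):
    # only the excess past the threshold counts, not the whole frame
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

print(f"{time_beyond_ms([16.7, 40.0, 16.7, 35.0], 33.3):.1f} ms")  # 6.7 + 1.7 = 8.4 ms
```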

None of these CPUs spend much time at all working on frames that take longer than our 33.3 ms threshold. However, we can ratchet things down one more time, to 16.7 milliseconds or a constant 60 FPS, and see what happens then.

If you are looking for glassy smooth animation in Skyrim, any of these Intel CPUs will deliver it. Interestingly enough, the Ivy Bridge chip with its slightly improved per-clock performance has an ever-so-slim lead over even the mighty Core i7-3960X. The AMD processors, meanwhile, spend quite a bit of time working on frames beyond 16.7 ms. They’re not poor performers here, but the Intel processors ensure more consistent low-latency frame delivery.

Batman: Arkham City

Now that we’ve established our evil methods, we can deploy them against Batman. Again, we tested in 90-second sessions, this time while grappling and gliding across the rooftops of Gotham in a bit of Bat-parkour. Again, we’re using pretty decent image quality settings at two megapixels; we’re just avoiding this game’s rather pokey DirectX 11 mode.

These plots are much spikier than what we saw in Skyrim, and they’re consistent with what we’ve seen from this game in the past, in GPU testing. The severity of those spikes looks to be somewhat CPU-dependent, which could prove interesting.

Once again, nearly all of the solutions average over 60 FPS. The Intel chips score higher, but not by as wide a margin as in Skyrim.

The latency picture is pretty remarkable. The Intel chips fare better across the entire curve, including our stopping point at the 99th percentile. Once again, there’s little difference between them. The AMD CPUs simply require more time to render frames.

As we saw in Skyrim, the Core i7-3770K fares best of all the processors when dealing with the worst-case scenarios. Regardless of where we put our thresholds—at the equivalent of 20, 30, or 60 FPS—the Intel processors fare better. In spite of averaging over 60 FPS, the FX-8150 and Phenom II X6 burn quite a few cycles on long-latency frames. Having a faster CPU doesn’t mean that this game’s frequent latency spikes are eliminated, but it means their durations are reduced to much less consequential levels.

Crysis 2

Our test session in Crysis 2 was only 60 seconds long, mostly for the sake of ensuring a precisely repeatable sequence. Also, we got to stealth kill two Cell soldier dudes with a knife to the chest and a neck snap, which was great for taking out some aggression.

Yeah, those plots are hard to read due to the nature of the data. Sorry about that. The first thing you’ll probably notice here is that big spike near the beginning of the test run on every single CPU. We noticed that while playing; it appears to be a bit of a hitch in the game, probably because the next area is being loaded or something. Let’s zoom in on that portion of the sequence and see how it looks.

The spike happens on every single CPU, but notice that it appears to be at least partially CPU dependent. We’re waiting for nearly a third of a second on the A8-3850, while Ivy Bridge gets past the problem in under half that time.

The FPS averages are closer than ever between the Intel and AMD camps, and there’s essentially no difference between the various Intel chips once more.

The latency situation is a bit different in this game. Several of the processors have funny shapes to their curves. However, even the 99th percentile frame times are in the twenties for all CPUs, so things never get to be terribly difficult for any of them.

The 3770K continues to be the champ at ensuring consistently low-latency frame times, although the margin between it and the Core i7-2600K is pretty tiny, in the grand scheme of things.

Battlefield 3

As with Crysis 2, our BF3 test sessions were 60 seconds long to keep them easily repeatable. We tested at BF3‘s high-quality presets, again at 1920×1080.

You know how some people say that CPUs don’t matter for gaming performance, since they’re all fast enough these days? Here’s a case where that’s actually true. Have a look at all of our metrics, and they all agree.

Any of these CPUs will spit out 99% of the frames rendered at a near-constant 60 FPS rate in BF3. The few spikes we do see don’t add up to much of anything, with roughly a tenth of a second spent rendering beyond our 16.7-ms cutoff, generally.

Multitasking: Gaming while transcoding video

A number of readers over the years have suggested that some sort of real-time multitasking test would be a nice benchmark for multi-core CPUs. That goal has proven to be rather elusive, but we think our new game testing methods may allow us to pull it off. What we did is play some Skyrim, with a 60-second tour around Whiterun, using the same settings in our earlier gaming test. In the background, we had Windows Live Movie Maker transcoding a video from MPEG2 to H.264, just like in our stand-alone video encoding test. Here’s a look at the quality of our Skyrim experience while encoding.

Overall, these processors handle the dual workloads quite well. As with x264, encoding in Windows Live Movie Maker appears to be a two-pass deal, with the number of threads rising later in the process. We kicked off a new encoding job before starting each test run, so we never got to the later, more heavily threaded encoding workload during our Skyrim runs. With the exception of the A8-3850, all of these processors support at least six simultaneous threads, so they didn’t seem to be too burdened by what we were asking them to do.

We’re curious to add some lower-end chips to the mix, including the Hyper-Threading-deficient quad-core Core i5 parts that look to be good deals, like the 2500K and its Ivy-based analog, the 3570K. We’re interested to see if the lack of Hyper-Threading hinders multitasking smoothness. Among the processors we’ve tested, we can’t help but notice that the Core i7-3960X, with six cores and 12 threads, fares best. Still, the quad-core 3770K isn’t far off its pace.

Civilization V

We have one more gaming test to include before moving on to bigger and better things. This test is a simple scripted one that spits out an FPS average, because there are only so many hours in the day.

Civ V will run this benchmark in two ways, either while using the graphics card to draw everything on the screen, just as it would during a game, or entirely in software, without bothering with rendering, as a pure CPU performance test. Oddly enough, the 3770K comes out the clear winner in the conventional game test, but the six-core 3960X easily takes the top spot without the pesky graphics card getting in the way.

Productivity

Compiling code in GCC

Another persistent request from our readers has been the addition of some sort of code-compiling benchmark. With the help of our resident developer, Bruno Ferreira, we’ve finally put together just such a test. Qtbench tests the time required to compile the QT SDK using the GCC compiler. Here is Bruno’s note about how he put it together:

QT SDK 2010.05 – Windows, compiled via the included MinGW port of GCC 4.4.0.

Even though apparently at the time the Linux version had properly working and supported multithreaded compilation, the Windows version had to be somewhat hacked to achieve the same functionality, due to some batch file snafus.

After a working multithreaded compile was obtained (with the number of simultaneous jobs configurable), it was time to get the compile time down from 45m+ to a manageable level. This required severe hacking of the makefiles in order to strip the build down to a more streamlined version that preferably would still compile before hell froze over.

Then some more fiddling was required in order for the test to be flexible about the paths where it was located. Which led to yet more Makefile mangling (the poor thing).

The number of jobs dispatched by the Qtbench script is configurable, and the compiler does some multithreading of its own, so we did some calibration testing to determine the optimal number of jobs for each CPU. We found that one job per core worked best on Llano/Phenom II, six on the quad-core Intel chips with Hyper-Threading, and eight on the Core i7-3960X and Bulldozer.
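
For the curious, the calibration amounted to a sweep: time the build at several job counts and keep the fastest. A rough sketch of that process (command and setup hypothetical; each run assumes a freshly cleaned tree):

```python
import subprocess
import time

def best_job_count(candidates=(4, 6, 8)):
    timings = {}
    for jobs in candidates:
        subprocess.run(["mingw32-make", "clean"], check=True)      # start from a clean tree
        start = time.perf_counter()
        subprocess.run(["mingw32-make", f"-j{jobs}"], check=True)  # build with N parallel jobs
        timings[jobs] = time.perf_counter() - start
    return min(timings, key=timings.get)
```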

TrueCrypt disk encryption

TrueCrypt supports acceleration via Intel’s AES-NI instructions, so encryption with the AES algorithm, in particular, should be very fast on the CPUs that support those instructions. We’ve also included results for another algorithm, Twofish, that isn’t accelerated via dedicated instructions.

7-Zip file compression and decompression

SunSpider JavaScript performance

The Ivy-based Core i7-3770K shows us a few flashes of likely IPC improvement in Qtbench, 7-Zip compression, and SunSpider, where it puts some distance between itself and the 2600K. We should also note that many of these productivity tests are widely multithreaded and rely heavily on integer math rather than floating-point. As a result, the FX-8150 tends to be much more competitive than it was in most of our gaming tests.

Image processing

The Panorama Factory photo stitching
The Panorama Factory handles an increasingly popular image processing task: joining together multiple images to create a wide-aspect panorama. This task can require lots of memory and can be computationally intensive, so The Panorama Factory comes in a 64-bit version that’s widely multithreaded. I asked it to join four pictures, each eight megapixels, into a glorious panorama of the interior of Damage Labs.

In the past, we’ve added up the time taken by all of the different elements of the panorama creation wizard and reported that number, along with detailed results for each operation. However, doing so is incredibly data-input-intensive, and the process tends to be dominated by a single, long operation: the stitch. Thus, we’ve simply decided to report the stitch time, which saves us a lot of work and still gets at the heart of the matter.

picCOLOR image processing and analysis

picCOLOR was created by Dr. Reinert H. G. Müller of the FIBUS Institute. This isn’t Photoshop; picCOLOR’s image analysis capabilities can be used for scientific applications like particle flow analysis. Dr. Müller has supplied us with new revisions of his program for some time now, all the while optimizing picCOLOR for new advances in CPU technology, including SSE extensions, multiple cores, and Hyper-Threading. Many of its individual functions are multithreaded.

At our request, Dr. Müller graciously agreed to re-tool his picCOLOR benchmark to incorporate some real-world usage scenarios. As a result, we now have four tests that employ picCOLOR for image analysis: particle image velocimetry, real-time object tracking, a bar-code search, and label recognition and rotation. For the sake of brevity, we’ve included a single overall score for those real-world tests, along with an overall score for picCOLOR’s suite of synthetic tests of different image processing functions.

Video encoding

x264 HD benchmark

This benchmark tests one of the most popular H.264 video encoders, the open-source x264. The results come in two parts, for the two passes the encoder makes through the video file. I’ve chosen to report them separately, since that’s typically how the results are reported in the public database of results for this benchmark.

Windows Live Movie Maker 14 video encoding

For this test, we used Windows Live Movie Maker to transcode a 30-minute TV show, recorded in 720p .wtv format on my Windows 7 Media Center system, into a 320×240 WMV format appropriate for mobile devices.

Remember how I said the question of what gains to attribute to the 3770K’s 100MHz clock speed advantage and what gains to credit to Ivy’s IPC improvements would haunt you? Yep.

3D rendering

LuxMark OpenCL rendering

We’ve deployed LuxMark in several recent reviews to test GPU performance. Since it uses OpenCL, we can also use it to test CPU performance—and even to compare performance across different processor types. Since OpenCL code is by nature parallelized and relies on a real-time compiler, it should adapt well to new instructions. For instance, Intel and AMD offer installable client drivers (ICDs) for OpenCL on x86 processors, and they both claim to support AVX. The AMD APP driver even supports Bulldozer’s distinctive instructions, FMA4 and XOP.

We decided to test with both of the ICDs when possible. LuxMark will let you specify which OpenCL devices to use, so we asked it to use the Radeon HD 7950 GPUs in our test systems, as well, for a bit of dramatic flair—and to see if the different CPUs acting in support had any effect on the GPU’s performance. Finally, we combined two devices, the AMD APP x86 ICD and the Radeon HD 7950, to see if a CPU and GPU could team up to complete the job faster than either one could alone.
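
To give a sense of how those ICDs coexist on one machine, here’s a minimal sketch using pyopencl (assumed installed) to enumerate the platforms and devices a picker like LuxMark’s would present:

```python
import pyopencl as cl

for platform in cl.get_platforms():        # e.g., the Intel and AMD APP ICDs
    for device in platform.get_devices():  # the CPUs and GPUs each ICD exposes
        kind = cl.device_type.to_string(device.type)
        print(f"{platform.name}: {device.name} ({kind})")
```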

Funny thing: the AMD APP ICD runs faster on Intel chips than Intel’s own OpenCL driver. Meanwhile, the Intel driver refuses to run on the non-AVX-infused AMD chips.

The fastest processor here, by far, is the Radeon HD 7950. The Core i7-3770K has to settle for a distant third, but it’s the undisputed champ of its weight class. Happily, the poor FX-8150 doesn’t look to be as completely outclassed as it was in our earlier synthetic AVX tests, although none of the AMD CPUs like being asked to team up with a Radeon. Shades of corporate politics, perhaps? Meanwhile, the Intel CPUs can contribute to a higher overall score while also supporting the Radeon HD 7950.

Cinebench rendering

The Cinebench benchmark is based on Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. This test runs first with a single thread and then with as many threads as there are hardware threads available (one per core, or more on CPUs with multiple hardware threads per core).

POV-Ray rendering

The 3770K fares well enough, if predictably, in the rest of our rendering tests, although they’re not quite as exciting as the new OpenCL hotness, in my view.

Scientific computing

MyriMatch proteomics

Our benchmarks sometimes come from unexpected places, and such is the case with this one. David Tabb is a friend of mine from high school and a long-time TR reader. He has provided us with an intriguing new benchmark based on an application he’s developed for use in his research work. The application is called MyriMatch, and it’s intended for use in proteomics, or the large-scale study of proteins. I’ll stop right here and let him explain what MyriMatch does:

In shotgun proteomics, researchers digest complex mixtures of proteins into peptides, separate them by liquid chromatography, and analyze them by tandem mass spectrometers. This creates data sets containing tens of thousands of spectra that can be identified to peptide sequences drawn from the known genomes for most lab organisms. The first software for this purpose was Sequest, created by John Yates and Jimmy Eng at the University of Washington. Recently, David Tabb and Matthew Chambers at Vanderbilt University developed MyriMatch, an algorithm that can exploit multiple cores and multiple computers for this matching. Source code and binaries of MyriMatch are publicly available.
In this test, 5555 tandem mass spectra from a Thermo LTQ mass spectrometer are identified to peptides generated from the 6714 proteins of S. cerevisiae (baker’s yeast). The data set was provided by Andy Link at Vanderbilt University. The FASTA protein sequence database was provided by the Saccharomyces Genome Database.

MyriMatch uses threading to accelerate the handling of protein sequences. The database (read into memory) is separated into a number of jobs, typically the number of threads multiplied by 10. If four threads are used in the above database, for example, each job consists of 168 protein sequences (1/40th of the database). When a thread finishes handling all proteins in the current job, it accepts another job from the queue. This technique is intended to minimize synchronization overhead between threads and minimize CPU idle time.
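
The job-splitting arithmetic from that passage checks out; a quick sketch:

```python
import math

threads = 4
jobs = threads * 10                    # typical job count: threads x 10
proteins = 6714                        # the S. cerevisiae database
job_size = math.ceil(proteins / jobs)  # 168 sequences, about 1/40th of the database
print(jobs, job_size)
```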

The most important news for us is that MyriMatch is a widely multithreaded real-world application that we can use with a relevant data set. I should mention that performance scaling in MyriMatch tends to be limited by several factors, including memory bandwidth, as David explains:

Inefficiencies in scaling occur from a variety of sources. First, each thread is comparing to a common collection of tandem mass spectra in memory. Although most peptides will be compared to different spectra within the collection, sometimes multiple threads attempt to compare to the same spectra simultaneously, necessitating a mutex mechanism for each spectrum. Second, the number of spectra in memory far exceeds the capacity of processor caches, and so the memory controller gets a fair workout during execution.

This time around, we’re using a brand-new MyriMatch binary with a larger data set, so our results won’t be comparable to the ones you’ve seen here in the past.

STARS Euler3d computational fluid dynamics

Charles O’Neill works in the Computational Aeroservoelasticity Laboratory at Oklahoma State University, and he contacted us to suggest we try the computational fluid dynamics (CFD) benchmark based on the STARS Euler3D structural analysis routines developed at CASELab. This benchmark has been available to the public for some time in single-threaded form, but Charles was kind enough to put together a multithreaded version of the benchmark for us with a larger data set. He has also put a web page online with a downloadable version of the multithreaded benchmark, a description, and some results here.

In this test, the application is basically doing analysis of airflow over an aircraft wing. I will step out of the way and let Charles explain the rest:

The benchmark testcase is the AGARD 445.6 aeroelastic test wing. The wing uses a NACA 65A004 airfoil section and has a panel aspect ratio of 1.65, taper ratio of 0.66, and a quarter-chord sweep angle of 45º. This AGARD wing was tested at the NASA Langley Research Center in the 16-foot Transonic Dynamics Tunnel and is a standard aeroelastic test case used for validation of unsteady, compressible CFD codes.
The CFD grid contains 1.23 million tetrahedral elements and 223 thousand nodes . . . . The benchmark executable advances the Mach 0.50 AGARD flow solution. A benchmark score is reported as a CFD cycle frequency in Hertz.

So the higher the score, the faster the computer. Charles tells me these CFD solvers are very floating-point intensive, but they’re oftentimes limited primarily by memory bandwidth. He has modified the benchmark for us in order to enable control over the number of threads used. Here’s how our contenders handled the test with optimal thread counts for each processor.

Well, lookie there. I’ll bet you were getting tired of seeing the exact same finishing order for the top four processors. Right before we conclude our tests, the Core i7-3820 finally manages to overtake the 3770K for once, likely thanks to the bandwidth provided by its two extra memory channels. The 3820 doesn’t often appear to benefit from the additional bandwidth, but it does in the final test in our CPU performance suite.

Overclocking

The overclocking-related knobs and dials of recent Intel processors. Source: Intel.

Ivy Bridge offers a few additional tweaking opportunities over Sandy, including slightly higher peak multipliers, memory speeds up to 2667MHz, and the ability to control memory clocks in 200MHz increments. Ivy doesn’t offer the additional flexibility of allowing for multiple base clock speeds, like Sandy Bridge-E does, however.

Fortunately, since our Ivy-based subject is a K-series model with an unlocked multiplier, we didn’t have to worry about fiddling with the base clock. If you care at all about overclocking, we think it’s worth paying a few extra bucks for a K-series part.
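
With the base clock effectively fixed, overclocking a K-series chip is mostly multiplier arithmetic, which is why the unlocked multiplier is the feature worth paying for. A quick sketch using the speeds from our results below:

```python
base_mhz = 100                   # LGA1155 base clock, effectively fixed
for multiplier in (35, 44, 49):  # stock, stock-voltage OC, and 1.35 V OC
    print(f"{multiplier}x -> {base_mhz * multiplier / 1000:.1f} GHz")
```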

I’ve been knee-deep in other work, so Geoff stepped in to handle our overclocking experiments with Ivy. He’s written up his experiences right here, if you’re interested. The bottom line was that our 3770K sample overclocked quite nicely, to 4.4GHz at its stock voltage and to 4.9GHz at 1.35V. However, something funny happened on the way to 5GHz: even with a massive Thermaltake Frio cooler rated for 220W of heat dissipation, our 3770K reached the boiling point of water and began thermal throttling. In other words, our cooler ran out of thermal headroom before our Ivy Bridge chip ran out of clock speed headroom. Geoff checked power consumption, and it turns out the 3770K was indeed drawing enough power to tax that beefy cooler.

So Ivy Bridge appears to be a pretty willing overclocker, but if you’re planning on raising the voltage much above stock, you’d better bring along a good cooler. Here’s a look at how our 3770K performed at 4.9GHz, the highest speed we could maintain without invoking thermal throttling. Note that these scores came from a different test system than our usual one, with an Asus motherboard, although with the same memory speed and timings.

Performance does scale up nicely at 4.9GHz, I’d say.

IGP performance: Skyrim

Before we conclude, let’s take a quick look at how Ivy’s integrated graphics compare to Sandy’s and Llano’s in a couple of recent games. Up first is Skyrim, again in a 60-second loop around Whiterun, at the settings shown below.

Ivy’s HD 4000 graphics have closed the FPS gap with the A8’s integrated Radeon substantially, but the A8 still leads in the FPS sweeps. Look what happens when we consider the frame latency picture, though.

The A8 produces more frames and thus achieves a higher FPS average, but it also has quite a few more spikes caused by longer-latency frames. As a result, the A8’s advantage over the 3770K evaporates as we approach the 99th percentile, and the last 1% of frames are higher latency on the AMD APU.

Those long-latency frames contribute to the A8 spending more time rendering beyond our 33.3-ms threshold. It’s close, but we’d say the 3770K provides a smoother gaming experience, both by the numbers and by the seat of our pants.

As you know, Skyrim is somewhat sensitive to CPU performance, so it’s possible—perhaps even likely—that the A8-3850’s relatively pokey CPU cores could be contributing to those long frame times. The A8 did fare a little better in our earlier test with a discrete graphics card, but remember that Llano will throttle back its CPU cores in order to clear out enough thermal headroom for its IGP. Dynamic power management in Llano is a one-way street.

IGP performance: Battlefield 3

As you may recall, Battlefield 3 tends not to be CPU limited with any of these processors. In that sort of game, the A8-3850 manages to outperform the 3770K any way you measure it.

Still, Intel’s new IGP has closed the gap with AMD’s Llano substantially. We’d say AMD should be concerned, if we weren’t expecting a similar leap in graphics performance from AMD’s own upcoming Trinity processor, which should be arriving very soon.

AMD’s bigger concern, perhaps, might be what happened in Skyrim. If the CPU portion of the processor becomes a limiting factor, then Intel doesn’t have to match the performance of AMD’s integrated Radeons in order to provide a better overall gaming chip.

IGP performance: Luxmark

One more crazy experiment before we tie things up. Intel’s new IGP supports OpenCL 1.1, so how does it compare to Llano’s IGP on that front?

AMD’s old IGP is faster in LuxMark than Intel’s newer one, but, well, they’re both pretty slow—vastly slower than their own CPU cores in this nicely parallel workload, in fact. There is a little bit of performance to be gained by throwing the CPU cores and IGP at the same workload, though. This outcome raises some interesting philosophical questions about the relative worth of the CPU and IGP components of these integrated processors, but we’ll save that discussion for a later date.

Conclusions

I told you up front that the story on Ivy Bridge was relatively straightforward. Now that we've conducted enough analysis to bring down a healthy adult bison in its prime, let's boil things down to a simple scatter plot showing price versus overall performance. Many inputs go into creating our overall performance scores, which are derived from the components of our entire CPU test suite, save for those initial synthetic benchmarks. For gaming, we've used the 99th percentile frame times as our performance inputs. Our prices come from the manufacturers' official price lists.
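For the curious, here's a sketch of one reasonable way to assemble a composite score of this sort: normalize each result against a baseline chip, invert frame times so that higher is always better, and take a geometric mean. The numbers below are made up for illustration, and this isn't necessarily our exact formula.

    from math import prod

    def overall_score(results, baseline):
        # Normalize each test to the baseline chip; latency-style results
        # (99th-percentile frame times, in ms) are better when lower, so
        # invert those ratios before averaging
        ratios = []
        for test, value in results.items():
            if test.endswith("_ms"):
                ratios.append(baseline[test] / value)
            else:
                ratios.append(value / baseline[test])
        return prod(ratios) ** (1 / len(ratios))  # geometric mean

    # Made-up illustrative numbers, with the Core i7-2600K as the baseline
    i7_3770k = {"cinebench": 7.5, "skyrim_99th_ms": 16.0}
    i7_2600k = {"cinebench": 6.9, "skyrim_99th_ms": 17.5}
    print(overall_score(i7_3770k, baseline=i7_2600k))  # above 1.0 means faster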

As the plot shows, the Core i7-3770K is just a little bit faster than the 2600K at essentially the same price. That’s progress, but mild progress, on the value front. The real gains with Ivy Bridge come in terms of power use, with the 18W reduction in peak power draw and the accompanying improvements in power efficiency. Don’t say I didn’t tell you!

You can add some other positives to Ivy Bridge’s corner that we discovered in more detailed testing. For one, I think we’ve demonstrated that there are measurable and probably tangible benefits to having a fast processor in several of today’s games, even when FPS averages rise above the vaunted 60-FPS mark. Frame delivery is simply quicker and more consistent with a fast processor, and the Core i7-3770K is among the world’s best on that front. Based on our experience, Ivy also overclocks like Flava Flav, as long as you can keep it cool. We’re curious to see how others fare on this front, since our experiences with one Core i7-3770K chip may not translate into universal success. Finally, Intel has gained substantial ground on AMD in terms of integrated graphics. The 3770K’s HD 4000 IGP still isn’t as fast as Llano’s built-in Radeon, but it’s within striking distance—and Ivy’s substantially quicker CPU cores may give it a playability advantage in some games.

If you're considering buying a desktop quad-core processor, though, you probably don't care too much about integrated graphics. For PC enthusiasts, I'd say the decision on Ivy Bridge is pretty easy. If you already own a Sandy Bridge-based CPU, there's probably no point in upgrading. If your CPU is older than that, then Ivy Bridge represents a substantial performance upgrade from prior generations, just as Sandy Bridge did 16 months ago. The big difference now is that the TDP has dropped to 77W. That's enough to earn the Core i7-3770K a TR Recommended award.

The improvements in Ivy Bridge look fairly nice in a desktop package like the Core i7-3770K, but they’re almost certain to make a bigger impact in the mobile market, where the power efficiency gains conferred by Intel’s 22-nm process should pay off more dramatically. I’m already tempted by the prospect of an Ivy-based Ultrabook, once the dual-core variant of Ivy arrives later this year.

Follow me on Twitter if you dare.

Comments closed
    • Kaleid
    • 7 years ago

    Even with 65nm we had four cores. Now it’s 22nm and still not 8 cores?

    CPUs are damn boring.

      • flip-mode
      • 7 years ago

      It’s called the Xeon E5, bro. Not boring at all, just expensive.

      • Krogoth
      • 7 years ago

      Not exactly an apples-to-apples comparison.

      Modern CPUs have the memory controller, PCIe controller and integrated GPU sharing the same die.

      • OneArmedScissor
      • 7 years ago

      CPU [b<]cores[/b<] are boring. We haven't seen a truly new x86 core design in quite a few years (I don't know what to call Bulldozer's), but CPUs as a whole have still changed in all sorts of ways that make new devices possible. Sorry, you're not going to get a new desktop that does what your current desktop already does, but twice as fast. The hardware has existed for a long time, but the supporting software isn't there because the usage scenario isn't there.

        • Damage
        • 7 years ago

        Sandy Bridge, Bulldozer, Brazos, Nano, and Atom were all essentially clean-sheet new x86 CPU cores introduced in recent years. Has been exciting to see!

          • OneArmedScissor
          • 7 years ago

          I mean conceptually, as in the departure from the Pentium 4 / Pentium D to the Pentium M / Core 2. The two do not much resemble each other, and the new core itself was responsible for such an improvement in efficiency that it allowed laptops to replace desktops for many people.

          We haven’t seen a new core bring about such a change since. The status quo has been maintained since the Core 2 and largely even the Athlon 64.

          Bulldozer comes close as far as the departure in concept, but as for new uses…uh…status quo. :p

          Today, we are seeing big changes in component integration, drastically improved GPU power, [b<]and[/b<] ways to put it to work. From here on out, this is what facilitates new software and ways to use computers. So that's not to discount the whacky new things that have turned up, but just to point out that we shouldn't pay much mind to ever-increasing CPU performance.

            • rrr
            • 7 years ago

            Actually, Core 2 derives from P6 (aka Pentium 3). I mean, it's pretty distant and obviously heavily improved relative to it, though. Nehalem also was essentially Core 2 + NB + architectural improvements to the cores. SB is more of the same, just more refined. I wonder what's coming with Haswell.

            • Airmantharp
            • 7 years ago

            Beat me to it!

            The P6 (Pentium Pro, II, III, M, Core 2, etc…), Netburst (Pentium IV’s), K7 (all Athlons, Phenoms), and Bulldozer (don’t care to look up) are the most recent cores.

            I’d say Atom represents the biggest departure in x86, with its in-order design, but it’s really just a retarded step backwards, as even ARM CPUs are migrating to out-of-order execution!

    • FranzVonPapen
    • 7 years ago

    So, in other words, no compelling reason to jump on Ivy Bridge from a Core i7 920/X58.

    Fingers crossed for Haswell, when it comes out. No rush! The need to upgrade your CPU every other year is long gone after a steady decline in the 2000s.

      • BIF
      • 7 years ago

      “The need to upgrade your CPU every other year is long gone after a steady decline in the 2000s.”

      I find this to be very true. My current system has been running fine for 5 years now. A bit long in the tooth, but still working well with no workarounds or rebuilds needed thus far. So maybe I am being spoiled by that track record.

      • rrr
      • 7 years ago

      Yep, my C2D E8400 is still working fine and performing adequately for my needs.

        • flip-mode
        • 7 years ago

        Penryn is a great processor.

      • El_MUERkO
      • 7 years ago

      my 9550 is getting old but still no reason to upgrade

      zzz…

    • blitzy
    • 7 years ago

    So is this just a paper launch? I'm looking around and I haven't seen any Ivy Bridge CPUs for sale yet. Only a few weeks until Diablo 3, and I want to upgrade my old 1.8GHz C2duo

      • blitzy
      • 7 years ago

      oh, the launch date is 04/29/2012 – [url<]http://ark.intel.com/products/65523/Intel-Core-i7-3770K-Processor-(8M-Cache-up-to-3_90-GHz)[/url<]

      • Bensam123
      • 7 years ago

      I don’t think you’ll need to upgrade from that for Diablo 3… >>

        • blitzy
        • 7 years ago

        Yeah, d3 won't require huge specs; I just want it to run nicely, and I've been putting up with this machine since around '06. Using SC2 as a comparison, I can't run that at high settings, and I would prefer to run on a bit better than lowest-mid settings. 1.8GHz just doesn't cut the mustard, especially when it's a Core 2 vs an i5 at the same clock speed. I'm just long overdue to upgrade. Now's as good a time as any

    • ULYXX
    • 7 years ago

    Very good review and I can’t wait to start seeing Ivy Bridge in mainstream laptops.

    ::::Sigh::::

    Still a lot of Sandy inventory to reduce.

    • odizzido
    • 7 years ago

    [quote<]If your CPU is older than that, then Ivy Bridge represents a substantial performance upgrade from prior generations, just as Sandy Bridge did 16 months ago.[/quote<] Maybe it's because I didn't purchase one, but it really feels like SB is newer than that. edit--------- Or maybe because I am an old man now.....damn those younguns.

    • shank15217
    • 7 years ago

    Tells you a lot about both companies..

    “Funny thing: the AMD APP ICD runs faster on Intel chips than Intel’s own OpenCL driver. Meanwhile, the Intel driver refuses to run on the non-AVX-infused AMD chips.”

    • S_D
    • 7 years ago

    Am I the first to point out that the core and thread count for SNB-E chips on the first page should be 6 and 12 respectively, not 8 and 16 as listed?

      • Damage
      • 7 years ago

      The counts are correct for the chip. The desktop CPU models based on it have cores and cache disabled. Xeons (SNB-EP is the same chip) are available with all cores enabled. Have a nice dually workstation here for testing, in fact…

    • smilingcrow
    • 7 years ago

    Intel clearly stated that the big gains in power efficiency for the 22nm process were going to be with low voltage parts, so the power efficiency gains with IB desktop parts are as expected.
    Not sure if by low voltage they meant Tablet/Smartphone only or whether it also includes the forthcoming low voltage laptop CPUs?

    Seems a solid release to me unless you are an aggressive over-clocker; although it over-clocks nicely at stock voltage. Some people are reacting as if it’s another Bulldozer which is bizarre; gotta laugh at the fanboys.

    • anotherengineer
    • 7 years ago

    I think the thing that amazes me way more than IB is the FX vs the old Thuban X6, especially if you consider it on a clock-for-clock basis.

    I mean, it's not like it's slower than a 386 chip; it's a decent chip. However, after all that time and money, getting beaten by a chip that's 2 yrs older, with a basic architecture that is even older than that, at a lower clock speed, is disappointing indeed.

    I think AMD should have shrunk Thuban/Deneb to 32nm, increased the speed of the L1,2,3, caches, bumped up the L2 size to 1MB per core and refined the IMC to handle 1866 ram. And used that time to either refine BD or something.

    In a few (very few) benchmarks you can see the potential of bulldozer, however it doesn’t follow through in others.

      • OneArmedScissor
      • 7 years ago

      [quote<]I think AMD should have shrunk Thuban/Deneb to 32nm, increased the speed of the L1,2,3, caches, bumped up the L2 size to 1MB per core and refined the IMC to handle 1866 ram.[/quote<] That's exactly what they did with Llano, sans the L3 part, and plus an integrated PCIe controller, but all sorts of things went wrong there, from manufacturing to performance issues. Unfortunately, it just wasn't meant to be.

        • UberGerbil
        • 7 years ago

        Yes, unfortunately you can’t “just shrink” things, and the need to redesign everything gets worse the smaller you go. Wire distances change, relative areas of functional blocks change… Some things that worked well stop doing so; some things that seemed unfeasible become possible.

        Not to mention that would assume AMD was [i<]planning[/i<] to have extensive problems getting a performant Bulldozer out the door. You do construct contingency plans and build in conservative assumptions, of course, but unless you have an entire design team sitting around doing nothing you generally don't start a full second project that will only pay off if the first fails completely.

          • anotherengineer
          • 7 years ago

          Indeed. Hindsight is 20/20 as they say.

          It’s still too bad they discontinued them though, I mean a 960T could be had for around $110 bucks which is a good deal IMO, same with the 955BE before it was sold out.

          I wish I had a spare 110 bucks, I would grab one for the htpc, even though the 255 X2 does the job.

          However a 1090T at $180 would probably hurt those FX sales.

          Ah well, my money is still being saved for an SSD.

    • BIF
    • 7 years ago

    Okay, so am I the only one here who is not thrilled to have my desktop die carrying millions of unused GPU assets?

    I know I know, all CPUs carry all circuitry anyhow, and it’s just disabled in some models. Or certified at a lower clock speed, etceteratta.

    I want to upgrade my 2007 Q6600 system and I want it to last for the next 5+ years.

    I want capacity for 16-24 GB of RAM, 8 or more physical cores, USB3, and Thunderbolt built in. I want it quiet and cool at idle. I don’t need graphics on the CPU; I’m happy to buy a new card.

    Just not thrilled at all the hype for a feature that I don’t need…

      • shaurz
      • 7 years ago

      I think you’ll have to wait for Haswell for 8 cores… that’s what I’ll be doing. Going from a quad core to a 50% faster quad core doesn’t seem worth the upgrade.

        • BIF
        • 7 years ago

        This is good advice. I agree about the 50% part.

        Maybe I can hold off for Haswell.

        That is, assuming that no "other" requirement creep sneaks up on me in the meantime. I'm thinking of small or large disasters such as a power surge or exploding motherboard capacitors, or maybe a me-caused event like a 2012 hardware purchase spurring an immediate need for another PCI-E slot or Thunderbolt, etc.

      • End User
      • 7 years ago

      [quote<]Just not thrilled at all the hype for a feature that I don't need...[/quote<] You are still running with a Q6600 and you have the gall to b*tch about Ivy Bridge?!?

        • BIF
        • 7 years ago

        I wasn’t bitching about the product. Okay, maybe just a little bit. But it’s the hype that annoys.

        And the Q6600 is still doing fine. I don’t “need” an upgrade just yet, but I prefer to time them on my own terms.

        BTW, it was not me who downthumbed you. 😀

      • flip-mode
      • 7 years ago

      It's not hurting you either, though. It might even be helping in terms of heat dissipation. I have an i5 2500K and never give a thought to the GPU lying dormant in there. I don't understand why it's something to get annoyed about.

        • BIF
        • 7 years ago

        Heat dissipation had not occurred to me. Thank you for reminding me about that. I feel less annoyed now, yay!

      • Bensam123
      • 7 years ago

      [quote<]...and Thunderbolt built in.[/quote<] [quote<]Just not thrilled at all the hype for a feature that I don't need...[/quote<] Oh, the irony... On a more serious note, you can use integrated graphics and Lucid Virtu for a pretty healthy chunk of power savings.

        • BIF
        • 7 years ago

        Not sure what you meant about “Oh the irony”, but it occurred to me that I should mention that there is already a device I want that will use Thunderbolt starting later this summer.

        [url<]http://www.sweetwater.com/store/detail/ApolloQuad/[/url<] So, yes I have an actual use for Thunderbolt...moreso than integrated graphics. And I want to continue to use Eyefinity, so it's all-ATI for me going forward...

          • Airmantharp
          • 7 years ago

          The only question I have about such a device is why it would need more capability than USB3 provides.

          Oh wait, it’s because Apple machines don’t have USB3.

          I’m not dogging you on your position or use of the equipment, but I don’t really see the tremendous need for Thunderbolt over USB3.

            • BIF
            • 7 years ago

            Yep, I gotcha.

            But just to clarify for others, this particular device uses either Firewire 800 (out of the box) “or” Thunderbolt (with an addon card (in the device, not the host computer)).

            USB (either 2.0 or 3.0) is not available for this device, and there is no single comparable device that does what this one does (give you the ability to use UAD DSP plugins "on input", which means "with very low-to-no latency").

            So the upshot is that my choices are either Firewire or Thunderbolt.

            I'm not fond of Firewire because new system builds always seem to require some sort of messy workaround or driver regression action. Until that is debugged and resolved, you'll see blue screens so much that you'll think they were made into a screen-saver!

            Besides, I believe the trend going forward will be fewer and fewer Firewire motherboards and more and more Thunderbolt motherboards. As mentioned above, I would prefer to go to USB 2 or USB 3 (either one is fine for audio and MIDI). But if they are not available, then my next choice would be “anything but Firewire”. If that means Thunderbolt, then I guess I look for a motherboard with Thunderbolt built-in.

            Yes, I acknowledge that Thunderbolt may be dirtier than Firewire on a Windows system, and may have a slew of its own problems.

          • Bensam123
          • 7 years ago

          Wooo…

        • Airmantharp
        • 7 years ago

        I’m with you there.

        Thunderbolt sounds great for product integration, but I’m not terribly sure where it provides a functional capability that didn’t already exist.

      • indeego
      • 7 years ago

      What am I missing here? Get a Xeon X7560 and spring for the 20 seconds to install a third-party Thunderbolt card and GPU. It will take you far longer to evaluate systems than just getting the best of the best. Your RAM requirements are nothing. Your core requirement isn't qualified. Your requirement for lasting more than 3 years is strange for a workstation (anything mid-level in three years is going to run circles around whatever you get today).

      • FranzVonPapen
      • 7 years ago

      BIF: the thought of completely unnecessary/unwanted graphics capability taking up die area bothers me, too.

      It’s a waste of materials and energy. Don’t try to tell me the graphics area doesn’t draw *any* power even when idle.

    • Bensam123
    • 7 years ago

    Wish you guys would’ve tested a 3570k as well. Something tells me if they’re based on the same chip, with that much overclocking headroom, there is no reason to buy the i7.

    • Arclight
    • 7 years ago

    Wait an effing second. Where are the temps? How much pressure did you get from Intel not to show the results?

    IB sucks. We waited for so long and for what? Higher temps and zero performance gains?

    Curse you Intel

      • ronch
      • 7 years ago

      What Intel did with Ivy is nothing compared to what AMD did with Bulldozer.

      • flip-mode
      • 7 years ago

      Who gave you the idea that IB was going to be anything different? The tick-tock concept has been known for a few years now. Tick = process upgrade. Tock = architecture upgrade. This is a Tick. Beyond that, IB has a few minor tweaks to the architecture that are probably worth a 5% average performance increase per clock. I suggest you google and read up on Intel's tick-tock so that next time you aren't the sucker.

      As for the higher temps – think about it. You have a much smaller area of silicon that has to dissipate only a slightly smaller amount of energy – so the temperature is very obviously going to be higher. Temperature is a measure of heat energy per area (or per volume). So if you take a certain amount of energy in a certain area and then you shrink the area the temperature will increase.

        • Arclight
        • 7 years ago

        [quote<]Who gave you the idea that IB was going to be anything different? The tick-tock concept has been known for a few years now. Tick = process upgrade. Tock = architecture upgrade. This is a Tick. Beyond that, IB has a few minor tweaks to the architecture that are probably worth a 5% average performance increase per clock. I suggest you google and read up on Intel's tick-tock so that next time you aren't the sucker.[/quote<]

        Oh please, like you weren't expecting to see if 3D transistors deliver what they promised......

        [quote<]As for the higher temps - think about it. You have a much smaller area of silicon that has to dissipate only a slightly smaller amount of energy - so the temperature is very obviously going to be higher. Temperature is a measure of heat energy per area (or per volume). So if you take a certain amount of energy in a certain area and then you shrink the area the temperature will increase.[/quote<]

        You talk as if you didn't expect the smaller fab. process to bring lower power consumption as well as lower temps and allow higher clocks.....Was i srsly the only one expecting that? Not to mention it would be done with 3D transistors.

          • flip-mode
          • 7 years ago

          [quote<]Oh please, like you weren't expecting to see if 3D transistors deliver what they promised......[/quote<]

          The tri-gate process delivered exactly what it promised - lower power consumption.

          [quote<]You talk as if you didn't expect the smaller fab. process to bring lower power consumption as well as lower temps and allow higher clocks.....Was i srsly the only one expecting that? Not to mention it would be done with 3D transistors.[/quote<]

          I must admit that I didn't build up too many expectations for IB - and the fact that [url=http://www.anandtech.com/show/5626/ivy-bridge-preview-core-i7-3770k<]Anandtech had essentially a full review of the chip up on March 6th[/url<] also takes most all of the surprise out of anything revealed today. The only surprise is your surprise. I think there's a lesson to be learned here about expectations, even though that lesson has been available to learn since the introduction of tick-tock.

          And your refusal to acknowledge the physics of the situation is just perplexing. Let me try another method: let's look at watts per mm^2:

          (IB 160mm^2) / (SB 216mm^2) = 0.74
          (IB 100W) / (SB 117W) = 0.85 (this is actually very incorrect as it is total system power and thus will obscure some of the gains made by IB. IB is probably a lot better than just 85% of the power draw of SB)
          (IB 1400t) / (SB 995t) = 1.4X more transistors - or 1.92X more transistors per mm^2

          So IB has 75% of the area and uses 85% of the power while flipping 40% more transistors: tri-gate delivers. Regarding temperatures, IB would have to dissipate 1.15 times more heat per mm^2 to stay at the same temperature as SB. You tell me how it is possible to dissipate MORE heat per mm^2 if you're using the SAME system (Thermaltake Frio) to dissipate that heat. The physics of the Thermaltake Frio does not change to miracle matter as the die size gets smaller.

          The general idea is that with less area to dissipate heat, IB is going to be inherently more difficult to cool due to the pure physics of heat dissipation. There's nothing wrong with this picture. The only folly to be found is in overblown expectations.
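          To put all of that arithmetic in one place, here's the same math as a quick Python sketch, using the same figures as above (die areas in mm^2, total system power in watts, transistor counts in millions):

              # Figures quoted above; system power, so chip-only gains are understated
              ib = {"area_mm2": 160, "power_w": 100, "transistors_m": 1400}
              sb = {"area_mm2": 216, "power_w": 117, "transistors_m": 995}

              print(ib["area_mm2"] / sb["area_mm2"])            # ~0.74x the area
              print(ib["power_w"] / sb["power_w"])              # ~0.85x the power
              print(ib["transistors_m"] / sb["transistors_m"])  # ~1.4x the transistors
              # Heat per unit area: IB must shed ~15% more watts per mm^2
              print((ib["power_w"] / ib["area_mm2"]) / (sb["power_w"] / sb["area_mm2"]))  # ~1.15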

            • smilingcrow
            • 7 years ago

            It’s usually the case that the chip on the newer process has a smaller area and consumes the same or less power so I’m not sure this is the issue.
            It would be good to see the watts per mm squared for a range of CPUs to see if that points to IB breaking the trend.

            • flip-mode
            • 7 years ago

            Indeed, seeing a watts/mm and also transistors/watt would be some nifty info. Architectural differences and tweaks do play a role in power consumption – it’s more than just process tech.

            • NeelyCam
            • 7 years ago

            Watts/mm is certainly useful. Transistors/watt, I think, is less useful – transistors are used for different purposes at different power levels, at different levels of activity etc.

            • Arclight
            • 7 years ago

            You didn't build up many expectations exactly because Anandtech posted the results a while back and you chose to believe them. I didn't, since i considered the temps to be too high, thinking that they probably had an engineering sample, which usually gets more juice and runs hotter.

            Alas i was wrong, they had the final stepping. But don’t tell me you knew about the fact that the CPU would run at 100 degrees Celsius OCed at sub 5Ghz on air cooling, before that review. I call BS.

            • flip-mode
            • 7 years ago

            [quote<]Alas i was wrong, they had the final stepping. But you don't dare tell me you knew about the fact that the CPU would run at 100 degrees Celsius OCed at sub 5Ghz on air cooling, before that review. I call BS.[/quote<] Wow, getting catty. Um, ok, I didn't dare say any such thing and I promise not to dare to. But maybe there is a better cooler than the Frio that could do just such a thing.

            • Arclight
            • 7 years ago

            Evasive, hanging on to an expression. There, i edited out "dare". Now can you reply? Did you or didn't you know this CPU would boil water at 1.3V before the Anand "leak" or any other leak regarding this matter?

            • Yeats
            • 7 years ago

            Maybe it’s time for you to take off the tiara, pop open a Schlitz, and chill.

            • Arclight
            • 7 years ago

            Or maybe we should start saying things that many don’t dare to say out of fear of getting thumbed down. I say what i say because i fight the sheep mentality. It would have been easy for me to say “Wow, awesome review” and get the approval of the community.

            Do you think i don't value TR and the staff? Do you think i don't realise they have more experience/knowledge about the stuff i'm talking about? I do, and that's why i care about the content.

            • flip-mode
            • 7 years ago

            Fighting the sheep mentality is great. Fight the ass mentality, too, while you’re at it.

            • Arclight
            • 7 years ago

            Still evasive flip-mode, you haven’t answered my question above.

            I can't be critical and not be an a-hole. I'm no robot. If you meant not to make an ass out of myself, then i thank you for caring.

            • Yeats
            • 7 years ago

            If you *really* cared about the article and its content, you would have been more respectful & constructive in your posts.

            If you had been following the IB rumor mill & previews and considered Intel's tick-tock strategy, you would have known what to expect with Ivy Bridge, which is exactly what they delivered.

            If you carefully considered the circumstances, you would have never attempted to equate Ivy Bridge with Bulldozer.

            In short, if you were genuinely concerned with the lack of information (as you perceive it) you could have opined tastefully and respectfully, but instead you decided to be accusatory and act like a drama queen. I guess you got the attention you were looking for.

            PS – “…out of fear of getting thumbed down.” Do you honestly think anyone here fears being thumbed down?

            • Arclight
            • 7 years ago

            [quote<]PS - "...out of fear of getting thumbed down." Do you honestly think anyone here fears being thumbed down?[/quote<] I do remember much more pertinent posts before the up/down thumb became functional so yes i presume many are holding back, plus i saw many people reply, or edit their posts after they get thumbed down, trying to defend their opinion or accuse others of not agreeing with them. So yes i honestly think the post rating mechanic is detrimental to uncensored, unfiltered expressed opinions.

            • Washer
            • 7 years ago

            Now, if only your guts came with evidence or without baseless accusations.

            • Arclight
            • 7 years ago

            Like the US did when they brought evidence for WMDs in Iraq? Ohhhhh

            • Kurotetsu
            • 7 years ago

            [quote=”Arclight”<]I do remember much more pertinent posts before the up/down thumb became functional so yes i presume many are holding back[/quote<]

            Which is fairly pathetic, if that's actually happening.

            [quote="Arclight"<]So yes i honestly think the post rating mechanic is detrimental to uncensored, unfiltered expressed opinions.[/quote<]

            Not really. If you're being held back by something that does nothing to the actual content and/or visibility of your post and only has the effect of hurting your extremely fragile internet feelings, then you probably shouldn't be posting anything on the internet to begin with.

            • flip-mode
            • 7 years ago

            I’ll reply, but it’s merely repeating myself, so perhaps the problem is with your reading rather than my writing: I never claimed to know anything at all about what temp IB would hit at 1.3 volts.

            • Arclight
            • 7 years ago

            You did insinuate you knew what to expect though and clearly you didn’t expect this either…..

            • flip-mode
            • 7 years ago

            I guess you hadn’t seen this thread either:
            [url<]https://techreport.com/forums/viewtopic.php?f=2&t=81394[/url<] You'll notice I first posted in that thread 10 days ago. So, yes, I was expecting it to run hot. Oh, but wait, your name is there in the thread too. Ah. Successful troll is successful.

            • Arclight
            • 7 years ago

            10 days ago is after the Anand leak. My point stands, you didn’t expect it, you just can’t admit it to yourself.

      • Damage
      • 7 years ago

      Since you appear to have missed it below, I’ll re-post here:

      No one at Intel pressured us in the least to not talk about temperatures (or anything else, for that matter). I assure you, we do not respond well to that sort of pressure, and the folks who work with us know it.

      As I said in response to another comment on this topic, Geoff did our OC coverage and talked about temperatures and reported overclocked power draw numbers in bar graphs here:

      [url<]https://techreport.com/articles.x/22833[/url<]

      There is also this section, taken directly from my review, which included a link to Geoff's article:

      [quote<] However, something funny happened on the way to 5GHz: even with a massive Thermaltake Frio cooler rated for 220W of heat dissipation, our 3770K reached the boiling point of water and began thermal throttling. In other words, our cooler ran out of thermal headroom before our Ivy Bridge chip ran out of clock speed headroom. Geoff checked power consumption, and it turns out the 3770K was indeed drawing enough power to tax that beefy cooler.[/quote<]

      Now, you are right there is no bar graph reporting temps. If you look back, you'll find that such graphs are not a regular part of our CPU coverage. That's because CPU temps are largely a function of power draw and the ability of the cooler to dissipate heat, which is complicated by lots of variable secondary factors like fan speed, ambient temps, airflow and such. We figure showing you power draw in an accurate, measured fashion is more helpful than posting temperature readings that will be highly dependent on variable factors. In this case, of course, we still talked about OCed temperatures, but the omission of a bar graph wasn't a change from our usual procedure at all.

      You would do well to read more carefully before you make brash accusations of the sort we tend to take seriously around here. We work hard at this and value our credibility. Please consider that before you hit the post button.

        • Arclight
        • 7 years ago

        I didn't miss it. I made this post first earlier today, then i made that reply later with basically the same content. I did read your post. Thank you.

      • Visigoth
      • 7 years ago

      The only thing that sucks right now is Bulldozer, which, along with the soon-to be released Piledriver, will further sink AMD to the bottom. Perhaps Steamroller will make a drastic difference, but so far things are looking grim.

      • travbrad
      • 7 years ago

      Almost no one was expecting a big performance increase over SB. All the leaks said there were probably going to be “up to” 10-15% gains, which seems about right.

      Don’t forget laptops outsell desktops these days, and in that market this will be a decent jump forward. Saving 10 watts on a desktop is pretty meaningless, but on a laptop that’s a lot of extra battery life. The GPU will be a nice improvement too, since most people use the onboard graphics on their laptop.

      In either case it’s not worth upgrading from Sandy Bridge, but if I was looking to buy a new laptop I’d definitely want Ivy Bridge.

    • seeker010
    • 7 years ago

    wow, from all the comparisons, the bulldozer architecture is looking dated already, and it doesn’t even have the excuse the 1100t has…

    • Xylker
    • 7 years ago

    I can't be the only one who looked at the die sizes and thought back to this…

    July 2001:
    AMD CEO Jerry Sanders III said the following about Intel’s P4 processor on Thursday during AMD’s quarterly conference call

    “The P4 is a lousy product, it’s a dud, and has no advantage to the user. It’s a second rate product and they made a wrong call with memory. Pentium 4 is a loser. The only hope they have is to build capacity. It’s the world’s worst return on capital.”

    “If Intel is going to try take the Pentium 4 and compete with the Athlon they’re going to get hurt in their margins,” said Sanders. The die size of the Pentium 4 is twice as big as the Athlon – the “difference is frigging huge”, said Sanders, delicately

    AMD Athlon at ~120 mm^2 die with SDRAM or DDR
    P4 Memory at ~217 mm^2 die and initially only RDRAM

    OK, so we get 4 CPUs in the same area (+/-) but, wow, CPUs are huge compared to the bad old days! (https://techreport.com/discussions.x/17915)

      • ronch
      • 7 years ago

      I noticed that too. Performance/mm2 has gone up but boy, are there a lot of mm2 these days!!

      Good to see IB is bringing it down quite a bit.

      • NeelyCam
      • 7 years ago

      Sanders was such a cool guy. He [i<]was[/i<] AMD, and now that he's gone, AMD is just a shadow of itself.

      • NeelyCam
      • 7 years ago

      [quote<]https://techreport.com/discussions.x/17915[/quote<] Made me smile. I wonder what else has happened here at TR before I discovered it..

        • Bensam123
        • 7 years ago

        We enjoyed quiet evenings of sipping tea and talking about the weather.

          • NeelyCam
          • 7 years ago

          I hope you’re not implying NeelyCam helped TR transcend..?

            • Bensam123
            • 7 years ago

            Or perhaps descend… The NeelyCam that entered TR for the first time is seemingly quite a bit different than the one now, though.

            All in the name of progress.

            • NeelyCam
            • 7 years ago

            I just wanted to make a grand entrance, and that Intel lawsuit was the perfect vessel

    • LoneWolf15
    • 7 years ago

    I don’t see this as a reason to upgrade my desktop.

    However, I’ll be really interested to see Mobile Ivy Bridge. I’m hoping that my current notebook’s BIOS will be flexible enough to allow the chip-swap. It’d be nice to improve the IGP and get lower power consumption.

    • Krogoth
    • 7 years ago

    Ivy Bridge is like the jump from Conroe to Penryn.

    Marginal improvements in IPC and power efficiency that are welcome, but that give Conroe/Sandy Bridge users little reason to upgrade.

    I'm curious as to why Damage forgot to add power consumption and temperature measurements for the overclocked IB unit, since it has been the talk of the web. The 22nm process is allegedly leaky as hell when you throw more volts at it.

    Otherwise, it is a great review that covers almost all of the bases.

      • NeelyCam
      • 7 years ago

      I don’t know why people are freaking out about the overclocking power consumption. As far as I can tell, it pretty much follows the standard f*V^2 trend

      • tfp
      • 7 years ago

      I believe the reasoning was:

      [quote=”Damage”<]Geoff did our OC coverage. Linked in the review, but it's also here: [url<]https://techreport.com/articles.x/22833[/url<] Temperature isn't a benchmark, but he does talk temps and show power consumption while overclocked. [/quote<]

        • NeelyCam
        • 7 years ago

        Use tags [ quote ] and [ /quote ] (take out the spaces)

          • tfp
          • 7 years ago

          Hey thanks!

      • Vivaldi
      • 7 years ago

      +1

      And not to be that guy, but for consistency’s sake, don’t you mean the jump from Conroe to Wolfdale?

      I also think you mean the jump from Nehalem and/or Sandy Bridge will give users little reason to upgrade, not Core. I can guarantee my (Core family) Kentsfield Q6600 will be a night and day experience, jumping from that to Ivy Bridge.

      I get the gist of what you're saying though.

      • ronch
      • 7 years ago

      Krogoth not too impressed.

      • Arclight
      • 7 years ago

      [quote<]I'm curious as to why Damage forgot to add power consumption and temperature measurements for the overclocked IB unit, since it has been the talk of the web. The 22nm process is allegedly leaky as hell when you throw more volts at it.[/quote<]

      Indeed, from the other reviews that did show temps (only a few did), the problem is real. I'm disappointed that TR is among the many that were pressured *allegedly* by Intel not to show their new CPU in a bad light.

      To me IB is just like Bulldozer. There are games in which an i7 2600k or 2700k (forgot about it?) beats the 3770K. Shameful. On top of that it sports huge temps under load when OCed, requiring the most impressive coolers just to touch frequencies that a 2600k would get with a $20 cooler.

        • NeelyCam
        • 7 years ago

        [quote<]To me IB is just like Bulldozer. [/quote<] I think you're exaggerating just a little bit... IB is faster than SB and does it at lower power [i<]if[/i<] you don't overvolt/clock them. BD is the exact opposite when compared to PhenomII

          • Arclight
          • 7 years ago

          IB is not faster in any meaningful way, just like BD isn't compared to Thuban. Both architectures have been hyped…remember the promise of 3D transistors and all that? Well guess what, the CPU is hotter than SB, forbidding higher clocks…..so what's the real advantage here?

          Oh the IGP….yes, cause people spending so much $ on a CPU certainly do it for the IGP to play games with…..

          Face it, it's a damn flop. We have been waiting so long only to be disappointed…..

            • Bensam123
            • 7 years ago

            Is sensationalism your new word of the day?

            While a lot of over the top BD bashing isn’t needed, this just seems like folly.

            • Arclight
            • 7 years ago

            TR didn’t post temperature readings, masking what many were awaiting to be clarified ever since the leaked reviews a few weeks (months?) ago.

            Yes i do feel angry that TR gave in to Intel just like the other sites that didn’t comment on temps.

            • Damage
            • 7 years ago

            No one at Intel pressured us in the least to not talk about temperatures (or anything else, for that matter). I assure you, we do not respond well to that sort of pressure, and the folks who work with us know it.

            As I said in response to another comment on this topic, Geoff did our OC coverage and talked about temperatures and reported overclocked power draw numbers in bar graphs here:

            [url<]https://techreport.com/articles.x/22833[/url<]

            There is also this section, taken directly from my review, which included a link to Geoff's article:

            [quote<] However, something funny happened on the way to 5GHz: even with a massive Thermaltake Frio cooler rated for 220W of heat dissipation, our 3770K reached the boiling point of water and began thermal throttling. In other words, our cooler ran out of thermal headroom before our Ivy Bridge chip ran out of clock speed headroom. Geoff checked power consumption, and it turns out the 3770K was indeed drawing enough power to tax that beefy cooler.[/quote<]

            Now, you are right there is no bar graph reporting temps. If you look back, you'll find that such graphs are not a regular part of our CPU coverage. That's because CPU temps are largely a function of power draw and the ability of the cooler to dissipate heat, which is complicated by lots of variable secondary factors like fan speed, ambient temps, airflow and such. We figure showing you power draw in an accurate, measured fashion is more helpful than posting temperature readings that will be highly dependent on variable factors. In this case, of course, we still talked about OCed temperatures, but the omission of a bar graph wasn't a change from our usual procedure at all.

            You would do well to read more carefully before you make brash accusations of the sort we tend to take seriously around here. We work hard at this and value our credibility. Please consider that before you hit the post button.

            • Arclight
            • 7 years ago

            I've read that part before posting….but saying "even with a massive Thermaltake Frio cooler rated for 220W of heat dissipation, our 3770K reached the boiling point of water and began thermal throttling" in an obscure corner of your review doesn't exactly show any real data.

            Also i would debate that Frio isn’t exactly a “cheap” cooler as you guys stated in your review. Certainly it’s not regarded as a bargain heatsink, it’s more like mid to high end-ish when it comes to heat sink costs, thanks to the great competition in this market.

            A screen shot with the temps registered by a certain monitoring software would have been enough to warn the gerbils.

            To me it seems misleading, especially to consumers who would glance your review and think that the 3770K is better than 2600K for gaming and actually it’s not when you put OCing into the equation. Even if you don’t take OCing into consideration the SB are better cause mobo prices for them are, afaik, lower.

            • Damage
            • 7 years ago

            Let me get this straight. You saw fit to accuse us of bowing to pressure from Intel because:

            -Although we provided the information, we didn’t put it into the format you wanted.

            -Discussing overclocked temps in the section of the review labeled “overclocking” is hiding said info in obscurity.

            -A separate article on overclocking with power draw bar graphs is also obscure.

            -The acceptable format to avoid insinuations of corruption is a screenshot from a monitoring program.

            -The statement “This is the same cooler we use on our storage test systems, and at $48 online, it’s eminently affordable” is unacceptable because in your view, “affordable” = “cheap” and $48 is not in your view “cheap” for a CPU cooler. Ergo, asserting that a $48 cooler is affordable is provably incorrect.

            -We are “misleading” people when they fail to read our words and draw the wrong conclusions, and we must accept responsibility for this.

            I’m taking notes because the logic is hard to follow.

            Seems like you have a strong opinion about Ivy’s OCed power draw (which translates into temperatures), are a little spun up about it, and are throwing out inflammatory statements without stopping to consider the facts. Maybe a brief cooling off period and some reflection would help you sort things out.

            Let me say this again. We were in no way intending to mislead or obscure anything, and we certainly weren’t bowing to pressure from Intel.

            • Arclight
            • 7 years ago

            [quote<]-A separate article on overclocking with power draw bar graphs is also obscure.[/quote<] Power draw doesn't =/= temperature to me. The way i understand it is to relate it to previous CPUs (since i don't have the knowledge nor background to be satisfied just by power draw and work from there without temperature values); for example, the FX 8150 has higher power draw OCed than the OCed 3770K (is that a fact? iirc it is) but it certainly doesn't reach 98 degrees when pushed with far higher voltage.

            • Damage
            • 7 years ago

            Power draw does translate into heat pretty much directly, all other things being equal.

            Other things aren’t always equal, though.

            We used a very beefy water cooler supplied by AMD to overclock the FX-8150. I expect most others in the media did, too, since AMD was sampling those coolers. Very loud and obnoxious, but effective. We used a less beefy air cooler to OC Ivy.

            The FX probably also has a lower thermal throttling point, keeping it from ever reaching those higher temperatures. Intel appears to have chosen to tolerate more heat with Ivy than in past CPUs, or at least to allow it in OCed configs.

            Another possible issue with Ivy is that it will have to dissipate more of its heat over a smaller surface area than Sandy. We were worried about that issue as we worked on these articles, so Geoff did some back-of-the-envelope calculations. IIRC, Ivy wasn't any worse off in terms of W/mm^2 than Sandy at stock settings, which was reassuring to know.

            That math could change dramatically with overclocking and overvolting, though. Extreme overclocking with Ivy could become quite the challenge, if things break a certain way. Having heat density become a primary limiting factor to CPU cooling would be pretty extraordinary, AFAIK. We’ve seen things like cooler mounting downforce standards increase over time to ensure good contact and heat transfer, and that has so far apparently been adequate.

            Still, it remains true generally that CPU power draw turns into heat, which must be dissipated. Temperature is determined by the amount of heat being produced and the effectiveness of the cooling. When we show you the power draw of an overclocked config, we believe we’re telling you what you need to know about the CPU side of the equation.
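            To put rough numbers on that relationship: in a simple steady-state model, die temperature is ambient plus power draw times the total die-to-air thermal resistance. A quick sketch, with made-up but plausible figures:

                def die_temp_c(ambient_c, power_w, r_thermal_c_per_w):
                    # Steady state: temperature rise above ambient equals power
                    # draw times total thermal resistance (degrees C per watt)
                    return ambient_c + power_w * r_thermal_c_per_w

                # Illustrative assumptions: 25C room, 0.35 C/W total resistance
                print(die_temp_c(25, 77, 0.35))    # ~52C at stock-ish power
                print(die_temp_c(25, 212, 0.35))   # ~99C overclocked, throttling territory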

            • Bensam123
            • 7 years ago

            While I’m not trying to fuel his argument, temperature graphs for the cooler would’ve been nice. Power usage is nice and everything, but it isn’t in a form that is easy to read and process by most viewers. Stating 212w isn’t all that helpful when putting it into a real world context.

            That and I wish you guys had tried multiple coolers to see if there was actually a difference between them. That pretty much could be an entirely different article assessing heatsinks rather than the overclockability of the processor, though. But if there is still easily accessible headroom left in the processor, it wasn't a successful overclock IMO.

            • Arclight
            • 7 years ago

            But you, yourself said that “That math could change dramatically with overclocking and overvolting, though. ”

            Since i don't know the "math", i use empirical results from different reviews to figure out what works before i jump onto a product. If no review dares to tackle the issue in any shape or form out of fear that the result may vary due to factors like cooler contact pressure, thermal paste brand and application method, etc., how am i supposed to get the info i'm after? Am i to buy a product and test it myself before i know if it will satisfy me or not? Then wth am i doing here?

            • NeelyCam
            • 7 years ago

            [quote<]Power draw doesn't =/= temperature to me.[/quote<] As your understanding of physics doesn't seem to be sufficient for this debate, you should probably refrain from continuing your rant. Your reputation is getting damaged (pun intended) pretty bad.

            • Arclight
            • 7 years ago

            My understanding of physics is indeed limited, as i didn't study it that much in school. Nonetheless i believe i represent the majority when i say that power draw is not a real-life indicator of temperature, since i, like most, lack the understanding to calculate and figure it out.

            A temp graph could easily be done with the stock cooler and an aftermarket one, but i see now that TR doesn't believe it's necessary. What they don't understand is that they are going over the heads of many of their readers.

            Now can you repeat the values about temperature, but this time, in English? If you catch my drift.

            • mattthemuppet
            • 7 years ago

            jeez mate, listen to what everyone's telling you. Power produced = power dissipated. If a processor uses 50W of electricity, it dissipates (more or less) 50W of heat. Therefore, the more power a processor uses, the more heat it produces. Now, using a small cooler (old intel stock cooler for example) on a 50W CPU is fine, as it can transfer that heat to the air quickly enough so that the CPU temperature doesn't rise. However, if you use a 100W CPU, that same cooler will either a) spin up its fan to transfer heat to air more quickly or b) not transfer the CPU heat to air quickly enough and the CPU temperature will rise.

            So THE BIG TAKE HOME MESSAGE IS

            THE HIGHER THE CPU POWER DRAW THE;
            A) BIGGER THE COOLER NEEDED TO KEEP IT COOL

            OR
            B) HIGHER THE CPU TEMPERATURE.

            It’s as simple as that. If you use the same cooler on a 100W and a 200W CPUs, the higher wattage CPU will either run hotter or the cooler fan will run faster. If the cooler fan can’t run any faster, the 200W CPU will run hotter.

            Now, putting a temperature value on the OC'd IB chip will only give you the temperature of THAT CPU, using THAT COOLER, in THAT ROOM, in/on THAT CASE, at THAT AMBIENT TEMPERATURE. So, if you were to run your own IB set up, you'd have to recreate the Damage labs and use the exact same components for that temperature value to be in any way useful. Other coolers may be better, they may be worse. Running the rig in a case with high airflow may be better than running it on an open bench (hint, it will be). There are enough CPU cooler reviews out there for you to find the cooler you need to dissipate that much heat, seriously.

            • Arclight
            • 7 years ago

            [quote<] So THE BIG TAKE HOME MESSAGE IS THE HIGHER THE CPU POWER DRAW THE; A) BIGGER THE COOLER NEEDED TO KEEP IT COOL OR B) HIGHER THE CPU TEMPERATURE. [/quote<]

            I knew that, but thank you for trying. What i said is that i need real-life results, not TDP values....what's the point? I said it before: there are CPUs with higher TDP than OCed IB and yet they don't reach 100 degrees Celsius. I appreciate the fact that i know how much power it consumes, but as you can see with IB, it's not a clear indicator of real-life temperature. Why? For one, because IB is the first 22nm CPU, so there hasn't been any other before it. How am i supposed to use the TDP values if 22nm chips do not behave the same as the old chips (probably because of the smaller dissipation surface)? Pfff, where is Aperture Science when you need them. They're always glad to do more testing.

            • NeelyCam
            • 7 years ago

            [quote<]I appreciate the fact that i know how much power it consumes, but as you can see with IB, it's not a clear indicator of real-life temperature. Why?[/quote<]

            Let me try to take an honest stab at explaining it with an analogy. (Apologies to science purists - this is not a perfect analogy..)

            Think of the CPU chip as a deep lake. There is an equally deep river (thermal cement between the chip and the heat spreader) flowing from that lake to a dam (thermal paste+cooler+fan). In this analogy, power consumption is the same as somebody pouring water into the lake. If you overvolt/overclock the chip, that means more water is pouring into the lake, or if you're idling, almost no additional water is going into the lake.

            The [b<]water level represents chip temperature[/b<]. When the water level rises, it starts flowing through the river to the dam. The dam passes the water through at some flow rate, keeping the water level from rising too much.

            Now, if you have a large lake (=large chip), somebody can pour a huge amount of water (power consumption) and the water level (temperature) doesn't rise that fast. It will rise, though, and eventually the dam has to pass the water through... The amount of water that needs to pass through is pretty massive, so you need a big dam.

            If you have a small lake (=small chip), the water level rises much faster for the same amount of water being poured in than with the large lake. Also, the river is unfortunately smaller (because the chip area with the thermal cement touching the heat spreader is so small). Even the hugest dam doesn't help that much because the river is so damn small - the water levels are still higher.

            This is what's happening in IB. The chip is small (small lake) but the power consumption is still large, especially overclocked/volted (water pouring into the lake). Even a very large dam that's fully open (large cooler) doesn't keep the water level (temperature) that low. Maybe you need pumps (water/LN2) to pump the water out right at the mouth of the river, to keep the lake from reaching the lakefront houses (temp going over Tjmax)? The mouth of the river is still pretty small - you can't fit too many pumps there..

            Things are much easier with a larger lake (die), say, Lake BullDozer. You can pour more water in before the water level rises too much, and the river is wider so it's easier to keep the level low. The only way Lake IvyBridge residents can handle this is by building their houses to higher elevation (higher Tjmax).. The weather is much sunnier at Lake IvyBridge, though, so it's all worth it.

            • mattthemuppet
            • 7 years ago

            “What i said is that i need real life results not TDP values.”

            well, the only way you’re going to find out how hot IB OC’d gets in your set up is to test it yourself. Unless you can exactly replicate the conditions that any tech site uses to measure CPU temp (cooler, fan speed, case, case fan speeds, ambient temperature, other system components) then any results you see on the interweb are going to be approximations at the very best, which is why temperature data on its own is largely useless.

            If you can’t understand that, then there’s not much point in us trying to explain it, is there?

            • Vasilyfav
            • 7 years ago

            I think you just might be an idiot.

            • NeelyCam
            • 7 years ago

            Or a successful troll

            • Bensam123
            • 7 years ago

            Don’t feed.

            • BobbinThreadbare
            • 7 years ago

            They’re not mutually exclusive.

            • Yeats
            • 7 years ago

            [quote<]Also i would debate that Frio isn't exactly a "cheap" cooler as you guys stated in your review. Certainly it's not regarded as a bargain heatsink, it's more like mid to high end-ish when it comes to heat sink costs, thanks to the great competition in this market.[/quote<]

            I see TR referring to the Frio as “affordable”, but I don’t see it labeled as “cheap”.

            Edit: Damage beat me to it

        • Washer
        • 7 years ago

        [url<]https://techreport.com/articles.x/22833[/url<]

        I guess you missed this article, where TR overclocks the chip, overvolts it, provides temperature measurements, and specifically discusses hitting the thermal limits of the chip.

        • Yeats
        • 7 years ago

        [quote<]To me IB is just like Bulldozer. There are games in which a i7 2600k or 2700k (forgot about it?) beats the 3770K. Shamefull. On top of that it sports huge temps under load when OCed requiring the most impressive coolers just to touch frequencies that a 2600k would get with a $20 cooler.[/quote<]

        If by "like", you mean that IB has 2x the performance of Bulldozer at lower power consumption, then yes, IB is "like" Bulldozer. Also, IB is doing what it is supposed to do: small improvement in performance, big improvement in graphics, smaller process. Not a home run, but still mission accomplished. OC capability will improve as the process matures.

        Bulldozer, OTOH, was a damn *regression* on some levels; it was supposed to be a new architecture to return AMD to the hearts of PC enthusiasts, but it fell on its face, unfortunately.

      • rrr
      • 7 years ago

      I’m surprised anyone expected more than that. People have data from the Conroe > Penryn jump and what the gains were; it does not make any sense to expect much more than that, percentage-wise.

    • I.S.T.
    • 7 years ago

    So, um, where’s the temperature benches?

      • Damage
      • 7 years ago

      Geoff did our OC coverage. Linked in the review, but it’s also here:

      [url<]https://techreport.com/articles.x/22833[/url<]

      Temperature isn't a benchmark, but he does talk temps and show power consumption while overclocked.

    • ish718
    • 7 years ago

    Intel’s 22nm and its 3D transistors are a bluff after all…

      • NeelyCam
      • 7 years ago

      It doesn’t cook or clean for me. It’s no good.

    • dalisam
    • 7 years ago

    This might sound like a stupid question, but could you use the onboard graphics for PhysX only?

      • geekl33tgamer
      • 7 years ago

      Nada, Nvidia’s locked that out of the PhysX system software. Some people have hacked certain titles showing that a modern CPU can easily handle PhysX on one or two cores, let alone a GPU. But Nvidia paid so much for PhysX, they still insist on keeping it vendor-locked…

    • derFunkenstein
    • 7 years ago

    Nice CPU, I suppose, but the OC results make it no better than Sandy for me. The overall system power consumption drop is OK, but it’s not worth upgrading if you have a Sandy i5 or i7.

    I have a Sandy i3, though. I’ve been saying all along that when Ivy comes along I’d grab a quad. But now I’m not so sure. I wonder if Sandy quads will hang around for a while and drop in price now that the new hotness is out. A 2500K is plenty for me. Guess I’ll watch eBay for early adopters to upgrade – my Z68 board will take Ivy but it doesn’t HAVE to.

    • ColeLT1
    • 7 years ago

    Any word on when these go on sale? I picked up an Asus Z77-V Pro and have a nice warm home waiting for the 3570K:
    [url<]http://imgur.com/a/tMWR8[/url<]

      • integer
      • 7 years ago

      [url=http://www.overclockers.com/intel-i7-3770k-ivy-bridge-cpu-review<]April 29[/url<] [Overclockers.com]

        • ColeLT1
        • 7 years ago

        Thank you sir.

    • Washer
    • 7 years ago

    On the Battlefield 3 page:

    [quote<]You know how some people say that CPUs don't matter for gaming performance, since they're all fast enough these days? Here's a case where that's actually true. Have a look at all of our metrics, and they all agree.[/quote<]

    Is that true? It seems to me you just found a game that was GPU-limited with the given settings, not CPU-limited like the others.

      • Airmantharp
      • 7 years ago

      It’s true when testing single-player BF3- the situation (somewhat) reverses itself when you jump into a large map with 64 players. You need as much CPU and GPU as you can get, and you’ll still be left wanting more.

        • Washer
        • 7 years ago

        The latter part of your post is why I find the statement worrisome. We don’t have to go far, just to BF3’s multiplayer, to see immediately that the CPU does make a difference in gaming. All these results show is that, currently, you’re more likely to hit the limits of your GPU before your CPU in BF3’s single-player. The statement, though, suggests that this is the case for all gaming, which isn’t true. I feel like I’m nitpicking, but I also feel the test itself is misleading. I can assure everyone that they will not have nearly the same experience on an A8-3850 vs. an i7-3770K when playing BF3 multiplayer, even if both configurations have an HD 7950.

    • Majiir Paktu
    • 7 years ago

    Is it just me, or does Ivy Bridge have much better virtualization support (VT-d?!) at lower prices than Sandy Bridge did?

      • Washer
      • 7 years ago

      Yes, you can get VT-d at a lower price point now. However, Intel is still removing VT-x (and VT-D, and SIPP, and TXT) from the K model processors. That’s ridiculous and frankly makes me a bit angry. Spend more and you’re getting a less useful processor solely because Intel enjoys being a massive a-hole… that’s sickening.

        • UberGerbil
        • 7 years ago

        That’s not correct — VT-x is available in the K processors ([url=http://ark.intel.com/compare/65523,65719,65520,52214,52210<]for both the 'Bridges[/url<]), and software that requires "hardware virtualization support" will generally run fine on K processors. However VT-d, as you say, is (still) unavailable in the K editions. I suspect, or at least hope, that that is going to change as Intel pushes Thunderbolt into its platforms, because bringing PCIe out to external devices without sandboxing them when they have full access to the physical memory map is a security and stability disaster waiting to happen.

          • Washer
          • 7 years ago

          Ahh, thanks for the correction about VT-x. I’m still angry though. Disabling VT-d on the K models is still a low blow on Intel’s part (and the other extensions).

        • Krogoth
        • 7 years ago

        VT-x and VT-d are enterprise-level features. Disabling them on the K and lesser versions of IB/SB is just Intel’s way of segmenting markets.

        K-series chips are geared towards enthusiasts who care more about overclocking and less about virtualization. The lesser versions of IB/SB are geared towards average-joe types who have probably never heard of virtualization.

        The enterprise market, which has a genuine use for VT-x, typically doesn’t overclock its systems because that compromises stability and data integrity.

          • Washer
          • 7 years ago

          I guess you missed the part where the i7-3770, i5-3570, i5-3550, and i5-3470 all have VT-d and VT-x. Also, VT-d is the “enterprise” feature, VT-x is the “standard” one.

          But hey, thanks for talking to me like I’m a moron even though you’re the clueless one.

            • Krogoth
            • 7 years ago

            The higher-end, normal SB/IB chips always had VT-d support, but the lesser versions do not; an example is the i5-2320. The “Celeron” versions of SB/IB don’t even have VT-x or VT-d support.

            It is all artificial segmentation, but doesn’t really matter in the grand scheme of things, because their intended markets don’t need it or miss it.

            • Washer
            • 7 years ago

            Many of the lesser versions do in fact have VT-x. For instance, the Celeron G530 has VT-x, the Core i3 2637m inside my cheap laptop has VT-x, as does the i5-2320. VT-d is now coming down to many of the Ivy Bridge parts. The feature is useful. There are numerous students or people educating themselves at home on their consumer hardware that could take advantage of these functions.

            The fact that it is entirely artificial segmentation is why I’m upset, and why my mind is blown by you defending Intel on this. It’s ridiculous: buy the more expensive unlocked version of an otherwise identical processor, and features available on the standard model are removed. That’s outrageous, and people should make more noise over the behavior, even if they themselves don’t use the feature being removed.

            • NeelyCam
            • 7 years ago

            You should head out to Walgreen’s. They have a sale on chill pills – two for the price of one. I think it’s a great deal; $5 would keep you supplied for almost a week!

            • flip-mode
            • 7 years ago

            The $5 pills have the chill feature disabled; they’re just placebo.

            • NeelyCam
            • 7 years ago

            funny stuff, +1

            • jazper
            • 7 years ago

            The i5-2320 has VT-d. I know; I have used it.

            • xeridea
            • 7 years ago

            Yes, because everyone who wants to use VMs for testing and/or development totally wants to spend $1000 on their computer just because Intel wants to be an a-hole. I use VMs on my $60 Athlon II. I guess I wouldn’t miss it though; it’s not like it’s a standard feature of the 21st century or anything.

            • Krogoth
            • 7 years ago

            You exaggerate. VT-x and VT-d support isn’t that expensive if you genuinely need it. The vanilla i5-2500 and i5-2400 have both features and hover around the $199 price point. All of the Bulldozer-based FX chips have it.

            AMD didn’t support VT-d’s equivalent (IOMMU) on its desktop line until the Bulldozer generation; it was Opteron-only in the Istanbul (Phenom II) era. Athlons do not have IOMMU support, and bargain-basement models don’t even have VT-x.

            The only people creating a storm over this are completist types, or people bashing AMD/Intel on principle.

            • Firestarter
            • 7 years ago

            The i5-2500 has it, the i5-2500K doesn’t. You can’t explain that!

            • Krogoth
            • 7 years ago

            You realize that I said it was the “vanilla” versions of the i5-2400 and i5-2500, not the “K” flavors of the chips?

            If you really want overclocking plus VT-x and VT-d support, all of the Sandy Bridge-E chips have it.

    • thesmileman
    • 7 years ago

    “Since OpenCL code is by nature parallelized and relies on a real-time compiler, it should adapt well to new instructions.”

    That is an incorrect assumption. Intel only just released new OpenCL drivers to support the new instructions, so unless you installed the new drivers from Intel, the OpenCL compiler could not take advantage of them.
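    For anyone who hasn’t poked at OpenCL: kernels ship as plain source text, and the installed driver compiles them at run time when the host calls clBuildProgram, so which instructions actually get used is decided by whatever driver compiler is on the machine. Here’s a bare-bones sketch of that host-side flow in C (error checking trimmed for space; the trivial kernel is a made-up example):

[code<]
#include <stdio.h>
#include <CL/cl.h>

/* The kernel is just a string; no machine code exists until the driver
 * compiles it below, targeting whatever ISA that driver knows about. */
static const char *src =
    "__kernel void scale(__global float *x, float a) {\n"
    "    int i = get_global_id(0);\n"
    "    x[i] = a * x[i];\n"
    "}\n";

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);

    /* The step that matters here: compilation happens NOW, on this machine,
     * by this driver. Update the driver and the same source can come out
     * using newer instructions. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    cl_int err = clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    printf("build %s\n", err == CL_SUCCESS ? "succeeded" : "failed");

    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return 0;
}
[/code<]

    Swap in a newer Intel OpenCL driver and that same source can come out the other side using new instructions; that’s the sense in which a real-time compiler “adapts”, but only once the installed driver supports them.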

      • Damage
      • 7 years ago

      I’m assuming you didn’t read the article carefully. Try again?

    • vargis14
    • 7 years ago

    I know it would not make much of a difference, but I feel it would have been nice to see the 2600K matched in the BIOS to the same core speeds and turbo steps as the 3770K, or a 2700K used instead.

    I feel it would give us a much better picture of the performance increase over Sandy, not counting the IGP tests of course.

    Hats off to another great TR review. Keep up the great work.

      • thanatos355
      • 7 years ago

      With quantifiable thermal data as well, please.

    • Chrispy_
    • 7 years ago

    Not wanting to be ungrateful, that was an enjoyable read, but the most interesting feature of Ivy Bridge (by far) is the IGP, since almost every other performance improvement over Sandy Bridge is negligible.

    Can we get a more in-depth look at the IGP, perhaps some image-quality tests like you did for the HD 3000? Those really showed that Intel drivers can perform well on paper, but not in practice.

    We all hate Intel IGPs, but they’re getting to the point where I’d tolerate one in a laptop, because you get adequate performance in a very slim and sexy package. Optimus works well, of course, but you don’t see Nvidia GPUs in many slim, light, or long-lasting laptops.

    • UberGerbil
    • 7 years ago

    I know the review suggests an interest in testing some of the i5-3xxx models, but do you have any plans to run the 2700K through these benches in an effort to de-haunt your readership?

      • Scrotos
      • 7 years ago

      Seconded on the 2700K.

    • jensend
    • 7 years ago

    Damage, thanks for the continuous quantile plots. Did you read [url=https://techreport.com/discussions.x/22666?post=623948<]my comment last time[/url<] about using a semilog scale to make the differences in those last few percent clearer? (Also, does whatever you're using to plot these have the capability to antialias the plotted line?)

      • jensend
      • 7 years ago

      Damage wrote pages worth of replies to trolls and people with misconceptions but not a word in response to me. I feel unloved.

        • Damage
        • 7 years ago

        Excel’s “line smoothing” will AA yer lines. It also adds curves where none exist:

        [url<]http://vizwiz.blogspot.com/2011/12/when-you-use-smoothed-line-chart-your.html[/url<]

        I think we have enough data to prevent severe distortions, but by nature, I don't like what this option does. I may get over it, for the sake of pretty pictures, but...

        Log scale on which axis? On Y, you just end up compressing the various lines together. Unhelpful. On X, well, that's hard to do in Excel. However, it looks like a log scale there would have the same effect, with less distance between, say, 90 and 95% than we have with a linear scale. You had something else in mind?

          • jensend
          • 7 years ago

          Ah, you’re using Excel. Lots of stuff out there is more suited to such plots and analysis (the easiest alternatives to find involve a bit of programming, like R, MATLAB/Octave, Python+numpy+matplotlib, etc.), but I guess there’d be a fair bit of a learning curve and workflow adjustment.

          Though I was thinking of simple AA, Excel’s line smoothing should actually be OK for this situation. Excel’s line smoothing appears to use either Bezier curves or cubic splines to interpolate between your data points. Drawing straight lines between the data points is linear interpolation, and can be misleading for the same reasons (you’re plotting interpolated values where you have no data). More sophisticated interpolations are likely to be [i<]more[/i<] accurate than linear interpolation, especially when we expect the underlying phenomenon behind the data to be somewhat smooth (differentiable); in this case the underlying quantile function is smooth (the inverse of an integral is differentiable).

          When you're dealing with only a few disconnected data points, linear interpolation does have the advantage that the individual real data points stand out against the interpolated ones because of the sharp corners. But you're plotting enough points (and your function is smooth enough) that that's not the case. In fact, with how many points you're plotting and how smooth the quantile function is, the difference between linear interpolation and cubic spline interpolation will be fairly minimal, and the main effect of turning line smoothing on really will be just that the curve is antialiased.

          My [url=https://techreport.com/discussions.x/22666?post=623948<]previous post[/url<] gave a somewhat intuitive explanation of the scale and included an image link. The image links had expired; I just edited it to include a new link. Ignoring what's done with labels for a moment, a normal semilog plot simply plots log(x) against y. My suggestion is to plot -log(1-ratio) (obviously ratio=percent/100) against frame time; this is the same as doing a semilog plot of 1-ratio against frame time and then flipping it left to right.

            • jensend
            • 7 years ago

            BTW here’s the octave code to do [url=http://minus.com/mKK4Ggvh1#1<]those plots[/url<], with some explanatory comments:

[code<]
x=linspace(.5,1-.5^11,100);   %100 linearly spaced points from 50% to ~99.95%
plot(100*x, norminv(x,0,1));  %plot of quantile function of a standard normal dist
figure;                       %new plot window
y=logspace(-log10(2),-11*log10(2),100); %100 logarithmically spaced points from 50% to ~0.05%
%if we kept the linear spacing for the log plot,
%the plot points would be almost all towards the left edge.
plot(-log2(y),norminv(1-y,0,1));
%set the labels to percent below
z=100*(1-2.^-(1:11))';
set(gca,'XTick',1:11);
set(gca,'XTickLabel',num2str(z,4));
[/code<]

            All you'd have to do to use that for your quantile plots is replace norminv(*,0,1) with quantile(FRAMETIMES,*).

          • Bensam123
          • 7 years ago

          SPSS damage son.

            • jensend
            • 7 years ago

            Who thumbed you down for suggesting SPSS? Fixing. SPSS isn’t my favorite but it’s certainly better suited for these kinds of data analysis than Excel.

            (I’ve heard R Commander is a good way for people who are more used to the menu-driven GUI approach usually used with Excel or SPSS to ease into R. Otherwise most of the GUIs I know of for various data analysis and visualization packages are really IDEs.)

            • Bensam123
            • 7 years ago

            Same person that got ya good too.

            • jensend
            • 7 years ago

            It’s ridiculous.

            If someone is trolling or throwing insults, a thumbdown needs no explanation. If there’s a debate going on, I can understand people thumbing up the side they agree with and thumbing down the side they disagree with without getting involved any further. But when people are trying to make thoughtful suggestions it’s aggravating that people thumb down without giving any reason for disagreeing.

            [b<]Thumbdown use has largely devolved into being a way cowards can anonymously spit at those they've taken a disliking to- a gutless way they can show disdain without facing up to people when they have no rational critique of what's been said.[/b<]

        • Firestarter
        • 7 years ago

        Yes, log. All kids love log!

          • jensend
          • 7 years ago

          ??

            • flip-mode
            • 7 years ago

            [url=http://www.youtube.com/watch?v=2C7mNr5WMjA<]It's big, it's heavy, it's wood![/url<]

          • TardOnPC
          • 7 years ago

          I guess somebody didn’t watch Ren & Stimpy growing up.

    • Ratchet
    • 7 years ago

    So… no time to read just yet, but this thing’ll work in my P8Z68-V/GEN3 motherboard, correct?

      • UberGerbil
      • 7 years ago

      It’s supposed to, assuming you have the latest BIOS update from ASUS. But you might want to wait for reports of success in the field before you plunk down your cash. Though I think the takeaway from this review is that unless your current Sandy Bridge is a relatively low-end one and the Ivy Bridge you’re planning to buy is near the top of the range, you’re not going to see a big gain. Going from SB to a similar (in clocks and threads) IB doesn’t give that much of a jump unless you’re gaming on the IGP or something.

      But as a life-extender for that platform around the time Haswell arrives, it may be a good option (if there are any price drops on IB by then).

    • luisnhamue
    • 7 years ago

    Truth: Ivy has fewer letters than Sandy, much like how the processors’ fabrication processes compare. But Sandy is still beautiful and I love her.

    • ronch
    • 7 years ago

    Whatever reasons you may still have to choose Bulldozer over Intel’s offerings, IB chews them up and spits them out.

      • flip-mode
      • 7 years ago

      Heh, no, there were no reasons before, and no reasons now.

      • jdaven
      • 7 years ago

        AMD still has better integrated graphics performance for games (see the Anandtech review for an in-depth Llano vs. IB comparison), upgraders who can’t afford a new MB and CPU, and of course, us die-hard AMD fanboys. 😉

        • NeelyCam
        • 7 years ago

        Yeah – wasn’t IvyBridge supposed to mop the floor with Llano? Well, didn’t happen. Trinity => massacre.

        Doesn’t matter to me much – I’m waiting for Haswell

          • chuckula
          • 7 years ago

          [quote<]Yeah - wasn't Ivy Bridge supposed to mop the floor with Llano?[/quote<] Weren't you the main one saying that around here???? Nobody else really thought Ivy would beat Llano by a lot, especially on the desktop. In the mobile space I think IB will be competitive with Trinity at 17 Watts, but Trinity will have the GPU lead above that.

            • NeelyCam
            • 7 years ago

            Yeah, because Charlie and/or Anand told me so. I have no opinions of my own – I just parrot what those in the know tell me.

            Aren’t you the one saying that 17W IB will likely beat Trinity in graphics because tri-gate 22nm is so power-efficient? You even link David Kanter to support that view?

            Seeing what smackdown Llano gave IB, I’m not so sure..

            • chuckula
            • 7 years ago

            1. I’ve said repeatedly that a 17-watt IB part will be *competitive* with a 17-watt Trinity part at the GPU. That’s *not* the same as “smackdown”. P.S. –> Kanter agrees with me, and neither one of us has changed our positions.

            2. Go back to Anand and read the *other* article he posted today with a preview of a mobile Ivy Bridge at 35 watts vs. the highest-end mobile Llanos… the GPU in IB trades blows with the Llano machine and wins about as many benches as it loses.

            I will keep saying this until I’m blue in the face: Intel designs GPUs for *ultrabooks* and everybody else just goes along for the ride. IB’s biggest jumps are in the mobile arena, and Intel is not about to beat AMD at high-TDP solutions because Intel drives all the power into the CPU instead of the GPU… and considering that Trinity still won’t beat low-to-midrange AMD 7000 series parts on the desktop, there’s a certain method behind Intel’s madness.

            • NeelyCam
            • 7 years ago

            What I see is that the 2x improvement over SB didn’t materialize, and Llano is still the king of the IGP hill. IB was supposed to beat Llano and it didn’t.

            Rumors are pointing to Trinity IGP being some 50% faster and twice as efficient as Llano. I’m not sure what your definition of “competitive” is, but I predict Trinity will beat IB at 17W and murder it at 35W.

            Look also at the power efficiency of that mobile IB. Unimpressive. Could be that Asus somehow mucked up the efficiency with something (like using a power-hungry HDD or NVidia GPU) – much like Samsung did with SB – but nonetheless, losing to 32nm SB in battery life is not a good sign for the 17W IB parts..

        • LocalCitizen
        • 7 years ago

        Except this time, Ivy Bridge will also allow reuse of an old (Sandy) board.

        • geekl33tgamer
        • 7 years ago

        But no BD CPU has a GPU built inside it yet, right?

        • ronch
        • 7 years ago

        Hey, I like AMD as much as the next guy. For the record, I want AMD to succeed. But as it is, AMD is not offering the performance, price, and power efficiency people need; people stick with AMD nowadays mainly out of brand loyalty, letting AMD get away with its complacency of the past few years. Intel has offerings that rival AMD in the lower market segments, and their prices aren’t really higher (they may be, say, $10-$20 pricier… what’s that to most people?).

        Llano may be OK where it’s aimed, but I’m not sure it’s the best choice. I recently built an Intel system (Core i3 + Intel DH67BL + 8GB DDR3-1333 + AMD HD 5570 1GB) for [u<]less[/u<] than a comparable Llano system (A8-3870K + Gigabyte FM1 board + 8GB DDR3-1600), and with future upgradability to an i5/i7 and without the hassles of making Dual Graphics work. Last time I checked, it's a pain in the butt to get working.

        So, for less money, I was actually able to build an Intel system with better single-threaded performance, equal aggregate multi-threaded performance, comparable graphics performance, as well as upgradability. As much as I wanted to give my money to AMD at that time, I just couldn't ignore the advantages Intel is offering. And with IB, the trend may continue even with Trinity around. Trinity's IGP may be superior, but as my experience has shown, plugging a discrete graphics card into the Intel platform gets the graphics part of the equation up to speed with AMD while still costing less. So what if it's discrete vs. IGP? Power savings? I don't think so.

        I love AMD, and have used FAR more AMD chips than any other brand, but nowadays AMD simply isn't the rational choice. I hope Trinity/Piledriver/Steamroller will turn things around.

          • Yeats
          • 7 years ago

          Not if the “next guy” is jdaven, apparently. 😛

          As far as AMD not offering the “performance… people need”: I disagree. AMD has chips that offer far more performance than many people need. Phenom II and C2Q’s are still easily powerful enough for games, photo & video editing, etc. Many businesses function quite well with hardware far older than that.

          Looking on Newegg, I can’t find a better deal on the Intel Core i3 setup you specc’d vs the Llano one, but maybe you got a combo deal or something. I do think Llano is a tad overpriced, though.

          I agree that there are few scenarios where it makes sense to use AMD over Intel, but most of the time it’s not because AMD does not offer enough performance, it’s because Intel offers (a lot) more performance & efficiency.

            • ronch
            • 7 years ago

            OK, my bad. What I meant to say was: for the money, AMD doesn’t offer the performance/watt you’d expect. AMD used to offer better price/performance even if power was sometimes a bit off (Phenom II), but lately I just can’t see it. As for enough performance, yes, AMD does well except in the most demanding apps. It works fine, alright, but you can do better with Intel, and the price difference really isn’t that huge unless you’re talking SB-E/LGA2011.

    • swaaye
    • 7 years ago

    I’d like to find out if Ivy Bridge has the same PLL overvoltage / S3 sleep issue as Sandy Bridge.

    For those who are unaware: with an SB CPU, you need to overvolt the CPU PLL to reach higher than a 43-44x multiplier. Motherboard BIOSes often do this automatically. Unfortunately, it mostly breaks S3 sleep, and Intel has acknowledged this.

    On the other hand, considering the power demands increase so dramatically around 5 GHz, maybe there’s not much point in getting excited even if it does work now.

      • bthylafh
      • 7 years ago

      That’s an excellent question, mostly for us K-series owners. I’ve run into it myself.

      Not that a fix is going to make me upgrade from a 2500K, mind. 😛

    • Dposcorp
    • 7 years ago

    Holy cow, was that an awesome review… you are correct in saying there are things that TR does that will not be seen anywhere else.
    Great job!

    As for the chip itself, I would say anyone on a 2500K/2550K/2600K is better off spending their money on a different upgrade:
    a GPU for gaming, an SSD, maxed-out RAM, or waiting altogether.

    Otherwise, if building brand new from an S775/AM2+/AM3 system, then I would say IVB + Z77 is the way to go.

    Those with an S1366 system have a lot of thinking to do, I believe.

      • dpaus
      • 7 years ago

      [quote<]if building brand new.... then I would say IVB + Z77 is the way to go[/quote<]

      If you can get them. I'm trying to build a new system over the next few days, and I'll let you know my experience in getting my hands on one.

      • Vivaldi
      • 7 years ago

      +1

      My Q6600 (socket 775) is looking mighty prehistoric.

      Must…. resist… urge… to… Newegg….

      *wrestles self*

        • NeelyCam
        • 7 years ago

        Slowly…step away from the browser..

        • yogibbear
        • 7 years ago

        My Q9450 is trudging along…..

        I’m more likely to hold off on the upgrade due to HDD storage prices rather than IB availability…. 🙂

        • chuckula
        • 7 years ago

        You young punks and your “quad” core processors! Some of us are still poking along with dual core E8400s uphill both ways in the snow.. and we like it that way!

        ….

        OK, we don’t really like it that way but we have to wait until next year to upgrade anyway, so Haswell it is.

          • Bensam123
          • 7 years ago

          The E8400 was actually released a year after the Q6600… You were just a youngin without a vision at the time, apparently. :p

        • flip-mode
        • 7 years ago

        Be well until Haswell, my son!

        • Bensam123
        • 7 years ago

        Don’t you dare look at clock-for-clock performance.

    • UberGerbil
    • 7 years ago

    For anyone interested in more coverage of Ivy’s graphics, David Kanter at Real World Technologies just posted [url=http://www.realworldtech.com/page.cfm?ArticleID=RWT042212225031<]an 8-page article[/url<] covering just the IGP.

      • chuckula
      • 7 years ago

      [quote<]Putting this all together, Intel will substantially narrow the gap with AMD for integrated graphics capabilities in 2012. Actual product level performance depends on pricing, binning and the market. For instance, Intel has an edge for very low power designs due to process technology. The 22nm FinFETs are exceptionally efficient at low voltage and it is likely that Ivy Bridge will match Trinity for 17W designs. At 25-35W for conventional notebooks, Intel should trail by around 20%, which is close enough to be competitive. Looking to desktops though, AMD will have a substantial advantage and the performance gap may be much higher.[/quote<]

      Looks like Kanter & I are in agreement (and have been going back to at least the Sandy Bridge launch).

    • Hattig
    • 7 years ago

    How is the image quality of the Intel HD 4000? The HD 3000 had some pretty major failings, and I was looking forward to seeing whether Intel has improved in that area now that they claim to be a big boy in gaming. This is especially relevant for the future mobile variants.

      • UberGerbil
      • 7 years ago

      I wonder if Damage & Co have a follow-up planned to see if the flowers are truly gone.

        • Scrotos
        • 7 years ago

        Anandtech had a preview that showed them to be more circular but with strange spikes.

        [url<]http://www.anandtech.com/show/5626/ivy-bridge-preview-core-i7-3770k/16[/url<]

    • flip-mode
    • 7 years ago

    Props for the frame time to FPS conversion table and for the thickened lines in the graph legends. I have much more reading to do but wanted to mention that.

      • Chrispy_
      • 7 years ago

      Indeed, that line thickening really helps me (tho I am colourblind).

    • Ifalna
    • 7 years ago

    Woo at long last! Nice review, thanks. I won’t be regretting spending 30 bucks more on Ivy if I run that system for 5-6 years.

    • OneArmedScissor
    • 7 years ago

    My takeaway from this has nothing to do with Ivy Bridge. It’s that AMD’s myriad of cache configurations are all borked, and that will continue to be the case. Judging by how high the frame latency is on Llano, that does not bode well for Trinity’s chunks of larger, likely higher latency L2.

    Oh AMD, why couldn’t you just use a faster L3 cache with your CPUs instead of jacking up the core clock for the last several years?

    Thank you for doing the frame latency tests and including all of those CPUs. It’s an eye opener.

      • UberGerbil
      • 7 years ago

      Kanter has mentioned a few times in discussions over at RWT that he can’t understand why the latencies of the caches (at each level) are as high as they are. Given the access and information he has, that’s pretty interesting. The potential silver lining, I guess, is that there may be something AMD can do to fix whatever the problem is and pick up a significant chunk of dormant performance trapped in the design. But the longer the designs feature those long latencies, the more we have to consider that there’s some fundamental problem preventing them from doing so.

        • OneArmedScissor
        • 7 years ago

        This seems to have been going on since the Athlon 64 shrinks. I remember there was an explanation offered for that particular instance, though it didn’t really seem to justify their decision. They’ve just done more weird things ever since, but now they don’t even talk about it.

        I’m leaning more to the fundamental problem side, but moving away from 2 GHz L3 and 4 GHz cores would go a long way.

      • JMccovery
      • 7 years ago

      I do know that ever since the 65nm ‘G’ Revision Brisbane K8s (which is funny, because it seems that AMD’s process problems started there), the latency of AMD’s L2 caches has been increasing. For L3, I honestly do not know what in the world went on from K10 to K10.5 to Bulldozer. It seems as though L3 latency fell off a cliff…

      Does anyone know the average L2/L3 latency of K10, K10.5 and Bulldozer?

        • OneArmedScissor
        • 7 years ago

        Here’s a recent comparison:

        [url<]http://www.anandtech.com/show/4955/the-bulldozer-review-amd-fx8150-tested/6[/url<]

        A few more CPUs from just before that:

        [url<]http://www.anandtech.com/show/2901/2[/url<]

        Original Phenom, Athlon X2, and some Core 2s:

        [url<]http://www.anandtech.com/show/2702/5[/url<]

        "AMD too will pursue a faster L2, that will most likely come in 2011 with Bulldozer"

        *M-M-M-M-MONSTER FACE PALM*

        What's important to note is that when Intel made the L2 cache ginormous with Penryn, they also [i<]lowered[/i<] the L2 latency at the same time:

        [url<]http://www.anandtech.com/show/2306/3[/url<]

        And here's AMD, many years later, not keeping up in L2 latency with both smaller transistors and smaller caches.
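        If anyone wants to eyeball these latencies on their own box, the usual trick is a pointer chase: walk a shuffled ring of pointers so the prefetchers can’t guess the next address and every load has to wait for the one before it. A crude C sketch (buffer sizes and hop count are illustrative, and this is nowhere near as careful as the tools behind those Anandtech numbers):

[code<]
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Chase a randomly shuffled ring of pointers. Every load depends on the
 * previous one, so average time per hop approximates load-to-use latency
 * at whatever cache level the working set fits into. */
static double chase_ns(size_t bytes, size_t hops)
{
    size_t n = bytes / sizeof(void *);
    void **buf = malloc(n * sizeof(void *));
    size_t *perm = malloc(n * sizeof(size_t));
    size_t i;

    /* Shuffle the visit order (Fisher-Yates) so prefetchers can't predict it. */
    for (i = 0; i < n; i++) perm[i] = i;
    for (i = n - 1; i > 0; i--) {
        size_t j = rand() % (i + 1);
        size_t t = perm[i]; perm[i] = perm[j]; perm[j] = t;
    }
    for (i = 0; i < n - 1; i++) buf[perm[i]] = &buf[perm[i + 1]];
    buf[perm[n - 1]] = &buf[perm[0]];   /* close the ring */

    void **p = &buf[perm[0]];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < hops; i++) p = *p;  /* serialized dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    if (p == NULL) puts("");            /* keep the chase from being optimized away */
    free(perm); free(buf);
    return ns / hops;
}

int main(void)
{
    /* 16KB ~ L1, 256KB ~ L2, 4MB ~ L3-ish, 64MB ~ RAM; sizes vary by CPU. */
    size_t sizes[] = { 16 << 10, 256 << 10, 4 << 20, 64 << 20 };
    for (int s = 0; s < 4; s++)
        printf("%6zu KB: %.1f ns/load\n", sizes[s] >> 10,
               chase_ns(sizes[s], 10 * 1000 * 1000));
    return 0;
}
[/code<]

        Plot nanoseconds per load against working-set size and the L1/L2/L3/memory plateaus fall right out; that’s essentially where latency tables like Anand’s come from.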

    • Firestarter
    • 7 years ago

    Makes you wonder how hard they could push this design if AMD were breathing down their proverbial neck.

    • chuckula
    • 7 years ago

    Thanks for the great review! The GCC compilation test with Qt Bench is a very welcome addition, since compiling is a task that a good subset of the people here perform, and it’s a good test for exercising multi-core systems.

      • UberGerbil
      • 7 years ago

      It’s also not an FPU-centric number cruncher, which makes for a nice contrast with many of the other standard TR benchmarks. It’ll be interesting watching AMD and Intel’s designs match up on this one.

    • OneArmedScissor
    • 7 years ago

    Thank you for using video encoding instead of 3D rendering for the power test, and for using a modern, reasonable PSU. That makes a lot more sense for just about anyone.

    • kuraegomon
    • 7 years ago

    Hmmm – interesting shift in focus here on Intel’s part. It’ll be interesting to see how yields of low-voltage parts rise as the process matures. IB mobiles might really be something to behold a year from now on price and battery-life fronts.

      • NeelyCam
      • 7 years ago

      A year from now you will be beholding Haswell.
