Intel’s Core i7-4770K and 4950HQ ‘Haswell’ processors reviewed

Another year has passed, and once again, it’s time for Intel to unveil a new generation of processors. Somehow, it seems like we’ve been waiting longer than usual for this latest refresh. Dunno why that is. Perhaps it’s because the “tocks” in Intel’s vaunted “tick-tock” development cadence tend to be the more exciting technologies, new CPU architectures that promise major change for the better. Last year’s “tick,” under the code name Ivy Bridge, brought some nice reductions in power consumption thanks to the transition to an advanced 22-nm fabrication process. This year’s “tock,” code-named Haswell, packs even more change: enhanced CPU cores, faster graphics, and some sweeping modifications to the PC platform itself. The goal, in part, is to shoehorn Intel’s fastest CPU technology into power envelopes suitable for ultra-thin laptops and tablets. Undoubtedly, Haswell is the cornerstone of Intel’s bid for relevance in a new, mobile-centric market. The benefits spill over onto the desktop, though, in various and surprising ways.

New architectures, both micro and macro

A look at the Haswell die. Source: Intel.

Superficially, the Haswell chip represented in the die shot above looks an awful lot like the Sandy Bridge and Ivy Bridge generations that came before it. All three of ’em have four cores, with the same size caches at each level of the hierarchy, down to the 8MB L3. They each have graphics, a memory controller, and PCI Express integrated, too, with the CPU cores and those other elements linked together by a high-speed ring interconnect. Even at the die level, Haswell doesn’t look that much different from Ivy.

| Code name | Key products | Cores/modules | Threads | Last-level cache size | Process node (nm) | Estimated transistors (millions) | Die area (mm²) |
|---|---|---|---|---|---|---|---|
| Lynnfield | Core i5, i7 | 4 | 8 | 8 MB | 45 | 774 | 296 |
| Gulftown | Core i7-9xx | 6 | 12 | 12 MB | 32 | 1168 | 248 |
| Sandy Bridge | Core i5, i7 | 4 | 8 | 8 MB | 32 | 995 | 216 |
| Sandy Bridge-E | Core i7-39xx | 8 | 16 | 20 MB | 32 | 2270 | 435 |
| Ivy Bridge | Core i5, i7 | 4 | 8 | 8 MB | 22 | 1200 | 160 |
| Haswell (quad GT2) | Core i5, i7 | 4 | 8 | 8 MB | 22 | 1400 | 177 |
| Deneb | Phenom II | 4 | 4 | 6 MB | 45 | 758 | 258 |
| Thuban | Phenom II X6 | 6 | 6 | 6 MB | 45 | 904 | 346 |
| Llano | A8, A6, A4 | 4 | 4 | 1 MB x 4 | 32 | 1450 | 228 |
| Trinity | A10, A8, A6 | 2 | 4 | 2 MB x 2 | 32 | 1303 | 246 |
| Orochi/Zambezi | FX | 4 | 8 | 8 MB | 32 | 1200 | 315 |
| Vishera | FX | 4 | 8 | 8 MB | 32 | 1200 | 315 |

Both Haswell and Ivy Bridge are built using Intel’s 22-nm fab process with “tri-gate” transistors, and both are relatively small chips. Although it has grown a little compared to Ivy, the quad-core desktop Haswell is quite a bit smaller than anything else in our table save for Ivy Bridge itself, and the table reaches back several generations.

At this level, Haswell doesn’t look too different from what has come before. Drill in a little deeper, though, and many of the components have been updated. Let’s start by looking at those CPU cores. Haswell isn’t a complete microarchitectural rebuild like Sandy Bridge. Instead, the Haswell core builds on the foundation established by Sandy and Ivy Bridge. The main pipeline depth is unchanged, but Intel’s architects have implemented a series of evolutionary modifications intended to increase per-clock throughput. Some of those measures are aimed at raising instruction throughput in existing code, while others require the use of new instructions.

The list of tweaks aimed at improving performance with legacy code begins with some items that will look familiar to those who track CPU development. They’re the sorts of things CPU architects do with growing transistor budgets in order to extract more parallelism out of a single thread. Fetch bandwidth is higher in Haswell’s front end. Branch prediction accuracy is up. The window for out-of-order execution is larger, with a corresponding size increase in related structures. Switching latencies for hardware virtualization support have been reduced. We’ve seen these sorts of changes with regularity over time, although it’s worth remembering that Intel’s CPU cores already have some of the richest feature sets and highest rates of per-clock throughput anywhere.

In a more obvious microarchitectural modification, Haswell’s execution core is even wider than its predecessor’s. The number of ports in the reservation station has risen from six to eight, with the added ports feeding a couple of new execution units: another integer ALU/branch unit and a store address unit. When possible, Haswell should be able to rearrange even more components of an incoming instruction stream and feed them through this wider machine in parallel. This feat is one of the most difficult tasks in modern CPU design, and Intel continues to make earnest efforts to push the boundaries forward.

The new core also raises the performance ceiling by adding several new instructions for faster processing of vector math. Sandy Bridge doubled the floating-point throughput of the prior Nehalem generation by trading SSE’s 128-bit-wide vectors for AVX’s 256-bit vectors. With AVX2, Haswell adds support for 256-bit integer vectors and introduces a fused multiply-add (FMA) instruction. Media-centric workloads frequently pair multiply and add operations together, and fusing the two into a single instruction can mean twice the FLOPS executed in a clock cycle. AMD’s Bulldozer and Piledriver cores were the first x86 processors to incorporate FMA support, but a Bulldozer/Piledriver module can only process a single 256-bit FMA per cycle. A Sandy Bridge core can produce a 256-bit add and a 256-bit multiply in each cycle, for a comparable peak FLOPS rate. Each Haswell core can execute two 256-bit FMAs per cycle, double Bulldozer’s and Sandy’s peaks—and four times Nehalem’s. The catch, of course, is that software will have to be recompiled to take advantage of the AVX2 and FMA instructions.
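To put numbers on that: each 256-bit FMA performs eight single-precision multiplies and eight adds at once, so two FMA units give a Haswell core 32 FLOPS per cycle, versus 16 for Sandy Bridge’s paired add and multiply units. Below is a minimal sketch of the sort of loop that benefits, written with the AVX2/FMA intrinsics from immintrin.h. The function name and the assumption of a vector-friendly element count are ours, purely for illustration.

```c
#include <immintrin.h>

/* Illustrative only: computes c[i] = a[i] * b[i] + c[i], eight floats
   at a time. Assumes n is a multiple of 8 and the code is compiled
   with -mfma for an FMA-capable CPU such as Haswell. */
void fused_madd(const float *a, const float *b, float *c, int n)
{
    for (int i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(&a[i]);
        __m256 vb = _mm256_loadu_ps(&b[i]);
        __m256 vc = _mm256_loadu_ps(&c[i]);
        /* One fused instruction: vc = va * vb + vc, with a single
           rounding step instead of two. */
        vc = _mm256_fmadd_ps(va, vb, vc);
        _mm256_storeu_ps(&c[i], vc);
    }
}
```

On Sandy Bridge, the same loop compiles to separate multiply and add instructions, which is why existing binaries see none of the doubled peak until they’re rebuilt.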

Haswell needed more bandwidth in order to service twice the FLOPS per clock, so Intel has revised its cache hierarchy. The L1 cache now has fewer restrictions related to banking. The L2 cache, which in Sandy Bridge was accessible on every other clock cycle, can now be read on each cycle. The result, Intel claims, is that Haswell’s L1 and L2 caches both offer roughly double the bandwidth at the same access latencies as Sandy Bridge.

One of the more intriguing new technologies built into Haswell is another set of ISA extensions known as TSX, an enabler for hardware transactional memory. TSX has the potential to ease the development of highly multithreaded applications. Unfortunately, Intel seems to be playing product segmentation games with this feature, disabling it selectively in key models of the Core i5 and i7, including the K-series processors targeted at enthusiasts. Intel has established a tradition of putting too many knobs and dials into its chips and fine-tuning its product offerings aggressively enough to be positively confusing and off-putting to the consumer. The decision to play this game with TSX support may be the worst (or is it the best?) example yet of overdoing it.
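For the curious, here is roughly what TSX’s restricted transactional memory (RTM) interface looks like in use. This is a hedged sketch of lock elision, not production code; among other things, real software must first confirm RTM support via CPUID, precisely because parts like the K-series chips have it disabled.

```c
#include <immintrin.h>   /* _xbegin/_xend/_xabort; compile with -mrtm */
#include <stdatomic.h>

static atomic_int lock_held;     /* 0 = free, 1 = taken */
static long shared_counter;

void increment(void)
{
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        /* Read the lock inside the transaction, so a conventional
           lock holder forces us to abort and stay correct. */
        if (atomic_load(&lock_held))
            _xabort(0xff);
        shared_counter++;        /* executes transactionally */
        _xend();                 /* commit */
        return;
    }
    /* Transaction aborted, or RTM unavailable: take the real lock. */
    while (atomic_exchange(&lock_held, 1))
        ;                        /* spin */
    shared_counter++;
    atomic_store(&lock_held, 0);
}
```

The appeal is that threads that don’t actually conflict never serialize on the lock; the hardware detects true collisions and rolls back only then.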

Iris dilates

Haswell’s CPU cores bring major changes, but Intel’s graphics team pushed out a new architecture in the last generation with Ivy Bridge. This time around, Haswell’s integrated graphics processor (IGP) is more of a refinement. The IGP is software-compatible with Ivy’s, though it has been tweaked for better efficiency. In spite of this continuity, graphics may still be the single biggest area of improvement from Ivy to Haswell, for several reasons.

For one, Intel has completely rewritten the software driver stack for its IGP, and it claims the new driver has some of the lowest CPU overhead in the industry. Combined with this new driver, Haswell’s IGP supports the latest graphics and compute APIs, including DirectX 11.1, OpenGL 4.0, and OpenCL 1.2. Also, as part of ever-creeping integration, the display output block has moved from the platform controller hub chip onto the Haswell CPU die. The new display block offers double the bandwidth and can drive displays with 4K resolutions via DisplayPort 1.2. Triple-display configs are possible now, too. And the IGP’s media processing block gets its own little “tock” worth of innovation, with claimed advancements in QuickSync video encoding speed and quality.

The biggest factor in Haswell’s graphical improvement, though, is simply giving us more of the same. This graphics architecture is modular, with various “slices” capable of being scaled up as needed. Intel is taking advantage of that fact with Haswell, producing three different versions of the IGP, known as GT1, GT2, and GT3, in increasing order of size and potency.

As with Ivy Bridge, the GT2 part will see duty in most desktop Core i5 and i7 models. Haswell’s GT2 IGP has 20 execution units, up from 16 in Ivy Bridge. (Each of those execution units has eight shader ALUs onboard, so you’re looking at 160 “cores” in Haswell GT2, if you’re talking like Nvidia and AMD do about these things.) Haswell GT2 also has the benefit of slightly higher graphics clock speeds and a more expansive top thermal envelope, 84W versus 77W.

GT3 doubles up on resources from GT2, with 40 execution units and thus 320 shader ALUs. The GT3 version of Haswell’s IGP will be deployed primarily in laptops, where the additional parallelism should allow for healthy performance gains at fairly modest clock frequencies. In ultrabook-class CPUs, Intel expects Haswell to achieve roughly 2X the performance of the prior Ivy Bridge-based parts.

The most interesting version of Haswell’s graphics, though, is something different. Known as GT3e, it’s the same GT3 graphics hardware backed up by a massive embedded DRAM cache. The 128MB eDRAM chip is manufactured by Intel on a specially tuned version of its 22-nm fab process, and it’s situated on the same package as the CPU. Employing a large eDRAM cache for graphics has little precedent in consumer PCs, but it does address one of the primary constraints that integrated graphics solutions face: the amount of memory bandwidth available onboard a CPU socket.

The eDRAM connects to the Haswell mother ship using a narrow, high-speed, on-package interconnect that offers about 50GB/s of bandwidth in each direction. That’s about 4X the bandwidth offered by a single channel of DDR3-1600 memory, and the bandwidth is additive, since the eDRAM can be accessed in parallel to main memory. On the CPU, this connection is routed through the system agent, as is the memory controller interface. The eDRAM chip is fully power managed and can be powered down when it’s not needed. Intel tells us it uses between half a watt and 1W at idle, while consuming about 3.5W at peak.
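The 4X figure survives a quick sanity check, assuming the usual 64-bit (8-byte) DDR3 channel width:

$$1600\ \mathrm{MT/s} \times 8\ \mathrm{B} = 12.8\ \mathrm{GB/s}, \qquad 50\ \mathrm{GB/s} \div 12.8\ \mathrm{GB/s} \approx 3.9$$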

The GT3e cache doesn’t work quite like you might expect. It’s not a frame buffer; frames are still written to DRAM. Instead, it’s a cache for the graphics data used to create frames—and only the graphics data that makes sense to cache. Intel’s graphics driver is smart enough to know not to cache certain things, like streaming vertex buffers, that would likely spill out of the cache or otherwise derive no benefit from caching.

What’s more, the eDRAM isn’t just a graphics cache, but a full-fledged L4 cache, accessed coherently and available to the entire CPU. The bandwidth it provides has the potential to benefit CPU workloads as well as graphics. Certain types of applications, like computational fluid dynamics and OpenCL media processing, are obvious candidates. Once you know that, you may be shocked to hear this: Intel has no plans to bring a GT3e-class chip to socketed desktop systems. These things are slated for BGA-style packages, for surface mounting into laptops and the like. I’m not sure why no one thought, “Hey, we have a CPU with a massive 128MB L4 cache. We should sell it to the people who want to buy it and put it into their systems.” But apparently that didn’t happen—or at least that guy didn’t persuade everybody else. Happily, we do have a GT3e system to test, so we can show you the benefits of its L4 cache in a full suite of graphics and CPU workloads.

Oh, right. To go along with the new graphics goodness Haswell brings to the table, Intel has coined a new brand name: Iris graphics. The wonderfully generic “Intel HD Graphics” is sticking around, attached to the IGPs for slower Haswell variants. The 28W GT3 parts, which will face off directly against Radeons and GeForces, get the new Iris brand name. The higher-end GT3e offering forgoes amateur status to become the Iris Pro 5200.

Dialing up more power management mojo

We’ve talked about the CPU cores and graphics, but we haven’t yet covered the most consequential new technology in Haswell. Although it makes for a fine desktop processor, much of the innovation in Haswell is centered on the quest to bring a full-fledged PC into smaller devices with better battery life. That means—points if you guessed it—even more integration of former platform elements into the CPU die.

This time around, the object of Intel’s attention is the power delivery portion of the platform. Haswell introduces what Intel calls “FIVR” for “fully integrated voltage regulator” or something along those lines. In short, the VRMs that used to live on the motherboard have migrated into the CPU silicon for all Haswell-based parts. Essentially, the entire power delivery control and power train for the CPU now lives on the CPU itself.

As with past integration efforts, bringing the VRMs onboard the CPU has some immediate and tangible benefits. Intel claims its integrated VRs can replace as many as seven different VRMs that would otherwise be scattered throughout a traditional PC platform, and it says Haswell systems should realize between 600 and 1000 mm² of physical space savings thanks to FIVR. The firm also expects that systems will be able to tolerate higher peak voltages without costing more to produce.

Because power is delivered into the chip and then distributed, Haswell is able to offer greater control over how power is routed and used. The chip has double the number of internal voltage rails of Sandy Bridge, and it can decide with much finer granularity where power should be delivered, depending on the present workload. The transitions between voltage states can happen dramatically faster with the VRs on die, too. Intel claims FIVR can ramp voltages 5-10X faster than external VRMs.

As a result, Intel has decoupled many of Haswell’s internal units to run at independent frequencies and voltages, where before they were linked. For example, Haswell’s CPU cores are no longer tied to the chip’s internal communications ring. During a heavy graphics workload, then, the IGP can pull the ring up to full speed and power in order to take advantage of the bandwidth it provides—without the CPU cores having to clock up, as well. Thanks to this finer granularity, Haswell can more effectively “shift power around” on the CPU die, granting one unit permission to run at a higher-than-usual frequency and voltage because another one is powered down. Intel talked a good game about moving power around on Sandy and Ivy Bridge, but apparently the mechanisms for doing so were fairly limited. For instance, those parts had a limited set of fixed ratios between CPU and IGP speeds, and transitions between states were relatively slow. With Haswell’s finer-grained power distribution and decoupled clocks, the power sharing has grown more sophisticated. Intel’s architects now talk about “transferring credits” to track how power and thermal capacity are transferred from one portion of the die to another.

Haswell’s VR integration happens just as another effort is bearing fruit. In 2007, Intel started a sweeping program to revamp the entire power management infrastructure of the PC platform. PCs weren’t originally built with the same set of assumptions as today’s mobile devices, where battery life is paramount. As a result, the various support chips scattered around a PC motherboard weren’t built to stay in a low-power state until needed or to make transitions between power states quickly. The communication protocols and the I/O devices that use them weren’t built to communicate info about power states, either. Intel has undertaken the daunting task of addressing that problem under a program with the wonderfully generic name “Power Optimizer.” This effort touches nearly every PC standard, PCI Express, USB, SATA, and DisplayPort among them.

The goal is to provide instrumentation about power states and state transitions across the entire system, so that the PC can tune itself for low-power operation. Each device can specify its latency tolerance—that is, how much delay it can tolerate when waking from a low-power state. Armed with this info, the system can decide how deep a sleep state to enter, depending on the current workload. When it’s time to wake up from sleep, either to service a timed event or to respond to user input, the system knows which devices to wake, and in what order, to resume operation safely.

To complement this capability, Haswell introduces a new platform-level “active idle” state that is very low power, known as S0ix. Systems based on Haswell should essentially default to this state, seeking it whenever possible, even between keystrokes, in order to pursue power savings aggressively. The user, application software, and the OS should be none the wiser. At last year’s IDF, one of Intel’s architects characterized S0ix as “automatic, continuous, fine-grained, and transparent to well-written software.” As you can see in the diagram above, in S0ix, the CPU cores and caches are power gated, DRAM is powered down, and even the FIVR rail is shut off. When needed, the CPU can recover from being completely powered down in about three milliseconds.

That’s vaguely amazing, huh? Of course, the benefits of Haswell’s new power plumbing will vary according to the platform. They should be most fully realized in ultrabooks with U-series processors. One key enabler for continuously low-power operation is a tech we’ve been hearing about for years called panel self-refresh. This nifty feature removes the burden of refreshing the display 60 times a second from the CPU or chipset. Instead, the LCD has its own small pool of DRAM and, when the screen’s contents are static, the LCD will continue displaying the same image without outside assistance. This simple optimization allows the CPU and system memory to remain in a low-power state continuously, until something more important happens. Plans for panel self-refresh have been kicking around for years, but the feature is now part of the embedded DisplayPort (eDP) 1.3 spec. Intel expects the first wave of Haswell-based laptops to make use of PSR, at last.


A BGA-style Haswell SoC package with platform controller hub chip onboard. Source: Intel.

In larger laptops and desktops, the Haswell chip and its PCH will live in separate packages, just like in past generations. Haswell processors headed for ultrabooks and convertible tablets will be mounted on the package above, which Intel calls a “1-chip BGA solution.” You can see the, er, two chips on this package, the oblong Haswell CPU and the smaller platform controller hub. This system-on-a-chip package will squeeze into power envelopes from 28W to as little as 6W. For ultrabooks, the U-series processors will fit a dual-core Haswell and its PCH into a 15W limit, versus 20W for Ivy Bridge (17W CPU + 3W PCH). For convertibles, the Y-series parts will slide into a 6W envelope, compared to 10W total for the comparable Ivy Bridge/PCH combo. Those are just peak numbers, though, and don’t reflect the benefits of Haswell’s active idle mojo. Intel expects Haswell U-series parts to offer 40-50% better battery life than the last generation, with over twice the battery run time in connected standby mode—at measurably higher performance levels.

Better overclocking! Kinda…

Many of Haswell’s power-saving measures won’t benefit desktop systems, but this new silicon does have some nice things to offer desktop users, including more flexibility for overclocking. Haswell exposes higher core multipliers and higher power, current, and voltage limits than Ivy Bridge, as well as new ratios for graphics clocks. DDR3 ratios above 2667 MT/s are available, too. (Our Asus board goes up to 3200 MT/s.) The most noteworthy change along these lines is the addition of base clock straps, which could in theory allow reasonably decent overclocking leeway on chips without unlocked multipliers; a 125MHz strap with a 32X multiplier, for instance, would yield 4GHz. CPU straps are available for 125, 167, and 250MHz. However, Intel has decided to lock the bclk ratios on most Haswell products. Only the K-series parts will have the new ratios enabled, which means the new bclk ratios will probably only be useful to truly extreme overclockers.

For more on Haswell overclocking, have a look at Geoff’s attempts to push the 4770K to new heights.

The new desktop Core lineup

| Model | Cores/threads | Base clock | Max Turbo clock | L3 cache | Graphics | Max graphics clock | TDP | Price |
|---|---|---|---|---|---|---|---|---|
| Core i7-4770K | 4/8 | 3.5GHz | 3.9GHz | 8MB | HD 4600 | 1250MHz | 84W | $339 |
| Core i7-4770 | 4/8 | 3.4GHz | 3.9GHz | 8MB | HD 4600 | 1200MHz | 84W | $303 |
| Core i5-4670K | 4/4 | 3.4GHz | 3.8GHz | 6MB | HD 4600 | 1200MHz | 84W | $242 |
| Core i5-4670 | 4/4 | 3.4GHz | 3.8GHz | 6MB | HD 4600 | 1200MHz | 84W | $213 |
| Core i5-4570 | 4/4 | 3.2GHz | 3.6GHz | 6MB | HD 4600 | 1150MHz | 84W | $192 |

The table above shows the heart of Intel’s new socketed desktop CPU lineup based on the quad-core Haswell GT2 chip. These are the standard 84W parts that supplant their 77W Ivy Bridge counterparts. Prices for the Haswell chips are generally up somewhat compared to the models they replace: the 4670K lists at $242, while the Core i5-3570K sells for $225, for example.

Intel turns a ridiculous number of knobs and dials when converting its chips into products. We’ve tried to capture the big-ticket items in the table. Enthusiasts will probably want to focus their attention on the K-series parts with their unlocked multipliers for overclocking. Just know that you’ll be giving up a handful of potentially useful features with the K series, including the TSX support we mentioned earlier, Intel VT-d for better device virtualization, and some enterprise-focused client features like vPro.

| Model | Cores/threads | Base clock | Max Turbo clock | L3 cache | Graphics | Max graphics clock | TDP | Price |
|---|---|---|---|---|---|---|---|---|
| Core i7-4770S | 4/8 | 3.1GHz | 3.9GHz | 8MB | HD 4600 | 1200MHz | 65W | $303 |
| Core i7-4770R | 4/8 | 3.2GHz | 3.9GHz | 6MB | Iris Pro 5200 | 1300MHz | 65W | NA |
| Core i7-4770T | 4/8 | 2.5GHz | 3.7GHz | 8MB | HD 4600 | 1200MHz | 45W | $303 |
| Core i7-4765T | 4/8 | 2.0GHz | 3.0GHz | 8MB | HD 4600 | 1200MHz | 35W | $303 |
| Core i5-4670S | 4/4 | 3.1GHz | 3.8GHz | 6MB | HD 4600 | 1200MHz | 65W | $213 |
| Core i5-4670T | 4/4 | 2.3GHz | 3.3GHz | 6MB | HD 4600 | 1200MHz | 45W | $213 |
| Core i5-4570S | 4/4 | 2.9GHz | 3.6GHz | 6MB | HD 4600 | 1150MHz | 65W | $192 |
| Core i5-4570T | 2/4 | 2.9GHz | 3.6GHz | 6MB | HD 4600 | 1150MHz | 35W | $192 |

Here’s the rest of the Haswell desktop lineup, largely consisting of lower-power S- and T-series offerings. The standout from the crowd is the Core i7-4770R. That’s a GT3e quad-core, with the eDRAM cache, in a BGA package aimed at large system builders. It won’t drop into a CPU socket. We should talk about another, related model we have for testing, but first…

Yep, another new platform and socket

With the integration of the VRMs into the Haswell die, there’s no way around it: these chips require a new socket type, and they’re getting an updated platform controller hub support chip, as well. Here’s a look at the LGA1150 socket and the chip package that drops into it.


The LGA1150 socket


“Ivy Bridge” Core i7-3770K (left) vs. “Haswell” Core i7-4770K (right)

The new socket isn’t much different in size from the old one, and the pin count is only down by five. There’s zero chance of upgrading to a Haswell CPU without upgrading your motherboard, but as socket transitions go, this one isn’t too disruptive. LGA1150 boards are compatible with LGA1155-style coolers, and they use the same DDR3 memory modules.


Block diagram of the Z87 platform. Source: Intel.

Intel is offering something like six variants of its revised platform controller hub for Haswell, targeted at different segments. The basic layout of the enthusiast-class one, the Z87, is shown above. That’s the one you’ll want to pair with a K-series processor in order to make overclocking happen. Compared to the Z77 PCH for Ivy Bridge, the Z87 adds more USB 3.0 ports (six rather than four) and all six of its SATA ports support 6Gbps transfer rates. The display block has been removed, too, since that’s now integrated into the CPU.

We have a ton of Z87 motherboard coverage coming down the pike, so stay tuned for more on that front.

GT3e in action: the Core i7-4950HQ

I said earlier that Intel won’t be offering a socketed version of Haswell with GT3 graphics and eDRAM. That’s unfortunate, but we do have an opportunity to see what we’re missing. Intel has loaned us one of its own test platforms so we can get a look at an Iris Pro graphics solution. Here’s a peek inside.

The CPU sitting under that puny little cooler is a Core i7-4950HQ, the top model in a five-part lineup of H-series CPUs. These processors are targeted primarily at larger laptops that might otherwise incorporate a discrete Radeon or GeForce GPU. I suspect we may see these H-series CPUs popping up in some desktop all-in-one configs, as well. Here’s a look at the 4950HQ’s basic specs.

| Model | Cores/threads | Base clock | Max 1-core Turbo clock | Max 4-core Turbo clock | L3 cache | Graphics | Max graphics clock | TDP | Price |
|---|---|---|---|---|---|---|---|---|---|
| Core i7-4950HQ | 4/8 | 2.4GHz | 3.6GHz | 3.4GHz | 6MB | Iris Pro 5200 | 1300MHz | 47W | $657 |

In spite of its relatively modest power budget and 2.4GHz base clock, the 4950HQ’s Turbo frequencies are pretty aggressive. In fact, they’re only 200-300MHz shy of the usual operating clocks for the speediest socketed 84W Haswell, the Core i7-4770K. With its Iris Pro graphics and 128MB eDRAM cache, the 4950HQ could very well be the most capable integrated graphics solution we’ve ever seen. We’ve tested it against various other Intel IGPs, the desktop version of AMD’s Trinity APU, and a low-end discrete graphics card, the Radeon HD 6570 with GDDR5 memory. Through the magic of re-branding, this card is very similar to one of AMD’s mobile GPU models, the Radeon HD 7670M, and should offer a nice basis for comparison.

Since we have it on hand, I’ve also elected to run the 4950HQ through our entire CPU test suite, to see how its enormous L4 cache impacts performance in CPU-centric workloads. That should be fun.

You’ll notice I haven’t included the 4950HQ system in our testing methods table. This test mule setup is a little weird, with a simulated battery and SO-DIMMs. Still, the end result is actually fairly similar to our other test systems, with 16GB of DDR3-1600 memory and a 240GB SSD running our standard Win8 Pro disk image.

Test notes

Every so often, we throw out all of our old CPU test results and start over with refreshed hardware and software. Haswell’s launch is a natural breaking point, so we took the opportunity to revise just about everything. We have retained the XFX Radeon HD 7950 graphics cards that we used last time around, since they’re still quite current. We’ve hung on to our Corsair AX650 modular PSUs, too.

Above is a shot of our Haswell test rig, one of the many systems here in Damage Labs that made this comparison possible. The motherboard is an Asus Z87-Pro, an enthusiast-class Haswell mobo. Thermaltake provided the CPU cooler, the NiC C5; it’ll dissipate up to 230W and avoids the DIMM clearance issues that some big tower coolers have. Corsair supplied the DIMMs, a pair of Vengeance Pro 8GB modules capable of running at 2400 MT/s at 1.65V.

Finally, peeking out from under the motherboard is a 240GB HyperX SSD from Kingston. Our older 128GB drives were getting too cramped once we installed the latest games. These new 240GB drives give us some room to breathe, while dramatically reducing the time it takes to reboot our systems between tests.

Our testing methods

We ran every test at least three times and reported the median of the scores produced.

The test systems were configured like so:

| Processor | AMD FX-8350 | AMD A10-5800K | Core i7-2600K, Core i7-3770K | Core i7-4770K |
|---|---|---|---|---|
| Motherboard | Asus Crosshair V Formula | MSI FM2-A85XA-G65 | Asus P8Z77-V Pro | Asus Z87-Pro |
| North bridge | 990FX | A85 FCH | Z77 Express | Z87 Express |
| South bridge | SB950 | | | |
| Memory size | 16 GB (2 DIMMs) | 16 GB (2 DIMMs) | 16 GB (2 DIMMs) | 16 GB (2 DIMMs) |
| Memory type | AMD Performance Series DDR3 SDRAM | AMD Performance Edition DDR3 SDRAM | Corsair Vengeance Pro DDR3 SDRAM | Corsair Vengeance Pro DDR3 SDRAM |
| Memory speed | 1600 MT/s | 1600 MT/s | 1600 MT/s | 1600 MT/s |
| Memory timings | 9-9-9-24 1T | 9-9-9-24 1T | 9-9-9-24 1T | 9-9-9-24 1T |
| Chipset drivers | AMD chipset 13.4 | AMD chipset 13.4 | INF update 9.4.0.1017, iRST 12.5.0.1066 | INF update 9.4.0.1017, iRST 12.5.0.1066 |
| Audio | Integrated SB950/ALC889 with Realtek 6.0.1.6873 drivers | Integrated A85/ALC892 with Realtek 6.0.1.6873 drivers | Integrated Z77/ALC892 with Realtek 6.0.1.6873 drivers | Integrated Z87/ALC1150 with Realtek 6.0.1.6873 drivers |
| OpenCL ICD | AMD APP 1124.2 | AMD APP 1124.2 | Intel SDK for OpenCL 2013 | Intel SDK for OpenCL 2013 |
| IGP drivers | | Catalyst 13.5 beta 2 | Intel 9.18.10.3177 | Intel 9.18.10.3177 |

They all shared the following common elements:

Hard drive: Kingston HyperX SH103S3 240GB SSD
Discrete graphics: XFX Radeon HD 7950 Double Dissipation 3GB with Catalyst 13.5 beta 2 drivers
OS: Windows 8 Pro
Power supply: Corsair AX650

Thanks to Corsair, XFX, Kingston, MSI, Asus, Gigabyte, Intel, and AMD for helping to outfit our test rigs with some of the finest hardware available. Thanks to Intel and AMD for providing the processors, as well, of course.

We used the following versions of our test applications:

Some further notes on our testing methods:

  • The test systems’ Windows desktops were set at 1920×1080 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.
  • We used a Yokogawa WT210 digital power meter to capture power use over a span of time. The meter reads power use at the wall socket, so it incorporates power use from the entire system—the CPU, motherboard, memory, graphics solution, hard drives, and anything else plugged into the power supply unit. (The monitor was plugged into a separate outlet.) We measured how each of our test systems used power across a set time period, during which time we encoded a video with x264.
  • After consulting with our readers, we’ve decided to enable Windows’ “Balanced” power profile for the bulk of our desktop processor tests, which means power-saving features like SpeedStep and Cool’n’Quiet are operating. (In the past, we only enabled these features for power consumption testing.) Our spot checks demonstrated to us that, typically, there’s no performance penalty for enabling these features on today’s CPUs. If there is a real-world penalty to enabling these features, well, we think that’s worthy of inclusion in our measurements, since the vast majority of desktop processors these days will spend their lives with these features enabled.

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

IGP performance: Tomb Raider

We’ll kick things off with some IGP performance tests, to see how Haswell’s revamped graphics options perform. As usual, we’re testing games by measuring each frame of animation produced. For the uninitiated, start here for an intro to our methods.
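As a rough illustration of where the latency-focused numbers on the following pages come from (this is our own simplified sketch, not the actual analysis tooling), the central idea is to accumulate the portion of each frame time beyond a "badness" threshold rather than just averaging frame rates:

```c
#include <stdio.h>

/* Simplified sketch: given per-frame render times in milliseconds,
   report an FPS average plus the time spent beyond a 50-ms
   "badness" threshold. */
void summarize_frames(const double *frame_ms, int n)
{
    double total_ms = 0.0, beyond_50_ms = 0.0;
    for (int i = 0; i < n; i++) {
        total_ms += frame_ms[i];
        if (frame_ms[i] > 50.0)
            beyond_50_ms += frame_ms[i] - 50.0;  /* only the excess */
    }
    printf("FPS average: %.1f\n", n / (total_ms / 1000.0));
    printf("Time beyond 50 ms: %.1f ms\n", beyond_50_ms);
}
```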



Apparently, caching can do good things for graphics. The Core i7-4950HQ with Iris Pro graphics leads the pack in the FPS average sweeps—and in each of our latency-focused performance metrics, as well. The GT3 with eDRAM is even faster than the discrete Radeon HD 6570 GDDR5 when paired with the same four CPU cores. Although it only averages 37 FPS, Tomb Raider is quite playable at these settings on the Iris Pro—notice that the IGP doesn’t spend any time working on frames that take longer than 50 milliseconds to produce. There’s very little waiting on individual frames to be rendered, as the Iris Pro 5200’s nice, flat latency curve also indicates.

The regular desktop GT2 part, the Core i7-4770K, also makes substantial strides compared to Ivy Bridge, nearly pulling even with AMD’s A10-5800K. As expected, both of these desktop IGPs benefit from the move from DDR3-1600 (our stock config) to DDR3-2133 memory. Still, of the two, the AMD APU hangs on to the lead.

IGP performance: GRID 2

Here’s a first. We have a pre-release version of GRID 2 to test, and it’s because Intel has been working closely with Codemasters to optimize this game for Iris Pro graphics. GRID 2 comes out of the box with some distinctive visual effects, such as order-independent transparency, made possible via special hooks in the Intel IGP hardware. We didn’t test performance with those effects enabled, since they wouldn’t work on other brands of GPUs, but it’s a sign of change when Intel is putting in this sort of work with game developers. We did enable the “CMAA” antialiasing mode, which looks to be a post-process antialiasing method a la MLAA or FXAA. Intel tells us it co-developed CMAA with Codemasters, and this feature works on Radeons as well as on Iris.

Forgive me for not having a video of this test session recorded. We just tested in the opening race of the game for 60 seconds.



The 4950HQ again takes the top spot in the FPS average, this time by a wider margin. You’ll notice in the frame time plots above that there are occasional spikes as high as 50 ms or more on the Iris Pro and on the other Intel IGPs. Those spikes show up in our latency-focused metrics, where the Iris Pro’s lead isn’t quite so commanding. Still, the spikes are relatively minor, at least on the Iris Pro. The 4770K/HD 4600 combo suffers a little more compared to AMD’s A10-5800K, which runs this game very smoothly.

IGP performance: Metro: Last Light

Ok, here’s a bit of a gotcha. How well does Intel’s shiny new IGP handle a brand-new, just-released game like Metro: Last Light? Are the graphics drivers up to the task? To find out, we fired up Last Light‘s built-in benchmark.

Well, not bad, folks. The Intel IGPs managed to run this test well, without obvious visual artifacts, and the Iris Pro cranked out the highest frame rate of the group. That experience is consistent with what we’ve seen from Iris Pro elsewhere, in our limited time with it. Each of the games we’ve tried has run without issue, with decent image quality, and with surprising fluidity given our usual expectations for an Intel IGP.

Slowly but surely, Intel appears to be bringing its graphics solutions up to par. With Iris Pro, it may even succeed in capturing some of the space where low-end mobile discrete GPUs have traditionally played.

Power consumption and efficiency

Since power efficiency is another one of Haswell’s big highlights, we’ll head there next. The workload for this test is encoding a video with x264, based on a command ripped straight from the x264 benchmark you’ll see later. The first set of plots comes from our standard test systems with discrete Radeon graphics. The second set shows system-level power consumption with only the integrated graphics processors in use.

Interesting. Our 4770K-based test system offers a nice decrease in idle power draw compared to the Ivy-based 3770K, but only with integrated graphics. Once you pop in a graphics card, the reduction in power draw is only about 5W—and the 4770K system doesn’t even have the lowest idle power draw of the desktop processors we’ve tested. That distinction goes to the AMD A10 APU.

By spec, the 4770K has a 7W higher power ceiling than the 3770K. In this test, the difference in total system power consumption boils down to just 1-2W. The AMD CPUs, meanwhile, consume quite a bit more juice.

We can quantify efficiency by looking at the amount of energy used, in kilojoules, during the entirety of our test period, when the chips are busy and at idle.

Perhaps our best measure of CPU power efficiency is task energy: the amount of energy used while encoding our video. This measure rewards CPUs for finishing the job sooner, but it doesn’t account for power draw at idle.
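Put formally, task energy is just power integrated over the duration of the encode:

$$E_{\mathrm{task}} = \int_0^{T_{\mathrm{encode}}} P(t)\,dt$$

A chip that draws slightly more power but finishes much sooner can still come out ahead, because the integration window shrinks.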

The 4770K requires the least power to complete the encoding task even though its peak power draw is slightly higher than the 3770K’s. Credit for that win should go to the AVX2 and FMA extensions in the Haswell core, which are supported in the version of the x264 encoder we’re using. They help the 4770K finish the encoding task sooner.

If you want a single set of numbers to summarize AMD’s struggles of late, look no further than the chart above. Even though the FX-8350 also supports FMA, it requires more than twice the energy to complete the same task as the 4770K. The FX processor’s absolute performance is lower and its peak power draw is substantially higher. Not a recipe for success.

Memory subsystem performance

Before moving into the rest of our application-focused CPU tests, we can briefly look at some synthetic benchmarks that measure targeted aspects of system performance.

All three recent generations of Intel’s processors do a fine job of extracting bandwidth from a pair of DDR3 DIMMs, and there’s not much difference between them. I believe looser memory timings on those SO-DIMMs are responsible for the 4950HQ’s relatively weak showing.

This test is multithreaded, so it captures the bandwidth of all caches on all cores concurrently. The different block sizes step us down from the L1 and L2 caches into L3 and main memory. The only hitch is that the capacity of all four caches on a quad-core CPU must be taken into account when reading this plot. For instance, at the 128KB block size above, we’re just starting to spill out of Haswell’s four 32KB L1 caches.

The most remarkable thing about these results is that Haswell really does appear to have twice the L1 cache bandwidth of Ivy and Sandy Bridge, in a measurable way—roughly one terabyte per second of internal bandwidth. We don’t see improvement to nearly the same extent in the L2 cache, for whatever reason, though.
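That terabyte-per-second figure passes a back-of-the-envelope check, if we assume each core sustains two 256-bit (32-byte) loads per cycle at a turbo clock around 3.9GHz:

$$4\ \mathrm{cores} \times 2 \times 32\ \mathrm{B/cycle} \times 3.9\ \mathrm{GHz} \approx 1.0\ \mathrm{TB/s}$$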

Can we also measure the impact of the 4950HQ’s gargantuan eDRAM cache? Let’s zoom in a bit and have a look.

Yep. The impact of the 4950HQ’s L4 cache is evident at the 16MB and 64MB test block sizes. Bandwidth is almost exactly doubled at the 64MB block size compared to the 4770K, which has to rely on main memory alone.

SiSoft has a nice write-up of this latency testing tool, for those who are interested. We used the “in-page random” access pattern to reduce the impact of prefetchers on our measurements. This test isn’t multithreaded, so it’s a little easier to track which cache is being measured. If the block size is 32KB, you’re in the L1 cache. If it’s 64KB, you’re into the L2, and so on.
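The heart of a test like this is a dependent pointer chase, something like the sketch below (our illustration, not SiSoft's actual code). Because each load's address comes from the previous load, the CPU cannot overlap the accesses, so the time per hop approximates the latency of whichever cache level the buffer fits into.

```c
/* Illustrative pointer chase. The buffer must be pre-linked into a
   random cycle of pointers; keeping successive hops within a page
   (the "in-page random" pattern) sidesteps TLB misses and defeats
   the prefetchers. */
void *chase(void **start, long hops)
{
    void **p = start;
    while (hops--)
        p = (void **)*p;   /* each load depends on the previous one */
    return p;              /* returning p defeats dead-code elimination */
}
```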

As advertised, access latencies for Haswell’s L1 and L2 caches are no higher than Sandy’s and Ivy’s. The L3 cache appears to be slightly slower, though. Notice also how the 4950HQ’s L4 cache mitigates access latencies at the 16MB through 64MB block sizes.

Some quick synthetic math tests

The folks at FinalWire have built some interesting micro-benchmarks into their AIDA64 system analysis software. They’ve updated several of these tests to make use of new instructions on the latest processors, including Haswell. Of the results shown below, PhotoWorxx uses AVX2 (and falls back to AVX on Ivy Bridge, et al.), CPU Hash uses AVX (and XOP on Bulldozer/Piledriver), and FPU Julia and Mandel use AVX2 with FMA.

The big, eye-popping gains here come courtesy of AVX2 with FMA in the Julia and Mandel tests, where the 4770K is about 50% faster than Ivy Bridge and over twice as fast as the FX-8350. This isn’t quite the doubling of peak FLOPS throughput that Haswell can produce in theory, but it’s pretty good for delivered performance. Also, I think we can safely say the 4950HQ’s L4 cache doesn’t provide much benefit here.

One more test here, just because it’s interesting. Tamas Miklos of FinalWire pointed this fact out to us, and we’ve confirmed it with our own testing: Haswell is a little slower than Ivy Bridge when running SinJulia, which uses extended-precision x87 code. Not a big deal, especially in light of the gains Haswell provides with new instructions, but it’s something to note.

Crysis 3



Although it might not seem so since four of the five CPUs tested average 60 FPS or higher, Crysis 3 is actually a fairly CPU-intensive game, at least in spots. The biggest frame time spikes in the plots above happen at the two places where I unleash an exploding arrow at the bad guys. Those spikes tend to be larger on the slower processors. The Core i7-4770K isn’t much faster than the 2600K or 3770K, but it does reduce the duration of the slowdowns encountered with every one of the CPUs. Trouble is, the differences are really small.

Far Cry 3



On the three generations of Intel CPUs, only a handful of frames take longer than 16.7 ms to produce. In other words, this game runs at a near-constant 60 FPS, save for the last 1-2% of frames rendered. The 4770K is measurably better than the 3770K and 2600K in our latency-focused metrics, but once again, the differences are so small, they’re not meaningful.

The AMD CPUs are pretty competent here, as well. They spend very little time beyond our 50-ms “badness” threshold, where users are more likely to perceive slowdowns. Still, both are quite a bit slower than any of the past few iterations of Intel’s desktop quad-core flagship.

Tomb Raider



Whoa. Here’s a game that’s well optimized or just doesn’t require a lot of CPU power. The plots for the three Intel CPUs and the FX-8350 are pristine, without a single spike above 16.7 ms from any of them. Only the A10-5800K struggles at all, and then only in relative terms.

Metro: Last Light

The built-in benchmark for Last Light shows us the same sort of result we saw in our other gaming tests. Moving on…

Productivity

Compiling code in GCC

Our resident developer, Bruno Ferreira, helped put together this code compiling test. Qtbench tests the time required to compile the QT SDK using the GCC compiler. Here’s Bruno’s note about how he built it:

QT SDK 2010.05 – Windows, compiled via the included MinGW port of GCC 4.4.0.

Even though apparently at the time the Linux version had properly working and supported multithreaded compilation, the Windows version had to be somewhat hacked to achieve the same functionality, due to some batch file snafus.

After a working multithreaded compile was obtained (with the number of simultaneous jobs configurable), it was time to get the compile time down from 45m+ to a manageable level. This required severe hacking of the makefiles in order to strip the build down to a more streamlined version that preferably would still compile before hell froze over.

Then some more fiddling was required in order for the test to be flexible about the paths where it was located. Which led to yet more Makefile mangling (the poor thing).

The number of jobs dispatched by the Qtbench script is configurable, and the compiler does some multithreading of its own, so we did some calibration testing to determine the optimal number of jobs for each CPU.

The 4770K isn’t much faster than the 3770K, percentage-wise. Still, I like these benchmarks that show time to completion, because we know that completing this particular compile task will take 34 seconds less of your time on the 4770K. For the right user, that may be very much worth the price of entry.

TrueCrypt disk encryption

TrueCrypt supports acceleration via Intel’s AES-NI instructions, so AES encryption, in particular, should be very fast on the CPUs that support those instructions. We’ve also included results for another algorithm, Twofish, that isn’t accelerated via dedicated instructions.
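AES-NI helps because it provides one instruction per AES round, collapsing the cipher's hot loop into a handful of instructions. Here's a hedged sketch of an AES-128 block encryption built on those instructions; key expansion is omitted, and the round keys are assumed to be prepared already.

```c
#include <wmmintrin.h>   /* AES-NI intrinsics; compile with -maes */

/* Illustrative AES-128 block encryption: initial key whitening, nine
   full rounds, then one final round. A real implementation also
   performs key expansion (e.g., via _mm_aeskeygenassist_si128). */
__m128i aes128_encrypt_block(__m128i block, const __m128i rk[11])
{
    block = _mm_xor_si128(block, rk[0]);           /* AddRoundKey */
    for (int i = 1; i < 10; i++)
        block = _mm_aesenc_si128(block, rk[i]);    /* one full round */
    return _mm_aesenclast_si128(block, rk[10]);    /* final round */
}
```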

7-Zip file compression and decompression

SunSpider JavaScript performance

With the puzzling exception of 7-Zip compression, the 4770K achieves modest but consistent gains over the 3770K in our productivity-oriented tests.

Video encoding

x264 HD video encoding

We’ve devised a new x264 test, which involves one of the latest builds of the encoder with AVX2 and FMA support. To test, we encoded a one-minute, 1080p .m2ts video using the following options:

--profile high --preset medium --crf 18 --video-filter resize:1280,720 --force-cfr

The source video was obtained from a repository of stock videos on this website. We used the Samsung Earth from Above clip.

Handbrake HD video encoding

Our Handbrake test transcodes a two-and-a-half-minute 1080p H.264 source video into a smaller format defined by the program’s “iPhone & iPod Touch” preset.

Image processing

The Panorama Factory photo stitching
The Panorama Factory handles an increasingly popular image processing task: joining together multiple images to create a wide-aspect panorama. This task can require lots of memory and can be computationally intensive, so The Panorama Factory comes in a 64-bit version that’s widely multithreaded. I asked it to join four pictures, each eight megapixels, into a glorious panorama of the interior of Damage Labs.

picCOLOR image processing and analysis

picCOLOR was created by Dr. Reinert H. G. Müller of the FIBUS Institute. This isn’t Photoshop; picCOLOR’s image analysis capabilities can be used for scientific applications like particle flow analysis. Dr. Müller has supplied us with new revisions of his program for some time now, all the while optimizing picCOLOR for new advances in CPU technology, including SSE and AVX extensions, multiple cores, and Hyper-Threading. Many of its individual functions are multithreaded.

At our request, Dr. Müller graciously agreed to re-tool his picCOLOR benchmark to incorporate some real-world usage scenarios. As a result, we now have four tests that employ picCOLOR for image analysis: particle image velocimetry, real-time object tracking, a bar-code search, and label recognition and rotation. For the sake of brevity, we’ve included a single overall score for those real-world tests.

The pattern holds through a range of different application types: Haswell is a little faster than Ivy Bridge, but the improvements in non-AVX2/FMA-enhanced applications are generally fairly minor. You’ve probably also noticed that the 4950HQ’s massive L4 cache hasn’t granted it any noticeable advantage. Fortunately, that’s about to change.

3D rendering

LuxMark

Because LuxMark uses OpenCL, we can use it to test both GPU and CPU performance—and even to compare performance across different processor types. OpenCL code is parallel by nature and relies on a just-in-time compiler, so it should adapt well to new instructions. For instance, Intel and AMD offer integrated client drivers for OpenCL on x86 processors, and they both support AVX. The AMD APP driver even supports Bulldozer’s and Piledriver’s distinctive instructions, FMA4 and XOP. We’ve used the Intel ICD on the Intel processors and the AMD ICD on the AMD chips, since that was the fastest config in each case.
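The ICD (installable client driver) mechanism is what lets the Intel and AMD runtimes coexist on one machine: each registers itself as an OpenCL platform, and the application chooses among them at run time. A minimal sketch of that first step, with error handling omitted for brevity:

```c
#include <CL/cl.h>
#include <stdio.h>

/* List the OpenCL platforms (ICDs) registered on the system, which is
   how a benchmark ends up running on Intel's CPU runtime or AMD's. */
int main(void)
{
    cl_platform_id platforms[8];
    cl_uint count = 0;

    clGetPlatformIDs(8, platforms, &count);
    for (cl_uint i = 0; i < count; i++) {
        char name[256];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                          sizeof(name), name, NULL);
        printf("Platform %u: %s\n", i, name);
    }
    return 0;
}
```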

We’ll start with CPU-only results.

So, two things. First, the results for the 4950HQ are not a fluke. I had some worry that the Iris Pro graphics drivers might have installed a different version of the Intel OpenCL ICD that was responsible for the 4950HQ’s victory, but that’s not so. The 4950HQ is also faster than the 4770K when using AMD’s APP driver. Looks like that L4 cache is finally showing us some potential.

Second, it appears Intel’s OpenCL ICD doesn’t yet support FMA on Haswell. I’d expect that instruction to come in very handy here. Perhaps a future update will correct that oversight.

Now we’ll see how a Radeon HD 7950 performs when driven by each of these CPUs.

It’s hard to beat a modern GPU for this sort of FLOPS-intensive work. I’d sure like to see a Haswell with proper FMA support make a run at it, though.

We can try combining the CPU and GPU computing power by asking both processor types to work on the same problem at once.

Now, let’s pull the discrete GPU out of the test systems and see how their IGPs perform in OpenCL.

AMD has based its sales pitch for APUs on converged computing and OpenCL acceleration. Looks to me like Intel isn’t willing to cede any ground to its competitor here. Using its eDRAM cache, the Iris Pro 5200 IGP nearly triples the performance of the A10’s Radeon IGP. Even the scaled-back HD Graphics configs in the 3770K and 4770K outperform the A10’s integrated graphics.

Cinebench rendering

The Cinebench benchmark is based on Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. This test runs with just a single thread and then with as many threads as CPU cores (or threads, in CPUs with multiple hardware threads per core) are available.

POV-Ray rendering

Scientific computing

MyriMatch proteomics

MyriMatch is intended for use in proteomics, or the large-scale study of proteins. You can read more about it here.

STARS Euler3d computational fluid dynamics

Euler3D tackles the difficult problem of simulating fluid dynamics. Like MyriMatch, it tends to be very memory-bandwidth intensive. You can read more about it right here.

At the very end of our regular suite of benchmarks, we have a bright and shining example of how the 4950HQ’s eDRAM cache can make a difference. True to expectations, it’s good for computational fluid dynamics. Ok, so maybe that’s not the best justification for bringing a GT3e config to a socketed CPU, but I’d still like to see that happen.

Legacy comparisons

I know what you’re thinking: what I need in my life, right at this moment, is more benchmarks. Well, we’ve got you covered. Many of you have asked for broader comparisons with older CPUs, so you can understand what sort of improvements to expect when upgrading from an older system. We can’t always re-test every CPU from one iteration of our test suite to the next, but there are some commonalities that carry over from generation to generation. We might as well try some inter-generational mash-ups.

Now, these comparisons won’t be as exact and pristine as our other scores. Our new test systems run Windows 8 instead of Windows 7, for instance, and have higher-density RAM and larger SSDs. We’re using a slightly newer version of POV-Ray, too. Still, scores in the benchmarks we selected shouldn’t vary too much based on those factors, so… let’s do this.

Our first set of mash-up results comes from our last-gen CPU test suite, as embodied in our FX-8350 review from last fall. This set will take us back at least three generations for both Intel and AMD, spanning a price range from under $100 to $1K.

Productivity

Image processing

3D rendering

Scientific computing

Nice. I like how the Haswell chip with the graphics cache nearly beats out the six-core, quad-channel, thousand-dollar Core i7-3960X in the CFD test.

Legacy comparisons, continued

That was a nice start on the last page, but we really can go broader than that. This next set of results includes fewer common benchmarks, but it takes us as far back as the Core 2 Duo and, yes, a chip derived from the Pentium 4: the Pentium Extreme Edition 840. Also present: dual-core versions of low-power CPUs from both Intel and AMD, the Atom D525 and the E-350 APU. We retired this test suite after the 3960X review in the fall of 2011.

Image processing

3D rendering

Not enough? Back in 2001, we reviewed the Pentium 4 1.7GHz CPU, and we had it render the same “chess2” scene in POV-Ray that the 4770K finishes in 50 seconds above. The execution time? Over 13 minutes. Progress is good.

Conclusions

As always, the rafts of data on the preceding pages can be boiled down to one simple price-performance plot, for those folks considering a purchase. Here’s how the Core i7-4770K stacks up against the two prior generations of Intel processors—and the closest competition from AMD—on one of our famous value scatter plots:

On the desktop, the generational progress from Ivy Bridge to Haswell is fairly modest, as we’ve noted throughout our analysis. This chip doesn’t even move the needle much on power efficiency in its socketed form. For those folks who already own a Sandy or Ivy Bridge-based system, there’s probably not much reason to upgrade—unless, of course, your present system has become a serious time constraint. We did shave off 34 seconds when compiling Qt on the 4770K, after all, and we’ve illustrated that much larger speed gains are possible in floating-point intensive applications that make use of Haswell’s FMA capability.

I’ll also take issue with our performance summary graph on this basis: many of our tests are widely multithreaded, which is why the FX-8350 with eight integer cores fares well, nearly matching the 2600K. Trouble is, the FX-8350’s per-thread performance is relatively weak. By contrast, the Core i7-4770K has the highest per-thread performance we’ve ever measured. Improvements of that sort don’t come easily. They are more important to the way most people use their computers every day than the gains to be had by scaling out core counts and such.

With that said, Haswell’s integrated graphics have made bigger strides than the CPU cores this time around. The HD 4600 IGP in the Core i7-4770K isn’t quite as fast as the one in AMD’s A10-5800K, but it comes perilously close to wiping out AMD’s one consistent advantage in this class of chip. And the Iris Pro graphics solution in the Core i7-4950HQ not only wipes out that advantage but threatens low-end discrete mobile GPUs, as well.

Haswell’s true mission is to squeeze into thinner and lighter laptops and tablets, where it can provide something close to a desktop-class user experience with all-day battery life. Much of the new technology developed to make that happen isn’t present in the desktop versions of Haswell. That’s fine, as far as it goes. Focusing on mobile applications surely makes good business sense at this point. We’ll take what gains we can get on the desktop, where the user experience is already very satisfying, and we are very much looking forward to getting our hands on some Haswell-based mobile systems to see how much more of its promise this architecture can fulfill.

Maybe I’ll post some more benchmark graphs on Twitter. Follow me and see.

Comments closed
    • Nec_V20
    • 6 years ago

    It’s time to take a deep breath, as I have over the past years.

    Normally the systems I build for myself start becoming a bit long in the tooth, performance wise, about this point (my system has been running 24/7 for 951 days 3 hours) and in another year or so I would be looking to replace it with something with a lot more performance.

    That scenario is not even a blip on the horizon. I got a great deal on a 965x and then about a year and a half ago I got another offer I couldn’t refuse for a 990x. The 965 I combined with some other bits and bobs and gave the system to a friend who had become unemployed.

    For the second time in my memory the performance is not being gobbled up by the OS/Games/Applications.

    The other time this happened was when I did not feel the need to go with the 386 generation and my Harris 25 MHz 286 (which ran the software of the time a lot faster than the 33 MHz 386) did me very nicely until I built my new system based on a 486 DX50.

    The desktop market is not dead per se, there is just no perceived performance need on the part of many to invest in a new system. Personally I don’t even remotely feel any need to move on from the platform I am using, and I am most certainly not looking enviously at the new Haswell generation in comparison to the one I have.

    The “Legacy Comparisons” portion of the article more than adequately illustrates my point.

    The system which I presently have will probably be the first one ever where I will only be changing the platform because the CPU dies on me. Neither AMD nor Intel have given me any incentive to do otherwise.

    • chuckula
    • 7 years ago

    OK, so there have been I don’t know how many posts slicing & dicing Haswell to conclude that it’s awesome/craptacular/meh/etc.

    Guess what? You know who thinks Haswell is pretty good? AMD that’s who.

    Behold the confirmation of those rumors about Centurion: [url<]http://www.techpowerup.com/184960/amd-centurion-is-fx-9000-scrapes-5-00-ghz.html[/url<] A couple of notes: 1. 5 GHz [b<]is the turbo frequency, not the base frequency[/b<] as some people had hoped. 2. The TDP is... [b<][i<]TWO-HUNDRED AND TWENTY WATTS[/i<][/b<]* 3. Price still isn't listed, so maybe AMD can get away with making it just a touch cheaper than Haswell... if they are smart. Otherwise, AMD has officially decided that the Pentium 4 EE approach was a good idea worth emulating.** * I absolutely cannot wait to see the AMD fanboys spin that number as being totally unimportant. ** If AMD is dumb enough to price these above the 3930K, I will personally buy buckets of movie-theater popcorn for anyone who wants to have a dramatic reenactment of the fanboy "value arguments" for why Centurion is really "smoother" than the 3930K despite still losing practically every benchmark to the 3930K even with those ridiculous clocks. Actually.. the fanboys have a point since the fans needed to cool that beast will double as smoothy makers!

      • Deanjo
      • 7 years ago

      Not sure why they would name it the Centurion, I see those numbers and immediately think chupacabra would be a more worthwhile name.

        • chuckula
        • 7 years ago

        I think Strong Bad said it best: FX-9000, THE BURNINATOR!

          • jihadjoe
          • 7 years ago

          My head asplode!

      • flip-mode
      • 7 years ago

      You’re too bombastic. Can’t you attention-whore more subtly? The FX 9000 is a single chip, not an entire chip lineup. Intel would be smart to release just such a chip based on Haswell: clock the thing to the max, TDP be damned, and sell it for $1500. And it would probably sell a bunch of them.

      If AMD can release such a chip and charge any kind of premium for it then the company is smart to do so. Yes, the TDP of 220 watts is absurd to the majority of people, but AMD doesn’t need to sell it to the majority of people.

        • chuckula
        • 7 years ago

        "Can't you attention-whore more subtly?"

        Isn't that sort of a contradiction?

        It's one thing to have an insane halo product at insane prices. I get the draw of that. A good example is the 3960X (and now 3970X), which are obviously not winning any price/performance battles even against Intel's own chips. However, with those halo products, at the very minimum you can say "Gee, I paid way too much, but at least I'm dominating everybody for a few months."

        The problem with Centurion at the rumored prices* is that you get the halo-product price, but without the halo-product bragging rights. Let's be completely serious here: we already know exactly how this part will perform from TR's own overclocking guide that took the FX-8350 up to similar clockspeeds. Where are the bragging rights going to be for these chips?

        * One note: AMD has been big into combos lately.. if $795 gets you Centurion + a 7970GHz, then it is still expensive, but at least it's not insane and I can actually respect the marketing idea behind it. If there's no combo, but it's substantially cheaper than $795, that's also OK to a point.

    • Damage
    • 7 years ago

    abw has been banned for violating forum rules 2, 10, and probably 13. Don’t be a jerk, kids.

      • JohnC
      • 7 years ago

      Thank you!

        • Celess
        • 7 years ago

        But.. I was just about to test the leopard in the jungle…

        Edit: sorry wrong troll.

        • chuckula
        • 7 years ago

        As I am somewhat known for the occasional [cough] troll, I’m actually pretty tolerant of fanboys from a wide spectrum… I’ll fight with them, but I don’t want to stop them from posting.

        ABW’s problem was that he wasn’t just biased, he knew he was lying in almost ridiculously transparent ways, and without a hint of satire or tongue-in-cheek delivery either. Then, instead of playing along in good fun, anyone who disagreed with his trolls became the target of content-free personal insults.

        Despite what some people think, I believe there is great value in a good flamefest. It may not look pretty, but you can actually learn a lot when two well-informed sides argue. However, ABW’s flames weren’t adding any information or any insight to the discussion, unfortunately.

          • maxxcool
          • 7 years ago

          “As I am somewhat known for the occasional [cough]” /shocked!/ 🙂

          • moose17145
          • 7 years ago

          I agree that the occasional troll thread can be kind of fun / entertaining as long as it isn’t a constant thing (especially if obvious troll is obvious, OR if successful troll is epically successful)… but yea abw was taking it way too far and was just being a general jerk.

            • Bensam123
            • 7 years ago

            Except that Chuck's 'trolling' has been going on for months. Some of his stuff doesn't even qualify as trolling, it's so terribly offensive. It hasn't even been till recently that he's said 'I'm just trolling lolers guys'. People revert to the 'just j/ks guys' when they know they're overstepping a line and risking some serious punishment, like a ban. If you've seen some of the replies he's made to me, it's downright disrespectful, hateful, an attempt to bash me over whatever small little thing he can find. It's not even making fun of AMD or whatever, it's making fun of *me*, and it has gotten to be very personal.

            More disappointing is that some of the TR regulars (not naming names) have started to bandwagon with him because he's playing buddy with them, instead of using common sense. The only reason he hasn't gotten the same treatment is because he sucks up for leeway points. In other words, he upvotes whoever he can find and gives some sort of semi-agreeable lip service so people will upvote him and take his side in the future when he decides to go full ass-hole on people. Like right now, he's trying to play this off so Damage doesn't notice him (he did it both times Damage listed a ban). I've actually found what he does to be far more offensive than either of the two corporate shills that have been banned, perhaps because I've been the target of it for the last six months. Just look at the replies he made to my post further down. They weren't even semi-controlled until the last few, after Damage started banning people.

            You know, I consider myself a fair, open, honest, and rather caring individual, even online. I'll disagree with people, but disagreements happen all the time. It's normal, as everyone is different. However, even while disagreeing with people I will give them respect and decency, civil discourse. I don't break down into personal insults (usually; it's sad Chuck's 'trolling' has brought me down to such a level in some of my replies to him), and I respect whoever I'm talking to, regardless of their opinion. Chuck is one of four people I've met online that I really have no desire to ever talk to or associate with again. I already put him on ignore on the forums, and I would put him on ignore on the news page if I could. I have never put anyone on TR on ignore, even some people that definitely brush me the wrong way. Nothing positive has come out of any of my discussions with him, and I've tried everything.

            Over the years I've come to realize some people have no real interest in actually talking with you; they're just trying to make you feel bad or use you in other ways (such as making themselves look better, which is especially popular in online communities). What Chuck has been doing (to me at least) is borderline harassment, if not full-blown harassment. If you don't believe me, scroll down and look at the replies on my post. It's really a shame he can suck up a bit and get away with it. How do you even determine when he's trolling and when he's being a 'good guy'? This has been going on for so long that there is no real clear distinction between the two, and the only reason there is another side is because he recently tried to make a case for himself as just 'joking' all the time.

            You know, NeelyCam trolls. But NeelyCam also acts like a decent human being most of the time, even in arguments. NeelyCam doesn't go out of their way to constantly bash one person or single them out, and NeelyCam adds things of merit to the conversation.

            "but yea abw was taking it way too far and was just being a general jerk."

            Have you ever considered that there may have been someone constantly antagonizing him till he reached that state? Chuck doesn't just troll, he tries to get under people's skin and break them down. As I said before, there are some people that aren't worth talking to, and a lot of the things that he's said to me have made me want to reach through the screen and throttle him. This is coming from a person who is very much in control of who they are. Not because he's been 'right', but because he constantly tries to make you feel bad, look bad, and then acts like a 'good guy' at the end of it.

            Our earliest arguments were him constantly trying to make me look like an AMD fanboi, which has largely stuck because he's said it so much. I favor neither side (funny, I know, right?). I had to point that out time and time again, and he wouldn't even listen; he just kept implying that because I had a positive opinion on something, I therefore solely support that side. It's infuriating as well as demeaning, because he doesn't actually listen to what you say, it only seems that way. He simply takes whatever you say, misconstrues it, mixes it with half-truths, adds in a hefty dose of hyperbole, then adds in some sort of logical argument like 'the sky is blue' in order to tie it all together, topping it off with a nice insult. That's why I've called him out on strawmen, because that's essentially what I just named off. Constantly having people do that to you can make you go apeshit crazy.

            I'm not saying Chuck's solely responsible for what happened to abw (as we all are in control of our actions), but when you have someone constantly trying to provoke you, I can't say he doesn't share a sizable amount of the blame. abw has been around TR for a while. It hasn't been till recently that he got really 'riled up'. Perhaps there is a reason for that...

            Thinking back on this, we haven't really had a problem with anyone like this since SSK's heyday. Then he disappeared. If I didn't know better, I'd almost say Chuck and SSK were one and the same.

          • Chrispy_
          • 7 years ago

          "I believe there is great value in a good flamefest. It may not look pretty, but you can actually learn a lot when two well-informed sides argue."

          QFT

          • Nec_V20
          • 6 years ago

          Troll is a word which has been so overused that it has become almost meaningless.

          Since “The September That Never Ended” the art of Trolling and/or Flaming has declined, as more and more of those who think that spell-checking simply means moving their lips when they read have imposed themselves.

          In many cases one should apply Hanlon’s Razor to the misguided wights:

          “Never attribute to malice that which is adequately explained by stupidity”.

      • sschaem
      • 7 years ago

      Probably 13th? “stealth viral marketing activities”

      Who might he be working for? His marketing skills are really not that great… so AMD? 🙂

    • halbhh2
    • 7 years ago

    Test the cheetah on the plain, and the leopard in the jungle. We want to know how these higher-end on-chip graphics perform for a 4K video, or even better, two HD videos simultaneously. And with its competition, which would not be the older A10 but the newer A10. And for me, that lower-power 6700 is the most interesting.

    • maxxcool
    • 7 years ago

    FWIW, Thank you TR for the trustworthy review as always.

    • halbhh2
    • 7 years ago

    If the chip isn’t available for us to install…….

      • halbhh2
      • 7 years ago

      My local Microcenter has the A10 6800K.

      Of course a bump of 300MHz won’t be night and day, but….. it is available to install.

        • chuckula
        • 7 years ago

        I’m glad that halbhh2 was able to answer halbhh2’s question about Richland availability.

      • Damage
      • 7 years ago

      halbhh2 has been banned for corporate shilling. I’m getting sick of this stuff. Expect more bans if others continue in the same vein.

        • chuckula
        • 7 years ago

        I RENOUNCE INTEL! (please don’t ban me!)

          • Airmantharp
          • 7 years ago

          But we can keep our trolls, right? They’re so entertaining! (insert awkward auxy emoticon)

            • chuckula
            • 7 years ago

            I would never give up my trolls! They are collectors items and will pay for my retirement one day!

    • Bensam123
    • 7 years ago

    Anyone else see the A10-6600K on Newegg? Will that make its way onto TR too, or the 6800K?

      • chuckula
      • 7 years ago

      I have no idea what you are talking about: http://www.newegg.com/Product/Product.aspx?Item=N82E16819113331&Tpk=6800K&IsVirtualParent=1

      P.S. --> Good news, Bensam. The 6800K is going for a higher price. Going back to that chart you had the other day, if you either cut the performance of the Trinity or increase its price, then the slope of your line gets steeper and AMD's clear benefits over Intel get even bigger! Hopefully AMD has cut the performance with the 6800K, which means that an imaginary $300 AMD processor will be at least 5x faster than the 3960X and have a GPU that's faster than the Titan!

      • halbhh2
      • 7 years ago

      The chip I’ll actually use, to build media PCs for customers, is most likely the A10 6700. It’s about the best total performance for the money for media use, and it’s able to run a 4K screen later, etc. This isn’t a home media center, I mean, but one for an auditorium, where for instance it may be running 2 HD screens at higher than 1080p eventually (have to test that!).

        • halbhh2
        • 7 years ago

        So far, sites appear to lack the imagination to realize the best use of these higher-end on-chip graphics: media for professional, auditorium, or presentation use, where more than one HD display can run simultaneously.

        Why even bench a desktop A10 for gaming? What kind of thinking is that?

        Are you worried someone is thinking of using it for a gaming computer, and that you might save $20 in overall cost vs. adding in a low-end video card to get the same low-end result? But that *isn’t* the natural use for this animal. Some reviews treat the A10 like putting a leopard out in the open plain to hunt gazelles. That’s not it. Leopards have a natural habitat, where they are more suited. Test the leopard in the jungle.

    • kvndoom
    • 7 years ago

    The point of severely diminishing returns that we all saw coming 5 or so years ago seems to be here.

      • Airmantharp
      • 7 years ago

      Core 2 Quads fit into thin & light laptops five years ago?

      I think you’re missing the point if you’re only looking at raw performance :).

    • tipoo
    • 7 years ago

    Regarding

    "I like how the Haswell chip with the graphics cache nearly beats out the six-core, quad-channel, thousand-dollar Core i7-3960X in the CFD test."

    on page 14: isn't that last test single-threaded? Heck, how does the 3960X do so well if it can't use its core advantage?

    • tipoo
    • 7 years ago

    Jeepers, Intel’s efforts already produce more favorable frame latencies than AMD chips, integrated or discrete? That’s kind of nuts if you think about historical Intel video drivers. What about image quality, though? The HD 4000 was still pretty far behind, IIRC.

    Then again, Intel had no reason to invest a lot in their video drivers before now, as their old alleged “graphics accelerators” were just the leftover die space from the chipset given away for free. There was no reason to put a great deal of support behind them until recently, especially with Apple becoming so influential and thankfully asking for better integrated graphics performance.

    I find myself wondering about that eDRAM. Whether it’s that or Intel’s EU architecture, this chip is actually surprisingly good at compute, despite Intel not advertising that much. I wonder if they could slap more EUs together, maybe more eDRAM, and make a discrete compute board out of it. I know they have the Phi, but this one could just use standard OpenCL. If the architecture scales up well, hell, Intel could even have a competitive discrete GPU, if they’re already just under 640 levels.

    I also wonder, if 128MB of fairly cheap eDRAM can be used to gain this level of performance, why aren’t all GPUs at this level or under using it rather than expensive, power-consuming GDDR? I’ve heard that only some foundries produce eDRAM; is this true? Intel and IBM being two, of course.

    My last matter of speculation is Apple, since they were reportedly the ones pushing for the eDRAM so much. There are two ways that will go: they’ll either strip out the 650M in the 15″ MBP/rMBP and keep the Iris Pro exclusively in there to shave some weight and leave the 13″ with a dual-core/HD 4600 part, or, best-case scenario, they’ll go up on both, giving the 13″ a quad core and Iris Pro. This would mean higher wattage, but their new Retina cooling system seems like it could deal with it. I hope the latter is the case; a 13″ with a quad core and Iris Pro would be very appealing.

    • UberGerbil
    • 7 years ago

    Nice review. Appreciate all the work that went into it.

    Couple of questions:
    – On the platform block diagram, there’s still a line between the CPU and PCH labelled “Intel FDI” — but if the display circuitry has all been pulled into the CPU, what purpose does that link serve? Is this an error in the slide, or is there some other reason for that still to be there?
    – With the *Bridge chips, there was some criticism of the IGP being unable to nail the exact (23.976) “24p” framerate; I didn’t follow closely (I’m not OCD enough to worry about dropping one frame out of several thousand) and I think Ivy (or perhaps its drivers) improved things a bit. Has Intel mentioned this at all or has it come up in testing? (Or, perhaps, did this all get solved a while back when I wasn’t paying attention?)

    The apparent lack of FMA support in Intel’s own OpenCL ICD is both surprising and intriguing. Given the new hardware features to play with (not just the instructions but also the eDRAM LLC) and what is reportedly a “from scratch” rewrite of the drivers, I guess we should expect some immaturity — and potentially some big gains from driver updates down the road. You may have to consider an update in a few months when Intel’s software team has had more time to refine their work.

      • accord1999
      • 7 years ago

      The 24p support has been improved to one dropped frame every 6 hours:

      [url<]http://www.anandtech.com/show/7007/intels-haswell-an-htpc-perspective/4[/url<]

      • chuckula
      • 7 years ago

      Props to Accord1999 for the Anandtech link about the 24 FPS bug.

      As for the lack of FMA support in OpenCL… that is quite interesting, since Intel’s OpenCL has traditionally encompassed both CPU and GPU compute (with more recent development focused heavily on finally getting some performance out of the GPU).

      I’m a little surprised that the GPU isn’t capable of FMA, because that is a very common operation in the linear algebra used in graphics. Maybe the GPUs aren’t fully IEEE 754 compliant? Also, as you suggest, CPU-based FMA could come later with more software updates, since the CPUs do support FMA with IEEE 754 compliance.

        • UberGerbil
        • 7 years ago

        The shaders in the IGP almost certainly do FMA (though they may or may not be IEEE 754 compliant); that’s not what we’re talking about here. It’s the new FMA x86 instructions that appear not to be exploited in the current Intel OpenCL ICD. If they were, there would be a significant difference between the scores for the i7-3770K and the 4770K. (The OpenCL driver must be using the CPU, because if it used the IGP there would be a difference due to the differing number of shaders in the two designs.)
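
        For the curious, here is a minimal C sketch of the distinction (my own illustration using GCC's AVX/FMA3 intrinsics, not anything from the article or the driver): the fused form retires a*b+c as one instruction with a single rounding step, which is what the ICD apparently isn't emitting yet.

        #include <immintrin.h>
        #include <stdio.h>

        int main(void)
        {
            __m256 a = _mm256_set1_ps(1.5f);
            __m256 b = _mm256_set1_ps(2.0f);
            __m256 c = _mm256_set1_ps(0.5f);

            /* Pre-Haswell AVX path: separate multiply and add,
               two instructions and two rounding steps. */
            __m256 mul_add = _mm256_add_ps(_mm256_mul_ps(a, b), c);

            /* Haswell FMA3 path: one fused instruction, one rounding step. */
            __m256 fused = _mm256_fmadd_ps(a, b, c);

            float r1[8], r2[8];
            _mm256_storeu_ps(r1, mul_add);
            _mm256_storeu_ps(r2, fused);
            printf("mul+add: %f, fma: %f\n", r1[0], r2[0]);
            return 0;
        }

        Built with something like gcc -O2 -mavx -mfma, the fused path is where Haswell's doubled peak FLOPS comes from, so its absence from the generated kernels would show up clearly in compute scores.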

          • chuckula
          • 7 years ago

          In that case I’d just be on the lookout for a software update to support FMA better in OpenCL. The hardware is there, but as we’ve seen in the past, it takes a while for software to support the hardware.

    • moose17145
    • 7 years ago

    Think I will just hold onto my i7-920 system for a while longer yet. It gets the job done well enough for what I use it for.

    If anyone is curious, its current specs are:
    – i7-920 (still running stock)
    – 24GB DDR3 1600 MHz (actually running at 1600 MHz thanks to tweaking multipliers on the mobo)
    – GeForce GTX 285 (I would like to upgrade this, but I don’t really play enough games anymore to justify it)
    – X-Fi XtremeMusic sound card (it still works great, so why get rid of it?)
    – A bunch of various-sized HDDs in JBOD

    I feel like if I upgraded the boot drive to an SSD and updated my video card, my computer would probably be 90% of a Haswell system for gaming. Otherwise, for the most part I have been using my system for teaching myself about virtualization (hence the 24GB of memory). And for that it seems to be holding up just fine. So as much as I WANT to do a completely new system build, I just can’t justify it based on what I saw.

    That being said… I have NEVER had a computer hold out for as long and as well as this one has. I built it in December of 2008, and yet I still don’t really feel like I need to do another build. I know these new CPUs have better virtualization tools built into them, but I’m not really sure how big of a difference they would make. Maybe if I were running my machine in a corporate environment or using it to make money… but for right now all I am really using it for is as a learning tool. Perhaps some more research into virtualization benchmarks is in order…

    • vvas
    • 7 years ago

    Great review as usual, thanks. Now is really the time for me to upgrade my first-generation Phenom. 🙂 The legacy CPU graphs are particularly illuminating, thanks for that. Of course there are still a lot of question marks here, but I understand the inherent limitations of embargo-time reviews. For follow-up articles though, it would be great to see the following:
    1. Benchmarks of the i5. As the article itself points out, it's often the sweet spot in terms of performance-per-dollar, especially for single-threaded performance. If it follows the trend, the 4670 looks *very* enticing.
    2. If you get your hands on a retail version, a quick article on the acoustics of the stock cooler would be great.
    3. Legacy GPU graphs! Can Haswell's GT2 defeat older-generation midrange graphics cards? I wish I could find out how it compares with the GeForce GT 240 that I have now…

    And of course I'll be anxiously waiting for (Geoff's, I assume) coverage of Haswell-ready motherboards. This is going to be an interesting month for reviews for sure. 🙂

    • brucek2
    • 7 years ago

    Anyone upgraded from an i7-920 yet who can share their experiences? Or do any sharp readers want to estimate the gain in doing so? I’m a casual overclocker; I took my 920 to 3.5 GHz in its first five minutes with no voltage changes and have left it there since.

    4.5 years seems like a long time to keep a system but I’m wondering how much performance gain I’ll really see by upgrading now. Sure I’ll probably get a higher overclock, but I’ll also lose my tri-channel memory controllers.

      • moose17145
      • 7 years ago

      I understand EXACTLY how you feel. I also am running an i7-920 system, and the thing is still holding out strong in terms of being “top of the line” in my mind. Throw in an SSD and a new video card and I feel like I would have at least 90% of a new Haswell system for gaming.

      That being said… should you upgrade? That depends on what you do with your computer. Basically we just need more information. As was shown, if you have a Sandy Bridge system and primarily use your computer for playing games, there really isn’t much point in upgrading. And I don’t think Nehalem was really that far behind Sandy Bridge for gaming, IIRC (especially if you have it overclocked). But if you are using your computer for work-related purposes, and the stuff you are running will make use of some of the new instruction sets that have been introduced since Nehalem’s release, then it might very well be worth upgrading.

      • Krogoth
      • 7 years ago

      Unless you push a Haswell chip hard, you will see little improvement on the performance front.

      The most noticeable difference is that Haswell consumes a fraction of the power, which means you don’t have to throw in loud fans and huge heatsinks to keep the chip cool. You wouldn’t miss the triple-channel DDR3, since it only benefits a handful of applications and multi-socket platforms. You will lose some PCIe 2.0 lanes (30-46 lanes) in exchange for PCIe 3.0 lanes (IIRC, 16-24 lanes). This may matter if you want to drive multiple video cards and other high-end PCIe devices.

    • Airmantharp
    • 7 years ago

    So, Haswell shows us two things.

    1. If you have Sandy Bridge (like flip-mode mentions below) you’re good, no need to upgrade.
    2. You want a thin laptop or Ultrabook with the GT3e part in it. At least, I do.

      • NeelyCam
      • 7 years ago

      "2. You want a thin laptop or Ultrabook with the GT3e part in it. At least, I do."

      I do as well, but as far as I remember from the rumor mill, there won't be GT3e Ultrabook parts.. May have to settle for regular GT3

        • UberGerbil
        • 7 years ago

        But we may see the GT3e shoehorned into what would otherwise be an Ultrabook form factor; you may not even give up that much in battery life if you don’t use it too intensely. I don’t know what Intel requires for OEMs to get the “Ultrabook” naming and marketing dollars, but I wonder if just using the “wrong” processor like that would be enough to forfeit them.

          • NeelyCam
          • 7 years ago

          This I would be happy with. I’m OK if it’s a bit thicker, too, as long as it’s 11.6″ (13.3″ is unnecessarily large for my needs… I prefer better portability)

          • Airmantharp
          • 7 years ago

          Really, I just want something thin and light. I’d prefer a 17″ 1080p IPS screen as well, which might knock it out of ‘Ultrabook’ branding territory. An optical drive bay and support for an mSATA drive would be nice.

            • UberGerbil
            • 7 years ago

            I don’t think anything with a 17″ screen is going to be light, nor anything with an optical drive bay thin (by modern standards of thin).

            • Airmantharp
            • 7 years ago

            Sure, but it can be thin(ner) and light(er), right? Like a MacBook Pro compared to a Clevo?

        • FuturePastNow
        • 7 years ago

        I’d settle for something in the 13″-14″ range that isn’t spectacularly thin or light. A more traditional laptop size, you know, just not massive. That size would suit my eyes and hands better, anyway, and I’m not such a girly man that I can’t carry four pounds.

    • kamikaziechameleon
    • 7 years ago

    I’m curious as to when we will get a solid mobile review on this processor.

      • brute
      • 7 years ago

      if u have a cell phone, i padlet or laptop, the review is mobile

      • abw
      • 7 years ago

      There is one at Anand, well, mobile inside your basement…

      With a car 12V 50Ah lead/acid accumulator you’ll get about the 8 hours of promised usage, this at full duty…

    • brute
    • 7 years ago

    weak

    “Comparitive” and leave out ARM Vortex V10 in my phone. how i know whether this is upgrade?

      • chuckula
      • 7 years ago

      I give you a tip of the hat fellow Troll!

    • Chrispy_
    • 7 years ago

    1. Legacy Comparisons: Thank you.
    2. GT3e is both impressive and annoyingly unavailable to HTPC builders; damn Intel's BGA-only strategy on this one!
    3. How long until Intel offers a discrete graphics card to compete with AMD and Nvidia on the desktop?

      • chuckula
      • 7 years ago

      "How long until Intel offers a discrete graphics card to compete with AMD and Nvidia on the desktop?"

      There's been talk of that on and off for years, and it almost kinda coulda come true with Larrabee.. but we saw how that turned out*. I'm pretty sure the answer is going to be: never. Intel cares about the IGP in mobile parts and is happy enough to try to compete with AMD's mobile IGP offerings.

      The weird twist is that AMD's and Nvidia's own powerful GPUs are actually a selling point for Intel's higher-end chips: if you want the ultimate gaming experience, you pick an AMD or Nvidia GPU and Intel gives you the PCIe 3 and CPU oomph to back the GPU for the best balanced game performance. It's much easier for Intel to have AMD and Nvidia develop the high-end GPUs than to try to do it directly.

      * Failure in the GPU world but, as we'll be seeing in a few days, a massive success in the world of supercomputing with the Xeon Phi.

        • Scrotos
        • 7 years ago

        http://en.wikipedia.org/wiki/Intel740

          • Chrispy_
          • 7 years ago

          Heh, I hadn’t forgotten the i740, but I definitely said ‘discrete *graphics card*’.

          I think the i740 qualifies alongside the S3 Virge as a “steaming pile of turd”. The resemblance to any actual graphics card is merely coincidental 😉

          Edit: real men (or at least real 17-year-old wannabe men) were rocking V2’s or Banshees back in those days!

          • chuckula
          • 7 years ago

          They thought that they had destroyed all the records!

        • tipoo
        • 7 years ago

        Intel’s strategy certainly seems to be eating as much of the BOM on mobile parts as they can, including the GPU and now the voltage regulator, with more and more moving to the CPU die. I don’t know what it would cost to develop a scaled-up discrete version of this architecture, but if it scales well it would certainly seem competitive. That would allow them to eat the BOM on systems with discrete graphics as well.

      • smilingcrow
      • 7 years ago

      Surely there’s no reason why Asus etc can’t release ITX boards with BGA chips onboard?

        • Airmantharp
        • 7 years ago

        It’s really hard NOT to expect this to happen, especially since there appears to be only one GT3e SKU, and it looks quite fully featured.

        I’m just annoyed that there isn’t a GT3e ‘k’ part.

          • smilingcrow
          • 7 years ago

          There are three R range SKUs in total but the two i5 chips don’t appear to have been formally announced yet; they should appear in Debrett’s fairly soon.

          • Chrispy_
          • 7 years ago

          I’m expecting a Zotac Mini GT3e in the next couple of months, for sure.

        • UberGerbil
        • 7 years ago

        Indeed. In fact, for HTPC use I think I’d rather buy a motherboard with CPU and HSF already attached anyway. And I’m not going to miss the K versions for that purpose, certainly (though I can see why folks would be interested in such a beast otherwise).

    • Star Brood
    • 7 years ago

    Great article!

    I would like to see how StarCraft 2 performs on the 4950HQ – given that it’s more CPU-limited in most cases, having a passable GPU now might finally make it playable on ultimate settings without a dGPU.

    • DPete27
    • 7 years ago

    Page 1, end of Paragraph 4:
    "Some of those measures are aimed at raising instruction throughput in existing code, while others with require the use of new instructions."

    Should be: "....while others will require the use of new instructions."

    • CaptTomato
    • 7 years ago

    No competition creates a Haswell!!!

      • BIF
      • 7 years ago

      ^

      Hmmmm, he may be onto something here.

      • Bensam123
      • 7 years ago

      About it… This is why it’s really bad if AMD goes out of business.

        • chuckula
        • 7 years ago

        Which is why Intel has been extremely careful not to put AMD out of business in the last 8 years or so, to the point of intentionally throwing AMD bones to keep them around.

        • NeelyCam
        • 7 years ago

        Let’s have the government bail out AMD.

          • chuckula
          • 7 years ago

          AMD is a three-letter acronym, and the feds *love* those. Right after all that taxpayer money ends up in Rory's off-shore account, the IRS can schedule a completely random and totally non-political audit of everyone who has bought an Intel product in the last 15 years!

        • CaptTomato
        • 7 years ago

        They won’t go out of business, they’ll just adapt to changing consumer needs.
        IMO, if AMD went down, Intel would go with them, as both the desktop and laptop markets would crash.

      • maxxcool
      • 7 years ago

      THIS *IS* the result of competition! From ARM 🙂

        • chuckula
        • 7 years ago

        Truer words have never been spoken. The only part of the entire Haswell design that is meant to compete with AMD is the GPU, and even that is really tuned for decent performance in very low power envelopes rather than absolute overall performance.

        • UberGerbil
        • 7 years ago

        Exactly. This is the result of *intense* competition, but not in the segment -- or in the technical features -- we're used to seeing for traditional x86. Intel is going where the growth and excitement is; unfortunately for the regular denizens of TR, that's no longer where they live.

          • maxxcool
          • 7 years ago

          I can agree that we’re not the target audience, but it still has the chops to get it done for now.

        • flip-mode
        • 7 years ago

        Yeah, you nailed it there. Intel sees all the excitement surrounding ARM’s products, and what do we get: an extremely intense focus on power consumption.

          • maxxcool
          • 7 years ago

          Which I welcome. My sincere hope is that AMD can catch up, which they *can* do with some intense work. The result I *hope* for is that Intel will get squeezed on both sides of power/performance, and what we get as consumers is a better set of cores, maximized for both power and performance, in future architectures from both vendors. 🙂

        • CaptTomato
        • 7 years ago

        I’m a desktop gamer who craves moar power; all the rest of the gadgets generally bore me regardless of how popular and useful they might actually be.

    • cygnus1
    • 7 years ago

    "The number of ports in the reservation station has risen from five to seven"

    Should be from six to eight; numbering starts at zero.

      • Damage
      • 7 years ago

      Doh! Fixed. Thanks.

        • cygnus1
        • 7 years ago

        No sir, thank you for the awesome work you guys do

    • slaimus
    • 7 years ago

    Thanks for the old CPU comparisons. The Q9400 looks surprisingly competitive against modern competition. It backs up what I have seen on my slightly overclocked Q8300, which is that they are still fast enough for everything except video encoding.

    "The display block has been removed, too, since that's now integrated into the CPU."

    It looks like the display block should still be there. The CPU still has the FDI going to the southbridge, where I assume there is a RAMDAC for the VGA output.

    "The FX processor's absolute performance is lower and its peak power draw is substantially higher."

    That Asus Crosshair V Formula motherboard is probably eating a big chunk of that power. With a more mainstream motherboard it probably would not look as bad.

      • ZGradt
      • 7 years ago

      Last year I upgraded from a Q9450 to an i5 3470. I didn’t really feel a great need to, but it is a heckuva lot faster and more responsive. Especially for video encoding, as you mentioned.

      Then I found a new A10 5700 for a price I couldn’t pass up, and tossed it in a spare case. Being so much newer, I was a little surprised that it encodes about as fast as my retired Q9450! It feels like the same computer. I was hoping it would be at least 20% faster or something. I mean, just for having DDR3 instead of DDR2, and 3.4+ GHz compared to 2.66 GHz. The caching is supposed to be a lot better too, since the Quads are basically two dual cores glued together.

      Then again, I paid less for the entire A10 system combined than I paid for the Q9450 CPU…

      It still seems odd to me that they’re releasing new processors that are competitive with processors from 4 years ago. 10 years ago, you could only say that about laptop chips.

    • DPete27
    • 7 years ago

    Only 5 CPUs to compare in the discrete GPU gaming results? Will we see another CPU gaming roundup in the coming months, or are we stuck looking at the FX-8350 review for the foreseeable future?

      • Damage
      • 7 years ago

      This is the start of a new generation of tests. You’ve seen what we’ve done with the last ones. I literally have CPUs queued up two or three deep for three different socket types, sitting here on the bench.

        • flip-mode
        • 7 years ago

        Man, your “Reservation Station” only has 3 ports?

          • Damage
          • 7 years ago

          Four, actually, and we sometimes co-issue to a fifth unit.

            • flip-mode
            • 7 years ago

            A fifth unit! How Tegra-esque.

            • derFunkenstein
            • 7 years ago

            It’s the companion station and it runs at low speeds. That must be where the Pentium 4 was seated.

        • DPete27
        • 7 years ago

        I figured that gaming benchmarks with all those legacy CPUs would be coming since they were included in this article for computational benchmarks…but assumptions are just assumptions. Can’t wait. Very nice job on this article though.

    • derFunkenstein
    • 7 years ago

    While it’s possible I just missed where you wrote it, I didn’t see any mention – I assume the 4950HQ system is using a different PSU and that’s why its idle power is so much higher than the 4770K? Otherwise that’s a huge bust.

      • Damage
      • 7 years ago

      Yeah, dude, that thing has a Sparkle.

        • derFunkenstein
        • 7 years ago

        Wow, you’re lucky it didn’t turn into a Spark.

    • ronch
    • 7 years ago

    Increasing the number of function units from 12 to 17 (or is it 18?) and widening the processor from 6 to 8 ports suggests Intel did a lot of work on the schedulers, and they did. How hard can it be for AMD to likewise redesign Steamroller’s schedulers so they can also widen their architecture?

    Many folks argue that the Bulldozer design is narrow, but I don’t think it’s as narrow as it seems. I’d assume the basic Bulldozer architecture has 4 ports, 2 for ALU and 2 for AGU, at least for integer execution. The FPU is decoupled so I’m not sure if you can do a direct apples-to-apples comparison and say that the FPU has 4 ports, resulting in 12 ports spread across two cores. Unfortunately AMD hasn’t released more information regarding this. Tight-lipped AMD. Ever so secretive. And good thing too: at least Intel won’t know the secrets of how to sabotage their architecture. /sarcasm from an AMD fan

    Seems to me AMD needs to juggle the ports around and do some more performance profiling to determine the best mix of function units the same way Intel juggled function units around going from Nehalem to Sandy to increase efficiency. AMD also needs to increase the capacities of its schedulers, TLBs, execution windows, etc.

      • maxxcool
      • 7 years ago

      Here is the rub. The way Steamroller/Piledriver is written, it *can* be more efficient and performant than Intel in certain situations. *IF* there is ever an operating system that treats the side-cores correctly, you’d see a nice bump from Steamroller/Piledriver cores.

      But for now MS is not interested in that, as consumer devices will be Bobcat/Kabini/Intel Celeron/Pentium-class CPUs. Only enthusiasts use FX-class CPUs, so the margins just don’t exist to warrant a rewrite of a huge portion of the HAL software stack.

      What I am surprised by is how, some 3 years later, we don’t have a custom Linux kernel by some AMD fan to showcase the power it does possess.

        • ronch
        • 6 years ago

        The OS can only do so much. The patch MS released some time ago addressed how to assign threads to each of the FX’s cores. But if a particular piece of software can’t take advantage of a certain number of cores, the OS can’t do much about making that app use more cores. On the other hand, compilers can perhaps do more than the OS by seeing where an application can make use of more cores and parallelizing the code accordingly. Either an app can use a certain number of cores or it can’t, and that’s where compilers can help.

    • xeridea
    • 7 years ago

    Why was there no section about temperatures?

      • ronch
      • 7 years ago

      I don’t recall TR ever doing them, at least not in recent years.

      • maxxcool
      • 7 years ago

      Hmmm, this would be nice to know… I bet we will see it in the OC article; this article did not contain any overclocking attempts.

        • derFunkenstein
        • 7 years ago

        They said they had Z87 board coverage coming. I assume that’s where we’ll see the overclocking, and as you said, hopefully some temps.

      • Damage
      • 7 years ago

      Patience, young padawan: [url<]https://techreport.com/review/24889/haswell-overclocked-the-core-i7-4770k-at-4-7ghz[/url<]

    • chuckula
    • 7 years ago

    This thread is for legitimate gripes about Haswell instead of the manufactured ones from people who decided it was a disaster in 2009 when they first heard the codename:

    1. BCLK overclocking should be made available on the non-K parts. This is my personal #1 disappointment by a wide margin. If the desktop is “dying” you should be throwing out sweeteners to encourage people to buy into the desktop.

    2. The early reports are that overclocking is OK but not amazing.. about on par with Ivy Bridge. If that is due to the use of TIM instead of soldering the IHS, it was a mistake on Intel’s part. The K-series parts are halo parts, and Intel should be putting its best foot forward to get enthusiasts excited.. those enthusiasts often make purchasing decisions for other Intel products down the road.

    3. I’d like TSX and the virtualized I/O on my K-series part, although these features are not insanely useful in day to day use on a regular desktop.
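
    For anyone wondering what TSX actually buys you, here is a minimal sketch of its RTM interface (my own example using the _xbegin/_xend intrinsics from immintrin.h; the shared counter is an assumed use case, not anything from the article):

    #include <immintrin.h>

    static long counter;

    void add_transactional(long n)
    {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            /* Runs transactionally; a conflicting access from
               another core aborts and rolls back the whole block. */
            counter += n;
            _xend();
        } else {
            /* Abort path: fall back to a conventional atomic op
               (a real program would retry or take a lock). */
            __sync_fetch_and_add(&counter, n);
        }
    }

    Build with gcc -mrtm. The appeal for desktop software is cheap lock elision, which is exactly why it stings that the K-series parts don't get it.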

      • Mr. Eco
      • 7 years ago

      Virtualization and TSX features are very important, as is AES. It is a very disgusting decision not to include them on the i3, thus screwing budget buyers.

        • chuckula
        • 7 years ago

        Uh?? How many AMD platforms at any price give you TSX exactly? Seriously, if I hand Rory a billion dollars, will I walk out of AMD’s HQ with a chip that has TSX support*? Including prototype Steamroller chips? It’s fascinating how the value of a feature skyrockets to amazing heights if and only if it isn’t present in every single Intel CPU from high-end servers down to smartphone Atoms. As soon as the feature becomes ubiquitous, it turns into a useless marketing gimmick instead of being somehow “vital”.

        And the “lack of virtualization” on an i3 is really a misnomer; I have been running VMs in different capacities since the days of the Pentium III and Athlon XP. My current Core 2 Duo doesn’t have VT-d and, trust me, I can run VMs on it. The i3s do have VT-x support for acceleration of VMs. The i3s do lack some of the higher-end VT-d virtualization for I/O devices, but why is that needed to run a VM on your desktop exactly?

        As for AES acceleration, once again, would it be nice? Sure! Have I been running a machine for 5 years doing a crapton of encrypted tunneling without those AES instructions? Sure! Methinks that the Eco doth protest just a bit too much…

        * The answer is yes because they’ll give me an Intel chip.

          • Deanjo
          • 7 years ago

          VT-d and AES would be really nice in the i3, but they would cut into Intel’s Xeon market. I could see using them both for small servers where big iron isn’t needed but virtualization and encrypted volumes are preferable. The only reason I still have AMDs in some of my systems is their superior virtualization capabilities.

          • ZGradt
          • 7 years ago

          I actually decided on an A8 Trinity chip for my all-in-one encrypted fileserver running VMware ESXi with a pass-through RAID card. It was the cheapest, lowest-power processor I could find that supported VT-d (AMD-Vi = AMD’s version?) and AES-NI. Had to buy a high-priced motherboard with IOMMU support though…

          I would’ve gotten an i3, but they cut out too many features.

    • DancinJack
    • 7 years ago

    A 4770K and Z87-M Pro (POINTING AT YOU ASUS) are in my future.

    Great article.

    • ClickClick5
    • 7 years ago

    Well, for gaming… not anything worth tooting a horn about. But as for real-world applications (i.e., video encoding), that’s a nice boost over the 2600K.

    Not bad. Not bad.

    • puppetworx
    • 7 years ago

    The i5-4670K might be the chip for me. Not a tremendous leap over the i5-3570K or FX-8350 but it would get me into the latest platform without much of a price disparity. I’m in no rush just now though.

    • anotherengineer
    • 7 years ago

    I am more amazed by the FX-8350 every time I see a review like this.

    In most things it’s average, in a few things it’s below average, and in some things it can actually best the 4770K Haswell, showing that the chip does have the potential for much more performance at an often-on-sale price of 180 bucks!!!

    If AMD can tune this chip to improve the IPC, get it to run consistently across all benchmarks, and get the power consumption down with tweaks and 28nm, this will become an excellent CPU.

      • chuckula
      • 7 years ago

      I recall that PC Perspective had a quote about how disappointed they were with Haswell's power consumption: when they overclocked the 4770K to 4.5GHz, it was *only* using 3 watts less power than a stock FX-8350.

      As I've said many times in more detailed posts, if you actually look at the paper specs for the FX-8350, it should wipe the floor with any and every Intel processor below the 3930K, and even with the 3930K it should easily trade blows. The transistor budgets, die sizes, clockspeeds, cache sizes, etc. etc. etc. are all *grossly* slanted in AMD's favor, but outside of a few synthetic benchmarks, where is the FX-8350 really winning again?

        • anotherengineer
        • 7 years ago

        To answer your question

        “outside of a few synthetic benchmarks, where is the FX-8350 really winning again”

        -Price

        Neely’s trolling >> your trolling

        😉

          • chuckula
          • 7 years ago

          1. That wasn’t a troll; my trolls are insanely over the top and easily verifiable as factually inaccurate, which is why they are indistinguishable from the serious posts from real fanboys 🙂

          2. I did the math: in the system I just built, which will likely last for 5 years just like the Core 2 Duo system it is replacing, going with the FX-8350 would have saved me a grand total of.. $210 in system costs.. and that’s assuming I was saving about $50 on the motherboard, which is not even necessarily true, since the most expensive 990FX mobos (read: ones I would have any interest in) really aren’t cheaper than the Z87 motherboards. Even Asus, not known for being cheap, has full-ATX Z87 motherboards for only $130. If you go for the same knockoff brands that are popular with the AMD crowd, there are first-day-on-sale LGA 1150 motherboards going for $65… I’m tired of hearing about how much of a ripoff Intel motherboards are.

          But $210 sounds like a lot, right? Not really. I built a relatively high-end system at about $2500 where the CPU just isn’t a massive cost factor. I would have saved less than 10% of my total system cost under even these ideal circumstances. In exchange, I’m getting a chip with much better performance and a vastly more modern platform with much better I/O than anything AMD offers or is projected to offer anytime soon.

            • anotherengineer
            • 7 years ago

            I agree.

            i7-4770K = $350
            FX-8350 = $180

            Only $170 difference, not much in a $2500 build.

            The only thing I’m saying is the 8350 has the potential to be much more if they can iron out the bottlenecks.

            • Action_Parsnip
            • 7 years ago

            I really think it does have more hidden deep within it, amongst insufficient queue and buffer sizes, insufficient register counts, caches that are too slow (L2 & L3) or too small (L1D), load-store and floating-point units that are a bit of a kludgey mess, and a decode capacity shortfall. These are all things that Steamroller is apparently due to address.

            I'd say Bulldozer was a generally horrible implementation with loads of bits poorly done; then Piledriver offered a tangible increase in IPC without any extra cache space or execution resources, just by going over different logic blocks a second time. Who knows what the changes laid out for the forthcoming Steamroller will do? Surely that screams something that was half-assed on its first outing with more left to come. Maybe a lot more, but we have no way of knowing until it arrives.

            • Mr. Eco
            • 7 years ago

            Good for you; you can afford to build such a system. My latest PC was 700 euros incl. VAT, and it is not bad by any means: i5-3550, HD 7850 2GB, 16GB RAM, ASRock H77 Mini-ITX board.
            In such a common system, a $210 difference is a lot.

      • Action_Parsnip
      • 7 years ago

      It is all over the place. Some games even love it (see next sentence), some abhor it (Shogun: Total War). Something interesting: three games, Far Cry 3, Crysis 3, and BF3 (multiplayer), really like the 8350. All very new and shiny engines, so that begs a couple of questions:

      1.) Does Steamroller map well to game engines that spawn lots of weak threads?
      2.) Will Steamroller map well to coming console ports?
      3.) Is 8 the new magic number? ….. I mean 8 threads (logical or physical) making a tangible difference to games looking ahead to the near future? Will just 4 threads (logical or physical) hold you back much?

      I’m going to be in a position to afford a new system soon, solely for gaming and web browsing. The options are a 3770K and a Z77 on the cheap now that it’s EOL (Haswell….. just not worth it. Sorry) or an 8350 and an AM3+ board. One’s going to be pricey and good at everything; one is much cheaper and will be all right for most things but could be exceptional for games going forward.

      Then again, the elephant in the room is that AM3+ is likely a dead socket and the chipset is ancient and way behind anything Z68 or Z77, let alone Z87. Choices, choices.

        • anotherengineer
        • 7 years ago

        I don’t think anyone will know until the 3rd gen ‘steamroller’ chip is released and tested.

        If your system will be “solely for gaming and web browsing” the TR forums are a good place to start.

        • Anonymous Coward
        • 7 years ago

        Well, AMD can be trusted to deliver a weak version 1.0 and then get their act together after a few revisions. However, I would be really surprised if the weak console CPUs actually shifted the relative performance between Haswell and Steamroller in AMD’s favor. What could be the cause of such a shift?

          • Action_Parsnip
          • 7 years ago

          Well, it's uncertain what sort of shift, if any, could be expected. There was a developer interviewed on Eurogamer, and he said basically that he couldn't see ports of console games having any bias toward AMD hardware except for maybe how they interact with the SIMD units, as these can have foibles specific to a particular vendor. So code for Jaguar SIMDs *may* have a preference for Bulldozer/Piledriver/Steamroller SIMDs when it emerges from porting. Either way, are >4 threads the (near) future for gaming?

      • PixelArmy
      • 7 years ago

      I'm guessing you'd make up that initial price difference in energy costs over the *lifetime* of the system (4+ years?) by going with the 4770K (though I'd say the i7 system would last longer). And you'd have a more pleasant experience over that time.

      Going by TR's charts (pg 7), a rough estimate: delta 37W @ idle = 0.037 kW * 24 * 365 = 324.12 kWh * $0.11 (I think this is the national avg) ≈ $35. That's $35 a year if you idle 24/7, so $140 for 4 years. If you actually do stuff, the difference gets dramatically larger...

      Play with the numbers (less uptime, more load time, etc.), but I'd say amortized, the cost is a push.

      Edit: Correction pointed out by sschaem: 0.023 kW * 24 * 365 = 201.48 kWh * $0.11 = $22
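
      The same back-of-the-envelope math as a tiny C program, if anyone wants to play with the inputs (the wattage delta, electricity rate, and uptime are the assumptions from above, not measured figures):

      #include <stdio.h>

      int main(void)
      {
          const double delta_kw = 0.023;        /* idle power gap in kW (per sschaem's correction) */
          const double rate     = 0.11;         /* $/kWh, rough US average */
          const double hours    = 24.0 * 365.0; /* always-on, worst case */
          const double years    = 4.0;

          double kwh_per_year  = delta_kw * hours;    /* ~201.5 kWh */
          double cost_per_year = kwh_per_year * rate; /* ~$22 */
          printf("%.1f kWh/yr = $%.2f/yr, $%.2f over %.0f years\n",
                 kwh_per_year, cost_per_year, cost_per_year * years, years);
          return 0;
      }

      Drop the uptime to a few hours a day, as anotherengineer does below, and the gap shrinks accordingly.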

        • anotherengineer
        • 7 years ago

        Possibly; it would be on an individual basis, though.

        Your estimate assumes 24-hour uptime. However, if it is a home PC and you work, etc., you could instead assume that the PC runs, say, 6 hrs Mon-Fri and 16 hrs Sat & Sun.

        That would reduce the on-time to 3224 hrs/yr,

        so 0.037 kW * 3224 hrs/yr * $0.11/kWh = $13.12/yr additional, or about $52.49 over 4 years.

        Then one realizes how much leaving things on actually costs (especially phantom power).

        Just for reference, I used a watt meter on the power bars around the house: the TV center in the living room draws 37W with everything off, the TV center in the bedroom 32W with everything off, and the PC station 35W with everything off.
        So even with everything off, DVD players, clocks, TVs, modems, routers, etc. are still consuming 104W.

        And that does run 24/7; this is where a timer to shut off those power bars from 11pm to 7am can pay for itself in a few months.

        Sorry for the side-track power rant; electricity up here after taxes and fees works out to 17.5 cents/kWh.

        • sschaem
        • 7 years ago

        The chart I see shows 20 watts.

        You are comparing the 4770K with no GPU card to an FX-8350 with a discrete GPU.

        Also, most systems spend their time in sleep mode, not idle.

        The reality is that for an active system you might save ~$10 a year.
        Only people running 24/7 servers will see a net saving.

          • PixelArmy
          • 7 years ago

          I’d say my work computer spends most of the day idle. Note to work overlords watching me: I’m joking. 🙂

          Anyways, you’re right, 23W; edited above. Though one would be inclined to argue that the IGP would then be added savings. What do we value Radeon 7660-class performance at? $60?

          Sure, we can live on a price/performance curve, but it’s not unreasonable to say $x is an appropriate amount for the premium “experience”.

          My point is, $40 of power here, maybe a $60 IGP or a $50 game bundle there (which you might not care about), platform longevity, etc.; it’s closer than the initial $ alone suggests. Like I said, when CPU shopping I tend to think in a more amortized way (even if I just know ballpark figures).

      • puppetworx
      • 7 years ago

      The 4770K is ~75% more expensive yet only ~25% faster than the 8350 (overall), while the 3570K and 4670K run ~18% and ~28% more with similar overall performance to the 8350.

      The 8350 is definitely sitting pretty right now.

        • Klimax
        • 7 years ago

        Pretty badly, actually, because as soon as you encounter something lightly threaded, it falls behind.

        Also, not every First World country has cheap electricity…

      • smilingcrow
      • 7 years ago

      “If AMD can tune this chip to improve the IPC, get it to run consistently across all benchmarks, and get the power consumption down with tweaks and 28nm, this will become an excellent CPU.”

      And after they’ve sorted all that, they can move on to sorting out the problems in the Middle East.

      • esterhasz
      • 7 years ago

      You know, the Phenom II X6 also still looks pretty good considering its age. Most everything I do is multithreaded to the hilt, and lots of cheap cores is a valid approach to computing, and surprisingly robust over time.

      • Bensam123
      • 7 years ago

      My thoughts exactly. It’s not about being a speed demon or having 300 hours of battery life. It’s the fact that, for a desktop part, it has very respectable $/performance.

      I’ve used both an 8350 and a 3570K, and I’ve talked about my experiences, which have been very positive in a heavily multithreaded scenario (streaming and playing games) compared to the 3570K. But even aside from that it’s a solid performer, with the exception of some very single-threaded games losing a bit of FPS off the top.

      I really don’t think it’s ever been given enough credit for what it is. For laptops or mobile devices, definitely go with Intel, but on a PC where you’re on a budget or are more of a power user and rely on multithreading to help get things done, it really shines.

      I honestly can’t wait for Steamroller, since Intel kinda went ‘meh’ with the desktop crowd this time around and AMD said they weren’t going the same direction.

    • Zizy
    • 7 years ago

    The star of the show is the 4950HQ. Good graphics for integrated, and an interesting cache for some other uses. I’m waiting for torture tests on the chip – how does it perform after hours of gaming? I currently don’t see any drawbacks except the price. TDP is quite high as well, but given the GPU score it can be forgiven.

    4770K = yawn + WTF. The 2500K and 2600K were truly impressive chips; all the successors have been just meh – no real progress. At that pace, AMD will catch them eventually.

      • chuckula
      • 7 years ago

      [quote<] With that pace, AMD will catch them eventually.[/quote<] AMD is still trying to catch itself… have you ever seen benchmarks of a 6-core Phenom II overclocked to 3.8 or 4 GHz compared to Piledriver?

    • yogibbear
    • 7 years ago

    Time to retire my q9450!

      • chuckula
      • 7 years ago

      my E8400 will live on in some capacity, but you’re right: Time for a new Timmy!

    • chuckula
    • 7 years ago

    [quote<]If you want a single set of numbers to summarize AMD's struggles of late, look no further than the chart above. Even though the FX-8350 also supports FMA, it requires more than twice the energy to complete the same task as the 4770K. The FX processor's absolute performance is lower and its peak power draw is substantially higher.[/quote<] But remember kids, the FX-8350 is a great chip and Haswell is a complete disappointment across the board. I'm abw, and I approve this post.

      • NeelyCam
      • 7 years ago

      I’m not sure I’ve seen even a single pro-AMD post from abw; half of his posts are about bashing Intel, and the other half bash various TR comment posters, but I don’t recall AMD even being mentioned…

        • chuckula
        • 7 years ago

        It is a little strange… he has accused me of leading some sort of anti-Kabini reign of terror since one time I mentioned that Intel makes parts for laptops, but outside of that he is strangely silent about actually having any affection for AMD.

        Perhaps AMDzone has devolved so far that they aren’t really even what you could call “fanboys” anymore. Instead they just spew hatred about anything that isn’t AMD while having no actual love for AMD. Maybe we should call them hateboys instead.

      • flip-mode
      • 7 years ago

      Not your best work.

        • chuckula
        • 7 years ago

        It gets harder every day!

        Now I have to start coming up with material for how unimpressive Broadworse is. I’ll keep at it for y’all!

          • UberGerbil
          • 7 years ago

          You should just stop. Seriously.

            • abw
            • 7 years ago

            He can’t; he doesn’t exist by himself…

        • abw
        • 7 years ago

        The word “best” is an incongruity when ranking his posts…
        “Less worse” and “more worse”, although grammatically incorrect,
        are more adequate…

      • mcnabney
      • 7 years ago

      ABW wins one ‘Internet’ for getting inside Chuckula’s head.

        • chuckula
        • 7 years ago

        I’m the one who found out he’s French, and about 75% of his posts are personal attacks on me, whereas I can keep it amusing…

          • abw
          • 7 years ago

          Wrongness and pretentiousness… You are making a fool
          of yourself relentlessly…

            • chuckula
            • 7 years ago

            So if my first comment there is wrong, does that mean that you don’t think Haswell is a disappointment? Or does it mean that you don’t think the FX-8350 is great? I’m really confused here… I thought you liked AMD? Or do you just spew incoherent insults at people to make up for your personal inadequacies?

            • abw
            • 7 years ago

            I already told you, you don’t have the competence to discuss technical issues,
            as demonstrated by your magisterial lecture about thermal resistance – the first time
            I’d heard someone talk of matching the Rth of two contacting surfaces
            to provide better thermal conduction. A mockery of scientific discourse;
            all you are seeking is to boost an inflated ego fed by rare cluelessness
            and a frenzy of strawman multiplication.

    • Prestige Worldwide
    • 7 years ago

    Error on page 1: Sandy Bridge-E i7-39xx listed as 8-core/16-thread with 20MB; it should be 6-core/12-thread with 15MB of cache.

    Other than that, nice review, although I’ll echo the sentiment that it’s pointless to bench games at low resolutions, because nobody buying a top-of-the-line i7 is going to be gaming at 720p or lower. Real-world performance at 1080p is a metric that actually matters to potential buyers.

      • Damage
      • 7 years ago

      Not an error, just gimped product offerings vs the native chip. 🙂

    • trackerben
    • 7 years ago

    A good review pointing to exciting news about real advances to be found in upcoming mobile Haswell. Good thing I resisted getting a notebook this soon.

    • chuckula
    • 7 years ago

    Excellent review as always guys!

    The Qt compile benchmarks are plenty of incentive for me to upgrade… from my E8400 🙂 I’m not shocked that a bunch of GPU-limited games don’t show huge performance improvements on Haswell, but at least your frame-latency-based tests showed some improvements here & there for Haswell. One thing in Haswell’s favor: there’s plenty of bandwidth and compute power in the CPU to drive very high-end GPUs in the coming years and keep up with more multithreaded games.

    OK, for everyone calling Haswell a disaster: go ahead and look at the last graph on the price/performance page. Do you notice that the overall performance is… oh… about 10% higher for the 4770K over the 3770K? Isn’t that just about what Intel promised?

    Now, can you fault Intel for only promising a 10% gain? Sure. Can you fault them for failing to deliver on the gain that they promised? Only if you are an AMD fanboy who truly believes that the FX-8350 is “smoother” than the 4770K because “you can feel it.”

    • Action_Parsnip
    • 7 years ago

    Also, guys… could you have squeezed some Jaguar numbers into the legacy comparisons bit? It would have been interesting to see them versus the P4 EE 🙂

    • esterhasz
    • 7 years ago

    I really don’t agree with the general tenor on Haswell here.

    As far as I understand it, Intel set out to 1) bring its performance core line down to lower power envelopes and 2) finally get serious about integrated graphics. At least for now, it looks like it has succeeded splendidly on both counts. As somebody who does 95% of his work on a laptop, I find this quite spectacular. Quad-core + Iris Pro in a 47W power envelope is really impressive – even if the price is really steep.

    OK, desktop performance is stagnant. OK, more would have been nice. But mobile is where the market is currently going and I for one am happy that Intel is taking the fight to ARM territory.

      • Bensam123
      • 7 years ago

      Not sure about that… I know a lot of people who use desktops… myself included… corporations included. Corporations generally don’t need better integrated graphics, and anyone who wants better graphics will get something dedicated or go with an Nvidia or AMD solution (like in laptops).

      This is great for battery life, I suppose, but at this point they should consider spinning the product lineup off into mobile and desktop variants. They seem so eager to segment things in the first place; they should consider actually committing to those segments.

        • esterhasz
        • 7 years ago

        No, of course, the desktop still has some life in it. But for every desktop sold, two laptops go over the counter, probably with a higher ASP, too. And Intel’s got that segment squarely in its pocket. In the sub-15W market, however, there’s pressure, so it’s not surprising that Intel is focusing – for this round at least – on those parts of the PC market that are still going relatively strong (laptops, especially lighter ones) and the low-power market. And for those two segments, Haswell has done a lot of things right.

        A third core architecture besides Atom and Core would be much too expensive, I think, but in a sense there already is a “desktop” variant: socket 2011. If they’d treat that with a little more enthusiasm and bring prices down some more, most power users (pro or gamer) could migrate there and bathe in DIMM slots, L3 caches bigger than the first HDD I bought, and 150W TDPs… I’ve got one of those in my office; the irony is that it doesn’t even have a screen – I just tunnel there from my laptop.

          • UberGerbil
          • 7 years ago

          Yes, when I saw the CFD results for the eDRAM-equipped model I started to wonder if the reason it has “excess” LLC is precisely because that model is going to see another life in a non-BGA socketed form… but in the socket of whatever replaces 2011 (and also possibly as a Xeon). Especially if the OEMs tend to forgo that model in non-ultrabook ~gaming laptops (using 3rd party graphics instead) so that Intel is left looking for another use for it.

          The trouble (for enthusiasts) is, as you say, that form factor tends to be expensive (both naturally as more of a niche product and because Intel can charge more for “workstation” hardware and all that goes with it, including enthusiast passion). And I agree Intel doesn’t seem to be all that interested in it, since in the past they introduced that line first to soak up the enthusiast/professional dollars before releasing the more mainstream chips (and now have been leaving it to molder for a while). But they have to do something about the aging 2011 line, and if they continue that segment at all this would seem to be the way to go (though maybe not until the Broadwell timeframe).

      • Ryhadar
      • 7 years ago

      I agree. I’m not sure what people were expecting but Haswell has been mega-hyped for quite some time now. The performance improvements are good.

      The thing that is disheartening to me is the continued needless segmentation. No GT3e on a desktop in socket form? Still no VT-d on K series parts? No base clock overclocking on non K series parts? Laaaaaaaaaaaaaaaaaaame.

      • nanoflower
      • 7 years ago

      What would you expect on a site where the majority of users are concerned with desktop performance when Intel comes out with a chip that is focused on tablet/laptop performance? Of course people are going to be somewhat disappointed. That doesn’t mean Haswell isn’t a great achievement for Intel. It’s just not the sort of product that will excite most people reading the Tech Report.

      The same thing would happen if Intel had some sort of manufacturing breakthrough and could sell their Atom chips for $0.50 a piece. They might take over the embedded market from ARM but is that likely to excite the users of this site? Only if that breakthrough can be applied to the desktop/server processors.

        • esterhasz
        • 7 years ago

        You’re right, much in life is about managing expectations – whether it’s about processors or tech forums. ;-)

        But I also think TR has been branching out in terms of content, and the forum discussions often enough go beyond the “my interests = the world” type of debates that are so common in comment sections. I’m much less involved here than others, but the reason I do come back and suffer through the sometimes endless X vs. Y bickering is the moments when people who understand a lot about technology are able to tease out what makes a product interesting, why somebody would bother to design and build it in the first place, and how that fits into larger trajectories.

        There are people here who’d have a lot of smart things to say if a $0.50 Atom were real, even if they’re not excited. And I’d love to read that.

          • UberGerbil
          • 7 years ago

          I so profoundly agree about the endless X vs. Y debates (and especially the idiotic trolling), and it’s one of the reasons I’m less active on the forums than I used to be.

          [quote<]the "my interests = the world" type of debates[/quote<]

          I call this “ELM disease” (Everybody’s Like Me).

      • vvas
      • 7 years ago

      An additional point to make here is that the industry has long since hit a wall in terms of single-threaded performance, and Intel’s Core architecture, being one of the most mature ones around, is strongly affected by this. In other words, Intel’s mainstream architecture is already past the point where spectacular increases in performance are possible (Sandy Bridge was the last hurrah, I guess, and not even that), so all they can do at each shrink is make incremental additions that typically yield single-digit-percent improvements.

      So what’s Intel to do with all that extra die space that they get at each shrink? Obviously they could keep adding cores, but 4 cores are really enough these days for non-server workloads, especially when you add HT to the mix. So instead they beef up the GPU (which had been sorely lacking), and improve power efficiency, which allows them to get the Core processors into more form factors. Seems like a good strategy really.

      What I’m trying to get at is: even if Intel were concentrating on improving desktop performance specifically and didn’t care about mobile (madness though it might be), the performance improvements would still be incremental and unspectacular. Unless they added more cores, of course, which would do wonders for massively multithreaded benchmarks but would have little benefit for most desktop workloads.

    • Action_Parsnip
    • 7 years ago

    So this review is the best one I’ve seen yet by quite, quite some distance.

    I think it’s worth saying that the Iris GPU thingy isn’t faster than the Trinity GPU going by the review data: comparing the dedicated-GPU and iGPU results strongly suggests the Piledriver cores are almost woefully inadequate. Roll on, Steamroller…

    Again TOP review.

      • derFunkenstein
      • 7 years ago

      While that may or may not be true, the result is the same – the A10 is still too slow.

        • Action_Parsnip
        • 7 years ago

        Oh undoubtedly.

          • derFunkenstein
          • 7 years ago

          OK, I’ve changed my mind on this. The only way Richland’s GPU would be held back by the CPU is if performance were the same with the discrete GPU. It’s not, so the CPU is being held back by the integrated GPU, or potentially a memory bottleneck. I mean, Piledriver is anemic, relatively speaking, but it’s not entirely to blame here.

      • Andrew Lauritzen
      • 7 years ago

      Such a distinction is not actually possible to make on a single-chip solution. It’s impossible to control for the “driver” component that makes up a good chunk of the CPU workload of modern games. For example, if I design a GPU that requires a heavier driver component to feed or optimize for it, does that make my GPU worse or better than one that requires much less hand-holding?

      The answer is… it’s not clear; both are different design points. You can shift that work onto the GPU hardware by making your GPU more performance-robust (and thus require less complex optimization on the CPU), or you can use software to feed a “leaner” GPU design, but then you tend to spend more time on the CPU assembling work so as to avoid the performance cliffs.

      In a dGPU situation, where your GPU and driver code are coupled, you can factor out CPU differences by running with the same CPU. In an integrated CPU/GPU, this isn’t possible. Thus, even if you narrow down the bottleneck to the CPU, it’s hard to conclude whether it’s due to a weak CPU architecture or due to the GPU requiring a heavier driver. It’s all one system now, and the software is a component of it, so the end result is really what matters, since you can’t mix and match. This is even more true now that everything shares an increasingly limited power budget.

      That said, I would still be interested to see some rough numbers on GPU busy-ness between the iGPUs when running these benchmarks. The results wouldn’t really be particularly actionable, but I’d be curious to see if your basic assertion that Trinity’s GPU is sitting idle some of the time due to weak CPU/heavy driver is true. If that’s true generally, that suggests AMD should aim for a different balance in future APUs, no?

        • Action_Parsnip
        • 7 years ago

        ….. Not really sure how efficiently the drivers run, and how much CPU time they take when a game is running, really amounts to all *that* much.

        These devices are complicated enough to say it’s not that simple to state: x is holding back y, although comparing the iGPU and dedicated GPU numbers does unfurl a big red flag and wave it around: The Piledriver cores are sub-par.

    • Dysthymia
    • 7 years ago

    What exactly is the overclocking capability of the non-K 4770 (if any)? How much benefit can TSX provide in legacy single-threaded application performance?

      • chuckula
      • 7 years ago

      [quote<]How much benefit can TSX provide in legacy single-threaded application performance?[/quote<]

      Zero. That’s not the point of TSX. It’s expressly designed for highly concurrent code, to reduce overhead from lock contention.

      Frankly, in an 8-12 core server chip it could make a nice boost, but with only a quad-core desktop the number of in-flight threads isn’t going to be huge to begin with. I could see it giving a 5-10% performance boost in just the right type of code.
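
      For the curious, here’s a minimal sketch of the lock-elision pattern TSX enables, using the RTM intrinsics from immintrin.h (GCC, compile with -mrtm; the spinlock and counter are made-up stand-ins, not anyone’s shipping code). Threads attempt the critical section as a hardware transaction first and only fall back to a real lock when the transaction aborts, so an uncontended lock costs almost nothing:

      ```c
      #include <immintrin.h>

      /* Made-up stand-ins: a spinlock and a counter shared between threads. */
      static volatile int fallback_lock = 0;
      static long counter = 0;

      static void lock_fallback(void)   { while (__sync_lock_test_and_set(&fallback_lock, 1)) ; }
      static void unlock_fallback(void) { __sync_lock_release(&fallback_lock); }

      /* Increment the shared counter, eliding the lock when RTM succeeds. */
      void increment(void)
      {
          unsigned status = _xbegin();
          if (status == _XBEGIN_STARTED) {
              if (fallback_lock)      /* someone holds the real lock: abort */
                  _xabort(0xff);
              counter++;              /* runs transactionally; no lock taken */
              _xend();
          } else {
              lock_fallback();        /* transaction aborted: take the real lock */
              counter++;
              unlock_fallback();
          }
      }
      ```

      Reading fallback_lock inside the transaction puts it in the transaction’s read set, so any thread that actually takes the lock aborts the in-flight transactions – that’s what keeps the two paths coherent, and it’s why the win only shows up when many threads contend.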

    • Wildchild
    • 7 years ago

    HASBEEN

      • chuckula
      • 7 years ago

      I’m suing for copyright infringement unless you pay the royalty!

        • derFunkenstein
        • 7 years ago

        You’re not royalty! You are merely Count Chuckula. Though I guess we could upgrade you to Queen.

          • abw
          • 7 years ago

          Attention whore queen..??..

    • Krogoth
    • 7 years ago

    Good… Good…

    Let the disappointment flow through you!

      • chuckula
      • 7 years ago

      Oh I’m WAY ahead of you there Krogoth:
      Here’s my Broadwell Review!

      [quote<]2014: Meh, I'm unimpressed. I'm not going to replace my OC'd 4770K just to get a 5-10% IPC boost. Who cares about power savings anyway? Too bad that Steamroller still can't beat Haswell on the desktop, Intel is such a disappointment![/quote<]

      • abw
      • 7 years ago

      Beware the Intel gullibles’ fan club resentment…

      • ronch
      • 7 years ago

      No I won’t. Unlike you, I’m quite impressed. 🙂

      • maxxcool
      • 7 years ago

      Now now, we all knew this was Intel’s Piledriver-style perf increase…

    • flip-mode
    • 7 years ago

    That was a wonderfully written article, much better than anything else I’ve read. Thank you.

    All of the power-saving mojo is pretty amazing. I think it is good to have even on desktops.

    The performance improvements are not exciting, but they are incremental and incremental will add up over time.

    If I had bought a 2500K back when it launched, I’d be feeling like I made one of the best CPU purchases ever in regard to how long it will stay relevant. I guess a bunch of people may be disappointed that Haswell doesn’t offer much in the way of compelling reasons to upgrade. For folks like me who are still running even older processors, Haswell looks like as good an option as anything else, and when the platform advancements are factored in, it easily looks like the best choice.

    If I have any request for TechReport, I hope Scott and co. intend to look at overclocking soon.

    Thanks, Techreport.

      • Action_Parsnip
      • 7 years ago

      ^^ This can’t be stated enough. Anandtech’s review was… oddly short and naff. This review is the virtual BIBLE of Haswell reviews.

        • Krogoth
        • 7 years ago

        I would argue that goes as far back as the i7-920 and i7-930.

        Outside of power consumption, you aren’t getting much with newer chips.

          • Airmantharp
          • 7 years ago

          Only if you leave them at stock settings :).

          At 4.6GHz+, Sandy Bridge is still as fast as anything in most common productivity and gaming workloads.

          • flip-mode
          • 7 years ago

          Opinions vary, of course, especially when you ask those who cannot be impressed. I could claim the same of the X4 955, since it’s still doing all I need it to. But I think it’s a pretty commonly held view that the i5-2500K was an exceptionally good value. Sure, the i7-920 is still relatively relevant, but my point was that the i5-2500K was pretty extraordinary. Mix in its price, its modest power consumption, its excellent stock performance, its exceptional overclockability, and the fact that two generations later there isn’t a clearly compelling upgrade… again, if you’re one of those who bought the i5-2500K back when it launched, this Haswell launch probably makes your CPU seem all the sweeter.

        • flip-mode
        • 7 years ago

        Yeah, Anandtech’s review fell noticeably short of what is typical for the site. Noticeably.

          • Farting Bob
          • 7 years ago

          It felt like they had only just got the CPU to test and had to rush to get the review out. Nowhere near the detail you would expect from Anand on a brand new CPU architecture.

        • Deanjo
        • 7 years ago

        Ya, but what I like about Anand’s reviews is that they also had an article on Haswell from an HTPC perspective. (Summed up: great on playback, finally fixes the frame-rate issue, but regresses in the quality of QuickSync encoding.)

          • Action_Parsnip
          • 7 years ago

          Yeah, there was that. They split their time between the CPU review and the Iris thingy review, so the Iris review is pretty thorough and the CPU review had all the text and not much results content. See, they also got a 4K video review up too, something else they split their time with. They had lots of bases covered, but the centerpiece review suffered for it. Tech Report’s knocked all the others into a cocked hat!

      • puppetworx
      • 7 years ago

      I’d be interested in seeing an overclocking follow-up article in a few months, the potential looks high but it might take a little time to figure out how to get the best out of these new chips.

      • dragosmp
      • 7 years ago

      +1 for the overclocking, particularly on BCLK OCing the i5s (and the i3s when they show up).

      Despite the clear performance benefit of Haswell over a Phenom II 980, I can’t convince myself to fork out $350 for a CPU that would probably be only slightly faster in what I care about most – games. A $180 i5 looks good if the 125/166 MHz BCLK actually works.
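
      (The arithmetic behind that hope, assuming Intel even allowed the straps on locked parts: core clock = BCLK × multiplier, so a chip at a hypothetical 34× ratio would go from

      $$100\ \mathrm{MHz} \times 34 = 3.4\ \mathrm{GHz} \quad\text{to}\quad 125\ \mathrm{MHz} \times 34 = 4.25\ \mathrm{GHz}$$

      on the 125 MHz strap – the 34× is an illustrative number, not a real SKU’s.)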

      And lastly, this is a great article. The benchmarks paint a reasonably complete picture, and there are the legacy comparisons many have asked about for a while.

      cheers

        • flip-mode
        • 7 years ago

        The article stated that BCLK overclocking is only permitted on the K series CPUs. Reading your post, is there some expectation that this is not true?

          • chuckula
          • 7 years ago

          I was confused about this at first, but unfortunately it is true & confirmed: you can’t do BCLK overclocking on non-K parts. Of all the manufactured “disappointment” from AMD fanboys who would call Haswell a failure if it were 10x faster and given away for free, this is actually a legitimate bone of contention I have with Intel.

          I understand that we are moving into a mobile world, but that should actually encourage Intel to *unlock* more stuff on the desktop parts, to encourage sales in this supposedly “dying” sector. Business users aren’t going to overclock their business desktop parts, but getting the enthusiast sector in your corner in a big way leads to other sales and more profit down the line… it’s a multiplier effect.

            • flip-mode
            • 7 years ago

            It’s annoying and can feel unjust, but people don’t /have/ to buy the product. Even with a locked BCLK, Intel’s chips may be the best product for the money. We need a competitive AMD.

        • Klimax
        • 7 years ago

        There are games (especially those with CPU-based physics, like Havok) where older CPUs can’t keep up.

        E.g.:
        X3 Reunion can get very demanding because of that when there’s a fight – no matter how good the GPU is
        Company of Heroes
        Hard Reset

        • nanoflower
        • 7 years ago

        Yep, looking at Microcenter’s web page, I can’t help wondering whether it’s better to get an i7-3770K or an i5-4670K, considering that you’ll pay only $20 more for the i7 when it’s bundled with a motherboard.

          • chuckula
          • 7 years ago

          New purchase instead of upgrade: no matter how “meh” you might think Haswell is, go with Haswell, since 1) despite the hate it is actually faster, and 2) Z87 is actually a pretty nice platform, with lots of USB 3 and lots of SATA-III support beyond the Z77.

          Upgrade from Ivy: the meh factor can still apply there. I’m not a very good Intel fanboy, since I’ve flat-out said that Sandy/Ivy owners shouldn’t upgrade outside of special circumstances.

      • flip-mode
      • 7 years ago

      A few additional comments:

      Intel’s segmentation practices do suck. It’s really annoying. Why cut features from the K-series chips when those are the enthusiast chips that should have all the features enabled? Mildly irritating.

      Pricing: I’m concerned with the pricing moves here. Not much can be done about it; AMD doesn’t offer a good enough alternative, which is the whole reason Intel can make these pricing adjustments. On the other hand, in an age where this CPU should easily last 4-5 years, another $20-ish isn’t really a big deal.

        • Krogoth
        • 7 years ago

        Intel wants to justify the existence of the Extreme Edition lineup, which has all of the bells and whistles, at a cost.

      • flip-mode
      • 7 years ago

      TechReport just nailed it: [url<]https://techreport.com/review/24889/haswell-overclocked-the-core-i7-4770k-at-4-7ghz[/url<]

      • brute
      • 7 years ago

      best review was the one where the chip was reviewing itself.

      scott wasson is my hero

    • hasseb64
    • 7 years ago

    Expected news = almost no benefits for desktop users.

    Good thing: I will keep my gaming rig with its 2500K Sandy Bridge, still one of the best chips Intel has produced.

    Bad thing: my new laptop may not materialize this year either; it seems that the power savings are not of the magnitude Intel has claimed, and the new fast GPUs will only be found in the top-end line.
    Idle and sleep may be improved, but how many new “web-surfing hours” can this new chipset add?

    • Pax-UX
    • 7 years ago

    Great review as always!

    Left more impressed by the Xbox One than this little guy 🙁

    • abw
    • 7 years ago

    This review is definitely the coup de grâce for the gullibles’ hopes…

    Sic transit gloria mundi…

      • alwayssts
      • 7 years ago

      Déjà entendu… but…

      The reasons I have to believe you aren’t too hard to sell…

      • chuckula
      • 7 years ago

      Yeah what a disaster! At this rate Steamroller will only lose by about 15% across the board to Haswell (while having an outdated platform and sucking down power like… well.. a Steamroller)!!

        • abw
        • 7 years ago

          Yet another testimony of cluelessness; SR will be FM2+…

          • chuckula
          • 7 years ago

          Oh good: Hey AMD fans! ABW says that they will force you to buy a new motherboard just to run Steamroller! Who do they think they are, Intel?

            • abw
            • 7 years ago

            Cluelessness deepens; SR will be FM2-compatible…

            It’s about impossible to make two such dumb claims
            in two successive posts. What is your secret?…

            • chuckula
            • 7 years ago

            Uh… so I don’t need to change motherboards to go from an FX-8350 on AM3+ to your magically backwards-compatible socket FM2?? This socket FM2 is magically amazing!

            (P.S. –> Why is there an FM2+ in the first place if everything is supposed to work on FM2 with no issues?)

            • abw
            • 7 years ago

            What you need is a bucket of anti-trollmania pills…

            • chuckula
            • 7 years ago

            So you’re saying that if you take pills, you can’t tell the difference between an AM3+ motherboard and an FM2 motherboard? That’s a good thing, because you obviously take lots of those pills.

            • abw
            • 7 years ago

            At this rate you’ll soon need a truckload…

            First, SR will be an APU for the APU-dedicated platform FMxx;
            FX is on the low-cost server platform that will be updated in 2014.

            • derFunkenstein
            • 7 years ago

            [quote<]This socket FM2 is magically [s<]amazing![/s<] delicious![/quote<] FTFY.

            • abw
            • 7 years ago

            FM = Fully Magic by AMD, Advanced Magic Devices…

            • heinsj24
            • 7 years ago

            Chuckula, siccing AMD fans on abw is a new low.

            Looking at my AMD roadmap tattoo, I believe I should, unfortunately, be good for one more release on AM3+.

            Yes, unfortunately… how long have these boards been out?

            • abw
            • 7 years ago

            He’s an inveterate digger; he can’t resist his nature, alas…

    • Sheogorath
    • 7 years ago

    $657 for an i7-4950HQ?

    Intel must be drinking skooma again…

      • abw
      • 7 years ago

      Waiting for their milk cows with glee….

      • drfish
      • 7 years ago

      5x as much $$$ to beat AMD’s IGP… 0_o

        • dragosmp
        • 7 years ago

        It’s probably priced just above the cost of a Haswell + an equivalent Nvidia dGPU + the engineering to design a slim, integrated, power-sipping notebook.

        • MFergus
        • 7 years ago

        You know they aren’t charging 5x more because it’s that much more expensive to make; they charge that much because they can. $650 is only for the super-high-end version, though; there’s a $450 version, and hopefully eventually even cheaper versions than that.

        • abw
        • 7 years ago

        The milk cowery thinks that price = performance…

    • ronch
    • 7 years ago

    If Intel really wants to improve gaming in a big way, I think they might as well quit beefing up their IGPs and instead make discrete video cards just like AMD and Nvidia do. Nobody’s gonna buy a Core i7 Sandy, Ivy, or Hasbeen and use the IGP for anything other than MS Office and IE, so even an HD 2000 will cut it if you don’t plan to do any serious gaming. On the other end of the spectrum, nobody would be crazy enough not to buy a proper video card for their expensive i7 gaming setup unless they’re strapped for cash.

    Hey, how about Intel just ditches the IGP and puts a couple of CPU cores in the void left by the IGP instead? And, uh, still charges the same price? 😀

      • Bensam123
      • 7 years ago

      What… the IGP is relegated to non-dedicated-video-card tasks, and the normal Intel graphics did that fine beforehand? Blasphemy. They have to drum up their epeen first. Or maybe they just want that super-duper niche AMD has with their Trinity systems in HTPC builds… XD

      • chuckula
      • 7 years ago

      I’d love to see a 6-core Haswell desktop chip [non-IGP] that still uses socket 1150… unfortunately I don’t think we’ll ever see that chip due to economics. Haswell is really aimed at mobile, and we are just getting the overclocked parts on the desktop.

      [Edit: Apparently a bunch of AMD fanboys would not want to see more cores on Intel chips… I can understand why.]

        • willmore
        • 7 years ago

        On my last desktop build, I seriously pondered an i7-3820 vs an i5-3570k. I eventually went with the latter because the MBs for the SB-E parts command a $100 or more premium over those for the normal desktop chips.

        So, I completely agree with you that I’d rather see no IGP (or an extremely minimal one) and more cores–on the desktop.

        Laptops are where Haswell should really make some noise.

    • brucethemoose
    • 7 years ago

    So… is this the end?

    Haswell is barely an improvement over Sandy Bridge, Broadwell will just be a die shrink, Skylake/Skymont will probably bring modest (CPU) improvements like Haswell, and then it’ll be near impossible to shrink silicon transistors. Meanwhile, AMD won’t catch up to Sandy until Steamroller/Excavator.

    Guess it’s time to pick up a 2500K and ride it out to the end of silicon and the desktop as we know it.

      • ronch
      • 7 years ago

      [quote<]is this the end?[/quote<] Yup. The world officially ended last Dec. 21, 2012. This is Reality 2.0 and here, we get increasingly smaller IPC and clock speed improvements, at least with Intel.

      • Bensam123
      • 7 years ago

      Nah, when people are so far ahead they start doing stupid things. If and when AMD catches up they’ll start aiming for performance again. Until then we’ll have to deal with them turning their desktop chips into mobile chips.

        • brucethemoose
        • 7 years ago

        With all the trouble TSMC/GloFo are having, I can’t see the architecture catching up until Excavator/2015. By then, Intel will probably have hit the 10nm node, and shrinking transistors will have become a real PITA, leaving little room for improvement.

        I’m talking lightly threaded performance, of course. For threaded performance, the moar-coars strategy works pretty well…

    • Prion
    • 7 years ago

    [quote<]The IGP is software-compatible with Ivy's[/quote<] [quote<]For one, Intel has completely rewritten the software driver stack for its IGP, and it claims the new driver has some of the lowest CPU overhead in the industry.[/quote<] Am I reading this right? Taking these two things together, should Ivy Bridge users with Intel HD 2500 and HD 4000 graphics expect to see a noticeable performance boost with new drivers released concurrently with Haswell? Will there be any follow-up testing on this?

    • cal_guy
    • 7 years ago

    It’s pretty clear that Intel is hitting a wall in improving CPU performance (and let’s not forget that the changes Intel has made – the expanded re-order buffer, the expanded scheduler, a dynamically partitioned micro-op decode queue in place of the statically partitioned one, two additional ports – are substantial). I think we all had a laugh at the 15% improvement claims for the successors of Bulldozer, but if AMD can continue gains like they had with Piledriver, they might start threatening Intel once again, and that’s a good thing for all of us.

      • ronch
      • 7 years ago

      Yep, Intel did hit a wall here. Sooner or later, AMD will hit the same wall, and everything will be in balance again.

        • flip-mode
        • 7 years ago

        I disagree that Intel hit a wall. Intel boosted performance by roughly 10% without adding any clock speed or any cores. Intel could have easily launched a 3.8 GHz or higher part and everyone’s jaws would be slack. Instead, we get 10% at the same clock speed. That’s not hitting a wall; that’s near to magical. Have you seen AMD’s CPU performance at the same clocks? The FX-8350 Vishera’s per-clock performance is still lower than the ancient Deneb’s. The only way AMD manages to look competitive is by boosting clock speed and adding cores. So I think Intel deserves credit for some impressive power cutting with simultaneous performance increases, all without modifying the core count or the clock speed.

          • ronch
          • 7 years ago

          Of course they deserve credit. And AMD deserves credit too because they’ve managed to keep themselves in the picture with a small fraction of Intel’s R&D budget.

      • Action_Parsnip
      • 7 years ago

      How? If anything the difference between this and Ivy Bridge is greater than the difference between Sandy Bridge and Ivy Bridge.

      • Bensam123
      • 7 years ago

      I don’t think they hit a wall at all; rather, the target of this specific generation was power saving and integrated graphics. Almost all their improvements are focused on power saving.

      • NeronetFi
      • 7 years ago

      IMO, I think Intel wanted to focus on increasing IGP performance. Now that they have done that, they might have found a sweet spot for both the IGP and the processor, so the next design might see a performance boost on both.

    • Arclight
    • 7 years ago

    16 pages?

    [url<]http://imgur.com/gallery/kp8te[/url<]

    Edit: Dang, those who wanted to upgrade their CPUs for gaming should be fairly disappointed.

    • fellix
    • 7 years ago

    Please, re-do the memory benchmarks with the new AIDA64 v3.0. The new version comes with a brand new multi-threaded memory subsystem test suite, much more suitable for Haswell and every other multi-core CPU.

    • Bensam123
    • 7 years ago

    Tock… tick… erm… tick?

    With all the hype surrounding this, I thought it would be super-duper, but it’s nothing close. All the changes and enhancements sound really great, but they didn’t deliver anything notable for the desktop, not even in power consumption. It is worth pointing out that in the majority of the benchmarks you have a $190 8350 facing off against $300 chips; the graph at the end is especially telling. If you drew a best-fit line from the A10 through the 8350 and extended it over to the Intel crowd, the Intel chips would need to be about 50% faster to deliver the same price/performance ratio.

    I’m sure these chips will be marvelous when put inside laptops and netbooks, but that has relatively little meaning for a desktop user. It looks like Haswell was focused almost completely on improvements for the mobile crowd, which isn’t necessarily a bad thing, but it conflicts with the goal of providing better performance to desktop users.

    Remember when there were two completely different lineups of processors for two completely different areas? We had desktop and mobile processors, both different and unique in their own ways. Then Intel decided to go with their mobile branch (rightly so) and it helped, but now we’re back to the point where they’re sacrificing the desktop experience for mobile users. Perhaps they should consider branching the two again, then cross-breed improvements between the two that don’t interfere with the primary goal of either branch. Maybe they don’t need to, because they think their processors are THAT amazing (they don’t even need to compete on the performance level). Think of the performance gains we would’ve had if Intel was focusing on performance increases rather than battery life.

    Definitely a bad call, as Scott pointed out, that Intel is using TSX for segmentation. That just means it won’t be used at all, like Creative with their X-RAM. It’s not worth developing for a tiny niche, and most people won’t go out of their way to buy TSX processors either.

    It is nice to see they finally increased their SATA and USB port allocation. However, eSATA is just around the corner, and from what I’ve read the USB 3 implementation for Haswell leaves much to be desired.

    Honestly, I think an 8350 is the way to go for desktop users. The price/performance ratio is great, and after plugging it into your wall outlet of magical energy, you don’t need to worry about task energy or anything like that. For a power user it will probably take quite a while before you even break even at 10¢/kWh (unless you’re running F@H all the time or something), and even longer for the average user. It’s sorta weird, too: considering all the hate BD received and PD inherited, it looks better every time it pops up on charts, and Intel hasn’t moved at all. Unless you want something like a hex-core, in which case it can’t compete with a $500 processor.

    Good article, though. I think the i5-3570K should’ve made it into the normal benchmarks, as it’s the most direct competitor to the 8350 and definitely one of (if not the) most popular IB chips for desktop users. It has the best price/performance ratio too (considering OCing on top of it).

    Also curious why FCAT didn’t make it in. Are we going to have a few follow-up articles on specific portions of performance, like streaming, FCAT, and overclocking? This sorta seems like it could be a multi-tiered article.

      • chuckula
      • 7 years ago

      tl;dr version: blah blah blah Intel sucks! blah blah blah

      1. How many of those benchmarks [b<][i<]that TR expressly states are generally very favorable to the FX-8350 due to their multithreaded nature[/i<][/b<] does the FX-8350 actually win again?

      2. Given the supposedly useless HD 4600 IGP, why is it that Trinity isn’t at least 3x-4x faster across the board? Especially given Trinity’s larger power envelope and intentionally weaker CPU cores…

      3. I like how you are dishonestly re-spinning the power consumption, since TR’s review showed that yes, despite what Bensam123 and ABW wanted, Haswell is actually more power efficient than Ivy Bridge even on the desktop, where most of the power-saving features are muted. If Haswell had consumed even one watt more power in any situation, I’m sure you would be trumpeting it as Intel’s greatest failure and have us all ignore anything and everything that AMD produces, since it’s “not fair” to compare those products.

      4. I’m a little disappointed that TSX isn’t in my 4770K, but there is a silver lining: you just started blathering about how TSX is now some great and amazing feature that Intel is somehow denying to the poor starving children in Africa because its overclocking chips don’t have that feature. I’m going to run out and buy an FX-8350 right now, since AMD would never EVER deny me great features like that! Oh, wait…

        • pedro
        • 7 years ago

        tl;dr version: blah blah blah blah blah blah

        • maxxcool
        • 7 years ago

        Haha, 2x the power consumption of Intel…

        [url<]https://techreport.com/r.x/core-i7-4770k/power-plot.png[/url<]

      • srg86
      • 7 years ago

      Meh, I’d take the 4770 over the FX-8350 any day. It has greater (or at very worst equal) performance, but uses much less power and generates less heat. I’m not into overclocking, so the 4770, which has all those knobs turned on, would do me fine. Plus, as a developer, I am interested in those new additions to the instruction set.

      Although I use a desktop and I do like the performance, I still appreciate a quieter, cooler-running system. I’d be prepared to pay the extra for what I see as a better-quality product. If the FX-8350 does the job for you, then fine, but IMHO everything about it shouts desperation to me.

      (Also, I don’t like the module approach in the FXs; it just leaves a bad taste.)

        • Bensam123
        • 7 years ago

        I suppose we all have our individual preferences, but this is largely something I never noticed. I used both the 3570K and the 8350… you don’t notice either one. The 8350 at idle uses just about as much power as Intel chips, and idle is where you spend most of your time. Browsing the net and watching video, you can’t tell that it has a 130W power envelope. The power issue is something that’s largely been blown out of proportion, IMO – for desktop users, that is.

        There is no denying that having a 12-hour battery life or whatever is nice, and I’d agree an 8350 has no place in a laptop, but in a desktop you won’t notice all the things you pointed out. Even your electricity bill will be none the wiser for like a year and a half, perhaps much longer than that if you’re just a casual user.

        Honestly, the modules in my experience have made for a smoother multitasking experience overall. My general Windows experience (with core parking off) has been snappier than with the 3570K @ 4.2GHz. It’s not the load speed of individual applications or anything like that; it’s being able to switch between windows and open new ones while doing other things on top of it. Perhaps not something everyone will notice, but it is something I noticed. When dipping into >50% core usage it’s also very humble about it and doesn’t start throwing in intermittent lag like I’ve had happen with HT.

        • vvas
        • 7 years ago

        I completely agree, with one extra observation: you don’t even have to pay extra, as the 4570/4670 should cost more or less the same as the 8350 and still offer better performance at improved power/heat (with the exception of massively multithreaded integer-heavy workloads, perhaps, but I wouldn’t hold my breath even for those).

          • srg86
          • 7 years ago

          Indeed. My main point was that people talk about the price difference between Intel and AMD. I used to use AMD religiously for about 10 years, but with the original Phenom debacle I decided to look at Intel as well.

          In the end, I found the extra value in the Intel products being that they just feel like better-quality products; it is pretty subjective, to be honest. AMD stuff always feels like it’s made down to a price (which it needs to be, to compete, for the most part). Also, the surrounding Intel platform, while not without its problems, always seems better quality, not just in the K6-2 days but even today.

          Another part of that is how Intel can get that level of performance out of a chip with a TDP of 84W where AMD needs 130W; the Intel technology is better.

          So I’m willing to pay the extra overall, as I feel there is real value there. Plus, the 4570/4670 give you about the same performance, though the i7s with HT do help beat the 8350 in very multithreaded tasks, which is the 8350’s stronger suit.

          BTW, for Sandy Bridge and Ivy Bridge users: unless you need AVX2, there is no point in upgrading. That I do agree with.

      • maxxcool
      • 7 years ago

      What hype?

        • abw
        • 7 years ago

        Browse the net, lazy you…

          • maxxcool
          • 7 years ago

          Until I see a TR link showing it, it is not true.

        • Bensam123
        • 7 years ago

        Sarcasm? >>;

          • abw
          • 7 years ago

            Ignorance, gullibility, pretentiousness: all that is needed for a milk cow…

            • maxxcool
            • 7 years ago

            Snakeoil you are!

      • maxxcool
      • 7 years ago

      Intel wins 54 of the 62 benchmarks and now has 2x the single-threaded performance of any AMD CPU.

      • maxxcool
      • 7 years ago

      And the iGPU now wins as well.

        • abw
        • 7 years ago

        Where it wins overwhelmingly is price, the best perf so far…

          • maxxcool
          • 7 years ago

          YOU’RE right, snakeoil! It is faster than AMD!

            • abw
            • 7 years ago

            Maxxxxxxxxxxxxxxxxxximum paranoia………….

            • maxxcool
            • 7 years ago

            HA! You really are snakeoil after all. You’re going to get banned again…

            • abw
            • 7 years ago

            I guess that the Intel mafia’s strategy is to spread lies to make
            the annoyance disappear…

            As dirty as Intel’s bribing strategy…

            • maxxcool
            • 7 years ago

            As opposed to your job, kettle?

      • maxxcool
      • 7 years ago

      Lastly: we all knew this was Intel’s Piledriver, a tiny bump.

        • Bensam123
        • 7 years ago

        I don’t think you read the PD article.

          • chuckula
          • 7 years ago

          Yeah, exactly: on a clock-for-clock basis Haswell is a bigger improvement than Piledriver! Piledriver also had the benefit of a 15% clock-speed increase, but even with Haswell’s supposedly broken overclocking, getting to 4.2 GHz isn’t that tough.

      • maxxcool
      • 7 years ago

      And yes, I will buy Intel every time for my workstations, servers, and power-user boxes. For my HTPC… well, I guess I will have to see a BGA review; it may just be time to kick AMD out of my HTPC as well.

        • ronch
        • 7 years ago

        Hey, how about consolidating your posts next time? Makes for easy reading, you know.

          • maxxcool
          • 7 years ago

          No 😉 I have my reasons 🙂

        • Bensam123
        • 7 years ago

        I did talk about $/performance in there, maybe you missed it?

      • maxxcool
      • 7 years ago

      Oh, and it is a mobile CPU… not a desktop part.

        • abw
        • 7 years ago

        A mobile CPU that runs at up to 83W, according to Anand…

      • Bensam123
      • 7 years ago

      So I made the Bensam123 version of a best-fit line for the end graph. It’s not exactly the same, since I didn’t use a nifty algorithm or compute averages, but you get the idea…

      [url<]http://img825.imageshack.us/img825/4125/performancebr.jpg[/url<]

        • maxxcool
        • 7 years ago

        LOL 🙂

        • chuckula
        • 7 years ago

        I have never been able to make a regression line in the perfect shape of “404 – Not Found” before… you truly are a statistician of some renown!

        • chuckula
        • 7 years ago

        Please tell me that your line is intended as a joke… seriously, I’ll give you props if that was a joke, but if you are serious, then I don’t even know where to begin….

        [Edit: For those of you who think that line is legitimate, take a look at the axes and learn something about units…]

        [Edit 2: Keep in mind that if you believe Bensam123’s graph is legitimate, [b<]then AMD would actually be better off if the A10-5800K literally had zero performance, since it would increase the slope of his line[/b<]… I’m really hoping that Bensam hasn’t gone over the rainbow and actually realizes that his graph is a joke… seriously.]

          • NeelyCam
          • 7 years ago

          Your turbo-boosted trolling of late must have somehow damaged your humor sensor… Of course it was a joke, and a good one too, IMO.

            • chuckula
            • 7 years ago

            With Bensam123 things can be very hard to determine since he typically has very little in the way of a sense of humor. Bear in mind that only a few days ago he went on a religious crusade claiming that Maxwell cannot possibly ship at any time before 2015 while simultaneously claiming that AMD’s 20nm GPU parts are right around the corner.

            • chuckula
            • 7 years ago

            Hey Neely.. go read Bensam’s post below… apparently he was deadly serious with that graph and literally can’t comprehend how AMD [b<]improving[/b<] the price/performance ratio of Trinity would make his results look [b<]worse[/b<] instead of better...

        • Andrew Lauritzen
        • 7 years ago

        Do you really think such an extrapolation where no product exists is meaningful? Why not take a couple of processors that are so old they’re free and connect their line right up to the vertical axis? Makes everything look like a terrible deal!

        Also, I don’t get this “I wanted even more single-threaded perf with Haswell” (even though you were told 5+ years ago that that line of perf increases was coming to an end, and it has still had a nice tail), “thus AMD”. As a gamer, the numbers I see in this article indicate that even the outdated 2600K is a significant step up from the 8350, and I’m willing to bet that extends to the 2500K as well, since games rarely make good use of even four threads. Yet I can get an i5-3570K for $30 more than the 8350, and I imagine I could go even lower and still have better performance in games.

        i.e. this isn’t some conspiracy here… AMD’s game performance (which largely comes down to single-threaded performance) is simply not very good, even when you consider price.

          • chuckula
          • 7 years ago

          I’m giving Bensam123 the benefit of the doubt that his graph was a joke… at least until he comes out and accuses me of being a “strawman” or something for actually trying to be nice to him for a change.

          • Bensam123
          • 7 years ago

          The performance points in the article include gaming benchmarks, not just multithreaded applications.

          A game itself doesn’t need to be multithreaded in order to take advantage of a processor with more cores. Everything in your system runs on your processor, not just the game in its own little environment, so a processor is always doing multithreaded work, especially with system tasks.

          I think it’s very meaningful: I was showing the price point at which Intel would have to offer the same price/performance to be in line with the AMD chips on the chart. It wasn’t intended to be absolute. You’re mistaking a correlation for an absolute relationship. Just because the line is there doesn’t mean that’s the origin of the line, the same as with normal best-fit lines.

          The 8350 costs about $1.12 per percentage point ($190/170%). Using that same ratio, the Intel chip at $342 would have to be at about 306% (Intel is actually at $1.59 per percentage point: $342/215%). That’s much higher than where I put the line on the chart, as the line was supposed to be a best fit rather than a direct ratio. It was off the cuff and in place of more detailed analysis, as I don’t really feel like spending the time to put the data into SPSS to spit out a line for an internet discussion. You’re welcome to do so, though.
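
          Spelled out, using the prices and overall-performance percentages quoted above:

          $$\frac{\$190}{170\ \text{points}} \approx \$1.12/\text{point}, \qquad \frac{\$342}{215\ \text{points}} \approx \$1.59/\text{point}, \qquad \frac{\$342}{\$1.12/\text{point}} \approx 306\ \text{points}$$

          i.e., at the 8350’s ratio, $342 would have to buy roughly 306 points of overall performance.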

            • chuckula
            • 7 years ago

            Dude… did that long-winded post just imply that your graph [b<]was serious??[/b<]

        • NeelyCam
        • 7 years ago

        +1 That was awesome!

      • Bensam123
      • 7 years ago

      And like magic, OCing results show up~

      TR, engage evolution #2!

        • maxxcool
        • 7 years ago

        Love TR’s timing and work, thanks guys!

      • The Jedi
      • 7 years ago

      [quote<]I think the i5-3570k should've made it into the normal benchmarks as it's the most direct competitor with the 8350[/quote<] Seconded. That is worth a follow-up.

        • Bensam123
        • 7 years ago

        Yeah… after writing that post, it’s become apparent that TR has switched to a tiered article release, where they publish an initial article and then do follow-up articles on interesting areas that can be explored more. I think it’s actually a pretty good strategy, as it increases viewership and gives them more subject matter to write on, so we should get some more tasty bits in the future.

    • trek205
    • 7 years ago

    PLEASE, when you test games for the CPU, run them on the highest settings and a lower resolution. Of course leave AA off, but many of the other settings in modern games impact the CPU too, so testing on high or just medium gives you no idea what to expect. It also gives the impression that one CPU may be just as sufficient as another when in fact it’s not. For instance, Crysis 3 is more CPU-intensive on very high than on the lower settings.

      • travbrad
      • 7 years ago

      I agree about using high settings, but not lower resolution. Testing at 720p or lower (as some sites did) turns it into a synthetic test, rather than a test that actually reveals anything meaningful. Virtually no one who is interested in high-end CPUs for gaming is going to run games at 720p.

        • spuppy
        • 7 years ago

          Speaking of 720p, I think using it for IGP testing makes sense, since it’s the natural next step down from 1080p when using these as an all-in-one TV gaming system (which I think would be the case for most people wanting to use an IGP for gaming).

          • travbrad
          • 7 years ago

          Yeah, I agree it makes sense for testing IGPs, because frankly those IGPs aren’t really capable of delivering playable framerates at 1080p in most new games.

        • trek205
        • 7 years ago

        This has been discussed a lot. You lower the resolution to 720p to make sure the GPU is not the limiting factor. At 720p, if one CPU is only able to manage 55 fps while another gets 75, that tells you which is better. Running at full res in a demanding game on a GPU that’s not top-of-the-line, which may result in 45 fps for both CPUs, tells you nothing at all. That way you know that if you turn down a graphical setting or two, or upgrade to a faster GPU, you can get better performance. Most people keep their CPUs for at least 2 or 3 years, so you want a CPU that can handle those future GPU upgrades.

          • travbrad
          • 7 years ago

          [quote]You lower the resolution to 720p to make sure the GPU is not the limiting factor[/quote] They might as well test at 640x480 or 800x600 by that logic. I still think it's a pointless test when those aren't the settings gamers actually use. There are plenty of synthetic or completely CPU-bound tests that will allow you to see the "raw" performance of the CPU. Testing games at realistic settings gives another perspective on the CPUs, whereas testing them at low resolutions will just yield exactly the same results synthetic tests do (making the whole suite of benchmarks much more redundant).

      • Bensam123
      • 7 years ago

      This is a good idea. Lowering the resolution quite a bit would scale better than adjusting in-game options.

      • Action_Parsnip
      • 7 years ago

      Soo agree. The original Crysis was the same: very high and lowest settings performed differently between CPUs. Sometimes lowering settings virtually turns off streaming systems/complex geometry.

      Maximum settings without AA, minimum resolution [u]to generate a good delta to show performance differences between CPUs[/u].

    • dragontamer5788
    • 7 years ago

    Running an i7-2600K here, and I don’t feel like upgrading after reading the review. Thanks for your good work, TR!

    Anyway, it’s a shame that the i7-4950HQ is so darn expensive; I think it will be outside of my price range in a laptop. But it’s a good preview of what will be coming out in the laptop space. I do need a laptop myself… so I’ll be looking into that as Haswell laptops hit the market.

    • RtFusion
    • 7 years ago

    After reading several of these reviews for Haswell, including this fine one from TR, I don’t feel bad at all about my recent purchase of an i7-3770K, which I plan on overclocking. Here’s hoping for 4.5 GHz on air! It should be a large jump from my current Core 2 Duo E7300 setup.

    For those already on Ivy Bridge or Sandy Bridge, it may be best to stay on those platforms, as the performance gains overall aren’t worth purchasing a new motherboard and a Haswell-compatible PSU.

      • travbrad
      • 7 years ago

      [quote]For those already on Ivy Bridge or Sandy Bridge, it may be best to stay on those platforms, as the performance gains overall aren't worth purchasing a new motherboard and a Haswell-compatible PSU.[/quote] Yep, particularly when it comes to gaming. A $400-500 upgrade (mobo and CPU) would gain me about 2 FPS in most games. The main advantages of Haswell are obviously in the mobile arena.

      • Bensam123
      • 7 years ago

      Yeah, I don’t think you’re missing anything, dude. No buyer’s remorse for this generation. Not like buying right before SB came out.

    • Goty
    • 7 years ago

    I feel like sending out the HQ part in a desktop scenario is a bit disingenuous since it will only come in a BGA package and won’t be available to system builders. This isn’t the first time something like this has been done to try and make a product shine, but I think it should be pointed out.

    *EDIT* That’s a bit ambiguous; I don’t mean pointed out in the review (as it most certainly was), but rather among the readers as a point of discussion.

      • Damage
      • 7 years ago

      I think Intel wanted the world to see what it’s capable of offering with the GT3e graphics solution, which is why we got one to test. Nothing wrong with that. Also, I think these parts *will* be in desktops–just AIO desktop systems from OEMs, as I said in the review.

        • dragontamer5788
        • 7 years ago

        The only issue I can think of… is that the nicer thermals of a desktop system will pad the test out a bit. If GT3e has an overheating / throttling issue, you guys won’t see it in this test.

        On the other hand, getting this sort of information early is great. I’m glad you guys have done the test… as long as you repeat the test later with a laptop, and ensure the performance remains similar. 🙂

        • Goty
        • 7 years ago

        This is why I added “in a desktop scenario” as a caveat. For enthusiasts, it just doesn’t make sense. The vast majority of us won’t be buying pre-built desktops or AIOs, but we are quite likely to be in the crowd that values function over form in laptops and are willing to buy some of the larger chassis notebooks out there, which is where I think this part makes the most sense.

        • derFunkenstein
        • 7 years ago

        Looks like the perfect fit for a wave of slightly-larger-than-NUC devices. And probably the next Mac Mini.

      • Voldenuit
      • 7 years ago

      The 4950HQ will be available to OEMs, who can presumably choose to integrate it into desktop systems if they see a market. This is much less shady than the vaporware 1/1.13 GHz P3s that were impossible to find at launch but miraculously made their way to review sites.

        • Goty
        • 7 years ago

        Oddly enough, my parents bought a 1GHz PIII from Dell shortly after its launch.

      • chuckula
      • 7 years ago

      Uh… last time I checked, every GPU that you can buy is soldered onto a board, and they are apparently available to system builders. I wouldn’t be surprised to see mini-ITX motherboards and other interesting designs come onto the market that use these chips… kind of exactly like how the Brazos/Kabini systems have motherboards with soldered-on processors.

        • Goty
        • 7 years ago

        [quote]Uh… last time I checked, every GPU that you can buy is soldered onto a board, and they are apparently available to system builders.[/quote] ... I'll let you take a while and try to figure out what's wrong with that argument. As for the Brazos boards, that's a very different story. Selling a board with a company's lowest-priced SKUs is very different from selling a board with the highest-priced SKU; how many people are willing to pay a $300 premium [i]just for the CPU[/i] in such a system?

          • smilingcrow
          • 7 years ago

          There will be a bunch of R-series BGA chips, including two i5s, so pricing may not be as bad as you suggest.

            • Goty
            • 7 years ago

            Sure, but that’s not the HQ part, which is really the only BGA part I’d be interested in.

            • smilingcrow
            • 7 years ago

            Apart from the lower performance and lower TDP, are there any significant differences?

            • Goty
            • 7 years ago

            Ah, I see now that (at least one of) the R parts are going to come with HD 5200 graphics as well. Interesting that the price wasn’t released. That eDRAM is a big chunk of silicon, though, and Intel has never been one for offering a bargain when it doesn’t have to.

            • smilingcrow
            • 7 years ago

            I thought they all come with Iris Pro? OEM only, hence no tray pricing.

            • Goty
            • 7 years ago

            Could be; I’m just going off the one listed in the chart in the review. However, being OEM-only kind of fits in with the argument I started off with (and was promptly downvoted for).

            • smilingcrow
            • 7 years ago

            I thought it was a pointless argument to be honest.
