Intel’s Core i7-5960X processor reviewed

For a PC hobbyist who’s into building high-end systems with elaborate water-cooling setups and multiple GPUs, it doesn’t get any better than Intel’s Core i7 Extreme processors. They’re pricey, sure, but they’re clearly the fastest, most capable CPUs on the planet.

Except, you know, when they aren’t.

The last generation of Intel’s Extreme CPUs lost much of its luster earlier this year when the Devil’s Canyon chips arrived in mid-range desktops with higher clock speeds and sometimes superior performance. It didn’t help that the Core i7-4960X and friends were saddled with the older X79 chipset, whose selection of USB and SATA ports left much to be desired.

Happily, Intel has been cooking up a new high-end platform that should remove all doubt about who’s top dog. The CPU is known as Haswell-E, and it brings with it an updated companion chipset, the X99. Together, this dynamic duo offers more of absolutely everything you’d want in a high-end rig: more cores, larger caches, and a huge increase in high-speed I/O ports. Haswell-E is also the first desktop CPU to support DDR4 memory, which promises faster transfer rates than DDR3.

We’ve been waiting impatiently for Haswell-E’s arrival for most of the year. At last, it’s finally here. We’ve had the top CPU in the lineup, the Core i7-5960X, up and running in Damage Labs for a while now—and we’ve tested it more ways than is probably healthy. Read on for our in-depth assessment.

The E is for Extreme

Compared to the prior-gen Ivy Bridge-E chips, the new Haswell-E silicon is an upgrade on just about every front—except maybe one. Both chips are built using Intel’s 22-nm fabrication process with tri-gate transistors. Intel is on the cusp of releasing 14-nm chips for use in tablets and laptops, but these big chips probably won’t move to the new process for another year.

The most notable change in Haswell-E is embedded in its name: the transition to newer CPU cores based on the Haswell microarchitecture. Compared to Ivy Bridge, Haswell cores can execute about 5-10% more instructions in each clock cycle—and possibly more if programs make use of AVX2 instructions for fast parallel processing. Haswell also brings its voltage regulation circuitry onto the CPU die, which can allow for faster, finer-grained control over the delivery of power around the chip.

A look at the Haswell-E die. Source: Intel.

Those improvements are welcome, but Intel hasn’t left anything to chance. The Core i7-5960X packs eight cores, and its L3 cache capacity is a beefy 20MB. That’s two more cores and 5MB more cache than the prior-gen Core i7-4960X, which should be enough to ensure the new chip’s performance superiority in multithreaded workloads.

To feed all of those cores, Haswell-E can transfer tremendous, almost unreasonable amounts of data. One of the key enablers here is DDR4 memory, which offers transfer rates of 2133 MT/s on these first products—up from DDR3 at 1866 MT/s in Ivy-E—and promises to scale up from there. Haswell-E has four memory channels, so it’s starting with roughly 68 GB/s of memory bandwidth. In theory, that’s nearly 9 GB/s more than the last gen’s quad-channel DDR3-1866. That’s also, coincidentally, about the same amount of memory throughput the Xbox One has dedicated to both its CPU cores and graphics.
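Those peak figures fall straight out of the transfer rate and channel count, since each memory channel is 64 bits (8 bytes) wide. A minimal sketch of the arithmetic, in Python:

def peak_bandwidth_gbs(transfer_rate_mts, channels, bytes_per_transfer=8):
    # Peak theoretical bandwidth in GB/s for a 64-bit-per-channel memory interface
    return transfer_rate_mts * bytes_per_transfer * channels / 1000.0

haswell_e = peak_bandwidth_gbs(2133, channels=4)  # quad-channel DDR4-2133
ivy_e = peak_bandwidth_gbs(1866, channels=4)      # quad-channel DDR3-1866
print(f"Haswell-E: {haswell_e:.1f} GB/s, Ivy-E: {ivy_e:.1f} GB/s, delta: {haswell_e - ivy_e:.1f} GB/s")
# Haswell-E: 68.3 GB/s, Ivy-E: 59.7 GB/s, delta: 8.5 GB/s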

Speaking of graphics, one of the big selling points for these Extreme platforms is PCI Express bandwidth for use with multiple graphics cards. Haswell-E doesn’t disappoint on that front, with 40 lanes of PCIe 3.0 connectivity coming directly off the CPU die. The CPU can host multi-GPU configs with 16 lanes dedicated to two different graphics cards—or up to four graphics cards with eight lanes each. That’s the same basic config as in the last gen, with a few tweaks. One change is the ability to host a 5×8 setup, if the motherboard is built to support it. Indeed, the Asus X99 Deluxe board in our test system has five PCIe x16 slots onboard. I’m not quite sure what you’d do with five graphics cards at once, but it is apparently a possibility now.

Code name | Key products | Cores/modules | Threads | Last-level cache | Process node (nm) | Est. transistors (millions) | Die area (mm²)
Gulftown | Core i7-9xx | 6 | 12 | 12 MB | 32 | 1168 | 248
Sandy Bridge-E | Core i7-39xx | 8 | 16 | 20 MB | 32 | 2270 | 435
Ivy Bridge-E | Core i7-49xx | 6 | 12 | 15 MB | 22 | 1860 | 257
Haswell-E | Core i7-59xx | 8 | 16 | 20 MB | 22 | 2600 | 356
Vishera | FX | 4 | 8 | 8 MB | 32 | 1200 | 315

All of this beefy hardware makes for a complex chip. Haswell-E is certainly that, at roughly 2.6 billion transistors and 356 mm². The quad-core Haswell chip is only 177 mm², or about half the size, and that’s with integrated graphics. You can see the difference in the dimensions of the packages used for the socketed processors below.

The quad-core Haswell Core i7-4790K (left) versus the Core i7-5960X (right)

Yeah, this is big and substantial hardware. Here’s a look at the three new Haswell-E-based CPU models alongside their quad-core Haswell cousins.

Model | Cores/threads | Base clock (GHz) | Max Turbo clock (GHz) | L3 cache (MB) | PCIe 3.0 lanes | Memory channels | Memory type & max speed | TDP (W) | Price
Core i7-5960X | 8/16 | 3.0 | 3.5 | 20 | 40 | 4 | DDR4-2133 | 140 | $999
Core i7-5930K | 6/12 | 3.5 | 3.7 | 15 | 40 | 4 | DDR4-2133 | 140 | $583
Core i7-5820K | 6/12 | 3.3 | 3.6 | 15 | 28 | 4 | DDR4-2133 | 140 | $389
Core i7-4790K | 4/8 | 4.0 | 4.4 | 8 | 16 | 2 | DDR3-1600 | 88 | $339
Core i5-4690K | 4/4 | 3.5 | 3.9 | 6 | 16 | 2 | DDR3-1600 | 88 | $242

The Core i7-5960X gives up some clock frequency to cram eight cores into its 140W power envelope. Those base and boost clocks of 3.0 and 3.5GHz are down quite a bit from the 3.6/4.0GHz speeds of the Core i7-4960X. Even with Haswell’s per-clock performance improvements, those lower frequencies will have consequences in workloads that don’t scale up to 16 threads perfectly.

As usual, Intel charges a big premium for its top-end processor. You’re probably better off buying the Core i7-5930K for over 400 bucks less, as long as you can live with “only” six cores (and 12 threads via Hyper-Threading). The 5930K has the added advantage of slightly higher clock speeds, too. Then again, I’m not sure how much stock clocks matter, since all of the X- and K-series parts shown above come with unlocked multipliers for dead-simple overclocking.

One product you’ll probably want to avoid is the Core i7-5820K, which Intel has ruined by disabling a bunch of the PCI Express lanes. I swear, if there’s a way to tune a knob or dial in order to gimp a CPU for the sake of product segmentation, Intel’s product people will find that knob and turn it, no matter what. In this case, the Core i7-5820K loses the ability to host a dual-graphics setup with 16 lanes to each PCIe slot. Have fun explaining that one to your friend who popped $389 for a CPU and about the same for a fancy X99 motherboard, only to find that it’s no better—not even in theory—than a 4790K for dual-GPU setups. This issue is more pressing now that AMD relies on PCI Express bandwidth for transferring CrossFire frames between GPUs.

We have in the past considered CPUs like the Core i7-3820 to be a nice entry point into Intel’s higher-end platforms. That ends here. The 5820K’s hobbled PCIe removes a major rationale for the X99 platform’s adoption among PC gamers. Unless you really know what you’re doing, stay away from it.

 

A new socket: LGA2011-v3

As you might expect given the integrated voltage regulation and the shift to DDR4, Haswell-E adopts a new socket type that isn’t compatible with prior chips. Intel calls it Socket 2011-v3. Although the pin config is different, the new socket looks a lot like the one it replaces. That’s good news, since we’re fans of the robust retention mechanism and physical design of LGA2011. Coolers made for LGA2011 sockets should work just fine with LGA2011-v3, too.

This socket is tightly flanked by DIMM slots. Cooler clearance around it will be an issue, which is one reason folks tend to choose water cooling for Core i7 Extreme systems. As you can see above, Corsair sensibly chose to equip its Vengeance LPX DIMMs with low-profile heat spreaders, which is the right way to do it, in my view.

The deal with DDR4 memory

This new platform requires DDR4 memory, of course. The modules have 288 pins and are notched differently along the bottom, so they’re completely incompatible with DDR3. DDR3 has been with us for a long, long time, and the switch to a new memory type promises big benefits, at least eventually.

A DDR4 module (top) versus DDR3 (bottom)

One major plus is lower-power operation. Samsung says DDR4 modules require about 30-40% less power than even DDR3L DIMMs. Some of that gain comes from a lower 1.2V standard operating voltage, and the rest comes from a collection of design features expressly intended to improve power efficiency. For instance, DDR4’s smaller-sized pages require less power to activate. All told, the savings should add up to about 2W per module. That’s not really a big deal in the context of a high-end desktop, but it would be in a tablet or in a server crammed full of DIMMs.

Speaking of which, DDR4 is also primed to achieve higher bit densities than DDR3, and the spec includes native support for chip stacking. Samsung is already using through-silicon vias to stack four DDR4 chips on top of one another.

Another big perk of DDR4 is, of course, additional bandwidth. This new standard has been designed to reach higher transfer rates than DDR3. As with most new memory types, its potential may not be realized right away. DDR3 currently tops out at 2133 MT/s, more or less, and that’s where DDR4 starts with Haswell-E. Thing is, memory makers are already working on DDR4 chips capable of 3200 MT/s operation.

Although the Core i7-5960X doesn’t officially support RAM speeds above 2133 MT/s, the firmware on our Asus X99 Deluxe offers options as high as 4000 MT/s. Heck, the Corsair Vengeance LPX DIMMs we used for testing are rated for 2800 MT/s at 1.2V. Intel has even blessed an XMP (for eXtreme Memory Profile) 2.0 spec that will allow DDR4 DIMMs to auto-configure themselves at higher clocks on X99 motherboards.

So DDR4 looks to have plenty of headroom right out of the gate. The more difficult question is whether any common consumer applications will actually benefit from the additional bandwidth.

The X99 platform

Block diagram of the X99 chipset and platform. Source: Intel.

Few folks will question the wisdom of giving the X99 chipset more oomph. This new companion I/O chip is loaded to the gills, with 10 SATA 6Gbps ports and six USB 3.0 ports, which is enough not to be embarrassing like the X79. The X99 chip can also support M.2 and SATA Express-based storage, although you’re surely better off hanging fast SSDs directly off of the CPU. What happens there will depend on the motherboard makers.

Mobo manufacturers also have the option of implementing Thunderbolt 2 on the X99 platform if they wish. Doing so will add some costs, as Thunderbolt tends to do, but the return will be an external I/O connection that’s capable of 20 Gbps transfers. That’s twice the rate of the original Thunderbolt and four times what USB 3.0 can sustain.

The possibilities for I/O configurations on X99 boards are incredibly complex given the number of ports, lanes, and slots available between the CPU and the X99 chipset. Geoff will be covering the particulars of various motherboards in his reviews, including today’s look at the Asus X99 Deluxe. I’ll leave most of the detail to him, but there is one caveat about the X99’s setup I should note.

In the block diagram above, you can see the “DMI 2.0 x4” connection between the CPU and the X99. That’s essentially a dedicated PCIe 2.0-style link from chip to chip—which means it has only 20 Gbps of raw, bidirectional bandwidth available to it. Behind this not-especially-fast interconnect are six USB 3.0 ports, eight USB 2.0 ports, eight PCIe 2.0 lanes, 10 SATA 6Gbps ports, and more. Do the math, and it’s pretty abysmal. The X99 just can’t support nearly the amount of concurrent I/O that its port payload suggests—not if those transfers are going to the CPU or memory. For most desktop users, this bottleneck probably won’t become a problem too often, but it’s still pretty far from ideal.
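To put rough numbers on that mismatch, here’s a quick sketch that tallies the nominal peak rates of the ports hanging off the X99 against the DMI uplink. These are spec-sheet figures, not measurements, and real-world throughput will be lower on both sides of the comparison.

# Nominal peak I/O bandwidth behind the X99 PCH vs. its DMI 2.0 x4 uplink
dmi_gbps = 4 * 5.0  # four PCIe 2.0-style lanes at 5 GT/s each, per direction

downstream_gbps = {
    "USB 3.0 x6": 6 * 5.0,       # 5 Gbps per port
    "USB 2.0 x8": 8 * 0.48,      # 480 Mbps per port
    "PCIe 2.0 x8": 8 * 5.0,      # 5 GT/s per lane
    "SATA 6Gbps x10": 10 * 6.0,  # 6 Gbps per port
}

total = sum(downstream_gbps.values())
print(f"DMI uplink: {dmi_gbps:.0f} Gbps, downstream ports: {total:.0f} Gbps "
      f"({total / dmi_gbps:.1f}x oversubscribed)")
# DMI uplink: 20 Gbps, downstream ports: 134 Gbps (6.7x oversubscribed)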

 

Our testing methods

The Cooler Master Nepton 140XL kept our Haswell-E frosty

As usual, we ran each test at least three times and have reported the median result. Our test systems were configured like so:

AMD FX-8350
• Motherboard: Asus Crosshair V Formula (990FX north bridge, SB950 south bridge)
• Memory: 16 GB (2 DIMMs) AMD Performance Series DDR3 SDRAM at 1866 MT/s, 9-10-9-27 1T
• Chipset drivers: AMD chipset 13.12
• Audio: Integrated SB950/ALC889 with Realtek 6.0.1.7233 drivers
• OpenCL ICD: AMD APP 1526.3

AMD A6-7400K and A10-7800
• Motherboard: Asus A88X-PRO (A88X FCH)
• Memory: 16 GB (4 DIMMs) AMD Radeon Memory Gamer Series DDR3 SDRAM at 1866 and 2133 MT/s, 10-11-11-30 1T
• Chipset drivers: AMD chipset 13.12
• Audio: Integrated A85/ALC892 with Realtek 6.0.1.7233 drivers
• OpenCL ICD: AMD APP 1526.3
• IGP drivers: Catalyst 14.6 beta

Pentium G3258, Core i3-4360, Core i5-4590, and Core i7-4790K
• Motherboard: Asus Z97-A (Z97 Express)
• Memory: 16 GB (2 DIMMs) Corsair Vengeance Pro DDR3 SDRAM at 1333 MT/s (8-8-8-20 1T) and 1600 MT/s (9-9-9-24 1T)
• Chipset drivers: INF update 10.0.14, iRST 13.0.3.1001
• Audio: Integrated Z97/ALC892 with Realtek 6.0.1.7233 drivers
• OpenCL ICD: AMD APP 1526.3
• IGP drivers: 10.18.10.3652

 

Core i5-2500K
• Motherboard: Asus P8Z77-V Pro (Z77 Express)
• Memory: 16 GB (2 DIMMs) Corsair Vengeance Pro DDR3 SDRAM at 1333 MT/s, 8-8-8-20 1T
• Chipset drivers: INF update 10.0.14, iRST 13.0.3.1001
• Audio: Integrated Z77/ALC892 with Realtek 6.0.1.7233 drivers
• OpenCL ICD: AMD APP 1526.3

Core i7-4960X
• Motherboard: Asus P9X79 Deluxe (X79 Express)
• Memory: 16 GB (4 DIMMs) Corsair Vengeance DDR3 SDRAM at 1866 MT/s, 9-10-9-27 1T
• Chipset drivers: INF update 10.0.14, iRST 13.0.3.1001
• Audio: Integrated X79/ALC898 with Realtek 6.0.1.7233 drivers
• OpenCL ICD: AMD APP 1526.3

Core i7-5960X
• Motherboard: Asus X99 Deluxe (X99)
• Memory: 16 GB (4 DIMMs) Corsair Vengeance LPX DDR4 SDRAM at 2133 MT/s, 15-15-15-36 1T
• Chipset drivers: INF update 10.0.17, iRST 13.1.0.1058
• Audio: Integrated X99/ALC1150 with Realtek 6.0.1.7233 drivers
• OpenCL ICD: AMD APP 1526.3

They all shared the following common elements:

• Hard drive: Kingston HyperX SH103S3 240GB SSD
• Discrete graphics: XFX Radeon HD 7950 Double Dissipation 3GB with Catalyst 14.6 beta drivers
• OS: Windows 8.1 Pro
• Power supply: Corsair AX650

Thanks to Corsair, XFX, Kingston, MSI, Asus, Gigabyte, Cooler Master, Intel, and AMD for helping to outfit our test rigs with some of the finest hardware available. Thanks to Intel and AMD for providing the processors, as well, of course.

Some further notes on our testing methods:

 

 

  • The test systems’ Windows desktops were set at 1920×1080 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.
  • We used a Yokogawa WT210 digital power meter to capture power use over a span of time. The meter reads power use at the wall socket, so it incorporates power use from the entire system—the CPU, motherboard, memory, graphics solution, hard drives, and anything else plugged into the power supply unit. (The monitor was plugged into a separate outlet.) We measured how each of our test systems used power across a set time period, during which time we encoded a video with x264.
  • After consulting with our readers, we’ve decided to enable Windows’ “Balanced” power profile for the bulk of our desktop processor tests, which means power-saving features like SpeedStep and Cool’n’Quiet are operating. (In the past, we only enabled these features for power consumption testing.) Our spot checks demonstrated to us that, typically, there’s no performance penalty for enabling these features on today’s CPUs. If there is a real-world penalty to enabling these features, well, we think that’s worthy of inclusion in our measurements, since the vast majority of desktop processors these days will spend their lives with these features enabled.

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Memory subsystem performance

Since we have a new chip architecture and a new memory type on the bench, let’s take a look at some directed memory tests before moving on to real-world applications.

The fancy plot above mainly looks at cache bandwidth. This test is multithreaded, so the numbers you see show the combined bandwidth from all of the L1 and L2 caches on each CPU. Since Haswell-E has eight 32KB L1 caches, we’re still in the L1 cache at the 256KB block size above. The next three points, up to 2MB, are hitting the L2 caches, and beyond that, up to 16MB, we’re into the L3.

Intel’s architects essentially doubled the bandwidth in Haswell’s L1 and L2 caches compared to Ivy Bridge in order to make them fast enough to support AVX2’s higher throughput. We don’t see quite a doubling of performance in our measurements when comparing the Core i7-4960X to the 5960X, but there are a lot of moving parts here. The 5960X has more cores but a lower frequency, for instance. Regardless, the 5960X’s caches can sustain vastly more throughput than any other CPU we’ve tested.

Stream offers a look at main memory bandwidth, and the results are scandalous. I tried a number of different thread counts and affinity configs, but I just couldn’t extract any more throughput from the 5960X with this version of Stream. In fact, I even tried raising the DDR4 speed from 2133 to 2800 MT/s, and throughput didn’t improve. Frustrated, I decided to try a different bandwidth test from AIDA64.

That’s more like it. The Haswell-E/DDR4 combo can achieve higher throughput in the right situation. The memory read results don’t tell the whole story, though.

Looks like DDR4 writes are slower than reads, at least with AIDA64’s access pattern. In the end, the memory copy test shows a reasonably good overall result for the 5960X and DDR4. At 2133 MT/s, it’s not much faster than an i7-4960X with DDR3-1866, but at 2800 MT/s, the new CPU and memory type move ahead.

Next up, let’s look at access latencies.

SiSoft has a nice write-up of this latency testing tool, for those who are interested. We used the “in-page random” access pattern to reduce the impact of prefetchers on our measurements. This test isn’t multithreaded, so it’s a little easier to track which cache is being measured. If the block size is 32KB, you’re in the L1 cache. If it’s 64KB, you’re into the L2, and so on.
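As a rough illustration, here’s a minimal sketch of how the test’s block size maps onto the 5960X’s cache hierarchy (32KB L1 data cache and 256KB L2 per core, 20MB of shared L3):

# Which level of the 5960X's cache hierarchy a given test footprint lands in
def cache_level(block_size_kb, l1_kb=32, l2_kb=256, l3_kb=20 * 1024):
    if block_size_kb <= l1_kb:
        return "L1"
    if block_size_kb <= l2_kb:
        return "L2"
    if block_size_kb <= l3_kb:
        return "L3"
    return "main memory"

for size_kb in (32, 64, 512, 4096, 65536):
    print(f"{size_kb:>6} KB -> {cache_level(size_kb)}")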

The bottom line: Haswell-E achieves roughly double the cache bandwidth without any real increase in the number of clock cycles of access latency. That’s excellent.

Accessing main memory is a bit of a different story. The Haswell-E-and-DDR4 combo has higher memory access latencies at 2133 MT/s than most of the DDR3-based setups. Fortunately, that slowdown pretty much evaporates once we crank the DDR4 up to 2800 MT/s.

Honestly, I’m not sure the added memory latency matters much, anyhow, given the fact that the 5960X has a massive 20MB L3 cache. We conducted most of our testing at the CPU’s officially supported memory spec of DDR4-2133. We may have to do some additional testing at 2800 MT/s to see what it nets us in real applications.

Some quick synthetic math tests

The folks at FinalWire have built some interesting micro-benchmarks into their AIDA64 system analysis software. They’ve tweaked several of these tests to make use of new instructions on the latest processors, including Haswell-E. Of the results shown below, PhotoWorxx uses AVX2 (and falls back to AVX on Ivy Bridge, et al.), CPU Hash uses AVX (and XOP on Bulldozer/Piledriver), and FPU Julia and Mandel use AVX2 with FMA.

Good grief. The Core i7-5960X is off to one heck of a start. Many of the big generational performance gains you’re seeing above come from the use of AVX2 and the FMA (or fused multiply-add) instruction. Haswell has it, and Ivy Bridge doesn’t. Notice how the quad-core 4790K nearly matches or even beats the six-core 4960X? That’s Haswell magic at work. Now, give the Haswell chip eight cores, and you have the i7-5960X.

 

Power consumption and efficiency

The workload for this test is encoding a video with x264, based on a command ripped straight from the x264 benchmark you’ll see later.


The 5960X’s impressive reduction in idle power consumption versus the i7-4960X comes mostly courtesy of the Asus X79 motherboard in the 4960X system, which idles on the high side; it’s a great board, but we’ve seen similarly configured X79 systems idle at around 64W. In fact, I’m a little disappointed that our 5960X system doesn’t go lower at idle. Yes, it has twice as many DIMMs as our quad-core Haswell systems onboard, but I’d hoped the power savings from DDR4 might move the needle a bit more.

Now that’s more like it. This drop in peak power use is an earnest improvement over the 4960X, since that Asus X79 mobo only draws more power than other boards at idle, not under load. At first, the fact that the 5960X system pulls less power than the 4960X one surprised me. After all, the 5960X has a 140W TDP, and the 4960X’s peak power rating is 130W. However, the 5960X’s TDP encompasses the CPU’s integrated voltage regulators. On the 4960X, the VRMs are external and don’t count toward the TDP, so it makes sense that the 5960X’s total system power draw would be lower.

That said, this workload doesn’t really engage all eight cores and 16 threads on the 5960X throughout its execution. We saw transient peaks of 130W or more during the test run, and this same system will draw as much as 163W during a 3D rendering workload.

We can quantify efficiency by looking at the amount of power used, in kilojoules, during the entirety of our test period, when the chips are busy and at idle.

Perhaps our best measure of CPU power efficiency is task energy: the amount of energy used while encoding our video. This measure rewards CPUs for finishing the job sooner, but it doesn’t account for power draw at idle.
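For anyone who wants to reproduce the task-energy number, it’s just the area under the power curve for the duration of the encode. A minimal sketch, assuming a simple log of (seconds, watts) samples from a wall-socket meter:

# Task energy = integral of system power draw over the encode, in kilojoules
def task_energy_kj(samples):
    # samples: ordered list of (time_in_seconds, watts) pairs covering the encode
    energy_j = 0.0
    for (t0, w0), (t1, w1) in zip(samples, samples[1:]):
        energy_j += 0.5 * (w0 + w1) * (t1 - t0)  # trapezoidal integration
    return energy_j / 1000.0

# Hypothetical run: a steady 150 W draw over a 100-second encode works out to 15 kJ
samples = [(t, 150.0) for t in range(0, 101)]
print(f"{task_energy_kj(samples):.1f} kJ")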

The 5960X looks pretty darned efficient overall, regardless of the fact that this isn’t the ideal workload for a 16-threaded CPU. Only a couple of quad-core Haswells use less energy to complete the task—and the 5960X brings a vast improvement in efficiency over the Core i7-4960X.

Pour one out for my homies at AMD.

 

Crysis 3

For the most part, Crysis 3 isn’t nearly as demanding as past Crysis games were, relatively speaking. Any reasonably modern system can run it well at the right settings. There is, however, one level that’s particularly difficult for slower CPUs—and it seems to benefit from having more hardware threads and cores on hand. Some of you all suggested that we test there, so we did. You can see the crazy amounts of grass and other vegetation in the video below.

 


As usual, we recorded the time needed to render each and every frame the game produced. Click on the buttons above to cycle through plots of the frame times from one of our three test runs for each CPU.

What you’ll see, I think, is that frame times vary much more widely on the slower processors. The faster the CPU, the smoother and more consistent the frame rendering times become.

Average FPS is often kind of meaningless for reasons I’ve explained in the past. The slowest CPUs here, though, are clearly struggling. AMD’s A10-7800 is a more affordable CPU that competes with the Core i3-4360, and its 22 FPS average just isn’t getting it done. Its 99th percentile frame time of 68 milliseconds ain’t great, either; that means the slowest 1% of frames are being rendered at a rate equivalent to 14 FPS or less. Interestingly enough, the FX-8350 performs much better; it’s an eight-core variant of more or less the same basic architecture.

The Core i7-5960X essentially ties for the lead with its predecessor, the 4960X, and its little brother, the Haswell-based 4790K. Although the 4790K only has four cores, it has eight hardware threads and runs at substantially higher clock speeds than the 5960X. As a result, it comes out ahead of the 5960X by just a smidgen.

We can sort the frame times from best to worst and look at the tail end of the results, where the slowest frames reside, to see a clear separation between the faster and slower CPUs. The 4960X, 4790K, and 5960X are all packed together almost identically. Above them is the eight-core FX-8350, whose performance here is commendable, followed by the Core i5-4590 and the rest of the pack.


Our final frame-time-based metric looks at “badness,” those cases where frames take an especially long time to produce and the user is most likely to notice. We have several thresholds of “badness,” starting at 50 ms, which is the equivalent of 20 FPS or three full refresh cycles on a 60Hz display. Any time you’re waiting 50 ms or more for a frame, you’re likely to notice that in-game animation isn’t as smooth as it should be. Few of these processors spend any time at all beyond our 50-ms threshold in this test session, and none of the high-end ones do.
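For the curious, here’s a small sketch of how metrics like these can be computed from a list of per-frame render times. The input numbers below are made up, and our real data comes from frame-time logs, but the percentile and “time spent beyond X” math is the same idea.

# 99th-percentile frame time and time spent beyond each "badness" threshold
def frame_time_metrics(frame_times_ms, thresholds_ms=(50.0, 33.3, 16.7)):
    ordered = sorted(frame_times_ms)
    # 99% of frames were rendered in this many milliseconds or less
    p99 = ordered[min(len(ordered) - 1, int(len(ordered) * 0.99))]
    # Total time spent working on frames beyond each threshold
    beyond = {t: sum(ft - t for ft in frame_times_ms if ft > t) for t in thresholds_ms}
    return p99, beyond

# Hypothetical run: mostly 16 ms frames with a few slow ones mixed in
times = [16.0] * 990 + [40.0] * 8 + [70.0] * 2
p99, beyond = frame_time_metrics(times)
print(f"99th percentile: {p99:.1f} ms (~{1000.0 / p99:.0f} FPS equivalent)")
print(f"Time spent beyond 50 ms: {beyond[50.0]:.0f} ms")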

The 5960X essentially ties the 4960X and 4790K at the other two thresholds, so it’s among the fastest CPUs we’ve tested. Unfortunately, though, this CPU doesn’t break any new performance ground in this test scenario compared to last year’s model.

 

Watch_Dogs

Here’s another game with a reputation for making life hard on CPUs. We tested by taking a quick stroll around the block and then blowing up an electrical junction box with our phone. Kind of like real life around here.

 


 

Well, this game is strenuous—but only for the A10-7800 among the CPUs we tested. Even the Core i3-4360 aces this test, rendering 99% of the frames in 19 ms or less. Notably, the Core i7-5960X takes the slightest of leads over the 4790K and 4960X.

 


Yeah, perhaps we need to find a different area to test, but this game doesn’t look to be as tough on the CPU as we expected. Even the A10-7800 stays below our 50-ms “badness” threshold throughout the test session.

 

Arkham Origins

 


 

This game is based on Unreal Engine 3, which has gotta be the most widely used game engine right now. UE3 doesn’t use truly robust multithreading, so Arkham Origins doesn’t respond as well as the last two games did to the addition of more cores and hardware threads. The CPUs do encounter some challenges, but the processor that performs the best is the one with the highest clock speed: the quad-core i7-4790K.

Not only that, but for whatever reason, the 5960X runs into a bit of trouble. Some of its frame times are higher than what we see out of even the Core i3-4360. That may simply be because this game is especially sensitive to clock speeds. The Core i3-4360 is Haswell-based and runs at 3.7GHz, while the 5960X starts at 3GHz and maxes out at 3.5GHz with Turbo.

Whatever the cause, the 5960X places near the back of the pack in the 99th percentile frame time scores.

You can see how the 5960X’s latency curve shoots upward during the last few percentage points worth of frames. That’s probably because, in those toughest instants, the game’s performance is gated by a single thread’s execution speed. With its relatively modest clocks, the 5960X can’t power through those situations as easily as the other Intel CPUs we tested.


Remember that we’re talking about minute differences here. None of these processors push beyond our 50-ms “badness” threshold, and the 5960X barely spends any time beyond 33 ms, either. The deltas between CPUs are only apparent at the 16.7-ms threshold. For what it’s worth, though, this game felt “smoother” during play-testing on the other Intel CPUs than it did on the 5960X.

 

Battlefield 4 with Mantle

 


 

 

 


Test Battlefield 4 with Mantle, they said. It’ll be interesting, they said.

Ack.

Turns out the work that DICE and AMD have done with the low-overhead Mantle API in BF4 has really come together nicely over time. Even the A10-7800, which has struggled mightily in the other games, turns in a near-perfect performance.

Then again, you may recall that BF3 ran almost flawlessly on just about any CPU, too, and it used Direct3D.

Thief

Our final gaming test also explores Mantle performance. We had to use Thief’s built-in benchmark, since our usual tool for frame-time testing, Fraps, doesn’t yet work with Mantle.

By reducing CPU overhead, Mantle really does allow the lower-end CPUs to perform better in Thief. Unfortunately, the game wouldn’t start up properly in Mantle mode with the two lowest-end CPUs we tested, so they didn’t benefit from the new API.

Beyond that, the 5960X continues to perform well, but it’s clearly not the fastest CPU for gaming. That’s really no surprise, given its clock speeds.

 

Productivity

Compiling code in GCC

Our resident developer, Bruno Ferreira, helped put together this code compiling test. Qtbench tests the time required to compile the Qt SDK using the GCC compiler. The number of jobs dispatched by the Qtbench script is configurable, and we set the number of threads to match the hardware thread count for each CPU.

 

TrueCrypt disk encryption

TrueCrypt supports acceleration via Intel’s AES-NI instructions, so encryption with the AES algorithm, in particular, should be very fast on the CPUs that support those instructions. We’ve also included results for another algorithm, Twofish, that isn’t accelerated via dedicated instructions.

7-Zip file compression and decompression

The 5960X simply dominates the first chunk of our non-gaming application tests. These programs are all pretty widely multithreaded, so the 5960X is able to put all eight of its cores to good use.

JavaScript performance

 

These two JavaScript benchmarks tend to prize per-thread performance, so the 5960X finishes behind its higher-clocked Haswell stable mates. Beats the Ivy-based 4960X, though.

Video encoding

x264 HD video encoding

Our x264 test involves one of the latest builds of the encoder with AVX2 and FMA support. To test, we encoded a one-minute, 1080p .m2ts video using the following options:

--profile high --preset medium --crf 18 --video-filter resize:1280,720 --force-cfr

The source video was obtained from a repository of stock videos on this website. We used the Samsung Earth from Above clip.

As we noted earlier in the power efficiency tests, the x264 encoder doesn’t seem to use all 16 of the 5960X’s threads especially well—at least, not with the settings we used.

Handbrake HD video encoding

Our Handbrake test transcodes a two-and-a-half-minute 1080p H.264 source video into a smaller format defined by the program’s “iPhone & iPod Touch” preset.

The 5960X turns things around in Handbrake with an overall victory. That’s what one would hope to see, since one of the big target workloads for a CPU like this is video encoding.

Image processing

The Panorama Factory photo stitching
The Panorama Factory handles an increasingly popular image processing task: joining together multiple images to create a wide-aspect panorama. This task can require lots of memory and can be computationally intensive, so The Panorama Factory comes in a 64-bit version that’s widely multithreaded. I asked it to join four pictures, each eight megapixels, into a glorious panorama of the interior of Damage Labs.

Chalk up one more win for the 5960X in our photo-stitching app, but it’s a close one.

 

3D rendering

LuxMark

Because LuxMark uses OpenCL, we can use it to test both GPU and CPU performance—and even to compare performance across different processor types. OpenCL code is by nature parallelized and relies on a real-time compiler, so it should adapt well to new instructions. For instance, Intel and AMD offer integrated client drivers for OpenCL on x86 processors, and they both support AVX. The AMD APP driver even supports Bulldozer’s and Piledriver’s distinctive instructions, FMA4 and XOP. We’ve used the AMD APP ICD on all of the CPUs, since it’s currently the fastest ICD in every case.

We’ll start with CPU-only results.

Eight cores are nice to have for parallel workloads like this one. The 5960X easily takes the top spot. Unfortunately, we’ve not yet seen the sort of speed-ups from AVX2 with FMA that we’d hoped—certainly nothing like what we saw in some of those synthetic AIDA64 tests, for instance.

We can try combining CPU and GPU computing power by asking both processor types to work on the same problem at once.

The 5960X’s rendering performance more than doubles when it gets help from AMD’s “Tahiti” GPU.

Cinebench rendering

The Cinebench benchmark is based on Maxon’s Cinema 4D rendering engine. This test runs first with a single thread and then with as many threads as the CPU has available hardware threads.

 

POV-Ray rendering

The 5960X erases any questions about which desktop CPU is the best at 3D rendering. Yeesh.

Scientific computing

MyriMatch proteomics

MyriMatch is intended for use in proteomics, or the large-scale study of proteins. You can read more about it here.

 

STARS Euler3d computational fluid dynamics

Euler3D tackles the difficult problem of simulating fluid dynamics. Like MyriMatch, it tends to be very memory-bandwidth intensive. You can read more about it right here.

Haswell-E with DDR4 looks to be especially well-suited to these sorts of workloads, which is why this same chip will also be sold in multi-socket Xeon workstations and servers.

 

Legacy comparisons

Many of you have asked for broader comparisons with older CPUs, so you can understand what sort of improvements to expect when upgrading from an older system. We can’t always re-test every CPU from one iteration of our test suite to the next, but there are some commonalities that carry over from generation to generation. We might as well try some inter-generational mash-ups.

Now, these comparisons won’t be as exact and pristine as our other scores. Our new test systems run Windows 8.1 instead of Windows 7, for instance, and have higher-density RAM and larger SSDs. We’re using some slightly different versions of POV-Ray, too. Still, scores in the benchmarks we selected shouldn’t vary too much based on those factors, so… let’s do this.

Our first set of mash-up results comes from our last two generations of CPU test suites, as embodied in our FX-8350 review from the fall of 2012 and our original desktop Haswell review from last year. This set will take us back at least four generations for both Intel and AMD, spanning a price range from under $100 to $1K.

Productivity

Image processing

3D rendering

Scientific computing

I think the short answer is: yes, it’s time to upgrade.

 

Legacy comparisons, continued

That was a nice start on the last page, but we can go broader than that. This next set of results includes fewer common benchmarks, but it takes us as far back as the Core 2 Duo and, yes, a chip derived from the Pentium 4: the Pentium Extreme Edition 840. Also present: dual-core versions of low-power CPUs from both Intel and AMD, the Atom D525 and the E-350 APU. We retired this original test suite after the 3960X review in the fall of 2011. We’ve now mashed it up with results from our first desktop Haswell review and from today.

Image processing

3D rendering

Still not old-school enough for you? In April of 2001, the Pentium III 800 rendered this same “chess2” POV-Ray scene in just under 24 minutes.

 

Overclocking

Since the 5960X’s multiplier is unlocked, overclocking this CPU only requires changing a few settings in the system BIOS. Naturally, we took a shot at it.

With some fiddling, we were able to get our 5960X running reasonably stable with all eight cores synced at 4.5GHz. To do so, we set the CPU voltage on our Asus X99 Deluxe motherboard to 1.325V. With a big water cooler attached, the CPU’s temperatures reached the low-to-mid 70s Celsius, which isn’t too bad.

We tried to push further, to 4.6GHz, but cranking up the multiplier produced BSODs in Windows almost instantly. We raised the voltage to 1.35V and then 1.375V, but it didn’t help. We then tried bumping up the cache voltage and turning up the fan speed on our water cooler, but the BSODs persisted. We had to settle for 4.5GHz.

Or so we thought. We then kicked off Cinebench to see what the clock speed increases had won us, but the system crashed during the benchmark with a BSOD. Ultimately, we had to drop down to 4.4GHz at 1.3V in order to run Cinebench without crashing.

That’s still, uh, crazy fast, especially in multithreaded applications.

Based on our experience and what we’ve heard from other folks with Haswell-E overclocking experience, I don’t think you can expect to see these chips getting into the 4.7-4.8GHz range that’s more common with Haswell dual- and quad-core parts—not very often, at least.

 

Conclusions

I’d say we’ve compiled enough performance data. Let’s summarize with a couple of our infamous price-performance scatter plots. The first one is based on a geometric mean of all of our non-gaming application tests, and the second one focuses on our frame-time-based game performance results. As ever, the best values will tend toward the top left corner of the plot.
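For reference, the overall index in the first plot is a geometric mean across the individual tests. The sketch below shows the idea with made-up, normalized scores; normalizing each result to a common baseline first is an assumption of this example, not a description of our exact bookkeeping.

from math import prod

def overall_index(normalized_scores):
    # Geometric mean of normalized scores (higher = better); the geomean keeps
    # one lopsided result from dominating the overall index
    return prod(normalized_scores) ** (1.0 / len(normalized_scores))

# Hypothetical normalized results for one CPU across several benchmarks
scores = [1.35, 1.10, 1.80, 0.95, 1.25]
print(f"Overall performance index: {overall_index(scores):.2f}")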


The fact that we’ve split up our results between gaming and non-gaming applications is convenient, since the story of the Core i7-5960X really does break along those lines.

For general desktop use, the 5960X is easily the fastest CPU we’ve ever tested, and it’s a healthy generational improvement over Ivy Bridge-E. This chip’s eight cores and 16 hardware threads really shine in heavily CPU-intensive applications that can make use of them—things like video encoding, image processing, and compiling large software projects. No, this CPU is not cheap. If you do the right sort of work, though, the 5960X could be well worth the investment, because it burns through large jobs much faster than anything short of a Xeon-based workstation.

That said, Core i7 Extreme processors have always been aspirational products. You pony up a grand for one of these things, you want the very best. For PC gamers, the 5960X marks the end of an era, of sorts. The best CPU you can buy for gaming is pretty clearly the Core i7-4790K; its combination of four cores, eight threads, and high clock speeds fares best overall in our gaming tests. The 5960X isn’t bad for gaming. Heck, the only thing it’s probably bad at is not being awesome. But the 5960X isn’t the best at everything—mostly because Intel chose to add more cores this time around. I suspect that fact matters to some folks.

If you want to build the finest multi-GPU gaming rig, there’s no denying the Haswell-E platform’s superiority in terms of PCI Express connectivity. I suspect that the Core i7-5930K, with six cores and a 3.7GHz Turbo peak, might be an ideal choice for a gaming rig based on this platform. Also, you know, the 5930K costs about 60% of what the 5960X does. Intel didn’t supply us with a 5930K to review, but it’s almost certainly the best value among Haswell-E CPUs.


Comments closed
    • pandemonium
    • 5 years ago

    Thanks very much for the legacy comparisons. That’s data that’s extremely difficult to get anywhere, so I applaud the effort and data provided there!

    • kamikaziechameleon
    • 5 years ago

    Wake me up when the 5930K is replaced at that price point by an 8-core alternative.

    As impressive as the darn stats are on the flagship, it doesn’t trickle down to anything we want to see in the more reasonably priced parts. Intel continues to diminish the draw of this to the enthusiast by focusing purely on people with money to burn. Yet again this is Intel making a product that in many cases fails to live up to the potential of its $300 step cousin. Perhaps I should be happy that my $300 processor is still king of the roost, but as an industry spectator this is a boring turnout.

    • USAFTW
    • 5 years ago

    David be gone, add your hands together for Goliath.

    • BiggieShady
    • 5 years ago

    Somehow I expected these game tests to be with CPU heavy games … like Skyrim with moded detail draw distance or something from Total War series

    • Arbedark
    • 5 years ago

    Yeah, Nice Review, and some nice Comments too.

    But, well, who can show some real results 🙂 without benchmarks here..

    I try one:

    I work at a Hospital, we are not that big, but enough for some Results.

    We have 4k+ Clients, HP at most. W7 x64 and a lot of custom Applications

    40% HP 8000/8200 sff clients (C2D, i3-2120)
    50 % HP 8300/800G1 sff clients (i5-3570 / i5-4570)
    10 % are Workstations ( 50/50 Z400 (Xeon W3550) and Z420 (E5-1650 v2)

    and of course some old forgotten ones..

    P4 / C2D for some neverused workplaces.. mostly forgotten to change.

    As it is with Users who barely can turn on a Monitor (sometimes even that is to much)
    they all work with more then 5 applications at the same time (some even more,…), mostly CPU heavy Loads (GPU`s only challange is Youtube 🙂 )

    My conclussions with our PC´s:

    P4 / C2D Models: quite good as long as u don´t start to many Applications same time, but got really Hot.. (p4 only)

    i3 Models: a little better with multithread, but weaker then a p4 if u only use one application, intel HD Chip is crap.. failures with drivers or bad performance

    i5-3570 Models: better then i3, but again not that special.. (medical applications are crap..)

    i5-4570 Models: our new Clients, quite good, works fine with most workloads and stay´s cool, but for the Users.. nothing special (again, medical applications are crap)

    Z400 Xeon W3550 are CRAP.. slow like hell.. even with HT on (standard is off on HP MB)
    Z420 E5-1650 v2 (my workplace) better then Z400 but slower then i5-4570 (HP 800G1 sff)

    I would like to see some AMD Clients too, maybe we get some for testing..

    So, its one thing to have some cool Benchmarks and Reviews of all that new Stuff, but for the Real World it means nothing..

    Ah and forgive my bad English 🙂

    • TEAMSWITCHER
    • 5 years ago

    “I swear, if there’s a way to tune a knob or dial in order to gimp a CPU for the sake of product segmentation, Intel’s product people will find that knob and turn it, no matter what.”

    I really, really, liked this comment.

      • LoneWolf15
      • 5 years ago

      That’s why I bought a Devil’s Canyon before they change their mind and do it to K-series Broadwell chips again.

    • CeeGee
    • 5 years ago

    Am I the only person that misses the Civ V late game benchmarks of old?

      • Airmantharp
      • 5 years ago

      Not really- read what Krogoth has to say about it below when I used it as an example of games that are not CPU limited.

      Taking his word for it, the game only uses two threads (or two primary threads) and thus isn’t a great way to test CPUs with differing numbers of cores, and that’s assuming that the test is a near end-game snapshot where the game is really loaded up.

    • Airmantharp
    • 5 years ago

    There’s so much disappointment in these comments, and there really shouldn’t be.

    Look at the gaming benchmarks- the 8-core beast largely keeps up with the quad-core 4790K that’s running 1GHz faster. And that’s with mostly consolized games, or BF4 in its rather weak single-player mode. I expect an overclocked 5820K to easily outperform an overclocked 4790K in demanding game scenarios, and while I agree that the 28-lane PCIe 3.0 limitation is silly, I don’t think it will be an issue for the average single or dual GPU setup. It certainly won’t be an issue for non-graphics/GPU compute workloads.

      • chuckula
      • 5 years ago

      Accidentally downthumbed when I should have up-thumbed…

      Expanding on your point on demanding game scenarios, we keep hearing about the 8-core consoles and how multi-threading is going to be so crucial in “modern” and “future” games. I’m not sure how much I buy that line, but if it’s true, then a 6-core 5820K is going to have both the “more core” and “more powerful core” angles covered.

        • Airmantharp
        • 5 years ago

        Thanks- the real problem with evaluating Haswell-E for gaming is that we’re still waiting on games that are either truly PC-centric or that have been earnestly developed with the new generation of consoles in mind, so the options for comparison testing are limited and in some cases difficult, such as multi-player gaming.

          • Krogoth
          • 5 years ago

          Haswell-E is a poor buy for a gaming CPU, like its Sandy Bridge-E and Ivy Bridge-E predecessors. There are only a handful of titles that utilize more than two threads, and this will continue to be the case for the foreseeable future.

          The primary reason? It is a PITA to code something that can effectively harness more than two threads when the work isn’t parallelized. The cost, time, and debugging involved make such software fall into the professional realm.

          The next-generation consoles will not change the scene either. We may see more games that use three or four threads, but anything beyond that is going to be a crapshoot. The extra threads on Haswell-E will go to waste.

          It is like buying an industrial-tier truck that you never use for hauling stuff, just for commuting to work.

            • Airmantharp
            • 5 years ago

            I agree on the present, of course. As to the future, well, that depends on how industrious developers are at making use of the new consoles. I will say that I’m not as characteristically pessimistic as you are :).

          • Andrew Lauritzen
          • 5 years ago

          Indeed, but I doubt that new generation console development is going to change this, even if they go perfectly multithreaded.

          The problem is console games are not going to raise the bar on *total* CPU use. It’s much harder to scale CPU load than GPU in most cases (i.e. how do you make your physics or AI simpler without affecting core gameplay/design?) and so a game that is designed to work on the consoles is by definition only going to use a fraction of one of these new chips.

          To put the comparison in more stark contrast: these things have ~9x the FLOPS of the console CPUs. That’s simply too wide a gap to expect a game to scale on the CPU. Thus, sadly, I don’t really think games are going to be a primary consumer of this level of CPU performance any time soon.

            • Airmantharp
            • 5 years ago

            If we’re talking straight console ports, I agree, and I’m not sure if the somewhat-to-truly optimized for PC titles and developed for PC titles will really make a difference.

            Hell, I mostly hope that they don’t, as I’m also in the 2500k 4.5GHz crowd and it’d mean that I’ve seen the ‘high point’ of CPU usage in games for at least the next few years. New GPU(s) and I’d be set to take advantage of a higher-resolution display if I get one, and I’d certainly have no reason to complain :).

    • jessterman21
    • 5 years ago

    Thanks for testing Welcome to the Jungle on Very High! Makes even Haswell i5s cry…

      • Krogoth
      • 5 years ago

      Not really, Crysis 3 is one of the few games that takes advantage of four and more threads. It is more of a showcase of improved thread scheduling in Windows 8.

    • LoneWolf15
    • 5 years ago

    Quick summary:

    -If your daily tasks involve heavy video encoding (H.264, or if you have future plans for H.265) with highly multithreaded apps, or 3D rendering, Haswell-E rocks.

    For anyone who does it occasionally, or games, Devil’s Canyon costs much less, lets you reuse your DDR3 if you bought decent RAM, and will last you through a few years, and the latest Z97 mainboards will support Broadwell when it comes out if you really itch to upgrade that quickly.

    Interesting how using GPU-assisted rendering brought Ivy-E and Devil’s Canyon much closer to parity with Haswell-E.

    • ronch
    • 5 years ago

    Intel Core i7-5960X (8 cores, 3.0GHz) .. $1,000
    Intel Core i5-4590 (4 cores, 3.3GHz) .. $200
    AMD FX-8350 (8 cores, 4.0GHz) .. $180

    The 5960X is the finest desktop CPU money can buy, but $180 sure makes the 8350 look compelling. That is, if you don’t put the i5-4590 into the equation. And with no overclocking aspirations one can go with a cheap but well-equipped B85-based board such as this one (http://www.gigabyte.com/fileupload/product/2/4567/8017_big.jpg). For most folks, $200 is the sweet spot, and they'll have to decide between the two based on their needs.

      • HisDivineOrder
      • 5 years ago

      The gamer who chooses an 8350 over any recent Intel chip is choosing a pretty crappy motherboard chipset. That has long range consequences that most gamers would do well to avoid.

        • ronch
        • 5 years ago

        Why does it seem like every time people talk about the FX lineup, the most important thing is gaming performance?

          • f0d
          • 5 years ago

          they do the same with the HEDT platform on intel also (check this comments section for proof)

          most people that come to techreport are after gaming performance per dollar so that gets reflected in the comments section and thats why people love the pentium AE

          • LoneWolf15
          • 5 years ago

          Gaming isn’t most important. But a quality mainboard chipset is. As much as I don’t think AMD processors are “bad”, I always look at three things:

          A Core i3 can go toe-to-toe with many of them, and a Core i5 can outright beat most.
          Intel processors cost less at the outlet.
          Intel chipsets have better USB3 and SATA driver support overall, and better stability.

          Intel chipsets and lower power consumption for Intel processors are bigger wins for me than their higher performance in the end.

      • Airmantharp
      • 5 years ago

      Eight “cores”

        • srg86
        • 5 years ago

        More like 4 and 4 half cores.

      • LoneWolf15
      • 5 years ago

      Last week, the i5-4690K Devil’s Canyon was $194.99 on special. And personally, I’d rather make up the $20 on a lower annual electric bill than buy an 8350.

    • ronch
    • 5 years ago

    What’s that strip of unused die area on the right side of the CPU cores? Did the uncore and memory controllers grow?

    • Klimax
    • 5 years ago

    Just some points:
    1)Some crypto-algos from currencies would be nice like Scrypt, X11 and others. They have built-in benchmark with preset input, so it is highly repeatable. And some of them are very sensitive to memory bandwidth…
    2)I’d like to see more comparisons between DirectX on GeForce versus Mantle on Radeon. (Mantle fixes only AMD drivers, nothing more so it would show how delta changes with new CPU)
    3) Would like to see Visual Studio C++ 2013 compiler with highest optimizations settings. That can be brutal on CPU. (While producing very effective output) Throw there COM/ATL/CX code and you get to see non-disk bound compilation…
    4) x264 --preset veryslow --pass 3 --bitrate 10600 --bframes 3 --qpmax 48 --ratetol 10 --merange 64 --me tesa --subme 11 --no-dct-decimate --no-fast-pskip --ssim
    That is for archiving video. It can take almost an entire day for a 1.5h-long video…

    Anyway, off to secure money. Only thing left is selection of mainboard.

    • Ninjitsu
    • 5 years ago

    Tom’s Hardware has done some really nice power consumption testing, which is worth checking out.

    http://www.tomshardware.com/reviews/intel-core-i7-5960x-haswell-e-cpu,3918-8.html

    Looks like the 5960X can touch a peak of 302W @ 1.38V and 4.8 GHz; no wonder Asus is recommending at least 25A on the 12V rail.

      • sschaem
      • 5 years ago

      Interesting, even with their custom water-cooling solution the chip will not sustain clocks above 4.3GHz because of thermal limits.

      4.5GHz seems stable (no throttling) with 6 cores (vs 8).

      Seems like an i7-5820K set to 4.2GHz might be a decent option: 33% slower for $600 in savings?

      • Krogoth
      • 5 years ago

      *sigh* This is what I have been telling the overclockers since 22nm parts came out and they were disappointed. 22nm was never built for ultra-high clock speeds, and it needs a ton of voltage to keep itself stable. Yes, the thermal paste issue for normal chips was an annoying problem, but the real problem is 22nm itself. It leaks like a SOB when overvolted and pushed hard.

      Delidding isn’t new either. It has been done in overclocking competitions since heatspreaders became commonplace. “Heatspreader” is a bit of a misnomer; it was never meant to improve thermal conductivity. It actually hinders it. The real function of a heatspreader is to protect the silicon from the physical stress of heatsinks. This was a known problem with the last generation of exposed cores (Socket A Athlons and P3 Coppermines), when there were a large number of horror stories of aftermarket HSF solutions killing chips (from improper installations to sudden impulses/torquing of the chassis).

      Delidding only became more commonplace in enthusiast rings because 22nm parts were the first chips from Intel since Prescott that were hitting thermal walls when pushed hard. Overclockers just got spoiled rotten by generations of effortless overclocking from Conroe to Sandy Bridge. Physics just caught up again. I’m not holding my breath on 14nm faring any better.

        • Ninjitsu
        • 5 years ago

        Yeah, 14nm will top out at even lower voltages.

        Transistor gate length and voltage are proportional, after all. Not sure how that maps to FinFETs, though.

    • ronch
    • 5 years ago

    Any optimism here that AMD will ever make a comeback? Will their upcoming x86 core save them? Or put the final nail in the coffin?

    Can’t wait for die shots!

      • HisDivineOrder
      • 5 years ago

      Their upcoming x86 CPU core is supposedly coming in 2016, but their track record with new architectures suggests imo they’ll show up in 2017 or later. They aren’t exactly stellar at introducing new cores when they say, either. How’s that Steamroller update to the FX series working out? 😉

      The same basic story plays out for AMD over and over, which leads me to believe that what will happen to AMD with their new architecture has happened before:

      That is, AMD will show up late with an architecture that COULD have been competitive if it had shown up two years before it did. Then when it does show up, it is undone by fabrication issues that are mostly outside their control (as long as you assume them not leaving GloFo for a foundry with better QA/process control is something out of their control) that cause major leakage issues, leading to higher than desired voltage requirements and heat production.

      Then they’ll make some silly and ridiculous choices with regards to their motherboard chipset to support it, likely focus it almost (if not) entirely on APU’s (especially at first), and then let the CPU/MB overstay its welcome years and years after its prime.

      Given that, my answer is no. They will not make a comeback. By the time they almost have a new architecture ready, their focus will be entirely on keeping their stock afloat by chasing after buzzwords like ARM compatibility and mobility. The same impetus that is having them ignore the FX series in favor of APU’s (that don’t seem to be helping their bottom line at all) is the same reason they’ll wind up ignoring the x86-CPU’s of the future to favor still-poorly-selling ARM-compatible chips.

      Because they need desperately to make investors think they have a chance of being the miracle comeback kid. To do that, they can’t just do the same old, same old, can they? They have to have some magic pill, a hail mary, that promises to right the ship overnight.

      The joke is we all know it’s not going to work, but having something that COULD or even MIGHT right the ship overnight is better than slowly working your way out of the hole with losses for quarters to come.

      At least to investors. That’s assuming AMD is still around by 2016-18.

        • ronch
        • 5 years ago

        I’m being cautiously optimistic about their next x86 core. Just because their last two CPU generations weren’t very successful doesn’t mean they’ll never come up with a winning architecture again. They got awfully lazy with Barcelona (just tweaked the K8 core, thinking it was way too awesome for Intel to catch up to), and Bulldozer was a huge gamble on a concept that was AFAIK never done before, thinking the world would be running apps with tons of threads by the time BD came out. GF had a hand in killing BD and PD, but I bet AMD is a lot wiser now.

    • S_D
    • 5 years ago

    Has there been an embargo on overclocking the 5930K for these reviews? I see a couple of sites have simulated it by disabling 2 cores on a 5960X, but that’s not the same, as it doesn’t take into account die harvesting or variations in silicon… I think a 5930K should overclock a little better than the 5820K…

    • jackbomb
    • 5 years ago

    [quote<]Still not old-school enough for you? In April of 2001, the Pentium III 800 rendered this same "chess2" POV-Ray scene in just under 24 minutes.[/quote<]

    It's actually quite impressive that an 800MHz PIII takes 4:30 [i<]less[/i<] to complete a POV-Ray scene than a 1.8GHz dual-core Atom.

    Aliens.jpg

      • Krogoth
      • 5 years ago

      Not really. The Atom was designed for ultra-low power consumption in embedded applications, while the P3 800MHz was designed to be a performance desktop CPU in its heyday.

      What you are seeing is the difference in CPU architectures and design doctrines.

    • Ninjitsu
    • 5 years ago

    AnandTech tested SLI and stuff.

    [quote<][b<]The 28 lanes of the i7-5820K has almost no effect on SLI gaming at 1080p[/b<]. One question that will come from all sides is if the 28 lanes affect gaming. The CPU will cause an x16/x8 SLI configuration in two-way and x8/x8/x8 in three-way SLI, rather than the x16/x16 or x16/x16/x8. [b<]We tested at 1080p maximum settings with two GTX 770 Lightning GPUs[/b<], and found that the only benchmark that showed any significant difference was the average frame rate in Battlefield 4, which dropped from 110 FPS with the 5930K to 105 FPS with the 5820K. It makes sense that we should test this with 4K in the future.[/quote<]

    For dual-GPU setups, and for people who need the cores, bandwidth and storage, the 5820K is terrific.

      • Krogoth
      • 5 years ago

      The 5820K is not a good buy as a platform. You get identical performance from a 4790K/Z97 rig for less, because you don’t have to buy expensive (for a while, until supplies stabilize) DDR4 DIMMs and an X99 board (which the 5820K cannot take full advantage of!). The only advantages the 5820K has over the 4790K are more memory bandwidth (games and mainstream applications don’t take advantage of it) and more memory capacity due to having more DIMM slots (only servers and ultra-high-end workstations need that much memory). It uses the older BCLK overclocking on top of being multiplier unlocked; however, like all 22nm parts, it is thermally limited when you attempt to push it beyond the 4.5GHz barrier (even with delidding).

      If you’re working with the budget for an X99 rig, you might as well go the extra mile and get at least a 5930K (6 cores and the full 40 PCIe lanes) for only $200 more.

        • Ninjitsu
        • 5 years ago

        But that’s only if those extra PCIe lanes are of any importance to you. For dual GPU setups, they won’t be.

        “Only” $200 (+50%) more for a 100 MHz faster CPU which can use more PCIe lanes that can’t be used doesn’t seem like a great idea, really (unless you need more PCIe).

        And no, assuming you can use 2C/4T more, you’ll have more performance than a 4790K, with approximately the same single threaded performance once overclocked.

        I don’t see the point of a 5930K.

        To quote Chris Angelini of Tom’s Hardware:
        [quote<] Twenty-eight lanes gives you room to run one 16-lane graphics card, two in x8-mode with plenty of connectivity left over, or even three cards on x8 links. And for $50 more than a Core i7-4790K, you get six cores, 15 MB of shared L3 cache, a bit of insulation against the future, four channels of DDR4, and ample PCIe. This time around, I’m going with the Core i7-5820K as my smart choice. [/quote<]

        • yuhong
        • 5 years ago

        The 5820K is six-core.

      • HisDivineOrder
      • 5 years ago

      1080p gaming is not something most people with SLI will care about.

      Talking about 1080p gaming is like talking about 480p gaming back when the Wii was first released. It’s nice and all, but it’s quickly being displaced by newer, more advanced standards.

      Those who aren’t going 4K (and you have to imagine that a lot of people considering $400 motherboards, $400 memory kits, and $500-1k CPUs, along with likely PSU upgrades, are already running something above a 1080p60 monitor) are at least considering 1440p/1600p, if not temporary use of TN-based 4K displays.

      Testing 1080p is next to worthless at this point.

        • jihadjoe
        • 5 years ago

        HardOCP tested this two years ago and PCIe 3.0 made little to no difference, even with 3-way CFX 7970s at 5760×1200. What differences there were could easily be down to tiny variances, especially because in Arkham City PCIe 2.0 actually had higher performance than 3.0 across the board (min, avg, and max fps).

        [url<]http://www.hardocp.com/article/2012/07/18/pci_express_20_vs_30_gpu_gaming_performance_review/7[/url<]

        • jihadjoe
        • 5 years ago

        Here’s another interesting result:

        TPU tested PCIe scaling as well, and it seems that LOWER resolutions actually benefit more from higher-bandwidth PCIe link speeds. Performance tends to peak once the cards have at least PCIe 1.1 x16, PCIe 2.0 x8, or PCIe 3.0 x4, but if we look at the gap between PCIe 1.1 x4 and peak, the difference is larger the lower the resolution is:

        [url<]http://www.techpowerup.com/reviews/Intel/Ivy_Bridge_PCI-Express_Scaling/23.html[/url<]

        GTX 680 @ PCIe 1.1 x4 vs peak performance:
        1280x800: 67%
        1680x1050: 71%
        1920x1200: 73%
        2560x1600: 80%

        HD7970 @ PCIe 1.1 x4 vs peak performance:
        1280x800: 83%
        1680x1050: 85%
        1920x1200: 86%
        2560x1600: 90%

        It's pretty counterintuitive at first glance, but it does show that bandwidth becomes far less important as resolution increases. My guess is that the number of frames matters far more than the content of each frame, or else the GPUs are running into a computational brick wall that makes further bandwidth irrelevant at higher resolutions.

          • Airmantharp
          • 5 years ago

          Scott mentioned in the article that his worry with fewer PCIe lanes was directly tied to CrossFire, where the GPUs share data via PCIe. With increasing resolutions and faster monitors (such as ASUS’ 144Hz 1440p G-Sync monitor recently reviewed), where using multiple GPUs will likely remain necessary to realize the monitor’s full potential, available PCIe bandwidth may again become an issue. Especially given that AMD has FreeSync monitors of its own in the pipe, and how well AMD’s top-end cards are doing for the money.

          • Ninjitsu
          • 5 years ago

          Thanks, that’s a lot of useful information. Though I would assume that at higher resolutions, if better textures are used then pressure on the memory subsystems would increase. Current games don’t really push high-res textures that much.

            • jihadjoe
            • 5 years ago

            Actually, my guess on how this SLI/CFX stuff works is that each GPU has a local copy of all the shaders, models, and textures necessary to render any given scene, since each GPU works on individual frames independently of the others.

            This means that textures don’t need to be moved across GPUs anywhere near as often as you might think. That leaves finished frames, plus some procedural and timing information, to be sync’d across GPUs. Transferring a finished frame takes the same amount of bandwidth regardless of the level of texture detail, but higher framerates can, and will, push the required amount of bandwidth way up, because more frames = more data to be sync’d. That is what leads to lower resolutions benefiting more from the increased bandwidth.

            Conversely, increasing the detail and/or resolution means things start to become GPU-bound, which drops the framerate, and decreases the amount of bandwidth necessary.
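            As a rough back-of-the-envelope sketch of that reasoning (the resolutions, frame rates, and the 4-bytes-per-pixel figure below are illustrative assumptions, not measured values), the finished-frame traffic in an AFR setup scales with frames moved per second rather than with texture detail:

            RESOLUTIONS = {
                "1280x800": 1280 * 800,
                "1920x1200": 1920 * 1200,
                "2560x1600": 2560 * 1600,
            }

            def frame_traffic_gb_s(pixels, frames_per_second):
                """GB/s needed just to ship finished frames across the PCIe link."""
                bytes_per_frame = pixels * 4  # 32-bit color, uncompressed (assumption)
                return bytes_per_frame * frames_per_second / 1e9

            for name, px in RESOLUTIONS.items():
                for fps in (60, 120):  # hypothetical transferred-frame rates
                    print(f"{name} @ {fps} frames/s moved: {frame_traffic_gb_s(px, fps):.2f} GB/s")

            Even at 120 transferred frames per second, 2560x1600 works out to under 2 GB/s on those assumptions, a fraction of what a PCIe 3.0 x8 link provides.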

            • Ninjitsu
            • 5 years ago

            I think you’re most likely correct, there.

            But I meant in terms of pushing texture/shader/model data to one (or more GPUs) in the first place, from RAM or secondary storage (I think the path is storage -> RAM -> GPU VRAM?). That would put more pressure on the PCIe interface, though I’m not sure how much or how often.

            Appreciate the write up, though!

            • jihadjoe
            • 5 years ago

            Unless Rage-style dynamic texture loading becomes all the *ahem* rage, then the only thing that would suffer should be level load times.

          • Andrew Lauritzen
          • 5 years ago

          It’s not actually that counter-intuitive, especially for single-GPUs. The reason why is because PCIe bandwidth is relevant for the total amount of data transferred between the CPU/GPU each *frame*. At lower resolutions, you typically are rendering more frames, and thus transferring more data. It’s fairly rare for the amount of data transferred in a game scenario to be significantly related to the framebuffer resolution.

          The cases where the latter is not true are roughly:
          1) Virtual texturing (a la. Rage, etc). The amount of texture data you transfer is related to both the screen resolution and consistency from frame to frame.
          2) SLI/AFR-like schemes that transfer the framebuffer. Obviously directly related to framebuffer size and frame rate.

          It would be more interesting to see these numbers if they were locked to the same frame rates (via vsync or similar). I imagine you’d see roughly flat ratios in that test.

          And of course the question of 4k SLI/AFR is still relevant, but in reality unless you’re rendering >60Hz it’s still a fairly small fraction of the total available bandwidth… and there aren’t any real setups that can consistently do 4k/120Hz yet that I know of 🙂

    • Laykun
    • 5 years ago

    I would absolutely love one of these for work. Compiling is such a drag and really hamstrings my work flow. Usually I’ll make some changes in C++, set it to compile and go work on some nodejs stuff while I wait as that’s about all my computer can manage while I’m compiling. I currently run an i7-4770k at work.

    Nice to see my i7-980X at home getting some real competition after so many years.

    • ronch
    • 5 years ago

    1. Comparing a 3.0GHz 8-core (5960X) to the controversial 8-core FX-8350 is very telling. Now you can look at the FX any way you like but if it doesn’t compare favorably to Intel’s 8-core, it will naturally draw fire.

    2. Those power consumption graphs… bring tears to my eyes.

    3. This is like a review of the latest toy from Ferrari. Nice, but ultimately just a pipe dream for most of us poor kids.

      • Krogoth
      • 5 years ago

      Nah, it is more like a workhorse, industrial vehicle. 😉

        • ronch
        • 5 years ago

        Core = cylinder

        So, most folks today run 4-cylinder engines, and we can only gawk at these 8-cylinder beasts.

          • Krogoth
          • 5 years ago

          I was thinking along the lines that these chips aren’t really faster at general-purpose stuff, but they show their strength if you intend to use them for real-world work and run professional-tier software that harnesses their full potential. 🙂

          Like a large, diesel-powered pickup truck. It is not faster than your run-of-the-mill car at getting from point A to point B; however, it can do it while hauling a lot more stuff or much bigger items. 😉

            • ronch
            • 5 years ago

            More cylinders, lower RPM. Yup, sounds like one badass truck.

      • sschaem
      • 5 years ago

      1. It’s a 4-module (8 HW thread) design vs. an 8-core (16 HW thread) one.
      Also, AMD seems to have missed the mark with its cache & memory controller.
      So it’s a crippled 4-module design in many respects.

      2. Idle 62W vs 68W, load 115W vs 181W … state-of-the-art Intel 22nm vs crummy GF 32nm.

      Note: AMD doesn’t sell the FX-8350 as a power-optimized SKU at all, so they set the default test voltage really high. The ‘new’ FX-8370e is a better representation of Piledriver at 4.1GHz on GF’s crummy 32nm. Also, power consumption falls off a cliff at 3GHz (slower, but much, much higher power efficiency).

      3. I think in PC Perspective’s i7-5960X review, the FX-8350 ranked #1 in all the value/performance ratios, with a price of $200 (even so, it’s more often than not sold at ~$150 at major online retailers).

      Personally I think the i7-5820K is a better value than both, actually (if it’s really ~$350).
      It’s the near-perfect compromise.

    • guardianl
    • 5 years ago

    Great review! The historical results really are nice to see. The P3 800 should get an honorary bar on at least a few of the charts 🙂

    I’m running a 5820k + X99M Killer Mobo right now. I’ve had the CPU for a little while (*cough*), but getting a motherboard was more difficult. There’s a definite lack of stock of X99 at any retailer but Newegg (at least in Canada).

    I was willing to spend the $$$ on the 5960X no question, but when I saw the low clock speeds I passed. The 5960X is great if you encode video etc., but it’s easy to get the best of everything by overclocking the 5820k to 4.0 Ghz (thermals, MT, ST) rather than wondering if you’ll get a decent 5960X and going to water cooling from the start.

    The lack of PCI-E lanes on the 5820k is a non-issue in practical terms. Anandtech already has some SLI results with the 5820k vs 5930k, and the bottom line is that x8 PCI-E is a non-issue for SLI right now. The 5930k is a pretty poor perf/$ CPU because anyone who can afford 4X SLI might as well spend the extra $300 for the 5960X. Anyone not SLI’ing can pass on padding Intel’s margins for no reason and stick with the 5820k.

    • chuckula
    • 5 years ago

    LMAO…. bottom of page 5, the analysis of the efficiency benchmarks has an interesting summary of AMD’s showings:

    [quote<]Pour one out for my homies at AMD.[/quote<]

    That's why reading the article a few times to catch those little gems is worth it.

    • Waco
    • 5 years ago

    Dammit, how hard is it to make a chipset that can actually utilize its SATA ports in unison? This kind of crippling is annoying.

    • ChthonicMiasma
    • 5 years ago

    I’m curious to see if disabling HT would make any difference in the 4790K vs 5960X as far as gaming goes, at least in the games not optimized for more threads. Finally a contributor! My first purchase towards my new Haswell-E build!

      • Krogoth
      • 5 years ago

      It will make practically no difference with an OS that has decent thread scheduling.

      Haswell-E is literally a Haswell with more cores, a little more cache, more PCIe lanes and more memory channels. Most of those items do very little for gaming performance. Almost all of the performance difference, if any, is attributable to the larger cache.

      • ronch
      • 5 years ago

      Probably not much in gaming, but it would be cool seeing the difference between a 4C/8T (say, an LGA1150 Core i7) and an 8C/8T (5960X with HT disabled) CPU at the same clock running apps that can spread out to as many threads as there are cores (physical or virtual) available.

    • Rza79
    • 5 years ago

    Just noticed that Vishera is quite a bit bigger than Gulftown.
    I just bought an old Xeon X5650 for €120.
    I would assume that anybody buying an AMD FX processor is out of his or her mind.

      • ronch
      • 5 years ago

      Riiiiiigghhtt…

      • LoneWolf15
      • 5 years ago

      Trying…to understand…relevance to article.

    • oldDummy
    • 5 years ago

    Guess that $700 spent for a 3970x was worth it in a long, strange sort of way.
    Notwithstanding the X79 ASUS board.
    An impulse buy that worked out.
    Doesn’t happen often.

    • srg86
    • 5 years ago

    I think for me, it would be between the 4790K or the 5820K. The 6 cores / 12 threads would be nice, but I think the extra clock speed of the 4790K would make up for it (I mainly do code compiling and running VMs, no games). The 4790K has the integrated graphics, which I plan to use, so no need for a graphics card there.

    Also the 4790K is cheaper, uses cheaper RAM and runs on cheaper motherboards. I don’t intend to use the stock HSF.

    It’s a hard choice, but I think, for me, the 4790K would be a better bet.

    • Krogoth
    • 5 years ago

    These chips are excellent for server and workstation tasks, but are a piss-poor value for mainstream stuff.

    It is like the jump from Bloomfield to Sandy Bridge. A small boost in CPU performance, but a considerable drop in power consumption.

    These chips are going to sell like hotcakes in the prosumer and enterprise world.

    • Philldoe
    • 5 years ago

    So what I’m seeing is that when the i7 6xxxK series hits, I’ll want to upgrade.

      • Krogoth
      • 5 years ago

      I wouldn’t hold my breath.

      Broadwell is going to continue Intel’s general trend in mainstream desktop chips of reducing power consumption with a modest bump in CPU performance (attributable to architecture improvements more than anything else).

        • w76
        • 5 years ago

        That’s true, and my expectation is the same, except that coming from Sandy Bridge, it’ll probably finally be an accumulation of enough mild performance bumps to look worth it. Not just raw IPC: QuickSync is vastly improved since SB, there are more instructions, DDR4 should be fully baked by then, decent on-board graphics in a pinch, etc. Unlike the old days, it definitely won’t be warranted on raw performance alone, but the overall package will finally be somewhat enticing… I hope.

        A mainstream 6 core i7 priced in line with current i7 K-series parts would seal the deal, but that’s likely a fantasy, much less 8.

    • maxxcool
    • 5 years ago

    I also like how the mantle bench would not run on the 7400.. classic.

    • maxxcool
    • 5 years ago

    Soo this cpu is MORE power efficient than a 7800k … /smirk/

    Nice… 2x-3x the multithreaded CPU power in many multithreaded apps… and lower heat and power.

      • Klimax
      • 5 years ago

      Try arguing against Intel’s 22nm with anything the foundries put out…

    • f0d
    • 5 years ago

    ill be getting one as i do a lot of handbrake encoding and any time saved there is worth it for me

    i just have to wait for the asus rampage black edition motherboards to come out and then ill grab one and OC the heck out of it (im not scared to put 1.5v through it like i did with my 3930k@5ghz)

    for most people i can see this isnt worth it (the people that only play games) but there was a nice improvement in handbrake times which is what i want

    • Bensam123
    • 5 years ago

    Kinda sad the 5820k wasn’t included, it really seems to be the winner out of all these chips. You basically have the 4690k and then the 5820k. The ‘i7’ doesn’t matter as all it throws in is HTing (which usually doesn’t do much) and a slightly higher speed which can easily be made up for with a small OC on the 4690k and a $100 less price tag.

    I can’t imagine PCI-E 3.0 x8 being a bottleneck. Last I heard PCI-E 2.0 x8 slots weren’t even bottlenecks. This would’ve definitely been a very interesting area to pursue and look at results for. Does it even matter in this day and age? My info is a little bit dated, but I’m pretty sure it still doesn’t.

    Honestly if someone was going to even begin to look at the 4790k I’d point them right away to the 5820k as soon as memory prices and motherboard prices equalize, it’ll be the best deal (compared to the 4790k).

    Another interesting point, motherboard and memory prices weren’t included or the ‘price of the system’ which used to be used in reviews here. I haven’t looked up the price of the new motherboards or the memory, but I’m pretty sure it’s at a premium right now.

      • ptsant
      • 5 years ago

      [quote<]Kinda sad the 5820k wasn't included, it really seems to be the winner out of all these chips. You basically have the 4690k and then the 5820k. The 'i7' doesn't matter as all it throws in is HTing (which usually doesn't do much) and a slightly higher speed which can easily be made up for with a small OC on the 4690k and a $100 less price tag.[/quote<]

      Although the 5820K seems like a winner in isolation, when you factor in the cost of the platform (say $350 for the MB and $300 for memory), the difference with the 5930K is easier to swallow. If you spend big money on the X99 platform, you might at least benefit from the extra PCIe lanes and the bigger cache. A dual-GPU setup might benefit, but also, in the near future, a PCIe SSD. So I'd argue it's probably worth going for the 5930K.

      The 5960X is a much harder proposition, not only because it is $400 more expensive, but because the added benefit of going from 6 cores to 8 concerns fewer applications.

      Oh, and MB prices will never equalize between 2011 and 1150 parts. The layout of the 2011 platform is vastly more complex (PCIe lanes, quad-channel memory, higher TDPs). Expect at least a $100 premium for equivalent products (ASUS Deluxe X79 vs X99, for example). DDR4 could gradually become more attractive than DDR3, but not before at least 18 months, so you're also paying a premium there.

        • Bensam123
        • 5 years ago

        The extra lanes don’t matter till you get into quad sli.

        The price of motherboards will fall, as will the price of memory, which I originally mentioned as a prerequisite. The prices of the original Socket 2011 boards fell as well; these will too, with time.

        Hex core will still be at a premium, but it has more meaning than HT does for some synthetic tests.

          • ptsant
          • 5 years ago

          [quote<]The extra lanes don't matter till you get into quad sli.[/quote<]

          But it's not only about graphics. Already, M.2 takes up 4 lanes, if I remember correctly. And PCIe SSDs like the Intel 3700 also take a slot. And if you plan on migrating to 10Gbps Ethernet, you'll need a slot. And I would certainly be tempted to stick an Asus Xonar inside.

          Right now, I see the 5820K for 394 (local price) while the 5930K sells at 586, i.e. a 198 difference that gets you a higher base clock (I don't overclock if I can avoid it), a bigger cache and 12 more lanes. The 5820K is not a bad deal, but if you're buying a BMW, you might as well get the leather seats.

            • Airmantharp
            • 5 years ago

            Well, if you’re going to use the 5820K, you’ll just have to know how much you need for whatever it is you’re doing.

            The likelihood that you’ll need to saturate everything at once is pretty low.

            • ptsant
            • 5 years ago

            [quote<]The likelihood that you'll need to saturate everything at once is pretty low.[/quote<]

            People who buy the 2011 platform usually have bigger needs than just raw CPU speed, which is why motherboards like the ASUS X99-WS (88 lanes total!) exist. Obviously, you are right: buyers should be aware of the limitations. But I can imagine, as Damage pointed out above, that some people with more money than knowledge will stumble on lane limitations in the future.

            • Airmantharp
            • 5 years ago

            I agree that some will- but I’d also argue that the overwhelming majority of said individuals would be headed for the Haswell-E equipped Mac Pro ‘trashcan’ rather than use custom PC hardware.

            And maybe TR will do a torture comparison to see how limiting the loss of lanes is!

            • JustAnEngineer
            • 5 years ago

            If you were just going to run one GPU on a [url=http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007627%20600533617%20600009017&IsNodeId=1&bop=And&Order=PRICE&PageSize=100<]small $230 motherboard[/url<], then perhaps the $400 6-core Core i7-5820K Haswell-E would have a lower entry price than the less-crippled $590 6-core Core i7-5930K, but the cases where it would perform better than a $340 4-core Core i7-4790K on a $135 motherboard are somewhat uncommon. If your application fits entirely inside 15MB of cache but not 8MB, and if it is multi-threaded so that 12 threads produce a lot more than 8 do, you could make up for the fact that the less expensive chip runs 4.0GHz ÷ 3.3GHz = +21% faster per thread.
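            As a crude back-of-the-envelope sketch of that trade-off (assuming a perfectly parallel, cache-insensitive workload and ignoring Turbo Boost, which real software rarely allows):

            chips = {
                "Core i7-4790K (4C/8T)": {"cores": 4, "ghz": 4.0},
                "Core i7-5820K (6C/12T)": {"cores": 6, "ghz": 3.3},
            }

            base = chips["Core i7-4790K (4C/8T)"]

            for name, c in chips.items():
                per_thread = c["ghz"] / base["ghz"] - 1   # clock vs. the 4790K
                aggregate = c["cores"] * c["ghz"]         # naive core-GHz total
                print(f"{name}: per-thread {per_thread:+.0%}, aggregate {aggregate:.1f} core-GHz")

            On those naive assumptions the 5820K offers roughly 24% more aggregate throughput while giving up roughly 18% per thread; whether the cache and threading caveats above tip that balance is the real question.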

            • Airmantharp
            • 5 years ago

            Which is absolutely true until both are overclocked.

            Assuming similar overclocks the question will simply be whether the extra grunt can be made use of, or not, and whether that extra grunt is worth the expense. I believe that it can be plenty useful, but I’m still hesitant on the expense- my 2500k at 4.5GHz has been paid for for years :).

            • Bensam123
            • 5 years ago

            PC game streaming is the first example that comes to mind, and something I’ve asked TR to cover quite a few times. Streaming isn’t exactly uncommon, such as on twitch.tv.

            Console titles will be designed more and more around 8-core processors, as that’s what’s in the newest consoles; how much meaning that’ll have… we’ll see.

            • CeeGee
            • 5 years ago

            The new consoles may eventually get i3 budget PC gamers to upgrade but anyone with a decent Intel quad core i5 or above has nothing to worry about. 4 modern Intel cores over 3ghz have nothing to fear from 8 Puma cores at 1.6ghz.

            • Airmantharp
            • 5 years ago

            Mantle and DX12 may make that true, as may real incentives to actually break game processes into multiple threads. As it stands there is still some benefit to the higher-end CPUs.

            • CeeGee
            • 5 years ago

            There may be benefit to higher end CPU’s but not because of anything the new consoles bring to the table. Puma cores simply can’t compete with Haswell based ones in any shape or form. There aren’t any 8 core Puma based CPU’s in the PC space but there is a 4 core 1.6ghz one and Anandtech’s Bench lets us compare it to a 2 core 3.2ghz Pentium easily enough.

            [url<]http://www.anandtech.com/bench/product/1224?vs=1265[/url<]

            i5 quad cores don't need Mantle or DX12 to stay ahead of the consoles; they can manage just fine already.

            • Airmantharp
            • 5 years ago

            If only we could compare console CPUs with desktop CPUs directly- but we can’t.

            • CeeGee
            • 5 years ago

            The gulf is large enough that the comparison would be academic.

            • Bensam123
            • 5 years ago

            And you have what proof for that? What makes you think a quad core will perform just as well as an eight-core processor, compared to a dual core performing as well as a quad core with current games?

            Carmack even talked about this earlier this year. Games are designed mainly around single threaded performance currently, not multithreaded… That’s why Intel performs so well and AMD has performed relatively poorly.

            • CeeGee
            • 5 years ago

            Pardon? Did you even look at the multi-threaded benchmarks?
            [url<]http://www.anandtech.com/bench/product/1224?vs=1265[/url<] Unless you have evidence that Intel's CPU architecture struggles with highly threaded software more than AMD's does then by what magical process do you expect a PS4 Jaguar/Puma based CPU to be able to out pace a Haswell quad core running at roughly twice the clock speed? Lets be clear here we aren't comparing AMD's Bulldozer/Piledriver architecture (which is already behind) to Haswell but AMD's low power CPU design. So yeah I look forward to you linking me proof of a Puma/Jaguar based CPU beating a Haswell chip with half the cores but twice the clock speed in any workload, any workload at all.

            • JumpingJack
            • 5 years ago

            Given the evolution of games and gaming code, this has been an untrue statement for the past 4-5 years.

            Hell, Quake 4 was multithreaded…

            AMD performs poorly in games relative to the Core microarchitecture because AMD has a poor microarchitecture, just like the P4 performed poorly in games relative to the K8 back in the day, as the P4 was a poor microarch compared to the K8.

            • Airmantharp
            • 5 years ago

            ‘Poor’ may be a poor description, if you pardon.

            The P4 was poor for gaming compared to the late K7, K8 and early K10; the K10 and ‘Bulldozer’ series is poor for gaming compared to Intel’s Core series.

            Yet plenty of gaming was done on all of them, and plenty of non-gaming things were better on one than the other and so forth.

            • Bensam123
            • 5 years ago

            I think the 5820k still falls into wee-peasant territory, as it’s affordable for normal people and won’t be shuffled off into hex-Crossfire territory or something.

            I’d actually consider building a 5820k rig if the price of memory/motherboards was right, and I definitely qualify as normal folk. It reminds me a LOT of the Q6600 when it first came out. It had a premium, but it had more to offer than just another mid-range processor.

    • Meadows
    • 5 years ago

    Mr Wasson, I recommend doing the Watch_Dogs test with “Level of Detail” maxed out next time. It boosts draw distance and adds significant environmental detail.
    What you tested barely had more detail than the console version.

    (At least that’s my suspicion, because I did expect worse results.)

    • ptsant
    • 5 years ago

    Judging from its big brother, the 5930K should be a great product and the price is not insane. However, once you start adding all the “premium” components that go together (X99 M/B, DDR4 in quad kit), the price of the complete system gets out of control.

    I really would consider the 5930K if it had ECC support. I wonder what will be the price of the equivalent E5 Xeon…

      • the
      • 5 years ago

      Well what do you need more, kidneys or ECC?

        • jihadjoe
        • 5 years ago

        But kidneys are ECC for our bodies.

      • Krogoth
      • 5 years ago

      It is because the X99 platform is really just workstation/server-tier stuff disguised as an ultra-high-end gaming platform for people who don’t know any better. 😉

        • Klimax
        • 5 years ago

        Or people who cannot afford a full Xeon-class WS of the same performance.

      • cynan
      • 5 years ago

      I think that remains to be seen. It’s pretty difficult to get a feel for relative architecture performance when comparing a chip with 33% more cores.

      This also makes the value proposition of the XX30K part more of a question. With Sandy and Ivy, both were 6-core parts, with similar price deltas. Obviously this is no longer the case. All to say that it still probably makes very little sense for anyone that already has a Sandy-E or Ivy-E XX30K to go and replace it with the 5930K. However, now some sort of argument can be made for the 8-core part if you really need the most multithreaded performance.

        • ptsant
        • 5 years ago

        [quote<]All to say that it still probably makes very little sense for anyone that already has a Sandy-E or Ivy-E XX30K to go and replace it with the 5930K. However, now some sort of argument can be made for the 8-core part if you really need the most multithreaded performance.[/quote<]

        Clearly, people who bought the Sandy-E (or Sandy) got great value out of their platform. I wouldn't consider the 5930 or 5820 if I had a Sandy-E. However, buying into Haswell-E also depends on the future life of the platform. I wonder whether 2011-v3/DDR4 will support Broadwell-E. At these prices, keeping the $400 MB and $300 RAM for another generation is an important component of the equation.

    • Ryhadar
    • 5 years ago

    Great review!

    My favorite part was probably the throw-back results. I’m still running a Core 2 Duo in my HTPC so it’s fun to see how far we’ve come. I know they may not be the most precise comparisons, but as long as that’s a known qualifier, I don’t see anything wrong with the additional information. I hope you continue to put this into future CPU reviews.

    • drfish
    • 5 years ago

    Hate to say it but it looks like my 2600K will live to see another year (or two). I’m hoping we see a ~$500 8-core part next year, something that can OC into the 4Ghz range. Still darn nice hardware though.

      • Ryhadar
      • 5 years ago

      I’m still rocking a 2600K also (at stock), and with the way things are going in regard to my uses, I can probably keep it going for a long time to come. If I start to get into trouble, I can always overclock.

      • hansmuff
      • 5 years ago

      Yep, another year at least. That chip was an awesome upgrade back in 2011, and it’s still a great performer, especially overclocked. Nothing has me impressed so much since.

      • NeelyCam
      • 5 years ago

      [quote<]Hate to say it but it looks like my 2600K will live to see another year (or two).[/quote<]

      Same thought here.

      • Coran Fixx
      • 5 years ago

      2600k club here too. DDR4 is another impediment to me upgrading; until I see magic unicorn numbers, I don’t want to buy another set of RAM.

      • ChangWang
      • 5 years ago

      I’m willing to bet that there are a lot of us in this predicament. Looks like we will be sitting tight another year. Maybe DDR4 prices will have come down by then.

        • Chrispy_
        • 5 years ago

        Anyone who bought an Intel quad-core since Sandy Bridge, basically.

        This is AMD’s problem; real-world performance doesn’t care that much about having more cores, it’s still about IPC and clockspeed.

      • oldDummy
      • 5 years ago

      Ha, a 10-15% increase in performance over three years.
      Not very thrilling.
      Who was that guy? [s<]Moorrre[/s<], [s<]More[/s<] ...something like that.

        • Airmantharp
        • 5 years ago

        If you measure performance just in the gains in processing speed, you’re right, but if you measure it as the amount of work done per unit of energy consumed, then one could argue that Intel is doing quite well.

        Granted I’d prefer the raw performance increase too. But Intel seems to think that it’s more important to shove their headline architecture into ever smaller devices and that’s not a bad thing either.

    • chuckula
    • 5 years ago

    The 5820K is going to be a hard proposition to pass up if you want a high-end system that’s within about 10% of the total system cost for what you’d already pay for a K-series Haswell desktop.

    The workstation market is going to hoover up the Xeon-equivalents of the 5960x. That is an extremely nice piece of silicon.

    That leaves the ugly stepchild of the 5930K at a little less than $600. In previous generations its predecessors were quite popular since they were the only way to get 6 cores without dropping $1000. Now the only real advantage over the 5820K is the extra PCIe lanes, but TBH those extra PCIe lanes aren’t going to help you in gaming even if you are in a Crossfire/SLI setup.

      • Ninjitsu
      • 5 years ago

      Yup, this is exactly what I was thinking too. X- series chipsets make little sense for gaming anyway, $390 for a 6C/12T, 3.3 GHz, unlocked, DDR4 supporting processor is a really good deal if you can use it.

      For gaming, is there any GPU that can saturate 8 lanes of PCIe 3.0 anyway?

    • TwoEars
    • 5 years ago

    So disappointed; it doesn’t even beat the 4790k in gaming when overclocked.

    I know that it’s the same architecture and all, but I thought they’d be able to tune it to hit 5GHz or so.

    I don’t think this will be a big seller.

      • Firestarter
      • 5 years ago

      This thing has literally twice as many cores crammed in it and will consume roughly twice the power at the same clock speeds and voltages. Do you know how much power you need to run a 4790K at 5Ghz? Multiply that by 2 and you’re setting things on fire

        • TwoEars
        • 5 years ago

        keyword was “tune”.

        Besides – with a name like yours I thought you’d be right into that kind of stuff.

          • Firestarter
          • 5 years ago

          if they could tune this CPU to do that, we’d have 6GHz quad-cores by now

      • maxxcool
      • 5 years ago

      8 *real* cores at 5ghz would be nice, but wholly unrealistic given the terrible limits of 14,16,18,20-nm

      • Krogoth
      • 5 years ago

      You have much to learn my young apprentice in the “Not Impressed” side of the force. 😉

      These chips are going to be massive sellers for enterprise crowd.

      • Ryu Connor
      • 5 years ago

      5GHz is non-trivial to achieve.

      [url<]http://arstechnica.com/science/2014/08/are-processors-pushing-up-against-the-limits-of-physics/[/url<]

      [quote="ArsTechnica"<]Here, we really are pushing physical limits. Even if signals in the chip were moving at the speed of light, a chip running above 5GHz wouldn't be able to transmit information from one side of the chip to the other. The best we can do with current technology is to try to design chips such that areas that frequently need to communicate with each other are physically close to each other. Extending more circuitry into the third dimension could help a bit—but only a bit.[/quote<]

        • NeelyCam
        • 5 years ago

        That’s some fuzzy science. If the signals were in fact running at the speed of light, signals would travel some 6cm in one 5GHz clock period. Good luck finding a CPU with any dimension anywhere near 6cm.

        Even if the signals don’t propagate in silicon wiring at the speed of light, the chip could still be running at 5GHz and communicate from one side to another – the communication would just have latency associated with it.
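        For reference, the distance-per-clock arithmetic both posts are gesturing at, with the effective signal speed left as an explicit (and very rough) assumption:

        C = 3.0e8      # speed of light in vacuum, m/s
        FREQ = 5.0e9   # 5 GHz clock

        # The fractions of c below are placeholders; real on-die wires are
        # limited by RC delay rather than free-space propagation speed.
        for label, fraction in [("c (vacuum)", 1.0),
                                ("c/2 (assumed fast interconnect)", 0.5),
                                ("c/100 (assumed RC-limited wire)", 0.01)]:
            mm_per_clock = fraction * C / FREQ * 1000
            print(f"{label}: {mm_per_clock:.1f} mm per 5GHz clock period")

        At the speed of light that works out to the ~6cm figure above; the interesting question is how much slower real on-chip signalling actually is.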

          • Ryu Connor
          • 5 years ago

          Except the signals aren’t at the speed of light – silicon is not a vacuum.

          That article and the paper it is based on (published in the peer reviewed Nature no less) are real world problems with CPUs today.

          Discounting latency is silly. If latency didn’t matter, Intel would still make math coprocessors a motherboard upgrade. Timing/latency very much matters. Hell, time per stage is a fundamental piece of a CPU pipeline’s operation and has a direct relationship with clock speed. You ignore latency at your peril. Your statement is nonsensical. You’re effectively saying, “Clock skew doesn’t matter.”

            • NeelyCam
            • 5 years ago

            Nature is no JSSC.

            Clearly latency matters, but your quote from the OMG peer-reviewed paper made it sound like the size of the chip and the speed of electromagnetic wave in silicon/wiring is somehow the “physical limit”. It’s not. It’s the resistance and capacitance – a classic RC delay.

            I’m not sure why you even bring up silicon not being a vacuum… what do you think the speed of EM propagation is in copper? Does silicon/copper not being a vacuum change my point [i<]at all[/i<]? Or maybe you missed the point...?

            • Ryu Connor
            • 5 years ago

            You didn’t have a point to miss.

            You should have just let this sleeping dog lie. You’re just wandering in the dark.

            1. You accused a paper presented in a respected and peer reviewed multiple discipline journal like Nature of being fuzzy science.
            a. You try and disparage it here again – showing you have no clue what you’re talking about.
            b. You clearly didn’t read the article.
            c. You are some random dude and forum warrior on the Internet; your credibility compared to the article is non-existent.

            2. You claim to have done your math at the speed of light, which was absurd.
            a. You don’t understand why that is absurd or why I mention a vacuum.

            The speed of light is for light in vacuum.

            Electricity running through silicon with copper interconnects is neither light (electrons have mass) nor a vacuum and runs about 1/100th the speed of light.

            3. You claim you get why latency matters and then chalk your misunderstanding up to only reading the small blurb I posted with the link (really, this line of logic is dubious given the whole of your replies, but I’ll just take your word for it and let you keep your pride).

            Then read the article next time, that’s why I linked it.

            • NeelyCam
            • 5 years ago

            1. That quote didn't make sense (I have no idea why you would quote that particular passage... why did you?). [b<]This quote is the fuzzy science, and it is from the article (by John Timmer) - not from the paper by Dr. Markov.[/b<] I didn't say Nature is fuzzy science.
            a) I didn't disparage Nature by implying JSSC is a better source for stuff related to CPUs or solid-state circuits in general. What makes you say I have no clue what I'm talking about? Clearly you don't know me.
            b) I read the article. Your quote didn't make sense - which begs the question again: why would you quote something nonsensical like that? Did you not understand it, or the absurdity of it, yourself?
            c) I'm no more "a random dude and forum warrior" with non-existent credibility compared to the article than you are.

            2. I did it because your nonsensical quote mentioned the speed of light. You quoting that paragraph to support the idea that 5GHz is non-trivial to achieve is what is absurd.
            a) You are incorrect. "Electricity running through silicon..." I mentioned R-C delays already. That's what you're effectively talking about here. That's the #1 limiter here, not EM wave propagation (which would be significantly faster in copper than 1/100th of the speed of light).

            3. My misunderstanding? I'm sorry, sir, but could you please elaborate on what that misunderstanding was? So far, your commentary has been mostly personal attacks on me and my credibility or understanding of these issues, while you yourself haven't demonstrated understanding of the underlying issues. A slightly less arrogant attitude would be prudent.

        • Krogoth
        • 5 years ago

        It is also part of the reason why you need to overvolt the crap out of the chip to make a 5GHz+ overclock “somewhat” stable, barring thermal limitations.

        The only caveat is that the 22nm process gets toasty really fast when you pump it with volts, and by default it uses thermal paste instead of epoxy to mount the heatspreader.

        It is really no surprise that overclocking anything beyond 4.5GHz is a challenge for 22nm parts.

          • Ninjitsu
          • 5 years ago

          From what I know, voltage is usually to keep the transistor switching regions in order, and also depends on feature size.

          Also, Haswell-E has a soldered IHS.

      • HisDivineOrder
      • 5 years ago

      I think the hexacore 5930K is the part you’re searching for. Much higher speeds, full PCIe lanes, and still has more cores than the 5960X here.

      Sacrifices have to be made to be within that 140w TDP.

      I’d be more interested in an article about one of these chips fully reviewed at the speeds enthusiasts are more likely to run them at.

      Overclocked.

      At normal speeds, it’s running into the TDP wall, which is why someone like you may want to read a 5930k review instead.

        • Krogoth
        • 5 years ago

        you probably meant 4790K 😉

    • DPete27
    • 5 years ago

    The Pentium apparently skipped Game-testing day?

    • Ushio01
    • 5 years ago

    What’s wrong with a 5820K only having 28 PCI-E 3.0 lanes? They have double the bandwidth of PCI-E 2.0 lanes.

    Or is there a reason you need 3.0 lanes for current graphics cards? I haven’t seen anything showing more than a 1-2 frame difference.

      • Damage
      • 5 years ago

      Oh, I suppose graphics with a dual x8 config could be just fine in a lot of cases. However, we may push some limits with XDMA CrossFire in 4K/high-refresh/multi-display–this is a bit of an unknown right now.

      Trouble is, you can do dual x8 graphics with a Z97/Haswell quad combo, so if you get the 5820K, you’ve lost out on one of the big reasons to spend more for X99.

      Intel didn’t hobble the cheaper Sandy-E or Ivy-E processors in this way. Frustrating to see them do it now for no good reason.

        • Ushio01
        • 5 years ago

        Yeah, but the cheaper models for Sandy and Ivy were quad cores; this time it’s a 6-core model.

        With this gen’s cheapest model you still get 12 more PCI-E lanes than the consumer chip plus 2 more cores.

        So I guess what would you prefer 2 cores or 12 PCI-E lanes?

          • Damage
          • 5 years ago

          The 5820K will be fine for people who don’t care about mGPU gaming. No problems there.

          But it’s a dicey proposition for other folks, given the unknowns around the practical bandwidth/latency needs for mGPU setups with 4K/high refresh/multiple monitors/etc.

          My broader objection is simply the weirdness factor. Intel didn’t have to segment its products on this axis–and practically every other one, so that consumers need ARK in order to know what they’re getting. If Intel sold milk, they’d offer 1%, 1.3%, 1.5%, and 1.7% options.

          But here we are, and now it’s possible for a n00b with a nice budget–who read all the mobo spec sheets and thought he knew what he was getting–to throw down a big chunk of change on dual GPUs, an X99 mobo, DDR4, and a 5820K, only to find out that he now has the same inherent PCIe 2×8 limitation as a Z97-based system.

          That’s just bad mojo. People *will* have this experience; just wait.

          Intel rules this segment of the market uncontested. They have attractive products. People will be paying a lot for those X99 mobos and any of these CPUs. Intel could afford to give people a little more hardware for their money rather than building in a strange “gotcha.”

          I think they made a bad call.

            • Bauxite
            • 5 years ago

            Wait, you didn’t see that more squeezing of [s<]star systems[/s<] enthusiasts was in the future?

            Now that their fully armed and operational [s<]battle station[/s<] monopoly has been dominating margins uncontested for years, it’s only going to get worse. A bunch of cell phones and tablets may be a threat to their total revenues, but the real workhorse PC and server market is here to stay.

            Intel is altering their features. Pray they don't alter them any further.

            • chuckula
            • 5 years ago

            The easy way to settle this argument is to get a couple of monster GPUs together in Crossfire/SLI and then compare them at 4K resolutions with a 5820K vs. a 5930K to see if there’s really any difference to speak of…

            • Firestarter
            • 5 years ago

            hold on, I’ll just grab these 5820Ks and 5930Ks that I have lying around and test this, OK?

            • chuckula
            • 5 years ago

            Where do you live and TAKE ME WITH YOU!

            • the
            • 5 years ago

            Devil’s (canyon) advocate question:

            What would be better at Intel’s current lineup pricing: a higher-clocked quad-core (>3.8GHz stock, >4.0GHz turbo) 5820K with 40 PCIe lanes, or the current six-core 5820K with 28 PCIe lanes? Assume the same current price between these options.

            • Damage
            • 5 years ago

            For gaming and mGPU, the higher clocked quad. For other stuff, depends on the applications and on the exact 6 vs 4-core clock deltas. Remember that a lot of everyday apps won’t use 12 threads well. Our test suite is kinda freakishly designed to show potential.

            • ptsant
            • 5 years ago

            [quote<]Intel rules this segment of the market uncontested. They have attractive products. People will be paying a lot for those X99 mobos and any of these CPUs. Intel could afford to give people a little more hardware for their money rather than building in a strange "gotcha."[/quote<]

            My feelings exactly. For that kind of money I really wanted this CPU to tick all the right boxes. Especially the PCIe lanes, which are one of the platform's two differentiating features (the other being quad-channel memory). Intel's product segmentation is simply astounding.

        • the
        • 5 years ago

        It does sound like the 5820K would support triple 8 lane PCIe slots for a three video card configuration. A socket 1150 setup would need a PLX chip to do this. That’d make Haswell-E motherboard prices a bit more attractive as socket 1150 boards with PLX bridge chips are rather expensive. Oddly, if the PLX chip supports device-to-device transfers, it may end up being faster than the 5820K due to the higher bandwidth directly available between cards in theory.

          • Damage
          • 5 years ago

          Yeah, and tri-SLI/CrossFire is a bag of hurt. It barely works, if you are talking about superior delivered performance and not just FPS counters.

          That said, if you’re spending enough to build around three GPUs, you’re gonna want to give them every advantage. 🙂

            • Milo Burke
            • 5 years ago

            I have no personal experience with systems of this caliber, only going off of what I’ve read elsewhere by people likely less experienced than you, Scott:

            Back when micro-stuttering was a nightmare (instead of merely a problem), wasn’t triple-GPU better at avoiding microstuttering than dual-GPU? My consensus at the time was that you buy one card or three, but not two.

            If I’m wrong, please correct me.

            • Damage
            • 5 years ago

            The thing about three GPUs smoothing out stuttering was repeated a lot (by a few people, at least) but I never saw any data to support the assertion. Didn’t test it enough myself to say much more. Sorry!

            • Milo Burke
            • 5 years ago

            Dangnabbit! These are things that need answering so I can return, with peace of mind, to gaming on my single 5870!

            =]

            • Krogoth
            • 5 years ago

            Besides, Tri-SLI/CF and beyond are quite silly for gaming and are only good for ePenis benchmarks and GPGPU related stuff.

          • ptsant
          • 5 years ago

          I think a Z97 board with a PLX chip is probably cheaper than a reasonable X99 board… These motherboards are in the 300-500 range. And the 5820K does support x8/x8/x8 tri-GPU.

        • bwcbiz
        • 5 years ago

        Agree that if you want to do 4k gaming the 5820k won’t hack it, but if you’d like 1080p or 1440p gaming with 1×16 or 2×8 and still have channels open for a PCIe SSD, a video capture board, a soundcard and other goodness, a 5820k is clearly better than a 4790k. It’s clearly not a pure gamer’s CPU, but it seems pretty good for the Twitch/Youtube crowd.

        • erwendigo
        • 5 years ago

        “However, we may push some limits with XDMA CrossFire in 4K/high-refresh/multi-display”

        The worst-case scenario at 4K means you need to transfer, from the “slave” card to the master, an uncompressed 32-bit framebuffer (and you don’t know whether XDMA uses some form of lossless compression), about 31.64 MB per frame. So you can transfer around 30 frames per second over a single lane of PCIe 3.0.

        If you have a system that, say, reaches 60 fps at 4K (a great one!), you need to transfer around 30 frames per second from the slave card to the master, so you only saturate about one lane’s worth of the bandwidth on an X99 board. You still have the bandwidth of the other 7 lanes, roughly 7 GB/s in each direction. That is much more than enough for a top-end card today.

        Heck, go to 120 fps at 4K and you need to transfer 60 frames per second between the cards (about two lanes’ worth of bandwidth), and you can bet the impact on performance will still not be a big one. And that is a crazy performance configuration, hardly possible right now.

        And this is 4K, the worst case. Multi-display is similar; 1600p or less is an easy situation.

        The “problem” isn’t real, and Scott Wasson is overreacting to a “minus” that is mostly insignificant.
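        A quick sketch of that arithmetic, assuming uncompressed 32-bit 4K frames and roughly 985 MB/s of usable bandwidth per PCIe 3.0 lane:

        LANE_MB_S = 985                    # approx. usable MB/s per PCIe 3.0 lane
        FRAME_MB = 3840 * 2160 * 4 / 1e6   # 4K, 32-bit color, uncompressed: ~33 MB

        for lanes in (1, 2, 8):
            fps = lanes * LANE_MB_S / FRAME_MB
            print(f"x{lanes} link: ~{fps:.0f} finished frames per second")

        On those assumptions a single lane covers roughly the 30 frames per second a 60 fps AFR pair would actually need to move, which is the point being made above.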

      • Krogoth
      • 5 years ago

      Intel artificially crippled the PCIe controller on the CPU to justify the higher-end Socket 2011 offerings, while not placing the existing 4790K at a significant disadvantage.

        • Krogoth
        • 5 years ago

        Intel doesn’t want to repeat the “silly” mistake of the 3820K and 4820K. They were in the same price range as the 2600K and 3770K. However, they had the same number of cores/threads while offering far greater memory bandwidth and 40 PCIe lanes, like all other Socket 2011 offerings. They also retained the more classical “FSB” overclocking, since the clock generator is off-core (like all other Socket 2011 stuff).

        If you were just a little adventurous, you could easily get near high-end performance for most stuff at a fraction of the cost ($299 versus $599-999+). You just made the trade-off of not having six cores and all of the cache. It was laughably easy, to boot.

        They decided to make the 5820K more like its 4790K cousin by giving it only a few more PCIe lanes instead of the full 40 lanes that normally come with Socket 2011 chips. You are mostly just getting access to quad-channel DDR4. Both of these factors make it not such a hot deal for enthusiasts, and you might as well go with a 5930K or 5960X if you go Haswell-E.

      • ronch
      • 5 years ago

      You’re forgetting one thing: These things are not meant to be practical. It’s about having all the tech available out there at whatever cost even if it means they won’t make much of a difference in real world usage. In other words, just for bloating epeens.

      • HisDivineOrder
      • 5 years ago

      When you pay $400 for a motherboard, you want to take full advantage of it. That’s what’s wrong. The platform isn’t cheap. The memory isn’t cheap. Skimping last minute on the CPU because the specs all look great except that ONE tiny little spec that most people never check because it’s never been different in the history of the chip line is…

      …price differentiation. Intel-style.

    • jihadjoe
    • 5 years ago

    5820k seems like the value pick. Just $50 more than a 4790k for 50% more cores and 12 more PCIe lanes. Now if only DDR4 prices would start coming down already…

      • Pwnstar
      • 5 years ago

      I think we can all agree on that.

      • Freon
      • 5 years ago

      You’re also looking at an extra $80-100 for an equivalent motherboard, and we could see Broadwell in the time it takes DDR4 to drop in price.

        • jihadjoe
        • 5 years ago

        Z97 with 3-way SLI is going to need a PLX chip, so motherboard prices should be about even with X99. MSI and Gigabyte already have X99 boards with 4x PCIe 3.0 x16 selling for about $230-$250 on Newegg.

        IMO It’s really just DDR4 that’s going to make the platform more expensive.

          • Airmantharp
          • 5 years ago

          If you’re using three GPUs you’re not likely worried about any prices, so while the argument is certainly valid it’s not terribly representative. I’d build an X99 setup with just one GPU if I found the two extra cores to be more useful than the difference in price.

    • elmopuddy
    • 5 years ago

    Error in the first table, Sandy-E is listed as 8 core, 20MB cache

      • Damage
      • 5 years ago

      The table is correct for how the silicon looked. Intel disabled some of the units to make the i7 Extreme parts based on that chip.

        • Ninjitsu
        • 5 years ago

        But seeing that no one could use the locked cores, does it still count? Maybe you could write something like “6(8)”…

        • the
        • 5 years ago

        True. This is Intel’s binning magic at work.

        This highlights one of the unique aspects of the Sandy Bridge-E chips: the die was also used for the high-core-count Xeons. There was a native 4-core Sandy Bridge-E chip that launched several months later. All the Ivy Bridge-E chips used the low-core-count Xeon die, whereas the high-end die went up to 10 cores.*

        The only reason consumers are getting an 8 core chip now instead of several years ago is that that is the low core count Haswell-E die. Intel reportedly has a 12 core die planned for the high end Xeons.

        *There are 12 core Ivy Bridge ‘EP’ chips but they’re actually Ivy Bridge-EX chips repackaged to fit into the EP socket. These natively have 15 cores on-die. Haswell-EX is rumored to come with 18 cores though it is uncertain if it will also appear in Haswell-EP socket.

          • brucethemoose
          • 5 years ago

           Haswell-EP will have an 18-core variant. In fact, you can already pre-order a Xeon E5-2699 v3.

          • Wren
          • 5 years ago

           The 5960X is [url=http://www.guru3d.com/news-story/intel-core-i7-5960x-de-lidded-haswell-e-uses-soldered-tim,3.html]apparently a 12-core die with 4 cores disabled[/url]. Intel probably didn't show it on their die shot slide for marketing reasons...

            • Firestarter
            • 5 years ago

            well that was expensive

            • Damage
            • 5 years ago

            Nah, there’s a native 8-core version of Haswell-E.

            • Krogoth
            • 5 years ago

             That 5960X is just a rebinned Haswell-EP chip. I’m sure there are some 5960Xs that are native Haswell-E chips.

      • Krogoth
      • 5 years ago

       Sandy Bridge-E is 8-core silicon, but most of those chips shipped with a quarter of the cores disabled. The small number that kept all eight cores enabled were branded as Xeons.

    • Takeshi7
    • 5 years ago

    It feels like I’ve been waiting for this CPU forever, and finally it’s here! I am upset, though, that none of the motherboard makers set up the PCIe slots differently from each other; there’s practically no differentiation on that front.

    • LocalCitizen
    • 5 years ago

    read. drool. read. drool.

      • Pwnstar
      • 5 years ago

      Close your mouth!

        • maxxcool
        • 5 years ago

        LOL!

    • brucethemoose
    • 5 years ago

    First!

    EDIT: Aww, why test with a single 7950? A CPU like this needs a 290/780 at least.

      • Krogoth
      • 5 years ago

      To make it apparent that most modern games are GPU-limited. 😉

        • Airmantharp
        • 5 years ago

        Most common console ports that ran on the last generation of consoles are CPU-limited. And yes, that’s ‘most’ modern games.

        Not only is that changing with the new generation of consoles, where demanding games will have to actually be multi-threaded, but the GPU-limited claim also isn’t entirely true for games that actually need the CPU; think Civilization, some MMOs, or the Battlefields in multiplayer.

          • Krogoth
          • 5 years ago

           MMOs are limited by network traffic, bandwidth, and latency between the clients and servers. The CPU and GPU only factor into drawing the eye candy, which doesn’t reflect the actual game that is going on.

           Battlefield is still GPU-limited in “hot zones” with 30+ players. Civilization and other massive strategy games are only limited by the CPU’s clock speed in late-stage games, and they use two threads at most. Assuming clock speeds are equal, the Haswell-E chips will fare no better than their lesser counterparts at these games.
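
           A rough way to put numbers on that “two threads at most” claim is Amdahl’s law. The sketch below is purely illustrative: the 50% parallel fraction stands in for a game that keeps roughly two threads busy, and none of these figures come from the review.

               # Illustrative sketch, not measured data: Amdahl's law for a game
               # that only keeps about two threads busy (parallel fraction ~0.5).
               def amdahl_speedup(parallel_fraction, cores):
                   # Speedup over a single core when only part of the work scales.
                   return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

               for cores in (2, 4, 6, 8):
                   print(f"{cores} cores: {amdahl_speedup(0.5, cores):.2f}x")
               # ~1.33x, 1.60x, 1.71x, 1.78x -- past a couple of cores, extra cores
               # add little, and per-core clock speed dominates, as argued above.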

        • Krogoth
        • 5 years ago

        It looks like some people haven’t been paying attention. The CPU is only a problem if you’re running at low resolutions with none of the eye candy enabled. It has been this way for years.

        Throwing in a 290/780 would have just made the numbers larger; the differences between the platforms would remain mostly the same. A 290/780 would only be held back by aging chips and lower-end modern CPUs.

          • Airmantharp
          • 5 years ago

          Rather, some people have been actually playing games and seeing the difference for themselves 🙂

          (benchmarks, nice and repeatable as they are, do not tell the whole story…)

            • Krogoth
            • 5 years ago

            It is the placebo effect at work.

            You have an unconscious bias to think the $999 part is “faster” than the $299 part, when the cold numbers show they are practically identical (look at the graphs in the TR review, not the end numbers). In a double-blind test, you couldn’t tell the difference between the platforms.

            • Airmantharp
            • 5 years ago

            The graphs show that a 3.0GHz-3.5GHz 8-core CPU is almost as fast as (in some cases faster than) a 4-core CPU of the same generation running at 4.0GHz-4.4GHz.

            That’s not a placebo. That’s showing that even in a “GPU-limited” test suite, more cores mean more performance.
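
            As a back-of-the-envelope check on that reading (assumed clock figures, not the review’s data): if a workload really does scale across every core, aggregate clock throughput favors the lower-clocked eight-core part.

                # Illustrative sketch with assumed clock speeds, not review data:
                # aggregate core-clocks under the (optimistic) assumption of
                # perfect scaling across all cores.
                configs = {
                    "8 cores @ 3.3 GHz": 8 * 3.3,
                    "4 cores @ 4.4 GHz": 4 * 4.4,
                }
                for name, aggregate in configs.items():
                    print(f"{name}: {aggregate:.1f} GHz of aggregate core-clock")
                # 26.4 vs 17.6 -- the 8-core part has the headroom, but only when
                # the game actually spreads its work across threads.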
