Intel’s Core i7-6700K ‘Skylake’ processor reviewed

Well, this is certainly something. As you may know, Intel has been focused like mad on mobile computing for the past few years, attempting to insert itself into a growing market against established rivals like ARM. Desktop computing has kind of been on the backburner as a result. But a funny thing happened on the way to the death of the PC: yet another revival at the high end of the market. PC gaming is more alive and vibrant than ever, and folks are pioneering new applications like virtual reality on the PC, as well.

Intel has decided to acknowledge the thriving PC gaming market by throwing us a big, juicy bone. The first version of Skylake, its next-generation CPU architecture, makes its debut today in a pair of socketed processors for desktop PCs. The Core i7-6700K and Core i5-6600K are the first Skylake parts available to the public, and they’re arriving alongside an armada of motherboards based on the new Z170 chipset.

It’s a lake in the sky!

2015 has been a busy year in PC hardware, but it’s been full of strange product introductions. We’ve covered a number of product unveilings that have involved big architecture reveals and great fanfare but very little actual hardware to review. Heck, Intel announced the desktop version of its Broadwell CPUs back in June, but you still can’t buy them in North America. Skylake is the opposite situation. We have a Core i7-6700K chip in our grubby hands, but we don’t yet know the details of this new CPU microarchitecture. Intel says it’s planning to reveal those in a couple of weeks, at its Intel Developer Forum event in San Francisco. So we can show you how Skylake performs, but we can’t yet tell you exactly why.

We also don’t yet know the exact shape of the entire lineup of Skylake-based products. Intel says it will be releasing the rest of the family later in the third quarter of this year, after IDF. For now, these two desktop CPUs will stand alone.


The Core i7-6700K processor

Truth be told, though, we really do know quite a bit about Skylake already. The desktop CPU we’re reviewing is a quad-core, eight-threaded processor with 8MB of L3 cache, much like its predecessors. Skylake parts are manufactured on Intel’s 14-nm fabrication process with tri-gate transistors, like the Broadwell parts of the prior generation. However, Skylake is a “tock” in Intel’s familiar “tick-tock” development cadence; it brings with it a new CPU microarchitecture.

As usual, this architectural refresh is meant to improve clock-for-clock performance through various clever enhancements to the processor’s efficiency and throughput. Intel claims the 6700K is “up to 10% faster” than its predecessor, the Haswell-based Core i7-4790K.

In fact, the Core i7-6700K replaces the 4790K at the exact same price, just as the 6600K replaces the 4690K. The 6700K’s peak clock speed is down a little bit, at 4.2GHz instead of the 4.4GHz Turbo peak on the 4790K, but the 6700K can run all four cores at 4.2GHz under load. Since the 6600K and 6700K are both K-series parts, they have unlocked multipliers for easy overclocking, too.

Intel has pushed a little on the power front in order to deliver higher performance in recent generations. The top Ivy Bridge quad core topped out at 77W, while Haswell moved up to 88W. Now, Skylake nudges the limit up to 91W.

That added CPU power draw could be offset at a platform level by the transition to a new memory type, DDR4. Skylake supports both DDR4 and DDR3L type memories, for lower voltage operation and power savings. Bog-standard DDR3 isn’t officially supported at its usual 1.5V, although we may see some motherboard makers hack their way around that limitation. Most of the market will likely embrace DDR4 as the new standard, since it promises higher throughput and more headroom for transfer rates going forward. Intel’s high-end Haswell-E platform made the transition to DDR4 last year, as did the dual-socket Haswell-EP Xeons.

Like those platforms, Skylake desktop parts are getting a conservative start with DDR4. The first products only officially support 2133 MT/s operating speeds. Running your memory any faster is technically overclocking. That said, our test rigs are outfitted on day one with Corsair Vengeance LPX DIMMs rated for 2666 MT/s operation, and much higher speeds are possible.

The LGA 1151 socket

As you might expect given the new memory type, Skylake processors are not socket-compatible with prior generations. They require a motherboard with a new socket type known as LGA1151. Although the pinouts and plastic retention tabs are different, this socket is virtually the same size and shape as the one that came before it. As a result, any CPU coolers meant for Haswell CPUs ought to be compatible with LGA1151-based motherboards.

The new socket brings another change of note: the death of FIVR, the fully integrated voltage regulator that Intel introduced with Haswell and hailed as progress at the time. I expect we’ll hear more about the reasons behind the decision to spike the integrated VR at IDF, but we already know FIVR caused some complications for the mobile Broadwell chips. The inductors required by FIVR increased the Z-height of the processor, and Intel had to cut a hole in the motherboard in order to shoehorn Broadwell CPUs into extra-thin devices. Furthermore, FIVR wasn’t optimally efficient at low voltages, a fact that prompted Intel to build a bypass mechanism into Broadwell called LVR. As we wrote back then, “The need for 3DL and LVR makes one wonder whether the level of VR integration in Broadwell makes sense for future generations of Intel SoCs.” I suppose we don’t have to wonder anymore.

The question now is what exact arrangement has replaced FIVR. Presumably, Intel hasn’t taken a step back on things like independent supply rails for the CPU cores and the chip’s internal I/O ring.

Whatever the case there, Skylake does bring progress on other fronts, including tweakability. Intel says this CPU is its first ever to have features included expressly for overclocking. For one thing, the CPU’s base clock or BCLK has been liberated. The Asus Z170 Deluxe motherboard in my test rig offers options from 40 to 500MHz for BCLK speeds in 1MHz increments. Folks tweaking the BCLK don’t need to worry about the DMI and PCI Express ratios, either; a new PEG/DMI domain has its own isolated 100MHz clock. That should take a lot of the pain out of BCLK-based overclocking. Finally, Skylake’s memory controller supports a ton of different DDR4 speeds, up to 4133 MT/s in increments of 100 and 133MHz.
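
If you’re curious how those knobs fit together, the arithmetic is simple: the effective core clock is the BCLK times the CPU multiplier, and the memory speed is the BCLK times a memory ratio. Here’s a quick sketch of that math; the 6700K’s 4.2GHz peak comes from a 42x multiplier on the stock 100MHz base clock, while the other values are illustrative examples rather than Intel-specified settings.

# Core and memory clocks are derived from the base clock (BCLK) and ratios.
# The 125MHz/36x overclock below is a hypothetical example.

def core_clock_mhz(bclk_mhz, multiplier):
    """Effective CPU core clock in MHz."""
    return bclk_mhz * multiplier

def memory_speed_mts(bclk_mhz, memory_ratio):
    """Effective DDR4 transfer rate in MT/s for a given memory ratio."""
    return bclk_mhz * memory_ratio

print(core_clock_mhz(100, 42))       # 4200.0 MHz: the 6700K's stock peak
print(core_clock_mhz(125, 36))       # 4500.0 MHz: a hypothetical BCLK-based overclock
print(memory_speed_mts(100, 21.33))  # ~2133 MT/s: the officially supported DDR4 speed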

As a guy who just likes to overclock his stuff and not a dude with a professional overclocking career, I’m not sure what to make of these changes. I’m quite happy with the unlocked multipliers on K-series processors, which will let you squeeze a little extra out of a processor using conventional cooling. And I’m down with the memory clock flexibility. But those BCLK-oriented tuning knobs are probably meant for the folks with liquid nitrogen pots. I’m happy that they get these features, but it’s pretty obvious these capabilities aren’t for everyone. I seriously doubt Intel will allow much BCLK tuning leeway on the non-K variants of Skylake, for instance.

A new chipset: the Z170

One of the biggest changes from Haswell to Skylake happens at the platform level with the introduction of the new 100-series chipsets. By “chipset,” I mean “companion I/O chip,” since Intel’s platforms have consolidated those functions into a single chip for several generations.

Skylake’s companion chip is built using Intel’s 22-nm fab process, and the enthusiast-class variant of it is called the Z170. This chip supports 26 high-speed I/O ports, each one offering bandwidth equivalent to a single lane of PCI Express Gen3 connectivity. That’s a huge upgrade from the eight PCIe Gen2 lanes in the Z97 chipset. The Z170’s high-speed ports can be deployed by motherboard makers in various configurations. The possibilities include up to 20 PCIe Gen3 lanes, up to 10 USB 3.0 connections, and up to six SATA 6Gbps ports—though not all at the same time, since there are 26 high-speed ports in total.
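
To get a feel for how that shared pool of ports works, here’s a little sketch that checks a hypothetical board configuration against the Z170’s high-speed I/O budget. The per-interface caps and the 26-port total come from the paragraph above; the sample allocations are made up.

# The Z170 has 26 high-speed I/O (HSIO) ports, each assignable to one interface,
# subject to per-interface caps. The sample allocations below are hypothetical.

HSIO_TOTAL = 26
CAPS = {"pcie_gen3": 20, "usb3": 10, "sata": 6}

def check_allocation(alloc):
    """Return True if a port allocation fits within the Z170's HSIO budget."""
    for iface, count in alloc.items():
        if count > CAPS[iface]:
            print(f"{iface}: {count} exceeds the cap of {CAPS[iface]}")
            return False
    total = sum(alloc.values())
    if total > HSIO_TOTAL:
        print(f"{total} ports requested, but only {HSIO_TOTAL} are available")
        return False
    print(f"OK: {total} of {HSIO_TOTAL} HSIO ports used")
    return True

check_allocation({"pcie_gen3": 16, "usb3": 6, "sata": 4})   # fits: 26 of 26
check_allocation({"pcie_gen3": 20, "usb3": 10, "sata": 6})  # 36 ports: too many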

Our Asus Z170 Deluxe is packed with slots and ports

The tremendous bandwidth available via the Z170 prompted Intel to upgrade the DMI link between the CPU and the chipset, as well. The new DMI 3 link offers bandwidth equivalent to four lanes of PCIe Gen3. That’s not nearly enough to sustain simultaneous I/O operations across all 26 of the Z170’s high-speed ports. Heck, it’s not even close, which is kind of a big deal since the system’s memory sits beyond that DMI link, hanging off of the CPU’s integrated memory controller. If this were a server architecture, I’d be worried about that fact. For typical desktop PC use, though, I suspect DMI’s new four-lane arrangement should generally be sufficient.
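
For a back-of-the-envelope sense of the mismatch: PCIe Gen3 runs at 8 GT/s per lane with 128b/130b encoding, so a four-lane DMI 3.0 link works out to just under 4 GB/s in each direction. A quick sketch of that math:

# Approximate per-direction bandwidth of a PCIe Gen3-style link such as DMI 3.0.
# Protocol overhead beyond the 128b/130b line code is ignored here.

GEN3_RATE_GTS = 8.0              # 8 GT/s per lane
ENCODING_EFFICIENCY = 128 / 130  # 128b/130b encoding

def link_bandwidth_gbs(lanes):
    """Usable bandwidth in GB/s per direction for a given lane count."""
    bits_per_second = lanes * GEN3_RATE_GTS * 1e9 * ENCODING_EFFICIENCY
    return bits_per_second / 8 / 1e9

print(round(link_bandwidth_gbs(4), 2))   # ~3.94 GB/s: the DMI 3.0 link
print(round(link_bandwidth_gbs(26), 2))  # ~25.6 GB/s if all 26 HSIO ports were saturated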

Although the Z170 doesn’t natively support the higher-bandwidth USB 3.1 standard, Intel offers a chip code-named Alpine Ridge that supplies both USB 3.1 and Thunderbolt capabilities, and several motherboard makers are adopting it. As a result, you can expect to see quite a few Skylake boards with USB 3.1 support. A subset of those should be qualified to work with Thunderbolt, as well.

Desktop Broadwell caches in


The Core i7-5775C

I mentioned the desktop variant of Broadwell earlier, and here is the fastest model: the Core i7-5775C. This is a four-core, eight-thread desktop processor with 6MB of L3 cache, and like all Broadwell chips, it’s built on Intel’s latest 14-nm process. With only a 65W TDP, the 5775C isn’t a speed demon; its base and peak clocks are 3.3 and 3.7GHz.

This poor devil has led a neglected life. Intel unveiled it at Computex in June, but we didn’t get a review sample for weeks. Then our review unit sat for a while in Damage Labs, untouched, while I labored away on Radeon Fury reviews. Meanwhile, these chips still aren’t broadly available in North America, and now the new hotness of Skylake has arrived.

Regardless, the 5775C is intriguing for several reasons. First, it drops into existing Z97 motherboards and could be an upgrade option for current Haswell owners. Second, it has built-in Iris Pro graphics, which are pretty potent as these things go. As a 65W chip with an integrated GPU, the 5775C could fulfill some unique missions. Last and definitely not least, the 5775C has 128MB of eDRAM situated in a separate chip on the CPU package. This eDRAM helps to accelerate the Iris Pro graphics core, but it’s also just a big frickin’ L4 cache for the entire CPU. Any application whose working data set will fit into a 128MB cache could stand to benefit from its presence. We saw hints of greatness from a similar chip with an eDRAM cache back when we reviewed the first Haswells, but we weren’t able to use it with a discrete GPU. The 5775C has no such limitations, and you may be surprised by its performance.

One qualifier, though: you will pay for the privilege of owning a 5775C. The current list price is $366, 27 bucks more than a Skylake 6700K. Assuming you can find one in stock.

Since we have a Haswell, a Broadwell, and a Skylake on hand, and since Windows 10 is out, I figured we should test a whole range of Intel processors against one another—so that’s just what we did, spanning five generations back to Sandy Bridge.

Our testing methods

As usual, we ran each test at least three times, and we’ve reported the median result. Our test systems were configured like so:

Processor         AMD FX-8370          Core i7-2600K        Core i7-3770K
Motherboard       Asrock Fatal1ty      Asus P8Z77-V Pro     Asus P8Z77-V Pro
                  990FX Killer
Chipset           990FX + SB950        Z77 Express          Z77 Express
Memory size       16 GB (2 DIMMs)      16 GB (2 DIMMs)      16 GB (2 DIMMs)
Memory type       AMD Performance      Corsair Vengeance    Corsair Vengeance
                  Series DDR3 SDRAM    DDR3 SDRAM           DDR3 SDRAM
Memory speed      1866 MT/s            1333 MT/s            1600 MT/s
Memory timings    9-10-9-27 1T         8-8-8-20 1T          9-9-9-24 1T

Processor         Core i7-4790K,       Core i7-6700K        Core i7-5960X
                  Core i7-5775C
Motherboard       Asus Z97-A           Asus Z170 Deluxe     Asus X99 Deluxe
Chipset           Z97 Express          Z170                 X99
Memory size       16 GB (2 DIMMs)      16 GB (2 DIMMs)      16 GB (4 DIMMs)
Memory type       Corsair Vengeance    Corsair Vengeance    Corsair Vengeance
                  DDR3 SDRAM           LPX DDR4 SDRAM       LPX DDR4 SDRAM
Memory speed      1600 MT/s            2133 MT/s            2133 MT/s
Memory timings    9-9-9-24 1T          15-15-15-36 1T       15-15-15-36 1T

They all shared the following common elements:

Hard drive           Kingston HyperX SH103S3 240GB SSD
Discrete graphics    GeForce GTX 980 4GB with GeForce 353.62 drivers
OS                   Windows 10 Pro
Power supply         Corsair AX650

Thanks to Corsair, Kingston, MSI, Asus, Gigabyte, Asrock, Cooler Master, Intel, and AMD for helping to outfit our test rigs with some of the finest hardware available.

Some further notes on our testing methods:

  • The test systems’ Windows desktops were set at 1920×1080 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.
  • We used a Yokogawa WT210 digital power meter to capture power use over a span of time. The meter reads power use at the wall socket, so it incorporates power use from the entire system—the CPU, motherboard, memory, graphics solution, hard drives, and anything else plugged into the power supply unit. (The monitor was plugged into a separate outlet.) We measured how each of our test systems used power across a set time period, during which time we encoded a video with x264.
  • After consulting with our readers, we’ve decided to enable Windows’ “Balanced” power profile for the bulk of our desktop processor tests, which means power-saving features like SpeedStep and Cool’n’Quiet are operating. (In the past, we only enabled these features for power consumption testing.) Our spot checks demonstrated to us that, typically, there’s no performance penalty for enabling these features on today’s CPUs. If there is a real-world penalty to enabling these features, well, we think that’s worthy of inclusion in our measurements, since the vast majority of desktop processors these days will spend their lives with these features enabled.

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Memory subsystem performance

Since we have a new chip architecture and a new memory type on the bench, let’s take a look at some directed memory tests before moving on to real-world applications.

The fancy plot above mainly looks at cache bandwidth. This test is multithreaded, so the numbers you see show the combined bandwidth from all of the L1 and L2 caches on each CPU.

The most notable result above involves the comparison between the, uh, sky-blue Skylake 6700K line and the yellow Haswell 4790K one. At the 256KB to 1MB block sizes, where we’re accessing the four 256KB L2 caches in these chips, Skylake achieves substantially higher transfer rates at the same basic clock frequency as Haswell.

Oh, and get used to the Core i7-5960X taking the top spot in almost every benchmark. That CPU has eight cores, 16 threads, quad channels of DDR4, and a 20MB L3 cache. It also costs a grand. The 5960X is not in the same class as the rest of these chips. It’s just here for reference.

Now let’s zoom in on a portion of the graph above.

My main motivation for including this strange plot is to get you to consider the 64MB test block size. There, the purple line for the 5775C indicates higher bandwidth than any other CPU tested; the 5775C’s bandwidth at this block size is roughly double the 6700K’s. This data point is one spot where we can see the impact of the 5775C’s 128MB L4 cache. Ooh, ahh.

Interesting. You can see the impact of the 6700K’s higher-bandwidth DDR4 memory easily in Stream. That wasn’t the case with Haswell-E compared to Ivy-E. My suspicion is that Skylake may be more aggressive about speculatively pre-fetching data into its caches than prior architectures. That would explain its ability to take immediate advantage of DDR4’s additional bandwidth.

Looks to me like the Broadwell 5775C’s large L4 cache is a boon in AIDA’s memory copy test. The 5775C doesn’t look like anything special in isolated read or write tests, but when asked to do both, having that big cache on hand appears to help.

Next up, let’s look at access latencies.

SiSoft has a nice write-up of this latency testing tool, for those who are interested. We used the “in-page random” access pattern to reduce the impact of pre-fetchers on our measurements. This test isn’t multithreaded, so it’s a little easier to track which cache is being measured. If the block size is 32KB, you’re in the L1 cache. If it’s 64KB, you’re into the L2, and so on.

Despite the higher transfer rates of Skylake’s L2 cache, its access latencies have barely risen. L2 accesses on the Haswell 4790K take 11 cycles using this tool, and they take 12 cycles on the Skylake 6700K.

The move to DDR4 at 2133 MT/s carries only a slight penalty for the 6700K—it’s four nanoseconds slower than the 4790K with DDR3. That’s not bad at all, and I suspect that penalty could evaporate pretty quickly as DDR4 clock speeds ramp up.
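
Converting those figures into wall-clock time is straightforward: latency in nanoseconds is just the cycle count divided by the clock frequency in GHz. A quick sketch using the L2 numbers above:

# Convert a cache latency measured in CPU cycles to nanoseconds.

def cycles_to_ns(cycles, clock_ghz):
    """Latency in nanoseconds for a given cycle count and core clock."""
    return cycles / clock_ghz

print(round(cycles_to_ns(11, 4.4), 2))  # ~2.50 ns: Haswell 4790K L2 at its 4.4GHz peak
print(round(cycles_to_ns(12, 4.2), 2))  # ~2.86 ns: Skylake 6700K L2 at its 4.2GHz peak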

Some quick synthetic math tests

The folks at FinalWire have built some interesting micro-benchmarks into their AIDA64 system analysis software. They’ve tweaked several of these tests to make use of new instructions on the latest processors. Of the results shown below, PhotoWorxx uses AVX2 (and falls back to AVX on Ivy Bridge, et al.), CPU Hash uses AVX (and XOP on Bulldozer/Piledriver), and FPU Julia and Mandel use AVX2 with FMA.

These quick tests give us a nice starting sense of how the Skylake-based 6700K may compare to its 4790K predecessor. In PhotoWorxx, the 6700K manages a pretty dramatic gain over the 4790K. The 5775C even gets in on the action, with the Broadwell chip taking the third spot, ahead of the 4790K. However, the 6700K’s advantage over the 4790K grows slimmer as we move to different workloads. Skylake’s improvements in per-clock performance can be substantial in the right circumstances, but not every application will benefit equally. Some may not benefit much at all.

Power consumption and efficiency

The workload for this test is encoding a video with x264, based on a command ripped straight from the x264 benchmark you’ll see later.

Our Core i7-6700K-based test system draws a little more power under load than the 4790K-based equivalent, which is what we’d expect given these processors’ respective TDP ratings of 91W and 88W. Our 6700K system is relatively frugal at idle, too, although it’s nothing special there. The Core i7-5775C is in a class of its own on that front.

We can quantify efficiency by looking at the amount of power used, in kilojoules, during the entirety of our test period, when the chips are busy and at idle.

Perhaps our best measure of CPU power efficiency is task energy: the amount of energy used while encoding our video. This measure rewards CPUs for finishing the job sooner, but it doesn’t account for power draw at idle.
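
For those wondering how we get from watts to kilojoules, task energy is simply the measured power integrated over the time the encode takes. Here’s a minimal sketch of that calculation; the power readings in it are invented for illustration.

# Task energy: integrate wall power over the duration of the workload.
# The readings below are invented; a real run would use the power meter's log.

def task_energy_kj(power_samples_w, sample_interval_s):
    """Approximate energy in kilojoules from evenly spaced power readings in watts."""
    joules = sum(power_samples_w) * sample_interval_s
    return joules / 1000.0

samples = [150.0] * 120  # a hypothetical system averaging 150W over a two-minute encode
print(task_energy_kj(samples, 1.0))  # 18.0 kJ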

Even though it draws a little more power at peak, the 6700K-based system requires less energy to encode this video than the 4790K-based one. The difference between the two isn’t dramatic, but Intel’s architects appear to have succeeded in improving Skylake’s power efficiency over Haswell’s.

Project Cars
Project Cars is beautiful. I could race around Road America in a Formula C car for hours and be thoroughly entertained. In fact, that’s pretty much what I did in order to test.


Frame time (ms)    FPS rate
8.3                120
16.7               60
20                 50
25                 40
33.3               30
50                 20
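
Those rows are simply the reciprocal relationship between frame time and frame rate, as in this quick sketch:

# Convert per-frame render times in milliseconds to equivalent FPS rates.

def frame_time_ms_to_fps(frame_time_ms):
    """FPS rate corresponding to a given per-frame time in milliseconds."""
    return 1000.0 / frame_time_ms

for ms in (8.3, 16.7, 20, 25, 33.3, 50):
    print(f"{ms:>5} ms -> {frame_time_ms_to_fps(ms):.0f} FPS")
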

Click the buttons above to cycle through the plots. We capture rendering times for every frame of animation so we can better understand the experience offered by each solution.

Whoa. That’s different.

So, first things first: the Skylake 6700K takes a clear lead over the Haswell 4790K with a markedly higher FPS average. The differences are a little smaller if you switch to a more advanced metric like our 99th-percentile frame time, but either way, Skylake beats Haswell cleanly.

Things get weird, though, with the Core i7-5775C in the picture. The Broadwell-based CPU with the 128MB L4 cache turns in the top performance in Project Cars, outdoing even Skylake. Looks like that big cache can help with gaming performance, even with a discrete GPU.

We can understand in-game animation fluidity even better by looking at the entire “tail” of the frame time distribution for each CPU, which illustrates what happens with the most difficult frames.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. 16.7 ms correlates to 60 FPS, the golden mark we’d like to achieve (or surpass) for each and every frame. And 8.3 ms corresponds to 120 FPS, an even more demanding standard that may grow in relevance as PC displays with high refresh rates become more common.
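
Both of these metrics fall straight out of the raw per-frame render times. Here’s a minimal sketch of the 99th-percentile and time-beyond-X calculations; the frame times in it are invented for illustration.

# Two frame-time metrics: the 99th-percentile frame time and "time spent beyond X."
# The frame times below are invented for illustration.

def percentile_99(frame_times_ms):
    """Frame time (ms) that 99% of frames come in at or under (nearest-rank style)."""
    ordered = sorted(frame_times_ms)
    index = min(int(0.99 * len(ordered)), len(ordered) - 1)
    return ordered[index]

def time_beyond_ms(frame_times_ms, threshold_ms):
    """Total milliseconds spent past the threshold, summed across all frames."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

frames = [14.0] * 980 + [18.0] * 15 + [30.0] * 5  # mostly smooth, a few slow frames
print(percentile_99(frames))                      # 18.0 ms
print(round(time_beyond_ms(frames, 16.7), 1))     # 86.0 ms spent beyond the 16.7-ms mark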

Someone told me Project Cars was seriously CPU-bound. Maybe that was true in older versions of Windows. Maybe Win10 has introduced some magic that changes the math. I dunno. What I do know is that even the slowest CPU here, the FX-8370, spends less than half a millisecond beyond our 16.7-ms threshold. In other words, that CPU pumps out frames at a constant 60Hz throughout almost the entire test run. The faster Intel processors aren’t too far from delivering a constant 120Hz.

The Witcher 3

Uh, sorry. This game has a lot of settings.



After testing this game quite a bit for recent GPU reviews, I expected it to be somewhat CPU-bound. I suppose it is, in the sense that we can show that faster CPUs seem to perform better in this game than slower CPUs. The 6700K looks strong here, and the 5775C’s magical gaming prowess continues. Still, all of the processors are doing a great job of producing smooth animation in conjunction with our GeForce GTX 980 graphics card. The FX-8370 produces 99% of all frames in 11.2 ms or less, roughly the equivalent of 90Hz. Every other CPU here is even faster.

Civilization: Beyond Earth

Since this game’s built-in benchmark simply spits out frame times, we were able to give it a full workup without having to resort to manual testing. That’s nice, since manual benchmarking of an RTS with zoom is kind of a nightmare.



The 6700K proves to be just a little slower than the 4790K in each of our metrics here—by an eyelash. Meanwhile, that crazy Broadwell 5775C embarrasses them both with the help of its beefy L4 cache.

Far Cry 4



You can see some minor spikes in the frame time plots above. I think, in this case, the 99th-percentile frame time does the best job of sorting out some closely matched contenders. The 4790K, 6700K, and 5775C are essentially tied at the top of the pack. They’re all exceptionally fast, nearly producing a constant 85Hz stream of frames. Then again, even with our advanced metrics helping tease out any differences, this game appears to be largely GPU-bound among the top Intel processors.

Dat 5775C, tho.

Shadow of Mordor

I had hoped this quick, FPS-only built-in benchmark from Shadow of Mordor would shed some new light on the contest between the CPUs. Instead, this result seems to be driving home the point that even many of today’s most demanding PC games simply are not meaningfully CPU-bound—provided you have a GeForce graphics card and a fast Intel quad-core CPU from the past four or five years. At this rate, we may have to start hunting explicitly for CPU-bound games and scenarios in order to stress test CPUs in the future.

Also, I may have made a mistake in switching to GeForce graphics cards on our CPU test rigs. My sense is that Radeons tend to be quite a bit more CPU-bound, especially when measured with advanced metrics.

Productivity

Compiling code in GCC

Our resident developer, Bruno Ferreira, helped put together this code compiling test. Qtbench tests the time required to compile the Qt SDK using the GCC compiler. The number of jobs dispatched by the Qtbench script is configurable, and we set the number of threads to match the hardware thread count for each CPU.
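
The principle is easy to sketch, if you’ll forgive a generic stand-in for the actual Qtbench script: query the hardware thread count and dispatch that many parallel compile jobs.

# Match parallel compile jobs to the hardware thread count, as our Qtbench runs do.
# This is a generic make-based illustration, not the Qtbench script itself.
import os
import subprocess

threads = os.cpu_count() or 1
print(f"Dispatching {threads} compile jobs")
subprocess.run(["make", f"-j{threads}"], check=True)  # assumes a Makefile in the current directory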

As we switch away from gaming, the 6700K turns in a dominating performance versus the other quad-core processors in the bunch. Is this result the start of a trend among our other productivity tests?

TrueCrypt disk encryption

TrueCrypt supports acceleration via Intel’s AES-NI instructions, so the encoding of the AES algorithm, in particular, should be very fast on the CPUs that support those instructions. We’ve also included results for another algorithm, Twofish, that isn’t accelerated via dedicated instructions. (Yes, the TrueCrypt project has fallen on hard times, but these results will come in handy later, as you’ll soon see.)

7-Zip file compression and decompression

JavaScript performance

The Skylake 6700K is the fastest Intel quad-core CPU pretty consistently, with the lone exception of the 7-Zip decompression test. Generally, though, it’s only a smidgen quicker than the Haswell-based 4790K. The 5775C performs respectably in these productivity tests, but it doesn’t continue the surprising excellence we saw in our gaming tests.

Video encoding

x264 HD video encoding

Our x264 test involves one of the latest builds of the encoder with AVX2 and FMA support. To test, we encoded a one-minute, 1080p .m2ts video using the following options:

--profile high --preset medium --crf 18 --video-filter resize:1280,720 --force-cfr

The source video was obtained from a repository of stock videos on this website. We used the Samsung Earth from Above clip.

Handbrake HD video encoding

Our Handbrake test transcodes a two-and-a-half-minute 1080p H.264 source video into a smaller format defined by the program’s “iPhone & iPod Touch” preset.

One of the most notable outcomes in our video encoding tests is simply that the eight-core 5960X performs so well. Last time we checked, x264 didn’t benefit much from having eight cores and 16 hardware threads on tap. This latest build certainly does.

The 6700K outperforms the other quad-core processors here, too.

Image processing

The Panorama Factory photo stitching
The Panorama Factory handles an increasingly popular image processing task: joining together multiple images to create a wide-aspect panorama. This task can require lots of memory and can be computationally intensive, so The Panorama Factory comes in a 64-bit version that’s widely multithreaded. I asked it to join four pictures, each eight megapixels, into a glorious panorama of the interior of Damage Labs.

Sometimes, Skylake is clearly faster than Haswell. Other times, well, this happens.

3D rendering

LuxMark

Because LuxMark uses OpenCL, we can use it to test both GPU and CPU performance—and even to compare performance across different processor types. OpenCL code is by nature parallelized and relies on a real-time compiler, so it should adapt well to new instructions. For instance, Intel and AMD offer integrated client drivers for OpenCL on x86 processors, and they both support AVX. The AMD APP driver even supports Bulldozer’s and Piledriver’s distinctive instructions, FMA4 and XOP. We’ve used the AMD APP ICD on the FX-8370 and Intel’s latest OpenCL ICD on the rest of the processors.
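
If you’re curious which compute devices a benchmark like LuxMark can see on your own system, here’s a quick sketch that enumerates OpenCL platforms and devices. It assumes the pyopencl package and at least one vendor ICD are installed; it isn’t part of LuxMark itself.

# List the OpenCL platforms and devices exposed by the installed ICDs.
# Assumes the pyopencl package is available.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name}")
    for device in platform.get_devices():
        kind = cl.device_type.to_string(device.type)
        print(f"  {kind}: {device.name} ({device.max_compute_units} compute units)")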

We tested with LuxMark 3.0 using the “Hotel lobby” scene.

We’ll start with CPU-only results.

The 6700K manages a nice gain over the 4790K here. Now, if we switch to only using the GPU to render and just letting the CPU feed it, here’s what happens.

Chaos, mostly, but with higher scores thanks to the GPU’s number-crunching prowess.

We can try combining CPU and GPU computing power by asking both processor types to work on the same problem at once.

Sharing the load between the two processor types leads to the highest overall scores. It also erases the 6700K’s advantage over the 4790K from the CPU-only test, oddly enough.

Cinebench rendering

The Cinebench benchmark is based on Maxon’s Cinema 4D rendering engine. This test runs with just a single thread and then with as many threads as CPU cores (or threads, in CPUs with multiple hardware threads per core) are available.

The Skylake 6700K achieves the highest per-thread performance of any of the CPUs tested. It’s roughly 80% faster in the single-threaded test than AMD’s FX-8370, which is either awesome or depressing. Maybe a little of both. With all eight threads active, the 6700K only slightly outperforms the 4790K overall.

POV-Ray rendering

No surprises in POV-Ray. The 6700K continues to be just a bit faster than its predecessor.

Scientific computing

MyriMatch proteomics

MyriMatch is intended for use in proteomics, or the large-scale study of proteins. You can read more about it here.

STARS Euler3d computational fluid dynamics

Euler3D tackles the difficult problem of simulating fluid dynamics. Like MyriMatch, it tends to be very memory-bandwidth intensive. You can read more about it right here.

Probably thanks to its higher-bandwidth L2 cache and DRAM, the 6700K easily outruns the 4790K in this fluid dynamics simulation. The real star of this show is the Broadwell 5775C, though, whose L4 cache finally helps out rather dramatically in something other than a game. The Haswell equivalent also performed well in this test.

Legacy comparisons

Many of you have asked for broader comparisons with older CPUs, so you can understand what sort of improvements to expect when upgrading from an older system. We can’t always re-test every CPU from one iteration of our test suite to the next, but there are some commonalities that carry over from generation to generation. We might as well try some inter-generational mash-ups.

Now, these comparisons won’t be as exact and pristine as our other scores. Our new test systems run Windows 10 instead of Windows 8.1 and 7, for instance, and have higher-density RAM and larger SSDs. We’re using some slightly different versions of POV-Ray and 7-Zip, too. Still, scores in the benchmarks we selected shouldn’t vary too much based on those factors, so… let’s do this.

Our first set of mash-up results comes from our last two generations of CPU test suites, as embodied in our FX-8350 review from the fall of 2012 and our original desktop Haswell review and our Haswell-EP review from last year. This set will take us back at least four generations for both Intel and AMD, spanning a price range from under $100 to $1K.

Productivity

Image processing

3D rendering

Scientific computing

Aw, we can do better than that. Turn the page.

Legacy comparisons, continued

That was a nice start, but we can go broader. This next set of results includes fewer common benchmarks, but it takes us as far back as the Core 2 Duo and, yes, a chip derived from the Pentium 4: the Pentium Extreme Edition 840. Also present: dual-core versions of low-power CPUs from both Intel and AMD, the Atom D525 and the E-350 APU. We retired this original test suite after the 3960X review in the fall of 2011. We’ve now mashed it up with results from our first desktop Haswell review, our Haswell-EP review, and from today.

Image processing

3D rendering

Never forget: in April of 2001, the Pentium III 800 rendered this same “chess2” POV-Ray scene in just under 24 minutes.

Conclusions

Let’s summarize our results with a couple of our famous power-performance scatter plots. The first one is based on a geometric mean of all of our non-gaming application tests, and the second one focuses on our frame-time-based game performance results. As ever, the best values will tend toward the top left corner of the plot. Since they’re discontinued products, I’ve used the original introductory prices for the 2600K and 3770K in order to provide context.
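
For the curious, the overall index behind that first plot is a geometric mean of the individual test results, which keeps any single benchmark from dominating the summary. A quick sketch with made-up scores:

# Geometric mean of per-test scores, as used for the overall performance index.
# The scores below are invented for illustration.
import math

def geometric_mean(scores):
    """Geometric mean of a list of positive scores."""
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

print(round(geometric_mean([100, 120, 95, 110]), 1))  # ~105.8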

Intel’s annual parade of incremental progress marches on. For non-gaming applications, the Skylake 6700K offers a slight boost in performance overall compared to last year’s model, the Haswell 4790K. What the summary plot can’t tell you is that this advantage is sometimes fairly substantial, while other times the 6700K is no faster than its predecessor. The plot also neglects to mention that the gains from Haswell to Skylake come in my favorite flavor: per-thread performance, which is the most important and most difficult sort of CPU performance to achieve. I’m always happy to see improvements on this front, even if they’re relatively modest.

The gaming plot tells a similar story, but here, the 6700K is in the running for the fastest gaming CPU on the planet—and it would’ve won, too, if it weren’t for the pesky Broadwell 5775C and its magic L4 cache. The 6700K improves on the 4790K by a tad, but the 5775C upstages it with a freakish string of gaming performance wins, even though its prevailing clock speed is ~500MHz lower.

What should we make of these results? If you’ve been nursing along a system based on a Sandy Bridge processor like the Core i5-2500K or Core i7-2600K, then perhaps Skylake has enough to offer to prompt an upgrade. Cumulatively, Intel has made quite a bit of progress in the past several years—and the Skylake platform with the Z170 chipset is a considerable upgrade in terms of I/O bandwidth, too. Those motherboards bristle with storage options and high-speed USB ports and such. So there’s that. What’s jarring about our gaming results is that the Sandy Bridge-based 2600K remains a very competent processor for running the current PC games we tested. You’ll probably want to avoid thinking about that fact when it comes time to pull out the credit card.

One thing I haven’t addressed yet is the Core i7-6700K’s overclocking potential. Sorry, but I simply ran out of time after testing seven different processors and making this review happen in the short span of time since the Windows 10 launch. I’ve heard that some folks are reaching speeds as high as 5GHz with Skylake parts, while others are reporting clock speeds around 4.7-4.8GHz, similar to what we’ve seen from most Haswells. Mark’s 6700K topped out at 4.6GHz aboard the Asus Z170-A. At the end of the day, if only a few hundred megahertz are at stake between Haswell and Skylake, well, that ain’t much in the grand scheme. Then again, we need to explore the bandwidth potential of higher DDR4 memory frequencies, too. We’ll have to overclock our Skylake chip soon and write it up in order to provide another data point.

I figure we should overclock the 5775C, as well, to see what it can do. Heck, if you’re a gamer sporting a Haswell-compatible motherboard and looking for an upgrade, this little desktop Broadwell may be a better choice than the 6700K. So long as your motherboard is Broadwell-compatible via a BIOS upgrade, the 5775C could deliver gaming performance that’s superior to Skylake, provided your games of choice benefit as much from that L4 cache as the ones we tested did.


Comments closed
    • bintsmok
    • 4 years ago

    Sir Scott,

    Can you include again Crysis 3 for CPU gaming benchmarks ?

    [url<]https://www.youtube.com/watch?v=aX1S4aSJ3lU[/url<] The YouTube link above is the most CPU-intensive part of the game based on my experience. Using a GTX 970, there is a noticeable difference when I disable Hyper Threading of my Core i7 4790K and it shows in the 99th percentile frame time. Thank you.

    • chrcoluk
    • 4 years ago

    What was most interesting to me is the gap between sandy/ivy and haswell on many of the tests. It seems haswell is faster than people have considered. The gap between those chips is the 2nd biggest, with the biggest being between the 5960x and skylake.

    • sweatshopking
    • 4 years ago

    WHO EVEN USES CPU’S ANYMORE? IT’S NOT 1995 GUIZE.

      • Milo Burke
      • 4 years ago

      You know, you’re a lot less threatening than you used to be…

      • anotherengineer
      • 4 years ago

      Exactly. Everyone knows, that they’re called APU’s nowadays!!!!

    • Milo Burke
    • 4 years ago

    Now that Skylake is out, can we get back to Pascal rumors please?

      • Anovoca
      • 4 years ago

      Pascal is so fast it made the Kessel run in under 12 parsecs

        • Milo Burke
        • 4 years ago

        I’m hoping it’ll make 0.5 past light speed.

          • anotherengineer
          • 4 years ago

          [url<]http://lolheaven.com/wp-content/uploads/2013/05/2062.jpg[/url<]

        • Mr Bill
        • 4 years ago

        Normally its 21 parsecs but a stitch in time saves nine.

      • anotherengineer
      • 4 years ago

      How about the when is AMD going to upgrade their 7850 silicon rumors??

    • Anovoca
    • 4 years ago

    My biggest concern with upgrading to Skylake is how long the new socket is going to be around for. 1150 had an incredibly long run which was great for those of us that like to upgrade incrementally. I had some bad experiences in the past having a system upgrade locked by 1156 which was around for about a half a year before being replaced by 1155. Overall skylake looks like a solid upgrade, but I would hate for them to turn around and release the next tock with a huge cache upgrade and make it socket 1152.

    Then again 11nm will probably use a new socket regardless.

    • ronch
    • 4 years ago

    It’s amazing that at this point in time we’re already at Skylake along Intel’s roadmap. 5 years ago, when we only had Nehalem, Haswell and Skylake seemed so mysterious, so far off. Heck, even Sandy Bridge seemed mysterious and Bulldozer was akin to Star Wars Episode IV: A New Hope.

    So right now we’re getting new names like Kaby Lake and Ice Lake, and it’s been a year since AMD answered the question on everyone’s mind about how they plan to move forward post-Bulldozer.

    Interesting times in the x86 CPU space, for sure. I really hope the eternal battle between Intel and AMD will be just that.

    • itachi
    • 4 years ago

    [url<]http://pclab.pl/art65154.html[/url<] that's a polish website review (just browse through graphs), they tested at 2666mhz ram I believe and show nice actual gains, some people say that they're not reliable source but I believe they are.. those results make sense I think, and they did test some CPU bound games. Also HardOCP did low resolutions tests : [url<]http://www.hardocp.com/article/2015/08/05/intel_skylake_core_i76700k_ipc_overclocking_review/6#.VcajWPmCmUk[/url<] to show the gains of memory bandwidth, altought some say running such low resolution actually overkill to test CPU bound barrier, but I think it's a good idea... Sorry for posting links also not sure it's allowed, but what better way to illustrate the claims I been making in my previous comments ! I really think Techreport should dig deeper into that higher RAM frequency/testing CPU bound games, after all alot of us are gamers ! I know people do exclusively productvity of course we're not alone :). But the Skylake gains in gaming if that polish review is true are actually pretty good... but in the PC tech review world I do not trust anyone more than you guys ;). I even mailed Scott to let him know my opinion ! (they put their emails after all :p)

    • DavidC1
    • 4 years ago

    Windwalker, and anyone who wonders about the iGPU on “enthusiast” K chips like the 6700K.

    Whatever miniscule extra die cost the iGPU might incur, it has already been PAID for by the people that uses the iGPU. So it costs you nothing. In the case of the Crystalwell parts though, you ARE actually paying $60 more.

    Somebody mentioned about Desktop Sandy Bridge SKUs like the 2450P that does not feature iGPU. They don’t do that anymore. So Intel must have figured out pretty quickly that there’s very little people that explicitely DO NOT want iGPUs.

    If there’s someone willing to pay $15-20 EXTRA for having a part without iGPU, I’m pretty sure Intel would make them.

    Now go ask how many people will go buy an iGPU-less part that costs $15 more and brings zero benefits over a iGPU enabled part.

    • TopHatKiller
    • 4 years ago

    HaHa. Uhm. Ha… Just read the S/A front-end piece on Skylark.
    That made me laugh.
    As usual excellent criticism of blind gibbering-at-the-knee adulations of our journalistic friends.
    That Demerjian guy is even a little more critical than me – and without any transistor count / die size we still don’t know how much of a useless space the ‘brilliant best-in-the-business’ 14nm process actually might be.. Maybe intel will surprise me, who knows?
    I’m laughing still… but sadly… with no Zen still in sight Intel have the playing field of rehashes and lame-dick releases all to themselves.

      • Airmantharp
      • 4 years ago

      Why is this guy still here?

        • TopHatKiller
        • 4 years ago

        Probably because you people still haven’t managed to bully me out. Keep trying though, you never know; you might even succeed! You have a lovely day too!

      • TopHatKiller
      • 4 years ago

      Skylark: 11.4 Kaveri: 9.8 Haswell: 7.9 by anand m/mm2
      Skylark’s power: equal, worse, little better then haswell. Hence:
      Intel’s 14trigate process is [a] still hasn’t reached maturity! [b] is total crap.

      I posted some time ago that there was something horrible about 14nm….

    • Takeshi7
    • 4 years ago

    I wish you posted a second memory latency graph that showed the latency in actual time rather than just in cycles.

    • ZGradt
    • 4 years ago

    You should have included a $390 5820K instead of the $1000 5960X. If you need to buy a fancy new motherboard and DDR4 anyway, you might as well pay an extra $40 for the CPU and get 2 more cores. I’ve been stuck at 4 cores for so long I want to scream.

    I expected skylake to be a lot faster. They went from 22 to 14 nm, revamped the architecture and upgraded to DDR4. It seems like a lot of trouble on Intel’s part for such little gain. Were they just looking to reduce die size and fatten up their margins?

    I’m thinking about going to a 5830K and making a steambox out of my 3470. That should last me quite a while.

    • anotherengineer
    • 4 years ago

    Hmmm so skylake igpu does not support DX12, or HDMI 2.0 and does not mention DP version supported. Seems kind of odd, for a brand new piece of silicon?!

    [url<]http://ark.intel.com/products/88195/Intel-Core-i7-6700K-Processor-8M-Cache-up-to-4_20-GHz[/url<]

    • ermo
    • 4 years ago

    [quote=”Damage”<][b<]Someone told me Project Cars was seriously CPU-bound[/b<]. Maybe that was true in older versions of Windows. Maybe Win10 has introduced some magic that changes the math. I dunno. What I do know is that even the slowest CPU here, the FX-8370, spends less than half a millisecond beyond our 16.7-ms threshold.[/quote<]

    In fairness, I stated that the combination of AMD's WDDM 1.x GPU drivers (Win 8.1) and Project CARS expose a GPU driver bottleneck on AMD's side that shows up under high CPU load. The AMD WDDM 1.x GPU driver bottleneck gets worse at Ultra settings, since Ultra significantly increases the amount of draw calls that the CPU needs to massage for the GPU driver.

    Your review uses a GTX 980 and NVidia's WDDM 2.x GPU drivers, which support the WDDM 2.x multi-threading model and can distribute the GPU driver draw call generation load across multiple CPU cores.

    Feel free to debunk my assertion by using your test bench to benchmark a R9 390X against a GTX 980 on Win 10 vs Win 8.1. I am confident that you'll see a 20 - 40% discrepance in performance on the AMD card between Win 10 and Win 8.1, as that is what internal WMD testing of Project CARS has shown the entire spring of 2015. In comparison, the discrepance between the Win 10 and Win 8.1 GTX 980 results are likely negligible.

    [quote="Robert Dibley, SMS Render Lead"<] Ultra settings include drawing a great deal more into almost every part of the game (e.g. greater draw distance in main scene, mirrors, environment maps etc), and every extra draw call on PC is a CPU cost, primarily inside the DirectX drivers. So increasing CPU clock speed may very well make a significant difference when using ultra settings, more than it does in other settings. ([url=http://forum.projectcarsgame.com/showthread.php?35742-2500K-Bottleneck-at-Stock-Let-s-compare&p=1065535&viewfull=1#post1065535<]source[/url<]) [/quote<]

    • blastdoor
    • 4 years ago

    It feels like Intel has just been taunting AMD for the last few years.

    If Zen turns out to have performance in the ballpark of Haswell, I wonder if Intel will immediately deploy the “real” Core upgrade that they’ve been holding in reserve behind a sign that reads “In case of competition, break glass”

      • Klimax
      • 4 years ago

      Not happening even if you use their PR claims of 40%. (Too behind) And given what we have seen with Fury X, then I don’t think they’ll reach even Nehalem…

        • TopHatKiller
        • 4 years ago

        Nonsense.

          • Klimax
          • 4 years ago

          Nonsense are your posts especially content free like this one. You completely failed to refute my post.

            • TopHatKiller
            • 4 years ago

            True. I didn’t present a counter argument.
            Your reference to gpus is inappropriate for cpus.
            The “40%” nonsense from amd is basically meaninglessness – and no sensible person could conclude anything at all from it.
            ‘Zen’ is a new architecture, possibly a virtualized one, who knows? Not me. Nor you.
            ‘Zen’ could have a better ipc then haswell, or not. It could have a higher frequency, or not. Maybe it has more cores and threads, maybe not.
            No one knows except amd.
            Now, do you see why I said “nonsense”?

            • bfar
            • 4 years ago

            It’s all speculation at this point though. We have no clue how Zen will look in terms of either price or performance, or how Intel will respond.

      • ronch
      • 4 years ago

      Intel only employs humans, not elves or fairies.

      • TopHatKiller
      • 4 years ago

      There have been rumours for some years that intel actually has a new architecture in waiting.
      So it’s possible. At least that would be some explanation for what the engineer teams at intel having being doing for years, other then in the server end.

    • TwoEars
    • 4 years ago

    Scott’s review of Skylake is great, no need to change anything. But just for fun here’s a Skylake vs Sandybridge test with dGPU that Ryan over at PcPer did. I think both Scott and the rest of us might find it interesting, I certainly did.

    [url<]http://www.pcper.com/reviews/Graphics-Cards/Skylake-vs-Sandy-Bridge-Discrete-GPU-Showdown/Metro-Last-Light-and-Conclusion[/url<]

    • chuckula
    • 4 years ago

    Hey! Here’s a link to a delid of Skylake with a very nice die photo: [url<]http://www.overclock.net/t/1568357/skylake-delidded[/url<] Guy claims he got the chip up to 5.1Ghz stable after the delid with reasonable temps.

      • chuckula
      • 4 years ago

      As a followup: I have estimated Skylake’s die size to be a little over 122 mm^2… which is freakin’ tiny! [Edit: The die size for the 4770K/4790K is 177mm^2…]

      See my math in the forum post linked below and please feel free to correct me if I messed up any calculations.

      [url<]https://techreport.com/forums/viewtopic.php?f=2&t=116098[/url<]

    • ptsant
    • 4 years ago

    I still remember my upgrade from 386DX40 to an AMD K5 100. Ten times faster. Then from the K5 100 to the Athlon 700, also almost ten times faster. Now, should I upgrade for 25-30% more performance? Maybe… I certainly have more money than when I was a student.

    I suppose the most important factor now, just as in the past, is the need for a new platform: new RAM, new USB 3.1, new M.2 SSD etc. Otherwise, why bother. At least PCIe has held up very well (you may remember PCI -> VLB -> AGP (different iterations) that got all our video cards obsolete very quickly).

      • auxy
      • 4 years ago

      VLB predated PCI. (*´ω`*)

        • Krogoth
        • 4 years ago

        It is only predates PCI by two years and died just as quickly because VLB was tied to 486 architecture.

          • auxy
          • 4 years ago

          And yet… (*‘ω‘ *)

    • xeridea
    • 4 years ago

    It will be interesting when DX12 and Vulkan games are out, the 99th percentile framerates comparing CPUs will be essentially identical.

    • gmskking
    • 4 years ago

    “nursing along a system based on a Sandy Bridge processor like the Core i5-2500K or Core i7-2600K”

    Nursing along a 2600k? Lol. My 2600k is still crazy fast. Intel has not given me a reason to even start looking at a replacement yet. Why would anyone with a 2600k want to upgrade to a 6700k? No point.

      • auxy
      • 4 years ago

      PCI Express 3.0, lots of USB 3.0, M.2 storage, support for more advanced UEFI functions, better single-threaded performance and lower power consumption, support for DDR4 memory with higher throughput, support for wider graphics card configurations, generally running on a newer platform with longer support and higher system performance overall due to DMI 3.0…

      Nah, no point. (⌒▽⌒)

        • Krogoth
        • 4 years ago

        In other words, a platform upgrade.

        • travbrad
        • 4 years ago

        I agree it’s not that there’s “no point”, but there is no way I would spend $500-600+ for those things, particularly since I primarily care about gaming where most of that would make no difference or very little difference. I do a bit of video encoding too which would show more of a difference than gaming, but I usually do that overnight anyway.

        A 4.5ghz+ overclock (which virtually every Sandy Bridge CPU can do) gets you some pretty darn good single-threaded performance too. Of course Skylake can also be overclocked, but again I couldn’t justify spending that kind of money for the performance it would get me. That kind of money could buy a GTX980 or even 980TI.

          • auxy
          • 4 years ago

          Of course, of course. I wasn’t suggesting that someone with a 4Ghz SNB machine should upgrade to a Skylake for the gaming performance benefits!

          Mainly I was just remarking that the original poster’s dismissive attitude of the new CPUs is hardly fair. Sure, if you don’t need anything the new platform offers, the performance alone isn’t especially compelling, but if you’re someone who is really concerned about worst-case frame latency (like a VR user, or a competitive gamer using a low-latency low-persistence display), it may still be worth it. (´・ω・`)

          Really the people who would be interested are the people who are using exotic storage configurations (3x M.2 NVMe drives!) or who want to do Tri-Fire. (I might argue that Tri-Fire users should be on HEDT anyway, though.) It would also be nice to have all of my USB ports be 3.0 finally. Tired of this weird mix of USB2 and USB3 we’ve had hanging around forever.

    • wingless
    • 4 years ago

    So I may actually consider upgrading from my 2600K to Skylake this year…..or nah, I’ll just overclock it another 500 Mhz and save my money.

      • gmskking
      • 4 years ago

      My 2600k still screams. No need to upgrade for a while yet.

    • cegras
    • 4 years ago

    Hi Scott,

    Recently (or maybe, it is only I who has become recently aware) there have been games being released that rely on a scripting (Lua) logic engine on top of a compiled graphical engine. Two examples are Payday 2 and Natural Selection 2, and I think Civ 5. In those games, I expect core counts and IPC to matter a little more. Of course, these games will be easily run by any top of the line hardware. But I’m interested in how the performance scales with increasing CPU vs. GPU power.

    Any comments?

      • xeridea
      • 4 years ago

      Lua speed is pretty good, the performance hit for using it for some scripting in your game wouldn’t be that big, especially since DX11 is horrible at using multiple cores, and is by and large the biggest CPU user in games.

        • Klimax
        • 4 years ago

        That’s drivers not DX 11. Also it is not largest user of CPU by far. Not at all – even in terribly programmed cases like Star Swarm. (Got it all captured, unfortunately presenting graphs and other results is bit hard from VTune)

      • DrDominodog51
      • 4 years ago

      I know for a fact G-Mod uses Lua, but I’m not sure it uses it in the way you’re describing.

    • ronch
    • 4 years ago

    I suppose this is really big news, judging by the number of comments. And rightly so. It’s been two years since we ACTUALLY saw a new CPU core. (Broadwell is practically vaporware.) And note that Skylake offers people not just the first 14nm CPUs that most folks can actually buy, but an updated architecture as well. So it’s practically a tick+tock.

    • jessterman21
    • 4 years ago

    I’d like to see Witcher 3 on Ultra (no HW) benchmarked while running through Novigrad. Tried this last night, and it was peaking my 4.1GHz i5-3570K every second or so, causing big latency spikes last night – I assume from drawing in the people.

    • lycium
    • 4 years ago

    I’m not sure whether lack of AVX-512 / LRB-NI or the 128mb is more disappointing :'(

    Edit: and I wish the people complaining about the beefy iGPUs would remember that THIS is where the parallel compute power goes, not more cores! They are excellent OpenCL workers in general (it’s not just a GPU for graphics/games), and exactly what you want instead of more relatively low performance x86 (in GFLOPS/watt, die space etc). Basically don’t worry, Intel will drag you kicking and screaming into the parallel future!

    • HisDivineOrder
    • 4 years ago

    So I’ve read the reviews and all I can say is…

    Something Happened.

      • njoydesign
      • 4 years ago

      Something Haswelled. =)

      That’s how I felt after glancing though the review. It’s a bit beter here and there, but overall not much incentive to upgrade from haswell. For me, that is)

    • geekl33tgamer
    • 4 years ago

    Looking forward to Skylake-E it is then, and [i<]maybe[/i<] the Zendozer. If AMD can't close the IPC gap at least a little while Intel's IPC gains start slowing per release, you could argue that they should just give up?

    • Laykun
    • 4 years ago

    Gaming rig sitting on an aging i7 980X, looks like it’ll be staying that way for a while, particularly with DX12 around the corner about to make games potentially much easier on the CPU.

    • itachi
    • 4 years ago

    These CPU’s are hungry on DDR4 bandwidth, i’d say this needs more thorough testing in this area, not sure if it was like that with X99 but the difference gaps are huge.

      • Krogoth
      • 4 years ago

      Skylake isn’t memory bandwidth starved. Dual-channel DDR4 yields more than enough bandwidth for its IGP even with JEDEC-spec modules.

      The quad-channel DDR4 on Socket 2011 chips are meant for eight-core or more chips where the bandwidth is needed to keep all of those cores happy.

        • the
        • 4 years ago

        Xeon D and its eight cores work fine with dual channel DDR4.

        The reality is that the bandwidth requirements per CPU core haven’t gone up much since Sandy Bridge, when it introduced AVX. However, memory bandwidth has steadily been increasing since then: DDR3-1600 to DDR3-1866 to DDR4-2133 now with Sky Lake. This increase has mainly been to fuel the bandwidth requirements of the integrated graphics on the CPUs.

        For servers, additional memory channels are just as much about increasing memory capacity as increasing bandwidth.

          • tfp
          • 4 years ago

          At the same time 128MB of L4 would be very helpful in a number of situations on the desktop.

            • itachi
            • 4 years ago

            Maybe I exagerated a little but the scale of benefits gain in DDR4 seems much higher than the DDR3 one, at least according to Hardware.fr graph, which was quite killer by the way !

            It would be really nice if we could see a DDR4 roundup/comparison on Techreport to compare the performance in multiple games !

            from 2133 to 3600 Mhz ! 🙂 wonder if RAM is overclockable also, say I get a 3200 kit, think I can push it even further ?

            And say I get a 3200 kit, would I need to push a high OC to reach that speed or is it still doable by reducing rations, and possibly tweaking timings ? never got arround tweaking timings, that’s kind of advanced stuff..

            Also indeed a l4 cache on Skylake would have allowed it to have TRUE supremacy over the broadwell counterpart.. which, needless to say would make alot of sense eh.. altought I just checked prices and I found one at 420 euros.. didnt thought it was so much, also found a 6700k at 360 CHF at a local shop :O so far it’s the less overpriced with amazon.fr also at 377 euros, the rest are way way overpriced over here, 395 euros and such.. silly.

            • Klimax
            • 4 years ago

            At too big a cost, with nontrivial potential for regressions (loss of some L3 for inclusion of the L4).

        • swaaye
        • 4 years ago

        How did you come to the conclusion that the IGP is satiated by the DDR4 bandwidth alone?

          • Krogoth
          • 4 years ago

          Intel IGPs aren’t built for gaming performance. They can do casual stuff just fine, and the DDR3 in the previous Haswell flavors was quite sufficient.

          I honestly don’t get the unhealthy obsession with trying to throw a beefy IGP onto a CPU package.

            • Klimax
            • 4 years ago

            Iris Pro in Broadwell already demonstrates why you are not correct.

            • itachi
            • 4 years ago

            Steal market share from GPU companies? lol, but that doesn’t apply to AMD. Also, I don’t quite get it either; they should make a version with an iGPU and one without, but like someone said, it isn’t as simple as that. Not sure how much die space it takes, but to us enthusiasts and overclockers, it certainly feels like wasted space for CPU transistors!!

            And I don’t think the way they do it matters much. What I mean by that is.. yes, they do it that way because they want an iGPU in there.. if they didn’t.. it would be different, at least that’s how I see things.

            I think they’d be better off manufacturing micro GPUs with half-decent performance, and maybe make it USB 3? Don’t even know if that’s possible — I think I heard of something like that — so the people that need it can get it for $40-60..

            The only thing useful in iGPUs is when you’ve got screen or GPU problems to troubleshoot and you don’t have an extra screen; it came in quite handy at a friend’s once.

        • Ninjitsu
        • 4 years ago

        Performance seems to scale till 2933 MT/s.

          • itachi
          • 4 years ago

          If I go mid-range i5 I might go for 2800 (or, if I can find not-so-expensive 3000MHz 16GB kits with decent timings); if I go high-end i7 I’ll go 3200-3400 eventually lol, makes sense.

            • itachi
            • 4 years ago

            But the more speed, the higher the timings too.. mmh, an extensive test with different CPU-bound games would be really nice!

    • ronch
    • 4 years ago

    Oh boy, my FX-8350 looks pretty bad compared to Skylake, but then it was meant to go up against Ivy Bridge. Should Ivy owners upgrade? If not, then perhaps Piledriver FX owners needn’t either. I think I’ll just hold out until 2016 or so and see what Zen can do. Judging by Skylake though, 40% higher IPC than Excavator may not be enough. Or they need to clock higher than 4.0GHz. Still, by my numbers, Zen should be about 60% faster than Piledriver clock-for-clock (rough math below). That’s a nice bump for Piledriver holdouts like me.

    I hope Zen will be very competitive with whatever Intel will have by the time it comes out, but if it won’t, it’s still gonna be a great upgrade for AMD fans given how it will still be much faster than AMD’s top FX models (even when you count in those silly OC’ed 9590s) and because it will probably have to be priced lower than Intel. I hope reality won’t take this course, however, as it’s only good short term but bad long term.
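
    Rough math behind that “about 60%” figure, for anyone checking my numbers. The Excavator-over-Piledriver factor is my own guess, not an official figure:

        # Compounding rough per-clock gains (both factors are assumptions).
        excavator_over_piledriver = 1.15  # assumed ~15% IPC gain over Piledriver
        zen_over_excavator = 1.40         # AMD's claimed ~40% IPC gain for Zen
        print(excavator_over_piledriver * zen_over_excavator)  # ~1.61, i.e. ~60%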

    • Nictron
    • 4 years ago

    Thank you again for a thorough review. One question I have:

    Clearly AMD shows a latency lag in the tested gaming performance. Does this lag translate the same way to high-resolution, high-detail gaming on, let’s say, a GTX 980 Ti?

    I have an AMD X6 1100T and would like to know whether the primary application on my AMD platform — gaming at 3440×1440 while trying to hold a constant >60 FPS with VSYNC — will be held back by the AMD platform?

    Much appreciated,

      • flip-mode
      • 4 years ago

      Any recent Core i5 or Core i7 is going to be significantly better for gaming than your CPU.

        • novv
        • 4 years ago

        Did you actually test an X6 1100T in games? I guess not, but I’m sure you read a lot of benchmarks. I had a great opportunity to test an Ivy Bridge i7-3770 against an X6 1090T about a year ago. Same video card on both platforms, an R7 260X 2GB GDDR5. And guess what, the AMD system was consistently 5-10% faster in F1 2013 and ~10% slower in GRID 2. But what made me keep the AMD platform was the stability and consistent frames and fluidity in Max Payne 3. But I’m not testing games like TR and any other website. First of all, in the background I have a torrent client and sometimes an encode which I set to use 2 to 4 cores. I will still have at least 2 free real cores for any game. Can you do that with an i5? You can try with an i7 but it’s not going to win, because HT is crap. But I agree that if you sit at home and just work with AIDA64, SiSoft Sandra, Cinebench, SYSmark etc., Intel will always be the better choice. It will make you feel good about the choice you made, giving you charts showing what you want to see.

          • flip-mode
          • 4 years ago

          That’s all been heard before. A die-hard AMD fan will find scenarios in which the AMD processor doesn’t miserably lose to Intel’s. If it’s higher thread counts that you are concerned about you can always jump on Haswell-E.

    • anotherengineer
    • 4 years ago

    A few things
    -Well done review, Scott, very nice. Some minor constructive feedback, or a question if you will:
    for your power/energy tests, I see you using kilojoules; why not, for reader simplicity, use something like watt-hours or kWh? (Quick conversion below.)

    -next – Broadwell I think is going to go down in history as one of Intel’s best paper launches, well at least here in North America.

    -Lastly, that AMD FX. For being a 3yr old chip, on a 4 yr. old design on 32nm and $200 or close to half the price of the i7 6700k, in most of the tests it doesn’t compare too poorly given its handicaps and cost!! Maybe W10 helps a bit??

    Maybe Zen does have a good chance of competing toe to toe with Intel in benchmarks again. However, it’s going to be tough to sell chips in a flooded market with Sandy, Ivy, Haswell, & Skylake, well and Broadwell if we see it?
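
    (On my own units question above: the conversion is trivial either way, since 1 Wh = 3.6 kJ, so it’s really just a presentation choice.)

        # 1 watt-hour = 3600 joules = 3.6 kJ
        def kj_to_wh(kilojoules):
            return kilojoules / 3.6

        print(kj_to_wh(100))  # 100 kJ -> ~27.8 Wh
        print(kj_to_wh(360))  # 360 kJ -> 100 Wh, i.e. 0.1 kWh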

      • anotherengineer
      • 4 years ago

      I think Zen will need to be at least as good as Sandy in terms of IPC, but with power consumption around at least Ivy’s level, and offer a true 6-core like Thuban and even an 8-core for the right price — then they should hopefully sell well.

      Maybe Zen might be more available than Broadwell 10 months from now 😉 😉

      • tipoo
      • 4 years ago

      The AMD FX pushes most games above 60fps if the GPU allows it, but the problem with them is frametimes. They’re decidedly worse than even Intel hardware from a few generations back.

      Not to cherry pick, but this kinda thing
      [url<]https://techreport.com/r.x/skylake/civ-16ms.gif[/url<]
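
      For anyone wondering what that “time spent beyond 16.7 ms” graph is actually adding up, here’s a minimal sketch under my own assumption about the input (a flat list of per-frame render times in milliseconds, e.g. from FRAPS or a similar logger):

          # Sum the per-frame time spent past the 60 Hz budget of 16.7 ms.
          def time_beyond(frametimes_ms, threshold_ms=16.7):
              return sum(t - threshold_ms for t in frametimes_ms if t > threshold_ms)

          frames = [14.2, 15.9, 33.4, 16.1, 41.0, 15.5]  # made-up sample data
          print(time_beyond(frames))                     # ~41.0 ms of "badness"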

        • auxy
        • 4 years ago

        [quote=”tipoo”<]Not to cherry pick but [i<]this one looks good[/i<]...[/quote<]

    • itachi
    • 4 years ago

    Meh, to see it do less than Broadwell in games is such a disappointment.. why they don’t use that famous L4 cache in Skylake is beyond me if it’s that good..

    Indeed, testing CPU-bound games would be smart! Some candidates would be StarCraft II, Arma 3, I think Shogun 2, and also GTA V — I get massive FPS drops and stutter on my FX while my friend’s 2600K@4.4 runs butter smooth… (probably more single-thread bound than CPU bound..) I’m sure we can find more.

    It would be nice to have a scenario where you could really establish the CPU gains accurately..

    I’m thinking first, in StarCraft II you can record a demo of a massive battle; Hardware.fr used to do that.. it’s such a nice idea, and the performance readings are super accurate (due to it being a recording).

    And in Battlefield 4 I think I got the best idea to record FPS drop-ness (invented a word, hehe) which is certainly related to CPU: first use a resolution scale setting of 150 @ 1080p, or just run the game at 1440p or 4K, all details maxed except no MSAA, then go into an empty server, even the test range map, put 6 packs of C4 on easy-to-remember spots on something destructible, like a wall or a house, BOOM, record the fps, rinse and repeat with other CPUs :).

    Also, I use Vsync and I believe it could be what causes these lags, due to all the small particles with effects on ultra — explosions, smoke or fire.

    To be honest I was quite excited to upgrade to Skylake from my FX 8320 @ 4.65GHz (to get rid of that CPU limitation; even in BF4 during explosions my fps drops are dramatic @ 150 resolution scale and R9 290X OC), but now I feel less excited.. sure it’s a good CPU, but the leap in gaming is so small. We can only hope more games like Project CARS benefit from it..

    Also, I’m checking the Hardware.fr review and they compare DDR3 vs DDR4 performance and it seems weird.. the optimum speed seems to be DDR4-2666 because anything lower and you can get better performance with DDR3-2133, which is a bit sad. I’m sure some games will benefit more from the higher bandwidth too, just as some games are CPU bound or single-thread bound.

    Plus it had better be well overclockable — apparently people managed to hit 5GHz. I’m especially thinking of those of us that are big enthusiasts and upgrading from Haswell: if they were at 4.7 and then move to Skylake to then get 4.5, and in the end have the same performance, that would be a bit sad..

    Oh and one last thing, it would have been nice to see some higher resolutions compared with lower resolutions 🙂 (I know, I wanna make you guys work hard lol)

    • TopHatKiller
    • 4 years ago

    Yawn. I should explicate my comment: Yawn.
    There is no data about die size & transistor count, I wonder why? But unless something quite shocking on that front is revealed – where the hell are the power improvements? There don’t seem to be any. Unless new info is revealed over the coming days it looks, as I predicted, like 14nm FinFET isn’t very good at all.
    I’m nuts for saying that – possibly Skylake has a huge increase in transistors – little on/off switches that appear to do… well, bugger-all, it appears. I await more news. And, of course, laughs from the-all-around.
    IPC? Huh! Power? Huh! 14nm FinFET? Huh!
    Yawning fest. :Skyyawn ……….. Intelyawn……..
    Bring out Zen for the love of anything. Competition is sorely needed.
    Yaaaaaaaaaaaaaawn. [Sorry, I fell asleep typing that]

      • chuckula
      • 4 years ago

      [quote<]Yawn. I should explicate my comment: Yawn. [/quote<] I'm sorry, you will now be sued by Krogoth for infringement.

        • TopHatKiller
        • 4 years ago

        Sorry. Apologies to all, but if ‘yaaawn’ isn’t the immediate response, natural to all people, then I wouldn’t know what is. ohh.. thanks for the triple downvote! Tar!

    • NovusBogus
    • 4 years ago

    I have to admit, I really didn’t see them launching the high-end stuff early. So, will I get it? Not sure…I’m one of those happy 2500K owners who hasn’t seen a worthy challenger yet. I might go for an i7, and I do some TrueCrypt stuff so that crypto performance looks real nice, but not sure it’s worth $500+ for CPU/mobo/RAM/possibly PSU.

    • UnfriendlyFire
    • 4 years ago

    For those ho-hum about Skylake, look what Intel has in store for the laptop market: [url<]http://arstechnica.com/gadgets/2015/08/intel-promises-unlocked-overclockable-skylake-cpu-for-laptops/[/url<]

    K-edition CPUs. For laptops. That’s what happens when you spend nearly half a decade focusing primarily on power consumption and beating AMD at their own APU game.

    Fun fact: HP’s Elitebook 840 G1 (i7-4600U, 2.9 GHz at full turbo on both cores) allows CPU overclocking using Intel XTU. A few people on the Notebookreview website were able to increase the CPU’s voltage for a better OC, and one of them got close to 3.5 GHz before chickening out.

    EDIT: The 840 G1, however, cannot deliver enough power to keep the Radeon 8750M running at full speed under load. Notebookcheck noticed that the GPU throttled severely (with micro-stuttering) when running Dota 2 at 72C.

      • brucethemoose
      • 4 years ago

      Top-tier laptop CPUs with unlocked multipliers are nothing new.

      However, more sanely priced, lower power unlocked laptop CPUs are an extinct species. The last chip like that was Llano, and then AMD decided that locking their bulldozer APUs would somehow make them more competitive :/

      The i7-4600U doesn’t have the typical X/”Extreme” branding, which is a very good sign.

      • NovusBogus
      • 4 years ago

      I reeeeeeeaaaaaaalllllly want a quad-core Skylake enterprise class laptop with Iris Pro or Geforce 950M. And aluminum case, because raaaagh metal.

        • UnfriendlyFire
        • 4 years ago

        There are workstations for +$1500 that have the quad-cores and look professional (*cough* unlike Alienware *cough*).

        Other than that, finding a business laptop with a quad-core and a powerful GPU is going to be a tough find.

        • Beelzebubba9
        • 4 years ago

        So you want a MacBook Pro?

    • Ninjitsu
    • 4 years ago

    So Tom’s Hardware got 4.7 GHz (at less than 1.4v, not sure what they used) and 3466 MT/s for ~34 GB/s.

    EDIT:
    [quote<] In stark contrast to our other benchmarks, we noticed significant performance differences between Windows 8.1 and 10. Consequently, wherever these differences were significantly above the margin of measurement error, we included both results in our graphs. And of course, we're using the most up-to-date drivers available for each platform. [/quote<]
    For Adobe CC. May help explain the differences between TR and AT too.

    EDIT2: They’ve also tested the 6600K and the 5820K - and the 5675C!

    EDIT 4: So, takeaways from that review:
    1. THG’s results are in line with TR’s.
    2. SiSoftware Sandra’s Processor Arithmetic benchmark is throwing up some very curious results.
    3. The i5-6600K is very, very potent.
    4. Their CPUs were engineering samples, and their 6700K seemed to suffer from unusually high power consumption.
    5. [b<]Intel GT2 graphics now meet or beat AMD's top-line IGPs.[/b<]

      • chuckula
      • 4 years ago

      [quote<]5. Intel GT2 graphics now meet or beat AMD's top line IGPs.[/quote<] That can't be good for AMD. At least with the high-end Broadwell parts they could [s<]whine[/s<] [u<]put forth a spirited debate[/u<] about the fact that Broadwell is more expensive, but many GT2 Skylake parts won't be that much more expensive than even AMD's APUs.

        • TopHatKiller
        • 4 years ago

        No, they don’t. And yes, they probably will.

    • divide_by_zero
    • 4 years ago

    Excellent review as usual from TR.

    A couple observations:

    1) To the naysayers who are fully Krogoth’d over this release, there’s a potential big performance boost in play here that I haven’t seen mentioned in the review nor the comments. Based on the leaked slides that appeared in the past couple of months, Skylake supports hardware-accelerated H.265/HEVC encoding (quick sketch after this post). Depending on how efficient this is, it could be a huge selling point to certain users. (<looks at his crusty old i3 2100 HTPC which takes its sweet time with transcoding>)

    2) Anyone else find it really strange that Intel is holding back on additional tech details until IDF?? I’ve been following this industry a long time, and I can’t recall a similar situation where a product was launched, review embargo lifted, etc, prior to official details and specifications being available.
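
    On point 1: if the fixed-function encoder is exposed the usual way, the obvious consumer would be an ffmpeg build with Quick Sync (QSV) support. A hedged sketch — the hevc_qsv encoder name is what ffmpeg uses for QSV HEVC, while the file names and bitrate here are placeholders:

        # Sketch: hardware HEVC encode via ffmpeg's Quick Sync path.
        # Assumes an ffmpeg build with QSV support; file names are placeholders.
        import subprocess

        subprocess.run([
            "ffmpeg", "-i", "input.mkv",
            "-c:v", "hevc_qsv", "-b:v", "5M",  # HEVC encode on the iGPU
            "-c:a", "copy",                    # pass audio through untouched
            "output.mkv",
        ])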

      • GreatGooglyMoogly
      • 4 years ago

      Sadly, as it looks right now, HEVC Advance is doing its utmost to completely kill h.265/x265 as a viable format: [url<]http://blog.streamingmedia.com/2015/07/new-patent-pool-wants-share-of-revenue-from-content-owners.html[/url<]

        • divide_by_zero
        • 4 years ago

        Oh jeebus – that looks like it’s going to turn into quite a cluster. Figured that H.265 was going to turn out to be the de facto standard in the years to come, but if things get held up with royalties, lawsuits, and injunctions, I suspect that Google or others will happily fill the void.

        For my needs though, I expect the patent mess won’t change things. (Looking to archive my physical media collection and the space savings vs quality with HEVC is pretty substantial.)

    • Jamannetje
    • 4 years ago

    Good news for AMD. Minuscule upgrades from Intel mean a clear target to hit with Zen. (They just need to deliver that, which may be a problem.)

      • Waco
      • 4 years ago

      A 40% increase over Piledriver with Zen would get them to Sandy Bridge…needing another 30-40% bump to be at parity with the chip [i<]I can buy today[/i<]. So no, not really.

    • hasseb64
    • 4 years ago

    In my world we buy a K-series part to overclock it; what world are you in, TR?
    Shrinkage? For what reason again? Lower consumption? I don’t see it!

    • DrDominodog51
    • 4 years ago

    I will hunt down Broadwell; I will find Broadwell, and I will have no mercy with my overclocking.

    • kleinwl
    • 4 years ago

    If it wasn’t for my ASUS P5Q SE Plus (Q6600 CPU) dying (audio died, boot slowing, etc.), I would probably stick it out for another generation. As it stands, the system has given me 8 years of solid 98+% uptime while running distributed computing through IBM (and some gaming, etc.). It’s too bad that Intel doesn’t release the K versions without integrated graphics. It is just wasted process time.

    • auxy
    • 4 years ago

    Pick up a 5775C if you see one at near-MSRP. They are a limited production run and will be very expensive soon! ( `ー´)ノ

    • windwalker
    • 4 years ago

    Why doesn’t Intel replace the integrated graphics nobody uses with more cache?

      • auxy
      • 4 years ago

      A) it’s not that simple.
      B) it’s not that simple. (other reason.)
      C) LOTS of people — more than use discrete cards — use the integrated graphics. I have a 290X and I still use my Intel GPU.
      D) it’s not that simple. (third reason.) ( `ー´)ノ

        • Airmantharp
        • 4 years ago

        Yup. I make use of integrated video, and the commentary on the [H] review about using the integrated video while overclocking is concerning, given that my 2500k is running a monitor (and has run two) while running at 4.5GHz.

          • auxy
          • 4 years ago

          Hey, can you link to that [H] article? (‘ω’)

            • Airmantharp
            • 4 years ago

            [url=http://hardocp.com/article/2015/08/05/intel_skylake_core_i76700k_ipc_overclocking_review/7#.VcT84vlsZLY<][H]ard|OCP 6700k Overclocking and Conclusion[/url<] Fifth paragraph from the bottom :).

            • Mr Bill
            • 4 years ago

            My A10-7850 build shows TDP throttling in benchmarks. I’m not surprised the I7-6700K would do the same. Would have been nice to see the igp performance in this review. Does Intel make any claims (similar to AMD) vis-a-vis the HD 530 graphics boosting or graphics performance when a discrete card is added to the system? Or, it seems more likely that for the sake of overclocking performance, the HD 530 unit is or can be disabled when a discrete video card is present.

        • windwalker
        • 4 years ago

        That’s just stupid.
        I don’t care that it’s not simple and neither should you.
        Nothing Intel does is supposed to be simple.

          • Andrew Lauritzen
          • 4 years ago

          The point is that your naive ideas about how die space is/can be used are not representative of reality, nor are the vague notions of how it would or wouldn’t affect performance.

            • windwalker
            • 4 years ago

            I don’t care about any of that.

            The bottom line is that current desktop quad core Intel CPUs use more die space for features that customers don’t use than for features that they do use.
            That is stupid.

            • Andrew Lauritzen
            • 4 years ago

            > The bottom line is that current desktop quad core Intel CPUs use more die space for features that customers don’t use than for features that they do use.

            What are you talking about? The GPU is not a huge part of the die on a 4+GT2 config like this. And frankly even if it were, just ignore it – it’s not doing you any harm if you’re not using it. As the comments here demonstrate, lots of people are.

            If you “don’t care” about how actual CPU designs work and the associated trade-offs I might suggest that you’re not really qualified to draw these sorts of conclusions to start with. Not to have to repeat it for a 5th time, but “it’s not that simple”.

            • windwalker
            • 4 years ago

            I don’t care how simple it is or isn’t. It’s not my job.
            I’m the party who pays for a new CPU and so I get to not care about how it gets done or to listen to how “it’s not that simple”.

            Read the original question again.
            It’s not a ridiculous nitpick of the simple presence of the integrated graphics.

            This review has clearly shown that the L4 cache would be a much more useful and impactful use of the die space.
            As such, the question comes naturally. Just as naturally as asking why has the original question triggered such a defensive response.

            • Andrew Lauritzen
            • 4 years ago

            > I’m the party who pays for a new CPU and so I get to not care about how it gets done or to listen to how “it’s not that simple”.

            Absolutely, your job is to decide if what is on offer is worth the cost to you. So is the CPU portion of the 6700K worth it to you, etc? The presence of other stuff on the CPU that you won’t use is irrelevant to that decision.

            > This review has clearly shown that the L4 cache would be a much more useful and impactful use of the die space.

            And that statement itself reveals that you don’t understand the so-called “L4 cache” (eLLC people, please), nor the die configurations, nor die space in general. So just accept that it’s a bad question.

            > Just as naturally as asking why has the original question triggered such a defensive response.

            Because it’s the same old “armchair architecture” that comes up literally *all the time* and when experts actually try to explain why “it’s not that simple”, rather than trying to learn the commenters usually just dig in on their preconceived notions based on an incorrect understanding of the trade-offs.

            If you really do want to understand *why* it’s not that simple we can have that conversation, but so far your replies have all been of the nature of “I don’t give a damn why, I’m obviously right”.

            • Milo Burke
            • 4 years ago

            I’m with Windwalker.

            I wish Intel wasn’t dedicating so many resources to IGPs for enthusiast desktops; no matter how you spin it, with transistor count or with die space or with production costs or with R&D costs, Intel could be more frugally using their resources to provide better performance/$ than they are now with high-end chips that don’t need the full potential of the new IGPs they all come with.

            Even though Windwalker’s question isn’t that technical, it can still be replied to with courtesy.

            I’d be interested in links to experts explaining why “it’s not that simple”. Please share them if you have them.

            • Andrew Lauritzen
            • 4 years ago

            > Intel could be more frugally using their resources to provide better performance/$ than they are now with high-end chips that don’t need the full potential of the new IGPs they all come with.

            No offense, but you simply don’t have the required information to draw that conclusion. You’re welcome to your opinion, but “it’s not that simple”. And this is hardly a chip that you can complain about – it has a *tiny* GPU, as the benchmarks vs. the 5775c here demonstrate. It’s pretty ironic that people are complaining about wanting a 5775c in this article whereas at the time they were complaining about why we’d ever make that chip.

            It is literally impossible to win here guys. Every design point is available to you at this point between the 5775C, 5960X, 4790K, 6700K, etc. If it’s just “I want option X cheaper” that’s not an interesting technical conversation at all.

            > Even though Windwalker’s question isn’t that technical, it can still be replied to with courtesy.

            I think I was pretty courteous compared to his replies (“that’s stupid, I don’t care about that, etc”). You also have to consider the history here – this is not the first time this has come up and it carries the overtone of “XYZ is trying to screw me” under it.

            > I’d be interested in links to experts explaining why “it’s not that simple”. Please share them if you have them.

            A lot of the complexity gets into how chips are designed and manufactured. I’d start with rys’ excellent article on TR a few months back – and you should ask him live on a podcast 🙂

            Beyond that it gets into very technical specifics. For instance, how does one just “add more cache” to a CPU in “the area that the GPU is in”? Cache configurations affect a lot of different things from chip layout to tags to balance of memory latencies and throughputs. It’s not something you just drag a slider around and ship a SKU. For instance, you may have noticed that the chips with EDRAM (eLLC) only have 1.5MB LLC/core rather than the usual 2MB. That is not an accident…

            Furthermore even if you did put down more LLC there, is it going to make XYZ workload faster? Depends entirely on the workload. You can’t draw any conclusions based on the eLLC numbers from the 5775c because that “cache” works in a completely different way and is far larger. Comparing to something more like HSW-E (5960x) would get a bit closer, but obviously that’s far from apples to apples either due to many more cores, threads, bandwidth, etc.

            Suffice it to say there are a million variables that go into making a chip, and they are all inter-related. I am actually not an expert on the architecture side of things, but I do know enough to move me out of the Dunning-Kruger camp on this issue at least 🙂

            • Ninjitsu
            • 4 years ago

            [quote<] It's pretty ironic that people are complaining about wanting a 5775c in this article whereas at the time they were complaining about why we'd ever make that chip. [/quote<] I KNOW RIGHT? I've been thinking the same thing! It's funny. Intel is damned if it does, damned if it doesn't.

            • Milo Burke
            • 4 years ago

            No, I don’t pretend to be an expert. There’s far more that I don’t know than things I do.

            I think it would be fascinating to have a discussion on this topic on the podcast provided the appropriate guest is present. I hope Scott can make it happen.

            Unlike Windwalker, I’m not asking for a direct replacement from IGP to cache. Like Windwalker, I’m wondering if some (not all) of the IGP is overkill (disclaimer: for enthusiast desktop processors, particularly the unlocked ones). A simplistic analogy is how AMD focused on filtering rather than pixel-pushing, contrary to Maxwell, which some find questionable. And maybe as Windwalker should have said, I’m more than happy to read a comprehensive answer of why it’s not a good idea, be it here in the comments or in a linked article.

            Scott mentioned he doesn’t have a die-shot for Skylake yet. Although it looks like a notable percentage of Haswell is used up with IGP: [url<]https://techreport.com/review/24879/intel-core-i7-4770k-and-4950hq-haswell-processors-reviewed[/url<]

            I suppose you can categorize my complaint in the oversimplified crowd of “If Devil’s Canyon had only a tiny IGP, it could have been hex-core!”, which is only inches away from “Haswell-E costs too much!”, and quickly jumps over into “I wish AMD was competitive enough to make Intel scared!”. Nothing new here.

            Although, while we’re on the topic of core counts, Sandy Bridge Xeon topped out at 8 cores, but Ivy Bridge Xeon topped out at 15 cores. I wonder what year Intel’s consumer line will experience such a renaissance in core count.

            • Andrew Lauritzen
            • 4 years ago

            > Although, while we’re on the topic of core counts, Sandy Bridge Xeon topped out at 8 cores, but Ivy Bridge Xeon topped out at 15 cores. I wonder what year Intel’s consumer line will experience such a renaissance in core count.

            That’s really a separate question, and one more driven by market forces. You may note that Xeon cores run at far lower frequencies – that’s also not an accident. If you benchmarked those in games I’m pretty sure Scott’s conclusion would be “these are terrible gaming CPUs, just get a 4790k instead” or whatever.

            If you want *some* more cores *and* higher frequencies, that’s what HSW-E is (and I still argue that the 5820K is not out of the reach of most folks who would consider a 4790K or similar in the first place). Yet the conclusion from Scott’s testing is the same – the 5960X is mostly slightly slower than the 4790K or 6700K in games. So by what magic do you think that adding more cores is going to improve game performance?

            Conversely I’ll get to the crux of the issue: if games were bumping up against the total processing power of the 6700k (or 4770k, or even much weaker quad cores), the 5960x would win. In benchmarks that are stressing the entire CPU, it does win, by a landslide. Thus if you want more cores to be a compelling trade-off over more frequency for gamers, games fundamentally have to do more work, and do more work in a parallel and SIMD manner.

            If you’re more interested in “workstation-style” workloads, HSW-E is a really cheap Xeon…

            Relevant further reading: [url<]https://forum.beyond3d.com/threads/where-are-the-cheap-16-core-desktop-processors.57174/[/url<]

            • Milo Burke
            • 4 years ago

            No, I get it. Games are largely GPU-bound in 2015. And no doubt, Xeons would be pretty poor for games.

            I don’t want more CPU for games, I just want more CPU. My processor chokes when I ask too much of it working with music. Virtual instruments are hungry, and substantial audio processing is tough while keeping latency low. Not everybody works on music, but I’m sure other people with other workloads would appreciate more cores.

            I suppose pickup trucks are a fair analogy. For all I know, everybody buying the top of the consumer line is whining that the towing capacity is too low even though most of them never tow anything. And those that really need more towing capacity can step up to something more commercial at a cost. In this analogy, I actually do tow things, just not the same load most others tow.

            I’ve always been thrifty, but I think I need to give the Skylake equivalent of the 5820K a hard look when it comes out. 6+6 should be able to pull a lot more weight than the 4+0 of my 3570K.

            I’ve enjoyed chatting, but my stomach is rumbling. Gotta find a podcast to host quick so I can munch with an audience.

            • cegras
            • 4 years ago

            So you say:

            “If it’s just “I want option X cheaper” that’s not an interesting technical conversation at all.”

            Then you say:

            “That’s really a separate question, and one more driven by market forces.”

            Which one is it? There are people here saying that they are not willing to purchase a product because the value proposition isn’t good enough, so you are telling us we are naive to expect such a product and should appreciate it with our money. Then you say that there *is* demand for a GPUless product, and therefore, intel creates such a product in the server space.

            • Andrew Lauritzen
            • 4 years ago

            > There are people here saying that they are not willing to purchase a product because the value proposition isn’t good enough

            That’s exactly what I said in the above post. That is precisely what each person should be doing for their unique use cases.

            > so you are telling us we are naive to expect such a product and should appreciate it with our money.

            I’m saying the notion that having a separate SKU without an iGPU for people who don’t want that would make it cheaper overall – either to produce or for consumers – is naïve. Obviously judge the end product and whether it is worth your money, but let Intel deal with whether or not it’s cheaper to give you a free GPU that may or may not remain off most of the time or not.

            As has been stated elsewhere in the thread, there are *many* more people that use the iGPU – even on the K SKUs (which are obviously not completely different designs or anything too) – than otherwise. This should not be a surprise to people… turns out there are many more uses for computing power than just gaming 🙂

            Anyways, this point has been beaten to death. I still claim that all of the relevant design options are available to people today, and if you want an iGPU-less CPU (since its presence offends you, or whatever) with more cores, the 5820K is a fantastic choice.

            • windwalker
            • 4 years ago

            The insistence to keep battling the same straw-man is fascinating.
            Nobody mentioned price or cost before you brought it up.
            The question is not about the monetary but about the opportunity cost of including the iGPU.

            • Andrew Lauritzen
            • 4 years ago

            If price or cost is irrelevant, the answer is Xeon or Haswell-E…

            • windwalker
            • 4 years ago

            Are you able to have a conversation without putting words in your interlocutors’ mouths?
            I have humoured your straw-man battling activities long enough.
            Answer the question or go away.

            • cegras
            • 4 years ago

            Look, this will probably never happen, but that’s because you have not released any information for a comparison. What would be the final cost per tray if Intel made their desktop chips without iGPUs at the same volume as current processors? I’m sure there is some person at Intel who is capable of crunching the numbers, including space saved on the wafer, fab costs, and the like. What could Intel gain in terms of cache and other CPU resources if it used the iGPU space for something else? Engineers are totally capable of making estimates.

            Without a comparison, as well as survey data about the % of people who use the iGPU, you can’t really convince us that it was a good engineering choice to put the iGPU in high-end desktop processors. Furthermore, most of your really high-end products sold for compute end up in number-crunching clusters – and those Xeon chips don’t have iGPUs.

            • Andrew Lauritzen
            • 4 years ago

            > Without a comparison, as well as survey data about % of people who use the iGPU, you can’t really convince us that it was a good engineering choice to put the iGPU in high end desktop processors.

            That’s backwards though. You yourself mentioned that it would be quite a lot of factors and numbers to crunch to determine how to maximize the profit here. As businesses are profit maximization machines at their core, the assumption is that indeed someone is crunching those numbers and making the decisions here.

            Indeed my point is the same as the one that you have eloquently demonstrated: none of us have the data to work this out, and it’s not as simple as doing arithmetic on die space or whatever. Thus I think it’s daft to assume that there’s some conspiracy at play here that is working counter to profit vs. the much more logical solution that Intel already has run these numbers and picked fairly optimal strategies.

            So just be happy with/ignore your free GPU and stop worrying about how much it costs Intel to put it on there just so that you can turn it off. Let Intel worry about that.

            • cegras
            • 4 years ago

            I’m not assuming any conspiracy. Rather, I’m thinking that it’s simply some sort of mobile-focused decision. You can admit that instead of telling us that we are unable to appreciate the work and constraints that go into the product. And the data is actually right there – all we need to do is look at the Xeon lines.

            • Andrew Lauritzen
            • 4 years ago

            > You can admit that instead of telling us that we are unable to appreciate the work and constraints that go into the product.

            You act as if I make those sorts of decisions or have any special insight – that is not the case. That sort of thing is not really related to my job at all.

            In any case I agree that the Xeon lines are a good example and incidentally there is a great enthusiast option spun off of them: Haswell-E. Again, if what you want is more cores and no IGPU there are several options for that.

            • windwalker
            • 4 years ago

            Nobody asked for more cores or a bazillion PCIe lanes or whatever other silliness X99 is about.
            I asked about replacing something useless with something that has been proven to be useful.
            Answer that question, not the questions your marketing department wishes were being asked.

            • cegras
            • 4 years ago

            Thanks for the chat :).

            • lycium
            • 4 years ago

            > “If it’s just “I want option X cheaper” that’s not an interesting technical conversation at all.”

            Except in the case of AVX-512 being limited to a handful of high-end Xeons :'(

            • windwalker
            • 4 years ago

            The presence of the integrated graphics is easy to ignore when there is no alternative use for the space it occupies.

            I’m just a customer, which means I’m not supposed to understand implementation details.
            The question is very simple and clear and it deserves an equally simple and clear answer.
            Nobody has bothered yet to try to provide one.

            How can you possibly claim that “I don’t give a damn why” when that is the only question I have asked?
            What I don’t care about is how hard it might be to do but most replies are all about exactly that.

            • brucethemoose
            • 4 years ago

            [quote<] The bottom line is that current desktop quad core Intel CPUs use more die space for features that customers don't use than for features that they do use [/quote<]

            There’s the issue. I think you’re dramatically overestimating the DIY PC market… We’re a tiny, tiny sliver of Intel’s total sales, and the consumer chips don’t have enormous profit margins like the Xeons.

            It simply isn’t profitable to pay a bunch of engineers to create a new LGA 1151 chip for a tiny slice of customers when existing LGA 1151 designs are adequate, and when HEDT chips will eventually exist for users who want more without an IGP.

            If you pay Intel hundreds of thousands of dollars for a custom SKU, they’ll happily make it for you… Otherwise, they won’t.

            • windwalker
            • 4 years ago

            I think you’re wrong.
            Look at: K-series CPUs, Socket 2011 CPUs, motherboards with special colour schemes, cases with windows, water-cooling. All of this stuff and much more exists and is profitable because the market is large enough.
            The consumer chips may not have the largest margins but their large volumes are essential for making the fabrication economical.

            • brucethemoose
            • 4 years ago

            2011 CPUs are just cut down Xeon E5s, and unlocked consumer socket CPUs are just unlocked laptop chips. Neither costs much for Intel to make, as they’re existing designs with some fused transistors and a different box. Motherboards are cheaper to design and test than CPUs, and I believe profit margins are pretty high with the more expensive ones. Same story with consumer fans, cases, peripherals etc, as there’s no huge multi million dollar flat cost like you have with a CPU design. Liquid AIOs are cheap to produce and are used by PC OEMs as well, and custom water cooling is VERY low volume.

            Now, ask yourself… How many more customers would choose to buy a LGA 1151 Core i5/i7 if there was a version with more cache instead of an IGP? 1%? Maybe 10%? One could argue that more cores, a wider bus, and better I/O would attract more customers in the same amount of die space, and that’s exactly what LGA 2011 is for.

            • windwalker
            • 4 years ago

            That makes sense: the choices are driven by costs and the performance is enough.

            I believe over 90% of Core i5/i7 unlocked CPU and over 50% desktop Core i5/i7 CPU buyers would take 10% performance increase over integrated graphics.

            • Waco
            • 4 years ago

            You’re implying that removing the iGPU would give a 10% performance increase (through magic I guess).

            You’re wrong.

            • Andrew Lauritzen
            • 4 years ago

            Yeah that’s a completely absurd notion. It would be no faster with the iGPU removed. It’s even possible that it would have slightly worse thermal behavior (less ability to overclock) due to the smaller die.

            Entire architecture changes typically give you in the range of 10% per clock, as the numbers in the article clearly demonstrate. How on earth you think there’s another 10% just sitting around that people are somehow too lazy to harvest is beyond me.

            • windwalker
            • 4 years ago

            By reading the review I got the impression that there’s another 10% just sitting around.

            • Andrew Lauritzen
            • 4 years ago

            [Edit – posted to wrong subthread]

            • windwalker
            • 4 years ago

            The benchmark results in the review imply that the L4 cache improves performance.
            Through magic you might stop trolling.
            You’re just trying to pick a fight.

            • Waco
            • 4 years ago

            The L4 cache is not part of the CPU die. Your logic makes no sense.

            Even IF you converted the space allocated to the iGPU to cache, I hope you realize that increasing cache size is not as simple as “more cache == faster”. In many cases, more cache can be slower, there’s additional logic required for it, and cache is usually the first part of the die to be defective (because of the density and small feature size) so yields would also drop.

            So no, your argument holds no water.

            • windwalker
            • 4 years ago

            So move it there.
            It’s not my job to know how to make it happen. That’s why I pay $300 for $3 worth of silicon.

            • Waco
            • 4 years ago

            You really have NO IDEA what you’re talking about.

            • cygnus1
            • 4 years ago

            Buy a socket compatible Xeon. No iGPU. Problem solved.

            • windwalker
            • 4 years ago

            I believe you mean the Socket 115x Xeon E3s.
            My understanding is that they are based on the same design with the IGP present but disabled.

            • DrDominodog51
            • 4 years ago

            Some actually have an iGPU (the P4600 iGPU).

            • maxxcool
            • 4 years ago

            Over 60% of ALL Intel desktops ONLY use integrated graphics. Such as the workstation I’m typing on, the 4 other personally bought test machines under my desk, and the 1500+ other sockets in the test lab down the hall.

            • windwalker
            • 4 years ago

            Show me the data.
            All of your 1504+ test machines run headless so they are servers, not desktops.

            • cegras
            • 4 years ago

            Intel has a server product that doesn’t have the IGP. I don’t think intel is dealing with physical limitations, but engineering ones that stem from a certain design choice. It’s not like you are being forced to include GPUs in your chips, right?

            Until you are willing to give an actual answer here, telling everyone that they are naive isn’t really doing anything for your position.

        • I.S.T.
        • 4 years ago

        Why do you use your integrated gpu? Heat and/or power reasons?

          • auxy
          • 4 years ago

          Mostly for more display outs. Hehe. Also because hooking displays up to my IGP doesn’t consume any of my precious and “limited” 4GB VRAM. Ahahaha.

          Also, I make HEAVY use of Intel QuickSync video for streaming and recording gameplay.

      • Firestarter
      • 4 years ago

      it’s expensive to manufacture yet another different CPU, and they reckon they won’t sell enough of them at high enough prices to make up for it

        • windwalker
        • 4 years ago

        That can’t be true.

        They decided to make and sell CPUs with Iris Pro graphics, first only BGA and now even LGA.
        Those sell significantly fewer units than high performance desktop quad core CPUs that are almost always paired with a graphics card.

        If Iris Pro CPU units were numerous enough to justify separate development, desktop Core i5 and i7 with no useless junk definitely are as well.

          • swaaye
          • 4 years ago

          Apple sells Iris Pro. I’m gonna guess they do decent volume and Intel finds it motivating.

          You could always buy a -E chip.

            • windwalker
            • 4 years ago

            Yes, they do use it in the 15″ MacBook Pro and the iMac, not Apple’s highest sellers.

            Intel probably sells more unlocked CPUs than CPUs with Iris Pro.

            • Andrew Lauritzen
            • 4 years ago

            Across the board Apple sells the highest Intel GPU configs. They are one of the only people that sell the 15W and 28W GT3’s for instance.

            • windwalker
            • 4 years ago

            I suspect it’s because they try to remove discrete graphics from as many configurations as possible to maximise performance per watt.

            • Andrew Lauritzen
            • 4 years ago

            Yes, but the point swaaye was making is that Apple is obviously big enough that they get to decide on whatever processor configurations they want.

            • windwalker
            • 4 years ago

            Apple sells about 4 million Macs per quarter.
            Do gamers buy so many fewer CPUs than Apple that they can’t influence processor configurations?

            • Ninjitsu
            • 4 years ago

            Well, yes and no. It’s not just volume, it’s also margins and other overheads like R&D. You know why the new Macbook exists? Because Intel gets paid by Apple to develop the motherboard, and when Dell, Apple, Lenovo, etc. have certain platforms in mind, Intel listens.

            You buy [i<]a[/i<] chip. You’re not influential enough to matter.

            Gamers WILL NOT BENEFIT from more than 4 cores at the moment. That’s obvious from benchmarks. As for cache, that’ll increase price. How many will be willing to pay that price? Some, sure, and for them the options ALREADY EXIST NOW. If those do really well, Intel will make more. Heck, the only reason the 5775C and 5675C exist in the first place is because the enthusiast community has been asking for it.

            However, where you’re going very, very wrong is in understanding how important integrated graphics is to Intel, and to consumers. From a marketing standpoint, AMD’s had stronger IGPs, and Intel’s been eyeing that performance crown. Now they pretty much have it. If they keep scaling like this then they can take a cut of the low-end GPU market as well. Heck, I know lots of people who used to game on integrated graphics even back in the Core 2 days. CS 1.6 and FIFA 08 don’t require a lot of GPU.

            I don’t know what kind of answer you’re looking for, but if it’s a market-based one then this is it. All other answers will be technical.

            • windwalker
            • 4 years ago

            I’m looking for any answer that makes sense and doesn’t include lame excuses.

            If the market was significant enough for Intel to listen and make Iris Pro LGA parts then it’s significant enough to deliver other changes as well.

            Integrated graphics are essential for mobile parts and quite welcome for low end desktop parts.
            High end desktop parts do not need integrated graphics.
            Why are high end desktop parts higher core count and clock speed versions of mobile parts instead of lower core count and clock speed versions of Socket 2011 parts?

            • Ninjitsu
            • 4 years ago

            [quote<] High end desktop parts do not need integrated graphics. [/quote<]

            They do. You may have CPU-related compute work to do with a high-end CPU and not need the power of a high-end discrete GPU, or have no GPU-specific code to run apart from the UI. In this case, integrated graphics saves power. Then you have QuickSync, mixed rendering with DX12/Vulkan, OpenCL workloads, etc. Unless you can prove with actual numbers that no one with a mainstream i5 or i7 (which Intel considers mainstream/mid-range anyway) uses integrated graphics, your argument doesn’t make sense at all.

            And Intel DID release a Sandy Bridge part without the IGP (or with it disabled) – i5-2500P or something like that. There wasn’t any that followed. I WONDER WHY?

            Next, it’s expensive to make CPU masks and do production runs unless the volume is in the millions, especially at the scale and technology Intel operates at. They aren’t going to make die masks whimsically. They’ll have done their research, estimation, etc. and decided on a few optimised designs per market that they can [i<]further[/i<] bin and modify.

            [quote<] High end desktop parts do not need integrated graphics. Why are high end desktop parts higher core count and clock speed versions of mobile parts instead of lower core count and clock speed versions of Socket 2011 parts? [/quote<]

            “High End Desktop” according to Intel is Haswell-E and the Xeons. They are exactly that – high-core-count parts without any IGP. They also have different masks, so the 8- and 6-core may get a separate mask, but the 10- and 12-core will get a separate layout and design. The cores are laid out differently, the ring bus is different (the 10-core and higher ones have two ring buses or more, iirc), etc.

            Mainstream parts are meant for mainstream workloads, where the focus is on lower power and efficiency, and workloads rarely scale beyond 4 cores and 8 threads and need higher clock speeds and wider turbo frequencies, in contrast with the high-end and server lines. Look up Amdahl’s Law for more (rough numbers sketched below).

            Anyway, in this market it’s more profitable for Intel to use one quad-core mask and one dual-core mask, with the IGP, which they then manufacture with different constraints – mobile chips are low-leakage, low-voltage and have lower frequencies, while desktop chips are allowed more of all. The cost of all this is power consumption.

            So that’s that. If this answer didn’t make sense, I accept that I’m incapable of further explanation and will not make any further attempt.
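
            (Rough Amdahl’s Law numbers, with a made-up parallel fraction purely for illustration: speedup = 1 / ((1 - p) + p / n).)

                # Amdahl's Law: speedup = 1 / ((1 - p) + p / n)
                def amdahl(p, n):
                    return 1.0 / ((1.0 - p) + p / n)

                for cores in (2, 4, 8, 16):
                    print(cores, round(amdahl(0.8, cores), 2))
                # With p = 0.8: 2 -> 1.67x, 4 -> 2.5x, 8 -> 3.33x, 16 -> 4.0x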

            • windwalker
            • 4 years ago

            People who pay $300 for a CPU can afford another $20 for a discrete GPU they only need to render UI.

            I’m not here to prove anything. If the results of the review are not enough for you to ask some questions then it’s pointless to discuss any further.

            Sandy and Ivy parts with disabled iGPU sold for about the same price as the similar parts with iGPU. Because of that, depending on retailer and availability, the part with iGPU could be had for less.

            I already got the same satisfactory explanation from brucethemoose earlier: cost and good enough performance.

            Maybe in Intel’s wet dream la-la land are X99 and Xeon for high end desktop. In reality, they are for servers and high performance professional workstations.

            • maxxcool
            • 4 years ago

            Simple answer == you do not matter to Intel or AMD.

            They are not developing CPUs for you or me, only for the 99 other sheep in line who do not use a discrete GPU.

            • windwalker
            • 4 years ago

            That’s not really an answer, but a statement of fact that inspired me to ask the question.
            Why is Intel not developing CPUs for gamers?

            • Ninjitsu
            • 4 years ago

            WTF IS A CPU FOR GAMERS? You’re trolling, so so hard at this point.

            • tipoo
            • 4 years ago

            They were pretty instrumental in Intel SKUs with eDRAM at all, weren’t they? And pushing up Intel IGP performance in general. Word is Apple was quite Threatmantic® for that. People will love to poop on them for “die space” (it’s complicated, heh) or whatever else, but I find it nice that the baseline performance of computer graphics is going up, so that the market with “acceptable” PC graphics performance is much larger.

          • Waco
          • 4 years ago

          There are far more high end i5s and i7s running integrated than discrete.

            • windwalker
            • 4 years ago

            Mobile ones, absolutely.
            Desktop ones, I highly doubt it.

            • Phartindust
            • 4 years ago

            Outside of the DIY crowd, most PCs don’t even have a discrete GPU in them. Look at any major OEM PC in the store; you’d be lucky to find more than one system on the shelf with a discrete GPU installed. AND unless you’re doing CAD work, or something similar, you can forget about getting approved for a system with a discrete GPU in it from the boss.

            • windwalker
            • 4 years ago

            Most PCs bought for home or office use are laptops.

            • auxy
            • 4 years ago

            Yeeep. I work for a big-box retailer as part of their technology services sub-brand. (NOT Geek Squad.) Our company, which operates over 1500 stores world-wide, does not sell one single computer with a discrete GPU in any store. (‘ω’)

            • windwalker
            • 4 years ago

            What percentage of total unit sales are Core i5 and i7 desktops?

            • Waco
            • 4 years ago

            You’re wrong, but okay. Most PCs don’t have discrete GPUs at all.

            • windwalker
            • 4 years ago

            Discrete GPUs are about the only reason why anyone would even bother buying a desktop PC.

            • Waco
            • 4 years ago

            You really do love just making things up don’t you?

            • windwalker
            • 4 years ago

            Get a job, go out into the world and you’ll find out what computers people use and why.

            • Waco
            • 4 years ago

            Personal attacks, awesome. I run more machines than you’ve probably ever laid eyes on. Learn to back up your statements and arguments with at least some semblance of evidence.

            • windwalker
            • 4 years ago

            If you’re so sensitive it’s no surprise you don’t get out much.

            • Waco
            • 4 years ago

            Sensitive? Ha.

            Try giving up your charade for a bit and try to learn something.

            • ClickClick5
            • 4 years ago

            I’m proud to be the outlier then! I never turned on my integrated on my 2600K. Not once. Now I do not have an integrated option.

            Although I fully get the point of having integrated graphics. (My HTPC/server is running integrated)

          • Ninjitsu
          • 4 years ago

          All laptop -HQ parts are Iris Pro.

            • windwalker
            • 4 years ago

            Integrated graphics is much easier to justify for laptop parts.

            • Ninjitsu
            • 4 years ago

            It’s much easier to justify in general. And that’s why Intel makes them.

            • windwalker
            • 4 years ago

            It makes sense for i3 and below.

            • Ninjitsu
            • 4 years ago

            To you, maybe.

            • windwalker
            • 4 years ago

            Learn to communicate like a human and cut the passive aggresive act.

            • Waco
            • 4 years ago

            Learn to back up your statements with something that approximates reality. 🙂

            • windwalker
            • 4 years ago

            This is not a court of law.
            I owe you no proof or explanation.
            You are not required to interact with me.

            • Waco
            • 4 years ago

            No, this is not.
            No, you do not.
            No, I am not.

            You, however, cannot expect anyone to listen to you when you make random statements with no backing evidence or information.

            • windwalker
            • 4 years ago

            I don’t expect or even want you to listen.
            If you can’t agree with the basic assumptions of the question I asked you should not get involved.

            • Waco
            • 4 years ago

            The basic assumptions you make are wrong.

            • Beelzebubba9
            • 4 years ago

            Have you noticed you’re such a terrible poster nearly everything you’ve written on this topic has been down voted? Heck, even an Intel engineer came out of the woodwork to tell you you’re wrong and yet you still persist with the same terrible nonsense point.

            The banal arrogance of some people never ceases to amuse.

            • windwalker
            • 4 years ago

            People down-vote because they disagree. It says zero about the quality of the posts.
            Banal arrogance is required to ask why is the emperor naked.

            • Ninjitsu
            • 4 years ago

            Irony.

            • windwalker
            • 4 years ago

            Not at all. I am quite actively aggressive.

            • maxxcool
            • 4 years ago

            ohhh so micro-itx systems should NEVER be quad core parts ? .. got it.

            • windwalker
            • 4 years ago

            Micro-ITX does not exist.
            MicroATX and mini-ITX boards usually have one PCIe x16 slot.

            • smilingcrow
            • 4 years ago

            Few HQ parts are Iris Pro:
            [url<]http://ark.intel.com/products/series/75115/Intel-Core-i7-4700-Mobile-Processor-Series#@Mobile[/url<]
            About 25% of all mobile quad cores.

            • Ninjitsu
            • 4 years ago

            Wow. That’s the most confusing naming scheme ever, then. Thanks.

          • maxxcool
          • 4 years ago

          Developing two completely separate lithography processes and films is HIDEOUSLY EXPENSIVE.

            • windwalker
            • 4 years ago

            Qualcomm and MediaTek deliver many variations of SoC every year with different core and radio configurations and they do all of that on massively lower R&D budgets than Intel enjoys.

            • Ninjitsu
            • 4 years ago

            They are tiny cores.

      • HisDivineOrder
      • 4 years ago

      Because Intel wants these chips to be the mainstream line and as the mainstream line, they want to mass produce a ton of them to get economies of scale on their side. They just took the first batches they produced and slapped them into a box and are selling them to enthusiasts as a way to build up a fervor.

      I’m sure some bean counter looked at the release of Windows 10 as a great way to rile up the enthusiasts and get them on their side. “Studies show that when a new OS releases, people routinely look to upgrade their entire CPU/MB and why not have something out for then?”

      Bada boom, bada bing. Intel’s got a new chip and motherboard with different memory out there for you a week after Windows 10.

      • Bauxite
      • 4 years ago

      Forced bundling due to de-facto monopoly

      • Chrispy_
      • 4 years ago

      Because for every single desktop CPU Intel sells, they sell over nine thousand mobile parts.

      • K-L-Waster
      • 4 years ago

      You don’t really think all those corporate laptops and desktops have discrete graphics do you? Some do, sure, but the vast majority don’t.

      Just because your use case doesn’t call for IGP doesn’t mean there isn’t a market for it.

        • windwalker
        • 4 years ago

        Who buys corporate desktops?

          • maxxcool
          • 4 years ago

          Corporates ….

            • windwalker
            • 4 years ago

            They buy mostly laptops and by a large margin.

            • green
            • 4 years ago

            [url<]http://www.statisticbrain.com/computer-sales-statistics/[/url<]
            Source: Gartner, International Data Corporation
            Research date: January 14th, 2015
            Percent of computers sold for business: 74%
            Percent of desktop computers sold: [b<]81.5%[/b<]
            Percent of laptop computers sold: 16.4%
            Percent of servers sold: 2.1%

            I know: lies, damn lies, and statistics and all, but unless someone else is collecting this kind of data there isn’t much else to go on. They do write “business” rather than “corporate”, but even if the entire 16.4% laptop share went exclusively to corporates, and the 26% of non-business sales were exclusively desktops, you still have a ridiculously massive “business” chunk of sales going to desktops, outweighing all other categories.

      • brothergc
      • 4 years ago

      I agree. What person plunks down 350 clams for an i7 K model and uses integrated graphics? Let’s DUMP the integrated graphics PERIOD on the K models.

        • auxy
        • 4 years ago

        Meeee! (´・ω・`)

      • f0d
      • 4 years ago

      I’m going to take some massively wild guesses at why CPUs (with iGPUs) are like they are.

      #1 If you want more cores, the 5820K is waiting for you. It’s not that much more expensive than the 6700K, IMO, and if you NEED that many threads, the price difference is small.

      #2 Making a CPU with no iGPU would need yet another die/design, since they would still need to make iGPU CPUs for the markets that use them (laptops, basic computers, etc.), and they’d need to justify yet another die (they have a fair few already) to the accountants. It would cost a lot in things like masks.

      #3 More people in the price range of quad cores need the iGPU (and Quick Sync) than need more than 8 threads. Have a look at the performance difference between the 5960X and the 6700K in anything but scientific computing: the 6700K LEADS in most games, and it has Quick Sync for encoding video, which is good enough for most people. Most (not all, but most) consumers who buy CPUs in the 6700K’s price range will not need more than 8 threads.

      #4 Performance is “good enough” on quad cores even for those who need lots of threads. Have a look at a few of those “legacy” comparisons: the 6700K, with its performance improvements, is close to the 4960X six-core from just a few years ago on most of those highly threaded workloads.

      #5 It would eat into their HEDT platform.

      Would I like a six-core on the 1151 platform? For sure I would, but I can see why they haven’t done it yet and why it isn’t as simple as just “adding a few more cores”.

      • Krogoth
      • 4 years ago

      It is because Skylake chips are desktop parts designed for OEM markets. The majority of them are going to end up in OEM systems that have no need for discrete graphics solutions.

      • TopHatKiller
      • 4 years ago

      ‘Cos they are selling them to ‘people’ who do use them. There is no mistake: Intel has a greater share of the ‘GPU’ market than either AMD or Nvidia.
      This is our life.

        • windwalker
        • 4 years ago

        People who buy K series CPUs do not use integrated graphics.

          • geekl33tgamer
          • 4 years ago

          Usually not, but comes in real handy if/when your dGPU throws a hissy and you’re troubleshooting an otherwise blank screen. 😉

            • geekl33tgamer
            • 4 years ago

            Dam downvote trolls.

            • Nevermind
            • 4 years ago

            You’re one of them – probably one of the more obvious ones.

            • geekl33tgamer
            • 4 years ago

            I’m not – GTFO this site and stop trolling all my threads. You’re pathetic and have been doing it for months now.

          • f0d
          • 4 years ago

          I have a 2500K in my HTPC that uses integrated graphics.

          Edit: I have it overclocked to 4.8GHz so I can encode videos faster with Handbrake. It doesn’t need any special graphics, as all it does is play video.

            • windwalker
            • 4 years ago

            That’s hilariously confusing and contradictory.

            • Waco
            • 4 years ago

            Yeah, using his CPU exactly as intended is totally confusing and contradictory.

            Oh, wait, no, that’s exactly what makes sense.

            • windwalker
            • 4 years ago

            What’s crazier: contradicting yourself in a single sentence or pretending not to notice the contradiction out of spite?

            • Waco
            • 4 years ago

            The fact that you think it’s contradictory is hilarious.

          • srg86
          • 4 years ago

          I did. I have a 4790k and use the integrated graphics. I don’t play games but I wanted the 4GHz CPU power for code compiling etc.

          It fits my needs perfectly.

          And no I don’t overclock.

            • windwalker
            • 4 years ago

            That sounds reasonable.
            The standard clock speed of the 4790K justifies its purchase over the 4770.

          • auxy
          • 4 years ago

          Didn’t we conclusively prove you wrong on that point?

            • windwalker
            • 4 years ago

            You can’t prove a question wrong.

          • TopHatKiller
          • 4 years ago

          It’s a discrete chip; they can’t take it out.

      • Unknown-Error
      • 4 years ago

      AMD put a $h!t load of cache in Bulldozer mArch. Total of 16MB (L2 + L3) and look what happened.

        • geekl33tgamer
        • 4 years ago

        Something happened?

      • Gyromancer
      • 4 years ago

      According to GPU market share data, the majority of people use onboard Intel graphics: [url<]http://l2.yimg.com/bt/api/res/1.2/0xVr2KM61Sq_IfewCizncg--/YXBwaWQ9eW5ld3M7cT04NTt3PTYyMA--/https://marketrealist.imgix.net/uploads/2015/03/GPU-share.png?w=620&h=579&fit=max&auto=format[/url<]

        • windwalker
        • 4 years ago

        Find the same chart but limited to desktop Core i5 and i7.

      • maxxcool
      • 4 years ago

      Because unless it’s a *huge* dataset (i.e. SQL, web serving, data mining), you would see -0- benefit.

      Before the poop storm ensues, I would add this: game and app devs do not care about ‘enthusiast chips’. That’s not who they code for, and that’s not *where the money is at*. So even if there were a quad-core, hyper-threaded CPU with 64MB of L3/L4 cache, no one would code for it besides HPC apps, and that includes game devs. Add to that that there is more performance in having ‘more cores’ than more cache, and natural selection murders a high-cache consumer chip even before it’s born.

      In the end, the development $$ go to the baseline, i.e. the everyday specs of your customers’ hardware.

      The same thing goes for cores: 8 ‘real’ cores would be wasted in the consumer space for 90%+ of the world, so nobody’s going to build that until there is real demand for that size of consumer chip. And it would be a VERY LOW margin chip, so there’s no real incentive.

      Quad core is fine. What we need is better threading from the OS, and DRIVERS…

        • windwalker
        • 4 years ago

        There is nothing theoretical about the benchmark results in this review.

          • maxxcool
          • 4 years ago

          The word ‘theoretical’ does not appear in my text.

            • windwalker
            • 4 years ago

            But the word “would” does appear repeatedly.
            My context is this review. I’m not interested in discussing other scenarios exactly because people ramble endlessly.

            • maxxcool
            • 4 years ago

            Since you can obviously develop CPUs better, you should apply at AMD and see if you can save them.

            • windwalker
            • 4 years ago

            Come back after you wipe your tears.

      • TwoEars
      • 4 years ago

      This comment section is insane.

        • swaaye
        • 4 years ago

        This happens with every CPU and GPU review.

          • auxy
          • 4 years ago

          Yah, it does. Go check out the FX-8350 review!

          I think the Titan review comments got pretty heated too.

            • Ninjitsu
            • 4 years ago

            Fury X…

            • swaaye
            • 4 years ago

            Yeah GPU launches really kick up the nutty.

      • green
      • 4 years ago

      because predicting the future is hard

      if a game developer found a way to offload physics, ai, and/or some other kind of general computation to the igp, resulting in an overall gaming performance improvement of 10-30%, people would complain about intel igps being underpowered. but then why would a game developer bother trying such things if the majority of chips in their market segment don’t have an igp in the first place?
      intel has, for a number of chips targeted at certain segments, been betting that the igp will be used for general compute.
      with eyes on open compute programming (e.g. OpenCL), putting an igp on die/package and seeing what might come of it is a stable (as opposed to risky) decision, especially if the lower-binned variants can be re-purposed for laptops or htpcs (rough sketch below).
      amd had taken a similar strategy, hence why acquiring ati seemed a sound choice.
      intel are also still trying to figure out how to sell newer and faster chips with the various upcoming developments in VR / augmented-reality headset stuff.

      given this is the first in the series of chips being released, i would hazard intel is testing the waters to see what variants and market segments they could be targeting (i.e. a large cache version, a large igp version, more integrated components, fewer i/o interfaces, etc.)
      intel could release any number of variants, but it is significantly easier, [b<]cheaper[/b<] and faster to do so when you’ve built in something that can be “disabled” than to go back to near the start of the chip design process and add something that was never there
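
      to make the “general compute on the igp” idea concrete, here’s a minimal sketch of the sort of data-parallel kernel a game could push to the integrated gpu via OpenCL (just squaring an array). pyopencl, numpy and an Intel OpenCL runtime are assumed to be installed; this is an illustration, not anything intel or game devs actually ship:

      import numpy as np
      import pyopencl as cl

      # set up a context/queue; on an igp-only box this lands on the intel gpu
      ctx = cl.create_some_context()
      queue = cl.CommandQueue(ctx)

      data = np.random.rand(1_000_000).astype(np.float32)

      # trivial stand-in for a physics/ai-style kernel
      prg = cl.Program(ctx, """
      __kernel void square(__global const float *in_buf, __global float *out_buf) {
          int i = get_global_id(0);
          out_buf[i] = in_buf[i] * in_buf[i];
      }
      """).build()

      mf = cl.mem_flags
      in_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=data)
      out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, data.nbytes)

      prg.square(queue, data.shape, None, in_buf, out_buf)

      result = np.empty_like(data)
      cl.enqueue_copy(queue, result, out_buf)   # blocking copy back to the host
      assert np.allclose(result, data * data)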

    • southrncomfortjm
    • 4 years ago

    I guess my question in response to all the yawning is: what else could Intel be doing that they aren’t?

    Basically, do we believe Intel is dragging its feet, or have they really gotten to the point where it is unreasonable to expect massive gains from them? I’m guessing it’s the latter with a dash of the former, since process improvements and shrinks are getting difficult, but then again Intel has negligible competition from AMD in the desktop and laptop realm.

      • Airmantharp
      • 4 years ago

      Rather than work toward gains in desktop computing- which isn’t in demand, as the benchmarks clearly show, regardless of who makes the CPUs- Intel is working toward maximizing both platform efficiency (for mobile) and task efficiency (for servers).

      Their target is ARM, not what’s left of AMD.

        • w76
        • 4 years ago

        Not really fair to say it’s not in demand; I’d love more power to throw at data crunching, but they seem to cover that with Haswell-E and what’ll eventually be Skylake-E, not these regular consumer chips. They will give me what I want, just charge me a kidney for it.

          • brucethemoose
          • 4 years ago

          If you need more oomph, Intel already sells unlocked 14-core Xeons. Unfortunately, you don’t have enough internal organs to afford one.

            • Milo Burke
            • 4 years ago

            Would you please link us to the diagram of the anatomy of one who does?

            • brucethemoose
            • 4 years ago

            For one Xeon E5-1691 V3?

            [i<]Looks at chart[/i<]

            Do you happen to have any children?

      • odizzido
      • 4 years ago

      They could drop the IGPU and sell their processors for less money. They won’t, but it’s something they could be doing that they aren’t.

        • brucethemoose
        • 4 years ago

        That’s what HEDT is for.

        And why would they drop prices? They aren’t a charity, they’re a company that exists to turn a profit 😛

      • Milo Burke
      • 4 years ago

      They could crop 90% of the IGP for the high-end chips, since most owners either aren’t playing games or have a dedicated GPU.

      And they could equip i5’s as quad-cores with HyperThreading, and i7’s as hex-cores with HyperThreading. I’d buy a hex-core Devil’s Canyon equivalent today if it were for sale.

        • NovusBogus
        • 4 years ago

        I’d love to see Iris migrate to i3/Pentium class chips where it might actually get used, but including the regular IGP on everything probably doesn’t have much impact on price or performance when measured against the overhead of additional design/manufacturing/QA/etc. processes.

        • cobalt
        • 4 years ago

        [quote<]And they could equip i5's as quad-cores with HyperThreading, and i7's as hex-cores with HyperThreading. I'd buy a hex-core Devil's Canyon equivalent today if it were for sale.[/quote<]

        You get all my thumbs for that statement. (Gosh, that sounded better in my head.)

        It’s been 10 years since Intel introduced their (first?) quad-core desktop CPU, the QX6700. I think they should feel comfortable moving past that now. [url<]https://techreport.com/review/11160/intel-core-2-extreme-qx6700-processor[/url<] (Was going to find a link to AnandTech, but wanted to survive to the morning.)

        (Speaking of which, I didn’t remember the nomenclature of the QX6700 until just now. 6700K sounds slightly more familiar all of a sudden.)

        • brucethemoose
        • 4 years ago

        … Why haven’t you bought a 5820k?

        That’s just rebranding. Intel already has higher-end CPUs without an IGP, quad/hex cores with Hyper-Threading, and so on. All you’re asking them to do is change the name.

          • f0d
          • 4 years ago

          Just checked the prices in the US (because prices there are different to here in Australia):

          5820K: $389 [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16819117402[/url<]
          4790K: $339 [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16819117369[/url<]

          Motherboards do carry a slight premium as well, but really that isn’t much of a difference for what you get (more PCIe lanes / more memory channels / more CPU cores). Are people expecting to get the six-core for the price of a four-core? Well, that’s not going to happen.

        • Klimax
        • 4 years ago

        They’d lose millions on those chips without an IGP (the market is far too small to warrant an extra mask set and extra production runs), and hex cores wouldn’t get you anything, just wasted space and a loss of frequency.

        Get HEDT if you really want cheap Xeons…

        • TopHatKiller
        • 4 years ago

        Believe me, if Zen works, in the future you’ll be able to.
        Intel’s pathetic development in ‘performance desktop’ is solely because of the lack of competition from AMD. If Zen was out now, that haswell-e hex or octo core on the dearer platform would magically appear as the mainstream desktop solution.
        Intel laughs, you swallow it, AMD does bugger-all and bleeds share price.

          • Klimax
          • 4 years ago

            Nope. All of the remaining IPC upgrades are massively expensive from a power/heat perspective and couldn’t be used at all for mobile/AIO builds. And other non-Xeon markets don’t have the necessary volumes to pay for masks.

            • TopHatKiller
            • 4 years ago

            I’ll explain: it has nothing to do with IPC or architecture. If AMD had competitive chips out, the “Haswell-E and platform” would be mainstream, relegating “Haswell and platform” to the low end and mobile. Intel’s margins have been increasing massively over the years.

      • TopHatKiller
      • 4 years ago

      what else could Intel be doing that they aren’t?

      Design a new architecture. Not one ‘borrowed’ from the P5 of yesteryear, but an actual new approach. They have the engineers and the resources, but there has been nothing on that front since the P7 debacle.
      Intel must have the ability and time to push the architecture forward, but the company is too obsessed with profit in the short term, letting future growth go hang itself.
      [or hope AMD can’t deliver anything much: and ARM too.]

      edit: forgot, just like Intel did, the attempt at a ‘MIC’ approach with Larrabee.

        • Waco
        • 4 years ago

        You really have no clue what you’re talking about. Intel has steadily pushed performance up, power down, and features up at the same price point.

        You seem to think that these new architectures…aren’t “new” or something. It’s obvious you know nothing, I’m not even sure why I’m taking time to point this out.

          • TopHatKiller
          • 4 years ago

          Because you want to be loved? Because you want affirmation for your own opinion?
          Skylark is not new you…you… [Sigh]

            • Waco
            • 4 years ago

            It’s like you don’t comprehend what you’re posting or something. You seem to think Intel is not developing new architectures (which Skylake is). You seem to think they’re sitting on their laurels and leaving performance on the table. You seem to think you have any clue how CPUs work.

            You “think” a lot, but nothing you’re thinking meshes with reality.

            • TopHatKiller
            • 4 years ago

            I comprehend fine, sweatie. You just insist on believing nonsense. Go ahead – by the by saying someone “thinks” a lot is not an insult.

            • Waco
            • 4 years ago

            “Thinks” was in quotes for a reason.

        • srg86
        • 4 years ago

        Larrabee was P5 based.

        Current CPUs inherit from P6.

        P7 was Itanium; if you were thinking of NetBurst, that was P68.

          • TopHatKiller
          • 4 years ago

          Trueish, except Larrabee was a MIC – a genuine attempt at a new architecture. A failure, to date, obviously, but at least they tried.

            • Klimax
            • 4 years ago

            Larrabee was only a GPU; it never targeted the CPU side. (Completely different requirements and restrictions.)

            • TopHatKiller
            • 4 years ago

            Intel might have you killed if you repeat that- say after me – the larrabeeeeeeee project was never meant as a gpu, only a cpu co-processor.

            • Klimax
            • 4 years ago

            Any evidence for your bloody assertion? I don’t think you can find any reputable source backing you up.

            Larrabee as named was a GPU; it was Intel’s attempt at competing with ATI and Nvidia, and it is suspected to be one of the direct causes of the AMD/ATI merger. Only after it became Knights <something> did it become a coprocessor.

            • TopHatKiller
            • 4 years ago

            My assertion was not bloody. I think. Evidence for my ‘assertion’ – that would be basically everywhere, and if you doubt that, ask Intel. You’ll see the short shrift you would get there.
            Also, check dates. ‘Larabeeeeee causing the merger’. Well, check dates.

        • f0d
        • 4 years ago

        #1 Skylake is nothing like P6. Sure, it evolved from there, but pretty much nothing from P6 is still in current processors.

        #2 If it ain’t broke, don’t fix it; why reinvent the wheel? The two times AMD and Intel tried completely new architectures are when they failed: P4 was a different architecture and Bulldozer was also a different architecture, so why should they try to design something new again?

        #3 You don’t seem to be critical of AMD’s Bulldozer>Excavator, which has shown even less evolution than Sandy>Skylake, and they didn’t even bother releasing any improvements on the performance desktop (only APUs).

        #4 What’s wrong with profit? It just means they are doing something right; if they didn’t make any profit, they would be doing something wrong.

        #5 Just looking at the legacy processor chart shows that Intel has been improving: the 6700K or 5775C is close to, and sometimes beats, the six-core Extreme Edition 3930K from just a few years ago. You get similar performance to a six-core EXTREME CPU from just a few years ago on a quad core in MULTITHREADED applications; how is that not progress?

        #6 Improvements have come in leaps and bounds on the IGP side of things, so much so that Intel’s 5775C is faster than any APU from AMD, and the 6700K is knocking on the door of AMD’s best IGP.

        After all that, I have to say I WANT AMD to improve, just so they give me another option, but I’m not going to give them a free pass on every failure of the last almost 10 years (since Conroe) on the CPU side of things because of it. The second they actually make something great that’s better than Intel’s equivalent, I will be first in line to buy it (like I was with A64s and the era when Intel made crappy P4s), but hopes and dreams of the future don’t give me great performance NOW, which is all I care about.

        I don’t care about the politics or rumors surrounding processors, or what fans and rumors say may or may not happen, or what some people say some company should or shouldn’t do. What I care about is what gives me the best performance NOW, and when AMD does that, I will buy their stuff.

          • TopHatKiller
          • 4 years ago

          That’s lovely, just a quick reply: You can’t assume I’m not critical of 15h. I didn’t say anything about them – why should I? They can’t even be assed to release anything so I can’t be assed to comment.

    • marraco
    • 4 years ago

    Still in the same league as 2008 processors.

    • I.S.T.
    • 4 years ago

    Question: why is the 6700K slower than the 4790K on 7-Zip decompression? It doesn’t really make much sense. The 4790K’s 200MHz boost clock advantage can’t be kicking in, given that it’s a multi-threaded app…

      • auxy
      • 4 years ago

      7-Zip only really cares about two things: CPU ALU throughput and memory subsystem performance. The 6700K has slightly higher latency and slightly lower clocks than the 4790K, and I suspect that is making all the difference in the world. (‘ω’)

        • chuckula
        • 4 years ago

        It is a little odd that the [i<]compression[/i<] side sees a performance boost on the 6700K while the [i<]decompression[/i<] side sees the performance loss. What's even weirder is that the compression is typically considered the "harder" process that has lower performance.

          • Ninjitsu
          • 4 years ago

          And in the same vein it’s worth noting that the FX-8350 does much better too.

          I suppose decompression stresses different components that Intel deems less important (and AMD classically thought was more important).

            • I.S.T.
            • 4 years ago

            I’d chalk that one up to craploads of threads. It’s an integer heavy application, IIRC, and will use all eight int “cores” on the FXs.

          • Waco
          • 4 years ago

          Decompression is a lot more biased towards good integer throughput and good caching. Compression is typically MUCH more difficult depending on the algorithm. Speculative execution and not missing branches plays a big part in most compression schemes.
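
          For anyone who wants to poke at this on their own box, here’s a rough sketch that times the two phases separately using the 7z CLI (assumed to be on the PATH; testdata.bin is just a placeholder for whatever file you want to test with). 7-Zip’s built-in benchmark, 7z b, also reports compression and decompression separately.

          import os
          import shutil
          import subprocess
          import time

          def timed(cmd):
              # run a command and return elapsed wall-clock seconds
              t0 = time.perf_counter()
              subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
              return time.perf_counter() - t0

          os.makedirs("out", exist_ok=True)
          compress_s = timed(["7z", "a", "-mx=5", "test.7z", "testdata.bin"])
          decompress_s = timed(["7z", "x", "-y", "-oout", "test.7z"])
          print(f"compress: {compress_s:.2f}s  decompress: {decompress_s:.2f}s")

          # clean up the archive and the extracted copy
          shutil.rmtree("out")
          os.remove("test.7z")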

        • TwoEars
        • 4 years ago

        (‘ω’)

        Looks like a girl with a giant camel toe, or maybe a set of balls. Is that what it’s supposed to be?

    • LoneWolf15
    • 4 years ago

    Great article, Scott.

    I continue to find it ironic that what I really want at this point is what I can’t get. I want an i3-class desktop CPU with Iris/Iris Pro class graphics, to upgrade my HTPC.

    Putting a top-end graphics chip in an i5 or i7 seems a little silly when chances are that sort of user is going to add a graphics card, or if not, be perfectly happy with Intel’s mainstream. On the other hand, putting excellent integrated graphics in a modest package seems like the perfect combination for midrange desktops and media center type devices.

      • EndlessWaves
      • 4 years ago

      Ironic? It’s just a business practice, Intel are trying to get you to spend more.

      It’s like the Pentium Anniversary Edition. From a user perspective the best chip would have been a dual core with all the trimmings (AVX, Hyper-Threading, Turbo, etc.), but Intel chose to unlock only the cheapest and most stripped-down chip (DDR3-1333!).

      I’m fully in agreement with you. An i3/i5 dual core with the rumoured GT4e graphics for the price of a quad core i5 would be a great chip for me, especially if it was unlocked too. A nice compact setup that I can fit in a sub-5L case with enough oomph to run 99% of games that are available DRM/Adware free.

      Still, I’m sure AMD will manage to match Intel in the disappointment stakes. Anyone willing to take a bet that it’ll be at least a year before we see single slot/half height Adaptive Sync?

    • GreatGooglyMoogly
    • 4 years ago

    Thanks for the succinct review! Not surprised with yet another lacklustre mobile-first-driven architecture.

    Disappointed not to see a 5820K processor in the benchmarks. I’ve been debating upgrading my 4-year-old 2600K system to either the 5820K or 6700K. I do a lot of DAW work, music production, and lots of embarrassingly multithreaded CPU power is what I want. The 5960X is cool and all, but in the end it’s just a curiosity.

    It would be nice if you guys could add DAWbench to your benchmarking suite…

    “Also, I may have made a mistake in switching to GeForce graphics cards on our CPU test rigs.”

    Definitely not a mistake. The vast majority of people use Nvidia GPUs, so to me it makes sense to test with that—making it more real world-like.

      • auxy
      • 4 years ago

      His point was that the finely-tuned Geforce driver helps to mask differences in CPU performance. ( `ー´)シ

        • GreatGooglyMoogly
        • 4 years ago

        Yeah, of course I got that. I still think that switching to a GPU few use would result in a very unhelpful test.

      • Aquilino
      • 4 years ago

      “Disappointed not to see a 5820K processor in the benchmarks.”

      This ^

    • Ninjitsu
    • 4 years ago

    So AnandTech’s review is…odd. Very in-depth, but odd.

    So the first point is that IPC for the 6700K seems [i<]less[/i<] than Haswell’s. Then, the 6700K + DDR4 seems to be taking a performance hit (avg FPS) in games, especially when paired with a GTX 770. The 6600K seems less affected. The 5775C and 4790K results seemed in line with TR’s.

    I suspect it’s because they’re using a sample DDR3L/DDR4 board for the 6700K, but a DDR4-only board for the 6600K. They’re also using Windows 7, with an unspecified driver version. Do Win 8.1 and Win 10 support Skylake/DDR4 better, and/or vice versa?

      • yuhong
      • 4 years ago

      They killed the EHCI controller which Win7 depends on of course (only XHCI is supported now). There is PS/2 emulation in most Skylake BIOSes, but that of course only works for keyboard and mouse. Personally I am surprised that they killed it that soon.

    • Bensam123
    • 4 years ago

    Nice legacy comparison, kudos on that.

    I’ve asked before, but I still think it would be nice to have some streaming benchmarks: how well the chip runs a game and encodes at the same time, say with something like OBS at 720p@60fps on the veryfast encoding preset (which is common).

    Streaming isn’t going away, and it’s very applicable. It’s also a way for these processors to stretch their legs in different ways than we’ve already seen. I [i<]still[/i<] haven’t seen a website test streaming in any meaningful form.

    That being said, it’d also be great if there were a newer ‘hardware encoder’ roundup following up on what Cyril did some time ago. I’m sure I’m not the only one interested in the hardware encoders on these chips. Those can also be used for streaming.
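
    In the meantime, here’s a rough sketch of the software-encoding half of that test: it checks whether a machine can push 720p60 through x264 veryfast in real time, using ffmpeg’s synthetic test source. It assumes ffmpeg with libx264 is installed; a real OBS run would add game load, capture, and compositing overhead on top of this.

    import subprocess
    import time

    DURATION = 30  # seconds of synthetic 720p60 video

    cmd = [
        "ffmpeg", "-y",
        "-f", "lavfi", "-i", f"testsrc2=size=1280x720:rate=60:duration={DURATION}",
        "-c:v", "libx264", "-preset", "veryfast", "-b:v", "3500k",
        "-f", "null", "-",
    ]

    t0 = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    wall = time.perf_counter() - t0

    verdict = "real-time capable" if wall < DURATION else "too slow for live streaming"
    print(f"encoded {DURATION}s of 720p60 in {wall:.1f}s ({verdict})")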

      • Bensam123
      • 4 years ago

      Negative votes, why?

        • TopHatKiller
        • 4 years ago

        Your point is fine. But… you’re asking about negative votes? Bloody hell, I could tell you a story or three…

        • maxxcool
        • 4 years ago

        I imagine you have 6 very dedicated h8ters … I thought it was a reasonable request…

          • Bensam123
          • 4 years ago

          Ahhaha, you flagged yourself by offering empathy. Shame on you.

            • maxxcool
            • 4 years ago

            meh

      • derFunkenstein
      • 4 years ago

      I don’t know this answer, so it’s an honest question: with so much emphasis on video encode/decode on graphics hardware, wouldn’t this make it more of a GPU concern than a CPU one?

        • TopHatKiller
        • 4 years ago

        As far as I understand… gpu & cpu interrupts interfere, possibly resulting in an unbalanced test if both aren’t considered.

          • Klimax
          • 4 years ago

          Not really. Almost everything is done in DPCs, and interrupt handlers are kept as tight as possible (often written in asm). Also, interrupts are relatively rare compared to the rest of the computational load.

        • Bensam123
        • 4 years ago

        I don’t know of any GPU encoders as such. There is CPU encoding, and then hardware encoding on the dedicated encoder blocks: NVENC, Quick Sync, and VCE.

        It would be nice if TR revisited the hardware encoder comparison to see if things have improved from the manufacturers, but the initial post was more about seeing how well the processor handles the encoding workload and the game workload at the same time.

          • derFunkenstein
          • 4 years ago

          NVenc and Quicksync are the sort of thing I was talking about. I’ve done a ton of in-home streaming via Nvidia’s solution, and if there’s a performance hit, I don’t see it. I agree it’d be nice to quantify, though. Hmm.

            • Bensam123
            • 4 years ago

            There was talk of OpenCL-accelerated encoding at one time; it seems like no one is really working on that anymore, though.

            Streaming with those hardware solutions via OBS will have a latency penalty of a couple of seconds, so they really can’t be used instead of Shield or Steam streaming.

            • derFunkenstein
            • 4 years ago

            Oh, latency? Yes, there is definite latency, but it’s not even one second, let alone a couple.

            • Firestarter
            • 4 years ago

            Streaming latency is probably between 15 and 25ms, if you have a decent wired network and a client that supports hardware accelerated decoding. I say that based on the sort of numbers I see when I use Steam In-Home Streaming

            • derFunkenstein
            • 4 years ago

            How are you measuring (and I don’t mean, “how did you arrive at that number” I mean, “from what point to what point in the input-render-stream-output process are you talking about”)? Do you mean from the moment you press the button to the moment it happens on the screen is 15-25ms?

            I have a 240fps camera in my phone, so I guess I should set aside some time to measure the latency difference between streaming and native gaming.

            • Firestarter
            • 4 years ago

            I’m not measuring it, I’m just going with what Steam says the latency is. I’m not sure how accurate it is, but at an indicated 20 to 25ms it’s good enough to play most games

            • derFunkenstein
            • 4 years ago

            That’s pretty cool. I tried it out this weekend and just tested streaming between two PCs, both connected via Gigabit Ethernet, and found it worked fairly well. Wouldn’t want to play mouse-based games, but controller-based games did well enough, like you said.

            • Bensam123
            • 4 years ago

            It’s not about connection speed, it’s about the encoder and the processes above it. There are a few different threads on the OBS forums about it, including low latency threads to try and reduce it.

            What software are you guys using to do this? I’ve been trying to do something like this for some time to do lan encoding and latency has always been a hurdle.

            • Bensam123
            • 4 years ago

            Weird, I’ve used a relay server locally and you end up with at least 1-3 seconds of latency, that increases over time if the server gets bogged down.

            • derFunkenstein
            • 4 years ago

            I think we’re comparing different things. I was talking about in home streaming from PC to Shield. You’re talking about streaming to Twitch. So I misunderstood the question, sorry.

            FWIW streaming to Twitch from my Shield has about a 5 second delay and I know from experience the TR podcast stream is similar. So you’re not seeing anything out of the ordinary.

            I had assumed (maybe incorrectly) that most of the delay was in Twitch processing the video and distributing it to viewers, not a delay coming from the PC going out. Interesting that the PC is actually that far behind. Maybe that’s because of the compression scheme used.

            • Bensam123
            • 4 years ago

            Yeah low latency solutions like Shield and Steam in-home are a bit different.

            Twitch delay should be about 30s~, depending on how close you are to the ingestion server and what ingestion server the streamer is streaming to. If you’re right clicking on the video > clicking stats > then adding up the encoding delay and such, that’s not accurate. Best way to do it is basically having a live chat on screen. You can tell based on when text pops up on screen how far behind it is. Sometimes your computer can add more as well if it’s falling behind when it comes to decoding.

            • auxy
            • 4 years ago

            [quote=”Bensam123″<]Streaming with those hardware solutions via OBS will have a couple second latency pentalty[/quote<]W-what? Why do you think that? The stream delay on streaming sites like Twitch has nothing to do with OBS or hardware streaming; is that why you're saying that?

            • Bensam123
            • 4 years ago

            That’s what’s talked about on the OBS forums, and it’s what happens when you set up a media server on a LAN. I’ve had it happen in practice, in addition to talking with other people about it. OBS has a BASE of at minimum 700ms; it’s set up in the advanced options. Going lower than 500ms causes all sorts of problems. That’s encoding, program overhead, and media server buffering, which will ratchet it up. Transcoding will add even more latency. (This is why you can’t use OBS or RTMP instead of custom solutions like Shield or Steam in-home streaming (not the same as Steam broadcasting).)

            Twitch delay is about 30s, but that depends on how far away from the ingestion servers you are and which ingestion server the streamer is streaming to.

      • auxy
      • 4 years ago

      I stream every day at 3584kbps 720p@60fps on hitbox.tv and I use Intel Quicksync video for it. Zero impact on my game performance and stream quality is great per my viewers. (*‘ω‘ *)

        • Bensam123
        • 4 years ago

        Thanks for the subjective impressions. ~_~;

        Quicksync will actually deliver a worse quality at the same bitrate compared to the x264 encoder with something like OBS. It uses about 20% more bitrate for the same quality. I guess you don’t need to worry about the encoding workload though (although there is still a performance hit).

        Thought hitbox was dead though… >>

          • auxy
          • 4 years ago

          People complain a lot about the quality of QuickSync; I’ve done subjective double-blind testing with techy people versus x264 veryfast and they generally cannot tell the difference. Veryfast preset is not good for quality. (´・ω・`)

          People often tell me things like “you shouldn’t use quicksync because the quality is worse” — this point of view is popular in the official OBS IRC — but the reality is that compared to software encoding using settings at which most people can actually encode in real time, it isn’t. I take this opinion about as seriously as people who say “you shouldn’t be using MP3, because Vorbis has better quality at the same bitrate.” (・∀・) Software encoding can absolutely give far superior results, but not at the same speed.

          And no, there really is no significant performance hit. A couple of percentage points of CPU usage; no visible change in game performance. People who are still using CPU encoding [b<]for streaming[/b<] are missing the point.
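
          If anyone wants numbers instead of taking either of our words for it, here’s a rough sketch of an apples-to-apples check: encode the same clip with QuickSync and with x264 veryfast at the same bitrate, then score each against the source with ffmpeg’s SSIM filter. An ffmpeg build with libx264 and QSV support is assumed, and source.mkv is just a placeholder for any test clip.

          import subprocess

          SOURCE = "source.mkv"  # placeholder: any test clip

          def encode(codec_args, out_path):
              # encode the source clip at a fixed target bitrate
              subprocess.run(["ffmpeg", "-y", "-i", SOURCE, *codec_args,
                              "-b:v", "3500k", out_path],
                             check=True, capture_output=True)

          def ssim(encoded):
              # compare the encode against the source; ffmpeg prints SSIM to stderr
              r = subprocess.run(["ffmpeg", "-i", encoded, "-i", SOURCE,
                                  "-lavfi", "ssim", "-f", "null", "-"],
                                 capture_output=True, text=True)
              return [line for line in r.stderr.splitlines() if "SSIM" in line][-1]

          encode(["-c:v", "h264_qsv"], "qsv.mp4")
          encode(["-c:v", "libx264", "-preset", "veryfast"], "x264.mp4")
          print("QuickSync:    ", ssim("qsv.mp4"))
          print("x264 veryfast:", ssim("x264.mp4"))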

            • Firestarter
            • 4 years ago

            If you have cores left over, software encoding can make sense. If I were a professional streamer, I’d seriously consider getting something like an i7-5820K or i7-5960X just so it could handle high-quality encoding without having to resort to the poorer quality of the hardware H.264 encoders.

            • auxy
            • 4 years ago

            I would agree with you, except that in fast motion scenes even a 5960X may stumble with software encoding, while for whatever reason, QuickSync never has an issue. Stream quality remains fairly constant even while I’m flipping around in Warframe, while CPU encoding on my 4790K skips and drops frames like crazy. I had to fall back to superfast, which looks much worse.

            • Bensam123
            • 4 years ago

            I’ve never had issues with fast motion scenes with CPU encoding. That depends entirely on resolution/fps/encoding preset and of course what kind of content it is. FPS vs Moba for instance. A moba requires much less bit rate and will be stressed less.

            If it’s skipping and dropping frames that means your encoder preset/res/fps is too high for your content and your processor is getting maxed out. You’d need to reduce either one of them.

            Not to be the ‘you’re doing it wrong’ guy, but if you’re having problems with skipping on occasion or churning out high motion scenes, that’s not a good comparison to quicksync.

            • auxy
            • 4 years ago

            I know that Bensam; I said that I had to lower my preset to superfast to make Warframe stream smoothly with x264. (*´ω`*)

            And yeah, Warframe is very near a worst-case scenario for x264, but why do you say that makes it a bad comparison to QuickSync?

            • Bensam123
            • 4 years ago

            You’re doing an apples-to-oranges comparison. QuickSync will have the edge if your PC is being overloaded by your settings and you have to drop them down a notch. An apples-to-apples comparison would be two streaming PCs (PCs used just for the encoding workload) and comparing the quality of QuickSync vs. x264.

            If your PC is already being overloaded at veryfast, you aren’t going to get an accurate comparison at veryfast, especially when you drop it down to superfast (although people usually compare QuickSync to superfast). You’ll get more stuttering and pixelation as your CPU gets closer to 90-95%.

            • Bensam123
            • 4 years ago

            A cheaper solution is getting a second PC and mirroring the monitor over to it (which is what most people do). There may be some degradation depending on how much you spend on a capture card, though.

            • Bensam123
            • 4 years ago

            Weird, I’ve done tests and asked my viewers, in addition to threads on the OBS forums saying otherwise.

            I was talking about the performance hit of using QuickSync vs. a dedicated encoding PC. OBS still uses 6-7% CPU, in addition to having weird quirks when interacting with games sometimes.

    • Spyder22446688
    • 4 years ago

    I wish Intel offered the i7-6700K with 128MB of eDRAM. That extra performance across a variety of applications might offer something for the 2500K/2600K cohort.

      • swaaye
      • 4 years ago

      I wouldn’t be surprised if that’s coming eventually. They’re going to be selling this tech for quite a while and will need a refresh to sell.

        • w76
        • 4 years ago

        Maybe that’s Kaby Lake?

          • Spyder22446688
          • 4 years ago

          Always something right around the corner. Which is either positive or infuriating, depending on the frame of mind.

    • geekl33tgamer
    • 4 years ago

    For once, Krogoth (I’m not impressed) seems right on the money more than ever.

    • BobbinThreadbare
    • 4 years ago

    Awesome review.

    I wonder how much it costs Intel to put the 128MB of cache on the CPU package. Seems like they should be putting that on any enthusiast-oriented chip.

    • Shambles
    • 4 years ago

    After seeing the reviews, I am completely happy I bought a 4790K a few weeks ago instead of waiting. What a disappointment. It sure looks like we’re hitting the practical limit of silicon.

    • dodozoid
    • 4 years ago

    Well then, I am postponing my upgrade plans… i7-3770 is just fine….

    • USAFTW
    • 4 years ago

    Great review as always. Thanks for the effort.
    I guess I did the right thing by not waiting for skylake. Not that a jump from 4690K to 6600K would be huge.

    • jessterman21
    • 4 years ago

    Now if we can get a 4GHz i3 for gaming… DONE

    • Convert
    • 4 years ago

    For some reason this review seems so much meatier with the commentary and insight. I really enjoyed it!

    [quote<]I figure we should overclock the 5775C, as well, to see what it can do.[/quote<]

    Yes please 🙂 I’m not sure if I’d overclock mine if I got one, but being at a 500MHz disadvantage really makes me want to see how it does on an equal clock footing.

    • Oriflamme
    • 4 years ago

    So basically there is no real benefit to this over a 4790K, except in the occasional application.

    Mostly, FPS was about the same as or lower than with the 4790K.

    Heh.

    • Ninjitsu
    • 4 years ago

    I think this has to be the best review I’ve read on TR (admittedly from my pov of usefulness). Loved it. 1080p results, frame time data, historical comparisons, everything, everything.

    I’d love it if you could manage to test the 6600K and the 5675C as well when you take a look at overclocking.

    I’m not sure what people are going to hate on now – Intel’s thrown everything we’ve asked for at us during the last two years, and prices have barely moved upwards.

    Regarding Luxmark, did you get a chance to check out the 5775C’s GPU performance? And CPU + IGP + GPU for the chip?

    [quote<]but the 6700K can run all four cores at 4.2GHz under load[/quote<]

    If that’s how Turbo Boost (3.0?) operates, then that’s fantastic; the i5s will be deadly.

    [quote<]I seriously doubt Intel will allow lots of BCLK tuning leeway on the non-K variants of Skylake, for instance. That prospect seems very unlikely[/quote<]

    I don’t know, it may not be that unlikely. It’ll require Z170, I’m sure, but the way Intel’s going these days, we can hope.

    [quote<]5775C's magical gaming prowess[/quote<]

    That L4! Never thought it’d shine here. It’s a fantastic chip, and not really that much more expensive. It does indeed have a bit of a positioning problem, and didn’t get much exposure at all, but it definitely didn’t warrant a “meh”. I also finally understand why a -HQ part resides in the Asus RoG G751.

    [quote<]At this rate, we may have to start hunting explicitly for CPU-bound games and scenarios in order to stress test CPUs in the future.[/quote<]

    Arma 3, Arma 3, Arma 3, Arma 3, Arma 3… (I can point you to benchmarks if you want, and detail which settings are most CPU-bound, etc.)

    [quote<]My sense is that Radeons tend to be quite a bit more CPU-bound, especially when measured with advanced metrics[/quote<]

    Yup, AMD drivers. I’m not sure if you should replace Nvidia GPUs entirely, but do check out games like GRID, DiRT, or pCARS (or Arma) that cause them problems; it should be useful to a lot of people.

    ----

    That’s all!
    [spoiler<]CPU Hash is the best hash[/spoiler<]

    • wierdo
    • 4 years ago

    Came for the i7-6700K, stayed for the i7-5775C.

    If I ever decide to upgrade from my i7-3770 – in 10 years, at this rate of performance advancement – then the future replacement should hopefully have a “giant L4 cache” like that.

    Neato.

      • the
      • 4 years ago

      You wouldn’t need a giant L4 cache as main memory will have likely moved to HBM or similar in that time frame for consumers. 🙂

      • USAFTW
      • 4 years ago

      The 5775c kinda steals the thunder. That and the fact that the Haswell variant is the 4790K and not 4770K. Only seems fair though.
      I was expecting lower power consumption from the new 14nm process, especially since it’s clocked the same or lower than the 4790K. Also very impressive idle numbers from the 5775c.

      • jihadjoe
      • 4 years ago

      5775C — kicking butt at only 65W.

      • DrDominodog51
      • 4 years ago

      The 5775C made me glad I bought Z97 and a Pentium AE.

      • Starfalcon
      • 4 years ago

      Yeah still plenty happy with my i7-3770 also. Looks like I will be good for a while still, going to wait until there is a really huge jump in performance.

        • Chrispy_
        • 4 years ago

        ditto

    • tfp
    • 4 years ago

    Thanks for including the old CPUs in the benchmark!

    I can see that this latest CPU finishes the image processing and 3d rendering in about 1/3 the time it takes my Q9400 stock. Maybe I’ll upgrade someday, but I heard there is a new CPU just around the corner.

      • derFunkenstein
      • 4 years ago

      You heard wrong! This is the last CPU family. Everything dies after 14nm. It’s true! I read it on the internet! Too much physics!

        • TwoEars
        • 4 years ago

        It’s not only the physics but also the economics of it. Some are arguing that while 10nm is on the horizon, 7nm might be very tough because of yield and cost issues. The smaller it gets, the higher the circuit design and manufacturing costs. You might also need even fancier transistors than the tri-gate ones. I think Intel is looking at nanowires and other very high-end, cutting-edge stuff.

        [url<]http://cen.acs.org/articles/92/web/2014/12/Record-Breaking-Nanowire-Transistors.html[/url<]

          • MadManOriginal
          • 4 years ago

          75GHz 😮

          TIME TO REVIVE NETBURST.

            • auxy
            • 4 years ago

            I imagine NetBurst at 75GHz would be reasonably performant! 150GHz ALUs!

        • tfp
        • 4 years ago

        Then use less physics

          • tipoo
          • 4 years ago

          I think you’re onto something there! If you patent using less physics, you could sell it to Intel for a fortune!

    • Thrashdog
    • 4 years ago

    Seems like the best idea for gamers would be to wait and see if/when a high-end Skylake chip with a full-fat GT4e implementation (and more importantly all that sexy eDRAM that comes with it) is forthcoming in the next few months.

    In all likelihood it’s not going to be enough to make me ditch my 4670k, but wow do those 5775C numbers have me intrigued…

      • itachi
      • 4 years ago

      Ah man, that would be awesome and smart of them, but it would be mean too. If I decide to upgrade now to a 6700K… well, I would be quite mad if they released the eDRAM version a few months later :S and so would everyone who upgraded.

    • NTMBK
    • 4 years ago

    Given that one of the major improvements with Skylake is the improved platform, I’m a little sad to see the same old SATA SSD used for testing. I’m curious to see whether a nice and snappy PCIe SSD would improve performance in games which are streaming-heavy, like Assassin’s Creed, Far Cry, and so on.

    EDIT: Though otherwise, great review as always Scott 😀

      • Krogoth
      • 4 years ago

      You aren’t going to see any difference.

      PCIe SSD devices are still overkill for mainstream and gaming usage patterns. They are the new “SCSI 15K HDDs”.

      • chuckula
      • 4 years ago

      That sounds like a job for… FOLLOWUP REVIEW MAN!

    • Unknown-Error
    • 4 years ago

    As usual thanks for the review Lt. General Wasson. Soon you’ll be promoted to General.

    • mkk
    • 4 years ago

    You know it’s a sad situation when the best of the best keeps putting up an uninspiring performance. It’s like when the studio album sounded great, but after the concert you regret every penny spent.

      • bthylafh
      • 4 years ago

      It’s a real shame that AMD can’t compete anymore. Intel needs someone nipping at their heels to give their best.

        • Kretschmer
        • 4 years ago

        Intel needs to be competitive with…Intel. If people aren’t replacing or adding Intel CPUs to their households – even if they’re using Intel CPUs now – Intel will go out of business.

        AMD never had the scale to seriously threaten Intel, even when they held the performance crown.

    • Forge
    • 4 years ago

    So if you have a 4790K, continue sitting pretty. If you really need DDR4, get a Skylake, and if you’d enjoy playing with a rather singular CPU with some interesting bells and whistles, the 5775C is pretty cool.

      • Krogoth
      • 4 years ago

      DDR4 is kinda pointless for desktop usage patterns and gaming. Skylake makes sense if you need more PCIe lanes on the relatively cheap.

    • southrncomfortjm
    • 4 years ago

    i5 3570k should last me a long while based on what I saw here, especially since it is only paired with a measly GTX 760.

      • Kretschmer
      • 4 years ago

      Yeah, I anticipate becoming CPU limited in 2020. And I haven’t even overclocked, yet!

        • southrncomfortjm
        • 4 years ago

        I had overclocked, but then put everything back to stock when I realized there really wasn’t any point yet.

      • Krogoth
      • 4 years ago

      Unless there’s a killer app that renders systems of yesterday woefully obsolete.

      • BlackDove
      • 4 years ago

      You guys don’t play many new games, do you?

        • Krogoth
        • 4 years ago

        Games are typically GPU-bound, not CPU-bound, especially if you want to game at 4 megapixels and/or with AA/AF on top. At lower resolutions (1920×1200 or less) you can become “CPU-bound”, but the framerate is already so high with the “slowest” chip that it’s a moot point.

        Look at the gaming benches in the review, for goodness’ sake.

        I’m aware that there are some strategy games where the CPU is clearly the bottleneck, but even Skylake chips struggle under such conditions.

        • Kretschmer
        • 4 years ago

        Can you name a game that would be bottlenecked by an i5-3570K but not an i7-6700K?

          • Ninjitsu
          • 4 years ago

          Arma 3.

            • Krogoth
            • 4 years ago

            It is GPU-bound for the most part.

            When it does become CPU-bound, both the 6700K and 3570K struggle under the same conditions.

            • Ninjitsu
            • 4 years ago

            I completely disagree. I’ve been playing the game since the Alpha, man!

            I have a GTX 560. It starts getting hit after sampling hits 80% of 1080p. In a lot of scenes, however, I’d probably not notice because my Q8400 craps out first.

            If I plot FPS vs GPU utilisation, at times my GPU isn’t crossing 60% and my frame rate is like 24 fps or something.

            Now, assuming you have a GTX 580 or faster, you should be able to push 40+ fps at 1080p, with the settings I’m using, assuming you’re not CPU bound (e.g. looking at the ocean, or a particularly empty part of the map).

            Now, where are you CPU bound in game? Objects, draw distance. Then you get bound by VRAM, for textures. Finally you’re GPU bound.

            I’ve managed to replicate the same performance on a friend’s 3570K and GTX 670 as my Q8400 and 560 in CPU bound scenarios (like towns, AI) by increasing object quality and draw distance.

            The game loves them GHz on the CPU side, and it has additional launcher options to support hyperthreading.

            Finally I have this for you. It’s almost two years old (and we’ve had quite a few updates since), but highly relevant even today:
            [url<]http://www.techspot.com/review/712-arma-3-benchmarks/page5.html[/url<]

            • southrncomfortjm
            • 4 years ago

            Any other games that come to mind? I’m sure there are some RTS/4X games or something that may be CPU bound, but I’m guessing most people find themselves more constrained by their GPU than their CPU in the vast majority of games.

            • Kretschmer
            • 4 years ago

            OK, so Arma 3 players might want to upgrade. The initial comment implied that many new games would be limited by an i5-3570K, but that’s clearly not the case.

            • Ninjitsu
            • 4 years ago

            Of course. For me, it was Total War Rome II and Arma 3 that made me realise that my Q8400 is getting too old.

        • southrncomfortjm
        • 4 years ago

        Not really, no. I tend to wait for games to get really cheap before I jump on board except for rare occasions. Saves a ton of money and doesn’t lessen my overall enjoyment of my leisure time.

        So, in my use case, my CPU is not adversely affecting performance and I remain perfectly happy.

      • Kenjiee
      • 4 years ago

      My GTX 760 pairs well with my i5 760 xD. I planned to upgrade with Skylake, but it just doesn't convince me.

    • Krogoth
    • 4 years ago

    Yawn, another evolutionary improvement of the Sandy Bridge dynasty.

    The most exciting thing about Skylake is the updated platform, not the CPU itself. The CPU's new instruction sets are nice for content creation, provided the software supports them, though.

      • Ninjitsu
      • 4 years ago

      But all the evolution has led to a ~25% performance improvement over Sandy Bridge. And you’re looking at 4.6 GHz vs 4.9 GHz after overclocking, which is a 6% difference.

      Still 19% better, with much more consistent frame delivery for games, an updated platform and much lower power consumption (especially including the platform).

      Not a “BUY THIS NOW” but it’s not exactly nothing. Go back to Nehalem and Penryn and it’s up to 3x faster (or more).

        • Waco
        • 4 years ago

        Funny, this looks pretty good to me:

        [url<]https://techreport.com/r.x/skylake/cinebench-one.gif[/url<]

          • Ninjitsu
          • 4 years ago

          I think you meant to reply to Krogoth!

            • Waco
            • 4 years ago

            To both of you. 😛

            The 6700K has a 35% increase over the 2600K, in single threaded workloads. I don’t see that as bad progress!

            • Ninjitsu
            • 4 years ago

            Ah. Of course, I was just saying 25% on an average. TR’s using stock clocks, which would increase the difference in favour of Skylake.

            I’m fairly excited about Skylake, my Core 2 Quad Q8400 needs to be replaced…shame I won’t be able to re-use the RAM though. 🙁

            • Laykun
            • 4 years ago

            Compare GPU speed improvements to CPU speed improvements over the same time period and prepare to be depressed.

            • smilingcrow
            • 4 years ago

            Intel’s integrated GPUs started from a very low point on the curve so had a lot of room for easy growth.
            dGPUs are highly parallel so it’s easy to add more performance when moving to a smaller node just by adding more cores.
            For consumers dGPUs are about real time performance (FPS in Games) whereas with CPUs that is not the case as it’s more about reducing the time of a task to complete; encoding etc.
            Hence there is much more demand for extra performance in games as the benefit is much more tangible.
            Plus there’s the lure of 4K gaming etc which demands better real-time performance.
            For consumer CPUs there isn’t such a big demand for more cores as shown by the low sales volumes of Intel’s -E platforms.
            If the price was right more people would buy 6+ core systems but due to lack of competition prices remain high.
            It’s all quite logical!

            • Laykun
            • 4 years ago

            I'll be more specific: I meant discrete GPU improvements compared to CPU speed improvements. I understand the embarrassingly parallel nature of GPU performance problems; I'm just futilely trying to appeal to Intel's pride by saying their single-core performance is not keeping pace with improvements in other products from other companies. But alas, the problem is more complicated than just throwing more transistors at it, and may not really be solved until we move to a new semiconductor medium that allows for similar transistor densities but much higher clock speeds.

            • Waco
            • 4 years ago

            GPUs and CPUs cannot be compared. The workloads are entirely different in nature.

            • Ninjitsu
            • 4 years ago

            [quote<]I'm just futilely trying to appeal to Intel's pride by saying their single-core performance is not keeping pace with improvements in other products from other companies.[/quote<]

            Unless those products are CPUs, you can't compare the two. And if those products are CPUs, then I'd really like to know which companies you're talking about. GPUs have low clocks and low per-thread/per-ALU performance.

        • Krogoth
        • 4 years ago

        Skylake is not 3x faster than Bloomfield (it is closer to 100% faster; some people forget how fast Bloomfields were back in their heyday). It is only 3x faster than Penryn if you run applications that take advantage of eight threads and the new instruction sets. Otherwise it is closer to a 150-200% improvement for most mainstream applications.

        The real trick is that it consumes less than half the power of a Penryn or Bloomfield rig while delivering that level of performance.

        In terms of energy efficiency, Skylake is a massive win over its long-fanged predecessors.

          • Ninjitsu
          • 4 years ago

          I said “up to”. 😉

          • Meadows
          • 4 years ago

          A 200% improvement means 3x faster. Same thing.

        • anotherengineer
        • 4 years ago

        Ya, "up to" being the key words. IIRC the Sandy Bridge i7-2600K was 3.4GHz base and 3.8GHz turbo; the Skylake i7-6700K is 4GHz and 4.2GHz, so Skylake's base clock is roughly 18% higher than Sandy's.

        The improvement from Sandy to Ivy to Haswell to Broadwell to Skylake is pretty small when
        1. clock speeds are taken into account, and
        2. it's compared to the improvement seen moving from P4 NetBurst to Conroe.

        If that trend had continued, Skylake would probably have over triple the CPU performance it currently has.

        Which leads to the next thing.
        1. Have we hit a wall with current CPU architecture (x86/hardware/physics)?
        2. Or is it software-related: x86, Windows, etc.?
        3. Or both?
        4. Maybe NetBurst, like Bulldozer, just wasn't cut out for desktop benchmarking, so it was easy to make large gains?
        5. Cheese/other?

      • flip-mode
      • 4 years ago

      Were you hoping for them to dust off the P4 and evolve that instead? Seriously, what is the expectation here? As evolution goes, the i7 6700 looks pretty damn sweet compared to an i7 2600.

      • HisDivineOrder
      • 4 years ago

      The most exciting thing about the last SEVERAL CPU releases from Intel has been the updated platforms. Nothing new there.

      That was my first thought upon reaching the end of this CPU review. The motherboard chipset is more interesting and compelling, but then I realized…

      I think Sandy Bridge was the last time that wasn’t true.

      • ronch
      • 4 years ago

      Well, if your current CPU quits, nobody’s stopping you from getting Sandy or Ivy. 🙂

    • Chrispy_
    • 4 years ago

    On the gaming front, I hear that overclocking is very poor, like only 4.2GHz compared to Sandy’s easy 4.7-4.9GHz

    If Skylake provides 15% better IPC [b<]in games[/b<] and a 2500K provides a 15% higher clock, the end result is [i<]Krogoth[/i<].

      • TwoEars
      • 4 years ago

      You heard wrong. It's comparable to Haswell: an easy 4.6GHz, sometimes 4.7-4.8GHz.

        • Chrispy_
        • 4 years ago

        Hmm, fair enough. If it hits 4.6 easily then Sandy doesn't have much on it; although I had a couple of 5GHz Sandy samples, I hear 5GHz was considered a good overclock and 4.8 a more reasonable expectation.

      • BobbinThreadbare
      • 4 years ago

      It runs at nearly 4.2GHz stock. I seriously doubt it doesn't overclock *at all* over its default boost rate.

      • chuckula
      • 4 years ago

      Even Ars Technica, which is no longer known for very rigorous CPU benchmarks, got it to 4.8GHz.

      [url<]http://arstechnica.com/gadgets/2015/08/intel-skylake-core-i7-6700k-reviewed/[/url<]

        • Growler
        • 4 years ago

        That’s because their best hardware guys ran off to start their own podunk website. I forget the name of it, though.

          • chuckula
          • 4 years ago

          I think they write reports about technology. So it must be ReportTech.com!

            • w76
            • 4 years ago

            Tech news without all the SJW? Great concept!

            • Ninjitsu
            • 4 years ago

            Street Journal Wall?

            • auxy
            • 4 years ago

            TR is pretty apolitical, which is why I like it so much. Did you see GitHub went full SJW?

          • modulusshift
          • 4 years ago

          Wait, haha, seriously? That makes so much sense… Anything remotely technically intensive there is stupendously long running (like Siracusa’s OS X reviews). And I know it used to be up there with AnandTech as far as that went, I just didn’t know where the talent ran off to. And I’ve been following this site almost as long as Ars!

      • itachi
      • 4 years ago

      When you say "easy", do you mean on water, or? I overclocked my friend's old 2600K recently. I told him to get a Noctua D15 so we could achieve a good OC, but we only managed 4.4GHz; beyond that we hit a wall, mostly due to high temps. I found it very strange, so I bought another thermal paste, Gelid Extreme, to see if it would help. Nope, he's still at 4.4GHz and overheating; just from gaming his cores sometimes hit 76°!!! He also has a good case with two 20cm fans on top.

        • Jason181
        • 4 years ago

        Sounds like your friend got a sample like mine; 4.4 Ghz on an H100. My chip doesn’t overheat (probably due to the water), but it doesn’t clock higher without overvolting that makes me uncomfortable.

        So at least take solace in the fact that 4.4 is likely the best he’d get, even on water.

        The 6700k is looking like a nice upgrade to the 2600k @ 4.4 though. The time over 8.3 ms is a serious improvement.
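        For anyone new to TR's "time beyond X ms" idea, the gist is that you only count the portion of each frame that overshoots the budget, 8.3 ms being the 120Hz target. Below is a minimal sketch of that accounting in C; the frame times are made up and the exact bookkeeping is my assumption of how such a badness metric is computed, not TR's actual tooling.

        [code<]
        /* Minimal sketch: sum the time spent past an 8.33 ms frame budget.
         * The accounting (per-frame overshoot, summed) is an assumption of how
         * a "time beyond X ms" metric works; the frame times are made up. */
        #include <stdio.h>

        static double time_beyond(const double *frame_ms, int n, double budget_ms)
        {
            double total = 0.0;
            for (int i = 0; i < n; i++) {
                if (frame_ms[i] > budget_ms)
                    total += frame_ms[i] - budget_ms;   /* only the overshoot counts */
            }
            return total;
        }

        int main(void)
        {
            /* hypothetical frame times in milliseconds */
            const double frames[] = { 7.9, 8.1, 12.4, 8.0, 9.6, 7.7 };
            int n = (int)(sizeof(frames) / sizeof(frames[0]));

            printf("time beyond 8.33 ms: %.2f ms\n", time_beyond(frames, n, 8.33));
            return 0;
        }
        [/code<]

        The nice thing about a metric like this versus a plain average is that a handful of 12 ms hitches shows up clearly even when the average FPS of two chips looks identical.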

          • itachi
          • 4 years ago

          Aww, indeed, that's sad. I will try one last time to change the thermal paste and apply a very thin layer; I think I may have used a bit too much last time, but not by much, so I doubt it will change anything.

          That said, if that's the limit, then all I heard about Skylake "hitting 5GHz without even trying" is a bit meh to me, haha. Anyway, that's the silicon lottery, eh. And I guess 4.4 is still a fair boost; that thing runs GTA V maxed out smoothly, and he even upgraded to a 980 Ti, so now he's cranked everything up and it looks sweet. I even asked him to turn on the FPS counter, but he was like "meh, I don't care" lol (that's how smooth it is), though I was curious. My FX-8320 @ 4.6 just cries in this game.

          There is a nice review on a Polish website called pclab.pl where they test many more games extensively, and you can see a big difference compared to most reviews; hell, the i5 beats the 4770K in most cases, which is just awesome. I will probably go for an i7, but damn, the pricing in euros hurts: 400 euros-ish, a bit less. Hopefully it drops once there is more supply; that's like $70 more than US MSRP!

            • Chrispy_
            • 4 years ago

            “Easy” overclocks tend to mean crank the voltage to 1.4V, put the multiplier to whatever you can and just deal with the heat by buying superior cooling. Beyond 1.4V is what requires tinkering/fettling/lots of patience and the best boards, PSU and cooling you can afford.

            For most people, a casual overclock is fine (adjust the multiplier using stock voltage until it’s unstable and then see if a minor voltage bump makes it stable). In these terms an “easy” overclock to 4.8 might result in a casual overclock to 4.4 – it just depends what you define as casual/easy/hard.

            I’ve seen hardened overclockers fail to get Sandy beyond 4.8GHz using 280mm radiators and a day’s work, and I’ve seen casual overclockers (myself included) hit 4.5GHz on stock voltage. There’s just a lot of variation in the chips and, as ever, “YMMV” 🙂

            • itachi
            • 4 years ago

            Yeah, I see what you mean. I didn't try my best since he doesn't really have the patience LOL, but I tried pretty hard, and this chip seems to be a big disappointment… just bad luck I guess.

            Also, the board is pretty good; it's the Gigabyte UD3H I think. The only fail is the BIOS. God, I've never seen such a horrible BIOS in my life. I get that it's a few years old, but even my old Asus BIOSes were better, ugh. IMO the Asus BIOS rocks, especially with the DIGI+ settings; that's where it's at. I assume ASRock BIOSes are good too since it's a subsidiary, so I might go for an ASRock for Skylake, or perhaps MSI, but I don't like the color themes lol!!

            And I couldn't really find a guide for this mobo. I'd probably achieve better results if I understood the BIOS better; it's just confusing. I think there's only one setting to overclock with, so I'm not quite sure.

            All the other CPUs I've had let you modify the bus speed and the multiplier (and the rated FSB); here I could only change the bus speed, oh well. And I believe we're at 1.4V already, hence the overheating on a D15 @ 4.4. I'm sure I could tweak more with a proper BIOS tutorial.

            Plus we tried to update the BIOS, both with the online tool and the USB trick, and it didn't work.

    • DancinJack
    • 4 years ago

    So, guys, is my 4790K @ 4.5 okay for a while? What should I do?!?!?!

      • TwoEars
      • 4 years ago

      Garbage. Toss it out.

      • southrncomfortjm
      • 4 years ago

      Replace it now!

      • Krogoth
      • 4 years ago

      Keep it.

      There’s really no reason to upgrade, unless you absolutely need more PCIe lanes.

      • derFunkenstein
      • 4 years ago

      Guys, it’s cool. I’m trained in hardware disposal. Just wrap it up in Bubble Wrap and mail it to me. The bubble wrap is important so that it doesn’t explode or spontaneously burst into flames when USPS inevitably drops it. These things are fragile and dangerous. Call [s<]a professional[/s<] Ben.

        • chuckula
        • 4 years ago

        Don’t forget the anti-static protection bag.
        One errant spark and the place goes up!

          • derFunkenstein
          • 4 years ago

          That’s a myth perpetuated by Big Antistatic.

      • PrincipalSkinner
      • 4 years ago

      We’re not guys.
      We are GUIZE!

      • geekl33tgamer
      • 4 years ago

      That CPU is useless now. You [i<]must[/i<] upgrade.

      • USAFTW
      • 4 years ago

      Every second you spend pondering an upgrade is another second you spend waiting for your Handbrake encode to finish, all for not spending the money to upgrade. Don't wait any longer. Time is money.

      • uni-mitation
      • 4 years ago

      Not even good to keep my balls warm, destroy it!

      Now, AMD seems to be doing something right! It keeps my balls fuzzy, warm and sweaty.

    • uni-mitation
    • 4 years ago

    So, unless you got something before the 2500K, or you have been recently defrosted from your cryogenic sleep, then it would NOT be a sound investment to buy in.

      • Krogoth
      • 4 years ago

      or need more PCIe lanes on the cheap.

    • ForceEdge
    • 4 years ago

    Any idea how fast Z170 boards boot NVMe SSDs like the 750? 😛 I've got the 400GB one, and every reboot annoys the hell outta me when it takes upwards of 40 seconds! Bleeding-edge tech problems, I guess, hehe.

    Edit: To you guys who're wondering if it boots on older boards, I'm running the Z87 Hero; it boots with the proper CSM settings provided by Intel!

    -Still Krogothed though; I was really hoping to see at least a tangible (read: noticeable) performance boost.
    PS: prices are CRAZY in the UK right now, literally 50% extra compared to the 4790K

      • Krogoth
      • 4 years ago

      There are performance improvements in certain workloads, namely content creation, if the software supports the new instruction sets.

    • TwoEars
    • 4 years ago

    Performance increase for gaming/encoding: Negligible

    Power savings for notebooks: Noticeable

    That’s Skylake in a nutshell.

      • Krogoth
      • 4 years ago

      FIFY,

      Performance increase in gaming: Tiny (if the game is CPU-bound), and even then hardly noticeable.

      Performance increase in applications that take advantage of new instruction sets: Noticeable gain that may be worth it if time is $$$$. We will see more of this with Skylake-E.

      Performance increase in applications that don’t take advantage of new instruction sets: Barely noticeable.

      Power savings for portables: About the same as Ivy Bridge; it is actually a step back from Haswell. However, that's what Broadwell is for.

        • Andrew Lauritzen
        • 4 years ago

        > Power savings for portables: About the same as Ivy Bridge; it is actually a step back from Haswell. However, that's what Broadwell is for.

        How on earth are you drawing conclusion about the mobile chips already based on this review?

        • esterhasz
        • 4 years ago

        Yeah, fixed function hardware is where some big gains can still be made. After the Altera purchase some months ago, we can expect more in that direction, at least I hope so.

    • guardianl
    • 4 years ago

    Awesome, and thanks for doing the legacy comparisons again. Showing that Skylake is ~44x faster than an 800MHz PIII (which came out in 1999) is pretty cool.

    • PrincipalSkinner
    • 4 years ago

    Yeah, just as I suspected reading all the rumours. I guess buying that 2500k was one of the best CPU purchases I ever made.

      • ish718
      • 4 years ago

      I guess I will be sticking to my 2600 for a while…

        • bthylafh
        • 4 years ago

        My 2500K has been a ridiculously good value.

          • NeelyCam
          • 4 years ago

          Still sporting 2600K

            • gmskking
            • 4 years ago

            Me too

          • anotherengineer
          • 4 years ago

          Indeed. 6 year anniversary for my AMD 955 BE, and it is still good for what I use it for.

      • TwoEars
      • 4 years ago

      My i7-920 lasted 6 years, also pretty darn good.

        • Pwnstar
        • 4 years ago

        Yeah, six years is great value.

          • Beelzebubba9
          • 4 years ago

          Yeah I only sold my Core i7 920 because at 4Ghz it was consuming a hideous amount of power. The Haswell that replaced it isn’t really faster in games, but I really do appreciate the silence. 🙂

      • ozzuneoj
      • 4 years ago

      Same. I still see little reason to upgrade.

      Even with a relatively high end GTX 970 I’ll be far more GPU limited than CPU limited in anything that I play with my 2500K at 4.2Ghz.

      The one thing I find interesting is the changes to how the BCLK is handled. It’d really be awesome if they allowed that BCLK freedom with their lower end non-K models, Pentiums and Celerons. It wouldn’t interest me as far as upgrading my main system, but it certainly would make for some interesting budget build opportunities if you could overclock any CPU with the BCLK alone… like in the old days of FSB overclocking.

      Aside from this, I still don’t see anything that looks like it’d be a noticeable improvement over my ancient P67 board and 2500K with DDR3.

        • Airmantharp
        • 4 years ago

        My Z68 board is on its last legs, so when it dies I'm upgrading. But till then, a 4.5GHz 2500K looks like it will still keep up with the more modern Intel chips for most tasks, and that GTX 970 ain't no slouch neither ;).

        • Starfalcon
        • 4 years ago

        Yeah, sadly the days of the 300A celly overclocking to 450+ are long gone. It's hard to sell the expensive CPUs if the cheap ones run just as fast, so they've got to keep the cheap ones down.

      • Firestarter
      • 4 years ago

      I'm just miffed that I didn't go with a 2600K and the best doodads to overclock the hell out of it. I mean, at 4.3GHz my 2500K is nothing to scoff at, but a 2600K at 4.7GHz would've been that much sweeter.

      That said, color me Krogoth until Intel puts that giant cache on their top model so we can unambiguously call it the fastest desktop CPU ever. It's a crying shame that their newest and supposedly best CPU gets its behind handed to it by yester-month's model with a lower TDP!

      • isotope123
      • 4 years ago

      2500K brothers!

      • TheMonkeyKing
      • 4 years ago

      By that same logic, the Phenom II X6 running my HTPC has paid for itself over and over.

      Now I guess I’ll wait for the AMD Nano reviews to see if it is worth replacing my oldest GPU, the venerable Sapphire HD6950 running the HTPC too.

      I guess if I ever move to a 4K TV then yeah, probably.

        • Firestarter
        • 4 years ago

        well your Phenom is not in your main PC anymore, is it?

    • NTMBK
    • 4 years ago

    Surprise surprise, adding big caches can reduce latency spikes.

    • the
    • 4 years ago

    Color me Krogoth: I am not impressed.

    Overall, Skylake has some real improvements, but they're incredibly minor, even if bigger gains are admittedly increasingly difficult to pull off.

    Wow, the i7-5775C is a bit of a spoiler for Skylake. The performance boost from the L4 cache is quite impressive, considering it lets the i7-5775C keep up with the i7-6700K despite being several hundred MHz slower in both base and turbo clocks. This really makes me want a Skylake + eDRAM solution.

    I noticed the article mentions that Skylake has kept the internal ring bus between cores. To double-check, has this been confirmed by Intel? I ask because Intel is introducing a new on-die interconnect with Knights Landing, and it'd kind of make sense to move the mainstream architecture to it too.

    Also Damage, can you confirm that there is no AVX3/AVX-512 support on this chip?

      • chuckula
      • 4 years ago

      [quote<]I noticed the article mentions that Skylake has kept the internal ring bus between cores. To double-check, has this been confirmed by Intel? I ask because Intel is introducing a new on-die interconnect with Knights Landing, and it'd kind of make sense to move the mainstream architecture to it too.[/quote<] That makes sense. KNL has a new interconnect because a single ring bus doesn't scale well to 72 cores. However, in a quad-core desktop part the ring bus still handles everything without any issues.

        • the
        • 4 years ago

        Absolutely true, but you also have to look at the interconnect in the context of the full range of chips Intel will offer with this design. Intel had to significantly alter the ring design in Ivy Bridge-EX and simply implemented two fully independent ring domains for Haswell-EX. Skylake-EX is expected to come with over 20 cores, and that is where moving to a new on-die interconnect would pay off. However, Intel has so far designed only one core per mainstream generation and simply copy/pasted it, adding cores and market-specific I/O.

        If Intel were to keep consumer Skylake on a ring bus but upgrade to a crossbar for the Xeons, then their design strategy would have changed. (This could [i<]potentially[/i<] explain why only the Skylake Xeons have AVX3/AVX-512: the consumer chips simply don't have it because they're a slightly different design. More investigation is necessary, as my analysis could easily be wrong here.)

      • Damage
      • 4 years ago

      I’m assuming the ring remains because Intel’s architects have said they believe it has headroom for generations to come. I don’t have any architectural info about Skylake to confirm that, though.

      Wikipedia says no AVX-512 support in the non-Xeon versions of Skylake. I can’t confirm that, either, until Intel is ready and willing to talk architecture with me.

        • NTMBK
        • 4 years ago

        You could check the AVX-512 CPUID bit if you want to find out: [url<]https://en.m.wikipedia.org/wiki/CPUID[/url<]
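        If anyone wants to try that at home, here's a minimal sketch using the cpuid.h header that GCC and Clang ship: the AVX-512 Foundation flag lives in CPUID leaf 7, sub-leaf 0, EBX bit 16 (AVX2 is bit 5 of the same register). This just illustrates the check; it isn't a claim about what any particular Skylake part actually reports.

        [code<]
        /* Minimal sketch: query CPUID leaf 7 (sub-leaf 0) and test the
         * AVX-512 Foundation bit. Assumes a reasonably recent GCC or Clang
         * on x86 that provides __get_cpuid_count in <cpuid.h>. */
        #include <stdio.h>
        #include <cpuid.h>

        int main(void)
        {
            unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

            /* returns 0 if the CPU doesn't implement leaf 7 at all */
            if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
                puts("CPUID leaf 7 not supported");
                return 0;
            }

            printf("AVX2:     %s\n", (ebx & (1u << 5))  ? "yes" : "no");
            printf("AVX-512F: %s\n", (ebx & (1u << 16)) ? "yes" : "no");
            return 0;
        }
        [/code<]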

    • TheQat
    • 4 years ago

    typo on page one: “pc gaming is more alive than vibrant than ever” should be “pc gaming is more alive and vibrant than ever”

      • USAFTW
      • 4 years ago

      There’s more typos on the first page (have not gotten around to reading the rest.) Like 4970K instead of 4790K.

        • Eggrenade
        • 4 years ago

        There’re one typo in this post.

          • USAFTW
          • 4 years ago

          Scott fixed them.

      • Damage
      • 4 years ago

      Fixed. Thanks.

        • Convert
        • 4 years ago

        Page one, last sentence of the second paragraph: ” and they’ve arriving alongside an armada of motherboards based on the new Z170 chipset. “

    • chuckula
    • 4 years ago

    Yeah, as you pick the benchmarks apart in detail it looks like Skylake is a Jekyll & Hyde sort of launch.

    On the nice [Jekyll] side there are some solid improvements in non-game benchmarks even over the 4790K and even with the 200MHz clockspeed deficit. In some of the computational benchmarks it’s pretty clear that Intel definitely has made some IPC improvements that are similar in scope to the improvements we saw in the Nehalem to Sandy Bridge jump. The reason that Nehalem –> SB felt like a bigger jump was that SB had a large clockspeed bump in addition to the IPC improvements.

    It’s very clear that Intel has made some improvements in the AVX2 execution units, which is very nice but there’s a big catch: A crapton of software still fails to use any of those execution units and the software needs to actually utilize all those fancy features if you want the chip to perform well.

    On the not so nice [Hyde] side, we see that Skylake certainly isn’t worse than Haswell but isn’t exactly pushing any boundaries either, especially in games. On the more ugly side, we see the red headed stepchild 5775C that nobody thought was worth the effort actually doing better overall in games! It just goes to show that raw CPU performance is [b<]NOT[/b<] the overriding factor in game performance these days. Project Cars appears to be one big exception: Whatever it is they are doing with the CPU seems to benefit from Skylake's architecture although it benefits from Broadwell's L4 cache EVEN MORE.

      • Ashbringer
      • 4 years ago

      Let's run Prime95 and see if we can cook the CPU with AVX.

      • Andrew Lauritzen
      • 4 years ago

      > The reason that Nehalem –> SB felt like a bigger jump was that SB had a large clockspeed bump in addition to the IPC improvements.

      This. I really wonder at what point enthusiasts are going to actually take the memo about clockspeed scaling to heart. We’re like 10 years in now and people still expect massive improvements to workloads that are a) single threaded and b) not very well written in the first place.

      IPC gains have been pretty consistent for a long while. Factor out the frequency changes and you’ll get a very different narrative than what gamers seem to be convinced of…

      I don’t mean to sound too bitter, but at this point if you expect current games to see massive improvements from a CPU architecture change, you’re simply ignorant of what game workloads actually are. Frankly it’s downright amazing that they see any gains at this point.

        • chuckula
        • 4 years ago

        Getting Crystalwell or a next-gen equivalent more widely deployed will be a very useful thing. Ironically I think this review that is [i<]purportedly[/i<] about Skylake made the TR editors and many readers big believers in on-package high-bandwidth memory in a much stronger way than the R9 Fury reviews ever did!

          • Andrew Lauritzen
          • 4 years ago

          Yup, totally agreed. I have a few 5775c’s as I mentioned and they are beasts, even paired with dGPUs. I’ve been a big proponent of the EDRAM since its initial introduction but it takes people time to realize how awesome it is 🙂

          It is indeed strange that this review seems to show the 5775c in a great light vs. the higher clocked and newer architecture 6700K. I’m not sure that would really be true across a broader set of workloads, but certainly it’s interesting to see the 5775c doing particularly well in games.

            • Ninjitsu
            • 4 years ago

            To be fair to the 6700K, it’s cheaper and does almost equally well in games – and outpaces the 5775C in other benchmarks.

            • Andrew Lauritzen
            • 4 years ago

            Yes indeed it’s a good chip and that + the new chipset is what I’d recommend to most folks. It’s just interesting how the 5775c really punches above its weight in a few cases like games.

            • Ninjitsu
            • 4 years ago

            Yup. Even the i5-5675C does very well, though I haven’t seen frame time data for it.

        • w76
        • 4 years ago

        Okay, I like a lot of your posts, very informative, but that’s all silly excuses. Some types of problems may never translate to multithreaded code very well, are consumers/users of such software expected to get peanuts from Intel every generation and just like it? And what has Intel done for programs that ARE multi-threaded? Intel hasn’t increased the core count on mainstream products since the Q6600. For that privilege, consumers are expected to cough up big bucks for the high-end platform, or jump all the way to Xeon.

        And why would it be unreasonable to look for further clockspeed enhancements? Are you saying Intel’s engineering wizards couldn’t keep pushing that envelope if they weren’t so focused on keeping ARM at bay?

        So whether it’s single-threaded tasks, multi-threaded ones, or clock speed, you’re saying consumers can’t expect another Sandy Bridge (or Q6600 style improvement in core count). I’d say, you’re right, as long as AMD is the only entity keeping Intel honest. What I hear from your words is that this in the Intel mantra, the minimally acceptable improvement, and consumers can just stop expecting nice things until whatever technology leapfrogs over silicon-based processors, or qubit-powered processors become a thing. At least, that’s what it sounds like, when you make dismissive comments about different “workloads” and what they need, when these benchmarks suggest that it doesn’t apparently matter what the workload is, Intel’s improvements have been mediocre! … Except for Crystal Well, which Intel doesn’t deign to bestow upon the masses just yet, and even it only helps in very specific jobs.

        Even the “10 years in now” thing comes off wrong. This may come as a shock, but quite a lot of engineering endeavors have been going on for many, many decades longer than integrated circuits have even been around, and improvements are still steady. Look at internal combustion engines, look at some of the innovation that happens in aerospace technologies. People for decades said space travel was, as a matter of undeniable physics, hard to make any cheaper with chemical rockets, and then SpaceX made it look embarrassingly easy.

          • Andrew Lauritzen
          • 4 years ago

          > Some types of problems may never translate to multithreaded code very well, are consumers/users of such software expected to get peanuts from Intel every generation and just like it?

          Yes, that's what the memo was! You obviously don't have to "like" it, but there are physical limits to how fast this code will *ever* run: gotw.ca/publications/concurrency-ddj.htm

          > And what has Intel done for programs that ARE multi-threaded?

          It’s not just about multithreading. Haswell for instance has a full 2x the FLOPS of Ivy Bridge via AVX2. Code just takes a while to take advantage of new SIMD instruction sets and so on. This very review shows that code that uses newer instruction sets gets some pretty nice microarch advantages from BDW->SKL even, which is exactly what you’d expect from the aforementioned article.

          > So whether it’s single-threaded tasks, multi-threaded ones, or clock speed, you’re saying consumers can’t expect another Sandy Bridge

          In terms of blanket, across the board improvements, likely not. A good chunk of those improvements were purely frequency related. If you factor the frequency differences out, the IPC gains were more similar to what you got with Ivy -> Haswell or any other tock. It has absolutely nothing to do with “competition” and everything to do with the fundamental physics and workloads at play.

          As mentioned above, some code does get huge improvements. It just turns out that game CPU code is a) not particularly good users of the latest instruction sets (they tend to consider new stuff when it gets to about 95+% install base) and b) not hugely CPU bound on these higher end SKUs in the first place. Again, the results in this article demonstrate exactly that. Even the outlier 5775c “surprising” result is only a few % faster and only in some games. Conversely other reviews have stuff like Dolphin emulator – an almost entirely single-threaded application – getting really large gains from SKL.

          > Even the “10 years in now” thing comes off wrong.

          That was a reference to the article that I linked above, published in 2005. i.e. the end of frequency scaling on current silicon processes is neither new or unexpected. “Breakthroughs” are of course always possible, but in this case it is almost guaranteed to require entirely different materials and processes. It is also possible that we will run into fundamental limits here in terms of making certain types of code ever run faster.

            • tipoo
            • 4 years ago

            Do you think the lack of advanced SIMD usage like AVX2 in games has to do with game developers simply putting things that would be well suited to SIMD-iness onto the GPU instead, which could perhaps do them even better? Not that a large number of game developers are even using GPGPU to good effect; you're right that they're surprisingly slow to adopt new tech for the industry they're in.

            Not to say that using both AVX2 and GPGPU wouldn’t be even better, but perhaps developers get everything they want from just GPGPU, so they don’t bother using AVX2 and other recent advanced sets?

            • Andrew Lauritzen
            • 4 years ago

            > Do you think the lack of advanced SIMD usage like AVX2 in games has to do with game developers simply putting things that would be well suited to SIMD-iness onto the GPU instead, which could perhaps do them even better?

            It’s almost entirely a function of what I said – they don’t bother with it until it is the overwhelming majority of their audience. While recently great tools like ISPC (ispc.github.com IIRC) exist, classically the burden of developing separate code paths for different instruction sets has not been a good use of time. Thus they design for the min spec.

            And indeed that’s the other part of the problem – the min spec is usually an “old” machine with less advanced instruction sets. Developers argue that these machines are the ones that need help – there’s no need for them to make the 6700K’s faster or whatever because they are already fast enough. Now there’s some argument that there are lower end SKUs (either i3’s or lower power stuff like U’s and Y’s) but these are not typically things that most game devs worry about specifically.

            Another factor is consoles of course. Current “next gen” consoles actually have extremely weak SIMD units so while you can get a lot of FLOPs out of a Haswell or newer desktop machine, the ratio of compute power between the GPU and CPU on the consoles is very skewed. That does indeed cause some folks to move more towards “GPGPU” although there are still a lot of algorithms that are very parallel but still unsuitable for GPUs, so it’s not a great long term solution.
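            To make the "separate code paths" burden concrete, here's a minimal sketch of the kind of runtime dispatch a developer ends up maintaining if they want AVX2 on machines that have it without raising the min spec. The function names are hypothetical, it assumes GCC or Clang on x86, and it's nowhere near what a real engine (or ISPC-generated code) looks like.

            [code<]
            /* Minimal sketch of runtime SIMD dispatch. The wide add itself only
             * needs AVX; AVX2 is used as the gate purely for illustration. */
            #include <stdio.h>
            #include <immintrin.h>

            static void add_scalar(const float *a, const float *b, float *out, int n)
            {
                for (int i = 0; i < n; i++)
                    out[i] = a[i] + b[i];
            }

            __attribute__((target("avx2")))
            static void add_wide(const float *a, const float *b, float *out, int n)
            {
                int i = 0;
                for (; i + 8 <= n; i += 8) {            /* 8 floats per 256-bit op */
                    __m256 va = _mm256_loadu_ps(a + i);
                    __m256 vb = _mm256_loadu_ps(b + i);
                    _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
                }
                for (; i < n; i++)                      /* scalar tail */
                    out[i] = a[i] + b[i];
            }

            int main(void)
            {
                float a[10] = {1,2,3,4,5,6,7,8,9,10};
                float b[10] = {1,1,1,1,1,1,1,1,1,1};
                float out[10];

                /* pick a code path once, based on what the CPU reports */
                if (__builtin_cpu_supports("avx2"))
                    add_wide(a, b, out, 10);
                else
                    add_scalar(a, b, out, 10);

                printf("out[9] = %.1f\n", out[9]);
                return 0;
            }
            [/code<]

            Multiply that pattern across every hot loop, every instruction set you care about, and every compiler you ship with, and it's easy to see why studios wait for the new stuff to reach the bulk of their audience first.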

        • Jason181
        • 4 years ago

        Precisely the reason why so many SB users haven’t upgraded. This looks like enough of an increase over SB to consider, especially for 120 fps gaming.

        I don’t think people expect massive improvements so much as they [i<]want[/i<] them. Gaming is one of the few very common non-commercial applications that really aren't "good enough" so people are naturally going to focus on improvements there. Having said that, looking back from SB to Skylake you can see a definite and large improvement, it's just that each step was incremental along the way.

      • jihadjoe
      • 4 years ago

      IKR? Who’da thunk that the most exciting thing from Intel is basically an on-package cache.
      It’s Pentium Pro all over again, except instead of 256kB we now have 128MB.

    • chuckula
    • 4 years ago

    Buffalo!

    Biggest takeaway: Skylake needs that L4 cache from the i7 5775C.
    Second biggest takeaway: Games really aren’t CPU bound.
    Third biggest takeaway: [quote<]The Skylake 6700K achieves the highest per-thread performance of any of the CPUs tested. It's roughly 80% faster than in the single-threaded test than AMD's FX-8370, which is either awesome or depressing. Maybe a little of both.[/quote<] AMD is promising a 40% IPC increase with Zen??

      • tsk
      • 4 years ago

      Yay buffalo!

      40% IPC increase from Excavator, although I don't know how much of a step up that is from Piledriver.

        • USAFTW
        • 4 years ago

        Steamroller and Excavator were both supposed to bring 10+% IPC improvements over Bulldozer. But we never got to know how much more performance Excavator had over Bulldozer, since we didn't get any post-Piledriver high-end desktop parts.

      • utmode
      • 4 years ago

      On the "80% faster than the FX-8370" bit: at least you have to admire Intel for not being Usain Bolt at the Olympics.

      • Gyromancer
      • 4 years ago

      You woke up an hour early, just to beat me to saying buffalo… lol

      • tipoo
      • 4 years ago

      69% faster, actually. The 40% is over Excavator, not the current Piledriver. It's cumulative, and last I checked it should land in the ballpark of lower-end Haswell IPC. Even if some of those cores never came out on the desktop, there were supposed to be two more 10% jumps between Piledriver and Excavator, with Zen's promised 40% on top of that: 1.1 × 1.1 × 1.4 ≈ 1.69x in total.

        • ronch
        • 4 years ago

        I’m putting it at 62% actually.

        1.1 * 1.05 * 1.4 = 1.62

        1.1 is the diff between Steamroller and Piledriver, 1.05 is Excavator over Steamroller, and 1.4 is, you know. I’m being more conservative here though. So being just 62% faster per clock, Zen needs to clock faster than Skylake to make up for the IPC deficit. Can their foundry partner’s 14/16nm node do that?

      • Krogoth
      • 4 years ago

      Skylake doesn't need that cache; it would only drive up costs for trivial gains in the majority of mainstream applications.

      However it is a different story for Skylake-E and Skylake-EP.

      • derFunkenstein
      • 4 years ago

      Maybe Zen will run at 5.5GHz to make up the difference. Or maybe someone slap me.

        • NoOne ButMe
        • 4 years ago

        *slap*
        AMD will be on a 14nm node that clocks worse than Intel's.
        😛

        • ronch
        • 4 years ago

        5.5GHz and a 300W TDP. The PSU makers will have a heyday.

      • ronch
      • 4 years ago

      40% better than Excavator, not Piledriver. By some rough guesstimates I think Zen will be ~60% faster per clock than Piledriver. But it won’t matter if AMD doesn’t hit high clocks, say, 4.0GHz. Intel just kept setting the bar higher and higher over the years. Making Excavator the basis for Zen’s performance improvement makes sense for AMD since it’s their most efficient core, but it’s not really much when you have Intel as your competitor.
