Intel’s Core i7-7700K ‘Kaby Lake’ CPU reviewed

It’s tough to be at the top of your game. The Tiger Woodses, Roger Federers, and LeBron Jameses of the world can blaze all the paths they want, set all the records, break all the barriers, force the invention of new metaphors for pushing the bounds of what’s possible. They win and win and win some more. Eventually, though, even the best and brightest have to stumble—or, at least, plateau.

Six years ago almost to the day, Intel launched the brilliant Sandy Bridge family of CPUs to universal acclaim. Clock for clock, Sandy Bridge redefined desktop performance by every measure we cared to test. Folks building new PCs in 2011 had ample reason to retire even their Nehalem and Lynnfield PCs in favor of Sandy Bridge. Overclockers enjoyed plenty of “free” headroom from K-series Sandy CPUs with modest aftermarket coolers on top. Life was very, very good.

Intel followed Sandy Bridge with Ivy Bridge, the process-advance “tick” to Sandy’s architectural “tock.” While Ivy delivered some under-the-hood improvements to Sandy’s microarchitecture and its instructions-per-clock throughput, the big changes on the third-generation Core chips came in the integrated-graphics and power-efficiency departments as Intel began chasing thin-and-light notebooks with vigor.

Enthusiasts with Sandy Bridge CPUs still ended up sitting pretty when Ivy came around. Intel’s third-generation Core chips proved less forgiving to overclockers than their forebears, and their performance improvements on the desktop weren’t overwhelming compared to Sandy.

Meanwhile, AMD’s ambitious Bulldozer architecture failed to demolish Sandy Bridge Core i5s, much less the Core i7-2600K. Even with AMD’s subsequent refinements, Bulldozer’s derivatives never quite matched the killer combo of single-threaded performance, power efficiency, and gaming competence that Intel delivered with tick-tock regularity. The potent Radeon integrated graphics in AMD APUs did little to capture the enthusiast imagination, either.

As Intel continued to perfect its microarchitectures and fabrication processes, the Haswell, Broadwell, and Skylake architectures each brought progressively more modest single-threaded performance improvements with them. The Skylake Core i7-6700K, Intel’s most recent mainstream desktop range-topper, is one of the finest all-around CPUs ever made, and its only peers come from past generations of Core i7s.

With that long line of winners, AMD’s perennial struggles with its CPUs and APUs, and the maddeningly difficult pursuit of smaller process technologies industry-wide, Intel’s processors have enjoyed more-or-less complete domination of the PC in recent years. How many among us can say they have nothing left to beat but their own past successes? Indeed, when you’re doing so well, what’s the motivation to stick your neck out even further and perhaps overplay your hand?

The quad-core Kaby Lake die. Source: Intel

And that brings us to this morning. Even in light of the slowing pace of per-clock performance improvements from Intel, the Core i7-7700K CPU that’s launching today feels like the company’s least-ambitious desktop chip in quite some time. From dispatch to retirement, the i7-7700K’s basic Kaby Lake CPU core is identical to Skylake’s. Aside from some improvements in its fixed-function video engines to efficiently handle 4K video encoded with the next-gen HEVC and VP9 video codecs, Kaby’s integrated graphics are largely a carry-over from the Gen9 IGP technology on Skylake chips, too. Womp.

The Skylake Core i5-6600K (left) and the Kaby Lake Core i7-7700K (right)

Basically every hope on our wish list for Kaby was dashed during Intel’s Kaby Lake introduction at IDF last year. No, there will be no eDRAM for socketed Kaby Lake CPUs. No, VESA Adaptive-Sync still hasn’t been integrated into the Kaby Lake IGP. So on and so forth. If you were hoping for some revolutionary architectural change from Intel to mark the seventh generation of Core processors and put the last nail in Sandy Bridge’s coffin, well, Kaby Lake ain’t it.

i5-6600K on the left, i7-7700K on the right

Instead, the biggest change in Kaby comes when it’s forged in Intel’s foundries. Kaby is the first CPU from Intel’s new “optimize” product phase, and that means the company has tweaked and tuned its 14-nm tri-gate transistors to extract every bit of performance possible from its fabs. With those improvements, the Core i7-7700K enjoys a 200-MHz base clock boost over the i7-6700K, to 4.2 GHz, and its Turbo clock is a dizzying 4.5 GHz. The Core i7-7700K’s TDP has held steady at 91W even with that clock boost, halting a slow upward creep that began with Ivy Bridge. If there’s any single-threaded performance improvement from Kaby, it should come entirely from this extra clock speed.

Like the Skylake CPUs before it, the Core i7-7700K exposes a couple of new knobs for overclockers to play with. Kaby Lake incorporates a base-clock-aware dynamic-voltage-and-frequency-scaling (DVFS) feature that lets the CPU’s power-management circuitry account for changes in both the base clock and the multiplier when it adjusts the CPU’s P-states. Intel says its past CPUs took only multiplier changes into account in that scenario, so BCLK overclocking required finding a stable fixed voltage. The new BCLK-aware approach should give overclockers a simpler BCLK-tweaking process along with the power-saving benefits of DVFS at idle.

Unlocked Kaby Lake chips also inherit Broadwell-E’s AVX Offset feature. That setting lets tweakers run the chip at lower clocks under heavy AVX2 workloads while maintaining higher frequencies when those instructions aren’t being executed, potentially increasing the light-load frequency gains one can achieve with a given chip. If AVX-induced instability held your Skylake CPU back from its maximum potential, Kaby Lake might break those shackles.
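As a back-of-the-envelope illustration of how the offset works, the sketch below uses hypothetical settings (a 48x core ratio with a three-bin AVX offset), not measurements from our chip:

```python
# Hypothetical illustration of how an AVX offset trades clock speed for stability.
# These numbers are made up for the example, not results from our i7-7700K.
BCLK_MHZ = 100      # base clock
CORE_RATIO = 48     # all-core multiplier dialed in by the overclocker
AVX_OFFSET = 3      # bins subtracted while AVX2 code is executing

def effective_clock_mhz(avx_active: bool) -> int:
    """Return the effective core clock in MHz for the current workload type."""
    ratio = CORE_RATIO - AVX_OFFSET if avx_active else CORE_RATIO
    return BCLK_MHZ * ratio

print(effective_clock_mhz(avx_active=False))  # 4800 MHz for light-load code
print(effective_clock_mhz(avx_active=True))   # 4500 MHz under heavy AVX2 loads
```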

 

The Z270 chipset and friends

Intel is introducing several new chipsets to go along with the desktop Kaby Lake lineup. The one of most interest to enthusiasts will likely be the Z270 platform. Z270 retains the same LGA 1151 socket that underpinned Skylake CPUs, so one can mix and match Z170 and Z270 motherboards with Skylake and Kaby Lake CPUs in whatever mind-bendingly complex compatibility matrix that produces. It’ll all work together. You can see just how MSI and Aorus are implementing the Z270 in our full reviews of the MSI Z270 Gaming Pro Carbon and the Aorus Z270X-Gaming 5.

Drop a Kaby Lake CPU into a 200-series motherboard, and it will boast out-of-the-box compatibility with DDR4-2400 RAM, a nice little boost that came along with Broadwell-E CPUs last year. Z270 will also give motherboard makers four more PCIe 3.0 lanes from the chipset to pair with storage devices and peripheral I/O controllers. That means a total of 24 such PCIe 3.0 lanes from the Z270 platform controller hub and 16 more from a Kaby Lake CPU. In a world where more and more devices hunger for PCIe lanes, that small update could prove quite handy. Intel hasn’t updated the DMI 3.0 interconnect between the processor and chipset, however. That link still offers bandwidth equivalent to about four PCIe 3.0 lanes.
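For the curious, that “about four lanes” figure falls out of the standard PCIe 3.0 math (8 GT/s per lane with 128b/130b encoding). Here’s a quick back-of-the-envelope sketch; the figures are theoretical, per direction, and ignore protocol overhead:

```python
# Rough math behind "DMI 3.0 offers bandwidth equivalent to about four PCIe 3.0 lanes."
# Figures are theoretical, per direction, and ignore packet/protocol overhead.
SIGNALING_RATE = 8e9      # PCIe 3.0 / DMI 3.0 transfer rate per lane, in T/s
ENCODING = 128 / 130      # 128b/130b line-coding efficiency

lane_bytes_per_s = SIGNALING_RATE * ENCODING / 8
dmi_bytes_per_s = 4 * lane_bytes_per_s

print(f"One PCIe 3.0 lane: {lane_bytes_per_s / 1e9:.2f} GB/s")  # ~0.98 GB/s
print(f"DMI 3.0 (x4-ish):  {dmi_bytes_per_s / 1e9:.2f} GB/s")   # ~3.94 GB/s
```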

Despite the broad cross-compatibility among Skylake and Kaby Lake CPUs and 100-series and 200-series chipsets, there are some advantages to going with Intel’s latest and greatest. Pairing a Kaby Lake CPU with a 200-series motherboard is the only way builders will be able to take advantage of Optane Memory, a new intermediate data-caching product that’ll sit somewhere between main memory and bulk storage. Optane Memory could be the first appearance of Intel’s 3D XPoint technology in a consumer product, but we know next to nothing about it right now aside from the fact that it can only be used with Kaby Lake CPUs and 200-series chipsets.

Optane Memory superficially sounds like a much-improved reinvigoration of the Turbo Memory solid-state caching product that made a rather inglorious appearance in some laptops several years ago. We’ll hopefully learn more about this technology soon and get an opportunity to give it a spin, but Intel seems to think Optane Memory will have the greatest benefit for systems that rely on hard drives for primary storage. Given the increasing prevalence of large NAND flash SSDs as the primary storage devices for enthusiast desktops, we’ll have to see whether Optane Memory is a valuable addition to those systems.

PCs with 200-series chipsets inside will also gain support for Intel Smart Sound technology, a dedicated DSP that can work with Windows 10 to enable features like system wake-up with Cortana. Compatible 200-series chipsets and Kaby Lake processors with Intel vPro support will also be able to work with Intel Authenticate technology, a hardware-enforced identity management system that can require the user to log in using any of several factors. Intel says it’s working with consumer software providers to add support for hardware-enhanced security measures for applications like password managers, touch-to-pay with biometrics, and more. The availability of those features will likely depend heavily on what a given system integrator chooses to include in a PC, so we’d expect to see them mostly in laptops where tight integration of the necessary hardware can be guaranteed.

As you can see from Intel’s comparison diagram of all of its new desktop chipsets, only the Z270 chipset will permit the PCIe 3.0 lane-switching from the processor that one might want for CrossFire or SLI setups. The remaining feature differences between the chipsets largely boil down to peripheral connectivity, RAID support, and management features for IT departments. Given the wide range of price points that motherboard makers were able to hit with Z170 boards, we’d expect that outside of the most budget-limited systems, builders will be able to choose a Z270 motherboard at the price that best meets their needs with time.

 

Our testing methods

We ran each of our benchmarks at least three times and reported the median result. Our test systems were configured like so:

Processor | AMD FX-8370 | Intel Core i7-2600K / Core i7-3770K
Motherboard | Gigabyte GA-990FX-Gaming | Asus P8Z77-V Pro
Chipset | 990FX + SB950 | Z77 Express
Memory size | 16 GB (2 DIMMs) | 16 GB (2 DIMMs)
Memory type | Corsair Vengeance Pro Series DDR3 SDRAM | Corsair Vengeance Pro Series DDR3 SDRAM
Memory speed | 1866 MT/s | 1866 MT/s
Memory timings | 9-10-9-27 1T | 9-10-9-27 1T

 

Processor | Intel Core i7-4790K | Intel Core i7-6700K / Core i7-7700K | Intel Core i7-6950X
Motherboard | Asus Z97-A/USB 3.1 | Aorus Z270X-Gaming 5 | Gigabyte GA-X99-Designare EX
Chipset | Z97 Express | Z270 | X99
Memory size | 16 GB (2 DIMMs) | 16 GB (2 DIMMs) | 64 GB (4 DIMMs)
Memory type | Corsair Vengeance Pro Series DDR3 SDRAM | G.Skill Trident Z DDR4 SDRAM | G.Skill Trident Z DDR4 SDRAM
Memory speed | 1866 MT/s | 3866 MT/s | 3200 MT/s
Memory timings | 9-10-9-27 1T | 18-19-19-39 1T | 16-18-18-38 1T

They all shared the following common elements:

Storage 2x Kingston HyperX 480GB SSDs
CPU cooler Cooler Master MasterLiquid Pro 280
Discrete graphics Gigabyte GeForce GTX 1080 Xtreme Gaming
OS Windows 10 Pro
Power supply Corsair RM850x

Thanks to Corsair, Kingston, Asus, Gigabyte, Aorus, Cooler Master, Intel, G.Skill, and AMD for helping to outfit our test rigs with some of the finest hardware available.

Since the Aorus Z270X-Gaming 5 motherboard that we’re using to test our Core i7-7700K is equally compatible with the Core i7-6700K, we used that board instead of a Z170-powered one to test both CPUs. That decision gives the Core i7-6700K even footing with its successor when it comes to RAM speeds, so any difference in performance results between the two should come down to the differences between Skylake and Kaby Lake. Our Z170 motherboard claims DDR4-3866 support with only one DIMM, and it didn’t seem ideal to us to produce a set of results with the Core i7-7700K that didn’t take advantage of its support for higher RAM speeds.

For perspective (and also for fun), we’ve run the Core i7-6950X through our benchmarking suite alongside the Core i7-7700K. We didn’t get a good opportunity to review that chip when it first arrived, so it only seemed fair to give it a turn in the spotlight. That 10-core, 20-thread CPU sells for $1650 right now, so it’s in a completely different ballpark than Intel’s mainstream CPUs. Still, it’s good to finally get an idea of what Intel’s biggest, baddest consumer chip can do.

Some further notes on our testing methods:

  • The test systems’ Windows desktops were set at a resolution of 1920×1080 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.

  • After consulting with our readers, we’ve decided to enable Windows’ “Balanced” power profile for the bulk of our desktop processor tests, which means power-saving features like SpeedStep and Cool’n’Quiet are operating. (In the past, we only enabled these features for power consumption testing.) Our spot checks demonstrated to us that, typically, there’s no performance penalty for enabling these features on today’s CPUs. If there is a real-world penalty to enabling these features, well, we think that’s worthy of inclusion in our measurements, since the vast majority of desktop processors these days will spend their lives with these features enabled.

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Memory subsystem performance

Given the dearth of changes under Kaby Lake’s hood, we’re not going to repeat our usual whole-hog suite of Sandra cache bandwidth and latency tests for the Core i7-7700K. Instead, we’re using some simple memory-bandwidth benchmarks from the AIDA64 utility to get a basic idea of what Skylake and Kaby Lake can do on the same Z270 platform with DDR4-3866 RAM strapped in.

Yikes. There’s almost no difference between the Core i7-6700K and the Core i7-7700K in these synthetic benchmarks of memory performance. More striking is just how much more data our motherboard’s DDR4-3866 support lets those chips move around compared to the Intel CPUs with DDR3-1866 hooked up. In fact, the Skylake and Kaby Lake CPUs paired with the Z270 platform are elbowing in on the results the Core i7-5960X posted in these same AIDA64 tests way back when. That’s kind of scary performance from a dual-channel memory setup.

Also, get used to the Core i7-6950X taking the top spot in many of our benchmarks. It’s just along for the ride in these tests, but it still turns in some jaw-dropping numbers relative to Intel’s more mainstream CPUs. Feel free to ooh and ahh as necessary.

The move to exotic high-speed DDR4 results in some of the lowest memory access latencies we’ve ever seen in our tests. Impressive.

Some quick synthetic math tests

To get a basic idea of how the CPUs on our bench stack up, we’re using some of the handy built-in benchmarks from the AIDA64 utility. These benches can take advantage of the various feature sets of Intel and AMD’s latest chips. Of the results shown below, PhotoWorxx uses AVX2 instructions (and falls back to AVX on Ivy Bridge, et al.), CPU Hash uses AVX (and XOP on Bulldozer/Piledriver), and FPU Julia and Mandel use AVX2 with FMA.
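For readers who want a feel for what those FPU tests hammer on, here’s a crude NumPy stand-in for a Julia-set iteration benchmark. This is not AIDA64’s code, just an illustration of the multiply-add-heavy math involved:

```python
# A crude stand-in for the kind of multiply-add math AIDA64's FPU Julia test
# exercises. This is NOT AIDA64's code, just a NumPy sketch for illustration.
import time
import numpy as np

def julia_iterations(n=1024, c=complex(-0.7, 0.27015), max_iter=200):
    xs = np.linspace(-1.5, 1.5, n)
    ys = np.linspace(-1.5, 1.5, n)
    z = xs[None, :] + 1j * ys[:, None]           # grid of starting points
    counts = np.zeros(z.shape, dtype=np.int32)   # escape-time counters
    for _ in range(max_iter):
        mask = np.abs(z) <= 2.0                  # points that haven't escaped yet
        z[mask] = z[mask] * z[mask] + c          # the core multiply-add step
        counts += mask
    return counts

start = time.perf_counter()
julia_iterations()
print(f"Julia iteration time: {time.perf_counter() - start:.2f} s")
```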

In these tests, the Core i7-7700K behaves like a clock-bumped Skylake chip. As we’d expect, the i7-7700K takes a small lead over the i7-6700K when its extra Turbo headroom comes into play, and it behaves a lot like the i7-6700K when it doesn’t.

Power consumption

We also won’t be repeating our in-depth power measurements and task-energy calculations for this review. Instead, we’ve chosen to do some quick platform power draw measurements with our trusty Watts Up? power meter. The power supply for our system was plugged into the Watts Up?, while the monitor and other peripherals were plugged into a separate outlet. We tested each system’s power draw using the “bmw27” benchmark file for the Blender 3D rendering app.
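The Watts Up? can log readings over time, so reducing a run to idle and load averages is simple. A minimal sketch appears below; the log-file names and the one-reading-per-line format are our assumptions for the example, not a documented export format:

```python
# Minimal sketch: average wall-socket power from a logged series of samples.
# Assumes a plain-text log with one wattage reading per line; the file names
# and format here are assumptions for illustration.

def average_watts(path: str) -> float:
    with open(path) as log:
        samples = [float(line) for line in log if line.strip()]
    return sum(samples) / len(samples)

print(f"Idle: {average_watts('idle.log'):.1f} W")
print(f"Blender bmw27 load: {average_watts('blender_load.log'):.1f} W")
```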

While a lot of the differences in idle power draw above can be put down to variations in motherboards, the Core i7-7700K seems to need a couple dozen more watts under load to do its thing. We may have to redo these numbers with future firmware updates for our Z270 motherboard to see whether the i7-7700K’s power draw is within the expected range.

Now that we have a basic understanding of the Core i7-7700K, its platform, and its performance, let’s see how it handles some games.

 

Doom (OpenGL)
Doom likes to run fast, and especially so with a GTX 1080 pushing pixels. We figured the game’s OpenGL mode would be an ideal test of each CPU’s ability to keep that beast of a graphics card fed, so we cranked up all of its eye candy at 1920×1080 and went to work with our usual test run in the beginning of the Foundry level.


The nice progression of average frame rates above suggests our hunch about the GTX 1080’s hunger for work from the CPU is correct. The fast clocks and high IPC of the Core i7-6700K and Core i7-7700K result in the best performance from the GTX 1080, and we bet the extra bandwidth afforded by the DDR4-3866 memory we’re using doesn’t hurt, either. At the other end of the chart, pairing the FX-8370 with the GTX 1080 cuts the card’s average frame rate roughly in half compared to the Core i7-7700K’s. Ouch. The AMD chip’s 99th-percentile frame time is significantly higher than those turned in by the Intel chips, too.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time the GTX 1080 spends beyond certain frame-time thresholds, each with an important implication for gaming smoothness. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33.3 ms correlates to 30 FPS, or a 30-Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame. And 8.3 ms corresponds to 120 FPS, an even more demanding standard that Doom can easily meet or surpass on hardware that’s up to the task.
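For the curious, here’s roughly how those figures fall out of the raw frame-time data. This is a simplified sketch rather than our actual analysis scripts, and the frame times in it are made up:

```python
# Simplified sketch of the "time spent beyond X" accounting and the
# 99th-percentile frame time, given a list of frame times in milliseconds.

def time_beyond(frame_times_ms, threshold_ms):
    """Total milliseconds accumulated past the threshold on slow frames."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

def percentile_99(frame_times_ms):
    """Frame time that 99% of frames come in under."""
    ordered = sorted(frame_times_ms)
    return ordered[int(round(0.99 * (len(ordered) - 1)))]

frame_times = [7.2, 8.9, 6.5, 35.0, 9.1, 16.9, 8.0]   # made-up sample data
for threshold in (50.0, 33.3, 16.7, 8.3):
    print(f"Time beyond {threshold} ms: {time_beyond(frame_times, threshold):.1f} ms")
print(f"99th-percentile frame time: {percentile_99(frame_times):.1f} ms")
```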

Happily, none of the chips in our test suite cause the GTX 1080 to spend more than a handful of milliseconds beyond the critical 16.7-ms barrier. The Core i7-6700K and the Core i7-7700K each let the GTX 1080 spend less than half a second working on frames that take longer than 8.3 ms to produce, compared to the two to five seconds racked up by the Sandy, Ivy, and Haswell Core i7s. The FX-8370 and GTX 1080 combo spends more than 12 seconds under 120 FPS, though, a disappointing result in this company.

 

Doom (Vulkan)

Our previous graphics-card tests have shown that Doom‘s Vulkan renderer doesn’t provide the best performance with GeForces, but we figured we might as well give both of the game’s APIs a try to see whether the switch had a meaningful impact on performance from these CPUs. Aside from the switch to Vulkan, we left all of the game’s quality settings the same for this test.


Although we haven’t seen the best performance from Doom‘s Vulkan renderer on GeForce cards in the past, making the switch here has a surprising effect on both average frame rates and the 99th-percentile frame time the GTX 1080 can produce. Every CPU here gets a frame-rate increase of some sort, and 99th-percentile frame times fall across the board. The FX-8370 gets the biggest boost of all.


In our measures of “badness,” each CPU allows the GTX 1080 to spend an imperceptible amount of time past 16.7 ms working on tough frames. The real improvement comes past the 8.3-ms threshold, where most of the Intel CPUs let the GTX 1080 spend about a tenth of a second on challenging scenes. The Core i7-2600K causes the GTX 1080 to spend about two-tenths of a second on tougher frames—still impressive.

The “most improved” award in these metrics goes to the FX-8370. The Piledriver chip lets the GTX 1080 spend just 1.2 seconds on tough frames with Vulkan running the show, a factor-of-10 improvement over its OpenGL performance. Still, the FX-8370 can’t quite match the Intel parts for absolute smoothness.

Doom‘s performance with Vulkan and these CPUs is fascinating. 99th-percentile frame times improve across the board, and most strikingly, those figures end up more or less equal regardless of the age of the chip in question. We’ve seen hints of this equalizing effect with Vulkan in informal testing before, but this is the first time we’ve formally quantified it.

Not every game has a Vulkan rendering path, to be certain, but Doom‘s implementation suggests that clever developers can extract substantial amounts of performance from older CPUs with the new API. That fact could have important implications for folks with older systems if Vulkan becomes more popular in future titles. For now, though, these results are more of an outlier than the norm.

 

Crysis 3

Although Crysis 3 is nearly four years old now, its lavishly detailed environments and demanding physics engine can still stress every part of a system. To put each of our CPUs to the test, we took a one-minute run through the grassy area at the beginning of the “Welcome to the Jungle” level with settings cranked at 1920×1080.


So that’s an unusual result. At least for the test run we chose, it seems Crysis 3 leans heavily enough on the CPU that the beastly Core i7-6950X actually allows the GTX 1080 to turn in the best overall average frame rate and 99th-percentile frame time. Monster Broadwell-E chip aside, the Core i7-7700K’s clock-speed advantage over the Core i7-6700K seems to let it turn in slightly better average-FPS and 99th-percentile figures, as well. At least at the settings we chose, Crysis 3 still seems able to take advantage of as much CPU as one can throw at it.


At the critical 16.7-ms threshold of “badness,” only the FX-8370 causes the GTX 1080 to spend a substantial amount of time working on frames that would drop animation rates below 60 FPS. The Core i7-7700K turns in a perfect score here, while the i7-6700K and i7-6950X aren’t far behind.

At the more challenging 8.3-ms threshold, the Core i7-6950X puts a fine point on its unusual win by holding up the GTX 1080 for just under a second in total. The i7-7700K still delivers an excellent result here by allowing the graphics card to spend about half the time on tough frames that the Core i7-6700K does. Otherwise, the results largely sort themselves out by the age of the chips in question. The i7-2600K, i7-3770K, and FX-8370 all deliver substantially worse gaming performance with the GTX 1080 than the i7-4790K and its newer cohorts.

 

Far Cry 4

Although it’s not quite as demanding as Crysis 3, my experience with Far Cry 4 suggests it’s still a challenge for most systems to run smoothly with the settings cranked. To confirm or dispel that impression, I turned up the game’s quality settings at 1920×1080 and ran through our usual test area.


Interesting. Far Cry 4‘s performance improvements taper off once we start testing it with Haswell and newer chips. The i7-3770K, i7-4790K, and i7-6700K all bunch up behind the i7-7700K, trailed slightly by the i7-2600K and the i7-6950X. The i7-7700K also turns in the best 99th-percentile frame time by a slight margin, but the differences aren’t large among the Intel chips.


Our time-spent-beyond results shake out about as you’d expect from the average frame rates and 99th-percentile frame times above. The i7-7700K lets the GTX 1080 deliver nearly all of its frames in 16.7 ms, and the other Intel chips in the test suite hamper the card just a bit more. Meanwhile, the FX-8370 forces the GTX 1080 to spend nearly two seconds on frames that take more than 16.7 ms to render, and that performance translates into a notably less smooth gaming experience than the Intel CPUs provide. 

 

Deus Ex: Mankind Divided

One of 2016’s most demanding titles for any system, Deus Ex: Mankind Divided seemed like an ideal game to include in our test suite. We performed our usual test run in the game’s DirectX 11 mode with lots of eye candy turned on.


Whoops. Seems our test settings for Mankind Divided ended up more GPU-bound than we had expected. Still, these results show some minor variations in 99th-percentile frame times among the Intel CPUs we’re testing, so even with a game that stresses the graphics card as much as Mankind Divided does, it helps to have a powerful CPU backing it up—just not by that much.


Given the rather pedestrian frame rates our test rigs achieved in this test, it’s worth taking a look at the 33.3-ms threshold first. The Core i7-7700K delivers every one of its frames in under 33.3 ms, while the other chips hamstring the GTX 1080 just a bit. That said, there’s not a substantial gap between the best- and worst-performing chips in this test once we flip over to the 16.7-ms threshold.

 

Grand Theft Auto V
Grand Theft Auto V recently offered us hints that it can still put the hurt on CPUs as well as graphics cards, so we ran through our usual test run with the game’s settings turned all the way up at 1920×1080.


Wow. The Core i7-6700K, Core i7-7700K, and Core i7-6950X all deliver average-FPS numbers and 99th-percentile frame times that are quite a bit better than anything else in our test suite. The i7-7700K ekes out a tiny advantage over its Skylake stablemate, though, perhaps thanks to its extra clock speed. We may have to perform some further testing to see whether the extra bandwidth afforded by our DDR4-3866 memory on our Z270 test rig makes a meaningful difference to GTA V‘s performance given these numbers.


All of the Intel CPUs in this test pose little hindrance to the GTX 1080’s thirst for work at the critical 16.7-ms threshold. None of the Intel chips hold up the card for more than a fraction of a second, while the FX-8370 makes the graphics card spend more than three seconds of our test run waiting to complete tough frames that drop frame rates below 60 FPS. GTA V seems to care about high single-threaded performance and memory throughput when it’s running on a GTX 1080, and the FX-8370 is thoroughly outclassed in those measures by even its contemporary, the Core i7-2600K.

We can see some better separation among the Intel CPUs in this test by clicking over to the 8.3-ms threshold. There, the Core i7-7700K spends about two seconds less contributing to tough frames than the Core i7-6700K does. The Core i7-6950X mixes it up with that highest-performing pair of chips, too. The i7-4790K, the i7-3770K, and the i7-2600K all contribute significantly more milliseconds to the time spent under 120 FPS in our test run than Intel’s most recent chips, and the FX-8370 simply bottlenecks the GTX 1080.

 

Memory frequency and performance scaling with Arma III

Our own Colton Westrate, aka “drfish,” is a huge fan of Bohemia Interactive’s DayZ. He’s long suggested that we include Bohemia’s Arma III, which is powered by the same engine, in our CPU performance benchmarks because of the disproportionate demand it places on single-threaded CPU performance and memory bandwidth. We figured now was as good a time as any to give it a shot. 

We didn’t test Arma with every CPU on our bench because of those inherent performance characteristics and a lack of diverse DDR3 kits. Informally, though, the move from a Core i7-3770K with DDR3-1866 to the Core i7-7700K with DDR4-2133 netted us about 11 FPS more on average. Given that fact, we ran the game through its paces with the Core i7-7700K and three different memory kits: one running at DDR4-2133, the second at DDR4-3000, and the third at DDR4-3866. Our thanks to G.Skill for hooking us up with the requisite kits.

To make our Arma benchmarking repeatable, we relied on the community-created “Yet Another Arma Benchmark” scenario. This roughly two-minute test somewhat replicates what it’s like to be present in a multiplayer Arma game, and it’s quite demanding. We also cranked every graphics setting the game had to offer save for the full-screen anti-aliasing slider.

One thing is for certain if you’re an Arma III player: sticking with good old DDR4-2133 isn’t going to let you get all of the performance that’s available from a Core i7-7700K in this game. The move from DDR4-2133 to DDR4-3000 nets a big increase in average frame rate and a big decrease in 99th-percentile frame times. Past that point, the benefits are somewhat more modest. Our exotic DDR4-3866 kit still offers improved performance over slower memory, but whether it’s worth the $65 or so extra over a DDR4-3000 kit of a similar capacity is something that only fans of the game can judge.


Our graph of the tail end of the frame-time distribution for Arma III and our “badness” graphs offer a little more insight into just what that extra $65 buys. The biggest improvement in Arma performance still comes from the move to DDR4-3000, but stepping up to DDR4-3866 means that the game spends about five fewer seconds below 60 FPS in the Yet Another Arma Benchmark scenario. That’s not nothing when even a Core i7-7700K and a GeForce GTX 1080 don’t seem to be helping Arma III‘s performance all that much. If you’re an Arma fan and already have the best CPU and graphics card that you can afford, it seems like it’s worth stepping up to a reasonably fast (or even ludicrously fast) memory kit if your motherboard and CPU can handle it.

 

Productivity

Compiling code in GCC

Our resident developer, Bruno Ferreira, helped put together this code compiling test. Qtbench tests the time required to compile the Qt SDK using the GCC compiler. The number of jobs dispatched by the Qtbench script is configurable, and we set the number of threads to match the hardware thread count for each CPU.
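Qtbench is our own script, but the thread-count matching boils down to something like the sketch below. The make invocation and an already-configured Qt source tree are assumptions for the example, not the actual Qtbench code:

```python
# Sketch of matching compile jobs to the hardware thread count.
# Not TR's Qtbench script; assumes a configured Qt source tree in the
# current directory and make on the PATH.
import os
import subprocess
import time

jobs = os.cpu_count()      # hardware thread count, e.g. 8 on the i7-7700K
start = time.perf_counter()
subprocess.run(["make", f"-j{jobs}"], check=True)
print(f"Compiled with {jobs} jobs in {time.perf_counter() - start:.1f} s")
```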

No surprises here. The Core i7-7700K is the fastest quad-core chip on the bench in this test, and the Core i7-6950X can stretch all 20 of its threads to take a commanding lead.

Javascript performance

In these common Javascript benchmarks, the Core i7-7700K’s superior single-threaded performance lets it pull ahead of the pack by a small margin. In all of these tests, the Kaby chip is about twice as fast as the FX-8370. There’s no replacement for displacement when it comes to per-core performance, and Intel retains a commanding lead by this measure.

7-Zip benchmark

Compressing and decompressing zip archives is one of the more common tasks I still perform on a desktop PC, and the Core i7-7700K is the best at it of the mainstream chips here.

TrueCrypt disk encryption

Although the TrueCrypt project has fallen on hard times, its built-in benchmarking utility remains handy for a quick test of these chips’ accelerated and non-accelerated performance when we ask them to encrypt data. The AES test should take advantage of hardware acceleration on the chips that support Intel’s AES-NI instructions, while the Twofish test relies on good old unaccelerated number-crunching prowess.

For both the accelerated and non-accelerated encryption algorithms we benched with TrueCrypt, the Core i7-7700K is the fastest thing going for a mainstream CPU.

Scientific computing with STARS Euler3D

Euler3D tackles the difficult problem of simulating fluid dynamics. It tends to be very memory-bandwidth intensive. You can read more about it right here.

Unsurprisingly, the Core i7-6700K and i7-7700K both take substantial leads over the older quad-core chips in our stable in Euler3D, thanks in part to the copious memory bandwidth that DDR4-3866 affords. The Core i7-6950X continues to play in a completely different league. Moving on.

 

3D rendering and video processing

Cinebench

The Cinebench benchmark is based on Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. This test runs with just a single thread and then with as many threads as CPU cores (or threads, in CPUs with multiple hardware threads per core) are available.

Cinebench’s single-threaded test proves that the Core i7-7700K is one of the fastest CPUs around on a per-core basis, and that performance translates into a solid lead for it in the all-cores test among the quad-core chips it’s competing with. The Core i7-6950X is in a league of its own here, though.

LuxMark

As an OpenCL benchmark, LuxMark lets us test performance on CPUs, GPUs, and even a combination of the two. OpenCL code is by nature parallelized and relies on a real-time compiler, so it should adapt well to new instructions. For instance, Intel and AMD offer integrated client drivers for OpenCL on x86 processors, and they both support AVX. The AMD APP driver even supports Bulldozer’s and Piledriver’s distinctive instructions, FMA4 and XOP. We’ve used the AMD APP ICD on the FX-8370 and Intel’s latest OpenCL ICD on the rest of the processors.

We tested with LuxMark 3.0 using the “Hotel lobby” scene.

The Core i7-7700K continues its string of victories in our CPU-only tests with LuxMark, but the Core i7-6700K isn’t far behind.

As we’ve come to expect with LuxMark, switching to GPU-only rendering leads to a chaotic mixture of results. Still, the Core i7-7700K comes out on top.

With their powers combined, the CPUs and the GeForce GTX 1080 fall back into an orderly progression. We also get the highest possible scores this way. Once again, the i7-7700K takes the overall crown among the quad-core chips it’s pitted against.

Blender Cycles renderer

Here’s a new addition to our test suite. If you’ve been paying attention to AMD’s Ryzen events of late, you’ve probably seen the company demonstrating its chips with Blender and its Cycles renderer. The Blender project offers several standard scenes to render with Cycles for benchmarking purposes, and we chose the CPU-targeted version of the “bmw27” test file to put Cycles through its paces on these parts.
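Anyone who wants to run something similar at home can render the scene from the command line and time it. The sketch below assumes Blender is on the PATH and that the CPU-targeted .blend file is named bmw27_cpu.blend in the working directory; both are assumptions for illustration:

```python
# Sketch: time a single-frame CPU render of the bmw27 benchmark scene.
# Assumes the blender binary is on the PATH and the blend file's name/location.
import subprocess
import time

start = time.perf_counter()
subprocess.run(
    ["blender", "--background", "bmw27_cpu.blend", "--render-frame", "1"],
    check=True,
)
print(f"Cycles render finished in {time.perf_counter() - start:.1f} s")
```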

The i7-7700K adds another trophy to its case with the Cycles test. The Core i7-6950X does it one better and finishes the job in almost half the time.

 

Some quick overclocking tests

One of the biggest questions about Kaby Lake is whether Intel’s improved 14-nm process technology opens up any additional overclocking headroom for the company’s unlocked CPUs. To find out, we dived into our motherboard’s firmware to see just how much more performance we could extract from our particular i7-7700K. We stuck to basic multiplier overclocking for this test and raised the Vcore of our chip as needed to achieve stability. We continued in this way until we ran into thermal limits.

After several cycles of increasing multipliers, running Prime95, and increasing voltages as needed to achieve stability, we ended up at 4.8 GHz and around a 1.32V Vcore. Our system could boot at 4.9 GHz, but we ran into thermal throttling when we tried increasing the Vcore further to achieve stability. Considering the i7-7700K’s 4.5 GHz stock Turbo speed, a roughly 7% overclock ain’t much—especially when it’s threatening to overwhelm a $110, 280-mm liquid cooler.

A beefier liquid loop or even more exotic cooling solutions might perform better, but we’re just regular folks with regular coolers here. We’ve successfully taken a Core i7-6700K to 4.7 GHz before, so another 100 MHz seems like it falls within the chip-to-chip variations inherent in semiconductor manufacturing.

We unfortunately didn’t have time to overclock every CPU on our test bench, but we did push our evergreen Core i7-2600K to 4.5 GHz to pit it against the Core i7-7700K at 4.8 GHz. We then re-ran a few of our productivity tests with both chips. Here’s what we found.

So, uh, yeah. Overclocking the Core i7-7700K makes a fast chip a little bit faster at most everything, but the improvements are pretty modest considering the extra fan noise and heat production involved. A new Sandy Bridge this isn’t.

 

Conclusions

To sum up all of the data we collected over the past few pages, we’ve condensed our results into our famous value scatter plots. The non-gaming chart shows the performance each chip delivers per dollar, measured as the geometric mean of the results of our productivity tests. The gaming chart takes the geometric mean of all the 99th-percentile frame times each chip produced in our gaming tests and converts that figure into FPS so that our higher-is-better system works.
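For reference, the condensation for the gaming chart works roughly like the sketch below, with made-up frame times standing in for our data:

```python
# Simplified sketch of the gaming value-scatter condensation: take the
# geometric mean of a chip's 99th-percentile frame times (in ms) across
# games, then convert to FPS so that higher is better.
from math import prod

def geomean(values):
    return prod(values) ** (1.0 / len(values))

# Made-up per-game 99th-percentile frame times for one CPU, in milliseconds.
frame_times_ms = [9.1, 11.5, 13.0, 16.2, 10.4]
fps_99th = 1000.0 / geomean(frame_times_ms)
print(f"99th-percentile FPS score: {fps_99th:.1f}")
```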

The best values in these graphs can be found toward the upper-left corner, where prices are lowest and performance is highest. We’ve presented versions of each chart with the Core i7-6950X both present and omitted, since its ginormous price tag skews our results.



All of our in-depth tests boil down to a couple of simple conclusions. Intel’s process optimization and the resulting clock-speed bump give the Core i7-7700K about 5% better performance, on average, in our productivity tests than the Core i7-6700K delivers. The i7-7700K also turns in about a 2% higher 99th-percentile frame rate than its predecessor in the games we tested. Despite those improvements, Intel actually suggests a $339.99 price tag for the new chip, $10 less than the Core i7-6700K’s.

The Core i7-6700K was already one of the fastest mainstream desktop CPUs money could buy for most any purpose, and the 300 MHz of extra Turbo headroom Intel found in its process optimizations lets Kaby Lake set a slightly higher bar for this segment. The Core i7-7700K delivers exceedingly modest improvements, to be sure, but bellyaching about them is like complaining about a slightly faster Ferrari for less money. More single-threaded performance is a valuable commodity these days, and higher clock speeds are one way to get there.

Along with its higher clock speeds, the Kaby Lake refresh brings some welcome platform improvements with it. Seventh-generation Core chips can now handle DDR4-2400 memory out of the box, and they can also handle higher overclocked memory speeds. The Z270 motherboard we tested fired right up with nosebleed-inducing DDR4-3866 RAM in its DIMM slots after we enabled the right XMP profile. So equipped, our rig turned in some synthetic memory performance results that would make even Haswell-E-powered systems a little bit uncomfortable, and our testing showed that some games and applications seem to be able to take advantage of the extra bandwidth. Z270 also comes with more flexible I/O lanes for motherboard makers to tap, and in an era where more and more devices are clamoring for PCIe connectivity, we’re all for it.

Folks looking to turbocharge their Kaby chips through overclocking might not find a lot of performance remaining to be tapped, though. Even with a 280-mm liquid cooler on top, our i7-7700K ended up thermally limited at about 4.8 GHz. The Core i7-6700K was already a bear to keep cool when we started turning up its clocks, and the process-technology improvements in Kaby Lake don’t seem to have translated to more overclocking headroom.

To be fair, making general statements about overclocking performance from one sample of a given CPU is a bit dicey, but it’s what we’ve got to work with. Given the i7-7700K’s already-impressive stock clocks, we have to wonder about the value proposition of strapping on CPU coolers that cost hundreds of dollars to get a few more percent’s worth of clock speed out of a chip.

Many readers are likely wondering whether it’s finally time to retire their Sandy Bridge or Ivy Bridge systems with the advent of Kaby Lake. If the Core i7-7700K’s performance in productivity tasks doesn’t tantalize you, perhaps its gaming performance will. With a blisteringly fast graphics card like the GeForce GTX 1080 installed, many of our more CPU-bound gaming tests at 1920×1080 show that older systems can limit the maximum performance one can achieve with today’s highest-end graphics cards. That behavior isn’t consistent across every game we tested, to be sure, but it does suggest that you might be leaving a lot of performance on the table if you just plop a GTX 1070 or GTX 1080 into a five-year-old PC.

Now that Intel’s cards are largely on the table for this generation of desktop chips, we’re curious to see what AMD has up its sleeve with its Ryzen CPU family. Early performance demonstrations of Ryzen suggest its instructions-per-clock throughput will be comparable to that of Intel’s Broadwell-E chips, rather than the class-leading Skylake and Kaby Lake parts. If that’s the case—and if AMD can find substantial Turbo headroom for Ryzen above and beyond the 3.4 GHz top-end base clock figure it’s touted so far—those chips could finally deliver some sorely-needed competition in the enthusiast desktop CPU market.

We say as much because our tests with the Broadwell-E Core i7-6950X show that its 99th-percentile FPS figures already don’t trail Intel’s highest-IPC cores by that much in our gaming tests. If the DirectX 12 and Vulkan APIs become more widely adopted, our experience with Doom‘s Vulkan renderer shows that those low-overhead APIs have the potential to let lower-IPC chips with a lot of cores substantially close the gap with higher-performance parts, too. To be fair, Vulkan and DirectX 12 aren’t the APIs of choice for a lot of games yet, and their benefits won’t accrue exclusively to AMD CPUs. In any case, Ryzen chips are slated to arrive in the first quarter of this year, so we should know for sure just how they stack up pretty soon.

Speaking of the Core i7-6950X, you may have noticed that chip’s near-complete dominance in our non-gaming application tests. Although we’re only belatedly getting this 10-core monster on the bench, its performance in any task that can use lots of threads remains downright jaw-dropping. For tests where we have cross-comparable data, the i7-6950X offers considerably higher performance than the Core i7-5960X before it, and it often leaves the Core i7-7700K eating dust. Despite Broadwell’s IPC deficit compared to Skylake and Kaby Lake, the i7-6950X doesn’t suffer much in our 99th-percentile frames per second metric of gaming smoothness, either.

If time is money for your work, and your work can take advantage of lots of threads, the i7-6950X is the fastest high-end desktop CPU we’ve ever tested, full stop. If you don’t need all of its cores and threads, however, the Core i7-7700K arguably delivers the best gaming performance on the market for about a fifth of the price. Intel’s Extreme Edition CPUs have never been good values, but the i7-6950X takes the definition of “halo product” to eye-watering new heights. If the return-on-investment calculations work out for you, though, the i7-6950X is an amazing chip.

Comments closed
    • Ph.D
    • 3 years ago

    Still sitting here with my i7 2600K, waiting for intel to make a CPU worth upgrading to…
    10nm on desktop where you at?

      • Freon
      • 3 years ago

      I don’t think 10nm is going to do anything for us old desktop users after seeing what 14nm and 22nm didn’t do.

    • BigDDesign
    • 3 years ago

    Your testing has shown that my Ivy Bridge with Win7 could be ready for a brother. I like the fact that Optane, Faster Memory, and no need to overclock with the 7700k could bring me a machine that is about 50% faster than the one I have now (only using a 3770k @3.9, GTX 670). We will have to see what Ryzen brings. The biggest problem I have now with a new build is the pricing with the GTX 1080. I have decided that buying anything less than that would be a big mistake, but $650 for a good one. Ouch! A GTX 1080 Ti model may drop pricing some, but do you get one of those? Just bought a couple of WD Gold HDs, Hoping Optane and WD Golds are a good choice for massive storage at breakneck speeds. I’ve been using Intel RST on my Ivy Bridge machine and it just works.

    • Srsly_Bro
    • 3 years ago

    Over clocking and not a single game in the test. Is this trash report or tech report. You guys know we game and that info is relevant to many of us loyal readers.

    A quote. “Do or do not. There is no try.”

    It’s time to close up shop if there isn’t enough time or too lazy for gaming benches.

    Also about time a steam best seller for over a year is finally back on the tests.

    TR is out of touch.

    Another quote.

    “Just shut it down.”

    Chef Gordon Ramsay

      • Jeff Kampman
      • 3 years ago

      Wow.

      Since five games’ worth of data showing that the Core i7-7700K is already the class leader in 99th-percentile frame time performance doesn’t seem to be enough for you, consider that overclocking would likely mean slightly higher frame rates and slightly lower 99th-percentile frame times where it mattered. Doesn’t take a genius to figure that out, or so I would have figured.

      Still, wow. Your entitlement complex could fill a supertanker. Cheers.

        • Srsly_Bro
        • 3 years ago

        I’m sorry, Jeff, and I don’t mean to be a jerk; however, many of us have Sandy Bridge CPUs and want to see how they will perform overclocked against a Kaby Lake CPU. I’m more surprised those users weren’t given consideration, or even more modern CPU users who could make parallels from the comparisons. I stand by what I said. Many of us play games and we want to see how our older CPUs stack up against the new gen.

          • RAGEPRO
          • 3 years ago

          What are you talking about, man? Doom, ARMA III, and GTA V are all Steam best sellers, and Kaby Lake wasn’t even a big CPU launch. If you actually read the review you should know that it’s Skylake with a clock bump.

            • Srsly_Bro
            • 3 years ago

            -apologist

            None of those have anything to do with my argument.

            Are you sure you replied to the correct person?

    • sophisticles
    • 3 years ago

    I posted the following on a different forum and think I’ll share it with you guys:

    I have seen reviews where the new I7 7700k is actually slower, clock for clock, than the i7 6700k; and I think this is done on purpose…

    My theory: Intel expects AMD to hammer them with Ryzen from a price-performance standpoint, while I don’t believe the initial claims of a sub $300 8C/16T Ryzen cpu, I do believe the claims of $250 6C/12T part. Intel has also expressed it’s plans to bring their own hexacore main stream cpu’s to market in 2018, I think Intel knows that Ryzen will be IPC competitive with Broadwell and to counter will release Coffee Lake in 2017 instead of 2018. If they do plan on doing this, Intel would want to show a significant performance gain between a hexacore Coffee Lake and it’s “previous” gen quad core high end parts, namely Kaby Lake. The only way to do that is if Kaby Lake is no faster, and in some cases slower, than Skylake.

    The reviews bear this out.

      • Jeff Kampman
      • 3 years ago

      This is complete and total hogwash, man, sorry.

    • Bensam123
    • 3 years ago

    Yup… Not sure why anyone would’ve thought differently after we got performance numbers from their mobile chips…

    [url<]https://techreport.com/discussion/30587/intel-kaby-lake-cpus-revealed?post=998731#998731[/url<]

    And guess we're waiting till Zen. ~_~

    • albundy
    • 3 years ago

    well, i cant say that i didnt see that coming. it’s literally a downgrade, and it’s about $5 more than the 6700k on newegg. if it cant perform better in any reasonable way, then it’s a downgrade in my book. yay to lower prices?

      • Krogoth
      • 3 years ago

      It is not a downgrade. Kaby Lake has an updated integrated GPU on it. It was meant to be a laptop chip at heart.

      • tipoo
      • 3 years ago

      x.265 in hardware is at least something, though maybe on a desktop someone might not care if they’re software decoding.

    • maxxcool
    • 3 years ago

    meh. wheres the Devils Canyon version. I don’t want to delid mine for massive cooling gains.

      • RAGEPRO
      • 3 years ago

      As the owner of a Devil’s Canyon chip (4790K), it still needs delidding. 🙂

        • maxxcool
        • 3 years ago

        I can 100% agree.. I have the 4960k. BUT it is better than the old TIM and runs AVX2@4.3 without running over 70c so I have refrained from doing it.

        Now if we can see 30%+ success on 5ghz after deliding I might be seriously tempted to make a new build.

    • kokolordas15
    • 3 years ago

    @jeff Kampman

    What timings did you use on kabylake for the arma 3 ram speed bench?(for 2133 and 3000)

      • Jeff Kampman
      • 3 years ago

      Sorry, never saw this until now!

      DDR4-2133: 15-15-15-35

      DDR4-3000: 15-16-16-35

    • End User
    • 3 years ago

    Meh. Now my attention goes to Skylake-X and Kaby Lake-X.

    • Freon
    • 3 years ago

    Meh.

    • ermo
    • 3 years ago

    I liked the review.

    As someone who recently delidded his i7-3770k (\o/), I find that it would be very interesting to see like-for-like AVX-only codepaths being used by the AVX2-compatible processors — all at a fixed 4.5 GHz, as that ought to be achievable for all the CPUs used.

    If additionally you could somehow manage to acquire DDR3-2400 RAM and DDR4-2400 RAM and find a way to run both at the same timings, discerning the true scaling improvements across generations ought to be a piece of cake?

    That’s not to say that the architectural benefits of new (vector) hardware (and the corresponding compiler tricks) is irrelevant, of course!

    • RickyTick
    • 3 years ago

    Just noticed that Newegg has the i7 7700K in stock for $350. That’s also the same price as the 6700K. Time to buy??

    • rutra80
    • 3 years ago

    Where are the elaborate power consumption tests?

    • DancinJack
    • 3 years ago

    Just saw this at Tom’s…

    [quote<]Microsoft announced earlier this year that it would not support Kaby Lake and Zen processors with pre-Windows 10 operating systems. The company indicated that it would not update drivers for older operating systems to support the new hardware. Based on our initial testing with MSI's Z270 Gaming 7 motherboard, we can confirm that HD Graphics 630 does not function correctly under Windows 7 and 8.1.[/quote<]

    That's really quite unfortunate for some, but I guess they've been telling us for a while now. (Not me, as I prefer to be on a current OS, but I'm sure it will affect some)

    [url<]http://www.tomshardware.com/reviews/intel-kaby-lake-core-i7-7700k-i7-7700-i5-7600k-i5-7600,4870-2.html[/url<]

      • NovusBogus
      • 3 years ago

      I’m very interested to see the ways in which the much-hyped lack of support manifests itself. Aside from curmudgeons like me, it’s going to be a big deal for the business world which is still very solidly into W7 and likely will be until at least 2018.

        • Krogoth
        • 3 years ago

        Intel and other third-party vendors are going have to pick-up the slack.

    • Voldenuit
    • 3 years ago

    [quote<]we ended up at 4.8 GHz and around a 1.32V Vcore. Our system could boot at 4.9 GHz, but we ran into thermal throttling when we tried increasing the Vcore further to achieve stability.[/quote<]

    So... any delidding tests? ^_~

      • derFunkenstein
      • 3 years ago

      NOOOOOO I don’t want any more temptation to pop the lid on my 6600K.

        • Voldenuit
        • 3 years ago

        I have a delidder, send it to me and I’ll pop it for you ^_^.

          • chuckula
          • 3 years ago

          THAT OFFER IS ONLY VALID IN A FEW COUNTIES IN NEVADA!

            • Voldenuit
            • 3 years ago

            (Relocates offer to Craigslist) mss4d (Man seeking Skylake for de-lidding)

    • Voldenuit
    • 3 years ago

    [quote=”Jeff Kampman”<]"We performed our usual test run in [Deus Ex: Mankind Divided]'s DirectX 11 mode with lots of eye candy turned on."[/quote<]

    [quote="Jeff Kampman"<]"Whoops. Seems our test settings for Mankind Divided ended up more GPU-bound than we had expected."[/quote<]

    [url<]http://i.imgur.com/IArTGkB.jpg[/url<]

    • chuckula
    • 3 years ago

    The page with the overclocking results does put a few things into perspective when we look at the usual line around here about how Intel hasn’t made any improvements since Sandy Bridge.

    Looking at just the stock-clocked 7700K vs. the 2600K we see:
    1. 40% improvement in 7zip compression;
    2. 23% improvement in 7zip decompression, and this is about the worst improvement in the lot.
    3. A speedup factor* of 1.58 in Blender.
    4. A 45% improvement in [i<]single threaded[/i<] Cinebench and 46% in multi-threaded.
    5. A 1.59 speedup factor in Handbrake encoding.
    6. A 50% improvement in Jetstream.

    Now imagine that those percentage improvements were happening for RyZen vs. Piledriver, which is basically the non-broken architecture that should have premiered in 2011 against Sandy Bridge (and as TR shows, the 2600K is more "future proof" in 2017 compared to even the refined FX 8370 that didn't actually launch until 2014). Can you name one person here who would hurl insults at AMD for not innovating over that time span?

    * When comparing time-based benchmarks the speedup factor is (baseline time) / (comparison time) since as you approach 0 seconds you are approaching an infinite speedup, so each marginal second counts for a greater amount of speedup (e.g. a 1 second difference from 5 seconds to 4 seconds is a lot more than a 1 second speedup from 10 seconds to 9 seconds).

      • derFunkenstein
      • 3 years ago

      This is all totally fair.

      I wonder how much of the difference is attributable to the lack of new instructions ([url=https://www1.cs.fau.de/avx.crypto<]AVX2[/url<] in particular, since 2nd-gen Core has the original) on Sandy and Ivy. Look at the browser scores (both of which has an [url=https://developers.google.com/octane/benchmark#crypto<]encryption[/url<] [url=http://browserbench.org/JetStream/in-depth.html#crypto-aes<]component[/url<]) and TrueCrypt AES. Some of the other improvements can be directly tied to more memory bandwidth and higher clocks.

      • AnotherReader
      • 3 years ago

      I would compare the results at the same clock; fortunately, the overclocking page has such results as the turbo clock of the 7700k is identical to the OC for the 2600k. In that case, the most impressive result is the speedup for JetStream at 25%. Handbrake and Blender see speedups of 36% and 37% respectively, but as these use AVX, I wouldn’t rate that performance as highly as JetStream. Decompression only improves by 7%. Anandtech [url=http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/9<]compared Skylake to Sandybridge at the same clocks[/url<] at the time of the Skylake launch.

        • Andrew Lauritzen
        • 3 years ago

        Sure, but if you compare the results at the same clock ILP improvements have been pretty small long before Core ever came onto the scene. People forget so quickly that the vast majority of performance improvements in the “good old days” were frequency scaling.

          • JustAnEngineer
          • 3 years ago

          Presler to Conroe was a huge step a decade ago.

            • derFunkenstein
            • 3 years ago

            Same with Bloomfield to Sandy Bridge. It’s not like there was much of a difference in clocks between a Core i7-975 Extreme Edition and a Core i7-2600K, yet Sandy wins that one hands down.

          • AnotherReader
          • 3 years ago

          I think you misread my post; the 25% improvement in JetStream is amazing. I wouldn’t have made the comparison if Sandybridge couldn’t match Kabylake for maximum clock speeds. I think it is only the younger users who don’t recall the causes of performance increases in the past. Anyone who played with an early Athlon or Pentium III would remember that the performance improvements relied primarily on improving clock speeds. The extinction of Dennard scaling after the 130 nm node put a stop to that and I don’t think it is wrong to point out that single threaded performance scaling through IPC improvements seems to be plateauing as well.

          Edit: It would be a better discussion if we could name any microarchitectural tricks that are left to be used; I can only think of value prediction and speculative multithreading. However, it isn’t clear if they would be worth it on a performance per watt basis.

        • travbrad
        • 3 years ago

        It’s anecdotal but in those few games that are CPU limited I saw a much bigger jump in minimum framerates than in average framerates going from a 2500K @ 4.6ghz to a 6700K @ 4.6ghz. Planetside 2 and ARMA 3 both saw about 50-60% improvement in minimum framerates even though the average framerates aren’t that different. Part of that could have been memory as well since I went from DDR3-1600 to DDR4-2400 though, and it’s worth noting that most games AREN’T really CPU limited.

        Since Planetside 2 is my most played game it was a welcome improvement, but I can’t really recommend it for the average gamer unless you are specifically playing those few CPU-limited games a lot. A GPU upgrade and/or CPU overclock would probably get you more in most games.

        My Handbrake encodes are literally more than twice as fast too, thanks to a combination of AVX, hyper-threading, and IPC improvements. Even if AVX “skews” that, the end result is that my encodes are twice as fast, which was worth it for me since the only two CPU-intensive things I use my PC for are gaming and video encoding.

          • brucethemoose
          • 3 years ago

          A fellow Planetman!

          Yeah, PS2 hits the CPU and storage really hard (it’s the only game I know of that seems to benefit from a RAM cache on an SSD).

          I always thought it would be a fantastic benchmark if there was any way to get consistent results.

      • strangerguy
      • 3 years ago

      Yeah, it’s funny that some folks think AMD deserves a standing ovation for *finally* being able to compete with Intel rather than an “about $#%@-ing goddamn time” after feeding us six years of garbage.

      • ptsant
      • 3 years ago

      To put this into perspective, when I benchmarked my Pentium (gen 5) 100MHz vs my AMD 386DX40 (gen 3), I measured a 10x (1000%, if you prefer) improvement in most tasks.

      Anyway, cumulatively the difference is certainly noticeable, but it took a relatively long time and many products (and sockets, and money buying said products) to get there. Plus, you fail to mention the slight increase in TDP envelopes, despite the transition to 14nm.

      I am curious: has anyone measured the improvement from the first AMD APU to the latest and greatest? It should be similar in magnitude, and it occurred within the 32nm and 28nm nodes only.

    • Kretschmer
    • 3 years ago

    In future reviews, please please please please please include a recent i3 and i5 in the graphs. I’m more interested in whether an i7 is worth the premium over an i5 than in whether the 2600K or the 3770K is the better chip.

    • Ninjitsu
    • 3 years ago

    BTW DayZ and A3 don’t use the same engine 😛

      • drfish
      • 3 years ago

      Well, close enough, and they both share a lot of the same pitfalls.

        • Ninjitsu
        • 3 years ago

        Actually, not anymore: DayZ runs on Enfusion now, not RV4. Enfusion is BI’s new engine.

          • drfish
          • 3 years ago

          I understand. It just wasn’t worth getting into the minutiae for the purposes of the connection being made.

    • 7c0
    • 3 years ago

    The last four paragraphs of the review are a quite clear hint of things to come. Good enough from AMD has never been so good enough, trust me.

      • derFunkenstein
      • 3 years ago

      Except for a 6 year period from late-1999 through mid-2006 when the Athlon was plenty good enough and the Athlon 64 mopped the floor with the Pentium 4.

        • 7c0
        • 3 years ago

        I’m not sure what this has to do with my comment, but just to make you comfortable, yes, this is a well established fact. AMD used to rule the CPU world for quite a few years in the past.

        My point – for which I got so generously downvoted, I guess? – is that AMD will deliver soon and it will be much, much better than many people expect.

        Yes, it will be [i<]good enough[/i<], because it won't be [i<]the fastest[/i<]. But this time, good enough won't sound like giving up so much on performance and efficiency as it used to. I'm sure most readers didn't bother to read between the last few lines of the review, so let me reiterate: it's no coincidence that the Core i7-6950X gets mentioned so often there. AMD's new CPUs will be benchmarked against [i<]this[/i<] one, not against Kaby Lake. Sure, some Ryzen parts will trail by 10-15%, but at one-fifth of the price. OK, make it 20% for a quarter of the price. Sounds more than good enough to me.

    • Meadows
    • 3 years ago

    Intriguing. It’s been a very long time since I’ve seen meaningful gaming performance differences from memory bandwidth alone.

    • Delta9
    • 3 years ago

    And now we can guess why Coffee Lake will bring six cores to the desktop. If the IPC gains are in the single digits, strapping two extra cores onto mainstream CPUs makes for great marketing. And the extra core counts in Ryzen make the chips sound twice as fast to the public. That, and DX12 likes six CPU cores.

      • Krogoth
      • 3 years ago

      Six-core chips will likely be pitched as top-of-the-line SKUs, though, much like the current 7700K’s gimmick is being the quad-core chip with HT while the rest of the line-up is either dual-core with HT or quad-core without it.

        • drfish
        • 3 years ago

        Yeah, and I doubt a hexa-core will best the stock or turbo clocks of the 7700K. So, it might be ~40-50% faster sometimes, but for single threaded stuff, it’ll be a little behind. It should come closer than the 6950X though.

          • mganai
          • 3 years ago

          Depends on the clock speeds. If it’s 3.3-3.5, then yeah, not happening.

    • evilpaul
    • 3 years ago

    Optane… so does it only work with the primary system drive? I’d love it if my big HDD could cache gigs of Steam install writes and such.

    • Beahmont
    • 3 years ago

    There appears to be a bad/dead link to the Broadwell-E AVX Offset information on the first page. If I try and go there I get a TR made Page Not Found Error. If someone could point me to the information I’d be appreciative.

      • DancinJack
      • 3 years ago

      [url<]https://techreport.com/review/30204/intel-boosts-the-high-end-desktop-with-its-broadwell-e-cpus[/url<] is correct. The link from this review is missing the review dir: [url<]https://techreport.com/30204/intel-boosts-the-high-end-desktop-with-its-broadwell-e-cpus[/url<]

      edit: FWIW, there is no mention of the AVX offset feature in that "review"

      edit2: here is some further information - [url<]https://edgeup.asus.com/2016/05/31/get-best-performance-broadwell-e-processors-asus-thermal-control-tool/2/[/url<]

    • Laykun
    • 3 years ago

    In years to come, people will only remember the loud ‘meh’s that echoed throughout the internet.

    • RdVi
    • 3 years ago

    Great real-world review with very interesting results. I remember that not too long ago all CPU reviews ran games at 720p, before that 1024×768, and before that 640×480. That was the only way to see a difference between CPUs with the old averages-only tests. Well, now we know better, and reviews like this are very insightful. Thanks!

    • derFunkenstein
    • 3 years ago

    What a load of crap. I can get a [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16819113360<]7700K for under $100[/url<].

    • wingless
    • 3 years ago

    This review has told me more about getting higher FPS in Arma3 than years of reading forums.

    Arma3 loves overclocked memory!

    God bless you, Jeff!

    • jihadjoe
    • 3 years ago

    Can’t believe Sandy was only 5 years ago.

      • chuckula
      • 3 years ago

      Neither can I.

      Because it was 6 years ago.

      [url<]https://techreport.com/review/20188/intel-sandy-bridge-core-processors[/url<]

      • Farting Bob
      • 3 years ago

      I still have my launch-week 2500K. Still no major reason to upgrade, though I think I’ll have to eventually because my motherboard is getting grumpy.

        • JustAnEngineer
        • 3 years ago

        I’ve passed down my launch night i7-2600K to a family member, but it’s still a potent system.

          • NeelyCam
          • 3 years ago

          Still my main rig.

        • NovusBogus
        • 3 years ago

        I got mine during the Ivy rollout and it’s still going strong. Gonna turn it into a second gaming-capable PC when the X99 build goes live, because anything this glorious is too good for the junk pile.

        • torazchryx
        • 3 years ago

        I had to about a year ago (actually around Oct 2015) because the Asus P8Z68-V Pro/Gen3 developed an infuriating random USB bus issue that made everything hitch. It was the WORST.

        Retired the Sandy to home-server tasks, where it runs headless with no USB devices hanging off of it (and perma-retired an Opteron 175 in the process), and plonked a 6700K on my desk, which runs at 4.6 GHz and is quite nice.

        But aside from the “works properly” part, the tangible performance difference is such that I’d still be waiting for something that made me go “ooh yes I need that”

        and damnit, I want to feel that urge again!

      • Krogoth
      • 3 years ago

      Actually, it was [b<]six years ago[/b<] that the normal Sandy Bridge platform came out. The Sandy Bridge-E platform came out almost five years ago.

    • MDBT
    • 3 years ago

    GN found that the Gigabyte Z270 mobo used here and in their tests was pumping a lot of extra voltage to the CPU. I see some notes in the OC section about voltages used, just didn’t know if manual voltages were used for the power consumption tests here earlier on in the review.

    [url<]http://www.gamersnexus.net/hwreviews/2744-intel-i7-7700k-review-and-benchmark/page-4[/url<]

    • dodozoid
    • 3 years ago

    Well,
    it seems desktop Kaby = Sky for all intents and purposes.
    The real difference is in the mobile parts, with boost clocks equivalent to those of desktop Ivy. I must admit I am impressed (if those chips actually reach and maintain the listed clock speeds).

    • AnotherReader
    • 3 years ago

    Great review! I think we need an article dedicated to exploring the impact of RAM speed on various benchmarks. As an aside, given the conventional wisdom about games not exploiting multiple cores sufficiently, Crysis 3 is a welcome surprise.

      • Krogoth
      • 3 years ago

      It will only make a difference in games that involve moving tons of data around, a.k.a. the “sandbox,” “simulator,” and “grand strategy” genres.

      As for blockbusters such as TF2, LoL, Dota 2, Overwatch, CS:GO, and their ilk, JEDEC-spec DDR3/DDR4 is more than sufficient.

    • SomeOtherGeek
    • 3 years ago

    Oh man, five Intel chips tested against one AMD. I can’t wait until it becomes 3 against 3. Here’s to hoping…

    BTW, great review and write up. It was a good coffee read. Thankfully, the cup was on the desk during the funny points.

    • drfish
    • 3 years ago

    Anyone want to talk about how faster RAM in [i<]Arma III[/i<] gives the same FPS boost as jumping from Ivy to Kaby? You don't see that every day. Thanks for adding that, Jeff!

      • Andrew Lauritzen
      • 3 years ago

      Yeah, that’s the one interesting bit here. That’s one you should test with an eDRAM part too, to see if there’s anything to be gained there. It’s not common to see real workloads on the CPU that scale much with RAM frequency.

        • tipoo
        • 3 years ago

        Overwatch also seems to love memory bandwidth. A friend of mine lost nearly 30% of his FPS when he was stuck on single-channel memory while waiting on an RMA for a while. And even without going to single channel:

        [url<]https://www.reddit.com/r/Competitiveoverwatch/comments/55o3mn/can_ram_speed_boost_your_fps_in_overwatch_i_just/[/url<]

          • Krogoth
          • 3 years ago

          Overwatch’s eye candy is more dependent on single-threaded CPU performance, though. There are some mainstream applications and games where single-channel system memory doesn’t cut it anymore if you want maximum throughput. I’m not surprised that Overwatch is among them. I don’t think you get any significant benefit from going beyond JEDEC-spec dual-channel DDR3/DDR4, though.

            • tipoo
            • 3 years ago

            The second part is what the link was for: even without the severe hit of going to single channel, Overwatch scales with memory speed on modern dual-channel RAM more than most things I’ve seen.

            • Krogoth
            • 3 years ago

            The user was overclocking their CPU, so having factory-overclocked memory would help, and they also had a Skylake chip, which is [b<]known[/b<] to be limited by dual-channel DDR4 when overclocked and bandwidth becomes an issue.

        • drfish
        • 3 years ago

        FYI, I built my new system over the weekend. I went from a 2600K @ 4.2 GHz with DDR3-2133 to a stock 7700K with DDR4-3733. In YAAB at 2560×1080 with everything cranked, just like Jeff’s test, I went from 28 FPS to 51 FPS. Same 980 Ti in both systems. I’m pretty happy. DayZ is obviously happier too, but you can’t really bench it.

      • Krogoth
      • 3 years ago

      Don’t expect miracles. DDR3 doesn’t scale nearly as well as DDR4.

      • Ninjitsu
      • 3 years ago

      I’ve been reading that for a while, nice to see it confirmed here.

      Would have liked to see DDR3-2133 vs DDR4-2133, though.

      Additionally, I’d note that increasing object view distance or object quality, as well as overall view distance, could affect things (object view distance should hit the CPU more, overall view distance should hit memory more, although it’s possible memory bandwidth impacts object view distance too).

      And yeah, 64-bit binaries for A3 are on dev branch, will likely hit stable release in Feb sometime. Should be even more interesting then.

      EDIT: Would add a note of caution that the clock speed diff between the 3770K and 7700K would also play a role.

      • ptsant
      • 3 years ago

      Nice observation. I too was surprised. I suppose ARMA III is bandwidth-sensitive.

      I guess the transition to DDR4 started with unreasonably low speeds. It took a lot of time to get DDR3 kits from 1066 to reliable 2133. DDR4 has almost immediate availability at 3000+, while chipsets are still rated for 2133 or 2400. There is a potential for improvement there. Happy to see that Z270 easily reaches 3000+.

      • DPete27
      • 3 years ago

      So they found one game that, for whatever reason, abnormally favors RAM frequency….so what? While it may be moderately popular on Steam, I wouldn’t call Arma III particularly advanced in terms of graphics engine (it was released in 2013 after all).

        • drfish
        • 3 years ago

        So what? So it’s interesting, that’s what. It’s a strange bottleneck. It’s a way to extract additional performance that simply can’t be done any other way. It’s also verification of what folks in Arma forums have been reporting for a while, with frame time data to back it up. What’s not to like?

          • DPete27
          • 3 years ago

          All I’m saying is that it’s basically an outlier.

            • drfish
            • 3 years ago

            No arguments there.

        • Ninjitsu
        • 3 years ago

        Arma’s engine has ancient scaffolding but they’ve improved it a lot, and it can often look quite good. However, there’s no game out there that does what Arma does, particularly at that scale.

        It pulled in quite a bit of revenue in 2016, btw, which is pretty impressive for a 2013 game. 😉

        [url<]http://store.steampowered.com/sale/2016_top_sellers/[/url<]

    • odizzido
    • 3 years ago

    Good news for AMD I guess.

      • Tristan
      • 3 years ago

      they may delay release of Ryzen 🙂

      • Farting Bob
      • 3 years ago

      Intel trying to support the competition by sitting on its ass for a few years. It’s like handicapping yourself when playing a game against a child.

        • Tristan
        • 3 years ago

        It is not in favour of AMD, but for higher profits. Less for R&D = more into pockets.

    • DancinJack
    • 3 years ago

    The i3-7350K looks pretty sick. I’d slap one of those in an HTPC if I needed one right now; it could handle some low-level gaming as well.

    (Anandtech got a sample)

      • Coran Fixx
      • 3 years ago

      I would like to see a comparison to the G4560 ($65)

    • wingless
    • 3 years ago

    OK, now it’s just about time to think about upgrading my 2600K system. I’ll see what Ryzen has to offer and pull the trigger on a new build.

    90FPS in GTA 5 on my G-Sync monitor is unacceptable in 2017. (Not really….I still have no reason to upgrade)

      • DancinJack
      • 3 years ago

      I read “90 FPS in GTA 5” and I was like “wtf,” then I finished reading.

      • Tristan
      • 3 years ago

      The 8-core Ryzen isn’t really for gaming. You must wait a few more months for the 4- and 6-core variants with higher clock speeds.

        • Farting Bob
        • 3 years ago

        Nearly all modern games take advantage of more cores. And while most games are GPU limited those that are CPU limited will love having 8 cores instead of 4.

      • Dragonsteel
      • 3 years ago

      I have a 2600K overclocked to 4.5 GHz, so I don’t think the benchmarks shown are very representative of my setup. If anything, the loss in performance from the processor architecture alone is probably closer to 15%, not half as suggested.

    • Topinio
    • 3 years ago

    I know it’s a -K CPU and the argument will be that “no-one” runs at stock, but it sure would’ve been nice to see those tests run at stock/supported memory settings (e.g. 1333 for the Sandy, 1600 for the Ivy) to see what it looked like including that generational improvement.

      • Jeff Kampman
      • 3 years ago

      I believe all of our original reviews for Sandy, Ivy, etc. were conducted with stock memory speeds, and the numbers should be roughly comparable if you want to tab between this review and the older ones.

        • Topinio
        • 3 years ago

        Ta, that hopefully gives a way to compare some of the benchmarks. I was kinda looking for gaming comparisons, though, which I guess won’t have the same games or GPUs.

    • MrDweezil
    • 3 years ago

    The unlocked i7 doesn’t seem to make much sense anymore with such limited overclocking headroom.

      • Krogoth
      • 3 years ago

      The unlocked multiplier is a gimmick that Intel is happy to sell at a premium, though. That’s the entire reason for the “K” and “X” series chips.

        • MrDweezil
        • 3 years ago

        I get that the K chips are sold at a premium, but the older (2xxx 3xxx) chips could be pushed significantly further than their out-of-the-box max turbo frequencies. Going from 3.8 to 4.6 made the K chips appealing but going from 4.5 to 4.8 is less enticing.

          • derFunkenstein
          • 3 years ago

          I agree, but unfortunately you can’t buy a locked CPU that Turbos up to 4.5. If the Core i7-7700 (non-suffixed) had the same clock speeds, then I’d agree there’s no bonus for the 7700K.

          • VincentHanna
          • 3 years ago

          Would it make you feel better if the base clock were set to 4.2 GHz, like the non-K part’s turbo, so that you could OC it all the way to 4.8?

      • jihadjoe
      • 3 years ago

      Good thing it’s clocked pretty high from the get-go.

    • The Egg
    • 3 years ago

    If you’re building new, I’d almost look for deals on outgoing Skylake chips and pair them with the new 270 chipset. Factor the minor clockspeed difference into the math, unless you’re buying unlocked, in which case there’s no difference.

    • tipoo
    • 3 years ago

    I’ve felt this more and more with each passing Intel release

    [url<]http://i0.kym-cdn.com/photos/images/original/000/296/788/4fc.jpg[/url<]

      • blastdoor
      • 3 years ago

      Yeah.

      In case anyone doubts the effects of competition (or lack of competition) just look at the die sizes of Intel chips over the last 10 years. As time goes by, Intel seems to use new process nodes more and more to boost profit margins rather than increasing transistor counts (and therefore performance).

      Kaby Lake isn’t a die shrink obviously, but the 14nm process ought to be pretty mature by now. Presumably their yields are better and they could give us bigger dies for the same price (either more cores or bigger caches or beefier GPUs — something).

      Help us Ryzen — you’re our only hope!

        • Andrew Lauritzen
        • 3 years ago

        Do we need to have this conversation every time guys, seriously…

        GAMES ARE NOT GOING TO GET FASTER UNTIL THEY USE MORE CPU – GET OVER IT. That goes for 95% of the stuff you do on your PC as well.

        If you’re one of the folks who actually does something that can benefit from more cores, you probably already own an -EE or Xeon processor. Or if you don’t you should. Or at this stage wait for Ryzen and see how it compares 🙂 But if you’re expecting 8 cores for $200 (or even $300), I think you’re fooling yourselves. Would love to be pleasantly surprised on that though.

        Stop expecting literally impossible magic out of single threaded or GPU bound applications. It is not going to happen.

          • DancinJack
          • 3 years ago

          thank you. I just didn’t have the patience to write it up.

          • chuckula
          • 3 years ago

          NO! WE REFUSE TO GET OVER IT!

          Similarly, I will also criticize Intel for how the 6950X fails to improve the results in PrinterMark and ModemMark compared to my Pentium IV.

          • tipoo
          • 3 years ago

          Blastdoor didn’t only mention more cores, for what it’s worth; he said that fewer transistors have gone to the CPU than could have, more cores being one example he gave and more cache being another. More transistors could also go toward per-core performance, which in turn wouldn’t make it impossible magic to expect more performance out of single-thread-bound tasks.

          Not that I know how possible it is to shove forward that single threaded performance over at Intel, but benefit of the doubt 😉

            • Andrew Lauritzen
            • 3 years ago

            It was not meant to be a personal attack – in fact blastdoor’s post was less obnoxious than most. I’m just trying to head off the silly “hey if there was more competition everything would be 4x better and 10x cheaper” rhetoric. No guys, the memo on this went out 10 years ago and you still haven’t internalized the message!

            Indeed the whole point is that there simply aren’t super-significant per-thread performance gains to be had left. Like… EVER. Seriously guys – sit down and internalize that right now. Remember all that talk that applications that aren’t rewritten may never get faster? Welcome to 2017 where it’s frankly amazing that you even see 5% improvements per year. I’d expect near zero at this point and definitely trending towards zero in the future. So please, stop and wrap your head around that and then don’t post stuff that implies you haven’t on the next release.

            And this goes way beyond brand: when Ryzen comes out and it’s a big improvement over previous AMD stuff but just gets “somewhere near to” the ILP of Intel chips and thus is slightly slower in games because of a slight frequency deficit, don’t lose your shit. Instead look at the workloads that actually can ever scale: i.e. the ones where Broadwell-E crushes 4 core Skylake in this very review. The ones that many folks seem to gloss over because they’re somehow expecting magic from a quad core part. And when 8 core Ryzen is slightly cheaper than X99 but not as cheap as a 7700k and yet slower in games, take a deep breath and realize that this is all expected and maybe it wasn’t just some grand conspiracy in the first place.

            Don’t get me wrong – I love competition and hope that Ryzen is awesome… particularly now that I have to pay full price for my CPUs going forward 🙂 But really guys there are only two sensible camps to be in at this point:

            a) You use scalable applications where BDW-E is already way faster than Kaby Lake client. This review is of passing interest but no real surprises. You’re excited more by the workstation chips and reviews.

            b) You mostly play games and client applications. You shouldn’t expect more than 5% improvements/year diminishing as time goes on until games make better use of many-core CPUs. These sorts of reviews should again hold no real surprises outside of features or regressions. But that’s okay, because you only need to upgrade your CPU very rarely anyways… basically once those improvements have compounded enough or something breaks or you need new features.

            Folks who are constantly expecting impossible magic and then enraged when it doesn’t happen are what infuriates me 🙂

            • the
            • 3 years ago

            There are several techniques still on the table for increasing raw single-threaded performance, but they come at a disproportionately high cost in performance per watt. That is why Intel is leaving them on the table.

            There are also a handful of things Intel could do today that fit within its performance/watt targets but increase costs. Throwing 128 MB of eDRAM on the i7-5775C made it a surprisingly competitive gaming chip despite its lower clock speed when put up against the i7-6700K. With Kaby Lake here, why don’t we have an i7-7775K with 128 MB of eDRAM cruising at 4.0 GHz? Intel does have such chips for mobile, so they wouldn’t be creating a new die either.

            Intel also has a distinct version of the Skylake core for servers, which comes with more L2 cache per core and AVX-512 instruction support. Those can show a nice per-core benefit. So why wasn’t consumer Kaby Lake the introduction point for this slightly faster version of the core?

            • blastdoor
            • 3 years ago

            Yup. I know that increasing single thread performance through higher IPC with x86 has run into seriously diminishing returns. But that doesn’t mean there’s no other way to get more performance from a bigger die.

            This isn’t my field of expertise — I’m just an unfrozen caveman enthusiast. But my understanding is that there are tradeoffs between power, performance, and area in CPU design. Does that mean the clock speed could be raised if the transistor density were lowered? And if so, how about a heterogeneous design where a couple of big cores are given more space and the ability to run at higher clock speeds, while several smaller cores are packed in tight to handle loads that are easier to distribute across multiple threads? Say, two big cores combined with eight of the little cores used in Xeon Phi?

            I’m not disputing that things are harder now. My argument was that Intel isn’t trying because Intel doesn’t have to. They’ve been using die shrinks primarily to extract more profit, holding performance more or less constant. (yes, there are power reductions too, and that’s great for mobile, but Intel doesn’t really compete in mobile)

          • NTMBK
          • 3 years ago

          You know what might help single threaded workloads? A big fat EDRAM cache.

            • derFunkenstein
            • 3 years ago

            Frustrating that those big caches from socketed Broadwell didn’t dip their toes into Kaby Lake.

            • Andrew Lauritzen
            • 3 years ago

            I’m probably more qualified than most to tell you that – no – it doesn’t really help much other than the odd workload like WinRAR (but not 7zip – details matter). Trust me, if there was any significant amount of performance to be had there in common CPU workloads they would have put a lot more cache on these things a *lot* sooner 🙂 It’s there for the GPU, with some odd, minor benefits coming for applications that happen to have a working set that falls somewhere between ~8 and 64MB. (And comparing to the -EE/Xeon stuff, there’s even less of a delta!)

            I like the sentiment and I’m a big fan of EDRAM myself, but for the cost/silicon/area/etc. it’s still better to add more cores if you only care about CPU workloads.

            • DancinJack
            • 3 years ago

            I don’t think most of them know who you are and what you do/did. Maybe explain that 🙂

            • derFunkenstein
            • 3 years ago

            I know who he is and I’d still like to play around with it. :p

            • Chrispy_
            • 3 years ago

            He was the janitor at Cyrix, right?

            I remember that outcast Watt Scosson having an internet chat show with Andrew talking about technology report or something but that was many a year ago.

            • DancinJack
            • 3 years ago

            Ding ding ding!!

            • Andrew Lauritzen
            • 3 years ago

            As of next week I’m just a rendering guy @ Frostbite Labs now, so what do I know 🙂

            • DancinJack
            • 3 years ago

            I know 🙂

            You could have mentioned you worked on Intel Larrabee and graphics for the better part of the last decade though!

            • Andrew Lauritzen
            • 3 years ago

            Yeah but it’s more fun this way, and I use my full name here so people can always google it 🙂

            • travbrad
            • 3 years ago

            I agree it’s better to add more cores if you only care about multi-threaded CPU workloads, but I’d still love a Kaby Lake with eDRAM for the ultimate single-threaded/per-core performance in those games that don’t make good use of multiple cores. As some of TR’s testing in this review showed, a game like ARMA 3 is severely limited by per-core CPU performance and by the memory/caching in a system. It isn’t even averaging 60 FPS with the fastest gaming CPUs available, let alone coming close to the 120/144 Hz monitors a lot of us now own.

            There are a few games like this that are simply never going to use more cores, so the only solution is higher clock speeds and better performance per GHz. One thing that seemed to achieve that pretty well (at least with Broadwell) was a “big” eDRAM/L4 cache. I know it’s not cheap and only has niche use cases, but I’d gladly fork over some extra money beyond the price of a 7700K/6700K for such a CPU. Money that I won’t fork over for “MOAR CORES” that are slower.

            Of course I’d rather just see all games make really good use of all your cores/threads (like Frostbite engine stuff ;)), but for some games that will sadly never happen.

            • NTMBK
            • 3 years ago

            Still more useful than an unused GPU.

            • RAGEPRO
            • 3 years ago

            I love you Andrew, but in this one I’m gonna have to disagree.

            If you look at our gaming benchmarks of the 5775C, as well as other benchmarks around the web, it does VERY well for its clock rate. I think that for “CPU workloads” as we usually call them— massively parallel compute-y stuff—yeah, you want more cores.

            But what you really want is to run it on a GPU. CPUs these days are for that ultra-low-latency serial code, stuff that needs the best IPC and ILP, and high clock rates. I think an extremely large cache is really helpful on that kind of work, because every time it saves you having to bounce out to main memory to grab data you’ve just saved a TON of waiting in the pipeline.

            I mean, you know that already of course; I’m just explaining my thought process. What I’m really saying is that while I do think that for traditionally “CPU heavy” tasks you want more cores, we need a new word, a new way to describe the new type of “CPU heavy” tasks. Stuff that’s reliant on single-threaded performance and low instruction latency.

            I think the majority of games fall into that latter category; I think they’re a workload unlike almost anything else. The constant stream of user interaction and the need for immediate feedback is a weird workload. Intel’s chips handle it best, to be sure, but for this workload I think more cache is the way to spend transistors, not more cores.

            • Andrew Lauritzen
            • 3 years ago

            There’s definitely some nuance to this discussion - for instance it is not true that all massively parallel workloads make sense for GPUs. GPUs have been gradually expanding what they can do, but they are still fundamentally designed in a way (specifically around how they handle static register and shared memory allocation to HW threads) that makes some algorithms inappropriate, even if they are massively parallel. Ray tracing is actually a good example of an embarrassingly parallel algorithm that runs acceptably on GPUs, but it’s not clearly a more efficient architecture for it iso-power/area (hence folks looking at fixed function hardware again). Even CPUs still compete just fine in ray tracing.

            I think you’re giving games a bit too much credit in terms of being fundamentally workloads that don’t scale well to multiple cores. Certainly there are some latency sensitive aspects, but the *vast* majority of the heavy stuff in game code can scale out really well… it’s just we’re sitting on legacy code bases that take a long time to update and “parallelism is hard” 🙂 CPUs have gotten really good at providing a parachute for inefficient code as well, which has put off the problem. Console CPUs being significantly weaker has caused some amount of improvement, but that brings us to the real crux of the issue:

            Even if a game has been nicely updated to spread its work out over many cores, few games *have enough work* to fill a modern quad core machine even at the highest settings. It’s still very important for most games to run on dual core machines and consoles and if that is a constraint, you’re likely far off the performance of even a quad core, let alone 6+. Scaling CPU work is a lot harder than GPU work because it tends to be tied to things that directly affect the game experience that the designers want to be the same for everyone. Multiplayer adds additional complexity as no one wants to segment off the quad core+ player base from the dual core guys.

            We really only have a single example of a game engine designed from the ground up to scale on CPUs, and that is Oxide’s stuff. As is quite evident there, it gains far more from increased core count than from being sensitive to ILP and cache sizes (although there are elements of all of the above at the extremes, as always), and I think that’s probably a reasonable predictor of the direction engines will go. Of course the workload details don’t necessarily apply outside of RTS, but it’s a proof point that game technology can certainly evolve in this area.

            Progress is being made, and I certainly expect to help make more direct progress on this in my new job 🙂 But there’s a variety of economic and market factors that muddy the water here, so I’d encourage you to not assume these things are fundamental CPU vs GPU architecture issues when in most cases they are not.

            PS: I love that you guys are fans of the EDRAM parts, I really do. As someone who has worked with them a lot, owns several (personally), and often argues to skeptical game developers that “it really works!”, I’m probably more of a fan than most. But in reality, it’s still not a great use of silicon for most current workloads. I’d be curious to see someone do iso-clock testing between Broadwell and Skylake with EDRAM on/off (you can usually do this in the BIOS) on both to get some solid numbers for CPU workloads. As I mentioned, while there are certain cases that do see nice gains (WinRAR, databases specifically sized to fit in EDRAM, etc.), it’s not a big deal on most things, and it’s certainly less relevant than some of the other improvements that went into the SKL architecture… improvements which people happily say “meh” to despite them being more relevant to performance than EDRAM 🙂

            • RAGEPRO
            • 3 years ago

            [quote<]There's definitely some nuance to this discussion - for instance it is not true that all massively parallel workloads make sense for GPUs.[/quote<]That's fair enough. Overall, I hope you're right. I hope most games can be scaled out to more cores in the future. It's true that we're already seeing some of that (e.g. Doom's strong Vulkan performance in this review on the FX chip relative to its OpenGL performance.) I still think we're going to see single-threaded performance (specifically latency, more than compute) being the single biggest factor in CPU-related game performance, though.

            • Andrew Lauritzen
            • 3 years ago

            > I still think we’re going to see single-threaded performance (specifically latency, more than compute) being the single biggest factor in CPU-related game performance, though.

            Definitely in the near term, I agree. And it’s really hard to guess at what exactly “near term” means in technology trends like this, so I’ll exercise my right to leave it ambiguous 🙂

            • ptsant
            • 3 years ago

            An expensive and not particularly elegant solution. Anyway, +1 for the cache if the price premium is reasonable.

            What would certainly help is a bunch of intelligent coders trying to parallelize the code. Sure, it’s not always easy but it almost always can be done. Algorithmic optimization beats almost any other way of improving performance.

          • NovusBogus
          • 3 years ago

          …but I want a pony for Christmas!

          • Voldenuit
          • 3 years ago

          [quote<]GAMES ARE NOT GOING TO GET FASTER UNTIL THEY USE MORE CPU - GET OVER IT. That goes for 95% of the stuff you do on your PC as well.[/quote<] That's a valid point, but with the rise of 120, 144, 165, and 240 Hz monitors, gamers with higher-end GPUs would like to know if and when the CPU bottlenecks various games. There are people on YouTube claiming that i7 processors are required to hit 120+ fps reliably in modern games, and I for one would like to see more articles from the tech review community investigating and/or debunking those claims.

        • Beahmont
        • 3 years ago

        You do realize that there is also this economic force called inflation, which makes every dollar spent today less valuable than a dollar spent yesterday. That means your argument is blatantly false on its face, especially with the 7700K being $10 cheaper at Intel’s recommended price than the 6700K was.

        Hell, inflation alone means that every year prices don’t go up, profits go down without cost reductions. The prices of Intel’s top processors have stayed mostly steady for years. Intel is making more and more money because of sales volume, not the price per chip.

        Edit: Clarified that the 7700K is cheaper than the 6700K.

      • Tristan
      • 3 years ago

      That’s because Intel management requires a new CPU release every year.

      • maxxcool
      • 3 years ago

      Welcome to x86. AMD will be in the same boat in less than a generation.

        • NovusBogus
        • 3 years ago

        True story, physics isn’t going anywhere and applications don’t really need moar cowbell anyway. I want Ryzen to succeed, but I have to admit I’m going to get some giggles from the fanboi rage if it really does compete with Broadwell-E and bring us back to ‘the good old days’ when both sides charged $500+ for the good stuff.

    • DragonDaddyBear
    • 3 years ago

    I’m curious how this compares to the i5 in the range.

      • The Egg
      • 3 years ago

      I’d say……exactly the same, when adjusted for clockspeed.

        • morphine
        • 3 years ago

        I’ll see your $0 and raise you $0 on that same bet.

    • Farting Bob
    • 3 years ago

    Hopefully this pushes prices down slightly for Skylake and Z170 motherboards.

    • brucethemoose
    • 3 years ago

    No QuickSync test?

    Honestly I’m a little disappointed, as that’s the only significant improvement over Skylake I can see. In fact, maybe TR can do a general video encoding shootout between AMD’s, Nvidia’s, and Intel’s hardware encoding blocks some day.

      • Firestarter
      • 3 years ago

      I’d argue that software support is more important than all-out performance. My AMD card has an H.264 encoder (VCE), but hardly anything can put it to good use. Even worse, everything that supports VCE also supports the QuickSync encoder, and most of the time the QuickSync encoder just has better compatibility. From what I gather, the situation is somewhat better over at Nvidia.

        • derFunkenstein
        • 3 years ago

        Hardware H.264 encoding is useful for streaming, either on Twitch or via Steam in-home streaming. Plenty of folks leave the IGP enabled and let QuickSync do the encoding. That’s what was so laughable about AMD’s Dota2 “demo”.

        edit: derp, I said what you said. +3 to you.

        • Voldenuit
        • 3 years ago

        Nvidia has Shadowplay, which is used for hardware accelerated real-time encoding for storage or streaming. I don’t use it myself, but it’s supposed to have a pretty small impact on (average) fps.

        • brucethemoose
        • 3 years ago

        It’s getting better. OBS and the new AMD utility have recording/streaming covered, and staxrip can use VCE for video encoding.

    • I.S.T.
    • 3 years ago

    Honestly, it seems like the faster RAM helps more than the extra 200 MHz does, unless Skylake had trouble maintaining turbo speeds and Kaby Lake doesn’t.

    Some of these tests show a bigger difference than clock speed alone would explain, so the RAM has got to be it.
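
    For a quick back-of-the-envelope check of what clock speed alone should buy (assuming stock base clocks of 4.2 GHz for the 7700K and 4.0 GHz for the 6700K, which is where the 200 MHz comes from):

    # Clock-only scaling expectation from the 200 MHz base-clock bump (assumed stock clocks).
    kaby_base_ghz, sky_base_ghz = 4.2, 4.0
    clock_only_gain = kaby_base_ghz / sky_base_ghz - 1.0
    print(f"{clock_only_gain:.0%}")  # 5%; anything beyond that points to memory or sustained turbo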

    • Krogoth
    • 3 years ago

    It is Skylake+ for all intents and purposes.

      • dodozoid
      • 3 years ago

      I trusted in you and you failed me…

        • Beahmont
        • 3 years ago

        Indeed. It seems a lot of people are Krogothed at Krogoth for not being appropriately Krogoth.

          • drfish
          • 3 years ago

          *head explodes*

            • Krogoth
            • 3 years ago

            [url<]https://www.youtube.com/watch?v=B_Lnz64vXB8[/url<]

          • juzz86
          • 3 years ago

          I feel like Xzibit or whatever his name is would approve this post.

    • JustAnEngineer
    • 3 years ago

    Here’s the new king of gaming CPUs. However, this doesn’t seem like any sort of significant advancement over Skylake.

      • 111a
      • 3 years ago

      No, the 5775C is the gaming king. You probably forgot.
      BTW, where is it in the tests?

        • brucethemoose
        • 3 years ago

        Maybe at stock speeds with stock memory. Otherwise it isn’t faster than the 6700k or 7700k.

          • 111a
          • 3 years ago

          The 5775C goes from 3.3 to 4.2 GHz (about a 30% overclock); where do you go with the 7700K? 15%?
          The eDRAM is unbeatable right now, the best choice for gaming due to low frame times.

          Here is old techreport test: [url<]https://techreport.com/review/28751/intel-core-i7-6700k-skylake-processor-reviewed/[/url<]

            • DancinJack
            • 3 years ago

            I guess if you want to invest in a dead socket and DDR3, sure. You wouldn’t even notice the “advantage” compared to a 7700K… I see no logical reason to get a 5775c over a 7700K unless you’re going without a discrete GPU.

            • derFunkenstein
            • 3 years ago

            And even then his math kind of sucks. If you have good cooling, your CPU won’t be running at the base frequency under load.

            • MOSFET
            • 3 years ago

            Dead socket indeed. Z97 is what kept me from pulling the trigger on a 5775c/5675c build. By the time the CPUs were actually available for the price Intel intended, finding a new Z97 board with decent reviews/features was impossible. And they were all full ATX (which is fine except for the lack of options). So it came down to, is a big cache worth lower clock speed, lower TDP, and an older platform missing USB 3.1 AND M.2 PCIe? Oh yeah, it will also be the most expensive option outside of X99/LGA2011. I thought about it, I thought about Skylake i5’s and i7’s and Z170, and in the end I’m still using my trusty FX-8320 @ 4.2.
