It’s tough to be at the top of your game. The Tiger Woodses, Roger Federers, and LeBron Jameses of the world can blaze all the paths they want, set all the records, break all the barriers, force the invention of new metaphors for pushing the bounds of what’s possible. They win and win and win some more. Eventually, though, even the best and brightest have to stumble—or, at least, plateau.
Six years ago almost to the day, Intel launched the brilliant Sandy Bridge family of CPUs to universal acclaim. Clock for clock, Sandy Bridge redefined desktop performance by every measure we cared to test. Folks building new PCs in 2011 had ample reason to retire even their Nehalem and Lynnfield PCs in favor of Sandy Bridge. Overclockers enjoyed plenty of “free” headroom from K-series Sandy CPUs with modest aftermarket coolers on top. Life was very, very good.
Intel followed Sandy Bridge with Ivy Bridge, the process-advance “tick” to Sandy’s architectural “tock.” While Ivy delivered some under-the-hood improvements to Sandy’s microarchitecture and its instructions-per-clock throughput, the big changes on the third-generation Core chips came in the integrated-graphics and power-efficiency departments as Intel began chasing thin-and-light notebooks with vigor.
Enthusiasts with Sandy Bridge CPUs still ended up sitting pretty when Ivy came around. Intel’s third-generation Core chips proved less forgiving of overclocking than their forebears, and their performance improvements on the desktop weren’t overwhelming compared to Sandy.
Meanwhile, AMD’s ambitious Bulldozer architecture failed to demolish Sandy Bridge Core i5s, much less the Core i7-2600K. Even with AMD’s subsequent refinements, Bulldozer’s derivatives never quite matched the killer combo of single-threaded performance, power efficiency, and gaming competence that Intel delivered with tick-tock regularity. The potent Radeon integrated graphics in AMD APUs did little to capture the enthusiast imagination, either.
As Intel continued to perfect its microarchitectures and fabrication processes, the Haswell, Broadwell, and Skylake architectures each brought progressively more modest single-threaded performance improvements with them. The Skylake Core i7-6700K, Intel’s most recent mainstream desktop range-topper, is one of the finest all-around CPUs ever made, and its only peers come from past generations of Core i7s.
With that long line of winners, AMD’s perennial struggles with its CPUs and APUs, and the maddeningly difficult pursuit of smaller process technologies industry-wide, Intel’s processors have enjoyed more-or-less complete domination of the PC in recent years. How many among us can say they have nothing left to beat but their own past successes? Indeed, when you’re doing so well, what’s the motivation to stick your neck out even further and perhaps overplay your hand?
And that brings us to this morning. Even in light of the slowing pace of per-clock performance improvements from Intel, the Core i7-7700K CPU that’s launching today feels like the company’s least-ambitious desktop chip in quite some time. From dispatch to retirement, the i7-7700K’s basic Kaby Lake CPU core is identical to Skylake’s. Aside from some improvements in its fixed-function video engines to efficiently handle 4K video encoded with the next-gen HEVC and VP9 video codecs, Kaby’s integrated graphics are largely a carry-over from the Gen9 IGP technology on Skylake chips, too. Womp.
Basically every hope on our wish list for Kaby was dashed during Intel’s Kaby Lake introduction earlier this year at IDF. No, there will be no eDRAM for socketed Kaby Lake CPUs. No, VESA Adaptive-Sync still hasn’t been integrated into the Kaby Lake IGP. So on and so forth. If you were hoping for some revolutionary architectural change from Intel to mark the seventh generation of Core processors and put the last nail in Sandy Bridge’s coffin, well, Kaby Lake ain’t it.
Instead, the biggest change in Kaby comes when it’s forged in Intel’s foundries. Kaby is the first CPU from Intel’s new “optimize” product phase, and that means the company has tweaked and tuned its 14-nm tri-gate transistors to extract every bit of performance possible from its fabs. With those improvements, the Core i7-7700K enjoys a 200-MHz base clock boost over the i7-6700K, to 4.2 GHz, and its Turbo clock is a dizzying 4.5 GHz. The Core i7-7700K’s TDP has held steady at 91W even with that clock boost, halting a slow upward creep that began with Ivy Bridge. If there’s any single-threaded performance improvement from Kaby, it should come entirely from this extra clock speed.
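If the cores really are identical, the expected single-threaded gain is easy to ballpark from the Turbo clocks alone. A quick back-of-the-envelope calculation, purely illustrative:

```python
# Peak Turbo clocks, in GHz, from Intel's published specs.
skylake_turbo = 4.2   # Core i7-6700K
kaby_turbo = 4.5      # Core i7-7700K

# With identical per-clock throughput, single-threaded performance
# should scale with clock speed alone.
gain = kaby_turbo / skylake_turbo - 1
print(f"{gain:.1%}")  # -> 7.1%
```

A roughly 7% best-case bump, in other words, before any real-world bottlenecks intervene.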
Like the Skylake CPUs before it, the Core i7-7700K also exposes a couple of new knobs for overclockers to play with. Kaby Lake incorporates a new base-clock-aware dynamic-voltage-and-frequency-scaling (DVFS) feature that lets the CPU’s power-management circuitry take both changes in the base clock and changes in multipliers into account when it adjusts the CPU’s P-states. The company says that in the past, its CPUs’ power-management circuitry took only multiplier changes into account, so BCLK overclocking required finding a stable fixed voltage. This new BCLK-aware approach should let overclockers enjoy a simpler BCLK tweaking process and the power-saving benefits of DVFS at idle.
Unlocked Kaby Lake chips also inherit Broadwell-E’s AVX Offset feature. That setting lets tweakers run the chip at lower speeds under heavy AVX2 workloads while maintaining higher frequencies when those instructions aren’t being executed, possibly increasing the light-load frequency gains one can achieve with a given chip. If AVX-induced instability held your Skylake CPU back from its maximum potential, Kaby Lake might break those shackles.
The Z270 chipset and friends
To go with the desktop Kaby Lake lineup, Intel is introducing several new chipsets. The one of most interest to enthusiasts, however, will likely be the Z270 platform. Z270 retains the same LGA 1151 socket that underpinned Skylake CPUs, so one can mix and match Z170 and Z270 motherboards with Skylake and Kaby Lake CPUs in whatever combination one likes. It’ll all work together. You can see just how MSI and Aorus are implementing the Z270 in our full reviews of the MSI Z270 Gaming Pro Carbon and the Aorus Z270X-Gaming 5.
In a 200-series motherboard, Kaby Lake CPUs will boast out-of-the-box compatibility with DDR4-2400 RAM, a nice little boost that first came along with Broadwell-E CPUs last year. Z270 also gives motherboard makers four more PCIe 3.0 lanes from the chipset to pair with storage devices and peripheral I/O controllers. That means a total of 24 such PCIe 3.0 lanes from the Z270 platform controller hub and 16 more from a Kaby Lake CPU. In a world where more and more devices hunger for PCIe lanes, that small update could prove quite handy. Intel hasn’t updated the DMI 3.0 interconnect between the processor and chipset, however. That link still offers bandwidth equivalent to about four PCIe 3.0 lanes.
Despite the broad cross-compatibility among Skylake and Kaby Lake CPUs and 100-series and 200-series chipsets, there are some advantages to going with Intel’s latest and greatest. Pairing a Kaby Lake CPU with a 200-series motherboard is the only way builders will be able to take advantage of Optane Memory, a new intermediate data-caching product that’ll sit somewhere between main memory and bulk storage. Optane Memory could be the first appearance of Intel’s 3D XPoint technology in a consumer product, but we know next to nothing about it right now aside from the fact that it can only be used with Kaby Lake CPUs and 200-series chipsets.
Optane Memory superficially sounds like a much-improved reinvigoration of the Turbo Memory solid-state caching product that made a rather inglorious appearance in some laptops several years ago. We’ll hopefully learn more about this technology soon and get an opportunity to give it a spin, but Intel seems to think Optane Memory will have the greatest benefit for systems that rely on hard drives for primary storage. Given the increasing prevalence of large NAND flash SSDs as the primary storage devices for enthusiast desktops, we’ll have to see whether Optane Memory is a valuable addition to those systems.
PCs with 200-series chipsets inside will also gain support for Intel Smart Sound technology, a dedicated DSP that can work with Windows 10 to enable features like system wake-up with Cortana. Compatible 200-series chipsets and Kaby Lake processors with Intel vPro support will also be able to work with Intel Authenticate technology, a hardware-enforced identity management system that can require the user to log in using any of several factors. Intel says it’s working with consumer software providers to add support for hardware-enhanced security measures for applications like password managers, touch-to-pay with biometrics, and more. The availability of those features will likely depend heavily on what a given system integrator chooses to include in a PC, so we’d expect to see them mostly in laptops where tight integration of the necessary hardware can be guaranteed.
As you can see from Intel’s comparison diagram of all of its new desktop chipsets, only the Z270 chipset will permit the PCIe 3.0 lane-switching from the processor that one might want for CrossFire or SLI setups. The remaining feature differences between the chipsets largely boil down to peripheral connectivity, RAID support, and management features for IT departments. Given the wide range of price points that motherboard makers were able to hit with Z170 boards, we’d expect that, in time, all but the most budget-limited builders will be able to choose a Z270 motherboard at a price that best meets their needs.
Our testing methods
We ran each of our benchmarks at least three times, and we’ve reported the median result. Our test systems were configured like so:
| Processor | Platform | Memory |
| --- | --- | --- |
| Intel Core i7-2600K, Intel Core i7-3770K | Asus P8Z77-V Pro | 16 GB (2 DIMMs), Corsair Vengeance Pro Series |
| AMD FX-8370 | 990FX + SB950 | |
| Processor | Motherboard | Memory |
| --- | --- | --- |
| Intel Core i7-4790K | Asus Z97-A/USB 3.1 | 16 GB (2 DIMMs), Corsair Vengeance Pro Series |
| Intel Core i7-6700K, Intel Core i7-7700K | Aorus Z270X-Gaming 5 | 16 GB (2 DIMMs), G.Skill Trident Z |
| Intel Core i7-6950X | Gigabyte GA-X99-Designare EX | 64 GB (4 DIMMs), G.Skill Trident Z |
They all shared the following common elements:
- 2x Kingston HyperX 480GB SSDs
- Cooler Master MasterLiquid Pro 280
- Gigabyte GeForce GTX 1080 Xtreme Gaming
- Windows 10 Pro
Thanks to Corsair, Kingston, Asus, Gigabyte, Aorus, Cooler Master, Intel, G.Skill, and AMD for helping to outfit our test rigs with some of the finest hardware available.
Since the Aorus Z270X-Gaming 5 motherboard that we’re using to test our Core i7-7700K is equally compatible with the Core i7-6700K, we chose to use that board instead of a Z170-powered one to perform testing for both CPUs. That decision gives the Core i7-6700K even footing with its successor when it comes to RAM speeds, so any difference in performance results between the two should come down to the differences between Skylake and Kaby Lake. Our Z170 motherboard claims DDR4-3866 support with only one DIMM, and it didn’t seem ideal to us to produce a set of results with the Core i7-7700K that didn’t take advantage of its support for higher RAM speeds.
For perspective (and also for fun), we’ve run the Core i7-6950X through our benchmarking suite alongside the Core i7-7700K. We didn’t get a good opportunity to review that chip when it first arrived, so it only seemed fair to give it a turn in the spotlight. That 10-core, 20-thread CPU sells for $1650 right now, so it’s in a completely different ballpark than Intel’s mainstream CPUs. Still, it’s good to finally get an idea of what Intel’s biggest, baddest consumer chip can do.
Some further notes on our testing methods:
- The test systems’ Windows desktops were set at a resolution of 1920×1080 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.
- After consulting with our readers, we’ve decided to enable Windows’ “Balanced” power profile for the bulk of our desktop processor tests, which means power-saving features like SpeedStep and Cool’n’Quiet are operating. (In the past, we only enabled these features for power consumption testing.) Our spot checks demonstrated to us that, typically, there’s no performance penalty for enabling these features on today’s CPUs. If there is a real-world penalty to enabling these features, well, we think that’s worthy of inclusion in our measurements, since the vast majority of desktop processors these days will spend their lives with these features enabled.
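As a minimal sketch of the median-of-three reporting described above (the numbers here are invented, not real results):

```python
import statistics

# Hypothetical FPS results from three passes of the same benchmark.
runs = [141.2, 139.8, 142.5]

# Reporting the median rather than the mean shrugs off a single outlier run.
print(statistics.median(runs))  # -> 141.2
```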
The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
Memory subsystem performance
Given the dearth of changes under Kaby Lake’s hood, we’re not going to repeat our usual whole-hog suite of Sandra cache bandwidth and latency tests for the Core i7-7700K. Instead, we’re using some simple memory-bandwidth benchmarks from the AIDA64 utility to get a basic idea of what Skylake and Kaby Lake can do on the same Z270 platform with DDR4-3866 RAM strapped in.
Yikes. There’s almost no difference between the Core i7-6700K and the Core i7-7700K in these synthetic benchmarks of memory performance. What’s surprising is that our motherboard’s support for DDR4-3866 lets those chips move much more data around than Intel CPUs with DDR3-1866 hooked up. In fact, the Skylake and Kaby Lake CPUs paired with the Z270 platform are elbowing in on the results the Core i7-5960X posted in these same AIDA64 tests way back when. That’s kind of scary performance from a dual-channel memory setup.
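The gap shouldn’t be too shocking given the theoretical peaks involved. Here’s a rough sketch of the standard dual-channel, 64-bits-per-channel arithmetic (decimal GB/s, ignoring real-world efficiency):

```python
def peak_bandwidth_gbs(mt_per_s, channels=2, bytes_per_transfer=8):
    """Theoretical peak for a DDR memory config, in decimal GB/s."""
    return mt_per_s * 1e6 * channels * bytes_per_transfer / 1e9

print(peak_bandwidth_gbs(3866))  # DDR4-3866, dual channel: ~61.9 GB/s
print(peak_bandwidth_gbs(1866))  # DDR3-1866, dual channel: ~29.9 GB/s
```

On paper, the DDR4-3866 config has roughly double the peak bandwidth of the DDR3-1866 setups in our older systems.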
Also, get used to the Core i7-6950X taking the top spot in many of our benchmarks. It’s just along for the ride in these tests, but it still turns in some jaw-dropping numbers relative to Intel’s more mainstream CPUs. Feel free to ooh and ahh as necessary.
The move to exotic high-speed DDR4 results in some of the lowest memory access latencies we’ve ever seen in our tests. Impressive.
Some quick synthetic math tests
To get a basic idea of how the CPUs on our bench stack up, we’re using some of the handy built-in benchmarks from the AIDA64 utility. These benches can take advantage of the various feature sets of Intel and AMD’s latest chips. Of the results shown below, PhotoWorxx uses AVX2 instructions (and falls back to AVX on Ivy Bridge, et al.), CPU Hash uses AVX (and XOP on Bulldozer/Piledriver), and FPU Julia and Mandel use AVX2 with FMA.
In these tests, the Core i7-7700K behaves like a clock-bumped Skylake chip. As we’d expect, the i7-7700K takes a small lead over the i7-6700K when its extra Turbo headroom comes into play, and it behaves a lot like the i7-6700K when it doesn’t.
We also won’t be repeating our in-depth power measurements and task-energy calculations for this review. Instead, we’ve chosen to do some quick platform power draw measurements with our trusty Watts Up? power meter. The power supply for our system was plugged into the Watts Up?, while the monitor and other peripherals were plugged into a separate outlet. We tested each system’s power draw using the “bmw27” benchmark file for the Blender 3D rendering app.
While a lot of the differences in idle power draw above can be put down to variations in motherboards, the Core i7-7700K seems to need a couple dozen more watts under load to do its thing. We may have to redo these numbers with future firmware updates for our Z270 motherboard to see whether the i7-7700K’s power draw is within the expected range.
Now that we have a basic understanding of the Core i7-7700K, its platform, and its performance, let’s see how it handles some games.
Doom likes to run fast, and especially so with a GTX 1080 pushing pixels. We figured the game’s OpenGL mode would be an ideal test of each CPU’s ability to keep that beast of a graphics card fed, so we cranked up all of its eye candy at 1920×1080 and went to work with our usual test run in the beginning of the Foundry level.
The nice progression of average frame rates above suggests our hunch about the GTX 1080’s hunger for work from the CPU is correct. The fast clocks and high IPC of the Core i7-6700K and Core i7-7700K result in the best performance from the GTX 1080, and we bet the extra bandwidth afforded by the DDR4-3866 memory we’re using doesn’t hurt, either. At the other end of the chart, pairing the FX-8370 with the GTX 1080 cuts the card’s average frame rate roughly in half compared to the Core i7-7700K’s. Ouch. The AMD chip’s 99th-percentile frame time is significantly higher than those turned in by the Intel chips, too.
These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time the GTX 1080 spends beyond certain frame-time thresholds, each with an important implication for gaming smoothness. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame. And 8.3 ms corresponds to 120 FPS, an even more demanding standard that Doom can easily meet or surpass on hardware that’s up to the task.
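To make the formula concrete, here’s a small sketch of one reasonable formulation of the metric, accumulating only the portion of each frame time past the cutoff. The frame times are invented for illustration; our real data comes from our frame-time capture tools.

```python
# Invented frame times, in milliseconds, for illustration only.
frame_times_ms = [7.1, 8.0, 9.5, 16.9, 8.2, 25.0, 7.8, 50.3]

def time_beyond(frame_times, threshold_ms):
    """Total time (ms) spent past a frame-time threshold."""
    return sum(t - threshold_ms for t in frame_times if t > threshold_ms)

for threshold in (50.0, 33.3, 16.7, 8.3):
    print(f"beyond {threshold} ms: {time_beyond(frame_times_ms, threshold):.1f} ms")
```

The lower the threshold, the more of the run counts against a chip, which is why the 8.3-ms numbers separate fast CPUs far more than the 50-ms ones do.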
Happily, none of the chips in our test suite cause the GTX 1080 to spend more than a handful of milliseconds beyond the critical 16.7-ms barrier. The Core i7-6700K and the Core i7-7700K each let the GTX 1080 spend less than half a second on frames that take longer than 8.3 ms to produce, compared to the two to five seconds the Sandy, Ivy, and Haswell Core i7s allow. The FX-8370 and GTX 1080 combo spends more than 12 seconds under 120 FPS, though, a disappointing result in this company.
Our previous graphics-card tests have shown that Doom’s Vulkan renderer doesn’t provide the best performance with GeForces, but we figured we might as well give both of the game’s APIs a try to see whether the switch had a meaningful impact on performance from these CPUs. Aside from the switch to Vulkan, we left all of the game’s quality settings the same for this test.
Making the switch to Vulkan here has a surprising effect on both the average frame rates and the 99th-percentile frame times the GTX 1080 can produce. Every CPU here gets a frame-rate increase of some sort, and 99th-percentile frame times fall across the board. The FX-8370 gets the biggest boost of all.
In our measures of “badness,” each CPU allows the GTX 1080 to spend an imperceptible amount of time past 16.7 ms working on tough frames. The real improvement comes past the 8.3-ms threshold, where most of the Intel CPUs let the GTX 1080 spend about a tenth of a second on challenging scenes. The Core i7-2600K causes the GTX 1080 to spend about two-tenths of a second on tougher frames—still impressive.
The “most improved” award in these metrics goes to the FX-8370. The Piledriver chip lets the GTX 1080 spend just 1.2 seconds on tough frames with Vulkan running the show, a factor-of-10 improvement over its OpenGL performance. Still, the FX-8370 can’t quite match the Intel parts for absolute smoothness.
Doom’s performance with Vulkan and these CPUs is fascinating. Average frame rates rise and 99th-percentile frame times improve across the board for all the chips we tested. Most strikingly, those 99th-percentile figures are more or less equal regardless of the age of the chip in question. We’ve seen hints of this equalizing effect with Vulkan in informal testing before, but this is the first time we’ve formally quantified it.
Not every game has a Vulkan rendering path, to be sure, but Doom’s implementation suggests that clever developers can extract substantial amounts of performance from older CPUs with the new API. That fact could have important implications for folks with older systems if Vulkan becomes more popular in future titles. For now, though, these results are more of an outlier than the norm.
Although Crysis 3 is nearly four years old now, its lavishly detailed environments and demanding physics engine can still stress every part of a system. To put each of our CPUs to the test, we took a one-minute run through the grassy area at the beginning of the “Welcome to the Jungle” level with settings cranked at 1920×1080.
So that’s an unusual result. At least for the test run we chose, it seems Crysis 3 leans heavily enough on the CPU that the beastly Core i7-6950X actually allows the GTX 1080 to turn in the best overall average frame rate and 99th-percentile frame time. Monster Broadwell-E chip aside, the Core i7-7700K’s clock-speed advantage over the Core i7-6700K seems to let it turn in slightly better average-FPS and 99th-percentile figures, as well. At least at the settings we chose, Crysis 3 still seems able to take advantage of as much CPU as one can throw at it.
At the critical 16.7-ms threshold of “badness,” only the FX-8370 causes the GTX 1080 to spend a substantial amount of time working on frames that would drop animation rates below 60 FPS. The Core i7-7700K turns in a perfect score here, while the i7-6700K and i7-6950X aren’t far behind.
At the more challenging 8.3-ms threshold, the Core i7-6950X puts a point on its unusual photo finish by holding up the GTX 1080 for just under a second in total. The i7-7700K still delivers an excellent result here by allowing the graphics card to spend about half the time on tough frames that the Core i7-6700K does. Otherwise, the results largely sort themselves out by the age of the chips in question. The i7-2600K, i7-3770K, and FX-8370 all deliver substantially worse gaming performance with the GTX 1080 than the i7-4790K and its newer cohorts.
Far Cry 4
Although it’s not quite as demanding as Crysis 3, my experience with Far Cry 4 suggests it’s still a challenge for most systems to run smoothly with the settings cranked. To confirm or dispel that impression, I turned up the game’s quality settings at 1920×1080 and ran through our usual test area.
Interesting. Far Cry 4’s performance improvements taper off once we start testing it with Haswell and newer chips. The i7-3770K, i7-4790K, and i7-6700K all bunch up behind the i7-7700K, trailed slightly by the i7-2600K and the i7-6950X. The i7-7700K also turns in the best 99th-percentile frame time by a slight margin, but the differences aren’t large among the Intel chips.
Our time-spent-beyond results shake out about as you’d expect from the average frame rates and 99th-percentile frame times above. The i7-7700K lets the GTX 1080 deliver nearly all of its frames in 16.7 ms, and the other Intel chips in the test suite hamper the card just a bit more. Meanwhile, the FX-8370 forces the GTX 1080 to spend nearly two seconds on frames that take more than 16.7 ms to render, and that performance translates into a notably less smooth gaming experience than the Intel CPUs provide.
Deus Ex: Mankind Divided
One of 2016’s most demanding titles for any system, Deus Ex: Mankind Divided seemed like an ideal game to include in our test suite. We performed our usual test run in the game’s DirectX 11 mode with lots of eye candy turned on.
Whoops. Seems our test settings for Mankind Divided ended up more GPU-bound than we had expected. Still, these results show some minor variations in 99th-percentile frame times among the Intel CPUs we’re testing, so even with a game that stresses the graphics card as much as Mankind Divided does, it helps to have a powerful CPU backing it up—just not by that much.
Given the rather pedestrian frame rates our test rigs achieved in this test, it’s worth taking a look at the 33.3-ms threshold first. The Core i7-7700K delivers every one of its frames in under 33.3 ms, while the other chips hamstring the GTX 1080 just a bit. That said, there’s not a substantial gap between the best- and worst-performing chips in this test once we flip over to the 16.7-ms threshold.
Grand Theft Auto V
Grand Theft Auto V recently offered us hints that it can still put the hurt on CPUs as well as graphics cards, so we ran through our usual test run with the game’s settings turned all the way up at 1920×1080.
Wow. The Core i7-6700K, Core i7-7700K, and Core i7-6950X all deliver average-FPS numbers and 99th-percentile frame times that are quite a bit better than anything else in our test suite. The i7-7700K ekes out a tiny advantage over its Skylake stablemate, though, perhaps thanks to its extra clock speed. We may have to perform some further testing to see whether the extra bandwidth afforded by our DDR4-3866 memory on our Z270 test rig makes a meaningful difference to GTA V’s performance given these numbers.
All of the Intel CPUs in this test pose little hindrance to the GTX 1080’s thirst for work at the critical 16.7-ms threshold. None of the Intel chips hold up the card for more than a fraction of a second, while the FX-8370 makes the graphics card spend more than three seconds of our test run waiting to complete tough frames that drop frame rates below 60 FPS. GTA V seems to care about high single-threaded performance and memory throughput when it’s running on a GTX 1080, and the FX-8370 is thoroughly outclassed in those measures by even its contemporary, the Core i7-2600K.
We can see some better separation among the Intel CPUs in this test by clicking over to the 8.3-ms threshold. There, the Core i7-7700K spends about two seconds less contributing to tough frames than the Core i7-6700K does. The Core i7-6950X mixes it up with that highest-performing pair of chips, too. The i7-4790K, the i7-3770K, and the i7-2600K all contribute significantly more milliseconds to the time spent under 120 FPS in our test run than Intel’s most recent chips, and the FX-8370 simply bottlenecks the GTX 1080.
Memory frequency and performance scaling with Arma III
Our own Colton Westrate, aka “drfish,” is a huge fan of Bohemia Interactive’s DayZ. He’s long suggested that we include Bohemia’s Arma III, which is powered by the same engine, in our CPU performance benchmarks because of the disproportionate demand it places on single-threaded CPU performance and memory bandwidth. We figured now was as good a time as any to give it a shot.
We didn’t test Arma with every CPU on our bench because of those inherent performance characteristics and a lack of diverse DDR3 kits. Informally, though, the move from a Core i7-3770K with DDR3-1866 to the Core i7-7700K with DDR4-2133 netted us about 11 FPS more on average. Given that fact, we ran the game through its paces with the Core i7-7700K and three different memory kits: one running at DDR4-2133, the second at DDR4-3000, and the third at DDR4-3866. Our thanks to G.Skill for hooking us up with the requisite kits.
To make our Arma benchmarking repeatable, we relied on the community-created “Yet Another Arma Benchmark” scenario. This roughly two-minute test somewhat replicates what it’s like to be present in a multiplayer Arma game, and it’s quite demanding. We also cranked every graphics setting the game had to offer save for the full-screen anti-aliasing slider.
One thing is for certain if you’re an Arma III player: sticking with good old DDR4-2133 isn’t going to let you get all of the performance that’s available from a Core i7-7700K in this game. The move from DDR4-2133 to DDR4-3000 nets a big increase in average frame rate and a big decrease in 99th-percentile frame times. Past that point, the benefits are somewhat more modest. Our exotic DDR4-3866 kit still offers improved performance over slower memory, but whether it’s worth the $65 or so extra over a DDR4-3000 kit of a similar capacity is something that only fans of the game can judge.
Our graph of the tail end of the frame-time distribution for Arma III and our “badness” graphs offer a little more insight into just what that extra $65 buys. The biggest improvement in Arma performance still comes from the move to DDR4-3000, but stepping up to DDR4-3866 means that the game spends about five fewer seconds below 60 FPS in the Yet Another Arma Benchmark scenario. That’s not nothing when even a Core i7-7700K and a GeForce GTX 1080 don’t seem to be helping Arma III’s performance all that much. If you’re an Arma fan and already have the best CPU and graphics card that you can afford, it seems like it’s worth stepping up to a reasonably fast (or even ludicrously fast) memory kit if your motherboard and CPU can handle it.
Compiling code in GCC
Our resident developer, Bruno Ferreira, helped put together this code compiling test. Qtbench tests the time required to compile the QT SDK using the GCC compiler. The number of jobs dispatched by the Qtbench script is configurable, and we set the number of threads to match the hardware thread count for each CPU.
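The thread-matching step is simple enough to sketch. The `make` invocation below is illustrative of what a build script would dispatch, not our exact harness:

```python
import os

# Logical CPU count: 8 on a Core i7-7700K (four cores, eight threads),
# 20 on a Core i7-6950X.
jobs = os.cpu_count()

# One compile job per hardware thread, e.g. "make -j8" on the i7-7700K.
print(f"make -j{jobs}")
```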
No surprises here. The Core i7-7700K is the fastest quad-core chip on the bench in this test, and the Core i7-6950X can stretch all 20 of its threads to take a commanding lead.
Compressing and decompressing zip archives is one of the more common tasks I still perform on a desktop PC, and the Core i7-7700K is the best at it of the mainstream chips here.
TrueCrypt disk encryption
Although the TrueCrypt project has fallen on hard times, its built-in benchmarking utility remains handy for a quick test of these chips’ accelerated and non-accelerated performance when we ask them to encrypt data. The AES test should take advantage of hardware acceleration on the chips that support Intel’s AES-NI instructions, while the Twofish test relies on good old unaccelerated number-crunching prowess.
For both the accelerated and non-accelerated encryption algorithms we benched with TrueCrypt, the Core i7-7700K is the fastest thing going for a mainstream CPU.
Scientific computing with STARS Euler3D
Euler3D tackles the difficult problem of simulating fluid dynamics. It tends to be very memory-bandwidth intensive.
Unsurprisingly, the Core i7-6700K and i7-7700K both take substantial leads over the older quad-core chips in our stable in Euler3D, thanks in part to the copious memory bandwidth that DDR4-3866 affords. The Core i7-6950X continues to play in a completely different league. Moving on.
3D rendering and video processing
The Cinebench benchmark is based on Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. This test runs with just a single thread and then with as many threads as CPU cores (or threads, in CPUs with multiple hardware threads per core) are available.
Cinebench’s single-threaded test proves that the Core i7-7700K is one of the fastest CPUs around on a per-core basis, and that performance translates into a solid lead for it in the all-cores test among the quad-core chips it’s competing with. The Core i7-6950X is in a league of its own here, though.
As an OpenCL benchmark, LuxMark lets us test performance on CPUs, GPUs, and even a combination of the two. OpenCL code is by nature parallelized and relies on a real-time compiler, so it should adapt well to new instructions. For instance, Intel and AMD offer integrated client drivers for OpenCL on x86 processors, and they both support AVX. The AMD APP driver even supports Bulldozer’s and Piledriver’s distinctive instructions, FMA4 and XOP. We’ve used the AMD APP ICD on the FX-8370 and Intel’s latest OpenCL ICD on the rest of the processors.
We tested with LuxMark 3.0 using the “Hotel lobby” scene.
The Core i7-7700K continues its string of victories in our CPU-only tests with LuxMark, but the Core i7-6700K isn’t far behind.
As we’ve come to expect with LuxMark, switching to GPU-only rendering leads to a chaotic mixture of results. Still, the Core i7-7700K comes out on top.
With their powers combined, the CPUs and the GeForce GTX 1080 fall back into an orderly progression. We also get the highest possible scores this way. Once again, the i7-7700K takes the overall crown among the quad-core chips it’s pitted against.
Blender Cycles renderer
Here’s a new addition to our test suite. If you’ve been paying attention to AMD’s Ryzen events of late, you’ve probably seen the company demonstrating its chips with Blender and its Cycles renderer. The Blender project offers several standard scenes to render with Cycles for benchmarking purposes, and we chose the CPU-targeted version of the “bmw27” test file to put Cycles through its paces on these parts.
The i7-7700K adds another trophy to its case with the Cycles test. The Core i7-6950X does it one better and finishes the job in almost half the time.
Some quick overclocking tests
One of the biggest questions about Kaby Lake is whether Intel’s improved 14-nm process technology opens up any additional overclocking headroom for the company’s unlocked CPUs. To find out, we dived into our motherboard’s firmware to see just how much more performance we could extract from our particular i7-7700K. We stuck to basic multiplier overclocking for this test and raised the Vcore of our chip as needed to achieve stability. We continued in this way until we ran into thermal limits.
After several cycles of increasing multipliers, running Prime95, and raising voltages as needed to achieve stability, we ended up at 4.8 GHz and around a 1.32V Vcore. Our system could boot at 4.9 GHz, but we ran into thermal throttling when we tried increasing the Vcore further to achieve stability. Considering the i7-7700K’s 4.5 GHz stock Turbo speed, a roughly 7% overclock ain’t much, especially when it threatens to overwhelm a $110, 280-mm liquid cooler.
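The headroom figure above is simple arithmetic. A quick sketch using the clock speeds from this test:

```python
def overclock_headroom(stock_ghz, overclocked_ghz):
    """Percentage gain of the overclocked speed over stock."""
    return (overclocked_ghz - stock_ghz) / stock_ghz * 100

# The i7-7700K's 4.5 GHz stock Turbo speed versus our 4.8 GHz result.
print(round(overclock_headroom(4.5, 4.8), 1))  # 6.7
```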
A beefier liquid loop or even more exotic cooling solutions might perform better, but we’re just regular folks with regular coolers here. We’ve successfully taken a Core i7-6700K to 4.7 GHz before, so another 100 MHz seems like it falls within the chip-to-chip variations inherent in semiconductor manufacturing.
We unfortunately didn’t have time to overclock every CPU on our test bench, but we did push our evergreen Core i7-2600K to 4.5 GHz to see how it stacked up against the Core i7-7700K at 4.8 GHz. We then re-ran a few of our productivity tests on both chips. Here’s what we found.
So, uh, yeah. Overclocking the Core i7-7700K makes a fast chip a little bit faster at most everything, but the improvements are pretty modest considering the extra fan noise and heat production involved. A new Sandy Bridge this isn’t.
To sum up all of the data we collected over the past few pages, we’ve condensed our results into our famous value scatter plots. The non-gaming chart shows the performance each chip delivers per dollar, measured as the geometric mean of the results of our productivity tests. The gaming chart takes the geometric mean of all the 99th-percentile frame times each chip produced in our gaming tests and converts that figure into FPS so that our higher-is-better system works.
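The gaming figure works out like this: take the geometric mean of each chip’s 99th-percentile frame times, then invert it into frames per second. A sketch with made-up frame times, not our measured data:

```python
import math

def geomean(values):
    """Geometric mean: the nth root of the product of n values,
    computed via logs to avoid overflow."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

def frame_time_to_fps(ms):
    """Convert a frame time in milliseconds to frames per second."""
    return 1000.0 / ms

# Hypothetical 99th-percentile frame times (ms) from several games.
frame_times = [12.0, 15.0, 20.0]
print(round(frame_time_to_fps(geomean(frame_times)), 1))  # 65.2
```

The geometric mean keeps one unusually slow or fast game from dominating the average, which is why we prefer it to a simple arithmetic mean for combining results across tests.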
The best values in these graphs can be found toward the upper-left corner, where prices are lowest and performance is highest. We’ve presented versions of each chart with the Core i7-6950X both present and omitted, since its ginormous price tag skews our results.
All of our in-depth tests boil down to a couple of simple conclusions. Intel’s process optimization and the resulting clock-speed bump make the Core i7-7700K about 5% faster on average in our productivity tests than the Core i7-6700K, and the new chip also delivers about a 2% higher 99th-percentile frame rate than its predecessor in the games we tested. Despite those improvements, Intel actually suggests a $339.99 price tag for the new chip, $10 less than the Core i7-6700K’s.
The Core i7-6700K was already one of the fastest mainstream desktop CPUs money could buy for most any purpose, and the 300 MHz of extra Turbo headroom Intel found in its process optimizations lets Kaby Lake set a slightly higher bar for this segment. The Core i7-7700K delivers exceedingly modest improvements, to be sure, but bellyaching about them is like complaining about a slightly faster Ferrari for less money. More single-threaded performance is a valuable commodity these days, and higher clock speeds are one way to get there.
Along with its higher clock speeds, the Kaby Lake refresh brings some welcome platform improvements with it. Seventh-generation Core chips can now handle DDR4-2400 memory out of the box, and they can also handle higher overclocked memory speeds. The Z270 motherboard we tested fired right up with nosebleed-inducing DDR4-3866 RAM in its DIMM slots after we enabled the right XMP profile. So equipped, our rig turned in some synthetic memory performance results that would make even Haswell-E-powered systems a little bit uncomfortable, and our testing showed that some games and applications seem to be able to take advantage of the extra bandwidth. Z270 also comes with more flexible I/O lanes for motherboard makers to tap, and in an era where more and more devices are clamoring for PCIe connectivity, we’re all for it.
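For context on those memory speeds, the peak theoretical bandwidth of a dual-channel DDR4 setup is easy to work out: transfers per second times eight bytes per 64-bit channel times the channel count. A back-of-the-envelope sketch:

```python
def peak_bandwidth_gbs(transfer_rate_mts, channels=2, bus_width_bytes=8):
    """Theoretical peak bandwidth in GB/s for a DDR memory config.
    transfer_rate_mts: megatransfers per second (e.g. 3866 for DDR4-3866).
    Each transfer moves 8 bytes per 64-bit channel."""
    return transfer_rate_mts * 1e6 * channels * bus_width_bytes / 1e9

print(round(peak_bandwidth_gbs(3866), 1))  # 61.9 GB/s for DDR4-3866
print(round(peak_bandwidth_gbs(2400), 1))  # 38.4 GB/s for DDR4-2400
```

Real-world bandwidth lands below these ceilings, but the gap between stock DDR4-2400 and our overclocked DDR4-3866 kit is clear enough.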
Folks looking to turbocharge their Kaby chips through overclocking might not find a lot of performance remaining to be tapped, though. Even with a 280-mm liquid cooler on top, our i7-7700K ended up thermally limited at about 4.8 GHz. The Core i7-6700K was already a bear to keep cool when we started turning up its clocks, and the process-technology improvements in Kaby Lake don’t seem to have translated to more overclocking headroom.
To be fair, making general statements about overclocking performance from one sample of a given CPU is a bit dicey, but it’s what we’ve got to work with. Given the i7-7700K’s already-impressive stock clocks, we have to wonder about the value proposition of strapping on CPU coolers that cost hundreds of dollars to get a few more percent’s worth of clock speed out of a chip.
Many readers are likely wondering whether it’s finally time to retire their Sandy Bridge or Ivy Bridge systems with the advent of Kaby Lake. If the Core i7-7700K’s performance in productivity tasks doesn’t tantalize you, perhaps its gaming performance will. With a blisteringly fast graphics card like the GeForce GTX 1080 installed, many of our more CPU-bound gaming tests at 1920×1080 show that older systems can limit the maximum performance one can achieve with today’s highest-end graphics cards. That behavior isn’t consistent across every game we tested, to be sure, but it does suggest that you might be leaving a lot of performance on the table if you just plop a GTX 1070 or GTX 1080 into a five-year-old PC.
Now that Intel’s cards are largely on the table for this generation of desktop chips, we’re curious to see what AMD has up its sleeve with its Ryzen CPU family. Early performance demonstrations of Ryzen suggest its instructions-per-clock throughput will be comparable to that of Intel’s Broadwell-E chips, rather than the class-leading Skylake and Kaby Lake parts. If that’s the case—and if AMD can find substantial Turbo headroom for Ryzen above and beyond the 3.4 GHz top-end base clock figure it’s touted so far—those chips could finally deliver some sorely needed competition in the enthusiast desktop CPU market.
We say as much because our tests with the Broadwell-E Core i7-6950X show that its 99th-percentile FPS figures already don’t trail Intel’s highest-IPC cores by that much in our gaming tests. If the DirectX 12 and Vulkan APIs become more widely adopted, our experience with Doom’s Vulkan renderer shows that those low-overhead APIs have the potential to let lower-IPC chips with a lot of cores substantially close the gap with higher-performance parts, too. To be fair, Vulkan and DirectX 12 aren’t the APIs of choice for a lot of games yet, and their benefits won’t accrue exclusively to AMD CPUs. In any case, Ryzen chips are slated to arrive in the first quarter of this year, so we should know for sure just how they stack up pretty soon.
Speaking of the Core i7-6950X, you may have noticed that chip’s near-complete dominance in our non-gaming application tests. Although we’re only belatedly getting this 10-core monster on the bench, its performance in any task that can use lots of threads remains downright jaw-dropping. For tests where we have cross-comparable data, the i7-6950X offers considerably higher performance than the Core i7-5960X before it, and it often leaves the Core i7-7700K eating dust. Despite Broadwell’s IPC deficit compared to Skylake and Kaby Lake, the i7-6950X doesn’t suffer much in our 99th-percentile frames per second metric of gaming smoothness, either.
If time is money for your work, and your work can take advantage of lots of threads, the i7-6950X is the fastest high-end desktop CPU we’ve ever tested, full stop. If you don’t need all of its cores and threads, however, the Core i7-7700K arguably delivers the best gaming performance on the market for about a fifth of the price. Intel’s Extreme Edition CPUs have never been good values, but the i7-6950X takes the definition of “halo product” to eye-watering new heights. If the return-on-investment calculations work out for you, though, the i7-6950X is an amazing chip.