Intel’s Core i7-8700K CPU reviewed

How quickly things change. Just nine months ago, Intel introduced the Core i7-7700K, the seventh in a line of four-core, eight-thread enthusiast processors stretching back to 2010. We contented ourselves with an average performance increase of about 5% from that chip over the Core i7-6700K before it. A proven formula got a little bit better, and that was that.

Shortly after that quiet bit of incremental progress, AMD upended the status quo like an errant speck of dust in a 193-nm stepper with its Zen architecture and Ryzen CPUs. Builders could suddenly get more cores and threads for their dollar than ever before. Every one of those chips was overclockable, and they didn’t suffer from nipping and tucking to satisfy product-segmentation whims. Most importantly, AMD spoke directly to the hearts and minds of the brightest spot of today’s PC market: enthusiasts and gamers. Nobody less than CEO Lisa Su talked up the company’s focus on high-performance products for that audience, and AMD backed up that talk with a full range of desktop chips spanning prices from about $100 to $1000 and everywhere in between.

In the face of this onslaught, the i7-7700K easily maintained its dominance in single-threaded performance and high-refresh-rate gaming, but it was suddenly matched in just about every other measure of performance by chips selling for about $100 less. Whatever Intel’s plans may have been at that time, it was clear that the four-core, eight-thread formula wasn’t going to be enough to hold the line any longer for mainstream PCs.

The Core i7-8700K

That brings us to Intel’s eighth generation of Core desktop processors, code-named Coffee Lake, that launch this morning. These CPUs don’t combat the Ryzen threat with a major microarchitecture change or even a process shrink. Instead, the company has continued to improve its 14-nm process for another generation of CPUs powered by the Skylake microarchitecture. Although the exact nature of those improvements remains a closely held secret, they are at least partially responsible for letting the company put more cores in its mainstream CPUs at every price bracket, and those higher core counts are how Intel hopes to extract the performance leap that the eighth-generation Core name suggests.

Make no mistake: this is a major change for Intel’s entire product stack. Coffee Lake Core i7s ditch the four-core, eight-thread configuration that’s been the top-end Core i7 formula for the better part of a decade. Instead, those chips now have six cores and 12 threads at their disposal. Core i5 chips now have six cores and six threads, and Core i3s have gone from dual-core designs with Hyper-Threading to native four-core parts without. Given Intel’s clock-speed and architectural advantages over today’s Ryzen CPUs, this full-court press seems poised to slow AMD’s momentum in regaining desktop CPU market share.

The six-core Coffee Lake-S die. Source: Intel

Outside of those very high-level changes, however, Intel is basically treating these chips as black boxes, the fancy die shot you see above aside. Die size, transistor count, and the exact nature of the process improvements Intel has been able to extract from its 14-nm fabs remain mysteries.

We do know that the 14nm++ process, as it’s called, can handle a 23% to 24% higher drive current (a measure directly associated with how fast a transistor can operate) with only minor increases in leakage current, at worst, thanks to a mantra of constant improvement over the life cycle of a given process node. The company disclosed as much at its Technology and Manufacturing Day earlier this year.

In total, the company claims the improvements in 14nm++ can allow for transistor performance increases of up to 26% at the same power level compared to the process’ status in 2015. Perhaps more astoundingly, the company can get the same level of performance on 14nm++ as one of its 2015 transistors for 52% less power. These developments are most likely a large part of what allowed the company to perform feats like putting four cores in a 15W TDP and six cores in a 95W TDP.

Outside of what it’s presented in its manufacturing day events, however, we don’t have any specific inkling of how those process improvements are put to use in Coffee Lake. In the finest tradition of Andy Grove, the company now seems unwilling to let even the slightest hint of competitively useful information slip its clean-room walls. Given AMD’s resurgence, this paranoia is perhaps justifiable. Tipping off your competitors about how, exactly, your fabs are improving is perhaps not the wisest course of action.

Just like the birds and the bees in the age of the Internet, however, curious folk can find enough things out on their own to be dangerous. Someone will certainly figure out the die size of Coffee Lake-S by simply popping the IHS off a chip and using some calipers—not exactly nation-state levels of industrial espionage. Transistor count and process improvements can’t be gauged nearly as easily, but firms like TechInsights will surely take a guess (or do rather better than guess, for those willing and able to pay). So long as that’s all that happens, however, Intel seems happy to let the rumors swirl.

This secrecy is kind of a shame, because the company’s process technology at 14 nanometers is clearly one of its greater competitive advantages. It’d be neat to hear what has allowed the company to boost the performance of its transistors quite so much on the same node, but in such a cutthroat industry as fabrication, Intel has no reason to tip its hand to competitors. For now, though, let’s examine the implementations of Coffee Lake that are launching today.

 

Sizing up the lineup

| Model | Base clock | Single-core Turbo Boost 2.0 frequency | Cores/threads | TDP | L2 cache | L3 cache | Memory speed (MT/s) | Price |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Core i7-8700K | 3.7 GHz | 4.7 GHz | 6/12 | 95W | 1.5MB | 12MB | Dual DDR4-2666 | $359 |
| Core i7-8700 | 3.2 GHz | 4.6 GHz | 6/12 | 65W | 1.5MB | 12MB | Dual DDR4-2666 | $303 |
| Core i5-8600K | 3.6 GHz | 4.3 GHz | 6/6 | 95W | 1.5MB | 9MB | Dual DDR4-2666 | $257 |
| Core i5-8400 | 2.8 GHz | 4 GHz | 6/6 | 65W | 1.5MB | 9MB | Dual DDR4-2666 | $182 |
| Core i3-8350K | 4 GHz | N/A | 4/4 | 91W | 1MB | 6MB | Dual DDR4-2400 | $168 |
| Core i3-8100 | 3.6 GHz | N/A | 4/4 | 65W | 1MB | 6MB | Dual DDR4-2400 | $117 |

As it’s implemented in the six chips launching today, Coffee Lake really is just a six-core Skylake die at an architectural level. The increase to 12MB of L3 cache on Coffee Lake Core i7s is the natural consequence of adding two more cores with 2MB of L3 each. Coffee Lake Core i5s and Core i3s get 1.5 MB of L3 per core to play with. Outside of those broad changes, Coffee Lake is, as we’ve noted, largely a product of process refinement and small tweaks, like the boost to natively-supported DDR4-2666 RAM on some parts.

A block diagram of the Skylake client core. Source: Intel

This is by no means a bad thing. Despite launching in 2015, Skylake is still a world-class architecture, and Intel probably wants to get as much out of it as it can before firing another valuable microarchitecture or process-shrink arrow from its quiver. It doesn’t help that the company seems to be struggling with its 10-nm process, either, but that’s a topic for another time.

Although price points largely haven’t changed in the move to Coffee Lake, buyers are getting more for their money at each rung on Intel’s product ladder than ever before. Core i3s now offer four full cores to toy with, and the Core i3-8350K is basically a Core i5-7600K for almost $100 less. That’s quite the rough break for AMD’s four-core, eight-thread Ryzen 5s and four-core, four-thread Ryzen 3s, as the i5-7600K had enough oomph to match the $190 Ryzen 5 1500X in our productivity tasks while beating it handily in our gaming benchmarks. That’s before we consider the i5-7600K’s overclocking prowess, as well.

| Model | Cores | Threads | Base clock | Boost clock | Max XFR headroom | L3 cache | TDP | Price |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ryzen 5 1600X | 6 | 12 | 3.6 GHz | 4.0 GHz | 100 MHz | 16MB | 95W | $249 |
| Ryzen 5 1600 | 6 | 12 | 3.2 GHz | 3.6 GHz | 50 MHz | 16MB | 65W | $219 |
| Ryzen 5 1500X | 4 | 8 | 3.5 GHz | 3.7 GHz | 200 MHz | 16MB | 65W | $189 |
| Ryzen 5 1400 | 4 | 8 | 3.2 GHz | 3.4 GHz | 50 MHz | 8MB | 65W | $169 |

Up the stack, things look slightly better for the red team. The six-core, six-thread i5-8400 has a 4-GHz Turbo clock and a 2.8-GHz base clock in a 65W power envelope, and it goes for about $30 less than the Ryzen 5 1600, an unlocked, six-core, 12-thread chip selling for $215 at retail. AMD’s effort has a much larger L3 cache, twice the L2 per core, and a six-thread advantage over the i5-8400. Its stock cooler will likely be much nicer than Intel’s boxed heatsink, too, a nice perk for budget builders. I’d expect the i5-8400 to have better performance in lightly-threaded workloads, but the Ryzen 5 1600 could pull ahead in multithreaded tasks, and it could stretch any lead it has even further with some overclocking.

The real part of interest to many enthusiasts may be the $257 Core i5-8600K, whose price naturally pits it against AMD’s hottest six-core part, the Ryzen 5 1600X. With six unlocked cores and the considerable overclocking potential of Intel’s 14-nm process to play with, the i5-8600K seems an easy list-topper for any midrange system. Even before we look at overclocking, the i5-8600K bests the Ryzen 5 1600X’s boost speeds with XFR in the picture. The 1600X’s SMT support might give it a slight edge in some heavily-threaded workloads at stock speeds, but I imagine overclocking the i5-8600K could easily bridge that gap.

The Core i5-8400

We do have a Core i5-8400 in our labs that arrived alongside the i7-8700K, but in the interest of time, I chose to focus my efforts on the high-end chip first. Intel hasn’t sent us an i5-8600K to play with so far, so we may have to source one of our own to see how it handles as part of our i5-8400 review. Neither the i5-8600K nor the 1600X comes with a cooler, so buyers are essentially being asked to bring their own cooling, whether they plan to overclock or not. As we’ll soon see, Coffee Lake’s overclocking potential could easily tip the scales in Intel’s favor, but I still expect this battle to be a close one, dollar for dollar.

Coffee Lake Core i7s are the most interesting of this bunch, because they mark the first time so many cores and threads have ever been available on Intel’s mainstream desktop platform. In fact, these chips have specifications that used to be reserved for the blue team’s high-end desktop CPUs, and I have every reason to expect their performance will belong in that tier.

For $303, the i7-8700’s base clock matches the Turbo Boost speed of the Broadwell-E Core i7-6800K, and its Turbo speeds go all the way up to a very impressive 4.6 GHz. No, you’re not getting unlocked multipliers, quad-channel memory, or a ton of PCIe lanes to play with from the i7-8700, but I still think many builders will be elated with this kind of performance from a mainstream CPU in a 65W power envelope.

Finally, the $359 Core i7-8700K brings six unlocked cores and 12 threads to the table. In the transition from i7-7700K to i7-8700K at the top of the lineup, Intel found 200 MHz more Turbo Boost frequency for lightly-threaded workloads even as it dropped two more cores into roughly the same power envelope. Compare these numbers to Intel’s own $379.99 Core i7-7800X and its six Skylake Server cores, and it’s clear the gap between the company’s mainstream and high-end desktops is narrower than ever.

The i7-7800X is kind of the ugly duckling of the X299 platform. It doesn’t have Turbo Boost Max 3.0 support like other Skylake-X chips, so it only clocks up to 4 GHz in light workloads. It has less shared L3 cache than the i7-8700K (but four times the private L2). It’s limited to DDR4-2400 RAM at stock speeds, albeit four channels of it as opposed to two. The i7-7800X is also built with Intel’s server-class mesh interconnect, which sometimes does not respond well to tasks like high-refresh-rate gaming. Altogether, it seems that only those with a need for AVX-512 support, gobs of memory bandwidth, or 28 PCIe lanes should consider the i7-7800X for $20 more. The i7-8700K is a very potent chip, and we can say that even before we look over to the Ryzen 7 side of the aisle.

Although some will point to the lower base clocks on Intel’s eighth-generation desktop CPUs as the way the company held TDPs to only minor increases over seventh-gen chips, I’m not sure those base clocks will be entirely representative of real-world performance. The Core i7-8700K can sustain all-core Turbo speeds of 4.3 GHz in non-AVX workloads without a hitch, at least under the 280-mm liquid cooler we use on our test bench. That’s down just 100 MHz per core from what we observed from the Core i7-7700K. Admittedly, we’re looking at a 95W chip rather than the 65W models, but still. Put a good cooler on these chips, and I expect the base clock will be a rare sight.

 

Z370 increments by 100

New Intel CPUs generally mean new motherboards every couple of generations, and Coffee Lake is no different. These CPUs will still drop into an LGA 1151 socket, but don’t be fooled—it’s not the same LGA 1151 we know from Z170 and Z270 boards. Intel’s official statements on this platform change vaguely cite the need for improved power delivery to support six cores, but we found that position confusing given the already-overbuilt power-delivery circuitry on many enthusiast Z270 boards.

Thankfully, the enterprising David Schor of WikiChip has ferreted out exactly what’s changed in the Z370 version, and the answer appears to be more power-delivery and ground pins, taken at least in part from a group that was previously reserved. Note to Intel: “we changed the pinout” was all you had to say without giving anything more away.

Even with the different pinout and lack of support for older CPUs, Z370 motherboards still use the LGA 1151 name. I suppose this is technically correct, but continuity implies compatibility, and I expect some builders will be surprised when they learn that Z370 boards can’t be used with physically compatible older CPUs. Intel says it’ll avoid confusion among buyers by putting “Supports eighth-generation Intel processors” on motherboard boxes, and early packaging renders suggest “Requires Intel 300 series-based chipset motherboard” will be on at least one side of the box in somewhat-prominent text.

We still don’t think that does nearly enough to avoid befuddlement, however. Buyers will still see “LGA 1151” on both CPU boxes and motherboard boxes, and Z270 supported both sixth- and seventh-gen Intel processors without complaint, so some will inevitably assume one can mix and match all of these things freely. Call the socket LGA 1151 v2, call it LGA 1151+, we don’t care—just call it something different. Newegg seems to have settled on the term “LGA 1151 (300 Series),” and that’s a good enough start.

You can put a sixth- or seventh-gen LGA 1151 CPU into a Z370 motherboard, and it will drop in and latch just fine. I tried as much with a Core i5-7600K, and nothing came to harm even after I tried to power up the system. Z370 boards will not even POST with these older chips installed, however, and I wasn’t brave enough to risk anything by putting our i7-8700K in a Z270 board to find out what would happen in reverse.

Really: just cross the 7 and 2 out and write in an 8 and a 3

Beyond the electrical changes in the socket, the Z370 PCH itself offers the exact same resources as Z270 before it. Motherboard makers can tap 24 flex I/O lanes for use as PCIe 3.0 slots, SATA ports, M.2 slots, and more. Up to 10 USB 3.0 ports and up to 14 USB 2.0 ports can also be called upon for peripheral connectivity. Six SATA ports round out the package. Intel advertises Thunderbolt and Optane support from Z370, as well, just as it did with Z270.

The lack of new features in Z370 is honestly a bit underwhelming. Z270 was already the basis for some fine motherboards, and I never found myself wanting much of anything from them for peripheral I/O. Still, given that the life of a system is often five years or more these days, it’s annoying that Z370 itself offers no new features, connectivity, or quality-of-life improvements compared to its outgoing counterpart.

AMD’s X370 and B350 chipsets each offer two native USB 3.1 Gen 2 controllers for motherboard makers to tap, and Ryzen 3, 5, and 7 CPUs can devote four lanes of CPU-connected PCIe to M.2 slots in addition to the 16 available for graphics cards or other expansion cards. That dedicated x4 link for storage devices is especially nice given the lane-sharing circus that can ensue if builders try to use every port available from modern Intel boards.

 

Touring the Z370 platform with Gigabyte’s Z370 Aorus Gaming 7 motherboard

Perhaps with the extremely modest changes in Z370 in mind, motherboard makers are offering their highest-end Z370 boards for relatively sedate prices. I got Gigabyte’s Z370 Aorus Gaming 7 in the TR labs ahead of the arrival of our Coffee Lake chips, and I performed all of my Core i7-8700K testing with it for both stock and overclocked numbers. This board is the highest-end model in Gigabyte’s Z370 lineup so far, and it’s served me admirably over the past few days. Despite being the fanciest board of the Aorus bunch, this mobo only carries a $250 price tag.

For the money, buyers get a top-end board almost as fully-featured as any from Gigabyte’s under-$300 Z270 lineup. The most prominent feature of the Gaming 7 is RGB LEDs, though, and there are lots of ’em. All three primary PCIe slots are illuminated with multicolor goodness, and the blinkenlights also nestle into the power circuitry, I/O shroud, and even the VRM and chipset heatsinks themselves. A lighting strip on the right edge of the board completes the visual statement. These various lighting zones should be individually tweakable through Gigabyte’s RGB Fusion Windows software, although a handy firmware function allows for basic setup if your tastes run more to the monochrome.

The Gaming 7 is also the most extreme exponent of Gigabyte’s Z370 styling trends. Instead of the plain white and black shrouding that we saw on the company’s Z270 boards, the Z370 Gaming 7 has a sort of mecha- or cyberpunk-inspired style that manifests as elaborate metal and metal-look accents all over the I/O shroud and heatsinks. Although I quite liked Gigabyte’s simple and clean design language in the Z270 generation, this look is distinctive without evoking medieval torture devices or occult rituals. It should stand out in any windowed case.

This board isn’t all about flash, though. To supply the necessary juice to Coffee Lake CPUs, Gigabyte taps a 10-phase power design incorporating VRM and PWM control circuitry from Intersil. This VRM array is paired with what Gigabyte calls “server-level chokes,” and the units so employed do appear similar to some of the higher-end components we’ve seen on the company’s X99 boards in the past. While I can’t comment in depth on the specific components involved, this setup does appear more than ready to supply the juice the i7-8700K needs to reach its maximum potential.

Like many high-end Z270 boards, the Z370 Gaming 7 offers three PCIe x16 slots and three PCIe x1 slots, all meeting the PCIe 3.0 standard. Two of the x16 slots are powered by the CPU’s 16 lanes of connectivity. The topmost x16 slot gets all 16 lanes from the CPU with one graphics card installed. Deploy a second card in the middle x16 slot, and the Gaming 7 splits those lanes into a pair of x8 links. The third x16 slot gets four PCIe lanes from the Z370 chipset, and each PCIe x1 slot gets a lane from the PCH, as well.

For PCIe storage devices, the Gaming 7 has a whopping three M.2 slots, the topmost of which sits above the first PCIe x16 slot for better thermal resilience against the graphics card that will presumably sit directly below. The first slot is also protected by a handsome M.2 heatsink with a pre-applied thermal pad.

Since Z370 is simply an evolution of Z270, loading up the board with expansion cards and storage devices could result in some resource-sharing conflicts. The first two PCIe x1 slots get a single lane from the chipset at all times, but the third shares its bandwidth with SATA port 0. Plug an expansion card into that slot, and SATA port 0 goes dark, and vice versa. Install a PCIe or SATA storage device in M2M_32G, the first M.2 slot, and SATA ports 4 and 5 turn off. M2A_32G, the middle M.2 slot, gets four PCIe lanes for M.2 devices at all times, but installing a SATA device in it will disable port 0, as well. Finally, M2P_32G (the bottom M.2 slot of the bunch) shares four lanes of PCIe with the bottom-most PCIe x16 slot. Install a device in one slot or the other, and its unused counterpart will go dark.
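
For quick reference, here’s the same set of sharing rules expressed as a simple lookup. This is just a summary of the behavior described above; the structure and naming are ours, not anything from Gigabyte’s documentation.

```python
# Populate the resource on the left, and the resource on the right goes dark.
# Summarized from the Gaming 7's lane-sharing behavior described above.
gaming_7_lane_sharing = {
    "third PCIe x1 slot":               "SATA port 0 (and vice versa)",
    "M2M_32G (top M.2 slot)":           "SATA ports 4 and 5",
    "M2A_32G (middle M.2, SATA drive)": "SATA port 0 (no conflict with PCIe drives)",
    "M2P_32G (bottom M.2 slot)":        "bottom PCIe x16 slot (and vice versa)",
}
```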

Those resource conflicts are a bit frustrating on a motherboard this expensive when we consider that the board only has six SATA ports to work with. Use an NVMe SSD in the first slot as your system’s boot device, and you immediately lose two of those ports to lane-sharing. It’s a bit bemusing that Gigabyte didn’t swap the lane allocations of M2M_32G and M2A_32G when it was laying out the board, considering that the middle slot never conflicts with any SATA ports when an NVMe SSD is installed in it. You can move the M.2 heatsink to this second slot, at least, but then you’re subjecting your M.2 device to the jet blast of the PC’s graphics card. This is a decidedly sub-optimal arrangement for the storage-hungry.

On the audio front, Gigabyte employs its usual arrangement of Nichicon and Wima capacitors in the board’s analog audio path. The codec that feeds this array is Realtek’s now-ubiquitous S1220 codec, complemented by an ESS Sabre 9018Q2C DAC. This chain is claimed to be good for an analog SNR of 121 dB, or on par with the claimed specs for many high-end onboard setups. We’ll delve into this more when I review the Gaming 7 in depth, but my listening experiences on this board were as pleasant as any high-end motherboard’s of late. Nothing quite touches the Z270X-Gaming 8’s Creative ZxRi setup, though.

The Z370 Aorus Gaming 7 offers plenty of possibilities for peripheral I/O. All of the back panel’s USB ports are of the 3.0 standard at a minimum. The leftmost yellow ports offer Gigabyte’s DAC-Up voltage-control feature, which purports to provide more juice to power-hungry devices on long cable runs if it’s needed. Although it’s unlikely they’ll be used on such a high-end board, Gigabyte offers an HDMI 1.4 jack and a DisplayPort 1.2 connector for Coffee Lake’s integrated graphics processor, as well. The lack of a separate converter chip for HDMI 2.0 means the HDMI port only supports 4096×2160 at a maximum, and only then at 30 Hz. Those looking for tolerable IGP output probably want to use the DisplayPort, which can handle 4096×2304 displays at 60 Hz.

The two blue USB 3.0 ports to the right of the gold-plated display outs draw their connectivity from the Z370 PCH. The USB 3.1 Gen 2 Type-C connector and the red Type-A port both draw connectivity from ASMedia’s latest ASM3142 USB 3.1 controller. Gigabyte backs this chip with two lanes of PCIe 3.0 from the chipset for a potential 16 GT/s of bandwidth, a reserve that might come in handy when transferring lots of bits over both ports at once. Above the Type-C port, we get a Killer Gigabit Ethernet jack powered by the company’s E2500 controller. If you still have a thing against Killer for some reason, Gigabyte accommodates with an Intel controller behind the second Gigabit Ethernet jack. The final USB 3.0 port also comes from the Z370 PCH.

Perhaps because of the power-draw potential of a Coffee Lake chip, Gigabyte includes a tiny fan for active VRM cooling duties under the I/O shroud. This tiny fan joins similar cooling approaches from Asus on its X399 boards, and I’m not really a fan. Tiny fans like this have the potential to completely spoil the noise characteristics of a system, and I feel like heatsinks with greater surface area and less decorative bric-a-brac would be a more effective choice. That said, I never heard the fan spin up in my testing, so it’s likely one would have to push the Gaming 7’s power-delivery subsystem to true extremes to get it to run.

Like Z270 boards, the Gaming 7 offers four DIMM slots with support for up to 64GB of RAM. Coffee Lake ups the base DDR4 speed to 2666 MT/s for JEDEC RAM, but the Gaming 7 should offer support for much higher speeds through XMP and manual tweaking. Gigabyte’s QVL offers options at insane speeds up to 4166 MT/s. I got my G.Skill DDR4-3600 sticks running on the board without a hitch simply by flipping on the XMP profile in the firmware.

I have more to talk about with the Gaming 7 in my full review, but Gigabyte’s early firmware seems well-baked, and the company slowly continues to address some of my biggest nitpicks with the interface. I had no issues with this board’s performance while gathering numbers for our review, and overclocking the i7-8700K manually was swift and effective on the Gaming 7. If you want a top-end Z370 board to toy with, this one is a fine choice already.

 

Some quick overclocking explorations

As is usually the case with a new generation of its chips, Intel is exposing some new knobs for overclockers both casual and extreme to toy with on Coffee Lake and Z370. The most prominent of these is per-core overclocking, a feature previously reserved for high-end desktop parts like Broadwell-E and Skylake-X CPUs. Overclockers will also be able to tune a Coffee Lake chip’s memory timings without a reboot. Extreme OCers will enjoy memory multipliers for speeds up to 8400 MT/s and better phase-locked loop (PLL) control.

We’re not putting some of those more advanced overclocking features to work today, but given Intel’s claimed process improvements, it seemed criminal not to overclock our particular i7-8700K like a regular enthusiast might. Because I like to live dangerously, I swung for the fences and tried for 5 GHz on all cores from the get-go. With a 280-mm cooler, it turned out a Blender-stable 5-GHz overclock was easy enough to hit, but I could never feed the chip enough voltage to keep Prime95 Small FFTs stable at that speed before running into thermal limits.

Eventually, I compromised and set a 5-GHz all-core speed with a -2 AVX offset, good for 4.8 GHz on all cores under AVX workloads. Incredibly, that configuration was happy even under Prime95 loads with an observed 1.284-1.296V using dynamic Vcore on our Aorus motherboard, so it wasn’t difficult to cool at all: 80° C or so was the order of the day with a Corsair H115i on top. Those willing to delid their chips and apply more exotic thermal compounds would seem to have plenty of voltage headroom left to pursue even more extreme overclocks, assuming our particular chip isn’t an exceptional cherry-picked example.
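
If the offset arithmetic isn’t familiar: each step of AVX offset drops the all-core multiplier by one bin, and the multiplier rides on the base clock. A quick sketch of how our settings work out, assuming the stock 100-MHz BCLK:

```python
# How a negative AVX offset translates into clocks (multiplier x BCLK).
bclk_mhz = 100
all_core_multiplier = 50   # our 5-GHz all-core target
avx_offset = 2             # drop two bins when AVX code runs

print(all_core_multiplier * bclk_mhz, "MHz")                 # 5000 MHz for ordinary code
print((all_core_multiplier - avx_offset) * bclk_mhz, "MHz")  # 4800 MHz under AVX loads
```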

On the same note, we’re finally giving one of AMD’s Ryzen 7 CPUs a formal overclocked review. I picked our Ryzen 7 1700 to serve as the guinea pig for this article, and I was able to achieve 4 GHz on all cores with a pretty aggressive 1.41V. Thanks to AMD’s use of solder under Ryzen CPUs’ heat spreaders, thermals were never an issue under the AMD-supplied EK Predator 240-mm all-in-one I have on hand for Socket AM4 motherboards. My concerns about safe voltages and the limits of our given chip were much more relevant roadblocks to higher speeds. Not bad for an eight-core, sixteen-thread CPU selling for $299.

I didn’t stop turning up the clocks there, either. Intel’s Sandy Bridge Core i7-2600K is rightly regarded as a legendary overclocker, so I pushed up the sliders on our particular chip and reached 4.6 GHz on all cores. With both stock-clocked and overclocked Sandy Bridge results in our stable, builders who have chosen to sit out Intel’s incremental march of improvements over the past few years can see what they’re missing.

Now that we’ve poked, prodded, tweaked, and tuned our test subjects, it’s time to put up some numbers. Let’s hop to it.

Our testing methods

As always, we did our best to deliver clean benchmarking numbers. We ran each benchmark at least three times and took the median of those results. Our test systems were configured as follows:

| Processor | AMD Ryzen 7 1700, AMD Ryzen 7 1700X, AMD Ryzen 7 1800X |
| --- | --- |
| CPU cooler | EK Predator 240-mm liquid cooler |
| Motherboard | Gigabyte Aorus GA-AX370-Gaming 5 |
| Chipset | AMD X370 |
| Memory size | 16GB |
| Memory type | G.Skill Trident Z DDR4-3600 (rated) SDRAM |
| Memory speed | 3200 MT/s (actual) |
| Memory timings | 15-15-15-35 1T |
| System drive | Intel 750 Series 400GB NVMe SSD |

 

| | Z77 system | Z97 system | Z270 system |
| --- | --- | --- | --- |
| Processor | Intel Core i7-2600K, Core i7-3770K | Intel Core i7-4790K | Intel Core i7-7700K |
| CPU cooler | Corsair H110i 280-mm liquid cooler | Corsair H110i 280-mm liquid cooler | Corsair H110i 280-mm liquid cooler |
| Motherboard | Asus P8Z77-V Pro | Asus Z97-A/USB 3.1 | Asus ROG Strix Z270E Gaming |
| Chipset | Intel Z77 Express | Intel Z97 | Intel Z270 |
| Memory size | 16GB | 16GB | 16GB |
| Memory type | Corsair Vengeance Pro Series DDR3-1866 (rated) | Corsair Vengeance Pro Series DDR3-1866 (rated) | G.Skill Trident Z DDR4-3600 (rated) SDRAM |
| Memory speed | DDR3-1866 (actual) | DDR3-1866 (actual) | 3600 MT/s (actual) |
| Memory timings | 9-10-9-27 | 9-10-9-27 | 16-16-16-36 2T |
| System drive | Corsair Neutron XT 480GB SSD | Corsair Neutron XT 480GB SSD | Samsung 850 Evo 512GB |

 

| Processor | Intel Core i7-8700K |
| --- | --- |
| CPU cooler | Corsair H115i 280-mm liquid cooler |
| Motherboard | Gigabyte Z370 Aorus Gaming 7 |
| Chipset | Intel Z370 |
| Memory size | 16GB |
| Memory type | G.Skill Trident Z DDR4-3600 (rated) SDRAM |
| Memory speed | 3600 MT/s (actual) |
| Memory timings | 16-16-16-36 2T |
| System drive | Samsung 960 Pro 500GB |

They all shared the following common elements:

| Storage | 2x Corsair Neutron XT 480GB SSD, 1x HyperX 480GB SSD |
| --- | --- |
| Discrete graphics | Nvidia GeForce GTX 1080 Ti Founders Edition |
| Graphics driver version | GeForce 385.69 |
| OS | Windows 10 Pro with Creators Update |
| Power supply | Seasonic Prime Platinum 1000W |

Our thanks to Intel and AMD for all of the CPUs we used in our testing. Our thanks to AMD, Intel, Gigabyte, Corsair, Cooler Master, and G.Skill for helping us to outfit our test rigs with some of the finest hardware available, as well.

Some additional notes on our testing methods:

  • You’ll note that Intel’s Core i7-6700K is missing from our results. We figured that the i7-7700K is a close enough substitute given its minor performance improvements over the Skylake chip—about 5%, on average. We used the time saved this way to test more overclocked CPUs and to explore use cases like streaming in more depth. We hope this is an understandable decision.
  • Unless otherwise noted, we ran our gaming tests at 1920×1080 at a refresh rate of 165 Hz. V-sync was disabled in the driver control panel.
  • For our Intel test system, we used the Balanced power plan, as we have for many years. Our AMD test bed was configured to use the Ryzen Balanced power plan that ships with AMD’s chipset drivers.
  • All motherboards were tested using the most recent firmware available from the board vendor, including pre-release versions provided exclusively to the press where necessary.
  • All available Windows updates were installed on each test system before testing commenced. The most recent version of each software application available from each vendor was used in our testing, as well.

Our testing methods are generally publicly available and reproducible. If you have questions, feel free to post a comment on this article or join us in the forums.

 

Hitman
Hitman‘s DirectX 12 renderer can stress every part of a system, so we cranked the game’s graphics settings at 1920×1080 and got to testing.


Get used to this sight. With a GTX 1080 Ti and an i7-8700K working in tandem, we get the highest average frame rates and lowest 99th-percentile frame times out there in Hitman. Yes, this is a big gap, but it’s not unprecedented. Our recent review of the Core i9-7980XE showed similar gains for the Skylake architecture compared to Zen.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time our graphics card spends beyond certain frame-time thresholds, each with an important implication for gaming smoothness. Recall that our graphics-card tests all consist of one-minute test runs and that 1000 ms equals one second to fully appreciate this data.

The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS, or a 30-Hz refresh rate. Go lower than that with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.

To best demonstrate the performance of these systems with a powerful graphics card like the GTX 1080 Ti, it’s useful to look at our three strictest graphs. 8.3 ms corresponds to 120 FPS, the lower end of what we’d consider a high-refresh-rate monitor. We’ve recently begun including an even more demanding 6.94-ms mark that corresponds to the 144-Hz maximum rate typical of today’s high-refresh-rate gaming displays. Finally, we’ve added a 5-ms graph to see whether any of our chips can sustain 200 FPS or better for any length of time.
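
To make the formula behind these graphs concrete, here’s a minimal sketch of the idea in Python. Our actual analysis tooling differs, the function name is ours, and the frame times would come from a logged one-minute test run:

```python
# "Time spent beyond X": sum the amount by which each frame overshoots a
# frame-time threshold. A minimal sketch of the idea, not our actual tooling.
def time_beyond(frame_times_ms, threshold_ms):
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Each threshold is just 1000 ms divided by the target frame rate.
for fps in (20, 30, 60, 120, 144, 200):
    print(f"{fps} FPS -> {1000 / fps:.2f} ms")  # 50, 33.33, 16.67, 8.33, 6.94, 5 ms
```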

Our 16.7-ms graph shows that none of these CPUs hold up the graphics card for any appreciable length of time past that mark. What’s really incredible is that the Kaby Lake and Coffee Lake chips spend less than one-fourth the time of the next-best chip hampering the GTX 1080 Ti’s efforts to stay above 120 FPS, and they continue that exceptional performance even when we consider the 6.94-ms mark. One has to stretch all the way out to 5 ms before these chips start accumulating substantial amounts of time in our graphs. That’s hair-raising high-refresh-rate gaming at peak visual quality in Hitman, and nothing else on the market can come close.

 

Crysis 3

Although Crysis 3 is nearly four years old now, its lavishly detailed environments and demanding physics engine can still stress every part of a system. To put each of our CPUs to the test, we took a one-minute run through the grassy area at the beginning of the “Welcome to the Jungle” level with settings cranked at 1920×1080.


The section of Crysis 3 we use for testing loves lots of CPU cores, and that shows in our results. The overclocked Ryzen 7 1700 rockets to third place thanks to its eight cores and 16 threads. Having slightly fewer CPU cores running at breakneck speed seems to be the perfect recipe to get the best out of this section of gameplay, though, and the i7-8700K produces a considerable leap in average FPS above that of the Ryzen 7 1700. Stock for stock, the 8700K is delivering double the frames, on average, compared to the i7-2600K, and it delivers 99% of its frames in less than half the time the i7-2600K needs to feed the beast by the same measure.


Our time-spent-beyond-X graphs show just how stupefyingly quick the i7-8700K-and-GTX-1080-Ti combo is in this title. Even at stock speeds, the 8700K spends just a little over a second on tough frames that log time past 6.94 ms. You want 144 FPS virtually all the time? You have it. The overclocked Ryzen 7 1700 is delivering a fine experience, to be sure, but with about four seconds logged past 6.94 ms to its name, the experience on the AMD CPU might be just slightly less glassy. Our average-FPS graph shows the i7-8700K can provide higher frame rates to go with its smoothness, though, so it wins out.

 

Grand Theft Auto V
Grand Theft Auto V can still put the hurt on CPUs as well as graphics cards, so we ran through our usual test run with all of the game’s settings turned all the way up (save for MSAA and extended distance scaling) at 1920×1080. Unlike most of the games we’ve tested so far, GTA V favors a single thread or two heavily, and there’s no way around it with Vulkan or DirectX 12.


Let’s face it: Grand Theft Auto V is getting up there in years. Its original code base dates back to 2013, and its PC release is a little over two years old at this point. Unlike Crysis 3‘s forward-looking friendliness to many-core chips, GTA V still needs an extremely fast single thread to extract maximum performance at our test settings. That might explain why the i7-7700K takes the top stock-clocked spot in this benchmark.

Put the spurs to it, however, and the i7-8700K finds an incredible second wind by delivering a 15% higher average frame rate than even the i7-7700K. The overclocked Ryzen 7 1700 can’t keep up with those frame rates, even if its 99th-percentile frame time is still quite respectable in this company.


Starting our time-spent-beyond analysis at the 8.3-ms mark, the Kaby Lake and Coffee Lake chips spend barely a couple of tenths of a second holding up the GTX 1080 Ti here. Again, if you want 120 FPS or better virtually all of the time, Intel’s latest cores deliver. The overclocked Ryzen 7 1700 would otherwise lead the pack, but it logs over three seconds of time past 8.3 ms where Kaby and Coffee don’t.

What’s really illuminating is how these chips behave past the 6.94-ms mark. The Core i7-7700K’s lower core count seemingly lets it log less time spent beyond this mark against the stock-clocked i7-8700K, but the extra-caffeinated Coffee spends less than a second on frames that take 6.94 ms or more to produce over the course of our entire one-minute test run. That’s the kind of performance where I’m tempted to start throwing words like “perfect” around, even though that’s technically not the case.

With 11 seconds spent beyond 6.94 ms, the Ryzen 7 1700 in overclocked form is the next-best thing going in our group, but it’s a distant, distant fourth place. Coffee and Kaby are the clear champions of high refresh rates and smooth gameplay all at once in GTA V.

 

Watch Dogs 2
Watch Dogs 2 can seemingly occupy every thread one can throw at it, so it’s a perfect CPU test. We turned up the eye candy and walked through the forested paths around the game’s Coit Tower landmark to get our chips sweating.


Watch Dogs 2 will take every core and thread one can throw at it with our test settings, but as we saw with Crysis 3, fast seems to beat mass. The i7-8700K’s killer combo of fast and numerous cores lets it open a wide lead over every other chip in our stable.


At the 8.3-ms mark, the i7-8700K spends less than half as much time holding up the GTX 1080 Ti as even the i7-7700K does. Weirdly, overclocking the chip doesn’t seem to help it much at all, suggesting a bottleneck lies elsewhere. Still, if you’re looking for some of the highest frame rates and the smoothest possible gameplay from Watch Dogs 2, the i7-8700K is the chip to get.

 

Deus Ex: Mankind Divided (DX11)

With its rich and geometrically complex environments, Deus Ex: Mankind Divided can prove a challenge for any CPU at high enough refresh rates. We applied our preferred recipe of in-game settings to put the squeeze on the CPU and got to it.


I warned you to get used to this, didn’t I? Our Coffee Lake and Kaby Lake chips take a wide lead over the rest of the pack in DXMD. Just like with Hitman, these results aren’t unprecedented, either. We saw similar leaps in performance over AMD CPUs while running a similar test setup on Intel’s Skylake-X chips.


There’s not really much more to say here: the Kaby Lake and Coffee Lake chips spend far less time holding up the GTX 1080 Ti than the rest of the pack, whether you consider 8.3 ms, 6.94 ms, or 5 ms your piece of pie. Their gaming performance in this high-refresh-rate scenario is simply without equal.

That brings us to the end of our pure gaming tests for these CPUs. I don’t have much to say that hasn’t already been said, except wow. I’m a fiend for high-refresh-rate gaming, and on that mission, the i7-8700K is simply the best partner available for the fastest consumer graphics card around; the race isn’t even close most of the time. It’s just flat-out fun watching single-thread performance bottlenecks evaporate in the face of a 5-GHz Coffee Lake chip. As Coco Chanel once said, “luxury lies not in the richness of things, but in the absence of vulgarity.” If you consider poor 99th-percentile frame times and fuzzy frame-time plots as vulgar as I do, then the i7-8700K embodies that statement perfectly. Just, y’know, that it’s not priced like couture. Or something.

 

Streaming performance with Deus Ex: Mankind Divided and OBS

One of the uses that Intel and AMD have hyped the most for their highest-end desktop processors this year is single-PC gaming and streaming. The most avid Twitch streamers, as we understand it, have tended to set up dedicated PCs for video ingestion and processing to avoid affecting game performance, but the advent of these many-core CPUs may have opened up a world where it might be more convenient to run one’s stream off a single PC. Intel calls this kind of thing “megatasking,” and it claims the i7-8700K is quite good at it. Let’s find out.

Although one might wonder why people are still making a hullabaloo about CPU encoding performance when hardware-accelerated game streaming is available from both major GPU software packages, the fact of the matter seems to be that the most demanding professionals still choose to use software encoding. The reason for this is that Twitch and other streaming services have restrictive bit rates for streamed content. GPU-accelerated services like GeForce Share (née Shadowplay) and Radeon ReLive make it easy to stream without affecting gaming performance that much, but they might not offer the highest-quality viewing experience to fans within the bounds of those bit rates. For achieving the best results possible, the name of the game is still software encoding with x264.

Let’s start off by noting that gaming at 1920×1080 and streaming that same gameplay at 60 FPS will be difficult, if not impossible, with a CPU-bound title like Deus Ex on four cores and eight threads. The gameplay experience and the stream alike look like crap, even on our otherwise excellent Core i7-7700K. You can safely forget trying on older quad-core chips. One could perhaps get away with looser encoder settings and lower frame rates on such chips, and you might not see the same problems with games that aren’t primarily CPU-bound, but this is a benchmark, dangnabit. We’re trying to find limits, not shy from them. 

Even with that in mind, the Core i7-8700K couldn’t quite keep up with DXMD running at 1920×1080. I found that easing the load on the CPU by upping the game resolution to 2560×1440 and downscaling the capture to 1920×1080 at 60 FPS was the optimal solution. In simpler terms, we’re giving the CPU some breathing room by shifting more of the load to the graphics card. I maintained this approach across all of the chips I deemed streaming-capable.

If you really want to do no-holds-barred 1920×1080 gaming and stream it from the same machine, it seems you really want a Ryzen 7 1800X at the very least. A Core i7-7820X or a Core i9-7900X would be better yet, if Hitman is any guide. No, this is not a cheap way to do things, but if you’re dead-set on single-PC streaming, you will eventually have to pay the piper for the equivalent of two systems’ worth of processing power.


The GTX 1080 Ti can still play DXMD in an enviably fluid fashion at 2560×1440, so it’s not much of a sacrifice to play at the higher resolution and broadcast at the lower one. To get my final setup, I played with OBS’ various x264 settings to achieve what looked (to my eye) like the best visual quality possible without unduly bogging down any of my test rigs. To my video-snob tastes, that was the “faster” x264 profile. For the curious, though, it’s interesting to note that looser encoder settings didn’t make 1920×1080 gaming and 60-FPS streaming possible on the i7-8700K, either. That suggests the streaming frame rate is a much bigger initial obstacle to clear when tuning one’s setup than the choice between “veryfast” or “faster,” for example.
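
For reference, here’s a rough summary of the capture setup described above, expressed as a simple settings map. This is a sketch rather than an exported OBS profile, and the bitrate figure is an illustrative assumption; we tuned for visual quality by eye rather than to a fixed target.

```python
# A sketch of our OBS capture setup, not an exported OBS profile.
obs_setup = {
    "game_resolution":   "2560x1440",  # render in-game at 1440p...
    "output_resolution": "1920x1080",  # ...and downscale the stream to 1080p
    "framerate_fps":     60,
    "encoder":           "x264",
    "x264_preset":       "faster",     # our quality-versus-CPU-load compromise
    "rate_control":      "CBR",
    "bitrate_kbps":      6000,         # assumption: a Twitch-friendly ballpark
}
```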

We’re still not considering x264 encoder performance in isolation for this review. While metrics like dropped frames are certainly important to the viewer experience, we don’t have the methods to effectively process or present that data yet. We did monitor stream quality during our testing and ensured that our particular encoder settings weren’t producing choppy or otherwise ugly stream delivery, and we figure that what looks consistently good to the eye is fine in light of the fact that there are possibly dozens of network hops between us and a final viewer. We might consider these metrics in future articles, but for now, we’re mostly worried about the gameplay experience these CPUs deliver to the streamer.





There are a lot of moving parts in the graphs above. We’ve presented frame-time plots, average frame rates, and 99th-percentile frame times for both streaming and non-streaming gameplay, so take some time to flip through the various graphs above to get a full sense of the performance picture.

For all that, the results are stark. The i7-8700K suffers only a minor performance hit with OBS running, while the Ryzen 7 CPUs lose about 20% of their performance potential (as measured by average FPS). DXMD is apparently GPU-bound at these settings, as evidenced by the nearly identical performance across the board for these parts without OBS engaged, so it seems flipping on a stream is enough to expose a major bottleneck on the Ryzen chips. Once again, this result is not unprecedented: the Ryzen 7 1800X’s average frame rate plunged 30% in our Hitman streaming tests during our Core i9-7980XE review.

x264 is one of the more prominent applications with AVX support, so it’s possible that Coffee Lake’s wider AVX pipes are giving it a leg up here, among other things. Whatever the cause, it’s clear that the i7-8700K can deliver a smoother and more fluid gaming experience at our test settings than even a Ryzen 7 1700 can with all cores ticking away at 4 GHz. The large performance drop we observed with the Ryzen 7 parts might not be as severe with less CPU-bound games, and admittedly, that describes more of today’s titles than not. Still, it’s not as if Intel is charging a huge premium for the i7-8700K for the pleasure of this experience, CPU-bound games or not.

One thing our results don’t capture is that at stock speeds, the Ryzen 7 1700 is not quite capable of an entirely smooth stream. I noticed a number of frame drops while watching my test stream as I played, so I held onto my log file and found that the stock 1700 was dropping about seven percent of the frames destined for Twitch. Blame the chip’s restrictive TDP and the low base clock that results, I guess. If you want to stream CPU-bound games with the 1700, you definitely want to take a trip into the BIOS and turn up some multipliers.



Our time-spent-beyond-X graphs basically confirm what we’ve already discussed: the i7-8700K is delivering a much smoother gaming experience than the Ryzen 7 parts while streaming, no matter what threshold you choose to examine. Let’s see if our productivity results show as wide a gulf between these competitors.

 

Javascript

The usefulness of Javascript benchmarks for comparing browser performance may be on the wane, but these collections of tests are still a fine way of demonstrating the real-world single- or lightly-threaded performance differences among CPUs.

 

No surprises here. Intel’s client cores have long posted the best results in Javascript benchmarks. Kaby Lake and Coffee Lake are nearly identical performers in this benchmark at stock speeds. The overclocked i7-8700K takes the top of the charts, as we’d expect.

Compiling code with GCC

Our resident code monkey, Bruno Ferreira, helped us put together this code-compiling test. Qtbench records the time needed to compile the Qt SDK using the GCC compiler. The number of jobs dispatched by the Qtbench script is configurable, and we set the number of threads to match the hardware thread count for each CPU.
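
The principle is simple enough to sketch: ask the OS how many hardware threads it sees, and dispatch that many jobs. This is illustrative only; Qtbench wraps the equivalent logic in its own script.

```python
# Dispatch one compile job per hardware thread, as our Qtbench runs do.
import os
import subprocess

jobs = os.cpu_count()  # 12 on the i7-8700K, 16 on a Ryzen 7 1700
subprocess.run(["make", f"-j{jobs}"], check=True)
```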

Compiling likes lots of threads, but there’s a whiff of Amdahl’s Law to the proceedings, and the i7-8700K’s many cores and high clocks give it a minor edge on even the overclocked Ryzen 7 1700.

File compression with 7-zip

In our 7-Zip testing, the i7-8700K takes an early lead in the compression portion of the test. In the arguably more common decompression operation, though, it falls behind the entire Ryzen lineup at stock speeds. Even an overclock is only enough to bring the i7-8700K even with the Ryzen 7s running at stock speeds. The overclocked 1700 is the undisputed champion here, for once.

Disk encryption with Veracrypt

In the accelerated AES portion of our benchmark, the i7-8700K is just behind the Ryzen family at the top of the chart at stock speeds, and it ends up in about the same place in our non-accelerated Twofish test. Overclocking the i7-8700K to 5 GHz takes it to the top of the chart in AES work, and it only trails the overclocked Ryzen 7 1700 in the non-accelerated portion of the benchmark. Pretty impressive considering the core-count imbalance here.

 

Cinebench

The evergreen Cinebench benchmark is powered by Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. The test runs with a single thread and then with as many threads as possible.

As our Javascript benchmarks should have foreshadowed, the i7-8700K takes the top spot in Cinebench’s single-threaded test. Cinebench isn’t really about one thread, though.

The multithreaded portion of Cinebench really lets the Ryzens shine. The i7-8700K can only match the Ryzen 7 1700 at stock speeds, and it can’t catch that chip at 4 GHz even in its overclocked form.

Blender rendering

Blender is a widely-used, open-source 3D modeling and rendering application. The app can take advantage of AVX2 instructions on compatible CPUs. We chose the “bmw27” test file from Blender’s selection of benchmark scenes to put our CPUs through their paces.

With the latest version of Blender, it’s a close race between the Ryzen 7s and the i7-8700K at stock speeds. Overclocking the i7-8700K lets it chase the OCed Ryzen 7 1700 at the top of the chart, though.

Corona rendering

Here’s a new benchmark for our test suite. Corona, as its developers put it, is a “high-performance (un)biased photorealistic renderer, available for Autodesk 3ds Max and as a standalone CLI application, and in development for Maxon Cinema 4D.”

The company has made a standalone benchmark with its rendering engine inside, so it was a no-brainer to give it a spin on these CPUs. The benchmark reports results in millions of rays cast per second, and we’ve converted that figure to megarays for readability.

Corona seems to like Intel CPUs, but that’s not enough to let the i7-8700K overcome its cores-and-threads disadvantage versus Ryzen chips. It merely pulls even with the Ryzen 7 1800X at stock speeds, and even overclocked, it only narrowly edges out the overclocked Ryzen 7 1700.

Handbrake transcoding
Handbrake is a popular video-transcoding app that recently hit version 1.0.7. To see how it performs on these chips, we’re switching things up from some of our past reviews. Here, we converted a roughly two-minute 4K source file from an iPhone 6S into a 1920×1080, 30 FPS MKV using the HEVC algorithm implemented in the x265 open-source encoder. We otherwise left the preset at its default settings.

x265 should tap AVX instructions where it can, so the i7-8700K takes the top stock-for-stock spot. It also eclipses the overclocked Ryzen 7 1700 when we pull out all the stops.

CFD with STARS Euler3D

Euler3D tackles the difficult problem of simulating fluid dynamics. It tends to be very memory-bandwidth intensive. You can read more about it right here. We configured Euler3D to use every thread available from each of our CPUs.

Before we discuss our results, it should be noted that the publicly-available Euler3D benchmark is compiled using Intel’s Fortran tools, a decision that its originators discuss in depth on the project page. Code produced this way may not perform at its best on Ryzen CPUs as a result, but this binary is apparently representative of the software that would be available in the field. A more neutral compiler might make for a better benchmark, but it may also not be representative of real-world results with real-world software, and we are generally concerned with real-world performance.

Given the i7-8700K’s clock speeds, thread count, and memory bandwidth, it’s no surprise that it takes the top spot here.

 

Digital audio workstation performance

One of the neatest additions to our test suite of late is the duo of DAWBench project files: DSP 2017 and VI 2017. The DSP benchmark tests the raw number of VST plugins a system can handle, while the complex VI project simulates a virtual instrument and sampling workload.

We used the latest version of the Reaper DAW for Windows as the platform for our tests. To simulate a demanding workload, we tested each CPU at a 24-bit depth and a 96-kHz sampling rate, and at two ASIO buffer depths: a punishing 64 and a slightly-less-punishing 128. In response to popular demand, we’re also testing the same buffer depths at a sampling rate of 48 kHz. We added VSTs or notes of polyphony to each session until we started hearing popping or other audio artifacts. We used Focusrite’s Scarlett 2i2 audio interface and the latest version of the company’s own ASIO driver for monitoring purposes.
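
Why a 64-sample buffer is “punishing” comes down to simple arithmetic: the system must refill each buffer before it drains, and that deadline is just the buffer depth divided by the sampling rate. A quick illustration:

```python
# Time available to refill one ASIO buffer: samples / sample rate.
for rate_hz in (96_000, 48_000):
    for buffer_samples in (64, 128):
        deadline_ms = buffer_samples / rate_hz * 1000
        print(f"{buffer_samples} samples @ {rate_hz // 1000} kHz: {deadline_ms:.2f} ms")
# 64 samples at 96 kHz leaves just 0.67 ms per buffer; dropping to 48 kHz
# doubles every deadline, which is why those tests are less punishing.
```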

A very special thanks is in order here for Native Instruments, who kindly provided us with the Kontakt licenses necessary to run the DAWBench VI project file. We greatly appreciate NI’s support—this benchmark would not have been possible without the help of the folks there. Be sure to check out their many fine digital audio products.


At 96 kHz and a buffer depth of 64 for DAWBench VI, we are most likely testing per-core throughput more than anything. That means the i7-8700K shoots to the top of the chart at a buffer depth of 64, and its lead widens when we relax the buffer to 128 samples. Six Coffee Lake cores are no joke.


The DAWBench DSP test is a much more even playing field for our test subjects at 96 kHz. Still, the i7-8700K grabs the stock-clocked lead at 64 samples, and overclocking the Ryzen 7 1700 is only enough to match it. The overclocked i7-8700K is the leader by a wide margin. At 128 samples, the i7-8700K still maintains its lead at stock and not-stock speeds.


DAWBench VI loves Intel CPUs, and lowering the sampling rate just lets the Core i7-8700K widen its lead at both buffer depths. The superb latency characteristics of Intel’s client core seem to make a big difference in this benchmark.


DAWBench DSP tends to be a great equalizer, but the i7-8700K bucks the trend by taking a wide lead at both buffer depths at 48 kHz, both stock and overclocked.

Given its chart-topping performance across the board, the Core i7-8700K seems to be the best thing going in affordable CPUs for digital audio workstation duty. There’s really no contest.

 

Power consumption and efficiency

We can get a rough idea of how efficient these chips are by monitoring system power draw in Blender. Our observations have shown that Blender consumes about the same amount of wattage at every stage of the bmw27 benchmark, so it’s an ideal guinea pig for this kind of calculation. First, let’s revisit the amount of time it takes for each of these chips to render our Blender “bmw27” test scene:

Because Blender is a steady-state workload, we can take a reading off our Watts Up power meter and multiply it by the time each chip requires to complete the render to come up with an estimated task energy in kilojoules. We can then plot that figure against the time each chip needs to complete its task to get a simple visualization of efficiency.
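
The arithmetic itself is trivial. Here’s a sketch with placeholder inputs; the wattage and render time below are illustrative, not measurements from our test rigs:

```python
# Task energy for a steady-state workload: watts x seconds = joules.
power_draw_w = 150.0    # hypothetical wall power during the render
render_time_s = 300.0   # hypothetical time to finish the bmw27 scene

task_energy_kj = power_draw_w * render_time_s / 1000  # J -> kJ
print(f"{task_energy_kj:.0f} kJ")  # 45 kJ for these made-up inputs
```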

Unfortunately, we’ll only be considering Intel versus Intel for the task energy portion of these tests at press time. Recent firmware updates to our X370 motherboard of choice have vastly increased both idle and load power consumption from our Ryzen 7 CPUs for no discernible reason, and we don’t feel comfortable reporting power-consumption numbers for what appears to be a firmware issue on the AMD side. This power management issue didn’t affect clocks or performance in any of our other benchmarks, but it does make this comparison impossible. We’re looking into this issue and we’ll update the review with new numbers if we can.

Under load, the i7-8700K system’s instantaneous power consumption is up from generation to generation, as one would expect from adding two more cores on a similar process. The increase isn’t substantial, though. Overclocking the chip does cause our testbed to draw far more power from the wall, but that’s to be expected, and about 60W more isn’t that bad.

Plot our estimate of task energy against time to completion, and the i7-8700K is impressively efficient at stock speeds. Its higher instantaneous system power draw is offset by the fact that it gets the job done much faster than every other chip here, so its total task energy is lower than that of any four-core, eight-thread part. That’s an excellent advance for Intel.

 

Conclusions

It’s time to condense all of our test results into our famous value scatter plots. We use a geometric mean of all of our real-world results to ensure that no one test has an undue impact on the overall index. First up, let’s look at gaming performance.


The i7-8700K doesn’t leave me much to say. In both gaming performance potential (as measured by average FPS) and in delivered smoothness (as measured by its 99th-percentile FPS figures), literally nothing else on the market can touch this chip for high-refresh-rate experiences. Simple as that.

Not much to say here, either. The Core i7-8700K’s world-beating single-threaded performance, Ryzen 7-matching multithreaded performance, and exceptional DAWBench results put it well over the top for the best overall productivity CPU of this bunch. Overclocking only sweetens the deal.

All told, the Core i7-8700K is Intel's most groundbreaking enthusiast CPU in years. Its productivity performance and power efficiency are outstanding. It offers enviable new heights for high-refresh-rate gaming experiences, especially on our critical 99th-percentile FPS metric of delivered smoothness. Our particular CPU had plenty of overclocking potential, too. Whatever wizardry Intel's manufacturing team is working with the company's 14-nm process, it's paying off handsomely, even if we are getting a third round of the same Skylake core we've known since 2015.

Although Intel took its sweet time in delivering the coup de grace, the i7-8700K dramatically puts the final nail in the coffin of 2011’s Core i7-2600K. Clock for clock, the i7-8700K delivers 2.5 times the performance of Sandy Bridge’s finest four-core avatar in our productivity index. Overclocking the i7-8700K delivers nearly three times the performance. ‘Nuff said. Coffee Lake’s gaming prowess is head, shoulders, knees, and toes above the 2600K’s for high-refresh-rate experiences, too.

Even though it’s down two cores and four threads on AMD’s Ryzen 7 family, the i7-8700K spells trouble for those chips at their current prices. The Coffee Lake chip is a fair bit faster than AMD’s eight-core parts in our productivity index. For high-refresh-rate gaming, there’s no contest. Those playing at 2560×1440 and 4K might find less to get excited about in Coffee Lake, but I think gamers of all stripes will prefer the i7-8700K’s as-yet-unmatched combo of world-beating single-threaded performance and 12 threads for more demanding work. Ryzen 7 CPUs can’t quite cover both bases yet, and you’ll feel the i7-8700K’s advantages everywhere, not just in gaming.

All that performance was available before we turned up the multipliers for an easy overclock to 5 GHz on our particular CPU, too. Ryzen CPUs just can’t clock that high under typical conditions today. That’s not to say Ryzen is completely left in the dust in the overclocking stakes, of course. I pushed my $300 Ryzen 7 1700 to 4 GHz across all its cores, and that boost let the chip claw back some of the gap with the $360 i7-8700K in productivity tasks at stock speeds. Overclocking the i7-8700K opens the gap right back up, though, and if you’re going to the effort of tweaking to begin with, the rewards of the i7-8700K seem much more tantalizing to me.

Intel Core i7-8700K

October 2017

As if all that wasn’t rough enough, the i7-8700K might snuff one of AMD’s brightest selling points for its Ryzen 7 parts: single-PC streaming power. Whether at stock speeds or overclocked, the i7-8700K delivered a much more fluid client-side gaming experience than the Ryzen 7 chips for a given stream quality (at least in our largely CPU-bound testing, which won’t always be typical). Streamers who considered Ryzen 7 CPUs a default pick over Intel’s quad-cores now have a compelling alternative.

No matter what team you root for, though, AMD deserves enthusiasts’ gratitude for finally offering Intel some stiff competition in x86 CPUs again. Without that development, it’s not clear how long we would have had to wait until Intel delivered higher core and thread counts in its mainstream socket. AMD seems to have an aggressive CPU roadmap of its own planned for the near future, so we can only hope this renewed competition will remain vigorous for some time to come.

For now, the Core i7-8700K is the most well-rounded CPU available. With virtually no weaknesses, it’s among the easiest TR Editor’s Choice picks I’ve ever made. If you’ve been sitting on an upgrade from Sandy or Ivy for the past few years, grab your hammer and piggy bank and wait for our next System Guide. It’s time.

Like this review? Help us continue to perform the dozens upon dozens of hours of hands-on testing that go into articles like this one by becoming a TR subscriber today. Subscribers get exclusive benefits as part of their membership, and you can contribute any amount you’d like to support us. Best of all, we were doing crowdfunding before it was cool, so your entire contribution (less transaction fees) helps us continue our work. Thanks in advance for your support.

Comments closed
    • MileageMayVary
    • 2 years ago

    Any chance for a follow up with QHD and UHD resolutions?

    While I do think FHD is important, especially for comparisons up and down the product stacks, I think people paying $300+ for a CPU might also be paying for better pricier monitors.

      • Voldenuit
      • 2 years ago

      At [url=https://arstechnica.com/gadgets/2017/10/intel-coffee-lake-8700k-review/2/]1440p and 2160p[/url], the differences between CPUs even out.

    • abiprithtr
    • 2 years ago

    Coffee lake is a beast when it comes to performance. At its price, it has to be.

    The other side of this cpu is the thermal issues it has, as pointed out by many articles including one from TR.

    Interesting to see how individuals with 20+ posts gloating in this review article being totally absent in the article about the thermal issues.

    Are TR members really so partisan?

      • elites2012
      • 2 years ago

      It's a beast because of higher clocks. Clock for clock, Ryzen sweeps the floor. I think one of the rules Intel gives reviewers is not to underclock the chip for a clock-for-clock comparison.

    • maroon1
    • 2 years ago

    On page 2, you list the i3-8350K with 6MB of L3 cache:
    [url]https://techreport.com/review/32642/intel-core-i7-8700k-cpu-reviewed/2[/url]

    On Intel's website, it says 8MB of L3 cache:
    [url]https://ark.intel.com/products/126689/Intel-Core-i3-8350K-Processor-8M-Cache-4_00-GHz[/url]

    • 1sh
    • 2 years ago

    Ryzen 7 wins by default because of the lack of availability for Coffee Lake CPUs.

      • chuckula
      • 2 years ago

      Now that the Vega launch is partially behind us I’m OK with that reasoning!
      — Raj’s stand-in.

    • ronch
    • 2 years ago

    This is exactly why I kept saying it’s not enough for AMD to come up with a great product, they need to match Intel or at least come really close. Look how Ryzen is absolutely a great product but Intel can easily cram 2 more cores and price aggressively, and all of a sudden AMD will have to move Ryzen pricing down. If you don’t perform as well you’re more susceptible to pricing pressure.

    • jackbomb
    • 2 years ago

    As nice as it is to finally have 6 cores on the mainstream socket, we’re still looking at Skylake (2015) levels of single-threaded IPC. Apple is all about single-threaded “snappiness,” and they seem to have pulled off Kaby-class IPC with their A11.

    If Intel wants to remain in the Macbook, their next CPU had better give us a meaningful bump in single core performance. We all know Apple has no problem switching uarchs when CPU makers can’t give them the performance they want.

    • Voldenuit
    • 2 years ago

    I’d like to see comparisons between the 8700K and the 7800X, overclocked.

    $10 more than 8700K (street), twice the memory channels, twice the number of PCIE lanes available to the PCH. Granted, X299 motherboards aren’t cheap, but neither are Z370 boards.

    Ready, fight!

      • NTMBK
      • 2 years ago

      You also need to pay for a quad channel memory kit to make the most of it, so I think overall platform costs will be a lot higher. It’ll kick ass on anything AVX-512 compatible though!

      • Firestarter
      • 2 years ago

      The 7800X has way higher memory latency, though. Even with more bandwidth at its disposal, I could see that being enough of a factor to make it perform worse than the 8700K in gaming tests, which seem more sensitive to latency than most productivity benchmarks. The Ryzen chips have similarly high latency, and we all know how well they respond to faster RAM.

        • Voldenuit
        • 2 years ago

        Anandtech [url=https://www.anandtech.com/show/11859/the-anandtech-coffee-lake-review-8700k-and-8400-initial-numbers/14]tested a 6800K (6c12t) in their 8700K review[/url], and it beat the 8700K in gaming tests despite being down on clockspeed. I'm going to guess the massive L3 cache has something to do with it, and I'd expect the 7800X to perform similarly.

      • thecoldanddarkone
      • 2 years ago

      Z370 mobos start at about 120 dollars, while X299 boards start at about 210. If anything, Intel knows the Z370 chipset is going to be short-lived and priced them accordingly (as much as they thought the market would handle).

      X370 Ryzen boards start at about the same price. If anyone thinks that’s by accident…

    • JustAnEngineer
    • 2 years ago

    Availability is very poor. 🙁

    [url]https://promotions.newegg.com/intel/17-6805/index.html?utm_medium=Email[/url]

    If this were an AMD product, Chuckula would have posted about a "paper launch" a dozen times already.

    For me, the non-availability of the Core i7-8700K and Core i5-8600K means that it'll be easier for me to wait to upgrade, either until a decent selection of micro-ATX motherboards arrives, or until the stop-gap Z370 boards are replaced with the promised new chipset in a few months.

      • chuckula
      • 2 years ago

      Yeah, but if I was Krogoth I'd be bragging about how this proves there's unprecedentedly [i]impressive[/i] demand. And I'd get upthumbed through the roof for it.

        • NTMBK
        • 2 years ago

        You’ve really got a chip on your shoulder, haven’t you?

        • Krogoth
        • 2 years ago

        It is partly because of sheer demand and it is partly because Coffee Lake was rushed to the market to minimize the hemorrhaging from Ryzen 5 and 7. Intel didn’t want to wait for their stocks to build up before publicly releasing Coffee Lake SKUs.

      • Beahmont
      • 2 years ago

      Well, I mean yeah. It’s not like US retailers sold over 100k units at $20+ retail prices in a matter of hours or something.

      Oh wait… never mind.

      • Redocbew
      • 2 years ago

      I don’t understand all of you who spend so much time arguing over the supposed supply and demand of this stuff. Do any of you know how many units were shipped to Amazon, Newegg, B&H, or any other retailer? If not, then you have no meaningful way of measuring demand, and it’s all just a study in confirmation bias.

    • PrincipalSkinner
    • 2 years ago

    I see some people have already dismissed AMD as being non-competitive.
    Don't forget that Intel has probably squeezed every last MHz out of their 14 nm process, while GloFo's is yet to shine. Also, AM4 comes with a promise of longevity. Sure, six CFL cores at 5 GHz are impressive, but I want an option to upgrade without changing the mobo.
    Can’t wait for next Zen iteration.

      • chuckula
      • 2 years ago

      How come I didn't get the upvote love when I posted the [url=https://techreport.com/discussion/32642/intel-core-i7-8700k-cpu-reviewed?post=1054705]same thing yesterday?!?![/url]

        • PrincipalSkinner
        • 2 years ago

        It’s all about the timing my friend. 🙂

          • chuckula
          • 2 years ago

          So you're saying it doesn't pay to be [b][i]first?[/i][/b]

            • Mr Bill
            • 2 years ago

            +3 nuntius interficere

            • PrincipalSkinner
            • 2 years ago

            Have you seen the votes on my first posts in shortbreads? 🙂

      • Mr Bill
      • 2 years ago

      Mojo Ryzen

      • ronch
      • 2 years ago

      Well, since Zen 2.0 is purportedly pushed back to 2019, it could mean we'll have to be content with the tweaked '12nm' Zen 1.1 purportedly coming out next year. If only they'd clock much higher, but I'm not holding my breath. Here's hoping to be pleasantly surprised.

        • PrincipalSkinner
        • 2 years ago

        5% more clock and 5% more IPC and I’m buying an eight core.

    • chuckula
    • 2 years ago

    All this nostalgia about the final sendoff of the 2600K reminds me of the good ol’ days when Intel was supposedly capable of making an impressive part.

    Oh wait, Krogoth [url=https://techreport.com/discussion/20188/intel-sandy-bridge-core-processors?post=526810]hated it too:[/url]

    [quote]Sandy Bridge is getting too many accolades. It is just the final product of Intel's goal of intergrating core logic onto the CPU. It all started with Nehalem. Calling it the biggest architectural change since Netburst is a bit too much. It is really just an extremely evolved Conroe. IMO, the last architectural change from Intel was Conroe. Ivy Bridge is looking like the next generation.[/quote]

    Since Krogoth has been completely unimpressed by the successive line of failures out of Intel from 2011 on, then by the transitivity property the 8700K is just a failed die shrink of Conroe with some extra crap glued on.

    Ten years of continuous unimpressive failures, Intel. It's a wonder Intel is still in business somehow.

      • Krogoth
      • 2 years ago

      Holy quote mining Batman!

      I didn’t hate Sandy Bridge at launch. I just wasn’t getting a massive e-gasm over it like some other geeks.

      It really wasn’t that much of an upgrade for those who were already on the Nehalem bandwagon (Lynnfield, Bloomfield).

        • chuckula
        • 2 years ago

        [quote]I just wasn't getting a massive e-gasm over it like some other geeks.[/quote]

        And yet in late 2017, the almost-seven-year-old 2600K is still relevant enough to at least be included in this review, while AMD's premier desktop parts from March 1 of this year apparently weren't relevant enough to make the cut. Maybe in retrospect some of your Meh wasn't fully justified.

          • Krogoth
          • 2 years ago

          Jeff only has finite time and resources. He can't bench every CPU on the market.

          The reason Sandy Bridge is even relevant has far more to do with the semiconductor industry running into the laws of physics, while the software side hasn't seen a paradigm shift in the mainstream market. Quad-core SKUs are still quite sufficient for today and the foreseeable future in this market.

          Sandy Bridge does already show its age, though: it's tied to DDR3, and 6-series chipsets are stuck with PCIe 2.0 and have no NVMe support. The age of the platform is what is convincing its remaining users to upgrade to Coffee Lake.

            • ozzuneoj
            • 2 years ago

            What? Aside from DDR3 being too limiting for newer CPUs, those things have very little impact on actual real-world performance, and hardly anyone is itching to upgrade for those features. Check the reviews for any NVMe SSD or comparisons of PCI-E 2.0 vs. 3.0 performance in gaming.

            As someone with a 2500K + P67 still going strong, having enough CPU performance to keep minimum and average frame rates as high as possible (I game at 120Hz at 1080P) is the only thing that has me considering an upgrade. An NVMe SSD would make no perceptible difference in anything I do, and PCI-E 3.0 is tiny bonus that comes with any platform upgrade I may do eventually (maybe a few percent performance improvement in the best case scenario).

            The only thing I find even remotely limiting about my P67 board is that my USB 3.0 ports are on the back of my case… so four years ago I bought a 5.25" bay card reader/USB 3.0 hub that attaches to one of my PCI-E x1 slots, and a couple of cheap USB 3.0 extension cables from Amazon to run to the rear ports if I really need them.

            Oh, and it only has two 6 Gbps SATA ports, which I use for my SSD and hard drive. I have no need for more than this.

            Aside from this, what am I missing?

            • Krogoth
            • 2 years ago

            DDR4 makes a noticeable impact on CPU-bound applications. That's why Skylake had a nice bump over Haswell and older chips. It makes a massive impact on frame times and minimum framerates.

            FYI, my poor 3570K holds back a Vega 64 in almost every game (2560×1440) except when I crank up the AA/AF.

            Sandy Bridge held its weight for a while, but it definitely shows its age with taxing applications. Coffee Lake warrants enough of an improvement to justify an upgrade, except for die-hard luddites (Windows 10).

            The age of 6-series motherboards is also a factor, especially when you've been overclocking a Sandy Bridge for years on end. The caps and MOSFETs are going to show signs of wear and tear: instability and general flakiness.

            • Ifalna
            • 2 years ago

            Yup, I have the same problem on my 3570K (@4.6) with a 1070: CPU taxed (despite the OC) and the GFX card twiddling its thumbs while being unimpressed.

            Good for me that I am no longer a hardcore gamer or CL would definitely be tempting.

            • homerdog
            • 2 years ago

            What do you mean by “held back”? Are you trying for >60Hz? My 4GHz 3770K manages 60fps in all the games I play.

            • Srsly_Bro
            • 2 years ago

            I’ve been running my z68xp ud4 almost 24/7 since 2012. It’s on 100% load when I’m not using it. I haven’t had any issues yet, thankfully.

    • Fonbu
    • 2 years ago

    With regard to the new top dog Coffee Lake, the elusive 5 GHz is finally here!

    Handbrake benches are relatively good, highly threaded.

    The digital audio workstation benches have a huge increase from Intel. Ryzen was generally pretty good, but the 8700k is just way out there.

    October's system guide will be mighty interesting, taking into account Coffee Lake supply issues, probably a Ryzen price drop, and hopefully some smaller-motherboard goodness with mATX and ITX builds.

      • PrincipalSkinner
      • 2 years ago

      AMD did 5GHz years ago. FX 9590.

        • DancinJack
        • 2 years ago

        …..

        At 220 Watts….

          • PrincipalSkinner
          • 2 years ago

          Bad trade off but the point is Intel isn’t first.

            • Beahmont
            • 2 years ago

            Well… if we’re talking about just first to 5Ghz then I’m fairly sure IBM beat everybody there back in the early 2000’s. If we’re talking about 5Ghz on a mainstream chip with high IPC, then it’s an Intel chip.

            • Mr Bill
            • 2 years ago

            [url]http://hwbot.org/submission/3613447_lucky_n00b_cinebench_r15_ryzen_threadripper_1950x_4122_cb[/url]

    • ronch
    • 2 years ago

    Oh man. Ryzen is being clobbered in those game benchmarks. It's just a good thing Ryzen is hitting >100 FPS in three out of those four games, which means gameplay should be fine. Still, you know Intel has much more power to spare.

    • Zizy
    • 2 years ago

    14nm++ is a very sneaky way to name your 15nm process, Intel. I wonder if others pick that up as well.

    • NeelyCam
    • 2 years ago

    It has taken many a year, but it might be time to retire my 2600k. By “retire” I mean have it replace the Westmere system that’s been my file server.

    Or… maybe I should wait a “little bit” longer for an Icelake system…? It’s not like 2600k is dead yet

      • Airmantharp
      • 2 years ago

      If it ain’t broke, don’t fix it!

      • NovusBogus
      • 2 years ago

      It totally depends on your needs. If you’re not playing today’s AAA games or trying to get super high frame rates on slightly older ones, or otherwise doing some workstation-like task pretty regularly, much of that power is going to be wasted. What Intel did here is damn impressive, but it’s still splitting hairs for most users due to CPU performance outpacing software needs.

      • NTMBK
      • 2 years ago

      I was still clinging on to a Phenom II X4 (with DDR2, no less) until the start of this year. I think a 2600K still has life in it!

      • chuckula
      • 2 years ago

      [quote]It's not like 2600k is dead yet[/quote]

      It wants to go for a walk!

      • ClickClick5
      • 2 years ago

      Aw man, my Harpertown server is feeling old now :/ The dual E5410 system is still doing everything I need it to do…so far. But getting a bit old 🙁

        • cmrcmk
        • 2 years ago

        Depending on the TDP of your Harpertown, it might be worth the upgrade just to reduce power consumption going forward.

    • HERETIC
    • 2 years ago

    With the catchphrase today being "best thing since Sandy," after reading this review and then nipping over to Anand, it'd be hard to argue. Then I noticed TPU had them all there: ooh, lots of reading…

    [url]https://www.techpowerup.com/reviews/Intel/Core_i3_8350K/18.html[/url]

    That really threw a spanner in the works, with the 8350K coming within a couple of percentage points in gaming, and virtually no difference at 1440, at less than half the price…

    The BIG question, though, is longevity: are those four cores able to do a Sandy down the track?

      • Voldenuit
      • 2 years ago

      I think it depends if you shut down every other app while gaming. If you have chat, voice, torrent, game, browser up while gaming, 4 cores might not be enough to guarantee high minimum framerates.

      Also, multiplayer games tend to hit CPU a lot harder and are rarely tested in benchmarks because repeatability is quite difficult, so if you play a lot of multiplayer games, I wouldn’t go for the i3 8350K.

      8600K might be the sweet spot for gaming and desktop use, and 8700K for streamers and video encoders.

        • Airmantharp
        • 2 years ago

        Yup, same argument as 2500k vs 2600k, or any non-HT vs. HT CPU at four cores: Games can use four to six cores effectively and without slowing down in benchmarks, but running on a real desktop means having stuff that might get in the way. Thus, you want to be able to handle at least six threads, where the 8600K provides ‘just enough’.

        And with the price difference for the 8700k being so small in the scope of build cost, it’s hard not to argue for the HT version if you’re chasing high minimum framerates, i.e. a solid 120Hz and/or VR.

        • HERETIC
        • 2 years ago

        Agree. At least with apps you can turn them off, but all the cr@p W10 does in the background when you don't want it to needs an extra core…

    • aspect
    • 2 years ago

    Can we expect an 8 core chip in the i7 lineup within a generation or 2?

      • Klimax
      • 2 years ago

      More like four, depending on process characteristics. That's likely one reason why the mainstream got an upgrade only now.

      • JustAnEngineer
      • 2 years ago

      That probably depends on what Intel is calling a “generation”.

        • Klimax
        • 2 years ago

        Usually by u-arch. (Aka 8-gen now)

      • Beahmont
      • 2 years ago

      If certain rumors talked about here on TR are correct, we could have 8-core mainstream parts in 2H 2018.

      • K-L-Waster
      • 2 years ago

      Well there’s this:

      [url]https://techreport.com/news/32565/rumor-eight-core-desktop-intel-cpus-and-z390-chipset-riding-in[/url]

      Make of it what you will....

      • Krogoth
      • 2 years ago

      It depends on the mainstream market demand and landscape. The only reason that 6-core “desktop-tier SKUs” from Intel came out today is because of pressure from Ryzen 5 and 7.

      Intel doesn’t want to give up juicy HEDT SKU margins if they can help it.

    • Bensam123
    • 2 years ago

    Hey look, websites are finally benchmarking multi-threaded workloads specifically pertaining to streaming, which I suggested 4-5 years ago on multiple occasions (one of the benefits of buying AMD's old-school 83XX series, and something I got a lot of flack for). Cool that everyone is getting with the times. I guess streaming isn't going away… XD

    Not exactly sure why increasing the game resolution and then downscaling it would reduce CPU workload in OBS? For the testing, I assume he's not using classic (Studio has completely replaced classic). Increasing the resolution and downscaling it should use the same amount of CPU cycles as running natively at 1080p@60, unless he found some weird sort of bug.

    Picking things apart here, I assume he settled on a 2560×1440 in-game resolution and a 2560×1440 base resolution in OBS, then used downscaling to reduce it to 1080p@60. Furthermore, you can't 'offload' work to the GPU unless you use hardware encoding. I assume what's happening here is he's running into a Deus Ex bug of some kind where higher resolution does not equate to higher FPS, and the game needlessly wastes CPU cycles when it's not GPU-bound without increasing FPS. So increasing the resolution causes the game to become GPU-bound, and it no longer runs as fast as it normally should (this should also cause a decrease in FPS, though).

    Taking it a step further, normally if you increase your GPU load you will eventually become GPU bound, the FPS will dip below your streaming FPS, and you actually reduce the encoding load on OBS. Even though the stream will still be 1080p@60, the amount of bits you need to encode will not be as high and results in a lower CPU load and 1080p@60 will be ‘fake’, just repeat frames. Although the average test FPS never showed it dropping below 60, it’s entirely possible something else is going on here.

    “…though, it’s interesting to note that looser encoder settings didn’t make 1920×1080 gaming and 60-FPS streaming possible on the i7-8700K, either. That suggests streaming framerate is a much bigger initial obstacle to clear when tuning one’s setup than the choice between “veryfast” or “faster,” for example.”

    Changing encoding presets has a very measurable impact on CPU load in OBS. If it doesn’t something here isn’t working properly. 1080p@60 is going to have a pretty huge and noticeable impact on most systems. Most people are settling for 900/864p@60 right now unless you have a capture PC. For instance the ‘medium’ preset will double/triple your CPU workload compared to veryfast. This is most noticeable in high action games, such as FPS’s (Overwatch would be a good candidate for that). Even if you’re capable of streaming at those resolutions you also need the bitrate to do so, which is why a lot of people also opt for lower resolutions. Sometimes that is because they don’t have the bandwidth, sometimes that’s because people watching don’t have the bandwidth (unless they’re partnered and have a transcode setting).

    If the testing setup constantly pops up 'high encoder lag detected' in red in OBS, your resolution/encoder preset/FPS are too high. There is no way you can stream at that resolution, and it's effectively a dud. Viewers aren't going to watch a jittery/stuttery stream. If the stream is working properly, that will never pop up. Watching OBS on a second monitor is a good way to look out for that (although it was mentioned it was being monitored).

    In other unrelated news, I was stalking the 8700k on Newegg. Immediately upon listing it was set to ‘back ordered’, same with the 8600k. Pretty sure they were never actually available from launch.

      • Jeff Kampman
      • 2 years ago

      OK, there’s a lot to unpack here. Several of your suppositions seem to miss the fact that DXMD running at 1920×1080 is a CPU-bound workload and DXMD running at 2560×1440 is not.

      I am not claiming that raising the gameplay resolution somehow “reduces CPU workload in OBS”; doing so keeps a CPU-bottlenecked game from making OBS unusable by removing that CPU bottleneck. You can see this if you click through the average-FPS charts with and without OBS running—when the stream is offline, all of the CPUs perform about as well at 2560×1440, suggesting that the bottleneck has shifted to the GPU. That shift frees up enough CPU to make streaming possible through downscaling.

      I’m also not denying that changing encoder settings has a major effect on CPU load, but when you try to stream a game that’s already CPU-bound from one PC at 1080p60, it doesn’t change the fact that the system is already overloaded. It just makes the system more overloaded and makes the stream look like garbage. Changing encoder settings in this coffin corner simply didn’t make much of a difference, in my experience.

      I specifically tuned my streaming settings to avoid producing a final stream whose quality was unwatchable by monitoring the stream on a second monitor, as you noted. I didn’t get any warnings from the software or anything to that effect.

        • Noinoi
        • 2 years ago

        The explanation makes a lot of sense!

        It's slightly off-topic, but I was thinking: since the game appears to run rather well (average FPS of 98 with a 99th-percentile frame time of 15 ms) at 1440p plus 1080p60 streaming, I do wonder if the game might have streamed properly off the 8700K using the same 1080p resolution for the game and 1080p60 for rendering, but with one change: the game capped at "1080p60" with a frame limiter or vsync on a 60 Hz refresh. A capped maximum frame rate might also ease the CPU load.

        As for why: I figure the streaming attempt at 1080p was probably run with the frame rate unlocked and vsync off on a 165 Hz monitor, so I was wondering whether things might change on an average 1080p60 monitor (which I have).

        • homerdog
        • 2 years ago

        Do most streamers really do CPU encoding? It seems daft to me when the GPU can do a fine job for free.

          • RAGEPRO
          • 2 years ago

          Hardware encoders really fall down at lower bitrates, particularly for complex and detailed game scenes with fast action gameplay. The CPU-based x264 encoder does a much better job.

          • Voldenuit
          • 2 years ago

          Many serious encoders do CPU encoding, and the ultra-serious ones do it on a dedicated capture box with its own CPU. The reason being that software encoding has higher visual quality at lower bitrates than GPU. It’s the difference between a bandwidth-limited stream looking like a ‘stream of a game’ to ‘actual gameplay of a game’. Additionally, the second capture box can be used for overlay and compositing effects. Lastly, some apps like OBS seem to be monetizing the use of their internal encoders.

            • RAGEPRO
            • 2 years ago

            Which apps? OBS doesn’t.

            • Voldenuit
            • 2 years ago

            Don’t stream myself, but a friend of mine streams with OBS, and he says he gets points for using OBS instead of Shadowplay. Would be open to hearing more specifics if you’re using it.

      • NeelyCam
      • 2 years ago

      In some ways, you sound a bit like a paid russian troll on a Washington Post comments forum. Or maybe this is just a hobby for you…?

      • K-L-Waster
      • 2 years ago

      TBH I’m not all that impressed by the “you can stream in one box” reasoning. It really only enters into the equation if you are actually streaming in the first place. Most users don’t stream, don’t intend to start, and so the ability to do so is superfluous.

      When Intel and AMD start selling mainstream chips with the “moar coarz cuz str33m1ng”, they’re basically peddling a solution in search of a problem.

      Automotive analogy time: this is like telling the average motorist they need a full size pickup because now they have the cargo capacity to become a landscaper or building contractor. Sure, it’s useful if that’s what you’re planning to do, but… how many people are actually planning to do that?

    • DeadOfKnight
    • 2 years ago

    Anyone else notice that no reviewers seem to have an unlocked Core i5? My guess is it would overclock just as high as the i7 and show everyone how useless HT still is for playing games.

      • NeelyCam
      • 2 years ago

      HERETIC pointed this out:

      [url]https://www.techpowerup.com/reviews/Intel/Core_i3_8350K/18.html[/url]

      Sweetness.

        • juzz86
        • 2 years ago

        A 4.5GHz 8350K as good as a stock 8700K at 1440p.

        Unreal (in a good way).

      • Airmantharp
      • 2 years ago

      HT is absolutely useful for playing games, especially in a real desktop environment versus a benchmarking environment, and for multiplayer and/or streaming, not to mention when you want to keep high minimum framerates (rather, low maximum frametimes) for high-refresh rate monitors and VR.

        • DeadOfKnight
        • 2 years ago

        Not in my testing. A few games see a <10% increase in FPS. In fact, I have HT turned OFF on my i7-5775C for two reasons:

        1. It significantly reduces temperatures when overclocking
        2. There is a small, but measurable INCREASE in game performance for the majority of games that I play.

        True, many of the games I tested are not the ones widely used by reviewers today, but just because half a dozen of the newest titles see some benefit from HT doesn’t mean they are the games we are all playing and replaying today. This is the same reason I have no real desire to upgrade to more cores in the near future. Sure, it may help me to max out the smoothness on my 100Hz monitor for the latest titles, but the majority of those titles I will play once and never play again.

        Most of the games I play are still wildly popular older games or not all that demanding. Steam sale deals for 80% off, eSports titles, retro gaming titles, etc. None of these games will see any benefit because they use a maximum of 3 threads. Not saying I won’t ever upgrade, but these reviews give more value to the hardware than is going to be felt in the real world. If you need or want to upgrade, absolutely go for the best perf/$, but I don’t think the i7s are for most gamers.

          • Airmantharp
          • 2 years ago

          Don’t confuse performance in general with your personal experience.

            • Redocbew
            • 2 years ago

            I tend to take the fact that hyperthreading is pretty much a standard feature of a modern CPU these days as the indicator of its utility. The fact that so many people are so distrustful of it really just speaks to how badly Intel bungled its introduction to enthusiasts with the Pentium 4, and how application-dependent its behavior can be in practice.

          • curtisb
          • 2 years ago

          Just because you aren’t getting more FPS out of a game doesn’t mean you aren’t getting more overall performance. There’s more to a game than frame rate.

        • Krogoth
        • 2 years ago

        No, it does not. HT actually causes more "hiccups" due to scheduling issues with the OS.

        HT is only useful if you like to multi-task or use applications that are hilariously parallel.

          • curtisb
          • 2 years ago

          Sure, if you’re running Windows 2000…you know, since the OS was made before HTT was a thing. Those scheduling issues were taken care of a long time ago.

            • Krogoth
            • 2 years ago

            The issue still persisted for a while, and it wasn't until Windows 8 and onward that the problem was mostly under control. It can still be a "problem" for those who are obsessive about minimum framerates and frame times.

            It's much less of an issue for the *nix crowd, though.

            • curtisb
            • 2 years ago

            Yeah…no. Just…no. If what you're saying was correct, then an i7 would perform worse than an i5, because from a technical standpoint there is [b]literally[/b] no difference between the two except for HTT. Don't bother bringing up clock speeds or the additional L3 cache. Neither of those would fix a kernel scheduling issue.

            [url]https://ark.intel.com/compare/126685,126684[/url]

            • Voldenuit
            • 2 years ago

            [url]https://www.techpowerup.com/forums/threads/gaming-benchmarks-core-i7-6700k-hyperthreading-test.219417/[/url]

            A forum poster found that disabling HT on his 6700K could sometimes (slightly) increase performance in some games.

            Overclock.net found that [url=http://www.overclock.net/a/intel-core-i3-vs-core-i5-vs-core-i7-gaming-performance-with-geforce-gtx970]an i5-4690K at 4.4 GHz posted slightly better frame times than an i7-4790K at the same speeds[/url] in one game (GTA V), but that was only one game and possibly within the realm of test variance (20.6 ms vs. 19.7 ms).

            I can imagine HT would help in multi-tasking scenarios, but I hesitate to say that HT improves gaming performance per se. As for performance deficits, well, they seem to be rather minor, so I keep HT on on my home rig b/c I'm lazy that way.

            • curtisb
            • 2 years ago

            So here's the thing. Just because the extra threads are there doesn't necessarily mean the game is going to benefit from them. The game has to be coded to use the extra threads, just like any other multithreaded software. There are many more timing issues that have to be dealt with in games to keep frames, AI, sound, etc. all in sync. It's way different than going "Ok Photoshop, here are all of these threads… have fun!"

            Multithreading in games has been talked about ad nauseam for over a decade. Saying that it will hurt performance [i]because of Windows scheduling issues[/i] is incorrect. These are software and driver optimization issues. There are plenty of other applications that do show performance increases with HTT enabled. If the Windows scheduler were the issue, those applications wouldn't show any increases either.

            • Voldenuit
            • 2 years ago

            [quote]There are plenty of other applications that do show performance increases with HTT enabled. If the Windows scheduler was the issue, those applications wouldn't show any increases either.[/quote]

            Unlike games, though, most of the scenarios that benefit from the extra virtual threads in HT are not latency- or timing-sensitive. If a game thread is context-swapped from logical processor 1 to logical processor 2, it can incur a latency penalty, whereas a heavily threaded situation like Cinebench would just take it in stride.

            • curtisb
            • 2 years ago

            Right, not disagreeing with that in the least. There's latency involved with context-swapping threads in general, even without HTT. That's a multithreaded coding issue, though, not a Windows scheduler issue. Am I saying it's not [i]slightly[/i] more pronounced with HTT enabled? No.

            You do realize that most of the frame-rate differences in the Tech Power Up forum post you linked are within a margin of error. Most of them are within 1-5 FPS of each other. Just looking at that one piece of information, there's certainly not enough difference that you'd be able to tell in real-world playing. It also doesn't make a case for disabling HTT and affecting the performance of other applications that might be used.

            Personally, I'd like to see this run with TR's method to see if it affects the minimum frame rate and the time-beyond-X metrics, although I still have my doubts that the differences are going to be all that much. TR's method has proven time and again that maximum or average FPS doesn't tell the whole story.

            And FWIW, I'm not the person who down-voted you. I don't feel it necessary to resort to that, especially when I feel we're having a constructive conversation. 🙂

            • Voldenuit
            • 2 years ago

            No worries, we’re probably on the same page here. If I felt that HT really hurt performance, I’d have turned it off on my system, and I haven’t.

            • Krogoth
            • 2 years ago

            It is frame-time performance and minimum framerates that get affected by the "hiccups". The only way to do an "apples to apples" comparison is by disabling and enabling HT on the same silicon.

            OS scheduling and the software in question are the typical culprits behind the "hiccups", not the hardware. Again, this is only a problem for obsessive types who want to minimize any sort of "hiccup" as much as possible. The overwhelming majority aren't going to be bothered by the occasional fraction-of-a-second hiccup/dip, even if they happen to notice it.

            • curtisb
            • 2 years ago

            Again, as I said in my reply to Voldenuit, it is not the OS scheduler that’s the issue. If that was the case, there wouldn’t be any scenario on a Windows machine where HTT would provide a performance increase.

            • Airmantharp
            • 2 years ago

            Wanting smooth performance is ‘obsessive’?

            • Waco
            • 2 years ago

            You say this, but there’s no evidence presented to support your supposition on modern hardware/software stacks.

            • Krogoth
            • 2 years ago

            [quote][b]properly coded[/b] modern hardware/software stacks[/quote]

            FIFY. Thread scheduling has never been Microsoft/NT's strength. It is more of a crapshoot with mainstream-tier software. There are plenty of obsessive users who do notice the minor "hiccups" with HT enabled and turn it off for that reason alone.

    • Ph.D
    • 2 years ago

    Sitting here with my 2011 PC running the 2600K, and I guess this is about as good as I can expect to get?

    Is it time to just give up on waiting for the 10nm Cannonlake chips?

      • tsk
      • 2 years ago

      You won't see Cannon Lake on desktop. Intel themselves stated 10nm+ (Ice Lake or later) is likely to be the first iteration of desktop 10 nm.

        • DeadOfKnight
        • 2 years ago

        They will probably have a mild performance bump before then. Either a new chip penciled in like Nvidia did with Pascal or, more likely, something even smaller like Devil’s Canyon. This is a good time to take the plunge if you’re ready.

      • NeelyCam
      • 2 years ago

      Boat buddies. tsk’s comment about icelake resonates with me

      • Airmantharp
      • 2 years ago

      Icelake, the desktop 10nm version (props to tsk above), should have been here by now had Intel followed their ‘tick-tock’ strategy.

      However, not only did 14nm give them trouble, so has 10nm: so we got the tick and tock at 14nm as Broadwell (5000-series) and Skylake (6000-series), but two more 14nm ‘tocks’ as well in Kaby Lake (7000-series) and now Coffee Lake (8000-series).

      Remember that the ‘tick’ was the previous ‘tock’ on a new node, so Broadwell was essentially Ivy Bridge (3000-series) ported to 14nm, plus the Iris Pro graphics stuff with massive caches, while Skylake was the updated architecture on 14nm.

      For Kaby Lake and Coffee Lake, the core of the architecture didn't change from Skylake, so for all three of these CPUs, single-threaded performance is essentially equal at equal clock speeds. The improvements, as noted in the review above, came as better sustained boost speeds on Kaby Lake, especially for mobile, and as more cores and further boost refinement for Coffee Lake.

      Now, to your question: yes, this is as good as you’re going to get: I’d agree with AbRASiON that 10nm on the desktop as Icelake will be at least a year out if not more, and DeadOfKnight is probably right that Intel might do a ‘refinement’ release if 10nm desktop CPUs take too long as they did with Kaby Lake and Coffee Lake.

      As to whether it’s worth dumping the 2600K: honestly, if it ain’t broke, don’t fix it, but if you’re maxing the 2600K out, this is the CPU to get for the foreseeable future.

      • ptsant
      • 2 years ago

      Small things accumulate. Between faster memory, faster USB, faster NVMe, per-core gains, and more cores, I really think the 8700K is a worthy upgrade from a 2600K.

    • blastdoor
    • 2 years ago

    A couple of thoughts:

    1. Thank you AMD for providing the competition that was needed to make this happen. The ++ in 14nm++ stands for “competition.” Intel could have done this a couple of years ago but didn’t want to take the margin hit. They are doing it now only because they have to.

    2. Sorry AMD, but my gratitude goes no further than posts in forums like this. When it comes to $$, I only buy the best product.

      • strangerguy
      • 2 years ago

      Yup, AMD definitely will need to do some major price cuts, and the lineup will become very compressed. 1800X at $300 would be a tough sell, let alone the original $500, after CFL.

      • Pancake
      • 2 years ago

      I completely agree. Although ECC in Ryzen/TR…

      Still, it’s kinda hard watching AMD keep bleeding red ink. It’s like a terribly one-sided boxing match. AMD punch-drunk getting up and staggering then copping another blow to the head. Kinda why I don’t watch boxing or any martial art matches.

      • Klimax
      • 2 years ago

      Sorry, no. Competition doesn’t have a thing to do with this unless you talk about previous Intel CPUs competing with their newer ones.

      14nm++ was in the works for quite a long time. (Remember when Intel announced the change from tick-tock? This is what they meant and targeted; it just took a bit longer than anticipated.) Intel couldn't have done this two years ago. They were already working on, fixing, and upgrading 14 nm. Check your timeline.

        • blastdoor
        • 2 years ago

        I just did and I’m right.

        Broadwell Xeons were out in 2015. That proves that Intel was fully capable of making 14nm CPUs with higher core counts a couple of years ago. :-p

          • Klimax
          • 2 years ago

          Just because you can string together a larger number of cores doesn't make them viable for the mainstream.

      • ronch
      • 2 years ago

      You mean you only buy winners?! Someone else here certainly does!

    • credible
    • 2 years ago

    Absolutely astonishingly in-depth review Jeff, bravo.

    It's amazing what effect a monopoly can have, but what's even more amazing is what happens when competition reappears. In the end, guess who wins: the consumer.

      • Klimax
      • 2 years ago

      AMD competition had very little to do with this. For one, the timeframe is too short. And second: there are not many ways to convince people to buy Yet Another CPU…

        • ptsant
        • 2 years ago

        Too short timeframe?

        Intel has had high-core CPUs for ages, but they were only sold as $$$$$ Xeons. Intel also had 6 and 8-core consumer CPUs which were only sold in the HEDT market for $600+ with $400 motherboards.

        Intel did not reinvent the wheel. The gains (per core and frequency) over the previous architecture are marginal. They simply realized that they have to convert some of their precious Xeons into HEDT CPUs and some of their HEDT CPUs into consumer CPUs. This is what happened, and it did not take an engineering miracle. It was purely a business decision.

          • psuedonymous
          • 2 years ago

          Coffee Lake is not an existing die, in any way shape or form. Spinning up a brand new die – even if you rationalise it as “you’re just sticking two more cores in a Kaby Lake die!!1!” – from conception to production is not a fast process.

            • NTMBK
            • 2 years ago

            Zen rumours have been swirling since 2015. They had time.

            • Klimax
            • 2 years ago

            Just two years. 14nm++ was in planning a bit longer than that.

          • Klimax
          • 2 years ago

          Just because one can add cores doesn't mean frequency or other properties won't be affected.

          Anyway, my primary point is that AMD had zero to do with this lineup. As in the mobile space, "more cores" can be effective marketing to get people to buy a new CPU when IPC gains are very limited. And if Intel could have gotten this CPU with the same frequency scaling sooner, and thus boasted the same percentage increases, they would have done so.

        • NTMBK
        • 2 years ago

        There were rumours of high core count Zen, with competitive per-core performance, as far back as 2015. Intel had plenty of time to add another two cores onto their Skylake chips.

          • Klimax
          • 2 years ago

          Or they had them planned for even longer. It's not like they were unaware that a lot of people are skipping CPU releases because of a perception of insufficient performance increases.

          Also, as I already mentioned above: just two years. If Intel hadn't needed 14nm++ to create the CPU in this review, they could have already done it. And "monopoly" is not a valid answer. (Unsold CPUs don't make money…)

            • NTMBK
            • 2 years ago

            They had the unequivocally best mainstream CPU, so they had no incentive to go to six cores. They'd have had to sacrifice some of their profit margins to offer a ~25% larger chip at the same price, and they wouldn't do that without competition.

            • Beahmont
            • 2 years ago

            Or they could just wait and amortize their 14nm process for longer and sell the larger chips with the same margins because ‘costs’ went down.

    • Srsly_Bro
    • 2 years ago

    I have to say, this is TR’s best CPU review to date. Not only did TR cover the new CPU, it covered the OCed version also. Not only did TR cover the new CPU in stock and overclocked settings, it compared the 2600K overclocked.

    Only thing left is to thank Jeff for letting me know how much my 2700K sucks. I’m going to upgrade now. Thanks, bro.

      • Redocbew
      • 2 years ago

      Srsly, bro?

    • JustAnEngineer
    • 2 years ago

    Thanks for the review, Jeff. I was hoping that you would include comparisons to the Core i7-7740X that was recommended in the recent System Guide.

    • Kretschmer
    • 2 years ago

    Well hey, my 7700K was top dog yesterday…

      • juzz86
      • 2 years ago

      Same. But I’m happy, because even one generation ahead, there’s already a tangible benefit to upgrading.

      By the time we’re where Skylake was over Sandy, the gains will be even bigger.

        • Kretschmer
        • 2 years ago

        Yeah jesting aside I don’t think I would see any noticeable difference slotting in an 8700K right now, and I did get a good 7 months of extra awesomeness out of my 7700K.

      • Srsly_Bro
      • 2 years ago

      Every dog has its day.

      • ptsant
      • 2 years ago

      I hear you. My Ryzen 1700X also seemed a bit nicer until yesterday.

      Well, still does the job nicely…

    • Plazmodeus
    • 2 years ago

    I have the feeling I had some seven years ago when I read TR’s Sandybridge review: Now it’s time for an upgrade.

    • Wirko
    • 2 years ago

    What was in the photo when Tech Report did the Bra Swell review, again?

      • PrincipalSkinner
      • 2 years ago

      A swollen bra of course.

        • ronch
        • 2 years ago

        First funny joke I came across today.

        You do this First thing exceedingly well!!

          • PrincipalSkinner
          • 2 years ago

          First and foremost, thank you.

    • DeadOfKnight
    • 2 years ago

    It’s really too bad Ryzen isn’t better because I like AMD’s strategy more. It’s a shame Intel doesn’t do the same thing. i7-8c/16t, i5-6c/12t, i3-4c/8t, Pentium-4c/4t would be a killer lineup.

    Not to mention chipsets, feature segmentation, and TIM.

      • mganai
      • 2 years ago

      Ice Lake, from the looks of it, will be 8C/16T for the i7 and 8C/8T for the i5, the latter of which still has more power potential than 6C/12T. I wonder if Intel really will risk undercutting the new gen so soon, though.

        • Beahmont
        • 2 years ago

        Last I heard, Ice Lake wasn't due out until 2019, and there were going to be no Cannon Lake desktop chips because of transistor power-curve issues on 10 nm. That's why all the roadmaps for 2018 have a Coffee Lake refresh.

        • ptsant
        • 2 years ago

        If true, yet another socket change.

          • psuedonymous
          • 2 years ago

          Unlikely, Ice Lake is still slated for 300-series chipsets, and Intel has stuck to two-gens-per-socket for the last decade.

      • Klimax
      • 2 years ago

      What strategy?

        • Mr Bill
        • 2 years ago

        No market segmentation of VM support
        DDR4 memory channels
        ECC support
        PCIe lanes
        Unlocked clocks

    • AnotherReader
    • 2 years ago

    Intel’s process optimization is working wonders. Let’s see if Global Foundries’ 12 nm node, which is actually just an improved 14nm, will close the gap. However, I don’t feel the urge to replace my 1700X yet.

    • xigmatekdk
    • 2 years ago

    So what can AMD do besides playing the budget brand?

      • Voldenuit
      • 2 years ago

      Right now? Not much. Raven Ridge might be competitive for IGP-only systems if they were being used in GPU-limited situations, but most IGP-only systems aren't. I don't think AMD has much chance of market penetration into laptop OEM sales, and even if they did, Intel might (emphasis on might) have a performance-per-watt advantage.

      Zen 2 won’t be out for another 2 years, probably. AMD’s only real saving grace right now is that intel is having troubles with their 10nm process, so will be competing at near-parity nodes for the short term.

      Also, Z370 sucks. 20 PCIE lanes in 2017? Lame.

        • DeadOfKnight
        • 2 years ago

          Indeed. NVMe makes more PCIe lanes necessary.

          • Beahmont
          • 2 years ago

          Yes, but it's hard enough to get manufacturers to put SATA SSDs in their pre-builts. Worse yet, NVMe SSDs are generally still more expensive than SATA SSDs and only provide a performance improvement in a very narrow range of circumstances that the average pre-built box will never encounter.

          NVMe SSDs are fancy and cool, but common software just hasn't been built to take advantage of their performance increases over SATA SSDs.

        • Krogoth
        • 2 years ago

        Blame the limitations of Socket 1151. There aren't enough pins for more PCIe lanes, nor for the DDR4 channels to feed them.

        If you need more I/O connectivity, you have to go with Socket 2011v3, Socket 2066, or Socket TR4.

          • Freon
          • 2 years ago

          Yeah I was hoping for something like a 1200-1400 pin socket with the same dual channel memory and a bit more PCIe connectivity, or an upgraded DMI and more PCIe on the southbridge, whatever.

      • Klimax
      • 2 years ago

      Nothing. They started there as such and never got out. They were always there. (Short of Intel making a second Netburst-scale mistake.)

      • ptsant
      • 2 years ago

      EPYC: outperforms anything Intel has at similar price points, in a much more lucrative market.
      Threadripper: competitive at the $999 price point.
      Raven Ridge: the GPU will certainly outclass anything Intel has, and the CPU is "good enough." If power is OK, it might work well in mobile, where a dGPU is harder to integrate.

      Also, keep in mind that Intel shifted Xeons to HEDT and HEDT to consumer once, but they can’t keep playing that game, because their cores are bigger and costlier, especially at the higher end. And it might be easier for AMD to optimize a new architecture that still has rough edges, vs the highly refined core architecture. Of course, this remains to be seen but if true we may see bigger progress per generation on the Zen side than on the core side.

      Exciting times!

      • thx1138r
      • 2 years ago

      1. Bring out a decent mobile chip!

      PC enthusiasts often forget that desktops make up only about 43% of consumer CPU sales these days:
      [url]http://gs.statcounter.com/platform-market-share/desktop-mobile-tablet[/url]

      Both the Zen CPUs and the Vega GPUs seem to be very power-efficient when run at an appropriate clock speed. And based on list prices and sales volumes, Intel has probably been making a lot more profit in the mobile arena than on the desktop for quite some time.

      2. Next year's Zen refresh has to be good; it needs some extra IPC and higher clocks.

      3. Convince some big companies to use EPYC. These big server chips are extremely profitable; the trick is to get a bunch of users to buy them.

      • Unknown-Error
      • 2 years ago

      Enthusiast- and mainstream-desktop-wise, Coffee Lake is going to push AMD into a corner, but not completely out and into the back end of the budget segment like Bulldozer, and in HEDT Threadripper certainly has its relevance. Server-wise, EPYC should give them at least a few percentage points of gains in that segment, which translates to a lot of $$$. Their problem is that the RyZen desktop APU won't be coming until next year.

      And the biggest problem with RyZen on the desktop in general is not really the IPC but the darn clock speeds. They really need to get XFR to around 4.5 GHz. Hopefully next year Pinnacle Ridge, on a better, refined process, can do that. There won't be any IPC increase until Zen 2, but if they can get XFR higher, things should be better. Unfortunately for AMD fans this won't happen until [b<]MID NEXT YEAR[/b<]. As of now, overclocking RyZen is an absolute b**ch; in contrast, pushing CFL past 5 GHz is a breeze.

      Mobile-segment-wise, I am really curious to see how the Raven Ridge APU does. It's supposed to arrive by the end of this year, but leaks are few and far between.

    • gmskking
    • 2 years ago

    Seems to me there is no real IPC improvement here. The extra threads are nice, though. No doubt, for overall performance per price, this is the new King of the Hill.

    • USAFTW
    • 2 years ago

    Welcome back, competition! We’ve missed you!

    • End User
    • 2 years ago

    [quote<]the i7-8700K dramatically puts the final nail in the coffin of 2011's Core i7-2600K[/quote<]

    Thank heavens!!!

    • Mr Bill
    • 2 years ago

    Page 13… Corona raytracing. Should be: ‘It merely pulls even with the Ryzen 7 1700X at stock speeds, and narrowly edges out the overclocked Ryzen 7 1700.’
    [quote<] It merely pulls even with the Ryzen 7 1800X at stock speeds, and narrowly edges out the overclocked Ryzen 7 1700.[/quote<]

      • Mr Bill
      • 2 years ago

      Pedantic; I know…
      [url<]https://techreport.com/r.x/2017_10_03_Intel_s_Core_i7_8700K_CPU_reviewed/corona.png[/url<]

    • xigmatekdk
    • 2 years ago

    The only 6 cores that matter.

      • Mr Bill
      • 2 years ago

      [url<]https://www.youtube.com/watch?v=o_eSwq1ewsU[/url<]

      • Krogoth
      • 2 years ago

      The 7800X has more PCIe lanes and quad-channel DDR4 at its disposal if you need more I/O expansion, at the cost of being tied to the X299 platform and having no IGP.

    • Lordhawkwind
    • 2 years ago

    So, as I game at 1440p with a 144-Hz FreeSync monitor and already have an i7-7700K, this launch is of little interest to me other than that, like others have stated, I'm not happy the 8700K doesn't work on the Z270 platform. Poor excuses from Intel as to why it can't work, and just another example of their inherent greed in trying to rip off consumers at every turn.

    Hey, you want our best CPU? Well, you've also got to buy a new motherboard, which makes us even more money. Remember, AMD has promised that future Ryzen CPUs will run on AM4 boards until at least 2020. Z270 lasted about 9 months before becoming obsolete, so as I'm GPU-bound, guess whose CPU I WON'T be buying for my next upgrade in a couple of years. By then Zen+ or Zen 2 will be out, and I think AMD will have closed the gap to Intel again. Anyways, I wouldn't be surprised if, in the event the 8700K isn't the runaway success Intel thinks it will be, we see Intel do a U-turn in 6 months or so and make these chips work on the Z270 platform. Better to get some money out of the suckers than none LOL.

      • Krogoth
      • 2 years ago

      It is because older Socket 1151 boards would have power-delivery issues with fully enabled Coffee Lake chips, especially if you try to overclock.

        • smilingcrow
        • 2 years ago

        That's been mentioned in dispatches, but considering that the actual power consumption is a lot less than that of an O/C'd KL, and that they don't have to offer more for CL even for an O/C, I wonder if it's an arbitrary change in how power is delivered via the pins?
        Or did they actually need to redesign the pin layout due to the change in the die layout with the extra cores?
        So it is a power issue, but as you said, it's delivery and not necessarily amount!

          • DancinJack
          • 2 years ago

          It's not "arbitrary." It has 50 percent more cores than your CPU, so power is going to need to be distributed differently. It sucks, I agree, but you shouldn't just write it off as something they did for NO reason other than to make money and piss you off.

            • smilingcrow
            • 2 years ago

            Thanks, that’s what I figured but I’m not up on chip power delivery hence my questions.
            I’m firmly in the camp that Intel should build the best platform for the majority, who as it happens don’t swap out CPUs, so I have no issue with them only keeping compatibility for 2 generations.
            These things are generally well signposted and known about so it’s not as if there are many surprises here.

            • DancinJack
            • 2 years ago

            🙂

    • DeadOfKnight
    • 2 years ago

    Intel Core 2 Quad Q6600 – January 2007
    Intel Core i7 2600k – January 2011
    Intel Core i7 8700k – October 2017

    On this curve, when can we expect Intel’s next big upgrade?
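    For fun, a toy extrapolation of that curve in Python (pure back-of-the-envelope: launch dates as approximate decimal years, and the gap between big upgrades assumed to keep growing at the same rate):

        # Q6600, 2600K, 8700K launches as decimal years (approx.)
        launches = [2007.0, 2011.0, 2017.75]
        gaps = [launches[1] - launches[0], launches[2] - launches[1]]  # 4.0 and 6.75 years
        next_gap = gaps[1] + (gaps[1] - gaps[0])  # gap grows by 2.75 years each step -> 9.5
        print(launches[2] + next_gap)  # ~2027.25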

      • chuckula
      • 2 years ago

      Half past never.

      • Prion
      • 2 years ago

      October 2021 obviously

        • HERETIC
        • 2 years ago

        Shouldn’t that be 2025?

          • DeadOfKnight
          • 2 years ago

          That sounds more like it.

      • USAFTW
      • 2 years ago

      I don't expect it to be any later than however many Coffee Lake refreshes there will be (if that makes sense). Unless Intel makes a breakthrough with single-threaded performance, I expect Intel to put out an 8-core mainstream chip not long from now. Why? Two reasons:
      1. They are going to like the sales and the positive reception from these (guess which matters more);
      2. AMD is back.
      My guess is that if AMD doesn't step up with positive IPC gains, perhaps Intel won't do an 8-core in a hurry.

        • DeadOfKnight
        • 2 years ago

        2 more cores would only be a 33% improvement over this. So much for the days of 50% or more gains.

      • Dark Pulse
      • 2 years ago

      20X6, of course.

        • venfare
        • 2 years ago

        You mean 2066?

    • Krogoth
    • 2 years ago

    This is really just a cheaper and more accessible version of the 7800X. The 800-lb gorilla woke up with a vengeance.

    The 8350K is a sleeper for the budget-minded crowd that makes the 7600K and 7700K obsolete. I bet it OCs like a champ (dark silicon, more power via 1151v2), and AMD has nothing in that price range that can compete against it in most mainstream applications.

      • chuckula
      • 2 years ago

      You forgot the part about how Coffee Lake clearly can’t overclock!

        • Krogoth
        • 2 years ago

        The fully enabled chip can OC; the issue is dealing with the thermal output.

        • IGTrading
        • 2 years ago

        Leaving the sarcasm aside, Coffee Lake's 14nm++ is actually a step back in transistor density, and Intel was forced to do that to be able to achieve higher frequencies.

        Price-wise, power-wise, and single-threading-wise, the Coffee Lake chips are clearly the best choice.

        Unfortunately, considering Intel’s past behavior, current behavior and most likely future behavior every Intel customer is BEGGING to be screwed on platform longevity and upgrade-ability.

        Also, the chip doesn't actually overclock too well compared with its stated 4.7 GHz max frequency (even if that's single-core).

        4.7 GHz -> 5 GHz is just a minor 6.4% increase, which is minuscule considering the fact that Intel changed their manufacturing process to achieve this.

        The benefit is from bringing the other cores up to speed.

        So congrats Intel and thank you for bringing in AMD Ryzen discounts!

        Considering AMD's prices are already more than 20% lower than at launch, and that they will discount the chips by an extra 10% soon, the Ryzen 7 1800X will be more affordable than the 8700K, platform longevity is assured, and the 12-nm upgrade path is guaranteed (unlike with Intel).

        Looking at HotHardware's review (not including those DAW results that topple the charts 🙂 at TR), we can see that Ryzen 7 only trails by less than 1% in 4K gaming, which will be the norm soon enough, while being better at multitasking.

        [url<]https://hothardware.com/reviews/intel-core-i7-8700k-and-core-i5-8400-coffee-lake-processor-review[/url<]

        The same less-than-1% 4K gaming difference is confirmed by TechPowerUp:

        [url<]https://www.techpowerup.com/reviews/Intel/Core_i7_8700K/18.html[/url<]

        For us, AMD remains the better choice overall unless there is a particular productivity application that requires Intel's architecture.

          • chuckula
          • 2 years ago

          [quote<]Leaving the sarcasm aside,[/quote<]

          OK, guy who registered just to hurl insults at Skylake-X!

          [quote<]Coffee Lake's 14nm++ is actually a step back in transistor density, and Intel was forced to do that to be able to achieve higher frequencies.[/quote<]

          Yes, but what about the part about "leaving sarcasm aside"? And I'm being serious, since these parts are only 30 mm^2 bigger than Kaby Lake, at about 150 mm^2. Meaning that an 8-core version of Coffee Lake with the same "step back in transistor density" -- oh, and an IGP that's literally nowhere on RyZen -- would be a 180-mm^2 chip, while even the most over-optimistic die-size estimates put RyZen at 190 mm^2 or so (it's more like 200 - 210 mm^2).

            • tsk
            • 2 years ago

            It is true that the 14nm++ process is less dense than Intel's own 14nm+ process. More specifically, the minimum transistor gate pitch increased from 70 nm to 84 nm for 14nm++.

            Edit: I'm guessing that's part of the reason they went with 14nm+ for mobile.

            • exilon
            • 2 years ago

            Gate pitch isn't the constraining factor most of the time. I don't think the core size grew at all with Coffee Lake vs. Kaby.

            Coffee Lake is ~25 mm^2 larger than Kaby for the two extra cores, which is in line with the expected increase from two more Skylake cores plus cache.

            • DancinJack
            • 2 years ago

            Kaby and Coffee are literally the same cores. There are no changes. Just the process + two cores.

            • exilon
            • 2 years ago

            Yes, but the claim above was that 14nm++ has lower density due to higher gate pitch, and I provided some evidence that gate pitch increase had no material impact to die size.

            • tsk
            • 2 years ago

            It might be minor, but it technically makes the process less dense.
            From Dr. Cutress of AnandTech:

            "Increased gate pitch moves transistors further apart, forcing a lower current density. This allows for higher leakage transistors, meaning higher peak power and higher frequency at the expense of die area and idle power."

            "With the silicon floor plan, we can calculate that the CPU cores (plus cache) account for 47.3% of the die, or 71.35 mm2. Divided by six gives a value of 11.9 mm2 per core, which means that it takes 23.8 mm2 of die area for two cores. Out of the 26mm2 increase then, 91.5% of it is for the CPU area, and the rest is likely accounting for the change in the gate pitch across the whole processor."
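            The quoted arithmetic checks out, for what it's worth; a quick sanity check in Python (assuming the ~151 mm^2 total die estimate that comes up below):

                die_area = 151.0               # mm^2, approximate Coffee Lake-S die
                cores_area = die_area * 0.473  # CPU cores + cache share -> ~71.4 mm^2
                per_core = cores_area / 6      # ~11.9 mm^2 per core
                print(2 * per_core)            # ~23.8 mm^2 for the two extra cores
                print(2 * per_core / 26)       # ~0.915 -> ~91.5% of the 26 mm^2 increase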

            • psuedonymous
            • 2 years ago

            [quote<]Increased gate pitch moves transistors further apart, forcing a lower current density.[/quote<]

            [i<]Only[/i<] if the previous limiting factor in transistor spacing was the gate pitch, rather than another limit (of which there are many, both from patterning scale and from physical issues with doping things so close together).

            Anandtech also does not account for extra uncore parts, including the additions for UHD output (MOAR RAMDACs!), a little extra video routing logic, tweaks to the GPU despite the almost-the-same name (e.g. higher OGL FL support), and reacquiring TXT (lost on the 7700K).

            • exilon
            • 2 years ago

            Also Dr. Cutress of Anandtech:
            “The following calculations are built on assumptions and contain a margin of error”

            ~2.2mm^2 is not a very big difference for pixel counting a ~151mm^2 die.

      • smilingcrow
      • 2 years ago

      "The 8350K is a sleeper for the budget-minded crowd that makes the 7600K and 7700K obsolete."

      The 7600K, definitely, but the 7700K plays to a different crowd, and being on a different platform also makes it less of a competitor.
      The 7700K has been around £275-280 in the last week, which is about £110 more than the 8350K.
      The i3-8100 is two-thirds of the price of the 8350K, which is itself only £5 less than the i5-8400, so unless you need its 5+ GHz potential, the six-core makes more sense.

      Too much choice almost! 🙂
      Oh yes, and your favourite, the i7-7740X is around £235. 😉

        • Krogoth
        • 2 years ago

        HT is definitely not worth the difference for the budget-minded crowd. An aggressively OC'd 8350K will keep up with, if not beat, the 7700K at stock in the majority of mainstream applications.

        If you need the extra threads for your workload, then the 8700 and 8700K make far more sense. For the budget-minded crowd, the 1600 and 1600X are more attractive choices for heavily threaded workloads.

        The i3-8100 isn't that attractive of a buy next to the i3-8350K, because it is half-locked and only operates at 3.6 GHz versus 4.0 GHz on the i3-8350K.

        Kaby Lake-X chips are now completely worthless aside from being a stop-gap for a "cheap" future Skylake-X drop-in upgrade.

          • smilingcrow
          • 2 years ago

          But what about all those gamers with an Intel 4C 4 Life tattoo?

    • w76
    • 2 years ago

    In future years, people will look back at this launch as Intel's greatest mistake: the decisive launch that started AMD's final descent and sent Intel head-long on a trajectory to monopoly status and the legal nightmare that will entail. Intel should've launched these products but priced them significantly higher across the board.

    Why? AMD has yet to post a positive quarter from Ryzen, and they report soon, but Q3 will show either the slightest of profits or still a small loss. The problem is, they have significant amounts of debt maturing in the coming years, and they've got many years of painful losses to make up. On top of those strains, AMD has to maintain R&D for both GPUs and CPUs, or they might as well pack up and go home now.

    So the smart move for Intel would've been to play Lexus to the AMD Toyota, cede some market share, and let AMD take its first deep breath since Sandy Bridge. A sustainable AMD would've been in everyone's long-term best interests.

    Instead, they’ve gone for the jugular. AMD can’t compete on the performance of these parts, so they must reduce prices, but now they’re the “budget” brand again, and reducing prices on steady volume will just mean a return to deeper losses.

    So, just my 2c, but I don't see how, on the current trajectory (unless Intel relents), AMD survives through 2020. Big bond maturities start in 2019, and who would let them roll those into new bonds when they rarely ever post a profitable quarter? There's only a handful of product cycles left between now and then, and unless Intel changes track soon on pricing, I don't see how AMD pulls it off.

    What I don’t get is why Intel would submit itself to a high probability of anti-trust litigation in the future. Even if they think they can buy off America’s political system, they’re an American company and we all know what the EU does to any US company they suspect can be milked for a few billion.

    Unless maybe they think they can hold ARM chips out as competition? Hmm.

      • Captain Ned
      • 2 years ago

      You've obviously never heard of a derivative suit. Under US (mainly Delaware, 'cause that's where all the bigs are incorporated) corporate law, management may be sued by the stockholders for taking actions that would reduce profits (shareholder value) or for not taking actions that would increase/maximize profits. The other part about these derivative suits is that what may happen 3 years from now isn't relevant. If Intel forfeited possible revenue in Q4 '17 through a pricing strategy designed to keep AMD in the market, they'd likely get sued (and more than likely lose) after the fourth-quarter earnings call (somewhere in Feb/March '18).

    • Unknown-Error
    • 2 years ago

    This is amazing. Record-breaking single-threaded performance, Ryzen 7-like multithreaded performance (with two fewer cores!), respectable power consumption, and all for a little over USD 350? Am I dreaming?! [b<]chuckula[/b<], tell me I am dreaming!

    Intel has a winner! Ryzen 7 has suddenly become obsolete. There is no point spending USD 400+ for the 1800X or USD 300+ for the 1700X; it's just not worth it any more. With the i5-8xx0 and i3-8xx0 on their way, even Ryzen 3 & 5 could become unappetizing. And don't forget, with Coffee Lake you get an IGP. AMD won't have a desktop Raven Ridge until Q2 2018.

    The one caveat I see is that... isn't Intel cannibalizing some of its X299-based systems? Everything below the 7900X suddenly looks unappetizing. As I said before, the 7980XE and 7960X are dumb considering their prices, but the 7900X, 7920X, and 7940X are all worth it IMHO. But thanks to CFL, the 7820X, 7800X, 7740X, and 7640X don't look so good, especially considering X299 prices.

      • chuckula
      • 2 years ago

      [quote<]chuckula, tell me I am dreaming![/quote<]

      You're dreaming! And don't call an Intel product "amazing"! Intel's too corporate for "amazing"!

      • Krogoth
      • 2 years ago

      Hold onto your horses. It has the same IPC as Skylake; it is just a "cheaper" 7800X with an IGP, but only 16 PCIe 3.0 lanes and dual-channel DDR4.

        • chuckula
        • 2 years ago

        Phew.. I was beginning to [i<]panic[/i<] that Krogoth wasn't going to show up to tell us how these failed reject parts are unimpressive. I knew he'd be pleased with the failure of Coffee Lake when yet again the non-gaming performance delta between the 7700K and the 1800X is smaller than between the 1800X and the 8700K.

          • NeelyCam
          • 2 years ago

          It’s like watching the two guys on the [url=https://www.youtube.com/watch?v=14njUwJUg1I<]Muppet Show balcony[/url<]

            • K-L-Waster
            • 2 years ago

            I was thinking more like Hound Dog and Foghorn Leghorn….

        • srg86
        • 2 years ago

        Sounds good to me. I'd have this over a Ryzen 7 in a heartbeat.

          • tay
          • 2 years ago

          Yeah same. For my gaming focused needs this is way better.

        • Unknown-Error
        • 2 years ago

        I said "single-threaded performance", not IPC. 4.7 GHz boost right out of the box while maintaining decent power draw; the 7800X cannot boost to 4.7 GHz despite its 140-W TDP. All Ryzen CPUs other than Threadripper are also dual-channel, but none of them have an IGP. With an 8700K you get the single-threaded performance of a 7700K with the multithreaded performance of a Ryzen 7 1700 (at times even at 1700X/1800X levels), and in addition to that it has an IGP. Desktop Raven Ridge won't be out until Q2 2018. So no need to hold my horses. 😀

          • Krogoth
          • 2 years ago

          The 7800X can be effortlessly OC'ed to the same speed, though.

          Turbo boosting is just “arm-chair” overclocking on easy mode.

      • jihadjoe
      • 2 years ago

      Like I said in the early Skylake-X reviews, Intel shot itself in the foot by implementing the mesh + small L3 on low-core-count LGA 2066 chips. They designed that stuff for greater multi-core efficiency, but it really wasn't necessary on the lower-core-count chips like the 7800X/7820X/7900X, and it adversely affected their performance.

      Should’ve just stuck to the tried and tested ring, while keeping the mesh for their HCC and XCC designs where it could really shine.

      • Klimax
      • 2 years ago

      Those low-core-count SKL-X parts are sort of a placeholder. Also good for verifying the functionality of a board.

      • Airmantharp
      • 2 years ago

      Cannibalizing X299 systems?

      I’d say that they might be cannibalizing some X299 sales, but realistically the number of people that need/want X299 but not multi-threaded CPU prowess wasn’t that great to begin with. Only the crossover SKUs would really be ‘threatened’, but given that they weren’t out for very long and that a six-core 1151 release was imminent throughout the entirety of their retail availability, I’d posit that anyone opting for X299 did so because they needed more platform support than 1151 provides and thus were never going to buy Coffee Lake anyway.

    • Takeshi7
    • 2 years ago

    Why don't motherboard companies use PLX chips any more? The fact that when you use certain PCIe slots, other parts of the motherboard are disabled is stupid. We solved this issue a long time ago with PLX chips. Granted, they'll be bottlenecked, but at least you still have access to all of the connectivity. It's sad when my first-gen 1156 motherboard can use all of its SATA ports and PCIe slots at the same time, but a brand-new top-of-the-line Gigabyte board can't.

      • Krogoth
      • 2 years ago

      Bottom line: cost. A PLX chip and the tracing for it cost extra $$$. You only see them used on higher-tier SKUs.

        • Takeshi7
        • 2 years ago

        Hmm, I would have thought Gigabyte’s top of the line Z370 motherboard would qualify as “higher tier SKUs” .

        • the
        • 2 years ago

        It doesn't help that PLX has changed hands, and the new owner increased the price of those chips. Someone got greedy.

          • MOSFET
          • 2 years ago

          It's been a while (it could have been 4, 5, or 6 years ago), but I've seen a site in relatively modern times review essentially same-spec boards of the same generation from the same manufacturer, where one had PLX chip(s) and the other didn't. I don't remember exactly, but each PLX was adding significantly to the cost. Might have been about $80 per chip.

        • psuedonymous
        • 2 years ago

        And on top of that, PCIe standards are getting more and more stringent in order to push the clock rate up. PCIe 3.0 routing is already a copper-plated bitch (literally, in the case of stripline routing) to path through a board. Adding in PLX chips means you have effectively [i<]tripled[/i<] the number of complex routing tasks you need to deal with (i.e. instead of point A to point B, you are now going point A to PLX, PLX to point B, and PLX to point C).

        Not only is that triple your engineering workload, but your board literally may not have enough area to support that many traces while complying with routing requirements. PCIe 4.0 then throws everything into a cocked hat with even more stringent requirements. OCuLink can't come soon enough.

    • djayjp
    • 2 years ago

    TR, could you compare against Intel's other high-end CPUs as well, please? Oh, and the 8400 too 🙂

    • Voldenuit
    • 2 years ago

    Are there any good Non-linear Editing video benchmarks out there?

    Video encoding tasks might give a good indicator of your video render-out times, but I haven't seen any tests of how responsive a complex layer + filter + effects workload is in Premiere or Vegas Pro before the actual render-out, something which affects QOL for the user.

    You can go grab a coffee while your video is rendering out, but if the system is slow or unresponsive when you’re editing and previewing, it’ll make you tear your hair out.

      • Firestarter
      • 2 years ago

      In general, interactive workloads are more single-thread-dependent than batch workloads, so you'd expect Intel's advantage in that area to have a slight impact on that use case. But without a specific program to test, that's just a giant assumption without anything to back it up.

    • srg86
    • 2 years ago

    Fantastic CPU, and a better performer than I was expecting it to be. I was thinking this would just be faster in multi-threaded tests, especially with the lower base clocks.

    IMHO, the Coffee Lake line is exactly what I'd buy if I were in the market for a new system, without a doubt. Even for just an i3.

    Here’s hoping Ice Lake Desktop parts have 8 cores plus iGPU!

    • jackbomb
    • 2 years ago

    Found it interesting that the overclocked Sandy Bridge couldn’t even match the stock Haswell. I swear that wasn’t the case a few years ago when Haswell first launched.

      • stefem
      • 2 years ago

      Faster GPUs, richer games… consoles really clip the wings of PC

        • K-L-Waster
        • 2 years ago

        Also, that’s the Devil’s Canyon Haswell, not the launch Haswell. The 4790K was faster than the 4770K.

          • Voldenuit
          • 2 years ago

          [quote<] Also, that's the Devil's Canyon Haswell, not the launch Haswell. The 4790K was faster than the 4770K.[/quote<]

          Stock speeds were faster: Turbo to 4.4 GHz vs. 3.9, IIRC. But despite the fancy NGPTIM compound, overclocking results on the 4790K weren't better than the 4770K's. I looked at an online database of user clocks (sample size was in the couple of hundreds, IIRC), and the majority of 4790K users were getting 4.6 or 4.7 GHz as their highest overclocks.

          Running a de-lidded 4790K on a 240-mm rad at 4.6 GHz over here.

            • K-L-Waster
            • 2 years ago

            All fair points — but OP was asking about OC 2600K vs stock Haswell, expecting the 2600K to be faster, so the stock clock speed bump in 4790K is almost certainly the cause.

            • Voldenuit
            • 2 years ago

            2600Ks were also hitting 4.6-4.7 GHz fairly regularly, though. But there were also IPC gains from Sandy to Haswell (I think it was cumulatively around 12-15%?).

            • jihadjoe
            • 2 years ago

            Yeah, but the OC'd 2600K in this example is only at 4.6 GHz. The 4790K does 4.4 GHz stock, so there's not much in it; add the small incremental IPC gains and it's a wash.

    • chuckula
    • 2 years ago

    A few deeper takeaways from the benchmarks: I'm a little surprised at how much the 8700K closes the gap with the higher-core-count RyZens in the multithreaded benchmarks, where 8 cores is still going to be an advantage no matter how you cut it. The compiler benchmark in particular ended up being stronger than I would have thought.

    The other interesting point was the scaling factor in the streaming game tests. It's not that Coffee Lake is faster in some games in an absolute sense; it's that the performance hit of turning on streaming was much less pronounced (about a 1.7-millisecond hit in the 99th percentile for the stock chip) than what we saw with RyZen. Normally I'd expect better scaling out of the 8-core part, simply because having the extra compute resources available to handle streaming should put the 8-core parts at an advantage for scalability, even if they are somewhat slower when only the game itself is running.

    Thanks again for the excellent review as always.

      • Krogoth
      • 2 years ago

      The clock-speed and IPC advantage of the "Skylake family" is the reason. The 8700K performs like an aggressively clocked 7800X, as expected. The four-core Skylake chips were on the tails of the Ryzen 7s; the six-core version should at least be on par.

    • Anovoca
    • 2 years ago

    So does this mean Haswell is the new Sandy Bridge? (gently pats his Z87 PC with a defiantly proud look on his face)

      • drfish
      • 2 years ago

      Seems more likely that Coffee Lake is the new Sandy Bridge.

        • tsk
        • 2 years ago

        I think you might be right. Intel themselves said that 14nm++ offers better performance than 10nm and 10nm+. It'll be interesting to see if ninth-gen Ice Lake will offer any desktop products.

        • Anovoca
        • 2 years ago

        Coffee Lake has to be at least two generations old for that badge of honor to apply; otherwise, how can you defiantly refuse to upgrade a system that is currently the newest gen?

        • IGTrading
        • 2 years ago

        Actually, Coffee Lake is far from it.

        Sandy Bridge was upgrade-able, while Coffee Lake most likely will offer no upgrade path.

        Sandy Bridge brought serious air overclocking, while Coffee Lake barely overclocks 6% over its stated max frequency.

        It is impossible to correctly tune your system with Coffee Lake:

        [url<]https://www.extremetech.com/computing/257044-intel-will-no-longer-provide-per-core-turbo-frequencies-making-motherboard-tuning-impossible[/url<]

        Sandy Bridge was seriously faster than AMD's solutions at the time in all benchmarks, while Coffee Lake, though faster in single-threaded work and medium-resolution gaming, is just 1% away from Ryzen 7 in 4K gaming:

        [url<]https://tpucdn.com/reviews/Intel/Core_i7_8700K/images/perfrel_3840_2160.png[/url<]

        So no, Coffee Lake is a decent chip (despite being extremely hot), but it is no Sandy Bridge. Also, the TIM used in Coffee Lake will get hotter by the day (over 25% more thermal resistance after 2000 cycles).

          • chuckula
          • 2 years ago

          [quote<]It is impossible to correctly tune your system with Coffee Lake[/quote<]

          For some reason Kampman was able to do it. I think the man needs some more respect around here!

      • USAFTW
      • 2 years ago

      I’ll probably have to replace my trusty old 4670K…

      when it dies.

      • NovusBogus
      • 2 years ago

      Sandy Bridge is still Sandy Bridge, and always will be. The fact that we’re still talking about a six year old CPU as a somewhat serious contender in performance tests says it all. A decade from now, when Haswell gets its “no longer viable” review someone is still going to post some 2600K results and say “silly fools, this !#$% here is older than your high school kid and it still plays Crysis 5!”

    • MHLoppy
    • 2 years ago

    Just a heads-up with your OBS testing: my understanding is that the x264 CPU preset "veryfast" is [i<]overwhelmingly[/i<] used by streamers utilising OBS, and the OBS documentation also suggests generally not going slower than this setting when streaming unless you're using an exotic setup such as a capture card (https://jp9000.github.io/OBS/general/technical.html).

    While the testing you've done is still of some value, I'd be very interested in seeing whether the "veryfast" preset makes a meaningful difference in the 8700K's lead over Ryzen parts or not.

    “x264 CPU Preset

    This is an internal quality / time tradeoff scale in x264. A faster preset means less time will be spent encoding, but the image quality will be worse. A slower preset will increase CPU usage, but the image quality will be slightly better. You will almost always get better quality improvements from either a higher FPS, higher resolution or an increased bitrate. [b<]Changing this setting to improve quality is not recommended as the improvements are very minor and come at a great CPU cost. Going faster than veryfast will also disable many x264 optimizations and your stream quality will suffer.[/b<] If you have constant CPU usage (eg a capture card) and know for certain you have spare CPU resources, you can set a slower preset for a very minor quality improvement. If you notice you have low FPS or lagged frames, you may have set your preset too slow. Recommended: veryfast"

      • RAGEPRO
      • 2 years ago

      You're right that "veryfast" is overwhelmingly used. A modern i5 or i7 can usually handle the "veryfast" preset in conjunction with a game without mangling game performance. However, despite what the OBS FAQ says, as a long-time user of the app (and of x264 standalone) I can assure you that there are very significant gains in image quality to be had in busy scenes when stepping down to slower presets. It's simply that you need to have enough CPU to handle it.

      TL;DR: The main issue is simply that, as a benchmark, x264 veryfast isn't all that interesting for a modern Core i7.
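      If anyone wants to quantify that tradeoff themselves, a rough harness like this works (just a sketch; it assumes ffmpeg built with libx264 is on your PATH and that clip.mp4 is a local test recording -- both my assumptions, not anything from the review):

          import subprocess, time

          # Encode the same clip at a fixed bitrate with several x264 presets
          # and report how long each encode takes.
          for preset in ("veryfast", "faster", "medium"):
              start = time.time()
              subprocess.run(
                  ["ffmpeg", "-y", "-i", "clip.mp4",
                   "-c:v", "libx264", "-preset", preset, "-b:v", "6000k",
                   "out_%s.mp4" % preset],
                  check=True,
              )
              print("%s: %.1f s" % (preset, time.time() - start))

      Comparing the resulting files side by side in busy scenes makes the quality differences between presets much easier to spot.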

        • MHLoppy
        • 2 years ago

        That's fair. The documentation is probably getting a little dated, particularly with Ryzen / Coffee Lake levels of performance.

        After reading your comment I checked out this comparison video:

        [url<]https://www.youtube.com/watch?v=pOeGGQg9rQo[/url<]

        The difference is subtle, and it's difficult to pinpoint any one thing that's noticeably better or worse, but when looking between the three of them, those subtle differences do add up to an image that looks closer to "this is the game" rather than "this is a stream of the game".

    • WaltC
    • 2 years ago

    Intel: The CPU for doddering old men–dig those prices–dig the non-availability–dig the terrible PCIe lane support, ugh. Dig a hole and drop Intel into it…;) Ugh. (Thus ends my scientifically based commentary.)

    • thx1138r
    • 2 years ago

    Has anybody seen any reviews of the iGPU on the new Core line? Got any links? I was wondering whether there were any improvements or whether they're sticking to the status quo. I'm guessing from the lack of chatter that it's either no change or something minor. This is kinda disappointing, as Intel had made some big gains in GPU power for a number of years, but lately its iGPUs have been stagnating.

    Also, before anyone gets all up in my face: I know the iGPU is not of particular importance when it comes to this particular chip; it's further down the stack that people are much more likely to attempt some light gaming on their iGPUs.

      • K-L-Waster
      • 2 years ago

      It's a reasonable question, but it is a little like asking what the cup holders are like in the new Bugatti…

        • Chrispy_
        • 2 years ago

        It’s irrelevant for the desktop 8700K model, but it’s perhaps the single most important question for the upcoming Coffee Lake mobile processors.

          • tsk
          • 2 years ago

          There won't be any mobile Coffee Lake; Kaby Lake Refresh takes that spot.

        • thx1138r
        • 2 years ago

        My thinking is that because the Bugatti is going to use the same cup holders as the VW Passat, I can get a preview of what the Passat cup holders are like right now.

        And Bugatti? Sure, it's quick, and I don't really want to get into another car-CPU analogy discussion, but wouldn't the Core i9-7980XE be the Bugatti? I see the 8700K as being more of a Porsche Panamera V6.

      • tsk
      • 2 years ago

      [url<]https://www.guru3d.com/articles_pages/intel_core_i7_8700k_processor_review,26.html[/url<]

        • thx1138r
        • 2 years ago

          Thanks. Shame it's just 3DMark, but it looks like just a minor improvement.

          • Chrispy_
          • 2 years ago

          I don’t think there are any architectural differences or even any clockspeed changes on the IGP, so I’m guessing that extra 4MB of cache means that more of it is free for the IGP to use.

            • thx1138r
            • 2 years ago

            There was a slight iGPU clock-speed bump. Comparing

            [url<]https://ark.intel.com/products/97129/Intel-Core-i7-7700K-Processor-8M-Cache-up-to-4_50-GHz[/url<]

            with

            [url<]https://ark.intel.com/products/126684/Intel-Core-i7-8700K-Processor-12M-Cache-up-to-4_70-GHz[/url<]

            I can see the 8700K iGPU is 0.05 GHz faster. The only other change was that instead of the old-fashioned "Intel® HD Graphics 630" the new chip has "Intel® UHD Graphics 630". Note the extra "U"!

            • Chrispy_
            • 2 years ago

            The U stands for “U N00b, stop gaming on IGPs”

    • Chrispy_
    • 2 years ago

    Here's something interesting: in truly multithreaded workloads, the 5-GHz 8700K almost perfectly matched the 4-GHz 1700.

    That's 5 GHz x 6 cores of Intel matching 4 GHz x 8 cores of AMD, which puts Intel's current mainstream IPC per core at roughly 6.7% higher than Ryzen's, with the obvious issue that Ryzen simply won't be able to match Intel's clock speeds at the high end.

    The real question is whether the gamer's performance-per-dollar darling (the Ryzen 5 1600 with a mild overclock) is still better than the equivalent $200 Intel part. I'm sure that many people will be eagerly anticipating the Ryzen 5 1600 against the i5-8400 in the coming review, and the 5-GHz Intel parts are very definitely out of reach at this budget.

    As tempting as the expensive Z370/cooler/8700K option is for high-end gamers, most people care more about their GPU budget than their CPU budget, because without a 1080 Ti at low-resolution testing, most games are still GPU-bound. With the extra $150 on the i7, a $25 Z-series motherboard premium, and another $50+ on a decent cooler for the 8700K, that's the difference between a GTX 1060 and a GTX 1080. No gamer with budget concerns is going to pass up a GTX 1080 in that scenario, especially since the 8700K would be wasted on a GTX 1060 in the first place.

    So, great review, Jeff; I can’t wait for your i5 followup 🙂

    [i<]edit: bad math, corrected[/i<]

      • just brew it!
      • 2 years ago

      Edit: Looks like you’ve corrected the math errors. Cheers!

      I think your calculations are (still) off. Looks like only a 7% IPC advantage to me, since the Intel chip is being clocked 25% higher?

      Of course that headroom will make the 8700K an interesting option for people who plan to OC.

      Edit: Look at it this way — aggregating across all cores, the 8700K OC has 30 GHz worth of compute power, whereas the 1700X has 32 GHz. The fact that they turn in very similar multi-threaded results implies that the IPC is also pretty close.

        • Chrispy_
        • 2 years ago

        Oh dear, I should get another coffee.

        (4/5)*(8/6) is indeed ~1.067, or about 6.7%

        That’s actually really encouraging for AMD, but Intel clearly have the gaming advantage because it’s a type of workload that is still dominated by single-thread performance.
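        As a sanity check, the same back-of-the-envelope math in Python (assuming perfect scaling across cores, which real workloads only approximate):

            intel = 5.0 * 6  # GHz x cores = 30 "aggregate GHz"
            amd = 4.0 * 8    # GHz x cores = 32 "aggregate GHz"
            # Equal measured throughput implies Intel does ~6.7% more per clock:
            print(amd / intel - 1)  # 0.0667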

          • just brew it!
          • 2 years ago

          Yup, and like I said it also appears to have an edge in the OCability department.

          Bad news for AMD, but the good news is that real competition finally seems to have returned to the desktop CPU market. Let's hope AMD can stay in the game this time; Intel and AMD trading blows with each product release is a best-case scenario for enthusiasts.

            • BurntMyBacon
            • 2 years ago

            I was just thinking about how, from a performance perspective, this new chip doesn’t really do much to distinguish itself from my OCed 6700K in low thread count workloads, but your comment about Coffee Lake’s overclockability vs AMD likely applies (to a lesser extent) vs my Skylake (and possibly Kaby Lake) as well. Admittedly, I have a good Skylake sample to match the OC clock rate of the Coffee Lake in this article, but I also have a fairly monstrous custom water loop and I’m certain I could get a better overclock out of the Coffee Lake with my water loop than can be attained by a standard AIO sealed setup like what was used here. Naturally, this all assumes that the chip in the article is a good representative of common retail versions.

      • ermo
      • 2 years ago

      So what you're saying is that if ~+6.7% is the target, then Zen+ at 12 nm needs to be able to clock 5-7% higher at the same wattage to be competitive (in addition to whatever small efficiency tweaks AMD manages to slip in)?

      4.5/4.2 =~ 1.071 =~ +7.1%
      4.4/4.2 * 1.02 (with +2% IPC) =~ 1.048 * 1.02 =~ 1.069 =~ +6.9%
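      Sanity-checking my own numbers in Python (a toy calculation using the 4.2-GHz baseline above):

          ipc_gap = 32.0 / 30.0        # ~1.067, from the cores-and-clocks math upthread
          print(4.2 * ipc_gap)         # ~4.48 GHz needed from Zen+ at equal IPC
          print(4.2 * ipc_gap / 1.02)  # ~4.39 GHz if Zen+ also gains ~2% IPC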

        • Chrispy_
        • 2 years ago

        Not really. The calculations here are simply cores and clocks in highly threaded workloads.

        What I'm saying is that [i<]even when[/i<] single-threaded performance is ignored, Zen+ needs a ~7% boost to remain competitive.

        The sad news is that many scenarios are still bogged down on a single thread, so the IPC deficit of ~7% multiplies with the clock deficit of 25% (1.067 x 1.25 = ~1.33), meaning that AMD has to find ~33% more single-threaded performance to remain competitive in single-threaded workloads. 12 nm may deliver higher clocks, which would be good for performance, and one would imagine that AMD can get some IPC improvements in there as well. Probably not enough to close the ~33% single-threaded gap, but maybe enough to make Zen+ a more appealing option than Intel for multithreaded workloads.

        Performance per watt isn't something I've even considered, but that absolutely goes out the window with the i7 and Ryzen at 5-GHz and 4-GHz overclocks respectively.

      • HERETIC
      • 2 years ago

      Don't hold your breath -
      i5s get treated like the red-headed stepchild here……………………………………..

      • Stochastic
      • 2 years ago

      Personally, I think it makes more sense to spend more money on the CPU than the GPU. While it's true that most games will be GPU-bottlenecked with modern hardware, it is much easier and cheaper to upgrade a GPU than a CPU. A significant CPU upgrade will require a new CPU, motherboard, and RAM (not to mention the hassle of having to effectively rebuild your PC and reinstall Windows). Upgrading a GPU is comparatively much cheaper and easier. Also, GPUs are advancing at a faster rate than CPUs. So it makes sense to splurge on the platform to future-proof your system, even if that means having to compromise on the GPU a bit.

      • Klimax
      • 2 years ago

      Just beware that you have highly mixed loads in your calculations: integer versus SSE versus SSE2 versus SSEx (a.k.a. the sometimes-useful functions) versus AVX(2).

      A different mix of workloads can cause severe shifting. Get more older SSE loads and Ryzen can show higher IPC, because AMD traded full-width 256-bit AVX execution for more 128-bit units. Shift toward AVX loads, or anything L1-read-bottlenecked, and Ryzen loses quickly and disappears from the radar.

    • NTMBK
    • 2 years ago

    [i<]Nice.[/i<] Glad AMD have put the fear into Intel, and made them work hard again- the results are fantastic.

      • chuckula
      • 2 years ago

      I can’t wait to see what AMD did for Cannon Lake!

    • rudimentary_lathe
    • 2 years ago

    Thanks for another great review, Jeff. My takeaways:

    1. That’s an impressive jump in performance from the 4790K on the frame times. Add in the 50% jump in core counts and there are two compelling reasons to upgrade, something I haven’t been able to say about Intel in some time.

    2. The previous-generation Intel CPUs like the 4790K and the new Ryzen CPUs are still more than good enough for the average consumer. Only the enthusiast and prosumer segments will use the extra performance of these new chips.

    3. AMD will have to make its recent sale prices the regular prices. Great for the consumer, bad for AMD. (And potentially great for me personally – been thinking about building a cheap home server on Ryzen).

    4. Given how easily 5 GHz came on the hyper-threaded part, I can't wait to see what the 8600K is capable of. I have no need whatsoever for a new CPU, but I'll be sorely tempted to upgrade just so I can have the fun of trying to break 5 GHz. Tell me you tried to break 5 GHz?!

    • Ikepuska
    • 2 years ago

    Arg. Or maybe AAAAARRRRRGGGGGG.

    I keep getting into the bad habit of saying, well there’s only been incremental improvements for a while guess I’ll bite the bullet, and then pulling the trigger early.

    I lived with a system with dual Xeon E5462s for YEARS for gaming, along with a laptop. I then bought an i5-4670K for $189 during [s<]2012[/s<] ETA: correction, 2013, but got promptly deployed. I ended up deciding to get a Z97 ITX board and using it for a living-room PC in 2015 after I got back from a second deployment, since otherwise it would just be gathering dust in a box, and picked up a launch-day 1060 to go with it later.

    But since I ended up using that for my living-room PC, I bought a 6700K a little after, on sale, when I happened to be travelling to an area with a Microcenter. I then deployed AGAIN, so I got back in July of this year, said screw it, and bought a Gigabyte Z270 board that was on sale in one of the roundups here on TR. And now I can't even upgrade to this......

    DARN IT! I wouldn't mind using the 6700K for a few years and then upgrading to an 8700K off eBay or something a while down the line; I don't mind not being cutting-edge. BUT I'm really upset that a board I JUST BOUGHT in JULY isn't compatible.

    ETA: I figured hey, the 7700K had just released THIS year, so I hope I can be forgiven for figuring that the Z270 platform wouldn't be obsolete this early in its lifecycle.

      • derFunkenstein
      • 2 years ago

      Yep, that’s a huge concern. It’s a shame that Intel couldn’t figure out how to get these things working on older boards. I’m surprised to read that they’re not even keyed any differently.

      • tay
      • 2 years ago

      Could you just stop buying random shit for minor improvements? Is that your fault or Intel's?

        • Ikepuska
        • 2 years ago

        I'm not buying random stuff for minor improvements. I built two systems that are used for different purposes, and I plan to keep both just as long as my Xeon system, which is still in use, by the way. My complaint is not at all with the entirely known performance of what I got; it's with the platform being made obsolete arbitrarily. I'm not by any stretch the only person salty about the Z270-to-Z370 issue.

          • tay
          • 2 years ago

          Well, the issue isn't Intel greed but the fact that the socket has a LOT more power pins. Check the AnandTech diagrams for details. Since it's not socket-compatible, they might as well up the chipset number.

    • TwoEars
    • 2 years ago

    That is one hell of a CPU. High IPC, high clocks, and 12 threads. And look at those frame times. If you're looking for a gaming CPU, this is the new king, no doubt. Thanks for the review, Jeff!

    • ultima_trev
    • 2 years ago

    Within the same year, we have witnessed AMD ryzen and fallzen.

    • ronch
    • 2 years ago

    I think the 1600X should’ve been here, just for an apples-to-apples comparison.

    Anyway, the 800lb. gorilla is awake again so David had better float like a butterfly and sting like a bee. More specifically, they’d better crank those clocks and tweak the core like crazy to improve IPC.

    • Pzenarch
    • 2 years ago

    On the ‘Time spent beyond’ charts, could we get a total test run time for comparison? It’s useful to be able to see the differences between the platforms, but I think it’d also be useful to have a figure to put those differences into context … or maybe convert it into a ‘time spent beyond X per minute’ if the test running times vary.

      • Jeff Kampman
      • 2 years ago

      Always a one-minute test run; I believe I introduce that on the first page of gameplay results.

        • Pzenarch
        • 2 years ago

        In that case, I apologise, I must have missed that 😛

    • cpucrust
    • 2 years ago

    I've been looking forward so much to TR's review of Coffee Lake, as I've been on the threshold of buying either a Ryzen system or an i7 system. I've not finished reading the review yet, but great job so far.

    Note: just noticed the Grand Theft Auto V "Time Spent Beyond 16.7 ms" chart is kind of strange, as it lists the i7-3770K six times.

    Also, the Hitman (DX12) "Time Spent Beyond 16.7 ms" result is really interesting: the i7-7700K is the worst of the bunch at 79 ms. I wonder what is happening there…

    Again, thanks for the review

      • Jeff Kampman
      • 2 years ago

      Fixed, sorry.

    • Firestarter
    • 2 years ago

    welp, time to put the ol’ i5-2500K out to pasture I guess

      • tritonus
      • 2 years ago

      Yep, that good ole 2500K @4.3GHz became obsolete today. Period. No more excuses, a new build is a-coming!

        • Firestarter
        • 2 years ago

        yeah but with which graphics card? I’d feel kinda dirty buying an Nvidia card and Gsync monitor

          • Redocbew
          • 2 years ago

          What does that have to do with replacing a 2500k?

            • Firestarter
            • 2 years ago

            My GPU is just as old: an AMD HD 7950. I could get an i7-8700K and keep my old GPU, but that'd just be all kinds of silly.

            • DeadOfKnight
            • 2 years ago

            Then get a Vega GPU. Just because it’s not the top performer doesn’t mean it sucks.

      • jihadjoe
      • 2 years ago

      Didn’t you see those Hitman results? OC Sandy is just as good as OC Ryzen!
      You can keep that chip till Ice Lake at least.

      • NeelyCam
      • 2 years ago

      “The King Is Dead!

      Long Live The King!”

    • derFunkenstein
    • 2 years ago

    While I’ve had about 7 months with my Ryzen system, the Core i7-8700K is very sexy and part of me wishes I’d waited. That’s A-OK, though, since the difference in gaming doesn’t apply to my pedestrian 60Hz display and the difference in productivity over my OC’d Ryzen 7 1700 isn’t huge.

      • drfish
      • 2 years ago

      Indeed. I feel the same way about my 7700K. I’m actually glad that I haven’t been bitten by the high-refresh-rate bug yet.

        • derFunkenstein
        • 2 years ago

        I just refuse to look at those very fast displays because I’m afraid that once I do I’ll have to have one.

          • TwoEars
          • 2 years ago

          Be afraid, be very afraid. 2560×1440@144 is glorious. If you've got some spare cash at some point... it's one heck of a computer upgrade. And you don't even need 144 Hz; just going from 60 Hz to 80 Hz is quite noticeable, and 100 Hz is all you need really. 144+ Hz is just pure indulgence.

            • MOSFET
            • 2 years ago

            I jumped on that Nixeus deal Jeff posted not long ago, and wow, I’m convinced about 1440p and the high refresh rate (even with the wrong video card for VRR on it.) Even for desktop work, I can’t even look at my 1080p monitors anymore. They look like monstrous netbook panels now.

            • Chrispy_
            • 2 years ago

            As a 2560×1440@144 member, I agree with you.

            I'm also wondering whether 2560×1440@75 is 90% of the way there. I get the feeling that everyone has a different set of eyeballs and sees things slightly differently, but the difference between 60 and 144 Hz is obvious. The difference between 75 Hz and 144 Hz is far less obvious, but still discernible side by side.

            I have the distinct feeling that 85Hz or 100Hz is probably the upper end of the sweet spot, and as long as that refresh rate is mixed with VRR, there’s little point to going higher than that. I could certainly never tell the difference between 120Hz and 144Hz, and I’ve apparently seen 240Hz screens without even realising it.

            I had the Acer Z1 (144H) alongside an LG ultrawide at one point in time, and I’d overclocked the LG to 80Hz so that I could enable LFC in Freesync. There really wasn’t a lot in it, and they were both vastly nicer to track motion on than a 60Hz panel.

            • TwoEars
            • 2 years ago

            I also believe that everyone's eyeballs are a little different. I could, for instance, never stand those Panasonic plasma TVs with the black blacks, which everyone else loved. I could see the flicker on those, and for me it was like staring at an old fluorescent light bulb: very irritating. Before OLED came out I preferred the high-end LED TVs even if the blacks weren't as good.

            Like you say, I also think that just going from 60 Hz to 75 Hz is a really big change, but for me going to 100 Hz is the same change yet again. But it depends on the game as well. If it's a slow-paced RPG you can play at 60 Hz and not notice it. For The Witcher 3, which has fairly fast combat, I liked 80-100 Hz. And in some very fast games such as DOOM 2016 or Shadow Warrior 2 I actually cranked it all the way up to 165 Hz and could see a difference.

            The difference at those very, very high refresh rates is mostly in the sharpness of objects when you turn and move quickly. If you, for instance, make a 180-degree turn or a dash-jump across the map, you can see that objects maintain their sharpness even when moving at a breakneck pace. But if you're just standing still you won't see much difference from, say, 100 Hz.

        • Kretschmer
        • 2 years ago

        Oooooh you’re going to enjoy that upgrade.

      • Tirk
      • 2 years ago

      Don't worry, the 8700K is not in stock. You can preorder it and wait 2-3 weeks for it to come into stock and ship.

      And we all know how much everyone here likes to preorder stuff right? hehe

      So by the time you actually get it and set it up, Ryzen 7 will have been out 8 months. Be happy with your purchase. After all, everyone shits on AMD for releasing parts 8 months after Nvidia. Seems only reasonable to shit on Intel for releasing it 8 months after AMD right?

        • derFunkenstein
        • 2 years ago

        Well I have no plans to upgrade. Once some of those initial issues got sorted, particularly with memory compatibility, it’s been great.

      • Concupiscence
      • 2 years ago

      I’m late to this party, but still happy I went with a Ryzen 1700 for the work I do. For multithreaded video encoding it’s a stupidly good performer at the price point. I’m not a hardcore gamer, but it’s smooth as silk and doesn’t let my 32″ 1440p monitor down. And, as you’d expect, it multitasks like a dream. Coffee Lake’s debut seems even more rushed than Ryzen’s; motherboard availability is a problem, and I predict there will be some early growing pains. At some point in the future I may look at an i5 8400 to replace the Haswell i3 in my home theater PC, but that will require a solid mini-ITX motherboard and lower DDR4 costs. And it may seem catty, but I’m not sure I want to reward Intel just yet for finally waking up after AMD gave them some serious competition…

    • DragonDaddyBear
    • 2 years ago

    Allow me to channel my inner Chuckula for a moment. The downside to this amazing CPU is that AMD isn't really a great deal any more. AMD's only hope to continue to sell Ryzen at this point is that the consumer:
    A) knows the AMD CPU is best for their particular use case (which is now far less likely), or
    B) does the typical consumer thing and thinks "more is better is faster" with the cores.

    I’m rooting for the under dog but it’s a lot harder when Intel knocks one out of the park like this. Good news, though, is ThreadRipper is still pretty awesome compared to other high-core-count CPUs, especially for the price.

    EDIT: to be clear, I’m not just trolling. I’m genuinely trying to think of a reason why anyone would buy AMD with this new Intel release. Better CPU in most cases and you get an iGPU. The only excuse I can think of is AMD is committed to the socket and you might be able to upgrade, but AMD’s track record isn’t that great in improving existing architectures.
    EDIT2: ECC and the 1700 (non X) would be good for home server uses.

      • chuckula
      • 2 years ago

      Don’t worry, AMD will make these Coffee lake parts obsolete by next February.

      [Man, the Intel fanbots must be strong today, I don’t see why there’s all this hate for Zen+]

      • just brew it!
      • 2 years ago

      You left out C) AMD drops their prices to even out the value proposition.

      That Ryzen die shrink can’t come soon enough…

        • DragonDaddyBear
        • 2 years ago

        Honest question: Can they do that and not lose money?

          • just brew it!
          • 2 years ago

          Probably? They survived this long with a much lower ASP than they're getting now; I think they've got some wiggle room to drop prices without becoming unprofitable, as long as they continue to execute on their roadmap (e.g. the die shrink next year).

            • w76
            • 2 years ago

            I don't quite understand your and Alexko's responses. AMD still lost money last quarter. We haven't seen Q3 yet (it's coming soon), but as of the data currently in front of us the only possible answer is "no," because you can't be losing money, cut prices just to maintain your current volume, and magically make more money.

            The consensus forecast for Q3 is oh-so-slightly positive net income, but again, that doesn’t support the idea that AMD can engage in a price war and keep their head above water.

            • chuckula
            • 2 years ago

            Given that Q3 includes revenue from high-margin Epyc parts and a full quarter’s worth of mining-mania resulting in huge GPU sales, I’m sincerely hoping there’s a solid profit somewhere in AMD’s report. Given all these product launches and market conditions they weren’t being constrained by the competition in any meaningful way.

            • w76
            • 2 years ago

            I hope your right, though the GPU thing depends on if they were able to ramp production or if the markups fell in the hands of retailers and resellers.

            • DancinJack
            • 2 years ago

            you’re

            • just brew it!
            • 2 years ago

            Nearly all of the Q2 loss was the result of increased R&D spending, for whatever that’s worth.

            Yeah, it’s troubling that they still lost money. But revenue was up significantly, and it remains to be seen how Epyc is selling; if uptake of Epyc is strong that’ll boost their ASP significantly.

            Edit: The effect of pricing on profit is also a tricky thing. As long as they’re selling well above their production cost, lower per-unit profit can be offset by higher volume.
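            A minimal sketch of that volume-versus-margin arithmetic, with entirely hypothetical numbers (AMD’s actual unit costs and volumes aren’t public):

                def profit(price, unit_cost, volume):
                    # Gross profit = per-unit margin times units sold.
                    return (price - unit_cost) * volume

                # Hypothetical baseline: a $300 chip costing $120 to build, 1M units sold.
                before = profit(300, 120, 1_000_000)   # $180.0M

                # Hypothetical 15% price cut that stimulates 30% more volume.
                after = profit(255, 120, 1_300_000)    # $175.5M

                print(f"before: ${before:,}  after: ${after:,}")
                # Volume very nearly offsets the cut in this made-up scenario.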

          • Alexko
          • 2 years ago

          At 192 mm², the Zeppelin die used on Ryzen is a lot smaller than the 315 mm² one used in Zambezi/Vishera products, which were selling for a lot less than Ryzen, even accounting for any upcoming, large price cuts.

          So, most likely, yes.
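          As a rough sketch of why that die-size gap matters, here’s the classic first-order dies-per-wafer estimate on a standard 300-mm wafer (an approximation, not foundry data):

              import math

              def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
                  # Gross wafer area over die area, minus a correction for
                  # partial dies lost around the wafer edge.
                  radius = wafer_diameter_mm / 2
                  gross = math.pi * radius**2 / die_area_mm2
                  edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
                  return int(gross - edge_loss)

              print(dies_per_wafer(192))  # Zeppelin-sized die: ~320 candidates
              print(dies_per_wafer(315))  # Zambezi/Vishera-sized die: ~186 candidates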

            • stefem
            • 2 years ago

            That would have been true in the good old days of transistor shrinking; today things are more complex. For example, at TSMC, a 1-billion-transistor chip produced at 20 nm had the same cost as (initially even higher than) a 1-billion-transistor chip made at 28 nm, despite being smaller in size (i.e., cost per transistor doesn’t always scale).

            • BurntMyBacon
            • 2 years ago

            [quote=”stefem”<]at TSMC a 1 billion transistors chip produced at 20nm had the same cost (even higher initially) of a 1 billion transistors chip made at 28nm[/quote<]

            Yes. Consequently, TSMC didn’t get large GPUs on its 20nm process. While costs of fabrication are certainly going up, the larger issue at TSMC was yield problems on larger chips. Smaller chips most certainly were cheaper on the smaller process even with the yield issues. TSMC wasn’t alone in this, either. Even now, Intel appears to be shying away from pushing its larger chips on its 10nm process until process improvements are made. I’m not sure how large Cannon Lake will be, but it’s a good bet that, as mobile-only chips, they’ll be smaller than Coffee Lake.

            Keep in mind that while the details have gotten more complex, these are the same issues these companies have dealt with for decades. They wouldn’t move to a new node if they didn’t think they could eventually make it profitable. Inevitably, TSMC, GlobalFoundries, Samsung, Intel, etc. will work out the yield issues or move on to a process where they can. To your point, though, it does seem to be taking longer to work out those issues, and moving to the next node doesn’t appear to be quite as beneficial as it once was (particularly for early adopters).
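            To put a number on that yield point, here’s the simple Poisson yield model; the defect densities are illustrative assumptions, not published foundry figures:

                import math

                def poisson_yield(die_area_cm2, defects_per_cm2):
                    # Poisson model: probability a die has zero random defects.
                    return math.exp(-die_area_cm2 * defects_per_cm2)

                for d0 in (0.1, 0.5):  # hypothetical mature vs. immature process
                    small = poisson_yield(1.0, d0)  # ~100 mm^2 die
                    large = poisson_yield(6.0, d0)  # ~600 mm^2 GPU-class die
                    print(f"D0={d0}: small die {small:.0%}, large die {large:.0%}")

                # D0=0.1: small 90%, large 55%. D0=0.5: small 61%, large 5%.
                # Big dies get hit much harder while a process is still immature.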

        • chuckula
        • 2 years ago

        Thank you Intel for providing competition to make AMD lower its prices!

        Wait what?

          • Voldenuit
          • 2 years ago

          Wait, what, did I just give chuckula an upvote?

          I feel dirty.

      • ronch
      • 2 years ago

      You got one downvote and Chuck’s reply got five. Guess you’re not channeling well enough. 😉

      • xeridea
      • 2 years ago

      C) You don’t have to delid Ryzen CPUs to get good OC thermals.

      For multithreaded apps, the 1700 and 8700K trade blows, so it may not be worth the extra cash for the 8700K.

      The 8xxx series just makes Intel price-competitive again. It doesn’t make Ryzen obsolete or bad; it just makes Intel not as expensive. Really, the only weakness for Ryzen is gaming, and it’s not like those chips are bad, just not as good on paper. In most games you still get good performance, so it depends on whether you care about, say, 80 FPS vs. 90 FPS, either one still being perfectly acceptable.
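      For what that 80-vs-90 FPS gap works out to in frame time (simple arithmetic, no assumptions beyond those two numbers):

          def frame_time_ms(fps):
              # Average time each frame spends on screen.
              return 1000.0 / fps

          print(f"{frame_time_ms(80):.1f} ms vs {frame_time_ms(90):.1f} ms")
          # 12.5 ms vs 11.1 ms -- about 1.4 ms per frame separates the two.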

      • USAFTW
      • 2 years ago

      AMD is gonna have to react to this. The Coffee Lake lineup will pretty much render the entire Ryzen stack moot.
      Much better single-threaded performance (and therefore better gaming). Two fewer cores doesn’t matter, since the increased IPC of each one almost makes up for the two additional cores on Ryzen 7s.
      So yeah, I loved it when Ryzen came out and it was as good as it was, but now…

    • DragonDaddyBear
    • 2 years ago

    That price isn’t bad, actually. It wasn’t THAT long ago that an “OK” CPU was $400+ and the top end was closer to $1,000.

    • tsk
    • 2 years ago

    “Perhaps more astoundingly, the company can get the same level of performance on 14nm++ as one of its 2015 transistors for 52% less power. These developments are most likely a large part of what allowed the company to perform feats like putting four cores in a 15W TDP and six cores in a 95W TDP.”

    But the 15 W Kaby Lake Refresh parts are on 14nm+, as opposed to Coffee Lake on 14nm++, correct?

      • Jeff Kampman
      • 2 years ago

      You are correct, reworded.

    • chuckula
    • 2 years ago

    FYI, while I fully respect Jeff leaving out RyZen in the power/performance graphs since his platform is apparently sucking down more power than expected, you can see some proxy information for the 1800X in TR’s [url=https://techreport.com/review/32607/intel-core-i9-7980xe-and-core-i9-7960x-cpus-reviewed/13<]Skylake X review[/url<] and in TR's [url=https://techreport.com/review/32390/amd-ryzen-threadripper-1920x-and-ryzen-threadripper-1950x-cpus-reviewed/14<]Threadripper review[/url<] although as a word of caution the Threadripper numbers are using an older version of Blender.

    • watzupken
    • 2 years ago

    I find it very strange that out of the three review sites I visited, none included the critical R5 1600 in the comparison. Not sure if there’s a reason it got left out? It makes sense to compare the six-core solutions from both camps to see how each fares, to be honest.

      • EzioAs
      • 2 years ago

      TechPowerUp has the R5 1600 in their i7-8700K review.

        • watzupken
        • 2 years ago

        Yes, I saw it. But here, AnandTech, and PC Perspective don’t have it, which I find very odd. Considering some of these sites previously reviewed the Ryzen 5 1600, it’s just strange they don’t use it for comparison.

          • drfish
          • 2 years ago

          I assume it’s just because the R5s and the i5s are better suited to go up against each other price wise.

      • Jeff Kampman
      • 2 years ago

      When the i7-8700K is beating up on Ryzen 7 parts (and costs way more than a Ryzen 5 1600/X), it didn’t make a lot of sense to drag in every Ryzen 5 and Core i5 in our labs given the time we had on hand. We’ll address Ryzen 5 versus Coffee Lake when we review the i5-8400 and i5-8600K.

        • watzupken
        • 2 years ago

        Sure, this makes sense to me, although it will be interesting to see how the 6C/12T solutions from the two companies stack up. At the top end, Intel is bound to dominate, since Kaby Lake is already at least 10% faster in IPC, and with the increased clock speeds it will only do better. Looking forward to seeing the Core i5 reviews.

          • derFunkenstein
          • 2 years ago

          The answer is gonna be “a lot, sometimes” – better IPC plus higher frequencies = it’s not really that close even when you give AMD extra cores to play with. But only sometimes because sometimes single-threaded (or less than 12 threads) performance is king. Wherever you see a Ryzen 5 matching a Ryzen 7, that’s where Ryzen 5 will be a bad choice compared to Intel.
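          A toy model of that IPC-times-clock-versus-cores tradeoff; the IPC ratios, clocks, and parallel fractions below are illustrative guesses, not measurements:

              def throughput(ipc, clock_ghz, cores, parallel_fraction):
                  # Amdahl-style scaling of a single-thread rate across cores.
                  single = ipc * clock_ghz
                  speedup = 1 / ((1 - parallel_fraction) + parallel_fraction / cores)
                  return single * speedup

              # Hypothetical 6C chip with an IPC/clock edge vs. a hypothetical 8C chip.
              for pf in (0.3, 0.95):  # lightly threaded vs. highly parallel workload
                  six = throughput(1.10, 4.3, 6, pf)
                  eight = throughput(1.00, 3.7, 8, pf)
                  print(f"parallel fraction {pf}: 6C {six:.1f} vs 8C {eight:.1f}")

              # The 6C part wins easily when threads are scarce and still edges
              # out the 8C part even when the workload scales almost perfectly.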

      • crabjokeman
      • 2 years ago

      The i5-8400 seems like a more fitting comparison to the R5 1600 in terms of clock speed, TDP, and price.

      • maroon1
      • 2 years ago

      Anandtech has 16000
      [url<]https://images.anandtech.com/graphs/graph11859/91888.png[/url<]

        • willmore
        • 2 years ago

        That’s over 9000!!!

        Hint: 1600.

      • USAFTW
      • 2 years ago

      I wouldn’t hold my breath. Intel will have significantly higher single-threaded performance (Haswell to Skylake) without the dreaded cross-CCX latency. Ultimately, you will pay more for an 8700K, but you’ll also get more.

    • EzioAs
    • 2 years ago

    Man, I keep telling myself to resist upgrading to either Coffee Lake or Ryzen and wait for the next gen instead…but the temptation is far too strong…

    [url=https://youtu.be/SP5eMEwEUKg?t=1m25s<]I...cannot...maintain[/url<]

    P.S. And I don’t even drink (read: like) coffee...

      • DeadOfKnight
      • 2 years ago

      If you want to upgrade, now is a good time. Looks like this is the boost most people who want to upgrade have been waiting for. Last one was Sandy Bridge. Upgrade on the same cycle as that crowd and you won’t be disappointed. Personally I’ve been in the mid cycle of upgrades and I’m always jealous, but not enough to go out and drop that kind of money if I don’t need to.

        • EzioAs
        • 2 years ago

        I don’t know. I haven’t been in this dilemma in quite a while. Anybody have an educated guess on what next-gen Ryzen can deliver?

          • DeadOfKnight
          • 2 years ago

          Less than or equal to this, for sure. The real question is what do you have now? What do you plan to do after you upgrade? Do you play old games on a 60 Hz monitor? Then probably don’t even worry about upgrading, you’re gonna be good for a long time yet. Do you do game streaming? If so, this could be the chip you need right now.

    • ptsant
    • 2 years ago

    Best news for me is the $359 price tag, which has remained stable over the years. For most people 6c/12t is really enough and the jump (in price!) to HEDT seems even more excessive.

      • Beahmont
      • 2 years ago

      I was waiting on the review to order one of these if it was as good as I expected, and it was. However, in the meantime, Newegg has sold out of 8700K chip-and-board combos and says it could take as long as 20 days to get a chip by itself, with a $20 markup to $379 as well.

      Sounds like I’ll be waiting for demand to taper off a little before picking one of these up. Oh well, my C2Q will just have to soldier on for a little while longer.

        • drfish
        • 2 years ago

        C2Q? Jeff, can we send this poor soul your review sample? o_0

          • Beahmont
          • 2 years ago

          While I would never look a gift fish in the mouth, I think I’ll do fine purchasing one in a few months. I finally got paid by the Social Security Administration for disability, so I actually have a little bit of money to finally buy a new computer to last the next decade.

          How I’m going to afford the computer after this one is another question entirely. Apparently if you pass out and have a seizure 4 times a week and spend the rest of the time dizzy or in discomfort from heart palpitations you’re disabled, but if you only pass out and have a seizure 1-2 times a week and spend the rest of the time dizzy or in discomfort from heart palpitations you’re capable of substantial work. As I’ve now gotten ‘better’ to the point that I’m only passing out 1-2 times a week, I don’t get any ongoing money. But the C2Q system is on its last legs. So new comp it is.

          American Administrative Law, ain’t she grand!

        • cpucrust
        • 2 years ago

        Yeah, I was waiting on this to determine whether to go with an AMD Ryzen system or an Intel system. The Core X series (7820X and above) interested me with their performance but not their price.
        Now with Coffee Lake, I have a product review showing performance more in line with my expectations price-wise, but I suspect the retailers will have their pound of flesh before we see MSRP.

    • chuckula
    • 2 years ago

    Hey Jeff, quick question about the VeraCrypt results. The RyZen results look kind of weird where the non-OC’d RyZen 1700/1700X are beating the 1800X and even the overclocked 1700. Do you have a theory on that, or do you just chalk it up to benchmark wonkiness?

      • Jeff Kampman
      • 2 years ago

      Data-entry goof on my part for AES and probably variance for the rest.

    • drfish
    • 2 years ago

    Stultus ego pisces.

      • chuckula
      • 2 years ago

      Post hoc ergo propter hoc.
      Mea Culpa.

      • Beahmont
      • 2 years ago

      Res ipsa loquitur

      • Captain Ned
      • 2 years ago

      Quis custodiet ipsos custodes?

      • derFunkenstein
      • 2 years ago

      FISH YOU IDIOT YOU SHOULD HAVE WAITED

        • drfish
        • 2 years ago

          I was hoping my comment would spark somewhat more sophisticated remarks than this one. Oh well. 😉

          • derFunkenstein
          • 2 years ago

          it did but i am a simple country boy

            • morphine
            • 2 years ago

            [quote<]it shor did, yes siree, but I's but a simple country boy, pardner[/quote<] FTFY.

            • derFunkenstein
            • 2 years ago

            I like how you think I apparently ride horses into The City or something.

      • RickyTick
      • 2 years ago

      Is there any possibility this i7-8700K will work in your Z270 motherboard? BIOS update maybe? Just curious since I have the same setup as you.

        • chuckula
        • 2 years ago

        It’s never going to work on the Z270 motherboard. There are pin-level hardware changes in the package that can’t be worked around in firmware no matter what you try.

          • RickyTick
          • 2 years ago

          Ok, thanks.

      • Alexko
      • 2 years ago

      Why [i<]pisces[/i<] rather than [i<]piscis[/i<]?

        • drfish
        • 2 years ago

        Because I’m nearly 20 years removed from the two years of Latin I took in high school, and I choose to trust Google Translate.

        I’m afraid that nearly the only thing I remember from that class now is singing OST-MUS-TIS-NT to the tune of The Mickey Mouse Club song.

          • Captain Ned
          • 2 years ago

          Romani ite domum.

            • chuckula
            • 2 years ago

            Carthago delenda est!

            • Captain Ned
            • 2 years ago

            Pie Iesu domine. Dona eis requiem

            • chuckula
            • 2 years ago

            THWACK!

            • Redocbew
            • 2 years ago

            “Thwack” is Latin?

            • chuckula
            • 2 years ago

            esYay!

            • just brew it!
            • 2 years ago

            [url<]https://www.youtube.com/watch?v=e4q6eaLn2mY[/url<]

            • just brew it!
            • 2 years ago

            Vehiculum caeli anguillarum meum plenum est

            • Captain Ned
            • 2 years ago

            Quomodo dicitis ad est rex ille? Non faeces toto eo.

          • Mr Bill
          • 2 years ago

          We took our highschool class motto from our first Latin lesson…
          Semper Ubi Sub Ubi

            • drfish
            • 2 years ago

            A classic, but technically gibberish.

            • Mr Bill
            • 2 years ago

            True, but we all know how sophomoric freshmen are. 😉

          • Mr Bill
          • 2 years ago

          Tempus fugit velut sagitta est, fructum fugit velut Musa sapientum fixa

            • Voldenuit
            • 2 years ago

            That’s bananas.

    • juzz86
    • 2 years ago

    The time for six-core gaming is actually upon us, be damned.

    What a beast.

    Solid review Jeff, thanks mate. A+

    • tsk
    • 2 years ago

    Nice, I forgot these were launching today. Now let’s get to reading!

    • chuckula
    • 2 years ago

    Thanks for the review TR!

    But you forgot one vital thing: Since Intel clearly can’t win on performance* they have resorted to a media-bundling deal with the estate of the late great David Bowie.

    Here’s a low-quality [url=https://www.youtube.com/watch?v=AXxmIcsmpnQ<]sample of the free song[/url<] you get with every Coffee Lake purchase.

    * If you read the review and disagree with that preconceived notion of mine, you just need to look at the scores more closely to see how Covfefe Lake fails.
