Intel’s Core i7-8086K CPU reviewed

At Computex this past week, five was the magic number. Usually, it was describing the number of gigahertz certain Intel processors were clocked at during the company’s keynote. One of those demos has touched off a firestorm about just how valid it is to demonstrate a 28-core CPU with an extreme overclock and extreme cooling, but the other 5-GHz chip Intel showed is a product you can actually buy today. In exchange for $425 at Amazon, Newegg, and Best Buy, the Core i7-8086K is meant to celebrate 40 years of the x86 instruction set architecture. It’s also making a little history of its own as Intel’s first processor to reach 5 GHz Turbo Boost speeds.

Intel didn’t send review samples of this chip, but I wanted to see just what that 5 GHz number meant in practice for a stock-clocked CPU. I ordered one off Amazon with my own hard-earned cash to put it to the test. Ever since that shipment arrived Saturday, we’ve been exploring the behavior and performance of the i7-8086K.

                One core  Two cores  Three cores  Four cores  Five cores  Six cores
                active    active     active       active      active      active
Core i7-8086K   5.0 GHz   4.6 GHz    4.5 GHz     4.4 GHz     4.4 GHz     4.3 GHz
Core i7-8700K   4.7 GHz   4.6 GHz    4.5 GHz     4.4 GHz     4.4 GHz     4.3 GHz

So how did Intel get to that 5 GHz figure? Simple: Turbo Boost 2.0 relies in part on an “n-cores-active” heuristic. Put simply, the fewer cores that are active, the higher the clocks that Turbo Boost’s multivariate frequency-scaling logic will allow. Each chip has a number of Turbo bins equal to the number of cores on the die.

With one core active, the Coffee Lake Core i7-8700K that the i7-8086K is derived from can boost all the way up to 4.7 GHz. The Core i7-8086K’s top Turbo bin is 5 GHz. Once you get outside those numbers, the i7-8700K and i7-8086K are identical. Scuttlebutt had suggested the i7-8086K might have a more aggressive Turbo table across the board, but that’s not the case. That fact makes sense, of course, given that the chip’s TDP remained the same as its less-special sibling’s.
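The n-cores-active lookup can be sketched as a simple table walk. To be clear, this is an illustrative model using the observed Turbo bins, not Intel's actual power-management logic, which also weighs temperature, current, and package power; the names are ours.

```python
# Observed Turbo tables (GHz), indexed by number of active cores.
# Illustrative model of the "n-cores-active" heuristic only.
TURBO_TABLE = {
    "i7-8086K": [5.0, 4.6, 4.5, 4.4, 4.4, 4.3],
    "i7-8700K": [4.7, 4.6, 4.5, 4.4, 4.4, 4.3],
}

def max_turbo_ghz(model, active_cores):
    """Return the top Turbo bin for the given active-core count (1-6)."""
    return TURBO_TABLE[model][active_cores - 1]
```

Walk the table with two or more cores active and the two chips return identical values; only the one-core-active bin separates them.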

We’ll exhaustively tease out what this change in Turbo tables means for stock-clocked performance in a moment, but outside of web browsing, common desktop tasks, and the like, that 5-GHz figure probably won’t make itself felt. It’s nice, for sure, but I’ll stop you right here: it’s no reason to go spend $75 more on one of these over a Core i7-8700K if you want a Coffee Lake part. To really get the most out of this chip, you need to take full advantage of its unlocked multiplier.

Our initial overclocking efforts with the Core i7-8700K yielded a 5-GHz all-core Turbo speed with an AVX offset of -2 for SIMD-heavy workloads like Blender and Handbrake. That led to some jaw-dropping performance in CPU-bound games and single-threaded benchmarks, even if AMD’s Ryzen 7 1700 could often catch up in productivity workloads when pushed to 4 GHz. The mitigations for the Spectre and Meltdown vulnerabilities have dropped some ice in the coffee since, but the overclocking headroom of Coffee Lake cannot be denied. It’s the one thing AMD’s Ryzen CPUs can’t even hope to match.
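Since AVX offsets come up repeatedly in this review, here's the arithmetic in sketch form. The function and constant names are ours and purely illustrative; a 100-MHz base clock is assumed, as is standard on these platforms, and an offset of "-2" means two multiplier bins are subtracted whenever AVX code is executing.

```python
BCLK_MHZ = 100  # assumed base clock; standard on Z370 at stock

def effective_clock_mhz(core_ratio, avx_offset, avx_active):
    """Core clock in MHz; the AVX offset drops the ratio under AVX load."""
    ratio = core_ratio - avx_offset if avx_active else core_ratio
    return ratio * BCLK_MHZ
```

Plugging in our i7-8700K overclock, a 50x ratio with a -2 AVX offset yields 5000 MHz for ordinary code but 4800 MHz the moment Blender or Handbrake starts issuing AVX instructions.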

I dived into some tentative overclocking efforts during our live unboxing and benchmarking of the i7-8086K this past weekend, and I’ve been refining our stable overclock since. It must be noted that every chip will overclock differently thanks to the vagaries of silicon lithography, but our sample was able to reach 5.1 GHz on all of its cores at a slightly-higher-than-conventionally-accepted-as-safe 1.38 V. Better yet, the chip can sustain those clocks with both AVX and non-AVX workloads. We have two Core i7-8700Ks in the TR labs, and neither one can stably exceed 5 GHz with the aforementioned -2 AVX offset.

I may still delid and re-paste our chip to explore its very limits, as regular workloads with those clocks and voltages are causing it to crest 90° C package temperatures under a Corsair H110i 280-mm liquid CPU cooler. I suspect I might be able to eke 5.2 GHz non-AVX clocks out of my chip with lower temperatures, as well. Perhaps we’ll do that live, too. For now, let’s see how the i7-8086K performs.

 

Our testing methods

As always, we did our best to deliver clean benchmarking numbers. We ran each benchmark at least three times and took the median of those results. Our test systems were configured as follows:
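The reporting rule above (at least three runs, take the median) can be written down in a couple of lines. This is a sketch of the reporting step only, not our full harness, and the function name is ours:

```python
import statistics

def report_score(runs):
    """Report the median of three or more benchmark runs; unlike a
    mean, the median shrugs off a single outlier run."""
    assert len(runs) >= 3, "need at least three runs"
    return statistics.median(runs)
```

For example, `report_score([101.2, 99.8, 131.5])` reports 101.2 even though the third run hiccuped, which is exactly why we prefer the median over the mean for noisy benchmarks.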

Processor        Intel Core i7-8086K | Intel Core i7-8700K
CPU cooler       Corsair H110i 280-mm closed-loop liquid cooler
Motherboard      Gigabyte Z370 Aorus Gaming 7
Chipset          Intel Z370
Memory size      16 GB (2x 8 GB)
Memory type      G.Skill Trident Z DDR4-3866 (rated) SDRAM
Memory speed     3400 MT/s (CPUs at stock), 3866 MT/s (CPU OC)
Memory timings   16-16-16-36 2T
System drive     Samsung 960 Pro 512 GB NVMe SSD

 

Processor        AMD Ryzen 7 2700X
CPU cooler       EK Predator 240-mm closed-loop liquid cooler
Motherboard      Gigabyte X470 Aorus Gaming 7 Wifi
Chipset          AMD X470
Memory size      16 GB (2x 8 GB)
Memory type      G.Skill Sniper X DDR4-3400 (rated) SDRAM
Memory speed     3400 MT/s (actual)
Memory timings   16-16-16-36 1T
System drive     Samsung 960 EVO 500 GB NVMe SSD

Where applicable, our overclock for the Core i7-8086K was 5.1 GHz all-core with 1.38 V and no AVX offset. Our Core i7-8700K OC was 5 GHz with a -2 AVX offset and 1.35 V. While our Ryzen 7 2700X was not overclocked, its Precision Boost 2 all-core clock speed was observed to be 4.075 GHz under AVX workloads. Leaving the chip’s 4.3-GHz stock single-core Turbo speed intact is more helpful to the 2700X than pushing for a marginally higher all-core speed like 4.2 GHz.

Some other notes on our testing methods:

  • All test systems were updated with the latest firmware, graphics drivers, and Windows updates before we began collecting data, including patches for the Spectre and Meltdown vulnerabilities where applicable. As a result, test data from this review should not be compared with results collected in past TR reviews. Similarly, all applications used in the course of data collection were the most current versions available as of press time and cannot be used to cross-compare with older data.
  • Our test systems were all configured using the Windows Balanced power plan, including AMD systems that previously would have used the Ryzen Balanced plan. AMD’s suggested configuration for its CPUs no longer includes the Ryzen Balanced power plan as of Windows’ Fall Creators Update, also known as “RS3” or Redstone 3.
  • Unless otherwise noted, all productivity tests were conducted with a display resolution of 2560×1440 at 60 Hz. Gaming tests were conducted at 1920×1080 and 144 Hz.

Our testing methods are generally publicly available and reproducible. If you have any questions regarding our testing methods, feel free to leave a comment on this article or join us in the forums to discuss them.

 

Memory subsystem performance

Let’s kick off our tests with some of the handy memory benchmarks included in the AIDA64 utility.

 

No surprises here. Our overclocked chips are using DDR4-3866 versus the DDR4-3400 memory in our stock-clocked configs, and that translates to higher bandwidth and lower latencies.

Some quick synthetic math tests

AIDA64 offers a useful set of built-in directed benchmarks for assessing the performance of the various subsystems of a CPU. The PhotoWorxx benchmark uses AVX2 on compatible CPUs, while the FPU Julia and Mandel tests use AVX2 with FMA.

The Ryzen 7 2700X may put up a win in the SHA-accelerated CPU Hash benchmark, but the superior floating-point throughput of the Coffee Lake parts is otherwise on full display. Oddly, the i7-8086K falls behind the i7-8700K in the double-precision Mandel benchmark despite its win in the single-precision Julia test. Let’s see how these synthetic performance tests bear out in actual workloads.

 

Javascript

Our browser benchmarks are primarily single-threaded, so they should allow the i7-8086K’s 5-GHz top Turbo bin to rear its head from time to time.

The i7-8086K notches wins across the board for Javascript performance when it’s overclocked, but it only beats out the i7-8700K stock-for-stock in two of our browsing tests. Even though these tests are single-threaded, that doesn’t mean they’re the only thing the operating system is working on at any given time, and it seems practically any activity on other cores is enough to keep the i7-8086K from boosting to 5 GHz consistently.

WebXPRT 3

The WebXPRT 3 benchmark is meant to simulate some realistic workloads one might encounter in web browsing. It’s here primarily as a counterweight to the more synthetic microbenchmarking tools above.

The i7-8086K barely beats out the i7-8700K in WebXPRT 3, although it does open a bit more of a lead over the overclocked, OG Coffee Lake i7 when we turn the screws.

Our single-threaded tests suggest one will see little difference in real-world usage from an i7-8086K versus an i7-8700K in their system. Let’s see if the chip can distinguish itself better in more multithreaded testing.

 

Compiling code with GCC

Our resident code monkey, Bruno Ferreira, helped us put together this code-compiling test. Qtbench records the time needed to compile the Qt SDK using the GCC compiler. The number of jobs dispatched by the Qtbench script is configurable, and we set the number of threads to match the hardware thread count for each CPU.

The practically identical Turbo tables of the i7-8700K and i7-8086K play out exactly as you’d expect in this multi-threaded benchmark. Overclocking the i7-8086K does open a small lead over the i7-8700K at 5 GHz, but it’s nothing to write home about.

File compression with 7-Zip

The free and open-source 7-Zip archiving utility has a built-in benchmark that occupies every core and thread of the host system.

The i7-8086K’s results are a bit scattershot in 7-Zip. Stock-clocked results from both Coffee Lake parts are about the same, but the i7-8086K takes the compression crown versus the i7-8700K and gives it right back in decompression.

Disk encryption with Veracrypt

The accelerated AES portion of the Veracrypt benchmark seems to favor the i7-8086K’s overclocked guise, but the non-accelerated Twofish portion of the test delivers the same results on either Coffee Lake chip.

 

Cinebench

The evergreen Cinebench benchmark is powered by Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. The test runs with a single thread and then with as many threads as possible.

Cinebench’s one-thread mode appears to let the i7-8086K stretch its single-core boost legs a bit. Single-core operation isn’t really why we run Cinebench, though.

In the multithreaded mode of Cinebench, the Coffee Lake parts are defeated handily by the Ryzen 7 2700X. There appears to be no substitute for cores in this benchmark.

Blender

Blender is a widely-used, open-source 3D modeling and rendering application. The app can take advantage of AVX2 instructions on compatible CPUs. We chose the “bmw27” test file from Blender’s selection of benchmark scenes to put our CPUs through their paces.

Blender doesn’t put more than the barest light between our Coffee Lake parts at stock speeds. Overclock both chips, though, and the 5.1-GHz all-core AVX speed of the i7-8086K seems to give it a noticeable edge over the i7-8700K. Still, the Ryzen 7 2700X takes advantage of its beefy cooler to deliver enough stock speed to eke out a narrow win over either Coffee Lake part.

Corona

Corona, as its developers put it, is a “high-performance (un)biased photorealistic renderer, available for Autodesk 3ds Max and as a standalone CLI application, and in development for Maxon Cinema 4D.”

The company has made a standalone benchmark with its rendering engine inside, so it was a no-brainer to give it a spin on these CPUs.

Sorry to repeat myself, but it’s unavoidable. The Coffee Lake parts are practically the same in performance, and they get beat out by the Ryzen 7 2700X no matter what.

Indigo

Here’s a new benchmark for our test suite. Indigo Bench is a standalone application based on the Indigo rendering engine, which creates photo-realistic images using what its developers call “unbiased rendering technologies.”

The pattern continues. It’s a split decision between AMD and Intel, though. The Ryzen 7 2700X wins out in the Bedroom test scene, while the OCed Coffee Lake parts take the prize with the Supercar scene.

Handbrake

Handbrake is a popular video-transcoding app that just hit version 1.1. To see how it performs on these chips, we converted a roughly two-minute 4K source file from an iPhone 6S into a 1920×1080, 30 FPS MKV using the HEVC algorithm implemented in the x265 open-source encoder. We otherwise left the preset at its default settings.

Handbrake is another instance where the overclocked i7-8086K seems to be able to press a small advantage over the i7-8700K thanks to its higher AVX clocks. Stock performance is, as usual, identical.

CFD with STARS Euler3D

Euler3D tackles the difficult problem of simulating fluid dynamics. It tends to be very memory-bandwidth-intensive. We configured Euler3D to use every thread available from each of our CPUs.

 Euler3D doesn’t give much of an advantage to either Coffee Lake chip, whether stock or overclocked. Perhaps gaming performance will give us something to get excited about.

 

Grand Theft Auto V

Grand Theft Auto V‘s lavish simulation of Los Santos and surrounding locales can really put the hurt on a CPU, and we’re putting that characteristic to good use here.


Grand Theft Auto V tends to love what Intel’s CPUs have to offer, and it shows even at stock speeds. That said, anyone hoping for a big splash from the i7-8086K at either stock or overclocked speeds can exhale now. There’s little difference in gaming performance from an i7-8700K at 5 GHz vs an i7-8086K at 5.1 GHz.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time our graphics card spends beyond certain frame-time thresholds, each with an important implication for gaming smoothness. Recall that our graphics-card tests all consist of one-minute test runs and that 1000 ms equals one second to fully appreciate this data.

The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS, or a 30-Hz refresh rate. Go lower than that with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.

To best demonstrate the performance of these systems with a powerful graphics card like the GTX 1080 Ti, it’s useful to look at our three strictest graphs. 8.3 ms corresponds to 120 FPS, the lower end of what we’d consider a high-refresh-rate monitor. We’ve recently begun including an even more demanding 6.94-ms mark that corresponds to the 144-Hz maximum rate typical of today’s high-refresh-rate gaming displays. Finally, we’ve added a 5-ms graph to see how well any of our chips sustain a scorching 200 FPS.
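The bookkeeping behind these graphs can be sketched in a few lines of Python. One hedge: per the description above, we model "time spent beyond X" as the portion of each frame time past the threshold, and the FPS equivalents come from dividing 1000 ms by the threshold; function names are ours.

```python
def time_beyond_ms(frame_times_ms, threshold_ms):
    """Total time accumulated past the threshold across all long frames."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

def threshold_to_fps(threshold_ms):
    """FPS equivalent of a frame-time threshold: 1000 ms / threshold."""
    return 1000.0 / threshold_ms

# threshold_to_fps(16.7) ~ 60, threshold_to_fps(8.3) ~ 120,
# threshold_to_fps(6.94) ~ 144
```

A single 60-ms hitch against a 50-ms threshold thus contributes only 10 ms of "badness," while a run of 9-ms frames contributes nothing at the 8.3-ms mark but piles up quickly at 6.94 ms.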

Given how fast our GTX 1080 Ti runs Grand Theft Auto V, it makes the most sense to start our exploration at the 8.3-ms mark. Aside from a couple of giant hangs from the overclocked i7-8086K that I have no good explanation for, things shake out about as we would expect. Our unlocked Coffee Lake parts spend only vanishingly small amounts of time on tough frames that take longer than 8.3 ms to complete. Flip over to the 6.94-ms mark, and the Coffee Lake chips spend about a ninth of the time the Ryzen 7 2700X does past that line in the sand. Of course, we already knew how that would play out; none of these chips are really unknown quantities. Let’s see if any other games tease out major differences between our cups of Coffee.

 

Far Cry 5


In our observations, Far Cry 5 tends to max out a single thread, so it’s no shock that our average-FPS numbers and 99th-percentile frame times favor Intel CPUs. Even so, pushing our chips to the 5 GHz mark and beyond seems to have exposed a new bottleneck, since we’re not really getting any major changes in performance for the trouble.


The impressive performance of the Intel chips is plenty evident at the 6.94-ms mark, where both Coffee Lake chips spend less than a fourth of the time that the Ryzen 7 2700X does holding up the graphics card from outputting 144 FPS. That translates into incredibly, enviably smooth gameplay from the Intel corner of the ring, but one doesn’t have to shell out the extra for the i7-8086K to get it.

 

Crysis 3

Even as it passes five years of age, Crysis 3 remains one of the most punishing games one can run. With an appetite for CPU performance and graphics power alike, this title is still a great way to put the performance of any gaming system in perspective.


Crysis 3 is still an unusual beast in that it will happily take advantage of every core and thread one can throw at it in high-refresh-rate gaming. Clock speeds matter too, though, and that means the overclocked Coffee Lake parts turn in some truly face-melting performances while feeding the GTX 1080 Ti.


Our time-spent-beyond-X graphs indicate just how gooey our visages can get. Both overclocked Coffee Lake parts spend about a second of our one-minute test run holding back our graphics card from 144 FPS. The Ryzen 7 2700X isn’t doing badly, either, but it can’t match the lofty peaks from either Coffee Lake part. Still, the Core i7-8086K and the Core i7-8700K are indistinguishable from one another.

 

Deus Ex: Mankind Divided

Thanks to its richly detailed environments and copious graphics settings, Deus Ex: Mankind Divided can punish graphics cards at high resolutions and make CPUs sweat at high refresh rates.


Ctrl+C, Ctrl+V. Deus Ex: Mankind Divided sure likes it some Coffee, but the i7-8086K doesn’t distinguish itself from its more pedestrian sibling at either stock clocks or 5.1 GHz.


 

 

Assassin’s Creed Origins

Assassin’s Creed Origins isn’t just striking to look at. It’ll happily scale with CPU cores, and that makes it an ideal case for our test bench.


Assassin’s Creed Origins at least gives us something to talk about. Note the spikiness in the Ryzen 7 2700X’s frame-time plot. Origins runs noticeably rougher on the Ryzen 7 2700X than it does on either Coffee Lake part. As for our Core i7 contenders, I’d be hard-pressed to pick out one from the other, whether stock or overclocked. The decrease in 99th-percentile frame times from our OC efforts is nice, but hardly noticeable.


The Ryzen 7 2700X’s weird spikiness emerges in our time-spent-beyond-50-ms graph, although the i7-8086K isn’t immune to a less-severe hitch that also ends up on the board here. Once we’re out of those troubled waters, though, the Coffee Lake parts assert their dominance in high-refresh-rate gaming once again. At our 11-ms threshold (or 90 FPS, if you prefer), the Ryzen 7 2700X holds up our graphics card for twice as long as the overclocked Coffee Lake parts do.

All told, the i7-8086K games like an i7-8700K. The extra 100 MHz we eked out has no noticeable effect on gaming performance, nor should we expect it to, given that it’s only a 2% increase over what we could get from our Core i7-8700K. In our final reckoning, the i7-8086K didn’t justify its markup in either productivity or gaming, and that’s likely to make it a tough sell for anybody actually interested in using it.

 

Conclusions

There are two ways to evaluate the Core i7-8086K. The first is as a commemorative curio. Intel fans who want to celebrate 40 years of x86 should probably leave theirs sealed in the box for future generations to pick over when silicon becomes a curiosity. Intel is only making so many of these, and sealed copies will likely become more and more unusual with time.

As an actual processor, the i7-8086K isn’t worth the $75 upcharge over the i7-8700K at stock speeds. Outside of its rarely-seen 5-GHz top Turbo bin, the i7-8086K performs the same as an i7-8700K the vast majority of the time. That’s because the rest of its Turbo Boost 2.0 table is identical to the i7-8700K’s. There’s only so much a chip can do within the same thermal budget. It would have been nice to see Intel really take the leash off this thing and push TDPs or implement something like its Thermal Velocity Boost feature on this chip to truly make it something special for those who don’t want to overclock.

The story changes a little—and I do mean a little—when we take advantage of the i7-8086K’s unlocked multipliers. It’s tricky to recommend a processor on the basis of its overclocking prowess alone, because no two chips will overclock alike. That said, our retail i7-8086K made it to 5.1 GHz on all cores without any AVX offset and with nothing more than the usual thermal challenges of modern Intel CPUs. No i7-8700K in our labs can run at speeds higher than 5 GHz for non-AVX workloads, and they require -2 AVX offsets to remain stable.

For all that, the i7-8086K’s slightly higher overclock didn’t translate into many practical performance benefits in our tests versus a run-of-the-mill 8700K at 5 GHz. Still want to pay that $75 extra?

That behavior does suggest Intel is putting its best Coffee Lake silicon of late under i7-8086K heat spreaders, so overclockers who are looking to get the very best performance out of their chips might not mind the upcharge. Third-party retailers like Silicon Lottery offer binned, delidded i7-8700Ks for more than an i7-8086K goes for at e-tail. If the i7-8086K proves its mettle as more enthusiasts get them in their hands, the price of this CPU could be plenty reasonable for those after the very best Coffee Lake dies.

Ultimately, the i7-8086K is more of interest for its history-marking and history-making than it is as a practical processor for the enthusiast. If you want one and don’t mind the fact that you’re paying for what is essentially the pleasure of special packaging, well, fair enough. Everybody else should just buy an i7-8700K or a Ryzen 7 2700X, depending on whether high-refresh-rate gaming or multithreaded grunt is what’s called for.

Comments closed
    • Doctor Venture
    • 1 year ago

    This’ll probably get downvoted to hell and back, but am I the only one that found it a bit odd that the review compared an overclocked (5GHz all cores) 8086K CPU to a Ryzen running at stock? I mean, comparing it to an i7-8700K is fair game, since the 8086K is just a highly binned part of the 8700K line, but tossing in the Ryzen just seemed like trying to compare apples to durians.

    EDIT: That said, I wouldn’t mind having even a non-functional 8086K encased in Lucite (?), if for nothing else than a conversation piece.

      • Jeff Kampman
      • 1 year ago

      Here’s the thing you have to balance with the Ryzen 7 2700X. It will overclock to about 4.2 GHz on all cores within the bounds of reasonable voltages. However, its single-core Precision Boost speed ranges up to 4.35 GHz.

      Like our test notes state, Precision Boost 2 and XFR 2 are already bringing the 2700X up to 4.075 GHz all-core under the cooler I used. I could clock the chip up to 4.2 GHz but then it loses the peak of its single-core frequency for a roughly 3% increase in all-core performance. It’s simply not worth overclocking our particular sample when the chip is already doing most of the work.

        • Doctor Venture
        • 1 year ago

        I understand that. I guess my point was that we already know that the Ryzen CPUs aren’t worth overclocking due to the minimal gains, and while I completely agree with you including both stock and overclocked data points for both the i7-8700K and the 8086K, it just seemed a bit gratuitous to include the Ryzen in there.

        Just my two cents.

      • chrcoluk
      • 1 year ago

      You’re not wrong, but here’s the thing with reviewers; it’s a pattern I keep seeing repeated as well.

      When they review a product, for whatever reason it seems the reviewer feels obligated to put it in the best light possible.

      So what you tend to see is this.

      If an Intel product is being reviewed, they will throw in overclocked figures and compare them to a stock AMD product.
      If an AMD product is being reviewed, they will throw in overclocked figures and compare them to a stock Intel product.

      Then I guess they expect people to simply not notice LOL.

        • Waco
        • 1 year ago

        Jeff already explained the methodology.

    • DPete27
    • 1 year ago

    The i7-8086K is NOT soldered, and has been overclocked to 7.24GHz.

    https://youtu.be/DX24ocSJ4AI

      • derFunkenstein
      • 1 year ago

      Oh all we had to do was push the vcore up to 1.85v. LOL

        • jihadjoe
        • 1 year ago

        One more data point from Anandtech’s friend: 7.3GHz @ 1.70V

        https://www.anandtech.com/show/12945/the-intel-core-i7-8086k-review/2

    • psuedonymous
    • 1 year ago

    tl;dr:
    If you were considering getting a pre-binned 8700K from Silicon Lottery (or buying a pile of 8700Ks to bin yourself), then this is likely a cheaper way to get a more highly binned part. If you were not considering a pre-binned CPU, then this is probably not the CPU for you.

    • Unknown-Error
    • 1 year ago

    Why is the 2700X’s “Time spent beyond XX ms” so messed up in all of the games?

      • Jeff Kampman
      • 1 year ago

      Nothing in there that’s inconsistent with our past testing of the 2700X.

        • dragontamer5788
        • 1 year ago

        Have you always defaulted to 8.3 ms?? (120Hz)

        I feel like maybe 16.7 ms was the default before, or something. I mean, it makes sense: modern processors are all 60-FPS-consistent. So making the difficulty harder with an 8.3-ms default is reasonable.

        I’m wondering if there can be a good infographic that combines the various “badness” metrics into a single graphic. Because 60 FPS is still a useful benchmark, and I think that 8.3 ms (120 Hz) is more of a luxury than a necessity.

        EDIT: Maybe if you did a log-scale graph with FPS-indicators on the left-hand side? For example: 8.3ms (120Hz) could have a dotted-line or something. Log-scale would be based on 480Hz (2.08ms).

        The major benchmarks are 24Hz (cinematic quality), 30Hz, 60Hz, 120Hz, 144Hz, 240Hz, and 480Hz (unrealistic but a log-scale needs a “base”).

          • Goty
          • 1 year ago

          I don’t know anything about the capabilities of that control, but it would be handy if it could be configured on a per-graph basis to default to the lowest bin containing a majority of non-zero measurements.

            • dragontamer5788
            • 1 year ago

            From the perspective of a user however, users only “care” about badness if they have a monitor at that speed or better.

            * If you have a 60FPS monitor, you don’t care about anything below 16.7ms badness.
            * If you have a 120FPS monitor, you don’t care about anything below 8.3ms badness.

            So it’s not something that should be “automatic,” based on the data or anything. The user should explicitly click on the monitor/benchmark that matches their hardware.

            From that perspective, 60Hz, 75Hz, 120Hz, and 144Hz are the common monitor benchmarks I’m aware of.

            • Jeff Kampman
            • 1 year ago

            This is a misunderstanding of how non-VRR monitors and the speed at which games run interact. Frame rates higher than monitor refresh can still have a positive impact on perceived smoothness/responsiveness in games.

            • dragontamer5788
            • 1 year ago

            Thanks for the reply.

            I do have one thing though: my main goal was to push for a log graph or even a log-log graph for the frame-time percentile charts (https://techreport.com/r.x/2018_06_11_Intel_s_Core_i7_8086K_CPU_reviewed/Grand_Theft_Auto_V_frametime_percentile.png).

            With a log chart (y-axis logarithmic), we’d see more information across the various frame times. Not too useful in this case, but a log chart probably would make APUs and high-end graphics cards “fit” on the same graph.

            A log-log chart (y-axis AND x-axis logarithmic) will “stretch” out the 90% and 99% areas on the far right side of the graph, which is arguably the most important part.

            So please, consider my push for log graphs and/or log-log graphs (https://en.wikipedia.org/wiki/Log-log_plot).

            EDIT: Man, that took a lot of tries to get that link working...

            • Goty
            • 1 year ago

            I don’t know about everyone else, but I’m more interested in the performance of these parts in general, not solely for my independent use case. I have a 60 Hz monitor, but that doesn’t mean I’m completely uninterested in the performance comparison at lower frametimes, especially in cases where all CPUs in the comparison are able to maintain better than 16.7 ms frametimes throughout the entire test.

            *EDIT* Jeff ninja’d my underlying point.

          • Freon
          • 1 year ago

          For higher end systems it makes sense to default to the 120hz timing value, though Gsync sort of makes those metrics arbitrary so I ignore them. If you have a Gsync/Freesync panel that you use you might as well ignore that graph and focus on the 99th percentile and averages for an entire run.

          24 Hz is irrelevant for gaming benchmarks.

          They have 30 ms, but it shouldn’t come into play except in lower-end GPU testing. Not really relevant for any CPU-focused testing.

          480 is a waste and just clutters the graph.

          Also, this whole topic has been beaten to death.

    • ronch
    • 1 year ago

    $75 more for 6% more single-core Turbo clock. That ain’t much, and for 3/4 of a Benjamin I think Intel at least could’ve spent $3 on a nice metal box like the one AMD initially used for its 8C FX models. I mean, the real reason this thing’s here is to commemorate 40 years, right? It’s a halo chip. Might as well make it feel a bit more special. It would also make it look nicer sitting on the shelf for those buying it purely for collecting purposes, processor inside the metal box or not.

    On another note though, I think the real winners here are the 2700X and the 8700K, depending on what stuff you run. It’s really just amazing that you could get those chips at $330 – $350 when folks ponied up $340 for just 4 cores for so long. Out of principle alone I’m going with Ryzen on my next upgrade for shaking things up. Now all we need is for RAM prices to come back to earth.

      • ptsant
      • 1 year ago

      I agree. I had hopes for this part. Not necessarily for extreme performance, which is already great, but for a nicer package, soldered heatsink, maybe a few “extras” that would make it look special.

      The “Devil’s Canyon” CPU felt more exotic compared with the vanilla version.

      Marketing fail.

        • ronch
        • 1 year ago

        I would guess marketing wanted to put it in a fancier box but the bean counters got the upper hand.

    • tsk
    • 1 year ago

    This is like reviewing a new iPhone color.

      • derFunkenstein
      • 1 year ago

      I think we all knew it would be, but I also think we were all hoping we were wrong.

        • EndlessWaves
        • 1 year ago

        It does seem like it was always an unpromising subject to spend review time on.

          • derFunkenstein
          • 1 year ago

          It’s not like it took long. Jeff got it on Saturday and did a lot of the OC testing on the YouTube stream. To have a review up on Monday morning is pretty impressive turnaround time.

            • Jeff Kampman
            • 1 year ago

            I mean, every hour of every one of those days was consumed by testing and writing, but in an absolute sense it didn’t take that long, yes 😉

            • derFunkenstein
            • 1 year ago

            I don’t mean to downplay it, I just mean that it seems to have gone relatively quickly.

            edit: it’s not like you had to run tests on 4 different platforms and 12 different CPUs like you have in the past. Folks that want that can look at very recent CPU reviews on this very website and compare to this.

    • elites2012
    • 1 year ago

    Why are outdated benchmarks still being run? The AIDA benchmarks have not been updated in yrs. Also, they are not optimized for Ryzen chips.

      • chuckula
      • 1 year ago

      [Looks at the AIDA64 CPU Hash results]

      Yeah, AIDA is [b<]so[/b<] not optimized for RyZen at all! Take it from an expert, you need to work on your trolling technique.

        • Klimax
        • 1 year ago

        The hash benchmark is an anomaly. It uses Intel’s SHA instructions if it can; RyZen supports them, and so do newer Atoms, but no Core chip does. It is effectively a test of instruction support and, in second order, of RAM bandwidth.
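
[Editor’s note: for context on the point above, the SHA extensions show up as the `sha_ni` flag in a Linux /proc/cpuinfo `flags` line. A minimal sketch for checking that flag; the helper name is mine, not AIDA64’s, and the sample flag strings are illustrative:]

```python
def has_sha_ni(flags_line):
    """True if a /proc/cpuinfo 'flags' line advertises the SHA extensions."""
    return "sha_ni" in flags_line.split()

# Ryzen-era flags lines list sha_ni; Coffee Lake-era Core chips do not.
print(has_sha_ni("flags : fpu mmx sse sse2 avx2 sha_ni"))  # True
print(has_sha_ni("flags : fpu mmx sse sse2 avx2 aes"))     # False
```

[Chips without the flag take a plain software path, which is why, as the comment says, the score is effectively an instruction-support test.]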

          • chuckula
          • 1 year ago

          I’m well aware of [i<]why[/i<] it's optimized for RyZen. But that's not the point here, especially for a benchmark that suspiciously supports very new instructions and that has "not been updated in yrs. [sic]"

      • Jeff Kampman
      • 1 year ago

      FinalWire Unveils AIDA64 v5.97
      AVX-512 Benchmarks and AMD Raven Ridge Support
      Posted | March 28, 2018
      The latest AIDA64 update implements 64-bit AVX-512 accelerated benchmarks, adds monitoring of sensor values on Asus ROG RGB LED motherboards and video cards, and supports the latest AMD and Intel CPU platforms as well as the new graphics and GPGPU computing technologies by both AMD and nVIDIA.
      • AVX-512 optimized benchmarks for Intel Skylake-X and Cannon Lake CPUs
      • Microsoft Windows 10 Spring Creators Update support
      • [b<]>>> AVX2 and FMA accelerated benchmarks for AMD Pinnacle Ridge and Raven Ridge <<<[/b<]
      • Asus ROG RGB LED motherboard and video card support
      • Optimized 64-bit benchmarks for Intel Atom C3000 Denverton SoC
      • Improvements for Intel Cannon Lake PCH chipset based motherboards
      • 64-bit multi-threaded benchmarks for Intel Celeron/Pentium Gemini Lake SoC
      • Advanced support for 3ware, AMD, HighPoint, Intel, JMicron, LSI RAID controllers
      • GPU details for nVIDIA GeForce GTX 1060 5GB, Quadro V100, Titan V

        • Chrispy_
        • 1 year ago

        Dude, that’s so out of date. Nobody uses ROG RGB LED motherboards or video cards any more.

      • chrcoluk
      • 1 year ago

      Think about what you just said, and then think again; keep thinking till you understand.

      If you only run benchmarks optimised for Ryzen, are you fairly benchmarking systems? Does that accurately reflect expected performance for the software out there that is “not optimised for ryzen”?

    • deruberhanyok
    • 1 year ago

    PSA: Intel is supposed to be notifying winners of the 8086k giveaway “on or about” today (June 11), so check your inboxes/spam filters!

    Maybe some lucky TR reader will get one!

    • auxy
    • 1 year ago

    See Zak, aren’t you glad I convinced you not to wait for this? Hihihi. (*´艸•*)

      • RAGEPRO
      • 1 year ago

      Ahem. That was your bro, lil’ bit. But yeah, thanks for coming to MC with me. The new 8700K is pretty hot. When you gonna hook me up with that Polaris part you promised me boo? 😉

    • B166ER
    • 1 year ago

    No DAWBench tho….

      • Jeff Kampman
      • 1 year ago

      We were pressed for time and DAW Bench is the single most time-consuming benchmark we conduct, sorry.

    • Kretschmer
    • 1 year ago

    So Intel commemorated the anniversary with a cheaper-than-market-rate binned CPU with a funky name.

    All six people who want to buy a binned 8700K and have yet to do so should be happy for this.
    The rest of us should be bored. Yet I’m sure the comments section will be full of people who are ecstatic or enraged over this goofy one-off marketing product.

      • just brew it!
      • 1 year ago

      Neither ecstatic nor enraged here. I don’t plan to buy one, but it’s a cute marketing stunt and a viable (albeit niche) product in its own right.

    • uni-mitation
    • 1 year ago

    I believe there is a mistake in the Grand Theft Auto time-spent-beyond figures: it shows the OC’d 8086K beyond 50 ms 257 times; 274 times at 33.3 ms; 297 times at 16.7 ms.

    Unless it is correct then this chip possesses artificial intelligence and has developed self-awareness that it was being prodded by us dumb humans: skynet is upon us and the 8086 is the vanguard!

    uni-mitation

    • Bauxite
    • 1 year ago

    Literally cheaper to buy an 8700k, a reusable delid kit, liquid metal or your favorite paste and a milled copper IHS replacement.

    At least then you can “show” people your cpu instead of having slightly different letters hiding underneath the heatsink forever.

    Imagine if they left off the useless igpu and had an actual 8 core desktop SKU, that would be a more worthy successor.

      • chuckula
      • 1 year ago

      [quote<]Imagine if they left off the useless igpu and had an actual 8 core desktop SKU, that would be a more worthy successor.[/quote<] Imagine a post-apocalyptic war-zone in which Intel makes an actual 8 core desktop SKU that [b<]leaves in the useless IGP![/b<]

      • jihadjoe
      • 1 year ago

      There’s a guy on /r/intel who already [url=https://www.reddit.com/r/intel/comments/8q38n9/the_tech_report_i78086k_spoilers/e0gfaig/<]got his to 5.5GHz with a delid[/url<]. That's 200MHz higher than any 8700k I've known which mostly top out at 5.3GHz without going to extreme (LN2) overclocking. If you're going to go that far the $75 premium is certainly worth it, especially since Intel seems to be binning the best Coffee Lake-S chips for 8086ks. I know I'd be pretty bummed if I delidded, replaced the heatspreader and used liquid metal only to find out my 8700k is a dud.

        • DavidC1
        • 1 year ago

          Needing water cooling AND a delid starts to fall into the exotic-cooling category.

          • jihadjoe
          • 1 year ago

          Anyone getting a binned CPU from Siliconlottery is effectively doing at least as much, and the parent of this thread specifically mentioned delidding an 8700k, using liquid metal and replacing the IHS with milled copper.

            • drwho
            • 1 year ago

            Surely you can reclaim the $$$ spent on this CPU from your expenses? I don’t know how you get paid, tho. Is it per article? Plus, you could put it back in its box and sell it on, or keep it.
            No mention of watts used? LOL, no power usage figures.
            I like the iGPU: good for HTPC/testing/emergencies, and for those NOT playing AAA games.

      • srg86
      • 1 year ago

      I’m always really annoyed when people call the iGPU useless. I like a fast CPU but don’t play games, and the iGPU is fine for my uses.

        • VincentHanna
        • 1 year ago

        The dedicated GPU dies and you can still use windows.

        Priceless.

        • SkyWarrior
        • 1 year ago

        You have a point, but somehow some people (count me as one) don’t like their precious RAM being hijacked by an underpowered and underprogrammed graphics unit.

        • Chrispy_
        • 1 year ago

        I’m starting to think that even the most basic iGPUs have now reached that critical mass where they’re good enough for [i<]everything[/i<] except games. Even the low-power Core-Y chips can decode 4K and run browser-based 3D. They play Minecraft and casual games for younger kids without issues. Our work “presentation” thin-and-light Asus Zenbooks can run 3DSMax, AutoCAD, Enscape, Microstation, Revit, Rhino, Unity, and Unreal projects at adequate framerates and without graphical issues.

        If you want to run the latest AAA games at decent resolutions, details, and framerates, then yes, iGPUs are useless. But that’s also not what they’re designed for, and it’s not what they’re sold as being suited for, so people who call them useless are completely missing the point and the wider market demographic of the non-gaming PC user.

        • YukaKun
        • 1 year ago

        Then you should ask Intel to put them back on the MoBo instead of in the CPU, and we can all be happy.

        It’s a waste of silicon space in CPUs when you don’t have such basic needs; that is why most enthusiasts call it “useless”.

        Cheers!

    • lycium
    • 1 year ago

    Indigo Renderer dev (and TR reader since the ’90s) here; what a pleasant surprise to see it in the comparison! Thanks for including it, and please let me know if I can help with anything. Happy to answer any questions 🙂

    Looks like my delidded i7 8700K is solid for now, but AMD’s march forward continues unabated!

      • chuckula
      • 1 year ago

      Download link: [url<]https://www.indigorenderer.com/indigobench[/url<] Linux version available, so I'm un-Krogothed with this benchmark.

        • anotherengineer
        • 1 year ago

        un or anti-krogothed?

          • BobbinThreadbare
          • 1 year ago

          con-krogothed

      • Chrispy_
      • 1 year ago

      It’s common software, right?

      We have several Indigo licenses and AFAIK we only buy tried-and-tested mainstream stuff because we don’t have the time, staff, or money to be paying beta-testers.

      Either way, as an IT manager for a major AEC company, I appreciate your no-nonsense software, have my (rare) upvote.

    • chuckula
    • 1 year ago

    From this review, some of you might think that the 8086K is nothing more than a pricy marketing stunt on Intel’s part.

    Well, as Intel’s officially sponsored shill, I’d just like to say: Hold our beers and wait for Skylake-X2 if you want to see a [b<]real[/b<] marketing stunt!

      • Chrispy_
      • 1 year ago

      Bravo!

      I didn’t actually realise this chip was identical to an 8700K at all times other than single-core active.

      Given the nature of modern operating systems, that will occur approximately NEVER in real-world use.

        • MOSFET
        • 1 year ago

        Windows is still a single-core champ, much of the time.

          • Chrispy_
          • 1 year ago

          You couldn’t be more wrong if you tried.

          My machine is basically idle (Just email, a couple of remote terminal sessions, and a few browser tabs) [url=https://i.imgur.com/kFF5z8Z.png<]yet it's managed to distribute the running services and background tasks evenly across all eight threads,[/url<] despite only using 4% of the CPU. Windows is multi-threaded by design. Even if you think your PC is doing nothing, Windows is probably running 60-100 concurrent threads on background services.

            • chrcoluk
            • 1 year ago

            It’s not actually doing that, tho.

            Basically, Task Manager reports the average usage on each core over the polling interval, and even at its fastest that interval is only once a second.

            The Windows CPU scheduler distributes single-core workloads by rapidly moving them from one core to the next. It’s a weird way to optimise usage, and Linux doesn’t do this, but it’s how Windows works. The net result is that Task Manager sees usage on every core across the polling period and gives the impression you just observed.

            With that said, tho, you are correct that Windows doesn’t just sit there using one core for everything. There are multiple processes, and each process is capable of using its own core for its specific task. There may also be other background apps running on idle systems, installed by the end user; e.g. on my system I have Afterburner, RivaTuner, HWiNFO, and various other apps in the background, which combined are capable of keeping multiple cores awake.

            Now that I have learned that only the single-core clock speed is boosted on these chips, I also consider them a marketing trick; it’s kind of devious that none of the multi-core stock clocks are boosted.
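
[Editor’s note: the averaging effect described above can be sketched with a toy model: a single 100%-busy thread that the scheduler hops round-robin between cores every tick, sampled once per polling interval. The tick and interval lengths are illustrative, not measured Windows behaviour:]

```python
NUM_CORES = 8
TICKS_PER_POLL = 1000  # e.g. ~1 ms scheduler ticks inside a ~1 s poll

def sample_busy_thread():
    """Per-core utilisation one poll would report for a core-hopping busy thread."""
    busy_ticks = [0] * NUM_CORES
    for tick in range(TICKS_PER_POLL):
        core = tick % NUM_CORES  # scheduler hops the thread round-robin
        busy_ticks[core] += 1    # the thread runs flat-out on that core
    return [t / TICKS_PER_POLL for t in busy_ticks]

print(sample_busy_thread())  # 0.125 on every core: one maxed-out thread
                             # shows up as "12.5% everywhere"
```

[So a workload that never occupies more than one core at a time still paints an even load across all eight threads in the per-interval averages.]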

          • Klimax
          • 1 year ago

            Question: from what bloody hole did this nonsense come???

            • Chrispy_
            • 1 year ago

            [url=https://i.redditmedia.com/K1f1WwOd6pTLxFGwCXCeYSnvxwuSequoiZfhLyjaIL8.jpg?s=8931337f86a75f3c3941f214a23fabe9<]This one[/url<] Technically, that's an SFW link, and [i<]technically correct[/i<] is the best kind of correct.
