Intel’s Core i9-9900K CPU reviewed

Intel’s Core i9-9900K is here. With eight cores and 16 threads clocked at up to 5 GHz (on as many as two active cores, to boot), plus a 4.7-GHz all-core Turbo Boost 2.0 clock, the i9-9900K has the potential for incredibly formidable performance. We’ve been testing that performance down to the very last minute, and we have a comprehensive set of results to share with you now, but analysis of those results will have to trickle in as we fully digest the reams of data our work has produced. In all honesty, though, a chip with the kind of performance we’ve been seeing over the past few days doesn’t need us to vouch for much on its behalf.

We’ll be adding more detail and flavor to this article throughout the day (and intermittently, at best, as I’ll be traveling by air and won’t have consistent Internet access), but if you want more perspective on the i9-9900K, David Schor at WikiChip has an excellent run-down of the Coffee Lake Refresh silicon that underpins ninth-generation Core CPUs. For our part, let’s dive right into our performance results.

Our testing methods

As always, we did our best to deliver clean benchmarking numbers. We ran each benchmark at least three times and took the median of those results. Our test systems were configured as follows:

Processors: Intel Core i7-8700K, Core i7-9700K, Core i9-9900K
CPU cooler: Corsair H100i Pro 240-mm closed-loop liquid cooler
Motherboard: Gigabyte Z390 Aorus Master
Chipset: Intel Z390
Memory: 16 GB G.Skill Flare X (2x 8 GB) DDR4 SDRAM at 3200 MT/s (actual), 14-14-14-34 2T
System drive: Samsung 960 Pro 512 GB NVMe SSD

Processors: AMD Ryzen 7 2700X, Ryzen 5 2600X
CPU cooler: EK Predator 240-mm closed-loop liquid cooler
Motherboard: Gigabyte X470 Aorus Gaming 7 Wifi
Chipset: AMD X470
Memory: 16 GB G.Skill Flare X (2x 8 GB) DDR4 SDRAM at 3200 MT/s (actual), 14-14-14-34 2T
System drive: Samsung 960 EVO 500 GB NVMe SSD

Processors: AMD Ryzen Threadripper 2950X, Ryzen Threadripper 1920X
CPU cooler: Enermax Liqtech TR4 240-mm closed-loop liquid cooler
Motherboard: Gigabyte X399 Aorus Xtreme
Chipset: AMD X399
Memory: 32 GB G.Skill Flare X (4x 8 GB) DDR4 SDRAM at 3200 MT/s (actual), 14-14-14-34 1T
System drive: Samsung 970 EVO 500 GB NVMe SSD

Processor: Intel Core i9-7900X
CPU cooler: Corsair H100i Pro 240-mm closed-loop liquid cooler
Motherboard: Gigabyte X299 Designare EX
Chipset: Intel X299
Memory: 32 GB G.Skill Flare X (4x 8 GB) DDR4 SDRAM at 3200 MT/s (actual), 14-14-14-34 1T
System drive: Intel 750 Series 400 GB NVMe SSD

Our test systems shared the following components:

Graphics card: Nvidia GeForce RTX 2080 Ti Founders Edition
Graphics driver: GeForce 411.63
Power supply: Thermaltake Grand Gold 1200 W (AMD systems); Seasonic Prime Platinum 1000 W (Intel systems)

Some other notes on our testing methods:

  • All test systems were updated with the latest firmware, graphics drivers, and Windows updates before we began collecting data, including patches for the Spectre and Meltdown vulnerabilities where applicable. As a result, test data from this review should not be compared with results collected in past TR reviews. Similarly, all applications used in the course of data collection were the most current versions available as of press time and cannot be used to cross-compare with older data.
  • Our test systems were all configured using the Windows Balanced power plan, including AMD systems that previously would have used the Ryzen Balanced plan. AMD’s suggested configuration for its CPUs no longer includes the Ryzen Balanced power plan as of Windows’ Fall Creators Update, also known as “RS3” or Redstone 3.
  • Unless otherwise noted, all productivity tests were conducted with a display resolution of 2560×1440 at 60 Hz. Gaming tests were conducted at 1920×1080 and 144 Hz.

Our testing methods are generally publicly available and reproducible. If you have any questions regarding our testing methods, feel free to leave a comment on this article or join us in the forums to discuss them.
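
Incidentally, the median-of-three aggregation described above is easy to reproduce. Here's a minimal sketch in Python of the idea; the benchmark command itself is a hypothetical stand-in for any of the tests that follow:

import statistics
import subprocess
import time

def run_once(cmd: list[str]) -> float:
    """Time a single run of a benchmark command (hypothetical example)."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

def median_of_runs(cmd: list[str], runs: int = 3) -> float:
    """Run the benchmark at least three times and report the median,
    which shrugs off a single outlier run better than the mean does."""
    return statistics.median(run_once(cmd) for _ in range(runs))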

 

Memory subsystem performance

The AIDA64 utility includes some basic tests of memory bandwidth and latency that will let us peer into the differences in behavior among the memory subsystems of the processors on the bench today, if there are any.
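
As a sanity check on AIDA64's bandwidth numbers, the theoretical peak of a DDR4 memory subsystem is just the transfer rate times the bus width times the channel count. A quick sketch of the arithmetic for our test configurations:

def peak_bandwidth_gbs(transfers_mts: float, channels: int) -> float:
    """Theoretical peak bandwidth in GB/s for 64-bit (8-byte) DDR4 channels."""
    return transfers_mts * 1e6 * 8 * channels / 1e9

print(peak_bandwidth_gbs(3200, 2))  # 51.2 GB/s: dual-channel mainstream parts
print(peak_bandwidth_gbs(3200, 4))  # 102.4 GB/s: quad-channel X299/X399 parts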

Some quick synthetic math tests

AIDA64 also includes some useful micro-benchmarks that we can use to flush out broad differences among the CPUs on our bench. The PhotoWorxx test uses AVX2 instructions on all of these chips. The CPU Hash integer benchmark uses AVX instructions, along with the SHA Extensions that Ryzen CPUs support, while the single-precision FPU Julia and double-precision Mandel tests use AVX2 with FMA.
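
For the curious, the Julia and Mandel tests are escape-time fractal kernels whose inner loops are long chains of multiplies and adds, exactly the pattern that FMA units excel at. A rough Python sketch of the kind of arithmetic involved (illustrative only, not AIDA64's actual code):

def julia_escape_count(zr: float, zi: float, cr: float, ci: float,
                       max_iter: int = 256) -> int:
    """Iterate z = z^2 + c until |z| > 2; every step is multiply-add
    work that maps neatly onto AVX2 FMA instructions."""
    for i in range(max_iter):
        zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
        if zr * zr + zi * zi > 4.0:
            return i
    return max_iter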

 

Javascript

The usefulness of Javascript microbenchmarks for comparing browser performance may be on the wane, but these tests still allow us to tease out some single-threaded performance differences among CPUs. As part of our transition to using the Mechanical TuRk to benchmark our chips, we’ve had to switch to Google’s Chrome browser so that we can automate these tests. Chrome does perform differently on these benchmarks than Microsoft Edge, our previous browser of choice, so it’s vitally important not to cross-compare these results with older TR reviews.

WebXPRT 3

The WebXPRT 3 benchmark is meant to simulate some realistic workloads one might encounter in web browsing. It’s here primarily as a counterweight to the more synthetic microbenchmarking tools above.

WebXPRT isn’t entirely single-threaded—it uses web workers to perform asynchronous execution of Javascript in some of its tests.

 

Compiling code with GCC

Our resident code monkey, Bruno Ferreira, helped us put together this code-compiling test. Qtbench records the time needed to compile the Qt SDK using the GCC compiler. The number of jobs dispatched by the Qtbench script is configurable, and we set the number of threads to match the hardware thread count for each CPU.
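
The thread-count matching is the important knob there. If you wanted to reproduce the idea by hand rather than through Qtbench, it would look something like this sketch (the build directory is hypothetical):

import os
import subprocess

def build_with_all_threads(build_dir: str) -> None:
    """Dispatch one compile job per hardware thread, as our test does."""
    jobs = os.cpu_count() or 1  # e.g. 16 on the i9-9900K, 8 on the i7-9700K
    subprocess.run(["make", f"-j{jobs}"], cwd=build_dir, check=True)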

File compression with 7-Zip

The free and open-source 7-Zip archiving utility has a built-in benchmark that occupies every core and thread of the host system.
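
Replicating this one at home is simple, since the benchmark is exposed through 7-Zip's command-line "b" switch. A sketch that sweeps thread counts to watch scaling (assuming 7z is on your PATH):

import subprocess

# "7z b" runs the built-in benchmark; -mmtN pins it to N threads.
for threads in (1, 2, 4, 8, 16):
    subprocess.run(["7z", "b", f"-mmt{threads}"], check=True)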

Disk encryption with Veracrypt

Veracrypt’s built-in benchmark lets us measure disk-encryption throughput with both the hardware-accelerated AES algorithm and ciphers like Twofish that must be computed entirely in software.

 

Cinebench

The evergreen Cinebench benchmark is powered by Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. The test runs with a single thread and then with as many threads as possible.

Blender

Blender is a widely used, open-source 3D modeling and rendering application. The app can take advantage of AVX2 instructions on compatible CPUs. We chose the “bmw27” test file from Blender’s selection of benchmark scenes to put our CPUs through their paces.

Corona

Corona, as its developers put it, is a “high-performance (un)biased photorealistic renderer, available for Autodesk 3ds Max and as a standalone CLI application, and in development for Maxon Cinema 4D.”

The company has made a standalone benchmark with its rendering engine inside, so it’s a no-brainer to give it a spin on these CPUs.

Indigo

Indigo Bench is a standalone application based on the Indigo rendering engine, which creates photo-realistic images using what its developers call “unbiased rendering technologies.”

Handbrake

Handbrake is a popular video-transcoding app that recently hit version 1.1.1. To see how it performs on these chips, we converted a roughly two-minute 4K source file from an iPhone 6S into a 1920×1080, 30 FPS MKV using the HEVC algorithm implemented in the x265 open-source encoder. We otherwise left the preset at its default settings.
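
Our test runs through Handbrake's GUI, but a roughly equivalent HandBrakeCLI invocation would look something like the sketch below (the file names are hypothetical, and this is an approximation of our settings rather than our exact script):

import subprocess

subprocess.run([
    "HandBrakeCLI",
    "-i", "iphone_4k_source.mov",  # hypothetical input clip
    "-o", "output_1080p30.mkv",
    "-e", "x265",                  # HEVC via the x265 encoder
    "-w", "1920", "-l", "1080",    # scale to 1920x1080
    "-r", "30",                    # 30 FPS output
], check=True)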

 

Digital audio workstation performance

After an extended hiatus, the duo of DAWBench project files—DSP 2017 and VI 2017—return to make our CPUs sweat. The DSP benchmark tests the raw number of VST plugins a system can handle, while the complex VI project simulates a virtual instrument and sampling workload.

A very special thanks is in order here for Native Instruments, who kindly provided us with the Kontakt licenses necessary to run the DAWBench VI project file. We greatly appreciate NI’s support—this benchmark would not have been possible without the help of the folks there. Be sure to check out their many fine digital audio products.

A very special thanks also to RME Audio, which cut us a deal on one of its Babyface Pro audio interfaces to assist with our testing. RME’s hardware and software are legendary for their low latency and high quality, and the Babyface Pro has exemplified those qualities over the course of our time with it.

We used the latest version of the Reaper DAW for Windows as the platform for our tests. To simulate a demanding workload, we tested each CPU with a 24-bit depth and a 96-kHz sampling rate, and at two ASIO buffer depths: 96, the lowest our interface will allow at a 96-kHz sampling rate, and 128. In response to popular demand, we’re also testing two buffer depths at a sampling rate of 48 kHz: 64 and 128. We added VSTs or notes of polyphony to each session until we started hearing popping or other audio artifacts.
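
For context on why those buffer depths hurt, the ASIO buffer size translates directly into the time the CPU has to fill each buffer: samples divided by sample rate. A worked example covering our four test settings:

def buffer_time_ms(samples: int, sample_rate_hz: int) -> float:
    """Time available to fill one ASIO buffer, in milliseconds."""
    return 1000.0 * samples / sample_rate_hz

print(buffer_time_ms(96, 96_000))   # 1.0 ms: our most demanding setting
print(buffer_time_ms(128, 96_000))  # ~1.33 ms
print(buffer_time_ms(64, 48_000))   # ~1.33 ms
print(buffer_time_ms(128, 48_000))  # ~2.67 ms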





 

Crysis 3

Even as it nears six years of age, Crysis 3 remains one of the most punishing games one can run. With an appetite for CPU performance and graphics power alike, this title is a great way to put the performance of any gaming system in perspective.



 

Assassin’s Creed Odyssey

Ubisoft’s most recent Assassin’s Creed games have developed reputations as CPU hogs, so we grabbed Odyssey and put it to the test on our systems using a 1920×1080 resolution and the Ultra High preset.



 

Deus Ex: Mankind Divided

Thanks to its richly detailed environments and copious graphics settings, Deus Ex: Mankind Divided can punish graphics cards at high resolutions and make CPUs sweat at high refresh rates.



 

Grand Theft Auto V

Grand Theft Auto V’s lavish simulation of Los Santos and surrounding locales can really put the hurt on a CPU, and we’re putting that characteristic to good use here.



 

Hitman

After an extended absence from our test suite thanks to a frame rate cap, Hitman is back. This game tends to max out a couple of threads but not every core on a chip, so it’s a good test of the intermediate parts of each processor’s frequency-scaling curve. We cranked the game’s graphics settings at 1920×1080 and got to testing.



 

Far Cry 5



 

Gaming and streaming with Far Cry 5 and OBS

Intel made a point of the Core i9-9900K’s single-PC gaming and streaming prowess during its introduction of the chip, so we took Far Cry 5 and shared our test run with the world using Open Broadcaster Software (OBS). We chose streaming settings that should be fairly typical for the serious streamer: 1920×1080 output at 60 FPS, with a bit rate of 6000 Kbps and the “faster” x264 preset for CPU encoding. For some CPUs, we’ve also provided an idea of what a higher-quality stream might look like for client-side performance using the “fast” preset.
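
As an aside, that 6000-Kbps bit rate also pins down the upstream bandwidth a streamer needs. The arithmetic is simple enough to sketch:

def stream_gb_per_hour(kbps: int) -> float:
    """Upstream data consumed by a constant-bitrate stream, in GB per hour."""
    return kbps * 1000 / 8 * 3600 / 1e9

print(stream_gb_per_hour(6000))  # ~2.7 GB per hour of streaming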





 

A quick look at power consumption and efficiency

 

Conclusions

Let’s try and wrap up some of the reams of test data we’ve collected using our famous value scatter charts. To produce these figures, we take the geometric mean of the results of our non-synthetic benchmarks in both productivity and gaming tests. Smushing the entirety of our test data into a chart like this inevitably conceals some areas of strength and others of weakness for the CPUs we tested, and as always, we encourage those with specific workloads to consult the benchmark results most relevant to them. Still, if you want an at-a-glance idea of the all-round competence of these chips, our scatters serve well enough.
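
For those curious about the math behind the scatters, the geometric mean is just the nth root of the product of n scores, and it keeps any single benchmark from dominating the summary. A minimal sketch, assuming per-benchmark scores normalized so that higher is better:

import math

def geomean(scores: list[float]) -> float:
    """Geometric mean via logs, which avoids overflow on long score lists."""
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

# Hypothetical normalized productivity scores for one CPU:
print(geomean([1.00, 1.15, 0.92, 1.30]))  # ~1.08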

In our productivity tests, Intel has delivered a CPU that’s actually a better all-rounder than the Core i9-7900X for roughly half as much money on its window sticker. Do I need to say more? Unless you need the memory bandwidth of a quad-channel platform for work like computational fluid dynamics or lots of PCIe lanes for NVMe storage, Intel has just obviated its entry-level high-end desktop CPUs with this part. Not surprising, since the i9-9900K is essentially an HEDT CPU in a mainstream package.


In our gaming tests, the i9-9900K doesn’t put quite as much light between itself and its predecessors. The i7-8700K was already the world’s best CPU for high-refresh-rate gaming, and since most CPU-bound games aren’t effective at using every bit of every core and thread available to them, the i9-9900K’s gains seem to come from its 300-MHz peak clock speed increase over the i7-8700K. That said, the positioning of the high-end desktop CPUs on our 99th-percentile-FPS-per-dollar chart emphasizes just how balanced this chip is across both productivity and gaming workloads. You couldn’t buy this potent a blend of productivity and gaming performance at all before now.

Our Far Cry 5 streaming tests further show that gamers can use demanding x264 encoding presets with the i9-9900K while enjoying unparalleled fluidity and smoothness while sharing their gameplay with the world. That’s yet another high-end desktop advantage that the i9-9900K snatches away.

Apologies for the rather threadbare preceding pages, but our results speak for themselves. The Core i9-9900K often shadows chips costing nearly twice as much in multithreaded workloads while delivering unbeatable single-threaded responsiveness, gaming performance, and single-PC streaming chops. In some arenas, like the lowest-latency realms of DAWBench VI testing, the i9-9900K’s performance is simply without precedent.

Intel Core i9-9900K

October 2018

While the competition might occasionally beat Intel’s baby in one benchmark or another, no other CPU on the market can match its all-around balance. We’ve long wanted an Intel chip that didn’t trade off the single-threaded performance of the company’s client cores for the multithreaded grunt of its high-end desktop parts, and the i9-9900K finally delivers on that front.

AMD’s Ryzen Threadripper CPUs are certainly better performers in some specific workloads, but folks who want a single PC that’s good enough to handle both heavy-duty work and serious play after hours will want this chip. Doesn’t hurt that its $500 price tag is tantalizingly low for the performance on offer, although we doubt it’ll be easy to find an i9-9900K for that price until production ramps up.

All told, the i9-9900K is a remarkable and heady blend of balanced performance that truly bridges the mainstream and high-end desktop PC. Like the Core i7-8700K before it, there’s nothing this chip can’t do well, and the extra dose of power it brings to the table in all sorts of tasks is more than enough to make it worthy of a TR Editor’s Choice award. ‘Nuff said.

Comments closed
    • Nictron
    • 1 year ago

    Thank you for a great review Jeff.

    I have a request, or you can provide the answer if you know? In your review of the Core i7-6700K 'Skylake' processor, the Core i7-5775C showed some of the best low-latency gaming performance, beating the Skylake processors. If we had to test the 5775C for low-latency performance against these newer chips, what would the result be? I know the top-end performance would be better with the newer chips, but what would the lower-level frames look like?

    Quoted from the 'Skylake' review:

    "Things get weird, though, with the Core i7-5775C in the picture. The Broadwell-based CPU with the 128MB L4 cache turns in the top performance in Project Cars, outdoing even Skylake. Looks like that big cache can help with gaming performance, even with a discrete GPU."

    "The 6700K looks strong here, and the 5775C's magical gaming prowess continues. Still, all of the processors are doing a great job of producing smooth animation."

    "Meanwhile, that crazy Broadwell 5775C embarrasses them both with the help of its beefy L4 cache"

    "5775C upstages it with a freakish string of gaming performance wins, even though its prevailing clock speed is ~500MHz lower."

    "the 5775C could deliver gaming performance that's superior to Skylake, provided your games of choice benefit as much from that L4 cache as the ones we tested did."

    I am interested in this question because upgrading the graphics card could lead to much better low-level frame performance.

    Thank you,

      • Concupiscence
      • 1 year ago

      I’ve gotta say, you definitely got your wish: https://techreport.com/review/34205/checking-in-on-intel-core-i7-5775c-for-gaming-in-2018

        • Nictron
        • 1 year ago

        Yup I did and happy to see that it still has legs.

        Happy subscriber today!

        Though when I upgrade I will have to take the plunge on a full system to take advantage of those higher frame rates. The 5775c does not have the legs to power these new GPUs.

        Thank you and Much appreciated Jeff.

    • Firestarter
    • 1 year ago

    got my 9700K running now, boy I can tell you guys it’s definitely a CPU and it most certainly has 8 cores!

    • HERETIC
    • 1 year ago

    Hey Jeff, when you've finished all this and had a break, how about trying something a little
    different: turn off HT and try OC on 2 or 4 cores. Run a few games that don't demand lots
    of cores…………….
    Reason—the humble red-haired stepchild gets within 3% at 1440. We want MOOOOOOOAR.

    • dodozoid
    • 1 year ago

    What the scatter plots really show is how pointless the high end has become and how potent mid-range CPUs are.
    Would you pay 3x the money for extra 20% of performance?
    It might be worth the money for some professional (perhaps even pro gaming) use cases, but apart from that, it’s just about bragging rights.
    Not that there is anything wrong about that, but buyers should be aware of the fact that anything above i5 or Ryzen 5 is basically just a collectible with no practical benefits.

      • HERETIC
      • 1 year ago

      POTENT is a huge understatement.
      If gaming at 1440 is your aim, the red-headed stepchild (8600/8600K) is within about 3%
      performance-wise………………………………..

        • K-L-Waster
        • 1 year ago

        And the 2600X isn’t too far off either.

          • HERETIC
          • 1 year ago

          Yup, 5% @ 1440 with a 1080 Ti.
          Though it does suffer a little in the 99th percentile.
          So the extra few dollars for the 8600/8600K or 9600K seem to be well worth it…………..

      • Concupiscence
      • 1 year ago

      If you’re talking about gaming, you’re pretty much on the money. If you’re throwing in CPU-driven OBS to stream or record gaming, or simultaneous background multitasking, or non-gaming productivity that lends itself to multithreading, that changes considerably.

    • Zizy
    • 1 year ago

    Well, you really need to value a CPU that is for both work and gaming, and need it to be top at both, to seriously consider this. The 9700K is as good a gaming chip while still being decent for the rest and costing much less, while the AMD chips are better perf/$ for work stuff.

    • UnknownZA
    • 1 year ago

    Where’s the hardware setup for this CPU? I don’t see any specifications for the setup anywhere. i.e. what motherboard was used and what about the memory specs etc.

      • Jeff Kampman
      • 1 year ago

      Added.

    • jihadjoe
    • 1 year ago

    Uhh did I miss the testbed setup?

    It seems like the review jumped right in to the synthetics without disclosing what parts were in use.

      • Jeff Kampman
      • 1 year ago

      Added.

    • Voldenuit
    • 1 year ago

    Good news, everyone! Der8auer was able to reduce his 9900K temps by 12C, and all he had to do was, uh (mumbles) ʟᴀᴘ ʜɪs ᴄᴘᴜ: https://www.youtube.com/watch?v=r5Doo-zgyQs

      • chuckula
      • 1 year ago

      Old & busted: Delidding.
      New Hotness: Lapping.

      • danny e.
      • 1 year ago

      interesting. not a fan of the power draw either.

      • UberGerbil
      • 1 year ago

      “Why the hell is this chip so damn thick?”
      Because that’s what brings all the boys to the yard, amiright?
      (Man, google image search for that word is pretty unanimous)

    • End User
    • 1 year ago

    Very tempting. The single core performance is amazeballs.

    I game at 2560×1440 on a 1950X so the cost of a new rig cannot be justified.

      • chuckula
      • 1 year ago

      Just put 4 iPhones next to each other and you’ll never need a primitive PC again.

        • End User
        • 1 year ago

        4 A12s in a row. Tasty.

      • ptsant
      • 1 year ago

      Nah, just wait for 3950X in 6 months. It will almost certainly be an in-place upgrade.

        • End User
        • 1 year ago

        Nah, saving my pennies for a EVGA GeForce RTX 2080 Ti FTW3 ULTRA GAMING.

    • djayjp
    • 1 year ago

    Looking forward to seeing where the 9600k fits in the graph 🙂

    • Jeff Kampman
    • 1 year ago

    I’ve added value scatters and some extra commentary to the conclusion for those who would like an at-a-glance summary of our results. Enjoy!

      • JustAnEngineer
      • 1 year ago

      Thanks, Jeff.

      Do you have any words for how the Core i7-9700K fares without hyperthreading?

        • djayjp
        • 1 year ago

        Presumably you mean 9900k

          • JustAnEngineer
          • 1 year ago

          In the article’s conclusion, Jeff discussed the merits of the Core i9-9900K (8 cores, 16 threads), but he didn’t devote any prose to the Core i7-9700K (8 cores, 8 threads). Intel disables hyperthreading and ¼ of the cache from the former chip to produce the second.
          https://ark.intel.com/compare/126684,186604,186605,123613

            • djayjp
            • 1 year ago

            Oh I misunderstood. I think what would be more interesting, however, is if they provided results of an HT disabled 9900k. Would also love to see 9600k results!

        • DeadOfKnight
        • 1 year ago

        If you check out the AnandTech review, it looks like the 9700k may be a better chip for overclocking. Obviously, YMMV, but it’s definitely a better value and could actually turn out to be the “best CPU for gaming” when overclocked.

      • Noinoi
      • 1 year ago

      Thanks for the welcome addition! Have to say that the 9700K does look like it should be more than reasonably fast enough, too, considering existing non-HEDT CPUs. Holy cow at the 9900K’s prodigious performance, though.

        • Srsly_Bro
        • 1 year ago

        And power consumption…

          • Noinoi
          • 1 year ago

          The 9900K uses less power to finish a task as listed in the power consumption/efficiency page – high instantaneous power consumption can be offset somewhat by doing it faster in the first place.

          It’s still higher than I’d like if I’m going to keep my power supplies as is, though.

      • synthtel2
      • 1 year ago

      Whoa, the Intel CPU shortage must have gone from purported to being a real thing when I wasn’t looking. At $380 and $500, they’d be much better deals than I was thinking; I was originally going by Newegg’s prices of $420 and $580. Speaking of that, Newegg is selling the 8400 for $285 right now. 🙁

      • DeadOfKnight
      • 1 year ago

      Appreciate the extra graphs. These are always my favorite.

    • HERETIC
    • 1 year ago

    Red-headed stepchild misses out again………………

    • Unknown-Error
    • 1 year ago

    Just imagine how fast next years Ice-Lake is going to be! 😮 :O

    • just brew it!
    • 1 year ago

    Nice to see the competition between Intel and AMD heating up some more. This should drive prices down, which is a great thing for enthusiasts.

    • anotherengineer
    • 1 year ago

    2 questions,

    Will this actually sell for $500?
    and
    does it support ecc ram?

      • DeadOfKnight
      • 1 year ago

      Yes and no

        • anotherengineer
        • 1 year ago

        $580 is not $500

        https://www.newegg.com/Product/Product.aspx?Item=N82E16819117957&cm_re=i9_9900k-_-19-117-957-_-Product

          • K-L-Waster
          • 1 year ago

          Are you asking “ever” or “right now”?

          I’d say wait for the initial rush at launch to die down before passing judgement.

    • Laykun
    • 1 year ago

    Would have loved to see the old top-end quads, the 6700K/7700K, in this performance comparison.

    • Forge
    • 1 year ago

    Finally Intel has released something in the sub-600$ market that incontrovertibly beats my i7-4790K in every way. Send one over, Intel, I will lub it and squeeze it and call it George.

    • Eversor
    • 1 year ago

    This CPU launch is a joke. How can you announce a 95W TDP and then have it push over 200W? When you’re doubling power consumption, you’d expect the performance to be quite a bit higher. This is CPU package power only, as posted on Anandtech.

    I expect this to unleash another fiasco of blown motherboard VRMs like in the X299 launch.

    AMD didn’t release a 2800X this time around and will probably bin some cores to close the gap.

      • blargh4
      • 1 year ago

      Because TDP is not specified for turbo clocks? It’s the same process and micro-architecture running at higher clocks; where did you expect this increased efficiency would come from?

        • Voldenuit
        • 1 year ago

        > Because TDP is not specified for turbo clocks?

        That is the exact problem. Anandtech suggested intel should list PL1 and PL2 as two separate TDP figures, and I agree.

        If thermal throttling in laptops is rightly seen as a problem, then thermal/power throttling in desktops should be taken just as seriously.

          • magila
          • 1 year ago

          PL2 doesn’t mean what Ian Cutress thinks it means. PL2 is a power limit which is enforced for a brief period (typically around 30 seconds) before PL1 kicks in. Advertising PL2 doesn’t make much sense because it has nothing to do with what the CPU’s steady state power draw will be.

          People see Intel CPUs drawing huge amounts of power because motherboard firmware often changes the power limits to arbitrarily high values. Intel can’t really predict how much power the CPU will draw in this situation, it will vary depending on workload and from one die to the next.

            • MOSFET
            • 1 year ago

            The extent of what Ian doesn’t know is perpetually perplexing.

        • fomamok
        • 1 year ago

        The point of knowing TDP is to size the PSU for the system.

        Being short by 100 W may cause the system to crash.

    • rudimentary_lathe
    • 1 year ago

    Thanks for the review Jeff.

    The 9700K looks like a winner, depending on how the price shakes out over time. I usually have a few VMs running, and with eight full and fast cores I doubt I’d miss the hyper-threading.

    That said I’m not in the market for a new CPU right now and will be waiting to see what Zen+ looks like next year.

      • Voldenuit
      • 1 year ago

      >That said I’m not in the market for a new CPU right now and will be waiting to see what Zen+ looks like next year.

      Zen+ is already out, it’s the Ryzen 2.

      Zen 2 is coming 1H2019 and will be in the Ryzen 3.

      Don’t know why AMD had to make the core/product naming so confusing.

        • Klimax
        • 1 year ago

        They are learning only from best masters: Intel.

      • NovusBogus
      • 1 year ago

      For gaming and anything short of major number crunching or a VM fetish, I agree. But it’s nice to see Intel offer up a nice yet affordable HEDT platform, even if it’s not Glorious LGA 2011/2066 Master Race. Not that long ago, they were charging $1200 for this level of performance.

    • synthtel2
    • 1 year ago

    The 8700K looks like the real winner here. The 9700K costs $30 more and trades with the 8700K a lot, coming out a tiny bit faster on average, and the 9900K’s price tag is fully halo-class. Meh.

    If nothing less than the best will do, we know where to find it, but very few people have a real use for that. ~$200 buys a very nice CPU these days.

    • 1sh
    • 1 year ago

    WTF I jus bought 8700K less than a year ago, I thought the days of annual upgrades were over…

      • Krogoth
      • 1 year ago

      Unless you have a workload that benefits from having two extra cores, I really wouldn’t sweat it.

      You should be fine until Ice Lake and its successor come out.

      • blargh4
      • 1 year ago

      I’m really not sure why anyone would think Intel was going to ride out 6 cores for several years when the competition has 8 cores nipping at Intel’s heels. The 8700k was pretty clearly a stopgap – but, a perfectly capable one for most workloads.

    • brucek2
    • 1 year ago

    A hearty thank you to AMD for making this product/price available to us!

    • sdch
    • 1 year ago

    Looking forward to more test details since I trust the test methodology on this site. Word on the street is that MCE is broken (always on) for a lot of motherboards right now.

      • Jeff Kampman
      • 1 year ago

      I can confirm that multi-core enhancement was not active on our test motherboard, both from explicitly disabling the setting and observing clock-speed behavior in CPU-Z.

        • sdch
        • 1 year ago

        Thanks for the quick response!

        • sdch
        • 1 year ago

        Any thoughts on power limits being set out of spec (210W vs 119W) on motherboards?

        See Table 5-7 (PL1 and PL2 values):
        [url<]https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/8th-gen-core-family-datasheet-vol-1.pdf[/url<] Edit: If true, this is pretty damning: [url<]https://smallformfactor.net/forum/threads/intel-core-9000-series-processors-discussion.9366/page-3#post-119183[/url<] Edit2: Well, it's true, and the processor can't maintain all core speeds without increasing the power limits: [url<]https://www.golem.de/news/core-i9-9900k-im-test-acht-verloetete-5-ghz-kerne-sind-extrem-1810-136974-4.html[/url<] Performance and power differences (click through images): [url<]https://www.golem.de/news/core-i9-9900k-im-test-acht-verloetete-5-ghz-kerne-sind-extrem-1810-136974-5.html[/url<]

          • jihadjoe
          • 1 year ago

          The high power consumption seems to be partially a symptom of AsRock boards giving the CPU much higher voltage than specified or required, which leads to the high temps/power consumption. Didn’t help that a lot of reviewers used AsRock boards to test.

          Anandtech’s initial review had one of the highest power numbers, with the 9900k pulling 220W (whole-system power). They updated their numbers (https://www.anandtech.com/show/13400/intel-9th-gen-core-i9-9900k-i7-9700k-i5-9600k-review/21) and dropped down to 168W after switching to an MSI board.

          Edit: partially

            • sdch
            • 1 year ago

            Edit: I woke up on the wrong side of the bed today.

            • strangerguy
            • 1 year ago

            Blame the mobo makers catering to the “OCing is so foolproof and easy” meme, aka overvolting everything to the moon. My own MSI Z370M board has VCCSA and VCCIO voltages that are way too high for DDR-3000 operation by default when XMP is enabled.

          • Jeff Kampman
          • 1 year ago

          Our test motherboard’s PL2 limit is 118 W.

    • marvelous
    • 1 year ago

    I’m wondering about the power draw. Anandtech and Tom’s Hardware both put the 2700X a little lower than 8700K levels, but Techreport puts it really close to the 9900K. Which is it?

    I would hate to put up a 200+ watt CPU in my room while gaming.

      • Jeff Kampman
      • 1 year ago

      Are Tom’s and AnandTech using the same test workload (Blender, i.e. AVX-heavy) that we are?

        • marvelous
        • 1 year ago

        Hardware Unboxed is using Blender here: https://www.youtube.com/watch?v=_I--zROoRws

        It shows the 2700X and 8700K with similar results while the 9900K is out there. They were testing Gooseberry, however.

      • aspect
      • 1 year ago

      Different motherboards? Some can draw like 20 watts more.

      • HERETIC
      • 1 year ago

      Spend a little time and analyze what you're reading-
      Anand and Toms use precision equipment to TRY and measure CPU power only.
      Here whole-system power is measured, possibly with a basic wallwart-good enough
      for ballpark. But useless for cross-platform comparison………………….

      • Unknown-Error
      • 1 year ago

      TDP. You can see a disparity in 2700X Cinebench results between Anandtech and Techreport: AT graph (https://images.anandtech.com/graphs/graph12725/97977.png) vs. TR graph (https://techreport.com/r.x/2018_04_19_AMD_s_Ryzen_7_2700X_and_Ryzen_5_2600X_CPUs_reviewed/cinenT.png). "The Stilt" at the AT forums observed similar results (https://i.imgur.com/DwPWcLa.png); you can also check out his "Ryzen - Strictly Technical" thread (https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/).

      The 2700X with the stock cooler acts like a 140W TDP CPU unless you put restrictions on it. Techreport's power numbers are higher probably because they allow the CPU to hit the advertised frequencies. That also means their Cinebench scores are higher compared to others.

      EDIT: That also means their power numbers would be higher.

    • DrDominodog51
    • 1 year ago

    The performance still isn’t quite enough for me to justify the cost of DDR4 and a new motherboard yet. I might have to pick up one of those unlocked 8-Core Ivy Bridge Xeons to replace my 3930K, and wait for DDR4 to drop in price some more.

      • NovusBogus
      • 1 year ago

      Also worth noting that the big-boy sockets and chipsets have a lot of fringe benefits like additional PCIe lanes and CPU driven SATA/USB ports compared to the regular stuff. Consumer grade is finally catching up to Sandy-E but at best they trade shots. I’m dealing with this stuff right now, as my employer foolishly attempted to exchange the engineering workstation configuration for some kind of 1151 based nonsense and we’re drafting a requirements document to justify why we buy $3,000 computers.

    • yokem55
    • 1 year ago

    Just as an aside – the 10/19 launch date is turning out to be a paper launch for the 9900K. Amazon, NewEgg, B & H and Microcenter all have not received their stock on the unit and pre-orders have not been processed. Folks on Reddit are hearing a 11/21 availability date from NewEgg.

      • DancinJack
      • 1 year ago

      Well if Reddit said it, we have to believe it.

      • K-L-Waster
      • 1 year ago

      Sells out fast /= paper launch.

        • Redocbew
        • 1 year ago

          Unless you’re one of the people who wasn’t sitting there at midnight waiting for the stock to go online. With that said, yeah, paper launches are annoying, but even if the Oracle of Reddit is accurate, I’m not seeing how getting one of these chips now compared to getting one a month from now is all that big of a deal.

          • K-L-Waster
          • 1 year ago

          Still doesn’t make it a paper launch. The new shiny always sells out on launch day — the 8000 series did the same, so did the RTX cards — I don’t recall if RyZen+ did but it wouldn’t surprise me….

            • Redocbew
            • 1 year ago

            I’m fully comfortable with the idea that the new shiny always gets a paper launch, but you are welcome to die on that hill if you’re so inclined.

            • K-L-Waster
            • 1 year ago

            Nah, sounds painful.

            • yokem55
            • 1 year ago

            If you can find someone who can report actually being able to buy one today or get one shipped today, I’ll happily concede your point that limited quantities sold out fast versus it being a paper launch. The flip side is folks that pre-ordered the moment the units showed up online being told they won’t get them until a month from now…

        • Krogoth
        • 1 year ago

        Yep, the only real way to tell if it is an actual paper launch is if e-tailers/retailers are out of stock for weeks on end and there's healthy scalper activity in the gray markets.

    • sweatshopking
    • 1 year ago

    Happy to borrow the review chip indefinitely.

      • derFunkenstein
      • 1 year ago

      You can write an article about how quickly every CPU deletes all your files when installing the original RTM version of Windows 10 1809.

        • EzioAs
        • 1 year ago

        Not enough neon greens!!

          • sweatshopking
          • 1 year ago

          THIS GUY GETS IT.
          GO BRIGHT OR GO HOME

        • sweatshopking
        • 1 year ago

        Probably not one worth reading.

    • Air
    • 1 year ago

    I would like to see some temperatures tested with the same cooler at a fixed RPM... to get an idea of how hard it is to cool each CPU, since power alone is not enough due to differences in die size, solder or TIM, etc.

    • DeadOfKnight
    • 1 year ago

    Looks like the 9700k beats it in most games. Hmmm…

      • Jeff Kampman
      • 1 year ago

      It’s almost as though most games are bound up on a single thread.

        • DeadOfKnight
        • 1 year ago

        The Farcry 5 + OBS benchmark is really telling on where the value is.

          • Freon
          • 1 year ago

          As much as I appreciate the test, if you use NVENC instead of the CPU x264 encoder it probably doesn’t matter.

        • Mr Bill
        • 1 year ago

        Have my +3. That made me LOL.

        • Kretschmer
        • 1 year ago

        But I’ve been hearing that the industry just has to catch up to AMD’s MOAR COARS!

    • techguy
    • 1 year ago

    The temperature/OC results on 9000 series chips are horrible. Leave it up to Intel to screw up STIM. Makes me wonder if they did it on purpose so they could go back to paste next generation by pointing to this generation and saying “see, solder doesn’t matter!”

    I’m really not looking forward to having to lap a CPU die…

    https://www.youtube.com/watch?v=r5Doo-zgyQs

      • Jeff Kampman
      • 1 year ago

      You can’t beat physics. Putting eight cores in a relatively small die means there’s just not as much surface area to move heat around as there used to be. Solder was probably a necessary condition for this chip to exist, not a miracle cure for Intel’s heat problems for overclockers. If you want bigger dies with better heat transfer capacity, wait for the soldered X299 Refresh chips coming next month.

      • Krogoth
      • 1 year ago

      Actually, the eight-core Coffee Lakes are remarkably efficient for their power density and size. Remember that you're dealing with ~Skylake x 2 under a single die. The slightly larger Skylake-X LCC dies fare no better when you attempt to push them aggressively.

      Solder wouldn't have made that much of a difference anyway. The whole "solder being vastly better than TIM" meme came from chips that had poorly fitted IHSes (Ivy Bridge to the first batch of Haswell) because the chip package assembly equipment was tuned to soldered IHS chips.

      Soldering just makes IHS fittings more consistent and can help improve thermal dissipation by a little bit. Lapping the die isn't going to help either (too risky, not enough returns to justify it).

      • Beahmont
      • 1 year ago

      It’s more or less two Skylake 95W 6700K chips fused together with more cache, higher clocks, a better and slightly more power-hungry IGP, and non-linear ring bus power costs.

      It’s a marvel of modern IC engineering that the thing has an approximately linear TDP and power curve, especially since it’s crammed into a slightly smaller space than two 6700k die.

    • ozzuneoj
    • 1 year ago

    Really looking forward to seeing some temperature data. Lots of reviews are giving pretty inadequate information in this regard.

      • ptsant
      • 1 year ago

      Depends a lot on cooler, case, ambient temperature etc.

      I wouldn’t pair this chip with a budget cooler, obviously, but I don’t think you’ll have trouble if you have a good air/AiO cooler.

        • ozzuneoj
        • 1 year ago

        I’m still rocking a Thermalright Ultra 120 Extreme, which I’ve been dragging along through 3 different builds in 10 years (E6750+P35, Q9550+P45, 2500K+P67). The 2500K is still working so well I’d probably keep it as is and just build a whole new rig… but I digress.

        I’m mostly interested in the overall thermal performance with a few common mid-range/high-end air coolers versus other recent Intel chips.

        Gamernexus has a pretty decent article about this but it only touches on the 9900K with HT enabled as far as I can tell. I’m more interested in knowing if the 9700K being soldered makes it easier to cool than the 8700K, despite having more cores.

    • gerryg
    • 1 year ago

    Noooooooooo!…..

    • dragontamer5788
    • 1 year ago

    TL;DR: Wow, that’s fast. But at lol 200W+ Power Draw.

    Overall seems like a good buy and worth the $580 that Intel is asking for. But you better build an overall high-end system with excellent cooling, good power delivery, and beefy Motherboard VRMs to feed that power draw.

      • cozzicon
      • 1 year ago

      And they made fun of me for going with a 9590 for threaded work loads….

      • jihadjoe
      • 1 year ago

      200W is just a short burst, not sustained, it’ll quickly drop down to 100W long-term power (https://www.gamersnexus.net/hwreviews/3378-intel-9900k-cpu-review-solder-vs-paste-delid-gaming-benchmarks-vs-2700x/page-4). I agree though that if you want to raise power limits to sustain the advertised all-core boost clocks a good motherboard will be necessary.

    • jensend
    • 1 year ago

    Looking forward to price/performance scatter plots, including platform cost.

      • hiki
      • 1 year ago

      It’s sad when Tomshardware beats Techreport at its own strengths…

      https://i.imgur.com/ngBu9IY.jpg

        • DancinJack
        • 1 year ago

        Someone should teach Tom’s how to make a plot.

        • Beahmont
        • 1 year ago

        Those axis designations are terrible. Reverse them like TR and the thing would look much better.

        Setting your axis up so that down and to the right is the optimal position is always asking for trouble.

        • Krogoth
        • 1 year ago

        That chart only contains gaming benches which are largely irrelevant to people who care about CPU performance.

    • blargh4
    • 1 year ago

    Wow, I’m surprised Chrome’s JS engine trips up on hyperthreading to such a degree. I thought HT performance regressions worth paying any attention to were largely a thing of the past.

      • just brew it!
      • 1 year ago

      “It’s complicated!”

        • Krogoth
        • 1 year ago

        Understatement of the day

      • Redocbew
      • 1 year ago

      If there’s something weird going on concerning a browser, then it’s a pretty safe bet javascript will be involved somehow.

    • Firestarter
    • 1 year ago

    well I just pulled the trigger on a 9700K, that should be a nice upgrade from my 2500K

    • rnalsation
    • 1 year ago

    I would like to know what is causing the i7-9700K to come out on top in those web tests, and also the i5-8400 beating out some faster CPUs.

      • Voldenuit
      • 1 year ago

      Higher boost clocks because no HT? HT is known to increase power consumption and raise temps. Would be interesting to look at power efficiency of virtual threads vs real cores in terms of Ops/Watt.

        • rnalsation
        • 1 year ago

        I guess I don’t know how multi-threaded those tests are, but the i9-9900K takes the top boost slot at 5 GHz, the i7-9700K at 4.9 GHz, and the i5-8400 only does 4 GHz.

          • Voldenuit
          • 1 year ago

          Anandtech seems to be seeing a similar pattern, too, with the 9700K beating the 9900K in several tests: https://www.anandtech.com/show/13400/intel-9th-gen-core-i9-9900k-i7-9700k-i5-9600k-review/5

      • Jeff Kampman
      • 1 year ago

      I’m checking into it, but I’m pretty sure it’s Spectre-mitigation-related.

        • ermo
        • 1 year ago

        If that’s the case (and it is related to HT mitigations), what’s the point of the 9900K compared to the 9700K?

          • derFunkenstein
          • 1 year ago

          For the web, obviously there’s nothing. But overall that’s easy: fully utilizing each core when you can handle more than 8 threads. All the heavily multi-threaded tests give the 9900K a 15-20% advantage. In 7-zip decompression it was closer to 30%.

            • ermo
            • 1 year ago

            I came at the question from the perspective of “best turn HT off in the BIOS since intel’s implementation of it is apparently made of terminally faulty layers”.

            Hence why it seems it makes very little sense to pay for the 9900K assuming it will be run like a non-HT 9700K anyway?

            • derFunkenstein
            • 1 year ago

            I get you now. Makes sense from that perspective

      • dragontamer5788
      • 1 year ago

      Hyperthreading disabled should improve single-thread performance slightly. There is more L1 cache and other resources available to threads if you have fewer threads per core.

    • cegras
    • 1 year ago

    There is some discussion about the fuzziness of the frame time graphs. Personally, the 9900K looks less fuzzy than the 2700X.

    A histogram of the frame times, plus a mean and standard deviation, would really be better for comparing the data. It’s a chore to look at fuzzy graphs and swap between ‘time spent beyond…’ bar charts. With just two numbers you can summarize all of that data, and also plot the histograms against each other, fitted to what I assume would be a normal distribution.

    As a bonus, if the data is normally distributed then you can make use of the 68-95-99 rule: https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule

    Edit: Although, it is probably not normally distributed. Frame times always jump upwards, and not symmetrically to zero. But you can still quote mean and stdev, and even the skew if you want ... all easily accessible from Excel.

      • ptsant
      • 1 year ago

      The relevant metric is still “time below X fps” (or % of frames > X ms). These being outliers, I’m almost certain they don’t fit the normal distribution too well, although the fact that we *don’t* see major reversals between the “mean fps” and “time above X ms” graphs obviously shows that these two measures are very highly correlated.

      I can’t say whether a high variance (with the same number of outliers below 60 fps or similar) changes the gaming experience, although that would have to be tested. My guess is that glaring hiccups at 30 fps or below are easy to spot, but fluctuations between 80-120 fps are not perceptible, especially if not sustained for more than a couple of frames. Could be horribly wrong of course.

    • ronch
    • 1 year ago

    The giant is awake again.

    The good thing for us consumers is AMD will likely respond with some price cuts.

    Welcome back to the year 2000.

      • Krogoth
      • 1 year ago

      They already did some preemptive price cuts on their line-up. 2600 is a steal for its price point.

        • ronch
        • 1 year ago

        Yes, as I’ve mentioned in a recent TR article about these Intel chips. But I think there’s more coming. The 2700X went down to $295 at Newegg then it went back up to $305 last time I checked.

          • willmore
          • 1 year ago

          Too bad the price/performance scatter charts neglect the current prices for the Ryzen chips.

    • Krogoth
    • 1 year ago

    The 9900K was obviously going to kill the 2700X where it was strong without having to sacrifice single- and dual-threaded performance. The CPU is limited by the platform it operates on (dual-channel DDR4 and only 16 PCIe lanes), which makes it not so attractive to potential power users. The extra cores are overkill for the mainstream/gaming crowd when the six-core Coffee Lake SKUs exist.

    9900K is pretty much a halo product designed to retake the mainstream crown no matter the cost. Intel wasn’t content with 8700K merely contesting the 2700X for the crown.

    The 8700K and 8600K are far better buys for the mainstream and gaming crowd. The 2700X is still a strong contender for multi-threaded loads where gaming performance isn't super-critical, while costing 30-40% less than the 9900K.

    9900K and 9700K are pretty much catering towards a niche within a niche (Need high clockspeed and more core count while I/O connectivity/memory bandwidth aren’t much of a factor).

    I’m more curious to see the impact of factory overclocked memory versus JEDEC-spec units on 9900K and 9700K under taxing loads.

    Overall, I give it -2 Krogoths, awesome CPU that is somewhat crippled by its underlying platform. Just imagine if eight-core Coffee Lake silicon got the Socket 2066 treatment.

      • drfish
      • 1 year ago

      One man’s niche is another man’s sweet spot.

      • srg86
      • 1 year ago

      Speak for yourself; if I were in the market for a new machine, the 9900K would basically be perfect for my use case. I don’t need more than 16 PCIe lanes, plus it has 8 cores and an iGP, basically what I would be looking for in a new PC.

    • revcrisis
    • 1 year ago

    What GPU was used for these tests? 2080ti? 1080ti? What RAM speed was used to test? Any information on your test bench would be awesome.

      • Jeff Kampman
      • 1 year ago

      RTX 2080 Ti Founders Edition, and all systems used DDR4-3200 CL14 RAM.

        • yokem55
        • 1 year ago

        What kind of cooling did you use for this? (BTW – maybe it isn’t something you do anymore but I missed seeing a full table on the testing set up that you guys used to do…)

          • Jeff Kampman
          • 1 year ago

          I’m working on it, but all systems had 240-mm closed-loop liquid coolers.

    • Noinoi
    • 1 year ago

    The 9700K looks plenty compelling even though it’s under the shadow of the 9900K. I’ll admit, I do like the idea of having the ultimate CPU (for the time being, at least) without going HEDT, but the 9700K is still very fast when it comes to the kind of things I’d do to my home PC, be it gaming, compiling after dev, or encoding. The 9900K would be basically a show-off – and I don’t think my current UPS will like it at peak performance.

    That $100 or so saved over the 9900K can basically pay for 8 GB of DDR4 RAM in my country these days. Or a significant amount towards a motherboard, however you slice it.

    • Shobai
    • 1 year ago

    Here’s one for chuckles:

    [tongue in cheek]
    Jeff, I must have skipped the testing notes page – did you try to level the playing field by running each CPU with its boxed cooler? If not, how did this factor in your price analysis?

    [/tongue]

      • benedict
      • 1 year ago

      Do you even need to analyze the price? The i9 is twice as expensive after buying the massive cooling required to cool it.

    • blastdoor
    • 1 year ago

    Definitely impressive.

    Time to unleash the 2800X, AMD!

      • Krogoth
      • 1 year ago

      “2800X” isn’t going to make much of a difference, though (assuming it is just a cherry-picked 2700X with 100 MHz+ on its base clock and boost clock).

      AMD’s real answer will be 7nm Zen 2 that should be coming around sometime in 2019.

        • blastdoor
        • 1 year ago

        True. And I suppose that the benefit of Zen 2 will be more cores, not more Hz. So perhaps the 1920X is a good indicator of what we might expect to see.

          • ptsant
          • 1 year ago

          Rumors claim +10% IPC for Zen2. There will hopefully be higher frequencies, although that remains to be seen. If you really want the 9900K, I suppose that’s when they will have to lower the price a little bit.

      • MileageMayVary
      • 1 year ago

      In nearly every test but the DAW set, AMD is close to or equal to Intel in IPC (from +1 to -10% in my calculations). Clock speed is the biggest thing holding AMD back right now. *looking at TSMC 7nm*

    • leor
    • 1 year ago

    Thanks for including the i9 7900X.

    It’s looking like this system with my 2016 Titan card is way more future-proof than I thought it would have been. It’s weird, though; where 10 cores used to feel like a lot, it seems downright pedestrian today.

    I’ve always been a bleeding edge guy, but nvidia and AMD have pretty much solidified that it’s unlikely that I will replace a single thing in this rig before 2020.

    • Kretschmer
    • 1 year ago

    Wait, internet comments have convinced me that Intel was doomed. What happened?

      • K-L-Waster
      • 1 year ago

      https://techreport.com/forums/viewtopic.php?p=1393208#p1393208

      • Krogoth
      • 1 year ago

      Intel has too much momentum to simply fall overnight. However, if they cannot remedy their 10-nm process woes and beyond, the next few years for them are going to be very bumpy.

      • blargh4
      • 1 year ago

      I mean, Skylake(++) was the fastest core on the market and remains the fastest core on the market, and now you’ve got 8 of them.
      Nevertheless, personally I wouldn’t touch the 9900k until I see what AMD squeezes out of ~2 process nodes and a uarch update next year.

        • tfp
        • 1 year ago

        I wouldn’t buy AMD’s chip next year without seeing what Intel squeezes out of a new process node and uarch update in the year following that.

          • Zizy
          • 1 year ago

          Except it doesn’t seem there will be 10nm in 2020 already 😀

          • derFunkenstein
          • 1 year ago

          Correct. And I wouldn’t buy Intel’s chips in 2020 without seeing what AMD squeezes out of its puckered rear end in the year following that. Nobody should ever buy a processor without seeing what comes next. 😆

          • blargh4
          • 1 year ago

          Intel has already said that 1st gen 10nm won’t outperform 14nm+++, so I think you can safely wait for Ice Lake. If you’re in no hurry to upgrade, sure, why not?

          But with the Zen 2 announcement being a mere 2 months away (not to mention a 14nm supply shortage) this situation seems a little different, ya know?

            • tfp
            • 1 year ago

            I don’t know; if I need something I’ll get it when I have the need. I sat on a machine for nearly 10 years because it ran well enough, and I’ll sit on my latest one for years and years as well. The jumps from both AMD and Intel are nothing to be concerned about. Buy what you want for the money that does well in the now, when you have a need and a budget.

            A great example is Nvidia’s latest chip: it’s better, but who cares? Last gen does well enough for the price, and 2 to 3 years from now the new hardware will be faster and useful/supported in software at less of a cost. The same applies for any “revolutionary” changes in CPUs.

        • jihadjoe
        • 1 year ago

        And 14nm was the best process available, now it’s 14nm+++ and still the bestest.

    • drfish
    • 1 year ago

    I always feel like such an idiot when a new CPU review comes out.

      Edit: Also, Jeff's photo is completely amazing, but I couldn't help myself and had to spruce it up a bit: http://dr_fish.speedymail.org/techreport/RGBon.gif

      Edit 2: *grumble* stupid underscore bug *grumble*

      • K-L-Waster
      • 1 year ago

      So, like, did no one tell you to wait or something?

        • jihadjoe
        • 1 year ago

        I was waiting for this comment!

          • K-L-Waster
          • 1 year ago

          Happy to oblige 🙂

      • derFunkenstein
      • 1 year ago

      This is a good comment.

      • Kretschmer
      • 1 year ago

      I bought an i7-7700K a few months before the i7-8700K dropped and…I’m still happy with my PC. It runs things well. Try to focus on the amazing tech you have, rather than the what-ifs.

      I mean you could have bought bitcoin in 2011 instead of building a PC and bought a mansion last year, but that type of thinking will just drive you insane.

        • drfish
        • 1 year ago

          Yeah, even at a lowly 4.6 GHz all-core overclock, I know I’m not missing out on much for gaming, but it still irks me. In this review, I’m using the i5-8400 numbers as a stand-in for the 7700K, even though it’s not a perfect match. It shouldn’t feel bad, but the heart wants what it wants.

          • Kretschmer
          • 1 year ago

          Just be sure that you’re using DDR4-3200 or faster RAM; the Ryzen 2 gaming article showed us that the 7700K performs like an 8700K with the right RAM.

            • drfish
            • 1 year ago

            3866 FTW!

            • jihadjoe
            • 1 year ago

            I’d clock it at 3860 MHz because I’d have a chuckle every time I see 386.

      • JustAnEngineer
      • 1 year ago

      Use the ASCII hex code %5F to replace the underscore?
      http://dr%5Ffish.speedymail.org/techreport/RGBon.gif
      Edit: TR’s comment system doesn’t like %, either. This would work in the forums.

      • derFunkenstein
      • 1 year ago

      Nice GIF work there, Fish.

      • rnalsation
      • 1 year ago

      RGBon.gif: http://dr_fish.speedymail.org/techreport/RGBon.gif

      nope.avi: https://www.youtube.com/watch?v=gvdf5n-zI14

      • chuckula
      • 1 year ago

      YOU SHOULD HAVE WAITED FISH!

      Just like I did to post a comment.

      • Redocbew
      • 1 year ago

      “I couldn’t help myself and had to spruce it up a bit.”
      Fish, you idiot. This is one usage of RGBs up with which we can all put.

        • derFunkenstein
        • 1 year ago

        “up with which we can all put.”
        I know that’s the grammatically correct way to end that sentence, but wow, that’s difficult to parse. LOL

    • ColeLT1
    • 1 year ago

    Got all the parts except the 9900K (preordered) at the house, ready to replace all my recently-lightning-fried gear:
    9900K on a custom water loop
    RTX 2080, just overclocked to a 2055-MHz “curve” and +1000 (maxed) memory in Afterburner; never seen that before! Highly recommend the MSI Gaming X Trio (60% fan hits about 52°C, sub-50°C at max fan, benchmarking/gaming)
    Samsung 970 EVO 500 GB
    ASUS ROG Maximus XI Hero Z390
    G.Skill 16 GB DDR4-3600 15-15-15-35
    Corsair 570X (800D retiring to server duty)

      • Pwnstar
      • 1 year ago

      You didn’t have surge protection? =(

        • ColeLT1
        • 1 year ago

        The strike blew a hole in my roof and went through the center/back of my house. Two computers were damaged through my wired network, and it killed the Ethernet ports on everything hardwired in the house. I had devices that were not plugged into the wall get fried too. (I always unplug the Wii from the surge protector; they get hot in standby.) Still finding things: I just put up a replacement TV and found that the HDMI port on my HTPC/laptop is done.

          • UberGerbil
          • 1 year ago

          When I was a kid, there was a house in my town that got hit by lightning, and they had to open up the walls and replace all the wiring in the house, as well as every appliance. The strike also either entered or exited through the dryer vent and flash-melted the hose between the dryer and the wall (where it also set a small fire). When voltages are so high that the ionized air becomes conductive... weird crap happens.

      • ronch
      • 1 year ago

      Plot twist: the courier sends you a 2700X by mistake.

    • SecretMaster
    • 1 year ago

    From the figures I’m presuming y’all also tested the i7-9700K? I’m seeing numbers for it, but I don’t see it stated explicitly in the article.

      • Jeff Kampman
      • 1 year ago

      Yes.

    • thx1138r
    • 1 year ago

    101 W more than the i7-8700K in the Blender test; is that right?

    While impressive overall, it looks like the performance gains have come at the expense of power consumption/efficiency. If we were to scale a hypothetical 8700K up to 8 cores, it would have used about 166 W, so at 226 W, it looks like that extra 300 MHz of max-turbo speed has cost Intel roughly 60 W of power consumption (see the quick sketch below).

    So, a bit like the way AMD squeezed the most out of its chips in order to compete with Intel’s higher-end desktop chips, Intel seems to have squeezed everything it could (and then some) in order to stay ahead in the performance stakes. The side effect of all that power consumption is that it’s not going to be much of an overclocker, but that’s the way CPUs have been going lately.
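
    A quick back-of-the-envelope version of that estimate in Python. This is a sketch, not TR’s methodology: the 125-W figure for the 8700K is implied by the 101-W gap above, and linear per-core power scaling is an assumption.

    # Rough per-core power scaling, using the figures from the comment above.
    i9_9900k_watts = 226               # measured in the Blender test
    i7_8700k_watts = 226 - 101         # implied i7-8700K draw: 125 W

    # Assume power scales linearly with core count (6 -> 8 cores).
    hypothetical_8c_watts = i7_8700k_watts * 8 / 6   # ~166.7 W

    # Attribute whatever is left over to the higher turbo clocks.
    clock_premium_watts = i9_9900k_watts - hypothetical_8c_watts
    print(f"{hypothetical_8c_watts:.0f} W scaled, {clock_premium_watts:.0f} W for clocks")
    # -> 167 W scaled, 59 W for clocks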

      • jarder
      • 1 year ago

      The numbers seem legit; AnandTech measured a very similar 221 W power consumption at full load:
      https://www.anandtech.com/show/13400/intel-9th-gen-core-i9-9900k-i7-9700k-i5-9600k-review/21

      Does TDP mean nothing to these people any more 😉

        • Shobai
        • 1 year ago

        TDP still means what it always has.

        • ColeLT1
        • 1 year ago

        TDP is not power consumption

          • jarder
          • 1 year ago

          I know.
          Do you know what the winky smiley means?
          🙂

            • ColeLT1
            • 1 year ago

            It means you like me? 😉

            • jarder
            • 1 year ago

            sure I do 😉

          • psuedonymous
          • 1 year ago

          TDP is not peak power consumption. Thermal output tracks averaged power consumption incredibly well, because a CPU (or GPU) turns essentially all of its electrical input into heat; what it outputs as electrical power is a mere rounding error.

        • derFunkenstein
        • 1 year ago

        T is for Thermal

    • Platedslicer
    • 1 year ago

    Seems to me that the 9700K is the go-to chip for those whose main concern is gaming. Guess now that we’ve finally moved to 8 cores, I’m out of excuses to keep driving ye auld 3770K. Plus, Stellaris is a hog in the endgame.

      • jessterman21
      • 1 year ago

      If you’re willing to pay for the privilege… my cheap-arse just sees the 2nd gen Ryzen 5 killing it in the gaming smoothness department.

        • DancinJack
        • 1 year ago

        huh?

          • jessterman21
          • 1 year ago

          Ryzen 5 2600X beats the i7-9700K in Hitman and Far Cry in Time Spent Beyond X metrics, and comes close enough for me in the other games to not even consider spending $420 on the i7-9700K vs. $160 for the (still-overclockable) Ryzen 5 2600.

            • Jeff Kampman
            • 1 year ago

            As I’ve advised other readers in this comment section, beware of focusing on the relative standings of CPUs in our time-spent-beyond-X graphs and remember exactly what those graphs are measuring.

            You seem to be looking at thresholds that are only collecting vanishing fractions of a second worth of “roughness” while ignoring those that actually expose significant differences in CPU performance.

            The Ryzen 5 2600X is not anywhere close to beating the i7-9700K at the 8.3-ms and 6.94-ms marks that actually matter for describing Far Cry 5 or Hitman performance with these chips.
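
            For anyone unfamiliar with the metric, here is a minimal sketch of how a time-spent-beyond-X figure can be computed from a frame-time log, matching the description above of summing time past a threshold; the frame times below are made up for illustration.

            def time_spent_beyond(frame_times_ms, threshold_ms):
                # Accumulate only the portion of each slow frame past the threshold.
                return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

            # Hypothetical ~90-second run: mostly ~7-ms frames plus a few slow ones.
            frames = [7.0] * 12000 + [18.0, 25.0, 9.5]
            for x in (16.7, 8.3, 6.94):
                print(f"time beyond {x} ms: {time_spent_beyond(frames, x):.1f} ms")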

            • ptsant
            • 1 year ago

            At the same time, at some point this “faster than X ms” metric no longer has significance for the gamer.

            I don’t doubt that the 9700K is much faster. It’s blindingly obvious. But if you have to look at the time spent below 120 [!] fps (and we’re not talking about the average, but the outliers here!), then both chips are definitely fast enough with respect to that specific benchmark.

            To me, the GCC, Blender, etc. benchmarks are much more illuminating. Let’s just hope that upcoming games will stress CPUs even further (and put the CPU cycles to good use).

            • Freon
            • 1 year ago

            You didn’t click on the 8.3-ms and 6.94-ms numbers, did you?

            The differences at 16.7 ms amount to a tiny fraction of a second over the entire 90-second benchmark run, about 0.1-0.2 s. If you look at 8.3 or 6.94 ms, you’ll see several full seconds of difference.

    • Chrispy_
    • 1 year ago

    So I’m looking at the “frames beyond 16.7 ms” results, and the two new Intel chips aren’t faring too well at all.

    These are Intel’s halo gaming duo, and there are hiccups galore even at lowly 60-Hz ‘entry-level’ gaming 🙁

    To clarify:
    Crysis 3: Ouch! https://techreport.com/r.x/2018_10_19_Intel_s_Core_i9_9900K_CPU_reviewed/Crysis_3_time_spent_16.png
    Deus Ex: Worse than the cheaper 2700X and 8700K 🙁 https://techreport.com/r.x/2018_10_19_Intel_s_Core_i9_9900K_CPU_reviewed/Deus_Ex_Mankind_Divided_time_spent_16.png
    Hitman: Oh dear! Buy an i5 instead. https://techreport.com/r.x/2018_10_19_Intel_s_Core_i9_9900K_CPU_reviewed/Hitman_time_spent_16.png
    Far Cry 5: Eight threads aren’t enough. https://techreport.com/r.x/2018_10_19_Intel_s_Core_i9_9900K_CPU_reviewed/Far_Cry_5_time_spent_16.png

    Edit 2: If you look at the frame-time graphs, there are quite a few instances where these new octa-core chips are giving “hairy” graph plots compared to AMD and their hexa-core Intel brethren. That means inconsistent performance, which is definitely not desirable. Could this be immature drivers/microcode, or have Intel broken something trying to shoehorn two extra CPU cores into the die as a rush job to catch up with AMD’s core counts?

      • derFunkenstein
      • 1 year ago

      It’s weird, but once you get to 11.1 milliseconds (edit: not Hz, I’m an idiot), it’s the best of the best basically across the board.

        • Chrispy_
        • 1 year ago

        Yeah, I’m not denying that it’s fast.
        I’m saying it’s inconsistent.

        Consistency is REALLY important. It’s why we now care about 99th percentile frametimes more than average frame rates. TR is the one place I would hope I can raise a comment on inconsistency and not be drowned out by ‘average fps’ heathens 🙂

      • K-L-Waster
      • 1 year ago

      Some of the bar graphs do look odd, especially when compared to the frame delivery plot and the overall FPS.

      • Jeff Kampman
      • 1 year ago

      The graphs measure time spent beyond a certain threshold in milliseconds, not “frames beyond x”. If you revisit the graphs you cite with that in mind, I think you will find that you are making a mountain out of a figurative molehill in the grand scheme of these chips’ gaming performance.

        • Chrispy_
        • 1 year ago

        Perhaps, yes.

        But the fuzzy/spiky graphs are exactly what AMD were slated for when Scott’s whole “inside the second” article pointed this problem out in the first place. Despite being blisteringly fast, the framerates are spiking up and down a lot. Isn’t that the key point of showing us the frame-time graphs? I often wait for the TR reviews specifically because of these graphs you guys produce that few other sites bother with.

        It may not change the overall ranking of the i9 and 9700K in relation to other chips, but I’m more curious to find out why the i9 and 9700K are less consistent than their six-core brethren on the same architecture.

          • drfish
          • 1 year ago

          FWIW, I place a lot more value on the summarized totals. The graphs can look dicey just because that’s the best Excel can do within a small image sometimes. Even so, I’m not seeing it; they look darn good to me.

          • Kretschmer
          • 1 year ago

          A graph plotted within the limits of a tiny picture is worth less than the numerical totals. I mean, the *guy who does this for a living* is telling you that you’re manufacturing issues.

          • cegras
          • 1 year ago

          I think the i9s are the least fuzzy. We need to quantify the variance!
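
          One rough way to quantify it, for anyone with the raw frame-time logs. The choice of statistics here is my own assumption (TR doesn’t publish a jitter metric), and the frame times are hypothetical.

          import numpy as np

          # Hypothetical frame-time log in milliseconds.
          frame_times = np.array([7.1, 6.9, 7.3, 18.2, 7.0, 7.2, 25.4, 7.1])

          p99 = np.percentile(frame_times, 99)        # tail latency
          spread = frame_times.std()                  # overall "fuzziness"
          step = np.abs(np.diff(frame_times)).mean()  # frame-to-frame jitter

          print(f"99th pct {p99:.1f} ms, stddev {spread:.2f} ms, "
                f"mean step {step:.2f} ms")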

          • Andrew Lauritzen
          • 1 year ago

          Yes jitter definitely matters, but it’s important to keep the scale of it in mind (i.e. the absolute numbers are actually what matters even though those graphs sorta draw your attention to the relative comparison a lot more). For instance, that “ouch” 28ms in Crysis could easily be a single frame spike or something, and while it’s obviously fair to say that “no frame spikes are better than a single one”, it’s not even entirely clear whether the test is repeatable enough for results in that range to be considered a signal.

          By way of comparison, I imagine those spikes on Far Cry 5 are something that will be visible while playing as they last 4-5 frames or more. They occur on all of the processors though so it’s likely more of a game/driver issue than anything.

          If you look at the frame time graphs for Crysis 3 there’s certainly a few more spikes than on the AMD parts and that’s worth noting. That said, they are all less than a frame long and in that range you do need to correlate a bit with subjective experience because how visible they are depends a lot on how the engine handles animation and sim timing.

          At least in my experience I agree with Jeff’s general notion that all of the graphs look pretty good here for both AMD and Intel with no obvious issues. That’s progress worth celebrating vs. ~5 years ago (even if a good amount of it has been in GPU drivers) 🙂

      • Krogoth
      • 1 year ago

      I suspect ring-bus issues (eight cores fighting over the same L3 pool) and Windows’ subpar scheduler are the culprits here.

    • derFunkenstein
    • 1 year ago

    Now that I’ve read the whole thing:

    Wow, the Core i9-9900K is super fast. I can’t blame anybody who needs it or wants it for buying it. Intel absolutely and convincingly took back every aspect of the standard desktop performance crown. Now there’s no caveat. It doesn’t matter if the load is multi-threaded or single-threaded, the 9900K does it all. Definitely worthy of the TR Editor’s Choice award.

    At the same time, I’m a huge cheapass. On the price/performance curve, the Ryzen 7 2700X acquits itself nicely. That’s especially true when you consider the cost of a motherboard: B350 boards are fully featured for everything except multi-GPU, which itself is falling way out of style, and they’re generally an additional savings over the Intel options. I’d put the $300 difference in the bank and have a plenty-fast system.

    Part of this might be fueled by the fact I’m putting a new engine in my car because the current one has a cracked head and there’s coolant in my oil.

      • psuedonymous
      • 1 year ago

      Remember that what you save on the motherboard, you need to spend again on the RAM, due to Ryzen’s pickiness about memory dies and its sensitivity to memory speed for CPU performance.

        • derFunkenstein
        • 1 year ago

        I wouldn’t spend Core i9-9900K prices on the CPU and not get fast RAM to go with it. I figure I’m buying DDR4-3200 no matter what.

          • psuedonymous
          • 1 year ago

          It does, however, mean that anyone deciding “naff Intel, I’m going to move to Ryzen!” who already has 16 GB/32 GB of decent DDR4 may find they need to buy a whole new set of DIMMs to get good performance (or even to POST at all, in some cases!), which at current RAM prices may be a much larger price hike than expected.

            • derFunkenstein
            • 1 year ago

            That was definitely an issue for me when I originally went Ryzen 18 months ago, but that got better with AGESA updates. My understanding has been that Ryzen 2 further addresses that, but I dunno.

            • synthtel2
            • 1 year ago

            The issues bad enough to make anything not POST have been sorted out for a long time now. AFAIK, OG Zen still usually needs B-die to get to 3200 and Zen+ still usually needs it to get to 3466, but in either case a lack of B-die isn’t the end of the world for performance.

            As to the rest of the topic, if you’re blowing $400+ on a CPU like this and not springing for B-die, you’re doing it wrong.

        • Andrew Lauritzen
        • 1 year ago

        Yeah, Ryzen’s preference for having only two DIMMs populated has given me pause in the past. I’ve considered picking up a Ryzen a few times, but I really want/need my 64 GB of RAM for work stuff. It’s obviously not a deal breaker, but it’s a negative.

        There’s always Threadripper of course for more “workstation-style” workloads, but the RAM quirks of Ryzen are still worth noting depending on budget and needs.

          • ptsant
          • 1 year ago

          Supposedly it’s much better with Zen+.

          • K-L-Waster
          • 1 year ago

          Yeah, if you’re doing something that needs 64 GB, odds are a Threadripper makes more sense than an R7. (Or Skylake-X of course, but $$$…)

            • Andrew Lauritzen
            • 1 year ago

            I agree – the main things that have been keeping me away from TR are that I also do a fair bit of VR on this machine and that is very single-threaded-perf sensitive (I’ve had to overclock my 5960x in a few VR games…), and the random platform issues around stuff like the HPET and all the “Game Mode”/NUMA nonsense. All very solvable of course, but when given the choice of do I want to deal with it or not, it’s a negative factor.

            A 16-core SKL-X is pretty much perfect for my combined work/gaming needs, but the price is too nutty even for me 🙁 The i9-9900K might actually be a decent enough middle-ground upgrade for me: lose some on workstation stuff vs. Threadripper, but gain the optimal VR/gaming performance while being somewhat cheaper and having fewer platform quirks.

            We’ll see though, still undecided.

          • Mr Bill
          • 1 year ago

          Is Ryzen really that bad? Does anybody building a gaming system, or one they want to overclock, ever put more than one stick per channel? I don’t have any Ryzen systems, but all my other AMD systems will take a full load of sticks.

          Edit: every to ever

            • Krogoth
            • 1 year ago

            Mainstream platforms and most mainstream motherboards really do not like dealing with fully populated DIMM slots (especially with dual-rank DIMMs). You typically have to dial down the clock speed/timings to get the system to run without issues, even if you are using JEDEC-spec DIMMs.

            You have better luck with overengineered/overclocking-friendly mainstream-tier and workstation-tier boards.

            • Andrew Lauritzen
            • 1 year ago

            I’ve had no trouble with 4 DIMMs (or even 8 on X99) in any of my builds. I typically use ASUS motherboards and RAM a few notches above the speeds I’m targeting, but it’s certainly not something I’ve run into frequently.

            I’m aware that there’s issues especially with launch BIOSes on all platforms, but I’ve found the ASUS/Intel/Corsair combo to be pretty solid most of the time in whatever config.

            • Krogoth
            • 1 year ago

            You are working with overengineered/workstation-tier boards. Try running fully populated boards on run-of-the-mill SKUs that aren’t trying to be the kitchen sink.

            I have seen and dealt with stupid issues when trying to run fully populated boards (failure to POST, freezing, and subtle memory errors). You usually have to dial down the timings and sometimes underclock the memory to get it all working without issue. The memory wasn’t at fault (individual DIMMs worked fine when isolated).

            • Andrew Lauritzen
            • 1 year ago

            These are just the usual ~$150-200 middle-of-the-road ASUS boards, nothing crazy like the high-end $400+ ROG overclocking stuff. Maybe the $100 tier or off-brands would have more issues, but if it only costs me $50 more to get that “overengineering” (I’d argue it’s just engineering in this case… the boards are supposed to support 4 DIMMs properly!), I’m pretty fine with that.

            As someone noted, 64 GB of RAM isn’t exactly cheap at the moment, so I’m not going to be trying to save a few bucks on a motherboard anyway. I’d also argue that it’s worth paying a bit more to minimize the irritation of dealing with motherboard/BIOS quirks. To some extent, paying for a good QA department and a large QVL is worth it alone 🙂

            • Krogoth
            • 1 year ago

            It looks like some people have never worked with actual run-of-the-mill hardware and platforms before. They have only dealt with overengineered-tier stuff. They probably never had to wrestle with silly motherboard, memory controller, and/or memory compatibility issues.

            There’s a reason why OEM vendors rarely bother implementing more then two DIMM/SODIMM slots on their systems. It is more than simple cost-skimming. It is a lot easier for motherboard and memory controller to drive one or two DIMMs. It also means less headaches for customers and company support down the road.

            It is the reason why registered memory exists if you need more than four DIMM slots on your platform.

            • Srsly_Bro
            • 1 year ago

            Two is greater “than” 3.

            Krogoth made a 5th grade grammar mistake “then” I corrected him.

            Paragraph 2, line 3, bro.

            • derFunkenstein
            • 1 year ago

            No, Ryzen is not that bad any longer. At the start it was very dicey, and I had a thread that chronicled my experience. It got better over time, and today my run-of-the-mill DDR4-3000 Corsair RAM, with some sort of memory other than Samsung B-die, works just fine with the XMP1 profile.

            • Andrew Lauritzen
            • 1 year ago

            Nah, it’s not “bad”; it was just a bit more finicky than the equivalent Intel platforms with 4 DIMMs. I imagine that has been addressed to some extent with BIOS updates since then, but I haven’t been keeping track.

            I imagine I’m at the edge of the use cases, wanting to build a hybrid system that is both great at games and VR and also good at games development, but there’s your use case 🙂 Any of Ryzen, Threadripper, SKL-X, or these latest chips can obviously fill those roles; they just all have their places along the various gradients. As I mentioned above, SKL-X would likely be ideal for me if it wasn’t so expensive.

            • Jeff Kampman
            • 1 year ago

            I can’t recall if I ran memtest on it but I dropped 64 GB of DDR4-3200 RAM across 4 dual-rank memory modules—pretty much the worst case for the Zen IMC—into an X470 board when the second-gen chips first came out and I could get it up to 2933 MT/s without a ton of fuss. It’s probably better now but I would have to specifically go and test it to be sure.

            • Voldenuit
            • 1 year ago

            From what I’ve read, 2933 with 4 DIMMs seems to be very doable even on first gen Ryzen. The B-die RAM chips seem to do better than non-B-die if you’re trying for high clocks, as well.

          • freebird
          • 1 year ago

          I really don’t know what you are talking about… I’ve been running my Ryzen 1700 with 64 GB since March 2017. I was able to get 2933 stable after a few BIOS upgrades. I’m currently running it at 3200 CL16 16-16-16, and the RAM is rated at 3000 CL14 14-14-14.

            • Andrew Lauritzen
            • 1 year ago

            That’s great to hear. Initially on launch there were enough reports that it seemed to rise above the usual “new platform initial BIOS issues” noise, and indeed it sounds like you mention needing a few updates as well.

            If everything is now much smoother that’s great news. Unfortunately since tech sites don’t tend to ever revisit their older reviews it’s super hard to find anything more than anecdotes about whether issues got solved, etc. To that end it’s good to hear that at least the RAM thing seems to be good for you!

        • Krogoth
        • 1 year ago

        I suspect that 8-core Coffee Lake chips would also have similar performance issues in heavily-threaded loads if you were to budget for JEDEC-spec DIMMs.

        However, I don’t think spending a ~$30-100 premium on factory overclocked memory is going to be much of a factor if you are budgeting for a 9700K and 9900K build.

          • K-L-Waster
          • 1 year ago

          This.

          Springing for a Corvette and then skimping on the tires is kinda self-defeating.

            • JustAnEngineer
            • 1 year ago

            If you want to drive like a hooligan, spin your tires, and drift around corners because it LOOKS fast, then skinny tires are what you need.

            • K-L-Waster
            • 1 year ago

            … and this is where automotive analogies for computer parts break down…

          • Andrew Lauritzen
          • 1 year ago

          Sounds like something worth testing 🙂

        • albundy
        • 1 year ago

        Not true at all. Base RAM speeds must be fully supported; maximum XMP profiles, not so much, but you can get them to work with some tweaking, or run a profile a little below max. I’m running my $107 G.Skill Ripjaws V Series 16 GB DDR4-3200 kit (sold for the Intel Z170/X99 platforms) on my Ryzen 1700/B350 Prime motherboard with no issues.

      • Mr Bill
      • 1 year ago

      That low memory latency (given the clock and bandwidth): is that the key to crunching games?

        • Waco
        • 1 year ago

        That, combined with a 10-20% higher core clock.

          • Mr Bill
          • 1 year ago

          “(given the clock and bandwidth)”
          I’ve noticed that AMD does not come up to par even when the Intel chip is downclocked. Some say it’s IPC, but Intel seems to be consistently ahead in memory latency and cache management. I wonder if that is enough to make the difference.

            • synthtel2
            • 1 year ago

            IPC as we usually use it here is more of a catch-all term for performance effects that aren’t clocks or core counts anyway, so memory latency is a subset of it.

            I’d guess memory latency and SIMD width are the two biggest factors in what you’re looking at.

            • Waco
            • 1 year ago

            IPC captures memory latency as well. Games are particularly memory latency sensitive in some cases.
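
            A minimal sketch of the clock-normalized comparison being discussed; the scores are invented, and treating score-per-GHz as “IPC” is the same catch-all simplification described above.

            # Hypothetical single-threaded scores with both chips at matched clocks.
            chips = {
                "Chip A": {"score": 100.0, "ghz": 4.0},
                "Chip B": {"score": 88.0, "ghz": 4.0},
            }

            # Dividing out the clock leaves the catch-all "IPC", which folds in
            # memory latency and cache behavior, not just core width.
            for name, c in chips.items():
                print(f"{name}: {c['score'] / c['ghz']:.1f} points/GHz")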

    • Fonbu
    • 1 year ago

    What a nice surprise to wake up to this morning!
    Despite the i9-9900K recommendation, the i7-9700K seems to be where it’s at for a lot of tests that aren’t highly threaded.

    • PTRMAN
    • 1 year ago

    Wow! High praise indeed!

    Looks like there might be an upgrade in my future….

    • derFunkenstein
    • 1 year ago

    “As part of our transition to using the Mechanical TuRk to benchmark our chips, we've had to switch to Google's Chrome browser so that we can automate these tests.”

    This is good news, because people actually use Chrome, so the performance might be relevant to their current systems.
