Gaming and streaming with AMD’s second-gen Ryzen CPUs

Finally. After an eventful week in the world of hardware benchmarking, I’ve wrapped up our gaming test results for AMD’s second-generation Ryzen CPUs. We’ve taken some extra time to bench each Intel and AMD chip in our test suite at stock memory speeds as well as with overclocked RAM. The wealth of resulting data has taken more time to crunch than I would have liked, but hey, we’ve got it.

If you missed our productivity benchmarks of AMD’s latest CPUs, be sure to check out our launch-day article. I won’t be rehashing that material too much here. Let’s jump right into our testing methods and our results.

Our testing methods

As always, we did our best to deliver clean benchmarking numbers. We ran each benchmark at least three times and took the median of those results. Our test systems were configured as follows:

Processor: AMD Ryzen 5 1600X, Ryzen 7 1800X, Ryzen 5 2600X, and Ryzen 7 2700X
CPU cooler: EK Predator 240-mm liquid cooler
Motherboard: Gigabyte X470 Aorus Gaming 7 Wifi
Chipset: AMD X470
Memory size: 16 GB (2x 8 GB)
Memory type: Crucial Ballistix Elite DDR4-2666 (rated) SDRAM / G.Skill Sniper X DDR4-3400 (rated) SDRAM
Memory speed: 2666 MT/s (first-gen Ryzen) / 2933 MT/s (actual, second-gen Ryzen) / 3400 MT/s (actual)
Memory timings: 16-17-17-36 (2666 MT/s) / 15-15-15-35 1T (2933 MT/s) / 16-16-16-36 1T (3400 MT/s)
System drive: Samsung 960 EVO 500 GB NVMe SSD

 

Processor: Intel Core i7-7700K
CPU cooler: Corsair H115i Pro 280-mm liquid cooler
Motherboard: Asus ROG Strix Z270E Gaming
Chipset: Intel Z270
Memory size: 16 GB
Memory type: G.Skill Flare X DDR4-3200 (rated) SDRAM
Memory speed: 2400 MT/s (actual) / 3200 MT/s (actual)
Memory timings: 15-15-15-35 2T (2400 MT/s) / 14-14-14-34 2T (3200 MT/s)
System drive: Samsung 960 Pro 500 GB NVMe SSD

 

Processor: Intel Core i7-8700K and Intel Core i5-8400
CPU cooler: Corsair H110i 280-mm liquid cooler
Motherboard: Gigabyte Z370 Aorus Gaming 7
Chipset: Intel Z370
Memory size: 16 GB (2x 8 GB)
Memory type: Crucial Ballistix Elite DDR4-2666 (rated) SDRAM (both); G.Skill Sniper X DDR4-3400 (rated) SDRAM (i7-8700K); G.Skill Flare X DDR4-3200 (rated) SDRAM (i5-8400)
Memory speed: 2666 MT/s (actual, both) / 3400 MT/s (actual, i7-8700K) / 3200 MT/s (actual, i5-8400)
Memory timings: 16-17-17-36 2T (2666 MT/s) / 16-16-16-36 2T (3400 MT/s) / 14-14-14-34 2T (3200 MT/s)
System drive: Samsung 960 Pro 500 GB NVMe SSD

They all shared the following common elements:

Storage: 2x Corsair Neutron XT 480 GB SSD, 1x HyperX 480 GB SSD
Discrete graphics: Nvidia GeForce GTX 1080 Ti Founders Edition
Graphics driver version: GeForce 385.69
OS: Windows 10 Pro with Fall Creators Update
Power supply: Seasonic Prime Platinum 1000 W

Some other notes on our testing methods:

  • All test systems were updated with the latest firmware and Windows updates before we began collecting data, including patches for the Spectre and Meltdown vulnerabilities where applicable. As a result, test data from this review should not be compared with results collected in past TR reviews. Similarly, all applications used in the course of data collection were the most current versions available as of press time and cannot be used to cross-compare with older data.
  • Our test systems were all configured using the Windows Balanced power plan, including AMD systems that previously would have used the Ryzen Balanced plan. AMD’s suggested configuration for its CPUs no longer includes the Ryzen Balanced power plan as of Windows’ Fall Creators Update, also known as “RS3” or Redstone 3.
  • Unless otherwise noted, our gaming tests were conducted at 1920×1080. Our test system’s monitor was set to refresh at 60 Hz.

Our testing methods are generally publicly available and reproducible. If you have any questions regarding our testing methods, feel free to leave a comment on this article or join us in the forums to discuss them.

 

Crysis 3

Even as it passes five years of age, Crysis 3 remains one of the most punishing games one can run. With an appetite for CPU performance and graphics power alike, this title remains a great way to put the performance of any gaming system in perspective.


Crysis 3 is still an unusual beast in that it will happily take advantage of every core and thread one can throw at it in high-refresh-rate gaming. That’s good news for Ryzen CPUs, as their performance scales both thread-for-thread and generation-to-generation in this title. Even with DDR4-2933, the Ryzen 7 2700X isn’t trailing the i7-8700K by much in either average frame rates or 99th-percentile frame times, and throwing DDR4-3400 into the mix tightens the gap just that little bit more. Meanwhile, the Ryzen 5 2600X matches the Ryzen 7 1800X for both fluidity and smoothness.



These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time our graphics card spends beyond certain frame-time thresholds, each with an important implication for gaming smoothness. Recall that our gaming tests all consist of one-minute test runs and that 1000 ms equals one second to fully appreciate this data.

The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS, or a 30-Hz refresh rate. Go lower than that with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.

To best demonstrate the performance of these systems with a powerful graphics card like the GTX 1080 Ti, it’s useful to look at our three strictest graphs. 8.3 ms corresponds to 120 FPS, the lower end of what we’d consider a high-refresh-rate monitor. We’ve recently begun including an even more demanding 6.94-ms mark that corresponds to the 144-Hz maximum rate typical of today’s high-refresh-rate gaming displays. Finally, we’ve added a 5-ms graph to see how well any of our chips sustain a scorching 200 FPS.
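
For readers who want to reproduce this analysis from their own frame-time logs, here’s a minimal sketch of the calculation, assuming nothing more than a list of per-frame render times in milliseconds (the sample values below are made up for illustration, not taken from our results):

```python
# Minimal sketch: "time spent beyond X" from a list of frame times (ms).
# The frame_times_ms values are hypothetical, not our captured data.

def time_spent_beyond(frame_times_ms, threshold_ms):
    """Sum the portion of each frame that runs past the threshold."""
    return sum(ft - threshold_ms for ft in frame_times_ms if ft > threshold_ms)

frame_times_ms = [7.1, 8.9, 6.5, 12.0, 7.8, 9.4, 16.9, 7.2]  # one short, made-up run
for threshold_ms in (50, 33.3, 16.7, 8.3, 6.94, 5):
    badness_ms = time_spent_beyond(frame_times_ms, threshold_ms)
    print(f"time spent beyond {threshold_ms} ms: {badness_ms:.1f} ms")
```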

Given how fast our GTX 1080 Ti runs Crysis 3, it makes the most sense to start our exploration at the 8.3-ms mark. Here, we can see that the Ryzen 7 2700X and the Core i7-8700K are about as well-matched as our broad overview would suggest. Both CPUs hold up the GTX 1080 Ti for less than a second on tough frames that take longer than 8.3 ms to render. The Ryzen 5 2600X and Ryzen 7 1800X also turn in impressively clean records here. Flip over to the 6.94 ms mark, though, and it’s obvious that the i7-8700K is keeping the GTX 1080 Ti just that bit more well-fed for 144-Hz action.

 

Far Cry 5


In our observations, Far Cry 5 tends to max out a single thread, so it’s no shock that our average-FPS numbers and 99th-percentile frame times favor Intel CPUs. Among the Ryzen gang, both of our second-gen contenders solidly outperform their forebears. Even a Ryzen 5 2600X with stock-clocked RAM can overtake a Ryzen 7 1800X using DDR4-3400. To be fair, we’re talking about fairly small generation-to-generation differences, but it’s clear that the second-gen Ryzen parts deliver real, if modest, performance improvements.



Aside from a few frame-time spikes that seem to plague all of our CPUs in Far Cry 5, it’s most instructive to check out our chips’ performance at the 8.3-ms mark. The Ryzen 7 2700X puts the best face on AMD’s Far Cry 5 performance, but there’s no denying that Coffee Lake and Kaby Lake CPUs put just that little extra bit of polish on the experience. That’s especially obvious at the 6.94-ms mark, where our Intel contenders with overclocked RAM are putting half the time on the board compared to even the Ryzen 7 2700X paired with DDR4-3400.

 

Assassin’s Creed Origins
Assassin’s Creed Origins isn’t just striking to look at. It’ll happily scale with CPU cores, and that makes it an ideal case for our test bench.


Assassin’s Creed Origins seems to want the whole enchilada of CPU performance for high refresh rates: lots of high-clocked threads backed with high-speed memory. Even so, this title seems to favor Intel CPUs. Among Ryzen chips, the Ryzen 7 2700X and Ryzen 7 1800X perform the best, but it’s weird that the Ryzen 7 2700X doesn’t seem to gain anything from its higher Precision Boost 2 clocks relative to its predecessor. Perhaps the bottleneck is elsewhere. Meanwhile, the Core i7-7700K suffers badly until we toss it DDR4-3200 to play with.



Given the frame-rate averages in play in ACO, we’ve added an 11-ms time-spent-beyond graph to our menu here. 11 ms corresponds to time spent under 90 FPS. This plot basically confirms what our overview shows: the Ryzen 7 chips stack up with about five seconds spent on tough frames throughout our test run. Both the Ryzen 5 2600X and Ryzen 5 1600X benefit handsomely from the use of 3400 MT/s RAM, at least, but it’s nowhere near enough to catch even the Core i5-8400 here.

 

Grand Theft Auto V
Grand Theft Auto V’s lavish simulation of Los Santos and surrounding locales can really put the hurt on a CPU, and we’re putting that characteristic to good use here.


Grand Theft Auto V has proven a constant thorn in the side for Ryzen CPUs ever since their launch, and second-gen Ryzens can only hope to close the distance in this largely single-thread-performance-dependent and memory-latency-sensitive title. No surprises—that’s basically what happens in our results. Even the $179 Core i5-8400 is delivering higher FPS averages than the $329 Ryzen 7 2700X here. Hook up both second-gen Ryzens to 3400 MT/s RAM, and they post the best 99th-percentile frame times we’ve yet seen from AMD chips in this title, at least.



Delving into our time-spent-beyond-X analysis paints a slightly kinder picture for the Ryzen 7 2700X, at least with 3400 MT/s RAM. At the 8.3-ms mark, tapping that faster memory helps the top-end second-gen Ryzen spend less than a third as much time holding up the GTX 1080 Ti in this title as it does with stock RAM. At the critical 6.94-ms juncture, though, the i7-8700K proves its mettle by putting nearly six fewer seconds on the board versus the Ryzen 7 2700X, and the gap only widens for the 2700X with stock memory. The same basic story holds for the Ryzen 5 2600X. If you want the absolute smoothest high-refresh-rate GTA V experience, the extra threads of second-gen Ryzens just can’t compete with Coffee Lake.

 

Deus Ex: Mankind Divided

Thanks to its richly detailed environments and copious graphics settings, Deus Ex: Mankind Divided can punish graphics cards at high resolutions and make CPUs sweat at high refresh rates.


Deus Ex: Mankind Divided remains another bugbear for Ryzen CPUs at high refresh rates, and the second-gen chips are once again just making up ground here compared to their predecessors.



The story here is quite simple: if you want the smoothest, most fluid Deus Ex: Mankind Divided experience at high refresh rates, any Coffee Lake CPU is the way to get there.

 

Streaming Far Cry 5 with OBS

Now for a change of pace. To test software-encoded streaming performance with our CPUs, we fired up the latest version of OBS Studio and captured Far Cry 5 running at 2560×1440 before feeding it through the x264 encoder using the “faster” preset, 60 FPS, and OBS’ default bitrate of 2500 kbps. Running a game at 2560×1440 while streaming the output at 1920×1080 tends to allow systems to perform CPU encoding without completely overtaxing the processor, as a CPU-bound game running at 1920×1080 might. We’ll test that torture scenario in a moment. 
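
For the curious, the sketch below approximates those encoder settings outside of OBS by handing a captured clip to ffmpeg’s libx264 encoder from Python. It’s only a rough stand-in for the OBS pipeline, and the input file name, stream URL, and availability of ffmpeg on your system are all assumptions for illustration:

```python
# Rough approximation of our x264 streaming settings, driven from Python via ffmpeg.
# "capture.mkv" and the rtmp URL/stream key are placeholders, not our actual setup.
import subprocess

cmd = [
    "ffmpeg",
    "-i", "capture.mkv",                  # hypothetical 2560x1440 game capture
    "-vf", "scale=1920:1080",             # stream output downscaled to 1080p
    "-r", "60",                           # 60 FPS
    "-c:v", "libx264",
    "-preset", "faster",                  # the x264 "faster" preset we used
    "-b:v", "2500k",                      # OBS' default 2500-kbps bitrate
    "-maxrate", "2500k", "-bufsize", "5000k",
    "-c:a", "aac", "-b:a", "160k",
    "-f", "flv", "rtmp://live.twitch.tv/app/YOUR_STREAM_KEY",
]
subprocess.run(cmd, check=True)
```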

We streamed the resulting footage to Twitch. Admittedly, Twitch’s recommended streaming settings have changed since our last review, and the service will accept higher-bitrate streams (4500 to 6000 kbps) as of this writing. We might need to re-examine the effects that higher bit rates have on performance in future reviews, but for now, OBS’ defaults will have to do. We verified that the resulting stream looked OK to the eye by playing it back on a separate PC in the TR labs. CPUs that couldn’t stream without noticeably dropping frames failed our test. As a result, the Core i5-8400 and Core i7-7700K don’t appear in the following charts. All of the CPUs tested below used DDR4-3400 RAM.


While same-PC streaming using software encoding is certainly a strength of Ryzen CPUs, the Core i7-8700K produces the highest frame rates and lowest 99th-percentile frame times in this scenario. The flip side is that one doesn’t need to spend more than $230 to get streaming with a Ryzen 5 2600X, say, while the Core i7-8700K is a $350 investment. Presuming one isn’t trying to combine high-refresh-rate gaming with streaming on the same PC, in fact, it looks a bit difficult to justify the Ryzen 7 2700X for streaming alone.


A look at our time-spent-beyond graphs tells us a little more about just what the Ryzen 7 2700X is buying us, though: about two fewer seconds spent on tough frames that would drop the client-side frame rate below 90 FPS. Even so, the Ryzen 5 2600X is delivering better same-PC streaming performance than the Ryzen 7 1800X, and that’s quite impressive given that the 2600X sells for half of what the 1800X did at launch.

 

Streaming Deus Ex: Mankind Divided with OBS

Although streaming Far Cry 5 at a 2560×1440 source resolution is hard enough on its own, we wanted to go one further with the notoriously punishing Deus Ex: Mankind Divided. We used the same streaming settings as we did with Far Cry 5, but we ran DXMD at a much-harder-on-the-CPU 1920×1080 while attempting to stream the output to Twitch at 60 FPS.

Turns out that only the Ryzen 7 2700X and Ryzen 7 1800X were up to the job. Even the usually imperturbable Core i7-8700K dropped an unpleasant number of frames when we attempted this stream, and the Ryzen 5s weren’t having it, either. Let’s see how the last two chips standing handled the load.

Even with this incredibly punishing setup, the Ryzen 7 2700X turns in a nice performance increase over its predecessor. Both its average FPS result and 99th-percentile frame time suggest a perfectly pleasant client-side gaming experience. It’s worth remembering that the 2700X loses a lot of performance while streaming this way, as it delivered an average of 120 FPS and a 13-ms 99th-percentile frame time in our regular testing without OBS. Even so, the 2700X (and 1800X) could deliver a subjectively pleasant stream this way, while none of the other chips in our test suite could. Can’t complain too much when you’re pushing CPUs to the limit like that.


Our time-spent-beyond graphs show just how much of an improvement one might enjoy when pushing a same-PC streaming setup to the limit like this. The Ryzen 7 2700X cuts three seconds off the Ryzen 7 1800X’s time on the board past 11 ms here, and that translates to a noticeably smoother client-side gaming experience from the second-generation chip.

 

Conclusions

Let’s try to sum up that outrageous amount of data in a couple convenient charts. First up, we’ll take a look at the geometric mean of average frame rates and 99th-percentile frame times from our test systems with overclocked RAM (converted to FPS so that our higher-is-better logic works).
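
If you’d like to follow the math, here’s a small sketch of how those summary numbers come together, with 99th-percentile frame times converted to an FPS-style figure before taking the geometric mean (the per-game values are placeholders, not our measured results):

```python
# Sketch of the summary math: geometric means across games, higher-is-better throughout.
# The per-game numbers below are placeholders, not our measured data.
from math import prod

avg_fps = [140.0, 115.0, 98.0, 123.0, 105.0]          # average FPS per game (hypothetical)
p99_frame_times_ms = [10.5, 13.2, 14.8, 11.9, 12.6]   # 99th-percentile frame times (hypothetical)

def geomean(values):
    return prod(values) ** (1 / len(values))

# Convert 99th-percentile frame times to FPS so "higher is better" holds for both metrics.
p99_fps = [1000.0 / ft for ft in p99_frame_times_ms]

print(f"geomean of average FPS:         {geomean(avg_fps):.1f}")
print(f"geomean of 99th-percentile FPS: {geomean(p99_fps):.1f}")
```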


As a matter of academic interest, we can do the same thing for our test systems using stock-clocked RAM. Here you go:


As I noted in our launch-day coverage, builders who are after the absolute best high-refresh-rate gaming performance at 1920×1080 will be leaving frames on the table with Ryzen CPUs. Even the $180 Core i5-8400 is delivering better performance than the Ryzen bunch under our test conditions, and that’s gotta sting a little for the red team. The Core i7-8700K remains the best high-refresh-rate gaming chip on the planet, and that’s before we tap into its formidable overclocking potential. Second-generation Ryzen chips do close the gap with Intel chips somewhat in our stock-clocked and CPU-bound testing conditions, but it’s not enough to steal Intel’s crown.

Of course, there are many ways around this bottleneck for folks who still want a Ryzen CPU. Upgrading to a 2560×1440, 144-Hz monitor takes some of the stress off the CPU and doesn’t incur a large performance hit on the GTX 1080 Ti, for example, and we’ve long considered those monitors the best balance of resolution and refresh rate one can buy. Gaming at 4K will make these CPUs practically indistinguishable from one another. Builders who aren’t after the absolute highest frame rates around also won’t notice these bottlenecks nearly as acutely with a GTX 1060 6 GB or Radeon RX 580, say.

Gaming isn’t just a solitary experience these days, either. If you plan to game at high refresh rates and stream, the Ryzen 7 2700X and Ryzen 7 1800X are the only two chips in our test suite that could handle the punishing Deus Ex: Mankind Divided running at 1920×1080 while streaming the experience to Twitch. In a less torturous setup, however, the Core i7-8700K still delivered the smoothest client-side gaming experience while playing Far Cry 5 at 2560×1440. The Ryzen 5 2600X really stood out in our less-torturous streaming test, too, delivering client-side gaming performance on par with the Ryzen 7 1800X while selling for less than half of the first-gen Ryzen halo chip’s launch price.

All told, our gaming tests don’t change the final word on second-gen Ryzen CPUs. If you want an enviable amount of multithreaded bang-for-the-buck that provides a fine gaming experience after hours, the Ryzen 5 2600X and Ryzen 7 2700X are your chips. But if you’re after the best high-refresh-rate gaming performance around, bar none, an order of Coffee Lake is still the way to go. The best part of today’s CPU market is that system builders can pick the parts that are right for their needs at the right price, and that simply wasn’t possible a year ago. Viva la revolución.

Comments closed
    • Air
    • 1 year ago

    I don’t know if anyone will read this comment on such an old story, but anyway, I’d like to suggest changing the way average FPS is represented on the conclusions page. By using the mean of average FPS, you increase the influence of games with very high FPS in the final number (for instance, Crysis 3 has almost double the weight in the final number as Assassin’s Creed). I think a % value over a baseline would be a better way to show it.

      • Kretschmer
      • 1 year ago

      Agreed.

    • pogsnet1
    • 1 year ago

    Benchmark with DOTA2 and PUBG

    • Action.de.Parsnip
    • 1 year ago

    Niceeee I enjoyed reading this. Mega tasking is always fun. Also: “In our observations, Far Cry 5 tends to max out a single thread” but it’s 2018?!

    • elites2012
    • 1 year ago

    more bs trickery with intel. i see this i7 7700k is performing better than the ryzen 1700x in some games. intel still has a choke hold on 4 cores being enough for anything.

      • Pancake
      • 1 year ago

      BS trickery or market forces? It’s always illuminating to have a look now and then at the Steam hardware survey. Quad core systems take the lion’s share at 62%, dual-core at 30%, six-core 3.8%, 8-core at less than 1%. If you’re a games developer wanting to actually sell games and make money what sort of system are you going to be targeting? Also, the overwhelming majority of CPUs are Intel so that’s going to be the number one priority for testing/optimisation.

      It gets even more fun with video cards actually used for gaming where the dominance of NVidia is utterly crushing – much more so than even Intel vs AMD in CPUs. AMD is practically a no-show in gaming GPUs. The top order is completely dominated by NVidia with the first AMD card – ancient R7 graphics – ranking at less than 1%. Followed by the equally ancient R5 at 0.7% (I’m rounding up). The highest contemporary AMD card is the RX480 at a massive 0.63% – about 20x less used than the number one GTX1060 at 12.3%.

      So, that’s kinda fun and games. If you’re a games developer why even bother testing on much less optimising your code for AMD unless they’re paying you to do it? Practically NO ONE is gaming using their cards. Conclusion: pretty much all AMD gaming GPUs being sold are used for cryptocurrency mining. This is an absolutely DIRE situation for AMD as crypto mining is NOT a sustainable business to be in.

    • Pancake
    • 1 year ago

    Well, these results help complete the picture of where Ryzen “fits in” in terms of market segments:

    Professional use: No. Requisite systems aren’t available and it isn’t Intel.

    Gaming: Nawp.

    SOHO light duty office work: No integrated GPU. Even with 2400G weak light thread performance. So, just no.

    Home enthusiast AMD fanboy: YES!

    Home enthusiast video editor: Yes.

    Home enthusiast or part-time pocket money 3D modeller: Yes.

    So, nothing’s changed since the 1800X in terms of entering new market segments. Prediction – no new opportunities = flat market share. Perhaps a decline as home users aren’t going to upgrade to the slight increase in performance from their current 1700/1800X.

      • HERETIC
      • 1 year ago

      Disagree with your “nawp” for gaming.

      Purchasers of the 8700-8600-2700X/2700-2600X/2600 are more than likely to be playing at 1440. At that res you could throw a blanket over that list, as the difference is likely to be around 2% (acceptable error tolerance).

      [url<]https://www.techpowerup.com/reviews/Intel/Core_i5_8600/14.html[/url<]

      Throw in the few that have a 1080/60Hz monitor as well.................

        • Pancake
        • 1 year ago

        Well, according to this here august website a canny build for a gaming rig would involve the use of an i5-8400 for the available suite of games today. I’d go Intel every time as it seems game developers test and optimise more for Intel as evidenced by much more consistent frame times.

        If you’re building a box to last a few years and maybe 2 graphics card upgrades then it would be wise to have a bit of extra CPU in reserve. I would go for the i7-8700 because it’s nearly as fast as the 8700K but fits in an insanely great 65W TDP.

    • Luminair
    • 1 year ago

    People on twitch use bitrate at 6000 because it makes fast motion look better. Run the test again and 8700k might fail far cry too.

    I believe you that the chips that failed failed. But that won’t help anyone who links the graph. Measure how bad the failures fail and put them in the graph. I promise you’ll get linked around the web. How often do you see 8700k at the bottom of a chart? It’ll be like a circus!

      • fyo
      • 1 year ago

      I agree. The processors that failed should be included in all the graphs and charts. Not including them feels misleading and certainly IS misleading if one happens to skip the text and only look at the pretty pictures ;).

      If it’s a matter of the graphs being “cleaner” with fewer processors, include an option to show only processors that receive a “passing grade”, similar to the AMD / Intel toggle.

    • ermo
    • 1 year ago

    If you want 32 GB RAM and can live with lower bandwidth, the 8700K might actually break even on the platform costs compared to the 2700X on account of the insane RAM prices (assuming that you want 2x16GB DDR4-3200+ for the Ryzen and can live with e.g. 4×8 DDR4-2666 for the 8700K).

    You might even be able to throw in a decent aftermarket cooler for the 8700K and *still* break even if you shop intelligently.

    That’s … pretty insane actually.

    EDIT: OK, just specced two systems with ASUS Prime motherboards and 4×8 DDR4-2666 vs. 2×16 DDR4-3200 SR C14 for the 2700X. With a NH-D14 included, the 8700K build is €10 more expensive, but a whole lot quicker in gaming!

      • dragontamer5788
      • 1 year ago

      [quote<]DDR4-3200 SC C14 for the 2700X[/quote<]

      I think that DDR4-3200 CL16 or 3400 CL18 might be better for price/performance ratio. Ryzen's internal "infinity fabric" runs at the speed of RAM. So CL isn't quite as important as getting a high clock.

        • ermo
        • 1 year ago

        Doesn’t it technically run at half the speed of RAM?

        In any case, isn’t it the overall cycle time in ns that counts in terms of not keeping the other CCX waiting longer than necessary?

        In that scenario, DDR4-3200 C14 is 14/3.2 =~ 4.4 ns, DDR4-3200 C16 is =~ 5ns and DDR4-3400 C18 is 18/3.4 =~ 5.3 ns, making the latter run at only (14/3.4) / (18/3.2) =~ 83% speed of the former?

        But sure, there’s a point of diminishing returns.

          • dragontamer5788
          • 1 year ago

          [quote<]Doesn't it technically run at half the speed of RAM?[/quote<]

          Well, if you wanna get technical, RAM MT/s specifications are 2x its speed. That is, 3200 MT/s RAM is only clocked at 1600 MHz. So.... yeah. 3200 MT/s RAM runs at 1600 MHz and Ryzen Infinity Fabric runs at 1600 MHz. DDR means "Double Data rate". Which means two transfers per clock cycle. That's where this "doubling" comes from. GDDR5 btw is 4 transfers per clock, which is why the MT/s ratings are so big.

          [quote<]In any case, isn't it the overall cycle time in ns that counts in terms of not keeping the other CCX waiting longer than necessary? In that scenario, DDR4-3200 C14 is 14/32 =~ 4.4 ns, DDR4-3200 C16 is =~ 5ns and DDR4-3400 C18 is 18/34 =~ 5.3 ns, making the latter run at only (14/34) / (18/32) =~ 83% speed of the former?[/quote<]

          CCX-issues are the Achilles Heel of the Ryzen architecture. CAS-latency is relatively irrelevant to the big picture. [url=https://www.sisoftware.co.uk/wp-content/uploads/2018/03/amd-zen2k7x-data-lat.png<]It takes 81-nanoseconds for Ryzen 2700x to read from main-memory[/url<]. It takes Intel 6700k 64 nanoseconds in comparison. So in reality, you're looking at a difference of 81.3 nanoseconds (for CL18) vs 80.4 nanoseconds (for CL14) on Ryzen chips. Compared to Intel, which has all sorts of optimizations, allowing ~64ns even on cheaper RAM. The Sisoftware tests were conducted with 2933 CL 14 (AMD) and 2400 CL16 (Intel). So that's with Intel having a severe disadvantage on RAM CL time. These tests demonstrate Intel's superior memory controller.

          Note that "sequential" is the easiest pattern to predict, and both Intel and AMD "predict the future" appropriately. 5ns is even faster than L3 cache: the data was already fetched and is in L2 in sequential cases... demonstrating this "prediction + auto-prefetch" feature I'm talking about.

          As such, AMD's CCX [b<]innately[/b<] has a +20ns (or more!) weakness vs Intel machines with regards to latency. Intel's memory controller is simply better. Infinity fabric, as awesome as it is, slows down Ryzen's memory access. That much is certain.

          But wait, AMD systems keep up with Intel? Well, that's because modern CPUs speculatively fetch memory. Yeah, CPUs fetch memory [b<]before[/b<] a program needs it. This is managed not just through speculative execution, but also out-of-order buffers and other such CPU tricks. So ultimately, latency is hidden away in the vast, vast, [b<]vast[/b<] majority of cases. You have to build a special test on particular algorithms (ie: linked lists randomly traversing memory) to even see the difference you're worried about.

          So the 20-nanoseconds of difference rarely matters between Intel / AMD... (let alone the 0.9nanoseconds of difference between 3200-CL14 and 3400-CL18) Those nanoseconds do make a measurable difference, especially on AMD (ironically. It seems like Intel's memory controller is better at "predicting the future" so it manages better stats). But the difference between 3200CL16 and 3200CL14 is miniscule. If such a difference matters to you, its far better to just choose Intel for its better memory controller.
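
          Quick back-of-the-envelope sketch of that arithmetic in Python, CAS-only (it ignores the other timings and the fabric/uncore trip, so treat it as illustration rather than a latency measurement):

          ```python
          # CAS latency in nanoseconds, CAS-only (ignores tRCD/tRP and the fabric/uncore).
          def cas_latency_ns(mt_per_s, cas_cycles):
              clock_mhz = mt_per_s / 2      # DDR: two transfers per clock, so 3200 MT/s = 1600 MHz
              return cas_cycles / clock_mhz * 1000

          for mts, cl in [(3200, 14), (3200, 16), (3400, 18)]:
              print(f"DDR4-{mts} CL{cl}: {cas_latency_ns(mts, cl):.2f} ns")
          # DDR4-3200 CL14: 8.75 ns, DDR4-3200 CL16: 10.00 ns, DDR4-3400 CL18: 10.59 ns
          ```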

            • mat9v
            • 1 year ago

            IF does not exactly run with memory controller speed. IF multiplier “ticks” closely to memory controller but not fully in sync. According to Sandra IF multiplier is set like this:
            2133 – 10
            2400 – 12
            2666 – 13
            2800 – 14
            2933 – 14
            3000 – 15
            3066 – 15
            3133 – 15
            3200 – 16
            3266 – 16
            3333 – 16
            3400 – 17
            and so on… I don’t know if it is a software “error” or the IF is really a bit disconnected from memory controller.

            Infinity Fabric creates a problem mostly in games because memory accesses have to contend for IF bandwidth with PCIEx writes to GPU – you can test it by running a memory access test (pinned to cores from CCX1) while a texture intensive game is running on CCX2.

            • synthtel2
            • 1 year ago

            Games’ PCIe requirements are miniscule compared to what they demand of memory accesses for other purposes. That kind of test will show heavy interference, but PCIe traffic will have very little to do with it.

            • dragontamer5788
            • 1 year ago

            [quote<]Infinity Fabric creates a problem mostly in games because memory accesses have to contend for IF bandwidth with PCIEx writes to GPU - you can test it by running a memory access test (pinned to cores from CCX1) while a texture intensive game is running on CCX2. [/quote<]

            That bottleneck exists in Intel too. Either in Intel's "Ringbus" (client systems like i7-8700k), or in the "Mesh" ("Extreme" or server systems like i9-7900x or Xeon Platinum). The Ringbus > AMD's infinity fabric, at least on specifications.

            AMD's infinity fabric however isn't about performance. Infinity fabric is about saving costs: delivering more cores for less money than Intel. Honestly, its a wonder that Infinity Fabric even does this good, considering how much more flexible and cheap Infinity Fabric is compared to Intel's "Ringbus" or "Mesh" approach.

            • synthtel2
            • 1 year ago

            Longer chains of pointers do happen a lot, unfortunately for prefetchers, and RAM timings have many more components than CAS, some of which are responsible for much larger portions of those measured latencies. Agreed on the uncore being where Intel is really winning at things like gaming performance.

            • dragontamer5788
            • 1 year ago

            [quote<]Longer chains of pointers do happen a lot, unfortunately for prefetchers, and RAM timings have many more components than CAS, some of which are responsible for much larger portions of those measured latencies.[/quote<]

            Agreed on both counts.

            Note: chains of pointers are mostly handled by Out-of-order execution + hyperthreading. The pointer chain is put into the "reorder buffer", and the CPU starts to look into the future for what to execute without needing that memory. If no memory is found, then Hyperthreading takes over: the CPU switches to a new thread while waiting for that linked list. That other thread then executes until a stall and then the CPU switches back. This doesn't make "one task" complete faster. But it will make two tasks complete faster, which might help for heavily multithreaded games.

            My ultimate point: for AMD systems, bandwidth is your primary metric. CAS Timings help, but are secondary to the bandwidth. The CPU is very good at turning "latency" issues into "bandwidth" issues through a large number of mechanisms (prefetching, out-of-order, hyperthreading, and more). I wouldn't invest into lower CAS unless you were certain you were latency-bound. (which unfortunately for me... I think I have a few games which are CAS-latency bound. Factorio in particular)

            • synthtel2
            • 1 year ago

            OoO and SMT help a lot, but they’ve got their work cut out for them. 40 ns cache miss penalty, 3 GHz, and 1.5 IPC equals 180 instructions per miss, which is nearly the entire reorder buffer already. 80 ns, 4 GHz, and 2 IPC blows through the reorder buffer like it’s hardly even there. Throwing SMT at it beats not throwing SMT at it, but it still doesn’t take a particularly pathological case to drag that well below the IPC it could get without cache misses.
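
            The same arithmetic in a couple of lines of Python, for anyone who wants to plug in their own numbers (purely illustrative):

            ```python
            # Instructions a core could have retired during one cache miss (illustrative arithmetic).
            def instructions_per_miss(miss_penalty_ns, clock_ghz, ipc):
                return miss_penalty_ns * clock_ghz * ipc   # ns * (cycles/ns) * (instructions/cycle)

            print(instructions_per_miss(40, 3.0, 1.5))   # 180: roughly a full reorder buffer
            print(instructions_per_miss(80, 4.0, 2.0))   # 640: far past the reorder buffer
            ```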

            For most other heavy workloads, I’d agree that memory latency is fairly irrelevant, but games are really good at overloading the various latency-hiding mechanisms.

            Planetside 2 is another one I can confirm is strongly latency-bound (when there’s a lot going on in-game), and it’s (probably not coincidentally) the workload in which my R7 1700 feels most inadequate.

            • ermo
            • 1 year ago

            Thanks for the deep dive, appreciate it!

          • Shobai
          • 1 year ago

          To reiterate what dragontamer said, halve your denominators (and so double your results).

            • ermo
            • 1 year ago

            I guess the takeaway point is that the difference is basically drowned out by the inter-CCX latency?

            • dragontamer5788
            • 1 year ago

            [quote<]I guess the takeaway point is that the difference is basically drowned out by the inter-CCX latency?[/quote<]

            Inter-CCX is even bigger actually. [url=https://www.sisoftware.co.uk/wp-content/uploads/2018/03/amd-zen2k7x-core-lat.png<]AMD's inter-CCX is 130ns+[/url<]. Someone on Reddit actually told me that AMD seems to have a "fast-path" inter-CCX of ~20ns to 80ns under very specific circumstances... but I haven't verified it personally yet.

            But yeah, the CCX -> Memory Controller latency (~70ns+) drowns out most of your CAS-latency numbers. On the other hand, RAS, CAS, and Trcd, all can shrink with better RAM. (plus Tfaw and other even more obscure values).

            A "typical" RAM access is really RAS (wait for last row to close), RP (Row-Precharge: gotta wait for the next row to open. Works simultaneously with RAS), Trcd (How long to wait after a row opens up), and finally CAS (how long to wait for a column to open up). RP + Trcd + CAS, all [b<]together[/b<], is the total wait time under most conditions. If the row is "already open", then you only need to wait for CAS.

            -----------

            CAS is just the most commonly looked at value. But all the numbers are important. If you're confused: don't worry, there's a simple explanation. RAM is [b<]eradicated[/b<] whenever you read it. That's just how it works. When you're done reading, you have to write the RAM back.

            So a "Row" is where you store that information. A row of sense-amplifiers. It takes time to transfer from memory -> Row. And when you're done, you need to put the memory [b<]back[/b<] before you can read anything. So RAS (the time it takes to put a row back into RAM), RP (the time it takes to prepare the new row to be read), Trcd (the time it takes for the sense-amplifiers to get ready), and CAS (the time it takes to read from a sense-amplifier).

            There are other timing issues. Tfaw is the number of nanoseconds where you can open-up four rows. Because each of these operations use energy, and there's only a limited amount of energy here. If you use up too much energy, the RAM fails. Etc. etc. Tons and tons of specifications, but the three main ones (CAS, RAS, Trcd) are the main case to think about.
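
            Rough sketch of how those pieces add up, in Python (the timings plugged in are illustrative, not any particular kit, and the figures exclude the fabric/uncore trip discussed above):

            ```python
            # Rough DRAM access latency from the primary timings (illustrative numbers only).
            def cycles_to_ns(mt_per_s, cycles):
                clock_mhz = mt_per_s / 2                   # DDR: 3200 MT/s -> 1600 MHz clock
                return cycles / clock_mhz * 1000

            mts, cas, trcd, trp = 3200, 14, 14, 14         # e.g. a DDR4-3200 CL14-14-14 kit
            cas_ns = cycles_to_ns(mts, cas)
            row_miss_ns = cycles_to_ns(mts, trp) + cycles_to_ns(mts, trcd) + cas_ns

            print(f"row already open (CAS only):   {cas_ns:.2f} ns")
            print(f"row miss (tRP + tRCD + CAS):   {row_miss_ns:.2f} ns")
            ```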

            • ermo
            • 1 year ago

            Um … maybe now would be a good time for you to ask Jeff if he’d mind collaborating on an AMD memory subsystem deep dive?

            Sounds like this would be right on the money for a site like TR in terms of RAM perf/$? Throw in a few affiliate RAM links on Jeff’s part and bob’s your uncle?

    • mat9v
    • 1 year ago

    Do you have any idea why, in the Far Cry 5 benchmark, the 8700K with slow 2666 memory was consistently faster than with 3400 memory? Additionally, why did the 8700K score lower than the 7700K in this game? It is supposed to have higher clocks – was it maybe due to lower turbo clocks caused by temperature? It is supposed to be faster and has better memory (3400 vs 3200), so what is wrong there?

    • Fugboi
    • 1 year ago

    Did you try how 8700k handles DeusEx streaming with OBS in high process priority mode?

      • blitzy
      • 1 year ago

      gamersnexus has a similar article with streaming tests that show the 8700k can keep up if you manually manage cpu priority. The takeaway for me from these articles is that a single PC can’t really handle both gaming and streaming at the highest quality settings, they do a pretty good job for the average joe on medium to high settings. But to get the absolute best performance and quality it’s best to output to a second pc to encode your stream.

    • Bensam123
    • 1 year ago

    Consider adding utilization graphs and/or utilization numbers to the benchmarks. While it’s great to show raw gaming performance, it doesn’t indicate room for growth on the processor. For instance, if the game is running faster on a hex core than an oct, but the program only really has a couple of big threads running, it’ll look like the hex core is faster. If a user has more of a workload running in the OS or other games show up that better utilize multithreading, the extra available overhead will be utilized on a higher-core processor and it’ll end up faster.

    Hex core is definitely the place to be right now, as games are currently still made for quads; as more move to take advantage of hex, oct will be the new hex.

      • Redocbew
      • 1 year ago

      [quote=”Jeff Kampman on this page”<]I carefully chose these titles and settings precisely because they do max out one to n cores[/quote<]

      Written by me: Benchmarking with the intent to show "room for growth" is a ridiculous idea in general.

      Edits: I aim for the number of edits to exceed the number of characters. This time I fail.

        • Bensam123
        • 1 year ago

        Yeah, nothing changes, especially when big gaming companies have talked about better supporting multi-threading with the recent increases in core counts.

        I assume that testing methodology holds true to all the articles here then? The only games that will be picked will be those designed to stress the CPU as much as possible. Increasing core counts isn’t something that’s going away or is going to become any less relevant. Looking at specifically this article vs reading all the articles on the site, especially into the future when things change, is a pretty important thing.

        I also assume that most people that buy a system would be interested in how well it will fare further down the road. That’s actually one of the buying points for purchasing an oct core over a hex, and before hex became standard, hex over a quad. You could even do a retroactive take on that and compare a hex-core system bought years ago vs a more modern quad-core system today.

          • Redocbew
          • 1 year ago

          [quote=”Jeff Kampman on this page, still”<]I carefully chose these titles and settings precisely because they do max out one to n cores[/quote<]

          Written by me: Yep, still ridiculous.

          Edit: Now that I think about it I kinda like it though. In a way this goes beyond even just juking the stats towards some agenda and providing benchmarks that seem legitimate. You just want to flat out speculate on what you think might happen for some unknown reason at some unspecified point in the future. Way to commit dude.

            • Bensam123
            • 1 year ago

            What?

            We don’t even seem to be talking about the same things here. There are cases where you can test Intel vs Intel and AMD vs AMD if you want and be agnostic about it, this isn’t about AMD vs Intel. For instance you could test some of the same generation Intel parts against their non-extreme counterparts today and show the difference in longevity.

            I’m not exactly sure who you are besides the guy that writes the articles, I don’t think you’re Chuckula so apparently at some other point in the past we’ve argued and you have a chip on your shoulder about me? You’re not even looking at what I’m writing in an unbiased light and not referring to Intel or AMD, but rather just who you’re talking to. You’re the one making your own agenda here…

            • Redocbew
            • 1 year ago

            Correct! I am not Chuckula(but I do feel like there’s room for a Negan-inspired “we’re all Chuckula” meme here). However, it’s been a number of years since I was paid to write for anyone(unless you count writing code as part of that). I do think I’ve become rather Linus-like at times in my attitude towards these things(not that there’s anything wrong with that). There seems to be so much more noise interfering with what might otherwise be a straightforward(if not difficult) investigation, and I tend to have very little patience with those things which are only in the way.

            It’s a ridiculous idea, because a task that scales from one to n cores will still scale from one to n cores for the foreseeable future. It’s academic to consider scaling outside of performance in this way, but there remains no point in utilization graphs that all show 100% utilization or very near to it, and as I’ve quoted(twice) that’s exactly the case we find here. Should there be a point at which this task no longer maxes out the hardware, then there’s no way to know when this is going to happen. If you’re worried about something else happening somewhere down the line that screws up the performance of your carefully chosen components, then what exactly are you afraid of? If you don’t even know what it is, then how on Earth are you going to test for it?

    • Ninjitsu
    • 1 year ago

    Nice and useful set of benchmarks, thanks 🙂
    8400 looks real attractive there. I’d assume pairing it with faster RAM would help it a bit more too, although the differences look tiny.

    Was hoping maybe you’d throw in Arma 3 as well, given the inclusion of memory scaling, but I suppose you were already pretty pressed for time…

    • JoeKiller
    • 1 year ago

    Thanks for putting in the time to get everything recorded. Truly TR is the most concerned about doing it right and as a reader I appreciate it.

    Even if it takes a week 🙂

    • flptrnkng
    • 1 year ago

    Is there some kind of prohibition to showing the 8700K in a bad light?

    You show the 8700K results (while streaming) when it ‘wins’. But, when it doesn’t, the graph just becomes an AMD head to head contest.

    How horrible was the 8700K?

      • Jeff Kampman
      • 1 year ago

      I don’t know how much more negative the judgment “literally cannot do what is asked of it” can be.

        • flptrnkng
        • 1 year ago

        Let’s see the data in the graphs, like when 8700K ‘won’.

          Are you still concerned that Intel is losing its manufacturing advantage? Why, exactly?

          • Spunjji
          • 1 year ago

          How do you graph a line from nonexistent data?

          The reader has a responsibility to, y’know, read.

            • Voldenuit
            • 1 year ago

            If you’re making a chart and a product failed to complete the test or perform adequately, put it on the chart as ‘Failed’. That gives users the same data at a glance without chance of misapprehension.

          • rechicero
          • 1 year ago

          Don’t like the implied accusation; I think it’s undeserved and uncalled for. But I can see some point in adding them to the table with a “FAILED” tag, or something.

        • JoeKiller
        • 1 year ago

        Sounds like a bug

      • Voldenuit
      • 1 year ago

      From the article:
      “CPUs that couldn’t stream without noticeably dropping frames failed our test. As a result, the Core i5-8400 and Core i7-7700K don’t appear in the following charts. ”

      This is the same reason the 8700K was absent from the Deus Ex streaming test.

    • maroon1
    • 1 year ago

    Anandtech updated review show 8700K beating 2700X in gaming

    The early review was bullshit and they fixed it

    • Vesperan
    • 1 year ago

    I’m loving the conclusion of this article, I feel like it is one of the few (only?) reviews that goes beyond ‘Intel is better for gaming’ and digs into the actual situation: ‘Intel is better for gaming if you have a very high end graphics card and high refresh rate monitor at x1080 resolution’.

    Because if you’re not in that situation, the difference is so minimal as to be imperceptible.

    I have to wonder how many people (like me) are not bottlenecked by the traditional CPU or graphics, but more the age/averageness of their monitor.

      • Voldenuit
      • 1 year ago

      If you care about gaming performance, you shouldn’t be neglecting the GPU and monitor; they’ve always been far more important than the CPU.

      But if you do have the right gear, intel has been the way to go for CPU for a long time, and while the gap is smaller, that’s still the case today.

        • Vesperan
        • 1 year ago

        I agree with all that, but alas at this point my bank account does not agree with the price of a high end monitor.

          • Kretschmer
          • 1 year ago

          “High end” is getting cheaper. You can get a TN GSync monitor at 27″ and 1440P for $400. The same price will get you an IPS FreeSync version. Gaming at >60Hz is so, so worth it.

      • Kretschmer
      • 1 year ago

      Theoretically you want to overspec your CPU today so that build can be used for a long time with a GPU refresh or two. In practice, there are a ton of subjective factors at play…

      • Redocbew
      • 1 year ago

      I plan on keeping both of the monitors I have until they die. Hopefully that doesn’t happen for at least a few years from now. It clearly is the machine and not the monitor(s) which is the limiting factor for me, but having dual 34″ 21:9 monitors isn’t really a typical setup either.

    • leor
    • 1 year ago

    I wonder how my i9-7900X @4.6ghz with 4000 RAM would compare…

      • kuraegomon
      • 1 year ago

      … probably fairly well 😉

      I’d say nice humblebrag about your sweet rig … but I think the “humble” part went missing 😛

        • leor
        • 1 year ago

        To be honest, I did have a slight concern given the way the previous X99 platform stacked up against the 7700k in non productivity tasks when I put this rig together, and it was a NIGHTMARE to build, worst build of my life. So yeah, I’m a bit proud of it now that it’s finally working, but I really am curious how it stacks up.

      • drfish
      • 1 year ago

      This seems as good a place as any to mention I just bought a i9-7940X system for work (shared CAD translation workstation). Of course, I didn’t go with that fast of RAM, but [url=https://techreport.com/news/33567/thursday-deals-a-threadripper-1950x-for-719-and-more-crazy-discounts?post=1076353#1076353<]I got a lot of it[/url<]... Parts arrive next week, expect some pics in a thread soon.

      • moose17145
      • 1 year ago

      I am curious how my i7-6900K compares. Would be interesting to see how these Ryzens compare to the Broadwell-E’s

    • auxy
    • 1 year ago

    [i<]reads review[/i<] anyone want to buy a delidded-relidded i7-5775C? ('ω')

      • Kretschmer
      • 1 year ago

      Relidded with toothpaste or peanut butter?

        • Voldenuit
        • 1 year ago

        Still better than stock intel TIM.

          • Redocbew
          • 1 year ago

          Plus, it fills the room with the smell of peanuttery goodness.

        • auxy
        • 1 year ago

        Coollaboratory Liquid Ultra! (‘ω’) Stays super cool except my motherboard has a busted voltage regulator and crashes if I go over 1.2v. It was an extra pain to relid this thing because of the Crystalwell die.

          • MOSFET
          • 1 year ago

          FIVR?

          Z97 was the reason I never pulled the trigger on 5775C or 5675C. Too pathetic even then, of course desktop Broadwell was about 1.5 years late.

    • DragonDaddyBear
    • 1 year ago

    Thank you for doing the hard work of testing multiple DDR speeds. It was late but very thorough and very insightful.

      • OptimumSlinky
      • 1 year ago

      Agreed. As someone with an ancient i7-930 rig who is on the fence and budget about upgrading, it’s nice to see RAM speed comparisons to know if I REALLY need to blow an extra $50 for absurdly priced 3000+MHz DDR4 right now.

        • Noinoi
        • 1 year ago

        So true. I think I’ll be pretty happy even with an i5-8400 if judging from the results regarding general gaming performance, even with 2666 MHz DDR4 and (likely) a relatively affordable B360 board. Though I think I might consider the 8600K if only for the possibility of much higher clock speeds when paired with a Z370.

      • albundy
      • 1 year ago

      agreed. it’s a rarity these days to see it in gaming benchmarks, and understandably time consuming, but well worth it.

    • DPete27
    • 1 year ago

    Jeff. Crysis 3 chart. The 8700K produced higher average FPS with DDR4-3200 but better 99th percentile frames with DDR4-2666? Is that a typo?

      • Jeff Kampman
      • 1 year ago

      2% variance is likely not something to get worked up about.

        • DPete27
        • 1 year ago

        That’s fine. Just struck me as odd, so I wanted to point it out.

    • mikeq6648
    • 1 year ago

    how about if i don’t play the above games? i just play sim racings such as Assetto Corsa, Project cars,rFactor2? intel or amd?

      • derFunkenstein
      • 1 year ago

      Honestly I don’t think it matters for those games. Those games are all several years old. Every CPU here is capable of > 100FPS on Project Cars. Just check out the Skylake review from 2015.

      [url<]https://techreport.com/review/28751/intel-core-i7-6700k-skylake-processor-reviewed[/url<]

      A 6700K could push ~130FPS in Project Cars. The 7700K in this review is slightly faster than that, and it routinely brings up the rear.

        • mikeq6648
        • 1 year ago

        i am still using my old machine: Intel Xeon E3 1230 v3+MSI Z87-GD65 GAMING+EVGA NVIDIA GeForce GTX 780+Samsung SSD 850 EVO 500G,just wanna upgrade to i7-8700 or Ryzen 2700X

          • Kretschmer
          • 1 year ago

          Get the i7 if you game, or the Ryzen if you stream games. There. That’s the review.

            • mikeq6648
            • 1 year ago

            thanks

          • derFunkenstein
          • 1 year ago

          Based on the games you’re playing, I don’t think either are going to see much benefit. Especially with such an old GPU. Graphics card prices are high right now, but that’s the upgrade you need (IMO).

      • gerryg
      • 1 year ago

      My guess is the racing games might have a lot of physics processing but mostly graphics, so may not be as CPU bound. IIRC a lot of games use the GPU for physics?

      I was wondering about strategy games (Civ?) that might have higher AI usage and maybe more multi-threading and CPU-boundedness. Particularly RTS type games.

      Another thing that could be done is to test a CPU for a game server. Then you can eliminate the GPU altogether? Just start up the server then add bots. Sure, other factors like memory might play into it, but different classes/vendors of CPUs have different memory handling strategies, too, so perfectly valid.

    • cozzicon
    • 1 year ago

    Gosh I want threads… considering the multiple tasks my machine does even while I play game- it’s a no brainer.

    I’m in 🙂

    • Anovoca
    • 1 year ago

    Looks like the only aspect of this review that wasn’t worth the wait were the chips themselves.

    • Hattig
    • 1 year ago

    Did the perf/$ charts include the cost of the cooler on the Intel CPUs?

      • gerryg
      • 1 year ago

      +1 on this question. Since cooler comes in the box, either reduce the AMD by estimated cost of cooler, or add the cost of the cooler used for the Intel CPUs.

      I’m even more interested in total cost (CPU+Cooler+RAM+MB) than just cost of the CPU, especially after seeing the variety of RAM used. I rarely buy a drop-in CPU replacement, I just upgrade the whole enchilada every few years.

      • Shobai
      • 1 year ago

      Since the included cooler wasn’t used in the testing, how do we quantify its effect on performance?

      • Kretschmer
      • 1 year ago

      Yes, but you have to include the cost of an AIO 240MM radiator for people who want to use liquid cooling. Minus, of course, the ebay price of the bundled AMD cooler. Also, when you add the cost of a cooler to the Intel SKUs use a slightly cheaper option, because of the lower thermal dissipation. Oh, but up the price of the AMD chips due to their liking faster RAM. Still, don’t forget to discount the AMD CPUs, because their mobos are often cheaper.

      Or maybe you can list the price of the CPU as sold and let others add or subtract options as they see fit.

      • blitzy
      • 1 year ago

      the total cost of ownership in the perf/$ charts makes things interesting, e.g. RAM cost, included cooler. Possibly skews things a bit further in AMDs favour for overall value

        • Firestarter
        • 1 year ago

        total platform cost makes sense to compare but it’s definitely not easy, especially if there are particular features that you require on your motherboard

    • Usacomp2k3
    • 1 year ago

    The biggest takeaway to me was how much the 7700 loved the faster RAM.

      • Kretschmer
      • 1 year ago

      Yeah, I read that, panicked and checked my RAM speeds. Thank goodness DDR4 3200 was pretty cheap when I bought it.

    • YukaKun
    • 1 year ago

    I didn’t see this commented on anywhere, but if it was and I missed it, I apologize in advance.

    Are the two cooling systems used for the CPUs different for a reason? Is their thermal performance relevant enough to warrant an asterisk in the results, especially for stock operation?

    Thanks for all the effort as usual. It was a great reading.

    • chuckula
    • 1 year ago

    And the winner of the 2700X vs the 8700k battle royale is… the 8400?

    And that’s why they [s<]play the game [/s<] run the benchmarks. Thanks TR!

      • derFunkenstein
      • 1 year ago

      Yeah that’s probably not the winner that AMD was hoping for. 😆

      • drfish
      • 1 year ago

      …and I’m not feeling too bad about my 7700K w/ DDR4-3866 either. Maybe I’m not quite the idiot I thought I was?

        • Krogoth
        • 1 year ago

        Any of the quad-core chips will start to show their shortcomings in the coming decade as more professional and enthusiast-tier software start harnessing more than two threads.

        Real-time CPU streaming is barely tolerable with some of the newer games on a quad-core chip even if it is aggressively clocked.

          • chuckula
          • 1 year ago

          I sincerely hope that ALL of the chips reviewed here are rendered obsolete in the coming decade or else it’s going to be pretty boring.

            • Krogoth
            • 1 year ago

            Outside of killer application or a breakthrough in manufacturing technology. The 2020s are going to be even slower than the 2010s. Big silicon already has tapped out all of the low-hanging fruit.

            • Firestarter
            • 1 year ago

            all the more reason to buy high end

            • Anonymous Coward
            • 1 year ago

            Damned good point. Although I’m a bit torn between things advancing, and not having to throw things out all the time.

          • drfish
          • 1 year ago

          I don’t disagree, was mostly just making a crack about my own tired joke. I have no expectation that my 7700K will serve me for as long as my 2600K did. However, it IS heartwarming to see the 7700K hanging with the big boys in so many tests, thanks to fast RAM.

          • Kretschmer
          • 1 year ago

          We’ve been hearing that “software will catch up and need cores” since Bulldozer launched. Software is definitely becoming better about threading, but some tasks are inherently less than fully parallel.

            • Krogoth
            • 1 year ago

            It is already happening. If you try to do any sort of multi-tasking and run a modern gaming title on a quad-core CPU, you will start to experience shuddering and hiccups which hurt frame-timing.

            I have already experienced it on my aging 3570k.

            • Kretschmer
            • 1 year ago

            It depends on the multitasking. I can run a voice chat application, browser windows, movie application, and Windowed game without a noticeable hit with 4 cores/8 threads.

            Streaming craves cores, but that’s still a niche.

            • Kretschmer
            • 1 year ago

            Also, the 3570K is much slower in games than a 7700K, even if the box specs wouldn’t seem to be that far apart. You’re also looking at 4 vs 8 threads. 🙂

            Source: I’ve owned both.

            • synthtel2
            • 1 year ago

            The Amdahl’s law limits for games are actually very high, it’s just tougher to extract that parallelism from them than most workloads with similar limits. A heavy majority of the work to be done on an average entity doesn’t need immediate connection to other entities, but the connections between them that do exist are varied and complex enough to make extracting parallelism from that a pain. That, and parallelism for parallelism’s sake isn’t worth much compared to good memory access patterns, but getting both really is tough.
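            A quick way to see how high those Amdahl’s-law ceilings can sit is to plug some numbers into the formula; the parallel fractions below are purely illustrative assumptions, not measurements from any game:

            # Amdahl's law: speedup = 1 / ((1 - p) + p / n),
            # where p is the parallelizable fraction of the work and n is the core count.
            def amdahl_speedup(p, n):
                return 1.0 / ((1.0 - p) + p / n)

            for p in (0.50, 0.90, 0.95):     # assumed parallel fractions, for illustration only
                for n in (4, 8, 16):
                    print(f"p={p:.2f}, {n:2d} cores -> {amdahl_speedup(p, n):.2f}x speedup")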

          • DPete27
          • 1 year ago

          Good thing none of the CPUs in this review have fewer than 6 threads then!!!
          (the i5-8400 is a 6C/6T CPU)

          BTW, the consumer-tier i7-870 launched in 2009 with 4C/8T… nine years ago.

            • ColeLT1
            • 1 year ago

            Quad 7700k is in the review.

            Edit: DPete said cores, not threads before the edit.

            • auxy
            • 1 year ago

            it has 8 threads. (‘ω’)

            THERE ARE 8 THREADS ( ゚Д゚)

            • ColeLT1
            • 1 year ago

            He said cores before the edit.

            • DPete27
            • 1 year ago

            Ninja edit!!!

            • ColeLT1
            • 1 year ago

            I was starting to doubt myself and go full existential on this crazy Friday morning, was I mistaken, am I dreaming, do I exist?

            • Redocbew
            • 1 year ago

            No.

            • ColeLT1
            • 1 year ago

            I just upgraded Backup Exec (one version number, aka v16 to v20.1, lol) and now a test backup just failed. Please be a dream/nightmare… Time to reinstall agents across 50+ servers.

            • MOSFET
            • 1 year ago

            Honestly, that’s what you get for not ditching BackupExec 19-21 years ago.

          • Chrispy_
          • 1 year ago

          I think that buying an 8-core CPU now when quad cores are the standard is as prudent as buying a quad core was back when dual-core was the standard.

          You may only use the extra cores in a few things at first, but by the time you’ve had a few years, you’ll be glad you didn’t stick with a quad core.

          I’m sitting on Haswell and Ivy i7’s at home. They’re fine, but with the Ivy especially I’m wary that it’s past its sell-by date and that there’s a good chance I’m not getting the most out of the GPU because of it.

      • Krogoth
      • 1 year ago

      The 8400 is only good for single- and dual-threaded loads, since it has nearly the same turbo speeds as the 8700/8700K. It starts falling behind the 8700, 2600, and 2700 once more than two threads are in use, and it stumbles further behind when they are all fully loaded. That’s because the 8400’s turbo speeds are gimped relative to its greater siblings when more than two cores are active. It is also half-locked, so your best bet for overclocking is trying to squeeze out some BCLK, but good luck with that; PCIe peripherals really do not like running beyond spec. If you want to overclock a Coffee Lake chip, you might as well go with an 8600K/8700K.

        • HERETIC
        • 1 year ago

        That’s why I think the 8600 is the real gem; it matches the 8700 in most games.
        The extra 300 MHz is just enough to make a difference: 4.2 GHz on 4 cores and 4.1 GHz on 6 cores.

        And then the 8600K should give you around 4.6 GHz on all cores for a sensible 24/7 OC.
        Hard to justify the extra 25% cost across CPU, motherboard, and cooler for a 1 to 2% improvement
        in games over the 8600, though.

      • ronch
      • 1 year ago

      Kinda reminds me of how some matches end up in the WWF/E.

        • chuckula
        • 1 year ago

        Well that would explain the suspiciously placed folding chair in the TR test labs.

          • NTMBK
          • 1 year ago

          AND IT’S LISA SU COMING OFF THE TURNBUCKLE!!!

            • chuckula
            • 1 year ago

            I would not want to face Lisa Su on WWE or in the Octagon.

      • abiprithtr
      • 1 year ago

      This means the 8700k loses to the 8400 in at least one way when it comes to how it performs in games.

      Not to forget that the 8400 is a small winner in games but a big loser in productivity – relative to these two, the 2600X, and (probably) the 2600, of course.

        • thx1138r
        • 1 year ago

        I think you’re forgetting the price. While the 8400 does lose to the 2600X by a relatively large 25% or so in productivity apps, it is $50 or so cheaper and does manage to beat it by a few fps in games.

        I don’t play games much these days, so I’d plump for the 2600x, but most moderate to hard-core gamers would be better off with the 8400.

          • abiprithtr
          • 1 year ago

          If you see a +1 against your comment, that’s me.
          I wasn’t aware that the 8400 was cheaper than the 2600X by so much.

          Two things I said are still true, though:
          a.) The 8700/8700K is too costly for the little improvement it offers over the 8400.
          b.) The 8400 loses big in productivity (this sounds better than saying it is a big loser).

          Cheers

    • IGTrading
    • 1 year ago

    How about showing some results on the viewer’s side?

    Apparently that’s where the differences really are and that should be the whole point.

    AMD Ryzen & Core i7 are both good enough for gaming on your own PC, but which of them is better able to do constant high-quality streaming at the same time?

    This is a very interesting read and paints a completely different picture once you increase the streaming quality a bit (not much, just 20%): [url]https://www.gamersnexus.net/hwreviews/3287-amd-r7-2700-and-2700x-review-game-streaming-cpu-benchmarks-memory/page-2[/url]

      • chuckula
      • 1 year ago

      You paid Intel Shill!

      What about some Dell from 1999 that Dell refused to ship with RyZen!

      WHAT ABOUT IT!

      • Jeff Kampman
      • 1 year ago

      We’re not going into the level of depth that GN is because it’s honestly a lot easier to just look at the stream on a second machine and verify that it’s not dropping more than a handful of frames over the course of a benchmark (if that).

      As I noted, if a CPU noticeably dropped frames in our testing configuration, it failed on a pass/fail grade and was left out of the results. I’d rather not spend time plotting degenerate cases that our eyes can already tell us are degenerate.

        • IGTrading
        • 1 year ago

        Aha, OK.

        I was just surprised by the difference between the behavior on the gaming machine, where the two chips are pretty well matched, and the streaming result itself, which shows dramatic differences between AMD Ryzen and the Core i7.

        For the gamer and occasional streamer, it is easy to recommend AMD or Intel without thinking too much about it.

        For the streamer, it looks like AMD Ryzen is by far the superior solution.

        It depends on whom the recommendation and research/review are addressed to.

    • Kretschmer
    • 1 year ago

    Quick question: I know you picked these games because they stress the CPUs in question, but does that make these games outliers? E.g. did you test many other games that saw little difference between these CPUs? I’m always curious if the “average” game is still CPU bound or not at 120Hz.

      • Anonymous Coward
      • 1 year ago

      The “average” game I play thinks a years-old quad-core i5 is pretty spiffy.

    • dodozoid
    • 1 year ago

    “The Core i7-8700K remains the best high-refresh-rate gaming chip on the planet, and that’s before we tap into its formidable overclocking potential.”

    Only that, according to Anandtech’s Ian Cutress, OC tools will often force HPET on in the OS, and that does ugly things to Intel platforms after the Smeltdown fixes…

      • Jeff Kampman
      • 1 year ago

      Multiplier overclocking through the firmware is not going to cause the issues Anandtech experienced.

        • dodozoid
        • 1 year ago

        Fair enough.
        It’s just a thing to watch out for now, since it apparently wasn’t an issue before the fixes.

          • jihadjoe
          • 1 year ago

          Who even messes with bclk on the 8700k?

            • dodozoid
            • 1 year ago

            It doesn’t have to be messing with BCLK. From what I heard, even quite a lot of monitoring or fan-control programs force HPET – basically every OC app that requires a reboot probably does it (Ryzen Master does). And I guess people who bother to OC their CPU tend to use tools like that.

            Edit: apparently out-of-date info about Ryzen Master; sincere apologies.

            • Kretschmer
            • 1 year ago

            Do you have more information on this? I use XTU and Afterburner and would be bummed if they cause big slowdowns.

            • dodozoid
            • 1 year ago

            No first-hand information. There was an update in Anandtech’s article – the issue seems more complicated; the takeaway is don’t force HPET on newer Intel platforms…

            • Shobai
            • 1 year ago

            Ryzen Master does not, and hasn’t since V1.0.1 [i.e. since 11th April ’17]

        • utmode
        • 1 year ago

        We need to have a look at the accuracy of the timer under full load on an Intel desktop system when HPET isn’t forced on. We just can’t take their word for it. If the timer in the Intel system is not accurate, then it’s all in vain.

          • chuckula
          • 1 year ago

          Yeah, so since you aren’t actually genuine in that troll lemme clue you in: HPET was already turned off by default on these products [b]INCLUDING RYZEN SYSTEMS[/b]. Meaning that you are basically saying the vast majority of systems operating right now all have incorrect timers. Stop acting like you are on some crusade for truth & justice here when you clearly don't know the technology and just want to find a way to justify why Anand got it wrong the first time and had to post a major retraction.

            • MOSFET
            • 1 year ago

            Probably as simple as Ian’s history of competitive overclocking.

            • utmode
            • 1 year ago

            [‘HPET was already turned off by default on these products INCLUDING RYZEN SYSTEMS.’]
            In that case, we should have tested with HPET on for both systems.

            [‘Meaning that you are basically saying the vast majority of systems operating right now all have incorrect timers.’]
            Which is not an issue if you are not using the system for time-interval-sensitive tasks.
            [‘Stop acting like you are on some crusade for truth & justice here when you clearly don’t know the technology and just want to find a way to justify why Anand got it wrong the first time and had to post a major retraction.’]
            Their intention was to make sure the test system’s event timer is accurate. Without HPET forced on, AMD and Intel systems’ time-interval accuracy might not be good enough. We are talking about nanoseconds here. That’s why I would like to see how accurate AMD and Intel systems’ time intervals are when HPET is left at its default. Or Intel and AMD can remove HPET support from their systems, saying RDTSC is as good as HPET and HPET is resource-hungry. Also, don’t get personal; we are on a tech-review site, not WordPress.

            • Redocbew
            • 1 year ago

            You should read (or re-read) the follow-up Anandtech posted. Seems like maybe you didn’t quite get it the first time.

          • dodozoid
          • 1 year ago

          The difference in Intel’s numbers isn’t from different timekeeping but from the overhead introduced by calling the HPET.
          Apparently, newer Intel processors aren’t designed around software querying this particular timer very often, and on top of that, system calls like this incur penalties connected to the Smeltdown fixes.
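          For anyone curious what “overhead from calling the timer” looks like in practice, a rough way to get a feel for it is to time a tight loop of high-resolution clock queries. This is only a ballpark sketch – Python’s perf_counter sits on top of whatever clock source the OS exposes, which may or may not be the HPET – not a reproduction of Anandtech’s methodology:

          import time

          N = 1_000_000
          start = time.perf_counter()
          for _ in range(N):
              time.perf_counter()          # each call queries the OS's high-resolution clock source
          elapsed = time.perf_counter() - start
          print(f"~{elapsed / N * 1e9:.0f} ns per clock query")
          # When the underlying source is the HPET, each query is far more expensive than a
          # TSC read, and post-Meltdown system-call overhead makes it worse.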

    • Kretschmer
    • 1 year ago

    Thanks for such a meaty review; TR remains the best in the business. I’m glad I sprang for DDR4-3200 with my 7700K and that it has held up so well.

    • Eversor
    • 1 year ago

    Great review Jeff. I am puzzled with some of your comments on the last page though.

    You say: “oh, if you increase the resolution the CPUs are not the bottleneck.” This is true. But isn’t the point of CPU benchmarking in games to find the best one, so that it lasts longer down the road? Sure, the engines will change, but if you have a fast CPU, the faster CPU will make a difference when you upgrade the GPU.
    When engines get more complex, it is likely the CPU will become a bottleneck.

    Benchmarking CPUs at very high resolution is a good data point if you want to hit some targets now without spending too much, but it is not useful at all for revealing the proper difference in CPU performance – if anything, it hides the real performance delta. Is the Intel CPU idling needlessly even at 163 fps in Crysis 3? Is it an engine or GPU bottleneck?

      • Jeff Kampman
      • 1 year ago

      All I’m trying to do is to stave off the idea that this particular method of benchmarking (high frame rates at relatively low resolutions) is the only measure of a CPU’s gaming performance. It might be the one where the CPU matters most, sure, but I just want to be clear about the scope of applicability of the results since not everybody games the same way or has the same preferences (some much prefer high resolution versus high refresh rate, for example, and in that case your CPU choice needs to be motivated by different things than “moar FPS”).

      I don’t think any of our CPUs are “idling,” as you put it; I carefully chose these titles and settings precisely because they do max out one to [i]n[/i] cores (or come reasonably close).

        • Eversor
        • 1 year ago

        Ok, I don’t agree but I do see your point.

        After writing this I searched around a bit and found a review that pretty much sums up my point: at 720p Intel has a 116% advantage over 2700X, which at 1080p drops to 107%. This is the average across all the benchmarks.

        Is it reasonable to ask for some measure of CPU load along with the FPS measurements? That way one can spot whether there is a GPU bottleneck in there. You know, for the pain-in-the-ass readers like me 🙂

        That said, many kudos for adding multiple DDR4 speeds, as it does have a significant impact on some CPUs when it comes to gaming. Keep it up.

      • Redocbew
      • 1 year ago

      For a system using a 1080Ti I wouldn’t call 1080p a “very high resolution”, and what exactly do you think is going to happen in future game engines that would obviate the GPU in this way?

        • Phartindust
        • 1 year ago

        1080p can still be stressful. Looking at the charts, these CPUs rarely exceed 144FPS by much, and there are plenty of 1080p 144Hz monitors in use. Some with even higher refresh rates. So for me 1080p is still a relevant resolution to test with.

          • Redocbew
          • 1 year ago

          For a CPU test it’s quite relevant. I was just confused at what spooky thing Eversor might have had in mind.

    • Blytz
    • 1 year ago

    Hey Jeff (gaming and other benchmarks)

    How was the clock-for-clock comparison between the Ryzens and their direct descendants?

    Like the 1600X vs. the 2600X (at, say, 4 GHz).

      • HERETIC
      • 1 year ago

      [url]https://www.techspot.com/article/1616-4ghz-ryzen-2nd-gen-vs-core-8th-gen/[/url]

    • Chrispy_
    • 1 year ago

    Thanks for the updated figures Jeff.

    I’m not suggesting you do this now, but it would be cool if, in future articles where you test with varying RAM speeds, you could add a third scatter plot of 99th-percentile FPS per dollar for the platform cost, not just the CPU.

    Effectively, the new Ryzen chips get a 3-5% improvement with the bump to DDR4-3400, but that stuff also costs $50 more than DDR4-2933 for a 16 GB kit. Adding that into the scatter plots would make it obvious to the casual reader whether money spent on upgraded RAM might be better or worse spent on a different CPU instead.
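    Something like that metric is easy enough to mock up; the sketch below uses made-up prices and fps numbers purely to illustrate the calculation, not figures from this review:

    # Hypothetical 99th-percentile-FPS-per-platform-dollar comparison (all numbers invented).
    platforms = {
        # name: (99th-percentile fps, CPU $, motherboard $, 16 GB RAM kit $)
        "CPU A + DDR4-2933": (120.0, 300, 130, 160),
        "CPU A + DDR4-3400": (125.0, 300, 130, 210),
    }

    for name, (fps99, cpu, mobo, ram) in platforms.items():
        cost = cpu + mobo + ram
        print(f"{name}: {fps99 / cost:.3f} 99th-percentile fps per platform dollar")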

    • Shobai
    • 1 year ago

    The 8700K + 2666 MT/s RAM bucks the conventional wisdom, in the first couple of games at least – not what I was expecting, that’s for sure.

    Regarding the streaming tests, I note that Gamer’s Nexus (iirc) measured the viewer-side framerate as well in their testing, which added an additional dimension to their results.

    • strangerguy
    • 1 year ago

    This may be an unpopular opinion: benchmarking SP games has the advantage of 100% testing consistency, but it’s hardly representative of MP-heavy games like MMOs and modded CS:GO, where the CPU demands are a lot greater.

      • Jeff Kampman
      • 1 year ago

      I’m sympathetic to this position, but that’s why we’re demonstrating various cases of CPU scaling in games (single-thread dependency, multi-thread scaling), so that folks with a CPU-bound game can sort of figure out how these chips will map onto their use case.

      I can play with Dota 2’s replay function and see whether it tells us anything interesting, perhaps, but live play of networked multiplayer cannot be used as anything more than experiential benchmarking.

        • Firestarter
        • 1 year ago

        AFAICT, most CPU-bound MP games have a single thread that dictates performance as long as the CPU has 4 or more cores; this is definitely the case for some older games like Planetside 2. I’d expect this to remain a factor but become a bit less common in the future, which would make Ryzen’s performance comparatively better with newer MMOs/competitive games.

          • synthtel2
          • 1 year ago

          I haven’t updated my various Planetside ramblings over in the forums lately, but its thing (at least on Zen) turns out to be memory performance. Average utilization in heavy fights at various core/thread counts (disabling some in BIOS) goes about like:

          2C4T ~ 90% ~ 3.6 threads
          4C4T ~ 85% ~ 3.4 threads
          4C8T ~ 50% ~ 4.0 threads
          8C16T ~ 34% ~ 5.4 threads

          4C4T -> 4C8T -> 8C16T is hardly worth anything as far as subjective difference in-game, though. OTOH, JEDEC 2133 versus getting everything dialled in at 2933C16 is a massive difference in big fights. Absolute latency seems to matter more than bandwidth. Lately I’ve been cinching down tRRD and tFAW (2933 autos and XMP put it in the 6-8-32 ballpark, I’m running 4-5-23 right now and will probably end up at 5-6-20), and it seems to really like those too.

          (Planetside uses a main thread, two other dedicated threads that both look to be render-related, and a pool of workers. At lower graphics settings with a lot going on at 8T+, the main thread is active 50-75% of the time, and the render threads look to be well out of the way. Boosting the main thread would therefore probably be worth something, [s]but I don't know how to isolate which threads are hogging the memory access.[/s] Edit: maybe I do know how, I'll try a thing later.)
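          The “threads” column in those utilization figures appears to be just the hardware thread count multiplied by the average utilization; a trivial sketch of that conversion, using the numbers quoted above:

          # "Effective threads" = hardware threads x average utilization (figures from the post above).
          configs = {
              "2C4T":  (4, 0.90),
              "4C4T":  (4, 0.85),
              "4C8T":  (8, 0.50),
              "8C16T": (16, 0.34),
          }

          for name, (hw_threads, util) in configs.items():
              print(f"{name}: ~{hw_threads * util:.1f} threads' worth of work")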

            • Firestarter
            • 1 year ago

            Kind of like Arma3 then, right? I imagine it could be a critical path that iterates through a linked list of some sort that completely throws the prefetcher for a loop, whether in the PS2 engine itself or in the GPU drivers

            edit: I see dragontamer5788 already brought up linked lists and such

            • synthtel2
            • 1 year ago

            I thought ARMA was more bandwidth-sensitive than latency-sensitive?

            Planetside has performance issues in a few places, but the worst (of the purely client ones) seems to be early render or very late sim (could be the process of porting sim data out to the renderer). If there’s a lot going on in the main camera’s frustum, it bogs down even if all the action is on the other side of a hill or something (strongly occluded), and looking the other direction speeds it up nicely. It also seems to have a bit of bimodality that I interpret as a cache not being big enough in many cases.

            Interestingly, high or ultra shadows (the settings with very large frustums) increase load substantially on all threads (except sometimes one of the two render threads), implying again that the main thread expends quite a bit of effort shuffling data off to the renderer.

            • synthtel2
            • 1 year ago

            No Planetside threads stand out as particularly memory-sensitive. Running 4T of HCI Memtest alongside the game bogs down the framerate by about half, but doesn’t do much to the time each Planetside thread spends running. Main and the less-heavy of the two render threads show 8-10% increases, the heavier render thread doesn’t change, jobs vary in both directions by similarly small amounts, and overall CPU time taken by the process doesn’t change.

            • synthtel2
            • 1 year ago

            In case anyone’s still reading this, I found the problem. The game left a log file at some point with some very interesting stuff in it, including:

            ” MutexLocks: 10276319581
            MutexWaits: 2179828962 (21.21% of MutexLocks)”

            Given how long I play the game at a time and at what average framerates, this is almost certainly upwards of 10,000 MutexWaits per frame. That’s at 8C16T with it trying to run as many threads as it probably ever does, so probably not entirely representative, but it does explain why there isn’t much difference between 4C4T and 8C16T, and gives some great hints about why it likes Intel and fast memory.
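            The back-of-the-envelope math behind “upwards of 10,000 MutexWaits per frame” works out if you assume, say, roughly an hour of logged play at around 60 fps on average – both of those session figures are assumptions, not anything stated in the log:

            mutex_waits = 2_179_828_962     # from the log excerpt above
            assumed_fps = 60                # assumed average framerate, not from the log
            assumed_hours = 1.0             # assumed length of logged play, not from the log

            frames = assumed_fps * 3600 * assumed_hours
            print(f"~{mutex_waits / frames:,.0f} MutexWaits per frame")   # roughly 10,000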

      • Krogoth
      • 1 year ago

      You will end up seeing similar deltas. The Coffee Lake chips are going to be faster because of clock speed.

      TL;DR version of this review: clock speed is still king if you only care about gaming.

      • kuraegomon
      • 1 year ago

      Can you suggest a solution to this problem? I can’t think of any way to achieve repeatable multiplayer benchmarking.

        • DPete27
        • 1 year ago

        What about a LAN match where the other “players” are automated to perform the exact same move/shoot sequence each time?…….not sure if that defeats the purpose….

          • kuraegomon
          • 1 year ago

          You’d require multiple scripted scenarios in that case. And the setup costs would be prohibitive for most (all?) review sites.

          This is one case where only game developers could fill the need. They’d have to capture representative gameplay sessions from their servers (possibly from internal servers, because privacy), then provide a scripting engine that would allow assigning the actions of one selected player to be run on the system under test, and all the other networked actions to be played back simultaneously. There are many complications that I’m not capturing here, but this is the only repeatable basic approach I can visualize.

        • hansmuff
        • 1 year ago

        What about Starcraft II? It has a rock-solid recording and replay function that actually plays the match with all the recorded inputs from all players.

          • DPete27
          • 1 year ago

          Starcraft II is also the type of game that would (predictably) be heavily CPU-bound!!

            • kuraegomon
            • 1 year ago

            Are there any more recent games whose record/replay functionality measures up to the SC2 standard?

            • DPete27
            • 1 year ago

            Crysis 3 isn’t recent. Games are picked for reviews for various reasons. Some titles are included because they’re currently popular, others are included because they set a high demand on hardware (CPU/GPU) that ages well (Crysis 3). I have no idea if SC2 is multi-core optimized, sensitive to RAM bandwidth, API, etc etc, but if it serves the purpose of [reliably] simulating an online gaming demand, then the release date of the game is less important.

            • kuraegomon
            • 1 year ago

            Crysis 3 is an outlier. And 4 years more recent than SC2 – which is a DirectX [b][i]9[/i][/b] game. Obviously, a game's age and technology used are not unrelated, and this is precisely why I asked about more recent games – not because I really care that a given game is less than X years old, but because SC2 [i]specifically[/i] is demonstrably too outdated to be representative of the majority of games that would tax recent CPUs/GPUs.

            • Mr Bill
            • 1 year ago

            “But can it stream Crysis!?”…
            .
            .
            .
            I’ll show myself out

        • Firestarter
        • 1 year ago

        Repeatable is impossible, but simultaneous benchmarking could work. Having your two test systems spectate a player should result in the same workload for both systems. Alternatively, there are games where multiple players can be in a single vehicle with the same perspective (Planetside 2 comes to mind); that would test the complete game loop. For this test that’d require two identical 1080 Tis, though (along with all the other hardware), and even then you’ll have variance between them, as one might boost longer/higher than the other.

      • Kretschmer
      • 1 year ago

      I totally agree, but – save for games with replay functionality – there is no way to create apples-for-apples numbers for a multiplayer match. It IS eyeroll-inducing to see benchmark numbers that are “I walked through this quiet town five times and averaged the results,” but it’s an impossible problem without insane cost and overhead.

      TR is doing the best they can. I’d rather this than nothing.
