AMD’s Ryzen 7 2700X and Ryzen 5 2600X CPUs reviewed

AMD’s Ryzen processors have indubitably reshaped the mainstream PC in the year since their release. Four-core, eight-thread CPUs reigned over that market for the better part of eight years, but first-generation Ryzen parts brought core and thread counts typical of high-end desktop chips within range of the average builder for the first time.

The Zen microarchitecture has since proven itself worthy in a broad range of gaming and productivity tasks, and enthusiast-friendly perks like capable stock coolers, universally unlocked multipliers, and soldered heat spreaders have won the hearts of many a DIY builder. Some fundamental disadvantages of the Zen core versus Intel’s Skylake architecture, like SIMD units that provide half the potential throughput of the blue team’s cores, will require major architectural changes if AMD chooses to address them. Massive re-architecting like that will likely need to wait for the move to 7-nm-class process technologies and the bounty of extra transistors they could offer.

The Ryzen 7 2700X and Ryzen 5 2600X

AMD has been listening to Ryzen owners over the past year for more pragmatic changes it can implement to make its products more competitive, however, and it came up with a few relatively straightforward fixes. First and foremost, enthusiasts have demanded a better-behaved and lower-latency integrated memory controller for use with high-speed DDR4 RAM, important characteristics for feeding as many as eight cores with dual-channel memory. Overclockers have pined for more potential from Ryzen CPUs, many of which top out at 3.8 GHz to 4 GHz all-core speeds. Folks who don’t overclock want higher stock clock speeds, as well. Finally, AMD felt it could reduce access latencies at the various levels of the processor’s cache and memory hierarchy to improve performance.

At least on paper, AMD has ticked off every box on that wish list with the Zen+ microarchitecture that underpins second-generation Ryzen CPUs. The company says the die size, transistor count, and fundamental logic of the Zen core remain unchanged in the transition from GlobalFoundries’ 14-nm FinFET process to its 12LP process.

Instead, the company is reaping the benefits of the better transistors available from that process to improve performance on critical paths of the chip. In sum, that means higher peak clock speeds, lower cache latencies, a more robust memory controller, and lower voltage requirements for the same performance.

With the improvements of 12LP in mind, some might find it odd to see that the TDP of the top-end Ryzen 7 2700X has actually increased 10 W, to 105 W. Part of this change stems from second-gen Ryzen’s Precision Boost 2 dynamic-voltage-and-frequency-scaling technology; more on that chip’s specs in a moment.

Instead of first-generation Ryzen CPUs’ simple concept of single-core, two-core, and all-core boost speeds, Precision Boost 2 allows AMD’s SenseMI power- and temperature-monitoring tech to vary second-gen Ryzen parts’ boost clocks more or less linearly with load, from one loaded core to as many as eight and from one thread to sixteen.
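To illustrate the two approaches, here’s a minimal sketch in Python of a stepped first-gen boost table versus a linear Precision Boost 2-style ramp. The clock values and the straight-line interpolation are our own illustrative assumptions; AMD’s actual algorithm weighs SenseMI’s power, current, and thermal telemetry in ways the company doesn’t publicly document.

    # Illustrative model of first-gen Precision Boost (stepped) versus
    # Precision Boost 2 (roughly linear in the number of loaded cores).
    # All clock values are placeholders, not AMD's actual boost tables.

    def precision_boost_1(loaded_cores, two_core_boost=4.0, all_core_boost=3.7):
        # First-gen behavior: full boost with up to two loaded cores,
        # then an abrupt fall back to the all-core boost speed.
        return two_core_boost if loaded_cores <= 2 else all_core_boost

    def precision_boost_2(loaded_cores, max_boost=4.3, all_core=4.0, n_cores=8):
        # Zen+ behavior, approximated: interpolate smoothly between the
        # peak single-core boost and the sustained all-core speed.
        t = (loaded_cores - 1) / (n_cores - 1)
        return max_boost - t * (max_boost - all_core)

    for n in range(1, 9):
        print(f"{n} loaded cores: PB1 {precision_boost_1(n):.2f} GHz, "
              f"PB2 {precision_boost_2(n):.2f} GHz")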

Precision Boost 2 means a second-generation Ryzen chip can take full advantage of the power and thermal headroom available to it, and AMD says it’s intentionally allowing second-generation Ryzen parts to burn more power under load to fully realize Precision Boost 2’s potential in cases where the original Precision Boost would have had to fall back to an all-core boost speed. To be clear, higher power usage alone should not be seen as a regression: at least in theory, this decision makes perfect sense if it allows second-gen Ryzen parts to consume less energy over the course of a task thanks to higher performance per watt.

The move to a more linear dynamic voltage and frequency scaling curve means that the behavior of AMD’s Extended Frequency Range (XFR) technology is changing, as well. XFR 2 does away with the idea of a fixed frequency increase across single-core and all-core workloads, as seen on all first-generation Ryzen products to some degree. Instead, XFR 2 works more like the Mobile XFR feature we first saw on Raven Ridge mobile chips.

The peak Precision Boost 2 speed on any Ryzen second-generation part will still rise (by about 50 MHz, in our experience) if better-quality cooling is installed, but the bigger change is that users should see higher sustained frequencies on multi-core workloads with a more capable cooler, and hence better performance in those tasks. That sustained clock-speed improvement is the second key way that AMD is improving performance with its second generation of Ryzen parts.

All second-generation Ryzen CPUs enjoy a more robust memory controller than first-generation Ryzens, too. As it does with Raven Ridge desktop parts, AMD rates second-gen Ryzens for DDR4-2933 support from single-rank, two-DIMM memory configurations (although only from motherboards with six PCB layers, oddly enough). Using more DIMMs or dual-rank memory will cause stock memory speeds to drop off, just as with first-generation Ryzen parts. Even with that in mind, our hands-on testing with AMD’s demo X470 systems at an event in New York suggested that overclocked memory speeds in the range of 3600 MT/s with two DIMMs could be achievable with some care. That’s a major improvement over first-gen Ryzens, where speeds greater than 3200 MT/s proved difficult to reliably achieve without exacting choices of memory kits and motherboards.
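For a sense of what those memory speeds buy, peak theoretical bandwidth for a dual-channel DDR4 configuration is just the transfer rate times the width of the bus. A quick back-of-the-envelope calculation (theoretical peaks only; real-world throughput will land lower):

    # Peak theoretical DDR4 bandwidth:
    # transfers per second x 8 bytes per 64-bit channel x number of channels.
    def ddr4_bandwidth_gbs(rate_mts, channels=2):
        return rate_mts * 1e6 * 8 * channels / 1e9

    for rate in (2933, 3200, 3400, 3600):
        print(f"DDR4-{rate}, dual channel: {ddr4_bandwidth_gbs(rate):.1f} GB/s")
    # DDR4-2933 works out to about 46.9 GB/s; DDR4-3600 to about 57.6 GB/s.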

 

Sizing up the lineup

As we revealed in last week’s teaser, four second-generation Ryzen desktop CPUs are launching today.

Model           Cores/threads  Base clock (GHz)  Boost clock (GHz)  Total cache  TDP    Stock cooler        Suggested price
Ryzen 7 2700X   8/16           3.7               4.3                20 MB        105 W  Wraith Prism (LED)  $329
Ryzen 7 2700    8/16           3.2               4.1                20 MB        65 W   Wraith Spire (LED)  $299
Ryzen 5 2600X   6/12           3.6               4.2                19 MB        95 W   Wraith Spire        $229
Ryzen 5 2600    6/12           3.4               3.9                19 MB        65 W   Wraith Stealth      $199

The Ryzen 7 2700X is the top-end part among these, and it’ll occupy the top-end slots formerly filled by the Ryzen 7 1700X and Ryzen 7 1800X. Its eight cores and 16 threads run at 4.3 GHz peak speeds and a 3.7 GHz base clock. It maintains the 4 MB of total L2 cache and 16 MB of shared L3 from past Ryzen 7 parts. Compare those clock speeds to the 3.6 GHz base clock and 4 GHz peak speeds of the Ryzen 7 1800X (or 4.1 GHz with first-gen XFR accounted for).

Model           Cores  Threads  Base clock  Boost clock  XFR      TDP   Suggested price (launch)
Ryzen 7 1800X   8      16       3.6 GHz     4.0 GHz      100 MHz  95 W  $499
Ryzen 7 1700X   8      16       3.4 GHz     3.8 GHz      100 MHz  95 W  $399
Ryzen 7 1700    8      16       3.0 GHz     3.7 GHz      50 MHz   65 W  $329
Ryzen 5 1600X   6      12       3.6 GHz     4.0 GHz      100 MHz  95 W  $249
Ryzen 5 1600    6      12       3.2 GHz     3.6 GHz      50 MHz   65 W  $219

In a change from its approach with the heatsink-less first-generation Ryzen X-branded CPUs, AMD will bundle its new Wraith Prism cooler in the box with the 2700X. The Wraith Prism improves AMD’s already-solid top-end boxed cooler with four direct-contact heat pipes, and it adds a little extra bling to the Wraith with an addressable RGB LED ring and a translucent, RGB LED-illuminated fan. AMD told me that it expects the Wraith Prism will perform about on par with enthusiast-favorite air coolers like the Cooler Master Hyper 212 Evo.

AMD wants builders to think “value” with the Ryzen 7 2700X, and it’s pricing the highest-end second-gen Ryzen chip so far at just $329—down from $499 for the Ryzen 7 1800X at launch. That’s also $40 short of the Core i7-8700K’s suggested price, and the highest-end Coffee Lake chip doesn’t even come with a stock cooler in the box. AMD continues to solder the integrated heat spreader atop the second-generation Ryzen die, as well, a move that could help overclockers extract the most performance from the chip without the exotic and risky measure of delidding it and re-pasting the heat spreader—an operation Coffee Lake builders pushing their systems to the limit are already quite familiar with.

The Ryzen 7 2700 will take over the eight-cores-in-65-W spot formerly filled by the Ryzen 7 1700. The $299 2700 will offer a 4.1 GHz peak clock speed and a 3.2 GHz base clock, and it’ll come with the same capable RGB LED-illuminated Wraith Spire cooler in the box as its forebear had. Recall that the Ryzen 7 1700 provided a 3.7 GHz peak speed and a 3 GHz base clock—all without the benefit of Precision Boost 2’s more granular control under load. The 2700 should offer a major performance upgrade over its predecessor. Overclockers looking to push a Ryzen second-gen part while saving a few bucks should like this chip, too.

Two second-generation Ryzen 5 processors will launch alongside the new Ryzen 7 lineup. The Ryzen 5 2600X boasts six cores running at 4.2 GHz peak speeds and a 3.6 GHz base clock. It’ll come with 3 MB of L2 and 16 MB of L3 cache enabled, and it too will come with the Wraith Spire cooler. The 2600X delivers anywhere from 100 MHz to 200 MHz of peak clock speed boost over the Ryzen 5 1600X’s peaks, though its base clock remains unchanged. For builders after the best stock-clocked six-core performance from a Ryzen 5, AMD will oblige for $229.

The Ryzen 5 2600 may be the most important chip in the Ryzen second-generation lineup. I thought the Ryzen 5 1600 was worthy of a rare TR Editor’s Choice award when I reviewed it, and the Ryzen 5 2600 improves on the formula with a nice dash of higher clock speeds. This second-gen six-core chip promises 3.9 GHz peak clocks and a 3.4 GHz base clock, up anywhere from 100 MHz to 200 MHz over its forebear. AMD has slightly dulled the appeal of the 2600 by boxing the entry-level Wraith Stealth cooler with it instead of the beefy Wraith Spire that crowned the Ryzen 5 1600. Still, builders who want a stock-clocked six-core Ryzen 5 shouldn’t find many other shortcomings in this $199 chip, and overclockers are likely going to toss the stock cooler, anyway.

Now that we’ve seen what the second generation of Ryzen CPUs has to offer, let’s get to testing.

 

Our testing methods

As always, we did our best to deliver clean benchmarking numbers. We ran each benchmark at least three times and took the median of those results. Our test systems were configured as follows:

Processor        AMD Ryzen 5 1600X, Ryzen 7 1800X, Ryzen 5 2600X, and Ryzen 7 2700X
CPU cooler       EK Predator 240-mm liquid cooler
Motherboard      Gigabyte X470 Aorus Gaming 7 Wifi
Chipset          AMD X470
Memory size      16 GB (2x 8 GB)
Memory type      G.Skill Sniper X DDR4-3400 (rated) SDRAM
Memory speed     3400 MT/s (actual)
Memory timings   16-16-16-36 1T
System drive     Samsung 960 EVO 500 GB NVMe SSD

 

Processor        Intel Core i7-7700K
CPU cooler       Corsair H115i Pro 280-mm liquid cooler
Motherboard      Asus ROG Strix Z270E Gaming
Chipset          Intel Z270
Memory size      16 GB
Memory type      G.Skill Flare X DDR4-3200 (rated) SDRAM
Memory speed     3200 MT/s (actual)
Memory timings   14-14-14-34 2T
System drive     Samsung 960 Pro 500 GB

 

Processor        Intel Core i7-8700K                       Intel Core i5-8400
CPU cooler       Corsair H110i 280-mm liquid cooler
Motherboard      Gigabyte Z370 Aorus Gaming 7
Chipset          Intel Z370
Memory size      16 GB (2x 8 GB)
Memory type      G.Skill Sniper X DDR4-3400 (rated) SDRAM  G.Skill Flare X DDR4-3200 (rated) SDRAM
Memory speed     3400 MT/s (actual)                        3200 MT/s (actual)
Memory timings   16-16-16-36 2T                            14-14-14-34 2T
System drive     Samsung 960 Pro 500 GB

They all shared the following common elements:

Storage                  2x Corsair Neutron XT 480 GB SSDs, 1x HyperX 480 GB SSD
Discrete graphics        Nvidia GeForce GTX 1080 Ti Founders Edition
Graphics driver version  GeForce 385.69
OS                       Windows 10 Pro with Fall Creators Update
Power supply             Seasonic Prime Platinum 1000 W

Some other notes on our testing methods:

  • All test systems were updated with the latest firmware and Windows updates before we began collecting data, including patches for the Spectre and Meltdown vulnerabilities where applicable. As a result, test data from this review should not be compared with results collected in past TR reviews. Similarly, all applications used in the course of data collection were the most current versions available as of press time and cannot be used to cross-compare with older data.
  • Our test systems were all configured using the Windows Balanced power plan, including AMD systems that previously would have used the Ryzen Balanced plan. AMD’s suggested configuration for its CPUs no longer includes the Ryzen Balanced power plan as of Windows’ Fall Creators Update, also known as “RS3” or Redstone 3.
  • Unless otherwise noted, all productivity tests were conducted with a display resolution of 2560×1440 at 60 Hz. Gaming tests were conducted at 1920×1080 and 60 Hz.

Our testing methods are generally publicly available and reproducible. If you have any questions regarding our testing methods, feel free to leave a comment on this article or join us in the forums to discuss them.
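As a footnote on the run-it-three-times methodology above, the scoring reduces to something like the following sketch. The benchmark command here is a hypothetical stand-in, not one of our actual test scripts:

    import statistics
    import subprocess
    import time

    def run_benchmark(cmd, runs=3):
        """Run cmd several times and return the median wall-clock time."""
        times = []
        for _ in range(runs):
            start = time.perf_counter()
            subprocess.run(cmd, check=True)
            times.append(time.perf_counter() - start)
        return statistics.median(times)

    # Hypothetical wrapper script standing in for a real benchmark:
    # print(run_benchmark(["render_bmw27.cmd"]))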

 

Memory subsystem performance

Let’s kick off our tests with some of the handy memory benchmarks included in the AIDA64 utility. We’re especially eager to have a look at this benchmark’s measurement of memory latency, since that’s one of AMD’s claimed improvements in the Zen+ architecture.

As we might expect from systems running dual-channel memory at similar speeds, none of these chips pulls far ahead or falls far behind the rest of the pack. The i5-8400 and i7-7700K would likely have drawn closer still to the competition had they played nicely with the DDR4-3400 kit we used to test our Ryzen systems.

As we had hoped, the AIDA64 memory latency test shows a reduction in the time it takes to fetch data from main memory in the move from first-generation to second-generation Ryzen CPUs, all else being equal. The improvement is slight, but hey, improvements are improvements.

Some quick synthetic math tests

AIDA64 offers a useful set of built-in directed benchmarks for assessing the performance of the various subsystems of a CPU, as well. The PhotoWorxx benchmark uses AVX2 on compatible CPUs, while the FPU Julia and Mandel tests use AVX2 with FMA.

Zen+ doesn’t include any fundamental changes to the resources available from each core, so it’s no surprise that these synthetic math tests shake out about as we’ve come to expect. AMD chips take a wide lead in the AIDA64 Hash test thanks to their support for Intel’s SHA Extensions, while the Coffee Lake and Kaby Lake parts punch above similarly-provisioned Ryzens in the single-precision Julia and double-precision Mandel floating-point math tests thanks to their twice-as-wide SIMD hardware.
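It’s easy to put numbers on that “twice-as-wide” gap. Zen and Zen+ cores crack 256-bit AVX2 operations into two 128-bit halves, good for a theoretical 16 single-precision FLOPs per cycle with FMA, while Skylake-derived cores can retire two full 256-bit FMAs per cycle, or 32. A rough peak-throughput estimate follows; the per-cycle figures are the architectures’ widely-cited theoretical peaks and the clocks are approximate all-core speeds, so treat the results as ballpark numbers:

    # Peak single-precision GFLOPS = cores x GHz x FMA FLOPs per cycle.
    def peak_sp_gflops(cores, ghz, flops_per_cycle):
        return cores * ghz * flops_per_cycle

    print(peak_sp_gflops(8, 4.0, 16))  # Ryzen 7 2700X (Zen+): ~512 GFLOPS
    print(peak_sp_gflops(6, 4.3, 32))  # Core i7-8700K (Coffee Lake): ~826 GFLOPS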

 

Javascript

The usefulness of Javascript benchmarks for comparing browser performance may be on the wane, but these collections of tests are still a fine way of demonstrating the real-world single-threaded performance differences among CPUs.

At first glance, there might not seem to be anything remarkable about these numbers. Second-generation Ryzen CPUs take advantage of their slightly-lower-latency caches and slightly-higher clock speeds to outpace their forebears across the board, and that’s a welcome improvement for what was a minor pain point with first-gen Ryzen chips.

The shock comes when we compare the Core i7-8700K’s performance (and that of pretty much any chip among these) with our results from October, where the Core i7-8700K and the Core i7-7700K were pushing well over 300 in JetStream and scorching off eyebrows in our other Javascript tests, too. Whatever browser-level, operating-system-level, and firmware-level changes are mitigating Spectre and Meltdown have given Intel chips a serious gut-punch in these rapid sequences of microbenchmarks.

Compiling code with GCC

Our resident code monkey, Bruno Ferreira, helped us put together this code-compiling test. Qtbench records the time needed to compile the Qt SDK using the GCC compiler. The number of jobs dispatched by the Qtbench script is configurable, and we set the number of threads to match the hardware thread count for each CPU.
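Matching the job count to the hardware thread count is the standard trick for saturating a CPU during compilation. In script form, it’s as simple as the following sketch (a generic example, not the actual Qtbench script):

    import os
    import subprocess

    # Dispatch one compile job per hardware thread, as Qtbench does:
    # 16 jobs on the Ryzen 7 2700X, 12 on the Core i7-8700K, and so on.
    jobs = os.cpu_count()
    subprocess.run(["make", f"-j{jobs}"], check=True)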

Here’s a nice win out of the gate for the Ryzen 7 2700X. Precision Boost 2 lets the second-gen top-end part hold around 4 GHz in sustained workloads by our observation, and that impressive all-core speed lets the 2700X speed past the i7-8700K. The Ryzen 5 2600X lands squarely in between its first-generation counterparts, as it, too, can sustain 4 GHz under load. Our observations of both the Ryzen 7 1800X and Ryzen 5 1600X suggest those chips’ less-sophisticated Precision Boost algorithms only allow them to reach 3.7 GHz in all-core workloads.

File compression with 7-zip

The Ryzen 7 2700X edges out the Core i7-8700K by a little bit in the compression half of this benchmark. Move to the arguably more-common case of decompressing files, however, and even the Ryzen 5 2600X beats out Intel’s top-end Coffee Lake part. Both Ryzen 7s establish a league of their own for unpacking archives.

Disk encryption with Veracrypt

In the accelerated AES portion of this benchmark, our higher-performing parts appear to be hitting a memory-bandwidth wall. Turn things over to the elbow-grease Twofish algorithm, however, and the second-gen Ryzens once again prove their mettle. The Ryzen 5 2600X goes toe-to-toe with the Core i7-8700K, while the Ryzen 7 2700X flies far out in front.

 

Cinebench

The evergreen Cinebench benchmark is powered by Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. The test runs with a single thread and then with as many threads as possible.

The single-threaded portion of Cinebench seems a lot less harsh on the Core i7-8700K than our fast-and-furious Javascript tests. In fact, the Coffee Lake chip performs better here than it did in October. Both Ryzen second-gen parts deliver a nice boost over their predecessors, though.

Cinebench’s single-threaded mode is primarily of academic interest for a rendering benchmark. Harness every thread these chips have to offer, and a different story emerges. The Ryzen 7 2700X walks all over its first-generation forefather, while the Ryzen 5 2600X again pulls nearly even with the i7-8700K.

Blender

Blender is a widely-used, open-source 3D modeling and rendering application. The app can take advantage of AVX2 instructions on compatible CPUs. We chose the “bmw27” test file from Blender’s selection of benchmark scenes to put our CPUs through their paces.

Ryzens love to render, and that’s no less true of the 2700X with the latest version of Blender. The Core i7-8700K can’t hope to keep up.

Corona

Corona, as its developers put it, is a “high-performance (un)biased photorealistic renderer, available for Autodesk 3ds Max and as a standalone CLI application, and in development for Maxon Cinema 4D.”

The company has made a standalone benchmark with its rendering engine inside, so it was a no-brainer to give it a spin on these CPUs.

Make that three for three for AMD in our rendering tests.

Handbrake transcoding

Handbrake is a popular video-transcoding app that just hit version 1.1. To see how it performs on these chips, we converted a roughly two-minute 4K source file from an iPhone 6S into a 1920×1080, 30 FPS MKV using the HEVC algorithm implemented in the x265 open-source encoder. We otherwise left the preset at its default settings.

Even with an AVX wind at its back, the i7-8700K just can’t overtake the Ryzen 7 2700X. Same goes for the i5-8400 against the Ryzen 5 2600X.

CFD with STARS Euler3D

Euler3D tackles the difficult problem of simulating fluid dynamics. It tends to be very memory-bandwidth intensive. You can read more about it on the project page. We configured Euler3D to use every thread available from each of our CPUs.

It should be noted that the publicly-available Euler3D benchmark is compiled using Intel’s Fortran tools, a decision that its originators discuss in depth on the project page. Code produced this way may not perform at its best on Ryzen CPUs as a result, but this binary is apparently representative of the software that would be available in the field. A more neutral compiler might make for a better benchmark, but it may also not be representative of real-world results with real-world software, and we are generally concerned with real-world performance.

Perhaps because of Intel’s compiler magic, the Core i7-8700K and Core i5-8400 lead this test. The Ryzen 7 2700X still puts up a strong showing, but it can’t continue its streak of dominance.

 

Digital audio workstation performance

One of the neatest additions to our test suite of late is the duo of DAWBench project files: DSP 2017 and VI 2017. The DSP benchmark tests the raw number of VST plugins a system can handle, while the complex VI project simulates a virtual instrument and sampling workload.

We used the latest version of the Reaper DAW for Windows as the platform for our tests. To simulate a demanding workload, we tested each CPU at a 24-bit depth and a 96-kHz sampling rate, and at two ASIO buffer depths: a punishing 64 and a slightly-less-punishing 128. In response to popular demand, we’re also testing the same buffer depths at a sampling rate of 48 kHz. We added VSTs or notes of polyphony to each session until we started hearing popping or other audio artifacts. We used Focusrite’s Scarlett 2i2 audio interface and the latest version of the company’s own ASIO driver for monitoring purposes.
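The buffer depth and sampling rate together set a hard real-time deadline: the CPU must fill each buffer before the audio interface drains it, or you hear pops. The math is simple division, which is why halving either number is so punishing:

    # Time budget per ASIO buffer = samples per buffer / samples per second.
    def buffer_deadline_ms(buffer_samples, sample_rate_hz):
        return buffer_samples / sample_rate_hz * 1000

    for rate in (96_000, 48_000):
        for depth in (64, 128):
            print(f"{rate} Hz, buffer depth {depth}: "
                  f"{buffer_deadline_ms(depth, rate):.2f} ms per buffer")
    # 96 kHz at a buffer depth of 64 leaves just 0.67 ms per buffer.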

A very special thanks is in order here for Native Instruments, who kindly provided us with the Kontakt licenses necessary to run the DAWBench VI project file. We greatly appreciate NI’s support—this benchmark would not have been possible without the help of the folks there. Be sure to check out their many fine digital audio products.


Ryzen CPUs have struggled mightily with this high-sample-rate and low-buffer-depth configuration of the DAWBench VI benchmark in the past, but the improved cache-and-memory latencies and higher clock speeds available from second-generation Ryzen chips seem to offer substantial improvements in performance over their first-gen counterparts. Relaxing the buffer depth to 128 only widens the Core i7-8700K’s lead over the solid second-place standing of the Ryzen 7 2700X and the third-place finish of the Ryzen 5 2600X, though.


The DAWBench DSP test tends to treat Ryzen chips better, and at a 96-kHz sampling rate and a buffer depth of 64, the Ryzen 7 2700X actually manages to come out on top of the i7-8700K by a hair. The Ryzen 5 2600X offers a slight increase in performance over the Ryzen 5 1600X here, as well.


DAWBench VI loves Intel CPUs, and lowering the sampling rate to 48 kHz just lets the Core i7-8700K take an insurmountable lead at both buffer depths. Still, the Ryzen 7 2700X and Ryzen 5 2600X actually stay in the running instead of running out of gas as the first-generation Ryzens do here.


DAWBench DSP at 48 kHz and a buffer depth of 64 puts the Core i7-8700K and the Ryzen 7 2700X neck-and-neck, but this recipe of settings also lets the Ryzen 7 1800X wake up a bit. Relaxing the buffer depth to 128 lets the Ryzen 7 2700X stretch its legs over the 1800X to stay neck-and-neck with the i7-8700K until the bitter end. The Ryzen 5 2600X stakes out a nice middle ground between the Ryzen 5 1600X and Ryzen 7 1800X.

While it’s hard to draw a one-to-one relationship between a given improvement in Zen+ and our second-gen Ryzens’ performance in DAWBench, it appears the combo of lower cache latencies, lower memory latencies, and higher clocks is a big step in the right direction for these parts, especially in the DAWBench VI benchmark. I hope that AMD can continue refining its chips along these lines for future DAW performance gains. For now, it appears that folks who want a true DAW all-rounder are still best suited by the Core i7-8700K, but second-gen Ryzen parts are more competitive here than ever.

 

A quick look at power consumption and efficiency

We can get a rough idea of how efficient these chips are by monitoring system power draw in Blender and combining that information with the convenient fact that one watt is one joule expended per second. Our observations have shown that Blender draws about the same wattage at every stage of the bmw27 benchmark, so it’s an ideal guinea pig for this kind of calculation. First, let’s revisit the amount of time it takes for each of these chips to render our Blender “bmw27” test scene:

The Ryzen 7 2700X comes out on top here, of course, but many have wondered just what its 105-W TDP suggests for overall power draw and efficiency. Recall that AMD says it pushed the TDP of the 2700X upward to allow the chip to better take advantage of its Precision Boost 2 headroom. Let’s see just how that plays out.

Our Watts Up shows that the 2700X is indeed drawing more power from the wall than its predecessor in Blender, and by no small margin. We could stop here and say that the 2700X is a power hog, but that would be half an analysis. We have a better way of thinking about this problem: estimating the total amount of energy each of these systems expends to complete a given task. This simple calculation multiplies the time each chip takes to complete the Blender “bmw27” benchmark by our observed power draw at the wall to estimate that task energy figure.
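In code form, the estimate is a single multiplication. The wattage and time figures below are placeholder examples rather than our measured results, which live in the graphs, but they show why a hungrier chip can still come out ahead on total energy:

    # Task energy (kJ) = average system power draw (W) x time to finish (s) / 1000,
    # since one watt is one joule per second.
    def task_energy_kj(avg_watts, seconds):
        return avg_watts * seconds / 1000

    # Hypothetical chips: the faster one draws more power but finishes
    # sooner, so it expends less total energy on the task.
    print(task_energy_kj(avg_watts=180, seconds=300))  # 54.0 kJ
    print(task_energy_kj(avg_watts=220, seconds=230))  # 50.6 kJ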

So that’s something. Despite its substantially higher instantaneous power draw, the Ryzen 7 2700X doesn’t seem to expend much more energy in total than the Ryzen 7 1800X before it. Given the 2700X’s higher performance in Blender, that looks like a performance-per-watt win to us. The same is true of the swift Ryzen 5 2600X, whose higher performance translates to a potentially lower task energy expended compared to the Ryzen 5 1600X.

Mash up these data sets into a convenient scatter plot, and we get a visualization of the tradeoffs involved in reaching the highest performance and the lowest energy consumption. While the Core i7-8700K does consume the least energy to complete our render, it doesn’t complete it the most quickly. The Ryzen 7 2700X expends about 12% more energy to deliver a 17% lower time-to-completion in our benchmark. That tradeoff might be worth it for builders whose time is worth more than the minor increase on a power bill, for example.

Overall, it seems like the combo of a higher TDP and Precision Boost 2 leads not only to better performance overall for the Ryzen 7 2700X, but also to higher performance per watt. Its roughly 3%-higher task energy buys roughly 10% better performance in Blender versus the Ryzen 7 1800X, and that better-than-linear scaling is just the kind of improvement we like to see.

 

Conclusions

Whew. So what do our reams of data tell us about AMD’s second-gen Ryzen CPUs? Let’s try and sum it all up with our trusty bang-for-the-buck scatter plots, starting with gaming performance.

As a CPU reviewer, I’ve got to stomach a hard truth: the vast majority of single-player games are just not that CPU-bound these days. I searched far and wide for some newer titles that I hoped would show a difference in performance between the various processors on our test bench, but the results above show just how hard it is to put much distance between CPUs this way, even with a GeForce GTX 1080 Ti running the show at 1920×1080. It’s telling that Crysis 3 and Grand Theft Auto V remain useful CPU-bound gaming benchmarks several years after their PC debuts.

All that said, if high-refresh-rate gaming is your jam, the Core i7-8700K holds on to its top spot with our revamped game selection. The Ryzen 7 2700X and Ryzen 5 2600X are hardly in poor company down with the Core i7-7700K in the 99th-percentile frame-time department. Gamers who game first and foremost would do well to direct their attention to the mighty impressive Core i5-8400 and its $179 price tag, though. Ahem.

Folks raring to spend $300 or more on a CPU really need to work as hard as they play these days, and in productivity tasks, the Ryzen 7 2700X is all upside compared to its first-generation forebears. The higher peak clocks from this part allow for single-threaded performance that’s more than adequate in day-to-day use, and the higher sustained performance from Precision Boost 2 lets the 2700X claim a number of victories over Intel’s Core i7-8700K in tasks like compilation, rendering, and transcoding where Ryzen chips were already hanging tough.

The 2700X also performed admirably in our single-PC high-refresh-rate gaming and streaming test with Deus Ex: Mankind Divided—one that the i7-8700K simply couldn’t handle without dropping buckets of frames. Admittedly, our much-less-CPU-heavy and more-typical Far Cry 5 streaming test gave the edge back to the hottest Coffee Lake part, but only just. Folks who want to try their hand at single-PC streaming with CPU encoding can’t go wrong with either of these parts.

Outside of our performance results, AMD’s refresh of its high-end mainstream platform makes X470 motherboards feel reassuringly mature, and the company’s efforts to improve the robustness of the second-gen Ryzen memory controller seem to have paid off nicely. The DDR4-3400 kit that I tested with fired right up with nothing more than a trip to the XMP settings in my Gigabyte X470 Aorus Gaming 7 Wifi’s firmware, and I used it with nary a hitch throughout my tests of the 2700X. We probably need to test our chip with more memory kits soon to see whether we’ve been handed a cherry kit, but initial results for those hungering for fast memory kits appear promising.

Our early overclocking efforts suggest tweakers could wring a few percent more clock-speed headroom from second-generation Ryzen CPUs, as well, although our 4.275-GHz all-core result on the 2700X took more voltage than some might be comfortable with for day-to-day use. We expect most overclockers will be most interested in the $299 Ryzen 7 2700, anyway, and AMD has indicated that we’ll have an opportunity to put that chip through its paces. Stay tuned.

Aspiring AMD builders without the budget or workload for eight cores and 16 threads of second-generation Ryzen goodness need not despair. The $229 Ryzen 5 2600X delivers single-threaded performance that’s only a hair’s breadth away from the Ryzen 7 2700X, and its impressive multithreaded performance, high-quality stock cooler, and unlocked multipliers make similarly-priced and locked-down Coffee Lake Core i5s look like a hard sell for do-it-all systems. I’m especially interested to see how the Ryzen 5 2600 stacks up against the Core i5-8400 when we get our hands on that part.

TR Editor’s Choice, April 2018: AMD Ryzen 7 2700X and AMD Ryzen 5 2600X

All told, the best thing about today’s CPU market is that builders can choose just the chip they need at the right price. Those after the very best single-threaded performance, overclocking potential, high-refresh-rate gaming experiences, and all-round digital audio workstation performance can still get it in the Core i7-8700K, and those things still justify the price premium the blue team’s best mainstream chip commands. Even with the tarnish of Meltdown and Spectre on its heat spreader, the i7-8700K is still a remarkable chip—just not as much so as it was back in October of last year.

Those whose needs run more toward sheer multi-threaded grunt, on the other hand, can pick up a Ryzen 7 2700X for less money than the i7-8700K, and they’ll enjoy its capable (and colorful) stock heatsink, winning parallel throughput, perfectly snappy per-core performance, and polished platform. AMD has stuffed an impressive amount of bang into the Ryzen 7 2700X for the buck, and if the stuff it does well meshes with your workload, you really can’t go wrong. The second round of Ryzen looks mighty fine indeed, and I’m happy to call both the Ryzen 7 2700X and Ryzen 5 2600X TR Editor’s Choice winners.

Comments closed
    • The Wanderer
    • 1 year ago

    [quote<]As a CPU reviewer, I've got to stomach a hard truth: the vast majority of single-player games are just not that CPU-bound these days. I searched far and wide for some newer titles that I hoped would show a difference in performance between the various processors on our test bench, but the results above show just how hard it is to put much distance between CPUs this way, even with a GeForce GTX 1080 Ti running the show at 1920x1080.[/quote<]

    If you want a CPU-bound single-player game, you might consider Dwarf Fortress. My understanding is that although it does use GPU-based rendering when it's available, the GPU is never the primary bottleneck, unless it's exceptionally anemic compared to the rest of the system. Rather, the primary benefit lies in freeing up the CPU to spend more time on the game's main event loop.

    A large-site fortress with a high population (even without diving deep into configuration settings to raise the population cap) can reduce the number of internal "ticks" the game can process per second to unplayably low levels, even on the most powerful systems I know of its having been tried on - and even if newer hardware does become able to process things fast enough to bring that back up to the configured FPS cap, just raising the population cap (and bringing new people in to pass the old cap) should make performance differences visible again. Or you might be able to set off a catsplosion, if you prefer.

    One potential negative is that as far as I'm aware, it's not highly multithreaded; for the most part, last I recall reading about it, it depended very strongly on single-threaded CPU performance. It's not as if that's not worth benchmarking, however.

    • Jeff Kampman
    • 1 year ago

    I finally have all of our gaming numbers in a place where I’m happy with them and they’re all graphed. We’ll be working hard this afternoon to get them all into a separate article and I hope to have that completed this evening. Thanks again for everybody’s patience.

    • chuckula
    • 1 year ago

    Anand just posted a very substantial update to its original RyZen+ review: [url<]https://www.anandtech.com/show/12678/a-timely-discovery-examining-amd-2nd-gen-ryzen-results[/url<]

    TL;DR version: Anand was forcing HPET timers on (against the default settings) on all platforms, which had the effect of artificially boosting the performance of the RyZen CPUs relative to Intel, especially in games. After their audit, their game performance numbers are pretty much in line with TR's and everybody else's.

    Thank you TR for taking the time to do it right, and kudos to Anand for at least being honest.

      • YukaKun
      • 1 year ago

      AT was never dishonest.

      They presented their data as they ran it. Ian never considered HPET to affect the numbers to that degree, since Intel never gave guidelines for it like AMD did.

      So, if you wanted numbers to see how activating HPET affects different workloads, there you have it.

      Cheers!

      • Shobai
      • 1 year ago

      Interestingly enough, the deltas for AMD in the gaming tests map nicely to the deltas for Intel – the magnitudes are obviously different.

      [Edit: so, connecting the dots, Meltdown and its mitigations can’t be the root cause]

      • dragontamer5788
      • 1 year ago

      It goes to show how much Meltdown has changed things.

      HPET was a minor performance hit before: up to 5% in some cases, but it was ultimately used due to its much higher accuracy.

      But after Meltdown, HPET is causing slowdowns of over 30% in some cases. And since AMD systems are immune to Meltdown, this HPET-methodology is biased against Intel.

        • abiprithtr
        • 1 year ago

        Yep, looks like HPET is causing a massive drop in performance on Intel CPUs, especially after those Meltdown patches. Should Intel give its users an option to disable those patches temporarily, so they could enjoy their *artificially* inflated numbers?

          • Shobai
          • 1 year ago

          You, also, have missed the mark: Meltdown, apparently, can’t be the cause.

          To answer your question, the answer is smacking you ’round the chops: don’t artificially force HPET and you won’t get the degradation.

        • Shobai
        • 1 year ago

        Apologies, but this is a bit silly:

        [quote<]But after Meltdown, HPET is causing slowdowns of over 30% in some cases. And since AMD systems are immune to Meltdown, this HPET-methodology is biased against Intel.[/quote<]

        If we allow that AMD is immune to Meltdown, as AMD and third parties appear to agree, and AMD is suffering performance degradation in the same tests, using HPET, as the Intel platform did (as I'm sure you've seen by looking at the Anandtech article linked above), then Meltdown and its mitigations cannot be the issue. Right?

        Correlation != causation, etc.

          • Redocbew
          • 1 year ago

          There may be more than one factor at fault here. I have a hard time believing a bad implementation of this timer by itself can cause the kind of variance we see from the Intel platform when Anandtech tested with and without the use of the HPET timer. However, since the HPET timer causes an interrupt at every tick, and the clock for the timer happened to be running faster on the Intel platform, IO overhead starts to look like a reasonable explanation.

          We should also remember that it’s not the HPET timer per se that’s the problem, but a Bad and Wrong usage of it in software.

        • Voldenuit
        • 1 year ago

        Anandtech [url=https://www.anandtech.com/show/12678/a-timely-discovery-examining-amd-2nd-gen-ryzen-results/4<]mentioned that their standard practice of forcing HPET on /might/ have had something to do with abnormally low Skylake-X gaming results[/url<], and this was years before Meltdown/Spectre.

        I agree that post-Meltdown/Spectre patches, I/O tasks have a higher performance penalty, and HPET timer calls fall into that category. The effect is probably negligible in typical gaming scenarios, but when monitoring framerates (whether using in-game or 3rd party tools), this might cause a hit to performance that wouldn't otherwise be there.

        It'd be interesting to do some testing and see how much monitoring affects performance. You could set up a canned benchmark or timedemo and compare time to complete or frames rendered at the end of the run with fps monitoring tools enabled and disabled.
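        A quick sketch of the kind of test being described here: measuring the cost of the high-resolution timer query itself. On Windows, Python's time.perf_counter() sits atop QueryPerformanceCounter, which may be backed by the TSC or by HPET depending on boot configuration, so the absolute numbers below are platform-dependent:

            import time

            # Measure the average cost of one high-resolution timer query.
            N = 1_000_000
            start = time.perf_counter()
            for _ in range(N):
                time.perf_counter()
            elapsed = time.perf_counter() - start
            print(f"~{elapsed / N * 1e9:.0f} ns per timer call")
            # TSC-backed queries typically cost tens of nanoseconds; HPET-backed
            # queries can cost far more, which adds up in timing-heavy code.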

      • techguy
      • 1 year ago

      I’m glad they went back and identified the problem. I pointed out their faulty results shortly after they posted their review – you know your results are wrong when you have AMD beating Intel @ 1080p in GTA V.

      • Pancake
      • 1 year ago

      Oh-ho-ho-ho. I was absolutely SHOCKED at Anandtech’s initial results. How could AMD go from so far behind to so far in front?

      Quite clearly if you’re into gaming you still want to be buying Intel. Glad Anandtech took it on the chin. However, if you look at their revised results they try to dilute Intel’s victory with silly 4K, 8K, 16K results. Who even does that?

      And wait for Intel’s response.

      • Mr Bill
      • 1 year ago

      [quote<][url=https://www.anandtech.com/show/12678/a-timely-discovery-examining-amd-2nd-gen-ryzen-results/2<]The extreme overclocking community, after analysing the issue, found a solution: forcing the High Performance Event Timer, known as HPET, found in the chipset. Some of our readers will have heard of HPET before, however our analysis is more interesting than it first appears.[/url<][/quote<]

      So, that was Win8 and 2013. Do overclockers still have to force HPET in Win10 when going for records? It's unclear what timer is being used instead. Back in the very old days we used to run scripts in DOS that called the system time, ran the benchmark, then called the system time again.

      • abiprithtr
      • 1 year ago

      What Anandtech did is nowhere near as artificial as this:
      [url<]http://agner.org/optimize/blog/read.php?i=49#73[/url<]

      or this:

      [url<]http://www.caselab.okstate.edu/research/euler3dbenchmark.html[/url<]

      That's two times you've used "artificial...ly" in one post, in addition to using it in one other post in this same thread. All three posts are when you were trying to discredit Anandtech and/or their work. This overuse of adjectives to try and paint one single company in a darker shade than they deserve reminds me of a certain IT security firm who recently ran a report on AMD.

    • chuckula
    • 1 year ago

    Der8auer [url=https://www.youtube.com/watch?time_continue=753&v=VEUG9d4Mjug<]delidded a 2600X[/url<] and among other things the die size is confirmed to be identical to the original RyZen (215 mm^2).

      • jarder
      • 1 year ago

      He also found that there was only a 4 degree difference between the maxed-out stock chip (Indium solder) and his delidded (liquid metal) one:
      [url<]https://hothardware.com/news/amd-ryzen-5-2600-gets-delidded-by-overclocker-der8auer[/url<]

        • Chrispy_
        • 1 year ago

        Because, unlike some companies, AMD packages its chips properly, using solder and well-fitting integrated heat spreaders.

        I mean, wouldn’t it be terrible if you spent big bucks on a processor and discovered that it was uncoolable because the IHS wasn’t seated properly from an excess of glue, and there was a thick layer of cheap TIM instead of solder?

          • chuckula
          • 1 year ago

          Yes, it’s too bad that the 8700K clearly can’t be cooled by mere mortals, which is why it lost in every benchmark TR ran.

          Not to mention that literally every Raven Ridge part can’t even boot up because AMD uses thermal paste on those chips too.

      • Zizy
      • 1 year ago

      It would be nice to see some SEM images of the same part of the chip to compare Ryzen1 and 2 and see that dead space 😛 Shouldn’t cost too much to do it.

      • Shobai
      • 1 year ago

      [quote=”Jeff”<]No change in die size or transistor count.[/quote<] I mean, he had access to AMD's review guide, right?

        • chuckula
        • 1 year ago

        It’s one thing to read it from a review guide, and another to measure a working part.

        It also confirms what Jeff said even though some people still didn’t believe it.

          • Shobai
          • 1 year ago

          No. He spent roughly half the video saying that he expected it to be smaller because it was a “12nm” part, and to let him know if he had missed something. It’s pretty clear not only that he hadn’t read it, but he hadn’t read any other review that mentions it – I can’t say that I’ve read all the reviews available; do any in the shortbread list fail to mention it?

          Other than that, for sure. It’s nice to have it confirmed.

          [Edit: forgot ‘part’]

          [Edit: going through the list, [H] is the first review that mentions that the die size and transistor count remains the same]

          [Edit: through the rest of the list, because who needs sleep: four other reviews mention that the die size and transistor count remains the same, for a total of 5 of 11, or 1/2 if TR is included]

          • JustAnEngineer
          • 1 year ago

          Once you’ve disassembled the CPU and prepped it for SEM, it’s no longer a working part. 😉

    • willmore
    • 1 year ago

    Phoronix saw some nice improvement with a BIOS update that included some AGESA changes:
    [url<]http://www.phoronix.com/vr.php?view=26235[/url<] I hate to request even more benchmarking.... Edited to add: Oh, Veerappan already mentioned this, sorry. I looked for a head post mentioning it, not a reply. Nearly 400 replies....

      • Jeff Kampman
      • 1 year ago

      We tested with a non-public BIOS on our Gigabyte board that already included AGESA 1.0.0.2a. We’re also not testing Linux performance (the area that Michael specifically points out as a trouble spot) so I highly doubt there’s anything to be gained from retesting at this stage.

        • Veerappan
        • 1 year ago

        Fair enough.

        Just didn’t know if there was something in the new firmware/AGESA that would boost performance on other OSes as well.

        I’ll be updating firmware on my board as it comes out, but I’m still “stuck” with my R7 1700 for the foreseeable future unless I go to a 2700 to get around the performance marginality issue that my chip and workloads are affected by. Yes, I know I can send the chip to AMD for replacement, but that’d imply downtime I haven’t felt like dealing with.

    • Jeff Kampman
    • 1 year ago

    Gaming bench status update: I’m collecting a whole new set of numbers out of caution because of some weird deltas I’m observing upon retesting some configurations. The upside of this is that we’ll also have performance results with DDR4-2400/DDR4-2666 on our Intel platforms and DDR4-2933 on our Zen+ systems. The downside is that we likely won’t have something to publish before tomorrow. I appreciate everybody’s patience on these numbers given the furor around anomalous results on launch day.

      • jokinin
      • 1 year ago

      We can wait, because I find TR reviews the most reliable ones. Congratulations on your amazing job with these reviews.

      • dragontamer5788
      • 1 year ago

      [quote<]Gaming bench status update: I'm collecting a whole new set of numbers out of caution because of some weird deltas I'm observing upon retesting some configurations[/quote<]

      Yo, if it helps to explain the wtf results of Anandtech, then it's worth the wait. Something weird is going on with these benchmarks. If there are any results that seem odd, please share them!

      • Waco
      • 1 year ago

      TR results are always worth the wait because they’re [b<]TR[/b<]ustworthy results. 🙂

        • MOSFET
        • 1 year ago

        Thumbs because cheese is so appreciated around here. Now where’s that [url=https://techreport.com/review/33506/intel-nuc8i7hvk-hades-canyon-gaming-pc-reviewed<]bleu cheese and buffalo sauce[/url<]?

      • HERETIC
      • 1 year ago

      “DAZED AND CONFUSED” is probably an understatement at the moment.
      You probably have a range of results, ranging from AMD 5% behind to 3%
      in front. WTF to send to print is the tough one...

      Went back and compared Anand’s original C/L review to Zen+ and, surprise:
      Intel seems to have gained a few frames in most games, despite the penalties.

      Not trying to fill your boots, Jeff, but it seems you can take any C/L i5 or i7 or Zen+
      and there’s NOT a bad choice anywhere there. When you get to 1440, the
      most likely res these are to be used at, you’re about plus or minus 2 percent;
      you could get that on two identical setups...

      Still think from a value perspective, for the 90% that don’t want to stream,
      the 8600/8600K is still the diamond in the rough...

      • Chrispy_
      • 1 year ago

      The reason your CPU/GPU reviews are the slowest on the net is also the reason why they’re the definitive reviews on the net.

      • Anonymous Coward
      • 1 year ago

      I applaud your integrity on the matter.

      • Kretschmer
      • 1 year ago

      As always, we appreciate your rigor.

      • Veerappan
      • 1 year ago

      I saw on Phoronix that they redid a bunch of benches in order to try to explain a Linux/Windows discrepancy. Turns out that Asus pushed out a newer firmware (updated AGESA) that also had a pretty significant (positive) performance impact on the results.

      • gerryg
      • 1 year ago

      Thank you for at least providing an update on what’s holding things up. Communicating early/often to the community is a good strategy when expectations and/or anticipation is high. Thanks for the hard work on this!

      • odizzido
      • 1 year ago

      Thanks for the update, take all the time you need to do it well 🙂

      • Eversor
      • 1 year ago

      +1 for benching with default RAM clocks for the platform!

      Overclocking both is also strange because Intel isn’t putting the work in to validate higher clocks, so you have to do it yourself and pay extra for Z chipsets (IIRC).
      Also, if you look at Skylake vs Kaby Lake most of the performance gains are from the move to higher DRAM clock (especially games), so this gives a better picture of generational improvements.

      • Jeff Kampman
      • 1 year ago

      Denuvo is preventing me from finishing the last two CPUs I need to test (adding Rise of the Tomb Raider results to the pool), but aside from that we should have something we can publish today (4/25) or tomorrow (4/26).

        • YukaKun
        • 1 year ago

        Thanks for the updates and hard work as usual.

        Will you update the article title to reflect the new information added?

        Cheers!

        • K-L-Waster
        • 1 year ago

        And they say DRM never hurts legitimate users….

      • psuedonymous
      • 1 year ago

      [quote<]We'll also have performance results with DDR4-2400/DDR4-2666 on our Intel platforms and DDR4-2933 on our Zen+ systems[/quote<]

      Are you also doing the inverse (DDR4-2933 on Intel, DDR4-2400/DDR4-2666 on Zen+)? We all know that using slower RAM on Zen hobbles performance, but it would be interesting to see the comparative scaling, especially for budget builds where it might make sense to skimp on slower DIMMs to offset a more expensive CPU/mobo if it won't affect performance.

    • Srsly_Bro
    • 1 year ago

    I’m just here now for the goty vs pancake physics and math battle. Who is the supreme nerd of TR?

      • davidbowser
      • 1 year ago

      Physics is just math with examples anyway…

      *ducks*

    • NIKOLAS
    • 1 year ago

    When are games going to be used as benchmarks?

    • ronch
    • 1 year ago

    Is it possible to know the core number assignments on the die? I reckon the last time AMD released die shots with core numbering on them was with Phenom II and even then we can’t be sure if the marketing department got it right.

    • Shobai
    • 1 year ago

    Just noticed that the notes on StoreMI say

    [quote=”AMD”<]You can even use up to 2GB of RAM as a last-level cache for ultra-fast data [/quote<] I'm interested to know more - Jeff, is this one of the features you plan to test? Will the chipset features get their own deep-dive article?

    • helix
    • 1 year ago

    What I am curious about is whether (unofficial) ECC support is working. Anyone know?

    • NunoP
    • 1 year ago

    Got my hands on an R5 2600 with 16 GB of 2400 memory. (Not the fastest, I know, but it was the one I had on hand just to test.)
    Cloned my normal OS SSD that has Cakewalk by Bandlab, now free software, that runs on an old i7 860.
    Tried a simple test project that was struggling at 24-bit, 96 kHz, and a buffer of 512, with some pops.
    With the R5 2600 the CPU load went real low, even at 64 samples. But I still got pops. Only around 256 samples did it play perfectly.

    Running a much bigger project at 24-bit, 48 kHz, and 64 samples was very smooth.

    So it looks like the CPU is not the bottleneck anymore.
    OS configuration and ASIO drivers should now have a bigger importance. (Among other things.)

    I hope I can get my hands on an i5 8400 to do the same test.

    I am still not sure which is the smarter value choice of CPU for a DAW right now. But it looks like any of the modern CPUs really deliver a lot if you are a normal DAW user.

    If the CPU is no longer the bigger bottleneck, I wonder if I’d gain a lot going from the R5 2600 to an R7 2700.
    Why not consider the X version?
    A DAW benefits so much from turbo. A quieter system is important, and you can always overclock it in crucial moments.

    Hope I am thinking correctly.

    • Rakhmaninov3
    • 1 year ago

    Ooooooohhh. RyZEN. I get it now!

    • odizzido
    • 1 year ago

    Just want to say take your time doing the gaming tests. I can get quick results anywhere, I come here for good results.

    • HERETIC
    • 1 year ago

    Keep up the good work Jeff.
    Better LATE than WRONG (remembering the old saying, “check twice, cut once”).
    Can’t wait for your gaming results. After reading several reviews, I’m a little
    stunned how different sites manage to manipulate results to suit their own agenda.
    Never before have we had so many variables that can affect results:
    Meltdown and Spectre variants, RAM speed and timings, power plans, MCE on or off.

    So far my view is unchanged: for the 90% of gamers that don’t want to do anything
    intensive while gaming, the red-headed stepchild 8600 or 8600K is the diamond
    in the rough...

    • JoeKiller
    • 1 year ago

    You can totally tell that Jeff is trolling the comments while running those benchmarks. Kudos to y’all for doing it right.

    In the past, I think, TR would typically miss the blackout end date and people would complain and then marvel at the article. Now TR hits the date and says more is to come, and people complain.

    c’est la vie

    • gerryg
    • 1 year ago

    [i<]Reader’s note: This article will be continuously reloaded as the day goes on, hoping for more content, including looking for gaming performance data. Pardon our dust as we eat computer manuals out of anxiousness while waiting for extensive benchmark data of these CPUs, and remind ourselves to be patient.[/i<]

    • Srsly_Bro
    • 1 year ago

    IT’S BEEN A DAY ARE WE GETTING GAMING BENCHMARKS OR NOT?

    Meanwhile every other review online posted them yesterday. What’s the hold up? Did the CPUs arrive yesterday morning?

      • Jeff Kampman
      • 1 year ago

      I’m re-running some things and taking the time to do a thorough analysis. If someone’s buying decision is hinging on knowing this info I have already posted the sum of our knowledge so they can take that into consideration. Sorry that this entirely free and hopefully useful content is slightly delayed.

        • JoeKiller
        • 1 year ago

        Thanks for the effort. Some of us do pay for the content =) at least that’s why I donate.

        • Srsly_Bro
        • 1 year ago

        Thanks for responding. I noticed Anandtech was catching some heat for their results. Their claim is that they reran all the tests with all of the Meltdown and Spectre mitigations in place except the variant 2 fix on Intel systems, instead of recycling old data or running unpatched machines.

        Is this of any concern to you?

        • thecoldanddarkone
        • 1 year ago

        Good sir, do you have an expected time it’s going to be finished?

        Thanks

          • Shobai
          • 1 year ago

          [quote=”Jeff Kampman”<]Coming, but probably not until Monday[/quote<] I believe that's the last word we've had

            • thecoldanddarkone
            • 1 year ago

            Thank you. I must have missed that. 😛

    • jensend
    • 1 year ago

    [quote<]As helpful commenter Goty points out, taking the distance from the origin (where the most efficient chips would lie if they could be made) and the Cartesian points represented by our time-to-completion and estimated task energy results lets us figure out a kind of measure of "task efficiency."[/quote<]

    That's not a useful measure. Quick, tell me, what's 90000 square seconds plus 2500 square kilojoules? Answer: you can't add inconsistent units. Therefore this is a meaningless artifact of your axis scaling. Plot "time taken in hours" vs "task energy in joules" and your distance-to-the-origin 'metric' would of course give entirely different results.

    It's worth taking the effort to actually think through what the right ways to measure things are. The only reasonable thing you can say that relates to the idea you're using here is to consider Pareto efficiency. (Any Pareto-efficient point could look "closest" given some consistent way of defining distance on the graph.) The 1800X, 2700X, and i7-8700K are all Pareto-efficient by these measures.

    BTW, it also sounds like you just took a momentary glance at the power readings and multiplied that by the time taken. But CPU power use is far from steady these days with varying loads and clocks. Even if your meter doesn't give you data logging, it should give you watt-hours.

      • Shobai
      • 1 year ago

      You may have added your last thought later, but from the first paragraph on the page you reference :

      [quote<]Our observations have shown that Blender consumes about the same amount of wattage at every stage of the bmw27 benchmark, so it's an ideal guinea pig for this kind of calculation.[/quote<]

      • Goty
      • 1 year ago

      While you are 100% correct that my original post is flawed, Zizy provided the correction and suggested that we should use the area under the line from the origin to each point as the correct measurement of “task efficiency,” as we have been calling it. Doing this gives a measure in units of action (as Mr_Bill so graciously pointed out; my classical mechanics professor would be ashamed of me) and is the true measure of the metric we are considering. The best part about it? It doesn’t actually change the results I posted in the first place.

      It is certainly possible that Jeff only used a single reading from the power meter when listing the power consumption numbers, but I’d like to give him the benefit of the doubt here; perhaps he can chime in or update the article with more details in this regard. Assuming the numbers are averages or at least the result of several measurements, the results are likely to be fairly representative of the performance of each CPU in this particular test.

        • jensend
        • 1 year ago

        Well, yes, that’s a more reasonable metric. You’ll see it in the EE literature as “energy-delay product” or EDP. But it still takes more to decide that’s the right metric.

        Multiplying together power and time gives us task energy, which is an independently meaningful physical quantity. For dealing with fixed tasks on fixed energy budgets – sometimes the case on battery-powered devices – that's directly important and useful to know. But generally we [i<]ought[/i<] to be able to find ways to reduce task energy simply by slowing down, since power use is more than linear in clock speed. So yes, it's worthwhile to give more weight to time taken.

        If we want a metric that's independent of whether something's being overclocked or underclocked, though, EDP doesn't weight time enough. Power is proportional to frequency * voltage^2, and across a reasonable range of frequencies, voltage will have to scale roughly linearly for overclocks (and can be scaled down linearly for underclocks). So power will go roughly as frequency^3, and since time is inversely proportional to frequency, we get the following:

        - Task energy (power * time): proportional to frequency^2
        - Energy-delay product (EDP, power * time^2): proportional to frequency, so the metric still favors lower-clocked parts
        - Energy-delay^2 (energy-delay-delay product, EDDP, i.e. power * time^3): should be basically invariant with respect to frequency – a characteristic of the fab process and processor design

        I think this is a strong point in favor of EDDP, but it would be worth looking at a bunch of processor data and use cases to make sure that's really what we want to minimize. Just because EDP comes out to a unit with a name doesn't really speak in its favor: outside the context of the Euler-Lagrange equations, there's no direct physical meaning to action and no particular reason why we should be minimizing it.

        (A side note: a normal way to "scalarize" a multiple-objective function is to assign each objective a cost. Task energy and time taken are things we can readily consider the direct cost of. But since minimum wage divided by normal electricity price is on the order of 100,000 W, electricity cost hardly matters compared to performance for a personal machine as long as you have access to the mains. The more relevant costs of high energy consumption are less direct, e.g. cooling, form factor, noise.)
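
        A quick numeric sketch of that scaling argument (idealized: it assumes voltage scales linearly with clock over the whole range, so power ~ f^3, which is only roughly true):

        [code<]
        # Idealized scaling: V ~ f, so power ~ f * V^2 ~ f^3; a fixed task means
        # time ~ 1/f. Watch how each candidate metric responds to the clock alone.
        for f in (0.8, 1.0, 1.2):            # relative clock speed
            power = f ** 3
            time = 1 / f
            energy = power * time            # ~ f^2: rewards underclocking
            edp = energy * time              # ~ f^1: still favors lower clocks
            eddp = energy * time ** 2        # ~ f^0: invariant to the clock change
            print(f"f={f:.1f}  E={energy:.3f}  EDP={edp:.3f}  EDDP={eddp:.3f}")
        [/code<]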

          • Anonymous Coward
          • 1 year ago

          Maybe I’m not clever enough, but this all seems vastly overcomplicated. As for giving credit to (longer) time intervals over which you might process the data more efficiently, I’ll say it seems irrelevant in practice. If we are not very concerned about the cost of electricity, then the efficiency graph diminishes into just an elapsed-time benchmark.

          Task energy says something useful and [i<]understandable[/i<], as does idle power; attempts to adjust for the value of time or electricity rapidly become difficult to apply to each person's circumstances.

            • jensend
            • 1 year ago

            I’m sympathetic to your point; when I read Goty’s reply my initial thought was “yes, multiplying energy and time gives you a unit of measure, but is it really meaningful??”

            But finding it used seriously in the EE literature made me reconsider.

            I do think we are often interested in separating the question “how (in)efficient is this design” from “how much has the manufacturer hot-clocked the part.” I think the back-and-forth both in the article and in the comments about whether the 2700X is really more efficient than its predecessor or than the Intel competition shows that.

            And metrics like this can tease those two apart. If you got an 11.6% performance boost and the only difference was clock speed, you'd expect CPU power to have to go up by 39% (1.39 ~= 1.116^3). So if CPU power use increases by less than that, i.e. task energy increases by less than 24.5%, the design is more efficient.

            One caveat I should have mentioned: these comparisons only really make sense with power measurements *for the processor*, not power draw at the wall. Making a guess that the rest of the system was using 80W in both cases, the CPU power increased by 27.8% and the CPU task energy increased by 14.5%, and AMD could likely have instead chosen to make a 2700X that had the same performance as the 1800X but drew 7W less power.

            These comparisons make it clear that both the 2700X and the i7-8700K are more efficient than the 1800X, since that conclusion remains true across a variety of guesses as to system vs cpu power. But the 2700X and i7 are quite close and we’d need to have a more exact read on package power to compare them.
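
            For the curious, here's that back-of-the-envelope correction as a sketch. The 80 W figure for "everything but the CPU" is a guess, and the 1800X's ~170 W wall draw is inferred from the 49.1 kJ / 289 s numbers quoted elsewhere in this thread:

            [code<]
            # (time in s, wall power in W); 80 W of non-CPU draw is an assumption
            wall = {"1800X": (289, 170), "2700X": (259, 195)}
            SYS_W = 80
            cpu = {c: (t, w - SYS_W) for c, (t, w) in wall.items()}

            p_ratio = cpu["2700X"][1] / cpu["1800X"][1]   # 115 W / 90 W
            e_ratio = (cpu["2700X"][0] * cpu["2700X"][1]) / (cpu["1800X"][0] * cpu["1800X"][1])
            print(f"CPU power +{(p_ratio - 1) * 100:.1f}%")        # ~27.8%
            print(f"CPU task energy +{(e_ratio - 1) * 100:.1f}%")  # ~14.5%
            [/code<]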

            • jensend
            • 1 year ago

            Downvoted for math?

            If someone has an intelligent reply instead of just empty downthumbs I’d be glad to hear it.

            • Pancake
            • 1 year ago

            There’s a significant underclass of technical/maths illiterates on this site that will try to stifle any intellectualism or intelligent debate. Presumably because it makes them feel dumb.

            What this will lead to long term is the marginalisation of the technical upper class – “the brainiacs” – and their loss from this website, which will result in increasingly low-quality posts.

            This even happens at the article level. Look at Kanter’s last interview in which he tore strips off Ryzen Mobile by the sheer force of logic and insightful analysis. Didn’t play too well with the crowd.

    • crystall
    • 1 year ago

    Just curious: which browser was used for the JavaScript tests?

      • Jeff Kampman
      • 1 year ago

      The latest version of Edge.

        • crystall
        • 1 year ago

        Thanks!

    • maroon1
    • 1 year ago

    Where is the gaming benchmark ?!

      • gogogogg
      • 1 year ago

      Honestly, I am starting to tremendously dislike this trend of incomplete, ongoing reviews… I understand the argument that it's bad for sites not to have a review up at launch, but I’d much rather have a complete, thoroughly edited/reviewed text than this. It doesn’t even make sense to have “FPS per dollar” charts without the actual gaming sections of the review.

      • Jeff Kampman
      • 1 year ago

      Coming, but probably not until Monday.

        • YukaKun
        • 1 year ago

        Since all reviews are already out and no NDA is in place for the figures, would you give us a rough idea on what you got?

        Reading AT’s and the reaction to those numbers, I hope you’d be able to confirm or deny some of the findings they got?

        Thanks for your hard effort as well.

        Cheers!

          • Jeff Kampman
          • 1 year ago

          I’m extremely reluctant to compare any of our results with any other sites’ numbers because we don’t test in the same way that most do (AT runs the scripted benchmarks available from the games they test, for example, while we run through levels/sections of games and manually capture frame times using OCAT/PresentMon). We also rely on different games, different in-game settings, a different graphics card, and a different motherboard for our AMD CPUs than AnandTech uses. AT also doesn’t post the graphics driver versions they’re using, and that’s another variable I’d need in order to make an informed comment.

          I guess my point is that other sites are accountable for the results they may or may not be producing. If you believe there is an inconsistency in AnandTech’s numbers then their editors are responsive on Twitter and in the article comments. As for TR, I’ve already posted summary information for our game testing if it helps you gain perspective about our results.

            • YukaKun
            • 1 year ago

            Thanks for the response.

            Yes, I understand your position and point of view completely. I guess I phrased the idea incorrectly. I just want more data points to see why AT’s numbers are like that on the meantime. Ian is working on the “why” now, but having some insight from you (from the outside) would help me (us?) understand a bit better.

            Given the amount of responses they got from their numbers, I don’t want to add more pressure to them. Or you, obviously.

            I’ll wait until your data is complete and interpreted for us then.

            Again, thanks for your efforts!

        • kuraegomon
        • 1 year ago

        In cases where an article is going to be updated incrementally post-release, I believe a changelog on the first page of the article would be [i<][b<]extremely[/b<][/i<] helpful. Something like:

        - <Date/Time 1> Original version (omitted if there haven't been any edits)
        - <Date/Time 2> Power consumption (with new/edited page numbers)
        - <Date/Time 3> Gaming benchmarks (with new/edited page numbers as above)

        If I've already read through the original article once or more, I'd like to be able to head straight to the new content without having to flip through the entire article again.

          • Jeff Kampman
          • 1 year ago

          This is a good idea and I’ll implement it in the future.

      • anotherengineer
      • 1 year ago

      Silly Rabbit

      Games are for kids 😉

        • K-L-Waster
        • 1 year ago

        Maybe eventually… but only after Dad is finished playing.

    • adamlongwalker
    • 1 year ago

    Good article.

    My concerns are with the price/performance which Jeff has no control of.
    An example of my concern is this.
    [url<]https://www.newegg.com/Product/Product.aspx?Item=N82E16819113499&cm_re=ryzen_7_2700x-_-19-113-499-_-Product[/url<]

    It's only a 6-day sale at this price. I have had similar issues with corporations promoting a limited amount of product sold at a certain price, after which the price goes up to its intended level. I think the price will go up to the $359 range for starters after the sale ends. I also think that first-gen Ryzen prices will drop a bit after the sales are done, to clear out inventory.

    It just depends on what the rest of the tech economy does – RAM, for one, is keeping people on the sidelines and putting off upgrades, as are video cards, though video card prices are starting to come down.

    The other concern, which has been brought to my attention, is that AMD seems to be taking a page from Intel's tick-tock playbook with modest gains over the previous year's models. I own an 1800X and it's a sweet rig in the price-vs-performance aspect of all of the builds I am known for. Those people who already purchased their Ryzen are most likely not going to make that upgrade for such a small performance uptick. In my case the differences are just not there for me to upgrade. And I believe the performance-vs-value picture is going to change within a month as prices change.

    Again, good article. I've owned both Intel and AMD rigs and am glad that AMD is back in the competition once again. It's a win-win for consumers.

      • stefem
      • 1 year ago

      True, I really don’t understand the rationale behind the price selection for the price/performance chart.
      Something similar also happened with the AMD Vega review where, even worse, they took a limited-time MSRP from AMD as the reference for Vega and Polaris, and street prices (and on Newegg there were cheaper cards) for NVIDIA.

      • scellio
      • 1 year ago

      I agree with you here. I built a 1600X and do not see it worth the money for the minor performance gain.

      • Jeff Kampman
      • 1 year ago

      Newegg runs “sales” on products without actually discounting them below MSRP pretty often, in my experience. I don’t know the reasoning behind it, but I just want to point out that it’s not necessarily a sign that the price is going to go up suddenly.

        • thecoldanddarkone
        • 1 year ago

        Yeah, it’s super annoying. I’ve noticed this as well. YAY, SALE AT MSRP WITH NO SAVINGS… Yeah, you are not getting my money for that, Newegg.

          • OptimumSlinky
          • 1 year ago

          It’s purely marketing. If you’re reading Techreport, you’re likely a more analytical, rational type of consumer, but so much of the market is spur of the moment and emotion-driven. “Sales” like this will likely lead to an increase in product moved and cost Newegg nothing.

            • NoOne ButMe
            • 1 year ago

            or nonsense like the other day. I had an unusual working shift, which messed with my sleep schedule.

            So I found myself wandering around a store for food at almost midnight… and came across a four pack of “Mexican coke”…

            on sale for 1 cent off its regular $6.49 price…

            But it probably works – the pack looked like it was missing one… someone probably saw “sale” and grabbed it.

            • Srsly_Bro
            • 1 year ago

            If you read the comments here and the comments at Tom’s hardware, you would see little difference for the majority of posts of both user bases.

    • kuttan
    • 1 year ago

    If the review platform isn't running the latest Windows with the Spectre/Meltdown patches plus the latest BIOS, then the numbers here make no sense. New mobos come with the latest BIOS containing Intel's Spectre/Meltdown microcode, and that, coupled with the Windows 10 patches, caused a considerable performance impact on the Intel platform, as can be seen in the AnandTech and TechRadar reviews.

    [url<]https://www.anandtech.com/show/12625/amd-second-generation-ryzen-7-2700x-2700-ryzen-5-2600x-2600[/url<] [url<]https://www.techradar.com/reviews/amd-ryzen-7-2700x[/url<]

      • thecoldanddarkone
      • 1 year ago

      Where do you people come from? Instead of asking whether the patches were installed when you couldn’t immediately find it, you assumed they were not.

      From this article.

      “All test systems were updated with the latest firmware and Windows updates before we began collecting data, including patches for the Spectre and Meltdown vulnerabilities where applicable. As a result, test data from this review should not be compared with results collected in past TR reviews. Similarly, all applications used in the course of data collection were the most current versions available as of press time and cannot be used to cross-compare with older data.”

        • kuttan
        • 1 year ago

          Mind your crap language. [b<]Where does the article state that the Intel platform used the latest BIOS with the Spectre/Meltdown fixes, which had a considerable performance impact?[/b<]

          • kuraegomon
          • 1 year ago

          “Firmware” == “BIOS”. So the text quoted by thecoldanddarkone applies.

          • Waco
          • 1 year ago

          Perhaps you should pay more attention before you wildly accuse. Please, return to Anandtech where you can get your bias confirmed and pat yourself on the back.

      • chuckula
      • 1 year ago

      Speaking of not applying the latest code, what evidence did Anandtech present that they had in fact applied the latest microcode Spectre updates from AMD that were only belatedly announced last week?

      Incidentally, I don’t really care if TR applied that microcode, but since Anand’s results are being held up as the only “honest” results on the web despite their frankly questionable testing setup, I expect Anand to apply microcode updates to all platforms equally.

    • ronch
    • 1 year ago

    I can’t get over the fact that if it weren’t for AMD we’d still be paying $340 for 4C/8T chips to this day. For shaking up the market alone we should all be buying AMD. It’s crazy how the 2700X is going for $330 right out the gate. What a way to follow through with Ryzen’s showing last year.

    • Pancake
    • 1 year ago

    Hope they’ve fixed GCC compiler bug issues. I use my computer for professional work and can’t afford any unreliability or incorrect results.

    The performance is great for the price although I’d rather spend a bit more for the peace of mind of an Intel system and know there aren’t any software issues – if I had to buy a system right now. I’ll see how Ryzen fares over the next year or so to see if there are any underlying issues that manifest. Just as well the price of RAM is so revolting right now I can’t contemplate a new system.

      • jarder
      • 1 year ago

      [quote<]Hope they've fixed GCC compiler bug issues[/quote<]

      Didn't that gcc bug only affect some early Ryzen steppings? AMD offered to replace the chips of anyone affected: [url<]https://www.extremetech.com/computing/254750-amd-replaces-ryzen-cpus-users-affected-rare-linux-bug[/url<]

      [quote<]spend a bit more for the peace of mind of an Intel system and know there aren't any software issues [/quote<]

      You slept through the whole Spectre/Meltdown thing, I assume.

      [quote<]I'll see how Ryzen fares over the next year or so to see if there are any underlying issues that manifest[/quote<]

      Sounds like you're a flat-out Intel guy; maybe you should keep a lookout for underlying issues on the platform you are using instead.

      • ptsant
      • 1 year ago

      Ryzen chips made after June (or July?) 2017 no longer have this issue. Threadripper doesn’t have it either.

      • Cooe
      • 1 year ago

      Uhh, those have been fixed in hardware since last June. Where the hell have you been????

      • ronch
      • 1 year ago

      I have never had any reliability issues with AMD, at least not with my current FX and my previous Phenom II. The only time I had issues was with 2 or 3 overheating Athlon 64 X2 chips after varying months of use. I think it had something to do with their 65nm process, where the die somehow developed increased impedance or something after a while and the chip simply generated way more heat than normal. When I told AMD about this, they seemed to be aware of the issue: they didn't even ask about my cooler or anything else, they just processed an RMA right away, and a few days later I had replacements looking like I'd bought them brand new.

        • Pancake
        • 1 year ago

        That’s nice for you. But you have no experience with Ryzen?

        I’m not sure why I’m getting the downvotes. The sad fact of the matter is almost nobody uses AMD in a professional setting. I have never once seen an AMD box at any of my clients, which cover small to large business, government, and tertiary education. Ever. Why do you think that is?

        The only time I’ve used an AMD box was going back many years, when I seemed to choose employers that would indulge me and let me choose my PC components and even OS, which tended to be AMD and Linux (and, before they became irrelevant, SPARCstations and DEC Alphas). But I would be the only software developer to make those choices (I did theoretical R&D algorithm research in C++, so nobody cared how weird I was). Then DirectX 8 came out with those beautiful accelerated video APIs and it was buh-bye Linux for good.

          • Goty
          • 1 year ago

          [quote<]The sad fact of the matter is almost nobody uses AMD in a professional setting. I have never once seen an AMD box at any of my clients which covers small to large business, government and tertiary education. Ever. Why do you think that is?[/quote<] Y'know, I'm not sure but maybe that has something to do with this: [url=https://www.nytimes.com/2009/05/14/business/global/14compete.html<]Europe Fines Intel $1.45 Billion in Antitrust Case[/url<].

          • Krogoth
          • 1 year ago

          AMD was used in professional settings back when the K8 was dominating Intel's poor NetBurst parts and contesting with the Core 2 family. Intel only took the market back when Nehalem and Westmere became the superior options.

            • Pancake
            • 1 year ago

            Well, as I’ve said, I’m the only person I’ve ever been aware of using an AMD system in any office, and only because I was in a position to make that choice. But really, by the time Core 2 came out it was all over for AMD as far as competitive performance goes. I have since built AMD systems for family and friends and recommended AMD laptops. Home users.

            • ronch
            • 1 year ago

            It’s not that AMD processors are any less reliable or any buggier than Intel CPUs. It’s just that most folks don’t even know about AMD. And when Bulldozer came out, even those who knew about AMD in the corporate world steered clear. I’ve been using my FX-8350 for over 5 years now and I don’t think it’s unreliable at all, but I wouldn’t deploy hundreds of them in an office building because of the much higher power consumption unless they performed very well. With Ryzen, AMD has a much higher-performance, much more efficient part, and as usual for AMD they’re pricing it very aggressively. But you already know that, right?

            • Pancake
            • 1 year ago

            Much higher performance? Debatable – it depends on your workload. Purely from a qualitative point of view, I think the much higher low-thread performance of Intel chips is going to be something most people would appreciate most of the time.

            Much more efficient? Just plain wrong. On this here august website the i7-8700K uses the LEAST energy for the BMW Blender test – a test which may favour AMD parts. I already know that – from reading the review and understanding basic physics like what energy efficiency means. Not made-up stuff like what Goty wrote. Bet he didn’t even do any university-level physics. I studied quantum mechanics.

            • BurntMyBacon
            • 1 year ago

            [quote=”ronch”<]I've been using my FX-8350 for over 5 years now and I don't think it's unreliable at all, but I wouldn't deploy hundreds of them in an office building because of the much higher power consumption unless they performed very well. With Ryzen, AMD has a much higher-performance, much more efficient part, and as usual for AMD they're pricing it very aggressively. [/quote<]

            I could be wrong, but I think ronch was using the FX-8350 as the point of reference for that statement. I read this as: the FX-8350 was not suitable for widespread deployment due to high power consumption without a commensurate performance level in most workloads. The Ryzen processors, however, do not share this flaw.

            [quote="Pancake"<]Not made up stuff like what Goty wrote. Bet he didn't even do any university-level physics. I studied quantum mechanics.[/quote<]

            Neither level of education is actually necessary to read and understand the relatively simple performance-vs-power charts relevant to this discussion. Incidentally, neither level of education (by itself) will qualify you to design semiconductor chips, either. Whipping out your credentials in response to ronch is a poor way of establishing your opinion as superior given the lack of relevant material depth.

            Also, what does Goty's statement (with supporting link) about EU antitrust fines have to do with ronch's statement about the FX-8350 and Ryzen? It may be best to leave comments from other threads in those threads to avoid confusion.

            [i<]Edited for spelling/grammar[/i<]

            • Waco
            • 1 year ago

            Can you quantify “much higher low-thread performance”? I don’t think that’s a true statement, unless you think 10-20% is a noticeable change for lightly-threaded desktop workloads…

            • Pancake
            • 1 year ago

            10-20% is a big deal. There are only the JavaScript tests here to go by; for other tests, you’ll have to look elsewhere. On Tom's Hardware, their Adobe CC tests show a similar margin, and their web browser tests show a bigger delta:

            Kraken i7-8700K = 699.2ms, 2700X = 761ms
            WebXPRT i7-8700K = 761, 2700X = 690 (higher is better)

            POV-RAY single core i7-8700K = 536s, 2700X = 622s

            So you are going to get snappier performance with the i7-8700K, or even an i7-7700K, most of the time – unless you're spending a lot of your time rendering or encoding videos. Which isn't what I do.

            • Waco
            • 1 year ago

            Um…that’s 10%. Nobody is going to notice that without benchmarking. Also…POV-RAY on a single core. Lolz.

            • ronch
            • 1 year ago

            I have a theory: studying quantum physics somehow affected your reading comprehension skills. I was obviously comparing Ryzen with my FX-8350.

            And I studied molecular biology, which is why I was able to come to the conclusion that Ryzen is better. Then again quantum mechanics is more useful in this regard. /s

            Anyway, I see you’re the kind of guy who needs to embellish himself with his.. um.. Quantum mechanics education to bolster his credibility in a discussion that has nothing to do with quantum mechanics? Or maybe at the quantum mechanics level Ryzen just isn’t very good? Do enlighten us with your quantumness, sir. ^_^

            • K-L-Waster
            • 1 year ago

            In quantum terms, until observed, Ryzen+ is simultaneously better and worse than Coffee Lake.

            • ronch
            • 1 year ago

            Seriously though, even without quantum mechanics my FX-8350 is both better and worse than an i3, i5 and i7. At least in theory.

            • K-L-Waster
            • 1 year ago

            In theory? I dunno, you sound uncertain…

            • kuraegomon
            • 1 year ago

            When working for a telecommunications vendor, we absolutely had use cases where our software performed better on Opterons vs. Westmere, and we recommended platforms to our customers accordingly. In our space, Sandy Bridge-E was really the death knell for AMD.

          • ludi
          • 1 year ago

          At my office, at least, we have a standard deployable image and it covers Dell boxes with Intel processors and Nvidia GPUs. Regardless of whether the price/performance value might actually be higher for AMD in some of our usage scenarios, the limiting factor is that IT doesn’t have bandwidth to support multiple platforms.

          I imagine other shops with large numbers of workstations have similar issues, and if they do support a second platform, it’s Apple. For better or worse AMD isn’t a consistently reliable choice over a period of years and that gets factored into the long-range planning.

          For some types of server applications, the calculus can be different, but in our case our power, performance, and upgrade cycle isn’t strenuous enough to matter. So those are typically Dell or HP midrange.

            • Pancake
            • 1 year ago

            It’s an effect that feeds on itself. Do I actually TRUST that the software I use to make a living with has been adequately tested on AMD systems? What about the drivers and hardware I’ve already invested in? Intel for peace of mind and risk minimisation.

            So, AMD is stuck in the home user corner. How to get out?

            • BurntMyBacon
            • 1 year ago

            Yes, this is their hurdle to breaking into business. It is not an insurmountable problem, but it won't happen overnight either. Non-standard deployments (no standard image from IT) may view Ryzen as an option early on. Standard office systems will be some of the first "standard deployments" to consider Ryzen, due to heavy use of generic software and lower impact if bugs are found. If they are sold at lower cost, or make use of the stronger graphics on the Ryzen-G processors vs. comparable Intel processors, that may help adoption some.

            Servers and similar indirect-use systems may come around later, but upgrade cycles are long and IT likes to have some assurance of reliability and availability of parts. Workstations are a mixed bag. Ryzen is particularly well suited to some tasks. Where the processor is supported by software and/or there are lower numbers of deployed systems (lower impact from bugs), the risk may be justified to some. Larger deployments and/or specialized-software users will be reluctant to switch, at least until the processors are listed in the software support package. Software often costs orders of magnitude more than hardware, so spending up on hardware to get guaranteed support is a simple decision.

            As to how they get out of the home user corner: consistent execution and good partner relations to get their processors covered under support contracts.

            • MOSFET
            • 1 year ago

            Outside of a couple of NUCs, I usually build AMD desktops for my small business. I'm only one case, but they absolutely run our “critical apps” just fine, and have from Athlon XP to Phenom II to FX to Ryzen. I even had an FX stand in as a server for a year, since it could do ECC RAM and IOMMU. The killer was the lack of IPMI, not that it ever crashed once.

            So, AMD is not necessarily stuck in the home corner. Although I certainly see the much broader big corporate scenario at play as well.

          • Kretschmer
          • 1 year ago

          [quote<] I'm not sure why I'm getting the downvotes. The sad fact of the matter is almost nobody uses AMD in a professional setting. I have never once seen an AMD box at any of my clients which covers small to large business, government and tertiary education. Ever. Why do you think that is?[/quote<] Brands matter, and it takes time for perception of them to shift. Expecting AMD's perception to turn on a dime with their first good chips in almost a decade is quite silly.

          • Eversor
          • 1 year ago

          Mindsets take time to change. I recall back when Core 2 was released, people were so affected by the P4 fiasco that they still ordered Athlon 64 X2s and Opterons for the datacenter.
          In the Phenom II era, I pointed my employer to Istanbul Opterons vs. Xeons due to the price/performance ratio, and we were pretty happy with the final result. The salespeople always want to sell you the higher-margin products and know little about how they perform.

          Most people handling video seem to be either on the Ryzen side or on very expensive Xeons. On the server side, EPYC seems to have great traction as well.

            • NoOne ButMe
            • 1 year ago

            AMD did have the IMC advantage at the time, versus Intel's FSB.

            My understanding is that it wasn't until Intel integrated the memory controller that they really kicked AMD out of the datacenter/server market.

      • Krogoth
      • 1 year ago

      Both platforms have their own set of stupid, stupid issues over the years.

      • Brother Michigan
      • 1 year ago

      I judge the quality of AMD launches by the amount of whinging I see from Intel fanboys. Judging from the amount of time you’ve invested in this comment section, this must be a good one.

    • Klimax
    • 1 year ago

    General questions and notes (I should have written these down long ago):
    - What version of each application was used? (Especially Blender and x265 in Handbrake.)
    - What parameters were applied? (Especially noticeable in rendering, encoding, and compression/decompression.)
    - What kind of file was used for transcoding? (There are massive differences between simple cases like movies and the output of a standard video camera – especially with and without denoising.)

    Anyway, for anybody who wonders what’s up with AIDA Hash: that’s the difference between general execution and dedicated SHA1 instructions. (Ryzen and Intel’s Atom have them; Core does not.)

    Blender and Corona are anomalous results.

    Wish for next review: Nvidia Iray (both CPU and Nvidia GPUs), available in DAZ Studio 4.x (it’s been free for a while).

    • ronch
    • 1 year ago

    Some comments here suggest that the gaming benchmarks are already up but I can’t seem to find them. Am I missing something here?

      • Anonymous Coward
      • 1 year ago

      You might be losing your mind.

        • ronch
        • 1 year ago

        I think I lost it some time ago.

    • Welch
    • 1 year ago

    Just for the sake of it… I’d love to see those value charts with the 2400G and 2200G in them, since they are the only Ryzen chips with integrated GPUs. I realize they aren’t in the same category as most of those…

    • Goty
    • 1 year ago

    RE: The task-energy efficiency plot, I’d like to suggest a more concrete way of interpreting these results.

    You state, “The Core i7-8700K falls the closest to the lower-left point of the chart that indicates maximum efficiency,” but that isn’t really true. If we take the lower-left point to be the most efficient a processor could possibly be (completing a task in zero time for zero energy), then the Cartesian distance from this point to the individual points that represent the different CPUs can be used as an indicator of the efficiency of each CPU.

    If we do this, it turns out that the 8700K isn’t the most efficient CPU of the bunch. In fact, it turns out that it comes in third place, behind both the 2700X and 1800X. Here are the full results computed in this manner:

    (not sure how this is going to look after I submit, but here we go)

    Rank – CPU – “Task Efficiency” (lower is better)
    1 – 2700X – 262.9
    2 – 1800X – 292.7
    3 – 8700K – 314.9
    4 – 2600X – 347.7
    5 – 1600X – 387.8
    6 – 7700K – 460.4
    7 – 8400 – 491.6

    It’s an interesting result to say the least, especially for the i5-8400!

    (Lots of edits. I’m tired and can’t computer today.)
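
    For anyone who wants to check the arithmetic, here's a sketch for the three chips whose times and energies are quoted in this thread; the rounded chart values land within about one unit of the figures above:

    [code<]
    import math

    # (time to completion in s, task energy in kJ) from the Blender results
    data = {"2700X": (259, 50.5), "1800X": (289, 49.1), "8700K": (311, 45.1)}
    for chip, (t, e) in sorted(data.items(), key=lambda kv: math.hypot(*kv[1])):
        # distance from the "ideal" chip at the origin; lower is better
        print(chip, round(math.hypot(t, e), 1))  # 263.9, 293.1, 314.3
    [/code<]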

      • Jeff Kampman
      • 1 year ago

      This was something that was eating at me and this is exactly the method of interpretation I was looking for. I’ll see if we can’t incorporate it in the results somehow.

        • Zizy
        • 1 year ago

        I believe the area under the point would be much better than the distance, because it doesn't change with the way you scale the units. Say instead of seconds you use milliseconds: all of a sudden task energy is irrelevant in the calculation, because the scale is so stretched in the direction of time. Use J instead of kJ and time becomes irrelevant.

        If you use area, any units you pick will give the same ranking. Additionally, it makes intuitive sense – if your chip eats half the energy for the task but takes twice as long, it should be considered about as good.

        So, you have formula time^2 * power for total efficiency, and the results are
        Efficiency (same order of chips as in the article)
        2700X: 13.08,
        1800X: 14.20,
        8700K: 14.02,
        2600X: 19.29,
        1600X: 22.85,
        7700K: 24.85,
        8400: 22.62

        I believe this ranking also makes more sense.
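
        A quick sketch for the three chips whose numbers are quoted in this thread (time * energy == time^2 * power; the small differences from the list above are just rounding in the source values):

        [code<]
        # time * task energy, in MJ*s (units of action; lower is better)
        data = {"2700X": (259, 50.5), "1800X": (289, 49.1), "8700K": (311, 45.1)}
        for chip, (t, e_kj) in data.items():
            print(chip, round(t * e_kj / 1000, 2))  # 13.08, 14.19, 14.03
        [/code<]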

          • ptsant
          • 1 year ago

          Area would have unit “J*sec” which does not correspond to a physical quantity that I know.

            • Zizy
            • 1 year ago

            Even if it wasn’t related to anything that could be called a physical quantity, well, so what? Hz/$ isn’t a physical quantity and is the most used (and useful) metric 🙂

            But Js has physical meaning – it is the unit of the Planck constant, and it is therefore incredibly fitting in this context 😛

            • ptsant
            • 1 year ago

            Well, perf/$ makes intuitive sense, while J*s doesn’t, which is why I’m asking whether it actually corresponds to something. Thanks for the info; even though I don’t see how I could tie it to the Planck constant, it’s still cool to know.

            • Mr Bill
            • 1 year ago

            J*sec is [url=https://en.wikipedia.org/wiki/Action_(physics)<]Action[/url<].

            • ptsant
            • 1 year ago

            Thanks!

          • Goty
          • 1 year ago

          Area under the curve is certainly a better measure, but the actual scale of the units isn’t important. If you scale everything by an order of magnitude, the relative values are unchanged.

            • Zizy
            • 1 year ago

            If you scale both axes, values indeed remain the same. But if you scale just one, relative values can change a lot.
            The problem is that units of X and Y axis are different, and for distance you are summing them together. This implicitly assumes a magic factor between X and Y to put them in the same units. You put 1kJ = 1s, but you could have used other magic factors too.

      • ptsant
      • 1 year ago

      I don’t want to nitpick, but this assumes that Euclidean distance is a good metric, even though the relative weight of speed and energy is (or can be) different for each user.

      Furthermore, the relative importance of speed and power depends on the performance of the CPU on a given task. If your CPU needs 0.1 sec to do something, why would you ever bother to spend more energy to do it faster? On the other hand, if your CPU sips 20W, would you mind going to 40W to speed up the calculation by 30%? And if you can get 150 fps at 60W, would you ever care to increase that to 180 fps, even if it meant only a trivial change in power consumption?

      Anyway, I believe Jeff summed it up quite well: the tradeoff of giving ~3% more task energy for 10% more speed is probably pretty desirable for most users. We could even run a poll on that: how much more task energy would you be willing to spend for 10% more speed?

      • willmore
      • 1 year ago

      And what units are these and why did you select them?

        • Goty
        • 1 year ago

        Yeah, the units don’t work out, and I probably would have noticed had I waited until this morning to post. I think area under the line works better, as Zizy mentioned, but it doesn’t actually change the results in any way.

        *EDIT* Sorry, forgot to include that I chose this measure because it is the simplest comparison to the “ideal” (i.e. completing a task instantaneously with no energy, as explained in the original post.)

          • Shobai
          • 1 year ago

          You can then also use the angle between the axis and the line for the upper boundary of the area to distinguish between results; for a given area, higher angles from the x-axis point to shorter runtimes at higher power usage, etc. This would allow for an individual to account for their personal weightings of the relative importance of ‘shortest time’ and ‘lowest draw’ when comparing similar results.

          How easily interpreted that is, and so how useful a metric it is to average readers, is another matter altogether.

            • Goty
            • 1 year ago

            Funnily enough, that angle just encodes the power consumption of the chip – the slope of the line from the origin is energy over time, i.e. average power. Definitely a useful metric, but also one that we can measure more easily with other methods.

            • Shobai
            • 1 year ago

            For sure

      • VillageIdiot
      • 1 year ago

      I would think efficiency is just the lowest power bill for a given task, and the joules needed to complete the task answer that question – so just the lowest point on the Y axis, as ptsant points out. If you want to give weight to the time it takes, then your method might be reasonable.

        • Mr Bill
        • 1 year ago

        It’s the Intel Atom for you, my boy!

          • Goty
          • 1 year ago

          I was going to say some low-end ARM variant.

        • Pancake
        • 1 year ago

        That actually is the very definition of efficiency – energy required to complete a task. Physics. There seems to be a serious amount of stupidity around here trying to redefine it (Goty – I’m looking at you).

        Time to complete a task – or throughput – is a totally orthogonal consideration. Trying to conflate the two is just uneducated and ignorant.

        So, there are many factors that influence a buyer’s decision, but mainly: energy efficiency (a big factor for me, as I believe in minimising my environmental footprint), performance, and cost.

        Now, some maths for the innumerate. Normalise each of these orthogonal factors (set them on a scale of 0-1). Then take the Euclidean distance from (0,0,0). Perhaps weight the factors by what you personally care about.
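
        As a sketch of that recipe (two factors only, since those are the numbers floating around this thread; the 0.5/0.5 weights are placeholders for personal preference):

        [code<]
        import math

        # Min-max normalise each factor to 0-1, then take a weighted
        # Euclidean distance from the origin (lower is better).
        chips = {"2700X": (259, 50.5), "1800X": (289, 49.1), "8700K": (311, 45.1)}

        def minmax(vals):
            lo, hi = min(vals), max(vals)
            return [(v - lo) / (hi - lo) for v in vals]

        names = list(chips)
        norm_t = minmax([chips[c][0] for c in names])
        norm_e = minmax([chips[c][1] for c in names])
        w_t, w_e = 0.5, 0.5  # personal weights: performance vs energy efficiency
        for c, t, e in zip(names, norm_t, norm_e):
            print(c, round(math.hypot(w_t * t, w_e * e), 3))
        [/code<]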

          • Zizy
          • 1 year ago

          The very definition of energy efficiency.
          You have time efficiency too.
          Everyone is trying to conflate the two, and additionally bringing price into the mix; otherwise we would all be running POWER or ARM for most workloads.

          Normalization is a suspicious step: with normalization you are quietly doing a transformation of units in order to compare them, and you can introduce an arbitrary amount of bias there. Not to mention you could perhaps try to weight non-linearly, as we all know the last bits of energy and power efficiency are the hardest to get…
          You should avoid normalization as much as possible, unless you can use natural units where the normalization factor doesn't introduce bias (say, hbar = c = 1).
          It is far better (easier) to always multiply or divide the numbers whenever possible, because at worst you make the final quantity something bizarre, but a simple dimensional analysis will tell you whether higher is better, lower is better, or the whole thing is pointless (such as multiplying performance by price).

            • Pancake
            • 1 year ago

            Time efficiency? WTF?!? There's a perfectly cromulent concept for that already – throughput.

            Normalisation is a pretty fundamental concept in multivariate analysis. It's what statisticians and computer scientists do. I don't even know where to begin with your nonsense.

            But here's a simple thought experiment for you. What if you changed the unit of cost to cents, or UK pounds, or Indonesian rupiah? Or changed energy from joules to calories to pounds of margarine?

            Units don't come into it.

            As far as weighting goes, like I said, it's about your personal choice – what's important to you? Because most people would not feel equally about efficiency, throughput, and cost. I bought a bespoke i5-8250U ultrabook because I've got lots of money (cost is not important) but care a lot about efficiency and am not too concerned about throughput (I would have bought some bloated beast if I were).

            • Zizy
            • 1 year ago

            This simple thought experiment shows that you should stick to multiplication and division so you don't even need to think about units. If your measured quantity is performance/(energy*money), you can input rupiah and BTU and measure time to completion in Martian revolutions… and all the CPU rankings will still be the same as with any other units.

            Now introduce some wonky normalization using a distance of some sort, and you can get almost any result. The potential for bias is endless. The only limitation is that, unless you really fucked up, a chip that is worse in every parameter cannot end up better overall – other than that, you have basically free rein. The ranking of these 7 chips could even come out 2700X > 1800X > 8700K > 7700K > 8400 > 1600X > 2600X with a suitable nonlinear normalization. Now add price, and you are down to mostly just this: the 1800X and 2700X have to beat the 7700K, and the 2600X has to beat the 1600X, and that's about it.

            • Pancake
            • 1 year ago

            You don't get it, do you? If you used Japanese yen as the cost unit, it would skew cost as a factor by roughly 100x when calculating the Euclidean (orthogonal) distance – which is exactly why you normalise first.

            Do you even know what normalisation is and why it's a standard technique in multivariate analysis? Look it up on Wikipedia or something. Doubling down on ignorance leads to beliefs like climate-change denialism, anti-vax, flat-earthism, homeopathy, and all the other non-science craziness floating around in this post-fact era. So sad to see it's arrived with full force on TR…

          • Goty
          • 1 year ago

          You should look behind me at the M.S. in Physics hanging on the wall. 😉

          Please take your fanboyism elsewhere. It’s tiring.

    • EzioAs
    • 1 year ago

    Btw, I probably missed it, but does the difference between the X CPUs and the non-X CPUs lie in clock speed only?

    • smilingcrow
    • 1 year ago

    “The Core i7-8700K falls the closest to the lower-left point of the chart that indicates maximum efficiency, but the Ryzen 7s don’t draw that much more power to deliver considerably higher performance.”

    i7-8700K = 145W
    2700X = 195W = 34.5% higher

    2700X = 259s
    i7-8700K = 311s = 20.1% slower

    So in reality it is the opposite of what the article states, in that the difference in power is considerable and the performance difference much less so.
    This site has lost serious credibility for me now, as the wording in recent AMD articles and the choice of RAM shows a clear lack of ability or willingness to be objective. When you can't trust the data or the words, there's not much left, really.

      • jackbomb
      • 1 year ago

      I’ve read a few of your comments on this article. I dunno man, you just seem kinda bummed that AMD’s finally sitting pretty!

        • smilingcrow
        • 1 year ago

        I’m bummed that a site I used to respect has lost its value to me.
        I’m a DAW user, and even though AMD still isn’t competitive in that field, it’s great to see them back on form this last year.
        Ryzen 2 is fine and offers a decent overall balance, and I would certainly recommend it.

      • Jeff Kampman
      • 1 year ago

      You make a good point and I’ve adjusted the wording. I’m not going to make any more updates to this article today as it’s clear that I’m fatigued and dashing off some not-fully-considered analysis.

        • smilingcrow
        • 1 year ago

        “The Ryzen 7 CPUs draw considerably more power to deliver considerably higher performance.”

        There’s another one you might want to edit tomorrow.

          • Jeff Kampman
          • 1 year ago

          OK, so by my calculations we have 34.5% higher power draw (which works out to roughly 12% higher task energy) to deliver a ~17% time-to-completion reduction in Blender. I do believe that fits the dictionary definition of “considerable” in both cases. I’m not sure how else you would have me express it.

    • DPete27
    • 1 year ago

    Can someone explain the task energy vs time to completion scatter plot to me?

    The task energy numbers are simply watts * seconds-to-complete (kJ), which, assuming each system returns to 0W after the task is complete, is a direct representation of task efficiency. Graphing kJ vs. seconds is a misleading (and potentially unnecessary) metric.

    For example:
    The 2700X used 195W * 259 seconds to complete the task = 50.5 kJ
    The 8700K used 145W * 311 seconds to complete the task = 45.1 kJ

    The only thing the scatter plot does is reiterate that the 2700X completes the benchmark faster than the 8700K. More alarming to me is that the 2700X is LESS efficient than the 1800X. Right? More total energy used to complete the benchmark. In other words, the 2700X uses 15% more power to complete the benchmark 10% faster than the 1800X…
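
    A trivial sketch of that arithmetic, for anyone following along:

    [code<]
    def task_energy_kj(watts, seconds):
        # average power * time to completion, in kJ
        return watts * seconds / 1000

    print(task_energy_kj(195, 259))  # 2700X: ~50.5
    print(task_energy_kj(145, 311))  # 8700K: ~45.1
    [/code<]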

      • Jeff Kampman
      • 1 year ago

      Some readers have said the scatter plot helps them visualize that data set better. It may not be strictly necessary but I include it for that reason.

      You are correct that the Ryzen 7 2700X consumes slightly more energy to do its thing, but the increase in task energy (3%-ish) seems to be offset by the magnitude of the performance increase (10%-ish) over the 1800X. If you care about efficiency first and foremost this is a slight loss, but for those after absolute performance it is a big win.

      The i7-8700K is the unquestioned efficiency champion, but whether that matters depends on whether the savings on your electricity bill offset the cost of the time you give up waiting for it to finish. I doubt this is true for the majority of folks.

      AMD has said they plan to sample us with second-gen Ryzen non-X parts and those will likely offer an interesting look at the efficiency-focused side of the voltage-and-frequency curve.

        • DPete27
        • 1 year ago

        That’s fair. Since the scatter plot graphs s vs W*s (performance / task-energy), it does place visual importance on raw performance rather than outright efficiency. For someone that’s constantly running computations all day long, the pure amount of calcs that can be completed each day [performance] would outweigh power efficiency to a large degree. If that’s the comparison to be made, then yes, a chip that’s 3% less efficient but allows you to do 10% more calculations per unit time is a better choice.

        Performance/Watt and efficiency aren't the same thing, and I think it was difficult for me to differentiate between them given the way those two paragraphs in the article were worded and/or how the arguments within them were organized.

        Efficiency = it took the 2700X 195 Watts for 259 seconds [task energy] to complete the 10,000 calculations of the benchmark. You can be efficient without being fast.
        Performance/Watt = graph of seconds vs watts with the best choice being the shortest distance from the origin.
        2700X = 324
        1800X = 335
        8700K = 343 (leading the 6c/12t pack)
        2600X = 380
        1600X = 414
        7700K = 472 (8 threads vs the i5-8400’s 6 threads)
        8400 = 497
        (Same conclusion as Goty, but perhaps easier to comprehend)
        It appears more coarz are rewarded by blender.

        It does seem odd that the 2700X is less efficient than the 1800X, but the 2600X is more efficient than the 1600X.

        NOW!! What if you run perf/watt on a single core for each CPU to suss out architectural efficiencies!!!

          • Mr Bill
          • 1 year ago

          Plot an Atom CPU and something power-hungry for visual reference. Then folks would realize which way the trade-offs trend.

          • BurntMyBacon
          • 1 year ago

          [quote=”DPete27″<]Performance/Watt and efficiency aren't the same thing, and I think it was difficult for me to differentiate between them given the way those two paragraphs in the article were worded and/or how the arguments within them were organized.[/quote<]

          Using your example with the 2700X, written as "efficiency":

          195 Watts * 259 seconds [task energy in joules] / 10000 (number of tasks) = 5.05 joules per task

          In this case lower is better. Written as "performance/Watt":

          (10000 tasks / 259 seconds) / 195 Watts = 10000 tasks / (259 seconds * 195 Watts) = 0.198 tasks per joule

          In this case higher is better. These two representations show the same data, but inversely. This also works if you define the task as one benchmark run instead of 10,000 calculations. You could also measure frames per joule or joules per frame in a game.

          I think many people like the "performance/Watt" representation better for two reasons. Most reviews already have performance numbers, so it is an easy follow-on both for the reader and the writer. Also, like with performance numbers, some people find it easier to associate a higher number with higher efficiency, as opposed to quantities like time where a higher number is a bad thing.

          [i<]Edited for spelling/grammar[/i<]
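
          The inverse relationship is easy to sanity-check in a couple of lines:

          [code<]
          watts, seconds, tasks = 195, 259, 10000
          j_per_task = watts * seconds / tasks      # 5.05, lower is better
          tasks_per_j = tasks / (watts * seconds)   # ~0.198, higher is better
          assert abs(j_per_task * tasks_per_j - 1) < 1e-12  # inverse views of the same number
          [/code<]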

        • BurntMyBacon
        • 1 year ago

        I agree that the scatter plot can help visualize the trade-off between performance and efficiency. I find it useful if I want to define weights for performance and power efficiency rather than just pick one as king. Once I have weights, it is easy to define a slope based on them. This allows me to take a line (with a fixed slope) and place it anywhere on the graph to very easily determine which options are acceptable to me (lower-left side of the line, in this case) and which options are not (upper right). I can also move it around to easily compare specific options.

        For those of us where only one criterion is important, a vertical line (for this chart) would represent performance as king, while a horizontal line represents efficiency as king. I realize it would be non-trivial to program inputs that let us define weights and create this movable line directly on the charts, and I'm not sure how many people would actually use it, but I find it extremely useful so I thought I'd throw it out there.

      • cegras
      • 1 year ago

      1800x – 2700x
      289 – 259 s
      49.1 – 50.5 kJ

      • YukaKun
      • 1 year ago

      Actually… now that you mention it: is the bundled cooler included in the value graph?

      I don’t know how to properly weight it, but I’m pretty damn sure it’s ~$40 worth of included hardware.

      Cheers!

      • Zizy
      • 1 year ago

      It isn’t unnecessary, because there is energy efficiency and there is time efficiency. Both matter, and the scatter plot neatly visualizes that tradeoff. If you wanted only energy efficiency, you'd grab the 65W chips; those should be the best, likely with the 8700 as the winner, though the 2700 shouldn't be far behind.

      The 2700X is less energy efficient, but way more time efficient. AMD could, and perhaps should, have kept it at a 95W TDP and lower clocks – it would be slower but more energy efficient.

        • kuraegomon
        • 1 year ago

        AMD should most definitely _not_ have done that, because the market wouldn’t reward them for doing so. In order for them to keep providing the revitalized competitive landscape that the original Ryzen launch created, they need to keep, you know, [i<]actually selling chips[/i<]. The broader market doesn't prioritize energy efficiency over the raw performance in the same way that you do, and AMD's first priority needs to continue to be increasing market share and profits. The tradeoffs made for the R2700X are eminently reasonable, given the limited scope for real improvements to Zen+. When Zen 2 ships, I think it's fair to hold AMD to a higher standard on the power efficiency front.

        • DPete27
        • 1 year ago

        “Time efficiency” is performance, is it not? Performance is seconds to complete the task.

        Either way you slice it (whether you use simple perf/Watt or whatever other convoluted measurement you want to use to confuse people, like perf/task-energy), the graph doesn't change. I'm just suggesting we use seconds vs. watts in the scatter for less confusion.

        As I said in my response to Jeff's comment, the Intel chips are more efficient (in both task energy and perf/Watt) at matching thread counts against AMD (I didn't normalize for frequency, though). Even when the perf/Watt Cartesian distance is divided by threads, though, the 1800X and 2700X come out on top. I suppose that makes sense, given that the highest-end parts would be the best-binned (I'm not sure which groups of Ryzen CPUs share cut-down silicon).

          • BurntMyBacon
          • 1 year ago

          [quote=”DPete27″<]Either way you slice it (whether you use simple perf/Watt or whatever other convoluted measurement you want to use to confuse people, like perf/task-energy), the graph doesn't change. I'm just suggesting we use seconds vs. watts in the scatter for less confusion.[/quote<]

          You are correct: given a constant task (or number of tasks), that would still represent the data accurately. It is ultimately a viewing preference that stems from the desire to prioritize the information important to you. That is completely valid (we all have priorities), and it isn't unreasonable to ask. It is also reasonable for the author to limit the scope of the article, which may include omitting certain visual representations of the data, so long as all the data is in fact represented accurately. I imagine Jeff will play around with it (if he hasn't already) and include it if he thinks it adds value.

          • Zizy
          • 1 year ago

          Yup, this is performance; I used that name to make it sound parallel to energy efficiency.
          Essentially, perf/W == energy efficiency. A great metric if you primarily care about “what is the maximum performance we can squeeze into this box, limited by power/thermals.”
          Perf/E is just perf^2/W, i.e. time*energy efficiency. That metric makes sense if you care about both time and energy efficiency; by simply multiplying the two, you give up the option of weighting according to preference, but you also eliminate bias. It should be noted that a normalization giving the same result would have to be nonlinear – because with this “quantum of action” measurement you are saying that a twice-as-fast part can eat twice the energy (not power!) and be considered just as good. It would be a 1/x curve on the perf/energy graphs.

      • Mr Bill
      • 1 year ago

      You might say it’s an ‘action figure’

      ducks and hides

    • Jeff Kampman
    • 1 year ago

    I added a page regarding power usage and efficiency to the review. Enjoy.

      • derFunkenstein
      • 1 year ago

      I most certainly did!

    • Tristan
    • 1 year ago

    Coffee Lake is still better. Six cores at higher speeds are better than eight at lower speeds, and the 8700K can be OC’ed.

      • ronch
      • 1 year ago

      Rain, rain go away…

    • Krogoth
    • 1 year ago

    [url<]https://www.youtube.com/watch?v=uvUL28Skt6E[/url<]

      • chuckula
      • 1 year ago

      We’ll see what you say when the Venti Coffee Lake launches.

    • Zizy
    • 1 year ago

    The 2700X is pretty nice, but the 8700K is a fierce competitor and dominates when overclocked.
    As for the rest of the lineup, it seems AMD won’t have any trouble competing with Intel, as Intel hobbles its other chips just too much.

    Interesting to see tom’s power numbers. I wonder why the TDP increased, given power consumption didn’t.

    • GurtTractor
    • 1 year ago

    Thank you so much for testing the DAW performance, looks like some really substantial improvements there, particularly in the VI bench. Great stuff.

    • gogogogg
    • 1 year ago

    @Jeff, are you testing with ‘Core Performance Boost’ turned on in the BIOS? Ian from AnandTech noticed in their review that this feature is actually the BIOS name for Precision Boost 2 and it came disabled by default.

    EDIT: For some clarification, people might refer to
    [url<]https://twitter.com/IanCutress/status/987025785646211075[/url<]
    Ian explained that he had originally thought the 'Core Performance Boost' BIOS option was a variant of MCE, so the initial test data was collected with the option turned off. He then confirmed with ASUS that the option actually means Precision Boost 2, which is why the initial data was thrown away and new data was collected with the feature turned on. Note that the default BIOS value was 'On'; AnandTech had originally turned it off because they didn't know it meant Precision Boost 2.

    • YukaKun
    • 1 year ago

    Progressive JPEG Review!

    Anyway, thanks a lot for the hard effort and time spent on this already.

    Keep the good information coming, please!

    • Krogoth
    • 1 year ago

    Ryzen+ = K6-2 redux

    Minor clock speed boost and tweaks that address issues from the first generation.

    • smilingcrow
    • 1 year ago

    I wish you wouldn’t test with RAM that is way out of spec and costs considerably more than RAM that is in spec. (I looked on Newegg, and for my requirement of 2x16GB DDR4-2666 the difference in price is $115.)
    I’m all for testing with overclocked RAM and CPUs as a secondary thing, but I’d like to see a baseline, as there’s no way I’m paying over £200 for the 16GB of RAM you used.
    It means I can’t rely on your DAW tests, which is annoying, as I spent time analysing the data and came to the conclusion that the i5-8400 and i7-8700 (non-K) make the most sense for DAW usage.

      • smilingcrow
      • 1 year ago

      The secondary effect is that, since AMD benefits from higher-performing RAM much more than Intel does, it skews the figures towards AMD.
      You are basically overclocking the RAM, which benefits AMD, and not overclocking the CPUs, which would benefit Intel.
      If you switched those two around, running the RAM at stock and overclocking the CPUs while giving no data at stock settings, people would think it a farce, I imagine.
      Using more expensive RAM also skews the value proposition unless it is taken into account in the graphs.

        • dragontamer5788
        • 1 year ago

        3200 MT/s at 16GB is practically the same price as 2666 MT/s… unless you go for like CL14 B-dies or something. I think it’s reasonable to assume that people would go for [url=https://www.newegg.com/Product/Product.aspx?Item=N82E16820231941&cm_re=ddr4_3200-_-20-231-941-_-Product<]mid-quality 3200MT/s CL16 RAM today[/url<]; it’s at an ideal price/performance point.

        Still, there are sites out there that test at 2666 MT/s: [url<]https://www.pugetsystems.com/labs/articles/After-Effects-CC-2018-CPU-Comparison-AMD-Ryzen-2-vs-Intel-8th-Gen-1137/[/url<]

        I personally think 3200 MT/s is reasonable. It’s not much more expensive, and it shows sizable performance gains on both Intel and AMD systems.

          • smilingcrow
          • 1 year ago

          G.Skill Sniper X DDR4-3400 16-16-16-36 2T

          I’ve looked at RAM scaling for the i7-8700K, and it is not significant.
          The point is that this RAM is out of spec, and Ryzen is known to scale well with memory speed whereas Coffee Lake doesn’t, so it skews the performance and value plots.

          • smilingcrow
          • 1 year ago

          “3200 MT/s at 16GB is practically the same price as 2666 MT/s”

          As already stated, they are using DDR4-3400 16-16-16-36, [b<]NOT[/b<] 3200.
          I looked on Newegg, and for my requirement of 32GB (2x16GB) the difference in price is $115, and this with an i5-8400 that itself costs $180, so that's a significant percentage increase in the CPU+RAM cost. Plus, I don't need a dGPU for a DAW.
          The difference for 16GB is around $35+, which still skews the plots enough.

            • Freon
            • 1 year ago

            Right, I wish TR had used something more reasonable, like 3000 or maybe 3200, but it seems less afoul than Anand pairing 2666 with an 8700K build, which seems very unlikely for any consumer to pick.

            My gut still says there is more going on than RAM speed. Some people online seem to be questioning the thoroughness of Spectre/Meltdown patches across review sites. Of course not everyone runs precisely the same settings in games anyway. Could be several things.

            • smilingcrow
            • 1 year ago

            Anand used DDR4-2933 for Ryzen and DDR4-2666 for Coffee Lake, which are both the highest officially supported speeds, I think.
            That’s as it should be; memory scaling with RAM overclocked beyond the spec of the controller is a separate test, for me.
            When you consider that the platforms respond quite differently to overclocking the memory controller, as well as the price differential, it throws the whole validity of this review out of the window for me.
            The risk is that it leaves TR vulnerable to being accused of bias, which is not a place to put yourself in, really.

            • Voldenuit
            • 1 year ago

            [quote<]The risk is that it leaves TR vulnerable to being accused of bias, which is not a place to put yourself in, really.[/quote<]
            A user who pays the extra for the Intel 8700K is not going to cheap out on bargain-basement RAM. TR’s configuration reflects what actual DIY builders will buy.

            • smilingcrow
            • 1 year ago

            A user who pays extra for the K on the end will generally be overclocking the CPU multiplier as well as the memory controller, so why overclock only one aspect of the CPU?
            Actual DIY builders will overclock both.
            Buying memory in spec is hardly bargain-basement, especially at current prices.
            When you look at how little the Intel platform scales with memory, why would I waste $115 on a tiny gain?

            • Jeff Kampman
            • 1 year ago

            Since you broke out the B-word, I re-ran some productivity numbers with the Ryzen 7 2700X using a kit running at DDR4-2933 and 15-15-15-35 timings. The swings are at most 6% and typically in the range of 2-4%. In some cases (Javascript) performance is even slightly better, while a couple rendering tests are worse.

            Again, these are single-digit percentage changes in results at most, not the kinds of swings that would change our overall verdict.

            Memory latency rose from 65 ns to 71.2 ns, and our directed memory tests predictably take big hits, at about 5-7 GB/s less bandwidth. The magnitude of the real-world performance differences is not nearly this large.

            As for DAW Bench, VI performance at 96 kHz is unchanged, while VI performance at 48 kHz is down a little bit (~40-60 voices).

            In the DSP benchmark, the effects were a bit more pronounced:
            – 96/64: 64 instances at 2933 vs 69 at 3400
            – 96/128: 70 vs 77
            – 48/64: 118 vs 138
            – 48/128: 152 vs 156

            It’d be interesting to see whether dialing back the i7-8700K to DDR4-2666 has similar effects in DAW Bench, but I don’t believe changes of this magnitude “[throw] the whole validity of this review out the window” or justify accusations of bias toward a particular manufacturer. It sounds like you are making the correct decision for your needs regardless.
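
            To put those DSP swings in percentage terms, here is a quick sketch (editorial, using only the instance counts Jeff quotes above; the labels are his sample-rate/buffer-size pairs):

                #include <stdio.h>

                int main(void) {
                    /* DAW Bench DSP instance counts from the retest above:
                       first value at DDR4-2933, second at DDR4-3400. */
                    const char  *cfg[]    = {"96/64", "96/128", "48/64", "48/128"};
                    const double at2933[] = {64, 70, 118, 152};
                    const double at3400[] = {69, 77, 138, 156};

                    for (int i = 0; i < 4; i++) {
                        double gain = 100.0 * (at3400[i] - at2933[i]) / at2933[i];
                        printf("%-7s %3.0f -> %3.0f instances (+%.1f%%)\n",
                               cfg[i], at2933[i], at3400[i], gain);
                    }
                    return 0;
                }

            That works out to +7.8%, +10.0%, +16.9%, and +2.6% respectively; the 48/64 case is the 16.9% maximum cited a few posts below.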

            • smilingcrow
            • 1 year ago

            Good to see the extra tests using stock settings.
            The difference in the DAW tests that are important to me (48/64) is 10% on average.
            As Intel is the clear leader here it would be wonderful to see the tests done again using RAM at stock settings for the Intel chips.

            You previously stated that you consider a 17% performance difference to be considerable, so that leaves 10% as significant, especially when that is around the ballpark figure by which Ryzen 2 improves on Ryzen 1.
            In other words, the general gain from Ryzen 1 to 2 is the same as the specific gain here from an overclocked memory controller.
            If different reviews are randomly overclocking the memory controller by different amounts, then how can we make reasonable comparisons between reviews?
            Comparing the two sets of data you gave for the DAW results gives a maximum difference of 16.9%, which by your terms is defined as considerable.
            So overclocking the memory controller considerably skews the data in some cases, and has more impact than overclocking the clock speed, as there is little headroom for that.

            Your bias is clearly against running the system fully in spec, and based on the data you provided, that skews the data considerably in some cases. Therefore, the integrity of your methodology has been compromised.
            As for the dubious wording in some recent AMD articles, that could be down to a hyperbolic way of communicating information. It certainly doesn’t help the reviews look balanced, which, alongside running the systems out of spec, adds up to a sense that these reviews no longer deserve so much respect.
            Add all that up and it does leave you open to accusations of being biased at worst or incompetent at best. I don’t know if there is a bias, but the mere fact that I have to choose between incompetence or bias is not a good sign. Although it could be both, I suppose!

            • K-L-Waster
            • 1 year ago

            TR biased *towards AMD*? That’s a new one. Usually the accusations are going the other direction.

            (That reminds me, haven’t seen IGTrading around in a while…)

            • Shobai
            • 1 year ago

            The other B-word?

            • Jeff Kampman
            • 1 year ago

            I got up this morning and retested the Intel system with DDR4-2666 memory. I’m not going to type up all the results here, but I will note that there is basically no change in the DSP test results but a 13% decrease in VI 96/128 and VI 48/64 and a 6% decrease in VI 48/128.

            So there is “skew” (better performance from each system) but Intel benefits from the faster RAM in the VI test while the Ryzen 7 2700X benefits most from it in the DSP test.

            Ultimately, we’ve shown the best possible performance from each system by normalizing RAM speeds across both platforms and I’m perfectly comfortable with that outcome for our audience (PC enthusiasts, aka the “biased against running our systems fully in spec club.”)

            • smilingcrow
            • 1 year ago

            Thanks.
            AKA – “[b<]Selectively[/b<] biased against running our systems fully in spec club." So doubly biased! 🙂 If you want to preach to the out of spec crowd then go large and O/C the goddamn Multiplier or GTFO. These amateurs!

            • MOSFET
            • 1 year ago

            smilingcrow, you’ve had too much Trump. We all have, but for F’s sake, don’t imitate him.

            • Meadows
            • 1 year ago

            Your work ethic is heroic and appreciated.

            As a side note for the future, a delta of 10% (in anything) is generally viewed as the threshold of clear noticeability, so it should remain worthy of a mention in future reviews.

            • thx1138r
            • 1 year ago

            [quote<]which seems very unlikely for any consumer to pick.[/quote<]
            Unless the consumer picks a Dell:
            [url<]https://www.intel.com/content/www/us/en/products/devices-systems/desktops/pcs/dell-xps-8930-se-H39262641.html[/url<]

            I thought that was bad until I saw this:
            [url<]https://www.amazon.com/HP-Desktop-Computer-i7-8700K-880-130/dp/B0768M95WY/ref=sr_1_20?ie=UTF8&qid=1524165234&sr=8-20&keywords=8700k+pc[/url<]

            Yes indeed, DDR4-2400 on an 8700K!

            • chuckula
            • 1 year ago

            Yes, but if you are going the OEM route with crappy memory configurations, put the 2700X in the exact same boat.

            The bigger issue with Anand’s review is the assumption that the name-plate RAM speed, which nobody pays attention to, should be used as the comparison point. That’s ridiculous.

            It would be one thing if Ryzen had substantially superior RAM speed support so it gets faster RAM based on superior capabilities, but in actuality the exact opposite is true and Anand not only failed to let the 8700K use the fastest RAM it could, but it artificially refused to simply slap the same sticks used for the 2700X into the 8700K. That’s fishy. I have no problem with TR using equal memory configurations across both platforms though, even if the 8700K could technically go even higher.

            • thx1138r
            • 1 year ago

            Agreed, if you’re going to run with slow RAM, at least use the same speed to make the comparison more fair. Although I’m not sure it’s fair to say that nobody pays attention to the name-plate RAM speed; certainly no enthusiasts do, but OEMs seem to be using it as an excuse to pair slow RAM with expensive CPUs.

            Sure, while the 2700X has improved support for high-speed RAM, the 8700K is still a ways out in front of the pack when it comes to high-speed RAM, certainly one reason why high-FPS gamers will continue to prefer it. But because RAM is such a big issue these days, I’d really like to see some direct comparisons between the 2700X and 8700K using a few different RAM types with the exact same configuration otherwise. My preference would be to see the effects of, say, 2400MHz, 3400MHz, and the highest-speed RAM the 8700K can run without crashing. I’d like to think that AnandTech gave us part of that picture, but I don’t entirely trust their results. It would be great if TR could give us something like that.

            • dragontamer5788
            • 1 year ago

            [quote<]Right, I wish TR had used something more reasonable, like 3000 or maybe 3200, but it seems less afoul than Anand pairing 2666 with an 8700K build, which seems very unlikely for any consumer to pick.[/quote<]
            I agree 3400 CL16 is way higher-end than what most people would buy; that speed demands high-cost Samsung B-die.

            3200 CL16 is closer to reasonable. Multiple manufacturers (more than just Samsung) can hit 3200 CL16, and the price point is far more reasonable.

            • JustAnEngineer
            • 1 year ago

            I got 3400-CL16 when I built my Coffee Lake system at the end of December.
            [url<]https://www.newegg.com/Product/Product.aspx?Item=N82E16820232157[/url<]

            • Beahmont
            • 1 year ago

            16GB sticks (2x16GB), not 16GB packs (2x8GB), are the subject under discussion, because smilingcrow indicated a need for 32GB now and, I assume, a potential increase to 64GB at some point.

            You can’t do that with 8GB sticks on the consumer platforms. And the price differences are more drastic for 16GB sticks than for 8GB sticks.

            • JustAnEngineer
            • 1 year ago

            If DDR4 prices had not tripled from the time that I built my Skylake system, I would have gone with 2×16 GiB again for this build, but I settled for 2×8 GiB this time to save a couple of Benjamins.

      • ColeLT1
      • 1 year ago

      I counter with: I wish they would also test with the fastest RAM possible. I tend to buy close to the highest-speed/lowest-latency RAM available at a good price. My 7700K has 16GB of DDR4-3600 16-16-16-36 bought for $110 (knowing at the time that the 7700K ran fine at 3600 and started to struggle at 3800+). I know this RAM is now $200+, but it is unfortunate to see lower speeds being used for benchmarks when enthusiasts tend to push things to their limit and would like to see that reflected in the charts.

        • smilingcrow
        • 1 year ago

        Don’t expect that on a launch day review but some sites will run extensive RAM scaling tests later on.

      • Mr Bill
      • 1 year ago

      If you need a lot of memory, then either higher-density sticks or more than two sticks means you have to use slower timings for a stable system. I suggest that gaming (+insert your demand here) be tested with 1-2 sticks of the fastest memory the rig will support, and other testing with the memory slots fully populated with whatever timings work.

      • Krogoth
      • 1 year ago

      ITT: Intel shills trying to downplay Ryzen+’s tweaked memory controller, cache, and CCX, which have bridged the small gap between Ryzen and the Skylake family.

        • NoOne ButMe
        • 1 year ago

        Ryzen’s memory controller in terms of latency, and also core-to-core latency across more than 4 cores, is still much worse than Intel’s?

        It is less worse. Which is still worse.

          • Mr Bill
          • 1 year ago

          As to cache latency, go read [url=https://www.anandtech.com/show/12625/amd-second-generation-ryzen-7-2700x-2700-ryzen-5-2600x-2600/3<]AnandTech--Improvements to the Cache Hierarchy[/url<] and then come back and say that. As for core-to-core latency, here is [url=https://www.tomshardware.co.uk/amd-ryzen-threadripper-1950x-cpu,review-33976-2.html<]Tom's Hardware: Infinity Fabric Latency Testing[/url<]. We now know that Threadripper, which came out later, got the L2 cache latency down to 12 clocks from early Ryzen's 17 clocks. So perhaps we can infer core-to-core latency in the Ryzen 2700X as being similar to Threadripper's.

            • NoOne ButMe
            • 1 year ago

            I was very clear about what I was talking about:
            1. Core-to-core communication beyond the 4 cores of a CCX is still likely much worse (or else AMD would have mentioned it).
            2. Their memory controller still has worse latency than Intel’s, although it does seem to slightly outperform Intel’s in bandwidth in some cases, at equal RAM speeds.

            I have made no disputes about the cache latency, which is separate from the memory controller.

            There, AMD has made positive strides forward, and their caches seem to best Intel’s. Although there are lots of tradeoffs with caches, it may be that Intel and AMD are each better in their own design-goal areas.

            [edit: what I mean in the last lines is that AMD and Intel may have had different design goals with their caches. So they may well both be “ideal” in terms of the performance and power consumption they offer]

            • MOSFET
            • 1 year ago

            1. An extremely small real-world performance tradeoff for the privilege of having all the cores and cache. The OSes I’ve used with Ryzen v1 know how to schedule processes and threads correctly, and have for some time. The impact is minimal.

            2. Another old dilemma or tradeoff. Intel does come closer to the ideal here, with often 90% of the bandwidth and half the latency, but when bandwidth climbs and climbs, latency can climb too. The Ryzen family seems to lead in bandwidth in most cases, not just some. There probably is [i<]some[/i<] real-world impact on Ryzen, but until Scott or Raja can get Intel and AMD to collaborate on a memory controller, we just won't know what those sweet-looking Intel latencies could do for Ryzen.

            I'm not the least bit on the offensive or defensive side here. Just discussing.

            • NoOne ButMe
            • 1 year ago

            Completely agree with point one.

            On point two, Intel isn’t getting lower bandwidth: it’s equal bandwidth within the margin of error, given equal RAM speeds and channels, at least based on Tech Report’s AIDA64 results. The point where the 7700K/8400 fall behind comes from their using “only” 3200 instead of 3400.

            I would appreciate links to places showing this bandwidth difference if you have them!

            • Mr Bill
            • 1 year ago

            Reading comprehension fail, apologies. Thank you for expanding on your original post to correct my misdirection.

    • Gadoran
    • 1 year ago

    What has changed? Nothing.

    So what is the reason for this new AMD output? Just to have new mask sets around without any gain in die size? Puff… fake 12 nm.

      • chuckula
      • 1 year ago

      Needs more work.

      Accuse TR of being in a pro-AMD conspiracy.

      DO IT!

      • Cooe
      • 1 year ago

      They chose to use the same 9T transistor library as Ryzen 1 (vs. the new 7.5T library GloFo is offering along with the 12nm process) to increase the amount of “dead space” on the die, which improves voltage and thermal characteristics, instead of actually shrinking the die to increase yields. It’s just a different way of using a process shrink, that’s all. Don’t talk about stuff you don’t know crap about.

        • Mr Bill
        • 1 year ago

        Can’t remember which Intel CPU it was. But somewhere back in the 90’s there were bits that were double clocked and the die pictures showed low component densities in the higher clocked regions.

          • techguy
          • 1 year ago

          That was P4 aka the Netburst micro-architecture and the “bits” in question were the ALUs which were “double-pumped”. They ran at twice the clock of the rest of the core. I believe this is the first example of clock domains on a consumer PC microprocessor. Nvidia later used clock domains (also for the ALUs) on the famous G80 GPU and continued the use through the Fermi architecture generation. They’ve fallen out of fashion since. I always liked the idea though, wouldn’t mind seeing it come back.

    • ermo
    • 1 year ago

    Are there any Zen+ ThreadRippers on the horizon?

      • dragontamer5788
      • 1 year ago

      2H 2018.

      I don’t think a specific release date has been mentioned yet. But yeah, I’m definitely interested in the high-end product. I might just get a normal Threadripper, though, rather than wait.

        • Anonymous Coward
        • 1 year ago

        Those higher clocks should be pretty fine in the power constraints of a server room. Mmmm.

        • shank15217
        • 1 year ago

        Don’t get a Threadripper; there is way too much processing power under that hood for any home environment.

          • ermo
          • 1 year ago

          Keep in mind that this is a PC enthusiast site and that what enthusiasts do at home might not necessarily reflect the typical “home environment”?

          For instance, I do some Solus Linux maintainer work on the side, and being able to “make -j32” makes turnaround times much nicer than a mere “make -j8”.

          Same deal for rendering, even if we account for OpenCL GPU acceleration.

            • techguy
            • 1 year ago

            Exactly.

            My home media server is powered by a 7900x with 64TB of disk storage.

            I have a de-lidded and water-cooled 7700k machine @ 5.2GHz hooked up to a 49″ 4k tv *just* for Flight Simulator (P3D v4).

            Far from normal.

            Whenever 4K Blu-ray encryption is conquered and I can back up all my 4K discs to my media server, then I may look at whatever comes next from the Threadripper family. 16 cores is not quite enough to outdo the AVX-512 performance of Intel’s 10 cores, but 24-32 cores? I’ll be there.

          • UberGerbil
          • 1 year ago

          I still have a Byte magazine from 1989 with a review of the first consumer 386 (Compaq Deskpro running at 25MHz). The review talks about using it as a server and then ends with the question “But does any individual user need this much power?”

    • ronch
    • 1 year ago

    Just saw Anandtech’s review. The gaming performance of these new Ryzen chips is unbelievable. Who would’ve thought AMD would be topping the graphs this soon with Ryzen? I’m sold, man. Ryzen+ is the only CPU worth buying.

      • chuckula
      • 1 year ago

      [quote<]The gaming performance of these new Ryzen chips is unbelievable. [/quote<] Yes. Emphasis on [b<]un[/b<]believable in their numbers. And anandtech just owned up to their numbers: [url<]https://www.anandtech.com/show/12678/a-timely-discovery-examining-amd-2nd-gen-ryzen-results[/url<]

      • blastdoor
      • 1 year ago

      I know everyone always says that people shouldn’t wait for new products and should buy what they need when they need it. That’s generally good advice.

      But… this might be an exception.

      It seems reasonable to imagine that there are at least two more shoes to drop here, and that they will drop relatively quickly. First is Intel’s response to Ryzen+, which could be 8 core Coffee Lake plus price cuts.

      Then there will be AMD’s response — 2800X, possibly combined with price cuts.

      This could play out pretty quickly — maybe just 2-3 months. So it might be worth waiting…

        • ronch
        • 1 year ago

        Oh, I can definitely wait. Being a guy who just surfs the internet, downloads YouTube videos, and plays Thief 1-4, Space Quest, Doom 1-2, etc., I find my FX much more than adequate. I can definitely wait for Zen 2 or 3, unless my FX suddenly goes out.

          • FakeAlGore
          • 1 year ago

          No Doom (2016)? I rated it as the finest single-player FPS ever made with no flaws to speak of in my internal review notes. It’s seriously good and a great follow-up to Doom 2 (was there a third one? I don’t remember. :))

            • ronch
            • 1 year ago

            I’ll check it out one of these days. Thanks for bringing it up. 🙂 And yes, there is a third Doom game: it came out in 2004 and was the first Doom game to feature ‘real’ 3D graphics (the first two were DOS titles).

      • smilingcrow
      • 1 year ago

      For DAW usage Intel has better performance and value by a large margin.

      • Waco
      • 1 year ago

      Unbelievable in that they test with parameters that don’t make sense in the real world.

        • blastdoor
        • 1 year ago

        non-substantive thought… it’s kind of ironic to reference the “real world” when talking about playing games.

          • Freon
          • 1 year ago

          A game is itself an end goal and use case, so it’s as valid as a Cinebench render, both being more “real world” than something like a Dhrystone or memory-bandwidth test. They’re actual applications people (potentially) use. Whether that use is entertainment or productivity doesn’t make it more or less real-world.

    • Shouefref
    • 1 year ago

    I’m actually more interested in the Ryzen 7 2700, the vanilla version, instead of the X version.

      • UberGerbil
      • 1 year ago

      I suspect that is a common opinion around here. As Jeff notes on the review page:
      [quote<]We expect most overclockers will be most interested in the $299 Ryzen 7 2700, anyway, and AMD has indicated that we'll have an opportunity to put that chip through its paces. Stay tuned.[/quote<]
      You could just delete “overclockers” and it would still be true. Tuned we shall stay.

    • ptsant
    • 1 year ago

    When are the power results coming? I noticed in Anandtech that 8700K uses more power than the 2700X at full load. This would mean a pretty good perf/W, especially for any upcoming server parts, where perf/W matters.

      • Jeff Kampman
      • 1 year ago

      [url<]https://twitter.com/jkampman_tr/status/986972021295902720[/url<] i7-8700K has a lower estimated task energy but is also slower, so it kind of depends on what your aims are.

      • smilingcrow
      • 1 year ago

      That will depend on the task and Ryzen 2 is more power hungry in this review:
      [url<]https://www.techpowerup.com/reviews/AMD/Ryzen_7_2700X/18.html[/url<]

        • Convert
        • 1 year ago

        I have to wonder if Anand’s numbers are in error.

        Techpowerup’s numbers seem more like where I’d expect AMD to be.

          • smilingcrow
          • 1 year ago

          You really need to look at a number of reviews to confirm the data; quoting just one review is not enough for me, and I haven’t looked at more than two.
          As a DAW user who demands a silent PC, I see much better performance from Intel, so it seems highly unlikely they will lose in the performance-per-watt stakes, which is what defines the limit on how much I can squeeze out of a system silently.

          Added:
          Here’s the 3rd I’ve seen and it favours Intel by a large margin:
          [url<]https://hothardware.com/reviews/amd-2nd-generation-ryzen-processors-and-x470-chipset-review?page=8[/url<]

    • Yan
    • 1 year ago

    To me, the charts on the last page show that the 2600X is worth it, but that the 2700X isn’t worth the additional cost.

      • UberGerbil
      • 1 year ago

      Which makes the non-X vanilla 2700 super intriguing. Staying tuned for those results.

      • willmore
      • 1 year ago

      Then you’re reading them wrong.

      What you want to look for is a chip along the same slope but higher in performance, because higher performance is generally incrementally more expensive. If a chip is only linearly more expensive relative to its performance, it’s a good deal. If it’s below the line and higher in performance, it’s an absolute win. The AMD chips all fall into one of these categories.
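
      One way to formalize that rule of thumb (editorial notation, not anything from the review): with a baseline chip offering performance f1 at price p1, a pricier chip (p2, f2) sits “along the same slope” when its marginal performance per marginal dollar at least matches the baseline’s overall ratio,

      \[
      \frac{f_2 - f_1}{p_2 - p_1} \ge \frac{f_1}{p_1},
      \]

      and it is an absolute win when f2/p2 > f1/p1, i.e. when it lands on the better side of the price-performance line.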

    • Kretschmer
    • 1 year ago

    [quote<]It's telling that Crysis 3 and Grand Theft Auto V remain useful CPU-bound gaming benchmarks several years after their PC debuts.[/quote<]
    If these games are outdated and extreme outliers, I would argue that they are not useful. It's better to report that the latest CPUs are not noticeably CPU-bound in current games than to dredge up old games to artificially create distinctions.

    It feels weird to recommend AMD chips for gaming. We haven't been faced with this possibility since... 2010? After years of people trying to pass off late Phenom II, Bulldozer, and Ryzen 1XXX parts as better in hypothetical multithreaded games, there is real parity! My hat is off to AMD.

      • UberGerbil
      • 1 year ago

      Old games provide context, since they’re likely to have been played on readers’ recent hardware. Also GTA V seems eternally popular, so I’m not sure it’s even an “old” game in the sense that it is no longer being played. If the game is still widely played, and it stresses the hardware, it belongs in the benchmarks.

        • Zizy
        • 1 year ago

        Gaming benchmarks have to tell 2 things when it comes to the CPUs: how well CPUs play current games, and how well will they play upcoming games. If a CPU is good enough for all current games, then only differences in the freshest games matter to judge what will age better – those engines, and their upgrades, will be used for the newer games.

        • derFunkenstein
        • 1 year ago

        GTA Online is apparently still pretty popular, too.

        • Kretschmer
        • 1 year ago

        Thanks for elaborating. I didn’t realize that GTA V is still wildly popular five years later!

      • Anonymous Coward
      • 1 year ago

      The games might be “outdated,” but it’s worth something to know which CPU is most effective when pushed harder than “modern” games push. It could be that a lot of these 6- and 8-core chips will see more years of service than the quads that preceded them.

        • Kretschmer
        • 1 year ago

        This makes sense, but I’d worry about including old games that were really poorly optimized or otherwise quirky in a way that future games would avoid (e.g. making a dumb decision in the physics engine that causes every calculation to be ten times as taxing as other similar engines).

        Still, the TR staff are more informed about these sorts of choices than I am, and I’ll put more trust in them in the future before posting my gut reactions.

      • techguy
      • 1 year ago

      Yeah, reviewers should totally not use Grand Theft Auto V as a benchmark any more.

      GTA5 is the 3rd-best selling game of all time behind only Tetris and Minecraft, 2 games which are pointless to benchmark (unless you’re playing on integrated graphics).

      GTA5 still has a very active playerbase as well, being the latest entry in a very popular franchise with an online mode, to boot.

      Look, GTA5 is a world simulator, with AI vehicles, people, and a physics engine to govern their interactions. The game engine asks a lot of the host processor, even though it’s almost 5 years old now. As such, it is a valid CPU performance benchmark.

      • Srsly_Bro
      • 1 year ago

      Please stop posting.

        • RAGEPRO
        • 1 year ago

        I get the sentiment—you strongly disagree with his post and feel it is [url=https://en.wikipedia.org/wiki/Not_even_wrong<]not even wrong[/url<]—but this really isn't a productive post. When you see a comment you disagree with, take it as a teachable moment, because this is a public forum and everyone benefits. [super<]That includes the site, because comment flame wars get people coming back, which gets TR pageviews.[/super<]

          • chuckula
          • 1 year ago

          [quote<]That includes the site, because comment flame wars get people coming back, which gets TR pageviews.[/quote<] YOU'RE WELCOME

        • Kretschmer
        • 1 year ago

        Does not compute.

    • ronch
    • 1 year ago

    At $330 I think the 2700X is a no-brainer. Comes with a stock cooler too. If you thought the 1700X was amazing value that helped change the CPU landscape last year, 2700X is even better.

    Speaking of vulnerabilities plaguing the 8700K… it seems to me a certain security company quietly faded away. Good riddance.

    • Mr Bill
    • 1 year ago

    I like the dynamically updated review concept. It’s fun to go back and see what’s been added.

      • blastdoor
      • 1 year ago

      I very strongly agree — that is a fantastic idea.

      edit —

      not only a great idea, but also symbolically appropriate given what’s being reviewed here. AMD rushed Ryzen out the door, and Ryzen+ is the update.

      • UberGerbil
      • 1 year ago

      I like it in theory, but in practice I kind of want a changelog.

        • chuckula
        • 1 year ago

        I’ve always wanted to run git bisect on a TR review.

          • Mr Bill
          • 1 year ago

          I wish there was such a thing implicit in Word or Excel.

    • Srsly_Bro
    • 1 year ago

    Did I miss the gaming section?

      • NTMBK
      • 1 year ago

      No, but apparently you missed the first paragraph:

      [quote<]Editor's note: This article will be continuously updated with new data and commentary as the day goes on, [b<]including gaming performance data[/b<]. Pardon our dust as we finish digesting our extensive benchmark data of these CPUs, and thank you for your patience. [/quote<]

    • emorgoch
    • 1 year ago

    One thing I haven’t been able to find anywhere yet: Do these processors contain any Spectre/Meltdown fixes, or will those not be available until Zen2 hits?

      • jts888
      • 1 year ago

      Meltdown is an Intel-specific design flaw (it allows a user program’s speculative execution to wander into kernel-privileged memory).
      Spectre only affects execution within a single program instance/protection level (and consequently is only even theoretically an issue for a subclass of applications, like browsers, that do JIT and try to manage multiple privacy zones within one process), but it’s a much broader attack.

      It will likely be several years before Spectre and Spectre-like attacks are “solved” in hardware, and even then there will still be tradeoffs chosen about what level of obstruction is good enough and what forsaken optimizations are worth being lost. Designing better internal sandboxing for browsers etc. is frankly the more realistic approach.
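
      For the curious, the pattern at the heart of Spectre variant 1 is small enough to show. This is only the canonical gadget shape from the Kocher et al. disclosure, not a working exploit (the cache-probing half is omitted):

          #include <stddef.h>
          #include <stdint.h>

          /* Bounds-check-bypass gadget (Spectre variant 1). If the branch
             predictor has been trained to expect x < array1_size, the core
             may speculatively execute the body with an out-of-bounds x; the
             secret-dependent load from array2 then leaves a cache footprint
             that a flush+reload probe could recover. */
          uint8_t array1[16];
          uint8_t array2[256 * 4096];
          size_t  array1_size = 16;

          uint8_t victim_function(size_t x) {
              if (x < array1_size)                  /* mispredicted branch */
                  return array2[array1[x] * 4096];  /* speculative, leaky load */
              return 0;
          }

          int main(void) {
              return victim_function(0);
          }

      Sandboxes and JITs mitigate this pattern today by masking indices or inserting speculation barriers ahead of such loads, which is the “better internal sandboxing” mentioned above.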

      • Mr Bill
      • 1 year ago

      [quote<]All test systems were updated with the latest firmware and Windows updates before we began collecting data, including patches for the Spectre and Meltdown vulnerabilities where applicable. As a result, test data from this review should not be compared with results collected in past TR reviews. Similarly, all applications used in the course of data collection were the most current versions available as of press time and cannot be used to cross-compare with older data.[/quote<]

        • BurntMyBacon
        • 1 year ago

        I believe the OP was asking about hardware mitigations. They comment more than once in the article about how the current software/firmware mitigations affected (or didn’t affect) the benchmarks.

          • Mr Bill
          • 1 year ago

          Agreed. Although I’m a little hazy about whether some patches actually patch the microcode in the CPU permanently. But then, that would just be a patch, not necessarily a mitigation that both fixes the flaw and improves performance.

            • BurntMyBacon
            • 1 year ago

            It is my understanding that the microcode updates are uploaded at boot by the BIOS/UEFI code on updated motherboards or by the OS during initialization for supported operating systems and processors. Many Linux (and presumably Unix) distributions can load the updated microcode. I think even Windows 10 can for Skylake processors, though I may be off on the details there. In any case, these are not permanent updates and need to be loaded every boot.
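
            As a quick way to check which revision is actually loaded on a given machine, here is a small sketch (editorial, Linux-only; the kernel reports the active microcode revision in /proc/cpuinfo):

                #include <stdio.h>
                #include <string.h>

                /* Print the microcode revision the kernel reports; the field
                   repeats once per logical CPU, so stop at the first hit. */
                int main(void) {
                    FILE *f = fopen("/proc/cpuinfo", "r");
                    if (!f) { perror("fopen /proc/cpuinfo"); return 1; }
                    char line[256];
                    while (fgets(line, sizeof line, f)) {
                        if (strncmp(line, "microcode", 9) == 0) {
                            fputs(line, stdout);  /* e.g. "microcode : 0x8001129" */
                            break;
                        }
                    }
                    fclose(f);
                    return 0;
                }

            Running it before and after a BIOS update (or after installing a distribution’s microcode package) shows whether a new revision actually got loaded at boot.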

      • BurntMyBacon
      • 1 year ago

      Probably not. Last I checked, AMD doesn’t believe its chips are vulnerable to Meltdown, so no fix is incoming (or necessary). If I recall correctly, neither Intel nor AMD is planning to release hardware fixes for the first variant of Spectre, as software mitigations have been deemed more appropriate. While a fix for the second variant of Spectre is coming, the fact that neither AMD nor Intel has announced hardware fixes for it, and the fact that the architecture (and apparently transistor count) of Zen+ is unchanged from Zen, suggest that you’ll need to wait for Zen 2 to get AMD’s hardware fix for the second variant of Spectre.

    • Chrispy_
    • 1 year ago

    Never mind, I read the italic note before the start of the review this time.

    • Thbbft
    • 1 year ago

    It will be fascinating to watch the marketing gymnastics Intel engages in if it is still on 14 nm when the 7-nm Ryzen 3000 series is released, which at this point appears more probable than not.

      • techguy
      • 1 year ago

      Cannon Lake is 10 nm and out in a few months. Ryzen 3000 isn’t due out until next year.

        • blastdoor
        • 1 year ago

        No, 10 nm from Intel is due out last year. So Intel wins even more!

          • techguy
          • 1 year ago

          10nm is already sampling. I’m not talking about hypothetical due dates, but actual product delivery dates.

          But yeah, sarcasm is cool!

        • jts888
        • 1 year ago

        There is a moderate-to-strong chance that desktop Cannon Lake will beat Zen 2 to market, but any Cannon Lake part releasing this year will be a 2c mobile SKU.

        Intel really shot themselves in the foot by refusing to do any sort of half-node jump, when they were already admitting by last year at the latest that first-gen 10nm parts wouldn’t clock as high as their most recent 14nm parts.

          • chuckula
          • 1 year ago

          Oh I can guarantee that desktop Cannon Lake won’t beat Zen 2 to market!

          You might even say it’s… [b<]confirmed[/b<].

          • blastdoor
          • 1 year ago

          I wonder if the foot-shooting happened 3-4 years ago when Intel was making plans for future R&D and capital investment under the assumption that their only competition would always be their own installed base of n-1 generation CPUs. Under that assumption, why depress the earnings report by working hard on the next node when you can keep milking 14nm?

          It’s pretty understandable, really. It was a long shot that AMD would be able to both (1) stay in business and (2) design a credible CPU core. But the idea that GloFo could catch Intel in fab process??? That’s still hard to believe.

            • Voldenuit
            • 1 year ago

            Then Intel really screwed up the math, because for years now, their competition has been their own n-4 generation.

            • tipoo
            • 1 year ago

            The fab process thing is what reels my mind too. Earth’s gravity is 9.807 m/s², c is 299,792,458 m/s, and Intel’s fabrication process is (at minimum) 18 months ahead of the world’s. That’s been a truth for most of my formative years.

            • Anonymous Coward
            • 1 year ago

            Is GloFo really caught up?

            • chuckula
            • 1 year ago

            Caught up?

            They passed Intel years ago!

            • Anonymous Coward
            • 1 year ago

            They had to work hard to make a dozer fly!!

            • blastdoor
            • 1 year ago

            Not yet, because they decided to skip what they might have called 10 nm.

            Arguably TSMC is currently caught up because they didn’t skip 10 nm. Their 10-nm density appears to be about equal to (maybe a bit better than?) Intel’s 14 nm.

            But GloFo might be caught up if next year they are shipping their 7 nm while Intel is shipping its 10 nm.

            • Cooe
            • 1 year ago

            By next year, yeah. GloFo’s 7nm with standard litho is pretty much dead-on for Intel’s 10nm. We’ll finally have fab parity for, like, the first time… ever, lol.
            Thing is, GloFo’s 7nm with EUV litho is outright superior to Intel’s 10nm, so if GloFo can stick the landing on its EUV rollout for late next year/early 2020, things are gonna get REEEEAAAAALLLY interesting with Ryzen 3/Zen 2+.

            • techguy
            • 1 year ago

            I’ll believe that when I see it.

            Intel spends more on R&D than Glofo could ever dream of doing.

            • blastdoor
            • 1 year ago

            GloFo is owned by ATIC, which is a government-owned company. And that particular government is swimming in petro dollars that they are trying to invest in other industries.

            So…. what’s your source for GloFo’s R&D and capital expenditures?

            • techguy
            • 1 year ago

            Common sense. You think ATIC is pouring tens of billions into Glofo each year to what – lose money? Because Glofo’s revenue would not recoup such an expenditure.

            • blastdoor
            • 1 year ago

            You’re implicitly assessing this situation using the model of a publicly owned/traded firm that has to answer to shareholders every quarter. The conclusions that would naturally follow from applying that model is what you imagine constitutes “common sense.” But in this case, it’s the wrong model, and so your “common sense” conclusions are not as obviously correct as you imagine them to be.

            The oil kingdoms in the gulf recognize that they need to shift their economies away from oil. This is not about profits in the next quarter, next year, or even the next ten years. They are looking ahead to the next generation. Their investments can only be understood in that context.

            • techguy
            • 1 year ago

            Nonsense. No one likes to piss away *all* their money, even rich Arabs. There are limits to the capabilities of any individual or organization. After all, we’re only human.

            Bringing a foundry of Glofo’s prowess up to the level of surpassing the best fab company in the history of semiconductors isn’t a small task, despite what marketing departments would have you believe.

            • blastdoor
            • 1 year ago

            1. The amount of money needed to invest in R&D and capital investment in order to make GloFo competitive is far less than “all their money.” In fact, it’s a relatively small fraction of “all their money.” That is not to say that it’s trivial, just that it’s totally feasible for them to invest the money needed.

            2. Obviously there are limits to the capabilities of any individual or organization, and obviously we are all human. Furthermore, the sky is blue except for when it’s grey or black. The moon is not made of cheese. The Earth is round. Who would argue with that? How does that statement fit into a coherent argument about anything? Stating things that are true but irrelevant isn’t making an argument.

            3. I don’t disagree that bringing GloFo up to the level of equalling or surpassing Intel is no small task. I don’t think anyone has claimed that it’s a small task. Another irrelevant point.

            You’re not making an argument here. You are asserting what you believe to be true based on the unconscious application of an irrelevant model for how firms behave.

            There is a long history of governments spending huge amounts of money (frankly, far more than what we’re talking about here) to subsidize the development of a new industry. The Japanese did it with great success across a range of industries — auto, steel, TVs, radios, etc. In 1970 one might have imagined it utterly inconceivable that Japanese automakers could ever surpass American automakers. And if the Japanese had made investments in R&D and capital based on the goal of maximizing quarterly profits, then it would have been inconceivable. But that wasn’t the game they were playing.

            GloFo has existed for 9 years. With enough money, that is enough time for them to catch up.

            • techguy
            • 1 year ago

            Unless and until GloFo does as you say, I will continue to assert that Intel will remain the top foundry in the industry; there is currently nothing on the horizon to make me think otherwise. Now, if 10 years from now the U.S. federal government breaks Intel’s fabs off from the rest of the company, and maybe 10 years after that the fabs have made a number of missteps as a separate entity forced into a new business model, then I would *not* be surprised were your speculation to come true.

            Intel is a manufacturing company first and a chip company second. They will not let the crown go easily.

            • chuckula
            • 1 year ago

            Yeah, here’s the thing: Intel has shown off fully operational 10nm chips on multiple occasions going all the way [url=https://www.pcworld.com/article/3154884/hardware/intel-shows-off-10nm-cannon-lake-chip-announces-project-alloy-vr-product-plans.html<]back to the beginning of 2017[/url<]. And I don't mean an SRAM proof of concept chip, I mean Cannon Lake running in a real notebook. But they still haven't fully commercially launched yet. INTEL SUCKS! YEAH! Meanwhile, GloFo has never shown anything running on 7nm other than slides. Who do you think is really ahead at this point?

            • blastdoor
            • 1 year ago

            When it comes to GloFo, skepticism is certainly warranted. Perhaps it will only be TSMC that starts to pull ahead of Intel this year when the next iPhone comes out using an ~5 billion transistor 7nm SOC. Intel fans can take comfort in that thought.

            • techguy
            • 1 year ago

            I’m all for a 5 billion transistor part in the ~100mm^2 range, regardless of which fab it comes from.

            As for likelihood of any foundry pulling ahead of Intel, if anyone could do it I’d put my money on TSMC. All dat Apple and Nvidia money…

            • blastdoor
            • 1 year ago

            Samsung wouldn’t be a bad bet, either — all that Samsung money.

            • techguy
            • 1 year ago

            When Apple launched the iPhone 6S, it dual-sourced the A9 SoC from TSMC and Samsung. The TSMC chips were more desirable due to their power characteristics (lower power = longer battery life), despite being larger than the dies coming out of Samsung’s fab (104.5mm² vs. 96mm²).

            Samsung is kind of a jack of all trades, master of none. TSMC, on the other hand, does nothing but make chips, and I think their track record has been pretty good for quite a while (20nm aside).

            I would put my money on TSMC in this case.

            • blastdoor
            • 1 year ago

            I agree, TSMC has a better shot.

            But Samsung still has a good shot.

            Intel is just too focused on that quarterly P&L statement. They’re not playing the long game that their competitors are playing.

            • derFunkenstein
            • 1 year ago

            You can’t win this argument. Your reasoning is exactly why I’m skeptical of ARM-based Macs, but that got me nowhere. People WANT to believe. Hell, even I *want* to believe, but I can’t. Not until I see it.

            • blastdoor
            • 1 year ago

            Of course he can win the argument. All he needs is for events to play out as he predicts. That has certainly happened before. I predicted Apple would come out with Macs based on their own ARM SOC by the end of 2017. That didn’t happen. He won, I lost.

          • techguy
          • 1 year ago

          True. We’ll definitely see the ULV SKUs first. Intel always runs the smallest dies through the fab on any new process node.

        • ImSpartacus
        • 1 year ago

        Cannon Lake almost certainly isn’t showing up on the desktop.

        It’s Coffee Lake, then Ice Lake, for desktop.

      • stefem
      • 1 year ago

      They could learn from the competition and call it 12 nm.

    • chuckula
    • 1 year ago

    Intel is yet again on thin ice and will inevitably end up in the lake at this rate.

      • ronch
      • 1 year ago

      Even worse, thin ice over a hot lake of coffee.

    • srg86
    • 1 year ago

    From a competitive perspective, this is why Intel really needs to release an 8-core Coffee Lake (or later CPU); I think the two extra cores are making their mark.

    I buy Intel because of my better reliability experience with them, but the performance of Ryzen+ cannot be ignored.

      • blastdoor
      • 1 year ago

      I hope Intel does that because I’d love to see what kind of a punch AMD then lands with a 2800X

        • chuckula
        • 1 year ago

        The 2800X is actually Apple’s miracle ARM chip… CONFIRMED

    • techguy
    • 1 year ago

    Another good showing from AMD. @ $329 the 2700x is a very compelling chip for just about any workload outside of AVX512.

      • Waco
      • 1 year ago

      Agreed. If you want raw throughput you’ve got a lot of options with TR and Epyc as well. I wonder what the release dates for “big” Zen+ are?

        • techguy
        • 1 year ago

        If I had my way our next round of ESX hosts this Summer would be Epyc-powered. Alas, it is not to be. Maybe next time!

    • EzioAs
    • 1 year ago

    The review at AnandTech shows these CPUs beating Intel in all games tested in more CPU-bound scenarios (1080p). I’ll reserve full judgment on these CPUs until TR has tested them some more, but this is the first time I’ve wanted an AMD system (sans GPU).

      • Hinton
      • 1 year ago

      That is true. And of course better than if it hadn’t been.

      It’s meaningless, though, unless that’s what makes gamers choose it (I own AMD stock).

      Non-gamers are probably more interested in non-artificial workloads where the CPU actually matters.

      • thx1138r
      • 1 year ago

      The AnandTech results look valid, but there’s a reason why the Ryzens outrun Coffee Lake by such a margin. In their test setup they say:

      [quote<]As per our processor testing policy, we take a premium category motherboard suitable for the socket, and equip the system with a suitable amount of memory running at the manufacturer's maximum supported frequency. This is also typically run at JEDEC subtimings where possible. It is noted that some users are not keen on this policy, stating that sometimes the maximum supported frequency is quite low, or faster memory is available at a similar price, or that the JEDEC speeds can be prohibitive for performance[/quote<]

      So they end up with 2666MHz RAM on the 8700K and 2933MHz RAM on the 2700X.

        • nanoflower
        • 1 year ago

        Yeah, that’s problematic for me. I want to see the two platforms compared on as equal a basis as possible. If they want a separate set of benchmarks where they push the platforms, then OC the hell out of everything and show just what those test platforms can do. That’s fine, but what I most want to know is how things compare when all things are equal, so giving one platform faster memory is just giving that platform an advantage.

          • jihadjoe
          • 1 year ago

          Their reason for doing so is hilariously bad. Ostensibly they only use platform specified speeds because people who build their own PCs are dumb and will end up bamboozled if they try to go into the BIOS to apply XMP/AMP settings.

            • thx1138r
            • 1 year ago

            Actually, I think AnandTech has a point. While we may think that hand-built systems are super important, they really only constitute a tiny percentage of the PC market. Most systems are pre-built, and those do tend to end up with the kind of slow RAM that AnandTech tests with. Dell, for example, can be found pairing the 8700K with 2666MHz DDR4:
            [url<]http://www.dell.com/en-us/shop/dell-desktop-computers/xps-tower-special-edition/spd/xps-8930-se-desktop[/url<]

            Sad as this pairing is, it is the reality for many systems. My problem with Anand is that it takes quite a bit of digging and some research to find out why their test results are so different. I would much prefer if they were more upfront with their configuration details.

            • NoOne ButMe
            • 1 year ago

            >My problem with Anand is that it takes quite a bit of digging
            >and some research to find out why their test results are so
            >different. I would much prefer if they were more upfront
            >with their configuration details.

            You mean like where they clearly show it in the page they have labeled “Benchmarking Setup and Power Analysis”?

            BTW, they analyze the power draw on the page linked. Sorry they hid the fact.

            [url<]https://www.anandtech.com/show/12625/amd-second-generation-ryzen-7-2700x-2700-ryzen-5-2600x-2600/8[/url<] (How much more clear do you want?)

            • Zizy
            • 1 year ago

            Their configuration details aren’t anything strange, and I don’t think they could have done a clearer job explaining all that. They just did three things differently than most of the rest:

            1.) They ran memory at the supported frequencies. Worse results for both; it’s hard to say which CPU is at the greater disadvantage.
            2.) They didn’t let the mobo OC the CPU for Intel; this can be 10%, 4.3 vs. 4.7GHz. It clearly shows in the Cinebench numbers: 1400 vs. 1500 scores.
            3.) They had all patches enabled, including the BIOS patch for Spectre v2. Perhaps another few %.

            Lastly, they didn’t OC either chip; the great Intel gaming wins come after OCing to 5GHz. But all of this should leave the i7-8700K equal to the R7 2700X in gaming performance, as in Shadow of Mordor. Having AMD ahead is still very strange.

            • MileageMayVary
            • 1 year ago

            For #2, where do they say that they turn off boost/turbo? I am unable to find it 🙁

            • NoOne ButMe
            • 1 year ago

            Most motherboards turned MCE off by default after launch.

            Ideally, benchmarking would cover both on and off, but when time-limited (as Ian at AnandTech most certainly was), leaving it off is certainly the better of the two options.

            Much like how, ideally, you’d test RAM both at the maximum speed you can get it to run and at the maximum officially supported speed.

      • jihadjoe
      • 1 year ago

      I’ve read 10 reviews and AT is the only site where Ryzen beats Intel in gaming. I’d say TR’s numbers are good.

        • Cooe
        • 1 year ago

        TechRadar got the same results as AnandTech, so take that how you will.
        (Referring to the fact that TechRadar and AnandTech both show Intel losing MASSIVE performance to the Meltdown & Spectre patches in a way no one else has. Differences in game selection make the game results different, ofc, but compare the synthetics: the 8700K can’t even break 170 cb in single-threaded Cinebench. But hey, thanks for all the downvotes!)

          • techguy
          • 1 year ago

          What are you talking about?

          Tech Radar tested a whole TWO games and said the following:

          “As for gaming, the Ryzen 7 2700X doesn’t beat the Core i7-8700K, but it has eroded Intel’s lead by a mere one to two frames per second.”

            • Cooe
            • 1 year ago

            Difference in game selection. Check the synthetics: the 8700K is getting just 166 cb in Cinebench R15 single-threaded. TechRadar and AnandTech both have Intel losing MASSIVE performance from the Meltdown & Spectre patches that no one else sees. THAT’S WHAT I MEANT.

            • techguy
            • 1 year ago

            Clearly indicating something wrong with their tests. The 8700K scores in the upper 190s in Cinebench R15 single-thread even post-Meltdown patches.

            Your “suggestion” to Jeff Kampman implies that he either:
            a) is too lazy to update his test platform
            b) is too stupid to know this needs to be done

            Look, your entire argument here is based on the pre-supposition that the entire hardware reviewing “industry” is wrong and the aberrant results of a SINGLE REVIEWER are correct.

            Let me know how that philosophy works out for you in life.

            • Cooe
            • 1 year ago

            I’m not saying either result is correct, asshole. Just that they are different (TechRadar & AnandTech vs. everyone else). No one knows what the hell is going on right now. That’s the point. But you can’t just say they have it right and the others wrong, or vice versa. We don’t know which is right or wrong yet. Too little info. And it would be totally natural to assume the Ryzen Balanced power plan would work for Ryzen 2, don’t you think? So why go through the effort of disabling it? Or heck, even remember it was active to begin with? I’m not saying anything about Jeff being a good or bad reviewer or anything of the sort. No one realized it was badly affecting gaming performance till just recently.

            Seriously though, you are the most insufferable person I’ve met on a forum all year, so *thumbs up*.

            • Jeff Kampman
            • 1 year ago

            Cool it with the language please, folks.

        • shank15217
        • 1 year ago

        TR didn’t test games in this review, so WTF are you talking about? The TR non-gaming tests that overlap with AT match up fine. Also, an outlier doesn’t mean it’s wrong; it’s easy to independently verify their results with the same hardware. That’s how you build consensus.

      • Voldenuit
      • 1 year ago

      [quote<]The review at AnandTech shows these CPUs beating Intel in all games tested in more-CPU bound scenarios (1080p).[/quote<]
      There's something fishy with the Anand gaming benchmarks. None of the other review sites are showing this pattern, and it's not consistent with known Ryzen performance vs. Intel. Cf:
      [url<]https://www.pcper.com/reviews/Processors/Ryzen-7-2700X-and-Ryzen-5-2600X-Review-Zen-Matures/Gaming-Performance[/url<]
      [url<]https://www.techpowerup.com/reviews/AMD/Ryzen_5_2600X/13.html[/url<]
      [url<]https://www.tweaktown.com/reviews/8602/amd-ryzen-7-2700x-5-2600x-review/index9.html[/url<]
      I almost want to say there may be a configuration error in the Anand test setup; perhaps they have turbo boost disabled, or memory running at a 2T command rate, or misconfigured power settings.

      EDIT: Ryan Smith at AT says they are taking a second look at their results.
      [quote="Ryan Smith"<]We're looking into it right now. Some of these results weren't in until very recently, so we're going back and doing some additional validation and logging to see if we can get to the bottom of this.[/quote<]

      • ronch
      • 1 year ago

      I reckon games respond well to improved cache latency. Makes me want to go out and get a 2700X even more.

      • Kretschmer
      • 1 year ago

      AnandTech is an outlier, and they have a heavy emphasis on AMD advertising. I’ve long since dropped their benchmarks, build guides, and reviews as suspect.

        • Intel999
        • 1 year ago

        Since when? Ian Cutress at AnandTech has been considered an Intel-biased hack going back ten years.

        Apparently, unlike most reviewers, Ian didn’t get the email from Intel saying “Leave out our firmware update for Spectre Variant 2 until after you review the Ryzen CPUs!”.

        The guy at TechRadar missed that email too.

          • thecoldanddarkone
          • 1 year ago

          Yes, and all those other reviewers didn’t have Spectre or Meltdown patches installed? Let’s be clear: some did and some didn’t…

      • DrDominodog51
      • 1 year ago

      Look at AnandTech’s RAM settings.

        • EzioAs
        • 1 year ago

        Would that minuscule difference in RAM settings affect the game benchmarks by a lot? I don’t think so, but even if we were to ignore AnandTech, the review at TechPowerUp with similar RAM settings shows these CPUs performing more admirably in games when compared to Coffee Lake and the Ryzen 1xxx series. I think that’s really good.

          • NoOne ButMe
          • 1 year ago

          It depends. But RAM scaling can really bring extra performance.

          This is a bigger difference, both in percentage and in absolute numbers, but see the 2500K:
          [url<]https://www.eurogamer.net/articles/digitalfoundry-2016-is-it-finally-time-to-upgrade-your-core-i5-2500k[/url<]
          Going from 1600 to 2133 MHz RAM brought about 10-15% at stock CPU clocks in 4 of the 5 games used. At a 4.6 GHz overclock, the faster RAM brought a similar 10-15% in 4 of 5 games again. Doubtful 2666 vs. 2933 is enough to warrant AnandTech's gap. But it could be.
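
          To put rough numbers on 2666 vs. 2933, here's a back-of-the-envelope sketch (plain Python; it assumes only the standard 64-bit-per-channel DDR4 bus and ideal peak transfer rates, so real sustained bandwidth will be lower):

          [code<]
          # Rough theoretical peak bandwidth for dual-channel DDR4 (ideal; ignores timings)
          def peak_bandwidth_gbs(mt_per_s, channels=2, bus_bytes=8):
              # Each DDR4 channel is 64 bits (8 bytes) wide; MT/s * bytes/transfer = MB/s
              return mt_per_s * bus_bytes * channels / 1000  # GB/s

          for speed in (2666, 2933):
              print(f"DDR4-{speed}: {peak_bandwidth_gbs(speed):.1f} GB/s")

          # DDR4-2666: 42.7 GB/s
          # DDR4-2933: 46.9 GB/s  -> about a 10% gap, before any latency differences
          [/code<]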

            • shank15217
            • 1 year ago

            That doesn’t mean AT is wrong though.

      • ronch
      • 1 year ago

      And that’s really how game testing should be done: isolate CPU performance. Testing games where the GPU becomes the bottleneck is like comparing a Corolla to a Ferrari on a crowded street and saying both perform the same in the real world.

        • stefem
        • 1 year ago

        Pffft! Since the introduction of the first Ryzen, everyone knows that CPUs should be benchmarked at 4K to get a representative picture of CPU performance.

    • OptimumSlinky
    • 1 year ago

    I know it’s still being updated, but I’d love to see some info on how X470 compares with X370 and Gen1 Ryzens. I.e., if you run a Ryzen 2600X with an X370 mobo, are features like XFR2 unavailable? So far all of the reviews have been R2000+X470 against R1000+X370.

      • turtlepwr281
      • 1 year ago

      I second this. I’d like to see how these 2xxx parts work in B350 boards. Does the boost algorithm work correctly?

      mATX user here 🙁

      • MileageMayVary
      • 1 year ago

      Exactly. I am seeing a bunch of tests with the old chips in the new X470 but none with the new chips in the older X370 or, even more relevant to myself and people I recommend things to, the B350.

      Any word on when B450 will be released?

        • OptimumSlinky
        • 1 year ago

        Seems like X470 is slightly more consistent with the lowest frame times, but the overall performance impact is minimal: [url<]https://youtu.be/YSMXbbw2B8Y[/url<]

    • thx1138r
    • 1 year ago

    I would have thought that the memory latency improvements would have helped the Ryzens more in gaming than in productivity apps, but the opposite seems to be the case. Productivity apps have made a solid jump in performance, but the FPS results seem to have improved only slightly. Granted, the 99th-percentile numbers have made a bigger improvement, but that’s not going to make any headlines.
    Still, a solid effort. If you’re not recommending a Ryzen for high-performance productivity applications, you’d need some pretty good reasons to back that up.

      • NTMBK
      • 1 year ago

      I think the productivity apps have been helped a lot by the turbo boost changes: higher clock speeds when a medium-to-high number of cores are loaded.

      The “99th percentile FPS per dollar” graph shows a nice jump between Zen and Zen+, so it looks like those latency optimizations have helped.

      • UberGerbil
      • 1 year ago

      Productivity apps (with a few exceptions) are almost never gated by the GPU. Games frequently are (even at low res and texture settings, just the IO latency can have an effect).

      • willmore
      • 1 year ago

      I think the performance limits we’re seeing here come not so much from the games themselves as from the driver.

    • sweatshopking
    • 1 year ago

    ANYONE SELLING A CPU WITHOUT AN LED COOLER DOESN’T UNDERSTAND MILLENNIALS

      • NTMBK
      • 1 year ago

      I’M NOT SELLING A CPU WITHOUT AN LED COOLER AND I STILL DON’T UNDERSTAND MILLENNIALS

        • sweatshopking
        • 1 year ago

        THAT’S YOUR PROBLEM. START SELLING A CPU WITH AN LED COOLER AND YOU’LL UNDERSTAND US.

        • UberGerbil
        • 1 year ago

        [url<]https://i.imgur.com/H7Xr8mU.jpg[/url<]

        • tipoo
        • 1 year ago

        I’M A MILLENNIAL AND I DON’T UNDERSTAND MILLENNIALS!

          • LostCat
          • 1 year ago

          I’M NOT AND I DON’T UNDERSTAND HUMANS.

      • Chrispy_
      • 1 year ago

      SHUT UP THEY WANT AVOCADO TOAST, NOT LED COOLERS

        • Redocbew
        • 1 year ago

        YOU SHUT UP THEY WANT LEDS EVEN IN THEIR SHOES

        LED SHOES ARE A THING, OLD PERSON

          • Chrispy_
          • 1 year ago

          LED SHOES ARE AN OLD THING DAMNIT I HAD LED LA GEAR SHOES IN THE ’80S

            • sweatshopking
            • 1 year ago

            No you didn’t. The ’80s haven’t happened yet. It’s only 2018. You gotta wait another 60 years.

            • helix
            • 1 year ago

            Oh, you fixed your caps lock problem, congratulations!

            • Chrispy_
            • 1 year ago

            Yeah, he won a keyboard in a TR competition and there are multiple witnesses that it has a working CAPSLOCK key.

            • sweatshopking
            • 1 year ago

            Only since I made them change the firmware! True story.

      • Kretschmer
      • 1 year ago

      THIS MILLENNIAL WANTS OFF THE LED TRAIN. I CHO-CHO-CHOOSE SANITY.

        • jensend
        • 1 year ago

        OR YOU COULD KEEP INSTALLING RGB LEDS, BUT, YOU KNOW, [b<]IRONICALLY[/b<]

          • Kretschmer
          • 1 year ago

          I CAN DO CAPS IRONICALLY OR LEDS IRONICALLY BUT REFUSE TO EMBRACE BOTH!

          AND LEDS ARE UGLIER THAN NON-FAIR-TRADE COFFEE PREPARED WITHOUT A MASTERS DEGREE IN THE HUMANITIES.

      • Hsew
      • 1 year ago

      GUYS, GUYS, GUYS, … RGB TIDEPODS!!

      • Anovoca
      • 1 year ago

      I’m technically a millennial, and I would rather you put Microbrews in my cooler.

        • K-L-Waster
        • 1 year ago

        You don’t have to be a millennial to prefer that….

      • gerryg
      • 1 year ago

      I ONLY USE LED COOLERS WITH bLiNkEnLiGhTs!

      • the
      • 1 year ago

      JUST LIKE SELLING A KEYBOARD WITHOUT A CAPSLOCK KEY.

      • anotherengineer
      • 1 year ago

      BEST CHAIN POST I’VE READ AT TR IN A VERY LONG TIME!!

        • Wirko
        • 1 year ago

        Number of upvotes totals 136 so far, which is over 9000! And that’s before I start upvoting!

    • NunoP
    • 1 year ago

    Thanks for the review.
    Can you add DAW Bench scores?
    With all these new patches for Spectre and Meltdown and the better latencies, I wonder how the new Ryzen will perform.
    Can’t wait for the promised updates in the article.

      • Jeff Kampman
      • 1 year ago

      DAW Bench results are in the piece now.

        • NunoP
        • 1 year ago

        Thanks Jeff,
        Great review.

        My conclusion is funny.
        High end:
        Intel wins. If you can afford an 8700K system, it’s the best choice.

        Mid-range:
        The R5 2600X is more expensive than the i5-8400, but it clearly beats it. I think SMT makes the difference.
        An R5 2600 probably also does better and has a similar price.

        What’s important is that the 7700K was considered a great DAW CPU 18 months ago.
        Now the i5-8400 and R5 2600X both smash it completely.

        So I do believe:
        AMD is the best value option.
        Intel is the best absolute option.

        Let me look at my budget so I make the smarter choice.
        I’ve had Intel – Intel – AMD – Intel, and now it may be time to come back to AMD again.

        • Audacity
        • 1 year ago

        I’m curious: does anyone know why the 8700K is so much faster than the 7700K? I mean, the chip architecture is similar and it has 50% more cores (6 vs 4) and yet it’s getting scores which are sometimes 3.5x better than the 7700K.

        It would make sense if the 8700K was 50% faster than the 7700K. When it’s almost always doing better than a 100% improvement over the 7700K, something else must be happening here. It’s as though the 8700K has some sort of DSP acceleration instructions, or something like that.

          • derFunkenstein
          • 1 year ago

          I wonder if L3 cache has anything to do with it. The 7700K has 8MB. The 8700K has 12MB. Between the extra cores/threads and extra cache, maybe? But you’re right, the 7700K looks almost like an error.

            • NunoP
            • 1 year ago

            Good point, Audacity.
            I guess derFunkenstein is right: to process audio very fast, a big and fast cache is very important.

            If you look at the benchmarks of the R5 2400G, it sits well behind the regular R5 1400, and besides the graphics, the big difference is the cache (the 2400G has half the L3).

            If the R7 2700X had memory latency similar to the Intel chips, I guess it would completely smoke them in this benchmark.

            • Voldenuit
            • 1 year ago

            AFAIK, the L3 caches on the intel parts are just victim caches, so I don’t think they have much positive impact on performance.

            • dragontamer5788
            • 1 year ago

            [quote<]AFAIK, the L3 caches on the intel parts are just victim caches[/quote<]
            Prepare to be confused. The L3 caches on Skylake-SP are victim caches. The L3 cache on Skylake is an inclusive cache.

            In short: the i9 parts (and some i7s) use victim caches with tons of L2 space. The 8700K is a standard inclusive-L3 design with the normal 256 KB of L2 per core.

            You can tell the difference between Skylake-SP and Skylake designs by the size of the L2. If the L2 is 1 MB per core, it’s Skylake-SP. If it’s 256 KB per core, it’s Skylake.

            • derFunkenstein
            • 1 year ago

            Either way, wouldn’t a cache hit rather than a miss still allow faster processing of data? Seems like the kind of scenario where the CPU should be able to predict what data is coming next and start fetching it, and have a lot of it in cache. Not so much about bandwidth as it is having a larger chunk of high-bandwidth cache to store more samples for processing at once (hence more DSP plugins at once)?

            • dragontamer5788
            • 1 year ago

            [quote<]Either way, wouldn't a cache hit rather than a miss still allow faster processing of data?[/quote<]
            A cache hit in L2 is way faster than a hit in L3.

            Intel Skylake-SP (i9-7900X) has 1.375 MB of L3 (victim) plus 1 MB of L2 per core. That's a total cache size of 2.375 MB per core, because a victim cache lets the two levels work together to "find the data."

            Intel Skylake (i7-8700K) has 2 MB of inclusive L3 per core. The L2 doesn't add capacity because the L3 is inclusive (it's 256 KB, if you wanted to know). That is, the L3 and L2 capacities CANNOT be added together on the 8700K. Skylake-SP has a bigger cache overall, and MORE cache at the L2 level. It's a superior design (but costs more money to make).

            With that being said: Intel's L3 cache on the 8700K works with [b<]any[/b<] core. So the bigger L3 of the 8700K (compared to the 7700K) is probably the key to why it does so well in DAW benchmarks.

            Ryzen has the opposite design: 8 MB of L3 per 4-core CCX, but the L3 caches of different CCXes CANNOT work together. So while the 8700K's 12 MB of L3 is technically smaller than Ryzen's 2x8 MB, the biggest space any one problem can work in is 8 MB on Ryzen (though the two 8 MB chunks can work on different problems at the same time). On Intel's chip, all 12 MB can work on one problem, which seems to matter in this case.
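
            A tiny sketch of that capacity arithmetic (plain Python; the per-core sizes are the ones quoted above, and "effective" here means only the headline capacity math, not real hit-rate behavior):

            [code<]
            # Victim L3: L2 and L3 hold disjoint data, so their capacities add.
            # Inclusive L3: L2 contents are duplicated in L3, so L3 alone is the ceiling.
            def effective_capacity_mb(l2_mb, l3_mb, victim):
                return l2_mb + l3_mb if victim else l3_mb

            print(effective_capacity_mb(1.0, 1.375, victim=True))   # Skylake-SP: 2.375 MB per core
            print(effective_capacity_mb(0.25, 2.0, victim=False))   # Skylake:    2.0 MB per core

            # Largest single working set the shared L3 can serve:
            print(6 * 2.0)   # 8700K: one 12 MB pool shared across all six cores
            print(8.0)       # 2700X: 8 MB per CCX, two separate pools
            [/code<]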

            • Anonymous Coward
            • 1 year ago

            Funny how AMD’s exclusive caches endlessly underperformed Intel’s inclusive caches, and then they switched places. I’m no CPU designer, but I would bet my money on inclusive caches until you reach very high core counts. I expect the exclusive-cache Intel chips perform worse for it at small core counts.

    • derFunkenstein
    • 1 year ago

    I like that many of the improvements scale better than linearly with clock speed. Based on that alone, I’ll call it a success.

    Do we know how much smaller the Zen+ die is compared to the original, if it’s smaller at all? Hard to check with soldered heat spreaders, I’m sure.

      • Jeff Kampman
      • 1 year ago

      No change in die size or transistor count.

        • derFunkenstein
        • 1 year ago

        Thanks, Jeff

        • BurntMyBacon
        • 1 year ago

        I did not expect that. I would have thought that with the same transistor count and a (marginally) smaller process (14 nm -> 12 nm), the die would have been (marginally) smaller. I suspect they needed to widen some transistors in critical paths, but there is a whole lot of non-critical path (including cache) that should be smaller. Perhaps they left the cache transistors the same size despite the move to a smaller process. It is possible that this was a necessary measure to improve cache latency. Still, coming out exactly the same size with the same transistor count on a smaller process is quite unexpected.

          • tipoo
          • 1 year ago

          This ’12nm’ process node is but a naming change to denote added efficiency to 14nm, no?

            • BurntMyBacon
            • 1 year ago

            [i<]Edit:[/i<] Actually, no. There is a clear change in density (if only marginal), as stated directly by GlobalFoundries:
            [url<]https://www.globalfoundries.com/news-events/press-releases/globalfoundries-introduces-new-12nm-finfet-technology-for-high-performance-applications[/url<]
            [quote="Global Foundries"<]GLOBALFOUNDRIES today announced plans to introduce a new 12nm Leading-Performance (12LP) FinFET semiconductor manufacturing process. The technology is expected to deliver better density and a performance boost over GF’s current-generation 14nm FinFET offering[/quote<]

            • derFunkenstein
            • 1 year ago

            GloFo has suffixes, too. For example [url=https://www.anandtech.com/show/11862/globalfoundries-weds-finfet-and-soi-in-14hp-process-tech-for-ibm-z14-cpus<]14HP[/url<] for some IBM CPUs. So I don't know what 12 means if it's not more dense or smaller features.

            • BurntMyBacon
            • 1 year ago

            The AnandTech article claims a 15% circuit-density improvement for 12LP over 14LPP. They also say the 12LP tweaks include a partial optical shrink and a slight change in manufacturing rules in the middle-of-line and back-end-of-line steps of the manufacturing process. So, not a huge density change, but a density change nonetheless.
            [url<]https://www.anandtech.com/show/12625/amd-second-generation-ryzen-7-2700x-2700-ryzen-5-2600x-2600/2[/url<]

            • derFunkenstein
            • 1 year ago

            [quote<]One interesting element is that although GF claims that there is a 15% density improvement, AMD is stating that these processors have the same die size and transistor count as the previous generation. Ultimately this seems in opposition to common sense – surely AMD would want to use smaller dies to get more chips per wafer?[/quote<]
            And this is why I asked about it. It seems like you'd want to cut costs to go with cut prices. And then we get the answer:
            [quote<]When discussing with AMD, the best way to explain it is that some of the design of the key features has not moved – they just take up less area, leaving more dark silicon between other features.[/quote<]
            Well... okay, I guess. Probably helped them save time and money on R&D so they could move on to Zen 2, but it doesn't gain them anything at all in production.

            edit: and I just realized that's what you were saying. Whoops.

            • UberGerbil
            • 1 year ago

            It doesn’t get them more dies per wafer, true, and that matters. But it might increase yields slightly, so depending on defect rate it might be a wash. More importantly (to us) slightly less transistor density should result in slightly less heat density, which might mean higher clocks for longer. For chips that are self-tuning based on realtime thermal measurements, that might be pretty significant.

            • derFunkenstein
            • 1 year ago

            A bright side to (nearly) everything.

            • BurntMyBacon
            • 1 year ago

            I didn’t think about the defect-rate issue. I should have, as I’ve designed and fabricated several chips in the past. A ~15% decrease in transistor density could correspond to a very significant increase in dark silicon area, depending on how packed the design is. For instance, a 15% decrease in transistor density for a chip that is 25% dark would result in a ~45% increase in dark silicon area, bringing it to about 36% dark silicon. This would definitely reduce the probability of [b<]critical[/b<] defects. Yields would improve more or less depending on the defect density of the process. The assumption here, of course, is that the 12nm process has a defect density no greater than that of the 14nm process the previous chips were fabricated on.
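
            If anyone wants to check that arithmetic, here’s a quick worked sketch (plain Python; the 25%-dark starting point is the hypothetical from this comment, not a measured figure for Zeppelin):

            [code<]
            # Worked example: 15% higher achievable density at a fixed die size.
            die = 100.0          # arbitrary units of area
            dark = 25.0          # hypothetical: 25% of the die starts out dark
            logic = die - dark   # 75 units occupied by transistors

            logic_shrunk = logic * 0.85      # same transistors in 15% less area
            dark_new = die - logic_shrunk    # 36.25 units are now dark

            print(dark_new / die)            # ~0.36 -> about 36% dark silicon
            print((dark_new - dark) / dark)  # ~0.45 -> a ~45% increase in dark area
            [/code<]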

            • BurntMyBacon
            • 1 year ago

            Assuming they would get the full 15% theoretical benefit, that would move them from 213 mm² to 181 mm². Realistically, it would not shrink that much. The critical paths may need extra buffer transistors or wider transistors to get the speed up. If thermal density was a problem but they didn’t want to significantly alter the design, the extra dead space afforded by the smaller transistors would be needed. I’m not sure how many extra dies they could have gotten on a 300 mm wafer when all is said and done. It may be that the cost benefit didn’t outweigh the R&D costs.

            Of course, there may be some other cost or supply benefits associated with the process if the tweaks help the fabrication process complete more quickly. However, if such benefits do exist, I wouldn’t expect them to be significant.
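
            For a rough feel for the “extra dies” question, here’s a sketch using the common first-order dies-per-wafer approximation (my assumption; it ignores scribe lines, edge exclusion, and yield, so treat the outputs as ballpark candidate counts only):

            [code<]
            import math

            # Common first-order approximation:
            # dies ~= pi*(d/2)^2 / A  -  pi*d / sqrt(2*A)
            def dies_per_wafer(die_mm2, wafer_mm=300):
                r = wafer_mm / 2
                return int(math.pi * r * r / die_mm2
                           - math.pi * wafer_mm / math.sqrt(2 * die_mm2))

            print(dies_per_wafer(213))  # ~286 candidate dies at 213 mm^2
            print(dies_per_wafer(181))  # ~340 at a hypothetical 181 mm^2 shrink
            [/code<]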

            • techguy
            • 1 year ago

            That’s not how math works.

            14nm -> 12nm is not a 15% benefit, it is 36%

            We’re talking about 2-dimensional structures here, so you have to square each process size and compare the difference to obtain the true benefit.

            Ergo: 14^2 = 196, 12^2 = 144, difference = 52, and 52/144 = .36. Multiply by 100 to get the difference in percent (did not include this step previously; I just do it automatically in my head, since you can remove the decimal point and end up with a percent).
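
            A quick check of that arithmetic (plain Python). Note the two ratios the same numbers produce: 36% is the density gain relative to the 12nm-class area, while relative to 14nm it works out to a ~27% area shrink:

            [code<]
            # Ideal linear-shrink scaling from the nominal node names.
            a14, a12 = 14**2, 12**2        # 196 vs 144 (relative areas)
            print((a14 - a12) / a12)       # 0.361 -> ~36% more transistors per unit area
            print((a14 - a12) / a14)       # 0.265 -> ~27% smaller die for the same design
            [/code<]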

            5 minutes after posting:

            LOL! They’re downvoting math!

            • Jeff Kampman
            • 1 year ago

            My understanding is that you can take advantage of those things if you’re a GlobalFoundries client but that AMD did not choose to and has only elected to put the better transistors of 12LP in the front-end-of-line of its products.

            • Cooe
            • 1 year ago

            Hey Jeff, just out of curiosity, did you test the 2nd-gen chips using the old Ryzen Balanced power plan, perchance? (Some people did, others didn’t, and it’s one of the reasons for so much variation in gaming results, so I’ve been asking the reviewers I can reach to see who did what.) Turns out it really hurts gaming performance with these new chips, so if you did, you might want to re-do the game testing on the normal Balanced or High Performance plans.

            * Update – NVM, took a closer look at the test setup and you explicitly stated you did not. Haha, my bad! To clarify, though: if you install the Fall Creators Update on a Ryzen 2 machine with the old power plan, does it get overwritten? Or has AMD’s suggested config just changed? Because if it’s the latter, that could get a little messy for anyone upgrading who doesn’t know to turn it off.

            [url<]https://techbuyersguru.com/amds-second-assault-ryzen-5-2600x-and-ryzen-7-2700x-reviewed?page=3[/url<]

            • techguy
            • 1 year ago

            The old Ryzen Balanced power plan? You mean the one that was replaced a year ago in the chipset driver for X370?
            [url<]https://community.amd.com/community/gaming/blog/2017/04/06/amd-ryzen-community-update-3[/url<]

            You can answer that question by asking yourself the following: are reviewers only testing on X370? Answer: no.

            • Cooe
            • 1 year ago

            The new one. That one. And *facepalm*, it’s a WINDOWS power plan, you idiot. If you have an OS drive from a prior Ryzen build (and TONS of reviewers have an AMD Ryzen test drive and an Intel test drive, so unless they reinstalled the former for 2nd gen, it would be a problem), the power plan stays on it, regardless of board chipset. Come on man, don’t be dumb.

            • techguy
            • 1 year ago

            Yeah, I’m the idiot. Try reading the link. What the hell power plan do you think AMD adjusted? Guess, it’s for an O.S. made by Microsoft…

            • Cooe
            • 1 year ago

            “The AMD Ryzen Balanced Power Plan is now included in the official AMD chipset drivers starting with version 17.10! Simply download and install the latest chipset driver package, and the new plan will be automatically configured for you. Windows 10 64-bit is required.”

            Seriously man. Learn to read what you post. Anyone who installed Windows on a Ryzen 1 system & ofc the AM4 chipset drivers, and then used that same drive to test Ryzen 2 WILL HAVE THE POWER PLAN ACTIVE! *facepalm* lol I’m done with you, you’re ridiculous (or maybe just ridiculously bad at reading. I’m not quite sure).

            • Shobai
            • 1 year ago

            I don’t have a horse in this race, but it’s interesting to note that the AMD Ryzen™ Balanced Power Plan is included in the current ([url=https://support.amd.com/en-us/download/chipset?os=Windows+10+-+64<]as of the 19th[/url<]) set of drivers - is this only for non-X470 set-ups? Non-Win10?

            • Jeff Kampman
            • 1 year ago

            I don’t know if installing the Fall Creators Update necessarily changes any power plans a user has installed since I started from a fresh Windows installation for this review.

            • techguy
            • 1 year ago

            Density improvements can be obtained through the use of better auto routing tools, and changes to the layout library. You will note that nowhere do they state *actual* feature sizes, let alone changes compared to the previous node.

            Furthermore, the *actual* feature-size advantage of a true 12nm node compared to a 14nm node is not 15% but 36%! You really believe this is a smaller node when the “density improvement” is less than half of what we should see?

            I’ve got some oceanfront property for sale in Arizona, if you do.

            • BurntMyBacon
            • 1 year ago

            [quote="techguy"<]Density improvements can be obtained through the use of better auto routing tools, and changes to the layout library.[/quote<]
            Absolutely true, assuming the routing was allowed to change.

            [quote="techguy"<]You will note that nowhere do they state *actual* feature sizes, let alone changes compared to the previous node.[/quote<]
            I did notice that, but they do mention a "partial optical shrink." I was under the impression that this meant they shrunk a subset of the features, so please enlighten me as to what it really means.

            [quote="techguy"<]Furthermore, the *actual* feature size advantage of a true 12nm node compared to a 14nm node is not 15% but 36%! You really believe this is a smaller node when the "density improvement" is less than half of what we should see?[/quote<]
            Your qualifier of "true" 12nm node negates this point sufficiently. Nobody said it would be a true 12nm node, and all indications are a half-node at best. Even if it were a full node, the maximum theoretical density improvement only occurs at the smallest feature size. None of the routing takes place at minimum feature size, so realistic density improvements are lower.

            • techguy
            • 1 year ago

            Yeah… Press releases are marketing 😉

            “Better density” can only be a byproduct of better layout tools in this case. Once again, Glofo’s 12nm process is a refinement of their 14nm process. You know how Intel has 14nm, 14nm+, and 14nm++ nodes? Glofo decided to “one-up” Intel by calling their improved 14nm process “12nm”.

            Get it now?

            • chuckula
            • 1 year ago

            There’s also a difference between a change in the theoretical density that a node can give you and the actual density of transistors laid out in a specific product that is made using the specific node. Meaning that AMD didn’t have to make the chip denser even on the newer node and may in fact have intentionally avoided doing so for design cost or thermal reasons.

            This issue comes up every time Intel produces a CPU with lower transistor density than a GPU and that fact is then incorrectly taken to mean that Intel’s 14nm process has some lower inherent density than, for example, TSMC’s 16nm process. Which is nonsense but it illustrates the difference.

            • techguy
            • 1 year ago

            re: CPU vs GPU transistor density comparisons:

            There also tends to be significantly higher quantities of SRAM cells (by transistor count and area) on CPUs than GPUs, which has the effect of making those GPUs appear to be more dense.

            • techguy
            • 1 year ago

            lol, you say the same thing as Intel999; he gets downvoted, you get upvoted.

            Some really smart people in this comments section!

          • Intel999
          • 1 year ago

          12 nm is only a marketing term used by GlobalFoundries. Since TSMC was calling their 16nm+ spin 12nm, GloFo figured they would call their 14nm+ spin 12nm as well.

          Therefore, the die size remains the same, with less leakage resulting in more efficiency. Probably fewer failed dice per wafer as well.

          AMD chose to take advantage of the better efficiency by making the boost last longer and engage on more cores. AMD also appears to be taking advantage of the cheaper dice afforded by better yields by pricing these chips rather aggressively.

            • techguy
            • 1 year ago

            Utterly absurd that you’re being downvoted for this comment as it’s 100% true. Anyone can research this and find out for themselves. Glofo’s 12nm process does not feature smaller average feature size than the preceding 14nm process. The name is used to indicate improvements in their manufacturing capabilities.

            Honestly, you have to be pretty dense to think that Ryzen 2 is somehow built on a smaller process while having an identical number of transistors to Ryzen 1 and yet they both have the same die size.

            Some people…

        • Mr Bill
        • 1 year ago

        Did they move anything around?

          • Mr Bill
          • 1 year ago

          [url=https://www.anandtech.com/show/12625/amd-second-generation-ryzen-7-2700x-2700-ryzen-5-2600x-2600/2<]When discussing with AMD, the best way to explain it is that some of the design of the key features has not moved – they just take up less area, leaving more dark silicon between other features.[/url<]

        • ImSpartacus
        • 1 year ago

        Wait, so despite moving to the slightly denser 12nm process, this new Pinnacle Ridge die is exactly the same size and transistor count as Zeppelin?

        Maybe you meant that it’s roughly the same size/count as Zeppelin?

      • DPete27
      • 1 year ago

      Agreed. When I see a clockspeed-per-core chart like the 1800X’s, it tells me someone in the frequency/power-management department was being lazy.

        • derFunkenstein
        • 1 year ago

        They were rushing it out the door because Bulldozer was not only a joke at release, it had been a joke for well over a year. So I kinda get it. But it’s nice to see that Zen+ can do a better job. Very interested in the power-over-time results, once those get posted.

          • NTMBK
          • 1 year ago

          Eh, it’s a first gen product, you always have a few “easy wins” that just didn’t make the cut. Same thing happened with Bulldozer; Piledriver was a big improvement over it only a year later, simply because they had time to fix a bunch of small things.

            • derFunkenstein
            • 1 year ago

            Sure. And the cut was artificially imposed by the market, since basically nobody was buying their existing CPUs. I don’t think we’re in disagreement here, just saying things with different words.

            • jts888
            • 1 year ago

            The single biggest IPC improver in the Ryzen 2000 series, the L2 latency fix, was already in first gen Threadrippers and Epycs. The various clocking improvements are nice too, but other pure fixes are not well known yet.

        • techguy
        • 1 year ago

        I don’t know that laziness is the right term. Schedules and budgets and what not… There’s a reason it made it into the 2nd product release after all.

          • DPete27
          • 1 year ago

          Sure, lazy was more pointed than necessary, but:
          “Should we test this chip to see what the max frequency is for each active core count?”
          “Nah, just test 1 and all cores and send it out.”

      • Mr Bill
      • 1 year ago

      AMD dropped L2 cache latency from 17 cycles to 11 cycles. That’s got to help.
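
      For a rough sense of what that means in wall-clock terms, a small sketch (plain Python; the 4.3 GHz clock is just an illustrative boost speed, not a measured figure, and the cycle counts are the ones quoted above):

      [code<]
      # Convert a cache latency in core cycles to nanoseconds at a given clock.
      def cycles_to_ns(cycles, ghz):
          return cycles / ghz

      for cycles in (17, 11):
          print(f"{cycles} cycles @ 4.3 GHz = {cycles_to_ns(cycles, 4.3):.2f} ns")

      # 17 cycles @ 4.3 GHz = 3.95 ns
      # 11 cycles @ 4.3 GHz = 2.56 ns  -> every L2 hit returns ~35% sooner
      [/code<]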

        • Mr Bill
        • 1 year ago

        [url=https://www.anandtech.com/show/12625/amd-second-generation-ryzen-7-2700x-2700-ryzen-5-2600x-2600/3<]AnandTech--Improvements to the Cache Hierarchy[/url<]

      • ptsant
      • 1 year ago

      The practical aspects are also very important: RAM configuration used to be relatively painful and is now much easier. Getting 3200+ used to be possible only with extreme tuning; now it seems to be mainstream.

      You also get a decent heatsink/fan, which was missing from the 1x00X series.

    • gmskking
    • 1 year ago

    Nice review. Thanks.
