AMD’s Ryzen 7 1800X, Ryzen 7 1700X, and Ryzen 7 1700 CPUs reviewed

Way, way back in the fall of 2006, I put together my first PC in my dorm room. I picked out a Core 2 Duo E6400 and a proper motherboard, guided by a friendly-sounding article from some PC hardware site I’d found while Googling around. My elation with that PC—Dual cores! 4GB of memory! A graphics card that can run Half-Life 2! Free Windows Vista!—probably wasn’t shared by the AMD boardroom at the time.

The Conroe cores in that E6400 and its friends helped touch off an Intel CPU performance lead that AMD hasn’t much challenged since. 2007’s Phenom family of chips suffered from a performance-robbing TLB erratum, and the Phenom II series could only duke it out with Intel chips from prior generations during its time on the market. Famously, 2011’s ambitious Bulldozer architecture trailed Intel’s seminal Sandy Bridge CPUs substantially when it launched aboard the FX-8150, and the Piledriver refresh of that architecture in 2012 didn’t help much. Our move to frame-time benchmarking between Bulldozer and Piledriver made the refreshed architecture’s shortcomings for gaming performance especially clear. Then-AMD CEO Rory Read eventually conceded that Bulldozer “was not the game-changing part [sic] when it was introduced three years ago,” but the ‘dozer’s derivatives have had to soldier on in various forms in AMD’s CPUs ever since.

It didn’t help that AMD’s bet on the fusion of Radeon graphics and traditional CPU cores over seven generations of APUs didn’t find many takers in the lower end of the market. Nor can we forget the company’s long slide into irrelevance in the data center, an attractive, high-margin business that Intel basically has to itself these days.

So, yeah. After 10 years and change, the Zen microarchitecture that’s launching this morning aboard AMD’s Ryzen CPUs has a lot riding on its shoulders. The entire company’s future, if I had to guess. No biggie.

Not to spoil things too much, but Zen is solid. Go ahead and breathe a sigh of relief now. We’ve had three Ryzen CPUs in the TR labs this past week: the Ryzen 7 1800X, the Ryzen 7 1700X, and the Ryzen 7 1700. We’ve spent nearly every waking hour of the past few days turning every knob and dial we can to make our Ryzen CPUs sweat. Before we see whether or how the first Zen chips live up to the deafening hype that AMD has drummed up over the past few months, it’s worth taking a peek under the hood to see just how the company fulfilled the promises it’s made about Ryzen’s performance.

From the ground up

The Zen microarchitecture is a complete re-imagining of what an AMD x86 processor should look like. The company’s engineers have tossed the tightly-coupled “module” concept of Bulldozer and friends on the scrap heap. Instead, Zen is a sleek, shiny new chassis that looks a bit like Sandy Bridge and its derivatives if you squint a bit. AMD has consistently touted a “40% IPC speedup” in its discussions of Zen from the beginning, and I’ll do the best I can to briefly explain how AMD got there with its latest and greatest.

At the highest level, I want to draw your attention to two clusters of rectangles in this high-level block diagram of the Zen CPU core. The first point of interest is that each core has its own integer and floating-point units to work with. This private-resource layout is quite a bit different from the dual-integer-core, shared-floating-point-unit structure of the Bulldozer module. Another new AMD trick for Zen is simultaneous multithreading, or SMT—better known as Hyper-Threading in Intel parlance—to take advantage of otherwise idle execution resources. Most of the Zen core’s resources are competitively shared between the two threads of execution, and only a few—a new structure for AMD chips called the op cache, the store queue, and the retire queue—are statically partitioned.

The op cache is one of the biggest improvements to Zen’s fetch-and-decode stage. This structure first made its appearance in Intel’s Sandy Bridge architecture, and it serves as a temporary home for the internal micro-ops generated as part of the decode stage. This bit of cache is important because it can let the core leave its power-hungry fetch-and-decode hardware spun down. Instead, recently-decoded micro-ops can be dispatched straight into the maw of the core’s execution units for processing if they’re needed again. That shortcut has benefits for both latency and power consumption. You can read more about the benefits of op-caching in David Kanter’s incredible Sandy Bridge deep-dive. (David’s deep-dives have been indispensable in laying the foundations for this article, and they’re required reading for anyone with even the slightest curiosity about modern CPU architectures. Do go check them out.)
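
As a rough mental model only, an op cache boils down to a small tag-matched structure indexed by fetch address that can hand back previously decoded micro-ops. The C sketch below is purely illustrative; the capacity, line format, and indexing are invented for the example and aren't Zen's (or Sandy Bridge's) actual design.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define OP_CACHE_ENTRIES 512   /* hypothetical capacity, not Zen's real size */
#define UOPS_PER_ENTRY   8

struct op_cache_line {
    bool     valid;
    uint64_t tag;                      /* address of the fetch block */
    uint32_t uops[UOPS_PER_ENTRY];     /* micro-ops produced by an earlier decode */
    int      count;
};

static struct op_cache_line op_cache[OP_CACHE_ENTRIES];

/* On a hit, previously decoded micro-ops are handed straight to dispatch,
 * and the power-hungry fetch/decode hardware can stay idle. */
static bool op_cache_lookup(uint64_t fetch_addr, uint32_t *out_uops, int *out_count)
{
    struct op_cache_line *line = &op_cache[(fetch_addr >> 6) % OP_CACHE_ENTRIES];

    if (line->valid && line->tag == fetch_addr) {
        memcpy(out_uops, line->uops, (size_t)line->count * sizeof(uint32_t));
        *out_count = line->count;
        return true;                   /* hit: skip fetch and decode */
    }
    return false;                      /* miss: use the legacy decoders, then fill the line */
}
```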

Zen also features an improved hashed-perceptron branch predictor compared to its predecessors. AMD prefers to call this a “neural network” predictor, which isn’t inaccurate: a perceptron is the simplest form of one, and neural networks are cool right now. The concept isn’t new for AMD chips, either: Bulldozer, Piledriver, and Jaguar have all used similar technology in their predictors. AMD didn’t share many details of what it changed in the Zen predictor relative to its prior architectures.

In any case, better branch prediction is critical for allowing the chip to speculatively execute instructions without choosing the wrong path in the instruction stream. Get it wrong, and you have to flush the pipeline, an extraordinarily wasteful and performance-degrading operation in most cases. You can read more about the hashed-perceptron predictor in Daniel Jiménez’s introductory paper on the subject.
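
For the curious, here’s a minimal sketch of the perceptron idea from that paper, written in C with invented table sizes and a single global history register. It illustrates the technique in general, not Zen’s actual predictor.

```c
#include <stdbool.h>
#include <stdint.h>

#define HISTORY_LEN     16     /* bits of global branch history (made up) */
#define NUM_PERCEPTRONS 1024   /* perceptron table size (made up) */
#define THRESHOLD       45     /* training threshold, roughly 1.93*HISTORY_LEN + 14 */

static int8_t weights[NUM_PERCEPTRONS][HISTORY_LEN + 1];  /* [0] is the bias weight */
static int    history[HISTORY_LEN];                       /* +1 = taken, -1 = not taken */

/* Prediction is the sign of a dot product between the weights and recent outcomes. */
static int perceptron_output(uint32_t pc)
{
    const int8_t *w = weights[pc % NUM_PERCEPTRONS];
    int y = w[0];
    for (int i = 0; i < HISTORY_LEN; i++)
        y += w[i + 1] * history[i];
    return y;                          /* y >= 0 means "predict taken" */
}

/* Train only on mispredictions or low-confidence predictions, then update history.
 * (Weight saturation is omitted for brevity.) */
static void perceptron_update(uint32_t pc, bool taken)
{
    int8_t *w = weights[pc % NUM_PERCEPTRONS];
    int y = perceptron_output(pc);
    int t = taken ? 1 : -1;

    if ((y >= 0) != taken || (y > -THRESHOLD && y < THRESHOLD)) {
        w[0] += (int8_t)t;
        for (int i = 0; i < HISTORY_LEN; i++)
            w[i + 1] += (int8_t)(t * history[i]);
    }
    for (int i = HISTORY_LEN - 1; i > 0; i--)
        history[i] = history[i - 1];
    history[0] = t;
}
```

The appeal over simple saturating counters is that a perceptron can learn correlations across a much longer branch history for a comparable storage budget.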

Intel has trumpeted better branch predictor accuracy in virtually every one of its recent microarchitectures, and it’s been quite reluctant to share any details of what it’s changed to get there. That guardedness suggests the company’s branch-prediction secret sauce is a major competitive advantage. Given Haswell CPUs’ uncanny branch-prediction accuracy, for example, it’s not hard to see why.

Zen features a ton of other architectural improvements that contribute to its impressive performance gains over prior generations of AMD CPUs. We’d love to cover them all in depth for you, but we’ve been running tests on Ryzen right up until the NDA lift this morning and beyond. If you’d like to know more, be sure to check out David’s Zen write-up at the Microprocessor Report for more detail than we can possibly offer.

 

The Ryzen lineup

Zen is riding in this morning on three new CPUs: the Ryzen 7 1700, the Ryzen 7 1700X, and the Ryzen 7 1800X. You’ll already be familiar with these eight-core, 16-thread chips from AMD’s launch event last week, but a couple details have changed since we last checked in. Most notably, all three CPUs now feature Extended Frequency Range, or XFR, support.

Model            Cores  Threads  Base clock  Boost clock  XFR  TDP   Price
Ryzen 7 1800X    8      16       3.6 GHz     4.0 GHz      Yes  95W   $499
Ryzen 7 1700X    8      16       3.4 GHz     3.8 GHz      Yes  95W   $399
Ryzen 7 1700     8      16       3.0 GHz     3.7 GHz      Yes  65W   $329

Contrary to what we’ve heard about XFR until now, however, the technology basically applies a 100-MHz clock bump if a Zen chip’s internal sensors detect thermal headroom to work with. There’s no way to configure or turn off XFR, either (short of overclocking); it’s just a thing that will happen with adequate cooling. We found that even AMD’s Wraith cooler is enough of a heatsink to let XFR kick in on the Ryzen 7 1700 and Ryzen 7 1700X, so it seems as though many folks will be able to enjoy some additional out-of-the-box clock speed headroom without overclocking.

Next quarter, two Ryzen 5 CPUs will launch. The Ryzen 5 1600X will offer six cores running at 3.6 GHz base and 4.0 GHz boost clock speeds. It’ll be accompanied by the Ryzen 5 1500X, a four-core, eight-thread CPU with 3.5 GHz base and 3.7 GHz boost clock speeds. AMD says these chips will be priced below $300, but it didn’t offer further details. Those chips will be followed in the second half of this year by Ryzen 3 CPUs, although we don’t know anything about those presumably budget-priced chips yet. We can say that all of these Ryzen parts will be unlocked for those who want to try their hands at overclocking on the appropriate platform.

A quick tour of AM4 platforms

Socket AM4 will launch on a dizzying array of motherboards this year. Most PC builders will be interested in AMD’s high-end X370 chipset and the more entry-level B350 platform. Those two chipsets are the only way to get access to Ryzen CPUs’ unlocked multipliers. You can see how the spec breakdown shakes out in the complicated table below. Most notably, the A320, B350, and X370 chipsets will enjoy native USB 3.1 support, a feature that Intel has yet to integrate into its chipsets.

Source: AMD

X370 is also the only AMD chipset that will offer builders the opportunity to employ dual-GPU setups in SLI or CrossFire by splitting a Summit Ridge CPU’s 16 lanes of dedicated PCIe 3.0 connectivity. We have a wide range of X370 motherboards in the TR labs now, and we’ll try to offer our thoughts on them when we can.

What we’re not testing today

Thanks to shipping delays, a constant stream of BIOS and software updates, and other headaches, we simply ran out of time to complete our testing before this morning’s NDA lift. Because of those circumstances, we elected to paint as complete a picture of the Ryzen 7 CPUs’ performance as possible now while leaving some other details only informally explored. We’ve completed all our usual productivity and gaming tests, and we think we can offer a solid idea of Ryzen’s value for system builders of all stripes.

Unfortunately, we had to make a few cuts from our schedule to achieve that goal. Overclocking performance and power efficiency measurements will have to wait for a separate article, as will platform performance measurements for X370 like USB 3.1 transfer speed and NVMe storage performance. We apologize in advance for the omissions, but we think you’ll enjoy the rest of our review. Let’s get to it.

 

Our testing methods

We ran each of our benchmarks at least three times and reported the median result. Our test systems were configured like so:

Processor         AMD Ryzen 7 1700 / Ryzen 7 1700X / Ryzen 7 1800X
Motherboard       Gigabyte Aorus AX370-Gaming 5
Chipset           AMD X370 (Promontory)
Memory size       16 GB (2 DIMMs)
Memory type       G.Skill Trident Z DDR4-3866 (rated) SDRAM
Memory speed      2933 MT/s
Memory timings    13-13-13-33 1T

 

Processor         AMD FX-8370                  Intel Core i7-2600K / Core i7-3770K
Motherboard       Gigabyte GA-990FX-Gaming     Asus P8Z77-V Pro
Chipset           990FX + SB950                Z77 Express
Memory size       16 GB (2 DIMMs)
Memory type       Corsair Vengeance Pro Series DDR3 SDRAM
Memory speed      1866 MT/s
Memory timings    9-10-9-27 1T

 

Processor         Intel Core i7-4790K            Intel Core i7-6700K / Core i7-7700K    Intel Core i7-6950X / Core i7-5960X
Motherboard       Asus Z97-A/USB 3.1             Asus ROG Strix Z270E Gaming            Gigabyte GA-X99-Designare EX
Chipset           Z97 Express                    Z270                                   X99
Memory size       16 GB (2 DIMMs)                16 GB (2 DIMMs)                        64 GB (4 DIMMs)
Memory type       Corsair Vengeance Pro Series   G.Skill Trident Z                      G.Skill Trident Z
                  DDR3 SDRAM                     DDR4 SDRAM                             DDR4 SDRAM
Memory speed      1866 MT/s                      3866 MT/s / 3200 MT/s                  2400 MT/s
Memory timings    9-10-9-27 1T                   18-19-19-39 1T / 16-18-18-38 1T        15-15-15-35 1T

They all shared the following common elements:

Storage 2x Kingston HyperX 480GB SSDs
Discrete graphics Gigabyte GeForce GTX 1080 Xtreme Gaming
Graphics driver version GeForce 376.33
OS Windows 10 Pro
Power supply Corsair RM850x

Thanks to Corsair, Kingston, Asus, Gigabyte, Cooler Master, Intel, G.Skill, and AMD for helping us to outfit our test rigs with some of the finest hardware available.

Some further notes on our testing methods:

  • The test systems’ Windows desktops were set at a resolution of 1920×1080 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.

  • After consulting with our readers, we’ve decided to enable Windows’ “Balanced” power profile for the bulk of our desktop processor tests, which means power-saving features like SpeedStep and Cool’n’Quiet are operating. (In the past, we only enabled these features for power consumption testing.) Our spot checks demonstrated to us that, typically, there’s no performance penalty for enabling these features on today’s CPUs. If there is a real-world penalty to enabling these features, well, we think that’s worthy of inclusion in our measurements, since the vast majority of desktop processors these days will spend their lives with these features enabled.

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Memory subsystem performance

To get a sense of where Ryzen’s dual-channel memory architecture slots into the pantheon of bandwidth, we employed AIDA64’s directed memory read, write, and copy tests.
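
For a feel for what a directed copy test boils down to, here's a minimal single-threaded sketch in C. It's not AIDA64's code, and one thread generally won't saturate a multi-channel memory controller, but the structure is the same: stream a buffer far larger than any cache and divide bytes moved by elapsed time.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Copy a buffer much larger than any cache and report effective bandwidth.
 * Counts bytes read plus bytes written, as copy benchmarks usually do. */
int main(void)
{
    const size_t bytes = 256UL * 1024 * 1024;
    const int iterations = 10;
    char *src = malloc(bytes);
    char *dst = malloc(bytes);
    if (!src || !dst)
        return 1;
    memset(src, 1, bytes);
    memset(dst, 0, bytes);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iterations; i++)
        memcpy(dst, src, bytes);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (double)(t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double gbps = 2.0 * (double)bytes * iterations / secs / 1e9;
    printf("copy bandwidth: %.1f GB/s (check byte: %d)\n", gbps, dst[bytes - 1]);

    free(src);
    free(dst);
    return 0;
}
```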

For eight hungry cores and 16 threads in a single socket, Ryzen CPUs fall decidedly mid-pack in these synthetic benchmarks. Intel sees fit to give its high-end desktop processors quad-channel memory controllers for plenty of breathing room, and the Core i7-5960X and Core i7-6950X generally enjoy much faster memory transfers as a result in these tests. Considering that the memory multiplier settings on our Gigabyte motherboard are locked out above 3200 MT/s right now, one can’t just shove DDR4-3866 into an X370 mobo and get around this issue, as one can with Z270 motherboards.

We’d usually test memory latency with AIDA64 and cache latencies with SiSoft Sandra at this point, but AMD warned us that neither utility performs correctly with Zen’s caches or memory controller. As a result, we’re holding off on reporting those numbers through independent testing.

Source: AMD

AMD did provide reviewers with its own internal measurements of cache bandwidth and latency data for Zen. We won’t be diving deep into these numbers, but it is interesting to see how Ryzen chips’ cache hierarchies stack up against their Broadwell-E nemesis.

Synthetic math performance with Y-Cruncher

Normally, this spot is where we’d share the performance results from the synthetic benchmarks built into the AIDA64 utility. Unfortunately, those benchmarks haven’t been updated for Ryzen as we go to press, either. Instead, we turned to Y-Cruncher, a program that calculates pi out to an arbitrary number of digits (billions of them, in our case). Not only does this program require a powerful CPU to run well, it also benefits from fast memory, since it works with a large pool of RAM to store its calculations.

Y-Cruncher comes with a handful of executables that are tuned for various x86 extensions. Y-Cruncher also offers a Bulldozer-optimized binary that can use that chip family’s unique SIMD instructions. We used the newest version of the executable that ran on each chip without throwing warnings about ISA compatibility. We ran the program in its multithreaded mode and chose a 2,500,000,000-digit test size.

One thing is clear from our Y-Cruncher results right away: AVX2 SIMD support seems to help a lot. Ivy Bridge, Sandy Bridge, and Bulldozer don’t have it, and they suffer accordingly. The Ryzen CPUs have AVX2 support, but their 256-bit AVX throughput is half that of the Haswell and newer chips because of the 128-bit width of their FP units. Despite their high core and thread counts, the Ryzen chips land smack between Haswell and Skylake here. The Intel Extreme Edition chips put their copious memory bandwidth and execution hardware to good use by leading the pack in number crunching.
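
To illustrate why 256-bit SIMD width matters so much here, consider a simple dot product written with FMA intrinsics. This is a generic example rather than anything Y-Cruncher actually does (compile with -mavx -mfma):

```c
#include <immintrin.h>   /* AVX/FMA intrinsics */
#include <stddef.h>

/* Dot product using 256-bit fused multiply-adds: four doubles per instruction.
 * Haswell and newer Intel cores execute each FMA on native 256-bit units;
 * Zen cracks the same instruction into two 128-bit micro-ops, halving peak
 * throughput per core. */
double dot_fma(const double *a, const double *b, size_t n)
{
    __m256d acc = _mm256_setzero_pd();
    size_t i = 0;

    for (; i + 4 <= n; i += 4)
        acc = _mm256_fmadd_pd(_mm256_loadu_pd(&a[i]), _mm256_loadu_pd(&b[i]), acc);

    double lanes[4];
    _mm256_storeu_pd(lanes, acc);
    double sum = lanes[0] + lanes[1] + lanes[2] + lanes[3];

    for (; i < n; i++)       /* scalar tail */
        sum += a[i] * b[i];
    return sum;
}
```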

 

Doom (OpenGL)

Doom likes to run fast, and especially so with a GTX 1080 pushing pixels. The game’s OpenGL mode is an ideal test of each CPU’s ability to keep that beast of a graphics card fed. We cranked up all of its eye candy at 1920×1080 and went to work with our usual test run at the beginning of the Foundry level.


Doom‘s OpenGL renderer demands plenty of single-core throughput to keep frame rates high and 99th-percentile frame times low. While Intel’s menagerie achieves higher frame rates and lower 99th-percentile frame times than the Ryzen chips here, it’s worth noting that every CPU here save the FX-8370 rarely dips below about 83 FPS. That’s still a plenty smooth and playable Doom experience. Still, this first gaming test puts a fine point on the fact that even with all of their generational improvements, Ryzen CPUs don’t have quite the same IPC oomph as Intel’s latest architectures.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time the GTX 1080 spends beyond certain frame-time thresholds, each with an important implication for gaming smoothness. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame. And 8.3 ms corresponds to 120 FPS, an even more demanding standard that Doom can easily meet or surpass on hardware that’s up to the task.
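
The arithmetic behind these plots is simple. The C sketch below assumes a plain array of per-frame render times in milliseconds, roughly what frame-time capture tools spit out, and adds up only the portion of each frame that runs past the cutoff:

```c
#include <stddef.h>

/* Add up the time spent past a frame-time threshold. Only the portion of each
 * frame beyond the cutoff counts toward "badness"; a 20-ms frame contributes
 * 3.3 ms against the 16.7-ms threshold, for example. */
double time_spent_beyond(const double *frame_times_ms, size_t n, double threshold_ms)
{
    double total_ms = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (frame_times_ms[i] > threshold_ms)
            total_ms += frame_times_ms[i] - threshold_ms;
    }
    return total_ms;
}
```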

None of the CPUs we tested have more than a trace of frames that would drop frame rates below 60 FPS, so it’s worth clicking over to the more demanding 8.3-ms plot to see what’s happening. There, we can see that the Ryzen CPUs spend about as much time churning on tough frames that would drop animation below 120 FPS as Intel’s Sandy Bridge and Ivy Bridge CPUs do.

 

Doom (Vulkan)


The switch to Vulkan levels the playing field a bit for the Ryzen chips. With this API, only the FX-8370 has any trouble keeping its 99th-percentile frame time under 10 ms. That performance translates to a gaming experience of more than 100 FPS 99% of the time. Still, the Ryzen CPUs (save for the top-end 1800X) can’t quite match the high average frame rates of even the Core i7-3770K. Hm.


With the help of our “time-spent-beyond-X” graph, we can confirm that all of the CPUs at hand spend only fleeting moments past 16.7 ms working on tough frames. The picture at 8.3 ms is quite encouraging, as well. Only the FX-8370 turns in a figure past this threshold that we’d call noticeable. Moving on.

 

Crysis 3

Although Crysis 3 is nearly four years old now, its lavishly detailed environments and demanding physics engine can still stress every part of a system. To put each of our CPUs to the test, we took a one-minute run through the grassy area at the beginning of the “Welcome to the Jungle” level with settings cranked at 1920×1080.


Crysis 3 lets the Ryzen trio shine a bit thanks to its affinity for lots of cores and threads. Even so, all three chips finish just midpack in average-FPS terms, although the race is a tight one. Each Ryzen CPU is just a couple milliseconds off the 99th-percentile frame time pace set by the Core i7-6950X and the Core i7-7700K, too.


Our “time-spent-beyond-X” graphs paint a pretty picture for Zen’s performance in Crysis 3. None of the Ryzen chips cause our GTX 1080 to spend more than an instant past the critical 16.7-ms mark, and they only spend a couple seconds of our test run on frames that cause the GTX 1080’s output to drop below 120 FPS. We’ll chalk that up as a success.

 

Watch Dogs 2

Here’s a new addition to our CPU-testing suite. We heard through the grapevine that Watch Dogs 2 can occupy every thread one can throw at it, so we turned up the eye candy and walked through the forested paths around the game’s Coit Tower landmark to get our CPUs sweating.

Unfortunately, because of the DRM baked into this title, we were only able to complete testing on six of our test CPUs before the NDA lift. We’ve updated all of our graphs with complete data now, but what a headache. Note to publishers: if you’d like to make your game useful for hardware reviewers, don’t lock us out just because we switch machines a bunch.


We thought we were doing the Ryzen chips a favor by running them with a title that does indeed take advantage of all of their cores and threads. Instead, the Ryzen 7 1700 had an embarrassing moment when Watch Dogs 2 advised us that the chip didn’t meet the game’s minimum specs. Intel’s Extreme Edition CPUs and the Core i7-7700K didn’t take Watch Dogs 2 lying down, either. With this almost entirely CPU-bound setup, the Ryzen chips and the Intel competition all fall into a neat line that’s purely reflective of their performance.


At the 16.7-ms threshold, the Ryzen 7 1700 spends by far the most time on tough frames: three seconds of our test run on scenes that make our GTX 1080 drop below 60 FPS. The other two Ryzens perform better, but they still can’t quite match the Intel chips here. A flip over to the 8-ms graph shows an Intel lead at that demanding threshold, as well.

 

Deus Ex: Mankind Divided

After our Core i7-7700K review, where Deus Ex: Mankind Divided proved GPU-bound, we tweaked the game’s settings to see if that remained the case across its entire range of eye candy. Happily, we discovered that turning off MSAA and lightening a few other loads on the graphics card turns DXMD‘s polygon-rich environments into a real torture test for any CPU.


Unshackle the GTX 1080 with some strategic settings changes, and Deus Ex can actually run quite swiftly. Its CPU scaling seems to top out at about eight threads, however, so the Core i7-7700K and company rule the roost by a decent margin. Just like Watch Dogs 2, the performance progression in the graphs above is more or less what we expect from a completely CPU-bound test.


While there’s little churning of note for these CPUs at the 16.7-ms threshold in DXMD, a click over to the 8.3-ms mark shows that the Intel CPUs spend substantially less time holding up the GTX 1080 than the Ryzen chips do. Even with an apparent eight-thread workload to fully occupy them, AMD’s latest just can’t keep up with the GTX 1080’s thirst for work at higher frame rates.

 

Grand Theft Auto V

Grand Theft Auto V can still put the hurt on CPUs as well as graphics cards, so we ran through our usual test run with the game’s settings turned all the way up at 1920×1080. Unlike most of the games we’ve tested so far, GTA V favors a single thread or two heavily, and there’s no way around it with Vulkan or DirectX 12. In that way, it’s a perfect test of whether a CPU can keep the graphics card fed.


Noticing a pattern yet? While the Ryzen CPUs deliver a fine 99th-percentile frame time, they just can’t match the higher average frame rates that the Core i7-6700K and Core i7-7700K can produce. In fact, considering the weird wall that the Ryzen 7 1800X hits with its 99th-percentile frame time, we have to wonder if there’s not a memory bandwidth issue in play.


As expected, the FX-8370 is the only CPU that can’t keep out of the past-16.7-ms doldrums. We’re more interested in what happens past the 8.3-ms mark with these systems, and the Core i7-7700K keeps the GPU waiting past that threshold for less than half as long as the Ryzen 7 1800X does. The Ryzen 7 1700X and the Ryzen 7 1700 both keep the Core i7-3770K company toward the back of the pack.

So what are we to make of Ryzen as a gaming chip? Give one enough threads, as a couple of our benchmark titles do, and a Ryzen CPU can be a fine, if not exceptional, performer. Most games still can’t take advantage of that many threads, however, and in situations like GTA V, lower-end Ryzens can’t keep our GTX 1080 fed any better than 2012’s Core i7-3770K. Let’s get into some non-gaming tests and see why that might be.

 

Productivity

Compiling code in GCC

Our resident developer, Bruno Ferreira, helped put together this code-compiling test. Qtbench tests the time required to compile the Qt SDK using the GCC compiler. The number of jobs dispatched by the Qtbench script is configurable, and we set the number of threads to match the hardware thread count for each CPU.
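
As an illustration of the general idea rather than the actual Qtbench script, matching build jobs to hardware threads is as simple as asking the OS how many are online:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Ask the OS how many hardware threads are online and launch that many
 * parallel build jobs, the same idea the Qtbench script implements. */
int main(void)
{
    long threads = sysconf(_SC_NPROCESSORS_ONLN);
    if (threads < 1)
        threads = 1;

    char cmd[64];
    snprintf(cmd, sizeof(cmd), "make -j%ld", threads);
    printf("running: %s\n", cmd);
    return system(cmd);
}
```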

Here’s a compelling start for Ryzen in our non-gaming tests. The R7 1700 goes blow-for-blow with the i7-7700K, and the R7 1700X only slightly tails the Core i7-5960X. The R7 1800X almost catches the Core i7-6950X. If Zen’s floating-point performance leaves a bit to be desired, the integer side of the core can be a beast when it’s churning away at full tilt.

Javascript performance

These three benchmarks are about as single-threaded as it gets, so they’re an excellent indication of how Ryzen CPUs perform in lightly-threaded workloads. Pay close attention to these numbers if you’re curious about the IPC increases that AMD achieved with the Zen architecture.

There’s a bit of variance in how these tests shake out, but they all paint largely the same picture. At 4 GHz or so, the Zen architecture lands somewhere between Broadwell and Haswell in single-threaded throughput. As clock speeds start to decrease, however, the picture grows less rosy. The Ryzen 7 1700X isn’t much faster in this lightly-threaded workload than the Core i7-3770K at times, while the Ryzen 7 1700 can fall behind even the Core i7-2600K. Those measures have a direct correlation with how “snappy” a machine feels in common tasks, and the lower-end Ryzen 7 chips arguably won’t feel much faster than Sandy Bridge or Ivy Bridge desktops running around the same clock speed. Trust us, though: if you’re upgrading from an FX-series processor to Ryzen, you’ll immediately notice that common tasks like web browsing are much snappier.

7-Zip benchmark

In this common desktop workload, Zen exhibits a rather large performance delta between its compression performance and decompression performance. Considering that I probably unzip 50 zip archives for every one I compress, that’s probably not a bad tradeoff to make. Zen is only bested by Intel’s high-end desktop chips in compression, and it puts the Core i7-5960X to shame when unpacking archives.

TrueCrypt disk encryption

Although the TrueCrypt project has fallen on hard times, its built-in benchmarking utility remains handy for a quick test of these chips’ accelerated and non-accelerated performance when we ask them to encrypt data. The AES test should take advantage of hardware acceleration on the chips that support Intel’s AES-NI instructions, while the Twofish test relies on good old unaccelerated number-crunching prowess.
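
To show what that hardware acceleration means in practice, here's a minimal AES-NI sketch in C. It's a generic example rather than TrueCrypt's code (compile with -maes):

```c
#include <immintrin.h>   /* AES-NI intrinsics */

/* Encrypt one 16-byte block with AES-128, given an already-expanded key
 * schedule. Each _mm_aesenc_si128 performs a full AES round (ShiftRows,
 * SubBytes, MixColumns, AddRoundKey) in a single instruction; a cipher
 * like Twofish has no such shortcut and falls back on ordinary integer math. */
static __m128i aes128_encrypt_block(__m128i block, const __m128i round_keys[11])
{
    block = _mm_xor_si128(block, round_keys[0]);         /* initial key whitening */
    for (int i = 1; i < 10; i++)
        block = _mm_aesenc_si128(block, round_keys[i]);  /* nine full rounds */
    return _mm_aesenclast_si128(block, round_keys[10]);  /* final round, no MixColumns */
}
```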

All of these chips support AES acceleration in hardware, so their performance scales roughly with the number of cores, threads, and Hertz on offer. The story is much the same in Twofish rates. This is another test where Ryzen excels.

Scientific computing with STARS Euler3D

Euler3D tackles the difficult problem of simulating fluid dynamics. It tends to be very memory-bandwidth intensive. You can read more about it right here.

For this set of chips, Euler3D seems to tell two different stories. For chips with lots of memory bandwidth but few threads (like the Core i7-7700K), the execution resources the chip has to offer are the bottleneck. For big, wide machines like Ryzen, memory bandwidth seems to be the bottleneck. Both the Core i7-5960X and the Core i7-6950X deliver tremendous performance in Euler3D thanks to their potent combination of many execution resources and bountiful memory bandwidth. Makes one wonder what Ryzen could do with an extra two memory channels.

 

3D rendering and video processing

Cinebench

The Cinebench benchmark is based on Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. This test runs first with just a single thread and then with one thread for every hardware thread the CPU makes available.

AMD favors Cinebench for its demonstrations of Zen’s single-threaded performance, and it’s not hard to see why. The Ryzen 7 1800X nearly matches the Core i7-6950X here, but it can’t catch the higher-clocked Intel mainstream desktop parts.

Surprise! More cores, higher scores. All of the Ryzen CPUs best the Core i7-5960X and its relatively slow all-core Turbo speed, but they can’t quite catch the Core i7-6950X with its unfair advantage of two extra cores and four extra threads. Still, this is another solid win for Ryzen.

Blender

Until recently, Blender was another common sight at Ryzen demo events. Its recent absence may be because of the version 2.78b update, which includes a number of optimizations for SSE- and AVX2-compatible CPUs that improve performance. Our guess is that those updates might favor Haswell and friends more than they do Zen, as we’ve seen elsewhere in this review.

The Blender project offers several standard scenes to render with Cycles for benchmarking purposes, and we chose the CPU-targeted version of the “bmw27” test file to put Cycles through its paces.

Whatever the Blender devs did to Cycles under the hood, every chip with AVX2 support enjoys huge gains compared to our last round of tests in our Core i7-7700K review. AMD needn’t have been bashful about Ryzen’s performance in these tests, either. Only the Core i7-6950X runs better.

Handbrake

Handbrake is a popular video-transcoding app that recently hit version 1.0. To see how it performs on these chips, we converted a roughly two-minute 4K source file from an iPhone 6S into the legacy “iPhone and iPod touch” preset using the x264 encoder’s otherwise-default settings.

x264 doesn’t seem to be scaling across all of the Core i7-6950X’s cores and threads, so the Ryzen chips all bunch up roughly under it. The run times for even the more modest chips in this suite aren’t that far apart, either, so perhaps the program isn’t scaling beyond eight threads. The only CPUs that really suffer under Handbrake are those without AVX2 support, as we’ve come to expect. Perhaps that’s a good reason to consider moving up to a newer chip.

Digital audio workstation performance with DAWBench DSP

Here’s perhaps the most interesting addition to our benchmarking suite. DAWBench DSP is a freely-available project file for a number of digital audio workstation applications that lets us turn on a large number of instances of a standard VST (or effects plugin) while monitoring a looping audio track. The moment one starts hearing pops or crackles from the loop, it means the chip has reached its limit.

We chose the Reaper version of the project file and used the included ReaXcomp compressor plugin in its 64-bit form. To monitor the audio track, we plugged in a Focusrite Scarlett 2i2 USB audio interface using the USB 3.1 port (where available) on each of our test motherboards. We then installed Focusrite’s ASIO driver and selected the Scarlett as our playback device.

After some toying around, we decided that an ASIO buffer depth of 32 struck a good balance of low latency and CPU demand with our test setup. For the sake of time, we elected not to test at higher buffer depths, which trade latency for lower CPU load and more plugin headroom. This is a CPU review, after all. Our graph describes the number of compressor instances we were able to turn on before overloading the CPU.
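
For a sense of scale, the buffer depth maps directly to how often the CPU has to deliver a finished batch of samples. The sketch below assumes a 44.1-kHz sample rate, which is a common DAWBench configuration but an assumption on our part:

```c
#include <stdio.h>

/* Buffer depth sets how often the CPU must deliver a finished batch of samples.
 * Sample rate is assumed here; 44.1 kHz is a common DAWBench configuration. */
int main(void)
{
    const double sample_rate_hz = 44100.0;               /* assumption */
    const int depths[] = { 32, 64, 128, 256 };

    for (int i = 0; i < 4; i++) {
        double ms_per_buffer = depths[i] / sample_rate_hz * 1000.0;
        printf("%3d-sample buffer -> %.2f ms to fill each buffer\n",
               depths[i], ms_per_buffer);
    }
    return 0;
}
```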

Given how similarly DAWBench scales compared to the Y-Cruncher results on our opening pages, it’s probably safe to say that Reaper and our plugin of choice both lean hard on AVX instructions (and AVX2, where available) to do their thing. We think that’s evidenced by the big leap in performance enjoyed by newer chips with AVX2 support like Zen.

Intel’s cores still have an undeniable advantage in SIMD throughput, though. The four-core, eight-thread Core i7-7700K does about as well as the eight-core, 16-thread Ryzen 7 1700 in this test, highlighting the fact that Zen’s floating-point unit has to halve its throughput in order to execute 256-bit AVX instructions. In contrast, the Haswell-E Core i7-5960X enjoys almost perfect performance scaling compared to the Core i7-4790K. The Zen CPUs trail it despite having the same number of cores and threads on tap (plus relatively higher base clocks, to boot). We’d be curious to see what Ryzen could do with SIMD throughput similar to that of Haswell and company.

Right now, though, music pros may still be elated by the R7 1800X’s value proposition. The hottest Ryzen 7 is just 18% behind the Haswell-E chip in the number of VST instances it can handle, but it’s a whopping 54% less expensive. Assuming Intel doesn’t cut the prices of its Broadwell-E chips to compensate, we think the Ryzen 7 lineup could be a great friend to audio producers on a budget.

 

Conclusions

AMD’s Ryzen 7 CPUs are arguably its best ever. Our tests show that the Zen microarchitecture can deliver single-threaded performance that’s about on par with Intel’s Broadwell core. In fact, AMD exceeded its ambitious 40% instructions-per-clock improvement target. Some of our directed tests actually showed as much as a 50% single-core boost from Piledriver to Zen. AMD is deservedly proud of this accomplishment.

Zen’s impressive single-thread potential is tempered by the delivered clock speeds of the eight-core, sixteen-thread parts that AMD is debuting today, however. The Ryzen 7 1800X trades blows with Intel’s Broadwell-E parts by dint of its high base and boost speeds, but that parity is a ceiling for the Ryzen lineup, not a floor. Lower-clocked Ryzen 7s seem to perform somewhere between Sandy Bridge and Ivy Bridge Core i7s in lightly-threaded tasks. As a result, they won’t feel like a substantial upgrade from an older Sandy or Ivy system while browsing Facebook and Twitter—out of the box, at least.

In an exciting hat tip to PC enthusiasts, all Ryzen CPUs will have unlocked multipliers for easy overclocking, so it might be simple enough to claw back some clock speed with a relatively affordable CPU. Remember those good old days? On early firmware, we got our $330 Ryzen 7 1700 up to a 3.9 GHz all-core overclock using just the modest AMD Wraith cooler. With those settings, the mildest Ryzen turns into a rather brisk single-threaded performer and a real fire-breather on the cheap for multithreaded workloads. We expect buyers willing to tweak a bit will be happy with the performance they can extract from a Ryzen 7 1700 and an affordable tower heatsink like the Cooler Master Hyper 212 Evo. We didn’t enjoy as much overclocking success with the already-speedy 1700X and 1800X parts, though. We’ll be exploring Ryzen overclocking in-depth in a separate article at some point.

Although AMD would like builders to think that a Ryzen 7 1800X is like getting a Broadwell-E Core i7-6900K for half the price and so on down the line, the reality is more complicated. We were surprised by how many programs in our test suite now appear to take advantage of AVX instructions, and Ryzen can only achieve half the 256-bit SIMD throughput of its Intel competitors when running those operations. Memory-bandwidth-constrained applications like Euler3D can also run far better when paired with the quad-channel memory controllers of Haswell-E and Broadwell-E, whereas Ryzen seems to run into a bottleneck. That’s not comforting for a chip with eight data-hungry cores to feed.

If an application can take advantage of AVX, as our DAWBench DSP test seems able to, Intel’s high-end desktop parts can open a large lead on their Ryzen competitors thanks to their beefier and higher-throughput SIMD hardware. Core-for-core, however, Ryzen still manages to hang pretty close with its Haswell-and-newer competitors like the Core i7-5960X despite this disadvantage. It helps that Ryzen CPUs are “discounted” far more than the percentage by which they trail the Intel competition, too.

Many of our other productivity tests didn’t run into memory or SIMD bottlenecks, and in those cases, the Ryzen 7 lineup truly does bring a new class of computing performance to the $500-and-under price point. Going by our index, the $400 Ryzen 7 1700X is basically a Core i7-5960X for about a third of the coin, and the Ryzen 7 1800X provides even slightly higher performance overall for just $500. That’s an incredible value in high-performance desktop computing, and we imagine that Intel won’t be able to avoid dropping prices on some of its Broadwell-E CPUs in response. For non-gaming applications, we think these Ryzen 7 chips will be difficult to ignore for those with a need for sheer computing power.

Even though Ryzen redefines the performance available at a given price point for highly multithreaded applications, gamers looking for a similar revolution from Ryzen are likely out of luck. We’ve been trying to find more multithreaded games to test with of late, and Watch Dogs 2 certainly qualified. It seems that game favors high IPC, memory bandwidth, and clock speeds, though, and the Intel chips we tested offer more of some or all of those things. Heck, in the suite of six games plotted in our value chart above, the Core i7-6950X just barely ekes out the top spot. That may be a first for an Intel high-end desktop CPU. For play, Intel’s Core i7-7700K remains the chip to beat, but the future seems to hold multithreaded promise, and that’s good for AMD.

When we asked AMD about Ryzen’s performance at the modest resolutions we use to test CPU gaming prowess, the company countered that higher-resolution displays are becoming ever more popular. By extension, AMD seems to think the gaming market is moving toward being more graphics-card-bound than CPU-bound. The Steam Hardware Survey doesn’t support this argument, but it is true that gaming at higher resolutions will lessen the differences in performance between Ryzen chips and Intel’s seventh-generation Core CPUs if a gamer chooses to play that way.

The company also suggested that the Core i7-7700K and its ilk will appeal more to “pure gamers” who just, well, play games. AMD sees Ryzen as a one-socket shop for those who want to game and stream to Twitch in the highest possible quality all at once. That may be, but we think gamers would rather not make a tradeoff between wide-shouldered grunt and the smoothness-enhancing goodness of high clock speeds and instructions-per-clock muscle. We probably owe it to ourselves to test how Ryzen and the Intel competition perform under those circumstances at some point, just to see what the deal is.

Small wrinkles and our differences in performance priorities aside, it bears repeating that AMD is well and truly back in the high-performance x86 CPU game. If the company can further refine the Zen architecture over time, take advantage of future process improvements, and push clocks higher, we expect that future Ryzen chips will be competitive for many years to come. For now, we say bravo, AMD, and welcome back.

Comments closed
    • Thoughts
    • 2 years ago

    What I’m surprised to see missing… in virtually all reviews across the web… is any discussion (by a publication or its readers) on the AM4 platform’s longevity and upgradability (in addition to its cost, which is readily discussed).

    Any Intel Platform – is almost guaranteed to not accommodate a new or significantly revised micro-architecture… beyond the mere “tick”. In order to enjoy a “tock”, one MUST purchase a new motherboard (if historical precedent is maintained).

    AMD AM4 Platform – is almost guaranteed to, AT LEAST, accommodate Ryzen “II” and quite possibly Ryzen “III” processors. And, in such cases, only a new processor and BIOS update will be necessary to do so.

    This is not an insignificant point of differentiation.

    • anotherengineer
    • 3 years ago

    Power consumption looks good.

    https://www.techpowerup.com/reviews/AMD/Ryzen_7_1800X/14.html

    And games don't look that bad either:

    https://www.techpowerup.com/reviews/AMD/Ryzen_7_1800X/10.html

    • seansplayin
    • 3 years ago

    Just built new system with an R7-1800X, ASUS Crosshair VI Hero and Corsair Vengeance LPX 3200 memory. I’m telling you the numbers for Y-Cruncher and AIDA 64 are not accurate. I took screenshots to prove what I'm saying, have a look. Unfortunately I do not have a new Nvidia 1080 so I can’t double check any of the Game Benchmarks.

    http://i943.photobucket.com/albums/ad280/seansplayin/Y-Cruncher%203200mhz_zpskyrncdzb.jpg
    http://i943.photobucket.com/albums/ad280/seansplayin/AIDA%20Mem%20Read_zpsfhoj0syh.jpg
    http://i943.photobucket.com/albums/ad280/seansplayin/AIDA%20Mem%20Write_zpsz6surjfc.jpg
    http://i943.photobucket.com/albums/ad280/seansplayin/AIDA%20Mem%20Copy_zpsnq2gij4q.jpg

    • raddude9
    • 3 years ago

    After thinking it over I think I’ll get a new Ryzen rig, reckon I can do it for about $700…..

      • derFunkenstein
      • 3 years ago

      Might want to read this about the current state of motherboards first:

      http://www.legitreviews.com/one-motherboard-maker-explains-why-amd-am4-boards-are-missing_192470

      $700 for a Ryzen 7 1700, 16GB of DDR4 that has at the very least an SPD with 2666 speeds, and a new board is very doable, but I'm not sure it's a great idea right now. I'm working my way through issues with my B350 Tomahawk / Ryzen 7 1700 setup right now.

    • kuttan
    • 3 years ago

    Ryzen’s less-than-impressive gaming performance at the moment is an issue with Windows 10. Ryzen gaming performance in Windows 7 is better, according to a well-known member at the AnandTech forums.

    His quote:

    "I did some 3D testing and eventhou there is not nearly enough data to confirm it, I'd say the SMT regression is infact a Windows 10 related issue. In 3D testing I did recently on Windows 10, the title which illustrated the biggest SMT regression was Total War: Warhammer. All of these were recorded at 3.5GHz, 2133MHz MEMCLK with R9 Nano:

    Windows 10 - 1080 Ultra DX11:
    8C/16T - 49.39fps (Min), 72.36fps (Avg)
    8C/8T - 57.16fps (Min), 72.46fps (Avg)

    Windows 7 - 1080 Ultra DX11:
    8C/16T - 62.33fps (Min), 78.18fps (Avg)
    8C/8T - 62.00fps (Min), 73.22fps (Avg)

    At the moment this is just pure speculation as there were variables, which could not be isolated. Windows 10 figures were recorded using PresentMon (OCAT), however with Windows 7 it was necessary to use Fraps."

    https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/page-8

    If what he said is true then we can expect some decent gaming performance boost when Microsoft patches Windows 10 for Ryzen support.

      • derFunkenstein
      • 3 years ago

      I’m concerned the minimum rates don’t mean anything because they were measured with two different tools, but the averages do seem to show something. Still, I wouldn’t expect that to mean an across-the-board increase is inbound.

      That said, Microsoft did admit this is a problem through its official support Twitter account: https://twitter.com/MicrosoftHelps/status/839581647351738375

      So I expect that SOME games will get a boost from a scheduler fix, but I'm not getting my hopes up about an across-the-board increase.

      • DoomGuy64
      • 3 years ago

      This can’t be stressed enough. There is a 10+ fps difference in minimum frames because of this bug.

      https://www.guru3d.com/news-story/microsoft-confirms-windows-bug-is-holding-back-amd-ryzen.html

      I'd also like to point out that Guru3d tested Ryzen on an even playing field with equal speed memory (https://www.guru3d.com/articles_pages/amd_ryzen_7_1800x_processor_review,16.html), and there was much less difference in 1080p gaming, and no difference at WQHD resolution. They also pointed out that the Asus board supports 3600 speeds (https://www.guru3d.com/articles_pages/amd_ryzen_7_1800x_processor_review,13.html).

      Since WQHD is probably where most of us game at, there is no performance loss, but you gain double the headroom for multitasking. This video is pretty relevant in those terms: https://www.youtube.com/watch?v=ylvdSnEbL50

      Conclusion: Ryzen looks much better when taken in proper perspective and benched on an even playing field with same-speed RAM. No noticeable performance loss with plenty of multitasking headroom = big win. It's just that you can easily game benchmarks by maxing out Intel's RAM speed and cherry-picking tests that rely on single thread/clockspeed more than others. Combine that with the Win10 bug, and you've got the perfect narrative to downplay AMD's actual performance. Worth thinking about.

        • Airmantharp
        • 3 years ago

        1080p = 2k…

          • DoomGuy64
          • 3 years ago

          Then WQHD. I just called it 2k because math. Also, 2k is not 1080p either.

          https://en.wikipedia.org/wiki/2K_resolution

            • Airmantharp
            • 3 years ago

            2k ~= 1080p then, but if you’re talking about 1440p, use 1440p.

        • Redocbew
        • 3 years ago

        Yeah, because constraining two different architectures to the same memory speed is obviously a reasonable thing to do…

        It may be a slightly less bad idea than it would have been when comparing against a Bulldozer-derived chip, but that doesn’t make it good.

        I’m not disputing that Ryzen responds well to increasing memory speed. There’s a bunch of tests popping up which seem to indicate that, but I wouldn’t call it a “level playing field”.

          • DoomGuy64
          • 3 years ago

          Yeah, it actually is. Not only is that more than reasonable, it's the only ethical choice, especially if Ryzen supports similar speed memory. Using 2 different speeds skews the benchmarks in favor of the system using the faster memory speed, especially in bandwidth heavy tests. Memory speed isn't CPU clockspeed, nor am I saying to disable quad channel to match dual channel. You're not downclocking Intel CPUs to match AMD, just using the same memory modules. If both CPUs support high speed DDR, then the only proper way to benchmark on a level playing field is to use the same speed RAM.

          If TR used a board that perhaps didn't support faster DDR modules, and benched it against an Intel board that did, that tells me nothing about Ryzen's performance when using a board that does support higher speed DDR. So between this inconsistency and the Windows 10 bug, these benchmarks are complete bunk. They don't tell me how Ryzen performs with a patched Windows 10 using high speed DDR modules. Not to mention mixed workload testing wasn't done at all, which makes no sense for an 8-core, 16-thread prosumer product. Where's the prosumer tests? Makes no sense.

          TR makes it appear like Ryzen doesn't match up against the i7, but information gathered from outside sources says otherwise. A lot of the performance discrepancies can be chalked up to early adopter teething issues that can be avoided once you know about them. Not all of it, but certainly a large percent of the difference can be made up with a few tweaks.

            • Jeff Kampman
            • 3 years ago

            At the time of our tests, the only stable configuration that worked with our Ryzen chips was DDR4-2933 13-13-13-33. That’s already a pretty aggressive set of timings. Most motherboards will not be able to exceed DDR4-3200 because of the lack of an external BCLK generator, which only a couple boards have: the Asus Crosshair VI Hero and the Aorus AX370-Gaming K7. That’s why the Hero can tout DDR4-3600 support.

            There’s a catch, though. Some think that modifying the base clock on Ryzen CPUs has the potential to cause instability, reduced PCIe link speeds, or data corruption owing to the fact that certain uncore clocks (PCIe most notably) derive from the base clock and cannot be strapped independently. Higher BCLKs are necessary to get higher RAM speeds than DDR4-3200 and they should probably be viewed as extreme overclocking, not a 24/7 stable config. Z270 has no such issues.

            If a Ryzen builder installs Windows 10 on a new system (as it seems most will do, considering that Windows 7 isn’t officially supported by AMD), they have a right to know how it performs with the most current operating system Microsoft offers. If there is a “bug” or something causing issues then that’s Microsoft’s and AMD’s problem, not ours, and it needs to be fixed. We will retest if and when patches with a substantial potential to impact performance are released. For now, however, I stand behind the results we obtained.

            • Redocbew
            • 3 years ago

            Dude, seriously… Don’t make me link another Journey song here.

            My point was that how much memory bandwidth a chip needs in order to do its job and perform well is a function of its clock speed and architecture. Do you think an 8-core Atom is being held back because it supports “only” DDR3? Probably not. In general I find it strange that you can look at all the complexity and factors involved in determining final performance and expect everything to be exactly the same instead of being different.

    • Shouefref
    • 3 years ago

    The Ryzen 7 1700 seems to be a very sensible choice.
    And if you want more power, you can always overclock it to 4 GHz.

    • guruMarkB
    • 3 years ago

    So now this article and GAMING PERFORMANCE REVIEW is INVALID with the revelation that Windows 10 fails to handle task scheduling with hyperthreading properly.

    AMD Ryzen Performance Negatively Affected by Windows 10 Scheduler Bug
    [url<]http://wccftech.com/amd-ryzen-performance-negatively-affected-windows-10-scheduler-bug/[/url<] So for now all Ryzen gaming performance tests will need to be done on Windows 7.

      • Redocbew
      • 3 years ago

      https://www.youtube.com/watch?v=1k8craCGpgs

      • Jeff Kampman
      • 3 years ago

      Haha, no.

      • NTMBK
      • 3 years ago

      Windows 7 is dying.

    • Darthwxman
    • 3 years ago

    Personally, I would have liked to have seen Ryzen compared to the i7 using similar memory settings, just to make a more apples-to-apples comparison (especially since your system guide only recommends DDR4-3200 max). Just something to consider for the Ryzen 5 reviews; though hopefully Ryzen will have more motherboards/memory modules capable of running faster memory by then (I have seen some claims that Ryzen gets huge gains from faster RAM).

    • AnotherReader
    • 3 years ago

    Once the platform has stabilized, we need an article exploring the impact of memory speed on Ryzen. There was an article like this for Sandy Bridge: https://techreport.com/review/20377/exploring-the-impact-of-memory-speed-on-sandy-bridge-performance

    • evilpaul
    • 3 years ago

    People wanting high res gaming in CPU review should probably just wait a week for the 1080Ti reviews…

      • Anonymous Coward
      • 3 years ago

      Seems to me that GPU reviews typically take one CPU and use it everywhere… but I would certainly appreciate both i7 and Ryzen making an appearance.

    • Redocbew
    • 3 years ago

    These comments have prompted a poll including cheese. My faith in TR has again been rewarded.

    • Wonders
    • 3 years ago

    Hey Jeff, awesome review!

    My favorite reviews are those such as this one, which meticulously tease out absolute performance differences. I’d like to submit some feedback here: There’s also a real value in “crystal ball” reviews that match my desired build specs and real-world use closely enough to give me confidence about building a brand new system from scratch. It’s nice to read from a trusted source exactly how my chosen class of components will perform. Even more so when "taking the leap" to a brand new architecture.

    Like some others I am left wondering: What results, precisely, will a typical PC enthusiast who is building a new system from scratch in 2017 see in terms of actual game performance with "Ryzen inside" (as it were)? I am still somewhat interested in knowing the answer to this question (admittedly, not nearly as much as I was interested in absolute performance comparisons), and it would still be cool to hear it from TR, which is my most trusted source.

    • raddude9
    • 3 years ago

    Seeing as the lack of 1440p (and higher) reviews has disappointed many people, I thought I’d add that Techspot has just released a review/comparison of 16 games benchmarked at both 1080p and 1440p:

    http://www.techspot.com/review/1348-amd-ryzen-gaming-performance/

    And yes, the 1440p results are a very close match to the 1080p results. My own conclusion: GPUs have advanced to the point where 1440p should now be the norm for reviews. Why:

    • Apart from a few outliers, most games are still being held up by single threaded CPU performance at 1440p. So it's still a test of the CPU.
    • At 1080p, new GPUs often produce well over 100fps; can anyone seriously tell the difference between 130fps and 135fps? 1440p brings these rates down to the point where you might notice the difference. So the test is more practical, and less academic.
    • Yes, 1080p is the most popular resolution on Steam, but when people upgrade, they usually upgrade to something better.

    I hope nobody sees this post as a complaint, it's not; I really like TR's CPU reviews, and it's obvious that pretty much every review site had to strip down their reviews in order to get them out on time. Also, small disclaimer: I did get a shiny new (and awesome in every way) 1440p monitor last year after many years at 1920x1050 (and 1920x1200).

      • drfish
      • 3 years ago

      It would sure be nice if the Steam hardware survey also tracked the refresh rate of a system’s primary display.

      • Jeff Kampman
      • 3 years ago

      Even at 2560×1440, Techspot admits they’re introducing a graphics-card bottleneck that can hide differences between CPUs. I still haven’t seen a compelling case to switch away from our current approach.

      EDIT: Techspot also tested with an older BIOS than the most recent version available internally from Gigabyte, and I know that older firmware performs worse under some workloads than the private firmware does. We tested with that most recent version (F3n), not the F3 public BIOS. I’d be wary of those results.

        • derFunkenstein
        • 3 years ago

        Gigabyte needs to release this.

        Still, stilted performance is waaaaaay better than what’s happened to Crosshair VI owners on /r/AMD.

          • astrotech66
          • 3 years ago

          What are you referring to? Can you elaborate?

            • derFunkenstein
            • 3 years ago

            Multiple owners on Reddit are reporting that BIOS updates are bricking the motherboard. Some have reported bringing it back to life by using the special BIOS update USB port (Flashback) to re-flash the BIOS and then clear CMOS.

            https://www.reddit.com/r/Amd/comments/5xn51g/my_asus_crosshair_vi_hero_got_bricked/

            You can see it in action at 10:50 in this video, where just out of the blue it claims to be updating the BIOS and then dies.

            https://www.youtube.com/watch?v=SE4sxXva9Eg

            BTW I'd never heard of this YouTuber before but he calls out AMD's crap about 1440p and 4K gaming and comes to a lot of the same conclusions that the sane print reviewers did.

            • derFunkenstein
            • 3 years ago

            Probably also explains why there aren’t any AM4 board reviews yet.

            • astrotech66
            • 3 years ago

            Thanks!

        • raddude9
        • 3 years ago

        Thanks for the review, Jeff. I do agree that 1440p games testing CAN indeed hide the differences between CPUs, but I do think that the number of games where it does is distinctly in the minority now. Regardless of Techspot's own conclusion, I thought it was particularly telling that when they ranked CPUs in order of game performance, we get exactly the same order for the 11 CPUs they tested, in both 1080p and 1440p. Isn’t that proof enough that testing at 1440p does not hide CPU differences?

        Anyways, I’m looking forward to the 6-core Ryzen reviews already, it’s great to have some old-fashioned CPU discussions again.

        • DoomGuy64
        • 3 years ago

        How about because 1080p in a lightly threaded workload is not the usage scenario most of us are concerned about? People here want to know how this CPU performs under a heavy multitasking workload. You should have included say 1440p and CPU video stream encoding at the same time.

        It’s not that 1080p testing is bad. You are 100% correct in testing 1080p for CPU performance. It’s just that this is not representative of how people here would actually use this CPU.

        [url<]https://techreport.com/discussion/31366/amd-ryzen-7-1800x-ryzen-7-1700x-and-ryzen-7-1700-cpus-reviewed?post=1024164[/url<]

        [quote="slowriot"<]People know the CPU is not the bottle neck at higher resolutions. People are wondering about their own real world scenarios. People want to know "What if I have some applications that can make use of all the threads. Do I also suffer big in gaming at the resolution and settings I want?" and these tests make it hard to answer that. Reality may be that at 1440P there's very little delta between AMD and Intel parts. I'd personally like to know that and I can't even guess at that given the tests as presented. I feel hardware enthusiasts care so much about which part is "fastest at X" and not "best for my usage cases overall" that the tests just don't feel truly relevant to how people use their PCs.[/quote<]

        That said, I don't think this was a bad review, because it is very informative and well written. However, it misses the point of why we would buy this chip, and does not offer a test scenario for that. We end up having to read between the lines to get that information, but there is certainly enough variety in the testing here to gather an educated opinion.

        TL;DR: You didn't include a real-world mixed-workload test scenario, but the information is there regardless. People complain about having to use their heads to think. Maybe include some new mixed-workload testing next review. Or not. I'm not particularly unhappy with the review, just explaining why other people are. It's up to you guys to decide whether or not your benchmark suite should include extra testing based on user feedback.

          • Jeff Kampman
          • 3 years ago

          AMD tells this “user story” of the person who games and streams while CPU encoding a lot. I have a number of friends who make their livings doing YouTube and streaming and they all use Elgato devices to do it. There might be a market there, and I will probably do multitasking game performance benches in a follow-up article, but the notion of how “we” would use this chip in a certain way is infinitely malleable.

            • DoomGuy64
            • 3 years ago

            I was just using that as an example, because it was one that has been made before. I’m not saying it’s perfect, just that it is an actual example of a mixed workload test. However, that dismissive attitude is why people are complaining. Your loss for throwing those readers under the bus instead of a bone.

            I don’t really care either way, especially since I read multiple reviews before drawing a conclusion. It’s your exclusive regulars that you are aggravating by not doing more varied tests. I’m not complaining Jeff. I get why you did it, and it was a good review. However, I think that follow up article is what some people wanted to see in the first place.

            So far, my opinion of Ryzen is a solid meh. It’s a competent CPU that only excels in multitasking price/performance. I probably wouldn’t buy it, since it doesn’t match how I use my PC. Your current testing methods actually match my usage scenario pretty well, and the 7700K looks to be a good upgrade for me. It’s just that people who would buy Ryzen for mixed workloads were left out, and that’s why there are complaints. So hey, looking forward to the follow up.

            • NTMBK
            • 3 years ago

            Streamers seem like such a tiny niche of a niche. I just don’t get all the hype around it.

            • Redocbew
            • 3 years ago

            Choosing live streaming as a use case for Ryzen has to be a decision which was made after the fact. Clearly AMD believes in highly threaded workloads becoming increasingly common, but did they base the architecture of their new chip around CPU video encoding? Probably not.

        • Jeff Kampman
        • 3 years ago

        And Gigabyte just emailed with another new BIOS version. Good on them for fast updates but anybody trying to review these things is on shifting sands.

          • derFunkenstein
          • 3 years ago

          Their website now has a version F5c beta posted. Is this the one you’re referring to, or is there still another beta out there somewhere?

            • Jeff Kampman
            • 3 years ago

            F5c is the version I got.

            • AnotherReader
            • 3 years ago

            Did they let you know what changed in this version?

            • Jeff Kampman
            • 3 years ago

            Nope. Haven’t had a chance to install it yet and find out, either.

            • NTMBK
            • 3 years ago

            NVidia and AMD need to get together and sort out their scheduling, they’re being very inconsiderate 😛 Nice work on both reviews, by the way.

        • MileageMayVary
        • 3 years ago

        While I do like seeing performance at 1080p, I care about performance at 1440p because that is what I game at. If all CPUs are the same at 1440p because the GPU is the bottleneck, then I should just buy the cheapest, but that isn't always the case, and not all CPUs scale the same from 1080p to 1440p to 2160p.

      • Platedslicer
      • 3 years ago

      Someone must have sold me a fake GTX 1070 if new GPUs are supposed to produce well over 100fps at 1080p, because it barely manages to avoid dipping below 60fps in Witcher 3 when there’s a lot of stuff going on, and that’s without any kind of anti-aliasing. DX:MD, same story.

      I for one am much more picky about smoothness than extra pixels. My last buy was a couple of 1440p monitors, but those are 60Hz and used for working and browsing the web, [b<]not[/b<] gaming. I've no intention of retiring the high refresh rate 1080p any time soon. Either way, this is the CPU being tested. What's wrong with constraining the test to stress the CPU as much as possible, within reason?

        • End User
        • 3 years ago

        Those of us who game @ 2560×1440 and beyond are being underserved by CPU reviews that focus on 1920×1080. Ryzen is a perfect example of this, as some games take a CPU performance hit when playing at 2560×1440:

        [url<]http://www.guru3d.com/articles_pages/amd_ryzen_7_1800x_processor_review,19.html[/url<] I don't have a 4K gaming display but I want to see CPU reviews that show 2560x1440 and 4K gaming benchmarks.

      • f0d
      • 3 years ago

      Until absolutely every game averages over 100 fps with 60 fps minimums, I'm more interested in keeping my current high-refresh monitors (2560×1080 75 Hz and 1920×1080 144 Hz) than getting 1440p.

      I have seen many 1440p-and-above monitors, and IMO high refresh is much more important and improves my quality of gaming more.

      I can easily tell the difference between 60 Hz monitors and the higher 75/100+ Hz ones, and there have been many tests showing that lots of other gamers can too.

      Just as an example: [url<]https://techreport.com/news/25051/blind-test-suggests-gamers-overwhelmingly-prefer-120hz-refresh-rates[/url<]

      If you look at the TR benchmarks, 3 of the 5 games tested were constantly getting over 60 fps on all tests with all CPUs, and one of them is a very old game (Crysis) while another (Doom) is a shooter designed for fast reflexes, so it needs a high fps. Looking at your linked review, lots of games were having trouble keeping a minimum of 60 fps on all CPUs at 1440p, with many even dropping to 40 fps minimums and lower, which in my opinion is unacceptable gameplay.

      You said yourself the results were similar, so there really shouldn't be an issue; the 1080p results will just be the same as the 1440p ones, except it's easier to highlight the performance differences between the CPUs. Testing at 1440p:

      * will just be testing at a resolution that not many people have
      * will produce fps numbers too low for acceptable gameplay in some games
      * will not make any difference to the order in which the CPUs finish
      * will make it harder to see the performance differences between CPUs

      So I really don't see the point of doing it.

      • Anonymous Coward
      • 3 years ago

      If you come and talk about FPS you have the wrong perspective on the problem. It's right that 130 vs. 135 is boring, so talk instead about time spent beyond 8.3 ms (a perfect 120 fps).
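
      For anyone unfamiliar with the metric, here is a minimal sketch, using hypothetical frame times, of how a "time spent beyond 8.3 ms" figure can be tallied from a frame-time capture (an illustration of the idea, not TR's actual tooling):

          # Minimal sketch: sum the portion of each frame that exceeds a threshold.
          # The frame times below are hypothetical; real data would come from a
          # frame-time logger such as a Fraps or PresentMon capture.
          def time_beyond(frame_times_ms, threshold_ms=8.3):
              return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

          # A made-up one-second capture: mostly fast frames plus two spikes.
          capture = [7.0] * 120 + [16.7, 25.0]
          print(f"Time spent beyond 8.3 ms: {time_beyond(capture):.1f} ms")  # 25.1 ms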

      • RAGEPRO
      • 3 years ago

      I recently upgraded from a 4K IPS to a 1080p VA with blur reduction. So, y’know.

    • Vesperan
    • 3 years ago

    The focus on 1080p gaming performance (when using a top-of-the-line GPU) is bloody frustrating. Yes, Ryzen performs worse at 1080p, but by and large it's a theoretical difference unless you have a high-refresh-rate monitor. And when you're above that, you're GPU-constrained.

    Yes, Ryzen can't clock high enough to beat Intel in lightly threaded tasks, and it doesn't have high enough IPC to do it at current clocks. AMD was never going to win that battle, and they never claimed they were going to. The hype train has been enormous. They've eclipsed all the expectations we could have had even just six months ago, but we're going to slam the entire architecture over 1080p gaming of all things? It's absurd.

    I wouldn't mind so much if we were all dumping on something that might actually be perceptible, like... Java performance? (It's the only non-gaming benchmark TR has the 7700K winning over Ryzen.) But 1080p gaming, where your eyes can't tell the bloody difference?

    Ryzen is "good enough" for gaming. Just like the Intel i5s, or some of their i3s. At 1080p your FPS is over what you can perceive, and at any higher resolution it's the GPU that matters. Sure, there are fringe cases; there always are. Let's not focus on them.

    Kudos to PC Gamer for the Ryzen 1700 article based around an RX 480, even if it pits the slowest Ryzen against an overclocked Ivy Bridge to get its clickbait headline. So far it's about the only article out there that gives an indication of what a midrange Ryzen system's performance would look like.

      • Redocbew
      • 3 years ago

      I don’t think the reaction would have been any better had there not been any gaming benchmarks. The absurd part is the collective head-in-sand routine. By now it’s pretty clear that if it hadn’t been the supposed evils of testing at 1080p, then it would have been something else.

      • evilpaul
      • 3 years ago

      I didn’t wait for Zen (went 7700K) and don’t regret it. I wouldn’t mind 6 or 8 cores, but it would be to be able to game and have Plex transcoding something without any slowdown. If I’m running more recent emulators they benefit from fast single threads. I’ve got an ASUS 144Hz 1080p monitor and a 4K one and a 1080, so I can do either high FPS or high res. If you’re suggesting >60FPS isn’t perceptible, that’s tangibly untrue.

      AMD’s main problem with the Ryzen launch and gaming is that the R5 is probably the better gaming chip unless you’ve got a 144Hz monitor and the game’s not capped at 30 or 60 FPS. Games that aren’t heavily multithreaded will probably run about as well and you’ll have an extra $200 to spend on a GPU. Or better monitor if you’re using a 60Hz 1080p monitor still.

    • HERETIC
    • 3 years ago

    Closing in on 600 comments as I write this. Are we breaking records yet, guys?
    My summary on Ryzen:
    Has it delivered? Most certainly YES.
    Is AMD back? Most certainly YES. (With the sound of Arnie ringing in the back of my head, and for the older generation, let's not forget MacArthur.)

    What's impressing me is how efficient the 1700 is. Looking forward to seeing some under-volting/under-clocking articles to find where the sweet spot is on this architecture/process.

    The only minor disappointment was reading this morning that the 1500X is only going to be 3.5/3.7.
    I was hoping that on a smaller die AMD might have tweaked it for something like 3.8/4.3.
    Perhaps a few revisions down the track.

      • derFunkenstein
      • 3 years ago

      It’s not a smaller die. Most of the Ryzen 5 models have 16MB of L3 cache, which means there’s two quad-core modules with some parts disabled. Ryzen 3 MIGHT be a smaller die, but it’s more likely (early on, at least) a harvested full Ryzen die with a whole cluster disabled.

        • HERETIC
        • 3 years ago

        Knowing how AMD has operated in the past, you're probably right.
        More so if yields are bad.

        We've just been reading different rumors.
        Mine were:
        AMD would harvest the 6-core from the 8-core and nothing else.
        The 1500X would be 4-core/8-thread at 3.5/3.7 with 12MB of cache.

        • Anonymous Coward
        • 3 years ago

        I don’t think there will ever be a 4-core Ryzen that isn’t salvaged from either an 8-core part, or a 4-core part with a bad GPU. Its even the same socket they fit in.

    • basket687
    • 3 years ago

    This is now the most-commented TR article ever! (AFAIK.) The previous record was the Bulldozer FX-8150 review (585 comments): [url<]https://techreport.com/discussion/21813/amd-fx-8150-bulldozer-processor#metal[/url<]

      • Jeff Kampman
      • 3 years ago

      Not quite. Fury X has more by a fair bit. We are watching it climb the ranks, though.

      • Krogoth
      • 3 years ago

      Pfft, that’s small time.

      [url<]https://techreport.com/news/2799/dr-evil-asks-gxp-problems[/url<]

    • Thbbft
    • 3 years ago

    Gonna beat the 7700K like a blue-headed step-CPU.

    The $200 Ryzen 1500X is going to seem like a relative miracle gaming CPU when it releases in May, with no cache issues, most SMT and BIOS issues sorted out, and a number of the AAA benchmarking games optimized for Ryzen. It should overclock much better, too. It might even beat the 7700K in some gaming benchmarks.

      • Meadows
      • 3 years ago

      Let’s not get false hopes up just yet. Even if the issues are sorted out, it will be at least 10% slower than the 7700K, if not 15%. Right now it’s around 20% slower.

        • Thbbft
        • 3 years ago

        I”m considering the potential performance/efficiency gains of a single core cluster vs. a 2 cluster set-up in addition to the factors enumerated above.

        • bandannaman
        • 3 years ago

        [quote<]Let's not get false hopes up just yet[/quote<] Yeah! We prefer to get our false hopes up [i<]later[/i<].

      • derFunkenstein
      • 3 years ago

      It’ll be interesting to see if the 6-core or 4-core variants can hit higher speeds. While doubtful, doing so would make them very interesting products in a typical, more lightly-threaded gaming workload.

        • LostCat
        • 3 years ago

        The only major difference I can think of (since not many games are going to be tuned for eight-core procs) is that the six-core still has the same cache as the eight-core. It might not mean much with this arch, but it could make the six-core the standout product price/perf-wise here.

        I don’t know that I care about the four core, personally. It’ll be interesting to see though.

          • derFunkenstein
          • 3 years ago

          It’s one of the things I hope to get time to play around with when I get my Ryzen box built. Mostly to satisfy my own curiosity, I’d like to see if there’s more headroom in 6C12T. Not that I’d do that all the time, of course.

        • Meadows
        • 3 years ago

        Word on the street is that current Ryzen platforms have hardcoded memory timings for certain memory speeds and this complicates overclocking greatly. Here’s hoping the motherboard makers can sort that out with some sort of workaround in the coming weeks/months, since I’ll wait for up to 2 months before my major upgrade.

        By that time, I’d like to get a new AMD system that is utterly stable and hopefully overclocks beyond factory boost limits as well (or at least up to the boost limit, if nothing else).

          • derFunkenstein
          • 3 years ago

          Wonder if that means that lower memory speeds would improve overclocking then. I bought DDR4-3000 memory, and if sticking to the default 2666 clock improves things I’d at least consider it.

      • kruky
      • 3 years ago

      Ryzen 5 will be direct competition for the Intel i5. And there it will be 4 Intel threads vs. 12 AMD threads. I'm very interested in the results. If Ryzen wins, then AMD will take the Sweet Spot in TechReport builds 🙂

        • Redocbew
        • 3 years ago

        Adding more threads isn’t necessarily going to overcome IPC and clock deficits even if they are utilized by the applications. It seems unlikely that Ryzen 5 will be clocked any higher than Ryzen 7, but I guess we’ll just have to wait and see.

    • wesley5904
    • 3 years ago

    Without AMD we would all be at the mercy of Intel, their price gouging, and their lackluster innovation. At one time, long ago, we had Cyrix. VIA didn't make it as a CPU maker either.
    AMD was the first to reach 1,000 MHz. Do you remember the MHz myth that AMD debunked with the Athlon XP processors vs. the Pentium 4? AMD was also the first to come out with a chip with a memory controller built in. You can thank AMD for the x64 CPU, which Intel adopted. You can thank AMD for our systems now being able to address more than 3GB of RAM.

    Do you like paying $1,000 for an Intel CPU? I know I don't.

    AMD has delivered a new processor from scratch, an SoC (not just a CPU)! This new processor clearly beats Intel's in so many benchmarks. AMD did this all on a shoestring budget compared to Intel's massive one. It is clear to me this new SoC can compete with Intel in so many ways, and it should be embraced.

    We should all support AMD! I have built hundreds of computers with AMD CPUs and I will continue to do so.

    AMD SHOULD BE APPLAUDED! Thanks AMD!!!

    • basket687
    • 3 years ago

    Hi Jeff,

    How do you explain the big difference between the 6700K and 7700K scores in the DAWBench DSP benchmark (582 vs 678)?

    Thanks for the great review.

      • AnotherReader
      • 3 years ago

      Yes, that result is a real headscratcher.

        • Meadows
        • 3 years ago

        The result is 16% higher, but the processor’s base clock is only 6% higher. All else being equal, my best guess is that the 7700K somehow reaches a higher turbo state.

          • AnotherReader
          • 3 years ago

          I thought that as well. Unlike Broadwell-EP, Skylake shouldn’t downclock in AVX heavy workloads. So the difference should be max all core turbo for 7700K to base clock of the 6700K. That is certainly less than 16%.

            • Meadows
            • 3 years ago

            But it’s almost 13%, and the remaining little bit could be explained by optimisations.

            • AnotherReader
            • 3 years ago

            Kaby Lake has no microarchitectural improvements that would explain that delta.

            • Redocbew
            • 3 years ago

            I’m not sure what kind of optimizations Meadows was referring to, but if Kaby Lake is boosting more efficiently for some reason, then the last few percent may just be margin of error.

            • Meadows
            • 3 years ago

            I reckoned >3% is a pretty large margin of error, so there must’ve been some secret sauce somewhere.
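
            For what it's worth, the arithmetic being debated can be laid out explicitly. A small sketch, assuming the commonly cited stock clocks for both chips (those clocks are assumptions, not values measured on the test rigs):

                # Assumed stock clocks, in GHz; not measurements from TR's systems.
                clocks = {"6700K base": 4.0, "7700K base": 4.2, "7700K turbo": 4.5}

                def pct_gain(fast, slow):
                    # Percentage by which `fast` exceeds `slow`.
                    return (fast / slow - 1) * 100

                print(f"{pct_gain(clocks['7700K base'], clocks['6700K base']):.1f}%")   # 5.0%: base vs. base
                print(f"{pct_gain(clocks['7700K turbo'], clocks['6700K base']):.1f}%")  # 12.5%: the "almost 13%"
                print(f"{pct_gain(678, 582):.1f}%")                                     # 16.5%: the DAWBench gap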

      • Jeff Kampman
      • 3 years ago

      I’d put it down to boost behavior, although I’d have to retest to be sure.

    • shank15217
    • 3 years ago

    One thing for sure, no topic racks up more comments than AMD news and reviews.

    • The Jedi
    • 3 years ago

    This might be the first Jeff Kampman review I’ve read. Very, very good, and mega props to you.

    I did find myself going to TechPowerUp first, but, they didn’t have a review yet, just a review of reviews.

    I was disappointed to learn that XFR is basically a 100 MHz boost. I kind of fancied the idea that with some kind of refrigeration you might be able to test the limits of the unlimited.

    Hopefully AMD will get some more optimizations out there. I’d like Ryzen to have been able to benefit from higher performance DDR4.

    • Thbbft
    • 3 years ago

    Ryzen 1.0 gets AMD back in the game.

    Ryzen 2.0, with software architectural optimizations in full swing, motherboard BIOS issues sorted out, higher clock speeds and increased IPC is cage fight time.

      • tipoo
      • 3 years ago

      Yeah, that’s why I’m most optimistic about it right now. Intel isn’t going to magically pull another 50% IPC boost out of silicon out of a hat, so AMD can hopefully polish off all those edge cases where they’re a bit behind and the two can stay closer than they have been in a decade.

      • NTMBK
      • 3 years ago

      Even Ryzen 1.0 is enough for a server cage-fight. So it doesn’t clock at 4GHz? Big whoop, neither do Intel server chips.

    • Fonbu
    • 3 years ago

    Ultimately, people are mixed about this processor. Some disappointment is understandable, and rightfully so. But even with some of the weaknesses shown, it does have strengths. Competition has come back to the HEDT space, and with new competition come interesting system guides. We will see the bugs worked out, or even major bugs come to bear. More analysis of this new processor will also clarify a whole lot of unanswered questions from the initial, somewhat rushed reviews from all websites.

    • joselillo_25
    • 3 years ago

    Let's wait for more useful tests of things that normal people use computers for, like visiting xvideos or torrent sites and dealing with the JavaScript apocalypse, multiple tabs, and tons of malware.

    • kilkennycat
    • 3 years ago

    AMD is following Intel’s ‘leadership’ in only supporting Windows 10 with Ryzen silicon a la Kaby Lake (and later). I note from the Steam survey that just under 50% of those surveyed in the Windows group are using Windows 10. Lack of driver support for Windows 7-64 in particular is likely to hamper early adoption of Ryzen by the high end gaming community, now that the free updates to Windows10 have been terminated by Microsoft. Worldwide adoption of Windows 10 appears to have stalled in the last couple of months, probably most likely due to professional users disgust at Microsoft’s undisciplined fiddling with the OS and poor QC. Again, because of AMDs choice of Windows 10 only, it seems that evaluation/adoption of Ryzen for professional applications is likely to be unnecessarily crippled in the near term.

    • DoomGuy64
    • 3 years ago

    Count Chuckula was wrong. Ryzen is pretty competitive, although not an Intel killer in single threaded apps. The real takeaway is multitasking bang 4 buck, and Ryzen is neither slower than sandy bridge or vastly under-performs with AVX. (FUD rumors were wrong)

    The real question that needs to be asked isn't about single-threaded performance, but whether or not you need an 8-core CPU. AMD showcased Ryzen playing games while encoding video streams on the CPU. That's the kind of scenario this chip would be useful for, not benchmarks catered to quad-core use.

    Gaming performance needs a caveat as well, since anyone playing above 1080p will be more GPU-limited than CPU-limited, and more cores might contribute to less hitching in some scenarios: random AV scanning in the background, for example.

    If you only need the capability of a quad core, stick to a quad core. Best thing is, prices dropped. Get what you want, and save money doing it.

      • synthtel2
      • 3 years ago

      You and Chuck were both mostly right (as usual on such matters, in my estimation), you just both like to phrase things in combative ways, so it always looks like your opinions are much further apart than they really are.

      • ultima_trev
      • 3 years ago

      According to the 99th-percentile FPS chart, the 1800X offers about 94% of the 5960X's gaming performance, and about 87% of the 6950X's.

      The problem is that the 1800X only gives 88% of the i7-7700K's gaming performance. It will be better in most workstation apps, although given AMD's market share the 1800X should have been priced at only $300... it would have sold so well at that price.

      • chuckula
      • 3 years ago

      [quote<]The real takeaway is multitasking bang 4 buck, and Ryzen is neither slower than sandy bridge[/quote<]

      Never said it was. I said people claiming it was better than Broadwell across the board were wrong (and I was right). I actually said that I assumed RyZen to be about 50% faster core-for-core, clock-for-clock compared to an FX-8370, and Lisa Su herself quoted 52% over Piledriver just a few days ago. What I got wrong was assuming that AMD would be much more conservative with the clock domains to keep to a "95 watt" TDP. Had AMD called these parts 140-watt chips, then hopefully I would have gotten the target clock domains right.

      [quote<]or vastly under-performs with AVX.[/quote<]

      Did you read TR's review? Especially DAWBench or Y-Cruncher (I'm not even bringing the STARS Euler3D test in here, to be "fair")? If there was a workload where a RyZen part at 3.3GHz had a margin of victory like that over an Intel part running at at least 3.6 GHz (or higher, given AMD's hyper-aggressive turbo clocks), I'd like to see it.

      AMD did a lot right with RyZen, since RyZen bears a striking resemblance to recent Intel chips with only a few notable differences (the split integer/FP scheduler and the uncore being the two biggest). However, if what AMD did right is taken directly from Intel products, people like you who can't stop screaming about how bad Intel is should stop and ask yourselves why AMD is copying supposedly stupid chip architectures in its own products.

        • DoomGuy64
        • 3 years ago

        [quote<]Never said it was, I said people claiming it was better than Broadwell across the board were wrong (and I was right).[/quote<]

        No, you're wrong, because you said it couldn't match Broadwell at all. Ryzen is more competitive than you predicted, plus far more useful in multitasking. BTW, I never said I thought Ryzen was going to beat Intel's latest CPUs across the board, only that I figured it would be competitive against Haswell. I'd say AMD not only met but exceeded my expectations.

        [quote<]people like you who can't stop screaming about how bad Intel is[/quote<]

        Lots of strawmanning, but what else should we expect from you? I don't think Intel makes bad chips, but they have been price gouging and keeping us stuck at 4 cores for years, not to mention screwing us over on upgrades and segmenting features. None of that is false, and it's perfectly within my rights to complain about it. The only reason I haven't said more against AMD is that they haven't been worth discussing. Go harass Ronch if you want to troll someone who constantly mentions old AMD CPUs. The only scenario where last-gen AMD CPUs were worth buying was the APUs, for, say, a cheap laptop with tolerable integrated graphics.

    • DrDominodog51
    • 3 years ago

    Shit. My Z97 Mobo just decided it no longer wished to live. I guess I’ll be getting one of these sooner or later to replace my 4690k.

    • Srsly_Bro
    • 3 years ago

    Again, TR is reduced to shambles.

    Ryzen’s worst resolution is 1080P and to thank Scott for the review sample, TR is lazy and does only that resolution. Idk why this site is even up anymore if nobody cares and can’t be bothered to do two resolutions and more than a handful of games. I’ve said shut it down before..and I’m saying it again. If this review wasn’t rushed and made in 4 hours, ppl have really stopped caring.

    RIP TR

    Only the loyal and obedient flock of sheep keep this place going.

      • Airmantharp
      • 3 years ago

      How much did you get paid for this post?

        • Redocbew
        • 3 years ago

        Not very much I hope. It wasn’t a very good post. 🙂

      • raddude9
      • 3 years ago

      Sorry to hear it bro.

      In order to help your transition, I've compiled a list of reviews and whether they test games on Ryzen at high resolutions:

      [url<]http://www.pcgamer.com/the-amd-ryzen-7-review/[/url<] 1080p only. Testing at 1440p is "complete bunk"
      [url<]http://www.legitreviews.com/amd-ryzen-7-1800x-1700x-and-1700-processor-review_191753[/url<] Mostly 1080p, but 3 games tested at 1440p and 4K
      [url<]http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/74814-amd-ryzen-7-1800x-performance-review.html[/url<] 1080p only
      [url<]https://www.pcper.com/reviews/Processors/AMD-Ryzen-7-1800X-Review-Now-and-Zen[/url<] 1080p only
      [url<]https://arstechnica.co.uk/gadgets/2017/03/amd-ryzen-review/[/url<] 1080p only
      [url<]http://hexus.net/tech/reviews/cpu/102964-amd-ryzen-7-1800x-14nm-zen/[/url<] 3 games at 1440p
      [url<]http://www.kitguru.net/components/cpu/luke-hill/amd-ryzen-7-1800x-cpu-review/[/url<] Mostly 1080p, but 3 games at 4K
      [url<]http://www.tomshardware.com/reviews/amd-ryzen-7-1800x-cpu,4951.html[/url<] 5 games at 1440p
      [url<]http://www.tweaktown.com/reviews/8072/amd-ryzen-7-1800x-cpu-review-intel-battle-ready/index.html[/url<] 3 games at 1440p
      [url<]https://www.guru3d.com/articles-pages/amd-ryzen-7-1800x-processor-review,1.html[/url<] 1080p only
      [url<]http://www.techspot.com/review/1345-amd-ryzen-7-1800x-1700x/[/url<] 1080p only
      [url<]http://www.anandtech.com/show/11170/the-amd-zen-and-ryzen-7-review-a-deep-dive-on-1800x-1700x-and-1700[/url<] No gaming tests, but lots of other tests that put Ryzen at the top of the pile

      So, seeing as Tom's is rather biased, that only leaves you with Hexus, KitGuru, and TweakTown. Good luck. After looking through that lot myself, I can say that Ryzen loses by only a few fps at 4K, and at 1440p the results look to be about halfway between the 1080p results and the 4K results, i.e. Ryzen still loses out to the 7700K at gaming, but by a reduced amount.

        • Pancake
        • 3 years ago

        Yeah, but the result is that Ryzen still loses. Srsly_Bro is still salty and nothing can salve the pain in his butt.

        Not that any of this is going to stop me from building a Ryzen 7 1700X Linux box (once the kernel, drivers, reliability, etc. are proven) which will predominantly run my Java code.

      • Jeff Kampman
      • 3 years ago

      I benchmarked Ryzen in the same way that Scott has benchmarked CPUs since time immemorial. If that disappoints you, your beef is with me, not our methods.

      • Meadows
      • 3 years ago

      People use 1080p the most for gaming, and it conveniently creates a moderate CPU bottleneck too nowadays, which is useful in a CPU review.

      Testing anything else would’ve been a bad call.

        • raddude9
        • 3 years ago

        I disagree. From looking at all of the 1440p Ryzen reviews I could find, it's clear that the CPU bottleneck that exists at 1080p also exists at 1440p; it's just a bit less pronounced. Only at 4K do modern games become entirely GPU-bound.

        Also, while 1080p is the most prevalent gaming resolution, wouldn't most people buying a $500 CPU at least be looking at pairing it with a better-than-average monitor?

          • Ninjitsu
          • 3 years ago

          I responded to this in the other thread too – there’s unlikely to be a correlation between CPUs and monitor resolutions.

          Workstation CPUs aren’t usually bought for gaming anyway.

          You could also have a triple 1080p setup for work, where you game on the primary monitor only.

          Etc.

            • freebird
            • 3 years ago

            Or you could play on triple monitors at home and only be “provided” one monitor at work… like I used to do…

            So are you saying AMD has only released workstation CPUs right now, since they are all 8-core/16-thread?

            Am I not allowed to play/game on my AMD workstation CPU even if it falls into the high end gaming PC price range?

            It isn’t ONLY a GAMING PC, is it? Even strictly GAMING CONSOLES aren’t any more with Netflix/Amazon Prime/Youtube/Skype.

            Didn’t the PC get it’s name because it was now a PERSONAL computer, to each his own…

          • derFunkenstein
          • 3 years ago

          So what are you disagreeing about? That 1080p is only bottlenecked by the CPU? That a plurality (43% on Steam, according to the hardware survey: [url<]http://store.steampowered.com/hwsurvey[/url<]) of people use 1080p? You can disagree about facts all you want, but those are absolutes right now.

            • freebird
            • 3 years ago

            Maybe the 1080p review should be posted on STEAM then… since those FACTS are ABSOLUTE right now…

            • derFunkenstein
            • 3 years ago

            Nevermind. Seems you’re too busy grinding your axe to have a civil discussion. I’ll come back later.

            • rechicero
            • 3 years ago

            You used the maximum possible memory speed for each CPU. Your job is “telling” us how a CPU is going to fare in our rigs, and that doesn’t mean isolating. So you did well.

            About 1080 and gaming benchmarks in general… I really think you’re wrong on that.

            1.- RyZen is a CPU that goes for Bang per buck. Pairing it with a Titan X seems… odd. If you are going to buy that GPU, you don’t care about bang per buck. At all.

            2.- 1080 gaming data is useful: a lot of ppl plays that way. But you shouldn’t obsess about surveys. In that same survey you can check the games that are played and only 1 out of 5 games benchmarked are in the list. And I don’t say you did wrong about that! But, IMHO, you are being wrong using the survey to defend one decision while ignoring it for another ;-).

            3.- Again, the bang per buck is probably in the 1080-1440p range, with a bang per buck GPU like RX 480 and GTX 1060. And this is not IMHO… this is IYHO, as it’s what you say in the building guide ;-).

            4.- Last, but not least… I can understand testing with fresh Win10 installs and with no other programs in the background: it’s easier and more consistent. But, at the same time, it’s far from being real life (Apart from pro players, who is going to play without several programs in the background, like mail client, several tabs of the browser, etc?) and it happens to be skew thing in favour of CPUs with less cores (or maybe not! I don’t know because AFIK nobody tests any other way).

            It’s like when you test thermal and noise… in an open bench. Again understandable, again for easiness and consistency of the benchmark, but it got us with just reference model pushing the hot air out of the cases. And I really wonder if the aftermarket coolers would be completely different if benchmarks happened in cases.

            And please, don’t feel attacked. If I tell you this is because the thing that makes TR different is that you guys evolve, you don’t take a bureaucratic approach on benchmarking. You THINK. So please, think about this. You already did something great with Inside the second. Don’t stop on that.

            • Jeff Kampman
            • 3 years ago

            Not sure why you’re lecturing derFunkenstein when he doesn’t even write for us any more (much as we otherwise love him).

            • derFunkenstein
            • 3 years ago

            <3 Jeff, you tested Ryzen right. Don't let anybody else tell you differently.

            But yeah, that reply notification was a very WTF email to wake up to this morning.

            • rechicero
            • 3 years ago

            Sorry for that, then ;-).

            • rechicero
            • 3 years ago

            Apart from the confusion and your surprise, you shouldn't ever say something like "don't let anybody else tell you you didn't do something right".

            The only way to improve is letting people do that, listen to their points and, if you feel they have some merit, change accordingly. There is no improvement without criticism.

            • derFunkenstein
            • 3 years ago

            Alright, let’s go over your four points:

            [quote<]1.- RyZen is a CPU that goes for Bang per buck. Pairing it with a Titan X seems... odd. If you are going to buy that GPU, you don't care about bang per buck. At all.[/quote<]

            It's not about what someone would build; it's about what this CPU can deliver without anything else getting in the way. Anything else only drops the ceiling on every CPU, and when several hit the same wall, the data you've collected is useless for deciding which CPU you want to buy.

            [quote<]2.- 1080 gaming data is useful: a lot of ppl plays that way. But you shouldn't obsess about surveys. In that same survey you can check the games that are played and only 1 out of 5 games benchmarked are in the list. And I don't say you did wrong about that! But, IMHO, you are being wrong using the survey to defend one decision while ignoring it for another ;-).[/quote<]

            It's true, I'm into the Steam hardware survey, but only as something that says "hey, this is a valid data point." Combine that with #1 and you have a valid data point that a lot of people can use.

            [quote<]3.- Again, the bang per buck is probably in the 1080-1440p range, with a bang per buck GPU like RX 480 and GTX 1060. And this is not IMHO... this is IYHO, as it's what you say in the building guide ;-).[/quote<]

            At 1440p you'll be completely GPU-limited. This isn't a GPU review; it's a CPU review.

            [quote<]4.- Last, but not least... I can understand testing with fresh Win10 installs and with no other programs in the background: it's easier and more consistent. But, at the same time, it's far from being real life (Apart from pro players, who is going to play without several programs in the background, like mail client, several tabs of the browser, etc?) and it happens to be skew thing in favour of CPUs with less cores (or maybe not! I don't know because AFIK nobody tests any other way).[/quote<]

            I close everything before I play because I care about stuff not hitching all the time (memory use more than CPU use). The real world is messy and can't be repeated or replicated easily. Whoa, I just got 800 Gmail messages in the middle of this test; I'd better set up a script to send myself 800 in all the others, too. Uh, no.

            And so I stand by the statement that Jeff did it right. Three of your points are the AMD brigade's talking points.

            • rechicero
            • 3 years ago

            But I’m not part of any brigade…

            I really think isolating the CPU should be only one part of a benchmark, as important as synthetic benchmarks: it's interesting, it tells you things... but it can "lie" to you.

            Think for a moment: when that Gmail moment happens, and we have Skype and some browser tabs open, is an i5-7600 going to be better than an i7-6700 in several games? If you have any doubt about that (and I do), then we need that kind of test.

            And now, if instead of a Titan X, We use something more sane, what?

            And of course, I was the first to say that having a mail client, etc. in the background makes benchmarking more difficult and less consistent. That's why I would trust only TR to think about a way to do it. Yes, maybe not a script sending 800 messages, but a check against an internal server of sorts, or something like that, scheduled in the background.

            But really, the CPU is just one part of a rig. It's good to isolate its performance, but IMHO it's really bad if you ONLY isolate its performance. Because it might be that, in the real world, you would be better served by something different (or just cheaper!).

            And no, for me it’s not an AMD thing. Another thing that I would love is testing GPUs in closed rigs. Again with issues (less convenient, less reproducible), but, again, closer to real world results. The world we live in 😉

            Anyway, maybe you are right and I'm not...

            • Redocbew
            • 3 years ago

            Testing a single component is the only way to get results that have any useful point of reference. If you test everything all at once you could figure out that one machine is faster than another, but you wouldn’t know why until you started testing each component individually.

            If you don’t care about why, then what are you doing here? 🙂

            • rechicero
            • 3 years ago

            I probably didn’t explain my point correctly. The thing is testing the component, but in a way closer to how it’s going to be used.

            I really understand your point and it’s really interesting. I don’t say isolating is bad: it tells you thing. It tells you the whys… But if you just do that, you can use synthetic benchmarks and call it a day.

            • Redocbew
            • 3 years ago

            Use cases are sticky that way, but the idea here is that these machines are entirely deterministic. There is one way that they behave regardless of what we're using them to do. Testing each component individually helps us figure out what that is.

            Perhaps it’s not needed, but I also want to point out that isolating components isn’t what makes a test synthetic. A synthetic benchmark is something like 3DMark that cobbles together a workload which developers believe will be representative of a particular task. It’s an approximation of what to expect in terms of performance. How good the approximation will be depends on the choices made while assembling the test workload. That’s in contrast to an application benchmark which would be an actual game like Doom, or something like Handbrake which performs the task its self. The workload is what it is here, because the point is to do work that’s practical instead of just running the system through its paces.

            A good synthetic benchmark may tell us about aspects of performance we otherwise wouldn’t know about, but application benchmarks take the guess work out of how close to the performance of a “real” workload we might be.

            In either case, we’ve still got to know what part(s) of the machine are responsible for providing us with the performance we see in order to make any sense of the results.

            Editpalooza: Insomniac posting strikes again!

            • derFunkenstein
            • 3 years ago

            BTW I’d love to write again of my job allowed it. I’m thrilled that the staffing situation seems pretty stable around here, but if you guys go advertising for a full-time writer, expect to see my application. :p

            • rechicero
            • 3 years ago

            I wasn’t lecturing (or, at least, wasn’t my intention to do it).

            I just wanted to give my 2 cents to make the site even better (or not; maybe my POV has little merit!). But instead of looking at the pointing finger, let's look at the moon.

            But, of course, it’s my opinion and I still think you do things better than most. I just feel like, sometimes, it’s not about “hiding” difference but offering a good idea of what you buy. And I don’t say your review doesn’t do that, but I have some doubts because of… the reasons I wrote:

            * GPU not in the range of the CPU (I assume the point of RyZen is the sweetspot, so its natural fit should be a 480 or a 1060).
            * Little to nothing running in the background (when a normal user is probably going to have a mail client, some browser tabs, antivirus, printer thingies, etc.)
            * only 1080 when 1440p seems like the way to go.

            I understand you feel like those things could "hide" differences between CPUs, but those things are real life. If we eliminate them, we might create differences that a user would never actually notice. For that we have synthetic benchmarks, haven't we? And if in real life two CPUs are pretty much the same, then let's differentiate by price.

            And everybody else seems to do exactly the same as you. You can differentiate yourselves by going the extra mile.

            But, again, this is not a lecture, not even criticism. It is just what I think (and I'm not a fanboy; I have a Xeon running my rig and I'm GPU-agnostic: I just grab the best perf per dollar at the moment I need to buy), and I tell you because I like your work and, especially, the TR mentality of giving something more than the rest.

        • LostCat
        • 3 years ago

        I don’t know, I think most people at 1080p still have a 60hz monitor, so they should’ve just shown ’60’ on every graph too.

        While I think not including 1080p is ridiculous, I think only testing at it is equally ridiculous.

      • Ninjitsu
      • 3 years ago

      srsly, bro?

        • Srsly_Bro
        • 3 years ago

        Don't be hurt, bros. Just put some effort into the review. I'm sure a site like Guru3D has done a more complete review. That's not a good look...

          • Krogoth
          • 3 years ago

          You realize that review was made on short notice, and the editor did mention that they will do a follow-up on other elements such as power consumption/efficiency?

          Hardware reviews take time, bro, especially in-depth ones.

            • Srsly_Bro
            • 3 years ago

            Did you even read my post? There is a sentence relating to being rushed. Please go read before getting emotional and cheerleading.

            • Krogoth
            • 3 years ago

            You clearly didn’t read the article in its entirely my friend. The editor made it crystal clear that this was just a quick review on the [b<]second page[/b<]. [quote<]Thanks to shipping delays, a constant stream of BIOS and software updates, and other headaches, we simply ran out of time to complete our testing before this morning's NDA lift. Because of those circumstances, we elected to paint as complete a picture of the Ryzen 7 CPUs' performance as possible now while leaving some other details only informally explored. We've completed all our usual productivity and gaming tests, and we think we can offer a solid idea of Ryzen's value for system builders of all stripes. Unfortunately, we had to make a few cuts from our schedule to achieve that goal. Overclocking performance and power efficiency measurements will have to wait for a separate article, as will platform performance measurements for X370 like USB 3.1 transfer speed and NVMe storage performance. We apologize in advance for the omissions, but we think you'll enjoy the rest of our review. Let's get to it.[/quote<]

          • Visigoth
          • 3 years ago

          Then why the f*ck are you still here? Don’t let the door hit your ass on your way out.

          What a d0uchebag.

      • Krogoth
      • 3 years ago

      0/10

      Nice bait mate

        • christos_thski
        • 3 years ago

        No, you see, he’s a “bro” not a “mate”.

        Despite basking in the performance of my AMD-made R9 Fury (I got one so cheap I couldn't resist pawning off my RX 480 to a friend), I have to say these fanboys aren't doing AMD's popularity any favors. This stupid crusade attacking tech sites left and right for not benchmarking Ryzen under ridiculous conditions is so... annoying. I haven't seen so much fanboyism since the early GPU holy wars...

          • Redocbew
          • 3 years ago

          Given all the hype I guess it should have been expected that some people would react this way, but yeah it is annoying. If someone wants to disregard the work done to figure all this out and buy what they want anyway I say go right ahead. Ryzen looks to be a decent CPU even if it can’t match Kaby Lake, so they’ll probably be happier that way than if they had picked the “right” components for their use case anyway. It’s just a shame they can’t do so quietly. 🙂

      • ultima_trev
      • 3 years ago

      It’s even “worse” at 720P since that introduces a bigger CPU bottleneck. But I emphasize worse as it’s still good. 94% of the 5960X in 1080P gaming performance, 92% of the 5960X in 720P gaming performance. If we’re comparing to the 6900K, it’s about 87% in 1080P gaming and 82% in 720P gaming. If AMD can get around their weaker memory controller and unique SMT implementation through software/OS/UEFI optimization, it could bridge that gap quite easily.

      You can’t really expect the 1800X to beat the 6950X or the 6900K so soon as the platform is immature. Give it time.

      • derFunkenstein
      • 3 years ago

      Bad day?

      • maxxcool
      • 3 years ago

      [url<]http://www.techspot.com/articles-info/1348/bench/1440_All.png[/url<]

      Here you are. Verdict: it looks like it requires patching. In one scenario the 7700K's MINIMUM framerate beats the 1700X's HIGHEST framerate. In most of the rest of the gaming tests it is exactly as expected: 90% of a 7700K.

        • derFunkenstein
        • 3 years ago

        What presenting the average does is downplay where there's an actual problem, because some games perform worse since they need more threads (Mafia III, for example). An average of 16 games that all run at different rates sucks. An average of 16 games where I don't care about 15 of them sucks. An average of anything sucks.
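
        A quick, purely hypothetical illustration of that point; the numbers below are made up, not benchmark data, and just show how one problem title barely moves a 16-game average:

            # Fifteen smooth games and one problem title, all hypothetical numbers.
            fps = [120] * 15 + [45]
            print(sum(fps) / len(fps))  # ~115: the average barely flinches
            print(min(fps))             # 45: the per-game view exposes the outlier immediately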

    • Gastec
    • 3 years ago

    There is something I don’t understand and I wanted to ask for a long time. If the tests are done at different RAM speeds isn’t that influencing the results? E.g. in the article the Memory speed for Ryzen CPU is 2933 MT/s, for Intel Core i7-5960X is 2400 MT/s , while for Core i7-6700K/7700K is way higher at 3866 MT/s.

    Also a review of AMD Ryzen 5 1600X, when it will be released, would be highly appreciated. That should be a more interesting CPU for gamers , it has the same 16MB cache and frequencies as the Ryzen 7 1800X, but for only $260.

      • Jeff Kampman
      • 3 years ago

      I used the maximum possible memory speed for each CPU given processor and motherboard support. It absolutely influences the results.

        • Shobai
        • 3 years ago

        I think what you’ve done is the only sane way to have done it, regarding memory speeds. Given that AMD expects to be releasing a microcode fix to bring RAM speeds up to the rated 3600MHz, though, do you intend to retest anything if/when such a patch appears?

        • Ninjitsu
        • 3 years ago

        Hmm i’ve been thinking about this too. On one hand it makes no sense to artificially gimp the faster CPUs with a memory bottleneck, on the other hand it doesn’t remain a pure CPU comparison.

        However it’s a more real world test, and therefore makes more sense. A pure isolated CPU comparison in this case would be academic at best, and moreover be misleading.

        So in conclusion I think what you’ve done is the better way of doing it.

          • Shobai
          • 3 years ago

          I think that it’d be interesting to compare the 6700k and 7700k CPUs at the Ryzen RAM speeds, in order to get a bit of an idea about how Ryzen’s performance /might/ be able to improve [b<]if[/b<] AMD manage to 'ungimp' Ryzen's memory limitations with this rumoured microcode update. For one thing, I'm intrigued by the reports suggesting that AMD was able to get better memory badwidth out of a given RAM clock than Intel - I can't see how that could be done.

        • Shobai
        • 3 years ago

        [url=http://www.eteknix.com/memory-speed-large-impact-ryzen-performance/<]eTeknix[/url<] are reporting reasonable gains from overclocked memory - in The Witcher 3, with their GTX 1080 at 1080p, running the memory at 3200MHz saw a 16% FPS increase over 2133MHz and almost 6% over 3000MHz [for whatever FPS reporting is worth, of course].

    • anotherengineer
    • 3 years ago

    It would be nice if we could see Zen against the Phenom II in these reviews. I mean, if we can go back 5 generations of Intel chips, the Phenom II is only about 3 generations back for AMD.

    • ronch
    • 3 years ago

    After letting all this Ryzen news and all the reviews sink in for a while, I realize that, despite being a very respectable design that clearly brings AMD back into contention, there are some parallels and similarities to what we saw with the Zambezi launch so many years ago. The following phrases are making the rounds again:

    1. It’s a powerful processor for productivity and content creation but it’s not the best for gaming.

    2. Do we really need 8 cores?

    3. Software needs to be optimized for it.

    4. Tremendous hype before launch, then the reviews come in with some disappointing results. To be fair to Ryzen, the reviews only cite gaming as the shortfall, and the bottom line is that it's still a tremendously great chip to have in one's machine, and the price makes it very accessible to those who need it.

    I'm not saying Ryzen is bad, not at all. It's not very good with games, but it's still one helluva chip. I'm just saying there are a few parallels, but overall it's far better received than the FX was in '11. I think the launch was a bit hurried, but it's also impressive how they pulled off this advanced architecture in just 4 years (Bulldozer took 6+ years) and on a limited budget. Zen is still a very new architecture with many things that could be smoothed out, enhanced, or upgraded. It will be very interesting to see how far this architecture can evolve.

    • DrDominodog51
    • 3 years ago

    Apparently Ryzen needs an external clock generator to overclock the BCLK, and the subtimings are fixed for each memory clock speed; 2400 MHz, for example, has tighter subtimings than 2666 MHz, and it is impossible to change these.

    The fixed subtimings have led to memory compatibility issues. Maybe that G.Skill kit wasn't released for nothing.

    Source: [url<]https://youtu.be/uEyov-mgRUI[/url<] Buildzoid is a well-known LN2 extreme overclocker.

      • Meadows
      • 3 years ago

      that hair tho

        • DrDominodog51
        • 3 years ago

        It’s a joke that people watch his videos for the hair.

        If anyone is interested in extreme overclocking (or overclocking in general), I would highly recommend watching his live streams and PCB breakdowns. He’s a really cool guy.

        • Chrispy_
        • 3 years ago

        I’m knocking on 40 and I still wake up looking like that every day.

    • Walkintarget
    • 3 years ago

    Well, after reading this thorough review, I think it's quite clear which CPU I'm gonna pick up...

    FX 8370 for teh WINZ !!! 😛

      • tipoo
      • 3 years ago

      I like my frame pacing to surprise me!

      • ronch
      • 3 years ago

        I think that, given how the FX-8350 is just $140, it's not such a terrible chip for those who don't care about having the latest and greatest and want to stick with AMD. And it's also a proven design.

        • Demetri
        • 3 years ago

        Oh it’s proven alright…

        • Redocbew
        • 3 years ago

        “Proven” isn’t the word I’d use…

        Stick to cheerleading for Ryzen dude. You’ve actually got something to go on there.

        Anyone who really needs 8 threads shouldn’t be buying an FX-8350 right now, and anyone who just needs a decent CPU can get an i3 6100 for $110, or a Pentium for even less.

          • DoomGuy64
          • 3 years ago

          Ronch is as much of an AMD fan as Colbert is a conservative. Troll parody. It takes a special kind of person to appreciate the humor. Especially when the vast majority of it isn’t the slightest bit funny.

            • Redocbew
            • 3 years ago

            You’re not funny either. Hah!

        • Jeff Kampman
        • 3 years ago

        Seek treatment for Stockholm Syndrome.

          • ronch
          • 3 years ago

          No need to be rude, Jeff.

            • derFunkenstein
            • 3 years ago

            You’re certainly looking for confirmation that what you have is still viable, and it’s really not.

            • ronch
            • 3 years ago

            It may be, it may not be. It depends how you look at it. But that's no excuse to bash someone and imply he's nuts, especially if you are the Editor around here who practically represents the site. Just because you run the site doesn't mean you can be mean whenever you want. Remember, I never attacked anyone around here, at least not with regard to the recent Ryzen launch. But it seems my excitement irritated people enough that they resorted to attacks, maybe not directly, but their use of expletives strongly suggests an unfriendly reaction. Look, we're all weird and quirky in our own little ways, but is being a fan of a product or company a mental illness? People cheer for their favorite sports team; I cheer for my favorite tech company (and unlike many fanbois, I bash them too when they deserve it). And rooting for the underdog is a common thing. So are we all in need of help then?

            Let’s keep it friendly, people.

            That said, I do agree that it’s no longer viable to get an FX these days given several reasons but perhaps I needed to clarify what I said earlier : in a vacuum the FX is still a great and amazing piece of technology for just $140. It’s far from not having any faults but it’s nonetheless a very powerful and reliable processor in its own right.

            • Pancake
            • 3 years ago

            [url<]http://cpuboss.com/cpu/AMD-FX-8350[/url<]

            For certain workloads it doesn't completely suck. It nips at the heels of an i7-3770K. If it didn't suck power like a pig on speed I'd happily have one in my menagerie of grotesque curiosities, because it's a very interesting architecture. I'm actually quite curious how it would run my heavily multi-threaded, mostly integer code.

            • Chrispy_
            • 3 years ago

            +1 for pig on speed.

            • Redocbew
            • 3 years ago

            [quote<]People cheer for their favorite sports team, I cheer for my favorite tech company[/quote<]

            To continue that analogy, have you ever seen one of those fans who shows up to the game screaming like a loon, covered in body paint and shirtless even though the temperature is below freezing and the snow is falling?

            That's you. That's how far you're going when trying to spin the FX-8350 as "amazing".

            • Meadows
            • 3 years ago

            Come on dude, you’re nuts. No implication required.

            • ludi
            • 3 years ago

            Very sorry to hear about your junctional herlitz epidermolysis bullosa.

            • NTMBK
            • 3 years ago

            Hey, if it works for you, it works for you. And I’m sure there is a handful of [i<]very[/i<] nice use cases where it makes more sense than an i3 7300. But for 99% of users, the Intel option makes more sense. That said... I'm really looking forward to seeing the mid-range Ryzen chips shake things up.

        • Ninjitsu
        • 3 years ago

        WTF are you on about?

        [quote<]Have you seen the chart?
        It's a helluva start
        It could be made into a monster
        If we all pull together as a team

        And did we tell you the name of the game, boy
        We call it 'riding the gravy train'[/quote<]

        Quad-core Ryzen will run circles around Bulldozer too

        • just brew it!
        • 3 years ago

        [quote<]Not such a terrible chip...[/quote<]

        ...for a 5-year-old, $140 CPU. The ancient platform, relatively high power consumption, and lack of decent micro-ATX options are also potentially problematic, even for people who don't care about "latest and greatest".

        That said, yes, I still run one too. I sure wouldn't buy a new one today though!

          • ermo
          • 3 years ago

          I got my FX-8350 for use with Linux a few years ago as a (much) cheaper alternative to the i7-3770K and i7-4770K chips of the day, which did not support ECC and VT-d at the time (yay for pointless product segmentation, Intel!).

          For the price, its compilation and VM performance more than justified it back then.

          For what I’m using it for, it’s still got more than enough oomph and I’ll probably keep it around for a few years still.

          For reference, I’m planning to keep my delidded Ivy in service as my main Windows PC with a view to upgrading it with a Vega or two (for VR duties) to replace ye old pair of HD7970 GHz Ed. cards it’s currently running. We’ll see what happens once Zen+ rolls around.

          One of those 7970 cards will go into my nephew’s i7-2600K for use with an old 1680×1050 monitor (which ought to yield decent performance even with modern games at that resolution) and the other will spend its autumn and winter years next to the warm hearth of that FX-8350. In rendering tasks, the ability to use OpenCL with the HD7970 will likely yield decent enough performance for tinkering use.

        • tipoo
        • 3 years ago

        At $140, a KL Pentium or an i3 from two or three generations back will wipe the floor with it on frame pacing. Not to mention, if you pay a separate electricity bill, the few bucks saved will just go to that.

        I’d only go with it if it was practically free, or a complete system for like $200.

          • NovusBogus
          • 3 years ago

          Heck, even a refurb Ivy i5/i7 is going to spank that dozer. Those aren’t much more than $200 for the whole box.

        • Chrispy_
        • 3 years ago

        It’s a proven design alright, but let us be completely clear here:

        In games and general workloads like web-browsing, application opening and office tasks it struggles to keep up with a $100 i3-2100 which is 18 months older than it, despite using almost 3x the energy to manage this ‘feat’.

        In highly-parallel, multithreaded workloads that [i<]don't[/i<] cripple the FX (because of its shared FP unit between pairs of cores in a module), it can just about keep up with an Intel quad core of a similar age. [i<]Whilst pulling down almost twice as much power....[/i<]

        It's a turd. I bought one once solely for ECC light-server duty and by god, I knew it was a turd before I bought it but wasn't willing to believe it was that bad at [i<]everything[/i<]. My experience now means that I can call it a turd with full confidence, and that's probably why Zen is so different to Bulldozer's architecture; there was literally nothing worth keeping at all.

        [i<]edit[/i<] I thought I'd add this too. I replaced a 2600K for someone recently with a Pentium G4560 (3.5GHz Kaby 2C/4T). The difference in everyday performance was stark, and even in the H.265 encodes which I expected to be slower, the Pentium was within a minute or so of the 2600K (about a 25-minute encode task). Given the similar clockspeeds, that means that Intel's architectural evolution has managed to practically double IPC since Sandy Bridge for the only real-world scenarios in which the Bulldozer architecture had any merit. And let's be clear here once more, Sandy completely wiped the floor with Bulldozer even back in 2012.

        • Gasaraki
        • 3 years ago

        Can’t you get a better performing i3 for that price? Also the i3 will use way less power.

          • Redocbew
          • 3 years ago

          For a lightly threaded workload I bet it would, and yeah it would definitely use less power.

            • NTMBK
            • 3 years ago

            Also has better memory bandwidth.

    • anotherengineer
    • 3 years ago

    Hi Jeff,

    For the 16GB RAM kit you used on the new AMD platform, would you happen to know whether that kit was dual-rank or single-rank?

    I was just checking out this slide.
    [url<]http://images.anandtech.com/doci/11170/AMD%20Ryzen%207%20Press%20Deck-18.jpg[/url<]

      • drfish
      • 3 years ago

      This is the stuff:
      [url<]http://gskill.com/en/product/f4-3866c18d-16gtz[/url<]

      I'm assuming it's single-rank but I'm not sure. It would be hard to hit the speed and timings he was using without it being the good stuff.

      • Jeff Kampman
      • 3 years ago

      The G.Skill DDR4-3866 kit is single-rank.

    • Bigbloke
    • 3 years ago

    Thanks for going deeper than the normal internet review. I think this is an interesting platform for a “prosumer”, but I haven’t seen much data on the chipsets for Ryzen yet.

    Will you guys be looking at USB2 & 3 data rates, SATA controller performance, Ethernet performance etc? I for one would really like to see TR’s take on this stuff. I’m old enough to remember Dissonance’s testing of this stuff. Would be good if new team TR can match it 🙂

    Thanks in advance and keep up the good work!

    • unclesharkey
    • 3 years ago

    Gotta love the fact that they come with great coolers too. 😉

    • ermo
    • 3 years ago

    In this [url=http://www.phoronix.com/scan.php?page=article&item=amd-ryzen-znver1&num=1<]phoronix benchmark[/url<], in the cases where znver1 receives a boost (which it doesn't always!), the results point to +3-10% performance (where +10% is on a specific ImageMagick benchmark). It will be interesting to revisit this once more Zen-related code-generation compiler optimization trickles in.

    For Linux, where you can typically recompile your code to target your specific CPU with -march=native, this might prove quite beneficial in certain cases. I'm less certain how much of a difference this will make on Windows, what with the platform being closed source and conservatively compiled binaries being the norm for compatibility reasons.

    Fun fact: In one of the benchmarks, the k8-sse3 optimization profile led to a massive performance increase on the Ryzen 1800X used, while the znver1 profile led to a performance decrease.

    e: typos
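    As a rough illustration of the recompile-for-your-CPU idea described above, here is a minimal sketch (assuming a GCC toolchain on the PATH; the helper function is hypothetical and not part of the Phoronix tests) that asks GCC what -march=native actually resolves to, which is one way to tell whether your distro's compiler already knows about znver1:

        # Hypothetical helper: print what GCC expands -march=native to on this machine.
        # Assumes GCC is installed; "-Q --help=target" lists the effective target options.
        import subprocess

        def resolved_native_march():
            out = subprocess.run(
                ["gcc", "-march=native", "-Q", "--help=target"],
                capture_output=True, text=True, check=True,
            ).stdout
            for line in out.splitlines():
                # The line of interest looks roughly like "  -march=        znver1"
                if line.strip().startswith("-march="):
                    return line.split()[-1]
            return "unknown"

        if __name__ == "__main__":
            print(resolved_native_march())

    If the installed compiler predates Zen support, -march=native presumably resolves to an older target, which would help explain why explicit znver1 builds can behave differently from native ones in the linked results.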

    • jessterman21
    • 3 years ago

    Jeez that’s a lot of comments.

    My thoughts: Very pleased with workstation performance and pleasantly surprised at its gaming chops. Apparently gaming results are improved by disabling SMT?

    Jeff, I’d like to see a retesting with High-Performance Power mode in Windows, and SMT disabled to see if your results match what Tom’s Hardware found. I think I remember Bulldozer performing better in games with its “hyperthreading” disabled as well.

      • Jeff Kampman
      • 3 years ago

      I took my Ryzen drive back to the bare metal this morning and tried the high-performance profile. I’d say the high-performance profile is good for maybe another 2-3% of performance.

        • DPete27
        • 3 years ago

        What does high performance power mode in Windows do? Also, I feel like it would benefit all processors equally, no?

          • Krogoth
          • 3 years ago

          It just disables Speedstep/CoolnQuiet and other power saving ACPI options at the OS level.

    • Leader952
    • 3 years ago

    “Tricks?” used by AMD in benchmarking Ryzen before the launch

    [quote<]AMD inflated their numbers by doing a few things: In the Sniper Elite demo, AMD frequently looked at the skybox when reloading, and often kept more of the skybox in the frustum than on the side-by-side Intel processor. A skybox has no geometry, which is what loads a CPU with draw calls, and so it’ll inflate the framerate by nature of testing with chaotically conducted methodology.

    As for the Battlefield 1 benchmarks, AMD also conducted using chaotic methods wherein the AMD CPU would zoom / look at different intervals than the Intel CPU, making it effectively impossible to compare the two head-to-head.

    And, most importantly, all of these demos were run at 4K resolution. That creates a GPU bottleneck, meaning we are no longer observing true CPU performance. The analog would be to benchmark all GPUs at 720p, then declare they are equal (by way of tester-created CPU bottlenecks). There’s an argument to be made that low-end performance doesn’t matter if you’re stuck on the GPU, but that’s a bad argument: You don’t buy a worse-performing product for more money, especially when GPU upgrades will eventually out those limitations as bottlenecks external to the CPU vanish.

    As for Blender benchmarking, AMD’s demonstrated Blender benchmarks used different settings than what we would recommend. The values were deltas, so the presentation of data is sort of OK, but we prefer a more real-world render. In its Blender testing, AMD executes renders using just 150 samples per pixel, or what we consider to be “preview” quality (GN employs a 3D animator), and AMD runs slightly unoptimized 32x32 tile sizes, rendering out at 800x800. In our benchmark, we render using 400 samples per pixel for release candidate quality, 16x16 tiles, which is much faster for CPU rendering, and a 4K resolution. This means that our benchmarks are not comparable to AMD’s, but they are comparable against all the other CPUs we’ve tested. We also believe firmly that our benchmarks are a better representation of the real world. AMD still holds a lead in price-to-performance in our Blender benchmark, even when considering Intel’s significant overclocking capabilities (which do put the 6900K ahead, but don’t change its price).

    As for Cinebench, AMD ran those tests with the 6900K platform using memory in dual-channel, rather than its full quad-channel capabilities. That’s not to say that the results would drastically change, but it’s also not representative of how anyone would use an X99 platform.

    Conclusion: Regardless, Cinebench isn’t everything, and neither is core count. As software developers move to support more threads, if they ever do, perhaps AMD will pick up some steam – but the 1800X is not a good buy for gaming in today’s market, and is arguable in production workloads where the GPU is faster. Our Premiere benchmarks complete approximately 3x faster when pushed to a GPU, even when compared against the $1000 Intel 6900K. If you’re doing something truly software accelerated and cannot push to the GPU, then AMD is better at the price versus its Intel competition. AMD has done well with its 1800X strictly in this regard. You’ll just have to determine if you ever use software rendering, considering the workhorse that a modern GPU is when OpenCL/CUDA are present. If you know specific instances where CPU acceleration is beneficial to your workflow or pipeline, consider the 1800X. For gaming, it’s a hard pass.

    We absolutely do not recommend the 1800X for gaming-focused users or builds, given i5-level performance at two times the price. An R7 1700 might make more sense, and we’ll soon be testing that.[/quote<]

    [url<]http://www.gamersnexus.net/hwreviews/2822-amd-ryzen-r7-1800x-review-premiere-blender-fps-benchmarks/page-8[/url<]
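    For anyone curious what the Blender settings in that quote correspond to in practice, here is a minimal sketch using Blender's Python API; the property names assume a 2.7x-era Cycles scene, the values simply mirror the quote above, and none of this is taken from TR's or GN's actual test scripts:

        # Rough sketch of GN-style CPU-leaning Cycles settings (400 samples, 16x16 tiles,
        # 4K output). Assumes Blender 2.7x-era bpy; not TR's or GN's actual script.
        import bpy

        scene = bpy.context.scene
        scene.render.engine = 'CYCLES'
        scene.cycles.device = 'CPU'          # keep the render on the CPU for a CPU benchmark
        scene.cycles.samples = 400           # "release candidate" quality per the quote
        scene.render.tile_x = 16             # small tiles generally render faster on CPUs
        scene.render.tile_y = 16
        scene.render.resolution_x = 3840     # a 4K UHD output frame
        scene.render.resolution_y = 2160
        scene.render.resolution_percentage = 100

        bpy.ops.render.render(write_still=True)

    Dropping the sample count to 150 and the tiles to 32x32 would reproduce the "preview"-quality settings the quote attributes to AMD's own demo.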

      • ronch
      • 3 years ago

      Uh oh.

      • adamlongwalker
      • 3 years ago

      Just additional information I picked up yesterday.

      [url<]http://www.pcworld.com/article/3176100/computers/amd-ryzen-7-1700-vs-a-5-year-old-gaming-pc-or-why-you-should-never-preorder.html[/url<]

      • Unknown-Error
      • 3 years ago

      Moral of the story? Always wait for the independent reviews and never go by company-provided benches, especially when pre-ordering. It will save you a lot of pain later on.

        • utmode
        • 3 years ago

        With respect, how do you know who to trust, including the benchmark software?

          • Redocbew
          • 3 years ago

          It’s more often the configuration of the test that’s suspect, not the software itself. That’s why you wait for publicly reproducible test results.

      • maxxcool
      • 3 years ago

      Well, the dude did not hold back, did he... Interesting observations on AMD’s avoiding geometry loads.

        • Redocbew
        • 3 years ago

        I thought it was pretty apparent at the time that the tech demos were not going to be all that useful. AMD deserves some flak over the whole testing at 4k thing, but overall I’m not sure why they even bothered. So far it looks like Ryzen is a good chip. It’s not like they’ve got another Bulldozer, but you wouldn’t know that from the way they’re marketing it.

          • NTMBK
          • 3 years ago

          Hard to break the habit?

            • Redocbew
            • 3 years ago

            Probably something like that. Business people are weird.

    • ronch
    • 3 years ago

    The lackluster performance in games is troubling, and one really has to wonder why. I hate to bring this up again, but would it be too farfetched to think that the developers whose games are not running too well on Zen are using a compiler that disables some features (instruction support, etc.) when a non-Intel CPU or an unrecognized CPU is detected? Unless Zen has a fundamental flaw (and yes, their memory controller and other things like branch predictors are still not as good as Intel’s, I understand), this is really the only thing I could think of that would cause these things to happen.

      • tipoo
      • 3 years ago

      Doesn’t the lower per-thread performance account for that? As much as AMD pushed for and hyped the 8-core consoles to promote threadiness, on the PC four execution threads still seems to be where most games top out, some more like two, with a few more threads on other cores not doing much.

      It all seems like what most of us were expecting.

        • ronch
        • 3 years ago

        Thing is though, when you run highly threaded code, things turn around. Why does the chip suddenly do better when all cores are working against another chip with the same number of cores with allegedly better per thread performance? Is it the Infinity Fabric?

          • Airmantharp
          • 3 years ago

          You’re really hitting on the emerging issue here. All (well-done) reviews in context, the results just aren’t making sense.

          The challenge is that that’s just about all we can say. Based on results of disabling SMT and improving performance, it could be as simple as a Windows scheduler update, but we’re limited to speculation as of this moment.

          • tipoo
          • 3 years ago

          “Is it the Infinity Fabric?”

          Possibly, they seemed to place an emphasis on each core within a core complex having equally timed access to all parts of the shared cache. There’s a wiring overhead with this but they seemed to find it worth it. They also made the Infinity Fabric scalable up to 64 cores which they said the Intel one would drop off in performance with.

          Maybe Windows also doesn’t know a Ryzen core from a virtual thread yet?

      • K-L-Waster
      • 3 years ago

      It doesn’t have to be a fundamental flaw — it could simply be that Intel still has a lead on IPC, and functions that are dependent on the speed of individual threads rather than multi-threaded performance are still going to run better on Intel’s chips.

      AMD has done a lot to close the gap (just look at Jeff’s graphs and compare how far back the FX chip is compared to the RyZen chips) — they just haven’t actually passed for the lead.

    • blastdoor
    • 3 years ago

    Has anyone seen a thorough comparison with the 6850k?

    In terms of multicore Geekbench 4, the 1800X looks very similar to the 6850k. It would be nice to see a more thorough comparison. I’d love to see where the 6850k fits in the price/performance chart at the end of the TR review.

      • chuckula
      • 3 years ago

      I hate Geekbench so much that if the 1800X flat out “lost” (at least in Geekbench speak) to a 4 core Kaby Lake I would chalk it up as a win for the 1800X.

        • blastdoor
        • 3 years ago

        Awww…. but geekbench likes you!

        So do you think version 4 is an improvement over version 3, or no?

          • RAGEPRO
          • 3 years ago

          I’m not chuckula, but while Geekbench version 4 is certainly an improvement, it still favors Apple hardware a little unreasonably. I do think it’s quite decent for comparisons within the same class of hardware though.

            • chuckula
            • 3 years ago

            [quote<]I'm not chuckula,[/quote<] Sure, whatever helps you sleep at night! :-p

            • ronch
            • 3 years ago

            Are you sure RAGEPRO isn’t your alternate username around here?

    • K-L-Waster
    • 3 years ago

    Still not seeing a compelling reason to upgrade my Haswell i5 for my use case. I’m sure a 1700 would outperform my system, but not by enough to make spending the money worthwhile.

    Having said that, if I *was* looking for a new system, on the CPU side it’s become a real decision to make rather than the “don’t be silly, just get an Intel” that it’s been for donkey’s ages.

    • kuttan
    • 3 years ago

    The real-world gaming FPS is an entirely different story.

    [b<] i7 7700K 5Ghz Vs Ryzen 1700 3.9Ghz [/b<] [url<]https://www.youtube.com/watch?v=BXVIPo_qbc4[/url<]

      • Krogoth
      • 3 years ago

      The frame-rate benches pretty much show that there isn’t that much of a difference at common settings and resolutions. You have to be running at very low resolutions with none of the eye candy to notice any difference in the raw numbers, and even then the end-user still couldn’t tell a difference in a double-blind test.

      • chuckula
      • 3 years ago

      Where are the 99th percentile frametimes?

        • Pancake
        • 3 years ago

        In the conclusion of TR review. And they show Baby Cakes is King.

          • tipoo
          • 3 years ago

          Sometimes, I go to the lake and sing songs to the lake about the lake.

        • kuttan
        • 3 years ago

        Since the TR game benchmarks and the real-time game FPS of Ryzen in the video above don’t match, it’s impossible to compare the two.
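        For readers unfamiliar with the 99th-percentile frame-time metric being asked about here, this is a minimal sketch of how such a number is derived from a log of per-frame times; the sample data and the nearest-rank method are illustrative only, not TR's actual tooling:

            # Minimal sketch: turn per-frame render times (milliseconds) into average FPS
            # and a 99th-percentile frame time. The sample numbers are made up; real data
            # would come from a PresentMon/OCAT-style log.
            frame_times_ms = [8.3, 9.1, 8.7, 16.6, 9.0, 8.9, 33.4, 8.8, 9.2, 8.6]

            def percentile(values, pct):
                # Nearest-rank percentile: the frame time that pct% of frames come in under.
                ordered = sorted(values)
                rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
                return ordered[rank]

            avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
            p99 = percentile(frame_times_ms, 99)

            print(f"average FPS: {avg_fps:.1f}")
            print(f"99th-percentile frame time: {p99:.1f} ms (~{1000 / p99:.0f} FPS)")

        The point of the metric is that the occasional 33 ms frame barely moves the FPS average, but it is exactly the stutter a player feels.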

      • ColeLT1
      • 3 years ago

      Frame times matter for the FULL STORY.

        • kuttan
        • 3 years ago

        ….

      • Meadows
      • 3 years ago

      As opposed to TR’s results, which are imaginary?

      • raddude9
      • 3 years ago

      Of course, recording the video on the computer you are benchmarking is going to make the Ryzen look better!
      Even though the poster used Nvidia shadowPlay to record the video, it’s still going to use a significant amount of CPU which is going to hit the 7700K more than the Ryzen.

      • freebird
      • 3 years ago

      People can “down rate” reality if they want. Show me all those “gamers” who are buying Kaby Lake and a Titan or 1080 to play games at 1920×1080 at low/normal settings and no eye candy. At least with 4K you can turn off or turn down AA without much impact because of the high resolution. What are they playing, “how high can I push my FPS counter” or a real game???

      I feel like we are back comparing the P4 to the Phenoms and testing at 640×480 with low settings six or seven years ago… just to “prove” the P4 was the better “gaming” CPU…

      And they claim this is to stress the CPU… is it? or is a slower cache/ higher PCIe latency/poor bios or any # of things involved causing it?

      ONCE AGAIN SHOW ME SOMETHING I WILL SEE WHEN I PLAY with THESE MACHINES AND GPUS… I will be running a Ryzen 1700 or X version with Dual R290s or Vega on 2560×1440 @ 144Hz same if I had Kaby Lake… AND I WILL NOT BE RUNNING Low/Normal/Med/High at 4x AA I will be running MAXed out most likely.

        • Airmantharp
        • 3 years ago

        What 1080p testing really shows us is the potential for Ryzen to reach higher framerates, [i<][b<]regardless[/b<][/i<] of resolution.

        See, testing at 1080p takes the GPU mostly out of the equation while still stressing CPU subsystems that affect GPU speeds (PCIe, memory controller, latency), and shows us the framerate potential for the CPU.

        This is important: we don't want to know how Ryzen performs only when current-gen GPUs are loaded down. GPU performance is variable both because GPUs get faster and because game settings can be changed to affect GPU load, as well as newer games coming along with more demanding graphics. By testing at 1080p, TR (and most other sites) are telling you what you need to hear for your 1440p 144Hz setup: just how much you would gain or lose relative to established Intel CPUs.

    • HERETIC
    • 3 years ago

    Trying to interpret what a reviewer is trying to say, and trying to read between the lines, can be difficult at times. But this really stumped me:
    ” As a result, they won’t feel like a substantial upgrade from an older Sandy or Ivy system while browsing Facebook and Twitter”

      • Shobai
      • 3 years ago

      I’m trying not to read too much into it, but I think that he means that someone who simply browses the Web is unlikely to notice much difference between a hypothetical Sandy/Ivy system and a hypothetical Ryzen system.

      • Jeff Kampman
      • 3 years ago

      If your primary workload is JavaScript then you’re not going to enjoy the benefits of a lot of cores.

        • chuckula
        • 3 years ago

        Thanks for running the browser benchmarks to point that out too.
        They tend not to get the hype of Cinebench, but believe it or not, people use web browsers.

        In fact, I’d be willing to bet at least 37% of TR’s readers use web browsers.
        At least!

          • derFunkenstein
          • 3 years ago

          I use the TR mobile app.

            • chuckula
            • 3 years ago

            You 63%ers think you’re all that.

            • meerkt
            • 3 years ago

            It’s based around the OS’s browser widget.

            • derFunkenstein
            • 3 years ago

            Some are, and the one that’s my main duty at work is, but one of my projects is to prototype consuming REST services with Xamarin and display them using Xamarin forms (with the hope eventually that we’ll dump the mobile development platform we’re paying big bucks for right now). I don’t know this for sure, but I don’t think Xamarin forms are just a wrapper for HTML.

          • meerkt
          • 3 years ago

          I use a web browser! But I don’t care at all about Javascript performance since it’s Lynx.

          • jihadjoe
          • 3 years ago

          How many people here telnet to port 80?

        • bhtooefr
        • 3 years ago

        Depends on how many JavaScripts you’re running, though. (And, I wonder, how parallelized are modern browsers with multiple separate scripts? I don’t think the synthetic JavaScript benchmarks out there handle the case of, say, 20 different scripts on a single page, 12 of them being from separate ad and analytics networks.)

        For instance, if you’ve got a few tabs open to Kinja sites, maybe a tab or two open to The Verge, a tab open to Facebook, a tab open to Twitter… that’s where you’re gonna want the threads, because each thread is competing for resources, and a lot of those sites are extremely bloated.

          • derFunkenstein
          • 3 years ago

          Chrome shuts down a lot of stuff on background tabs. For example, setTimeout is completely hosed if your tab or even your Chrome window go into the background. It breaks a bunch of websites that time users out after a period. Stuff where users might be in it at work. The app I work on had issues for a while, and for the moment it’s been sorted. Until Google gets greedy about power consumption stuff again.

      • K-L-Waster
      • 3 years ago

      How about “if all you do is browse web pages, upgrading to RyZen won’t make much of a difference to you.”

    • POLAR
    • 3 years ago

    Yet AMD shares are now plummeting [url<]http://finance.yahoo.com/quote/AMD[/url<] thanks to Ars Technica and Tom's, who reviewed it as "meh", and thanks to CNBC, who quickly spread the false news: [url<]http://www.cnbc.com/2017/03/02/shares-of-advanced-micro-devices-fall-after-new-cpu-disappoints-with-gaming-performance-.html[/url<] So the official media position is that the CPU "disappoints". I'm saving everything and sending it to the SEC.

      • Krogoth
      • 3 years ago

      I don’t understand the whole “gaming” performance angle. It hasn’t been that important since the industry has been coding with two to four threads at most. The gaming performance difference between Kabe Lake and Zen isn’t that striking either.

      Ryzen chips offer a far better overall package and deal if you do more than silly gaming on your computer. The lower-end Kabe Lakes are better budget-minded solutions for gaming usage patterns, until AMD delivers its own lower-end counters.

        • Klimax
        • 3 years ago

        There are games old and new that can eat any CPU performance.

        X3: Reunion is extremely CPU-starved in any fight. (Benchmark, Scene 2) And any game with CPU PhysX or Havok can quickly get CPU-bottlenecked. And of course simulators need a good CPU.

        And as soon as you need more than “silly” gaming, RyZEN very quickly falls off. Video processing and editing, 3D rendering (haven’t checked many reviews, anybody did raytracing?), SQL servers,…

          • Krogoth
          • 3 years ago

          Kabe Lakes [b<]aren't[/b<] that much faster than Ryzen. That's kinda the point. Gaming performance has lost much of its importance in the last decade or so. It no longer drives demand in the CPU market. Single-threaded performance has reached physical and practical limitations. The future is multi-threaded code and a shift towards a whole new computing paradigm.

          Ryzen completely destroys normal Kabe Lake and Sky Lake chips under real workstation-tier loads and rivals Broadwell-E parts that currently run for 2x-3x the platform cost. It has brought much-needed competition to this market segment. I suspect Skylake-E chips will shift things back into Intel's court in terms of pure performance, but price is another matter that has yet to be seen.

            • Klimax
            • 3 years ago

            Meh, AMD needs 8 cores to destroy 4 cores. And? If AMD couldn’t beat 4-core/8-thread chips with 8-core chips, then it would be time to shut the whole CPU division down!

            • AnotherReader
            • 3 years ago

            Please let us know why you have to troll every AMD thread; did Hector Ruiz murder your favourite pet? Oh and you are wrong; AMD beats Intel’s 8 cores in quite a few [url=http://www.tomshardware.com/reviews/amd-ryzen-7-1800x-cpu,4951-10.html<]scientific computing[/url<] and [url=http://www.anandtech.com/show/11170/the-amd-zen-and-ryzen-7-review-a-deep-dive-on-1800x-1700x-and-1700/18<]rendering benchmarks[/url<]. Since, unlike you, I am fair, I'll note that there are some benchmarks where [url=https://techreport.com/review/31366/amd-ryzen-7-1800x-ryzen-7-1700x-and-ryzen-7-1700-cpus-reviewed/12<]Intel's HEDT CPUs trounce Ryzen[/url<].

            • Klimax
            • 3 years ago

            Trolling? Heh, you wish I was trolling, because then it would become trivial to dismiss my posts. Too bad, not happening. Too bad your “complaint” is nothing more than a bunch of logical fallacies and empty ad hominem.

            Why are you addressing a completely different thing than what we were talking about? Reread Krogoth’s post, the one to which I responded.

            Anyway, interesting SPEC workstation bench. I need to see how it is distributed, because some info is missing from Tom’s article. As for AnandTech, they used an OLD version of Blender! The one AMD itself used for PR, but dropped after the NEW version got released. The one with brand-new AVX code. Not a good counterexample! You should be more careful about what kind of data you present as counter-evidence. You should check the results for Blender in the article under which we are posting…

            Furthermore, Anandtech does NOT say which version they used. There are at least three of them! (2.0, 3.0 and 3.1) And they did not write which OpenCL driver was used.
            POVRay is interesting, but I suspect mostly thanks to missing AVX. (Bit odd as basic infrastructure is already present and one small piece already uses it)

            Frankly, anybody betting on the absence of AVX from 3D and video is a fool and an idiot. Some benches in use now might not have it, but it will come sooner or later. Maybe it’ll even be my patch there. (As soon as I finish my work on a noise filter and Avisynth.)

            • AnotherReader
            • 3 years ago

            You are a knowledgeable poster, but your bias and irrational hatred for AMD make you a troll! You made a general statement about AMD needing 8 cores to match Intel’s 4 cores; that is clearly false.

            Krogoth clearly stated that the 8 core Ryzens outclass Intel’s 4-core parts in workstation workloads, but lose in gaming.

            As far as Blender is concerned, TechReport used the latest version and [url=https://techreport.com/r.x/2017_03_01_AMD_s_Ryzen_7_1800X_Ryzen_7_1700X_and_Ryzen_7_1700_CPUs_reviewed/blender.png<]Ryzen did pretty well[/url<]. I agree with you about the deficit in AVX performance versus Haswell and its successors. However, you ignore all of the areas where it is strong and ignore that Naples will have substantially greater memory bandwidth than competing Xeons. Memory bandwidth is very important for quite a few HPC applications.

            • Krogoth
            • 3 years ago

            This isn’t an apples-to-apples comparison. Ryzen and the current Kabe Lake architecture have a number of differences and design choices that come with trade-offs. This is reflected in their real-world performance.

            • Klimax
            • 3 years ago

            Still, 4/8 versus 8/16 is simply a simple case. (And it would require a very extreme level of fail to lose.)

            • Krogoth
            • 3 years ago

            Please drop the blue-tinted glasses. They are making you look very silly.

            • flip-mode
            • 3 years ago

            Kaby

            • DrDominodog51
            • 3 years ago

            Kabi

        • ColeLT1
        • 3 years ago

        I see no use for 8 cores currently in my life; mine hardly ever uses >35% of its quad core doing the “silly” gaming, watching YouTube, and doing VPN work. Unless you are a content creator or run VMs on non-server-grade equipment, I don’t see why you would spend more money and get this slower chip.

        I gained close to 0 fps going from a 4.9GHz 4790K to a 5.1GHz 7700K, yet my minimum frames doubled. So every stutter, every hiccup, every huge mob fight in ESO, I get a better experience and less laggy gameplay. Borderlands 2 maxed out (PhysX high) would be a 25-30fps chopfest when all the particles were being spun around by a singularity grenade (every big fight); now it’s in the 50-60fps range. Yes, my maximum fps didn’t change, but it feels so much better. Going from DDR3-2133 to DDR4-3600 was the main factor; I’m sure some other latency improvements in the chip helped too.

        I can’t express how noticeable the changes were, and I had a Nehalem, Sandy, Ivy, and two Haswells, and each jump was not really noticeable; this was. This was SSD-install noticeable.

        Me switching to a Ryzen for gaming would be taking a step back to about Ivy Bridge, except my Ivy Bridge does 4.7GHz and is <10% loaded (home server/Blue Iris/TeamSpeak/game server duty).

          • Krogoth
          • 3 years ago

          Actually, it is closer to Haswell and Broadwell (units that don’t have L4 cache). Ryzen is faster than Ivy Bridge and Sandy Bridge at gaming, especially in frame times.

          Most of your gains in ESO going to a Skylake platform came purely from memory bandwidth. The Gamebryo engine loves memory bandwidth. The memory controller on Ryzen doesn’t harness DDR4 as effectively as Skylake and Kabe Lake.

            • ColeLT1
            • 3 years ago

            The higher clocks of my Ivy at 4.7GHz beat Ryzen at 4.1GHz (realistic clocks vs. realistic clocks), even with the slight IPC deficit. Works to my advantage in my use case.

            • Krogoth
            • 3 years ago

            That Ivy Bridge would actually lose in this case, because it is held back by the DDR3 driving it, despite the fact that Ryzen doesn’t properly utilize the DDR4 that drives it; Ryzen also has more and faster L2 and L3 cache, which does help out.

            You are underestimating the small jumps made since Ivy Bridge’s debut.

        • Chrispy_
        • 3 years ago

        If anything, AMD’s mistake with Zen is to launch it as a consumer chip first.

        It’s a decent consumer chip – and by that I mean it’s middle of the pack for most consumer workloads. If they’d sold it as a workstation/server product first it would have been reviewed with different applications and for those it absolutely steals the show.

        I’m actually looking forward to the Zen-based Opteron replacements. Intel needs dethroning in the small-business server room, and for AMD’s sake, that’s where the real profit is.

          • Airmantharp
          • 3 years ago

          I’m sure they’re aware that a consumer focus was probably not the best positioning based on performance, but man do they need the volume…

      • derFunkenstein
      • 3 years ago

      The AMD hype train made the non-technical media believe that they were going to beat Intel cleanly across the board. So from their perspective it’s a disappointment. It’s adorable you think the SEC will care.

      But +3, though, because I think this might actually be performance art.

        • just brew it!
        • 3 years ago

        ^ This.

        I predicted back on Sunday that the stock would take a hit when Ryzen launched: [url<]https://techreport.com/forums/viewtopic.php?p=1341817#p1341817[/url<] It should not surprise anyone who has even a passing understanding of how the equities markets work.

          • NeoForever
          • 3 years ago

          *raises hand*

          I don’t have a passing understanding of how equity markets work. Where do I sign up?

            • dragontamer5788
            • 3 years ago

            Let me teach you the basics: Buy the rumor, sell the news.

            Hype is always stronger than the actual product. See Tesla. The Model 3 gets a lot of news right now, but its stock will inevitably drop when the Model3 is released. Ditto with Model X a year or so back.

            • just brew it!
            • 3 years ago

            The current price reflects how investors think a company will do in the future, not how it is doing now. If enough investors believe the marketing hype prior to a major product launch, this artificially inflates the stock price in the weeks or months prior. When the product finally launches for real and people start picking it apart and analyzing it, reality sets in and the stock price frequently will back away from its high.

            Some investors seek to exploit this effect (“buy the rumor, sell the news”), which makes it worse. You get a subset of investors who are selling their shares just ahead of the product launch, in an attempt to lock in the profits they’ve made during the preceding price run-up. This puts additional downward pressure on the stock price.

      • chuckula
      • 3 years ago

      [quote<]I'm saving everything and sending it to the SEC.[/quote<] That's great but I think Nick Saban can handle running Alabama's program next year without all those links.

        • Concupiscence
        • 3 years ago

        That made me laugh way too hard, thank you.

      • tipoo
      • 3 years ago

      This is how stock always acts around a big product launch. Watch Apple stock, it doesn’t peak once a new iPhone is selling, it peaks right before the announcement. Ride the hype, sell before delivery.

      • albundy
      • 3 years ago

      I wouldn’t pay any attention to them. CeeNothingButCrap is run by old farts who couldn’t differentiate between silicon on a chip and silicone in their mistresses’ implants.

      • ronch
      • 3 years ago

      It’s kinda sad how everyone expected Zen to dominate Intel in EVERY benchmark. Sure they’ll win some and they’ll lose some. That’s just how it works. But to see so many game titles not do as expected on Zen warrants further analysis. Is it the memory controller? Branch predictors? Compilers? AMD is saying the chip is fine, it’s the devs that need to work and optimize (which I’m not sure I’m buying), but Mike Clark also said they’ve already identified where Zen is weak and are already working through the list. So maybe there really is some sort of bottleneck in the chip somewhere. This is precisely one of the two big reasons why I’m not buying right away. The other big reason is because I just don’t need an upgrade at this point.

      • Meadows
      • 3 years ago

      You’re acting very, very hurt. Let me guess, long in AMD stock by a large amount?

        • thedosbox
        • 3 years ago

        Most likely got in very recently after the stock had made most of its gains (it was just over $2 a year ago), and is now regretting it.

      • Mr Bill
      • 3 years ago

      Or, it’s called profit-taking.

      • UberGerbil
      • 3 years ago

      I don’t think you understand
      1) what the SEC is, what it does, or why it can’t take your money
      2) how uninfluential sites like Ars and Toms are as far as Wall St is concerned
      3) how Wall St evaluates companies,
      4) how the stock market works in general.

    • Jeff Kampman
    • 3 years ago

    I’ve updated the pages for [i<]Deus Ex[/i<] and [i<]Watch Dogs 2[/i<] with data for our full test suite, and all CPUs are accounted for in the final value scatter, as well. You will need to clear your cache or super-refresh (Ctrl-F5) in some browsers to see all the new data. Sorry for the inconvenience. Also, thanks to all the readers who have said nice things about the piece! Glad it's been informative.

      • Neutronbeam
      • 3 years ago

      You’ve made us all happy Kampers Jeff; great work!

      • derFunkenstein
      • 3 years ago

      At 3:27 in the morning, no less (probably 4:27 there)! You must’ve pulled an all-nighter.

        • Redocbew
        • 3 years ago

        No rest for the wicked, or for CPU reviewers on launch day.

      • Meadows
      • 3 years ago

      Here, have a bump. With that said, I’ll wait for power consumption numbers as part of my final concern.

    • Klimax
    • 3 years ago

    1. I labeled ZEN as SB-lite. A bit wrong. I always forget that ZEN has split scheduling. (Some workloads will suffer, and one of the main ones for me is that case.)

    2. The frequency comparison to Haswell-E is interesting. Intel is often very conservative with E versions. (My 5960X runs nicely at 4.2GHz and 70°C on air cooling, limited likely by the mainboard.)

    3. Dual channel versus quad channel: about as expected. Likely one of the reasons for not implementing full AVX units.

    4. Interesting L2/L3 numbers in AIDA64, although I wonder what results from the older CBurst32 would look like. Also looking for other (semi-)synthetic tests that are missing for now.

    5. Y-Cruncher: a completely expected result, and a first warning about segments of the high-end.

    6. Looks like the early leak by the Iranian site was not as anomalous as I thought… But WTF is going on there? I didn’t expect such a poor showing in games.

    7. I see the author’s comment on memory bandwidth as a cause, but it may also be AVX(2); confirming that would require some deep analysis of the games involved, or using a game with explicit AVX support like Dirt: Showdown.

    8. AES and co without AVX2 dependency looks relatively sane (frequency helps ZEN there a lot)

    9. Get a scientific, likely AVX-using test and ZEN is gone… (I don’t think the memory subsystem is the only piece there.)

    10. Apparently Cinebench doesn’t use AVX. No wonder it was liked so much for ZEN leaks and maybe PR

    11. Blender: just moving to AVX(2) instructions is relatively simple and can yield a massive performance upgrade. I don’t know the dispatcher in Blender, so I don’t know how it treats AMD CPUs regarding AVX use. Maybe, mercifully, they send SSEx only…

    12. For x264 I have a far more interesting set of options: 1080p video plus those options = one day for a single pass. But it will use all threads of a 5960X. Pity a more demanding preset wasn’t used. (A rough sketch of what a heavier preset looks like follows at the end of this comment.)

    13. All in all, price again predicted everything. It cannot go for the true high-end where 3D, video, and scientific computing reside. (Also, SQL servers like MS SQL Server Enterprise edition are not a good fit for ZEN.) And in games it sort of suffers too.

    So all in all, no second coming of the CPU Christ nor something that shakes up the high-end, but far from a BD-style high-speed crash. But AMD should learn to promise only what it can actually do. This was still too close to a BD-level hype crash.

    Now I wonder how things will look when Haswell-E/Broadwell-E are at the same frequency as RyZEN, like 4.2 GHz.

    And lastly: I perfectly predicted ZEN’s IPC. Around Sandy Bridge. Although FP performance is still not good.

    ==

    Anyway: if you really want to save cash, RyZEN will work, otherwise think twice before buying.
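    As referenced in point 12 above, here is a rough sketch of what a heavier x264 pass might look like, driven through ffmpeg's libx264 encoder from Python; the file names, preset, and CRF value are made up for illustration and are not the settings TR or the commenter actually used:

        # Hypothetical sketch: time a demanding single-pass x264 encode of a 1080p clip.
        # Assumes ffmpeg (built with libx264) is on the PATH; not TR's encoding workload.
        import subprocess
        import time

        cmd = [
            "ffmpeg", "-y",
            "-i", "input_1080p.mkv",   # made-up source file name
            "-c:v", "libx264",
            "-preset", "veryslow",     # far more demanding than the usual "medium"
            "-crf", "18",              # quality-targeted single pass
            "-an",                     # drop audio so only the video encode is timed
            "output_veryslow.mkv",
        ]

        start = time.time()
        subprocess.run(cmd, check=True)
        print(f"encode took {time.time() - start:.1f} seconds")

    A slow preset like this tends to keep many threads busy, which is the sort of workload where an 8C/16T part earns its keep.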

    • tootercomputer
    • 3 years ago

    I thought it was interesting that AMD stock went down 7% on Thursday.

    So, bottom line: on what tasks/apps will these 8-core chips really shine and outperform anything Intel has at the same price? Are there even such apps out there yet?

      • thx1138r
      • 3 years ago

      Did you even read the review? At least check out the conclusion…
      … particularly the price-performance graphs
      … particularly the non-gaming graph
      … it’s a long time since AMD was at the top of any graph

      [quote<]So, bottom line: on what tasks/apps will these 8-core chips really shine and outperform anything Intel has at the same price?[/quote<]

      Non-gaming tasks. Non-gaming apps.

      [quote<]Are there even such apps out there yet?[/quote<]

      Yes, non-gaming tasks have been around for a long time.

        • ronch
        • 3 years ago

        That last part is the finishing blow.

        • tootercomputer
        • 3 years ago

        thx1138r, your points are well taken. I should qualify my comments.

        I built a number of AMD systems from 2002 to 2006. Many fond memories; I was an “AMD guy.” Then the Core 2 Duo came out, OMG, revolutionary, I built my first Intel system in 2007 and have never looked back. But I, like a lot of others I suspect, have waited and hoped for AMD to come out with their own “Core 2 Duo,” and Ryzen looked like it might be the one. But I don’t think it is.

        Yes, there are many non-gaming apps (obviously) and apps that benefit from multi-core processors, and some that utilize the latter that probably will benefit from Ryzen (e.g., Handbrake, which I used to use a lot, but not lately) at a great price-performance point. But at this point in time, do I really need 8 cores? Is that where I want to put my money?

        Finally, in the lead-up to Ryzen, I read a comment about how it will be a “future-proof” CPU in the next few years. I think the comment was in the context of how multi-thread apps will become more prevalent. That may be true. Otherwise, Ryzen’s 8 cores are somewhat puzzling to me given that so many popular apps (especially games) are mostly single-thread and do not benefit from multi-cores.

        AMD seemed to have a lot riding on this chip. I so much want them to succeed, as competition is great. BTW, I just checked, and unfortunately, their stock price is down another 6%.

        My two cents..

          • thx1138r
          • 3 years ago

          I mostly agree. Ryzen really shines when you look at the non-gaming price-performance graph, particularly for people who run heavy-duty applications. But when you say:

          [quote<]many popular apps (especially games) are mostly single-thread and do not benefit from multi-cores.[/quote<]

          That's not really true any more; if it were, then the Core i3-7350K would equal the i7-7700K in many benchmarks. It doesn't. It's a fine chip, but it lags well behind the 7600K and 7700K in a lot of places. I'm not trying to say that single-thread performance doesn't matter any more, it does help, but operating systems and applications have moved on to the point where 4 cores seem to be the optimal point for home users at the moment.

          The question remains then: if home users have gradually migrated from single to dual and then quad cores, will they eventually migrate to 8 cores? I believe they will, and Ryzen is another step in that (probably long) process.

      • Klimax
      • 3 years ago

      There won’t be any such apps. There’s nothing in ZEN to allow that.

    • ronch
    • 3 years ago

    Regardless of why and how Ryzen performs the way it does, the bottomline is it seems to be a great design that, while not throwing Intel to the ground, kicks sand in Intel’s face. And Mike Clark did say that they’re aware of Ryzen’s shortfalls and they are already working on the next iteration of Zen to address those issues. I think it’s too much to expect Zen to be perfect out the gate, given time and resource constraints designing a brand new architecture, and many many architectures in the past have had teething problems initially. The platform also feels kinda sparse, so I hope new chipsets will come out soon that’ll be a better match for Intel’s wares.

    Given how my FX-8350 is still more than adequate for my usage scenarios I think it would make a lot of sense to skip Zen 1.0 and buy only when there’s a real need. Hopefully by that time Zen and the platform would be far more polished than they are now. I never was the early adopter type anyway.

    Edit – I see some folks love thumbing down my posts. Not sure why. Haters gonna hate, I guess.

      • Airmantharp
      • 3 years ago

      “throwing to the ground”
      “kick sand in face”

      Did you ever wonder if [i<][b<]you[/b<][/i<] might be the 'hater'?

        • ronch
        • 3 years ago

        Oh hi, it’s you again. Those were… figures of speech. Are those enough to trigger a negative response these days?

        Absolutely no hate on my part, just to clear that up. How about you?

      • Redocbew
      • 3 years ago

      They’re downvoting you because you’re not buying a Ryzen right now. So much fuss and then you don’t follow through? Seriously, what the hell.

        • ronch
        • 3 years ago

        Have I not always said that I am ok with my current setup, that I don’t need an upgrade, that I’ll probably get a later iteration of Zen, and that I’m just really curious how Zen works out after so many years of AMD falling so far behind and now is a time when they really have a great chance to catch up? Isn’t everyone excited? That’s precisely why the Web is on fire over Zen. So if I’m more excited than average, is that why folks like you and Airman try to bang on everything I post here about Zen? Don’t both of you have anything better to do or are you both just arrogant?

        Seriously folks, like I said somewhere here before, I’m not out to burn down your house. If my excitement ticks you off, well, I’d hate to imagine how you’d act with other REALLY annoying and more serious things.

        Peace, dudes. I’m out for discussion, not petty flame wars. I’ve grown tired of those and so should you.

      • Pancake
      • 3 years ago

      I wouldn’t say AMD kicks sand in Intel’s face. It’s more like AMD farted in the room then ran out giggling.

      Intel still have the superior architecture, process lead and manufacturing capacity. They are still the 1000-lb gorilla. What I will expect is lower cost 6 and 8-core Baby Cakes that will squeeze the life out of Ryzen in terms of profit margins. Expect AMD to bleed more red ink.

      Great news for consumers. I’m still likely to get a R7-1700x though just because it’s something different. But I will wait until it proves to be reliable and bug-free.

      • derFunkenstein
      • 3 years ago

      [quote<]I think it's too much to expect Zen to be perfect out the gate[/quote<]

      But Sandy Bridge was perfect out of the gate. Even Kaby Lake was perfect in that it was exactly what Intel wanted. The market isn't going to feel sorry for AMD.

        • chuckula
        • 3 years ago

        Yeah, I have yet to see an Intel launch where there are people running around yelling about how Windows needs to be rewritten to service the new chip properly.

        That’s not to say that Windows is perfect (far from it) but if an Intel chip doesn’t perform well in some benchmark the blame is targeted at Intel and not Microsoft.

        • AnotherReader
        • 3 years ago

        Zen’s success doesn’t depend on gaming; it will depend upon how Naples compares to Skylake-EP. If there is even a subset of the server market where it can outdo Intel, they will do well.

          • derFunkenstein
          • 3 years ago

          Maybe Raven Ridge will be awesome in the 4C8T desktop market if they can get the clock speeds up, but if this is the same chip powering the Ryzen 5 and Ryzen 3, it may not be great.

          And yes, maybe the relatively low power consumption bodes well for Naples. 3.0-3.7 on 16 cores, if that’s really a 130W CPU then that’s going to be nice.

            • AnotherReader
            • 3 years ago

            I might be proven wrong, but I think the clock speeds won’t be significantly higher until Zen+ at least. The chip needs too much voltage to reach even 4.1 GHz. Maybe Zen+ will do a piledriver and increase both clocks and IPC.

            • Airmantharp
            • 3 years ago

            Unless there’s something to be done about the smaller versions between now and release, the 3.9-4.1GHz range looks to be it.

            Fortunately, that’s enough for all but a special class of gaming, the >60Hz crowd, and it’s the right kind of overkill for any type of media or productivity application.

        • ronch
        • 3 years ago

        I honestly feel disappointed that Zen, so promising as it was and for all the hard work AMD poured into it, had to have so many excuses now. I’m just not buying what they’re saying now about game developers needing to optimize for Zen. Yes optimization is nice but if you need to ask everyone to optimize for your chip when it comes out, just like when Bulldozer came out, well, I dunno. It just isn’t very convincing. Either it runs well or it doesn’t.

          • K-L-Waster
          • 3 years ago

          “Buy it because it’s good” is a much better sales proposition than “buy it because it would be good if the whole industry wasn’t unfairly biased towards our competitors.”

        • Spunjji
        • 3 years ago

        Sandy Bridge came with a jacked-up chipset out of the gate. It was also a heavy refinement of the previous architecture rather than a ground-up rewrite. That comparison is a bit strained.

        Please note that I don’t disagree with the sentiment that AMD need to step up to the plate and execute to succeed; I just think that by sane metrics they have. Everything I’m seeing here is a solid architecture that competes better than basically anyone else other than AMD ever has with Intel.

        It’s not a slam-dunk victory but I’m pretty sure (despite the hope that never dies) nobody here expected it to be.

      • ColeLT1
      • 3 years ago

      “kicks sand in Intel’s face.”

      AMD was on the stage at Maury and said “Cash me outside how bout dat.”

    • albundy
    • 3 years ago

    I kinda expected that the gaming segment would be a disappointment, at least initially, until tweaks can be made through game updates. What I really cared about was Handbrake encoding (HEVC?), 3D rendering, and drive encryption performance. Fantastic results, but is it worth twice the price of the Intel 6700K?

    • zzz
    • 3 years ago

    Could we get (even some loosey-goosey) benchmarks of Ryzen with an AMD GPU vs an Intel CPU with an AMD GPU? I’m sure it won’t make a difference, but it’d be interesting if it did. Kind of pointless given there’s no high-performing AMD GPU though; so maybe a 1700X Ryzen with a RX 480.

    • cegras
    • 3 years ago

    Would it be possible to benchmark Dwarf Fortress for CPUs?

      • DancinJack
      • 3 years ago

      Are you serious?

        • cegras
        • 3 years ago

        Yep, the latest versions, especially with big maps, can kill modern CPUs. AFAIK it’s serial due to how it does the AI, and isn’t very threaded.

          • drfish
          • 3 years ago

          Considering the difference I saw going from Sandy to Kaby in Rimworld, this is more than plausible.

          • tipoo
          • 3 years ago

          Dwarf Fortress is insanity. I love the story about how they didn’t know why cats were dying in it, then figured out that when dwarves drank, they spilled some amount on the floor, the cats walked across the floor, their paws would accumulate alcohol, they would lick their paws and consume the alcohol, and not eliminate it fast enough.

          The level of simulation is crazy.

    • DancinJack
    • 3 years ago

    [quote<]Default voltage for manual tuning should start at around 1.3625V, according to AMD[/quote<]

    That's a pretty high stock voltage. Eeeek.

    source: [url<]http://www.kitguru.net/components/cpu/luke-hill/amd-ryzen-7-1800x-cpu-review/3/[/url<]

    • Welch
    • 3 years ago

    Not to take away from AMD’s accomplishments here, but I was just really hoping for not such a large delta in gaming performance. I didn’t expect the 1700 or 1700x to beat out the 7700k, hell I didn’t even expect the 1800x to beat it out, maybe get really close. This is sort of disheartening in relation as an all around, everyday chip. As a specialized productivity, encoding/decoding or perhaps CAD type workload, it changes the landscape of options. Great!

    I feel like the one place they missed the mark was not fitting in quad channel at least in the 8/16 chips. Imagine what the added memory bandwidth would have looked like for these chips in not only productivity, but perhaps for some games too. That may very well be the deciding factor in the chips success. Maybe this is planned for a revision and they couldn’t drop it first round, who knows.

    In too many of the test the much older Intel offerings beat it by a sizeable margin. It looks like Ryzen landed about where I originally expected it to, and I let me expectations get the best of me for everything else.

    6c/12t may be the new sweet spot for office machine builds that need a little more oomph. That or the 4c/8t Ryzen with a much hotter clock. Time will tell.

    Thanks for the thorough review Jeff.

    • NeelyCam
    • 3 years ago

    Huh. AMD stock price down 7% (plus another 1% after hours).

    I guess folks didn’t like that single-threaded performance?

      • Welch
      • 3 years ago

      Probably offloading stock at one of its peaks. AMD has become the new short-term wave to ride.

      But yeah, I’m not happy with the results by a long shot. For productivity it seems like it is an alternative, but for gaming some of the results were truly embarrassing.

        • Anonymous Coward
        • 3 years ago

        I don’t see how any score there was “truly embarrassing”. But perhaps you thought it was child’s play to knock over Intel.

      • tipoo
      • 3 years ago

      Always the case, ride the hype, sell on the eve of the result. You’ll see Apple stock do this every announcement.

        • cynan
        • 3 years ago

        Except we didn’t see this with the AMD 4th quarter earnings report a month ago. You know, when AMD shares jumped from $10.xx to $12.xx almost overnight and kept rising.

        Congrats to those who made some money on the AMD hype yesterday. But I don’t think there’s anything quite as fatuous as Monday-morning quarterbacking trends in stock market hype.

      • chuckula
      • 3 years ago

      Buy on the rumor.
      Sell on the news.

        • NeoForever
        • 3 years ago

        People often say this but I don’t get it. How does it apply here? Why would one buy at a high (rumor period) and sell at the low (after news of actual performance)?

          • chuckula
          • 3 years ago

          Oh the “rumor” part was months ago, not last week.

      • DPete27
      • 3 years ago

      I called it.

      • just brew it!
      • 3 years ago

      The Ryzen hype was already priced in, so the stock was overvalued. When Ryzen didn’t turn out to be a complete Intel-killer, the stock corrected downward.

        • rechicero
        • 3 years ago

        I don’t understand… Were there people who actually expected AMD to destroy Intel with a couple or more years of deficit in fabs, with fabbing not tuned exactly for your designs and several orders of magnitude less in R&D budgets? Really?????

        I’d say Ryzen is amazingly good, better than I expected. Come on, being able to trade some blows with King Kong is just :-O.

        PD: And about the game perf… Part of the problem is (IMHO) CPU game benchmarks are completely missing the point. Low res with huge GPUs, without programs in the background… IMHO they are little more than synthetic benchmarks.

        A real-world scenario for this kind of CPU would be an RX 480 or a GTX 1060-1070 playing at 1440p, with a mail client, some browser tabs, and a few more programs working in the background.

        I won’t even talk about streaming (that seems more niche), but I don’t think anybody is ever going to pair a Titan X with this chip and play at 1080p or 720p without anything in the background. So if you make your benchmark that way, what are you benchmarking? Something that is never, ever going to happen?

        It’s interesting from an academic point of view, but that’s it.

          • cynan
          • 3 years ago

          It’s at least as much due to the way the (relative to Intel) performance was spun by review sites and the media as to actual performance expectations for the chip itself.

          For example, as others have stated here, Ryzen is still a very competitive option for people who don’t just game (but also use their PC for other multi-threaded workloads, or even multitask while gaming). And especially if they don’t game with 200Hz monitors at 1080p, but rather at higher resolutions where the bottleneck becomes the GPU. I’d say this reflects a large usage case for consumers who are contemplating buying a new mid to-all-but-the-bleeding-edge top end system going forward.

          If AMD can weather this mostly-PR setback and gain mind share that Ryzen is still compelling for a large number of (if not most) use cases for consumers and businesses in the market for a new desktop platform, they could rally from this. However, this may be questionable over the short term, judging by how difficult it’s been for AMD to claw market share from Nvidia in the GPU sector, even with at least as compelling a product (e.g., the RX 480).

          Yes, their market capitalization is high relative to projected earnings. But these are largely based off of past performance – which hopefully will be vastly improved upon with RyZen and Vega.

      • derFunkenstein
      • 3 years ago

      If you’d bought it a year ago and sold it on Thursday, you’d have quadrupled your investment. Seems like as good a time as any to get out.

    • Laykun
    • 3 years ago

    All this talk about how resolutions are trending. I would say people are trending towards higher refresh rates than they are towards 4k, and unfortunately Ryzen doesn’t seem to service that crowd, the ones that want to get 100-120+ FPS to get the most out of their monitor, in which case the 6700-7700K series are still the right choice for high end gamers.

    I never thought it would be a thing, but you can really feel when a game dips below 100fps on a high refresh rate monitor. I always thought 60fps was the pinnacle and one need not want more, but it is actually a thing.

    I’m excited to see what a process shrink and further tweaking will bring to this platform in the near future, as I doubt AMD will stand still with Ryzen; they have a lot of market share to claw back.

    • Tristan
    • 3 years ago

    Can you repeat the tests with some Polaris card (RX 490) as the graphics card? NV drivers aren’t optimised for Ryzen.

      • DancinJack
      • 3 years ago

      What are you even talking about Tristan?

        • Tristan
        • 3 years ago

        just want to know if NV’s unoptimized drivers slow down Ryzen in games. NV drivers are optimized for Intel and maybe for FX, but not for Ryzen. Radeon drivers should be optimized for Ryzen. Almost all reviews use NV graphics, which is a mistake

          • Freon
          • 3 years ago

          Got any proof of this?

          • Pancake
          • 3 years ago

          It’s a fair enough point. There hasn’t been a meaningful Intel/AMD CPU comparison for years.

          Has Nvidia been given Ryzen engineering samples to develop their drivers? I’d like to think so, but we don’t have the information. What is likely is that as Ryzen is released and gains significant market share, drivers will mature for it and take advantage of its characteristics (moar coarz!).

          • chuckula
          • 3 years ago

          Yeah… you know what?

          Do you know what GPU AMD chose when it was time to show off how great RyZen was at gaming vs. those crappy Intel parts under circumstances that were 100% under AMD’s control with Lisa Su literally standing close enough to each system to push it off the table if she felt like it?

          Do you know which model of Polaris they chose?

          I’ll tell you: It wasn’t.

          It was the Pascal Titan X. That AMD chose. To show off its own hardware in a supposedly favorable light.

          So please take the conspiracy theories back to whatever rumor site where you saw them and leave them there.

            • brucethemoose
            • 3 years ago

            They also use Intel chips to show off Radeons.

            • chuckula
            • 3 years ago

            I’m actually curious for the Vega launch now to see what they use.

            I’m guessing it will be an 1800X since it’s at least respectable in games and when they give canned demos they aren’t giving you detailed frame rate tests.

            But that’s not a 100% guarantee either.

            • brucethemoose
            • 3 years ago

            Skylake- E might launch before then.

      • chuckula
      • 3 years ago

      Yeah, about that Rx 490.
      Are you the one who has it?
      We’d like it back.
      — AMD

      • YukaKun
      • 3 years ago

      “No, this is strictly CPU scheduling within the game.”

      https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_creators_of_athlon_radeon_and_other/def4wcj/

      Cheers!

    • bhtooefr
    • 3 years ago

    So, about what I expected – Intel’s still in the lead in gaming, and Ryzen gets memory starved on some of the multi-threaded workloads that it should hypothetically be good at.

    That said, if I were doing a build today… OK, I think I’d actually wait for the Ryzen 5 1600X – still 6/12 for Intel 4/4 money, and the same clocks as the R7 1800X. I do a mix of workloads, and I suspect I could benefit some from the higher core count (if nothing else, to run the eleventy billion JavaScripts running in many different Vivaldi tabs…)

    But if I had to do it today? That’s a harder question to answer, but it’d be an i5-7600K or i7-7700K, because there are no Mini-ITX AM4 boards out yet.

    I’m not going to rush out and replace my i5-6600K, though.

    • cygnus1
    • 3 years ago

    This thing is definitely solid. Not the top of the top performer, but very well rounded. Seems like AMD has a tick-tock pattern for its major architectures. K6 was decent but had flawed FP, so it was lame for gaming. Athlon/Hammer was very well rounded and even managed to beat Intel for a time. Bulldozer again, OK for most things but hampered by a bad FP design. And now Zen is out and is pretty well rounded again. It’s not beating Intel’s top stuff now, but maybe the next version could. I wonder where it would’ve landed had it been able to handle 256-bit AVX2 at full instead of half throughput.

    Where Zen does shame Intel, in my eyes, is CoreCount+Performance/Dollar. I do think gaming, and even other apps, are going to take advantage of more cores/threads before too long. Commodity hardware (mobiles, consoles, etc) has been multi-core long enough, it’s going to happen. The bet is when it happens though. So if you want to make that bet that it’ll happen in the useful-lifetime of your next PC, Ryzen is definitely worth a strong look.

    • derFunkenstein
    • 3 years ago

    Hopefully Mark will have an AM4 review or two so we can see what the motherboards are like. The chipset is going to make or break this thing. Especially SATA and USB 3.1 transfer speeds.

      • DancinJack
      • 3 years ago

      From other reviews I’ve read, it seems everything is doing just fine. It only took another company making their chipsets for it to happen! (if ASMedia really DOES make them)

    • synthtel2
    • 3 years ago

    These are awesome, but I doubt the cheaper parts will be more competitive with the 7700K on anything other than price. Dropping core count allows for higher clocks if clocks are limited by thermals or power distribution, but the issue here is that it’s taking >1.4V to get beyond 4 GHz. Unless there are some very serious load line correction shenanigans here, a 4C version is still going to take a ton of voltage to clock high, and wearing out an OCed chip quickly is still going to be a big worry.

    What I haven’t seen so far is an actual voltage/frequency curve. Doesn’t Anandtech do those sometimes? I’m very curious about its shape. On a related note, has anyone mentioned stock voltages yet? I’ve hypothesized before that we’re nearing the end of the era of CPUs that last indefinitely, and the voltages we’ve seen for Zen don’t inspire any extra confidence in that regard.

    The way things are looking, if I get a 1700 (questionable due to lack of ECC support), I’ll probably end up running it at all-core 3.7. I can deal with thermals, but I’m barely comfortable pushing 1.36V into my old $70 CPU on a 22nm HP process, much less a new $330 CPU on a 14nm LP process.

      • stdRaichu
      • 3 years ago

      Most of ASRock’s AM4 motherboards at least say they support unbuffered ECC, so the support in the chips should be there.
      [quote<]AMD Ryzen series CPUs support DDR4 2667/2400/2133 ECC & non-ECC, un-buffered memory*[/quote<] One of the things that originally attracted me to ASRock boards in the first place was that they tend to enable ECC support wherever technically and economically possible. I've also had much easier times with their BIOS support under linux.

        • synthtel2
        • 3 years ago

        That was a good sign, but apparently these CPU SKUs don’t support it.

          • Ninjitsu
          • 3 years ago

          “We can confirm that ECC is enabled on the consumer Ryzen parts.”
          – AnandTech

            • just brew it!
            • 3 years ago

            Interesting. So it comes down to the motherboard makers then.

            I am very puzzled by the lack of ECC support on Asus’ first crop of AM4 boards. They’ve supported ECC on their consumer AM2/AM2+/AM3/AM3+ boards, why would they drop it now?

            • synthtel2
            • 3 years ago

            LOL, they changed their story since I last looked (bottom of this page: http://www.anandtech.com/show/11170/the-amd-zen-and-ryzen-7-review-a-deep-dive-on-1800x-1700x-and-1700/14). AnandTech’s “At present, ECC is not supported” now reads “ECC is supported.”

            Well, that clears that up, and I’ll almost certainly buy myself a 1700 in April.

            • just brew it!
            • 3 years ago

            Make sure the motherboard you choose has ECC support, if that’s a make-or-break feature for you. ASRock seems to be the only vendor claiming to support it at present, and I’d wait for independent confirmation that it actually works (and isn’t just a “ECC DIMMs work in non-ECC mode” thing).

            • synthtel2
            • 3 years ago

            That’s the biggest reason for the “almost” before the certainly – ECC + mITX in a mobo might still be a bit of trouble to track down at the rate things are going. At least I like ASRock, so that part isn’t an issue.

            Edit – looks like we’re getting so much conflicting information on it because AMD and the mobo manufacturers can’t even get the story straight. Link: https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_creators_of_athlon_radeon_and_other/defaoqj/

            • daniel123456
            • 3 years ago

            hey guys, according to the AMA, ECC “works but is not validated”. AKA up to MB manufacturer to enable.
            https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_creators_of_athlon_radeon_and_other/def6vs2/

            hope that link works

            • synthtel2
            • 3 years ago

            It’s always up to the mobo manufacturer to enable. “Not validated” is just AMD saying they’re not yet in a position to guarantee everything will work right, which is orthogonal to mobo support.

    • NovusBogus
    • 3 years ago

    It may not win many performance trophies but it’s nice to see AMD back in the game again. Finally, a viable alternative to the Intel x500/x700 hegemony! I’m eager to see if the 4-6 core offerings give up any gaming performance; if not they’re going to be a better overall value than i7-7700k and the like.

    • Solean
    • 3 years ago

    I game at 1440p.

    I need to know how much of a difference there is between AMD and Intel with 8 cores.

    If we are talking about 4-6 FPS in favour of Intel, I need to know. Because I’m not paying 600 euros more for just 6 FPS advantage, and 150 euros more for a 2011-3 motherboard.

    I need to know the performance per value at 1440p.

    I already know that Ryzen will be a major upgrade for me from my i7 980X @ 4Ghz on all 6 cores.
    I’m still a bit skeptical about chipset quality.

    My X58 chipset, Gigabyte UD5, has served me well since 2010. No hiccups. No problems. No errors.

    Gotta admit that Intel’s chipset is top notch, really really excellent products.

    But, if AM4 offers me 90% stability and quality of Intel’s 2011-3 motherboards, for 150-200 euros less, I’m sold.

    Plus, I would like to support the underdog.
    Plus, I watercool my rig, so I bet I could get 4.2GHz on all 8 cores (1×420 normal-thick rad, 1×280 Monsta thick rad, 1×240 slim rad).

    I’m waiting for EK to release a monoblock for Asus’s Hero VI motherboard, and if AM4 chipset is stable, I’ll probably buy.

      • DancinJack
      • 3 years ago

      I HIGHLY doubt you could get 4.2 on all eight cores. The voltage just won’t let you. It’s not about temperature of your mighty watercooling setup.

      edited: spelling lol scores instead of cores.

        • Solean
        • 3 years ago

        We don’t really know this actually until watercoolers experiment with Ryzen.
        Until now, all reviews are with stock or normal air coolers.

          • DancinJack
          • 3 years ago

          They SENT watercoolers with some Ryzen CPUs. It’s not about the heat, Solean.

    • chuckula
    • 3 years ago

    Now onto the next phase: Wait another 6 weeks for AMD to announce its Q1 earnings to figure out just how much of a hard launch this really was.

      • derFunkenstein
      • 3 years ago

      well it’s in stock right now on Newegg, so it must have been a hard launch. Out of stock on Amazon, though, so now the prices are up.

        • DancinJack
        • 3 years ago

        Right, but how much of a hard launch is a different story. We have no idea how many units they actually sold. And TBH i’d rather see e-/re-tailer numbers than AMD’s own just to get an idea of how many are in hands of actual people rather than on the shelves of Fry’s.

          • derFunkenstein
          • 3 years ago

          I don’t think you’ll ever see that.

            • DancinJack
            • 3 years ago

            Of course we won’t, but it’d be nice.

    • Misel
    • 3 years ago

    Thank you for including the FX-8370. I knew it was bad but now I have facts to back it up and can’t wait to upgrade. 😀

      • DancinJack
      • 3 years ago

      NOW you have facts? They’ve been here all along, you only needed to not deny them. The FX line has been not good, relatively, for a long, long time.

        • bhtooefr
        • 3 years ago

        And that “long, long time” has been, oh, since 2006 or so, IIRC.

    • slowriot
    • 3 years ago

    Amazon already gouging on the price and lack of stock. See the 1700X at $439 and 1700 bumped to $369.

      • Demetri
      • 3 years ago

      Those are 3rd party sellers, so no surprise. Amazon is completely out of stock but you can pre-order for MSRP.

    • ptsant
    • 3 years ago

    Ryzen has been selling like hot cakes here. Right after embargo lift the 1800X went to 3rd place in the bestseller list. I guess many people were waiting for the reviews.

    On the other hand, by prebooking my 1700X I got it at least $20 cheaper than the cheapest price in the country. Everyone is selling above MSRP…

    • OneShotOneKill
    • 3 years ago

    Can you return a delidded processor?

      • Redocbew
      • 3 years ago

      Doubtful, but someone has probably tried anyway.

      • chuckula
      • 3 years ago

      WE TOLD YOU NOT TO DELID YOUR RYZEN!

        • OneShotOneKill
        • 3 years ago

        RYZEN? Who would want to return that?

    • NeelyCam
    • 3 years ago

    Is there any significant difference in overall platform cost? Because if not, 1700 looks like a winner for my purposes. Would be a nice upgrade from 2600K…

      • Demetri
      • 3 years ago

      Platform cost is the same as a Kaby/Skylake system. Both are cheaper than Broadwell-E.

    • Takeshi7
    • 3 years ago

    Bravo AMD *golf clap*. At least they are putting competitive pressure on Intel again.

    • ronch
    • 3 years ago

    OK, it’s late now where I am so I’m going to save the big comment for later. But for now, all I can say is, Ryzen may not be the knockout punch we wanted it to be (and really, we somehow knew it wouldn’t be) but it sure is, practically speaking, a very compelling product.

      • paulWTAMU
      • 3 years ago

      Fairly competitive performance for substantially less cost? I like it. I’m not looking to build for another 2 years though, hope they keep improving and staying competitive until then

      • tipoo
      • 3 years ago

      Comment(s) 😛
      I was wondering where you were after all your pre-launch excitement.

    • srg86
    • 3 years ago

    Very impressive. From looking at the benchmarks, I wouldn’t call this an Athlon 64 or K7 moment. But I would call it a K6 moment. AMD has brought out a chip which is not faster in outright flat-out performance but is very close to Broadwell-E. Close enough, and priced right, to win the price/performance title.

    Sounds very similar to the K6 vs lower clocked Pentium II to me.

      • AnotherReader
      • 3 years ago

      Good analogy! However, the K6 clocked much lower than the PII and had an IPC deficit almost all around. Ryzen, on the other hand, does better than the K6 ever did relative to its competition.

      • cygnus1
      • 3 years ago

      Personally, I wouldn’t draw a corollary to the K6. IIRC, the K6 had major performance gotchas for FP; it was terrible for gaming but great for productivity. Ryzen doesn’t appear to have any such major gotcha. It’s pretty well rounded all things considered, but just doesn’t beat the absolute fastest from Intel.

        • tootercomputer
        • 3 years ago

        I have a K6-III 450+ overclocked to 600MHz; got it in 2001, and it still runs on an old Presario I have stashed away. A magic chip (on a surprisingly magic Via mobo).

        Initial impression: I found this review lukewarm, disappointed with Ryzen. I’m anxious to see what other sites have to say.

      • Anonymous Coward
      • 3 years ago

      Neither a K6 nor a K7 moment, but if I had to choose one, it’s closer to K7. Ryzen doesn’t match Intel at everything, but it’s quite competent. With the K6, I recall it being not just uninspiring at games but actually bad at them. I knew a couple of guys with K6-2s playing games, and compared to a P2, P3, or the early Celerons it was a really obvious and major problem.

        • AnotherReader
        • 3 years ago

        Yeah closer to a K7 moment than a K6. However, the K7 beat the PIII across the board and clocked higher too (700 MHz on 250 nm vs 600 MHz for the PIII).

        • srg86
        • 3 years ago

        Well, the extra price/performance the K6 had, when buying our first PC in 1997 allowed us to get a 15″ monitor upgrade and a sound card on top of the base system and stay within our budget.

        So that may give me rose-tinted glasses about that chip. I still have it BTW (the CPU, not the system).

        Sadly I have to agree about the K6-2; in hindsight I should have gone with a Mendocino Celeron. I never was a game player, but I remember having to get an MPEG2 decode card to play DVDs because the FPU couldn’t keep up. I also remember the dodgy AGP in the chipsets.

        • Concupiscence
        • 3 years ago

        I’d say the K6 was a dog. The K6-2 had substantial advantages over the original K6: the addition of 3DNow! instruction set support and significantly improved execution units. As 3DNow! support expanded it became much more tenable, though it was still a budget option. For day-to-day use, multimedia, and DOS titles, it was better than fine.

        I’d say Ryzen’s debut is more on par with the K6-2 than the K7; the original Athlon set people’s hair on fire. The architecture is a solid return to a competitive position for a CPU division that spent the last half decade looking nearly as lost as Cyrix did 20 years ago. Let’s see how the cheaper models shake out, and how it fares once the platform growing pains and initial support issues fade over time.

        Edit: whoops! Misremembered an L1 cache bump. Thanks, srg86!

          • srg86
          • 3 years ago

          The L1 of the K6 and K6-2 was the same size 32KB+32KB.

          The K6 had a single MMX unit, which on the K6-2 was replaced with two pipelined MMX/3DNow! units.

          K6-2 CXT core added write combining and 2 MTRRs.

          K6-III added 256KB of full-speed on-die L2 cache.

    • djayjp
    • 3 years ago

    Benchmark non-SMT mode pretty please.

    • Kougar
    • 3 years ago

    Excellent article, thanks Jeff!

    Very interesting to see how the 2 vs 4 memory controller arrangement is playing out, wonder if Zen 2 will be adding more.

    For that overclocking article it would be great to see how much ground Ryzen can make up from a smaller clockspeed gap. Especially once the motherboard EFI has matured a bit more.

      • Anonymous Coward
      • 3 years ago

      There is a latency penalty associated with over-complicated hardware … are you seeing Intel’s quad-channel parts dominating much of anything in games?

        • derFunkenstein
        • 3 years ago

        That has more to do with limited threading in most games and the relatively low clock speed of the CPU cores.

        • Kougar
        • 3 years ago

        I recall two games where Intel’s 8+ core chips performed well and were closer to the 7700K than Ryzen was to them, despite the clockspeed differences.

        There also were games where Intel’s 8+ core chips did as badly as or even worse than Ryzen, but the 7700K still did excellently. Those would be the games AMD could do better in with just higher clocks. Since many users plan to OC anyway, it would make Ryzen more competitive in some titles.

      • bhtooefr
      • 3 years ago

      One way I’d like to see things done is… yank a couple DIMMs from the i7-5960X (or an i7-6900K), and re-run the benchmarks, then compare to Ryzen again.

      It’s not representative of what Ryzen would do with 4 channels, but it might illustrate some of the disparity.

        • Kougar
        • 3 years ago

        It’d be one way to verify that memory bandwidth is the bottleneck, yeah.

        After reading some other sites, it appears turning SMT off can bump both average and minimum FPS by ~10 in some games, which in and of itself is a problem. I’m guessing it’s a Windows scheduling issue, but who knows. There’s also the usual issue with Windows requiring the High Performance profile setting so it doesn’t park cores it’s scheduling threads on, which also was dropping FPS.

          • bhtooefr
          • 3 years ago

          Yeah, the instant I saw that SMT was affecting things to that extent, my first guess was scheduling issues…

    • wingless
    • 3 years ago

    Not bad for their first real competitor in a decade! I like that the uArch looks to have room for improvement, so we may see gains year after year. All in all, the price/performance ratio of these parts really ticks the biggest box for me.

    Thanks for not sucking anymore, AMD!

    • cybot_x1024
    • 3 years ago

    Everyone here keeps saying the results are underwhelming. These chips were targeting Intel’s HEDT chips (5960X/6900K/6950X), not the desktop line. Of course the 8-core chips wouldn’t be able to clock as high as the 4-core chips.

    This is like getting disappointed at the 6900K because it doesn’t beat the 7700K at gaming.

    In any case this is extremely impressive! Two 95W chips and a 65W chip are holding their own against Intel’s 140W chips! Not forgetting that Intel has been “improving” these for the last decade.
    Stop being ridiculous. If you really want processors that compete with the 7700K/6700K/7600K wait for the 6C/12T and 4C/8T Ryzen chips.

    Well done AMD! Quite the impressive comeback.

      • chuckula
      • 3 years ago

      “Two 95W chips and a 65W chip are holding their own against intel’s 140W chips!”

      Indeed, AMD has invented a fascinating way of turning 95 watts into 140 watts.

        • Waco
        • 3 years ago

        They certainly got idle power down, but load power completely ignores that “95” watt rating…

          • DancinJack
          • 3 years ago

          I’m pretty sure they’re using their own “ACP.” Or iow, “it’ll use this much power at SOME POINT while in use, have fun!”

          I think the “real” TDP is closer to 140W IMO.

          • chuckula
          • 3 years ago

          The idle power is quite good, but then again the HEDT platforms just have a lot more crap in the platform sitting around and drawing power at idle (plus they just don’t idle as aggressively as a desktop or mobile platform). AMD’s overall AM4 platform looks pretty power efficient, but it’s also not trying to run as much stuff.

          We’ll see what some of the sites that actually measure direct CPU power in isolation from the rest of the platform find during their testing.

        • cybot_x1024
        • 3 years ago

        “Indeed, AMD has invented a fascinating way of turning 95 watts into 140 watts.”

        Looks like they must have borrowed that from Intel. http://media.bestofmicro.com/U/G/640024/original/04-Power-Consumption-PTU.png

          • chuckula
          • 3 years ago

          Good, so where are the results of running that application on the 1800X?

      • Anonymous Coward
      • 3 years ago

      You can’t pretend this isn’t aimed at i7-7700. AMD’s lower core count parts are going to be slower, not better.

        • dragontamer5788
        • 3 years ago

        The 1600x has been announced as 3.6GHz and 4.0GHz boost.

        Which means the 6-core 1600x will be hundreds of dollars cheaper, but roughly the same single-threaded performance as the 1800x.

        The lower-core count parts will be cheaper, but not slower (from a single-threaded perspective anyway). With two disabled cores, the 6-core may end up being the price / performance champ. Or hell, maybe even the single-threaded champ (fewer cores usually means you can overclock better)

        I personally can make use out of the 8-core however. LTSpice, Stockfish, Sony Vegas, Handbrake, Visual Studio all benefit from high core counts.

          • raddude9
          • 3 years ago

          The 1600X looks fine, but I think the prospective Ryzen 3 1200X might be the better-value gamer CPU. Four real cores (but only 4 threads) with a high clock speed and a projected price of $150. That’ll make it $30 cheaper than the high-end Kaby Lake dual cores and $110 cheaper than the 1600X. $100 goes a long way to getting someone a better graphics card….

      • Bensam123
      • 3 years ago

      Yup… Surprising how many people are making light of the whole price thing as well… LOL

        • chuckula
        • 3 years ago

        Number of high school dropouts working at Walmart with a price gun required to cut prices on existing products: 1.

        Number of PhDs required to build the next generation of chips: More than 1.

    • maroon1
    • 3 years ago

    “In fact, AMD exceeded its ambitious 40% instructions-per-clock improvement target. Some of our directed tests actually showed as much as a 50% single-core boost from Piledriver to Zen. AMD is deservedly proud of this accomplishment.”

    AMD’s 40% claim was over Excavator, not Piledriver. AMD did not exceed its target.

      • chuckula
      • 3 years ago

      Yeah, when Lisa Su said “52%” to beat their previous projection of 40% I was assuming that was also over Excavator.

        • DrDominodog51
        • 3 years ago

        She was using CEO math. Duh…

      • Firestarter
      • 3 years ago

      the claim was 52% better IPC, and Ryzen has lower clocks

        • chuckula
        • 3 years ago

        A high-end RyZen most certainly does not have lower clocks than a Carrizo part with Excavator cores, especially in single-thread mode with aggressive turbo-boost.

    • GreatGooglyMoogly
    • 3 years ago

    Thank you very much for including DAWBench (something I’ve suggested you use before)!
    Since I’m mostly interested in compiler and DAW performance, it’s very much appreciated. A shame you didn’t have time to compare with the 6850K, which is the CPU I’ve been considering upgrading to for better DAW performance, and which has a more comparable price to the 1800X. The 6950X isn’t even a curiosity since it costs 3 times as much (at least right now)…
    Since gaming performance is a low priority for me (or, rather I think my 2600K w/ GTX1080 does well enough already), I’m a bit unsure still. Might wait and see what Intel does with their overpriced HEDT lineup.

    • Leader952
    • 3 years ago

    “AMD did provide reviewers with its own internal measurements of cache bandwidth and latency data for Zen. We won’t be diving deep into these numbers, but it is interesting to see how Ryzen chips’ cache hierarchies stack up against their Broadwell-E nemesis.”

    The reason no review should ever use a vendor’s supplied data without testing it is that the data and conclusions can both be very, very wrong. Something other reviews caught while testing:

    “We measured performance with the utilities and achieved similar results for Intel’s Core i7-6900K, but we also noticed a large gap between the AMD-provided Ryzen measurements and our test results. Ryzen’s L3 cache latency measured 20 ~ 23ns, which is double the provided value. Due to some of the performance characteristics we noted during our game testing, we also tested with SMT enabled and disabled, but the results fell within expected variation. We also measured a ~10ns memory latency gap in favor of the Intel processor.”

    http://www.tomshardware.com/reviews/amd-ryzen-7-1800x-cpu,4951-5.html

    Edit: Here is the analysis of the “tricks?” used by AMD in benchmarking Ryzen before the launch: http://www.gamersnexus.net/hwreviews/2822-amd-ryzen-r7-1800x-review-premiere-blender-fps-benchmarks/page-8

    These tricks (cheats?) show why not to use vendor-supplied data as fact without verification.

      • RAGEPRO
      • 3 years ago

      Yeah, but if you read right there on the page, they’re using tools that are known to measure inaccurately. That’s not just by AMD’s word, either — the AIDA64 guys specifically said that it doesn’t work right on Ryzen. I would take Toms’ results with a handful of salt.

        • Leader952
        • 3 years ago

        “I would take Toms’ results with a handful of salt.”

        At least Tom’s tested it. What we got here was AMD’s numbers, which we know need to be taken with a ton of salt.

        Reviews should never ever just publish vendors’ data without verification. If they can’t verify, then don’t put it in the review as if it is 100% true.

    • Chrispy_
    • 3 years ago

    I think I’m happy with AMD’s end result here.

    It’s, broadly speaking, an IPC match for Haswell, but AMD are effectively offering i7-5960X performance at half the price. If you’re in the market for Haswell-E or Broadwell-E, you’ll be all over Ryzen like a rash.

    Gaming is all about IPC and clockspeed, so Ryzen’s Haswell-esque IPC provides Haswell-esque gaming performance. It’s not bad but you wouldn’t buy Ryzen for a gaming-focused build just yet.

    Perhaps the more mature fab process will give better yields and higher clocks to the quad-core R5 models, and some SMT scheduling improvements (that’s driver-level, right?) will improve Ryzen’s gaming prowess. Also, presumably more thermal headroom for a chip with half the cores, too.

    Nice review Jeff. DRM ruining everything as always and I look forward to your overclocking results and update of the tables with the remaining benchmark results!

      • Vhalidictes
      • 3 years ago

      This looks like the perfect CPU for people who game on the side, but need more cores for various task sets. The price doesn’t hurt at all, and in fact “pure gamers” who want to brag about core count might get these anyway.

      Low-CPU overhead graphics middleware and the generally increasing ability of software to use more cores are tailwinds.

      However, this review has mentioned the doom on the horizon – the memory bandwidth. It’s actually really good for a dual-controller system, but that doesn’t matter when you really need four or more memory controllers.

      Zen2 needs to address this, or RyZen could end up a flash in the pan.

        • Chrispy_
        • 3 years ago

        Yep, indeed.

        I’m running an i7-4790 in my home office right now, and that machine is about a 50/50 split between ESXi test environments and gaming. I suspect when I get the upgrade itch it’ll be a Ryzen 7 1700 in there instead. Fast enough that I won’t lose any gaming performance, but massively superior for anything multi-threaded.

        Even though a 1700 is one-third the cost of an equivalent Broadwell-E, the most important benefit for my home machine (rather than the servers at work, well out of earshot in a dedicated server room) is that it uses only 65W instead of 140W for the Intel platform.

        Performance/Watt is where it’s at these days and AMD just knocked one out of the park. I’m hoping that AMD can get some big OEM wins for Zen architecture in the enterprise space. Nobody is really mentioning that here since Jeff hasn’t had time to finish his power consumption testing, but it looks like AMD have just doubled Intel’s Broadwell-E power efficiency.

        O.o indeed.

          • DancinJack
          • 3 years ago

          If you go look at the power consumption results from other places (PCPer, for instance), AMD’s TDP is more of an “ACP,” as I think they’ve called it in the past. Even though the 1800X is a 95W CPU, it pulls down 155W at load. I’d wait for more detailed power consumption results before I’d bank on that AMD CPU actually using only 65W.

          https://www.pcper.com/reviews/Processors/AMD-Ryzen-7-1800X-Review-Now-and-Zen/Power-Consumption-and-Conclusions

    • Anovoca
    • 3 years ago

    Nothing surprising here, but a great review all the same. I think the more interesting comparisons will come between these and their Xeon counterparts. If Intel is true to its word in cutting off all support for workstation Xeon builds, AMD could fill that hole nicely. As someone who personally owns a Skylake Xeon build for a HTPC/Plex server, I would love to see how it stacks up to one of these chips.

    • GrimDanfango
    • 3 years ago

    I still haven’t been able to find any information beyond the rumoured chip names – does anyone know anything about the “Ryzen 7 Pro 1800” and “Ryzen 7 Pro 1700”?

    Are these chips actually a thing wot exists? Is there any speculation that these might come with a platform that is more “Pro”-oriented – ie, more PCIe lanes and more DDR4 slots – maybe quad channel? Or is such a thing unlikely to be appearing?

      • Waco
      • 3 years ago

      Pro is the OEM channel version. The die only has 16 PCIe lanes and dual-channel memory controllers, so they can’t just add those in. EDITED

        • RAGEPRO
        • 3 years ago

        16 lanes, Mr. President.

          • Waco
          • 3 years ago

          Bah, I don’t know why I keep mixing them up. Been looking at too much Naples stuff recently.

          • GrimDanfango
          • 3 years ago

          Isn’t it actually 24 lanes? 🙂 Only 16 exposed directly as expansion slots though. 4 to storage and 4 to the chipset.

            • Waco
            • 3 years ago

            Sure, but you can’t use the other lanes. I was hoping the rumors of it having 32 internally were true, which would have given 24 lanes to the slots with 8 reserved for IO and chipset…

        • GrimDanfango
        • 3 years ago

        Ah right, gotcha. So no real point holding out for a xeon-esque spin off of Ryzen…

        It’s a shame… the 64GB RAM limit is the main stumbling block for me. I could live with the PCIe lanes, but fluid sim consumes *all* of the RAM, and 64 will likely become a bottleneck within a year or less for my workloads.

          • Waco
          • 3 years ago

          Yeah, there won’t be more capacity until larger modules launch, or until the Opteron versions of these chips launch (which will have more 8 core dies in them, and thus, more memory channels).

            • GrimDanfango
            • 3 years ago

            On that note… is there any indication that 32GB unbuffered modules will be turning up inside of the next year or so? Or is it probably further off than that, given few people bother even going above 16GB total RAM at the moment?

            • Waco
            • 3 years ago

            No clue. I’d assume the same timeframe that 256 GB modules hit the market for servers. 😛

    • Gadoran
    • 3 years ago

    Is Turbomax 3.0 (all-cores option on) active in this great review? Thanks for the answer 🙂

    • slaimus
    • 3 years ago

    I am most impressed by the fact that the CPU got to such high clock speeds on the supposedly inferior GF/Samsung 14nm node.

      • Gadoran
      • 3 years ago

      Looking at other reviews, Ryzen looks pretty clock-limited due to high voltages. Apparently over 3.8GHz the voltage goes up very quickly… we’ll see 🙂

      • Anonymous Coward
      • 3 years ago

      The capability of AMD & their fab partners to make this thing at these clocks is at least as impressive as the design itself. Not bad at all. Intel is looking less untouchable than in the past.

      • DancinJack
      • 3 years ago

      It’s not supposedly, it’s a fact of science.

      • raddude9
      • 3 years ago

      The high clock speeds are one thing, but what’s more impressive is that they seem to have done it while keeping to some very reasonable power consumption levels as well.

      • tipoo
      • 3 years ago

      Anyone know if it’s a new revision of the node? Newer than the RX480 I mean. Probably differences in being tuned for CPU clocks vs GPU clocks alone I assume.

      • Turd-Monkey
      • 3 years ago

      I think for Zen v1, AMD chose to optimize for die size / complexity. (Which often helps for clock speed)

      The design choice that best illustrates this is the optimization of the FP pipeline for 128-bit operations. This reduces resource requirements at the cost of slower 256-bit AVX2 operations. (Still faster than CPUs without AVX2!)

      (We know AVX2 has an effect on clocks because Xeon E5 CPUs with AVX2 reduce their base and turbo frequencies when running AVX heavy workloads and Broadwell-E and Kaby Lake BIOSes allow you to manually tweak this AVX offset to help when overclocking.)

      Many workloads that benefit from thread parallelism also benefit from AVX. Skylake-E (512-bit AVX) and Coffee Lake (6-core equivalents to the 7700K, 7600K) will likely squeeze Ryzen 7’s advantages from both sides.

      I still think AMD made the right choice to target what they did, Zen is a great value for many workloads, decent at gaming, and they have a good base to work from.
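
      To put rough numbers on that 128-bit design choice: peak vector throughput scales with vector width, so executing 256-bit AVX2 ops as two 128-bit halves cuts peak FLOPS in half at equal clocks and pipe counts. The sketch below is purely illustrative; the core count, clock, and FMA-pipe figures are assumptions chosen for the comparison, not published specs.

```python
# Back-of-the-envelope peak single-precision FLOPS, purely illustrative.
# Core count, clock, and FMA pipe count are assumptions; the point is only
# that halving the vector width halves the theoretical peak.
def peak_gflops(cores, ghz, fma_pipes, vector_bits):
    lanes = vector_bits // 32   # 32-bit float lanes per vector register
    flops_per_fma = 2           # one multiply plus one add per lane
    return cores * ghz * fma_pipes * lanes * flops_per_fma

print(peak_gflops(8, 3.6, 2, 256))  # hypothetical full-width 256-bit datapath
print(peak_gflops(8, 3.6, 2, 128))  # the same design with 128-bit datapaths
```

      Real chips complicate this with AVX clock offsets and memory limits, but that 2:1 ratio is the gap being described above.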

    • USAFTW
    • 3 years ago

    I really hope the gaming results are due to a lack of optimization at the firmware, OS, or game-engine level.
    Aside from that, Ryzen seems like a very strong base upon which AMD can build future Zen+ iterations.
    For now, I’ll be sticking with my 4670K, and I’ll keep an eye out for other gaming tests to see if these results are the norm rather than the exception.
    Still, well done AMD, bravo!

    • Concupiscence
    • 3 years ago

    I wonder how much the old Piledriver designs will go for in a month or two…

    • Arclight
    • 3 years ago

    When will the refresh happen this time? I know it’s early to ask but still, they’ll need it in order to catch up in terms of performance per core.

    • Mr Bill
    • 3 years ago

    Thanks for all your hard work Jeff, the TR review is the one I wait to read. Gonna have to print out that configuration page and read it again. I’m not very conversant in the model number vs threads, cores, memory channels, and clocks of Intel chips. However, your comments on each test made that more clear. The X-class looks pretty sweet.

    • jokinin
    • 3 years ago

    I think that in my case, I’d rather get a 4C/8T with higher clock speeds to replace my ancient i5 3550.
    Let’s hope we’ll see higher clocked Ryzens in the near future.

    • AnotherReader
    • 3 years ago

    Great review Jeff! I should have said this in my first comment after reading the review, but better late than never. I look forward to the further tests that you couldn’t do due to issues beyond your control.

    • Freon
    • 3 years ago

    Glad to see the famous frame data presented here, certainly clears a lot of things up. Too many other sites seem to have rushed out just averages and min fps, which is always a bit questionable.

    Nothing too surprising here, but interesting to see that sometimes the 6950X will beat the much faster clocked 7700K which would indicate core/thread scaling, and yet the 7700K can still beat the 1700/1800X. Perhaps there is some thresholding going on in some individual threads where you need a certain clock/IPC, then after that extra cores keep context switching down. This is a very complex subject…

      • RAGEPRO
      • 3 years ago

      It’s about the memory bandwidth and cache. Lots of stalls in games are caused when the machine has to hold up on later-level cache or main memory to come back with some data. Having those two extra memory channels and 25MB L3 makes all the difference in the 99th percentile.

    • Andrew Lauritzen
    • 3 years ago

    Nice review Jeff, and quite a nice CPU! The “non-gaming” price/performance graph is definitely fairly telling. Anyone who actually expected it to beat a 7700k in gaming was fooling themselves, but hopefully people can start to see past gaming a little bit. (https://techreport.com/discussion/31179/intel-core-i7-7700k-kaby-lake-cpu-reviewed?post=1015257#1015257)

    The other interesting bit is how much Skylake+ and Ryzen have really separated themselves from previous generation chips, even the 4790k. The Crysis 3 99th percentile results are somewhat crazy – the 7700k is 1.34x faster than the 4790k! And the 2600k and 3770k are definitely starting to look pretty pathetic in these benchmarks.

    That’s in stark contrast to when Skylake came out. I wonder what mix of workloads evolving/being updated, different tests/configs and otherwise that might be.

      • Prestige Worldwide
      • 3 years ago

      I’m feeling the Sandy-Bridge struggle right now. It can’t keep up with my GTX 1080 @ 1080p in BF1.

        • Firestarter
        • 3 years ago

        From what I’ve seen, Ryzen does well in BF1 in DX11 mode, but struggles in DX12 mode:

        DX11: https://www.computerbase.de/2017-03/amd-ryzen-1800x-1700x-1700-test/4/#diagramm-battlefield-1-dx11-multiplayer-fps

        DX12: https://www.computerbase.de/2017-03/amd-ryzen-1800x-1700x-1700-test/4/#diagramm-battlefield-1-dx12-multiplayer-fps

        The i7-7700K is still fastest in DX12, but the 1800X can almost catch up in DX11. It seems weird, but I guess the Frostbite DX12 renderer may use less CPU time in total but isn’t multithreaded as well as the DX11 renderer.

      • Kretschmer
      • 3 years ago

      If gaming is your most demanding CPU application why would you want to “see past” it? Should I buy a chip based on benchmarks that don’t apply to me?

        • RAGEPRO
        • 3 years ago

        Where did he say anything that implied those people should do that?

          • Kretschmer
          • 3 years ago

          “but hopefully people can start to see past gaming a little bit”

            • RAGEPRO
            • 3 years ago

            But can you explain to me how that specifically implies that he thinks that people for whom gaming is their primary or most demanding workload should look at other benchmarks?

            It’s a casual statement made offhandedly. All Andrew is saying is that some people seem to fixate on gaming benchmarks, even when gaming may not be their primary or most demanding use-case.

            You’re taking it and inferring a much more specific meaning that was not implied.

    • geekl33tgamer
    • 3 years ago

    A great chip, and thankfully not a repeat of bulldozer. For my use however, I am a little underwhelmed.

    My 3-year-old 4790K looks like it’s not getting taken out of gaming duty anytime soon. It also runs 32GB of RAM at 2666MHz in a 4 x4 config just fine, seeing as memory speed is being talked about a lot with Ryzen at the moment. Hopefully BIOS updates can address that.

    For gamers, the CPU to wait for may well be the 4C/8T or a just a straight up 8C with no SMT part that will run higher clock speeds. That chip could be very exciting. 🙂

    • Gadoran
    • 3 years ago

    Over 1.4V to get only 4GHz… come on, AMD :(. A bad surprise for me, I have to say. Is it the process at GloFo or the architecture???
    Intel achieves 4GHz at 1.15V on top parts 🙁

      • Chrispy_
      • 3 years ago

      Don’t forget, that’s 4GHz on an eight-core model. Intel’s 8-core Haswell runs at just 3GHz, whilst the 8-core Broadwell-E runs at 3.2GHz.

        • Gadoran
        • 3 years ago

        Yes, but the point is the voltage: too high. For comparison, the i7-6900K does a 4GHz overclock (all cores active) at 1.15/1.2V depending on the sample. And that’s only Broadwell, not a champion in clock scaling.
        This tells you something about the strangely high power consumption of the Ryzen cores (not the platform) under Cinebench in other reviews around the web.

    • LocalCitizen
    • 3 years ago

    i don’t see it in the comments yet, at least not enough, so let me just say what a fabulous review Jeff has done here. i think you have done more tests and shown more results on the strengths and weaknesses of the zen architecture than the other guys. you have shown a more complete picture of what zen can do (or cannot do). take a look at the a-guys (without naming names): their review doesn’t even have any games, which is i think a step too far outside of their core (ha!) audience.

    so please get some good sleep, and gosh some good food too Jeff. i think we have a lot more to explore about zen for a while. (for example, the t-guy noticed by turning off smt, games can run up to 10% faster. that’s interesting)

      • w76
      • 3 years ago

      Yes, loved the review, the most important (to me) review in quite a few years! Long live the value scatter plot.

    • snowMAN
    • 3 years ago

    How does Ryzen compare to Intel when compiling C++ code, say the Boost libraries, with an appropriate number of parallel GCC invocations?

      • Manabu
      • 3 years ago

      See the Qt GCC benchmark. Qt is C++.

        • snowMAN
        • 3 years ago

        Right, thanks, it’s so old I was thinking it was C.

          • just brew it!
          • 3 years ago

          C++ has been around since the 1980s.
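
    For anyone who wants to run a rough version of that parallel-build comparison themselves, here is a minimal sketch of timing the same set of g++ invocations at different job counts, similar in spirit to make -j. The source path is a placeholder, and this is only an illustration, not the Qt GCC benchmark used in the review.

```python
# Rough sketch: time a batch of independent g++ compiles at several job counts.
# Requires g++ on PATH; "src/*.cpp" is a placeholder for any pile of C++ files.
import glob
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

def compile_one(src):
    # Compile one translation unit to an object file; -O2 keeps g++ busy
    # enough to resemble a real build.
    subprocess.run(["g++", "-O2", "-c", src, "-o", src + ".o"], check=True)

def timed_build(sources, jobs):
    # Launch up to `jobs` compiles at once, like `make -j<jobs>`.
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        list(pool.map(compile_one, sources))
    return time.perf_counter() - start

if __name__ == "__main__":
    sources = glob.glob("src/*.cpp")
    for jobs in (1, 4, 8, 16):
        print(f"-j{jobs}: {timed_build(sources, jobs):.1f}s")
```

    An 8C/16T part should keep pulling ahead as the job count rises, provided the build has enough independent translation units to go around.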

    • Dazrin
    • 3 years ago

    Can we guess at the 1600X performance from this?

    If it is clocked the same as an 1800X but only has 6c/12t, that should mean the gaming (less multi-threaded in general) performance will be about the same as the 1800X (probably just under), but at 60% of the cost. The non-gaming (more multi-threaded) performance will be around, but slightly lower than, 1700 levels (higher clocks / fewer threads). Say, on par with the 6700K or 7700K.

    Obviously just a guess, but at a $260 price that seems to be a decent value. Still below the i7-7700K in gaming but about the same for non-gaming, at an MSRP about $80 less.

    • Concupiscence
    • 3 years ago

    Current long-term plan remains in place: keep the i5 6600K for gaming and emulation, build an 8 core Ryzen box as a replacement workstation later this year after the platform’s matured and operating systems in the wild are tuned for the architecture, and demote my FX-8320e to duties as a VM host and server in the corner. I’m so glad Ryzen’s put AMD back in the game at reasonable price points. Thanks for a great review, TR!

    • AnotherReader
    • 3 years ago

    Another comment. What the hell is wrong with these game publishers? We can decide if our systems are fast enough to play a game. Does this mean that in 2022, I won’t be able to play Deus Ex on the Ryzen 7 6800X?

    • Unknown-Error
    • 3 years ago

    Tom’s Hardware has a pretty extensive review and it looks like SMT is hurting Ryzen’s Gaming performance. And at times the difference is quite noticeable.

    http://www.tomshardware.com/reviews/amd-ryzen-7-1800x-cpu,4951-6.html
    http://www.tomshardware.com/reviews/amd-ryzen-7-1800x-cpu,4951-7.html

    Optimization issues?

    Edit: Same from GamersNexus: http://www.gamersnexus.net/hwreviews/2822-amd-ryzen-r7-1800x-review-premiere-blender-fps-benchmarks/page-7

      • Manabu
      • 3 years ago

      Hardware.fr found the same: http://www.hardware.fr/articles/956-7/impact-smt-ht.html

      I would also be interested in how the CPU overclocks with SMT disabled.

        • Firestarter
        • 3 years ago

        computerbase.de found the same, about 3% in their gaming tests: https://www.computerbase.de/2017-03/amd-ryzen-1800x-1700x-1700-test/4/#abschnitt_vor_und_nachteile_durch_smt

      • DragonDaddyBear
      • 3 years ago

      They do have a 4 core non SMT CPU that would be interesting to bench next to the SMT enabled CPU.

      • chuckula
      • 3 years ago

      AMD did warn us that Intel would sabotage RyZen’s launch!
      It looks like Intel snuck in and dropped SMT on their chips!

        • Anovoca
        • 3 years ago

        I blame Russia!

          • Redocbew
          • 3 years ago

          In soviet Russia, threads run you!

        • Unknown-Error
        • 3 years ago

        I blame Neelycam

          • NeelyCam
          • 3 years ago

          [url=https://www.youtube.com/watch?v=jeGTt08xdWA<]I did it![/url<]

      • just brew it!
      • 3 years ago

      “Optimization issues?”

      Or just a less efficient SMT implementation.

        • cegras
        • 3 years ago

        But that seems to only be the case for gaming. How different are gaming and encoding / computational workloads? With SMT off, there are huge boosts in 1% / 0.1% FPS.

          • RAGEPRO
          • 3 years ago

          Very, VERY different. As different as they can be.

            • cegras
            • 3 years ago

            My first guess would be that games have lots of boolean logic versus HPC. I’m not sure how SMT would make it worse though, that would seem to penalize bad branch predictors.

            • just brew it!
            • 3 years ago

            In general, the effect of SMT on throughput in multi-threaded code is hard to predict. It tends to help… except when it doesn’t. 😉

          • just brew it!
          • 3 years ago

          I’m pretty sure encoding/HPC workloads tend to be more regular/predictable.

        • Manabu
        • 3 years ago

        AMD believes it can/should be fixed at software level. One theory I heard is game programmers abusing thread affinity.

        From Tom’s Hardware (http://www.tomshardware.com/reviews/amd-ryzen-7-1800x-cpu,4951-12.html):

        “For instance, we discovered Ryzen’s tendency to perform better in games with SMT disabled. Could this be a scheduling issue that might be fixed later? AMD did respond to our concerns, reminding us that Ryzen’s implementation is unique, meaning most game engines don’t use it efficiently yet. Importantly, the company told us that it doesn’t believe the SMT hiccup occurs at the operating system level, so a software fix could fix performance issues in many titles. At least one game developer (Oxide) stepped forward to back those claims. However, you run the risk that other devs don’t spend time updating existing titles.”
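
        To make the thread-affinity theory concrete: two busy threads pinned onto logical CPUs that are SMT siblings have to share one physical core’s execution resources, while the same threads pinned to different cores do not. Below is a minimal, hypothetical sketch (Linux-only; the CPU numbering is an assumption, check /sys/devices/system/cpu/cpu*/topology/thread_siblings_list on your own box) comparing the two placements. It is only an illustration of why careless affinity choices can cost performance, not how any particular game engine actually schedules its threads.

```python
# Hypothetical illustration: the same CPU-bound work pinned to SMT siblings
# vs. two separate physical cores (Linux-only).
import os
import time
from multiprocessing import Process

def spin(n=30_000_000):
    # Integer busy-loop standing in for one game/worker thread.
    x = 0
    for i in range(n):
        x += i & 7
    return x

def pinned_worker(cpu):
    # Pin this worker process to a single logical CPU, then do the work.
    os.sched_setaffinity(0, {cpu})
    spin()

def run_pinned(cpus):
    # Run one pinned worker per listed logical CPU and time them together.
    procs = [Process(target=pinned_worker, args=(c,)) for c in cpus]
    start = time.perf_counter()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    # Assumed topology: CPUs 0 and 1 are SMT siblings of one core,
    # while CPUs 0 and 2 sit on different physical cores.
    print("SMT siblings  : %.2f s" % run_pinned([0, 1]))
    print("Separate cores: %.2f s" % run_pinned([0, 2]))
```

        On a part where the siblings placement is meaningfully slower, a scheduler (or a game’s own affinity logic) that spreads work across physical cores first will look noticeably better than one that does not, which is one way the software-level fix AMD alludes to could play out.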

          • 1sh
          • 3 years ago

          Especially in Doom when you compare Ryzen’s performance with OpenGL to Vulkan.

      • BaronMatrix
      • 3 years ago

      Well, I’d rather it worked better on productivity apps… And SMT is HARD… It was totally broken for a year or two for Intel…

      Still my upgrade, still LONG…

      • USAFTW
      • 3 years ago

      This must be the first time since my P4 Prescott that I see enabling SMT (or HT in Intel’s parlance) cause such a drop in gaming performance.

        • Unknown-Error
        • 3 years ago

        Ewwww………..You just had to remind us of P4.

        • 1sh
        • 3 years ago

        It happens with Core i7 as well…
        https://www.techpowerup.com/forums/threads/gaming-benchmarks-core-i7-6700k-hyperthreading-test.219417/

      • maroon1
      • 3 years ago

      And yet it’s still losing to Kaby Lake and the lower-clocked i7-6900K.

      • ptsant
      • 3 years ago

      Need more intelligent scheduling. Can be fixed with a driver update that is aware of the underlying resources. Bulldozer had the same problem: running 4 threads on 4 modules is not the same as running 4 threads on 2 modules.

      • Peter.Parker
      • 3 years ago

      Everybody knows that is just fake news. I would stick with our alternative facts.

        • Unknown-Error
        • 3 years ago

        True, the Anunnaki aliens have been helping Intel with its microarchitectures. Below you’ll see the Intel design team receiving messages from the Anunnaki:

        [url<]http://static.wixstatic.com/media/4e2f7c_f2a7a33b87d7001773aaf0e7eec47d36.jpg_srz_486_321_85_22_0.50_1.20_0.00_jpg_srz[/url<]

      • ronch
      • 3 years ago

      Back in 2010 when AMD put out its first Bulldozer and Bobcat video they went to great lengths bashing SMT and talking about how great CMT was. So yeah, SMT is not cool. Let’s stick with our Bulldozers! 😀

      • Bensam123
      • 3 years ago

      Ever try turning Hyperthreading off on Intel CPUs? It helps as well, not always increasing average FPS, but definitely reduces micro-stuttering. Windows isn’t super amazing at managing virtual cores and loading them up with appropriate workloads.

      This isn’t just an AMD-only thing.

      • Meadows
      • 3 years ago

      The obvious questions are: Can you turn off SMT? And if not, will Windows 10 get an appropriate update?

    • odizzido
    • 3 years ago

    I think this is pretty solid. I never expected an 8-core CPU to deliver better gaming performance per dollar than a four-core one.

    When you compare this to Intel’s 8-core CPUs, things look pretty good for AMD. I would buy any of these Zen processors over a 5960X without a second thought.

    I am looking forward to seeing Zen in 4/6-core form, which I think will be more competitive with the 7700.

      • Pancake
      • 3 years ago

      Except it doesn’t. If you look at the “99th Percentile FPS vs Price” graph (the second one) on the conclusions page, the i7-7700K is still king. Have a downvote for failing to read the article.

      Edit: quote from the conclusion to show how poor odizzido’s comment is:

      “For play, Intel’s Core i7-7700K remains the chip to beat”

    • Rasterian
    • 3 years ago

    I just skipped to the conclusion page and read through it. But let me say this: the conclusion was well written, provided a good picture of the tradeoffs between these products, a good response to AMD’s views, and a great ending paragraph. I say bravo, Tech Report, and thank you for producing independent and thoughtful analysis within the very short timeframes often offered by the industry.

    • jensend
    • 3 years ago

    Though some workloads would be better with quad-channel, the memory results are quite impressive for a dual-channel setup, and L2 and L3 cache numbers look good too. Huge improvement over piledriver, and bodes well for the 4C/8T parts and future APUs.

    • mcnabney
    • 3 years ago

    You know, Ryzen didn’t have to beat Intel across the board. All they needed to do was return to parity, which they have clearly done. They provide great value in the applications I use, and they won’t be a bottleneck at the resolutions I game at. They also have the best possible feature I am looking for in a CPU: not giving the price-gouging Intel my money.

    • Jigar
    • 3 years ago

    Overclock the 1700X to 4.2 GHz (all cores) and call it a day. I feel there is an anomaly at the memory end, because I see IPC matching Broadwell but the results aren’t showing it. Maybe a mobo BIOS update will clear up the memory performance bug?

      • ColeLT1
      • 3 years ago

      4.2 looks to be out of the picture according to most of the reviews out there. 4.1 will take 1.45 V to be stable (0.1 V over AMD’s recommended voltage for long chip life).

    • pandion124
    • 3 years ago

    Thanks for doing a compiler benchmark! As a software engineer who uses C++ this is very useful information.

      • Veerappan
      • 3 years ago

      Likewise.

      One of the most performance-heavy things I do after-hours on a regular basis is compilation of large C/C++ projects (llvm/clang/mesa). llvm/clang alone takes almost 40 minutes to clean-build on the Phenom II X6 in my home desktop, and I’ve been waiting for something like the 1700X to replace it and hopefully drop those times below 20 minutes.
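
      A clean-build timing along those lines looks roughly like this (a sketch only, assuming an out-of-tree CMake/Ninja build with clang checked out under llvm/tools, which was the usual layout; the -j value is just an example):

      mkdir build && cd build
      cmake -G Ninja -DCMAKE_BUILD_TYPE=Release ../llvm
      # Ninja defaults to roughly (logical cores + 2) parallel jobs; -j pins it explicitly
      time ninja -j 16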

        • snowMAN
        • 3 years ago

        AMD still needs to catch up some: the quad-core i7-7700K gets the same GCC/Qt score as the octa-core R7 1700! What’s going on here? Both have dual-channel memory. The i7 has a higher TDP, but the 1700X has a 50% higher TDP than the 1700 yet is less than 10% faster. The difference between the Intel and the AMD is larger than frequency x cores x 1.15 would suggest. What’s the bottleneck? Is Intel’s branch predictor still smarter?

    • Kretschmer
    • 3 years ago

    I can’t wait for the usual suspects to go from telling us that “no one can tell the difference between an i5 and 8350, anyways” to “this Zen upgrade changed my life; best chip ever”.

    Congrats on the competitive product, AMD. Now all eyes are on Vega.

    • geniekid
    • 3 years ago

    I’d like to see some benchmarks that measure game performance while streaming.

    Based on some benchmarks from Linus and Ars, the i7-7700K takes a bigger hit than the 1800X while streaming via OBS, as expected. Depending on the game though, the i7-7700K might still perform better overall.

      • slowriot
      • 3 years ago

      I’d really love a “mixed load gaming” test. Streaming, web browsers open with some heavy tabs (video, chat), voice communication app (Discord?) also running.
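
      A crude stand-in for the streaming half of that, if anyone wants to approximate it at home (a sketch only: the ffmpeg capture mimics a software x264 stream, the bitrate/preset values are placeholders, and run_benchmark.sh stands for whatever game test you use):

      # Background software encode of the desktop to simulate an OBS-style x264 stream (Linux/X11)
      ffmpeg -f x11grab -framerate 60 -video_size 1920x1080 -i :0.0 \
             -c:v libx264 -preset veryfast -b:v 6000k /tmp/fake_stream.mkv &

      # Then run the game benchmark with browser tabs, Discord, etc. also open
      ./run_benchmark.sh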

      • chuckula
      • 3 years ago

      How about using QuickSync for the streaming?

        • cegras
        • 3 years ago

        [url<]https://www.reddit.com/r/Twitch/comments/22isi8/obs_x264_vs_intel_quick_sync/[/url<] [url<]https://www.reddit.com/r/Twitch/comments/4pgwm6/streaming_x264_vs_intel_quicksync_vs_nvidia_nvenc/[/url<] Probably leads to blockiness if the scene changes frequently (Dota 2, FPS games, etc.).

        • geniekid
        • 3 years ago

        Linus’ review showed that QuickSync encoding did not affect game performance as negatively as x264 encoding on the 7700K.

        Linus said the x264-encoded stream looked better than the QS one, but it was a subjective assessment (and he did say “judge for yourself”).

          • brucethemoose
          • 3 years ago

          Most streamers will use the encoder on their dGPU, not IGP, if they use hardware encoding at all.

    • raddude9
    • 3 years ago

    Thanks for including a gcc benchmark in your review, to me it’s more important than any of the others. I’m sure a bunch of other people have their own favored/indicator benchmarks as well and TR manages to get a great balance of benchmarks into a single review.

    I checked out a bunch of other reviews on the web during the delay before this review went up, and none of them could tell me whether I should get one of these chips for a new dev machine; now I have my answer. (I just have to run the order past Mrs raddude first, of course…).

    • astrotech66
    • 3 years ago

    Excellent review, given the time constraints and other issues.

    As a pre-orderer, I’m still happy with my purchase now that I’ve seen real benchmarks. I think Ryzen will be a nice upgrade over my aging 2600K system. I do my gaming at 4K, so I don’t think that Ryzen’s slight weakness in gaming versus the Intel chips will matter much to me. And I should get a nice bump in the non-gaming stuff that I do. I just got back into F@H, too, so I’ll be interested to see how it does with that, even though most of my points come from my video card.

    Even though Ryzen doesn’t quite match Intel’s best, at least it gets them back into the high performance game. I was glad that the review included FX-8350 benchmarks so we can see just how big an improvement Ryzen is over the older architecture.

    • w76
    • 3 years ago

    The Handbrake results were pretty disappointing. I was hoping for a monster, but it seems like some apps thought to be heavily multi-threaded either can’t really utilize that many cores, or Ryzen just can’t keep the cores fed. For some of my use cases where I was hoping for a super value, I’ve been given pause. It’s half the price of Broadwell-E, but the 7700K isn’t all that far behind in a lot of situations! Also, like TR I guess, I’m surprised (but pleased!) to see software actually leaning on AVX. That’s not good for Ryzen.

    A slightly wonky platform could cast a lot of shade on the value proposition. I’m at an age where I really don’t want to troubleshoot chipset issues. My i7-2600K and everything associated with its motherboard just simply worked, and still works, from day 1.

      • Jeff Kampman
      • 3 years ago

      I wouldn’t put too much stock in Handbrake. Recent versions of x264 don’t seem to be compiled to scale to n threads. We probably need to find a better test for this kind of thing.

        • brucethemoose
        • 3 years ago

        I would suggest StaxRip.

        HandBrake is kinda like VLC… Popular and reliable, but pretty much obsolete at the same time. And both dev teams are too stubborn to open up their platforms and adopt other standards.

          • USAFTW
          • 3 years ago

          Agreed. StaxRip is a bit more difficult to navigate (until one gets used to it, and Handbrake has its kinks, too) but it’s more frequently updated and much more feature-packed than Handbrake.

        • chuckula
        • 3 years ago

        Would a command-line x264/x265 run with a freshly compiled package work better than running it through Handbrake?

          • Jeff Kampman
          • 3 years ago

          I saw no improvement.

            • chuckula
            • 3 years ago

            Thanks for testing anyway!

            I’m a Linux person myself so we’ll see how RyZen fares a little later when the Linux benchmarking websites sink their teeth into it.

        • w76
        • 3 years ago

        Didn’t know that, though unfortunately (for both AMD and Intel) Handbrake, and specifically the x264 upon which it relies, is what most people use for encoding/transcoding, which makes it a good “real world” benchmark. Hopefully the x264 team sees all these cores lying around and improves the situation.

        Someone else recommended Staxrip, but if it uses x264 (and it does) then it’s bound to not be much better.

          • brucethemoose
          • 3 years ago

          This is true.

          x264 should use (logical cores * 1.5) threads by default. Maybe it needs more for higher core counts (which means just adding an extra line in Handbrake), or maybe it has something to do with the workload itself.

          Perhaps x265 will fare better?

            • Concupiscence
            • 3 years ago

            x265 has generally scaled worse than x264 to date, but time will tell.

        • Bauxite
        • 3 years ago

        Can confirm, has scaling problems even on the same socket. Multiple sockets…don’t bother, run multiple independent jobs instead.

        • stdRaichu
        • 3 years ago

        On my main workstation, I’ve not seen x264 scale linearly apart from going from 1 to 2 CPUs; after that you’re into diminishing-returns territory. That’s fine with me since I’ll normally batch up my encodes and queue them to run alongside one another, usually limited to 2 threads; doing it this way supposedly also has the advantage of giving better encoding efficiency/quality (although I can’t spot any difference myself).

        Just threw a quick x265 CRF=21 preset=slow bench together with some DVD content in MeGUI, just to see if it behaved any differently, and:
        threads = 0 (unrestricted, so full rein of a 6-core chip) gave an average of about 42fps using about 32% CPU (70% if you fudge out SMT)
        threads = 4 gave an average of about 47fps using about 48% CPU
        threads = 2 gave an average of about 39fps using about 31% CPU (i.e. nearly maxing out those two cores)

        Will see if I can get some bare command-line stats from one of my Linux boxes without using a frameserver, to see if that behaves any differently.

        (Hardware above is a Xeon E5-1650 v3 [6c/12t] with quad-channel DDR4-2133 ECC)

        Edit: quick’n’dirty raw x264 on Linux (on a Xeon E3-1230 v3, 4c/8t) using `x264 --crf 21 --preset slow --threads X -o output.mkv input.mkv` with a short 1280×720 input gives:
        threads 0 = ~65fps
        threads 4 = ~42fps
        threads 2 = ~31fps
        threads 1 = ~16fps

        Seems likely that the frameserver/cropping and scaling are the biggest bottlenecks compared to a raw encode, but however you cut it, CPU scaling with encoders seems poor once you get past 2 cores.

        Forgot to add – hats off to AMD, glad to see them back in the race. Reaching more-or-less parity with Intel after essentially a decade on the bench is pretty astonishing. Assuming mobo support is there, a Ryzen APU will be a shoo-in for my HTPC.
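
        For anyone who wants to reproduce that kind of sweep, a small loop around the same command does the trick (a sketch; the CRF/preset mirror the run above, input.mkv is whatever test clip you have, and x264 prints its average fps on the final "encoded ... fps" line it writes to stderr):

        for t in 0 1 2 4 8 16; do
            echo "=== threads=$t ==="
            x264 --crf 21 --preset slow --threads $t -o /dev/null input.mkv 2>&1 | grep encoded
        done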

    • Unknown-Error
    • 3 years ago

    So for gamers, Ryzen is not worth it. The i7-7700K is a much better investment.

      • Krogoth
      • 3 years ago

      Not really

      The i5-7600K is a much better value for pure gaming usage. HT on the i7-7700K doesn’t really help with gaming usage patterns. In the workloads where HT becomes handy, the Ryzens become the better values.

        • ptsant
        • 3 years ago

        Try the 7350K, even cheaper and equivalent to the 7600K in almost all games.

          • derFunkenstein
          • 3 years ago

          Older ones, maybe, but I wouldn’t go that route if I could afford a 7600K. Eurogamer found [url=http://www.eurogamer.net/articles/digitalfoundry-2017-intel-kaby-lake-core-i3-7350k-review<]wide-reaching differences[/url<].

            • ptsant
            • 3 years ago

            Thanks for the link. I would certainly get the 7600K, but if the budget is constrained, the difference between the two CPUs (~$70) is better spent on a GTX 1060 than on a GTX 1050 Ti, for example.

            I also noted that the review used a Titan X at 1080p. Realistically, you won’t see such big differences with, for example, a 1060 at 1080p.

            • derFunkenstein
            • 3 years ago

            Probably fair points, and if you’re on the fence between i5 + 1050Ti and i3 + 1060, I’d probably choose the latter, too.

        • USAFTW
        • 3 years ago

        So as far as gaming performance is concerned, you’re not impressed, right?

      • raddude9
      • 3 years ago

      And a better graphics card is an even better investment than a 7700K…

        • chuckula
        • 3 years ago

        Or any RyZen chip.

          • raddude9
          • 3 years ago

          Hmm, not sure if you got my point.

          People are always raving about how good the 7700K is for gaming, and it is a great chip for squeezing the most fps out of a given GPU. But that doesn’t mean it gives people good value for money. The problem is that it’s just too expensive, and you can get CPUs that perform reasonably close to it for $100 less. Now, if you put that $100 towards a higher-performing GPU, you would end up with a better-performing gaming system.

          RyZen is a different type of animal; I see it as a kind of budget HEDT system. It can game pretty well, though obviously not as well as the higher-IPC quad cores, but the fact that it can go toe-to-toe with Intel’s many-cored monsters in many multi-threaded benchmarks at a much lower price is what impresses me.

      • willmore
      • 3 years ago

      I would say hold off on that. Here we have a brand-new CPU architecture that has a different balance of integer/FP resources than previous ones. It’s going to take a while for software to get optimized for it. In particular, will nVidia optimize their drivers for Ryzen? And, if they do, how soon?

      I would expect AMD to deliver more optimized GPU drivers sooner than nVidia. So, it might make more sense to test Ryzen chips with AMD GPUs than nVidia ones.

    • just brew it!
    • 3 years ago

    So not a game-winning grand slam. More like an RBI double, bringing them back within striking distance. I’ll take that. We sorely need the competition in the CPU space.

      • southrncomfortjm
      • 3 years ago

      Truth. AMD performing well in the CPU and GPU space will have really great impacts on everyone who relies on a PC for anything. There is literally no downside for consumers other than having more than one brand of CPU to buy.

      That said, I’ll probably still go for a 7700K later this year for my PC build since that processor is still king for gaming. Hopefully, though, I’ll be paying far less for the privilege.

        • just brew it!
        • 3 years ago

        It also makes me glad I waited to buy the CPU to complete my build using the parts I got in the TR Holiday Giveaway. I didn’t wait specifically because of the Zen launch, it was just because I’ve been too busy to mess with a new build. Maybe I’ll be able to get a better deal on a mid-range Intel CPU in the near future.

        Edit: Already picked up a case. Just need the CPU and RAM (I don’t have any DDR4 in my spares pile as all of my existing systems are still DDR3).

    • MrJP
    • 3 years ago

    If anyone from AMD happens to be reading this – well done, guys.

      • MrJP
      • 3 years ago

      …and well done Jeff as well for the great review. 🙂

    • ermo
    • 3 years ago

    The L2 and in particular the L3 cache results look quite impressive.

    My guess is that this accounts for a lot of the Multi-threaded/SMT-related performance in e.g. CineBench.

    I’m torn about whether the fact that my delidded Ivy@4.5GHz w/4 sticks of DDR3-2400 MHz RAM will serve me better in most games than even the 1800X is a good thing or not, though.

    @JeffK: Enjoyed the review and looking forward to reading your updated piece!

    • DrDominodog51
    • 3 years ago

    I’m impressed, but I’ve heard that Ryzen matches Broadwell-E in overclocking ability.

    I guess I’ll have to wait patiently for Jeff’s OC results.

    • torquer
    • 3 years ago

    The reports of Intel’s impending demise appear to have been greatly exaggerated.

      • chuckula
      • 3 years ago

      RyZen does make it clear that Intel should either cut the prices on its HEDT chips or release a new line that has sufficient performance improvements to justify higher prices.

      So in that regard, RyZen is a success because for the first time in practically a decade, Intel should actually respond to something AMD has done.

      However, as we have seen from RyZen’s good & bad points, it also shows that Intel aren’t a bunch of idiots and that designing these high-end chips is hard.

        • Krogoth
        • 3 years ago

        Intel is going to justify some of the premium on their Socket 2011 platform with its quad-channel DDR4 and 40 PCIe 3.0 lanes for everything except the i7-6800K (which only has 28 PCIe 3.0 lanes)

          • chuckula
          • 3 years ago

          Intel doesn’t necessarily need literal price parity but cutting the prices on the highest-end parts would be a smart move (they still don’t have to be cheap, just less expensive).

          If Intel had the guerrilla marketing savvy and less of a stuffed-shirt bureaucracy, then they would have done this to screw with AMD:
          1. Bin a bunch of 22nm 5960X parts that like to run at about 4GHz, which isn’t that hard considering the 5960X actually overclocks pretty well for an HEDT part.

          2. Launch them as the “5990X” at $500 each last month. Hell, they’d still be making money on those chips and with the higher clockspeeds they’d beat the 1800X in most of the benchmarks where RyZen is strong.

          Now, Intel isn’t smart or agile enough to pull off a stunt like that. Additionally, I’m pretty sure Intel doesn’t want AMD to fail. They don’t want AMD to dominate either, but they don’t want outright failure.

      • UberGerbil
      • 3 years ago

      Oh, remember the chorus of that back in the Netburst / AMD64 timeframe? I recall having an on-line argument in 2003 with someone who was absolutely [i<]certain[/i<] that not only had AMD permanently won the battle, Intel would be bankrupt "by the end of the decade."

      • Anovoca
      • 3 years ago

      It was only purported, never reported.

      • bfar
      • 3 years ago

      Hardly, but their HEDT range looks awfully poor value all of a sudden. They’ll have little choice but to drop prices significantly across the entire range.

    • ptsant
    • 3 years ago

    Great review and a balanced conclusion. The chip is an all-rounder at a great price.

    Ryzen could not give you all the bells and whistles of the $1000 6900K or the single-threaded performance of the 7700K, but the overall package is decent. You can see some of the compromises: wide AVX had to go, which is partially compensated for by the number of cores, and quad-channel memory also would not fit at the price point, though its absence doesn’t seem to hurt too much.

      • blastdoor
      • 3 years ago

      Yup…

      Intel still has the superior product — it’s just priced far too high.

      AMD just has to hope that Intel has replaced the mantra “only the paranoid survive” with “only the complacent maximize short term profits”.

      Otherwise, Intel could cut prices and wipe AMD out once and for all.

      I’m guessing Intel will be fairly complacent for the rest of this year.

    • rudimentary_lathe
    • 3 years ago

    Thanks for the review.

    I’m curious as to why AMD didn’t use quad-channel with Ryzen – these are supposed to be server CPUs, after all.

    I’ll be interested to see what happens to these performance numbers if/when AMD fixes the higher memory clock issue.

    These CPUs look to be an excellent value, and a truly massive improvement on Excavator. While ultra-hardcore gamers will probably stick with Intel, those on a budget or those with more diverse workloads will likely find these very appealing. Well done AMD!

      • chuckula
      • 3 years ago

      [quote<]I'm curious as to why AMD didn't use quad-channel with Ryzen - these are supposed to be server CPUs, after all.[/quote<] They are gluing 4 of these chips together for the Naples servers to produce an aggregate of 8 channels of RAM (with caveats since each chip is actually only directly attached to 2 channels). So for big servers, there are more memory channels. In desktops, it looks like a cost/performance tradeoff decision since most desktop workloads don't see big benefits from going beyond 2 channels and they wanted inexpensive motherboards.

      • Krogoth
      • 3 years ago

      Cost is main reason why quad-channel DDR4 isn’t being used. There’s a reason why Socket 2011 boards command a significant premium over their mainstream counterparts.

      Ryzen is not a “server” grade version of the architecture. It is the HEDT/Workstation version of it.

      • raddude9
      • 3 years ago

      Quad-channel adds greatly to the cost of the motherboard, as well as requiring lots more pins on the chip, which in turn makes the chips more expensive too. For all that extra expense, quad-channel only helps in a few specific workloads.

      Also, this iteration of Ryzen is not really for servers, AMD is going to weld 4 of these chips together in a server CPU called Naples which is supposed to have 8-channel memory.

      • rudimentary_lathe
      • 3 years ago

      I’m aware that Naples is the server variant, but my understanding is that the underlying architecture is essentially the same. I would have thought quad-channel would have been better for serving multiple VMs, for example, on a single 8c/16t chip.

        • Anonymous Coward
        • 3 years ago

        Why would serving VMs be especially demanding of memory bandwidth?

        Perhaps in a VM environment the underlying server is being driven harder, because there are by design fewer unused resources.

        Anyway, as far as I know this is a similar situation to Intel’s very high core count chips; they have what, 18 cores on a die already, I think. Soon more. So when AMD comes along with 32 cores and 8 memory controllers, it’s a competitive ratio.

          • Concupiscence
          • 3 years ago

          It generally comes down to I/O: if you have lots of VMs all reading and writing to drives, connected devices, and network shares, you’ll need a lot of bandwidth to prevent performance from tanking under load.

            • Anonymous Coward
            • 3 years ago

            But that is the same IO load that would happen if everything was being run inside one OS instance instead of many, and I don’t see disk or network IO being a particular strain that would make a lot of difference in the 2- vs 4-channel context. This still sounds to me like 2 DDR4 channels for 8 cores is not problematic for servers, except for those that are into the heaviest data processing.

            Of course I do not have low-level insight into this. But I am constantly using VMs in the cloud, and some of those, I expect, have a similar IO-to-CPU-power ratio to what Ryzen is showing.

      • Anonymous Coward
      • 3 years ago

      [quote<]I'm curious as to why AMD didn't use quad-channel with Ryzen - these are supposed to be server CPUs, after all.[/quote<] I think 2 channels was the correct choice, but then it crossed my mind that if they had launched with quad channels to impress all the workstation and server people... then that would be a heck of a platform for an integrated GPU, too. It would be too expensive for the lowest part of the market, but it would be a way to get some more sales volume on the quad-channel motherboards.

    • derFunkenstein
    • 3 years ago

    I was totally into the DAW testing when it came up on Twitter a couple days ago. Based on my own informal testing, I should have bought a 6700K: using the same software and settings on my i5-6600K system, I only get around 450 ReaXcomp compressors. I don’t have the same interface, but it’s still a Focusrite Scarlett (an 8i6 in my case). Looks like HyperThreading could have made a big difference.

    • Thrashdog
    • 3 years ago

    It’s about what I was expecting — not world-beating, but competitive enough to consider again, especially at the price point. I’ll be interested to see if there is extra clock headroom to be wrung from the 6-core parts launching later in the year. I suspect that they may be the sweet spot for gaming workloads that have evolved to take advantage of more threads, but can’t quite saturate an 8c/16t processor.

      • derFunkenstein
      • 3 years ago

      It’s a shame Ryzen 3 and Ryzen 5 aren’t releasing today. Specifically, the Ryzen 3 are (rumored to be) 4C4T compared to Core i3 CPUs with 2C4T. That should give Ryzen a decent advantage in games at the budget end of things, where 8 cores is already overkill.

        • drfish
        • 3 years ago

          The R5 is going to be in the i5-7600K’s stomping ground though; that’s going to be interesting…

          • derFunkenstein
          • 3 years ago

          The most interesting thing to see there is whether the 6C12T CPU also has an AVX unit disabled. If AMD is just turning off one core on each quad-core module, I would hope the answer to be no. That would mean that in AVX tasks, it’ll at least be on even footing with the 7600K.

    • ultima_trev
    • 3 years ago

    I was right in my prediction all along!

    It’s exactly like one would expect if they took a 5960X then cut off half of the AVX/FMA registers and memory controllers.

    I’m glad I didn’t get the 5960X two years ago and waited until now to preorder a 1700X!

    • AnotherReader
    • 3 years ago

    I just returned from the daily standup. I’ll edit this post when I have finished reading.

    Edit: After some detours, I have finished reading. All in all, the performance per $ graphs summarize the situation succinctly. Zen is great for people like me: those of us who want one box to do it all. For a dedicated gaming box, the 7700K or the 6700K are the better buys.

      • Stochastic
      • 3 years ago

      And an OC’d 7600K is even better value.

    • NTMBK
    • 3 years ago

    This looks like a serious threat to Intel in servers. Bring on the Opterons.

      • Anonymous Coward
      • 3 years ago

      I wonder how this works out with the cloud players, and that’s a huge concern these days, with their market share and worldwide deployments. Making the correct platform choice is worth quite a lot of money. Wattage and performance will be under the microscope. So we’ve seen Zen offers solid integer performance, people say the wattage is good too, and the price will be very competitive. But a huge (and rapidly growing) amount of the market is guided not by companies targeting a specific workload, but a general workload: server rental.

      Reliability and manageability will also be huge questions.

      I suspect AMD will take a while to gain traction in servers again.

    • DragonDaddyBear
    • 3 years ago

    Your frame times paint a completely different picture than what I was getting out of the FPS numbers from the other reviews I’ve seen until now. They make Ryzen seem not as bad for games. It’s better than a 2600K, at least.

    The question of whether Ryzen is right for you seems to come down to what you expect your workload to be. If you expect to do VM labs, Handbrake, or anything else that benefits from lots of cores on a fairly regular basis, then Ryzen is a fantastic value. It really does look like a server architecture first.

      • ptsant
      • 3 years ago

      The simple truth of the matter is that in the situations gamers are likely to choose (i.e., not 1080p with a Titan X), the CPUs are quite close to each other, compared with the huge differences between GPUs.

      You can always find extreme situations where the 10-20% difference is perceptible (usually, close to the ~60fps threshold), but most of the time a game is either playable or not. It doesn’t get more playable at 150fps.

      Anyway, the 7700K is the best gaming CPU out there, and it’s even better than Intel’s $1500 flagship. Nothing changes that.

        • Krogoth
        • 3 years ago

        The i5-7600K is the best value for a pure gaming CPU (just trailing behind the i7-7700K for $100 less).

          • ColeLT1
          • 3 years ago

          i7-7700k is the best gaming CPU (what he said, not value). The +2mb cache and higher clocks win.

    • Krogoth
    • 3 years ago

    Impressed, considering that Ryzen’s idle/loaded power consumption is roughly Haswell-E tier (gathered from other reviewers).

    That is a massive leap from Piledriver and Bulldozer.

      • DragonDaddyBear
      • 3 years ago

      Krogoth impressed. Ryzen success confirmed!

        • K-L-Waster
        • 3 years ago

        Depends — is “Krogoth Impressed” a bellwether or a contrarian indicator? 😀

      • Freon
      • 3 years ago

      This review shows the 1800X actually having much better idle power consumption than the 8/10-core Intel parts and a slight edge on the 7700K, but that could boil down to test settings:

      [url<]https://www.pcper.com/reviews/Processors/AMD-Ryzen-7-1800X-Review-Now-and-Zen/Power-Consumption-and-Conclusions[/url<] Hope TR follows up with power testing!

      • chuckula
      • 3 years ago

      That also means Intel has been LYING to us all this time!

      They promised us 140 watt TDP chips!
      And look at AMD matching them with 95 watt TDP chips!

        • derFunkenstein
        • 3 years ago

        With 2x the AVX throughput of Ryzen, I think Intel can still do it, even if PCPer didn’t show it.

        • freebird
        • 3 years ago

        I HATE people that HATE sarcasm… unless you are serious… 😉

      • superjawes
      • 3 years ago

      [quote<]"Impressed." -Krogoth[/quote<] AMD should put that on the box!

      • USAFTW
      • 3 years ago

      According to numbers from PCPer, idle power consumption is 10 watts lower than a 7700K/Z270 combo, that’s actually impressive.
      Edit: Sorry, someone already pointed that out.

      • ronch
      • 3 years ago

      AMD can rest easy knowing Ryzen is a complete success when Krogoth is impressed.

        • UberGerbil
        • 3 years ago

        Or they can rue all the work they put in to ship a product that brings us The End of the World As We Know It.

      • Neji_Hyuga
      • 3 years ago

      No, it’s actually much better, but the gap narrows with higher clocks. The Ryzen 7 1700 uses 20-30 W less than the quad-core i7-7700K. That is extremely impressive.

      • Wirko
      • 3 years ago

      100th upvote. Your impressedness has inspired hundreds!

      • Eversor
      • 3 years ago

      The trade-offs seem to have been worth it. I’m particularly impressed with the L2 & L3 bandwidth numbers. Seems the architecture was well thought out and has some legs, despite the lower performance in gaming scenarios – which is probably an area for improvement in the next core.
      Seems like Naples will also find success in servers, though the lack of 256-bit AVX and low-ish clocks will probably keep AMD out of the HPC server rooms.

        • NTMBK
        • 3 years ago

        I think AMD is more interested in selling GPUs and APUs for HPC. There were some plans leaked a while ago for a massive HPC APU-on-an-interposer, with a full sized Vega GPU, HBM memory, ~16 Zen cores… looked like quite a beast.

      • freebird
      • 3 years ago

      So was the process technology it was built on… 🙂 32nm SOI vs. 14nm bulk FinFET is hard to compare, but needless to say it helps…

    • chuckula
    • 3 years ago

    Hey Jeff,

    Excellent review as always. Given the fact that these major CPU launches are few & far between these days, will there be some followup articles that explore energy consumption & overclocking in a little more detail?

      • drfish
      • 3 years ago

      [quote<]Unfortunately, we had to make a few cuts from our schedule to achieve that goal. Overclocking performance and power efficiency measurements will have to wait for a separate article, as will platform performance measurements for X370 like USB 3.1 transfer speed and NVMe storage performance.[/quote<] Yep!

        • chuckula
        • 3 years ago

        w00t!

    • NarwhaleAu
    • 3 years ago

    Can we get some gaming benchmarks that aren’t at 1080p? I don’t really care how the chip performs there. 4k would be great, but I’d take 2560×1600, or even 1440.

      • maxxcool
      • 3 years ago

      Probably the result of a time constraint: since only 6% of gamers use larger-than-1080p displays (according to the Steam survey), they probably opted to just do 1080p for now.

        • slowriot
        • 3 years ago

        Yes, and maybe 6% or less of Steam users have spent more than $200 on their CPU in the last 5 years. People spending $400-$500 on a CPU almost certainly own greater-than-1080p monitors at a much higher rate than the rest of the market.

          • maxxcool
          • 3 years ago

          Yup, I am not going to disagree. BUT with time constraints I imagine running benches two times was not in the cards and they defaulted to the market average.

          • JustAnEngineer
          • 3 years ago

          My ultrabook has only a 1920×1080 screen, a GeForce GT620M, and 10 GiB of RAM. That’s the one of my PCs that Steam asked for information about during their most recent hardware survey. They didn’t get information from my desktop with the 2560×1600+1200×1600 displays, GeForce GTX980Ti, and 32 GiB of RAM. Keep in mind that there are a lot of casual gamers using Steam, too, not just enthusiasts.

          • anotherengineer
          • 3 years ago

          Maybe.

          I know businesses end up getting i7 machines for CAD use, then use the integrated Intel GPU and a 22″ 1680×1050 screen. But hey, 16:10 😉 I know, not quite a $400 CPU, but close.

          Sigh, true stories……….

          • freebird
          • 3 years ago

          WTF!?!? Where are the 320×200 results!!!!
          I run my Titan X through three converters so I can still play it on my 320×200 CGA monitor!!!!!

          (left over from my Tandy 1000a, WHICH did 8-color CGA which the GD monitor didn’t support…)

          Jesus Christ, I need to know if Ryzen or Intel can push 1000 fps at that resolution…

        • ImSpartacus
        • 3 years ago

        The community isn’t a representative sample of all gamers.

        It’s heavily skewed towards the ~6% that have >1080p displays.

        • Sabresiberian
        • 3 years ago

        Steam’s survey shows that the number of people using 1080p is about 43%, less than half. Granted, the second most used monitor size is 1366×768, and the 6% figure you quoted is actually high: it includes “other”. The actual number is a little over 4%.

        [url<]http://store.steampowered.com/hwsurvey[/url<]

        Consider these things, though: the readers of The Tech Report have a much higher percentage of users that have moved on to 1440p and 4K. The biggest portion in the site’s 2014 survey was still 1080p, but 39% of us use higher-res displays; the total of those higher pixel count displays almost matches the number of 1080p displays. And since that survey was taken in 2014, I’d bet that more users have upgraded from 1080p since then.

        [url<]https://techreport.com/news/27055/the-tr-hardware-survey-2014-what-inside-your-main-desktop-pc[/url<]

        The most played game on Steam is DOTA2, hardly a game that requires, or even benefits from, a larger monitor or higher pixel densities. The second most played game is CS:GO, also a game that can be played on pretty much anything. These two games far outstrip the numbers for any other game connected with Steam (that is played today). Many games aren’t connected to Steam. I don’t currently play any games that are; the other games I play have nothing to do with the platform. Steam’s numbers don’t reflect this in any way. Steam is just a service; it doesn’t define PC gaming in any way.

        I certainly understand time constraints - these articles take a massive amount of work - and certainly don’t fault Jeff in any way for not including higher resolutions. It could even be argued that using 1080p better shows the difference in performance between the processors because the games are less likely to be GPU-bound. But higher resolutions are very relevant to The Tech Report’s audience.

          • maxxcool
          • 3 years ago

          🙂 yup, I know, not disagreeing at all.

        • jihadjoe
        • 3 years ago

        Maybe it’s time for another TR monitor resolution poll!

          • maxxcool
          • 3 years ago

          Jeff! Make us a front-page poll!!

        • Pancake
        • 3 years ago

        From the Sept 2014 TR poll – probably due for an update given the explosion of cheap 4K displays:

        1680×1050 = 7%
        1920×1080 = 43%
        1920×1200 = 22%
        2560×1440 = 13%
        2560×1600 = 6%
        3840×2160 = 1%

        + 5% < 1680×1050, presumably laptop-only users. So, TR users are somewhat ahead of the Steam curve.

        But, gather round children. Once upon a time – many, many years ago when you were knee high to a grasshopper – AMD and Intel competed with each other to see who made the best CPU for gaming. Yes, AMD actually competed with Intel and people would buy AMD processors to build gaming machines! Why, I myself had one and a sweet humdinger of an Athlon XP2000 with an nForce-2 motherboard combination it was. It was typical to run benchmarks at a lower resolution than average to exaggerate the differences between the CPUs. Reviewers did this as they were testing CPUs and not graphics cards.

          • Redocbew
          • 3 years ago

          You just made me feel old. Thanks dude.

      • Den2
      • 3 years ago

      CPU is probably not the bottleneck at >1080p. CPU is more important at lower FPS, so at lower resolutions. I’d prefer adding 720p benchmarks over adding 1440p+ benchmarks and I personally have a 4K monitor.

        • RAGEPRO
        • 3 years ago

        Comments like this are why I love gerbils. Right on.

        • slowriot
        • 3 years ago

        People know the CPU is not the bottleneck at higher resolutions. People are wondering about their own real-world scenarios. People want to know, “What if I have some applications that can make use of all the threads? Do I also suffer big in gaming at the resolution and settings I want?”, and these tests make it hard to answer that. Reality may be that at 1440P there’s very little delta between AMD and Intel parts. I’d personally like to know that, and I can’t even guess at it given the tests as presented.

        I feel hardware enthusiasts care so much about which part is “fastest at X” and not “best for my usage cases overall” that the tests just don’t feel truly relevant to how people use their PCs.

          • Redocbew
          • 3 years ago

          [quote<]Reality may be that at 1440P there's very little delta between AMD and Intel parts.[/quote<] How are you going to figure out the performance of a system overall without knowing how much each component contributes to it? That's the only meaningful way to go about it.

            • slowriot
            • 3 years ago

            You’re chasing the “best case scenario which CPU gives the best FPS” metric and I’m chasing the “with several browser tabs open and a video going, plus several chat apps, maybe a VM or two open in the background, Discord open.. which CPU gives me the best overall experience” metric.

            The “best case scenario” is pretty much the only thing that gets tested, but the “mixed load gaming” scenario is what most people actually experience.

            Further, I’m trying to weigh that “mixed load gaming” against other compounding factors like… I’m only gaming a portion of the time and have other productivity workloads which do benefit from many threads. If at 1440P, max quality settings, in a mixed-load gaming scenario the FPS delta is small because I’m GPU-bound, I want to know that. Because if that’s the case, I simply don’t care that the Intel 7700K’s single-threaded performance advantage shows at 1080P. If, at the settings I actually run a game at, they ultimately land at the same result, I’ll focus on my other use cases as the deciding factor.

            • Redocbew
            • 3 years ago

            [quote<]If at 1440P, max quality settings, in a mixed load gaming scenario the FPS delta is small because I'm GPU-bound I want to know that.[/quote<] It's going to be difficult for you to get that while reading a CPU review. Making sure the system is not GPU bound is kind of the whole point here, and I'm not sure what you think the value of this "mixed" testing would be if it doesn't fully stress the component being tested.

            • slowriot
            • 3 years ago

            Why do you read hardware reviews? For me, it’s to inform a potential purchase. Therefore I would prefer benchmarks and tests which are designed to represent my own real-world usage.

            Again, people understand what the 1080P gaming test is designed to highlight: the maximum difference in a purely CPU-bound scenario. The issue is… that scenario isn’t realistic.

            If CPU X is 110 FPS vs. CPU Y at 90 FPS at 1080P, but both are at 80 FPS at 1440P, and CPU Y’s extra cores allow gaming performance to be less impacted by other non-gaming processes, isn’t CPU Y better for me overall? Possibly. I’d like to see benchmarks designed to test that scenario.

            • ImSpartacus
            • 3 years ago

            A system also isn’t GPU-bound when you’re running cinebench.

            So why test games in the first place? Furthermore, why test multiple games?

            To me, the point of a benchmark is to “benchmark” the performance of something in a real use case that some actual human might actually do. Ideally, you need your benchmark to mimic that real task as closely as humanly possible.

            Nobody games at 720p with a 1080 and a high-end desktop CPU. It’s just not a realistic benchmark for an actual use case.

            • Redocbew
            • 3 years ago

            It’s not “realistic” to test NVMe SSDs at high queue depth with respect to the desktop either, but that’s often the only way to really separate one from another. If that didn’t happen, then we’d be missing the complete picture.

            So, which way would you rather have it? Reviews that decide for you what they think a “realistic” workload should be, or reviews that provide as much information as possible and expect you to be able to figure it out from there?

            • slowriot
            • 3 years ago

            The reviews are not giving as much information as possible. That’s the entire point. Keep the 1080P tests. Just, can someone, anyone, please also give us tests with more varied workloads? PLEASE?

            To be quite frank, my biggest issue here is that virtually every hardware review site does the exact same test. Yes, it’s useful as a way to validate different sites’ results. But it also means we’re not remotely getting as much information as possible. There are tons of workloads, ones I think more representative of real use, that don’t get benchmarked.

            • Redocbew
            • 3 years ago

            I’m sure Jeff and the other bleary-eyed and over-caffeinated reviewers around the web would have loved more time to continue torturing their test systems, but the real world has deadlines. I also wonder how much of this extra information you want would actually be about the CPU and not some other component.

            There’s nothing wrong with testing an entire system to see how well it would do as a database server, or an audio workstation or a gaming machine, but if you do it that way, then your test results don’t have any point of reference. You can’t use them for comparison to a different machine. They apply only to that system. That’s why tricks like increasing queue depth for SSDs, and dropping the resolution for CPU tests are used.

            • benedict
            • 3 years ago

            I fully agree. Reviews benchmark CPUs by running only a single game and closing everything else. Then they conclude that a CPU with fewer cores but more single-threaded performance is better for gaming. The thing is, I never use my PC that way; I have lots of other stuff running and taking CPU time. In my case a CPU with more cores would do much better, because whenever I need to do something more there will always be a free core to take care of it.
            Simulating a real-world scenario is hard, especially because that scenario is different for everyone. Unfortunately, reviewers don’t even try, and instead do the simplest thing, which is to test only a single app at a time. AMD has shown a few tests where a lot of programs run simultaneously on the PC, and then multi-core CPUs really trump high-GHz ones.

            • ImSpartacus
            • 3 years ago

            What exactly are you supposed to “figure out” from some useless benchmark of a 1080 and a bunch of high-end desktop CPUs running games at 720p?

            What exactly is the takeaway?

            Are we all assuming that somehow if a CPU does better in that 100% artificial situation that it’s better in other situations?

            When we’re talking about informing the ability to make decisions (and that’s really the core purpose of these kinds of reviews), useless and misleading data is worse than no data, because at least you’re aware of your decision-making ignorance when you don’t have any data/guidance. When you’re drawing some kind of false conclusion from a silly 720p gaming benchmark, you’re confidently making flawed decisions. That’s a bad situation to be in.

            • Redocbew
            • 3 years ago

            [quote<]What exactly are you supposed to "figure out" from some useless benchmark of a 1080 and a bunch of high-end desktop CPUs running games at 720p? What exactly is the takeaway? [/quote<] In this case I take away that the CPU does a decent job against the competition for its part. What part the GPU would play, and whether or not it would mask some of those differences is a different question. I'd also be interested to see it answered, but I don't think that casts any doubt on the results published here. [quote<]Are we all assuming that somehow if a CPU does better in that 100% artificial situation that it's better in other situations?[/quote<] I wouldn't expect the Celeron I have to do well in handbrake just because it does well in my pfSense box, but chances are if I put together a Ryzen machine I wouldn't build an exact clone of one we saw in the review. It'd be a bit different, but not by much. I'd still expect similar performance from it, and I probably wouldn't be disappointed.

            • ImSpartacus
            • 3 years ago

            [quote<]In this case I take away that the CPU does a decent job against the competition for its part. [/quote<] Does a decent job at doing what? Like, seeing 720p gaming results tells you how the system would do if you were to game at 720p with a 1080 and a desktop cpu. Surprise surprise, it'll game like a champ at that low res. As far as actual decision-making goes, that doesn't really inform your ability to make reasonable decisions because it's not a situation that will ever be relevant.

            • Spunjji
            • 3 years ago

            Just want to second this line of thinking. I get that 1080p-and-lower testing highlights the differences between CPUs, but it’s not telling us anything that applies outside of theory unless for some reason we spend heavily on GPU and CPU and inexplicably decide to stick with a 1080p monitor.

            If testing doesn’t show a meaningful difference in gaming performance between CPUs at 1440p+ resolutions, then I Want To Know That. It’s seriously relevant info. I don’t even really care if it’s based on a reduced data set for time reasons. As it is, I’m left guessing.

            • 5UPERCH1CK3N
            • 3 years ago

            I wonder if it would be useful to also log processor usage across the cores.

            This wouldn’t answer the question of application performance decisively though. For example, if you’re running into a memory bandwidth issue, using that free processor time very well may have a cost that impacts your gaming experience. Benchmarking that would be a nightmare due to the number of loops you’d have to run determining where that bandwidth saturation point is during the entire run. And if you don’t know how much memory bandwidth each of your applications consume, it’s still not useful information even after all that work.

            I’m not sure benchmarking at higher resolutions would answer any more questions either. I’m not convinced (i.e. I haven’t seen any evidence) that the CPU-bound case doesn’t give you the answer. In other words, I haven’t seen this premise demonstrated false: that the higher performance in the CPU-bound case also translates to being able to run more applications in the GPU-bound case.

            If you’re running into a core saturation problem, then indeed, more cores may allow more applications to be executed along with gaming. However, if you’re running into a bandwidth issue along with that, throwing more cores at the problem isn’t going to help. I don’t think we’ll really know what the case is with Ryzen until we see some overclocked memory results where the processor clock is held as constant as possible.

            Even once you figure out if your core starved or bandwidth starved, it’s hard to give any recommendations unless you know what applications you’ll be using and what their needs are. Are there cases where a lot of processor time is required but the bandwidth requirements are low? In the GPU-bound case, you’re probably likely to have at least a little bandwidth left over for that sort of thing so long as the bandwidth needs are low. Still, so long as you don’t need a ton of cores, a speedy processor with good IPC is also going to have leftover cycles in that case. So again we reach the point where someone needs to demonstrate this before I’d see it as a valuable use of a reviewer’s time.

            Does that preclude TR from looking into this themselves? No, I don’t think so. However, I think it’s way too early to ask for this to be added to the reviews until the premise is first demonstrated that more performance in the CPU-bound case doesn’t translate to more concurrent-application performance in the GPU-bound case.
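
            Logging the per-core usage side of that is at least cheap to do (a sketch; mpstat comes from the sysstat package, and the one-second interval / 300-sample duration are arbitrary):

            # One sample per second for every logical CPU, for the length of a benchmark run
            mpstat -P ALL 1 300 > percore_usage.log
            # Rough Windows equivalent using the built-in performance counters:
            # typeperf "\Processor(*)\% Processor Time" -si 1 -sc 300 -o percore_usage.csv

            Lining that log up with the frame-time log would at least show whether cores are sitting idle; the memory-bandwidth question would still be open, as noted above.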

            • mat9v
            • 3 years ago

            Exactly; we are all doing those real-life benchmarks: not canned tests, but fragments of real gameplay, real movie conversions, file-copying exercises. We are doing them to see how these components would perform in real life, not in some imagined use case. What will testing game performance at 720p tell me about playing the game at 1440p or higher? Not much. So why test the irrelevant scenario, one we won’t use in real life, to get some mostly useless number? The only scenario where it matters is e-sports, where players like to have 400fps and intentionally lower graphics quality to get it. From various tests, we know that Ryzen offers, in CS:GO for example, about a 20-25% lower framerate, on the order of 350 instead of 450fps.
            I would love a test of playing typical games with graphical details set to reach 144 fps, as that is our most-used monitor frequency (I guess), while Twitch is streaming in the background or Nvidia is saving my gameplay to disk, while 20 web pages are open in Chrome, an FTP server is working, TeamViewer is running, Skype is there, I’m talking on Discord, Steam and a few other platforms are loaded in the background, and maybe my disk is encrypted too.
            This is what is relevant for me, and while I know not even one website will ever test a scenario like that, this is what really matters: how the CPU will perform with a plethora of tasks.
            We know (if we believe AMD) that Ryzen works well for gaming while background tasks are running, but I would really like to know how well, between the 6900K and the 1700 for example.
            Personally, I set my image quality such that I can have at least 100fps, as I can frankly tell if it is less than that, and it is more important to me to have smooth gameplay than a perfectly beautiful picture. Unfortunately, except for HardOCP, I have never found any site that tests games that way, and even they set their target framerate much lower.
            I know that probably means Ryzen would perform worse for me than the 6900K, but how much worse is the question, and whether it is worth paying more for the pleasure 🙂

            • raddude9
            • 3 years ago

            No need to guess, have a look at

            [url<]http://www.techspot.com/review/1348-amd-ryzen-gaming-performance/[/url<] It should be clear that the more rapid advancement of GPUs has led to games still being CPU-bound at 1440p. Hmm, might be time for a new post on the subject...

            • Redocbew
            • 3 years ago

            [quote<]Does a decent job at doing what?[/quote<] Being a CPU? [quote<]Like, seeing 720p gaming results tells you how the system would do if you were to game at 720p with a 1080 and a desktop cpu. Surprise surprise, it'll game like a champ at that low res.[/quote<] Well yeah, even the FX-8370 scored high enough to be easily playable, but did it win? No, of course not. It's a turd, and you can see it being a turd. It came in dead last in every single test, and the others were faster. That's the whole point here. Not that you can get playable frame rates at a low res, but that you can distinguish the performance of one chip from another. If you just wanted to know the "quality of fine" as Waco put it in this very same thread, then fine I get it, but that's not what you're saying. You're saying these tests are useless because they were run at 1080p. That's not the same thing. What you want is a different kind of test that includes more than just the CPU.

            • rechicero
            • 3 years ago

            If the only way to separate several products is a synthetic benchmark then, by all means, just separate them by a real-world benchmark: price.

            • freebird
            • 3 years ago

            WHY NOT ONE WITH BOTH?!? Is that too much to ask? As for queue depth: it is important if you also dabble with database software on the side from time to time, like I do, which would be an exception for most. But what I DO NOT do is buy powerful hardware and run it in a handicapped state, so people who think 720×480 is still valid might as well ask for 320×200 CGA. If you’re willing to pay $300+ for a CPU and $500+ for a 1080 or better, then I would expect most of those buyers (like me) also own or plan to own a better-than-1080p monitor. Personally, I used to run 3×1680×1050 but switched to a 27″ 2560×1440 144Hz display; I still wish for an economical, high-quality 37″ 3440×1440+ display at 120+ Hz. That would be my sweet spot for the next 3-4 years, depending on cost and innovation.

        • ImSpartacus
        • 3 years ago

        [quote<]CPU is more important at lower FPS, so at lower resolutions.[/quote<] I'm confused. You're going to have higher fps at lower resolutions, like 720p, all else being equal. That aside, I don't think there's merit in 720p benchmarking. The whole point of benchmarking is that it's a "benchmark" for reality. Gaming at 720p with expensive desktop CPUs simply isn't reality. Yes, 720p would expose greater differences in CPUs, but it's an artificial scenario intentionally designed to do that. It doesn't match the real world and benchmarking is about emulating real world scenarios.

        • Voldenuit
        • 3 years ago

        [quote<]CPU is probably not the bottleneck at >1080p. CPU is more important at lower FPS, so at lower resolutions. I'd prefer adding 720p benchmarks over adding 1440p+ benchmarks and I personally have a 4K monitor.[/quote<] I'm not so sure. CPU is becoming a bottleneck for high fps gaming, and I also really want to know if it is a bottleneck for 99th percentile frame times and fps dips/min fps. Considering how big streaming is, I'd also like to see how the CPUs stack up for gaming+streaming scenarios.

          • ImSpartacus
          • 3 years ago

          Yeah, we need legitimate real world scenarios, not artificially cpu-bound scenarios that will never happen.

            • K-L-Waster
            • 3 years ago

            Would be a good addition to future tests — run Game X and stream the session and run a VOIP app at the same time.

            • bhtooefr
            • 3 years ago

            Will they never happen, though?

            Keep in mind that newer games will likely use more CPU; in fact, they already do, seeing how Sandy Bridge is finally, [i<]finally[/i<], starting to fall behind. Artificially CPU-bound scenarios can be an indicator of future performance in future games.

            • ImSpartacus
            • 3 years ago

            [quote<]Artificially CPU-bound scenarios can be an indicator towards future performance, in future games.[/quote<] That's a great hypothesis, but I haven't seen anyone actually test whether that's true. I don't think we can assume that just because you're technically running a video game and the hardware-load from that video game is CPU-bound that it's actually a relevant benchmark for reality.

            • Spunjji
            • 3 years ago

            This.

            “CPU-bound” can mean a LOT of different things. Gaming a benchmark to make it CPU-bound doesn’t mean that it will represent results from a future game that has genuinely high CPU requirements.

        • Ninjitsu
        • 3 years ago

        Honestly 720p is meaningless and useless at this point.

        1080p is fine, though.

        • freebird
        • 3 years ago

        So if AMD had designed the system with a SUPERCHARGED one-cycle 2MB first-level cache that ate up frames at 50% more than Intel’s at 720p, limited to only two cores at full load, and added tricks to make sure it used two supercharged cores that could clock higher than the rest, but gimped under a super-heavy GPU load, that would be OK with you?

        Or maybe, just maybe, you’d also like to see what happens when a system is trying to SQUEEZE every last drop out of a maxed-out GPU? Maybe some systems perform better under light GPU loads and some better under heavy GPU loads; some of us want to know that too.

        I for one only really care how it performs and what the cost is at the resolution I’m going to use it at, and if we’re talking about ANYTHING in the future, you can throw all the non-DX12/Vulkan results in the can, in my opinion. Isn’t DX12/Vulkan and more threads the future?

      • derFunkenstein
      • 3 years ago

      That would only hide any performance difference behind the bottleneck caused by the GPU. It basically doesn’t matter what CPU you use at 4K because even the GTX 1080 will struggle in these titles. It’s 4x as many pixels.
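
      For the curious, the arithmetic behind that “4x” figure checks out exactly; here is a minimal Python sketch, with nothing assumed beyond the standard resolutions:

      ```python
      # UHD "4K" vs. 1080p: how many pixels must be rendered per frame.
      uhd = 3840 * 2160  # 8,294,400 pixels
      fhd = 1920 * 1080  # 2,073,600 pixels
      print(uhd / fhd)   # 4.0 -- exactly four times the per-frame pixel work
      ```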

        • Waco
        • 3 years ago

        A shitty CPU won’t give a satisfactory experience at 4K, though. People keep saying it doesn’t matter, but minimum frame rates plummet with a shitty CPU driving things, too…

          • derFunkenstein
          • 3 years ago

          Well, yeah, I don’t mean a celeron. But any of these CPUs are going to be fine. Any one of them.

            • Waco
            • 3 years ago

            Agreed, but I’d like to know the quantity of fine. 🙂

            • slowriot
            • 3 years ago

            I’d love an inside-the-second look at 1440p and 4K gaming. I’m willing to bet we’d still see the differences between the Intel chips and AMD with those metrics. But I’d really like to know just how much difference there is under those more GPU-bound (but far closer to reality) loads.

            • Waco
            • 3 years ago

            Absolutely!

          • Ninjitsu
          • 3 years ago

          Yeah but a CPU holding 100 fps at 1080p isn’t going to be the bottleneck at higher resolutions, since the load will shift more to the GPU.

          i.e. you’re not gaining any data points by going to higher resolutions.

          OTOH, asking for 720p benchmarks is silly, since they’re academic at best, and if 1080p is already CPU-bottlenecked, going to 720p isn’t going to prove much. Additionally, no one can really use the information; other, non-gaming benchmarks are far more useful in this regard.

            • Waco
            • 3 years ago

            That totally depends on the game engine in question, doesn’t it? 🙂

            • derFunkenstein
            • 3 years ago

            Wouldn’t a big lag on one platform or another manifest itself at 1080p, while anything at higher resolutions would probably fall on the graphics card? I just think that if AM4 had problems with games at higher resolutions, we’d see signs of it at 1080p. Especially since TR still tests on Ultra.

            Edit: That doesn’t mean they shouldn’t do it, just to be sure. I just think it’d be a really boring article. 😆

            • Waco
            • 3 years ago

            Some engines scale differently with resolution than others unless I’ve gone crazy. It’s not always just a GPU load increase moving upward.

            • derFunkenstein
            • 3 years ago

            That might be true. Hmm. OK, I’ve changed my mind.

            (edit: if that sounds flippant, it’s not. You’ve convinced me it is worth looking at)

            • Ninjitsu
            • 3 years ago

            Well, you could argue that, but I can’t actually remember any…

            Arma will remain CPU limited, assuming it’s run on a GTX1080, but the order of CPUs isn’t going to change.

            I don’t know of any game in recent years that increases CPU load with resolution, but if there exists such a game then sure, should be tested.

      • rechicero
      • 3 years ago

      For me, 4K makes about as much sense as 720p: only a very slight percentage of buyers are going to use those resolutions. They are essentially synthetic benchmarks (especially the lower resolutions). 1440p is probably the best choice.

        • DancinJack
        • 3 years ago

        I would actually bet you quite a bit of money you’d see WAY more 720p/768p than 4K. I don’t think those lower resolutions are irrelevant at all.

          • derFunkenstein
          • 3 years ago

          Steam agrees with you. [url<]http://store.steampowered.com/hwsurvey[/url<]

            • bhtooefr
            • 3 years ago

            One thing that I’d love to see is CPU core count and presence of a DGPU on the lower resolution – 1366×768, 1600×900, and 1440×900 specifically – machines.

            I wouldn’t be surprised if a lot of those machines are IGP-equipped dual cores – read: laptops, or very low end or old desktops.

            The 1280×1024 machines, though? Those are desktops.

            • Ninjitsu
            • 3 years ago

            those are probably laptops…

            • derFunkenstein
            • 3 years ago

            Combined, the two resolutions represent 70% of all PCs connected to Steam (if you add up 1360×768 and 1366×768). 0.69% are 3840×2160. 1.81% are 2560×1440.

            • Voldenuit
            • 3 years ago

            Laptops probably make up most of the 720p/768p stats. Those won’t be running Ryzen, nor would 768p laptops be hosting discrete GPUs, either.

            • derFunkenstein
            • 3 years ago

            Probably so, but what about 1080p? All I’m trying to get at is that 1080p is absolutely relevant. It’s more than 40% of all systems on Steam.

            High-resolution testing for graphics cards is imperative. I’m less convinced about CPU reviews.

          • rechicero
          • 3 years ago

          Yeah, yeah, but I said “of buyers”, not “of ppl” ;-). I really doubt anybody is going to invest in a 1700 or the Intel equivalent to play at 720p.

          I understand the point of playing at 720p to focus on the CPU but, at the same time, that makes the review (especially with a Titan X!!!!) quite like a synthetic benchmark (not real-world for the product), and if we want a synthetic benchmark, why not use actual synthetic benchmarks?

            • raddude9
            • 3 years ago

            Very true, very few people are going to upgrade TO a 1080p system, and the kind of people who spend $500 on a CPU are not the kind of people who are happy to live with 1080p monitors.

            In other words, chips in this price range should be benchmarked with 1440p games; anything else is unrealistic.

            • Ninjitsu
            • 3 years ago

            That’s not true – the investment in a CPU has little correlation to the resolution of the monitor. People could be pairing these with $70 GPUs to drive the displays needed to look at more spreadsheets for all we know.

            Point being that this isn’t a gaming chip and no one will be buying it purely for gaming; if they do, then the 1080p results are more useful. 1440p for CPU testing is useless for now.

            Once 1440p becomes the dominant resolution, or GPUs aren’t the bottleneck at 1440p, then we can talk about it.

            • raddude9
            • 3 years ago

            GPUs are not the bottleneck at 1440p in today’s games; CPUs are still the bottleneck. Have a look at the 1440p reviews of Ryzen: they show a similar, although less pronounced, pattern to the 1080p results.

            And saying that there is no correlation between the resolution of the monitor and CPU price does not ring true to me. Most workstations today are snapped up by creative types working with 2D and 3D graphics; these people will want large monitors (and may also like playing games in their spare time). And heavy-duty spreadsheet users need a bigger monitor much more than they need an 8-core CPU. Do spreadsheets even use more than one core?

            • Ninjitsu
            • 3 years ago

            See for yourself: the order of CPUs doesn’t change, but the differences become smaller. What you see at 1080p is mostly the maximum you’ll see at 1440p, but the difference between most CPUs becomes minimal as the bottleneck shifts.

            [url<]https://www.guru3d.com/articles_pages/amd_ryzen_7_1800x_processor_review,16.html[/url<]

            • freebird
            • 3 years ago

            What do you mean, “the order of CPUs doesn’t change”??? In the link you posted, the 1800X went from 7th at 1080p to 2nd at 1440p.

            I think if you want to give people a “true” view of what they can “expect,” pair up an RX 480/1060/1070 with both at 1080p,
            or
            a 1080/Ti/Vega and/or CrossFire/SLI at 1440p and 4K,

            because more than likely those are the cards they’ll currently be using at these resolutions with max settings.

            • Ninjitsu
            • 3 years ago

            It goes to second place when most of the CPUs are within 3 FPS of each other – except the last few places that remain unchanged.

            There’s a clear GPU bottleneck there. If the GPU was fast enough to run 1440p like it does 1080p, you’d see the 1080p graph repeated. That’s the point.

            CPUs can last for many years. Benching with today’s mid/low-end GPUs at 1080p, or with today’s higher-end GPUs at 1440p, tells you nothing about how a CPU will perform with next year’s GPUs, because performance is limited by the GPU.

            p.s. Read through the other benchmarks too.

            • freebird
            • 3 years ago

            The point being made by many here is that if you use a high-end CPU and a 1080-class or better video card, you probably do NOT play at less than 1080p with MAX settings (which, to TR’s credit, is how they ran most of their tests; I saw other sites testing at 1080p with a 1080 or a Titan but turning settings way down). Many will play at, and want to see additional tests at, higher resolutions, and I’d like to see newer games using DX12 or Vulkan, since that is more than likely the future.

            BTW,
            I’ve read through dozens of benchmarks, so what is the point of your p.s.? You stated that the CPUs don’t change order, which they did on THAT page. If you had said “significantly,” that would have been correct; or “barely,” or “within an acceptable margin of error,” etc.

            I agree CPUs can last for many years, and so can GPUs, but NO TEST will tell you how things are going to work in the FUTURE, PERIOD. So testing at low resolution and pushing a lot of frames tells you nothing about how a system is going to work at higher resolutions years from now, so what is the point? Especially with DX12 and 8-core systems now available at semi-mainstream prices. I live in the NOW, not the recent past or the near or far future, so ALL I want to know is whether buying an 8-core Ryzen is going to get me a boost for video encoding and also keep me within an acceptable fps margin for playing games where I CURRENTLY PLAY them.

            Everyone used to say all you need is two fast cores for gaming. Still true? NO. If you want to talk about the FUTURE or FUTURE-PROOFING, then the tests should be run with SLI Titan Xs or 1080 Tis at 4K minimum, to see if any of today’s CPUs “that can last for several years” have enough horsepower to fully utilize that kind of GPU power. Still, like I said, programming models and DX12/Vulkan will be changing the way games are programmed in the future, but that would be a better test than checking how many fps can be pushed at 720p or 640×480.

      • f0d
      • 3 years ago

      The higher you go in resolution, the less strain there is on the CPU; you are just straining the GPU instead.

      Go high enough (probably around 4K) and you will notice pretty much all CPUs are the same: Bulldozer will perform about as well as a 7700K or a Ryzen.

      Testing a CPU at lower resolutions (and, may I add, most gamers still use 1080p) is the best way to test a CPU; testing at higher resolutions shows you less about how a CPU performs.

      A CPU that performs well at lower resolutions will also perform well at higher resolutions, but the problem is that a CPU that is poor at lower resolutions will perform the same as the best CPUs at higher resolutions, and that tells us pretty much nothing in a “CPU review.”
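
      To make that intuition concrete, here is a minimal, purely illustrative Python sketch (the millisecond costs below are assumptions, not measurements): model each frame as the slower of a fixed CPU cost and a GPU cost that scales with pixel count, and the CPU gap that is obvious at 720p vanishes by 1440p.

      ```python
      # Toy bottleneck model: frame_time = max(cpu_ms, gpu_ms).
      # CPU cost is assumed constant per frame; GPU cost is assumed to scale
      # linearly with pixel count. All numbers are illustrative, not measured.
      RESOLUTIONS = {"720p": 1280 * 720, "1080p": 1920 * 1080,
                     "1440p": 2560 * 1440, "4K": 3840 * 2160}

      def fps(cpu_ms, gpu_ms_at_1080p, pixels):
          gpu_ms = gpu_ms_at_1080p * pixels / (1920 * 1080)
          return 1000.0 / max(cpu_ms, gpu_ms)  # the slower unit sets the frame rate

      for name, px in RESOLUTIONS.items():
          fast = fps(cpu_ms=6.0, gpu_ms_at_1080p=8.0, pixels=px)  # hypothetical faster CPU
          slow = fps(cpu_ms=9.0, gpu_ms_at_1080p=8.0, pixels=px)  # hypothetical slower CPU
          print(f"{name:>5}: faster CPU {fast:6.1f} fps, slower CPU {slow:6.1f} fps")
      ```

      With these made-up numbers, the two hypothetical CPUs land around 167 and 111 fps at 720p, but both sit near 70 fps at 1440p and near 31 fps at 4K because the GPU term dominates, which is exactly the pattern described above.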

      • Airmantharp
      • 3 years ago

      I get what you’re saying, but as a 165Hz 1440p gamer, 1080p benchmarks tell me [i<][u<]exactly[/u<][/i<] what I'm looking to uncover: that Ryzen is nipping at Intel's heels but not quite 'there' yet for gaming. And while I do do some intensive non-gaming work, I don't do enough to justify the upgrade over my current 6700K; if I did, I certainly would, and the strong performance here shows that AMD is a worthwhile competitor and definitely worth recommending.

    • chuckula
    • 3 years ago

    [quote<]One thing is clear from our Y-Cruncher results right away: AVX2 SIMD support seems to help a lot. Ivy Bridge, Sandy Bridge, and Bulldozer don't have it, and they suffer accordingly. The Ryzen CPUs have AVX2 support, but their 256-bit AVX throughput is half that of the Haswell and newer chips because of the 128-bit width of their FP units. Despite their high core and thread counts, the Ryzen chips land smack between Haswell and Skylake here. The Intel Extreme Edition chips put their copious memory bandwidth and execution hardware to good use by leading the pack in number crunching.[/quote<] A very good object lesson in the differences in design philosophy between recent Intel chips and what we are seeing with RyZen. The DAW-crunching benchmarks are also strong real-world test cases that show how important it is to get the software right if you actually want to take advantage of newer chip architectures.
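
    A rough peak-throughput calculation shows why that pattern is plausible; this is only a sketch, using nominal base clocks and textbook per-core FMA rates as assumptions, so real results will vary with boost clocks and code efficiency:

    ```python
    # Back-of-envelope peak FP32 FLOPS for AVX2 FMA-heavy code.
    # Zen runs 256-bit AVX2 ops on two 128-bit FMA pipes per core, while
    # Haswell and newer Intel cores have two full 256-bit FMA units.
    def peak_gflops(cores, ghz, fma_units, vector_bits):
        lanes = vector_bits // 32                 # FP32 lanes per FMA unit
        flops_per_cycle = fma_units * lanes * 2   # an FMA counts as multiply + add
        return cores * ghz * flops_per_cycle

    # Assumed nominal base clocks, for illustration only.
    print(peak_gflops(cores=8, ghz=3.6, fma_units=2, vector_bits=128))  # Ryzen 7 1800X: ~460.8
    print(peak_gflops(cores=8, ghz=3.0, fma_units=2, vector_bits=256))  # Core i7-5960X: ~768.0
    print(peak_gflops(cores=4, ghz=4.2, fma_units=2, vector_bits=256))  # Core i7-7700K: ~537.6
    ```

    Per core and per clock, the Ryzen figure works out to half the Haswell-and-later figure, which is consistent with the quoted passage about the 128-bit FP units.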

    • Coran Fixx
    • 3 years ago

    Great job on the review. Will be great to see overclocking and maybe some 2k gaming data.

    People will spend whatever they want (myself as an example), but I don’t see $100 worth of difference between the 1800X and the 1700X.

    • JAMF
    • 3 years ago

    Where are the Arma3 numbers??? *shock*

      • drfish
      • 3 years ago

      Timing constraints left them on the chopping block. However, Jeff said during testing that YAAB on the 1700X produced 35fps. So, it looks like I made the right call.

        • chuckula
        • 3 years ago

        YOU SHOULD HAVE WAITED FOR RYZEN+!

    • tsk
    • 3 years ago

    This is the CPU we deserve, but not the one we need right now.

    • DragonDaddyBear
    • 3 years ago

    Thank you for adding not just a 2600K but an FX processor!

    • kuraegomon
    • 3 years ago

    Woohoo!

    • chuckula
    • 3 years ago

    [quote<]Both the Core i7-5960X and the Core i7-6950X deliver tremendous performance in Euler3D thanks to their potent combination of many execution resources and bountiful memory bandwidth. Makes one wonder what Ryzen could do with an extra two memory channels.[/quote<] A lot of people think that those quad memory channels are just Intel being stupid because they don't make your games run faster. While it's true they don't help your games, that's also not why Intel put them into the package.

      • AnotherReader
      • 3 years ago

      True! A lot of HPC workloads are constrained by memory bandwidth. The first Opterons, by virtue of their on-die memory controller, won AMD a big share of the HPC market, and in Euler3D the eDRAM-equipped i7-4950HQ [url=https://techreport.com/review/24879/intel-core-i7-4770k-and-4950hq-haswell-processors-reviewed/13<]spanks the higher-clocked 4770K like Spock beating on Khan in the second reboot movie[/url<].

        • Krogoth
        • 3 years ago

        AMD is going to remedy that with their server-tier platforms. Ryzen is really HEDT/Workstation-tier at heart.

          • AnotherReader
          • 3 years ago

          Yes, those 8 DDR4 channels will help some workloads a lot.

          • just brew it!
          • 3 years ago

          If they were really aiming for the workstation market they should’ve included ECC support.

            • DrDominodog51
            • 3 years ago

            Is the lack of ECC support confirmed?

            • DrDominodog51
            • 3 years ago

            ServeTheHome’s Twitter says they spoke to people at AMD and Ryzen lacks ECC support in desktop versions.

            • chuckula
            • 3 years ago

            ServeTheHome was at the official RyZen press event, so if they have confirmation from AMD on it then I’ll believe it until I see any concrete evidence to the contrary.

            • DancinJack
            • 3 years ago

            [url<]https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_creators_of_athlon_radeon_and_other/def6vs2/[/url<] I know it's reddit, but it IS an AMD employee during their AMA.

            • DrDominodog51
            • 3 years ago

            I just saw that on the AMA. Hopefully, a mobo manufacturer will make a BIOS option available to turn on ECC.

            Edit: [url<]https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_creators_of_athlon_radeon_and_other/def58sv[/url<]

            • Krogoth
            • 3 years ago

            That’s assuming there’s tracing for the parity bits on the motherboard.

            • chuckula
            • 3 years ago

            [s<]I've heard noise both ways about ECC. The latest round of rumors indicated you could get it but you have to have the right motherboard to make sure it works.[/s<] [EDIT: As DrDominodog51 notes, there has been relatively concrete information straight from AMD that ECC isn't in the consumer-level chips, so I'll go with that.]

            • just brew it!
            • 3 years ago

            Near as I can figure, early pre-release specs for some of the motherboards included ECC support. This was subsequently changed to remove that, and/or state that ECC DIMMs would work [i<]in non-ECC mode only[/i<]. If I may speculate for a moment, this probably means that AM4 has the pins for ECC, but current RyZen CPUs have it disabled. As a consequence, motherboard makers either couldn't validate ECC on their motherboards, or didn't want to confuse consumers by advertising ECC support when no currently released AM4 CPUs support it.

            • Krogoth
            • 3 years ago

            It looks like the Ryzen memory controller does support unbuffered ECC, but motherboard and chipset support is another matter.

            • just brew it!
            • 3 years ago

            Chipset is irrelevant since the memory controller is in the CPU. Chipset ECC support only mattered in the days back before IMCs, when all of the memory was connected to the northbridge.

            • AnotherReader
            • 3 years ago

            I wish it were that simple. The i7 doesn’t support ECC while the i3 does. The Xeon E3, based on the same die as the i7, supports it.

            • Krogoth
            • 3 years ago

            That is no longer true on the Intel side. You have to get one of their Cxxx-series chipsets if you want ECC support, but that artificial lock has more to do with firmware than anything else.

            Not sure if AMD is going to follow suit.

            • just brew it!
            • 3 years ago

            OK, I guess I should’ve stated that there is no [i<]technical[/i<] reason why it should depend on the chipset any more.

            • SuperSpy
            • 3 years ago

            Chipsets might be irrelevant, but the firmware on the motherboard isn’t.

            My guess is it will be similar to AM3 systems now, where the likes of ASUS will ship with ECC support, and not many others will.

            • just brew it!
            • 3 years ago

            The initial round of Asus AM4 boards do not claim to have ECC support.

    • Dazrin
    • 3 years ago

    Prince Humperdinck from The Princess Bride: “Skip to the end!”

    Then I will go back and get the rest.

    Thanks Jeff!

    • dpaus
    • 3 years ago

    I think the price-performance graph says everything from a marketing PoV

      • chuckula
      • 3 years ago

      YOU’RE BACK!

        • LocalCitizen
        • 3 years ago

        You liked Bulldozer. You must really love Zen.

        • dpaus
        • 3 years ago

        Every so often, I drop in to see if NeelyCam is ready to [url=https://techreport.com/news/24042/tuesday-shortbread#metal<] face the music[/url<], but.... Let's just say I haven't had a good beer-n-wings victory meal in a loooooong time.

          • chuckula
          • 3 years ago

          I would buy you beer-n-wings [[b<][i<]Buffalo wings of course[/i<][/b<]] even if I didn't lose a bet.

            • dpaus
            • 3 years ago

            Dude! Are you in Buffalo?

            • chuckula
            • 3 years ago

            No, but in the midwest in general. Gimme a PM if you’re ever in the greater Indy region (I actually don’t live all that far from Kampman’s home base).

    • chuckula
    • 3 years ago

    Intel’s IoT division built a robot to deliver [url=https://www.youtube.com/watch?v=sc9Y-vh6eUM<]its response to RyZen.[/url<]
