AMD’s Ryzen 5 1600X and Ryzen 5 1500X CPUs reviewed, part one

AMD is wasting no time filling out its Ryzen CPU lineup. Just a little over a month ago, the company’s eight-core, 16-thread Ryzen 7 CPUs roared into the high-end desktop market, where they delivered a huge boost in bang for the multithreaded buck. Today, the company’s Ryzen 5 CPUs take the fight to the $170-to-$250 price range, also known as the meaty middle of the CPU market. AMD’s strategy here is the same as it’s been for many years: offer more cores and threads than Intel does for the money. This time around, though, the Zen architecture’s much-improved IPC and competitive power efficiency could offer much more steak to go with the sizzle.

The two Ryzen 5 CPUs I have on the bench today—the Ryzen 5 1500X and the Ryzen 5 1600X—are the highest-performing members of a quartet of Ryzen 5 chips. The Ryzen 5 1500X offers four cores and eight threads for $189, while the Ryzen 5 1600X offers six cores and 12 threads for $249. AMD will also offer lower-priced variants of each of these CPUs with lower clocks and less XFR headroom. The Ryzen 5 1600 takes a 400-MHz haircut across the board, and AMD slices $30 off the price tag of the 1600X for the trouble. The Ryzen 5 1400, in turn, loses 300 MHz of pre-Extended Frequency Range (XFR) clock speed and costs $20 less than the 1500X.

| Model | Cores | Threads | Base clock | Boost clock | L3 cache | XFR | TDP | Price |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ryzen 5 1600X | 6 | 12 | 3.6 GHz | 4.0 GHz | 16MB | Yes | 95W | $249 |
| Ryzen 5 1600 | 6 | 12 | 3.2 GHz | 3.6 GHz | 16MB | Yes | 65W | $219 |
| Ryzen 5 1500X | 4 | 8 | 3.5 GHz | 3.7 GHz | 16MB | Yes | 65W | $189 |
| Ryzen 5 1400 | 4 | 8 | 3.2 GHz | 3.4 GHz | 8MB | Yes | 65W | $169 |

To make a Ryzen 5 CPU from the full eight-core die that underpins the Ryzen 7 family, AMD symmetrically shuts off cores across the pair of core complexes (or CCXes) on that die to reach the desired resource complement. In the case of the Ryzen 5 1600 series, that means one core in each CCX is disabled, but the chip retains all 16MB of its L3 cache. The Ryzen 5 1500X loses two cores per CCX to the silicon scythe, but it still keeps all 16MB of L3 from the full die. The Ryzen 5 1400 takes the deepest cuts of the bunch: in addition to losing two cores per CCX, the amount of L3 cache per CCX is halved to 4MB, for 8MB in total.

As we hinted at a moment ago, AMD’s Extended Frequency Range (XFR) returns on the Ryzen 5 series. Depending on the cooling apparatus one straps on top of a Ryzen 5 1600X, that CPU will run at up to 4.1 GHz Turbo speeds in lightly-threaded workloads, and it can clock up to 3.7 GHz under heavier load. The Ryzen 5 1500X features the most aggressive XFR implementation that AMD has yet shipped. That chip can take advantage of up to 200 MHz of XFR headroom for a 3.9 GHz maximum Turbo speed and a 3.7 GHz all-core speed. The 1600X’s 95W TDP might give it more leeway to hit its XFR speeds compared to the 65W 1500X, however.

As Intel has done with its recent unlocked Core i5s, AMD won’t be including a boxed cooler with the Ryzen 5 1600X as part of the bargain. The company did send along one of its Wraith Max coolers as part of the press kit we received, but the rank and file will be on their own for finding an adequate CPU cooler. The Ryzen 5 1500X comes with AMD’s fairly hefty Wraith Spire cooler right in the box, however.

Now, for some bad news. While I expect exciting numbers from the Ryzen 5 1500X and Ryzen 5 1600X in our productivity benchmarks, those numbers will have to wait for a little bit. I’ve been battling a severe case of the flu since the middle of last week, so testing and writing for this review has been slow going. After some deliberation, I decided to go ahead and publish gaming benchmarks for the Ryzen 5 family first. I expect that many builders shopping for a CPU in this price range are more interested in a gaming PC than an all-out workstation, so I wanted to get this vital information out the door rather than publish nothing at all this morning. We’ll have full productivity numbers for the Ryzen 5 chips soon, but for now, let’s get our game on.

 

Our testing methods

As always, we did our best to collect clean test numbers. For each of our benchmarks, we ran each test at least three times, and we’ve reported the median result. Our test systems were configured like so:

| Processor | AMD Ryzen 7 1800X, AMD Ryzen 5 1600X | AMD Ryzen 5 1500X |
| --- | --- | --- |
| Motherboard | Gigabyte Aorus AX370-Gaming 5 | Gigabyte AB350-Gaming 3 |
| Chipset | AMD X370 | AMD B350 |
| Memory size | 16 GB (2 DIMMs) | 16 GB (2 DIMMs) |
| Memory type | G.Skill Trident Z DDR4-3866 (rated) SDRAM | G.Skill Trident Z DDR4-3866 (rated) SDRAM |
| Memory speed | 3200 MT/s | 3200 MT/s |
| Memory timings | 15-15-15-35 1T | 15-15-15-35 1T |
| System drive | Intel 750 Series 400GB NVMe SSD | Intel 750 Series 400GB NVMe SSD |

 

| Processor | Intel Core i5-2500K, Intel Core i5-3570K |
| --- | --- |
| Motherboard | Asus P8Z77-V Pro |
| Chipset | Z77 Express |
| Memory size | 16 GB (2 DIMMs) |
| Memory type | Corsair Vengeance Pro Series DDR3 SDRAM |
| Memory speed | 1866 MT/s |
| Memory timings | 9-10-9-27 1T |
| System drive | Corsair Neutron XT 480GB SATA SSD |

 

| Processor | Intel Core i5-4690K | Intel Core i5-6600K, Intel Core i5-7600K, Intel Core i7-7700K |
| --- | --- | --- |
| Motherboard | Asus Z97-A/USB 3.1 | Gigabyte Aorus GA-Z270X-Gaming 8 |
| Chipset | Z97 Express | Z270 |
| Memory size | 16 GB (2 DIMMs) | 16 GB (2 DIMMs) |
| Memory type | Corsair Vengeance Pro Series DDR3 SDRAM | G.Skill Trident Z DDR4-3866 (rated) SDRAM |
| Memory speed | 1866 MT/s | 3200 MT/s |
| Memory timings | 9-10-9-27 1T | 15-15-15-35 2T |
| System drive | Corsair Neutron XT 480GB SATA SSD | Samsung 960 EVO 500GB NVMe SSD |

They all shared the following common elements:

| Storage | 2x Corsair Neutron XT 480GB SSD |
| Discrete graphics | Gigabyte GeForce GTX 1080 Xtreme Gaming |
| Graphics driver version | GeForce 378.92 |
| OS | Windows 10 Pro with Creators Update |
| Power supply | Corsair RM850x |

Thanks to Corsair, Kingston, Asus, Gigabyte, Cooler Master, Intel, G.Skill, and AMD for helping us to outfit our test rigs with some of the finest hardware available.

Some further notes on our testing methods:

  • The test systems’ Windows desktops were set at a resolution of 1920×1080 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.

  • Because Ryzen processors perform best with Windows’ “High Performance” power plan enabled, we’ve broken a long-standing tradition and switched on that plan for our Ryzen systems. Our Intel systems were left on the “Balanced” plan, since to our knowledge, it doesn’t interfere with performance from those CPUs.

  • In response to popular demand, we’re re-benching AMD’s Ryzen 7 1800X and Intel’s Core i7-7700K with identical DDR4 speeds: DDR4-3200 at 15-15-15-35 timings. For fun, we’ve also included numbers from our Core i5-2500K pushed to 4.9 GHz in our test results. You’ll see this configuration called out in our results as “Core i5-2500K (OC).” As with any overclock, your mileage may vary.

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Doom (OpenGL)
Doom likes to run fast, and especially so with a GTX 1080 pushing pixels. The game’s OpenGL mode is a particularly hard test for keeping that beast of a graphics card fed. We cranked up all of Doom’s eye candy at 1920×1080 and went to work with our usual test run in the beginning of the Foundry level.


As we’ve come to expect from testing Ryzen CPUs, Doom’s OpenGL mode favors single-threaded grunt over broad-shoulderedness. While the Ryzen 5 1600X isn’t in contention for the highest average frame rate around, it turns in a 99th-percentile frame time on par with that of the much more expensive Ryzen 7 1800X. The Ryzen 5 1500X falls toward the back of the pack a bit, possibly thanks to its lower clock speeds.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time the GTX 1080 spends beyond certain frame-time thresholds, each with an important implication for gaming smoothness. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame. And 8.3 ms corresponds to 120 FPS, an even more demanding standard that Doom can easily meet or surpass on hardware that’s up to the task.

For this review, we’ve also added a button for the 6.94-ms mark, or 144 Hz. In combination with the GTX 1080, some of our CPUs have no trouble pushing frame rates that high in some of our test titles. We figure it’s worth diving in and seeing how well they do at this most demanding threshold.
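
To make the arithmetic behind these plots concrete, here’s a minimal sketch of a time-spent-beyond-X calculation in Python. The frame times below are hypothetical, and treating the metric as the sum of each frame’s excess past the cutoff is our reading of it, not a dump of our actual tooling:

```python
# Hypothetical per-frame render times (in milliseconds) from a test run.
frame_times_ms = [6.2, 7.1, 16.9, 8.4, 52.3, 7.8, 9.1]

def time_beyond(frame_times, threshold_ms):
    """Accumulate only the portion of each frame time past the threshold."""
    return sum(t - threshold_ms for t in frame_times if t > threshold_ms)

# Each threshold corresponds to an instantaneous frame rate of 1000 / t FPS:
# 50 ms is 20 FPS, 16.7 ms is 60 FPS, and 6.94 ms is 144 FPS.
for threshold_ms in (50.0, 33.3, 16.7, 8.3, 6.94):
    excess_ms = time_beyond(frame_times_ms, threshold_ms)
    print(f"beyond {threshold_ms} ms ({1000 / threshold_ms:.0f} FPS): {excess_ms:.1f} ms")
```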

None of the CPUs we tested have more than a trace of frames that would drop frame rates below 60 FPS, so it’s worth clicking over to the more demanding 8.3-ms plot to see what’s happening. The Ryzen 5 1600X trails only the Skylake and Kaby Lake chips here, while the 1500X performs about on par with our overclocked Core i5-2500K. At the 6.94 ms threshold, all of the Ryzen chips spend significantly more time working on tough frames than the Intel competition does.

 

Doom (Vulkan)


Doom’s Vulkan renderer brings every CPU in this bunch more or less on par. Where only the highest-IPC chips could push a 170-FPS average under Doom’s OpenGL mode, Vulkan lets nearly all of them do it. Check out those remarkably consistent 99th-percentile frame times, as well.


As with Doom’s OpenGL mode, none of the CPUs under test here spend more than a breath of time past the 16.7-ms mark. They all do an admirable job keeping under the 8.3-ms threshold, as well. We have to click over to the 6.94-ms mark to see any major differences in CPU performance here, and even then, the Ryzen 5s are right in the mix with Intel’s best.

 

Crysis 3

Although Crysis 3 is nearly four years old now, its lavishly detailed environments and demanding physics engine can still stress every part of a system. To put each of our CPUs to the test, we took a one-minute run through the grassy area at the beginning of the “Welcome to the Jungle” level with settings cranked at 1920×1080.


With our test settings, Crysis 3 really isn’t happy with just four threads at its disposal. Witness the huge gap in 99th-percentile frame times between the pure quad-core chips in this bunch and the many-threaded parts, the Ryzen 5 1500X included. There’s some noticeable fuzz in our frame-time graphs, as well. The Ryzen 5 1600X offers serious value for the high-refresh Crysis 3 gamer on a budget, but the 1500X can’t quite keep up.


Our time-spent-beyond-X graphs reveal that the older Intel quad-cores spend a few seconds of our test run on frames that drop the instantaneous frame rate below 60 FPS. The Ryzen 5 1500X spends less than a third of a second on tough frames past that threshold, by comparison. The real action is happening at the 8.3-ms mark, however, where the Ryzen 5 1600X and the Ryzen 7 1800X trade blows with the Core i7-7700K for superiority. Despite its best efforts, the 1500X sits well behind its many-threaded counterparts here.

 

Deus Ex: Mankind Divided

With its rich and geometrically complex environments, Deus Ex: Mankind Divided can prove a challenge for any CPU at high enough refresh rates. We recently tweaked our preferred recipe of in-game settings to put the squeeze on the CPU, and it’s proven quite the torture test.


Under these grueling conditions, the most powerful Ryzen CPUs finish mid-pack in our average-FPS measure, but they deliver better 99th-percentile frame times than the Skylake and Kaby Lake Core i5s by a hair. The Ryzen 5 1500X trails those same Core i5s by just a bit in that critical measure of smooth frame delivery. 


At the crucial 16.7-ms threshold, the older Intel CPUs spend anywhere from one to two seconds on tough frames that would drop us below 60 FPS. Flip over to the 8.3-ms mark, and the 1600X and 1800X are hanging right with the Core i5-6600K and i5-7600K in the fight to stay above 120 FPS. The Ryzen 5 1500X and the overclocked Core i5-2500K split the difference between the newer and older Core i5s. The Core i7-7700K is unquestionably superior to every other chip here when it comes to sustaining high frame rates, though.

 

Watch Dogs 2

Here’s a new addition to our CPU-testing suite. We heard through the grapevine that Watch Dogs 2 can occupy every thread one can throw at it, so we turned up the eye candy and walked through the forested paths around the game’s Coit Tower landmark to get our CPUs sweating.

 


Watch Dogs 2 gives the Ryzen 5 1600X plenty of room to stretch its legs. The six-core chip’s average frame rate is only a bit behind that of the Core i5-7600K, and its 99th-percentile frame time is on par with the Kaby chip’s. Meanwhile, the Ryzen 5 1500X can’t quite keep up with the Core i5-6600K.


The 16.7-ms mark is the most relevant threshold for seeing where tough frames crop up for these chips in Watch Dogs 2. The Ryzen 5 1600X, the i5-7600K, the 1800X, and the 7700K all chalk up negligible amounts of time processing work for those frames, while the 1500X makes the GTX 1080 spend about two seconds of our one-minute test run waiting. Still, that’s a much better result than even the overclocked i5-2500K turns in here.

 

Grand Theft Auto V
Grand Theft Auto V can still put the hurt on CPUs as well as graphics cards, so we ran through our usual test run with the game’s settings turned all the way up at 1920×1080. Unlike most of the games we’ve tested so far, GTA V heavily favors a single thread or two, and there’s no Vulkan or DirectX 12 renderer to offer a way around it.


GTA V is a worst-case scenario of sorts for the Ryzen 5 CPUs. The 1600X, for its part, does an admirable job of keeping up with the i5-6600K and i5-7600K despite its clock-speed and IPC deficits. The Ryzen 5 1500X can’t keep the average frame rate as high, and its 99th-percentile frame time is noticeably worse than its beefier stablemates’.


It’s good news for the Ryzen 5s at our 16.7-ms threshold of “badness.” None of AMD’s midrange contenders spend any noticeable length of time on frames that would drop the instantaneous frame rate below that magical 60-FPS number. Flip over to the more demanding 8.3-ms mark, however, and the superior single-threaded performance of the Skylake and Kaby Lake chips asserts itself. Even so, the 1600X does pretty well here. Unfortunately, the 1500X looks pretty anemic by comparison.

 

The Division (DX11)
Tom Clancy’s The Division features a vast, open-world New York setting with tons of complex geometry and visual density. Informal testing showed that this rich world could take up most of the power of our quad-core CPUs, so we cooked up some test settings and trudged through the endless garbage bags on the game’s streets to see how it performed.


In its DirectX 11 mode, The Division exhibits downright weird performance. Part of this may be down to variance in the in-game environmental conditions under which we tested, despite our best efforts to keep things consistent. Weather and time of day can have a major effect on The Division’s performance, so that might explain some of the chaos. Regardless, the game’s DirectX 11 render path seems to benefit from more per-core performance, at least up to a point. Problem is, the Ryzen CPUs are all on the wrong end of that criterion, so they all cluster at the back of the pack.


 

Using our time-spent-beyond-X graphs, we can see that the Ryzen CPUs spend about twice as much time as the recent Intel competition working on tough frames that drop the instantaneous rate below our golden 60 FPS yardstick. What’s weird is that the 1600X suffers the most, while the 1500X and 1800X are about on par with one another. We might need to retest The Division and see if these results continue to hold. For now, let’s take a look at the game’s performance under DirectX 12.

 

The Division (DX12)

 


With its DirectX 12 renderer engaged, a lot of the performance problems we saw from The Division disappear. Problem is, the Ryzen 5 CPUs both exhibit noticeable spikes in frame times that can often be felt as little hitches or judders during movement. Even so, the major improvements in both average frame rates and 99th-percentile frame times across the board make it well worth it to flip on the next-generation API in this game.


For the first time in a while, we have to start at the 50-ms mark as we begin our time-spent-beyond analysis for The Division’s DX12 renderer. Here, you can see the collected milliseconds representing the little spikes I was talking about from some of the Ryzen CPUs. Moving down to the 33.3-ms threshold reveals more of those troublesome frames, but that’s about the end of any concerning data from this particular test. None of the CPUs spend an appreciable amount of time past 16.7 ms making the graphics card wait for work, and they all spend under 10 seconds past the 8.3-ms mark, as well. Were it not for those inexplicable little spikes, the Ryzen 5 chips would be excellent performers in The Division’s DX12 mode.

 

Conclusions—for now

Let’s sum up the performance of our group of test CPUs using one of our famous value scatter plots. The best values in gaming smoothness from this group of chips will tend toward the upper left of the plot, where prices are lowest and performance is highest. To make this higher-is-better view of the data work, we’ve taken the geometric mean of the 99th-percentile frame times we recorded for each CPU and converted it into an FPS value.
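
If you’re curious about the mechanics of that conversion, here’s a minimal sketch, assuming one 99th-percentile frame time per game for a given CPU (the numbers are made up for illustration):

```python
import math

# Hypothetical 99th-percentile frame times (ms), one per game in the suite.
p99_frame_times_ms = [8.9, 10.2, 12.5, 9.6, 11.1, 13.0]

# The geometric mean keeps one outlier game from dominating the summary score.
geomean_ms = math.prod(p99_frame_times_ms) ** (1 / len(p99_frame_times_ms))

# Convert the averaged frame time into a higher-is-better FPS value.
print(f"{1000 / geomean_ms:.1f} FPS (99th percentile)")
```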

Going by our latency-sensitive 99th-percentile-frame-time metric, the Ryzen 5 1600X falls right between Intel’s most modern Core i5s for delivered gaming smoothness. AMD needed to nail that spot in order to have a chance at taking back a slice of the mainstream CPU market, and it’s stuck the landing perfectly. The Ryzen 5 1600X gives gamers real choice at the $250 price point for the first time in several years.

In games like Grand Theft Auto V that care a lot about single-threaded performance, the 1600X doesn’t fall that far behind the Core i5-7600K, and titles that can take full advantage of the 1600X’s many threads let the Ryzen chip match or even handily beat the Kaby Lake quad-core. Our early tests suggest that the 1600X’s generous core and thread count will let it take a hefty lead outside of games, as well. Given those early results, I think AMD may have delivered the best bang-for-the-buck, do-it-all CPU so far this year.

The $189 Ryzen 5 1500X could also be an appealing CPU value for gamers, even if its performance isn’t quite as eyebrow-raising as that of the 1600X. The hot Ryzen quad-core only trails the Core i5-6600K by about 7% in our 99th-percentile-FPS metric, and it’ll sell for 20% less than the unlocked Skylake quad-core did at the height of its popularity. Unlike Intel’s unlocked quads (and the Ryzen 5 1600X), the 1500X will also be ready to go out of the box thanks to its included Wraith Spire cooler. Gamers considering this CPU will almost certainly pair it with a more modest graphics card than the GTX 1080 on our test bench, and it should serve as quite the solid foundation for an RX 480- or GTX 1060-powered gaming PC. We’ll have to finish running the 1500X through our productivity testing to really get a sense of whether this chip is worthy of consideration alongside the Core i5-7500 in more affordable midrange builds.

The biggest challenge for AMD in this market may be the unlocked Core i5 CPUs already in builders’ systems. Although we didn’t get to overclock every Core i5 on our test bench, pushing our Core i5-2500K to 4.9 GHz often let it deliver gaming performance on par with even the Ryzen 7 1800X (except in Watch Dogs 2 and Crysis 3, where the extra cores and threads of the higher-end Ryzens let them keep a wide lead). Not bad at all for a six-year-old CPU. Assuming their workloads allow for it, folks who haven’t overclocked their unlocked Core i5s yet may find it worthwhile to strap on a beefy cooler and get to tweaking instead of shelling out for a whole new rig.

We’re hard at work finishing up our non-gaming testing on this Ryzen 5 duo, but for now, the Ryzen 5 1600X seems like the Ryzen chip to get if you’re a gamer. Its high clock speeds and generous thermal envelope let it deliver gaming performance on par with that of the $500 Ryzen 7 1800X, and its six fast cores and 12 threads should let it offer plenty of performance in non-gaming tasks. If the 1600X delivers on its considerable potential in our productivity testing, it might even topple the Core i5-7600K as our Sweet Spot CPU recommendation in our System Guide. Stay tuned.

Comments closed
    • feelsgoodbatman
    • 3 years ago

    [url<]https://youtu.be/yWdhLXl5a5s[/url<] Great video from the lady herself.

    • ptsant
    • 3 years ago

    Hello!

    Do you plan on updating the review with multithreaded benchmarks? There are several to be found around the web, but for completeness’ sake and for future reference it would be nice to have this information.

    • Thoughts
    • 3 years ago

    What I’m surprised to see missing… in virtually all reviews across the web… is any discussion (by a publication or its readers) on the AM4 platform’s longevity and upgradability (in addition to its cost, which is readily discussed).

    Any Intel Platform – is almost guaranteed to not accommodate a new or significantly revised micro-architecture… beyond the mere “tick”. In order to enjoy a “tock”, one MUST purchase a new motherboard (if historical precedent is maintained).

    AMD AM4 Platform – is almost guaranteed to, AT LEAST, accommodate Ryzen “II” and quite possibly Ryzen “III” processors. And, in such cases, only a new processor and BIOS update will be necessary to do so.

    This is not an insignificant point of differentiation.

      • Redocbew
      • 3 years ago

      I can’t remember the last time I bought a motherboard thinking I could keep it after upgrading the CPU regardless of the CPU or platform. Time will tell how AM4 progresses, but I’m thinkin the longevity of the socket on AMD platforms has more to do with their lack of progress in general up until now. Now, with the CPU having absorbed a number of components which were previously part of the motherboard, it’s much more a part of the platform itself than it used to be.

      • Gastec
      • 3 years ago

      Longevity aye? 🙂
      I’m still on P55 with a Core i7-860 here, using two SSDs, and if not for my desire to get more transfer speed and all the latest connectivity options (USB 3.1, NVMe, etc.) I could buy a new powerful video card and call it a day for the next 5-10 years or until I die, save for one MAJOR drawback: the artificial limit almost all game devs and Microsoft have imposed on their latest products to push PC users to buy the latest hardware.

      Of course I might be in the minority, as I see most people posting on tech/games sites brag about how they change their Intel CPUs and motherboards every iteration.

    • Bensam123
    • 3 years ago

    There is literally no reason to recommend Intel chips currently until they drop their prices or, more than likely, start messing with the number of cores they offer at a certain price point. Any forward-looking system builder won’t recommend those chips. Intel chips are sometimes faster ‘today’ in ‘today’s’ workloads.

    You’re essentially trading ~10% slower performance in ‘some’ current gen games for 50-100% faster performance (R5 and R7 comparatively) in games in a year or two. Even some games today take advantage of the extra cores, and Windows itself as well as background tasks will always require resources. Once we get away from the lowest common denominator of 4 cores, it’ll change. The Q6600 originally did that, and I’m sure giving someone a hex-core chip for $190 also will.

    Overwatch is currently a good example of where more than four cores helps (modern popular titles as well). I gained 30% performance going from a 4690k to a 5820k.

    The knee jerk reactions from the initial Ryzen release were cute though.

      • Redocbew
      • 3 years ago

      You must have a terrible time counting chickens.

      • f0d
      • 3 years ago

      I remember people saying that with bulldozer
      “Soon games will need more cores and then bulldozer will be better than Intel CPUs”

      Ryzen is a good CPU worth recommending but I definitely wouldn’t be betting on games using lots of cores in the future

        Heck even a dual-core i3 is about as good as an i5 in most games still

        • sreams
        • 3 years ago

        Bulldozer had a chicken/egg problem. It was so much slower than Intel’s parts that developers felt little need to optimize for it. Ryzen is a different story.

          • Redocbew
          • 3 years ago

          Until there’s another round of CPU upgrades in the consoles I wouldn’t bet on games in general being any more heavily threaded than they are now. It’s not at all unreasonable to recommend an R5 for gaming, but if gaming is your primary use case it is unreasonable to recommend against buying an i5 because of some vague impression that everything is going to be different “a year or two” from now for no particular reason.

            • mesyn191
            • 3 years ago

            Actually it’s a given that consoles will be at least as multithreaded as they are now, if not more so.

            How so?

            Because clock scaling has pretty much vanished as a driver of performance, in particular for devices that are highly power and/or heat constrained like consoles. The cost, heat, and power usage associated with putting something like a high clocked (4-5Ghz) 4C/4T Kabylake/Coffeelake or Zen/Zen+ into a console assures it won’t happen.

            Realistically any future high-end console is going to have a mid-clocked (2-3Ghz), high core/thread count (8-16 threads) CPU that uses relatively modest or small (10-35W) amounts of power and is cheap to buy, since where they really have to spend their silicon and power budget is on the GPU. Both MS and Sony have figured that out and even Nintendo is catching on there.

            • Redocbew
            • 3 years ago

            If the next round of consoles increased the number of cores from 8 to 10 or 12, but kept them small and puny I wouldn’t really call that an “upgrade”. 🙂

            The fact that everyone wants MOAR COREZ is enough to guarantee that the next consoles will have at least 8 cores even though dropping the number of cores to 6 or 4, but providing each with more resources could probably work just as well.

            • mesyn191
            • 3 years ago

            Depends on what you think of as “small”.

            I would point out Zen is a fairly small core on its own and performance wise it does fairly well. Even power wise it does well if you keep the clocks around 2-3Ghz.

            An 8C/16T Zen CPU with clocks set to 2-3Ghz would be a LARGE performance improvement over the current Jaguar 8C/8T CPU in use by both Sony and MS, within a similar power envelope too!

            Going by the way AMD’s and Intel’s current 4-6C CPUs eat power and put out heat at higher clocks, it’s not sensible to assume future consoles will use them in that manner, since future processes aren’t going to magically fix that issue. You can’t reasonably just say that more clocks/“resources” per core will somehow make up the difference without addressing that issue.

            • Redocbew
            • 3 years ago

            “Small” in terms of performance compared to the PC, I guess. Yes of course, a lower clocked 8C/16T Zen chip would be a dramatic improvement, but not just because it can run 16 threads instead of 8.

            The reason I mentioned consoles was just to provide a reason for why there would be an increase in the number of threads utilized by games. It’s not going to just happen; there needs to be a reason for it first, and I don’t think Ryzen being out in the wild now is enough on its own.

            • mesyn191
            • 3 years ago

            Strange way to think of “small” IMO, then. The manufacturers would probably look at die size first when it comes to “small,” I believe.

            Having an 8C chip support multithreading so it can now run 16T is an awfully nice perk even if it doesn’t get used much from a practical standpoint. MS or Sony wouldn’t have to worry about “locking” any cores at that point and the developers would have more freedom in allocating CPU performance.

            I would point out that 6-8 cores, even weak ones, were considered quite excessive for games back when the XB1 and PS4 were initially released.

            Software expands in scope and capability only if there is hardware available to run it… but once that hardware is available it will almost certainly get used! So if you want to show that 8T or more won’t be used or available in consoles, you have to give some really good reasons why that latter statement wouldn’t be true. You haven’t really done that.

            In effect all you’re doing is saying that the future will be exactly the same as it is now when it comes to console/game CPU/thread use without giving good reasons for that to be true.

            • Redocbew
            • 3 years ago

            Once those capabilities in hardware are *widely* available, then they’ll get used. In this case that means available across both consoles and PCs. Right now they’re not really in the same ballpark as far as CPU performance goes, and I think it’s unlikely we’ll see much movement in this regard until that changes.

            You’re probably right. I’m sure Ryzen will age gracefully, and since it’s not starting in such a hole, then chances are it’ll probably end up in a pretty good spot. However, we don’t know when that’s going to happen, multithreading is really frickin hard, and it’s always been notoriously difficult to nudge the software industry in any one particular direction even for the best of reasons. I don’t think it makes sense at this point in time to say the i5 is a bad idea for a gaming rig just because at some unknown point in the future it’ll end up on the wrong side of increasingly multithreaded software. That’ll probably be true of the R5 and R7 at some point also.

            • mesyn191
            • 3 years ago

            By default when you’re talking about possible hardware in a major console from either MS or Sony it is a given that the features under discussion would be *widely* available.

            I’d also point out that AMD has priced their 6C/12T and 8C/16T Ryzens very competitively vs Intel’s 4C/4T, 4C/8T, and up chips. And that supply doesn’t seem to be an issue at all right now. And that Intel will be introducing affordable, well, closer to affordable, 6C/12T chips with Coffeelake around the end of this year or early next year. So there is no reason at all to believe chips with more than 8 threads will be uncommon enough to deter developers in the near, much less distant, future for games.

            Doing exact predictions of where the industry will go is indeed difficult at best, but calling a trend is actually fairly easy, and that is essentially all I’m doing when I say games will indeed be making use of 8 or more threads over the next few years. There are already a few that will use 8 threads now, and 4 threads is becoming commonplace. Change does happen, it’s just often slower than anyone would really like.

            • NoOne ButMe
            • 3 years ago

            <6W at 3.3 (~5.96)
            Per Anandtech Ryzen 5 page two, and The Stilt’s voltage/clock chart.

            • mesyn191
            • 3 years ago

            Yeah single core power usage for Zen is incredibly low even at greater than 3Ghz but realistically 6-8C power usage at those clocks is more useful and more meaningful given the context of the thread.

            I’ve seen Cinebench numbers for a 1800X underclocked low enough to keep its power usage at or below 35W and the performance was still fairly impressive. Certainly bodes well for a 8C/16T laptop Zen!

            • NoOne ButMe
            • 3 years ago

            8C Zen is DOA on laptops unless one with an iGPU is made.
            Except on DTRs; do not count those as laptops.

            • mesyn191
            • 3 years ago

            At around 2Ghz an 8C/16T Zen will use around 35W and still get excellent performance. 35W is still well within tolerable for a mid to high end laptop that isn’t a DTR.

            Zen scales down very well, much like Kabylake and Skylake do. It’s pretty clear so far that AMD made sure it’d do well at low power.

            • NoOne ButMe
            • 3 years ago

            You need a dGPU. Outside of DTR it’s DOA because of that.

            I guess for a gaming laptop. But you would want 4C, or maybe 6C, and higher clocks in that case.

            • mesyn191
            • 3 years ago

            LOL no. Plenty of mobile dGPU’s that use 10W or less. They’re useless for gaming but if you care only about CPU performance (ie. for a mobile work station) then they’re fine.

            If you want to talk about serious gaming power for a dGPU then your power budget gets blown out anyways since they’ll add another 40-50W at a minimum.

            • NoOne ButMe
            • 3 years ago

            Octo core Zen+/++/+++ for next gen consoles 8C <35W 3.3Ghz+

            Should be possible on so called 7nm from TSMC, Samsung or GF

            • mesyn191
            • 3 years ago

            I was assuming more current timeframes and not possibly distant future ones.

            Yeah on 7nm it will probably be doable to have a 25-35W 8C/16T Zen+ at 3.3Ghz or more but timeframe for that in the market place is probably farther out than what the fab guys are saying right now.

            I think assuming significant delays with new processes is fairly reasonable from here on out.

          • mesyn191
          • 3 years ago

          It did indeed. Even then, BD has aged surprisingly well even if it does still lag significantly behind Intel’s CPUs in game/app performance.

          AMD was probably right in principle that applications in general would become more multithreaded over time, but they badly misjudged the timing of when that would occur, and that resulted in BD being a poor offering in the CPU market, even on its own merits vs. previous AMD Phenom chips, much less Intel’s Sandy Bridge.

        • Bensam123
        • 3 years ago

        Yup… Difference being Bulldozer 4c/8t was an attempted matchup with Intel’s 4c/8t; however, oftentimes its ‘8t’ was 10-30% slower than a 4c Intel.

        This time around you have a 6c/12t being matched up against a 4c, and each one of those 6 cores either performs on par with one of the 4 cores from Intel or is ~10% slower. Adding to it, that means it has 2c more, not even counting the threads… which increases multi-threaded performance even more.

        Now taking it a step further, you have an 8c/16t chip being placed against a 4c/8t chip. Once again, each one of those 8cs performs just as well as or ~10% slower than that 4c chip.

        This is a completely different ballgame and while I like that you’re aiming for the ‘moar doesn’t mean better!’ approach, in this case it most definitely does.

        No one should be recommending Intel chips until their core counts at certain price points are adjusted. If you’re hedging things on that 10% extra performance in some games today, betting that the additional 1.5-2x performance in the future won’t remotely pan out, that’s your prerogative.

        …and indeed maybe we will still have the lowest common denominator being 4c four years from now, but as I mentioned there are already some games that take advantage of more than four cores, or fully utilize those cores enough that extra ones will take the excess load off of them (background tasks like Windows just operating).

      • derFunkenstein
      • 3 years ago

      Until Zen matches Intel’s clock speeds, there’s still SOME reason to suggest Intel CPUs. Namely lightly-threaded workloads.

        • NoOne ButMe
        • 3 years ago

        not to mention, depending on the workload, Intel has a 10-20% (usually at the upper end of the range) or so IPC advantage.

          • mesyn191
          • 3 years ago

          More like 9% better IPC. The applications where it does much better than that usually use AVX2, which isn’t in widespread use yet, nor will it be for years, since not much software benefits from it.

        • Bensam123
        • 3 years ago

        Not sure about that; as long as they offer similar performance, the whole 2-4 more cores thing starts coming into play. Most games today aren’t made for more than four cores or don’t even fully utilize those four cores. Overwatch, for instance, fully utilizes a four-core processor, so there is even reason to go from 4 to 6 cores (putting aside threads). This is in high-FPS scenarios; if you’re bottlenecked by your GPU you won’t see this.

        That’s why as a baseline a hex core is basically a ‘go-to’ solution for recommendations right now; up until this point they’ve never been cheap enough to do that, though. Games that do properly utilize a 4 core will see benefit from it, as Windows and background tasks will spill over to the other two cores. More forward-looking, 8 cores will become more of a staple, but not until we get off the lowest common denominator, which is 4 cores.

        Which is why I would suggest a $190 hex core and if you’re really looking down the line, the oct core. No sane person should be able to recommend Intel chips until they start offering similar core counts at similar price points, save very niche case scenarios.

        Surprisingly I haven’t seen a single review that took the R7 and turned off four of the cores and put it up against an i5 (or conversely shown CPU utilization/graphed it). A lot of people aren’t putting into context that even though the 8c/16t R7 has lower performance than a 4c/8t i7 in some current gen titles, it isn’t being maxed out or anywhere close to it. There is a lot of headroom there.

          • Redocbew
          • 3 years ago

          TL;DR Having six cores sitting idle most of the time in my PC instead of four makes me feel warm and fuzzy inside.

          I can respect that.

            • Bensam123
            • 3 years ago

            I guess if you set your affinity that way. Next time you’re playing Overwatch check your task manager (assuming you aren’t running on a potato).

            ‘Tis great you like making uninformed opinions in a sarcastic manner though.

            • Redocbew
            • 3 years ago

            On the contrary, I’m giving you the benefit of the doubt while you’re trying to justify personal preference with lame excuses. None of this stuff holds water as a reason why you need a six core chip. Background tasks in windows? That’s your reason? That’s going to easily be lost in the noise on an i5. If that wasn’t the case you would have seen it in the tests that were run against the R5 1600. It’s not like the gaming benchmarks you see here on TR and elsewhere somehow were done outside the OS.

            But hey, far be it from me to get on someone’s case about going for overkill when building a PC, considering how many times I’ve done it myself.

            • NoOne ButMe
            • 3 years ago

            Most gaming benchmarks are done with little background tasks.

            I have 2 web browsers (10-50 tabs between the two), typically a music player or YouTube/streaming service, a few game launchers (Blizzard Services, GoG Galaxy, on very rare occasion Steam for Civ5/6), and a few other odds and ends. I think I’m in a similar range of stuff in the background as most people.

            Do you game on a near-clean install with nothing besides games, monitoring software and OS running? I don’t if you game this way, but I know most people I’ve met in person, in games and on forums don’t.

            So, have you ever said an i7 is a good buy? Why waste $100 on 4 more threads?

            • Redocbew
            • 3 years ago

            I’m confused. I’m not sure exactly what you’re arguing here, so I may miss the mark with some of this, but let me try to break it down point by point.

            [quote<]Most gaming benchmarks are done with little background tasks.[/quote<]

            Yeah, because that gives you the cleanest picture of the behavior of the thing you're testing. If you're not going to use methodology which can fully expose the behavior of the hardware, then why run the tests at all?

            [quote<]I have 2 web browsers (10-50 tabs between the two), typically a music player Or YouTube/streaming service, a few game launchers (Blizzard Services, GoG galaxy, on very rare occasion Steam for Civ5/6). And a few other odds and ends.[/quote<]

            Applications in user land may be more demanding, but even if you've got that stuff minimized and running in "the background" that's not usually what's referred to as "background tasks" of the OS. Those are often designed to stay out of the way, and honestly I doubt a modern CPU would have much trouble with the stuff you've got listed there either.

            [quote<]I think I'm in a similar range of stuff in the background as most people.[/quote<]

            Probably. The only time I boot into windows is for gaming, so that side of it actually is pretty minimal, but like you I don't make any effort to shut stuff down before playing some game which has a quick and easy Linux client.

            [quote<]So, have you ever said an i7 is a good buy? Why waste $100 on 4 more threads?[/quote<]

            Because 7 is a bigger number than 5? I actually did buy an i7 recently, but it wasn't for gaming. 🙂

    • TheSeekingOne
    • 3 years ago

    Going by anandtech’s review, the 4 core 1500X is as good as the i5 7500 in single threaded tests. In multi-threaded workloads, the 1500X bests even the 7600K. The Ryzens still do not perform as well as the Intel CPUs in gaming, but they’ll get there; it’s a matter of time.

      • Krogoth
      • 3 years ago

      The difference in gaming performance isn’t earth-shattering either. It is too close to call without graphs and charts.

    • ozzuneoj
    • 3 years ago

    Excellent review Jeff! Get well soon!

    Anyone have any comments on older game performance, like Anandtech recorded in Rocket League?
    [url<]http://www.anandtech.com/show/11244/the-amd-ryzen-5-1600x-vs-core-i5-review-twelve-threads-vs-four/14[/url<]

    Personally, I am spoiled by high frame rates in games from using a high refresh rate CRT and now a BenQ XL2720Z at 120Hz+Blur Reduction. A CPU that struggles to reach 70fps average (99th percentile) in a situation where significantly cheaper CPUs are getting around 120fps is cause for concern... especially considering the age of the game. I'm hoping that this is an outlying case, as Ryzen looks pretty solid for gaming aside from this.

    Are there any other reviews that touch on Ryzen's ability to provide ultra high FPS in twitchy fast paced games, especially older ones? Competitive performance in the latest games (which I don't play) is great, but if my overclocked 2500K is similar or faster in the older games I play then it wouldn't be an upgrade to go to Ryzen.

      • Demetri
      • 3 years ago

      Look at the results on AMD gpus in that review. 480 + Ryzen is killing it in Rocket League; just as good as a 1080 + Intel @ 1080P. Seems to be some issue with Nvidia’s driver and how it interacts with Ryzen in that particular game.

        • ozzuneoj
        • 3 years ago

        Good catch! I didn’t notice that… must have gotten lost in the wall of graphs. Very surprising to see the RX480 actually have better 99th percentile frames than the 1080. This is a situation where I’d like to see TR’s results, as I trust their methods more.

        This is certainly something I’d like to see investigated more. I can’t seem to find any other Ryzen reviews that cover old games.

          • flip-mode
          • 3 years ago

          70 FPS in Rocket League is definitely troubling.

      • odizzido
      • 3 years ago

      That’s not the only example of AMD GPUs doing really well when combined with ryzen. There was a video I watched a while ago which had benchmarks showing crossfire 480s with ryzen beating a 1080 with a high end intel CPU in the new tomb raider game. I don’t have the link handy anymore but I’ve linked it previously in the comments section on TR.

      Looks like there is some bug with Nvidia drivers which tanks performance when paired with a ryzen CPU. It certainly makes intel look better in these benchmarks and I am curious how many games are affected. Jeff replied to me when I posted this but either he was sick and didn’t have time, didn’t care, stopped reading the comments, or didn’t bother watching the video I linked. Whatever the reason TR did not look into this so only anand has any results showing this problem that I’ve seen.

        • Ninjitsu
        • 3 years ago

        Did they try 2x 1060s in SLI, though?

        EDIT:
        Because you’re basically saying “two mid-end GPUs are faster than a high-end GPU when the workload is GPU bound”.

        Which isn’t saying much.

          • derFunkenstein
          • 3 years ago

          That’s a dumb question. The 1060 doesn’t support SLI.
          [url<]https://techreport.com/review/30812/nvidia-geforce-gtx-1060-graphics-card-reviewed[/url<]

        • whoistydurden
        • 3 years ago

        Was it the video “Ryzen of the Tomb Raider” by AdoredTV?
        [url<]https://www.youtube.com/watch?v=0tfTZjugDeg[/url<]

    • Anovoca
    • 3 years ago

    That 1600 has peaked my interests some. I don’t have the budget for a new build just yet, but if AMD can put out an APU in that price/performance range I may have the perfect chip for a mITX steam stream console to throw in the bedroom.

      • ronch
      • 3 years ago

      I think it’s not ‘peaked’, but rather, ‘piqued’.

        • dpaus
        • 3 years ago

        He was looking at the test results, so he clearly means ‘peeked’

          • ronch
          • 3 years ago

          Why did he have to peek?

            • UberGerbil
            • 3 years ago

            Because he didn’t pay for a full view.

      • Demetri
      • 3 years ago

      Raven Ridge APUs will max out @ 4 cores, same as Intel:

      [url<]https://cdn.videocardz.com/1/2017/03/AMD-Pinnacle-Ridge-Raven-Ridge.jpg[/url<]

    • Jigar
    • 3 years ago

    Many thanks for using 3200MHz RAM, Jeff. Impressive results; I was looking forward to this CPU.

      • Mr Bill
      • 3 years ago

      Looks like it is the same DDR4 as used for the first Ryzen review (G.Skill Trident Z DDR4-3866 (rated) SDRAM). Has an updated BIOS made the faster timing possible? I’d be interested in how that was accomplished. Most of the Amazon reviews for the X370 motherboards are saying memory speed is limited to 2666MHz.

        • Jeff Kampman
        • 3 years ago

        I ran the memory at DDR4-3200 speeds despite the 3866 rating.

          • Mr Bill
          • 3 years ago

          In the first Ryzen review, the table says you tested at 2933 MT/s using that same memory. I was wondering how you got it up to 3200 MT/s for the second review.

            • Jeff Kampman
            • 3 years ago

            Ah, fair enough. Firmware updates have improved support for higher memory speeds on the Gigabyte motherboards I’ve been using.

            • Mr Bill
            • 3 years ago

            Thanks, I thought it was probably a BIOS update. I’d like to know if the X370 boards from Asus and MSI have managed to allow higher memory speeds also.

            • Renko
            • 3 years ago

            My Gigabyte AB350-Gaming 3 with the F5 revision won’t get my 32GB-3000 sticks (two) up above 2133MHz regardless of anything that I do on my part. I can’t even get the timings to post. However, I have my 1700X running at 4.15GHz using an H115i. If I benchmark, it gets close to 80C, but gaming doesn’t go above 60C; right now it is idling at 43C.

            Also ran the Cinebench R15 benchmark that you did during your Ryzen 7 review and I managed an 1800 which is 16.5% higher than stock 1700X and 10.5% higher than a stock 1800X and just under 5.3% slower than the 6950X. It did 131W during the test, but I would say that is not too shabby. I’ll take it. Just wish the IMC and motherboard would behave and I would be quite happy indeed.

            Edit: Got it stable at 4.15GHz, 4.2 was a no go. Still only 2133MHz on the RAM despite the new F6 BIOS revision I just flashed.

            • freebird
            • 3 years ago

            To be honest, that is why I bought one of the highest end boards available… I didn’t want the quality of the motherboard or perhaps marketing limiting top end OCing and memory results. Especially since AMD stated AM4 will be around for several generations, and I’m hoping Ryzen 2 or 3 are that much better! My AsRock Fatal1ty X370 Professional Gaming specifically stated DDR4 3200+ memory support (OC)

            Not saying vendors will intentionally gimp BIOS to force people to buy more costly mobos, but you never know either… anyhow I bought mine because this sounded good:

            ASRock Super Alloy
            – XXL Aluminum Alloy Heatsink
            – Premium 60A Power Choke
            – Premium Memory Alloy Choke (Reduces 70% core loss compared to iron powder choke)
            – Dual-Stack MOSFET (DSM)
            – Nichicon 12K Black Caps (100% Japan made high quality conductive polymer capacitors)

            and a crapload of features that could choke a donkey… 5Gb lan plus Intel 1Gb, 2 m.2 slots, AC wifi, SB Cinema 3, USB 3.1, etc.

            • whoistydurden
            • 3 years ago

            If you’re using Corsair memory, you’ll have a hard time getting dual-rank memory kits to run faster without increasing DRAM and VSOC voltages. XMP will be unlikely to work too. If you have a Samsung B-die based DDR4 kit, there’s still hope for decent memory overclocks. A member over at overclock.net was able to overclock a 2×16 GB G.Skill Trident Z kit to 3200 MHz at 14-14-14-32-1t with more recent BIOS releases.

        • freebird
        • 3 years ago

        The latest BIOS update (v2.0) for my AsRock Fatal1ty X370 with the new AMD AGESA 1.0.0.4a let me push my 64GB (16GBx4) of G.Skill 3000 CL14-14-14-34 from 2667 up to 2993 and even 3200, but with one caveat: I could only get it running higher than 2667 by bumping the CL from 14 to 18.

        So now I’m running a full 64GB with CL18 14-14-14-34 @ 3200 @ 1.4v (yeah, for 3200 I had to nudge the volts up from 1.35v to 1.4v), but this gets the memory controller and fabric(?) running at 1600, which also helps reduce memory latency (mine is now down to 74.2ns), which I believe was part of the Ryzen CPU issues in early 1080p gaming.

        That’s why I updated my Newegg reviews several times, with EMPHASIS on UPDATING your BIOS ASAP if the mobo doesn’t ship with the latest version. Since I ordered mine on the Ryzen release, mine had v1.20, which I upgraded to v1.54beta, then v1.60, and now (this past week) v2.0.

        Side Note: Not sure about any others, but my BIOS has 3200 as the top speed selection. You can push it higher with the bus speed, which I was able to push to 106 in the BIOS, but my 960 EVO NVMe freaks out if the bus is pushed above 102! So that is a limiting factor.

        Maybe higher settings in a future BIOS? Or Ryzen 2, supposedly due in 1H 2018? I feel pretty satisfied with the results of my R1700 running at 4.0Ghz @ 1.38v and my memory settings now.
        I always like mucking around with stuff, though! : )

        • whoistydurden
        • 3 years ago

        For many X370 boards, the XMP/DOCP feature was working on most BIOS releases. DDR4 kits using Samsung ICs were quite capable of running at 3200 MHz but required manually entering timings and voltage. Most of the Amazon reviews you read involved users trying to use DDR4 kits from brands like Corsair, which almost always use Hynix ICs. The issue with Hynix chips is that they don’t run well with Ryzen’s default 1T command rate. Asus seems to have improved memory compatibility the most so far, specifically with their C6H board.

    • Laykun
    • 3 years ago

    Now these are interesting CPUs; the 1600X is better than I was expecting. It really now comes down to what platform features you want supplied by the motherboard. The fact that they come with the Wraith cooler makes the price/performance graph potentially misleading, as the K-series CPUs still need some sort of cooling solution, which is an additional cost (unless you’re balla and just run HSF-less, god speed memester). The only problem is figuring out the price equivalent of the Wraith cooler: is it a $10 cooler, or a $40 cooler? Where does it sit? Regardless, I think this makes the 1600X a compelling purchase.

    • HERETIC
    • 3 years ago

    Well done Jeff.
    We wish you a speedy recovery. Perhaps this experience will encourage you to
    have a yearly flu jab.

    How all the i5s line up really shows us holdouts the smooth experience a 1600X
    can bring to gaming.
    The only “what if” which might come as the process matures with a few tweaks
    is a 10% OC above the turbo speed. That would be the icing on the cake…

      • MOSFET
      • 3 years ago

      [quote<]Well done Jeff.
      We wish you a speedy recovery. Perhaps this experience will encourage you to
      have a yearly flu jab.[/quote<]

      Huh, I was just dropping in after reading the review to congratulate Jeff on some really great writing, especially for a sick guy. Just before reading your comment, I chuckled to myself, maybe Jeff should get the flu more often. (No, not really!)

      Seriously, though, great writing in there Jeff.

        • Mr Bill
        • 3 years ago

        Wishing you a more efficient as well as a speedy recovery.

    • Welch
    • 3 years ago

    Something I wondered about is whether all 4 cores on a CCX ever become L3-limited. I mean, we are talking 8MB of L3 per CCX, which is very generous. But does the 3+3 setup for 6c/12t parts like the 1600X afford those cores any benefit from having only 3 cores instead of 4 sharing that 8MB?

    Any insight on programs that are L3 heavy?

    • Bumper
    • 3 years ago

    Not too shabby amd. Not too shabby indeed.

    • albundy
    • 3 years ago

    any new am4 motherboard releases or news on releases? the current crop is atrocious.

    • deruberhanyok
    • 3 years ago

    The one thing I’m curious about, and I don’t know that we’ll ever see, is what performance difference there would be, if any, on the quad-core part if it were a full CCX instead of two half CCXes.

    I mean, I guess they had to do something with the dies that couldn’t handle all four cores operating, and maybe as yields improve we’ll get a mid-cycle refresh with higher clocks and smaller parts (or maybe when the second generation tech launches) and future quad core parts being a single CCX module.

    I’d expect it to be just really latency-intensive tests that would see a difference, but that could matter to some.

    At any rate, happy to see these parts are offering viable alternatives to Intel’s mid-range. It’s about time we got some competition again!

    • chuckula
    • 3 years ago

    Hey Jeff,

    Kudos on the Doom [and Deus and Crysis] frametime graphs where you threw in the beyond-6.94ms measurements (that corresponds to 144Hz monitors for those of you in Rio Linda).

      • derFunkenstein
      • 3 years ago

      -1 for a Rush Limbaugh reference.

      • Airmantharp
      • 3 years ago

      I certainly appreciated it- the 8.3ms and 6.94ms metrics show where the rubber meets the road when it comes to CPU game performance.

      Of course, the pleasant surprise is that the Ryzen CPUs are catching up to the ‘Lake cores with faster RAM support at these higher framerates!

    • ermo
    • 3 years ago

    Just wondering what the best price/performance air cooler solution is on today’s market? I’m currently using a Hyper 212 Evo, but I’m open to replacing it to squeeze the temps down a little on my delidded i7-3770k running at 4.5GHz@1.3v.

    If this review is any indication, that would be a pretty good use of funds for the time being, especially if it can be reused with a potential upcoming Zen or Zen+ build.

      • ImSpartacus
      • 3 years ago

      I think what you’ve got is generally agreed to be a pretty good value.

    • coolflame57
    • 3 years ago

    [quote<]Now, for some bad news. While I expect exciting numbers from the Ryzen 5 1500X and Ryzen 5 1600X in our productivity benchmarks, those numbers will have to wait for a little bit. I've been battling a severe case of the flu since the middle of last week, so testing and writing for this review has been slow going. After some deliberation, I decided to go ahead and publish gaming benchmarks for the Ryzen 5 family first. I expect that many builders shopping for a CPU in this price range are more interested in a gaming PC than an all-out workstation, so I wanted to get this vital information out the door rather than publish nothing at all this morning. We'll have full productivity numbers for the Ryzen 5 chips soon, but for now, let's get our game on.[/quote<]
    If this isn't dedication, I don't know what is. I wish more people were as hardworking as Jeff.

    • Shobai
    • 3 years ago

    Looking forward to your "What isn't being tested today" update for this article as well, once you're better.

    • ronch
    • 3 years ago

    I've seen some other reviews over at other sites, specifically AnandTech, and while gaming still kinda trails Intel, you can't deny that the 1600X totally slaughters the i5 in many if not most multithreaded scenarios. While I've always voiced my concern regarding Ryzen's gaming prowess, I think if I'm buying right now or this year, it would be very hard to pick the i5. The 1600X is just the better potato chip.

      • srg86
      • 3 years ago

      Personally, I'd go for the i5 because of the integrated graphics and the chipset, though I'd probably go for an i7 instead.

      The multithreaded compiling performance is tempting, though (not that I'm interested in buying any time soon).

      • BurntMyBacon
      • 3 years ago

      [quote=”ronch”<]The 1600X is just the better potato chip.[/quote<]
      Where do they grow the potatoes that can do that? (o_O)

        • alloyD
        • 3 years ago

        Latvia

    • Ninjitsu
    • 3 years ago

    Ooh that 1500X is very interesting. Can’t wait to see how the R3 parts perform!

    BTW Jeff, would it be possible to do an Arma benchmark along with the productivity stuff?

    (Get well soon!)

    • dpaus
    • 3 years ago

    Let's hope that AMD offers at least some APU models with the biggest GPU they can squeeze into the thermal/power envelope of the socket. With these chips, they now have a CPU architecture that's nipping at Intel's heels (if not its toes!), and GPUs are the one area where they hold a significant advantage over Intel. Do it right, AMD, and the combination may be just what you need to return to real relevance in the market.

    • sophisticles
    • 3 years ago

    I can't help but think that the best move for most people will probably end up being picking up a 7350K, overclocking it, pairing it with a motherboard that supports Optane, and installing the OS on one of those Optane drives.

    When I first heard about Ryzen I was so excited; I had even put aside $500 to build a Ryzen-based system. But I find myself left feeling flat by all the reviews I see of these processors.

    I still have hopes for the Ryzen-based APUs. If AMD releases one of those APUs with the AVX issues corrected, a decent integrated graphics chip (maybe R7-class), and some integrated HBM, that may entice me to build an AMD system again.

      • Magic Hate Ball
      • 3 years ago

      That's not how Optane works…

      Also, you'd be better off getting into the AM4 platform, as LGA1151 will likely get only one more CPU generation before being discontinued.

      Secondly, you can pair a $220 R5 1600 with an $80-100 B350 board and still be able to overclock to 3.8GHz+. The Ryzen also comes with a Wraith Spire cooler that can handle OCing up to that speed, while the i3-7350K requires the additional purchase of a cooler.

      Then wait for another generation or two of Zen on this same platform and upgrade then if it makes sense.

        • K-L-Waster
        • 3 years ago

        Yes, LGA 1151 is likely to be superseded soon. Having said that, every time I've bought an Intel platform with the idea that I could upgrade the CPU later, I've never ended up bothering. The CPU's lifespan was long enough that upgrading within the same platform wasn't really worth the spend.

        If AM4 ends up being as long-lived as AMD's previous platforms, that might not be the case for Ryzen, but OTOH AM2 and AM3 were really showing their age. One would hope AMD doesn't let AM4 remain the standard for quite that long (unless connectivity standards go into a serious stagnation phase over the next 5+ years…).

      • raddude9
      • 3 years ago

      [quote<]picking up a 7350k[/quote<]
      Huh? The 7350K has been essentially pointless since Intel hyperthreaded the Kaby Lake Pentiums. It's double the price of the G4620 and almost triple the price of the G4560, with very little to recommend it over either of those chips apart from a marginal clock-speed increase. And while basic users will still be fine with a fast dual-core, the days when they could be recommended for gaming have passed.

      [quote<]Optane[/quote<]
      The Optane angle is a red herring: there's no evidence that it will substantially speed up what you would do on a low-end computer, and it will almost certainly be too expensive to put in a low-end computer for quite some time. And why wouldn't you be able to put an Optane drive in a Ryzen machine?

      [quote<]When I first heard about Ryzen I was so excited[/quote<]
      Why? AMD's first statements about the Zen core indicated it would only give a 40% uplift in IPC over the Bulldozer line. That was actually a bit worrying at the time; it wasn't until after the chip was released that we could see they had surpassed this by a respectable margin.

      [quote<]if AMD releases on of those apu's with the AVX issues corrected[/quote<]
      What AVX issues? Do you perhaps mean the half-rate AVX2 implementation? Sure, that hurts the chip's performance in a couple of benchmarks, but luckily for AMD many of those AVX2-heavy benchmarks (DAWBench DSP and STARS Euler3D) are heavily multithreaded, so Ryzen essentially ends up with performance similar to a quad-core chip's. The thing is, these AVX2 benchmarks are also very dependent on memory bandwidth, so upping the AVX2 speed in Ryzen won't help in these types of tests without also upping the memory bandwidth. Personally, I'm quite happy with the Ryzen balance: the AVX2 performance is near as good as you'll get on a dual-channel system, and for non-AVX2 multithreaded tasks those 8 cores are a massive benefit.
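
      To put rough numbers on the half-rate AVX2 point: a back-of-the-envelope peak-FLOPS sketch, assuming Zen 1 issues 256-bit AVX2 through two 128-bit FMA pipes while Kaby Lake has two full 256-bit pipes (my own ballpark arithmetic, not benchmark results):

# Theoretical peak FP32 throughput per chip, counting an FMA as two FLOPs.
def peak_gflops_fp32(cores, ghz, fma_pipes, pipe_bits):
    lanes = pipe_bits // 32                     # FP32 lanes per FMA pipe
    return cores * ghz * fma_pipes * lanes * 2  # 2 = multiply + add

print("R5 1600X:", peak_gflops_fp32(6, 3.6, 2, 128), "GFLOPS")  # ~346
print("i5-7600K:", peak_gflops_fp32(4, 3.8, 2, 256), "GFLOPS")  # ~486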

        • sophisticles
        • 3 years ago

        [quote<]And why wouldn't you be able to put an Optane drive in a Ryzen machine?[/quote<]
        Just a little thing: Optane is Intel storage technology designed to work only with Kaby Lake and later INTEL processors.

        As for the other fellow who said [quote<]That's not how Optane works[/quote<]: I'm going by the assumption that when one installs an Optane drive on a supporting motherboard, it will be visible to the BIOS/UEFI, as all devices must be before an installed OS can see and use them. I am also guessing that such a drive would be bootable, which means one should be able to install an OS onto it.

        And as someone else has already pointed out, the $60 Kaby Lake Pentium is a much better deal than the 7350K or anything AMD is offering.

          • Redocbew
          • 3 years ago

          As things stand now, the usefulness of Optane compared to an NVMe SSD is questionable. Considering how Intel has "unveiled" this super-amazing and totally awesome new kind of memory like six times already, I'm not going to hold my breath waiting for it to get here for real.

          Aside from that, Optane is just Intel's product name for the phase-change non-volatile memory it developed with Micron. JEDEC has already announced something that looks very similar, called NVDIMM-P, as part of DDR5, so if this kind of non-volatile storage takes off, don't expect it to stay exclusive to Intel forever.

          • raddude9
          • 3 years ago

          [quote<]Just a little thing: Optane is Intel storage technology designed to work only with Kaby Lake and later INTEL processors.[/quote<]
          Can you point me to the web page where Intel says it won't allow Optane-based storage technology to run on non-Intel processors?

            • Waco
            • 3 years ago

            He can’t, because it doesn’t exist. He’s mixing up using Optane as a cache (Intel only) versus using Optane as a fast SSD (which anyone can use).

          • Zizy
          • 3 years ago

          The largest Optane cache module is just 32GB. You might have some issues updating Windows, and you can completely forget about putting anything but Windows on there.
          Assuming it is even exposed as a separate drive in the UEFI, I expect it to be pure cache, where you don't get much say in what will be on the drive and what won't.

      • flip-mode
      • 3 years ago

      To be candid, anyone "left feeling flat" by reviews of Ryzen has serious and substantial problems managing expectations. I mean, can you please share what your actual expectations were and how they were not met? Can you link to any statements from AMD that led you to believe Ryzen would be some kind of undisputed champion of a processor? I'm curious exactly what you were led to believe and what exact statements from AMD led you to believe it.

      I would assume you were completely devastated, to the point of tears and a prescription for Zoloft, by Kaby Lake, Skylake, Broadwell, and Haswell, since those chips provided only extremely modest improvements over their predecessors.

      I don't know how long you've been building up hopes for the next AMD chip. I've been doing it since 1999, when I built an Athlon 800 system. Regardless of whether it's the next chip from Intel or AMD, you need to MANAGE YOUR EXPECTATIONS. The future is usually just another iteration of the past. AMD actually exceeded expectations here. AMD did what they said they would do, which is, factually speaking, better than what usually happens.

        • shank15217
        • 3 years ago

        Flip-mode is krogothed by your lack of realistic expectations. Manage your krogoth better, please.

        • cegras
        • 3 years ago

        Sophisticles = sophistry.

      • Kougar
      • 3 years ago

      You should check Optane's performance levels again. It's a caching drive with 280MB/s sequential write speed. Sequential reads are a third those of a 960 Pro. IOPS performance is equal to or worse than a 960 Pro's.

      Lower latency than SSDs is nice, but that in and of itself isn't going to make up for the worse performance. It also adds another layer of software abstraction and is highly limited in supported chipsets and processors.

      Instead of buying an Optane cache plus an SSD, why not just buy a 960 Pro and enjoy the simplicity and faster performance?

        • maxxcool
        • 3 years ago

        Or a whole *STACK* of Samsung 960s for the cost of the Optane… or, even better, max out your board's RAM, THEN buy more 960 Pros…

    • Takeshi7
    • 3 years ago

    I wonder how the G4560 would have fared against Ryzen 5 in these gaming benchmarks. Obviously Ryzen would smoke it in workstation loads, but in gaming, at $65, I don't think AMD can compete at that price point, even with Ryzen 3 on the way.

    • rudimentary_lathe
    • 3 years ago

    Get better Jeff.

    So is AMD not releasing a CPU with just one CCX and all its cores enabled? I'd be very curious to see how that would perform compared with the R5 1400, which has four cores split across two CCXes.

    If I were buying a CPU today, I'd be looking very closely at the R5 1400 with the intent of overclocking it. I've always gone for the best bang for the buck, and an R5 1400, assuming it can be overclocked to 4-4.2GHz, looks like great value. Hardcore gamers will stick with Intel, of course.

      • NoOne ButMe
      • 3 years ago

      That will probably be the R3 CPUs, which will likely be Raven Ridge without the iGPU (or with it?).

      Anyone with a Ryzen 7 can also test this, and I believe there was between 0% and 10% difference in most tests.

      Although when you factor in the extra 8MB of L3 cache, I'm not sure how that shakes out.

      • DoomGuy64
      • 3 years ago

      The upcoming APU probably is. Current stuff, no. The 1600X looks really good from my perspective (currently on Haswell), but is six cores really enough to compel me to upgrade? Maybe not. A 1700 with some OC seems more appealing, but for anyone else I think the 1600X is the absolute sweet spot for a gaming CPU. High stock clock speed, six cores, and the full cache pretty much mean Intel has nothing that actually competes with this CPU. It also helps that 90% of the platform issues associated with the R7 have been dealt with. Take the cheap CPU, throw some DDR4-3200 on a B350 board, and that beats the pants off Intel, whose equivalent chip is vastly more expensive.

      As for the 1400, I don't think it's worth bothering with unless you're budgeting. It probably isn't as well-binned as the 1500X, OC headroom may be limited, the cache is halved, and the CCX issues mean you shouldn't cheap out on slower RAM. If you are on a shoestring budget, sure, but I wouldn't want to deal with the downsides. It's better than cheap Intel, but not enough to justify passing over the 1600X sweet spot, IMO. It just doesn't appeal to my tastes, though it's still a really good value chip; it all depends on what you're willing to sacrifice for the price. That's probably more my Haswell i7 talking than whatever you might be using atm.

      • Takeshi7
      • 3 years ago

      Hardware Unboxed already tested this, and they actually found that with all four cores on one CCX, performance degrades because there is less cache available than with two cores per CCX. Of course, since the R5 1400 has half the cache anyway, it might perform better with all four cores on one CCX instead of two.

      [url<]https://www.youtube.com/watch?v=Rhj6CvBnwNk[/url<]

    • SecretMaster
    • 3 years ago

    Will the second part of this article also contain power consumption numbers? And for that matter, will it contain power consumption numbers for the 1800X/1700X/1700? (AFAIK those were never published by TR.)

    • barich
    • 3 years ago

    What's amazing to me is that a six-year-old CPU with an overclock is still so competitive.

      • Sputnik7
      • 3 years ago

      You know, it's really interesting. I just recently went from an i5-2500K @ 4.5GHz with DDR3-1333 to an i7-7700K at stock speeds with DDR4-3200. My GPU is a 970, and I'm running everything on a 144Hz G-Sync Dell S2716DG.

      The difference in performance between the two CPUs is night and day; it's not even close. My 7700K is giving rock-solid and higher framerates than my OCed 2500K ever did.

      At least at 1440p, I would say the 2500K bottlenecked my 970. I'm not sure why my results are so different from TR's, but I know I'm not the only gerbil here who would take this stance.

        • Voldenuit
        • 3 years ago

        Playing multiplayer? I've found that CPU requirements become more intensive there than in single-player.

        Unfortunately, this is harder to control for when benchmarking, so the vast majority of benchmarks I see are in single-player or offline modes.

          • Sputnik7
          • 3 years ago

          I've seen it even in some single-player games, but yes, mostly in multiplayer.

          I agree that MP is much harder (near impossible) to reliably benchmark.

          • ImSpartacus
          • 3 years ago

          I've heard that multiplayer hypothesis before, and it seems pretty reasonable.

          However, I don't know why no one even attempts to test it.

          Is it challenging? Sure, but you could overcome the increased variance with sample size.

          Maybe it would require a sample size that's unreasonable for a "regular" benchmark, but as a one-off? With one hardware-game configuration that's known to be reasonably responsive to CPU performance? That's doable. And you'd at least have one solid, vetted data point that either supports the hypothesis or doesn't.

            • Sputnik7
            • 3 years ago

            In lieu of an actual distribution, you could use a playable/unplayable (essentially pass/fail) metric.

            Statistics dictates that you need roughly 10x the data for a pass/fail test to be as informative as a full distribution, but it still may be worth testing.

            As a first pass, logging your FPS/frame times over 100 multiplayer matches might give some sense of CPU performance. But doing this with 10 different CPUs quickly becomes time-consuming (unless you can get 10 different players, each on a different CPU with otherwise identical system configs, to each play 100 MP rounds of some popular MP game).
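
            A minimal sketch of that pass/fail idea, with made-up per-match numbers standing in for real logs (the frame-time budget, match counts, and values are all illustrative assumptions):

# Compare two CPUs by the share of multiplayer matches whose 99th-percentile
# frame time stays under a budget, with a crude normal-approximation interval.
import math

def pass_rate(per_match_p99_ms, budget_ms=16.7):
    n = len(per_match_p99_ms)
    rate = sum(1 for p99 in per_match_p99_ms if p99 <= budget_ms) / n
    half_width = 1.96 * math.sqrt(rate * (1 - rate) / n)  # ~95% interval
    return rate, half_width

# Stand-ins for ~100 logged matches per CPU (hypothetical numbers).
logs = {"7700K": [12.0, 13.5, 16.9] * 34, "7350K": [14.2, 18.9, 17.5] * 34}
for cpu, p99s in logs.items():
    rate, hw = pass_rate(p99s)
    print(f"{cpu}: {rate:.0%} of matches playable (within about {hw:.0%})")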

            • ImSpartacus
            • 3 years ago

            I don't think you need to replicate it with a full lineup of a dozen CPUs.

            Just pick two "easy" options, like a 7700K and a 7350K.

            Right now, I know of zero results that support the existence of this phenomenon. There's worth in just demonstrating that it is a real thing that actually happens.

            • Firestarter
            • 3 years ago

            Comparing two configurations head-to-head at the same time would definitely be doable, but that only works as long as you have two identical setups apart from the components you want to test, and probably at least a third PC to help control and measure. I'd love to see someone do it, but I don't think the results would be very surprising.

            • ImSpartacus
            • 3 years ago

            I feel like a 7700K versus a 7350K would be perfect. The same exact platform, but with a large enough difference between them.

            Bonus points for clocking them identically (preferably with a modest ~4.5GHz OC).

        • travbrad
        • 3 years ago

        I saw similar things in some games going from a 2500K @ 4.6GHz to a 6700K @ 4.6GHz, also with a GTX 970. I'm "only" running 1080p (144Hz G-Sync), though. In ARMA 3 in particular, my minimum framerates are something like 75% better. Planetside 2 saw big gains too, around a 50% increase. I saw noticeable improvements in Kerbal Space Program and Project CARS as well.

        There are a lot of games where it just doesn't make a difference, though. Even at 1080p, the GTX 970 becomes a bottleneck in a lot of newer games, and in older/indie games I was already getting great framerates.

          • Sputnik7
          • 3 years ago

          ARMA shouldn’t be too surprising, as it depends heavily on RAM speed.

          I agree it’s mostly the newer games that bottleneck the 970.

        • ColeLT1
        • 3 years ago

        Right there with you, but I went from a ~5GHz 4790K to a 5.1GHz 7700K. Night-and-day performance difference. I'm still not sure what doubled my fps during heavy fights in ESO and BL2, but it happened.

        • setaG_lliB
        • 3 years ago

        I'm running a 4930K (IVB-E) at 4.5GHz, which shouldn't be much faster than an overclocked Sandy Bridge, and I'm not noticing any CPU-related performance issues at all. Heck, I recently went from a 970 to a GTX 1080 and my framerates at 1440p went through the roof. Games are running well beyond the 72Hz of my overclocked Korean DVI display and look super smooth (if you can ignore the slight tearing).

        • freebird
        • 3 years ago

        Probably because you are playing games the way most people do: at the highest settings their monitor/GPU/CPU allow, right? You are not running 1080p with features turned off that "stress" the GPU too much, either.

        That's why some of us only care about higher-end testing, comparing a 7700K with a Ryzen ???? @ 4GHz when running a GPU like a GTX 1080/1080 Ti, as most sites have done to compare these CPUs for gaming. Point being: if you are spending $500-800 on a GPU, why game at 1080p? Especially when 2560x1440 @ 100Hz+ is becoming affordable at $400-700; lower end if FreeSync, higher end if you want G-Sync…

        • tipoo
        • 3 years ago

        I wonder how much of what you saw is down to memory speed, though. Eurogamer tested this:

        [url<]http://www.eurogamer.net/articles/digitalfoundry-2016-is-it-finally-time-to-upgrade-your-core-i5-2500k[/url<]

        With 2133MHz RAM, an OCed 2500K sticks substantially closer to modern processors. Granted, if the motherboard doesn't support that, it's time for a platform upgrade anyway.

      • ozzuneoj
      • 3 years ago

      Yes, I was quite happy to see that Jeff included the overclocked 2500K results, as many requested! My 2500K is only at 4.2GHz, but it's good to know roughly how it stacks up. I think it goes without saying that the 2500K is likely to go down as one of the best enthusiast CPUs of all time. Even the highly praised Celeron 300A and Athlon XP 1700+, with their massive overclocking potential, were completely outclassed and nearly useless six years after release.

      The 2500K has held up extremely well against newer CPUs and applications over that same span of time. It'd be like comparing a Celeron 300A @ 450MHz to a 2GHz Athlon XP 3000+ or an early Athlon 64, or a 1700+ @ 2GHz to a Core 2 Quad Q9550; there'd be absolutely no comparison. Market stagnation and a lack of competition certainly played a huge factor in the 2500K's longevity, but that doesn't change the fact that it still works so well.

        • sreams
        • 3 years ago

        “Even the highly praised Celeron 300A and Athlon XP 1700+ with their massive overclocking potential were completely outclassed and nearly useless 6 years after release. ”

        Although I'd argue that the reason the 2500K has lasted so well is that the lack of competition from AMD or anyone else has resulted in Intel basically sitting on its hands. The Celeron 300A and Athlon XP 1700+ were never going to shine for as long, because competition was far fiercer back then.

      • OptimumSlinky
      • 3 years ago

      I’m still rocking an i7-930 (!!!) but I think the Ryzen 5 1600X is calling my name…

        • foo
        • 3 years ago

        Have you considered upgrading to a Xeon 56xx? Mine cost me £77.

        Intel Xeon X5660 @ 4.2GHz, 24GB RAM @ 1600MHz, GTX 1070 MSI Gaming X, 1080p @ 144Hz
        Cinebench R15 = 951; CPU Mark = 11310
        Superposition 1080p High = 10157 (FPS 76, Min 61, Max 95)
        Superposition 1080p Extreme = 3906 (FPS 29, Min 24, Max 34)
        Timespy 1.0 = 6162 (GPU 6453, CPU 4912)
        [url=http://www.3dmark.com/fs/12196599<]Firestrike 1.1[/url<] = 16183 (GPU 19754, CPU 14435)
        Heaven 3 Extreme = 3220 (FPS 127, Min 46, Max 283)
        Heaven 4 Extreme = 2464 (FPS 97, Min 25, Max 202)
        Valley 1 Extreme = 3749 (FPS 89, Min 30, Max 159)

    • BobSmith
    • 3 years ago

    This review has some of the most unfathomable results I have ever seen. There is no way a 6600K is 47% better than a 4690K in GTA V. There is no way a 3570K beats a 7700K by 10 percent in The Division.

    • south side sammy
    • 3 years ago

    Isn't it amazing how, when the resolution goes from 1080p to 1440p, the 7700K doesn't matter much anymore?

      • christos_thski
      • 3 years ago

      Not really? As far as I know, the GPU becomes the bottleneck at higher resolutions, so the CPU matters less. Up the resolution enough, and you'd end up with a $60 G4560 performing the same as a 7700K (or, for that matter, a Ryzen 7).
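
      A toy model of that bottleneck argument (my own illustration with invented per-frame costs, not measured data): in a pipelined renderer, the slower of the CPU and GPU sets the frame rate, and only the GPU's per-frame cost grows with resolution.

# Effective fps when the slower pipeline stage sets the pace.
def fps(cpu_ms, gpu_ms_at_1080p, pixel_scale):
    gpu_ms = gpu_ms_at_1080p * pixel_scale  # rough: GPU cost ~ pixel count
    return 1000.0 / max(cpu_ms, gpu_ms)

for label, scale in (("1080p", 1.0), ("1440p", 1.78), ("4K", 4.0)):
    fast = fps(cpu_ms=6.0, gpu_ms_at_1080p=7.0, pixel_scale=scale)
    slow = fps(cpu_ms=9.0, gpu_ms_at_1080p=7.0, pixel_scale=scale)
    print(f"{label}: fast CPU {fast:.0f} fps, slow CPU {slow:.0f} fps")
# The two CPUs differ at 1080p but converge once the GPU becomes the limit.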

        • unclesharkey
        • 3 years ago

        At 1080p, sure, but at 1440p I don't think so. Maybe if the GPU is the bottleneck, but throw in a high-end GPU and the sky's the limit.

        • rechicero
        • 3 years ago

        So, if you play at those resolutions, you shouldn't buy a 7700K, as a $60 CPU would do the same. And 1440p doesn't seem so strange if you have a 1080 Xtreme… In fact, if you're going to play at 1080p, going the 7700K + 1080 Xtreme route seems (IMHO) like buying a Veyron to drive to Walmart.

      • Ninjitsu
      • 3 years ago

      It won't be amazing next year, when Nvidia and AMD have mainstream GPUs that can hold 60 fps at 1440p…

      • ColeLT1
      • 3 years ago

      Look at frametimes, not average FPS. I would rather have high FPS with as few drops, hitches, and stutters as possible.

        • south side sammy
        • 3 years ago

        I saw a bunch on other sites; I didn't really pay attention here. But what I really look at first are the minimum framerates. That's where I've seen Ryzen pull ahead, and what I think matters more. Let's face it: when the frames dip too far, gameplay turns to crap.

          • ColeLT1
          • 3 years ago

          I have not seen this. I have seen the opposite, though, where Ryzen lines up between Ivy Bridge and Haswell in games' minimum frame times and dips/stalls/hitches at the same clocks. Can you send me some links?

            • south side sammy
            • 3 years ago

            No, I'm not going to fall into this trap of having to defend myself every time I make a statement here. They are out there… Don't go to Jayz, or Linus, or NitWit, or any of the Intel/Nvidia sites. Look around. It's easy to find videos by impartial players.

            • DancinJack
            • 3 years ago

            lol

            • Redocbew
            • 3 years ago

            Locate sand, insert head. Neat trick.

            • derFunkenstein
            • 3 years ago

            I have found Jayz to be the very best, most impartial full-time YouTuber out there.

            • Redocbew
            • 3 years ago

            I like Jayz also, except for the fact that he’s making me want to spend a lot of money on watercooling again…

            • Ninjitsu
            • 3 years ago

            I like kanyeEast better

            • CuttinHobo
            • 3 years ago

            KanYeast would be a good name for a Ryzen-focused rapper…

            • Redocbew
            • 3 years ago

            [url=https://www.youtube.com/watch?v=t2mU6USTBRE<]Ding dong yo, ding dong.[/url<]

            • CuttinHobo
            • 3 years ago

            Haha, nice. I’ve gone to two of his shows and they were a lot of fun. Fat is a favorite of mine so seeing him hit the stage in the fat suit just about made the tickets worth it right there. 🙂

            • south side sammy
            • 3 years ago

            He doesn't have a new one yet; he's still on his 30-day trial of the 1800.

            • K-L-Waster
            • 3 years ago

            Lemme guess, “impartial” == “AMD FTW!!1!”

        • DoomGuy64
        • 3 years ago

        I wouldn't trust those blindly until enough time has passed for the platform to mature. Nvidia currently has driver issues with Ryzen, where certain games/settings perform slower than on AMD video cards. DX12 is particularly affected, but newer benchmarks show that DX9 is as well.

        [url<]https://www.reddit.com/r/Amd/comments/64r0hj/thanks_nvidia_for_letting_ryzen_5_look_bad/[/url<]
        [url<]http://www.techspot.com/article/1374-amd-ryzen-with-amd-gpu/page2.html[/url<]

        The TechSpot article shows that DX11 performed better than DX12 on the 1060. That's not possible if it's a CPU issue, because DX11 is less CPU-efficient. Also, the 480 handily beats the 1060 in the cases where the 1060 performs poorly. Then we get Rocket League, where the 1060 performs much worse than the 480 in every CPU configuration, but particularly with Ryzen.

        Nvidia's drivers have lost their edge. Not only are they not optimized for Ryzen in DX12, they're also underperforming in casual games like Rocket League. The best-case scenario seems to be DX11 alone, which is where Nvidia is throwing most of its optimization resources.

      • f0d
      • 3 years ago

      Isn't it amazing that when you go from 1440p to 4K, almost all CPUs perform the same?? At 4K, Bulldozer is pretty much the same as everything else.

      …It's called a GPU bottleneck.

        • Waco
        • 3 years ago

        Bulldozer at 4K is not a pleasant experience even in GPU-bound cases. Averages may be similar, but frametimes tank.

    • revcrisis
    • 3 years ago

    What's with your Division DX12 results? You have a five-year-old i5-3570K beating every chip, including the brand-new i7-7700K. Did you run that benchmark multiple times?

      • chuckula
      • 3 years ago

      ?? I only see it coming in second place in the 8.3ms frametime graph, and that's likely down to random noise when you get that far into the weeds.

      Edit: Oh, you looked at average framerate. Sorry, I don't pay much heed to those numbers since TR has better metrics.

    • Krogoth
    • 3 years ago

    AMD is definitely back in the game in the mainstream desktop market. They are on the heels of the Skylake and Kaby Lake chips.

    The real question is whether that continues to translate into the laptop and SFF markets, which are where the big money is made in the mainstream.

      • Goty
      • 3 years ago

      Summit Ridge CPUs really seem to hit their voltage/frequency sweet spot down around 3.3 GHz; power consumption comes down quite a bit (the chips are being pushed pretty hard to get up to 3.6-3.8 GHz) while performance stays very respectable, so these might actually be pretty compelling chips in anything other than ultrabooks (if that term is even still used).

      • DragonDaddyBear
      • 3 years ago

      Are you… impressed?!

        • morphine
        • 3 years ago

        Shirley that can’t be right.

          • willmore
          • 3 years ago

          Stop calling me Shirley.

      • BurntMyBacon
      • 3 years ago

      I expected the R5 1600X to wind up between Haswell and Skylake. Sure enough, it beat the i5-4690K, but I did not expect it to best the i5-6600K in the overall 99th-percentile FPS metric shown in the conclusion page's chart. Even the R5 1500X beat the i5-4690K there. This is a good start. I wonder how it compares to the price-equivalent Skylake and Kaby Lake processors.

    • ptsant
    • 3 years ago

    Nice review; thank you, and get well soon. I'd be interested to hear your thoughts on how the platform has matured: any difficulties with installation, BIOS, drivers, memory, etc.

    Anyway, I guess my suspicion is confirmed. The gaming performance is almost identical to the 1800X's, which is much more palatable at $249 than at $500. The 7700K is still the king for gaming, but it's also almost 30-40% more expensive, which completely changes the value proposition.

    We will of course have to see the multithreaded tests, but these should come in really close to 75% of 1800X performance, which is clearly beyond the 7600K and possibly even the 7700K in some situations.

    The way I see it, the only place for the 7600K is with hardcore gamers who can't afford the 7700K. For almost everyone else, the 1600X is a much more balanced chip.

    • derFunkenstein
    • 3 years ago

    Feel better soon, Jeff. It sucks that you're down for the count with such a tight schedule.

    The quad-core, eight-thread part with 8MB of L3 cache (a single CCX) was the most interesting thing on the docket for me. I wanted to see how Ryzen performs (officially) without any CCX weirdness. It does relatively well, I think. It has a boost-speed deficit of 800MHz against the 7700K (4.5GHz vs. 3.7GHz, giving the Intel chip around 20% more cycles), and it loses to that equivalently-configured (in cores, threads, and cache, anyway) CPU. But the 7700K is closer to 30% faster, so it's more than just a clock-speed deficit.

    All that being said, from a price perspective, you're going to save 80% and only give up 30% of the performance. Once you factor in the ability to (most likely) drive all the cores up to 3.8GHz or so, you're making the equivalently-priced [b<]locked[/b<] Core i5-7400, with half the threads and a max turbo of 3.5GHz, look a little sad.

      • barich
      • 3 years ago

      That part still has two CCXes, with half of each disabled.

        • Demetri
        • 3 years ago

        Yeah, I don't think we're seeing single-CCX chips until Raven Ridge.

          • derFunkenstein
          • 3 years ago

          Aww, dang it. I misread. These chips all have 16MB of L3 cache.

          Still, the performance per dollar is definitely there.

          The Ryzen 5 1400 is listed with 8MB of L3, so that seems like it might be a single-CCX part. I think Ryzen 3 is rumored to be that way as well.

            • Redocbew
            • 3 years ago

            Even so, it doesn't seem like having three cores per CCX had much effect on the 1600X compared to the 1800X. Maybe it would in more heavily threaded applications.

            • derFunkenstein
            • 3 years ago

            Yeah, that just tells me games aren't using more resources than a six-core CPU can deliver. There will be separation in part two of the review, I'm sure, but then again, at these prices it won't be a big deal.

            • Jeff Kampman
            • 3 years ago

            The 1400 is still a dual-CCX part with half of the L3 per complex disabled.

            • derFunkenstein
            • 3 years ago

            That's… really weird. It seems like they're intentionally handicapping it. Is it a yield or a thermal concern?

            I messed around with some of that stuff when my R7 1700 was new, and it seemed like, despite losing half the L3 cache, 4+0 with 8MB of L3 outperformed 2+2 with 16MB. But my tests were quick and not exactly scientific.

            • jihadjoe
            • 3 years ago

            Someone did the math on this during the R7 launch, and it seems the odds of a chip going bad in a way that leaves a working 4+0 config are very small, hence the 3+3 and 2+2. Maybe we'll have to wait until the process matures and they can tape out a fully-enabled 4+0 die, or maybe AMD is binning those chips for a later release.

            • Mr Bill
            • 3 years ago

            The core-allocation discussion in the AnandTech review is quite interesting:
            [url<]http://www.anandtech.com/show/11244/the-amd-ryzen-5-1600x-vs-core-i5-review-twelve-threads-vs-four/2[/url<]

            I hope the TR review will expound on this in its second half. Hope you are getting better soon.

    • kuttan
    • 3 years ago

    The Ryzen 5 1600 is a terrific-value six-core CPU. It makes future-proofing cheap and easy.

    • mkk
    • 3 years ago

    Yeah, when gaming and keeping a close eye on core loads with an overclocked 1700, I've only seen cases where the six-core 1600X ought to do just fine at its stock configuration. Stepping down to a plain 1600 doesn't make much sense at these prices.

      • Magic Hate Ball
      • 3 years ago

      Not sure how you draw that conclusion when the 1600 comes with a cooler easily worth $25, which widens the price gap further.

      Unless you happen to have an AM4 cooler lying around already.

    • DragonDaddyBear
    • 3 years ago

    What's the consensus on the AMD power profile they released? If I built an AMD system I would probably set that. Do we think their profile behaves similarly to the High Performance plan used in the test?

      • ultima_trev
      • 3 years ago

      Some people have reported issues with it.

      I myself have experienced none; it feels pretty much as responsive as High Performance mode, and I get the benefit of 4.1 GHz boosts in lightly-threaded scenarios.

      • derFunkenstein
      • 3 years ago

      It seems to work fine for me: very little additional power consumption at idle (I measured 3W at the wall, which might just be margin of error) and maybe a bit more performance. It's worth the two minutes it takes to install.

    • chuckula
    • 3 years ago

    [quote<]Now, for some bad news. While I expect exciting numbers from the Ryzen 5 1500X and Ryzen 5 1600X in our productivity benchmarks, those numbers will have to wait for a little bit. I've been battling a severe case of the flu since the middle of last week, so testing and writing for this review has been slow going.[/quote<]
    Ouch! Get better soon, man! You can do a cleanup review that hits things like power consumption and other loose ends for both the 8-core and the 6/4-core parts that way. Great review as always.

      • juampa_valve_rde
      • 3 years ago

      Get better, Jeff; we can wait (a little) for your quality reviews.

      Any idea where to check head-to-head performance of the Ryzens against their old buddies Thuban and Piledriver? I'm curious.

        • ColeLT1
        • 3 years ago

        The R7 review has the FX-8370 (Piledriver) in there:
        [url<]https://techreport.com/review/31366/amd-ryzen-7-1800x-ryzen-7-1700x-and-ryzen-7-1700-cpus-reviewed[/url<]
