AMD’s Ryzen 3 2200G and Ryzen 5 2400G processors reviewed

Morning, folks. Today is the day when we can finally share performance details for the desktop versions of AMD’s Ryzen processors with Radeon Vega graphics, or Ryzen APUs for short. AMD isn’t using the term “accelerated processing unit” to refer to these chips any longer, but it’s a whole lot easier to call them APUs than it is to type out “Ryzen processor with Radeon Vega graphics” every time we want to refer to the family of chips. Naming conventions aside, what matters most is that AMD finally has a competitive CPU core that it can fuse with its muscular graphics processors, and it’s used those resources to form a most exciting pair of chips for entry-level gaming builds, small-form-factor game boxes, and HTPCs.

I could regale you with a wealth of background information on the silicon marriage of a single Zen core complex and Vega graphics here, but I am out of time as of this very moment. Thing is, we pretty much know the deal with Raven Ridge. Check out my post about AMD’s pre-CES event for the ground rules of AMD’s desktop APUs, along with our review of the mobile Ryzen 5 2500U and my initial write-up of the Raven Ridge silicon from a while back for ample general information on the red team’s blend of its core competencies. At this stage, I felt it was most important to get our performance results out in the open rather than rehashing a great deal of already-public information. For the moment, enjoy our full test results and slightly-less-full analysis for these chips, and feel free to debate amongst yourselves in the comments.

Our testing methods

As always, we did our best to deliver clean benchmarking numbers. We ran each test at least three times and published the median of those results.
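In case you want to mirror that aggregation step when poring over your own runs, here's a minimal sketch of what it boils down to; the run values are hypothetical placeholders, not results from this review.

```python
# Minimal sketch of our aggregation step: run each test at least three times
# and report the median. The run values below are hypothetical placeholders.
from statistics import median

runs_fps = [62.4, 61.8, 63.1]  # three runs of one benchmark on one chip
print(f"Reported result: {median(runs_fps):.1f} FPS")
```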

Our test systems were configured as follows:

Processor: Ryzen 3 2200G, Ryzen 5 2400G, Ryzen 3 1300X, Ryzen 5 1500X
CPU cooler: AMD Wraith (95 W)
Motherboard: MSI B350I Pro AC
Chipset: AMD B350
Memory size: 16 GB
Memory type: G.Skill Flare X 16 GB (2x 8 GB) DDR4-3200
Memory speed: 3200 MT/s (actual)
Memory timings: 14-14-14-34 1T
System drive: Intel 750 Series 400 GB

 

Processor: AMD Athlon X4 845, AMD A10-7850K
CPU cooler: AMD Wraith (95 W)
Motherboard: Asus Crossblade Ranger
Chipset: AMD A88X
Memory size: 16 GB
Memory type: Corsair Vengeance Pro Series 16 GB (2x 8 GB) DDR3-1866
Memory speed: 1866 MT/s (actual)
Memory timings: 9-10-9-27
System drive: Samsung 850 Pro 512 GB

 

Processor: Core i3-8100 (simulated via a Core i5-6600K at 3.6 GHz and 65 W), Core i5-8400
CPU cooler: Cooler Master MasterAir Pro 3
Motherboard: Gigabyte Aorus Z270X-Gaming 8 (simulated i3-8100), Gigabyte Z370 Aorus Gaming 7 (i5-8400)
Chipset: Intel Z270 (simulated i3-8100), Intel Z370 (i5-8400)
Memory size: 16 GB
Memory type: G.Skill Trident Z 16 GB (2x 8 GB) DDR4-3200
Memory speed: 3200 MT/s (actual)
Memory timings: 14-14-14-34 2T
System drive: Samsung 960 Pro 512 GB

We used the following system to host our discrete GPUs for testing:

Processor: Intel Core i7-8700K
CPU cooler: Corsair H110i 280-mm liquid cooler
Motherboard: Gigabyte Z370 Aorus Gaming 7
Chipset: Intel Z370
Memory size: 16 GB
Memory type: G.Skill Trident Z DDR4-3200 (rated) SDRAM
Memory speed: 3200 MT/s (actual)
Memory timings: 14-14-14-34 2T
System drive: Samsung 960 Pro 512 GB

Some other notes regarding our testing methods:

  • Each motherboard was updated to the most recent firmware version available prior to testing, including pre-release firmware versions available through processor manufacturers.
  • Our Intel test systems were both updated with Meltdown mitigations through Windows Update and Spectre mitigations through firmware updates. These patches were confirmed to be in use through the InSpectre utility.
  • Each software utility or program used in our benchmarking was the most recent version publicly available prior to our testing period. Where necessary, we used beta versions of certain utilities as recommended by CPU manufacturers for the best compatibility with the systems under review.
  • Each system used Windows 10’s Balanced power plan. Our Ryzen systems were set up with AMD’s Ryzen Balanced power plan.
  • Unless otherwise noted, our gaming tests were conducted at 1600×900 in exclusive fullscreen mode. Vsync was disabled both in-game and in the graphics driver control panel where possible.
  • “Multi-core enhancement” or “multi-core turbo” settings were disabled in our motherboards’ firmware.

Our testing methods are generally publicly available and reproducible. If you have questions regarding our testing methods, you can email me, leave a comment on this article, or join us in our forums. We take the integrity of our test results seriously and will go to reasonable lengths to clear up any apparent anomalies.

 

Memory subsystem performance

The AIDA64 utility includes some basic tests of memory bandwidth and latency that will let us peer into the differences in behavior among the memory subsystems of the processors on the bench today, if there are any.

With the same DDR4-3200 CL14 RAM in the DIMM slots of all of our systems, AMD’s chips take a small advantage in raw bandwidth. Memory latency, on the other hand, continues to favor Intel’s microarchitectures by a wide margin. AMD has slightly decreased the latency of its chips’ integrated memory controllers in the move from Summit Ridge to Raven Ridge, but the difference likely isn’t large enough to translate into noticeably higher performance.

Some quick synthetic math tests

AIDA64 also includes some useful micro-benchmarks that we can use to sketch out broad differences among CPUs on our bench. The PhotoWorxx test uses AVX2 instructions on all of these chips. The CPU Hash integer benchmark uses AVX, while the single-precision FPU Julia and double-precision Mandel tests use AVX2 with FMA. The Ryzen 3 2200G had to sit out the PhotoWorxx test, as running that benchmark caused a hard lock on the host system.

Core for core and thread for thread, the Ryzen 5 2400G can’t outpace the similarly-provisioned Ryzen 5 1500X. Weirdly, it’s the four-core, four-thread Ryzen 3 1300X that leads the AMD pack here.

As we’ve understood for some time now, Ryzen CPUs support Intel’s SHA Extensions, meaning they can accelerate the hashing algorithm behind this micro-benchmark. That acceleration explains the wide gulf between even the Ryzen 3 1300X and the Core i5-8400’s otherwise-impressive score here.

In these tests of floating-point performance, Intel’s wider AVX units give its chips a sizable lead.

Now that we have a basic idea of how these chips perform, let’s get to gaming.

 

Doom (Vulkan)
Doom‘s Vulkan renderer is a familiar sight in our graphics-performance reviews by now. We were able to crank the game all the way up to medium settings at 1600×900 to give all of our test mules a workout.


The Radeon Vega IGP duo gets off to a fine start in Doom. The Ryzen 3 2200G’s Vega 8 IGP crushes its similarly-provisioned Kaveri predecessor, while the 2400G’s Vega 11 nearly holds pace with the GT 1030 in both average frame rates and delivered smoothness (as measured by the 99th-percentile frame time). That’s spectacular performance from an IGP without its own dedicated graphics memory. The UHD 630 runs the game without crashing, at least.

One bit of oddness at this resolution is how thoroughly the GTX 1050 outpaces the RX 460, in contrast with the Radeons’ usual advantage under Doom’s Vulkan renderer. I suspect we’re seeing a driver or software-overhead bottleneck on the Radeon side that the GTX 1050 doesn’t suffer from. This isn’t the only time you’ll see similar behavior from the GTX 1050 in this review, so stow your pitchforks. I’m just as piqued by this issue as you are.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time our graphics card spends beyond certain frame-time thresholds, each with an important implication for gaming smoothness. Recall that our graphics-card tests all consist of one-minute test runs and that 1000 ms equals one second to fully appreciate this data.

The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS, or a 30-Hz refresh rate. Go lower than that with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame, while a constant stream of frames at 8.3-ms intervals would correspond to 120 FPS.
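For those who like to see the arithmetic spelled out, here is a rough sketch of how a "time spent beyond X" figure can be computed from a run's frame times. The frame-time values are hypothetical, and the exact accounting behind our published graphs may differ in small details; this version tallies only the excess over each threshold.

```python
# Rough sketch of the "time spent beyond X" idea: for each threshold, add up
# how far past the threshold each slow frame runs. Frame times are made up;
# the accounting behind our published graphs may differ slightly.
frame_times_ms = [14.2, 16.9, 35.0, 12.8, 55.3, 18.1]  # hypothetical one-run sample
thresholds_ms = [50.0, 33.3, 16.7, 8.3]                # ~20, 30, 60, and 120 FPS

for threshold in thresholds_ms:
    time_beyond = sum(ft - threshold for ft in frame_times_ms if ft > threshold)
    print(f"Time spent beyond {threshold} ms: {time_beyond:.1f} ms")
```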

With the range of integrated graphics solutions we’re testing, there’s plenty of opportunity to begin our analysis at the 50-ms mark. As you might have guessed, the UHD Graphics 630 IGP doesn’t deliver anything approaching a smooth or enjoyable gaming experience, spending over 12 seconds of our one-minute test run on especially tough frames. The A10-7850K joins the UHD 630 at the 33.3-ms mark with 11 seconds spent past that threshold, while the Vega 8 IGP turns in only a tiny bit of roughness there.

Despite the Vega 11 IGP’s apparent performance parity with the GT 1030 in our initial graphs, the entry-level GeForce delivers a noticeably smoother gaming experience in Doom by spending five fewer seconds on frames that take longer than 16.7 ms to render. The Vega 8 spends a full third of our test run on frames that take longer than 16.7 ms, suggesting that lower detail settings or a lower resolution would be in order for a smoother gaming experience.

 

Rise of the Tomb Raider
Rise of the Tomb Raider is both gorgeous and fully modern under the hood, thanks to its DirectX 12 rendering path. We used a blend of low settings at 1600×900 to get this demanding game up and running on the Vega IGP duo.


With those modest settings, the Ryzen APU duo bookends the GT 1030 in our average-FPS measure of performance potential. 99th-percentile frame times above 33.3 ms—well above, in the case of the Ryzen 3 2200G’s Vega 8 and the GT 1030—suggest anything but a smooth ride for our competitors, though. Even the Ryzen 5 2400G can’t quite deliver 99% of its frames under the 33.3-ms threshold.


A look at just how much time each chip spends past 33.3 ms paints a better picture for the Ryzen 5 2400G than it does the GT 1030 or Ryzen 3 2200G. The beefy integrated Vega 11 spends just a fifth of a second in total on tough frames that take longer than 33.3 ms to render, while the GT 1030 and Vega 8 tally up four to five seconds on that tough work. If we use the common metric of a consistent 30 FPS as a (forgiving) definition of “playable” for these chips, the Vega 11 IGP nearly aces the test. Impressive work from integrated graphics.

 

Grand Theft Auto V
Grand Theft Auto V‘s online mode remains one of the most popular games around, and these IGPs seem like ideal vehicles for allowing budding miscreants to drop in on Los Santos. We used a combination of high settings at 1600×900 with “high-resolution shadows” and “long shadows” enabled in the game’s advanced graphics options.


GTA V tends to favor GeForces, and our IGP experience proves no different. The extra shader power of the Ryzen 5 2400G’s Vega 11 isn’t good for much extra performance potential compared to the Vega 8 on the Ryzen 3 2200G, but it does keep the beefier integrated Vega on the right side of 33.3 ms in our 99th-percentile frame-time accounting. As you can see from its pencil-thin frame-time graph, however, the GT 1030 delivered a significantly smoother and more fluid gaming experience still.


The Ryzen 3 2200G’s potentially concerning 99th-percentile frame time turned out to be less of an issue than it first appeared. Our time-spent-beyond-X graphs let us see that the lesser Vega IGP spends just over a tenth of a second in total on frames that take longer than 33.3 ms to render.

The really interesting look into our contenders’ results comes at the 16.7-ms mark, where the GT 1030 spends less than a third of the time past that threshold working on tough frames compared to the Ryzen 5 2400G’s Vega 11. As a result, GTA V just feels better on the entry-level GeForce than it does on the Vega IGPs.

 

Rocket League

Let’s take a detour out of the big leagues and into Rocket League. This popular esports title runs on the Unreal Engine. We pushed the resolution up to 1920×1080 here and used the game’s highest-quality appearance settings.


Rocket League proves a close match for Pascal and Vega. The Ryzen 5 2400G closely shadows the GT 1030 in both performance potential and delivered smoothness, while the Ryzen 3 2200G commendably turns in 99% of its frames a couple of milliseconds under the critical 33.3-ms mark. The GT 1030 ultimately provides the most fluid and smooth gameplay here by a small margin among our entry-level contenders.


Some spikiness in our frame-time graphs translates into time past 50 ms on the board for both Ryzen APUs’ Vega IGPs, suggesting one or two noticeable hitches in gameplay. Happily, neither Vega IGP spends much time past 33.3 ms. We have to draw the line at 16.7 ms before any of the Ryzen APUs or the GT 1030 give us cause for concern. Here, the GT 1030 spends three fewer seconds in total than the Vega 11 on frames that take over 16.7 ms to render, while the Vega 8 spends over five seconds more than its more powerful sibling.

 

Dota 2

With a global audience and a burgeoning tournament scene boasting multi-million-dollar prize pools, one could say that Dota 2 is a big deal, and it’s another test that any integrated graphics processor worth its salt has to pass. We ran the game at 1920×1080 on its “Best Looking” preset to see how the Vega duo fares.


Dota 2 is clearly bottlenecked somewhere other than shader-processing resources. In fact, the Ryzen 5 2400G’s Vega 11 turns in a worse 99th-percentile frame time than its less-well-endowed sibling here. The GT 1030 pulls well ahead in both performance potential and delivered smoothness. Folks eyeing a spot at the next International should probably join team green.


Where the GT 1030 spends just about four seconds in total past 16.7 ms on tough frames, the Ryzen 5 2400G’s Vega 11 puts 14 seconds in that bucket, and the Ryzen 3 2200G’s Vega 8 chalks up nearly 16. Dota 2 certainly isn’t unplayable on these parts, but some dialing-back of resolution and eye candy would seem warranted.

 

Hitman

We enter the home stretch for our gaming tests with another DirectX 12 monster. Hitman puts every one of a GPU’s shader processors to work, and it’s still one of the more demanding triple-A games around. We dialed resolution back to 1600×900 for this test, cranked up shadows and textures as far as our chips’ 2 GB of VRAM would allow, and used mostly high settings save for leaving screen-space ambient occlusion off.


Although Hitman has long served as a showcase for AMD’s graphics cards, I was still surprised by how much hurt it puts on the GT 1030. The entry-level Nvidia card just isn’t delivering a playable experience here, while the Ryzen 5 2400G’s Vega 11 runs this game with little fuss. If we dialed back settings a bit more, the Ryzen 3 2200G’s 99th-percentile frame time would likely drop below the critical 33.3-ms mark, as well.


The GT 1030 is already struggling hard as we look at the time spent past the 50-ms mark. It’s most instructive, then, to check how the Vega duo is performing with a look at the 33.3-ms mark. Happily, neither Vega IGP puts noticeable numbers on the board past 33.3 ms. Flip over to the 16.7-ms mark, however, and it becomes clear that neither Vega IGP is delivering a perfectly fluid experience. Both Vegas spend around a third of this test run working on tough frames that would drop the delivered frame rate below 60 FPS.

 

Tomb Raider (2013)
Tomb Raider’s 2013 reboot revitalized the franchise, and it remains a fun and visually-rich-enough experience to justify revisiting on these IGPs. We cranked the game’s resolution back up to 1920×1080 and used a slightly-tweaked version of its High preset to add tessellation and a higher degree of anisotropy to its texture filtering.


After the beating that was Hitman, the GT 1030 dusts itself off and pulls dead-even with the Ryzen 5 2400G here in both our average-FPS measure of performance potential and in 99th-percentile frame times. Both the beefy Vega and the pint-size Pascal turn in 99th-percentile results well under the 33.3-ms mark we’re so keenly watching for, and they’re plenty capable of running this game well at our relatively high-resolution and visually-rich settings choices.


Our time-spent-beyond-X graphs don’t put any more daylight between the GT 1030 and the Ryzen 5 2400G, either. The 16.7-ms threshold does show us where the performance of the Vega 8 and Vega 11 IGPs diverges, though. The Vega 11 spends about 11-and-a-half seconds on tough frames that take longer than 16.7 ms to render, while the Vega 8 spends nearly a third of our one-minute test run juggling those frames. The extra oomph of the Ryzen 5 2400G’s IGP might be worth having in older titles if you plan to make an attempt on the 1920×1080 summit.

Tomb Raider concludes our gaming performance results. The Ryzen 5 2400G’s Vega 11 IGP sometimes proves a worthy competitor or even superior to the GT 1030, an impressive achievement for a chip that has to get its memory bandwidth from system RAM instead of a dedicated pool of GDDR5. The Ryzen 3 2200G beats Intel’s UHD Graphics 630 IGP by a long shot, but its results suggest that resolutions lower than our 1600×900 reference point and lesser amounts of eye candy will be friendliest to the Vega 8. Still, both Vega IGPs far outpace the DDR3-bound Radeon R7 graphics on the A10-7850K, and that’s an achievement to rival the move from AMD’s family of construction cores to the Zen architecture.

Let’s see just how much of an advance Zen represents over Steamroller and Excavator across our wide swath of productivity tasks.

 

Javascript

In these tests of single-threaded latency and throughput, Raven Ridge parts prove themselves about on par with their Summit Ridge brethren and the newly Meltdown- and Spectre-hampered i3-8100. The Core i5-8400’s lofty 4 GHz boost clock seems to give it an edge in the JetStream and Octane benchmarks.

The Speedometer benchmark, a new addition to our test suite, puts the Ryzen 5 1500X, Core i3-8100, and i5-8400 well in the lead. Speedometer runs longer than any of our traditional benchmarks, which might prove a disadvantage for the 65-W AMD parts more so than the Intel competition.

Compiling code with GCC

File compression with 7-zip

Disk encryption with Veracrypt

 

Cinebench

The evergreen Cinebench benchmark is powered by Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. The test runs with a single thread and then with as many threads as possible.

Blender rendering

Blender is a widely-used, open-source 3D modeling and rendering application. The app can take advantage of AVX2 instructions on compatible CPUs. We chose the “bmw27” test file from Blender’s selection of benchmark scenes to put our CPUs through their paces.

Corona rendering

Corona, as its developers put it, is a “high-performance (un)biased photorealistic renderer, available for Autodesk 3ds Max and as a standalone CLI application, and in development for Maxon Cinema 4D.”

Handbrake transcoding
Handbrake is a popular video-transcoding app. To see how it performs on these chips, we’re switching things up from some of our past reviews. Here, we converted a roughly two-minute 4K source file from an iPhone 6S into a 1920×1080, 30 FPS MKV using the HEVC algorithm implemented in the x265 open-source encoder. We otherwise left the preset at its default settings.

CFD with STARS Euler3D

Euler3D tackles the difficult problem of simulating fluid dynamics. It tends to be very memory-bandwidth intensive. You can read more about it right here. We configured Euler3D to use every thread available from each of our CPUs.

It should be noted that the publicly-available Euler3D benchmark is compiled using Intel’s Fortran tools, a decision that its originators discuss in depth on the project page. Code produced this way may not perform at its best on Ryzen CPUs as a result, but this binary is apparently representative of the software that would be available in the field. A more neutral compiler might make for a better benchmark, but it may also not be representative of real-world results with real-world software, and we are generally concerned with real-world performance.

 

Digital audio workstation performance

One of the neatest additions to our test suite of late is the duo of DAWBench project files: DSP 2017 and VI 2017. The DSP benchmark tests the raw number of VST plugins a system can handle, while the complex VI project simulates a virtual instrument and sampling workload.

We used the latest version of the Reaper DAW for Windows as the platform for our tests. To simulate a demanding workload, we tested each CPU at a 24-bit depth and a 96-kHz sampling rate, and at two ASIO buffer depths: a punishing 64 and a slightly-less-punishing 128. In response to popular demand, we also tested the same buffer depths at a sampling rate of 48 kHz. We added VSTs or notes of polyphony to each session until we started hearing popping or other audio artifacts. We used Focusrite’s Scarlett 2i2 audio interface and the latest version of the company’s own ASIO driver for monitoring purposes.
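To put those buffer depths in perspective, the deadline the CPU has to fill each buffer is simply the buffer size divided by the sampling rate, which is why 64 samples at 96 kHz is so punishing. A quick back-of-the-envelope calculation (not part of DAWBench itself) looks like this:

```python
# Back-of-the-envelope math on ASIO buffer deadlines: the CPU must produce
# buffer_samples of audio every buffer_samples / sample_rate seconds.
def buffer_deadline_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    return 1000.0 * buffer_samples / sample_rate_hz

for rate_hz in (96_000, 48_000):
    for depth in (64, 128):
        print(f"{depth:>3}-sample buffer @ {rate_hz // 1000} kHz -> "
              f"{buffer_deadline_ms(depth, rate_hz):.2f} ms per buffer")
```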

A very special thanks is in order here for Native Instruments, who kindly provided us with the Kontakt licenses necessary to run the DAWBench VI project file. We greatly appreciate NI’s support—this benchmark would not have been possible without the help of the folks there. Be sure to check out their many fine digital audio products.





 

Conclusions

AMD’s Ryzen processors with Vega integrated graphics would seem to fix lots of things that prevented past APUs from finding a foothold in the market. Fast DDR4 memory offers copious bandwidth to a powerful Vega GPU. The Zen CPU core puts up competitive performance against Intel’s latest and greatest. An advanced 14-nm process helps Zen and Vega coexist in a thrifty 65 W thermal envelope. Those are a lot of advances to wrap up in one piece of silicon, and Raven Ridge shows what AMD is capable of when it’s firing on all cylinders.

For all that, Raven Ridge desktop parts don’t change AMD’s competitive position against Intel much on desktop CPU performance alone. For folks who are uninterested or only mildly interested in gaming, the Ryzen 5 2400G is simply too close in price to the considerably superior Core i5-8400. Intel’s UHD Graphics 630 IGP will suit non-gamers just fine, and the i5-8400’s potent Coffee Lake (née Skylake) cores and high all-core clock speeds will chew through even the toughest desktop workloads with aplomb. (Yes, the Ryzen 5 1600 still exists, but it needs a graphics card to make it useful. Tried to buy one of those recently?) The same basic reasoning is largely true of the Ryzen 3 2200G versus the Core i3-8100.

Evaluating these Ryzen chips on their CPU performance alone is willfully missing the point, though. As it always has with its APUs, AMD is aiming these things at the person who cares more about gaming than raw CPU power, but who also doesn’t have the cash for an entry-level discrete graphics card. In today’s crypto-crazy market, that means about $80 for a GeForce GT 1030 or $150 and up for a GTX 1050. (Excuse me while I curl into a fetal position and sob.) That kind of cash matters a lot in builds trying to stay south of the $500 mark.

The question, then, is whether AMD has truly taken the need for such a graphics card out of the equation for entry-level gaming PCs. To get an answer, we can evaluate the delivered smoothness of each integrated graphics processor against the discrete cards we tested using our 99th-percentile frame-time measure. It’s hard to make price-to-performance comparisons when the Vega 8 and Vega 11 are inseparable from their host CPUs, so I’ve simply taken the geometric mean of the 99th-percentile frame times each graphics processor delivers across all of our tests. I then converted that figure into FPS so our higher-is-better logic works.
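Here’s a small sketch of that summary metric, using made-up 99th-percentile frame times rather than our measured figures:

```python
# Sketch of the summary metric described above: geometric mean of a GPU's
# 99th-percentile frame times across the test suite, converted to FPS so that
# higher is better. The frame-time values are hypothetical placeholders.
from math import prod

p99_frame_times_ms = [28.5, 34.1, 22.7, 31.0, 26.3, 40.2, 24.8]  # one per game
geomean_ms = prod(p99_frame_times_ms) ** (1 / len(p99_frame_times_ms))
print(f"Geometric mean of 99th-percentile frame times: {geomean_ms:.1f} ms")
print(f"Expressed as 99th-percentile FPS: {1000.0 / geomean_ms:.1f}")
```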

If we go by the 30-FPS line that entry-level gamers often draw in the sand to mark playable performance, the overall picture is quite favorable for AMD. Our numbers suggest that the Ryzen 5 2400G will usually deliver 99% of its frames in under 33.3 ms (or at a rate above 30 FPS), and the Ryzen 3 2200G could get there with some settings tweaks. AMD’s most powerful integrated graphics processor yet might not beat out the GeForce GT 1030, but it’s still quite impressive that the 2400G and its Vega 11 IGP come as close as they do without the benefit of a pool of GDDR5 memory.

It’s worth noting that each graphics processor’s performance varied from game to game in our test suite despite the at-a-glance temptations of our 99th-percentile FPS metric, and the GT 1030 proved superior to the Vega IGPs in the pair of esports-y titles that drive so many PC purchases in this price range. Folks whose tastes run more to the triple-A than the twitchy will find plenty to like in the all-around competency of the Vega 11 IGP, but those laser-focused on digital sports will still find the best performance from something like a GT 1030. Whether that’s worth the extra dough—about $70 in a budget system build right now, by our reckoning—is up to the individual builder.

If you value all-around competence from your budget PC, however, it might be worth saving that $70 and enjoying the Ryzen 5 2400G’s powerful CPU-and-IGP combo. Our first system builds with the 2400G will certainly deliver better CPU performance than a Core i3-8100 gaming PC in tough tasks, and you don’t give up too much in the way of graphics prowess for the savings. In a price bracket that used to require sacrificing a ton of single-threaded CPU performance for a passable Radeon IGP, the Ryzen 5 2400G finally offers a well-balanced package. Overclockers will find free rein in the Ryzen 5 2400G if they want to tweak, as well, something the Core i3-8100 can’t claim.

TR Editor’s Choice, February 2018: AMD Ryzen 3 2200G and AMD Ryzen 5 2400G

The Ryzen 3 2200G, for its part, is a no-brainer for a hundred bucks. Dial back the resolution and eye candy a bit from our rather ambitious test settings, and the 2200G’s integrated Vega 8 graphics should prove a fine first step into the world of PC gaming or a capable companion for folks on the tightest budgets. Enthusiasts on a shoestring will find solid CPU performance with fully unlocked multipliers to play with for an extra shot of oomph, too. That kind of freedom simply isn’t available from the most affordable Intel CPUs. Folks concerned about productivity alone will still want to consider the Core i3-8100, but the Ryzen 3 2200G will certainly be the superior do-it-all part.

It’s that all-around competence and enthusiast-friendly nature that leads me to call the Ryzen 3 2200G and Ryzen 5 2400G TR Editor’s Choice award winners. We’re happy to see a pair of APUs that finally deliver on the promised power of Radeon graphics and AMD CPU cores on one chip, and we’re sure that PC builders will be, too.

Comments
Thbbft
Thbbft
4 years ago

WHERE IS THE SCATTER PLOT LABELED ‘GAMING’?

People needing serious ‘productivity’ performance aren’t buying CPUs at this price point.

JustAnEngineer
JustAnEngineer
4 years ago
Reply to  anubis44

I agree that selecting PC4-2400 would be stupid when PC4-3200 is just [url=https://techreport.com/blog/33259/revisiting-the-value-proposition-of-amd-ryzen-5-2400g?post=1069073<]$14 more[/url<] and provides [url=https://www.techpowerup.com/reviews/AMD/Ryzen_5_2400G_Vega_11/17.html<]significantly improved performance[/url<]. Alas, even well-established system assemblers seem to be [url=https://techreport.com/blog/33259/revisiting-the-value-proposition-of-amd-ryzen-5-2400g?post=1069243<]afflicted with stupidity[/url<] occasionally.

anotherengineer
anotherengineer
4 years ago

Nice review, I know you answered the question about the discrete GPU/CPU, but… Would it be possible to put that under the testing notes going forward please/thanks? I agree with the issue of bottle-necking and using a CPU to avoid it, however I think in this budget segment, it probably would have been better to pair it with the i3 to provide a more realistic frame time/rate one could expect in that budget range, then add the *note the 460/1050 are/could be bottle-necked by the cpu on certain games* Or On the testing graphs * beside the 460-1050 – a… Read more »

farmpuma
farmpuma
4 years ago
Reply to  JustAnEngineer

Definitely a topic(s) worthy of further exploring. Possible working title – Budget Gamer / HTPC Upgrade Path – Myth or Monumental Improvements?

Jeff Kampman
Jeff Kampman
4 years ago
Reply to  NoOne ButMe

Turns out Hitman DX12 is just flat broke on the GT 1030; DX11 is fine.

jarder
jarder
4 years ago
Reply to  thx1138r

Good question, I don’t have a direct comparison, but you can see that from here:
[url<]https://hothardware.com/reviews/amd-raven-ridge-ryzen-3-2200g-and-ryzen-5-2400g-am4-apu-review?page=6[/url<]
That the 2400G gets roughly double the benchmark scores of the previous-generation i7-5775C. Then, from a review of the i7-6770HQ (Skull Canyon):
[url<]https://www.techspot.com/review/1202-intel-skull-canyon-nuc6i7kyk/page6.html[/url<]
We can see that the i7-6770HQ only beats the i7-5775C by 10% or so in real games. So, this is definitely not an apples-to-apples comparison, but it looks pretty clear the 2400G would be ahead.

Mr Bill
Mr Bill
4 years ago
Reply to  NoOne ButMe

Wonder if there is any way to characterize the GPU workload each game creates by breaking out the percentages of operations.

NoOne ButMe
NoOne ButMe
4 years ago
Reply to  maroon1

ROPs are 32 v. 8 i believe.

could be that?

maroon1
maroon1
4 years ago

There must be something wrong with the Hitman benchmark.

The GTX 1050 performed more than 3x faster than the GT 1030 in this game, yet in other games it was often less than 2x faster. Any reason why the results are so poor for the GT 1030 in Hitman, even compared to the other Nvidia GPU?! Even if you go by the specs (TFLOPS, memory bandwidth), the GTX 1050 should not be more than 2x faster than the GT 1030, never mind 3x faster.

If it wasn't for Hitman, the GT 1030 would have better performance than the 2400G APU.

tipoo
tipoo
4 years ago
Reply to  Klimax

Historically they’ve spent more transistors so it could run at lower average power, rather than have higher performance.

ermo
ermo
4 years ago
Reply to  Jeff Kampman

Jeff,

FYI, in addition to the content you deliver, the fact that you consistently engage with TR’s readership in this way is why I will renew my subscription. Kudos. =)

leor
leor
4 years ago
Reply to  JustAnEngineer

Here ya go.

[url<]http://www.tomshardware.com/reviews/amd-ryzen-5-2400g-zen-vega-cpu-gpu,5467-7.html[/url<]

derFunkenstein
derFunkenstein
4 years ago
Reply to  Jeff Kampman

That’s fair. I’ve had the luxury of a couple days since the review went up and it only dawned on me this morning. 🙂

Jeff Kampman
Jeff Kampman
4 years ago
Reply to  derFunkenstein

I’ve been playing with similar configurations and there will likely be a rethink of this as soon as I get a spare second. The original conclusion was formulated in less-than-ideal conditions.

derFunkenstein
derFunkenstein
4 years ago

Not sure I agree with the conclusion on the Ryzen 5 2400G. Core i3-8100 on Newegg is $120. It only goes in a Z370 motherboard right now, which starts at around $120 for the cheapest I could find. Total platform cost = $240. Ryzen 5 2400G is $170. If you don’t want to overclock, you could stuff it into an A320 board for $50. Total platform cost = $220. If you want to OC you can spend $10 more on an ASRock AB350M for a total cost of $230. On the CPU benchmarks, the 2400G wins hands down. With integrated… Read more »

thx1138r
thx1138r
4 years ago

Anybody see a review comparing these chips to the Skull canyon APU (i7-6770HQ)?
Be interesting to see if the Ryzen APU’s have brought the gaming APU crown back to AMD. (Yes I know Hades Canyon is in the pipeline and will have 2-3x the performance, but it’s not released yet and we all know it’s going to be expensive).

NoOne ButMe
NoOne ButMe
4 years ago
Reply to  Klimax

1. few transistors
2. equal/lower clockspeeds
3. worse drivers - AMD's drivers are better than Intel's when it comes to GPUs.
4. AMD's internal buses are better than the ring bus Intel uses

little more detail/guesswork:
1. I'm guessing probably 60-80% of the transistors (630 to vega 11)
2. Intel used to have a clockspeed advantage, at least in peak, but AMD has closed/passed this.
3. Look at their drivers side by side…
4. As I understand, the ring bus is shared between CPU only and GPU-CPU products that Intel makes. Tweaks will be made, but AMD has, or at least… Read more »

anubis44
anubis44
4 years ago
Reply to  maroon1

“DDR4 2400 is probably what the average consumers is going to use.”

Do that and you’ve annihilated the performance you could have had from the Raven Ridge APU. It’s utterly pointless and stupid. If you can’t afford DDR4 3000 or 3200 MHz ram, then forget it.

NoOne ButMe
NoOne ButMe
4 years ago
Reply to  Shobai

knowing me, I think i took the term I have never written, and miswrote it…
oops!

Airmantharp
Airmantharp
4 years ago
Reply to  Jeff Kampman

[b<]maroon1[/b<]'s outrage is a bit out of place, but the point does remain: DDR4-3200 [b<][i<]CAS 14[/i<][/b<] was used for the review, which is essentially the fastest RAM that Ryzen can currently support.

derFunkenstein
derFunkenstein
4 years ago
Reply to  Redocbew

The dude had a snit over something either last night or this morning. When I checked the forums this morning, there was a thread on the BP of him asking for TR forum admins to delete his account. He “no longer want[s] to be a part of TR”. It’s gone now, though.

Jeff Kampman
Jeff Kampman
4 years ago
Reply to  maroon1

8 GB of DDR4-3200 CL16 is $103: [url<]https://www.newegg.com/Product/Product.aspx?Item=N82E16820231900[/url<]
8 GB of DDR4-2400 CL15 is $92: [url<]https://www.newegg.com/Product/Product.aspx?Item=N82E16820231886[/url<]

With memory prices as they are right now, it would be insane for an APU builder to consider saving $11 on the much slower RAM. I wouldn't offer the G.Skill Flare X kit that AMD sent as a representative kit for a sub-$500 system build, but obtaining a smaller and slightly higher-latency DDR4-3200 kit is hardly out of the question for such a system. Also, have you looked at graphics-card prices lately? If you thought RAM was unreasonably expensive...

maroon1
maroon1
4 years ago

Why use DDR4-3200?! You are telling me that the average Joe who buys a budget APU is going to use DDR4-3200 with low latency?!

DDR4 2400 is probably what the average consumers is going to use.

EDIT
The 16 GB G.Skill Flare X DDR4 kit they used for the APU costs 250 dollars on Newegg:
[url<]https://www.newegg.com/Product/Product.aspx?Item=N82E16820232530[/url<]
You can buy an i5-8400 or Ryzen 5 1600 with cheaper RAM and spend the money on a better GPU. You get much better CPU and GPU performance than this APU with very expensive RAM.

Redocbew
Redocbew
4 years ago
Reply to  chuckula

Dude, don’t leave us hanging…

chuckula
chuckula
4 years ago
Reply to 

TL;DR.

Redocbew
Redocbew
4 years ago
Reply to  JustAnEngineer

I don’t care what the intended budget is. I don’t care if AMD or Intel tells me by way of pricing where their products should be used. The whole point is for me to figure that out on my own, and I don’t think you can have it both ways here. Either you test for performance and choose your comparison parts to showcase that, which is what Jeff did here, or you forgo any kind of comparative analysis at all and do the kind of budget build you’re talking about. Almost by definition the budget build is not going to… Read more »

Anovoca
Anovoca
4 years ago
Reply to  chuckula

If media streaming is all you want, then an HTPC is overkill. Roku’s or even a raspi are MUCH cheaper; and that is assuming your TV doesn’t have all the apps installed on it already to begin with.

Anovoca
Anovoca
4 years ago
Reply to  thx1138r

Personally I find a headless server with a good discrete GPU and cheap streaming boxes at each TV to be a more elegant solution. It costs more to implement at the start, but it is much easier and cheaper to buy another rokhu or Shield console and add it to other TVs if you want to expand your capabilities in the future compared to building a second HTPC.

Mr Bill
Mr Bill
4 years ago
Reply to  Voldenuit

The 1050 vs 460 have a bottleneck at 1600×900. But I suspect that a graph of FPS vs resolution would have a crossover in favor of the 460 at higher resolutions. Maybe AMD felt that designing for quick response at low resolutions was less important than optimizing frametimes at higher resolutions. This has been discussed in previous reviews.

fredsnotdead
fredsnotdead
4 years ago
Reply to  Jeff Kampman

Yup, now I see all the IGPs. How about my suggestion to use % of total time in the “Time Spent Beyond…” graphs?

Zizy
Zizy
4 years ago
Reply to  Redocbew

You *are* testing performance – GPU performance as bottlenecked by the CPU you are likely to pair with. Which is the most relevant data point in the end – this is similar to the computers people actually use. You generally don’t pair G4560 with 1080Ti and you don’t pair 8700K with 560, so those numbers aren’t very relevant. If 1030, 1050 and 560 aren’t exposing CPU bottlenecks even when using G4560, R3 1200 or i3 8100, or whatever other cheap CPUs people might get for their build, then good, numbers in the article are valid even though a better CPU… Read more »

JustAnEngineer
JustAnEngineer
4 years ago
Reply to  Redocbew

The problem with testing the discrete GPUs with a [url=https://www.newegg.com/Product/Product.aspx?Item=N82E16819117827<]$361[/url<] (+$$ for heatsink) Core i7-8700K CPU is that it's a rather different budget. The [url=https://www.newegg.com/Product/Product.aspx?Item=N82E16819117739<]$85[/url<] Pentium G4600, [url=https://www.newegg.com/Product/Product.aspx?Item=N82E16819113446<]$105[/url<] Ryzen 3 1200, [url=https://www.newegg.com/Product/Product.aspx?Item=N82E16819117822<]$121[/url<] Core i3-8100 and [url=https://www.newegg.com/Product/Product.aspx?Item=N82E16819113445<]$125[/url<] Ryzen 3 1300X are the sorts of CPUs that would likely be included in a budget build. I suspect that what started the foray into using the high-end CPU was a performance anomaly that other sites also reported.

[quote="W1zzard at TechPowerUp"<]You can also pair the new Ryzen APUs with a separate graphics card to increase 3D performance.... We used a GTX 1080 for our discrete GPU performance testing and saw surprising performance numbers that are lower than expected, especially at high resolutions, which is strange because the GPU should be the limiting factor here, not the CPU. We reported those results to AMD a week ago, but haven't been given an explanation for these numbers. It's not a configuration problem either. When swapping the Ryzen G CPU for a normal Ryzen CPU without changing any BIOS or software settings, using the same Windows installation and drivers, performance is back to expected values.[/quote<]

Klimax
Klimax
4 years ago
Reply to  barich

Since focus was gaming, I thought it was most relevant.

MrJP
MrJP
4 years ago
Reply to  Jeff Kampman

Hi Jeff. Thanks for taking the time to reply and I do understand the decision you had to make given the limited time it sounds like you had to pull this together. Having said that, I’d echo the requests to do a little more testing if you could to have a look at a slightly more level playing field. I’m genuinely interested in this as at some point I’m looking to put together an entry level gaming PC for the kids. It’s not so much the exact price parity argument, but how much extra performance would I get for spending… Read more »

Redocbew
Redocbew
4 years ago
Reply to  Welch

Yeah, and 8GB is still plenty of space if there’s no specific requirements. If I was in the market for HTPC parts one of these chips with 8GB memory, an M.2 SSD, and a picoPSU would make for a pretty nifty PC that’s easily hide-able.

Redocbew
Redocbew
4 years ago
Reply to  MrJP

With the estimated global power usage of mining being compared with various countries of the world it doesn’t seem like there’s much of a draw towards passive components.

Redocbew
Redocbew
4 years ago
Reply to  K-L-Waster

That was the first thing I thought of also. To me “vegazen” sounds like a particularly militant kind of vegetarian.

Redocbew
Redocbew
4 years ago
Reply to  JustAnEngineer

Once you start doing that, then you’re no longer testing performance. You’re not really testing the subject of the article at all. It requires no testing to determine the price of any of these components, but it does require testing to determine their performance, and price doesn’t mean anything to me without knowing how they perform.

Bauxite
Bauxite
4 years ago
Reply to  jensend

I’m almost afraid to bring any attention to this, but right now ECC samsung b-die modules can be found cheaper than a lot of the tweaker kits. You can overclock it decently on AM4, and actually know the real limits right away instead of trying to guess from random instability.

Redocbew
Redocbew
4 years ago
Reply to  JustAnEngineer

Only if the bank is half way snowed under, and there’s no snowmobiles around to give chase. If the dude has to drive away, then all bets are off.

JustAnEngineer
JustAnEngineer
4 years ago
Reply to  EzioAs

I understood that most of the biathletes had a day job in the army of their home country. Are you suggesting that Martin Fourcade should instead rob banks in the off-season?

Redocbew
Redocbew
4 years ago
Reply to  chuckula

I don’t believe I ever have now that I think about it. I’ve had systems refuse to boot plenty of times, but none that even refused to post because of an old BIOS.

It is sort of disappointing though that being able to flash a board sans-CPU isn’t a standard feature these days.

chuckula
chuckula
4 years ago
Reply to  Redocbew

People forget that while I enjoy satire, I’ve also been there done that when it comes to this type of situation.

smilingcrow
smilingcrow
4 years ago
Reply to  Jeff Kampman

Thanks Jeff. I’ve been aware of DAWBench for ages probably via SoundOnSound but never looked into it.
I have a lot of Kontakt libraries so was wondering how much they vary in terms of system load.
Maybe the choice of Kontakt effects a particular instrument uses is a large factor?
I came across a Kontakt Instrument GUI design tool a few days ago but I imagine it’s out of my league or even needs!
[url<]http://www.rigid-audio.com/kgmv2.html[/url<]
Looks interesting though, but not sure how much you can do just using Kontakt when starting from scratch. I have a couple of 28" gongs that might be fun to sample.

Redocbew
Redocbew
4 years ago
Reply to  chuckula

Oh sure. Wonderful. Awesome. That’s great. Really, just great. Chuck inspires a PSA, and now we’re stuck with him for good(I kid, I kid…).

Seriously though, I wonder how many years it’s going to take before this is no longer a thing. I personally considered the upgrade path for CPUs to be dead years ago, or at least more trouble than it’s worth, but there always seems to be a surprising number of people hanging on to the idea.

Rza79
Rza79
4 years ago
Reply to  Jeff Kampman

I guess I’ll get an A6-9500 in stock to pre-flash all the boards I’ll use.

chuckula
chuckula
4 years ago
Reply to  Jeff Kampman

This post really needs to be made into a story to get the word out.
There are going to be some unhappy early adopters who didn’t get lucky with a fresh motherboard that was pre-flashed with the newest firmware.

Especially when the number of people “upgrading” from a RyZen system bought last year to an APU this year is clearly tiny.

Jeff Kampman
Jeff Kampman
4 years ago
Reply to  Rza79

At least with the Gigabyte boards I have here, they all needed a BIOS update before Raven Ridge chips would boot. The MSI board I got from AMD itself was already primed to work with them, so I can’t say for sure.

chuckula
chuckula
4 years ago
Reply to  Rza79

You might be interested in [url=https://techreport.com/news/33232/motherboard-makers-are-all-set-for-amd-ryzen-desktop-apus?post=1068258<]this post.[/url<]

Rza79
Rza79
4 years ago

Does an AM4 motherboard without the newest BIOS boot with a Raven Ridge CPU installed, or does it need a BIOS update before a Raven Ridge CPU is installed? Historically, Intel motherboards needed a BIOS update before a new-generation CPU was installed or else they wouldn't boot.
If AM4 motherboards do need a BIOS update before they can even boot with a Raven Ridge CPU, then every purchase made now won't work for most customers, since most boards sold now still have a September or October BIOS.
