AMD’s Ryzen 3 2200G and Ryzen 5 2400G processors reviewed

Morning, folks. Today is the day when we can finally share performance details for the desktop versions of AMD’s Ryzen processors with Radeon Vega graphics, or Ryzen APUs for short. AMD isn’t using the term “accelerated processing unit” to refer to these chips any longer, but it’s a whole lot easier to call them APUs than it is to type out “Ryzen processor with Radeon Vega graphics” every time we want to refer to the family of chips. Naming conventions aside, what matters most is that AMD finally has a competitive CPU core that it can fuse with its muscular graphics processors, and it’s used those resources to form a most exciting pair of chips for entry-level gaming builds, small-form-factor game boxes, and HTPCs.

I could regale you with a wealth of background information on the silicon marriage of a single Zen core complex and Vega graphics here, but I am out of time as of this very moment. Thing is, we pretty much know the deal with Raven Ridge. Check out my post about AMD’s pre-CES event for the ground rules of AMD’s desktop APUs, along with our review of the mobile Ryzen 5 2500U and my initial write-up of the Raven Ridge silicon from a while back for ample general information on the red team’s blend of its core competencies. At this stage, I felt it was most important to get our performance results out in the open rather than rehashing a great deal of already-public information. For the moment, enjoy our full test results and slightly-less-full analysis for these chips, and feel free to debate amongst yourselves in the comments.

Our testing methods

As always, we did our best to deliver clean benchmarking numbers. We ran each test at least three times and published the median of those results.
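In code form, that reduction is nothing fancy. Here's a minimal sketch in Python (the run values are invented for illustration):

    import statistics

    # Three hypothetical runs of the same benchmark (values for illustration only)
    runs_fps = [62.4, 61.8, 63.1]

    # We report the median, which shrugs off the occasional outlier run
    # better than the mean does
    print(f"Reported result: {statistics.median(runs_fps)} FPS")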

Our test systems were configured as follows:

Processor Ryzen 3 2200G | Ryzen 5 2400G | Ryzen 3 1300X | Ryzen 5 1500X
CPU cooler AMD Wraith (95W)
Motherboard MSI B350I Pro AC
Chipset AMD B350
Memory size 16 GB
Memory type G.Skill Flare X 16 GB (2x 8GB) DDR4-3200
Memory speed 3200 MT/s (actual)
Memory timings 14-14-14-34 1T
System drive Intel 750 Series 400GB

 

Processor AMD Athlon X4 845 | AMD A10-7850K
CPU cooler AMD Wraith (95W)
Motherboard Asus Crossblade Ranger
Chipset AMD A88X
Memory size 16 GB
Memory type Corsair Vengeance Pro Series 16 GB (2x 8 GB) DDR3-1866
Memory speed 1866 MT/s (actual)
Memory timings 9-10-9-27
System drive Samsung 850 Pro 512 GB

 

Processor Core i3-8100 (simulated via Core i5-6600K at 3.6 GHz and 65 W) | Core i5-8400
CPU cooler Cooler Master MasterAir Pro 3
Motherboard Gigabyte Aorus Z270X-Gaming 8 | Gigabyte Z370 Aorus Gaming 7
Chipset Intel Z270 | Intel Z370
Memory size 16 GB
Memory type G.Skill Trident Z 16 GB (2x 8 GB) DDR4-3200
Memory speed 3200 MT/s (actual)
Memory timings 14-14-14-34 2T
System drive Samsung 960 Pro 512 GB

We used the following system to host our discrete GPUs for testing:

Processor
Intel Core i7-8700K
CPU cooler Corsair H110i 280-mm liquid cooler
Motherboard Gigabyte Z370 Aorus Gaming 7
Chipset Intel Z370
Memory size 16 GB
Memory type G.Skill Trident Z DDR4-3200 (rated) SDRAM
Memory speed 3200 MT/s (actual)
Memory timings 14-14-14-34 2T
System drive Samsung 960 Pro 512 GB

Some other notes regarding our testing methods:

  • Each motherboard was updated to the most recent firmware version available prior to testing, including pre-release firmware versions available through processor manufacturers.
  • Our Intel test systems were both updated with Meltdown mitigations through Windows Update and Spectre mitigations through firmware updates. These patches were confirmed to be in use through the InSpectre utility.
  • Each software utility or program used in our benchmarking was the most recent version publicly available prior to our testing period. Where necessary, we used beta versions of certain utilities as recommended by CPU manufacturers for the best compatibility with the systems under review.
  • Each system used Windows 10’s Balanced power plan. Our Ryzen systems were set up with AMD’s Ryzen Balanced power plan.
  • Unless otherwise noted, our gaming tests were conducted at 1600×900 in exclusive fullscreen mode. Vsync was disabled both in-game and in the graphics driver control panel where possible.
  • “Multi-core enhancement” or “multi-core turbo” settings were disabled in our motherboards’ firmware.

Our testing methods are generally publicly available and reproducible. If you have questions regarding our testing methods, you can email me, leave a comment on this article, or join us in our forums. We take the integrity of our test results seriously and will go to reasonable lengths to clear up any apparent anomalies.

 

Memory subsystem performance

The AIDA64 utility includes some basic tests of memory bandwidth and latency that will let us peer into the differences in behavior among the memory subsystems of the processors on the bench today, if there are any.

With the same DDR4-3200 CL14 RAM in the DIMM slots of all of our systems, AMD’s chips take a small advantage in raw bandwidth. Memory latency, on the other hand, continues to favor Intel’s microarchitectures by a wide margin. AMD has slightly decreased the latency of its chips’ integrated memory controllers in the move from Summit Ridge to Raven Ridge, but the difference likely isn’t large enough to translate into noticeably higher performance.

Some quick synthetic math tests

AIDA64 also includes some useful micro-benchmarks that we can use to sketch out broad differences among CPUs on our bench. The PhotoWorxx test uses AVX2 instructions on all of these chips. The CPU Hash integer benchmark uses AVX, while the single-precision FPU Julia and double-precision Mandel tests use AVX2 with FMA. The Ryzen 3 2200G had to sit out the PhotoWorxx test, as running that benchmark caused a hard lock on the host system.

Core for core and thread for thread, the Ryzen 5 2400G can’t outpace the similarly-provisioned Ryzen 5 1500X. Weirdly, it’s the four-core, four-thread Ryzen 3 1300X that leads the AMD pack here.

As we’ve long understood now, Ryzen CPUs include support for Intel’s SHA Extensions, meaning that they can accelerate the algorithm behind this micro-benchmark. That acceleration explains the wide gulf between even the Ryzen 3 1300X and the Core i5-8400’s otherwise-impressive score here.
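If you'd like a rough sense of hashing throughput on your own machine, a quick Python sketch along these lines works. Note that whether SHA hardware acceleration actually kicks in depends on the OpenSSL build behind hashlib, so treat this as an illustration of the workload rather than a stand-in for AIDA64's CPU Hash test:

    import hashlib
    import time

    # Hash 256 MB of data in 1-MB chunks and report throughput.
    # Whether SHA Extensions get used depends on the OpenSSL build
    # backing hashlib, so treat the result as a rough indicator only.
    CHUNK = b"\x00" * (1024 * 1024)
    TOTAL_MB = 256

    h = hashlib.sha256()
    start = time.perf_counter()
    for _ in range(TOTAL_MB):
        h.update(CHUNK)
    elapsed = time.perf_counter() - start

    print(f"SHA-256 throughput: {TOTAL_MB / elapsed:.0f} MB/s")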

In these tests of floating-point performance, Intel’s wider AVX units give its chips a wide lead.

Now that we have a basic idea of how these chips perform, let’s get to gaming.

 

Doom (Vulkan)
Doom‘s Vulkan renderer is a familiar sight in our graphics-performance reviews by now. We were able to crank the game all the way up to medium settings at 1600×900 to give all of our test mules a workout.


The Radeon Vega IGP duo gets off to a fine start in Doom. The Ryzen 3 2200G’s Vega 8 IGP crushes its similarly-provisioned Kaveri predecessor, while the 2400G’s Vega 11 nearly keeps pace with the GT 1030 in both average frame rates and delivered smoothness (as measured by the 99th-percentile frame time). That’s spectacular performance from an IGP without its own dedicated graphics memory. The UHD 630 runs the game without crashing, at least.

One bit of oddness at this resolution is how thoroughly the GTX 1050 outpaces the RX 460, in contrast with Radeons’ usual advantages under Doom‘s Vulkan renderer. I suspect we’re seeing a bottleneck in the driver or something from the Radeon side that the GTX 1050 doesn’t suffer from. This isn’t the only time you’ll see similar behavior from the GTX 1050 in this review, so stow your pitchforks. I’m just as piqued by this issue as you are.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time our graphics card spends beyond certain frame-time thresholds, each with an important implication for gaming smoothness. Recall that our graphics-card tests all consist of one-minute test runs and that 1000 ms equals one second to fully appreciate this data.

The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33.3 ms correlates to 30 FPS, or a 30-Hz refresh rate. Go lower than that with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame, while a constant stream of frames at 8.3-ms intervals would correspond to 120 FPS.
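For those who like to see the bookkeeping spelled out, here's a minimal sketch of how a time-spent-beyond-X figure falls out of a list of per-frame render times, assuming we count only the portion of each frame past the threshold (the frame times below are invented for illustration):

    # Sum the portion of each frame time past a "badness" threshold,
    # in the spirit of our time-spent-beyond-X graphs. Times are in ms.
    def time_spent_beyond(frame_times_ms, threshold_ms):
        return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

    # Hypothetical frame times from a test run, for illustration only
    frames = [14.2, 16.9, 35.0, 12.8, 55.1]
    for threshold in (50.0, 33.3, 16.7, 8.3):
        print(f"beyond {threshold} ms: {time_spent_beyond(frames, threshold):.1f} ms")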

With the range of integrated graphics solutions we’re testing, there’ll be a wealth of opportunity to begin our analyses at the 50-ms mark. As you already guessed, the UHD Graphics 630 IGP is not delivering anything approaching a smooth or enjoyable gaming experience, spending over 12 seconds of our one-minute test run on especially tough frames. The A10-7850K joins the UHD 630 at the 33.3-ms mark with 11 seconds spent beyond 33.3 ms, while the Vega 8 IGP turns in a tiny bit of roughness past the 33.3-ms mark.

Despite the Vega 11 IGP’s apparent performance parity with the GT 1030 in our initial graphs, the entry-level GeForce delivers a noticeably smoother gaming experience in Doom by spending five fewer seconds on frames that take longer than 16.7 ms to render. The Vega 8 spends a full third of our test run on frames that take longer than 16.7 ms to render, suggesting that lower detail settings or resolution would be in order for a smoother gaming experience.

 

Rise of the Tomb Raider
Rise of the Tomb Raider is both gorgeous and fully modern under the hood, thanks to its DirectX 12 rendering path. We used a blend of low settings at 1600×900 to get this demanding game up and running on the Vega IGP duo.


With those modest settings, the Ryzen APU duo bookends the GT 1030 in our average-FPS measure of performance potential. 99th-percentile frame times above 33.3 ms—well above, in the case of the Ryzen 3 2200G’s Vega 8 and the GT 1030—suggest anything but a smooth ride for our competitors, though. Even the Ryzen 5 2400G can’t quite deliver 99% of its frames in under 33.3 ms.


A look at just how much time each chip spends past 33.3 ms paints a better picture for the Ryzen 5 2400G than it does the GT 1030 or Ryzen 3 2200G. The beefy integrated Vega 11 spends just a fifth of a second in total on tough frames that take longer than 33.3 ms to render, while the GT 1030 and Vega 8 tally up four to five seconds on that tough work. If we use the common metric of a consistent 30 FPS as a (forgiving) definition of “playable” for these chips, the Vega 11 IGP nearly aces the test. Impressive work from integrated graphics.

 

Grand Theft Auto V
Grand Theft Auto V‘s online mode remains one of the most popular games around, and these IGPs seem like ideal vehicles for allowing budding miscreants to drop in on Los Santos. We used a combination of high settings at 1600×900 with “high-resolution shadows” and “long shadows” enabled in the game’s advanced graphics options.


GTA V tends to favor GeForces, and our IGP experience proves no different. The extra shader power of the Ryzen 5 2400G’s Vega 11 isn’t good for much extra performance potential compared to the Vega 8 on the Ryzen 3 2200G, but it does keep the beefier integrated Vega on the right side of 33.3 ms in our 99th-percentile frame-time accounting. As you can see from its pencil-thin frame-time graph, however, the GT 1030 delivered a significantly smoother and more fluid gaming experience still.


The Ryzen 3 2200G’s potentially concerning 99th-percentile frame time turned out to be less of an issue than it first appeared. Our time-spent-beyond-X graphs let us see that the lesser Vega IGP spends just over a tenth of a second in total on frames that take longer than 33.3 ms to render.

The really interesting look into our contenders’ results comes at the 16.7-ms mark, where the GT 1030 spends less than a third of the time past that threshold working on tough frames compared to the Ryzen 5 2400G’s Vega 11. As a result, GTA V just feels better on the entry-level GeForce than it does on the Vega IGPs.

 

Rocket League

Let’s take a detour out of the big leagues and into Rocket League. This popular esports title runs on the Unreal Engine. We pushed the resolution up to 1920×1080 here and used the game’s highest-quality appearance settings.


Rocket League proves a close match for Pascal and Vega. The Ryzen 5 2400G closely shadows the GT 1030 in both performance potential and delivered smoothness, while the Ryzen 3 2200G commendably turns in 99% of its frames a couple milliseconds under the critical 33.3-ms mark. The GT 1030 ultimately provides the most fluid and smooth gameplay here by a small margin among our entry-level contenders.


Some spikiness in our frame-time graphs translates into time past 50 ms on the board for both Ryzen APUs’ Vega IGPs, suggesting one or two noticeable hitches in gameplay. Happily, neither Vega IGP spends much time past 33.3 ms. We have to draw the line at 16.7 ms before any of the Ryzen APUs or the GT 1030 give us cause for concern. Here, the GT 1030 spends three fewer seconds in total than the Vega 11 on frames that take over 16.7 ms to render, while the Vega 8 spends over five seconds more than its more powerful sibling.

 

Dota 2

With a global audience and a burgeoning tournament scene boasting multi-million-dollar prize pools, one could say that Dota 2 is a big deal, and it’s another test that any integrated graphics processor worth its salt has to pass. We ran the game at 1920×1080 on its “Best Looking” preset to see how the Vega duo fares.


Dota 2 is clearly bottlenecked somewhere other than shader-processing resources. In fact, the Ryzen 5 2400G’s Vega 11 turns in a worse 99th-percentile frame time than its less-well-endowed sibling here. The GT 1030 pulls well ahead in both performance potential and delivered smoothness. Folks eyeing a spot at the next International should probably join team green.


Where the GT 1030 spends just about four seconds in total past 16.7 ms on tough frames, the Ryzen 5 2400G’s Vega 11 puts 14 seconds in that bucket, and the Ryzen 3 2200G’s Vega 8 chalks up nearly 16. Dota 2 certainly isn’t unplayable on these parts, but some dialing-back of resolution and eye candy would seem warranted.

 

Hitman

We enter the home stretch for our gaming tests with another DirectX 12 monster. Hitman puts every one of a GPU’s shader processors to work, and it’s still one of the more demanding triple-A games around. We dialed resolution back to 1600×900 for this test, cranked up shadows and textures as far as our chips’ 2 GB of VRAM would allow, and used mostly high settings save for leaving screen-space ambient occlusion off.


Although Hitman has long served as a showcase for AMD’s graphics cards, I was still surprised by how much hurt it puts on the GT 1030. The entry-level Nvidia card just isn’t delivering a playable experience here, while the Ryzen 5 2400G’s Vega 11 runs this game with little fuss. If we dialed back settings a bit more, the Ryzen 3 2200G’s 99th-percentile frame time would likely drop below the critical 33.3-ms mark, as well.


The GT 1030 is already struggling hard as we look at the time spent past the 50-ms mark. It’s most instructive, then, to check how the Vega duo is performing with a look at the 33.3-ms mark. Happily, neither Vega IGP puts noticeable numbers on the board past 33.3 ms. Flip over to the 16.7-ms mark, however, and it becomes clear that neither Vega IGP is delivering a perfectly fluid experience. Both Vegas spend around a third of this test run working on tough frames that would drop the delivered frame rate below 60 FPS.

 

Tomb Raider (2013)
Tomb Raider‘s 2013 reboot revitalized the franchise, and it remains a fun and visually-rich-enough experience to justify revisiting on these IGPs. We cranked the game’s resolution back up to 1920×1080 and used a slightly-tweaked version of its High preset to add tessellation and a higher degree of anisotropy to its texture filtering.


After the beating that was Hitman, the GT 1030 dusts itself off and pulls dead-even with the Ryzen 5 2400G here in both our average-FPS measure of performance potential and in 99th-percentile frame times. Both the beefy Vega and the pint-size Pascal turn in 99th-percentile results well under the 33.3-ms mark we’re so keenly watching for, and they’re plenty capable of running this game well at our relatively high-resolution and visually-rich settings choices.


Our time-spent-beyond-X graphs don’t put any more daylight between the GT 1030 and the Ryzen 5 2400G, either. The 16.7-ms threshold does show us where the performance of the Vega 8 and Vega 11 IGPs diverges, though. The Vega 11 spends about 11-and-a-half seconds on tough frames that take longer than 16.7 ms to render, while the Vega 8 spends nearly a third of our one-minute test run juggling those frames. The extra oomph of the Ryzen 5 2400G’s IGP might be worth having in older titles if you plan to make an attempt on the 1920×1080 summit.

Tomb Raider concludes our gaming performance results. The Ryzen 5 2400G’s Vega 11 IGP sometimes proves a worthy competitor or even superior to the GT 1030, an impressive achievement for a chip that has to get its memory bandwidth from system RAM instead of a dedicated pool of GDDR5. The Ryzen 3 2200G’s performance is certainly better than Intel’s UHD Graphics 630 IGP by a long shot, but its performance suggests that resolutions lower than our 1600×900 reference point and lesser amounts of eye candy will be friendliest to the Vega 8. Still, both Vega IGPs far outpace the DDR3-bound Radeon R7 graphics on the A10-7850K, and that’s an achievement to rival the move from AMD’s family of construction cores to the Zen architecture.

Let’s see just how much of an advance over Steamroller and Excavator Zen represents across our wide swath of productivity tasks now.

 

Javascript

In these tests of single-threaded latency and throughput, Raven Ridge parts prove themselves about on par with their Summit Ridge brethren and the newly Meltdown- and Spectre-hampered i3-8100. The Core i5-8400’s lofty 4 GHz boost clock seems to give it an edge in the JetStream and Octane benchmarks.

The Speedometer benchmark, a new addition to our test suite, puts the Ryzen 5 1500X, Core i3-8100, and i5-8400 well in the lead. Speedometer runs longer than any of our traditional benchmarks, which might prove a disadvantage for the 65-W AMD parts more so than the Intel competition.

Compiling code with GCC

File encryption with 7-zip

Disk encryption with Veracrypt

 

Cinebench

The evergreen Cinebench benchmark is powered by Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. The test runs with a single thread and then with as many threads as possible.

Blender rendering

Blender is a widely-used, open-source 3D modeling and rendering application. The app can take advantage of AVX2 instructions on compatible CPUs. We chose the “bmw27” test file from Blender’s selection of benchmark scenes to put our CPUs through their paces.

Corona rendering

Corona, as its developers put it, is a “high-performance (un)biased photorealistic renderer, available for Autodesk 3ds Max and as a standalone CLI application, and in development for Maxon Cinema 4D.”

Handbrake transcoding
Handbrake is a popular video-transcoding app. To see how it performs on these chips, we’re switching things up from some of our past reviews. Here, we converted a roughly two-minute 4K source file from an iPhone 6S into a 1920×1080, 30 FPS MKV using the HEVC algorithm implemented in the x265 open-source encoder. We otherwise left the preset at its default settings.

CFD with STARS Euler3D

Euler3D tackles the difficult problem of simulating fluid dynamics. It tends to be very memory-bandwidth intensive. You can read more about it right here. We configured Euler3D to use every thread available from each of our CPUs.

It should be noted that the publicly-available Euler3D benchmark is compiled using Intel’s Fortran tools, a decision that its originators discuss in depth on the project page. Code produced this way may not perform at its best on Ryzen CPUs as a result, but this binary is apparently representative of the software that would be available in the field. A more neutral compiler might make for a better benchmark, but it may also not be representative of real-world results with real-world software, and we are generally concerned with real-world performance.

 

Digital audio workstation performance

One of the neatest additions to our test suite of late is the duo of DAWBench project files: DSP 2017 and VI 2017. The DSP benchmark tests the raw number of VST plugins a system can handle, while the complex VI project simulates a virtual instrument and sampling workload.

We used the latest version of the Reaper DAW for Windows as the platform for our tests. To simulate a demanding workload, we tested each CPU with a 24-bit depth and 96-kHz sampling rate, and at two ASIO buffer depths: a punishing 64 and a slightly-less-punishing 128. In response to popular demand, we’re also testing the same buffer depths at a sampling rate of 48 kHz. We added VSTs or notes of polyphony to each session until we started hearing popping or other audio artifacts. We used Focusrite’s Scarlett 2i2 audio interface and the latest version of the company’s own ASIO driver for monitoring purposes.
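For context on why those buffer depths are punishing, the buffer size and sampling rate together set the deadline the CPU has to meet for every buffer. A quick sketch of that arithmetic:

    # Time available to fill one ASIO buffer: samples / sample rate.
    def buffer_ms(samples, rate_hz):
        return samples / rate_hz * 1000

    for rate in (96_000, 48_000):
        for depth in (64, 128):
            print(f"{depth} samples @ {rate // 1000} kHz: "
                  f"{buffer_ms(depth, rate):.2f} ms per buffer")

A 64-sample buffer at 96 kHz leaves the CPU just two-thirds of a millisecond to produce each buffer's worth of audio, which is why plugin and polyphony counts fall so sharply at that setting.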

A very special thanks is in order here for Native Instruments, who kindly provided us with the Kontakt licenses necessary to run the DAWBench VI project file. We greatly appreciate NI’s support—this benchmark would not have been possible without the help of the folks there. Be sure to check out their many fine digital audio products.





 

Conclusions

AMD’s Ryzen processors with Vega integrated graphics would seem to fix lots of things that prevented past APUs from finding a foothold in the market. Fast DDR4 memory offers copious bandwidth to a powerful Vega GPU. The Zen CPU core puts up competitive performance against Intel’s latest and greatest. An advanced 14-nm process helps Zen and Vega coexist in a thrifty 65 W thermal envelope. Those are a lot of advances to wrap up in one piece of silicon, and Raven Ridge shows what AMD is capable of when it’s firing on all cylinders.

For all that, Raven Ridge desktop parts don’t change AMD’s competitive position against Intel much on desktop CPU performance alone. For folks who are uninterested or only mildly interested in gaming, the Ryzen 5 2400G is simply too close in price to the considerably superior Core i5-8400. Intel’s UHD Graphics 630 IGP will suit non-gamers just fine, and the i5-8400’s potent Coffee Lake (née Skylake) cores and high all-core clock speeds will chew through even the toughest desktop workloads with aplomb. (Yes, the Ryzen 5 1600 still exists, but it needs a graphics card to make it useful. Tried to buy one of those recently?) The same basic reasoning is largely true of the Ryzen 3 2200G versus the Core i3-8100.

Evaluating these Ryzen chips on their CPU performance alone is willfully missing the point, though. As it always has with its APUs, AMD is aiming these things at the person who cares more about gaming than raw CPU power, but who also doesn’t have the cash for an entry-level discrete graphics card. In today’s crypto-crazy market, that means about $80 for a GeForce GT 1030 or $150 and up for a GTX 1050. (Excuse me while I curl into a fetal position and sob.) That kind of cash matters a lot in builds trying to stay south of the $500 mark.

The question, then, is whether AMD has truly taken the need for such a graphics card out of the equation for entry-level gaming PCs. To get an answer, we can evaluate the delivered smoothness of each integrated graphics processor against the discrete cards we tested using our 99th-percentile frame-time measure. It’s hard to make price-to-performance comparisons when the Vega 8 and Vega 11 are inseparable from their host CPUs, so I’ve simply taken the geometric mean of the 99th-percentile frame times each graphics processor delivers across all of our tests. I then converted that figure into FPS so our higher-is-better logic works.
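Here's a minimal sketch of that reduction, with hypothetical 99th-percentile frame times standing in for our measured results:

    from statistics import geometric_mean  # Python 3.8+

    # Hypothetical 99th-percentile frame times (ms), one per game tested
    p99_ms = [28.0, 35.5, 30.1, 25.4, 35.3, 35.1, 33.3, 20.1]

    # Geometric mean across games, converted to FPS so that
    # higher is better, matching the rest of our charts
    mean_ms = geometric_mean(p99_ms)
    print(f"99th-percentile FPS: {1000 / mean_ms:.1f}")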

If we go by the 30-FPS line that entry-level gamers often draw in the sand to mark playable performance, the overall picture is quite favorable for AMD. Our numbers suggest that the Ryzen 5 2400G will usually deliver 99% of its frames in under 33.3 ms (or at a rate above 30 FPS), and the Ryzen 3 2200G could get there with some settings tweaks. AMD’s most powerful integrated graphics processor yet might not beat out the GeForce GT 1030, but it’s still quite impressive that the 2400G and its Vega 11 IGP come as close as they do without the benefit of a pool of GDDR5 memory.

It’s worth noting that each graphics processor’s performance varied from game to game in our test suite despite the at-a-glance temptations of our 99th-percentile FPS metric, and the GT 1030 proved superior to the Vega IGPs in the pair of esports-y titles that drive so many PC purchases in this price range. Folks whose tastes run more to the triple-A than the twitchy will find plenty to like in the all-around competency of the Vega 11 IGP, but those laser-focused on digital sports will still find the best performance from something like a GT 1030. Whether that’s worth the extra dough—about $70 in a budget system build right now, by our reckoning—is up to the individual builder.

If you value all-around competence from your budget PC, however, it might be worth saving that $70 and enjoying the Ryzen 5 2400G’s powerful CPU-and-IGP combo. Our first system builds with the 2400G will certainly deliver better CPU performance than a Core i3-8100 gaming PC in tough tasks, and you don’t give up too much in the way of graphics prowess for the savings. In a price bracket that used to require sacrificing a ton of single-threaded CPU performance for a passable Radeon IGP, the Ryzen 5 2400G finally offers a well-balanced package. Overclockers will find free rein in the Ryzen 5 2400G if they want to tweak, as well, something the Core i3-8100 can’t claim.

TR Editor’s Choice, February 2018: AMD Ryzen 3 2200G and AMD Ryzen 5 2400G

The Ryzen 3 2200G, for its part, is a no-brainer for a hundred bucks. Dial back the resolution and eye candy a bit from our rather ambitious test settings, and the 2200G’s integrated Vega 8 graphics should prove a fine first step into the world of PC gaming or a capable companion for folks on the tightest budgets. Enthusiasts on a shoestring will find solid CPU performance with fully unlocked multipliers to play with for an extra shot of oomph, too. That kind of freedom simply isn’t available from the most affordable Intel CPUs. Folks concerned about productivity alone will still want to consider the Core i3-8100, but the Ryzen 3 2200G will certainly be the superior do-it-all part.

It’s that all-around competence and enthusiast-friendly nature that leads me to call the Ryzen 3 2200G and Ryzen 5 2400G TR Editor’s Choice award winners. We’re happy to see a pair of APUs that finally deliver on the promised power of Radeon graphics and AMD CPU cores on one chip, and we’re sure that PC builders will be, too.

Comments closed
    • Thbbft
    • 1 year ago

    WHERE IS THE SCATTER PLOT LABELED ‘GAMING’?

    People needing serious ‘productivity’ performance aren’t buying CPUs at this price point.

    • anotherengineer
    • 1 year ago

    Nice review, I know you answered the question about the discrete GPU/CPU, but…

    Would it be possible to put that under the testing notes going forward please/thanks?

    I agree with the issue of bottle-necking and using a CPU to avoid it, however I think in this budget segment, it probably would have been better to pair it with the i3 to provide a more realistic frame time/rate one could expect in that budget range, then add the *note the 460/1050 are/could be bottle-necked by the cpu on certain games*
    Or
    On the testing graphs * beside the 460-1050 – a xxx cpu was used to prevent bottlenecking, etc.

    It probably would have cut down this particular comment section 😉

    • maroon1
    • 2 years ago

    There must be something wrong with the Hitman benchmark

    The GTX 1050 performed more than 3x faster than the GT 1030 in this game. Yet in other games it was often less than 2x faster. Any reason why the results are so poor for the GT 1030 in Hitman, even compared to the other Nvidia GPU?!! Even if you go by the specs (tflops, memory bandwidth), the GTX 1050 should not be more than 2x faster than the GT 1030, never mind being 3x faster

    If it wasn’t for Hitman, the GT 1030 would have better performance than the 2400G APU

      • NoOne ButMe
      • 2 years ago

      ROPs are 32 v. 8 i believe.

      could be that?

        • Mr Bill
        • 2 years ago

        Wonder if there is any way to characterize the GPU workload each game creates by breaking out the percentages of operations.

        • Jeff Kampman
        • 2 years ago

        Turns out Hitman DX12 is just flat broke on the GT 1030; DX11 is fine.

    • derFunkenstein
    • 2 years ago

    Not sure I agree with the conclusion on the Ryzen 5 2400G.

    Core i3-8100 on Newegg is $120. It only goes in a Z370 motherboard right now, which starts at around $120 for the cheapest I could find. Total platform cost = $240.

    Ryzen 5 2400G is $170. If you don’t want to overclock, you could stuff it into an A320 board for $50. Total platform cost = $220. If you want to OC you can spend $10 more on an ASRock AB350M for a total cost of $230.

    On the CPU benchmarks, the 2400G wins hands down. With integrated graphics tests, it wins hands down. It still wins if you spend $70 more on a GT 1030 for the i3-8100.

    So why are you lukewarm on it? Because the 2200G “won” the browser tests and Rocket League by what could be a margin of error? For compiling, encryption, and compression it was miles ahead of the 2200G. For most of the games it delivered the difference between above 33.3ms and below 33.3ms. Surely the extra ~12-15% system cost is warranted by the performance gains on a dollars-to-performance ratio.

      • Jeff Kampman
      • 2 years ago

      I’ve been playing with similar configurations and there will likely be a rethink of this as soon as I get a spare second. The original conclusion was formulated in less-than-ideal conditions.

        • derFunkenstein
        • 2 years ago

        That’s fair. I’ve had the luxury of a couple days since the review went up and it only dawned on me this morning. 🙂

        • ermo
        • 2 years ago

        Jeff,

        FYI, in addition to the content you deliver, the fact that you consistently engage with TR’s readership in this way is why I will renew my subscription. Kudos. =)

    • thx1138r
    • 2 years ago

    Anybody see a review comparing these chips to the Skull canyon APU (i7-6770HQ)?
    Be interesting to see if the Ryzen APU’s have brought the gaming APU crown back to AMD. (Yes I know Hades Canyon is in the pipeline and will have 2-3x the performance, but it’s not released yet and we all know it’s going to be expensive).

      • jarder
      • 2 years ago

      Good question, I don’t have a direct comparison, but you can see that from here:
      [url<]https://hothardware.com/reviews/amd-raven-ridge-ryzen-3-2200g-and-ryzen-5-2400g-am4-apu-review?page=6[/url<] That the 2400G gets roughly double the benchmark scores of the previous generation i7-5775c. Then from a a review of the i7 6770HQ (skull canyon): [url<]https://www.techspot.com/review/1202-intel-skull-canyon-nuc6i7kyk/page6.html[/url<] We can see that the i7 6770HQ only beats the i7-5775c by 10% or so in real games. So, this is definitely not an apples-to-apples comparison, but it looks pretty clear the 2400G would be ahead.

    • maroon1
    • 2 years ago

    Why use DDR4 3200Mhz ?! You are telling me that the average joe who buys a budget APU is going to use DDR4 3200Mhz with low latency ?!

    DDR4 2400 is probably what the average consumers is going to use.

    EDIT
    The 16GB G skill flare DDR4 they used for the APU COST 250 dollars on newegg
    [url<]https://www.newegg.com/Product/Product.aspx?Item=N82E16820232530[/url<] You can buy i5 8400 or Ryzen 6 1600 with cheaper RAM and spend the money on a better GPU. You get much better CPU and GPU performance this APU with very expensive RAM

      • Jeff Kampman
      • 2 years ago

      8 GB of DDR4-3200 CL16 is $103: [url<]https://www.newegg.com/Product/Product.aspx?Item=N82E16820231900[/url<] 8 GB of DDR4-2400 CL15 is $92: [url<]https://www.newegg.com/Product/Product.aspx?Item=N82E16820231886[/url<] With memory prices as they are right now, it would be insane for an APU builder to consider saving $11 on the much slower RAM. I wouldn't offer the G.Skill Flare X kit that AMD sent as a representative kit for a sub-$500 system build, but obtaining a smaller and slightly higher-latency DDR4-3200 kit is hardly out of the question for such a system. Also, have you looked at graphics-card prices lately? If you thought RAM was unreasonably expensive...

        • Airmantharp
        • 2 years ago

        [b<]maroon1[/b<]'s outrage is a bit out of place, but the point does remain: DDR4-3200 [b<][i<]CAS 14[/i<][/b<] was used for the review, which is essentially the fastest RAM that Ryzen can currently support.

      • anubis44
      • 2 years ago

      “DDR4 2400 is probably what the average consumers is going to use.”

      Do that and you’ve annihilated the performance you could have had from the Raven Ridge APU. It’s utterly pointless and stupid. If you can’t afford DDR4 3000 or 3200 MHz ram, then forget it.

        • JustAnEngineer
        • 1 year ago

        I agree that selecting PC4-2400 would be stupid when PC4-3200 is just [url=https://techreport.com/blog/33259/revisiting-the-value-proposition-of-amd-ryzen-5-2400g?post=1069073<]$14 more[/url<] and provides [url=https://www.techpowerup.com/reviews/AMD/Ryzen_5_2400G_Vega_11/17.html<]significantly improved performance[/url<]. Alas, even well-established system assemblers seem to be [url=https://techreport.com/blog/33259/revisiting-the-value-proposition-of-amd-ryzen-5-2400g?post=1069243<]afflicted with stupidity[/url<] occasionally.

    • Rza79
    • 2 years ago

    Does an AM4 motherboard without the newest BIOS boot with a Raven Ridge CPU installed or does it need a BIOS update before a Raven Ridge CPU is installed? Historically Intel motherboards needed a BIOS update before the new gen CPU was installed or else, no boot.
    If AM4 motherboards do need a BIOS update before they can even boot with a Raven Ridge CPU, then every purchase made now won’t work for most customers since most boards sold now still have a September or October BIOS.

      • chuckula
      • 2 years ago

      You might be interested in [url=https://techreport.com/news/33232/motherboard-makers-are-all-set-for-amd-ryzen-desktop-apus?post=1068258<]this post.[/url<]

      • Jeff Kampman
      • 2 years ago

      At least with the Gigabyte boards I have here, they all needed a BIOS update before Raven Ridge chips would boot. The MSI board I got from AMD itself was already primed to work with them, so I can’t say for sure.

        • chuckula
        • 2 years ago

        This post really needs to be made into a story to get the word out.
        There are going to be some unhappy early adopters who didn’t get lucky with a fresh motherboard that was pre-flashed with the newest firmware.

        Especially when the number of people “upgrading” from a RyZen system bought last year to an APU this year is clearly tiny.

          • Redocbew
          • 2 years ago

          Oh sure. Wonderful. Awesome. That’s great. Really, just great. Chuck inspires a PSA, and now we’re stuck with him for good(I kid, I kid…).

          Seriously though, I wonder how many years it’s going to take before this is no longer a thing. I personally considered the upgrade path for CPUs to be dead years ago, or at least more trouble than it’s worth, but there always seems to be a surprising number of people hanging on to the idea.

            • chuckula
            • 2 years ago

            People forget that while I enjoy satire, I’ve also been there done that when it comes to this type of situation.

            • Redocbew
            • 2 years ago

            I don’t believe I ever have now that I think about it. I’ve had systems refuse to boot plenty of times, but none that even refused to post because of an old BIOS.

            It is sort of disappointing though that being able to flash a board sans-CPU isn’t a standard feature these days.

        • Rza79
        • 2 years ago

        I guess I’ll get an A6-9500 in stock to pre-flash all the boards I’ll use.

    • shank15217
    • 2 years ago

    Anybody know if you can drive dual 4K monitors with these?

      • EndlessWaves
      • 2 years ago

      I’d be surprised if it couldn’t these days, although I haven’t seen a proper review that covers such points yet.

      The hard bit will be finding a decent budget motherboard with appropriate outputs.

    • Jeff Kampman
    • 2 years ago

    To make following our gaming results easier, I’ve replaced “Vega 8” and “Vega 11” with the names of their host SoCs in our graphs. You may need to Ctrl-F5 to see the change.

      • willmore
      • 2 years ago

      Thank you!!!

      • Usacomp2k3
      • 2 years ago

      Thanks!

    • HERETIC
    • 2 years ago

    Have a gut feeling AMD is going to have to reduce these prices a little.

    Pentium’s with the cheapest Ram one can buy still hold the low end-office box/granny box.

    The 2200G could be a real gem for a HTPC (waiting for someone to fully test)

    Think Pentium and low end GPU take up the next spot-Am wondering if the evil NV already
    has a plan to introduce a cheap version of 1050 here,which would effectively kill off 2400G.
    On the huge plus side-This is AMD,so expect up to 20% improvement over next 6 months…

    • Klimax
    • 2 years ago

    Jeff, do you know what was typical frequency for Intel 630 itself? ETA: During gaming benchmarks.

      • Jeff Kampman
      • 2 years ago

      I don’t but I really don’t believe that frequency is the thing holding the UHD 630 back.

        • Klimax
        • 2 years ago

        Unlikely, but knowing it would help to narrow down WTF is going on over there. (So much die space and so pitiful performance)

        Ok. thanks for answer.

          • NoOne ButMe
          • 2 years ago

          1. few transistors
          2. equal/lower clockspeeds-
          3. worse drivers- AMD’s drivers are better than Intel’s when it comes to GPUs.
          4. AMD’s internal buses are better than the ring bus Intel uses

          little more detail/guesswork:
          1. I’m guessing probably 60-80% of the transistors (630 to vega 11)
          2. Intel used to have a clockspeed advantage, at least in peak, but AMD has closed/passed this.
          3. Look at their drivers side by side…
          4. As I understand, the ring bus is shared between CPU only and GPU-CPU products that Intel makes. Tweaks will be made, but AMD has, or at least had, multiple buses for carrying info to the GPU back and forth. Intel probably has more beyond their ring bus, but AMD has certainly put a lot more work into it. Or more effective work.

          • tipoo
          • 2 years ago

          Historically they’ve spent more transistors so it could run at lower average power, rather than have higher performance.

    • Zizy
    • 2 years ago

    2200G should be the default 100$ CPU for all home PCs for now. Intel requires expensive boards as cheap ones aren’t out yet.
    2400G is a pretty decent cheap gaming box and has quite capable CPU and GPU. Both increases together make it worth 70$ extra over the 2200G, but the problem is that almost nobody needs both upgrades. People looking for office PCs don’t value the GPU improvement, while the ones looking for a gaming box don’t value the faster CPU much. But it would still be the part I would recommend to someone looking for a new budget gaming build. Great all-round thingy, even though the 2200G’s low price spoils this somewhat.

    Inevitable 2300G with 2400’s CPU (-100MHz?) and 2200’s GPU (+50MHz) would be a very nice i3 competitor. Slightly better CPU performance, much better GPU performance, overclocks, for just 10$ extra (assuming the same price as 1300X).

    As for RAM making these less attractive – well, 8GB 2400MHz RAM costs 80 eur, 3000MHz is 90. Both numbers are high, but the price difference isn’t that hard to stomach. Prices increase fast after 3000MHz though.

    • Bumper
    • 2 years ago

    Great review. I am seriously considering the 2200g for a htpc. I already got it loaded in a cart on amazon. My only dilemma is ram prices… I am having a hard time pulling the trigger on fast ram. How much slower would these be with cheaper ram?

    • Blytz
    • 2 years ago

    I’d like to see the bottleneck/benefits of using crappier ram, the ram used in this bench and something up at say 4000 and a little overclocking play to see what numbers can be drawn out with good ram/cooling.

      • willmore
      • 2 years ago

      Take a look at PCPER’s review, then. They nerfed the RAM speeds.

    • DancinJack
    • 2 years ago

    Hey I have a question. Weren’t all Ryzen desktop parts two CCX’s with so many cores disabled on them? If i’m not mistaken these are just one CCX + the GPU. Might be some cool benchmarks and metrics to compare against between the two.

      • derFunkenstein
      • 2 years ago

      The Ryzen 5 1500X is two CCXes with two cores + SMT in each, but the full 16MB of L3 enabled. Comparing that against the 2400G is what you want. The 2400G has less L3 cache but slightly higher clocks.

      But as Jeff mentioned elsewhere in the comments, the 2400G only ever ran with integrated graphics enabled, where the Ryzen 5 would have some discrete card, so it may still not be totally apples-to-apples. I have a feeling that the reason DAWBench was so very bad is due to the integrated graphics. That’s a bench where every bit of memory bandwidth helps. The lower cache doesn’t help either, but I’d be interested in those results using a discrete GPU.

    • gerryg
    • 2 years ago

    Will these work for some kind of Crossfire/Dual GPU setup thingy? I know AMD tried that with the A-series APUs, but I never really looked into it. These seem like a better candidate to match up with something like a RX550. Anybody know anything on this front?

      • gerryg
      • 2 years ago

      Ooh, or maybe if there will be a dual-socket mobo, put 2 APUs in it? LOL JK.

    • fredsnotdead
    • 2 years ago

    1. “All IGPs” tab doesn’t actually show all IGPs for Dota 2 and Hitman.

    2. Maybe in the “Time Spent Beyond…” graphs it might be clearer to use % of total time as the x-axis.

      • Jeff Kampman
      • 2 years ago

      Check the graphs again (you might need to Ctrl-F5 to see the reworked images).

        • fredsnotdead
        • 2 years ago

        Yup, now I see all the IGPs. How about my suggestion to use % of total time in the “Time Spent Beyond…” graphs?

    • christos_thski
    • 2 years ago

    So, if AMD has successfully shrunk Vega cores to good entry-level performance, should we expect low-end and midrange Vega cards replacing Polaris models?

    • DragonDaddyBear
    • 2 years ago

    I keep seeing HTPC comments but I don’t think this can do UHD disks. If it could it would be a sweet set up.

    • derFunkenstein
    • 2 years ago

    Good news, everyone! While Raven Ridge doesn’t exactly suck at mining, it’s not very good.

    [url<]http://www.legitreviews.com/amd-ryzen-5-2400g-mining-performance-nicehash-xmr-stak_202662[/url<] [quote<]Watts with the system at idle and we are mining at 68.3 Watts! With 68 Watts power draw and a hashrate of 270 H/s we are looking at making $0.45 per day in revenue on Nicehash-CryptoNight. The profit after electric expenses for 24/7 mining would be roughly $0.32 per day or around $116 a year at current prices and difficulties.[/quote<] So in other words, if difficulties don't increase, the APU/mobo will pay for itself in about 2 years. 😆

      • chuckula
      • 2 years ago

      THANK YOU AMD!

      • Concupiscence
      • 2 years ago

      Out of morbid curiosity, what’s the RoR like on Intel GPUs? Surely somebody out there has tried it…

        • chuckula
        • 2 years ago

        One advantage of sucking at everything is that you suck at EVERYTHING.

        So Intel graphics are safe.

        • derFunkenstein
        • 2 years ago

        Probably worse than AMD, but Raven Ridge is already a poor-enough performer to skip for mining purposes. And I’m thrilled. 😆

      • Klimax
      • 2 years ago

      I wonder how they’d perform on CPU-only cryptos. (Magicoin, Riecoin, Verium and Yescrypt-based coins)

    • Voldenuit
    • 2 years ago

    A small request, but in the future, could TR include the tested resolutions and presets in the title of the graph for game performance?

    It would be nice to be able to glean that from a glance instead of digging through the text.

    • odizzido
    • 2 years ago

    No test setup or did I miss it? I just see some numbers and no idea what you did to get them.

    • B166ER
    • 2 years ago

    I’m not a big gamer, in fact, I just pulled out Skyrim for another go, as I last played in 2012. I still use an ancient Radeon 6870, but its fans are failing. I’m in the market for a new CPU, looking at the 2400 or a Ryzen 1500 and a cheapo GPU (yeah, good luck there buddy). Upon guestimations, how would Vega 11 fare at games like Skyrim? I’d like to run @ 1080, but my monitor is 2k so higher can’t hurt.

      • derFunkenstein
      • 2 years ago

      If you’re hoping for 60fps I’d get ready for disappointment, especially at resolutions greater than 1080p.

      BTW, if your monitor has a resolution > 1080p, make sure the motherboard can drive whatever connection you have. I haven’t used an integrated graphics solution in a long time, but last I did, motherboards limited their HDMI and DVI outputs to 1080p (no HDMI 1.4, no dual-link DVI).

        • B166ER
        • 2 years ago

        Well, I’m gaming Skyrim now with, as I noted before, an ancient Radeon 6870 @ 1080 and medium-high settings. It’s fluid, no stutters, but I don’t know the fps. Again I’m not a big gamer so I’m obviously not expecting much, nor concerned. Just want a decent gaming experience. So I’d guess 30-45 fps would be fine.
        My monitor is driven by dual link dvi, so that could be a problem .. but first I need to know if I’m gonna go the IG route anyways.

      • gerryg
      • 2 years ago

      I’m running an old Phenom II X3 720 with a slightly more modern R7 260X GPU 2GB, with 4GB system memory. I play CSGO at 1080p at low/medium settings, it does pretty well. But I’m definitely looking to upgrade. If they release a faster APU in the next quarter, that might be enough for me to bite, if the price isn’t too bad, and get a graphics card later. Other wise I think I’ll go for a R5 1600 w/an RX500 series GPU. I’m of course assuming GPU stocks will improve in the next quarter…

        • B166ER
        • 2 years ago

        Yeah, my fingers is soooo twitchy right now, but I just can’t with graphics cards being as they are. A freaking 1050ti @ $230? Nope. If Vega 11 can deliver a decent experience @ 1080p, I might be in there.

      • ptsant
      • 2 years ago

      AMD has been using Skyrim as an example of a game that runs well, see the official slides for example here: [url<]https://forums.anandtech.com/threads/amd-ryzen-5-2400g-and-ryzen-3-2200g-apus-performance-unveiled.2533111/[/url<] You can certainly drive Skyrim very hard with all the texture packs and whatnot, but I'm fairly certain that vanilla Skyrim at 1080p is not a big challenge, even for the 2200G.

    • swaaye
    • 2 years ago

    It will make some nice cheapo notebooks for gamers with very limited funds. The usual solid AMD APU result. Though such notebooks will probably throttle badly and have crap RAM if the past is anything to go on.

    I’d probably look for a used machine from 2-3 years ago with something like a 970M or 980M.

      • NoOne ButMe
      • 2 years ago

      a 2200G performance level at 45W package power should be possible, per Anandtech seeing that chip use ~54W.

      Go to 11CUs, clock lower, bin for low power…. walla! Now, no notebook OEM would ever make that… *grumble*

        • Shobai
        • 2 years ago

        Is this some special ‘Murica term? Or did you mean ‘et voila’?

          • UberGerbil
          • 2 years ago

          It’s an Indian term meaning “seller of” or “the guy you see to get something done” though it’s usually spelled with an “h” on the end.

            • Shobai
            • 2 years ago

            Thanks for the explanation. I don’t see that it fits, in this context, but I guess we’ll never know [unless no one replies – there’s a sentence I would never have imagined using!]

            • NoOne ButMe
            • 2 years ago

            knowing me, I think i took the term I have never written, and miswrote it…
            oops!

        • Zizy
        • 2 years ago

        There are 2200GE and 2400GE at 35W, mentioned on Anandtech. No data yet though.
        Plus both 2200G and 2400G have cTDP from 45W to 65W.

    • just brew it!
    • 2 years ago

    What’s up with the DAWBench VI results?

      • chuckula
      • 2 years ago

      AMD’S BENCHMARK SCORES BABY!

      • derFunkenstein
      • 2 years ago

      That’s what prompted me to ask about discrete graphics in those machines. Jeff said no, and I wonder if it’s the fact that at least some memory is pulling double-duty.

      • B166ER
      • 2 years ago

      On that note, I was noticing a few CPUS marks being low, so I went and double checked from previous DAWbench scores.

      The Ryzen 1500x scored 500 notes of polyphony @96k/128 samples back in the Ryzen 5 review but scored only 180 now?
      It scored 49 VST instances when last reviewed but this comparison shows it only at 32.
      Same test, same instruments, same DAW?
      What’s going on?
      I’m in the market now to pick up either a 2400 or 1500 and a basic GPU, but the DAWbench numbers are so bad for the 2400. Like DRASTICALLY bad. How is this??

        • Jeff Kampman
        • 2 years ago

        Couple things: first, we began using a new version of DAWBench DSP (2017) for recent reviews with a new, more demanding VST. Older versions with different VSTs might indicate higher scores.

        As for the VI results, I just fired up the 1500X again and the only way I’m able to get 500+ notes of polyphony out of it is to use 48 KHz/128 samples. It’s entirely possible that I mislabeled a graph in an older review somewhere; if you point me to it I can offer more definite advice.

          • B166ER
          • 2 years ago

          Gladly:

          [url<]https://techreport.com/review/31979/amd-ryzen-5-cpus-reviewed-part-two/5[/url<] Fourth DAWbench graph 1500x has 500 notes @ 96k/128 samples. I'm hoping for the better, I was really nuts about that result! EDIT: to add, it hardly seems the graph was mislabeled, as other processors follow suit, the numbers (AMD and Intel) don't seem to be abnormal in any way as far as the leader score to the lowest score. the 1500x sits nicely in the middle between the 1600 and the 1400. Also both DAWbench graphs (old and new) state the same VST used (Kontakt) and DAWbench hasn't changed much that would show such wild sway numbers.

            • Jeff Kampman
            • 2 years ago

            I looked into this now that I’ve had some sleep and it’s down to the versions of the test files used. DAWBench VI 2017 uses a significantly different testing approach and it’s much more demanding for a given buffer size compared to the 2014 version. Both results are (still) valid within the context of their given reviews.

            • B166ER
            • 2 years ago

            Big thanks for looking into that. You’ve been a busy dude since this review dropped, just wanted to give you a lil textual feel good; You’re doing a major top notch job here, I see you. I see what you do!

          • smilingcrow
          • 2 years ago

          Do you generally document what VST you use for testing as that would be helpful?

            • derFunkenstein
            • 2 years ago

            [quote<]A very special thanks is in order here for Native Instruments, who kindly provided us with the Kontakt licenses necessary to run the DAWBench VI project file. We greatly appreciate NI's support—this benchmark would not have been possible without the help of the folks there. Be sure to check out their many fine digital audio products.[/quote<] You can download the project from here to give it a run yourself, and then you can see what specific patches are being used. [url<]http://www.dawbench.com/benchmarks.htm[/url<]

            • Jeff Kampman
            • 2 years ago

            VI uses proprietary Native Instruments Kontakt instances so it’s difficult for the average person to replicate.

            Our DSP results use the Shattered Glass Audio 1566 preamp VST, which is a free download: [url<]http://www.shatteredglassaudio.com/product.php?id=104[/url<]

            • smilingcrow
            • 2 years ago

            Thanks Jeff. I’ve been aware of DAWBench for ages probably via SoundOnSound but never looked into it.
            I have a lot of Kontakt libraries so was wondering how much they vary in terms of system load.
            Maybe the choice of Kontakt effects a particular instrument uses is a large factor?
            I came across a Kontakt Instrument GUI design tool a few days ago but I imagine it’s out of my league or even needs!
            [url<]http://www.rigid-audio.com/kgmv2.html[/url<] Looks interesting though but not sure how much you can do just using Kontakt when starting from scratch. I have a couple of 28" gongs that might be fun to sample.

      • Jeff Kampman
      • 2 years ago

      Raven Ridge only has 4 MB of L3 and I’ve come to understand that L3 cache size tends to have an outsize impact on DAWBench scores, all else being equal. It would explain some of the difference in performance among the 1300X/2200G (8 MB vs 4 MB) and 1500X/2400G (16 MB vs 4 MB).

      (edited because I can’t recall a single processor specification correctly today)

        • just brew it!
        • 2 years ago

        Ahh, that’s plausible. Maybe below a certain L3 size DAWBench simply can’t keep its working set in cache, and you end up completely bottlenecked by the DRAM. Access patterns that don’t “play nice” with AMD’s pre-fetch implementation could also exacerbate this.

        • B166ER
        • 2 years ago

        Am I missing something? I’m just now checking out DAWBench, using my friend’s computer with Cubase installed. I thought it was a suite of software working through Cubase (or Reaper) running custom test and algorithms. It’s not. It’s simply a few tracks meant to stress your DAW by either polyphony or VST count. It’s really simple. I’ve been doing this for years with Ableton and various VSTs, seeing how many I could run until my CPU meter freaked out.
        What I don’t understand is that you guys keep referring to DAWbench as the test, not the testing medium. DAWBench does nothing. The hardware variances affect the DAW, not DAWbench. In fact, I don’t like giving the test a name like DAWBench because it makes it seem that DAWBench is the measurement medium. DAWBench is just an easier way of saying “DAW test and stress tracks”.
        It came across as prohibitive as seeing you guys here at TR getting a license hookup from NI to run Kontakt, when I’d imagine using the demo would work as well, being that testing doesn’t take a long time. I was thinking, “I’ve got to buy a license just to get comparable test results?” These “tests” are basic and should be replicable. I like using SGA1566 as using the High CPU and 4x oversampling puts the smackdown on the CPU. Have you guys tried using Kontakt in demo mode to get test results?

    • AnotherReader
    • 2 years ago

    Though these APUs are impressive as APUs, one off-putting thing about them is the lack of PCIe lanes compared to their IGP-less brethren: 8 lanes for miscellaneous PCIe devices, 4 lanes for the chipset and 4 lanes for a M.2 drive.

    [Edit]: Another potentially off-putting point: with these APUs, AMD has joined Intel in replacing solder with non-metallic TIM between the die and the IHS.

    • AnotherReader
    • 2 years ago

    Thanks for putting up the results first. IGP performance is better than I expected. Did you have a chance to test power consumption without a discrete GPU? What were your impressions of the bundled cooler?

    • barich
    • 2 years ago

    Were all Spectre and Meltdown mitigations in place for all of these tests? Both UEFI/microcode updates and OS protections? You mentioned it offhand, but I wanted to verify.

      • Jeff Kampman
      • 2 years ago

      For the pair of Intel CPUs, yes, both a fully-updated Windows 10 and firmware with microcode updates for Spectre were employed.

        • barich
        • 2 years ago

        Thanks!

      • Klimax
      • 2 years ago

      Unlikely to change much:
      [url<]http://www.tomshardware.com/reviews/gaming-performance-meltdown-spectre-intel-amd,5457-4.html[/url<]

        • barich
        • 2 years ago

        Not for gaming, no, but there are other workloads that result in greater performance penalties.

          • Klimax
          • 2 years ago

          Since the focus was gaming, I thought it was the most relevant.

    • madseven7
    • 2 years ago

    [quote<]Core for core and thread for thread, the Ryzen 5 2400G can’t outpace the similarly-provisioned (and 95 W) Ryzen 5 1500X. Weirdly, it’s the four-core, four-thread Ryzen 3 1300X that leads the AMD pack here.[/quote<]

    Isn’t the 1500X 65W?

      • Shobai
      • 2 years ago

      According to [url=https://www.amd.com/en/products/cpu/amd-ryzen-5-1500x<]AMD[/url<], that appears to be correct.

      • jarder
      • 2 years ago

      The 1500X is indeed 65W:
      [url<]https://www.amd.com/en/products/cpu/amd-ryzen-5-1500x[/url<] like the 2400G: [url<]https://www.amd.com/en/products/apu/amd-ryzen-5-2400g[/url<] But they are not similarly provisioned, in particular, the 1500X has a total L3 Cache of 16MB versus the 4MB of the 2400G.

      • Jeff Kampman
      • 2 years ago

      You’re right, I’ve corrected the piece.

    • TheMonkeyKing
    • 2 years ago

    RE: Intel math scores

    Were the Core i3 and i5 tested with patches and without? And if so, did it matter as compared to AMD’s results?

    • shank15217
    • 2 years ago

    Hey look, steam console!!!

    • chuckula
    • 2 years ago

    Interesting power consumption numbers from AnandTech:
    [url<]https://www.anandtech.com/show/12425/marrying-vega-and-zen-the-amd-ryzen-5-2400g-review/13[/url<]
    They used the 8350K as a comparison point, which is basically an upclocked, 91 W-TDP version of the i3-8100 from this review.

      • Airmantharp
      • 2 years ago

      Very interesting!

      If their conclusion that AMD’s Infinity Fabric is a large contributor to power draw holds up, we might see significant progress as AMD tweaks it for future products.

      Can I get an eight-core Vegazen 2 in my next ultrabook? I’ll probably be buying when 5G unlimited data plans come out 😀

      • ptsant
      • 2 years ago

      The 2400G is perfectly contained within its 65 W TDP. I wonder if a 95 W version could do better, and whether there is a decent OC margin for either the CPU or the GPU part.

        • DPete27
        • 2 years ago

        Sounds like [url=https://www.anandtech.com/show/12233/amd-tech-day-at-ces-2018-roadmap-revealed-with-ryzen-apus-zen-on-12nm-vega-on-7nm/4<]you can expect 30% OC headroom on the GPU[/url<]

          • ptsant
          • 2 years ago

          Which would put the 2200G ridiculously close, even above in certain situations. Shame that decent memory is exorbitantly priced.

    • jensend
    • 2 years ago

    It’s a compelling value proposition – you can cheaply build a really decent box with a $100 APU, a ~$50 mobo, an inexpensive case, PSU, and small SSD, and [i<][b<]a gun and ski mask to help you steal 16GB of low-latency DDR4-3200[/b<][/i<]. 🙁

      • jensend
      • 2 years ago

      IOW, it’d be a lot easier to be excited about something like this if memory prices per GB were what they were two years ago instead of over twice that.

        • jarder
        • 2 years ago

        I didn’t even know that replying to your own post was a thing, this opens

          • jarder
          • 2 years ago

          up so many possibilities.

            • Wirko
            • 2 years ago

            It just reflects real life. You don’t agree with yourself at all times, or do you?

            • ermo
            • 2 years ago

            Sometimes I do, yet

            • ermo
            • 2 years ago

            sometimes I don’t.

            ¯\_(ツ)_/¯

      • jarder
      • 2 years ago

      Well, at least it’s possible to steal some DDR4. Some of the favors you have to, ahem, perform, to get a discrete GPU these days are even more unpalatable!

      • kvndoom
      • 2 years ago

      Sad when the handgun and ski mask combined are cheaper than the cost of the RAM.

        • Blytz
        • 2 years ago

        Yeah but most of those who can use both correctly are at the Olympics right now

          • EzioAs
          • 2 years ago

          They should’ve just handed out RAMs instead of medals as prizes then.

            • JustAnEngineer
            • 2 years ago

            I understood that most of the biathletes had a day job in the army of their home country. Are you suggesting that Martin Fourcade should instead rob banks in the off-season?

            • Redocbew
            • 2 years ago

            Only if the bank is halfway snowed under, and there are no snowmobiles around to give chase. If the dude has to drive away, then all bets are off.

      • Bauxite
      • 2 years ago

      I’m almost afraid to bring any attention to this, but right now ECC Samsung B-die modules can be found cheaper than a lot of the tweaker kits. You can overclock them decently on AM4, and actually know the real limits right away instead of trying to guess from random instability.

    • Voldenuit
    • 2 years ago

    Why didn’t they call it ‘Vegazen’?

      • K-L-Waster
      • 2 years ago

      Because you’re not required to be on a wacky diet to buy one..?

        • Redocbew
        • 2 years ago

        That was the first thing I thought of also. To me “vegazen” sounds like a particularly militant kind of vegetarian.

      • Srsly_Bro
      • 2 years ago

      Because they aren’t dorks.

      • flip-mode
      • 2 years ago

      Sounds German.

      • gerryg
      • 2 years ago

      Or ‘Vezenega’?

        • Voldenuit
        • 2 years ago

        AVEYNGERZ

      • Blytz
      • 2 years ago

      They didn’t want a 90’s group doing a dance song about it.

      [url<]https://www.youtube.com/watch?v=6Zbi0XmGtMw[/url<]
      If you make it to the chorus it’ll be stuck in your head. You can hate me later.

      • jihadjoe
      • 2 years ago

      Yeah, what a missed opportunity. Also, instead of ‘Infinity Fabric’ they could have called it the ‘Vega Bus’.

    • Concupiscence
    • 2 years ago

    Decisions, decisions – do I replace the Athlon X2 5050e still working as my mother-in-law’s office PC with the Haswell i3 I’ve gently used as a home theater PC for the past few years, or sell the latter to build a 2200G for her? Both would be overqualified, but I’d like whatever I use to last without incident for as long as possible.

      • NTMBK
      • 2 years ago

      I’d probably just buy her something with a warranty.

      • Kretschmer
      • 2 years ago

      Dell or HP business refurbs off Newegg. You can snag an i5 with SSD and warranty for under $400.

      Low-end builds just don’t make sense these days if you don’t need a GPU.

    • DPete27
    • 2 years ago

    Is it bad that I’m disappointed with the IGP even @ DDR4-3200?

      • chuckula
      • 2 years ago

      YES!

      • Airmantharp
      • 2 years ago

      Probably?

      Personally, I’ve been impressed with Intel’s IGPs lately; given that raw CPU power isn’t really necessary for most consumer workloads, though, the trade-off with Vegazen of more GPU power for a little less CPU grunt actually makes a lot of sense when discrete GPUs aren’t considered.

      • derFunkenstein
      • 2 years ago

      Considering what the IGP has to work with, I think it does a really nice job. We’re talking about ~50 GB/s of total system memory bandwidth that has to be shared between graphics and the CPU. A GT 1030 with 64-bit GDDR5 gets nearly that much all to itself, albeit with (I think) higher latency.
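
      The back-of-envelope math behind those figures, assuming stock dual-channel DDR4-3200 on the APU and the GT 1030’s 64-bit GDDR5 at ~6 GT/s effective:

      [code<]
      # bandwidth = transfer rate x bus width (x channels)
      ddr4_dual_channel = 3200e6 * 8 * 2    # 3200 MT/s x 8 bytes/channel x 2 channels
      gt1030_gddr5 = 6000e6 * (64 // 8)     # ~6 GT/s effective x 64-bit bus
      print(f"APU system RAM: {ddr4_dual_channel / 1e9:.1f} GB/s (shared by CPU and GPU)")
      print(f"GT 1030 GDDR5:  {gt1030_gddr5 / 1e9:.1f} GB/s (all for the GPU)")
      [/code<]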

    • Unknown-Error
    • 2 years ago

    The price gap is a bit weird. 2200G at $99 and 2400G at $169? They clearly are missing a 2300G at around $129.

      • chuckula
      • 2 years ago

      I imagine this is the first wave of these parts and that the lineup will be completed over time.

        • gerryg
        • 2 years ago

        Ah, good. I’ll wait around for the 9999G “APU Ripper” then.

          • willmore
          • 2 years ago

          16 core, 32 thread, 36 CU?

    • Usacomp2k3
    • 2 years ago

    Good article. That said, it was really confusing with the naming in the graphs jumping between the 2200G/2400G and Vega 8/Vega 11. Any reason not to just use the CPU name throughout? It took me a couple of back-and-forths to make sure I wasn’t reading it as a discrete card.

    Still, the 2200G looks like a great price. Putting it with a small, cheap mobo would make a great entry-level gaming PC.

      • godforsaken
      • 2 years ago

      Thank you, I was coming in here to say the same thing. It wasn’t until the last paragraph that there was a definitive statement as to which (if any) Ryzen G APU had the Vega 8 or the Vega 11.
      In case anyone is still confused (and I’m basing this only on the quote “and the 2200G’s integrated Vega 8”):
      the 2200G is Vega 8
      the 2400G is Vega 11

    • tsk
    • 2 years ago

    RIP RX550 and GT1030

      • chuckula
      • 2 years ago

      [quote<]RIP RX550 and GT1030[/quote<] No, it's [b<]THREAD[/b<]RIP[b<]PER[/b<] RX550 and GT1030.

        • Srsly_Bro
        • 2 years ago

        No, it’s a bad joke.

      • Welch
      • 2 years ago

      Hoping it just lowers their prices. I suspect we will see more fanless versions. We need a dedicated card for those who want the 6- and 8-core CPUs for processing power but don’t need much GPU.

      Give me a $40-50 Polaris, or heck, even a small Vega GPU that can be passive. I’m just tired of spending $100+ for a wannabe gaming card. The 2400G may not be enough CPU for certain workloads.

        • MrJP
        • 2 years ago

        A small Vega with low enough power consumption to be passive might end up being attractive to the miners, unfortunately.

          • Redocbew
          • 2 years ago

          With the estimated global power usage of mining being compared to that of entire countries, it doesn’t seem like there’s much of a draw toward passive components.

      • willmore
      • 2 years ago

      They’re still viable for people who were unwise enough to buy an Intel CPU with the hope of gaming on it.

    • Bauxite
    • 2 years ago

    The cached Amazon link to these was working over the weekend (and apparently last week, somehow).

    • DragonDaddyBear
    • 2 years ago

    Any plans to OC the 2400G and see what it can do with a good cooler?

      • Jeff Kampman
      • 2 years ago

      Perhaps when the firmware/drivers mature a bit.

        • Airmantharp
        • 2 years ago

        Ouch.

        Hopefully they get that locked down so that the screws can be tightened!

    • Welch
    • 2 years ago

    Curious to see if an inexpensive/entry-level B350 + 2200G combo will start showing up on Newegg to bring down the price overall. The chip is $99 and an entry-level B350 is about $55-60. It would make an amazingly cheap HTPC if you could get a combo for $15-20 off.

    Still, $99 is a sweet price.

      • DPete27
      • 2 years ago

      +$100 for 8GB RAM…..

        • Welch
        • 2 years ago

        Oh, I know, I just bought four more 8 GB kits last night. It KILLS me having to do it. But relative to the cost of a GPU for an HTPC, the combo would still be a deal.

          • Redocbew
          • 2 years ago

          Yeah, and 8 GB is still plenty of space if there are no specific requirements. If I were in the market for HTPC parts, one of these chips with 8 GB of memory, an M.2 SSD, and a picoPSU would make for a pretty nifty PC that’s easily hide-able.

        • JustAnEngineer
        • 2 years ago

        [url=https://www.newegg.com/Product/Product.aspx?Item=N82E16820231935<]$94[/url<] for 2x4 GiB of PC4-24000 at 15-16-16-35, $103 for PC4-25600 or [url=https://www.newegg.com/Product/Product.aspx?Item=N82E16820231906<]$106[/url<] for PC4-28800.

        • jarder
        • 2 years ago

        and -$100 for a GPU…

      • JustAnEngineer
      • 2 years ago

      Newegg has got some Ryzen G + motherboard bundles (-$10) up this afternoon.

    • tay
    • 2 years ago

    Good review. Thanks for the DotA2 benchmarks and the game variety as well.

      • Blytz
      • 2 years ago

      Question: why Dota 2 and not LoL (with its bigger user base)?

      I was really hoping it might have a salt filter too

        • Jeff Kampman
        • 2 years ago

        League didn’t have a replay function last I checked, so repeatable benchmarks are difficult if not impossible. Dota 2 benching is as easy as pressing play.

          • derFunkenstein
          • 2 years ago

          It has replays now, but [url=https://support.riotgames.com/hc/en-us/articles/234965248-Replays-FAQ-Pro-Tips<]their FAQ[/url<] says that once the next patch comes up, your replay is no good. That's pretty pathetic; Blizzard had no problem with versions in StarCraft 2. Might as well consider it to have no replay function, since by the next launch you won't be able to use the same replays. You won't be comparing across articles, anyway.

          • Blytz
          • 2 years ago

          Typical: biggest user base in the world, lazy coding for benchmarking.

          Thanks for the response, Jeff. I surmise it’ll run a little better. If I recall correctly, the resource demands of LoL are a little lower than Dota’s (we get more toasters).

      • gerryg
      • 2 years ago

      I’m looking for a CSGO benchmark test. I know it’s kind of old, but it’s still played a lot, both for fun and for eSports $$ tourneys.

    • EzioAs
    • 2 years ago

    The 2200G doesn’t seem too bad – great, even, some might say, considering its price. The 2400G, on the other hand, is a bit harder to sell, IMO. Not strictly bad, but I was definitely hoping for [i<]more[/i<].

      • jarder
      • 2 years ago

      The 2400G definitely has its place, but these results make it look like a rather small niche; the 2200G is just too close to it, with a much nicer price tag.

        • tsk
        • 2 years ago

        I would say it’s more for the 8 threads than the GPU upgrade.

      • thx1138r
      • 2 years ago

      Just curious, but what were you hoping for with the 2400G? CPU-wise it’s not far behind the bigger Ryzens despite having a much smaller L3 cache, and iGPU-wise it’s at the front of the pack.

        • NTMBK
        • 2 years ago

        It’s a big jump in price for a relatively minor jump in GPU performance – especially when you consider that you need high-speed memory to get the most out of those extra GPU units.

      • MileageMayVary
      • 2 years ago

      I am thinking that the 2400G would be a shoo-in at $150, but $170 feels like it’s pushing into the next level of CPUs.

      Guess I have a 2200G HTPC to build… or possibly three this year.

    • NTMBK
    • 2 years ago

    Is the Rocket League data mislabelled? The graph shows Vega 8 beating Vega 11, which seems… a bit off.

      • Jeff Kampman
      • 2 years ago

      Nope!

        • NTMBK
        • 2 years ago

        Huh, weird! Thanks Jeff 🙂

        • alloyD
        • 2 years ago

        It’s interesting that Vega 8 beats Vega 11 in that bench, but your commentary on that graph (time spent beyond 16ms) seems to indicate the opposite… Which is correct?
        (apologies if I’m mis-reading)

    • chuckula
    • 2 years ago

    Hi Jeff, quick testing setup question: Which CPU did you use as the platform for the discrete GPUs?

      • Jeff Kampman
      • 2 years ago

      This is a story in itself, but I ultimately had to resort to an i7-8700K to avoid creating CPU bottlenecks.

        • chuckula
        • 2 years ago

        Thanks.

        DARN YOU INTEL!

        • derFunkenstein
        • 2 years ago

        Also good to know. I had mistakenly figured it was the same Raven Ridge system across the board. I get the reasoning, though.

        Since you say there were CPU bottlenecks, your data represents a best-case scenario for those discrete cards, and adding a card to a Raven Ridge system would potentially show minor differences? (Being realistic here; those are already low-end cards being pushed, relatively speaking.)

        • NTMBK
        • 2 years ago

        Hmm, I think this would make for an interesting follow up story. People who are considering an $80 GPU are unlikely to be pairing it with such a high end CPU- much more likely that it would be matched with something like an i3-8100. How much of a difference did the weaker CPUs make, and which card was affected worse? (I know the Radeon cards have suffered from serious CPU overhead in the past.)

        • benedict
        • 2 years ago

        “Of course, each graphics processor’s performance varied widely across our test suite, and the GT 1030 proved superior to the Vega IGPs in the pair of esports-y titles that drive so many PC purchases in this price range. Folks whose tastes run more to the triple-A than the twitchy will find plenty to like in the all-around competency of the Vega 11 IGP, but those laser-focused on digital sports will still find the best performance from something like a GT 1030.”

        This whole paragraph is invalidated by the expensive CPU you used. Folks who buy a 1030 will pair it with a cheap CPU and then the 1030 will lose in all benchmarks.

          • Jeff Kampman
          • 2 years ago

          Except for the bit where the GT 1030 isn’t exposing bottlenecks. That issue is exclusive to the much more powerful RX 460 and GTX 1050.

            • JustAnEngineer
            • 2 years ago

            A fair comparison would be to compare builds in the same price range by matching the [b<]budget[/b<], instead of the very unrealistic case of comparing a $350 CPU plus an $85, $155, or $200 discrete GPU against APUs that cost $100 and $170.

            The GeForce GT 1030 is [url=https://www.newegg.com/Product/Product.aspx?Item=9SIA6ZP6419324<]$85[/url<]. The Ryzen 5 2400G is [url=https://www.newegg.com/Product/Product.aspx?Item=N82E16819113480<]$170[/url<]. Let’s see how the Ryzen 5 2400G compares against the GeForce GT 1030 and an $85 CPU (or a $140 to $175 CPU + motherboard combination, considering that B350 Socket AM4 motherboards are in the $55 to $90 range).
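
            The arithmetic behind that suggestion, using the prices quoted above (a sketch; the $55 B350 board is an assumed low end, and it cancels out since both builds need one):

            [code<]
            apu_build = 170 + 55              # Ryzen 5 2400G + B350 board
            gpu, board = 85, 55               # GeForce GT 1030 + the same board
            cpu_budget = apu_build - (gpu + board)
            print(f"CPU budget left to match the 2400G build: ${cpu_budget}")  # -> $85
            [/code<]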

            • Airmantharp
            • 2 years ago

            I’ll agree that I would also like to see such tests, but the decision to push ahead with results that remove the CPU performance variable as much as possible is certainly supportable given that the goal is to isolate graphics performance.

            • EndlessWaves
            • 2 years ago

            Surely the goal is to provide useful information to potential purchasers of the product?

            This abstract look at what the technology is capable of was interesting, but I doubt that’s what most people will seek out the article for.

            • Airmantharp
            • 2 years ago

            I’d posit that builders would seek out the next System Builder’s Guide, as that would account for the bigger picture; further, we might expect another comparison before then (and actual availability of these parts) that considers JAE’s suggestions.

            And again in defense of the review as published, I’ll add that TR absolutely needed to get this review published upon NDA expiration, and I understand that the suggested additional testing would have put that goal in jeopardy or could have lowered editorial quality.

            • AnotherReader
            • 2 years ago

            I agree with your points regarding the effect of the NDA expiration deadline. The suggestions put forward by the readers can be addressed in a follow-up article some time later.

            • cegras
            • 2 years ago

            Without data, what additional information would a system builders guide provide? It would be wading into unknown territory constrained by budget.

            • Durante
            • 2 years ago

            I disagree: there are plenty of sources of information primarily aimed at informing purchasers.

            I prefer a more focused (or abstract, to use your term) technological review from Techreport.

            • MrJP
            • 2 years ago

            But surely the test doesn’t do that if the APU tests may have been bottlenecked by the built-in CPU in some places? Either the tests would need to be run at settings where a CPU limitation is not a factor, or a more equivalent CPU (Ryzen 5?) should have been used for the discrete GPUs.

            • Jeff Kampman
            • 2 years ago

            As this comment thread is proving, I have to consider how (and how much) I would need to explain it were frame times to go all weird for the RX 460 and the GTX 1050 thanks to an underpowered CPU in a (frankly) synthetic test. Nobody is buying those cards to run them at 1600×900 and medium or low settings.

            Even with the i7-8700K, you can still see some weirdness from our test settings with the RX 460 in GTA V that goes away when you dial up 1920×1080. You’ll have to trust me when I say it was a lot worse for both discrete cards before I brought out the big-gun CPU. There is a major problem with trying to find a group of settings that works for every test card and test system when you have dynamic ranges of performance spanning from Intel IGPs to powerful discrete graphics options.

            You can’t separate the performance of the Vega 8 and Vega 11 from their host SoCs, so it’s impossible to determine what a “fair” bottlenecking factor is and how to consistently apply it for cards that are easily twice or thrice as powerful (if it exists at all).

            I ultimately feel that I chose the least worst option, which is to sidestep the issue as much as possible for a group of settings that most gamers will never use with the more powerful discrete graphics cards anyway.

            • chuckula
            • 2 years ago

            To put the whole thing to bed I’d pair the Rx 460 & GTX-1050 with one of the CPUs from the review (I really don’t care which one) and then run the benchmarks at the full 1080p resolution with “normal” settings for those graphics cards. Then run the exact same 1080p resolution test with the integrated graphics to show the difference.

            Even if you don’t get “optimal” performance from the GPUs using one of these CPUs, you’ll probably still get a pretty clear difference in performance.

            • JustAnEngineer
            • 2 years ago

            I suppose that it must depend on what you find interesting or intriguing.

            You proved that spending four times as much on a high-end CPU + heatsink + mid-range discrete GPU provides better gaming performance than an inexpensive APU. I was not surprised at this result.

            When it comes to APUs, I am interested in:
            Most important question: Does it meet the minimum performance necessary? You did a good job of showing that these new chips just barely meet the minimum necessary for desktop gaming, if your standards aren’t too demanding.

            Most interesting question: Is it a good value? I have usually been in the camp that dismissed low-end GPUs that perform at the level of the GeForce GT 1030 as a very poor value. Up until November, you could have gotten MUCH better performance by spending just a few dollars more to get a 4 GiB GeForce GTX 1050Ti or Radeon RX560. With the APUs integrating similar levels of graphics into the processor package and discrete GPU prices skyrocketing, the APU value proposition is distinctly different than that of a CPU plus a low-end graphics card.

            Here is where I believe that a more interesting evaluation could be performed by establishing what provides the best bang for the buck for the gamer on a budget. If you presented data that showed that a mid-range graphics card couldn’t perform satisfactorily without a more expensive CPU, that would certainly provide interesting information regarding the best combined value of possible component choices for a gaming PC at a given budget. It is these value questions that most interest people building or recommending components for gaming. Where is the sweet spot for CPU/APU, graphics, memory, cooling, etc.?

            • Redocbew
            • 2 years ago

            Once you start doing that, then you’re no longer testing performance. You’re not really testing the subject of the article at all. It requires no testing to determine the price of any of these components, but it does require testing to determine their performance, and price doesn’t mean anything to me without knowing how they perform.

            • JustAnEngineer
            • 2 years ago

            The problem with testing the discrete GPUs with a [url=https://www.newegg.com/Product/Product.aspx?Item=N82E16819117827<]$361[/url<] (+$$ for heatsink) Core i7-8700K CPU is that it's a rather different budget. The [url=https://www.newegg.com/Product/Product.aspx?Item=N82E16819117739<]$85[/url<] Pentium G4600, [url=https://www.newegg.com/Product/Product.aspx?Item=N82E16819113446<]$105[/url<] Ryzen 3 1200, [url=https://www.newegg.com/Product/Product.aspx?Item=N82E16819117822<]$121[/url<] Core i3-8100 and [url=https://www.newegg.com/Product/Product.aspx?Item=N82E16819113445<]$125[/url<] Ryzen 3 1300X are the sorts of CPUs that would likely be included in a budget build.

            I suspect that what started the foray into using the high-end CPU was a performance anomaly that other sites also reported.

            [quote="W1zzard at TechPowerUp"<]You can also pair the new Ryzen APUs with a separate graphics card to increase 3D performance.... We used a GTX 1080 for our discrete GPU performance testing and saw surprising performance numbers that are lower than expected, especially at high resolutions, which is strange because the GPU should be the limiting factor here, not the CPU. We reported those results to AMD a week ago, but haven't been given an explanation for these numbers. It's not a configuration problem either. When swapping the Ryzen G CPU for a normal Ryzen CPU without changing any BIOS or software settings, using the same Windows installation and drivers, performance is back to expected values.[/quote<]

            • Redocbew
            • 2 years ago

            I don’t care what the intended budget is. I don’t care if AMD or Intel tells me by way of pricing where their products should be used. The whole point is for me to figure that out on my own, and I don’t think you can have it both ways here. Either you test for performance and choose your comparison parts to showcase that, which is what Jeff did here, or you forgo any kind of comparative analysis at all and do the kind of budget build you’re talking about. Almost by definition the budget build is not going to be a performance test of any one component, or at least not a very good one.

            • Zizy
            • 2 years ago

            You *are* testing performance – GPU performance as bottlenecked by the CPU you are likely to pair with.

            Which is the most relevant data point in the end – this is similar to the computers people actually use. You generally don’t pair G4560 with 1080Ti and you don’t pair 8700K with 560, so those numbers aren’t very relevant.

            If the 1030, 1050, and 560 aren’t exposing CPU bottlenecks even when using a G4560, R3 1200, i3-8100, or whatever other cheap CPU people might get for their build, then good – the numbers in the article are valid even though a better CPU was used.

            But if some of those cards have their 99th-percentile gaps vs. the APUs halved when paired with cheap CPUs, that changes a lot and should be factored into the conclusions of the article. People don’t cross-shop between an R5 2400G and an i7-8700K + 560 to figure out whether the APU is acceptable or whether the next GPU step up is a better purchase.

            Sure, the 560 will remain a much better GPU even with junk CPUs – it can’t be slowed down that much – but the gap might shrink: from, say, a 50% increase now to perhaps 40% or even just 30%. That changes the value proposition dramatically; note that the article remarks on the $70 price increase as a questionable value proposition, and for that you get 20% more GPU performance plus a huge 40% jump in CPU performance. Even the CPU boost alone seems worth it, IMO.

            • farmpuma
            • 1 year ago

            Definitely a topic worthy of further exploration. Possible working title: Budget Gamer / HTPC Upgrade Path – Myth or Monumental Improvement?

            • MrJP
            • 2 years ago

            Hi Jeff. Thanks for taking the time to reply and I do understand the decision you had to make given the limited time it sounds like you had to pull this together.

            Having said that, I’d echo the requests to do a little more testing if you could, to have a look at a slightly more level playing field. I’m genuinely interested in this, as at some point I’m looking to put together an entry-level gaming PC for the kids. It’s not so much the exact price-parity argument, but how much extra performance would I get for spending the extra on something like a Ryzen 3 1300 plus an RX 460 or 1050 vs. the Ryzen 5 2400G? Based on the current review, it looks like I’d get 2x-3x the performance with the discrete cards, but would I really get this with a low-end CPU?

            If it’s not the CPU performance level that’s actually the issue, but rather some driver weirdness at the unusual resolutions, then perhaps just do everything at 1080p. I know that most of the IGPs you tested are not capable of reasonable 1080p performance, but it looks like the 2400G might be close.

            • leor
            • 2 years ago

            Here ya go.

            [url<]http://www.tomshardware.com/reviews/amd-ryzen-5-2400g-zen-vega-cpu-gpu,5467-7.html[/url<]

        • Voldenuit
        • 2 years ago

        Wouldn’t those CPU bottlenecks be a valid data point in and of themselves?

        If you’re cross shopping Vegazen, you’re not going to suddenly shell out $300 for an 8700 and $85 for a 1030.

        How’s a (s)crappy i3 8100 going to do when paired with a 1030 compared to a $169 Ryzen 5 2400G?

        EDIT: Or even a ~$80 Pentium CPU + 1030.

          • ermo
          • 2 years ago

          Agreed. This is probably worth a follow-up article.

          • derFunkenstein
          • 2 years ago

          I agree it’d be nice to see. The fact that the 1030, even with the fastest gaming CPU available, can’t keep up with Vega 11 kind of mitigates my concerns. That changes if you’re considering one of these in a new budget build and are just holding out for GPU prices to drop, though.

          • EzioAs
          • 2 years ago

          I declare we make Vegazen official.

          • Jeff Kampman
          • 2 years ago

          I don’t believe they are. You’re not buying an RX 460 or GTX 1050 to play at 1600×900 and medium or low settings.

          • Mr Bill
          • 2 years ago

          The 1050 and 460 both hit a bottleneck at 1600×900. But I suspect that a graph of FPS vs. resolution would show a crossover in favor of the 460 at higher resolutions. Maybe AMD felt that designing for quick response at low resolutions was less important than optimizing frame times at higher resolutions. This has been discussed in previous reviews.

        • Shobai
        • 2 years ago

        Not to have a go at you, but I’m keen to read about what you’ve encountered. [url=https://techreport.com/review/33046/amd-lays-out-its-ryzen-and-radeon-plans-for-2018-and-beyond-at-ces<]You said earlier[/url<]: [quote<]To be clear, I don't believe that it'd be necessary to pair a $200-ish processor like the Core i5-8400 with the GT 1030 to get comparable performance. One could easily get a Core i3-8100 for $130 and pair it with that same $70 or $80 graphics card and get competitive numbers to the effect of AMD's claims for the i5-8400 setup.[/quote<] I'm intrigued; has your position changed at all?

          • Jeff Kampman
          • 2 years ago

          I’ll say it again: I didn’t pull out the i7-8700K because of the GT 1030. The RX 460 and the GTX 1050 motivated that decision.

          The GT 1030 is not powerful enough to expose processor bottlenecks at any reasonable modern resolution.

            • Shobai
            • 2 years ago

            Thanks for the clear and concise response!

        • Anonymous Coward
        • 2 years ago

        Hmm, while eliminating the CPU bottleneck from the discrete GPUs was necessary to show their potential in a clean way, it does leave the Ryzen APU options out in the cold, with whatever their CPUs can pull. I think in this case, a different approach would be more illuminating.

        As you have observed, over multiple generations of AMD APUs, the question is what place they have in the market. Would we have a clearer picture of that if you tested the GPUs with whatever CPU was necessary to match 2200G or 2400G dollar-to-dollar? Perhaps you could settle on a CPU at $100 or $90 and even a motherboard and RAM that balances the cost equation for a GT1030 vs 2400G.

        I promise to click all over any such article. 😉

    • derFunkenstein
    • 2 years ago

    Good decision to get performance out the door first.

    Does Raven Ridge support Crossfire of the APU resources with a discrete card? Since there are no low-end Vegas I’d guess not. Looks like the extra thermal headroom let Vega IGP stretch its legs, though.

    Too bad it gets thumped so hard in CPU-centric tasks.

    edit: the test setup info seems to be missing. The CPU tests were performed with discrete graphics attached and built-in graphics disabled, right?

      • Jeff Kampman
      • 2 years ago

      Nope, configured for typical/expected use i.e. with integrated graphics running the show.

        • derFunkenstein
        • 2 years ago

        Good to know, thanks.

    • thx1138r
    • 2 years ago

    I didn’t expect the 2400G to best a GT 1030, but there it is.
    Although it’s a great APU, I can see my upcoming HTPC rebuild using the cheaper 2200G; it’s just better value for money.

      • chuckula
      • 2 years ago

      When I hear the term “HTPC” I think watching videos [and maybe as a source for music].
      Does that term actually mean “playing games on a TV” instead to most people?

        • drfish
        • 2 years ago

        Dunno about most, but certainly many.

          • DragonDaddyBear
          • 2 years ago

          I’d say most. Maybe I’m missing something, but it’s really quite difficult to get UHD stuff working (thanks to the DRM constraints). Even then, it’s rather difficult to legally obtain high-end videos and movies. If one does not have plans to do more than media watching/listening it’s not really worth the effort or cost.

        • Waco
        • 2 years ago

        I game on my HTPC quite often, so I’m definitely in the latter camp. Some games are just suited to couch gaming, and I don’t want a full-on gaming rig in the entertainment center (I don’t want to hear one or pay for one).

        Steam Links are cool and all… but having to fire up another system, hope it doesn’t bomb out randomly, etc., all makes an HTPC that can play light games worthwhile, IMHO.

          • JustAnEngineer
          • 2 years ago

          If the strong gaming PC is running Steam in another part of the house on the same gigabit LAN, Steam will make your lowly HTPC into a gaming star.

            • Waco
            • 2 years ago

            Yep. It’s just a hassle to get it all set up and working consistently. I do it for “real” gaming sessions on the HTPC but it’s always more painful than it should be (locked screen issues, resolution settings, etc).

            • Mr Bill
            • 2 years ago

            You can pipe Steam to another PC? That’s actually pretty cool.

            • JustAnEngineer
            • 2 years ago

            [url<]https://support.steampowered.com/kb_article.php?ref=3629-RIAV-1617[/url<]

        • NTMBK
        • 2 years ago

        My “HTPC” is my main gaming system these days; I bought it instead of moving up to the PS4-era consoles.

        • stdRaichu
        • 2 years ago

        I think part of what stopped a lot of people from using an HTPC for gaming of any sort was that a “small, quiet box with enough GPU grunt to play stuff at 1080p” was basically a non-starter, since it more or less required a dedicated GPU. These APUs change that paradigm.

        For instance, ever since a whole bunch of Steam games got ported to Linux, along with the missus falling headlong for the RPi3 emulation station I set up as a toy, I’ve been wanting to get half-decent 3D performance in a small footprint, and none of the Intel GPUs have cut it (and their 3D stack is hugely immature – lots of games won’t run at all with Intel chips). Ryzen APUs look to change all that, plus they allow the use of nice compact mITX cases with <120 W DC power bricks.

        Of course, for KB+mouse games there’s no real substitute for a Proper PC, but for controller-friendly games, HTPC duty should definitely include some light gaming now.

        Now to find someone selling these things…

        • ermo
        • 2 years ago

        Actually, my ideal HTPC would be a jail-broken PS4 Pro dual-booting between Orbis OS (FreeBSD-based OS created by Sony for the PS4) and Linux (Solus 3).

        Imagine being able to switch between watching whatever I feel like watching via Kodi, to playing console emulators, to playing Linux ports and then switch back to the PS4 (back-)catalogue.

        Pretty nifty.

        • Anovoca
        • 2 years ago

        If media streaming is all you want, then an HTPC is overkill. Rokus or even a RasPi are MUCH cheaper – and that’s assuming your TV doesn’t already have all the apps installed on it to begin with.

      • shank15217
      • 2 years ago

      I really want to see how much a memory overclock will do for this. Also will it support ECC memory?

        • stdRaichu
        • 2 years ago

        ASRock still lists ECC UDIMMs as supported on its website for its Ryzen boards, including those supporting Raven Ridge. The memory controller and motherboard traces should be more or less identical, so there’s no obvious reason it shouldn’t work, but I’d wait for confirmation from a canary first.

          • Beahmont
          • 2 years ago

          You can put the ECC DIMM in, but it won’t work in ECC mode. The traces for passing the error-check stuff are just not there.

            • Bauxite
            • 2 years ago

            Source? ECC works with regular Ryzen on many AM4 boards; it’s easy to confirm via WHEA corrected errors in the event logs (an overclock does it in a pinch).
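
            For anyone who wants to check their own box, here’s a minimal sketch of that event-log query in Python, shelling out to Windows’ stock wevtutil. The provider name below is the standard WHEA logger; treat the exact filter as an assumption to adapt.

            [code<]
            import subprocess

            # XPath filter for events from the WHEA hardware-error logger
            query = "*[System[Provider[@Name='Microsoft-Windows-WHEA-Logger']]]"
            result = subprocess.run(
                ["wevtutil", "qe", "System", f"/q:{query}", "/c:10", "/f:text"],
                capture_output=True, text=True,
            )
            print(result.stdout or "no WHEA events logged")
            [/code<]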

      • Anovoca
      • 2 years ago

      Personally, I find a headless server with a good discrete GPU and cheap streaming boxes at each TV to be a more elegant solution. It costs more to implement at the start, but it is much easier and cheaper to buy another Roku or Shield console and add it to other TVs if you want to expand your capabilities in the future, compared to building a second HTPC.

    • chuckula
    • 2 years ago

    Game over Intel.

    [url=https://www.youtube.com/watch?v=dsx2vdn7gpY<]Game over.[/url<]

      • NTMBK
      • 2 years ago

      Take off and nuke the driver team from orbit, it’s the only way to be sure.

        • Klimax
        • 2 years ago

        Or maybe give them more resources and SW engineers.

      • shank15217
      • 2 years ago

      Yeah, I think Intel realized that when they decided to put an AMD GPU with their mobile CPUs.
