Ryzen 3 CPUs hit store shelves July 27

We came away very impressed with the performance of AMD's Ryzen 5 1600 and Ryzen 5 1600X CPUs in productivity applications, bestowing the coveted Editor's Choice Award upon them. The Ryzen 5 chips have gaming chops, as well, especially for gamers with high-resolution displays. System builders on a budget who don't need all those threads can now circle July 27 on their calendars. AMD has finally given us a release date and clock speeds for the first Ryzen 3 chips, both of which sport four cores without simultaneous multi-threading.

The Ryzen 3 1200 will hum along near its 3.1 GHz base clock under most circumstances, boosting up to 3.4 GHz when called upon. The somewhat heartier Ryzen 3 1300X has a faster 3.5 GHz base clock and boosts all the way up to 3.7 GHz. AMD didn't say whether either of the chips will have the company's Extended Frequency Range functionality, though we expect to see it in action on both parts. The company also didn't mention the cache sizes for these lesser Ryzens. We would bet on the same 2 MB of L2 cache and 8 MB of L3 cache the company promised when it announced the Ryzen 3 Pro chips at the end of June. We also expect the chips to share the same 65W TDP specification as the Ryzen 5 and Ryzen 3 Pro processors.

For those without AMD product table tattoos, the most affordable Ryzen 5 chip, the four-core, eight-thread Ryzen 5 1400, is clocked at 3.2 GHz base and 3.4 GHz Turbo speeds. The R5 1400 is priced at $165 with the Wraith Stealth cooler in the box. The six-core, 12-thread Ryzen 5 1600 costs $215, and jumping up to the eight-core, 16-thread Ryzen 7 1700 runs $270.

Gamers on a budget might want to pay especially close attention to the Ryzen 3 1300X. The chip could potentially offer XFR-boosted clock speeds near 4 GHz. Games that aren't coded to capitalize on more than four hardware threads could work out very well on a hot-clocked four-core chip. As always, the key factor will be price. AMD didn't give away that bit of information, but we imagine that both chips will play in the middle ground between the competing $117 dual-core, four-thread Intel Core i3-7100 and the aforementioned $165 AMD Ryzen 5 1400.

Comments closed
    • AMDisDEC
    • 2 years ago

    Forget about Hillary.

    Make America competent again!

    As American as Virginia Hemp.

    Dr. Lisa Su for US President, 2020!

    Where else, but on the Green party ticket.

    • Tristan
    • 2 years ago

    Competition for Celeron and Pentium

      • DPete27
      • 2 years ago

      Hardly. Especially considering Celerons and Pentiums actually have an iGP.

    • Unknown-Error
    • 2 years ago

    There is a serious GHz wall with RyZen. None of the CPUs regardless of core count exceed 4.1 GHz in XFR. Is this architectural or process related? Or both?

      • ronch
      • 2 years ago

      I’m guessing 70% process, 30% architecture. Looking at the CPU core itself, there seems to be a heavy application of high-density libraries, which could mean lower clocks. The process, on the other hand, is probably not as well tuned for clock speed as AMD wants, which of course means AMD is at the mercy of the founder.. er… foundry.

      • aspect
      • 2 years ago

      It’s really odd: usually with Intel we see the higher the core count, the lower the clock speeds, but Ryzen is roughly in the same ballpark regardless of core count.

      • Krogoth
      • 2 years ago

      It is an architecture and fab tech issue.

      Ryzen was built around scalability, not pure clockspeed. Intel is also moving in this direction with their high-core-count silicon.

    • ronch
    • 2 years ago

    Market positioning:
    AMD 16C/32T vs Intel 8C/16T (Threadripper vs LGA2011)
    AMD 8C/16T vs Intel 4C/8T (Ryzen 7 vs i7)
    AMD 6C/12T vs Intel 4C/4T (Ryzen 5 vs i5)
    AMD 4C/4T vs Intel 2C/4T (Ryzen 3 vs i3)

    Good grief. You just can’t deny AMD’s Zen value proposition even if you’re an Intel fanboi. Personally, I’d go for the 1600X but I’d be happy to step down to a 1300X knowing it’s still practically an i5 for i3 money.

      • Airmantharp
      • 2 years ago

      You also can’t ignore Intel’s clockspeed and IPC advantage, even if you’re a “fanboi”.

      This doesn’t necessarily work out everywhere, and there are also platform and maturity considerations on top that vary in scope depending on market segment.

        • ronch
        • 2 years ago

        For sure. What I’m saying is anyone buying a computer owes it to themselves to check out AMD as well. They just might get something better for their needs. You couldn’t say that without 5 asterisks a year ago.

          • LostCat
          • 2 years ago

          I’m not sure you could say that since Phenom 1 (until now of course.)

            • ronch
            • 2 years ago

            To be more precise, since Core 2 came out in 2006.

            • Zizy
            • 2 years ago

            Nah, Phenom 2 was quite acceptable. 955BE was a decent competitor to the i5 750, and 1100T wasn’t too bad either. Worse than Intel stuff but cheap enough it still made sense to consider. Only BD was interesting just for very niche applications.

            • ronch
            • 2 years ago

            Phenom II was OK if you were shopping for a cheaper computer, but if you wanted the fastest money could buy, you had to go Intel. Right now Ryzen is letting AMD cover all the bases. It’s not a clear win, but they sure are playing hardball today.

            • Airmantharp
            • 2 years ago

            Fastest is application dependent- for games, it’s still Intel.

            [If you’re stuck at 60Hz, that’s probably a tossup, but man, I feel sorry for you!]

            • Krogoth
            • 2 years ago

            Not so much anymore.

            The difference isn’t that drastic unless you want to game at 200+ FPS at 1080p or lower resolutions, or want the best performance for late-stage strategy games and Dwarf Fortress.

            • Airmantharp
            • 2 years ago

            Drastic? No. But easily useful for reducing frametimes in the 100FPS range, and that you can feel.

            • Krogoth
            • 2 years ago

            That’s pretty much placebo effect. There isn’t much difference (if any) between 100 and 120FPS rendering except when trying to accurately depict super-fast motion (a race car/jet aircraft zooming by).

            • Airmantharp
            • 2 years ago

            …or projectiles or…

            This is gaming we’re talking about, ‘fast movement’ in all of its forms is important, both for seeing and reacting.

            • Antimatter
            • 2 years ago

            Well for gaming it’s also game dependent and setup dependent – resolution, GPU, monitor, etc.

        • Chrispy_
        • 2 years ago

        I think the high clockspeed and IPC advantage of Intel is starting to get overstated. It matters a lot for gaming, but outside of gaming there are essentially two types of consumer:

        1) The average joe who just wants a fast-enough processor but would like his smartphone videos of the family holiday to encode faster. The 85-90% IPC×clock of Ryzen 3 compared to the i3 options makes it plenty fast enough, and the extra two true cores will make it shine compared to an i3 for those longer encodes, complex filters, etc.

        2) The power user who needs [i<]moar cores[/i<] because they're always encoding/compiling/folding/rendering/simulating/VM'ing. For them, AMD's vastly better thread counts make AMD extra appealing, and compared to Intel's mainstream stuff, the LGA2066 CPUs run hotter and slower than their much smaller LGA1151 brethren, almost nullifying the IPC advantage that Intel enjoys in the "gamer quad-core" sweet spot.

      • Zizy
      • 2 years ago

      Well, 16C/32T is positioned against 10C/20T and even 12C/24T is above 8C/16T from Intel.

      i7 is still somewhat interesting as it is the fastest gaming processor.

      i5 is dead, and was my favorite Intel thingy.

      iGPU on the i3 is nice for office stuff. I can’t understand why there is no mobo with an iGPU for Ryzen. A $10 chip to conquer all office sales in the meantime, plus tons of workstation sales indefinitely, should be a no-brainer decision.

        • JustAnEngineer
        • 2 years ago

        There are plenty of socket AM4 motherboards that support integrated graphics. It’s the Raven Ridge APUs that haven’t arrived yet.

          • Zizy
          • 2 years ago

          I know. But for, say, half a year, a mobo with an iGPU would lead to tons of sales from people wanting Ryzen NOW who don’t care about graphics. All those offices, workstations, etc. Plus, there will be no 8C RR APU, just 4C ones. So a mobo with an iGPU would be relevant even after RR launches – for people interested in 8C Zen for workstations.
          Yet, there is no such thing, and I wonder why not.

        • pluscard
        • 2 years ago

        R3 will have the iGPU. Doesn’t need it in the mobo.

      • pluscard
      • 2 years ago

      Missing from the review: R3 has integrated Vega graphics. Love to see the bench on this.

    • Ummagumma
    • 2 years ago

    I suspect the Ryzen 3 family might be useful processors for value-oriented business computers.

    We probably all know that kind of computer: built in large quantities from vendors like HP, Dell and so on. Value-oriented business computers are reasonably inexpensive for most businesses to acquire and maintain while having enough processing power for most business applications.

    Of course there are some businesses that will go the Ryzen Pro direction because they want/need to manage company resources to that level, but value-oriented business computers tend to find a broader market in small and mid-size companies where budgets are tight.

    As always, the “power user” category is addressed by other product segments in the market.

      • ronch
      • 2 years ago

      Ryzen isn’t just about targeting the usual value conscious buyers anymore. Pricing now goes all the way up to $1K, and that’s hardly budget class. Heck, $300 and up isn’t even budget class anymore. This is about giving you more for your money, no matter how much you want to cheap out or splurge.

    • ronch
    • 2 years ago

    [quote<]The Ryzen 5 chips have gaming chops, as well, especially for gamers with high-resolution displays[/quote<] Yes, because that means you can crank up the resolution and make the GPU the bottleneck, masking Ryzen's lower gaming performance. /truth

      • Airmantharp
      • 2 years ago

      To a degree, in a test environment. On an end-user desktop, having more threads than the game needs helps keep maximum frametimes lower, and that makes things actually feel smoother. Higher per-core performance also helps that a little bit too, even if average framerates are within the margin of error.

      /truth

      [granted, if it’s between a Ryzen 5 and an i3 due to a hard budget, i.e. you don’t have enough threads on the i3 system to begin with, then yeah the Ryzen system is going to turn out better overall frametimes]

        • DoomGuy64
        • 2 years ago

        This, and who buys a gaming desktop to game @ 640×480 with no background tasks? Low-resolution benchmark numbers are only useful in theory; the BIOS updates increased performance at 1080p, and above that it doesn’t matter. Plus, the additional threads mean background tasks won’t interfere with your game. That said, I wouldn’t recommend anything too low-end for games, and the 1600X probably hits the sweet spot best out of all the options.

          • ronch
          • 2 years ago

          While testing at lower resolutions is often dismissed as merely academic, it determines which CPU is really faster for gaming. Framerates may hover above minimum acceptable levels most of the time, but you will want a CPU that runs the game more capably when the action gets intense; the stronger CPU can power through those difficult moments while other CPUs start to falter.
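The bottleneck argument in this exchange can be sketched with a toy model: each frame is gated by whichever of the CPU or GPU stage takes longer, so a lightly loaded GPU (low resolution) exposes CPU differences while a heavily loaded one (high resolution) masks them. This is a deliberate simplification (real pipelines overlap stages), and all the millisecond figures below are illustrative, not measured:

```python
# Toy model: per-frame time is bounded by the slower of the CPU and GPU
# stages. Every millisecond figure here is made up for illustration.

def frame_ms(cpu_ms: float, gpu_ms: float) -> float:
    """Approximate frame time as the max of CPU and GPU work per frame."""
    return max(cpu_ms, gpu_ms)

# Low resolution: GPU work is tiny, so the CPU gap is fully visible.
low_res = (frame_ms(cpu_ms=8.0, gpu_ms=2.0),   # slower CPU -> 8 ms/frame
           frame_ms(cpu_ms=6.0, gpu_ms=2.0))   # faster CPU -> 6 ms/frame

# High resolution: GPU work dominates, so both CPUs look identical.
high_res = (frame_ms(cpu_ms=8.0, gpu_ms=16.0),
            frame_ms(cpu_ms=6.0, gpu_ms=16.0))
```

Under this model both positions in the thread are partly right: low-resolution tests do isolate the CPU, and high-resolution play does hide the gap until the CPU stage spikes above the GPU stage.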

            • DoomGuy64
            • 2 years ago

            Except that’s not true outside of benchmarks specifically set up to perform that way at low resolutions, and with more and more software getting optimized for multiprocessing, single-thread performance is an overall negative trade-off. Games that can use the extra cores will run better with more cores, and so will normal gaming sessions where you are running extra background processes like various social communication apps and game streaming.

            There’s no such thing as a CPU “faster for gaming” outside of marketing. There is only what processor suits your needs better for the price, and you gotta do some real mental gymnastics to justify Intel at this time. Low resolution benchmarks are that situation.

            This hasn’t changed since win9x single core + 3dfx vs XP dual core + Geforce. More threads is always better, whether or not it appears that way in “academic” tests. 9x, 3dfx, and single cores died off for a reason, as all the 16-bit 640×480 benchmarks in the world couldn’t justify regress over progress. People back then saw the benefit of multiprocess, 32-bit color, and high resolutions, and I think that precedent still stands today.

            Same thing with frame times designed to show the limitations of 60Hz static-refresh monitors. VRR removes the limitations of minor framerate dips, so benchmarks based solely on these metrics will become just as obsolete over time. Not to say frame times are completely useless, but they certainly don’t matter like they used to, either. Today, it’s more a question of which hardware/software configuration runs better for your individual case, as there is no longer a single variable you can call “best”. Someone who custom-builds and streams will have vastly different needs from a casual who buys a prebuilt. Neither is inherently “wrong”, because both cases meet the needs of the user. However, since this site doesn’t exactly cater to casual users, it would be disingenuous to promote casual hardware to its users with low-resolution benchmarks.

            There are a lot of users here who have moved to Ryzen precisely because they can use the extra cores, so they obviously saw something useful besides those low resolution benchmarks. Maybe someday you’ll be one of them too.

            • ronch
            • 2 years ago

            Nope.

            • DoomGuy64
            • 2 years ago

            See, this is why you can’t be taken seriously. Not only can you not see the forest for the trees, but you’re one inch away staring at some bark, making forest predictions based on bark patterns.

            Sorry, but those bark patterns don’t represent anything but the single tree that you’re staring at, and have nothing to do with the actual forest. You need to back off that tree more than a mile before you’ll even remotely visualize the scope of the forest you’re commenting on.

            • ronch
            • 2 years ago

            Sorry dude, just tired of the arrogant nuts in the forest here.

            • DoomGuy64
            • 2 years ago

            Hard to believe a rabies infected squirrel when he says that.

            • ronch
            • 2 years ago

            Is that your strategy to win arguments? You resort to derogatory attacks? Real classy, dude. I would like you to keep it up. 🙂

            • DoomGuy64
            • 2 years ago

            And what was your first post? Chopped liver?

            You’re not arguing anyway. “Nope” is not a valid rebuttal, and neither are your cherry-picked arguments about low-resolution gaming. You clearly have preconceived notions and won’t acknowledge counterpoints to them. Nothing about that is “classy” in any way whatsoever. As for calling you a rabies-infected squirrel, well, when in France. Because clearly that’s where you started from, and still are. Maybe now that you notice the absurdity of our conversation, you can do some introspection and realize you’re doing nothing but the pot calling the kettle black, and I only said it to highlight your stubborn irrationality.

            Low-resolution benchmarks are nothing but a fanboy fallacy that doesn’t highlight the real-world benefits of additional cores. Especially when most of those benchmarks were taken pre-OS-patch and pre-BIOS-update, which greatly improved those same numbers. Nor do they address the fact that nobody games at low resolution.

            I’m not denying your argument, like you are. I’m merely providing a counterpoint that it isn’t relevant, and the benefits outweigh the cons. It will probably take at least a year for software to fully catch up with the hardware, but in the meantime the updates we have got moved Ryzen well past any initial nit-picking, and provide plenty of additional headroom – something enthusiasts usually respond positively to. Unless they’re ideologues stuck on fallacious benchmark scenarios. But hey, keep pushing that low-resolution thing well past its expiration date, because it doesn’t hurt my argument at all.

      • D@ Br@b($)!
      • 2 years ago

      So what U are actually saying is that Ryzen CPUs are only a good choice for gamers who have a better graphics card than an iGPU?!

    • Yan
    • 2 years ago

    There’s a benchmark for the Ryzen 3 1200 at [url=https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+3+1200&id=3029<]Passmark[/url<]: 7043 (based on 2 samples), compared to 5103 for the Intel G4560.

      • defaultluser
      • 2 years ago

      Right, for a chip that costs half of what the Ryzen 3 1200 will likely be priced at, and comes with a GPU.

      The Core i3-7100 ($120) is a much more likely price competitor to the Ryzen 3 1200 with its four AMD cores, and that slots in at 5906. And it will have faster single-threaded performance.

      [url<]http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i3-7100+%40+3.90GHz[/url<] The only real competition AMD wins with certainty is in the $200-and-up category. The 1600X absolutely slaughters the competition. And with the premium prices AMD is expected to charge for their Raven Ridge APU parts (topping out at $170 or more), don't expect this to change. The Pentium G4560 (4 threads, 3.5 GHz, $70) and Celeron G3930 (2 threads, 2.9 GHz, $35) will continue to be the value go-to processors.
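The figures traded in this sub-thread can be reduced to a rough points-per-dollar comparison. Note the assumptions: Passmark is a single synthetic metric, and the $80 Ryzen 3 1200 price is the speculative figure from upthread, not an announced one:

```python
# Rough Passmark-points-per-dollar using the scores and prices quoted
# in this thread. The Ryzen 3 1200 price is speculative, not announced.
chips = {
    "Ryzen 3 1200":  {"score": 7043, "price": 80},   # price assumed
    "Core i3-7100":  {"score": 5906, "price": 120},
    "Pentium G4560": {"score": 5103, "price": 70},
}

def points_per_dollar(name: str) -> float:
    """Multithreaded Passmark score divided by street price."""
    chip = chips[name]
    return chip["score"] / chip["price"]

# Best value first, under these (partly speculative) numbers.
ranked = sorted(chips, key=points_per_dollar, reverse=True)
```

Under those inputs the hypothetical $80 Ryzen 3 1200 leads, the G4560 is second, and the i3-7100 trails, which is roughly the shape of the disagreement above: the conclusion flips depending on which price you believe.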

    • derFunkenstein
    • 2 years ago

    Are these single-CCX chips? Aside from SMT and clock speeds, what’s the difference between a 1300X and a 1400?

    edit: I’m aware these will probably be the same dual-CCX die as everything else so far. I just want to know what’s enabled.

      • Concupiscence
      • 2 years ago

      The Ryzen 3 parts disable SMT, lower clock speeds, and probably disable part of the cache. If I read Agner’s thoughts on the architecture correctly Ryzen benefits from SMT (or suffers from its absence) more than Intel’s, but since non-SMT Ryzen parts don’t exist in the wild yet I haven’t got much more to go on than that. His thoughts are here: [url<]http://www.agner.org/optimize/blog/read.php?i=838[/url<]

        • ronch
        • 2 years ago

        That Ryzen benefits more from SMT could mean AMD has a better SMT design (great job on the first try!) or that its scheduler is less smart than Intel’s when only one thread is concerned. Still, I’m amazed by the Zen architecture. Hoping to play Space Quest III with it someday.

          • psuedonymous
          • 2 years ago

          For 8 or fewer threads, using SMT allows all threads to remain within one CCX and not incur the inter-CCX transfer penalty in the case of inter-thread dependency.
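On Linux you can experiment with this effect yourself by pinning a process to one CCX's cores, so its threads never pay the inter-CCX transfer penalty. A minimal sketch, assuming logical cores 0-3 belong to one CCX; that mapping is an assumption for illustration, so verify your own topology (e.g. with `lscpu -e`) before relying on it:

```python
# Sketch: restrict a process to one CCX's cores on Linux.
# CCX0_CORES = {0, 1, 2, 3} is an assumed core-to-CCX mapping --
# check your system's topology (e.g. `lscpu -e`) first.
import os

CCX0_CORES = {0, 1, 2, 3}

def pin_to_ccx(pid: int = 0, cores=frozenset(CCX0_CORES)):
    """Pin `pid` (0 = calling process) to `cores`, falling back to the
    currently allowed set if none of the requested cores exist."""
    allowed = os.sched_getaffinity(pid)
    target = set(cores) & allowed or allowed
    os.sched_setaffinity(pid, target)
    return os.sched_getaffinity(pid)
```

With SMT enabled, a 4C/8T part exposes eight logical CPUs per die, so an 8-thread workload pinned this way can stay on one CCX; a 4C/4T Ryzen 3 only offers four, which is one plausible reason SMT helps Zen more than it helps Intel here.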

            • ronch
            • 2 years ago

            Theoretically, yes. But it’s kinda like Bulldozer. Do you feed 2 threads across 2 modules or feed them to just 1 module? The software developer will have to test which method delivers better performance and optimize accordingly. Not sure if it’s an OS thing or individual program thingy though. Perhaps it’s the latter, as AMD claims.

      • Takeshi7
      • 2 years ago

      I think they’re the same die as Ryzen 5 and Ryzen 7, so just 2 cores per CCX. The single CCX APUs come later.

        • derFunkenstein
        • 2 years ago

        I figure they’re using the same die, I just want to know what is enabled.

      • Zizy
      • 2 years ago

      All are believed to be dual CCX chips, not sure if this was officially confirmed. The only statement I recall is that all R5 are 2+2 and that there will be no lottery between 4+0 and 2+2. But I could have mixed it up and it might have been that all 4C chips will be 2+2.

      Anyway, I don’t think it’s likely enough to have 3 cores ruined in one CCX while the other CCX has all 4 working for that bin to make sense. Even in this very unlikely case the chip can be used for Epyc (unless there is other damage, like the L3).

      And aside from these 2 differences, assuming it is a dual-CCX part, there should be none. R3 is more or less the i5 of Intel’s lineup.

      • ImSpartacus
      • 2 years ago

      I bet it’ll be 2+2 like the 1500X and 1400.

      I wonder if we’ll eventually see asymmetrical 1+3 or 0+4 setups in particularly cheap offerings down the road (Maybe stuff bound for OEM rigs).

        • RAGEPRO
        • 2 years ago

        0+4 is definitely coming (or more accurately, GPU+4) in Raven Ridge APUs. IMO they can’t get here soon enough, heh.

          • demani
          • 2 years ago

          Can they test when it’s still a single CCX or do they need to wait for further assembly?

    • Takeshi7
    • 2 years ago

    If they can get a 4C/4T chip at $80 it would make the G4560 obsolete.

      • chuckula
      • 2 years ago

      Intel already beat them to the punch!

        • Takeshi7
        • 2 years ago

        Atoms don’t count

          • ronch
          • 2 years ago

          Tell that to your physics professor.

            • Goty
            • 2 years ago

            Physicist here. It all depends on the scale.

          • K-L-Waster
          • 2 years ago

          Of course atoms don’t count, they just cluster around probability wave functions.

      • defaultluser
      • 2 years ago

      AMD can’t afford to price it that low. And you still have to add a $30 GPU, and another $20 for a bigger case and PSU, to this IGP-less system, which ups build costs for OEMs. It’s not a slam-dunk like their 1600 is. We’re talking about systems here that might run just fine on integrated graphics, but don’t have the option.

      And we’ll see how low AMD will go with Raven Ridge. It took them forever to lower the cost of four Kaveri cores below $100, and those were way slower.
